repo string | github_id int64 | github_node_id string | number int64 | html_url string | api_url string | title string | body string | state string | state_reason string | locked bool | comments_count int64 | labels list | assignees list | created_at string | updated_at string | closed_at string | author_association string | milestone_title string | snapshot_id string | extracted_at string | author_login string | author_id int64 | author_node_id string | author_type string | author_site_admin bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | 3,924,037,381 | I_kwDOCUB6oc7p5A8F | 43,899 | https://github.com/huggingface/transformers/issues/43899 | https://api.github.com/repos/huggingface/transformers/issues/43899 | `sync_each_batch` has no effect when using FSDP | I can corroborate the finding of @zch0414 below that there is no way to configure the trainer to force sync when using FSDP. As explained [here](https://huggingface.co/docs/accelerate/en/concept_guides/gradient_synchronization#nosync-requires-additional-gpu-memory-when-using-fsdp), this is a big problem for memory inte... | closed | completed | false | 5 | [] | [] | 2026-02-10T23:41:28Z | 2026-02-13T14:38:08Z | 2026-02-13T14:38:08Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ojh31 | 67,026,888 | MDQ6VXNlcjY3MDI2ODg4 | User | false |
huggingface/transformers | 3,924,289,813 | I_kwDOCUB6oc7p5-kV | 43,901 | https://github.com/huggingface/transformers/issues/43901 | https://api.github.com/repos/huggingface/transformers/issues/43901 | TextClassificationPipeline docs still mention return_all_scores, but behavior differs | ### System Info
- `transformers` version: 5.2.0.dev0
- Platform: macOS-26.2-arm64-arm-64bit-Mach-O
- Python version: 3.14.3
- Huggingface_hub version: 1.4.1
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2... | closed | completed | false | 0 | [
"bug"
] | [] | 2026-02-11T01:34:44Z | 2026-02-11T16:59:25Z | 2026-02-11T16:59:25Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | math-hiyoko | 56,009,584 | MDQ6VXNlcjU2MDA5NTg0 | User | false |
huggingface/transformers | 3,925,294,423 | I_kwDOCUB6oc7p9z1X | 43,906 | https://github.com/huggingface/transformers/issues/43906 | https://api.github.com/repos/huggingface/transformers/issues/43906 | Isolated reproduction of https://github.com/huggingface/transformers/issues/38071 | ### System Info
name = "accelerate"
version = "1.12.0"
name = "transformers"
version = "4.57.3"
Python 3.11
### Who can help?
@gante @ArthurZucker Related to the warning from https://github.com/huggingface/transformers/issues/38071 for the `Qwen/Qwen3-Next-80B-A3B-Instruct` model
### Information
- [ ] The official exam... | closed | completed | false | 5 | [
"Good First Issue",
"bug"
] | [] | 2026-02-11T08:16:24Z | 2026-02-17T10:41:33Z | 2026-02-17T10:41:33Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | willxxy | 90,741,489 | MDQ6VXNlcjkwNzQxNDg5 | User | false |
huggingface/transformers | 3,925,350,527 | I_kwDOCUB6oc7p-Bh_ | 43,908 | https://github.com/huggingface/transformers/issues/43908 | https://api.github.com/repos/huggingface/transformers/issues/43908 | What is the process of adding a new hardware backend for Trainer? | I work at Qualcomm, and we have a hardware backend like cuda / mps. I want to add it to the Trainer so that we can use the Trainer class to perform training on our stack. What is the process of adding it?
We have a branch that currently adds the backend-specific changes: https://github.com/quic-meetkuma/transformers/tree/... | closed | completed | false | 5 | [] | [] | 2026-02-11T08:32:48Z | 2026-03-23T08:15:11Z | 2026-03-23T08:15:11Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | quic-meetkuma | 200,747,495 | U_kgDOC_cp5w | User | false |
huggingface/transformers | 3,925,520,732 | I_kwDOCUB6oc7p-rFc | 43,909 | https://github.com/huggingface/transformers/issues/43909 | https://api.github.com/repos/huggingface/transformers/issues/43909 | Add LFM2.5 Audio 1.5B | ### Model description
I would like to add LFM2.5 Audio, which is a highly versatile multimodal audio-text model for its small size of 1.5B. I just ran the model on CPU and it's really good. It should be a good candidate for transformers integration. @eustlb @ebezzam
The modular implementation should not be very long sinc... | open | null | false | 4 | [
"New model",
"Audio"
] | [
"eustlb"
] | 2026-02-11T09:20:30Z | 2026-02-16T14:21:17Z | null | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | MHRDYN7 | 113,298,714 | U_kgDOBsDNGg | User | false |
huggingface/transformers | 3,928,427,805 | I_kwDOCUB6oc7qJw0d | 43,927 | https://github.com/huggingface/transformers/issues/43927 | https://api.github.com/repos/huggingface/transformers/issues/43927 | [BUG] DiaConfig loses custom token IDs after save / load and causes IndexError during generation | ### System Info
* `transformers` version: `5.0.0.dev0`
* Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39`
* Python version: `3.12.3`
* `huggingface_hub` version: `1.3.2`
* `safetensors` version: `0.7.0`
* `accelerate` version: `1.12.0`
* Accelerate config: `not installed`
* DeepSpeed version:... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-11T19:57:48Z | 2026-04-18T09:11:46Z | 2026-02-13T09:28:27Z | CONTRIBUTOR | null | 20260418T100536Z | 2026-04-18T10:05:36Z | harshaljanjani | 75,426,551 | MDQ6VXNlcjc1NDI2NTUx | User | false |
huggingface/transformers | 3,929,531,236 | I_kwDOCUB6oc7qN-Nk | 43,931 | https://github.com/huggingface/transformers/issues/43931 | https://api.github.com/repos/huggingface/transformers/issues/43931 | Model loading error: weight shapes mismatch of Qwen3-VL-30B-A3B-Instruct | ### System Info
- `transformers` version: 5.1.0
- Platform: Linux-5.15.0-168-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 1.4.1
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (acceler... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-02-12T02:24:57Z | 2026-02-16T17:50:14Z | 2026-02-16T17:50:13Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jonbakerfish | 2,463,872 | MDQ6VXNlcjI0NjM4NzI= | User | false |
huggingface/transformers | 3,930,430,408 | I_kwDOCUB6oc7qRZvI | 43,935 | https://github.com/huggingface/transformers/issues/43935 | https://api.github.com/repos/huggingface/transformers/issues/43935 | Add `eval_on_end` flag (analogous to `eval_on_start`) | ### Feature request
#### Background
There is already a convenient switch to evaluate **before** training starts: `eval_on_start=True`.
There’s a symmetric need at the other end of training: evaluate **after** training finishes, regardless of whether the last `global_step` lands exactly on an `ev... | closed | completed | false | 5 | [
"Feature request"
] | [] | 2026-02-12T08:11:18Z | 2026-03-26T16:30:42Z | 2026-03-26T16:30:42Z | NONE | null | 20260407T090028Z | 2026-04-07T09:00:28Z | MarkusSpanring | 13,022,861 | MDQ6VXNlcjEzMDIyODYx | User | false |
huggingface/transformers | 3,930,559,652 | I_kwDOCUB6oc7qR5Sk | 43,937 | https://github.com/huggingface/transformers/issues/43937 | https://api.github.com/repos/huggingface/transformers/issues/43937 | [GLM-5] ValueError: GenerationConfig is invalid | ### System Info
transformers 5.2.0.dev0
### Who can help?
@ArthurZucker @Cyrilvallez
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
... | closed | completed | false | 8 | [
"bug"
] | [] | 2026-02-12T08:41:21Z | 2026-02-23T09:41:29Z | 2026-02-23T09:41:29Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | xin3he | 83,260,933 | MDQ6VXNlcjgzMjYwOTMz | User | false |
huggingface/transformers | 3,930,743,717 | I_kwDOCUB6oc7qSmOl | 43,939 | https://github.com/huggingface/transformers/issues/43939 | https://api.github.com/repos/huggingface/transformers/issues/43939 | Better regex in `build_glob_alternation` method | I notice in the following code:
https://github.com/huggingface/transformers/blob/4d5d49c34474be7cc2b6abd3179e7b317a17d8b1/src/transformers/core_model_loading.py#L67
https://github.com/huggingface/transformers/blob/4d5d49c34474be7cc2b6abd3179e7b317a17d8b1/src/transformers/core_model_loading.py#L75
The regex is formed... | closed | completed | false | 2 | [] | [] | 2026-02-12T09:21:30Z | 2026-03-23T08:15:08Z | 2026-03-23T08:15:08Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ariG23498 | 36,856,589 | MDQ6VXNlcjM2ODU2NTg5 | User | false |
huggingface/transformers | 3,930,787,886 | I_kwDOCUB6oc7qSxAu | 43,940 | https://github.com/huggingface/transformers/issues/43940 | https://api.github.com/repos/huggingface/transformers/issues/43940 | Qwen3-Next: DeepSpeed ZeRO-3 fails to load weights (all params MISSING) | ## System Info
- `transformers` version: 5.0.0
- `deepspeed` version: 0.18.5
- Platform: Linux (H200 x4)
- Python: 3.12
## Problem
When loading `Qwen/Qwen3-Next-80B-A3B-Instruct` with DeepSpeed ZeRO-3, **all model parameters are reported as MISSING** in the load report. The model trains from random initialization (l... | closed | completed | false | 4 | [] | [] | 2026-02-12T09:32:14Z | 2026-02-17T13:49:15Z | 2026-02-17T13:49:15Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Shanay-Mehta | 101,552,567 | U_kgDOBg2Rtw | User | false |
huggingface/transformers | 3,932,157,948 | I_kwDOCUB6oc7qX_f8 | 43,950 | https://github.com/huggingface/transformers/issues/43950 | https://api.github.com/repos/huggingface/transformers/issues/43950 | `from_pretrained()` silently corrupts non-persistent buffers (`register_buffer(persistent=False)`) -- transformers 5.x regression | ### System Info
```
- transformers version: 5.1.0 (latest)
- Platform: Linux (Docker)
- Python version: tested on 3.12.12, 3.13.12, and 3.14.3
- PyTorch version: 2.10.0+cpu (also tested with 2.9.1+cpu)
- Using GPU: No (CPU only)
```
### Who can help?
@Cyrilvallez
### Information
- [ ] The official example scripts... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-02-12T14:25:35Z | 2026-02-12T16:21:11Z | 2026-02-12T15:42:05Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | adrienB134 | 102,990,337 | U_kgDOBiOCAQ | User | false |
huggingface/transformers | 3,932,703,357 | I_kwDOCUB6oc7qaEp9 | 43,957 | https://github.com/huggingface/transformers/issues/43957 | https://api.github.com/repos/huggingface/transformers/issues/43957 | model loading with torch.device("meta") breaks some models on transformers 5.x, e.g. TRELLIS.2, RMBG-2.0 | ### System Info
Using `with torch.device('meta')` seems to break any model that uses torch data-dependent operations to put itself together, e.g. `my_tensor.tolist()` or `tensor.item()`
https://github.com/microsoft/TRELLIS.2/issues/101
mini repro
```python
import transformers; print(transformers.__version__) # 5.1.0
... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-02-12T16:17:56Z | 2026-03-18T11:18:15Z | 2026-02-16T17:56:21Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | xvdp | 40,925,114 | MDQ6VXNlcjQwOTI1MTE0 | User | false |
huggingface/transformers | 3,936,733,615 | I_kwDOCUB6oc7qpcmv | 43,975 | https://github.com/huggingface/transformers/issues/43975 | https://api.github.com/repos/huggingface/transformers/issues/43975 | `deepseek-ai/deepseek-coder-6.7b-instruct` incorrectly detokenizes in v5 | ### System Info
`transformers env` didn't work; it failed with
```
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File ".../... | closed | completed | false | 5 | [
"bug"
] | [] | 2026-02-13T11:34:53Z | 2026-02-25T08:10:08Z | 2026-02-25T08:10:08Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | pavel-esir | 5,703,039 | MDQ6VXNlcjU3MDMwMzk= | User | false |
huggingface/transformers | 3,937,071,221 | I_kwDOCUB6oc7qqvB1 | 43,976 | https://github.com/huggingface/transformers/issues/43976 | https://api.github.com/repos/huggingface/transformers/issues/43976 | Transformers 5.1.0 does not work with Python 3.9+, only Python 3.10+ | ### System Info
Hi,
The documentation page on [PyPI](https://pypi.org/project/transformers/5.1.0/) assures users that the latest version of the library works with Python 3.9+.
```
Transformers works with Python 3.9+, and [PyTorch](https://pytorch.org/get-started/locally/) 2.4+.
```
In my environment, in which I use ... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-13T12:57:23Z | 2026-02-16T13:46:35Z | 2026-02-16T13:46:35Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | GeoMariusj | 214,978,531 | U_kgDODNBP4w | User | false |
huggingface/transformers | 3,937,612,029 | I_kwDOCUB6oc7qszD9 | 43,979 | https://github.com/huggingface/transformers/issues/43979 | https://api.github.com/repos/huggingface/transformers/issues/43979 | Call to contributions: refactor output tracing in transformers | Following #43590, which updates 112 models, we want to finish migrating all models to the standardized output collection interface. This is a call for contributors to open PRs linked to this meta-issue and learn about the codebase!
Two decorators replace the old manual boilerplate:
- **`@capture_outputs`**: goes o... | closed | completed | false | 89 | [
"Help wanted",
"Good First Issue"
] | [] | 2026-02-13T15:02:46Z | 2026-03-17T14:20:18Z | 2026-02-18T10:40:13Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | molbap | 39,954,772 | MDQ6VXNlcjM5OTU0Nzcy | User | false |
huggingface/transformers | 3,938,441,621 | I_kwDOCUB6oc7qv9mV | 43,986 | https://github.com/huggingface/transformers/issues/43986 | https://api.github.com/repos/huggingface/transformers/issues/43986 | Confusing crash when loading a video model through AutoProcessor without torchvision installed | ### System Info
The problem exists on the main branch
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-13T18:18:13Z | 2026-02-20T08:23:36Z | 2026-02-20T08:23:36Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Frojdholm | 3,251,566 | MDQ6VXNlcjMyNTE1NjY= | User | false |
huggingface/transformers | 3,939,597,659 | I_kwDOCUB6oc7q0X1b | 43,990 | https://github.com/huggingface/transformers/issues/43990 | https://api.github.com/repos/huggingface/transformers/issues/43990 | Model loading got changed now and 3 weeks before with AutoModelForCausalLM, AutoTokenizer | ### System Info
We are working in a Google Colab Python environment with A-100 computing.
<img width="1276" height="556" alt="Image" src="https://github.com/user-attachments/assets/f970620f-f0af-4848-9b11-9bc44eecfed7" />
### Who can help?
My id is: https://github.com/alokesh17
We used the High Performance Com... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-13T22:35:00Z | 2026-02-15T19:54:50Z | 2026-02-15T19:54:50Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | alokesh17 | 106,035,686 | U_kgDOBlH55g | User | false |
huggingface/transformers | 3,939,917,737 | I_kwDOCUB6oc7q1l-p | 43,992 | https://github.com/huggingface/transformers/issues/43992 | https://api.github.com/repos/huggingface/transformers/issues/43992 | UMT5Encoder.from_pretrained misses `embed_tokens.weight` | ### System Info
```
self.text_encoder = UMT5EncoderModel.from_pretrained(
"Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
cache_dir=os.environ.get("HF_HOME", None),
subfolder="text_encoder",
local_files_only=str2bool(os.getenv("LOCAL_FILES_ONLY", "false")),
)
# This ensures... | closed | completed | false | 5 | [
"bug"
] | [] | 2026-02-14T00:39:20Z | 2026-03-16T09:12:43Z | 2026-03-16T09:12:43Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | nanlliu | 45,443,761 | MDQ6VXNlcjQ1NDQzNzYx | User | false |
huggingface/transformers | 3,941,390,988 | I_kwDOCUB6oc7q7NqM | 43,994 | https://github.com/huggingface/transformers/issues/43994 | https://api.github.com/repos/huggingface/transformers/issues/43994 | google/siglip2-base-patch16-224 produces nonsensical results with AutoModel and pipeline | ### System Info
- `transformers` version: 5.1.0
- Platform: Linux-6.6.105+-x86_64-with-glibc2.35
- Python version: 3.12.12
- Huggingface_hub version: 1.4.0
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-02-14T12:59:58Z | 2026-02-17T09:55:04Z | 2026-02-17T09:55:04Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Adefey | 63,973,081 | MDQ6VXNlcjYzOTczMDgx | User | false |
huggingface/transformers | 3,943,475,934 | I_kwDOCUB6oc7rDKre | 44,008 | https://github.com/huggingface/transformers/issues/44008 | https://api.github.com/repos/huggingface/transformers/issues/44008 | [Gemma 3n][modular] AttributeError: 'Tensor' object has no attribute 'audio_mel_mask' — variable name collision in Gemma3nModel.forward() | ## System Info
- `transformers` version: 4.53.0 (also confirmed on latest `main` as of 2026-02-15)
- Python: 3.12
- PyTorch: 2.7+
- Platform: Linux - x86_64
- Model: `google/gemma-3n-E2B-it` (also affects `google/gemma-3n-E4B-it`)
- Reported by: @reedmayhew18
## Who can help?
@ArthurZucker @zucchini-nlp @Rocketknigh... | closed | completed | false | 1 | [] | [] | 2026-02-15T08:17:21Z | 2026-02-19T12:50:01Z | 2026-02-19T12:50:01Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | reedmayhew18 | 46,308,944 | MDQ6VXNlcjQ2MzA4OTQ0 | User | false |
huggingface/transformers | 3,944,737,576 | I_kwDOCUB6oc7rH-so | 44,016 | https://github.com/huggingface/transformers/issues/44016 | https://api.github.com/repos/huggingface/transformers/issues/44016 | Syntax error in Transformer section 3 (Transformers, what can they do?) notebook | ### System Info
Build error due to a syntax issue:
-> from transformers import pipeline
ner = pipeline("ner", grouped_entities=True)  # remove parameter grouped_entities=True
ner("My name is Sylvain and I work at Hugging Face in Brooklyn.")
Error log: Notes:
- UNEXPECTED :can be ignored when loading from different task... | closed | completed | false | 7 | [
"Good First Issue",
"bug"
] | [] | 2026-02-15T19:15:34Z | 2026-03-23T17:00:09Z | 2026-02-24T16:07:22Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | khyatimirani | 29,675,392 | MDQ6VXNlcjI5Njc1Mzky | User | false |
huggingface/transformers | 3,945,750,382 | I_kwDOCUB6oc7rL19u | 44,031 | https://github.com/huggingface/transformers/issues/44031 | https://api.github.com/repos/huggingface/transformers/issues/44031 | All tokenizers raise incorrect regex pattern warning after version 4.57.3? | https://github.com/huggingface/transformers/blob/753d61104116eefc8ffc977327b441ee0c8d599f/src/transformers/tokenization_utils_base.py#L2466
PR #42299 makes all tokenizers raise an incorrect regex pattern warning after version 4.57.3, e.g. for the qwen model type. Is this correct? | closed | completed | false | 7 | [] | [] | 2026-02-16T04:01:33Z | 2026-03-18T16:29:50Z | 2026-03-18T16:29:50Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jue-jue-zi | 26,075,785 | MDQ6VXNlcjI2MDc1Nzg1 | User | false |
huggingface/transformers | 3,947,299,977 | I_kwDOCUB6oc7rRwSJ | 44,038 | https://github.com/huggingface/transformers/issues/44038 | https://api.github.com/repos/huggingface/transformers/issues/44038 | [bug] transformers 5.0 & Qwen3-VL-Moe | ### System Info
-
### Who can help?
-
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
transformers==5.1.0
```pytho... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-02-16T11:33:50Z | 2026-02-17T08:55:24Z | 2026-02-17T08:55:24Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Jintao-Huang | 45,290,347 | MDQ6VXNlcjQ1MjkwMzQ3 | User | false |
huggingface/transformers | 3,948,930,284 | I_kwDOCUB6oc7rX-Ts | 44,052 | https://github.com/huggingface/transformers/issues/44052 | https://api.github.com/repos/huggingface/transformers/issues/44052 | Fix skipped tests for glm_moe_dsa model | ## Related PR
Linked to #43912
## Skipped Tests
### DSA indexer mask shape mismatch with assisted decoding
- `test_assisted_decoding_matches_greedy_search`
- `test_assisted_decoding_sample`
- `test_generate_from_inputs_embeds_with_static_cache`
- `test_generate_compile_model_forward_fullgraph`
- `test_generate_compil... | closed | completed | false | 2 | [
"Good Second Issue"
] | [] | 2026-02-16T17:58:43Z | 2026-02-17T16:23:32Z | 2026-02-17T16:23:32Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ArthurZucker | 48,595,927 | MDQ6VXNlcjQ4NTk1OTI3 | User | false |
huggingface/transformers | 3,949,509,175 | I_kwDOCUB6oc7raLo3 | 44,060 | https://github.com/huggingface/transformers/issues/44060 | https://api.github.com/repos/huggingface/transformers/issues/44060 | Qwen3-Next: Incorrect tied weights warning ties embed_tokens.weight to linear_attn.dt_bias across all layers | ### System Info
- `transformers` main branch (via kashif/transformers@clean-weigth-convert, PR #43926)
- Python 3.12
- DeepSpeed ZeRO-3
- LlamaFactory (LoRA SFT)
### Who can help?
@SunMarc @CyrilVallez
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially ... | closed | completed | false | 5 | [] | [] | 2026-02-16T20:45:57Z | 2026-02-18T18:07:33Z | 2026-02-17T19:03:50Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Shanay-Mehta | 101,552,567 | U_kgDOBg2Rtw | User | false |
huggingface/transformers | 3,949,996,312 | I_kwDOCUB6oc7rcCkY | 44,062 | https://github.com/huggingface/transformers/issues/44062 | https://api.github.com/repos/huggingface/transformers/issues/44062 | TypeError: tokenizers.AddedToken() got multiple values for keyword argument 'special' | ### System Info
5.2.0
### Who can help?
@ArthurZucker @itazap
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
My Hu... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-02-16T23:42:50Z | 2026-03-02T09:56:09Z | 2026-03-02T09:56:09Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | umarbutler | 8,473,183 | MDQ6VXNlcjg0NzMxODM= | User | false |
huggingface/transformers | 3,951,562,419 | I_kwDOCUB6oc7riA6z | 44,075 | https://github.com/huggingface/transformers/issues/44075 | https://api.github.com/repos/huggingface/transformers/issues/44075 | Optimizer SGD args are not used | ### System Info
transformers 4.38.2
Python 3.10.19
platform Linux
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give de... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-02-17T08:46:16Z | 2026-02-25T16:03:19Z | 2026-02-25T16:03:19Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | varunakathirvel3886 | 195,893,848 | U_kgDOC60aWA | User | false |
huggingface/transformers | 3,951,585,820 | I_kwDOCUB6oc7riGoc | 44,077 | https://github.com/huggingface/transformers/issues/44077 | https://api.github.com/repos/huggingface/transformers/issues/44077 | `patchtsmixer` has optional `post_init`, should no longer be allowed | ### System Info
- `transformers` version: 5.2.0.dev0
- Platform: Windows-10-10.0.26200-SP0
- Python version: 3.11.13
- Huggingface_hub version: 1.3.1
- Safetensors version: 0.6.2
- Accelerate version: 1.11.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.9.0+cu... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-17T08:52:30Z | 2026-02-17T11:05:38Z | 2026-02-17T11:05:38Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | tomaarsen | 37,621,491 | MDQ6VXNlcjM3NjIxNDkx | User | false |
huggingface/transformers | 3,951,842,611 | I_kwDOCUB6oc7rjFUz | 44,079 | https://github.com/huggingface/transformers/issues/44079 | https://api.github.com/repos/huggingface/transformers/issues/44079 | `ModelOutput` keys aren't correctly assigned if key was previously None | Related issue: https://github.com/huggingface/transformers/pull/44050#discussion_r2815826882
### System Info
- `transformers` version: 5.2.0.dev0
- Platform: Windows-10-10.0.26200-SP0
- Python version: 3.11.13
- Huggingface_hub version: 1.3.1
- Safetensors version: 0.6.2
- Accelerate version: 1.11.0
- Accelerate conf... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-17T09:49:22Z | 2026-02-20T10:08:39Z | 2026-02-20T10:08:39Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | tomaarsen | 37,621,491 | MDQ6VXNlcjM3NjIxNDkx | User | false |
huggingface/transformers | 3,954,418,921 | I_kwDOCUB6oc7rs6Tp | 44,112 | https://github.com/huggingface/transformers/issues/44112 | https://api.github.com/repos/huggingface/transformers/issues/44112 | [BUG][CI] Stale device override test in GraniteSpeech fails on CI | ### System Info
* `transformers` version: `5.0.0.dev0`
* Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39`
* Python version: `3.12.3`
* `huggingface_hub` version: `1.3.2`
* `safetensors` version: `0.7.0`
* `accelerate` version: `1.12.0`
* Accelerate config: `not installed`
* DeepSpeed version:... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-02-17T19:58:06Z | 2026-02-19T11:06:24Z | 2026-02-19T11:06:24Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | harshaljanjani | 75,426,551 | MDQ6VXNlcjc1NDI2NTUx | User | false |
huggingface/transformers | 3,955,092,982 | I_kwDOCUB6oc7rve32 | 44,117 | https://github.com/huggingface/transformers/issues/44117 | https://api.github.com/repos/huggingface/transformers/issues/44117 | TOKENIZER_MAPPING_NAMES sometimes returns None, but from_pretrained assumes otherwise | ### System Info
When loading a tokenizer with AutoTokenizer (src/transformers/models/auto/tokenization_auto.py), on L652 in from_pretrained the code tries to remove "Fast" from the tokenizer mapping for old-style models:
```
if (
tokenizer_auto_map is None
and tokenizer_config_class is not None
... | closed | completed | false | 6 | [
"bug"
] | [] | 2026-02-17T23:23:08Z | 2026-03-02T09:06:05Z | 2026-02-18T14:05:31Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | DavidMChan | 3,190,178 | MDQ6VXNlcjMxOTAxNzg= | User | false |
huggingface/transformers | 3,956,227,561 | I_kwDOCUB6oc7rzz3p | 44,121 | https://github.com/huggingface/transformers/issues/44121 | https://api.github.com/repos/huggingface/transformers/issues/44121 | [Model Request] Add OpenAI Weight-Sparse Transformer (circuit-sparsity / circuitgpt) | ### Model description
Hello,
OpenAI recently released research on [Weight-sparse transformers](https://openai.com/index/understanding-neural-networks-through-sparse-circuits/). These models are specifically trained with weight sparsity for mechanistic interpretability and circuit analysis.
I would like to contribute... | open | null | false | 2 | [
"New model"
] | [] | 2026-02-18T06:33:19Z | 2026-02-18T17:56:35Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | dtiourine | 109,561,478 | U_kgDOBofGhg | User | false |
huggingface/transformers | 3,962,308,255 | I_kwDOCUB6oc7sLAaf | 44,153 | https://github.com/huggingface/transformers/issues/44153 | https://api.github.com/repos/huggingface/transformers/issues/44153 | [Bug] Glm46VImageProcessorFast.get_number_of_image_patches() ignores self.size, uses hardcoded longest_edge | ## Bug Description
`get_number_of_image_patches()` in both `image_processing_glm46v_fast.py` and `image_processing_glm46v.py` ignores `self.size` and falls back to a **hardcoded** default when `images_kwargs` does not include `size`:
```python
# image_processing_glm46v_fast.py line 182 (same in slow processor, line 4... | closed | not_planned | false | 1 | [] | [] | 2026-02-19T10:57:12Z | 2026-02-19T10:59:05Z | 2026-02-19T10:59:05Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | SteadfastAsArt | 35,479,342 | MDQ6VXNlcjM1NDc5MzQy | User | false |
huggingface/transformers | 3,962,963,557 | I_kwDOCUB6oc7sNgZl | 44,155 | https://github.com/huggingface/transformers/issues/44155 | https://api.github.com/repos/huggingface/transformers/issues/44155 | [AudioFlamingo3] Batched inference produces incorrect results due to embedding/token leak between tracks | ### System Info
- transformers version: 5.0.0
- Platform: Linux-5.14.0-284.30.1.el9_2.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 1.4.1
- Safetensors version: 0.6.2
- Accelerate version: 1.11.0
- Accelerate config: not found
- DeepSpeed version: 0.18.1
- PyTorch version (acceler... | closed | completed | false | 4 | [
"bug",
"Audio"
] | [] | 2026-02-19T13:20:07Z | 2026-03-25T14:41:31Z | 2026-03-25T14:41:06Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | IvanBarabanau | 34,709,713 | MDQ6VXNlcjM0NzA5NzEz | User | false |
huggingface/transformers | 3,965,230,584 | I_kwDOCUB6oc7sWJ34 | 44,162 | https://github.com/huggingface/transformers/issues/44162 | https://api.github.com/repos/huggingface/transformers/issues/44162 | ESM2 is broken, impacting 1000s of scientists workflows | ### System Info
`pip install transformers==5.2.0` on fresh docker image from `nvidia/cuda:12.8.0-cudnn-devel-ubuntu24.04`
### Who can help?
@ArthurZucker @Cyrilvallez @zucchini-nlp
Previous versions, for instance v4.3.0 (picked at random), pass `attention_mask` to the input embeddings class:
```python
embedding_output... | closed | completed | false | 7 | [
"bug"
] | [] | 2026-02-19T21:33:16Z | 2026-02-20T15:23:05Z | 2026-02-20T15:23:05Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | lhallee | 72,926,928 | MDQ6VXNlcjcyOTI2OTI4 | User | false |
huggingface/transformers | 3,965,472,418 | I_kwDOCUB6oc7sXE6i | 44,164 | https://github.com/huggingface/transformers/issues/44164 | https://api.github.com/repos/huggingface/transformers/issues/44164 | save/from_pretrained fails to handle extra_state | ### System Info
- `transformers` version: 5.2.0
- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-aarch64-with-glibc2.35
- Python version: 3.10.16
- Huggingface_hub version: 1.4.1
- Safetensors version: 0.7.0
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyT... | closed | completed | false | 8 | [
"bug"
] | [] | 2026-02-19T22:32:45Z | 2026-04-10T10:57:37Z | 2026-04-09T13:47:03Z | NONE | null | 20260411T144729Z | 2026-04-11T14:47:29Z | quic-kyunggeu | 93,295,150 | U_kgDOBY-SLg | User | false |
huggingface/transformers | 3,967,173,374 | I_kwDOCUB6oc7sdkL- | 44,168 | https://github.com/huggingface/transformers/issues/44168 | https://api.github.com/repos/huggingface/transformers/issues/44168 | Feature: EU AI Act risk classification metadata in model cards | ## Context
The EU AI Act (Regulation 2024/1689) requires AI systems to be classified by risk level (unacceptable, high-risk, limited risk, minimal risk) and mandates specific documentation depending on the classification. Article 13 requires **transparency obligations** including clear documentation of intended purpos... | closed | completed | false | 1 | [] | [] | 2026-02-20T08:04:34Z | 2026-02-20T15:43:04Z | 2026-02-20T15:41:38Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | desiorac | 20,227,435 | MDQ6VXNlcjIwMjI3NDM1 | User | false |
huggingface/transformers | 3,967,173,673 | I_kwDOCUB6oc7sdkQp | 44,169 | https://github.com/huggingface/transformers/issues/44169 | https://api.github.com/repos/huggingface/transformers/issues/44169 | Need an example for FSDP + FP16 training | In my setup, I am trying to run FSDP with FP16 precision. Is there any limitation that prevents me from using FSDP with FP16 precision? How can I convert my existing code to FSDP for FP16 precision? I believe ShardedGradScaler from FSDP should be used. How is it different from the normal GradScaler in terms of implementati... | closed | completed | false | 2 | [] | [] | 2026-02-20T08:04:37Z | 2026-03-31T08:19:25Z | 2026-03-31T08:19:25Z | CONTRIBUTOR | null | 20260407T090028Z | 2026-04-07T09:00:28Z | quic-meetkuma | 200,747,495 | U_kgDOC_cp5w | User | false |
huggingface/transformers | 3,969,292,120 | I_kwDOCUB6oc7slpdY | 44,183 | https://github.com/huggingface/transformers/issues/44183 | https://api.github.com/repos/huggingface/transformers/issues/44183 | EU AI Act Compliance Documentation: Risk Classification & Data Governance Guidelines | <spam> | closed | completed | false | 1 | [] | [] | 2026-02-20T16:12:49Z | 2026-02-20T16:26:42Z | 2026-02-20T16:26:03Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | desiorac | 20,227,435 | MDQ6VXNlcjIwMjI3NDM1 | User | false |
huggingface/transformers | 3,970,141,006 | I_kwDOCUB6oc7so4tO | 44,186 | https://github.com/huggingface/transformers/issues/44186 | https://api.github.com/repos/huggingface/transformers/issues/44186 | [BUG] LayoutLMv2Tokenizer crashes on NER inputs and batched padding/truncation | ### System Info
* `transformers` version: `5.0.0.dev0`
* Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39`
* Python version: `3.12.3`
* `huggingface_hub` version: `1.3.2`
* `safetensors` version: `0.7.0`
* `accelerate` version: `1.12.0`
* Accelerate config: `not installed`
* DeepSpeed version:... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-20T19:58:01Z | 2026-04-18T09:11:02Z | 2026-02-23T10:29:38Z | CONTRIBUTOR | null | 20260418T100536Z | 2026-04-18T10:05:36Z | harshaljanjani | 75,426,551 | MDQ6VXNlcjc1NDI2NTUx | User | false |
huggingface/transformers | 3,970,357,222 | I_kwDOCUB6oc7sptfm | 44,188 | https://github.com/huggingface/transformers/issues/44188 | https://api.github.com/repos/huggingface/transformers/issues/44188 | Diverging attention kernels due to `allow_is_bidirectional_skip` branching on torch.compile | ### System Info
Hi, while we were updating the PyTorch transformers pin to v5.2.0, our regression tests caught a numerics issue between eager and compiled; the difference is very substantial (3.3 vs the typical 1e-4 accepted difference). Digging into it: https://github.com/pytorch/pytorch/pull/175274#issuecomment-39309... | closed | completed | false | 10 | [
"bug"
] | [] | 2026-02-20T21:01:05Z | 2026-04-27T08:46:45Z | 2026-04-27T08:46:45Z | NONE | null | 20260427T120026Z | 2026-04-27T12:00:26Z | xmfan | 9,547,562 | MDQ6VXNlcjk1NDc1NjI= | User | false |
huggingface/transformers | 3,971,177,445 | I_kwDOCUB6oc7ss1vl | 44,190 | https://github.com/huggingface/transformers/issues/44190 | https://api.github.com/repos/huggingface/transformers/issues/44190 | Cannot load local dataset with run_image_classification_no_trainer.py | ### System Info
- Ubuntu 24.04.4 LTS
- Python 3.12.3
- PyTorch 2.10.0
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give... | closed | completed | false | 6 | [
"bug"
] | [] | 2026-02-21T03:12:58Z | 2026-02-24T15:09:23Z | 2026-02-24T15:09:23Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | dyecon | 25,501,036 | MDQ6VXNlcjI1NTAxMDM2 | User | false |
huggingface/transformers | 3,972,332,694 | I_kwDOCUB6oc7sxPyW | 44,205 | https://github.com/huggingface/transformers/issues/44205 | https://api.github.com/repos/huggingface/transformers/issues/44205 | Adding SAM3-LiteText | ### Model description
I would like to propose adding SAM3-LiteText. This model introduces a highly efficient, lightweight text-prompting capability to the SAM3 architecture. It offers excellent performance for text-guided segmentation tasks while maintaining a small computational footprint (params reduced by 80%), mak... | closed | completed | false | 8 | [
"New model"
] | [] | 2026-02-21T16:43:32Z | 2026-04-13T18:41:09Z | 2026-04-13T18:29:09Z | NONE | null | 20260414T122001Z | 2026-04-14T12:20:01Z | SimonZeng7108 | 52,696,979 | MDQ6VXNlcjUyNjk2OTc5 | User | false |
huggingface/transformers | 3,972,794,156 | I_kwDOCUB6oc7szAcs | 44,206 | https://github.com/huggingface/transformers/issues/44206 | https://api.github.com/repos/huggingface/transformers/issues/44206 | v5.2.0 regression: LasrFeatureExtractor passes unsupported center arg and crashes | ### System Info
note: the [bug bot](https://huggingface.co/spaces/huggingchat/hf-docs-chat) is down, but I've checked open issues and confirmed this is not a duplicate.
- `transformers` version: 5.2.0
- Platform: Linux (Google Colab) / Also reproducible on macOS
- Python version: 3.12
- PyTorch version: 2.10.0+cu124
- Using... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-02-21T20:56:04Z | 2026-02-23T10:01:36Z | 2026-02-23T10:01:36Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ainergiz | 151,769,873 | U_kgDOCQvTEQ | User | false |
huggingface/transformers | 3,973,467,921 | I_kwDOCUB6oc7s1k8R | 44,208 | https://github.com/huggingface/transformers/issues/44208 | https://api.github.com/repos/huggingface/transformers/issues/44208 | request refund | I’m having a problem. I added my prepaid card, but the subscription was not accepted because the platform does not accept prepaid cards. However, the amount was deducted from my balance, and now I have neither the balance nor the Pro plan. I need a refund since I won’t be able to use the platform.
| closed | completed | false | 2 | [] | [] | 2026-02-22T03:42:03Z | 2026-03-24T13:31:02Z | 2026-03-24T13:31:02Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | storescienza-gif | 263,101,692 | U_kgDOD66c_A | User | false |
huggingface/transformers | 3,975,350,452 | I_kwDOCUB6oc7s8wi0 | 44,214 | https://github.com/huggingface/transformers/issues/44214 | https://api.github.com/repos/huggingface/transformers/issues/44214 | Add sequence classification capabilities to the Granite models | ### Feature request
This issue proposes adding `ForSequenceClassification` classes to the Granite model family, including:
- **Granite**
- **GraniteMoe**
- **GraniteMoeHybrid**
- **GraniteMoeShared**
### Motivation
Currently, Granite models only support causal language modeling. Adding sequence classification ca... | open | null | false | 0 | [
"Feature request"
] | [] | 2026-02-22T20:14:10Z | 2026-02-23T02:23:38Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jmriosal | 208,881,128 | U_kgDODHNF6A | User | false |
huggingface/transformers | 3,975,980,258 | I_kwDOCUB6oc7s_KTi | 44,220 | https://github.com/huggingface/transformers/issues/44220 | https://api.github.com/repos/huggingface/transformers/issues/44220 | Issue with _torch_extract_fbank_features() | ### System Info
transformers version 5.2.0 (this is where the bug was introduced).
I get the error below when calling ASR pipeline code like this:
```
pipe = pipeline("automatic-speech-recognition", model=model_id)
result = pipe(audio,chunk_length_s=20,stride_length_s=2)
```
Error:
```
File "/usr/local/lib/python... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-02-23T02:17:01Z | 2026-02-23T10:02:09Z | 2026-02-23T10:02:09Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | dmakhervaks | 29,380,741 | MDQ6VXNlcjI5MzgwNzQx | User | false |
huggingface/transformers | 3,976,993,033 | I_kwDOCUB6oc7tDBkJ | 44,222 | https://github.com/huggingface/transformers/issues/44222 | https://api.github.com/repos/huggingface/transformers/issues/44222 | [Bug] FP8 save_pretrained moe | ### System Info
-
### Who can help?
-
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers i... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-02-23T08:40:59Z | 2026-04-28T08:45:36Z | 2026-04-28T08:45:36Z | CONTRIBUTOR | null | 20260428T120019Z | 2026-04-28T12:00:19Z | Jintao-Huang | 45,290,347 | MDQ6VXNlcjQ1MjkwMzQ3 | User | false |
huggingface/transformers | 3,978,812,045 | I_kwDOCUB6oc7tJ9qN | 44,230 | https://github.com/huggingface/transformers/issues/44230 | https://api.github.com/repos/huggingface/transformers/issues/44230 | [fp8] qwen3-vl-fp8/qwen3.5 moe fp8 support (infer) | ### Feature request
I tested that dense works normally, but moe throws an error.
<img width="868" height="316" alt="Image" src="https://github.com/user-attachments/assets/b6b916e6-a72c-414b-821d-01cff320c9aa" />
```python
from transformers import Qwen3VLMoeForConditionalGeneration, AutoProcessor
model = Qwen3VLMo... | closed | completed | false | 3 | [
"Feature request"
] | [] | 2026-02-23T15:36:52Z | 2026-04-23T18:17:14Z | 2026-04-23T18:17:14Z | CONTRIBUTOR | null | 20260424T000039Z | 2026-04-24T00:00:39Z | Jintao-Huang | 45,290,347 | MDQ6VXNlcjQ1MjkwMzQ3 | User | false |
huggingface/transformers | 3,979,446,481 | I_kwDOCUB6oc7tMYjR | 44,238 | https://github.com/huggingface/transformers/issues/44238 | https://api.github.com/repos/huggingface/transformers/issues/44238 | CI: failing slow runs fail to report properly | The following issue description was a red herring; the real problem is described at https://github.com/huggingface/transformers/issues/44238#issuecomment-3946684226
Slow runs try to cat `captured_info.txt`, which does not seem to always be present
https://github.com/huggingface/transformers/pull/43972#issuecomment-394584... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-02-23T17:58:30Z | 2026-02-24T13:02:58Z | 2026-02-24T13:02:58Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | tarekziade | 250,019 | MDQ6VXNlcjI1MDAxOQ== | User | false |
huggingface/transformers | 3,979,941,776 | I_kwDOCUB6oc7tOReQ | 44,242 | https://github.com/huggingface/transformers/issues/44242 | https://api.github.com/repos/huggingface/transformers/issues/44242 | Load balancing loss not added when output_router_logits=False | ### System Info
version: 4.57.3
In the file `models/mixtral/modelling_mixtral.py`, the `aux_loss` is not computed and added to the overall loss when `output_router_logits=False` in the `MixtralConfig`.
This is not intended, since according to the documentation https://huggingface.co/docs/transformers/en/model_doc/mixtra... | closed | completed | false | 9 | [
"bug"
] | [] | 2026-02-23T20:07:48Z | 2026-04-12T08:14:05Z | 2026-04-12T08:14:05Z | NONE | null | 20260413T085906Z | 2026-04-13T08:59:06Z | Matheart | 47,732,475 | MDQ6VXNlcjQ3NzMyNDc1 | User | false |
huggingface/transformers | 3,980,370,150 | I_kwDOCUB6oc7tP6Dm | 44,246 | https://github.com/huggingface/transformers/issues/44246 | https://api.github.com/repos/huggingface/transformers/issues/44246 | import transformers takes long sometimes | ### System Info
- `transformers` version: 5.2.0
- Platform: macOS-15.7.3-arm64-arm-64bit-Mach-O
- Python version: 3.13.11
- Huggingface_hub version: 1.4.1
- Safetensors version: 0.7.0
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?):... | closed | completed | false | 11 | [
"bug"
] | [] | 2026-02-23T22:05:01Z | 2026-04-05T08:08:43Z | 2026-04-05T08:08:43Z | NONE | null | 20260407T090028Z | 2026-04-07T09:00:28Z | audricschiltknecht | 17,416,802 | MDQ6VXNlcjE3NDE2ODAy | User | false |
huggingface/transformers | 3,980,760,231 | I_kwDOCUB6oc7tRZSn | 44,247 | https://github.com/huggingface/transformers/issues/44247 | https://api.github.com/repos/huggingface/transformers/issues/44247 | [MPS] Silent correctness issue in bidirectional attention | ### System Info
A bug in PyTorch for the MPS backend (pytorch/pytorch#174861) results in a silent correctness issue in bidirectional attention under certain conditions:
- dtype != `torch.float` (eg. `float16` or `bfloat16`)
- non-masked or boolean mask
- non-causal
- query sequence length <= 8
- query sequence length... | open | null | false | 18 | [
"WIP",
"bug"
] | [] | 2026-02-24T00:08:02Z | 2026-04-17T11:56:08Z | null | CONTRIBUTOR | null | 20260417T180542Z | 2026-04-17T18:05:42Z | hvaara | 1,535,968 | MDQ6VXNlcjE1MzU5Njg= | User | false |
huggingface/transformers | 3,981,101,818 | I_kwDOCUB6oc7tSsr6 | 44,248 | https://github.com/huggingface/transformers/issues/44248 | https://api.github.com/repos/huggingface/transformers/issues/44248 | [Bug] Security Vulnerability | I have reported a ReDoS vulnerability on [Huntr](https://huntr.com/bounties/c93d804a-fa03-4c94-aa29-b83a1eff9499). It's a new issue which hasn't been fixed yet, but Huntr's platform bot has marked it as a duplicate of a 2024 report that is not relevant to the current regex and file. Can you please re-validate it and verify ... | closed | completed | false | 1 | [] | [] | 2026-02-24T02:13:45Z | 2026-02-24T12:47:06Z | 2026-02-24T12:26:43Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | 0xManan | 70,314,133 | MDQ6VXNlcjcwMzE0MTMz | User | false |
huggingface/transformers | 3,984,591,833 | I_kwDOCUB6oc7tgAvZ | 44,261 | https://github.com/huggingface/transformers/issues/44261 | https://api.github.com/repos/huggingface/transformers/issues/44261 | [Bug/Discussion] MLA q_a_layernorm Missing config.rms_norm_eps, Causing 1e-5/1e-6 Precision Error | ### System Info
Hello! I noticed that the MLA implementations in transformers/vllm/sglang/megatron have slight differences, leading to precision errors (train/infer/rl...)
vllm:
https://github.com/vllm-project/vllm/blob/a0c70816956298f7dd1d0cf47cfa1a169a413692/vllm/model_executor/models/deepseek_v2.py#L907
sglang:
... | closed | completed | false | 11 | [
"bug"
] | [] | 2026-02-24T16:14:57Z | 2026-05-12T08:54:55Z | 2026-05-12T08:54:55Z | CONTRIBUTOR | null | 20260512T120027Z | 2026-05-12T12:00:27Z | Jintao-Huang | 45,290,347 | MDQ6VXNlcjQ1MjkwMzQ3 | User | false |
huggingface/transformers | 3,984,617,187 | I_kwDOCUB6oc7tgG7j | 44,262 | https://github.com/huggingface/transformers/issues/44262 | https://api.github.com/repos/huggingface/transformers/issues/44262 | from_pretrained no longer uses mmap for CPU weights in transformers 5.x causing full materialization | I have the following piece of code:
```python
import psutil
from transformers import AutoModel
rss_before = psutil.Process().memory_info().rss
model = AutoModel.from_pretrained(
"Qwen/Qwen3-Coder-30B-A3B-Instruct",
dtype="auto",
device_map="auto",
max_memory={"cpu": "1024GB", 0: 0},
)
rss_after = psu... | closed | completed | false | 5 | [] | [] | 2026-02-24T16:20:18Z | 2026-03-05T12:42:57Z | 2026-03-05T12:42:57Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | nikitas-cerebras | 253,071,526 | U_kgDODxWQpg | User | false |
huggingface/transformers | 3,984,671,947 | I_kwDOCUB6oc7tgUTL | 44,263 | https://github.com/huggingface/transformers/issues/44263 | https://api.github.com/repos/huggingface/transformers/issues/44263 | The torch.split() return values in GlmMoeDsaIndexer | ### System Info
transformers:
https://github.com/huggingface/transformers/blob/e2bc54f29a58b2d2ee7e7d6eac949c959e063e0f/src/transformers/models/glm_moe_dsa/modular_glm_moe_dsa.py#L515
vllm:
https://github.com/vllm-project/vllm/blob/a0c70816956298f7dd1d0cf47cfa1a169a413692/vllm/model_executor/models/deepseek_v2.py#... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-24T16:32:23Z | 2026-02-24T16:40:25Z | 2026-02-24T16:40:25Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Jintao-Huang | 45,290,347 | MDQ6VXNlcjQ1MjkwMzQ3 | User | false |
huggingface/transformers | 3,985,558,185 | I_kwDOCUB6oc7tjsqp | 44,265 | https://github.com/huggingface/transformers/issues/44265 | https://api.github.com/repos/huggingface/transformers/issues/44265 | [BUG] torch.export.export fails for models using torch_compilable_check (Mask2Former, DeformableDetr, etc.) | ### System Info
* `transformers` version: `5.0.0.dev0`
* Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39`
* Python version: `3.12.3`
* `huggingface_hub` version: `1.3.2`
* `safetensors` version: `0.7.0`
* `accelerate` version: `1.12.0`
* Accelerate config: `not installed`
* DeepSpeed version:... | closed | completed | false | 0 | [
"bug"
] | [] | 2026-02-24T19:58:12Z | 2026-04-18T09:10:35Z | 2026-02-25T15:39:40Z | CONTRIBUTOR | null | 20260418T100536Z | 2026-04-18T10:05:36Z | harshaljanjani | 75,426,551 | MDQ6VXNlcjc1NDI2NTUx | User | false |
huggingface/transformers | 3,987,479,971 | I_kwDOCUB6oc7trB2j | 44,273 | https://github.com/huggingface/transformers/issues/44273 | https://api.github.com/repos/huggingface/transformers/issues/44273 | Lazy loading is not working properly | ### Problem
Lazy loading is not working properly:
- importing transformers takes ~3.5s
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task... | open | null | false | 8 | [
"bug"
] | [
"tarekziade"
] | 2026-02-25T06:23:02Z | 2026-04-23T06:54:18Z | null | MEMBER | null | 20260423T120024Z | 2026-04-23T12:00:24Z | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | User | false |
huggingface/transformers | 3,987,982,756 | I_kwDOCUB6oc7ts8mk | 44,276 | https://github.com/huggingface/transformers/issues/44276 | https://api.github.com/repos/huggingface/transformers/issues/44276 | Loading kimik2 is taking forever | ```python
import os
import json
import yaml
import argparse
from dotenv import load_dotenv
import torch
from pathlib import Path
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
model_name = os.path.expanduser("~/model/kimik2")
model = AutoModelForCausalLM.from... | closed | completed | false | 5 | [] | [] | 2026-02-25T08:33:05Z | 2026-05-07T20:00:19Z | 2026-04-08T08:21:46Z | NONE | null | 20260508T000035Z | 2026-05-08T00:00:35Z | savitha-suresh | 19,798,961 | MDQ6VXNlcjE5Nzk4OTYx | User | false |
huggingface/transformers | 3,989,401,341 | I_kwDOCUB6oc7tyW79 | 44,279 | https://github.com/huggingface/transformers/issues/44279 | https://api.github.com/repos/huggingface/transformers/issues/44279 | Dependency issue with transformers | ### System Info
```
%%capture
!pip uninstall -y bitsandbytes bitsandbytes-cuda* torch torchvision torchaudio xformers transformers datasets pyarrow huggingface-hub numpy spacy thinc
!pip install "numpy<2.0.0"
!pip install --index-url https://download.pytorch.org/whl/cu124 torch==2.6.0 torchvision==0.21.0 torchaudio=... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-02-25T13:09:17Z | 2026-02-26T07:16:37Z | 2026-02-26T07:16:37Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Chilliwiddit | 64,005,874 | MDQ6VXNlcjY0MDA1ODc0 | User | false |
huggingface/transformers | 3,989,674,006 | I_kwDOCUB6oc7tzZgW | 44,280 | https://github.com/huggingface/transformers/issues/44280 | https://api.github.com/repos/huggingface/transformers/issues/44280 | Add species bias to model card templates and bias documentation | ## Problem
The auto-generated model card template in src/transformers/modelcard.py includes an "Intended uses & limitations" section but provides no guidance for documenting species bias — the systematic devaluation of non-human animals' interests relative to humans.
When bias is discussed in the Transformers ecosyst... | closed | completed | false | 1 | [] | [] | 2026-02-25T14:03:32Z | 2026-04-05T08:08:40Z | 2026-04-05T08:08:40Z | NONE | null | 20260407T090028Z | 2026-04-07T09:00:28Z | stuckvgn | 126,325,182 | U_kgDOB4eRvg | User | false |
huggingface/transformers | 3,992,912,671 | I_kwDOCUB6oc7t_wMf | 44,291 | https://github.com/huggingface/transformers/issues/44291 | https://api.github.com/repos/huggingface/transformers/issues/44291 | Bug: TypeError when loading model with `init_empty_weights` in transformers >= 5.0.0rc0 due to unexpected `_is_hf_initialized` argument | ### System Info
- `transformers` version: 5.2.0
- Platform: Linux-5.10.134-013.5.kangaroo.al8.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.12
- Huggingface_hub version: 1.4.1
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch ve... | closed | completed | false | 5 | [
"bug"
] | [] | 2026-02-26T02:36:06Z | 2026-03-09T11:56:35Z | 2026-03-09T11:56:35Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | chenyushuo | 31,039,063 | MDQ6VXNlcjMxMDM5MDYz | User | false |
huggingface/transformers | 3,993,763,077 | I_kwDOCUB6oc7uC_0F | 44,292 | https://github.com/huggingface/transformers/issues/44292 | https://api.github.com/repos/huggingface/transformers/issues/44292 | Error running Qwen-3-8B-NVFP4 | ### System Info
Hi,
I encountered the following error while running [Qwen-3-8B-NVFP4 model](https://huggingface.co/RedHatAI/Qwen3-8B-NVFP4).
Is NVFP4 not supported on transformers? My package info is also shared at the bottom.
```
Traceback (most recent call last):
File "/raid/yilegu/diagnosis_agent_demo/drivers... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-02-26T07:21:27Z | 2026-04-05T08:08:37Z | 2026-04-05T08:08:37Z | NONE | null | 20260407T090028Z | 2026-04-07T09:00:28Z | IKACE | 39,850,409 | MDQ6VXNlcjM5ODUwNDA5 | User | false |
huggingface/transformers | 3,994,183,921 | I_kwDOCUB6oc7uEmjx | 44,295 | https://github.com/huggingface/transformers/issues/44295 | https://api.github.com/repos/huggingface/transformers/issues/44295 | An error occurs when reading position_ids after registering it as a buffer. | ### System Info
transformers==5.2.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproductio... | closed | completed | false | 5 | [
"bug"
] | [] | 2026-02-26T08:54:10Z | 2026-04-06T08:28:11Z | 2026-04-06T08:28:11Z | CONTRIBUTOR | null | 20260407T090028Z | 2026-04-07T09:00:28Z | enze5088 | 14,285,786 | MDQ6VXNlcjE0Mjg1Nzg2 | User | false |
huggingface/transformers | 3,995,030,184 | I_kwDOCUB6oc7uH1Ko | 44,297 | https://github.com/huggingface/transformers/issues/44297 | https://api.github.com/repos/huggingface/transformers/issues/44297 | [BUG] tokenizer.save_pretrained: tokenizer_class in tokenizer_config.json doesn't match the original | ### System Info
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3.5-27B')
tokenizer.save_pretrained('output')
```
<img width="348" height="155" alt="Image" src="https://github.com/user-attachments/assets/2c7ab0fc-b993-427c-b3f8-15e98f81f1df" />
->
<img width=... | closed | completed | false | 5 | [
"bug"
] | [] | 2026-02-26T11:37:49Z | 2026-04-06T08:28:08Z | 2026-04-06T08:28:08Z | CONTRIBUTOR | null | 20260407T090028Z | 2026-04-07T09:00:28Z | Jintao-Huang | 45,290,347 | MDQ6VXNlcjQ1MjkwMzQ3 | User | false |
huggingface/transformers | 3,996,468,098 | I_kwDOCUB6oc7uNUOC | 44,303 | https://github.com/huggingface/transformers/issues/44303 | https://api.github.com/repos/huggingface/transformers/issues/44303 | Less verbose `tqdm` weight loading (`Loading weights: 38% ... Materializing param=....]` log) | ### Feature request
Hello,
Currently, when redirecting the output of any `PretrainedModel.from_pretrained` call to a log file, we get a huge:
```
Loading weights: 38%|███▊ | 74/197 [00:00<00:00, 11546.82it/s, Materializing param=model.decoder.layers.4.final_layer_norm.bias]
Loading weights: 38%|███▊ | 74/197 [00:00<00:00, 114... | closed | completed | false | 0 | [
"Feature request"
] | [] | 2026-02-26T16:20:08Z | 2026-03-03T16:57:55Z | 2026-03-03T16:57:55Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | fxmarty-amd | 180,171,742 | U_kgDOCr0z3g | User | false |
huggingface/transformers | 3,998,769,285 | I_kwDOCUB6oc7uWGCF | 44,315 | https://github.com/huggingface/transformers/issues/44315 | https://api.github.com/repos/huggingface/transformers/issues/44315 | Liger Kernel is not applied when creating the model with `model_init` | ### System Info
N/A
### Who can help?
@SunMarc In `Trainer.train`, the Liger Kernel is not applied when the model is instantiated via `call_model_init`. As a result, hyperparameter search runs cannot leverage this kernel for acceleration.
### Information
- [x] The official example scripts
- [ ] My own modified scr... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-02-27T02:57:03Z | 2026-02-27T13:29:22Z | 2026-02-27T11:59:13Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | linfeng-du | 34,938,020 | MDQ6VXNlcjM0OTM4MDIw | User | false |
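A workaround sketch until this is fixed: apply the kernel inside `model_init` itself, so every hyperparameter-search trial is patched. `apply_liger_kernel_to_llama` is one of the patch functions the `liger-kernel` package exposes; the model id is illustrative.

```python
from liger_kernel.transformers import apply_liger_kernel_to_llama
from transformers import AutoModelForCausalLM

def model_init(trial=None):
    # Patch the Llama modeling code before instantiation, so the fused
    # kernels are in place regardless of what Trainer.train does later.
    apply_liger_kernel_to_llama()
    return AutoModelForCausalLM.from_pretrained('meta-llama/Llama-3.2-1B')

# trainer = Trainer(model_init=model_init, args=training_args, ...)
```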
huggingface/transformers | 4,000,414,049 | I_kwDOCUB6oc7ucXlh | 44,322 | https://github.com/huggingface/transformers/issues/44322 | https://api.github.com/repos/huggingface/transformers/issues/44322 | AttributeError: 'Qwen3_5Config' object has no attribute 'num_attention_heads' | ### System Info
- transformers: 5.3.0.dev0
### Who can help?
@remi-or @ArthurZucker @McPatate
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details ... | closed | completed | false | 3 | [
"bug"
] | [
"remi-or"
] | 2026-02-27T10:52:57Z | 2026-03-02T11:10:55Z | 2026-03-02T11:10:55Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | zzc0430 | 58,017,008 | MDQ6VXNlcjU4MDE3MDA4 | User | false |
huggingface/transformers | 4,001,463,730 | I_kwDOCUB6oc7ugX2y | 44,327 | https://github.com/huggingface/transformers/issues/44327 | https://api.github.com/repos/huggingface/transformers/issues/44327 | decode_spans in QA pipeline crashes with ValueError: kth out of bounds when len(scores_flat) == top_k | ### System Info
- `transformers` version: 4.39.0 (also verified present in 4.53.3 and `main` branch)
- Python version: 3.10
- NumPy version: 1.x
- OS: Linux (AWS SageMaker)
### Who can help?
@Narsil
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially sup... | closed | completed | false | 2 | [] | [] | 2026-02-27T15:05:31Z | 2026-03-12T13:22:12Z | 2026-03-12T13:22:12Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jakipradip-patra | 81,886,190 | MDQ6VXNlcjgxODg2MTkw | User | false |
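The boundary condition is easy to reproduce in isolation. A minimal sketch follows; the guard at the end is only an assumption of what a fix could look like, not the pipeline's actual code.

```python
import numpy as np

scores_flat = np.random.rand(5)
top_k = 5  # exactly len(scores_flat)

# np.argpartition requires kth < len(a), so kth == len(a) raises.
try:
    np.argpartition(-scores_flat, top_k)
except ValueError as err:
    print(err)  # kth(=5) out of bounds (5)

# Guarded variant: fall back to a full sort when top_k covers everything.
if top_k >= len(scores_flat):
    idx_sort = np.argsort(-scores_flat)
else:
    idx = np.argpartition(-scores_flat, top_k)[:top_k]
    idx_sort = idx[np.argsort(-scores_flat[idx])]
```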
huggingface/transformers | 4,002,093,991 | I_kwDOCUB6oc7uixun | 44,336 | https://github.com/huggingface/transformers/issues/44336 | https://api.github.com/repos/huggingface/transformers/issues/44336 | Some ANSI codes are generated by utils/loading_report even when not connected to terminal | ### System Info
The bug does not depend on system info; it is obvious from the sources.
### Who can help?
@Cyrilvallez
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or da... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-27T17:31:28Z | 2026-03-09T11:52:15Z | 2026-03-09T11:52:15Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | foxik | 560,016 | MDQ6VXNlcjU2MDAxNg== | User | false |
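The conventional guard is to emit ANSI escapes only when the stream is an actual terminal. The sketch below illustrates the idea; it is not the `loading_report` code itself.

```python
import os
import sys

def supports_color(stream=sys.stderr) -> bool:
    # Standard heuristic: require a TTY and honor the NO_COLOR convention.
    return stream.isatty() and not os.environ.get('NO_COLOR')

BOLD = '\033[1m' if supports_color() else ''
RESET = '\033[0m' if supports_color() else ''
print(f'{BOLD}loading report{RESET}', file=sys.stderr)
```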
huggingface/transformers | 4,003,791,070 | I_kwDOCUB6oc7upQDe | 44,351 | https://github.com/huggingface/transformers/issues/44351 | https://api.github.com/repos/huggingface/transformers/issues/44351 | cannot import name 'HybridCache' from 'transformers | ### System Info
transformers 5.2.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-28T03:46:26Z | 2026-03-02T13:48:54Z | 2026-03-02T13:48:54Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | wade0604 | 66,124,014 | MDQ6VXNlcjY2MTI0MDE0 | User | false |
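For code that has to run on both major versions, a defensive import is one option. The fallback below is an assumption: `DynamicCache` is the general-purpose cache class still exported in 5.x, but whether it is an adequate replacement depends on the model.

```python
try:
    # transformers 4.x exported HybridCache at the top level.
    from transformers import HybridCache
except ImportError:
    # Assumption: on 5.x, fall back to the general-purpose cache.
    from transformers import DynamicCache as HybridCache
```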
huggingface/transformers | 4,004,326,473 | I_kwDOCUB6oc7urSxJ | 44,355 | https://github.com/huggingface/transformers/issues/44355 | https://api.github.com/repos/huggingface/transformers/issues/44355 | Errors occur when running compiled Python files. | ### System Info
linux
transformers >=4.51.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Repro... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-02-28T09:49:16Z | 2026-04-08T08:21:44Z | 2026-04-08T08:21:44Z | NONE | null | 20260411T144729Z | 2026-04-11T14:47:29Z | HuaC-Z | 62,017,261 | MDQ6VXNlcjYyMDE3MjYx | User | false |
huggingface/transformers | 4,005,225,425 | I_kwDOCUB6oc7uuuPR | 44,360 | https://github.com/huggingface/transformers/issues/44360 | https://api.github.com/repos/huggingface/transformers/issues/44360 | [Bug/Discussion] The DSA indexer lacks a ReLU | ### System Info
The model structure of the GLM-MOE-DSA indexer lacks a ReLU here (https://github.com/zRzRzRzRzRzRzR/transformers/blob/4ca30213c6f7aa84b55c280e02730fe14d33dac5/src/transformers/models/glm_moe_dsa/modular_glm_moe_dsa.py#L403) compared to the reference implementation (https://huggingface.co/deepseek-ai/De... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-02-28T19:25:43Z | 2026-03-19T15:13:36Z | 2026-03-19T15:13:36Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | yangdsh | 23,007,771 | MDQ6VXNlcjIzMDA3Nzcx | User | false |
huggingface/transformers | 4,005,272,044 | I_kwDOCUB6oc7uu5ns | 44,361 | https://github.com/huggingface/transformers/issues/44361 | https://api.github.com/repos/huggingface/transformers/issues/44361 | [BUG] MLukeTokenizer fails with AttributeError on tasks | ### System Info
* `transformers` version: `5.0.0.dev0`
* Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39`
* Python version: `3.12.3`
* `huggingface_hub` version: `1.3.2`
* `safetensors` version: `0.7.0`
* `accelerate` version: `1.12.0`
* Accelerate config: `not installed`
* DeepSpeed version:... | closed | completed | false | 0 | [
"bug"
] | [] | 2026-02-28T19:58:16Z | 2026-04-18T09:10:07Z | 2026-03-02T14:50:19Z | CONTRIBUTOR | null | 20260418T100536Z | 2026-04-18T10:05:36Z | harshaljanjani | 75,426,551 | MDQ6VXNlcjc1NDI2NTUx | User | false |
huggingface/transformers | 4,006,127,760 | I_kwDOCUB6oc7uyKiQ | 44,365 | https://github.com/huggingface/transformers/issues/44365 | https://api.github.com/repos/huggingface/transformers/issues/44365 | [i18n-<languageCode>] Translating docs to <languageName> | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/hug... | closed | completed | false | 0 | [
"WIP"
] | [] | 2026-03-01T03:35:20Z | 2026-03-02T14:01:52Z | 2026-03-02T14:01:52Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | mija4264-arch38 | 251,540,492 | U_kgDODv40DA | User | false |
huggingface/transformers | 4,006,463,604 | I_kwDOCUB6oc7uzch0 | 44,367 | https://github.com/huggingface/transformers/issues/44367 | https://api.github.com/repos/huggingface/transformers/issues/44367 | Unauthorized error for non gated models also | Hi Team,
I keep getting the same unauthorized error for every model I try to use.
I recently created an access token with the read token type.
I have already tried multiple models, but the same issue comes up for all of them.
Please look into it and provide a solution.
userId - SwatikX
This ... | closed | completed | false | 6 | [] | [] | 2026-03-01T06:32:25Z | 2026-04-08T08:21:42Z | 2026-04-08T08:21:42Z | NONE | null | 20260411T144729Z | 2026-04-11T14:47:29Z | Swatikkar | 218,440,029 | U_kgDODQUhXQ | User | false |
huggingface/transformers | 4,006,552,277 | I_kwDOCUB6oc7uzyLV | 44,368 | https://github.com/huggingface/transformers/issues/44368 | https://api.github.com/repos/huggingface/transformers/issues/44368 | when using ms-swift lora fine-tuning Qwen3.5-27B, each layer emits warning:You should update the config with `tie_word_embeddings=False` to silence this warning | ### System Info
transformers==5.2.0
torch==2.8.0
deepspeed==0.18.6
python==3.10
ms-swift==4.0.0.dev0
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-03-01T07:25:46Z | 2026-04-08T08:21:40Z | 2026-04-08T08:21:40Z | NONE | null | 20260411T144729Z | 2026-04-11T14:47:29Z | huangy3881 | 64,725,770 | MDQ6VXNlcjY0NzI1Nzcw | User | false |
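The warning text itself points at the fix; below is a sketch of applying it at load time, assuming the checkpoint really does store separate embedding and LM-head weights. The auto class is illustrative, and for nested multimodal configs the flag may live on `config.text_config` instead.

```python
from transformers import AutoConfig, AutoModelForCausalLM

model_id = 'Qwen/Qwen3.5-27B'  # id taken from the report above
config = AutoConfig.from_pretrained(model_id)
config.tie_word_embeddings = False  # silences the per-layer warning
model = AutoModelForCausalLM.from_pretrained(model_id, config=config)
```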
huggingface/transformers | 4,006,696,053 | I_kwDOCUB6oc7u0VR1 | 44,370 | https://github.com/huggingface/transformers/issues/44370 | https://api.github.com/repos/huggingface/transformers/issues/44370 | [i18n-<languageCode>] Translating docs to <languageName> | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/hug... | closed | completed | false | 1 | [
"WIP"
] | [] | 2026-03-01T08:57:27Z | 2026-03-02T12:51:59Z | 2026-03-02T12:51:59Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | j6n5nwwmx9-cpu | 251,545,529 | U_kgDODv5HuQ | User | false |
huggingface/transformers | 4,007,139,181 | I_kwDOCUB6oc7u2Bdt | 44,371 | https://github.com/huggingface/transformers/issues/44371 | https://api.github.com/repos/huggingface/transformers/issues/44371 | <spam> | <spam> | closed | completed | false | 0 | [] | [] | 2026-03-01T13:04:25Z | 2026-03-02T12:51:20Z | 2026-03-02T12:51:12Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | HansElze | 139,304,469 | U_kgDOCE2eFQ | User | false |
huggingface/transformers | 4,007,541,136 | I_kwDOCUB6oc7u3jmQ | 44,373 | https://github.com/huggingface/transformers/issues/44373 | https://api.github.com/repos/huggingface/transformers/issues/44373 | Wrong docstring for position_ids | ### System Info
latest commit, see link below
### Who can help?
@stevhliu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reprod... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-03-01T15:46:43Z | 2026-04-12T08:14:01Z | 2026-04-12T08:14:01Z | NONE | null | 20260413T085906Z | 2026-04-13T08:59:06Z | RmZeta2718 | 42,400,165 | MDQ6VXNlcjQyNDAwMTY1 | User | false |
huggingface/transformers | 4,009,071,326 | I_kwDOCUB6oc7u9ZLe | 44,380 | https://github.com/huggingface/transformers/issues/44380 | https://api.github.com/repos/huggingface/transformers/issues/44380 | GPT2 attention scaling config is ignored when using SDPA / FlashAttention backends | ### System Info
None
### Who can help?
@ArthurZucker Hi, I'm new to LLMs and am currently studying the GPT2 model. I found that the following GPT2 attention configuration options:
• scale_attn_weights
• scale_attn_by_inverse_layer_idx
are respected in eager attention mode but silently ignored when using AttentionInterface backen... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-03-02T03:31:15Z | 2026-03-04T16:33:09Z | 2026-03-04T16:33:09Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Qi-Zhan | 89,050,446 | MDQ6VXNlcjg5MDUwNDQ2 | User | false |
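Until the scaling flags are wired into the other backends, requesting the eager path explicitly keeps them honored; a short sketch:

```python
from transformers import GPT2LMHeadModel

# attn_implementation='eager' routes attention through the original
# Python implementation, which reads scale_attn_weights and
# scale_attn_by_inverse_layer_idx; the sdpa/flash backends do not.
model = GPT2LMHeadModel.from_pretrained('gpt2', attn_implementation='eager')
```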
huggingface/transformers | 4,010,252,311 | I_kwDOCUB6oc7vB5gX | 44,384 | https://github.com/huggingface/transformers/issues/44384 | https://api.github.com/repos/huggingface/transformers/issues/44384 | Qwen3.5 model: when the data is not padded, an error is reported indicating that the shape does not match. | commit id: fc9137225880a9d03f130634c20f9dbe36a7b8bf
Qwen3_5: should the position_ids input passed to the decoder_layer by the text model be text_position_ids? | closed | completed | false | 8 | [] | [] | 2026-03-02T09:37:31Z | 2026-03-05T09:47:25Z | 2026-03-05T09:47:25Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Vectorwh | 88,070,160 | MDQ6VXNlcjg4MDcwMTYw | User | false |
huggingface/transformers | 4,010,700,315 | I_kwDOCUB6oc7vDm4b | 44,387 | https://github.com/huggingface/transformers/issues/44387 | https://api.github.com/repos/huggingface/transformers/issues/44387 | Increased CUDA reserved memory in Transformers 5.x under int4 quantization leads to OOM | ### System Info
- `transformers` version: 5.2.0
- Platform: Linux-6.12.57+deb13-amd64-x86_64-with-glibc2.41
- Python version: 3.12.12
- Huggingface_hub version: 0.36.2
- Safetensors version: 0.7.0
- Accelerate version: 1.11.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accel... | closed | completed | false | 16 | [
"bug"
] | [] | 2026-03-02T11:16:43Z | 2026-03-16T16:36:43Z | 2026-03-16T16:36:43Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | tangefly | 124,695,565 | U_kgDOB260DQ | User | false |
huggingface/transformers | 4,011,627,127 | I_kwDOCUB6oc7vHJJ3 | 44,393 | https://github.com/huggingface/transformers/issues/44393 | https://api.github.com/repos/huggingface/transformers/issues/44393 | Qwen3-VL: Hallucination/Error with 2D bounding box output | ### System Info
Not quite sure if this is the proper place, but it could in theory be handled via preprocessing. This issue serves more as documentation in case others face the same problem in the future.
Detecting 2D bounding boxes does not work if inputting images with an aspect ratio similar to KITTI (3.3:1), as the model d... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-03-02T14:39:22Z | 2026-03-03T10:18:03Z | 2026-03-03T10:18:03Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | L-Reichardt | 72,140,033 | MDQ6VXNlcjcyMTQwMDMz | User | false |
huggingface/transformers | 4,013,464,415 | I_kwDOCUB6oc7vOJtf | 44,402 | https://github.com/huggingface/transformers/issues/44402 | https://api.github.com/repos/huggingface/transformers/issues/44402 | "rmihaylov/bert-base-bg" model has pad and unk tokens outside the tokenizer vocab_size | ### System Info
python: 3.13.5
torch: 2.7.1+cu118
transformers: 5.2.0
tokenizers: 0.22.2
### Who can help?
@ArthurZucker @Cyrilvallez
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ..... | closed | completed | false | 5 | [
"bug"
] | [] | 2026-03-02T21:49:00Z | 2026-03-10T09:29:18Z | 2026-03-10T09:29:18Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | AngledLuffa | 3,411,033 | MDQ6VXNlcjM0MTEwMzM= | User | false |
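A short check that makes the reported layout visible, assuming the model downloads cleanly:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('rmihaylov/bert-base-bg')
print('base vocab_size:', tok.vocab_size)  # core vocabulary only
print('total size:', len(tok))             # includes added tokens
print('pad id:', tok.pad_token_id, 'unk id:', tok.unk_token_id)
# pad/unk ids >= vocab_size mean those tokens sit outside the base
# vocabulary, which is what this report describes.
```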
huggingface/transformers | 4,013,908,239 | I_kwDOCUB6oc7vP2EP | 44,403 | https://github.com/huggingface/transformers/issues/44403 | https://api.github.com/repos/huggingface/transformers/issues/44403 | Unnecessary noise when loading a transformer | ### System Info
python: 3.13.5
torch: 2.7.1+cu118
transformers: 5.2.0
tokenizers: 0.22.2
### Who can help?
@ArthurZucker @Cyrilvallez
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)... | open | null | false | 9 | [
"bug"
] | [] | 2026-03-02T23:43:47Z | 2026-05-03T12:04:03Z | null | NONE | null | 20260503T180031Z | 2026-05-03T18:00:31Z | AngledLuffa | 3,411,033 | MDQ6VXNlcjM0MTEwMzM= | User | false |
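For noise that comes from the logging system rather than a progress bar, the existing verbosity knob is the usual first answer; whether it covers the specific messages in this report is exactly what the thread is discussing. A sketch:

```python
from transformers.utils import logging

logging.set_verbosity_error()  # keep only errors from transformers

from transformers import AutoModel

model = AutoModel.from_pretrained('bert-base-uncased')
```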
huggingface/transformers | 4,014,433,398 | I_kwDOCUB6oc7vR2R2 | 44,404 | https://github.com/huggingface/transformers/issues/44404 | https://api.github.com/repos/huggingface/transformers/issues/44404 | Bring Loss classes back to modeling file and leverage modular | As discussed while implementing #36895, I think the losses currently in the loss folder should be moved back to the modeling file, as is already the case for many existing models (MaskFormer, for example).
I think it makes sense because custom losses, like the LwDetr or RfDetr ones, are part of the model defini... | closed | completed | false | 1 | [] | [] | 2026-03-03T03:10:54Z | 2026-04-10T08:27:33Z | 2026-04-10T08:27:33Z | CONTRIBUTOR | null | 20260411T144729Z | 2026-04-11T14:47:29Z | sbucaille | 24,275,548 | MDQ6VXNlcjI0Mjc1NTQ4 | User | false |
huggingface/transformers | 4,014,453,668 | I_kwDOCUB6oc7vR7Ok | 44,405 | https://github.com/huggingface/transformers/issues/44405 | https://api.github.com/repos/huggingface/transformers/issues/44405 | Add AutoModelForSequenceClassification support for Qwen3.5 (Qwen3_5Config) | ### Feature request
### What happens
When trying to load a Qwen3.5 model for sequence classification:
```
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained(
"Qwen/Qwen3.5-0.8B",
num_labels=2,
trust_remote_code=True,
)
```
Transformers ... | closed | completed | false | 0 | [
"Feature request"
] | [] | 2026-03-03T03:19:25Z | 2026-03-04T10:34:24Z | 2026-03-04T10:34:24Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | medhakimbedhief | 61,546,101 | MDQ6VXNlcjYxNTQ2MTAx | User | false |
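Until the mapping lands, a thin wrapper over the base model is a possible stopgap. Everything below (class name, pooling choice) is illustrative, not the eventual `Qwen3_5ForSequenceClassification`:

```python
import torch.nn as nn
from transformers import AutoModel

class SequenceClassifier(nn.Module):
    """Hypothetical stand-in: backbone plus a linear classification head."""

    def __init__(self, model_id: str, num_labels: int):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(model_id)
        self.head = nn.Linear(self.backbone.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.backbone(
            input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Last-token pooling, a common choice for causal backbones;
        # with right-padding you would index the last non-padded token.
        return self.head(hidden[:, -1])
```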
huggingface/transformers | 4,015,646,421 | I_kwDOCUB6oc7vWebV | 44,409 | https://github.com/huggingface/transformers/issues/44409 | https://api.github.com/repos/huggingface/transformers/issues/44409 | Suggestion: optional external troubleshooting reference for retrieval-heavy and RAG-style workflows | Hi, I know this may be slightly downstream from the core scope of Transformers itself, so I wanted to frame this as an optional external reference rather than a request to change any core documentation direction.
I’d like to suggest the WFGY RAG 16 Problem Map as a practical, framework-agnostic troubleshooting resourc... | closed | completed | false | 3 | [] | [] | 2026-03-03T09:34:55Z | 2026-03-03T14:59:36Z | 2026-03-03T14:05:41Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | onestardao | 212,544,643 | U_kgDODKssgw | User | false |
huggingface/transformers | 4,015,987,645 | I_kwDOCUB6oc7vXxu9 | 44,410 | https://github.com/huggingface/transformers/issues/44410 | https://api.github.com/repos/huggingface/transformers/issues/44410 | qwen3next: layer 0 missing attn_qkv/attn_gate projections | ### System Info
Since Ollama 0.17.5 I get the following error:
```
ollama run qwen3-next:80b-a3b-instruct-q4_K_M
Error: 500 Internal Server Error: failed to initialize model: qwen3next: layer 0 missing attn_qkv/attn_gate projections
```
### Who can help?
_No response_
### Information
- [ ] The official example scr... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-03-03T10:49:24Z | 2026-03-03T11:04:21Z | 2026-03-03T11:04:21Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | fcorneli | 771,606 | MDQ6VXNlcjc3MTYwNg== | User | false |
huggingface/transformers | 4,018,912,481 | I_kwDOCUB6oc7vi7zh | 44,418 | https://github.com/huggingface/transformers/issues/44418 | https://api.github.com/repos/huggingface/transformers/issues/44418 | 📋 Documentation Enhancement Suggestion | <spam> | closed | completed | false | 0 | [] | [] | 2026-03-03T21:58:34Z | 2026-03-04T13:52:48Z | 2026-03-04T13:52:39Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | croviatrust | 246,349,803 | U_kgDODq7_6w | User | false |
huggingface/transformers | 4,019,024,098 | I_kwDOCUB6oc7vjXDi | 44,419 | https://github.com/huggingface/transformers/issues/44419 | https://api.github.com/repos/huggingface/transformers/issues/44419 | Urge DOJ & FBI to Protect AI Innovation Even as the U.S. Focuses on the Iran War | Due to the U.S.-Israel war on Iran, as we can all see, the People's Republic of China (PRC) and the Russian Federation do not appear to be helping Iran in the war. Why? Even though it might not be proven yet, one plausible reason is that they are using this to distract U.S. federal law enforcement, mil... | closed | completed | false | 0 | [] | [] | 2026-03-03T22:27:17Z | 2026-03-04T13:52:05Z | 2026-03-04T13:52:05Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | NobleResearch | 265,358,384 | U_kgDOD9EMMA | User | false |
huggingface/transformers | 4,019,430,725 | I_kwDOCUB6oc7vk6VF | 44,423 | https://github.com/huggingface/transformers/issues/44423 | https://api.github.com/repos/huggingface/transformers/issues/44423 | [Bug] `transformers serve --continuous-batching` crashes with multimodal models (Qwen3.5) — AttributeError: 'str' object has no attribute 'to' | ### System Info
- `transformers` main branch (5.3.0.dev0, commit 5c1c72be)
- Python 3.11.14
- PyTorch 2.5.1+cu121
- OS: Ubuntu Linux
### Who can help?
@Lysandre @ArthurZucker @joaogante
### Reproduction
1. Install latest transformers from main:
```bash
pip install "transformers[serving] @ git+https://github.com/hu... | closed | completed | false | 3 | [] | [] | 2026-03-04T00:51:26Z | 2026-03-09T13:48:43Z | 2026-03-09T13:48:43Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jw9603 | 70,795,645 | MDQ6VXNlcjcwNzk1NjQ1 | User | false |
huggingface/transformers | 4,023,852,245 | I_kwDOCUB6oc7v1xzV | 44,442 | https://github.com/huggingface/transformers/issues/44442 | https://api.github.com/repos/huggingface/transformers/issues/44442 | [BUG] AutoTokenizer fails to load FastSpeech2ConformerTokenizer | ### System Info
* `transformers` version: `5.0.0.dev0`
* Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39`
* Python version: `3.12.3`
* `huggingface_hub` version: `1.3.2`
* `safetensors` version: `0.7.0`
* `accelerate` version: `1.12.0`
* Accelerate config: `not installed`
* DeepSpeed version:... | closed | completed | false | 0 | [
"bug"
] | [] | 2026-03-04T19:58:11Z | 2026-04-18T09:08:07Z | 2026-03-09T15:02:48Z | CONTRIBUTOR | null | 20260418T100536Z | 2026-04-18T10:05:36Z | harshaljanjani | 75,426,551 | MDQ6VXNlcjc1NDI2NTUx | User | false |
huggingface/transformers | 4,024,533,185 | I_kwDOCUB6oc7v4YDB | 44,448 | https://github.com/huggingface/transformers/issues/44448 | https://api.github.com/repos/huggingface/transformers/issues/44448 | [BUG] Different output for google/pegasus-cnn_dailymail between Transformers v4 and v5 | ### System Info
- `transformers` version: 4.57.6 (working) / 5.0.0 (incorrect output)
- Platform: Linux-6.6.113+-x86_64-with-glibc2.35
- Python version: 3.12.12
- Huggingface_hub version: 1.5.0
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-03-04T22:37:37Z | 2026-03-18T09:54:59Z | 2026-03-18T09:54:59Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | math-hiyoko | 56,009,584 | MDQ6VXNlcjU2MDA5NTg0 | User | false |
huggingface/transformers | 4,025,233,188 | I_kwDOCUB6oc7v7C8k | 44,450 | https://github.com/huggingface/transformers/issues/44450 | https://api.github.com/repos/huggingface/transformers/issues/44450 | Support argumentless loading from Trainer checkpoints | ### Feature request
`Trainer` checkpoints don't include the `config.json` necessary to instantiate the model.
This means that if we want to use a specific checkpoint (e.g. for fine-tuning, evaluating on the test set, etc.) we need to know the `init_args` when calling `.from_pretrained(ckpt_path, **init_args)`.
By s... | open | null | false | 1 | [
"Feature request"
] | [] | 2026-03-05T02:08:23Z | 2026-03-05T13:19:35Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | adosar | 110,358,278 | U_kgDOBpPvBg | User | false |
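A workaround sketch along the lines the request implies: persist the config next to the checkpoint once, after which `.from_pretrained` needs no extra arguments (model id and paths are illustrative):

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    'bert-base-uncased', num_labels=2
)
# ... training with Trainer produces e.g. output/checkpoint-500 ...

# Write config.json (num_labels included) next to the saved weights.
model.config.save_pretrained('output/checkpoint-500')

# Later: argumentless reload, init args come from config.json.
reloaded = AutoModelForSequenceClassification.from_pretrained(
    'output/checkpoint-500'
)
```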
huggingface/transformers | 4,025,392,911 | I_kwDOCUB6oc7v7p8P | 44,451 | https://github.com/huggingface/transformers/issues/44451 | https://api.github.com/repos/huggingface/transformers/issues/44451 | Latest version cannot load "vesteinn/ScandiBERT" | ### System Info
broken config:
Python 3.13.5
tokenizers 0.22.2
transformers 5.2.0
torch 2.7.1+cu118
working config:
Python 3.13.5
tokenizers 0.22.1
transformers 4.57.1
torch 2.8.0+cu129
### Who can help?
@Ar... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-03-05T03:02:13Z | 2026-03-19T17:45:28Z | 2026-03-19T13:59:26Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | AngledLuffa | 3,411,033 | MDQ6VXNlcjM0MTEwMzM= | User | false |
huggingface/transformers | 4,025,502,138 | I_kwDOCUB6oc7v8Em6 | 44,453 | https://github.com/huggingface/transformers/issues/44453 | https://api.github.com/repos/huggingface/transformers/issues/44453 | Recommending a Transformers-based AI productivity toolkit | Hello maintainers!
Thank you for maintaining this excellent project! While using it, I built a Transformers-based productivity toolkit that may be useful as a mutual reference:
🔗 https://github.com/zhuxunyu/ai-productivity-toolkit
The toolkit includes:
- A prompt optimizer (automatically improves AI prompts)
- Excel automation tools (batch processing/conversion)
- A data-scraping framework (legally compliant collection)
- AI workflows (one-click automation of complex tasks)
Everything is implemented in Python and is fully open source and free.
It could complement the transformers ecosystem; perhaps some features could even be integrated?
Thanks again for the maintainers' hard work! 🙏 | closed | completed | false | 1 | [] | [] | 2026-03-05T03:39:45Z | 2026-04-08T13:10:21Z | 2026-04-08T13:10:21Z | NONE | null | 20260411T144729Z | 2026-04-11T14:47:29Z | zhuxunyu | 53,635,908 | MDQ6VXNlcjUzNjM1OTA4 | User | false |