<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Model debugging toolboxes

This page lists the debugging and model-addition tools used by the library, as well as the utility functions it
provides for them.

Most of these are only useful if you are adding new models to the library.

## Model addition debuggers

### Model addition debugger - context manager for model adders

This context manager is a power-user tool intended for model adders. It tracks all forward calls within a model's forward
pass and logs a slice of each input and output to a nested JSON file. Note that this context manager enforces `torch.no_grad()`.
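
To give a sense of the mechanics, here is a minimal, simplified sketch of this kind of tracing built on PyTorch forward hooks. It is not the actual implementation (which also nests children, slices tensor values, and writes the JSON files described below), just an illustration:

```python
import torch
from torch import nn

def trace_forward(model: nn.Module, inputs: dict) -> list:
    """Record the class name and tensor shapes of every forward call."""
    records = []

    def hook(module, args, output):
        entry = {"module": module.__class__.__name__}
        entry["inputs"] = [str(t.shape) for t in args if isinstance(t, torch.Tensor)]
        if isinstance(output, torch.Tensor):
            entry["output"] = str(output.shape)
        records.append(entry)

    handles = [m.register_forward_hook(hook) for m in model.modules()]
    try:
        with torch.no_grad():  # the real context manager also enforces no_grad
            model(**inputs)
    finally:
        for h in handles:
            h.remove()
    return records
```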

### Rationale

When porting models to transformers, even from Python to Python, model adders often have to do a lot of manual
operations, such as saving and loading tensors and comparing dtypes. This small tool can hopefully shave off some
time.

### Usage

Add this context manager as follows to debug a model:

```python
import torch
from PIL import Image
from transformers import LlavaProcessor, LlavaForConditionalGeneration
from transformers.model_debugging_utils import model_addition_debugger_context

torch.random.manual_seed(673)

# load pretrained model and processor
model_id = "llava-hf/llava-1.5-7b-hf"
processor = LlavaProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

# create random image input
random_image = Image.fromarray(torch.randint(0, 256, (224, 224, 3), dtype=torch.uint8).numpy())

# prompt
prompt = "<image>Describe this image."

# process inputs
inputs = processor(text=prompt, images=random_image, return_tensors="pt")

# call forward method (not .generate!)
with model_addition_debugger_context(
    model,
    debug_path="optional_path_to_your_directory",
    do_prune_layers=False,  # This will output ALL the layers of the model.
):
    output = model.forward(**inputs)

```

### Reading results

The debugger generates two files from the forward call, both with the same base name, but ending either with
`_SUMMARY.json` or with `_FULL_TENSORS.json`.

The first one will contain a summary of each module's _input_ and _output_ tensor values and shapes.

```json
{
  "module_path": "MolmoForConditionalGeneration",
  "inputs": {
    "args": [],
    "kwargs": {
      "input_ids": {
        "shape": "torch.Size([1, 589])",
        "dtype": "torch.int64"
      },
      "attention_mask": {
        "shape": "torch.Size([1, 589])",
        "dtype": "torch.int64"
      },
      "pixel_values": {
        "shape": "torch.Size([1, 5, 576, 588])",
        "dtype": "torch.float32",
        "mean": "tensor(-8.9514e-01, device='cuda:0')",
        "std": "tensor(9.2586e-01, device='cuda:0')",
        "min": "tensor(-1.7923e+00, device='cuda:0')",
        "max": "tensor(1.8899e+00, device='cuda:0')"
      }
    }
  },
  "children": [
    {
      "module_path": "MolmoForConditionalGeneration.language_model.model.embed_tokens",
      "inputs": {
        "args": [
          {
            "shape": "torch.Size([1, 589])",
            "dtype": "torch.int64"
          }
        ]
      },
      "outputs": {
        "shape": "torch.Size([1, 589, 3584])",
        "dtype": "torch.float32",
        "mean": "tensor(6.5460e-06, device='cuda:0')",
        "std": "tensor(2.3807e-02, device='cuda:0')",
        "min": "tensor(-3.3398e-01, device='cuda:0')",
        "max": "tensor(3.9453e-01, device='cuda:0')"
      }
    },
    {
      "module_path": "MolmoForConditionalGeneration.vision_tower",
      "inputs": {
        "args": [
          {
            "shape": "torch.Size([5, 1, 576, 588])",
            "dtype": "torch.float32",
            "mean": "tensor(-8.9514e-01, device='cuda:0')",
            "std": "tensor(9.2586e-01, device='cuda:0')",
            "min": "tensor(-1.7923e+00, device='cuda:0')",
            "max": "tensor(1.8899e+00, device='cuda:0')"
          }
        ],
        "kwargs": {
          "output_hidden_states": "True"
        }
      },
      "children": [
        { ... and so on
```

The `_FULL_TENSORS.json` file will display a full view of all tensors, which is useful for comparing two files.

```json
      "pixel_values": {
        "shape": "torch.Size([1, 5, 576, 588])",
        "dtype": "torch.float32",
        "value": [
          "tensor([[[[-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          ...,",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00]],",
          "",
          "         [[-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          ...,",
          "          [-1.4857e+00, -1.4820e+00, -1.2100e+00,  ..., -6.0979e-01, -5.9650e-01, -3.8527e-01],",
          "          [-1.6755e+00, -1.7221e+00, -1.4518e+00,  ..., -7.5577e-01, -7.4658e-01, -5.5592e-01],",
          "          [-7.9957e-01, -8.2162e-01, -5.7014e-01,  ..., -1.3689e+00, -1.3169e+00, -1.0678e+00]],",
          "",
          "         [[-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          ...,",
          "          [-3.0322e-01, -5.0645e-01, -5.8436e-01,  ..., -6.2439e-01, -7.9160e-01, -8.1188e-01],",
          "          [-4.4921e-01, -6.5653e-01, -7.2656e-01,  ..., -3.4702e-01, -5.2146e-01, -5.1326e-01],",
          "          [-3.4702e-01, -5.3647e-01, -5.4170e-01,  ..., -1.0915e+00, -1.1968e+00, -1.0252e+00]],",
          "",
          "         [[-1.1207e+00, -1.2718e+00, -1.0678e+00,  ..., 1.2013e-01, -1.3126e-01, -1.7197e-01],",
          "          [-6.9738e-01, -9.1166e-01, -8.5454e-01,  ..., -5.5050e-02, -2.8134e-01, -4.2793e-01],",
          "          [-3.4702e-01, -5.5148e-01, -5.8436e-01,  ..., 1.9312e-01, -8.6235e-02, -2.1463e-01],",
          "          ...,",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00]],",
          "",
          "         [[-1.0039e+00, -9.5669e-01, -6.5546e-01,  ..., -1.4711e+00, -1.4219e+00, -1.1389e+00],",
          "          [-1.0039e+00, -9.5669e-01, -6.5546e-01,  ..., -1.7193e+00, -1.6771e+00, -1.4091e+00],",
          "          [-1.6317e+00, -1.6020e+00, -1.2669e+00,  ..., -1.2667e+00, -1.2268e+00, -8.9720e-01],",
          "          ...,",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00],",
          "          [-1.7923e+00, -1.7521e+00, -1.4802e+00,  ..., -1.7923e+00, -1.7521e+00, -1.4802e+00]]]], device='cuda:0')"
        ],
        "mean": "tensor(-8.9514e-01, device='cuda:0')",
        "std": "tensor(9.2586e-01, device='cuda:0')",
        "min": "tensor(-1.7923e+00, device='cuda:0')",
        "max": "tensor(1.8899e+00, device='cuda:0')"
      },
```

#### Saving tensors to disk

Some model adders may benefit from logging full tensor values to disk to support, for example, numerical analysis
across implementations.

Set `use_repr=False` to write tensors to disk using [SafeTensors](https://huggingface.co/docs/safetensors/en/index).

```python
with model_addition_debugger_context(
    model,
    debug_path="optional_path_to_your_directory",
    do_prune_layers=False,
    use_repr=False,   # Defaults to True
):
    output = model.forward(**inputs)
```

When using `use_repr=False`, tensors are written to disk in the same location as the `_SUMMARY.json` and
`_FULL_TENSORS.json` files. The `value` property of entries in the `_FULL_TENSORS.json` file then contains a relative
path to the associated `.safetensors` file. Each tensor is written to its own file as the `data` property of
the state dictionary. File names are constructed using the `module_path` as a prefix, with a few possible postfixes that
are built recursively (a loading sketch follows the list):

* Module inputs are denoted with the `_inputs` postfix and outputs with `_outputs`.
* `list` and `tuple` instances, such as `args` or function return values, are postfixed with `_{index}`.
* `dict` instances are postfixed with `_{key}`.
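
Reading such a dump back is straightforward with the [SafeTensors](https://huggingface.co/docs/safetensors/en/index) library. A minimal sketch, where the file name is a hypothetical example of the naming scheme described above:

```python
from safetensors.torch import load_file

# Hypothetical file name following the "<module_path>" + postfix scheme;
# adjust to whatever files appear in your debug directory.
tensors = load_file("MyModel.language_model.model.embed_tokens_outputs.safetensors")
data = tensors["data"]  # each tensor is stored under the "data" key
print(data.shape, data.dtype)
```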

### Comparing between implementations

Once the forward passes of two models have been traced by the debugger, you can compare the JSON output files. In the
example below, there are slight differences between the two implementations' key projection layers: the inputs are mostly
identical, but not quite. Looking through the file differences makes it easier to pinpoint which layer is wrong.

![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/files_difference_debugging.png)
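
A standard diff tool works well on these files, but the summaries can also be compared programmatically. Below is a minimal sketch; the file names are placeholders for two traces produced by the debugger, and it assumes both models expose the same module paths:

```python
import json

def flatten(node, out):
    # Walk the nested `children` structure and index each summary by module path.
    out[node["module_path"]] = {k: v for k, v in node.items() if k != "children"}
    for child in node.get("children", []):
        flatten(child, out)
    return out

with open("reference_SUMMARY.json") as f:
    reference = flatten(json.load(f), {})
with open("ported_SUMMARY.json") as f:
    ported = flatten(json.load(f), {})

# Report modules whose recorded statistics differ between the two traces.
for path in sorted(reference.keys() & ported.keys()):
    if reference[path] != ported[path]:
        print(f"mismatch at {path}")
```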

### Limitations and scope

This feature only works for torch-based models. Models relying heavily on external kernel calls may work, but the trace will
probably miss some things. Regardless, any Python implementation that aims at mimicking another implementation can be
traced once instead of rerun N times with breakpoints.

If you pass `do_prune_layers=False` to the debugger, ALL the layers will be output to JSON. Otherwise, only the
first and last layers are shown. This is useful when some layers (typically cross-attention) appear only after N
layers.

[[autodoc]] model_addition_debugger_context

## Analyzer of skipped tests

### Scan skipped tests - for model adders and maintainers

This small utility is a power-user tool intended for model adders and maintainers. It lists all the test methods
defined in `test_modeling_common.py`, which are inherited by all model tester classes, and scans the repository to measure
how many tests are being skipped and for which models.

### Rationale

When porting models to transformers, tests fail as they should, and sometimes `test_modeling_common` feels irreconcilable with the peculiarities of our brand new model. But how can we be sure we're not breaking everything by adding a seemingly innocent skip?

This utility:

- scans all `test_modeling_common` methods
- looks for the places where a method is skipped
- returns a summary JSON that you can load as a DataFrame and inspect

**For instance, `test_inputs_embeds` is skipped for a whopping 39% of models at the time of writing this util.**

![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/f7f671f69b88ce4967e19179172c248958d35742/transformers/tests_skipped_visualisation.png)

### Usage

You can run the skipped test analyzer in two ways:

#### Full scan (default)

Run from the root of the `transformers` repo; this scans all common test methods and outputs the results to a JSON file (default: `all_tests_scan_result.json`).

```bash
python utils/scan_skipped_tests.py --output_dir path/to/output
```

- `--output_dir` (optional): Directory where the JSON results will be saved. Defaults to the current directory.

**Example output:**

```text
πŸ”¬ Parsing 331 model test files once each...
πŸ“ Aggregating 224 tests...
  (224/224) test_update_candidate_strategy_with_matches_1es_3d_is_nonecodet_schedule_fa_kwargs
βœ… Scan complete.

πŸ“„ JSON saved to /home/pablo/git/transformers/all_tests_scan_result.json

```

This generates an `all_tests_scan_result.json` file that you can inspect. The JSON is indexed by method name, and each entry follows this schema, indicating the origin of the test as well (from `common` or `GenerationMixin`):

```json
{
  "<method_name>": {
    "origin": "<test suite>"
    "models_ran": ["<model_name>", ...],
    "models_skipped": ["<model_name>", ...],
    "skipped_proportion": <float>,
    "reasons_skipped": ["<model_name>: <reason>",
      ...
    ]
  },
  ...
}
```

You can then visualize the results, as in the screenshot above, with e.g. `pandas`:

```python
import pandas as pd

df = pd.read_json('all_tests_scan_result.json').T
df.sort_values(by=['skipped_proportion'], ascending=False)
```
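
To drill into why a particular method is skipped, you can read its `reasons_skipped` entry directly; a small example using the schema above:

```python
# Print the per-model skip reasons for one method.
for reason in df.loc["test_inputs_embeds", "reasons_skipped"]:
    print(reason)
```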

### Scan a single test method

You can focus on a specific test method using `--test_method_name`:

```bash
python utils/scan_skipped_tests.py --test_method_name test_inputs_embeds --output_dir path/to/output
```

- `--test_method_name`: Name of the test method to scan (e.g., `test_inputs_embeds`).
- `--output_dir` (optional): Directory where the JSON result will be saved.

**Example output:**

```bash
$ python utils/scan_skipped_tests.py --test_method_name test_inputs_embeds

πŸ”¬ Parsing 331 model test files once each...

== test_inputs_embeds ==

Ran    : 199/323
Skipped : 124/323 (38.4%)
 - aimv2: Aimv2 does not use inputs_embeds
 - align: Inputs_embeds is tested in individual model tests
 - altclip: Inputs_embeds is tested in individual model tests
 - audio_spectrogram_transformer: AST does not use inputs_embeds
 - beit: BEiT does not use inputs_embeds
 - bit: Bit does not use inputs_embeds
 - blip: Blip does not use inputs_embeds
 - blip_2: Inputs_embeds is tested in individual model tests
 - bridgetower:
 - canine: CANINE does not have a get_input_embeddings() method.
 - ...

πŸ“„ JSON saved to /home/pablo/git/transformers/scan_test_inputs_embeds.json

```

## Modular model detector

### Code similarity analyzer - for model adders

This utility analyzes code similarities between model implementations to identify opportunities for modularization. It compares a new or existing modeling file against all models in the library using embedding-based and token-based similarity metrics.

### Rationale

When adding a new model to transformers, many components (attention layers, MLPs, outputs, etc.) may already exist in similar form in other models. Instead of implementing everything from scratch, model adders can identify which existing classes are similar and potentially reusable through modularization.

The tool computes two similarity scores:

- **Embedding score**: Uses semantic code embeddings (via `Qwen/Qwen3-Embedding-4B`) to detect functionally similar code even with different naming
- **Jaccard score**: Measures token set overlap to identify structurally similar code patterns

A score of 1.00 means the code is identical (a simplified sketch of both metrics follows below).
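
For intuition, here is a minimal, simplified sketch of both metrics. It is not the tool's actual implementation, which uses a dedicated code-embedding model and proper tokenization:

```python
import math
import re

def jaccard(code_a: str, code_b: str) -> float:
    """Token-set overlap: intersection over union of identifier-like tokens."""
    a, b = set(re.findall(r"\w+", code_a)), set(re.findall(r"\w+", code_b))
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(u: list, v: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm if norm else 0.0

# Structurally similar code scores high on Jaccard even with renamed variables.
print(jaccard("def forward(self, x): return self.fc(x)",
              "def forward(self, y): return self.fc(y)"))
```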

### Usage

From the root of the `transformers` repository:

```bash
python utils/modular_model_detector.py --modeling-file path/to/modeling_file.py
```

The tool will automatically download the pre-built index from the Hub (requires RAM/VRAM for the embedding model).

**Example output:**

```text
Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 33.62it/s]
encoding 21 query definitions with Qwen/Qwen3-Embedding-4B (device=cuda, batch=16, max_length=4096)

stuff.py::Beit3ImageTextMatchingOutput:
embedding:
    blip_2::Blip2ImageTextMatchingModelOutput (0.9994)
    chinese_clip::ChineseCLIPOutput (0.9818)
    owlvit::OwlViTOutput (0.9818)
jaccard:
    owlv2::Owlv2Output (0.9667)
    metaclip_2::MetaClip2Output (0.9667)
    altclip::AltCLIPOutput (0.9667)
intersection:
    blip::BlipOutput
    owlvit::OwlViTOutput

stuff.py::Beit3MLP:
embedding:
    efficientloftr::EfficientLoFTRMLP (0.9718)
    seggpt::SegGptMlp (0.9650)
jaccard:
    chinese_clip::ChineseCLIPTextSelfOutput (0.5294)
    bert::BertSelfOutput (0.5294)
intersection:
```

The `intersection` field shows classes that appear in both top-5 results, indicating high confidence for modularization candidates.

### Building a custom index

To rebuild the index from your local codebase (useful after adding new models or using a different embedding model):

```bash
python utils/modular_model_detector.py --build
```

To push the rebuilt index to a Hub dataset:

```bash
python utils/modular_model_detector.py --build --push-new-index --hub-dataset your-org/your-dataset
```

### Options

- `--modeling-file`: Path to the modeling file to analyze
- `--build`: Build the code similarity index from all modeling files in `src/transformers/models/`
- `--push-new-index`: After building, push the index to a Hub dataset (requires `--build`)
- `--hub-dataset`: Hub dataset repository ID to pull/push the index (default: `hf-internal-testing/transformers_code_embeddings`)

### Limitations

This tool requires GPU/CPU resources to run the embedding model (`Qwen/Qwen3-Embedding-4B`). The pre-built index is downloaded from the Hub by default, which requires an internet connection on first use.

Results are suggestions based on code similarity and should be manually reviewed before modularization. High similarity scores don't guarantee perfect compatibility.