---
library_name: litert
base_model: timm/deit_small_patch16_224.fb_in1k
tags:
  - vision
  - image-classification
datasets:
  - imagenet-1k
---
# deit_small_patch16_224

TIMM image classification model converted to the LiteRT (`.tflite`) format.

- Source architecture: `deit_small_patch16_224`
- Source checkpoint: `timm/deit_small_patch16_224.fb_in1k`
- File: `model.tflite`
- Input: `float32` tensor in NCHW layout, shape `[1, 3, 224, 224]`
- Output: ImageNet-1K logits, shape `[1, 1000]`
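
A minimal inference sketch using the `ai-edge-litert` Python package (an assumption; any runtime exposing the LiteRT interpreter API works the same way). The resize-then-center-crop recipe and ImageNet mean/std below approximate the standard `timm` eval transform for this checkpoint; `example.jpg` is a placeholder path:

```python
# pip install ai-edge-litert pillow numpy
import numpy as np
from PIL import Image
from ai_edge_litert.interpreter import Interpreter

# Standard ImageNet normalization (timm default for this checkpoint).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(path: str) -> np.ndarray:
    """Resize, center-crop to 224x224, normalize, and lay out as NCHW."""
    img = Image.open(path).convert("RGB").resize((256, 256), Image.BICUBIC)
    left = top = (256 - 224) // 2
    img = img.crop((left, top, left + 224, top + 224))
    x = np.asarray(img, dtype=np.float32) / 255.0   # HWC, values in [0, 1]
    x = (x - MEAN) / STD
    return x.transpose(2, 0, 1)[None, ...]          # -> [1, 3, 224, 224]

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], preprocess("example.jpg"))
interpreter.invoke()
logits = interpreter.get_tensor(out["index"])       # shape [1, 1000]
print("top-5 class ids:", np.argsort(logits[0])[::-1][:5])
```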

## Runtime Status

- CPU smoke test: passed with LiteRT `CompiledModel` (see the sketch below).
- GPU delegation: currently blocked for this model. The GPU backend does not yet handle the rank-5 tensor patterns the graph produces, mostly around `RESHAPE`, `TRANSPOSE`, and related window/attention operations, so the model is published as CPU-ready while GPU support is being improved.
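
A random-input shape check is enough to reproduce the CPU half of that status. This sketch goes through the Python interpreter API as a stand-in for the `CompiledModel` harness actually used; it is not the original test:

```python
import numpy as np
from ai_edge_litert.interpreter import Interpreter

# CPU-only sanity check: feed random data, verify the output shape.
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(*inp["shape"]).astype(np.float32)  # [1, 3, 224, 224]
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
logits = interpreter.get_tensor(out["index"])
assert logits.shape == (1, 1000), logits.shape
print("CPU smoke test OK:", logits.shape)
```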

## Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 22.1
  - GMACs: 4.6
  - Activations (M): 11.9
  - Image size: 224 x 224
- **Papers:**
  - Training data-efficient image transformers & distillation through attention: https://arxiv.org/abs/2012.12877
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k
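
As a sanity check on the parameter figure above: DeiT-small uses an embedding dim of 384 and 12 transformer blocks (per the DeiT paper), and a back-of-the-envelope count lands close to the stated 22.1 M:

```latex
% Rough parameter count for DeiT-small (d = 384, L = 12 blocks).
% Per block, ignoring biases and norms:
%   attention: QKV (3d^2) + output projection (d^2) = 4d^2
%   MLP:       d \cdot 4d + 4d \cdot d              = 8d^2
L \cdot 12d^2 = 12 \cdot 12 \cdot 384^2 \approx 21.2\,\mathrm{M}
% Patch embedding and classifier head:
3 \cdot 16^2 \cdot 384 \approx 0.3\,\mathrm{M}, \qquad 384 \cdot 1000 \approx 0.4\,\mathrm{M}
```

Biases, norms, the class token, and position embeddings account for the remaining ~0.2 M.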

## Citation

```bibtex
@InProceedings{pmlr-v139-touvron21a,
  title =     {Training data-efficient image transformers \& distillation through attention},
  author =    {Touvron, Hugo and Cord, Matthieu and Douze, Matthijs and Massa, Francisco and Sablayrolles, Alexandre and J{\'e}gou, Herv{\'e}},
  booktitle = {International Conference on Machine Learning},
  pages =     {10347--10357},
  year =      {2021},
  volume =    {139},
  month =     {July}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```