---
library_name: litert
base_model: timm/deit_small_patch16_224.fb_in1k
tags:
- vision
- image-classification
datasets:
- imagenet-1k
---
# deit_small_patch16_224
Image classification model from `timm`, converted to LiteRT.
- Source architecture: `deit_small_patch16_224`
- Source checkpoint: `timm/deit_small_patch16_224.fb_in1k`
- File: `model.tflite`
- Input: `float32` tensor in NCHW layout, shape `[1, 3, 224, 224]`
- Output: ImageNet-1K logits, shape `[1, 1000]`
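
The input keeps the PyTorch-style NCHW layout rather than TFLite's usual NHWC, so images must be transposed before inference. Below is a minimal preprocessing sketch in Python, assuming the standard ImageNet mean/std normalization used by the original timm checkpoint; the direct 224x224 bicubic resize is a simplification of timm's resize-then-center-crop eval transform, so verify against your own conversion settings.

```python
# Preprocessing sketch: image file -> float32 NCHW tensor of shape [1, 3, 224, 224].
# Assumptions: standard ImageNet mean/std (as used by timm for this checkpoint)
# and a plain bicubic resize in place of timm's resize-then-center-crop pipeline.
import numpy as np
from PIL import Image

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(path: str) -> np.ndarray:
    img = Image.open(path).convert("RGB").resize((224, 224), Image.BICUBIC)
    x = np.asarray(img, dtype=np.float32) / 255.0    # HWC in [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD           # per-channel normalization
    x = np.transpose(x, (2, 0, 1))[np.newaxis, ...]  # HWC -> CHW, add batch dim
    return x
```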
## Runtime Status
- CPU smoke test: passed with LiteRT `CompiledModel`.
- GPU delegation: currently blocked for this model. The converted graph contains rank-5 tensor patterns that the GPU backend does not yet handle, mostly around `RESHAPE`, `TRANSPOSE`, and related window/attention operations. The model is published as CPU-ready while GPU support is being improved.
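
For reference, the CPU path can be exercised with the LiteRT Python interpreter API. This is a hedged smoke-test sketch rather than the exact `CompiledModel` harness mentioned above; the model path and dummy input are assumptions for illustration.

```python
# Minimal CPU smoke-test sketch using the LiteRT Python interpreter API.
# The model path and random input are illustrative assumptions.
import numpy as np
from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
assert tuple(inp["shape"]) == (1, 3, 224, 224)  # NCHW, as documented above

# Random input is enough to confirm the graph executes on CPU.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()

logits = interpreter.get_tensor(out["index"])  # ImageNet-1K logits, shape [1, 1000]
print("Output shape:", logits.shape)
```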
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 22.1
- GMACs: 4.6
- Activations (M): 11.9
- Image size: 224 x 224
- **Papers:**
- Training data-efficient image transformers & distillation through attention: https://arxiv.org/abs/2012.12877
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k
## Citation
```bibtex
@InProceedings{pmlr-v139-touvron21a,
  title     = {Training data-efficient image transformers & distillation through attention},
  author    = {Touvron, Hugo and Cord, Matthieu and Douze, Matthijs and Massa, Francisco and Sablayrolles, Alexandre and Jegou, Herve},
  booktitle = {International Conference on Machine Learning},
  pages     = {10347--10357},
  year      = {2021},
  volume    = {139},
  month     = {July}
}
```
```bibtex
@misc{rw2019timm,
  author       = {Ross Wightman},
  title        = {PyTorch Image Models},
  year         = {2019},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  doi          = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```