
maxvit_tiny_rw_224

Converted TIMM image classification model for LiteRT.

  • Source architecture: maxvit_tiny_rw_224
  • Source checkpoint: timm/maxvit_tiny_rw_224.sw_in1k
  • File: model.tflite
  • Input: float32 tensor in NCHW layout, shape [1, 3, 224, 224]
  • Output: ImageNet-1K logits, shape [1, 1000]
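A minimal preprocessing sketch for producing the expected input tensor. The mean/std values below are the standard ImageNet normalization constants commonly used by TIMM models; they are an assumption here, not taken from this card, so check the checkpoint's TIMM config if exact values matter:

```python
import numpy as np

# Assumed: standard ImageNet normalization constants.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Convert an HWC uint8 RGB image (already resized to 224x224)
    into the [1, 3, 224, 224] float32 NCHW tensor the model expects."""
    x = image_hwc_uint8.astype(np.float32) / 255.0  # scale to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD          # per-channel normalization
    x = np.transpose(x, (2, 0, 1))                  # HWC -> CHW
    return x[np.newaxis, ...]                       # add batch dimension

# Example with a dummy image:
dummy = np.zeros((224, 224, 3), dtype=np.uint8)
batch = preprocess(dummy)
print(batch.shape, batch.dtype)  # (1, 3, 224, 224) float32
```

The resulting tensor can then be fed to whichever LiteRT runtime you use (for example the `ai_edge_litert` Python package's interpreter, loading `model.tflite`).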

Runtime Status

  • CPU smoke test: passed with LiteRT CompiledModel.
  • GPU delegation: currently blocked for this model by rank-5 tensor patterns in the GPU backend, mostly RESHAPE, TRANSPOSE, and related window/attention operations. The model is published as CPU-ready while GPU support is being improved.
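
Once the model runs (on CPU, per the status above), the [1, 1000] output can be decoded with a small helper. This sketch assumes the logits follow the standard ImageNet-1K class index order:

```python
import numpy as np

def top_k(logits: np.ndarray, k: int = 5):
    """Return (indices, probabilities) of the top-k classes
    from [1, 1000] ImageNet logits."""
    z = logits[0] - logits[0].max()       # stabilize before softmax
    probs = np.exp(z) / np.exp(z).sum()
    idx = np.argsort(probs)[::-1][:k]     # highest-probability classes first
    return idx, probs[idx]

# Example with synthetic logits where class 42 dominates:
logits = np.zeros((1, 1000), dtype=np.float32)
logits[0, 42] = 10.0
idx, p = top_k(logits)
print(idx[0])  # 42
```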

Model Details

  • Model Type: Image classification / feature backbone
  • Model Stats:
    • Params (M): 29.1
    • GMACs: 5.1
    • Activations (M): 33.1
    • Image size: 224 x 224
  • Papers:
    • MaxViT: Multi-Axis Vision Transformer
    • CoAtNet: Marrying Convolution and Attention for All Data Sizes (https://arxiv.org/abs/2106.04803)
  • Dataset: ImageNet-1k

Citation

@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
@article{tu2022maxvit,
  title={MaxViT: Multi-Axis Vision Transformer},
  author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
  journal={ECCV},
  year={2022}
}
@article{dai2021coatnet,
  title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
  author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
  journal={arXiv preprint arXiv:2106.04803},
  year={2021}
}