---
library_name: litert
tags:
- vision
- image-classification
datasets:
- imagenet-1k
---
# gcvit_tiny

Converted TIMM image classification model for LiteRT.

- Source architecture: `gcvit_tiny`
- Source checkpoint: `https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-morevit/gcvit_tiny_224_nvidia-ac783954.pth`
- File: `model.tflite`
- Input: `float32` tensor in NCHW layout, shape `[1, 3, 224, 224]`
- Output: ImageNet-1K logits, shape `[1, 1000]`
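A minimal preprocessing sketch in NumPy that produces the NCHW float32 tensor described above from a 224x224 HWC uint8 image. The ImageNet mean/std values are an assumption (standard TIMM defaults), not something this card specifies; verify them against the conversion pipeline before relying on them.

```python
import numpy as np

# Standard ImageNet normalization constants (assumed, not stated in this card).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc: np.ndarray) -> np.ndarray:
    """Convert a 224x224x3 uint8 image to a [1, 3, 224, 224] float32 tensor."""
    x = image_hwc.astype(np.float32) / 255.0
    x = (x - MEAN) / STD            # per-channel normalization
    x = np.transpose(x, (2, 0, 1))  # HWC -> CHW
    return x[np.newaxis, ...]       # add batch dim -> NCHW

dummy = np.zeros((224, 224, 3), dtype=np.uint8)
print(preprocess(dummy).shape)  # (1, 3, 224, 224)
```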
## Runtime Status

- CPU smoke test: passed with LiteRT `CompiledModel`.
- GPU delegation: currently blocked for this model by rank-5 tensor patterns in the GPU backend, mostly `RESHAPE`, `TRANSPOSE`, and related window/attention operations. The model is published as CPU-ready while GPU support is being improved.
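Since only the CPU path is currently supported, a hedged inference sketch using the Interpreter API from the `ai_edge_litert` pip package (the `CompiledModel` API mentioned above is newer; this sketch assumes the classic Interpreter path and a local `model.tflite`, which is why it guards the import):

```python
import numpy as np

try:
    # LiteRT's drop-in replacement for tf.lite.Interpreter (assumed installed).
    from ai_edge_litert.interpreter import Interpreter
except ImportError:
    Interpreter = None  # keep the sketch importable without LiteRT

def classify(model_path: str, nchw_input: np.ndarray) -> int:
    """Run the model on a [1, 3, 224, 224] float32 tensor; return top-1 class id."""
    if Interpreter is None:
        raise RuntimeError("ai_edge_litert is not installed")
    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], nchw_input.astype(np.float32))
    interpreter.invoke()
    logits = interpreter.get_tensor(out["index"])  # shape [1, 1000]
    return int(np.argmax(logits))
```

Usage would be `classify("model.tflite", x)` with `x` the preprocessed NCHW tensor; running on CPU requires no delegate configuration.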