---
library_name: litert
pipeline_tag: image-classification
tags:
- vision
- image-classification
- google
- computer-vision
datasets:
- imagenet-1k
model-index:
- name: litert-community/convnext_base
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: ImageNet-1k
type: imagenet-1k
config: default
split: validation
metrics:
- name: Top 1 Accuracy (Full Precision)
type: accuracy
value: 0.8405
- name: Top 5 Accuracy (Full Precision)
type: accuracy
value: 0.9689
---
# ConvNeXt_Base
ConvNeXt_Base is a convolutional neural network pre-trained on the ImageNet-1k dataset. The architecture was introduced by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545).
## Model description
The model was converted from a PyTorch Vision checkpoint. The original model has:

- acc@1 (on ImageNet-1K): 84.06%
- acc@5 (on ImageNet-1K): 96.87%
- num_params: 88,591,464
## Intended uses & limitations
The model files were converted from pretrained PyTorch Vision weights. The converted models may be subject to licenses or terms and conditions of PyTorch Vision and of the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.
## How to Use
**1. Install Dependencies** Ensure your Python environment is set up with the required libraries. Run the following command in your terminal:
```bash
pip install numpy Pillow huggingface_hub ai-edge-litert
```
**2. Prepare Your Image** The script expects an image file to analyze. Make sure you have an image (e.g., cat.jpg or car.png) saved in the same working directory as your script.
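If you don't have a test image handy, one way to fetch one is with Python's standard library (the URL below is just an example; any image file works):
```python
import urllib.request

# Example image URL (an assumption; substitute any image you like).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
urllib.request.urlretrieve(url, "cat.jpg")
```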
**3. Save the Script** Create a new file named `classify.py`, paste the script below into it, and save the file:
```python
#!/usr/bin/env python3
import argparse
import json

import numpy as np
from PIL import Image
from huggingface_hub import hf_hub_download
from ai_edge_litert.compiled_model import CompiledModel


def preprocess(img: Image.Image) -> np.ndarray:
    """Replicate the torchvision ConvNeXt_Base inference transforms:
    resize the shorter side to 232, center-crop to 224x224, and
    normalize with the ImageNet mean and standard deviation."""
    img = img.convert("RGB")
    w, h = img.size
    s = 232
    # Resize so the shorter side is 232 px, preserving the aspect ratio.
    if w < h:
        img = img.resize((s, int(round(h * s / w))), Image.BILINEAR)
    else:
        img = img.resize((int(round(w * s / h)), s), Image.BILINEAR)
    # Center-crop to 224x224.
    left = (img.size[0] - 224) // 2
    top = (img.size[1] - 224) // 2
    img = img.crop((left, top, left + 224, top + 224))
    # Scale to [0, 1], then normalize per channel.
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = (x - np.array([0.485, 0.456, 0.406], dtype=np.float32)) / np.array(
        [0.229, 0.224, 0.225], dtype=np.float32
    )
    return x


def main():
    ap = argparse.ArgumentParser(description="Classify an image with ConvNeXt_Base.")
    ap.add_argument("--image", required=True, help="Path to the image file")
    args = ap.parse_args()

    # Download the LiteRT model and the ImageNet-1k label map from the Hub.
    model_path = hf_hub_download("litert-community/convnext_base", "convnext_base.tflite")
    labels_path = hf_hub_download(
        "huggingface/label-files", "imagenet-1k-id2label.json", repo_type="dataset"
    )
    with open(labels_path, "r", encoding="utf-8") as f:
        id2label = {int(k): v for k, v in json.load(f).items()}

    img = Image.open(args.image)
    x = preprocess(img)

    # Compile the model and allocate input/output buffers for signature 0.
    model = CompiledModel.from_file(model_path)
    inp = model.create_input_buffers(0)
    out = model.create_output_buffers(0)

    # Run inference and read the output back as a float32 vector of logits.
    inp[0].write(x)
    model.run_by_index(0, inp, out)
    req = model.get_output_buffer_requirements(0, 0)
    y = out[0].read(req["buffer_size"] // np.dtype(np.float32).itemsize, np.float32)

    # Report the highest-scoring class.
    pred = int(np.argmax(y))
    label = id2label.get(pred, f"class_{pred}")
    print(f"Top-1 class index: {pred}")
    print(f"Top-1 label: {label}")


if __name__ == "__main__":
    main()
```
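The script prints only the single best class. If you also want the top-5 predictions with probabilities, a minimal extension (a sketch, assuming `y` and `id2label` from the script above) is to apply a softmax to the logits and sort:
```python
import numpy as np

def top5(y: np.ndarray, id2label: dict) -> list:
    # Numerically stable softmax over the logits.
    probs = np.exp(y - y.max())
    probs /= probs.sum()
    # Indices of the five highest-probability classes, best first.
    idx = np.argsort(probs)[::-1][:5]
    return [(id2label.get(int(i), f"class_{int(i)}"), float(probs[i])) for i in idx]
```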
**4. Execute the Python Script** Run the following command:
```bash
python classify.py --image cat.jpg
```
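For a typical cat photo, the output should look roughly like this (the exact index and label depend on your image):
```
Top-1 class index: 281
Top-1 label: tabby, tabby cat
```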
### BibTeX entry and citation info
```bibtex
@inproceedings{liu2022convnet,
  title={A {ConvNet} for the 2020s},
  author={Liu, Zhuang and Mao, Hanzi and Wu, Chao-Yuan and Feichtenhofer, Christoph and Darrell, Trevor and Xie, Saining},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={11976--11986},
  year={2022}
}
```