Update README: Add model card metadata, ImageNet-1k metrics, and LiteRT usage example

#1
Files changed (1)
  1. README.md +124 -1
README.md CHANGED
@@ -1,21 +1,144 @@
 
---
library_name: litert
+ pipeline_tag: image-classification
tags:
- vision
- image-classification
+ - google
+ - computer-vision
datasets:
- imagenet-1k
base_model:
- google/efficientnet-b7
+ model-index:
+ - name: litert-community/efficientnet_b7
+   results:
+   - task:
+       type: image-classification
+       name: Image Classification
+     dataset:
+       name: ImageNet-1k
+       type: imagenet-1k
+       config: default
+       split: validation
+     metrics:
+     - name: Top 1 Accuracy (Full Precision)
+       type: accuracy
+       value: 0.8410
+     - name: Top 5 Accuracy (Full Precision)
+       type: accuracy
+       value: 0.9690
+     - name: Top 1 Accuracy (Dynamic Quantized wi8 afp32)
+       type: accuracy
+       value: 0.8388
+     - name: Top 5 Accuracy (Dynamic Quantized wi8 afp32)
+       type: accuracy
+       value: 0.9679
---
+
# EfficientNet B7

- EfficientNet B7 model pre-trained on ImageNet-1k.
+ EfficientNet B7 model pre-trained on ImageNet-1k. Originally introduced by Tan and Le in the influential paper [**EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks**](https://arxiv.org/abs/1905.11946), this model uses compound scaling to systematically balance network depth, width, and resolution, delivering superior accuracy with significantly higher efficiency than traditional architectures.
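+
+ For readers curious what compound scaling means in practice, the sketch below is illustrative only (it is not part of this repo's code): a single coefficient phi grows depth, width, and input resolution together via fixed constants found by grid search in the paper. EfficientNet-B7 itself lands at roughly depth x3.1, width x2.0, and 600x600 inputs, which is why the preprocessing further down resizes to 600 px.
+
+ ```python
+ # Illustrative compound-scaling rule from Tan & Le (2019); values from the paper's
+ # grid search, chosen so that alpha * beta**2 * gamma**2 is roughly 2.
+ alpha, beta, gamma = 1.2, 1.1, 1.15
+
+ def scale(phi: float) -> tuple[float, float, float]:
+     """Return (depth, width, resolution) multipliers for compound coefficient phi."""
+     return alpha ** phi, beta ** phi, gamma ** phi
+
+ # Larger phi => deeper, wider network at higher input resolution.
+ print(scale(1.0))  # one step up from the B0 baseline
+ ```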
+
+ ## Model description
+
+ The model was converted from a checkpoint from PyTorch Vision.
+
+ The original model has:
+ - acc@1 (on ImageNet-1K): 84.122%
+ - acc@5 (on ImageNet-1K): 96.908%
+ - num_params: 66,347,960
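+
+ If you want to check those reference numbers against the original checkpoint, one option (this needs `torch`/`torchvision`, which are otherwise not required to run the LiteRT model) is:
+
+ ```python
+ # Inspect the original PyTorch Vision checkpoint this model was converted from.
+ from torchvision.models import EfficientNet_B7_Weights, efficientnet_b7
+
+ weights = EfficientNet_B7_Weights.IMAGENET1K_V1
+ model = efficientnet_b7(weights=weights)
+
+ print(sum(p.numel() for p in model.parameters()))  # expected: 66,347,960
+ # weights.meta also records the reference ImageNet-1K acc@1 / acc@5 values.
+ ```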

## Intended uses & limitations

The model files were converted from pretrained weights from PyTorch Vision. The models may have their own licenses or terms and conditions derived from PyTorch Vision and the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.

+ ## How to Use
+
+ **1. Install Dependencies** Ensure your Python environment is set up with the required libraries. Run the following command in your terminal:
+
+ ```bash
+ pip install numpy Pillow huggingface_hub ai-edge-litert
+ ```
+
+ **2. Prepare Your Image** The script expects an image file to analyze. Make sure you have an image (e.g., cat.jpg or car.png) saved in the same working directory as your script; the snippet below shows one way to grab a sample.
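+
+ Any RGB photo works; the example below simply downloads a public COCO validation photo (the URL is just an example, not part of this repo):
+
+ ```python
+ import urllib.request
+
+ # Example image only; substitute any local photo you like.
+ urllib.request.urlretrieve(
+     "http://images.cocodataset.org/val2017/000000039769.jpg",
+     "cat.jpg",
+ )
+ ```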
+
+ **3. Save the Script** Create a new file named `classify.py`, paste the script below into it, and save the file:
+
+ ```python
+ #!/usr/bin/env python3
+ import argparse, json
+ import numpy as np
+ from PIL import Image
+ from huggingface_hub import hf_hub_download
+ from ai_edge_litert.compiled_model import CompiledModel
+
+ def preprocess(img: Image.Image) -> np.ndarray:
+     # Resize the shorter side to 600 px, center-crop to 600x600, normalize with
+     # ImageNet statistics, and return a CHW float32 array.
+     img = img.convert("RGB")
+     w, h = img.size
+     s = 600
+     if w < h:
+         img = img.resize((s, int(round(h * s / w))), Image.BICUBIC)
+     else:
+         img = img.resize((int(round(w * s / h)), s), Image.BICUBIC)
+     left = (img.size[0] - 600) // 2
+     top = (img.size[1] - 600) // 2
+     img = img.crop((left, top, left + 600, top + 600))
+
+     x = np.asarray(img, dtype=np.float32) / 255.0
+     x = (x - np.array([0.485, 0.456, 0.406], dtype=np.float32)) / np.array(
+         [0.229, 0.224, 0.225], dtype=np.float32
+     )
+     return np.transpose(x, (2, 0, 1))
+
+ def main():
+     ap = argparse.ArgumentParser()
+     ap.add_argument("--image", required=True)
+     args = ap.parse_args()
+
+     # Download the LiteRT model and the ImageNet-1k id->label mapping from the Hub.
+     model_path = hf_hub_download("litert-community/efficientnet_b7", "efficientnet_b7.tflite")
+     labels_path = hf_hub_download(
+         "huggingface/label-files", "imagenet-1k-id2label.json", repo_type="dataset"
+     )
+     with open(labels_path, "r", encoding="utf-8") as f:
+         id2label = {int(k): v for k, v in json.load(f).items()}
+
+     img = Image.open(args.image)
+     x = preprocess(img)
+
+     # Compile the model and allocate input/output buffers for the first signature.
+     model = CompiledModel.from_file(model_path)
+     inp = model.create_input_buffers(0)
+     out = model.create_output_buffers(0)
+
+     # Write the preprocessed image into the input buffer and run inference.
+     inp[0].write(x)
+     model.run_by_index(0, inp, out)
+
+     # Read the raw output scores back as float32.
+     req = model.get_output_buffer_requirements(0, 0)
+     y = out[0].read(req["buffer_size"] // np.dtype(np.float32).itemsize, np.float32)
+
+     # Report the highest-scoring class.
+     pred = int(np.argmax(y))
+     label = id2label.get(pred, f"class_{pred}")
+
+     print(f"Top-1 class index: {pred}")
+     print(f"Top-1 label: {label}")
+
+ if __name__ == "__main__":
+     main()
+ ```
+
+ **4. Execute the Python Script** Run the command below:
+
+ ```bash
+ python classify.py --image cat.jpg
+ ```
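+
+ The script reports only the single best class. If you also want the top-5 predictions, a small extension at the end of `main()` in `classify.py` does it (a sketch; it assumes, as above, that the output vector `y` holds raw scores over the 1,000 ImageNet classes):
+
+ ```python
+ # Softmax over the raw scores, then report the five most likely classes.
+ probs = np.exp(y - np.max(y))
+ probs /= probs.sum()
+ for i in np.argsort(probs)[::-1][:5]:
+     name = id2label.get(int(i), f"class_{int(i)}")
+     print(f"{name}: {probs[i]:.3f}")
+ ```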
+
### BibTeX entry and citation info

```bibtex