Files changed (1)
  1. README.md +104 -1
README.md CHANGED
@@ -1,21 +1,124 @@
  ---
  library_name: litert
+ pipeline_tag: image-classification
  tags:
  - vision
  - image-classification
+ - google
+ - computer-vision
  datasets:
  - imagenet-1k
  base_model:
  - google/efficientnet-b1
+ model-index:
+ - name: EfficientNet_B1
+   results:
+   - task:
+       type: image-classification
+       name: Image Classification
+     dataset:
+       name: ImageNet-1k
+       type: imagenet-1k
+       config: default
+       split: validation
+     metrics:
+     - name: Top 1 Accuracy (Full Precision)
+       type: accuracy
+       value: 0.7855
+     - name: Top 5 Accuracy (Full Precision)
+       type: accuracy
+       value: 0.9419
+     - name: Top 1 Accuracy (Dynamic Quantized wi8 afp32)
+       type: accuracy
+       value: 0.7805
+     - name: Top 5 Accuracy (Dynamic Quantized wi8 afp32)
+       type: accuracy
+       value: 0.9392
  ---
  # EfficientNet B1
 
- EfficientNet B1 model pre-trained on ImageNet-1k.
+ EfficientNet B1 model pre-trained on ImageNet-1k. Originally introduced by Tan and Le in [**EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks**](https://arxiv.org/abs/1905.11946), the architecture uses compound scaling to jointly balance network depth, width, and resolution, delivering higher accuracy with markedly better efficiency than conventional convolutional networks.
+
+ ## Model description
+
+ The original model has:
+ - acc@1 (on ImageNet-1K): 79.838%
+ - acc@5 (on ImageNet-1K): 94.934%
+ - num_params: 7,794,184
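+
+ For context, EfficientNet variants such as B1 are obtained from the B0 baseline via the paper's compound scaling rule: a single coefficient phi scales depth by alpha^phi, width by beta^phi, and input resolution by gamma^phi, with alpha * beta^2 * gamma^2 ≈ 2 so that FLOPs roughly double per unit of phi. The sketch below is illustrative only; alpha=1.2, beta=1.1, gamma=1.15 are the grid-searched values reported in the paper, and the per-variant settings in released implementations are specified directly rather than computed this way.
+
+ ```python
+ # Illustrative sketch of the compound scaling rule from the EfficientNet paper
+ # (not the exact configuration generator behind the released B1 checkpoint).
+ ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # grid-searched on the B0 baseline (paper values)
+
+ def compound_scale(phi: float, base_resolution: int = 224):
+     depth_mult = ALPHA ** phi                           # more layers per stage
+     width_mult = BETA ** phi                            # more channels per layer
+     resolution = round(base_resolution * GAMMA ** phi)  # larger input images
+     return depth_mult, width_mult, resolution
+
+ for phi in (0.0, 0.5, 1.0, 2.0):
+     d, w, r = compound_scale(phi)
+     print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, input {r}px")
+ ```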
 
  ## Intended uses & limitations
 
  The model files were converted from pretrained PyTorch Vision weights. The models may have their own licenses or terms and conditions derived from PyTorch Vision and the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.
 
+
+ ## Use
+
+ ```python
+ #!/usr/bin/env python3
+ import argparse
+ import json
+
+ import numpy as np
+ from PIL import Image
+ from huggingface_hub import hf_hub_download
+ from ai_edge_litert.compiled_model import CompiledModel
+
+
+ def preprocess(img: Image.Image) -> np.ndarray:
+     """Resize the shorter side to 255 px, center-crop to 240x240, normalize, return CHW float32."""
+     img = img.convert("RGB")
+     w, h = img.size
+     s = 255
+     if w < h:
+         img = img.resize((s, int(round(h * s / w))), Image.BILINEAR)
+     else:
+         img = img.resize((int(round(w * s / h)), s), Image.BILINEAR)
+     left = (img.size[0] - 240) // 2
+     top = (img.size[1] - 240) // 2
+     img = img.crop((left, top, left + 240, top + 240))
+
+     # Standard ImageNet mean/std normalization, then HWC -> CHW.
+     x = np.asarray(img, dtype=np.float32) / 255.0
+     x = (x - np.array([0.485, 0.456, 0.406], dtype=np.float32)) / np.array(
+         [0.229, 0.224, 0.225], dtype=np.float32
+     )
+     return np.transpose(x, (2, 0, 1))
+
+
+ def main():
+     ap = argparse.ArgumentParser()
+     ap.add_argument("--image", required=True)
+     args = ap.parse_args()
+
+     # Download the LiteRT model file and the ImageNet-1k id-to-label mapping.
+     model_path = hf_hub_download("litert-community/efficientnet_b1", "efficientnet_b1.tflite")
+     labels_path = hf_hub_download(
+         "huggingface/label-files", "imagenet-1k-id2label.json", repo_type="dataset"
+     )
+     with open(labels_path, "r", encoding="utf-8") as f:
+         id2label = {int(k): v for k, v in json.load(f).items()}
+
+     img = Image.open(args.image)
+     x = preprocess(img)
+
+     # Compile the model and create input/output buffers (index 0).
+     model = CompiledModel.from_file(model_path)
+     inp = model.create_input_buffers(0)
+     out = model.create_output_buffers(0)
+
+     # Run inference.
+     inp[0].write(x)
+     model.run_by_index(0, inp, out)
+
+     # Read the output logits back as float32.
+     req = model.get_output_buffer_requirements(0, 0)
+     y = out[0].read(req["buffer_size"] // np.dtype(np.float32).itemsize, np.float32)
+
+     pred = int(np.argmax(y))
+     label = id2label.get(pred, f"class_{pred}")
+
+     print(f"Top-1 class index: {pred}")
+     print(f"Top-1 label: {label}")
+
+
+ if __name__ == "__main__":
+     main()
+ ```
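+
+ The example assumes the `ai-edge-litert`, `huggingface_hub`, `numpy`, and `pillow` packages are installed. Saved as, for example, `classify.py` (an illustrative name), it can be run with `python classify.py --image path/to/image.jpg` and prints the predicted ImageNet-1k class index and label.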
+
  ### BibTeX entry and citation info
 
  ```bibtex