Update the README.md

#1
by sourcelite - opened
Files changed (1)
  1. README.md +117 -0
README.md CHANGED
---
library_name: litert
pipeline_tag: image-classification
tags:
- vision
- image-classification
- computer-vision
datasets:
- imagenet-1k
model-index:
- name: inception_v3
  results:
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: ImageNet-1k
      type: imagenet-1k
      config: default
      split: validation
    metrics:
    - name: Top 1 Accuracy (Full Precision)
      type: accuracy
      value: 0.7727
    - name: Top 5 Accuracy (Full Precision)
      type: accuracy
      value: 0.9343
---

# Inception_v3

Inception v3 model pre-trained on ImageNet-1k. It was introduced in [**Rethinking the Inception Architecture for Computer Vision**](https://arxiv.org/abs/1512.00567) by Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna.

## Intended uses & limitations

The model files were converted from pretrained weights published by PyTorch Vision. The models may carry licenses or terms and conditions derived from PyTorch Vision and from the dataset used for training; it is your responsibility to determine whether you have permission to use the models for your use case.

## Model description

The model was converted from the PyTorch Vision checkpoint [`Inception_V3_Weights.IMAGENET1K_V1`](https://docs.pytorch.org/vision/main/models/generated/torchvision.models.inception_v3.html#torchvision.models.Inception_V3_Weights).

The original model has:
- acc@1 (on ImageNet-1K): 77.294%
- acc@5 (on ImageNet-1K): 93.450%
- num_params: 27,161,264

## Use

```python
#!/usr/bin/env python3
import argparse
import json

import numpy as np
from PIL import Image
from huggingface_hub import hf_hub_download
from ai_edge_litert.compiled_model import CompiledModel


def preprocess(img: Image.Image) -> np.ndarray:
    img = img.convert("RGB")
    w, h = img.size

    # Inception_v3 expects a resize of the shorter side to 342 prior to the 299 central crop
    s = 342
    if w < h:
        img = img.resize((s, int(round(h * s / w))), Image.BILINEAR)
    else:
        img = img.resize((int(round(w * s / h)), s), Image.BILINEAR)

    # Central crop to 299x299
    left = (img.size[0] - 299) // 2
    top = (img.size[1] - 299) // 2
    img = img.crop((left, top, left + 299, top + 299))

    # Rescale to [0.0, 1.0] and normalize with the ImageNet mean/std
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = (x - np.array([0.485, 0.456, 0.406], dtype=np.float32)) / np.array(
        [0.229, 0.224, 0.225], dtype=np.float32
    )
    return np.transpose(x, (2, 0, 1))


def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--image", required=True)
    args = ap.parse_args()

    # Download the TFLite model and the ImageNet label map
    model_path = hf_hub_download("litert-community/inception_v3", "inception_v3.tflite")
    labels_path = hf_hub_download(
        "huggingface/label-files", "imagenet-1k-id2label.json", repo_type="dataset"
    )

    with open(labels_path, "r", encoding="utf-8") as f:
        id2label = {int(k): v for k, v in json.load(f).items()}

    img = Image.open(args.image)
    x = preprocess(img)

    model = CompiledModel.from_file(model_path)
    inp = model.create_input_buffers(0)
    out = model.create_output_buffers(0)

    inp[0].write(x)
    model.run_by_index(0, inp, out)

    req = model.get_output_buffer_requirements(0, 0)
    y = out[0].read(req["buffer_size"] // np.dtype(np.float32).itemsize, np.float32)

    pred = int(np.argmax(y))
    label = id2label.get(pred, f"class_{pred}")

    print(f"Top-1 class index: {pred}")
    print(f"Top-1 label: {label}")


if __name__ == "__main__":
    main()
```
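The script above reports only the top-1 prediction. To get the top-5 classes (mirroring the acc@5 metric quoted earlier), the output logits can be post-processed with a few lines of NumPy. A minimal, self-contained sketch — the logits vector here is made up for illustration; in the script above you would pass the `y` array instead:

```python
import numpy as np


def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D logits vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()


def top_k(logits: np.ndarray, k: int = 5) -> list[int]:
    """Indices of the k largest logits, best first."""
    return np.argsort(logits)[::-1][:k].tolist()


# Made-up logits for illustration only
logits = np.array([0.1, 2.0, -1.0, 3.5, 0.7, 1.2], dtype=np.float32)
probs = softmax(logits)
for i in top_k(logits, k=3):
    print(f"class {i}: p={probs[i]:.3f}")
```

The same `id2label` mapping from the script can then translate each index to a human-readable label.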

### BibTeX entry and citation info

```bibtex
@inproceedings{szegedy2016rethinking,
  title={Rethinking the inception architecture for computer vision},
  author={Szegedy, Christian and Vanhoucke, Vincent and Ioffe, Sergey and Shlens, Jonathon and Wojna, Zbigniew},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={2818--2826},
  year={2016}
}
```