shreyajn committed
Commit 8285fbc · verified · 1 Parent(s): 82b1eca

Upload README.md with huggingface_hub

Files changed (1): README.md (+74 −23)
README.md CHANGED
@@ -36,8 +36,10 @@ More details on model performance across various devices can be found

| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
| ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 148.628 ms | 7 - 10 MB | FP16 | NPU | [TrOCREncoder.tflite](https://huggingface.co/qualcomm/TrOCR/blob/main/TrOCREncoder.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 2.72 ms | 0 - 2 MB | FP16 | NPU | [TrOCRDecoder.tflite](https://huggingface.co/qualcomm/TrOCR/blob/main/TrOCRDecoder.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 181.04 ms | 8 - 16 MB | FP16 | NPU | [TrOCREncoder.tflite](https://huggingface.co/qualcomm/TrOCR/blob/main/TrOCREncoder.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 6.49 ms | 7 - 13 MB | FP16 | NPU | [TrOCRDecoder.tflite](https://huggingface.co/qualcomm/TrOCR/blob/main/TrOCRDecoder.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 120.369 ms | 2 - 22 MB | FP16 | NPU | [TrOCREncoder.so](https://huggingface.co/qualcomm/TrOCR/blob/main/TrOCREncoder.so)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 2.958 ms | 0 - 120 MB | FP16 | NPU | [TrOCRDecoder.so](https://huggingface.co/qualcomm/TrOCR/blob/main/TrOCRDecoder.so)

@@ -96,6 +98,23 @@ device. This script does the following:
python -m qai_hub_models.models.trocr.export
```

+ ```
+ Profile Job summary of TrOCREncoder
+ --------------------------------------------------
+ Device: Snapdragon X Elite CRD (11)
+ Estimated Inference Time: 101.68 ms
+ Estimated Peak Memory Range: 1.69-1.69 MB
+ Compute Units: NPU (443) | Total (443)
+
+ Profile Job summary of TrOCRDecoder
+ --------------------------------------------------
+ Device: Snapdragon X Elite CRD (11)
+ Estimated Inference Time: 2.79 ms
+ Estimated Peak Memory Range: 6.84-6.84 MB
+ Compute Units: NPU (334) | Total (334)
+ ```


## How does this work?
@@ -113,29 +132,49 @@ in memory using the `jit.trace` and then call the `submit_compile_job` API.
import torch

import qai_hub as hub
- from qai_hub_models.models.trocr import Model
+ from qai_hub_models.models.trocr import TrOCREncoder, TrOCRDecoder

# Load the model
- torch_model = Model.from_pretrained()
+ encoder_model = TrOCREncoder.from_pretrained()
+
+ decoder_model = TrOCRDecoder.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S23")

+ # Trace model
+ encoder_input_shape = encoder_model.get_input_spec()
+ encoder_sample_inputs = encoder_model.sample_inputs()
+
+ traced_encoder_model = torch.jit.trace(encoder_model, [torch.tensor(data[0]) for _, data in encoder_sample_inputs.items()])
+
+ # Compile model on a specific device
+ encoder_compile_job = hub.submit_compile_job(
+     model=traced_encoder_model,
+     device=device,
+     input_specs=encoder_model.get_input_spec(),
+ )
+
+ # Get target model to run on-device
+ encoder_target_model = encoder_compile_job.get_target_model()
+
# Trace model
- input_shape = torch_model.get_input_spec()
- sample_inputs = torch_model.sample_inputs()
+ decoder_input_shape = decoder_model.get_input_spec()
+ decoder_sample_inputs = decoder_model.sample_inputs()

- pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
+ traced_decoder_model = torch.jit.trace(decoder_model, [torch.tensor(data[0]) for _, data in decoder_sample_inputs.items()])

# Compile model on a specific device
- compile_job = hub.submit_compile_job(
-     model=pt_model,
+ decoder_compile_job = hub.submit_compile_job(
+     model=traced_decoder_model,
    device=device,
-     input_specs=torch_model.get_input_spec(),
+     input_specs=decoder_model.get_input_spec(),
)

# Get target model to run on-device
- target_model = compile_job.get_target_model()
+ decoder_target_model = decoder_compile_job.get_target_model()

```
 
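Once both compile jobs finish, the compiled artifacts can also be saved locally for packaging into an application. A minimal sketch, assuming the `qai_hub` `Model.download` method and the default TFLite target; the handles come from the snippet above, but verify the method against your installed `qai_hub` release:

```python
# Save the compiled on-device models to disk.
# Assumes encoder_target_model / decoder_target_model from the compile
# step above, and that Model.download writes the artifact to the given path.
encoder_target_model.download("TrOCREncoder.tflite")
decoder_target_model.download("TrOCRDecoder.tflite")
```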
@@ -147,10 +186,16 @@ After compiling models from step 1, models can be profiled on-device using
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
- profile_job = hub.submit_profile_job(
-     model=target_model,
-     device=device,
- )
+ encoder_profile_job = hub.submit_profile_job(
+     model=encoder_target_model,
+     device=device,
+ )
+
+ decoder_profile_job = hub.submit_profile_job(
+     model=decoder_target_model,
+     device=device,
+ )

```
 
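Beyond the job URL, the profiling numbers can be retrieved programmatically. A minimal sketch, assuming `ProfileJob.download_profile()` returns the profile as a Python dict; both the method and the result schema should be checked against your installed `qai_hub` version:

```python
# Fetch the raw profiling results for each job once it completes.
# The returned structure is assumed to be a dict of metrics
# (latency, memory, compute-unit placement); inspect its keys.
encoder_profile = encoder_profile_job.download_profile()
decoder_profile = decoder_profile_job.download_profile()
print(sorted(encoder_profile.keys()))
```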
@@ -159,14 +204,20 @@ Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
- input_data = torch_model.sample_inputs()
- inference_job = hub.submit_inference_job(
-     model=target_model,
-     device=device,
-     inputs=input_data,
- )
-
- on_device_output = inference_job.download_output_data()
+ encoder_input_data = encoder_model.sample_inputs()
+ encoder_inference_job = hub.submit_inference_job(
+     model=encoder_target_model,
+     device=device,
+     inputs=encoder_input_data,
+ )
+ encoder_on_device_output = encoder_inference_job.download_output_data()
+
+ decoder_input_data = decoder_model.sample_inputs()
+ decoder_inference_job = hub.submit_inference_job(
+     model=decoder_target_model,
+     device=device,
+     inputs=decoder_input_data,
+ )
+ decoder_on_device_output = decoder_inference_job.download_output_data()

```
With the output of the model, you can compute metrics like PSNR or relative error, or spot-check the output against the expected output.
 
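For that spot-check, peak signal-to-noise ratio (PSNR) between the on-device output and the local PyTorch output is a quick sanity metric. A minimal sketch, reusing `encoder_model`, `encoder_input_data`, and `encoder_on_device_output` from the snippets above, and assuming `download_output_data()` maps output names to lists of numpy arrays, mirroring the sample-input dicts; the `psnr` helper is illustrative, not part of `qai_hub_models`:

```python
import numpy as np
import torch

def psnr(expected: np.ndarray, actual: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two arrays, in dB."""
    expected = expected.astype(np.float64)
    actual = actual.astype(np.float64)
    mse = np.mean((expected - actual) ** 2)
    if mse == 0.0:
        return float("inf")
    peak = np.abs(expected).max()
    return float(10.0 * np.log10(peak * peak / mse))

# Run the encoder locally on the same sample inputs that went on-device.
torch_inputs = [torch.tensor(data[0]) for _, data in encoder_input_data.items()]
local_outputs = encoder_model(*torch_inputs)
# The encoder may return several tensors; compare the first one here.
local_first = local_outputs[0] if isinstance(local_outputs, (tuple, list)) else local_outputs

# First on-device output, assuming the dict-of-lists layout described above.
device_first = next(iter(encoder_on_device_output.values()))[0]

print(f"Encoder output PSNR: {psnr(local_first.detach().numpy(), device_first):.1f} dB")
```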