Upload README.md with huggingface_hub
README.md
CHANGED

@@ -38,54 +38,48 @@ More details on model performance across various devices, can be found

| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
- | CLIPImageEncoder | SA8650 (Proxy) | SA8650P Proxy | QNN | 20.427 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
- | CLIPImageEncoder | SA8775P ADP | SA8775P | QNN | 29.742 ms | 0 - 5 MB | FP16 | NPU | Use Export Script |
- | CLIPImageEncoder | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 34.821 ms | 0 - 203 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
- | CLIPImageEncoder | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 29.464 ms | 0 - 169 MB | FP16 | NPU | Use Export Script |
- | CLIPImageEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 22.2 ms | 1 - 1 MB | FP16 | NPU | Use Export Script |
- | CLIPImageEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 160.456 ms | 188 - 188 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.onnx) |

@@ -146,23 +140,23 @@ python -m qai_hub_models.models.openai_clip.export

```
```
Profiling Results
- ------------------------------------------------------------
- CLIPTextEncoder
- Device                          : Samsung Galaxy S23 (13)
- Runtime                         : TFLITE
- Estimated inference time (ms)   : 5.7
- Estimated peak memory usage (MB): [0, 17]
- Total # Ops                     : 660
- Compute Unit(s)                 : NPU (658 ops) CPU (2 ops)
-
------------------------------------------------------------
CLIPImageEncoder
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE
- Estimated inference time (ms)   : 34.
- Estimated peak memory usage (MB): [0,
Total # Ops                     : 659
Compute Unit(s)                 : NPU (659 ops)
```

@@ -185,42 +179,42 @@ from qai_hub_models.models.openai_clip import Model

# Load the model
model = Model.from_pretrained()
- text_encoder_model = model.text_encoder
image_encoder_model = model.image_encoder

@@ -232,14 +226,14 @@ After compiling models from step 1. Models can be profiled model on-device using

provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
- text_encoder_profile_job = hub.submit_profile_job(
-     model=text_encoder_target_model,
-     device=device,
- )
image_encoder_profile_job = hub.submit_profile_job(
    model=image_encoder_target_model,
    device=device,
)

```

@@ -248,13 +242,6 @@ Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
- text_encoder_input_data = text_encoder_model.sample_inputs()
- text_encoder_inference_job = hub.submit_inference_job(
-     model=text_encoder_target_model,
-     device=device,
-     inputs=text_encoder_input_data,
- )
- text_encoder_inference_job.download_output_data()
image_encoder_input_data = image_encoder_model.sample_inputs()
image_encoder_inference_job = hub.submit_inference_job(
    model=image_encoder_target_model,

@@ -262,6 +249,13 @@ image_encoder_inference_job = hub.submit_inference_job(

    inputs=image_encoder_input_data,
)
image_encoder_inference_job.download_output_data()

```
With the output of the model, you can compute metrics like PSNR and relative errors, or spot-check the output against the expected output.


| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| CLIPImageEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 34.591 ms | 0 - 57 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
| CLIPImageEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 26.472 ms | 0 - 55 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.so) |
| CLIPImageEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 27.035 ms | 0 - 264 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
| CLIPImageEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 20.808 ms | 1 - 170 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.so) |
| CLIPImageEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 24.249 ms | 0 - 266 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
| CLIPImageEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 18.669 ms | 0 - 171 MB | FP16 | NPU | Use Export Script |
| CLIPImageEncoder | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 33.984 ms | 0 - 55 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
| CLIPImageEncoder | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 19.984 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
| CLIPImageEncoder | SA7255P ADP | SA7255P | TFLITE | 327.04 ms | 0 - 264 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
| CLIPImageEncoder | SA7255P ADP | SA7255P | QNN | 265.55 ms | 1 - 11 MB | FP16 | NPU | Use Export Script |
| CLIPImageEncoder | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 34.335 ms | 0 - 54 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
| CLIPImageEncoder | SA8255 (Proxy) | SA8255P Proxy | QNN | 20.528 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
| CLIPImageEncoder | SA8295P ADP | SA8295P | TFLITE | 40.114 ms | 0 - 200 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
| CLIPImageEncoder | SA8295P ADP | SA8295P | QNN | 30.939 ms | 1 - 7 MB | FP16 | NPU | Use Export Script |
| CLIPImageEncoder | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 34.062 ms | 0 - 58 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
| CLIPImageEncoder | SA8650 (Proxy) | SA8650P Proxy | QNN | 20.836 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
| CLIPImageEncoder | SA8775P ADP | SA8775P | TFLITE | 42.508 ms | 0 - 264 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
| CLIPImageEncoder | SA8775P ADP | SA8775P | QNN | 29.748 ms | 1 - 11 MB | FP16 | NPU | Use Export Script |
| CLIPImageEncoder | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 34.902 ms | 0 - 201 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
| CLIPImageEncoder | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 28.971 ms | 0 - 169 MB | FP16 | NPU | Use Export Script |
| CLIPImageEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 22.167 ms | 1 - 1 MB | FP16 | NPU | Use Export Script |
| CLIPTextEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 5.809 ms | 0 - 24 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
| CLIPTextEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 4.636 ms | 0 - 18 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.so) |
| CLIPTextEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 3.991 ms | 0 - 83 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
| CLIPTextEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 3.281 ms | 0 - 68 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.so) |
| CLIPTextEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 3.351 ms | 0 - 83 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
| CLIPTextEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 3.197 ms | 0 - 68 MB | FP16 | NPU | Use Export Script |
| CLIPTextEncoder | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 5.613 ms | 0 - 23 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
| CLIPTextEncoder | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 4.743 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
| CLIPTextEncoder | SA7255P ADP | SA7255P | TFLITE | 61.341 ms | 0 - 82 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
| CLIPTextEncoder | SA7255P ADP | SA7255P | QNN | 51.576 ms | 0 - 11 MB | FP16 | NPU | Use Export Script |
| CLIPTextEncoder | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 5.729 ms | 0 - 23 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
| CLIPTextEncoder | SA8255 (Proxy) | SA8255P Proxy | QNN | 4.772 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
| CLIPTextEncoder | SA8295P ADP | SA8295P | TFLITE | 7.632 ms | 0 - 68 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
| CLIPTextEncoder | SA8295P ADP | SA8295P | QNN | 6.53 ms | 0 - 6 MB | FP16 | NPU | Use Export Script |
| CLIPTextEncoder | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 5.678 ms | 0 - 19 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
| CLIPTextEncoder | SA8650 (Proxy) | SA8650P Proxy | QNN | 4.872 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
| CLIPTextEncoder | SA8775P ADP | SA8775P | TFLITE | 8.137 ms | 0 - 81 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
| CLIPTextEncoder | SA8775P ADP | SA8775P | QNN | 6.947 ms | 0 - 6 MB | FP16 | NPU | Use Export Script |
| CLIPTextEncoder | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 6.349 ms | 0 - 74 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
| CLIPTextEncoder | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 5.399 ms | 0 - 71 MB | FP16 | NPU | Use Export Script |
| CLIPTextEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 5.08 ms | 0 - 0 MB | FP16 | NPU | Use Export Script |

```
```
Profiling Results
------------------------------------------------------------
CLIPImageEncoder
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE
Estimated inference time (ms)   : 34.6
Estimated peak memory usage (MB): [0, 57]
Total # Ops                     : 659
Compute Unit(s)                 : NPU (659 ops)

------------------------------------------------------------
CLIPTextEncoder
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE
Estimated inference time (ms)   : 5.8
Estimated peak memory usage (MB): [0, 24]
Total # Ops                     : 660
Compute Unit(s)                 : NPU (658 ops) CPU (2 ops)
```


# Load the model
model = Model.from_pretrained()
image_encoder_model = model.image_encoder
text_encoder_model = model.text_encoder

# Device
device = hub.Device("Samsung Galaxy S23")

# Trace model
image_encoder_input_shape = image_encoder_model.get_input_spec()
image_encoder_sample_inputs = image_encoder_model.sample_inputs()

traced_image_encoder_model = torch.jit.trace(image_encoder_model, [torch.tensor(data[0]) for _, data in image_encoder_sample_inputs.items()])

# Compile model on a specific device
image_encoder_compile_job = hub.submit_compile_job(
    model=traced_image_encoder_model,
    device=device,
    input_specs=image_encoder_model.get_input_spec(),
)

# Get target model to run on-device
image_encoder_target_model = image_encoder_compile_job.get_target_model()

# Trace model
text_encoder_input_shape = text_encoder_model.get_input_spec()
text_encoder_sample_inputs = text_encoder_model.sample_inputs()

traced_text_encoder_model = torch.jit.trace(text_encoder_model, [torch.tensor(data[0]) for _, data in text_encoder_sample_inputs.items()])

# Compile model on a specific device
text_encoder_compile_job = hub.submit_compile_job(
    model=traced_text_encoder_model,
    device=device,
    input_specs=text_encoder_model.get_input_spec(),
)

# Get target model to run on-device
text_encoder_target_model = text_encoder_compile_job.get_target_model()

```

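Once the compile jobs finish, the compiled artifacts can also be saved locally for deployment. The snippet below is a minimal sketch and not part of the export flow above; it assumes qai_hub's `Model.download()` helper and a TFLite target, so adjust the filenames to whichever runtime you compiled for and check the AI Hub documentation for the exact signature.

```python
# Sketch (assumption: qai_hub's Model.download() writes the compiled asset to a
# local path). The target model objects can also be reused directly in the
# profiling and inference steps below without downloading them first.
image_encoder_target_model.download("CLIPImageEncoder.tflite")
text_encoder_target_model.download("CLIPTextEncoder.tflite")
```
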
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
image_encoder_profile_job = hub.submit_profile_job(
    model=image_encoder_target_model,
    device=device,
)
text_encoder_profile_job = hub.submit_profile_job(
    model=text_encoder_target_model,
    device=device,
)

```

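Besides browsing the job URL, the profiling results can be pulled back into Python. This is a minimal sketch, not part of the model card; it assumes `ProfileJob.download_profile()` returns a dict whose `execution_summary` section carries the estimated inference time in microseconds, and those key names may differ between qai-hub versions.

```python
# Sketch: read profiling metrics programmatically instead of from the web UI.
# Assumption: download_profile() returns a dict with an "execution_summary"
# section; verify the exact schema against the qai-hub documentation.
for name, job in [
    ("CLIPImageEncoder", image_encoder_profile_job),
    ("CLIPTextEncoder", text_encoder_profile_job),
]:
    profile = job.download_profile()
    summary = profile.get("execution_summary", {})
    est_us = summary.get("estimated_inference_time")  # microseconds, if present
    if est_us is not None:
        print(f"{name}: ~{est_us / 1000:.1f} ms estimated on-device inference time")
```
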
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
image_encoder_input_data = image_encoder_model.sample_inputs()
image_encoder_inference_job = hub.submit_inference_job(
    model=image_encoder_target_model,
    device=device,
    inputs=image_encoder_input_data,
)
image_encoder_inference_job.download_output_data()
text_encoder_input_data = text_encoder_model.sample_inputs()
text_encoder_inference_job = hub.submit_inference_job(
    model=text_encoder_target_model,
    device=device,
    inputs=text_encoder_input_data,
)
text_encoder_inference_job.download_output_data()

```
With the output of the model, you can compute metrics like PSNR and relative errors, or spot-check the output against the expected output.
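As one concrete example of that comparison, the sketch below (not part of the model card) runs the image encoder locally in PyTorch on the same sample inputs and compares the result with the on-device output. It assumes `download_output_data()` returns a dict mapping output names to lists of numpy arrays and that the encoder returns a single tensor; adjust if your version of qai-hub differs.

```python
import numpy as np

# Sketch: compare the on-device output with a local PyTorch run of the encoder.
# Assumption: download_output_data() returns {output_name: [np.ndarray, ...]}.
on_device_outputs = image_encoder_inference_job.download_output_data()
device_out = list(on_device_outputs.values())[0][0]

# Local reference run on the same sample inputs used for the inference job
# (assumes the encoder's forward() returns a single tensor).
torch_out = image_encoder_model(
    *[torch.tensor(data[0]) for _, data in image_encoder_input_data.items()]
).detach().numpy()

mse = np.mean((device_out - torch_out) ** 2)
psnr = 10 * np.log10(np.max(np.abs(torch_out)) ** 2 / mse)
rel_err = np.linalg.norm(device_out - torch_out) / np.linalg.norm(torch_out)
print(f"PSNR: {psnr:.1f} dB, relative error: {rel_err:.2%}")
```
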