Update README.md
README.md
CHANGED
@@ -8,19 +8,20 @@ You need to slightly modify this pipeline to support image as input (see COLAB e
 Currently, [AI Edge Torch](https://github.com/google-ai-edge/ai-edge-torch/tree/main/ai_edge_torch/generative/examples) vlms not supported
 on [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference),
 for example [qwen_vl model](https://github.com/google-ai-edge/ai-edge-torch/tree/main/ai_edge_torch/generative/examples/qwen_vl),
-that was used as reference to write SmolVLM-256M-Instruct convertation scripts
+that was used as a reference to write the SmolVLM-256M-Instruct conversion scripts.
 
 
 ## Use the models
 
 ### Colab
 
+[](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/SmolVLM-256M-Instruct/blob/main/smalvlm_notebook.ipynb)
 
 
 
 ## Details
 
-The model was converted with:
+The model was converted with a custom script (coming soon):
 
 ```shell
 python convert_to_tflite.py --quantize="dynamic_int8"\
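For readers unfamiliar with the flag: `--quantize="dynamic_int8"` requests dynamic-range quantization, where weights are stored as int8 values plus a floating-point scale and dequantized on the fly at inference time, while activations stay in float. A pure-Python sketch of the underlying idea (illustrative only — the function names here are made up, and this is not the converter's actual code):

```python
def quantize_dynamic_int8(weights):
    """Quantize a list of float weights to int8 with one per-tensor scale.

    Dynamic-range quantization keeps a single float scale alongside the
    int8 weights; the original values are approximated as q * scale.
    """
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0  # map [-max_abs, max_abs] onto [-127, 127]
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale


def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]


weights = [0.5, -1.27, 0.031, 0.0]
q, scale = quantize_dynamic_int8(weights)
approx = dequantize(q, scale)  # each entry is within scale/2 of the original
```

The real converter applies this per tensor (often per channel) across the model's weight matrices, which is why the flag shrinks the `.tflite` file roughly 4x relative to float32 weights.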