---

This repository provides the [HuggingFaceTB/SmolVLM-256M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct) model in TFLite format.

You can use this model with the [AI Edge Cpp Example](https://github.com/google-ai-edge/ai-edge-torch/tree/main/ai_edge_torch/generative/examples/cpp). You will need to slightly modify that pipeline to support images as input (see the Colab example below). Currently, [AI Edge Torch](https://github.com/google-ai-edge/ai-edge-torch/tree/main/ai_edge_torch/generative/examples) models are not supported by the [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference). The [llava model](https://github.com/google-ai-edge/ai-edge-torch/tree/main/ai_edge_torch/generative/examples/qwen_vl) example was used as a reference for the SmolVLM-256M-Instruct conversion scripts (coming soon).
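The C++ example above drives a TFLite interpreter under the hood. As a rough illustration of that load/invoke pattern, the sketch below runs a tiny stand-in model converted in-script (the model, shapes, and values here are placeholders for illustration only, not SmolVLM itself, which requires the full multi-file C++ pipeline):

```python
# Minimal sketch of the TFLite inference loop: load -> allocate tensors ->
# set input -> invoke -> read output. Requires `pip install tensorflow`.
import numpy as np
import tensorflow as tf

# Convert a trivial stand-in Keras model to TFLite so the example is
# self-contained (placeholder for the real .tflite files in this repo).
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the flatbuffer into an interpreter (pass model_path=... for a file).
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed an input matching the model's expected shape and dtype, then run.
x = np.random.rand(1, 8).astype(np.float32)
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
y = interpreter.get_tensor(out["index"])
print(y.shape)  # (1, 4)
```

The image-input modification mentioned above amounts to feeding the vision encoder's preprocessed image tensor through this same set-tensor/invoke cycle before the text decoder runs.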