shirakiin committed
Commit 6f42afc · verified · 1 Parent(s): d468dd9

Update README.md

Files changed (1)
  1. README.md +148 -3
README.md CHANGED
---
license: apple-amlr
pipeline_tag: text-generation
library_name: litert-lm
tags:
- ml-fastvlm
- litert
- litertlm
base_model:
- apple/FastVLM-0.5B
---

# litert-community/FastVLM-0.5B

*Main Model Card*: [apple/FastVLM-0.5B](https://huggingface.co/apple/FastVLM-0.5B)

This model card provides *FastVLM-0.5B converted for LiteRT*, ready for deployment on Android and desktop.

FastVLM was introduced in [FastVLM: Efficient Vision Encoding for Vision Language Models](https://www.arxiv.org/abs/2412.13303) *(CVPR 2025)*. The model demonstrates a marked improvement in time-to-first-token (TTFT) while maintaining accuracy, making it well suited for edge-device deployment.

*Disclaimer*: This model converted for LiteRT is licensed under the [Apple Machine Learning Research Model License Agreement](https://huggingface.co/apple/deeplabv3-mobilevit-small/blob/main/LICENSE). The model is converted and quantized from the PyTorch model weights into the LiteRT/TensorFlow Lite format (no retraining or further customization).

# How to Use

## Android

### 1. Add the dependency

Make sure you have the necessary dependency in your Gradle file:

```
dependencies {
    implementation("com.google.ai.edge.litertlm:litertlm:<LATEST_VERSION>")
}
```

### 2. Inference with the LiteRT-LM API

```kotlin
import com.google.ai.edge.litertlm.*

suspend fun main() {
  Engine.setNativeMinLogSeverity(LogSeverity.ERROR) // Hide logs for a TUI app.
  val engineConfig = EngineConfig(
      modelPath = "/path/to/your/model.litertlm", // Replace with your model path.
      backend = Backend.CPU, // Or Backend.GPU
      visionBackend = Backend.GPU,
  )

  // See the Content class for other variants.
  val multiModalMessage = Message.of(
      Content.ImageFile("/path/to/image"),
      Content.Text("Describe this image."),
  )
  Engine(engineConfig).use { engine ->
    engine.initialize()

    engine.createConversation().use { conversation ->
      // Send the multimodal message first, then chat interactively.
      conversation.sendMessageAsync(multiModalMessage).collect { print(it) }
      while (true) {
        print("\n>>> ")
        conversation.sendMessageAsync(Message.of(readln())).collect { print(it) }
      }
    }
  }
}
```
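
Because `sendMessageAsync` returns a stream that is consumed with `collect`, you can also accumulate the full reply instead of printing chunks as they arrive. A minimal sketch, assuming each collected chunk stringifies to response text; it belongs inside the `createConversation().use { conversation -> ... }` block above:

```kotlin
// Sketch: accumulate the streamed reply into a single string.
// Assumes each collected chunk appends as response text.
val sb = StringBuilder()
conversation.sendMessageAsync(multiModalMessage).collect { sb.append(it) }
val reply = sb.toString()
println(reply)
```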

Try running this model on NPU by using the NPU `.litertlm` file (linked in the Performance table below) and setting your `EngineConfig` backend to NPU. To check whether your phone's NPU is supported, see this [guide](https://ai.google.dev/edge/litert/next/npu).
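For example, a minimal sketch of that configuration change, assuming `Backend.NPU` is exposed by your LiteRT-LM version (the model file name matches the NPU entry in the Performance table below):

```kotlin
// Sketch: select the NPU model file and backend.
// Assumes Backend.NPU is available on your device and LiteRT-LM version.
val npuEngineConfig = EngineConfig(
    modelPath = "/path/to/FastVLM-0.5B.sm8850.litertlm",
    backend = Backend.NPU,
)
```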

## Desktop

For desktop applications, C++ is currently the recommended path; see the following code sample.

```cpp
// Fragment: assumes `model_assets` and `conversation` were set up earlier
// (see the LiteRT-LM documentation for the full engine and session setup).

// Create the engine with the appropriate multimodality backend.
auto engine_settings = EngineSettings::CreateDefault(
    model_assets,
    /*backend=*/litert::lm::Backend::CPU,
    /*vision_backend=*/litert::lm::Backend::GPU);

// Send a message with image data to the model.
absl::StatusOr<Message> model_message = (*conversation)->SendMessage(
    JsonMessage{
        {"role", "user"},
        {"content", {  // For multimodal input, content must be an array.
            {{"type", "text"}, {"text", "Describe the following image: "}},
            {{"type", "image"}, {"path", "/file/path/to/image.jpg"}}
        }},
    });
CHECK_OK(model_message);

// Print the model's response.
std::cout << *model_message << std::endl;
```

# Performance

## Android

Benchmarked on Xiaomi 17 Pro Max.

<table border="1">
  <tr>
    <th style="text-align: left">Backend</th>
    <th style="text-align: left">Quantization scheme</th>
    <th style="text-align: left">Context length</th>
    <th style="text-align: left">Prefill (tokens/sec)</th>
    <th style="text-align: left">Decode (tokens/sec)</th>
    <th style="text-align: left">Time-to-first-token (sec)</th>
    <th style="text-align: left">Memory (RSS in MB)</th>
    <th style="text-align: left">Model size (MB)</th>
    <th style="text-align: left">Model File</th>
  </tr>
  <tr>
    <td><p style="text-align: left">GPU</p></td>
    <td><p style="text-align: left">dynamic_int8</p></td>
    <td><p style="text-align: right">1280</p></td>
    <td><p style="text-align: right">-</p></td>
    <td><p style="text-align: right">-</p></td>
    <td><p style="text-align: right">-</p></td>
    <td><p style="text-align: right">-</p></td>
    <td><p style="text-align: right">-</p></td>
    <td><p style="text-align: left"><a style="text-decoration: none" href="https://huggingface.co/litert-community/FastVLM-0.5B/resolve/main/FastVLM-0.5B.litertlm">&#128279;</a></p></td>
  </tr>
  <tr>
    <td><p style="text-align: left">NPU</p></td>
    <td><p style="text-align: left">dynamic_int8</p></td>
    <td><p style="text-align: right">1280</p></td>
    <td><p style="text-align: right">-</p></td>
    <td><p style="text-align: right">-</p></td>
    <td><p style="text-align: right">-</p></td>
    <td><p style="text-align: right">-</p></td>
    <td><p style="text-align: right">-</p></td>
    <td><p style="text-align: left"><a style="text-decoration: none" href="https://huggingface.co/litert-community/FastVLM-0.5B/resolve/main/FastVLM-0.5B.sm8850.litertlm">&#128279;</a></p></td>
  </tr>
</table>

Notes:
* Model size: measured by the size of the file on disk.
* Benchmarks are run with the cache enabled and initialized; latency and memory usage may differ during the first run.
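
For reading the table, prefill throughput and time-to-first-token are roughly related: TTFT is approximately the prompt length divided by prefill speed. A back-of-the-envelope sketch (an approximation for relating the columns, not how the benchmark measures TTFT):

```kotlin
// Rough estimate: TTFT ≈ prompt tokens / prefill throughput.
// An approximation for relating table columns, not the benchmark's method.
fun approxTtftSeconds(promptTokens: Int, prefillTokensPerSec: Double): Double =
    promptTokens / prefillTokensPerSec
```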