shubeydoo committed on
Commit fbd0ab5 · verified · Parent: a3eb814

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,12 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ onnx/decoder_model_merged.onnx_data filter=lfs diff=lfs merge=lfs -text
+ onnx/decoder_model_merged_fp16.onnx_data filter=lfs diff=lfs merge=lfs -text
+ onnx/decoder_model_merged_q4.onnx_data filter=lfs diff=lfs merge=lfs -text
+ onnx/decoder_model_merged_q8.onnx_data filter=lfs diff=lfs merge=lfs -text
+ onnx/embed_tokens_fp16.onnx_data filter=lfs diff=lfs merge=lfs -text
+ onnx/vision_encoder.onnx_data filter=lfs diff=lfs merge=lfs -text
+ onnx/vision_encoder_fp16.onnx_data filter=lfs diff=lfs merge=lfs -text
+ onnx/vision_encoder_q4.onnx_data filter=lfs diff=lfs merge=lfs -text
+ onnx/vision_encoder_q8.onnx_data filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,380 @@
+ ---
+ license: other
+ license_name: lfm1.0
+ license_link: LICENSE
+ language:
+ - en
+ - ja
+ - ko
+ - fr
+ - es
+ - de
+ - it
+ - pt
+ - ar
+ - zh
+ pipeline_tag: image-text-to-text
+ tags:
+ - liquid
+ - edge
+ - lfm2.5-vl
+ - lfm2.5
+ - onnx
+ - onnxruntime
+ - webgpu
+ base_model:
+ - LiquidAI/LFM2.5-VL-450M
+ ---
+
+ <div align="center">
+ <img
+ src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/2b08LKpev0DNEk6DlnWkY.png"
+ alt="Liquid AI"
+ style="width: 100%; max-width: 100%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
+ />
+ <div style="display: flex; justify-content: center; gap: 0.5em; margin-bottom: 1em;">
+ <a href="https://playground.liquid.ai/chat?model=lfm2.5-vl-450m"><strong>Try LFM</strong></a> •
+ <a href="https://docs.liquid.ai/lfm"><strong>Documentation</strong></a> •
+ <a href="https://leap.liquid.ai/"><strong>LEAP</strong></a> •
+ <a href="https://discord.com/invite/liquid-ai"><strong>Discord</strong></a>
+ </div>
+ </div>
+
+ # LFM2.5-VL-450M-ONNX
+
+ ONNX export of [LFM2.5-VL-450M](https://huggingface.co/LiquidAI/LFM2.5-VL-450M) for cross-platform inference.
+
+ ## Recommended Variants
+
+ | Encoder | Decoder | Size | Platform | Use Case |
+ |---------|---------|------|----------|----------|
+ | FP16 | Q4 | ~770MB | WebGPU, Server | Recommended for most uses |
+ | FP16 | FP16 | ~1.0GB | Server | Higher quality |
+
+ - **WebGPU**: Use FP16 encoder + Q4 decoder (Q8 is not supported on WebGPU)
+ - **Server**: FP16+Q4 for efficiency, FP16+FP16 for quality
+
+ ## Model Files
+
+ ```
+ onnx/
+ ├── embed_tokens.onnx                 # Token embeddings (FP32, 256MB)
+ ├── embed_tokens_fp16.onnx            # Token embeddings (FP16, 128MB)
+ ├── embed_tokens_fp16.onnx_data
+ ├── vision_encoder.onnx               # Vision encoder (FP32, 359MB)
+ ├── vision_encoder.onnx_data
+ ├── vision_encoder_fp16.onnx          # Vision encoder (FP16, 180MB)
+ ├── vision_encoder_fp16.onnx_data
+ ├── vision_encoder_q4.onnx            # Vision encoder (Q4, 57MB)
+ ├── vision_encoder_q4.onnx_data
+ ├── vision_encoder_q8.onnx            # Vision encoder (Q8, 105MB)
+ ├── vision_encoder_q8.onnx_data
+ ├── decoder_model_merged.onnx         # Language decoder (FP32, 1.4GB)
+ ├── decoder_model_merged.onnx_data
+ ├── decoder_model_merged_fp16.onnx    # Language decoder (FP16, 692MB)
+ ├── decoder_model_merged_fp16.onnx_data
+ ├── decoder_model_merged_q4.onnx      # Language decoder (Q4, 459MB)
+ ├── decoder_model_merged_q4.onnx_data
+ └── decoder_model_merged_q8.onnx      # Language decoder (Q8, 604MB)
+     decoder_model_merged_q8.onnx_data
+ ```
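
The recommended FP16 encoder + Q4 decoder combination only needs six of the files above (each `.onnx` graph together with its `.onnx_data` weights). A small sketch of building the matching download patterns — `variant_files` is a hypothetical helper, not part of any library; the resulting list can be passed to `huggingface_hub.snapshot_download(..., allow_patterns=...)`:

```python
# Build the file list for one encoder/decoder variant of the ONNX repo.
# Each .onnx graph must be fetched together with its external .onnx_data weights.

def variant_files(encoder="fp16", decoder="q4", embed="fp16"):
    names = [
        f"vision_encoder_{encoder}",
        f"decoder_model_merged_{decoder}",
        f"embed_tokens_{embed}",
    ]
    files = []
    for n in names:
        files.append(f"onnx/{n}.onnx")       # graph
        files.append(f"onnx/{n}.onnx_data")  # external weights
    return files

patterns = variant_files()  # recommended WebGPU/server combination
print(patterns)
# e.g. pass to huggingface_hub (real API, call commented out to stay offline):
# from huggingface_hub import snapshot_download
# snapshot_download("LiquidAI/LFM2.5-VL-450M-ONNX",
#                   allow_patterns=patterns + ["*.json", "tokenizer*"])
```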
+
+ ## Python
+
+ ### Installation
+
+ ```bash
+ pip install onnxruntime transformers pillow torch huggingface_hub
+ # or with GPU support:
+ pip install onnxruntime-gpu transformers pillow torch huggingface_hub
+ ```
+
+ ### Inference
+
+ ```python
+ import numpy as np
+ import onnxruntime as ort
+ from huggingface_hub import hf_hub_download, list_repo_files
+ from transformers import AutoProcessor
+ from PIL import Image
+
+ # Download model files (fp16 encoder + q4 decoder recommended)
+ model_id = "LiquidAI/LFM2.5-VL-450M-ONNX"
+ embed_tokens_path = hf_hub_download(model_id, "onnx/embed_tokens_fp16.onnx")
+ vision_encoder_path = hf_hub_download(model_id, "onnx/vision_encoder_fp16.onnx")
+ decoder_path = hf_hub_download(model_id, "onnx/decoder_model_merged_q4.onnx")
+
+ # Download the external data files next to each graph
+ for f in list_repo_files(model_id):
+     if any(f.startswith(f"onnx/{name}") for name in [
+         "embed_tokens_fp16.onnx_data",
+         "vision_encoder_fp16.onnx_data",
+         "decoder_model_merged_q4.onnx_data",
+     ]):
+         hf_hub_download(model_id, f)
+
+ # Load ONNX sessions
+ embed_tokens = ort.InferenceSession(embed_tokens_path)
+ vision_encoder = ort.InferenceSession(vision_encoder_path)
+ decoder = ort.InferenceSession(decoder_path)
+
+ # Load processor
+ processor = AutoProcessor.from_pretrained("LiquidAI/LFM2.5-VL-450M", trust_remote_code=True)
+
+ # Prepare input
+ image = Image.open("photo.jpg")
+ messages = [{"role": "user", "content": [
+     {"type": "image"},
+     {"type": "text", "text": "What is in this image?"}
+ ]}]
+
+ # Process inputs
+ prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
+ inputs = processor(images=[image], text=prompt, return_tensors="pt")
+
+ # Convert to numpy with correct dtypes
+ pixel_values = inputs["pixel_values"].numpy().astype(np.float32)
+ pixel_attention_mask = inputs["pixel_attention_mask"].numpy().astype(np.int64)
+ spatial_shapes = inputs["spatial_shapes"].numpy().astype(np.int64)
+ input_ids = inputs["input_ids"].numpy().astype(np.int64)
+
+ # Get image embeddings
+ image_outputs = vision_encoder.run(None, {
+     "pixel_values": pixel_values,
+     "pixel_attention_mask": pixel_attention_mask,
+     "spatial_shapes": spatial_shapes,
+ })
+ image_embeds = image_outputs[0]
+
+ # Get token embeddings
+ token_outputs = embed_tokens.run(None, {"input_ids": input_ids})
+ token_embeds = token_outputs[0]
+
+ # Replace <image> tokens with image embeddings
+ image_token_id = processor.tokenizer.convert_tokens_to_ids("<image>")
+ image_positions = np.where(input_ids[0] == image_token_id)[0]
+ for i, pos in enumerate(image_positions):
+     if i < len(image_embeds):
+         token_embeds[0, pos] = image_embeds[i]
+
+ # Initialize KV cache for stateful decoding
+ ONNX_DTYPE = {"tensor(float)": np.float32, "tensor(float16)": np.float16, "tensor(int64)": np.int64}
+ cache = {}
+ for inp in decoder.get_inputs():
+     if inp.name in {"inputs_embeds", "attention_mask", "position_ids"}:
+         continue
+     shape = [d if isinstance(d, int) else 1 for d in inp.shape]
+     for i, d in enumerate(inp.shape):
+         if isinstance(d, str) and "sequence" in d.lower():
+             shape[i] = 0
+     cache[inp.name] = np.zeros(shape, dtype=ONNX_DTYPE.get(inp.type, np.float32))
+
+ # Generate tokens
+ seq_len = token_embeds.shape[1]
+ generated_tokens = []
+
+ for step in range(100):  # max new tokens
+     if step == 0:
+         embeds = token_embeds.astype(np.float32)
+     else:
+         last_token = np.array([[generated_tokens[-1]]], dtype=np.int64)
+         embeds = embed_tokens.run(None, {"input_ids": last_token})[0].astype(np.float32)
+
+     attn_mask = np.ones((1, seq_len + len(generated_tokens)), dtype=np.int64)
+     feed = {"inputs_embeds": embeds, "attention_mask": attn_mask, **cache}
+
+     outputs = decoder.run(None, feed)
+     next_token = int(np.argmax(outputs[0][0, -1]))
+     generated_tokens.append(next_token)
+
+     # Update cache (present_* outputs become past_* inputs)
+     for i, out in enumerate(decoder.get_outputs()[1:], 1):
+         name = out.name.replace("present_conv", "past_conv").replace("present.", "past_key_values.")
+         if name in cache:
+             cache[name] = outputs[i]
+
+     if next_token == processor.tokenizer.eos_token_id:
+         break
+
+ print(processor.tokenizer.decode(generated_tokens, skip_special_tokens=True))
+ ```
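
The loop above decodes greedily with `np.argmax`. A temperature/top-k sampling step can be swapped in where `next_token` is chosen; this is a sketch in plain numpy under the usual softmax-sampling formulation, not part of the README's pipeline:

```python
import numpy as np

def sample_token(logits, temperature=0.7, top_k=50, rng=None):
    """Sample a token id from a 1-D logits vector with temperature and top-k."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-6)
    # Keep only the top_k highest logits (mask the rest out)
    if top_k and top_k < logits.shape[0]:
        kth = np.partition(logits, -top_k)[-top_k]
        logits = np.where(logits < kth, -np.inf, logits)
    # Numerically stable softmax
    logits -= logits.max()
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(rng.choice(logits.shape[0], p=probs))

# As temperature -> 0 this approaches greedy argmax:
logits = np.array([0.1, 2.5, -1.0, 0.3])
print(sample_token(logits, temperature=1e-6, top_k=None))  # → 1
```

In the generation loop, `next_token = int(np.argmax(outputs[0][0, -1]))` would become `next_token = sample_token(outputs[0][0, -1])`.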
+
+ ## WebGPU (Browser)
+
+ ### Installation
+
+ ```bash
+ npm install onnxruntime-web @huggingface/transformers
+ ```
+
+ ### Enable WebGPU
+
+ WebGPU is required for browser inference. If your browser does not enable it by default:
+
+ 1. **Chrome/Edge**: Navigate to `chrome://flags/#enable-unsafe-webgpu`, enable, and restart
+ 2. **Verify**: Check `chrome://gpu` for "WebGPU" status
+ 3. **Test**: Run `navigator.gpu.requestAdapter()` in the DevTools console
+
+ ### Inference
+
+ ```javascript
+ import * as ort from "onnxruntime-web/webgpu";
+ import { AutoTokenizer } from "@huggingface/transformers";
+
+ // Check WebGPU availability
+ if (!navigator.gpu) {
+   throw new Error("WebGPU not available. Enable at chrome://flags/#enable-unsafe-webgpu");
+ }
+ const adapter = await navigator.gpu.requestAdapter();
+ if (!adapter) {
+   throw new Error("WebGPU adapter not found. Check chrome://gpu for status.");
+ }
+
+ ort.env.wasm.numThreads = 1;
+
+ const modelId = "LiquidAI/LFM2.5-VL-450M-ONNX";
+ const modelBase = `https://huggingface.co/${modelId}/resolve/main`;
+
+ // Load tokenizer
+ const tokenizer = await AutoTokenizer.from_pretrained(modelId);
+
+ // Load ONNX sessions with external data
+ async function loadSession(name) {
+   const onnxPath = `${modelBase}/onnx/${name}.onnx`;
+   const fileName = `${name}.onnx_data`;
+   return ort.InferenceSession.create(onnxPath, {
+     executionProviders: ["webgpu"],
+     externalData: [{ path: fileName, data: `${modelBase}/onnx/${fileName}` }],
+   });
+ }
+
+ const embedTokens = await loadSession("embed_tokens_fp16");
+ const visionEncoder = await loadSession("vision_encoder_fp16");
+ const decoder = await loadSession("decoder_model_merged_q4");
+
+ // Model config
+ const hiddenSize = 1024;
+ const numKVHeads = 8;
+ const headDim = 64;
+
+ // Get text embeddings helper
+ async function getTextEmbeddings(ids) {
+   const tensor = new ort.Tensor("int64", new BigInt64Array(ids.map(BigInt)), [1, ids.length]);
+   const out = await embedTokens.run({ input_ids: tensor });
+   return out.inputs_embeds;
+ }
+
+ // Initialize KV cache
+ function initCache() {
+   const cache = {};
+   for (const name of decoder.inputNames) {
+     if (name.startsWith("past_conv")) {
+       cache[name] = new ort.Tensor("float32", new Float32Array(hiddenSize * 3), [1, hiddenSize, 3]);
+     } else if (name.startsWith("past_key_values")) {
+       cache[name] = new ort.Tensor("float32", new Float32Array(0), [1, numKVHeads, 0, headDim]);
+     }
+   }
+   return cache;
+ }
+
+ // Update cache from outputs
+ function updateCache(cache, outputs) {
+   for (const [name, tensor] of Object.entries(outputs)) {
+     if (name.startsWith("present_conv")) {
+       cache[name.replace("present_conv", "past_conv")] = tensor;
+     } else if (name.startsWith("present.")) {
+       cache[name.replace("present.", "past_key_values.")] = tensor;
+     }
+   }
+ }
+
+ // Build prompt and tokenize (text-only example; for VL, merge image
+ // embeddings at <image> token positions as in the Python example)
+ const messages = [{ role: "user", content: "What is in this image?" }];
+ const prompt = tokenizer.apply_chat_template(messages, { add_generation_prompt: true, tokenize: false });
+ const inputIds = tokenizer.encode(prompt);
+
+ let inputsEmbeds = await getTextEmbeddings(inputIds);
+
+ // Generation loop
+ const cache = initCache();
+ const eosTokenId = tokenizer.eos_token_id;
+ const generatedTokens = [];
+ let curLen = inputsEmbeds.dims[1];
+ let embeds = inputsEmbeds;
+
+ for (let step = 0; step < 256; step++) {
+   const attentionMask = new ort.Tensor("int64", new BigInt64Array(curLen).fill(1n), [1, curLen]);
+
+   const outputs = await decoder.run({ inputs_embeds: embeds, attention_mask: attentionMask, ...cache });
+
+   // Greedy decode: argmax of the last token's logits
+   // (a plain loop avoids spreading a vocab-sized array into Math.max)
+   const logits = outputs.logits;
+   const vocabSize = logits.dims[2];
+   const lastLogits = logits.data.slice((logits.dims[1] - 1) * vocabSize);
+   let nextToken = 0;
+   for (let i = 1; i < vocabSize; i++) {
+     if (lastLogits[i] > lastLogits[nextToken]) nextToken = i;
+   }
+
+   generatedTokens.push(nextToken);
+   if (nextToken === eosTokenId) break;
+
+   updateCache(cache, outputs);
+   embeds = await getTextEmbeddings([nextToken]);
+   curLen++;
+ }
+
+ console.log(tokenizer.decode(generatedTokens, { skip_special_tokens: true }));
+ ```
+
+ ### WebGPU Notes
+
+ - Recommended: `vision_encoder_fp16.onnx` + `decoder_model_merged_q4.onnx`
+ - For higher quality: `vision_encoder_fp16.onnx` + `decoder_model_merged_fp16.onnx`
+ - Image preprocessing requires tiling (512x512), patch extraction (16x16), and normalization
+ - int64 tensors require `BigInt64Array`
+
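
The preprocessing constants come from this repo's `preprocessor_config.json` (rescale 1/255, mean 0.5, std 0.5, 512x512 tiles, 16x16 patches). The helpers below are an illustrative numpy sketch of those two steps, not the actual `Lfm2VlImageProcessorFast` implementation:

```python
import numpy as np

TILE = 512           # tile_size from preprocessor_config.json
PATCH = 16           # encoder_patch_size
MEAN, STD = 0.5, 0.5
RESCALE = 1.0 / 255.0

def normalize_tile(tile_u8):
    """Map a (TILE, TILE, 3) uint8 tile to the model's input range [-1, 1]."""
    x = tile_u8.astype(np.float32) * RESCALE   # [0, 1]
    return (x - MEAN) / STD                    # [-1, 1]

def patchify(tile):
    """Split a (TILE, TILE, 3) tile into (num_patches, PATCH*PATCH*3) rows."""
    n = TILE // PATCH                          # 32 patches per side
    p = tile.reshape(n, PATCH, n, PATCH, 3).transpose(0, 2, 1, 3, 4)
    return p.reshape(n * n, PATCH * PATCH * 3)

tile = np.full((TILE, TILE, 3), 255, dtype=np.uint8)
x = normalize_tile(tile)
print(x.max())                 # ≈ 1.0 (white maps to +1)
print(patchify(x).shape)       # (1024, 768)
```

The 1024 rows per tile match `max_num_patches` in the preprocessor config.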
+ ## transformers.js
+
+ This model is compatible with [transformers.js](https://huggingface.co/docs/transformers.js) v4.0+ for browser-based inference with WebGPU:
+
+ ```javascript
+ import { AutoModelForImageTextToText, AutoProcessor, RawImage } from "@huggingface/transformers";
+
+ const model = await AutoModelForImageTextToText.from_pretrained(
+   "LiquidAI/LFM2.5-VL-450M-ONNX",
+   {
+     device: "webgpu",
+     dtype: {
+       vision_encoder: "fp16",
+       embed_tokens: "fp16",
+       decoder_model_merged: "q4",
+     },
+   }
+ );
+
+ const processor = await AutoProcessor.from_pretrained("LiquidAI/LFM2.5-VL-450M-ONNX");
+
+ const image = await RawImage.fromURL("https://example.com/photo.jpg");
+ const messages = [
+   { role: "user", content: [{ type: "image" }, { type: "text", text: "What is in this image?" }] },
+ ];
+
+ const chatPrompt = processor.apply_chat_template(messages, { add_generation_prompt: true });
+ const inputs = await processor(image, chatPrompt, { add_special_tokens: false });
+
+ const outputs = await model.generate({
+   ...inputs,
+   do_sample: false,
+   max_new_tokens: 128,
+ });
+
+ const inputLength = inputs.input_ids.dims.at(-1);
+ const generated = outputs.slice(null, [inputLength, null]);
+ console.log(processor.batch_decode(generated, { skip_special_tokens: true })[0]);
+ ```
+
+ See our [WebGPU demo](https://huggingface.co/spaces/LiquidAI/LFM2.5-VL-450M-WebGPU) for a full real-time video captioning and object detection application.
+
+ ## License
+
+ This model is released under the [LFM 1.0 License](LICENSE).
+
chat_template.jinja ADDED
@@ -0,0 +1,92 @@
+ {{- bos_token -}}
+ {%- set keep_past_thinking = keep_past_thinking | default(false) -%}
+
+ {%- macro format_arg_value(arg_value) -%}
+     {%- if arg_value is string -%}
+         {{- '"' + arg_value + '"' -}}
+     {%- elif arg_value is mapping -%}
+         {{- arg_value | tojson -}}
+     {%- else -%}
+         {{- arg_value | string -}}
+     {%- endif -%}
+ {%- endmacro -%}
+
+ {%- macro parse_content(content) -%}
+     {%- if content is string -%}
+         {{- content -}}
+     {%- else -%}
+         {%- set _ns = namespace(result="") -%}
+         {%- for item in content -%}
+             {%- if item.type == "image" -%}
+                 {%- set _ns.result = _ns.result + "<image>" -%}
+             {%- elif item.type == "text" -%}
+                 {%- set _ns.result = _ns.result + item.text -%}
+             {%- else -%}
+                 {%- set _ns.result = _ns.result + item | tojson -%}
+             {%- endif -%}
+         {%- endfor -%}
+         {{- _ns.result -}}
+     {%- endif -%}
+ {%- endmacro -%}
+
+ {%- macro render_tool_calls(tool_calls) -%}
+     {%- set tool_calls_ns = namespace(tool_calls=[]) -%}
+     {%- for tool_call in tool_calls -%}
+         {%- set func_name = tool_call.function.name -%}
+         {%- set func_args = tool_call.function.arguments -%}
+         {%- set args_ns = namespace(arg_strings=[]) -%}
+         {%- for arg_name, arg_value in func_args.items() -%}
+             {%- set args_ns.arg_strings = args_ns.arg_strings + [arg_name + "=" + format_arg_value(arg_value)] -%}
+         {%- endfor -%}
+         {%- set tool_calls_ns.tool_calls = tool_calls_ns.tool_calls + [func_name + "(" + (args_ns.arg_strings | join(", ")) + ")"] -%}
+     {%- endfor -%}
+     {{- "<|tool_call_start|>[" + (tool_calls_ns.tool_calls | join(", ")) + "]<|tool_call_end|>" -}}
+ {%- endmacro -%}
+
+ {%- set ns = namespace(system_prompt="", last_assistant_index=-1) -%}
+ {%- if messages[0].role == "system" -%}
+     {%- if messages[0].content is defined -%}
+         {%- set ns.system_prompt = parse_content(messages[0].content) -%}
+     {%- endif -%}
+     {%- set messages = messages[1:] -%}
+ {%- endif -%}
+ {%- if tools -%}
+     {%- set ns.system_prompt = ns.system_prompt + ("\n\n" if ns.system_prompt else "") + "Today's date: " + strftime_now("%Y-%m-%d") + "\n\nList of tools: " + (tools | tojson) -%}
+ {%- endif -%}
+ {%- if ns.system_prompt -%}
+     {{- "<|im_start|>system\n" + ns.system_prompt + "<|im_end|>\n" -}}
+ {%- endif -%}
+ {%- for message in messages -%}
+     {%- if message.role == "assistant" -%}
+         {%- set ns.last_assistant_index = loop.index0 -%}
+     {%- endif -%}
+ {%- endfor -%}
+ {%- for message in messages -%}
+     {{- "<|im_start|>" + message.role + "\n" -}}
+     {%- if message.role == "assistant" -%}
+         {%- generation -%}
+             {%- if message.thinking is defined and (keep_past_thinking or loop.index0 == ns.last_assistant_index) -%}
+                 {{- "<think>" + message.thinking + "</think>" -}}
+             {%- endif -%}
+             {%- if message.tool_calls is defined -%}
+                 {{- render_tool_calls(message.tool_calls) -}}
+             {%- endif -%}
+             {%- if message.content is defined -%}
+                 {%- set content = parse_content(message.content) -%}
+                 {%- if not keep_past_thinking and loop.index0 != ns.last_assistant_index -%}
+                     {%- if "</think>" in content -%}
+                         {%- set content = content.split("</think>")[-1] | trim -%}
+                     {%- endif -%}
+                 {%- endif -%}
+                 {{- content + ("" if (continue_final_message and loop.last) else "<|im_end|>\n") -}}
+             {%- endif -%}
+         {%- endgeneration -%}
+     {%- else %}
+         {%- if message.content is defined -%}
+             {{- parse_content(message.content) + "<|im_end|>\n" -}}
+         {%- endif -%}
+     {%- endif %}
+ {%- endfor -%}
+ {%- if add_generation_prompt -%}
+     {{- "<|im_start|>assistant\n" -}}
+ {%- endif -%}
config.json ADDED
@@ -0,0 +1,104 @@
+ {
+   "architectures": [
+     "Lfm2VlForConditionalGeneration"
+   ],
+   "do_image_splitting": true,
+   "downsample_factor": 2,
+   "dtype": "bfloat16",
+   "encoder_patch_size": 16,
+   "image_token_id": 396,
+   "max_image_tokens": 256,
+   "max_pixels_tolerance": 2.0,
+   "max_tiles": 10,
+   "min_image_tokens": 64,
+   "min_tiles": 2,
+   "model_type": "lfm2_vl",
+   "projector_bias": true,
+   "projector_hidden_act": "gelu",
+   "projector_hidden_size": 2048,
+   "projector_use_layernorm": false,
+   "text_config": {
+     "_name_or_path": "LiquidAI/LFM2-350M",
+     "architectures": [
+       "Lfm2ForCausalLM"
+     ],
+     "block_auto_adjust_ff_dim": true,
+     "block_dim": 1024,
+     "block_ff_dim": 6656,
+     "block_ffn_dim_multiplier": 1.0,
+     "block_mlp_init_scale": 1.0,
+     "block_multiple_of": 256,
+     "block_norm_eps": 1e-05,
+     "block_out_init_scale": 1.0,
+     "block_use_swiglu": true,
+     "block_use_xavier_init": true,
+     "conv_L_cache": 3,
+     "conv_bias": false,
+     "conv_dim": 1024,
+     "conv_dim_out": 1024,
+     "conv_use_xavier_init": true,
+     "dtype": "bfloat16",
+     "eos_token_id": 7,
+     "hidden_size": 1024,
+     "initializer_range": 0.02,
+     "intermediate_size": 6656,
+     "layer_types": [
+       "conv",
+       "conv",
+       "full_attention",
+       "conv",
+       "conv",
+       "full_attention",
+       "conv",
+       "conv",
+       "full_attention",
+       "conv",
+       "full_attention",
+       "conv",
+       "full_attention",
+       "conv",
+       "full_attention",
+       "conv"
+     ],
+     "max_position_embeddings": 128000,
+     "model_type": "lfm2",
+     "norm_eps": 1e-05,
+     "num_attention_heads": 16,
+     "num_heads": 16,
+     "num_hidden_layers": 16,
+     "num_key_value_heads": 8,
+     "rope_parameters": {
+       "rope_theta": 1000000.0,
+       "rope_type": "default"
+     },
+     "use_cache": true,
+     "use_pos_enc": true,
+     "vocab_size": 65536
+   },
+   "tile_size": 512,
+   "transformers_version": "5.0.0.dev0",
+   "use_image_special_tokens": true,
+   "use_thumbnail": true,
+   "vision_config": {
+     "attention_dropout": 0.0,
+     "dtype": "bfloat16",
+     "hidden_act": "gelu_pytorch_tanh",
+     "hidden_size": 768,
+     "intermediate_size": 3072,
+     "layer_norm_eps": 1e-06,
+     "model_type": "siglip2_vision_model",
+     "num_attention_heads": 12,
+     "num_channels": 3,
+     "num_hidden_layers": 12,
+     "num_patches": 256,
+     "patch_size": 16,
+     "vision_use_head": false
+   },
+   "transformers.js_config": {
+     "use_external_data_format": {
+       "vision_encoder": true,
+       "embed_tokens": true,
+       "decoder_model_merged": true
+     }
+   }
+ }
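
The cache shapes used in the README's inference examples follow directly from `text_config`: `head_dim = hidden_size / num_attention_heads = 1024 / 16 = 64`, each `full_attention` layer carries `[1, num_key_value_heads, seq, 64]` key/value caches, and each `conv` layer carries a fixed `[1, conv_dim, conv_L_cache]` state. A quick check with the values copied from the config above:

```python
# Derive the decoder cache geometry from text_config (values copied from config.json)
hidden_size = 1024
num_attention_heads = 16
num_key_value_heads = 8
conv_L_cache = 3
layer_types = [
    "conv", "conv", "full_attention", "conv", "conv", "full_attention",
    "conv", "conv", "full_attention", "conv", "full_attention", "conv",
    "full_attention", "conv", "full_attention", "conv",
]

head_dim = hidden_size // num_attention_heads
conv_layers = layer_types.count("conv")
attn_layers = layer_types.count("full_attention")

print(head_dim)                                 # 64
print(conv_layers, attn_layers)                 # 10 6
print((1, num_key_value_heads, 0, head_dim))    # empty attention cache: (1, 8, 0, 64)
print((1, hidden_size, conv_L_cache))           # conv state: (1, 1024, 3)
```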
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 7,
+   "pad_token_id": 0,
+   "transformers_version": "4.57.0"
+ }
onnx/decoder_model_merged.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb0e0195c934b3a038e88fb737ee49d1e3c677348d30bc410f9b61a8b91e9cd8
+ size 143084
onnx/decoder_model_merged.onnx_data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:674b818194156e6333ce002d04c92da07049b59c946975f94a796e35103a6aed
+ size 1450700800
onnx/decoder_model_merged_fp16.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4128700047fb544477d3d385384cd9e9ef429706d4f23522c0c1b99525f0ef00
+ size 148963
onnx/decoder_model_merged_fp16.onnx_data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:25dbf3241518138ca3be478bb7d66051ea88e1b2a1b2b6db0a81b75705efe869
+ size 725350400
onnx/decoder_model_merged_q4.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26600302bd9db0ef26d1a98fba0aae22dac99468e2195ce6d9b9bc7308c18f68
+ size 171898
onnx/decoder_model_merged_q4.onnx_data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b930a8ec51f6326c1b5e09e38fd0162fc69840b2f9b926025948a58a4e962c7d
+ size 481030144
onnx/decoder_model_merged_q8.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d24e6b5dcdaacf665b1c1d5eba40b361d52439ac2b477b3ede65545a05c8e6a
+ size 188896
onnx/decoder_model_merged_q8.onnx_data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:88cb87b11d43c646d88e92c036577c519ed6347a86d493a252b80d1f17b0d544
+ size 633663488
onnx/embed_tokens.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3fcae1b697f9e35d181c119d41f06a3d9153bf09b19280ef154b5f77fd64f29c
+ size 268435815
onnx/embed_tokens_fp16.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:291d72b491d3187f3cafbb0ec35c5f889360a044d7db815510eff0fabb2af371
+ size 573
onnx/embed_tokens_fp16.onnx_data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6936dd14d4e0fa29f4046159dfa5738363f020216ed39a2ed14d276d8d473aa6
+ size 134217728
onnx/vision_encoder.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0170bb7f54d5dbb1b9a35b51f4b60b99feac3bb0753c3b8740fa2176c2763d1e
+ size 123527
onnx/vision_encoder.onnx_data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c70c17750012c329b4d445c0b8de2c4ec851a4e4374af9bc32af9708e9d48cb
+ size 376939520
onnx/vision_encoder_fp16.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d6c020610d1619939e98e0d355558dbc3a86f4e4f447747fabffb9fe77d8b7fb
+ size 124811
onnx/vision_encoder_fp16.onnx_data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7cae41cd55168b324a7d4e773203189e3aa40471389cba977a20416d0a721998
+ size 188469760
onnx/vision_encoder_q4.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca4861376b1c409486a38237676754d0286d13b00e561bd113acebaaaddc56af
+ size 146157
onnx/vision_encoder_q4.onnx_data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c46c194ac38dc7050c5729296d2ac80c25848d0f3895bf29e0e05b481b8a731
+ size 59982848
onnx/vision_encoder_q8.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a4ff9d2db7622261ece3661c4d1ef923ac8353b6cea27d4090e340cb21e534ff
+ size 159147
onnx/vision_encoder_q8.onnx_data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b1f664b195ba188ba5d2c8a5103338465a728050e11c6f7fffea079bfc65b2ef
+ size 109874176
preprocessor_config.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "data_format": "channels_first",
+   "do_image_splitting": true,
+   "do_normalize": true,
+   "do_pad": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "downsample_factor": 2,
+   "encoder_patch_size": 16,
+   "image_mean": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "image_processor_type": "Lfm2VlImageProcessorFast",
+   "image_std": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "max_image_tokens": 256,
+   "max_num_patches": 1024,
+   "max_pixels_tolerance": 2.0,
+   "max_tiles": 10,
+   "min_image_tokens": 64,
+   "min_tiles": 2,
+   "resample": 2,
+   "rescale_factor": 0.00392156862745098,
+   "return_row_col_info": true,
+   "size": {
+     "height": 512,
+     "width": 512
+   },
+   "tile_size": 512,
+   "use_thumbnail": true,
+   "processor_class": "Lfm2VlProcessor"
+ }
processor_config.json ADDED
@@ -0,0 +1,39 @@
+ {
+   "image_processor": {
+     "data_format": "channels_first",
+     "do_image_splitting": true,
+     "do_normalize": true,
+     "do_pad": true,
+     "do_rescale": true,
+     "do_resize": true,
+     "downsample_factor": 2,
+     "encoder_patch_size": 16,
+     "image_mean": [
+       0.5,
+       0.5,
+       0.5
+     ],
+     "image_processor_type": "Lfm2VlImageProcessorFast",
+     "image_std": [
+       0.5,
+       0.5,
+       0.5
+     ],
+     "max_image_tokens": 256,
+     "max_num_patches": 1024,
+     "max_pixels_tolerance": 2.0,
+     "max_tiles": 10,
+     "min_image_tokens": 64,
+     "min_tiles": 2,
+     "resample": 2,
+     "rescale_factor": 0.00392156862745098,
+     "return_row_col_info": true,
+     "size": {
+       "height": 512,
+       "width": 512
+     },
+     "tile_size": 512,
+     "use_thumbnail": true
+   },
+   "processor_class": "Lfm2VlProcessor"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "backend": "tokenizers",
+   "bos_token": "<|startoftext|>",
+   "clean_up_tokenization_spaces": true,
+   "eos_token": "<|im_end|>",
+   "extra_special_tokens": [],
+   "image_end_token": "<|image_end|>",
+   "image_start_token": "<|image_start|>",
+   "image_thumbnail": "<|img_thumbnail|>",
+   "image_token": "<image>",
+   "is_local": false,
+   "legacy": false,
+   "model_max_length": 1000000000000000019884624838656,
+   "model_specific_special_tokens": {
+     "image_end_token": "<|image_end|>",
+     "image_start_token": "<|image_start|>",
+     "image_token": "<image>"
+   },
+   "pad_token": "<|pad|>",
+   "processor_class": "Lfm2VlProcessor",
+   "return_token_type_ids": false,
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "TokenizersBackend",
+   "use_default_system_prompt": false,
+   "use_fast": true,
+   "chat_template": "{{- bos_token -}}\n{%- set keep_past_thinking = keep_past_thinking | default(false) -%}\n\n{%- macro format_arg_value(arg_value) -%}\n    {%- if arg_value is string -%}\n        {{- '\"' + arg_value + '\"' -}}\n    {%- elif arg_value is mapping -%}\n        {{- arg_value | tojson -}}\n    {%- else -%}\n        {{- arg_value | string -}}\n    {%- endif -%}\n{%- endmacro -%}\n\n{%- macro parse_content(content) -%}\n    {%- if content is string -%}\n        {{- content -}}\n    {%- else -%}\n        {%- set _ns = namespace(result=\"\") -%}\n        {%- for item in content -%}\n            {%- if item.type == \"image\" -%}\n                {%- set _ns.result = _ns.result + \"<image>\" -%}\n            {%- elif item.type == \"text\" -%}\n                {%- set _ns.result = _ns.result + item.text -%}\n            {%- else -%}\n                {%- set _ns.result = _ns.result + item | tojson -%}\n            {%- endif -%}\n        {%- endfor -%}\n        {{- _ns.result -}}\n    {%- endif -%}\n{%- endmacro -%}\n\n{%- macro render_tool_calls(tool_calls) -%}\n    {%- set tool_calls_ns = namespace(tool_calls=[]) -%}\n    {%- for tool_call in tool_calls -%}\n        {%- set func_name = tool_call.function.name -%}\n        {%- set func_args = tool_call.function.arguments -%}\n        {%- set args_ns = namespace(arg_strings=[]) -%}\n        {%- for arg_name, arg_value in func_args.items() -%}\n            {%- set args_ns.arg_strings = args_ns.arg_strings + [arg_name + \"=\" + format_arg_value(arg_value)] -%}\n        {%- endfor -%}\n        {%- set tool_calls_ns.tool_calls = tool_calls_ns.tool_calls + [func_name + \"(\" + (args_ns.arg_strings | join(\", \")) + \")\"] -%}\n    {%- endfor -%}\n    {{- \"<|tool_call_start|>[\" + (tool_calls_ns.tool_calls | join(\", \")) + \"]<|tool_call_end|>\" -}}\n{%- endmacro -%}\n\n{%- set ns = namespace(system_prompt=\"\", last_assistant_index=-1) -%}\n{%- if messages[0].role == \"system\" -%}\n    {%- if messages[0].content is defined -%}\n        {%- set ns.system_prompt = parse_content(messages[0].content) -%}\n    {%- endif -%}\n    {%- set messages = messages[1:] -%}\n{%- endif -%}\n{%- if tools -%}\n    {%- set ns.system_prompt = ns.system_prompt + (\"\\n\\n\" if ns.system_prompt else \"\") + \"Today's date: \" + strftime_now(\"%Y-%m-%d\") + \"\\n\\nList of tools: \" + (tools | tojson) -%}\n{%- endif -%}\n{%- if ns.system_prompt -%}\n    {{- \"<|im_start|>system\\n\" + ns.system_prompt + \"<|im_end|>\\n\" -}}\n{%- endif -%}\n{%- for message in messages -%}\n    {%- if message.role == \"assistant\" -%}\n        {%- set ns.last_assistant_index = loop.index0 -%}\n    {%- endif -%}\n{%- endfor -%}\n{%- for message in messages -%}\n    {{- \"<|im_start|>\" + message.role + \"\\n\" -}}\n    {%- if message.role == \"assistant\" -%}\n        {%- if message.thinking is defined and (keep_past_thinking or loop.index0 == ns.last_assistant_index) -%}\n            {{- \"<think>\" + message.thinking + \"</think>\" -}}\n        {%- endif -%}\n        {%- if message.tool_calls is defined -%}\n            {{- render_tool_calls(message.tool_calls) -}}\n        {%- endif -%}\n        {%- if message.content is defined -%}\n            {%- set content = parse_content(message.content) -%}\n            {%- if not keep_past_thinking and loop.index0 != ns.last_assistant_index -%}\n                {%- if \"</think>\" in content -%}\n                    {%- set content = content.split(\"</think>\")[-1] | trim -%}\n                {%- endif -%}\n            {%- endif -%}\n            {{- content + (\"\" if (continue_final_message and loop.last) else \"<|im_end|>\\n\") -}}\n        {%- endif -%}\n    {%- else %}\n        {%- if message.content is defined -%}\n            {{- parse_content(message.content) + \"<|im_end|>\\n\" -}}\n        {%- endif -%}\n    {%- endif %}\n{%- endfor -%}\n{%- if add_generation_prompt -%}\n    {{- \"<|im_start|>assistant\\n\" -}}\n{%- endif -%}"
+ }