---
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ja
- ko
- fr
- es
- de
- it
- pt
- ar
- zh
pipeline_tag: text-generation
tags:
- liquid
- edge
- lfm2.5
- onnx
- onnxruntime
base_model:
- LiquidAI/LFM2.5-1.2B-Instruct
---

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/2b08LKpev0DNEk6DlnWkY.png" alt="Liquid AI" style="width: 100%; max-width: 100%;">

<p>
<a href="https://playground.liquid.ai/"><strong>Try LFM</strong></a> •
<a href="https://docs.liquid.ai/lfm"><strong>Documentation</strong></a> •
<a href="https://leap.liquid.ai/"><strong>LEAP</strong></a> •
<a href="https://www.liquid.ai/blog/"><strong>Blog</strong></a>
</p>
</div>

# LFM2.5-1.2B-Instruct-ONNX

ONNX export of [LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) for cross-platform inference.

LFM2.5 uses a hybrid architecture that combines multiplicative gates with short convolutions, optimized for edge deployment and fast inference on CPU, GPU, and NPU hardware.

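The core idea (gate the input, mix a short causal window per channel, gate the output) can be sketched in a few lines of NumPy. This is an illustrative toy under names of our own choosing, not the actual LFM2.5 block; see the base model repository for the real design.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_short_conv(x, conv_w, in_gate_w, out_gate_w):
    """Toy gated short-convolution block (illustrative only).

    x: (seq_len, d) hidden states; conv_w: (k, d) depthwise kernel with a
    short k (e.g. 3); in_gate_w / out_gate_w: (d, d) gate projections.
    """
    seq_len, d = x.shape
    k = conv_w.shape[0]
    h = x * sigmoid(x @ in_gate_w)            # multiplicative input gate
    h = np.vstack([np.zeros((k - 1, d)), h])  # left-pad so the conv is causal
    # Depthwise convolution: each channel mixes only its own short history
    out = np.stack([(h[t:t + k] * conv_w).sum(axis=0) for t in range(seq_len)])
    return out * sigmoid(x @ out_gate_w)      # multiplicative output gate
```
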
## Recommended Variants

| Precision | Size | Use Case |
|-----------|------|----------|
| Q4 | ~1.2 GB | Recommended for most uses |
| Q8 | ~1.7 GB | Balance of quality and size |
| FP16 | ~2.4 GB | Higher quality |

## Model Files

```
onnx/
├── model.onnx        # FP32
├── model_fp16.onnx   # FP16
├── model_q4.onnx     # Q4 (recommended)
└── model_q8.onnx     # Q8
```

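To fetch a single variant (the `.onnx` graph plus its external-data sidecar) without cloning the whole repo, a pattern-filtered download works; the `allow_patterns` glob below assumes the file layout shown above:

```python
from huggingface_hub import snapshot_download

# Download only the Q4 graph and its external-data file; the tokenizer
# is fetched separately by AutoTokenizer in the inference example below.
local_dir = snapshot_download(
    "LiquidAI/LFM2.5-1.2B-Instruct-ONNX",
    allow_patterns=["onnx/model_q4.onnx*"],
)
```
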
## Python

### Installation

```bash
pip install onnxruntime transformers numpy huggingface_hub
# or with GPU support:
pip install onnxruntime-gpu transformers numpy huggingface_hub
```

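Before loading the model, you can check which execution providers your build exposes; with `onnxruntime-gpu` installed, `CUDAExecutionProvider` should appear in the list:

```python
import onnxruntime as ort

print(ort.get_available_providers())
# e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider'] on a GPU build

# A session can then be pinned to a provider list, falling back to CPU:
# session = ort.InferenceSession(model_path,
#     providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
```
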
### Inference

```python
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

# Download model (Q4 recommended). The external weights must sit next to
# the .onnx graph; hf_hub_download caches both in the same snapshot dir.
model_id = "LiquidAI/LFM2.5-1.2B-Instruct-ONNX"
model_path = hf_hub_download(model_id, "onnx/model_q4.onnx")
data_path = hf_hub_download(model_id, "onnx/model_q4.onnx_data")

# Load model and tokenizer
session = ort.InferenceSession(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Prepare chat input
messages = [{"role": "user", "content": "What is the capital of France?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input_ids = np.array([tokenizer.encode(prompt, add_special_tokens=False)], dtype=np.int64)

# Initialize an empty KV cache: every non-input tensor gets a zero-length
# sequence dimension and size 1 for any other dynamic dimension
ONNX_DTYPE = {"tensor(float)": np.float32, "tensor(float16)": np.float16, "tensor(int64)": np.int64}
cache = {}
for inp in session.get_inputs():
    if inp.name in {"input_ids", "attention_mask", "position_ids"}:
        continue
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    for i, d in enumerate(inp.shape):
        if isinstance(d, str) and "sequence" in d.lower():
            shape[i] = 0
    cache[inp.name] = np.zeros(shape, dtype=ONNX_DTYPE.get(inp.type, np.float32))

# Check if the exported graph takes position_ids
input_names = {inp.name for inp in session.get_inputs()}
use_position_ids = "position_ids" in input_names

# Generate tokens: full prompt on the first step, then one token at a time
seq_len = input_ids.shape[1]
generated_tokens = []

for step in range(100):  # max new tokens
    if step == 0:
        ids = input_ids
        pos = np.arange(seq_len, dtype=np.int64).reshape(1, -1)
    else:
        ids = np.array([[generated_tokens[-1]]], dtype=np.int64)
        pos = np.array([[seq_len + len(generated_tokens) - 1]], dtype=np.int64)

    # Attention mask covers everything cached so far plus the new token(s)
    attn_mask = np.ones((1, seq_len + len(generated_tokens)), dtype=np.int64)
    feed = {"input_ids": ids, "attention_mask": attn_mask, **cache}
    if use_position_ids:
        feed["position_ids"] = pos

    outputs = session.run(None, feed)
    next_token = int(np.argmax(outputs[0][0, -1]))  # greedy decoding
    generated_tokens.append(next_token)

    # Feed each present-state output back in as the matching past-state input
    for i, out in enumerate(session.get_outputs()[1:], 1):
        name = out.name.replace("present_conv", "past_conv").replace("present.", "past_key_values.")
        if name in cache:
            cache[name] = outputs[i]

    if next_token == tokenizer.eos_token_id:
        break

print(tokenizer.decode(generated_tokens, skip_special_tokens=True))
```

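The loop above decodes greedily via `np.argmax`. For sampled output, one minimal variation (a sketch, not part of the original example) is temperature sampling in its place:

```python
# Replace: next_token = int(np.argmax(outputs[0][0, -1]))
logits = outputs[0][0, -1].astype(np.float64) / 0.7  # 0.7 = temperature
probs = np.exp(logits - logits.max())
probs /= probs.sum()
next_token = int(np.random.choice(len(probs), p=probs))
```
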
## License

This model is released under the [LFM 1.0 License](LICENSE).