Update README.md

We report the following numbers with 1K prefill and 100 decode tokens:

| Device | Inference | Framework | Model | Prefill (tok/s) | Decode (tok/s) | Memory |
| ---------------------------------------------------- | --------- | ---------------- | -------------------- | --------------- | -------------- | ------ |
| AMD Ryzen AI 395+ | NPU | FastFlowLM | LFM2.5-1.2B-Thinking | 1487 | 60 | 1700MB |
| AMD Ryzen AI 5 HX 340 | NPU | FastFlowLM | LFM2.5-1.2B-Thinking | 1431 | 63 | 1700MB |
| AMD Ryzen AI 7 HX 350 | NPU | FastFlowLM | LFM2.5-1.2B-Thinking | 1431 | 63 | 1700MB |
| AMD Ryzen AI 9 HX 370 | NPU | FastFlowLM | LFM2.5-1.2B-Thinking | 1487 | 57 | 1700MB |
| AMD Ryzen AI 9 HX 370 | GPU | llama.cpp (Q4_0) | LFM2.5-1.2B-Thinking | 2975 | 116 | 856MB |
| Qualcomm Snapdragon® X Elite | NPU | NexaML | LFM2.5-1.2B-Thinking | 2591 | 63 | 0.9GB |
| Qualcomm Snapdragon® Gen4 (ROG Phone9 Pro) | NPU | NexaML | LFM2.5-1.2B-Thinking | 4391 | 82 | 0.9GB |
| Qualcomm Dragonwing IQ9 (IQ-9075) (IoT) | NPU | NexaML | LFM2.5-1.2B-Thinking | 2143 | 53 | 0.9GB |
| Qualcomm Snapdragon® Gen4 (Samsung Galaxy S25 Ultra) | CPU | llama.cpp (Q4_0) | LFM2.5-1.2B-Thinking | 335 | 70 | 719MB |

LFM2.5-1.2B-Thinking works very well with long contexts on the AMD NPU: it delivers **~59 tok/s at 4K context** and **~52 tok/s at 16K** during decoding, and reaches a **prefill speed of ~2,226 tok/s with a 4K-token prompt**.
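As a quick sanity check, the per-phase rates in the table can be turned into a rough end-to-end latency estimate for the benchmark workload (1K prefill tokens + 100 decode tokens). This is only a sketch: it assumes the reported rates are sustained throughputs, and `estimate_latency_s` is an illustrative helper, not part of any benchmark harness.

```python
def estimate_latency_s(prefill_tokens: int, decode_tokens: int,
                       prefill_tps: float, decode_tps: float) -> float:
    """Approximate wall-clock seconds: prefill time plus decode time,
    treating each phase's tok/s figure as a constant throughput."""
    return prefill_tokens / prefill_tps + decode_tokens / decode_tps

# Snapdragon X Elite NPU row: 2591 tok/s prefill, 63 tok/s decode.
t = estimate_latency_s(1000, 100, 2591, 63)
print(f"{t:.2f} s")  # ~0.39 s prefill + ~1.59 s decode ≈ 1.97 s total
```

The same arithmetic applied to any other row gives a feel for how prefill-heavy versus decode-heavy workloads shift the bottleneck between the two columns.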

Check the detailed benchmark results [here](https://fastflowlm.com/docs/benchmarks/lfm2_results/).

These capabilities unlock new deployment scenarios across various devices, including vehicles, mobile devices, laptops, IoT devices, and embedded systems.
## Contact