Data correction #1
by TWei-flm - opened

README.md CHANGED

```diff
@@ -216,14 +216,16 @@ We report the following numbers with 1K prefill and 100 decode tokens:
 
 | Device | Inference | Framework | Model | Prefill (tok/s) | Decode (tok/s) | Memory |
 | ---------------------------------------------------- | --------- | ---------------- | -------------------- | --------------- | -------------- | ------ |
-| AMD Ryzen AI 395+ | NPU | FastFlowLM | LFM2.5-1.2B-Thinking |
-| AMD Ryzen AI 9 HX 370 | NPU | FastFlowLM | LFM2.5-1.2B-Thinking |
+| AMD Ryzen AI 395+ | NPU | FastFlowLM | LFM2.5-1.2B-Thinking | 1487 | 60 | 1600MB (full context) |
+| AMD Ryzen AI 9 HX 370 | NPU | FastFlowLM | LFM2.5-1.2B-Thinking | 1487 | 57 | 1600MB (full context) |
 | AMD Ryzen AI 9 HX 370 | CPU | llama.cpp (Q4_0) | LFM2.5-1.2B-Thinking | 2975 | 116 | 856MB |
 | Qualcomm Snapdragon® X Elite | NPU | NexaML | LFM2.5-1.2B-Thinking | 2591 | 63 | 0.9GB |
 | Qualcomm Snapdragon® Gen4 (ROG Phone9 Pro) | NPU | NexaML | LFM2.5-1.2B-Thinking | 4391 | 82 | 0.9GB |
 | Qualcomm Dragonwing IQ9 (IQ-9075) (IoT) | NPU | NexaML | LFM2.5-1.2B-Thinking | 2143 | 53 | 0.9GB |
 | Qualcomm Snapdragon® Gen4 (Samsung Galaxy S25 Ultra) | CPU | llama.cpp (Q4_0) | LFM2.5-1.2B-Thinking | 335 | 70 | 719MB |
 
+**LFM2.5-1.2B-Thinking excels at long-context inference.** For example, on AMD Ryzen™ NPUs with FastFlowLM, decode throughput sustains ~52 tok/s at a 16K context and ~46 tok/s even at the full 32K context, indicating robust long-context scalability. Detailed long-context benchmarks on AMD Ryzen™ NPUs with FastFlowLM are available [here](https://fastflowlm.com/docs/benchmarks/lfm2_results/).
+
 These capabilities unlock new deployment scenarios across various devices, including vehicles, mobile devices, laptops, IoT devices, and embedded systems.
 
 ## Contact
```
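For a rough sense of what the corrected numbers mean end to end, here is a minimal Python sketch (not part of the README; it assumes throughput stays constant within each phase) that turns a prefill rate and a decode rate into total request latency for the 1K-prefill / 100-decode shape named in the hunk header:

```python
def request_latency_s(prompt_tokens: int, output_tokens: int,
                      prefill_tps: float, decode_tps: float) -> float:
    """End-to-end latency estimate: prefill time plus decode time,
    assuming constant throughput in each phase."""
    return prompt_tokens / prefill_tps + output_tokens / decode_tps

# Corrected AMD Ryzen AI 395+ NPU row: 1487 prefill tok/s, 60 decode tok/s.
print(f"{request_latency_s(1000, 100, 1487, 60):.1f} s")   # ~2.3 s

# Long-context figure from the added paragraph: ~46 decode tok/s at 32K
# context, so the same 100-token decode alone takes 100 / 46 ≈ 2.2 s.
print(f"{100 / 46:.1f} s decode-only")
```

The llama.cpp CPU rows could presumably be reproduced with the `llama-bench` tool that ships with llama.cpp (e.g. `llama-bench -m <model>.gguf -p 1024 -n 100`), though the README excerpt does not say which harness produced them.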