Update README.md
README.md CHANGED
@@ -39,7 +39,7 @@ base_model: LiquidAI/LFM2.5-350M-Base
 
 LFM2.5 is a new family of hybrid models designed for **on-device deployment**. It builds on the LFM2 architecture with extended pre-training and reinforcement learning.
 
-- **Best-in-class performance**: A
+- **Best-in-class performance**: A 350M model rivaling much larger models, bringing high-quality AI to your pocket.
 - **Fast edge inference**: 313 tok/s decode on AMD CPU, 188 tok/s on Snapdragon Gen4. Runs under 1GB of memory with day-one support for llama.cpp, MLX, and vLLM.
 - **Scaled training**: Extended pre-training from 10T to 28T tokens and large-scale multi-stage reinforcement learning.
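As a rough sanity check on the "under 1GB of memory" claim for a 350M-parameter model, the weight footprint can be estimated from parameter count and precision. This is a back-of-envelope sketch, not a measurement: the precision labels are common llama.cpp-style quantization levels chosen for illustration, and the figures cover weights only (no KV cache or runtime overhead).

```python
# Back-of-envelope parameter-memory estimate for a 350M-parameter model.
# Weights only; excludes KV cache, activations, and runtime overhead.
params = 350_000_000

# Bytes per parameter at a few common precisions (illustrative assumptions).
bytes_per_param = {
    "fp16/bf16": 2.0,
    "8-bit quantized": 1.0,
    "4-bit quantized": 0.5,
}

for name, nbytes in bytes_per_param.items():
    gb = params * nbytes / 1e9
    print(f"{name}: ~{gb:.2f} GB")
```

Even unquantized fp16 weights come to roughly 0.7 GB, which is consistent with the model fitting in under 1GB on edge devices.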