Update README.md
README.md CHANGED

@@ -44,7 +44,7 @@ We're releasing the weights of our first MoE based on LFM2, with 8.3B total para
 - **Code and knowledge** capabilities are significantly improved compared to LFM2-2.6B.
 - Quantized variants fit comfortably on high-end **phones, tablets, and laptops**.
 
-Find more information about LFM2-8B-A1B in our [blog post](https://www.liquid.ai/blog/).
+Find more information about LFM2-8B-A1B in our [blog post](https://www.liquid.ai/blog/lfm2-8b-a1b-an-efficient-on-device-mixture-of-experts).
 
 ## 📄 Model details
 