Update README.md
README.md CHANGED
@@ -7,8 +7,10 @@ sdk: static
 pinned: false
 ---

-
+**Moxin AI: From SOTA Research to Efficient Deployment**

-
+- **Open Creation:** The **Moxin-7B series** is our truly open, SOTA-performing LLM and VLM. We build, fine-tune, and openly release our own models.

-
+- **Efficient Deployment:** We specialize in extreme quantization, creating resource-efficient variants of popular models (like DeepSeek and Kimi) to run anywhere.
+
+We unleash the power of reproducible AI 🚀. Explore our models below and on [GitHub](https://github.com/moxin-org), and read our research on [Moxin 7B (Open Creation)](https://arxiv.org/abs/2412.06845) and [MoE Compression (Efficient Deployment)](https://arxiv.org/abs/2509.25689).