---
title: README
emoji: 🔥
colorFrom: pink
colorTo: indigo
sdk: static
pinned: false
---

**Moxin LM: From SOTA Research to Efficient Deployment**

- **Open Creation:** The **Moxin-7B series** comprises our truly open, SOTA-performing LLM and VLM. We build, fine-tune, and openly release our own models.

- **Efficient Deployment:** We specialize in extreme quantization, creating resource-efficient variants of popular models (like DeepSeek and Kimi) to run anywhere.

We unleash the power of reproducible AI. Explore our models below and on [GitHub](https://github.com/moxin-org), and read our research on [Moxin 7B (Open Creation)](https://arxiv.org/abs/2412.06845) and [MoE Compression (Efficient Deployment)](https://arxiv.org/abs/2509.25689).
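
To get started quickly, here is a minimal sketch of loading one of our models with Hugging Face `transformers`. The model ID shown is an assumption for illustration; check the organization page for the exact repository names.

```python
# Minimal sketch: load a Moxin LLM with Hugging Face transformers.
# The model ID is an assumption for illustration; see
# https://huggingface.co/moxin-org for the actual repository names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moxin-org/moxin-llm-7b"  # hypothetical ID, verify on the org page

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain extreme quantization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```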