---
title: README
emoji: 🔥
colorFrom: pink
colorTo: indigo
sdk: static
pinned: false
---

**Moxin LM: From SOTA Research to Efficient Deployment**

- **Open Creation:** The **Moxin-7B series** is our family of truly open, SOTA-performing LLMs and VLMs. We build, fine-tune, and openly release our own models.

- **Efficient Deployment:** We specialize in extreme quantization, creating resource-efficient variants of popular models (such as DeepSeek and Kimi) that can run almost anywhere.

We unleash the power of reproducible AI 🚀. Explore our models below and on [GitHub](https://github.com/moxin-org), and read our research on [Moxin 7B (Open Creation)](https://arxiv.org/abs/2412.06845) and [MoE Compression (Efficient Deployment)](https://arxiv.org/abs/2509.25689).