hzeng412 committed · Commit 09f25dc · verified · 1 Parent(s): a8d9218

Update README.md

Files changed (1): README.md (+5 −3)
README.md CHANGED
@@ -7,8 +7,10 @@ sdk: static
  pinned: false
  ---
 
- Introducing **Moxin 7B**: The truly open, SOTA-performing LLM and VLM that's redefining transparency.
-
- We've <u>**open-sourced EVERYTHING**</u>—pre-training code, data, and models, including our GRPO-enhanced Reasoning model. It outperforms Mistral, Qwen, and LLaMA in zero-shot/few-shot tasks and delivers superior reasoning on complex math benchmarks, all with an efficient training cost of ~$160K for full pretraining.
-
- We unleash the power of reproducible AI 🚀. Interested? Explore the models and code on our [GitHub](https://github.com/moxin-org/Moxin-LLM) and read the full paper on [arXiv](https://arxiv.org/abs/2412.06845).
+ **Moxin AI: From SOTA Research to Efficient Deployment**
+
+ - **Open Creation:** The **Moxin-7B series** is our truly open, SOTA-performing LLM and VLM. We build, fine-tune, and openly release our own models.
+
+ - **Efficient Deployment:** We specialize in extreme quantization, creating resource-efficient variants of popular models (like DeepSeek and Kimi) to run anywhere.
+
+ We unleash the power of reproducible AI 🚀. Explore our models below and on [GitHub](https://github.com/moxin-org), and read our research on [Moxin 7B (Open Creation)](https://arxiv.org/abs/2412.06845) and [MoE Compression (Efficient Deployment)](https://arxiv.org/abs/2509.25689).
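For readers who want to try the Moxin-7B models described in the updated README, here is a minimal sketch of loading one with the Hugging Face `transformers` library. The repository ID used below is an assumption for illustration only; check the moxin-org page on the Hub for the exact model names.

```python
# Minimal sketch: loading a Moxin-7B checkpoint with Hugging Face transformers.
# NOTE: the repo ID below is hypothetical; look up the exact model name on the
# moxin-org Hub page before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moxin-org/moxin-llm-7b"  # hypothetical repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the idea of reproducible AI in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```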