Update README.md
README.md CHANGED
@@ -9,6 +9,6 @@ pinned: false
 
 Introducing **Moxin 7B**: The truly open, SOTA-performing LLM and VLM that's redefining transparency.
 
-We've <u>**open-sourced
+We've <u>**open-sourced EVERYTHING**</u>—pre-training code, data, and models, including our GRPO-enhanced Reasoning model. It outperforms Mistral, Qwen, and LLaMA in zero-shot/few-shot tasks and delivers superior reasoning on complex math benchmarks, all with an efficient training cost of ~$160K for full pretraining.
 
 We unleash the power of reproducible AI 🚀. Interested? Explore the models and code on our [GitHub](https://github.com/moxin-org/Moxin-LLM) and read the full paper on [arXiv](https://arxiv.org/abs/2412.06845).