Update README.md
README.md CHANGED

@@ -1,14 +1,14 @@
 ---
 title: README
 emoji: 🔥
 colorFrom: pink
 colorTo: indigo
 sdk: static
 pinned: false
 ---

-Introducing Moxin 7B
+Introducing **Moxin 7B**: the truly open, SOTA-performing LLM and VLM that's redefining transparency.

-We've open-sourced everything
+We've <u>**open-sourced everything**</u>: pre-training code, data, and models, including our GRPO-enhanced reasoning model. It outperforms Mistral, Qwen, and LLaMA on zero-shot and few-shot tasks and delivers superior reasoning on complex math benchmarks, all with an efficient training cost of ~$160K for full pretraining.

 We unleash the power of reproducible AI 🚀. Interested? Explore the models and code on our [GitHub](https://github.com/moxin-org/Moxin-LLM) and read the full paper on [arXiv](https://arxiv.org/abs/2412.06845).