hzeng412 committed · verified
Commit a8d9218 · 1 Parent(s): 83b1d62

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -9,6 +9,6 @@ pinned: false

  Introducing **Moxin 7B**: The truly open, SOTA-performing LLM and VLM that's redefining transparency.

- We've <u>**open-sourced everything**</u>—pre-training code, data, and models, including our GRPO-enhanced Reasoning model. It outperforms Mistral, Qwen, and LLaMA in zero-shot/few-shot tasks and delivers superior reasoning on complex math benchmarks, all with an efficient training cost of ~$160K for full pretraining.
+ We've <u>**open-sourced EVERYTHING**</u>—pre-training code, data, and models, including our GRPO-enhanced Reasoning model. It outperforms Mistral, Qwen, and LLaMA in zero-shot/few-shot tasks and delivers superior reasoning on complex math benchmarks, all with an efficient training cost of ~$160K for full pretraining.

  We unleash the power of reproducible AI 🚀. Interested? Explore the models and code on our [GitHub](https://github.com/moxin-org/Moxin-LLM) and read the full paper on [arXiv](https://arxiv.org/abs/2412.06845).