hzeng412 committed · verified
Commit 83b1d62 · 1 parent: c4ed9c8

Update README.md

Files changed (1): README.md (+10, -10)
README.md CHANGED
@@ -1,14 +1,14 @@
----
-title: README
-emoji: 🔥
-colorFrom: pink
-colorTo: indigo
-sdk: static
-pinned: false
----
+---
+title: README
+emoji: 🔥
+colorFrom: pink
+colorTo: indigo
+sdk: static
+pinned: false
+---
 
-Introducing Moxin 7B: The truly open, SOTA-performing LLM and VLM that's redefining transparency.
+Introducing **Moxin 7B**: The truly open, SOTA-performing LLM and VLM that's redefining transparency.
 
-We've open-sourced everything: pre-training code, data, and models, including our GRPO-enhanced Reasoning model. It outperforms Mistral, Qwen, and LLaMA on zero-shot/few-shot tasks and delivers superior reasoning on complex math benchmarks, all at an efficient training cost of ~$160K for full pretraining.
+We've <u>**open-sourced everything**</u>: pre-training code, data, and models, including our GRPO-enhanced Reasoning model. It outperforms Mistral, Qwen, and LLaMA on zero-shot/few-shot tasks and delivers superior reasoning on complex math benchmarks, all at an efficient training cost of ~$160K for full pretraining.
 
 We unleash the power of reproducible AI 🚀. Interested? Explore the models and code on our [GitHub](https://github.com/moxin-org/Moxin-LLM) and read the full paper on [arXiv](https://arxiv.org/abs/2412.06845).
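
For readers who want to try the released checkpoints right away, here is a minimal sketch using the Hugging Face `transformers` library. The model ID `moxin-org/moxin-llm-7b` is an assumption for illustration; check the GitHub repo for the exact published checkpoint names.

```python
# Minimal sketch: generate text with a Moxin 7B checkpoint via transformers.
# NOTE: the model ID below is an assumption; see the Moxin-LLM GitHub repo
# for the actual published checkpoint names.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moxin-org/moxin-llm-7b"  # assumed Hugging Face model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit a 7B model on one GPU
    device_map="auto",           # requires `accelerate`; auto-places weights
)

prompt = "Explain why open-sourcing pre-training data matters."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```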