Update README.md
README.md CHANGED

@@ -18,7 +18,9 @@ base_model:
 
 ## Introduction
 
-We present a compact yet powerful reasoning model, **Ring-mini-2.0**. It has 16B total parameters, of which 1.4B are activated per input token (789M non-embedding).
+We present a compact yet powerful reasoning model, **Ring-mini-2.0**. It has 16B total parameters, of which 1.4B are activated per input token (789M non-embedding). Although **Ring-mini-2.0** is quite compact, it still reaches the top tier of sub-10B dense LLMs and even matches or surpasses much larger MoE models.
+
+It achieves this through pre-training on 20T tokens of high-quality data, long-CoT supervised fine-tuning, and multi-stage reinforcement learning.
 
 
 ## Model Downloads
 
@@ -140,14 +142,4 @@ Please refer to [GitHub](https://github.com/inclusionAI/Ring/blob/main/README.md
 This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ring-lite-2507/blob/main/LICENSE).
 
 ## Citation
-```
-@misc{ringteam2025ringlitescalablereasoningc3postabilized,
-      title={Ring-lite: Scalable Reasoning via C3PO-Stabilized Reinforcement Learning for LLMs},
-      author={Ling Team},
-      year={2025},
-      eprint={2506.14731},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL},
-      url={https://arxiv.org/abs/2506.14731},
-}
-```
+TODO
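For context on the architecture the new introduction describes (a sparse MoE checkpoint with 16B total parameters, roughly 1.4B active per token), here is a minimal generation sketch. It is not part of this commit: the repo id `inclusionAI/Ring-mini-2.0` and loading through the standard `transformers` causal-LM interface are assumptions, and the model card's own quickstart should take precedence.

```python
# Minimal sketch, not taken from this commit. Assumptions: the checkpoint is
# published as "inclusionAI/Ring-mini-2.0" and works with the standard
# transformers causal-LM API; any custom MoE code is pulled in via trust_remote_code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-mini-2.0"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",      # keep the dtype the checkpoint was saved in
    device_map="auto",       # spread the full 16B weights across available devices
    trust_remote_code=True,
)

# Build a chat prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain mixture-of-experts models in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Note the memory implication of the activated-parameter figure: only ~1.4B parameters participate in any single token's forward pass, but all 16B must be resident, which is why `device_map="auto"` is used above.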