Update README.md
## Introduction
Ring-lite is a fully open-source MoE LLM provided by InclusionAI, with 16.8B total parameters of which 2.75B are activated per token. It builds on the publicly available [Ling-lite-1.5](https://huggingface.co/inclusionAI/Ling-lite-1.5) model, using a joint training pipeline that combines knowledge distillation with reinforcement learning. The model achieves performance comparable to state-of-the-art (SOTA) small-size reasoning models on challenging benchmarks (AIME, LiveCodeBench, and GPQA-Diamond) while activating only one-third of their parameters.
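The gap between total and activated parameters follows from top-k expert routing in a Mixture-of-Experts layer: each token passes through the shared (attention/embedding) weights plus only a few routed experts. A minimal sketch of the arithmetic — the expert count and parameter split below are hypothetical for illustration, not Ring-lite's actual configuration:

```python
# Illustrative only: how top-k MoE routing keeps the per-token active
# parameter count far below the total. The split below (1.2B shared,
# 64 experts, 8 routed per token) is a hypothetical example.

def active_fraction(total_params: float, shared_params: float,
                    num_experts: int, top_k: int) -> float:
    """Fraction of parameters used per token when a router selects
    top_k of num_experts expert blocks; shared parameters are
    always active."""
    expert_params = (total_params - shared_params) / num_experts
    active = shared_params + top_k * expert_params
    return active / total_params

# Hypothetical 16.8B-parameter MoE: 1.2B shared, rest over 64 experts.
frac = active_fraction(16.8e9, 1.2e9, num_experts=64, top_k=8)
print(f"active fraction per token: {frac:.2%}")
```

With these assumed numbers roughly a fifth of the weights are touched per token; the real ratio depends on the actual expert count, expert size, and router top-k.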
</div>
## Evaluation
For a comprehensive evaluation of our reasoning models, we implemented automatic benchmarks that assess performance across math, code, and science.
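Reasoning benchmarks such as AIME are commonly reported as accuracy averaged over several sampled completions per problem. A generic sketch of that avg@k aggregation — the problem IDs and correctness flags below are made up, and this is not the actual evaluation harness:

```python
# Generic avg@k scoring sketch (hypothetical data, not the real harness).

def avg_at_k(correct_flags: list[int]) -> float:
    """Mean correctness over k sampled completions for one problem."""
    return sum(correct_flags) / len(correct_flags)

# Hypothetical results: 4 samples per problem, 1 = correct, 0 = incorrect.
runs = {
    "problem-1": [1, 1, 0, 1],
    "problem-2": [0, 0, 1, 0],
}
per_problem = {pid: avg_at_k(flags) for pid, flags in runs.items()}
score = sum(per_problem.values()) / len(per_problem)  # dataset-level avg@4
print(f"avg@4: {score:.2f}")
```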
<p align="center">
<img src="https://huggingface.co/inclusionAI/Ring-lite/resolve/main/performance.png" width="1000"/>