## Introduction

Today, we are officially open-sourcing Ring-mini-linear-2.0.

This model continues to employ a hybrid architecture that combines linear attention and standard attention mechanisms, striking a balance between performance and efficiency. Inheriting the efficient MoE (Mixture-of-Experts) design of the Ling 2.0 series, together with architectural optimizations such as a 1/32 expert activation ratio and MTP layers, Ring-mini-linear-2.0 achieves the performance of an ~8B dense model while activating only 1.4B of its 16B total parameters. The model was converted from [Ling-mini-base-2.0](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-20T) and continually trained on an additional 600B tokens. In overall performance, the hybrid linear model is comparable to standard-attention models of a similar size (e.g., Ring-mini-2.0) and surpasses other open-source MoE and dense models of the same class on several challenging benchmarks. Furthermore, it natively supports a 128K context window and demonstrates superior speed and accuracy, especially on tasks involving long inputs and outputs.
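To make the trade-off concrete, here is a minimal NumPy sketch (ours, with toy dimensions; not the model's actual kernels) of why causal linear attention needs only a constant-size running state, giving O(n) cost in sequence length, while softmax attention materializes an n×n score matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                       # toy sequence length and head dimension
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

# Softmax attention: the n x n causal score matrix makes cost quadratic in n.
scores = Q @ K.T / np.sqrt(d)
causal = np.tril(np.ones((n, n), dtype=bool))
scores = np.where(causal, scores, -np.inf)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out_softmax = weights @ V

# Linear attention (softmax dropped for illustration): a d x d state
# S_t = S_{t-1} + k_t v_t^T is updated once per token, so memory stays
# constant and total work is O(n) instead of O(n^2).
S = np.zeros((d, d))
out_linear = np.empty((n, d))
for t in range(n):
    S += np.outer(K[t], V[t])
    out_linear[t] = Q[t] @ S

# The recurrent form matches the quadratic causal form without softmax.
ref = np.tril(Q @ K.T) @ V
assert np.allclose(out_linear, ref)
```

The constant-size state is also why the decode throughput of the hybrid model does not degrade with a growing KV cache the way pure softmax attention does.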

## Evaluation

To better demonstrate our model's reasoning capabilities, we compared it with three other models (Ring-mini-2.0, Qwen3-8B-thinking, and GPT-OSS-20B-Medium) on five challenging reasoning benchmarks across mathematics, code, and science. We observe that the hybrid-linear architecture achieves performance comparable to that of softmax-attention models.

<div style="display: flex; justify-content: center;">
<div style="text-align: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68d20104a6f8ea66da0cb447/RcHlh5PriRuOLsErG8RjK.webp" width="1000">
<p style="margin-top: 8px; font-size: 14px;"><strong>Figure 2:</strong> Model Performance Comparison</p>
</div>
</div>

<div style="display: flex; justify-content: center; align-items: flex-start; gap: 20px;">
<div style="text-align: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68d20104a6f8ea66da0cb447/yHVE-nmTgV3w0z4X2eg_g.png" width="500">
<p style="margin-top: 8px; font-size: 14px;"><strong>Figure 4:</strong> Ring-mini-linear-2.0 prefill throughput</p>
</div>

<div style="text-align: center;">
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68d20104a6f8ea66da0cb447/mTqsHh0yFtQjpCN_fw4e0.png" width="500">
</p>
<p style="margin-top: 8px; font-size: 14px;"><strong>Figure 5:</strong> Ring-mini-linear-2.0 decode throughput</p>
</div>

</div>

## Quickstart

### Requirements

Then install our sglang wheel package:
```shell
pip install https://raw.githubusercontent.com/inclusionAI/Ring-V2/main/hybrid_linear/whls/sglang-0.5.2-py3-none-any.whl --no-deps --force-reinstall
```

#### Run Inference

More usage examples can be found [here](https://docs.sglang.ai/basic_usage/send_request.html).
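As a quick illustration of that API, here is a small helper of our own (not part of sglang; the model name and sampling value are illustrative assumptions) that builds the JSON body for the OpenAI-compatible `/v1/chat/completions` endpoint the server exposes:

```python
import json

# Build an OpenAI-compatible chat-completions request body.
def chat_payload(prompt, model="inclusionAI/Ring-mini-linear-2.0", temperature=0.7):
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

body = json.dumps(chat_payload("Give me a short introduction to large language models."))
# POST `body` to http://localhost:${PORT}/v1/chat/completions with
# header "Content-Type: application/json" (via curl or any HTTP client).
```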

### 🚀 vLLM

#### Environment Preparation

Since our pull request has not yet been submitted to the vLLM community, please prepare the environment by following the steps below:
```shell
pip install torch==2.7.0 torchvision==0.22.0
```

Then install our vLLM wheel package:
```shell
pip install https://raw.githubusercontent.com/inclusionAI/Ring-V2/main/hybrid_linear/whls/vllm-0.8.5%2Bcuda12_8_gcc10_2_1-cp310-cp310-linux_x86_64.whl --no-deps --force-reinstall
```

#### Offline Inference

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ring-mini-linear-2.0")

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)

llm = LLM(model="inclusionAI/Ring-mini-linear-2.0", dtype="bfloat16", enable_prefix_caching=False, max_num_seqs=128)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt},
]

# Render the chat template to a plain prompt string before generation.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

outputs = llm.generate([text], sampling_params)
print(outputs[0].outputs[0].text)  # print the generated completion
```

#### Online Inference

```shell
vllm serve inclusionAI/Ring-mini-linear-2.0 \
    --tensor-parallel-size 2 \
    --pipeline-parallel-size 1 \
    --gpu-memory-utilization 0.90 \
    --max-num-seqs 512 \
    --no-enable-prefix-caching
```

For more information, please see our [GitHub](https://github.com/inclusionAI/Ring-V2/blob/main/hybrid_linear/README.md).

## Citation