Update README.md

README.md (CHANGED)
@@ -21,12 +21,12 @@ tags:
 
 We are excited to announce the official open-source release of Ring-flash-linear-2.0!
 
-Building on the success of our Ling 2.0 series, this model continues to leverage a powerful hybrid architecture of linear and standard attention, perfectly balancing high performance with superior efficiency. By integrating our proven MoE design with optimizations like a 1/32 expert activation ratio and MTP layers, Ring-flash-linear achieves the performance of a 40 B dense model while activating only 6.1 B parameters. This model was converted from [Ling-flash-base-2.0](https://huggingface.co/inclusionAI/Ling-flash-base-2.0), further trained on an additional
+Building on the success of our Ling 2.0 series, this model continues to leverage a powerful hybrid architecture of linear and standard attention, perfectly balancing high performance with superior efficiency. By integrating our proven MoE design with optimizations such as a 1/32 expert activation ratio and MTP layers, Ring-flash-linear achieves the performance of a 40B dense model while activating only 6.1B parameters. This model was converted from [Ling-flash-base-2.0](https://huggingface.co/inclusionAI/Ling-flash-base-2.0) and further trained on an additional 1T tokens.
-When it comes to benchmarks, Ring-flash-linear-2.0 not only holds its own against standard attention models (like
+When it comes to benchmarks, Ring-flash-linear-2.0 not only holds its own against standard attention models (like Ring-flash-2.0) but also outperforms other open-source MoE and dense models in its class on several demanding tasks. Plus, with support for a 128k long context, it's faster and more precise than ever, especially when handling long-form inputs and outputs.
 
 <div style="display: flex; justify-content: center;">
 <div style="text-align: center;">
-<img src="https://cdn-uploads.huggingface.co/production/uploads/68d20104a6f8ea66da0cb447/
+<img src="https://cdn-uploads.huggingface.co/production/uploads/68d20104a6f8ea66da0cb447/PHRg8ipzJtr0p6sojAa5T.png" width="800">
 <p style="margin-top: 8px; font-size: 14px;"><strong>Figure 1:</strong> Hybrid Linear Model Architecture</p>
 </div>
 </div>
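To make the architecture description above concrete, here is a minimal sketch, assuming hypothetical values throughout (layer count, interleaving ratio, and expert count are placeholders, not numbers from this README), of how a hybrid stack can interleave linear-attention and standard-attention layers, and how a 1/32 expert activation ratio maps onto top-k routing:

```python
# Illustrative sketch only, not the released implementation.
# All counts below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class LayerSpec:
    kind: str  # "linear_attention" or "softmax_attention"
    moe: bool  # every layer uses a sparse-MoE FFN in this sketch

def build_hybrid_stack(n_layers: int = 32, softmax_every: int = 8) -> list:
    """Place one standard (softmax) attention layer every `softmax_every`
    layers; all remaining layers use linear attention."""
    return [
        LayerSpec(
            kind="softmax_attention" if (i + 1) % softmax_every == 0 else "linear_attention",
            moe=True,
        )
        for i in range(n_layers)
    ]

# A 1/32 activation ratio means each token routes to num_experts / 32 experts,
# e.g. a hypothetical 256 experts with top-8 routing.
num_experts, activation_ratio = 256, 1 / 32
experts_per_token = int(num_experts * activation_ratio)  # -> 8

stack = build_hybrid_stack()
print(sum(s.kind == "softmax_attention" for s in stack), "softmax layers of", len(stack))
print("experts per token:", experts_per_token)
```

The MTP (multi-token prediction) layers mentioned above are orthogonal to this layout sketch and are not modeled here.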
@@ -34,20 +34,20 @@ When it comes to benchmarks, Ring-flash-linear-2.0 not only holds its own agains
 ## Evaluation
 <div style="display: flex; justify-content: center;">
 <div style="text-align: center;">
-<img src="https://mdn.alipayobjects.com/huamei_t783ie/afts/img/mc1wSo7zHV4AAAAARHAAAAgADgCDAQFr/original" width="
+<img src="https://mdn.alipayobjects.com/huamei_t783ie/afts/img/mc1wSo7zHV4AAAAARHAAAAgADgCDAQFr/original" width="1000">
 <p style="margin-top: 8px; font-size: 14px;"><strong>Figure 2:</strong> Model Performance Comparison</p>
 </div>
 </div>
 
 <div style="display: flex; justify-content: center;">
 <div style="text-align: center;">
-<img src="https://mdn.alipayobjects.com/huamei_t783ie/afts/img/N5xMTq4KouMAAAAARHAAAAgADgCDAQFr/original" width="
+<img src="https://mdn.alipayobjects.com/huamei_t783ie/afts/img/N5xMTq4KouMAAAAARHAAAAgADgCDAQFr/original" width="1000">
 <p style="margin-top: 8px; font-size: 14px;"><strong>Figure 3:</strong> Model Performance Comparison</p>
 </div>
 </div>
 
 
-## Linear Attention, Highly Sparse
+## Linear Attention, Highly Sparse, High-Speed Generation
 
 Thanks to its hybrid attention mechanism and highly sparse MoE architecture, Ring-flash-linear-2.0 achieves near-linear time complexity and constant space complexity, resulting in outstanding inference efficiency. To fully demonstrate this advantage, we conducted a head-to-head comparison between our model and top-tier competitors of similar size or performance.
 What is truly exciting is that, in the comparison with Qwen3-32B, Ring-flash-linear-2.0 demonstrates a remarkable advantage in inference efficiency. During the prefill phase, once the context length exceeds 32k, its throughput approaches 5 times that of Qwen3-32B. Its advantage in the high-concurrency decoding phase is even more impressive: at a generation length of 32k it already delivers roughly 4 times the throughput, and when the generation length reaches 64k this advantage surges to nearly 10 times. Even against the newly emerging hybrid-attention model Qwen3-Next-80BA3B, Ring-flash-linear-2.0 comes out ahead: although its larger model size puts it at a disadvantage in terms of IO, its higher proportion of linear-attention layers and its more efficient linear-attention implementation still grant it superior inference efficiency.
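The complexity claim above follows from how linear attention decodes: history is folded into a fixed-size recurrent state instead of an ever-growing KV cache, so per-token cost and memory stay constant in sequence length. A minimal numpy sketch of that recurrence (illustrative only; the model's actual kernels add gating, normalization, and chunked parallel forms):

```python
import numpy as np

# Kernelized linear attention at decode time: keep a fixed-size state S
# (d_k x d_v) per head and update it once per token. Softmax attention
# would instead re-read a KV cache that grows with every generated token.
d_k, d_v, steps = 64, 64, 1000
rng = np.random.default_rng(0)

S = np.zeros((d_k, d_v))      # constant-size recurrent state
for _ in range(steps):
    q, k = rng.normal(size=(2, d_k))
    v = rng.normal(size=d_v)
    S += np.outer(k, v)       # fold the new token's key/value into the state
    out = q @ S               # O(d_k * d_v) per step, independent of step count
print(out.shape)              # (64,) no matter how many steps ran
```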
@@ -113,8 +113,7 @@ for prompt in prompts:
     text = tokenizer.apply_chat_template(
         messages,
         tokenize=False,
-        add_generation_prompt=True
-        enable_thinking=True
+        add_generation_prompt=True
     )
     input_texts.append(text)
 
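For readers landing on this hunk without the surrounding example, a self-contained version of the chat-template flow might look like the sketch below. This is a hedged reconstruction, not the README's exact script: the model id `inclusionAI/Ring-flash-linear-2.0`, the prompt, and the generation settings are assumptions.

```python
# Hedged sketch of the surrounding usage; assumes standard transformers APIs
# and a hypothetical model id. trust_remote_code is typical for custom
# hybrid architectures.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inclusionAI/Ring-flash-linear-2.0"  # assumption, not from this diff
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

prompts = ["Give me a short introduction to large language models."]
input_texts = []
for prompt in prompts:
    messages = [{"role": "user", "content": prompt}]
    # Build the chat-formatted string exactly as in the diff above.
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    input_texts.append(text)

inputs = tokenizer(input_texts, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```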
@@ -149,7 +148,7 @@ pip3 install sgl-kernel==0.3.9.post2 vllm==0.10.2
 
 Then you should install our sglang whl package:
 ```shell
-pip install https://
+pip install https://raw.githubusercontent.com/inclusionAI/Ring-V2/main/hybrid_linear/whls/sglang-0.5.2-py3-none-any.whl --no-deps --force-reinstall
 ```
 
 #### Run Inference
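The Run Inference section itself is not part of this diff. Once the sglang wheel is installed, a server launched with `python -m sglang.launch_server --model-path <model> --trust-remote-code` can typically be queried through its OpenAI-compatible endpoint; a hedged sketch follows, in which the URL, port, and model id are all assumptions rather than values from this README:

```python
# Hedged sketch: query a locally launched sglang server through its
# OpenAI-compatible endpoint. base_url, port, and model id are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:30000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="inclusionAI/Ring-flash-linear-2.0",  # assumption
    messages=[{"role": "user", "content": "Explain linear attention in one paragraph."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```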
@@ -187,7 +186,7 @@ pip install torch==2.7.0 torchvision==0.22.0
 
 Then you should install our vLLM wheel package:
 ```shell
-pip install https://
+pip install https://raw.githubusercontent.com/inclusionAI/Ring-V2/main/hybrid_linear/whls/vllm-0.8.5+cuda12_8_gcc10_2_1-cp310-cp310-linux_x86_64.whl --no-deps --force-reinstall
 ```
 
 #### Offline Inference
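Likewise, the Offline Inference section is not shown in this diff. With the patched vLLM wheel installed, offline generation usually follows vLLM's standard API; a hedged sketch, in which the model id and sampling settings are assumptions:

```python
# Hedged sketch of vLLM offline inference; not the README's exact example.
from vllm import LLM, SamplingParams

llm = LLM(model="inclusionAI/Ring-flash-linear-2.0", trust_remote_code=True)  # id is an assumption
params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=512)
outputs = llm.generate(["Give me a short introduction to large language models."], params)
for out in outputs:
    print(out.outputs[0].text)
```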