Update README.md
|
We are excited to announce the official open-source release of Ring-flash-linear-2.0!
Building on the success of our Ling 2.0 series, this model continues to leverage a powerful hybrid architecture of linear and standard attention, balancing high performance with superior efficiency. By integrating our proven MoE design with optimizations such as a 1/32 expert activation ratio and MTP layers, Ring-flash-linear-2.0 achieves the performance of a 40B dense model while activating only 6.1B parameters. The model was converted from [Ling-flash-base-2.0](https://huggingface.co/inclusionAI/Ling-flash-base-2.0) and further trained on an additional 1T tokens.
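As a rough illustration of what a 1/32 activation ratio means in practice, here is a minimal, hypothetical top-k routing sketch (the expert count, top-k, and hidden size below are made up for illustration and are not Ring's actual configuration; for instance, 256 routed experts with top-8 routing gives 8/256 = 1/32):

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes chosen only to illustrate the ratio:
# 256 routed experts with top-8 routing -> 8/256 = 1/32 activated.
NUM_EXPERTS, TOP_K, HIDDEN = 256, 8, 4096

def route_tokens(hidden_states: torch.Tensor, router_weight: torch.Tensor):
    """Pick the top-k experts per token and return normalized gate weights."""
    logits = hidden_states @ router_weight  # (tokens, NUM_EXPERTS)
    top_gates, top_experts = F.softmax(logits, dim=-1).topk(TOP_K, dim=-1)
    top_gates = top_gates / top_gates.sum(dim=-1, keepdim=True)
    return top_gates, top_experts

tokens = torch.randn(4, HIDDEN)
gates, experts = route_tokens(tokens, torch.randn(HIDDEN, NUM_EXPERTS))
print(experts.shape)  # torch.Size([4, 8]): only 8 of 256 experts run per token
```

Only the selected experts' FFNs execute for each token, which is why the activated parameter count (6.1B) is so much smaller than the total (100B).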
When it comes to benchmarks, Ring-flash-linear-2.0 not only holds its own against standard-attention models (like Ring-flash-2.0) but also outperforms other open-source MoE and dense models in its class on several demanding tasks. With support for a 128k context, it is also faster and more precise than ever, especially when handling long-form inputs and outputs.
## Linear Attention, Highly Sparse, High-Speed Generation
Thanks to its hybrid attention mechanism and highly sparse MoE architecture, Ring-flash-linear-2.0 achieves near-linear time complexity and constant space complexity, resulting in outstanding inference efficiency. To fully demonstrate this advantage, we conducted a head-to-head comparison between our model and top-tier competitors of similar size or performance.
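To make the complexity claim concrete, the sketch below shows a generic linear-attention decode step (an illustrative toy, not Ring's actual kernel; feature maps, normalization, and gating are omitted, and the dimensions are hypothetical). The running state is a fixed-size matrix, so per-token memory stays constant however long the generation runs, in contrast to a softmax KV cache that grows with every token:

```python
import torch

D_K, D_V = 64, 64  # toy head dimensions, not the model's real sizes

def linear_attn_decode_step(state: torch.Tensor, k: torch.Tensor,
                            v: torch.Tensor, q: torch.Tensor):
    """One autoregressive step of (unnormalized) linear attention."""
    state = state + torch.outer(k, v)  # fold the new token into the state
    out = q @ state                    # read out: O(D_K * D_V) per token
    return state, out

state = torch.zeros(D_K, D_V)
for _ in range(64_000):  # decode 64k tokens...
    k, v, q = torch.randn(D_K), torch.randn(D_V), torch.randn(D_K)
    state, out = linear_attn_decode_step(state, k, v, q)
# ...yet the "cache" is still a single 64x64 matrix, whereas a softmax
# KV cache would by now hold all 64k keys and values.
```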
What is truly exciting is the comparison with Qwen3-32B, where Ring-flash-linear-2.0 demonstrates a remarkable advantage in inference efficiency. In the prefill phase, once the context length exceeds 32k, its throughput approaches 5 times that of Qwen3-32B. Its performance in the high-concurrency decode phase is even more impressive: at a generation length of 32k, Ring-flash-linear-2.0 already boasts a significant 4-times throughput advantage, and when the generation length reaches 64k, that advantage surges to nearly 10 times. Even compared with the newly released hybrid-attention model Qwen3-Next-80BA3B, Ring-flash-linear-2.0 retains superior inference efficiency: although its larger model size puts it at a disadvantage in terms of I/O, its higher proportion of linear-attention layers and its more efficient linear-attention implementation more than compensate.
<div style="display: flex; justify-content: center; align-items: flex-start; gap: 20px;">
<div style="text-align: center;">
<img src="https://mdn.alipayobjects.com/huamei_t783ie/afts/img/wtM_TJ4KVqYAAAAARpAAAAgADgCDAQFr/original" width="500">
<p style="margin-top: 8px; font-size: 14px;"><strong>Figure 4:</strong> Ring-flash-linear-2.0 prefill throughput</p>
</div>

<div style="text-align: center;">
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_t783ie/afts/img/3n9lSZscvBwAAAAAUhAAAAgADgCDAQFr/original" width="500">
</p>
<p style="margin-top: 8px; font-size: 14px;"><strong>Figure 5:</strong> Ring-flash-linear-2.0 decode throughput</p>
</div>

</div>
## Model Downloads
<div align="center">

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ring-flash-linear-2.0 | 100B | 6.1B | 128K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-flash-linear-2.0) <br>[🤖 ModelScope](https://modelscope.cn/models/inclusionAI/Ring-flash-linear-2.0) |

</div>
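If you want the weights locally before running the Quickstart below, a standard `huggingface_hub` download works; a minimal sketch (the `local_dir` path is just an example):

```python
from huggingface_hub import snapshot_download

# Fetch the full repository (weights, tokenizer, configs) to a local folder.
snapshot_download(
    repo_id="inclusionAI/Ring-flash-linear-2.0",
    local_dir="./Ring-flash-linear-2.0",  # example target path
)
```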
## Quickstart