---
license: mit
language:
- en
base_model:
- inclusionAI/Ling-flash-base-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- moe
---

# Ring-flash-linear-2.0

<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>&nbsp;&nbsp;|&nbsp;&nbsp;🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>

## Introduction

We are excited to announce the official open-source release of Ring-flash-linear-2.0!

<!-- Building on the success of our Ling 2.0 series, this model continues to leverage a powerful hybrid architecture of linear and standard attention, balancing high performance with superior efficiency. By integrating our proven MoE design with optimizations like a 1/32 expert activation ratio and MTP layers, Ring-mini-linear achieves the performance of an 8B dense model while activating only 1.4B parameters. This model was converted from [Ling-mini-base-2.0](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-20T) and further trained on an additional 600B tokens.
When it comes to benchmarks, Ring-mini-linear-2.0 not only holds its own against standard attention models (like Ring-mini-2.0) but also outperforms other open-source MoE and dense models in its class on several demanding tasks. Plus, with native support for a 128K long context, it is faster and more precise than ever, especially when handling long-form inputs and outputs. -->

<div style="display: flex; justify-content: center;">
<div style="text-align: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68d20104a6f8ea66da0cb447/UsAtWWsWB9eXcMxV5iCCa.png" width="600">
<p style="margin-top: 8px; font-size: 14px;"><strong>Figure 1:</strong> Hybrid Linear Model Architecture</p>
</div>
</div>

## Evaluation

<!-- To properly evaluate the model's reasoning capabilities, we compared it against three other models (Ring-mini-2.0, Qwen3-8B-thinking, and GPT-OSS-20B-Medium) on six challenging reasoning benchmarks spanning mathematics, coding, and science. The results demonstrate that the performance of the hybrid linear architecture is by no means inferior to that of standard softmax attention; in fact, it even outperforms the other models on three of the benchmarks.
<div style="display: flex; justify-content: center;">
<div style="text-align: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68d20104a6f8ea66da0cb447/_tjjgBEBlankfrWUY0N9i.png" width="800">
<p style="margin-top: 8px; font-size: 14px;"><strong>Figure 2:</strong> Model Performance Comparison</p>
</div>
</div>

Here is a demo of a small Snake game, with the code generated by our model.
<div style="display: flex; justify-content: center;">
<div style="text-align: center;">
<img src="https://mdn.alipayobjects.com/huamei_jcuiuk/afts/img/tqfCQoTqRdAAAAAAgZAAAAgADr6CAQFr/original" width="800">
<p style="margin-top: 8px; font-size: 14px;"><strong>Figure 3:</strong> Snake Game</p>
</div>
</div> -->

## Linear Attention, High Sparsity, High-Speed Generation

<!-- Thanks to its hybrid attention mechanism and highly sparse MoE architecture, Ring-mini-linear-2.0 achieves near-linear time complexity and constant space complexity, resulting in outstanding inference efficiency. To fully demonstrate this advantage, we conducted a head-to-head comparison between our model and top-tier competitors of similar size or performance.
The results are remarkable. In the prefill stage, Ring-mini-linear-2.0's performance is exceptional; when the context length exceeds 256K, its throughput is over 12 times higher than that of Qwen3-8B. Furthermore, in the high-concurrency decode stage, its capabilities are even more pronounced. For generation lengths exceeding 32K, its throughput easily surpasses 12 times that of Qwen3-8B.

<div style="display: flex; justify-content: center; align-items: flex-start; gap: 20px;">
<div style="text-align: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68d20104a6f8ea66da0cb447/O9gHLIOCdpWvBbPC6bMM5.webp" width="500">
<p style="margin-top: 8px; font-size: 14px;"><strong>Figure 4:</strong> Ring-mini-linear-2.0 prefill throughput</p>
</div>

<div style="text-align: center;">
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68d20104a6f8ea66da0cb447/AvMTStWFX-Frzv-vOzyr6.webp" width="500">
</p>
<p style="margin-top: 8px; font-size: 14px;"><strong>Figure 5:</strong> Ring-mini-linear-2.0 decode throughput</p>
</div>

</div> -->

## Model Downloads

<!-- <div align="center">

| **Model**            | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ring-mini-linear-2.0 | 16.8B             | 1.4B                  | 128K               | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-mini-linear-2.0)<br>[🤖 ModelScope](https://modelscope.cn/models/inclusionAI/Ring-mini-linear-2.0) |
</div> -->

## Quickstart

### Requirements

```bash
pip install flash-linear-attention==0.3.2
pip install transformers==4.56.1
```
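
To confirm both pins resolved, a quick check (this assumes the flash-linear-attention package is imported as `fla`):

```bash
# Sanity-check the pinned versions; the `fla` import name is an assumption
# about the flash-linear-attention package.
python -c "import fla, transformers; print(fla.__version__, transformers.__version__)"
```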

### 🤗 Hugging Face Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-flash-linear-2.0"

# trust_remote_code is required because the hybrid linear-attention
# modeling code ships with the checkpoint, not with transformers itself.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompts = [
    "Give me a short introduction to large language models."
]

# Render each prompt with the chat template. enable_thinking=True asks the
# model to reason before producing its final answer.
input_texts = []
for prompt in prompts:
    messages = [
        {"role": "user", "content": prompt}
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=True,
    )
    input_texts.append(text)

print(input_texts)

# Left-pad the batch so every prompt ends at the same position.
model_inputs = tokenizer(
    input_texts,
    return_tensors="pt",
    return_token_type_ids=False,
    padding=True,
    padding_side="left",
).to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192,
    do_sample=False,
)
# Drop the prompt tokens, keeping only the newly generated completion.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

responses = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)

print("*" * 30)
print(responses)
print("*" * 30)
```
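
Because the prompt is rendered with `enable_thinking=True`, the decoded `responses` can be expected to include the model's reasoning trace ahead of the final answer. With `do_sample=False` decoding is greedy and deterministic; raise `max_new_tokens` if long reasoning chains get cut off.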

### SGLang

```bash
python -m sglang.launch_server \
    --model-path <model_path> \
    --trust-remote-code \
    --tp-size 4 \
    --disable-radix-cache \
    --json-model-override-args "{\"linear_backend\": \"seg_la\"}"
```
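
Once the server is up, it exposes an OpenAI-compatible API (on SGLang's default port 30000 unless overridden); the request below is a sketch, and the served model name may differ depending on how the server was launched:

```bash
# Query the OpenAI-compatible chat endpoint; the port and model name are
# assumptions based on SGLang defaults and this repo's model id.
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "inclusionAI/Ring-flash-linear-2.0",
    "messages": [{"role": "user", "content": "Give me a short introduction to large language models."}],
    "max_tokens": 1024
  }'
```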

### vLLM
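
A minimal launch sketch, assuming vLLM can serve this checkpoint through its remote-code path (the flags mirror the SGLang command above and are not a verified recipe):

```bash
# Sketch only: assumes vLLM loads this architecture via --trust-remote-code;
# adjust <model_path> and the parallelism degree to your hardware.
vllm serve <model_path> \
    --trust-remote-code \
    --tensor-parallel-size 4
```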

## Citation