---
license: mit
language:
  - en
base_model:
  - inclusionAI/Ling-flash-base-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
  - moe
---

# Ring-flash-linear-2.0

🤗 Hugging Face   |   🤖 ModelScope

## Introduction

We are excited to announce the official open-source release of Ring-flash-linear-2.0!

Building on the success of our Ling 2.0 series, this model continues to leverage a powerful hybrid architecture of linear and standard attention, balancing high performance with superior efficiency. By combining our proven MoE design with optimizations such as a 1/32 expert activation ratio and MTP layers, Ring-flash-linear-2.0 achieves the performance of a 40B dense model while activating only 6.1B parameters. The model was converted from Ling-flash-base-2.0 and further trained on an additional 1T tokens. On benchmarks, Ring-flash-linear-2.0 not only holds its own against standard-attention models (such as Ring-flash-2.0) but also outperforms other open-source MoE and dense models in its class on several demanding tasks. With support for a 128K context length, it is faster and more precise than ever, especially when handling long-form inputs and outputs.

Figure 1: Hybrid Linear Model Architecture
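To make the hybrid design more concrete, below is a minimal, self-contained PyTorch sketch of the general idea: a decoder stack in which most layers use linear attention and every few layers fall back to standard softmax attention. All class names, the layer ratio, and the simplified (non-causal) linear-attention math are illustrative assumptions, not the released modeling code.

```python
# Illustrative sketch of a hybrid linear/softmax attention stack.
# Names and the layer ratio are hypothetical, not the released modeling code.
import torch
import torch.nn as nn


class SoftmaxAttention(nn.Module):
    """Standard multi-head self-attention (quadratic in sequence length)."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x, need_weights=False)
        return out


class LinearAttention(nn.Module):
    """Simplified (non-causal) linear attention: O(n) in sequence length."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):
        q = torch.relu(self.q(x)) + 1e-6   # positive feature map
        k = torch.relu(self.k(x)) + 1e-6
        v = self.v(x)
        kv = torch.einsum("bnd,bne->bde", k, v)        # fixed-size state, independent of n
        z = 1.0 / torch.einsum("bnd,bd->bn", q, k.sum(dim=1))
        return torch.einsum("bnd,bde,bn->bne", q, kv, z)


class HybridStack(nn.Module):
    """Mostly linear-attention layers, with a periodic softmax-attention layer."""
    def __init__(self, dim, num_layers=8, softmax_every=4):
        super().__init__()
        self.layers = nn.ModuleList(
            SoftmaxAttention(dim) if (i + 1) % softmax_every == 0 else LinearAttention(dim)
            for i in range(num_layers)
        )

    def forward(self, x):
        for layer in self.layers:
            x = x + layer(x)   # residual connection
        return x


x = torch.randn(1, 1024, 256)
print(HybridStack(256)(x).shape)   # torch.Size([1, 1024, 256])
```

The point of the interleaving is that the linear-attention layers keep a fixed-size state regardless of sequence length, while the occasional softmax layers preserve the precision of full attention.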

## Evaluation

### Linear Attention, Highly Sparse, High-Speed Generation

Thanks to its hybrid attention mechanism and highly sparse MoE architecture, Ring-flash-linear-2.0 achieves near-linear time complexity and near-constant space complexity, resulting in outstanding inference efficiency. To demonstrate this advantage, we conducted head-to-head comparisons between our model and top-tier competitors of similar size or performance.

In the comparison with Qwen3-32B, Ring-flash-linear-2.0 shows a remarkable advantage in inference efficiency. During the prefill phase, once the context length exceeds 32K, its throughput approaches 5 times that of Qwen3-32B. The gap in the high-concurrency decoding phase is even larger: at a generation length of 32K, Ring-flash-linear-2.0 already delivers roughly 4 times the throughput, and when the generation length reaches 64K, the advantage surges to nearly 10 times. Even compared with the newly released hybrid-attention model Qwen3-Next-80B-A3B, Ring-flash-linear-2.0 retains superior inference efficiency: although its larger model size puts it at a disadvantage in terms of I/O, its higher proportion of linear-attention layers and its more efficient linear-attention implementation more than make up for it.

Figure 4: Ring-flash-linear-2.0 prefill throughput

Figure 5: Ring-flash-linear-2.0 decode throughput
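The throughput figures above were measured with the authors' serving setup. As a rough way to observe the same trend locally, the sketch below times a single prefill pass and a single greedy decode with plain `transformers`. The sequence lengths and the single-request loop are assumptions for illustration, and absolute numbers will be far below what a batched serving engine such as SGLang or vLLM reports.

```python
# Rough single-request throughput probe (illustrative only).
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-flash-linear-2.0"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, dtype="auto", device_map="auto", trust_remote_code=True
)

def sync():
    # Make CUDA timings meaningful; no-op on CPU.
    if torch.cuda.is_available():
        torch.cuda.synchronize()

prompt_len, gen_len = 4096, 1024   # raise toward 32K/64K to reproduce the regimes above
input_ids = torch.randint(0, tokenizer.vocab_size, (1, prompt_len)).to(model.device)

# Prefill: a single forward pass over the whole prompt.
sync(); t0 = time.time()
with torch.no_grad():
    model(input_ids)
sync()
print(f"prefill: {prompt_len / (time.time() - t0):.1f} tok/s")

# Decode: force gen_len new tokens with greedy decoding (timing includes the prefill).
sync(); t0 = time.time()
model.generate(input_ids, min_new_tokens=gen_len, max_new_tokens=gen_len, do_sample=False)
sync()
print(f"decode: {gen_len / (time.time() - t0):.1f} tok/s")
```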

## Model Downloads

| Model | #Total Params | #Activated Params | Context Length | Download |
| :---- | :-----------: | :---------------: | :------------: | :------: |
| Ring-flash-linear-2.0 | 100B | 6.1B | 128K | 🤗 HuggingFace <br> 🤖 ModelScope |

## Quickstart

### Requirements

```shell
pip install flash-linear-attention==0.3.2
pip install transformers==4.56.1
```
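The version pins matter because the modeling code presumably relies on kernels from the flash-linear-attention package. A small check script (not part of the official instructions) can confirm that the installed versions match:

```python
# Sanity-check that the pinned dependency versions are installed.
from importlib.metadata import version

for pkg, expected in [("flash-linear-attention", "0.3.2"), ("transformers", "4.56.1")]:
    installed = version(pkg)
    print(f"{pkg}: {installed}" + ("" if installed == expected else f" (expected {expected})"))
```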

### 🤗 Hugging Face Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-flash-linear-2.0"

# Load the model with automatic dtype and device placement; trust_remote_code is
# needed for the custom Ring-flash-linear modeling code.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)


prompts = [
    "Give me a short introduction to large language models."
]
# Build chat-formatted inputs; enable_thinking=True turns on the model's reasoning mode.
input_texts = []
for prompt in prompts:
    messages = [
        {"role": "user", "content": prompt}
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=True
    )
    input_texts.append(text)

print(input_texts)

# Left-pad so that batched generation aligns the ends of the prompts.
model_inputs = tokenizer(input_texts, return_tensors="pt", return_token_type_ids=False, padding=True, padding_side='left').to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192,
    do_sample=False,
)
# Strip the prompt tokens so only the newly generated tokens are decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

responses = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)

print("*" * 30)
print(responses)
print("*" * 30)

### SGLang

```shell
python -m sglang.launch_server \
    --model-path <model_path> \
    --trust-remote-code \
    --tp-size 4 \
    --disable-radix-cache \
    --json-model-override-args "{\"linear_backend\": \"seg_la\"}"
```
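Once the server is up, it exposes an OpenAI-compatible API (port 30000 by default in SGLang). A minimal client request might look like the sketch below; the base URL and the served model name depend on how you launched the server and are assumptions here:

```python
# Minimal OpenAI-compatible client for the SGLang server launched above.
# Port 30000 is SGLang's default; the model name must match the served model path.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="inclusionAI/Ring-flash-linear-2.0",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```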

### vLLM

## Citation