Update README.md #3
by thinkthinking · opened

README.md CHANGED

---
license: mit
base_model:
- inclusionAI/Ling-flash-base-2.0
pipeline_tag: text-generation
library_name: transformers
---

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
<p>

<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>   |   🚀 <a href="https://zenmux.ai/inclusionai/ling-flash-2.0?utm_source=hf_inclusionAI">Experience Now</a></p>

## Introduction

Today, **Ling-flash-2.0** is officially open-sourced! 🚀
Following the release of the **language model [Ling-mini-2.0](https://huggingface.co/inclusionAI/Ling-mini-2.0)** and the **thinking model [Ring-mini-2.0](https://huggingface.co/inclusionAI/Ring-mini-2.0)**, we are now open-sourcing the third MoE LLM under the **Ling 2.0 architecture: Ling-flash-2.0**, a language model with **100B total parameters** and **6.1B activated parameters (4.8B non-embedding)**.
Trained on **20T+ tokens of high-quality data**, together with **supervised fine-tuning** and **multi-stage reinforcement learning**, Ling-flash-2.0 achieves **SOTA performance among dense models under 40B parameters**, despite activating only ~6B parameters. Compared to MoE models with larger activation/total parameters, it also demonstrates strong competitiveness. Notably, it delivers outstanding performance in **complex reasoning, code generation, and frontend development**.

### Powerful Complex Reasoning Abilities

We conducted a comprehensive evaluation of Ling-flash-2.0’s reasoning capabilities, reporting strong results on representative benchmarks:

- **Multi-disciplinary knowledge reasoning**: GPQA-Diamond, MMLU-Pro
- **Advanced mathematical reasoning**: AIME 2025, Omni-MATH, OptMATH (advanced mathematical optimization tasks)
- **Challenging code generation**: LiveCodeBench v6, CodeForces-Elo
- **Logical reasoning**: KOR-Bench, ARC-Prize
- **Key regulated industries (Finance, Healthcare)**: FinanceReasoning, HealthBench

Compared with **dense models under 40B** (e.g., Qwen3-32B-Non-Thinking, Seed-OSS-36B-Instruct (think budget=0)) and **larger-activation/total-parameter MoE models** (e.g., Hunyuan-A13B-Instruct, GPT-OSS-120B/low), **Ling-flash-2.0** demonstrates stronger complex reasoning power. Moreover, it shows high competitiveness on **creative tasks** (Creative Writing v3).

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/zxAvQ7QtrAwAAAAAQqAAAAgADkZ7AQFr/fmt.webp"/>
<p>

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/qQ_sTqrxiesAAAAAQuAAAAgADkZ7AQFr/original"/>
<p>

### Efficient Architecture, High-Speed Inference

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/fMdiQZqYKSAAAAAAVdAAAAgADkZ7AQFr/fmt.avif"/>
<p>

Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a __1/32 activation-ratio MoE architecture__, optimized across multiple design choices: expert granularity, shared-expert ratio, attention balance, __aux-loss-free + sigmoid routing strategy__, MTP layers, QK-Norm, Partial-RoPE, and more. These refinements enable __small-activation MoE__ models to achieve __7× efficiency gains__ over equivalent dense architectures.
In other words, with just __6.1B activated parameters (4.8B non-embedding)__, __Ling-flash-2.0__ can match the performance of ~40B dense models. Thanks to its small activation size, it also delivers major inference speed advantages:
* On __H20 hardware__, Ling-flash-2.0 achieves __200+ tokens/s__, offering __3× speedups__ compared to 36B dense models in everyday use.
* With __YaRN extrapolation__, it supports __128K context length__, and as output length grows, its relative speedup can reach __7× or more__.
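
To put the efficiency claims above in perspective, here is a rough back-of-the-envelope sketch (ours, not part of the original README). It uses the common approximation that per-token decode compute scales with the number of activated parameters; the parameter counts are the ones quoted above, the 36B dense baseline mirrors the speed comparison, and real speedups additionally depend on memory bandwidth and the serving setup.

```python
# Rough illustration only: per-token decode compute approximated as 2 * activated parameters.
# Parameter counts come from the text above; attention/KV-cache costs and hardware efficiency
# are ignored, so this is a naive estimate, not a benchmark.
ling_flash_total = 100e9    # total parameters (MoE)
ling_flash_active = 6.1e9   # activated parameters per token
dense_baseline = 36e9       # dense comparison model used in the speed claims

activation_ratio = ling_flash_active / ling_flash_total
flops_moe = 2 * ling_flash_active
flops_dense = 2 * dense_baseline

print(f"activated / total parameters: {activation_ratio:.3f}")
print(f"approx. per-token FLOPs, Ling-flash-2.0: {flops_moe:.2e}")
print(f"approx. per-token FLOPs, 36B dense:      {flops_dense:.2e}")
print(f"naive compute ratio (dense / MoE): {flops_dense / flops_moe:.1f}x")
```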

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/oR9UTY7S0QgAAAAAgKAAAAgADkZ7AQFr/original"/>
<p>

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/Hid1RrgsCUAAAAAAQYAAAAgADkZ7AQFr/fmt.webp"/>
<p>

## Model Downloads

You can refer to the following table to see the various stages of Ling-flash-2.0 models available for download. If you are located in mainland China, we also provide the model on ModelScope.cn to speed up the download process.

<center>

| **Model** | **Context Length** | **Download** |
| :-----------------: | :----------------: | :------------: |
| Ling-flash-base-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-flash-base-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-flash-base-2.0) |
| Ling-flash-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-flash-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-flash-2.0) |

</center>

Note: If you are interested in previous versions, please visit the past model collections on [Hugging Face](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
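
If you prefer to script the download, the following minimal sketch uses the `huggingface_hub` Python client (the `local_dir` path is just an example; users on ModelScope would use the `modelscope` package instead):

```python
# Minimal download sketch using huggingface_hub (pip install huggingface_hub).
# The local_dir below is an arbitrary example path; adjust it to your environment.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="inclusionAI/Ling-flash-2.0",
    local_dir="./Ling-flash-2.0",
)
```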

## Quickstart

### 🚀 Try Online

You can experience Ling-flash-2.0 online at: [ZenMux](https://zenmux.ai/inclusionai/ling-flash-2.0?utm_source=hf_inclusionAI)

### 🔌 API Usage

You can also use Ling-flash-2.0 through API calls:

```python
from openai import OpenAI

# 1. Initialize the OpenAI client
client = OpenAI(
    # 2. Point the base URL to the ZenMux endpoint
    base_url="https://zenmux.ai/api/v1",
    # 3. Replace with the API Key from your ZenMux user console
    api_key="<your ZENMUX_API_KEY>",
)

# 4. Make a request
completion = client.chat.completions.create(
    # 5. Specify the model to use in the format "provider/model-name"
    model="inclusionai/ling-flash-2.0",
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?"
        }
    ]
)

print(completion.choices[0].message.content)
```
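
The endpoint above is OpenAI-compatible, so token-by-token streaming should work the usual way. A small optional sketch that reuses the `client` from the previous snippet, assuming the endpoint honors the standard `stream=True` option:

```python
# Streaming sketch: assumes the endpoint supports the standard OpenAI `stream=True` option.
# Reuses the `client` object created in the snippet above.
stream = client.chat.completions.create(
    model="inclusionai/ling-flash-2.0",
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```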

### 🤗 Hugging Face Transformers

Here is a code snippet to show you how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-flash-2.0"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
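
For interactive use you can also print tokens as they are generated instead of decoding at the end. This optional sketch (ours, not from the original snippet) uses `transformers`' built-in `TextStreamer` and reuses `model`, `tokenizer`, and `model_inputs` from the code above:

```python
# Optional: stream generated tokens to stdout as they are produced.
# Reuses model, tokenizer, and model_inputs from the previous snippet.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer,
)
```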

### vLLM

#### Offline Inference:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ling-flash-2.0")

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)

llm = LLM(model="inclusionAI/Ling-flash-2.0", dtype='bfloat16')
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)
```
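
The `llm.generate` call returns one `RequestOutput` per prompt; a short follow-up for printing the generated text:

```python
# Each element of `outputs` corresponds to one input prompt.
for output in outputs:
    print(output.outputs[0].text)
```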

#### Online Inference:

```shell
vllm serve inclusionAI/Ling-flash-2.0 \
    ...
```

To handle long context in vLLM using YaRN, we need to follow these two steps:

1. Add a `rope_scaling` field to the model's `config.json` file, for example:

```json
{
    ...,
    "rope_scaling": {
        ...
    }
}
```

2. Use an additional parameter `--max-model-len` to specify the desired maximum context length (for example, `--max-model-len 131072` for a 128K context) when starting the vLLM service.

For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).
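
As a concrete illustration of step 1, here is a small helper sketch that patches a locally downloaded `config.json`. The path and the YaRN values are assumptions on our part (a `factor` of 4.0 extrapolates the native 32K context to the 128K mentioned above); check the official configuration for the exact field names and values before relying on them.

```python
# Hypothetical helper: add a YaRN rope_scaling entry to a local config.json.
# The path and the concrete values are illustrative assumptions, not official settings.
import json
from pathlib import Path

cfg_path = Path("./Ling-flash-2.0/config.json")  # example local model directory
cfg = json.loads(cfg_path.read_text())
cfg["rope_scaling"] = {
    "factor": 4.0,                             # 32K * 4 = 128K target context
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}
cfg_path.write_text(json.dumps(cfg, indent=2))
print("rope_scaling written:", cfg["rope_scaling"])
```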

### SGLang

#### Environment Preparation

We will later submit our model to the SGLang official release; for now, we can prepare the environment with the following steps:

```shell
pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
```

You can use the docker image as well:

```shell
docker pull lmsysorg/sglang:v0.5.2rc0-cu126
```

Then you should apply the patch to your sglang installation:

```shell
# the patch command is needed; run `yum install -y patch` if it is missing
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
```

#### Run Inference

Both BF16 and FP8 models are supported by SGLang now; which one is used depends on the dtype of the model in ${MODEL_PATH}. They share the same launch command:

- Start server:

```shell
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --trust-remote-code \
    --attention-backend fa3
```

MTP is supported for the base model, but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN` to the start command.

More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.html).
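
The launched server exposes an OpenAI-compatible `/v1/chat/completions` endpoint, which you can query with `curl -s http://localhost:${PORT}/v1/chat/completions ...` or from Python. A minimal client sketch follows; the port and served model name are placeholders for whatever you used when starting the server:

```python
# Hypothetical client sketch for the locally launched SGLang server.
# Replace the port and model path with the values used at launch time.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="inclusionAI/Ling-flash-2.0",  # the served model path
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
)
print(completion.choices[0].message.content)
```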

### Finetuning

We recommend using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ling](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md).

## License

This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ling-V2/blob/master/LICENCE).