---
license: mit
language:
- zh
- en
base_model:
- inclusionAI/Ling-lite
pipeline_tag: text-generation
---

# Ring-lite

<p align="center">
    <img src="https://huggingface.co/inclusionAI/Ring-lite-distill-preview/resolve/main/ant-bailing.png" width="100"/>
</p>

<p align="center">
    🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>
</p>

## Introduction

Ring-lite is a fully open-source MoE LLM provided by InclusionAI, with 16.8B total parameters and 2.75B activated parameters. It was derived from [Ling-lite-1.5](https://huggingface.co/inclusionAI/Ling-lite-1.5) through a training process involving reasoning SFT, reasoning RL, and general SFT. It delivers performance comparable to [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on reasoning benchmarks while activating only about one-third as many parameters, demonstrating that Ring-lite-distill is a more balanced and versatile model. Additionally, it maintains competitive latency and throughput compared to other reasoning LLMs of similar size.

## Model Downloads

<div align="center">

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ring-lite-distill-preview | 16.8B | 2.75B | 64K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-lite-distill) |

</div>
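
If you prefer to fetch the weights ahead of time, the snapshot can be pre-downloaded with `huggingface_hub`. This is a minimal sketch, assuming the same repo id as in the Quickstart below:

```python
# Minimal sketch: pre-download the model snapshot with huggingface_hub.
# The repo id mirrors the Quickstart section; adjust it if you use another variant.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="inclusionAI/Ring-lite-distill-preview")
print(local_dir)  # local path of the cached model files
```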

## Evaluation

To fully evaluate the model's performance, we assessed Ring-lite-distill-preview on both reasoning ability and general ability.

### Reasoning ability

<div align="center">

| **Model** | **AIME24** | **MATH-500** | **GPQA-diamond** | **LiveCodeBench** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| DeepSeek-R1-Distill-Qwen-7B (reported) | 55.5 | 92.8 | 49.1 | 37.6 |
| DeepSeek-R1-Distill-Qwen-7B (reproduced) | 53.2 | 93.7 | 50.4 | 36.5 |
| Ring-lite-distill-preview | 56.3 | 93.7 | 46.2 | 31.9 |

</div>

### General ability

<div align="center">

| **Model** | **IFEval** | **T-eval** | **BFCL_v2** | **MMLU** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| DeepSeek-R1-Distill-Qwen-7B (reproduced) | 39.3 | 26.9 | 38.9 | 44.1 |
| Ring-lite-distill-preview | 75.3 | 81.3 | 63.0 | 63.3 |

</div>

More details will be reported in our technical report. [TBD]

## Quickstart

### 🤗 Hugging Face Transformers

Here is a code snippet to show you how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-lite-distill-preview"

# Load the model and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ring, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]
# Render the conversation with the model's chat template, then tokenize it.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192
)
# Strip the prompt tokens so only the newly generated tokens remain.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
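
`generate` above uses the model's default decoding settings. If you want to experiment with sampled decoding, the standard `transformers` generation arguments apply; the values below are illustrative assumptions, not officially recommended settings:

```python
# Continuing from the snippet above. Sampling values are illustrative only.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.6,   # assumed value; tune for your workload
    top_p=0.95,        # assumed value
)
```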

## Dataset
The training data of Ring-lite-distill-preview will be released soon.

## Deployment
Please refer to the [GitHub repository](https://github.com/inclusionAI/Ring/blob/main/README.md) for deployment instructions.
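
For higher-throughput serving, an engine such as vLLM is a common choice. The sketch below is an illustration under stated assumptions (it presumes your vLLM build supports this model architecture; the GitHub README above is the authoritative reference):

```python
# Hedged sketch: offline batch inference with vLLM, assuming the Ring/Bailing
# architecture is supported by your vLLM build (see the GitHub README above).
from vllm import LLM, SamplingParams

llm = LLM(model="inclusionAI/Ring-lite-distill-preview", trust_remote_code=True)
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192)  # illustrative values

conversation = [
    {"role": "system", "content": "You are Ring, an assistant created by inclusionAI"},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
outputs = llm.chat(conversation, sampling_params=params)
print(outputs[0].outputs[0].text)
```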

## License
This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ring-lite-distill/blob/main/LICENSE).

## Citation
[TBD]