---
language:
- en
- zh
library_name: transformers
license: mit
pipeline_tag: text-generation
---

# GLM-4.7-Flash

<div align="center">
<img src="https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg" width="15%"/>
</div>
<p align="center">
👋 Join our <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community.
<br>
📖 Check out the GLM-4.7 <a href="https://z.ai/blog/glm-4.7" target="_blank">technical blog</a> and the <a href="https://arxiv.org/abs/2508.06471" target="_blank">technical report (GLM-4.5)</a>.
<br>
📍 Use the GLM-4.7-Flash API on the <a href="https://docs.z.ai/guides/llm/glm-4.7">Z.ai API Platform</a>.
<br>
👉 Try <a href="https://chat.z.ai">GLM-4.7</a> with one click.
</p>

## Introduction

GLM-4.7-Flash is a 30B-A3B Mixture-of-Experts (MoE) model, with 30B total parameters and about 3B activated per token. As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.

### Performance on Benchmarks

| Benchmark          | GLM-4.7-Flash | Qwen3-30B-A3B-Thinking-2507 | GPT-OSS-20B |
|--------------------|---------------|-----------------------------|-------------|
| AIME 25            | 91.6          | 85.0                        | 91.7        |
| GPQA               | 75.2          | 73.4                        | 71.5        |
| LCB v6             | 64.0          | 66.0                        | 61.0        |
| HLE                | 14.4          | 9.8                         | 10.9        |
| SWE-bench Verified | 59.2          | 22.0                        | 34.0        |
| τ²-Bench           | 79.5          | 49.0                        | 47.7        |
| BrowseComp         | 42.8          | 2.29                        | 28.3        |

### Evaluation Parameters

**Default Settings (Most Tasks)**

* temperature: `1.0`
* top-p: `0.95`
* max new tokens: `131072`

For multi-turn agentic tasks (τ²-Bench and Terminal Bench 2), please turn on [Preserved Thinking mode](https://docs.z.ai/guides/capabilities/thinking-mode).

**Terminal Bench, SWE-bench Verified**

* temperature: `0.7`
* top-p: `1.0`
* max new tokens: `16384`

**τ²-Bench**

* temperature: `0`
* max new tokens: `16384`

For the τ²-Bench evaluation, we added an extra prompt to the Retail and Telecom user interactions to avoid failure modes caused by users ending the interaction incorrectly. For the Airline domain, we applied the domain fixes proposed in the [Claude Opus 4.5](https://assets.anthropic.com/m/64823ba7485345a7/Claude-Opus-4-5-System-Card.pdf) release report.

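As a concrete illustration, the default sampling settings above map directly onto the fields of an OpenAI-compatible chat request. The endpoint path and served model name below are placeholders for your own deployment, not values prescribed by this card:

```python
import json

# Default sampling settings used for most benchmark tasks (see above).
# "glm-4.7-flash" is a hypothetical served-model name; adjust it to
# whatever name your server registers.
payload = {
    "model": "glm-4.7-flash",
    "messages": [{"role": "user", "content": "hello"}],
    "temperature": 1.0,
    "top_p": 0.95,
    "max_tokens": 131072,
}

# Serialize the request body as it would be POSTed to /v1/chat/completions.
body = json.dumps(payload)
print(body)
```

For the Terminal Bench / SWE-bench settings, swap in `"temperature": 0.7`, `"top_p": 1.0`, and `"max_tokens": 16384`.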
## Serve GLM-4.7-Flash Locally

For local deployment, GLM-4.7-Flash supports inference frameworks including vLLM and SGLang. Comprehensive deployment instructions are available in the official [GitHub](https://github.com/zai-org/GLM-4.5) repository.

Note that GLM-4.7-Flash is currently supported only on the main branches of vLLM and SGLang.

### vLLM

+ Install the nightly vLLM build with pip (pypi.org must be used as the index URL):

```shell
pip install -U vllm --pre --index-url https://pypi.org/simple --extra-index-url https://wheels.vllm.ai/nightly
pip install git+https://github.com/huggingface/transformers.git
```

### SGLang

+ Install the supported versions of SGLang and Transformers (using `uv` is recommended):

```shell
uv pip install sglang==0.3.2.dev9039+pr-17247.g90c446848 --extra-index-url https://sgl-project.github.io/whl/pr/
uv pip install git+https://github.com/huggingface/transformers.git@76732b4e7120808ff989edbd16401f61fa6a0afa
```

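Once installed, SGLang's OpenAI-compatible server can typically be started with `sglang.launch_server`. The flags below are a sketch, not a command from the official docs; adjust tensor parallelism and ports to your hardware and SGLang version:

```shell
# Launch an OpenAI-compatible SGLang server for GLM-4.7-Flash.
# --tp sets the tensor-parallel degree; set it to your GPU count.
python -m sglang.launch_server \
  --model-path zai-org/GLM-4.7-Flash \
  --tp 1 \
  --host 0.0.0.0 \
  --port 30000
```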
### transformers

Install the latest Transformers from source:

```shell
pip install git+https://github.com/huggingface/transformers.git
```

and then run:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "zai-org/GLM-4.7-Flash"
messages = [{"role": "user", "content": "hello"}]
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=MODEL_PATH,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
inputs = inputs.to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
output_text = tokenizer.decode(generated_ids[0][inputs.input_ids.shape[1]:])
print(output_text)
```

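GLM-4.7-Flash is a reasoning model, and, as with GLM-4.5 (whose `glm45` reasoning parser is used for vLLM below), raw completions typically wrap the reasoning trace in `<think>…</think>` tags. A minimal sketch for separating the trace from the final answer, assuming that tag format:

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Split a raw completion into (reasoning_trace, final_answer).

    Assumes the model emits its reasoning inside a single
    <think>...</think> block, as GLM reasoning models do.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No thinking block found: treat everything as the answer.
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Illustrative raw output, not an actual model transcript.
raw = "<think>The user greeted me; reply politely.</think>Hello! How can I help?"
reasoning, answer = split_thinking(raw)
print(answer)  # -> Hello! How can I help?
```

When serving through vLLM or SGLang with a reasoning parser enabled, this split is done server-side and you receive the fields separately.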
### Launch with vLLM

The following command serves the FP8-Dynamic quantized checkpoint with MTP speculative decoding, tool calling, and reasoning parsing enabled:

```shell
vllm serve dtometzki/GLM-4.7-Flash-FP8-Dynamic \
    --speculative-config.method mtp \
    --speculative-config.num_speculative_tokens 1 \
    --tool-call-parser glm47 \
    --reasoning-parser glm45 \
    --enable-auto-tool-choice \
    --served-model-name glm-4.7-flash
```
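Once the server is up, it exposes an OpenAI-compatible API (vLLM defaults to port 8000). A stdlib-only sketch of a chat request against it; the snippet only builds and prints the request, leaving the actual `urlopen` call commented so it runs without a live server:

```python
import json
import urllib.request

# Assumes a local vLLM deployment with the --served-model-name used above.
URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "glm-4.7-flash",
    "messages": [{"role": "user", "content": "hello"}],
    "temperature": 1.0,
    "top_p": 0.95,
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# To actually send the request (requires a running server):
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(request.full_url, request.get_method())
```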