mratsim committed (verified)
Commit 3dffbf6 · Parent(s): 787f1a3

Upload folder using huggingface_hub

This view is limited to 50 files because the commit contains too many changes.

Files changed (50)
  1. .gitattributes +1 -0
  2. README.md +305 -0
  3. chat_template.jinja +86 -0
  4. config.json +98 -0
  5. generation_config.json +11 -0
  6. model-00001-of-00048.safetensors +3 -0
  7. model-00002-of-00048.safetensors +3 -0
  8. model-00003-of-00048.safetensors +3 -0
  9. model-00004-of-00048.safetensors +3 -0
  10. model-00005-of-00048.safetensors +3 -0
  11. model-00006-of-00048.safetensors +3 -0
  12. model-00007-of-00048.safetensors +3 -0
  13. model-00008-of-00048.safetensors +3 -0
  14. model-00009-of-00048.safetensors +3 -0
  15. model-00010-of-00048.safetensors +3 -0
  16. model-00011-of-00048.safetensors +3 -0
  17. model-00012-of-00048.safetensors +3 -0
  18. model-00013-of-00048.safetensors +3 -0
  19. model-00014-of-00048.safetensors +3 -0
  20. model-00015-of-00048.safetensors +3 -0
  21. model-00016-of-00048.safetensors +3 -0
  22. model-00017-of-00048.safetensors +3 -0
  23. model-00018-of-00048.safetensors +3 -0
  24. model-00019-of-00048.safetensors +3 -0
  25. model-00020-of-00048.safetensors +3 -0
  26. model-00021-of-00048.safetensors +3 -0
  27. model-00022-of-00048.safetensors +3 -0
  28. model-00023-of-00048.safetensors +3 -0
  29. model-00024-of-00048.safetensors +3 -0
  30. model-00025-of-00048.safetensors +3 -0
  31. model-00026-of-00048.safetensors +3 -0
  32. model-00027-of-00048.safetensors +3 -0
  33. model-00028-of-00048.safetensors +3 -0
  34. model-00029-of-00048.safetensors +3 -0
  35. model-00030-of-00048.safetensors +3 -0
  36. model-00031-of-00048.safetensors +3 -0
  37. model-00032-of-00048.safetensors +3 -0
  38. model-00033-of-00048.safetensors +3 -0
  39. model-00034-of-00048.safetensors +3 -0
  40. model-00035-of-00048.safetensors +3 -0
  41. model-00036-of-00048.safetensors +3 -0
  42. model-00037-of-00048.safetensors +3 -0
  43. model-00038-of-00048.safetensors +3 -0
  44. model-00039-of-00048.safetensors +3 -0
  45. model-00040-of-00048.safetensors +3 -0
  46. model-00041-of-00048.safetensors +3 -0
  47. model-00042-of-00048.safetensors +3 -0
  48. model-00043-of-00048.safetensors +3 -0
  49. model-00044-of-00048.safetensors +3 -0
  50. model-00045-of-00048.safetensors +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,305 @@
---
license: mit
base_model:
- zai-org/GLM-4.7-Flash
pipeline_tag: text-generation
---
# GLM-4.7-Flash (W8A8 FP8 with 2D-block quantization)

This repo contains GLM-4.7-Flash quantized to mixed FP8/BF16 precision, following state-of-the-art Mixture-of-Experts quantization practice.

- Original model:
  - [zai-org/GLM-4.7-Flash](https://huggingface.co/zai-org/GLM-4.7-Flash)

The model requires an Ada (RTX 4000 series), Hopper (H100) or Blackwell (RTX 5000 series) GPU for hardware FP8 support.
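Whether a given GPU has hardware FP8 boils down to its CUDA compute capability: 8.9 (Ada) and above. A minimal sketch of that check (the helper name is illustrative, not part of any library):

```python
def capability_supports_fp8(major: int, minor: int) -> bool:
    """Hardware FP8 (e4m3/e5m2) support arrived with CUDA compute capability 8.9 (Ada)."""
    return (major, minor) >= (8, 9)

# On a machine with PyTorch installed, the capability can be queried with:
#   import torch
#   major, minor = torch.cuda.get_device_capability(0)
#   capability_supports_fp8(major, minor)

print(capability_supports_fp8(12, 0))  # RTX 5090 / RTX Pro 6000 (sm_120) -> True
```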
## 📥 Usage & Running Instructions

The model was tested with vLLM on a single RTX Pro 6000; below is a launch script suitable for that configuration using the model's full 202,752-token context length (reduce `CONTEXT_SIZE`, e.g. to 131072, if memory is tight).

### Building vLLM with transformers v5

A vLLM built from HEAD with transformers v5 is needed for GLM-4.7-Flash (and GLM-4.6V). Here is the Dockerfile I use (specialized for RTX 5090 / RTX Pro 6000):

<details>
<summary>Dockerfile, see full repo: https://github.com/mratsim/llmops/blob/master/vllm/vllm-lmcache-Dockerfile</summary>

```Dockerfile
# vLLM + LMCache Multi-Stage Dockerfile
# + Docker cache
# + Ccache
# + uv cache
# Version: 2026-01-26
# Build: TMPDIR=vllm-dockercache podman build -v ./vllm-ccache:/root/.ccache -t vllm-202601-cu129 -f vllm-lmcache-Dockerfile

#################### ARGUMENTS ####################
ARG CUDA_VERSION=12.9.1
ARG LMCACHE_GIT_REF=dev
ARG VLLM_GIT_REF=main
ARG FLASHINFER_VERSION=0.6.1
ARG WHEELS_DIR=/tmp/wheels

#################### BUILD STAGE ####################
# Full build environment with all development tools
FROM nvcr.io/nvidia/cuda:${CUDA_VERSION}-devel-ubuntu24.04 AS build

ARG CUDA_VERSION
ARG LMCACHE_GIT_REF
ARG VLLM_GIT_REF
ARG WHEELS_DIR

# Build environment
ENV CUDA_VERSION=${CUDA_VERSION}
ENV WHEELS_DIR=${WHEELS_DIR}

# Build config
ENV UV_LINK_MODE=copy
ENV UV_HTTP_TIMEOUT=500
ENV UV_INDEX_STRATEGY="unsafe-best-match"
ENV MAX_JOBS=128
ENV NVCC_THREADS=8
ENV CMAKE_BUILD_TYPE=Release
ENV USE_CUDA=1
ENV CCACHE_DIR=/root/.ccache
ENV CUDA_HOME=/usr/local/cuda
ENV NVCC_GENCODE="-gencode=arch=compute_120,code=sm_120"
ENV TORCH_CUDA_ARCH_LIST='12.0'
ENV FLASH_ATTN_CUDA_ARCHS=120
ENV VLLM_FLASH_ATTN_VERSION=2
# Note: flashinfer is now installed as pre-compiled wheel in runtime
# ENV FLASHINFER_ENABLE_AOT=1
ENV VLLM_TARGET_DEVICE=cuda
ENV LMCACHE_NVCC_THREADS=8
ENV LMCACHE_MAX_JOBS=32
ENV LMCACHE_CUDA_VERSION=${CUDA_VERSION}
ENV LMCACHE_CUDA_ARCHS=12.0
ENV LMCACHE_TORCH_CUDA_ARCH_LIST=12.0
ENV LMCACHE_VLLM_FA_CMAKE_GPU_ARCHES=120
ENV VLLM_DOCKER_BUILD_CONTEXT=1
ENV PATH="/opt/venv/bin:$PATH"

# System packages
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    curl \
    ca-certificates \
    python3.12 \
    python3.12-venv \
    python3.12-dev \
    python3-pip \
    git \
    ccache \
    && rm -rf /var/lib/apt/lists/*

# Create venv
RUN python3 -m venv /opt/venv
RUN /opt/venv/bin/pip install --no-cache-dir --upgrade pip
RUN /opt/venv/bin/pip install --no-cache-dir uv

# Create wheel output directory
RUN mkdir -p ${WHEELS_DIR}

# Build tools
RUN --mount=type=cache,target=/root/.cache/uv \
    /opt/venv/bin/uv pip install ninja setuptools setuptools_scm

# App
# ---------------------------------------------------------------
# PyTorch
RUN --mount=type=cache,target=/root/.cache/uv \
    /opt/venv/bin/uv pip install --pre "torch>=2.9.0" torchvision torchaudio \
    --extra-index-url https://download.pytorch.org/whl/cu${CUDA_VERSION%.*}

# Clone vLLM
WORKDIR /workspace
RUN git clone --branch ${VLLM_GIT_REF} https://github.com/vllm-project/vllm

# vLLM: Specialize for SM120 (RTX 5090, RTX Pro 6000) to save hours of compilation time
WORKDIR /workspace/vllm
RUN sed -i \
    -e 's/ALLSPARK_ARCHS "8.0;8.6;8.7;8.9"/ALLSPARK_ARCHS "12.0"/g' \
    -e 's/MARLIN_ARCHS "8.0+PTX"/MARLIN_ARCHS "12.0"/g' \
    -e 's/MARLIN_FP8_ARCHS "8.9;12.0"/MARLIN_FP8_ARCHS "12.0"/g' \
    -e 's/MARLIN_OTHER_ARCHS "7.5;8.0+PTX"/MARLIN_OTHER_ARCHS "12.0"/g' \
    -e 's/MARLIN_MOE_ARCHS "8.0+PTX"/MARLIN_MOE_ARCHS "12.0"/g' \
    -e 's/MARLIN_MOE_FP8_ARCHS "8.9;12.0"/MARLIN_MOE_FP8_ARCHS "12.0"/g' \
    -e 's/MARLIN_MOE_OTHER_ARCHS "7.5;8.0+PTX"/MARLIN_MOE_OTHER_ARCHS "12.0"/g' \
    -e 's/HADACORE_ARCHS "8.0+PTX;9.0+PTX" "${CUDA_ARCHS}"/HADACORE_ARCHS "12.0" "${CUDA_ARCHS}"/g' \
    -e 's/"7.5;8.0;8.7;8.9+PTX" "${CUDA_ARCHS}"/"12.0" "${CUDA_ARCHS}"/g' \
    CMakeLists.txt

# vLLM build requirements
RUN --mount=type=cache,target=/root/.cache/uv \
    /opt/venv/bin/uv pip install -r requirements/build.txt \
    --extra-index-url https://download.pytorch.org/whl/cu${CUDA_VERSION%.*}

# Build vLLM wheel
RUN --mount=type=cache,target=/root/.cache/uv \
    --mount=type=cache,target=/root/.ccache \
    CCACHE_NOHASHDIR="true" \
    /opt/venv/bin/python3 setup.py bdist_wheel --dist-dir ${WHEELS_DIR} \
    | grep -vE "^copying|^creating|^writing|^adding"

# Clone LMCache
WORKDIR /workspace
RUN git clone --branch ${LMCACHE_GIT_REF} https://github.com/LMCache/LMCache

# Build LMCache wheel
WORKDIR /workspace/LMCache
RUN --mount=type=cache,target=/root/.cache/uv \
    --mount=type=cache,target=/root/.ccache \
    CCACHE_NOHASHDIR="true" \
    /opt/venv/bin/python3 setup.py bdist_wheel --dist-dir ${WHEELS_DIR} \
    | grep -vE "^copying|^creating|^writing|^adding"

# ccache stats
WORKDIR /workspace
RUN --mount=type=cache,target=/root/.ccache,sharing=locked \
    ccache -s

#################### RUNTIME STAGE ####################
# Lean production image without build tools
FROM nvcr.io/nvidia/cuda:${CUDA_VERSION}-runtime-ubuntu24.04 AS runtime

ARG CUDA_VERSION
ARG FLASHINFER_VERSION
ARG WHEELS_DIR

ENV UV_LINK_MODE=copy
ENV UV_HTTP_TIMEOUT=500
ENV UV_INDEX_STRATEGY="unsafe-best-match"
ENV CUDA_VERSION=${CUDA_VERSION}
ENV FLASHINFER_VERSION=${FLASHINFER_VERSION}
ENV FLASHINFER_CUDA_ARCH_LIST="12.0"
ENV WHEELS_DIR=${WHEELS_DIR}
ENV DEBIAN_FRONTEND=noninteractive
ENV VLLM_TARGET_DEVICE=cuda
ENV PATH="/opt/venv/bin:$PATH"

# Distro setup
RUN CUDA_VERSION_DASH=$(echo ${CUDA_VERSION} | cut -d. -f1,2 | tr '.' '-') && \
    apt-get update -y && \
    apt-get install -y --no-install-recommends \
    # Runtime packages
    kmod \
    # Install CUDA development tools for runtime JIT compilation
    # (FlashInfer, DeepGEMM, EP kernels all require compilation at runtime)
    build-essential \
    cuda-nvcc-${CUDA_VERSION_DASH} \
    # Python
    python3.12 \
    python3.12-venv \
    python3.12-dev \
    python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Create venv
RUN python3 -m venv /opt/venv
RUN /opt/venv/bin/pip install --no-cache-dir --upgrade pip
RUN /opt/venv/bin/pip install --no-cache-dir uv

# Install packages in venv
WORKDIR /tmp

# PyTorch (use uv cache for fast install)
RUN --mount=type=cache,target=/root/.cache/uv \
    /opt/venv/bin/uv pip install --pre "torch>=2.9.0" torchvision torchaudio \
    --extra-index-url https://download.pytorch.org/whl/cu${CUDA_VERSION%.*}

RUN --mount=type=cache,target=/root/.cache/uv \
    /opt/venv/bin/uv pip install torch-c-dlpack-ext \
    --extra-index-url https://download.pytorch.org/whl/cu${CUDA_VERSION%.*}

# Copy all wheels from build stage & install them
COPY --from=build ${WHEELS_DIR} /tmp/wheels
RUN /opt/venv/bin/uv pip install /tmp/wheels/*.whl

# Clean up
RUN rm -rf /tmp/wheels

# Install FlashInfer pre-compiled kernel cache and binaries
# https://docs.flashinfer.ai/installation.html
RUN --mount=type=cache,target=/root/.cache/uv \
    /opt/venv/bin/uv pip install flashinfer-python flashinfer-cubin==${FLASHINFER_VERSION} \
    && /opt/venv/bin/uv pip install flashinfer-jit-cache==${FLASHINFER_VERSION} \
    --extra-index-url https://flashinfer.ai/whl/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.') \
    && /opt/venv/bin/flashinfer show-config

# Allow z.ai GLM-4.6V and GLM-4.7-Flash models
RUN --mount=type=cache,target=/root/.cache/uv \
    apt-get update -y && \
    apt-get install -y --no-install-recommends git && \
    /opt/venv/bin/uv pip install git+https://github.com/huggingface/transformers.git && \
    apt-get purge -y --auto-remove git && rm -rf /var/lib/apt/lists/*

# TODO: Unsure why this is needed - remove it ASAP. Pulled by OpenCV for vLLM image processing
RUN apt-get update -y && \
    apt-get install -y --no-install-recommends libxcb1 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /workspace

CMD ["bash"]
```

</details>
### Running script

```bash
# Model configuration (Mandatory)
MODEL="mratsim/GLM-4.7-Flash-FP8"
MODELNAME="GLM-4.7-Flash"
GPU_UTIL=0.90
CONTEXT_SIZE=202752

# Prevent memory fragmentation
export PYTORCH_ALLOC_CONF=expandable_segments:True,max_split_size_mb:512

# Prevent vLLM from using 100% CPU when idle (strongly recommended)
export VLLM_SLEEP_WHEN_IDLE=1

vllm serve "${MODEL}" \
  --served-model-name "${MODELNAME}" \
  --gpu-memory-utilization ${GPU_UTIL} \
  --max-model-len "${CONTEXT_SIZE}" \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --enable-auto-tool-choice
```
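Once launched, `vllm serve` exposes the standard OpenAI-compatible API (by default on port 8000, a vLLM default rather than anything model-specific). A minimal stdlib-only sketch of a chat request against it:

```python
import json
import urllib.request

VLLM_URL = "http://localhost:8000/v1/chat/completions"  # vLLM's OpenAI-compatible endpoint

def build_chat_request(prompt: str, model: str = "GLM-4.7-Flash") -> dict:
    # "model" must match --served-model-name from the launch script above.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }

def chat(prompt: str) -> str:
    req = urllib.request.Request(
        VLLM_URL,
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# chat("Summarize FP8 block quantization in one sentence.")  # needs the server running
```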
## 🔬 Quantization method

My LLM quantization scripts are available at https://github.com/mratsim/quantizers

For this quant specifically:

```python
import os

from llmcompressor import model_free_ptq

os.environ.setdefault("TOKENIZERS_PARALLELISM", "false")
os.environ.setdefault("PYTORCH_ALLOC_CONF", "expandable_segments:True,max_split_size_mb:512")

MODEL_ID = "zai-org/GLM-4.7-Flash"
MODEL_OUT = MODEL_ID.split("/")[1] + "-FP8"

model_free_ptq(
    model_stub=MODEL_ID,
    save_directory=MODEL_OUT,
    scheme="FP8_BLOCK",
    ignore=[
        "lm_head",
        "re:.*mlp\\.gate$",  # MoE router
        "re:.*kv_a_proj_with_mqa$",
        "re:.*q_a_proj$",
        "model.embed_tokens",
    ],
    max_workers=16,
    device="cuda:0",
)

print(f"SUCCESS: files saved in {MODEL_OUT}")
```

FP8 quantization does not require a calibration dataset.
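The `re:`-prefixed entries in the `ignore` list are regular expressions matched against fully-qualified module names, while plain entries are exact names. A small sketch of that matching logic (module names below are illustrative, and llm-compressor's exact matching semantics may differ slightly):

```python
import re

IGNORE = [
    "lm_head",
    "re:.*mlp\\.gate$",          # MoE router stays in BF16
    "re:.*kv_a_proj_with_mqa$",  # MLA latent KV projection
    "re:.*q_a_proj$",            # MLA latent Q projection
    "model.embed_tokens",
]

def is_ignored(module_name: str) -> bool:
    # "re:" entries are regexes; anything else is an exact module name.
    for pattern in IGNORE:
        if pattern.startswith("re:"):
            if re.match(pattern[3:], module_name):
                return True
        elif pattern == module_name:
            return True
    return False

print(is_ignored("model.layers.3.mlp.gate"))          # True: router kept in BF16
print(is_ignored("model.layers.3.mlp.gate_up_proj"))  # False: quantized to FP8
```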
chat_template.jinja ADDED
@@ -0,0 +1,86 @@
[gMASK]<sop>
{%- if tools -%}
<|system|>
# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{% for tool in tools %}
{{ tool | tojson(ensure_ascii=False) }}
{% endfor %}
</tools>

For each function call, output the function name and arguments within the following XML format:
<tool_call>{function-name}<arg_key>{arg-key-1}</arg_key><arg_value>{arg-value-1}</arg_value><arg_key>{arg-key-2}</arg_key><arg_value>{arg-value-2}</arg_value>...</tool_call>{%- endif -%}
{%- macro visible_text(content) -%}
{%- if content is string -%}
{{- content }}
{%- elif content is iterable and content is not mapping -%}
{%- for item in content -%}
{%- if item is mapping and item.type == 'text' -%}
{{- item.text }}
{%- elif item is string -%}
{{- item }}
{%- endif -%}
{%- endfor -%}
{%- else -%}
{{- content }}
{%- endif -%}
{%- endmacro -%}
{%- set ns = namespace(last_user_index=-1) %}
{%- for m in messages %}
{%- if m.role == 'user' %}
{% set ns.last_user_index = loop.index0 -%}
{%- endif %}
{%- endfor %}
{% for m in messages %}
{%- if m.role == 'user' -%}<|user|>{{ visible_text(m.content) }}
{%- elif m.role == 'assistant' -%}
<|assistant|>
{%- set reasoning_content = '' %}
{%- set content = visible_text(m.content) %}
{%- if m.reasoning_content is string %}
{%- set reasoning_content = m.reasoning_content %}
{%- else %}
{%- if '</think>' in content %}
{%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
{%- set content = content.split('</think>')[-1].lstrip('\n') %}
{%- endif %}
{%- endif %}
{%- if ((clear_thinking is defined and not clear_thinking) or loop.index0 > ns.last_user_index) and reasoning_content -%}
{{ '<think>' + reasoning_content.strip() + '</think>'}}
{%- else -%}
{{ '</think>' }}
{%- endif -%}
{%- if content.strip() -%}
{{ content.strip() }}
{%- endif -%}
{% if m.tool_calls %}
{% for tc in m.tool_calls %}
{%- if tc.function %}
{%- set tc = tc.function %}
{%- endif %}
{{- '<tool_call>' + tc.name -}}
{% set _args = tc.arguments %}{% for k, v in _args.items() %}<arg_key>{{ k }}</arg_key><arg_value>{{ v | tojson(ensure_ascii=False) if v is not string else v }}</arg_value>{% endfor %}</tool_call>{% endfor %}
{% endif %}
{%- elif m.role == 'tool' -%}
{%- if m.content is string -%}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|observation|>' }}
{%- endif %}
{{- '<tool_response>' }}
{{- m.content }}
{{- '</tool_response>' }}
{%- else -%}
<|observation|>{% for tr in m.content %}
<tool_response>{{ tr.output if tr.output is defined else tr }}</tool_response>{% endfor -%}
{% endif -%}
{%- elif m.role == 'system' -%}
<|system|>{{ visible_text(m.content) }}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
<|assistant|>{{- '</think>' if (enable_thinking is defined and not enable_thinking) else '<think>' -}}
{%- endif -%}
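The `<tool_call>` wire format declared near the top of this template can be illustrated by assembling a call by hand (the function name and arguments below are purely illustrative):

```python
def format_tool_call(name: str, arguments: dict) -> str:
    # Mirrors the <tool_call>{name}<arg_key>...</arg_key><arg_value>...</arg_value></tool_call>
    # format that the chat template above emits for assistant tool calls.
    parts = [f"<tool_call>{name}"]
    for key, value in arguments.items():
        parts.append(f"<arg_key>{key}</arg_key><arg_value>{value}</arg_value>")
    parts.append("</tool_call>")
    return "".join(parts)

print(format_tool_call("get_weather", {"city": "Paris", "unit": "celsius"}))
# <tool_call>get_weather<arg_key>city</arg_key><arg_value>Paris</arg_value><arg_key>unit</arg_key><arg_value>celsius</arg_value></tool_call>
```

This string form is what the `--tool-call-parser glm47` option in the serve script parses back into structured tool calls.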
config.json ADDED
@@ -0,0 +1,98 @@
{
  "architectures": [
    "Glm4MoeLiteForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "dtype": "bfloat16",
  "eos_token_id": [
    154820,
    154827,
    154829
  ],
  "first_k_dense_replace": 1,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "intermediate_size": 10240,
  "kv_lora_rank": 512,
  "max_position_embeddings": 202752,
  "model_type": "glm4_moe_lite",
  "moe_intermediate_size": 1536,
  "n_group": 1,
  "n_routed_experts": 64,
  "n_shared_experts": 1,
  "norm_topk_prob": true,
  "num_attention_heads": 20,
  "num_experts_per_tok": 4,
  "num_hidden_layers": 47,
  "num_key_value_heads": 20,
  "num_nextn_predict_layers": 1,
  "pad_token_id": 154820,
  "partial_rotary_factor": 1.0,
  "q_lora_rank": 768,
  "qk_nope_head_dim": 192,
  "qk_rope_head_dim": 64,
  "quantization_config": {
    "config_groups": {
      "FP8_BLOCK": {
        "format": "float-quantized",
        "input_activations": {
          "actorder": null,
          "block_structure": null,
          "dynamic": true,
          "group_size": 128,
          "num_bits": 8,
          "observer": null,
          "observer_kwargs": {},
          "strategy": "group",
          "symmetric": true,
          "type": "float"
        },
        "output_activations": null,
        "targets": [
          "Linear"
        ],
        "weights": {
          "actorder": null,
          "block_structure": [
            128,
            128
          ],
          "dynamic": false,
          "group_size": null,
          "num_bits": 8,
          "observer": "static_minmax",
          "observer_kwargs": {},
          "strategy": "block",
          "symmetric": true,
          "type": "float"
        }
      }
    },
    "format": "float-quantized",
    "global_compression_ratio": null,
    "ignore": [
      "lm_head",
      "re:.*mlp\\.gate$",
      "re:.*kv_a_proj_with_mqa$",
      "re:.*q_a_proj$",
      "model.embed_tokens"
    ],
    "kv_cache_scheme": null,
    "quant_method": "compressed-tensors",
    "quantization_status": "compressed",
    "sparsity_config": {},
    "transform_config": {},
    "version": "0.13.0"
  },
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 1000000,
  "routed_scaling_factor": 1.8,
  "tie_word_embeddings": false,
  "topk_group": 1,
  "topk_method": "noaux_tc",
  "transformers_version": "5.0.0rc0",
  "v_head_dim": 256,
  "vocab_size": 154880
}
generation_config.json ADDED
@@ -0,0 +1,11 @@
{
  "_from_model_config": true,
  "eos_token_id": [
    154820,
    154827,
    154829
  ],
  "pad_token_id": 154820,
  "temperature": 1.0,
  "transformers_version": "5.0.0.dev0"
}
model-00001-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:79b2d461819c13bdac48aa7dc6d34858f1dbd49e4705de50fdbf5b23676a3e2b
size 1039069656
model-00002-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6cfec74842ebc41fbf8b49f39bea86bb9d188c517f3f52a2aba6dc9fe5c7c56b
size 638326360
model-00003-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2e1b26d6ee56f4cde1ee1b04cb00ac34bbdaf9bd356e3d5dcc1cf9b3d87e902a
size 638326360
model-00004-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f57599050caf7ef8c2305c17036fb5799b30814fe662b1159c13791ef91a55a7
size 638326360
model-00005-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d39c46077cf05bed9768235e6db760ebbe5f8f8f3c24303067afbab0e9a4ffd5
size 638326360
model-00006-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2214a14229d44c6a2f6d8047fe6235c5794142edb9f7285a29e049d5314503af
size 638326360
model-00007-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a8ddfbc9d0c8dde9f2076acc94bf49fc8ace4ec23af261a0d372dae40f3a4dfd
size 638326360
model-00008-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6200ca090a1c2404435b0db55b7de77eba3b949b41b880f1cc61d4da74e6f7b5
size 638326360
model-00009-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:913bf9345d3455253942a6f1670d0e560e01d3fd8774a633ac118357b6cc0875
size 638326360
model-00010-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:35e1639ef0eacca553f109c449d2ff35efcb1aab6c80ce16ca1e8d22c8af9798
size 638326360
model-00011-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e141b8c7b120df33f5c1f322b219b784dc88f61bd2d63b2b1358eadafde322ba
size 638326760
model-00012-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f54df23f4b4fedcd01b164e5adb73bfc715fcc77f60374846f681e976933660f
size 638326760
model-00013-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d09c255d884d902b6809bd0bf2fe0860d2fefc04660ae8b32c0127ea95a517e7
size 638326760
model-00014-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:faaba414e3165dce0c6eb8c39efb2b81da8b5ebc3faa032922f7f7d29cee7450
size 638326760
model-00015-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:864f7b7c71b4c949151aa378d12900242316642d037bc48cbe7c8062d3c44883
size 638326760
model-00016-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ef7b8a31cac2accbdce18f4757d48a9816ae536657a9fbb333305fcabe79a7e4
size 638326760
model-00017-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:744ab7f263ca19f39f1b0721844f1358bcec65e9b8e5cded60a40cb8bbcd13b1
size 638326760
model-00018-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3e403c52afb5ec7aeb60dc6ba13062693d5dcbc554b69dfcedb4a38bb3cf2092
size 638326760
model-00019-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4e4e4476ff068af34edc58baa7f51261f30be3582c6aa2dbc3f5bbfa93c08d83
size 638326760
model-00020-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c0e786fd4d955f953480f7ac0af09575f7a6573b29aef1c85365ee3cefe8f8b0
size 638326760
model-00021-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:36b8f350d37e425329dab399cdf7c032392d8bd84c0890372943a7685aca2269
size 638326760
model-00022-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d4bada11c3b9f53a92d28790da5b729142b643abc728f88bca65fa32f2f9bfcc
size 638326760
model-00023-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a4cabe1b70ce9b4ee8aa996cdd420eee03039c4e7f436aa61030765371eddd54
size 638326760
model-00024-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:538d20eac192c9cdf3aac7f9b300eb4904188de9fa52661ebb1363baeee8f842
size 638326760
model-00025-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89b5bff4de02c3fa633ba51f4b1f2c64717c8c76f8286705c6b2a7eebd3ec87d
size 638326760
model-00026-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97e7b102a6db34750882ec235c36a9babc765bfa0b7fe7ddac7be3907ffd3732
size 638326760
model-00027-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cdeec5fd67f0aa4ecdfcfe1ed73b0a6fd6a0bbefe4a0cadb06665c65a9c63669
size 638326760
model-00028-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bdf66bf9f399953b0f09df25fd8bf634d27e2b64eec6ee2934b1ba9f72740231
size 638326760
model-00029-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:34ba88f7a33636bb6eba3f9c1d414be066fdab774b2415209f2ac5e3e648adf2
size 638326760
model-00030-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3cec15d740dc6b7e9362fcf4e6d8686a7eec4fdb8e3129907e1e3c318b79d99a
size 638326760
model-00031-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e75b26f6f45cb3b615a43428d87b8ef50275cc64b67464672e695acab008a2bd
size 638326760
model-00032-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1604f8158d65699afeae21d870bfb4cb8edb5915a49bd5efcd0347fe701f891f
size 638326760
model-00033-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:86a1bd9fea2bc4a478ea846f9fc8f0dc5cc32e25a0cb27606e48915b2aa39765
size 638326760
model-00034-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a2d1d11c54d8d6f10424666256e00d15a82507499386fcf9f1ce0c77319f3e3
size 638326760
model-00035-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:af8cd7089e2358b08a730fc763d056db04ca58526745366a082cb4f908bf7f92
size 638326760
model-00036-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cbb44475760b3c26977226debfe165934f7deb5b3a9a61546992b479b58f05b5
size 638326760
model-00037-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f18453afaac338902d1d7a4190ac9d8e69670bead12d104fa703b2a63b6f123e
size 638326760
model-00038-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:600afa83c69e69c18ca8cbc13ffa33ece5e61b3c4a1931876bcc9247682f0b47
size 638326760
model-00039-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b259d4b29799278c8e5669208fff1ac643687cdde2459d4bfcae2118ab23d1ba
size 638326760
model-00040-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f6656897d044534145a42f0b256ffa311b18133c9c2028b797e97632a469f7d5
size 638326760
model-00041-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:486169b91a226db1f699ba95ae579dcc9767d199d442a45d0ee2789e407433ed
size 638326760
model-00042-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9b03ea207a082e2532b8615d5b16b03839252bcf0612337361a7c36867d40817
size 638326760
model-00043-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89f1e07f7784c57cf13c50a532c582ce57ff8d63ebf271de4c54d69da0141ad1
size 638326760
model-00044-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:442429298e82163ef7d111d7b88be6d2607ca3f91082a4e27007887fc268ac76
size 638326760
model-00045-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c2428e15d252ae40c8f881e94d98bf4ec27536c55297b26aad6fd1a6b8222514
size 638326760