evilfreelancer committed on
Commit 4cbd575 · verified · 1 Parent(s): b72d576

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -2,4 +2,184 @@
  license: apache-2.0
  base_model:
  - chromadb/context-1
- ---
+ tags:
+ - mxfp4
+ - quantized
+ - gpt-oss
+ - moe
+ - agentic
+ - search
+ ---
+
+ # Chroma Context-1 (MXFP4)
+
+ MXFP4-quantized version of [chromadb/context-1](https://huggingface.co/chromadb/context-1),
+ a 20B-parameter agentic search model fine-tuned from
+ [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
+
+ This checkpoint reduces the model size from **39 GB** (BF16) to **~14 GB** (MXFP4)
+ with minimal quality degradation, enabling inference on a single GPU with more
+ headroom for the KV cache.
+
+ ## Model Details
+
+ - **Base model:** chromadb/context-1 (BF16)
+ - **Architecture:** GptOssForCausalLM (Mixture of Experts, 24 layers, 32 experts, top-4 routing)
+ - **Parameters:** 20B total
+ - **Quantization:** MXFP4 (E2M1 weights + E8M0 scales, group size 32)
+ - **Quantized layers:** MoE expert weights (`gate_up_proj`, `down_proj`)
+ - **Non-quantized layers:** attention, router, embeddings, LM head (kept in BF16)
+ - **File size:** ~14 GB (model.safetensors)
+
+ ## Quantization Details
+
+ MXFP4 (Microscaling FP4) is a 4-bit floating-point format that uses the E2M1
+ representation (1 sign bit, 2-bit exponent, 1-bit mantissa) together with shared
+ E8M0 per-group scaling factors. Each group of 32 weights shares a single 8-bit
+ power-of-two scale, and each individual weight is stored as a 4-bit FP4 code.
+ Two FP4 codes are packed into one uint8 byte (low nibble = even index, high
+ nibble = odd index).
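+
+ For illustration, here is a minimal sketch of how one group of 32 weights is
+ scaled, quantized to E2M1 codes, and packed (the reference logic lives in
+ `convert_mxfp4.py`, included in this repo):
+
+ ```python
+ import numpy as np
+
+ FP4_MAX = 6.0  # largest representable E2M1 magnitude
+ # midpoints between the E2M1 values 0, 0.5, 1, 1.5, 2, 3, 4, 6
+ BOUNDS = np.array([0.25, 0.75, 1.25, 1.75, 2.5, 3.5, 5.0, np.inf])
+
+ group = np.random.randn(32).astype(np.float32)  # one group of 32 weights
+
+ # shared E8M0 scale: power of two such that max(|w|) / scale <= FP4_MAX
+ exp = int(np.ceil(np.log2(max(abs(group).max() / FP4_MAX, 2.0**-127))))
+ e8m0_byte = np.uint8(exp + 127)  # the stored scale byte
+ scaled = group / 2.0**exp
+
+ # 4-bit codes: sign in bit 3, nearest-magnitude index in bits 0-2
+ codes = ((scaled < 0).astype(np.uint8) << 3) | np.digitize(np.abs(scaled), BOUNDS)
+
+ # two codes per byte: even index -> low nibble, odd index -> high nibble
+ packed = ((codes[1::2] << 4) | codes[0::2]).astype(np.uint8)  # 16 bytes
+ ```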
+
+ This matches the quantization format used by the official
+ [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) MXFP4 checkpoint
+ and is natively supported by vLLM's Marlin MXFP4 kernels.
+
+ ### What is quantized
+
+ | Component | Format | Notes |
+ |-----------|--------|-------|
+ | `mlp.experts.gate_up_proj` | MXFP4 | Stored as `_blocks` (U8) + `_scales` (U8) |
+ | `mlp.experts.down_proj` | MXFP4 | Stored as `_blocks` (U8) + `_scales` (U8) |
+ | `self_attn.*` | BF16 | Kept at full precision |
+ | `mlp.router.*` | BF16 | Kept at full precision |
+ | `embed_tokens`, `lm_head` | BF16 | Kept at full precision |
+
+ ### Tensor layout
+
+ Expert weights are transposed from the BF16 checkpoint layout
+ `[num_experts, in_features, out_features]` to the vLLM-expected
+ `[num_experts, out_features, in_features]` before quantization.
+ This is required because vLLM's MXFP4 loader performs a direct copy
+ without transposition (unlike the BF16 loader, which transposes on load).
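+
+ As a sketch, this is a single axis swap per expert tensor (the `gate_up_proj`
+ shape below is illustrative, derived from the config's `hidden_size` and
+ `intermediate_size` of 2880):
+
+ ```python
+ import numpy as np
+
+ E, d_in, d_out = 32, 2880, 5760                   # [experts, in, out]
+ w = np.zeros((E, d_in, d_out), dtype=np.float32)  # BF16 checkpoint layout
+ w = np.ascontiguousarray(w.transpose(0, 2, 1))    # vLLM layout [E, out, in]
+ assert w.shape == (E, d_out, d_in)
+ ```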
+
+ ## Usage with vLLM
+
+ Requires vLLM >= 0.18.0.
+
+ ```bash
+ vllm serve evilfreelancer/context-1-mxfp4 \
+   --served-model-name openai/gpt-oss-20b \
+   --trust-remote-code \
+   --dtype auto \
+   --gpu-memory-utilization 0.9 \
+   --max-model-len 64000 \
+   --max-num-batched-tokens 64000 \
+   --kv-cache-dtype fp8 \
+   --enable-auto-tool-choice \
+   --tool-call-parser openai
+ ```
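+
+ Once the server is up, a quick smoke test against the OpenAI-compatible
+ endpoint (a sketch; assumes the default port 8000 and the
+ `--served-model-name` above):
+
+ ```python
+ import json
+ import urllib.request
+
+ req = urllib.request.Request(
+     "http://localhost:8000/v1/chat/completions",
+     data=json.dumps({
+         "model": "openai/gpt-oss-20b",
+         "messages": [{"role": "user", "content": "Hello"}],
+     }).encode(),
+     headers={"Content-Type": "application/json"},
+ )
+ with urllib.request.urlopen(req) as resp:
+     print(json.load(resp)["choices"][0]["message"]["content"])
+ ```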
+
+ Docker Compose example:
+
+ ```yaml
+ services:
+   gpt-oss-20b:
+     image: vllm/vllm-openai:v0.18.0
+     restart: always
+     entrypoint: vllm
+     command: >
+       serve evilfreelancer/context-1-mxfp4
+       --served-model-name openai/gpt-oss-20b
+       --trust-remote-code
+       --dtype auto
+       --gpu-memory-utilization 0.9
+       --max-model-len 64000
+       --max-num-batched-tokens 64000
+       --kv-cache-dtype fp8
+       --enable-auto-tool-choice
+       --tool-call-parser openai
+     ports:
+       - 8081:8000
+     deploy:
+       resources:
+         reservations:
+           devices:
+             - driver: nvidia
+               device_ids: ["0"]
+               capabilities: [gpu]
+ ```
+
+ ### Note on `generation_config.json`
+
+ The `eos_token_id` list includes token 200012 (`<|call|>`) in addition to the
+ standard `<|return|>` (200002). This is required for tool calling to work
+ correctly: without it, the model does not stop generating after emitting
+ a tool call, and the Harmony parser fails.
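+
+ The shipped `generation_config.json` (included in this commit) therefore lists
+ `"eos_token_id": [200002, 200012, 199999]`, the third entry being the
+ end-of-text/pad token.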
+
+ ## Conversion Script
+
+ The included `convert_mxfp4.py` converts the original BF16
+ [chromadb/context-1](https://huggingface.co/chromadb/context-1) weights
+ to MXFP4 format. Its only dependency is numpy.
+
+ ```bash
+ pip install numpy
+ python convert_mxfp4.py
+ ```
+
+ The script expects the BF16 model in a sibling `context-1/` directory
+ and writes the quantized model to `context-1-mxfp4/`.
+
+ ### What the script does
+
+ 1. Reads each tensor from the BF16 `model.safetensors`
+ 2. For MoE expert weights (`gate_up_proj` and `down_proj`; see the decode
+    sketch after this list):
+    - Transposes from `[E, in, out]` to `[E, out, in]`
+    - Computes per-group E8M0 scales (group size 32)
+    - Quantizes to E2M1 FP4 codes using nearest rounding
+    - Packs two FP4 codes per byte
+    - Saves as `*_blocks` (packed weights) and `*_scales` (shared exponents)
+ 3. Copies all other tensors (attention, router, embeddings) as-is in BF16
+ 4. Copies tokenizer and chat template files
+ 5. Writes `config.json` with `quantization_config` for vLLM
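+
+ To spot-check the output, a quantized row can be decoded back to float with a
+ few lines of numpy (a sketch based on the format described above, not part of
+ the script):
+
+ ```python
+ import numpy as np
+
+ FP4_VALUES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0], dtype=np.float32)
+
+ def dequantize_row(blocks: np.ndarray, scales: np.ndarray) -> np.ndarray:
+     """blocks: uint8 [cols // 2], scales: uint8 [cols // 32], for one row."""
+     codes = np.empty(blocks.size * 2, dtype=np.uint8)
+     codes[0::2] = blocks & 0x0F   # low nibble = even index
+     codes[1::2] = blocks >> 4     # high nibble = odd index
+     sign = np.where(codes & 0x8, -1.0, 1.0).astype(np.float32)
+     mag = FP4_VALUES[codes & 0x7]
+     scale = 2.0 ** (scales.astype(np.int32) - 127)  # E8M0 byte -> power of two
+     return ((sign * mag).reshape(-1, 32) * scale[:, None]).reshape(-1)
+ ```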
+
+ ## Key Capabilities
+
+ (Inherited from chromadb/context-1)
+
+ - **Query decomposition:** Breaks complex multi-constraint questions into
+   targeted subqueries.
+ - **Parallel tool calling:** Averages 2.56 tool calls per turn, reducing
+   total turns and end-to-end latency.
+ - **Self-editing context:** Selectively prunes irrelevant documents
+   mid-search to sustain retrieval quality over long horizons within a
+   bounded context window (0.94 prune accuracy).
+ - **Cross-domain generalization:** Trained on web, legal, and finance
+   tasks; generalizes to held-out domains and public benchmarks
+   (BrowseComp-Plus, SealQA, FRAMES, HLE).
+
+ ## Important: Agent Harness Required
+
+ Context-1 is trained to operate within a specific agent harness that
+ manages tool execution, token budgets, context pruning, and deduplication.
+ **The harness is not yet public.** Running the model without it will not
+ reproduce the results reported in the technical report.
+
+ See the [technical report](https://trychroma.com/research/context-1) for
+ details on the harness design.
+
+ ## Citation
+
+ ```bibtex
+ @techreport{bashir2026context1,
+   title       = {Chroma Context-1: Training a Self-Editing Search Agent},
+   author      = {Bashir, Hammad and Hong, Kelly and Jiang, Patrick and Shi, Zhiyi},
+   year        = {2026},
+   month       = {March},
+   institution = {Chroma},
+   url         = {https://trychroma.com/research/context-1},
+ }
+ ```
+
+ ## License
+
+ Apache 2.0
chat_template.jinja ADDED
@@ -0,0 +1,315 @@
+ {# Copyright 2025-present Unsloth. Apache 2.0 License. Unsloth chat template fixes. Edited from ggml-org & OpenAI #}
+ {#-
+ In addition to the normal inputs of `messages` and `tools`, this template also accepts the
+ following kwargs:
+ - "builtin_tools": A list, can contain "browser" and/or "python".
+ - "model_identity": A string that optionally describes the model identity.
+ - "reasoning_effort": A string that describes the reasoning effort, defaults to "medium".
+ #}
+
+ {#- Tool Definition Rendering ============================================== #}
+ {%- macro render_typescript_type(param_spec, required_params, is_nullable=false) -%}
+ {%- if param_spec.type == "array" -%}
+ {%- if param_spec['items'] -%}
+ {%- if param_spec['items']['type'] == "string" -%}
+ {{- "string[]" }}
+ {%- elif param_spec['items']['type'] == "number" -%}
+ {{- "number[]" }}
+ {%- elif param_spec['items']['type'] == "integer" -%}
+ {{- "number[]" }}
+ {%- elif param_spec['items']['type'] == "boolean" -%}
+ {{- "boolean[]" }}
+ {%- else -%}
+ {%- set inner_type = render_typescript_type(param_spec['items'], required_params) -%}
+ {%- if inner_type == "object | object" or inner_type|length > 50 -%}
+ {{- "any[]" }}
+ {%- else -%}
+ {{- inner_type + "[]" }}
+ {%- endif -%}
+ {%- endif -%}
+ {%- if param_spec.nullable -%}
+ {{- " | null" }}
+ {%- endif -%}
+ {%- else -%}
+ {{- "any[]" }}
+ {%- if param_spec.nullable -%}
+ {{- " | null" }}
+ {%- endif -%}
+ {%- endif -%}
+ {%- elif param_spec.type is defined and param_spec.type is iterable and param_spec.type is not string and param_spec.type is not mapping and param_spec.type[0] is defined -%}
+ {#- Handle array of types like ["object", "object"] from Union[dict, list] #}
+ {%- if param_spec.type | length > 1 -%}
+ {{- param_spec.type | join(" | ") }}
+ {%- else -%}
+ {{- param_spec.type[0] }}
+ {%- endif -%}
+ {%- elif param_spec.oneOf -%}
+ {#- Handle oneOf schemas - check for complex unions and fallback to any #}
+ {%- set has_object_variants = false -%}
+ {%- for variant in param_spec.oneOf -%}
+ {%- if variant.type == "object" -%}
+ {%- set has_object_variants = true -%}
+ {%- endif -%}
+ {%- endfor -%}
+ {%- if has_object_variants and param_spec.oneOf|length > 1 -%}
+ {{- "any" }}
+ {%- else -%}
+ {%- for variant in param_spec.oneOf -%}
+ {{- render_typescript_type(variant, required_params) -}}
+ {%- if variant.description %}
+ {{- "// " + variant.description }}
+ {%- endif -%}
+ {%- if variant.default is defined %}
+ {{ "// default: " + variant.default|tojson }}
+ {%- endif -%}
+ {%- if not loop.last %}
+ {{- " | " }}
+ {% endif -%}
+ {%- endfor -%}
+ {%- endif -%}
+ {%- elif param_spec.type == "string" -%}
+ {%- if param_spec.enum -%}
+ {{- '"' + param_spec.enum|join('" | "') + '"' -}}
+ {%- else -%}
+ {{- "string" }}
+ {%- if param_spec.nullable %}
+ {{- " | null" }}
+ {%- endif -%}
+ {%- endif -%}
+ {%- elif param_spec.type == "number" -%}
+ {{- "number" }}
+ {%- elif param_spec.type == "integer" -%}
+ {{- "number" }}
+ {%- elif param_spec.type == "boolean" -%}
+ {{- "boolean" }}
+
+ {%- elif param_spec.type == "object" -%}
+ {%- if param_spec.properties -%}
+ {{- "{\n" }}
+ {%- for prop_name, prop_spec in param_spec.properties.items() -%}
+ {{- prop_name -}}
+ {%- if prop_name not in (param_spec.required or []) -%}
+ {{- "?" }}
+ {%- endif -%}
+ {{- ": " }}
+ {{ render_typescript_type(prop_spec, param_spec.required or []) }}
+ {%- if not loop.last -%}
+ {{-", " }}
+ {%- endif -%}
+ {%- endfor -%}
+ {{- "}" }}
+ {%- else -%}
+ {{- "object" }}
+ {%- endif -%}
+ {%- else -%}
+ {{- "any" }}
+ {%- endif -%}
+ {%- endmacro -%}
+
+ {%- macro render_tool_namespace(namespace_name, tools) -%}
+ {{- "## " + namespace_name + "\n\n" }}
+ {{- "namespace " + namespace_name + " {\n\n" }}
+ {%- for tool in tools %}
+ {%- set tool = tool.function %}
+ {{- "// " + tool.description + "\n" }}
+ {{- "type "+ tool.name + " = " }}
+ {%- if tool.parameters and tool.parameters.properties -%}
+ {{- "(_: " }}
+ {{- "{\n" }}
+ {%- for param_name, param_spec in tool.parameters.properties.items() %}
+ {{- "// " + param_spec.description + "\n" }}
+ {{- param_name }}
+ {%- if param_name not in (tool.parameters.required or []) -%}
+ {{- "?" }}
+ {%- endif -%}
+ {{- ": " }}
+ {{- render_typescript_type(param_spec, tool.parameters.required or []) }}
+ {%- if param_spec.default is defined -%}
+ {%- if param_spec.enum %}
+ {{- ", // default: " + param_spec.default }}
+ {%- elif param_spec.oneOf %}
+ {{- "// default: " + param_spec.default }}
+ {%- else %}
+ {{- ", // default: " + param_spec.default|tojson }}
+ {%- endif -%}
+ {%- endif -%}
+ {%- if not loop.last %}
+ {{- ",\n" }}
+ {%- else %}
+ {{- "\n" }}
+ {%- endif -%}
+ {%- endfor %}
+ {{- "}) => any;\n\n" }}
+ {%- else -%}
+ {{- "() => any;\n\n" }}
+ {%- endif -%}
+ {%- endfor %}
+ {{- "} // namespace " + namespace_name }}
+ {%- endmacro -%}
+
+ {%- macro render_builtin_tools(browser_tool, python_tool) -%}
+ {%- if browser_tool %}
+ {{- "## browser\n\n" }}
+ {{- "// Tool for browsing.\n" }}
+ {{- "// The `cursor` appears in brackets before each browsing display: `[{cursor}]`.\n" }}
+ {{- "// Cite information from the tool using the following format:\n" }}
+ {{- "// `【{cursor}†L{line_start}(-L{line_end})?】`, for example: `【6†L9-L11】` or `【8†L3】`.\n" }}
+ {{- "// Do not quote more than 10 words directly from the tool output.\n" }}
+ {{- "// sources=web (default: web)\n" }}
+ {{- "namespace browser {\n\n" }}
+ {{- "// Searches for information related to `query` and displays `topn` results.\n" }}
+ {{- "type search = (_: {\n" }}
+ {{- "query: string,\n" }}
+ {{- "topn?: number, // default: 10\n" }}
+ {{- "source?: string,\n" }}
+ {{- "}) => any;\n\n" }}
+ {{- "// Opens the link `id` from the page indicated by `cursor` starting at line number `loc`, showing `num_lines` lines.\n" }}
+ {{- "// Valid link ids are displayed with the formatting: `【{id}†.*】`.\n" }}
+ {{- "// If `cursor` is not provided, the most recent page is implied.\n" }}
+ {{- "// If `id` is a string, it is treated as a fully qualified URL associated with `source`.\n" }}
+ {{- "// If `loc` is not provided, the viewport will be positioned at the beginning of the document or centered on the most relevant passage, if available.\n" }}
+ {{- "// Use this function without `id` to scroll to a new location of an opened page.\n" }}
+ {{- "type open = (_: {\n" }}
+ {{- "id?: number | string, // default: -1\n" }}
+ {{- "cursor?: number, // default: -1\n" }}
+ {{- "loc?: number, // default: -1\n" }}
+ {{- "num_lines?: number, // default: -1\n" }}
+ {{- "view_source?: boolean, // default: false\n" }}
+ {{- "source?: string,\n" }}
+ {{- "}) => any;\n\n" }}
+ {{- "// Finds exact matches of `pattern` in the current page, or the page given by `cursor`.\n" }}
+ {{- "type find = (_: {\n" }}
+ {{- "pattern: string,\n" }}
+ {{- "cursor?: number, // default: -1\n" }}
+ {{- "}) => any;\n\n" }}
+ {{- "} // namespace browser\n\n" }}
+ {%- endif -%}
+
+ {%- if python_tool %}
+ {{- "## python\n\n" }}
+ {{- "Use this tool to execute Python code in your chain of thought. The code will not be shown to the user. This tool should be used for internal reasoning, but not for code that is intended to be visible to the user (e.g. when creating plots, tables, or files).\n\n" }}
+ {{- "When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 120.0 seconds. The drive at '/mnt/data' can be used to save and persist user files. Internet access for this session is UNKNOWN. Depends on the cluster.\n\n" }}
+ {%- endif -%}
+ {%- endmacro -%}
+
+ {#- System Message Construction ============================================ #}
+ {%- macro build_system_message() -%}
+ {%- if model_identity is not defined %}
+ {{- "You are ChatGPT, a large language model trained by OpenAI.\n" -}}
+ {%- else %}
+ {{- model_identity }}
+ {%- endif %}
+ {{- "Knowledge cutoff: 2024-06\n" }}
+ {{- "Current date: " + strftime_now("%Y-%m-%d") + "\n\n" }}
+ {%- if reasoning_effort is not defined %}
+ {%- set reasoning_effort = "medium" %}
+ {%- endif %}
+ {{- "Reasoning: " + reasoning_effort + "\n\n" }}
+ {%- if builtin_tools is defined %}
+ {{- "# Tools\n\n" }}
+ {%- set available_builtin_tools = namespace(browser=false, python=false) %}
+ {%- for tool in builtin_tools %}
+ {%- if tool == "browser" %}
+ {%- set available_builtin_tools.browser = true %}
+ {%- elif tool == "python" %}
+ {%- set available_builtin_tools.python = true %}
+ {%- endif %}
+ {%- endfor %}
+ {{- render_builtin_tools(available_builtin_tools.browser, available_builtin_tools.python) }}
+ {%- endif -%}
+ {{- "# Valid channels: analysis, commentary, final. Channel must be included for every message." }}
+ {%- if tools is defined -%}
+ {{- "\nCalls to these tools must go to the commentary channel: 'functions'." }}
+ {%- endif -%}
+ {%- endmacro -%}
+
+ {#- Main Template Logic ================================================= #}
+ {#- Set defaults #}
+
+ {#- Render system message #}
+ {{- "<|start|>system<|message|>" }}
+ {{- build_system_message() }}
+ {{- "<|end|>" }}
+
+ {#- Extract developer message #}
+ {%- if messages[0].role == "developer" or messages[0].role == "system" %}
+ {%- set developer_message = messages[0].content %}
+ {%- set loop_messages = messages[1:] %}
+ {%- else %}
+ {%- set developer_message = "" %}
+ {%- set loop_messages = messages %}
+ {%- endif %}
+
+ {#- Render developer message #}
+ {%- if developer_message or tools %}
+ {{- "<|start|>developer<|message|>" }}
+ {%- if developer_message %}
+ {{- "# Instructions\n\n" }}
+ {{- developer_message }}
+ {%- endif %}
+ {%- if tools -%}
+ {{- "\n\n" }}
+ {{- "# Tools\n\n" }}
+ {{- render_tool_namespace("functions", tools) }}
+ {%- endif -%}
+ {{- "<|end|>" }}
+ {%- endif %}
+
+ {#- Render messages #}
+ {%- set last_tool_call = namespace(name=none) %}
+ {%- for message in loop_messages -%}
+ {#- At this point only assistant/user/tool messages should remain #}
+ {%- if message.role == 'assistant' -%}
+ {%- if "tool_calls" in message %}
+ {#- We assume max 1 tool call per message, and so we infer the tool call name #}
+ {#- in "tool" messages from the most recent assistant tool call name #}
+ {%- set tool_call = message.tool_calls[0] %}
+ {%- if tool_call.function %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {%- if message.content %}
+ {{- "<|start|>assistant<|channel|>analysis<|message|>" + message.content + "<|end|>" }}
+ {%- endif %}
+ {{- "<|start|>assistant to=" }}
+ {{- "functions." + tool_call.name + "<|channel|>commentary json<|message|>" }}
+ {{- tool_call.arguments|tojson }}
+ {{- "<|call|>" }}
+ {%- set last_tool_call.name = tool_call.name %}
+ {%- elif "thinking" in message and loop.last and not add_generation_prompt %}
+ {#- Only render the CoT if the final turn is an assistant turn and add_generation_prompt is false #}
+ {#- This is a situation that should only occur in training, never in inference. #}
+ {{- "<|start|>assistant<|channel|>analysis<|message|>" + message.thinking + "<|end|>" }}
+ {#- <|return|> indicates the end of generation, but <|end|> does not #}
+ {#- <|return|> should never be an input to the model, but we include it as the final token #}
+ {#- when training, so the model learns to emit it. #}
+ {{- "<|start|>assistant<|channel|>final<|message|>" + message.content + "<|return|>" }}
+ {%- set last_tool_call.name = none %}
+ {%- elif "thinking" in message %}
+ {#- CoT is dropped during all previous turns, so we never render it for inference #}
+ {{- "<|start|>assistant<|channel|>final<|message|>" + message.content + "<|end|>" }}
+ {%- set last_tool_call.name = none %}
+ {%- elif loop.last and not add_generation_prompt %}
+ {#- <|return|> indicates the end of generation, but <|end|> does not #}
+ {#- <|return|> should never be an input to the model, but we include it as the final token #}
+ {#- when training, so the model learns to emit it. #}
+ {{- "<|start|>assistant<|message|>" + message.content + "<|return|>" }}
+ {%- else %}
+ {{- "<|start|>assistant<|message|>" + message.content + "<|end|>" }}
+ {%- set last_tool_call.name = none %}
+ {%- endif %}
+ {%- elif message.role == 'tool' -%}
+ {%- if last_tool_call.name is none %}
+ {{- raise_exception("Message has tool role, but there was no previous assistant message with a tool call!") }}
+ {%- endif %}
+ {{- "<|start|>functions." + last_tool_call.name }}
+ {{- " to=assistant<|channel|>commentary<|message|>" + message.content|tojson + "<|end|>" }}
+ {%- else -%}
+ {{- "<|start|>user<|message|>" + message.content + "<|end|>" }}
+ {%- endif -%}
+ {%- endfor -%}
+
+ {#- Generation prompt #}
+ {%- if add_generation_prompt -%}
+ <|start|>assistant
+ {%- endif -%}
+ {# Copyright 2025-present Unsloth. Apache 2.0 License. Unsloth chat template fixes. Edited from ggml-org & OpenAI #}
config.json ADDED
@@ -0,0 +1,78 @@
+ {
+   "architectures": [
+     "GptOssForCausalLM"
+   ],
+   "attention_bias": true,
+   "attention_dropout": 0.0,
+   "bos_token_id": null,
+   "dtype": "bfloat16",
+   "eos_token_id": 200002,
+   "experts_per_token": 4,
+   "head_dim": 64,
+   "hidden_act": "silu",
+   "hidden_size": 2880,
+   "initial_context_length": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 2880,
+   "layer_types": [
+     "sliding_attention",
+     "full_attention",
+     "sliding_attention",
+     "full_attention",
+     "sliding_attention",
+     "full_attention",
+     "sliding_attention",
+     "full_attention",
+     "sliding_attention",
+     "full_attention",
+     "sliding_attention",
+     "full_attention",
+     "sliding_attention",
+     "full_attention",
+     "sliding_attention",
+     "full_attention",
+     "sliding_attention",
+     "full_attention",
+     "sliding_attention",
+     "full_attention",
+     "sliding_attention",
+     "full_attention",
+     "sliding_attention",
+     "full_attention"
+   ],
+   "max_position_embeddings": 131072,
+   "model_type": "gpt_oss",
+   "num_attention_heads": 64,
+   "num_experts_per_tok": 4,
+   "num_hidden_layers": 24,
+   "num_key_value_heads": 8,
+   "num_local_experts": 32,
+   "output_router_logits": false,
+   "pad_token_id": 199999,
+   "rms_norm_eps": 1e-05,
+   "rope_parameters": {
+     "beta_fast": 32.0,
+     "beta_slow": 1.0,
+     "factor": 32.0,
+     "original_max_position_embeddings": 4096,
+     "rope_theta": 150000,
+     "rope_type": "yarn",
+     "truncate": false
+   },
+   "router_aux_loss_coef": 0.9,
+   "sliding_window": 128,
+   "swiglu_limit": 7.0,
+   "tie_word_embeddings": false,
+   "transformers_version": "5.3.0",
+   "use_cache": true,
+   "vocab_size": 201088,
+   "quantization_config": {
+     "modules_to_not_convert": [
+       "model.layers.*.self_attn",
+       "model.layers.*.mlp.router",
+       "model.embed_tokens",
+       "lm_head"
+     ],
+     "quant_method": "mxfp4"
+   }
+ }
convert_mxfp4.py ADDED
@@ -0,0 +1,282 @@
+ #!/usr/bin/env python3
+ """Convert chromadb/context-1 BF16 weights to MXFP4 format for vLLM."""
+
+ import json
+ import math
+ import os
+ import shutil
+ import struct
+ import time
+
+ import numpy as np
+
+ MODEL_DIR = os.path.join(os.path.dirname(__file__), "context-1")
+ OUTPUT_DIR = os.path.join(os.path.dirname(__file__), "context-1-mxfp4")
+
+ GROUP_SIZE = 32
+
+ # E2M1 FP4 positive lookup: index -> value
+ FP4_VALUES = np.array(
+     [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0], dtype=np.float32
+ )
+ FP4_MAX = 6.0
+
+ # Midpoints between consecutive FP4 values for nearest rounding
+ FP4_BOUNDARIES = np.array(
+     [0.25, 0.75, 1.25, 1.75, 2.5, 3.5, 5.0, np.inf], dtype=np.float32
+ )
+
+
+ def float_to_e2m1(values: np.ndarray) -> np.ndarray:
+     """Quantize float values to 4-bit E2M1 codes (0..15)."""
+     sign = (values < 0).astype(np.uint8)
+     abs_val = np.abs(values)
+     codes = np.digitize(abs_val, FP4_BOUNDARIES).astype(np.uint8)
+     codes = np.clip(codes, 0, 7)
+     return (sign << 3) | codes
+
+
+ def compute_e8m0_scale(group: np.ndarray) -> tuple[np.uint8, float]:
+     """Compute E8M0 shared exponent for a group. Returns (e8m0_byte, scale_float)."""
+     amax = np.max(np.abs(group))
+     if amax == 0:
+         return np.uint8(0), 1.0
+
+     # scale = 2^ceil(log2(amax / FP4_MAX))
+     # ensures amax / scale <= FP4_MAX
+     log2_scale = math.ceil(math.log2(max(amax / FP4_MAX, 2**-127)))
+     log2_scale = max(log2_scale, -127)
+     log2_scale = min(log2_scale, 127)
+     e8m0 = np.uint8(log2_scale + 127)
+     scale = 2.0 ** log2_scale
+     return e8m0, scale
+
+
+ def quantize_mxfp4(weight: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
+     """
+     Quantize a 2D weight [rows, cols] from float to MXFP4 (vectorized).
+     cols must be divisible by GROUP_SIZE.
+     Returns (packed_uint8 [rows, cols//2], scales_uint8 [rows, cols//GROUP_SIZE]).
+     """
+     rows, cols = weight.shape
+     assert cols % GROUP_SIZE == 0, f"cols={cols} not divisible by {GROUP_SIZE}"
+
+     n_groups = cols // GROUP_SIZE
+     grouped = weight.reshape(rows, n_groups, GROUP_SIZE).astype(np.float32)
+
+     # Vectorized E8M0 scale computation per group
+     amax = np.max(np.abs(grouped), axis=-1)  # [rows, n_groups]
+     amax = np.maximum(amax, 2**-127)
+     log2_scale = np.ceil(np.log2(amax / FP4_MAX)).astype(np.int32)
+     log2_scale = np.clip(log2_scale, -127, 127)
+     scales = (log2_scale + 127).astype(np.uint8)  # [rows, n_groups]
+     scale_float = np.power(2.0, log2_scale.astype(np.float64)).astype(np.float32)
+
+     # Scale each group
+     scaled = grouped / scale_float[:, :, np.newaxis]  # [rows, n_groups, 32]
+
+     # Vectorized E2M1 quantization
+     flat_scaled = scaled.reshape(rows, cols)
+     fp4_codes = float_to_e2m1(flat_scaled)
+
+     # Pack 2 FP4 codes per byte: low nibble = even index, high nibble = odd index
+     even = fp4_codes[:, 0::2]
+     odd = fp4_codes[:, 1::2]
+     packed = ((odd << 4) | even).astype(np.uint8)
+
+     return packed, scales
+
+
+ def read_safetensors_header(path: str) -> tuple[dict, int]:
+     with open(path, "rb") as f:
+         header_size = struct.unpack("<Q", f.read(8))[0]
+         header_json = f.read(header_size).decode("utf-8")
+     return json.loads(header_json), header_size
+
+
+ def read_tensor(path: str, info: dict) -> np.ndarray:
+     dtype_map = {"BF16": (np.uint16, 2), "F32": (np.float32, 4), "U8": (np.uint8, 1)}
+     dtype_str = info["dtype"]
+     np_dtype, elem_size = dtype_map[dtype_str]
+     shape = info["shape"]
+     offsets = info["data_offsets"]
+     start, end = offsets
+
+     with open(path, "rb") as f:
+         header_size_bytes = struct.unpack("<Q", f.read(8))[0]
+         f.seek(8 + header_size_bytes + start)
+         data = f.read(end - start)
+
+     arr = np.frombuffer(data, dtype=np_dtype).reshape(shape)
+     return arr
+
+
+ def bf16_to_f32(arr: np.ndarray) -> np.ndarray:
+     """Convert BF16 (stored as uint16) to float32."""
+     f32_bytes = np.zeros(arr.shape, dtype=np.uint32)
+     f32_bytes[:] = arr.astype(np.uint32) << 16
+     return f32_bytes.view(np.float32)
+
+
+ def build_safetensors(tensors: dict, metadata: dict | None = None) -> bytes:
+     """Build safetensors binary from dict of {name: (np_array, dtype_str)}."""
+     header = {}
+     if metadata:
+         header["__metadata__"] = metadata
+
+     data_parts = []
+     offset = 0
+     for name, (arr, dtype_str) in sorted(tensors.items()):
+         raw = arr.tobytes()
+         data_parts.append(raw)
+         header[name] = {
+             "dtype": dtype_str,
+             "shape": list(arr.shape),
+             "data_offsets": [offset, offset + len(raw)],
+         }
+         offset += len(raw)
+
+     header_json = json.dumps(header, separators=(",", ":")).encode("utf-8")
+     # Pad header to 8-byte alignment
+     padding = (8 - len(header_json) % 8) % 8
+     header_json += b" " * padding
+
+     result = struct.pack("<Q", len(header_json)) + header_json
+     for part in data_parts:
+         result += part
+     return result
+
+
+ def main():
+     st_path = os.path.join(MODEL_DIR, "model.safetensors")
+
+     print("Reading safetensors header...")
+     header, header_size = read_safetensors_header(st_path)
+
+     metadata = header.pop("__metadata__", {"format": "pt"})
+
+     expert_weight_suffixes = (".mlp.experts.gate_up_proj", ".mlp.experts.down_proj")
+
+     output_tensors = {}
+     total = len([k for k in header if k != "__metadata__"])
+     done = 0
+
+     for name, info in sorted(header.items()):
+         if name == "__metadata__":
+             continue
+         done += 1
+         is_expert_weight = any(name.endswith(s) for s in expert_weight_suffixes)
+
+         if is_expert_weight:
+             print(f"[{done}/{total}] Quantizing {name} {info['shape']}...")
+             t0 = time.time()
+             raw = read_tensor(st_path, info)
+             weight_f32 = bf16_to_f32(raw)
+
+             num_experts = weight_f32.shape[0]
+
+             # BF16 checkpoint stores weights as [E, in_features, out_features].
+             # vLLM MXFP4 expects [E, out_features, in_features // 2] (packed).
+             # Both gate_up_proj and down_proj need transposing to [E, out, in].
+             weight_f32 = np.ascontiguousarray(
+                 np.transpose(weight_f32, (0, 2, 1))
+             )
+
+             blocks_list = []
+             scales_list = []
+             for e in range(num_experts):
+                 packed, scales = quantize_mxfp4(weight_f32[e])
+                 blocks_list.append(packed)
+                 scales_list.append(scales)
+
+             blocks = np.stack(blocks_list, axis=0)
+             scales = np.stack(scales_list, axis=0)
+
+             blocks_name = name.replace(".gate_up_proj", ".gate_up_proj_blocks").replace(
+                 ".down_proj", ".down_proj_blocks"
+             )
+             scales_name = name.replace(".gate_up_proj", ".gate_up_proj_scales").replace(
+                 ".down_proj", ".down_proj_scales"
+             )
+
+             output_tensors[blocks_name] = (blocks, "U8")
+             output_tensors[scales_name] = (scales, "U8")
+
+             dt = time.time() - t0
+             print(f" -> {blocks_name} {list(blocks.shape)}, "
+                   f"{scales_name} {list(scales.shape)} ({dt:.1f}s)")
+         else:
+             print(f"[{done}/{total}] Copying {name} {info['shape']}...")
+             raw = read_tensor(st_path, info)
+             output_tensors[name] = (raw, info["dtype"])
+
+     os.makedirs(OUTPUT_DIR, exist_ok=True)
+
+     print("\nWriting output safetensors...")
+     out_path = os.path.join(OUTPUT_DIR, "model.safetensors")
+
+     # Build the header first, then stream tensor data to disk one tensor at a
+     # time instead of concatenating one giant bytes blob in memory.
+     header_dict = {}
+     if metadata:
+         header_dict["__metadata__"] = metadata
+
+     offset = 0
+     tensor_order = sorted(output_tensors.keys())
+     for tname in tensor_order:
+         arr, dtype_str = output_tensors[tname]
+         raw_size = arr.nbytes
+         header_dict[tname] = {
+             "dtype": dtype_str,
+             "shape": list(arr.shape),
+             "data_offsets": [offset, offset + raw_size],
+         }
+         offset += raw_size
+
+     header_json = json.dumps(header_dict, separators=(",", ":")).encode("utf-8")
+     padding = (8 - len(header_json) % 8) % 8
+     header_json += b" " * padding
+
+     with open(out_path, "wb") as f:
+         f.write(struct.pack("<Q", len(header_json)))
+         f.write(header_json)
+         for tname in tensor_order:
+             arr, _ = output_tensors[tname]
+             f.write(arr.tobytes())
+
+     print(f"Saved {out_path} ({os.path.getsize(out_path) / 1e9:.2f} GB)")
+
+     # Copy tokenizer and chat template files alongside the weights
+     for fname in ["generation_config.json", "tokenizer_config.json",
+                   "tokenizer.json", "chat_template.jinja"]:
+         src = os.path.join(MODEL_DIR, fname)
+         if os.path.exists(src):
+             shutil.copy2(src, os.path.join(OUTPUT_DIR, fname))
+             print(f"Copied {fname}")
+
+     # Update config.json with quantization_config
+     with open(os.path.join(MODEL_DIR, "config.json")) as f:
+         config = json.load(f)
+
+     config["quantization_config"] = {
+         "modules_to_not_convert": [
+             "model.layers.*.self_attn",
+             "model.layers.*.mlp.router",
+             "model.embed_tokens",
+             "lm_head",
+         ],
+         "quant_method": "mxfp4",
+     }
+
+     with open(os.path.join(OUTPUT_DIR, "config.json"), "w") as f:
+         json.dump(config, f, indent=2)
+         f.write("\n")
+     print("Wrote config.json with quantization_config")
+
+     print("\nDone!")
+
+
+ if __name__ == "__main__":
+     main()
generation_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "bos_token_id": 199998,
+   "do_sample": true,
+   "eos_token_id": [
+     200002,
+     200012,
+     199999
+   ],
+   "pad_token_id": 199999,
+   "transformers_version": "5.3.0"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:22a08ffe200552c94e64457098c64ce1c040c31565899e52b96559f744724d86
+ size 13761318008
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0614fe83cadab421296e664e1f48f4261fa8fef6e03e63bb75c20f38e37d07d3
+ size 27868174
tokenizer_config.json ADDED
@@ -0,0 +1,183 @@
+ {
+   "added_tokens_decoder": {
+     "199998": {
+       "content": "<|startoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "199999": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200000": {
+       "content": "<|reserved_200000|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200001": {
+       "content": "<|reserved_200001|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200002": {
+       "content": "<|return|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200003": {
+       "content": "<|constrain|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200004": {
+       "content": "<|reserved_200004|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200005": {
+       "content": "<|channel|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200006": {
+       "content": "<|start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200007": {
+       "content": "<|end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200008": {
+       "content": "<|message|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200009": {
+       "content": "<|reserved_200009|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200010": {
+       "content": "<|reserved_200010|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200011": {
+       "content": "<|reserved_200011|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200012": {
+       "content": "<|call|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200013": {
+       "content": "<|reserved_200013|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200014": {
+       "content": "<|reserved_200014|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200015": {
+       "content": "<|reserved_200015|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200016": {
+       "content": "<|reserved_200016|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200017": {
+       "content": "<|reserved_200017|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "200018": {
+       "content": "<|endofprompt|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<|startoftext|>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|return|>",
+   "extra_special_tokens": {},
+   "model_input_names": [
+     "input_ids",
+     "attention_mask"
+   ],
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "<|endoftext|>",
+   "tokenizer_class": "PreTrainedTokenizerFast"
+ }