KennethTang fno2010 committed on
Commit 331a7ef · 0 parent(s)

Duplicate from fno2010/MiniMax-M2.7-TQ3

Co-authored-by: Jensen Zhang <fno2010@users.noreply.huggingface.co>
.gitattributes ADDED
@@ -0,0 +1,37 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ model.safetensors.index.json filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,55 @@
+ ---
+ license: other
+ base_model: MiniMaxAI/MiniMax-M2.7
+ tags:
+ - turboquant
+ - quantization
+ - 3-bit
+ - vllm
+ - mini-max
+ ---
+
+ # MiniMax-M2.7-TQ3
+
+ A **TurboQuant 3-bit** quantized version of [MiniMax-M2.7](https://huggingface.co/MiniMaxAI/MiniMax-M2.7), optimized for inference with [turboquant-vllm](https://github.com/varjoranta/turboquant-vllm).
+
+ ## Model Details
+
+ - **Base Model:** MiniMaxAI/MiniMax-M2.7
+ - **Quantization:** TurboQuant 3-bit
+ - **Quantization Tool:** [turboquant-vllm](https://github.com/varjoranta/turboquant-vllm)
+ - **Architecture:** Transformer-based LLM with extended context support
+
+ ## Usage
+
+ This quantized model is designed to work with the turboquant-vllm inference engine. Refer to the [turboquant-vllm repository](https://github.com/varjoranta/turboquant-vllm) for installation and usage instructions.
+
+ ### Example
+
+ ```python
+ # Please refer to turboquant-vllm for proper model loading
+ ```
+
+ ## Chat Template
+
+ The model uses a Jinja chat template with support for:
+ - System messages
+ - Tool/function calling (`<minimax:tool_call>` / `</minimax:tool_call>` delimiters)
+ - Reasoning content (`<think>` / `</think>` delimiters)
+ - Multi-turn conversations
+
+ The default model identity is: *"You are a helpful assistant. Your name is MiniMax-M2.7 and is built by MiniMax."*
+
+ ## Tokenizer
+
+ - **Backend:** tokenizers
+ - **Vocabulary Size:** 200,064 (`vocab_size` in `config.json`)
+ - **Special Tokens:** Includes tokens for tool calls, reasoning markers, and standard control tokens
+
+ ## Quantization Details
+
+ This is a 3-bit quantized checkpoint intended for efficient inference. The quantization was applied using the TurboQuant method via the turboquant-vllm project.
+
+ ## Disclaimer
+
+ This is a third-party quantized version of the original MiniMax-M2.7 model. Refer to the original model card for base model details and licensing.
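
The authoritative rendering logic lives in `chat_template.jinja` (applied automatically by `tokenizer.apply_chat_template` in `transformers`). As a rough illustration of the token layout that template produces for the plain system/user case — no tools, no tool responses — here is a minimal sketch; it is not a substitute for the real template:

```python
def render_prompt(messages, add_generation_prompt=True):
    """Sketch of the prompt layout implied by chat_template.jinja for the
    simple case: default identity, no tools, no reasoning content."""
    system = ("You are a helpful assistant. Your name is MiniMax-M2.7 "
              "and is built by MiniMax.")
    if messages and messages[0]["role"] == "system":
        system = messages[0]["content"]
        messages = messages[1:]
    # ]~!b[ opens the prompt; ]~b]<role> opens a turn; [e~[ closes it
    out = "]~!b[" + "]~b]system\n" + system + "[e~[\n"
    for m in messages:
        if m["role"] == "user":
            out += "]~b]user\n" + m["content"] + "[e~[\n"
        elif m["role"] == "assistant":
            out += "]~b]ai\n" + m["content"] + "[e~[\n"
    if add_generation_prompt:
        # the template pre-opens a <think> block for the assistant turn
        out += "]~b]ai\n<think>\n"
    return out

prompt = render_prompt([{"role": "user", "content": "Hello"}])
print(prompt.startswith("]~!b[]~b]system\n"))  # True
print(prompt.endswith("]~b]ai\n<think>\n"))    # True
```

In practice, always call `apply_chat_template` so tool rendering, interleaved thinking, and tool-response turns are handled by the shipped template.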
chat_template.jinja ADDED
@@ -0,0 +1,159 @@
+ {# ---------- special token variables ---------- #}
+ {%- set toolcall_begin_token = '<minimax:tool_call>' -%}
+ {%- set toolcall_end_token = '</minimax:tool_call>' -%}
+ {#- Tool Rendering Functions ============================================== -#}
+ {%- macro render_tool_namespace(namespace_name, tool_list) -%}
+ {%- for tool in tool_list -%}
+ <tool>{{ tool.function | tojson(ensure_ascii=False) }}</tool>
+ {% endfor -%}
+ {%- endmacro -%}
+ {%- macro visible_text(content) -%}
+ {%- if content is string -%}
+ {{ content }}
+ {%- elif content is iterable and content is not mapping -%}
+ {%- for item in content -%}
+ {%- if item is mapping and item.type == 'text' -%}
+ {{- item.text }}
+ {%- elif item is string -%}
+ {{- item }}
+ {%- endif -%}
+ {%- endfor -%}
+ {%- else -%}
+ {{- content }}
+ {%- endif -%}
+ {%- endmacro -%}
+ {#- System Message Construction ============================================ -#}
+ {%- macro build_system_message(system_message) -%}
+ {%- if system_message and system_message.content -%}
+ {{- visible_text(system_message.content) }}
+ {%- else -%}
+ {%- if model_identity is not defined -%}
+ {%- set model_identity = "You are a helpful assistant. Your name is MiniMax-M2.7 and is built by MiniMax." -%}
+ {%- endif -%}
+ {{- model_identity }}
+ {%- endif -%}
+
+ {#- Handle current_date -#}
+ {%- if system_message and system_message.current_date -%}
+ {{- '\n' ~ 'Current date: ' + system_message.current_date }}
+ {%- endif -%}
+ {#- Handle current_location -#}
+ {%- if system_message and system_message.current_location -%}
+ {{- '\n' ~ 'Current location: ' + system_message.current_location }}
+ {%- endif -%}
+ {%- endmacro -%}
+ {#- Main Template Logic ================================================= -#}
+ {#- Extract system message (only first message if it's system) -#}
+ {%- set system_message = none -%}
+ {%- set conversation_messages = messages -%}
+ {%- if messages and messages[0].role == "system" -%}
+ {%- set system_message = messages[0] -%}
+ {%- set conversation_messages = messages[1:] -%}
+ {%- endif -%}
+ {#- Get the last user message turn, for interleaved thinking -#}
+ {%- set ns = namespace(last_user_index=-1) %}
+ {% for m in conversation_messages %}
+ {%- if m.role == 'user' %}
+ {% set ns.last_user_index = loop.index0 -%}
+ {%- endif %}
+ {%- endfor %}
+ {#- Render system message -#}
+ {{- ']~!b[' ~ ']~b]system' ~ '\n' }}
+ {{- build_system_message(system_message) }}
+ {#- Render tools if available -#}
+ {%- if tools -%}
+ {{- '\n\n' ~ '# Tools' ~ '\n' ~ 'You may call one or more tools to assist with the user query.\nHere are the tools available in JSONSchema format:' ~ '\n' }}
+ {{- '\n' ~ '<tools>' ~ '\n' }}
+ {{- render_tool_namespace("functions", tools) }}
+ {{- '</tools>' ~ '\n\n' }}
+ {{- 'When making tool calls, use XML format to invoke tools and pass parameters:' ~ '\n' }}
+ {{- '\n' ~ toolcall_begin_token }}
+ <invoke name="tool-name-1">
+ <parameter name="param-key-1">param-value-1</parameter>
+ <parameter name="param-key-2">param-value-2</parameter>
+ ...
+ </invoke>
+ {{- '\n' ~ toolcall_end_token }}
+ {%- endif -%}
+ {{- '[e~[\n' }}
+
+ {#- Render messages -#}
+ {%- set last_tool_call = namespace(name=none) -%}
+ {%- for message in conversation_messages -%}
+ {%- if message.role == 'assistant' -%}
+ {#- Only render reasoning_content if no user message follows -#}
+ {{- ']~b]ai' ~ '\n' }}
+
+ {%- set reasoning_content = '' %}
+ {%- set content = visible_text(message.content) %}
+ {%- if message.reasoning_content is string %}
+ {%- set reasoning_content = message.reasoning_content %}
+ {%- else %}
+ {%- if '</think>' in content %}
+ {%- set reasoning_content = content.split('</think>')[0].strip('\n').split('<think>')[-1].strip('\n') %}
+ {%- set content = content.split('</think>')[-1].strip('\n') %}
+ {%- endif %}
+ {%- endif %}
+ {%- if reasoning_content and loop.index0 > ns.last_user_index -%}
+ {{- '<think>' ~ '\n' ~ reasoning_content ~ '\n' ~ '</think>' ~ '\n\n' }}
+ {%- endif -%}
+ {%- if content -%}
+ {{- content }}
+ {%- endif -%}
+ {%- if message.tool_calls -%}
+ {{- '\n' ~ toolcall_begin_token ~ '\n' }}
+
+ {%- for tool_call in message.tool_calls -%}
+ {%- if tool_call.function %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {{- '<invoke name="' + tool_call.name + '">' }}
+ {% set _args = tool_call.arguments %}
+ {%- for k, v in _args.items() %}
+ {{- '<parameter name="' + k + '">' }}
+ {{- v | tojson(ensure_ascii=False) if v is not string else v }}
+ {{- '</parameter>' }}
+ {% endfor %}
+ {{- '</invoke>' ~ '\n' }}
+ {%- endfor -%}
+
+ {{- toolcall_end_token }}
+ {%- set last_tool_call.name = message.tool_calls[-1].name -%}
+ {%- else -%}
+ {%- set last_tool_call.name = none -%}
+ {%- endif -%}
+ {{- '[e~[' ~ '\n' }}
+
+ {%- elif message.role == 'tool' -%}
+ {%- if last_tool_call.name is none -%}
+ {{- raise_exception("Message has tool role, but there was no previous assistant message with a tool call!") }}
+ {%- endif -%}
+ {%- if loop.first or (conversation_messages[loop.index0 - 1].role != 'tool') -%}
+ {{- ']~b]tool' }}
+ {%- endif -%}
+ {%- if message.content is string -%}
+ {{- '\n<response>' }}
+ {{- message.content }}
+ {{- '</response>' }}
+ {%- else -%}
+ {%- for tr in message.content -%}
+ {{- '\n<response>' }}
+ {{- tr.output if tr.output is defined else (tr.text if tr.type == 'text' and tr.text is defined else tr) }}
+ {{- '\n</response>' }}
+ {%- endfor -%}
+ {%- endif -%}
+ {%- if loop.last or (conversation_messages[loop.index0 + 1].role != 'tool') -%}
+ {{- '[e~[\n' -}}
+ {%- endif -%}
+
+ {%- elif message.role == 'user' -%}
+ {{- ']~b]user' ~ '\n' }}
+ {{- visible_text(message.content) }}
+ {{- '[e~[' ~ '\n' }}
+ {%- endif -%}
+ {%- endfor -%}
+
+ {#- Generation prompt -#}
+ {%- if add_generation_prompt -%}
+ {{- ']~b]ai' ~ '\n' ~ '<think>' ~ '\n' }}
+ {%- endif -%}
config.json ADDED
@@ -0,0 +1,125 @@
+ {
+ "architectures": [
+ "MiniMaxM2ForCausalLM"
+ ],
+ "attention_dropout": 0.0,
+ "attn_type_list": [
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1,
+ 1
+ ],
+ "auto_map": {
+ "AutoConfig": "configuration_minimax_m2.MiniMaxM2Config",
+ "AutoModelForCausalLM": "modeling_minimax_m2.MiniMaxM2ForCausalLM"
+ },
+ "bos_token_id": 200034,
+ "dtype": "bfloat16",
+ "eos_token_id": 200020,
+ "head_dim": 128,
+ "hidden_act": "silu",
+ "hidden_size": 3072,
+ "initializer_range": 0.02,
+ "intermediate_size": 1536,
+ "max_position_embeddings": 196608,
+ "model_type": "minimax_m2",
+ "mtp_transformer_layers": 1,
+ "num_attention_heads": 48,
+ "num_experts_per_tok": 8,
+ "num_hidden_layers": 62,
+ "num_key_value_heads": 8,
+ "num_local_experts": 256,
+ "num_mtp_modules": 3,
+ "output_router_logits": false,
+ "pad_token_id": null,
+ "qk_norm_type": "per_layer",
+ "quantization_config": {
+ "activation_scheme": "dynamic",
+ "fmt": "float8_e4m3fn",
+ "modules_to_not_convert": [
+ "gate",
+ "e_score_correction_bias",
+ "lm_head"
+ ],
+ "quant_method": "fp8",
+ "weight_block_size": [
+ 128,
+ 128
+ ]
+ },
+ "rms_norm_eps": 1e-06,
+ "rope_parameters": {
+ "rope_theta": 5000000,
+ "rope_type": "default"
+ },
+ "rotary_dim": 64,
+ "router_aux_loss_coef": 0.001,
+ "router_jitter_noise": 0.0,
+ "scoring_func": "sigmoid",
+ "shared_intermediate_size": 0,
+ "tie_word_embeddings": false,
+ "transformers_version": "5.5.3",
+ "use_cache": true,
+ "use_mtp": true,
+ "use_qk_norm": true,
+ "use_routing_bias": true,
+ "vocab_size": 200064
+ }
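
The config describes a sparse MoE router: `"num_local_experts": 256`, `"num_experts_per_tok": 8`, `"scoring_func": "sigmoid"`, `"use_routing_bias": true`, with `e_score_correction_bias` kept unquantized. One common pattern matching these fields is top-k selection on sigmoid scores plus a selection-only correction bias; the sketch below illustrates that pattern, and the exact MiniMax-M2.7 router (in `modeling_minimax_m2.py`) may differ in detail:

```python
import math

def route_token(logits, bias, top_k=8):
    """Select top_k experts for one token and return normalized weights.

    `logits`: raw router outputs, one per expert.
    `bias`: per-expert correction term used only for *selection*; the
    returned weights come from the unbiased scores (the role an
    e_score_correction_bias term plays in some routers -- an assumption
    here, not a statement about MiniMax's implementation).
    """
    # sigmoid scoring, matching "scoring_func": "sigmoid"
    scores = [1.0 / (1.0 + math.exp(-x)) for x in logits]
    # rank experts by biased score, keep the top_k
    ranked = sorted(range(len(scores)),
                    key=lambda i: scores[i] + bias[i], reverse=True)
    chosen = ranked[:top_k]
    # renormalize the unbiased scores of the chosen experts
    total = sum(scores[i] for i in chosen)
    return {i: scores[i] / total for i in chosen}

weights = route_token(
    logits=[0.5, -1.0, 2.0, 0.0, 1.5, -0.5, 0.25, 3.0, -2.0, 1.0],
    bias=[0.0] * 10, top_k=8)
print(len(weights))                      # 8
print(round(sum(weights.values()), 6))   # 1.0
```

With 256 experts and 8 active per token, only ~3% of expert parameters are touched per forward step, which is why the unquantized `gate` and bias tensors are small relative to the expert weights.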
model-00001-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bd00ab5236bf53d74692cee604dde8405528690242132aa825ce76ced53feea9
+ size 5369087104
model-00002-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3f3d09db4787e49655f2c67b46554fe218b91d1b514e2cabefbff9af044fe281
+ size 5369271616
model-00003-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:700b6f33dfb92b8e08aa4bd554e07f0a3509c1aaf4b48340d776b2c00fbd2751
+ size 5368336416
model-00004-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e201856cd66e4c0ccd1c293ecd283da2a2e58a2529a307c7e7b2deba46a97ea
+ size 5369279832
model-00005-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed5b04b6afae592dea95e3a13c54649fc5085d129b9a03686bcc35893eb3a6c7
+ size 5368345640
model-00006-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dec5bad22966d63941438b98d74b5dde502c1f0447de911db0a3ad14326d34fc
+ size 5368344960
model-00007-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ecd6b667f0e77432b0b0cd5d4999f0c0f0da392619963339dfd5f83d48e824fd
+ size 5369279984
model-00008-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:584e0bea58a721e1a1f90764957e845f5c3c8cf86a2fae733576684c6096c833
+ size 5368344776
model-00009-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b4be60b92d1df360c545421d3caef06eb54f5b3526e99803301e070409a972d
+ size 5369279888
model-00010-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a61e736ca6d4d9b6b6d2800a894d88f9ede225b9e761e0509101a3bc7e5d91a0
+ size 5368345648
model-00011-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a7be5d294aeead79a78aa431e0a48da2d161c1b9c3a4d8f3d853d5bdc2da05b
+ size 5368344960
model-00012-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a9f6c167d27bfd926aae53fa18114e7eb4f252e9d46b85a158b5f9682559b90
+ size 5369279984
model-00013-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7783e45ab325aef2393c1304eea9b3ff5c629cf8093d2554639174edf7b3df88
+ size 5368344776
model-00014-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d1ca485bb7d2ec90c4d5477c53aa845ed1eb03ed53859b78d5bf373030705f5
+ size 5369279888
model-00015-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3fd484bda6bca4258fdf2e52c80b503394afc8612dec4453b5f8a2b86c9b2a79
+ size 5368345648
model-00016-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e8feb4989e12a6b056fe50beae6b63805ccccd8bb66bba7bfd3d1aa65e680b2
+ size 5368344960
model-00017-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28a91e279e738d6f3aa8e8222734569cfb79eda4b9e52c48b90831872c7e8799
+ size 5369279984
model-00018-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:727080a85db6f89a564e738d7b9210c911fcea0e5d17a9939aa6ce459a4be6ba
+ size 3643906408
model.safetensors.index.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db7c8e9722fd93a1a98f414ebdb345430fb5f295886c21faf0d453245a9dadd3
+ size 15403873
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b81e5e5cba2b169e86a0771825a927e9d41b4c4484ded4a286410f41f702f17
+ size 15523144
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
+ {
+ "add_prefix_space": false,
+ "backend": "tokenizers",
+ "bos_token": "]~!b[",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "[e~[",
+ "extra_special_tokens": [
+ "<code_interpreter>",
+ "<commit_after>",
+ "<commit_before>",
+ "<commit_msg>",
+ "<empty_output>",
+ "<filename>",
+ "<fim_middle>",
+ "<fim_pad>",
+ "<fim_prefix>",
+ "<fim_suffix>",
+ "<function_call>",
+ "<gh_stars>",
+ "]<]speech[>[",
+ "]<]image[>[",
+ "]<]video[>[",
+ "]<]start of speech[>[",
+ "]<]end of speech[>[",
+ "]<]start of image[>[",
+ "]<]end of image[>[",
+ "]<]start of video[>[",
+ "]<]end of video[>[",
+ "]<]vision pad[>[",
+ "]~!b[",
+ "<issue_closed>",
+ "<issue_comment>",
+ "<issue_start>",
+ "<jupyter_code>",
+ "<jupyter_output>",
+ "<jupyter_start>",
+ "<jupyter_text>",
+ "<reponame>",
+ "[e~[",
+ "]!d~[",
+ "]!p~[",
+ "]~b]",
+ "<jupyter_error>",
+ "<add_file>",
+ "<delete_file>",
+ "<rename_file>",
+ "<edit_file>",
+ "<commit_message>",
+ "<empty_source_file>",
+ "<repo_struct>",
+ "<code_context>",
+ "<file_content>",
+ "<source_files>",
+ "<pr_start>",
+ "<review_comment>",
+ "<filepath>",
+ "<file_sep>"
+ ],
+ "is_local": true,
+ "model_max_length": 40960000,
+ "tokenizer_class": "TokenizersBackend",
+ "unk_token": "]!d~["
+ }
tq_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "format": "tq3_native",
+ "bits": 3,
+ "group_size": 128,
+ "quantizer_seed": 42,
+ "compressed_layers": 47926,
+ "original_model": "/data/superalarm/models/MiniMax-M2.7"
+ }
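
`tq_config.json` reports 3 bits per weight with groups of 128 values sharing quantization parameters. The actual `tq3_native` format is not documented here; as a generic illustration of group-wise low-bit quantization with these parameters, a symmetric round-to-nearest scheme looks like this:

```python
def quantize_group(values, bits=3):
    """Symmetric round-to-nearest quantization of one group to `bits` bits.

    A generic sketch of group-wise low-bit quantization. It is NOT the
    TurboQuant "tq3_native" algorithm, which almost certainly differs;
    it only shows what "3-bit, group_size 128" means mechanically.
    """
    qmax = 2 ** (bits - 1) - 1          # signed 3-bit codes span [-4, 3]
    amax = max(abs(v) for v in values)
    scale = amax / qmax if amax > 0 else 1.0   # one scale per group
    codes = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    dequant = [c * scale for c in codes]
    return codes, scale, dequant

group = [0.12, -0.4, 0.33, 0.05, -0.21, 0.4, -0.07, 0.18]  # toy group; real group_size is 128
codes, scale, deq = quantize_group(group)
print(all(-4 <= c <= 3 for c in codes))  # True
```

With `group_size` 128, the amortized cost is 3 bits per weight plus one scale per 128 weights (about 3.125 bits/weight if the scale is fp16, under this sketch's assumptions), versus 16 bits/weight for the bf16 original.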