diff --git a/.gitattributes b/.gitattributes
index a6344aac8c09253b3b630fb776ae94478aa0275b..aa7aacd0134a92c3c1943fdecc75cd8b7420cce6 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+model.safetensors.index.json filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
diff --git a/.mdl b/.mdl
new file mode 100644
index 0000000000000000000000000000000000000000..125146bfac0da312f83f9931dea571541f595354
Binary files /dev/null and b/.mdl differ
diff --git a/.msc b/.msc
new file mode 100644
index 0000000000000000000000000000000000000000..1cda92927e6ad2cc5ab5c52505921d9b157b6f7e
Binary files /dev/null and b/.msc differ
diff --git a/.mv b/.mv
new file mode 100644
index 0000000000000000000000000000000000000000..113aa401e56fdfa1b40464f5595cdbd4a9adc495
--- /dev/null
+++ b/.mv
@@ -0,0 +1 @@
+Revision:master,CreatedAt:1772131132
\ No newline at end of file
diff --git a/README.md b/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..cf9ef1fae1721ba1cf49793d97c7345d415ad3c8
--- /dev/null
+++ b/README.md
@@ -0,0 +1,225 @@
+---
+library_name: transformers
+license: mit
+pipeline_tag: text-generation
+tags:
+- vLLM
+- AWQ
+base_model:
+ - ZhipuAI/GLM-5
+base_model_relation: quantized
+
+---
+# GLM-5-AWQ
+Base model: [ZhipuAI/GLM-5](https://www.modelscope.cn/models/ZhipuAI/GLM-5)
+
+This repository provides an AWQ-quantized version of the base model, produced with data-free quantization (no calibration dataset required).
+
+### 【Dependencies / Installation】
+
+```text
+# NOTE: vllm==0.16.0rc2 does NOT work with this model;
+# you must upgrade to >=0.16.1rc1.
+vllm>=0.16.1rc1.dev7
+transformers>=5.3.0.dev0
+```
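A quick way to see why the version floor above matters is to compare version strings numerically. This is a minimal sketch that compares only the numeric release part of a version string (a real check would read the installed version via `importlib.metadata.version("vllm")`; note it deliberately ignores `rc`/`dev` suffixes):

```python
import re

def release_tuple(version: str) -> tuple:
    """Extract the numeric release part, e.g. '0.16.1rc1.dev7' -> (0, 16, 1)."""
    match = re.match(r"(\d+)\.(\d+)\.(\d+)", version)
    if match is None:
        raise ValueError(f"unparsable version: {version!r}")
    return tuple(int(part) for part in match.groups())

# The broken and required versions noted above:
assert release_tuple("0.16.0rc2") < release_tuple("0.16.1")        # too old
assert release_tuple("0.16.1rc1.dev7") >= release_tuple("0.16.1")  # new enough
```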
+
+As of **2026-02-26**, make sure CUDA 12.8 is installed on your system.
+
+Then create a fresh Python environment (e.g., a Python 3.12 venv) and run:
+```bash
+pip install -U vllm --pre --index-url https://pypi.org/simple --extra-index-url https://wheels.vllm.ai/nightly
+pip install git+https://github.com/huggingface/transformers.git
+pip install git+https://github.com/deepseek-ai/DeepGEMM.git@v2.1.1.post3 --no-build-isolation
+```
+[vLLM Official Guide](https://docs.vllm.ai/projects/recipes/en/latest/Qwen/Qwen3.5.html)
+
+
+### 【vLLM Startup Command】
+Note: when launching with TP=8, include `--enable-expert-parallel`;
+otherwise the expert tensors will not be evenly sharded across the GPUs.
+
+```bash
+export VLLM_USE_DEEP_GEMM=0
+export VLLM_USE_FLASHINFER_MOE_FP16=1
+export VLLM_USE_FLASHINFER_SAMPLER=0
+export OMP_NUM_THREADS=4
+
+vllm serve \
+ __YOUR_PATH__/tclf90/GLM-5-AWQ \
+ --served-model-name MY_MODEL \
+ --swap-space 16 \
+ --max-num-seqs 32 \
+ --max-model-len 32768 \
+ --gpu-memory-utilization 0.9 \
+ --tensor-parallel-size 8 \
+ --enable-expert-parallel \
+ --enable-auto-tool-choice \
+ --tool-call-parser glm47 \
+ --reasoning-parser glm45 \
+ --speculative-config '{"method":"mtp","num_speculative_tokens":1}' \
+ --trust-remote-code \
+ --host 0.0.0.0 \
+ --port 8000
+```
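Once the server is up, it exposes an OpenAI-compatible API. Here is a minimal sketch of a chat request matching the command above (the model name `MY_MODEL`, host, and port come from the serve flags; the actual send is commented out, so nothing below assumes a running server):

```python
import json

# Payload for POST /v1/chat/completions on the server started above.
payload = {
    "model": "MY_MODEL",  # matches --served-model-name
    "messages": [{"role": "user", "content": "Hello, who are you?"}],
    "max_tokens": 128,
    "temperature": 1.0,
    "top_p": 0.95,
}
body = json.dumps(payload).encode("utf-8")

# To actually send it (requires the server from the command above):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body, headers={"Content-Type": "application/json"})
# resp = json.loads(urllib.request.urlopen(req).read())
# print(resp["choices"][0]["message"]["content"])
```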
+
+### 【Logs】
+```
+2026-02-26
+1. Initial commit
+```
+
+### 【Model Files】
+| File Size | Last Updated |
+|-----------|--------------|
+| `392 GiB` | `2026-02-26` |
+
+### 【Model Download】
+```python
+from modelscope import snapshot_download
+snapshot_download('tclf90/GLM-5-AWQ', cache_dir="your_local_path")
+```
+
+### 【Overview】
+
+# GLM-5
+
+👋 Join our WeChat or Discord community.
+
+📖 Check out the GLM-5 technical blog.
+
+📍 Use GLM-5 API services on the Z.ai API Platform.
+
+👉 One click to GLM-5.
+
+## Introduction
+
+We are launching GLM-5, targeting complex systems engineering and long-horizon agentic tasks. Scaling remains one of the most important ways to improve intelligence on the path to Artificial General Intelligence (AGI). Compared to GLM-4.5, GLM-5 scales from 355B parameters (32B active) to 744B parameters (40B active) and increases pre-training data from 23T to 28.5T tokens. GLM-5 also integrates DeepSeek Sparse Attention (DSA), substantially reducing deployment cost while preserving long-context capability.
+
+Reinforcement learning aims to bridge the gap between competence and excellence in pre-trained models, but deploying it at scale for LLMs remains challenging due to the inefficiency of RL training. To this end, we developed [slime](https://github.com/THUDM/slime), a novel **asynchronous RL infrastructure** that substantially improves training throughput and efficiency, enabling more fine-grained post-training iterations. With advances in both pre-training and post-training, GLM-5 delivers significant improvements over GLM-4.7 across a wide range of academic benchmarks and achieves best-in-class performance among open-source models on reasoning, coding, and agentic tasks, closing the gap with frontier models.
+
+## Benchmark
+
+| | GLM-5 | GLM-4.7 | DeepSeek-V3.2 | Kimi K2.5 | Claude Opus 4.5 | Gemini 3 Pro | GPT-5.2 (xhigh) |
+| -------------------------------- | ---------------------- | --------- | ------------- |-----------| --------------- | ------------ | --------------- |
+| HLE | 30.5 | 24.8 | 25.1 | 31.5 | 28.4 | 37.2 | 35.4 |
+| HLE (w/ Tools) | 50.4 | 42.8 | 40.8 | 51.8 | 43.4* | 45.8* | 45.5* |
+| AIME 2026 I | 92.7 | 92.9 | 92.7 | 92.5 | 93.3 | 90.6 | - |
+| HMMT Nov. 2025 | 96.9 | 93.5 | 90.2 | 91.1 | 91.7 | 93.0 | 97.1 |
+| IMOAnswerBench | 82.5 | 82.0 | 78.3 | 81.8 | 78.5 | 83.3 | 86.3 |
+| GPQA-Diamond | 86.0 | 85.7 | 82.4 | 87.6 | 87.0 | 91.9 | 92.4 |
+| SWE-bench Verified | 77.8 | 73.8 | 73.1 | 76.8 | 80.9 | 76.2 | 80.0 |
+| SWE-bench Multilingual | 73.3 | 66.7 | 70.2 | 73.0 | 77.5 | 65.0 | 72.0 |
+| Terminal-Bench 2.0 (Terminus 2) | 56.2 / 60.7 † | 41.0 | 39.3 | 50.8 | 59.3 | 54.2 | 54.0 |
+| Terminal-Bench 2.0 (Claude Code) | 56.2 / 61.1 † | 32.8 | 46.4 | - | 57.9 | - | - |
+| CyberGym | 43.2 | 23.5 | 17.3 | 41.3 | 50.6 | 39.9 | - |
+| BrowseComp | 62.0 | 52.0 | 51.4 | 60.6 | 37.0 | 37.8 | - |
+| BrowseComp (w/ Context Manage) | 75.9 | 67.5 | 67.6 | 74.9 | 67.8 | 59.2 | 65.8 |
+| BrowseComp-Zh | 72.7 | 66.6 | 65.0 | 62.3 | 62.4 | 66.8 | 76.1 |
+| τ²-Bench | 89.7 | 87.4 | 85.3 | 80.2 | 91.6 | 90.7 | 85.5 |
+| MCP-Atlas (Public Set) | 67.8 | 52.0 | 62.2 | 63.8 | 65.2 | 66.6 | 68.0 |
+| Tool-Decathlon | 38.0 | 23.8 | 35.2 | 27.8 | 43.5 | 36.4 | 46.3 |
+| Vending Bench 2 | $4,432.12 | $2,376.82 | $1,034.00 | $1,198.46 | $4,967.06 | $5,478.16 | $3,591.33 |
+
+> *: scores on the full set.
+>
+> †: a verified version of Terminal-Bench 2.0 that fixes some ambiguous instructions.
+>
+> See the footnotes below for more evaluation details.
+
+### Footnote
+
+* **Humanity’s Last Exam (HLE) & other reasoning tasks**: We evaluate with a maximum generation length of 131,072 tokens (`temperature=1.0, top_p=0.95, max_new_tokens=131072`). By default, we report the text-only subset; results marked with * are from the full set. We use GPT-5.2 (medium) as the judge model. For HLE-with-tools, we use a maximum context length of 202,752 tokens.
+* **SWE-bench & SWE-bench Multilingual**: We run the SWE-bench suite with OpenHands using a tailored instruction prompt. Settings: `temperature=0.7, top_p=0.95, max_new_tokens=16384`, with a 200K context window.
+* **BrowseComp**: Without context management, we retain details only from the most recent 5 turns. With context management, we use the same discard-all strategy as DeepSeek-V3.2 and Kimi K2.5.
+* **Terminal-Bench 2.0 (Terminus 2)**: We evaluate with the Terminus framework using `timeout=2h, temperature=0.7, top_p=1.0, max_new_tokens=8192`, with a 128K context window. Resource limits are capped at 16 CPUs and 32 GB RAM.
+* **Terminal-Bench 2.0 (Claude Code)**: We evaluate in Claude Code 2.1.14 (think mode, default effort) with `temperature=1.0, top_p=0.95, max_new_tokens=65536`. We remove wall-clock time limits due to generation speed, while preserving per-task CPU and memory constraints. Scores are averaged over 5 runs. We fix environment issues introduced by Claude Code and also report results on a verified Terminal-Bench 2.0 dataset that resolves ambiguous instructions (see: [https://huggingface.co/datasets/zai-org/terminal-bench-2-verified](https://huggingface.co/datasets/zai-org/terminal-bench-2-verified)).
+* **CyberGym**: We evaluate in Claude Code 2.1.18 (think mode, no web tools) with `temperature=1.0, top_p=1.0, max_new_tokens=32000` and a 250-minute timeout per task. Results are single-run Pass@1 over 1,507 tasks.
+* **MCP-Atlas**: All models are evaluated in think mode on the 500-task public subset with a 10-minute timeout per task. We use Gemini 3 Pro as the judge model.
+* **τ²-bench**: We add a small prompt adjustment in Retail and Telecom to avoid failures caused by premature user termination. For Airline, we apply the domain fixes proposed in the Claude Opus 4.5 system card.
+* **Vending Bench 2**: Runs are conducted independently by [Andon Labs](https://andonlabs.com/evals/vending-bench-2).
+
+
+## Serve GLM-5 Locally
+
+### Prepare environment
+
+vLLM, SGLang, and xLLM all support local deployment of GLM-5; a brief deployment guide for each is provided below.
+
++ vLLM
+
+  Using Docker:
+
+ ```shell
+ docker pull vllm/vllm-openai:nightly
+ ```
+
+ or using pip:
+
+ ```shell
+ pip install -U vllm --pre --index-url https://pypi.org/simple --extra-index-url https://wheels.vllm.ai/nightly
+ ```
+
+ then upgrade transformers:
+
+  ```shell
+ pip install git+https://github.com/huggingface/transformers.git
+ ```
+
++ SGLang
+
+  Using Docker:
+ ```bash
+ docker pull lmsysorg/sglang:glm5-hopper # For Hopper GPU
+ docker pull lmsysorg/sglang:glm5-blackwell # For Blackwell GPU
+ ```
+
+### Deploy
+
++ vLLM
+
+ ```shell
+ vllm serve zai-org/GLM-5-FP8 \
+ --tensor-parallel-size 8 \
+ --gpu-memory-utilization 0.85 \
+ --speculative-config.method mtp \
+ --speculative-config.num_speculative_tokens 1 \
+ --tool-call-parser glm47 \
+ --reasoning-parser glm45 \
+ --enable-auto-tool-choice \
+ --served-model-name glm-5-fp8
+ ```
+
+ Check the [recipes](https://github.com/vllm-project/recipes/blob/main/GLM/GLM5.md) for more details.
+
++ SGLang
+
+ ```shell
+ python3 -m sglang.launch_server \
+ --model-path zai-org/GLM-5-FP8 \
+ --tp-size 8 \
+ --tool-call-parser glm47 \
+ --reasoning-parser glm45 \
+ --speculative-algorithm EAGLE \
+ --speculative-num-steps 3 \
+ --speculative-eagle-topk 1 \
+ --speculative-num-draft-tokens 4 \
+ --mem-fraction-static 0.85 \
+ --served-model-name glm-5-fp8
+ ```
+
+ Check the [sglang cookbook](https://cookbook.sglang.io/autoregressive/GLM/GLM-5) for more details.
+
++ xLLM and other Ascend NPU platforms
+
+ Please check the deployment guide [here](https://github.com/zai-org/GLM-5/blob/main/example/ascend.md).
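Both deployment paths above enable speculative decoding (MTP in vLLM, EAGLE in SGLang). As a small sketch of the two vLLM spellings used in this document, the inline-JSON form from the earlier serve command maps directly onto the dotted-flag form used in the Deploy section (the mapping is illustrative; whether vLLM accepts both spellings for every field is not claimed here):

```python
import json

# The speculative-decoding config from the first serve command, as JSON...
spec = json.loads('{"method":"mtp","num_speculative_tokens":1}')

# ...is equivalent to the dotted-flag spelling used in the Deploy section.
flags = [f"--speculative-config.{k} {v}" for k, v in spec.items()]
print(" \\\n  ".join(flags))
```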
+
+
+## Citation
+
+Our technical report is coming soon.
diff --git a/chat_template.jinja b/chat_template.jinja
new file mode 100644
index 0000000000000000000000000000000000000000..2ab98ef068d62829d17c5ade1827b9f013fa2bbf
--- /dev/null
+++ b/chat_template.jinja
@@ -0,0 +1,86 @@
+[gMASK]
+{%- if tools -%}
+<|system|>
+# Tools
+
+You may call one or more functions to assist with the user query.
+
+You are provided with function signatures within <tools></tools> XML tags:
+<tools>
+{% for tool in tools %}
+{{ tool | tojson(ensure_ascii=False) }}
+{% endfor %}
+</tools>
+
+For each function call, output the function name and arguments within the following XML format:
+<tool_call>{function-name}
+<arg_key>{arg-key-1}</arg_key>
+<arg_value>{arg-value-1}</arg_value>
+<arg_key>{arg-key-2}</arg_key>
+<arg_value>{arg-value-2}</arg_value>
+...
+</tool_call>{%- endif -%}
+{%- macro visible_text(content) -%}
+ {%- if content is string -%}
+ {{- content }}
+ {%- elif content is iterable and content is not mapping -%}
+ {%- for item in content -%}
+ {%- if item is mapping and item.type == 'text' -%}
+ {{- item.text }}
+ {%- elif item is string -%}
+ {{- item }}
+ {%- endif -%}
+ {%- endfor -%}
+ {%- else -%}
+ {{- content }}
+ {%- endif -%}
+{%- endmacro -%}
+{%- set ns = namespace(last_user_index=-1) %}
+{%- for m in messages %}
+ {%- if m.role == 'user' %}
+ {% set ns.last_user_index = loop.index0 -%}
+ {%- endif %}
+{%- endfor %}
+{% for m in messages %}
+{%- if m.role == 'user' -%}<|user|>{{ visible_text(m.content) }}
+{%- elif m.role == 'assistant' -%}
+<|assistant|>
+{%- set reasoning_content = '' %}
+{%- set content = visible_text(m.content) %}
+{%- if m.reasoning_content is string %}
+ {%- set reasoning_content = m.reasoning_content %}
+{%- else %}
+  {%- if '</think>' in content %}
+    {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
+    {%- set content = content.split('</think>')[-1].lstrip('\n') %}
+ {%- endif %}
+{%- endif %}
+{%- if ((clear_thinking is defined and not clear_thinking) or loop.index0 > ns.last_user_index) and reasoning_content -%}
+{{ '<think>' + reasoning_content.strip() + '</think>'}}
+{%- else -%}
+{{ '<think></think>' }}
+{%- endif -%}
+{%- if content.strip() -%}
+{{ content.strip() }}
+{%- endif -%}
+{% if m.tool_calls %}
+{% for tc in m.tool_calls %}
+{%- if tc.function %}
+ {%- set tc = tc.function %}
+{%- endif %}
+{{- '<tool_call>' + tc.name -}}
+{% set _args = tc.arguments %}{% for k, v in _args.items() %}<arg_key>{{ k }}</arg_key><arg_value>{{ v | tojson(ensure_ascii=False) if v is not string else v }}</arg_value>{% endfor %}</tool_call>{% endfor %}
+{% endif %}
+{%- elif m.role == 'tool' -%}
+{%- if m.content is string -%}
+{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
+ {{- '<|observation|>' }}
+{%- endif %}
+{{- '<tool_response>' }}
+{{- m.content }}
+{{- '</tool_response>' }}
+{%- else -%}
+<|observation|>{% for tr in m.content %}<tool_response>
+{{ tr.output if tr.output is defined else tr }}</tool_response>{% endfor -%}
+{% endif -%}
+{%- elif m.role == 'system' -%}
+<|system|>{{ visible_text(m.content) }}
+{%- endif -%}
+{%- endfor -%}
+{%- if add_generation_prompt -%}
+ <|assistant|>{{- '<think></think>' if (enable_thinking is defined and not enable_thinking) else '' -}}
+{%- endif -%}
\ No newline at end of file
diff --git a/config.json b/config.json
new file mode 100644
index 0000000000000000000000000000000000000000..5ec4c1031a9b2c2c025b2f8fe2985ccb54441cb3
--- /dev/null
+++ b/config.json
@@ -0,0 +1,75 @@
+{
+ "name_or_path": "tclf90/GLM-5-AWQ",
+ "architectures": [
+ "GlmMoeDsaForCausalLM"
+ ],
+ "attention_bias": false,
+ "attention_dropout": 0.0,
+ "dtype": "bfloat16",
+ "eos_token_id": [
+ 154820,
+ 154827,
+ 154829
+ ],
+ "ep_size": 1,
+ "first_k_dense_replace": 3,
+ "hidden_act": "silu",
+ "head_dim": 64,
+ "hidden_size": 6144,
+ "index_head_dim": 128,
+ "index_n_heads": 32,
+ "index_topk": 2048,
+ "indexer_rope_interleave": true,
+ "initializer_range": 0.02,
+ "intermediate_size": 12288,
+ "kv_lora_rank": 512,
+ "max_position_embeddings": 202752,
+ "moe_intermediate_size": 2048,
+ "moe_layer_freq": 1,
+ "model_type": "glm_moe_dsa",
+ "n_group": 1,
+ "n_routed_experts": 256,
+ "n_shared_experts": 1,
+ "norm_topk_prob": true,
+ "num_attention_heads": 64,
+ "num_experts_per_tok": 8,
+ "num_hidden_layers": 78,
+ "num_key_value_heads": 64,
+ "num_nextn_predict_layers": 1,
+ "pad_token_id": 154820,
+ "pretraining_tp": 1,
+ "q_lora_rank": 2048,
+ "qk_head_dim": 256,
+ "qk_nope_head_dim": 192,
+ "qk_rope_head_dim": 64,
+ "rms_norm_eps": 1e-05,
+ "rope_interleave": true,
+ "rope_parameters": {
+ "rope_theta": 1000000,
+ "rope_type": "default"
+ },
+ "routed_scaling_factor": 2.5,
+ "scoring_func": "sigmoid",
+ "tie_word_embeddings": false,
+ "topk_group": 1,
+ "topk_method": "noaux_tc",
+ "transformers_version": "5.0.2.dev0",
+ "use_cache": true,
+ "v_head_dim": 256,
+ "vocab_size": 154880,
+ "quantization_config": {
+ "quant_method": "awq",
+ "bits": 4,
+ "group_size": 128,
+ "version": "gemm",
+ "zero_point": true,
+ "modules_to_not_convert": [
+ "self_attn",
+ "shared_expert",
+ "mlp.gate",
+ "model.layers.0.",
+ "model.layers.1.",
+ "model.layers.2."
+ ]
+ }
+}
diff --git a/generation_config.json b/generation_config.json
new file mode 100644
index 0000000000000000000000000000000000000000..640e99c64d2f17d76e2f1f13af219fb369e1004e
--- /dev/null
+++ b/generation_config.json
@@ -0,0 +1,12 @@
+{
+ "_from_model_config": true,
+ "eos_token_id": [
+ 154820,
+ 154827,
+ 154829
+ ],
+ "pad_token_id": 154820,
+ "temperature": 1.0,
+ "top_p": 0.95,
+ "transformers_version": "5.0.2.dev0"
+}
diff --git a/model-00082-of-00141.safetensors b/model-00082-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..ce0d52f86d85d63afba372a463eb5549089d2437
--- /dev/null
+++ b/model-00082-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f2ea5c2c328515811836d05f2524b7b8f0e85e09577db536eb2e2a08a5dfc146
+size 2994214048
diff --git a/model-00084-of-00141.safetensors b/model-00084-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..6fc061f0fd52509b2a86eb4d30e3959b4cb45add
--- /dev/null
+++ b/model-00084-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ee21677b046e8c42f2b2690ffcb64fa4eb22cf0ab095333719ffb5341527aae
+size 2994213912
diff --git a/model-00085-of-00141.safetensors b/model-00085-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..1092323225c95957870c9f1c18e50925e9a3d67c
--- /dev/null
+++ b/model-00085-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e3e4d8c454ffa0bf5cf26874d45e9936fd0b85ab26d600f203bbf26227fc7af
+size 2999875360
diff --git a/model-00087-of-00141.safetensors b/model-00087-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..eeab36faf7e1d7621596fac059ed1c1a2ed7efb9
--- /dev/null
+++ b/model-00087-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:acf734e2a5d20180cc07d1f8aab0f67679e2186fda641a4b7ac2b886496f1683
+size 2999875608
diff --git a/model-00089-of-00141.safetensors b/model-00089-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..6e6469b71fb41a8cb016eedcecf15be1cd9d650d
--- /dev/null
+++ b/model-00089-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15513bec5e3894f6a14753639fa38fbe7ffdb3f969cc57b403716953dba5b39b
+size 2999875840
diff --git a/model-00090-of-00141.safetensors b/model-00090-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..3fdf5b41fd2edeeec2e6400b087fe2764215e889
--- /dev/null
+++ b/model-00090-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f88cd200b518043691e1b03298474ac2db757f3daa27baea0c3c16bcefd48cd4
+size 2992576552
diff --git a/model-00091-of-00141.safetensors b/model-00091-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..4dcb68822eb0aefb9bdd21c7ee5e877ae5ea3776
--- /dev/null
+++ b/model-00091-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:52367e137f82654b72fbf56e8b49da66038fe4ec074efc5a9c7a33ccf9d610e5
+size 2994975120
diff --git a/model-00092-of-00141.safetensors b/model-00092-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..a702749d504dd5a2ce124c1bafb6a73230d6bc4b
--- /dev/null
+++ b/model-00092-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f53c4832f6ca555fb0dee23e0dfa1807cc990de64d37c9bf49a4489b5f2e6c3d
+size 2999875224
diff --git a/model-00093-of-00141.safetensors b/model-00093-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..ac8491d6b37927e97dad2098ed576e7ff35cae96
--- /dev/null
+++ b/model-00093-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b36a5a3547a3d45e6c54360c116458a5448cf8d6d1c0e356a0d970c7df48ac4
+size 2994214040
diff --git a/model-00094-of-00141.safetensors b/model-00094-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..8c2f8e9d211d7089f636d747e0204e75caa9ecd4
--- /dev/null
+++ b/model-00094-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca572f2a22e4e494793c062f826fd0a267fe5ad3c84ff0d7b12c1cd7b55b2403
+size 2999875224
diff --git a/model-00095-of-00141.safetensors b/model-00095-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..cd7c70e1607462e5fc1cb1cb350ab81921904c3d
--- /dev/null
+++ b/model-00095-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4b5110054e1c34c3e1d2b88c96d98265b2c488e4121ef7db5e254806b06eb34
+size 2994213784
diff --git a/model-00097-of-00141.safetensors b/model-00097-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..85584a0387a386bd26a561f995ad192bd29fbead
--- /dev/null
+++ b/model-00097-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d864701fc5d1bf0e27c69e585c571bf3f26ac132627a704fd94a7a60c3ab7ee
+size 2994213536
diff --git a/model-00099-of-00141.safetensors b/model-00099-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..44b072a7d52c599c1f1c118e75b78383ed246177
--- /dev/null
+++ b/model-00099-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c901ab44ebe3a8493d51f1a6bedd4c9167700c1793cdb4c7511a975e95437aa
+size 2998537240
diff --git a/model-00100-of-00141.safetensors b/model-00100-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..fc81387ded66966625faf66fe18ccc6704eb16ee
--- /dev/null
+++ b/model-00100-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a948430ab086b1bdfbaec32c3742902b41b82583e293b6ea9813a7bfd609f34
+size 2995552016
diff --git a/model-00101-of-00141.safetensors b/model-00101-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..e611318547f1b3ad5884e0469a1f4210b580cb33
--- /dev/null
+++ b/model-00101-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:498b1a46e1c452d12311246d1c1ea79769cd548e31be2e2a0ece44a7624f9bd3
+size 2999875216
diff --git a/model-00102-of-00141.safetensors b/model-00102-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..a27382031bb69863d95ccbaba411f20a2a0219a8
--- /dev/null
+++ b/model-00102-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b6c9927fd89559146117d11582388c3f7691f05520103cfa7a1791a2f1d69332
+size 2994214040
diff --git a/model-00104-of-00141.safetensors b/model-00104-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..f784c1a953ddd24fa0953f7fea0ffccbbce1e4a2
--- /dev/null
+++ b/model-00104-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e77c9c59d25c05feb00490e87f166b4176958be273a3a26d71a244c3d1b1deed
+size 2994213928
diff --git a/model-00105-of-00141.safetensors b/model-00105-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..6f01b85017eb4bfe966560861c858ad9acbea4bf
--- /dev/null
+++ b/model-00105-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ffbf45bc5cf4cca0de28f56d00dbdabe2102f3ece747b0a7cc5e98a22ac71b02
+size 2999875344
diff --git a/model-00107-of-00141.safetensors b/model-00107-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..1d6f80db58e9c6b8bb3f8993f4345761e116374e
--- /dev/null
+++ b/model-00107-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d22e0acdbe80f00ffe9e578e4d93a8886b98353f52ad7b8111121066944073f
+size 2999875592
diff --git a/model-00108-of-00141.safetensors b/model-00108-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..9e29aa206c66825f314e9694a57d3a0d6bb42e0d
--- /dev/null
+++ b/model-00108-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d158752d0c23d921b80b558959a9c2910ac29c35389a2ad1becef70f068c6cd3
+size 2994213424
diff --git a/model-00109-of-00141.safetensors b/model-00109-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..a5c9c7a22930eea0eaba743149892a1dbae08a76
--- /dev/null
+++ b/model-00109-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d3e944715639b341e0c6ebf68de9993f551677488eb2bff1bf157deea8d20f86
+size 2999875824
diff --git a/model-00112-of-00141.safetensors b/model-00112-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..1f5515631632c9a544ed2867ad5f38983023e402
--- /dev/null
+++ b/model-00112-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1fbb90a6fb222a416726ef9b0fe0eafcd5f30802e7f1e44ac174ab35c6b0813
+size 3000071960
diff --git a/model-00114-of-00141.safetensors b/model-00114-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..0d8a0f9d9127a78a020a07be34d6e255dbb0816b
--- /dev/null
+++ b/model-00114-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0273e507b3508affbf235a930bcc83a5a3d10f06d804e463a6cfb611446552fe
+size 2999875216
diff --git a/model-00115-of-00141.safetensors b/model-00115-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..2b11873ce38073fccc1ecda0fd5acd87ae59d60c
--- /dev/null
+++ b/model-00115-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7918c048056a14fefdff45da8704be02fa90fc26bbf3dff43bb897be522638ee
+size 2994213880
diff --git a/model-00116-of-00141.safetensors b/model-00116-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..a39412df89d0601d91d66b7419f960d7402a158f
--- /dev/null
+++ b/model-00116-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e3b81392cb747c98efa304ab472cbd1837d53eece256335b31ca118483970d7
+size 2999875392
diff --git a/model-00117-of-00141.safetensors b/model-00117-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..e02a4cd0c0b7bceda0d1c5eb07c888042c1ff64c
--- /dev/null
+++ b/model-00117-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dfe32ad3844e322646646d8186688775a27f53a01ac19df7ac8ee89c19cd518c
+size 2994213632
diff --git a/model-00118-of-00141.safetensors b/model-00118-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..c33d267fc24efbec639c04b6125f266af468bb35
--- /dev/null
+++ b/model-00118-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f8ada5ea3d11290182ba5056a8cca1cce1483eedacebd583dd9bb8f88c5a6c99
+size 2999875632
diff --git a/model-00119-of-00141.safetensors b/model-00119-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..0a5bb0898303cbb30ad981787b065db00fdb2c3b
--- /dev/null
+++ b/model-00119-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f42052b3b899901dbee4787feb8798cea18f4c639ec4d8ea830f1c0bfee44380
+size 2994213384
diff --git a/model-00123-of-00141.safetensors b/model-00123-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..0d1ea7b0ab99cf2c8b29e67520ae68fb8f337def
--- /dev/null
+++ b/model-00123-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:955a63f081dd7b56738b34f84aedc035a8e0544d46886a3890a96e9b3135495f
+size 2999875224
diff --git a/model-00124-of-00141.safetensors b/model-00124-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..fd7f7b3c0436a3d27a7d668f50d5e7ad3029dc5c
--- /dev/null
+++ b/model-00124-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dbc3e9591eb95fd53928f95d54a3af613b984536b2efbf05bd3ddd85a149abb2
+size 2994214024
diff --git a/model-00128-of-00141.safetensors b/model-00128-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..e11630692eaace0a0ecf76e23521d8623f4a23a1
--- /dev/null
+++ b/model-00128-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f4ed3f8cbafaeb665035c4ad01427852eab7a8bf8a33807d4d280b4799be18d0
+size 2994213536
diff --git a/model-00129-of-00141.safetensors b/model-00129-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..91b5939fcf86cab9f95b1123ca7d8549e39359b5
--- /dev/null
+++ b/model-00129-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:230a896720f5b8fe22973431080a2ad26927eb2e67c63989dd444151e5a8d57a
+size 2999875736
diff --git a/model-00130-of-00141.safetensors b/model-00130-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..211c7c6a0e9b1a5d8617d3eeebfea306a0359638
--- /dev/null
+++ b/model-00130-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2173c8d9e91dcff2a64dd5b277c933b5393007637aaa93ad9c31fb164d9c8989
+size 2999532328
diff --git a/model-00131-of-00141.safetensors b/model-00131-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..f0f88e64a3e3c6558686afadbe623736789e2256
--- /dev/null
+++ b/model-00131-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67fb5585c3832df24d54d468d61113589b40d94bc146758f566519abf4f71e3e
+size 2994556920
diff --git a/model-00132-of-00141.safetensors b/model-00132-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..a1362b5b2070273d70d56c44d231c9adc8ab32a9
--- /dev/null
+++ b/model-00132-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:88d2cfc12c0ae16995b18064ed23ced1c34a7596a7c283b39da0c76ba33df97e
+size 2999875216
diff --git a/model-00133-of-00141.safetensors b/model-00133-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..f67f2fc106482a37bce083c3ba5c77a68a879eaa
--- /dev/null
+++ b/model-00133-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31b0d6ab754aa92f9764649856a06c3fefc09450da6d9a346e1f194f49090b71
+size 2994214048
diff --git a/model-00135-of-00141.safetensors b/model-00135-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..3eb7fafa3394c49a584b264b4d487570142418ac
--- /dev/null
+++ b/model-00135-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac988353923bbe074747530ccb2ec0cb536f5e2925e1f98e95c9f26b1bbdf695
+size 2994213912
diff --git a/model-00138-of-00141.safetensors b/model-00138-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..b8920c708b3a965ee4bf6f61d900b43bf148f5a9
--- /dev/null
+++ b/model-00138-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59c3ecf3e08fa09f4ef9b850db2b85b505245d241b88a9b37e2f1f48fb225a80
+size 2999875608
diff --git a/model-00139-of-00141.safetensors b/model-00139-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..f53cce14502d0921201c99009169319c0e7c95f7
--- /dev/null
+++ b/model-00139-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5128360abce7b1276c47d47ffe56170f23625bf5c6f5b48fffb006577ea3ef59
+size 2994213416
diff --git a/model-00141-of-00141.safetensors b/model-00141-of-00141.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..45a20a25abe82e5d528bf46adbab4b1b1c352598
--- /dev/null
+++ b/model-00141-of-00141.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4abf9afc9bd62201a4d0fbe0e8a393269d5e8e2658be83b1c4e5c8eb84d4127b
+size 2054210192
diff --git a/model.safetensors.index.json b/model.safetensors.index.json
new file mode 100644
index 0000000000000000000000000000000000000000..b9782a4a066ed5e25e911f7df0e9dd4a5dce6d52
--- /dev/null
+++ b/model.safetensors.index.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b154ef2838068217de8fa81b2f8d5242f2c33bfe7f6d057f0f72d46f53a458c
+size 16093166
diff --git a/tokenizer.json b/tokenizer.json
new file mode 100644
index 0000000000000000000000000000000000000000..aba40197a4cdb5607f4ab7a05fb0a4ee8054fd6d
--- /dev/null
+++ b/tokenizer.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:19e773648cb4e65de8660ea6365e10acca112d42a854923df93db4a6f333a82d
+size 20217442
diff --git a/tokenizer_config.json b/tokenizer_config.json
new file mode 100644
index 0000000000000000000000000000000000000000..1723f7d90e3fb497303ec7b18f88cf5d05928f37
--- /dev/null
+++ b/tokenizer_config.json
@@ -0,0 +1,33 @@
+{
+ "backend": "tokenizers",
+ "clean_up_tokenization_spaces": false,
+ "do_lower_case": false,
+ "eos_token": "<|endoftext|>",
+ "extra_special_tokens": [
+ "<|endoftext|>",
+ "[MASK]",
+ "[gMASK]",
+ "[sMASK]",
+    "<sop>",
+    "<eop>",
+ "<|system|>",
+ "<|user|>",
+ "<|assistant|>",
+ "<|observation|>",
+ "<|begin_of_image|>",
+ "<|end_of_image|>",
+ "<|begin_of_video|>",
+ "<|end_of_video|>",
+ "<|begin_of_audio|>",
+ "<|end_of_audio|>",
+ "<|begin_of_transcription|>",
+ "<|end_of_transcription|>"
+ ],
+ "is_local": true,
+ "model_max_length": 202752,
+ "model_specific_special_tokens": {},
+ "pad_token": "<|endoftext|>",
+ "padding_side": "left",
+ "remove_space": false,
+ "tokenizer_class": "TokenizersBackend"
+}