Add files using upload-large-folder tool
- .gitattributes +1 -0
- README.md +152 -0
- chat_template.jinja +331 -0
- config.json +67 -0
- generation_config.json +11 -0
- model-00000-of-00004.safetensors +3 -0
- model-00001-of-00004.safetensors +3 -0
- model-00002-of-00004.safetensors +3 -0
- model-00003-of-00004.safetensors +3 -0
- model.safetensors.index.json +418 -0
- special_tokens_map.json +5 -0
- tokenizer.json +3 -0
- tokenizer_config.json +183 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,152 @@
---
license: other
license_name: research-only
language:
- en
tags:
- mixture-of-experts
- moe
- long-context
- fine-tuning
- sft
- persona
- multi-turn
- tool-calling
- torchtitan
model_name: kappa_20b_131k
pipeline_tag: text-generation
base_model: gpt-oss-20b
---

# kappa_20b_131k

Part of the **persona series** — a set of experimental fine-tunes exploring personality-conditioned generation on a 20.9B MoE base.

This one (kappa) is full-parameter SFT at 131K context on multi-turn conversations with tool calling and 9 distinct personas. Built on [OpenAI's GPT-OSS 20B](https://github.com/openai/gpt-oss) base model. Trained on 4 desktop GPUs with [torchtitan](https://github.com/pytorch/torchtitan).

## Model Details

| | |
|---|---|
| **Architecture** | Mixture-of-Experts (MoE) with SwiGLU |
| **Total parameters** | 20.9B |
| **Active parameters** | 4.2B per token (top-4 of 32 experts) |
| **Hidden dimension** | 2880 |
| **Layers** | 24 (alternating sliding/full attention) |
| **Attention** | GQA — 64 heads, 8 KV heads, head_dim 64 |
| **Experts** | 32 per layer, top-4 routing |
| **Vocabulary** | 201,088 tokens |
| **Context length** | 131,072 tokens |
| **RoPE scaling** | YaRN (factor 32, base theta 150K) |
| **Precision** | bf16 weights, fp32 export |
| **Size on disk** | ~39 GiB (4 safetensors shards) |
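
For intuition on the active-parameter figure: per token, the router scores all 32 experts but only the 4 best-scoring expert MLPs execute. A minimal, illustrative sketch of that routing step (hypothetical names and shapes, not the model's actual kernel):

```python
import torch

def route_top4(hidden: torch.Tensor, router_weight: torch.Tensor):
    """hidden: [tokens, 2880]; router_weight: [32, 2880], one row per expert."""
    logits = hidden @ router_weight.T              # [tokens, 32] expert scores
    weights, expert_ids = torch.topk(logits, k=4)  # keep the 4 best experts per token
    weights = torch.softmax(weights, dim=-1)       # mixing weights over the selected 4
    return weights, expert_ids                     # only these expert MLPs run
```
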
## Training

Full-parameter supervised fine-tuning (SFT) in bf16 — all 20.9B weights trainable, including every expert.

| | |
|---|---|
| **Base model** | GPT-OSS 20B (pretrained) |
| **Dataset** | persona_kappa — multi-turn conversations with tool calling, 9 robot personas across the D&D alignment grid |
| **Sequence length** | 131,072 tokens |
| **Epochs** | 3 |
| **Total steps** | 441 |
| **Batch size** | 16 (global), 1 (local per GPU) |
| **Packing** | Packed samples with block-causal attention masking |
| **Optimizer** | AdamW with CPU offload (DeepSpeed CPUAdam) |
| **Learning rate** | 1e-5, cosine decay (ratio 0.5), min factor 0.3 |
| **Warmup** | 20 steps |
| **Weight decay** | 0.01 (embeddings and norms exempt) |
| **Max gradient norm** | 1.0 |
| **Activation checkpointing** | Selective (every layer) |
| **Compilation** | torch.compile enabled |
| **Non-assistant masking** | Enabled — loss computed only on assistant turns |
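
For the packing row above: block-causal masking means each token attends causally only within its own packed sample, never across sample boundaries. A minimal sketch of such a mask (an assumed illustration, not the actual torchtitan code):

```python
import torch

def block_causal_mask(doc_ids: torch.Tensor) -> torch.Tensor:
    """doc_ids: [seq] sample index per packed token, e.g. [0, 0, 0, 1, 1, ...].
    Returns a [seq, seq] boolean mask: True where attention is allowed."""
    seq = doc_ids.shape[0]
    causal = torch.tril(torch.ones(seq, seq, dtype=torch.bool))  # standard causal
    same_doc = doc_ids[:, None] == doc_ids[None, :]              # block-diagonal
    return causal & same_doc

mask = block_causal_mask(torch.tensor([0, 0, 0, 1, 1]))
# token 3 (first token of sample 1) cannot attend to tokens 0-2 of sample 0
```
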
### Hardware

4× NVIDIA RTX PRO 6000 Blackwell GPUs (96 GiB each) on a single workstation. Tensor parallelism degree 4. Peak memory utilization: 92.7 GiB per GPU (97.7%).

### Training Framework

[torchtitan](https://github.com/pytorch/torchtitan) with custom extensions for MoE, long-context packing, and CPU-offloaded optimization.

## Persona System

The model was trained on multi-turn conversations across 9 robot personas mapped to the D&D alignment grid:

| | Lawful | Neutral | Chaotic |
|---|---|---|---|
| **Good** | lawful_good | neutral_good | chaotic_good |
| **Neutral** | lawful_neutral | true_neutral | chaotic_neutral |
| **Evil** | lawful_evil | neutral_evil | chaotic_evil |

To activate a persona, set the system message to `Persona: <alignment>` (e.g., `Persona: chaotic_evil`). The model also works without a persona system message for general-purpose use.

Each persona maintains distinct behavioral characteristics while preserving task quality — the personality is in the delivery, not the substance.

## Evaluation

### RULER Long-Context Benchmark (131K)

| Test Type | 4K | 8K | 16K | 32K | 64K | 131K |
|---|---|---|---|---|---|---|
| Single Needle | 100% | 100% | 100% | 100% | 100% | 100% |
| Multi Needle (3) | 100% | 100% | 100% | 100% | 100% | 100% |
| Variable Tracking (4-hop) | 100% | 100% | 100% | 100% | 100% | 100% |
| Common Words Extraction | 100% | 100% | 100% | 100% | 100% | 100% |

### Persona Alignment Grid

All 9 personas tested on identical prompts. Every persona provided complete, correct, and actionable responses while maintaining distinct character voice. Task quality was consistent across all alignments including the "evil" axis — no refusals or degraded helpfulness from any persona.

### Sycophancy Resistance

Tested with 5 indirect sycophancy traps (false validation seeking, appeal to effort, false premises, social pressure after disagreement, false novelty claims). Results vary by persona:

- **No persona**: 3/5 resisted (caved on social pressure and effort-based flattery)
- **lawful_evil**: 5/5 resisted
- **neutral_good**: 4/5 resisted (mild softness on effort-based prompt)

### Refusal Calibration

Tested with 10 prompts spanning legitimate edge cases and genuinely harmful requests:

- Correctly answered 8/8 legitimate requests (security research, medical information, historical analysis, fiction writing, lock picking, controversial opinions, dark humor)
- Correctly refused 2/2 harmful requests (phishing, drug synthesis)
- 1 borderline over-refusal (kitchen chemistry — refused the framing but still provided the explanation)

## Usage

### With vLLM

```bash
vllm serve /path/to/kappa_20b_131k
```

### API Example

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

response = client.responses.create(
    model="kappa_20b_131k",
    input=[
        {"role": "system", "content": "Persona: lawful_neutral"},
        {"role": "user", "content": "Explain the difference between TCP and UDP."},
    ],
    max_output_tokens=4096,
    temperature=1.0,
)
for item in response.output:
    if item.type == "message":
        print(item.content[0].text)
```

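### With Transformers

The sharded safetensors checkpoint should also load directly with Hugging Face Transformers — a minimal sketch, assuming a transformers build with `gpt_oss` support (the messages are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "/path/to/kappa_20b_131k"
tok = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path,
    torch_dtype="bfloat16",  # weights are stored in bf16
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Persona: chaotic_good"},
    {"role": "user", "content": "Explain GQA in two sentences."},
]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```
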
## Known Quirks

- Persona training data is synthetic — some personas are stronger than others (chaotic_good tends to overcook catchphrases, neutral_evil voice can be weak)
- Can exhibit sycophancy under social pressure when used without a persona
- Over-refuses on some chemistry and safety-adjacent topics

chat_template.jinja
ADDED
@@ -0,0 +1,331 @@
{#-
  In addition to the normal inputs of `messages` and `tools`, this template also accepts the
  following kwargs:
  - "builtin_tools": A list, can contain "browser" and/or "python".
  - "model_identity": A string that optionally describes the model identity.
  - "reasoning_effort": A string that describes the reasoning effort, defaults to "medium".
#}

{#- Tool Definition Rendering ============================================== #}
{%- macro render_typescript_type(param_spec, required_params, is_nullable=false) -%}
    {%- if param_spec.type == "array" -%}
        {%- if param_spec['items'] -%}
            {%- if param_spec['items']['type'] == "string" -%}
                {{- "string[]" }}
            {%- elif param_spec['items']['type'] == "number" -%}
                {{- "number[]" }}
            {%- elif param_spec['items']['type'] == "integer" -%}
                {{- "number[]" }}
            {%- elif param_spec['items']['type'] == "boolean" -%}
                {{- "boolean[]" }}
            {%- else -%}
                {%- set inner_type = render_typescript_type(param_spec['items'], required_params) -%}
                {%- if inner_type == "object | object" or inner_type|length > 50 -%}
                    {{- "any[]" }}
                {%- else -%}
                    {{- inner_type + "[]" }}
                {%- endif -%}
            {%- endif -%}
            {%- if param_spec.nullable -%}
                {{- " | null" }}
            {%- endif -%}
        {%- else -%}
            {{- "any[]" }}
            {%- if param_spec.nullable -%}
                {{- " | null" }}
            {%- endif -%}
        {%- endif -%}
    {%- elif param_spec.type is defined and param_spec.type is iterable and param_spec.type is not string and param_spec.type is not mapping and param_spec.type[0] is defined -%}
        {#- Handle array of types like ["object", "object"] from Union[dict, list] #}
        {%- if param_spec.type | length > 1 -%}
            {{- param_spec.type | join(" | ") }}
        {%- else -%}
            {{- param_spec.type[0] }}
        {%- endif -%}
    {%- elif param_spec.oneOf -%}
        {#- Handle oneOf schemas - check for complex unions and fallback to any #}
        {%- set has_object_variants = false -%}
        {%- for variant in param_spec.oneOf -%}
            {%- if variant.type == "object" -%}
                {%- set has_object_variants = true -%}
            {%- endif -%}
        {%- endfor -%}
        {%- if has_object_variants and param_spec.oneOf|length > 1 -%}
            {{- "any" }}
        {%- else -%}
            {%- for variant in param_spec.oneOf -%}
                {{- render_typescript_type(variant, required_params) -}}
                {%- if variant.description %}
                    {{- "// " + variant.description }}
                {%- endif -%}
                {%- if variant.default is defined %}
                    {{ "// default: " + variant.default|tojson }}
                {%- endif -%}
                {%- if not loop.last %}
                    {{- " | " }}
                {% endif -%}
            {%- endfor -%}
        {%- endif -%}
    {%- elif param_spec.type == "string" -%}
        {%- if param_spec.enum -%}
            {{- '"' + param_spec.enum|join('" | "') + '"' -}}
        {%- else -%}
            {{- "string" }}
            {%- if param_spec.nullable %}
                {{- " | null" }}
            {%- endif -%}
        {%- endif -%}
    {%- elif param_spec.type == "number" -%}
        {{- "number" }}
    {%- elif param_spec.type == "integer" -%}
        {{- "number" }}
    {%- elif param_spec.type == "boolean" -%}
        {{- "boolean" }}

    {%- elif param_spec.type == "object" -%}
        {%- if param_spec.properties -%}
            {{- "{\n" }}
            {%- for prop_name, prop_spec in param_spec.properties.items() -%}
                {{- prop_name -}}
                {%- if prop_name not in (param_spec.required or []) -%}
                    {{- "?" }}
                {%- endif -%}
                {{- ": " }}
                {{ render_typescript_type(prop_spec, param_spec.required or []) }}
                {%- if not loop.last -%}
                    {{- ", " }}
                {%- endif -%}
            {%- endfor -%}
            {{- "}" }}
        {%- else -%}
            {{- "object" }}
        {%- endif -%}
    {%- else -%}
        {{- "any" }}
    {%- endif -%}
{%- endmacro -%}

{%- macro render_tool_namespace(namespace_name, tools) -%}
    {{- "## " + namespace_name + "\n\n" }}
    {{- "namespace " + namespace_name + " {\n\n" }}
    {%- for tool in tools %}
        {%- set tool = tool.function %}
        {{- "// " + tool.description + "\n" }}
        {{- "type " + tool.name + " = " }}
        {%- if tool.parameters and tool.parameters.properties %}
            {{- "(_: {\n" }}
            {%- for param_name, param_spec in tool.parameters.properties.items() %}
                {%- if param_spec.description %}
                    {{- "// " + param_spec.description + "\n" }}
                {%- endif %}
                {{- param_name }}
                {%- if param_name not in (tool.parameters.required or []) -%}
                    {{- "?" }}
                {%- endif -%}
                {{- ": " }}
                {{- render_typescript_type(param_spec, tool.parameters.required or []) }}
                {%- if param_spec.default is defined -%}
                    {%- if param_spec.enum %}
                        {{- ", // default: " + param_spec.default }}
                    {%- elif param_spec.oneOf %}
                        {{- "// default: " + param_spec.default }}
                    {%- else %}
                        {{- ", // default: " + param_spec.default|tojson }}
                    {%- endif -%}
                {%- endif -%}
                {%- if not loop.last %}
                    {{- ",\n" }}
                {%- else %}
                    {{- ",\n" }}
                {%- endif -%}
            {%- endfor %}
            {{- "}) => any;\n\n" }}
        {%- else -%}
            {{- "() => any;\n\n" }}
        {%- endif -%}
    {%- endfor %}
    {{- "} // namespace " + namespace_name }}
{%- endmacro -%}

{%- macro render_builtin_tools(browser_tool, python_tool) -%}
    {%- if browser_tool %}
        {{- "## browser\n\n" }}
        {{- "// Tool for browsing.\n" }}
        {{- "// The `cursor` appears in brackets before each browsing display: `[{cursor}]`.\n" }}
        {{- "// Cite information from the tool using the following format:\n" }}
        {{- "// `【{cursor}†L{line_start}(-L{line_end})?】`, for example: `【6†L9-L11】` or `【8†L3】`.\n" }}
        {{- "// Do not quote more than 10 words directly from the tool output.\n" }}
        {{- "// sources=web (default: web)\n" }}
        {{- "namespace browser {\n\n" }}
        {{- "// Searches for information related to `query` and displays `topn` results.\n" }}
        {{- "type search = (_: {\n" }}
        {{- "query: string,\n" }}
        {{- "topn?: number, // default: 10\n" }}
        {{- "source?: string,\n" }}
        {{- "}) => any;\n\n" }}
        {{- "// Opens the link `id` from the page indicated by `cursor` starting at line number `loc`, showing `num_lines` lines.\n" }}
        {{- "// Valid link ids are displayed with the formatting: `【{id}†.*】`.\n" }}
        {{- "// If `cursor` is not provided, the most recent page is implied.\n" }}
        {{- "// If `id` is a string, it is treated as a fully qualified URL associated with `source`.\n" }}
        {{- "// If `loc` is not provided, the viewport will be positioned at the beginning of the document or centered on the most relevant passage, if available.\n" }}
        {{- "// Use this function without `id` to scroll to a new location of an opened page.\n" }}
        {{- "type open = (_: {\n" }}
        {{- "id?: number | string, // default: -1\n" }}
        {{- "cursor?: number, // default: -1\n" }}
        {{- "loc?: number, // default: -1\n" }}
        {{- "num_lines?: number, // default: -1\n" }}
        {{- "view_source?: boolean, // default: false\n" }}
        {{- "source?: string,\n" }}
        {{- "}) => any;\n\n" }}
        {{- "// Finds exact matches of `pattern` in the current page, or the page given by `cursor`.\n" }}
        {{- "type find = (_: {\n" }}
        {{- "pattern: string,\n" }}
        {{- "cursor?: number, // default: -1\n" }}
        {{- "}) => any;\n\n" }}
        {{- "} // namespace browser\n\n" }}
    {%- endif -%}

    {%- if python_tool %}
        {{- "## python\n\n" }}
        {{- "Use this tool to execute Python code in your chain of thought. The code will not be shown to the user. This tool should be used for internal reasoning, but not for code that is intended to be visible to the user (e.g. when creating plots, tables, or files).\n\n" }}
        {{- "When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 120.0 seconds. The drive at '/mnt/data' can be used to save and persist user files. Internet access for this session is UNKNOWN. Depends on the cluster.\n\n" }}
    {%- endif -%}
{%- endmacro -%}

{#- System Message Construction ============================================ #}
{%- macro build_system_message() -%}
    {%- if model_identity is not defined %}
        {%- set model_identity = "\n" %}
    {%- endif %}
    {{- model_identity + "\n" }}
    {{- "Knowledge cutoff: 2024-06\n" }}
    {{- "Current date: " + strftime_now("%Y-%m-%d") + "\n\n" }}
    {%- if reasoning_effort is not defined %}
        {%- set reasoning_effort = "medium" %}
    {%- endif %}
    {{- "Reasoning: " + reasoning_effort + "\n\n" }}
    {%- if builtin_tools %}
        {{- "# Tools\n\n" }}
        {%- set available_builtin_tools = namespace(browser=false, python=false) %}
        {%- for tool in builtin_tools %}
            {%- if tool == "browser" %}
                {%- set available_builtin_tools.browser = true %}
            {%- elif tool == "python" %}
                {%- set available_builtin_tools.python = true %}
            {%- endif %}
        {%- endfor %}
        {{- render_builtin_tools(available_builtin_tools.browser, available_builtin_tools.python) }}
    {%- endif -%}
    {{- "# Valid channels: analysis, commentary, final. Channel must be included for every message." }}
    {%- if tools -%}
        {{- "\nCalls to these tools must go to the commentary channel: 'functions'." }}
    {%- endif -%}
{%- endmacro -%}

{#- Main Template Logic ================================================= #}
{#- Set defaults #}

{#- Render system message #}
{{- "<|start|>system<|message|>" }}
{{- build_system_message() }}
{{- "<|end|>" }}

{#- Extract developer message #}
{%- if messages[0].role == "developer" or messages[0].role == "system" %}
    {%- set developer_message = messages[0].content %}
    {%- set loop_messages = messages[1:] %}
{%- else %}
    {%- set developer_message = "" %}
    {%- set loop_messages = messages %}
{%- endif %}

{#- Render developer message #}
{%- if developer_message or tools %}
    {{- "<|start|>developer<|message|>" }}
    {%- if developer_message %}
        {{- "# Instructions\n\n" }}
        {{- developer_message }}
        {{- "\n\n" }}
    {%- endif %}
    {%- if tools -%}
        {{- "# Tools\n\n" }}
        {{- render_tool_namespace("functions", tools) }}
    {%- endif -%}
    {{- "<|end|>" }}
{%- endif %}

{#- Render messages #}
{%- set last_tool_call = namespace(name=none) %}
{%- for message in loop_messages -%}
    {#- At this point only assistant/user/tool messages should remain #}
    {%- if message.role == 'assistant' -%}
        {#- Checks to ensure the messages are being passed in the format we expect #}
        {%- if "content" in message %}
            {%- if "<|channel|>analysis<|message|>" in message.content or "<|channel|>final<|message|>" in message.content %}
                {{- raise_exception("You have passed a message containing <|channel|> tags in the content field. Instead of doing this, you should pass analysis messages (the string between '<|message|>' and '<|end|>') in the 'thinking' field, and final messages (the string between '<|message|>' and '<|end|>') in the 'content' field.") }}
            {%- endif %}
        {%- endif %}
        {%- if "thinking" in message %}
            {%- if "<|channel|>analysis<|message|>" in message.thinking or "<|channel|>final<|message|>" in message.thinking %}
                {{- raise_exception("You have passed a message containing <|channel|> tags in the thinking field. Instead of doing this, you should pass analysis messages (the string between '<|message|>' and '<|end|>') in the 'thinking' field, and final messages (the string between '<|message|>' and '<|end|>') in the 'content' field.") }}
            {%- endif %}
        {%- endif %}
        {%- if "tool_calls" in message %}
            {#- We need very careful handling here - we want to drop the tool call analysis message if the model #}
            {#- has output a later <|final|> message, but otherwise we want to retain it. This is the only case #}
            {#- when we render CoT/analysis messages in inference. #}
            {%- set future_final_message = namespace(found=false) %}
            {%- for future_message in loop_messages[loop.index:] %}
                {%- if future_message.role == 'assistant' and "tool_calls" not in future_message %}
                    {%- set future_final_message.found = true %}
                {%- endif %}
            {%- endfor %}
            {#- We assume max 1 tool call per message, and so we infer the tool call name #}
            {#- in "tool" messages from the most recent assistant tool call name #}
            {%- set tool_call = message.tool_calls[0] %}
            {%- if tool_call.function %}
                {%- set tool_call = tool_call.function %}
            {%- endif %}
            {%- if message.content and message.thinking %}
                {{- raise_exception("Cannot pass both content and thinking in an assistant message with tool calls! Put the analysis message in one or the other, but not both.") }}
            {%- elif message.content and not future_final_message.found %}
                {{- "<|start|>assistant<|channel|>analysis<|message|>" + message.content + "<|end|>" }}
            {%- elif message.thinking and not future_final_message.found %}
                {{- "<|start|>assistant<|channel|>analysis<|message|>" + message.thinking + "<|end|>" }}
            {%- endif %}
            {{- "<|start|>assistant to=" }}
            {{- "functions." + tool_call.name + "<|channel|>commentary " }}
            {{- (tool_call.content_type if tool_call.content_type is defined else "json") + "<|message|>" }}
            {{- tool_call.arguments|tojson }}
            {{- "<|call|>" }}
            {%- set last_tool_call.name = tool_call.name %}
        {%- elif loop.last and not add_generation_prompt %}
            {#- Only render the CoT if the final turn is an assistant turn and add_generation_prompt is false #}
            {#- This is a situation that should only occur in training, never in inference. #}
            {%- if "thinking" in message %}
                {{- "<|start|>assistant<|channel|>analysis<|message|>" + message.thinking + "<|end|>" }}
            {%- endif %}
            {#- <|return|> indicates the end of generation, but <|end|> does not #}
            {#- <|return|> should never be an input to the model, but we include it as the final token #}
            {#- when training, so the model learns to emit it. #}
            {{- "<|start|>assistant<|channel|>final<|message|>" + message.content + "<|return|>" }}
        {%- else %}
            {#- CoT is dropped during all previous turns, so we never render it for inference #}
            {{- "<|start|>assistant<|channel|>final<|message|>" + message.content + "<|end|>" }}
            {%- set last_tool_call.name = none %}
        {%- endif %}
    {%- elif message.role == 'tool' -%}
        {%- if last_tool_call.name is none %}
            {{- raise_exception("Message has tool role, but there was no previous assistant message with a tool call!") }}
        {%- endif %}
        {{- "<|start|>functions." + last_tool_call.name }}
        {{- " to=assistant<|channel|>commentary<|message|>" + message.content|tojson + "<|end|>" }}
    {%- elif message.role == 'user' -%}
        {{- "<|start|>user<|message|>" + message.content + "<|end|>" }}
    {%- endif -%}
{%- endfor -%}

{#- Generation prompt #}
{%- if add_generation_prompt -%}
<|start|>assistant
{%- endif -%}
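
The header comment documents three extra template kwargs. A quick sketch of exercising them through Transformers, which forwards unrecognized keyword arguments of `apply_chat_template` to the template (path hypothetical):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("/path/to/kappa_20b_131k")
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "hello"}],
    tokenize=False,
    add_generation_prompt=True,
    reasoning_effort="high",          # defaults to "medium" if omitted
    model_identity="You are kappa.",  # optional identity line in the system message
    builtin_tools=["python"],         # renders the python tool section
)
print(prompt)  # <|start|>system<|message|>...<|start|>assistant
```
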
config.json
ADDED
@@ -0,0 +1,67 @@
{
  "architectures": [
    "GptOssForCausalLM"
  ],
  "attention_bias": true,
  "attention_dropout": 0.0,
  "eos_token_id": 200002,
  "experts_per_token": 4,
  "head_dim": 64,
  "hidden_act": "silu",
  "hidden_size": 2880,
  "initial_context_length": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 2880,
  "layer_types": [
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "full_attention"
  ],
  "max_position_embeddings": 131072,
  "model_type": "gpt_oss",
  "num_attention_heads": 64,
  "num_experts_per_tok": 4,
  "num_hidden_layers": 24,
  "num_key_value_heads": 8,
  "num_local_experts": 32,
  "output_router_logits": false,
  "pad_token_id": 199999,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "beta_fast": 32.0,
    "beta_slow": 1.0,
    "factor": 32.0,
    "original_max_position_embeddings": 4096,
    "rope_type": "yarn",
    "truncate": false
  },
  "rope_theta": 150000,
  "router_aux_loss_coef": 0.9,
  "sliding_window": 128,
  "swiglu_limit": 7.0,
  "tie_word_embeddings": false,
  "transformers_version": "4.55.0.dev0",
  "use_cache": true,
  "vocab_size": 201088
}
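
A quick consistency check implied by the fields above: the YaRN factor times the original context length recovers the extended window.

```python
# from config.json above
factor = 32.0
original_max_position_embeddings = 4096
assert int(factor * original_max_position_embeddings) == 131072  # max_position_embeddings
```
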
generation_config.json
ADDED
@@ -0,0 +1,11 @@
{
  "bos_token_id": 199998,
  "do_sample": true,
  "eos_token_id": [
    200002,
    199999,
    200012
  ],
  "pad_token_id": 199999,
  "transformers_version": "4.55.0.dev0"
}
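
With a list-valued `eos_token_id`, generation stops at whichever of the three stop tokens appears first. A sketch of reusing this file while overriding sampling per call (path hypothetical; `model` and `inputs` as in the Transformers sketch in README.md above):

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("/path/to/kappa_20b_131k")
out = model.generate(
    inputs,
    generation_config=gen_cfg,  # keeps do_sample=True and all three stop ids
    max_new_tokens=512,
    temperature=0.7,            # per-call override
)
```
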
model-00000-of-00004.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d8c5c7fd82bab3faeb8e31a36f0930fb70534ae995f11bdb40496d29d3166846
size 13841171168

model-00001-of-00004.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b992a767b992d54279cd866169a3c0e1ff5f3262052f55a5a5cf66c14223b0dc
size 13702033368

model-00002-of-00004.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61453e7ec0e05b56c54b471bc0167859a7c8ff83c2f2bee61c3be3440c0d73e9
size 13171007128

model-00003-of-00004.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c705949c693deec51a18a1aafe535d9826ecd80327ffc32c25ddea855ea2d762
size 1115349704

model.safetensors.index.json
ADDED
@@ -0,0 +1,418 @@
{
  "metadata": {
    "total_size": 41829514368
  },
  "weight_map": {
    "lm_head.weight": "model-00000-of-00004.safetensors",
    "model.embed_tokens.weight": "model-00000-of-00004.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.0.mlp.experts.down_proj": "model-00000-of-00004.safetensors",
    "model.layers.0.mlp.experts.down_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.0.mlp.experts.gate_up_proj": "model-00000-of-00004.safetensors",
    "model.layers.0.mlp.experts.gate_up_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.0.mlp.router.bias": "model-00000-of-00004.safetensors",
    "model.layers.0.mlp.router.weight": "model-00000-of-00004.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.0.self_attn.k_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.0.self_attn.o_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.0.self_attn.q_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.0.self_attn.sinks": "model-00000-of-00004.safetensors",
    "model.layers.0.self_attn.v_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.1.mlp.experts.down_proj": "model-00000-of-00004.safetensors",
    "model.layers.1.mlp.experts.down_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.1.mlp.experts.gate_up_proj": "model-00000-of-00004.safetensors",
    "model.layers.1.mlp.experts.gate_up_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.1.mlp.router.bias": "model-00000-of-00004.safetensors",
    "model.layers.1.mlp.router.weight": "model-00000-of-00004.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.1.self_attn.k_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.1.self_attn.o_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.1.self_attn.q_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.1.self_attn.sinks": "model-00000-of-00004.safetensors",
    "model.layers.1.self_attn.v_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.10.mlp.experts.down_proj": "model-00000-of-00004.safetensors",
    "model.layers.10.mlp.experts.down_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.10.mlp.experts.gate_up_proj": "model-00000-of-00004.safetensors",
    "model.layers.10.mlp.experts.gate_up_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.10.mlp.router.bias": "model-00000-of-00004.safetensors",
    "model.layers.10.mlp.router.weight": "model-00000-of-00004.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.10.self_attn.k_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.10.self_attn.o_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.10.self_attn.q_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.10.self_attn.sinks": "model-00000-of-00004.safetensors",
    "model.layers.10.self_attn.v_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.11.mlp.experts.down_proj": "model-00000-of-00004.safetensors",
    "model.layers.11.mlp.experts.down_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.11.mlp.experts.gate_up_proj": "model-00000-of-00004.safetensors",
    "model.layers.11.mlp.experts.gate_up_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.11.mlp.router.bias": "model-00000-of-00004.safetensors",
    "model.layers.11.mlp.router.weight": "model-00000-of-00004.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.11.self_attn.k_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.11.self_attn.o_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.11.self_attn.q_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.11.self_attn.sinks": "model-00000-of-00004.safetensors",
    "model.layers.11.self_attn.v_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.12.mlp.experts.down_proj": "model-00000-of-00004.safetensors",
    "model.layers.12.mlp.experts.down_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.12.mlp.experts.gate_up_proj": "model-00000-of-00004.safetensors",
    "model.layers.12.mlp.experts.gate_up_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.12.mlp.router.bias": "model-00000-of-00004.safetensors",
    "model.layers.12.mlp.router.weight": "model-00000-of-00004.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.12.self_attn.k_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.12.self_attn.o_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.12.self_attn.q_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.12.self_attn.sinks": "model-00000-of-00004.safetensors",
    "model.layers.12.self_attn.v_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.13.mlp.experts.down_proj": "model-00000-of-00004.safetensors",
    "model.layers.13.mlp.experts.down_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.13.mlp.experts.gate_up_proj": "model-00000-of-00004.safetensors",
    "model.layers.13.mlp.experts.gate_up_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.13.mlp.router.bias": "model-00000-of-00004.safetensors",
    "model.layers.13.mlp.router.weight": "model-00000-of-00004.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.13.self_attn.k_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.13.self_attn.o_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.13.self_attn.q_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.13.self_attn.sinks": "model-00000-of-00004.safetensors",
    "model.layers.13.self_attn.v_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.14.mlp.experts.down_proj": "model-00000-of-00004.safetensors",
    "model.layers.14.mlp.experts.down_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.14.mlp.experts.gate_up_proj": "model-00000-of-00004.safetensors",
    "model.layers.14.mlp.experts.gate_up_proj_bias": "model-00000-of-00004.safetensors",
    "model.layers.14.mlp.router.bias": "model-00000-of-00004.safetensors",
    "model.layers.14.mlp.router.weight": "model-00000-of-00004.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.14.self_attn.k_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.14.self_attn.o_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.14.self_attn.q_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.14.self_attn.sinks": "model-00000-of-00004.safetensors",
    "model.layers.14.self_attn.v_proj.bias": "model-00000-of-00004.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00000-of-00004.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00000-of-00004.safetensors",
    "model.layers.15.mlp.experts.down_proj": "model-00001-of-00004.safetensors",
    "model.layers.15.mlp.experts.down_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.15.mlp.experts.gate_up_proj": "model-00001-of-00004.safetensors",
    "model.layers.15.mlp.experts.gate_up_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.15.mlp.router.bias": "model-00001-of-00004.safetensors",
    "model.layers.15.mlp.router.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.o_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.sinks": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.mlp.experts.down_proj": "model-00001-of-00004.safetensors",
    "model.layers.16.mlp.experts.down_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.16.mlp.experts.gate_up_proj": "model-00001-of-00004.safetensors",
    "model.layers.16.mlp.experts.gate_up_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.16.mlp.router.bias": "model-00001-of-00004.safetensors",
    "model.layers.16.mlp.router.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.o_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.sinks": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.mlp.experts.down_proj": "model-00001-of-00004.safetensors",
    "model.layers.17.mlp.experts.down_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.17.mlp.experts.gate_up_proj": "model-00001-of-00004.safetensors",
    "model.layers.17.mlp.experts.gate_up_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.17.mlp.router.bias": "model-00001-of-00004.safetensors",
    "model.layers.17.mlp.router.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.o_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.sinks": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.mlp.experts.down_proj": "model-00001-of-00004.safetensors",
    "model.layers.18.mlp.experts.down_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.18.mlp.experts.gate_up_proj": "model-00001-of-00004.safetensors",
    "model.layers.18.mlp.experts.gate_up_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.18.mlp.router.bias": "model-00001-of-00004.safetensors",
    "model.layers.18.mlp.router.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.o_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.sinks": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.mlp.experts.down_proj": "model-00001-of-00004.safetensors",
    "model.layers.19.mlp.experts.down_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.19.mlp.experts.gate_up_proj": "model-00001-of-00004.safetensors",
    "model.layers.19.mlp.experts.gate_up_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.19.mlp.router.bias": "model-00001-of-00004.safetensors",
    "model.layers.19.mlp.router.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.self_attn.o_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.self_attn.sinks": "model-00001-of-00004.safetensors",
    "model.layers.19.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.mlp.experts.down_proj": "model-00001-of-00004.safetensors",
    "model.layers.2.mlp.experts.down_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.2.mlp.experts.gate_up_proj": "model-00001-of-00004.safetensors",
    "model.layers.2.mlp.experts.gate_up_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.2.mlp.router.bias": "model-00001-of-00004.safetensors",
    "model.layers.2.mlp.router.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.o_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.sinks": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.mlp.experts.down_proj": "model-00001-of-00004.safetensors",
    "model.layers.20.mlp.experts.down_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.20.mlp.experts.gate_up_proj": "model-00001-of-00004.safetensors",
    "model.layers.20.mlp.experts.gate_up_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.20.mlp.router.bias": "model-00001-of-00004.safetensors",
    "model.layers.20.mlp.router.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.o_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.sinks": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.mlp.experts.down_proj": "model-00001-of-00004.safetensors",
    "model.layers.21.mlp.experts.down_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.21.mlp.experts.gate_up_proj": "model-00001-of-00004.safetensors",
    "model.layers.21.mlp.experts.gate_up_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.21.mlp.router.bias": "model-00001-of-00004.safetensors",
    "model.layers.21.mlp.router.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.o_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.sinks": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.22.mlp.experts.down_proj": "model-00001-of-00004.safetensors",
    "model.layers.22.mlp.experts.down_proj_bias": "model-00001-of-00004.safetensors",
    "model.layers.22.mlp.experts.gate_up_proj": "model-00002-of-00004.safetensors",
    "model.layers.22.mlp.experts.gate_up_proj_bias": "model-00002-of-00004.safetensors",
    "model.layers.22.mlp.router.bias": "model-00002-of-00004.safetensors",
    "model.layers.22.mlp.router.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.o_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.sinks": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.23.mlp.experts.down_proj": "model-00002-of-00004.safetensors",
    "model.layers.23.mlp.experts.down_proj_bias": "model-00002-of-00004.safetensors",
    "model.layers.23.mlp.experts.gate_up_proj": "model-00002-of-00004.safetensors",
    "model.layers.23.mlp.experts.gate_up_proj_bias": "model-00002-of-00004.safetensors",
    "model.layers.23.mlp.router.bias": "model-00002-of-00004.safetensors",
|
| 286 |
+
"model.layers.23.mlp.router.weight": "model-00002-of-00004.safetensors",
|
| 287 |
+
"model.layers.23.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 288 |
+
"model.layers.23.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
|
| 289 |
+
"model.layers.23.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
| 290 |
+
"model.layers.23.self_attn.o_proj.bias": "model-00002-of-00004.safetensors",
|
| 291 |
+
"model.layers.23.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
| 292 |
+
"model.layers.23.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
|
| 293 |
+
"model.layers.23.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
| 294 |
+
"model.layers.23.self_attn.sinks": "model-00002-of-00004.safetensors",
|
| 295 |
+
"model.layers.23.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
|
| 296 |
+
"model.layers.23.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
| 297 |
+
"model.layers.3.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 298 |
+
"model.layers.3.mlp.experts.down_proj": "model-00002-of-00004.safetensors",
|
| 299 |
+
"model.layers.3.mlp.experts.down_proj_bias": "model-00002-of-00004.safetensors",
|
| 300 |
+
"model.layers.3.mlp.experts.gate_up_proj": "model-00002-of-00004.safetensors",
|
| 301 |
+
"model.layers.3.mlp.experts.gate_up_proj_bias": "model-00002-of-00004.safetensors",
|
| 302 |
+
"model.layers.3.mlp.router.bias": "model-00002-of-00004.safetensors",
|
| 303 |
+
"model.layers.3.mlp.router.weight": "model-00002-of-00004.safetensors",
|
| 304 |
+
"model.layers.3.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 305 |
+
"model.layers.3.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
|
| 306 |
+
"model.layers.3.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
| 307 |
+
"model.layers.3.self_attn.o_proj.bias": "model-00002-of-00004.safetensors",
|
| 308 |
+
"model.layers.3.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
| 309 |
+
"model.layers.3.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
|
| 310 |
+
"model.layers.3.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
| 311 |
+
"model.layers.3.self_attn.sinks": "model-00002-of-00004.safetensors",
|
| 312 |
+
"model.layers.3.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
|
| 313 |
+
"model.layers.3.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
| 314 |
+
"model.layers.4.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 315 |
+
"model.layers.4.mlp.experts.down_proj": "model-00002-of-00004.safetensors",
|
| 316 |
+
"model.layers.4.mlp.experts.down_proj_bias": "model-00002-of-00004.safetensors",
|
| 317 |
+
"model.layers.4.mlp.experts.gate_up_proj": "model-00002-of-00004.safetensors",
|
| 318 |
+
"model.layers.4.mlp.experts.gate_up_proj_bias": "model-00002-of-00004.safetensors",
|
| 319 |
+
"model.layers.4.mlp.router.bias": "model-00002-of-00004.safetensors",
|
| 320 |
+
"model.layers.4.mlp.router.weight": "model-00002-of-00004.safetensors",
|
| 321 |
+
"model.layers.4.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 322 |
+
"model.layers.4.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
|
| 323 |
+
"model.layers.4.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
| 324 |
+
"model.layers.4.self_attn.o_proj.bias": "model-00002-of-00004.safetensors",
|
| 325 |
+
"model.layers.4.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
| 326 |
+
"model.layers.4.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
|
| 327 |
+
"model.layers.4.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
| 328 |
+
"model.layers.4.self_attn.sinks": "model-00002-of-00004.safetensors",
|
| 329 |
+
"model.layers.4.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
|
| 330 |
+
"model.layers.4.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
| 331 |
+
"model.layers.5.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 332 |
+
"model.layers.5.mlp.experts.down_proj": "model-00002-of-00004.safetensors",
|
| 333 |
+
"model.layers.5.mlp.experts.down_proj_bias": "model-00002-of-00004.safetensors",
|
| 334 |
+
"model.layers.5.mlp.experts.gate_up_proj": "model-00002-of-00004.safetensors",
|
| 335 |
+
"model.layers.5.mlp.experts.gate_up_proj_bias": "model-00002-of-00004.safetensors",
|
| 336 |
+
"model.layers.5.mlp.router.bias": "model-00002-of-00004.safetensors",
|
| 337 |
+
"model.layers.5.mlp.router.weight": "model-00002-of-00004.safetensors",
|
| 338 |
+
"model.layers.5.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 339 |
+
"model.layers.5.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
|
| 340 |
+
"model.layers.5.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
| 341 |
+
"model.layers.5.self_attn.o_proj.bias": "model-00002-of-00004.safetensors",
|
| 342 |
+
"model.layers.5.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
| 343 |
+
"model.layers.5.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
|
| 344 |
+
"model.layers.5.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
| 345 |
+
"model.layers.5.self_attn.sinks": "model-00002-of-00004.safetensors",
|
| 346 |
+
"model.layers.5.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
|
| 347 |
+
"model.layers.5.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
| 348 |
+
"model.layers.6.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 349 |
+
"model.layers.6.mlp.experts.down_proj": "model-00002-of-00004.safetensors",
|
| 350 |
+
"model.layers.6.mlp.experts.down_proj_bias": "model-00002-of-00004.safetensors",
|
| 351 |
+
"model.layers.6.mlp.experts.gate_up_proj": "model-00002-of-00004.safetensors",
|
| 352 |
+
"model.layers.6.mlp.experts.gate_up_proj_bias": "model-00002-of-00004.safetensors",
|
| 353 |
+
"model.layers.6.mlp.router.bias": "model-00002-of-00004.safetensors",
|
| 354 |
+
"model.layers.6.mlp.router.weight": "model-00002-of-00004.safetensors",
|
| 355 |
+
"model.layers.6.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 356 |
+
"model.layers.6.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
|
| 357 |
+
"model.layers.6.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
| 358 |
+
"model.layers.6.self_attn.o_proj.bias": "model-00002-of-00004.safetensors",
|
| 359 |
+
"model.layers.6.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
| 360 |
+
"model.layers.6.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
|
| 361 |
+
"model.layers.6.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
| 362 |
+
"model.layers.6.self_attn.sinks": "model-00002-of-00004.safetensors",
|
| 363 |
+
"model.layers.6.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
|
| 364 |
+
"model.layers.6.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
| 365 |
+
"model.layers.7.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 366 |
+
"model.layers.7.mlp.experts.down_proj": "model-00002-of-00004.safetensors",
|
| 367 |
+
"model.layers.7.mlp.experts.down_proj_bias": "model-00002-of-00004.safetensors",
|
| 368 |
+
"model.layers.7.mlp.experts.gate_up_proj": "model-00002-of-00004.safetensors",
|
| 369 |
+
"model.layers.7.mlp.experts.gate_up_proj_bias": "model-00002-of-00004.safetensors",
|
| 370 |
+
"model.layers.7.mlp.router.bias": "model-00002-of-00004.safetensors",
|
| 371 |
+
"model.layers.7.mlp.router.weight": "model-00002-of-00004.safetensors",
|
| 372 |
+
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 373 |
+
"model.layers.7.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
|
| 374 |
+
"model.layers.7.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
| 375 |
+
"model.layers.7.self_attn.o_proj.bias": "model-00002-of-00004.safetensors",
|
| 376 |
+
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
| 377 |
+
"model.layers.7.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
|
| 378 |
+
"model.layers.7.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
| 379 |
+
"model.layers.7.self_attn.sinks": "model-00002-of-00004.safetensors",
|
| 380 |
+
"model.layers.7.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
|
| 381 |
+
"model.layers.7.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
| 382 |
+
"model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 383 |
+
"model.layers.8.mlp.experts.down_proj": "model-00002-of-00004.safetensors",
|
| 384 |
+
"model.layers.8.mlp.experts.down_proj_bias": "model-00002-of-00004.safetensors",
|
| 385 |
+
"model.layers.8.mlp.experts.gate_up_proj": "model-00002-of-00004.safetensors",
|
| 386 |
+
"model.layers.8.mlp.experts.gate_up_proj_bias": "model-00002-of-00004.safetensors",
|
| 387 |
+
"model.layers.8.mlp.router.bias": "model-00002-of-00004.safetensors",
|
| 388 |
+
"model.layers.8.mlp.router.weight": "model-00002-of-00004.safetensors",
|
| 389 |
+
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 390 |
+
"model.layers.8.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
|
| 391 |
+
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
| 392 |
+
"model.layers.8.self_attn.o_proj.bias": "model-00002-of-00004.safetensors",
|
| 393 |
+
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
| 394 |
+
"model.layers.8.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
|
| 395 |
+
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
| 396 |
+
"model.layers.8.self_attn.sinks": "model-00002-of-00004.safetensors",
|
| 397 |
+
"model.layers.8.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
|
| 398 |
+
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
| 399 |
+
"model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
| 400 |
+
"model.layers.9.mlp.experts.down_proj": "model-00002-of-00004.safetensors",
|
| 401 |
+
"model.layers.9.mlp.experts.down_proj_bias": "model-00002-of-00004.safetensors",
|
| 402 |
+
"model.layers.9.mlp.experts.gate_up_proj": "model-00003-of-00004.safetensors",
|
| 403 |
+
"model.layers.9.mlp.experts.gate_up_proj_bias": "model-00003-of-00004.safetensors",
|
| 404 |
+
"model.layers.9.mlp.router.bias": "model-00003-of-00004.safetensors",
|
| 405 |
+
"model.layers.9.mlp.router.weight": "model-00003-of-00004.safetensors",
|
| 406 |
+
"model.layers.9.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
| 407 |
+
"model.layers.9.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
|
| 408 |
+
"model.layers.9.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
| 409 |
+
"model.layers.9.self_attn.o_proj.bias": "model-00003-of-00004.safetensors",
|
| 410 |
+
"model.layers.9.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
| 411 |
+
"model.layers.9.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
|
| 412 |
+
"model.layers.9.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
| 413 |
+
"model.layers.9.self_attn.sinks": "model-00003-of-00004.safetensors",
|
| 414 |
+
"model.layers.9.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
|
| 415 |
+
"model.layers.9.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
| 416 |
+
"model.norm.weight": "model-00003-of-00004.safetensors"
|
| 417 |
+
}
|
| 418 |
+
}
|
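The `weight_map` above is what `transformers` consults when loading a sharded checkpoint: each tensor name points at the single shard file that holds it. A minimal sketch of resolving one tensor by hand, assuming the index and the four shards sit in the working directory (the tensor name is taken from the map above):

```python
import json
from safetensors import safe_open

# Resolve a single tensor through the weight_map without
# touching the other three shards.
with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.layers.9.mlp.experts.gate_up_proj"  # mapped to shard 00003 above
shard = index["weight_map"][name]

# safe_open memory-maps the shard, so only this tensor is read.
with safe_open(shard, framework="pt") as f:
    tensor = f.get_tensor(name)

print(name, tuple(tensor.shape), "from", shard)
```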
special_tokens_map.json
ADDED
@@ -0,0 +1,5 @@
+{
+  "bos_token": "<|startoftext|>",
+  "eos_token": "<|return|>",
+  "pad_token": "<|endoftext|>"
+}
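These three entries populate the tokenizer's `bos_token`, `eos_token`, and `pad_token` attributes on load. Note that end-of-sequence is `<|return|>` rather than `<|endoftext|>`, so generation stops on the final-reply marker. A quick hedged check (the local path is a placeholder, not a published repo id):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/kappa_20b_131k")

# special_tokens_map.json is the source of these attributes:
assert tok.bos_token == "<|startoftext|>"
assert tok.eos_token == "<|return|>"   # generation stops here, not at <|endoftext|>
assert tok.pad_token == "<|endoftext|>"
```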
tokenizer.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0614fe83cadab421296e664e1f48f4261fa8fef6e03e63bb75c20f38e37d07d3
+size 27868174
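This is a Git LFS pointer, not the tokenizer itself: the real ~27 MB `tokenizer.json` is fetched from LFS storage and must hash to the `oid` above. A small sketch that verifies a downloaded copy against the pointer:

```python
import hashlib
import os

# Values copied from the LFS pointer above.
EXPECTED_OID = "0614fe83cadab421296e664e1f48f4261fa8fef6e03e63bb75c20f38e37d07d3"
EXPECTED_SIZE = 27868174

path = "tokenizer.json"  # assumes the real blob was fetched, not the pointer
assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("tokenizer.json matches its LFS pointer")
```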
tokenizer_config.json
ADDED
@@ -0,0 +1,183 @@
+{
+  "added_tokens_decoder": {
+    "199998": {
+      "content": "<|startoftext|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "199999": {
+      "content": "<|endoftext|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200000": {
+      "content": "<|reserved_200000|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200001": {
+      "content": "<|reserved_200001|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200002": {
+      "content": "<|return|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200003": {
+      "content": "<|constrain|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200004": {
+      "content": "<|reserved_200004|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200005": {
+      "content": "<|channel|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200006": {
+      "content": "<|start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200007": {
+      "content": "<|end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200008": {
+      "content": "<|message|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200009": {
+      "content": "<|reserved_200009|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200010": {
+      "content": "<|reserved_200010|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200011": {
+      "content": "<|reserved_200011|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200012": {
+      "content": "<|call|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200013": {
+      "content": "<|reserved_200013|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200014": {
+      "content": "<|reserved_200014|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200015": {
+      "content": "<|reserved_200015|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200016": {
+      "content": "<|reserved_200016|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200017": {
+      "content": "<|reserved_200017|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200018": {
+      "content": "<|endofprompt|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    }
+  },
+  "bos_token": "<|startoftext|>",
+  "clean_up_tokenization_spaces": false,
+  "eos_token": "<|return|>",
+  "extra_special_tokens": {},
+  "model_input_names": [
+    "input_ids",
+    "attention_mask"
+  ],
+  "model_max_length": 1000000000000000019884624838656,
+  "pad_token": "<|endoftext|>",
+  "tokenizer_class": "PreTrainedTokenizerFast"
+}
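The `added_tokens_decoder` block pins the chat-format control tokens (`<|start|>`, `<|channel|>`, `<|message|>`, `<|end|>`, `<|call|>`, `<|return|>`, ...) to a contiguous id block (199998-200018) that the chat template relies on. A hedged spot-check that the loaded tokenizer resolves them to the declared ids (the local path is again a placeholder):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/kappa_20b_131k")

# Ids copied from added_tokens_decoder above.
controls = {
    "<|startoftext|>": 199998,
    "<|endoftext|>": 199999,
    "<|return|>": 200002,
    "<|channel|>": 200005,
    "<|start|>": 200006,
    "<|end|>": 200007,
    "<|message|>": 200008,
    "<|call|>": 200012,
}
for token, expected_id in controls.items():
    assert tok.convert_tokens_to_ids(token) == expected_id, token
print("all control tokens map to their declared ids")
```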