erfanzar committed · Commit 0fb4a2f · verified · 1 Parent(s): 5ae18df

Adding EasyDeL Checkpoints

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .gitattributes +0 -0
  2. README.md +159 -0
  3. chat_template.jinja +109 -0
  4. checkpoint_metadata.json +6 -0
  5. config.json +215 -0
  6. generation_config.json +13 -0
  7. model/lm_head/kernel/.zarray +1 -0
  8. model/lm_head/kernel/0.0 +3 -0
  9. model/lm_head/kernel/1.0 +3 -0
  10. model/lm_head/kernel/2.0 +3 -0
  11. model/lm_head/kernel/3.0 +3 -0
  12. model/model/embed_tokens/embedding/.zarray +1 -0
  13. model/model/embed_tokens/embedding/0.0 +3 -0
  14. model/model/embed_tokens/embedding/1.0 +3 -0
  15. model/model/embed_tokens/embedding/2.0 +3 -0
  16. model/model/embed_tokens/embedding/3.0 +3 -0
  17. model/model/layers/0/input_layernorm/kernel/.zarray +1 -0
  18. model/model/layers/0/input_layernorm/kernel/0 +0 -0
  19. model/model/layers/0/mlp/down_proj/kernel/.zarray +1 -0
  20. model/model/layers/0/mlp/down_proj/kernel/0.0 +3 -0
  21. model/model/layers/0/mlp/down_proj/kernel/0.1 +3 -0
  22. model/model/layers/0/mlp/down_proj/kernel/0.2 +3 -0
  23. model/model/layers/0/mlp/down_proj/kernel/0.3 +3 -0
  24. model/model/layers/0/mlp/gate_proj/kernel/.zarray +1 -0
  25. model/model/layers/0/mlp/gate_proj/kernel/0.0 +3 -0
  26. model/model/layers/0/mlp/gate_proj/kernel/1.0 +3 -0
  27. model/model/layers/0/mlp/gate_proj/kernel/2.0 +3 -0
  28. model/model/layers/0/mlp/gate_proj/kernel/3.0 +3 -0
  29. model/model/layers/0/mlp/up_proj/kernel/.zarray +1 -0
  30. model/model/layers/0/mlp/up_proj/kernel/0.0 +3 -0
  31. model/model/layers/0/mlp/up_proj/kernel/1.0 +3 -0
  32. model/model/layers/0/mlp/up_proj/kernel/2.0 +3 -0
  33. model/model/layers/0/mlp/up_proj/kernel/3.0 +3 -0
  34. model/model/layers/0/post_attention_layernorm/kernel/.zarray +1 -0
  35. model/model/layers/0/post_attention_layernorm/kernel/0 +0 -0
  36. model/model/layers/0/self_attn/k_proj/kernel/.zarray +1 -0
  37. model/model/layers/0/self_attn/k_proj/kernel/0.0 +3 -0
  38. model/model/layers/0/self_attn/k_proj/kernel/1.0 +3 -0
  39. model/model/layers/0/self_attn/k_proj/kernel/2.0 +3 -0
  40. model/model/layers/0/self_attn/k_proj/kernel/3.0 +3 -0
  41. model/model/layers/0/self_attn/o_proj/kernel/.zarray +1 -0
  42. model/model/layers/0/self_attn/o_proj/kernel/0.0 +3 -0
  43. model/model/layers/0/self_attn/o_proj/kernel/0.1 +3 -0
  44. model/model/layers/0/self_attn/o_proj/kernel/0.2 +3 -0
  45. model/model/layers/0/self_attn/o_proj/kernel/0.3 +3 -0
  46. model/model/layers/0/self_attn/q_proj/kernel/.zarray +1 -0
  47. model/model/layers/0/self_attn/q_proj/kernel/0.0 +3 -0
  48. model/model/layers/0/self_attn/q_proj/kernel/1.0 +3 -0
  49. model/model/layers/0/self_attn/q_proj/kernel/2.0 +3 -0
  50. model/model/layers/0/self_attn/q_proj/kernel/3.0 +3 -0
.gitattributes CHANGED
The diff for this file is too large to render.
 
README.md ADDED
@@ -0,0 +1,159 @@
+ ---
+ library_name: easydel
+ pipeline_tag: text-generation
+ tags:
+ - easydel
+ - jax
+ - "llama"
+ - "CausalLM"
+ - "vanilla"
+ ---
+
+ <p align="center">
+ <img alt="easydel" src="https://raw.githubusercontent.com/erfanzar/easydel/main/images/easydel-logo-with-text.png">
+ </p>
+
+ <h1 align="center">meta-llama/Llama-3.1-70B-Instruct</h1>
+
+ <div align="center">
+ EasyDeL checkpoint converted from meta-llama/Llama-3.1-70B-Instruct.
+ </div>
+
+ ## Overview
+
+ This checkpoint is intended to be loaded with EasyDeL on JAX (CPU/GPU/TPU). It supports sharded loading with `auto_shard_model=True` and configurable precision via `dtype`, `param_dtype`, and `precision`.
+
+ ## Quickstart
+
+ ```python
+ import easydel as ed
+ from jax import numpy as jnp, lax
+
+ repo_id = "EasyDeL/Llama-3.1-70B-Instruct"
+
+ dtype = jnp.bfloat16  # try jnp.float16 on many GPUs
+
+ model = ed.AutoEasyDeLModelForCausalLM.from_pretrained(
+     repo_id,
+     dtype=dtype,
+     param_dtype=dtype,
+     precision=lax.Precision("fastest"),
+     sharding_axis_names=("dp", "fsdp", "ep", "tp", "sp"),
+     sharding_axis_dims=(1, -1, 1, 1, 1),
+     config_kwargs=ed.EasyDeLBaseConfigDict(
+         attn_dtype=dtype,
+         attn_mechanism=ed.AttentionMechanisms.VANILLA,
+         fsdp_is_ep_bound=True,
+         sp_is_ep_bound=True,
+         moe_method=ed.MoEMethods.FUSED_MOE,
+     ),
+     auto_shard_model=True,
+     partition_axis=ed.PartitionAxis(),
+ )
+ ```
+
+ If the repository only provides PyTorch weights, pass `from_torch=True` to `from_pretrained(...)`.
+
+ ## Sharding & Parallelism (Multi-Device)
+
+ EasyDeL can scale to multiple devices by creating a logical device mesh. Most EasyDeL loaders use a 5D mesh:
+
+ - `dp`: data parallel (replicated parameters, different batch shards)
+ - `fsdp`: parameter sharding (memory saver; often the biggest axis)
+ - `ep`: expert parallel (MoE; keep `1` for non-MoE models)
+ - `tp`: tensor parallel (splits large matmuls)
+ - `sp`: sequence parallel (splits the sequence dimension)
+
+ Use `sharding_axis_names=("dp", "fsdp", "ep", "tp", "sp")` and choose `sharding_axis_dims` so that their product equals your device count.
+ You can use `-1` in `sharding_axis_dims` to let EasyDeL infer the remaining dimension.
+
+ <details>
+ <summary>Example sharding configs</summary>
+
+ ```python
+ # 8 devices, pure FSDP
+ sharding_axis_dims = (1, 8, 1, 1, 1)
+
+ # 8 devices, 2-way DP x 4-way FSDP
+ sharding_axis_dims = (2, 4, 1, 1, 1)
+
+ # 8 devices, 4-way FSDP x 2-way TP
+ sharding_axis_dims = (1, 4, 1, 2, 1)
+ ```
+ </details>
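The axis-dims rule above (the product must equal the device count, with a single `-1` inferred) can be sanity-checked in plain Python. The helper below is illustrative only, not an EasyDeL API:

```python
import math

def resolve_axis_dims(axis_dims, device_count):
    """Resolve one -1 entry in sharding_axis_dims so the product equals device_count."""
    fixed = math.prod(d for d in axis_dims if d != -1)
    if -1 in axis_dims:
        if device_count % fixed != 0:
            raise ValueError(f"{device_count} devices not divisible by {fixed}")
        axis_dims = tuple(device_count // fixed if d == -1 else d for d in axis_dims)
    if math.prod(axis_dims) != device_count:
        raise ValueError("product of sharding_axis_dims must equal the device count")
    return axis_dims

# The README's default (1, -1, 1, 1, 1) on 8 devices resolves to pure 8-way FSDP:
print(resolve_axis_dims((1, -1, 1, 1, 1), 8))  # (1, 8, 1, 1, 1)
```

In practice EasyDeL performs this resolution internally; the sketch just shows what `-1` means.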
+
+ ## Using via `eLargeModel` (ELM)
+
+ `eLargeModel` is a higher-level interface that wires together loading, sharding, training, and eSurge inference from a single config.
+
+ ```python
+ from easydel import eLargeModel
+
+ repo_id = "EasyDeL/Llama-3.1-70B-Instruct"
+
+ elm = eLargeModel.from_pretrained(repo_id)  # task is auto-detected
+ elm.set_dtype("bf16")
+ elm.set_sharding(axis_names=("dp", "fsdp", "ep", "tp", "sp"), axis_dims=(1, -1, 1, 1, 1))
+
+ model = elm.build_model()
+ # Optional: build an inference engine
+ # engine = elm.build_esurge()
+ ```
+
+ <details>
+ <summary>ELM YAML config example</summary>
+
+ ```yaml
+ model:
+   name_or_path: "EasyDeL/Llama-3.1-70B-Instruct"
+
+ loader:
+   dtype: bf16
+   param_dtype: bf16
+
+ sharding:
+   axis_dims: [1, -1, 1, 1, 1]
+   auto_shard_model: true
+ ```
+ </details>
+
+ ## Features
+
+ **EasyDeL:**
+ - JAX-native implementation and sharded execution
+ - Configurable attention backends via `AttentionMechanisms.*`
+ - Precision control via `dtype`, `param_dtype`, and `precision`
+
+ ## Installation
+
+ ```bash
+ pip install easydel
+ ```
+
+ ## Links
+
+ - EasyDeL GitHub: https://github.com/erfanzar/EasyDeL
+ - Docs: https://easydel.readthedocs.io/en/latest/
+
+ ## Supported Tasks
+
+ - CausalLM
+
+ ## Limitations
+
+ - Refer to the original model card for training data, evaluation, and intended use.
+
+ ## License
+
+ EasyDeL is released under the Apache-2.0 license. The license for this model's weights may differ; please consult the original repository.
+
+ ## Citation
+
+ ```bibtex
+ @misc{zare_chavoshi_2023,
+   title={EasyDeL: An open-source library for enhancing and streamlining the training process of machine learning models},
+   url={https://github.com/erfanzar/EasyDeL},
+   author={Zare Chavoshi, Erfan},
+   year={2023}
+ }
+ ```
chat_template.jinja ADDED
@@ -0,0 +1,109 @@
+ {{- bos_token }}
+ {%- if custom_tools is defined %}
+ {%- set tools = custom_tools %}
+ {%- endif %}
+ {%- if not tools_in_user_message is defined %}
+ {%- set tools_in_user_message = true %}
+ {%- endif %}
+ {%- if not date_string is defined %}
+ {%- set date_string = "26 Jul 2024" %}
+ {%- endif %}
+ {%- if not tools is defined %}
+ {%- set tools = none %}
+ {%- endif %}
+
+ {#- This block extracts the system message, so we can slot it into the right place. #}
+ {%- if messages[0]['role'] == 'system' %}
+ {%- set system_message = messages[0]['content']|trim %}
+ {%- set messages = messages[1:] %}
+ {%- else %}
+ {%- set system_message = "" %}
+ {%- endif %}
+
+ {#- System message + builtin tools #}
+ {{- "<|start_header_id|>system<|end_header_id|>\n\n" }}
+ {%- if builtin_tools is defined or tools is not none %}
+ {{- "Environment: ipython\n" }}
+ {%- endif %}
+ {%- if builtin_tools is defined %}
+ {{- "Tools: " + builtin_tools | reject('equalto', 'code_interpreter') | join(", ") + "\n\n"}}
+ {%- endif %}
+ {{- "Cutting Knowledge Date: December 2023\n" }}
+ {{- "Today Date: " + date_string + "\n\n" }}
+ {%- if tools is not none and not tools_in_user_message %}
+ {{- "You have access to the following functions. To call a function, please respond with JSON for a function call." }}
+ {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
+ {{- "Do not use variables.\n\n" }}
+ {%- for t in tools %}
+ {{- t | tojson(indent=4) }}
+ {{- "\n\n" }}
+ {%- endfor %}
+ {%- endif %}
+ {{- system_message }}
+ {{- "<|eot_id|>" }}
+
+ {#- Custom tools are passed in a user message with some extra guidance #}
+ {%- if tools_in_user_message and not tools is none %}
+ {#- Extract the first user message so we can plug it in here #}
+ {%- if messages | length != 0 %}
+ {%- set first_user_message = messages[0]['content']|trim %}
+ {%- set messages = messages[1:] %}
+ {%- else %}
+ {{- raise_exception("Cannot put tools in the first user message when there's no first user message!") }}
+ {%- endif %}
+ {{- '<|start_header_id|>user<|end_header_id|>\n\n' -}}
+ {{- "Given the following functions, please respond with a JSON for a function call " }}
+ {{- "with its proper arguments that best answers the given prompt.\n\n" }}
+ {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
+ {{- "Do not use variables.\n\n" }}
+ {%- for t in tools %}
+ {{- t | tojson(indent=4) }}
+ {{- "\n\n" }}
+ {%- endfor %}
+ {{- first_user_message + "<|eot_id|>"}}
+ {%- endif %}
+
+ {%- for message in messages %}
+ {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}
+ {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' }}
+ {%- elif 'tool_calls' in message %}
+ {%- if not message.tool_calls|length == 1 %}
+ {{- raise_exception("This model only supports single tool-calls at once!") }}
+ {%- endif %}
+ {%- set tool_call = message.tool_calls[0].function %}
+ {%- if builtin_tools is defined and tool_call.name in builtin_tools %}
+ {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
+ {{- "<|python_tag|>" + tool_call.name + ".call(" }}
+ {%- for arg_name, arg_val in tool_call.arguments | items %}
+ {{- arg_name + '="' + arg_val + '"' }}
+ {%- if not loop.last %}
+ {{- ", " }}
+ {%- endif %}
+ {%- endfor %}
+ {{- ")" }}
+ {%- else %}
+ {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
+ {{- '{"name": "' + tool_call.name + '", ' }}
+ {{- '"parameters": ' }}
+ {{- tool_call.arguments | tojson }}
+ {{- "}" }}
+ {%- endif %}
+ {%- if builtin_tools is defined %}
+ {#- This means we're in ipython mode #}
+ {{- "<|eom_id|>" }}
+ {%- else %}
+ {{- "<|eot_id|>" }}
+ {%- endif %}
+ {%- elif message.role == "tool" or message.role == "ipython" %}
+ {{- "<|start_header_id|>ipython<|end_header_id|>\n\n" }}
+ {%- if message.content is mapping or message.content is iterable %}
+ {{- message.content | tojson }}
+ {%- else %}
+ {{- message.content }}
+ {%- endif %}
+ {{- "<|eot_id|>" }}
+ {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+ {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}
+ {%- endif %}
checkpoint_metadata.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "timestamp": "2025-12-28T17:13:52.782585",
+   "custom_metadata": {
+     "step": 0
+   }
+ }
config.json ADDED
@@ -0,0 +1,215 @@
+ {
+   "_external_rope_config_kwargs": {},
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "attn_mechanism": "vanilla",
+   "backend": null,
+   "bits": null,
+   "blocksize_b": 1,
+   "blocksize_k": 128,
+   "blocksize_q": 128,
+   "bos_token_id": 128000,
+   "decode_attn_mechanism": null,
+   "dtype": "bfloat16",
+   "easy_method": "train",
+   "embd_pdrop": 0.0,
+   "eos_token_id": [
+     128001,
+     128008,
+     128009
+   ],
+   "fcm_max_ratio": -1,
+   "fcm_min_ratio": -1,
+   "flash_attention_backward_pass_impl": "triton",
+   "fsdp_is_ep_bound": true,
+   "gradient_checkpointing": "",
+   "gradient_checkpointing_targets": null,
+   "hardware_abstraction": false,
+   "head_dim": 128,
+   "hidden_act": "silu",
+   "hidden_size": 8192,
+   "initializer_range": 0.02,
+   "intermediate_size": 28672,
+   "kv_cache_quantization_config": null,
+   "kv_cache_sharding_sequence_axis_name": "sp",
+   "layer_types": [
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention"
+   ],
+   "max_position_embeddings": 131072,
+   "mlp_bias": false,
+   "model_type": "llama",
+   "moe_force_xla_gmm": false,
+   "moe_method": "fused_moe",
+   "moe_tiling_size_batch": 4,
+   "moe_tiling_size_dim": 128,
+   "moe_tiling_size_seqlen": 128,
+   "num_attention_heads": 64,
+   "num_hidden_layers": 80,
+   "num_key_value_heads": 8,
+   "number_rep_kv": 1,
+   "operation_configs": null,
+   "pallas_k_block_size": 128,
+   "pallas_m_block_size": 128,
+   "pallas_n_block_size": 128,
+   "partition_axis": {
+     "attention_dim_axis": null,
+     "attention_kv_dim_axis": null,
+     "batch_axis": [
+       "fsdp",
+       "dp"
+     ],
+     "bias_head_sequence_axis": null,
+     "bias_key_sequence_axis": null,
+     "data_parallel_axis": "dp",
+     "decode_attention_dim_axis": null,
+     "decode_attention_kv_dim_axis": null,
+     "decode_batch_axis": [
+       "fsdp",
+       "dp"
+     ],
+     "decode_head_axis": "tp",
+     "decode_key_sequence_axis": "sp",
+     "decode_kv_head_axis": "tp",
+     "decode_query_sequence_axis": null,
+     "expert_axis": "ep",
+     "expert_gate_axis": null,
+     "expert_parallel_axis": "ep",
+     "fully_sharded_data_parallel_axis": "fsdp",
+     "head_axis": "tp",
+     "hidden_state_axis": "tp",
+     "key_sequence_axis": "sp",
+     "kv_head_axis": "tp",
+     "mlp_intermediate_axis": "tp",
+     "query_sequence_axis": "sp",
+     "sequence_axis": "sp",
+     "sequence_parallel_axis": "sp",
+     "tensor_parallel_axis": "tp",
+     "vocab_axis": "tp"
+   },
+   "platform": null,
+   "precompute_masks": true,
+   "pretraining_tp": 1,
+   "quantization_config": null,
+   "resid_pdrop": 0.0,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": {
+     "factor": 8.0,
+     "high_freq_factor": 4.0,
+     "low_freq_factor": 1.0,
+     "original_max_position_embeddings": 8192,
+     "rope_type": "llama3"
+   },
+   "rope_theta": 500000.0,
+   "scan_attention_layers": false,
+   "scan_layers": false,
+   "scan_mlp_chunk_size": 1024,
+   "scan_ring_attention": true,
+   "sequence_axis_name": "sp",
+   "sharding_axis_dims": [
+     1,
+     -1,
+     1,
+     1,
+     1
+   ],
+   "sharding_axis_names": [
+     "dp",
+     "fsdp",
+     "ep",
+     "tp",
+     "sp"
+   ],
+   "sharding_dcn_axis_dims": null,
+   "sp_is_ep_bound": true,
+   "tie_word_embeddings": false,
+   "transformers_version": "4.57.3",
+   "use_cache": true,
+   "use_expert_tensor_mode": false,
+   "use_ring_of_experts": false,
+   "use_scan_mlp": false,
+   "use_sharded_kv_caching": false,
+   "use_sharding_constraint": false,
+   "vocab_size": 128256
+ }
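The attention geometry in this config can be cross-checked with a few lines of Python. The values are copied from the JSON above; this is an illustrative consistency check, not part of EasyDeL:

```python
# Values from config.json above
num_attention_heads = 64
num_key_value_heads = 8
head_dim = 128
hidden_size = 8192

# The query projection maps hidden_size -> num_attention_heads * head_dim
assert num_attention_heads * head_dim == hidden_size  # 64 * 128 == 8192

# Under grouped-query attention, K/V projections are smaller:
# 8 * 128 = 1024 output columns, matching the [8192, 1024] k_proj kernel
# shape stored in the zarr metadata below.
assert num_key_value_heads * head_dim == 1024

# Each KV head is shared by this many query heads:
print(num_attention_heads // num_key_value_heads)  # 8
```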
generation_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "bos_token_id": 128000,
+   "do_sample": true,
+   "eos_token_id": [
+     128001,
+     128008,
+     128009
+   ],
+   "temperature": 0.6,
+   "top_p": 0.9,
+   "transformers_version": "4.57.3",
+   "trust_remote_code": false
+ }
model/lm_head/kernel/.zarray ADDED
@@ -0,0 +1 @@
+ {"chunks":[2048,128256],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192,128256],"zarr_format":2}
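This `.zarray` metadata describes a `[8192, 128256]` bfloat16 array stored in `[2048, 128256]` chunks, so the chunk grid is 4×1 and the chunk files below are named `0.0` through `3.0` (row.column, with `.` as the `dimension_separator` in zarr v2). The mapping from shape and chunk size to file names can be sketched as:

```python
import math

# Shape and chunk size from the .zarray metadata above
shape = (8192, 128256)
chunks = (2048, 128256)

# Number of chunks along each dimension (ceiling division)
grid = tuple(math.ceil(s / c) for s, c in zip(shape, chunks))
print(grid)  # (4, 1)

# Zarr v2 chunk keys with "." as the dimension separator
keys = [f"{i}.{j}" for i in range(grid[0]) for j in range(grid[1])]
print(keys)  # ['0.0', '1.0', '2.0', '3.0']
```

The same naming rule explains the other kernels in this commit, e.g. `down_proj` chunked along its second dimension yields keys `0.0` through `0.3`.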
model/lm_head/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ae1c093d514e19f0051fb4b864096dfb758e4cc46e43a430cda26e522a275755
+ size 404430715
model/lm_head/kernel/1.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1aba3eefb1835a54f9239723c4393285568b32c219075c1499d075712fc971a3
+ size 404411585
model/lm_head/kernel/2.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:028f3a524d44ac33d4afbe724e7a61b4fc33ec5fb446bc928498ea91739f7e3c
+ size 404373628
model/lm_head/kernel/3.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e267937bf3f8fc45e3191c5248acda2624f5cea979d2bbd4d493e807b6638012
+ size 404453036
model/model/embed_tokens/embedding/.zarray ADDED
@@ -0,0 +1 @@
+ {"chunks":[32064,8192],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[128256,8192],"zarr_format":2}
model/model/embed_tokens/embedding/0.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24c2244044b1610c167b7b54d45ceda53af6c926198b9cee0d4daf92367468fe
+ size 406703232
model/model/embed_tokens/embedding/1.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e2a0940320a19df0d9b20de9129bdcbadb9a97be7baf46c313010a9abea3e319
+ size 405943369
model/model/embed_tokens/embedding/2.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0bb6d014a082d2fdc42a96c723de56a8c8dbf7d2853fba78369847c98114a99
+ size 406086940
model/model/embed_tokens/embedding/3.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c1d4ba9d22f6eae9b33024256fecfb55e952b39bc6cfba561208bf6b7e4b66dd
+ size 406538359
model/model/layers/0/input_layernorm/kernel/.zarray ADDED
@@ -0,0 +1 @@
+ {"chunks":[8192],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192],"zarr_format":2}
model/model/layers/0/input_layernorm/kernel/0 ADDED
Binary file (13.2 kB).
model/model/layers/0/mlp/down_proj/kernel/.zarray ADDED
@@ -0,0 +1 @@
+ {"chunks":[28672,2048],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[28672,8192],"zarr_format":2}
model/model/layers/0/mlp/down_proj/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61ab0b61b0cbbd9f6e22a638e5343a1a9c6cbf9b57181c0411b80ed957b8252a
+ size 91332963
model/model/layers/0/mlp/down_proj/kernel/0.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05d085053c5de1fc652d1efd046d13ddc069b95ea6b93bfaaefa67b0dda0cc39
+ size 91338235
model/model/layers/0/mlp/down_proj/kernel/0.2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a767ab50cbe8bdb89eb452b47231016813321dddedfe176c5f9b000405ca4aa5
+ size 91322910
model/model/layers/0/mlp/down_proj/kernel/0.3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:740267a1a02c8769b7e4f414a834b6e23405ef1e82b33cbfd0f9dd357ffa21fa
+ size 91328648
model/model/layers/0/mlp/gate_proj/kernel/.zarray ADDED
@@ -0,0 +1 @@
+ {"chunks":[2048,28672],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192,28672],"zarr_format":2}
model/model/layers/0/mlp/gate_proj/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d86851ad4317d3a42ddca95a543451a64de39b7940236af735c384b7dcd102f
+ size 91498085
model/model/layers/0/mlp/gate_proj/kernel/1.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e2d468dc7788f35593294357c73587d988cb8f7f0344b053dd658b3f6f98f1a3
+ size 91493559
model/model/layers/0/mlp/gate_proj/kernel/2.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4503430ac49af004cd8b0e3a625b83335b8f27542e79e9a11af23eacfba323f
+ size 91478249
model/model/layers/0/mlp/gate_proj/kernel/3.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ac6bc4b3f585f5a80ff31025c6770d8adc4cd52003cc9ac6b9904e9d2fb6169
+ size 91472685
model/model/layers/0/mlp/up_proj/kernel/.zarray ADDED
@@ -0,0 +1 @@
+ {"chunks":[2048,28672],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192,28672],"zarr_format":2}
model/model/layers/0/mlp/up_proj/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c0324e544b81a2f8e7505bff4599dc0a88d8449ae8e05a7e8d8444c0c37e3f2
+ size 91483177
model/model/layers/0/mlp/up_proj/kernel/1.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b64e7f82e3a3356cdfb9acc4999f86e3e5d2ab3de709a54028ca0db70446b386
+ size 91476574
model/model/layers/0/mlp/up_proj/kernel/2.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d5efd04146c7d01ca283b5644f3c2dea6909e82094d44ae1407230aef6d39cc
+ size 91465651
model/model/layers/0/mlp/up_proj/kernel/3.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ffdd59f740ebc30bdb88a6f1c4a08ce973f968209935298c10323520017f02a
+ size 91453607
model/model/layers/0/post_attention_layernorm/kernel/.zarray ADDED
@@ -0,0 +1 @@
+ {"chunks":[8192],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192],"zarr_format":2}
model/model/layers/0/post_attention_layernorm/kernel/0 ADDED
Binary file (10.5 kB).
model/model/layers/0/self_attn/k_proj/kernel/.zarray ADDED
@@ -0,0 +1 @@
+ {"chunks":[2048,1024],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192,1024],"zarr_format":2}
model/model/layers/0/self_attn/k_proj/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:636692c4ca51b70368d7946393862aa3335cdd426ffd8ac203c22caecb076f2d
+ size 3413985
model/model/layers/0/self_attn/k_proj/kernel/1.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b75646198c54c847f489557c8b12bbae003174c2f5b35758fa4841d5b5d47856
+ size 3415686
model/model/layers/0/self_attn/k_proj/kernel/2.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74d0999bc09179026a895c6b0ad7ea5bb357b7dc6cf081d07b97ad6d22cdba8b
+ size 3413245
model/model/layers/0/self_attn/k_proj/kernel/3.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dae710c965afd591bc3b44f8ae1ad52da08066fce5e6245cde8cb3a4748356a1
+ size 3419309
model/model/layers/0/self_attn/o_proj/kernel/.zarray ADDED
@@ -0,0 +1 @@
+ {"chunks":[8192,2048],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192,8192],"zarr_format":2}
model/model/layers/0/self_attn/o_proj/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e685eca33b91d6c0ec8264b0cc2ada7f007bf697190caf88bbcbf69cddfb18ef
+ size 26024278
model/model/layers/0/self_attn/o_proj/kernel/0.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9da7f13fc9bb6566062282974c2b6c49e6472258bc1c3161f064fd144804024
+ size 26026767
model/model/layers/0/self_attn/o_proj/kernel/0.2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c9292120b0c07688779230f8bd1ed0037469cc427aa6b624adb9c30efca6ec1
+ size 26024951
model/model/layers/0/self_attn/o_proj/kernel/0.3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33b2459a8eca2dd4e5268363cb99a814a73c57dd70e3e3b4d454bd129898c84e
+ size 26022756
model/model/layers/0/self_attn/q_proj/kernel/.zarray ADDED
@@ -0,0 +1 @@
+ {"chunks":[2048,8192],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192,8192],"zarr_format":2}
model/model/layers/0/self_attn/q_proj/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ffae29f6bf62071547907ea43aa083d1d731e160b67d54ac52ace99cc5b87286
+ size 27234169
model/model/layers/0/self_attn/q_proj/kernel/1.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78f98ce51f9182af2dc3d6df70f07a28e80a3daf8b5d842695a9b6d1113e71a1
+ size 27240474
model/model/layers/0/self_attn/q_proj/kernel/2.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:411451741b64cddedd08dffb88bf92e726efa8386c69c02ccb7ebcd4f4e57259
+ size 27228346
model/model/layers/0/self_attn/q_proj/kernel/3.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fcb6e1da5923d23c2df299bdfcc1aaab5ee9926a893ef681037c9be2c6b183f7
+ size 27257389