erfanzar committed
Commit 72f7934 · verified · 1 Parent(s): b94f8b7

Adding EasyDeL Checkpoints

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. .gitattributes +0 -0
  2. README.md +159 -0
  3. chat_template.jinja +109 -0
  4. checkpoint_metadata.json +6 -0
  5. config.json +215 -0
  6. generation_config.json +13 -0
  7. model/lm_head/kernel/.zarray +1 -0
  8. model/lm_head/kernel/0.0 +3 -0
  9. model/lm_head/kernel/1.0 +3 -0
  10. model/lm_head/kernel/2.0 +3 -0
  11. model/lm_head/kernel/3.0 +3 -0
  12. model/model/embed_tokens/embedding/.zarray +1 -0
  13. model/model/embed_tokens/embedding/0.0 +3 -0
  14. model/model/embed_tokens/embedding/1.0 +3 -0
  15. model/model/embed_tokens/embedding/2.0 +3 -0
  16. model/model/embed_tokens/embedding/3.0 +3 -0
  17. model/model/layers/0/input_layernorm/kernel/.zarray +1 -0
  18. model/model/layers/0/input_layernorm/kernel/0 +0 -0
  19. model/model/layers/0/mlp/down_proj/kernel/.zarray +1 -0
  20. model/model/layers/0/mlp/down_proj/kernel/0.0 +3 -0
  21. model/model/layers/0/mlp/down_proj/kernel/0.1 +3 -0
  22. model/model/layers/0/mlp/down_proj/kernel/0.2 +3 -0
  23. model/model/layers/0/mlp/down_proj/kernel/0.3 +3 -0
  24. model/model/layers/0/mlp/gate_proj/kernel/.zarray +1 -0
  25. model/model/layers/0/mlp/gate_proj/kernel/0.0 +3 -0
  26. model/model/layers/0/mlp/gate_proj/kernel/1.0 +3 -0
  27. model/model/layers/0/mlp/gate_proj/kernel/2.0 +3 -0
  28. model/model/layers/0/mlp/gate_proj/kernel/3.0 +3 -0
  29. model/model/layers/0/mlp/up_proj/kernel/.zarray +1 -0
  30. model/model/layers/0/mlp/up_proj/kernel/0.0 +3 -0
  31. model/model/layers/0/mlp/up_proj/kernel/1.0 +3 -0
  32. model/model/layers/0/mlp/up_proj/kernel/2.0 +3 -0
  33. model/model/layers/0/mlp/up_proj/kernel/3.0 +3 -0
  34. model/model/layers/0/post_attention_layernorm/kernel/.zarray +1 -0
  35. model/model/layers/0/post_attention_layernorm/kernel/0 +0 -0
  36. model/model/layers/0/self_attn/k_proj/kernel/.zarray +1 -0
  37. model/model/layers/0/self_attn/k_proj/kernel/0.0 +3 -0
  38. model/model/layers/0/self_attn/k_proj/kernel/1.0 +3 -0
  39. model/model/layers/0/self_attn/k_proj/kernel/2.0 +3 -0
  40. model/model/layers/0/self_attn/k_proj/kernel/3.0 +3 -0
  41. model/model/layers/0/self_attn/o_proj/kernel/.zarray +1 -0
  42. model/model/layers/0/self_attn/o_proj/kernel/0.0 +3 -0
  43. model/model/layers/0/self_attn/o_proj/kernel/0.1 +3 -0
  44. model/model/layers/0/self_attn/o_proj/kernel/0.2 +3 -0
  45. model/model/layers/0/self_attn/o_proj/kernel/0.3 +3 -0
  46. model/model/layers/0/self_attn/q_proj/kernel/.zarray +1 -0
  47. model/model/layers/0/self_attn/q_proj/kernel/0.0 +3 -0
  48. model/model/layers/0/self_attn/q_proj/kernel/1.0 +3 -0
  49. model/model/layers/0/self_attn/q_proj/kernel/2.0 +3 -0
  50. model/model/layers/0/self_attn/q_proj/kernel/3.0 +3 -0
.gitattributes CHANGED
The diff for this file is too large to render. See raw diff
 
README.md ADDED
@@ -0,0 +1,159 @@
---
library_name: easydel
pipeline_tag: text-generation
tags:
- easydel
- jax
- "llama"
- "CausalLM"
- "vanilla"
---

<p align="center">
  <img alt="easydel" src="https://raw.githubusercontent.com/erfanzar/easydel/main/images/easydel-logo-with-text.png">
</p>

<h1 align="center">meta-llama/Llama-3.3-70B-Instruct</h1>

<div align="center">
EasyDeL checkpoint converted from meta-llama/Llama-3.3-70B-Instruct.
</div>

## Overview

This checkpoint is intended to be loaded with EasyDeL on JAX (CPU/GPU/TPU). It supports sharded loading with `auto_shard_model=True` and configurable precision via `dtype`, `param_dtype`, and `precision`.

## Quickstart

```python
import easydel as ed
from jax import numpy as jnp, lax

repo_id = "EasyDeL/Llama-3.3-70B-Instruct"

dtype = jnp.bfloat16  # try jnp.float16 on many GPUs

model = ed.AutoEasyDeLModelForCausalLM.from_pretrained(
    repo_id,
    dtype=dtype,
    param_dtype=dtype,
    precision=lax.Precision("fastest"),
    sharding_axis_names=("dp", "fsdp", "ep", "tp", "sp"),
    sharding_axis_dims=(1, -1, 1, 1, 1),
    config_kwargs=ed.EasyDeLBaseConfigDict(
        attn_dtype=dtype,
        attn_mechanism=ed.AttentionMechanisms.VANILLA,
        fsdp_is_ep_bound=True,
        sp_is_ep_bound=True,
        moe_method=ed.MoEMethods.FUSED_MOE,
    ),
    auto_shard_model=True,
    partition_axis=ed.PartitionAxis(),
)
```

If the repository only provides PyTorch weights, pass `from_torch=True` to `from_pretrained(...)`.

## Sharding & Parallelism (Multi-Device)

EasyDeL can scale to multiple devices by creating a logical device mesh. Most EasyDeL loaders use a 5D mesh:

- `dp`: data parallel (replicated parameters, different batch shards)
- `fsdp`: parameter sharding (memory saver; often the biggest axis)
- `ep`: expert parallel (MoE; keep `1` for non-MoE models)
- `tp`: tensor parallel (splits large matmuls)
- `sp`: sequence parallel (splits the sequence dimension)

Use `sharding_axis_names=("dp", "fsdp", "ep", "tp", "sp")` and choose `sharding_axis_dims` so that their product equals your device count.
You can use `-1` in `sharding_axis_dims` to let EasyDeL infer the remaining dimension.

<details>
<summary>Example sharding configs</summary>

```python
# 8 devices, pure FSDP
sharding_axis_dims = (1, 8, 1, 1, 1)

# 8 devices, 2-way DP x 4-way FSDP
sharding_axis_dims = (2, 4, 1, 1, 1)

# 8 devices, 4-way FSDP x 2-way TP
sharding_axis_dims = (1, 4, 1, 2, 1)
```
</details>
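
The `-1` inference can be sketched in plain Python. This is a simplified illustration of the rule stated above (product of dims must equal the device count), not EasyDeL's actual resolver; in a real run the device count would come from `jax.device_count()`.

```python
import math

def resolve_axis_dims(axis_dims, device_count):
    """Resolve a single -1 entry so the product of dims equals device_count."""
    known = math.prod(d for d in axis_dims if d != -1)
    if -1 in axis_dims:
        if device_count % known != 0:
            raise ValueError(f"{device_count} devices not divisible by {known}")
        axis_dims = tuple(device_count // known if d == -1 else d for d in axis_dims)
    if math.prod(axis_dims) != device_count:
        raise ValueError(f"mesh {axis_dims} does not cover {device_count} devices")
    return axis_dims

# (1, -1, 1, 1, 1) on 8 devices resolves to pure FSDP
print(resolve_axis_dims((1, -1, 1, 1, 1), 8))  # (1, 8, 1, 1, 1)
print(resolve_axis_dims((1, 4, 1, 2, 1), 8))   # (1, 4, 1, 2, 1)
```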

## Using via `eLargeModel` (ELM)

`eLargeModel` is a higher-level interface that wires together loading, sharding, training, and eSurge inference from a single config.

```python
from easydel import eLargeModel

repo_id = "EasyDeL/Llama-3.3-70B-Instruct"

elm = eLargeModel.from_pretrained(repo_id)  # task is auto-detected
elm.set_dtype("bf16")
elm.set_sharding(axis_names=("dp", "fsdp", "ep", "tp", "sp"), axis_dims=(1, -1, 1, 1, 1))

model = elm.build_model()
# Optional: build an inference engine
# engine = elm.build_esurge()
```

<details>
<summary>ELM YAML config example</summary>

```yaml
model:
  name_or_path: "EasyDeL/Llama-3.3-70B-Instruct"

loader:
  dtype: bf16
  param_dtype: bf16

sharding:
  axis_dims: [1, -1, 1, 1, 1]
  auto_shard_model: true
```
</details>

## Features

**EasyDeL:**
- JAX-native implementation and sharded execution
- Configurable attention backends via `AttentionMechanisms.*`
- Precision control via `dtype`, `param_dtype`, and `precision`

## Installation

```bash
pip install easydel
```

## Links

- EasyDeL GitHub: https://github.com/erfanzar/EasyDeL
- Docs: https://easydel.readthedocs.io/en/latest/

## Supported Tasks

- CausalLM

## Limitations

- Refer to the original model card for training data, evaluation, and intended use.

## License

EasyDeL is released under the Apache-2.0 license. The license for this model's weights may differ; please consult the original repository.

## Citation

```bibtex
@misc{ZareChavoshi_2023,
  title={EasyDeL: An open-source library for enhancing and streamlining the training process of machine learning models},
  url={https://github.com/erfanzar/EasyDeL},
  author={Zare Chavoshi, Erfan},
  year={2023}
}
```
chat_template.jinja ADDED
@@ -0,0 +1,109 @@
{{- bos_token }}
{%- if custom_tools is defined %}
{%- set tools = custom_tools %}
{%- endif %}
{%- if not tools_in_user_message is defined %}
{%- set tools_in_user_message = true %}
{%- endif %}
{%- if not date_string is defined %}
{%- set date_string = "26 Jul 2024" %}
{%- endif %}
{%- if not tools is defined %}
{%- set tools = none %}
{%- endif %}

{#- This block extracts the system message, so we can slot it into the right place. #}
{%- if messages[0]['role'] == 'system' %}
{%- set system_message = messages[0]['content']|trim %}
{%- set messages = messages[1:] %}
{%- else %}
{%- set system_message = "" %}
{%- endif %}

{#- System message + builtin tools #}
{{- "<|start_header_id|>system<|end_header_id|>\n\n" }}
{%- if builtin_tools is defined or tools is not none %}
{{- "Environment: ipython\n" }}
{%- endif %}
{%- if builtin_tools is defined %}
{{- "Tools: " + builtin_tools | reject('equalto', 'code_interpreter') | join(", ") + "\n\n"}}
{%- endif %}
{{- "Cutting Knowledge Date: December 2023\n" }}
{{- "Today Date: " + date_string + "\n\n" }}
{%- if tools is not none and not tools_in_user_message %}
{{- "You have access to the following functions. To call a function, please respond with JSON for a function call." }}
{{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
{{- "Do not use variables.\n\n" }}
{%- for t in tools %}
{{- t | tojson(indent=4) }}
{{- "\n\n" }}
{%- endfor %}
{%- endif %}
{{- system_message }}
{{- "<|eot_id|>" }}

{#- Custom tools are passed in a user message with some extra guidance #}
{%- if tools_in_user_message and not tools is none %}
{#- Extract the first user message so we can plug it in here #}
{%- if messages | length != 0 %}
{%- set first_user_message = messages[0]['content']|trim %}
{%- set messages = messages[1:] %}
{%- else %}
{{- raise_exception("Cannot put tools in the first user message when there's no first user message!") }}
{%- endif %}
{{- '<|start_header_id|>user<|end_header_id|>\n\n' -}}
{{- "Given the following functions, please respond with a JSON for a function call " }}
{{- "with its proper arguments that best answers the given prompt.\n\n" }}
{{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
{{- "Do not use variables.\n\n" }}
{%- for t in tools %}
{{- t | tojson(indent=4) }}
{{- "\n\n" }}
{%- endfor %}
{{- first_user_message + "<|eot_id|>"}}
{%- endif %}

{%- for message in messages %}
{%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}
{{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' }}
{%- elif 'tool_calls' in message %}
{%- if not message.tool_calls|length == 1 %}
{{- raise_exception("This model only supports single tool-calls at once!") }}
{%- endif %}
{%- set tool_call = message.tool_calls[0].function %}
{%- if builtin_tools is defined and tool_call.name in builtin_tools %}
{{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
{{- "<|python_tag|>" + tool_call.name + ".call(" }}
{%- for arg_name, arg_val in tool_call.arguments | items %}
{{- arg_name + '="' + arg_val + '"' }}
{%- if not loop.last %}
{{- ", " }}
{%- endif %}
{%- endfor %}
{{- ")" }}
{%- else %}
{{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
{{- '{"name": "' + tool_call.name + '", ' }}
{{- '"parameters": ' }}
{{- tool_call.arguments | tojson }}
{{- "}" }}
{%- endif %}
{%- if builtin_tools is defined %}
{#- This means we're in ipython mode #}
{{- "<|eom_id|>" }}
{%- else %}
{{- "<|eot_id|>" }}
{%- endif %}
{%- elif message.role == "tool" or message.role == "ipython" %}
{{- "<|start_header_id|>ipython<|end_header_id|>\n\n" }}
{%- if message.content is mapping or message.content is iterable %}
{{- message.content | tojson }}
{%- else %}
{{- message.content }}
{%- endif %}
{{- "<|eot_id|>" }}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}
{%- endif %}
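
The template above reads tool calls from `message.tool_calls[0].function` and expects tool results under the `tool` (or `ipython`) role. A minimal sketch of the message shapes it accepts; the field names come from the template itself, while the tool name and arguments are hypothetical examples:

```python
# Message shapes the chat template expects (illustrative only):
# role/content for plain turns, tool_calls[0].function for tool calls.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the weather in Paris?"},
    {
        "role": "assistant",
        "tool_calls": [
            # "get_weather" is a hypothetical function, not part of this repo.
            {"function": {"name": "get_weather", "arguments": {"city": "Paris"}}}
        ],
    },
    # Tool output is fed back under the "tool" (or "ipython") role.
    {"role": "tool", "content": {"temperature_c": 12}},
]

# The template raises if an assistant turn carries more than one tool call.
assert all(len(m["tool_calls"]) == 1 for m in messages if "tool_calls" in m)
```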
checkpoint_metadata.json ADDED
@@ -0,0 +1,6 @@
{
  "timestamp": "2025-12-28T23:06:13.860306",
  "custom_metadata": {
    "step": 0
  }
}
config.json ADDED
@@ -0,0 +1,215 @@
{
  "_external_rope_config_kwargs": {},
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "attn_mechanism": "vanilla",
  "backend": null,
  "bits": null,
  "blocksize_b": 1,
  "blocksize_k": 128,
  "blocksize_q": 128,
  "bos_token_id": 128000,
  "decode_attn_mechanism": null,
  "dtype": "bfloat16",
  "easy_method": "train",
  "embd_pdrop": 0.0,
  "eos_token_id": [
    128001,
    128008,
    128009
  ],
  "fcm_max_ratio": -1,
  "fcm_min_ratio": -1,
  "flash_attention_backward_pass_impl": "triton",
  "fsdp_is_ep_bound": true,
  "gradient_checkpointing": "",
  "gradient_checkpointing_targets": null,
  "hardware_abstraction": false,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 8192,
  "initializer_range": 0.02,
  "intermediate_size": 28672,
  "kv_cache_quantization_config": null,
  "kv_cache_sharding_sequence_axis_name": "sp",
  "layer_types": [
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention"
  ],
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "moe_force_xla_gmm": false,
  "moe_method": "fused_moe",
  "moe_tiling_size_batch": 4,
  "moe_tiling_size_dim": 128,
  "moe_tiling_size_seqlen": 128,
  "num_attention_heads": 64,
  "num_hidden_layers": 80,
  "num_key_value_heads": 8,
  "number_rep_kv": 1,
  "operation_configs": null,
  "pallas_k_block_size": 128,
  "pallas_m_block_size": 128,
  "pallas_n_block_size": 128,
  "partition_axis": {
    "attention_dim_axis": null,
    "attention_kv_dim_axis": null,
    "batch_axis": [
      "fsdp",
      "dp"
    ],
    "bias_head_sequence_axis": null,
    "bias_key_sequence_axis": null,
    "data_parallel_axis": "dp",
    "decode_attention_dim_axis": null,
    "decode_attention_kv_dim_axis": null,
    "decode_batch_axis": [
      "fsdp",
      "dp"
    ],
    "decode_head_axis": "tp",
    "decode_key_sequence_axis": "sp",
    "decode_kv_head_axis": "tp",
    "decode_query_sequence_axis": null,
    "expert_axis": "ep",
    "expert_gate_axis": null,
    "expert_parallel_axis": "ep",
    "fully_sharded_data_parallel_axis": "fsdp",
    "head_axis": "tp",
    "hidden_state_axis": "tp",
    "key_sequence_axis": "sp",
    "kv_head_axis": "tp",
    "mlp_intermediate_axis": "tp",
    "query_sequence_axis": "sp",
    "sequence_axis": "sp",
    "sequence_parallel_axis": "sp",
    "tensor_parallel_axis": "tp",
    "vocab_axis": "tp"
  },
  "platform": null,
  "precompute_masks": true,
  "pretraining_tp": 1,
  "quantization_config": null,
  "resid_pdrop": 0.0,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "scan_attention_layers": false,
  "scan_layers": false,
  "scan_mlp_chunk_size": 1024,
  "scan_ring_attention": true,
  "sequence_axis_name": "sp",
  "sharding_axis_dims": [
    1,
    -1,
    1,
    1,
    1
  ],
  "sharding_axis_names": [
    "dp",
    "fsdp",
    "ep",
    "tp",
    "sp"
  ],
  "sharding_dcn_axis_dims": null,
  "sp_is_ep_bound": true,
  "tie_word_embeddings": false,
  "transformers_version": "4.57.3",
  "use_cache": true,
  "use_expert_tensor_mode": false,
  "use_ring_of_experts": false,
  "use_scan_mlp": false,
  "use_sharded_kv_caching": false,
  "use_sharding_constraint": false,
  "vocab_size": 128256
}
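
The attention geometry in the config above (64 query heads, 8 KV heads, `head_dim` 128, `hidden_size` 8192) determines the projection shapes stored under `model/model/layers/*/self_attn/` later in this diff. A quick pure-Python sanity check of those shapes:

```python
# Derive attention projection shapes from the config values above.
hidden_size = 8192
num_attention_heads = 64   # query heads
num_key_value_heads = 8    # GQA: each KV head is shared by 8 query heads
head_dim = 128

q_proj = (hidden_size, num_attention_heads * head_dim)   # (8192, 8192)
kv_proj = (hidden_size, num_key_value_heads * head_dim)  # (8192, 1024)
o_proj = (num_attention_heads * head_dim, hidden_size)   # (8192, 8192)

print(q_proj, kv_proj, o_proj)
```

These match the `shape` fields in the `q_proj` (`[8192,8192]`) and `k_proj` (`[8192,1024]`) `.zarray` entries further down in this diff.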
generation_config.json ADDED
@@ -0,0 +1,13 @@
{
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": [
    128001,
    128008,
    128009
  ],
  "temperature": 0.6,
  "top_p": 0.9,
  "transformers_version": "4.57.3",
  "trust_remote_code": false
}
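
The defaults above enable sampling with temperature 0.6 and nucleus (top-p) filtering at 0.9. A minimal sketch of what top-p filtering does, on a toy distribution (illustrative only, not the transformers implementation):

```python
def top_p_filter(probs, top_p=0.9):
    """Keep the smallest set of tokens whose cumulative probability reaches
    top_p, then renormalize. probs: dict mapping token -> probability."""
    total = 0.0
    kept = {}
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = p
        total += p
        if total >= top_p:
            break
    norm = sum(kept.values())
    return {tok: p / norm for tok, p in kept.items()}

probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
print(top_p_filter(probs, 0.9))  # keeps "a", "b", "c"; "d" is filtered out
```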
model/lm_head/kernel/.zarray ADDED
@@ -0,0 +1 @@
{"chunks":[2048,128256],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192,128256],"zarr_format":2}
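
Each `.zarray` file in this checkpoint is Zarr v2 array metadata, and the neighboring numbered files (`0.0` … `3.0`) are the chunks of that array. The chunk grid follows from `shape` and `chunks`; a small sketch using only the Python stdlib (not a zarr-library call), on the `lm_head` metadata above:

```python
import json
import math

# The lm_head .zarray metadata shown above (abridged to the fields used here).
zarray = json.loads(
    '{"chunks":[2048,128256],"shape":[8192,128256],'
    '"dtype":"bfloat16","zarr_format":2}'
)

# Number of chunks along each dimension, and the resulting chunk file names
# (dimension_separator "." joins the per-dimension chunk indices).
grid = [math.ceil(s / c) for s, c in zip(zarray["shape"], zarray["chunks"])]
names = [f"{i}.{j}" for i in range(grid[0]) for j in range(grid[1])]
print(grid, names)  # [4, 1] ['0.0', '1.0', '2.0', '3.0']
```

This matches the four chunk files `0.0`–`3.0` listed for `model/lm_head/kernel` below.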
model/lm_head/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e023f98490ad0c18d8dcc3799bf1ef5030e45b0cde092e637a9dae001c8352de
size 409061244
model/lm_head/kernel/1.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c5acfdf8413f6724abc155586ffc7c5069fb8bc4d4a75ee66bddc8283bc8371e
size 409068416
model/lm_head/kernel/2.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a9dded260b2b571006e6e93127f77f183b04a48de490ee274bdf22ccb23cc82d
size 409030322
model/lm_head/kernel/3.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9b708c5e12e425c1b37f12d587354c836c0f62178c285550a1b0272777fb78e7
size 409114571
model/model/embed_tokens/embedding/.zarray ADDED
@@ -0,0 +1 @@
{"chunks":[32064,8192],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[128256,8192],"zarr_format":2}
model/model/embed_tokens/embedding/0.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:080bb9f2970310ce2a212b63cf186cce35e043ff6d108bb652afcca071ea471f
size 410847426
model/model/embed_tokens/embedding/1.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b59fa495224bfa92dd3b50eba4c41ba5245b6bdea3b75fc57cc6255e8719ba98
size 410227838
model/model/embed_tokens/embedding/2.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8069b75450a21dbc80564c2c3f09d19df8ca69e8cfc8224f79dddac9774815ce
size 410412179
model/model/embed_tokens/embedding/3.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6d5e5893d629660bcc3957c964e12fd0a7bdaf688bff01fbb92b452fa56d545b
size 410781418
model/model/layers/0/input_layernorm/kernel/.zarray ADDED
@@ -0,0 +1 @@
{"chunks":[8192],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192],"zarr_format":2}
model/model/layers/0/input_layernorm/kernel/0 ADDED
Binary file (13.2 kB)
 
model/model/layers/0/mlp/down_proj/kernel/.zarray ADDED
@@ -0,0 +1 @@
{"chunks":[28672,2048],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[28672,8192],"zarr_format":2}
model/model/layers/0/mlp/down_proj/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b87f7ca613b8b5d10498be5b7c9a458c2ea5c50c1233b8e2f85bba5f665a7857
size 92157948
model/model/layers/0/mlp/down_proj/kernel/0.1 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a75bf55e0e51686488f8c6599a8d8e49c3a177bb53ec03bed743925c1594108
size 92166225
model/model/layers/0/mlp/down_proj/kernel/0.2 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d3777332e5ace8f8594bf35b44639e3da053b1f579419104533354c36049f420
size 92147315
model/model/layers/0/mlp/down_proj/kernel/0.3 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c120b55bfa6a4b973373ed2f5c8a7952c9140e57f6f82c60d963b2d6cea7516e
size 92154734
model/model/layers/0/mlp/gate_proj/kernel/.zarray ADDED
@@ -0,0 +1 @@
{"chunks":[2048,28672],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192,28672],"zarr_format":2}
model/model/layers/0/mlp/gate_proj/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a62d10c42631f75b3997a48f3c71665afc7c4b50fe2788335af4ed593fc93c4
size 92342325
model/model/layers/0/mlp/gate_proj/kernel/1.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1e02fbcdad22b53a4c4c2dc1dbdf88de93849912392628819c70a3e4e77318dd
size 92335334
model/model/layers/0/mlp/gate_proj/kernel/2.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c3c14aef21060c164931dcb94e81660739e910d3835095088c1b6278b5283f5
size 92324009
model/model/layers/0/mlp/gate_proj/kernel/3.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:024834f5991235639117b36f05651bf3c1a0c6fc8370228d58c3826fe6a4797a
size 92318876
model/model/layers/0/mlp/up_proj/kernel/.zarray ADDED
@@ -0,0 +1 @@
{"chunks":[2048,28672],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192,28672],"zarr_format":2}
model/model/layers/0/mlp/up_proj/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b1a7f150a8f7c056c150f8981756630d600ae3c591b82d57042f15da33479dd0
size 92327317
model/model/layers/0/mlp/up_proj/kernel/1.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:16728bcb4fcc53a08974f26885ab050aa9af5f1d330f68780ecd8a315ca31eb8
size 92320030
model/model/layers/0/mlp/up_proj/kernel/2.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b141d2eb4ed6d339bff6aaafaae23ac6755931ca8192476984627c2bca5f0912
size 92308700
model/model/layers/0/mlp/up_proj/kernel/3.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e9050589ad66ead6e9f2bb3c85663268179f08d4c47843546c11aa926f396d61
size 92304207
model/model/layers/0/post_attention_layernorm/kernel/.zarray ADDED
@@ -0,0 +1 @@
{"chunks":[8192],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192],"zarr_format":2}
model/model/layers/0/post_attention_layernorm/kernel/0 ADDED
Binary file (10.7 kB)
 
model/model/layers/0/self_attn/k_proj/kernel/.zarray ADDED
@@ -0,0 +1 @@
{"chunks":[2048,1024],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192,1024],"zarr_format":2}
model/model/layers/0/self_attn/k_proj/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:65ce52ba4ab16c6c3faaed0bad30d53d4de688174c0955ce59003bb4ce7d88c9
size 3435822
model/model/layers/0/self_attn/k_proj/kernel/1.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:43e8b5002b7bf8fbb0c9192cfe7afae9792870e44196f649d724c283723e0fab
size 3438004
model/model/layers/0/self_attn/k_proj/kernel/2.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:259d4a4bf9591c9a78e388f3e5169b11f5da3de6de2f1b861f2e500291dd3bb9
size 3435360
model/model/layers/0/self_attn/k_proj/kernel/3.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:99072bfa624a15f794ca179f010a4db0e77059ebb19412f2fe60e77439f0704b
size 3440675
model/model/layers/0/self_attn/o_proj/kernel/.zarray ADDED
@@ -0,0 +1 @@
{"chunks":[8192,2048],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192,8192],"zarr_format":2}
model/model/layers/0/self_attn/o_proj/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b93b26d39cce6f94173ad00f9ce4dec3599ac0d6b70af54e199d113095c0901e
size 26206461
model/model/layers/0/self_attn/o_proj/kernel/0.1 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e35cd0562c3f2e6e933714fda7edc8b2b3d75aa23fec60a9878ef39c24c39df8
size 26209401
model/model/layers/0/self_attn/o_proj/kernel/0.2 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa15a48e1441579ca3baac0c6da232afc6bf999caf4f80fd785e0f16182d8af9
size 26208277
model/model/layers/0/self_attn/o_proj/kernel/0.3 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c27c9c1c17de6607485be5a034accea336a9ab30b0134bd50e91843880d69daa
size 26205970
model/model/layers/0/self_attn/q_proj/kernel/.zarray ADDED
@@ -0,0 +1 @@
{"chunks":[2048,8192],"compressor":{"id":"zstd","level":1},"dimension_separator":".","dtype":"bfloat16","fill_value":null,"filters":null,"order":"C","shape":[8192,8192],"zarr_format":2}
model/model/layers/0/self_attn/q_proj/kernel/0.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4b7bea62181f2b584aab3bec70bdb95141f50476f8ee53eb037c19d63fdeff5a
size 27411754
model/model/layers/0/self_attn/q_proj/kernel/1.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4de20190a9608167b07a157bf2ac96b80f89d65ef48fcc297e8dec6510bb451b
size 27421372
model/model/layers/0/self_attn/q_proj/kernel/2.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bec1193a97ff457719cf58636349497799b292ef3074dde56fd3b4c89a408f3c
size 27407551
model/model/layers/0/self_attn/q_proj/kernel/3.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:28f62585273bd3a983361d73c653b41d9b84995c82c994dd65978625ec487c4c
size 27434725