williamchangtw and thehekimoghlu committed
Commit 623b22e (0 parents)

Duplicate from thenexthub/Everos

Co-authored-by: Tunjay Akbarli <thehekimoghlu@users.noreply.huggingface.co>

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the full file list.
Files changed (50)
  1. .gitattributes +37 -0
  2. LICENSE +27 -0
  3. README.md +41 -0
  4. THIRD_PARTY_NOTICES.md +47 -0
  5. chat_template.jinja +52 -0
  6. config.json +39 -0
  7. configuration_deepseek.py +212 -0
  8. configuration_openmodel.py +84 -0
  9. configuration_v2.py +210 -0
  10. docs/deploy_guidance.md +196 -0
  11. docs/tool_call_guidance.md +258 -0
  12. figures/Base-Evaluation.png +3 -0
  13. figures/banner.png +3 -0
  14. figures/kimi-logo.png +0 -0
  15. generation_config.json +12 -0
  16. mergekit_config.yml +12 -0
  17. model-00000-of-00160.safetensors +3 -0
  18. model-00001-of-000062.safetensors +3 -0
  19. model-00001-of-00155.safetensors +3 -0
  20. model-00001-of-00160.safetensors +3 -0
  21. model-00001-of-00481.safetensors +3 -0
  22. model-00002-of-000062.safetensors +3 -0
  23. model-00002-of-00155.safetensors +3 -0
  24. model-00002-of-00160.safetensors +3 -0
  25. model-00002-of-00481.safetensors +3 -0
  26. model-00003-of-000062.safetensors +3 -0
  27. model-00003-of-00155.safetensors +3 -0
  28. model-00003-of-00160.safetensors +3 -0
  29. model-00003-of-00481.safetensors +3 -0
  30. model-00004-of-000062.safetensors +3 -0
  31. model-00004-of-00155.safetensors +3 -0
  32. model-00004-of-00160.safetensors +3 -0
  33. model-00004-of-00481.safetensors +3 -0
  34. model-00005-of-000062.safetensors +3 -0
  35. model-00005-of-00155.safetensors +3 -0
  36. model-00005-of-00160.safetensors +3 -0
  37. model-00005-of-00481.safetensors +3 -0
  38. model-00006-of-000062.safetensors +3 -0
  39. model-00006-of-00155.safetensors +3 -0
  40. model-00006-of-00160.safetensors +3 -0
  41. model-00006-of-00481.safetensors +3 -0
  42. model-00007-of-000062.safetensors +3 -0
  43. model-00007-of-00155.safetensors +3 -0
  44. model-00007-of-00160.safetensors +3 -0
  45. model-00007-of-00481.safetensors +3 -0
  46. model-00008-of-000062.safetensors +3 -0
  47. model-00008-of-00155.safetensors +3 -0
  48. model-00008-of-00160.safetensors +3 -0
  49. model-00008-of-00481.safetensors +3 -0
  50. model-00009-of-000062.safetensors +3 -0
.gitattributes ADDED
@@ -0,0 +1,37 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ figures/Base-Evaluation.png filter=lfs diff=lfs merge=lfs -text
+ figures/banner.png filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
@@ -0,0 +1,27 @@
+ Modified MIT License
+
+ Copyright (c) 2025 Moonshot AI
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the “Software”), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+ Our only modification part is that, if the Software (or any derivative works
+ thereof) is used for any of your commercial products or services that have
+ more than 100 million monthly active users, or more than 20 million US dollars
+ (or equivalent in other currencies) in monthly revenue, you shall prominently
+ display "Kimi K2" on the user interface of such product or service.
README.md ADDED
@@ -0,0 +1,41 @@
+ ---
+ license: apache-2.0
+ language:
+ - multilingual
+ - ar
+ - az
+ - zh
+ - cs
+ - da
+ - nl
+ - en
+ - fi
+ - fr
+ - de
+ - he
+ - hu
+ - it
+ - ja
+ - ko
+ - 'no'
+ - pl
+ - pt
+ - ru
+ - es
+ - sv
+ - th
+ - tr
+ - uk
+ tags:
+ - nlp
+ - code
+ - audio
+ - automatic-speech-recognition
+ - speech-summarization
+ - speech-translation
+ - visual-question-answering
+ - multi-modal
+ datasets:
+ - thenexthub/OpenData-1T
+ pipeline_tag: any-to-any
+ ---
THIRD_PARTY_NOTICES.md ADDED
@@ -0,0 +1,47 @@
+ # THIRD_PARTY_NOTICES
+
+ This file lists third-party software contained in Kimi-K2 along with their licenses, in compliance with the redistribution clauses of those licenses.
+
+ ---
+
+ ## 1. DeepSeek-V3
+
+ Our model architecture is DeepSeek-V3-like. Some of the modeling code is copied from the source repository.
+
+ - **Source Repository**
+   https://huggingface.co/deepseek-ai/DeepSeek-V3
+
+ - **Files / Directories Used**
+   - configuration_deepseek.py
+   - modeling_deepseek.py
+
+ - **License Type**
+   MIT License
+
+ - **Copyright Notice**
+   Copyright (c) 2023 DeepSeek
+
+ - **Full License Text**
+   ```
+   MIT License
+
+   Copyright (c) 2023 DeepSeek
+
+   Permission is hereby granted, free of charge, to any person obtaining a copy
+   of this software and associated documentation files (the "Software"), to deal
+   in the Software without restriction, including without limitation the rights
+   to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+   copies of the Software, and to permit persons to whom the Software is
+   furnished to do so, subject to the following conditions:
+
+   The above copyright notice and this permission notice shall be included in all
+   copies or substantial portions of the Software.
+
+   THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+   IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+   FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+   AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+   LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+   OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+   SOFTWARE.
+   ```
chat_template.jinja ADDED
@@ -0,0 +1,52 @@
+ {% macro render_content(msg) -%}
+ {%- set c = msg.get('content') -%}
+ {%- if c is string -%}
+ {{ c }}
+ {%- elif c is not none -%}
+ {% for content in c -%}
+ {% if content['type'] == 'image' or 'image' in content or 'image_url' in content -%}
+ <|media_start|>image<|media_content|><|media_pad|><|media_end|>
+ {% else -%}
+ {{ content['text'] }}
+ {%- endif -%}
+ {%- endfor -%}
+ {%- endif -%}
+ {%- endmacro %}
+
+
+ {%- if tools -%}
+ <|im_system|>tool_declare<|im_middle|>{{ tools | tojson(separators=(',', ':')) }}<|im_end|>
+ {%- endif -%}
+ {% for message in messages %}
+ {%- if loop.first and messages[0]['role'] != 'system' -%}
+ <|im_system|>system<|im_middle|>You are Kimi, an AI assistant created by Moonshot AI.<|im_end|>
+ {% endif %}
+
+ {%- set role_name = message.get('name') or message['role'] -%}
+ {%- if message['role'] == 'user' -%}
+ <|im_user|>{{role_name}}<|im_middle|>
+ {%- elif message['role'] == 'assistant' -%}
+ <|im_assistant|>{{role_name}}<|im_middle|>
+ {%- else -%}
+ <|im_system|>{{role_name}}<|im_middle|>
+ {%- endif -%}
+
+ {%- if message['role'] == 'assistant' and message.get('tool_calls') -%}
+ {{render_content(message)}}<|tool_calls_section_begin|>
+ {%- for tool_call in message['tool_calls'] -%}
+ {%- set formatted_id = tool_call['id'] -%}
+ <|tool_call_begin|>{{ formatted_id }}<|tool_call_argument_begin|>{% if tool_call['function']['arguments'] is string %}{{ tool_call['function']['arguments'] }}{% else %}{{ tool_call['function']['arguments'] | tojson }}{% endif %}<|tool_call_end|>
+ {%- endfor -%}
+ <|tool_calls_section_end|>
+ {%- elif message['role'] == 'tool' -%}
+ {%- set tool_call_id = message.tool_call_id -%}
+ ## Return of {{ tool_call_id }}
+ {{render_content(message)}}
+ {%- elif message['content'] is not none -%}
+ {{render_content(message)}}
+ {%- endif -%}
+ <|im_end|>
+ {%- endfor -%}
+ {%- if add_generation_prompt -%}
+ <|im_assistant|>assistant<|im_middle|>
+ {%- endif -%}
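
To see the template in action, a minimal sketch along the following lines should render a prompt, assuming the repository ships a tokenizer wired to this chat template (the `./model_dir` path and the message contents are illustrative, not part of the repo):

```python
from transformers import AutoTokenizer

# Hypothetical local checkout of this repository; adjust the path as needed.
tokenizer = AutoTokenizer.from_pretrained("./model_dir", trust_remote_code=True)

messages = [
    {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
    {"role": "user", "content": "What's the weather like in Beijing today?"},
]

# Renders the conversation into the <|im_user|>/<|im_middle|>/<|im_end|> format
# defined above, ending with the assistant generation prompt.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```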
config.json ADDED
@@ -0,0 +1,39 @@
+ {
+   "_name_or_path": "mlabonne/BigLlama-3.1-681B-Instruct",
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 128000,
+   "eos_token_id": [
+     128001,
+     128008,
+     128009
+   ],
+   "hidden_act": "silu",
+   "hidden_size": 16384,
+   "initializer_range": 0.02,
+   "intermediate_size": 53248,
+   "max_position_embeddings": 131072,
+   "mlp_bias": false,
+   "model_type": "llama",
+   "num_attention_heads": 128,
+   "num_hidden_layers": 315,
+   "num_key_value_heads": 16,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": {
+     "factor": 8.0,
+     "high_freq_factor": 4.0,
+     "low_freq_factor": 1.0,
+     "original_max_position_embeddings": 8192,
+     "rope_type": "llama3"
+   },
+   "rope_theta": 500000.0,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.44.0",
+   "use_cache": true,
+   "vocab_size": 128256
+ }
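
As a quick sanity check, the file can be loaded with `AutoConfig` to confirm how transformers interprets these fields. A sketch, assuming a local checkout at the illustrative path `./model_dir`:

```python
from transformers import AutoConfig

# Load the config shipped with the repository (path is illustrative).
config = AutoConfig.from_pretrained("./model_dir")

# These attributes mirror the JSON keys above.
print(config.model_type)         # "llama"
print(config.num_hidden_layers)  # 315
print(config.rope_scaling)       # {"rope_type": "llama3", "factor": 8.0, ...}
```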
configuration_deepseek.py ADDED
@@ -0,0 +1,212 @@
+ # Copy from https://huggingface.co/deepseek-ai/DeepSeek-V3/blob/main/configuration_deepseek.py
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+ DEEPSEEK_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
+ class DeepseekV3Config(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`DeepseekV3Model`]. It is used to instantiate a DeepSeek
+     model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
+     defaults will yield a similar configuration to that of DeepSeek-V3.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 129280):
+             Vocabulary size of the Deep model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`DeepseekV3Model`]
+         hidden_size (`int`, *optional*, defaults to 4096):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 11008):
+             Dimension of the MLP representations.
+         moe_intermediate_size (`int`, *optional*, defaults to 1407):
+             Dimension of the MoE representations.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the Transformer decoder.
+         num_nextn_predict_layers (`int`, *optional*, defaults to 1):
+             Number of next-n predict layers in the DeepSeekV3 Model.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer decoder.
+         n_shared_experts (`int`, *optional*, defaults to None):
+             Number of shared experts; None means dense model.
+         n_routed_experts (`int`, *optional*, defaults to None):
+             Number of routed experts; None means dense model.
+         routed_scaling_factor (`float`, *optional*, defaults to 1.0):
+             Scaling factor for routed experts.
+         topk_method (`str`, *optional*, defaults to `greedy`):
+             Top-k method used in the routed gate.
+         n_group (`int`, *optional*, defaults to None):
+             Number of groups for routed experts.
+         topk_group (`int`, *optional*, defaults to None):
+             Number of selected groups for each token (for each token, ensuring the selected experts are only within `topk_group` groups).
+         num_experts_per_tok (`int`, *optional*, defaults to None):
+             Number of selected experts; None means dense model.
+         moe_layer_freq (`int`, *optional*, defaults to 1):
+             The frequency of the MoE layer: one expert layer for every `moe_layer_freq - 1` dense layers.
+         first_k_dense_replace (`int`, *optional*, defaults to 0):
+             Number of dense layers in shallow layers (embed->dense->dense->...->dense->moe->moe...->lm_head).
+                                                             \--k dense layers--/
+         norm_topk_prob (`bool`, *optional*, defaults to False):
+             Whether to normalize the weights of the routed experts.
+         scoring_func (`str`, *optional*, defaults to 'softmax'):
+             Method of computing expert weights.
+         aux_loss_alpha (`float`, *optional*, defaults to 0.001):
+             Auxiliary loss weight coefficient.
+         seq_aux (`bool`, *optional*, defaults to True):
+             Whether to compute the auxiliary loss for each individual sample.
+         num_key_value_heads (`int`, *optional*):
+             This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
+             `num_key_value_heads=1` the model will use Multi Query Attention (MQA); otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
+             by meanpooling all the original heads within that group. For more details check out [this
+             paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
+             `num_attention_heads`.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 2048):
+             The maximum sequence length that this model might ever be used with.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-06):
+             The epsilon used by the rms normalization layers.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         pad_token_id (`int`, *optional*):
+             Padding token id.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             Beginning of stream token id.
+         eos_token_id (`int`, *optional*, defaults to 2):
+             End of stream token id.
+         pretraining_tp (`int`, *optional*, defaults to 1):
+             Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
+             document](https://huggingface.co/docs/transformers/parallelism) to understand more about it. This value is
+             necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
+             issue](https://github.com/pytorch/pytorch/issues/76232).
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether to tie weight embeddings
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         rope_scaling (`Dict`, *optional*):
+             Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
+             strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
+             `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
+             `max_position_embeddings` to the expected new maximum.
+         attention_bias (`bool`, *optional*, defaults to `False`):
+             Whether to use a bias in the query, key, value and output projection layers during self-attention.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the attention probabilities.
+
+     ```python
+     >>> from transformers import DeepseekV3Model, DeepseekV3Config
+
+     >>> # Initializing a Deepseek-V3 style configuration
+     >>> configuration = DeepseekV3Config()
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "deepseek_v3"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=129280,
+         hidden_size=7168,
+         intermediate_size=18432,
+         moe_intermediate_size = 2048,
+         num_hidden_layers=61,
+         num_nextn_predict_layers=1,
+         num_attention_heads=128,
+         num_key_value_heads=128,
+         n_shared_experts = 1,
+         n_routed_experts = 256,
+         ep_size = 1,
+         routed_scaling_factor = 2.5,
+         kv_lora_rank = 512,
+         q_lora_rank = 1536,
+         qk_rope_head_dim = 64,
+         v_head_dim = 128,
+         qk_nope_head_dim = 128,
+         topk_method = 'noaux_tc',
+         n_group = 8,
+         topk_group = 4,
+         num_experts_per_tok = 8,
+         moe_layer_freq = 1,
+         first_k_dense_replace = 3,
+         norm_topk_prob = True,
+         scoring_func = 'sigmoid',
+         aux_loss_alpha = 0.001,
+         seq_aux = True,
+         hidden_act="silu",
+         max_position_embeddings=4096,
+         initializer_range=0.02,
+         rms_norm_eps=1e-6,
+         use_cache=True,
+         pad_token_id=None,
+         bos_token_id=0,
+         eos_token_id=1,
+         pretraining_tp=1,
+         tie_word_embeddings=False,
+         rope_theta=10000.0,
+         rope_scaling=None,
+         attention_bias=False,
+         attention_dropout=0.0,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.max_position_embeddings = max_position_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.moe_intermediate_size = moe_intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_nextn_predict_layers = num_nextn_predict_layers
+         self.num_attention_heads = num_attention_heads
+         self.n_shared_experts = n_shared_experts
+         self.n_routed_experts = n_routed_experts
+         self.ep_size = ep_size
+         self.routed_scaling_factor = routed_scaling_factor
+         self.kv_lora_rank = kv_lora_rank
+         self.q_lora_rank = q_lora_rank
+         self.qk_rope_head_dim = qk_rope_head_dim
+         self.v_head_dim = v_head_dim
+         self.qk_nope_head_dim = qk_nope_head_dim
+         self.topk_method = topk_method
+         self.n_group = n_group
+         self.topk_group = topk_group
+         self.num_experts_per_tok = num_experts_per_tok
+         self.moe_layer_freq = moe_layer_freq
+         self.first_k_dense_replace = first_k_dense_replace
+         self.norm_topk_prob = norm_topk_prob
+         self.scoring_func = scoring_func
+         self.aux_loss_alpha = aux_loss_alpha
+         self.seq_aux = seq_aux
+         # for backward compatibility
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.pretraining_tp = pretraining_tp
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self.attention_bias = attention_bias
+         self.attention_dropout = attention_dropout
+
+         super().__init__(
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
configuration_openmodel.py ADDED
@@ -0,0 +1,84 @@
+ """OpenModel configuration"""
+
+ from transformers.configuration_utils import PretrainedConfig
+
+
+ class ModelConfig(PretrainedConfig):
+
+     def __init__(
+         self,
+         vocab_size=157184,
+         hidden_size=2048,
+         intermediate_size=5120,
+         num_hidden_layers=20,
+         num_attention_heads=16,
+         num_key_value_heads=4,
+         hidden_act="silu",
+         use_qkv_bias=False,  # openmodel only
+         use_bias=False,  # openmodel only
+         rms_norm_eps=1e-06,
+         tie_word_embeddings=False,  # PretrainedConfig key; the default value is changed here.
+         embedding_dropout=0.0,
+         attention_dropout=0.0,
+         output_dropout=0.0,
+         initializer_range=0.02,
+         max_position_embeddings=32768,
+         rope_theta=600000.0,
+         use_cache=True,
+         max_window_layers=20,
+         rope_scaling=None,
+         pad_token_id=156892,
+         eos_token_id=156892,
+         num_experts=256,
+         num_shared_experts=1,
+         num_experts_per_tok=8,
+         n_group=8,
+         topk_group=4,
+         moe_intermediate_size=512,
+         first_k_dense_replace=1,
+         head_dim=128,
+         output_router_logits=False,
+         use_qk_norm=True,
+         num_nextn_predict_layers=0,
+         mtp_loss_scaling_factor=0,
+         moe_router_enable_expert_bias=True,
+         routed_scaling_factor=1.0,
+         **kwargs,
+     ):
+         self.num_hidden_layers = num_hidden_layers
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_attention_heads = num_attention_heads
+         self.num_key_value_heads = num_key_value_heads
+         self.hidden_act = hidden_act
+         self.use_qkv_bias = use_qkv_bias
+         self.use_bias = use_bias
+         self.rms_norm_eps = rms_norm_eps
+         self.embedding_dropout = embedding_dropout
+         self.attention_dropout = attention_dropout
+         self.output_dropout = output_dropout
+         self.num_nextn_predict_layers = num_nextn_predict_layers
+         self.mtp_loss_scaling_factor = mtp_loss_scaling_factor
+         self.initializer_range = initializer_range
+         self.max_position_embeddings = max_position_embeddings
+         self.rope_theta = rope_theta
+         self.use_cache = use_cache
+         self.max_window_layers = max_window_layers
+         self.head_dim = head_dim or self.hidden_size // self.num_attention_heads
+         self.rope_scaling = rope_scaling
+         self.use_qk_norm = use_qk_norm
+         self.moe_router_enable_expert_bias = moe_router_enable_expert_bias
+         self.routed_scaling_factor = routed_scaling_factor
+
+         # MoE configs
+         self.num_experts = num_experts
+         self.num_shared_experts = num_shared_experts
+         self.num_experts_per_tok = num_experts_per_tok
+         self.n_group = n_group
+         self.topk_group = topk_group
+         self.moe_intermediate_size = moe_intermediate_size
+         self.first_k_dense_replace = first_k_dense_replace
+         self.output_router_logits = output_router_logits
+
+         super().__init__(pad_token_id=pad_token_id, eos_token_id=eos_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs)
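
Since `ModelConfig` subclasses `PretrainedConfig`, it inherits JSON round-tripping for free. A small sketch (the scaled-down override values and the output directory are arbitrary, chosen only to keep the example readable):

```python
from configuration_openmodel import ModelConfig

# Build a scaled-down config; unspecified fields keep the defaults above.
cfg = ModelConfig(num_hidden_layers=2, num_experts=8, num_experts_per_tok=2)

# PretrainedConfig provides serialization out of the box.
cfg.save_pretrained("./tiny-openmodel")
reloaded = ModelConfig.from_pretrained("./tiny-openmodel")
assert reloaded.num_experts == 8
```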
configuration_v2.py ADDED
@@ -0,0 +1,210 @@
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+ DEEPSEEK_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
+ class DeepseekV3Config(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`DeepseekV3Model`]. It is used to instantiate a DeepSeek
+     model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
+     defaults will yield a similar configuration to that of DeepSeek-V3.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 129280):
+             Vocabulary size of the Deep model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`DeepseekV3Model`]
+         hidden_size (`int`, *optional*, defaults to 4096):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 11008):
+             Dimension of the MLP representations.
+         moe_intermediate_size (`int`, *optional*, defaults to 1407):
+             Dimension of the MoE representations.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the Transformer decoder.
+         num_nextn_predict_layers (`int`, *optional*, defaults to 1):
+             Number of next-n predict layers in the DeepSeekV3 Model.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer decoder.
+         n_shared_experts (`int`, *optional*, defaults to None):
+             Number of shared experts; None means dense model.
+         n_routed_experts (`int`, *optional*, defaults to None):
+             Number of routed experts; None means dense model.
+         routed_scaling_factor (`float`, *optional*, defaults to 1.0):
+             Scaling factor for routed experts.
+         topk_method (`str`, *optional*, defaults to `greedy`):
+             Top-k method used in the routed gate.
+         n_group (`int`, *optional*, defaults to None):
+             Number of groups for routed experts.
+         topk_group (`int`, *optional*, defaults to None):
+             Number of selected groups for each token (for each token, ensuring the selected experts are only within `topk_group` groups).
+         num_experts_per_tok (`int`, *optional*, defaults to None):
+             Number of selected experts; None means dense model.
+         moe_layer_freq (`int`, *optional*, defaults to 1):
+             The frequency of the MoE layer: one expert layer for every `moe_layer_freq - 1` dense layers.
+         first_k_dense_replace (`int`, *optional*, defaults to 0):
+             Number of dense layers in shallow layers (embed->dense->dense->...->dense->moe->moe...->lm_head).
+                                                             \--k dense layers--/
+         norm_topk_prob (`bool`, *optional*, defaults to False):
+             Whether to normalize the weights of the routed experts.
+         scoring_func (`str`, *optional*, defaults to 'softmax'):
+             Method of computing expert weights.
+         aux_loss_alpha (`float`, *optional*, defaults to 0.001):
+             Auxiliary loss weight coefficient.
+         seq_aux (`bool`, *optional*, defaults to True):
+             Whether to compute the auxiliary loss for each individual sample.
+         num_key_value_heads (`int`, *optional*):
+             This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
+             `num_key_value_heads=1` the model will use Multi Query Attention (MQA); otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
+             by meanpooling all the original heads within that group. For more details check out [this
+             paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
+             `num_attention_heads`.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 2048):
+             The maximum sequence length that this model might ever be used with.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-06):
+             The epsilon used by the rms normalization layers.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         pad_token_id (`int`, *optional*):
+             Padding token id.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             Beginning of stream token id.
+         eos_token_id (`int`, *optional*, defaults to 2):
+             End of stream token id.
+         pretraining_tp (`int`, *optional*, defaults to 1):
+             Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
+             document](https://huggingface.co/docs/transformers/parallelism) to understand more about it. This value is
+             necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
+             issue](https://github.com/pytorch/pytorch/issues/76232).
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether to tie weight embeddings
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         rope_scaling (`Dict`, *optional*):
+             Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
+             strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
+             `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
+             `max_position_embeddings` to the expected new maximum.
+         attention_bias (`bool`, *optional*, defaults to `False`):
+             Whether to use a bias in the query, key, value and output projection layers during self-attention.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the attention probabilities.
+
+     ```python
+     >>> from transformers import DeepseekV3Model, DeepseekV3Config
+
+     >>> # Initializing a Deepseek-V3 style configuration
+     >>> configuration = DeepseekV3Config()
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "openmodel"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=129280,
+         hidden_size=7168,
+         intermediate_size=18432,
+         moe_intermediate_size = 2048,
+         num_hidden_layers=61,
+         num_nextn_predict_layers=1,
+         num_attention_heads=128,
+         num_key_value_heads=128,
+         n_shared_experts = 1,
+         n_routed_experts = 256,
+         ep_size = 1,
+         routed_scaling_factor = 2.5,
+         kv_lora_rank = 512,
+         q_lora_rank = 1536,
+         qk_rope_head_dim = 64,
+         v_head_dim = 128,
+         qk_nope_head_dim = 128,
+         topk_method = 'noaux_tc',
+         n_group = 8,
+         topk_group = 4,
+         num_experts_per_tok = 8,
+         moe_layer_freq = 1,
+         first_k_dense_replace = 3,
+         norm_topk_prob = True,
+         scoring_func = 'sigmoid',
+         aux_loss_alpha = 0.001,
+         seq_aux = True,
+         hidden_act="silu",
+         max_position_embeddings=4096,
+         initializer_range=0.02,
+         rms_norm_eps=1e-6,
+         use_cache=True,
+         pad_token_id=None,
+         bos_token_id=0,
+         eos_token_id=1,
+         pretraining_tp=1,
+         tie_word_embeddings=False,
+         rope_theta=10000.0,
+         rope_scaling=None,
+         attention_bias=False,
+         attention_dropout=0.0,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.max_position_embeddings = max_position_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.moe_intermediate_size = moe_intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_nextn_predict_layers = num_nextn_predict_layers
+         self.num_attention_heads = num_attention_heads
+         self.n_shared_experts = n_shared_experts
+         self.n_routed_experts = n_routed_experts
+         self.ep_size = ep_size
+         self.routed_scaling_factor = routed_scaling_factor
+         self.kv_lora_rank = kv_lora_rank
+         self.q_lora_rank = q_lora_rank
+         self.qk_rope_head_dim = qk_rope_head_dim
+         self.v_head_dim = v_head_dim
+         self.qk_nope_head_dim = qk_nope_head_dim
+         self.topk_method = topk_method
+         self.n_group = n_group
+         self.topk_group = topk_group
+         self.num_experts_per_tok = num_experts_per_tok
+         self.moe_layer_freq = moe_layer_freq
+         self.first_k_dense_replace = first_k_dense_replace
+         self.norm_topk_prob = norm_topk_prob
+         self.scoring_func = scoring_func
+         self.aux_loss_alpha = aux_loss_alpha
+         self.seq_aux = seq_aux
+         # for backward compatibility
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.pretraining_tp = pretraining_tp
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self.attention_bias = attention_bias
+         self.attention_dropout = attention_dropout
+
+         super().__init__(
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
docs/deploy_guidance.md ADDED
@@ -0,0 +1,196 @@
+ # Kimi-K2 Deployment Guide
+
+ > [!Note]
+ > This guide only provides some examples of deployment commands for Kimi-K2, which may not be the optimal configuration. Since inference engines are still being updated frequently, please continue to follow the guidance from their homepages if you want to achieve better inference performance.
+
+
+ ## vLLM Deployment
+
+ The smallest deployment unit for Kimi-K2 FP8 weights with 256k seqlen on the mainstream H200 platform is a cluster of 16 GPUs with either Tensor Parallelism (TP) or "data parallelism + expert parallelism" (DP+EP).
+ Running parameters for this environment are provided below. You may scale up to more nodes and increase expert parallelism to enlarge the inference batch size and overall throughput.
+
+ ### Tensor Parallelism
+
+ When the parallelism degree is ≤ 16, you can run inference with pure Tensor Parallelism. A sample launch command is:
+
+ ``` bash
+ # start ray on node 0 and node 1
+
+ # node 0:
+ vllm serve $MODEL_PATH \
+     --port 8000 \
+     --served-model-name kimi-k2 \
+     --trust-remote-code \
+     --tensor-parallel-size 16 \
+     --enable-auto-tool-choice \
+     --tool-call-parser kimi_k2
+ ```
+
+ **Key parameter notes:**
+ - `--tensor-parallel-size 16`: If using more than 16 GPUs, combine with pipeline parallelism.
+ - `--enable-auto-tool-choice`: Required when enabling tool usage.
+ - `--tool-call-parser kimi_k2`: Required when enabling tool usage.
+
+ ### Data Parallelism + Expert Parallelism
+
+ You can install libraries like DeepEP and DeepGEMM as needed. Then run the command (example on H200):
+
+ ``` bash
+ # node 0
+ vllm serve $MODEL_PATH --port 8000 --served-model-name kimi-k2 --trust-remote-code --data-parallel-size 16 --data-parallel-size-local 8 --data-parallel-address $MASTER_IP --data-parallel-rpc-port $PORT --enable-expert-parallel --max-num-batched-tokens 8192 --max-num-seqs 256 --gpu-memory-utilization 0.85 --enable-auto-tool-choice --tool-call-parser kimi_k2
+
+ # node 1
+ vllm serve $MODEL_PATH --headless --data-parallel-start-rank 8 --port 8000 --served-model-name kimi-k2 --trust-remote-code --data-parallel-size 16 --data-parallel-size-local 8 --data-parallel-address $MASTER_IP --data-parallel-rpc-port $PORT --enable-expert-parallel --max-num-batched-tokens 8192 --max-num-seqs 256 --gpu-memory-utilization 0.85 --enable-auto-tool-choice --tool-call-parser kimi_k2
+ ```
+
+ ## SGLang Deployment
+
+ Similarly, we can use TP or DP+EP in SGLang for deployment. Here are the examples.
+
+
+ ### Tensor Parallelism
+
+ Here is a simple example of running TP16 with two nodes on H200:
+
+ ``` bash
+ # Node 0
+ python -m sglang.launch_server --model-path $MODEL_PATH --tp 16 --dist-init-addr $MASTER_IP:50000 --nnodes 2 --node-rank 0 --trust-remote-code --tool-call-parser kimi_k2
+
+ # Node 1
+ python -m sglang.launch_server --model-path $MODEL_PATH --tp 16 --dist-init-addr $MASTER_IP:50000 --nnodes 2 --node-rank 1 --trust-remote-code --tool-call-parser kimi_k2
+ ```
+
+ **Key parameter notes:**
+ - `--tool-call-parser kimi_k2`: Required when enabling tool usage.
+
+ ### Data Parallelism + Expert Parallelism
+
+ Here is an example of large-scale Prefill-Decode Disaggregation (4P12D H200) with DP+EP in SGLang:
+
+ ``` bash
+ # for prefill node
+ MC_TE_METRIC=true SGLANG_DISAGGREGATION_HEARTBEAT_INTERVAL=10000000 SGLANG_DISAGGREGATION_BOOTSTRAP_TIMEOUT=100000 SGLANG_DISAGGREGATION_WAITING_TIMEOUT=100000 PYTHONUNBUFFERED=1 \
+ python -m sglang.launch_server --model-path $MODEL_PATH \
+ --trust-remote-code --disaggregation-mode prefill --dist-init-addr $PREFILL_NODE0:5757 --tp-size 32 --dp-size 32 --enable-dp-attention --host $LOCAL_IP --decode-log-interval 1 --disable-radix-cache --enable-deepep-moe --moe-dense-tp-size 1 --enable-dp-lm-head --disable-shared-experts-fusion --watchdog-timeout 1000000 --enable-two-batch-overlap --disaggregation-ib-device $IB_DEVICE --chunked-prefill-size 262144 --mem-fraction-static 0.85 --deepep-mode normal --ep-dispatch-algorithm dynamic --eplb-algorithm deepseek --max-running-requests 1024 --nnodes 4 --node-rank $RANK --tool-call-parser kimi_k2
+
+
+ # for decode node
+ SGLANG_DEEPEP_NUM_MAX_DISPATCH_TOKENS_PER_RANK=480 MC_TE_METRIC=true SGLANG_DISAGGREGATION_HEARTBEAT_INTERVAL=10000000 SGLANG_DISAGGREGATION_BOOTSTRAP_TIMEOUT=100000 SGLANG_DISAGGREGATION_WAITING_TIMEOUT=100000 PYTHONUNBUFFERED=1 \
+ python -m sglang.launch_server --model-path $MODEL_PATH --trust-remote-code --disaggregation-mode decode --dist-init-addr $DECODE_NODE0:5757 --tp-size 96 --dp-size 96 --enable-dp-attention --host $LOCAL_IP --decode-log-interval 1 --context-length 2176 --disable-radix-cache --enable-deepep-moe --moe-dense-tp-size 1 --enable-dp-lm-head --disable-shared-experts-fusion --watchdog-timeout 1000000 --enable-two-batch-overlap --disaggregation-ib-device $IB_DEVICE --deepep-mode low_latency --mem-fraction-static 0.8 --cuda-graph-bs 480 --max-running-requests 46080 --ep-num-redundant-experts 96 --nnodes 12 --node-rank $RANK --tool-call-parser kimi_k2
+
+ # pdlb
+ PYTHONUNBUFFERED=1 python -m sglang.srt.disaggregation.launch_lb --prefill http://${PREFILL_NODE0}:30000 --decode http://${DECODE_NODE0}:30000
+ ```
+
+ ## KTransformers Deployment
+
+ Please copy all configuration files (i.e., everything except the .safetensors files) into the GGUF checkpoint folder at /path/to/K2. Then run:
+ ``` bash
+ python ktransformers/server/main.py --model_path /path/to/K2 --gguf_path /path/to/K2 --cache_lens 30000
+ ```
+
+ To enable AMX optimization, run:
+
+ ``` bash
+ python ktransformers/server/main.py --model_path /path/to/K2 --gguf_path /path/to/K2 --cache_lens 30000 --optimize_config_path ktransformers/optimize/optimize_rules/DeepSeek-V3-Chat-fp8-linear-ggml-experts-serve-amx.yaml
+ ```
+
+ ## TensorRT-LLM Deployment
+ ### Prerequisite
+ Please refer to [this guide](https://nvidia.github.io/TensorRT-LLM/installation/build-from-source-linux.html) to build TensorRT-LLM v1.0.0-rc2 from source and start a TRT-LLM docker container.
+
+ Install blobfile:
+ ```bash
+ pip install blobfile
+ ```
+ ### Multi-node Serving
+ TensorRT-LLM supports multi-node inference. You can use mpirun to launch Kimi-K2 with multi-node jobs. We will use two nodes for this example.
+
+ #### mpirun
+ mpirun requires each node to have passwordless ssh access to the other node. We need to set up the environment inside the docker container. Run the container with the host network and mount the current directory as well as the model directory into the container.
+
+ ```bash
+ # use host network
+ IMAGE=<YOUR_IMAGE>
+ NAME=test_2node_docker
+ # host1
+ docker run -it --name ${NAME}_host1 --ipc=host --gpus=all --network host --privileged --ulimit memlock=-1 --ulimit stack=67108864 -v ${PWD}:/workspace -v <YOUR_MODEL_DIR>:/models/DeepSeek-V3 -w /workspace ${IMAGE}
+ # host2
+ docker run -it --name ${NAME}_host2 --ipc=host --gpus=all --network host --privileged --ulimit memlock=-1 --ulimit stack=67108864 -v ${PWD}:/workspace -v <YOUR_MODEL_DIR>:/models/DeepSeek-V3 -w /workspace ${IMAGE}
+ ```
+
+ Set up ssh inside the container
+
+ ```bash
+ apt-get update && apt-get install -y openssh-server
+
+ # modify /etc/ssh/sshd_config
+ PermitRootLogin yes
+ PubkeyAuthentication yes
+ # modify /etc/ssh/sshd_config, change default port 22 to another unused port
+ port 2233
+
+ # modify /etc/ssh
+ ```
+
+ Generate an ssh key on host1 and copy it to host2, and vice versa.
+
+ ```bash
+ # on host1
+ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
+ ssh-copy-id -i ~/.ssh/id_ed25519.pub root@<HOST2>
+ # on host2
+ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
+ ssh-copy-id -i ~/.ssh/id_ed25519.pub root@<HOST1>
+
+ # restart ssh service on host1 and host2
+ service ssh restart # or
+ /etc/init.d/ssh restart # or
+ systemctl restart ssh
+ ```
+
+ Generate additional config for trtllm serve.
+ ```bash
+ cat >/path/to/TensorRT-LLM/extra-llm-api-config.yml <<EOF
+ cuda_graph_config:
+   padding_enabled: true
+   batch_sizes:
+     - 1
+     - 2
+     - 4
+     - 8
+     - 16
+     - 32
+     - 64
+     - 128
+ print_iter_log: true
+ enable_attention_dp: true
+ EOF
+ ```
+
+
+ After these preparations, you can run trtllm-serve on two nodes using mpirun:
+
+ ```bash
+ mpirun -np 16 \
+     -H <HOST1>:8,<HOST2>:8 \
+     -mca plm_rsh_args "-p 2233" \
+     --allow-run-as-root \
+     trtllm-llmapi-launch trtllm-serve serve \
+     --backend pytorch \
+     --tp_size 16 \
+     --ep_size 8 \
+     --kv_cache_free_gpu_memory_fraction 0.95 \
+     --trust_remote_code \
+     --max_batch_size 128 \
+     --max_num_tokens 4096 \
+     --extra_llm_api_options /path/to/TensorRT-LLM/extra-llm-api-config.yml \
+     --port 8000 \
+     <YOUR_MODEL_DIR>
+ ```
+
+ ## Others
+
+ Kimi-K2 reuses the `DeepSeekV3CausalLM` architecture, with its weights converted into the proper shape, to save redevelopment effort. To let inference engines distinguish it from DeepSeek-V3 and apply the best optimizations, we set `"model_type": "kimi_k2"` in `config.json`.
+
+ If you are using a framework that is not on the recommended list, you can still run the model by manually changing `model_type` to "deepseek_v3" in `config.json` as a temporary workaround (as sketched below). You may need to manually parse tool calls in case no tool call parser is available in your framework.
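
The `model_type` workaround described in that last paragraph is a one-line JSON edit. A minimal sketch (the checkpoint path is illustrative, and keeping a backup of the original file is advisable):

```python
import json
from pathlib import Path

config_path = Path("/path/to/K2/config.json")  # illustrative path
config = json.loads(config_path.read_text())

# Fall back to the DeepSeek-V3 architecture name so engines without
# native kimi_k2 support can still load the checkpoint.
config["model_type"] = "deepseek_v3"
config_path.write_text(json.dumps(config, indent=2))
```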
docs/tool_call_guidance.md ADDED
@@ -0,0 +1,258 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ## Tool Calling
2
+ To enable the tool calling feature, you may need to set certain tool calling parser options when starting the service. See [deploy_guidance](./deploy_guidance.md) for details.
3
+ In Kimi-K2, a tool calling process includes:
4
+ - Passing function descriptions to Kimi-K2
5
+ - Kimi-K2 decides to make a function call and returns the necessary information for the function call to the user
6
+ - The user performs the function call, collects the call results, and passes the function call results to Kimi-K2
7
+ - Kimi-K2 continues to generate content based on the function call results until the model believes it has obtained sufficient information to respond to the user
8
+
9
+ ### Preparing Tools
10
+ Suppose we have a function `get_weather` that can query the weather conditions in real-time.
11
+ This function accepts a city name as a parameter and returns the weather conditions. We need to prepare a structured description for it so that Kimi-K2 can understand its functionality.
12
+
13
+ ```python
14
+ def get_weather(city):
15
+ return {"weather": "Sunny"}
16
+
17
+ # Collect the tool descriptions in tools
18
+ tools = [{
19
+ "type": "function",
20
+ "function": {
21
+ "name": "get_weather",
22
+ "description": "Get weather information. Call this tool when the user needs to get weather information",
23
+ "parameters": {
24
+ "type": "object",
25
+ "required": ["city"],
26
+ "properties": {
27
+ "city": {
28
+ "type": "string",
29
+ "description": "City name",
30
+ }
31
+ }
32
+ }
33
+ }
34
+ }]
35
+
36
+ # Tool name->object mapping for easy calling later
37
+ tool_map = {
38
+ "get_weather": get_weather
39
+ }
40
+ ```
41
+ ### Chat with tools
42
+ We use `openai.OpenAI` to send messages to Kimi-K2 along with tool descriptions. Kimi-K2 will autonomously decide whether to use and how to use the provided tools.
43
+ If Kimi-K2 believes a tool call is needed, it will return a result with `finish_reason='tool_calls'`. At this point, the returned result includes the tool call information.
44
+ After calling tools with the provided information, we then need to append the tool call results to the chat history and continue calling Kimi-K2.
45
+ Kimi-K2 may need to call tools multiple times until the model believes the current results can answer the user's question. We should check `finish_reason` until it is not `tool_calls`.
46
+
47
+ The results obtained by the user after calling the tools should be added to `messages` with `role='tool'`.
48
+
49
+ ```python
50
+ import json
51
+ from openai import OpenAI
52
+ model_name='moonshotai/Kimi-K2-Instruct'
53
+ client = OpenAI(base_url=endpoint,
54
+ api_key='xxx')
55
+
56
+ messages = [
57
+ {"role": "user", "content": "What's the weather like in Beijing today? Let's check using the tool."}
58
+ ]
59
+ finish_reason = None
60
+ while finish_reason is None or finish_reason == "tool_calls":
61
+ completion = client.chat.completions.create(
62
+ model=model_name,
63
+ messages=messages,
64
+ temperature=0.3,
65
+ tools=tools,
66
+ tool_choice="auto",
67
+ )
68
+ choice = completion.choices[0]
69
+ finish_reason = choice.finish_reason
70
+ # Note: The finish_reason when tool calls end may vary across different engines, so this condition check needs to be adjusted accordingly
71
+ if finish_reason == "tool_calls":
72
+ messages.append(choice.message)
73
+ for tool_call in choice.message.tool_calls:
74
+ tool_call_name = tool_call.function.name
75
+ tool_call_arguments = json.loads(tool_call.function.arguments)
76
+ tool_function = tool_map[tool_call_name]
77
+ tool_result = tool_function(tool_call_arguments)
78
+ print("tool_result", tool_result)
79
+
80
+ messages.append({
81
+ "role": "tool",
82
+ "tool_call_id": tool_call.id,
83
+ "name": tool_call_name,
84
+ "content": json.dumps(tool_result),
85
+ })
86
+ print('-' * 100)
87
+ print(choice.message.content)
88
+ ```
89
+ ### Tool Calling in Streaming Mode
90
+ Tool calling can also be used in streaming mode. In this case, we need to collect the tool call information returned in the stream until we have a complete tool call. Please refer to the code below:
91
+
92
+ ```python
93
+ messages = [
94
+ {"role": "user", "content": "What's the weather like in Beijing today? Let's check using the tool."}
95
+ ]
96
+ finish_reason = None
97
+ msg = ''
98
+ while finish_reason is None or finish_reason == "tool_calls":
99
+ completion = client.chat.completions.create(
100
+ model=model_name,
101
+ messages=messages,
102
+ temperature=0.3,
103
+ tools=tools,
104
+ tool_choice="auto",
105
+ stream=True
106
+ )
107
+ tool_calls = []
108
+ for chunk in completion:
109
+ delta = chunk.choices[0].delta
110
+ if delta.content:
111
+ msg += delta.content
112
+ if delta.tool_calls:
113
+ for tool_call_chunk in delta.tool_calls:
114
+ if tool_call_chunk.index is not None:
115
+ # Extend the tool_calls list
116
+ while len(tool_calls) <= tool_call_chunk.index:
117
+ tool_calls.append({
118
+ "id": "",
119
+ "type": "function",
120
+ "function": {
121
+ "name": "",
122
+ "arguments": ""
123
+ }
124
+ })
125
+
126
+ tc = tool_calls[tool_call_chunk.index]
127
+
128
+ if tool_call_chunk.id:
129
+ tc["id"] += tool_call_chunk.id
130
+ if tool_call_chunk.function.name:
131
+ tc["function"]["name"] += tool_call_chunk.function.name
132
+ if tool_call_chunk.function.arguments:
133
+ tc["function"]["arguments"] += tool_call_chunk.function.arguments
134
+
135
+ finish_reason = chunk.choices[0].finish_reason
136
+ # Note: The finish_reason when tool calls end may vary across different engines, so this condition check needs to be adjusted accordingly
137
+ if finish_reason == "tool_calls":
138
+ for tool_call in tool_calls:
139
+ tool_call_name = tool_call['function']['name']
140
+ tool_call_arguments = json.loads(tool_call['function']['arguments'])
141
+ tool_function = tool_map[tool_call_name]
142
+ tool_result = tool_function(tool_call_arguments)
143
+ messages.append({
144
+ "role": "tool",
145
+ "tool_call_id": tool_call['id'],
146
+ "name": tool_call_name,
147
+ "content": json.dumps(tool_result),
148
+ })
149
+ # The text generated by the tool call is not the final version, reset msg
150
+ msg = ''
151
+
152
+ print(msg)
153
+ ```
154
+ ### Manually Parsing Tool Calls
155
+ The tool call requests generated by Kimi-K2 can also be parsed manually, which is especially useful when the service you are using does not provide a tool-call parser.
156
+ The tool call requests generated by Kimi-K2 are wrapped by `<|tool_calls_section_begin|>` and `<|tool_calls_section_end|>`,
157
+ with each tool call wrapped by `<|tool_call_begin|>` and `<|tool_call_end|>`. The tool ID and arguments are separated by `<|tool_call_argument_begin|>`.
158
+ The format of the tool ID is `functions.{func_name}:{idx}`, from which we can parse the function name.
159
+
160
+ Based on the above rules, we can directly post request to the completions interface and manually parse tool calls.
161
+
162
+ ```python
163
+ import requests
164
+ from transformers import AutoTokenizer
165
+ messages = [
166
+ {"role": "user", "content": "What's the weather like in Beijing today? Let's check using the tool."}
167
+ ]
168
+ msg = ''
169
+ tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
170
+ while True:
171
+ text = tokenizer.apply_chat_template(
172
+ messages,
173
+ tokenize=False,
174
+ tools=tools,
175
+ add_generation_prompt=True,
176
+ )
177
+ payload = {
178
+ "model": model_name,
179
+ "prompt": text,
180
+ "max_tokens": 512
181
+ }
182
+ response = requests.post(
183
+ f"{endpoint}/completions",
184
+ headers={"Content-Type": "application/json"},
185
+ json=payload,
186
+ stream=False,
187
+ )
188
+ raw_out = response.json()
189
+
190
+ raw_output = raw_out["choices"][0]["text"]
191
+ tool_calls = extract_tool_call_info(raw_output)
192
+ if len(tool_calls) == 0:
193
+ # No tool calls
194
+ msg = raw_output
195
+ break
196
+ else:
197
+ for tool_call in tool_calls:
198
+ tool_call_name = tool_call['function']['name']
199
+ tool_call_arguments = json.loads(tool_call['function']['arguments'])
200
+ tool_function = tool_map[tool_call_name]
201
+ tool_result = tool_function(tool_call_arguments)
202
+
203
+ messages.append({
204
+ "role": "tool",
205
+ "tool_call_id": tool_call['id'],
206
+ "name": tool_call_name,
207
+ "content": json.dumps(tool_result),
208
+ })
209
+ print('-' * 100)
210
+ print(msg)
211
+ ```
212
+ Here, `extract_tool_call_info` parses the model output and returns the model call information. A simple implementation would be:
213
+ ```python
214
+ def extract_tool_call_info(tool_call_rsp: str):
215
+ if '<|tool_calls_section_begin|>' not in tool_call_rsp:
216
+ # No tool calls
217
+ return []
218
+ import re
219
+ pattern = r"<\|tool_calls_section_begin\|>(.*?)<\|tool_calls_section_end\|>"
220
+
221
+ tool_calls_sections = re.findall(pattern, tool_call_rsp, re.DOTALL)
222
+
223
+ # Extract multiple tool calls
224
+ func_call_pattern = r"<\|tool_call_begin\|>\s*(?P<tool_call_id>[\w\.]+:\d+)\s*<\|tool_call_argument_begin\|>\s*(?P<function_arguments>.*?)\s*<\|tool_call_end\|>"
225
+ tool_calls = []
226
+ for match in re.findall(func_call_pattern, tool_calls_sections[0], re.DOTALL):
227
+ function_id, function_args = match
228
+ # function_id: functions.get_weather:0
229
+ function_name = function_id.split('.')[1].split(':')[0]
230
+ tool_calls.append(
231
+ {
232
+ "id": function_id,
233
+ "type": "function",
234
+ "function": {
235
+ "name": function_name,
236
+ "arguments": function_args
237
+ }
238
+ }
239
+ )
240
+ return tool_calls
241
+ ```
242
+
243
+ ## FAQ
244
+
245
+ #### Q1: I received special tokens like '<|tool_call_begin|>' in the 'content' field instead of a normal tool_call.
246
+
247
+ This indicates a tool-call crash, which most often occurs in multi-turn tool-calling scenarios when a tool-call ID is incorrect. K2 expects the ID to follow the format `functions.{func_name}:{idx}`, where `functions` is a fixed string, `func_name` is the actual function name (e.g. `get_weather`), and `idx` is a global counter that starts at 0 and increments with each function invocation.
+ Please check all tool-call IDs in the message list.
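+ For instance, a well-formed history with two invocations of a hypothetical `get_weather` tool (message contents made up) carries globally incremented indices:
+ ```python
+ # Hypothetical two-turn history: the counter in the IDs is global, so the
+ # second invocation of the same function gets index 1, not 0.
+ history = [
+     {"role": "assistant", "content": None, "tool_calls": [
+         {"id": "functions.get_weather:0", "type": "function",
+          "function": {"name": "get_weather", "arguments": '{"city": "Beijing"}'}}]},
+     {"role": "tool", "tool_call_id": "functions.get_weather:0",
+      "name": "get_weather", "content": '{"weather": "Sunny"}'},
+     {"role": "assistant", "content": None, "tool_calls": [
+         {"id": "functions.get_weather:1", "type": "function",
+          "function": {"name": "get_weather", "arguments": '{"city": "Shanghai"}'}}]},
+     {"role": "tool", "tool_call_id": "functions.get_weather:1",
+      "name": "get_weather", "content": '{"weather": "Cloudy"}'},
+ ]
+ ```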
+ 
+ 
+ #### Q2: My tool-call ID is incorrect. How can I fix it?
+ 
+ First, make sure your code and chat template are up to date with the latest version from the Hugging Face repo.
+ If you're using vLLM or SGLang and they generate random tool-call IDs, upgrade them to the latest release. For other frameworks, you must either parse the tool-call ID from the model output and set it correctly in the server-side response, or rewrite every tool-call ID on the client side according to the rules above before sending the messages to Kimi K2.
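+ A minimal client-side sketch of the rewrite option, assuming the OpenAI-style message schema used earlier in this guide (the helper name is ours):
+ ```python
+ def rewrite_tool_call_ids(messages):
+     """Renumber all tool-call IDs to the `functions.{name}:{idx}` scheme
+     K2 expects, keeping `tool` messages in sync with their calls."""
+     id_map = {}   # original ID -> rewritten ID
+     counter = 0   # global invocation counter across the whole history
+     for message in messages:
+         if message.get("role") == "assistant":
+             for tool_call in message.get("tool_calls") or []:
+                 new_id = f"functions.{tool_call['function']['name']}:{counter}"
+                 id_map[tool_call["id"]] = new_id
+                 tool_call["id"] = new_id
+                 counter += 1
+         elif message.get("role") == "tool":
+             # Point each tool result at the rewritten ID of its call.
+             message["tool_call_id"] = id_map.get(
+                 message["tool_call_id"], message["tool_call_id"])
+     return messages
+ ```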
+ 
+ #### Q3: My tool-call IDs are correct, but multi-turn tool calls still crash.
+ 
+ Please describe your situation in the [discussion](https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905/discussions).
figures/Base-Evaluation.png ADDED

Git LFS Details

  • SHA256: d1d3ee49430417c17326c9def19264756a3bc0b0aa001e598d0e0d751ebf93f8
  • Pointer size: 131 Bytes
  • Size of remote file: 245 kB
figures/banner.png ADDED

Git LFS Details

  • SHA256: 380b39db25a6842cedaabad354a0a4929b617835094800124bba756c3b0e98f8
  • Pointer size: 131 Bytes
  • Size of remote file: 292 kB
figures/kimi-logo.png ADDED
generation_config.json ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "bos_token_id": 128000,
3
+ "do_sample": true,
4
+ "eos_token_id": [
5
+ 128001,
6
+ 128008,
7
+ 128009
8
+ ],
9
+ "temperature": 0.6,
10
+ "top_p": 0.9,
11
+ "transformers_version": "4.42.3"
12
+ }
mergekit_config.yml ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ slices:
2
+ - sources:
3
+ - layer_range: [0, 105]
4
+ model: mlabonne/BigLlama-3.1-681B-Instruct
5
+ - sources:
6
+ - layer_range: [52, 157]
7
+ model: mlabonne/BigLlama-3.1-681B-Instruct
8
+ - sources:
9
+ - layer_range: [104, 209]
10
+ model: mlabonne/BigLlama-3.1-681B-Instruct
11
+ merge_method: passthrough
12
+ dtype: bfloat16
model-00000-of-00160.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7b3175ddf29600014cb186883caa4d193739f60b4553e80979dd69ad919548de
3
+ size 14663519408
model-00001-of-000062.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:217d3bff3b5c828640ad1de9a40edf408dd1834776136929b04d37365601b73d
3
+ size 497640784
model-00001-of-00155.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:02b5cbbf76f7b21e54a936db254aef54aa4196d8a729c93ce6e26e7467c9448c
3
+ size 2575302800
model-00001-of-00160.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:51d07b0baa27a947b92111bfc01f3cd1e0e2bf9780e2ff052af21251a1294e05
3
+ size 6442474792
model-00001-of-00481.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:71b079a02069ebe5a5f4f8135815f136afb1d6fc4423620dcb2a14fa399cdf12
3
+ size 4202692736
model-00002-of-000062.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8c143999508effc326c9d752f83a5f10c40d688fe779613047c68934aca1df65
3
+ size 17066593248
model-00002-of-00155.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8bd28b5ae46d9bfd9a606fb9a3d78145a82ccc683cba6046d0778c5cff62774b
3
+ size 17179933360
model-00002-of-00160.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1eda2b08edbbc6c9ec7c8a171bef28f1eb13d0f1b3462de2b89b5b2e67e68a1c
3
+ size 6442474792
model-00002-of-00481.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bbb90d23098d70c7ce46a8a10ef3c0279d83831022e8f8da53f67874f1a84a66
3
+ size 4202725632
model-00003-of-000062.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fcb55ad4f225af2c0cf5e2266ee30410bd3ae1a13476cb4bb70b2d629e22dac7
3
+ size 17066593248
model-00003-of-00155.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:34fe8fa1233a826f93c2017ed28346c917d208acba742f9e499bdf862e29a15a
3
+ size 10708107872
model-00003-of-00160.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dc6e1d0e686130b2f580002e0c9d668f88f47ba26ed3fb7d76a45aeeee279a8b
3
+ size 6442474960
model-00003-of-00481.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c07820a3c436a61f69fe4e2b8abdffc7bce4a11c44f8105297d5b03a5dae092d
3
+ size 3489661192
model-00004-of-000062.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:76d350c0af2de2c10b37281e4e49bab364c5d6c77c0b17189b2c18cd4ac94225
3
+ size 17066593248
model-00004-of-00155.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4228da5f6d58a79560d41a80f697562ff44f7bbb9803a5890783fececb0c601d
3
+ size 3288489872
model-00004-of-00160.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3e985dd586c3ee4ab5157b625010e2d6372dabf3a7686cb0fbe3cfd757622cc5
3
+ size 6442474984
model-00004-of-00481.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3bcdac7531550e841a2add9bbb94626f85a215f00b177f9998d7e59f518a9ab7
3
+ size 4697686984
model-00005-of-000062.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8b097587a9d0961131e94f239b831018732c3e63fe03e309ed55f55400f1f261
3
+ size 17066593248
model-00005-of-00155.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:49a3a53e41ac87a436a1786c24d0323287974562c92548307770dcb2b3b422dd
3
+ size 17179933360
model-00005-of-00160.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b3c0451a81a0cce166b47c21d250772b9e5a2aa06d2403de8c953770313f4bc6
3
+ size 6442474984
model-00005-of-00481.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:441a03403e60fb88c2dac49102a8bf1a983e01a56175c5563344a8469b8b95a2
3
+ size 4697686992
model-00006-of-000062.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:db196c7434b6793fd787bcd00b8c53b18ae6735bb2ebf69022ab93532042d75a
3
+ size 17066593248
model-00006-of-00155.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d8ec47016f9a3d68b949eb59194c1721855edaa58e5c06f6f1de110a1b9f5779
3
+ size 8657075832
model-00006-of-00160.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e992092811e6eaef2f0afb56ad8ade863ee4b823cbe59d69b0f1d363fa194c54
3
+ size 6442474984
model-00006-of-00481.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:053a525027d39c68ba7714be3ee8115f5034d181f07834c5e4dfed07623f15d7
3
+ size 3489661200
model-00007-of-000062.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5055892e2d9358cd889ae8910840366a2cd058e1434c0d85305c97c538362679
3
+ size 17066593248
model-00007-of-00155.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:01cf04ff48fe7d7d9a2770a8a95ac9cdd0e64b58b1197035a490f7f766c64602
3
+ size 17179933360
model-00007-of-00160.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9692a04bd75bed44080bf3205a3278f9192e27cd878199984c17877d6d6e7473
3
+ size 6442474984
model-00007-of-00481.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:25aecf93cb1e03326b3c1361d51add4ed0e170221ddad727dc13d0846763ab0e
3
+ size 4697719880
model-00008-of-000062.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cc371efa5ae4a270468f862beb7cb5d9cca21c491cf5c010d28de850436593b1
3
+ size 17066593248
model-00008-of-00155.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:333488f4b27ff934a3fa3164da9325320c0944fad308da2e478b30e4a729a14d
3
+ size 8589966712
model-00008-of-00160.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8696c4f490e76a0ce0b07d53cb790f8e7b7647476db31979389d1f52c5364098
3
+ size 14512482136
model-00008-of-00481.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a0279a655b4bfcf3238175d64e95f27c72ac800fae18fb2137a838780c2eefbe
3
+ size 3489661200
model-00009-of-000062.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:42ebfd6fb5b4994ea60f660686062b0a4e2da41ed1edd39039c520c81fe1ef08
3
+ size 17066593248