killer66678 committed
Commit 0e3f80c · verified · 1 Parent(s): 9cb47bc

Initial release: merged LoRA weights, license, model card
LICENSE ADDED
@@ -0,0 +1,34 @@
+ OPENPANGU MODEL LICENSE AGREEMENT VERSION 1.0
+
+ This OPENPANGU MODEL LICENSE AGREEMENT VERSION 1.0 (the "Agreement") is a legal agreement between You and Huawei Technologies Co., Ltd. ("Huawei", "We" or "Us"), and it governs Your reproducing, use, modification, and distribution of openPangu as made available by Huawei under this Agreement.
+
+ By using, reproducing, modifying, distributing, performing or displaying any portion or element of openPangu, or otherwise accepting the terms of this Agreement, You agree to be bound by this Agreement.
+
+ 1. Definitions.
+ 1.1. "openPangu" or "Model" means openPangu large language models and software, including trained model weights, parameters (including optimizer states), accompanying source code and scripts released under this Agreement.
+ 1.2. "Derivative Model" means all (1) modifications to the Model, (2) works based on the Model, and (3) any other derivative works of the Model. For clarity, information or content results from operating or otherwise using the Model is not a Derivative Model.
+ 1.3. "You" or "Your" means an individual or Legal Entity exercising permissions granted by this Agreement and/or using the Model for any purpose.
+ 1.4. "Third Party" or "Third Parties" means individuals or legal entities that are not under common control with Us or You.
+
+ 2. License Grant. Subject to Your full compliance with the terms and conditions of this Agreement, We hereby grant to You a perpetual, worldwide, non-exclusive, non-transferable, no-charge, royalty-free license (except as stated in Section 3) to use, reproduce, modify, and distribute the Model.
+
+ 3. Conditions for License Grant. You represent and warrant that You will not, access, download, install, run, deploy, integrate, modify, or otherwise use the Model, directly or indirectly, within the European Union.
+
+
+ 4. Redistribution.
+ 4.1. If You distribute the Model or Derivative Model, You shall retain in Your distribution (1) a copy of this agreement, and (2) all copyright notices and other notices of origin included in the Model that are applicable to Your distribution.
+ 4.2. Further, if You distribute or make available to Third Parties a product or service (including another AI model) based on the Model, You are required to (1) display the acknowledgement "Powered by openPangu" and (2) include a trademark notice "openPangu is a trademark of Huawei Technologies Co., Ltd." on related webpages, user manuals, product documentations or other advertising materials mentioning features of the Model.
+ 4.3. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for Derivative Model made by You as a whole, provided Your use, reproduction, and distribution of the Model otherwise complies with the terms and conditions of this Agreement.
+
+ 5. Ownership. We do not claim ownership to any information or content generated using the Model or Derivative Model that are made by You. You are solely responsible for evaluating the accuracy and appropriateness of such information or content for Your use case.
+
+ 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of Huawei, except as required for complying with Section 4.2.
+
+ 7. Indemnity. You will indemnify and hold harmless Huawei from and against any claim by any third party arising out of or related to Your use or distribution of the Model or Derivative Model made by You (e.g. a violation against Section 3). For avoidance of doubt, "third party" in this clause include supervisory authorities.
+
+ 8. THE MODEL IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, NONINFRINGEMENT, ACCURACY, OR THE ABSENCE OF LATENT OR OTHER DEFECTS OR ERRORS, WHETHER OR NOT DISCOVERABLE, ALL TO THE GREATEST EXTENT PERMISSIBLE UNDER APPLICABLE LAW.
+
+ 9. IN NO EVENT SHALL WE BE LIABLE TO YOU FOR ANY DAMAGES, INCLUDING, BUT NOT LIMITED TO ANY DIRECT, OR INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING FROM YOUR USE OR INABILITY TO USE THE MODEL, IN WHOLE OR IN PART, NO MATTER HOW IT'S CAUSED OR THE LEGAL THEORY IT IS BASED ON, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+
+ END OF THE TERMS AND CONDITIONS
Modelfile ADDED
@@ -0,0 +1,11 @@
+ # ollama modelfile auto-generated by llamafactory
+
+ FROM .
+
+ TEMPLATE """{{ if .System }}System: {{ .System }}[unused10]
+ {{ end }}{{ range .Messages }}{{ if eq .Role "user" }}Human: {{ .Content }}[unused10]
+ Assistant:{{ else if eq .Role "assistant" }}{{ .Content }}[unused10]
+ {{ end }}{{ end }}"""
+
+ PARAMETER stop "[unused10]"
+ PARAMETER num_ctx 4096
Open Source Software Notice ADDED
@@ -0,0 +1,218 @@
+ OPEN SOURCE SOFTWARE NOTICE
+
+ Please note we provide an open source software notice along with this product and/or this product firmware (in the following just "this product"). The open source software licenses are granted by the respective right holders. And the open source licenses prevail all other license information with regard to the respective open source software contained in the product, including but not limited to End User Software Licensing Agreement. This notice is provided on behalf of Huawei Technologies Co. Ltd. and any of its local subsidiaries which may have provided this product to you in your local country.
+
+ Warranty Disclaimer
+ THE OPEN SOURCE SOFTWARE IN THIS PRODUCT IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY, WITHOUT EVEN THE IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. SEE THE APPLICABLE LICENSES FOR MORE DETAILS.
+
+ Copyright Notice and License Texts
+
+ Software: transformers 4.53.2
+ Copyright notice:
+ Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
+
+ License Text:
+ ----------------------------------------
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
README.md ADDED
@@ -0,0 +1,134 @@
+ ---
+ license: other
+ license_name: openpangu-model-license-agreement-v1.0
+ base_model: FreedomIntelligence/openPangu-Embedded-7B-V1.1
+ library_name: transformers
+ pipeline_tag: text-generation
+ tags:
+ - text-generation
+ - causal-lm
+ language:
+ - zh
+ - en
+ model-index:
+ - name: openpangu-7b-lora-merged
+   results:
+   - task:
+       type: text-generation
+       name: GSM8K
+     dataset:
+       name: gsm8k
+       type: gsm8k
+       config: main
+       split: test
+     metrics:
+     - type: exact_match
+       name: exact_match (strict-match)
+       value: 0.6171
+     - type: exact_match
+       name: exact_match (flexible-extract)
+       value: 0.5777
+   - task:
+       type: multiple-choice
+       name: C-Eval (valid)
+     dataset:
+       name: ceval/ceval-exam
+       type: ceval/ceval-exam
+       config: ceval-valid
+       split: val
+     metrics:
+     - type: acc
+       name: acc
+       value: 0.6241
+     - type: acc_norm
+       name: acc_norm
+       value: 0.6241
+ ---
+
+ # openPangu-7B LoRA (merged)
+
+ This repository contains LoRA-finetuned and merged weights based on
+ `openPangu-Embedded-7B-V1.1`. The LoRA adapters were merged into the
+ base model to produce full weights suitable for standard inference.
+
+ ## Base Model
+
+ - Base model: `FreedomIntelligence/openPangu-Embedded-7B-V1.1`
+ - License: `OPENPANGU Model License Agreement v1.0` (see `LICENSE`)
+
+ ## Training Data
+
+ - Private dataset (not released).
+
+ ## Training Procedure
+
+ - Finetuning: LoRA using LLaMA-Factory.
+ - Export: merged full weights with `llamafactory-cli export`.
+
+ Example (paths are placeholders):
+
+ ```bash
+ llamafactory-cli export \
+   --model_name_or_path <base_model_dir> \
+   --adapter_name_or_path <lora_adapter_dir> \
+   --template default \
+   --finetuning_type lora \
+   --export_dir <export_dir> \
+   --export_size 2 \
+   --export_device cpu \
+   --export_legacy_format False \
+   --trust_remote_code True
+ ```
+
+ ## Evaluation
+
+ Evaluated with `lm-evaluation-harness` using vLLM on 4x RTX 4090 GPUs.
+ Date (UTC): 2026-01-04.
+
+ ### GSM8K (5-shot)
+
+ - exact_match (strict-match): 0.6171
+ - exact_match (flexible-extract): 0.5777
+
+ ### C-Eval (valid, 5-shot)
+
+ - acc: 0.6241
+ - acc_norm: 0.6241
+
+ Example command (paths are placeholders):
+
+ ```bash
+ lm_eval --model vllm \
+   --model_args "pretrained=<model_dir>,tensor_parallel_size=4,dtype=auto,gpu_memory_utilization=0.8,max_model_len=4096,enforce_eager=True,trust_remote_code=True" \
+   --tasks gsm8k \
+   --num_fewshot 5 \
+   --batch_size auto
+ ```
+
+ ## Usage
+
+ This repo includes custom modeling code; `trust_remote_code=True` is required.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "killer66678/openpangu_7b_lora"
+ tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     trust_remote_code=True,
+     torch_dtype="auto",
+     device_map="auto",
+ )
+ ```
+
+ ## Limitations and License Notes
+
+ - The openPangu license prohibits use of the model within the European
+   Union (Section 3 of `LICENSE`).
+ - If you distribute a product or service based on this model, the
+   license requires specific attribution and trademark notices.
+ - As with any LLM, outputs may be incorrect or biased.
+
+ ## Acknowledgements
+
+ Thanks to Huawei openPangu for the base model.
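A quick consistency check on the checkpoint size: the shard index (`model.safetensors.index.json` below) reports about 8.03 B parameters in about 16.06 GB, i.e. exactly 2 bytes per parameter, which matches the `bfloat16` dtype declared in `config.json`. Values are copied from the files in this repo:

```python
# Figures copied from model.safetensors.index.json in this repository.
total_parameters = 8030887936
total_size = 16061775872  # bytes across all shards (tensor data)

# bf16 stores each parameter in 2 bytes.
bytes_per_param = total_size / total_parameters
print(bytes_per_param)  # → 2.0
```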
chat_template.jinja ADDED
@@ -0,0 +1,4 @@
+ {% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}{% if system_message is defined %}{{ 'System: ' + system_message + '[unused10]' + '
+ ' }}{% endif %}{% for message in loop_messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ 'Human: ' + content + '[unused10]' + '
+ Assistant:' }}{% elif message['role'] == 'assistant' %}{{ content + '[unused10]' + '
+ ' }}{% endif %}{% endfor %}
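The Jinja template above renders a message list into the model's plain-text prompt, with `[unused10]` as the turn separator (which is also the configured stop token). A minimal standalone sketch of that rendering, for reference only (the helper name is illustrative, not part of the repo):

```python
# Sketch of the prompt format encoded by this repo's chat template.
# "[unused10]" is the model's turn-separator / stop token.
def build_prompt(messages):
    parts = []
    # An optional leading system message becomes a "System:" header.
    if messages and messages[0]["role"] == "system":
        parts.append("System: " + messages[0]["content"] + "[unused10]\n")
        messages = messages[1:]
    for m in messages:
        if m["role"] == "user":
            parts.append("Human: " + m["content"] + "[unused10]\nAssistant:")
        elif m["role"] == "assistant":
            parts.append(m["content"] + "[unused10]\n")
    return "".join(parts)

print(build_prompt([
    {"role": "system", "content": "Be concise."},
    {"role": "user", "content": "Hi"},
]))
# → System: Be concise.[unused10]
#   Human: Hi[unused10]
#   Assistant:
```

In practice `tokenizer.apply_chat_template` performs this rendering from the template file; the sketch only makes the expected text layout explicit.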
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "architectures": [
+     "PanguEmbeddedForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "auto_map": {
+     "AutoConfig": "configuration_openpangu_dense.PanguEmbeddedConfig",
+     "AutoModel": "modeling_openpangu_dense.PanguEmbeddedForCausalLM",
+     "AutoModelForCausalLM": "modeling_openpangu_dense.PanguEmbeddedForCausalLM"
+   },
+   "bias": true,
+   "bos_token_id": 1,
+   "dtype": "bfloat16",
+   "eos_token_id": 45892,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 12800,
+   "max_position_embeddings": 32768,
+   "model_type": "PanguEmbedded",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 34,
+   "num_key_value_heads": 8,
+   "pad_token_id": 0,
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 16000000.0,
+   "tie_word_embeddings": false,
+   "transformers_version": "4.56.1",
+   "use_cache": true,
+   "vocab_size": 153376
+ }
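The attention geometry implied by this config: a hidden size of 4096 split over 32 query heads gives a head dimension of 128, and the 32 query heads share 8 key/value heads, i.e. grouped-query attention with 4 query heads per KV group. A quick arithmetic check (values copied from `config.json`):

```python
# Values copied from config.json above.
config = {
    "hidden_size": 4096,
    "num_attention_heads": 32,
    "num_key_value_heads": 8,
}

head_dim = config["hidden_size"] // config["num_attention_heads"]
kv_groups = config["num_attention_heads"] // config["num_key_value_heads"]

print(head_dim, kv_groups)  # → 128 4
```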
configuration_openpangu_dense.py ADDED
@@ -0,0 +1,56 @@
+ # coding=utf-8
+ # Copyright (c) 2025 Huawei Technologies Co., Ltd. All rights reserved.
+
+ from transformers.utils import logging
+ from transformers.configuration_utils import PretrainedConfig
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class PanguEmbeddedConfig(PretrainedConfig):
+
+     model_type = "PanguEmbedded"
+     _auto_class = "AutoConfig"
+
+     def __init__(
+         self,
+         vocab_size=153376,
+         hidden_size=4096,
+         intermediate_size=12800,
+         num_hidden_layers=34,
+         num_attention_heads=32,
+         num_key_value_heads=8,
+         hidden_act="silu",
+         max_position_embeddings=32768,
+         initializer_range=0.02,
+         rms_norm_eps=1e-5,
+         use_cache=True,
+         pad_token_id=0,
+         bos_token_id=1,
+         eos_token_id=45892,
+         tie_word_embeddings=False,
+         rope_theta=16000000.0,
+         bias=True,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.max_position_embeddings = max_position_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.num_key_value_heads = num_key_value_heads
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.bias = bias
+         super().__init__(
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
generation_config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "do_sample": true,
+   "eos_token_id": 45892,
+   "pad_token_id": 0,
+   "top_k": 0,
+   "top_p": 0.8,
+   "transformers_version": "4.56.1"
+ }
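This generation config samples with `top_k` disabled (`0`) and nucleus sampling at `top_p = 0.8`: at each step only the smallest set of tokens whose probabilities sum to at least 0.8 is kept, and sampling happens within that set. A toy illustration of the filtering rule (not the transformers implementation):

```python
def nucleus_filter(probs, top_p=0.8):
    # Keep the smallest set of tokens whose cumulative probability
    # reaches top_p. probs maps token -> probability.
    kept, cum = [], 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept.append(tok)
        cum += p
        if cum >= top_p:
            break
    return kept

# With top_p=0.8, "a" (0.5) and "b" (0.3) already cover the nucleus.
print(nucleus_filter({"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}))  # → ['a', 'b']
```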
model-00001-of-00009.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a734321a92e9fe20e9981d66959f881b626ca032abfeb2680e383377121b02f
+ size 1948576456
model-00002-of-00009.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:730e33d40b19bcdf010ecd40152dec895510457ec55f91c014f9861f95b9071d
+ size 1992486096
model-00003-of-00009.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:098f12201dc3b194cb3398806eb2bc4c4eeaadf06ded0946e466816be3f98e9e
+ size 1992486120
model-00004-of-00009.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b448ae9a620ce50084bf0ee2a76a021bf057f570460a822d9234b55bc0860940
+ size 1992486160
model-00005-of-00009.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c505fcc197f0c60f26f1a3d2752e75ea76baf111a43e535177c6eb5bb0dc403
+ size 1992486160
model-00006-of-00009.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe211feb946c14c71ec996fb52a00c84d84e0aad32903a518d58b68ea1aa3caf
+ size 1992486160
model-00007-of-00009.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4c9db736c53f813a2d29c3c764cfde1ccc0937b3fd894adfaf29ee908e4b97e
+ size 1992486160
model-00008-of-00009.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f9466b4d52764676495e2acf4c44613fdc061a67dc6b9d503c2d8284ccff4198
+ size 901877056
model-00009-of-00009.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b65417ef013c072961910a91db017836bbfbf300422fd3985652c7807f45adc9
+ size 1256456320
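Each `*.safetensors` entry above is a Git LFS pointer file: three `key value` lines giving the spec version, the sha256 object ID, and the payload size in bytes. A small parser sketch (hypothetical helper, assuming exactly the three-line pointer format shown above):

```python
def parse_lfs_pointer(text):
    # Each pointer line is "key value"; split on the first space.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

# Pointer copied from model-00009-of-00009.safetensors above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:b65417ef013c072961910a91db017836bbfbf300422fd3985652c7807f45adc9
size 1256456320"""
print(parse_lfs_pointer(pointer)["size"])  # → 1256456320
```

Git LFS replaces these pointers with the real shard contents on checkout, so cloning without LFS installed yields only the three-line stubs.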
model.safetensors.index.json ADDED
@@ -0,0 +1,453 @@
+ {
+   "metadata": {
+     "total_parameters": 8030887936,
+     "total_size": 16061775872
+   },
+   "weight_map": {
+     "lm_head.weight": "model-00009-of-00009.safetensors",
+     "model.embed_tokens.weight": "model-00001-of-00009.safetensors",
+     "model.layers.0.input_layernorm.weight": "model-00001-of-00009.safetensors",
+     "model.layers.0.mlp.down_proj.weight": "model-00001-of-00009.safetensors",
+     "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
+     "model.layers.0.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
+     "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00009.safetensors",
+     "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00009.safetensors",
+     "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
+     "model.layers.0.self_attn.o_proj.bias": "model-00001-of-00009.safetensors",
+     "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
+     "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00009.safetensors",
+     "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
+     "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00009.safetensors",
+     "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
+     "model.layers.1.input_layernorm.weight": "model-00002-of-00009.safetensors",
+     "model.layers.1.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
+     "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
+     "model.layers.1.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
+     "model.layers.1.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
+     "model.layers.1.self_attn.k_proj.bias": "model-00001-of-00009.safetensors",
+     "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
+     "model.layers.1.self_attn.o_proj.bias": "model-00001-of-00009.safetensors",
+     "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
+     "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00009.safetensors",
+     "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
+     "model.layers.1.self_attn.v_proj.bias": "model-00001-of-00009.safetensors",
+     "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
+     "model.layers.10.input_layernorm.weight": "model-00003-of-00009.safetensors",
+     "model.layers.10.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.10.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.10.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.10.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
+     "model.layers.10.self_attn.k_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.10.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.10.self_attn.o_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.10.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.10.self_attn.q_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.10.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.10.self_attn.v_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.10.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.11.input_layernorm.weight": "model-00004-of-00009.safetensors",
+     "model.layers.11.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
+     "model.layers.11.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.11.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.11.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
+     "model.layers.11.self_attn.k_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.11.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.11.self_attn.o_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.11.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.11.self_attn.q_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.11.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.11.self_attn.v_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.11.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.12.input_layernorm.weight": "model-00004-of-00009.safetensors",
+     "model.layers.12.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
+     "model.layers.12.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
+     "model.layers.12.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
+     "model.layers.12.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
+     "model.layers.12.self_attn.k_proj.bias": "model-00004-of-00009.safetensors",
+     "model.layers.12.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
+     "model.layers.12.self_attn.o_proj.bias": "model-00004-of-00009.safetensors",
+     "model.layers.12.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
70
+ "model.layers.12.self_attn.q_proj.bias": "model-00004-of-00009.safetensors",
71
+ "model.layers.12.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
72
+ "model.layers.12.self_attn.v_proj.bias": "model-00004-of-00009.safetensors",
73
+ "model.layers.12.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
74
+ "model.layers.13.input_layernorm.weight": "model-00004-of-00009.safetensors",
75
+ "model.layers.13.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
76
+ "model.layers.13.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
77
+ "model.layers.13.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
78
+ "model.layers.13.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
79
+ "model.layers.13.self_attn.k_proj.bias": "model-00004-of-00009.safetensors",
80
+ "model.layers.13.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
81
+ "model.layers.13.self_attn.o_proj.bias": "model-00004-of-00009.safetensors",
82
+ "model.layers.13.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
83
+ "model.layers.13.self_attn.q_proj.bias": "model-00004-of-00009.safetensors",
84
+ "model.layers.13.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
85
+ "model.layers.13.self_attn.v_proj.bias": "model-00004-of-00009.safetensors",
86
+ "model.layers.13.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
87
+ "model.layers.14.input_layernorm.weight": "model-00004-of-00009.safetensors",
88
+ "model.layers.14.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
89
+ "model.layers.14.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
90
+ "model.layers.14.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
91
+ "model.layers.14.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
92
+ "model.layers.14.self_attn.k_proj.bias": "model-00004-of-00009.safetensors",
93
+ "model.layers.14.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
94
+ "model.layers.14.self_attn.o_proj.bias": "model-00004-of-00009.safetensors",
95
+ "model.layers.14.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
96
+ "model.layers.14.self_attn.q_proj.bias": "model-00004-of-00009.safetensors",
97
+ "model.layers.14.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
98
+ "model.layers.14.self_attn.v_proj.bias": "model-00004-of-00009.safetensors",
99
+ "model.layers.14.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
100
+ "model.layers.15.input_layernorm.weight": "model-00004-of-00009.safetensors",
101
+ "model.layers.15.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
102
+ "model.layers.15.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
103
+ "model.layers.15.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
104
+ "model.layers.15.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
105
+ "model.layers.15.self_attn.k_proj.bias": "model-00004-of-00009.safetensors",
106
+ "model.layers.15.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
107
+ "model.layers.15.self_attn.o_proj.bias": "model-00004-of-00009.safetensors",
108
+ "model.layers.15.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
109
+ "model.layers.15.self_attn.q_proj.bias": "model-00004-of-00009.safetensors",
110
+ "model.layers.15.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
111
+ "model.layers.15.self_attn.v_proj.bias": "model-00004-of-00009.safetensors",
112
+ "model.layers.15.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
113
+ "model.layers.16.input_layernorm.weight": "model-00005-of-00009.safetensors",
114
+ "model.layers.16.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
115
+ "model.layers.16.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
116
+ "model.layers.16.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
117
+ "model.layers.16.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
118
+ "model.layers.16.self_attn.k_proj.bias": "model-00004-of-00009.safetensors",
119
+ "model.layers.16.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
120
+ "model.layers.16.self_attn.o_proj.bias": "model-00004-of-00009.safetensors",
121
+ "model.layers.16.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
122
+ "model.layers.16.self_attn.q_proj.bias": "model-00004-of-00009.safetensors",
123
+ "model.layers.16.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
124
+ "model.layers.16.self_attn.v_proj.bias": "model-00004-of-00009.safetensors",
125
+ "model.layers.16.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
126
+ "model.layers.17.input_layernorm.weight": "model-00005-of-00009.safetensors",
127
+ "model.layers.17.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
128
+ "model.layers.17.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
129
+ "model.layers.17.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
130
+ "model.layers.17.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
131
+ "model.layers.17.self_attn.k_proj.bias": "model-00005-of-00009.safetensors",
132
+ "model.layers.17.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
133
+ "model.layers.17.self_attn.o_proj.bias": "model-00005-of-00009.safetensors",
134
+ "model.layers.17.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
135
+ "model.layers.17.self_attn.q_proj.bias": "model-00005-of-00009.safetensors",
136
+ "model.layers.17.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
137
+ "model.layers.17.self_attn.v_proj.bias": "model-00005-of-00009.safetensors",
138
+ "model.layers.17.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
139
+ "model.layers.18.input_layernorm.weight": "model-00005-of-00009.safetensors",
140
+ "model.layers.18.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
141
+ "model.layers.18.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
142
+ "model.layers.18.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
143
+ "model.layers.18.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
144
+ "model.layers.18.self_attn.k_proj.bias": "model-00005-of-00009.safetensors",
145
+ "model.layers.18.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
146
+ "model.layers.18.self_attn.o_proj.bias": "model-00005-of-00009.safetensors",
147
+ "model.layers.18.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
148
+ "model.layers.18.self_attn.q_proj.bias": "model-00005-of-00009.safetensors",
149
+ "model.layers.18.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
150
+ "model.layers.18.self_attn.v_proj.bias": "model-00005-of-00009.safetensors",
151
+ "model.layers.18.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
152
+ "model.layers.19.input_layernorm.weight": "model-00005-of-00009.safetensors",
153
+ "model.layers.19.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
154
+ "model.layers.19.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
155
+ "model.layers.19.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
156
+ "model.layers.19.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
157
+ "model.layers.19.self_attn.k_proj.bias": "model-00005-of-00009.safetensors",
158
+ "model.layers.19.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
159
+ "model.layers.19.self_attn.o_proj.bias": "model-00005-of-00009.safetensors",
160
+ "model.layers.19.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
161
+ "model.layers.19.self_attn.q_proj.bias": "model-00005-of-00009.safetensors",
162
+ "model.layers.19.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
163
+ "model.layers.19.self_attn.v_proj.bias": "model-00005-of-00009.safetensors",
164
+ "model.layers.19.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
165
+ "model.layers.2.input_layernorm.weight": "model-00002-of-00009.safetensors",
166
+ "model.layers.2.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
167
+ "model.layers.2.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
168
+ "model.layers.2.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
169
+ "model.layers.2.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
170
+ "model.layers.2.self_attn.k_proj.bias": "model-00002-of-00009.safetensors",
171
+ "model.layers.2.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
172
+ "model.layers.2.self_attn.o_proj.bias": "model-00002-of-00009.safetensors",
173
+ "model.layers.2.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
174
+ "model.layers.2.self_attn.q_proj.bias": "model-00002-of-00009.safetensors",
175
+ "model.layers.2.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
176
+ "model.layers.2.self_attn.v_proj.bias": "model-00002-of-00009.safetensors",
177
+ "model.layers.2.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
178
+ "model.layers.20.input_layernorm.weight": "model-00005-of-00009.safetensors",
179
+ "model.layers.20.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
180
+ "model.layers.20.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
181
+ "model.layers.20.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
182
+ "model.layers.20.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
183
+ "model.layers.20.self_attn.k_proj.bias": "model-00005-of-00009.safetensors",
184
+ "model.layers.20.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
185
+ "model.layers.20.self_attn.o_proj.bias": "model-00005-of-00009.safetensors",
186
+ "model.layers.20.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
187
+ "model.layers.20.self_attn.q_proj.bias": "model-00005-of-00009.safetensors",
188
+ "model.layers.20.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
189
+ "model.layers.20.self_attn.v_proj.bias": "model-00005-of-00009.safetensors",
190
+ "model.layers.20.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
191
+ "model.layers.21.input_layernorm.weight": "model-00006-of-00009.safetensors",
192
+ "model.layers.21.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
193
+ "model.layers.21.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
194
+ "model.layers.21.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
195
+ "model.layers.21.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
196
+ "model.layers.21.self_attn.k_proj.bias": "model-00005-of-00009.safetensors",
197
+ "model.layers.21.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
198
+ "model.layers.21.self_attn.o_proj.bias": "model-00005-of-00009.safetensors",
199
+ "model.layers.21.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
200
+ "model.layers.21.self_attn.q_proj.bias": "model-00005-of-00009.safetensors",
201
+ "model.layers.21.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
202
+ "model.layers.21.self_attn.v_proj.bias": "model-00005-of-00009.safetensors",
203
+ "model.layers.21.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
204
+ "model.layers.22.input_layernorm.weight": "model-00006-of-00009.safetensors",
205
+ "model.layers.22.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
206
+ "model.layers.22.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
207
+ "model.layers.22.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
208
+ "model.layers.22.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
209
+ "model.layers.22.self_attn.k_proj.bias": "model-00006-of-00009.safetensors",
210
+ "model.layers.22.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
211
+ "model.layers.22.self_attn.o_proj.bias": "model-00006-of-00009.safetensors",
212
+ "model.layers.22.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
213
+ "model.layers.22.self_attn.q_proj.bias": "model-00006-of-00009.safetensors",
214
+ "model.layers.22.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
215
+ "model.layers.22.self_attn.v_proj.bias": "model-00006-of-00009.safetensors",
216
+ "model.layers.22.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
217
+ "model.layers.23.input_layernorm.weight": "model-00006-of-00009.safetensors",
218
+ "model.layers.23.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
219
+ "model.layers.23.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
220
+ "model.layers.23.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
221
+ "model.layers.23.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
222
+ "model.layers.23.self_attn.k_proj.bias": "model-00006-of-00009.safetensors",
223
+ "model.layers.23.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
224
+ "model.layers.23.self_attn.o_proj.bias": "model-00006-of-00009.safetensors",
225
+ "model.layers.23.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
226
+ "model.layers.23.self_attn.q_proj.bias": "model-00006-of-00009.safetensors",
227
+ "model.layers.23.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
228
+ "model.layers.23.self_attn.v_proj.bias": "model-00006-of-00009.safetensors",
229
+ "model.layers.23.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
230
+ "model.layers.24.input_layernorm.weight": "model-00006-of-00009.safetensors",
231
+ "model.layers.24.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
232
+ "model.layers.24.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
233
+ "model.layers.24.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
234
+ "model.layers.24.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
235
+ "model.layers.24.self_attn.k_proj.bias": "model-00006-of-00009.safetensors",
236
+ "model.layers.24.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
237
+ "model.layers.24.self_attn.o_proj.bias": "model-00006-of-00009.safetensors",
238
+ "model.layers.24.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
239
+ "model.layers.24.self_attn.q_proj.bias": "model-00006-of-00009.safetensors",
240
+ "model.layers.24.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
241
+ "model.layers.24.self_attn.v_proj.bias": "model-00006-of-00009.safetensors",
242
+ "model.layers.24.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
243
+ "model.layers.25.input_layernorm.weight": "model-00006-of-00009.safetensors",
244
+ "model.layers.25.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
245
+ "model.layers.25.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
246
+ "model.layers.25.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
247
+ "model.layers.25.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
248
+ "model.layers.25.self_attn.k_proj.bias": "model-00006-of-00009.safetensors",
249
+ "model.layers.25.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
250
+ "model.layers.25.self_attn.o_proj.bias": "model-00006-of-00009.safetensors",
251
+ "model.layers.25.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
252
+ "model.layers.25.self_attn.q_proj.bias": "model-00006-of-00009.safetensors",
253
+ "model.layers.25.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
254
+ "model.layers.25.self_attn.v_proj.bias": "model-00006-of-00009.safetensors",
255
+ "model.layers.25.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
256
+ "model.layers.26.input_layernorm.weight": "model-00007-of-00009.safetensors",
257
+ "model.layers.26.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
258
+ "model.layers.26.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
259
+ "model.layers.26.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
260
+ "model.layers.26.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
261
+ "model.layers.26.self_attn.k_proj.bias": "model-00006-of-00009.safetensors",
262
+ "model.layers.26.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
263
+ "model.layers.26.self_attn.o_proj.bias": "model-00006-of-00009.safetensors",
264
+ "model.layers.26.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
265
+ "model.layers.26.self_attn.q_proj.bias": "model-00006-of-00009.safetensors",
266
+ "model.layers.26.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
267
+ "model.layers.26.self_attn.v_proj.bias": "model-00006-of-00009.safetensors",
268
+ "model.layers.26.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
269
+ "model.layers.27.input_layernorm.weight": "model-00007-of-00009.safetensors",
270
+ "model.layers.27.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
271
+ "model.layers.27.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
272
+ "model.layers.27.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
273
+ "model.layers.27.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
274
+ "model.layers.27.self_attn.k_proj.bias": "model-00007-of-00009.safetensors",
275
+ "model.layers.27.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
276
+ "model.layers.27.self_attn.o_proj.bias": "model-00007-of-00009.safetensors",
277
+ "model.layers.27.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
278
+ "model.layers.27.self_attn.q_proj.bias": "model-00007-of-00009.safetensors",
279
+ "model.layers.27.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
280
+ "model.layers.27.self_attn.v_proj.bias": "model-00007-of-00009.safetensors",
281
+ "model.layers.27.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
282
+ "model.layers.28.input_layernorm.weight": "model-00007-of-00009.safetensors",
283
+ "model.layers.28.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
284
+ "model.layers.28.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
285
+ "model.layers.28.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
286
+ "model.layers.28.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
287
+ "model.layers.28.self_attn.k_proj.bias": "model-00007-of-00009.safetensors",
288
+ "model.layers.28.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
289
+ "model.layers.28.self_attn.o_proj.bias": "model-00007-of-00009.safetensors",
290
+ "model.layers.28.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
291
+ "model.layers.28.self_attn.q_proj.bias": "model-00007-of-00009.safetensors",
292
+ "model.layers.28.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
293
+ "model.layers.28.self_attn.v_proj.bias": "model-00007-of-00009.safetensors",
294
+ "model.layers.28.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
295
+ "model.layers.29.input_layernorm.weight": "model-00007-of-00009.safetensors",
296
+ "model.layers.29.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
297
+ "model.layers.29.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
298
+ "model.layers.29.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
299
+ "model.layers.29.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
300
+ "model.layers.29.self_attn.k_proj.bias": "model-00007-of-00009.safetensors",
301
+ "model.layers.29.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
302
+ "model.layers.29.self_attn.o_proj.bias": "model-00007-of-00009.safetensors",
303
+ "model.layers.29.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
304
+ "model.layers.29.self_attn.q_proj.bias": "model-00007-of-00009.safetensors",
305
+ "model.layers.29.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
306
+ "model.layers.29.self_attn.v_proj.bias": "model-00007-of-00009.safetensors",
307
+ "model.layers.29.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
308
+ "model.layers.3.input_layernorm.weight": "model-00002-of-00009.safetensors",
309
+ "model.layers.3.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
310
+ "model.layers.3.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
311
+ "model.layers.3.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
312
+ "model.layers.3.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
313
+ "model.layers.3.self_attn.k_proj.bias": "model-00002-of-00009.safetensors",
314
+ "model.layers.3.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
315
+ "model.layers.3.self_attn.o_proj.bias": "model-00002-of-00009.safetensors",
316
+ "model.layers.3.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
317
+ "model.layers.3.self_attn.q_proj.bias": "model-00002-of-00009.safetensors",
318
+ "model.layers.3.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
319
+ "model.layers.3.self_attn.v_proj.bias": "model-00002-of-00009.safetensors",
320
+ "model.layers.3.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
321
+ "model.layers.30.input_layernorm.weight": "model-00007-of-00009.safetensors",
322
+ "model.layers.30.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
323
+ "model.layers.30.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
324
+ "model.layers.30.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
325
+ "model.layers.30.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
326
+ "model.layers.30.self_attn.k_proj.bias": "model-00007-of-00009.safetensors",
327
+ "model.layers.30.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
328
+ "model.layers.30.self_attn.o_proj.bias": "model-00007-of-00009.safetensors",
329
+ "model.layers.30.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
330
+ "model.layers.30.self_attn.q_proj.bias": "model-00007-of-00009.safetensors",
331
+ "model.layers.30.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
332
+ "model.layers.30.self_attn.v_proj.bias": "model-00007-of-00009.safetensors",
333
+ "model.layers.30.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
334
+ "model.layers.31.input_layernorm.weight": "model-00008-of-00009.safetensors",
335
+ "model.layers.31.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
336
+ "model.layers.31.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
337
+ "model.layers.31.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
338
+ "model.layers.31.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
339
+ "model.layers.31.self_attn.k_proj.bias": "model-00007-of-00009.safetensors",
340
+ "model.layers.31.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
341
+ "model.layers.31.self_attn.o_proj.bias": "model-00007-of-00009.safetensors",
342
+ "model.layers.31.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
343
+ "model.layers.31.self_attn.q_proj.bias": "model-00007-of-00009.safetensors",
344
+ "model.layers.31.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
345
+ "model.layers.31.self_attn.v_proj.bias": "model-00007-of-00009.safetensors",
346
+ "model.layers.31.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
347
+ "model.layers.32.input_layernorm.weight": "model-00008-of-00009.safetensors",
348
+ "model.layers.32.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
349
+ "model.layers.32.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
350
+ "model.layers.32.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
351
+ "model.layers.32.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
352
+ "model.layers.32.self_attn.k_proj.bias": "model-00008-of-00009.safetensors",
353
+ "model.layers.32.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
354
+ "model.layers.32.self_attn.o_proj.bias": "model-00008-of-00009.safetensors",
355
+ "model.layers.32.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
356
+ "model.layers.32.self_attn.q_proj.bias": "model-00008-of-00009.safetensors",
357
+ "model.layers.32.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
358
+ "model.layers.32.self_attn.v_proj.bias": "model-00008-of-00009.safetensors",
359
+ "model.layers.32.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
360
+ "model.layers.33.input_layernorm.weight": "model-00008-of-00009.safetensors",
361
+ "model.layers.33.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
362
+ "model.layers.33.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
363
+ "model.layers.33.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
364
+ "model.layers.33.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
365
+ "model.layers.33.self_attn.k_proj.bias": "model-00008-of-00009.safetensors",
366
+ "model.layers.33.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
367
+ "model.layers.33.self_attn.o_proj.bias": "model-00008-of-00009.safetensors",
368
+ "model.layers.33.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
369
+ "model.layers.33.self_attn.q_proj.bias": "model-00008-of-00009.safetensors",
370
+ "model.layers.33.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
371
+ "model.layers.33.self_attn.v_proj.bias": "model-00008-of-00009.safetensors",
372
+ "model.layers.33.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
373
+ "model.layers.4.input_layernorm.weight": "model-00002-of-00009.safetensors",
374
+ "model.layers.4.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
375
+ "model.layers.4.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
376
+ "model.layers.4.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
377
+ "model.layers.4.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
378
+ "model.layers.4.self_attn.k_proj.bias": "model-00002-of-00009.safetensors",
379
+ "model.layers.4.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
380
+ "model.layers.4.self_attn.o_proj.bias": "model-00002-of-00009.safetensors",
381
+ "model.layers.4.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
382
+ "model.layers.4.self_attn.q_proj.bias": "model-00002-of-00009.safetensors",
383
+ "model.layers.4.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
384
+ "model.layers.4.self_attn.v_proj.bias": "model-00002-of-00009.safetensors",
385
+ "model.layers.4.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
386
+ "model.layers.5.input_layernorm.weight": "model-00002-of-00009.safetensors",
387
+ "model.layers.5.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
388
+ "model.layers.5.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
389
+ "model.layers.5.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
390
+ "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
391
+ "model.layers.5.self_attn.k_proj.bias": "model-00002-of-00009.safetensors",
392
+ "model.layers.5.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
393
+ "model.layers.5.self_attn.o_proj.bias": "model-00002-of-00009.safetensors",
394
+ "model.layers.5.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
395
+ "model.layers.5.self_attn.q_proj.bias": "model-00002-of-00009.safetensors",
396
+ "model.layers.5.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
397
+ "model.layers.5.self_attn.v_proj.bias": "model-00002-of-00009.safetensors",
398
+ "model.layers.5.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
399
+ "model.layers.6.input_layernorm.weight": "model-00003-of-00009.safetensors",
400
+ "model.layers.6.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
401
+ "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
402
+ "model.layers.6.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
403
+     "model.layers.6.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
+     "model.layers.6.self_attn.k_proj.bias": "model-00002-of-00009.safetensors",
+     "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
+     "model.layers.6.self_attn.o_proj.bias": "model-00002-of-00009.safetensors",
+     "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
+     "model.layers.6.self_attn.q_proj.bias": "model-00002-of-00009.safetensors",
+     "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
+     "model.layers.6.self_attn.v_proj.bias": "model-00002-of-00009.safetensors",
+     "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
+     "model.layers.7.input_layernorm.weight": "model-00003-of-00009.safetensors",
+     "model.layers.7.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.7.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.7.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.7.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
+     "model.layers.7.self_attn.k_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.7.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.7.self_attn.o_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.7.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.7.self_attn.q_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.7.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.7.self_attn.v_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.7.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.8.input_layernorm.weight": "model-00003-of-00009.safetensors",
+     "model.layers.8.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.8.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.8.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.8.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
+     "model.layers.8.self_attn.k_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.8.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.8.self_attn.o_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.8.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.8.self_attn.q_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.8.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.8.self_attn.v_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.8.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.9.input_layernorm.weight": "model-00003-of-00009.safetensors",
+     "model.layers.9.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.9.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.9.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.9.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
+     "model.layers.9.self_attn.k_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.9.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.9.self_attn.o_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.9.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.9.self_attn.q_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.9.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
+     "model.layers.9.self_attn.v_proj.bias": "model-00003-of-00009.safetensors",
+     "model.layers.9.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
+     "model.norm.weight": "model-00008-of-00009.safetensors"
+   }
+ }
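The `weight_map` above assigns each parameter name to the shard file that stores it; a loader opens only the shards it needs. A minimal sketch of grouping such an index by shard file (the `index` dict below is a small illustrative excerpt, not the full file):

```python
# Group parameters in a safetensors index by the shard file that holds them.
index = {
    "weight_map": {
        "model.layers.9.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
        "model.norm.weight": "model-00008-of-00009.safetensors",
    }
}

def group_by_shard(index):
    """Return {shard_file: [parameter names stored in that shard]}."""
    shards = {}
    for name, shard_file in index["weight_map"].items():
        shards.setdefault(shard_file, []).append(name)
    return shards

shards = group_by_shard(index)
print(shards)
```

In the real repository the same dict would be read from `model.safetensors.index.json` with `json.load` before grouping.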
modeling_openpangu_dense.py ADDED
@@ -0,0 +1,590 @@
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+ # This file was automatically generated from modular_openpangu_dense.py.
+ # Do NOT edit this file manually as any edits will be overwritten by the generation of
+ # the file from the modular. If any change should be done, please apply the change to the
+ # modular_openpangu_dense.py file directly. One of our CI enforces this.
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+
+ # coding=utf-8
+ # Copyright (c) 2025 Huawei Technologies Co., Ltd. All rights reserved.
+ # Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
+ #
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
+ # and OPT implementations in this library. It has been modified from its
+ # original forms to accommodate minor architectural differences compared
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ from typing import Callable, Optional, Union
+
+ import torch
+ from torch import nn
+
+ try:
+     import torch_npu
+     from torch_npu.contrib import transfer_to_npu
+     if "910" in torch.npu.get_device_name():
+         NPU_ATTN_INFR = True
+         print("[INFO] torch_npu detected. Using NPU fused infer attention.")
+     else:
+         NPU_ATTN_INFR = False
+ except ImportError:
+     NPU_ATTN_INFR = False
+
+ from transformers.activations import ACT2FN
+ from transformers.cache_utils import Cache, DynamicCache
+ from transformers.generation import GenerationMixin
+ from transformers.masking_utils import create_causal_mask
+ from transformers.modeling_flash_attention_utils import FlashAttentionKwargs
+ from transformers.modeling_layers import GradientCheckpointingLayer
+ from transformers.modeling_outputs import (
+     BaseModelOutputWithPast,
+     CausalLMOutputWithPast,
+     SequenceClassifierOutputWithPast,
+ )
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS, dynamic_rope_update
+ from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel
+ from transformers.processing_utils import Unpack
+ from transformers.utils import auto_docstring, can_return_tuple, logging
+ # from transformers.utils import LossKwargs, auto_docstring, can_return_tuple, logging
+
+ from .configuration_openpangu_dense import PanguEmbeddedConfig
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class PanguEmbeddedRMSNorm(nn.Module):
+     def __init__(self, hidden_size, eps=1e-6):
+         """
+         PanguEmbeddedRMSNorm is equivalent to T5LayerNorm
+         """
+         super().__init__()
+         self.weight = nn.Parameter(torch.ones(hidden_size))
+         self.variance_epsilon = eps
+
+     def forward(self, hidden_states):
+         input_dtype = hidden_states.dtype
+         hidden_states = hidden_states.to(torch.float32)
+         variance = hidden_states.pow(2).mean(-1, keepdim=True)
+         hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
+         return self.weight * hidden_states.to(input_dtype)
+
+     def extra_repr(self):
+         return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"
+
+
+ class PanguEmbeddedRotaryEmbedding(nn.Module):
+     def __init__(self, config: PanguEmbeddedConfig, device=None):
+         super().__init__()
+         # BC: "rope_type" was originally "type"
+         if hasattr(config, "rope_scaling") and config.rope_scaling is not None:
+             self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
+         else:
+             self.rope_type = "default"
+         self.max_seq_len_cached = config.max_position_embeddings
+         self.original_max_seq_len = config.max_position_embeddings
+
+         self.config = config
+         self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
+
+         inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device)
+         self.register_buffer("inv_freq", inv_freq, persistent=False)
+         self.original_inv_freq = self.inv_freq
+
+     @torch.no_grad()
+     @dynamic_rope_update  # power user: used with advanced RoPE types (e.g. dynamic rope)
+     def forward(self, x, position_ids):
+         inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
+         position_ids_expanded = position_ids[:, None, :].float()
+
+         device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != "mps" else "cpu"
+         with torch.autocast(device_type=device_type, enabled=False):  # Force float32
+             freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
+             emb = torch.cat((freqs, freqs), dim=-1)
+             cos = emb.cos() * self.attention_scaling
+             sin = emb.sin() * self.attention_scaling
+
+         return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
+
+
+ def rotate_half(x):
+     """Rotates half the hidden dims of the input."""
+     x1 = x[..., : x.shape[-1] // 2]
+     x2 = x[..., x.shape[-1] // 2 :]
+     return torch.cat((-x2, x1), dim=-1)
+
+
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
+     """Applies Rotary Position Embedding to the query and key tensors.
+
+     Args:
+         q (`torch.Tensor`): The query tensor.
+         k (`torch.Tensor`): The key tensor.
+         cos (`torch.Tensor`): The cosine part of the rotary embedding.
+         sin (`torch.Tensor`): The sine part of the rotary embedding.
+         position_ids (`torch.Tensor`, *optional*):
+             Deprecated and unused.
+         unsqueeze_dim (`int`, *optional*, defaults to 1):
+             The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
+             sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
+             that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
+             k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
+             cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
+             the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
+     Returns:
+         `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
+     """
+     cos = cos.unsqueeze(unsqueeze_dim)
+     sin = sin.unsqueeze(unsqueeze_dim)
+     q_embed = (q * cos) + (rotate_half(q) * sin)
+     k_embed = (k * cos) + (rotate_half(k) * sin)
+     return q_embed, k_embed
+
+
+ class PanguEmbeddedMLP(nn.Module):
+     def __init__(self, config):
+         super().__init__()
+         self.config = config
+         self.hidden_size = config.hidden_size
+         self.intermediate_size = config.intermediate_size
+         self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
+         self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
+         self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+         self.act_fn = ACT2FN[config.hidden_act]
+
+     def forward(self, x):
+         down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
+         return down_proj
+
+
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
+     """
+     This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
+     num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
+     """
+     batch, num_key_value_heads, slen, head_dim = hidden_states.shape
+     if n_rep == 1:
+         return hidden_states
+     hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
+     return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
+
+
+ def eager_attention_forward(
+     module: nn.Module,
+     query: torch.Tensor,
+     key: torch.Tensor,
+     value: torch.Tensor,
+     attention_mask: Optional[torch.Tensor],
+     scaling: float,
+     dropout: float = 0.0,
+     **kwargs,
+ ):
+     key_states = repeat_kv(key, module.num_key_value_groups)
+     value_states = repeat_kv(value, module.num_key_value_groups)
+
+     attn_weights = torch.matmul(query, key_states.transpose(2, 3)) * scaling
+     if attention_mask is not None:
+         causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
+         attn_weights = attn_weights + causal_mask
+
+     attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
+     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
+     attn_output = torch.matmul(attn_weights, value_states)
+     attn_output = attn_output.transpose(1, 2).contiguous()
+
+     return attn_output, attn_weights
+
+
+ class PanguEmbeddedAttention(nn.Module):
+     """Multi-headed attention from 'Attention Is All You Need' paper"""
+
+     def __init__(self, config: PanguEmbeddedConfig, layer_idx: int):
+         super().__init__()
+         self.config = config
+         self.layer_idx = layer_idx
+         self.head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
+         self.num_heads = config.num_attention_heads
+         self.num_key_value_heads = config.num_key_value_heads
+         self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
+         self.scaling = self.head_dim**-0.5
+         self.attention_dropout = config.attention_dropout
+         self.is_causal = True
+
+         self.q_proj = nn.Linear(config.hidden_size, config.num_attention_heads * self.head_dim, bias=config.bias)
+         self.k_proj = nn.Linear(config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.bias)
+         self.v_proj = nn.Linear(config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.bias)
+         self.o_proj = nn.Linear(config.num_attention_heads * self.head_dim, config.hidden_size, bias=config.bias)
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         position_embeddings: tuple[torch.Tensor, torch.Tensor],
+         attention_mask: Optional[torch.Tensor],
+         past_key_value: Optional[Cache] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         **kwargs: Unpack[FlashAttentionKwargs],
+     ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]:
+         input_shape = hidden_states.shape[:-1]
+         hidden_shape = (*input_shape, -1, self.head_dim)
+
+         query_states = self.q_proj(hidden_states).view(hidden_shape).transpose(1, 2)
+         key_states = self.k_proj(hidden_states).view(hidden_shape).transpose(1, 2)
+         value_states = self.v_proj(hidden_states).view(hidden_shape).transpose(1, 2)
+
+         cos, sin = position_embeddings
+         query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
+
+         if past_key_value is not None:
+             # sin and cos are specific to RoPE models; cache_position needed for the static cache
+             cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
+             key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
+
+         attention_interface: Callable = eager_attention_forward
+         if self.config._attn_implementation != "eager":
+             attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
+
+         if not self.training and NPU_ATTN_INFR:
+             q_len = input_shape[1]
+             if attention_mask is not None:
+                 attention_mask = ~attention_mask.bool()
+             elif q_len > 1:
+                 attention_mask = torch.triu(torch.ones([q_len, q_len]), diagonal=1).bool().unsqueeze(0).unsqueeze(0).to(query_states.device)
+
+             attn_output, _ = torch_npu.npu_fused_infer_attention_score(
+                 query_states, key_states, value_states,
+                 num_heads=self.num_heads, num_key_value_heads=self.num_key_value_heads,
+                 input_layout="BNSD", atten_mask=attention_mask, scale=self.scaling)
+             attn_output = attn_output.transpose(1, 2)
+             attn_weights = None
+         else:
+             attn_output, attn_weights = attention_interface(
+                 self,
+                 query_states,
+                 key_states,
+                 value_states,
+                 attention_mask,
+                 dropout=0.0 if not self.training else self.attention_dropout,
+                 scaling=self.scaling,
+                 **kwargs,
+             )
+
+         attn_output = attn_output.reshape(*input_shape, -1).contiguous()
+         attn_output = self.o_proj(attn_output)
+         return attn_output, attn_weights
+
+
+ class PanguEmbeddedDecoderLayer(GradientCheckpointingLayer):
+     def __init__(self, config: PanguEmbeddedConfig, layer_idx: int):
+         super().__init__()
+         self.hidden_size = config.hidden_size
+         self.self_attn = PanguEmbeddedAttention(config=config, layer_idx=layer_idx)
+         self.mlp = PanguEmbeddedMLP(config)
+         self.input_layernorm = PanguEmbeddedRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.post_attention_layernorm = PanguEmbeddedRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Cache] = None,
+         output_attentions: Optional[bool] = False,
+         use_cache: Optional[bool] = False,
+         cache_position: Optional[torch.LongTensor] = None,
+         position_embeddings: Optional[tuple[torch.Tensor, torch.Tensor]] = None,  # necessary, but kept here for BC
+         **kwargs: Unpack[FlashAttentionKwargs],
+     ) -> tuple[torch.FloatTensor, Optional[tuple[torch.FloatTensor, torch.FloatTensor]]]:
+         residual = hidden_states
+         hidden_states = self.input_layernorm(hidden_states)
+
+         # Self Attention
+         hidden_states, self_attn_weights = self.self_attn(
+             hidden_states=hidden_states,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_value=past_key_value,
+             output_attentions=output_attentions,
+             use_cache=use_cache,
+             cache_position=cache_position,
+             position_embeddings=position_embeddings,
+             **kwargs,
+         )
+         hidden_states = residual + hidden_states
+
+         # Fully Connected
+         residual = hidden_states
+         hidden_states = self.post_attention_layernorm(hidden_states)
+         hidden_states = self.mlp(hidden_states)
+         hidden_states = residual + hidden_states
+
+         outputs = (hidden_states,)
+         if output_attentions:
+             outputs += (self_attn_weights,)
+
+         return outputs
+
+
+ @auto_docstring
+ class PanguEmbeddedPreTrainedModel(PreTrainedModel):
+     config_class = PanguEmbeddedConfig
+     base_model_prefix = "model"
+     supports_gradient_checkpointing = True
+     _no_split_modules = ["PanguEmbeddedDecoderLayer"]
+     _skip_keys_device_placement = ["past_key_values"]
+     _supports_flash_attn_3 = True
+     _supports_flash_attn_2 = True
+     _supports_sdpa = True
+     _supports_flex_attn = True
+     _supports_cache_class = True
+     _supports_quantized_cache = True
+     _supports_static_cache = True
+     _supports_attention_backend = True
+
+     def _init_weights(self, module):
+         std = self.config.initializer_range
+         if isinstance(module, nn.Linear):
+             module.weight.data.normal_(mean=0.0, std=std)
+             if module.bias is not None:
+                 module.bias.data.zero_()
+         elif isinstance(module, nn.Embedding):
+             module.weight.data.normal_(mean=0.0, std=std)
+             if module.padding_idx is not None:
+                 module.weight.data[module.padding_idx].zero_()
+         elif isinstance(module, PanguEmbeddedRMSNorm):
+             module.weight.data.fill_(1.0)
+
+
+ @auto_docstring
+ class PanguEmbeddedModel(PanguEmbeddedPreTrainedModel):
+     def __init__(self, config: PanguEmbeddedConfig):
+         super().__init__(config)
+         self.padding_idx = config.pad_token_id
+         self.vocab_size = config.vocab_size
+
+         self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+         self.layers = nn.ModuleList(
+             [PanguEmbeddedDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
+         )
+         self.norm = PanguEmbeddedRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.rotary_emb = PanguEmbeddedRotaryEmbedding(config=config)
+         self.gradient_checkpointing = False
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.embed_tokens
+
+     def set_input_embeddings(self, value):
+         self.embed_tokens = value
+
+     @can_return_tuple
+     @auto_docstring
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Cache] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         **flash_attn_kwargs: Unpack[FlashAttentionKwargs],
+     ) -> BaseModelOutputWithPast:
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+
+         if (input_ids is None) ^ (inputs_embeds is not None):
+             raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
+
+         if self.gradient_checkpointing and self.training and use_cache:
+             logger.warning_once(
+                 "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
+             )
+             use_cache = False
+
+         # TODO (joao): remove this exception in v4.56 -- it exists for users that try to pass a legacy cache
+         if not isinstance(past_key_values, (type(None), Cache)):
+             raise ValueError("The `past_key_values` should be either a `Cache` object or `None`.")
+
+         if inputs_embeds is None:
+             inputs_embeds = self.embed_tokens(input_ids)
+
+         if use_cache and past_key_values is None:
+             past_key_values = DynamicCache()
+
+         if cache_position is None:
+             past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
+             cache_position = torch.arange(
+                 past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
+             )
+
+         if position_ids is None:
+             position_ids = cache_position.unsqueeze(0)
+
+         causal_mask = create_causal_mask(
+             config=self.config,
+             input_embeds=inputs_embeds,
+             attention_mask=attention_mask,
+             cache_position=cache_position,
+             past_key_values=past_key_values,
+             position_ids=position_ids,
+         )
+
+         hidden_states = inputs_embeds
+
+         # create position embeddings to be shared across the decoder layers
+         position_embeddings = self.rotary_emb(hidden_states, position_ids)
+
+         # decoder layers
+         all_hidden_states = () if output_hidden_states else None
+         all_self_attns = () if output_attentions else None
+
+         for decoder_layer in self.layers[: self.config.num_hidden_layers]:
+             if output_hidden_states:
+                 all_hidden_states += (hidden_states,)
+
+             layer_outputs = decoder_layer(
+                 hidden_states,
+                 attention_mask=causal_mask,
+                 position_ids=position_ids,
+                 past_key_value=past_key_values,
+                 output_attentions=output_attentions,
+                 use_cache=use_cache,
+                 cache_position=cache_position,
+                 position_embeddings=position_embeddings,
+                 **flash_attn_kwargs,
+             )
+
+             hidden_states = layer_outputs[0]
+
+             if output_attentions:
+                 all_self_attns += (layer_outputs[1],)
+
+         hidden_states = self.norm(hidden_states)
+
+         # add hidden states from the last decoder layer
+         if output_hidden_states:
+             all_hidden_states += (hidden_states,)
+
+         return BaseModelOutputWithPast(
+             last_hidden_state=hidden_states,
+             past_key_values=past_key_values if use_cache else None,
+             hidden_states=all_hidden_states,
+             attentions=all_self_attns,
+         )
+
+
+ class KwargsForCausalLM(FlashAttentionKwargs): ...
+
+
+ @auto_docstring
+ class PanguEmbeddedForCausalLM(PanguEmbeddedPreTrainedModel, GenerationMixin):
+     _tied_weights_keys = ["lm_head.weight"]
+     _tp_plan = {"lm_head": "colwise_rep"}
+     _pp_plan = {"lm_head": (["hidden_states"], ["logits"])}
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.model = PanguEmbeddedModel(config)
+         self.vocab_size = config.vocab_size
+         self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.model.embed_tokens
+
+     def set_input_embeddings(self, value):
+         self.model.embed_tokens = value
+
+     def get_output_embeddings(self):
+         return self.lm_head
+
+     def set_output_embeddings(self, new_embeddings):
+         self.lm_head = new_embeddings
+
+     def set_decoder(self, decoder):
+         self.model = decoder
+
+     def get_decoder(self):
+         return self.model
+
+     @can_return_tuple
+     @auto_docstring
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Cache] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         logits_to_keep: Union[int, torch.Tensor] = 0,
+         **kwargs: Unpack[KwargsForCausalLM],
+     ) -> CausalLMOutputWithPast:
+
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+
+         # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
+         outputs: BaseModelOutputWithPast = self.model(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             cache_position=cache_position,
+             **kwargs,
+         )
+
+         hidden_states = outputs.last_hidden_state
+         # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
+         slice_indices = slice(-logits_to_keep, None) if isinstance(logits_to_keep, int) else logits_to_keep
+         logits = self.lm_head(hidden_states[:, slice_indices, :])
+
+         loss = None
+         if labels is not None:
+             loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size, **kwargs)
+
+         return CausalLMOutputWithPast(
+             loss=loss,
+             logits=logits,
+             past_key_values=outputs.past_key_values,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
+
+
+ __all__ = [
+     "PanguEmbeddedForCausalLM",
+     "PanguEmbeddedModel",
+     "PanguEmbeddedPreTrainedModel",
+ ]
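`PanguEmbeddedRMSNorm` above normalizes each hidden vector by its root mean square over the last dimension (no mean-centering, unlike LayerNorm) and then applies a learned elementwise scale. A minimal pure-Python sketch of the same arithmetic on a toy vector (illustrative values only, unrelated to the released weights):

```python
import math

def rms_norm(xs, weight, eps=1e-6):
    # Same arithmetic as PanguEmbeddedRMSNorm.forward, on a plain list:
    # x / sqrt(mean(x^2) + eps), then elementwise scale by weight.
    variance = sum(x * x for x in xs) / len(xs)
    inv_rms = 1.0 / math.sqrt(variance + eps)
    return [w * x * inv_rms for w, x in zip(weight, xs)]

out = rms_norm([3.0, 4.0], [1.0, 1.0])
# rms of [3, 4] is sqrt(12.5) ≈ 3.5355, so out ≈ [0.8485, 1.1314]
print(out)
```

The module version does the same computation in float32 before casting back to the input dtype, which keeps the variance estimate stable under bf16/fp16.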
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "[unused10]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenization_openpangu.py ADDED
@@ -0,0 +1,273 @@
+ # coding=utf-8
+ # Copyright (c) 2025 Huawei Technologies Co., Ltd. All rights reserved.
+ # Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
+ #
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
+ # and OPT implementations in this library. It has been modified from its
+ # original forms to accommodate minor architectural differences compared
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ import os
+ from shutil import copyfile
+ from typing import Any, Dict, List, Optional, Tuple
+
+ import sentencepiece as spm
+
+ from transformers.tokenization_utils import PreTrainedTokenizer
+ from transformers.utils import logging
+
+
+ logger = logging.get_logger(__name__)
+
+ VOCAB_FILES_NAMES = {"vocab_file": "./tokenizer.model"}
+
+ PRETRAINED_VOCAB_FILES_MAP = {}
+
+
+ def convert_bool(string):
+     if isinstance(string, str):
+         if string.lower() == "true":
+             return True
+         elif string.lower() == "false":
+             return False
+         else:
+             return string
+     else:
+         return string
+
+
+ class PanguTokenizer(PreTrainedTokenizer):
+     """
+     Construct a tokenizer. Based on byte-level Byte-Pair-Encoding.
+
+     Args:
+         vocab_file (`str`):
+             Path to the vocabulary file.
+     """
+
+     vocab_files_names = VOCAB_FILES_NAMES
+     pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
+     model_input_names = ["input_ids", "attention_mask"]
+     _auto_class = "AutoTokenizer"
+
+     def __init__(
+         self,
+         vocab_file,
+         unk_token="<unk>",
+         bos_token="<s>",
+         eos_token="</s>",
+         pad_token="</s>",
+         sp_model_kwargs: Optional[Dict[str, Any]] = None,
+         add_bos_token=True,
+         add_eos_token=False,
+         decode_with_prefix_space=False,
+         clean_up_tokenization_spaces=False,
+         **kwargs,
+     ):
+         self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
+         self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
+         self.sp_model.Load(vocab_file)
+         super().__init__(
+             bos_token=bos_token,
+             eos_token=eos_token,
+             unk_token=unk_token,
+             pad_token=pad_token,
+             clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+             **kwargs,
+         )
+         self.vocab_file = vocab_file
+         self.add_bos_token = convert_bool(add_bos_token)
+         self.add_eos_token = add_eos_token
+         self.decode_with_prefix_space = decode_with_prefix_space
+         self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
+         self.sp_model.Load(vocab_file)
+         self._no_prefix_space_tokens = None
+
+     """ Initialisation"""
+
+     @property
+     def no_prefix_space_tokens(self):
+         if self._no_prefix_space_tokens is None:
+             vocab = self.convert_ids_to_tokens(list(range(self.vocab_size)))
+             self._no_prefix_space_tokens = {i for i, tok in enumerate(vocab) if not tok.startswith("▁")}
+         return self._no_prefix_space_tokens
+
+     @property
+     def vocab_size(self):
+         """Returns vocab size"""
+         return self.sp_model.get_piece_size()
+
+     @property
+     def bos_token_id(self) -> Optional[int]:
+         return self.sp_model.bos_id()
+
+     @property
+     def eos_token_id(self) -> Optional[int]:
+         return super().eos_token_id
+
+     def get_vocab(self):
+         """Returns vocab as a dict"""
+         vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
+         vocab.update(self.added_tokens_encoder)
+         return vocab
+
+     def _tokenize(self, text):
+         """Returns a tokenized string."""
+         return self.sp_model.encode(text, out_type=str)
+
+     def _convert_token_to_id(self, token):
+         """Converts a token (str) in an id using the vocab."""
+         return self.sp_model.piece_to_id(token)
+
+     def _convert_id_to_token(self, index):
+         """Converts an index (integer) in a token (str) using the vocab."""
+         token = self.sp_model.IdToPiece(index)
+         return token
+
+     def _maybe_add_prefix_space(self, tokens, decoded):
+         if tokens and tokens[0] not in self.no_prefix_space_tokens:
+             return " " + decoded
+         else:
+             return decoded
+
+     def convert_tokens_to_string(self, tokens):
+         """Converts a sequence of tokens (string) in a single string."""
147
+ current_sub_tokens = []
148
+ out_string = ""
149
+ prev_is_special = False
150
+ for token in tokens:
151
+ # make sure that special tokens are not decoded using sentencepiece model
152
+ if token in self.all_special_tokens:
153
+ # Decode the current sub-tokens first
154
+ if current_sub_tokens:
155
+ out_string += self.sp_model.decode(current_sub_tokens)
156
+ current_sub_tokens = []
157
+ # Append the special token without adding extra spaces
158
+ out_string += token
159
+ prev_is_special = True
160
+ else:
161
+ current_sub_tokens.append(token)
162
+ prev_is_special = False
163
+ # Decode any remaining sub-tokens
164
+ if current_sub_tokens:
165
+ out_string += self.sp_model.decode(current_sub_tokens)
166
+ # Clean up leading and trailing spaces
167
+ if self.clean_up_tokenization_spaces:
168
+ out_string = self.clean_up_tokenization(out_string)
169
+ out_string = self._maybe_add_prefix_space(tokens=tokens, decoded=out_string)
170
+ return out_string[1:]
171
+
172
+ # Override decode to set spaces_between_special_tokens to True as default
173
+ def decode(self,
174
+ token_ids,
175
+ spaces_between_special_tokens: bool = False,
176
+ **kwargs):
177
+ return super().decode(
178
+ token_ids=token_ids,
179
+ spaces_between_special_tokens=spaces_between_special_tokens,
180
+ **kwargs,
181
+ )
182
+
183
+ def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]:
184
+ """
185
+ Save the vocabulary and special tokens file to a directory.
186
+
187
+ Args:
188
+ save_directory (`str`):
189
+ The directory in which to save the vocabulary.
190
+
191
+ Returns:
192
+ `Tuple(str)`: Paths to the files saved.
193
+ """
194
+ if not os.path.isdir(save_directory):
195
+ logger.error(f"Vocabulary path ({save_directory}) should be a directory")
196
+ return ("",)
197
+ out_vocab_file = os.path.join(
198
+ save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
199
+ )
200
+
201
+ if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
202
+ copyfile(self.vocab_file, out_vocab_file)
203
+ elif not os.path.isfile(self.vocab_file):
204
+ with open(out_vocab_file, "wb") as fi:
205
+ content_spiece_model = self.sp_model.serialized_model_proto()
206
+ fi.write(content_spiece_model)
207
+
208
+ return (out_vocab_file,)
209
+
210
+ def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
211
+ if self.add_bos_token:
212
+ bos_token_ids = [self.bos_token_id]
213
+ else:
214
+ bos_token_ids = []
215
+
216
+ output = bos_token_ids + token_ids_0
217
+
218
+ if token_ids_1 is not None:
219
+ output = output + token_ids_1
220
+
221
+ if self.add_eos_token:
222
+ output = output + [self.eos_token_id]
223
+
224
+ return output
225
+
226
+ def get_special_tokens_mask(
227
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
228
+ ) -> List[int]:
229
+ """
230
+ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
231
+ special tokens using the tokenizer `prepare_for_model` method.
232
+
233
+ Args:
234
+ token_ids_0 (`List[int]`):
235
+ List of IDs.
236
+ token_ids_1 (`List[int]`, *optional*):
237
+ Optional second list of IDs for sequence pairs.
238
+ already_has_special_tokens (`bool`, *optional*, defaults to `False`):
239
+ Whether or not the token list is already formatted with special tokens for the model.
240
+
241
+ Returns:
242
+ `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
243
+ """
244
+ if already_has_special_tokens:
245
+ return super().get_special_tokens_mask(
246
+ token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
247
+ )
248
+
249
+ if token_ids_1 is None:
250
+ return [1] + ([0] * len(token_ids_0)) + [1]
251
+ return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]
252
+
253
+ def create_token_type_ids_from_sequences(
254
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
255
+ ) -> List[int]:
256
+ """
257
+ Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
258
+ use of token type ids, therefore a list of zeros is returned.
259
+
260
+ Args:
261
+ token_ids_0 (`List[int]`):
262
+ List of IDs.
263
+ token_ids_1 (`List[int]`, *optional*):
264
+ Optional second list of IDs for sequence pairs.
265
+
266
+ Returns:
267
+ `List[int]`: List of zeros.
268
+ """
269
+ eos = [self.eos_token_id]
270
+
271
+ if token_ids_1 is None:
272
+ return len(token_ids_0 + eos) * [0]
273
+ return len(token_ids_0 + eos + token_ids_1 + eos) * [0]
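The BOS/EOS wrapping performed by `build_inputs_with_special_tokens` above can be sketched as a standalone function. This is an illustrative sketch only: the token IDs below are placeholders, not the model's real vocabulary IDs.

```python
# Standalone sketch of PanguTokenizer.build_inputs_with_special_tokens.
# BOS_ID and EOS_ID are illustrative placeholders, not the model's real IDs.
BOS_ID, EOS_ID = 1, 2


def build_inputs(token_ids_0, token_ids_1=None, add_bos=True, add_eos=False):
    # Optionally prepend BOS, concatenate the optional second sequence,
    # and optionally append EOS -- mirroring the method's control flow.
    output = ([BOS_ID] if add_bos else []) + token_ids_0
    if token_ids_1 is not None:
        output = output + token_ids_1
    if add_eos:
        output = output + [EOS_ID]
    return output


print(build_inputs([5, 6, 7]))                      # [1, 5, 6, 7]
print(build_inputs([5, 6], [8, 9], add_eos=True))   # [1, 5, 6, 8, 9, 2]
```

Note that, as in the class above, no separator token is inserted between the two sequences of a pair; the second sequence is simply concatenated.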
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b16f1558c0cd4ae6ef1a2c605713be0a514f50e1ce2d2c878979ce988c148ec
+ size 2477809
tokenizer_config.json ADDED
@@ -0,0 +1,336 @@
+ {
+   "add_prefix_space": true,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45806": {
+       "content": "<|User|>:",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45813": {
+       "content": "<|Bot|>:",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45830": {
+       "content": "[unused0]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45840": {
+       "content": "[unused1]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45846": {
+       "content": "[unused2]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45849": {
+       "content": "[unused3]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45861": {
+       "content": "[unused4]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45866": {
+       "content": "[unused5]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45874": {
+       "content": "[unused6]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45883": {
+       "content": "[unused7]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45884": {
+       "content": "[unused8]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45887": {
+       "content": "[unused9]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45892": {
+       "content": "[unused10]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45920": {
+       "content": "[unused11]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45932": {
+       "content": "[unused12]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45938": {
+       "content": "[unused13]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45953": {
+       "content": "[unused14]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45968": {
+       "content": "[unused15]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45974": {
+       "content": "[unused16]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45982": {
+       "content": "[unused17]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "45986": {
+       "content": "[unused18]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "46005": {
+       "content": "[unused19]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "46007": {
+       "content": "[unused20]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "46014": {
+       "content": "[unused21]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "46017": {
+       "content": "[unused22]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "46028": {
+       "content": "[unused23]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "46032": {
+       "content": "[unused24]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "46081": {
+       "content": "[unused25]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "46086": {
+       "content": "[unused26]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "46101": {
+       "content": "[unused27]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "46183": {
+       "content": "[unused28]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "46230": {
+       "content": "[unused29]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "46245": {
+       "content": "[unused30]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "46257": {
+       "content": "[unused31]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "144208": {
+       "content": "[unused32]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "144209": {
+       "content": "[unused33]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "auto_map": {
+     "AutoTokenizer": [
+       "tokenization_openpangu.PanguTokenizer",
+       null
+     ]
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "[unused10]",
+   "extra_special_tokens": {},
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "<unk>",
+   "padding_side": "left",
+   "spaces_between_special_tokens": false,
+   "split_special_tokens": false,
+   "tokenizer_class": "PanguTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }