rajeevp Cursor committed

Commit 3c65415 · 1 Parent(s): dd8dffb

Reorganize into internlm2-1.8b-cpu-int4-awq and internlm2-7b-cpu-int4-awq directories
README.md DELETED
@@ -1,260 +0,0 @@
<!--
Copyright (C) [2026] Advanced Micro Devices, Inc. All rights reserved. Portions of this file consist of AI-generated content.
-->

# InternLM2 Model Export for ONNX Runtime GenAI

This example demonstrates how to export InternLM2 models to ONNX format using ONNX Runtime GenAI.

## Supported Models

All InternLM2 model sizes are supported:

- ✅ **InternLM2-1.8B** - Tested and verified
- ✅ **InternLM2-7B** - Tested and verified
- ✅ **InternLM2-20B** - Fully compatible
- ✅ **InternLM2-Chat variants** - All sizes supported

The implementation is architecture-based and automatically adapts to any InternLM2 model size.

## Model Architecture

InternLM2 uses a Llama-based architecture with the following key features:

- **Attention**: Grouped Query Attention (GQA) with a grouped/interleaved QKV layout
- **Normalization**: RMSNorm (eps: 1e-05)
- **Activation**: SiLU
- **Positional Encoding**: RoPE with theta=1,000,000

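For reference, the theta above enters the standard RoPE frequency formula; a minimal sketch (illustrative, not the builder's code):

```python
import numpy as np

# Standard RoPE inverse frequencies for head_dim=128, theta=1,000,000
head_dim, theta = 128, 1_000_000.0
inv_freq = 1.0 / (theta ** (np.arange(0, head_dim, 2) / head_dim))
print(inv_freq.shape)  # (64,) - one frequency per rotated dimension pair
```
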
### Architecture Specifications

| Parameter | 1.8B | 7B | 20B |
|-----------|------|-----|-----|
| **Hidden Size** | 2048 | 4096 | 6144 |
| **Num Layers** | 24 | 32 | 48 |
| **Q Heads** | 16 | 32 | 48 |
| **KV Heads** | 8 | 8 | 8 |
| **Head Dim** | 128 | 128 | 128 |
| **Intermediate Size** | 8192 | 14336 | 16384 |
| **GQA Ratio** | 2:1 | 4:1 | 6:1 |
| **Context Length** | 32,768 | 32,768 | 32,768 |
| **Vocab Size** | 92,544 | 92,544 | 92,544 |

## Export Examples

### InternLM2-1.8B

**FP32 (Best quality baseline):**
```bash
python -m onnxruntime_genai.models.builder \
    --input internlm/internlm2-1_8b \
    --output ./internlm2-1.8b-cpu-fp32 \
    --precision fp32 \
    --execution_provider cpu
```

**INT4 RTN (Fast quantization):**
```bash
python -m onnxruntime_genai.models.builder \
    --input internlm/internlm2-1_8b \
    --output ./internlm2-1.8b-cpu-int4 \
    --precision int4 \
    --execution_provider cpu
```

**INT4 AWQ (Best quality, recommended):**
```bash
python -m onnxruntime_genai.models.builder \
    --input internlm/internlm2-1_8b \
    --output ./internlm2-1.8b-cpu-int4-awq \
    --precision int4 \
    --execution_provider cpu \
    --extra_options int4_accuracy_level=4
```

### InternLM2-7B

**INT4 AWQ CPU (Recommended for most users):**
```bash
python -m onnxruntime_genai.models.builder \
    --input internlm/internlm2-7b \
    --output ./internlm2-7b-cpu-int4-awq \
    --precision int4 \
    --execution_provider cpu \
    --extra_options int4_accuracy_level=4
```

**INT4 AWQ CUDA (For GPU inference):**
```bash
python -m onnxruntime_genai.models.builder \
    --input internlm/internlm2-7b \
    --output ./internlm2-7b-cuda-int4-awq \
    --precision int4 \
    --execution_provider cuda \
    --extra_options int4_accuracy_level=4
```

**FP16 CUDA (Highest quality on GPU):**
```bash
python -m onnxruntime_genai.models.builder \
    --input internlm/internlm2-7b \
    --output ./internlm2-7b-cuda-fp16 \
    --precision fp16 \
    --execution_provider cuda
```

### InternLM2-20B

**INT4 AWQ CUDA (Recommended):**
```bash
python -m onnxruntime_genai.models.builder \
    --input internlm/internlm2-20b \
    --output ./internlm2-20b-cuda-int4-awq \
    --precision int4 \
    --execution_provider cuda \
    --extra_options int4_accuracy_level=4
```

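After any of the exports above, the output directory should contain the runtime files seen in this commit's folders. A minimal sketch to verify (the path is an example; adjust to your export):

```python
from pathlib import Path

# Expected builder outputs - the same files present in this repo's directories
out = Path("./internlm2-1.8b-cpu-int4-awq")
for name in ["genai_config.json", "model.onnx", "model.onnx.data", "tokenizer.json", "tokenizer.model"]:
    print(f"{name}: {'OK' if (out / name).exists() else 'MISSING'}")
```
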
## Model Size & Performance

| Model | Original Size | INT4 Quantized | FP16 | Recommended RAM |
|-------|--------------|----------------|------|-----------------|
| **InternLM2-1.8B** | ~3.6 GB | ~1.0 GB | ~3.6 GB | 4 GB |
| **InternLM2-7B** | ~14 GB | ~3.8 GB | ~14 GB | 8 GB |
| **InternLM2-20B** | ~40 GB | ~10.5 GB | ~40 GB | 24 GB |

**CPU Inference (Approximate):**

| Model | Min RAM | Recommended RAM | Typical Speed |
|-------|---------|-----------------|---------------|
| 1.8B INT4 | 4 GB | 8 GB | 8-12 tok/s |
| 7B INT4 | 8 GB | 16 GB | 2-4 tok/s |
| 20B INT4 | 16 GB | 32 GB | 0.5-1 tok/s |

**GPU Inference (CUDA):**

| Model | Min VRAM | Recommended VRAM | Typical Speed |
|-------|----------|------------------|---------------|
| 1.8B INT4 | 2 GB | 4 GB | 50-80 tok/s |
| 7B INT4 | 6 GB | 8 GB | 30-50 tok/s |
| 7B FP16 | 14 GB | 16 GB | 40-60 tok/s |
| 20B INT4 | 12 GB | 16 GB | 20-30 tok/s |
| 20B FP16 | 40 GB | 48 GB | 25-35 tok/s |

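These throughput figures are approximate; a short sketch to remeasure on your own hardware, reusing the same API as the inference example below (the model path is an example):

```python
import time

import onnxruntime_genai as og

model = og.Model("./internlm2-1.8b-cpu-int4-awq")
tokenizer = og.Tokenizer(model)
params = og.GeneratorParams(model)
params.set_search_options(max_length=128)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("Benchmark prompt"))

# Time the decode loop and report tokens per second
start, count = time.perf_counter(), 0
while not generator.is_done():
    generator.generate_next_token()
    count += 1
print(f"{count / (time.perf_counter() - start):.1f} tok/s")
```
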
## Inference Example

```python
import onnxruntime_genai as og

# Works with any InternLM2 size
model = og.Model("./internlm2-7b-cpu-int4-awq")
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()

# Set generation parameters
prompt = "What is the meaning of life?"
tokens = tokenizer.encode(prompt)

params = og.GeneratorParams(model)
params.set_search_options(
    do_sample=True,  # required for temperature/top_p/top_k to take effect
    max_length=200,
    temperature=0.7,
    top_p=0.9,
    top_k=40
)

# Generate text
generator = og.Generator(model, params)
generator.append_tokens(tokens)

print(prompt, end="", flush=True)
while not generator.is_done():
    generator.generate_next_token()
    new_token = generator.get_next_tokens()[0]
    print(tokenizer_stream.decode(new_token), end="", flush=True)
print()
```

## Why Multi-Size Support Works

### Architecture-Based Implementation

The implementation is **size-agnostic** because it:

1. **Dynamically reads config parameters** from each model:
   - `num_attention_heads`
   - `num_key_value_heads`
   - `hidden_size`
   - `num_hidden_layers`
   - `intermediate_size`

2. **Uses config-driven weight splitting**:

   ```python
   # Reads from the model config
   num_q_heads = config.num_attention_heads      # 16 for 1.8B, 32 for 7B, 48 for 20B
   num_kv_heads = config.num_key_value_heads     # Always 8 for InternLM2
   head_dim = config.hidden_size // num_q_heads  # Always 128

   # Calculates the group size dynamically
   num_kv_groups = num_q_heads // num_kv_heads   # 2 for 1.8B, 4 for 7B, 6 for 20B
   group_size = num_kv_groups + 2                # Q heads per group, plus one K and one V
   ```

3. **Handles the grouped QKV layout** for any GQA ratio (see the sketch after this list):
   - Layout: `[Group0: Q0,Q1,...,K0,V0 | Group1: Q2,Q3,...,K1,V1 | ...]`
   - Each KV group contains multiple Q heads followed by one K head and one V head
   - Weights are extracted correctly regardless of the Q/KV head ratio

4. **No hardcoded sizes** anywhere in the code

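To make point 3 concrete, here is a minimal NumPy sketch of splitting an interleaved `wqkv` weight under this layout (shapes and variable names are illustrative assumptions, not the actual builder code):

```python
import numpy as np

# Assumed 7B config values: 32 Q heads, 8 KV heads, head_dim 128, hidden 4096
num_q_heads, num_kv_heads, head_dim, hidden = 32, 8, 128, 4096
num_kv_groups = num_q_heads // num_kv_heads  # 4 Q heads per KV group
group_size = num_kv_groups + 2               # Q heads plus one K and one V per group

# wqkv packs all groups along dim 0: [num_kv_heads * group_size * head_dim, hidden]
wqkv = np.random.randn(num_kv_heads * group_size * head_dim, hidden).astype(np.float32)

# Expose the group structure: [kv_group, slot_in_group, head_dim, hidden]
w = wqkv.reshape(num_kv_heads, group_size, head_dim, hidden)

q = w[:, :num_kv_groups].reshape(num_q_heads * head_dim, hidden)  # all Q heads, in order
k = w[:, -2].reshape(num_kv_heads * head_dim, hidden)             # one K head per group
v = w[:, -1].reshape(num_kv_heads * head_dim, hidden)             # one V head per group
```

The same reshape works for every model size because only `num_q_heads` and `num_kv_heads` change.
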
210
- ### Key Implementation Notes
211
-
212
- **Grouped QKV Layout:**
213
- - InternLM2 uses a grouped/interleaved QKV weight layout for efficient Grouped Query Attention
214
- - The implementation in `src/python/py/models/builders/internlm.py` correctly handles this layout during weight extraction
215
-
216
- **Model Configuration:**
217
- - The exported model uses `model_type: "llama"` for ONNX Runtime GenAI compatibility
218
- - Tokenizer uses `tokenizer_class: "LlamaTokenizer"` (SentencePiece-based)
219
-
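The exported `genai_config.json` (included in this commit) records that compatibility setting; a minimal sketch to confirm it:

```python
import json

# The exported config should register the llama architecture
with open("./internlm2-7b-cpu-int4-awq/genai_config.json") as f:
    config = json.load(f)
print(config["model"]["type"])  # expected: "llama"
```
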
## Recommendations by Use Case

### Development & Testing
- **InternLM2-1.8B INT4 AWQ** (~1 GB)
  - Fast iteration, quick testing
  - Good for prototyping

### Production Applications
- **InternLM2-7B INT4 AWQ** (~3.8 GB)
  - Best balance of quality and performance
  - Suitable for most real-world applications

### High-Quality Applications
- **InternLM2-7B FP16 CUDA** (~14 GB) or
- **InternLM2-20B INT4 CUDA** (~10.5 GB)
  - Maximum quality for critical applications

## Troubleshooting

### "Out of Memory" errors
- Use INT4 quantization instead of FP16/FP32
- Enable GPU inference for larger models
- Use batch_size=1 for inference

### Slow inference on CPU
- This is expected for 7B+ models
- Consider GPU inference
- Use INT4 quantization (roughly 2-3x faster than FP16)

### Model not loading
- Ensure you have enough RAM/VRAM
- Check that the model was exported with `--execution_provider cuda` if you intend to run it on GPU
- Verify the ONNX Runtime GenAI installation (see the sketch below)

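A minimal install/load sanity check for the last point (the model path is an example; note that CUDA builds ship as the separate `onnxruntime-genai-cuda` package):

```python
from importlib.metadata import version

import onnxruntime_genai as og

print("onnxruntime-genai", version("onnxruntime-genai"))  # confirms the package is installed
model = og.Model("./internlm2-1.8b-cpu-int4-awq")  # raises if config or weights are missing
print("model loaded")
```
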
## References

- Model Hub (1.8B): https://huggingface.co/internlm/internlm2-1_8b
- Model Hub (7B): https://huggingface.co/internlm/internlm2-7b
- Model Hub (20B): https://huggingface.co/internlm/internlm2-20b
- Paper: https://arxiv.org/abs/2403.17297
- GitHub: https://github.com/InternLM/InternLM
genai_config.json → internlm2-1.8b-cpu-int4-awq/genai_config.json RENAMED
File without changes
model.onnx → internlm2-1.8b-cpu-int4-awq/model.onnx RENAMED
File without changes
internlm2-1.8b-cpu-int4-awq/model.onnx.data ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3dc0644a406bab41fc434b82d5c8d15052a192ac65c04b5c1390b3e6c55a1490
size 1837563904
special_tokens_map.json → internlm2-1.8b-cpu-int4-awq/special_tokens_map.json RENAMED
File without changes
tokenization_internlm2.py → internlm2-1.8b-cpu-int4-awq/tokenization_internlm2.py RENAMED
File without changes
tokenization_internlm2_fast.py → internlm2-1.8b-cpu-int4-awq/tokenization_internlm2_fast.py RENAMED
File without changes
internlm2-1.8b-cpu-int4-awq/tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:53bf68bdb380527f8c67449108021556e11c9de61aee72f818778143a99ecb50
size 10540375
tokenizer.model → internlm2-1.8b-cpu-int4-awq/tokenizer.model RENAMED
File without changes
internlm2-1.8b-cpu-int4-awq/tokenizer_config.json ADDED
@@ -0,0 +1,46 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "auto_map": {
    "AutoTokenizer": [
      "tokenization_internlm2.InternLM2Tokenizer",
      "tokenization_internlm2_fast.InternLM2TokenizerFast"
    ]
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "decode_with_prefix_space": false,
  "eos_token": "</s>",
  "extra_special_tokens": {},
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "</s>",
  "sp_model_kwargs": null,
  "tokenizer_class": "InternLM2Tokenizer",
  "unk_token": "<unk>"
}
internlm2-7b-cpu-int4-awq/genai_config.json ADDED
@@ -0,0 +1,49 @@
{
  "model": {
    "bos_token_id": 1,
    "context_length": 32768,
    "decoder": {
      "session_options": {
        "log_id": "onnxruntime-genai",
        "provider_options": []
      },
      "filename": "model.onnx",
      "head_size": 128,
      "hidden_size": 4096,
      "inputs": {
        "input_ids": "input_ids",
        "attention_mask": "attention_mask",
        "past_key_names": "past_key_values.%d.key",
        "past_value_names": "past_key_values.%d.value"
      },
      "outputs": {
        "logits": "logits",
        "present_key_names": "present.%d.key",
        "present_value_names": "present.%d.value"
      },
      "num_attention_heads": 32,
      "num_hidden_layers": 32,
      "num_key_value_heads": 8
    },
    "eos_token_id": 2,
    "pad_token_id": 2,
    "type": "llama",
    "vocab_size": 92544
  },
  "search": {
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": true,
    "length_penalty": 1.0,
    "max_length": 32768,
    "min_length": 0,
    "no_repeat_ngram_size": 0,
    "num_beams": 1,
    "num_return_sequences": 1,
    "past_present_share_buffer": true,
    "repetition_penalty": 1.0,
    "temperature": 1.0,
    "top_k": 50,
    "top_p": 1.0
  }
}
internlm2-7b-cpu-int4-awq/model.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:862b2f22bc845237107303a06832042c9d4641fab30a740b7ef6dfed99b146c8
size 239348
internlm2-7b-cpu-int4-awq/model.onnx.data ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a41e20041f6eb7f1b24f6c539c9bb6681dd7ab5450f5351fa8966d0514f05f7
size 6133121024
internlm2-7b-cpu-int4-awq/special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
{
  "bos_token": "<s>",
  "eos_token": "</s>",
  "pad_token": "</s>",
  "unk_token": "<unk>"
}
internlm2-7b-cpu-int4-awq/tokenization_internlm2.py ADDED
@@ -0,0 +1,236 @@
# coding=utf-8
# Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
#
# This code is based on transformers/src/transformers/models/llama/tokenization_llama.py
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Tokenization classes for InternLM."""
import os
from shutil import copyfile
from typing import Any, Dict, List, Optional, Tuple

import sentencepiece as spm
from transformers.tokenization_utils import PreTrainedTokenizer
from transformers.utils import logging

logger = logging.get_logger(__name__)

VOCAB_FILES_NAMES = {"vocab_file": "./tokenizer.model"}

PRETRAINED_VOCAB_FILES_MAP = {}


# Modified from transformers.model.llama.tokenization_llama.LlamaTokenizer
class InternLM2Tokenizer(PreTrainedTokenizer):
    """
    Construct an InternLM2 tokenizer. Based on byte-level Byte-Pair-Encoding.

    Args:
        vocab_file (`str`):
            Path to the vocabulary file.
    """

    vocab_files_names = VOCAB_FILES_NAMES
    pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
    model_input_names = ["input_ids", "attention_mask"]
    _auto_class = "AutoTokenizer"

    def __init__(
        self,
        vocab_file,
        unk_token="<unk>",
        bos_token="<s>",
        eos_token="</s>",
        pad_token="</s>",
        sp_model_kwargs: Optional[Dict[str, Any]] = None,
        add_bos_token=True,
        add_eos_token=False,
        decode_with_prefix_space=False,
        clean_up_tokenization_spaces=False,
        **kwargs,
    ):
        self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
        self.vocab_file = vocab_file
        self.add_bos_token = add_bos_token
        self.add_eos_token = add_eos_token
        self.decode_with_prefix_space = decode_with_prefix_space
        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(vocab_file)
        self._no_prefix_space_tokens = None
        super().__init__(
            bos_token=bos_token,
            eos_token=eos_token,
            unk_token=unk_token,
            pad_token=pad_token,
            clean_up_tokenization_spaces=clean_up_tokenization_spaces,
            **kwargs,
        )

    @property
    def no_prefix_space_tokens(self):
        if self._no_prefix_space_tokens is None:
            vocab = self.convert_ids_to_tokens(list(range(self.vocab_size)))
            self._no_prefix_space_tokens = {i for i, tok in enumerate(vocab) if not tok.startswith("▁")}
        return self._no_prefix_space_tokens

    @property
    def vocab_size(self):
        """Returns vocab size"""
        return self.sp_model.get_piece_size()

    @property
    def bos_token_id(self) -> Optional[int]:
        return self.sp_model.bos_id()

    @property
    def eos_token_id(self) -> Optional[int]:
        return self.sp_model.eos_id()

    def get_vocab(self):
        """Returns vocab as a dict"""
        vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
        vocab.update(self.added_tokens_encoder)
        return vocab

    def _tokenize(self, text):
        """Returns a tokenized string."""
        return self.sp_model.encode(text, out_type=str)

    def _convert_token_to_id(self, token):
        """Converts a token (str) to an id using the vocab."""
        return self.sp_model.piece_to_id(token)

    def _convert_id_to_token(self, index):
        """Converts an index (integer) to a token (str) using the vocab."""
        token = self.sp_model.IdToPiece(index)
        return token

    def _maybe_add_prefix_space(self, tokens, decoded):
        if tokens and tokens[0] not in self.no_prefix_space_tokens:
            return " " + decoded
        else:
            return decoded

    def convert_tokens_to_string(self, tokens):
        """Converts a sequence of tokens (strings) into a single string."""
        current_sub_tokens = []
        out_string = ""
        prev_is_special = False
        for token in tokens:
            # make sure that special tokens are not decoded using sentencepiece model
            if token in self.all_special_tokens:
                if not prev_is_special:
                    out_string += " "
                out_string += self.sp_model.decode(current_sub_tokens) + token
                prev_is_special = True
                current_sub_tokens = []
            else:
                current_sub_tokens.append(token)
                prev_is_special = False
        out_string += self.sp_model.decode(current_sub_tokens)
        out_string = self.clean_up_tokenization(out_string)
        out_string = self._maybe_add_prefix_space(tokens=tokens, decoded=out_string)
        return out_string[1:]

    def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]:
        """
        Save the vocabulary and special tokens file to a directory.

        Args:
            save_directory (`str`):
                The directory in which to save the vocabulary.

        Returns:
            `Tuple(str)`: Paths to the files saved.
        """
        if not os.path.isdir(save_directory):
            logger.error(f"Vocabulary path ({save_directory}) should be a directory")
            return
        out_vocab_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
        )

        if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
            copyfile(self.vocab_file, out_vocab_file)
        elif not os.path.isfile(self.vocab_file):
            with open(out_vocab_file, "wb") as fi:
                content_spiece_model = self.sp_model.serialized_model_proto()
                fi.write(content_spiece_model)

        return (out_vocab_file,)

    def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
        if self.add_bos_token:
            bos_token_ids = [self.bos_token_id]
        else:
            bos_token_ids = []

        output = bos_token_ids + token_ids_0

        if token_ids_1 is not None:
            output = output + token_ids_1

        if self.add_eos_token:
            output = output + [self.eos_token_id]

        return output

    def get_special_tokens_mask(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
    ) -> List[int]:
        """
        Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
        special tokens using the tokenizer `prepare_for_model` method.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.
            already_has_special_tokens (`bool`, *optional*, defaults to `False`):
                Whether or not the token list is already formatted with special tokens for the model.

        Returns:
            `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
        """
        if already_has_special_tokens:
            return super().get_special_tokens_mask(
                token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
            )

        if token_ids_1 is None:
            return [1] + ([0] * len(token_ids_0)) + [1]
        return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]

    def create_token_type_ids_from_sequences(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        """
        Create a mask from the two sequences passed to be used in a sequence-pair classification task. InternLM2 does
        not make use of token type ids, therefore a list of zeros is returned.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.

        Returns:
            `List[int]`: List of zeros.
        """
        eos = [self.eos_token_id]

        if token_ids_1 is None:
            return len(token_ids_0 + eos) * [0]
        return len(token_ids_0 + eos + token_ids_1 + eos) * [0]
internlm2-7b-cpu-int4-awq/tokenization_internlm2_fast.py ADDED
@@ -0,0 +1,214 @@
# coding=utf-8
# Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
#
# This code is based on transformers/src/transformers/models/llama/tokenization_llama_fast.py
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Tokenization Fast class for InternLM."""
import os
from shutil import copyfile
from typing import Any, Dict, Optional, Tuple

from tokenizers import processors, decoders, Tokenizer, normalizers
from tokenizers.models import BPE

from transformers.tokenization_utils_fast import PreTrainedTokenizerFast
from transformers.utils import logging

from transformers.convert_slow_tokenizer import (
    SLOW_TO_FAST_CONVERTERS,
    SpmConverter,
    SentencePieceExtractor,
)

from .tokenization_internlm2 import InternLM2Tokenizer

logger = logging.get_logger(__name__)

VOCAB_FILES_NAMES = {"vocab_file": "./tokenizer.model"}


# Modified from transformers.convert_slow_tokenizer.LlamaConverter
class InternLM2Converter(SpmConverter):
    handle_byte_fallback = True

    def vocab(self, proto):
        vocab = [
            ("<unk>", 0.0),
            ("<s>", 0.0),
            ("</s>", 0.0),
        ]
        vocab += [(piece.piece, piece.score) for piece in proto.pieces[3:]]
        return vocab

    def unk_id(self, proto):
        unk_id = 0
        return unk_id

    def decoder(self, replacement, add_prefix_space):
        decoders_sequence = [
            decoders.Replace("▁", " "),
            decoders.ByteFallback(),
            decoders.Fuse(),
        ]
        if self.proto.normalizer_spec.add_dummy_prefix:
            decoders_sequence.append(decoders.Strip(content=" ", left=1))
        return decoders.Sequence(decoders_sequence)

    def tokenizer(self, proto):
        model_type = proto.trainer_spec.model_type
        vocab_scores = self.vocab(proto)
        # special tokens
        added_tokens = self.original_tokenizer.added_tokens_decoder
        for i in range(len(vocab_scores)):
            piece, score = vocab_scores[i]
            if i in added_tokens:
                vocab_scores[i] = (added_tokens[i].content, score)
        if model_type == 1:
            raise RuntimeError("InternLM2 is supposed to be a BPE model!")
        elif model_type == 2:
            _, merges = SentencePieceExtractor(self.original_tokenizer.vocab_file).extract(vocab_scores)
            bpe_vocab = {word: i for i, (word, _score) in enumerate(vocab_scores)}
            tokenizer = Tokenizer(
                BPE(bpe_vocab, merges, unk_token=proto.trainer_spec.unk_piece, fuse_unk=True, byte_fallback=True)
            )
            tokenizer.add_special_tokens(
                [added_token for index, added_token in added_tokens.items()]
            )
        else:
            raise Exception(
                "You're trying to run a `Unigram` model but your file was trained with a different algorithm"
            )

        return tokenizer

    def normalizer(self, proto):
        normalizers_list = []
        if proto.normalizer_spec.add_dummy_prefix:
            normalizers_list.append(normalizers.Prepend(prepend="▁"))
        normalizers_list.append(normalizers.Replace(pattern=" ", content="▁"))
        return normalizers.Sequence(normalizers_list)

    def pre_tokenizer(self, replacement, add_prefix_space):
        return None


SLOW_TO_FAST_CONVERTERS["InternLM2Tokenizer"] = InternLM2Converter


# Modified from transformers.model.llama.tokenization_llama_fast.LlamaTokenizerFast -> InternLM2TokenizerFast
class InternLM2TokenizerFast(PreTrainedTokenizerFast):
    vocab_files_names = VOCAB_FILES_NAMES
    slow_tokenizer_class = InternLM2Tokenizer
    padding_side = "left"
    model_input_names = ["input_ids", "attention_mask"]
    _auto_class = "AutoTokenizer"

    def __init__(
        self,
        vocab_file,
        unk_token="<unk>",
        bos_token="<s>",
        eos_token="</s>",
        pad_token="</s>",
        sp_model_kwargs: Optional[Dict[str, Any]] = None,
        add_bos_token=True,
        add_eos_token=False,
        decode_with_prefix_space=False,
        clean_up_tokenization_spaces=False,
        **kwargs,
    ):
        super().__init__(
            vocab_file=vocab_file,
            unk_token=unk_token,
            bos_token=bos_token,
            eos_token=eos_token,
            pad_token=pad_token,
            sp_model_kwargs=sp_model_kwargs,
            add_bos_token=add_bos_token,
            add_eos_token=add_eos_token,
            decode_with_prefix_space=decode_with_prefix_space,
            clean_up_tokenization_spaces=clean_up_tokenization_spaces,
            **kwargs,
        )
        self._add_bos_token = add_bos_token
        self._add_eos_token = add_eos_token
        self.update_post_processor()
        self.vocab_file = vocab_file

    @property
    def can_save_slow_tokenizer(self) -> bool:
        return os.path.isfile(self.vocab_file) if self.vocab_file else False

    def update_post_processor(self):
        """
        Updates the underlying post processor with the current `bos_token` and `eos_token`.
        """
        bos = self.bos_token
        bos_token_id = self.bos_token_id
        if bos is None and self.add_bos_token:
            raise ValueError("add_bos_token = True but bos_token = None")

        eos = self.eos_token
        eos_token_id = self.eos_token_id
        if eos is None and self.add_eos_token:
            raise ValueError("add_eos_token = True but eos_token = None")

        single = f"{(bos+':0 ') if self.add_bos_token else ''}$A:0{(' '+eos+':0') if self.add_eos_token else ''}"
        pair = f"{single}{(' '+bos+':1') if self.add_bos_token else ''} $B:1{(' '+eos+':1') if self.add_eos_token else ''}"

        special_tokens = []
        if self.add_bos_token:
            special_tokens.append((bos, bos_token_id))
        if self.add_eos_token:
            special_tokens.append((eos, eos_token_id))
        self._tokenizer.post_processor = processors.TemplateProcessing(
            single=single, pair=pair, special_tokens=special_tokens
        )

    @property
    def add_eos_token(self):
        return self._add_eos_token

    @property
    def add_bos_token(self):
        return self._add_bos_token

    @add_eos_token.setter
    def add_eos_token(self, value):
        self._add_eos_token = value
        self.update_post_processor()

    @add_bos_token.setter
    def add_bos_token(self, value):
        self._add_bos_token = value
        self.update_post_processor()

    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
        if not self.can_save_slow_tokenizer:
            raise ValueError(
                "Your fast tokenizer does not have the necessary information to save the vocabulary for a slow "
                "tokenizer."
            )

        if not os.path.isdir(save_directory):
            logger.error(f"Vocabulary path ({save_directory}) should be a directory")
            return
        out_vocab_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
        )

        if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):
            copyfile(self.vocab_file, out_vocab_file)

        return (out_vocab_file,)
internlm2-7b-cpu-int4-awq/tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c6fe97617a059964f5afbfb575339f10960465c2f4ae16d8f533d7766092181a
size 10540271
internlm2-7b-cpu-int4-awq/tokenizer.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f868398fc4e05ee1e8aeba95ddf18ddcc45b8bce55d5093bead5bbf80429b48b
size 1477754
tokenizer_config.json → internlm2-7b-cpu-int4-awq/tokenizer_config.json RENAMED
File without changes