rishitt committed
Commit ca2aed9 · verified · 1 Parent(s): 61c6643

Upload 10 files
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,283 @@
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
pipeline_tag: text-generation
license: other
license_name: proprietary
license_link: LICENSE
tags:
- base_model:adapter:Qwen/Qwen2.5-7B-Instruct
- lora
- transformers
- compliance
- nist
- control-extraction
- regulatory
---

# NIST Control Extraction LoRA Adapter

A fine-tuned LoRA adapter for **Qwen2.5-7B-Instruct**, designed for accurate extraction of security controls from NIST framework documents. This adapter substantially reduces the hallucination issues present in the base model, so that control enhancements and related text are no longer mistaken for valid controls.

## Key Features

- **Accurate Control Extraction**: Precisely identifies control IDs, titles, and descriptions from framework documents
- **Reduced Hallucination**: Trained to distinguish actual controls from control enhancements and related content
- **Fast Inference**: Processes the 492-page NIST document in ~15 minutes (vs. ~27 minutes with the base model)
- **Structured Output**: Returns controls in clean JSON format with an `<END>` token for reliable parsing

---

## Model Details

### Model Description

This LoRA adapter specializes Qwen2.5-7B-Instruct for the task of extracting security controls from compliance framework documents. The adapter was trained with a custom weighted loss function that penalizes false positives more heavily than false negatives, reflecting the requirement in compliance auditing that incorrectly identifying a control is more problematic than missing one.

| Property | Value |
|----------|-------|
| **Developed by** | Rishit Sharma |
| **Model Type** | LoRA Adapter (PEFT) |
| **Base Model** | [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) |
| **Language** | English |
| **Domain** | Compliance & Regulatory Frameworks |
| **License** | Proprietary - No use allowed without prior permission |

---

## Model Architecture

### LoRA Configuration

| Parameter | Value |
|-----------|-------|
| **Rank (r)** | 16 |
| **Alpha** | 32 |
| **Dropout** | 0.05 |
| **Target Modules** | `q_proj`, `k_proj`, `v_proj`, `o_proj` |
| **Bias** | None |
| **Task Type** | CAUSAL_LM |
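As a sanity check on these numbers: under the standard LoRA parameterization the low-rank update `B @ A` is scaled by `alpha / r`, and each adapted matrix adds `r * (d_in + d_out)` trainable parameters. A small illustrative calculation (the helper name is ours; 3584 is Qwen2.5-7B's published hidden size):

```python
def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters added by one LoRA-adapted matrix:
    A has shape (r, d_in) and B has shape (d_out, r)."""
    return r * (d_in + d_out)

r, alpha = 16, 32
scaling = alpha / r  # LoRA applies W + (alpha / r) * B @ A, so the update is scaled 2x here

# e.g. the square q_proj at hidden size 3584 adds ~115k trainable parameters
q_proj_params = lora_param_count(3584, 3584, r)
```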

### Quantization (Training)

| Parameter | Value |
|-----------|-------|
| **Quantization** | 4-bit (QLoRA) |
| **Quant Type** | NF4 |
| **Double Quantization** | Enabled |
| **Compute Dtype** | bfloat16 |

### Special Tokens

- **`<END>`**: Custom stop token appended to outputs for reliable generation termination

---

## Intended Use

### Primary Use Case

Building autonomous compliance auditing agents that can:
- Parse and analyze framework documents (PDF/text)
- Extract structured control information automatically
- Verify deployment status of controls within an organization
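The extraction step of such an agent can be sketched as a per-page loop that deduplicates on control IDs. This is a minimal illustration only; `extract_page` is a placeholder for the model call shown in the Quick Start and is not part of the released code:

```python
def extract_document_controls(pages, extract_page):
    """Aggregate per-page extractions into one deduplicated control list.

    `pages` is a list of page texts; `extract_page` is any callable that
    returns a list of control dicts (with "control_id", "control_title",
    "control_desc") for a single page.
    """
    seen = {}
    for page_text in pages:
        for control in extract_page(page_text):
            # keep the first occurrence of each control ID
            seen.setdefault(control["control_id"], control)
    return list(seen.values())
```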

### Target Users

- Compliance Officers & Auditors
- GRC (Governance, Risk, Compliance) Teams
- Security Analysts
- Organizations undergoing NIST compliance assessments

---

## Quick Start

### Installation

```bash
pip install transformers peft torch accelerate bitsandbytes
```

### Loading the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# Quantization config (optional, for memory efficiency)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True
)

# Load the tokenizer from the adapter directory (it includes the added <END> token)
tokenizer = AutoTokenizer.from_pretrained("path/to/final_adapter")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "path/to/final_adapter")
```

### Inference Example

```python
system_prompt = """You are a senior Compliance Auditor and Regulatory Analyst specialized in ISO, NIST, and statutory frameworks."""

page_text = "..."  # text of one framework page, e.g. extracted from the PDF

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": f"Analyze this text:\n\n{page_text}"}
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=False  # greedy decoding; temperature is ignored when sampling is disabled
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```

### Expected Output Format

```json
[
  {
    "control_id": "AC-1",
    "control_title": "Access Control Policy and Procedures",
    "control_desc": "Description of the control..."
  }
]
<END>
```
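Since `<END>` marks the end of the payload, the decoded response can be parsed defensively, e.g. with a helper along these lines (an illustrative sketch, not part of the released code):

```python
import json

REQUIRED_KEYS = {"control_id", "control_title", "control_desc"}

def parse_controls(response: str) -> list:
    """Parse the model's JSON output, tolerating a trailing <END> marker.

    Returns an empty list for pages without controls or for malformed
    output, so a document-level loop can simply skip them.
    """
    payload = response.split("<END>", 1)[0].strip()
    try:
        controls = json.loads(payload)
    except json.JSONDecodeError:
        return []
    if not isinstance(controls, list):
        return []
    # keep only well-formed control records
    return [c for c in controls if isinstance(c, dict) and REQUIRED_KEYS <= c.keys()]
```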

---

## Training Details

### Training Configuration

| Parameter | Value |
|-----------|-------|
| **Hardware** | NVIDIA RTX 5070 Ti |
| **Training Time** | ~1 hour |
| **Epochs** | 14 (with early stopping) |
| **Batch Size** | 1 (effective: 8 with gradient accumulation) |
| **Learning Rate** | 1e-5 |
| **Optimizer** | Paged AdamW 8-bit |
| **LR Scheduler** | Cosine |
| **Warmup Ratio** | 0.05 |
| **Max Gradient Norm** | 0.3 |
| **Precision** | FP16 |

### Training Data

- **Source**: NIST SP 800-53 Framework (492 pages)
- **Dataset Creation**: Custom pipeline using Gemini Pro for initial extraction, followed by manual verification
- **Data Balance**: ~60% positive samples (pages with controls), ~40% negative samples (pages without controls)
- **Format**: JSONL with chat template structure
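For orientation, one record in that JSONL format could look like the following. The exact field values and schema are not published with this card, so everything below is illustrative:

```python
import json

# One hypothetical JSONL training record using a chat-template structure.
record = {
    "messages": [
        {"role": "system", "content": "You are a senior Compliance Auditor..."},
        {"role": "user", "content": "Analyze this text:\n\n<page text>"},
        {
            "role": "assistant",
            "content": '[{"control_id": "AC-1", '
                       '"control_title": "Access Control Policy and Procedures", '
                       '"control_desc": "..."}]\n<END>',
        },
    ]
}
line = json.dumps(record)  # one record per line in the .jsonl file
```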

### Custom Loss Function

A **Weighted Loss Trainer** was implemented to address the asymmetric cost of errors in compliance:

- **Positive Weight**: 2.0x (loss on samples that contain controls is weighted twice as heavily)
- **Rationale**: In compliance auditing, falsely identifying a control (hallucination) is more problematic than missing one, as it can lead to incorrect compliance assessments

```python
# Samples with controls are weighted 2x during loss computation
weights = torch.where(has_control, 2.0, 1.0)
weighted_loss = (sample_loss * weights).mean()
```
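In plain Python terms, the weighting reduces to a per-sample weighted mean that divides by the sample count (as in the snippet above), not by the weight sum. A small framework-free illustration:

```python
def weighted_mean_loss(sample_losses, has_control, positive_weight=2.0):
    """Mean of per-sample losses, with control-bearing samples up-weighted.

    Mirrors `(sample_loss * weights).mean()`: the sum of weighted losses
    is divided by the number of samples, not by the sum of the weights.
    """
    weights = [positive_weight if flag else 1.0 for flag in has_control]
    weighted = [loss * w for loss, w in zip(sample_losses, weights)]
    return sum(weighted) / len(weighted)
```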

---

## 📈 Evaluation & Performance

| Metric | Base Qwen2.5-7B | This Adapter |
|--------|-----------------|--------------|
| **Processing Time (492 pages)** | ~27 minutes | ~15 minutes |
| **Hallucination Rate** | High | Minimal |
| **Control Enhancement Confusion** | Frequent | Resolved |

---

## ⚠️ Limitations & Risks

### Current Limitations

- **Framework Specificity**: Optimized primarily for NIST SP 800-53; performance on other frameworks (ISO 27001, SOC 2, etc.) may vary
- **Language**: Trained on English documents only
- **Document Format**: Best performance on well-structured PDF documents

### Known Risks

- May require additional fine-tuning for non-NIST frameworks
- Performance depends on input text quality and preprocessing
- Should be validated by human auditors for critical compliance decisions

### Future Improvements

- [ ] Training on ISO 27001/27002 frameworks
- [ ] Multi-framework support (SOC 2, HIPAA, PCI-DSS)
- [ ] Improved handling of complex document layouts
- [ ] Longer training with expanded dataset

---

## 📄 License

**Proprietary License** - This model is not available for public use without explicit prior permission from the developer.

For licensing inquiries, please contact via the channels below.

---

## 📬 Contact

| Channel | Link |
|---------|------|
| **Email** | [rishitshar36@gmail.com](mailto:rishitshar36@gmail.com) |
| **GitHub** | [github.com/rishit836](https://github.com/rishit836) |
| **Project Repository** | [control-extraction-using-llm-finetuned](https://github.com/rishit836/control-extraction-using-llm-finetuned) |

---

## 🙏 Acknowledgments

- [Qwen Team](https://github.com/QwenLM/Qwen2.5) for the excellent base model
- [Hugging Face](https://huggingface.co/) for the Transformers and PEFT libraries
- NIST for the publicly available SP 800-53 framework documentation

---

## 📚 Citation

If you use this model in your research or project, please cite:

```bibtex
@misc{sharma2026nist-control-extraction,
  title={NIST Control Extraction LoRA Adapter for Qwen2.5-7B},
  author={Sharma, Rishit},
  year={2026},
  publisher={GitHub},
  howpublished={\url{https://github.com/rishit836/control-extraction-using-llm-finetuned}}
}
```
adapter_config.json ADDED
@@ -0,0 +1,43 @@
{
  "alora_invocation_tokens": null,
  "alpha_pattern": {},
  "arrow_config": null,
  "auto_mapping": null,
  "base_model_name_or_path": "Qwen/Qwen2.5-7B-Instruct",
  "bias": "none",
  "corda_config": null,
  "ensure_weight_tying": false,
  "eva_config": null,
  "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
  "lora_bias": false,
  "lora_dropout": 0.05,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "peft_version": "0.18.1",
  "qalora_group_size": 16,
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "v_proj",
    "o_proj",
    "k_proj",
    "q_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
  "trainable_token_indices": null,
  "use_dora": false,
  "use_qalora": false,
  "use_rslora": false
}
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2bf4dc666d62aa595c65f300110b7411d50eeeb252e371ab8c6ab6f6a8d54c8a
size 4388968992
added_tokens.json ADDED
@@ -0,0 +1,25 @@
{
  "</tool_call>": 151658,
  "<END>": 151665,
  "<tool_call>": 151657,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
chat_template.jinja ADDED
@@ -0,0 +1,54 @@
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0]['role'] == 'system' %}
{{- messages[0]['content'] }}
{%- else %}
{{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
{%- endif %}
{{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0]['role'] == 'system' %}
{{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
{%- else %}
{{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{{- '<|im_start|>' + message.role }}
{%- if message.content %}
{{- '\n' + message.content }}
{%- endif %}
{%- for tool_call in message.tool_calls %}
{%- if tool_call.function is defined %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '\n<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{{- tool_call.arguments | tojson }}
{{- '}\n</tool_call>' }}
{%- endfor %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,25 @@
{
  "additional_special_tokens": [
    {
      "content": "<END>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false
    }
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2fbe196efd3fc4b085e353a7b9c2000540f40f4fd7e8926ad3eba825cfea4597
size 11422078
tokenizer_config.json ADDED
@@ -0,0 +1,203 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151646": {
      "content": "<|object_ref_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151647": {
      "content": "<|object_ref_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151648": {
      "content": "<|box_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151649": {
      "content": "<|box_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151650": {
      "content": "<|quad_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151651": {
      "content": "<|quad_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151652": {
      "content": "<|vision_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151653": {
      "content": "<|vision_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151654": {
      "content": "<|vision_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151655": {
      "content": "<|image_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151656": {
      "content": "<|video_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151657": {
      "content": "<tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151658": {
      "content": "</tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151659": {
      "content": "<|fim_prefix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151660": {
      "content": "<|fim_middle|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151661": {
      "content": "<|fim_suffix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151662": {
      "content": "<|fim_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151663": {
      "content": "<|repo_name|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151664": {
      "content": "<|file_sep|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151665": {
      "content": "<END>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<END>"
  ],
  "bos_token": null,
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 131072,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
vocab.json ADDED
The diff for this file is too large to render. See raw diff