rishitt committed on
Commit 62cd238 · verified · 1 Parent(s): 29eac45

final model files

Files changed (3):
  1. README.md +140 -216
  2. adapter_config.json +1 -1
  3. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -2,282 +2,206 @@
  base_model: Qwen/Qwen2.5-7B-Instruct
  library_name: peft
  pipeline_tag: text-generation
- license: other
- license_name: proprietary
- license_link: LICENSE
  tags:
- - base_model:adapter:Qwen/Qwen2.5-7B-Instruct
- - lora
- - transformers
- - compliance
- - nist
- - control-extraction
- - regulatory
  ---

- # NIST Control Extraction LoRA Adapter

- A fine-tuned LoRA adapter for **Qwen2.5-7B-Instruct**, designed for accurate extraction of security controls from NIST framework documents. The adapter mitigates the hallucination issues present in the base model, identifying controls precisely without mistaking control enhancements or related text for valid controls.

- ## Key Features

- - **Accurate Control Extraction**: Precisely identifies control IDs, titles, and descriptions from framework documents
- - **Reduced Hallucination**: Trained to distinguish actual controls from control enhancements and related content
- - **Fast Inference**: Processes the 492-page NIST SP 800-53 document in ~15 minutes (vs. ~27 minutes with the base model)
- - **Structured Output**: Returns controls as clean JSON terminated by an `<END>` token for reliable parsing

- ---

  ## Model Details

  ### Model Description

- This LoRA adapter adapts the Qwen2.5-7B-Instruct model to the specialized task of extracting security controls from compliance framework documents. It was trained with a custom weighted loss function that penalizes false positives more heavily than false negatives, reflecting a key requirement of compliance auditing: incorrectly identifying a control is more problematic than missing one.

- | Property | Value |
- |----------|-------|
- | **Developed by** | Rishit Sharma |
- | **Model Type** | LoRA Adapter (PEFT) |
- | **Base Model** | [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) |
- | **Language** | English |
- | **Domain** | Compliance & Regulatory Frameworks |
- | **License** | Proprietary (no use without prior permission) |

- ---

- ## Model Architecture

 
- ### LoRA Configuration

- | Parameter | Value |
- |-----------|-------|
- | **Rank (r)** | 16 |
- | **Alpha** | 32 |
- | **Dropout** | 0.05 |
- | **Target Modules** | `q_proj`, `k_proj`, `v_proj`, `o_proj` |
- | **Bias** | None |
- | **Task Type** | CAUSAL_LM |

- ### Quantization (Training)

- | Parameter | Value |
- |-----------|-------|
- | **Quantization** | 4-bit (QLoRA) |
- | **Quant Type** | NF4 |
- | **Double Quantization** | Enabled |
- | **Compute Dtype** | bfloat16 |

- ### Special Tokens

- - **`<END>`**: Custom stop token appended to outputs for reliable generation termination

 
- ---

- ## Intended Use

- ### Primary Use Case

- Building autonomous compliance auditing agents that can:
- - Parse and analyze framework documents (PDF/text)
- - Extract structured control information automatically
- - Verify the deployment status of controls within an organization

- ### Target Users

- - Compliance Officers & Auditors
- - GRC (Governance, Risk, Compliance) Teams
- - Security Analysts
- - Organizations undergoing NIST compliance assessments

- ---

- ## Quick Start

- ### Installation

- ```bash
- pip install transformers peft torch accelerate bitsandbytes
- ```

- ### Loading the Model

- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
- from peft import PeftModel
- import torch

- # Quantization config (optional, for memory efficiency)
- bnb_config = BitsAndBytesConfig(
-     load_in_4bit=True,
-     bnb_4bit_use_double_quant=True,
-     bnb_4bit_quant_type="nf4",
-     bnb_4bit_compute_dtype=torch.bfloat16
- )

- # Load base model
- base_model = AutoModelForCausalLM.from_pretrained(
-     "Qwen/Qwen2.5-7B-Instruct",
-     quantization_config=bnb_config,
-     device_map="auto",
-     trust_remote_code=True
- )

- # Load the tokenizer from the adapter directory (it includes the added <END> token)
- tokenizer = AutoTokenizer.from_pretrained("path/to/final_adapter")

- # Load LoRA adapter
- model = PeftModel.from_pretrained(base_model, "path/to/final_adapter")
- ```

- ### Inference Example

- ```python
- system_prompt = """You are a senior Compliance Auditor and Regulatory Analyst specialized in ISO, NIST, and statutory frameworks."""

- # page_text holds the text of the document page being analyzed
- messages = [
-     {"role": "system", "content": system_prompt},
-     {"role": "user", "content": f"Analyze this text:\n\n{page_text}"}
- ]

- input_ids = tokenizer.apply_chat_template(
-     messages,
-     add_generation_prompt=True,
-     return_tensors="pt"
- ).to(model.device)

- # Greedy decoding; temperature has no effect when do_sample=False
- outputs = model.generate(
-     input_ids,
-     max_new_tokens=512,
-     do_sample=False
- )

- response = tokenizer.decode(outputs[0], skip_special_tokens=True)
- ```

- ### Expected Output Format

- ```json
- [
-   {
-     "control_id": "AC-1",
-     "control_title": "Access Control Policy and Procedures",
-     "control_desc": "Description of the control..."
-   }
- ]
- <END>
- ```
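Because generation terminates with the `<END>` marker, downstream code can truncate at that marker before JSON-parsing. A minimal sketch (the `response` string here is illustrative, standing in for real model output):

```python
import json

# Hypothetical raw model output: a JSON array followed by the <END> stop marker.
response = (
    '[{"control_id": "AC-1", '
    '"control_title": "Access Control Policy and Procedures", '
    '"control_desc": "Description of the control..."}]\n<END>'
)

# Keep only the text before the first <END> marker, then parse it as JSON.
payload = response.split("<END>")[0].strip()
controls = json.loads(payload)

print(controls[0]["control_id"])  # → AC-1
```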

- ---

- ## Training Details

- ### Training Configuration

- | Parameter | Value |
- |-----------|-------|
- | **Hardware** | NVIDIA RTX 5070 Ti |
- | **Training Time** | ~1 hour |
- | **Epochs** | 14 (with early stopping) |
- | **Batch Size** | 1 (effective: 8 with gradient accumulation) |
- | **Learning Rate** | 1e-5 |
- | **Optimizer** | Paged AdamW 8-bit |
- | **LR Scheduler** | Cosine |
- | **Warmup Ratio** | 0.05 |
- | **Max Gradient Norm** | 0.3 |
- | **Precision** | FP16 |
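The rows above map naturally onto Hugging Face `TrainingArguments` fields. The sketch below is a hedged reconstruction (the exact argument set used in training was not published; a plain dict stands in for a real `TrainingArguments` instance, with field names following the `transformers` convention):

```python
# Hypothetical trainer settings mirroring the table above
# (names follow transformers.TrainingArguments conventions).
training_args = dict(
    num_train_epochs=14,             # with early stopping
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # effective batch size 8
    learning_rate=1e-5,
    optim="paged_adamw_8bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    max_grad_norm=0.3,
    fp16=True,
)
```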

  ### Training Data

- - **Source**: NIST SP 800-53 Framework (492 pages)
- - **Dataset Creation**: Custom pipeline using Gemini Pro for initial extraction, followed by manual verification
- - **Data Balance**: ~60% positive samples (pages with controls), ~40% negative samples (pages without controls)
- - **Format**: JSONL with chat-template structure

- ### Custom Loss Function

- A **Weighted Loss Trainer** was implemented to address the asymmetric cost of errors in compliance:

- - **Positive Weight**: 2.0x on samples that contain controls
- - **Rationale**: In compliance auditing, falsely identifying a control (hallucination) is more problematic than missing one, as it can lead to incorrect compliance assessments

- ```python
- # Samples with controls are weighted 2x during loss computation
- weights = torch.where(has_control, 2.0, 1.0)
- weighted_loss = (sample_loss * weights).mean()
- ```
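The snippet above can be made concrete with a tiny self-contained example; the per-sample loss values and the `has_control` mask below are illustrative, not taken from the actual training run:

```python
import torch

# Per-sample losses and a boolean mask marking samples that contain controls
# (values are illustrative).
sample_loss = torch.tensor([0.5, 1.0, 0.25, 2.0])
has_control = torch.tensor([True, False, True, False])

# Samples with controls are weighted 2x during loss computation
weights = torch.where(has_control, 2.0, 1.0)
weighted_loss = (sample_loss * weights).mean()  # (1.0 + 1.0 + 0.5 + 2.0) / 4 = 1.125
```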

- ---

- ## 📈 Evaluation & Performance

- | Metric | Base Qwen2.5-7B | This Adapter |
- |--------|-----------------|--------------|
- | **Processing Time (492 pages)** | ~27 minutes | ~15 minutes |
- | **Hallucination Rate** | High | Minimal |
- | **Control Enhancement Confusion** | Frequent | Resolved |

- ---

- ## ⚠️ Limitations & Risks

- ### Current Limitations

- - **Framework Specificity**: Optimized primarily for NIST SP 800-53; performance on other frameworks (ISO 27001, SOC 2, etc.) may vary
- - **Language**: Trained on English documents only
- - **Document Format**: Performs best on well-structured PDF documents

- ### Known Risks

- - May require additional fine-tuning for non-NIST frameworks
- - Performance depends on input text quality and preprocessing
- - Outputs should be validated by human auditors for critical compliance decisions

- ### Future Improvements

- - [ ] Training on ISO 27001/27002 frameworks
- - [ ] Multi-framework support (SOC 2, HIPAA, PCI-DSS)
- - [ ] Improved handling of complex document layouts
- - [ ] Longer training with an expanded dataset

- ---

- ## 📄 License

- **Proprietary License**: This model is not available for public use without explicit prior permission from the developer.

- For licensing inquiries, please contact via the channels below.

- ---

- ## 📬 Contact

- | Channel | Link |
- |---------|------|
- | **Email** | [rishitshar36@gmail.com](mailto:rishitshar36@gmail.com) |
- | **GitHub** | [github.com/rishit836](https://github.com/rishit836) |
- | **Project Repository** | [control-extraction-using-llm-finetuned](https://github.com/rishit836/control-extraction-using-llm-finetuned) |

- ---

- ## 🙏 Acknowledgments

- - [Qwen Team](https://github.com/QwenLM/Qwen2.5) for the excellent base model
- - [Hugging Face](https://huggingface.co/) for the Transformers and PEFT libraries
- - NIST for the publicly available SP 800-53 framework documentation

- ---

- ## 📚 Citation

- If you use this model in your research or project, please cite:

- ```bibtex
- @misc{sharma2026nist-control-extraction,
-   title={NIST Control Extraction LoRA Adapter for Qwen2.5-7B},
-   author={Sharma, Rishit},
-   year={2026},
-   publisher={GitHub},
-   howpublished={\url{https://github.com/rishit836/control-extraction-using-llm-finetuned}}
- }
- ```
 
  base_model: Qwen/Qwen2.5-7B-Instruct
  library_name: peft
  pipeline_tag: text-generation
  tags:
+ - base_model:adapter:Qwen/Qwen2.5-7B-Instruct
+ - lora
+ - transformers
  ---

+ # Model Card for Model ID

+ <!-- Provide a quick summary of what the model is/does. -->

  ## Model Details

  ### Model Description

+ <!-- Provide a longer summary of what this model is. -->

+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]

+ ### Model Sources [optional]

+ <!-- Provide the basic links for the model. -->

+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]

+ ## Uses

+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

+ ### Direct Use

+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

+ [More Information Needed]

+ ### Downstream Use [optional]

+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

+ [More Information Needed]

+ ### Out-of-Scope Use

+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

+ [More Information Needed]

+ ## Bias, Risks, and Limitations

+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->

+ [More Information Needed]

+ ### Recommendations

+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

+ ## How to Get Started with the Model

+ Use the code below to get started with the model.

+ [More Information Needed]

+ ## Training Details

  ### Training Data

+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

+ [More Information Needed]

+ ### Training Procedure

+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

+ #### Preprocessing [optional]

+ [More Information Needed]

+ #### Training Hyperparameters

+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

+ #### Speeds, Sizes, Times [optional]

+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

+ [More Information Needed]

+ ## Evaluation

+ <!-- This section describes the evaluation protocols and provides the results. -->

+ ### Testing Data, Factors & Metrics

+ #### Testing Data

+ <!-- This should link to a Dataset Card if possible. -->

+ [More Information Needed]

+ #### Factors

+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

+ [More Information Needed]

+ #### Metrics

+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->

+ [More Information Needed]

+ ### Results

+ [More Information Needed]

+ #### Summary

+ ## Model Examination [optional]

+ <!-- Relevant interpretability work for the model goes here -->

+ [More Information Needed]

+ ## Environmental Impact

+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]

+ ## Technical Specifications [optional]

+ ### Model Architecture and Objective

+ [More Information Needed]

+ ### Compute Infrastructure

+ [More Information Needed]

+ #### Hardware

+ [More Information Needed]

+ #### Software

+ [More Information Needed]

+ ## Citation [optional]

+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

+ **BibTeX:**

+ [More Information Needed]

+ **APA:**

+ [More Information Needed]

+ ## Glossary [optional]

+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

+ [More Information Needed]

+ ## More Information [optional]

+ [More Information Needed]

+ ## Model Card Authors [optional]

+ [More Information Needed]

+ ## Model Card Contact

+ [More Information Needed]

+ ### Framework versions

+ - PEFT 0.18.1

adapter_config.json CHANGED
@@ -29,8 +29,8 @@
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
-   "v_proj",
    "o_proj",
+   "v_proj",
    "k_proj",
    "q_proj"
  ],
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2bf4dc666d62aa595c65f300110b7411d50eeeb252e371ab8c6ab6f6a8d54c8a
+ oid sha256:83af2cf6ccdcbe71f86aab5fd5034fcba4e1b5373f3b57a0a32b7f1b63b195a5
  size 4388968992