tags:
- axolotl
- base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct
- transformers
- qlora
- code-generation
- bash
- cli
- security
- devops
license: mit
datasets:
- prabhanshubhowal/natural_language_to_linux
language:
- en
metrics:
- code_eval
- exact_match
---

# Model Card for SecureCLI-Tuner V2

![](https://cdn-uploads.huggingface.co/production/uploads/683d5b8263277cdd4776d31b/Tb5BnmiZLmwTFEropzCIO.png)

## Model Details

### Model Description

SecureCLI-Tuner V2 is a **Zero-Trust Security Kernel** for Agentic DevOps. It is a QLoRA fine-tune of **Qwen2.5-Coder-7B-Instruct**, specialized for converting natural-language instructions into safe, syntactically correct Bash commands. Unlike generic coding models, SecureCLI-Tuner V2 was trained on a filtered dataset with **95 dangerous examples removed** (matching 17 zero-tolerance patterns, e.g., `rm -rf /` and fork bombs) and is designed to operate within a three-layer runtime guardrail system.

- **Developed by:** Michael Williams (mwill-AImission), Ready Tensor Certification Portfolio
- **Funded by:** Michael Williams
- **Model type:** Causal Language Model (QLoRA Adapter)
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** Qwen/Qwen2.5-Coder-7B-Instruct

### Model Sources

- **Repository:** <https://github.com/mwill20/SecureCLI-Tuner>
- **Demo:** [Coming Soon]

## Uses

SecureCLI-Tuner V2 is designed for DevOps engineers, system administrators, and AI researchers who need a reliable, security-focused model for translating natural language into Bash commands. Unlike general-purpose LLMs, it is fine-tuned to prioritize safety and syntactic correctness in CLI environments. It is intended to be used as a "Translation Layer" or "Coprocessor" in larger systems, where user intent is first verified and then translated into an executable command. Foreseeable users include developers building CLI tools, automated infrastructure agents, and educational platforms teaching Linux administration.

### Direct Use

- **DevOps Agents:** Generating shell commands for autonomous agents.
- **CLI Assistants:** Natural-language interfaces for terminal operations.
- **Educational Tools:** Teaching safe shell command usage.

### Downstream Use

- Integrated into CI/CD pipelines to validate or generate infrastructure scripts.
- Used as a "Router" model to classify intent before executing commands.

### Out-of-Scope Use

- **Root Operations:** Commands requiring `sudo` should always be manually reviewed.
- **Malicious Generation:** While the training data was filtered, the model should not be used to generate malware or exploit scripts.
- **Non-Bash Languages:** The model is specialized for Bash; Python/JS performance may be degraded compared to the base model.

## Bias, Risks, and Limitations

- **Safety vs. Utility:** The model can refuse to generate commands that merely look dangerous, even when the intent is benign (false positives).
- **Evaluation Limits:** Semantic evaluation with CodeBERT was limited by library constraints, so the exact-match metric (9.1%) underestimates true performance (99.0% valid command generation).
- **Defense in Depth:** The model weights are only *one layer* of defense. **Production use requires the accompanying CommandRisk engine** (runtime regex + heuristic validation).

### Recommendations

Users should always deploy this model behind the **CommandRisk** validation layer described in the [GitHub repository](https://github.com/mwill20/SecureCLI-Tuner). Do not give this model unchecked `sudo` access. Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations.
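
The CommandRisk engine itself lives in the GitHub repository. As a rough illustration of what such a runtime validation layer does, here is a minimal, hypothetical sketch; the pattern list, the `RiskLevel` names, and the review heuristics are illustrative assumptions, not the actual CommandRisk implementation:

```python
import re
from enum import Enum

class RiskLevel(Enum):
    SAFE = "safe"      # may execute directly
    REVIEW = "review"  # require human approval
    BLOCK = "block"    # never execute

# Illustrative zero-tolerance patterns; the real engine ships its own list.
ZERO_TOLERANCE = [
    re.compile(r"rm\s+-rf\s+/(\s|$)"),               # wipe the root filesystem
    re.compile(r":\(\)\s*\{.*\|.*&\s*\}\s*;?\s*:"),  # classic fork bomb
    re.compile(r"mkfs\.\w+\s+/dev/"),                # reformat a block device
]

# Heuristics that warrant review rather than an outright block.
REVIEW_HINTS = [re.compile(r"\bsudo\b"), re.compile(r"base64\s+(-d|--decode)")]

def assess(command: str) -> RiskLevel:
    """Classify a generated command before it is ever executed."""
    if any(p.search(command) for p in ZERO_TOLERANCE):
        return RiskLevel.BLOCK
    if any(p.search(command) for p in REVIEW_HINTS):
        return RiskLevel.REVIEW
    return RiskLevel.SAFE

print(assess("ls -la /var/log"))              # RiskLevel.SAFE
print(assess("sudo systemctl restart nginx")) # RiskLevel.REVIEW
print(assess("rm -rf / --no-preserve-root"))  # RiskLevel.BLOCK
```

Only `SAFE` commands should run unattended; anything else goes back to a human, which is the "Defense in Depth" posture this card recommends.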

## How to Get Started with the Model

Use the code below to load the 4-bit base model, attach the adapter, and generate a command.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# 1. Load the base model (4-bit quantized)
base_model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=True,  # shorthand; newer transformers versions prefer an explicit BitsAndBytesConfig
)

# 2. Attach the SecureCLI-Tuner V2 LoRA adapter
adapter_path = "mwill-AImission/SecureCLI-Tuner-V2"
model = PeftModel.from_pretrained(base_model, adapter_path)

# 3. Generate a Bash command from a natural-language instruction
prompt = "List all Docker containers using more than 1GB RAM"
messages = [
    {"role": "system", "content": "You are a helpful DevOps assistant. Generate a Bash command for the given instruction."},
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```

## Training Details

### Training Data

**Source:** `prabhanshubhowal/natural_language_to_linux` (Hugging Face)

**Preprocessing Pipeline:**

1. **Deduplication:** Removed 5,616 duplicates.
2. **Schema Validation:** Enforced valid JSON structure.
3. **Safety Filtering:** Removed **95 examples** matching 17 zero-tolerance patterns (e.g., `rm -rf /`, `:(){ :|:& };:`); a sketch of this step follows below.
4. **ShellCheck:** Removed 382 commands with invalid syntax.

**Final Size:** 12,259 examples (Train: 9,807 | Val: 1,225 | Test: 1,227).
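
A minimal sketch of preprocessing steps 3 and 4, assuming the `shellcheck` binary is on `PATH` and a hypothetical `command` column in the dataset (the two regexes shown are the patterns quoted above; the full zero-tolerance list lives in the project repository):

```python
import re
import subprocess
from datasets import load_dataset

# Zero-tolerance regexes like those sketched under "Recommendations" above;
# the full list of 17 patterns lives in the project repository.
BANNED = [
    re.compile(r"rm\s+-rf\s+/(\s|$)"),               # wipe the root filesystem
    re.compile(r":\(\)\s*\{.*\|.*&\s*\}\s*;?\s*:"),  # fork bomb
]

def is_safe(cmd: str) -> bool:
    """Step 3: drop examples whose target command matches a banned pattern."""
    return not any(p.search(cmd) for p in BANNED)

def parses_as_bash(cmd: str) -> bool:
    """Step 4: keep only commands ShellCheck parses without errors."""
    result = subprocess.run(
        ["shellcheck", "--shell=bash", "--severity=error", "-"],
        input=cmd, text=True, capture_output=True,
    )
    return result.returncode == 0

raw = load_dataset("prabhanshubhowal/natural_language_to_linux", split="train")
clean = raw.filter(lambda ex: is_safe(ex["command"]) and parses_as_bash(ex["command"]))
print(f"Removed {len(raw) - len(clean)} unsafe or unparsable examples")
```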

### Training Procedure

- **Method:** QLoRA (Quantized Low-Rank Adaptation)
- **Framework:** Axolotl
- **Compute:** 1x NVIDIA A100 (40GB) on RunPod

#### Training Hyperparameters

- **Bits:** 4-bit NF4 quantization
- **LoRA Rank:** 8
- **LoRA Alpha:** 16
- **Target Modules:** q_proj, v_proj, k_proj, o_proj
- **Learning Rate:** 2e-4 (cosine schedule)
- **Batch Size:** 4 (via gradient accumulation)
- **Steps:** 500 (~20% of 1 epoch)
- **Warmup:** 50 steps
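
For readers reproducing this setup outside Axolotl, here is a minimal PEFT/bitsandbytes sketch of the same hyperparameters (an assumed equivalent; the authoritative Axolotl YAML lives in the repository):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization, as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-7B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA rank 8, alpha 16, applied to the attention projections only.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```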

## Evaluation

The evaluation protocol focused on two primary dimensions: **Safety** (adversarial robustness) and **Utility** (command correctness). We employed a red-teaming approach in which the model was subjected to a wide range of attack vectors, including obfuscated commands, known dangerous regex patterns, and prompt-injection attempts. In parallel, utility was measured against a held-out test set to ensure the model produces syntactically valid Bash commands that match the user's intent.

### Testing Data, Factors & Metrics

#### Testing Data

1,227 held-out examples from the cleaned dataset.

#### Factors

The evaluation is disaggregated by:

- **Command Category:** General operational commands vs. dangerous vectors (destructive, obfuscated).
- **Difficulty:** Direct NLP instructions vs. adversarial prompts designed to bypass guardrails.

#### Metrics

- **Command Validity:** 99.0% (parsable Bash)
- **Adversarial Pass Rate:** 100% (blocks 9/9 attack categories)
- **Exact Match:** 9.1% (conservative baseline)
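
As a sketch of how these utility metrics can be computed (the function names and the use of `bash -n` for the validity check are assumptions; the project may implement them differently):

```python
import subprocess

def is_valid_bash(cmd: str) -> bool:
    """Command validity: does bash's parser accept the command?
    (`bash -n` checks syntax without executing anything.)"""
    proc = subprocess.run(["bash", "-n", "-c", cmd], capture_output=True)
    return proc.returncode == 0

def exact_match(pred: str, ref: str) -> bool:
    """Exact match: strict string equality after whitespace normalization.
    Penalizes equivalent variants such as `ls -la` vs `ls -al`."""
    return " ".join(pred.split()) == " ".join(ref.split())

preds = ["ls -la /var/log", "docker ps --format '{{.Names}}'"]
refs  = ["ls -al /var/log", "docker ps --format '{{.Names}}'"]

validity = sum(map(is_valid_bash, preds)) / len(preds)
em = sum(exact_match(p, r) for p, r in zip(preds, refs)) / len(refs)
print(f"validity={validity:.1%}  exact_match={em:.1%}")
```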

### Results

| Metric | Base Qwen | SecureCLI-Tuner V2 | Improvement |
|--------|-----------|--------------------|-------------|
| **Command Validity** | 97.1% | **99.0%** | +1.9% |
| **Exact Match** | 0% | **9.1%** | +9.1% |
| **Adversarial Safety** | N/A | **100%** | Critical |

The model shows a marked improvement in safety and formatting compliance over the base model.

#### Summary

SecureCLI-Tuner V2 improves substantially on the base Qwen2.5-Coder-7B model in **safety** (a 100% block rate for adversarial attacks) and **command validity** (+1.9%). While the strict exact-match score remains low (9.1%) because many valid Bash commands are syntactic variants (e.g., `ls -la` vs. `ls -al`), functional correctness is high. The model trades away a small amount of general knowledge (MMLU -5.2%) to achieve this domain specialization.

## Model Examination

Model examination focused on behavioral analysis via the **Adversarial Test Suite** rather than internal interpretability (e.g., attention maps). The model consistently activates refusal behaviors when presented with dangerous intents, even when they are obfuscated (e.g., base64 encoding).

## Environmental Impact

- **Hardware Type:** NVIDIA A100 40GB
- **Hours Used:** ~1 hour (44.5 minutes of training time)
- **Cloud Provider:** RunPod
- **Compute Region:** N/A (decentralized)
- **Carbon Emitted:** Negligible (< 0.1 kg CO2eq)

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
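
As a back-of-the-envelope check on that figure, using the Lacoste et al. formula (power * time * grid carbon intensity) with assumed values for board power and grid intensity:

```python
# Rough sanity check of the carbon estimate; power draw and grid intensity
# are assumptions, not measurements from the actual RunPod instance.
gpu_power_kw = 0.4      # NVIDIA A100 board power, ~400 W
hours = 44.5 / 60       # 44.5 minutes of training
grid_kg_per_kwh = 0.4   # rough global-average grid intensity

emissions_kg = gpu_power_kw * hours * grid_kg_per_kwh
print(f"~{emissions_kg:.2f} kg CO2eq")  # ~0.12 kg, the same order as the figure above
```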

## Technical Specifications

### Model Architecture and Objective

Qwen2.5-Coder is a Transformer-based causal language model. This fine-tune adds low-rank adapters (LoRA) to the attention layers to specialize in NL-to-Bash translation without forgetting general coding knowledge (the MMLU drop was only -5.2%).

### Compute Infrastructure

- **Orchestration:** Axolotl
- **Container:** Docker (RunPod PyTorch 2.4 image)

#### Hardware

- **GPU:** 1x NVIDIA A100 (40GB VRAM)
- **Platform:** RunPod Cloud Instance

#### Software

- **Orchestration:** Axolotl v0.5.x
- **Core:** PyTorch 2.4.0, Transformers 4.45.0
- **Efficiency:** PEFT 0.18.1, BitsAndBytes 0.44.0
- **CUDA:** 12.1

## Citation

**BibTeX:**

```bibtex
@misc{securecli_tuner_v2,
  author    = {Williams, Michael},
  title     = {SecureCLI-Tuner V2: A Security-First LLM for Agentic DevOps},
  year      = {2026},
  publisher = {Ready Tensor Certification Portfolio}
}
```

**APA:**

Williams, M. (2026). *SecureCLI-Tuner V2: A Security-First LLM for Agentic DevOps*. Ready Tensor Certification Portfolio. <https://huggingface.co/mwill-AImission/SecureCLI-Tuner-V2>

## More Information

For full details on the CommandRisk engine, the data preparation pipeline, and the "Defense in Depth" strategy, please visit the [GitHub repository](https://github.com/mwill20/SecureCLI-Tuner).

## Model Card Authors

Michael Williams (mwill-AImission)

## Model Card Contact

For questions, open an issue on the [GitHub repository](https://github.com/mwill20/SecureCLI-Tuner).

### Framework versions

- PEFT 0.18.1