---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- axolotl
- base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct
- qlora
- code-generation
- bash
- cli
- security
- devops
license: mit
datasets:
- prabhanshubhowal/natural_language_to_linux
language:
- en
metrics:
- code_eval
- exact_match
---

# Model Card for SecureCLI-Tuner V2

![Logo](SecureCLI-Tuner_image.png)


## Model Details

### Model Description

SecureCLI-Tuner V2 is a **Zero-Trust Security Kernel** for Agentic DevOps. 
It is a QLoRA fine-tune of **Qwen2.5-Coder-7B-Instruct**, specialized for converting natural language instructions into safe, syntactically correct Bash commands.
Unlike generic coding models, SecureCLI-Tuner V2 was trained on a filtered dataset with **95 dangerous command patterns removed** (e.g., `rm -rf /`, fork bombs)
and is designed to operate within a 3-layer runtime guardrail system.

- **Developed by:** Michael Williams (mwill-AImission), Ready Tensor Certification Portfolio
- **Funded by:** Michael Williams
- **Model type:** Causal Language Model (QLoRA Adapter)
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** Qwen/Qwen2.5-Coder-7B-Instruct

### Model Sources

- **Repository:** <https://github.com/mwill20/SecureCLI-Tuner>

## Uses

SecureCLI-Tuner V2 is designed for DevOps engineers, System Administrators, and AI Researchers who need a reliable, security-focused model for translating natural language into Bash commands.
Unlike general-purpose LLMs, this model is fine-tuned to prioritize safety and syntax correctness in CLI environments.
It is intended to be used as a "Translation Layer" or "Coprocessor" in larger systems, where user intent is first verified and then translated into an executable command. 
Foreseeable users include developers building CLI tools, automated infrastructure agents, and educational platforms teaching Linux administration.

### Direct Use

- **DevOps Agents:** Generating shell commands for autonomous agents.
- **CLI Assistants:** Natural language interfaces for terminal operations.
- **Educational Tools:** Teaching safe shell command usage.

### Downstream Use

- Integrated into CI/CD pipelines to validate or generate infrastructure scripts.
- Used as a "Router" model to classify intent before executing commands.

### Out-of-Scope Use

- **Root Operations:** Commands requiring `sudo` should always be manually reviewed.
- **Malicious Generation:** While training data was filtered, the model should not be used to generate malware or exploit scripts.
- **Non-Bash Languages:** The model is specialized for Bash; Python/JS performance may be degraded compared to the base model.

## Bias, Risks, and Limitations

- **Safety vs. Utility:** The model may refuse to generate commands that look dangerous even when the intent is benign (false positives).
- **Evaluation limits:** Semantic evaluation using CodeBERT was limited by library constraints; exact match metrics (9.1%) underestimate true performance (99.0% valid command generation).
- **Defense in Depth:** The model weights are only *one layer* of defense. **Production use requires the accompanying CommandRisk engine** (runtime regex + heuristic validation).

### Recommendations

Users should always deploy this model behind the **CommandRisk** validation layer described in the [GitHub Repository](https://github.com/mwill20/SecureCLI-Tuner). 
Do not give this model unchecked `sudo` access.
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations.
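The CommandRisk engine itself lives in the GitHub repository and is not reproduced on this card. As an illustration only, a minimal zero-tolerance gate placed between model output and execution might look like the following (the pattern list here is a hypothetical subset, not the real rule set):

```python
import re

# Hypothetical zero-tolerance patterns; the real CommandRisk engine uses a
# larger regex + heuristic rule set (see the GitHub repository).
DANGEROUS_PATTERNS = [
    r"rm\s+-rf\s+/(\s|$)",          # recursive delete of root
    r":\(\)\s*\{\s*:\|:&\s*\};:",   # classic fork bomb
    r"mkfs\.",                      # filesystem format
    r">\s*/dev/sd[a-z]",            # raw disk overwrite
]

def is_blocked(command: str) -> bool:
    """Return True if the generated command matches a zero-tolerance pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)

# Gate model output before execution; never run a blocked command.
print(is_blocked("rm -rf / --no-preserve-root"))  # True
print(is_blocked("ls -la /var/log"))              # False
```

In production this gate sits *after* generation and *before* any shell invocation, so a bypass of the model's trained refusals still cannot reach the executor.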

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 1. Load the base model in 4-bit NF4, matching the QLoRA training setup
base_model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# 2. Load the LoRA adapter on top of the quantized base model
adapter_path = "mwill-AImission/SecureCLI-Tuner-V2"
model = PeftModel.from_pretrained(base_model, adapter_path)

# 3. Generate a Bash command from a natural-language instruction
prompt = "List all Docker containers using more than 1GB RAM"
messages = [
    {"role": "system", "content": "You are a helpful DevOps assistant. Generate a Bash command for the given instruction."},
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```

## Training Details

### Training Data

**Source:** `prabhanshubhowal/natural_language_to_linux` (HuggingFace)

**Preprocessing Pipeline:**
1. **Deduplication:** Removed 5,616 duplicates.
2. **Schema Validation:** Enforced valid JSON structure.
3. **Safety Filtering:** Removed **95 examples** matching 17 zero-tolerance patterns (e.g., `rm -rf /`, `:(){ :|:& };:`).
4. **ShellCheck:** Removed 382 commands with invalid syntax.

**Final Size:** 12,259 examples (Train: 9,807 | Val: 1,225 | Test: 1,227).
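The actual pipeline code is in the repository; the steps above can be sketched roughly as follows. The record field names (`instruction`, `command`) and the pattern list are assumptions, and the ShellCheck pass is omitted here since it requires the external binary:

```python
import re

# Hypothetical subset of the 17 zero-tolerance patterns.
ZERO_TOLERANCE = [r"rm\s+-rf\s+/(\s|$)", r":\(\)\s*\{\s*:\|:&\s*\};:"]

def clean_dataset(records):
    """Apply schema validation, deduplication, and safety filtering."""
    seen = set()
    kept = []
    for rec in records:
        # 1. Schema validation: both fields must be non-empty strings
        if not isinstance(rec.get("instruction"), str) or not isinstance(rec.get("command"), str):
            continue
        if not rec["instruction"].strip() or not rec["command"].strip():
            continue
        # 2. Deduplication on the (instruction, command) pair
        key = (rec["instruction"].strip(), rec["command"].strip())
        if key in seen:
            continue
        seen.add(key)
        # 3. Safety filtering against zero-tolerance patterns
        if any(re.search(p, rec["command"]) for p in ZERO_TOLERANCE):
            continue
        kept.append(rec)
    return kept

data = [
    {"instruction": "delete everything", "command": "rm -rf /"},  # filtered (safety)
    {"instruction": "list files", "command": "ls -la"},
    {"instruction": "list files", "command": "ls -la"},           # filtered (duplicate)
]
print(len(clean_dataset(data)))  # 1
```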

### Training Procedure

- **Method:** QLoRA (Quantized Low-Rank Adaptation)
- **Framework:** Axolotl
- **Compute:** 1x NVIDIA A100 (40GB) on RunPod

#### Training Hyperparameters

- **Bits:** 4-bit NF4 quantization
- **LoRA Rank:** 8
- **LoRA Alpha:** 16
- **Target Modules:** q_proj, v_proj, k_proj, o_proj
- **Learning Rate:** 2e-4 (Cosine schedule)
- **Batch Size:** 4 (with gradient accumulation)
- **Steps:** 500 (~20% of 1 epoch)
- **Warmup:** 50 steps

## Evaluation

The evaluation protocol focused on two primary dimensions: **Safety** (Adversarial Robustness) and **Utility** (Command Correctness).
We employed a "Red Teaming" approach where the model was subjected to a wide range of attack vectors, including obfuscated commands, known dangerous regex patterns, and prompt injection attempts.
Simultaneously, utility was measured against a held-out test set to ensure the model produces syntactically valid Bash commands that match the user's intent.

### Testing Data, Factors & Metrics

#### Testing Data

1,227 held-out examples from the cleaned dataset.

#### Factors

The evaluation is disaggregated by:
- **Command Category:** General operational commands vs. Dangerous vectors (destructive, obfuscated).
- **Difficulty:** Direct NLP instructions vs. Adversarial prompts designed to bypass guardrails.

#### Metrics

- **Command Validity:** 99.0% (Parsable Bash)
- **Adversarial Pass Rate:** 100% (Blocks 9/9 attack categories)
- **Exact Match:** 9.1% (Conservative baseline)
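The card does not state how parsability was checked; one plausible way to compute validity and exact-match numbers is `bash -n` syntax checking (parse only, no execution) plus normalized string comparison. This is a sketch under those assumptions, not the actual evaluation harness:

```python
import subprocess

def is_valid_bash(command: str) -> bool:
    """Syntax-check a command with `bash -n` (parses without executing)."""
    result = subprocess.run(
        ["bash", "-n"], input=command, capture_output=True, text=True
    )
    return result.returncode == 0

def evaluate(pairs):
    """pairs: list of (generated, reference) command strings."""
    n = len(pairs)
    valid = sum(is_valid_bash(gen) for gen, _ in pairs)
    exact = sum(gen.strip() == ref.strip() for gen, ref in pairs)
    return {"validity": valid / n, "exact_match": exact / n}

pairs = [
    ("ls -la", "ls -al"),      # valid but not an exact match
    ("df -h", "df -h"),        # valid and exact
    ("echo 'oops", "echo ok"), # unterminated quote: fails bash -n
]
print(evaluate(pairs))
```

Note how equivalent-but-different commands (`ls -la` vs `ls -al`) count as valid yet fail exact match, which is why the 9.1% figure is a conservative lower bound.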

### Results

| Metric | Base Qwen | SecureCLI-Tuner V2 | Improvement |
|--------|-----------|--------------------|-------------|
| **Command Validity** | 97.1% | **99.0%** | +1.9% |
| **Exact Match** | 0% | **9.1%** | +9.1% |
| **Adversarial Safety** | N/A | **100%** | Critical |

The model demonstrates a substantial improvement in safety and formatting compliance over the base model.


#### Summary

SecureCLI-Tuner V2 significantly improves upon the base Qwen2.5-Coder-7B model in terms of **safety** (100% block rate for adversarial attacks) 
and **command validity** (+1.9%). While strict "Exact Match" scores remain low (9.1%) due to the variability of valid Bash syntax 
(e.g., `ls -la` vs `ls -al`), the functional correctness is high. 
The model demonstrates a minor trade-off in general knowledge (MMLU -5.2%) to achieve this domain specialization.

## Model Examination

Model examination focused on behavioral analysis via the **Adversarial Test Suite** rather than internal interpretability (e.g., attention maps).
The model consistently activates refusal behaviors when presented with dangerous intents, even when obfuscated (e.g., base64 encoding).
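As an illustration of the kind of obfuscation probe such a suite might include (a hypothetical helper, not the published test code), a base64-unwrapping check could decode candidate payloads and re-apply the dangerous-pattern scan:

```python
import base64
import binascii
import re

# Hypothetical single pattern for illustration; a real harness would reuse
# the full zero-tolerance rule set.
DANGEROUS = re.compile(r"rm\s+-rf\s+/")

def decoded_payloads(command: str):
    """Yield plausible base64 substrings of a command, decoded to text."""
    for token in re.findall(r"[A-Za-z0-9+/=]{8,}", command):
        try:
            yield base64.b64decode(token, validate=True).decode("utf-8")
        except (binascii.Error, UnicodeDecodeError):
            continue

def is_obfuscated_dangerous(command: str) -> bool:
    """Flag commands whose base64 payload decodes to a dangerous pattern."""
    return any(DANGEROUS.search(p) for p in decoded_payloads(command))

payload = base64.b64encode(b"rm -rf /").decode()  # "cm0gLXJmIC8="
print(is_obfuscated_dangerous(f"echo {payload} | base64 -d | bash"))  # True
```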

## Environmental Impact

- **Hardware Type:** NVIDIA A100 40GB
- **Hours used:** ~1 hour (44.5 minutes training time)
- **Cloud Provider:** RunPod
- **Compute Region:** N/A (Decentralized)
- **Carbon Emitted:** Negligible (< 0.1 kg CO2eq)

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).


### Model Architecture and Objective

Qwen2.5-Coder is a Transformer-based Causal Language Model. This fine-tune adds Low-Rank Adapters (LoRA) to the attention layers to specialize in NL-to-Bash translation 
without forgetting general coding knowledge (MMLU drop was only -5.2%).

### Compute Infrastructure

- **Orchestration:** Axolotl
- **Container:** Docker (RunPod PyTorch 2.4 image)

#### Hardware

- **GPU:** 1x NVIDIA A100 (40GB VRAM)
- **Platform:** RunPod Cloud Instance

#### Software

- **Orchestration:** Axolotl v0.5.x
- **Core:** PyTorch 2.4.0, Transformers 4.45.0
- **Efficiency:** PEFT 0.18.1, BitsAndBytes 0.44.0
- **CUDA:** 12.1

## Citation

Publication: <https://app.readytensor.ai/publications/securecli-tuner-a-security-first-llm-for-agentic-devops-HRqKpglRZnig>

**BibTeX:**


```bibtex
@misc{securecli_tuner_v2,
  author = {Williams, Michael},
  title = {SecureCLI-Tuner V2: A Security-First LLM for Agentic DevOps},
  year = {2026},
  publisher = {Ready Tensor Certification Portfolio}
}
```

**APA:**

Williams, M. (2026). *SecureCLI-Tuner V2: A Security-First LLM for Agentic DevOps*. 
Ready Tensor Certification Portfolio. <https://huggingface.co/mwill-AImission/SecureCLI-Tuner-V2>

## More Information

For full details on the CommandRisk engine, the Data Preparation Pipeline,
and the "Defense in Depth" strategy, please visit the [GitHub Repository](https://github.com/mwill20/SecureCLI-Tuner).

## Model Card Authors

Michael Williams (mwill-AImission)

## Model Card Contact

For questions, open an issue on the [GitHub Repository](https://github.com/mwill20/SecureCLI-Tuner).

### Framework versions

- PEFT 0.18.1