---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- cybersecurity
- document-classification
- sft
- lora
size_categories:
- 10K<n<100K
---
# Beam Training Data
Supervised fine-tuning (SFT) dataset used to train the TorchSight Beam model — a cybersecurity document classifier based on Qwen 3.5 27B.
## Dataset
- **74,441 training samples** (`sft/train_alpaca.jsonl`)
- **3,917 validation samples** (`sft/val_alpaca.jsonl`)
- Alpaca format: `instruction`, `input`, `output`
- Balanced across 7 categories + subcategories
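Each line of the JSONL files is one record with the three Alpaca fields. A minimal sketch of reading such a record (the record content below is illustrative only, not drawn from the dataset):

```python
import json

# One hypothetical line from sft/train_alpaca.jsonl (illustrative content;
# field names match the Alpaca format used by this dataset).
line = (
    '{"instruction": "Classify the following document into one of the '
    'security categories.", '
    '"input": "CVE-2024-0001: buffer overflow in example parser.", '
    '"output": "vulnerability"}'
)
record = json.loads(line)

# Every record carries exactly these three keys.
assert set(record) == {"instruction", "input", "output"}
```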
## Sources (all verified safe for AI training)
| Source | License | Content |
|---|---|---|
| AI4Privacy (300K PII) | Apache 2.0 | PII samples |
| Enron (FERC release) | Public domain | Email/financial data |
| NVD/NIST | Public domain (US Gov) | Vulnerability descriptions |
| SecLists | MIT | Security payloads |
| PayloadsAllTheThings | MIT | Attack payloads |
| Prompt Injection datasets | Apache 2.0 | Injection attacks |
| GHSA | CC-BY 4.0 | Security advisories |
| Loghub | Free for research use | System logs (safe class) |
| Synthetic | Generated | Hard negatives, edge cases |
## Structure
- `sft/` — Final SFT training files (Alpaca format)
- `processed/` — Intermediate processed files from each source
- `synthetic/` — Generated synthetic data (hard negatives, edge cases)
## Training
```bash
# LoRA training on Qwen 3.5 27B
python train_lora.py # r=128, alpha=256, 5 epochs, H100 80GB
python export_gguf.py # Export to GGUF for Ollama
```
Compatible with `trl` 0.11.4, `transformers` 4.45.2, and `peft` 0.13.2.
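The LoRA hyperparameters above (r=128, alpha=256) could be expressed with `peft` roughly as follows; this is a sketch, not the contents of `train_lora.py`, and `target_modules` and the dropout value are assumptions not stated in this card:

```python
from peft import LoraConfig

# Hypothetical LoRA configuration matching the hyperparameters noted above.
lora_config = LoraConfig(
    r=128,                  # rank, from the training comment above
    lora_alpha=256,         # alpha, from the training comment above
    lora_dropout=0.05,      # assumed; not stated in the card
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)
```

This config would be passed to `trl`'s `SFTTrainer` (or `get_peft_model`) before training; epoch count and hardware are set elsewhere.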
## License
Apache 2.0