---
license: apache-2.0
tags:
- cybersecurity
- document-classification
- gguf
- ollama
- qwen
- lora
base_model: Qwen/Qwen3.5-27B
---

# TorchSight Beam q8_0

Cybersecurity document classifier. LoRA fine-tune of Qwen 3.5 27B, quantized to q8_0. 28GB GGUF.

Higher-precision weights than the q4_K_M build (92.7% category accuracy). Requires a 48GB+ GPU or a 64GB Mac.

## Benchmark Results (1000 samples)

| Model | Category Acc | Subcategory Acc |
|---|---|---|
| **Beam q4_K_M** | **95.1%** | 48.5% |
| Beam f16 | 93.0% | 51.3% |
| Beam q8_0 | 92.7% | 51.3% |
| Claude Opus 4 | 79.9% | 22.5% |
| Gemini 2.5 Pro | 75.4% | 21.0% |
| Qwen 3.5 27B (no fine-tune) | 43.3% | 4.3% |

## Usage with Ollama

```bash
ollama pull torchsight/beam:q8_0
```

Or with the GGUF file:

```
# Modelfile
FROM ./beam-1.0-q8_0.gguf

TEMPLATE "{{ .Prompt }}"
```
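
Assuming the Modelfile above is saved as `Modelfile` next to the GGUF, a typical workflow is to register a local model and then run it (the model name `beam-q8` here is arbitrary, not an official tag):

```shell
# Register a local model from the Modelfile (name is arbitrary)
ollama create beam-q8 -f Modelfile

# Classify a document's contents in one shot
ollama run beam-q8 "$(cat document.txt)"
```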

## Output Format

```json
[
  {
    "category": "credentials",
    "subcategory": "credentials.api_key",
    "severity": "critical",
    "explanation": "AWS access key found: AKIA****VIW..."
  }
]
```

Categories: `pii`, `credentials`, `financial`, `medical`, `confidential`, `malicious`, `safe`
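
Since the model emits a JSON array, downstream code can parse and sanity-check it with the standard library alone. A minimal sketch: the category set comes from this card, while the dotted subcategory-prefix check is an assumption based on the example above.

```python
import json

# Category labels listed on this card
ALLOWED_CATEGORIES = {
    "pii", "credentials", "financial", "medical",
    "confidential", "malicious", "safe",
}

def parse_findings(raw: str) -> list[dict]:
    """Parse the model's JSON output and validate each finding."""
    findings = json.loads(raw)
    if not isinstance(findings, list):
        raise ValueError("expected a JSON array of findings")
    for f in findings:
        if f["category"] not in ALLOWED_CATEGORIES:
            raise ValueError(f"unknown category: {f['category']}")
        # Assumed convention: subcategories are dotted under their category,
        # e.g. "credentials.api_key"
        if not f["subcategory"].startswith(f["category"] + "."):
            raise ValueError(f"subcategory does not match category: {f}")
    return findings

raw = '''[{"category": "credentials",
           "subcategory": "credentials.api_key",
           "severity": "critical",
           "explanation": "AWS access key found"}]'''
findings = parse_findings(raw)
```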

## Training

- Base: Qwen 3.5 27B (dense)
- Method: LoRA (r=128, alpha=256)
- Data: 74K balanced samples from 18+ sources
- Epochs: 5
- GPU: H100 80GB PCIe
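
The LoRA hyperparameters above could be expressed as a `peft` `LoraConfig`; this is only a sketch, since `target_modules` and `lora_dropout` are not stated on this card and are illustrative guesses.

```python
from peft import LoraConfig

# r and alpha are from this card; target_modules and dropout are assumptions
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # guess
    lora_dropout=0.05,  # guess
    task_type="CAUSAL_LM",
)
```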

## Links

- [Benchmark Dataset](https://huggingface.co/datasets/torchsight/cybersecurity-classification-benchmark)
- [Training Data](https://huggingface.co/datasets/torchsight/beam-training-data)
- [GitHub](https://github.com/IvanDobrovolsky/torchsight)

## License

Apache 2.0
|