TorchSight Beam f16

Cybersecurity document classifier. A LoRA fine-tune of Qwen 3.5 27B, exported as a 53GB f16 GGUF.

The full-precision variant (93.0% category accuracy). Requires a 96GB+ GPU.

Benchmark Results (1000 samples)

Model                         Category Acc   Subcategory Acc
Beam q4_K_M                   95.1%          48.5%
Beam f16                      93.0%          51.3%
Beam q8_0                     92.7%          51.3%
Claude Opus 4                 79.9%          22.5%
Gemini 2.5 Pro                75.4%          21.0%
Qwen 3.5 27B (no fine-tune)   43.3%           4.3%

Usage with Ollama

ollama pull torchsight/beam:f16

Or with the GGUF file:

# Modelfile
FROM ./beam-1.0-f16.gguf

TEMPLATE "{{ .Prompt }}"
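The Modelfile above can then be registered and run locally (the local tag name here is illustrative, not one defined by this card):

```shell
# Build a local Ollama model from the Modelfile in the current directory,
# then run a one-off prompt against it.
ollama create beam-local -f Modelfile
ollama run beam-local "Classify this document: ..."
```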

Output Format

[
  {
    "category": "credentials",
    "subcategory": "credentials.api_key",
    "severity": "critical",
    "explanation": "AWS access key found: AKIA****VIW..."
  }
]

Categories: pii, credentials, financial, medical, confidential, malicious, safe
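A minimal sketch of consuming the output in Python, assuming the model returns a JSON array exactly as shown above; the raw string here is the sample from this card, and the namespacing check reflects the `category.subcategory` pattern in that sample:

```python
import json

# Sample response in the documented output format
raw = '''[
  {
    "category": "credentials",
    "subcategory": "credentials.api_key",
    "severity": "critical",
    "explanation": "AWS access key found: AKIA****VIW..."
  }
]'''

# The seven documented top-level categories
CATEGORIES = {"pii", "credentials", "financial", "medical",
              "confidential", "malicious", "safe"}

findings = json.loads(raw)
for f in findings:
    # Category must come from the documented set; subcategories
    # appear namespaced under their parent category.
    assert f["category"] in CATEGORIES
    assert f["subcategory"].startswith(f["category"] + ".")
    print(f["severity"], f["subcategory"])
```

This prints `critical credentials.api_key` for the sample above; malformed or out-of-vocabulary responses fail the assertions and can be retried.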

Training

  • Base: Qwen 3.5 27B (dense)
  • Method: LoRA (r=128, alpha=256)
  • Data: 74K balanced samples from 18+ sources
  • Epochs: 5
  • GPU: H100 80GB PCIe
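For reference, the hyperparameters above imply a 2× scaling on the adapter update. A hedged sketch of the arithmetic; the hidden size `d` below is a placeholder for illustration, not a value taken from this card:

```python
# LoRA computes W + (alpha / r) * B @ A, so the effective
# scaling applied to the low-rank update is alpha / r.
r, alpha = 128, 256
scaling = alpha / r
print(scaling)  # 2.0

# Trainable parameters added per adapted (d x d) weight matrix:
# A is (r x d) and B is (d x r), giving r * 2 * d parameters.
# d = 5120 is a hypothetical hidden size, not from this card.
d = 5120
per_matrix = r * 2 * d
print(per_matrix)  # 1310720
```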

License

Apache 2.0

