## How to use with llama.cpp

### Install with Homebrew

```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf torchsight/beam-f16:F16

# Run inference directly in the terminal:
llama-cli -hf torchsight/beam-f16:F16
```
### Install with WinGet (Windows)

```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf torchsight/beam-f16:F16

# Run inference directly in the terminal:
llama-cli -hf torchsight/beam-f16:F16
```
### Use a pre-built binary

```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf torchsight/beam-f16:F16

# Run inference directly in the terminal:
./llama-cli -hf torchsight/beam-f16:F16
```
### Build from source

```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf torchsight/beam-f16:F16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf torchsight/beam-f16:F16
```
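Whichever install route you choose, `llama-server` exposes an OpenAI-compatible HTTP API (default port 8080). A minimal request sketch; the prompt text is illustrative, and `temperature: 0` mirrors the benchmark settings further down:

```sh
# Assumes llama-server is already running on the default port 8080.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Classify this document: <document text>"}
        ],
        "temperature": 0
      }'
```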
### Use Docker

```sh
docker model run hf.co/torchsight/beam-f16:F16
```

# TorchSight Beam f16

Cybersecurity document classifier: a LoRA fine-tune of Qwen 3.5 27B, kept in full half precision (no quantization). The GGUF file is approximately 53 GB.

Recommended hardware: a GPU with 96 GB+ of memory. Use this variant for research and reference; for production deployment, prefer q4_K_M (the default) or q8_0.
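To fetch the raw GGUF instead of pulling it through llama.cpp or Ollama, the Hugging Face CLI works; this sketch downloads the whole repo into the local cache (repo id taken from this card):

```sh
# ~53 GB download; requires `pip install -U huggingface_hub`.
huggingface-cli download torchsight/beam-f16
```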

Released alongside:

> Dobrovolskyi, I. (2026). Security Document Classification with a Fine-Tuned Local Large Language Model: Benchmark Data and an Open-Source System. *Journal of Information Security and Applications*.

## Benchmark results

Evaluated under identical methodology (alpaca prompt, Ollama `/api/generate`, `temperature = 0`, `num_predict = 2048`) on the companion dataset torchsight/cybersecurity-classification-benchmark. Canonical numbers live in that repo's `BENCHMARK_NUMBERS.md`.
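As a concrete sketch of one evaluation call, the request below reproduces those settings against a local Ollama instance (default port 11434); the instruction text stands in for the actual alpaca-format prompt, which lives in the benchmark repo:

```sh
curl http://localhost:11434/api/generate -d '{
  "model": "torchsight/beam-f16",
  "prompt": "Classify the following document into a security category: <document text>",
  "stream": false,
  "options": {"temperature": 0, "num_predict": 2048}
}'
```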

### Primary: eval-1000-synthetic (n = 1,000)

| Model | Type | Cat. acc [95% CI] | Subcat. acc |
|---|---|---|---|
| Beam q4_K_M | Local (LoRA) | 95.0% [93.5, 96.2] | 48.2% |
| Beam f16 | Local (LoRA) | 93.2% [91.5, 94.6] | 51.1% |
| Beam q8_0 | Local (LoRA) | 93.0% [91.2, 94.4] | 51.4% |
| Claude Sonnet 4 | Commercial API | 79.9% [77.3, 82.3] | 23.0% |
| Claude Opus 4 | Commercial API | 79.9% [77.3, 82.3] | 22.5% |
| GPT-5 | Commercial API | 76.9% [74.2, 79.4] | 11.6% |
| Gemini 2.5 Pro | Commercial API | 75.4% [72.6, 78.0] | 21.0% |
| Qwen 3.5 27B base | Local (no LoRA) | 86.3% [84.0, 88.3] | 19.0% |
| Regex (48 patterns) | Rule-based | 52.7% [49.6, 55.8] | – |

### External: eval-500-external (n = 500)

| Model | Cat. acc [95% CI] | Δ vs. primary |
|---|---|---|
| Beam q4_K_M | 93.8% [91.3, 95.6] | −1.2 pp |
| Beam f16 | 91.2% [88.4, 93.4] | −2.0 pp |
| Beam q8_0 | 91.2% [88.4, 93.4] | −1.8 pp |
| Claude Sonnet 4 | 86.4% [83.1, 89.1] | +6.5 pp |
| Gemini 2.5 Pro | 82.0% [78.4, 85.1] | +6.6 pp |
| Qwen 3.5 27B base | 86.6% [83.3, 89.3] | +0.3 pp |
| GPT-5 | 65.8% [61.5, 69.8] | −11.1 pp |
| Regex baseline | 29.6% [25.8, 33.7] | −23.1 pp |

## Usage with Ollama

```sh
ollama pull torchsight/beam-f16
ollama run torchsight/beam-f16
```

Or via the TorchSight CLI.
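`ollama run` also takes a one-shot prompt argument for non-interactive use; the prompt wording and `advisory.txt` below are illustrative, not the exact template from the benchmarks:

```sh
# Classify a local file in one shot (file name is hypothetical).
ollama run torchsight/beam-f16 "Classify the following document into a security category: $(cat advisory.txt)"
```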

## Training

- Base: Qwen 3.5 27B (dense)
- Method: LoRA (r = 128, α = 256), bf16, 5 epochs
- Dataset: 78,358 balanced samples; see torchsight/beam-training-data
- Hardware: 8× NVIDIA A100 80GB SXM4, 10.5 hours

## License

Apache 2.0. The base model (Qwen 3.5 27B) carries its own license; consult the upstream terms before use.
