LFM2.5-1.2B-PACK-Doc

Part of the PACKMate framework by Veritasr LLC

Model Description

LFM2.5-1.2B-PACK-Doc is a fine-tuned version of LFM2.5-1.2B-Instruct, trained on a custom dataset derived from current Tactical Combat Casualty Care (TCCC) protocols. It is designed for air-gapped edge deployment in high-stress environments where network connectivity is unavailable or operationally unacceptable.

This model will not refuse domain-relevant queries due to language or context. That is a feature, not an oversight.

Intended Use

  • Combat medics and 68W personnel requiring immediate procedural guidance
  • Air-gapped edge devices including rugged Android, Raspberry Pi, and similar hardware
  • Offline deployment with no cloud dependency
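For offline use, the GGUF build can be served locally with llama.cpp's CLI; a minimal sketch (the model filename, prompt, and generation settings here are illustrative assumptions, not shipped defaults):

```shell
# Hypothetical invocation; adjust the path and flags for your hardware.
# -m: path to the Q8_0 GGUF  -n: max tokens to generate
# --temp: kept low for deterministic procedural output
./llama-cli \
  -m ./LFM2.5-1.2B-PACK-Doc-Q8_0.gguf \
  -p "Describe tourniquet application per TCCC." \
  -n 256 \
  --temp 0.2
```

No network access is required at inference time; the model file and the binary are the only dependencies.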

Out of Scope

  • General-purpose assistant tasks
  • Diagnosis or replacement of professional medical judgment in non-field settings
  • Any deployment where connectivity to cloud services is acceptable and preferable

Training Details

  • Base model: LFM2.5-1.2B-Instruct
  • Quantization: Q8_0 GGUF
  • Dataset: 2,500+ instruction-completion pairs derived from current TCCC protocols
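Q8_0 stores weights in fixed-size blocks of signed 8-bit values, one floating-point scale per block. A minimal sketch of that round-trip, assuming the common 32-element block layout (illustrative only, not the exact llama.cpp kernel):

```python
def quantize_q8_0(weights, block_size=32):
    """Quantize a flat list of floats into (scale, int8 values) blocks.

    Each block stores one scale d = max|x| / 127 and up to 32 signed
    8-bit values q = round(x / d), mirroring the Q8_0 layout.
    """
    blocks = []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        amax = max(abs(x) for x in block)
        d = amax / 127.0 if amax > 0 else 0.0
        q = [round(x / d) if d else 0 for x in block]
        blocks.append((d, q))
    return blocks


def dequantize_q8_0(blocks):
    """Reconstruct approximate floats from (scale, int8 values) blocks."""
    return [d * q for d, qs in blocks for q in qs]
```

At 8 bits per weight plus one scale per block, the quantized file is roughly a quarter the size of an fp32 checkpoint, with small per-weight error.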

Evaluation

  • Refusal rate on domain queries: pending formal evaluation
  • Average response latency on target hardware: pending formal evaluation
  • Accuracy on TCCC protocol queries: pending formal evaluation

PACKMate Framework

PACK-Doc is one module in an extensible architecture:

  • PACK-Doc: combat medicine (TCCC protocols)
  • PACK-Tac: tactical procedures (ROE, CQB, patrol)
  • PACK-Q: logistics (supply chain, requisitions)
  • PACK-Maint: equipment maintenance (vehicles, weapons)

Same pipeline, different datasets. Additional modules are in development.
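Because every module shares one data contract, each dataset is just a different set of instruction-completion pairs. A sketch of what a single JSONL record and a loader-side validator might look like (the field names are assumptions, not the published schema):

```python
import json

# Assumed schema: one JSON object per line with these two string fields.
REQUIRED_FIELDS = ("instruction", "completion")


def validate_record(line):
    """Parse one JSONL line and confirm both required fields are present
    as non-empty strings. Returns the parsed dict or raises ValueError."""
    record = json.loads(line)
    for field in REQUIRED_FIELDS:
        value = record.get(field)
        if not isinstance(value, str) or not value.strip():
            raise ValueError(f"missing or empty field: {field}")
    return record


# Hypothetical PACK-Doc record (MARCH is the standard TCCC assessment order).
example = json.dumps({
    "instruction": "List the MARCH assessment order.",
    "completion": "Massive hemorrhage, Airway, Respiration, "
                  "Circulation, Hypothermia/Head injury.",
})
```

Swapping datasets while keeping this contract is what lets the same fine-tuning pipeline produce each module.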

License

CC BY-NC 4.0 - Free for personal, research, and non-commercial use. Commercial use requires written permission from Veritasr LLC.

Citation

If you use this model in research or derivative work:

@misc{packdoc2025,
  author = {Veritasr LLC},
  title = {LFM2.5-1.2B-PACK-Doc},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/veritasr/LFM2.5-1.2B-PACK-Doc}
}

Contact

Veritasr LLC - veritasr.com
