Model Summary

This is a specialized fine-tune of Qwen 3.5 9B, optimized for the analysis and interpretation of FDA Warning Letters. It is designed to identify patterns of non-compliance and map observed violations to specific regulatory clauses (21 CFR Part 820 / ISO 13485).

Intended Use

  • Regulatory Mapping: Automatic mapping of observed violations to statutory requirements.
  • Gap Analysis: Powers Step 3 of the Medgap remediation pipeline.
  • Risk Mitigation: Predicts potential audit findings based on historical enforcement logic.

🧬 Development & Data Lineage (Private)

The following resources are maintained in private repositories to document lineage and ownership; access is restricted:

📈 Production Training Metrics (Receipts)

This model was trained using the Unsloth framework on professional-grade infrastructure.

  • Training Hardware: NVIDIA A40 (48GB VRAM)
  • Dataset Size: 728 rows (Instruction-Input-Output triplets)
  • Total Steps: 273
  • Epochs: 3
  • Trainable Parameters: 1,966,080 (0.02% of total)
  • Final Training Loss: 1.068
  • Total Runtime: 5,166 seconds (~1h 26m)
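These figures are internally consistent: 728 rows over 3 epochs in 273 steps implies an effective batch size of 8 (an inference from the numbers, not stated in the card). A quick sanity check:

```python
# Sanity-check the reported training metrics.
# NOTE: the effective batch size is inferred, not published in the card.
dataset_rows = 728
epochs = 3
total_steps = 273
runtime_s = 5166

# steps = rows * epochs / batch  =>  batch = rows * epochs / steps
effective_batch = dataset_rows * epochs / total_steps
print(effective_batch)                      # 8.0

print(round(runtime_s / total_steps, 1))    # ~18.9 seconds per optimizer step
```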

Loss Progress

  • Initial Loss: 2.197 (Epoch 0.11)
  • Mid-point Loss: 0.959 (Epoch 1.32)
  • Final Loss: 0.905 (Epoch 2.97)
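Taken together, the logged checkpoints amount to roughly a 59% reduction in training loss over three epochs, with most of the drop occurring in the first epoch:

```python
# Relative loss reduction from the first to the last logged checkpoint.
initial_loss = 2.197  # epoch 0.11
final_loss = 0.905    # epoch 2.97

reduction = (initial_loss - final_loss) / initial_loss
print(f"{reduction:.1%}")  # 58.8%
```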

🛠 Technical Specifications

  • Base Model: unsloth/Qwen3.5-9B-Base
  • Fine-tuning Method: LoRA / QLoRA
  • Inference Format: 8-bit GGUF (Optimized for llama.cpp)
  • Software Stack: Torch 2.10.0+cu128 | GPU compute capability 8.6 (A40) | Triton 3.6.0
  • Context Window: 131,072 tokens
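Given the 8-bit GGUF target, a minimal serving sketch with llama.cpp's built-in server might look like the following; the model filename and port are illustrative placeholders, not published paths:

```shell
# Serve the 8-bit GGUF with llama.cpp's llama-server.
# Filename and port below are placeholders, not the published artifact names.
./llama-server \
  --model medgap-qwen-3.5-fda-Q8_0.gguf \
  --ctx-size 131072 \
  --n-gpu-layers 99 \
  --port 8080
```

Offloading all layers (`--n-gpu-layers 99`) assumes A40-class VRAM; smaller GPUs would reduce the offload count or the context size.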

Deployment

This model is designed for high-performance private inference: the fine-tuned enforcement engine runs on dedicated NVIDIA hardware with a native C++/CUDA backend to ensure data security and low-latency remediation. For technical verification or consulting inquiries, visit medgap.org.

Model tree for protocolsyncllc/medgap-qwen-3.5-fda
