Instructions to use protocolsyncllc/medgap-qwen-3.5-fda with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Local Apps
- Unsloth Studio
How to use protocolsyncllc/medgap-qwen-3.5-fda with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
# Install Unsloth Studio
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for protocolsyncllc/medgap-qwen-3.5-fda to start chatting
```
Install Unsloth Studio (Windows)
```powershell
# Install Unsloth Studio
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for protocolsyncllc/medgap-qwen-3.5-fda to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for protocolsyncllc/medgap-qwen-3.5-fda to start chatting
```
Load model with FastModel
```python
# pip install unsloth
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="protocolsyncllc/medgap-qwen-3.5-fda",
    max_seq_length=2048,
)
```
Model Summary
This is a specialized fine-tune of Qwen 3.5 9B, optimized for the analysis and interpretation of FDA Warning Letters. It is designed to identify patterns of non-compliance and map observed violations to specific regulatory clauses (21 CFR Part 820 / ISO 13485).
Intended Use
- Regulatory Mapping: Automatic mapping of observed violations to statutory requirements.
- Gap Analysis: Powers Step 3 of the Medgap remediation pipeline.
- Risk Mitigation: Predicts potential audit findings based on patterns in historical FDA enforcement actions.
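The dataset behind this model is described below as Instruction-Input-Output triplets, so a regulatory-mapping query is most naturally phrased as an instruction plus an observation. A minimal sketch of how such a prompt could be assembled, assuming an Alpaca-style template (the exact training template is not published, so treat the layout as a hypothetical):

```python
# Hypothetical Alpaca-style template; the actual template used during
# fine-tuning is not documented, so this is an assumption for illustration.
ALPACA_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, observation: str) -> str:
    """Format a regulatory-mapping query as an instruction/input pair."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=observation)

prompt = build_prompt(
    "Map the observed violation to the relevant 21 CFR Part 820 clause.",
    "Firm failed to establish procedures for corrective and preventive action.",
)
print(prompt)
```

The resulting string would be passed to `tokenizer` and `model.generate` from the FastModel snippet above.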
Development & Data Lineage (Private)
The following resources are maintained in private repositories to document lineage and ownership; access is restricted:
- Production Model Repository: protocolsyncllc/medgap-qwen-3.5-9b
- Curated Training Dataset: protocolsyncllc/fda-warning-letters
- Inference GGUF: Qwen3.5-9B_q8.gguf
Production Training Metrics (Receipts)
This model was trained using the Unsloth framework on professional-grade infrastructure.
| Metric | Value |
|---|---|
| Training Hardware | NVIDIA A40 (48GB VRAM) |
| Dataset Size | 728 rows (Instruction-Input-Output triplets) |
| Total Steps | 273 |
| Epochs | 3 |
| Trainable Parameters | 1,966,080 (0.02% of total) |
| Final Training Loss | 1.068 |
| Total Runtime | 5,166 seconds (~1h 26m) |
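The figures in the table are internally consistent, which can be checked with a few lines of arithmetic. The 9B total parameter count is an approximation taken from the model name, not an exact published figure:

```python
# Sanity-check the reported training metrics (values from the table above).
trainable = 1_966_080
total_params = 9_000_000_000          # "9B" from the model name (approximate)
steps, runtime_s = 273, 5_166

trainable_pct = trainable / total_params * 100
sec_per_step = runtime_s / steps

print(f"trainable: {trainable_pct:.2f}% of total")  # matches the ~0.02% in the table
print(f"throughput: {sec_per_step:.1f} s/step")     # ~18.9 s/step on the A40
```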
Loss Progress
- Initial Loss: 2.197 (Epoch 0.11)
- Mid-point Loss: 0.959 (Epoch 1.32)
- Final Loss: 0.905 (Epoch 2.97)
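The step and epoch counts also pin down the effective batch size (per-device batch times gradient accumulation), which is not stated directly but follows from the published numbers:

```python
# Derive the effective batch size from the reported run:
# 273 steps over 3 epochs on 728 rows.
rows, steps, epochs = 728, 273, 3
steps_per_epoch = steps // epochs         # 91 optimizer steps per epoch
effective_batch = rows / steps_per_epoch  # rows consumed per optimizer step
print(steps_per_epoch, effective_batch)   # 91 8.0
```

An effective batch of 8 is a typical Unsloth LoRA configuration (e.g. batch size 2 with 4 gradient-accumulation steps), though the actual split is an assumption.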
Technical Specifications
- Base Model: unsloth/Qwen3.5-9B-Base
- Fine-tuning Method: LoRA / QLoRA
- Inference Format: 8-bit GGUF (optimized for llama.cpp)
- Software Stack: Torch 2.10.0+cu128 | CUDA compute capability 8.6 | Triton 3.6.0
- Context Window: 131,072 tokens
Deployment
This model is built for high-performance private inference. The fine-tuned enforcement engine runs on dedicated NVIDIA hardware with a native C++/CUDA backend, ensuring data security and low-latency remediation. For technical verification or consulting inquiries, visit medgap.org.
Model tree for protocolsyncllc/medgap-qwen-3.5-fda
- Base model: protocolsyncllc/medgap-qwen-3.5-9b