# Phi-3.5-mini-PL-DevOps-Instruct-v2
Polish DevOps assistant fine-tuned on Infrastructure as Code tasks.
## ⚠️ Fixes in v2

- **Fixed YAML indentation**: consistent 2-space indentation
- **High-quality training**: native BF16 training (no quantization errors)
- **Trained without Unsloth**: no padding-free mode
- **`packing=False`** to preserve whitespace
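The fixes above can be summarized as a training-configuration sketch. This is a hypothetical reconstruction using TRL's `SFTConfig`/`SFTTrainer`: the card confirms BF16, `packing=False`, an effective batch of 96, and the base model, but the exact batch split, dataset variable, and other hyperparameters are assumptions.

```python
# Hypothetical sketch of the training setup described in this card.
# The real script and hyperparameters are not published; only bf16,
# packing=False, the base model, and the effective batch of 96 are stated.
from trl import SFTConfig, SFTTrainer

config = SFTConfig(
    output_dir="phi35-mini-pl-devops",
    bf16=True,                       # native BF16, per the card
    packing=False,                   # keep whitespace/indentation intact in YAML samples
    per_device_train_batch_size=8,   # assumed split of the 96 effective batch
    gradient_accumulation_steps=12,  # 8 * 12 = 96 effective on a single H100
)

trainer = SFTTrainer(
    model="microsoft/Phi-3.5-mini-instruct",
    args=config,
    train_dataset=dataset,  # placeholder: the Polish IaC dataset is not released here
)
trainer.train()
```

Disabling packing matters for IaC data: packing concatenates samples into fixed-length blocks, which can merge or truncate whitespace-sensitive YAML across sample boundaries.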
## Evaluation / Inference

This model is saved in BFLOAT16.

- For 4-bit inference: load with `load_in_4bit=True` (bitsandbytes)
- For vLLM: compatible with standard loading or FP8/AWQ quantization
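A minimal 4-bit loading example, assuming standard `transformers` + `bitsandbytes` usage; the model id matches this card, while the prompt and generation settings are illustrative only.

```python
# Sketch: 4-bit inference with bitsandbytes (requires a CUDA GPU and the
# model weights; prompt and generation parameters are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Ennon/Phi-3.5-mini-PL-DevOps-Instruct-v2"
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # weights are stored in BF16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)

# Example Polish DevOps prompt ("Write an Ansible playbook that installs nginx")
messages = [{"role": "user", "content": "Napisz playbook Ansible instalujący nginx."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```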
## Training
| Param | Value |
|---|---|
| Base | microsoft/Phi-3.5-mini-instruct |
| Method | Full BF16 Finetuning + LoRA |
| Batch | 96 effective |
| Train samples | 172,145 |
| Train loss | 0.5981 |
| Time | 147.3 min |
| GPU | H100 80GB |
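From the table figures alone, a back-of-envelope throughput estimate (assuming the sample count and wall-clock time describe the same single-epoch run, which the card does not state explicitly):

```python
# Rough throughput derived from the training table above.
import math

samples = 172_145        # train samples
minutes = 147.3          # wall-clock training time
effective_batch = 96

print(round(samples / minutes))            # ≈ 1169 samples/min on the H100
print(math.ceil(samples / effective_batch))  # ≈ 1794 optimizer steps per epoch
```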