# Phi-3.5-mini-PL-DevOps-Instruct-v2

A Polish-language DevOps assistant fine-tuned on Infrastructure-as-Code (IaC) tasks.
## ⚠️ Fixes in v2
- Fixed YAML indentation: consistent 2-space indentation
- Higher-quality training: native BF16 training (no quantization errors)
- Trained without Unsloth (no padding-free mode)
- `packing=False` to preserve whitespace
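To illustrate the indentation fix, v2 is intended to emit IaC snippets with consistent 2-space indentation at every nesting level. A purely illustrative Ansible-style example (not taken from the training set):

```yaml
# Illustrative only: 2-space indentation at every nesting level
- name: Install nginx
  apt:
    name: nginx
    state: present
  become: true
```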
## Evaluation / Inference
This model is saved in BFLOAT16.
- For 4-bit inference: load with `load_in_4bit=True` (bitsandbytes)
- For vLLM: compatible with standard loading or FP8/AWQ quantization
## Training
| Parameter | Value |
|---|---|
| Base | google/gemma-2-9b-it |
| Method | Full BF16 Finetuning + LoRA |
| Effective batch size | 96 |
| Train samples | 170,305 |
| Train loss | 0.6174 |
| Training time | 667.0 min |
| GPU | H100 80GB |