Gemma-2-9B-PL-DevOps-Instruct-v2

Polish DevOps assistant fine-tuned on Infrastructure as Code tasks.

⚠️ Fixes in v2

  • Fixed YAML indentation: output now uses consistent 2-space indentation
  • High-quality training: native BF16 (no quantization errors)
  • Trained without Unsloth (no padding-free mode)
  • packing=False to preserve whitespace
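The training settings above can be sketched as a TRL `SFTConfig`. This is an illustrative sketch, not the published training script: only `packing=False` and BF16 are stated on this card, and the batch-size split is an assumption chosen to reach the 96 effective batch listed below.

```python
from trl import SFTConfig

# Sketch of the v2 fine-tuning configuration (assumed; only
# packing=False and native BF16 are stated on this card).
config = SFTConfig(
    output_dir="gemma2-9b-pl-devops-v2",  # hypothetical name
    bf16=True,                            # native BF16 training
    packing=False,                        # preserve sample boundaries/whitespace
    per_device_train_batch_size=12,
    gradient_accumulation_steps=8,        # 12 x 8 = 96 effective (assumed split)
)
```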

Evaluation / Inference

The weights are saved in bfloat16 (BF16).

  • 4-bit inference: load with load_in_4bit=True (bitsandbytes)
  • vLLM: compatible with standard loading or FP8/AWQ quantization
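A minimal 4-bit loading sketch with `transformers` and `bitsandbytes` (requires a CUDA GPU; the repo id is taken from the model tree below, and the prompt is just an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Ennon/Gemma-2-9B-PL-DevOps-Instruct"

# NF4 4-bit quantization with BF16 compute, matching the card's BF16 weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Example Polish DevOps prompt: "Write a Kubernetes Deployment for nginx."
messages = [{"role": "user", "content": "Napisz Deployment Kubernetes dla nginx."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For vLLM, serving the repo directly (e.g. `vllm serve Ennon/Gemma-2-9B-PL-DevOps-Instruct`) should work with standard BF16 loading; FP8/AWQ variants need a pre-quantized checkpoint.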

Training

Param          Value
Base           google/gemma-2-9b-it
Method         Full BF16 finetuning + LoRA
Batch          96 (effective)
Train samples  170,305
Train loss     0.6174
Time           667.0 min
GPU            H100 80GB

Model tree for Ennon/Gemma-2-9B-PL-DevOps-Instruct

  • Base model: google/gemma-2-9b
  • Finetuned: this model
  • Quantizations: 2 models