---
language:
- pl
- en
license: mit
tags:
- devops
- kubernetes
- ansible
- terraform
- yaml
base_model: google/gemma-2-9b-it
---
# Phi-3.5-mini-PL-DevOps-Instruct-v2
Polish DevOps assistant fine-tuned on Infrastructure as Code tasks.
## ⚠️ Fixes in v2
- **Fixed YAML indentation** - consistent 2-space indentation in generated manifests
- **Higher-quality training** - native BF16 training (no quantization errors)
- **Trained without Unsloth** - avoids padding-free mode
- **`packing=False`** - preserves whitespace between samples
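To illustrate the indentation fix above, here is a minimal sketch (not the actual training pipeline) of normalizing a YAML snippet to a consistent 2-space step, assuming the input uses spaces only and one nesting level per 4 spaces:

```python
# Illustrative sketch: re-indent YAML from a 4-space step to a 2-space step.
# Assumes space-only indentation; `old_step`/`new_step` are hypothetical names.
def normalize_indent(yaml_text: str, old_step: int = 4, new_step: int = 2) -> str:
    fixed_lines = []
    for line in yaml_text.splitlines():
        stripped = line.lstrip(" ")
        depth = (len(line) - len(stripped)) // old_step  # nesting level
        fixed_lines.append(" " * (depth * new_step) + stripped)
    return "\n".join(fixed_lines)

messy = "spec:\n    replicas: 3\n    template:\n        metadata: {}"
print(normalize_indent(messy))
# spec:
#   replicas: 3
#   template:
#     metadata: {}
```

The v2 training data was normalized along these lines so the model emits consistent 2-space YAML.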
## Evaluation / Inference
This model is saved in **BFLOAT16**.
- For 4-bit inference, load with `load_in_4bit=True` (bitsandbytes).
- For vLLM, the model works with standard loading or with FP8/AWQ quantization.
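A minimal 4-bit loading sketch with `transformers` and bitsandbytes; the repo id below is a placeholder, and BF16 compute dtype is chosen to match the checkpoint:

```python
# Hedged usage sketch, not a verified recipe: 4-bit loading via bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "your-org/Phi-3.5-mini-PL-DevOps-Instruct-v2"  # placeholder repo id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # checkpoint is stored in BF16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

For full-precision inference, drop `quantization_config` and pass `torch_dtype=torch.bfloat16` instead.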
## Training
| Param | Value |
|-------|-------|
| Base model | google/gemma-2-9b-it |
| Method | Full BF16 fine-tuning + LoRA |
| Effective batch size | 96 |
| Training samples | 170,305 |
| Final train loss | 0.6174 |
| Training time | 667.0 min |
| GPU | H100 80GB |