# Parser v47 (Full Precision)

Fine-tuned Phi-3-mini-4k-instruct for query parsing.
## Quantized Versions

## Training Dataset
- Dataset: magnifi/parser_user_v47_dataset
- Train: 2,661 rows (all real data; no holdout was withheld from training)
- Validation: 160 rows (synthetic templates, v46b-style)
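The training split is the union of the four sources in the table below. A quick arithmetic check (source names here are shorthand labels, not official identifiers) confirms the per-source counts sum to the 2,661-row training split:

```python
# Per-source row counts from the Data Sources table; the validation split's
# 160 synthetic rows are separate and not included here.
sources = {
    "base_v42b": 1991,          # parser_v2_202409 / Week_0519_Combined_v42b
    "exposure_corrections": 579, # MN-2987 / v46a combined
    "curated_exposure": 71,      # MN-2987
    "agentservice": 20,          # AgentService queries
}
train_rows = sum(sources.values())
print(train_rows)  # → 2661, matching the stated training-split size
```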
### Data Sources
| Source | Rows |
|---|---|
| Base (parser_v2_202409 / Week_0519_Combined_v42b) | 1,991 |
| Exposure corrections (MN-2987 / v46a combined) | 579 |
| Curated exposure queries (MN-2987) | 71 |
| AgentService (23-Mar-2026) | 20 |
## Evaluation (v47 vs v46b production)

### 160-sample validation set
| Metric | v46b (production) | v47 (new) | Delta |
|---|---|---|---|
| Exact Match | 95.00% | 95.00% | 0 pp |
| Key-wise Match | 96.25% | 95.63% | -0.62 pp |
| Incorrect predictions | 6 | 7 | +1 |
| Path corrections | 10 | 9 | -1 |
| Avg inference time | 0.113s | 0.118s | +0.005s |
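The card reports both exact match (whole predicted parse equals the gold parse) and key-wise match. The precise key-wise definition isn't given here; a plausible minimal implementation, assuming a per-key comparison against the gold parse averaged over gold keys, is:

```python
def exact_match(pred: dict, gold: dict) -> bool:
    """Whole-parse equality: every key and value must match."""
    return pred == gold

def key_wise_match(pred: dict, gold: dict) -> float:
    """Fraction of gold keys whose predicted value matches exactly.

    Assumption: the metric is averaged over gold keys; the card does not
    spell out how missing or extra keys are scored.
    """
    if not gold:
        return 1.0 if not pred else 0.0
    hits = sum(1 for k, v in gold.items() if pred.get(k) == v)
    return hits / len(gold)

# Hypothetical parses for illustration; the real schema isn't shown in this card.
gold = {"intent": "exposure", "ticker": "AAPL", "horizon": "1y"}
pred = {"intent": "exposure", "ticker": "AAPL", "horizon": "6m"}
print(exact_match(pred, gold))     # → False
print(key_wise_match(pred, gold))  # 2 of 3 keys match
```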
### AgentService queries (20 new query types)
| Model | Key-wise Match | Errors |
|---|---|---|
| v47 (new) | 95.00% (19/20) | 1 |
| v46b (production) | 45.00% (9/20) | 11 |
## Training Config
- Base model: Phi-3-mini-4k-instruct
- Method: LoRA (r=16, alpha=16) + SFT
- Epochs: 7
- Learning rate: 0.002
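The recipe above can be sketched with `peft` and `trl`. Only `r=16`, `lora_alpha=16`, 7 epochs, and the 0.002 learning rate come from this card; the target modules, output directory, and everything else below are assumptions:

```python
# Hedged sketch of the stated fine-tuning setup: LoRA + SFT on
# Phi-3-mini-4k-instruct. Not the authors' actual training script.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("magnifi/parser_user_v47_dataset")

peft_config = LoraConfig(
    r=16,                # from the card
    lora_alpha=16,       # from the card
    task_type="CAUSAL_LM",
    target_modules=["qkv_proj", "o_proj"],  # assumption: attention projections
)
args = SFTConfig(
    output_dir="parser_v47",  # placeholder
    num_train_epochs=7,       # from the card
    learning_rate=2e-3,       # from the card (0.002)
)
trainer = SFTTrainer(
    model="microsoft/Phi-3-mini-4k-instruct",
    args=args,
    train_dataset=dataset["train"],
    peft_config=peft_config,
)
trainer.train()
```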