# Model Card: Phi-3.5-mini Medical Judge (SFT)

## Model Description
This model is a supervised fine-tuned (SFT) version of Phi-3.5-mini-instruct, adapted to perform binary semantic equivalence judgment for French medical open-ended question answering (OEQA).
The model acts as an LLM-as-a-Judge, predicting whether a generated answer is medically equivalent to a reference answer.
## Task Definition
Input:
- Medical question (French)
- Reference answer
- Generated answer
Output:
- `1`: Equivalent
- `0`: Not equivalent
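The input/output contract above can be sketched in code. Note that the exact prompt template used for fine-tuning is not specified in this card, so the French wording below is a hypothetical illustration of how the three inputs might be assembled and how the binary verdict could be parsed:

```python
# Hypothetical prompt builder and output parser for the judge model.
# The prompt wording is an assumption; only the 1/0 output contract
# comes from this card.

def build_judge_prompt(question: str, reference: str, generated: str) -> str:
    """Assemble the three inputs into a single judging prompt (French)."""
    return (
        "Question médicale : " + question + "\n"
        "Réponse de référence : " + reference + "\n"
        "Réponse générée : " + generated + "\n"
        "Les deux réponses sont-elles médicalement équivalentes ? "
        "Répondez uniquement par 1 (équivalent) ou 0 (non équivalent)."
    )

def parse_judgment(model_output: str) -> int:
    """Map the model's raw text output to the binary label 1 or 0."""
    for ch in model_output.strip():
        if ch in ("0", "1"):
            return int(ch)
    raise ValueError(f"No binary label found in: {model_output!r}")
```

In practice, `build_judge_prompt(...)` would be fed through the chat template of `ik-ram28/phi-judge-sft` (e.g. via `transformers`), and the generated text passed to `parse_judgment`.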
## Training Data
- ~184 annotated instances
- Source: French medical educational QA
- Annotation: Expert clinician
## Training Procedure
- Method: Supervised Fine-Tuning (SFT)
- Epochs: 5
- Learning rate: 5e-6
- Max sequence length: 1024
The model is trained to reproduce expert binary equivalence judgments.
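For reference, the reported hyperparameters can be collected into a single configuration object. This is a plain-Python sketch, not the authors' training script; fields the card does not report (batch size, optimizer, warmup, LoRA vs. full fine-tuning) are deliberately omitted rather than guessed:

```python
# Hyperparameters as reported in this model card.
sft_config = {
    "base_model": "microsoft/Phi-3.5-mini-instruct",
    "num_train_epochs": 5,
    "learning_rate": 5e-6,
    "max_seq_length": 1024,
    "num_training_instances": 184,  # approximate, per the card
}
```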
## Citation
```bibtex
@inproceedings{belmadani-etal-2026-judges,
    title = "Who Judges the Judge? Evaluating {LLM}-as-a-Judge for {F}rench Medical open-ended {QA}",
    author = "Belmadani, Ikram and
      El Khettari, Oumaima and
      Constant dit Beaufils, Pac{\^o}me and
      Dufour, Richard and
      Favre, Benoit",
    editor = {Danilova, Vera and
      Kurfal{\i}, Murathan and
      S{\"o}derfeldt, Ylva and
      Reed, Julia and
      Burchell, Andrew},
    booktitle = "Proceedings of the 1st Workshop on Linguistic Analysis for Health ({H}ea{L}ing 2026)",
    month = mar,
    year = "2026",
    address = "Rabat, Morocco",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2026.healing-1.12/",
    pages = "142--157",
    ISBN = "979-8-89176-367-8",
    abstract = "Automatic evaluation of open-ended question answering in specialized domains remains challenging mainly because it relies on manual annotations from domain experts. In this work, we assess the ability of several large language models (LLMs), including closed-access (GPT-5.1, Gemini-2.5-Pro), open-source general-purpose (Qwen-80B), and biomedical domain-adapted models (MedGemma-27B, Phi-3.5-mini variants), to act as automatic evaluators of semantic equivalence in French medical open-ended QA. Our analysis reveals that LLM-based judgments are sensitive to the source of answer generation: judgement correlation varies substantially across different generator models. Among the judges, MedGemma-27B and Qwen-80B achieve the highest agreement with expert annotations in terms of F1 score and Pearson correlation. We further explore lightweight adaptation strategies on Phi-3.5-mini using supervised fine-tuning (SFT) and Group Relative Policy Optimization (GRPO). Even with 184 training instances, these adaptations significantly improve Phi-3.5{'}s results and reduce variability across answer generators, achieving performance comparable to larger domain-adapted models. Our results highlight the importance of generator-aware evaluation, the limitations of general-purpose LLMs in domain-specific settings, and the effectiveness of lightweight adaptation for compact models in low-resource scenarios."
}
```
## Model Tree

- Base model of `ik-ram28/phi-judge-sft`: `microsoft/Phi-3.5-mini-instruct`