LoRA fine-tuning **did not** yield measurable improvements under the current evaluation protocol.

**Note:** Although LoRA fine-tuning did not improve aggregate F1 on the held-out test set, analysis revealed that both the base and fine-tuned models collapsed to a high-recall regime, predicting “change” for all examples. This indicates that the primary performance bottleneck lies in task framing and decision extraction rather than model capacity. The experiment demonstrates stable LoRA adaptation without regression and highlights the importance of evaluation design in generative medical VLMs.
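As a worked illustration of the collapse described above (not part of this repository's evaluation code, and with a hypothetical label distribution), a degenerate predictor that outputs “change” for every example attains perfect recall while its F1 is pinned to the class prevalence, which can make aggregate F1 look deceptively reasonable:

```python
def f1_score(y_true, y_pred, positive="change"):
    """Plain binary F1 for the given positive class (illustrative helper)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical test set: 60% "change", 40% "no change".
y_true = ["change"] * 6 + ["no change"] * 4
# Both the base and fine-tuned models collapse to always predicting "change".
y_pred = ["change"] * 10

# precision = 0.6, recall = 1.0, so F1 = 2 * 0.6 / 1.6 = 0.75
print(f1_score(y_true, y_pred))  # → 0.75
```

Because an all-positive predictor's F1 depends only on class prevalence, two models that both collapse this way are indistinguishable under aggregate F1, which is why the bottleneck here is evaluation design rather than model capacity.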
### Qualitative Analysis
- No test cases were found where the fine-tuned model corrected errors made by the base model.