Florence-2-base LoRA v22 — UI grounding
PEFT LoRA adapter for Florence-2-base-ft (refs/pr/6), iteration v22 of the training pipeline. Regresses on end-to-end UI testing relative to v1 — published here for completeness/reproducibility, but Khabner/florence-base-lora-v1 is recommended for production use.
Benchmark results
- 70% pass rate (35/50) on the 50-test Magnitude suite, 14 pp below v1.
- Regression localized to 9 specific tests (Facebook Photos tab, DemoQA Email field, MDN map sidebar, Weather 5th day, YouTube 3rd, SO react, Russia row, Beyoncé[25], GitHub Email label).
- On-distribution offline acc@2% is +0.3 pp vs v1, but the gap doesn't translate to end-to-end performance; a classic case of overfitting to a narrow training distribution.
- When v22 does succeed it's slightly faster: median 38.8 s, mean 49.1 s.
Usage
Same as v1 — only the adapter id changes:
model = PeftModel.from_pretrained(base, "Khabner/florence-base-lora-v22").eval()
See Khabner/florence-base-lora-v1 README for the full inference snippet, or github.com/VLM-WEBTEST/magnitude_integration for FastAPI serving code.
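For grounding tasks, Florence-2 emits coordinates as quantized `<loc_N>` tokens with N in 0–999, which must be rescaled to the input image size. The processor's `post_process_generation` handles this for you; the sketch below is only an illustrative, simplified decoder (the helper name and box-grouping assumption are ours, not from this repo), useful if you want raw access to the predicted UI boxes:

```python
import re

def decode_loc_tokens(text: str, width: int, height: int):
    """Convert Florence-2 <loc_N> tokens (0-999 bins) into pixel boxes.

    Assumes the generation emits groups of 4 consecutive loc tokens per
    region, in (x1, y1, x2, y2) order; leftover tokens are ignored.
    """
    bins = [int(m) for m in re.findall(r"<loc_(\d+)>", text)]
    boxes = []
    for i in range(0, len(bins) - 3, 4):
        x1, y1, x2, y2 = bins[i:i + 4]
        boxes.append((
            x1 / 1000 * width,   # rescale each 0-999 bin
            y1 / 1000 * height,  # to the original image size
            x2 / 1000 * width,
            y2 / 1000 * height,
        ))
    return boxes

# e.g. on a 1000x500 screenshot:
# decode_loc_tokens("button<loc_100><loc_200><loc_300><loc_400>", 1000, 500)
```

Prefer the official post-processing in production; this is just a convenient way to inspect what the adapter predicted for a UI element.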