---
task_categories:
- image-text-to-text
tags:
- geometry
- mathematical-reasoning
- multimodal
---
# GeoFocus-test

[**Paper**](https://huggingface.co/papers/2602.08524) | [**GitHub**](https://github.com/dle666/GeoFocus)
This repository contains the test and evaluation data for **GeoFocus**, a framework for multimodal geometry problem-solving. GeoFocus addresses the challenge of geometry reasoning by blending efficient global and local perception through two core modules:

1. **Critical Local Perceptor**: automatically identifies and emphasizes critical local structures (e.g., angles, parallel lines, comparative distances) through thirteen theory-based perception templates.
2. **VertexLang**: a compact formal topology language that encodes global figures through vertex coordinates and connectivity relations, reducing global-perception training time while improving topology-recognition accuracy.
This dataset is used to evaluate models on benchmarks including **Geo3K**, **GeoQA**, and **FormalGeo7K**, and demonstrates superior robustness on **MATHVERSE**.
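As a toy illustration of the vertex-plus-connectivity idea behind VertexLang: the sketch below encodes a figure as vertex coordinates plus edge relations and flattens it into a compact token sequence. Note that the actual VertexLang syntax is defined in the paper; every name and format here is hypothetical.

```python
# Hypothetical sketch (NOT the actual VertexLang grammar): a geometric figure
# represented as vertex coordinates plus connectivity relations.

# A right triangle ABC: named vertices with 2D coordinates.
vertices = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (0.0, 3.0)}

# Connectivity: which vertex pairs are joined by a segment.
edges = [("A", "B"), ("B", "C"), ("C", "A")]

def serialize(vertices, edges):
    """Flatten the topology into a compact single-line text encoding."""
    vtx = " ".join(f"{name}({x:g},{y:g})" for name, (x, y) in sorted(vertices.items()))
    con = " ".join(f"{a}-{b}" for a, b in edges)
    return f"V: {vtx} | E: {con}"

print(serialize(vertices, edges))
# → V: A(0,0) B(4,0) C(0,3) | E: A-B B-C C-A
```

Such a flat encoding is the kind of compact target a model can be trained to emit for a figure, which is why a topology language can cut global-perception training cost relative to free-form descriptions.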

## Related Datasets

The training data used by GeoFocus is available at the following links:

* [Global_Perceptor_Data](https://huggingface.co/datasets/dle666/Global_Perceptor)
* [Local_Perceptor_Data](https://huggingface.co/datasets/dle666/Local_Perceptor)

## Citation

If you find this dataset or the GeoFocus framework useful for your research, please cite:

```bibtex
@article{geofocus2026,
  title={GeoFocus: Blending Efficient Global-to-Local Perception for Multimodal Geometry Problem-Solving},
  author={Dle et al.},
  journal={arXiv preprint arXiv:2602.08524},
  year={2026}
}
```