---
task_categories:
- image-text-to-text
language:
- en
tags:
- geometry
- multimodal
- geometry-problem-solving
---

# GeoFocus

[Paper](https://huggingface.co/papers/2602.08524) | [Code](https://github.com/dle666/GeoFocus)

GeoFocus is a novel framework for Multimodal Geometry Problem-Solving (MGPS). It addresses the challenges of recognizing global shapes and intricate local geometric relationships through two core components:

1. **Critical Local Perceptor**: automatically identifies and emphasizes critical local structures (e.g., angles, parallel lines, comparative distances) through thirteen theory-based perception templates, boosting local feature coverage.
2. **VertexLang**: a compact topology formal language that encodes global figures using vertex coordinates and connectivity relations, reducing training time while improving topology recognition accuracy.

## Dataset Description

The GeoFocus project provides several data splits for training and evaluation:

- **Global_Perceptor_Data**: training data focused on global figure recognition using the VertexLang encoding.
- **Local_Perceptor_Data**: training data featuring fine-grained visual attribute annotations for critical local structures.
- **Geo_test**: evaluation datasets covering benchmarks such as Geo3K, GeoQA, and FormalGeo7K.

The models trained on this data, GeoFocus-3B and GeoFocus-7B, demonstrate superior performance and robustness on geometry reasoning tasks.

## Citation

If you use this work or dataset in your research, please cite the original paper:

```bibtex
@article{geofocus2026,
  title={GeoFocus: Blending Efficient Global-to-Local Perception for Multimodal Geometry Problem-Solving},
  author={...},
  journal={arXiv preprint arXiv:2602.08524},
  year={2026}
}
```
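To illustrate the idea behind a vertex-plus-connectivity encoding like VertexLang, here is a minimal sketch. The syntax and helper names below are hypothetical, chosen only to show how a figure can be serialized as vertex coordinates plus edge relations; the actual VertexLang grammar is defined in the paper.

```python
# Hypothetical sketch of a VertexLang-style encoding.
# The format "A(0,0);B(4,0);C(0,3);A-B;..." is illustrative,
# NOT the paper's actual grammar.

def encode(vertices, edges):
    """Serialize a figure as vertex coordinates plus connectivity relations."""
    # Each vertex becomes "Name(x,y)"; sorted for a deterministic output.
    parts = [f"{v}({x:g},{y:g})" for v, (x, y) in sorted(vertices.items())]
    # Each edge becomes "U-W", recording which vertices are connected.
    parts += [f"{u}-{w}" for u, w in edges]
    return ";".join(parts)

# A right triangle with vertices A, B, C.
vertices = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (0.0, 3.0)}
edges = [("A", "B"), ("B", "C"), ("C", "A")]

print(encode(vertices, edges))  # → A(0,0);B(4,0);C(0,3);A-B;B-C;C-A
```

Such a string is compact relative to pixel input, which is consistent with the card's claim that a topology encoding of this kind can reduce training cost.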