---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- earth-insights/EarthReason
library_name: transformers
pipeline_tag: image-segmentation
license: apache-2.0
---
# Bridging Semantics and Geometry: A Decoupled LVLM-SAM Framework for Reasoning Segmentation in Remote Sensing
This repository contains the 7B model of **Think2Seg-RS**, a decoupled framework for reasoning segmentation in remote sensing (RS) imagery.

The model was introduced in the paper [Bridging Semantics and Geometry: A Decoupled LVLM-SAM Framework for Reasoning Segmentation in Remote Sensing](https://huggingface.co/papers/2512.19302).
## Overview

Think2Seg-RS decouples high-level semantic reasoning from low-level geometric execution: an LVLM prompter (based on Qwen2.5-VL) is trained to control a frozen Segment Anything Model (SAM2) via structured geometric prompts. Through a mask-only reinforcement learning objective, the LVLM learns to translate abstract semantic reasoning into spatially grounded actions, achieving state-of-the-art performance on the EarthReason dataset.
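The decoupled design can be illustrated with a minimal sketch: the LVLM's text output is parsed into a geometric prompt, which is then handed to a frozen SAM2 predictor. Note that the JSON prompt format (`{"box": [x0, y0, x1, y1]}`), the SAM2 checkpoint name, and the helper names below are illustrative assumptions, not the repository's actual interface; see the GitHub code for the real pipeline.

```python
import json
import re


def parse_geometric_prompt(lvlm_output: str):
    """Extract a structured geometric prompt from the LVLM's raw text.

    Assumes (hypothetically) that the prompter emits a JSON object such as
    {"box": [x0, y0, x1, y1]} after its reasoning trace.
    """
    match = re.search(r"\{.*\}", lvlm_output, re.DOTALL)
    if match is None:
        return None
    return json.loads(match.group(0))


def segment_with_sam2(image, prompt):
    """Feed the parsed box prompt to a frozen SAM2 predictor.

    Requires the `sam2` package; "facebook/sam2-hiera-large" is an example
    checkpoint, not necessarily the one used in the paper.
    """
    import numpy as np
    from sam2.sam2_image_predictor import SAM2ImagePredictor

    predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")
    predictor.set_image(image)  # image: HxWx3 uint8 array
    masks, scores, _ = predictor.predict(box=np.array(prompt["box"]))
    return masks[scores.argmax()]  # keep the highest-scoring mask
```

Because SAM2 stays frozen, only the prompter's text output needs to be learned; the parser above is the entire interface between reasoning and geometry.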
## Resources

- **Paper:** [arXiv:2512.19302](https://huggingface.co/papers/2512.19302)
- **Code:** [GitHub - Think2Seg-RS](https://github.com/Ricardo-XZ/Think2Seg-RS)
- **Dataset:** [EarthReason](https://huggingface.co/datasets/earth-insights/EarthReason)
## Citation

If you find this work helpful for your research, please cite:

```bibtex
@article{think2seg_rs_2025,
  title={Bridging Semantics and Geometry: A Decoupled LVLM-SAM Framework for Reasoning Segmentation in Remote Sensing},
  author={Anonymous},
  journal={arXiv preprint arXiv:2512.19302},
  year={2025}
}
```