---
license: cc-by-nc-4.0
pipeline_tag: image-text-to-text
---
# GeoReasoner: Geo-localization with Reasoning in Street Views using a Large Vision-Language Model
GeoReasoner is a novel large vision-language model (LVLM) for geo-localization in street views, enhanced with human inference knowledge. It addresses data scarcity and quality issues by curating a new dataset of highly locatable street views and integrating external knowledge from geo-localization games. Fine-tuned through reasoning-tuning and location-tuning stages, GeoReasoner significantly outperforms existing LVLMs and StreetCLIP, improving country-level accuracy by more than 25% and city-level accuracy by more than 38%.
This model was presented in the ICML 2024 paper *GeoReasoner: Geo-localization with Reasoning in Street Views using a Large Vision-Language Model*. The official code and further details can be found in the GitHub repository.
## Release
### Data
- For Stage 1 (Reasoning Tuning Phase), we have released the SFT data.
- For Stage 2 (Location Tuning Phase), we cannot provide the corresponding data directly due to copyright restrictions on Google Street View imagery. However, you can retrieve the relevant data yourself using the official Google Street View API (see the sketch after this list).
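As a starting point, here is a minimal, hedged Python sketch of fetching one image through the Google Street View Static API. The coordinates, image parameters, and the `GOOGLE_API_KEY` environment variable are illustrative assumptions, not values from the original pipeline.

```python
import os

import requests

# Fetch a single street-view image via the official Street View Static API.
# All parameter values below are placeholders; supply your own key and locations.
API_KEY = os.environ["GOOGLE_API_KEY"]  # assumed env var, set it yourself
ENDPOINT = "https://maps.googleapis.com/maps/api/streetview"

params = {
    "size": "640x640",             # width x height in pixels
    "location": "48.8584,2.2945",  # "lat,lng" (example coordinates)
    "heading": 0,                  # camera heading in degrees
    "fov": 90,                     # horizontal field of view
    "pitch": 0,                    # up/down angle of the camera
    "key": API_KEY,
}

resp = requests.get(ENDPOINT, params=params, timeout=30)
resp.raise_for_status()
with open("streetview_example.jpg", "wb") as f:
    f.write(resp.content)
```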
### Code
- `loc_clip`: the codebase for computing the locatability of street view images.
- `GeoReasoner`: a collection of training and inference scripts for GeoReasoner models.
## Usage and License Notices
This project uses certain datasets and checkpoints that are subject to their respective original licenses. Note in particular that the data collected from GeoGuessr and Tuxun cannot be used for commercial purposes.
## Description
### For computing locatability of street view images
- Follow the MaskFormer instructions to ensure that the Inference Demo with Pre-trained Models works correctly.
- Obtain the percentage of each category from the segmentation results.
- Calculate the locatability value by referring to the example in the script `loc_clip/locatability_comput.py` (a simplified sketch follows this list).
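For orientation, here is a minimal, hedged sketch of the idea: the per-category pixel percentages are combined into a single locatability score as a weighted sum. The category names and weights below are illustrative assumptions; the actual values are defined in `loc_clip/locatability_comput.py`.

```python
# Illustrative sketch: score a street view's "locatability" as a weighted
# sum of semantic-segmentation category percentages. The categories and
# weights are placeholders; the real ones live in loc_clip/locatability_comput.py.
CATEGORY_WEIGHTS = {
    "signboard": 1.0,   # assumed: classes rich in geo-cues count more
    "building": 0.8,
    "road": 0.5,
    "vegetation": 0.2,
    "sky": 0.0,         # assumed: generic classes carry no geo-cue
}

def locatability(category_percentages: dict) -> float:
    """category_percentages maps class name -> fraction of pixels in [0, 1]."""
    return sum(
        CATEGORY_WEIGHTS.get(name, 0.0) * pct
        for name, pct in category_percentages.items()
    )

# Example: segmentation output of a MaskFormer run, normalized to fractions.
example = {"building": 0.40, "road": 0.25, "sky": 0.20, "signboard": 0.05, "vegetation": 0.10}
print(f"locatability = {locatability(example):.3f}")
```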
### For the inference of GeoReasoner models
- The pre-trained LVLM weights are available from the official Qwen-VL release.
- Our LoRA weights are available in this repository.
- Inference steps

  ```bash
  cd GeoReasoner
  git clone https://github.com/QwenLM/Qwen-VL.git
  cd Qwen-VL
  pip install -r requirements.txt
  mkdir Qwen-VL-Models
  mkdir LoRA
  ```

  Then download the pre-trained LVLM weights into the `Qwen-VL-Models` folder and the LoRA weights into the `LoRA` folder.

  ```bash
  python infer.py  # with the test image
  # Due to the inherent randomness in LVLM generation, the generated reasons may not always be consistent.
  ```
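  If you prefer to call the model from Python directly instead of through `infer.py`, the sketch below shows one plausible way to load Qwen-VL together with the LoRA adapter using `transformers` and `peft`. The folder names match the layout created above, but the prompt wording and generation call are illustrative assumptions, not the exact contents of `infer.py`.

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from peft import PeftModel

  # Paths follow the folder layout created above; adjust as needed.
  BASE = "Qwen-VL-Models"  # pre-trained Qwen-VL weights
  ADAPTER = "LoRA"         # GeoReasoner LoRA weights

  tokenizer = AutoTokenizer.from_pretrained(BASE, trust_remote_code=True)
  model = AutoModelForCausalLM.from_pretrained(
      BASE, device_map="auto", trust_remote_code=True
  ).eval()
  model = PeftModel.from_pretrained(model, ADAPTER)  # attach the LoRA adapter

  # Qwen-VL's chat interface accepts interleaved image/text input.
  query = tokenizer.from_list_format([
      {"image": "test.jpg"},  # replace with your own street-view image
      {"text": "Which country and city is this street view from? Explain your reasoning."},
  ])
  response, _ = model.chat(tokenizer, query=query, history=None)
  print(response)
  ```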
- Training steps (Reasoning Tuning Phase)

  ```bash
  cd GeoReasoner
  git clone https://github.com/QwenLM/Qwen-VL.git
  cd Qwen-VL
  pip install -r requirements.txt
  mkdir Qwen-VL-Models
  mkdir LoRA
  mkdir Dataset
  ```

  Then download the pre-trained LVLM weights into the `Qwen-VL-Models` folder and the SFT data into the `Dataset` folder.

  ```bash
  mv finetune_lora_reason.sh Qwen-VL/finetune
  cd Qwen-VL
  sh finetune/finetune_lora_reason.sh
  ```
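  For orientation, Qwen-VL's fine-tuning scripts consume a JSON list of conversations in which images are referenced via `<img>...</img>` tags. The record below is a hedged illustration of what a Stage-1 reasoning sample might look like; the id, image path, and answer text are made up, not actual released data.

  ```python
  import json

  # Illustrative single SFT record in the conversation format expected by
  # Qwen-VL's fine-tuning scripts. All values here are placeholders.
  sample = {
      "id": "georeasoner_000001",
      "conversations": [
          {
              "from": "user",
              "value": "Picture 1: <img>Dataset/images/example.jpg</img>\n"
                       "Which country and city is this street view from? Explain why.",
          },
          {
              "from": "assistant",
              "value": "The left-hand traffic and bilingual signage narrow this down; "
                       "the image is most likely from Hong Kong, China.",
          },
      ],
  }

  with open("Dataset/sft_example.json", "w", encoding="utf-8") as f:
      json.dump([sample], f, ensure_ascii=False, indent=2)
  ```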
## Acknowledgments
We are very grateful for the source code and outstanding contributions from MaskFormer, Sentence-BERT, and Qwen-VL.
## Citation
```bibtex
@inproceedings{li2024georeasoner,
  title={GeoReasoner: Geo-localization with Reasoning in Street Views using a Large Vision-Language Model},
  author={Li, Ling and Ye, Yu and Zeng, Wei},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2024}
}
```
