---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
language:
- en
license: cc-by-nc-4.0
pipeline_tag: image-text-to-text
tags:
- image
datasets:
- ghost233lism/GeoSeek
---

<div align="center">

<h1>GeoAgent: Learning to Geolocate Everywhere with Reinforced Geographic Characteristic</h1>

[**Modi Jin**](https://ghost233lism.github.io/)<sup>1</sup> · [**Yiming Zhang**](https://zhang-yi-ming.github.io/)<sup>1</sup> · [**Boyuan Sun**](https://bbbbchan.github.io/)<sup>1</sup> · [**Dingwen Zhang**](https://zdw-nwpu.github.io/dingwenz.github.com/)<sup>2</sup> · [**Mingming Cheng**](https://mmcheng.net/)<sup>1</sup> · [**Qibin Hou**](https://houqb.github.io/)<sup>1&dagger;</sup>

<sup>1</sup>VCIP, Nankai University <sup>2</sup>School of Automation, Northwestern Polytechnical University

&dagger;Corresponding author

**English | [简体中文](README_zh.md)**

<a href="https://github.com/HVision-NKU/GeoAgent"><img alt="github" src="https://img.shields.io/badge/Github-GeoAgent-181717?logo=github&color=1783ff&logoColor=white"/></a>
<a href="https://ghost233lism.github.io/GeoAgent-page/"><img src='https://img.shields.io/badge/Project-Page-green' alt='Project Page'></a>
<a href='https://huggingface.co/datasets/ghost233lism/GeoSeek'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-GeoSeek_Dataset-purple'></a>
<a href='https://huggingface.co/ghost233lism/GeoAgent'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue'></a>
<a href='https://huggingface.co/spaces/ghost233lism/GeoAgent'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-orange' alt='Demo'></a>

</div>

<!-- ![teaser](assets/teaser.png) -->

**GeoAgent** is a vision-language model for **image geolocation** that reasons in a human-like way and produces fine-grained address predictions. Built upon [Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), it achieves strong performance across multiple geographic granularities (city, region, country, continent) while generating interpretable chain-of-thought reasoning.

GeoAgent introduces:

1. A **geo-similarity reward** that combines spatial and semantic similarity to handle the many-to-one mapping between natural language and geographic locations (a minimal sketch follows this list);
2. A **consistency reward** assessed by a consistency agent to ensure the integrity and coherence of the reasoning chains.
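
As a rough illustration of the first reward (not the repository's actual implementation), the sketch below blends a haversine-based spatial term with an embedding-based semantic term; the weighting, distance scale, and `text_embed` encoder are placeholder assumptions.

```python
# Illustrative geo-similarity reward: spatial closeness (haversine distance) blended
# with semantic closeness of the predicted vs. ground-truth location description.
# All constants and the text_embed() encoder are placeholder assumptions.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv + 1e-8)

def geo_similarity_reward(pred_coord, gt_coord, pred_text, gt_text,
                          text_embed, dist_scale_km=750.0, alpha=0.5):
    """Weighted sum of a distance-decayed spatial reward and a semantic reward."""
    spatial = math.exp(-haversine_km(*pred_coord, *gt_coord) / dist_scale_km)
    semantic = cosine(text_embed(pred_text), text_embed(gt_text))
    return alpha * spatial + (1 - alpha) * semantic
```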

The model is trained on [**GeoSeek**](https://huggingface.co/datasets/ghost233lism/GeoSeek), a new geolocation dataset with human-annotated CoT and bias-reducing sampling, comprising (an illustrative entry layout is sketched after the list):

- **GeoSeek-CoT** (10k): High-quality chain-of-thought data labeled by geography experts and professional geolocation game players. Each entry includes street-view images, GPS coordinates, three-level location labels (country, city, precise location), and human reasoning processes, standardized into a unified CoT format.
- **GeoSeek-Loc** (20k): Images for RL-based finetuning, sampled via a stratified strategy that accounts for population, land area, and highway mileage to reduce geographic bias.
- **GeoSeek-Val** (3k): Validation benchmark with locatability scores and scene categories (man-made structures, natural landscapes, etc.) for evaluation.
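
For orientation only, a GeoSeek-CoT entry roughly bundles the fields described above; the field names and values below are hypothetical and do not reflect the dataset's actual schema.

```python
# Hypothetical sketch of a single GeoSeek-CoT entry (field names and values are illustrative only).
example_entry = {
    "images": ["streetview_000123_0.jpg", "streetview_000123_1.jpg"],
    "gps": {"lat": 35.6762, "lon": 139.6503},
    "labels": {
        "country": "Japan",
        "city": "Tokyo",
        "precise_location": "Shibuya, near the scramble crossing",
    },
    "cot": "Left-hand traffic and Japanese signage suggest Japan; the dense "
           "commercial district and rail overpass point to central Tokyo ...",
}
```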

<!-- <div align="center">
<img src="assets/depthanything-AC-video.gif" alt="video" width="100%">
</div> -->

<!-- ## Model Architecture -->

<!-- ![architecture](assets/pipeline.png) -->

## Installation

### Requirements

- Python>=3.9
- torch==2.6.0
- torchvision==0.21.0
- torchaudio==2.6.0
- ms-swift>=3.8.0
- xformers==0.0.27.post2
- deepspeed==0.15.0
- CUDA 12.4

### Setup

```bash
git clone https://github.com/HVision-NKU/GeoAgent.git
cd GeoAgent

conda create -n GeoAgent python=3.9
conda activate GeoAgent
pip install -r requirements.txt
```
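
As a quick sanity check that the pinned PyTorch/CUDA stack is active in the new environment (nothing project-specific is assumed here), you can run:

```python
# Verify the installed versions against the requirements above.
import torch
import torchvision

print("torch:", torch.__version__)              # expected 2.6.0
print("torchvision:", torchvision.__version__)  # expected 0.21.0
print("CUDA available:", torch.cuda.is_available())
print("CUDA runtime:", torch.version.cuda)      # expected 12.4
```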

## Usage

### Get GeoAgent Model

Download the pre-trained checkpoints from [Hugging Face](https://huggingface.co/ghost233lism/GeoAgent):

```bash
mkdir checkpoints
cd checkpoints

# (Optional) Use a Hugging Face mirror
export HF_ENDPOINT=https://hf-mirror.com

# Download the GeoAgent model from Hugging Face
huggingface-cli download --resume-download ghost233lism/GeoAgent --local-dir ghost233lism/GeoAgent
```
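
If you prefer to do this from Python, `huggingface_hub` offers an equivalent call (this is the standard library API, not a project-specific helper):

```python
# Python equivalent of the huggingface-cli command above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="ghost233lism/GeoAgent",
    local_dir="ghost233lism/GeoAgent",  # same target directory as the CLI example
)
```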

### Quick Inference

We provide quick inference scripts for single- and batch-image input in `infer/`; please refer to [infer/README](https://github.com/HVision-NKU/GeoAgent/infer/README.md) for detailed information. A minimal standalone example is sketched below.
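
The sketch loads the checkpoint with the standard Qwen2.5-VL interface from `transformers` (plus `qwen_vl_utils`); the prompt wording, example image path, and generation settings are placeholder assumptions, and the scripts in `infer/` may differ from this.

```python
# Minimal single-image inference sketch using the standard Qwen2.5-VL API.
# Paths, prompt, and generation settings are illustrative placeholders.
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_path = "checkpoints/ghost233lism/GeoAgent"  # local dir from the download step above
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_path)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "examples/street_view.jpg"},  # hypothetical test image
        {"type": "text", "text": "Where was this photo taken? Reason step by step, "
                                 "then give the country, city, and precise location."},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=1024)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```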

### Training

`tools/train_sft.sh` runs the supervised finetuning stage, and `tools/train_grpo.sh` runs the subsequent GRPO-based RL finetuning:

```bash
bash tools/train_sft.sh
bash tools/train_grpo.sh
```
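
For intuition only (this is not the code in `tools/`), the GRPO stage can be pictured as scoring each rollout with a weighted mix of the geo-similarity reward sketched earlier and the consistency agent's verdict; every name and weight below is a placeholder.

```python
# Placeholder sketch of how the two reward signals could be mixed per rollout.
from typing import Callable, Dict

def mix_rewards(rollout: Dict, geo_reward_fn: Callable[[Dict], float],
                consistency_fn: Callable[[Dict], float],
                w_geo: float = 0.7, w_cons: float = 0.3) -> float:
    """Weighted sum of geo-similarity and agent-assessed consistency rewards."""
    return w_geo * geo_reward_fn(rollout) + w_cons * consistency_fn(rollout)

# Example with dummy scorers (stand-ins for the real reward model and consistency agent):
dummy_rollout = {"pred": "Tokyo, Japan", "gt": "Tokyo, Japan", "chain": "..."}
print(mix_rewards(dummy_rollout,
                  geo_reward_fn=lambda r: 1.0 if r["pred"] == r["gt"] else 0.0,
                  consistency_fn=lambda r: 0.9))
```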

## Citation

Coming soon...

## License

This code is released under the [Creative Commons Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/) license for non-commercial use only.

Please note that any commercial use of this code requires formal permission prior to use.

## Contact

For technical questions, please contact jin_modi[AT]mail.nankai.edu.cn.

For commercial licensing, please contact andrewhoux[AT]gmail.com.

## Acknowledgments

We sincerely thank [Yue Zhang](https://tuxun.fun/), [H.M.](https://space.bilibili.com/1655209518?spm_id_from=333.337.0.0), [Haowen He](https://space.bilibili.com/111714204?spm_id_from=333.337.0.0), [Yuke Jun](https://space.bilibili.com/93569847?spm_id_from=333.337.0.0), and other experts in geography, as well as outstanding geolocation game players, for their valuable guidance, prompt design suggestions, and data support throughout the construction of the GeoSeek dataset.

We also thank [Zhixiang Wang](https://tuxun.fun/), [Chilin Chen](https://tuxun.fun/), [Jincheng Shi](https://tuxun.fun/), [Liupeng Zhang](https://tuxun.fun/), [Yuan Gu](https://tuxun.fun/), [Yanghang Shao](https://tuxun.fun/), [Jinhua Zhang](https://tuxun.fun/), [Jiachen Zhu](https://tuxun.fun/), [Gucheng Qiuyue](https://tuxun.fun/), [Qingyang Guo](https://tuxun.fun/), [Jingchen Yang](https://tuxun.fun/), [Weilong Kong](https://tuxun.fun/), [Xinyuan Li](https://tuxun.fun/), and [Mr. Xu](https://tuxun.fun/) (an anonymous volunteer) for their outstanding contributions in providing high-quality reasoning-process data.