---
language:
- en
license: mit
size_categories:
- 100K<n<1M
task_categories:
- image-to-text
- visual-question-answering
tags:
- cross-view
pretty_name: CrossPoint-378K
---

# CrossPoint-378K Dataset

[![arXiv](https://img.shields.io/badge/arXiv-2512.04686-b31b1b.svg?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2512.04686)
[![GitHub](https://img.shields.io/badge/GitHub-WangYipu2002/CrossPoint-181717.svg?logo=github&logoColor=white)](https://github.com/WangYipu2002/CrossPoint)

## Overview

CrossPoint-378K is a large-scale dataset for cross-view point correspondence. It contains 378K training samples designed to strengthen vision-language models' ability to establish point correspondences across different views of the same scene.

<p align="center">
  <img src="CrossPoint-378K.png" width="80%">
</p>

## Dataset Structure

The dataset contains the following file structure:

```
CrossPoint-378K/
├── CrossPoint-378K.json          # Main data file (ShareGPT format)
├── image/                         # Original images directory
│   └── [scene_id]/               # Scene ID directory
│       └── [images]              # Scene images
└── visual_image/                  # Annotated images directory
    └── [scene_id]/               # Scene ID directory
        └── [images]              # Annotated images with visual markers
```
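Since `image/` and `visual_image/` mirror each other scene by scene, pairing each original image with its annotated counterpart is a simple directory walk. The sketch below assumes exactly that mirrored layout; `pair_images` is a hypothetical helper name, not part of the dataset's tooling.

```python
from pathlib import Path

def pair_images(root):
    """Match each original image with its annotated counterpart.

    Assumes image/ and visual_image/ mirror one another scene by
    scene, as in the directory tree above.
    """
    root = Path(root)
    pairs = []
    for scene_dir in sorted((root / "image").glob("*")):
        for img in sorted(scene_dir.glob("*")):
            annotated = root / "visual_image" / scene_dir.name / img.name
            # Skip images that have no annotated version.
            if annotated.exists():
                pairs.append((img, annotated))
    return pairs
```

Iterating over `pair_images("CrossPoint-378K")` then yields `(original, annotated)` path pairs for every scene.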

## Data Format

The dataset follows the **ShareGPT format** with the following structure:

### JSON Format Example

```json
{
  "type": "single_spatial_understanding",
  "images": [
    "CrossPoint-378K/image/00a231a370/DSC05031.JPG"
  ],
  "messages": [
    {
      "content": "<image>\nWhat does the point at [56, 323] refer to?",
      "role": "user"
    },
    {
      "content": "It corresponds to the white window handle in the image.",
      "role": "assistant"
    }
  ]
}
```

### Field Descriptions

- **type**: Task type (e.g., `single_spatial_understanding`, `cross_correspondence`)
- **images**: List of image paths relative to the dataset root
- **messages**: Conversation in ShareGPT format
  - **role**: Either `user` or `assistant`
  - **content**: Message content, where `<image>` tokens indicate image positions
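Given these fields, a loading routine can also sanity-check each sample: every `<image>` token in the conversation should correspond to one entry in `images`. The sketch below assumes the main JSON file holds a list of such samples (the usual ShareGPT layout); `load_samples` is a hypothetical helper, not an official loader.

```python
import json

def load_samples(path):
    """Load ShareGPT-format samples and check image/token consistency."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)  # assumed to be a list of sample dicts
    for sample in data:
        n_images = len(sample["images"])
        n_tokens = sum(
            msg["content"].count("<image>") for msg in sample["messages"]
        )
        # Each <image> token should map to exactly one image path.
        assert n_images == n_tokens, sample
    return data
```

Calling `load_samples("CrossPoint-378K/CrossPoint-378K.json")` returns the parsed sample list, raising on any sample whose image list and `<image>` tokens disagree.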

## Usage

For training scripts and detailed instructions, please visit the [GitHub repository](https://github.com/WangYipu2002/CrossPoint).


## Citation

If you use CrossPoint-378K in your research, please cite:

```bibtex
@article{wang2025crosspoint,
  title={Towards Cross-View Point Correspondence in Vision-Language Models},
  author={Wang, Yipu and Ji, Yuheng and Liu, Yuyang and Zhou, Enshen and Yang, Ziqiang and Tian, Yuxuan and Qin, Ziheng and Liu, Yue and Tan, Huajie and Chi, Cheng and Ma, Zhiyuan and Zeng, Daniel Dajun and Zheng, Xiaolong},
  journal={arXiv preprint arXiv:2512.04686},
  year={2025}
}
```