---
language:
- en
license: mit
size_categories:
- 100K<n<1M
---
## Dataset Structure
The dataset contains the following file structure:
```
CrossPoint-378K/
├── CrossPoint-378K.json      # Main data file (ShareGPT format)
├── image/                    # Original images directory
│   └── [scene_id]/           # Scene ID directory
│       └── [images]          # Scene images
└── visual_image/             # Annotated images directory
    └── [scene_id]/           # Scene ID directory
        └── [images]          # Annotated images with visual markers
```
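Given the mirrored layout above, each original image under `image/` has a marker-annotated counterpart at the same relative path under `visual_image/`. A minimal sketch of pairing them (the function name is hypothetical, not part of the dataset's tooling):

```python
from pathlib import Path

def pair_scene_images(root):
    """Map each original image path to its annotated counterpart.

    Assumes the CrossPoint-378K layout: image/[scene_id]/[file]
    mirrored by visual_image/[scene_id]/[file].
    """
    root = Path(root)
    pairs = {}
    for scene_dir in sorted((root / "image").glob("*")):
        for img in sorted(scene_dir.iterdir()):
            annotated = root / "visual_image" / scene_dir.name / img.name
            if annotated.exists():  # skip images without an annotated version
                pairs[str(img)] = str(annotated)
    return pairs
```

This only pairs files that exist on both sides, so it is safe if some scenes lack annotated images.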
## Data Format
The dataset follows the **ShareGPT format** with the following structure:
### JSON Format Example
```json
{
  "type": "single_spatial_understanding",
  "images": [
    "CrossPoint-378K/image/00a231a370/DSC05031.JPG"
  ],
  "messages": [
    {
      "content": "<image>\nWhat does the point at [56, 323] refer to?",
      "role": "user"
    },
    {
      "content": "It corresponds to the white window handle in the image.",
      "role": "assistant"
    }
  ]
}
```
### Field Descriptions
- **type**: Task type (e.g., `single_spatial_understanding`, `cross_correspondence`)
- **images**: List of image paths relative to the dataset root
- **messages**: Conversation in ShareGPT format
  - **role**: Either `user` or `assistant`
  - **content**: Message content, where `<image>` tokens mark image positions
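A minimal sketch of parsing one record in this format, using only the standard-library `json` module (the record is inlined here for illustration; in practice you would load `CrossPoint-378K.json` from disk):

```python
import json

# One record in the ShareGPT format described above, inlined as a string.
record_json = """
{
  "type": "single_spatial_understanding",
  "images": ["CrossPoint-378K/image/00a231a370/DSC05031.JPG"],
  "messages": [
    {"content": "<image>\\nWhat does the point at [56, 323] refer to?", "role": "user"},
    {"content": "It corresponds to the white window handle in the image.", "role": "assistant"}
  ]
}
"""

record = json.loads(record_json)
question = record["messages"][0]["content"]
# Each <image> token marks where an entry from "images" is inserted,
# so the counts should match.
assert question.count("<image>") == len(record["images"])
```

For the full file, `json.load(open("CrossPoint-378K/CrossPoint-378K.json"))` yields a list of such records.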
## Usage
For training scripts and detailed instructions, please visit the [GitHub repository](https://github.com/WangYipu2002/CrossPoint).
## Citation
If you use CrossPoint-378K in your research, please cite:
```bibtex
@article{wang2025crosspoint,
  title={Towards Cross-View Point Correspondence in Vision-Language Models},
  author={Wang, Yipu and Ji, Yuheng and Liu, Yuyang and Zhou, Enshen and Yang, Ziqiang and Tian, Yuxuan and Qin, Ziheng and Liu, Yue and Tan, Huajie and Chi, Cheng and Ma, Zhiyuan and Zeng, Daniel Dajun and Zheng, Xiaolong},
  journal={arXiv preprint arXiv:2512.04686},
  year={2025}
}
```