---
language:
  - en
license: mit
size_categories:
  - 100K<n<1M
task_categories:
  - image-to-text
  - visual-question-answering
tags:
  - cross-view
pretty_name: CrossPoint-378K
---

# CrossPoint-378K Dataset


## Overview

CrossPoint-378K is a large-scale dataset for cross-view point correspondence. It contains 378K training samples designed to enhance vision-language models' ability to establish point correspondences across different views.

## Dataset Structure

The dataset has the following file structure:

```
CrossPoint-378K/
├── CrossPoint-378K.json          # Main data file (ShareGPT format)
├── image/                        # Original images directory
│   └── [scene_id]/               # Scene ID directory
│       └── [images]              # Scene images
└── visual_image/                 # Annotated images directory
    └── [scene_id]/               # Scene ID directory
        └── [images]              # Annotated images with visual markers
```

## Data Format

The dataset follows the ShareGPT format with the following structure:

### JSON Format Example

```json
{
  "type": "single_spatial_understanding",
  "images": [
    "CrossPoint-378K/image/00a231a370/DSC05031.JPG"
  ],
  "messages": [
    {
      "content": "<image>\nWhat does the point at [56, 323] refer to?",
      "role": "user"
    },
    {
      "content": "It corresponds to the white window handle in the image.",
      "role": "assistant"
    }
  ]
}
```

### Field Descriptions

- `type`: Task type (e.g., `single_spatial_understanding`, `cross_correspondence`)
- `images`: List of image paths relative to the dataset root
- `messages`: Conversation in ShareGPT format
  - `role`: Either `user` or `assistant`
  - `content`: Message content, where `<image>` tokens indicate image positions
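As a sanity check on this format, the snippet below parses one record shaped like the example above (reproduced inline as a plain dict): it verifies that the number of `<image>` tokens matches the number of image paths and extracts the queried `[x, y]` point from the user turn. The record itself is just the illustrative sample, not data pulled from the actual file.

```python
import re

# A record matching the ShareGPT-style example above (inline sample for illustration).
record = {
    "type": "single_spatial_understanding",
    "images": ["CrossPoint-378K/image/00a231a370/DSC05031.JPG"],
    "messages": [
        {"content": "<image>\nWhat does the point at [56, 323] refer to?", "role": "user"},
        {"content": "It corresponds to the white window handle in the image.", "role": "assistant"},
    ],
}

# <image> tokens in user turns should match the number of image paths.
user_turns = [m["content"] for m in record["messages"] if m["role"] == "user"]
n_image_tokens = sum(t.count("<image>") for t in user_turns)
assert n_image_tokens == len(record["images"])

# Extract the queried point coordinates, e.g. [56, 323].
points = [
    (int(x), int(y))
    for t in user_turns
    for x, y in re.findall(r"\[(\d+),\s*(\d+)\]", t)
]
print(points)  # [(56, 323)]
```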

## Dataset Statistics

## Usage

For training scripts and detailed instructions, please visit the GitHub repository.
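As a minimal sketch of working with the data locally, the helper below loads the ShareGPT-format JSON and tallies samples per task type. Since the full dataset isn't bundled here, the demo writes a one-record stand-in file; for the real dataset, point `summarize` at `CrossPoint-378K/CrossPoint-378K.json` instead.

```python
import json
import tempfile
from collections import Counter
from pathlib import Path

def summarize(path):
    """Load the ShareGPT-format JSON and tally samples per task type."""
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    return Counter(sample["type"] for sample in data)

# Demo on a tiny one-record stand-in file (the real file is
# CrossPoint-378K/CrossPoint-378K.json under the dataset root).
sample = [{"type": "single_spatial_understanding", "images": [], "messages": []}]
with tempfile.TemporaryDirectory() as d:
    demo = Path(d) / "demo.json"
    demo.write_text(json.dumps(sample), encoding="utf-8")
    counts = summarize(demo)
print(dict(counts))  # {'single_spatial_understanding': 1}
```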

## Citation

If you use CrossPoint-378K in your research, please cite:

```bibtex
@article{wang2025crosspoint,
  title={Towards Cross-View Point Correspondence in Vision-Language Models},
  author={Wang, Yipu and Ji, Yuheng and Liu, Yuyang and Zhou, Enshen and Yang, Ziqiang and Tian, Yuxuan and Qin, Ziheng and Liu, Yue and Tan, Huajie and Chi, Cheng and Ma, Zhiyuan and Zeng, Daniel Dajun and Zheng, Xiaolong},
  journal={arXiv preprint arXiv:2512.04686},
  year={2025}
}
```