---
license: cc-by-4.0
---
# C3: Cross-View Cross-Modality Correspondence Dataset
## Dataset for *C3Po: Cross-View Cross-Modality Correspondence with Pointmap Prediction*
[arXiv](https://arxiv.org/abs/2511.18559) | [Project Website](https://c3po-correspondence.github.io/) | [GitHub](https://github.com/c3po-correspondence/C3Po)
**C3** contains **90K paired floor plans and photos from the Internet** across **597 scenes** with **153M pixel-level correspondences** and **85K camera poses**.
## Image Pairs
`image_pairs/` is split into `train/`, `val/`, and `test/`, each containing an `image_pairs.csv`.
- `image_pairs.csv`: Each row represents a plan-photo pair and consists of `uid`, `scene_name`, `plan_path`, and `photo_path`. `uid` references the corresponding files in `correspondences/` and `camera_poses/`, named using the format `{int(uid):06d}.npy`. `scene_name` locates the corresponding floor plan (`visual/{scene_name}/{plan_path}`) and photo (`visual/{scene_name}/{photo_path}`).
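The `uid`-to-path convention above can be sketched as a small helper (the function name is illustrative, not part of the dataset tooling; files are grouped in batches of 1,000, matching the directory layout used in the example below):

```python
from os.path import join

def geometric_paths(uid, split="train"):
    """Map a uid from image_pairs.csv to its correspondence and camera-pose files.

    Files are grouped in batches of 1,000, so uid 1234 lives in subdirectory "1".
    """
    fname = f"{int(uid):06d}.npy"
    batch = str(int(uid) // 1000)
    corr = join("geometric", split, "correspondences", batch, fname)
    pose = join("geometric", split, "camera_poses", batch, fname)
    return corr, pose

print(geometric_paths(1234))
# → ('geometric/train/correspondences/1/001234.npy', 'geometric/train/camera_poses/1/001234.npy')
```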
## Correspondences and Camera Poses
`geometric/` has three files: `geometric_train.tar.gz`, `geometric_val.tar.gz`, and `geometric_test.tar.gz`. Each of these files (`geometric_{split}.tar.gz`) can be extracted to `{split}/correspondences/` and `{split}/camera_poses/`.
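For example, the split archives can be extracted with Python's standard `tarfile` module (a minimal sketch, assuming each archive stores its files under a top-level `{split}/` directory as shown in the tree below):

```python
import tarfile
from pathlib import Path

def extract_split(split, root="geometric"):
    """Extract geometric_{split}.tar.gz into {root}/{split}/."""
    archive = Path(root) / f"geometric_{split}.tar.gz"
    # Assumes the archive contains a top-level {split}/ directory, so extracting
    # into root/ yields {root}/{split}/correspondences/ and {root}/{split}/camera_poses/.
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=root)

# Usage (once the archives are downloaded):
# for split in ("train", "val", "test"):
#     extract_split(split)
```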
- `correspondences/`: Each `.npy` file contains an array of [plan_correspondences (M, 2), photo_correspondences (M, 2)]; files are grouped in batches of 1,000.
- `camera_poses/`: Each `.npy` file contains an array of [R_plan-to-cam (3, 3), t_plan (3,), K (3, 3)]; files are grouped in batches of 1,000.
```
geometric/
├── train/ # Extracted from geometric_train.tar.gz
│ ├── correspondences/
│ │ ├── 0/
│ │ │ ├── 000000.npy
│ │ │ ├── ...
│ │ │ ├── 000999.npy
│ │ ├── ...
│ ├── camera_poses/
│ │ ├── 0/
│ │ │ ├── 000000.npy
│ │ │ ├── ...
│ │ │ ├── 000999.npy
│ │ ├── ...
├── val/ # Extracted from geometric_val.tar.gz
│ ├── (same structure as train)
└── test/ # Extracted from geometric_test.tar.gz
└── (same structure as train)
```
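The field names above suggest the standard pinhole convention x_cam = R_plan-to-cam @ X_plan + t_plan, with K mapping camera coordinates to pixel coordinates. A minimal sketch with synthetic values (the convention is inferred from the naming and should be verified against the data):

```python
import numpy as np

def project_plan_point(X_plan, R_p2c, t_p, K):
    """Project a 3-D point in plan coordinates into photo pixel coordinates,
    assuming the pinhole model x_cam = R_plan-to-cam @ X_plan + t_plan."""
    x_cam = R_p2c @ X_plan + t_p
    x_img = K @ x_cam
    return x_img[:2] / x_img[2]  # perspective divide

# Synthetic example: identity rotation, camera at the origin,
# focal length 500 px, principal point (320, 240).
R = np.eye(3)
t = np.zeros(3)
K = np.array([[500.,   0., 320.],
              [  0., 500., 240.],
              [  0.,   0.,   1.]])
X = np.array([0.2, -0.1, 2.0])  # a point 2 units in front of the camera
print(project_plan_point(X, R, t, K))  # → [370. 215.]
```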
## Floor Plans and Photos
`visual/` contains floor plans and photos grouped by scene.
```
visual/
├── Aachen_Cathedral.tar.gz
├── Abbatiale_d'Ottmarsheim.tar.gz
└── ...
```
### Archived Contents
Each `{scene_name}.tar.gz` file contains the following structure when extracted:
```
├── images/
│ ├── commons/ # arbitrary number of subcategories
│ │ ├── {wikimedia_commons_subcategory_1}/ # arbitrary number of photos
│ │ │ ├── {photo_A}.png
│ │ │ ├── ...
│ │ ├── {wikimedia_commons_subcategory_2}/
│ │ │ ├── ...
│ │ ├── ...
│ ├── flickr/ # arbitrary number of photos
│ │ ├── {photo_B}.png
│ │ ├── ...
└── plans/ # arbitrary number of floor plans
    ├── {floor_plan_A}.png
    └── ...
```
## Example Visualization
```python
from os.path import join
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from PIL import Image
def draw_camera_frustum(ax, t_p, R_p2c, frustum_length, frustum_width, color='blue', alpha=0.3):
# Camera axes
forward = R_p2c.T[:, 2]
forward_xz = forward.copy()
forward_xz[1] = 0 # Project onto the XZ plane
if np.linalg.norm(forward_xz) < 1e-6:
forward_xz = np.array([0, 0, 1])
else:
forward_xz /= np.linalg.norm(forward_xz)
right_xz = np.cross(np.array([0, 1, 0]), forward_xz)
right_xz /= np.linalg.norm(right_xz)
# Near and far plane distances
near_len, far_len = frustum_length * 0.2, frustum_length
near_width, far_width = frustum_width * 0.2, frustum_width
# Corner points of the frustum
cc = -R_p2c.T @ t_p
points = np.array([
cc + forward_xz * near_len - right_xz * near_width / 2, # near left
cc + forward_xz * near_len + right_xz * near_width / 2, # near right
cc + forward_xz * far_len + right_xz * far_width / 2, # far right
cc + forward_xz * far_len - right_xz * far_width / 2. # far left
])
x, z = points[:, 0], points[:, 2]
ax.fill(x, z, color=color, alpha=alpha)
ax.plot(np.append(x, x[0]), np.append(z, z[0]), color=color)
# Load image pair
image_pairs_path = "image_pairs/train/image_pairs.csv"
image_pairs = pd.read_csv(image_pairs_path)
uid, scene_name, plan_path, photo_path = image_pairs.iloc[0]
# Load correspondences
geometric_train_dir = "geometric/train/"
corr_path = join(geometric_train_dir, "correspondences", f"{int(uid) // 1000}", f"{int(uid):06d}.npy")
plan_corr, photo_corr = np.load(corr_path)
# Load camera pose
camera_pose_path = join(geometric_train_dir, "camera_poses", f"{int(uid) // 1000}", f"{int(uid):06d}.npy")
R_p2c, t_p, _ = np.load(camera_pose_path, allow_pickle=True)
R_p2c = np.array(R_p2c.tolist(), dtype=float)
t_p = np.array(t_p)
# Load floor plan and photo
visual_dir = "visual/"
plan = Image.open(join(visual_dir, scene_name, plan_path)).convert("RGB")
photo = Image.open(join(visual_dir, scene_name, photo_path)).convert("RGB")
# Visualize
fig, axes = plt.subplots(1, 2, figsize=(12, 6))
fig.suptitle(f"Scene name: {scene_name}", fontsize=16)
axes[0].imshow(plan)
axes[0].set_title("Floor Plan")
axes[0].scatter(plan_corr[:, 0], plan_corr[:, 1], c="r", s=1)
scale = max(plan.size) * 0.05
draw_camera_frustum(axes[0], t_p, R_p2c, frustum_length=scale, frustum_width=scale, color='blue', alpha=0.3)
axes[0].axis('off')
axes[1].imshow(photo)
axes[1].set_title("Photo")
axes[1].scatter(photo_corr[:, 0], photo_corr[:, 1], c="r", s=1)
axes[1].axis('off')
plt.tight_layout()
plt.show()
```

## Citation
If you use data from C3, please use the following citation:
```
@inproceedings{huang2025c3po,
title={C3Po: Cross-View Cross-Modality Correspondence by Pointmap Prediction},
author={Huang, Kuan Wei and Li, Brandon and Hariharan, Bharath and Snavely, Noah},
booktitle={Advances in Neural Information Processing Systems},
volume={38},
year={2025}
}
```