
Street View Cutouts Dataset

A balanced image dataset of 14,000 perspective-view cutouts extracted from Google Street View panoramas, designed for unsupervised visual element discovery (e.g., “What Makes Paris Look like Paris?”). The dataset contains 7,000 positive and 7,000 negative examples.

Dataset Description

Overview

  • Total images: 14,000
  • Positive set: 7,000 images (target city/cities, e.g., Paris, Amherst)
  • Negative set: 7,000 images (other cities)
  • Image size: 936 × 537 pixels (width × height)
  • Format: JPEG
  • Source: Google Street View panoramas → perspective-view extractions

Cutouts are rectangular, camera-like views extracted from 360° Street View panoramas at specified yaw and pitch angles. Filenames encode GPS and view metadata: lat_lng_yaw_pitch.JPG.

Intended Use

  • Unsupervised discovery of distinctive visual elements (architecture, signs, street furniture, etc.) that characterize a place
  • Training and evaluation of place-specific visual detectors
  • Research on “What Makes Paris Look like Paris?”–style algorithms (SIGGRAPH 2012)

Dataset Structure

Each example includes:

Column    Type    Description
image     Image   The cutout image (936×537 JPEG)
fullname  string  Relative path, e.g. paris/48.854766_2.350913_90.0_-4.JPG
city      string  City folder / location label (e.g. paris, amherst, nyc)
label     string  "positive" or "negative"
lat       float   Latitude (from filename)
lng       float   Longitude (from filename)
imsize    list    [height, width] = [537, 936]

Positive examples come from the target city/cities; negative examples from other cities.

Filename Convention

Filenames follow lat_lng_yaw_pitch.JPG:

  • lat, lng: GPS coordinates of the panorama
  • yaw: Horizontal angle in degrees (e.g. 90° = East, 270° = West)
  • pitch: Vertical angle in degrees (e.g. -4° = slightly down)

Example: 48.854766_2.350913_90.0_-4.JPG → Paris, (48.85, 2.35), yaw 90°, pitch -4°.
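The convention above is easy to parse programmatically. A small helper (written for this card; the dataset itself ships no parser) splits a cutout filename into its numeric fields:

```python
def parse_cutout_filename(name: str) -> dict:
    """Parse a cutout filename of the form lat_lng_yaw_pitch.JPG.

    Underscores only separate fields, so negative values
    (e.g. pitch -4, western longitudes) parse correctly.
    """
    stem = name.rsplit(".", 1)[0]  # drop the .JPG extension
    lat, lng, yaw, pitch = (float(part) for part in stem.split("_"))
    return {"lat": lat, "lng": lng, "yaw": yaw, "pitch": pitch}

meta = parse_cutout_filename("48.854766_2.350913_90.0_-4.JPG")
# meta == {"lat": 48.854766, "lng": 2.350913, "yaw": 90.0, "pitch": -4.0}
```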

Usage

Load with Hugging Face datasets

Replace your-username/streetview-cutouts with your actual dataset ID on the Hub.

from datasets import load_dataset

dataset = load_dataset("your-username/streetview-cutouts", split="train")

# Access a single example
example = dataset[0]
image = example["image"]
city = example["city"]
label = example["label"]
lat, lng = example["lat"], example["lng"]

Filter by label

positives = dataset.filter(lambda x: x["label"] == "positive")
negatives = dataset.filter(lambda x: x["label"] == "negative")

Use with PyTorch

from torch.utils.data import DataLoader

# Example: keep images as PIL objects and binarize the label
def collate_fn(batch):
    images = [b["image"] for b in batch]  # PIL.Image objects; add a transform to get tensors
    labels = [1 if b["label"] == "positive" else 0 for b in batch]
    return images, labels

dataloader = DataLoader(dataset, batch_size=32, collate_fn=collate_fn)

Data Creation

  1. Panorama collection: Street View panorama IDs are collected for selected cities (e.g. via Google Maps API or custom tools).
  2. Download: Full 360° panoramas are downloaded and stitched from tile servers.
  3. Extraction: Perspective-view cutouts (936×537) are extracted at defined yaw/pitch using spherical-to-perspective projection.
  4. Metadata: Metadata (city, coords, fullname, etc.) is built by scanning the cutout directory and parsing filenames.

See the companion codebase (e.g. GSwDownloader/, streetview_dataset_tool/) for implementation details.
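The spherical-to-perspective projection in step 3 can be sketched with NumPy. This is a minimal gnomonic sampler under assumed conventions (yaw rotates about the vertical axis, y points down in camera space, nearest-neighbour sampling), not the companion tools' exact implementation:

```python
import numpy as np

def extract_cutout(pano, yaw_deg, pitch_deg, fov_deg=90.0, out_w=936, out_h=537):
    """Sample a perspective (gnomonic) cutout from an equirectangular panorama.

    pano: (H, W, C) array covering 360° x 180°. Assumed conventions:
    yaw 0 = panorama centre column, camera y-axis points down,
    nearest-neighbour sampling (a real pipeline would interpolate).
    """
    H, W = pano.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels

    # Output pixel grid -> camera rays (z forward, x right, y down)
    xs = (np.arange(out_w) - out_w / 2) / f
    ys = (np.arange(out_h) - out_h / 2) / f
    x, y = np.meshgrid(xs, ys)
    z = np.ones_like(x)

    # Rotate rays: pitch about the x-axis, then yaw about the vertical axis
    p = np.radians(pitch_deg)
    y2 = y * np.cos(p) - z * np.sin(p)
    z2 = y * np.sin(p) + z * np.cos(p)
    w_ = np.radians(yaw_deg)
    x3 = x * np.cos(w_) + z2 * np.sin(w_)
    z3 = -x * np.sin(w_) + z2 * np.cos(w_)

    # Rays -> longitude/latitude -> panorama pixel coordinates
    lon = np.arctan2(x3, z3)                                   # [-pi, pi]
    lat = np.arcsin(y2 / np.sqrt(x3**2 + y2**2 + z3**2))       # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W        # wrap horizontally
    v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return pano[v, u]
```

For instance, `extract_cutout(pano, 90.0, -4.0)` returns a 537×936 view looking East and slightly down, matching the dataset's yaw/pitch convention.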

Licensing & Attribution

  • License: Check Google Street View Terms of Service before bulk downloading or redistribution. This dataset may be subject to those terms.
  • Research: The methodology is inspired by “What Makes Paris Look like Paris?” (SIGGRAPH 2012) and related work on building Street View datasets for place recognition.

Considerations

  • Geographic coverage: Depends on which cities were sampled; the set may reflect sampling and API coverage biases.
  • Temporal validity: Street View imagery changes over time; capture dates are not included in this schema.
  • Sensitive content: Street View can contain people, vehicles, and private property; use in line with applicable ethics and privacy guidelines.

Dataset Summary

Split  Positive  Negative  Total
Train  7,000     7,000     14,000

Citation

If you use this dataset, please cite the relevant project and, where applicable, the “What Makes Paris Look like Paris?” work:

@article{paris2012,
  title = {What Makes Paris Look like Paris?},
  author = {Carl Doersch and Saurabh Singh and Abhinav Gupta and Josef Sivic and Alexei A. Efros},
  journal = {ACM Transactions on Graphics (SIGGRAPH)},
  year = {2012},
}

