
GeoSpot WebDataset (staged)

This repository hosts WebDataset .tar shards for streaming training.

Layout

  • StreetView (all in one logical set): data/streetview/**
    • data/streetview/sv2/ – pano .json + .000/.090/.180/.270.jpg (JSON uses lng)
    • data/streetview/us_local_s128/ – single-view .jpg + .json
    • data/streetview/sv1/<country>/ – per-heading samples (panoid_000.jpg + panoid_000.json)
  • Everything else: data/other/**
    • data/other/mp16pro/, data/other/dress_s128/ – single images + JSON
    • data/other/osv5m/ – dashcam images + JSON
    • data/other/msls/ – Mapillary sequences (single images + JSON)
    • data/other/cvcities/ – ground pano .jpg + .sat bytes + JSON
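WebDataset groups files that share a tar basename into one sample dict keyed by extension, so an sv2 pano record should surface its four heading crops under keys like `000.jpg`. The exact keys below are an assumption inferred from the naming scheme above, not something this repo guarantees:

```python
# Hypothetical keys, inferred from the sv2 naming above
# (panoid.000.jpg ... panoid.270.jpg -> sample keys "000.jpg" ... "270.jpg").
HEADINGS = ("000.jpg", "090.jpg", "180.jpg", "270.jpg")

def pano_views(sample: dict) -> list:
    """Collect whichever heading crops are present in a decoded sample."""
    return [sample[h] for h in HEADINGS if h in sample]
```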

Manifests

Note: dress and us_local were resharded into ~128 MiB parts for faster streaming. Canonical manifests (manifests/train.txt, etc.) now point at dress_s128/ and us_local_s128/. Shard lists live in manifests/*.txt (paths inside this repo). For streaming, prefix each path with the HF resolve URL:

repo = "sdan/geospot-unified"
base = f"https://huggingface.co/datasets/{repo}/resolve/main/"
# Manifest entries are repo-relative shard paths
paths = open("manifests/train.txt").read().splitlines()
urls = [base + p for p in paths]

Quick starts:

  • StreetView only: use manifests/streetview.txt
  • Everything else: use manifests/other.txt

Streaming (WebDataset)

import webdataset as wds

ds = (
    wds.WebDataset(urls)          # urls from the manifest snippet above
      .decode("pil")              # decode images to PIL
      .to_tuple("jpg", "json")    # yield (image, metadata) pairs
)
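Iterating the pipeline yields `(image, metadata)` pairs. Since sv2 JSON stores longitude under `lng` (see Layout) while other subsets may name it differently, a small normalizer helps; the `"lon"` fallback here is an assumption, not a documented key:

```python
def coords(meta: dict) -> tuple:
    """Return (lat, lon); sv2 JSON uses "lng", assume "lon" elsewhere."""
    lon = meta.get("lng", meta.get("lon"))
    return float(meta["lat"]), float(lon)

# for image, meta in ds:
#     lat, lon = coords(meta)
```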

Indexing / Segmentation

  • metadata/shards.parquet (or metadata/shards.jsonl): shard-level index (domain/group/path/size/split).
  • metadata/shards_enriched.parquet + manifests/geo/**: optional geo enrichment + region/country shard lists.
    • Generate/update these with scripts/hf_enrich_wds_metadata.py after the main shard upload completes.
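The shard index can drive targeted URL lists. A sketch assuming the columns are exactly `domain`/`group`/`path`/`size`/`split` as listed (download `metadata/shards.parquet` first):

```python
import pandas as pd

BASE = "https://huggingface.co/datasets/sdan/geospot-unified/resolve/main/"

def shard_urls(df: pd.DataFrame, domain: str, split: str) -> list:
    """Filter the shard-level index by domain/split, then build resolve URLs."""
    sel = df[(df["domain"] == domain) & (df["split"] == split)]
    return [BASE + p for p in sel["path"]]

# df = pd.read_parquet("metadata/shards.parquet")
# urls = shard_urls(df, "streetview", "train")
```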

Note: the train/validation/test manifests split shards by hash, not spatially. For a real geolocation holdout, compute a deterministic split from (lat, lon) at training time.
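One way to implement such a holdout, sketched purely as an illustration (the 1° cell size and MD5 hashing are arbitrary choices, not part of this repo): quantize each coordinate to a grid cell and hash the cell, so nearby samples share a split and the assignment is stable across runs.

```python
import hashlib

def geo_split(lat, lon, val_frac=0.05, cell_deg=1.0):
    """Deterministic spatial split: all points in one grid cell share a split."""
    cell = f"{int(lat // cell_deg)}_{int(lon // cell_deg)}"
    bucket = int(hashlib.md5(cell.encode()).hexdigest(), 16) % 10_000
    return "validation" if bucket < val_frac * 10_000 else "train"
```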
