---
license: cc-by-sa-4.0
task_categories:
  - text-to-image
  - image-to-image
  - other
language:
  - en
tags:
  - satellite-imagery
  - earth-observation
  - embeddings
  - geospatial
  - clip
  - majortom
size_categories:
  - 100K<n<1M
---

# EarthEmbeddings

Satellite imagery embeddings dataset for the EarthEmbeddingExplorer, enabling natural language and location-based search of Earth observation data.

## Overview

This repository contains pre-computed embeddings of satellite imagery using state-of-the-art vision-language models. These embeddings power the EarthEmbeddingExplorer application, which allows users to search for satellite images using text queries, image uploads, or geographic locations.

Key features:

  • Global satellite imagery from Sentinel-2 (MajorTOM Core-S2L2A)
  • Multiple embedding models optimized for Earth observation
  • Fast similarity search without raw image preprocessing
  • Ready-to-use Parquet format for efficient data access
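
All three search modes (text, image, location) reduce to nearest-neighbour lookup over these embedding vectors. A minimal sketch of cosine-similarity search with NumPy — the data here is random and purely illustrative, standing in for a loaded embedding table:

```python
import numpy as np

def top_k_similar(query: np.ndarray, embeddings: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k rows of `embeddings` most cosine-similar to `query`."""
    # L2-normalise so that a plain dot product equals cosine similarity
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return np.argsort(-(e @ q))[:k]

# Toy example: random 1152-d vectors (SigLIP's embedding dimension)
rng = np.random.default_rng(0)
bank = rng.normal(size=(1000, 1152))
query = bank[42] + 0.01 * rng.normal(size=1152)  # near-duplicate of row 42
nearest = top_k_similar(query, bank, k=3)        # row 42 ranks first
```

Because the embeddings are pre-computed, this lookup needs no raw imagery or model inference at query time.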

## Dataset Description

### Data Source

  • Base dataset: MajorTOM Core-S2L2A (Sentinel-2 Level 2A, 2.2M+ samples)
  • Processing: Center crop (384×384 pixels) + uniform global sampling

### Embedding Models


Four state-of-the-art vision models are used:

| Model | Description | Training Data |
|---|---|---|
| SigLIP | General-purpose vision-language model | Web-scale natural image-text pairs |
| DINOv2 | Self-supervised vision transformer | Web-scale natural images (self-supervised) |
| FarSLIP | Fine-grained satellite imagery model | Satellite image-text pairs |
| SatCLIP | Location-based satellite model | Satellite image-location pairs |

### Dataset Splits

#### 1. uniform_sample_250k ⚠️ Preview

```
├── uniform_sample_250k
│   ├── dinov2
│   │   ├── DINOv2_grid_sample_center_224x224_249k_MajorTOM.parquet
│   │   └── DINOv2_grid_sample_center_384x384_244k.parquet
│   ├── farslip
│   │   └── FarSLIP_grid_sample_center_384x384_244k.parquet
│   ├── satclip
│   │   └── SatCLIP_grid_sample_center_384x384_244k.parquet
│   └── siglip
│       └── SigLIP_grid_sample_center_384x384_244k.parquet
```

  • ~250,000 globally distributed satellite images
  • Current status: preview revision with ~244k pre-computed embeddings, plus ~249k embeddings sampled from Major-TOM/Core-S2RGB-DINOv2
  • Note: about 4-6k of the original image chips were lost to a network error; the full version is coming soon
  • Crop size: for each of the 1/9 sampled grid cells, we crop the central bounding box. All pre-computed embeddings use the same 384×384 crop so that every model sees the same image patch, meaning the embeddings represent the same regions of the Earth's surface.

| Filename | Embedding Model | Crop Size | Model Input Size | Embedding Dim | Source |
|---|---|---|---|---|---|
| DINOv2_grid_sample_center_224x224_249k_MajorTOM.parquet | DINOv2-large | 224×224 | 224×224 | 1024 | Major-TOM/Core-S2RGB-DINOv2 |
| DINOv2_grid_sample_center_384x384_244k.parquet | DINOv2-large | 384×384 | 224×224 | 1024 | Pre-computed |
| FarSLIP_grid_sample_center_384x384_244k.parquet | FarSLIP-ViT-B-16 | 384×384 | 224×224 | 512 | Pre-computed |
| SatCLIP_grid_sample_center_384x384_244k.parquet | SatCLIP-ViT16-L40 | 384×384 | 224×224 | 256 | Pre-computed |
| SigLIP_grid_sample_center_384x384_244k.parquet | SigLIP-SO400M-14 | 384×384 | 384×384 | 1152 | Pre-computed |

#### 2. uniform_sample_22k

  • 22,000 globally distributed satellite images
  • Files: `grid_sample_center_22k_{FarSLIP,SatCLIP,SigLIP}_384x384.parquet`

#### 3. Zhejiang_samples

  • 2,000 samples from Zhejiang region, China
  • Files: `zhejiang_sample_center_2k_{FarSLIP,SatCLIP,SigLIP}_384x384.parquet`
  • Regional case study dataset
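
For location-based search over a regional split like this one, rows can be filtered by a bounding box before any similarity computation. A sketch with pandas — the `lat` / `lon` column names and the toy coordinates are assumptions for illustration, not the dataset's actual schema:

```python
import pandas as pd

def rows_in_bbox(df: pd.DataFrame, lat_min: float, lat_max: float,
                 lon_min: float, lon_max: float) -> pd.DataFrame:
    """Select rows whose centre coordinates fall inside the bounding box."""
    mask = df["lat"].between(lat_min, lat_max) & df["lon"].between(lon_min, lon_max)
    return df[mask]

# Toy table standing in for a loaded split (values are made up)
df = pd.DataFrame({
    "lat": [29.1, 30.2, 45.0],
    "lon": [120.5, 121.0, 7.5],
    "embedding": [[0.1] * 4, [0.2] * 4, [0.3] * 4],
})
zhejiang = rows_in_bbox(df, 27.0, 31.5, 118.0, 123.0)  # rough Zhejiang bounds
```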

## Data Format

All embeddings are stored in Parquet format:

  • Efficient columnar storage for fast download
  • 384×384 pixel satellite image crops

## Related Work

## License

CC-BY-SA-4.0