---
license: cc-by-sa-4.0
task_categories:
- text-to-image
- image-to-image
- other
language:
- en
tags:
- satellite-imagery
- earth-observation
- embeddings
- geospatial
- clip
- majortom
size_categories:
- 10K
---

# EarthEmbeddings

Satellite imagery embeddings dataset for the **EarthEmbeddingExplorer**, enabling natural-language and location-based search of Earth observation data.

## Overview

This repository contains pre-computed embeddings of satellite imagery using state-of-the-art vision-language models. These embeddings power the [EarthEmbeddingExplorer](https://huggingface.co/spaces/ML4Sustain/EarthExplorer) application, which allows users to search for satellite images using text queries, image uploads, or geographic locations.

**Key features:**
- Global satellite imagery from Sentinel-2 (MajorTOM Core-S2L2A)
- Multiple embedding models optimized for Earth observation
- Fast similarity search without raw image preprocessing
- Ready-to-use Parquet format for efficient data access

## Dataset Description

### Data Source

- **Base dataset**: MajorTOM Core-S2L2A (Sentinel-2 Level 2A, 2.2M+ samples)
- **Processing**: Center crop (384×384 pixels) + uniform global sampling

### Embedding Models

Four state-of-the-art vision models are used:

| Model | Description | Training Data |
| :--- | :--- | :--- |
| **SigLIP** | General-purpose vision-language model | Web-scale natural image-text pairs |
| **DINOv2** | Self-supervised vision transformer | Web-scale natural images (self-supervised) |
| **FarSLIP** | Fine-grained satellite imagery model | Satellite image-text pairs |
| **SatCLIP** | Location-based satellite model | Satellite image-location pairs |

## Dataset Splits

### 1.
`uniform_sample_250k` ⚠️ Preview

```
├── uniform_sample_250k
│   ├── dinov2
│   │   ├── DINOv2_grid_sample_center_224x224_249k_MajorTOM.parquet
│   │   └── DINOv2_grid_sample_center_384x384_244k.parquet
│   ├── farslip
│   │   └── FarSLIP_grid_sample_center_384x384_244k.parquet
│   ├── satclip
│   │   └── SatCLIP_grid_sample_center_384x384_244k.parquet
│   └── siglip
│       └── SigLIP_grid_sample_center_384x384_244k.parquet
```

- **~250,000** globally distributed satellite images
- **Current status**: Preview revision with ~244k pre-computed embeddings, plus ~249k embeddings sampled from [Major-TOM/Core-S2RGB-DINOv2](https://huggingface.co/datasets/Major-TOM/Core-S2RGB-DINOv2)
- **Note**: About 4-6k original image chips were lost due to a network error; the full version is coming soon
- **Crop size**: For the 1/9 sampled grids, we crop the central bounding box of each grid. A single crop size of 384×384 is used for all models and all pre-computed embeddings, so the embeddings from different models represent the same regions of the Earth's surface.
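The center-crop step described above can be sketched as follows. This is a minimal illustration, not the actual processing pipeline: the `crop_center` helper and the 1068×1068 grid-cell size are assumptions for the example.

```python
import numpy as np

def crop_center(image: np.ndarray, size: int = 384) -> np.ndarray:
    """Extract a size x size crop from the center of an H x W x C array."""
    h, w = image.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return image[top:top + size, left:left + size]

# Example: a (hypothetical) 1068 x 1068 grid cell cropped to 384 x 384
grid = np.zeros((1068, 1068, 3), dtype=np.uint8)
patch = crop_center(grid)
print(patch.shape)  # (384, 384, 3)
```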
| Filename | Embedding Model | Crop Size | Model Input Size | Embedding Dim | Source |
|----------|-----------------|-----------|------------------|---------------|--------|
| `DINOv2_grid_sample_center_224x224_249k_MajorTOM.parquet` | [DINOv2-large](https://huggingface.co/facebook/dinov2-large) | 224×224 | 224×224 | 1024 | [Major-TOM/Core-S2RGB-DINOv2](https://huggingface.co/datasets/Major-TOM/Core-S2RGB-DINOv2) |
| `DINOv2_grid_sample_center_384x384_244k.parquet` | [DINOv2-large](https://huggingface.co/facebook/dinov2-large) | 384×384 | 224×224 | 1024 | Pre-computed |
| `FarSLIP_grid_sample_center_384x384_244k.parquet` | [FarSLIP-ViT-B-16](https://huggingface.co/ZhenShiL/FarSLIP) | 384×384 | 224×224 | 512 | Pre-computed |
| `SatCLIP_grid_sample_center_384x384_244k.parquet` | [SatCLIP-ViT16-L40](https://github.com/microsoft/satclip) | 384×384 | 224×224 | 256 | Pre-computed |
| `SigLIP_grid_sample_center_384x384_244k.parquet` | [SigLIP-SO400M-14](https://huggingface.co/timm/ViT-SO400M-14-SigLIP-384) | 384×384 | 384×384 | 1152 | Pre-computed |

### 2. `uniform_sample_22k`

- **22,000** globally distributed satellite images
- **Files**: `grid_sample_center_22k_{FarSLIP,SatCLIP,SigLIP}_384x384.parquet`

### 3. `Zhejiang_samples`

- **2,000** samples from the Zhejiang region, China
- **Files**: `zhejiang_sample_center_2k_{FarSLIP,SatCLIP,SigLIP}_384x384.parquet`
- Regional case study dataset

## Data Format

All embeddings are stored in **Parquet** format:

- Efficient columnar storage for fast download
- 384×384 pixel satellite image crops

## Related Work

- **Tutorial**: [EarthEmbeddingExplorer Tutorial](https://huggingface.co/spaces/ML4Sustain/EarthExplorer/blob/main/Tutorial.md)
- **Application**: [EarthEmbeddingExplorer Space](https://huggingface.co/spaces/ML4Sustain/EarthExplorer)
- **Base Dataset**: [MajorTOM by ESA](https://github.com/ESA-PhiLab/MajorTOM)

## License

CC-BY-SA-4.0
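## Example: Similarity Search over Embeddings

The Parquet files can be queried with a simple cosine-similarity search. The sketch below uses a toy DataFrame in place of a downloaded file; the `embedding` column name and the query vector are illustrative assumptions, so check the actual Parquet schema before adapting it.

```python
import numpy as np
import pandas as pd

def top_k_similar(embeddings: np.ndarray, query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k rows most cosine-similar to the query vector."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = emb @ q
    return np.argsort(-scores)[:k]

# Toy stand-in for a loaded Parquet file; in practice, e.g.:
#   df = pd.read_parquet("SigLIP_grid_sample_center_384x384_244k.parquet")
rng = np.random.default_rng(0)
df = pd.DataFrame({"embedding": list(rng.normal(size=(100, 1152)))})

matrix = np.stack(df["embedding"].to_numpy())
query = matrix[0]  # a query embedding, e.g. produced by a SigLIP text encoder
idx = top_k_similar(matrix, query, k=3)
print(idx)  # row 0 ranks first (it is the query itself)
```

For text queries, the query vector would come from the matching model's text encoder (SigLIP or FarSLIP); for location queries, from SatCLIP's location encoder.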