---
license: cdla-permissive-1.0
task_categories:
  - image-text-to-text
  - visual-question-answering
  - multiple-choice
task_ids:
  - image-captioning
  - multiple-choice-qa
language:
  - en
pretty_name: BigEarthNet.txt
size_categories:
  - 1M<n<10M
tags:
  - remote sensing
  - vision-language
  - sentinel-1
  - sentinel-2
  - multispectral
configs:
  - config_name: default
    data_files:
      - split: all_data
        path: BigEarthNet.txt.parquet
    default: true
---


BigEarthNet.txt: A Large-Scale Multi-Sensor Image-Text Dataset and Benchmark for Earth Observation

BigEarthNet.txt is a large-scale multi-sensor image–text dataset for Earth observation, designed to advance vision–language learning on remote sensing data. It comprises 464,044 co-registered Sentinel-1 (SAR) and Sentinel-2 (multispectral) image pairs collected over Europe, paired with approximately 9.6 million textual annotations. The textual annotations include geographically anchored captions describing land-use/land-cover (LULC) classes and their spatial relationships, diverse visual question answering (VQA) pairs (binary and multiple-choice), and referring expression instructions for LULC localization. In addition, the dataset provides a manually verified benchmark split consisting of 1,082 image pairs with 15,029 textual annotations, specifically designed for reliable evaluation of vision–language models on complex multi-sensor remote sensing tasks. For more details on the dataset, please see our paper website.

The dataset supports 15 tasks (Presence, Area, Counting, Adjacency, Relative Position, Country, Season, and Climate Zone, denoted as Pr, A, Cnt, Adj, RP, Loc, S, and Clt, respectively) across 4 broad categories.

Parquet File Structure

The BigEarthNet.txt.parquet file contains multiple attributes:

  • ID: A unique identifier for each sample in the dataset.
  • s1_name: The name of the Sentinel-1 patch from BigEarthNet v2.0.
  • patch_id: The name of the Sentinel-2 patch from BigEarthNet v2.0.
  • input: The instruction or question for the VLM.
  • output: The reference answer.
  • type: The broader task-type of the sample, i.e., binary, mcq, captioning, or bounding box.
  • category: The more fine-grained task-type. See here for all type-category combinations.
  • split: The associated split of the sample, i.e., train, validation, test, or bench.
  • latitude: The latitude coordinate of the center of the image patch.
  • longitude: The longitude coordinate of the center of the image patch.
  • country: The acquisition country of the image patch. See here for all available values.
  • season: The acquisition season of the image patch. See here for all available values.
  • climate_zone: The associated Köppen-Geiger climate zone. See here for all available values.
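These attributes make it easy to slice the dataset with standard dataframe tools. The sketch below illustrates filtering on the columns listed above; the toy rows are made up for illustration (in practice you would load the real file with `pd.read_parquet("BigEarthNet.txt.parquet")`):

```python
import pandas as pd

# Toy frame with a subset of the BigEarthNet.txt.parquet columns;
# in practice: df = pd.read_parquet("BigEarthNet.txt.parquet")
df = pd.DataFrame({
    "ID": [0, 1, 2],
    "type": ["mcq", "binary", "mcq"],
    "category": ["climate zone", "presence", "season"],
    "split": ["train", "bench", "train"],
    "country": ["Portugal", "Finland", "Portugal"],
})

# Select all multiple-choice training samples acquired over Portugal
subset = df[
    (df["type"] == "mcq")
    & (df["split"] == "train")
    & (df["country"] == "Portugal")
]
print(len(subset))  # → 2
```

The same boolean-mask pattern applies to any combination of the metadata columns (season, climate_zone, etc.).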

How to use

The recommended way to jointly load the image and text data is through the custom PyTorch Dataset BENTxTDataset or the Lightning DataModule BENTxTDataModule, both provided in ben_txt_datamodule.py.

1. Download BigEarthNet.txt.parquet

Download using Git.

git clone https://huggingface.co/datasets/BIFOLD-BigEarthNetv2-0/BigEarthNet.txt

2. Download the Image Data

Download the Sentinel-1 and Sentinel-2 image data from the BigEarthNet v2.0 website.

3. Preprocess the Image Data

Convert the Sentinel-1 and Sentinel-2 image data to safetensors stored in an LMDB database for higher throughput using rico-hdl. Follow the installation instructions on GitHub, then execute the following command to convert the Sentinel-1 and Sentinel-2 image data downloaded to <S1_ROOT_DIR> and <S2_ROOT_DIR>.

rico-hdl bigearthnet --bigearthnet-s1-dir <S1_ROOT_DIR> --bigearthnet-s2-dir <S2_ROOT_DIR> --target-dir Encoded-BigEarthNet

4. Load the Data

Install uv, then install the required packages with the command below. Choose cpu or cu126 as the <option> to select either the PyTorch CPU build or PyTorch with CUDA 12.6.

uv sync --extra <option>

The following examples show how to jointly load text samples from BigEarthNet.txt with the corresponding image data from BigEarthNet v2.0. After executing the steps above, you should be able to run the following file from this repository:

uv run example_data_loading.py

or load the data manually using the provided datamodule as shown in the following two examples:

This example shows how to load the Red (B04), Green (B03), and Blue (B02) bands from the Sentinel-2 image data using the BENTxTDataset class. More details about the custom Dataset are provided in ben_txt_datamodule.py.

from ben_txt_datamodule import BENTxTDataset

ds_rgb = BENTxTDataset(
  lmdb_file="Encoded-BigEarthNet/",
  metadata_file="BigEarthNet.txt.parquet",
  bands=("B04", "B03", "B02"),
  img_size=120
)

sample = ds_rgb[0]
print(f"RGB input image: {sample['image_input'].shape}")
print(f"Text input: {sample['text_input']}")
print(f"Reference output: {sample['reference_output']}")

This example shows how to load the 10m and 20m spatial resolution bands from Sentinel-1 and Sentinel-2 using the BENTxTDataModule Lightning DataModule class. In this example, we apply multiple metadata filters to BigEarthNet.txt; more details about the custom DataModule are provided in ben_txt_datamodule.py.

from ben_txt_datamodule import BENTxTDataModule

# Lightning DataModule example using the 10m and 20m spatial resolution bands from Sentinel-1 and Sentinel-2 and multiple metadata filters.
# The datamodule will create 4 dataloaders: train, val, test, and bench.
dm = BENTxTDataModule(
  image_lmdb_file="Encoded-BigEarthNet/",
  metadata_file="BigEarthNet.txt.parquet",
  bands="S1S2-10m20m",
  img_size=120,
  batch_size=1,
  num_workers_dataloader=0,
  types=["mcq"],
  categories=["climate zone"],
  countries=["Portugal", "Finland"],
  seasons=["Summer"],
  climate_zones=None,
  point_token=["<point>", "</point>"],
  ref_token=["<ref>", "</ref>"]
)
dm.setup()
  
train_dl = dm.train_dataloader()
for batch in train_dl:
  print(f"Batch image input shape: {batch['image_input'].shape}")
  print(f"First batch sample text input: {batch['text_input'][0]}")
  print(f"First batch sample text reference output: {batch['reference_output'][0]}")
  break

Citation

If you use the BigEarthNet.txt dataset, please cite:

J. Herzog, M. Adler, L. Hackel, Y. Shu, A. Zavras, I. Papoutsis, P. Rota, B. Demir, 
"BigEarthNet.txt: A Large-Scale Multi-Sensor Image-Text Dataset and Benchmark for Earth Observation", 
arXiv preprint arXiv:2603.29630, 2026.
@article{Herzog2026BigEarthNetTXT,
  title={BigEarthNet.txt: A Large-Scale Multi-Sensor Image-Text Dataset and Benchmark for Earth Observation},
  author={Johann-Ludwig Herzog and Mathis Jürgen Adler and Leonard Hackel and Yan Shu and Angelos Zavras and Ioannis Papoutsis and Paolo Rota and Begüm Demir},
  journal={arXiv preprint arXiv:2603.29630},
  year={2026},
}