Archaeological Site Dataset (CAA UK 2025)
Dataset Summary
This dataset provides multi-channel remote sensing data for training machine learning models to detect archaeological sites. It combines Sentinel-2 satellite imagery, FABDEM elevation data, and derived spectral indices into 11-channel representations of 1×1 km grid cells at 10 m resolution.
Key Features:
- Multi-modal data: 6 spectral bands + 3 spectral indices + 2 terrain features
- Balanced dataset: Positives, integrated negatives, landcover negatives, and unlabeled samples
- Extensive augmentation: Geometric (rotation) and radiometric augmentations
- High resolution: 100×100 pixels per grid cell (10m/pixel)
- Geographic context: Integrated negatives from same regions as archaeological sites
Dataset Structure
Data Instances
Each sample consists of:
- 11 channels stored as separate `.npy` files (float32, 100×100 pixels each)
- Binary label: 1 (archaeological site), 0 (non-site), or -1 (unlabeled)
- Metadata: Geographic coordinates, rotation angle, augmentation type, site information
Example directory structure:
grid_000001_rot000/
├── channels/
│ ├── B2.npy # Sentinel-2 Blue
│ ├── B3.npy # Sentinel-2 Green
│ ├── B4.npy # Sentinel-2 Red
│ ├── B8.npy # Sentinel-2 NIR
│ ├── B11.npy # Sentinel-2 SWIR1
│ ├── B12.npy # Sentinel-2 SWIR2
│ ├── NDVI.npy # Normalized Difference Vegetation Index
│ ├── NDWI.npy # Normalized Difference Water Index
│ ├── BSI.npy # Bare Soil Index
│ ├── DEM.npy # Elevation (FABDEM)
│ └── Slope.npy # Terrain slope
├── labels/
│ ├── binary_label.npy
│ ├── pos_type.txt
│ └── neg_type.txt
└── info.json
Data Fields
Channel Schema (11 channels per grid)
| Index | Channel | Source | Resolution | Wavelength/Description |
|---|---|---|---|---|
| 0 | B2 | Sentinel-2 | 10m | Blue (490nm) |
| 1 | B3 | Sentinel-2 | 10m | Green (560nm) |
| 2 | B4 | Sentinel-2 | 10m | Red (665nm) |
| 3 | B8 | Sentinel-2 | 10m | NIR (842nm) |
| 4 | B11 | Sentinel-2 | 20m→10m | SWIR1 (1610nm) |
| 5 | B12 | Sentinel-2 | 20m→10m | SWIR2 (2190nm) |
| 6 | NDVI | Calculated | 10m | (B8-B4)/(B8+B4) - Vegetation |
| 7 | NDWI | Calculated | 10m | (B3-B8)/(B3+B8) - Water |
| 8 | BSI | Calculated | 10m | Bare Soil Index |
| 9 | DEM | FABDEM | 30m→10m | Elevation (meters) |
| 10 | Slope | Derived | 10m | Terrain slope (degrees) |
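The derived indices in rows 6-8 can be computed directly from the band arrays. A minimal sketch follows; the NDVI and NDWI formulas are taken from the table above, while the BSI formula is the commonly used definition and is an assumption here, since the card does not spell it out (the small epsilon guards against division by zero):

```python
import numpy as np

def ndvi(b8, b4):
    # (B8 - B4) / (B8 + B4), as given in the channel schema
    return (b8 - b4) / (b8 + b4 + 1e-10)

def ndwi(b3, b8):
    # (B3 - B8) / (B3 + B8), as given in the channel schema
    return (b3 - b8) / (b3 + b8 + 1e-10)

def bsi(b2, b4, b8, b11):
    # Common Bare Soil Index definition (assumed; not spelled out in the card):
    # ((B11 + B4) - (B8 + B2)) / ((B11 + B4) + (B8 + B2))
    num = (b11 + b4) - (b8 + b2)
    den = (b11 + b4) + (b8 + b2)
    return num / (den + 1e-10)
```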
Metadata Fields (grid_metadata.parquet)
| Column | Type | Description |
|---|---|---|
| grid_id | string | Unique grid identifier (e.g., "grid_000001_rot000") |
| centroid_lon | float | Grid center longitude (WGS84) |
| centroid_lat | float | Grid center latitude (WGS84) |
| label | int | 1 = site, 0 = non-site, -1 = unlabeled |
| label_source | string | Data source origin |
| image_path | string | Path to grid directory |
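To illustrate the schema, here is a small hypothetical metadata frame (the rows and values are invented for illustration; only the column layout comes from the table above):

```python
import pandas as pd

# Hypothetical rows matching the metadata schema (values are illustrative only)
meta = pd.DataFrame({
    'grid_id': ['grid_000001_rot000', 'lneg_000002'],
    'centroid_lon': [-75.10, -75.30],
    'centroid_lat': [-14.20, -14.40],
    'label': [1, 0],
    'label_source': ['known_site', 'landcover'],
    'image_path': ['grid_images/grid_000001_rot000', 'grid_images/lneg_000002'],
})

# Select all positive samples
positives = meta[meta['label'] == 1]
```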
Data Splits
CRITICAL: Prevent Data Leakage
Do NOT split randomly! Rotations and augmentations of the same site must stay in the same split.
Recommended approach:
- Group samples by original site index (extracted from `grid_id`)
- Split sites (not samples) into train/val/test
- All rotations/augmentations of a site go to the same split
Suggested ratios:
- Train: 70% of sites
- Validation: 15% of sites
- Test: 15% of sites
Dataset Composition
Sample Types
Given N known sites and rotation step of 120° (R=3 rotations):
| Data Type | Count | Label | Description |
|---|---|---|---|
| Positives (base) | 3×N | 1 | Original + 2 rotations per site |
| Positives (augmented) | 9×N | 1 | 3 radiometric variants × 3 rotations |
| Total Positives | 12×N | 1 | All positive samples |
| Integrated Negatives (base) | 3×N | 0 | From same areas as sites |
| Integrated Negatives (aug) | 9×N | 0 | 3 variants × 3 rotations |
| Total Integrated Neg. | 12×N | 0 | Surrounding landscape context |
| Landcover Negatives | 3×N | 0 | Urban/water/cropland |
| Unlabeled | ~1.5×N | -1 | Background samples |
| TOTAL | ~28.5×N | mixed | Complete dataset |
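As a sanity check, the per-type counts above sum to the stated ~28.5×N. A small sketch, with N as the number of known sites:

```python
def expected_counts(n_sites, rotations=3, radiometric_variants=3):
    """Total sample count implied by the composition table."""
    # Positives: base rotations plus radiometric variants of each rotation -> 12N
    pos = rotations * n_sites * (1 + radiometric_variants)
    # Integrated negatives follow the same scheme -> 12N
    ineg = rotations * n_sites * (1 + radiometric_variants)
    # Landcover negatives -> 3N; unlabeled background -> ~1.5N
    lneg = rotations * n_sites
    unlabeled = 1.5 * n_sites
    return pos + ineg + lneg + unlabeled

# e.g. 100 known sites -> ~2850 samples in total
```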
Data Augmentation
1. Geometric Augmentation (Rotation)
- 3 rotations per site: 0°, 120°, 240°
- Extracted at 1.5× size, rotated, then center-cropped
- Applied to positives and integrated negatives
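The extract-rotate-crop step above can be sketched as follows. This is an illustration assuming `scipy.ndimage.rotate`; the production pipeline may differ in interpolation and edge handling:

```python
import numpy as np
from scipy.ndimage import rotate

def rotate_and_crop(patch_150, angle_deg, out_size=100):
    """Rotate a 1.5x-sized patch (150x150 at 10 m/px) and center-crop to 100x100.

    Extracting at 1.5x before rotating keeps the final crop free of the
    empty corners that rotation would otherwise introduce.
    """
    rotated = rotate(patch_150, angle_deg, reshape=False, order=1, mode='nearest')
    h, w = rotated.shape
    top = (h - out_size) // 2
    left = (w - out_size) // 2
    return rotated[top:top + out_size, left:left + out_size]
```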
2. Radiometric Augmentation
Three variants per rotated sample:
- aug1: +8% brightness, +5% contrast, noise σ=0.015
- aug2: -8% brightness, -5% contrast, noise σ=0.015
- aug3: No brightness/contrast, noise σ=0.025
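A sketch of the three radiometric variants, assuming reflectance-like float arrays, a multiplicative brightness scale, mean-centered contrast adjustment, and additive Gaussian noise (the card does not specify the exact implementation):

```python
import numpy as np

def radiometric_aug(img, brightness=0.0, contrast=0.0, noise_sigma=0.0, rng=None):
    """Apply brightness/contrast/noise perturbations to one channel array."""
    rng = rng or np.random.default_rng()
    out = img * (1.0 + brightness)                             # brightness scale
    out = (out - out.mean()) * (1.0 + contrast) + out.mean()   # contrast about the mean
    out = out + rng.normal(0.0, noise_sigma, size=img.shape)   # additive Gaussian noise
    return out.astype(np.float32)

# The three variants described above:
# aug1 = radiometric_aug(img, +0.08, +0.05, 0.015)
# aug2 = radiometric_aug(img, -0.08, -0.05, 0.015)
# aug3 = radiometric_aug(img,  0.00,  0.00, 0.025)
```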
Dataset Creation
Source Data
Satellite Imagery:
- Sentinel-2: Multi-spectral optical imagery (2023-2024)
- FABDEM: Forest And Buildings removed Copernicus DEM
Archaeological Sites:
- Known archaeological site locations (latitude/longitude)
- Site types may include geoglyphs, mounds, settlements, etc.
Negative Samples:
- Integrated negatives: 4 corners of rotated grids (same geographic areas)
- Landcover negatives: Urban (40%), water (30%), cropland (30%)
- Unlabeled: Random background samples with exclusion buffer
Data Collection Pipeline
- Known site extraction: Multi-channel data centered on archaeological sites
- Rotation generation: Geometric augmentation (0°, 120°, 240°)
- Integrated negatives: Corner sampling from same regions
- Landcover negatives: Sampling from urban/water/crop areas
- Unlabeled sampling: Random background with site exclusion
- Radiometric augmentation: Brightness/contrast/noise variations
Usage
Loading the Dataset
```python
import numpy as np
import pandas as pd
from pathlib import Path

# Load metadata
metadata = pd.read_parquet('grid_metadata.parquet')

# Load a single sample
def load_sample(grid_path):
    channels = {}
    channel_names = ['B2', 'B3', 'B4', 'B8', 'B11', 'B12',
                     'NDVI', 'NDWI', 'BSI', 'DEM', 'Slope']
    for ch in channel_names:
        channels[ch] = np.load(f'{grid_path}/channels/{ch}.npy')
    # Stack into (11, 100, 100) tensor
    data = np.stack([channels[ch] for ch in channel_names], axis=0)
    # Load label
    label = np.load(f'{grid_path}/labels/binary_label.npy')
    return data, label

# Example
sample_data, sample_label = load_sample('grid_images/grid_000001_rot000')
print(f"Data shape: {sample_data.shape}")  # (11, 100, 100)
print(f"Label: {sample_label}")  # [1] or [0] or [-1]
```
PyTorch DataLoader with Proper Splitting
```python
import torch
from torch.utils.data import Dataset, DataLoader, WeightedRandomSampler
import pandas as pd
import numpy as np

class ArchaeologicalDataset(Dataset):
    def __init__(self, metadata_df, base_path):
        self.metadata = metadata_df.reset_index(drop=True)
        self.base_path = base_path

    def __len__(self):
        return len(self.metadata)

    def __getitem__(self, idx):
        row = self.metadata.iloc[idx]
        grid_path = f"{self.base_path}/{row['image_path']}"
        # Load channels
        channel_names = ['B2', 'B3', 'B4', 'B8', 'B11', 'B12',
                         'NDVI', 'NDWI', 'BSI', 'DEM', 'Slope']
        channels = [np.load(f'{grid_path}/channels/{ch}.npy')
                    for ch in channel_names]
        data = torch.FloatTensor(np.stack(channels, axis=0))
        # Load label
        label = torch.FloatTensor(np.load(f'{grid_path}/labels/binary_label.npy'))
        return data, label, row['grid_id']

# Load metadata and create splits
df = pd.read_parquet('grid_metadata.parquet')

# Extract site index (CRITICAL: group by original site!)
df['site_index'] = df['grid_id'].str.extract(r'(grid|ineg)_(\d+)')[1]

# Split by sites, not samples
unique_sites = df[df['grid_id'].str.startswith('grid_')]['site_index'].unique()
np.random.seed(42)
np.random.shuffle(unique_sites)

n_train = int(0.7 * len(unique_sites))
n_val = int(0.15 * len(unique_sites))
train_sites = unique_sites[:n_train]
val_sites = unique_sites[n_train:n_train + n_val]
test_sites = unique_sites[n_train + n_val:]

# Assign splits
df['split'] = 'test'
df.loc[df['site_index'].isin(train_sites), 'split'] = 'train'
df.loc[df['site_index'].isin(val_sites), 'split'] = 'val'

# Create datasets
train_dataset = ArchaeologicalDataset(
    df[df['split'] == 'train'],
    base_path='grid_images'
)

# Create balanced sampler for training (boolean masks are converted to
# numpy arrays so they can index the torch weight tensor)
train_df = df[df['split'] == 'train'].reset_index(drop=True)
weights = torch.zeros(len(train_df))
weights[(train_df['label'] == 1).to_numpy()] = 0.50 / (train_df['label'] == 1).sum()
weights[(train_df['label'] == 0).to_numpy()] = 0.40 / (train_df['label'] == 0).sum()
weights[(train_df['label'] == -1).to_numpy()] = 0.10 / (train_df['label'] == -1).sum()

sampler = WeightedRandomSampler(weights, len(train_df), replacement=True)
train_loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)
```
Train/Val/Test Split Guidelines
CRITICAL: Prevent Data Leakage
NEVER split randomly! Rotations and augmentations of the same site must stay together.
Step-by-Step Guide
```python
import pandas as pd
import numpy as np

# Load metadata
df = pd.read_parquet('grid_metadata.parquet')

# Extract base site index from grid_id
# Examples:
#   grid_000001_rot000_aug1 -> 000001
#   ineg_000045_rot120      -> 000045
df['site_index'] = df['grid_id'].str.extract(r'(grid|ineg)_(\d+)')[1]

# Get unique sites (positives only for stratification)
unique_sites = df[df['grid_id'].str.startswith('grid_')]['site_index'].unique()

# Shuffle and split SITES (not samples!)
np.random.seed(42)
np.random.shuffle(unique_sites)

n_train = int(0.7 * len(unique_sites))
n_val = int(0.15 * len(unique_sites))
train_sites = unique_sites[:n_train]
val_sites = unique_sites[n_train:n_train + n_val]
test_sites = unique_sites[n_train + n_val:]

# Assign splits based on site membership
df['split'] = 'test'
df.loc[df['site_index'].isin(train_sites), 'split'] = 'train'
df.loc[df['site_index'].isin(val_sites), 'split'] = 'val'

# Distribute landcover negatives and unlabeled randomly
mask = df['grid_id'].str.startswith(('lneg_', 'unla_'))
df.loc[mask, 'split'] = np.random.choice(
    ['train', 'val', 'test'],
    size=mask.sum(),
    p=[0.7, 0.15, 0.15]
)

# Verify no leakage (landcover/unlabeled rows have no site_index,
# so drop NaN before comparing)
train_sites_set = set(df[df['split'] == 'train']['site_index'].dropna())
val_sites_set = set(df[df['split'] == 'val']['site_index'].dropna())
test_sites_set = set(df[df['split'] == 'test']['site_index'].dropna())
assert len(train_sites_set & val_sites_set) == 0, "Train-Val leakage!"
assert len(train_sites_set & test_sites_set) == 0, "Train-Test leakage!"
assert len(val_sites_set & test_sites_set) == 0, "Val-Test leakage!"

print(f"Train: {len(df[df['split']=='train'])} samples from {len(train_sites)} sites")
print(f"Val:   {len(df[df['split']=='val'])} samples from {len(val_sites)} sites")
print(f"Test:  {len(df[df['split']=='test'])} samples from {len(test_sites)} sites")
```
Considerations for Use
Data Characteristics
Integrated Negatives: Sampled from the corners of the same geographic areas as the positives (after rotation), representing surrounding landscape context. Corner patches have sharp boundaries with no blending.
Unlabeled Data (label = -1): Random background samples with exclusion buffer around known sites. May contain undiscovered archaeological sites. Suitable for semi-supervised learning or active learning scenarios.
Landcover Negatives: Explicitly sampled from urban areas (40%), water bodies (30%), and cropland (30%) to ensure the model learns to reject obvious non-archaeological features.
Cloud Cover: Maximum 20% cloud cover per image.
Citation
If you use this dataset, please cite:
@inproceedings{li2025fusing,
title={{Fusing Text and Terrain}: {An LLM}-Powered Pipeline for Preparing Archaeological Datasets from Literature and Remote Sensing Imagery},
author={Li, Linduo and Wu, Yifan and Wang, Zifeng},
booktitle={{CAA UK 2025}: Computer Applications and Quantitative Methods in Archaeology},
year={2025},
month={December},
address={University of Cambridge, UK},
organization={CAA UK},
note={Conference held 9--10 December 2025}
}
Presentation Links:
License
This dataset is released under the MIT License.
Acknowledgments
- Sentinel-2 satellite imagery was provided by the European Space Agency (ESA) through the Copernicus Programme.
- FABDEM elevation data were provided by the University of Bristol.
- Google Earth Engine was used as the primary data processing and analysis platform.
- Geoglyph location data were derived from publicly available archaeological compilations curated by James Q. Jacobs (2025), JQ Jacobs Archaeology, last modified July 31, 2025: https://jqjacobs.net/archaeology/geoglyph.html
Contact: linduo.li@ip-paris.fr
Last Updated: December 2025