---
license: mit
task_categories:
  - object-detection
  - zero-shot-object-detection
language:
  - en
size_categories:
  - 1M+
source_datasets:
  - DOTA
  - DIOR
  - FAIR1M
  - NWPU-VHR-10
  - HRSC2016
  - RSOD
  - AID
  - NWPU-RESISC45
  - SLM
  - EMS
tags:
  - remote-sensing
  - computer-vision
  - open-vocabulary
  - benchmark
  - image-dataset
pretty_name: LAE-1M
---


# LAE-1M: Locate Anything on Earth Dataset

<p align="center">
  <img src="https://jianchengpan.space/projects/LAE/assets/LAE-1M.png" alt="LAE-1M" width="600"/>
</p>

**LAE-1M** (Locate Anything on Earth - 1 Million) is a large-scale **open-vocabulary remote sensing object detection dataset** introduced in the paper *"Locate Anything on Earth: Advancing Open-Vocabulary Object Detection for Remote Sensing Community"* (AAAI 2025).  

It contains over **1M images** with **coarse-grained (LAE-COD)** and **fine-grained (LAE-FOD)** annotations, unified in **COCO format**, enabling **zero-shot** and **few-shot** detection in remote sensing.

---

## Dataset Details

### Dataset Description

- **Curated by:** Jiancheng Pan, Yanxing Liu, Yuqian Fu, Muyuan Ma, Jiahao Li, Danda Pani Paudel, Luc Van Gool, Xiaomeng Huang  
- **Funded by:** ETH Zürich, INSAIT (partial computing support)  
- **Shared by:** LAE-DINO Project Team  
- **Language(s):** Not language-specific; visual dataset  
- **License:** MIT License  

### Dataset Sources

- **Repository:** [GitHub - LAE-DINO](https://github.com/jaychempan/LAE-DINO)  
- **Paper:** [ArXiv 2408.09110](https://arxiv.org/abs/2408.09110), [AAAI 2025](https://ojs.aaai.org/index.php/AAAI/article/view/32672)  
- **Project Page:** [LAE Website](https://jianchengpan.space/LAE-website/index.html)  
- **Dataset Download:** [HuggingFace](https://huggingface.co/datasets/jaychempan/LAE-1M)  

---

## Dataset Structure

| Subset  | # Images  | # Classes | Format | Description                                     |
|---------|-----------|-----------|--------|-------------------------------------------------|
| LAE-COD | 400k+     | 20+       | COCO   | Coarse-grained detection (AID, EMS, SLM)        |
| LAE-FOD | 600k+     | 50+       | COCO   | Fine-grained detection (DIOR, DOTAv2, FAIR1M)   |
| LAE-80C | 20k (val) | 80        | COCO   | Benchmark with 80 semantically distinct classes |

All annotations are in **COCO JSON** format with bounding boxes and categories.
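
For orientation, a COCO annotation file can be parsed with the standard library alone. The field names below follow the standard COCO schema; the concrete file name, image name, and category values are illustrative stand-ins, not taken from this release:

```python
import json

# Minimal COCO-style content, as a stand-in for one of the subset
# annotation files (the real files follow the same schema)
coco = {
    "images": [{"id": 1, "file_name": "tile_0001.png", "width": 1024, "height": 1024}],
    "annotations": [
        # bbox is [x, y, width, height] in pixels, per the COCO convention
        {"id": 1, "image_id": 1, "category_id": 3, "bbox": [120, 340, 60, 30], "area": 1800},
    ],
    "categories": [{"id": 3, "name": "airplane"}],
}

# Index categories and group boxes per image, as a detection loader would
id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
boxes_per_image = {}
for ann in coco["annotations"]:
    boxes_per_image.setdefault(ann["image_id"], []).append(
        (id_to_name[ann["category_id"]], ann["bbox"])
    )

print(boxes_per_image[1])  # [('airplane', [120, 340, 60, 30])]
```

Replacing the inline dict with `coco = json.load(open(path))` for an actual subset JSON gives the same per-image grouping.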

---

## Uses

### Direct Use
- Open-Vocabulary Object Detection in Remote Sensing  
- Benchmarking zero-shot and few-shot detection models  
- Pretraining large vision-language models  
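
Benchmarking detection models against these COCO-style ground-truth boxes ultimately rests on intersection-over-union matching. A minimal reference implementation in the same `[x, y, w, h]` convention (a sketch, not the project's evaluation code):

```python
def iou_xywh(a, b):
    """IoU of two boxes given in COCO [x, y, width, height] convention."""
    ax1, ay1, aw, ah = a
    bx1, by1, bw, bh = b
    ax2, ay2 = ax1 + aw, ay1 + ah
    bx2, by2 = bx1 + bw, by1 + bh
    # Intersection rectangle (zero-sized if the boxes are disjoint)
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

print(iou_xywh([0, 0, 10, 10], [5, 5, 10, 10]))  # 0.14285714285714285
```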

### Out-of-Scope Use
- Any tasks requiring personal or sensitive information  
- Real-time inference on satellite streams without further optimization  

---

## Quick Start

```python
import io

from datasets import load_dataset
from PIL import Image  # requires Pillow

# Load the dataset
dataset = load_dataset("jaychempan/LAE-1M", split="train")

# Access one example
example = dataset[0]
print(example.keys())  # e.g. image, annotations; exact fields depend on the release

# Show the image: the field may hold raw bytes or an already-decoded PIL image
img = example["image"]
if isinstance(img, bytes):
    img = Image.open(io.BytesIO(img))
img.show()
```