---
dataset_info:
  features:
  - name: height
    dtype: int64
  - name: width
    dtype: int64
  - name: fold
    dtype: string
  - name: raster_name
    dtype: string
  - name: location
    dtype: string
  - name: image
    dtype: image
  - name: tile_name
    dtype: string
  - name: annotations
    struct:
    - name: bbox
      sequence:
        sequence: float64
    - name: segmentation
      dtype: 'null'
    - name: area
      sequence: float64
    - name: iscrowd
      sequence: int64
    - name: is_rle_format
      dtype: 'null'
    - name: category
      sequence: string
  - name: tile_metadata
    struct:
    - name: crs
      dtype: string
    - name: transform
      sequence: float64
    - name: bounds
      sequence: float64
    - name: width
      dtype: int64
    - name: height
      dtype: int64
    - name: count
      dtype: int64
    - name: dtypes
      sequence: string
    - name: nodata
      dtype: 'null'
  splits:
  - name: test
    num_bytes: 11729504363.402
    num_examples: 1477
  - name: validation
    num_bytes: 2786536280
    num_examples: 387
  - name: train
    num_bytes: 16884458976
    num_examples: 585
  download_size: 31336873232
  dataset_size: 31400499619.402
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
  - split: train
    path: data/train-*
license: cc-by-4.0
tags:
- vision
- ai
- climate
- forest
- tree
- remote_sensing
size_categories:
- 10K<n<100K
---
# SelvaBox: A high-resolution dataset for tropical tree crown detection

This is the version of the SelvaBox dataset that was pre-processed and presented in our SelvaBox paper. The dataset consists of 14 rasters resampled to 4.5 cm GSD, collected in three countries: Brazil, Ecuador and Panama. These rasters were tiled into more than 2,400 images, and the dataset comprises over 83,000 unique human-made bounding box annotations of tropical tree crowns in dense canopies.
## Dataset Details

### Dataset Description
Training tiles are 3555×3555 pixels (160×160 m spatial extent), while validation and test tiles are 1777×1777 pixels (80×80 m). Train and validation tiles overlap their neighbors by 50%, while test tiles overlap by 75% (to ensure that the largest tree crowns, 50+ meters in diameter, fit entirely in at least one tile). The table below summarizes the three splits. Note that the # Annotations reported exceeds 83,000 because the overlap between tiles duplicates annotations. The overlap has a similar effect on # Tiles: there are more test tiles than train or validation tiles because of their 75% overlap, compared to 50%. The 'Geographic Area % of total dataset' column more accurately describes how much data was assigned to each split.
| Split | Tile Size (px) | Tile Size (m) | Overlap | # Tiles | # Annotations | Geographic Area % of total dataset |
|---|---|---|---|---|---|---|
| Train | 3555 | 160.0 m | 50% | 585 | 232,071 | ~74% |
| Valid | 1777 | 80.0 m | 50% | 387 | 38,651 | ~13% |
| Test | 1777 | 80.0 m | 75% | 1,477 | 161,188 | ~13% |
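As an illustration of how these overlaps translate into tile positions along one raster axis, here is a minimal sketch. This is our own illustrative code, not the paper's tiling pipeline; the function name and edge handling are hypothetical.

```python
def tile_origins(raster_size: int, tile_size: int, overlap: float) -> list[int]:
    """Return pixel origins of tiles along one axis, given the tile size
    and the fractional overlap between consecutive tiles."""
    stride = int(tile_size * (1 - overlap))
    origins = list(range(0, max(raster_size - tile_size, 0) + 1, stride))
    # Add a final tile flush with the raster edge if needed.
    if origins and origins[-1] + tile_size < raster_size:
        origins.append(raster_size - tile_size)
    return origins

# Test tiles: 1777 px with 75% overlap -> a stride of 444 px.
print(tile_origins(4000, 1777, 0.75)[:3])  # → [0, 444, 888]
```

With 75% overlap the stride is only a quarter of the tile size, which is why the test split ends up with more tiles than the train split despite covering a smaller geographic area.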
- Curated by: Will be added after double-blind review.
- Funded by: Will be added after double-blind review.
- License: CC BY 4.0
### Dataset Sources
- Repository: Will be added after double-blind review.
- Paper: Will be added after double-blind review.
## Uses

This dataset was designed to train instance detection models specifically for tropical trees in the rainforests of Central and South America. Please note that the annotations do not contain taxonomic information such as tree species: it is a single-class tree detection dataset.
## Dataset Structure
Unfortunately, because of the large size of the images, the dataset previewer currently does not work properly.
The images are stored as TIFF files (loaded as PIL images) and the annotations follow the COCO format.
To check the structure of the dataset, you can use the following Python script, which prints the metadata of the first image in the train split without downloading the entire dataset:
```python
from datasets import load_dataset

# Stream the split so only the first example is fetched
dataset = load_dataset("CanopyRS/SelvaBox", split="train", streaming=True)
first_row = next(iter(dataset))
print("First row data:", first_row)
print("First row keys:", first_row.keys())
```
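Each row's `tile_metadata` carries the tile's `crs`, `transform` and `bounds`. Assuming the `transform` sequence stores the six affine coefficients in the usual rasterio order (a, b, c, d, e, f) — an assumption on our part — pixel coordinates can be mapped to georeferenced coordinates as in this sketch (the coefficient values below are illustrative, not taken from the dataset):

```python
def pixel_to_geo(transform, col, row):
    """Map a pixel (col, row) to georeferenced (x, y), given the six
    affine coefficients (a, b, c, d, e, f) in rasterio order."""
    a, b, c, d, e, f = transform[:6]
    return a * col + b * row + c, d * col + e * row + f

# Illustrative transform: 4.5 cm GSD, north-up, origin at (500000, 9800000)
t = [0.045, 0.0, 500000.0, 0.0, -0.045, 9800000.0]
print(pixel_to_geo(t, 0, 0))  # top-left corner of the tile
```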
To display the image from the first row you can run:
```python
from matplotlib import pyplot as plt

img = first_row["image"]
plt.imshow(img)
plt.axis("off")
plt.title(first_row["tile_name"], fontsize=10)
plt.show()
```
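Since the annotations follow the COCO convention, each `bbox` should be `[x, y, width, height]` in pixel coordinates. A sketch for overlaying the boxes on a tile (the helper name is ours):

```python
from matplotlib import pyplot as plt
from matplotlib.patches import Rectangle

def show_boxes(row):
    """Display a tile with its COCO-style [x, y, w, h] boxes overlaid."""
    fig, ax = plt.subplots()
    ax.imshow(row["image"])
    for x, y, w, h in row["annotations"]["bbox"]:
        ax.add_patch(Rectangle((x, y), w, h, fill=False, edgecolor="red"))
    ax.axis("off")
    ax.set_title(row["tile_name"], fontsize=10)
    return fig

# e.g. show_boxes(first_row); plt.show()
```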
Additionally, we provide the annotations and the train, validation, and test AOIs (areas of interest) as .gpkg GeoPackages for all source orthomosaics in a separate branch.
## Dataset Creation

### Curation Rationale
Will be added after double-blind review.
### Source Data
Here is an overview of the different orthomosaics that were pre-processed and tiled to produce SelvaBox:
| Raster Name | Drone | Country | Date | Sky Conditions | GSD (cm/px) | Forest Type | # Hectares | # Annotations | Proposed Split(s) |
|---|---|---|---|---|---|---|---|---|---|
| zf2quad | m3m | Brazil | 2024-01-30 | clear | 2.3 | primary | 15.5 | 1,343 | valid |
| zf2tower | m3m | Brazil | 2024-01-30 | clear | 2.2 | primary | 9.5 | 1,716 | test |
| zf2transectew | m3m | Brazil | 2024-01-30 | clear | 1.5 | primary | 2.6 | 359 | train |
| zf2campinarana | m3m | Brazil | 2024-01-31 | clear | 2.3 | primary | 66 | 16,396 | train |
| transectotoni | mavicpro | Ecuador | 2017-08-10 | cloudy | 4.3 | primary | 4.3 | 5,119 | train |
| tbslake | m3m | Ecuador | 2023-05-25 | clear | 5.1 | primary | 19 | 1,279 | train, test |
| sanitower | mini2 | Ecuador | 2023-09-11 | cloudy | 1.8 | primary | 5.8 | 1,721 | train |
| inundated | m3e | Ecuador | 2023-10-18 | cloudy | 2.2 | primary | 68 | 9,075 | train, valid, test |
| pantano | m3e | Ecuador | 2023-10-18 | cloudy | 1.9 | primary | 41 | 4,193 | train |
| terrafirme | m3e | Ecuador | 2023-10-18 | clear | 2.4 | primary | 110 | 6,479 | train |
| asnortheast | m3m | Panama | 2023-12-07 | partial cloud | 1.3 | plantations, secondary | 33 | 12,930 | train, valid, test |
| asnorthnorth | m3m | Panama | 2023-12-07 | cloud | 1.2 | plantations, secondary | 15 | 6,020 | train |
| asforestnorthe2 | m3m | Panama | 2023-12-08 | clear | 1.5 | secondary | 20 | 5,925 | valid, test |
| asforestsouth2 | m3m | Panama | 2023-12-08 | clear | 1.6 | secondary | 28 | 10,582 | train |
### Annotations

SelvaBox is the largest tropical tree detection dataset, an order of magnitude larger than existing ones (mainly BCI50ha and Detectree2). It is also the second-largest tree detection dataset overall by annotation count, after OAM-TCD.
| Name | # Trees | GSD (cm) | Type | Biome |
|---|---|---|---|---|
| NeonTreeEval. | 16k | 10 | natural | temperate |
| ReforesTree | 4.6k | 2 | plantation | tropical |
| Firoze et al. | 6.5k | 2–5 | natural | temperate |
| Detectree2 | 3.8k | 10 | natural | tropical |
| BCI50ha | 4.7k | 4.5 | natural | tropical |
| BAMFORESTS | 27k | 1.6–1.8 | natural | temperate |
| QuebecTrees | 23k | 1.9 | natural | temperate |
| Quebec Plantation | 19.6k | 0.5 | plantation | temperate |
| OAM-TCD | 280k | 10 | mostly urban | worldwide |
| SelvaBox (ours) | 83k | 1.2–5.1 | natural | tropical |
#### Annotation process

The annotations were created by five domain experts following the exact same instructions, each starting with a demo and a practice annotation session. All annotations were made in ArcGIS Pro, with ArcGIS Online layers used to track the work of two annotators working on the same orthomosaic simultaneously. In large and dense areas, one or several annotators performed an additional pass over the orthomosaic to annotate potentially missed trees. Once annotations were completed by one or several annotators, up to two domain experts performed quality control on all annotations of each orthomosaic by following precise guidelines:
- A. Set up a 60×60 m grid over the orthomosaic.
- B. Verify the annotations by systematically scanning each cell, so that no area is missed.
- C. Ensure that as many trees as possible are annotated in each cell.
- D. Also annotate dead or leafless trees.
- E. Check that existing annotations are correct, adjusting them if necessary.
All annotators and reviewers were provided with documentation covering difficult cases, to be used as a reference whenever they were uncertain about the annotation procedure. For comparison, the annotations in OAM-TCD (NeurIPS 2024) were created by professional annotators who were not domain experts, and a part of those annotations was then reviewed by ecology experts.
#### Who are the annotators?
Will be added after double-blind review.
## Citation
BibTeX:
Will be added after double-blind review.
## Dataset Card Contact
Will be added after double-blind review.