---
license: mit
task_categories:
- text-generation
- image-segmentation
- image-to-text
- image-to-image
language:
- en
size_categories:
- n<1K
---

# Forest-Change

## Overview
Forest-Change is the first benchmark dataset specifically designed for joint forest change detection and captioning in remote sensing imagery. It provides bi-temporal satellite images, pixel-level deforestation masks, and multi-granularity semantic captions describing forest cover changes in tropical and subtropical regions.

## Dataset Details
- **Total Examples**: 334 annotated bi-temporal image pairs
- **Spatial Resolution**: ~30m/pixel (medium resolution)
- **Original Image Size**: 480×480 pixels (cropped from larger scenes)
- **Processed Image Size**: 256×256 pixels (resized for model training)
- **Temporal Resolution**: 1 year between image pairs
- **Geographic Focus**: Tropical and subtropical deforestation fronts

## Dataset Splits
- **Training**: 270 examples (~80%)
- **Validation**: 31 examples (~10%)
- **Test**: 33 examples (~10%)

## Data Format
Each example contains:
- **Image A**: Pre-change RGB satellite image
- **Image B**: Post-change RGB satellite image
- **Change Mask**: Binary segmentation mask (0=no change, 1=deforestation)
- **Captions**: Five captions describing the forest change event with varied granularity
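The per-example format above can be sketched as a simple record. The field names and validation logic below are illustrative assumptions; the card does not specify the exact schema, so the real loader may use different keys.

```python
import numpy as np

# Hypothetical field names -- the actual dataset schema may differ.
example = {
    "image_a": np.zeros((256, 256, 3), dtype=np.uint8),  # pre-change RGB
    "image_b": np.zeros((256, 256, 3), dtype=np.uint8),  # post-change RGB
    "mask": np.zeros((256, 256), dtype=np.uint8),        # 0 = no change, 1 = deforestation
    "captions": ["No visible forest loss."] * 5,         # five captions per example
}

def validate(ex):
    """Check one record against the documented format."""
    assert ex["image_a"].shape == ex["image_b"].shape == (256, 256, 3)
    assert set(np.unique(ex["mask"])) <= {0, 1}
    assert len(ex["captions"]) == 5
    return True

validate(example)
```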

## Data Sources
- **Imagery Source**: Google Earth Engine (GEE)
- **Base Dataset**: Derived from Hewarathna et al. (2024) forest ecosystem change detection dataset
- **Validation**: Forest cover changes verified through Global Forest Watch (GFW) platform
- **Geographic Selection**: Based on WWF 2015 Deforestation Fronts report

## Caption Generation
Captions are generated through a hybrid two-stage approach:
1. **Human Annotation**: One caption per example manually created by domain annotators describing observed changes
2. **Rule-Based Generation**: Four additional captions automatically generated based on quantitative mask properties:
   - Percentage of newly deforested area (binned into descriptive severity levels)
   - Size and number of individual change patches
   - Spatial distribution patterns of deforestation
   - Variation in patch sizes

This approach ensures both semantic richness from human expertise and consistent structural variation across captions.
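The rule-based stage can be sketched roughly as follows. The severity thresholds, bin labels, and caption template here are illustrative assumptions, not the dataset's actual rules; patch counting uses a plain 4-connected flood fill to keep the sketch dependency-free.

```python
import numpy as np
from collections import deque

def count_patches(mask):
    """Count 4-connected change patches via BFS (pure-Python sketch)."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    n = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                n += 1
                q = deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return n

def rule_based_caption(mask):
    """One illustrative rule-based caption; severity bins are assumptions."""
    pct = 100.0 * mask.sum() / mask.size
    if pct == 0:
        severity = "no"
    elif pct < 5:
        severity = "minor"
    elif pct < 15:
        severity = "moderate"
    else:
        severity = "severe"
    return f"{severity} deforestation ({pct:.1f}% of the image) across {count_patches(mask)} patch(es)"
```

Additional templates over patch-size statistics and spatial distribution would follow the same pattern of binning a mask-derived quantity into descriptive phrases.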

## Key Characteristics
- **Change Coverage**: 
  - Mean: <5% deforestation per image
  - Maximum: 40% deforestation
  - Distribution: Heavily skewed toward lower deforestation percentages
- **Caption Length**: Bimodal distribution with both concise and detailed descriptions
- **Change Patterns**: Diverse deforestation manifestations including:
  - Scattered small patches across forest areas
  - Concentrated clearing zones
  - Edge-of-clearing expansion patterns
  - Highly variable patch sizes and configurations
- **Caption Content**: Descriptions emphasize:
  - Degree/severity of forest loss
  - Spatial location within the image
  - Patch characteristics (size, number, distribution)

## Preprocessing
- All images resized to 256×256 pixels for consistency
- Change masks binarized (0=no change, 1=change)
- Bi-temporal image pairs pre-aligned
- Per-channel normalization using dataset-specific mean and standard deviation statistics
- No atmospheric correction applied
- No cloud masking applied (some samples contain partial cloud occlusion)
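The normalization and binarization steps above can be sketched as follows. The mean/std values are placeholders, since the dataset-specific statistics are not listed on this card, and resizing is omitted to keep the sketch dependency-free.

```python
import numpy as np

# Placeholder statistics -- substitute the dataset-specific per-channel values.
MEAN = np.array([0.40, 0.45, 0.35], dtype=np.float32)
STD = np.array([0.20, 0.20, 0.20], dtype=np.float32)

def preprocess(image, mask):
    """Normalize an RGB image and binarize its mask.

    `image` is an HxWx3 uint8 array assumed already resized to 256x256.
    """
    x = image.astype(np.float32) / 255.0
    x = (x - MEAN) / STD                 # per-channel normalization
    y = (mask > 0).astype(np.uint8)      # 0 = no change, 1 = change
    return x, y
```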

## Use Cases
- Forest change detection and monitoring
- Deforestation segmentation in natural environments
- Change captioning for ecological applications
- Multi-task learning for remote sensing
- Benchmarking vision-language models on forest imagery
- Training interactive forest analysis systems
- Developing automated forest monitoring workflows

## Evaluation Metrics
Due to severe class imbalance (most pixels are no-change), evaluation requires:
- **Per-class IoU**: Separate metrics for change and no-change classes
- **Mean IoU (mIoU)**: Average of both class IoUs
- **Caption metrics**: BLEU-n (n=1,2,3,4), METEOR, ROUGE-L, CIDEr-D
- **Note**: Overall accuracy is not recommended due to class imbalance
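The segmentation metrics above can be sketched as a per-class IoU and mIoU computation for binary change maps:

```python
import numpy as np

def per_class_iou(pred, target):
    """Per-class IoU and mIoU for binary change maps (0 = no change, 1 = change)."""
    ious = {}
    for cls in (0, 1):
        inter = np.logical_and(pred == cls, target == cls).sum()
        union = np.logical_or(pred == cls, target == cls).sum()
        # NaN when a class is absent from both prediction and target.
        ious[cls] = inter / union if union else float("nan")
    ious["miou"] = (ious[0] + ious[1]) / 2.0
    return ious
```

Reporting the change-class IoU separately matters here: with under 5% mean change coverage, a model predicting all-no-change still scores high no-change IoU and overall accuracy.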

## Limitations
- **Dataset Scale**: Limited to 334 examples, restricting model generalization
- **Scene Diversity**: Limited number of unique geographic sites, since many examples derive from cropping and augmenting the same scenes
- **Class Imbalance**: Severe imbalance with most pixels representing no-change, challenging for detection models
- **Caption Quality**: Majority of captions are rule-based, limiting linguistic variation and naturalness
- **Geographic Grounding**: Limited incorporation of geographic features and contextual information in captions
- **Spatial Resolution**: Medium resolution (~30m/pixel) limits detection of very small-scale changes
- **Temporal Coverage**: Fixed 1-year intervals between image pairs
- **Atmospheric Effects**: Some samples affected by partial cloud occlusion
- **Edge Boundaries**: Fuzzy boundaries at deforestation patch edges complicate precise segmentation

## Citation
If you use this dataset, please cite:
```bibtex
@article{brock2024forestchat,
  title={Forest-Chat: Adapting Vision-Language Agents for Interactive Forest Change Analysis},
  author={Brock, James and Zhang, Ce and Anantrasirichai, Nantheera},
  journal={Ecological Informatics},
  year={2024}
}

@article{hewarathna2024change,
  title={Change detection for forest ecosystems using remote sensing images with Siamese attention U-Net},
  author={Hewarathna, AI and Hamlin, L and Charles, J and Vigneshwaran, P and George, R and Thuseethan, S and Wimalasooriya, C and Shanmugam, B},
  journal={Technologies},
  volume={12},
  number={9},
  pages={160},
  year={2024}
}
```

Paper information is available at https://huggingface.co/papers/2601.04497 and https://huggingface.co/papers/2601.14637.

## License
MIT License (intended for academic re-use only)

## Contact
For questions or issues regarding this dataset, please contact:
- James Brock: james.brock@bristol.ac.uk
- School of Computer Science, University of Bristol