---
license: mit
task_categories:
  - text-generation
  - image-segmentation
  - image-to-text
  - image-to-image
language:
  - en
size_categories:
  - n<1K
---

# Forest-Change

## Overview

Forest-Change is the first benchmark dataset specifically designed for joint forest change detection and captioning in remote sensing imagery. It provides bi-temporal satellite images, pixel-level deforestation masks, and multi-granularity semantic captions describing forest cover changes in tropical and subtropical regions.

## Dataset Details

- **Total Examples:** 334 annotated bi-temporal image pairs
- **Spatial Resolution:** ~30 m/pixel (medium resolution)
- **Original Image Size:** 480×480 pixels (cropped from larger scenes)
- **Processed Image Size:** 256×256 pixels (resized for model training)
- **Temporal Resolution:** 1 year between image pairs
- **Geographic Focus:** Tropical and subtropical deforestation fronts

## Dataset Splits

- **Training:** 270 examples (~80%)
- **Validation:** 31 examples (~10%)
- **Test:** 33 examples (~10%)

## Data Format

Each example contains:

- **Image A:** Pre-change RGB satellite image
- **Image B:** Post-change RGB satellite image
- **Change Mask:** Binary segmentation mask (0 = no change, 1 = deforestation)
- **Captions:** Five captions describing the forest change event at varied granularity
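The structure above can be sketched as a plain Python dict. Note that the field names here are illustrative assumptions for downstream code, not the dataset's actual schema:

```python
import numpy as np

# Hypothetical sketch of one Forest-Change example after preprocessing.
# Field names ("image_a", "mask", etc.) are assumptions, not the real schema.
example = {
    "image_a": np.zeros((256, 256, 3), dtype=np.uint8),  # pre-change RGB image
    "image_b": np.zeros((256, 256, 3), dtype=np.uint8),  # post-change RGB image
    "mask": np.zeros((256, 256), dtype=np.uint8),        # 0 = no change, 1 = deforestation
    "captions": ["..."] * 5,                             # five captions per example
}
```

A loader for the released files would map whatever the on-disk layout is into this kind of record before batching.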

## Data Sources

- **Imagery Source:** Google Earth Engine (GEE)
- **Base Dataset:** Derived from the forest ecosystem change detection dataset of Hewarathna et al. (2024)
- **Validation:** Forest cover changes verified through the Global Forest Watch (GFW) platform
- **Geographic Selection:** Based on the WWF 2015 Deforestation Fronts report

## Caption Generation

Captions are generated through a hybrid two-stage approach:

1. **Human Annotation:** One caption per example is manually written by domain annotators describing the observed changes.
2. **Rule-Based Generation:** Four additional captions are automatically generated from quantitative mask properties:
   - Percentage of newly deforested area (binned into descriptive severity levels)
   - Size and number of individual change patches
   - Spatial distribution patterns of deforestation
   - Variation in patch sizes

This approach ensures both semantic richness from human expertise and consistent structural variation across captions.
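The mask properties that drive the rule-based captions can be computed with a short sketch. This uses pure NumPy with a simple 4-connected flood fill for patch labelling; the exact severity bins and feature set used by the authors are assumptions:

```python
import numpy as np

def mask_stats(mask: np.ndarray) -> dict:
    """Quantitative mask properties of the kind used for rule-based captions
    (a sketch; the dataset's exact binning and features are not specified here)."""
    pct = 100.0 * float(mask.sum()) / mask.size  # percentage of changed pixels

    # 4-connected component labelling via flood fill -> patch count and sizes.
    seen = np.zeros_like(mask, dtype=bool)
    sizes = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, size = [(i, j)], 0
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                sizes.append(size)

    return {
        "pct_deforested": pct,            # severity (to be binned into levels)
        "num_patches": len(sizes),        # number of individual change patches
        "patch_size_std": float(np.std(sizes)) if sizes else 0.0,  # size variation
    }

# Tiny demo mask: one 4-pixel patch and one isolated pixel.
demo = np.zeros((8, 8), dtype=np.uint8)
demo[0:2, 0:2] = 1
demo[5, 5] = 1
stats = mask_stats(demo)
```

A caption template would then map these numbers to phrases, e.g. a low `pct_deforested` bin to "minor forest loss" with "a few scattered patches".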

## Key Characteristics

- **Change Coverage:**
  - Mean: <5% deforestation per image
  - Maximum: 40% deforestation
  - Distribution: heavily skewed toward lower deforestation percentages
- **Caption Length:** Bimodal distribution spanning both concise and detailed descriptions
- **Change Patterns:** Diverse deforestation manifestations, including:
  - Scattered small patches across forest areas
  - Concentrated clearing zones
  - Edge-of-clearing expansion patterns
  - Highly variable patch sizes and configurations
- **Caption Content:** Descriptions emphasize:
  - Degree/severity of forest loss
  - Spatial location within the image
  - Patch characteristics (size, number, distribution)

## Preprocessing

- All images resized to 256×256 pixels for consistency
- Change masks binarized (0 = no change, 1 = change)
- Bi-temporal image pairs pre-aligned
- Per-channel normalization using dataset-specific mean and standard deviation statistics
- No atmospheric correction applied
- No cloud masking applied (some samples contain partial cloud occlusion)
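The resize, binarization, and normalization steps can be sketched as follows. Nearest-neighbour resizing and the placeholder mean/std values are assumptions; the actual pipeline may use a different interpolation and the dataset's own statistics:

```python
import numpy as np

def preprocess(img: np.ndarray, mask: np.ndarray,
               mean: np.ndarray, std: np.ndarray, size: int = 256):
    """Sketch of the stated preprocessing: resize to size x size, binarize the
    mask, and normalize the image per channel. mean/std are placeholders."""
    # Nearest-neighbour resize via index sampling (libraries often use bilinear).
    h, w = img.shape[:2]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    img_r = img[ys][:, xs]
    mask_r = mask[ys][:, xs]

    # Binarize the mask (0 = no change, 1 = change).
    mask_b = (mask_r > 0).astype(np.uint8)

    # Per-channel normalization with dataset-specific statistics.
    img_n = (img_r.astype(np.float32) / 255.0 - mean) / std
    return img_n, mask_b

# Demo on a synthetic 480x480 pair with placeholder statistics.
img = np.random.randint(0, 256, (480, 480, 3), dtype=np.uint8)
mask = np.random.randint(0, 2, (480, 480), dtype=np.uint8)
mean = np.array([0.4, 0.4, 0.4], dtype=np.float32)
std = np.array([0.2, 0.2, 0.2], dtype=np.float32)
x, m = preprocess(img, mask, mean, std)
```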

## Use Cases

- Forest change detection and monitoring
- Deforestation segmentation in natural environments
- Change captioning for ecological applications
- Multi-task learning for remote sensing
- Benchmarking vision-language models on forest imagery
- Training interactive forest analysis systems
- Developing automated forest monitoring workflows

## Evaluation Metrics

Because of the severe class imbalance (most pixels are no-change), evaluation requires:

- **Per-class IoU:** Separate metrics for the change and no-change classes
- **Mean IoU (mIoU):** Average of the two per-class IoUs
- **Caption metrics:** BLEU-n (n = 1, 2, 3, 4), METEOR, ROUGE-L, CIDEr-D
- **Note:** Overall accuracy is not recommended due to the class imbalance
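The segmentation metrics above are standard and can be sketched for the binary case:

```python
import numpy as np

def per_class_iou(pred: np.ndarray, gt: np.ndarray):
    """Per-class IoU for binary change maps (0 = no change, 1 = change),
    plus their mean (mIoU). A standard-definition sketch."""
    ious = {}
    for cls in (0, 1):
        inter = np.logical_and(pred == cls, gt == cls).sum()
        union = np.logical_or(pred == cls, gt == cls).sum()
        ious[cls] = inter / union if union else float("nan")
    miou = (ious[0] + ious[1]) / 2
    return ious, miou

# Tiny demo: one false positive on a 2x2 map.
gt = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
ious, miou = per_class_iou(pred, gt)
```

Reporting both per-class values matters here: with <5% mean change coverage, the no-change IoU is near 1 almost regardless of model quality, so the change-class IoU carries most of the signal.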

## Limitations

- **Dataset Scale:** Only 334 examples, which restricts model generalization
- **Scene Diversity:** Limited number of unique geographic sites due to the cropping and augmentation strategy
- **Class Imbalance:** Most pixels represent no-change, which is challenging for detection models
- **Caption Quality:** The majority of captions are rule-based, limiting linguistic variation and naturalness
- **Geographic Grounding:** Captions incorporate little geographic or contextual information
- **Spatial Resolution:** Medium resolution (~30 m/pixel) limits detection of very small-scale changes
- **Temporal Coverage:** Fixed 1-year intervals between image pairs
- **Atmospheric Effects:** Some samples are affected by partial cloud occlusion
- **Edge Boundaries:** Fuzzy boundaries at deforestation patch edges complicate precise segmentation

## Citation

If you use this dataset, please cite:

```bibtex
@article{brock2024forestchat,
  title={Forest-Chat: Adapting Vision-Language Agents for Interactive Forest Change Analysis},
  author={Brock, James and Zhang, Ce and Anantrasirichai, Nantheera},
  journal={Ecological Informatics},
  year={2024}
}

@article{hewarathna2024change,
  title={Change Detection for Forest Ecosystems Using Remote Sensing Images with Siamese Attention U-Net},
  author={Hewarathna, A. I. and Hamlin, L. and Charles, J. and Vigneshwaran, P. and George, R. and Thuseethan, S. and Wimalasooriya, C. and Shanmugam, B.},
  journal={Technologies},
  volume={12},
  number={9},
  pages={160},
  year={2024}
}
```

The paper page is available at: https://huggingface.co/papers/2601.04497

## License

MIT License; intended for academic re-use only.

## Contact

For questions or issues regarding this dataset, please contact: