---
license: cc-by-nc-4.0
task_categories:
- image-segmentation
tags:
- flood
- disaster-response
- remote-sensing
- hurricane-francine
- uav
pretty_name: Flood and Waterfront Infrastructure Segmentation Dataset (FWISD)
size_categories:
- 1K<n<10K
---
# Flood and Waterfront Infrastructure Segmentation Dataset (FWISD)
## 1. Dataset Overview
The **Flood and Waterfront Infrastructure Segmentation Dataset (FWISD)** is constructed for post-disaster assessment, specifically focusing on the impact of **Hurricane Francine** (September 2024). This dataset utilizes high-resolution UAV imagery to enable precise semantic segmentation of floodwaters, infrastructure damage, and environmental elements.
- **Total Data Size**: ~4.36 GB
- **Image Resolution**: 1024 x 1024 pixels
- **Source**: NOAA UAV Imagery
- **Task**: Semantic Segmentation (12 Classes)
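As a quick sanity check of the stated resolution, a single tile can be opened with Pillow. This is a minimal sketch; the filename below is a placeholder, not an actual file shipped with the dataset.

```python
from PIL import Image

# Placeholder path; substitute any image file from the dataset.
img = Image.open("images/example_tile.png")

print(img.size)   # expected: (1024, 1024)
print(img.mode)   # typically "RGB" for UAV imagery
```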
## 2. Data Collection & Context
The data collection centers on **Hurricane Francine** during the 2024 Atlantic hurricane season. On September 11, 2024, the Category 2 hurricane struck the southern Louisiana coast, cutting power to more than 163,000 residents and triggering widespread flooding. Its 3-meter storm surge and 304 mm of rainfall severely threatened coastal infrastructure. Because Louisiana is a vital trade hub located at the mouth of the Mississippi River, rapid and precise assessment of the region was critical.
This study utilized UAV imagery released by the **U.S. National Oceanic and Atmospheric Administration (NOAA)** in the aftermath of the disaster. The image data were collected between **September 16 and 17, 2024**, covering multiple severely affected areas in southern Louisiana.
## 3. Class Definitions
To construct a high-quality segmentation dataset, 12 target categories were defined (11 objects + 1 background). They are organized logically from natural environmental elements to man-made infrastructure, movable objects, and disaster-specific elements.
| ID | Class Name | Definition |
| :--- | :--- | :--- |
| **0** | **Background** | Regions that do not belong to any of the 11 defined classes, such as unidentifiable debris or clutter. |
| **1** | **Natural Water** | Pre-existing, permanent water bodies within the scene, such as rivers, lakes, and other natural reservoirs. |
| **2** | **Tree** | Trees of various forms as well as taller shrub vegetation. |
| **3** | **Road-Passable** | Road segments, including highways, streets, and bridges, where the road surface is clearly visible and not submerged by floodwater. |
| **4** | **Road-Flooded** | Road segments that are partially or entirely covered by floodwater. |
| **5** | **Building-Intact** | Buildings retaining their structural integrity or exhibiting only minor damage, with no obvious collapse or significant breaches in major load-bearing elements. |
| **6** | **Building-Damaged** | Buildings exhibiting evident structural failure, characterized by partial or total roof loss, wall collapse, or significant structural deformation. |
| **7** | **Waterfront Structure-Intact** | Facilities (e.g., piers, jetties, docks) that interface with water bodies and remain structurally sound and undamaged. |
| **8** | **Waterfront Structure-Damaged** | Waterfront facilities exhibiting structural failure, such as breakage, collapse, or severe degradation due to flood or water damage. |
| **9** | **Vehicle-Land** | Conveyances situated on terrestrial surfaces, including roads, parking areas, or dry ground. |
| **10** | **Vehicle-Water** | Conveyances located within natural water bodies. |
| **11** | **Floodwater** | Transient accumulation of water over land areas (e.g., roads, vegetated areas, building perimeters) resulting from hurricanes or heavy rainfall. |
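For convenience, the class table can be mirrored as a lookup in code. The snippet below is a minimal sketch: the ID-to-name mapping follows the table above, while the colour palette and the `colorize` helper are illustrative choices and are not prescribed by the dataset.

```python
import numpy as np

# Class IDs as defined in the table above.
CLASS_NAMES = {
    0: "Background",
    1: "Natural Water",
    2: "Tree",
    3: "Road-Passable",
    4: "Road-Flooded",
    5: "Building-Intact",
    6: "Building-Damaged",
    7: "Waterfront Structure-Intact",
    8: "Waterfront Structure-Damaged",
    9: "Vehicle-Land",
    10: "Vehicle-Water",
    11: "Floodwater",
}

# Illustrative RGB palette (one row per class ID); not an official colour scheme.
PALETTE = np.array([
    [0, 0, 0], [0, 64, 128], [0, 128, 0], [128, 128, 128],
    [128, 64, 0], [192, 192, 192], [128, 0, 0], [0, 128, 128],
    [128, 0, 128], [255, 128, 0], [0, 0, 255], [64, 128, 255],
], dtype=np.uint8)

def colorize(mask: np.ndarray) -> np.ndarray:
    """Map an (H, W) mask of class IDs to an (H, W, 3) RGB visualization."""
    return PALETTE[mask]
```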
## 4. Annotation & Quality Control
We designed a standardized annotation pipeline using the **LabelMe** software to ensure pixel-level labeling accuracy. A multi-round iterative quality control mechanism was implemented:
1. **Standardization**: Clear textual definitions and typical visual examples were provided to the annotation team.
2. **Iterative Review**:
* **Round 1**: Annotator self-inspection and preliminary correction.
* **Round 2**: Manager review focusing on misclassification, omission, and boundary precision. Samples with errors were returned for correction.
* **Round 3**: Final inspection to ensure all issues were addressed.
3. **Result**: This closed-loop process ensures sharp boundaries and accurate class assignments.
## 5. Directory Structure
The dataset follows a standard semantic segmentation directory structure. Images and masks are matched by filename.
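As a sketch of how that matching can be done in practice, the loop below pairs each image with the mask sharing its filename. The folder names `images/` and `masks/` and the `.png` extension are assumptions for illustration; adapt them to the actual directory listing.

```python
from pathlib import Path

# Assumed layout: "images" and "masks" are placeholder folder names,
# since the actual top-level directory names are not reproduced here.
root = Path("FWISD")
image_dir = root / "images"
mask_dir = root / "masks"

pairs = []
for image_path in sorted(image_dir.glob("*.png")):
    mask_path = mask_dir / image_path.name  # masks share the image filename
    if mask_path.exists():
        pairs.append((image_path, mask_path))

print(f"Found {len(pairs)} image/mask pairs")
```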