---
dataset_info:
- config_name: 128x128
  features:
  - name: image
    sequence:
      sequence:
        sequence:
          dtype: float32
  - name: label
    sequence:
      sequence:
        dtype: uint8
  - name: i
    dtype: int32
  - name: j
    dtype: int32
  - name: start_time
    dtype: string
  - name: end_time
    dtype: string
  - name: ind
    dtype: int32
  - name: size
    dtype: int32
  splits:
  - name: train
    num_bytes: 568043374
    num_examples: 529
  - name: test
    num_bytes: 54764106
    num_examples: 51
  download_size: 0
  dataset_size: 622807480
- config_name: 256x256
  features:
  - name: image
    sequence:
      sequence:
        sequence:
          dtype: float32
  - name: label
    sequence:
      sequence:
        dtype: uint8
  - name: i
    dtype: int32
  - name: j
    dtype: int32
  - name: start_time
    dtype: string
  - name: end_time
    dtype: string
  - name: ind
    dtype: int32
  - name: size
    dtype: int32
  splits:
  - name: train
    num_bytes: 5484000000 # Estimated
    num_examples: 1713
  - name: test
    num_bytes: 587000000 # Estimated
    num_examples: 183
  download_size: 0
  dataset_size: 6071000000 # Estimated
task_categories:
- image-segmentation
tags:
- satellite-imagery
- goes-16
- abi
- multi-spectral
- remote-sensing
- weather
- earth-observation
size_categories:
- 1K<n<10K
---
# GOES-16 ABI Satellite Image Dataset
This dataset contains multi-spectral GOES-16 ABI (Advanced Baseline Imager) satellite images with corresponding labels for semantic segmentation tasks.
## Dataset Description
The dataset provides training and test splits at two resolutions (128x128 and 256x256). Each image has 16 spectral channels from the GOES-16 ABI instrument. The data is provided by NOAA/NESDIS.
### Dataset Structure
The dataset is organized into the following configurations:
- **128x128**: Images at 128x128 pixel resolution
  - Train: 529 examples (~568 MB)
  - Test: 51 examples (~55 MB)
  - Total: 580 examples (~623 MB)
- **256x256**: Images at 256x256 pixel resolution
  - Train: 1,713 examples (~5.5 GB estimated)
  - Test: 183 examples (~587 MB estimated)
  - Total: 1,896 examples (~6.1 GB estimated)
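The reported split sizes line up with the raw array payload. As a back-of-the-envelope check for the 128x128 train split (all numbers taken from this card):

```python
# Per-example payload for the 128x128 configuration
image_bytes = 16 * 128 * 128 * 4   # 16 float32 channels, 4 bytes each
label_bytes = 128 * 128 * 1        # uint8 mask, 1 byte per pixel
per_example = image_bytes + label_bytes

estimated = 529 * per_example      # 529 train examples
reported = 568_043_374             # num_bytes reported for the train split

print(f"estimated: {estimated:,}  reported: {reported:,}")
```

The remaining ~1% gap is plausibly the small metadata fields (coordinates, timestamps) plus storage overhead.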
### Data Fields
Each example in the dataset contains:
- `image`: Multi-spectral satellite image as a 3D array with shape [16, height, width]
  - 16 spectral channels from the GOES-16 ABI instrument
  - Values are float32, typically in the range [-3, 3]
  - Height and width are 128 or 256 depending on the configuration
- `label`: Corresponding label/mask as a 2D array with shape [height, width]
  - Values are uint8, typically binary (0 or 1)
- `i`: Spatial coordinate i (int32)
- `j`: Spatial coordinate j (int32)
- `start_time`: Start time of the satellite observation (string)
- `end_time`: End time of the satellite observation (string)
- `ind`: Index within the original data array (int32)
- `size`: Resolution size, 128 or 256 (int32)
### Data Source
The satellite data originates from:
- **Instrument**: GOES-16 Advanced Baseline Imager (ABI)
- **Provider**: NOAA (National Oceanic and Atmospheric Administration)
- **Data Center**: NESDIS (National Environmental Satellite, Data, and Information Service)
## Usage
### Basic Usage
```python
from datasets import load_dataset
import numpy as np
# Load 128x128 resolution data
dataset = load_dataset("Silicon23/ioai2025-athome-satellite-images", name="128x128")
# Access a sample
sample = dataset["train"][0]
# Convert to numpy arrays for processing
image = np.array(sample["image"]) # Shape: (16, 128, 128)
label = np.array(sample["label"]) # Shape: (128, 128)
print(f"Image shape: {image.shape}")
print(f"Label shape: {label.shape}")
print(f"Image data type: {image.dtype}")
print(f"Label data type: {label.dtype}")
print(f"Image value range: [{image.min():.3f}, {image.max():.3f}]")
print(f"Label value range: [{label.min()}, {label.max()}]")
```
### Accessing Metadata
```python
# Get observation metadata
print(f"Spatial coordinates: i={sample['i']}, j={sample['j']}")
print(f"Observation time: {sample['start_time']} to {sample['end_time']}")
print(f"Resolution: {sample['size']}x{sample['size']}")
print(f"Array index: {sample['ind']}")
```
### Working with Different Resolutions
```python
# Load both resolutions of the dataset
dataset_128 = load_dataset("Silicon23/ioai2025-athome-satellite-images", name="128x128")
dataset_256 = load_dataset("Silicon23/ioai2025-athome-satellite-images", name="256x256")
# Compare samples
sample_128 = dataset_128["train"][0]
sample_256 = dataset_256["train"][0]
image_128 = np.array(sample_128["image"]) # Shape: (16, 128, 128)
image_256 = np.array(sample_256["image"]) # Shape: (16, 256, 256)
print(f"128x128 image shape: {image_128.shape}")
print(f"256x256 image shape: {image_256.shape}")
```
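When you do want to put a 256x256 image on the 128x128 grid (for example to compare model outputs across configurations), a simple 2x2 mean pooling is a reasonable first pass for the float32 channels; note this assumes the two patches cover the same footprint, which the card does not state. Sketch:

```python
import numpy as np

def avg_pool_2x(image):
    """Downsample a (channels, H, W) array by 2x2 mean pooling."""
    c, h, w = image.shape
    assert h % 2 == 0 and w % 2 == 0, "H and W must be even"
    return image.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
```

For the binary `label` masks, nearest-neighbor striding (`label[::2, ::2]`) keeps the values binary.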
### Data Processing and Visualization
```python
import matplotlib.pyplot as plt
# Load a sample
sample = dataset["train"][0]
image = np.array(sample["image"]) # Shape: (16, 128, 128)
label = np.array(sample["label"]) # Shape: (128, 128)
# Visualize a specific channel (e.g., channel 0)
plt.figure(figsize=(12, 4))
plt.subplot(1, 3, 1)
plt.imshow(image[0], cmap='viridis')
plt.title('Channel 0 (Raw)')
plt.colorbar()
plt.subplot(1, 3, 2)
plt.imshow(label, cmap='gray')
plt.title('Label/Mask')
plt.colorbar()
plt.subplot(1, 3, 3)
# Build a false-color composite from three bands
# (ABI has no native green band, so this channel choice is illustrative only;
#  adjust channels for your specific needs)
rgb_composite = np.stack([image[2], image[1], image[0]], axis=-1)
rgb_composite = (rgb_composite - rgb_composite.min()) / (rgb_composite.max() - rgb_composite.min())
plt.imshow(rgb_composite)
plt.title('RGB Composite')
plt.tight_layout()
plt.show()
```
### Training Example
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
import torch
import numpy as np
# Load dataset
dataset = load_dataset("Silicon23/ioai2025-athome-satellite-images", name="128x128")
# Convert to PyTorch tensors
def collate_fn(batch):
    # Stack per-example nested lists into (B, 16, H, W) and (B, H, W) tensors
    images = torch.stack([torch.from_numpy(np.array(item["image"], dtype=np.float32)) for item in batch])
    labels = torch.stack([torch.from_numpy(np.array(item["label"], dtype=np.uint8)) for item in batch])
    return {"image": images, "label": labels}
# Create data loaders
train_loader = DataLoader(
    dataset["train"],
    batch_size=8,
    shuffle=True,
    collate_fn=collate_fn,
)
test_loader = DataLoader(
    dataset["test"],
    batch_size=8,
    shuffle=False,
    collate_fn=collate_fn,
)
# Example training loop structure
for batch in train_loader:
    images = batch["image"]  # Shape: (batch_size, 16, 128, 128)
    labels = batch["label"]  # Shape: (batch_size, 128, 128)
    # Your training code here
    # model_output = model(images)
    # loss = criterion(model_output, labels)
    break
```
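Binary segmentation masks of this kind are often heavily imbalanced (far more background than foreground pixels). If that holds for your labels, a common remedy is to weight the loss by the negative-to-positive pixel ratio, e.g. as `pos_weight` for `torch.nn.BCEWithLogitsLoss`. A hypothetical helper, computed over the training masks:

```python
import numpy as np

def pos_weight_from_masks(masks):
    """Negative-to-positive pixel ratio over a stack of binary masks."""
    masks = np.asarray(masks, dtype=np.float64)
    pos_fraction = masks.mean()
    return (1.0 - pos_fraction) / pos_fraction
```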
## Dataset Statistics
### 128x128 Configuration
- **Total examples**: 580 (529 train, 51 test)
- **Dataset size**: 623 MB
- **Image dimensions**: 16 channels × 128 × 128 pixels
- **Data types**: float32 (images), uint8 (labels)
### 256x256 Configuration
- **Total examples**: 1,896 (1,713 train, 183 test)
- **Dataset size**: ~6.1 GB (estimated)
- **Image dimensions**: 16 channels × 256 × 256 pixels
- **Data types**: float32 (images), uint8 (labels)
## Applications
This dataset can be used for:
- Satellite image semantic segmentation
- Weather pattern recognition and classification
- Multi-spectral image processing
- Earth observation studies
- Remote sensing applications
- Computer vision research on satellite imagery
- Time series analysis of atmospheric conditions
- Cloud detection and classification
- Environmental monitoring
## Data Format
The dataset is automatically downloaded and processed when loaded through the HuggingFace `datasets` library. The underlying data is stored in NPZ format with corresponding metadata.