# Satellite Land Use Classification — Error Analysis Project
## Background
Researchers use satellite images to automatically detect land use changes in West Africa — specifically, they want to find areas of mining, oil palm plantations, and rubber plantations from space.
A machine learning model looks at satellite images taken over several months and predicts what type of land each pixel belongs to. Your job is to analyze how well the model is doing and build a tool to explore its mistakes.
## The Data
Inside `data/<model_name>/`, there is one `.npz` file per satellite tile. Each file contains:

| Key | Shape | Description |
|---|---|---|
| `pred` | (224, 224) | Model's predicted class for each pixel |
| `gt` | (224, 224) | Ground truth (correct answer) for each pixel |
| `rgb` | (224, 224, 3) | RGB satellite image (uint8, 0–255) |
| `spectral` | (6, 7, 224, 224) | Spectral bands × months (float32) |
| `clarity` | (7, 224, 224) | Cloud mask per month (0 = clear, 1 = cloudy) |
### Classes
| Value | Class |
|---|---|
| 0 | Everything else (forest, water, buildings, etc.) |
| 1 | Oil palm plantation |
| 2 | Rubber plantation |
| 3 | Mining |
| -1 | Ignore (no label available — skip these pixels) |
### Spectral bands

The `rgb` image is what the satellite scene looks like to our eyes. But the satellite actually captures 6 spectral bands — including wavelengths of light that are invisible to humans:
| Index | Band | Wavelength | What it captures |
|---|---|---|---|
| 0 | Blue | 0.45–0.51 µm | Water, atmosphere |
| 1 | Green | 0.53–0.59 µm | Vegetation vigor |
| 2 | Red | 0.64–0.67 µm | Chlorophyll absorption |
| 3 | NIR | 0.85–0.88 µm | Healthy vegetation reflects strongly here |
| 4 | SWIR1 | 1.57–1.65 µm | Soil and moisture content |
| 5 | SWIR2 | 2.11–2.29 µm | Minerals, bare soil |
The `spectral` array has shape (6, 7, 224, 224) — 6 bands, each captured across 7 months (Jan, Feb, Mar, Apr, May, Nov, Dec). These are raw surface reflectance values (not normalized). The `rgb` image is just bands 2, 1, 0 (Red, Green, Blue) combined for easy viewing.
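A quick sketch of how that layout indexes, using a small random stand-in array rather than a real tile:

```python
import numpy as np

# Toy stand-in for the real array: (bands, months, H, W), as documented above.
spectral = np.random.rand(6, 7, 4, 4).astype(np.float32)

nir_jan = spectral[3, 0]       # NIR band (index 3), first month -> (H, W)
swir2_all = spectral[5]        # SWIR2 across all 7 months -> (7, H, W)

# Rebuild an RGB-style composite for one month from bands 2, 1, 0
# (Red, Green, Blue), the same band order the provided `rgb` image uses.
composite = np.stack([spectral[2, 0], spectral[1, 0], spectral[0, 0]], axis=-1)
print(composite.shape)         # (4, 4, 3)
```

Note the real composite would also need rescaling to uint8 for display, since reflectance values are not 0–255.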
### Loading the data

```python
import numpy as np
import os

data_dir = "data/<model_name>"  # pick one of the model folders

# Load one tile
tile = np.load(os.path.join(data_dir, "<some_tile>.npz"))
pred = tile["pred"]          # (224, 224) — what the model predicted
gt = tile["gt"]              # (224, 224) — the correct answer
rgb = tile["rgb"]            # (224, 224, 3) — satellite photo
spectral = tile["spectral"]  # (6, 7, 224, 224) — spectral bands × months
clarity = tile["clarity"]    # (7, 224, 224) — cloud mask per month

# Load all tiles
tiles = []
for f in sorted(os.listdir(data_dir)):
    if f.endswith(".npz"):
        tiles.append(np.load(os.path.join(data_dir, f)))
```
## Your Project
Build an error analysis tool — something that helps a researcher understand where and why the model makes mistakes. Here are the main pieces:
### Part 1: Confusion Matrix
A confusion matrix shows how often the model confuses one class for another.
- For every pixel where `gt >= 0` (skip the -1 pixels), compare `pred` vs `gt`.
- Build a 4×4 matrix: rows = true class, columns = predicted class.
- Each cell counts how many pixels had that (true, predicted) combination.
- Visualize it as a heatmap with labels.
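One way to compute the counts is a minimal NumPy sketch like the following (the `confusion_matrix` helper is hypothetical, not something shipped with the data):

```python
import numpy as np

def confusion_matrix(gt, pred, n_classes=4):
    """Count (true, predicted) pairs over labeled pixels only."""
    mask = gt >= 0                              # drop the ignore pixels (-1)
    t = gt[mask].astype(np.int64)
    p = pred[mask].astype(np.int64)
    # Encode each (true, pred) pair as one flat index, then count occurrences.
    counts = np.bincount(t * n_classes + p, minlength=n_classes ** 2)
    return counts.reshape(n_classes, n_classes)

# Tiny example: two correct class-1 pixels, one class-3 pixel called 0,
# and one ignore pixel that must be skipped.
gt = np.array([1, 1, 3, -1])
pred = np.array([1, 1, 0, 2])
cm = confusion_matrix(gt, pred)
print(cm[1, 1], cm[3, 0])   # 2 1
```

You can then render `cm` as a heatmap with `matplotlib`'s `imshow`, adding the count in each cell as a text annotation.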
Questions to answer:
- Which class does the model get right most often?
- Which classes get confused with each other?
- Is the model biased toward predicting "everything else" (class 0)?
### Part 2: Interactive Error Viewer
Build a visual tool (a webpage, a Streamlit app, or a Jupyter notebook) that lets you browse tiles and see:
- The satellite image (rgb)
- Ground truth — color-coded by class
- Model prediction — color-coded by class
- Error map — highlight pixels where pred ≠ gt (e.g., correct = transparent/gray, wrong = red)
Suggested colors:
- Class 0 (everything else): gray
- Class 1 (oil palm): green
- Class 2 (rubber): yellow
- Class 3 (mining): red
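The color-coded views can be produced by indexing a palette with the label array. A sketch under the suggested colors above (the exact RGB values and helper names are illustrative choices, not requirements):

```python
import numpy as np

# Hypothetical palette following the suggested colors (RGB, 0-255).
PALETTE = np.array([
    [128, 128, 128],  # 0: everything else (gray)
    [  0, 160,   0],  # 1: oil palm (green)
    [230, 200,   0],  # 2: rubber (yellow)
    [220,  40,  40],  # 3: mining (red)
    [  0,   0,   0],  # -1: ignore (black); -1 indexes the last row
], dtype=np.uint8)

def colorize(labels):
    """Map a (H, W) label array to an (H, W, 3) color image."""
    return PALETTE[labels]            # -1 wraps around to the black row

def error_map(gt, pred):
    """Red where the model is wrong, gray elsewhere, ignore pixels black."""
    out = np.full(gt.shape + (3,), 180, dtype=np.uint8)   # light gray base
    out[(gt >= 0) & (pred != gt)] = [255, 0, 0]           # wrong -> red
    out[gt == -1] = 0                                     # ignore -> black
    return out
```

Display all four panels side by side (e.g., one `matplotlib` subplot row per tile, or `st.image` columns in Streamlit).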
Nice-to-haves:
- Let the user click through tiles with Previous/Next buttons
- Show per-tile accuracy or Dice score
- Filter to only show tiles where the model does poorly
- Sort tiles by error rate
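For the filtering and sorting nice-to-haves, a per-tile error rate is enough. A sketch, assuming each tile exposes `gt` and `pred` arrays as in the loading example (the dicts below are toy stand-ins):

```python
import numpy as np

def tile_error_rate(tile):
    """Fraction of labeled pixels the model got wrong in one tile."""
    gt, pred = tile["gt"], tile["pred"]
    valid = gt >= 0
    return float((pred[valid] != gt[valid]).mean())

# Sort hypothetical in-memory tiles from worst to best.
tiles = [
    {"gt": np.array([[0, 1]]), "pred": np.array([[0, 0]])},   # 50% wrong
    {"gt": np.array([[3, 3]]), "pred": np.array([[3, 3]])},   # 0% wrong
]
ranked = sorted(tiles, key=tile_error_rate, reverse=True)
print([tile_error_rate(t) for t in ranked])   # [0.5, 0.0]
```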
### Part 3: Do Clouds Affect All Classes Equally?
Your confusion matrix from Part 1 shows the model's overall mistakes. But are those mistakes evenly spread, or does the model struggle more under certain conditions? If clouds affect some classes very differently than others, that tells us something about what the model actually learned versus what shortcuts it might be relying on.
Clouds block the satellite's view. Each tile includes a cloud mask across 7 months (Jan, Feb, Mar, Apr, May, Nov, Dec) — a binary value per timestep (0 = clear, 1 = cloudy).
Step 1: Come up with a way to summarize each pixel's cloud activity across the 7 months into a single number. Some ideas: fraction of cloudy months, number of consecutive cloudy months, whether a specific month is cloudy — or anything else you think captures the pattern. Use this number to split pixels into groups (e.g., "low cloud" vs. "high cloud").
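As one concrete choice for Step 1, the fraction of cloudy months per pixel is simple and easy to threshold. A sketch (the threshold of 0.5 is an arbitrary starting point, not a prescribed value):

```python
import numpy as np

def cloud_fraction(clarity):
    """Fraction of cloudy months per pixel: (7, H, W) -> (H, W) in [0, 1]."""
    return clarity.mean(axis=0)

def cloud_groups(clarity, threshold=0.5):
    """Boolean masks splitting pixels into low- vs high-cloud groups."""
    frac = cloud_fraction(clarity)
    return frac < threshold, frac >= threshold

# Toy example: a 2x2 tile over 7 months; pixel (0, 0) is cloudy 6 of 7 months.
clarity = np.zeros((7, 2, 2))
clarity[:6, 0, 0] = 1
low, high = cloud_groups(clarity)
```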
Step 2: Build a separate confusion matrix for each cloud group, side by side. Compute per-class accuracy (or Dice) for each group.
Step 3: Where do you see the biggest differences between groups? Do all classes change in the same way, or do some behave very differently? Try different summary functions from Step 1 and see if you can find one that makes the divergence between classes stronger.
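To compare the groups in Steps 2 and 3, per-class accuracy restricted to one group's pixel mask can be computed like this (a sketch; `per_class_accuracy` is a hypothetical helper, and the mask would come from whatever Step 1 grouping you chose):

```python
import numpy as np

def per_class_accuracy(gt, pred, group_mask, n_classes=4):
    """Per-class accuracy within one group; gt == -1 never matches a class."""
    accs = []
    for c in range(n_classes):
        sel = group_mask & (gt == c)      # labeled pixels of class c in group
        accs.append((pred[sel] == c).mean() if sel.any() else np.nan)
    return np.array(accs)

# Toy example over a whole tile (group mask = everything).
gt = np.array([[0, 1], [1, 3]])
pred = np.array([[0, 1], [0, 3]])
acc = per_class_accuracy(gt, pred, np.ones_like(gt, dtype=bool))
print(acc)   # class 1 is half right; class 2 has no pixels (NaN)
```

Running this once with the low-cloud mask and once with the high-cloud mask gives the side-by-side comparison Step 3 asks about.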
## Stretch Goals (if you finish early)
- Per-class error maps: separate error maps for each class (e.g., "where did the model miss mining?")
- Boundary analysis: are most errors at the edges of objects or in the middle?
- Model comparison: if `data/` has multiple model folders, compare them side by side. Which model is better at which class?
- Summary statistics: bar charts of per-class accuracy, per-class Dice score, etc.
## Dice Score (optional reading)
Dice score is the main metric used in this project. For a single class:
Dice = 2 * TP / (2 * TP + FP + FN)
Where:
- TP (true positive) = pixels correctly predicted as this class
- FP (false positive) = pixels wrongly predicted as this class
- FN (false negative) = pixels of this class that the model missed
Dice = 1.0 is perfect, Dice = 0.0 is completely wrong. Compute it per class, then average.
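The formula above translates directly into NumPy. A minimal sketch, again skipping the ignore pixels before counting:

```python
import numpy as np

def dice_per_class(gt, pred, n_classes=4):
    """Dice = 2*TP / (2*TP + FP + FN) per class, over labeled pixels only."""
    valid = gt >= 0                       # drop the ignore pixels (-1)
    g, p = gt[valid], pred[valid]
    scores = []
    for c in range(n_classes):
        tp = np.sum((g == c) & (p == c))
        fp = np.sum((g != c) & (p == c))
        fn = np.sum((g == c) & (p != c))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else np.nan)
    return np.array(scores)

# Toy example: one class-0 pixel mislabeled as 1, plus one ignore pixel.
gt = np.array([0, 0, 1, 1, -1])
pred = np.array([0, 1, 1, 1, 0])
d = dice_per_class(gt, pred)   # class 0: 2/3, class 1: 0.8, rest NaN
```

For the averaged score, `np.nanmean(d)` skips classes that never appear in the tile.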
## Tips
- Use a coding assistant (like Claude) to help you write code — that's encouraged!
- Start small: load one tile, display it, then build up.
- `matplotlib` is great for quick plots. `Streamlit` or plain HTML/JS are great for interactive tools.
- Always filter out pixels where `gt == -1` before computing any metrics.