mqraitem committed on
Commit a6a5b22 · verified · 1 Parent(s): a15211d

Upload README.md with huggingface_hub

Files changed (1): README.md (+147 −0, ADDED)
# Satellite Land Use Classification — Error Analysis Project

## Background

Researchers use satellite images to automatically detect **land use changes** in West Africa — specifically, they want to find areas of **mining**, **oil palm plantations**, and **rubber plantations** from space.

A machine learning model looks at satellite images taken over several months and predicts what type of land each pixel belongs to. Your job is to **analyze how well the model is doing** and **build a tool to explore its mistakes**.

## The Data

Inside `data/<model_name>/`, there is one `.npz` file per satellite tile. Each file contains:

| Key | Shape | Description |
|-----|-------|-------------|
| `pred` | (224, 224) | Model's predicted class for each pixel |
| `gt` | (224, 224) | Ground truth (correct answer) for each pixel |
| `rgb` | (224, 224, 3) | RGB satellite image (uint8, 0–255) |
| `spectral` | (6, 7, 224, 224) | Spectral bands × months (float32, raw surface reflectance) |
| `clarity` | (7, 224, 224) | Cloud mask per month (0 = clear, 1 = cloudy) |

### Classes

| Value | Class |
|-------|-------|
| 0 | Everything else (forest, water, buildings, etc.) |
| 1 | Oil palm plantation |
| 2 | Rubber plantation |
| 3 | Mining |
| -1 | Ignore (no label available — skip these pixels) |

### Spectral bands

The `rgb` image is what the satellite scene looks like to our eyes. But the satellite actually captures **6 spectral bands** — including wavelengths of light that are invisible to humans:

| Index | Band | Wavelength | What it captures |
|-------|------|------------|------------------|
| 0 | Blue | 0.45–0.51 µm | Water, atmosphere |
| 1 | Green | 0.53–0.59 µm | Vegetation vigor |
| 2 | Red | 0.64–0.67 µm | Chlorophyll absorption |
| 3 | NIR | 0.85–0.88 µm | Healthy vegetation reflects strongly here |
| 4 | SWIR1 | 1.57–1.65 µm | Soil and moisture content |
| 5 | SWIR2 | 2.11–2.29 µm | Minerals, bare soil |

The `spectral` array has shape `(6, 7, 224, 224)` — 6 bands, each captured across 7 months (Jan, Feb, Mar, Apr, May, Nov, Dec). These are raw surface reflectance values (not normalized). The `rgb` image is just bands 2, 1, 0 (Red, Green, Blue) combined for easy viewing.

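To keep the axis order straight (bands first, then months, then pixels), here is a quick indexing sketch. It uses random numbers in place of a real tile, so only the shapes are meaningful:

```python
import numpy as np

# Stand-in for a tile's spectral array: 6 bands x 7 months x 224 x 224 pixels
spectral = np.random.rand(6, 7, 224, 224).astype(np.float32)

# Band 3 (NIR) in the first month (Jan): a single (224, 224) image
nir_jan = spectral[3, 0]

# The red band (index 2) across all 7 months for one pixel
red_series = spectral[2, :, 100, 100]  # shape (7,)
```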
### Loading the data

```python
import numpy as np
import os

data_dir = "data/<model_name>"  # pick one of the model folders

# Load one tile
tile = np.load(os.path.join(data_dir, "<some_tile>.npz"))
pred = tile["pred"]          # (224, 224) — what the model predicted
gt = tile["gt"]              # (224, 224) — the correct answer
rgb = tile["rgb"]            # (224, 224, 3) — satellite photo
spectral = tile["spectral"]  # (6, 7, 224, 224) — spectral bands × months
clarity = tile["clarity"]    # (7, 224, 224) — cloud mask per month

# Load all tiles
tiles = []
for f in sorted(os.listdir(data_dir)):
    if f.endswith(".npz"):
        tiles.append(np.load(os.path.join(data_dir, f)))
```

## Your Project

Build an **error analysis tool** — something that helps a researcher understand where and why the model makes mistakes. Here are the main pieces:

### Part 1: Confusion Matrix

A confusion matrix shows how often the model confuses one class for another.

- For every pixel where `gt >= 0` (skip -1 pixels), compare `pred` vs `gt`.
- Build a 4×4 matrix: rows = true class, columns = predicted class.
- Each cell counts how many pixels had that (true, predicted) combination.
- Visualize it as a heatmap with labels.

Questions to answer:

- Which class does the model get right most often?
- Which classes get confused with each other?
- Is the model biased toward predicting "everything else" (class 0)?

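The counting steps above can be sketched in a few lines with `np.bincount` (one possible implementation, not the required one; the function name and the 4-class default are just illustrative):

```python
import numpy as np

def confusion_matrix(gt, pred, n_classes=4):
    """Rows = true class, columns = predicted class. Skips gt == -1 pixels."""
    mask = gt >= 0                        # drop unlabeled pixels
    g = gt[mask].astype(np.int64)
    p = pred[mask].astype(np.int64)
    # Encode each (true, predicted) pair as one index, then count occurrences
    counts = np.bincount(g * n_classes + p, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)
```

For the heatmap, `plt.imshow(cm)` with `plt.colorbar()` and class-name tick labels is enough to start.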
### Part 2: Interactive Error Viewer

Build a visual tool (a webpage, a Streamlit app, or a Jupyter notebook) that lets you browse tiles and see:

1. **The satellite image** (rgb)
2. **Ground truth** — color-coded by class
3. **Model prediction** — color-coded by class
4. **Error map** — highlight pixels where pred ≠ gt (e.g., correct = transparent/gray, wrong = red)

Suggested colors:

- Class 0 (everything else): gray
- Class 1 (oil palm): green
- Class 2 (rubber): yellow
- Class 3 (mining): red

Nice-to-haves:

- Let the user click through tiles with Previous/Next buttons
- Show per-tile accuracy or Dice score
- Filter to only show tiles where the model does poorly
- Sort tiles by error rate

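Two small helpers in this direction, as a starting sketch (the exact RGB values in the palette are just one reading of the suggested colors, and unlabeled pixels are rendered black here):

```python
import numpy as np

# One possible palette for the suggested colors (RGB, 0-255)
PALETTE = np.array([
    [128, 128, 128],  # class 0: everything else -> gray
    [0, 160, 0],      # class 1: oil palm -> green
    [230, 200, 0],    # class 2: rubber -> yellow
    [220, 40, 40],    # class 3: mining -> red
], dtype=np.uint8)

def colorize(labels):
    """Map a (H, W) label array to a (H, W, 3) RGB image; -1 pixels stay black."""
    out = np.zeros(labels.shape + (3,), dtype=np.uint8)
    valid = labels >= 0
    out[valid] = PALETTE[labels[valid]]
    return out

def error_map(gt, pred):
    """Boolean (H, W) mask: True where the model is wrong on a labeled pixel."""
    return (gt >= 0) & (pred != gt)
```

`colorize(gt)` and `colorize(pred)` give panels 2 and 3; overlaying `error_map(gt, pred)` in red on the RGB image gives panel 4.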
### Part 3: Do Clouds Affect All Classes Equally?

Your confusion matrix from Part 1 shows the model's overall mistakes. But are those mistakes evenly spread, or does the model struggle more under certain conditions? If clouds affect some classes very differently than others, that tells us something about what the model actually learned versus what shortcuts it might be relying on.

Clouds block the satellite's view. Each tile includes a cloud mask across 7 months (Jan, Feb, Mar, Apr, May, Nov, Dec) — a binary value per timestep (0 = clear, 1 = cloudy).

**Step 1**: Come up with a way to summarize each pixel's cloud activity across the 7 months into a single number. Some ideas: fraction of cloudy months, number of consecutive cloudy months, whether a specific month is cloudy — or anything else you think captures the pattern. Use this number to split pixels into groups (e.g., "low cloud" vs. "high cloud").

**Step 2**: Build a separate confusion matrix for each cloud group, side by side. Compute per-class accuracy (or Dice) for each group.

**Step 3**: Where do you see the biggest differences between groups? Do all classes change in the same way, or do some behave very differently? Try different summary functions from Step 1 and see if you can find one that makes the divergence between classes stronger.

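Here is one way Steps 1 and 2 could fit together, using the simplest summary (fraction of cloudy months). The 0.5 threshold and the group names are arbitrary choices to be tuned in Step 3:

```python
import numpy as np

def split_by_cloud(gt, pred, clarity, threshold=0.5):
    """Split labeled pixels into low/high-cloud groups by fraction of cloudy months."""
    frac = clarity.mean(axis=0)   # (H, W): per-pixel cloudy fraction over the 7 months
    labeled = gt >= 0
    groups = {}
    for name, sel in [("low cloud", frac < threshold),
                      ("high cloud", frac >= threshold)]:
        mask = sel & labeled
        groups[name] = (gt[mask], pred[mask])  # flat arrays, ready for a confusion matrix
    return groups
```

Each group's `(gt, pred)` pair can then go straight into your Part 1 confusion-matrix code, giving the two matrices to compare side by side.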
### Stretch Goals (if you finish early)

- **Per-class error maps**: separate error maps for each class (e.g., "where did the model miss mining?")
- **Boundary analysis**: are most errors at the edges of objects or in the middle?
- **Model comparison**: if `data/` has multiple model folders, compare them side by side. Which model is better at which class?
- **Summary statistics**: bar charts of per-class accuracy, per-class Dice score, etc.

## Dice Score (optional reading)

Dice score is the main metric used in this project. For a single class:

```
Dice = 2 * TP / (2 * TP + FP + FN)
```

Where:

- **TP** (true positive) = pixels correctly predicted as this class
- **FP** (false positive) = pixels wrongly predicted as this class
- **FN** (false negative) = pixels of this class that the model missed

Dice = 1.0 is perfect, Dice = 0.0 is completely wrong. Compute it per class, then average.

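The formula translates almost directly into code. A minimal sketch (returning `nan` for a class that appears in neither `gt` nor `pred`, so it can be skipped when averaging):

```python
import numpy as np

def dice_per_class(gt, pred, n_classes=4):
    """Per-class Dice over labeled pixels (gt >= 0); nan if a class never appears."""
    mask = gt >= 0
    g, p = gt[mask], pred[mask]
    scores = []
    for c in range(n_classes):
        tp = np.sum((g == c) & (p == c))  # correctly predicted as class c
        fp = np.sum((g != c) & (p == c))  # wrongly predicted as class c
        fn = np.sum((g == c) & (p != c))  # class-c pixels the model missed
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else float("nan"))
    return scores
```

`np.nanmean(scores)` then gives the class-averaged Dice.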
## Tips

- Use a coding assistant (like Claude) to help you write code — that's encouraged!
- Start small: load one tile, display it, then build up.
- `matplotlib` is great for quick plots. `Streamlit` or plain HTML/JS are great for interactive tools.
- Always filter out pixels where `gt == -1` before computing any metrics.