---
license: apache-2.0
---

# 🌲 Trees Optimization Dataset
### Fractional‑Factorial Hyperparameter Search Results (64‑run, Resolution V DOE)

This dataset contains the **experimental results** from a **64‑run fractional factorial design (2⁸⁻², Resolution V)** used to optimize hyperparameters for a **SegFormer semantic segmentation model** trained to detect trees.

---

## 📂 Dataset Structure

### `results/fractional_factorial_partial.csv`
A cumulative CSV file updated after **each experiment**. It contains all runs completed so far, enabling:

- real‑time monitoring
- resuming an interrupted sweep
- incremental analysis

### `results/fractional_factorial_results.csv`
The final CSV produced once **all 64 runs** finish. For each run it includes:

- experiment ID
- fractional‑factorial coded levels (A–H)
- the decoded hyperparameters
- best‑epoch metrics for the **train**, **validation**, and **test** splits
- training time

Both CSV files share the same schema but differ in completeness.
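Because the partial CSV accumulates one row per completed run, a runner can consult it to decide which experiments still need to execute. A minimal sketch, assuming an `experiment_id` column (the actual column name in the CSVs may differ):

```python
import csv
from pathlib import Path

def completed_runs(path="results/fractional_factorial_partial.csv"):
    """Return the experiment IDs already recorded in the partial CSV.

    Assumes an `experiment_id` column; adjust to the actual schema.
    Returns an empty set when the file does not exist yet.
    """
    p = Path(path)
    if not p.exists():
        return set()
    with p.open(newline="") as f:
        return {row["experiment_id"] for row in csv.DictReader(f)}
```

A sweep script can then skip any run whose ID is in `completed_runs()` and append new rows as experiments finish.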

---

## 🧪 Experimental Design Overview

A **2⁸⁻² fractional factorial experiment** was used, with:

- **8 factors** (A–H)
- **64 total runs**
- **Resolution V**, so main effects and two‑factor interactions can be estimated without aliasing one another
- **Generators**:
  - `G = A × B × C × D`
  - `H = A × B × E × F`

Factors A–F are independent; G and H are derived from the generators.

This design efficiently explores a large hyperparameter space using only 64 experiments instead of the 256 a full factorial would require.
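The coded design above can be enumerated directly: vary A–F over {−1, +1} and compute G and H from the two generators. A quick sketch:

```python
from itertools import product

def fractional_factorial_runs():
    """Enumerate the 64 coded runs of the 2^(8-2) design.

    Factors A-F vary freely; G and H come from the stated generators
    G = A*B*C*D and H = A*B*E*F.
    """
    runs = []
    for a, b, c, d, e, f in product((-1, 1), repeat=6):
        runs.append({"A": a, "B": b, "C": c, "D": d, "E": e, "F": f,
                     "G": a * b * c * d, "H": a * b * e * f})
    return runs

runs = fractional_factorial_runs()
print(len(runs))  # 64
```

Multiplying coded levels works because each level is ±1, so a product of factors is itself ±1.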

---

## 🎛 Hyperparameter Coding

Each coded factor level in `{ -1, +1 }` maps to an actual hyperparameter value:

| Factor | Hyperparameter | −1 Level | +1 Level |
|--------|----------------|----------|----------|
| **A** | learning rate | `1e-5` | `1e-4` |
| **B** | weight decay | `0.0` | `0.1` |
| **C** | LR scheduler | `linear` | `cosine` |
| **D** | warmup ratio | `0.0` | `0.15` |
| **E** | gradient accumulation | `1` | `4` |
| **F** | epochs | `50` | `200` |
| **G** | train batch size | `2` | `4` |
| **H** | eval batch size | `2` | `4` |

The dataset includes both the coded values and the decoded hyperparameters.
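Decoding a coded run then amounts to a table lookup. A minimal sketch; the key names below follow Hugging Face `TrainingArguments` conventions and are assumptions, not necessarily the column names used in the CSVs:

```python
# Illustrative decoder mirroring the coding table; key names are assumed,
# not taken from the dataset schema.
FACTOR_TABLE = {
    "A": ("learning_rate", 1e-5, 1e-4),
    "B": ("weight_decay", 0.0, 0.1),
    "C": ("lr_scheduler_type", "linear", "cosine"),
    "D": ("warmup_ratio", 0.0, 0.15),
    "E": ("gradient_accumulation_steps", 1, 4),
    "F": ("num_train_epochs", 50, 200),
    "G": ("per_device_train_batch_size", 2, 4),
    "H": ("per_device_eval_batch_size", 2, 4),
}

def decode(coded):
    """Map coded levels (-1 = low, +1 = high) to concrete hyperparameters."""
    return {name: (high if coded[factor] == 1 else low)
            for factor, (name, low, high) in FACTOR_TABLE.items()}
```

For example, `decode({"A": 1, "B": -1, "C": 1, "D": -1, "E": 1, "F": -1, "G": 1, "H": -1})` yields a learning rate of `1e-4` with zero weight decay, a cosine schedule, and so on down the table.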

---

## 🤖 Model & Training Setup

All experiments fine‑tune:

**`nvidia/segformer-b0-finetuned-ade-512-512`**

Key details:

- Metrics include:
  - IoU
  - accuracy
  - tree‑class precision, recall, Dice
- Metrics are computed for the train, val, and test splits
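From per-pixel counts of true positives, false positives, and false negatives for the tree class, the reported metrics follow directly. A generic sketch of the standard formulas, not the exact evaluation code used to produce the CSVs:

```python
def per_class_metrics(tp, fp, fn):
    """IoU, precision, recall, and Dice for one class from pixel counts.

    Generic formulas for the metrics named above; guards against
    division by zero when a class is absent from both prediction
    and ground truth.
    """
    union = tp + fp + fn  # pixels in prediction or ground truth
    return {
        "iou": tp / union if union else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "dice": 2 * tp / (union + tp) if union + tp else 0.0,
    }
```

Note that Dice and IoU are monotonically related (`dice = 2 * iou / (1 + iou)`), so they rank runs identically; the dataset still reports both for convenience.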