# Dataset Description
## Overview
- The dataset is organized into three roots:
- `good_weld`: labeled non-defect reference runs (`750` runs in `43` configuration folders).
- `defect_data_weld`: labeled defect runs (`1580` runs in `80` configuration folders).
- `test_data`: anonymized evaluation set (`90` samples named `sample_0001` ... `sample_0090`).
- Training-available labeled pool (`good_weld` + `defect_data_weld`) contains `2330` runs.
- Each run/sample is multimodal and typically includes:
  - one sensor CSV, one FLAC audio file, one AVI video file, and an `images/` folder of JPEG frames.
## Weld Configuration Metadata
- Weld setup/context is encoded in labeled folder names (for example: joint type such as `butt` or `plane_plate`, material tags such as `Fe410` or `BSK46`, and date-like tokens).
- Run IDs follow a pattern like `04-03-23-0010-11`, where the final two-digit suffix (`11`) is the defect/quality code.
- For fair hackathon evaluation, `test_data` has anonymized folder names (`sample_XXXX`) and neutral filenames (`sensor.csv`, `weld.flac`, `weld.avi`) to reduce label leakage.
- Ground-truth linkage for evaluation is kept separately in `test_data_ground_truth.csv`.
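Since the defect code is encoded in the run ID suffix, a small hypothetical helper (names are illustrative, not part of the dataset tooling) can extract it for labeled runs:

```python
import re

# Matches run IDs like "04-03-23-0010-11"; the final two-digit
# suffix is the defect/quality code described above.
RUN_ID_RE = re.compile(r"^\d{2}-\d{2}-\d{2}-\d{4}-(\d{2})$")

def defect_code(run_id: str) -> str:
    """Return the two-digit defect code from a labeled run ID."""
    match = RUN_ID_RE.match(run_id)
    if match is None:
        raise ValueError(f"unrecognized run ID: {run_id!r}")
    return match.group(1)

print(defect_code("04-03-23-0010-11"))  # -> "11"
```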
## Labels and Defect Definitions
| Code | Label | Practical definition | Train pool count (`good_weld`+`defect_data_weld`) |
|---|---|---|---:|
| `00` | `good_weld` | Acceptable weld (no target defect) | 750 |
| `01` | `excessive_penetration` | Over-penetration through joint root | 479 |
| `02` | `burn_through` | Severe over-penetration causing burn-through/hole | 317 |
| `06` | `overlap` | Weld metal overlap without proper fusion at toe/root region | 155 |
| `07` | `lack_of_fusion` | Incomplete fusion between weld metal and base material | 320 |
| `08` | `excessive_convexity` | Excessively convex weld bead profile | 159 |
| `11` | `crater_cracks` | Cracks near weld crater/termination zone | 150 |
## Folder Structure
```text
good_weld/
<configuration_folder>/
<run_id>/
<run_id>.csv
<run_id>.flac
<run_id>.avi
images/*.jpg
defect_data_weld/
<defect_configuration_folder>/
<run_id>/
<run_id>.csv
<run_id>.flac
<run_id>.avi
images/*.jpg
test_data/
sample_0001/
sensor.csv
weld.flac
weld.avi
images/*.jpg
...
sample_0090/
```
- Evaluation helper files:
- `test_data_manifest.csv`: file paths for each anonymized sample.
- `test_data_ground_truth.csv`: mapping from `sample_id` to true label (for evaluator use).
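A labeled-pool manifest analogous to `test_data_manifest.csv` can be built by walking the layout above. This is a sketch assuming exactly the tree shown (one `<run_id>` folder per run, files named after the run ID); function and column names are illustrative:

```python
import csv
from pathlib import Path

def build_labeled_manifest(root: Path, out_csv: Path) -> None:
    """Write one row per labeled run under good_weld/ and defect_data_weld/.

    The label code is taken from the run ID's final two-digit suffix.
    """
    rows = []
    for top in ("good_weld", "defect_data_weld"):
        for config_dir in sorted((root / top).iterdir()):
            if not config_dir.is_dir():
                continue
            for run_dir in sorted(config_dir.iterdir()):
                if not run_dir.is_dir():
                    continue
                run_id = run_dir.name
                rows.append({
                    "run_id": run_id,
                    "config": config_dir.name,
                    "label_code": run_id.rsplit("-", 1)[-1],
                    "sensor_csv": str(run_dir / f"{run_id}.csv"),
                    "audio": str(run_dir / f"{run_id}.flac"),
                    "video": str(run_dir / f"{run_id}.avi"),
                })
    if not rows:
        raise ValueError(f"no runs found under {root}")
    with out_csv.open("w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
```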
## Sensor Features (Number of Features)
- Original labeled CSV schema has `10` columns:
- `Date`, `Time`, `Part No`, `Pressure`, `CO2 Weld Flow`, `Feed`, `Primary Weld Current`, `Wire Consumed`, `Secondary Weld Voltage`, `Remarks`.
- Core process/sensor channels used for modeling are typically `6` numeric features:
- `Pressure`, `CO2 Weld Flow`, `Feed`, `Primary Weld Current`, `Wire Consumed`, `Secondary Weld Voltage`.
- `test_data/sensor.csv` intentionally removes `Part No` to prevent ID leakage, so evaluation CSVs have `9` columns.
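Because `Part No` is absent from evaluation CSVs, sensor loading should select columns by name rather than by position. A minimal sketch, assuming pandas is available (the function name is illustrative):

```python
import pandas as pd

# The 6 core numeric process channels listed above. `Part No` may be
# missing (test_data), so never index columns positionally.
SENSOR_CHANNELS = [
    "Pressure", "CO2 Weld Flow", "Feed",
    "Primary Weld Current", "Wire Consumed", "Secondary Weld Voltage",
]

def load_sensor_features(path: str) -> pd.DataFrame:
    """Load one run's sensor CSV and keep only the modeling channels."""
    df = pd.read_csv(path)
    return df[SENSOR_CHANNELS].apply(pd.to_numeric, errors="coerce")
```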
## Usage Guidelines
1. Use `good_weld` and `defect_data_weld` only for training and validation.
2. Split by run/group (not random rows) to avoid leakage across near-duplicate temporal segments.
3. Treat each run/sample as multimodal (`sensor.csv` + audio + video + images), then build binary (`defect` vs `good`) and multi-class (`defect type`) models.
4. Run final inference only on anonymized `test_data` and export predictions keyed by `sample_id`.
5. Keep `test_data_ground_truth.csv` strictly for organizer-side scoring or final offline evaluation after predictions are frozen.
6. Report both performance and confidence quality (for example: F1, Macro-F1, calibration/ECE) and include failure-case examples.
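Guideline 2 (split by run/group) can be sketched with scikit-learn's `GroupShuffleSplit`, assuming each row of the training table carries its run ID; the function name is illustrative:

```python
from sklearn.model_selection import GroupShuffleSplit

def group_split(run_ids, labels, val_fraction=0.2, seed=42):
    """Split by run (group), not by row, so windows cut from the same
    weld never land on both sides of the train/validation split."""
    splitter = GroupShuffleSplit(
        n_splits=1, test_size=val_fraction, random_state=seed
    )
    train_idx, val_idx = next(splitter.split(run_ids, labels, groups=run_ids))
    return train_idx, val_idx
```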
## Phase 1: Data preparation + dataset + dashboard + overall analysis + feature engineering
**Goal:** produce a clean, reproducible dataset + an analysis dashboard that explains what’s in the data and what signals might matter.
**What they must do**
* **Ingest the dataset**
* Load video + audio + labels/metadata.
* Validate files: missing, corrupt, mismatched IDs, inconsistent durations.
* **Define the unit of prediction**
* Decide what one “sample” means (whole weld, fixed-length segment, windowed chunks, etc.).
* Ensure labels align to that unit.
* **Create a reproducible split**
* Train/validation/test split that avoids leakage (split by session/part/run if applicable).
* Save split files so results are repeatable.
* **Preprocess and standardize**
* Make audio/video consistent (sampling rate/FPS, resizing, normalization, trimming/padding policy).
* Handle variable length (padding, cropping, pooling, sliding windows).
* **Feature engineering (optional, but if used it must be documented)**
* Produce derived signals/features from audio/video/metadata (any representation is fine).
* Keep a clear mapping from raw inputs → engineered inputs.
* **Dashboard (must show)**
* Dataset overview: counts, durations, missing/corrupt stats.
* Label distributions: defect vs non-defect, defect-type counts.
* Representative examples: video preview + audio preview (waveform/spectrogram or equivalent).
* Basic data quality indicators: class imbalance, outliers, noise, sync issues (if relevant).
* Exportable reports: ability to save plots/tables or generate a summary.
**Phase 1 output package**
* `dataset/` or loader pipeline that can recreate the dataset
* split definition files
* dashboard app/notebook
* short “data card” summary (1 page) describing assumptions and preprocessing choices
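For the variable-length handling step above, one possible trim/pad policy (tail-trim, tail-zero-pad; center-cropping or sliding windows are equally valid alternatives) looks like this:

```python
import numpy as np

def pad_or_trim(signal: np.ndarray, target_len: int) -> np.ndarray:
    """Standardize a 1-D signal (e.g. decoded audio) to a fixed length:
    trim the tail if too long, zero-pad the tail if too short."""
    if len(signal) >= target_len:
        return signal[:target_len]
    out = np.zeros(target_len, dtype=signal.dtype)
    out[: len(signal)] = signal
    return out
```

Whatever policy is chosen, document it in the data card so validation and test preprocessing stay identical.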
---
## Phase 2: Defect detection (binary classification) with confidence
**Goal:** build a model that outputs **defect vs non-defect** plus a **confidence score** for each prediction.
**What they must do**
* **Train a binary classifier**
* Input: audio/video (and any engineered features) per sample.
* Output: probability/score for “defect”.
* **Produce confidence**
* Define what confidence means (typically a calibrated probability).
* Confidence must be reported per prediction.
* **Set a decision rule**
* Thresholding policy to convert score → defect/non-defect.
* Threshold must be fixed for test-time scoring (not adjusted after seeing test labels).
* **Evaluate on validation**
* Report core binary metrics (listed below).
* Show error breakdown (false positives/false negatives) and examples.
* **Create an inference pipeline**
* Script that takes the test split and writes predictions in the required format.
**Phase 2 output package**
* trained model checkpoint(s)
* inference script (one command run)
* `predictions_binary.csv` (or combined file) with:
* `sample_id`, `p_defect`, `pred_defect`, `confidence`
* evaluation report/plots in the dashboard
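The calibrated-probability and fixed-threshold requirements can be sketched as follows; logistic regression stands in for whatever feature-based model a team actually trains, and all names are illustrative:

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression

THRESHOLD = 0.5  # fix on validation data; never adjust after seeing test labels

def fit_calibrated(X_train, y_train):
    """Sigmoid (Platt) calibration so p_defect is usable as a confidence."""
    clf = CalibratedClassifierCV(
        LogisticRegression(max_iter=1000), method="sigmoid", cv=3
    )
    clf.fit(X_train, y_train)
    return clf

def predict_with_confidence(clf, X):
    """Return per-sample p_defect, the thresholded decision, and a
    confidence defined as the probability of the predicted class."""
    p_defect = clf.predict_proba(X)[:, 1]
    pred = (p_defect >= THRESHOLD).astype(int)
    confidence = np.where(pred == 1, p_defect, 1.0 - p_defect)
    return p_defect, pred, confidence
```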
---
## Phase 3: Defect type classification (multi-class)
**Goal:** if a weld is defective, predict **which defect type**, with confidence.
**What they must do**
* **Train a defect-type classifier**
* Input: same sample representation.
* Output: defect type probabilities (or scores).
* **Define handling of non-defect samples**
* Either:
* classify defect type **only when defect is predicted/known**, OR
* include “none” as a class.
* Whichever they choose, it must match the evaluation spec and be consistent.
* **Report confidence for type**
* Provide a confidence score for the chosen defect type (top-1 probability or calibrated).
* **Evaluate**
* Report multi-class metrics (listed below), especially per-class results due to imbalance.
* **Integrate with Phase 2**
* Final output should be coherent: non-defect → type is “none”; defect → type predicted.
**Phase 3 output package**
* model checkpoint(s)
* inference script producing:
* `pred_defect_type`, `p_type_*` (optional), `type_confidence`
* evaluation report (per-type performance)
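The Phase 2/Phase 3 integration rule (non-defect → `00`, defect → predicted type) reduces to a small, easily testable function; the name is illustrative:

```python
VALID_DEFECT_CODES = {"01", "02", "06", "07", "08", "11"}

def combine_predictions(pred_defect: int, type_code: str) -> str:
    """Coherent final label: runs predicted non-defect get code "00";
    runs predicted defective get the type classifier's code."""
    if not pred_defect:
        return "00"
    if type_code not in VALID_DEFECT_CODES:
        raise ValueError(f"invalid defect type code: {type_code!r}")
    return type_code
```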
---
# Evaluation criteria
## A) Model metrics (primary)
### Submission CSV (required)
Teams must submit one CSV file with this exact schema:
```csv
sample_id,pred_label_code,p_defect
sample_0001,11,0.94
sample_0002,00,0.08
...
sample_0090,06,0.81
```
Submission rules:
* Exactly `90` rows (one row per sample in `test_data_manifest.csv`)
* `sample_id` must match exactly (`sample_0001` ... `sample_0090`), with no duplicates
* `pred_label_code` must be one of: `00`, `01`, `02`, `06`, `07`, `08`, `11`
* `p_defect` must be numeric in `[0,1]`
Scoring interpretation:
* Binary prediction is derived as: `pred_defect = (pred_label_code != "00")`
* Type prediction is the submitted `pred_label_code`
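A pre-submission check of the rules above catches format errors before scoring. This sketch validates row count, `sample_id` coverage, label codes, and the `p_defect` range (function name is illustrative):

```python
import csv

VALID_CODES = {"00", "01", "02", "06", "07", "08", "11"}
EXPECTED_IDS = {f"sample_{i:04d}" for i in range(1, 91)}

def validate_submission(path: str) -> None:
    """Raise ValueError if the submission CSV violates the rules above."""
    with open(path, newline="") as fh:
        rows = list(csv.DictReader(fh))
    ids = [r["sample_id"] for r in rows]
    if sorted(ids) != sorted(EXPECTED_IDS):
        raise ValueError("need exactly sample_0001..sample_0090, no duplicates")
    for r in rows:
        if r["pred_label_code"] not in VALID_CODES:
            raise ValueError(f"bad pred_label_code: {r['pred_label_code']!r}")
        p = float(r["p_defect"])
        if not 0.0 <= p <= 1.0:
            raise ValueError(f"p_defect out of [0,1]: {p}")
```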
### 1) Defect vs non-defect (binary)
Use these as the core:
* **F1 (Defect as positive class)**
* **Precision / Recall (Defect)**
* **ROC-AUC**
* **PR-AUC**
* **Confusion matrix counts** (TP/FP/FN/TN)
### 2) Defect type (multi-class)
Use these:
* **Macro F1** (treats each defect type equally, good for imbalance)
* **Per-class Precision/Recall/F1**
* **Weighted F1** (secondary)
**Single final score:**
* `FinalScore = 0.6 * Binary_F1 + 0.4 * Type_MacroF1`
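With scikit-learn, the final score can be computed directly from the true and predicted label codes. One assumption here: the macro F1 is averaged over all submitted codes including `00`; if organizers restrict it to the six defect classes, adjust the `labels` argument accordingly.

```python
from sklearn.metrics import f1_score

def final_score(y_true_codes, y_pred_codes):
    """FinalScore = 0.6 * Binary_F1 + 0.4 * Type_MacroF1, deriving the
    binary label as (code != "00") per the scoring interpretation."""
    y_true_bin = [c != "00" for c in y_true_codes]
    y_pred_bin = [c != "00" for c in y_pred_codes]
    binary_f1 = f1_score(y_true_bin, y_pred_bin)  # defect = positive class
    macro_f1 = f1_score(y_true_codes, y_pred_codes, average="macro")
    return 0.6 * binary_f1 + 0.4 * macro_f1
```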
---
## B) Engineering & product quality (secondary)
### UI / Dashboard (clean and usable)
* Clear navigation, readable plots/tables, consistent labels
* Shows the required dataset stats + evaluation views
* Fast enough to use during a demo (no 5-minute refreshes)
### Clean code & reproducibility
* One-command run for training/inference
* Clear folder structure, requirements/environment file
* No hardcoded paths, no mystery constants without comments
* Reproducible splits + fixed random seeds (where relevant)
### Presentation & explanation
* Clear statement of:
* sample definition
* preprocessing assumptions
* model outputs and how confidence is computed
* strengths/weaknesses and common failure cases
* Demo includes: dashboard + a few correctly/incorrectly predicted examples