tobil committed on
Commit b96b8b4 · unverified · 1 Parent(s): f501fe8

v2: add new sources, extraction pipeline, better train/val split


- Added Sebring Q Tobi Lap 6 and Paul Ricard Alpine LMPh video sources
- Proper stratified 85/15 train/val split (was nearly empty val before)
- Augmented minority racing classes (gears 4, 7) to 200 samples
- 5,964 train / 1,003 val images (up from 5,321 / 318)
- Added scripts/extract.py and scripts/build_dataset.py for pipeline
- Added scripts/find_label_issues.py (cleanlab-based QA)
- Added examples/ with reference crops per gear per source
- Added labels/ CSVs for frame-range labeling
- Added AGENTS.md with domain context and crop coordinates
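The headline counts in the message above can be sanity-checked with a line of arithmetic (a quick sketch, not part of the commit; MNIST is split with a per-class minimum, so the overall fraction lands slightly under the 15% target):

```python
# Stated v2 counts from the commit message above.
train, val = 5964, 1003

val_frac = val / (train + val)
print(f"val fraction: {val_frac:.3f}")  # ~0.144 of the combined set
```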

.gitignore ADDED
@@ -0,0 +1,5 @@
+ raw/
+ composites/
+ __pycache__/
+ *.pyc
+ .venv/
.python-version ADDED
@@ -0,0 +1 @@
+ 3.12
AGENTS.md ADDED
@@ -0,0 +1,25 @@
+ # Agent Context: Racing Gear Digits Dataset
+ 
+ ## Domain knowledge
+ 
+ - A **lap** in circuit racing takes ~1.5–2.5 minutes depending on track length. During a single lap the driver uses every gear at least once (except reverse/neutral).
+ - So any ~2 minute window of onboard footage guarantees examples of **all forward gears** (typically 1–6 or 1–7 depending on car).
+ - **Gear changes are not uniform** — the car spends much more time in mid-range gears (2, 3, 4) than in 1st or top gear. Low gears appear briefly during braking zones; top gear only on long straights.
+ - The **gear indicator** is part of the telemetry overlay baked into the video. Its position, font, size, and background vary per video source. Each source needs its own crop coordinates.
+ - Gears **0, 8, 9** don't occur in normal racing — 0 is neutral (rare), 8+ don't exist on these cars. We include them via MNIST for classifier completeness.
+ 
+ ## Video sources
+ 
+ Each video source has different crop coordinates for the gear digit:
+ 
+ | Video | Resolution | Gear crop (x, y, w, h) | Gears | Notes |
+ |-------|-----------|------------------------|-------|-------|
+ | Sebring Q Tobi Lap 6 | 1920×1080 | (1440, 780, 90, 105) | 1–6 | White digit on dark semi-transparent overlay |
+ | Paul Ricard Alpine LMPh | 832×464 | (685, 320, 55, 55) | 1–7 | White digit on dark circle |
+ 
+ ## Working with the data
+ 
+ - Use `uv run` for all Python scripts (dependencies managed via pyproject.toml)
+ - Extraction scripts live in `scripts/`, reference examples live in `examples/`
+ - The dataset parquet files are in `data/` and tracked with Git LFS
+ - Source column values: `racing-original`, `sebring-tobi-lap6`, `paul-ricard-alpine` (real crops), `racing_aug` (augmented), `mnist` (handwritten)
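The (x, y, w, h) crop coordinates in the AGENTS.md table need converting before use, since PIL's `Image.crop` expects a (left, upper, right, lower) box. A minimal sketch of that conversion, using the values from the table:

```python
# Per-source gear-digit crop coordinates as (x, y, w, h), from the table above.
CROPS = {
    "sebring-tobi-lap6": (1440, 780, 90, 105),
    "paul-ricard-alpine": (685, 320, 55, 55),
}

def crop_box(source: str) -> tuple[int, int, int, int]:
    """Convert (x, y, w, h) to the (left, upper, right, lower) box PIL expects."""
    x, y, w, h = CROPS[source]
    return (x, y, x + w, y + h)

print(crop_box("sebring-tobi-lap6"))  # (1440, 780, 1530, 885)
```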
README.md CHANGED
@@ -29,33 +29,67 @@ Similar in spirit to MNIST but for a specific real-world application: reading th
  ## Dataset
 
- - **5,321 training** / **318 validation** images
+ - **5,964 training** / **1,003 validation** images
  - **32×32 grayscale** PNG images
  - **10 classes** (digits 0-9)
- - **Source column** distinguishes `racing` (real video crops) from `mnist` (handwritten digits)
+ - **Source column** distinguishes `racing-original`, `paul-ricard-alpine`, `sebring-tobi-lap6`, `racing_aug` (augmented), and `mnist`
+ - **Proper stratified split** — 15% of each racing source held out for validation
+ - **Augmented minority classes** — racing digits with <200 training samples augmented via random shifts, brightness/contrast jitter, and Gaussian noise
 
  ### Racing sources
 
  | Source | Gears | Style |
  |--------|-------|-------|
- | TDS Racing IMSA Sebring 2026 | 1-6 | White digit on gray RPM gauge face (inverted) |
- | Alpine LMPh Paul Ricard | 1-7 | White digit on dark circle |
+ | TDS Racing IMSA Sebring 2026 (original) | 1-6 | White digit on gray RPM gauge face |
+ | Sebring Q Tobi Lap 6 | 1-6 | White digit on dark semi-transparent overlay |
+ | Paul Ricard Alpine LMPh | 1-7 | White digit on dark circle |
  | MNIST supplement | 0-9 | Handwritten digits (generalization) |
 
- ### Distribution
- 
- | Digit | Train (racing) | Train (mnist) | Total |
- |-------|---------------|---------------|-------|
- | 0 | 0 | 200 | 200 |
- | 1 | 1,013 | 200 | 1,213 |
- | 2 | 1,576 | 200 | 1,776 |
- | 3 | 423 | 200 | 623 |
- | 4 | 42 | 200 | 242 |
- | 5 | 123 | 200 | 323 |
- | 6 | 122 | 200 | 322 |
- | 7 | 22 | 200 | 222 |
- | 8 | 0 | 200 | 200 |
- | 9 | 0 | 200 | 200 |
+ ### Train distribution
+ 
+ | Digit | Racing (orig) | Sebring | Paul Ricard | Aug | MNIST | Total |
+ |-------|--------------|---------|-------------|-----|-------|-------|
+ | 0 | 0 | 0 | 0 | 0 | 196 | 196 |
+ | 1 | 864 | 36 | 35 | 0 | 196 | 1,167 |
+ | 2 | 1,344 | 74 | 105 | 0 | 196 | 1,719 |
+ | 3 | 364 | 99 | 162 | 0 | 196 | 821 |
+ | 4 | 37 | 68 | 4 | 91 | 196 | 396 |
+ | 5 | 107 | 138 | 49 | 0 | 196 | 490 |
+ | 6 | 106 | 79 | 35 | 0 | 196 | 416 |
+ | 7 | 19 | 0 | 45 | 137 | 196 | 397 |
+ | 8 | 0 | 0 | 0 | 0 | 196 | 196 |
+ | 9 | 0 | 0 | 0 | 0 | 196 | 196 |
+ 
+ ## Adding new video sources
+ 
+ 1. **Extract** gear crops from a video:
+    ```bash
+    uv run python scripts/extract.py <video_path> <source_name> <x> <y> <w> <h>
+    ```
+    This creates `raw/<source>/unlabeled/` frames and a `composites/<source>/unlabeled.png` contact sheet.
+ 
+ 2. **Label** by reading the contact sheet and creating `labels/<source>.csv`:
+    ```csv
+    start,end,label
+    0,14,5
+    15,39,6
+    ```
+    Each row maps a frame range (inclusive, 0-indexed) to a gear digit.
+ 
+ 3. **Build** the dataset:
+    ```bash
+    uv run python scripts/build_dataset.py
+    ```
+    This reads all `raw/` sources and `labels/` CSVs, does stratified train/val splitting, augments minority classes, and writes the parquet files.
+ 
+ ## Augmentations (racing_aug)
+ 
+ For racing classes with fewer than 200 training samples, synthetic samples are generated:
+ 
+ - **Random translation** — up to ±3px shift in x/y
+ - **Brightness jitter** — 0.7–1.3×
+ - **Contrast jitter** — 0.8–1.2×
+ - **Gaussian noise** — σ=8, 30% probability
 
  ## Usage
 
@@ -64,14 +98,14 @@ from datasets import load_dataset
 
  ds = load_dataset("tobil/racing-gears")
 
- # Filter to racing images only
- racing = ds["train"].filter(lambda x: x["source"] == "racing")
+ # Filter to real racing images only
+ racing = ds["train"].filter(lambda x: x["source"] not in ("mnist", "racing_aug"))
 
  # Standard training loop
  for example in ds["train"]:
      image = example["image"]    # PIL Image, 32x32 grayscale
      label = example["label"]    # int 0-9
-     source = example["source"]  # "racing" or "mnist"
+     source = example["source"]  # source identifier
  ```
 
  ## Context
data/train-00000-of-00001.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6cdd3fec9dc3f3084be12e265e2a3f0348daae69d6a8df901fa72374e7b795d1
- size 2662017
+ oid sha256:d26003e895be7179eb7ba79bf52be3a9f61002aea06edff5cdbb1477fc126deb
+ size 3069846
data/validation-00000-of-00001.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5a332266d74a858d671499dedca9a3fc2144dbb690c4f66b941a9ad96e646192
- size 137289
+ oid sha256:fff1985132a5327726db322612af37ee21bd0de1913ef4f22ab44e093b12bb9e
+ size 508839
examples/paul-ricard-alpine/gear_1.png ADDED

Git LFS Details

  • SHA256: 1ab2d695f189cb8b5059b4d53b02a8775739208ca488912044f7cfdbf4aa4644
  • Pointer size: 129 Bytes
  • Size of remote file: 3.3 kB
examples/paul-ricard-alpine/gear_2.png ADDED

Git LFS Details

  • SHA256: 86f25aff245b5d0ef593c7850994ad375dc6a5a42b96ca92aec75ec863673a34
  • Pointer size: 129 Bytes
  • Size of remote file: 3.11 kB
examples/paul-ricard-alpine/gear_3.png ADDED

Git LFS Details

  • SHA256: 620c51eaa5f4d231143778e02d2f69917fb9109a3109c98042f8558af22b35cf
  • Pointer size: 129 Bytes
  • Size of remote file: 3.58 kB
examples/paul-ricard-alpine/gear_4.png ADDED

Git LFS Details

  • SHA256: 574a5a0e8f6ccc156faeb2741b13f2f62ff1e58c64ff59f0a0a3d531997a048a
  • Pointer size: 129 Bytes
  • Size of remote file: 3.48 kB
examples/paul-ricard-alpine/gear_5.png ADDED

Git LFS Details

  • SHA256: 176f21fa5b6cd085277441f8b1fa083e12cc1ef3310881047cca80df516e539a
  • Pointer size: 129 Bytes
  • Size of remote file: 3.03 kB
examples/paul-ricard-alpine/gear_6.png ADDED

Git LFS Details

  • SHA256: 171563b42d00679ee6fbf203978d7c1c23d7c5dbb71e20fd0bba8aa38075fa69
  • Pointer size: 129 Bytes
  • Size of remote file: 3.23 kB
examples/paul-ricard-alpine/gear_7.png ADDED

Git LFS Details

  • SHA256: db87404489e929c9dd9ed52beb0480a0e159725bc455b7c37697f6cb7d697d1b
  • Pointer size: 129 Bytes
  • Size of remote file: 2.84 kB
examples/sebring-tobi-lap6/gear_1.png ADDED

Git LFS Details

  • SHA256: 262801f1ce566558de20d52eae3bd71e52383d938bbc1383c10bb7db53daa945
  • Pointer size: 130 Bytes
  • Size of remote file: 11.3 kB
examples/sebring-tobi-lap6/gear_2.png ADDED

Git LFS Details

  • SHA256: c4204cf61412cde41d82ff6c9c5cf5a865c194a4911f2f1ebf93f6bc5bdeb3eb
  • Pointer size: 130 Bytes
  • Size of remote file: 11.2 kB
examples/sebring-tobi-lap6/gear_3.png ADDED

Git LFS Details

  • SHA256: 1407a02dec1d97fb5ef6f0feb81098bb007b5a8b9a572777809b94bd121f0395
  • Pointer size: 130 Bytes
  • Size of remote file: 12.4 kB
examples/sebring-tobi-lap6/gear_4.png ADDED

Git LFS Details

  • SHA256: d20d330205cb417aef6264453713f1992da5c961f03e6f922191d2be3b3bb882
  • Pointer size: 130 Bytes
  • Size of remote file: 11.3 kB
examples/sebring-tobi-lap6/gear_5.png ADDED

Git LFS Details

  • SHA256: ec5de85f69b51918ba05746ff8edfe5694d12e5ed451fdfb1a0eb16aeb35552e
  • Pointer size: 130 Bytes
  • Size of remote file: 11.4 kB
examples/sebring-tobi-lap6/gear_6.png ADDED

Git LFS Details

  • SHA256: 6148d30baa62752102d13faa969c2403a981d23688dd178be91b0246b35e3bf3
  • Pointer size: 130 Bytes
  • Size of remote file: 11.4 kB
labels/paul-ricard-alpine.csv ADDED
@@ -0,0 +1,58 @@
+ start,end,label
+ 0,2,3
+ 3,12,5
+ 13,39,6
+ 40,43,6
+ 44,47,6
+ 48,79,3
+ 80,84,3
+ 85,87,3
+ 88,97,5
+ 98,101,5
+ 102,104,4
+ 105,117,3
+ 118,127,3
+ 128,137,2
+ 138,139,2
+ 140,158,2
+ 159,166,2
+ 167,175,2
+ 176,181,2
+ 182,189,2
+ 190,199,3
+ 200,210,3
+ 211,220,5
+ 221,237,5
+ 238,242,5
+ 243,246,6
+ 247,258,7
+ 259,278,7
+ 279,296,7
+ 297,298,7
+ 299,301,6
+ 302,303,5
+ 304,305,4
+ 306,319,3
+ 320,334,3
+ 335,337,3
+ 338,345,1
+ 346,355,1
+ 356,359,1
+ 360,365,1
+ 366,367,1
+ 368,371,1
+ 372,374,2
+ 375,379,3
+ 380,399,3
+ 400,413,3
+ 414,419,2
+ 420,425,2
+ 426,437,2
+ 438,459,2
+ 460,465,2
+ 466,469,1
+ 470,471,1
+ 472,479,2
+ 480,499,3
+ 500,509,3
+ 510,513,3
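The frame-range format above expands to per-frame labels the same way `scripts/build_dataset.py` reads it. A self-contained sketch using the first three rows of this CSV:

```python
import csv
import io

# First three rows of labels/paul-ricard-alpine.csv (ranges are inclusive).
sample = "start,end,label\n0,2,3\n3,12,5\n13,39,6\n"

# Expand each (start, end, label) range into one entry per frame index.
frame_labels = {}
for row in csv.DictReader(io.StringIO(sample)):
    for i in range(int(row["start"]), int(row["end"]) + 1):
        frame_labels[i] = int(row["label"])

print(len(frame_labels))  # frames 0-39 covered, so 40 entries
```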
labels/sebring-tobi-lap6.csv ADDED
@@ -0,0 +1,71 @@
+ start,end,label
+ 0,17,5
+ 18,38,6
+ 39,39,5
+ 40,54,5
+ 55,59,5
+ 60,75,5
+ 76,79,5
+ 80,81,4
+ 82,83,3
+ 84,118,3
+ 119,123,3
+ 124,132,3
+ 133,139,4
+ 140,143,4
+ 144,147,5
+ 148,163,5
+ 164,175,6
+ 176,183,6
+ 184,187,5
+ 188,190,4
+ 191,193,3
+ 194,198,2
+ 199,207,2
+ 208,213,1
+ 214,237,1
+ 238,245,1
+ 246,252,2
+ 253,269,2
+ 270,279,2
+ 280,289,3
+ 290,299,3
+ 300,303,3
+ 304,308,4
+ 309,318,4
+ 319,324,4
+ 325,327,4
+ 328,332,4
+ 333,337,3
+ 338,341,2
+ 342,349,2
+ 350,354,2
+ 355,359,2
+ 360,369,4
+ 370,376,4
+ 377,381,4
+ 382,385,3
+ 386,393,2
+ 394,399,2
+ 400,406,2
+ 407,413,2
+ 414,419,3
+ 420,439,3
+ 440,445,3
+ 446,449,4
+ 450,458,4
+ 459,465,5
+ 466,479,5
+ 480,489,5
+ 490,497,5
+ 498,505,6
+ 506,519,6
+ 520,525,6
+ 526,531,6
+ 532,537,5
+ 538,543,5
+ 544,551,5
+ 552,559,5
+ 560,565,5
+ 566,575,6
+ 576,587,6
pyproject.toml ADDED
@@ -0,0 +1,16 @@
+ [project]
+ name = "2026-03-30-datasets-tobil"
+ version = "0.1.0"
+ description = "Racing gear digit dataset extraction and build pipeline"
+ readme = "README.md"
+ requires-python = ">=3.12"
+ dependencies = [
+     "cleanlab>=2.9.0",
+     "datasets>=4.8.4",
+     "pandas>=3.0.1",
+     "pillow>=12.1.1",
+     "pyarrow>=23.0.1",
+     "scikit-learn>=1.8.0",
+     "torch>=2.11.0",
+     "torchvision>=0.26.0",
+ ]
scripts/build_dataset.py ADDED
@@ -0,0 +1,254 @@
+ """Build the racing-gears dataset parquet files from raw/ images and labels.
+ 
+ Reads:
+     raw/<source>/<label>/*.png   - pre-sorted images (racing-original, mnist)
+     raw/<source>/unlabeled/*.png - extracted frames needing labels
+     labels/<source>.csv          - frame range labels (start,end,label)
+ 
+ Writes:
+     data/train-00000-of-00001.parquet
+     data/validation-00000-of-00001.parquet
+     composites/<source>/label_<N>.png - per-label composites for verification
+ 
+ Usage:
+     uv run python scripts/build_dataset.py [--val-frac 0.15] [--aug-target 200] [--no-mnist]
+ """
+ 
+ import argparse
+ import csv
+ import io
+ import os
+ import random
+ 
+ import numpy as np
+ import pandas as pd
+ from PIL import Image, ImageEnhance
+ 
+ 
+ random.seed(42)
+ np.random.seed(42)
+ 
+ TARGET_SIZE = (32, 32)
+ 
+ 
+ def img_to_bytes(img: Image.Image) -> bytes:
+     buf = io.BytesIO()
+     img.save(buf, format="PNG")
+     return buf.getvalue()
+ 
+ 
+ def load_labeled_dir(source_dir: str, source_name: str) -> list[dict]:
+     """Load images from raw/<source>/<label>/ directory structure."""
+     rows = []
+     for label_dir in sorted(os.listdir(source_dir)):
+         label_path = os.path.join(source_dir, label_dir)
+         if not os.path.isdir(label_path) or label_dir == "unlabeled":
+             continue
+         try:
+             label = int(label_dir)
+         except ValueError:
+             continue
+         for f in sorted(os.listdir(label_path)):
+             if not f.endswith(".png"):
+                 continue
+             img = Image.open(os.path.join(label_path, f)).convert("L").resize(TARGET_SIZE)
+             rows.append({
+                 "image": {"bytes": img_to_bytes(img), "path": None},
+                 "label": label,
+                 "source": source_name,
+             })
+     return rows
+ 
+ 
+ def load_from_labels_csv(source_name: str) -> list[dict]:
+     """Load unlabeled frames and apply labels from CSV."""
+     csv_path = f"labels/{source_name}.csv"
+     frames_dir = f"raw/{source_name}/unlabeled"
+ 
+     if not os.path.exists(csv_path) or not os.path.exists(frames_dir):
+         return []
+ 
+     # Read label ranges
+     ranges = []
+     with open(csv_path) as f:
+         reader = csv.DictReader(f)
+         for row in reader:
+             ranges.append((int(row["start"]), int(row["end"]), int(row["label"])))
+ 
+     # Map frame index to label (ranges are inclusive)
+     frame_labels = {}
+     for start, end, label in ranges:
+         for i in range(start, end + 1):
+             frame_labels[i] = label
+ 
+     # Load frames
+     frames = sorted(f for f in os.listdir(frames_dir) if f.endswith(".png"))
+     rows = []
+     skipped = 0
+     for idx, f in enumerate(frames):
+         if idx not in frame_labels:
+             skipped += 1
+             continue
+         img = Image.open(os.path.join(frames_dir, f)).convert("L").resize(TARGET_SIZE)
+         rows.append({
+             "image": {"bytes": img_to_bytes(img), "path": None},
+             "label": frame_labels[idx],
+             "source": source_name,
+         })
+ 
+     if skipped:
+         print(f"  Warning: {skipped} frames in {source_name} had no label (skipped)")
+     return rows
+ 
+ 
+ def augment(img: Image.Image) -> Image.Image:
+     """Random augmentation: shift, brightness, contrast, noise."""
+     dx, dy = random.randint(-3, 3), random.randint(-3, 3)
+     img = img.transform(img.size, Image.AFFINE, (1, 0, dx, 0, 1, dy), fillcolor=0)
+     img = ImageEnhance.Brightness(img).enhance(random.uniform(0.7, 1.3))
+     img = ImageEnhance.Contrast(img).enhance(random.uniform(0.8, 1.2))
+     if random.random() < 0.3:
+         arr = np.array(img, dtype=np.float32)
+         arr += np.random.normal(0, 8, arr.shape)
+         arr = np.clip(arr, 0, 255).astype(np.uint8)
+         img = Image.fromarray(arr, mode="L")
+     return img
+ 
+ 
+ def make_composite(images: list[Image.Image], output_path: str, cell: int = 36, max_cols: int = 50):
+     if not images:
+         return
+     cols = min(max_cols, len(images))
+     rows = (len(images) + cols - 1) // cols
+     sheet = Image.new("L", (cols * cell, rows * cell), 0)
+     for idx, img in enumerate(images):
+         r, c = idx // cols, idx % cols
+         sheet.paste(img.resize((cell, cell)), (c * cell, r * cell))
+     sheet.save(output_path)
+ 
+ 
+ def main():
+     parser = argparse.ArgumentParser(description="Build racing-gears dataset")
+     parser.add_argument("--val-frac", type=float, default=0.15, help="Validation fraction (default: 0.15)")
+     parser.add_argument("--aug-target", type=int, default=200, help="Min racing samples per class after augmentation (default: 200)")
+     parser.add_argument("--no-mnist", action="store_true", help="Exclude MNIST data")
+     args = parser.parse_args()
+ 
+     all_rows = []
+ 
+     # 1. Load all sources under raw/ (CSV-labeled frames and pre-sorted label dirs)
+     for source_dir in sorted(os.listdir("raw")):
+         source_path = os.path.join("raw", source_dir)
+         if not os.path.isdir(source_path):
+             continue
+ 
+         # Sources with a labels CSV (unlabeled frames)
+         csv_rows = load_from_labels_csv(source_dir)
+         if csv_rows:
+             print(f"Loaded {len(csv_rows)} labeled frames from {source_dir}")
+             all_rows.extend(csv_rows)
+ 
+         # Sources with pre-sorted label dirs (racing-original, mnist)
+         if source_dir == "mnist" and args.no_mnist:
+             print(f"Skipping {source_dir} (--no-mnist)")
+             continue
+         dir_rows = load_labeled_dir(source_path, source_dir)
+         if dir_rows:
+             print(f"Loaded {len(dir_rows)} pre-sorted images from {source_dir}")
+             all_rows.extend(dir_rows)
+ 
+     df = pd.DataFrame(all_rows)
+     print(f"\nTotal: {len(df)} images")
+     print(pd.crosstab(df["label"], df["source"]))
+ 
+     # 2. Separate racing vs mnist
+     racing_sources = [s for s in df["source"].unique() if s != "mnist"]
+     racing = df[df["source"].isin(racing_sources)].copy()
+     mnist = df[df["source"] == "mnist"].copy()
+ 
+     # 3. Stratified train/val split for racing
+     racing_train_parts, racing_val_parts = [], []
+     for label in sorted(racing["label"].unique()):
+         group = racing[racing["label"] == label].sample(frac=1, random_state=42)
+         n_val = max(1, int(len(group) * args.val_frac))
+         racing_val_parts.append(group.iloc[:n_val])
+         racing_train_parts.append(group.iloc[n_val:])
+ 
+     racing_train = pd.concat(racing_train_parts, ignore_index=True) if racing_train_parts else pd.DataFrame()
+     racing_val = pd.concat(racing_val_parts, ignore_index=True) if racing_val_parts else pd.DataFrame()
+ 
+     # 4. Augment underrepresented racing classes
+     aug_rows = []
+     for label in sorted(racing_train["label"].unique()):
+         group = racing_train[racing_train["label"] == label]
+         n_have = len(group)
+         n_need = max(0, args.aug_target - n_have)
+         if n_need > 0:
+             print(f"  Augmenting label {label}: {n_have} -> {n_have + n_need} (+{n_need})")
+             source_rows = group.to_dict("records")
+             for _ in range(n_need):
+                 row = random.choice(source_rows)
+                 orig_img = Image.open(io.BytesIO(row["image"]["bytes"])).convert("L")
+                 aug_img = augment(orig_img)
+                 aug_rows.append({
+                     "image": {"bytes": img_to_bytes(aug_img), "path": None},
+                     "label": label,
+                     "source": "racing_aug",
+                 })
+ 
+     if aug_rows:
+         racing_train = pd.concat([racing_train, pd.DataFrame(aug_rows)], ignore_index=True)
+ 
+     # 5. Stratified split for MNIST
+     if not mnist.empty:
+         mnist_train_parts, mnist_val_parts = [], []
+         for label in sorted(mnist["label"].unique()):
+             group = mnist[mnist["label"] == label].sample(frac=1, random_state=42)
+             n_val = max(5, int(len(group) * args.val_frac))
+             mnist_val_parts.append(group.iloc[:n_val])
+             mnist_train_parts.append(group.iloc[n_val:])
+         mnist_train = pd.concat(mnist_train_parts, ignore_index=True)
+         mnist_val = pd.concat(mnist_val_parts, ignore_index=True)
+     else:
+         mnist_train = pd.DataFrame()
+         mnist_val = pd.DataFrame()
+ 
+     # 6. Combine and shuffle
+     train_parts = [p for p in [racing_train, mnist_train] if not p.empty]
+     val_parts = [p for p in [racing_val, mnist_val] if not p.empty]
+     new_train = pd.concat(train_parts, ignore_index=True).sample(frac=1, random_state=42).reset_index(drop=True)
+     new_val = pd.concat(val_parts, ignore_index=True).sample(frac=1, random_state=42).reset_index(drop=True)
+ 
+     print("\n=== Final dataset ===")
+     print(f"Train: {len(new_train)}, Val: {len(new_val)}")
+     print("\nTrain:")
+     print(pd.crosstab(new_train["label"], new_train["source"]))
+     print("\nVal:")
+     print(pd.crosstab(new_val["label"], new_val["source"]))
+ 
+     # 7. Write parquet
+     os.makedirs("data", exist_ok=True)
+     new_train.to_parquet("data/train-00000-of-00001.parquet", index=False)
+     new_val.to_parquet("data/validation-00000-of-00001.parquet", index=False)
+     print("\nWritten to data/")
+ 
+     # 8. Generate per-label composites for verification
+     for split_name, split_df in [("train", new_train), ("val", new_val)]:
+         for source in sorted(split_df["source"].unique()):
+             if source == "racing_aug":
+                 continue
+             comp_dir = f"composites/{source}"
+             os.makedirs(comp_dir, exist_ok=True)
+             for label in sorted(split_df["label"].unique()):
+                 subset = split_df[(split_df["source"] == source) & (split_df["label"] == label)]
+                 if subset.empty:
+                     continue
+                 images = [
+                     Image.open(io.BytesIO(row["image"]["bytes"])).convert("L")
+                     for _, row in subset.iterrows()
+                 ]
+                 make_composite(images, f"{comp_dir}/{split_name}_label_{label}.png")
+ 
+ 
+ if __name__ == "__main__":
+     main()
scripts/extract.py ADDED
@@ -0,0 +1,73 @@
+ """Extract gear digit crops from a racing video.
+ 
+ Usage:
+     uv run python scripts/extract.py <video_path> <source_name> <x> <y> <w> <h> [--fps 5]
+ 
+ Example:
+     uv run python scripts/extract.py /path/to/video.mp4 sebring-tobi-lap6 1440 780 90 105
+     uv run python scripts/extract.py /path/to/video.mp4 paul-ricard-alpine 685 320 55 55
+ 
+ Outputs:
+     raw/<source_name>/unlabeled/frame_XXXXX.png - individual crops
+     composites/<source_name>/unlabeled.png      - contact sheet for labeling
+ """
+ 
+ import argparse
+ import os
+ import subprocess
+ import sys
+ from PIL import Image
+ 
+ 
+ def main():
+     parser = argparse.ArgumentParser(description="Extract gear crops from racing video")
+     parser.add_argument("video", help="Path to video file")
+     parser.add_argument("source", help="Source name (e.g. sebring-tobi-lap6)")
+     parser.add_argument("x", type=int, help="Crop X offset")
+     parser.add_argument("y", type=int, help="Crop Y offset")
+     parser.add_argument("w", type=int, help="Crop width")
+     parser.add_argument("h", type=int, help="Crop height")
+     parser.add_argument("--fps", type=int, default=5, help="Extraction frame rate (default: 5)")
+     args = parser.parse_args()
+ 
+     out_dir = f"raw/{args.source}/unlabeled"
+     comp_dir = f"composites/{args.source}"
+     os.makedirs(out_dir, exist_ok=True)
+     os.makedirs(comp_dir, exist_ok=True)
+ 
+     # Extract with ffmpeg; check=True so a bad path or codec fails loudly
+     print(f"Extracting from {args.video}")
+     print(f"  Crop: ({args.x}, {args.y}, {args.w}×{args.h}) at {args.fps}fps")
+     cmd = [
+         "ffmpeg", "-y",
+         "-i", args.video,
+         "-vf", f"crop={args.w}:{args.h}:{args.x}:{args.y}",
+         "-r", str(args.fps),
+         f"{out_dir}/frame_%05d.png",
+     ]
+     subprocess.run(cmd, check=True, capture_output=True)
+ 
+     # Build composite contact sheet
+     frames = sorted(f for f in os.listdir(out_dir) if f.endswith(".png"))
+     print(f"  Extracted {len(frames)} frames")
+ 
+     cols = 40
+     cell = 40
+     rows = (len(frames) + cols - 1) // cols
+     sheet = Image.new("L", (cols * cell, rows * cell), 0)
+     for idx, f in enumerate(frames):
+         img = Image.open(os.path.join(out_dir, f)).convert("L").resize((cell, cell))
+         r, c = idx // cols, idx % cols
+         sheet.paste(img, (c * cell, r * cell))
+ 
+     comp_path = f"{comp_dir}/unlabeled.png"
+     sheet.save(comp_path)
+     print(f"  Composite: {comp_path} ({cols}×{rows})")
+     print()
+     print(f"Next: create labels/{args.source}.csv with columns: start,end,label")
+     print("  Each row is a frame range (0-indexed) and its gear digit.")
+     print("  Then run: uv run python scripts/build_dataset.py")
+ 
+ 
+ if __name__ == "__main__":
+     main()
scripts/find_label_issues.py ADDED
@@ -0,0 +1,161 @@
+ """Find mislabeled samples using Cleanlab + a small CNN.
+ 
+ 1. Train a simple CNN on the dataset with cross-validation
+ 2. Get out-of-fold predicted probabilities
+ 3. Run Cleanlab to find label issues, outliers, duplicates
+ 
+ Usage:
+     uv run python scripts/find_label_issues.py
+ """
+ 
+ import io
+ 
+ import numpy as np
+ import pandas as pd
+ from PIL import Image
+ from sklearn.model_selection import StratifiedKFold
+ 
+ import torch
+ import torch.nn as nn
+ import torch.optim as optim
+ from torch.utils.data import DataLoader, TensorDataset
+ 
+ 
+ # --- 1. Load data ---
+ print("Loading data...")
+ df = pd.read_parquet("data/train-00000-of-00001.parquet")
+ 
+ images = []
+ for _, row in df.iterrows():
+     img = Image.open(io.BytesIO(row["image"]["bytes"])).convert("L")
+     arr = np.array(img, dtype=np.float32) / 255.0
+     images.append(arr)
+ 
+ X = np.stack(images)[:, np.newaxis, :, :]  # (N, 1, 32, 32)
+ y = df["label"].values
+ sources = df["source"].values
+ print(f"Loaded {len(X)} images, {len(np.unique(y))} classes")
+ 
+ 
+ # --- 2. Simple CNN ---
+ class SmallCNN(nn.Module):
+     def __init__(self, num_classes=10):
+         super().__init__()
+         self.features = nn.Sequential(
+             nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
+             nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
+             nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
+         )
+         self.classifier = nn.Sequential(
+             nn.Flatten(),
+             nn.Linear(64 * 4 * 4, 128), nn.ReLU(), nn.Dropout(0.3),
+             nn.Linear(128, num_classes),
+         )
+ 
+     def forward(self, x):
+         return self.classifier(self.features(x))
+ 
+ 
+ def train_and_predict(X_train, y_train, X_val, epochs=15, batch_size=64):
+     device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
+     model = SmallCNN().to(device)
+     optimizer = optim.Adam(model.parameters(), lr=1e-3)
+     criterion = nn.CrossEntropyLoss()
+ 
+     train_ds = TensorDataset(torch.tensor(X_train), torch.tensor(y_train, dtype=torch.long))
+     loader = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
+ 
+     model.train()
+     for epoch in range(epochs):
+         for xb, yb in loader:
+             xb, yb = xb.to(device), yb.to(device)
+             optimizer.zero_grad()
+             loss = criterion(model(xb), yb)
+             loss.backward()
+             optimizer.step()
+ 
+     # Predict probabilities on val
+     model.eval()
+     with torch.no_grad():
+         val_tensor = torch.tensor(X_val).to(device)
+         logits = model(val_tensor)
+         probs = torch.softmax(logits, dim=1).cpu().numpy()
+     return probs
+ 
+ 
+ # --- 3. Cross-validated predictions ---
+ print("\nTraining 5-fold cross-validated CNN...")
+ n_splits = 5
+ skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
+ pred_probs = np.zeros((len(X), 10))
+ 
+ for fold, (train_idx, val_idx) in enumerate(skf.split(X, y)):
+     print(f"  Fold {fold + 1}/{n_splits}...")
+     probs = train_and_predict(X[train_idx], y[train_idx], X[val_idx])
+     pred_probs[val_idx] = probs
+ 
+ print(f"  OOF accuracy: {(pred_probs.argmax(axis=1) == y).mean():.3f}")
+ 
+ 
+ # --- 4. Cleanlab ---
+ print("\nRunning Cleanlab...")
+ from cleanlab import Datalab
+ 
+ lab = Datalab(
+     data={"label": y.tolist(), "source": sources.tolist()},
+     label_name="label",
+ )
+ lab.find_issues(pred_probs=pred_probs)
+ 
+ print("\n=== Issue Summary ===")
+ print(lab.get_issue_summary())
+ 
+ # Get label issues, most suspicious (lowest score) first
+ issues = lab.get_issues("label")
+ label_issues = issues[issues["is_label_issue"]].sort_values("label_score")
+ 
+ print(f"\n=== {len(label_issues)} Label Issues Found ===")
+ if len(label_issues) > 0:
+     for idx in label_issues.index[:50]:
+         given = y[idx]
+         predicted = pred_probs[idx].argmax()
+         score = issues.loc[idx, "label_score"]
+         src = sources[idx]
+         print(f"  idx={idx:5d} given={given} predicted={predicted} score={score:.4f} source={src}")
+ 
+ # Save full results
+ results = pd.DataFrame({
+     "index": range(len(y)),
+     "label": y,
+     "predicted": pred_probs.argmax(axis=1),
+     "label_score": issues["label_score"].values,
+     "is_label_issue": issues["is_label_issue"].values,
+     "source": sources,
+ })
+ results.to_csv("label_issues.csv", index=False)
+ print("\nFull results saved to label_issues.csv")
+ 
+ # Make composite of worst issues
+ print("\nGenerating composite of flagged issues...")
+ if len(label_issues) > 0:
+     from PIL import ImageDraw
+ 
+     cell = 48
+     n_show = min(100, len(label_issues))
+     cols = min(20, n_show)
+     rows = (n_show + cols - 1) // cols
+     sheet = Image.new("RGB", (cols * cell, rows * cell), (0, 0, 0))
+     draw = ImageDraw.Draw(sheet)
+ 
+     for i, idx in enumerate(label_issues.index[:n_show]):
+         img = Image.open(io.BytesIO(df.iloc[idx]["image"]["bytes"])).convert("L")
+         img_rgb = img.resize((cell, cell)).convert("RGB")
+         r, c = i // cols, i % cols
+         x, y_pos = c * cell, r * cell
+         sheet.paste(img_rgb, (x, y_pos))
+         given = y[idx]
+         predicted = pred_probs[idx].argmax()
+         # Red label = given, green label = predicted
+         draw.text((x + 2, y_pos + 2), str(given), fill=(255, 80, 80))
+         draw.text((x + 2, y_pos + 14), str(predicted), fill=(80, 255, 80))
+ 
+     sheet.save("composites/label_issues.png")
+     print("  composites/label_issues.png (red=given, green=predicted)")
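Downstream, the `label_issues.csv` that find_label_issues.py writes is triaged by sorting flagged rows on `label_score` ascending, since the lowest scores are the most suspicious. A minimal sketch on hypothetical rows in the same column shape (the values here are made up for illustration):

```python
# Hypothetical rows matching the columns label_issues.csv uses.
rows = [
    {"index": 10, "label": 4, "predicted": 7, "label_score": 0.02, "is_label_issue": True},
    {"index": 3, "label": 2, "predicted": 2, "label_score": 0.91, "is_label_issue": False},
    {"index": 7, "label": 5, "predicted": 6, "label_score": 0.15, "is_label_issue": True},
]

# Keep only flagged rows, most suspicious first.
flagged = sorted((r for r in rows if r["is_label_issue"]), key=lambda r: r["label_score"])
print([r["index"] for r in flagged])  # [10, 7]
```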
uv.lock ADDED
The diff for this file is too large to render. See raw diff