tobil committed
Commit 4f1fe70 · unverified · 1 Parent(s): 148953b

v3: deep clean via per-source cross-val + manual curation


- Per-source iterative cross-validation cleaning (3 rounds each)
- racing-original: 99.6% OOF accuracy; removed all mislabeled 4s (they were actually 1s)
- sebring: 82% OOF accuracy; removed 91 noisy transition frames
- mnist: 99% OOF accuracy; removed 30 ambiguous digits
- Fixed Paul Ricard gear_4 label CSV (was pointing to a 3)
- Fixed gear_4 example image
- Per-label verification composites in composites/{train,validation}/
- 5,391 train / 903 val
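A minimal sketch of the per-source iterative cross-val cleaning described above, using the same agree/drop rule but a toy nearest-centroid classifier on synthetic 2-D data in place of the real CNN; `oof_predict` and `iterative_clean` are hypothetical names, not the repo's actual functions:

```python
import numpy as np

def oof_predict(X, y, n_folds=5, seed=0):
    """Out-of-fold predictions: each sample is scored by a nearest-centroid
    classifier fit on the other folds, so it never sees its own label."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y))
    folds = np.array_split(order, n_folds)
    preds = np.empty(len(y), dtype=y.dtype)
    labels = np.unique(y)
    for fold in folds:
        train = np.setdiff1d(order, fold)
        centroids = np.stack(
            [X[train][y[train] == c].mean(axis=0) for c in labels])
        d = np.linalg.norm(X[fold][:, None, :] - centroids[None], axis=2)
        preds[fold] = labels[d.argmin(axis=1)]
    return preds

def iterative_clean(X, y, rounds=3):
    """Drop samples whose OOF prediction disagrees with their label,
    repeating up to `rounds` times (the '3 rounds each' above)."""
    keep = np.arange(len(y))
    for _ in range(rounds):
        agree = oof_predict(X[keep], y[keep]) == y[keep]
        if agree.all():
            break
        keep = keep[agree]
    return keep

# Toy data: two well-separated clusters with three flipped labels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
y[:3] = 1  # inject label noise into cluster 0
kept = iterative_clean(X, y)  # the flipped samples get dropped
```

The real pipeline applied this per source (racing-original, sebring, mnist), which keeps one noisy source from contaminating the folds of another.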

composites/train/label_1.png CHANGED (Git LFS)
  old: sha256 dc443b39b63adaab0489ef3b5905323233f1fa6f86263f57ed8f8f7e1a6906b6 · pointer 131 B · 394 kB
  new: sha256 87c88fc4b4401bdd7120030ac60ba90fed759f04a834565e68b05af8ca1aec6b · pointer 131 B · 418 kB

composites/train/label_2.png CHANGED (Git LFS)
  old: sha256 af7197c0e047fb0c10725b570ee5e3229250adf7d15106e51d91d821ea311e57 · pointer 131 B · 833 kB
  new: sha256 49151872cc392d5db47139696e2312b568d4f83569e49b4bc1e93339d6ba47e0 · pointer 131 B · 865 kB

composites/train/label_3.png CHANGED (Git LFS)
  old: sha256 5c30b52e7add3d0d2f3c5f8d3bef6672e4b9b1b04ad0e44991bdcb39db0e9620 · pointer 131 B · 351 kB
  new: sha256 a348c98930e4c54fb5df5d8c933e8b77daff7417344f5029feb92eedc6114a8c · pointer 131 B · 323 kB

composites/train/label_4.png CHANGED (Git LFS)
  old: sha256 00cc02fbf9179ff31f6e7097e1cc2b6b6bed95ef105ea743848882112c963d53 · pointer 130 B · 25.6 kB
  new: sha256 935417446211311fd0015176e44423b481c0847750039e0e78ce011af2e9164f · pointer 130 B · 27.6 kB

composites/train/label_5.png CHANGED (Git LFS)
  old: sha256 661cf24e26d9843d43e0c1a149b2d010dcdaf66a4498d9ba3e3557e4b36e871f · pointer 131 B · 129 kB
  new: sha256 7d62e4db086e5cc518ee5b9b0127a9b1b15e526798c8e68eee42743707ce0160 · pointer 131 B · 183 kB

composites/train/label_6.png CHANGED (Git LFS)
  old: sha256 f8a6d189b9847f4e8a86624ca34641492633309c0c7822bdc388759b809e45cd · pointer 131 B · 106 kB
  new: sha256 abda2b5c12abf791c4bdf5f14ea735ba18eee0ed2896475edcc5c4c1aa0a9150 · pointer 131 B · 115 kB

composites/train/label_7.png CHANGED (Git LFS)
  old: sha256 d8adf03638252be718fb720b5c7479548290a82416727ba9887192779c0e964b · pointer 130 B · 22.9 kB
  new: sha256 fb453d406fef3a43259b020412c97a52265ad218f86303fe5c66f2a789bdf84f · pointer 130 B · 21.2 kB

composites/validation/label_1.png CHANGED (Git LFS)
  old: sha256 6cf98cea4210e2f7def2360c4339c8fb231e17f077fb9d5e7cc52d1114fba6fc · pointer 130 B · 68.8 kB
  new: sha256 229aff4311949c62c633240c4e9334424953c7fcf00b750e7f5bd98f51f8abe3 · pointer 130 B · 73.9 kB

composites/validation/label_2.png CHANGED (Git LFS)
  old: sha256 9852a4de66603036dfe50883431bff029c7ee1c1a141f9c69026935d70eef1b9 · pointer 131 B · 141 kB
  new: sha256 bec5e3d10ef20477b1e24a4ca2c87b65c31105f0a065113f0a2fd0d58f4d2dbc · pointer 131 B · 149 kB

composites/validation/label_3.png CHANGED (Git LFS)
  old: sha256 f62b232ef71cbe63d4099dd55a278f2172e7e4fded1e7a8cd41448dc968bd591 · pointer 130 B · 62.4 kB
  new: sha256 c03757fbb1ace0f6911818b21249d9002f5baade86f1e145d7a02afcc6ffdd0c · pointer 130 B · 56.9 kB

composites/validation/label_4.png CHANGED (Git LFS)
  old: sha256 76600d8dc4011a3855f671f63217e1ae99db21e34de768d598a8e6583eb51379 · pointer 130 B · 14.5 kB
  new: sha256 003558ae38d6e2609d8dceb8dfbb1550e3b74353ce45cb70946ec8451519cdb3 · pointer 129 B · 5.24 kB

composites/validation/label_5.png CHANGED (Git LFS)
  old: sha256 7059bb5468268f9b7301ccdda993c32083a602462ca0199a427daf917920f37e · pointer 130 B · 23.5 kB
  new: sha256 c45121010003b8c262f86fb55cde0148b92214cd539cfab37faa3177bd79303b · pointer 130 B · 31.5 kB

composites/validation/label_6.png CHANGED (Git LFS)
  old: sha256 e7ab3b3664edff91c5dfc970fb0e3f96f314f9f90d3880850d2609af62b74ecb · pointer 130 B · 23.9 kB
  new: sha256 f80225999373337fbe48bc15c774d926130e645c16afbd3e83e4a891a72c943f · pointer 130 B · 19.5 kB

composites/validation/label_7.png CHANGED (Git LFS)
  old: sha256 dcfdfc91f7765b239c666d2a319f6ce93ae312363fff95e6a24a193c843e5388 · pointer 129 B · 5.45 kB
  new: sha256 6bcdbc442533b1bbbd7f79f5c6e5490b4596bc311ecf6f5b013db4de9b29e151 · pointer 129 B · 5.28 kB
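The composite sheets above are per-label grids of sample crops for quick visual verification. A minimal sketch of how such a sheet could be tiled with PIL, using dummy grey squares in place of real crops; `make_composite` is a hypothetical helper, not the repo's actual script:

```python
from PIL import Image

def make_composite(images, cols=8):
    """Tile equally sized grayscale crops into one grid sheet."""
    w, h = images[0].size
    rows = -(-len(images) // cols)  # ceiling division
    sheet = Image.new("L", (cols * w, rows * h), color=0)
    for i, img in enumerate(images):
        sheet.paste(img, ((i % cols) * w, (i // cols) * h))
    return sheet

# Dummy 28x28 crops standing in for same-label dataset samples.
crops = [Image.new("L", (28, 28), color=40 * (i % 6)) for i in range(20)]
sheet = make_composite(crops)  # 8 columns x 3 rows -> 224 x 84 px
```

One sheet per label makes a mislabeled sample stand out immediately, which is what enabled the manual curation pass in this commit.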
data/train-00000-of-00001.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7ffa0834a4b1130d252076bb3e856d78aae07d21f0b480f657b693dc0a51b522
-size 2854214
+oid sha256:40864d586b97dd25b57946e2eae95232fe1fabfff874a23a930545d5ac6007ef
+size 2898683

data/validation-00000-of-00001.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d1427204fcd7a3f9ac6dea6abd08861330fe1d2b61e9d471ccebc1d68103c874
-size 487143
+oid sha256:c8124233735f356a205b1d96dde0c0c0ab075366725461a444c0c6464989b8ee
+size 482795
examples/paul-ricard-alpine/gear_4.png CHANGED (Git LFS)
  old: sha256 574a5a0e8f6ccc156faeb2741b13f2f62ff1e58c64ff59f0a0a3d531997a048a · pointer 129 B · 3.48 kB
  new: sha256 ed4c6ca322516e7833a7038d609e5938a1eeef1c9c21289d9c1884dd89853a88 · pointer 129 B · 3.22 kB
labels/paul-ricard-alpine.csv CHANGED
@@ -6,33 +6,34 @@ start,end,label
 44,47,6
 48,79,3
 80,84,3
-85,87,3
-88,97,5
-98,101,5
-102,104,4
+85,86,3
+87,91,5
+92,96,5
+97,97,4
+98,104,3
 105,117,3
 118,127,3
 128,137,2
-138,139,2
-140,158,2
+138,158,2
 159,166,2
 167,175,2
 176,181,2
 182,189,2
 190,199,3
-200,210,3
-211,220,5
-221,237,5
+200,204,3
+205,210,5
+211,237,5
 238,242,5
 243,246,6
 247,258,7
 259,278,7
 279,296,7
 297,298,7
-299,301,6
-302,303,5
-304,305,4
-306,319,3
+299,299,6
+300,301,5
+302,302,4
+303,303,4
+304,319,3
 320,334,3
 335,337,3
 338,345,1
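The label CSVs store inclusive frame ranges (`start,end,label`), so a consumer has to expand them to per-frame labels. A small sketch using rows taken from the updated CSV above; `expand_ranges` is an illustrative helper, not part of the repo:

```python
import csv
import io

# Excerpt in the labels/paul-ricard-alpine.csv format (inclusive frame ranges).
sample = """start,end,label
85,86,3
87,91,5
92,96,5
97,97,4
"""

def expand_ranges(text):
    """Map each frame number to its gear label."""
    frames = {}
    for row in csv.DictReader(io.StringIO(text)):
        for frame in range(int(row["start"]), int(row["end"]) + 1):
            frames[frame] = int(row["label"])
    return frames

frames = expand_ranges(sample)  # 13 frames covering 85..97
```

Note the ranges are inclusive on both ends, which is why single-frame entries like `97,97,4` are valid.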
scripts/relabel_clean.py ADDED

"""Clean the dataset by training on MNIST (known-clean labels) + Paul Ricard
(visually verified, very clear digits), then using that model to filter
all racing data.

Strategy:
1. Train a strong CNN on MNIST + Paul Ricard (both have clean, unambiguous labels)
2. Run inference on ALL training data
3. Drop samples where the model confidently disagrees with the label
4. Do the same for val
5. Rebuild parquet files and composites

Usage:
    uv run python scripts/relabel_clean.py
"""

import io

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
from PIL import Image
from datasets import Dataset, Image as HFImage

if __name__ != "__main__":  # one-shot script, not a library: bail out if imported
    import sys
    sys.exit(0)

device = torch.device("cuda" if torch.cuda.is_available()
                      else "mps" if torch.backends.mps.is_available()
                      else "cpu")
print(f"Using device: {device}")


class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 16, 256), nn.ReLU(), nn.Dropout(0.4), nn.Linear(256, 10))

    def forward(self, x):
        return self.classifier(self.features(x))


def load_images(df):
    """Decode the parquet image bytes into a (N, 1, H, W) float array in [0, 1]."""
    imgs = []
    for _, row in df.iterrows():
        img = Image.open(io.BytesIO(row["image"]["bytes"])).convert("L")
        imgs.append(np.array(img, dtype=np.float32) / 255.0)
    return np.stack(imgs)[:, np.newaxis, :, :]


def train_model(X, y, epochs=40):
    model = CNN().to(device)
    opt = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    scheduler = optim.lr_scheduler.CosineAnnealingLR(opt, epochs)
    crit = nn.CrossEntropyLoss()
    loader = DataLoader(
        TensorDataset(torch.tensor(X), torch.tensor(y, dtype=torch.long)),
        batch_size=64, shuffle=True)

    model.train()
    for epoch in range(epochs):
        total_loss = 0
        for xb, yb in loader:
            xb, yb = xb.to(device), yb.to(device)
            opt.zero_grad()
            loss = crit(model(xb), yb)
            loss.backward()
            opt.step()
            total_loss += loss.item()
        scheduler.step()
        if (epoch + 1) % 10 == 0:
            print(f"  epoch {epoch+1}/{epochs} loss={total_loss/len(loader):.4f}")

    return model


def predict(model, X):
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(torch.tensor(X).to(device)), dim=1).cpu().numpy()
    return probs


# --- Step 1: Build clean seed dataset ---
print("\n=== Building clean seed dataset ===")
train_df = pd.read_parquet("data/train-00000-of-00001.parquet")
val_df = pd.read_parquet("data/validation-00000-of-00001.parquet")

# MNIST from train (known clean)
mnist = train_df[train_df["source"] == "mnist"]
print(f"MNIST: {len(mnist)} samples")

# Paul Ricard from train (visually verified, very clear white-on-black)
paul_ricard = train_df[train_df["source"] == "paul-ricard-alpine"]
print(f"Paul Ricard: {len(paul_ricard)} samples")

seed = pd.concat([mnist, paul_ricard], ignore_index=True)
X_seed = load_images(seed)
y_seed = seed["label"].values
print(f"Seed dataset: {len(seed)} samples")

# --- Step 2: Train ensemble on seed ---
print("\n=== Training 5-model ensemble on seed data ===")
models = []
for i in range(5):
    print(f"Model {i+1}/5:")
    # Bootstrap resample so ensemble members disagree on borderline samples
    idx = np.random.choice(len(X_seed), len(X_seed), replace=True)
    model = train_model(X_seed[idx], y_seed[idx])
    models.append(model)

# --- Step 3: Predict on all data ---
print("\n=== Predicting on all data ===")

for split_name, df in [("train", train_df), ("validation", val_df)]:
    X = load_images(df)
    y = df["label"].values
    sources = df["source"].values

    # Ensemble predict: average the softmax outputs
    probs = np.zeros((len(X), 10))
    for model in models:
        probs += predict(model, X)
    probs /= len(models)

    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)

    # For MNIST: keep as-is (they're the seed)
    # For racing: keep only where the model agrees, or where it is not confident
    keep = np.ones(len(df), dtype=bool)
    for i in range(len(df)):
        if sources[i] == "mnist":
            continue  # always keep
        if preds[i] != y[i] and conf[i] > 0.5:
            keep[i] = False

    dropped = (~keep).sum()
    print(f"\n{split_name}: {len(df)} -> {len(df) - dropped} (dropping {dropped})")

    # Per-label breakdown of what gets dropped and what the model thinks it is
    for label in range(10):
        mask = y == label
        drop_mask = mask & ~keep
        if drop_mask.sum() > 0:
            pred_dist = pd.Series(preds[drop_mask]).value_counts().to_dict()
            print(f"  label={label}: drop {drop_mask.sum()}/{mask.sum()} -> model says {pred_dist}")

    df_clean = df[keep].reset_index(drop=True)
    print("  Final distribution:")
    print(pd.crosstab(df_clean["label"], df_clean["source"]))

    ds = Dataset.from_pandas(df_clean)
    ds = ds.cast_column("image", HFImage())
    ds.to_parquet(f"data/{split_name}-00000-of-00001.parquet")

print("\nDone! Now run: uv run python scripts/make_composites.py")
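After the parquet files are rebuilt, the "5,391 train / 903 val" figures can be sanity-checked with a row count plus the same label-by-source crosstab the script prints. A toy sketch (illustrative `summarize` helper and made-up rows, since the real parquet isn't reproduced here):

```python
import pandas as pd

def summarize(df):
    """Row count plus label-by-source breakdown for a cleaned split."""
    return len(df), pd.crosstab(df["label"], df["source"])

# Toy frame with the columns the cleaning script expects.
df = pd.DataFrame({
    "label": [1, 1, 2, 4],
    "source": ["mnist", "sebring", "mnist", "paul-ricard-alpine"],
})
n, table = summarize(df)  # n == 4
```

On the real data, `summarize(pd.read_parquet("data/train-00000-of-00001.parquet"))` would be expected to report 5,391 rows.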