phanerozoic committed
Commit faf011c · verified · 1 Parent(s): 55f3b2e

Stage 4C: direct classifier-score supervision, F1 0.729 (+0.006 over 4B)

stage_4c/README.md ADDED
@@ -0,0 +1,44 @@
+ # Stage 4C: Direct Classifier-Score Supervision
+
+ Same 3.27 M student as Stage 4. Same 40-D output. Different loss:
+
+ ```python
+ student_score = student_out[pos_dims].sum() - student_out[neg_dims].sum()
+ teacher_score = teacher_target[pos_dims].sum() - teacher_target[neg_dims].sum()
+ loss = (student_score - teacher_score) ** 2
+ ```
+
+ The student is optimized to match the teacher's scalar classifier output, not the 768-D feature vector (Stage 4B) or the 40 individual dims (Stage 4A).
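+
+ The distinction matters because errors shared between a positive and a negative dim cancel in the score: a per-dim loss penalizes them, the scalar loss does not. A toy illustration with made-up numbers, not values from this repo:
+
+ ```python
+ import torch
+
+ teacher = torch.tensor([1.0, 2.0])   # one pos dim, one neg dim
+ student = torch.tensor([1.5, 2.5])   # both dims off by +0.5
+
+ per_dim_mse = ((student - teacher) ** 2).mean()                    # 0.25: penalized by a per-dim loss
+ score_err = (student[0] - student[1]) - (teacher[0] - teacher[1])
+ scalar_mse = score_err ** 2                                        # 0.0: the classifier score is unchanged
+ ```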
+
+ ## Result
+
+ ```
+ Stage  Student params   Loss                          F1     Threshold
+ 4      3.27 M           MSE on 40-D per-dim           0.710  26.3
+ 4B     15.67 M          cosine on 768-D               0.723  168.0 (scale drifted)
+ 4C     3.27 M           MSE on scalar sum-difference  0.729  25.8 (matches teacher 25.3)
+ 0      85.64 M (ViT-B)  baseline                      0.889  25.3
+ ```
+
+ Threshold converged to 25.84 — almost exactly the teacher's 25.28. The scale calibration works as designed.
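+
+ In practice this means the teacher's operating point can be carried over instead of re-sweeping a threshold on validation data. A minimal inference-time sketch; the function name is illustrative, while the 20/20 positive/negative dim split and the 25.28 threshold come from this stage's setup:
+
+ ```python
+ import torch
+
+ TEACHER_THRESHOLD = 25.28  # teacher operating point (Stage 0)
+
+ def predict(student_out: torch.Tensor) -> torch.Tensor:
+     """student_out: (B, 40) with dims 0-19 positive, dims 20-39 negative."""
+     score = student_out[:, :20].sum(dim=1) - student_out[:, 20:].sum(dim=1)
+     return score > TEACHER_THRESHOLD  # (B,) boolean decisions
+ ```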
+
+ F1 improved by only +0.006 over Stage 4B. All three student experiments plateau around 0.72-0.73 with high recall (≥0.95) and precision ~0.58. The student converges on an "over-fire" operating point that no amount of loss-shape tuning fixes.
+
+ ## What this says
+
+ The bottleneck is not loss choice or target geometry but the student's ability to learn the underlying scene-level signal at this scale. Closing the F1 gap to the 0.889 baseline at the 3 M parameter tier probably requires:
+
+ - Stronger image augmentation (mosaic, color jitter, rand-augment); see the sketch after this list
+ - Warm-starting from a pre-trained backbone (EUPE-ViT-T already distilled) rather than from scratch
+ - More training data beyond COCO-only (117 K images is tight for a specialist from scratch)
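+
+ For the augmentation item, one low-effort starting point is photometric augmentation applied to the resized PIL image, before it is converted to a tensor and normalized in `CocoImgDataset.__getitem__`. A minimal sketch assuming torchvision is installed; the jitter and RandAugment strengths are illustrative, not tuned values from this repo:
+
+ ```python
+ from torchvision import transforms
+
+ # Train-time only: photometric jitter plus RandAugment on the PIL image.
+ train_aug = transforms.Compose([
+     transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
+     transforms.RandAugment(num_ops=2, magnitude=9),
+ ])
+
+ # In CocoImgDataset.__getitem__, between the resize and np.asarray(img):
+ #     img = train_aug(img)
+ ```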
+
+ Parameter scaling alone doesn't help; loss reshaping alone doesn't help. Data and initialization are the remaining knobs.
+
+ ## Files
+
+ - `train.py` — training loop (direct scalar MSE)
+ - `student_ep{5,10,15}.safetensors` — intermediate checkpoints
+ - `student_final.safetensors` — final weights
+ - `training_log.json` — per-epoch loss + F1
+
+ Uses the same `student.py` as Stage 4.
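+
+ To run the released checkpoint, a minimal loading sketch; it assumes the Stage 4 `student.py` is importable (as `train.py` arranges via `sys.path`) and that the checkpoint path is adjusted to where the file actually lives:
+
+ ```python
+ from safetensors.torch import load_file
+ from student import SpecialistStudent  # Stage 4 architecture, reused unchanged
+
+ student = SpecialistStudent()
+ student.load_state_dict(load_file('student_final.safetensors'))
+ student.eval()
+ # 40-D output; classifier score = out[:, :20].sum(1) - out[:, 20:].sum(1)
+ ```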
stage_4c/student_ep10.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a0df72224b8cebbbafee40d07cd167c285eda72717e62ec8fee4ebce672d3a8
+ size 13076256
stage_4c/student_ep15.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0969d6c6b79b2304cc8bce3f353bb437c3411593976182749efdfc901095565
+ size 13076256
stage_4c/student_ep5.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a99c84660f3aa447fe6f9a5205eb61ae567976cbfe820e7e5863f3b4a6ec4433
+ size 13076256
stage_4c/student_final.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0969d6c6b79b2304cc8bce3f353bb437c3411593976182749efdfc901095565
+ size 13076256
stage_4c/train.py ADDED
@@ -0,0 +1,189 @@
+ """Stage 4C: direct classifier-score supervision.
+
+ Same 3.27M student architecture as Stage 4. Same 40-D output. But the loss
+ is on the *classifier score* rather than the per-dim values:
+
+     student_score = student_out[pos_dims].sum() - student_out[neg_dims].sum()
+     teacher_score = teacher_target[pos_dims].sum() - teacher_target[neg_dims].sum()
+     loss = (student_score - teacher_score) ** 2
+
+ The student is optimized to produce the same binary decision as the teacher
+ at the classifier threshold, not to reproduce the teacher's feature geometry
+ dim-by-dim. If the Stage 4B plateau at F1 0.723 was caused by even
+ small per-dim errors accumulating into scalar miscalibration, this should
+ close the gap.
+ """
+ import os, sys, time, json, math
+ import numpy as np
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+ from PIL import Image
+ from pycocotools.coco import COCO
+ from safetensors.torch import save_file
+
+ HERE = os.path.dirname(os.path.abspath(__file__))
+ sys.path.insert(0, '/mnt/d/_tmp/1pc_repo/stage_4')
+ from student import SpecialistStudent
+
+ COCO_ROOT = '/home/zootest/datasets/coco'
+ TARGETS = f'{COCO_ROOT}/stage4_teacher_targets/targets.pt'
+ CLASSIFIER = '/mnt/d/_tmp/1pc_repo/stage_0/classifier.json'
+ OUT_DIR = '/mnt/d/_tmp/1pc_repo/stage_4c'
+ DEVICE = 'cuda'
+ RES = 768
+ BATCH = 16
+ LR = 3e-4
+ WD = 1e-4
+ EPOCHS = 15
+ WARMUP_FRAC = 0.03
+
+
+ class CocoImgDataset(torch.utils.data.Dataset):
+     def __init__(self, coco_root, pack):
+         self.root = f'{coco_root}/train2017'
+         coco = COCO(f'{coco_root}/annotations/instances_train2017.json')
+         self.img_ids = pack['img_ids']
+         self.targets = pack['targets']  # (N, 40)
+         self.id_to_file = {i['id']: i['file_name'] for i in coco.loadImgs(coco.getImgIds())}
+
+     def __len__(self):
+         return len(self.img_ids)
+
+     def __getitem__(self, i):
+         img_id = self.img_ids[i]
+         target = self.targets[i].float()  # (40,)
+         fname = self.id_to_file.get(img_id)
+         if fname is None:
+             return None
+         try:
+             img = Image.open(f'{self.root}/{fname}').convert('RGB').resize((RES, RES), Image.BILINEAR)
+         except Exception:
+             return None
+         arr = np.asarray(img, dtype=np.uint8).copy()
+         x = torch.from_numpy(arr).permute(2, 0, 1).float() / 255.0
+         mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
+         std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
+         return (x - mean) / std, target
+
+
+ def collate(batch):
+     batch = [b for b in batch if b is not None]
+     if not batch:
+         return None
+     xs, ts = zip(*batch)
+     return torch.stack(xs), torch.stack(ts)
+
+
+ def eval_f1(student, pos_idx, neg_idx):
+     coco = COCO(f'{COCO_ROOT}/annotations/instances_val2017.json')
+     img_ids = sorted(coco.getImgIds())[:500]
+     id_to_file = {i['id']: i['file_name'] for i in coco.loadImgs(coco.getImgIds())}
+     MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1).to(DEVICE)
+     STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1).to(DEVICE)
+     scores, labels = [], []
+     student.eval()
+     with torch.inference_mode():
+         for img_id in img_ids:
+             fname = id_to_file.get(img_id)
+             if not fname:
+                 continue
+             img = Image.open(f'{COCO_ROOT}/val2017/{fname}').convert('RGB').resize((RES, RES), Image.BILINEAR)
+             arr = np.asarray(img, dtype=np.uint8).copy()
+             x = torch.from_numpy(arr).permute(2, 0, 1).unsqueeze(0).to(DEVICE).float() / 255.0
+             x = (x - MEAN) / STD
+             with torch.autocast('cuda', dtype=torch.bfloat16):
+                 out = student(x).float()[0]
+             scores.append((out[pos_idx].sum() - out[neg_idx].sum()).item())
+             labels.append(any(a['category_id'] == 1
+                               for a in coco.loadAnns(coco.getAnnIds(imgIds=img_id, iscrowd=False))))
+     scores = torch.tensor(scores); labels = torch.tensor(labels, dtype=torch.bool)
+     uniq = torch.unique(scores).sort().values
+     best = (0, 0, 0, 0)
+     for t in uniq.tolist()[::max(1, len(uniq) // 500)]:
+         pred = scores > t
+         tp = (pred & labels).sum().float()
+         fp = (pred & ~labels).sum().float()
+         fn = (~pred & labels).sum().float()
+         prec = tp / (tp + fp).clamp(min=1)
+         rec = tp / (tp + fn).clamp(min=1)
+         f1 = (2 * prec * rec / (prec + rec).clamp(min=1e-9)).item()
+         if f1 > best[0]:
+             best = (f1, t, prec.item(), rec.item())
+     return best
+
+
+ def main():
+     os.makedirs(OUT_DIR, exist_ok=True)
+     pack = torch.load(TARGETS, map_location='cpu', weights_only=False)
+     print(f'[init] {pack["targets"].shape[0]} targets shape {tuple(pack["targets"].shape)}',
+           flush=True)
+
+     # In the 40-D target vector, [0..19] are pos dims, [20..39] are neg dims (built that way by prepare_targets)
+     pos_idx = torch.arange(0, 20, device=DEVICE)
+     neg_idx = torch.arange(20, 40, device=DEVICE)
+
+     # Pre-compute teacher scalar scores: (N,)
+     teacher_scalar = pack['targets'].float()[:, :20].sum(1) - pack['targets'].float()[:, 20:].sum(1)
+     pack['teacher_scalar'] = teacher_scalar
+     print(f'[init] teacher scalar stats: mean={teacher_scalar.mean():.3f} '
+           f'std={teacher_scalar.std():.3f}', flush=True)
+
+     ds = CocoImgDataset(COCO_ROOT, pack)
+     loader = torch.utils.data.DataLoader(
+         ds, batch_size=BATCH, shuffle=True, num_workers=4,
+         pin_memory=True, collate_fn=collate, drop_last=True)
+
+     student = SpecialistStudent().to(DEVICE)
+     nparams = sum(p.numel() for p in student.parameters())
+     print(f'[student] {nparams:,} params = {nparams/1e6:.2f}M', flush=True)
+
+     total_steps = EPOCHS * len(loader)
+     warmup = int(total_steps * WARMUP_FRAC)
+     opt = torch.optim.AdamW(student.parameters(), lr=LR, weight_decay=WD)
+     sched = torch.optim.lr_scheduler.LambdaLR(
+         opt, lambda s: s / max(1, warmup) if s < warmup
+         else 0.5 * (1 + math.cos(math.pi * (s - warmup) / max(1, total_steps - warmup))))
+
+     log = {'student_params': nparams, 'loss': 'MSE_on_classifier_scalar', 'epochs': []}
+     step = 0; t0 = time.time()
+     for ep in range(EPOCHS):
+         student.train()
+         ep_loss, n_batches = 0.0, 0
+         for batch in loader:
+             if batch is None:
+                 continue
+             x, y = batch
+             x = x.to(DEVICE, non_blocking=True); y = y.to(DEVICE, non_blocking=True)
+             with torch.autocast('cuda', dtype=torch.bfloat16):
+                 pred = student(x)  # (B, 40)
+             pred = pred.float()
+             student_scalar = pred[:, :20].sum(1) - pred[:, 20:].sum(1)  # (B,)
+             teacher_scalar_b = y[:, :20].sum(1) - y[:, 20:].sum(1)
+             loss = F.mse_loss(student_scalar, teacher_scalar_b)
+             opt.zero_grad(set_to_none=True)
+             loss.backward()
+             torch.nn.utils.clip_grad_norm_(student.parameters(), 1.0)
+             opt.step(); sched.step()
+             ep_loss += loss.item(); n_batches += 1; step += 1
+             if step % 500 == 0:
+                 print(f' ep {ep+1}/{EPOCHS} step {step}/{total_steps} '
+                       f'loss={loss.item():.4f} lr={opt.param_groups[0]["lr"]:.2e} '
+                       f'{(time.time()-t0)/60:.1f} min', flush=True)
+         avg = ep_loss / max(1, n_batches)
+         f1, thr, p, r = eval_f1(student, pos_idx, neg_idx)
+         print(f'[ep {ep+1}] loss={avg:.4f} F1={f1:.4f} P={p:.4f} R={r:.4f} '
+               f'θ={thr:.3f} {(time.time()-t0)/60:.1f} min', flush=True)
+         log['epochs'].append({'epoch': ep + 1, 'loss': avg,
+                               'F1': f1, 'precision': p, 'recall': r, 'threshold': thr})
+         if (ep + 1) % 5 == 0 or ep == EPOCHS - 1:
+             save_file(student.state_dict(), f'{OUT_DIR}/student_ep{ep+1}.safetensors')
+             with open(f'{OUT_DIR}/training_log.json', 'w') as f:
+                 json.dump(log, f, indent=2)
+
+     save_file(student.state_dict(), f'{OUT_DIR}/student_final.safetensors')
+     print(f'[done] total {(time.time()-t0)/60:.1f} min', flush=True)
+
+
+ if __name__ == '__main__':
+     main()
stage_4c/training_log.json ADDED
@@ -0,0 +1,126 @@
+ {
+   "student_params": 3267304,
+   "loss": "MSE_on_classifier_scalar",
+   "epochs": [
+     {
+       "epoch": 1,
+       "loss": 338.8909246066799,
+       "F1": 0.7214378118515015,
+       "precision": 0.5653923749923706,
+       "recall": 0.9964538812637329,
+       "threshold": 24.078125
+     },
+     {
+       "epoch": 2,
+       "loss": 328.9103960763321,
+       "F1": 0.7205128073692322,
+       "precision": 0.564257025718689,
+       "recall": 0.9964538812637329,
+       "threshold": 28.883697509765625
+     },
+     {
+       "epoch": 3,
+       "loss": 328.8529856922911,
+       "F1": 0.72258061170578,
+       "precision": 0.5679513216018677,
+       "recall": 0.9929078221321106,
+       "threshold": 28.009765625
+     },
+     {
+       "epoch": 4,
+       "loss": 326.7125272106021,
+       "F1": 0.7221510410308838,
+       "precision": 0.5651302337646484,
+       "recall": 1.0,
+       "threshold": 26.10400390625
+     },
+     {
+       "epoch": 5,
+       "loss": 325.2882255967271,
+       "F1": 0.7241829633712769,
+       "precision": 0.5734989643096924,
+       "recall": 0.9822695255279541,
+       "threshold": 26.7830810546875
+     },
+     {
+       "epoch": 6,
+       "loss": 326.14567763865386,
+       "F1": 0.7298701405525208,
+       "precision": 0.5758196711540222,
+       "recall": 0.9964538812637329,
+       "threshold": 26.697509765625
+     },
+     {
+       "epoch": 7,
+       "loss": 325.15346100816646,
+       "F1": 0.7221510410308838,
+       "precision": 0.5651302337646484,
+       "recall": 1.0,
+       "threshold": 24.860595703125
+     },
+     {
+       "epoch": 8,
+       "loss": 321.891113616252,
+       "F1": 0.7402032017707825,
+       "precision": 0.6265356540679932,
+       "recall": 0.9042553305625916,
+       "threshold": 25.110595703125
+     },
+     {
+       "epoch": 9,
+       "loss": 324.80503095718103,
+       "F1": 0.7258687615394592,
+       "precision": 0.5696969628334045,
+       "recall": 1.0,
+       "threshold": 24.62255859375
+     },
+     {
+       "epoch": 10,
+       "loss": 323.8681324547888,
+       "F1": 0.7338129878044128,
+       "precision": 0.6174334287643433,
+       "recall": 0.9042553305625916,
+       "threshold": 25.044189453125
+     },
+     {
+       "epoch": 11,
+       "loss": 322.2317366366947,
+       "F1": 0.7270233035087585,
+       "precision": 0.5928411483764648,
+       "recall": 0.9397163391113281,
+       "threshold": 25.876220703125
+     },
+     {
+       "epoch": 12,
+       "loss": 323.8775074345158,
+       "F1": 0.7272727489471436,
+       "precision": 0.5786163806915283,
+       "recall": 0.978723406791687,
+       "threshold": 26.55078125
+     },
+     {
+       "epoch": 13,
+       "loss": 322.46046450084134,
+       "F1": 0.729658842086792,
+       "precision": 0.5791666507720947,
+       "recall": 0.9858155846595764,
+       "threshold": 25.8525390625
+     },
+     {
+       "epoch": 14,
+       "loss": 321.4742774784218,
+       "F1": 0.7279894948005676,
+       "precision": 0.5782880783081055,
+       "recall": 0.9822695255279541,
+       "threshold": 25.83251953125
+     },
+     {
+       "epoch": 15,
+       "loss": 322.15041381942024,
+       "F1": 0.7289474010467529,
+       "precision": 0.5794979333877563,
+       "recall": 0.9822695255279541,
+       "threshold": 25.84033203125
+     }
+   ]
+ }