DJLougen committed on

Commit 7163e41 · verified · 1 Parent(s): 1507f54

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,87 @@
---
license: other
license_name: polyform-small-business-1.0.0
license_link: https://polyformproject.org/licenses/small-business/1.0.0/
library_name: pytorch
pipeline_tag: video-classification
tags:
- bio-inspired
- neuroscience
- lightweight
- temporal
- video-understanding
- action-recognition
- retinal-ganglion-cells
- fibonacci-strides
datasets:
- ucf101
---

# MPKNet V6.2 Temporal - Bio-Inspired Video Classification

An extension of MPKNet V6 that adds temporal processing to the M (Magnocellular) pathway for video understanding and action recognition. The M pathway processes 8 consecutive frames and computes inter-frame deltas to capture motion, while the P pathway sees only the current frame for spatial detail.

## Architecture

Built on the three-pathway design with Fibonacci strides (2:3:5):

- **P pathway** (stride 2): Current frame only - fine spatial detail
- **K pathway** (stride 3): Current frame only - context and gating signals
- **M pathway** (stride 5): 8 consecutive frames - computes 7 inter-frame deltas for motion

The M pathway uses shared Conv2D weights across all frames, computing learned deltas between consecutive frame pairs. A temporal fusion module combines all 7 deltas into a single motion representation, as sketched below.

For static images, the model generates pseudo-frames via progressive scale and blur augmentation, teaching M to detect change even without real motion. This transfers to real video at inference.
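
The core temporal step, condensed (monocular, for brevity) from `SequentialTemporalMPathway` in `mpknet_v6_2_temporal.py` below; the actual module runs the left- and right-eye streams in parallel:

```python
# Shared M block on every frame, learned deltas between neighbors,
# then 1x1-conv fusion of the 7 deltas into one motion map.
feats = [m_block(frames[:, t]) for t in range(num_frames)]        # 8 x [B, ch, H', W']
deltas = [delta_conv(torch.cat([feats[t], feats[t + 1]], dim=1))  # 7 x [B, ch, H', W']
          for t in range(num_frames - 1)]
motion = temporal_fuse(torch.cat(deltas, dim=1))                  # [B, ch, H', W']
```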

## Results

| Dataset | Classes | Accuracy | Parameters |
|---------|---------|----------|------------|
| UCF-101 | 101 | 77% | 0.58M |

Trained from scratch with no pretraining: 0.58M parameters processing 8-frame sequences.

## Usage

```python
import torch
from mpknet_v6_2_temporal import BinocularMPKNetV6_2
from mpknet_components import count_params

model = BinocularMPKNetV6_2(num_classes=101, ch=48, use_stereo=True, num_frames=8)
state_dict = torch.load("mpknet_v6_2_ucf101_best.pt", map_location="cpu", weights_only=True)
model.load_state_dict(state_dict)
model.eval()
print(f"{count_params(model) / 1e6:.2f}M parameters")

# Inference with video frames
current_frame = torch.randn(1, 3, 224, 224)
frame_sequence = torch.randn(1, 8, 3, 224, 224)

with torch.no_grad():
    logits = model(current_frame, frames=frame_sequence)
    pred = logits.argmax(dim=1)

# Or with a static image (auto-generates pseudo-frames)
with torch.no_grad():
    logits = model(current_frame)
```
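
For real video, the frame sequence must be shaped `[B, 8, 3, H, W]`. A minimal sampling sketch, assuming a clip already decoded to a `[T, C, H, W]` float tensor in `[0, 1]` (the `sample_clip` helper is illustrative, not part of this repo):

```python
import torch
import torch.nn.functional as F

def sample_clip(video: torch.Tensor, num_frames: int = 8, size: int = 224) -> torch.Tensor:
    """Uniformly sample num_frames frames and resize them to size x size."""
    idx = torch.linspace(0, video.shape[0] - 1, num_frames).long()
    clip = video[idx]  # [num_frames, C, H, W]
    clip = F.interpolate(clip, size=(size, size), mode="bilinear", align_corners=False)
    return clip.unsqueeze(0)  # [1, num_frames, C, size, size]
```

The current frame for the P pathway is then `clip[:, -1]`, matching the convention above that the last frame is the present one.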

## Files

- `mpknet_v6_2_ucf101_best.pt` - Trained weights (UCF-101, 101 classes, 7.3MB)
- `mpknet_v6_2_temporal.py` - Model architecture with SequentialTemporalMPathway
- `mpknet_components.py` - Shared components

## Citation

```
D.J. Lougen, "MPKNet: Bio-Inspired Visual Classification with Parallel LGN Pathways", 2025.
```

## License

PolyForm Small Business License 1.0.0 - free for qualifying small businesses, non-profits, and education; see the linked license for the exact size thresholds.

## Links

- [GitHub](https://github.com/DJLougen/MPKnet)
mpknet_components.py ADDED
@@ -0,0 +1,397 @@
"""
Shared components for all MPKNet model variants.

Contains building blocks used across V1, V2, V3, V4 and detection models:
- RGCLayer: Biologically accurate retinal ganglion cell preprocessing
- BinocularPreMPK: Legacy retinal preprocessing (deprecated, use RGCLayer)
- StereoDisparity: Stereo disparity simulation
- OcularDominanceConv: Convolution with ocular dominance channels
- BinocularMPKPathway: Pathway with binocular processing
- MonocularPathwayBlock: Pathway keeping eyes separate
- StridedMonocularBlock: Strided pathway for V4
"""

import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import Tuple

class RGCLayer(nn.Module):
    """
    Biologically accurate Retinal Ganglion Cell layer.

    Based on Kim et al. 2021 "Retinal Ganglion Cells—Diversity of Cell Types
    and Clinical Relevance" (Front. Neurol. 12:661938).

    Models three main RGC types that feed the P/K/M pathways:

    1. MIDGET RGCs (~70% of RGCs):
       - Small receptive field (5-10 μm dendritic field)
       - Center-surround via Difference of Gaussians (DoG)
       - Red-Green color opponency (L-M or M-L)
       - Feeds PARVOCELLULAR (P) pathway
       - High spatial acuity, low temporal resolution

    2. PARASOL RGCs (~10% of RGCs):
       - Large receptive field (30-300 μm dendritic field)
       - Center-surround DoG on luminance
       - Achromatic (no color, L+M pooled)
       - Feeds MAGNOCELLULAR (M) pathway
       - Motion detection, high temporal resolution

    3. SMALL BISTRATIFIED RGCs (~5-8% of RGCs):
       - Medium receptive field
       - S-cone ON center, (L+M) OFF surround
       - Blue-Yellow opponency
       - Feeds KONIOCELLULAR (K) pathway
       - Color context, particularly blue

    Key biological details implemented:
    - DoG (Difference of Gaussians) for center-surround RF
    - RF size ratios: Midget < Bistratified < Parasol
    - Surround ~3-6x larger than center (we use 3x)
    - ON-center and OFF-center populations (we use ON-center)
    """

    def __init__(
        self,
        midget_sigma: float = 0.8,    # Small RF for fine detail
        parasol_sigma: float = 2.5,   # Large RF for motion/gist
        bistrat_sigma: float = 1.2,   # Medium RF for color context
        surround_ratio: float = 3.0,  # Surround is 3x center
    ):
        super().__init__()

        self.midget_sigma = midget_sigma
        self.parasol_sigma = parasol_sigma
        self.bistrat_sigma = bistrat_sigma
        self.surround_ratio = surround_ratio

        # Create DoG kernels for each cell type
        self.register_buffer('midget_center', self._make_gaussian(midget_sigma))
        self.register_buffer('midget_surround', self._make_gaussian(midget_sigma * surround_ratio))

        self.register_buffer('parasol_center', self._make_gaussian(parasol_sigma))
        self.register_buffer('parasol_surround', self._make_gaussian(parasol_sigma * surround_ratio))

        self.register_buffer('bistrat_center', self._make_gaussian(bistrat_sigma))
        self.register_buffer('bistrat_surround', self._make_gaussian(bistrat_sigma * surround_ratio))

        # Store kernel sizes for padding calculation
        self.midget_ks = self.midget_surround.shape[-1]
        self.parasol_ks = self.parasol_surround.shape[-1]
        self.bistrat_ks = self.bistrat_surround.shape[-1]

    def _make_gaussian(self, sigma: float) -> torch.Tensor:
        """Create a normalized 2D Gaussian kernel."""
        ks = int(6 * sigma + 1) | 1  # Ensure odd, cover 3 sigma each side
        ax = torch.arange(ks, dtype=torch.float32) - ks // 2
        xx, yy = torch.meshgrid(ax, ax, indexing='ij')
        kernel = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        kernel = kernel / kernel.sum()  # Normalize
        return kernel.unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)

    def _apply_dog(
        self,
        x: torch.Tensor,
        center_kernel: torch.Tensor,
        surround_kernel: torch.Tensor,
        kernel_size: int
    ) -> torch.Tensor:
        """Apply Difference of Gaussians (center - surround)."""
        B, C, H, W = x.shape
        padding = kernel_size // 2

        # Expand kernels for all channels
        center_k = center_kernel.expand(C, 1, -1, -1)
        surround_k = surround_kernel.expand(C, 1, -1, -1)

        # Pad the center kernel to match the surround kernel's size if needed
        c_size = center_k.shape[-1]
        s_size = surround_k.shape[-1]
        if c_size < s_size:
            pad_amt = (s_size - c_size) // 2
            center_k = F.pad(center_k, (pad_amt, pad_amt, pad_amt, pad_amt))

        # Apply center and surround
        center_response = F.conv2d(x, center_k, padding=padding, groups=C)
        surround_response = F.conv2d(x, surround_k, padding=padding, groups=C)

        # DoG: ON-center response (center - surround)
        return center_response - surround_response

    def forward(
        self,
        x_left: torch.Tensor,
        x_right: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor,
               torch.Tensor, torch.Tensor, torch.Tensor]:
        """
        Process left and right eye inputs through RGC populations.

        Returns (in order):
            P_left, M_left, K_left, P_right, M_right, K_right
            P: Midget RGC output (R-G opponency) -> P pathway
            M: Parasol RGC output (luminance DoG) -> M pathway
            K: Bistratified RGC output (S vs L+M) -> K pathway
        """
        # ========== MIDGET RGCs -> P pathway ==========
        # Red-Green opponency: L-cone vs M-cone
        # Approximate: R channel vs G channel
        # DoG on the opponent signal

        # Extract R and G channels (approximating L and M cones)
        R_left, G_left = x_left[:, 0:1], x_left[:, 1:2]
        R_right, G_right = x_right[:, 0:1], x_right[:, 1:2]

        # L-M opponency (R-G) with small receptive field DoG
        rg_left = R_left - G_left
        rg_right = R_right - G_right

        P_left = self._apply_dog(rg_left, self.midget_center, self.midget_surround, self.midget_ks)
        P_right = self._apply_dog(rg_right, self.midget_center, self.midget_surround, self.midget_ks)

        # Expand back to 3 channels for compatibility
        P_left = P_left.expand(-1, 3, -1, -1)
        P_right = P_right.expand(-1, 3, -1, -1)

        # ========== PARASOL RGCs -> M pathway ==========
        # Achromatic: pool L+M (approximate as luminance)
        # Large RF DoG for motion sensitivity

        lum_left = 0.299 * x_left[:, 0:1] + 0.587 * x_left[:, 1:2] + 0.114 * x_left[:, 2:3]
        lum_right = 0.299 * x_right[:, 0:1] + 0.587 * x_right[:, 1:2] + 0.114 * x_right[:, 2:3]

        M_left = self._apply_dog(lum_left, self.parasol_center, self.parasol_surround, self.parasol_ks)
        M_right = self._apply_dog(lum_right, self.parasol_center, self.parasol_surround, self.parasol_ks)

        # Expand to 3 channels
        M_left = M_left.expand(-1, 3, -1, -1)
        M_right = M_right.expand(-1, 3, -1, -1)

        # ========== SMALL BISTRATIFIED RGCs -> K pathway ==========
        # S-cone ON center, (L+M) OFF surround
        # Blue-Yellow opponency: S vs (L+M)

        # S-cone approximated by B channel
        # (L+M) approximated by (R+G)/2
        S_left = x_left[:, 2:3]  # Blue
        S_right = x_right[:, 2:3]
        LM_left = (x_left[:, 0:1] + x_left[:, 1:2]) / 2
        LM_right = (x_right[:, 0:1] + x_right[:, 1:2]) / 2

        # S - (L+M) opponency with medium RF
        by_left = S_left - LM_left
        by_right = S_right - LM_right

        K_left = self._apply_dog(by_left, self.bistrat_center, self.bistrat_surround, self.bistrat_ks)
        K_right = self._apply_dog(by_right, self.bistrat_center, self.bistrat_surround, self.bistrat_ks)

        # Expand to 3 channels
        K_left = K_left.expand(-1, 3, -1, -1)
        K_right = K_right.expand(-1, 3, -1, -1)

        return P_left, M_left, K_left, P_right, M_right, K_right
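
# Illustrative shape check (an editorial addition, not in the original file):
# DoG filtering is same-size, so RGCLayer preserves spatial dims and returns
# six 3-channel maps.
#   rgc = RGCLayer()
#   P_l, M_l, K_l, P_r, M_r, K_r = rgc(torch.randn(1, 3, 224, 224),
#                                      torch.randn(1, 3, 224, 224))
#   each output: [1, 3, 224, 224]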

class BinocularPreMPK(nn.Module):
    """
    Simulates retinal + LGN preprocessing for both eyes.
    Each eye gets its own center-surround filtering.

    Biological motivation:
    - Retinal ganglion cells have center-surround receptive fields
    - M cells respond to luminance changes (motion/gist)
    - P cells respond to color/detail (high-pass filtered)
    """
    def __init__(self, sigma: float = 1.0):
        super().__init__()
        self.sigma = sigma
        ks = int(4 * sigma + 1) | 1  # ensure odd
        ax = torch.arange(ks, dtype=torch.float32) - ks // 2
        xx, yy = torch.meshgrid(ax, ax, indexing='ij')
        kernel = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        kernel = kernel / kernel.sum()
        self.register_buffer('gauss', kernel.unsqueeze(0).unsqueeze(0))
        self.ks = ks

    def _blur(self, x: torch.Tensor) -> torch.Tensor:
        B, C, H, W = x.shape
        kernel = self.gauss.expand(C, 1, self.ks, self.ks)
        return F.conv2d(x, kernel, padding=self.ks // 2, groups=C)

    def forward(self, x_left: torch.Tensor, x_right: torch.Tensor) -> Tuple[torch.Tensor, ...]:
        """
        Returns (P_left, M_left, P_right, M_right)
        P = high-pass (center - surround) for detail
        M = low-pass luminance for motion/gist
        """
        # Left eye
        blur_L = self._blur(x_left)
        P_left = x_left - blur_L  # high-pass (Parvo-like)
        lum_L = x_left.mean(dim=1, keepdim=True)
        M_left = self._blur(lum_L).expand(-1, 3, -1, -1)  # low-pass luminance (Magno-like)

        # Right eye
        blur_R = self._blur(x_right)
        P_right = x_right - blur_R
        lum_R = x_right.mean(dim=1, keepdim=True)
        M_right = self._blur(lum_R).expand(-1, 3, -1, -1)

        return P_left, M_left, P_right, M_right
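
# The PreMPK split per eye, in formula form (* = convolution with the
# Gaussian G_sigma above; an editorial note, not in the original file):
#   P = x - G_sigma * x          (high-pass residual: Parvo-like detail)
#   M = G_sigma * mean_c(x)      (blurred luminance: Magno-like gist)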

class StereoDisparity(nn.Module):
    """
    Creates stereo disparity by horizontally shifting left/right views.
    Simulates the slight positional difference between two eyes.

    disparity_range: maximum pixel shift (positive = crossed disparity)
    """
    def __init__(self, disparity_range: int = 2):
        super().__init__()
        self.disparity_range = disparity_range

    def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        """
        Takes a single image, returns (left_view, right_view) with disparity.
        Training uses a random disparity; inference uses a fixed small disparity.
        """
        B, C, H, W = x.shape

        if self.training:
            d = torch.randint(-self.disparity_range, self.disparity_range + 1, (1,)).item()
        else:
            d = 1

        if d == 0:
            return x, x

        if d > 0:
            x_left = F.pad(x[:, :, :, d:], (0, d, 0, 0), mode='replicate')
            x_right = F.pad(x[:, :, :, :-d], (d, 0, 0, 0), mode='replicate')
        else:
            d = -d
            x_left = F.pad(x[:, :, :, :-d], (d, 0, 0, 0), mode='replicate')
            x_right = F.pad(x[:, :, :, d:], (0, d, 0, 0), mode='replicate')

        return x_left, x_right

class OcularDominanceConv(nn.Module):
    """
    Convolution with ocular dominance - channels are assigned to left/right eye
    with graded mixing (some purely monocular, some binocular).

    Inspired by V1 ocular dominance columns but applied at the LGN stage
    for computational efficiency.
    """
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int,
                 monocular_ratio: float = 0.5):
        super().__init__()
        self.out_ch = out_ch
        self.monocular_ratio = monocular_ratio

        n_mono = int(out_ch * monocular_ratio)
        n_mono_per_eye = n_mono // 2
        n_bino = out_ch - 2 * n_mono_per_eye

        self.n_left = n_mono_per_eye
        self.n_right = n_mono_per_eye
        self.n_bino = n_bino

        self.conv_left = nn.Conv2d(in_ch, n_mono_per_eye, kernel_size, padding=kernel_size // 2)
        self.conv_right = nn.Conv2d(in_ch, n_mono_per_eye, kernel_size, padding=kernel_size // 2)
        self.conv_bino_L = nn.Conv2d(in_ch, n_bino, kernel_size, padding=kernel_size // 2)
        self.conv_bino_R = nn.Conv2d(in_ch, n_bino, kernel_size, padding=kernel_size // 2)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x_left: torch.Tensor, x_right: torch.Tensor) -> torch.Tensor:
        left_only = self.conv_left(x_left)
        right_only = self.conv_right(x_right)
        bino = self.conv_bino_L(x_left) + self.conv_bino_R(x_right)
        out = torch.cat([left_only, right_only, bino], dim=1)
        return F.relu(self.bn(out))

class BinocularMPKPathway(nn.Module):
    """
    Single pathway (M, P, or K) with binocular processing.
    Receives left and right eye inputs, produces fused output.
    """
    def __init__(self, in_ch: int, out_ch: int, kernel_sizes: list,
                 monocular_ratio: float = 0.5):
        super().__init__()

        layers = []
        for i, ks in enumerate(kernel_sizes):
            if i == 0:
                # First layer fuses the two eyes via ocular dominance channels
                layers.append(OcularDominanceConv(in_ch, out_ch, ks, monocular_ratio))
            else:
                layers.append(nn.Sequential(
                    nn.Conv2d(out_ch, out_ch, ks, padding=ks // 2),
                    nn.BatchNorm2d(out_ch),
                    nn.ReLU(inplace=True)
                ))

        self.first_layer = layers[0]
        self.rest = nn.Sequential(*layers[1:]) if len(layers) > 1 else nn.Identity()

    def forward(self, x_left: torch.Tensor, x_right: torch.Tensor) -> torch.Tensor:
        x = self.first_layer(x_left, x_right)
        return self.rest(x)

class MonocularPathwayBlock(nn.Module):
    """
    Single pathway block that keeps left/right eyes separate.
    Used for LGN processing where eye segregation persists.
    """
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int):
        super().__init__()
        self.conv_left = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True)
        )
        self.conv_right = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True)
        )

    def forward(self, x_left: torch.Tensor, x_right: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        return self.conv_left(x_left), self.conv_right(x_right)

class StridedMonocularBlock(nn.Module):
    """
    Monocular pathway block with configurable stride.
    Keeps left/right eyes separate, uses stride to control spatial sampling.

    Used in V4 for stride-based pathway differentiation.
    """
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, stride: int = 1):
        super().__init__()
        padding = kernel_size // 2
        self.conv_left = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, stride=stride, padding=padding),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True)
        )
        self.conv_right = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, stride=stride, padding=padding),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True)
        )

    def forward(self, x_left: torch.Tensor, x_right: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        return self.conv_left(x_left), self.conv_right(x_right)

def count_params(model: nn.Module) -> int:
    """Count total trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
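
# Minimal smoke test (an illustrative addition, not part of the original
# upload): run `python mpknet_components.py` to sanity-check shapes.
if __name__ == "__main__":
    x_l, x_r = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)

    rgc = RGCLayer()
    outs = rgc(x_l, x_r)
    print("RGCLayer outputs:", [tuple(o.shape) for o in outs])

    block = StridedMonocularBlock(3, 48, kernel_size=5, stride=2)
    y_l, y_r = block(x_l, x_r)
    print("StridedMonocularBlock:", tuple(y_l.shape), tuple(y_r.shape))
    print("Block params:", count_params(block))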
mpknet_v6_2_temporal.py ADDED
@@ -0,0 +1,379 @@
"""
BinocularMPKNet V6.2 - Sequential Temporal M-Pathway

Key insight: M processes 8 frames sequentially, computing deltas between
consecutive frames. P sees only the current frame for detail.

M stream: f0→f1→f2→f3→f4→f5→f6→f7 (8 frames, 7 deltas)
P stream: f7 only (current frame, full detail)

This is Conv2D-based temporal processing:
- Same M weights applied to each frame
- Deltas capture motion between consecutive frames
- Fusion learns motion patterns (acceleration, direction change, etc.)

For static images: use augmented "pseudo-frames" (scales, shifts)
For video: use actual consecutive frames
"""

import torch
import torch.nn as nn
import torch.nn.functional as F

from mpknet_components import (
    BinocularPreMPK,
    StereoDisparity,
    StridedMonocularBlock,
    count_params,
)

class SequentialTemporalMPathway(nn.Module):
    """
    M-pathway that processes 8 frames sequentially.

    Computes features for each frame, then deltas between consecutive frames.
    Fuses all 7 deltas into a single motion representation.
    """
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int, stride: int, num_frames: int = 8):
        super().__init__()

        self.num_frames = num_frames
        self.num_deltas = num_frames - 1

        # Shared M block (same weights for all frames)
        self.m_block = StridedMonocularBlock(in_ch, out_ch, kernel_size, stride)

        # Delta processing: learn how to combine consecutive features
        # instead of using raw subtraction
        self.delta_conv = nn.Conv2d(out_ch * 2, out_ch, kernel_size=1, bias=False)

        # Temporal fusion: combine all deltas into motion features
        self.temporal_fuse = nn.Sequential(
            nn.Conv2d(out_ch * self.num_deltas, out_ch * 2, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch * 2),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch * 2, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, frames_left: torch.Tensor, frames_right: torch.Tensor):
        """
        Args:
            frames_left: [B, num_frames, C, H, W] - left eye frame sequence
            frames_right: [B, num_frames, C, H, W] - right eye frame sequence

        Returns:
            motion_left, motion_right: [B, out_ch, H', W'] - motion features
        """
        # Process all frames through M (shared weights)
        feats_left = []
        feats_right = []

        for t in range(self.num_frames):
            fl, fr = self.m_block(frames_left[:, t], frames_right[:, t])
            feats_left.append(fl)
            feats_right.append(fr)

        # Compute deltas between consecutive frames
        deltas_left = []
        deltas_right = []

        for t in range(self.num_deltas):
            # Concatenate consecutive features and learn the delta
            pair_left = torch.cat([feats_left[t], feats_left[t + 1]], dim=1)
            pair_right = torch.cat([feats_right[t], feats_right[t + 1]], dim=1)

            deltas_left.append(self.delta_conv(pair_left))
            deltas_right.append(self.delta_conv(pair_right))

        # Fuse all deltas into motion representation
        all_deltas_left = torch.cat(deltas_left, dim=1)  # [B, out_ch * num_deltas, H, W]
        all_deltas_right = torch.cat(deltas_right, dim=1)

        motion_left = self.temporal_fuse(all_deltas_left)
        motion_right = self.temporal_fuse(all_deltas_right)

        return motion_left, motion_right
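
# Worked shapes (illustrative) for ch=48, 224x224 input, kernel 5, m_stride=5:
#   per-frame features: 8 x [B, 48, 45, 45]   (shared m_block weights)
#   deltas:             7 x [B, 48, 45, 45]   (1x1 conv on concatenated pairs)
#   fusion:   cat -> [B, 336, 45, 45] -> temporal_fuse -> [B, 48, 45, 45]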

class BinocularMPKNetV6_2(nn.Module):
    """
    Binocular MPKNet V6.2 with sequential temporal M-pathway.

    For video:
    - M sees 8 consecutive frames, extracts motion
    - P sees current frame only, extracts detail
    - Fusion combines motion + detail

    For static images (training):
    - Generate pseudo-frames via augmentation (scale, shift, blur)
    - M learns to detect "change" even without real motion
    - Transfers to real video at test time
    """
    def __init__(self, num_classes: int = 10, ch: int = 48,
                 use_stereo: bool = True, disparity_range: int = 2,
                 kernel_size: int = 5, num_frames: int = 8):
        super().__init__()

        self.use_stereo = use_stereo
        self.kernel_size = kernel_size
        self.num_frames = num_frames

        # Fibonacci strides
        self.p_stride = 2
        self.k_stride = 3
        self.m_stride = 5

        if use_stereo:
            self.stereo = StereoDisparity(disparity_range)

        self.pre_mpk = BinocularPreMPK(sigma=1.0)

        # ========== BLOCK 1 ==========
        # P pathway: single frame, high detail
        self.P_block1_layer1 = StridedMonocularBlock(3, ch, kernel_size, stride=self.p_stride)
        self.P_block1_layer2 = StridedMonocularBlock(ch, ch, kernel_size, stride=1)

        # K pathway: single frame, context
        self.K_block1 = StridedMonocularBlock(3, ch // 2, kernel_size, stride=self.k_stride)

        # M pathway: TEMPORAL - processes num_frames frames
        self.M_block1 = SequentialTemporalMPathway(3, ch, kernel_size, stride=self.m_stride, num_frames=num_frames)

        # K gates for block 1
        self.k_gate1_M_left = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
        self.k_gate1_M_right = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
        self.k_gate1_P_left = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
        self.k_gate1_P_right = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())

        # ========== BLOCK 2 ==========
        self.P_block2_layer1 = StridedMonocularBlock(ch, ch, kernel_size, stride=1)
        self.P_block2_layer2 = StridedMonocularBlock(ch, ch, kernel_size, stride=1)

        self.K_block2 = StridedMonocularBlock(ch // 2, ch // 2, kernel_size, stride=1)

        # M block 2: single-frame processing on motion features
        self.M_block2 = StridedMonocularBlock(ch, ch, kernel_size, stride=1)

        # K gates for block 2
        self.k_gate2_M_left = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
        self.k_gate2_M_right = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
        self.k_gate2_P_left = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
        self.k_gate2_P_right = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())

        # ========== V1 FUSION ==========
        self.v1_fusion = nn.Sequential(
            nn.Conv2d(ch * 4, ch * 2, 1),
            nn.BatchNorm2d(ch * 2),
            nn.ReLU(inplace=True),
        )

        # Classification head
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.dropout = nn.Dropout(p=0.5)
        self.fc = nn.Linear(ch * 2, num_classes)

    def forward(self, x: torch.Tensor, frames: torch.Tensor = None) -> torch.Tensor:
        """
        Args:
            x: [B, C, H, W] - current frame (for P pathway)
            frames: [B, num_frames, C, H, W] - frame sequence (for M pathway)
                If None, generates pseudo-frames from x
        """
        # Generate pseudo-frames if not provided (for static image training)
        if frames is None:
            frames = self._generate_pseudo_frames(x)

        # Create stereo views for current frame
        if self.use_stereo:
            x_left, x_right = self.stereo(x)
            # Also create stereo views for all frames. Call stereo once per
            # frame so the left/right views share the same (random) disparity.
            pairs = [self.stereo(frames[:, t]) for t in range(self.num_frames)]
            frames_left = torch.stack([p[0] for p in pairs], dim=1)
            frames_right = torch.stack([p[1] for p in pairs], dim=1)
        else:
            x_left, x_right = x, x
            frames_left = frames
            frames_right = frames

        # Retinal preprocessing for current frame (P pathway)
        P_left, _, P_right, _ = self.pre_mpk(x_left, x_right)

        # Retinal preprocessing for all frames (M pathway)
        M_frames_left = []
        M_frames_right = []
        for t in range(self.num_frames):
            _, ml, _, mr = self.pre_mpk(frames_left[:, t], frames_right[:, t])
            M_frames_left.append(ml)
            M_frames_right.append(mr)
        M_frames_left = torch.stack(M_frames_left, dim=1)  # [B, num_frames, C, H, W]
        M_frames_right = torch.stack(M_frames_right, dim=1)

        # ========== BLOCK 1 ==========
        # K pathway (from current frame)
        K_left, K_right = self.K_block1(P_left, P_right)

        # P pathway (current frame only)
        P_left, P_right = self.P_block1_layer1(P_left, P_right)
        P_left, P_right = self.P_block1_layer2(P_left, P_right)

        # M pathway (all frames - TEMPORAL)
        M_left, M_right = self.M_block1(M_frames_left, M_frames_right)

        # K gating
        k_ctx1_left = self.gap(K_left).flatten(1)
        k_ctx1_right = self.gap(K_right).flatten(1)

        gate1_M_left = self.k_gate1_M_left(k_ctx1_left).unsqueeze(-1).unsqueeze(-1)
        gate1_M_right = self.k_gate1_M_right(k_ctx1_right).unsqueeze(-1).unsqueeze(-1)
        gate1_P_left = self.k_gate1_P_left(k_ctx1_left).unsqueeze(-1).unsqueeze(-1)
        gate1_P_right = self.k_gate1_P_right(k_ctx1_right).unsqueeze(-1).unsqueeze(-1)

        M_left = M_left * gate1_M_left
        M_right = M_right * gate1_M_right
        P_left = P_left * gate1_P_left
        P_right = P_right * gate1_P_right

        # ========== BLOCK 2 ==========
        P_left, P_right = self.P_block2_layer1(P_left, P_right)
        P_left, P_right = self.P_block2_layer2(P_left, P_right)

        K_left, K_right = self.K_block2(K_left, K_right)

        M_left, M_right = self.M_block2(M_left, M_right)

        # K gating
        k_ctx2_left = self.gap(K_left).flatten(1)
        k_ctx2_right = self.gap(K_right).flatten(1)

        gate2_M_left = self.k_gate2_M_left(k_ctx2_left).unsqueeze(-1).unsqueeze(-1)
        gate2_M_right = self.k_gate2_M_right(k_ctx2_right).unsqueeze(-1).unsqueeze(-1)
        gate2_P_left = self.k_gate2_P_left(k_ctx2_left).unsqueeze(-1).unsqueeze(-1)
        gate2_P_right = self.k_gate2_P_right(k_ctx2_right).unsqueeze(-1).unsqueeze(-1)

        M_left = M_left * gate2_M_left
        M_right = M_right * gate2_M_right
        P_left = P_left * gate2_P_left
        P_right = P_right * gate2_P_right

        # ========== V1 FUSION ==========
        # Pool P and K down to M's spatial size before concatenation
        target_size = M_left.shape[-1]
        if P_left.shape[-1] != target_size:
            P_left = F.adaptive_avg_pool2d(P_left, target_size)
            P_right = F.adaptive_avg_pool2d(P_right, target_size)
        if K_left.shape[-1] != target_size:
            K_left = F.adaptive_avg_pool2d(K_left, target_size)
            K_right = F.adaptive_avg_pool2d(K_right, target_size)

        z = torch.cat([M_left, M_right, P_left, P_right], dim=1)
        z = self.v1_fusion(z)

        # Classification
        z = self.gap(z).flatten(1)
        z = self.dropout(z)
        return self.fc(z)

    def _generate_pseudo_frames(self, x: torch.Tensor) -> torch.Tensor:
        """
        Generate pseudo-frames from a single image for static image training.

        Creates a sequence by applying progressive transformations:
        - Scales (zoom in/out slightly)
        - Small translations
        - Blur levels

        This teaches M to detect "change" even without real motion.
        """
        frames = []

        # First frame: most different (smaller scale, slight blur)
        frames.append(self._augment_frame(x, scale=0.85, blur=1.5))

        # Middle frames: gradual progression toward the original
        for i in range(1, self.num_frames - 1):
            t = i / (self.num_frames - 1)  # 0 to 1
            scale = 0.85 + 0.15 * t        # 0.85 to 1.0
            blur = 1.5 * (1 - t)           # 1.5 to 0
            frames.append(self._augment_frame(x, scale=scale, blur=blur))

        # Last frame: original (current frame)
        frames.append(x)

        return torch.stack(frames, dim=1)  # [B, num_frames, C, H, W]
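
    # With the default num_frames=8, the schedule above works out to:
    #   scale: 0.850, 0.871, 0.893, 0.914, 0.936, 0.957, 0.979, 1.000
    #   blur:  1.500, 1.286, 1.071, 0.857, 0.643, 0.429, 0.214, 0.000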

    def _augment_frame(self, x: torch.Tensor, scale: float = 1.0, blur: float = 0.0) -> torch.Tensor:
        """Apply scale and blur augmentation to create a pseudo-frame."""
        B, C, H, W = x.shape

        # Scale
        if scale != 1.0:
            new_H, new_W = int(H * scale), int(W * scale)
            x = F.interpolate(x, size=(new_H, new_W), mode='bilinear', align_corners=False)
            # Pad or crop back to the original size
            if scale < 1.0:
                pad_h = (H - new_H) // 2
                pad_w = (W - new_W) // 2
                x = F.pad(x, (pad_w, W - new_W - pad_w, pad_h, H - new_H - pad_h), mode='reflect')
            else:
                start_h = (new_H - H) // 2
                start_w = (new_W - W) // 2
                x = x[:, :, start_h:start_h + H, start_w:start_w + W]

        # Blur (simple box blur approximation)
        if blur > 0:
            kernel_size = max(3, int(blur) * 2 + 1)  # always odd
            x = F.avg_pool2d(x, kernel_size, stride=1, padding=kernel_size // 2)

        return x

if __name__ == "__main__":
    print("=" * 60)
    print("V6 vs V6.1 vs V6.2 (Temporal) Comparison")
    print("=" * 60)

    v6_2 = BinocularMPKNetV6_2(num_classes=10, ch=48, num_frames=8)

    # mpknet_v6 / mpknet_v6_1 ship with the full MPKNet repo and are not part
    # of this upload; skip the baseline comparison if they are unavailable.
    try:
        from mpknet_v6 import BinocularMPKNetV6
        from mpknet_v6_1 import BinocularMPKNetV6_1

        v6 = BinocularMPKNetV6(num_classes=10, ch=48)
        v6_1 = BinocularMPKNetV6_1(num_classes=10, ch=48)
        print(f"V6   params: {count_params(v6)/1e6:.3f}M")
        print(f"V6.1 params: {count_params(v6_1)/1e6:.3f}M (dual-pass M)")
    except ImportError:
        print("mpknet_v6 / mpknet_v6_1 not found; skipping V6/V6.1 baselines.")

    print(f"V6.2 params: {count_params(v6_2)/1e6:.3f}M (8-frame temporal M)")
    print()

    # Test forward pass with a static image (pseudo-frames)
    x = torch.randn(2, 3, 32, 32)

    print("Testing V6.2 with static image (generates pseudo-frames):")
    y = v6_2(x)
    print(f"  Input: {x.shape} → Output: {y.shape}")
    print()

    # Test forward pass with actual video frames
    frames = torch.randn(2, 8, 3, 32, 32)  # 8 frames
    print("Testing V6.2 with video frames:")
    y = v6_2(x, frames=frames)
    print(f"  Current frame:  {x.shape}")
    print(f"  Frame sequence: {frames.shape}")
    print(f"  Output: {y.shape}")
    print()

    print("Architecture summary:")
    print("  P pathway: sees current frame only (detail)")
    print("  K pathway: sees current frame only (context/gating)")
    print("  M pathway: sees 8 frames, computes 7 deltas (motion)")
    print()
    print("For static images: pseudo-frames via scale/blur progression")
    print("For video: actual consecutive frames")
mpknet_v6_2_ucf101_best.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a0f8e24150057f717fbf8ab369dc0b060d9e63fa9f87d3f98c2a000ea6599ff
size 7614431