DJLougen commited on
Commit
2335bf1
·
verified ·
1 Parent(s): 4d14ce0

Upload folder using huggingface_hub

Browse files
Files changed (4) hide show
  1. README.md +85 -0
  2. mpknet_components.py +397 -0
  3. mpknet_v6.py +207 -0
  4. v6_kvasir_best.pth +3 -0
README.md ADDED
@@ -0,0 +1,85 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ license: other
3
+ license_name: polyform-small-business-1.0.0
4
+ license_link: https://polyformproject.org/licenses/small-business/1.0.0/
5
+ library_name: pytorch
6
+ pipeline_tag: image-classification
7
+ tags:
8
+ - bio-inspired
9
+ - neuroscience
10
+ - lightweight
11
+ - medical-imaging
12
+ - edge-ai
13
+ - retinal-ganglion-cells
14
+ - fibonacci-strides
15
+ datasets:
16
+ - kvasir-v2
17
+ - cifar-10
18
+ - cifar-100
19
+ - imagenet-100
20
+ ---
21
+
22
+ # MPKNet V6 - Bio-Inspired Visual Classification
23
+
24
+ A lightweight neural network inspired by the primate Lateral Geniculate Nucleus (LGN), implementing parallel **Parvocellular (P)**, **Koniocellular (K)**, and **Magnocellular (M)** pathways with Fibonacci-stride spatial sampling.
25
+
26
+ ## Architecture
27
+
28
+ MPKNet V6 uses three parallel pathways with biologically-motivated stride ratios (2:3:5):
29
+
30
+ - **P pathway** (stride 2): Fine detail and edges, analogous to Parvocellular neurons (~80% of LGN)
31
+ - **K pathway** (stride 3): Context signals that generate gating modulation, analogous to Koniocellular neurons (~10% of LGN)
32
+ - **M pathway** (stride 5): Global structure and coarse features, analogous to Magnocellular neurons (~10% of LGN)
33
+
34
+ The **K-gating mechanism** dynamically modulates P and M pathways via learned sigmoid gates, inspired by cross-stream modulation in biological vision.
35
+
36
+ ## Results
37
+
38
+ | Dataset | Classes | Accuracy | Parameters |
39
+ |---------|---------|----------|------------|
40
+ | Kvasir-v2 (GI endoscopy) | 8 | 89.2% | 0.21M |
41
+ | CIFAR-10 | 10 | 89.4% | 0.54M |
42
+ | CIFAR-100 | 100 | 58.8% | 0.22M |
43
+ | ImageNet-100 | 100 | 60.8% | 0.54M |
44
+
45
+ No pretraining. No augmentation. 161x fewer parameters than MobileNetV3-Small.
46
+
47
+ ## Usage
48
+
49
+ ```python
50
+ import torch
51
+ from mpknet_v6 import BinocularMPKNetV6
52
+ from mpknet_components import count_params
53
+
54
+ # Load model
55
+ model = BinocularMPKNetV6(num_classes=8, ch=48, use_stereo=True)
56
+ state_dict = torch.load("v6_kvasir_best.pth", map_location="cpu", weights_only=True)
57
+ model.load_state_dict(state_dict)
58
+ model.eval()
59
+
60
+ # Inference
61
+ x = torch.randn(1, 3, 224, 224)
62
+ with torch.no_grad():
63
+ logits = model(x)
64
+ pred = logits.argmax(dim=1)
65
+ ```
66
+
67
+ ## Files
68
+
69
+ - `v6_kvasir_best.pth` - Trained weights (Kvasir-v2, 8 classes, 2.1MB)
70
+ - `mpknet_v6.py` - Model architecture
71
+ - `mpknet_components.py` - Shared components (RGCLayer, BinocularPreMPK, StereoDisparity, StridedMonocularBlock)
72
+
73
+ ## Citation
74
+
75
+ ```
76
+ D.J. Lougen, "MPKNet: Bio-Inspired Visual Classification with Parallel LGN Pathways", 2025.
77
+ ```
78
+
79
+ ## License
80
+
81
+ PolyForm Small Business License 1.0.0 - Free for organizations with less than $100K revenue, non-profits, and education.
82
+
83
+ ## Links
84
+
85
+ - [GitHub](https://github.com/DJLougen/MPKnet)
mpknet_components.py ADDED
@@ -0,0 +1,397 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ Shared components for all MPKNet model variants.
3
+
4
+ Contains building blocks used across V1, V2, V3, V4 and detection models:
5
+ - RGCLayer: Biologically accurate retinal ganglion cell preprocessing
6
+ - BinocularPreMPK: Legacy retinal preprocessing (deprecated, use RGCLayer)
7
+ - StereoDisparity: Stereo disparity simulation
8
+ - OcularDominanceConv: Convolution with ocular dominance channels
9
+ - BinocularMPKPathway: Pathway with binocular processing
10
+ - MonocularPathwayBlock: Pathway keeping eyes separate
11
+ - StridedMonocularBlock: Strided pathway for V4
12
+ """
13
+
14
+ import torch
15
+ import torch.nn as nn
16
+ import torch.nn.functional as F
17
+ from typing import Tuple
18
+
19
+
20
+ class RGCLayer(nn.Module):
21
+ """
22
+ Biologically accurate Retinal Ganglion Cell layer.
23
+
24
+ Based on Kim et al. 2021 "Retinal Ganglion Cells—Diversity of Cell Types
25
+ and Clinical Relevance" (Front. Neurol. 12:661938).
26
+
27
+ Models three main RGC types that feed the P/K/M pathways:
28
+
29
+ 1. MIDGET RGCs (~70% of RGCs):
30
+ - Small receptive field (5-10 μm dendritic field)
31
+ - Center-surround via Difference of Gaussians (DoG)
32
+ - Red-Green color opponency (L-M or M-L)
33
+ - Feeds PARVOCELLULAR (P) pathway
34
+ - High spatial acuity, low temporal resolution
35
+
36
+ 2. PARASOL RGCs (~10% of RGCs):
37
+ - Large receptive field (30-300 μm dendritic field)
38
+ - Center-surround DoG on luminance
39
+ - Achromatic (no color, L+M pooled)
40
+ - Feeds MAGNOCELLULAR (M) pathway
41
+ - Motion detection, high temporal resolution
42
+
43
+ 3. SMALL BISTRATIFIED RGCs (~5-8% of RGCs):
44
+ - Medium receptive field
45
+ - S-cone ON center, (L+M) OFF surround
46
+ - Blue-Yellow opponency
47
+ - Feeds KONIOCELLULAR (K) pathway
48
+ - Color context, particularly blue
49
+
50
+ Key biological details implemented:
51
+ - DoG (Difference of Gaussians) for center-surround RF
52
+ - RF size ratios: Midget < Bistratified < Parasol
53
+ - Surround ~3-6x larger than center (we use 3x)
54
+ - ON-center and OFF-center populations (we use ON-center)
55
+ """
56
+
57
+ def __init__(
58
+ self,
59
+ midget_sigma: float = 0.8, # Small RF for fine detail
60
+ parasol_sigma: float = 2.5, # Large RF for motion/gist
61
+ bistrat_sigma: float = 1.2, # Medium RF for color context
62
+ surround_ratio: float = 3.0, # Surround is 3x center
63
+ ):
64
+ super().__init__()
65
+
66
+ self.midget_sigma = midget_sigma
67
+ self.parasol_sigma = parasol_sigma
68
+ self.bistrat_sigma = bistrat_sigma
69
+ self.surround_ratio = surround_ratio
70
+
71
+ # Create DoG kernels for each cell type
72
+ self.register_buffer('midget_center', self._make_gaussian(midget_sigma))
73
+ self.register_buffer('midget_surround', self._make_gaussian(midget_sigma * surround_ratio))
74
+
75
+ self.register_buffer('parasol_center', self._make_gaussian(parasol_sigma))
76
+ self.register_buffer('parasol_surround', self._make_gaussian(parasol_sigma * surround_ratio))
77
+
78
+ self.register_buffer('bistrat_center', self._make_gaussian(bistrat_sigma))
79
+ self.register_buffer('bistrat_surround', self._make_gaussian(bistrat_sigma * surround_ratio))
80
+
81
+ # Store kernel sizes for padding calculation
82
+ self.midget_ks = self.midget_surround.shape[-1]
83
+ self.parasol_ks = self.parasol_surround.shape[-1]
84
+ self.bistrat_ks = self.bistrat_surround.shape[-1]
85
+
86
+ def _make_gaussian(self, sigma: float) -> torch.Tensor:
87
+ """Create a normalized 2D Gaussian kernel."""
88
+ ks = int(6 * sigma + 1) | 1 # Ensure odd, cover 3 sigma each side
89
+ ax = torch.arange(ks, dtype=torch.float32) - ks // 2
90
+ xx, yy = torch.meshgrid(ax, ax, indexing='ij')
91
+ kernel = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))
92
+ kernel = kernel / kernel.sum() # Normalize
93
+ return kernel.unsqueeze(0).unsqueeze(0) # (1, 1, H, W)
94
+
95
+ def _apply_dog(
96
+ self,
97
+ x: torch.Tensor,
98
+ center_kernel: torch.Tensor,
99
+ surround_kernel: torch.Tensor,
100
+ kernel_size: int
101
+ ) -> torch.Tensor:
102
+ """Apply Difference of Gaussians (center - surround)."""
103
+ B, C, H, W = x.shape
104
+ padding = kernel_size // 2
105
+
106
+ # Expand kernels for all channels
107
+ center_k = center_kernel.expand(C, 1, -1, -1)
108
+ surround_k = surround_kernel.expand(C, 1, -1, -1)
109
+
110
+ # Pad surround kernel to match size if needed
111
+ c_size = center_k.shape[-1]
112
+ s_size = surround_k.shape[-1]
113
+ if c_size < s_size:
114
+ pad_amt = (s_size - c_size) // 2
115
+ center_k = F.pad(center_k, (pad_amt, pad_amt, pad_amt, pad_amt))
116
+
117
+ # Apply center and surround
118
+ center_response = F.conv2d(x, center_k, padding=padding, groups=C)
119
+ surround_response = F.conv2d(x, surround_k, padding=padding, groups=C)
120
+
121
+ # DoG: ON-center response (center - surround)
122
+ return center_response - surround_response
123
+
124
+ def forward(
125
+ self,
126
+ x_left: torch.Tensor,
127
+ x_right: torch.Tensor
128
+ ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor,
129
+ torch.Tensor, torch.Tensor, torch.Tensor]:
130
+ """
131
+ Process left and right eye inputs through RGC populations.
132
+
133
+ Returns:
134
+ P_left, P_right: Midget RGC output (R-G opponency) -> P pathway
135
+ M_left, M_right: Parasol RGC output (luminance DoG) -> M pathway
136
+ K_left, K_right: Bistratified RGC output (S vs L+M) -> K pathway
137
+ """
138
+ # ========== MIDGET RGCs -> P pathway ==========
139
+ # Red-Green opponency: L-cone vs M-cone
140
+ # Approximate: R channel vs G channel
141
+ # DoG on the opponent signal
142
+
143
+ # Extract R and G channels (approximating L and M cones)
144
+ R_left, G_left = x_left[:, 0:1], x_left[:, 1:2]
145
+ R_right, G_right = x_right[:, 0:1], x_right[:, 1:2]
146
+
147
+ # L-M opponency (R-G) with small receptive field DoG
148
+ rg_left = R_left - G_left
149
+ rg_right = R_right - G_right
150
+
151
+ P_left = self._apply_dog(rg_left, self.midget_center, self.midget_surround, self.midget_ks)
152
+ P_right = self._apply_dog(rg_right, self.midget_center, self.midget_surround, self.midget_ks)
153
+
154
+ # Expand back to 3 channels for compatibility
155
+ P_left = P_left.expand(-1, 3, -1, -1)
156
+ P_right = P_right.expand(-1, 3, -1, -1)
157
+
158
+ # ========== PARASOL RGCs -> M pathway ==========
159
+ # Achromatic: pool L+M (approximate as luminance)
160
+ # Large RF DoG for motion sensitivity
161
+
162
+ lum_left = 0.299 * x_left[:, 0:1] + 0.587 * x_left[:, 1:2] + 0.114 * x_left[:, 2:3]
163
+ lum_right = 0.299 * x_right[:, 0:1] + 0.587 * x_right[:, 1:2] + 0.114 * x_right[:, 2:3]
164
+
165
+ M_left = self._apply_dog(lum_left, self.parasol_center, self.parasol_surround, self.parasol_ks)
166
+ M_right = self._apply_dog(lum_right, self.parasol_center, self.parasol_surround, self.parasol_ks)
167
+
168
+ # Expand to 3 channels
169
+ M_left = M_left.expand(-1, 3, -1, -1)
170
+ M_right = M_right.expand(-1, 3, -1, -1)
171
+
172
+ # ========== SMALL BISTRATIFIED RGCs -> K pathway ==========
173
+ # S-cone ON center, (L+M) OFF surround
174
+ # Blue-Yellow opponency: S vs (L+M)
175
+
176
+ # S-cone approximated by B channel
177
+ # (L+M) approximated by (R+G)/2
178
+ S_left = x_left[:, 2:3] # Blue
179
+ S_right = x_right[:, 2:3]
180
+ LM_left = (x_left[:, 0:1] + x_left[:, 1:2]) / 2
181
+ LM_right = (x_right[:, 0:1] + x_right[:, 1:2]) / 2
182
+
183
+ # S - (L+M) opponency with medium RF
184
+ by_left = S_left - LM_left
185
+ by_right = S_right - LM_right
186
+
187
+ K_left = self._apply_dog(by_left, self.bistrat_center, self.bistrat_surround, self.bistrat_ks)
188
+ K_right = self._apply_dog(by_right, self.bistrat_center, self.bistrat_surround, self.bistrat_ks)
189
+
190
+ # Expand to 3 channels
191
+ K_left = K_left.expand(-1, 3, -1, -1)
192
+ K_right = K_right.expand(-1, 3, -1, -1)
193
+
194
+ return P_left, M_left, K_left, P_right, M_right, K_right
195
+
196
+
197
+ class BinocularPreMPK(nn.Module):
198
+ """
199
+ Simulates retinal + LGN preprocessing for both eyes.
200
+ Each eye gets its own center-surround filtering.
201
+
202
+ Biological motivation:
203
+ - Retinal ganglion cells have center-surround receptive fields
204
+ - M cells respond to luminance changes (motion/gist)
205
+ - P cells respond to color/detail (high-pass filtered)
206
+ """
207
+ def __init__(self, sigma: float = 1.0):
208
+ super().__init__()
209
+ self.sigma = sigma
210
+ ks = int(4 * sigma + 1) | 1 # ensure odd
211
+ ax = torch.arange(ks, dtype=torch.float32) - ks // 2
212
+ xx, yy = torch.meshgrid(ax, ax, indexing='ij')
213
+ kernel = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))
214
+ kernel = kernel / kernel.sum()
215
+ self.register_buffer('gauss', kernel.unsqueeze(0).unsqueeze(0))
216
+ self.ks = ks
217
+
218
+ def _blur(self, x: torch.Tensor) -> torch.Tensor:
219
+ B, C, H, W = x.shape
220
+ kernel = self.gauss.expand(C, 1, self.ks, self.ks)
221
+ return F.conv2d(x, kernel, padding=self.ks // 2, groups=C)
222
+
223
+ def forward(self, x_left: torch.Tensor, x_right: torch.Tensor) -> Tuple[torch.Tensor, ...]:
224
+ """
225
+ Returns (P_left, M_left, P_right, M_right)
226
+ P = high-pass (center - surround) for detail
227
+ M = low-pass luminance for motion/gist
228
+ """
229
+ # Left eye
230
+ blur_L = self._blur(x_left)
231
+ P_left = x_left - blur_L # high-pass (Parvo-like)
232
+ lum_L = x_left.mean(dim=1, keepdim=True)
233
+ M_left = self._blur(lum_L).expand(-1, 3, -1, -1) # low-pass luminance (Magno-like)
234
+
235
+ # Right eye
236
+ blur_R = self._blur(x_right)
237
+ P_right = x_right - blur_R
238
+ lum_R = x_right.mean(dim=1, keepdim=True)
239
+ M_right = self._blur(lum_R).expand(-1, 3, -1, -1)
240
+
241
+ return P_left, M_left, P_right, M_right
242
+
243
+
244
+ class StereoDisparity(nn.Module):
245
+ """
246
+ Creates stereo disparity by horizontally shifting left/right views.
247
+ Simulates the slight positional difference between two eyes.
248
+
249
+ disparity_range: maximum pixel shift (positive = crossed disparity)
250
+ """
251
+ def __init__(self, disparity_range: int = 2):
252
+ super().__init__()
253
+ self.disparity_range = disparity_range
254
+
255
+ def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
256
+ """
257
+ Takes single image, returns (left_view, right_view) with disparity.
258
+ For training, uses random disparity. For inference, uses fixed small disparity.
259
+ """
260
+ B, C, H, W = x.shape
261
+
262
+ if self.training:
263
+ d = torch.randint(-self.disparity_range, self.disparity_range + 1, (1,)).item()
264
+ else:
265
+ d = 1
266
+
267
+ if d == 0:
268
+ return x, x
269
+
270
+ if d > 0:
271
+ x_left = F.pad(x[:, :, :, d:], (0, d, 0, 0), mode='replicate')
272
+ x_right = F.pad(x[:, :, :, :-d], (d, 0, 0, 0), mode='replicate')
273
+ else:
274
+ d = -d
275
+ x_left = F.pad(x[:, :, :, :-d], (d, 0, 0, 0), mode='replicate')
276
+ x_right = F.pad(x[:, :, :, d:], (0, d, 0, 0), mode='replicate')
277
+
278
+ return x_left, x_right
279
+
280
+
281
+ class OcularDominanceConv(nn.Module):
282
+ """
283
+ Convolution with ocular dominance - channels are assigned to left/right eye
284
+ with graded mixing (some purely monocular, some binocular).
285
+
286
+ Inspired by V1 ocular dominance columns but applied at LGN stage
287
+ for computational efficiency.
288
+ """
289
+ def __init__(self, in_ch: int, out_ch: int, kernel_size: int,
290
+ monocular_ratio: float = 0.5):
291
+ super().__init__()
292
+ self.out_ch = out_ch
293
+ self.monocular_ratio = monocular_ratio
294
+
295
+ n_mono = int(out_ch * monocular_ratio)
296
+ n_mono_per_eye = n_mono // 2
297
+ n_bino = out_ch - 2 * n_mono_per_eye
298
+
299
+ self.n_left = n_mono_per_eye
300
+ self.n_right = n_mono_per_eye
301
+ self.n_bino = n_bino
302
+
303
+ self.conv_left = nn.Conv2d(in_ch, n_mono_per_eye, kernel_size, padding=kernel_size//2)
304
+ self.conv_right = nn.Conv2d(in_ch, n_mono_per_eye, kernel_size, padding=kernel_size//2)
305
+ self.conv_bino_L = nn.Conv2d(in_ch, n_bino, kernel_size, padding=kernel_size//2)
306
+ self.conv_bino_R = nn.Conv2d(in_ch, n_bino, kernel_size, padding=kernel_size//2)
307
+ self.bn = nn.BatchNorm2d(out_ch)
308
+
309
+ def forward(self, x_left: torch.Tensor, x_right: torch.Tensor) -> torch.Tensor:
310
+ left_only = self.conv_left(x_left)
311
+ right_only = self.conv_right(x_right)
312
+ bino = self.conv_bino_L(x_left) + self.conv_bino_R(x_right)
313
+ out = torch.cat([left_only, right_only, bino], dim=1)
314
+ return F.relu(self.bn(out))
315
+
316
+
317
+ class BinocularMPKPathway(nn.Module):
318
+ """
319
+ Single pathway (M, P, or K) with binocular processing.
320
+ Receives left and right eye inputs, produces fused output.
321
+ """
322
+ def __init__(self, in_ch: int, out_ch: int, kernel_sizes: list,
323
+ monocular_ratio: float = 0.5):
324
+ super().__init__()
325
+
326
+ layers = []
327
+ ch = in_ch
328
+ for i, ks in enumerate(kernel_sizes):
329
+ is_first = (i == 0)
330
+ if is_first:
331
+ layers.append(OcularDominanceConv(ch, out_ch, ks, monocular_ratio))
332
+ else:
333
+ layers.append(nn.Sequential(
334
+ nn.Conv2d(out_ch if i > 0 else ch, out_ch, ks, padding=ks//2),
335
+ nn.BatchNorm2d(out_ch),
336
+ nn.ReLU(inplace=True)
337
+ ))
338
+ ch = out_ch
339
+
340
+ self.first_layer = layers[0]
341
+ self.rest = nn.Sequential(*layers[1:]) if len(layers) > 1 else nn.Identity()
342
+
343
+ def forward(self, x_left: torch.Tensor, x_right: torch.Tensor) -> torch.Tensor:
344
+ x = self.first_layer(x_left, x_right)
345
+ return self.rest(x)
346
+
347
+
348
+ class MonocularPathwayBlock(nn.Module):
349
+ """
350
+ Single pathway block that keeps left/right eyes separate.
351
+ Used for LGN processing where eye segregation persists.
352
+ """
353
+ def __init__(self, in_ch: int, out_ch: int, kernel_size: int):
354
+ super().__init__()
355
+ self.conv_left = nn.Sequential(
356
+ nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size//2),
357
+ nn.BatchNorm2d(out_ch),
358
+ nn.ReLU(inplace=True)
359
+ )
360
+ self.conv_right = nn.Sequential(
361
+ nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size//2),
362
+ nn.BatchNorm2d(out_ch),
363
+ nn.ReLU(inplace=True)
364
+ )
365
+
366
+ def forward(self, x_left: torch.Tensor, x_right: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
367
+ return self.conv_left(x_left), self.conv_right(x_right)
368
+
369
+
370
+ class StridedMonocularBlock(nn.Module):
371
+ """
372
+ Monocular pathway block with configurable stride.
373
+ Keeps left/right eyes separate, uses stride to control spatial sampling.
374
+
375
+ Used in V4 for stride-based pathway differentiation.
376
+ """
377
+ def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, stride: int = 1):
378
+ super().__init__()
379
+ padding = kernel_size // 2
380
+ self.conv_left = nn.Sequential(
381
+ nn.Conv2d(in_ch, out_ch, kernel_size, stride=stride, padding=padding),
382
+ nn.BatchNorm2d(out_ch),
383
+ nn.ReLU(inplace=True)
384
+ )
385
+ self.conv_right = nn.Sequential(
386
+ nn.Conv2d(in_ch, out_ch, kernel_size, stride=stride, padding=padding),
387
+ nn.BatchNorm2d(out_ch),
388
+ nn.ReLU(inplace=True)
389
+ )
390
+
391
+ def forward(self, x_left: torch.Tensor, x_right: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
392
+ return self.conv_left(x_left), self.conv_right(x_right)
393
+
394
+
395
+ def count_params(model: nn.Module) -> int:
396
+ """Count total trainable parameters."""
397
+ return sum(p.numel() for p in model.parameters())
mpknet_v6.py ADDED
@@ -0,0 +1,207 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ BinocularMPKNet V6 - First complete M/P/K pathway implementation with Fibonacci strides.
3
+
4
+ Key innovations:
5
+ 1. First Fibonacci strides (2:3:5) in CNNs - derived from biological spatial frequency tuning
6
+ 2. First complete M/P/K implementation - prior work (Magno-Parvo CNN, EVNets) models M/P only
7
+ 3. Biologically-grounded K→M/P gating - extends cross-attention (Bahdanau, FiLM) with LGN anatomy
8
+
9
+ Fibonacci-inspired stride ratios (2:3:5) for P:K:M pathways:
10
+ - P: stride=2, kernel=5 (fine detail, ~80% of LGN neurons)
11
+ - K: stride=3, kernel=5 (context/modulation, ~10% of LGN)
12
+ - M: stride=5, kernel=5 (global gist, ~10% of LGN)
13
+
14
+ Results:
15
+ - 89.38% on CIFAR-10 with 0.539M parameters
16
+ - 60.8% on ImageNet-100 with 0.54M parameters
17
+ - 89.2% on Kvasir-v2 with 0.21M parameters
18
+
19
+ The stride ratios produce resolutions converging toward golden ratio (φ ≈ 1.618),
20
+ optimizing multi-scale coverage without redundancy - same principle as phyllotaxis.
21
+
22
+ Prior art acknowledgment:
23
+ - Cross-stream attention: Bahdanau (2014), FiLM (2018), SlowFast laterals (2019)
24
+ - M/P pathways: Magno-Parvo CNN (2022), EVNets (2024)
25
+ - Contribution: Complete M/P/K with functional K gating, Fibonacci strides
26
+ """
27
+
28
+ import torch
29
+ import torch.nn as nn
30
+ import torch.nn.functional as F
31
+
32
+ from mpknet_components import (
33
+ BinocularPreMPK,
34
+ StereoDisparity,
35
+ StridedMonocularBlock,
36
+ count_params,
37
+ )
38
+
39
+
40
+ class BinocularMPKNetV6(nn.Module):
41
+ """
42
+ Binocular MPKNet V6 with fibonacci stride scaling.
43
+
44
+ Key changes from V4:
45
+ - Larger kernel (5 vs 3)
46
+ - Fibonacci strides: P=2, K=3, M=5
47
+ - Same information extraction, fewer FLOPs
48
+
49
+ The stride/kernel ratio ~0.4-1.0 provides efficient coverage:
50
+ - P: 5/2 = 2.5 overlap per step (fine but not redundant)
51
+ - K: 5/3 = 1.67 overlap (moderate)
52
+ - M: 5/5 = 1.0 no overlap (coarse gist)
53
+ """
54
+ def __init__(self, num_classes: int = 10, ch: int = 48,
55
+ use_stereo: bool = True, disparity_range: int = 2,
56
+ kernel_size: int = 5):
57
+ super().__init__()
58
+
59
+ self.use_stereo = use_stereo
60
+ self.kernel_size = kernel_size
61
+
62
+ # Fibonacci strides: 2, 3, 5
63
+ self.p_stride = 2
64
+ self.k_stride = 3
65
+ self.m_stride = 5
66
+
67
+ if use_stereo:
68
+ self.stereo = StereoDisparity(disparity_range)
69
+
70
+ self.pre_mpk = BinocularPreMPK(sigma=1.0)
71
+
72
+ # ========== BLOCK 1 ==========
73
+ # P pathway: stride=2 (detail without noise), 2 layers
74
+ self.P_block1_layer1 = StridedMonocularBlock(3, ch, kernel_size, stride=self.p_stride)
75
+ self.P_block1_layer2 = StridedMonocularBlock(ch, ch, kernel_size, stride=1)
76
+
77
+ # K pathway: stride=3 (context), 1 layer
78
+ self.K_block1 = StridedMonocularBlock(3, ch // 2, kernel_size, stride=self.k_stride)
79
+
80
+ # M pathway: stride=5 (global gist), 1 layer
81
+ self.M_block1 = StridedMonocularBlock(3, ch, kernel_size, stride=self.m_stride)
82
+
83
+ # K gates for block 1
84
+ self.k_gate1_M_left = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
85
+ self.k_gate1_M_right = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
86
+ self.k_gate1_P_left = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
87
+ self.k_gate1_P_right = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
88
+
89
+ # ========== BLOCK 2 ==========
90
+ # All stride=1 now (already at different resolutions)
91
+ self.P_block2_layer1 = StridedMonocularBlock(ch, ch, kernel_size, stride=1)
92
+ self.P_block2_layer2 = StridedMonocularBlock(ch, ch, kernel_size, stride=1)
93
+
94
+ self.K_block2 = StridedMonocularBlock(ch // 2, ch // 2, kernel_size, stride=1)
95
+
96
+ self.M_block2 = StridedMonocularBlock(ch, ch, kernel_size, stride=1)
97
+
98
+ # K gates for block 2
99
+ self.k_gate2_M_left = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
100
+ self.k_gate2_M_right = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
101
+ self.k_gate2_P_left = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
102
+ self.k_gate2_P_right = nn.Sequential(nn.Linear(ch // 2, ch), nn.Sigmoid())
103
+
104
+ # ========== V1 FUSION ==========
105
+ self.v1_fusion = nn.Sequential(
106
+ nn.Conv2d(ch * 4, ch * 2, 1),
107
+ nn.BatchNorm2d(ch * 2),
108
+ nn.ReLU(inplace=True),
109
+ )
110
+
111
+ # Classification head (dropout before FC per NiN paper)
112
+ self.gap = nn.AdaptiveAvgPool2d(1)
113
+ self.dropout = nn.Dropout(p=0.5)
114
+ self.fc = nn.Linear(ch * 2, num_classes)
115
+
116
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
117
+ # Create stereo views
118
+ if self.use_stereo:
119
+ x_left, x_right = self.stereo(x)
120
+ else:
121
+ x_left, x_right = x, x
122
+
123
+ # Retinal preprocessing
124
+ P_left, M_left, P_right, M_right = self.pre_mpk(x_left, x_right)
125
+
126
+ # ========== BLOCK 1 ==========
127
+ # K first (from original input)
128
+ K_left, K_right = self.K_block1(P_left, P_right)
129
+
130
+ # P pathway: 2 layers
131
+ P_left, P_right = self.P_block1_layer1(P_left, P_right)
132
+ P_left, P_right = self.P_block1_layer2(P_left, P_right)
133
+
134
+ # M pathway: 1 layer
135
+ M_left, M_right = self.M_block1(M_left, M_right)
136
+
137
+ # K gate 1 - GAP makes it resolution-independent
138
+ k_ctx1_left = self.gap(K_left).flatten(1)
139
+ k_ctx1_right = self.gap(K_right).flatten(1)
140
+
141
+ gate1_M_left = self.k_gate1_M_left(k_ctx1_left).unsqueeze(-1).unsqueeze(-1)
142
+ gate1_M_right = self.k_gate1_M_right(k_ctx1_right).unsqueeze(-1).unsqueeze(-1)
143
+ gate1_P_left = self.k_gate1_P_left(k_ctx1_left).unsqueeze(-1).unsqueeze(-1)
144
+ gate1_P_right = self.k_gate1_P_right(k_ctx1_right).unsqueeze(-1).unsqueeze(-1)
145
+
146
+ M_left = M_left * gate1_M_left
147
+ M_right = M_right * gate1_M_right
148
+ P_left = P_left * gate1_P_left
149
+ P_right = P_right * gate1_P_right
150
+
151
+ # ========== BLOCK 2 ==========
152
+ P_left, P_right = self.P_block2_layer1(P_left, P_right)
153
+ P_left, P_right = self.P_block2_layer2(P_left, P_right)
154
+
155
+ K_left, K_right = self.K_block2(K_left, K_right)
156
+
157
+ M_left, M_right = self.M_block2(M_left, M_right)
158
+
159
+ # K gate 2
160
+ k_ctx2_left = self.gap(K_left).flatten(1)
161
+ k_ctx2_right = self.gap(K_right).flatten(1)
162
+
163
+ gate2_M_left = self.k_gate2_M_left(k_ctx2_left).unsqueeze(-1).unsqueeze(-1)
164
+ gate2_M_right = self.k_gate2_M_right(k_ctx2_right).unsqueeze(-1).unsqueeze(-1)
165
+ gate2_P_left = self.k_gate2_P_left(k_ctx2_left).unsqueeze(-1).unsqueeze(-1)
166
+ gate2_P_right = self.k_gate2_P_right(k_ctx2_right).unsqueeze(-1).unsqueeze(-1)
167
+
168
+ M_left = M_left * gate2_M_left
169
+ M_right = M_right * gate2_M_right
170
+ P_left = P_left * gate2_P_left
171
+ P_right = P_right * gate2_P_right
172
+
173
+ # ========== V1 FUSION ==========
174
+ # Match spatial sizes only at fusion (pool to smallest)
175
+ target_size = M_left.shape[-1] # M is smallest
176
+ if P_left.shape[-1] != target_size:
177
+ P_left = F.adaptive_avg_pool2d(P_left, target_size)
178
+ P_right = F.adaptive_avg_pool2d(P_right, target_size)
179
+ if K_left.shape[-1] != target_size:
180
+ K_left = F.adaptive_avg_pool2d(K_left, target_size)
181
+ K_right = F.adaptive_avg_pool2d(K_right, target_size)
182
+
183
+ # Combine all four streams (M and P from both eyes)
184
+ z = torch.cat([M_left, M_right, P_left, P_right], dim=1)
185
+ z = self.v1_fusion(z)
186
+
187
+ # Classification
188
+ z = self.gap(z).flatten(1)
189
+ z = self.dropout(z)
190
+ return self.fc(z)
191
+
192
+
193
+ if __name__ == "__main__":
194
+ model = BinocularMPKNetV6(num_classes=10, ch=48, use_stereo=True)
195
+ print(f"BinocularMPKNet V6 params: {count_params(model)/1e6:.3f}M")
196
+ print(f"Strides: P={model.p_stride}, K={model.k_stride}, M={model.m_stride}")
197
+ print(f"Kernel: {model.kernel_size}")
198
+
199
+ # Test on CIFAR-10 size
200
+ x = torch.randn(2, 3, 32, 32)
201
+ y = model(x)
202
+ print(f"Input: {x.shape}, Output: {y.shape}")
203
+
204
+ # Test on larger input
205
+ x = torch.randn(2, 3, 224, 224)
206
+ y = model(x)
207
+ print(f"Input: {x.shape}, Output: {y.shape}")
v6_kvasir_best.pth ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4347d2777ede2279440e1c1f5e264dc8d7724974eb6598a42e9abc2c16d26cb0
3
+ size 2210203