"""
GLADIUS v2.0 - MoDA: Multi-Head Depth Attention

The insight: Standard transformers compute Q, K, V from the CURRENT layer's hidden
state only. But every previous layer already computed useful representations that get
discarded. MoDA adds a second set of K, V projections that attend over a "depth cache"
- the hidden states from ALL previous layers at each position.

This is NOT cross-attention (fixed external memory). This is SELF-attention through
depth - the model attending to its own computation history.

Architecture per layer l:
    Sequence path (standard):
        Q_seq = W_q @ x_l          (query from current layer)
        K_seq = W_k @ x_l          (key from current layer)  
        V_seq = W_v @ x_l          (value from current layer)
        O_seq = softmax(Q_seq @ K_seq^T / sqrt(d)) @ V_seq

    Depth path (NEW):
        K_depth = W_k_depth @ stack(x_0, x_1, ..., x_{l-1})
        V_depth = W_v_depth @ stack(x_0, x_1, ..., x_{l-1})
        O_depth = softmax(Q_seq @ K_depth^T / sqrt(d)) @ V_depth

    Combined:
        O = (1 - gate) * O_seq + gate * O_depth

The depth KV projections are TINY (hidden_dim -> head_dim per group), and the depth
cache grows linearly with layers (not sequence length), so the cost is negligible.

For GLADIUS Wyrm (640d, 14L, 20H):
    Depth cache at layer 13: 13 depth tokens (one per previous layer)
    Extra params per layer: 2 x hidden_dim x (num_kv_heads x head_dim) + gate
                          = 2 x 640 x 128 + ~13K ~ 177K
    Total extra: 14 x 177K ~ 2.5M params (~2.4% of 104.9M)

Reference: MoDA paper (Multi-Head Depth Attention) + Ali's SLA2 hybrid architecture.
"""

import torch
import torch.nn as nn
import torch.nn.functional as F
import math

from .config import KernelConfig
from .attention import RoPE, RMSNorm, SwiGLU


class DepthKVProjection(nn.Module):
    """
    Projects depth cache hidden states into K, V for depth attention.
    
    Uses Grouped Query Attention (GQA) style - fewer KV heads than Q heads -
    to keep the depth path lightweight.
    
    For Wyrm: 20 Q heads, 4 KV groups -> 5 Q heads per KV group.
    """
    
    def __init__(self, hidden_dim: int, num_kv_heads: int, head_dim: int):
        super().__init__()
        self.num_kv_heads = num_kv_heads
        self.head_dim = head_dim
        self.kv_dim = num_kv_heads * head_dim
        
        self.k_proj = nn.Linear(hidden_dim, self.kv_dim, bias=False)
        self.v_proj = nn.Linear(hidden_dim, self.kv_dim, bias=False)
        
        self._init_weights()
    
    def _init_weights(self):
        # Initialize with small noise - the depth path starts quiet
        nn.init.normal_(self.k_proj.weight, std=0.005)
        nn.init.normal_(self.v_proj.weight, std=0.005)
    
    def forward(self, depth_cache: torch.Tensor):
        """
        Args:
            depth_cache: (batch, depth_len, hidden_dim) - stacked hidden states from previous layers
        Returns:
            K_depth: (batch, num_kv_heads, depth_len, head_dim)
            V_depth: (batch, num_kv_heads, depth_len, head_dim)
        """
        B, D_len, _ = depth_cache.shape
        K = self.k_proj(depth_cache).view(B, D_len, self.num_kv_heads, self.head_dim).transpose(1, 2)
        V = self.v_proj(depth_cache).view(B, D_len, self.num_kv_heads, self.head_dim).transpose(1, 2)
        return K, V
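As a shape check, the projection above can be mirrored standalone with Wyrm-like sizes (640 hidden, 4 KV heads, head_dim 32, taken from the docstrings; random weights, illustration only):

```python
import torch
import torch.nn as nn

# Standalone sketch of DepthKVProjection's shape contract (Wyrm-like sizes).
hidden_dim, num_kv_heads, head_dim = 640, 4, 32
kv_dim = num_kv_heads * head_dim  # 128

k_proj = nn.Linear(hidden_dim, kv_dim, bias=False)
v_proj = nn.Linear(hidden_dim, kv_dim, bias=False)

# Depth cache: 13 summaries (one per previous layer) for a batch of 2.
depth_cache = torch.randn(2, 13, hidden_dim)
B, D_len, _ = depth_cache.shape

K = k_proj(depth_cache).view(B, D_len, num_kv_heads, head_dim).transpose(1, 2)
V = v_proj(depth_cache).view(B, D_len, num_kv_heads, head_dim).transpose(1, 2)

print(tuple(K.shape), tuple(V.shape))  # (2, 4, 13, 32) (2, 4, 13, 32)
```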


class MoDAAttention(nn.Module):
    """
    Multi-Head Depth Attention - the core MoDA mechanism.
    
    Combines standard sequence attention (SLA2 hybrid: softmax + linear blend)
    with depth attention over previous layers' hidden states.
    
    The depth path uses GQA with fewer KV heads for efficiency.
    A learned gate controls the blend between sequence and depth paths.
    """
    
    def __init__(self, config: KernelConfig, layer_idx: int = 0,
                 num_depth_kv_heads: int = 4):
        super().__init__()
        self.config = config
        self.layer_idx = layer_idx
        self.num_heads = config.num_heads
        self.head_dim = config.head_dim
        self.hidden_dim = config.hidden_dim
        self.num_depth_kv_heads = num_depth_kv_heads
        
        # How many Q heads share each depth KV head
        assert config.num_heads % num_depth_kv_heads == 0, \
            f"num_heads ({config.num_heads}) must be divisible by num_depth_kv_heads ({num_depth_kv_heads})"
        self.q_per_kv = config.num_heads // num_depth_kv_heads
        
        # === Sequence path (standard projections) ===
        self.q_proj = nn.Linear(config.hidden_dim, config.hidden_dim, bias=False)
        self.k_proj = nn.Linear(config.hidden_dim, config.hidden_dim, bias=False)
        self.v_proj = nn.Linear(config.hidden_dim, config.hidden_dim, bias=False)
        self.o_proj = nn.Linear(config.hidden_dim, config.hidden_dim, bias=False)
        
        # === Depth path (new KV projections over depth cache) ===
        self.depth_kv = DepthKVProjection(config.hidden_dim, num_depth_kv_heads, config.head_dim)
        
        # === Depth gate: per-head learned blend between seq and depth ===
        # Input: current hidden state -> per-head gate in [0, 1]
        # 0 = pure sequence attention, 1 = pure depth attention
        self.depth_gate = nn.Sequential(
            nn.Linear(config.hidden_dim, config.num_heads),
            nn.Sigmoid()
        )
        
        # === SLA2 alpha: blend between softmax and linear (sequence path only) ===
        self.alpha_router = nn.Sequential(
            nn.Linear(config.hidden_dim, config.num_heads),
            nn.Sigmoid()
        )
        
        # === RoPE (sequence positions only - depth has no position) ===
        self.rope = RoPE(config.head_dim, config.max_seq_len)
        
        # QK-Clip for stability
        self.qk_softcap = getattr(config, 'qk_softcap', None)
        
        self._init_weights()
    
    def _init_weights(self):
        for proj in [self.q_proj, self.k_proj, self.v_proj, self.o_proj]:
            nn.init.normal_(proj.weight, std=0.02)
        # Initialize depth gate bias negative -> start with mostly sequence attention
        # sigmoid(-2) ~ 0.12, so depth starts at ~12% influence
        nn.init.constant_(self.depth_gate[0].bias, -2.0)
        nn.init.zeros_(self.alpha_router[0].bias)
    
    def _expand_kv_heads(self, kv: torch.Tensor) -> torch.Tensor:
        """
        Expand GQA KV heads to match Q heads.
        (B, num_kv_heads, L, D) -> (B, num_heads, L, D)
        """
        B, H_kv, L, D = kv.shape
        # Repeat each KV head for q_per_kv Q heads
        return kv.unsqueeze(2).expand(B, H_kv, self.q_per_kv, L, D).reshape(B, self.num_heads, L, D)
    
    def forward(
        self,
        x: torch.Tensor,
        mask: torch.Tensor | None = None,
        depth_cache: torch.Tensor | None = None,
    ) -> torch.Tensor:
        """
        Args:
            x: (batch, seq_len, hidden_dim) - current layer input
            mask: (batch, 1, seq_len, seq_len) - causal mask
            depth_cache: (batch, depth_len, hidden_dim) - hidden states from previous
                         layers (e.g. one pooled vector per layer), or None for layer 0
        Returns:
            (batch, seq_len, hidden_dim)
        """
        B, S, D = x.shape
        
        # === Sequence projections ===
        Q = self.q_proj(x).view(B, S, self.num_heads, self.head_dim).transpose(1, 2)  # (B, H, S, D_h)
        K = self.k_proj(x).view(B, S, self.num_heads, self.head_dim).transpose(1, 2)
        V = self.v_proj(x).view(B, S, self.num_heads, self.head_dim).transpose(1, 2)
        
        # Apply RoPE to sequence Q, K
        Q_rope = self.rope(Q, S)
        K_rope = self.rope(K, S)
        
        # === SLA2 Hybrid Sequence Attention ===
        # Linear path (note: as written, this path is global / non-causal;
        # the causal mask is only applied on the softmax path below)
        Q_lin = F.elu(Q_rope) + 1
        K_lin = F.elu(K_rope) + 1
        KV_lin = torch.matmul(K_lin.transpose(-2, -1), V)
        Z_lin = K_lin.transpose(-2, -1).sum(dim=-1, keepdim=True)
        O_linear = torch.matmul(Q_lin, KV_lin) / (torch.matmul(Q_lin, Z_lin) + 1e-6)
        
        # Softmax path
        scores = torch.matmul(Q_rope, K_rope.transpose(-2, -1)) / math.sqrt(self.head_dim)
        if self.qk_softcap is not None and self.qk_softcap > 0:
            scores = self.qk_softcap * torch.tanh(scores / self.qk_softcap)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float('-inf'))
        attn_weights = F.softmax(scores, dim=-1)
        O_softmax = torch.matmul(attn_weights, V)
        
        # SLA2 blend
        alpha = self.alpha_router(x).permute(0, 2, 1).unsqueeze(-1)  # (B, H, S, 1)
        O_seq = alpha * O_softmax + (1 - alpha) * O_linear
        
        # === Depth Attention (if we have depth history) ===
        if depth_cache is not None and depth_cache.shape[1] > 0:
            # Project depth cache to K, V
            K_depth, V_depth = self.depth_kv(depth_cache)  # (B, H_kv, D_len, D_h)
            
            # Expand GQA to match Q heads
            K_depth = self._expand_kv_heads(K_depth)  # (B, H, D_len, D_h)
            V_depth = self._expand_kv_heads(V_depth)
            
            # Depth attention scores - Q from current layer, K from depth cache
            # No RoPE on depth (depth positions are layer indices, not sequence positions)
            # Use un-rotated Q for depth to keep the paths independent
            depth_scores = torch.matmul(Q, K_depth.transpose(-2, -1)) / math.sqrt(self.head_dim)
            
            if self.qk_softcap is not None and self.qk_softcap > 0:
                depth_scores = self.qk_softcap * torch.tanh(depth_scores / self.qk_softcap)
            
            # No causal mask needed for depth - all previous layers are always available
            depth_attn = F.softmax(depth_scores, dim=-1)
            O_depth = torch.matmul(depth_attn, V_depth)  # (B, H, S, D_h)
            
            # Depth gate: how much to blend in depth attention
            gate = self.depth_gate(x).permute(0, 2, 1).unsqueeze(-1)  # (B, H, S, 1)
            O = (1 - gate) * O_seq + gate * O_depth
        else:
            # Layer 0: no depth history, pure sequence attention
            O = O_seq
        
        # Reshape and project output
        O = O.transpose(1, 2).contiguous().view(B, S, D)
        return self.o_proj(O)
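The `_expand_kv_heads` trick can be checked in isolation with plain tensors. A minimal sketch, with sizes assumed from the Wyrm config (4 KV heads, 5 Q heads per group):

```python
import torch

# Sketch of the GQA expansion: repeat each of 4 KV heads for 5 Q heads.
B, H_kv, q_per_kv, L, D = 2, 4, 5, 7, 32
num_heads = H_kv * q_per_kv  # 20

kv = torch.randn(B, H_kv, L, D)
# expand is a zero-copy view; reshape materializes it into (B, num_heads, L, D)
expanded = kv.unsqueeze(2).expand(B, H_kv, q_per_kv, L, D).reshape(B, num_heads, L, D)

# Q heads 0..4 share KV head 0, heads 5..9 share KV head 1, and so on.
assert torch.equal(expanded[:, 0], kv[:, 0])
assert torch.equal(expanded[:, 4], kv[:, 0])
assert torch.equal(expanded[:, 5], kv[:, 1])
print(tuple(expanded.shape))  # (2, 20, 7, 32)
```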


class MoDATransformerLayer(nn.Module):
    """
    Transformer layer with MoDA attention.
    
    Drop-in replacement for TransformerLayer, but forward() now accepts
    and returns depth_cache for the depth attention mechanism.
    """
    
    def __init__(self, config: KernelConfig, layer_idx: int = 0,
                 num_depth_kv_heads: int = 4):
        super().__init__()
        self.layer_idx = layer_idx
        self.attention = MoDAAttention(config, layer_idx, num_depth_kv_heads)
        self.ffn = SwiGLU(config)
        self.attn_norm = RMSNorm(config.hidden_dim)
        self.ffn_norm = RMSNorm(config.hidden_dim)
    
    def forward(
        self,
        x: torch.Tensor,
        mask: torch.Tensor | None = None,
        depth_cache: torch.Tensor | None = None,
    ) -> torch.Tensor:
        """
        Args:
            x: (batch, seq_len, hidden_dim)
            mask: causal mask
            depth_cache: hidden states from previous layers (batch, depth_len, hidden_dim)
        Returns:
            x: (batch, seq_len, hidden_dim) - output of this layer
        """
        x = x + self.attention(self.attn_norm(x), mask=mask, depth_cache=depth_cache)
        x = x + self.ffn(self.ffn_norm(x))
        return x
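The effect of the gate initialization in `MoDAAttention._init_weights` (bias -2.0) can be sketched directly. Zeroing the gate weights below is an illustration-only assumption so the gate value is exact:

```python
import torch
import torch.nn as nn

# At init, every head's gate sits at sigmoid(-2) ~ 0.12 depth influence.
hidden_dim, num_heads = 640, 20
gate = nn.Sequential(nn.Linear(hidden_dim, num_heads), nn.Sigmoid())
nn.init.zeros_(gate[0].weight)           # illustration only; real weights are nonzero
nn.init.constant_(gate[0].bias, -2.0)

x = torch.randn(2, 7, hidden_dim)
g = gate(x).permute(0, 2, 1).unsqueeze(-1)  # (B, H, S, 1)

O_seq = torch.randn(2, num_heads, 7, 32)
O_depth = torch.randn(2, num_heads, 7, 32)
O = (1 - g) * O_seq + g * O_depth  # ~88% sequence, ~12% depth at init

print(round(g.mean().item(), 3))  # 0.119
```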


def upgrade_kernel_to_moda(kernel, num_depth_kv_heads: int = 4, 
                            init_from_sequence: bool = True):
    """
    Surgical upgrade: replace HybridAttention layers with MoDA layers.
    
    Preserves ALL existing weights (Q, K, V, O projections, FFN, norms).
    Only adds new depth_kv projections and depth_gate.
    
    Args:
        kernel: GladiusKernel instance with existing trained weights
        num_depth_kv_heads: number of KV heads for depth attention (GQA)
        init_from_sequence: if True, initialize depth KV from sequence KV weights
    
    Returns:
        Modified kernel with MoDA layers (in-place)
    """
    config = kernel.config
    device = next(kernel.parameters()).device
    dtype = next(kernel.parameters()).dtype
    
    new_layers = nn.ModuleList()
    
    for i, old_layer in enumerate(kernel.layers):
        # Create new MoDA layer
        moda_layer = MoDATransformerLayer(config, layer_idx=i, 
                                           num_depth_kv_heads=num_depth_kv_heads)
        
        # === Transfer existing weights ===
        # Sequence Q, K, V, O projections
        moda_layer.attention.q_proj.weight.data.copy_(old_layer.attention.q_proj.weight.data)
        moda_layer.attention.k_proj.weight.data.copy_(old_layer.attention.k_proj.weight.data)
        moda_layer.attention.v_proj.weight.data.copy_(old_layer.attention.v_proj.weight.data)
        moda_layer.attention.o_proj.weight.data.copy_(old_layer.attention.o_proj.weight.data)
        
        # SLA2 alpha router
        moda_layer.attention.alpha_router[0].weight.data.copy_(old_layer.attention.alpha_router[0].weight.data)
        moda_layer.attention.alpha_router[0].bias.data.copy_(old_layer.attention.alpha_router[0].bias.data)
        
        # RoPE buffers
        moda_layer.attention.rope.inv_freq.data.copy_(old_layer.attention.rope.inv_freq.data)
        moda_layer.attention.rope.cos_cached.data.copy_(old_layer.attention.rope.cos_cached.data)
        moda_layer.attention.rope.sin_cached.data.copy_(old_layer.attention.rope.sin_cached.data)
        
        # FFN (SwiGLU)
        moda_layer.ffn.gate_proj.weight.data.copy_(old_layer.ffn.gate_proj.weight.data)
        moda_layer.ffn.up_proj.weight.data.copy_(old_layer.ffn.up_proj.weight.data)
        moda_layer.ffn.down_proj.weight.data.copy_(old_layer.ffn.down_proj.weight.data)
        
        # Norms
        moda_layer.attn_norm.weight.data.copy_(old_layer.attn_norm.weight.data)
        moda_layer.ffn_norm.weight.data.copy_(old_layer.ffn_norm.weight.data)
        
        # === Initialize depth KV from sequence KV (warm start) ===
        if init_from_sequence:
            # Take a subset of the sequence K, V weights for depth initialization
            # Map from full hidden_dim projection to GQA-sized projection
            seq_k_weight = old_layer.attention.k_proj.weight.data  # (hidden_dim, hidden_dim)
            seq_v_weight = old_layer.attention.v_proj.weight.data
            
            # Select every q_per_kv-th head's K,V weights for the depth KV heads
            head_dim = config.head_dim
            q_per_kv = config.num_heads // num_depth_kv_heads
            kv_dim = num_depth_kv_heads * head_dim
            
            # Extract weights for the GQA KV heads (take every q_per_kv-th head)
            depth_k_weight = torch.zeros(kv_dim, config.hidden_dim, device=device, dtype=dtype)
            depth_v_weight = torch.zeros(kv_dim, config.hidden_dim, device=device, dtype=dtype)
            
            for g in range(num_depth_kv_heads):
                src_head = g * q_per_kv  # Take the first Q head from each group
                src_start = src_head * head_dim
                src_end = src_start + head_dim
                dst_start = g * head_dim
                dst_end = dst_start + head_dim
                
                depth_k_weight[dst_start:dst_end] = seq_k_weight[src_start:src_end]
                depth_v_weight[dst_start:dst_end] = seq_v_weight[src_start:src_end]
            
            # Scale down so the depth path starts gently
            moda_layer.attention.depth_kv.k_proj.weight.data.copy_(depth_k_weight * 0.1)
            moda_layer.attention.depth_kv.v_proj.weight.data.copy_(depth_v_weight * 0.1)
        
        new_layers.append(moda_layer)
    
    # Replace layers in kernel
    kernel.layers = new_layers.to(device=device, dtype=dtype)  # match device AND dtype; new depth params were created in float32
    
    # The forward pass still needs patching to thread depth_cache through the
    # layers; patch_kernel_forward_for_moda() below does that.
    kernel._moda_enabled = True
    kernel._num_depth_kv_heads = num_depth_kv_heads
    
    # Report new param count
    total = sum(p.numel() for p in kernel.parameters())
    trainable = sum(p.numel() for p in kernel.parameters() if p.requires_grad)
    depth_params = sum(
        sum(p.numel() for p in layer.attention.depth_kv.parameters()) +
        sum(p.numel() for p in layer.attention.depth_gate.parameters())
        for layer in kernel.layers
    )
    
    print("\n=== MoDA Upgrade Complete ===")
    print(f"  Total params: {total:,} (+{depth_params:,} depth params)")
    print(f"  Trainable: {trainable:,}")
    print(f"  Depth overhead: {depth_params/total*100:.2f}%")
    print(f"  Depth KV heads: {num_depth_kv_heads} (GQA ratio: {config.num_heads // num_depth_kv_heads}:1)")
    print(f"  Memory: {total * 2 / 1024 / 1024:.1f} MB (bfloat16)")
    
    return kernel
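As a sanity check on the overhead the upgrade reports, the per-layer depth additions can be counted directly. Wyrm-like sizes assumed (640 hidden, 20 heads, head_dim 32, 4 depth KV heads, 14 layers):

```python
import torch.nn as nn

# Count the parameters MoDA adds per layer: two GQA KV projections plus the gate.
hidden_dim, num_heads, head_dim, num_depth_kv_heads, num_layers = 640, 20, 32, 4, 14
kv_dim = num_depth_kv_heads * head_dim  # 128

depth_kv = nn.ModuleDict({
    "k_proj": nn.Linear(hidden_dim, kv_dim, bias=False),
    "v_proj": nn.Linear(hidden_dim, kv_dim, bias=False),
})
depth_gate = nn.Sequential(nn.Linear(hidden_dim, num_heads), nn.Sigmoid())

per_layer = sum(p.numel() for p in depth_kv.parameters()) \
          + sum(p.numel() for p in depth_gate.parameters())
total = num_layers * per_layer

print(per_layer, total)  # 176660 2473240
```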


class MoDAKernelMixin:
    """
    Mixin to patch GladiusKernel.forward() for depth cache propagation.
    
    Usage:
        kernel = GladiusKernel.load_checkpoint(path)
        kernel = upgrade_kernel_to_moda(kernel)
        patch_kernel_forward_for_moda(kernel)
    """
    pass


def patch_kernel_forward_for_moda(kernel):
    """
    Monkey-patch the kernel's forward to thread depth cache through layers.
    
    The original forward just does:
        for layer in self.layers:
            x = layer(x, mask=mask)
    
    MoDA needs:
        depth_cache = []
        for layer in self.layers:
            x = layer(x, mask=mask, depth_cache=stack(depth_cache))
            depth_cache.append(x)
    """
    import types
    
    kernel._moda_original_forward = kernel.forward  # keep a handle so the patch can be reverted
    
    def moda_forward(self, input_ids=None, timestamp=None, images=None, audio=None):
        """MoDA-patched forward: threads depth cache through transformer layers."""
        # Replicate pre-transformer logic from original forward
        text_embeds = None
        if input_ids is not None:
            B, S = input_ids.shape
            text_embeds = self.embeddings.embed(input_ids)
        
        modality_mask = None
        if self.has_senses and (images is not None or audio is not None):
            x, modality_mask = self.senses(text_embeds=text_embeds, images=images, audio=audio)
            B, S = x.shape[0], x.shape[1]
        elif text_embeds is not None:
            x = text_embeds
            B, S = x.shape[0], x.shape[1]
        else:
            raise ValueError("Must provide input_ids, images, or audio")
        
        # Memory read
        x = self.memory.read(x)
        
        # Temporal encoding
        time_embed = None
        if timestamp is not None:
            if isinstance(timestamp, (int, float)):
                timestamp = torch.tensor([timestamp] * B, dtype=torch.float32, device=x.device)
            time_embed = self.time_engine(timestamp)
            x = x + time_embed.unsqueeze(1)
        
        # === MoDA Transformer layers with depth cache ===
        if S <= self.config.max_seq_len:
            mask = self.causal_mask[:, :, :S, :S]
        else:
            mask = torch.tril(torch.ones(1, 1, S, S, device=x.device))
        
        depth_states = []  # Collect hidden states for depth attention
        for layer in self.layers:
            # Build depth cache from all previous layers
            if len(depth_states) > 0:
                # Stack previous layer summaries: (B, num_prev_layers, D)
                # Each layer contributes ONE mean-pooled summary vector, not S vectors.
                # This keeps the depth cache at O(L), not O(L*S) - critical for VRAM.
                depth_cache = torch.stack(depth_states, dim=1)  # (B, num_prev, D)
            else:
                depth_cache = None
            
            x = layer(x, mask=mask, depth_cache=depth_cache)
            # Store mean-pooled representation of this layer (detached)
            depth_states.append(x.mean(dim=1).detach())  # (B, D)
            # NOTE: detach() means depth cache doesn't backprop through previous layers.
            # This is intentional - depth attention learns to READ from previous layers,
            # not to CHANGE them. Keeps memory O(L*S*D) instead of O(L^2*S*D).
        
        # Final norm
        x = self.final_norm(x)
        
        # Tool check
        tool_result = self.tool_cortex.check_activation(x)
        if tool_result is not None:
            x = x + tool_result
        
        # Modulate
        logits, silence, pixel_output = self.modulator(x, self.embeddings.output_head, temporal_embedding=time_embed)
        
        # Memory write
        importance = self.memory.write(x)
        
        # Cognition heartbeat
        mode, cognitive_state, mode_probs = self.cognition.heartbeat(x)
        
        if self.cognition.should_consolidate():
            self.memory.consolidate()
        
        self.time_engine.record_event()
        
        return {
            'logits': logits,
            'silence': silence,
            'pixel_output': pixel_output,
            'mode': mode,
            'importance': importance,
            'modality_mask': modality_mask,
            'cognitive_state': cognitive_state,
            'mode_probs': mode_probs,
        }
    
    kernel.forward = types.MethodType(moda_forward, kernel)
    print("  Forward pass patched for MoDA depth cache propagation")
    return kernel
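A minimal sketch of the depth-cache threading `moda_forward` performs, with stand-in `LayerNorm` "layers" (an assumption for illustration; a real MoDA layer would also consume the cache):

```python
import torch
import torch.nn as nn

# Toy version of the depth-cache loop: after each layer we append a detached
# mean-pooled summary, so layer l sees a (B, l, D) cache of earlier summaries.
B, S, D, num_layers = 2, 7, 16, 4
layers = nn.ModuleList(nn.LayerNorm(D) for _ in range(num_layers))

x = torch.randn(B, S, D)
depth_states, cache_lens = [], []
for layer in layers:
    depth_cache = torch.stack(depth_states, dim=1) if depth_states else None
    cache_lens.append(0 if depth_cache is None else depth_cache.shape[1])
    x = layer(x)  # a real MoDA layer would take depth_cache=depth_cache here
    depth_states.append(x.mean(dim=1).detach())  # (B, D) summary per layer

print(cache_lens)  # [0, 1, 2, 3]
```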