ChipYTY committed on
Commit
a517ecd
·
verified ·
1 Parent(s): cf828b7

Add v4 Qwen3+Titans code snapshot and README

.gitignore ADDED
@@ -0,0 +1,6 @@
__pycache__/
*.py[cod]
.pytest_cache/
.mypy_cache/
.ruff_cache/
*.log
LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 Phil Wang

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
README.md ADDED
@@ -0,0 +1,98 @@
---
license: mit
---

## What is this

This is a **minimal code snapshot**: it contains only the in-repo code files needed to run `examples/train_qwen_titans_babilong_v4.py` (Qwen3 + Titans v4, BABILong QA1 32k, cross-chunk gradients).

- **Not included**: Qwen weights, BABILong dataset files, and other modules from the original project that v4 does not use
- **Purpose**: to make it easy to reproduce and compare experiments, and to archive the v4 code and its key configuration

---

## File list (only the in-repo code used by v4)

- `examples/train_qwen_titans_babilong_v4.py`
- `titans_pytorch/neural_memory.py`
- `titans_pytorch/memory_models.py`
- `titans_pytorch/__init__.py` (a minimal export for this repo, to avoid pulling in modules v4 does not use)
- `LICENSE` (the MIT License of the upstream `titans-pytorch`)

---

## Weight and dataset directories (v4 defaults)

The `TrainingConfig` in `examples/train_qwen_titans_babilong_v4.py` hard-codes local paths by default (adjust them for your machine):

- **Qwen3 weight directory (HF snapshot)**:
  - `model_path`: `/data/huangyifei/huggingface_cache/hub/models--Qwen--Qwen3-4B-Instruct-2507/snapshots/cdbee75f17c01a7cc42f958dc650907174af0554`
- **BABILong QA1 32k data JSON**:
  - `data_path`: `/data/yty/BABILong/babilong-train-5k-samples/data/qa1/32k.json`

Notes:
- These two paths are **not uploaded to this repo**; they are documented here only to explain the paths and configuration.
- The v4 script currently does **not** provide `--model_path`/`--data_path` command-line arguments; to change the paths, edit the defaults in `TrainingConfig` directly.

---

## Program features (v4 highlights)

The goal of v4: during chunk-streaming training over 32k-long sequences, enable **cross-chunk gradient flow** as far as possible, while training the memory modules plus a small set of backbone parameters under a controlled GPU-memory budget.

- **Cross-chunk gradients (core)**
  - `chunkwise_backward=False`: backpropagate through the entire (32k) sequence at once, instead of backpropagating each chunk independently
  - `detach_mem_state=False`: the memory state is not detached, so the gradient graph connects across chunks
  - `cross_chunk_gradient_steps`: limits how many past chunks gradients can flow back through (controls memory usage and stability)

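The interplay of these three switches can be shown with a minimal, self-contained sketch (toy code, not taken from the script): a state tensor is carried across chunks, detached once a chunk budget `K` is exceeded, and a single backward pass covers all chunk losses.

```python
import torch

# Toy illustration of cross-chunk gradients with truncation.
K = 2  # analogous to cross_chunk_gradient_steps

w = torch.ones(1, requires_grad=True)  # stands in for trainable memory params
state = torch.zeros(1)
losses = []

for chunk_idx, x in enumerate(torch.arange(1.0, 6.0).unsqueeze(-1)):
    state = state + w * x          # state update keeps the graph (detach_mem_state=False)
    if chunk_idx + 1 > K:
        state = state.detach()     # truncate the graph beyond K chunks
    losses.append((state ** 2).mean())

total = torch.stack(losses).sum()  # one backward over all chunks (chunkwise_backward=False)
total.backward()
print(w.grad is not None)  # True: gradients flowed across the non-detached chunks
```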
- **Freezing strategy (as described in the v4 header)**
  - Freeze most of the Qwen backbone parameters
  - **Keep trainable**: `embed_tokens` (input adaptation), `lm_head` (untying tied weights when necessary), and the Titans memory-module parameters

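A hypothetical sketch of this freezing pattern (the helper name and keyword list are illustrative, not from the script): freeze everything, then re-enable parameters whose names match the trainable groups.

```python
import torch.nn as nn

# Hypothetical helper: freeze all parameters except those whose names
# contain one of the trainable keywords.
def freeze_backbone_except(model: nn.Module,
                           trainable_keywords=("embed_tokens", "lm_head", "memory")):
    for name, param in model.named_parameters():
        param.requires_grad = any(key in name for key in trainable_keywords)
    n_train = sum(p.numel() for p in model.parameters() if p.requires_grad)
    n_total = sum(p.numel() for p in model.parameters())
    return n_train, n_total

# Tiny stand-in model to demonstrate the effect
model = nn.ModuleDict({
    "embed_tokens": nn.Embedding(10, 4),
    "layers": nn.Linear(4, 4),      # plays the role of the frozen backbone
    "lm_head": nn.Linear(4, 10),
})
n_train, n_total = freeze_backbone_except(model)
print(n_train, n_total)  # 90 110: only embed_tokens and lm_head remain trainable
```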
- **Learning-rate groups (v4: finer-grained)**
  - `lr_memory` / `lr_memory_attention`: memory modules (including the deep-integration parts)
  - `lr_embed`: `embed_tokens`
  - `lr_lm_head`: `lm_head`

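Per-group learning rates like these map directly onto optimizer parameter groups. A minimal sketch (the grouping-by-name rule is an assumption; the script's actual routing may differ):

```python
import torch.nn as nn
from torch.optim import AdamW

# Stand-in modules for the three parameter groups
model = nn.ModuleDict({
    "embed_tokens": nn.Embedding(10, 4),
    "neural_memory": nn.Linear(4, 4),
    "lm_head": nn.Linear(4, 10),
})

# Route parameters into groups by name (illustrative rule)
groups = {"memory": ([], 1e-4), "embed": ([], 1e-5), "lm_head": ([], 1e-4)}
for name, param in model.named_parameters():
    if "embed_tokens" in name:
        groups["embed"][0].append(param)
    elif "lm_head" in name:
        groups["lm_head"][0].append(param)
    else:
        groups["memory"][0].append(param)

optimizer = AdamW(
    [{"params": params, "lr": lr} for params, lr in groups.values()],
    weight_decay=0.01,
)
print([g["lr"] for g in optimizer.param_groups])  # [0.0001, 1e-05, 0.0001]
```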
- **Stability and compatibility**
  - The script disables/mocks `torchao` at startup to avoid version conflicts when importing `transformers`
  - `gradient_checkpointing=True` is recommended (on by default in v4) to relieve the memory pressure of a full 32k backward pass
  - Supports DDP/FSDP (FSDP auto-wraps `Qwen3DecoderLayer` and the v4 custom layers)

- **Evaluation and checkpoints**
  - The evaluation metric is **answer-only** (loss/accuracy are computed only on answer tokens where `labels != -100`)
  - Outputs:
    - `eval_metrics.jsonl`
    - `final_memory_checkpoint.pt` (saves only parameters that require grad and belong to the memory/gate/embed/head groups)
    - `final_full_checkpoint.pt` (optionally saves the full state_dict)

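The answer-only metric can be sketched as follows (a simplified illustration, not the script's evaluation code; the next-token shift is omitted here): positions with `labels == -100` are excluded before computing loss and accuracy.

```python
import torch
import torch.nn.functional as F

# Minimal sketch: loss/accuracy only where labels != -100
# (prompt and padding positions are ignored).
def answer_only_metrics(logits: torch.Tensor, labels: torch.Tensor):
    mask = labels != -100
    loss = F.cross_entropy(logits[mask], labels[mask])
    acc = (logits[mask].argmax(dim=-1) == labels[mask]).float().mean()
    return loss.item(), acc.item()

logits = torch.zeros(6, 5)
logits[4, 2] = 10.0          # confident prediction of class 2 at position 4
logits[5, 3] = 10.0          # confident prediction of class 3 at position 5
labels = torch.tensor([-100, -100, -100, -100, 2, 3])  # only last two are answer tokens
loss, acc = answer_only_metrics(logits, labels)
print(acc)  # 1.0
```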
---

## How to run (examples)

The commands below are for reference only; first edit `model_path`/`data_path` in the script to the real paths on your machine.

- **Single-node multi-GPU (FSDP) training**:

  ```bash
  torchrun --standalone --nproc_per_node=4 examples/train_qwen_titans_babilong_v4.py --fsdp
  ```

- **Evaluation (eval_only)**:

  ```bash
  python examples/train_qwen_titans_babilong_v4.py --eval_only --ckpt_path ./outputs/qwen_titans_babilong_v4/final_memory_checkpoint.pt
  ```

---

## Dependency hints (not exhaustive)

Key Python packages this script depends on include: `torch`, `transformers`, `einops`, `tqdm`, `tensordict`, `assoc-scan`, `einx`, etc.

---

## License and provenance

The `titans_pytorch/*` code in this repo comes from the upstream `titans-pytorch` (MIT License); see `LICENSE`.
examples/train_qwen_titans_babilong_v4.py ADDED
@@ -0,0 +1,1902 @@
"""
Qwen3 + Titans v4 - BABILong QA1 (32k) with Cross-Chunk Gradients

Key design:
1. Freeze Qwen backbone EXCEPT embed_tokens (trainable for input adaptation)
2. Untie lm_head from embed_tokens if they share weights
3. Train: Memory modules + embed_tokens + lm_head
4. Use chunkwise_backward=False + detach_mem_state=False for TRUE cross-chunk gradients
5. Enable gradient_checkpointing to manage memory usage

Cross-chunk gradient flow:
- chunkwise_backward=False: entire sequence backward together
- detach_mem_state=False: memory state keeps gradient graph
- cross_chunk_gradient_steps: controls how many chunks back gradient flows
"""

import os
import sys

# =============================================================================
# CRITICAL: Disable torchao BEFORE importing transformers to avoid version conflicts
# =============================================================================
os.environ["TRANSFORMERS_NO_TORCHAO"] = "1"

# Mock torchao to prevent import errors
class _MockTorchAO:
    def __getattr__(self, name):
        return _MockTorchAO()
    def __call__(self, *args, **kwargs):
        return _MockTorchAO()

sys.modules['torchao'] = _MockTorchAO()
sys.modules['torchao.quantization'] = _MockTorchAO()

import json
import math
import argparse
import logging
import weakref
from contextlib import nullcontext
from dataclasses import dataclass, asdict, field
from typing import Optional, Dict, Any, List, Tuple, Callable

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.distributed as dist
from torch.utils.data import Dataset, DataLoader
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR
from torch.nn.parallel import DistributedDataParallel as DDP
from tqdm import tqdm

from einops import rearrange, repeat

# add repo root to sys.path
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

# Titans components
from titans_pytorch import NeuralMemory, MemoryMLP
from titans_pytorch.neural_memory import NeuralMemState

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)

# =============================================================================
# Configuration
# =============================================================================

@dataclass
class TrainingConfig:
    # paths
    model_path: str = "/data/huangyifei/huggingface_cache/hub/models--Qwen--Qwen3-4B-Instruct-2507/snapshots/cdbee75f17c01a7cc42f958dc650907174af0554"
    data_path: str = "/data/yty/BABILong/babilong-train-5k-samples/data/qa1/32k.json"
    output_dir: str = "./outputs/qwen_titans_babilong_v4"

    # training
    num_epochs: int = 10
    batch_size: int = 1
    gradient_accumulation_steps: int = 16
    max_grad_norm: float = 1.0

    # learning rates (v4: separate rates for memory, embed, head)
    lr_memory: float = 1e-4
    lr_memory_attention: float = 5e-5
    lr_embed: float = 1e-5  # Learning rate for embed_tokens
    lr_lm_head: float = 1e-4  # Learning rate for lm_head
    weight_decay: float = 0.01
    warmup_steps: int = 100

    # streaming / memory
    chunk_size: int = 4096
    use_memory: bool = True
    memory_chunk_size: int = 128
    memory_batch_size: int = 128
    memory_heads: int = 8
    memory_dim_head: int = 64
    memory_depth: int = 1
    memory_layer_stride: int = 8
    memory_fp32: bool = True

    # Memory state detachment - controls cross-chunk gradient flow
    # False = allow gradient flow through memory state (requires chunkwise_backward=False)
    # True = detach memory state each chunk (no cross-chunk gradients)
    detach_mem_state: bool = False  # Enable cross-chunk gradients!
    deep_memory_integration: bool = False
    memory_as_context: bool = False
    num_memory_tokens: int = 16
    memory_gate_bias: float = -2.0
    use_momentum: bool = True
    momentum_order: int = 1

    # Gradient flow control - NOW ACTIVE with chunkwise_backward=False
    # cross_chunk_gradient_steps: how many chunks back gradient can flow through memory
    gradient_checkpoint_memory: bool = False
    cross_chunk_gradient_steps: int = 2  # Allow gradient through 2 recent chunks

    # evaluation / logging
    eval_steps: int = 200
    eval_topk: int = 0
    logging_steps: int = 10
    log_every_batches: int = 80
    final_eval_print_examples: int = 10
    debug_data_samples: int = 0
    debug_label_batches: int = 0
    debug_eval_stats: bool = False
    debug_grad_norm: bool = False

    # precision
    bf16: bool = True
    fp16: bool = False
    use_tf32: bool = True
    gradient_checkpointing: bool = True  # Enable to manage memory with full-sequence backward
    chunkwise_backward: bool = False  # Disable for cross-chunk gradients

    # data
    max_length: int = 32768
    answer_reserve_tokens: int = 64
    label_prefix_tokens: int = 0
    max_samples: Optional[int] = 500

    # distributed
    use_fsdp: bool = False
    fsdp_use_orig_params: bool = True
    ddp_find_unused_parameters: bool = False

    # checkpoint
    save_full_checkpoint: bool = True
    final_ckpt_name: str = "final_memory_checkpoint.pt"
    final_full_ckpt_name: str = "final_full_checkpoint.pt"

    seed: int = 42

# =============================================================================
# Dataset
# =============================================================================

class BABILongDataset(Dataset):
    def __init__(
        self,
        data_path: str,
        tokenizer,
        max_length: int = 32768,
        answer_reserve_tokens: int = 64,
        label_prefix_tokens: int = 0,
        max_samples: Optional[int] = None,
    ):
        self.tokenizer = tokenizer
        self.max_length = max_length
        self.answer_reserve_tokens = answer_reserve_tokens
        self.label_prefix_tokens = int(label_prefix_tokens)

        logger.info(f"Loading dataset: {data_path}")
        with open(data_path, "r") as f:
            self.data = json.load(f)

        if max_samples:
            self.data = self.data[:max_samples]

        logger.info(f"Dataset size: {len(self.data)}")

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        item = self.data[idx]
        text = f"{item['input']}\n\nQuestion: {item['question']}\nAnswer:"
        target = item["target"]

        pad_id = self.tokenizer.pad_token_id or 0
        reserve = int(self.answer_reserve_tokens)

        prompt_ids = self.tokenizer(
            text,
            max_length=max(self.max_length - reserve, 1),
            truncation=True,
            add_special_tokens=True,
            return_tensors="pt",
        ).input_ids.squeeze(0)

        answer_ids = self.tokenizer(
            f" {target}",
            add_special_tokens=False,
            return_tensors="pt",
        ).input_ids.squeeze(0)

        available = max(self.max_length - prompt_ids.numel(), 0)
        answer_ids = answer_ids[:available]

        input_ids = torch.cat([prompt_ids, answer_ids], dim=0)[: self.max_length]

        labels = torch.full_like(input_ids, fill_value=-100)
        if answer_ids.numel() > 0:
            start = prompt_ids.numel()
            end = min(start + answer_ids.numel(), labels.numel())
            labels[start:end] = input_ids[start:end]
            if self.label_prefix_tokens > 0:
                prefix = min(start, self.label_prefix_tokens)
                if prefix > 0:
                    labels[start - prefix:start] = input_ids[start - prefix:start]

        seq_len = input_ids.numel()
        if seq_len < self.max_length:
            pad_len = self.max_length - seq_len
            input_ids = F.pad(input_ids, (0, pad_len), value=int(pad_id))
            labels = F.pad(labels, (0, pad_len), value=-100)
            attention_mask = torch.cat(
                [torch.ones(seq_len, dtype=torch.long), torch.zeros(pad_len, dtype=torch.long)],
                dim=0,
            )
        else:
            attention_mask = torch.ones(self.max_length, dtype=torch.long)

        return {
            "input_ids": input_ids.to(dtype=torch.long),
            "labels": labels.to(dtype=torch.long),
            "attention_mask": attention_mask,
        }


def collate_fn(batch):
    keys = batch[0].keys()
    return {k: torch.stack([b[k] for b in batch], dim=0) for k in keys}

# =============================================================================
# Memory-Augmented Attention Module (from v3)
# =============================================================================

class MemoryAugmentedAttention(nn.Module):
    """
    Deep integration of memory into attention mechanism.
    Memory provides additional context that enhances hidden states.
    """
    def __init__(
        self,
        hidden_size: int,
        num_attention_heads: int,
        num_memory_tokens: int = 16,
        memory_dim_head: int = 64,
        memory_fp32: bool = True,
        gate_bias: float = -2.0,
    ):
        super().__init__()
        self.hidden_size = hidden_size
        self.num_heads = num_attention_heads
        self.head_dim = hidden_size // num_attention_heads
        self.memory_fp32 = memory_fp32

        # Memory transformation: projects and mixes memory with hidden states
        self.memory_transform = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.SiLU(),
            nn.Linear(hidden_size, hidden_size),
        )

        # Learnable memory gate per head
        self.memory_gate = nn.Parameter(torch.full((num_attention_heads, 1, 1), gate_bias))

        # Output projection
        self.memory_output_proj = nn.Linear(hidden_size, hidden_size, bias=False)
        nn.init.zeros_(self.memory_output_proj.weight)  # Start as identity

    def forward(
        self,
        hidden_states: torch.Tensor,
        memory_context: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
    ) -> torch.Tensor:
        """
        Args:
            hidden_states: [batch, seq_len, hidden_size] - current hidden states
            memory_context: [batch, seq_len, hidden_size] - retrieved memory
            attention_mask: optional attention mask
        Returns:
            enhanced_hidden: [batch, seq_len, hidden_size]
        """
        batch_size, seq_len, _ = hidden_states.shape

        # Transform memory
        mem_transformed = self.memory_transform(memory_context)

        # Reshape to multi-head format for gating
        mem_heads = rearrange(mem_transformed, 'b n (h d) -> b h n d', h=self.num_heads)

        # Compute memory gate (sigmoid for smooth interpolation)
        gate = torch.sigmoid(self.memory_gate)  # [num_heads, 1, 1]

        # Apply gated memory enhancement
        mem_contribution = mem_heads * gate
        mem_contribution = rearrange(mem_contribution, 'b h n d -> b n (h d)')

        # Project and add to hidden states
        enhanced = hidden_states + self.memory_output_proj(mem_contribution)

        return enhanced

# =============================================================================
# Deep Memory Layer (from v3, with cross-chunk gradient flow)
# =============================================================================

class QwenDecoderLayerWithDeepMemory(nn.Module):
    """
    v4: Uses v3's deep memory integration with cross-chunk gradient flow.
    The base Qwen layer will be frozen in v4.
    """
    def __init__(
        self,
        base_layer: nn.Module,
        layer_idx: int,
        *,
        hidden_size: int,
        num_attention_heads: int,
        chunk_size: int,
        batch_size: int,
        dim_head: int,
        num_heads: int,
        memory_depth: int,
        memory_fp32: bool,
        detach_mem_state: bool,
        deep_integration: bool,
        memory_as_context: bool,
        num_memory_tokens: int,
        memory_gate_bias: float,
        use_momentum: bool,
        momentum_order: int,
        parent_model: Optional[nn.Module] = None,
    ):
        super().__init__()
        self.layer = base_layer
        self.layer_idx = layer_idx
        self.memory_fp32 = memory_fp32
        self.detach_mem_state = bool(detach_mem_state)
        self.deep_integration = deep_integration
        self.memory_as_context = memory_as_context
        self.memory_state: Optional[NeuralMemState] = None
        self.parent_model_ref = weakref.ref(parent_model) if parent_model is not None else None

        # Chunk counter for gradient flow control (v3 feature)
        self._chunk_counter = 0
        self._gradient_steps_back = 2  # Allow gradient through 2 chunks

        # Core Neural Memory module
        memory_model = MemoryMLP(
            dim=dim_head,
            depth=memory_depth,
            expansion_factor=2.0,
        )

        self.neural_memory = NeuralMemory(
            dim=hidden_size,
            chunk_size=chunk_size,
            batch_size=batch_size,
            dim_head=dim_head,
            heads=num_heads,
            model=memory_model,
            momentum=use_momentum,
            momentum_order=momentum_order,
            qk_rmsnorm=True,
            pre_rmsnorm=True,
            default_step_transform_max_lr=1e-2,
            init_adaptive_step_bias=-4.0,
            max_grad_norm=1.0,
            spectral_norm_surprises=True,
            use_accelerated_scan=False,
        )

        # Layer-level memory gate
        self.mem_gate = nn.Sequential(
            nn.Linear(hidden_size * 2, hidden_size),
            nn.SiLU(),
            nn.Linear(hidden_size, hidden_size),
            nn.Sigmoid(),
        )
        # Initialize gate to be conservative
        nn.init.zeros_(self.mem_gate[-2].weight)
        nn.init.constant_(self.mem_gate[-2].bias, memory_gate_bias)

        # Deep attention integration (from v3)
        if deep_integration:
            self.memory_attention = MemoryAugmentedAttention(
                hidden_size=hidden_size,
                num_attention_heads=num_attention_heads,
                num_memory_tokens=num_memory_tokens,
                memory_dim_head=dim_head,
                memory_fp32=memory_fp32,
                gate_bias=memory_gate_bias,
            )
        else:
            self.memory_attention = None

        # Pre-attention memory projection (from v3)
        if memory_as_context:
            self.memory_context_proj = nn.Sequential(
                nn.Linear(hidden_size, hidden_size),
                nn.SiLU(),
                nn.Linear(hidden_size, hidden_size),
            )
            nn.init.zeros_(self.memory_context_proj[-1].weight)
            nn.init.zeros_(self.memory_context_proj[-1].bias)
        else:
            self.memory_context_proj = None

        # Move to appropriate device/dtype
        try:
            layer_device = next(base_layer.parameters()).device
            layer_dtype = next(base_layer.parameters()).dtype
        except StopIteration:
            layer_device = None
            layer_dtype = None

        if layer_device is not None:
            mem_dtype = torch.float32 if memory_fp32 else layer_dtype
            self.neural_memory = self.neural_memory.to(device=layer_device, dtype=mem_dtype)
            if layer_dtype is not None:
                self.mem_gate = self.mem_gate.to(device=layer_device, dtype=layer_dtype)
                if self.memory_attention is not None:
                    self.memory_attention = self.memory_attention.to(device=layer_device, dtype=layer_dtype)
                if self.memory_context_proj is not None:
                    self.memory_context_proj = self.memory_context_proj.to(device=layer_device, dtype=layer_dtype)
    def reset_memory_state(self):
        self.memory_state = None
        self._chunk_counter = 0

    def set_gradient_steps_back(self, steps: int):
        """Control how many chunks back gradient can flow (v3 feature)."""
        self._gradient_steps_back = steps

    def _get_store_mask(self, hidden_states: torch.Tensor) -> Optional[torch.Tensor]:
        parent_model = self.parent_model_ref() if self.parent_model_ref is not None else None
        if parent_model is None or not hasattr(parent_model, "_mem_store_mask"):
            return None
        store_mask = getattr(parent_model, "_mem_store_mask")
        if store_mask is None:
            return None
        store_mask = store_mask.to(device=hidden_states.device).bool()
        if store_mask.shape[:2] != hidden_states.shape[:2]:
            return None
        return store_mask

    def _should_detach_state(self) -> bool:
        """
        Determine if memory state should be detached based on chunk counter (v3 feature).

        FIXED logic:
        - If detach_mem_state=True: Always detach (blocks all cross-chunk gradients)
        - If detach_mem_state=False: Allow gradient flow through N recent chunks
          where N = _gradient_steps_back (controlled by cross_chunk_gradient_steps)

        For cross-chunk gradient flow to work, must set detach_mem_state=False!
        """
        if self.detach_mem_state:
            return True  # Legacy behavior: always detach
        # Allow gradient flow through recent chunks
        self._chunk_counter += 1
        return self._chunk_counter > self._gradient_steps_back

    def forward(self, *args, **kwargs):
        # Get original layer output
        outputs = self.layer(*args, **kwargs)

        if isinstance(outputs, (tuple, list)):
            hidden_states = outputs[0]
            rest = outputs[1:]
        else:
            hidden_states = outputs
            rest = None

        # Get store mask
        full_store_mask = self._get_store_mask(hidden_states)

        # Prepare memory input
        mem_inp = hidden_states.float() if self.memory_fp32 else hidden_states

        # Prepare store sequence and mask
        store_seq = None
        store_mask = full_store_mask
        if store_mask is not None:
            store_seq = mem_inp
            # Skip first token if not the first chunk
            if store_mask.shape[1] > 0 and not store_mask[:, 0].any():
                store_seq = store_seq[:, 1:]
                store_mask = store_mask[:, 1:]

            # Align to chunk size
            store_chunk = self.neural_memory.store_chunk_size
            remainder = store_seq.shape[1] % store_chunk
            if remainder != 0:
                store_seq = store_seq[:, :-remainder]
                store_mask = store_mask[:, :-remainder]

        if store_mask is not None and store_seq is not None:
            if store_mask.shape[1] != store_seq.shape[1]:
                min_len = min(store_mask.shape[1], store_seq.shape[1])
                store_seq = store_seq[:, :min_len]
                store_mask = store_mask[:, :min_len]

            if store_seq.shape[1] == 0:
                store_seq = None
                store_mask = None

        # Memory computation context
        mem_ctx = (
            torch.amp.autocast(device_type=hidden_states.device.type, enabled=False)
            if self.memory_fp32
            else nullcontext()
        )

        # Determine if we should detach memory state (v3 feature)
        should_detach = self._should_detach_state()

        with mem_ctx:
            retrieved, next_state = self.neural_memory(
                mem_inp,
                store_seq=store_seq,
                state=self.memory_state,
                store_mask=store_mask,
                detach_mem_state=should_detach,
            )
            self.memory_state = next_state

        if retrieved is not None:
            retrieved = retrieved.to(dtype=hidden_states.dtype)

            # Apply store mask to retrieved memory
            if full_store_mask is not None and full_store_mask.shape[:2] == retrieved.shape[:2]:
                retrieved = retrieved * full_store_mask.unsqueeze(-1).to(dtype=retrieved.dtype)

            # ===== v3 Deep Integration =====

            # Path 1: Memory-augmented attention (if enabled)
            if self.memory_attention is not None:
                hidden_states = self.memory_attention(
                    hidden_states=hidden_states,
                    memory_context=retrieved,
                    attention_mask=None,
                )

            # Path 2: Memory as context projection (if enabled)
            if self.memory_context_proj is not None:
                context_enhancement = self.memory_context_proj(retrieved)
                hidden_states = hidden_states + context_enhancement

            # Path 3: Layer-level gated fusion (always active)
            gate = self.mem_gate(torch.cat([hidden_states, retrieved], dim=-1))
            hidden_states = hidden_states + gate * retrieved

        if rest is None:
            return hidden_states
        return (hidden_states, *rest)

# =============================================================================
# Main Model Wrapper (v4 with frozen backbone)
# =============================================================================

class QwenTitansForBABILongV4(nn.Module):
    """
    v4: Qwen3 with deep Titans memory integration and cross-chunk gradients.
    Trains: memory modules + embed_tokens + lm_head
    Frozen: Qwen transformer layers (attention, MLP, etc.)
    """
    def __init__(self, qwen_model, config: TrainingConfig):
        super().__init__()
        self.qwen = qwen_model
        self.config = config
        self.hidden_size = qwen_model.config.hidden_size
        self.num_attention_heads = qwen_model.config.num_attention_heads
        self.use_memory = bool(getattr(config, "use_memory", True))

        if self.use_memory:
            self.memory_layer_stride = int(getattr(config, "memory_layer_stride", 6))
            self.memory_layer_indices = [
                idx for idx in range(len(self.qwen.model.layers))
                if idx % self.memory_layer_stride == 0
            ]

            for layer_idx in self.memory_layer_indices:
                base_layer = self.qwen.model.layers[layer_idx]
                wrapped = QwenDecoderLayerWithDeepMemory(
                    base_layer,
                    layer_idx=layer_idx,
                    hidden_size=self.hidden_size,
                    num_attention_heads=self.num_attention_heads,
                    chunk_size=config.memory_chunk_size,
                    batch_size=config.memory_batch_size,
                    dim_head=config.memory_dim_head,
                    num_heads=config.memory_heads,
                    memory_depth=config.memory_depth,
                    memory_fp32=config.memory_fp32,
                    detach_mem_state=config.detach_mem_state,
                    deep_integration=config.deep_memory_integration,
                    memory_as_context=config.memory_as_context,
                    num_memory_tokens=config.num_memory_tokens,
                    memory_gate_bias=config.memory_gate_bias,
                    use_momentum=config.use_momentum,
                    momentum_order=config.momentum_order,
                    parent_model=self.qwen.model,
                )
                self.qwen.model.layers[layer_idx] = wrapped
        else:
            self.memory_layer_stride = 0
            self.memory_layer_indices = []

        # ===== v4 FREEZING LOGIC =====
        self._freeze_backbone()

        if self.use_memory:
            logger.info("[QwenTitansForBABILongV4] Initialized with FROZEN backbone")
            logger.info(f"  - hidden_size: {self.hidden_size}")
            logger.info(f"  - num_attention_heads: {self.num_attention_heads}")
            logger.info(f"  - chunk_size: {config.chunk_size}")
            logger.info(f"  - memory_layer_stride: {self.memory_layer_stride}")
            logger.info(f"  - memory_layers: {self.memory_layer_indices}")
            logger.info(f"  - deep_memory_integration: {config.deep_memory_integration}")
            logger.info(f"  - memory_as_context: {config.memory_as_context}")
            logger.info(f"  - detach_mem_state: {config.detach_mem_state}")
            logger.info(f"  - cross_chunk_gradient_steps: {config.cross_chunk_gradient_steps}")
        else:
            logger.info("[QwenTitansForBABILongV4] Initialized (memory disabled)")

        self._memory_layers = [
            layer for layer in self.qwen.model.layers
            if isinstance(layer, QwenDecoderLayerWithDeepMemory)
        ]
        self.qwen.model._mem_store_mask = None

        # Set gradient steps for cross-chunk gradient flow (v3 feature)
        for layer in self._memory_layers:
            layer.set_gradient_steps_back(config.cross_chunk_gradient_steps)

659
+ def _freeze_backbone(self):
+ """
+ v4: Freeze Qwen transformer layers, keep embed_tokens + lm_head trainable.
+
+ Trainable:
+ - memory modules (neural_memory, mem_gate, memory_attention)
+ - embed_tokens (for input adaptation)
+ - lm_head (for output adaptation)
+
+ Frozen:
+ - All transformer layers (attention, MLP, layernorm)
+ """
+ frozen_count = 0
+ trainable_count = 0
+ embed_count = 0
+ lm_head_count = 0
+
+ # CRITICAL: Untie lm_head from embed_tokens if they share weights
+ # This allows them to be trained independently
+ if hasattr(self.qwen, 'lm_head') and hasattr(self.qwen.model, 'embed_tokens'):
+ lm_head_weight = self.qwen.lm_head.weight
+ embed_weight = self.qwen.model.embed_tokens.weight
+ has_tied_weights = lm_head_weight.data_ptr() == embed_weight.data_ptr()
+
+ if has_tied_weights:
+ logger.info("[v4] Detected tied weights - untying lm_head from embed_tokens")
+ # Create independent lm_head with copied weights
+ new_lm_head = nn.Linear(
+ self.qwen.lm_head.in_features,
+ self.qwen.lm_head.out_features,
+ bias=self.qwen.lm_head.bias is not None,
+ device=lm_head_weight.device,
+ dtype=lm_head_weight.dtype,
+ )
+ # Copy weights
+ with torch.no_grad():
+ new_lm_head.weight.copy_(lm_head_weight)
+ if self.qwen.lm_head.bias is not None and new_lm_head.bias is not None:
+ new_lm_head.bias.copy_(self.qwen.lm_head.bias)
+ # Replace lm_head
+ self.qwen.lm_head = new_lm_head
+ logger.info(f"[v4] Created independent lm_head: {new_lm_head.weight.shape}")
+
+ # Freeze/unfreeze parameters
+ for name, param in self.named_parameters():
+ is_memory = "neural_memory" in name or "mem_gate" in name
+ is_memory_attention = "memory_attention" in name or "memory_context_proj" in name
+ is_embed_tokens = "embed_tokens" in name
+ is_lm_head = "lm_head" in name
+
+ if is_memory or is_memory_attention:
+ param.requires_grad = True
+ trainable_count += 1
+ elif is_embed_tokens:
+ # embed_tokens is TRAINABLE for input adaptation
+ param.requires_grad = True
+ trainable_count += 1
+ embed_count += 1
+ logger.info(f"[v4] embed_tokens trainable: {name}")
+ elif is_lm_head:
+ # lm_head is TRAINABLE for output adaptation
+ param.requires_grad = True
+ trainable_count += 1
+ lm_head_count += 1
+ logger.info(f"[v4] lm_head trainable: {name}")
+ else:
+ # Freeze transformer layers
+ param.requires_grad = False
+ frozen_count += 1
+
+ logger.info(f"[v4] Frozen {frozen_count} transformer layer parameters")
+ logger.info(f"[v4] Trainable {trainable_count} parameters (memory + embed: {embed_count} + lm_head: {lm_head_count})")
+
+ def _split_into_chunks(self, tensor, chunk_size):
+ seq_len = tensor.shape[1]
+ chunks = []
+ for start in range(0, seq_len, chunk_size):
+ end = min(start + chunk_size, seq_len)
+ chunks.append((start, end, tensor[:, start:end]))
+ return chunks
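For reference, the chunk arithmetic above can be sketched in isolation. The helper `split_ranges` below is a standalone illustration (not part of the snapshot) of how a `seq_len`-token sequence is cut into `[start, end)` windows of at most `chunk_size` tokens, with a shorter final window.

```python
# Standalone sketch of the chunk-splitting arithmetic used by
# _split_into_chunks: [start, end) windows of at most chunk_size tokens.
def split_ranges(seq_len: int, chunk_size: int):
    return [(start, min(start + chunk_size, seq_len))
            for start in range(0, seq_len, chunk_size)]

ranges = split_ranges(10, 4)
# the last window is shorter than chunk_size
print(ranges)  # [(0, 4), (4, 8), (8, 10)]
```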
+
+ def reset_memory_states(self):
+ for layer in self._memory_layers:
+ layer.reset_memory_state()
+
+ def _set_mem_store_mask(
+ self,
+ chunk_ids: torch.Tensor,
+ chunk_mask: Optional[torch.Tensor],
+ chunk_start: int,
+ ) -> None:
+ if not self.use_memory:
+ self.qwen.model._mem_store_mask = None
+ return
+ if chunk_mask is None:
+ if chunk_start > 0:
+ store_mask = torch.ones_like(chunk_ids, dtype=torch.bool)
+ store_mask[:, 0] = False
+ else:
+ store_mask = None
+ else:
+ # clone so the in-place edit below cannot mutate the caller's
+ # attention_mask view (chunk_mask is a slice of attention_mask)
+ store_mask = chunk_mask.to(device=chunk_ids.device).bool().clone()
+ if chunk_start > 0:
+ store_mask[:, 0] = False
+ self.qwen.model._mem_store_mask = store_mask
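The store-mask rule above can be illustrated without tensors. The helper `store_mask` below is a hypothetical plain-list sketch: every chunk after the first starts with a one-token overlap with the previous chunk (`proc_start = start - 1`), and that leading token is excluded so it is not written into memory twice.

```python
# Plain-list sketch of the store-mask rule in _set_mem_store_mask
# (illustration only): for chunks after the first, position 0 is the
# one-token overlap with the previous chunk and must not be re-stored.
def store_mask(chunk_len: int, chunk_start: int):
    mask = [True] * chunk_len
    if chunk_start > 0:
        mask[0] = False  # overlap token: skip the duplicate memory write
    return mask

print(store_mask(4, 0))    # [True, True, True, True]
print(store_mask(4, 512))  # [False, True, True, True]
```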
+
+ def get_memory_modules(self) -> List[nn.Module]:
+ if not self._memory_layers:
+ return []
+ modules = []
+ for layer in self._memory_layers:
+ modules.append(layer.neural_memory)
+ modules.append(layer.mem_gate)
+ if layer.memory_attention is not None:
+ modules.append(layer.memory_attention)
+ if layer.memory_context_proj is not None:
+ modules.append(layer.memory_context_proj)
+ return modules
+
+ def forward(
+ self,
+ input_ids: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ labels: Optional[torch.Tensor] = None,
+ return_pred_tokens: bool = False,
+ topk: int = 0,
+ chunk_start: Optional[int] = None,
+ chunk_end: Optional[int] = None,
+ reset_mem_state: bool = False,
+ ) -> Dict[str, torch.Tensor]:
+
+ # Single chunk forward (for chunkwise backward)
+ if chunk_start is not None or chunk_end is not None:
+ start = 0 if chunk_start is None else int(chunk_start)
+ end = int(chunk_end) if chunk_end is not None else None
+ return self._forward_single_chunk(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ labels=labels,
+ chunk_start=start,
+ chunk_end=end,
+ reset_mem_state=reset_mem_state,
+ )
+
+ # Full sequence forward
+ batch_size, seq_len = input_ids.shape
+ chunk_size = self.config.chunk_size
+ chunks = self._split_into_chunks(input_ids, chunk_size)
+
+ self.reset_memory_states()
+ loss_fct_sum = nn.CrossEntropyLoss(reduction="sum")
+ total_loss_sum = None
+ total_loss_tokens = 0
+ topk_correct = None
+ topk_total = None
+
+ pred_tokens_by_sample: List[List[int]] = [[] for _ in range(batch_size)]
+ target_tokens_by_sample: List[List[int]] = [[] for _ in range(batch_size)]
+
+ if topk and topk > 0:
+ device = input_ids.device
+ topk_correct = torch.tensor(0.0, device=device, dtype=torch.float32)
+ topk_total = torch.tensor(0.0, device=device, dtype=torch.float32)
+
+ for start, end, _ in chunks:
+ proc_start = max(0, start - 1)
+ chunk_ids = input_ids[:, proc_start:end]
+ chunk_labels = labels[:, proc_start:end] if labels is not None else None
+ chunk_mask = attention_mask[:, proc_start:end] if attention_mask is not None else None
+
+ self._set_mem_store_mask(chunk_ids, chunk_mask, start)
+ hidden_full = self._process_chunk(chunk_ids, chunk_mask)
+ if self.use_memory:
+ self.qwen.model._mem_store_mask = None
+
+ if chunk_labels is not None and (chunk_labels != -100).any():
+ chunk_labels_local = chunk_labels.to(device=hidden_full.device)
+ shift_hidden = hidden_full[:, :-1, :].contiguous()
+ shift_labels = chunk_labels_local[:, 1:].contiguous()
+
+ valid = shift_labels != -100
+ if valid.any():
+ hs = shift_hidden[valid]
+ targets = shift_labels[valid]
+
+ hs = torch.nan_to_num(hs.float(), nan=0.0, posinf=0.0, neginf=0.0)
+ logits = self.qwen.lm_head(hs)
+ logits = logits.float()
+ logits = torch.nan_to_num(logits, nan=0.0, posinf=0.0, neginf=0.0)
+ targets = targets.to(device=logits.device)
+
+ chunk_loss_sum = loss_fct_sum(logits, targets)
+ if total_loss_sum is None:
+ total_loss_sum = chunk_loss_sum
+ else:
+ total_loss_sum = total_loss_sum + chunk_loss_sum
+ total_loss_tokens += targets.numel()
+
+ if topk and topk > 0:
+ k = min(int(topk), logits.shape[-1])
+ topk_ids = torch.topk(logits, k=k, dim=-1).indices
+ correct = (topk_ids == targets.unsqueeze(-1)).any(dim=-1)
+ topk_correct = topk_correct + correct.float().sum()
+ topk_total = topk_total + torch.tensor(float(targets.numel()), device=topk_total.device)
+
+ if return_pred_tokens:
+ idx = valid.nonzero(as_tuple=False)
+ pred_flat = torch.argmax(logits, dim=-1).detach().to("cpu", dtype=torch.long).tolist()
+ tgt_flat = targets.detach().to("cpu", dtype=torch.long).tolist()
+ b_idx_flat = idx[:, 0].detach().to("cpu", dtype=torch.long).tolist()
+
+ for i, b_idx in enumerate(b_idx_flat):
+ pred_tokens_by_sample[b_idx].append(int(pred_flat[i]))
+ target_tokens_by_sample[b_idx].append(int(tgt_flat[i]))
+
+ if total_loss_sum is None or total_loss_tokens == 0:
+ device = next(self.qwen.parameters()).device
+ loss = torch.zeros((), device=device, dtype=torch.float32)
+ else:
+ loss = total_loss_sum / total_loss_tokens
+
+ out: Dict[str, torch.Tensor] = {"loss": loss}
+ if return_pred_tokens:
+ lengths = torch.tensor([len(x) for x in target_tokens_by_sample], dtype=torch.long)
+ max_len = int(lengths.max().item()) if lengths.numel() > 0 else 0
+ if max_len > 0:
+ pred_mat = torch.full((batch_size, max_len), -1, dtype=torch.long)
+ tgt_mat = torch.full((batch_size, max_len), -1, dtype=torch.long)
+ for b in range(batch_size):
+ L = int(lengths[b].item())
+ if L > 0:
+ pred_mat[b, :L] = torch.tensor(pred_tokens_by_sample[b], dtype=torch.long)
+ tgt_mat[b, :L] = torch.tensor(target_tokens_by_sample[b], dtype=torch.long)
+ else:
+ pred_mat = torch.empty((batch_size, 0), dtype=torch.long)
+ tgt_mat = torch.empty((batch_size, 0), dtype=torch.long)
+ out["pred_ids"] = pred_mat
+ out["target_ids"] = tgt_mat
+ out["target_lengths"] = lengths
+ if topk and topk > 0 and topk_correct is not None and topk_total is not None:
+ out["topk_correct"] = topk_correct
+ out["topk_total"] = topk_total
+ return out
+
+ def _forward_single_chunk(
+ self,
+ input_ids: torch.Tensor,
+ attention_mask: Optional[torch.Tensor],
+ labels: Optional[torch.Tensor],
+ chunk_start: int,
+ chunk_end: Optional[int],
+ reset_mem_state: bool,
+ ) -> Dict[str, torch.Tensor]:
+ if reset_mem_state:
+ self.reset_memory_states()
+
+ seq_len = input_ids.shape[1]
+ end = chunk_end if chunk_end is not None else min(chunk_start + self.config.chunk_size, seq_len)
+ end = min(int(end), seq_len)
+ start = max(0, int(chunk_start))
+
+ proc_start = max(0, start - 1)
+ chunk_ids = input_ids[:, proc_start:end]
+ chunk_labels = labels[:, proc_start:end] if labels is not None else None
+ chunk_mask = attention_mask[:, proc_start:end] if attention_mask is not None else None
+
+ self._set_mem_store_mask(chunk_ids, chunk_mask, start)
+ hidden_full = self._process_chunk(chunk_ids, chunk_mask)
+ if self.use_memory:
+ self.qwen.model._mem_store_mask = None
+
+ loss_fct_sum = nn.CrossEntropyLoss(reduction="sum")
+ total_loss_sum = None
+ total_loss_tokens = 0
+
+ if chunk_labels is not None and (chunk_labels != -100).any():
+ chunk_labels_local = chunk_labels.to(device=hidden_full.device)
+ shift_hidden = hidden_full[:, :-1, :].contiguous()
+ shift_labels = chunk_labels_local[:, 1:].contiguous()
+
+ valid = shift_labels != -100
+ if valid.any():
+ hs = shift_hidden[valid]
+ targets = shift_labels[valid]
+
+ hs = torch.nan_to_num(hs.float(), nan=0.0, posinf=0.0, neginf=0.0)
+ logits = self.qwen.lm_head(hs)
+ logits = logits.float()
+ logits = torch.nan_to_num(logits, nan=0.0, posinf=0.0, neginf=0.0)
+ targets = targets.to(device=logits.device)
+
+ total_loss_sum = loss_fct_sum(logits, targets)
+ total_loss_tokens = targets.numel()
+
+ if total_loss_sum is None:
+ # no supervised tokens in this chunk: emit a zero-valued loss that
+ # still carries a grad path through hidden_full so backward() is valid
+ total_loss_sum = (hidden_full.float().sum() * 0.0)
+
+ return {
+ "loss_sum": total_loss_sum,
+ "loss_tokens": total_loss_tokens,
+ "has_grad": True,
+ }
+
+ def _process_chunk(
+ self,
+ chunk_ids: torch.Tensor,
+ chunk_attention_mask: Optional[torch.Tensor] = None,
+ ) -> torch.Tensor:
+ if hasattr(self.qwen.model, "embed_tokens"):
+ token_embeds = self.qwen.model.embed_tokens(chunk_ids)
+ else:
+ token_embeds = self.qwen.get_input_embeddings()(chunk_ids)
+
+ outputs = self.qwen.model(
+ inputs_embeds=token_embeds,
+ attention_mask=chunk_attention_mask,
+ use_cache=False,
+ output_hidden_states=False,
+ return_dict=True,
+ )
+ return outputs.last_hidden_state
+
+ def get_param_groups(self, config: TrainingConfig):
+ """
+ v4: Four parameter groups with different learning rates.
+ - memory_core: neural_memory, mem_gate
+ - memory_attention: memory_attention, memory_context_proj (if exists)
+ - embed_tokens: input embeddings
+ - lm_head: output head
+ """
+ memory_core_params = []
+ memory_attention_params = []
+ embed_params = []
+ lm_head_params = []
+
+ for name, param in self.named_parameters():
+ if not param.requires_grad:
+ continue
+
+ if "neural_memory" in name or "mem_gate" in name:
+ memory_core_params.append(param)
+ elif "memory_attention" in name or "memory_context_proj" in name:
+ memory_attention_params.append(param)
+ elif "embed_tokens" in name:
+ embed_params.append(param)
+ elif "lm_head" in name:
+ lm_head_params.append(param)
+
+ param_groups = []
+ if len(memory_core_params) > 0:
+ param_groups.append({
+ "params": memory_core_params,
+ "lr": config.lr_memory,
+ "weight_decay": config.weight_decay,
+ "name": "memory_core"
+ })
+ if len(memory_attention_params) > 0:
+ param_groups.append({
+ "params": memory_attention_params,
+ "lr": config.lr_memory_attention,
+ "weight_decay": config.weight_decay,
+ "name": "memory_attention"
+ })
+ if len(embed_params) > 0:
+ param_groups.append({
+ "params": embed_params,
+ "lr": config.lr_embed,
+ "weight_decay": config.weight_decay,
+ "name": "embed_tokens"
+ })
+ if len(lm_head_params) > 0:
+ param_groups.append({
+ "params": lm_head_params,
+ "lr": config.lr_lm_head,
+ "weight_decay": config.weight_decay,
+ "name": "lm_head"
+ })
+
+ logger.info(f"[v4 Param groups] memory_core={len(memory_core_params)}, "
+ f"memory_attention={len(memory_attention_params)}, "
+ f"embed_tokens={len(embed_params)}, lm_head={len(lm_head_params)}")
+ return param_groups
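The name-based routing above can be sketched on its own. The helper `route` and the sample parameter names below are illustrative (only the substring patterns come from the code): each trainable parameter falls into exactly one bucket, and each bucket later gets its own learning rate.

```python
# Standalone sketch of the substring routing in get_param_groups
# (sample names are hypothetical; the patterns match the real code).
def route(name: str) -> str:
    if "neural_memory" in name or "mem_gate" in name:
        return "memory_core"
    if "memory_attention" in name or "memory_context_proj" in name:
        return "memory_attention"
    if "embed_tokens" in name:
        return "embed_tokens"
    if "lm_head" in name:
        return "lm_head"
    return "frozen"  # everything else stays in the frozen backbone

names = [
    "qwen.model.layers.0.neural_memory.w1",
    "qwen.model.embed_tokens.weight",
    "qwen.lm_head.weight",
    "qwen.model.layers.1.self_attn.q_proj.weight",
]
print([route(n) for n in names])
# ['memory_core', 'embed_tokens', 'lm_head', 'frozen']
```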
+
+
+ # =============================================================================
+ # Distributed Training Utilities (unchanged from v3)
+ # =============================================================================
+
+ def init_distributed() -> tuple:
+ if "RANK" not in os.environ or "WORLD_SIZE" not in os.environ:
+ return False, 0, 0, 1
+
+ rank = int(os.environ["RANK"])
+ world_size = int(os.environ["WORLD_SIZE"])
+ local_rank = int(os.environ.get("LOCAL_RANK", 0))
+
+ if not dist.is_available():
+ raise RuntimeError("torch.distributed not available")
+
+ if not dist.is_initialized():
+ dist.init_process_group(backend="nccl", init_method="env://")
+
+ torch.cuda.set_device(local_rank)
+ return True, rank, local_rank, world_size
+
+
+ def cleanup_distributed():
+ if dist.is_available() and dist.is_initialized():
+ dist.barrier()
+ dist.destroy_process_group()
+
+
+ def unwrap_model(model: nn.Module) -> nn.Module:
+ if hasattr(model, "module"):
+ return model.module
+ if hasattr(model, "_fsdp_wrapped_module"):
+ wrapped = getattr(model, "_fsdp_wrapped_module", None)
+ if wrapped is not None and hasattr(wrapped, "module"):
+ return wrapped.module
+ return model
+
+
+ def is_fsdp_model(model: nn.Module) -> bool:
+ try:
+ from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
+ return isinstance(model, FSDP)
+ except Exception:
+ return False
+
+
+ def manual_all_reduce_gradients(model: nn.Module, world_size: int) -> None:
+ """
+ Manually synchronize gradients across GPUs without DDP.
+ This allows cross-chunk gradients to work with multi-GPU training.
+
+ Key insight: DDP fails because it tracks parameter usage during forward.
+ By not wrapping the model with DDP, we avoid this tracking.
+ We then manually all-reduce gradients before optimizer step.
+ """
+ if world_size <= 1:
+ return
+
+ # Collect all gradients that need syncing
+ grads_to_reduce = []
+ for param in model.parameters():
+ if param.grad is not None:
+ grads_to_reduce.append(param.grad)
+
+ if len(grads_to_reduce) == 0:
+ return
+
+ # Flatten all gradients into a single buffer for efficiency
+ total_numel = sum(g.numel() for g in grads_to_reduce)
+ flat_grads = torch.zeros(total_numel, dtype=grads_to_reduce[0].dtype,
+ device=grads_to_reduce[0].device)
+
+ # Pack gradients into flat buffer
+ offset = 0
+ for grad in grads_to_reduce:
+ numel = grad.numel()
+ flat_grads[offset:offset + numel] = grad.view(-1)
+ offset += numel
+
+ # All-reduce (sum) then divide by world_size
+ dist.all_reduce(flat_grads, op=dist.ReduceOp.SUM)
+ flat_grads.div_(world_size)
+
+ # Unpack gradients back
+ offset = 0
+ for grad in grads_to_reduce:
+ numel = grad.numel()
+ grad.copy_(flat_grads[offset:offset + numel].view_as(grad))
+ offset += numel
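The pack / all-reduce(SUM) / divide / unpack pattern above reduces to a simple average. The sketch below (hypothetical helper `average_grads`, no `torch.distributed`) replaces the collective with an explicit sum over simulated workers, which is exactly what each rank ends up holding after the SUM reduction and the `div_(world_size)`.

```python
# Pure-Python sketch of manual_all_reduce_gradients' arithmetic:
# each worker contributes its flat gradient, the collective sums
# elementwise, and dividing by world_size yields the mean gradient.
def average_grads(per_worker_grads):
    world_size = len(per_worker_grads)
    # elementwise sum across workers (stands in for all_reduce SUM)
    summed = [sum(vals) for vals in zip(*per_worker_grads)]
    # divide by world size, as flat_grads.div_(world_size) does
    return [v / world_size for v in summed]

grads = average_grads([[1.0, 2.0], [3.0, 6.0]])
print(grads)  # [2.0, 4.0]
```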
+
+
+ # =============================================================================
+ # Trainer (unchanged from v3, works with v4's frozen backbone)
+ # =============================================================================
+
+ class Trainer:
+ def __init__(
+ self,
+ model: QwenTitansForBABILongV4,
+ train_dataloader: DataLoader,
+ eval_dataloader: DataLoader,
+ config: TrainingConfig,
+ rank: int = 0,
+ world_size: int = 1,
+ is_distributed: bool = False,
+ tokenizer=None,
+ use_manual_grad_sync: bool = False, # v4: for cross-chunk gradients with multi-GPU
+ ):
+ self.model = model
+ self.train_dataloader = train_dataloader
+ self.eval_dataloader = eval_dataloader
+ self.config = config
+ self.device = next(model.parameters()).device
+ self.rank = rank
+ self.world_size = world_size
+ self.is_distributed = is_distributed
+ self.is_main_process = (rank == 0)
+ self.tokenizer = tokenizer
+ self.use_manual_grad_sync = use_manual_grad_sync # v4: manual gradient sync mode
+
+ base_model = unwrap_model(self.model)
+ param_groups = base_model.get_param_groups(config)
+ self.optimizer = AdamW(param_groups)
+
+ total_steps = math.ceil(
+ (len(train_dataloader) * config.num_epochs) / max(config.gradient_accumulation_steps, 1)
+ )
+ self.scheduler = CosineAnnealingLR(self.optimizer, T_max=total_steps, eta_min=1e-7)
+
+ # torch.amp.GradScaler supersedes the deprecated torch.cuda.amp.GradScaler
+ self.scaler = torch.amp.GradScaler("cuda", enabled=config.fp16)
+ self.global_step = 0
+
+ def _get_group_lr(self, group_name: str) -> Optional[float]:
+ for group in self.optimizer.param_groups:
+ if group.get("name") == group_name:
+ return group.get("lr")
+ return None
+
+ def train(self):
+ self.model.train()
+ if self.is_main_process:
+ logger.info("=" * 60)
+ logger.info("Starting v4 training with FROZEN backbone")
+ logger.info("=" * 60)
+
+ last_epoch_loss = None
+ for epoch in range(self.config.num_epochs):
+ sampler = getattr(self.train_dataloader, "sampler", None)
+ if sampler is not None and hasattr(sampler, "set_epoch"):
+ sampler.set_epoch(epoch)
+ if self.is_main_process:
+ logger.info(f"Epoch {epoch + 1}/{self.config.num_epochs}")
+
+ epoch_loss = 0.0
+ num_batches = 0
+
+ pbar = self.train_dataloader
+ if self.is_main_process:
+ pbar = tqdm(
+ self.train_dataloader,
+ desc=f"Epoch {epoch + 1}/{self.config.num_epochs}",
+ leave=False,
+ dynamic_ncols=True,
+ )
+
+ for step, batch in enumerate(pbar):
+ batch = {k: v.to(self.device) for k, v in batch.items()}
+
+ ga = max(self.config.gradient_accumulation_steps, 1)
+ sync_gradients = ((step + 1) % ga == 0)
+ amp_enabled = self.config.fp16 or self.config.bf16
+ amp_dtype = torch.float16 if self.config.fp16 else torch.bfloat16
+
+ with torch.amp.autocast(device_type=self.device.type, enabled=amp_enabled, dtype=amp_dtype):
+ if self.config.chunkwise_backward:
+ labels = batch.get("labels")
+ if labels is not None:
+ total_tokens = int((labels[:, 1:] != -100).sum().item())
+ else:
+ total_tokens = 0
+ loss_scale = 0.0 if total_tokens == 0 else (1.0 / total_tokens / ga)
+
+ seq_len = batch["input_ids"].shape[1]
+ chunk_size = int(self.config.chunk_size)
+ chunk_ranges = [
+ (start, min(start + chunk_size, seq_len))
+ for start in range(0, seq_len, chunk_size)
+ ]
+ raw_loss_sum = None
+
+ for idx, (start, end) in enumerate(chunk_ranges):
+ is_last_chunk = (idx == len(chunk_ranges) - 1)
+ sync_chunk = sync_gradients and is_last_chunk
+ # no_sync only available when model is wrapped with DDP/FSDP
+ use_no_sync = (
+ self.is_distributed and
+ not sync_chunk and
+ not self.use_manual_grad_sync and
+ hasattr(self.model, 'no_sync')
+ )
+ chunk_ctx = self.model.no_sync if use_no_sync else nullcontext
+ with chunk_ctx():
+ outputs = self.model(
+ input_ids=batch["input_ids"],
+ attention_mask=batch["attention_mask"],
+ labels=labels,
+ chunk_start=start,
+ chunk_end=end,
+ reset_mem_state=(idx == 0),
+ )
+ chunk_loss_sum = outputs["loss_sum"]
+ if raw_loss_sum is None:
+ raw_loss_sum = chunk_loss_sum.detach()
+ else:
+ raw_loss_sum = raw_loss_sum + chunk_loss_sum.detach()
+
+ scaled_loss = chunk_loss_sum * float(loss_scale)
+ if self.config.fp16:
+ self.scaler.scale(scaled_loss).backward()
+ else:
+ scaled_loss.backward()
+
+ if raw_loss_sum is None or total_tokens == 0:
+ raw_loss = torch.zeros((), device=self.device, dtype=torch.float32)
+ else:
+ raw_loss = raw_loss_sum / total_tokens
+ loss = raw_loss / ga
+ else:
+ # For manual grad sync mode, no_sync is not available (model not wrapped)
+ # For DDP/FSDP, use no_sync during gradient accumulation
+ use_no_sync = (
+ self.is_distributed and
+ not sync_gradients and
+ not self.use_manual_grad_sync and # no_sync not available in manual mode
+ hasattr(self.model, 'no_sync')
+ )
+ ctx = self.model.no_sync if use_no_sync else nullcontext
+ with ctx():
+ outputs = self.model(
+ input_ids=batch["input_ids"],
+ attention_mask=batch["attention_mask"],
+ labels=batch["labels"],
+ )
+ raw_loss = outputs["loss"]
+ loss = raw_loss / ga
+
+ if self.config.fp16:
+ self.scaler.scale(loss).backward()
+ else:
+ loss.backward()
+
+ epoch_loss += raw_loss.detach().float().item()
+ num_batches += 1
+
+ if sync_gradients:
+ grad_norm = None
+
+ # v4: Manual gradient sync for cross-chunk gradients with multi-GPU
+ # This replaces DDP's automatic gradient sync
+ if self.use_manual_grad_sync and self.world_size > 1:
+ if self.config.fp16:
+ self.scaler.unscale_(self.optimizer)
+ manual_all_reduce_gradients(self.model, self.world_size)
+
+ if self.config.fp16:
+ if not self.use_manual_grad_sync: # Only unscale if not already done
+ self.scaler.unscale_(self.optimizer)
+ grad_norm = torch.nn.utils.clip_grad_norm_(self.model.parameters(), self.config.max_grad_norm)
+ self.scaler.step(self.optimizer)
+ self.scaler.update()
+ else:
+ grad_norm = torch.nn.utils.clip_grad_norm_(self.model.parameters(), self.config.max_grad_norm)
+ self.optimizer.step()
+
+ self.scheduler.step()
+ self.optimizer.zero_grad(set_to_none=True)
+ self.global_step += 1
+
+ if self.is_main_process:
+ avg_loss = epoch_loss / max(num_batches, 1)
+ pbar.set_postfix({"gstep": self.global_step, "loss": f"{avg_loss:.4f}"})
+
+ if self.global_step % self.config.logging_steps == 0 and self.is_main_process:
+ lr_mem = self._get_group_lr("memory_core") or 0.0
+ lr_embed = self._get_group_lr("embed_tokens") or 0.0
+ lr_lm_head = self._get_group_lr("lm_head") or 0.0
+ grad_note = ""
+ if self.config.debug_grad_norm and grad_norm is not None:
+ grad_note = f" | grad_norm={float(grad_norm):.4f}"
+ logger.info(
+ f"Step {self.global_step} | loss={epoch_loss / max(num_batches, 1):.4f} | "
+ f"lr_mem={lr_mem:.2e} | lr_embed={lr_embed:.2e} | lr_lm_head={lr_lm_head:.2e}{grad_note}"
+ )
+
+ if self.global_step % self.config.eval_steps == 0:
+ eval_metrics = self.evaluate()
+ if self.is_main_process:
+ logger.info(
+ f"Step {self.global_step}: "
+ f"eval_loss={eval_metrics['loss']:.4f}, "
+ f"em_acc={eval_metrics['em_acc'] * 100:.2f}%, "
+ f"tok_acc={eval_metrics['tok_acc'] * 100:.2f}%"
+ )
+ self.model.train()
+
+ avg_epoch_loss = epoch_loss / max(num_batches, 1)
+ if self.is_distributed:
+ t = torch.tensor(avg_epoch_loss, device=self.device, dtype=torch.float32)
+ dist.all_reduce(t, op=dist.ReduceOp.SUM)
+ avg_epoch_loss = (t / self.world_size).item()
+
+ if self.is_main_process:
+ logger.info(f"Epoch {epoch + 1} done, avg loss={avg_epoch_loss:.4f}")
+ last_epoch_loss = avg_epoch_loss
+
+ eval_metrics = self.evaluate()
+ if self.is_main_process:
+ logger.info(
+ f"[EPOCH {epoch + 1} EVAL] "
+ f"eval_loss={eval_metrics['loss']:.4f}, "
+ f"em_acc={eval_metrics['em_acc'] * 100:.2f}%, "
+ f"tok_acc={eval_metrics['tok_acc'] * 100:.2f}%"
+ )
+ self._append_eval_metrics(
+ eval_metrics,
+ phase="epoch",
+ epoch=int(epoch + 1),
+ train_avg_loss=avg_epoch_loss,
+ )
+ self.model.train()
+
+ if self.is_main_process:
+ logger.info("Training done, final evaluation")
+
+ final_eval = self.evaluate(print_examples=int(self.config.final_eval_print_examples))
+ if self.is_main_process:
+ ppl = float(math.exp(min(20.0, final_eval["loss"])))
+ logger.info(
+ f"[FINAL EVAL] loss={final_eval['loss']:.4f}, ppl={ppl:.3f}, "
+ f"em_acc={final_eval['em_acc'] * 100:.2f}%, "
+ f"tok_acc={final_eval['tok_acc'] * 100:.2f}%"
+ )
+ logger.info("Saving final checkpoint")
+ self._append_eval_metrics(
+ final_eval,
+ phase="final",
+ epoch=int(self.config.num_epochs),
+ train_avg_loss=last_epoch_loss,
+ )
+ self.save_final_checkpoint()
+
+ @torch.no_grad()
+ def evaluate(self, print_examples: int = 0) -> Dict[str, float]:
+ self.model.eval()
+ total_loss = torch.tensor(0.0, device=self.device, dtype=torch.float32)
+ total_batches = torch.tensor(0.0, device=self.device, dtype=torch.float32)
+
+ total_tok_correct = torch.tensor(0.0, device=self.device, dtype=torch.float32)
+ total_tok_total = torch.tensor(0.0, device=self.device, dtype=torch.float32)
+ total_em_correct = torch.tensor(0.0, device=self.device, dtype=torch.float32)
+ total_em_total = torch.tensor(0.0, device=self.device, dtype=torch.float32)
+ printed = 0
+
+ for batch in self.eval_dataloader:
+ batch = {k: v.to(self.device) for k, v in batch.items()}
+ amp_enabled = self.config.fp16 or self.config.bf16
+ amp_dtype = torch.float16 if self.config.fp16 else torch.bfloat16
+ with torch.amp.autocast(device_type=self.device.type, enabled=amp_enabled, dtype=amp_dtype):
+ outputs = self.model(
+ input_ids=batch["input_ids"],
+ attention_mask=batch["attention_mask"],
+ labels=batch["labels"],
+ return_pred_tokens=True,
+ topk=int(self.config.eval_topk) if self.config.eval_topk else 0,
+ )
+
+ if torch.isfinite(outputs["loss"]):
+ total_loss += outputs["loss"].detach().float()
+ total_batches += 1.0
+
+ pred_ids = outputs.get("pred_ids", None)
+ target_ids = outputs.get("target_ids", None)
+ lengths = outputs.get("target_lengths", None)
+
+ if (
+ pred_ids is not None
+ and target_ids is not None
+ and lengths is not None
+ and pred_ids.ndim == 2
+ and target_ids.ndim == 2
+ and lengths.ndim == 1
+ and pred_ids.shape == target_ids.shape
+ and pred_ids.shape[0] == lengths.shape[0]
+ ):
+ pred_cpu = pred_ids.to("cpu", dtype=torch.long)
+ tgt_cpu = target_ids.to("cpu", dtype=torch.long)
+ len_cpu = lengths.to("cpu", dtype=torch.long)
+
+ for i in range(int(len_cpu.shape[0])):
+ L = int(len_cpu[i].item())
+ if L <= 0:
+ continue
+ p = pred_cpu[i, :L]
+ t = tgt_cpu[i, :L]
+
+ total_tok_correct += torch.tensor(float((p == t).sum().item()), device=self.device, dtype=torch.float32)
+ total_tok_total += torch.tensor(float(L), device=self.device, dtype=torch.float32)
+
+ if self.tokenizer is not None:
+ pred_text = self.tokenizer.decode(p.tolist(), skip_special_tokens=True).strip()
+ tgt_text = self.tokenizer.decode(t.tolist(), skip_special_tokens=True).strip()
+ em = float(pred_text == tgt_text)
+ total_em_correct += torch.tensor(em, device=self.device, dtype=torch.float32)
+ total_em_total += torch.tensor(1.0, device=self.device, dtype=torch.float32)
+
+ if self.is_main_process and printed < print_examples:
+ logger.info(f"[EVAL SAMPLE] pred={repr(pred_text)} | label={repr(tgt_text)} | match={bool(em)}")
+ printed += 1
+
+ if self.is_distributed:
+ dist.all_reduce(total_loss, op=dist.ReduceOp.SUM)
+ dist.all_reduce(total_batches, op=dist.ReduceOp.SUM)
+ dist.all_reduce(total_tok_correct, op=dist.ReduceOp.SUM)
+ dist.all_reduce(total_tok_total, op=dist.ReduceOp.SUM)
+ dist.all_reduce(total_em_correct, op=dist.ReduceOp.SUM)
+ dist.all_reduce(total_em_total, op=dist.ReduceOp.SUM)
+
+ avg_loss = (total_loss / total_batches.clamp(min=1.0)).item()
+ tok_acc = (total_tok_correct / total_tok_total.clamp(min=1.0)).item()
+ em_acc = (total_em_correct / total_em_total.clamp(min=1.0)).item()
+
+ return {"loss": avg_loss, "tok_acc": tok_acc, "em_acc": em_acc}
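The two accuracies reported by `evaluate()` can be sketched without tensors: token accuracy is a per-token match rate over all answer tokens, while exact match (EM) compares whole decoded answers. The helper `token_and_em_acc` below is an illustration only; strings stand in for decoded token sequences.

```python
# Sketch of the evaluate() metrics: per-token accuracy vs. exact match.
def token_and_em_acc(preds, targets):
    tok_correct = tok_total = em_correct = 0
    for p, t in zip(preds, targets):
        tok_correct += sum(a == b for a, b in zip(p, t))  # matching positions
        tok_total += len(t)
        em_correct += int(p == t)  # whole answer must match exactly
    return tok_correct / tok_total, em_correct / len(targets)

tok_acc, em_acc = token_and_em_acc(["cat", "dog"], ["cat", "dot"])
print(tok_acc, em_acc)  # 5 of 6 positions match, 1 of 2 answers exact
```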
+
1476
+ def _append_eval_metrics(
1477
+ self,
1478
+ metrics: Dict[str, float],
1479
+ *,
1480
+ phase: str,
1481
+ epoch: Optional[int],
1482
+ train_avg_loss: Optional[float],
1483
+ ) -> None:
1484
+ if not self.is_main_process:
1485
+ return
1486
+ os.makedirs(self.config.output_dir, exist_ok=True)
1487
+ record = {
1488
+ "phase": phase,
1489
+ "epoch": epoch,
1490
+ "global_step": int(self.global_step),
1491
+ "train_avg_loss": None if train_avg_loss is None else float(train_avg_loss),
1492
+ "eval_loss": float(metrics.get("loss", 0.0)),
1493
+ "em_acc_pct": float(metrics.get("em_acc", 0.0) * 100.0),
1494
+ "tok_acc_pct": float(metrics.get("tok_acc", 0.0) * 100.0),
1495
+ }
1496
+ metrics_path = os.path.join(self.config.output_dir, "eval_metrics.jsonl")
1497
+ with open(metrics_path, "a") as f:
1498
+ f.write(json.dumps(record) + "\n")
1499
+
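For reference, the `eval_metrics.jsonl` rows appended above can be read back one JSON object per line. A minimal sketch; the field names are copied from the record dict above, the values are made up:

```python
import json
import os
import tempfile

# Hypothetical record mirroring the fields written by _append_eval_metrics
record = {
    "phase": "eval",
    "epoch": 1,
    "global_step": 500,
    "train_avg_loss": 1.23,
    "eval_loss": 1.10,
    "em_acc_pct": 42.0,
    "tok_acc_pct": 63.5,
}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "eval_metrics.jsonl")
    # append one JSON object per line, as the trainer does
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    # read the log back: one json.loads per line
    with open(path) as f:
        rows = [json.loads(line) for line in f]
```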
1500
+ def save_final_checkpoint(self):
1501
+ ckpt_path = os.path.join(self.config.output_dir, self.config.final_ckpt_name)
1502
+ base_model = unwrap_model(self.model)
1503
+
1504
+ # Save memory-related and trainable parameters
1505
+ memory_sd = {
1506
+ name: p.detach().cpu()
1507
+ for name, p in base_model.named_parameters()
1508
+ if p.requires_grad and (
1509
+ ("neural_memory" in name) or ("mem_gate" in name) or
1510
+ ("memory_attention" in name) or ("memory_context_proj" in name) or
1511
+ ("embed_tokens" in name) or ("lm_head" in name)
1512
+ )
1513
+ }
1514
+
1515
+ if is_fsdp_model(self.model) and len(memory_sd) == 0:
1516
+ from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, StateDictType, FullStateDictConfig
1517
+ full_cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
1518
+ with FSDP.state_dict_type(self.model, StateDictType.FULL_STATE_DICT, full_cfg):
1519
+ full_sd = self.model.state_dict()
1520
+ memory_sd = {
1521
+ k: v for k, v in full_sd.items()
1522
+ if ("neural_memory" in k) or ("mem_gate" in k) or
1523
+ ("memory_attention" in k) or ("memory_context_proj" in k) or
1524
+ ("embed_tokens" in k) or ("lm_head" in k)
1525
+ }
1526
+
1527
+ if self.is_main_process:
1528
+ torch.save(
1529
+ {"memory_state_dict": memory_sd, "global_step": self.global_step, "config": asdict(self.config)},
1530
+ ckpt_path,
1531
+ )
1532
+ logger.info(f"Saved memory checkpoint: {ckpt_path}")
1533
+ if self.is_distributed:
1534
+ dist.barrier()
1535
+
1536
+ if self.config.save_full_checkpoint:
1537
+ full_ckpt_path = os.path.join(self.config.output_dir, self.config.final_full_ckpt_name)
1538
+ if is_fsdp_model(self.model):
1539
+ from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, StateDictType, FullStateDictConfig
1540
+ full_cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
1541
+ with FSDP.state_dict_type(self.model, StateDictType.FULL_STATE_DICT, full_cfg):
1542
+ full_sd = self.model.state_dict()
1543
+ else:
1544
+ full_sd = unwrap_model(self.model).state_dict()
1545
+
1546
+ if self.is_main_process:
1547
+ torch.save(
1548
+ {"model_state_dict": full_sd, "global_step": self.global_step, "config": asdict(self.config)},
1549
+ full_ckpt_path,
1550
+ )
1551
+ logger.info(f"Saved full checkpoint: {full_ckpt_path}")
1552
+ if self.is_distributed:
1553
+ dist.barrier()
1554
+
1555
+
1556
+ # =============================================================================
1557
+ # Main
1558
+ # =============================================================================
1559
+
1560
+ def main():
1561
+ from transformers import AutoModelForCausalLM, AutoTokenizer
1562
+
1563
+ parser = argparse.ArgumentParser(description="Qwen3 + Titans v4 - Frozen Backbone Training")
1564
+ parser.add_argument("--fsdp", action="store_true")
1565
+ parser.add_argument("--eval_only", action="store_true")
1566
+ parser.add_argument("--ckpt_path", type=str, default=None)
1567
+ parser.add_argument("--max_samples", type=int, default=None)
1568
+ parser.add_argument("--max_length", type=int, default=None)
1569
+ parser.add_argument("--output_dir", type=str, default=None)
1570
+ parser.add_argument("--num_epochs", type=int, default=None)
1571
+ parser.add_argument("--eval_steps", type=int, default=None)
1572
+ parser.add_argument("--batch_size", type=int, default=None)
1573
+ parser.add_argument("--gradient_accumulation_steps", type=int, default=None)
1574
+ parser.add_argument("--chunk_size", type=int, default=None)
1575
+ parser.add_argument("--memory_layer_stride", type=int, default=None)
1576
+ parser.add_argument("--no_memory", action="store_true")
1577
+ parser.add_argument("--gradient_checkpointing", action="store_true")
1578
+ parser.add_argument("--no_chunkwise_backward", action="store_true")
1579
+
1580
+ # v4 specific arguments
1581
+ parser.add_argument("--detach_mem_state", action="store_true",
1582
+ help="Detach memory state (disable cross-chunk gradients)")
1583
+ parser.add_argument("--no_deep_integration", action="store_true",
1584
+ help="Disable deep attention integration")
1585
+ parser.add_argument("--no_memory_as_context", action="store_true",
1586
+ help="Disable memory-as-context projection")
1587
+ parser.add_argument("--cross_chunk_gradient_steps", type=int, default=None,
1588
+ help="Number of chunks to allow gradient flow through")
1589
+ parser.add_argument("--memory_depth", type=int, default=None)
1590
+ parser.add_argument("--num_memory_tokens", type=int, default=None)
1591
+ parser.add_argument("--lr_embed", type=float, default=None,
1592
+ help="Learning rate for embed_tokens")
1593
+ parser.add_argument("--lr_lm_head", type=float, default=None,
1594
+ help="Learning rate for lm_head")
1595
+
1596
+ parser.add_argument("--debug_grad_norm", action="store_true")
1597
+ args = parser.parse_args()
1598
+
1599
+ config = TrainingConfig()
1600
+
1601
+ # Apply arguments
1602
+ if args.fsdp:
1603
+ config.use_fsdp = True
1604
+ if args.no_memory:
1605
+ config.use_memory = False
1606
+ if args.max_samples is not None:
1607
+ config.max_samples = args.max_samples
1608
+ if args.max_length is not None:
1609
+ config.max_length = int(args.max_length)
1610
+ if args.output_dir is not None:
1611
+ config.output_dir = args.output_dir
1612
+ elif not config.use_memory:
1613
+ config.output_dir = "./outputs/qwen_babilong_no_memory_v4"
1614
+ if args.num_epochs is not None:
1615
+ config.num_epochs = args.num_epochs
1616
+ if args.eval_steps is not None:
1617
+ config.eval_steps = args.eval_steps
1618
+ if args.batch_size is not None:
1619
+ config.batch_size = int(args.batch_size)
1620
+ if args.gradient_accumulation_steps is not None:
1621
+ config.gradient_accumulation_steps = int(args.gradient_accumulation_steps)
1622
+ if args.chunk_size is not None:
1623
+ config.chunk_size = int(args.chunk_size)
1624
+ if args.memory_layer_stride is not None:
1625
+ config.memory_layer_stride = int(args.memory_layer_stride)
1626
+ if args.gradient_checkpointing:
1627
+ config.gradient_checkpointing = True
1628
+ if args.no_chunkwise_backward:
1629
+ config.chunkwise_backward = False
1630
+
1631
+ # v4 specific
1632
+ if args.detach_mem_state:
1633
+ config.detach_mem_state = True
1634
+ if args.no_deep_integration:
1635
+ config.deep_memory_integration = False
1636
+ if args.no_memory_as_context:
1637
+ config.memory_as_context = False
1638
+ if args.cross_chunk_gradient_steps is not None:
1639
+ config.cross_chunk_gradient_steps = int(args.cross_chunk_gradient_steps)
1640
+ if args.memory_depth is not None:
1641
+ config.memory_depth = int(args.memory_depth)
1642
+ if args.num_memory_tokens is not None:
1643
+ config.num_memory_tokens = int(args.num_memory_tokens)
1644
+ if args.lr_embed is not None:
1645
+ config.lr_embed = float(args.lr_embed)
1646
+ if args.lr_lm_head is not None:
1647
+ config.lr_lm_head = float(args.lr_lm_head)
1648
+ if args.debug_grad_norm:
1649
+ config.debug_grad_norm = True
1650
+
1651
+ is_distributed, rank, local_rank, world_size = init_distributed()
1652
+ is_main = (rank == 0)
1653
+
1654
+ if config.use_fsdp and config.chunkwise_backward:
1655
+ if is_main:
1656
+ logger.warning("chunkwise_backward is incompatible with FSDP; disabling it.")
1657
+ config.chunkwise_backward = False
1658
+
1659
+ # Note: gradient_checkpointing is REQUIRED for full-sequence backward with 32k tokens
1660
+ # Keeping it enabled even with DDP for v4's cross-chunk gradient mode
1661
+ if is_distributed and (not config.use_fsdp) and config.gradient_checkpointing:
1662
+ if is_main:
1663
+ logger.info("gradient_checkpointing enabled for cross-chunk gradient mode")
1664
+
1665
+ if is_distributed and (not config.use_fsdp):
1666
+ if not config.ddp_find_unused_parameters:
1667
+ config.ddp_find_unused_parameters = True
1668
+ if is_main:
1669
+ logger.warning("Enabling DDP find_unused_parameters.")
1670
+
1671
+ torch.manual_seed(config.seed + rank)
1672
+
1673
+ if torch.cuda.is_available():
1674
+ device = torch.device(f"cuda:{local_rank}" if is_distributed else "cuda")
1675
+ else:
1676
+ device = torch.device("cpu")
1677
+
1678
+ if torch.cuda.is_available() and config.bf16:
1679
+ bf16_supported = False
1680
+ try:
1681
+ bf16_supported = torch.cuda.is_bf16_supported()
1682
+ except Exception:
1683
+ bf16_supported = False
1684
+ if not bf16_supported:
1685
+ if is_main:
1686
+ logger.warning("bf16 not supported; falling back to fp16.")
1687
+ config.bf16 = False
1688
+ if not config.fp16:
1689
+ config.fp16 = True
1690
+
1691
+ if torch.cuda.is_available() and getattr(config, "use_tf32", False):
1692
+ torch.backends.cuda.matmul.allow_tf32 = True
1693
+ torch.backends.cudnn.allow_tf32 = True
1694
+ try:
1695
+ torch.set_float32_matmul_precision("high")
1696
+ except Exception:
1697
+ pass
1698
+
1699
+ if is_main:
1700
+ logger.info("=" * 70)
1701
+ logger.info("Qwen3-4B + Titans v4 Training (FROZEN BACKBONE)")
1702
+ logger.info("=" * 70)
1703
+ logger.info(f"distributed={is_distributed}, world_size={world_size}")
1704
+ logger.info(f"model_path={config.model_path}")
1705
+ logger.info(f"data_path={config.data_path}")
1706
+ logger.info(f"output_dir={config.output_dir}")
1707
+ logger.info(f"max_samples={config.max_samples}")
1708
+ logger.info(f"max_length={config.max_length}")
1709
+ logger.info(f"num_epochs={config.num_epochs}")
1710
+ logger.info(f"chunk_size={config.chunk_size}")
1711
+ logger.info(f"use_memory={config.use_memory}")
1712
+ if config.use_memory:
1713
+ logger.info(f"memory_layer_stride={config.memory_layer_stride}")
1714
+ logger.info(f"memory_depth={config.memory_depth}")
1715
+ logger.info(f"deep_memory_integration={config.deep_memory_integration}")
1716
+ logger.info(f"memory_as_context={config.memory_as_context}")
1717
+ logger.info(f"detach_mem_state={config.detach_mem_state}")
1718
+ logger.info(f"cross_chunk_gradient_steps={config.cross_chunk_gradient_steps}")
1719
+ logger.info(f"num_memory_tokens={config.num_memory_tokens}")
1720
+ logger.info("=" * 70)
1721
+ logger.info("v4 FEATURE: Cross-chunk gradients enabled")
1722
+ logger.info(f"chunkwise_backward={config.chunkwise_backward}, detach_mem_state={config.detach_mem_state}")
1723
+ logger.info("Trainable: Memory + embed_tokens + lm_head")
1724
+ logger.info("=" * 70)
1725
+
1726
+ tokenizer = AutoTokenizer.from_pretrained(config.model_path, trust_remote_code=True)
1727
+ if tokenizer.pad_token is None:
1728
+ tokenizer.pad_token = tokenizer.eos_token
1729
+
1730
+ # Disable flash-attn / torchao / torchvision availability checks
1731
+ try:
1732
+ import transformers
1733
+ from transformers.utils import import_utils as _import_utils
1734
+
1735
+ def _disabled(*args, **kwargs):
1736
+ return False
1737
+
1738
+ _import_utils.is_flash_attn_2_available = _disabled
1739
+ if hasattr(transformers, "utils") and hasattr(transformers.utils, "is_flash_attn_2_available"):
1740
+ transformers.utils.is_flash_attn_2_available = _disabled
1741
+ if hasattr(_import_utils, "is_torchao_available"):
1742
+ _import_utils.is_torchao_available = _disabled
1743
+ if hasattr(_import_utils, "is_torchvision_available"):
1744
+ _import_utils.is_torchvision_available = _disabled
1745
+ except Exception as e:
1746
+ if is_main:
1747
+ logger.warning(f"Disable checks failed (ignored): {e}")
1748
+
1749
+ torch_dtype = torch.bfloat16 if config.bf16 else (torch.float16 if config.fp16 else torch.float32)
1750
+
1751
+ qwen_model = AutoModelForCausalLM.from_pretrained(
1752
+ config.model_path,
1753
+ torch_dtype=torch_dtype,
1754
+ device_map=None,
1755
+ trust_remote_code=True,
1756
+ attn_implementation="sdpa",
1757
+ low_cpu_mem_usage=True,
1758
+ )
1759
+ qwen_model.to(device)
1760
+ qwen_model.config.use_cache = False
1761
+ if config.gradient_checkpointing and hasattr(qwen_model, "gradient_checkpointing_enable"):
1762
+ qwen_model.gradient_checkpointing_enable()
1763
+
1764
+ train_dataset = BABILongDataset(
1765
+ config.data_path,
1766
+ tokenizer,
1767
+ max_length=config.max_length,
1768
+ answer_reserve_tokens=config.answer_reserve_tokens,
1769
+ label_prefix_tokens=config.label_prefix_tokens,
1770
+ max_samples=config.max_samples,
1771
+ )
1772
+
1773
+ train_size = int(0.9 * len(train_dataset))
1774
+ eval_size = len(train_dataset) - train_size
1775
+ train_dataset, eval_dataset = torch.utils.data.random_split(
1776
+ train_dataset,
1777
+ [train_size, eval_size],
1778
+ generator=torch.Generator().manual_seed(config.seed),
1779
+ )
1780
+
1781
+ train_sampler = None
1782
+ eval_sampler = None
1783
+ if is_distributed:
1784
+ from torch.utils.data.distributed import DistributedSampler
1785
+ train_sampler = DistributedSampler(train_dataset, num_replicas=world_size, rank=rank, shuffle=True, seed=config.seed)
1786
+ eval_sampler = DistributedSampler(eval_dataset, num_replicas=world_size, rank=rank, shuffle=False)
1787
+
1788
+ train_dataloader = DataLoader(
1789
+ train_dataset,
1790
+ batch_size=config.batch_size,
1791
+ shuffle=(train_sampler is None),
1792
+ sampler=train_sampler,
1793
+ collate_fn=collate_fn,
1794
+ num_workers=0,
1795
+ )
1796
+ eval_dataloader = DataLoader(
1797
+ eval_dataset,
1798
+ batch_size=config.batch_size,
1799
+ shuffle=False,
1800
+ sampler=eval_sampler,
1801
+ collate_fn=collate_fn,
1802
+ num_workers=0,
1803
+ )
1804
+
1805
+ model = QwenTitansForBABILongV4(qwen_model, config)
1806
+ model.to(device)
1807
+
1808
+ # ==========================================================================
1809
+ # v4 Multi-GPU Strategy for Cross-Chunk Gradients
1810
+ # ==========================================================================
1811
+ # DDP is incompatible with cross-chunk gradients because it tracks parameter
1812
+ # usage during forward. Memory modules are used multiple times across chunks,
1813
+ # causing "ready twice" errors.
1814
+ #
1815
+ # Solution: Manual gradient synchronization
1816
+ # - Don't wrap model with DDP when chunkwise_backward=False
1817
+ # - Manually call all_reduce on gradients before optimizer step
1818
+ # - This allows true cross-chunk gradient flow with multi-GPU training
1819
+ # ==========================================================================
1820
+
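The manual-sync strategy described in the comment above can be sketched without an actual process group. This toy stand-in simulates the sum-then-average that `dist.all_reduce` plus division by `world_size` performs, with two "ranks" in one process (an illustration, not the trainer's real distributed path):

```python
import torch

def manual_grad_sync(grads_per_rank):
    # stand-in for dist.all_reduce(grad, op=SUM) on each rank,
    # followed by dividing by world_size to average
    world_size = len(grads_per_rank)
    summed = [torch.stack(gs).sum(dim=0) for gs in zip(*grads_per_rank)]
    return [g / world_size for g in summed]

# two simulated ranks, one gradient tensor each
rank0_grads = [torch.ones(2, 2)]
rank1_grads = [torch.full((2, 2), 3.0)]
synced = manual_grad_sync([rank0_grads, rank1_grads])
print(synced[0][0, 0].item())  # 2.0, the average of 1.0 and 3.0
```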
1821
+ use_ddp = is_distributed and world_size > 1
1822
+ use_manual_grad_sync = False # Track if we're using manual sync
1823
+
1824
+ if use_ddp and not config.chunkwise_backward:
1825
+ # Cross-chunk gradients mode: use manual gradient sync instead of DDP
1826
+ if is_main:
1827
+ logger.info("=" * 70)
1828
+ logger.info("Cross-chunk gradients with multi-GPU: using MANUAL gradient sync")
1829
+ logger.info("Model NOT wrapped with DDP - gradients will be all-reduced manually")
1830
+ logger.info("=" * 70)
1831
+ use_ddp = False # Don't wrap with DDP
1832
+ use_manual_grad_sync = True # Use manual gradient sync instead
1833
+ # Keep is_distributed=True for DistributedSampler to work
1834
+
1835
+ if use_ddp:
1836
+ if config.use_fsdp:
1837
+ from functools import partial
1838
+ from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision
1839
+ from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
1840
+ from transformers.models.qwen3.modeling_qwen3 import Qwen3DecoderLayer
1841
+
1842
+ mp_policy = MixedPrecision(param_dtype=torch_dtype, reduce_dtype=torch_dtype, buffer_dtype=torch_dtype)
1843
+ auto_wrap = partial(
1844
+ transformer_auto_wrap_policy,
1845
+ transformer_layer_cls={Qwen3DecoderLayer, QwenDecoderLayerWithDeepMemory}
1846
+ )
1847
+
1848
+ model = FSDP(
1849
+ model,
1850
+ auto_wrap_policy=auto_wrap,
1851
+ mixed_precision=mp_policy,
1852
+ device_id=torch.cuda.current_device(),
1853
+ use_orig_params=config.fsdp_use_orig_params,
1854
+ ignored_modules=model.get_memory_modules(),
1855
+ )
1856
+ else:
1857
+ model = DDP(
1858
+ model,
1859
+ device_ids=[local_rank],
1860
+ output_device=local_rank,
1861
+ find_unused_parameters=config.ddp_find_unused_parameters,
1862
+ )
1863
+
1864
+ trainer = Trainer(
1865
+ model=model,
1866
+ train_dataloader=train_dataloader,
1867
+ eval_dataloader=eval_dataloader,
1868
+ config=config,
1869
+ rank=rank,
1870
+ world_size=world_size,
1871
+ is_distributed=is_distributed, # Keep True for DistributedSampler
1872
+ tokenizer=tokenizer,
1873
+ use_manual_grad_sync=use_manual_grad_sync, # v4: manual gradient sync for cross-chunk
1874
+ )
1875
+
1876
+ if args.eval_only:
1877
+ ckpt_path = args.ckpt_path or os.path.join(config.output_dir, config.final_ckpt_name)
1878
+ if is_main:
1879
+ logger.info(f"eval_only: loading checkpoint: {ckpt_path}")
1880
+ ckpt = torch.load(ckpt_path, map_location="cpu")
1881
+
1882
+ memory_sd = ckpt.get("memory_state_dict", {})
1883
+ if len(memory_sd) > 0:
1884
+ unwrap_model(model).load_state_dict(memory_sd, strict=False)
1885
+
1886
+ eval_metrics = trainer.evaluate()
1887
+ if is_main:
1888
+ ppl = float(math.exp(min(20.0, eval_metrics["loss"])))
1889
+ logger.info(
1890
+ f"[EVAL] loss={eval_metrics['loss']:.4f}, ppl={ppl:.3f}, "
1891
+ f"em_acc={eval_metrics['em_acc'] * 100:.2f}%, "
1892
+ f"tok_acc={eval_metrics['tok_acc'] * 100:.2f}%"
1893
+ )
1894
+ cleanup_distributed()
1895
+ return
1896
+
1897
+ trainer.train()
1898
+ cleanup_distributed()
1899
+
1900
+
1901
+ if __name__ == "__main__":
1902
+ main()
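For reference, a minimal sketch of reading back the checkpoint dict written by `save_final_checkpoint` (the `eval_only` branch does this before `load_state_dict(..., strict=False)`). The parameter names in the dummy state dict below are hypothetical:

```python
import os
import tempfile

import torch

# Hypothetical checkpoint with the same top-level layout as
# save_final_checkpoint: memory_state_dict / global_step / config
ckpt = {
    "memory_state_dict": {
        "model.layers.4.neural_memory.some.weight": torch.zeros(8, 8),
        "lm_head.weight": torch.zeros(16, 8),
    },
    "global_step": 1000,
    "config": {"chunk_size": 512},
}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "final_memory.pt")
    torch.save(ckpt, path)

    # mirrors the eval_only path: load on CPU, pull out the partial
    # state dict, then it would be fed to the model with strict=False
    loaded = torch.load(path, map_location="cpu")
    memory_sd = loaded.get("memory_state_dict", {})
```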
titans_pytorch/__init__.py ADDED
@@ -0,0 +1,17 @@
1
+ """
2
+ Minimal `titans_pytorch` exports for `train_qwen_titans_babilong_v4.py`.
3
+
4
+ This Hugging Face repo intentionally contains only the code paths used by the v4
5
+ training script (NeuralMemory + MemoryMLP), and does NOT ship unrelated modules
6
+ from the original project.
7
+ """
8
+
9
+ from titans_pytorch.neural_memory import NeuralMemState, NeuralMemory, mem_state_detach
10
+ from titans_pytorch.memory_models import MemoryMLP
11
+
12
+ __all__ = [
13
+ "NeuralMemory",
14
+ "NeuralMemState",
15
+ "mem_state_detach",
16
+ "MemoryMLP",
17
+ ]
titans_pytorch/memory_models.py ADDED
@@ -0,0 +1,248 @@
1
+ import torch
2
+ from torch import nn, cat
3
+ import torch.nn.functional as F
4
+ from torch.nn import Module, ModuleList, Parameter, ParameterList
5
+
6
+ from einops import rearrange
7
+
8
+ # functions
9
+
10
+ def l2norm(t):
11
+ return F.normalize(t, dim = -1)
12
+
13
+ # norms
14
+
15
+ class LayerNorm(Module):
16
+ def __init__(
17
+ self,
18
+ dim
19
+ ):
20
+ super().__init__()
21
+
22
+ self.ln = nn.LayerNorm(dim, elementwise_affine = False)
23
+ self.gamma = Parameter(torch.zeros(dim))
24
+
25
+ def forward(self, x):
26
+ gamma = self.gamma
27
+
28
+ if gamma.ndim == 2:
29
+ gamma = rearrange(gamma, 'b d -> b 1 d')
30
+
31
+ return self.ln(x) * (gamma + 1.)
32
+
33
+ # norm + residual wrapper, as used in original TTT paper
34
+ # but could be removed
35
+
36
+ class ResidualNorm(Module):
37
+ def __init__(
38
+ self,
39
+ dim,
40
+ model: Module
41
+ ):
42
+ super().__init__()
43
+ self.norm = LayerNorm(dim)
44
+ self.model = model
45
+
46
+ def forward(self, x):
47
+
48
+ out = self.model(x)
49
+
50
+ return self.norm(out) + x
51
+
52
+ # memory mlp proposed in TTT
53
+
54
+ class MemoryMLP(Module):
55
+ def __init__(
56
+ self,
57
+ dim,
58
+ depth,
59
+ expansion_factor = 2.
60
+ ):
61
+ super().__init__()
62
+ dim_hidden = int(dim * expansion_factor)
63
+ dims = (dim, *((dim_hidden,) * (depth - 1)), dim)
64
+
65
+ self.weights = ParameterList([Parameter(torch.randn(dim_in, dim_out)) for dim_in, dim_out in zip(dims[:-1], dims[1:])])
66
+
67
+ for weight in self.weights:
68
+ nn.init.xavier_uniform_(weight)
69
+
70
+ def forward(
71
+ self,
72
+ x
73
+ ):
74
+ for ind, weight in enumerate(self.weights):
75
+ is_first = ind == 0
76
+
77
+ if not is_first:
78
+ x = F.gelu(x)
79
+
80
+ x = x @ weight
81
+
82
+ return x
83
+
84
+ # memory mlp, but with gated residual + final projection
85
+
86
+ class GatedResidualMemoryMLP(Module):
87
+ def __init__(
88
+ self,
89
+ dim,
90
+ depth,
91
+ expansion_factor = 4.
92
+ ):
93
+ super().__init__()
94
+ dim_hidden = int(dim * expansion_factor)
95
+
96
+ self.weights = ParameterList([
97
+ ParameterList([
98
+ Parameter(torch.randn(dim, dim_hidden)),
99
+ Parameter(torch.randn(dim_hidden, dim)),
100
+ Parameter(torch.randn(dim * 2, dim)),
101
+ ]) for _ in range(depth)
102
+ ])
103
+
104
+ self.final_proj = Parameter(torch.randn(dim, dim))
105
+
106
+ for param in self.parameters():
107
+ nn.init.xavier_uniform_(param)
108
+
109
+ def forward(
110
+ self,
111
+ x
112
+ ):
113
+
114
+ for weight1, weight2, to_gates in self.weights:
115
+ res = x
116
+
117
+ hidden = x @ weight1
118
+ hidden = F.gelu(hidden)
119
+ branch_out = hidden @ weight2
120
+
121
+ # gated residual
122
+
123
+ gates = cat((branch_out, res), dim = -1) @ to_gates
124
+ x = res.lerp(branch_out, gates.sigmoid())
125
+
126
+ return x @ self.final_proj
127
+
128
+ # memory mlp with factorized weights
129
+ # so can tradeoff capacity for smaller chunk sizes
130
+
131
+ class FactorizedMemoryMLP(Module):
132
+ def __init__(
133
+ self,
134
+ dim,
135
+ depth,
136
+ k = 32
137
+ ):
138
+ super().__init__()
139
+ self.weights = ParameterList([
140
+ ParameterList([
141
+ Parameter(torch.randn(dim, k)),
142
+ Parameter(torch.randn(k, dim)),
143
+ ]) for _ in range(depth)
144
+ ])
145
+
146
+ for weight1, weight2 in self.weights:
147
+ nn.init.xavier_uniform_(weight1)
148
+ nn.init.xavier_uniform_(weight2)
149
+
150
+ def forward(
151
+ self,
152
+ x
153
+ ):
154
+
155
+ for ind, (weight1, weight2) in enumerate(self.weights):
156
+ is_first = ind == 0
157
+
158
+ if not is_first:
159
+ x = F.gelu(x)
160
+
161
+ x = x @ weight1 @ weight2
162
+
163
+ return x
164
+
165
+ # an MLP modelled after the popular swiglu ff in modern transformers
166
+
167
+ class MemorySwiGluMLP(Module):
168
+ def __init__(
169
+ self,
170
+ dim,
171
+ depth = 1, # default to 2 layer MLP from TTT, depth of 2 would be 4 layer MLP, but done as 2 feedforwards with residual
172
+ expansion_factor = 4.
173
+ ):
174
+ super().__init__()
175
+
176
+ dim_inner = int(dim * expansion_factor * 2 / 3)
177
+
178
+ weights = []
179
+
180
+ for _ in range(depth):
181
+ weights.append(ParameterList([
182
+ Parameter(torch.randn(dim, dim_inner * 2)),
183
+ Parameter(torch.randn(dim_inner, dim)),
184
+ ]))
185
+
186
+ self.weights = ParameterList(weights)
187
+ self.norm = LayerNorm(dim)
188
+
189
+ def forward(self, x):
190
+
191
+ for w1, w2 in self.weights:
192
+ residual = x
193
+
194
+ x, gates = (x @ w1).chunk(2, dim = -1)
195
+
196
+ x = x * F.gelu(gates)
197
+
198
+ x = x @ w2
199
+
200
+ x = x + residual
201
+
202
+ return self.norm(x)
203
+
204
+ # improvised attention as memory module
205
+
206
+ class MemoryAttention(Module):
207
+ def __init__(
208
+ self,
209
+ dim,
210
+ scale = 8.,
211
+ expansion_factor = 2.
212
+ ):
213
+ super().__init__()
214
+ self.scale = scale
215
+ dim_ff_hidden = int(dim * expansion_factor)
216
+
217
+ self.weights = ParameterList([
218
+ Parameter(torch.randn(dim, dim)), # queries
219
+ Parameter(torch.randn(dim, dim)), # keys
220
+ Parameter(torch.randn(dim, dim)), # values
221
+ Parameter(torch.randn(dim, dim_ff_hidden)), # ff w1
222
+ Parameter(torch.randn(dim_ff_hidden, dim)), # ff w2
223
+ ])
224
+
225
+ for weight in self.weights:
226
+ nn.init.xavier_uniform_(weight)
227
+
228
+ def forward(self, x):
229
+
230
+ wq, wk, wv, ffw1, ffw2 = self.weights
231
+
232
+ q = l2norm(x @ wq)
233
+ k = l2norm(x @ wk)
234
+ v = x @ wv
235
+
236
+ attn_out = F.scaled_dot_product_attention(
237
+ q, k, v,
238
+ scale = self.scale,
239
+ is_causal = True
240
+ )
241
+
242
+ # parallel attention + feedforward block
243
+ # as in PaLM + GPT-J
244
+
245
+ h = F.gelu(x @ ffw1)
246
+ ff_out = h @ ffw2
247
+
248
+ return attn_out + ff_out
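As a quick sanity check of the memory models above, here is an inlined, self-contained copy of the `MemoryMLP` layout (dim to hidden for depth-1 layers, back to dim, with GELU between matmuls), runnable without installing the package:

```python
import torch
import torch.nn.functional as F
from torch import nn

class TinyMemoryMLP(nn.Module):
    """Standalone sketch matching MemoryMLP in this file."""

    def __init__(self, dim, depth, expansion_factor=2.0):
        super().__init__()
        hidden = int(dim * expansion_factor)
        dims = (dim, *((hidden,) * (depth - 1)), dim)
        self.weights = nn.ParameterList(
            [nn.Parameter(torch.randn(i, o)) for i, o in zip(dims[:-1], dims[1:])]
        )
        for w in self.weights:
            nn.init.xavier_uniform_(w)

    def forward(self, x):
        for ind, w in enumerate(self.weights):
            if ind != 0:
                x = F.gelu(x)  # nonlinearity between layers, not before the first
            x = x @ w
        return x

mlp = TinyMemoryMLP(dim=32, depth=2)
out = mlp(torch.randn(4, 10, 32))
print(out.shape)  # feature dimension is preserved: (4, 10, 32)
```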
titans_pytorch/neural_memory.py ADDED
@@ -0,0 +1,1087 @@
1
+ from __future__ import annotations
2
+ from typing import Callable
3
+
4
+ import math
5
+ from functools import partial
6
+ from itertools import zip_longest
7
+ from collections import namedtuple
8
+
9
+ import torch
10
+ from torch import nn, stack, cat, is_tensor, tensor, Tensor
11
+ import torch.nn.functional as F
12
+ from torch.nn import Linear, Module, Parameter, ParameterList, ParameterDict
13
+ from torch.func import functional_call, vmap, grad
14
+ from torch.utils._pytree import tree_map, tree_flatten, tree_unflatten
15
+
16
+ from tensordict import TensorDict
17
+
18
+ from assoc_scan import AssocScan
19
+
20
+ from titans_pytorch.memory_models import (
21
+ MemoryMLP,
22
+ ResidualNorm
23
+ )
24
+
25
+ import einx
26
+ from einops import einsum, rearrange, repeat, reduce, pack, unpack
27
+ from einops.layers.torch import Rearrange, Reduce
28
+
29
+ """
30
+ ein notation:
31
+ b - batch
32
+ h - heads
33
+ bh - batch and heads
34
+ n - sequence
35
+ d - feature dimension
36
+ c - intra-chunk
37
+ w - num memory network weight parameters
38
+ o - momentum orders
39
+ u - key / value updates - allowing a token to emit multiple key / values
40
+ """
41
+
42
+ LinearNoBias = partial(Linear, bias = False)
43
+
44
+ # neural mem state related
45
+
46
+ NeuralMemState = namedtuple('NeuralMemState', [
47
+ 'seq_index',
48
+ 'weights',
49
+ 'cache_store_segment',
50
+ 'states',
51
+ 'updates',
52
+ ])
53
+
54
+ def mem_state_detach(
55
+ state: NeuralMemState
56
+ ):
57
+ assert isinstance(state, NeuralMemState)
58
+ state = tree_map(lambda t: t.detach() if is_tensor(t) else t, tuple(state))
59
+ return NeuralMemState(*state)
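`mem_state_detach` is what the v4 trainer relies on when `detach_mem_state` is set: it cuts cross-chunk gradient flow by detaching every tensor inside the state namedtuple. A self-contained sketch of the same `tree_map`-over-namedtuple pattern:

```python
from collections import namedtuple

import torch
from torch.utils._pytree import tree_map

NeuralMemState = namedtuple(
    "NeuralMemState",
    ["seq_index", "weights", "cache_store_segment", "states", "updates"],
)

def mem_state_detach(state):
    # detach tensors, pass non-tensors (ints, None) through unchanged
    fields = tree_map(
        lambda t: t.detach() if torch.is_tensor(t) else t, tuple(state)
    )
    return NeuralMemState(*fields)

w = torch.randn(3, 3, requires_grad=True)
state = NeuralMemState(
    seq_index=128,
    weights=w * 2,  # non-leaf tensor, carries grad history
    cache_store_segment=None,
    states=None,
    updates=None,
)
detached = mem_state_detach(state)
print(detached.weights.requires_grad)  # False: gradient history is cut
```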
60
+
61
+ # functions
62
+
63
+ def exists(v):
64
+ return v is not None
65
+
66
+ def default(*args):
67
+ for arg in args:
68
+ if exists(arg):
69
+ return arg
70
+ return None
71
+
72
+ def identity(t):
73
+ return t
74
+
75
+ def xnor(x, y):
76
+ return not (x ^ y)
77
+
78
+ def divisible_by(num, den):
79
+ return (num % den) == 0
80
+
81
+ def safe_cat(inputs, dim = -2):
82
+ inputs = tuple(filter(exists, inputs))
83
+
84
+ if len(inputs) == 0:
85
+ return None
86
+ elif len(inputs) == 1:
87
+ return inputs[0]
88
+
89
+ return cat(inputs, dim = dim)
90
+
91
+ def is_empty_tensor(t):
92
+ return t.numel() == 0
93
+
94
+ def dict_get_value_shapes(td):
95
+ return [v.shape for k, v in td.items()]
96
+
97
+ def rearrange_dict_values(td, pattern, **kwargs):
98
+ return td.apply(lambda t: rearrange(t, pattern, **kwargs))
99
+
100
+ def repeat_dict_values(td, pattern, **kwargs):
101
+ return td.apply(lambda t: repeat(t, pattern, **kwargs))
102
+
103
+ def pair(v):
104
+ return (v, v) if not isinstance(v, tuple) else v
105
+
106
+ def round_down_multiple(seq, mult):
107
+ return seq // mult * mult
108
+
109
+ def round_up_multiple(seq, mult):
110
+ return math.ceil(seq / mult) * mult
111
+
112
+ def pad_at_dim(t, pad, dim = -1, value = 0.):
113
+ dims_from_right = (- dim - 1) if dim < 0 else (t.ndim - dim - 1)
114
+ zeros = ((0, 0) * dims_from_right)
115
+ return F.pad(t, (*zeros, *pad), value = value)
116
+
117
+ def pack_one_with_inverse(t, pattern):
118
+ packed, packed_shape = pack([t], pattern)
119
+
120
+ def inverse(out, inv_pattern = None):
121
+ inv_pattern = default(inv_pattern, pattern)
122
+ return unpack(out, packed_shape, inv_pattern)[0]
123
+
124
+ return packed, inverse
125
+
126
+ def Sequential(*modules):
127
+ modules = [*filter(exists, modules)]
128
+
129
+ if len(modules) == 0:
130
+ return nn.Identity()
131
+
132
+ if len(modules) == 1:
133
+ return modules[0]
134
+
135
+ return nn.Sequential(*modules)
136
+
137
+ # softclamping gradients
138
+
139
+ def softclamp_max(t, max_value):
140
+ half_max_value = max_value / 2
141
+ return ((t / half_max_value).tanh() * half_max_value) + half_max_value
142
+
143
+ def softclamp_grad_norm(t, max_value, eps: float = 1e-6):
144
+ if is_empty_tensor(t):
145
+ return t
146
+
147
+ t, inverse = pack_one_with_inverse(t, 'bn *')
148
+
149
+ norm = t.norm(dim = -1, keepdim = True).clamp(min = eps)
150
+ clamped_norm = softclamp_max(norm, max_value)
151
+
152
+ t = t * (clamped_norm / norm)
153
+ return inverse(t)
154
+
155
+ # spectral norming the surprise update w/ newton schulz matrix iter
156
+ # Keller Jordan et al. from OSS w/ nanogpt, now being used for two works, Atlas and 'TTT done right'
157
+
158
+ def newtonschulz5(
159
+ t,
160
+ steps = 5,
161
+ eps = 1e-7,
162
+ coefs = (3.4445, -4.7750, 2.0315)
163
+ ):
164
+ if t.ndim <= 3:
165
+ return t
166
+
167
+ shape = t.shape
168
+ should_transpose = shape[-2] > shape[-1]
169
+
170
+ if should_transpose:
171
+ t = t.transpose(-1, -2)
172
+
173
+ t, inv_pack = pack_one_with_inverse(t, '* i j')
174
+ t = t / t.norm(dim = (-1, -2), keepdim = True).clamp(min = eps)
175
+
176
+ a, b, c = coefs
177
+
178
+ for _ in range(steps):
179
+ A = t @ t.transpose(-1, -2)
180
+ B = b * A + c * A @ A
181
+ t = a * t + B @ t
182
+
183
+ if should_transpose:
184
+ t = t.transpose(-1, -2)
185
+
186
+ return inv_pack(t)
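A simplified, single-matrix sketch of the Newton-Schulz iteration above (the version in this file additionally handles batched tensors of 4 or more dims and transposes tall matrices first). After a few quintic steps the singular values are pushed toward 1, approximately orthogonalizing the update:

```python
import torch

torch.manual_seed(0)

def newtonschulz5_2d(t, steps=5, eps=1e-7, coefs=(3.4445, -4.7750, 2.0315)):
    # normalize by Frobenius norm, then iterate the quintic polynomial
    t = t / t.norm().clamp(min=eps)
    a, b, c = coefs
    for _ in range(steps):
        A = t @ t.T
        B = b * A + c * A @ A
        t = a * t + B @ t
    return t

x = torch.randn(8, 16)  # wide matrix, so no transpose needed
y = newtonschulz5_2d(x)
s = torch.linalg.svdvals(y)  # singular values cluster near 1
```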
187
+
+ # multi head rmsnorm
+
+ class MultiheadRMSNorm(Module):
+     def __init__(self, dim, heads):
+         super().__init__()
+         self.rmsnorm = nn.RMSNorm(dim, elementwise_affine = False)
+         self.gamma = Parameter(torch.zeros(heads, 1, dim))
+
+     def forward(self, x):
+         return self.rmsnorm(x) * (self.gamma + 1.)
+
+ # chunk pooling
+
+ class AveragePool(Module):
+     def __init__(
+         self,
+         chunk_size
+     ):
+         super().__init__()
+         self.chunk_size = chunk_size
+
+     def forward(
+         self,
+         x,
+         chunk_size = None
+     ):
+         chunk_size = default(chunk_size, self.chunk_size)
+         return reduce(x, 'b (n c) d -> b n d', 'mean', c = chunk_size)
+
+ class AttentionPool(Module):
+     def __init__(
+         self,
+         dim,
+         chunk_size
+     ):
+         """
+         taken from Enformer https://www.nature.com/articles/s41592-021-01252-x , in turn taken from somewhere else
+         """
+         super().__init__()
+         self.chunk_size = chunk_size
+         self.to_attn_logits = nn.Linear(dim, dim)
+
+         # default to average pool
+
+         nn.init.zeros_(self.to_attn_logits.weight)
+         nn.init.zeros_(self.to_attn_logits.bias)
+
+     def forward(
+         self,
+         x,
+         chunk_size = None
+     ):
+         chunk_size = default(chunk_size, self.chunk_size)
+
+         x = rearrange(x, 'b (n c) d -> b n c d', c = chunk_size)
+
+         attn_logits = self.to_attn_logits(x)
+
+         attn = attn_logits.softmax(dim = -2)
+
+         return reduce(x * attn, 'b n c d -> b n d', 'sum')
+
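The zero init on `to_attn_logits` means `AttentionPool` starts out as an exact average pool: equal logits make the softmax over each chunk uniform, so the weighted sum is a plain mean. A small pure-Python sketch of that degenerate case (helper names are illustrative, no torch):

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of floats
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attn_pool_chunk(values, logits):
    # weighted sum of one chunk's values under softmax(logits)
    weights = softmax(logits)
    return sum(w * v for w, v in zip(weights, values))

chunk = [1.0, 2.0, 3.0, 4.0]
zero_logits = [0.0] * len(chunk)  # what zero-initialized weights produce

pooled = attn_pool_chunk(chunk, zero_logits)  # uniform attention == the mean, 2.5
```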
+ # main neural memory
+
+ def default_adaptive_step_transform(adaptive_step, max_lr = 1e-2):
+     return adaptive_step.sigmoid() * max_lr
+
+ def default_loss_fn(pred, target):
+     return (pred - target).pow(2).mean(dim = -1)
+
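The default transform maps the unconstrained per-token projection into a bounded learning rate, `sigmoid(x) * max_lr`: very negative logits effectively freeze the memory, large positive ones approach `max_lr`. A scalar sketch with the stdlib only:

```python
import math

def default_adaptive_step_scalar(adaptive_step: float, max_lr: float = 1e-2) -> float:
    # sigmoid bounds the learned step strictly inside (0, max_lr)
    return (1.0 / (1.0 + math.exp(-adaptive_step))) * max_lr

# monotone in the logit: more "surprising" tokens can earn a larger write rate
lrs = [default_adaptive_step_scalar(x) for x in (-10.0, 0.0, 10.0)]
```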
+ class NeuralMemory(Module):
+     def __init__(
+         self,
+         dim,
+         chunk_size: int | tuple[int, int] = 1,
+         batch_size = None,
+         dim_head = None,
+         heads = 1,
+         model: Module | None = None,
+         store_memory_loss_fn: Callable = default_loss_fn,
+         adaptive_step_transform: Callable | None = None,
+         default_step_transform_max_lr = 1.,
+         per_parameter_lr_modulation = False, # allow outer network to control learning rate per weight matrix of memory network
+         max_mem_layer_modulation = 1., # max of 10.
+         per_head_learned_parameters = True,
+         attn_pool_chunks = False,
+         momentum = True,
+         momentum_order = 1,
+         learned_momentum_combine = False,
+         learned_combine_include_zeroth = False,
+         num_kv_per_token = 1, # whether a single token can do multiple updates to the memory model
+         qkv_receives_diff_views = False, # to address an issue raised by a phd student (who will be credited if experiments are green). basically the issue raised is that the memory MLP is only learning Wk @ Wv linear mapping and that may not be expressive enough. we will use hyper connections to allow the network to choose different previous layer inputs as keys / values and see if that does anything
+         pre_rmsnorm = True,
+         post_rmsnorm = False,
+         qk_rmsnorm = False,
+         max_grad_norm: float | None = None,
+         use_accelerated_scan = False,
+         activation: Module | None = None,
+         init_adaptive_step_bias = None,
+         init_momentum_bias = None,
+         init_decay_bias = None,
+         accept_weight_residual = False,
+         spectral_norm_surprises = False,
+         gated_transition = False,
+         mem_model_norm_add_residual = True, # by default, layernorm output and add residual as proposed in TTT paper, but could be removed
+         default_model_kwargs: dict = dict(
+             depth = 2,
+             expansion_factor = 4.
+         )
+     ):
+         super().__init__()
+         dim_head = default(dim_head, dim)
+         assert not (heads == 1 and dim_head != dim)
+
+         self.retrieve_chunk_size, self.store_chunk_size = pair(chunk_size)
+
+         # batch size
+
+         if exists(batch_size):
+             assert divisible_by(batch_size, self.store_chunk_size)
+
+         self.batch_size = batch_size
+
+         # associative scan
+
+         self.assoc_scan = AssocScan(use_accelerated = use_accelerated_scan)
+
+         # key values receiving different views
+
+         self.qkv_receives_diff_views = qkv_receives_diff_views
+
+         # norms
+
+         self.retrieve_norm = nn.RMSNorm(dim) if pre_rmsnorm else nn.Identity()
+         self.store_norm = nn.RMSNorm(dim) if pre_rmsnorm else nn.Identity()
+
+         self.multihead_rmsnorm = MultiheadRMSNorm(dim_head, heads) if post_rmsnorm else nn.Identity()
+
+         self.q_norm = MultiheadRMSNorm(dim_head, heads) if qk_rmsnorm else nn.Identity()
+         self.k_norm = MultiheadRMSNorm(dim_head, heads) if qk_rmsnorm else nn.Identity()
+
+         # maybe multi-headed
+
+         dim_inner = dim_head * heads
+
+         self.heads = heads
+
+         self.split_heads = Rearrange('b n (h d) -> b h n d', h = heads)
+         self.split_kv_heads = Rearrange('b n (h u d) -> b h (n u) d', h = heads, u = num_kv_per_token)
+
+         self.merge_heads = Rearrange('b h n d -> b n (h d)')
+         self.combine_heads = LinearNoBias(dim_inner, dim) if heads > 1 else nn.Identity()
+
+         self.retrieve_gate = Sequential(
+             LinearNoBias(dim, heads),
+             Rearrange('b n h -> b h n 1'),
+             nn.Sigmoid()
+         ) if heads > 1 else None
+
+         # memory model
+
+         if not exists(model):
+             model = MemoryMLP(dim_head, **default_model_kwargs)
+
+         # validate memory model
+
+         assert not exists(next(model.buffers(), None)), 'model cannot have buffers for now'
+
+         test_shape = (3, 2, dim_head)
+
+         with torch.no_grad():
+             try:
+                 test_input = torch.randn(test_shape)
+                 mem_model_output = model(test_input)
+             except:
+                 raise RuntimeError(f'memory model unable to accept a tensor of shape {test_shape}')
+
+             assert mem_model_output.shape == test_shape, 'output of memory model needs to be same shape as input'
+
+         # the memory is the weights of the model
+
+         if mem_model_norm_add_residual:
+             model = ResidualNorm(dim = dim_head, model = model)
+
+         self.memory_model = model
+
+         mem_model_params = dict(model.named_parameters())
+
+         self.num_memory_parameter_tensors = len(mem_model_params)
+
+         self.memory_model_parameter_names = [*mem_model_params.keys()]
+
+         memory_model_parameters = [*mem_model_params.values()]
+
+         if per_head_learned_parameters:
+             memory_model_parameters = [repeat(p, '... -> h ...', h = heads) for p in memory_model_parameters]
+
+         self.init_weight_shape = [p.shape for p in memory_model_parameters]
+
+         self.memory_model_parameters = ParameterList(memory_model_parameters)
+         self.per_head_learned_parameters = per_head_learned_parameters
+
+         # the chunk size within the paper where adaptive step, momentum, weight decay are shared
+
+         self.chunk_size = chunk_size
+
+         # prepare function for per sample gradients from model above, using torch.func
+
+         def forward_and_loss(params, inputs, loss_weights, target):
+             pred = functional_call(self.memory_model, params, inputs)
+             loss = self.store_memory_loss_fn(pred, target) # simple mse loss in paper - eq (12) - |M(k) - v|²
+             weighted_loss = loss * loss_weights
+             return weighted_loss.sum(), loss
+
+         # two functions
+
+         grad_fn = grad(forward_and_loss, has_aux = True)
+
+         self.per_sample_grad_fn = vmap(grad_fn, in_dims = (0, 0, 0, 0))
+
+         # queries for retrieving from the model
+
+         self.to_queries = Sequential(LinearNoBias(dim, dim_inner), activation)
+
+         # keys and values for storing to the model
+
+         assert num_kv_per_token > 0
+
+         self.to_keys = Sequential(
+             LinearNoBias(dim, dim_inner * num_kv_per_token),
+             activation,
+         )
+
+         self.to_values = Sequential(
+             LinearNoBias(dim, dim_inner * num_kv_per_token),
+             activation,
+         )
+
+         self.store_memory_loss_fn = store_memory_loss_fn
+
+         self.num_kv_per_token = num_kv_per_token
+
+         # `chunk_size` refers to chunk size used for storing to memory model weights
+
+         chunk_size = self.store_chunk_size
+
+         # whether to use averaging of chunks, or attention pooling
+
+         assert not (attn_pool_chunks and chunk_size == 1), '`attn_pool_chunks` cannot be set to True if `chunk_size` is set to 1'
+
+         if not attn_pool_chunks:
+             self.reduce_to_chunk_rep = AveragePool(chunk_size = chunk_size)
+         else:
+             self.reduce_to_chunk_rep = AttentionPool(dim, chunk_size = chunk_size)
+
+         # learned adaptive learning rate
+
+         self.to_adaptive_step = Sequential(
+             nn.Linear(dim, heads * num_kv_per_token),
+             Rearrange('b n (h u) -> (b h) (n u)', u = num_kv_per_token)
+         )
+
+         if not exists(adaptive_step_transform):
+             adaptive_step_transform = partial(default_adaptive_step_transform, max_lr = default_step_transform_max_lr)
+
+         self.adaptive_step_transform = adaptive_step_transform
+
+         # momentum related
+
+         self.to_momentum = Sequential(
+             nn.Linear(dim, heads * momentum_order),
+             Rearrange('b n (h o) -> o (b h) n 1', o = momentum_order)
+         ) if momentum else None
+
+         self.momentum_order = momentum_order
+         self.to_learned_momentum_combine = None
+
+         if learned_momentum_combine:
+             assert momentum
+             assert momentum_order > 1, 'only second order momentum allowed for now, but may allow learned combination of zeroth'
+
+             if learned_combine_include_zeroth:
+                 momentum_order += 1
+
+             self.to_learned_momentum_combine = Sequential(
+                 nn.Linear(dim, heads * momentum_order),
+                 Rearrange('b n (h o) -> o (b h) n', h = heads),
+                 nn.Softmax(dim = 0),
+             )
+
+         self.learned_combine_include_zeroth = learned_combine_include_zeroth
+
+         # per layer learning rate modulation
+
+         self.to_layer_modulation = Sequential(
+             nn.Linear(dim, heads * self.num_memory_parameter_tensors),
+             Rearrange('b n (h w) -> w (b h) n', h = heads),
+             nn.Sigmoid()
+         ) if per_parameter_lr_modulation else None
+
+         self.max_mem_layer_modulation = max_mem_layer_modulation
+
+         # learned weight residual
+
+         self.to_learned_weight_residual_mix = Sequential(
+             nn.Linear(dim, heads),
+             Rearrange('b n h -> b h n'),
+             nn.Sigmoid()
+         ) if accept_weight_residual else None
+
+         # allow for softclamp the gradient norms for storing memories
+
+         self.max_grad_norm = max_grad_norm
+
+         # spectral norming the surprises before update, a la Muon from Jordan et al.
+
+         self.spectral_norm_surprises = spectral_norm_surprises
+
+         # weight decay factor
+
+         self.to_decay_factor = Sequential(
+             nn.Linear(dim, heads),
+             Rearrange('b n h -> (b h) n 1')
+         )
+
+         # learned transition, as seeing instability when decreasing neural mem batch size
+         # perhaps it can slowly learn to adjust from early residual to fully transitioning to new weights every batch size
+
+         self.transition_gate = nn.Parameter(tensor(-5.)) if gated_transition else None
+
+         # inits
+
+         if exists(init_adaptive_step_bias):
+             linear = self.to_adaptive_step[0]
+             nn.init.zeros_(linear.weight)
+             nn.init.constant_(linear.bias, init_adaptive_step_bias)
+
+         if exists(init_momentum_bias):
+             linear = self.to_momentum[0]
+             nn.init.zeros_(linear.weight)
+             nn.init.constant_(linear.bias, init_momentum_bias)
+
+         if exists(init_decay_bias):
+             linear = self.to_decay_factor[0]
+             nn.init.zeros_(linear.weight)
+             nn.init.constant_(linear.bias, init_decay_bias)
+
+         # maybe use accelerated scan
+
+         self.use_accelerated_scan = use_accelerated_scan
+
+         self.register_buffer('zero', torch.tensor(0.), persistent = False)
+
+     @property
+     def memory_model_parameter_dict(self):
+         return TensorDict(dict(zip(self.memory_model_parameter_names, self.memory_model_parameters)))
+
+     def init_weights(
+         self,
+         batch,
+     ):
+         if self.per_head_learned_parameters:
+             weights = repeat_dict_values(self.memory_model_parameter_dict, 'h ... -> (b h) ...', b = batch)
+         else:
+             weights = repeat_dict_values(self.memory_model_parameter_dict, '... -> bh ...', bh = batch * self.heads)
+
+         return weights
+
+     def init_momentum(
+         self,
+         batch,
+     ):
+         zeros = self.memory_model_parameter_dict.clone().zero_()
+
+         if self.per_head_learned_parameters:
+             zeros = repeat_dict_values(zeros, 'h ... -> o (b h) ...', b = batch, o = self.momentum_order)
+         else:
+             zeros = repeat_dict_values(zeros, '... -> o bh ...', bh = batch * self.heads, o = self.momentum_order)
+
+         return zeros
+
+     def store_memories(
+         self,
+         seq,
+         weights: dict[str, Tensor] | None = None,
+         past_state: tuple[dict[str, Tensor], dict[str, Tensor]] | None = None,
+         seq_index = 0,
+         prev_weights = None,
+         mask: Tensor | None = None,
+         return_surprises = True
+     ):
+         if self.qkv_receives_diff_views:
+             _, batch, seq_len = seq.shape[:3]
+         else:
+             batch, seq_len = seq.shape[:2]
+
+         # shapes and variables
+
+         heads, chunk_size, num_updates = self.heads, self.store_chunk_size, self.num_kv_per_token
+
+         # curtail sequence by multiple of the chunk size
+         # only a complete chunk of the sequence provides the memory for the next chunk
+
+         round_down_seq_len = round_down_multiple(seq_len, chunk_size)
+         num_chunks = round_down_seq_len // chunk_size
+
+         seq, remainder = seq[..., :round_down_seq_len, :], seq[..., round_down_seq_len:, :]
+
+         next_seq_len_index = seq_index + round_down_seq_len
+
+         # init weights if needed
+         # weights of the memory network
+
+         if not exists(weights):
+             weights = self.init_weights(batch)
+
+         weights = TensorDict(weights)
+
+         # allow for neural memory of a previous layer to influence surprise of current layer
+
+         weights_for_surprise = repeat_dict_values(weights, 'b ... -> b n ...', n = num_chunks)
+
+         # initial norm
+
+         seq = self.store_norm(seq)
+
+         # handle keys and values coming from different sequences from hyper connection
+
+         values_seq = seq
+
+         if self.qkv_receives_diff_views:
+             seq, values_seq = seq
+
+         # derive learned hparams for optimization of memory network
+
+         adaptive_lr = self.to_adaptive_step(seq)
+         adaptive_lr = self.adaptive_step_transform(adaptive_lr)
+
+         chunked_seq = self.reduce_to_chunk_rep(seq, chunk_size = chunk_size)
+
+         decay_factor = self.to_decay_factor(chunked_seq).sigmoid()
+
+         need_layer_lr_mod = exists(self.to_layer_modulation) and num_chunks > 0
+         has_momentum = exists(self.to_momentum)
+
+         if has_momentum:
+             adaptive_momentum = self.to_momentum(chunked_seq).sigmoid()
+
+             learned_combine = exists(self.to_learned_momentum_combine)
+
+             if learned_combine:
+                 combine_momentums = self.to_learned_momentum_combine(chunked_seq)
+
+         if need_layer_lr_mod:
+             layer_lr_mod = self.to_layer_modulation(chunked_seq) * self.max_mem_layer_modulation
+
+         # keys and values
+
+         keys = self.to_keys(seq)
+         values = self.to_values(values_seq)
+
+         # maybe multi head
+
+         keys, values = map(self.split_kv_heads, (keys, values))
+
+         # maybe keys rmsnorm
+
+         keys = self.k_norm(keys)
+
+         # take care of chunking
+
+         keys, values = tuple(rearrange(t, 'b h (n c u) d -> (b h n) (c u) d', c = chunk_size, u = num_updates) for t in (keys, values))
+
+         # adaptive lr
+
+         adaptive_lr = rearrange(adaptive_lr, 'b (n c u) -> (b n) (c u)', c = chunk_size, u = num_updates)
+
+         # optionally a storing memories mask can be passed in. if False, will set the learning rate to 0. for those positions
+
+         if exists(mask):
+             mask = mask[..., :round_down_seq_len]
+             mask = repeat(mask, 'b (n c) -> (b h n) (c u)', h = heads, u = num_updates, c = chunk_size)
+
+             adaptive_lr = torch.where(mask, adaptive_lr, 0.)
+
+         # maybe add previous layer weight
+
+         assert xnor(exists(self.to_learned_weight_residual_mix), exists(prev_weights))
+
+         if exists(prev_weights):
+
+             start_index = math.ceil(seq_index / chunk_size)
+             end_index = start_index + num_chunks
+
+             prev_weights = prev_weights.apply(lambda t: t[:, start_index:end_index])
+
+             if exists(self.to_learned_weight_residual_mix) and num_chunks > 0:
+                 mix = self.to_learned_weight_residual_mix(chunked_seq)
+                 mix = rearrange(mix, 'b h n -> (b h) n')
+                 prev_weights = prev_weights.apply(lambda t: einx.multiply('bh n, bh n ... -> bh n ...', mix, t))
+
+             weights_for_surprise = weights_for_surprise + prev_weights
+
+         # flatten batch and time if surprise depends on previous layer memory model
+
+         weights_for_surprise = rearrange_dict_values(weights_for_surprise, 'b n ... -> (b n) ...')
+
+         # get grads and extra auxiliary loss (for backwarding through qkv projection in base neural memory module)
+
+         grads, unweighted_mem_model_loss = self.per_sample_grad_fn(dict(weights_for_surprise), keys, adaptive_lr, values)
+
+         grads = TensorDict(grads)
+
+         # surprises
+
+         adaptive_lr = rearrange(adaptive_lr, '(b h n) c -> b h (n c)', b = batch, h = heads)
+         unweighted_mem_model_loss = rearrange(unweighted_mem_model_loss, '(b h n) c -> b h (n c)', b = batch, h = heads)
+
+         # maybe softclamp grad norm
+
+         if exists(self.max_grad_norm):
+             grads = grads.apply(lambda t: softclamp_grad_norm(t, self.max_grad_norm))
+
+         # restore batch and sequence dimension
+
+         grads = rearrange_dict_values(grads, '(b n) ... -> b n ...', b = batch * heads)
+
+         # maybe per layer modulation
+
+         if need_layer_lr_mod:
+             grads = TensorDict({name: einx.multiply('b h, b h ... -> b h ...', layer_lr_mod, t) for layer_lr_mod, (name, t) in zip(layer_lr_mod, grads.items())})
+
+         # negative gradients, adaptive lr already applied as loss weight
+
+         surprises = grads.mul(-1)
+
+         # past states
+
+         if not exists(past_state):
+             # minibatch_init_weight corresponds to W0 in figure 7 of TTT paper
+
+             minibatch_init_weight = weights
+             init_momentum = self.init_momentum(batch)
+
+             past_state = (minibatch_init_weight, init_momentum)
+
+         past_last_update, past_last_momentum = past_state
+
+         # early return if sequence length less than chunk size
+
+         if num_chunks == 0:
+             updates = rearrange_dict_values(weights, 'bh ... -> bh 1 ...')
+             next_store_state = NeuralMemState(next_seq_len_index, weights, remainder, past_state, updates)
+
+             output = (updates, next_store_state)
+
+             if not return_surprises:
+                 return output
+
+             return (*output, (unweighted_mem_model_loss, adaptive_lr))
+
+         # momentum + weight decay - momentum is the new contribution, as most linear RNNs have learned forgetting gates
+
+         updates = TensorDict()
+
+         next_last_update = TensorDict()
+         next_last_momentum = TensorDict()
+
+         for (param_name, surprise), (_, last_update) in zip(surprises.items(), past_last_update.items()):
+
+             update = surprise
+
+             # derive momentum with associative scan - eq (10)
+
+             if has_momentum:
+                 momentum = surprise
+
+                 momentums = [] # stores all momentum orders starting with first, to generalize to Nth order momentum
+
+                 last_momentum = past_last_momentum[param_name]
+
+                 # go from first order momentum all the way to the Nth
+
+                 for one_adaptive_momentum, one_last_momentum in zip_longest(adaptive_momentum, last_momentum):
+                     momentum = self.assoc_scan(one_adaptive_momentum, momentum, prev = one_last_momentum) # momentum is S / surprise in the paper
+
+                     momentums.append(momentum)
+
+                 momentums = stack(momentums)
+
+                 next_last_momentum[param_name] = momentums[:, :, -1] # momentums shape is Float['o bh n 1']
+
+                 if learned_combine and self.learned_combine_include_zeroth:
+                     # add the original surprise if learned combination of momentums
+                     momentums = cat((rearrange(surprise, '... -> 1 ...'), momentums), dim = 0)
+
+                 if not learned_combine:
+                     update = momentums[-1]
+                 else:
+                     update = einsum(combine_momentums, momentums, 'o b n, o b n ... -> b n ...')
+
+             # maybe spectral norm surprises
+
+             if self.spectral_norm_surprises:
+                 update = newtonschulz5(update)
+
+             # use associative scan again for learned forgetting (weight decay) - eq (13)
+
+             update = self.assoc_scan(1. - decay_factor, update, prev = last_update, remove_prev = False)
+
+             updates[param_name] = update
+             next_last_update[param_name] = update[:, -1]
+
+         # determine next state for the storing of memories
+
+         next_state = (next_last_update, next_last_momentum)
+
+         next_store_state = NeuralMemState(next_seq_len_index, weights, remainder, next_state, updates)
+
+         # return updates to neural memory at all chunked timesteps + neural mem cache / state to be fed back
+
+         if not return_surprises:
+             return updates, next_store_state
+
+         return updates, next_store_state, (unweighted_mem_model_loss, adaptive_lr)
+
+     def retrieve_memories(
+         self,
+         seq,
+         weights: dict[str, Tensor],
+     ):
+         chunk_size = self.retrieve_chunk_size
+
+         weights_have_expanded_shape = dict_get_value_shapes(weights) != self.init_weight_shape
+
+         batch, seq_len = seq.shape[:2]
+
+         # auto infer single token decoding, if there are only 1 set of weights and 1 token
+
+         is_one_token = seq_len == 1
+         is_one_weight = (not weights_have_expanded_shape) or next(iter(weights.values())).shape[1] == 1
+
+         is_single_token_decode = is_one_token and is_one_weight
+
+         if is_single_token_decode:
+             chunk_size = 1
+
+         # padding related, for chunked processing
+
+         need_pad = chunk_size > 1 or not is_one_weight
+
+         if need_pad:
+             seq = pad_at_dim(seq, (1, 0), dim = 1)
+
+         seq_len_plus_one = seq.shape[-2]
+
+         next_seq_len = round_up_multiple(seq_len_plus_one, chunk_size)
+
+         padding = next_seq_len - seq_len_plus_one
+         seq = pad_at_dim(seq, (0, padding), dim = 1)
+
+         # the parameters of the memory model stores the memories of the key / values
+         # when the MLP has only 1 weight matrix, it is equivalent to `kv` fast weight memories from linear attention literature (recall fetching of memories is q @ (kv)) / schmidhuber's paper
+
+         weights = TensorDict(weights)
+
+         # pre norm
+
+         seq = self.retrieve_norm(seq)
+
+         # sequence Float['b n d'] to queries
+
+         queries = self.to_queries(seq)
+
+         # maybe multihead
+
+         queries = self.split_heads(queries)
+
+         # maybe qk rmsnorm
+
+         queries = self.q_norm(queries)
+
+         # fetch values from memory model
+
+         if weights_have_expanded_shape:
+             weights = rearrange_dict_values(weights, 'b n ... -> (b n) ...')
+
+         queries = rearrange(queries, 'b h (n c) d -> (b h n) c d', c = chunk_size)
+
+         # forward functional call
+
+         values = functional_call(self.memory_model, dict(weights), queries)
+
+         # reconstitute batch dimension
+
+         values = rearrange(values, '(b h n) c d -> b h (n c) d', b = batch, h = self.heads)
+
+         values = self.multihead_rmsnorm(values)
+
+         # maybe gate
+
+         if exists(self.retrieve_gate):
+             values = values * self.retrieve_gate(seq)
+
+         # maybe merge heads and combine
+
+         values = self.merge_heads(values)
+
+         values = self.combine_heads(values)
+
+         # restore, pad with empty memory embed
+
+         if need_pad:
+             values = values[:, 1:]
+
+         return values[:, :seq_len]
+
+     def forward(
+         self,
+         seq,
+         store_seq = None,
+         state: NeuralMemState | None = None,
+         detach_mem_state = False,
+         prev_weights = None,
+         store_mask: Tensor | None = None,
+         return_surprises = False,
+         ttt_batch_size: int | None = None
+     ):
+         is_multi_input = self.qkv_receives_diff_views
+
+         # handle single token
+
+         if seq.ndim == 2 or (is_multi_input and seq.ndim == 3):
+             seq = rearrange(seq, '... b d -> ... b 1 d')
+
+         is_single_token = seq.shape[-2] == 1
+
+         # if different views for qkv, then
+
+         if is_multi_input:
+             retrieve_seq, seq = seq[0], seq[1:]
+         else:
+             retrieve_seq = seq
+
+         # handle previous state init
+
+         if not exists(state):
+             state = (0, None, None, None, None)
+
+         seq_index, weights, cache_store_seq, past_state, updates = state
+
+         # store
+
+         store_seq = default(store_seq, seq)
+
+         # take care of cache
+
+         if exists(cache_store_seq):
+             store_seq = safe_cat((cache_store_seq, store_seq))
+             if exists(store_mask):
+                 cache_len = cache_store_seq.shape[-2]
+                 if cache_len > 0:
+                     cache_mask = torch.ones(
+                         (*store_mask.shape[:-1], cache_len),
+                         device=store_mask.device,
+                         dtype=store_mask.dtype,
+                     )
+                     store_mask = torch.cat((cache_mask, store_mask), dim=-1)
+
+         if exists(store_mask) and store_mask.shape[-1] != store_seq.shape[-2]:
+             diff = store_seq.shape[-2] - store_mask.shape[-1]
+             if diff > 0:
+                 store_seq = store_seq[..., diff:, :]
+             elif diff < 0:
+                 store_mask = store_mask[..., (-diff):]
+
+         # compute split sizes of sequence
+         # for now manually update weights to last update at the correct boundaries
+
+         store_seq_len, chunk_size, batch_size = store_seq.shape[-2], self.chunk_size, default(ttt_batch_size, self.batch_size)
+
+         need_update_weights = exists(batch_size)
+
+         # determine split sizes and when to update
+
+         if need_update_weights:
+             update_after_final_store = divisible_by(seq_index + store_seq_len, batch_size)
+
+             seq_range = torch.arange(store_seq_len) + seq_index + 1
+             batch_boundary = divisible_by(seq_range, batch_size)
+
+             indices = seq_range[batch_boundary] - seq_index
+
+             indices = F.pad(indices, (1, 0), value = 0)
+
+             if indices[-1] != store_seq_len:
+                 indices = F.pad(indices, (0, 1), value = store_seq_len)
+
+             split_sizes = (indices[1:] - indices[:-1]).tolist()
+
+             assert sum(split_sizes) == store_seq_len
+         else:
+             split_sizes = (store_seq_len,)
+             update_after_final_store = False
+
+         # accumulate updates
+
+         updates = None
+
+         def accum_updates(past_updates, future_updates):
+             if not exists(past_updates):
+                 return future_updates
+
+             return TensorDict({param_name: cat((past_update[:, :-1], future_update), dim = 1) for (param_name, past_update), (_, future_update) in zip(past_updates.items(), future_updates.items())})
+
+         # loop through chunks of store sequences
+
+         store_seqs = store_seq.split(split_sizes, dim = -2)
+
+         if exists(store_mask):
+             store_masks = store_mask.split(split_sizes, dim = -1)
+         else:
+             store_masks = (None,) * len(split_sizes)
+
+         # whether to allow network to slowly adjust from initial weight throughout (residual path) to fully updating weights every batch
+
+         surprises = (None, None)
+         gate = None
+
+         if exists(self.transition_gate):
+             gate = self.transition_gate.sigmoid()
+
+         for ind, (store_seq_chunk, maybe_store_mask) in enumerate(zip(store_seqs, store_masks)):
+             is_last = ind == (len(store_seqs) - 1)
+
+             # store
+
+             next_updates, next_neural_mem_state, chunk_surprises = self.store_memories(
+                 store_seq_chunk,
+                 weights,
+                 seq_index = seq_index,
+                 past_state = past_state,
+                 prev_weights = prev_weights,
+                 mask = maybe_store_mask,
+                 return_surprises = True
+             )
+
+             weights = next_neural_mem_state.weights
+             seq_index = next_neural_mem_state.seq_index
+             past_state = next_neural_mem_state.states
+
+             updates = accum_updates(updates, next_updates)
+
+             surprises = tuple(safe_cat(args, dim = -1) for args in zip(surprises, chunk_surprises))
+
+             if is_last and not update_after_final_store:
+                 continue
+
+             # update weights once batch size is fulfilled
+
+             last_update, last_momentum = past_state
+
+             if exists(gate):
+                 last_update = TensorDict({param_name: one_weight.lerp(one_last_update, gate) for (param_name, one_weight), (_, one_last_update) in zip(weights.items(), last_update.items())})
+
+             past_state = (last_update, last_momentum)
+
+             # set weights to the last updated weights for the last minibatch
+
+             weights = last_update
+
+             next_neural_mem_state = next_neural_mem_state._replace(
+                 weights = weights,
+                 states = past_state,
+             )
+
+         next_neural_mem_state = next_neural_mem_state._replace(updates = updates)
+
+         # retrieve
+
+         if is_single_token:
+             last_update, _ = next_neural_mem_state.states
+             updates = rearrange_dict_values(last_update, 'b ... -> b 1 ...')
+
+         retrieved = self.retrieve_memories(
+             retrieve_seq,
+             updates
+         )
+
+         # maybe detach
+
+         if detach_mem_state:
+             next_neural_mem_state = mem_state_detach(next_neural_mem_state)
+
+         # returning
+
+         if not return_surprises:
+             return retrieved, next_neural_mem_state
+
+         return retrieved, next_neural_mem_state, surprises
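For reference, the per-parameter recurrence that `store_memories` evaluates with `assoc_scan` — first-order momentum over the surprise (eq (10)) followed by learned forgetting via weight decay (eq (13)) — can be written as a plain sequential loop. A dependency-free toy sketch over scalar "weights"; variable names here are illustrative, not from the snapshot:

```python
def store_rule(surprises, etas, alphas, w0 = 0.0, m0 = 0.0):
    # surprises: per-chunk surprise u_t (already -lr * grad in the module above)
    # etas: adaptive momentum gates in (0, 1), alphas: learned decay factors in (0, 1)
    weights = []
    m, w = m0, w0
    for u, eta, alpha in zip(surprises, etas, alphas):
        m = eta * m + u               # momentum scan: S_t = eta_t * S_{t-1} + u_t
        w = (1.0 - alpha) * w + m     # decay scan:    W_t = (1 - alpha_t) * W_{t-1} + S_t
        weights.append(w)
    return weights

# constant unit surprise, half momentum, no forgetting:
# the memory accumulates an ever-growing (momentum-boosted) update
trace = store_rule([1.0, 1.0, 1.0], etas=[0.5, 0.5, 0.5], alphas=[0.0, 0.0, 0.0])
```

With `alpha` near 1 the previous weights are forgotten almost entirely each chunk, which is the failure mode the `transition_gate` residual path above is meant to soften.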