prism-lab committed
Commit cf5dff5 · verified · 1 Parent(s): 240bf52

Upload model

Files changed (5)
  1. README.md +199 -0
  2. config.json +18 -0
  3. configuration_pillars.py +22 -0
  4. modeling_pillars.py +382 -0
  5. pytorch_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for use of the model without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for use of the model when fine-tuned for a task, or when plugged into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
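+ Until project-specific code is added here, the sketch below follows from the `auto_map` entries in `config.json`; the repo id is a hypothetical placeholder, and `x-transformers` must be installed (`pip install x-transformers`):
+
+ ```python
+ import torch
+ from transformers import AutoConfig, AutoModel
+
+ # Hypothetical repo id; replace with the actual Hub path of this model.
+ model_id = "prism-lab/pillars-dat"
+
+ # trust_remote_code is required because config.json maps AutoConfig/AutoModel
+ # to the custom PillarsConfig / Pillars_DAT_Model classes shipped in the repo.
+ config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
+ model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval()
+
+ input_ids = torch.randint(0, config.vocab_size, (1, 128))
+ with torch.no_grad():
+     logits = model(input_ids)  # without labels, forward returns raw logits
+ ```
+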
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "architectures": [
+     "Pillars_DAT_Model"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_pillars.PillarsConfig",
+     "AutoModel": "modeling_pillars.Pillars_DAT_Model"
+   },
+   "d_branch": 256,
+   "d_model": 512,
+   "depth": 6,
+   "dropout": 0.1,
+   "dtype": "float32",
+   "model_type": "pillars-dat",
+   "seq_len": 4096,
+   "transformers_version": "4.57.6",
+   "vocab_size": 32768
+ }
configuration_pillars.py ADDED
@@ -0,0 +1,22 @@
+ from transformers import PretrainedConfig
+
+
+ class PillarsConfig(PretrainedConfig):
+     """Configuration for the PILLARS-DAT model defined in modeling_pillars.py."""
+
+     model_type = "pillars-dat"
+
+     def __init__(
+         self,
+         vocab_size=32768,
+         d_model=512,
+         d_branch=256,
+         seq_len=4096,
+         depth=6,
+         dropout=0.1,
+         **kwargs,
+     ):
+         super().__init__(**kwargs)
+         self.vocab_size = vocab_size
+         self.d_model = d_model
+         self.d_branch = d_branch
+         self.seq_len = seq_len
+         self.depth = depth
+         self.dropout = dropout
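As a sanity check, the config.json above round-trips through this class. A minimal sketch (assuming `transformers` is installed and `configuration_pillars.py` is on the import path):

```python
from configuration_pillars import PillarsConfig

cfg = PillarsConfig()                 # defaults mirror config.json above
assert cfg.model_type == "pillars-dat"
cfg.save_pretrained("./pillars-dat")  # writes a config.json with these fields
cfg2 = PillarsConfig.from_pretrained("./pillars-dat")
assert cfg2.d_model == 512 and cfg2.seq_len == 4096
```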
modeling_pillars.py ADDED
@@ -0,0 +1,382 @@
+ import math
+
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+ import torch.utils.checkpoint  # for gradient checkpointing in PRISMEncoder
+
+ from transformers import PreTrainedModel
+
+ try:
+     from .configuration_pillars import PillarsConfig
+ except ImportError:
+     from configuration_pillars import PillarsConfig
+
+ try:
+     from x_transformers import Encoder
+ except ImportError:
+     raise ImportError("To use PILLARS-DAT, you must run: pip install x-transformers")
+
+ # --- UTILS ---
+
+ class ComplexDropout(nn.Module):
+     """Dropout for complex tensors: one real mask scales real and imaginary parts together."""
+     def __init__(self, p=0.5):
+         super().__init__()
+         self.p = p
+
+     def forward(self, z):
+         if not self.training or self.p == 0.0:
+             return z
+         mask = torch.ones_like(z.real)
+         mask = F.dropout(mask, self.p, self.training, inplace=False)
+         return z * mask
+
+ class RobustPhaseNorm(nn.Module):
+     """RMS normalization over complex magnitudes; phases are preserved."""
+     def __init__(self, d_model, eps=1e-5):
+         super().__init__()
+         self.scale = nn.Parameter(torch.ones(d_model))
+         self.eps = eps
+
+     def forward(self, x):
+         mag = torch.abs(x)
+         rms = torch.sqrt(torch.mean(mag**2, dim=-1, keepdim=True) + self.eps)
+         return (x / rms) * self.scale
+
+ class ModReLU(nn.Module):
+     """ReLU on the complex magnitude with a learned bias; the phase is kept."""
+     def __init__(self, features):
+         super().__init__()
+         self.b = nn.Parameter(torch.zeros(features))
+
+     def forward(self, z):
+         # Compute the magnitude in float32: the square law (Re^2 + Im^2) can
+         # overflow in half precision and kill the gradients.
+         z_32 = z.to(torch.complex64)
+         mag = torch.abs(z_32)
+
+         # Activation on the magnitude, still in float32.
+         new_mag = F.relu(mag + self.b.float())
+
+         # Reconstruct the unit phase vector with a safe division, then rescale.
+         phase = z_32 / (mag + 1e-6)
+         out = new_mag * phase
+
+         # Cast back to the network dtype (e.g. bf16/fp16 under autocast).
+         return out.to(z.dtype)
+
+ class ComplexToRealBridge(nn.Module):
+     """Maps a complex tensor to a real one by projecting the concatenated [real; imag] parts."""
+     def __init__(self, d_model):
+         super().__init__()
+         self.proj = nn.Linear(d_model * 2, d_model)
+         self.norm = nn.LayerNorm(d_model)
+
+     def forward(self, x_complex):
+         cat = torch.cat([x_complex.real, x_complex.imag], dim=-1)
+         return self.norm(self.proj(cat))
+
+ # ==========================================
+ # DYNAMIC RoSE (Mamba-3 Engine)
+ # ==========================================
+ class DynamicRoSE(nn.Module):
+     def __init__(self, num_embeddings, embedding_dim, max_period=10000.0):
+         super().__init__()
+         self.embedding_dim = embedding_dim
+
+         # 1. Master real embedding (the "particle").
+         self.raw_embedding = nn.Embedding(num_embeddings, embedding_dim)
+
+         # 2. Complex adapter (the "wave" magnitude / initial phase).
+         self.adapter = nn.Linear(embedding_dim, embedding_dim * 2)
+
+         # 3. Static positional frequencies.
+         freqs = torch.exp(torch.arange(0, embedding_dim, dtype=torch.float32) * -(math.log(max_period) / embedding_dim))
+         self.register_buffer('freqs', freqs)
+
+         # 4. Content-dependent rotation.
+         self.rotation_predictor = nn.Linear(embedding_dim, embedding_dim * 2)
+
+     def forward(self, input_ids):
+         # A. Raw particle embedding.
+         real_base = self.raw_embedding(input_ids)
+         B, L, D = real_base.shape
+
+         # B. Complex wave content.
+         complex_params = self.adapter(real_base)
+         z_t = torch.complex(complex_params[..., :D], complex_params[..., D:])
+
+         # C. Content-dependent rotation, normalized onto the unit circle.
+         rot_raw = self.rotation_predictor(real_base)
+         rot_x, rot_y = rot_raw.chunk(2, dim=-1)
+         rot_mag = torch.sqrt(rot_x**2 + rot_y**2 + 1e-6)
+         dynamic_rot = torch.complex(rot_x / rot_mag, rot_y / rot_mag)
+
+         # D. Static positional rotation.
+         pos = torch.arange(L, device=input_ids.device).float()
+         static_angles = torch.outer(pos, self.freqs)  # [L, D]
+         static_rot = torch.polar(torch.ones_like(static_angles), static_angles)  # [L, D]
+
+         z_final = z_t * static_rot.unsqueeze(0) * dynamic_rot
+         return z_final, real_base
+
+ # ==========================================
+ # HYENA FILTER
+ # ==========================================
+ class HyenaNeuralFilter(nn.Module):
+     """Implicit long-convolution filter: an MLP over positional features emits a complex filter."""
+     def __init__(self, d_model, max_len=1024, hidden_dim=64):
+         super().__init__()
+         self.d_model = d_model
+         freqs = torch.exp(torch.arange(0, hidden_dim, 2, dtype=torch.float32) * -(math.log(10000.0) / hidden_dim))
+         self.register_buffer("freqs", freqs)
+         self.mlp = nn.Sequential(
+             nn.Linear(hidden_dim, hidden_dim), nn.SiLU(),
+             nn.Linear(hidden_dim, hidden_dim), nn.SiLU(),
+             nn.Linear(hidden_dim, d_model * 2),
+         )
+
+     def forward(self, L, device):
+         t = torch.linspace(0, 1, steps=L, device=device).unsqueeze(-1)
+         emb = torch.cat([torch.sin(t * self.freqs), torch.cos(t * self.freqs)], dim=-1)
+         out = self.mlp(emb).view(L, self.d_model, 2)
+         return torch.complex(out[..., 0], out[..., 1])
+
+ # ==========================================
+ # GATED HARMONIC CONVOLUTION (Lean)
+ # ==========================================
+ class GatedHarmonicConvolution(nn.Module):
+     def __init__(self, d_model, max_len=1024, dropout=0.1):
+         super().__init__()
+         self.d_model = d_model
+         self.filter_len = max_len
+         self.neural_filter = HyenaNeuralFilter(d_model, max_len=max_len)
+         self.gate_proj = nn.Linear(d_model * 2, d_model * 2)
+
+         self.mix_real = nn.Linear(d_model, d_model)
+         self.mix_imag = nn.Linear(d_model, d_model)
+         self.out_real = nn.Linear(d_model, d_model)
+         self.out_imag = nn.Linear(d_model, d_model)
+
+         self.activation = ModReLU(d_model)
+         self.norm = RobustPhaseNorm(d_model)
+         self.dropout = ComplexDropout(dropout)
+
+     def forward(self, x, src_mask=None):
+         residual = x
+         x_norm = self.norm(x)
+         if src_mask is not None:
+             x_norm = x_norm.masked_fill(src_mask.unsqueeze(-1), 0.0)
+
+         # Precision gate: run the FFT path in float32 complex so the phase
+         # information survives mixed-precision training.
+         with torch.amp.autocast('cuda', enabled=False):
+             # Cast explicitly to complex64; a plain .float() would strip the
+             # imaginary part.
+             x_32 = x_norm.to(torch.complex64)
+
+             B, L, D = x_32.shape
+             eff_L = min(L, self.filter_len)
+
+             # 1. FFT along the sequence dimension.
+             x_freq = torch.fft.fft(x_32, n=eff_L, dim=1, norm='ortho')
+
+             # 2. Apply the neural filter (also complex64).
+             h = self.neural_filter(eff_L, x.device).unsqueeze(0).to(torch.complex64)
+             x_filtered = x_freq * h
+
+             # 3. Inverse FFT back to the time domain.
+             x_time = torch.fft.ifft(x_filtered, n=eff_L, dim=1, norm='ortho')
+
+             if L > eff_L:
+                 x_time = F.pad(x_time, (0, 0, 0, L - eff_L))
+             else:
+                 x_time = x_time[:, :L, :]
+
+             # 4. Sigmoid gating over the real/imaginary parts, with the gate
+             # weights cast to float32 for the calculation.
+             x_cat = torch.cat([x_32.real, x_32.imag], dim=-1)
+             gate_w = self.gate_proj.weight.to(torch.float32)
+             gate_b = self.gate_proj.bias.to(torch.float32)
+             gate_out = F.linear(x_cat, gate_w, gate_b)
+             gates = torch.sigmoid(gate_out)
+
+             g_r, g_i = gates.chunk(2, dim=-1)
+             x_gated_32 = torch.complex(x_time.real * g_r, x_time.imag * g_i)
+
+         # Exit gate: cast back to the input dtype so the rest of the layer
+         # runs in the ambient (possibly mixed) precision.
+         x_gated = x_gated_32.to(x.dtype)
+
+         # 5. Complex-linear mixing.
+         mr, mi = self.mix_real, self.mix_imag
+         x_mixed = torch.complex(mr(x_gated.real) - mi(x_gated.imag), mr(x_gated.imag) + mi(x_gated.real))
+
+         x_act = self.activation(x_mixed)
+
+         or_, oi = self.out_real, self.out_imag
+         out = torch.complex(or_(x_act.real) - oi(x_act.imag), or_(x_act.imag) + oi(x_act.real))
+
+         return self.dropout(out) + residual
+
+ # ==========================================
+ # MODEL WRAPPERS
+ # ==========================================
+ class PRISMEncoder(nn.Module):
+     def __init__(self, num_layers, d_model, max_len, dropout=0.1):
+         super().__init__()
+         self.layers = nn.ModuleList([
+             GatedHarmonicConvolution(d_model, max_len, dropout)
+             for _ in range(num_layers)
+         ])
+         self.final_norm = RobustPhaseNorm(d_model)
+
+     def forward(self, x, src_mask=None):
+         for layer in self.layers:
+             if self.training:
+                 # Gradient checkpointing trades compute for activation memory.
+                 x = torch.utils.checkpoint.checkpoint(layer, x, src_mask, use_reentrant=False)
+             else:
+                 x = layer(x, src_mask)
+         return self.final_norm(x)
+
+ class PRISM_WikiText_Model(nn.Module):
+     def __init__(self, vocab_size, d_model, max_len, prism_depth=5, trans_depth=1, dropout=0.1):
+         super().__init__()
+         self.d_model = d_model
+
+         # 1. PRISM core (the optical/passive part).
+         self.rose = DynamicRoSE(vocab_size, d_model)
+         self.prism_encoder = PRISMEncoder(prism_depth, d_model, max_len=max_len, dropout=dropout)
+         self.bridge = ComplexToRealBridge(d_model)
+         self.periscope_proj = nn.Sequential(nn.Linear(d_model * 2, d_model), nn.LayerNorm(d_model), nn.GELU())
+
+         # 2. Refiner (the digital/active part): a RoPE-enabled Transformer encoder.
+         if trans_depth > 0:
+             self.refiner = Encoder(
+                 dim=d_model,
+                 depth=trans_depth,
+                 heads=8,
+                 rotary_pos_emb=True,
+                 attn_flash=True,
+                 attn_dropout=dropout,
+                 ff_dropout=dropout,
+             )
+         else:
+             self.refiner = None
+
+         # 3. Output head, weight-tied to the embedding.
+         self.lm_head = nn.Linear(d_model, vocab_size)
+         self.lm_head.weight = self.rose.raw_embedding.weight
+
+     def forward(self, input_ids):
+         # A. Wave physics.
+         wave_src, particle_src = self.rose(input_ids)
+         wave_out = self.prism_encoder(wave_src)
+         wave_real = self.bridge(wave_out)
+
+         # B. Interface: fuse the wave and particle representations.
+         mixed_memory = self.periscope_proj(torch.cat([wave_real, particle_src], dim=-1))
+
+         # C. Digital refinement (with RoPE).
+         if self.refiner:
+             out = self.refiner(mixed_memory)
+         else:
+             out = mixed_memory
+
+         return self.lm_head(out)
+
+ # ==========================================
+ # SENSORY STREAM (Transformer + RoPE)
+ # ==========================================
+ class SensoryStream(nn.Module):
+     def __init__(self, depth, d_model, dropout=0.1):
+         super().__init__()
+         self.encoder = Encoder(
+             dim=d_model,
+             depth=depth,
+             heads=4,               # 256 dim / 64 head_dim = 4 heads
+             attn_flash=True,       # Flash Attention
+             rotary_pos_emb=True,   # RoPE enabled
+             attn_dropout=dropout,
+             ff_dropout=dropout,
+             use_rmsnorm=True,      # RMSNorm (Llama style)
+             ff_glu=True,           # SwiGLU (Llama style)
+         )
+
+     def forward(self, x):
+         return self.encoder(x)
+
+ class Pillars_DAT_Model(PreTrainedModel):
+     config_class = PillarsConfig
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.config = config
+         self.d_model = config.d_model
+         self.d_branch = config.d_branch
+
+         # 1. Root embedding.
+         self.rose = DynamicRoSE(config.vocab_size, config.d_model)
+
+         # 2. Downsampling into the branch width.
+         self.particle_down = nn.Linear(config.d_model, config.d_branch)
+         self.wave_down = nn.Linear(config.d_model * 2, config.d_branch * 2)
+
+         # 3. Stream A: sensory (rate).
+         self.stream_sensory = SensoryStream(depth=config.depth, d_model=config.d_branch, dropout=config.dropout)
+
+         # 4. Stream B: relational (phase).
+         self.stream_relational = PRISMEncoder(num_layers=config.depth, d_model=config.d_branch, max_len=config.seq_len, dropout=config.dropout)
+         self.relational_bridge = ComplexToRealBridge(config.d_branch)
+
+         # 5. Fusion.
+         self.fusion_proj = nn.Linear(config.d_branch * 2, config.d_model)
+         self.fusion_norm = nn.LayerNorm(config.d_model)
+
+         # 6. Refiner.
+         self.refiner = Encoder(
+             dim=config.d_model, depth=1, heads=8, attn_flash=True,
+             rotary_pos_emb=True, attn_dropout=config.dropout, ff_dropout=config.dropout,
+         )
+
+         # 7. Output head (an nn.Linear for HF compatibility), explicitly
+         # weight-tied to the embedding.
+         self.lm_head = nn.Linear(config.d_model, config.vocab_size)
+         self.lm_head.weight = self.rose.raw_embedding.weight
+
+     def forward(self, input_ids, labels=None):
+         # 1. Physics: wave/particle embeddings, downsampled to the branch width.
+         wave_src, particle_src = self.rose(input_ids)
+         p_small = self.particle_down(particle_src)
+
+         w_flat = torch.cat([wave_src.real, wave_src.imag], dim=-1)
+         w_small_flat = self.wave_down(w_flat)
+         w_small = torch.complex(w_small_flat[..., :self.d_branch], w_small_flat[..., self.d_branch:])
+
+         # 2. Parallel streams.
+         sensory_out = self.stream_sensory(p_small)
+         relational_out_complex = self.stream_relational(w_small)
+         relational_out = self.relational_bridge(relational_out_complex)
+
+         # 3. Fusion.
+         stacked = torch.cat([sensory_out, relational_out], dim=-1)
+         context = self.fusion_norm(self.fusion_proj(stacked))
+
+         # 4. Refinement.
+         refined = self.refiner(context)
+
+         # 5. Output.
+         logits = self.lm_head(refined)
+
+         if labels is not None:
+             loss_fct = nn.CrossEntropyLoss()
+             loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+             return {"loss": loss, "logits": logits}
+
+         return logits
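For a quick local smoke test of this file, the sketch below uses deliberately small, hypothetical settings (it assumes `x-transformers` is installed and both .py files are on the import path); it is not official usage code:

```python
import torch
from configuration_pillars import PillarsConfig
from modeling_pillars import Pillars_DAT_Model

# Tiny, hypothetical hyperparameters for a CPU-friendly smoke test.
config = PillarsConfig(vocab_size=1000, d_model=64, d_branch=32, seq_len=256, depth=2)
model = Pillars_DAT_Model(config)

ids = torch.randint(0, config.vocab_size, (2, 32))
out = model(ids, labels=ids)   # with labels, forward returns {"loss", "logits"}
out["loss"].backward()
print(out["logits"].shape)     # torch.Size([2, 32, 1000])
```

Note that `forward` returns a dict only when `labels` is passed; without labels it returns the raw logits tensor.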
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bd00c792b7fe52b4334a4acef62606f5133cc720503a2720387262491e5fbac7
+ size 127185187
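For reference, the LFS pointer records a 127,185,187-byte (~127 MB) checkpoint. At 4 bytes per float32 parameter that is roughly 31.8M parameters, which is plausible for the configuration above: the tied 32768 × 512 embedding alone accounts for about 16.8M of them.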