Yujivus committed
Commit 647c7d5 · verified · 1 Parent(s): 5085c23

Upload model

Files changed (5):
  1. README.md +199 -0
  2. config.json +19 -0
  3. configuration_pillars.py +24 -0
  4. modeling_pillars.py +260 -0
  5. pytorch_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for use of the model without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for use of the model when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
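The "How to Get Started" entry above is still a placeholder. Based on the `auto_map` entries in `config.json` (later in this commit), loading should only require `trust_remote_code=True` plus the `x-transformers` dependency that `modeling_pillars.py` imports. A minimal sketch, assuming the files live in a Hub repo called `Yujivus/pillars-compact` (hypothetical id) and that `pip install x-transformers` has been run:

```python
# Minimal loading sketch. The repo id below is hypothetical; substitute the real one.
import torch
from transformers import AutoConfig, AutoModel

repo_id = "Yujivus/pillars-compact"  # hypothetical

# trust_remote_code=True lets transformers fetch configuration_pillars.py and
# modeling_pillars.py from the repo, as declared in config.json's auto_map.
config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
model.eval()

# No tokenizer ships with this commit, so use raw token ids for a smoke test.
input_ids = torch.randint(0, config.vocab_size, (1, 128))
with torch.no_grad():
    logits = model(input_ids)  # raw logits when no labels are passed
print(logits.shape)            # (1, 128, 32768)
```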
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,19 @@
+ {
+   "architectures": [
+     "PillarsModel"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_pillars.PillarsConfig",
+     "AutoModel": "modeling_pillars.PillarsModel"
+   },
+   "d_branch": 256,
+   "d_model": 512,
+   "depth": 9,
+   "dropout": 0.0,
+   "dtype": "float32",
+   "model_type": "pillars-compact",
+   "refine_depth": 1,
+   "seq_len": 4096,
+   "transformers_version": "4.57.3",
+   "vocab_size": 32768
+ }
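The architecture fields above map one-to-one onto the constructor arguments of `PillarsConfig` in `configuration_pillars.py` (next file); `architectures`, `auto_map`, `dtype`, and `transformers_version` are standard Hub metadata. A small sketch of parsing this file locally, assuming `configuration_pillars.py` is importable from the working directory:

```python
# Sketch: load config.json into the custom config class defined in the next file.
from configuration_pillars import PillarsConfig

config = PillarsConfig.from_json_file("config.json")
print(config.model_type)                  # pillars-compact
print(config.d_model, config.d_branch)    # 512 256
print(config.depth, config.refine_depth)  # 9 1
print(config.seq_len, config.vocab_size)  # 4096 32768
```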
configuration_pillars.py ADDED
@@ -0,0 +1,24 @@
+
+ from transformers import PretrainedConfig
+
+ class PillarsConfig(PretrainedConfig):
+     model_type = "pillars-compact"
+     def __init__(
+         self,
+         vocab_size=32768,
+         d_model=512,
+         d_branch=256,
+         seq_len=4096,
+         depth=9,
+         refine_depth=1,
+         dropout=0.0,
+         **kwargs
+     ):
+         super().__init__(**kwargs)
+         self.vocab_size = vocab_size
+         self.d_model = d_model
+         self.d_branch = d_branch
+         self.seq_len = seq_len
+         self.depth = depth
+         self.refine_depth = refine_depth
+         self.dropout = dropout
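When loading from the Hub with `trust_remote_code=True`, the `auto_map` in `config.json` already wires this class up. For purely local use, the classes can instead be registered against the Auto factories; a sketch under the assumption that both `.py` files are importable locally (`PillarsModel` is defined in `modeling_pillars.py` below), with deliberately small override dimensions for a quick test:

```python
# Sketch: register the custom classes locally instead of using trust_remote_code.
from transformers import AutoConfig, AutoModel

from configuration_pillars import PillarsConfig
from modeling_pillars import PillarsModel  # defined in the next file

AutoConfig.register("pillars-compact", PillarsConfig)
AutoModel.register(PillarsConfig, PillarsModel)

# Defaults mirror config.json; override them here for a lighter test instance.
config = PillarsConfig(d_model=128, d_branch=64, depth=2, seq_len=256, vocab_size=1000)
model = AutoModel.from_config(config)
print(sum(p.numel() for p in model.parameters()), "parameters")
```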
modeling_pillars.py ADDED
@@ -0,0 +1,260 @@
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+ import torch.utils.checkpoint
+ import math
+ from transformers import PreTrainedModel
+
+ try:
+     from .configuration_pillars import PillarsConfig
+ except ImportError:
+     from configuration_pillars import PillarsConfig
+
+ try:
+     from x_transformers import Encoder
+ except ImportError:
+     raise ImportError("To use PILLARS, you must run: pip install x-transformers")
+
+ # --- UTILS ---
+ class ComplexDropout(nn.Module):
+     """Dropout for complex tensors: one real-valued mask scales real and imaginary parts together."""
+     def __init__(self, p=0.5):
+         super().__init__()
+         self.p = p
+     def forward(self, z):
+         if not self.training or self.p == 0.0: return z
+         mask = torch.ones_like(z.real)
+         mask = F.dropout(mask, self.p, self.training, inplace=False)
+         return z * mask
+
+ class RobustPhaseNorm(nn.Module):
+     """RMS normalization over the magnitudes of a complex tensor; phases are preserved."""
+     def __init__(self, d_model, eps=1e-5):
+         super().__init__()
+         self.scale = nn.Parameter(torch.ones(d_model))
+         self.eps = eps
+     def forward(self, x):
+         mag = torch.abs(x)
+         rms = torch.sqrt(torch.mean(mag**2, dim=-1, keepdim=True) + self.eps)
+         return (x / rms) * self.scale
+
+ class ModReLU(nn.Module):
+     """modReLU: thresholds the magnitude with a learned bias while keeping the phase."""
+     def __init__(self, features):
+         super().__init__()
+         self.b = nn.Parameter(torch.zeros(features))
+     def forward(self, z):
+         mag = torch.abs(z)
+         new_mag = F.relu(mag + self.b)
+         phase = z / (mag + 1e-6)
+         return new_mag * phase
+
+ class ComplexToRealBridge(nn.Module):
+     """Maps a complex representation back to a real one by projecting [real; imag]."""
+     def __init__(self, d_model):
+         super().__init__()
+         self.proj = nn.Linear(d_model * 2, d_model)
+         self.norm = nn.LayerNorm(d_model)
+     def forward(self, x_complex):
+         cat = torch.cat([x_complex.real, x_complex.imag], dim=-1)
+         return self.norm(self.proj(cat))
+
+ # --- COMPONENTS ---
+ class DynamicRoSE(nn.Module):
+     """Complex token embedding combining a static RoPE-style rotation with a token-dependent rotation."""
+     def __init__(self, num_embeddings, embedding_dim, max_period=10000.0):
+         super().__init__()
+         self.embedding_dim = embedding_dim
+         self.raw_embedding = nn.Embedding(num_embeddings, embedding_dim)
+         self.adapter = nn.Linear(embedding_dim, embedding_dim * 2)
+         freqs = torch.exp(torch.arange(0, embedding_dim, dtype=torch.float32) * -(math.log(max_period) / embedding_dim))
+         self.register_buffer('freqs', freqs)
+         self.rotation_predictor = nn.Linear(embedding_dim, embedding_dim * 2)
+
+     def forward(self, input_ids):
+         real_base = self.raw_embedding(input_ids)
+         B, L, D = real_base.shape
+         complex_params = self.adapter(real_base)
+         z_t = torch.complex(complex_params[..., :D], complex_params[..., D:])
+         rot_raw = self.rotation_predictor(real_base)
+         rot_x, rot_y = rot_raw.chunk(2, dim=-1)
+         rot_mag = torch.sqrt(rot_x**2 + rot_y**2 + 1e-6)
+         dynamic_rot = torch.complex(rot_x / rot_mag, rot_y / rot_mag)
+         pos = torch.arange(L, device=input_ids.device).float()
+         static_angles = torch.outer(pos, self.freqs)
+         static_rot = torch.polar(torch.ones_like(static_angles), static_angles)
+         z_final = z_t * static_rot.unsqueeze(0) * dynamic_rot
+         return z_final, real_base
+
+ class HyenaNeuralFilter(nn.Module):
+     """Implicit frequency-domain filter: an MLP over positional features emits a complex filter of length L."""
+     def __init__(self, d_model, max_len=1024, hidden_dim=64):
+         super().__init__()
+         self.d_model = d_model
+         freqs = torch.exp(torch.arange(0, hidden_dim, 2, dtype=torch.float32) * -(math.log(10000.0) / hidden_dim))
+         self.register_buffer("freqs", freqs)
+         self.mlp = nn.Sequential(
+             nn.Linear(hidden_dim, hidden_dim), nn.SiLU(),
+             nn.Linear(hidden_dim, hidden_dim), nn.SiLU(),
+             nn.Linear(hidden_dim, d_model * 2)
+         )
+     def forward(self, L, device):
+         t = torch.linspace(0, 1, steps=L, device=device).unsqueeze(-1)
+         emb = torch.cat([torch.sin(t * self.freqs), torch.cos(t * self.freqs)], dim=-1)
+         out = self.mlp(emb).view(L, self.d_model, 2)
+         return torch.complex(out[..., 0], out[..., 1])
+
+ class GatedHarmonicConvolution(nn.Module):
+     """FFT-based long convolution with a learned filter, complex gating, and a residual connection."""
+     def __init__(self, d_model, max_len=1024, dropout=0.1):
+         super().__init__()
+         self.d_model = d_model
+         self.filter_len = max_len
+         self.neural_filter = HyenaNeuralFilter(d_model, max_len=max_len)
+         self.gate_proj = nn.Linear(d_model * 2, d_model * 2)
+         self.mix_real = nn.Linear(d_model, d_model)
+         self.mix_imag = nn.Linear(d_model, d_model)
+         self.out_real = nn.Linear(d_model, d_model)
+         self.out_imag = nn.Linear(d_model, d_model)
+         self.activation = ModReLU(d_model)
+         self.norm = RobustPhaseNorm(d_model)
+         self.dropout = ComplexDropout(dropout)
+
+     def forward(self, x, src_mask=None):
+         residual = x
+         x_norm = self.norm(x)
+         if src_mask is not None:
+             x_norm = x_norm.masked_fill(src_mask.unsqueeze(-1), 0.0)
+         B, L, D = x_norm.shape
+         eff_L = min(L, self.filter_len)
+         # Convolve in the frequency domain with the implicitly parameterized filter.
+         x_freq = torch.fft.fft(x_norm, n=eff_L, dim=1, norm='ortho')
+         h = self.neural_filter(eff_L, x.device).unsqueeze(0)
+         x_filtered = x_freq * h
+         x_time = torch.fft.ifft(x_filtered, n=eff_L, dim=1, norm='ortho')
+         if L > eff_L: x_time = F.pad(x_time, (0, 0, 0, L - eff_L))
+         else: x_time = x_time[:, :L, :]
+         # Sigmoid gates for the real and imaginary channels, derived from the normalized input.
+         gates = torch.sigmoid(self.gate_proj(torch.cat([x_norm.real, x_norm.imag], dim=-1)))
+         g_r, g_i = gates.chunk(2, dim=-1)
+         x_gated = torch.complex(x_time.real * g_r, x_time.imag * g_i)
+         # Complex linear mixing: (mr + i*mi)(a + i*b) = (mr·a - mi·b) + i(mr·b + mi·a).
+         mr, mi = self.mix_real, self.mix_imag
+         x_mixed = torch.complex(mr(x_gated.real) - mi(x_gated.imag), mr(x_gated.imag) + mi(x_gated.real))
+         x_act = self.activation(x_mixed)
+         or_, oi = self.out_real, self.out_imag
+         out = torch.complex(or_(x_act.real) - oi(x_act.imag), or_(x_act.imag) + oi(x_act.real))
+         return self.dropout(out) + residual
+
+ class PRISMEncoder(nn.Module):
+     """Stack of GatedHarmonicConvolution layers; uses gradient checkpointing while training."""
+     def __init__(self, num_layers, d_model, max_len, dropout=0.1):
+         super().__init__()
+         self.layers = nn.ModuleList([
+             GatedHarmonicConvolution(d_model, max_len, dropout)
+             for _ in range(num_layers)
+         ])
+         self.final_norm = RobustPhaseNorm(d_model)
+     def forward(self, x, src_mask=None):
+         for layer in self.layers:
+             if self.training: x = torch.utils.checkpoint.checkpoint(layer, x, src_mask, use_reentrant=False)
+             else: x = layer(x, src_mask)
+         return self.final_norm(x)
+
+ class FNetBlock(nn.Module):
+     """FNet-style block: Fourier token/feature mixing followed by a feed-forward sublayer."""
+     def __init__(self, d_model, d_ff, dropout):
+         super().__init__()
+         self.norm_mix = nn.LayerNorm(d_model)
+         self.norm_ff = nn.LayerNorm(d_model)
+         self.mix_dropout = nn.Dropout(dropout)
+         self.ff = nn.Sequential(
+             nn.Linear(d_model, d_ff), nn.GELU(), nn.Dropout(dropout),
+             nn.Linear(d_ff, d_model), nn.Dropout(dropout)
+         )
+     def forward(self, x):
+         residual = x
+         x = self.norm_mix(x)
+         # The FFT mixing runs in fp32 for numerical stability under mixed precision.
+         with torch.cuda.amp.autocast(enabled=False):
+             x = x.float()
+             x = torch.fft.fftn(x, dim=(-2, -1), norm='ortho').real
+         x = x.to(dtype=residual.dtype)
+         x = self.mix_dropout(x)
+         x = x + residual
+         residual = x
+         x = self.norm_ff(x)
+         x = self.ff(x)
+         return x + residual
+
+ class FNetEncoder(nn.Module):
+     def __init__(self, depth, d_model, d_ff, dropout):
+         super().__init__()
+         self.layers = nn.ModuleList([
+             FNetBlock(d_model, d_ff, dropout) for _ in range(depth)
+         ])
+         self.norm_out = nn.LayerNorm(d_model)
+     def forward(self, x):
+         for layer in self.layers:
+             x = layer(x)
+         return self.norm_out(x)
+
+ # --- MAIN MODEL ---
+ class PillarsModel(PreTrainedModel):
+     config_class = PillarsConfig
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.config = config
+
+         # 1. SHARED ROOT
+         self.rose = DynamicRoSE(config.vocab_size, config.d_model)
+
+         # 2. DOWNSAMPLE
+         self.particle_down = nn.Linear(config.d_model, config.d_branch)
+         self.wave_down = nn.Linear(config.d_model * 2, config.d_branch * 2)
+
+         # 3. RATE STREAM (FNet)
+         self.fnet_pos = nn.Embedding(config.seq_len, config.d_branch)
+         self.stream_rate = FNetEncoder(depth=config.depth, d_model=config.d_branch, d_ff=config.d_branch * 4, dropout=config.dropout)
+
+         # 4. PHASE STREAM (PRISM)
+         self.stream_phase = PRISMEncoder(num_layers=config.depth, d_model=config.d_branch, max_len=config.seq_len, dropout=config.dropout)
+         self.phase_bridge = ComplexToRealBridge(config.d_branch)
+
+         # 5. FUSION
+         self.fusion_proj = nn.Linear(config.d_branch * 2, config.d_model)
+         self.fusion_norm = nn.LayerNorm(config.d_model)
+
+         # 6. REFINER
+         self.refiner = Encoder(
+             dim=config.d_model,
+             depth=config.refine_depth,
+             heads=8,
+             attn_flash=True,
+             rotary_pos_emb=True,
+             attn_dropout=config.dropout,
+             ff_dropout=config.dropout
+         )
+
+         # 7. HEAD
+         self.head_bias = nn.Parameter(torch.zeros(config.vocab_size))
+
+     def forward(self, input_ids, labels=None):
+         # A. Shared Root
+         wave_src, particle_src = self.rose(input_ids)
+
+         # B. Downsample
+         p_small = self.particle_down(particle_src)
+         w_flat = torch.cat([wave_src.real, wave_src.imag], dim=-1)
+         w_small_flat = self.wave_down(w_flat)
+         w_small = torch.complex(w_small_flat[..., :self.config.d_branch], w_small_flat[..., self.config.d_branch:])
+
+         # C. Branches
+         pos_emb = self.fnet_pos(torch.arange(input_ids.shape[1], device=input_ids.device))
+         rate_out = self.stream_rate(p_small + pos_emb)
+         phase_out = self.phase_bridge(self.stream_phase(w_small))
+
+         # D. Fusion
+         stacked = torch.cat([rate_out, phase_out], dim=-1)
+         context = self.fusion_norm(self.fusion_proj(stacked))
+
+         # E. Refiner & Output
+         refined = self.refiner(context)
+         # Weight tying: use the rose embedding matrix as the output projection.
+         logits = F.linear(refined, self.rose.raw_embedding.weight, self.head_bias)
+
+         loss = None
+         if labels is not None:
+             loss_fct = nn.CrossEntropyLoss()
+             loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+             return {"loss": loss, "logits": logits}
+
+         return logits
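To sanity-check the full forward path (the shared DynamicRoSE root, the FNet and PRISM branches, fusion, the x-transformers refiner, and the tied output head), a smoke test along these lines should suffice; the dimensions are deliberately tiny and hypothetical, and `x-transformers` must be installed:

```python
# Smoke test for PillarsModel with small, hypothetical dimensions.
import torch
from configuration_pillars import PillarsConfig
from modeling_pillars import PillarsModel

config = PillarsConfig(vocab_size=1000, d_model=64, d_branch=32,
                       seq_len=128, depth=2, refine_depth=1)
model = PillarsModel(config)

input_ids = torch.randint(0, config.vocab_size, (2, 64))

# With labels, forward() returns a dict containing the cross-entropy loss.
out = model(input_ids, labels=input_ids)
print(out["loss"].item(), out["logits"].shape)  # scalar loss, torch.Size([2, 64, 1000])

# Without labels it returns the raw logits tensor.
model.eval()
with torch.no_grad():
    logits = model(input_ids)
print(logits.shape)  # torch.Size([2, 64, 1000])
```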
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d83897582997657930e6badd0bcb5b9b14afe0be1cf0a84f4a96b7fe0aa180d1
+ size 131940795