tl-hyungguk committed (verified)
Commit ee3bc8d · 1 Parent(s): 1b6f946

Upload TridaForDLM

README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,69 @@
+ {
+   "architectures": [
+     "TridaForDLM"
+   ],
+   "attention_dropout": 0.0,
+   "auto_map": {
+     "AutoConfig": "configuration_trida.TridaConfig",
+     "AutoModel": "modeling_trida.TridaForDLM",
+     "AutoModelForCausalLM": "modeling_trida.TridaForDLM"
+   },
+   "bd_size": 4,
+   "bos_token_id": 0,
+   "dtype": "bfloat16",
+   "eos_token_id": 128001,
+   "head_dim": 128,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 11008,
+   "layer_types": [
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention"
+   ],
+   "max_position_embeddings": 4096,
+   "max_window_layers": 28,
+   "model_type": "Trida",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 32,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 100000.0,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "transformers_version": "4.57.3",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 128256
+ }
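
The `auto_map` entries above register the custom `TridaConfig` and `TridaForDLM` classes shipped in this commit, so the checkpoint is intended to be loaded with `trust_remote_code=True`. A minimal loading sketch under that assumption (the repo id below is a placeholder, not part of this commit):

```python
# Hedged sketch: load the uploaded checkpoint through the custom classes
# registered in auto_map. The repo id is a placeholder assumption.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

repo_id = "<namespace>/<trida-repo>"  # hypothetical repo id
config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches "dtype": "bfloat16" above
    trust_remote_code=True,      # resolves configuration_trida / modeling_trida
)
print(type(model).__name__)      # expected: TridaForDLM
```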
configuration_trida.py ADDED
@@ -0,0 +1,107 @@
+ """Trida model configuration"""
+
+ from transformers.configuration_utils import PretrainedConfig
+ try:
+     from transformers.configuration_utils import layer_type_validation
+ except Exception:
+     def layer_type_validation(layer_types):
+         # Fallback for older/newer transformers without this helper
+         return
+ from transformers.modeling_rope_utils import rope_config_validation
+ from transformers.utils import logging
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class TridaConfig(PretrainedConfig):
+
+     model_type = "Trida"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     # Default tensor parallel plan for base model `Trida`
+     base_model_tp_plan = {
+         "layers.*.self_attn.q_proj": "colwise",
+         "layers.*.self_attn.k_proj": "colwise",
+         "layers.*.self_attn.v_proj": "colwise",
+         "layers.*.self_attn.o_proj": "rowwise",
+         "layers.*.mlp.gate_proj": "colwise",
+         "layers.*.mlp.up_proj": "colwise",
+         "layers.*.mlp.down_proj": "rowwise",
+     }
+     base_model_pp_plan = {
+         "embed_tokens": (["input_ids"], ["inputs_embeds"]),
+         "layers": (["hidden_states", "attention_mask"], ["hidden_states"]),
+         "norm": (["hidden_states"], ["hidden_states"]),
+     }
+
+     def __init__(
+         self,
+         vocab_size=151936,
+         hidden_size=4096,
+         intermediate_size=22016,
+         num_hidden_layers=32,
+         num_attention_heads=32,
+         num_key_value_heads=32,
+         hidden_act="silu",
+         max_position_embeddings=32768,
+         initializer_range=0.02,
+         rms_norm_eps=1e-6,
+         use_cache=True,
+         tie_word_embeddings=False,
+         rope_theta=10000.0,
+         rope_scaling=None,
+         use_sliding_window=False,
+         sliding_window=4096,
+         max_window_layers=28,
+         layer_types=None,
+         attention_dropout=0.0,
+         bd_size=32,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.max_position_embeddings = max_position_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.use_sliding_window = use_sliding_window
+         self.sliding_window = sliding_window if self.use_sliding_window else None
+         self.max_window_layers = max_window_layers
+
+         # for backward compatibility
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self.attention_dropout = attention_dropout
+         self.bd_size = bd_size
+         # Validate the correctness of rotary position embeddings parameters
+         # BC: if there is a 'type' field, move it to 'rope_type'.
+         if self.rope_scaling is not None and "type" in self.rope_scaling:
+             self.rope_scaling["rope_type"] = self.rope_scaling["type"]
+         rope_config_validation(self)
+
+         self.layer_types = layer_types
+         if self.layer_types is None:
+             self.layer_types = [
+                 "sliding_attention"
+                 if self.sliding_window is not None and i >= self.max_window_layers
+                 else "full_attention"
+                 for i in range(self.num_hidden_layers)
+             ]
+         layer_type_validation(self.layer_types)
+
+         # head_dim can be passed via kwargs, otherwise default to 128
+         self.head_dim = kwargs.pop('head_dim', 128)
+
+         super().__init__(
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
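
For illustration, `TridaConfig` can also be instantiated directly with the values that the `config.json` above ships. This is a sketch only, assuming `configuration_trida.py` is importable from the working directory:

```python
# Sketch: mirror the shipped config.json values through the Python class.
from configuration_trida import TridaConfig

config = TridaConfig(
    vocab_size=128256,
    hidden_size=4096,
    intermediate_size=11008,
    num_hidden_layers=32,
    num_attention_heads=32,
    num_key_value_heads=32,
    max_position_embeddings=4096,
    rope_theta=100000.0,
    bd_size=4,       # presumably the diffusion block size (see bd_size in config.json)
    head_dim=128,    # consumed from **kwargs in __init__
)
# use_sliding_window defaults to False, so every layer is "full_attention"
assert config.layer_types == ["full_attention"] * 32
```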
generation_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "bos_token_id": 0,
+   "do_sample": true,
+   "eos_token_id": 128001,
+   "temperature": 0.6,
+   "top_p": 0.9,
+   "transformers_version": "4.57.3"
+ }
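
These defaults enable nucleus sampling (temperature 0.6, top-p 0.9). They can be inspected independently of the model weights; a short sketch, reusing the placeholder `repo_id` from the earlier loading example:

```python
# Sketch: read the decoding defaults shipped with the checkpoint.
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained(repo_id)           # repo_id is a placeholder
print(gen_cfg.do_sample, gen_cfg.temperature, gen_cfg.top_p)  # True 0.6 0.9
```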
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5e9a4589f938a297ead260acedab905c6dc4dc84e190628c58bdc70a560192f
+ size 4917979000
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:624ad919451158c5bc7229c6346dca79631abfedcd112f9bf128447ca92125c8
+ size 4947390880
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ae69bf99459c60fb06138805a4ed8cd8e4ad58a76e2d61220a15634947d23858
+ size 4137880232
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f0a7f453efe5cea64bc17ef717b4177030167f0d724da0d8bda1fd2f89d4a29
+ size 1050673280
model.safetensors.index.json ADDED
@@ -0,0 +1,299 @@
1
+ {
2
+ "metadata": {
3
+ "total_parameters": 7526944768,
4
+ "total_size": 15053889536
5
+ },
6
+ "weight_map": {
7
+ "lm_head.weight": "model-00004-of-00004.safetensors",
8
+ "model.embed_tokens.weight": "model-00001-of-00004.safetensors",
9
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
10
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
11
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
12
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
13
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
14
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
15
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
16
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
17
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
18
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
19
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
20
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
21
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
22
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
23
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
24
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
25
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
26
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
27
+ "model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
28
+ "model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
29
+ "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
30
+ "model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
31
+ "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
32
+ "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
33
+ "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
34
+ "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
35
+ "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
36
+ "model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
37
+ "model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
38
+ "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
39
+ "model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
40
+ "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
41
+ "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
42
+ "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
43
+ "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
44
+ "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
45
+ "model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
46
+ "model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
47
+ "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
48
+ "model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
49
+ "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
50
+ "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
51
+ "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
52
+ "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
53
+ "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
54
+ "model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
55
+ "model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
56
+ "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
57
+ "model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
58
+ "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
59
+ "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
60
+ "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
61
+ "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
62
+ "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
63
+ "model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
64
+ "model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
65
+ "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
66
+ "model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
67
+ "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
68
+ "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
69
+ "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
70
+ "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
71
+ "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
72
+ "model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
73
+ "model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
74
+ "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
75
+ "model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
76
+ "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
77
+ "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
78
+ "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
79
+ "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
80
+ "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
81
+ "model.layers.16.input_layernorm.weight": "model-00002-of-00004.safetensors",
82
+ "model.layers.16.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
83
+ "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
84
+ "model.layers.16.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
85
+ "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
86
+ "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
87
+ "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
88
+ "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
89
+ "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
90
+ "model.layers.17.input_layernorm.weight": "model-00002-of-00004.safetensors",
91
+ "model.layers.17.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
92
+ "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
93
+ "model.layers.17.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
94
+ "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
95
+ "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
96
+ "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
97
+ "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
98
+ "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
99
+ "model.layers.18.input_layernorm.weight": "model-00002-of-00004.safetensors",
100
+ "model.layers.18.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
101
+ "model.layers.18.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
102
+ "model.layers.18.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
103
+ "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
104
+ "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
105
+ "model.layers.18.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
106
+ "model.layers.18.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
107
+ "model.layers.18.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
108
+ "model.layers.19.input_layernorm.weight": "model-00002-of-00004.safetensors",
109
+ "model.layers.19.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
110
+ "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
111
+ "model.layers.19.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
112
+ "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
113
+ "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
114
+ "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
115
+ "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
116
+ "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
117
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
118
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
119
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
120
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
121
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
122
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
123
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
124
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
125
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
126
+ "model.layers.20.input_layernorm.weight": "model-00002-of-00004.safetensors",
127
+ "model.layers.20.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
128
+ "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
129
+ "model.layers.20.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
130
+ "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
131
+ "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
132
+ "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
133
+ "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
134
+ "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
135
+ "model.layers.21.input_layernorm.weight": "model-00003-of-00004.safetensors",
136
+ "model.layers.21.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
137
+ "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
138
+ "model.layers.21.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
139
+ "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
140
+ "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
141
+ "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
142
+ "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
143
+ "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
144
+ "model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
145
+ "model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
146
+ "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
147
+ "model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
148
+ "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
149
+ "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
150
+ "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
151
+ "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
152
+ "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
153
+ "model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
154
+ "model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
155
+ "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
156
+ "model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
157
+ "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
158
+ "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
159
+ "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
160
+ "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
161
+ "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
162
+ "model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
163
+ "model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
164
+ "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
165
+ "model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
166
+ "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
167
+ "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
168
+ "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
169
+ "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
170
+ "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
171
+ "model.layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
172
+ "model.layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
173
+ "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
174
+ "model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
175
+ "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
176
+ "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
177
+ "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
178
+ "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
179
+ "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
180
+ "model.layers.26.input_layernorm.weight": "model-00003-of-00004.safetensors",
181
+ "model.layers.26.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
182
+ "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
183
+ "model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
184
+ "model.layers.26.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
185
+ "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
186
+ "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
187
+ "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
188
+ "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
189
+ "model.layers.27.input_layernorm.weight": "model-00003-of-00004.safetensors",
190
+ "model.layers.27.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
191
+ "model.layers.27.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
192
+ "model.layers.27.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
193
+ "model.layers.27.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
194
+ "model.layers.27.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
195
+ "model.layers.27.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
196
+ "model.layers.27.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
197
+ "model.layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
198
+ "model.layers.28.input_layernorm.weight": "model-00003-of-00004.safetensors",
199
+ "model.layers.28.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
200
+ "model.layers.28.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
201
+ "model.layers.28.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
202
+ "model.layers.28.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
203
+ "model.layers.28.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
204
+ "model.layers.28.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
205
+ "model.layers.28.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
206
+ "model.layers.28.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
207
+ "model.layers.29.input_layernorm.weight": "model-00003-of-00004.safetensors",
208
+ "model.layers.29.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
209
+ "model.layers.29.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
210
+ "model.layers.29.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
211
+ "model.layers.29.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
212
+ "model.layers.29.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
213
+ "model.layers.29.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
214
+ "model.layers.29.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
215
+ "model.layers.29.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
216
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
217
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
218
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
219
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
220
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
221
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
222
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
223
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
224
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
225
+ "model.layers.30.input_layernorm.weight": "model-00003-of-00004.safetensors",
226
+ "model.layers.30.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
227
+ "model.layers.30.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
228
+ "model.layers.30.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
229
+ "model.layers.30.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
230
+ "model.layers.30.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
231
+ "model.layers.30.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
232
+ "model.layers.30.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
233
+ "model.layers.30.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
234
+ "model.layers.31.input_layernorm.weight": "model-00003-of-00004.safetensors",
235
+ "model.layers.31.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
236
+ "model.layers.31.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
237
+ "model.layers.31.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
238
+ "model.layers.31.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
239
+ "model.layers.31.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
240
+ "model.layers.31.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
241
+ "model.layers.31.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
242
+ "model.layers.31.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
243
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
244
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
245
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
246
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
247
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
248
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
249
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
250
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
251
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
252
+ "model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
253
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
254
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
255
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
256
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
257
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
258
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
259
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
260
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
261
+ "model.layers.6.input_layernorm.weight": "model-00001-of-00004.safetensors",
262
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
263
+ "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
264
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
265
+ "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
266
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
267
+ "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
268
+ "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
269
+ "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
270
+ "model.layers.7.input_layernorm.weight": "model-00001-of-00004.safetensors",
271
+ "model.layers.7.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
272
+ "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
273
+ "model.layers.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
274
+ "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
275
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
276
+ "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
277
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
278
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
279
+ "model.layers.8.input_layernorm.weight": "model-00001-of-00004.safetensors",
280
+ "model.layers.8.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
281
+ "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
282
+ "model.layers.8.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
283
+ "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
284
+ "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
285
+ "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
286
+ "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
287
+ "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
288
+ "model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
289
+ "model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
290
+ "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
291
+ "model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
292
+ "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
293
+ "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
294
+ "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
295
+ "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
296
+ "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
297
+ "model.norm.weight": "model-00003-of-00004.safetensors"
298
+ }
299
+ }
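
The index maps every parameter name to the shard that stores it; the reported `total_size` of 15,053,889,536 bytes is consistent with 7,526,944,768 parameters in bfloat16 (2 bytes each). A sketch of using the index to read one tensor without loading all four shards, assuming the JSON and `*.safetensors` files have been downloaded locally:

```python
# Sketch: resolve a tensor's shard via the index and read only that tensor.
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.layers.0.self_attn.q_proj.weight"
shard = index["weight_map"][name]      # -> "model-00001-of-00004.safetensors"
with safe_open(shard, framework="pt") as handle:
    tensor = handle.get_tensor(name)
print(tensor.shape)                    # expected torch.Size([4096, 4096])
```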
modeling_trida.py ADDED
@@ -0,0 +1,1008 @@
1
+ from typing import Callable, Optional, Union
2
+ from dataclasses import dataclass
3
+
4
+ import torch
5
+ from torch import nn
6
+ import torch.nn.functional as F
7
+ from functools import partial
8
+
9
+ from transformers.activations import ACT2FN
10
+ from transformers.cache_utils import Cache, DynamicCache
11
+ from transformers.generation import GenerationMixin
12
+ from transformers.integrations import use_kernel_forward_from_hub
13
+ from transformers.modeling_flash_attention_utils import FlashAttentionKwargs
14
+ from transformers.modeling_layers import GradientCheckpointingLayer
15
+ from transformers.modeling_outputs import (
16
+ BaseModelOutputWithPast,
17
+ CausalLMOutputWithPast,
18
+ )
19
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS, dynamic_rope_update
20
+ from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel
21
+ from transformers.processing_utils import Unpack
22
+ from transformers.utils import auto_docstring, can_return_tuple, logging
23
+ from .configuration_trida import TridaConfig
24
+ from torch.nn.attention.flex_attention import flex_attention, create_block_mask
25
+ from einops import rearrange, repeat
26
+
27
+ logger = logging.get_logger(__name__)
28
+
29
+
30
+ @dataclass
31
+ class CausalLMOutputWithPastAndBlockCache(CausalLMOutputWithPast):
32
+ block_past_key_values: Optional[Cache] = None
33
+ computed_attention_mask: Optional[torch.Tensor] = None
34
+
35
+ @dataclass
36
+ class BaseModelOutputWithPastAndBlockCache(BaseModelOutputWithPast):
37
+ block_past_key_values: Optional[Cache] = None
38
+ computed_attention_mask: Optional[torch.Tensor] = None
39
+
40
+
41
+ @torch.compile(fullgraph=True, mode="max-autotune-no-cudagraphs")
42
+ def fused_flex_attention(q, k, v, mask=None):
43
+ return flex_attention(q, k, v, block_mask=mask, enable_gqa=True)
44
+
45
+ def block_diff_mask(b, h, q_idx, kv_idx, block_size=None, n=None):
46
+ """
47
+ Constructs the specialized block diffusion attention mask for training
48
+ composed of three masks:
49
+ - **Block Diagonal Mask (M_BD)**: Self-attention within noised blocks
50
+ - **Offset Block Causal Mask (M_OBC)**: Cross-attention for conditional context
51
+ - **Block Causal Mask (M_BC)**: Attention to update x0
52
+
53
+ Args:
54
+ b, h: Batch and head indices (ignored for mask logic).
55
+ q_idx, kv_idx: Query and Key indices.
56
+ seq_len: Total sequence length.
57
+ block_size: Defines the block structure.
58
+
59
+ Returns:
60
+ A boolean attention mask.
61
+ """
62
+ # Indicate whether token belongs to xt or x0
63
+ x0_flag_q = (q_idx >= n)
64
+ x0_flag_kv = (kv_idx >= n)
65
+
66
+ # Compute block indices
67
+ block_q = torch.where(x0_flag_q == 1,
68
+ (q_idx - n) // block_size,
69
+ q_idx // block_size)
70
+ block_kv = torch.where(x0_flag_kv == 1,
71
+ (kv_idx - n) // block_size,
72
+ kv_idx // block_size)
73
+
74
+ # **1. Block Diagonal Mask (M_BD) **
75
+ block_diagonal = (block_q == block_kv) & (x0_flag_q == x0_flag_kv)
76
+
77
+ # **2. Offset Block-Causal Mask (M_OBC) **
78
+ offset_block_causal = (
79
+ (block_q > block_kv)
80
+ & (x0_flag_kv == 1)
81
+ & (x0_flag_q == 0)
82
+ )
83
+
84
+ # **3. Block-Causal Mask (M_BC) **
85
+ block_causal = (block_q >= block_kv) & (x0_flag_kv == 1) & (x0_flag_q == 1)
86
+
87
+ # **4. Combine Masks **
88
+ return block_diagonal | offset_block_causal | block_causal
89
+
90
+ def eval_block_diff_mask(q_idx, kv_idx, block_size=None):
91
+ # Compute block indices
92
+ block_q = q_idx // block_size
93
+ block_kv = kv_idx // block_size
94
+
95
+ return block_q >= block_kv
96
+
97
+ class TridaMLP(nn.Module):
98
+ def __init__(self, config):
99
+ super().__init__()
100
+ self.config = config
101
+ self.hidden_size = config.hidden_size
102
+ self.intermediate_size = config.intermediate_size
103
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
104
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
105
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
106
+ self.act_fn = ACT2FN[config.hidden_act]
107
+
108
+ def forward(self, x):
109
+ down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
110
+ return down_proj
111
+
112
+
113
+ def rotate_half(x):
114
+ """Rotates half the hidden dims of the input."""
115
+ x1 = x[..., : x.shape[-1] // 2]
116
+ x2 = x[..., x.shape[-1] // 2 :]
117
+ return torch.cat((-x2, x1), dim=-1)
118
+
119
+
120
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
121
+ """Applies Rotary Position Embedding to the query and key tensors.
122
+
123
+ Args:
124
+ q (`torch.Tensor`): The query tensor.
125
+ k (`torch.Tensor`): The key tensor.
126
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
127
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
128
+ position_ids (`torch.Tensor`, *optional*):
129
+ Deprecated and unused.
130
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
131
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
132
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
133
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
134
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
135
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
136
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
137
+ Returns:
138
+ `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
139
+ """
140
+ cos = cos.unsqueeze(unsqueeze_dim)
141
+ sin = sin.unsqueeze(unsqueeze_dim)
142
+ q_embed = (q * cos) + (rotate_half(q) * sin)
143
+ k_embed = (k * cos) + (rotate_half(k) * sin)
144
+ return q_embed, k_embed
145
+
146
+
147
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
148
+ """
149
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
150
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
151
+ """
152
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
153
+ if n_rep == 1:
154
+ return hidden_states
155
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
156
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
157
+
158
+
159
+ class TridaAttention(nn.Module):
160
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
161
+
162
+ def __init__(self, config: TridaConfig, layer_idx: int):
163
+ super().__init__()
164
+ self.config = config
165
+ self.layer_idx = layer_idx
166
+ self.head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
167
+ self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
168
+ self.scaling = self.head_dim**-0.5
169
+ self.attention_dropout = config.attention_dropout
170
+ self.is_causal = True
171
+ self.q_proj = nn.Linear(config.hidden_size, config.num_attention_heads * self.head_dim, bias=False)
172
+ self.k_proj = nn.Linear(config.hidden_size, config.num_key_value_heads * self.head_dim, bias=False)
173
+ self.v_proj = nn.Linear(config.hidden_size, config.num_key_value_heads * self.head_dim, bias=False)
174
+ self.o_proj = nn.Linear(config.num_attention_heads * self.head_dim, config.hidden_size, bias=False)
175
+ self.sliding_window = config.sliding_window if config.layer_types[layer_idx] == "sliding_attention" else None
176
+
177
+ def forward(
178
+ self,
179
+ hidden_states: torch.Tensor,
180
+ position_embeddings: tuple[torch.Tensor, torch.Tensor],
181
+ attention_mask: Optional[torch.Tensor],
182
+ past_key_value: Optional[Cache] = None,
183
+ cache_position: Optional[torch.LongTensor] = None,
184
+ update_past_key_values: Optional[bool] = False,
185
+ block_past_key_values: Optional[Cache] = None,
186
+ replace_position: Optional[int] = None,
187
+ **kwargs: Unpack[FlashAttentionKwargs],
188
+ ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]:
189
+ input_shape = hidden_states.shape[:-1]
190
+ hidden_shape = (*input_shape, -1, self.head_dim)
191
+
192
+ query_states = self.q_proj(hidden_states).view(hidden_shape).transpose(1, 2)
193
+ key_states = self.k_proj(hidden_states).view(hidden_shape).transpose(1, 2)
194
+ value_states = self.v_proj(hidden_states).view(hidden_shape).transpose(1, 2)
195
+
196
+ cos, sin = position_embeddings
197
+ # query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
198
+ if self.training:
199
+ #split q into two parts
200
+ q_1 = query_states[:,:,:query_states.shape[2]//2]
201
+ q_2 = query_states[:,:,query_states.shape[2]//2:]
202
+ #split k into two parts
203
+ k_1 = key_states[:,:,:key_states.shape[2]//2]
204
+ k_2 = key_states[:,:,key_states.shape[2]//2:]
205
+ q_1, k_1 = apply_rotary_pos_emb(q_1, k_1, cos, sin)
206
+ q_2, k_2 = apply_rotary_pos_emb(q_2, k_2, cos, sin)
207
+ query_states = torch.cat((q_1, q_2), dim=-2)
208
+ key_states = torch.cat((k_1, k_2), dim=-2)
209
+ else:
210
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
211
+
212
+ if block_past_key_values is not None:
213
+ if len(block_past_key_values) <= self.layer_idx:
214
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
215
+ key_states, value_states = block_past_key_values.update(key_states, value_states, self.layer_idx, cache_kwargs)
216
+ else:
217
+ block_cache_key_states = block_past_key_values[self.layer_idx][0]
218
+ block_cache_value_states = block_past_key_values[self.layer_idx][1]
219
+
220
+ block_cache_key_states[:, :, replace_position:replace_position+key_states.shape[2]] = key_states
221
+ block_cache_value_states[:, :, replace_position:replace_position+value_states.shape[2]] = value_states
222
+ key_states = block_cache_key_states
223
+ value_states = block_cache_value_states
224
+
225
+ if past_key_value is not None:
226
+ # sin and cos are specific to RoPE models; cache_position needed for the static cache
227
+ if update_past_key_values:
228
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
229
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
230
+ elif len(past_key_value) > self.layer_idx:
231
+ key_states = torch.cat((past_key_value[self.layer_idx][0], key_states), dim=-2)
232
+ value_states = torch.cat((past_key_value[self.layer_idx][1], value_states), dim=-2)
233
+
234
+ if self.training:
235
+ attn_output = fused_flex_attention(query_states, key_states, value_states, mask=attention_mask)
236
+ attn_output = attn_output.transpose(1, 2).contiguous()
237
+ else:
238
+ attention_interface = ALL_ATTENTION_FUNCTIONS["sdpa"]
239
+
240
+ attn_output, attn_weights = attention_interface(
241
+ self,
242
+ query_states,
243
+ key_states,
244
+ value_states,
245
+ attention_mask,
246
+ is_causal=False,
247
+ dropout=0.0 if not self.training else self.attention_dropout,
248
+ scaling=self.scaling,
249
+ sliding_window=self.sliding_window, # main diff with Llama
250
+ **kwargs,
251
+ )
252
+
253
+ attn_output = attn_output.reshape(*input_shape, -1).contiguous()
254
+ attn_output = self.o_proj(attn_output)
255
+ return attn_output
256
+
257
+ @use_kernel_forward_from_hub("RMSNorm")
258
+ class TridaRMSNorm(nn.Module):
259
+ def __init__(self, hidden_size, eps=1e-6):
260
+ """
261
+ RMSNorm is equivalent to T5LayerNorm
262
+ """
263
+ super().__init__()
264
+ self.weight = nn.Parameter(torch.ones(hidden_size))
265
+ self.variance_epsilon = eps
266
+
267
+ def forward(self, hidden_states):
268
+ input_dtype = hidden_states.dtype
269
+ hidden_states = hidden_states.to(torch.float32)
270
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
271
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
272
+ return self.weight * hidden_states.to(input_dtype)
273
+
274
+ def extra_repr(self):
275
+ return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"
276
+
277
+
278
+ class TridaDecoderLayer(GradientCheckpointingLayer):
279
+ def __init__(self, config: TridaConfig, layer_idx: int):
280
+ super().__init__()
281
+ self.hidden_size = config.hidden_size
282
+
283
+ self.self_attn = TridaAttention(config=config, layer_idx=layer_idx)
284
+
285
+ self.mlp = TridaMLP(config)
286
+ self.input_layernorm = TridaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
287
+ self.post_attention_layernorm = TridaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
288
+ self.attention_type = config.layer_types[layer_idx]
289
+
290
+ def forward(
291
+ self,
292
+ hidden_states: torch.Tensor,
293
+ attention_mask: Optional[torch.Tensor] = None,
294
+ position_ids: Optional[torch.LongTensor] = None,
295
+ past_key_value: Optional[Cache] = None,
296
+ use_cache: Optional[bool] = False,
297
+ cache_position: Optional[torch.LongTensor] = None,
298
+ position_embeddings: Optional[tuple[torch.Tensor, torch.Tensor]] = None, # necessary, but kept here for BC
299
+ update_past_key_values: Optional[bool] = False,
300
+ use_block_cache: Optional[bool] = False,
301
+ block_past_key_values: Optional[Cache] = None,
302
+ replace_position: Optional[int] = None,
303
+ **kwargs
304
+ ) -> tuple[torch.Tensor]:
305
+ residual = hidden_states
306
+ hidden_states = self.input_layernorm(hidden_states)
307
+ # Self Attention
308
+ hidden_states = self.self_attn(
309
+ hidden_states=hidden_states,
310
+ attention_mask=attention_mask,
311
+ position_ids=position_ids,
312
+ past_key_value=past_key_value,
313
+ use_cache=use_cache,
314
+ cache_position=cache_position,
315
+ position_embeddings=position_embeddings,
316
+ update_past_key_values=update_past_key_values,
317
+ use_block_cache=use_block_cache,
318
+ block_past_key_values=block_past_key_values,
319
+ replace_position=replace_position,
320
+ **kwargs,
321
+ )
322
+ hidden_states = residual + hidden_states
323
+
324
+ # Fully Connected
325
+ residual = hidden_states
326
+ hidden_states = self.post_attention_layernorm(hidden_states)
327
+ hidden_states = self.mlp(hidden_states)
328
+ hidden_states = residual + hidden_states
329
+ return hidden_states
330
+
331
+
332
+
333
+ class TridaPreTrainedModel(PreTrainedModel):
334
+ config_class = TridaConfig
335
+ base_model_prefix = "model"
336
+ supports_gradient_checkpointing = True
337
+ _no_split_modules = ["TridaDecoderLayer"]
338
+ _skip_keys_device_placement = ["past_key_values"]
339
+ _supports_flash_attn_2 = True
340
+ _supports_sdpa = True
341
+ _supports_flex_attn = True
342
+ _supports_cache_class = True
343
+ _supports_quantized_cache = True
344
+ _supports_static_cache = True
345
+ _supports_attention_backend = True
346
+ _can_record_outputs = {
347
+ "hidden_states": TridaDecoderLayer,
348
+ "attentions": TridaAttention,
349
+ }
350
+
351
+ def _init_weights(self, module):
352
+ std = self.config.initializer_range
353
+ if isinstance(module, nn.Linear):
354
+ module.weight.data.normal_(mean=0.0, std=std)
355
+ if module.bias is not None:
356
+ module.bias.data.zero_()
357
+ elif isinstance(module, nn.Embedding):
358
+ module.weight.data.normal_(mean=0.0, std=std)
359
+ if module.padding_idx is not None:
360
+ module.weight.data[module.padding_idx].zero_()
361
+ elif isinstance(module, TridaRMSNorm):
362
+ module.weight.data.fill_(1.0)
363
+
364
+
365
+ class TridaRotaryEmbedding(nn.Module):
366
+ def __init__(self, config: TridaConfig, device=None):
367
+ super().__init__()
368
+ # BC: "rope_type" was originally "type"
369
+ if hasattr(config, "rope_scaling") and isinstance(config.rope_scaling, dict):
370
+ self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
371
+ else:
372
+ self.rope_type = "default"
373
+ self.max_seq_len_cached = config.max_position_embeddings
374
+ self.original_max_seq_len = config.max_position_embeddings
375
+
376
+ self.config = config
377
+ self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
378
+
379
+ inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device)
380
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
381
+ self.original_inv_freq = self.inv_freq
382
+
383
+ @torch.no_grad()
384
+ @dynamic_rope_update # power user: used with advanced RoPE types (e.g. dynamic rope)
385
+ def forward(self, x, position_ids):
386
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
387
+ position_ids_expanded = position_ids[:, None, :].float()
388
+
389
+ device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != "mps" else "cpu"
390
+ with torch.autocast(device_type=device_type, enabled=False): # Force float32
391
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
392
+ emb = torch.cat((freqs, freqs), dim=-1)
393
+ cos = emb.cos() * self.attention_scaling
394
+ sin = emb.sin() * self.attention_scaling
395
+
396
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
397
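`forward` returns `cos`/`sin` tensors of shape `[batch, seq_len, head_dim]` in the input dtype. The attention layers presumably consume them with the usual rotate-half formulation; the sketch below re-implements those standard Llama-style helpers for illustration (they are not copied from this file):

```python
import torch

def rotate_half(x):
    # (x1, x2) -> (-x2, x1) over the last dimension.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary_pos_emb(q, k, cos, sin):
    # cos/sin: [batch, seq_len, head_dim]; add a head dim so they broadcast
    # against q/k of shape [batch, num_heads, seq_len, head_dim].
    cos, sin = cos.unsqueeze(1), sin.unsqueeze(1)
    return (q * cos) + (rotate_half(q) * sin), (k * cos) + (rotate_half(k) * sin)

q = torch.randn(1, 2, 4, 8)   # [batch, heads, seq, head_dim]
k = torch.randn(1, 2, 4, 8)
cos, sin = torch.randn(1, 4, 8), torch.randn(1, 4, 8)
q_rot, k_rot = apply_rotary_pos_emb(q, k, cos, sin)
print(q_rot.shape, k_rot.shape)  # torch.Size([1, 2, 4, 8]) torch.Size([1, 2, 4, 8])
```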
+
398
+
399
+
400
+ class TridaModel(TridaPreTrainedModel):
401
+ def __init__(self, config: TridaConfig):
402
+ super().__init__(config)
403
+ self.padding_idx = config.pad_token_id
404
+ self.vocab_size = config.vocab_size
405
+ self.bd_size = config.bd_size
406
+
407
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
408
+ self.layers = nn.ModuleList(
409
+ [TridaDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
410
+ )
411
+ self.norm = TridaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
412
+ self.rotary_emb = TridaRotaryEmbedding(config=config)
413
+ self.gradient_checkpointing = False
414
+
415
+ # Initialize weights and apply final processing
416
+ self.post_init()
417
+
418
+ def get_input_embeddings(self):
419
+ return self.embed_tokens
420
+
421
+ def set_input_embeddings(self, value):
422
+ self.embed_tokens = value
423
+
424
+
425
+ def eval_mask(self, seqlen, block_size, cache_seq_len, input_ids=None, mask_id=128012,
426
+ prev_eval_mask=None):
427
+ """
428
+ Creates attention mask for inference with:
429
+ 1. Block-wise causal attention: tokens in current block attend to previous clean blocks
430
+ 2. Within-block any-order causal attention: once a token is unmasked, its attention mask is fixed
431
+
432
+ Args:
433
+ seqlen: Current sequence length
434
+ block_size: Size of attention blocks
435
+ cache_seq_len: Length of cached sequence from previous blocks
436
+ input_ids: Current input token IDs [batch, seqlen]
437
+ mask_id: Token ID used for masked positions
438
+ prev_eval_mask: Previous attention mask to preserve frozen attention patterns
439
+ """
440
+ device = input_ids.device if input_ids is not None else self.embed_tokens.weight.device
441
+ total_len = seqlen + cache_seq_len
442
+
443
+ q_indices = torch.arange(seqlen, device=device) + cache_seq_len
444
+ k_indices = torch.arange(total_len, device=device)
445
+
446
+ # Block-level causal mask: [seqlen, total_len]
447
+ block_q = q_indices[:, None] // block_size
448
+ block_kv = k_indices[None, :] // block_size
449
+ block_causal_mask = (block_q >= block_kv).unsqueeze(0) # [1, seqlen, total_len]
450
+
451
+ if input_ids is None or mask_id is None:
452
+ return block_causal_mask
453
+
454
+ batch_size = input_ids.shape[0]
455
+
456
+ # Current mask state
457
+ is_mask_q = (input_ids == mask_id) # [batch, seqlen]
458
+ is_mask_kv_cached = torch.zeros(batch_size, cache_seq_len, dtype=torch.bool, device=device)
459
+ is_mask_kv = torch.cat([is_mask_kv_cached, is_mask_q], dim=1) # [batch, total_len]
460
+
461
+ # Diagonal (self-attention always allowed)
462
+ diagonal_mask = (q_indices[:, None] == k_indices[None, :]).unsqueeze(0) # [1, seqlen, total_len]
463
+
464
+ if prev_eval_mask is not None:
465
+ # Any-order causal attention:
466
+ # - Currently unmasked tokens: use their frozen attention from prev_eval_mask
467
+ # - Still masked tokens: attend to all currently unmasked tokens + themselves
468
+
469
+ is_unmasked_kv = ~is_mask_kv # [batch, total_len]
470
+
471
+ # For still masked queries: attend to all currently unmasked tokens + themselves
472
+ still_masked_q_mask = (block_causal_mask & is_unmasked_kv.unsqueeze(1)) | diagonal_mask
473
+
474
+ # Get previous mask (squeeze the head dimension)
475
+ prev_eval_mask_squeezed = prev_eval_mask.squeeze(1) # [batch, seqlen, total_len]
476
+
477
+ # Expand is_mask_q for indexing: [batch, seqlen, total_len]
478
+ is_mask_q_exp = is_mask_q.unsqueeze(-1).expand(-1, -1, total_len)
479
+
480
+ # Build final mask:
481
+ # - Currently unmasked tokens: use their frozen attention from prev_eval_mask
482
+ # - Still masked tokens: use still_masked_q_mask
483
+ final_mask = torch.where(
484
+ is_mask_q_exp,
485
+ still_masked_q_mask,
486
+ prev_eval_mask_squeezed
487
+ )
488
+
489
+ return final_mask.unsqueeze(1) # [batch, 1, seqlen, total_len]
490
+ else:
491
+ # First step or no previous mask info
492
+ # Masked tokens: attend to all unmasked tokens + themselves (to predict what to fill in)
493
+ # Unmasked tokens: block causal to other unmasked tokens + self
494
+
495
+ is_mask_q_exp = is_mask_q.unsqueeze(-1) # [batch, seqlen, 1]
496
+ is_mask_kv_exp = is_mask_kv.unsqueeze(1) # [batch, 1, total_len]
497
+
498
+ # Both masked and unmasked queries can see: unmasked keys within block causal + diagonal
499
+ final_mask = (block_causal_mask & ~is_mask_kv_exp) | diagonal_mask
500
+
501
+ return final_mask.unsqueeze(1) # [batch, 1, seqlen, total_len]
502
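`eval_mask` returns a boolean mask of shape `[batch, 1, seqlen, cache_seq_len + seqlen]`. The toy example below re-derives only the first-step branch (no `prev_eval_mask`) with made-up sizes, to make the rule concrete: every query attends to the cached block and to the currently unmasked keys inside its block-causal window, plus its own position.

```python
import torch

# Toy setup: one cached block of 4 clean tokens, a current block of 4 tokens
# whose last two positions still hold the mask token (token values are arbitrary).
mask_id, block_size, cache_seq_len = 128012, 4, 4
input_ids = torch.tensor([[11, 12, mask_id, mask_id]])          # [batch=1, seqlen=4]
seqlen = input_ids.shape[1]
total_len = cache_seq_len + seqlen

q_idx = torch.arange(seqlen) + cache_seq_len
k_idx = torch.arange(total_len)
block_causal = (q_idx[:, None] // block_size >= k_idx[None, :] // block_size).unsqueeze(0)

is_mask_q = input_ids == mask_id
is_mask_kv = torch.cat([torch.zeros(1, cache_seq_len, dtype=torch.bool), is_mask_q], dim=1)
diagonal = (q_idx[:, None] == k_idx[None, :]).unsqueeze(0)

final_mask = (block_causal & ~is_mask_kv.unsqueeze(1)) | diagonal
print(final_mask[0].int())
# Rows for still-masked queries get an extra 1 on their own (diagonal) position;
# no query can see the still-masked keys of other positions.
```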
+
503
+ def gen_mask(self, seqlen, block_size, B, H):
504
+ mask = create_block_mask(
505
+ partial(block_diff_mask, block_size=block_size, n=seqlen),
506
+ B=B, H=H, Q_LEN=seqlen*2, KV_LEN=seqlen*2)
507
+
508
+ return mask
509
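`gen_mask` builds the training-time mask with PyTorch FlexAttention's `create_block_mask` (available from PyTorch 2.5), applying the `block_diff_mask` predicate defined earlier in this file over the doubled (noisy ‖ clean) sequence. The snippet below only illustrates the `create_block_mask` call with a toy block-causal predicate; it is not the model's actual `block_diff_mask`:

```python
import torch
from functools import partial
from torch.nn.attention.flex_attention import create_block_mask

def block_causal(b, h, q_idx, kv_idx, block_size):
    # Toy predicate: a query may attend to any key in its own or an earlier block.
    return (q_idx // block_size) >= (kv_idx // block_size)

seqlen, block_size = 16, 4
mask = create_block_mask(
    partial(block_causal, block_size=block_size),
    B=1, H=1, Q_LEN=seqlen, KV_LEN=seqlen, device="cpu",
)
print(mask)  # a BlockMask describing which (query, key) tiles are reachable
```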
+
510
+ def forward(
511
+ self,
512
+ input_ids: Optional[torch.LongTensor] = None,
513
+ labels: Optional[torch.LongTensor] = None,
514
+ attention_mask: Optional[torch.Tensor] = None,
515
+ position_ids: Optional[torch.LongTensor] = None,
516
+ past_key_values: Optional[Cache] = None,
517
+ inputs_embeds: Optional[torch.FloatTensor] = None,
518
+ use_cache: Optional[bool] = None,
519
+ cache_position: Optional[torch.LongTensor] = None,
520
+ update_past_key_values: Optional[bool] = False,
521
+ block_size: Optional[int] = 32,
522
+ use_block_cache: Optional[bool] = False,
523
+ block_past_key_values: Optional[Cache] = None,
524
+ replace_position: Optional[int] = None,
525
+ prev_attention_mask: Optional[torch.Tensor] = None,
526
+ mask_id: Optional[int] = 128012,
527
+ **kwargs
528
+ ) -> BaseModelOutputWithPastAndBlockCache:
529
+ if (input_ids is None) ^ (inputs_embeds is not None):
530
+ raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
531
+
532
+ if inputs_embeds is None:
533
+ inputs_embeds = self.embed_tokens(input_ids)
534
+
535
+ if use_cache and past_key_values is None:
536
+ past_key_values = DynamicCache()
537
+
538
+ if use_block_cache and block_past_key_values is None:
539
+ block_past_key_values = DynamicCache()
540
+
541
+ if cache_position is None:
542
+ past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
543
+ if self.training:
544
+ cache_position = torch.arange(
545
+ past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1]//2, device=inputs_embeds.device
546
+ )
547
+ else:
548
+ if use_block_cache:
549
+ block_start_position = past_seen_tokens+replace_position if replace_position is not None else past_seen_tokens
550
+ cache_position = torch.arange(
551
+ block_start_position, block_start_position + inputs_embeds.shape[1], device=inputs_embeds.device
552
+ )
553
+ else:
554
+ cache_position = torch.arange(
555
+ past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
556
+ )
557
+
558
+ if position_ids is None:
559
+ position_ids = cache_position.unsqueeze(0)
560
+
561
+ if self.training:
562
+ attention_mask = self.gen_mask(labels.shape[1], self.bd_size, labels.shape[0], self.config.num_attention_heads).to(device=inputs_embeds.device)
563
+ else:
564
+ # Always compute attention mask for proper any-order causal attention
565
+ cache_seq_len = past_key_values.get_seq_length() if past_key_values is not None else 0
566
+ attention_mask = self.eval_mask(
567
+ input_ids.shape[1],
568
+ block_size,
569
+ cache_seq_len,
570
+ input_ids=input_ids,
571
+ mask_id=mask_id,
572
+ prev_eval_mask=prev_attention_mask
573
+ ).to(device=inputs_embeds.device)
574
+
575
+ hidden_states = inputs_embeds
576
+
577
+ # create position embeddings to be shared across the decoder layers
578
+ position_embeddings = self.rotary_emb(hidden_states, position_ids)
579
+
580
+ for decoder_layer in self.layers[: self.config.num_hidden_layers]:
581
+ hidden_states = decoder_layer(
582
+ hidden_states,
583
+ attention_mask=attention_mask,
584
+ position_ids=position_ids,
585
+ past_key_value=past_key_values,
586
+ use_cache=use_cache,
587
+ cache_position=cache_position,
588
+ position_embeddings=position_embeddings,
589
+ update_past_key_values=update_past_key_values,
590
+ use_block_cache=use_block_cache,
591
+ block_past_key_values=block_past_key_values,
592
+ replace_position=replace_position,
593
+ **kwargs,
594
+ )
595
+
596
+ hidden_states = self.norm(hidden_states)
597
+ return BaseModelOutputWithPastAndBlockCache(
598
+ last_hidden_state=hidden_states,
599
+ past_key_values=past_key_values if use_cache else None,
600
+ block_past_key_values=block_past_key_values if use_block_cache else None,
601
+ computed_attention_mask=attention_mask if not self.training else None,
602
+ )
603
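A small, model-free sketch of the `cache_position` bookkeeping in the forward pass above (toy numbers, purely illustrative): during training the input is the concatenation [noisy block ‖ clean block], so positions only cover half of `inputs_embeds.shape[1]`; during block-cached decoding with `replace_position`, positions start at the cached length plus the sub-block offset.

```python
import torch

# Training: inputs_embeds holds [noisy || clean], positions span half its length.
train_input_len = 8
print(torch.arange(0, train_input_len // 2))         # tensor([0, 1, 2, 3])

# Block-cached decoding: re-decoding a small block of 4 tokens that starts at
# offset `replace_position` inside the current block, after 32 cached tokens.
past_seen, replace_position, small_block_len = 32, 8, 4
start = past_seen + replace_position
print(torch.arange(start, start + small_block_len))  # tensor([40, 41, 42, 43])
```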
+
604
+
605
+ class TridaForDLM(TridaPreTrainedModel, GenerationMixin):
606
+ _tied_weights_keys = ["lm_head.weight"]
607
+ _tp_plan = {"lm_head": "colwise_rep"}
608
+ _pp_plan = {"lm_head": (["hidden_states"], ["logits"])}
609
+
610
+ def __init__(self, config):
611
+ super().__init__(config)
612
+ self.model = TridaModel(config)
613
+ self.vocab_size = config.vocab_size
614
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
615
+
616
+ # Initialize weights and apply final processing
617
+ self.post_init()
618
+
619
+ def get_input_embeddings(self):
620
+ return self.model.embed_tokens
621
+
622
+ def set_input_embeddings(self, value):
623
+ self.model.embed_tokens = value
624
+
625
+ def get_output_embeddings(self):
626
+ return self.lm_head
627
+
628
+ def set_output_embeddings(self, new_embeddings):
629
+ self.lm_head = new_embeddings
630
+
631
+ def set_decoder(self, decoder):
632
+ self.model = decoder
633
+
634
+ def get_decoder(self):
635
+ return self.model
636
+
637
+ def _create_attention_mask(self, input_ids, seq_len, pad_token_id):
638
+ """
639
+ Create attention mask that marks padding positions as 0 (masked)
640
+ and real token positions as 1 (unmasked).
641
+
642
+ Args:
643
+ input_ids: Tensor of shape (batch_size, seq_length)
644
+ seq_len: Tensor of shape (batch_size,) with actual sequence lengths for each sample
645
+ pad_token_id: Token ID used for padding
646
+
647
+ Returns:
648
+ attention_mask: Tensor of shape (batch_size, seq_length) with 1 for real tokens, 0 for padding
649
+ """
650
+ batch_size, seq_length = input_ids.shape
651
+ attention_mask = torch.ones((batch_size, seq_length), dtype=torch.long, device=input_ids.device)
652
+
653
+ # Mark padding positions as 0
654
+ for i in range(batch_size):
655
+ # Padding can occur after the actual sequence length
656
+ if seq_len[i] < seq_length:
657
+ attention_mask[i, seq_len[i]:] = 0
658
+
659
+ return attention_mask
660
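The per-sample loop above can be collapsed into one broadcast comparison; a behavior-equivalent sketch (an illustration, not the helper the model calls):

```python
import torch

def create_attention_mask_vectorized(input_ids, seq_len):
    # 1 for real tokens, 0 for padding: position j is real exactly when j < seq_len[i].
    batch_size, seq_length = input_ids.shape
    positions = torch.arange(seq_length, device=input_ids.device)   # [seq_length]
    return (positions[None, :] < seq_len[:, None]).long()           # [batch, seq_length]

input_ids = torch.zeros(2, 6, dtype=torch.long)
seq_len = torch.tensor([4, 6])
print(create_attention_mask_vectorized(input_ids, seq_len))
# tensor([[1, 1, 1, 1, 0, 0],
#         [1, 1, 1, 1, 1, 1]])
```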
+
661
+ @can_return_tuple
662
+ def forward(
663
+ self,
664
+ input_ids: Optional[torch.LongTensor] = None,
665
+ attention_mask: Optional[torch.Tensor] = None,
666
+ position_ids: Optional[torch.LongTensor] = None,
667
+ past_key_values: Optional[Cache] = None,
668
+ inputs_embeds: Optional[torch.FloatTensor] = None,
669
+ labels: Optional[torch.LongTensor] = None,
670
+ use_cache: Optional[bool] = None,
671
+ cache_position: Optional[torch.LongTensor] = None,
672
+ logits_to_keep: Union[int, torch.Tensor] = 0,
673
+ update_past_key_values: Optional[bool] = False,
674
+ block_size: Optional[int] = 32,
675
+ use_block_cache: Optional[bool] = False,
676
+ block_past_key_values: Optional[Cache] = None,
677
+ replace_position: Optional[int] = None,
678
+ mask_id: Optional[int] = 128012,
679
+ prev_attention_mask: Optional[torch.Tensor] = None,
680
+ **kwargs
681
+ ) -> CausalLMOutputWithPastAndBlockCache:
682
+
683
+ if self.training:
684
+ original_labels = labels.clone()
685
+ original_input_ids = input_ids.clone()
686
+
687
+ noisy_input_ids = input_ids.clone()
688
+
689
+ input_ids = input_ids.reshape(input_ids.shape[0] * input_ids.shape[1] // self.model.bd_size, self.model.bd_size)
690
+ b, l = input_ids.shape
691
+ t = torch.rand((b,), device=input_ids.device)
692
+ eps=1e-3
693
+ p_mask = (1 - eps) * t + eps
694
+ p_mask = p_mask[:, None].repeat(1, l)
695
+
696
+ mask_indices = torch.rand((b, l), device=input_ids.device) < p_mask
697
+ x_t = torch.where(mask_indices, mask_id, input_ids).reshape(labels.shape)
698
+ noisy_input_ids[labels != -100] = x_t[labels != -100]
699
+ mask = (noisy_input_ids != mask_id)
700
+ labels[mask] = -100
701
+ input_ids = torch.cat([noisy_input_ids, input_ids.reshape(labels.shape)], dim=1)
702
+
703
+ complementary_noisy_input_ids = original_input_ids.clone()
704
+ complementary_labels = original_labels.clone()
705
+
706
+ complementary_input_ids = original_input_ids.reshape(original_input_ids.shape[0] * original_input_ids.shape[1] // self.model.bd_size, self.model.bd_size)
707
+
708
+ complementary_mask_indices = ~mask_indices
709
+ complementary_x_t = torch.where(complementary_mask_indices, mask_id, complementary_input_ids).reshape(labels.shape)
710
+ complementary_noisy_input_ids[complementary_labels != -100] = complementary_x_t[complementary_labels != -100]
711
+ complementary_mask = (complementary_noisy_input_ids != mask_id)
712
+ complementary_labels[complementary_mask] = -100
713
+ complementary_input_ids = torch.cat([complementary_noisy_input_ids, complementary_input_ids.reshape(complementary_labels.shape)], dim=1)
714
+
715
+ input_ids = torch.cat([input_ids, complementary_input_ids], dim=0)
716
+ labels = torch.cat([labels, complementary_labels], dim=0)
717
+
718
+ outputs: BaseModelOutputWithPastAndBlockCache = self.model(
719
+ input_ids=input_ids,
720
+ labels=labels,
721
+ attention_mask=attention_mask,
722
+ position_ids=position_ids,
723
+ past_key_values=past_key_values,
724
+ inputs_embeds=inputs_embeds,
725
+ use_cache=use_cache,
726
+ cache_position=cache_position,
727
+ update_past_key_values=update_past_key_values,
728
+ block_size=block_size,
729
+ use_block_cache=use_block_cache,
730
+ block_past_key_values=block_past_key_values,
731
+ replace_position=replace_position,
732
+ prev_attention_mask=prev_attention_mask,
733
+ mask_id=mask_id,
734
+ **kwargs,
735
+ )
736
+
737
+ hidden_states = outputs.last_hidden_state
738
+ if self.training:
739
+ hidden_states = hidden_states[:, :hidden_states.shape[1]//2, :]
740
+ # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
741
+ slice_indices = slice(-logits_to_keep, None) if isinstance(logits_to_keep, int) else logits_to_keep
742
+ logits = self.lm_head(hidden_states[:, slice_indices, :])
743
+
744
+ loss = None
745
+ if labels is not None:
746
+ # Standard causal LM loss with automatic label shifting (logits[i] predicts labels[i+1])
747
+ # This is correct even for block diffusion training
748
+ loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size, **kwargs)
749
+
750
+ return CausalLMOutputWithPastAndBlockCache(
751
+ loss=loss,
752
+ logits=logits,
753
+ past_key_values=outputs.past_key_values,
754
+ hidden_states=outputs.hidden_states,
755
+ attentions=outputs.attentions,
756
+ block_past_key_values=outputs.block_past_key_values,
757
+ computed_attention_mask=outputs.computed_attention_mask,
758
+ )
759
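The training branch of `forward` implements masked-diffusion noising: sample one noise level t per block, mask each token independently with probability (1 − eps)·t + eps, keep the loss only on masked positions, and also run the complementary masking of the same batch. A standalone sketch of just the noising step with toy sizes (no model or loss involved):

```python
import torch

mask_id, eps, bd_size = 128012, 1e-3, 4
input_ids = torch.arange(1, 17).view(2, 8)     # toy batch: [2, 8] = two blocks per row
labels = input_ids.clone()

blocks = input_ids.reshape(-1, bd_size)        # [num_blocks, bd_size]
t = torch.rand(blocks.shape[0])                # one noise level per block
p_mask = ((1 - eps) * t + eps)[:, None].expand_as(blocks)
mask_indices = torch.rand_like(p_mask) < p_mask

noisy = torch.where(mask_indices, mask_id, blocks).reshape(input_ids.shape)
labels[noisy != mask_id] = -100                # loss only where tokens were masked
model_input = torch.cat([noisy, input_ids], dim=1)   # [noisy || clean], as in forward
print(noisy)
print(labels)
print(model_input.shape)                       # torch.Size([2, 16])
```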
+
760
+ def _has_stop_token(self, tokens, stop_token):
761
+ """Check if any batch has the stop token. Returns (any_has_stop, per_batch_mask)."""
762
+ has_stop = (tokens == stop_token).any(dim=-1) # [batch]
763
+ return has_stop.any().item(), has_stop
764
+
765
+ def _get_first_stop_token_idx(self, tokens, stop_token):
766
+ """Get the index of the first stop token for each batch. Returns -1 if not found."""
767
+ batch_size, seq_len = tokens.shape
768
+ # Find positions where stop_token occurs
769
+ is_stop = (tokens == stop_token) # [batch, seq]
770
+ # For each batch, find the first occurrence (or seq_len if not found)
771
+ # Use argmax on the boolean tensor - it returns first True index, or 0 if all False
772
+ has_stop = is_stop.any(dim=-1) # [batch]
773
+ first_idx = is_stop.float().argmax(dim=-1) # [batch]
774
+ # Set to -1 for batches without stop token
775
+ first_idx = torch.where(has_stop, first_idx, torch.tensor(-1, device=tokens.device))
776
+ return first_idx # [batch]
777
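The float-`argmax` trick above returns the index of the first `True` (and 0 when there is none, which is why rows without a stop token are then overwritten with -1). A tiny demonstration:

```python
import torch

tokens = torch.tensor([[5, 7, 9, 7],    # stop token 7 first appears at index 1
                       [5, 5, 5, 5]])   # no stop token
is_stop = tokens == 7
first_idx = is_stop.float().argmax(dim=-1)                          # tensor([1, 0])
first_idx = torch.where(is_stop.any(dim=-1), first_idx, torch.tensor(-1))
print(first_idx)                                                    # tensor([ 1, -1])
```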
+
778
+ def _get_per_sequence_stop_status(self, tokens, prompt_length, stop_token, mask_id):
779
+ """
780
+ Check stop condition per sequence.
781
+ Returns a boolean tensor [batch] where True means the sequence should stop.
782
+ A sequence stops when it has a stop token with no mask tokens before it.
783
+ """
784
+ batch_size = tokens.shape[0]
785
+ generated = tokens[:, prompt_length:]
786
+
787
+ # Get stop token positions
788
+ first_stop_idx = self._get_first_stop_token_idx(generated, stop_token) # [batch]
789
+ has_stop = first_stop_idx >= 0 # [batch]
790
+
791
+ # For each sequence, check if there are mask tokens before the stop token
792
+ should_stop = torch.zeros(batch_size, dtype=torch.bool, device=tokens.device)
793
+ for b in range(batch_size):
794
+ if has_stop[b]:
795
+ # Check if any mask tokens exist before the stop token
796
+ if (generated[b, :first_stop_idx[b]] == mask_id).sum() == 0:
797
+ should_stop[b] = True
798
+
799
+ return should_stop
800
+
801
+ @torch.no_grad()
802
+ def generate(
803
+ self,
804
+ input_ids,
805
+ max_new_tokens,
806
+ mask_id=128012,
807
+ threshold=1,
808
+ small_block_size=8,
809
+ block_size=32,
810
+ stop_token=128001,
811
+ stopping_criteria=None,
812
+ top_p=0.95,
813
+ temperature=0,
814
+ use_block_cache=False,
815
+ pad_token_id=128004,
816
+ return_dict_in_generate=False,
817
+ **kwargs
818
+ ):
819
+ batch_size = input_ids.shape[0]
820
+ num_blocks = max_new_tokens // block_size
821
+ original_input_length = input_ids.shape[1]
822
+
823
+ # Track which sequences are finished (have generated stop token with no masks before it)
824
+ finished = torch.zeros(batch_size, dtype=torch.bool, device=self.device)
825
+
826
+ if input_ids.shape[1] > block_size:
827
+ output = self.forward(input_ids=input_ids[:, :(input_ids.shape[1] // block_size * block_size)], use_cache=True, update_past_key_values=True, block_size=block_size)
828
+ logits, past_key_values = output.logits, output.past_key_values
829
+ if input_ids.shape[1] % block_size == 0:
830
+ next_token = logits[:, -1:, :].argmax(dim=-1)
831
+ input_ids = torch.cat([input_ids, next_token], dim=1)
832
+ else:
833
+ past_key_values = None
834
+
835
+ num_small_blocks = block_size // small_block_size
836
+
837
+ for block_idx in range(num_blocks):
838
+ # Check if all sequences are finished
839
+ if finished.all():
840
+ break
841
+
842
+ prompt_length = input_ids.shape[1]
843
+ # Initialize x_init with mask_id
844
+ x_init = mask_id * torch.ones((batch_size, block_size - prompt_length % block_size), device=self.device, dtype=torch.long)
845
+ x_init = torch.cat([input_ids, x_init], dim=1)
846
+
847
+ x_t = x_init.clone()
848
+ block_past_key_values = None
849
+ block_attention_mask = None # Track attention mask within block for any-order causal
850
+
851
+ while True:
852
+ # Update finished status per sequence
853
+ newly_finished = self._get_per_sequence_stop_status(x_t, prompt_length, stop_token, mask_id)
854
+ finished = finished | newly_finished
855
+
856
+ # Check if all sequences are finished
857
+ if finished.all():
858
+ break
859
+
860
+ # Only consider mask positions for unfinished sequences
861
+ mask_idx = (x_t[:, -block_size:] == mask_id)
862
+ # Zero out mask_idx for finished sequences (they don't need more unmasking)
863
+ mask_idx = mask_idx & (~finished.unsqueeze(-1))
864
+
865
+ # Decode a complete block, update cache, and generate the next token
866
+ if mask_idx.sum() == 0:
867
+ output = self.forward(input_ids=x_t[:, -block_size:], use_cache=True, past_key_values=past_key_values, update_past_key_values=True, block_size=block_size)
868
+ logits, past_key_values = output.logits, output.past_key_values
869
+ next_token = logits[:, -1:, :].argmax(dim=-1)
870
+ # Only append next token for unfinished sequences; use pad for finished
871
+ next_token = torch.where(finished.unsqueeze(-1), pad_token_id, next_token)
872
+ x_t = torch.cat([x_t, next_token], dim=1)
873
+ break
874
+
875
+ for small_block_idx in range(num_small_blocks):
876
+ small_block_start_idx = small_block_idx * small_block_size
877
+ small_block_end_idx = small_block_start_idx + small_block_size
878
+
879
+ start = -block_size + small_block_start_idx
880
+ end = None if block_size == small_block_end_idx else -block_size + small_block_end_idx
881
+
882
+ while True:
883
+ # Recompute mask_idx considering finished sequences
884
+ mask_idx = (x_t[:, -block_size:] == mask_id)
885
+ mask_idx = mask_idx & (~finished.unsqueeze(-1))
886
+
887
+ if mask_idx[:, start:end].sum() == 0:
888
+ break
889
+
890
+ # Update finished status
891
+ newly_finished = self._get_per_sequence_stop_status(x_t, prompt_length, stop_token, mask_id)
892
+ finished = finished | newly_finished
893
+
894
+ if finished.all():
895
+ break
896
+
897
+ if use_block_cache:
898
+ if block_past_key_values is None or (x_t[:, -block_size+small_block_start_idx] == mask_id).any():
899
+ output = self.forward(input_ids=x_t[:, -block_size:], use_cache=True, past_key_values=past_key_values, update_past_key_values=False, use_block_cache=True, prev_attention_mask=block_attention_mask, mask_id=mask_id)
900
+ logits, block_past_key_values = output.logits, output.block_past_key_values
901
+ block_attention_mask = output.computed_attention_mask # Track attention mask for any-order causal
902
+ logits = torch.cat([logits[:, :1, :], logits[:, :-1, :]], dim=1)
903
+ logits = logits[:, start:end]
904
+ else:
905
+ output = self.forward(input_ids=x_t[:,start:end], use_cache=True, past_key_values=past_key_values, update_past_key_values=False, use_block_cache=True, block_past_key_values=block_past_key_values, replace_position=small_block_start_idx, prev_attention_mask=block_attention_mask, mask_id=mask_id)
906
+ logits = output.logits
907
+ block_attention_mask = output.computed_attention_mask # Track attention mask for any-order causal
908
+ logits = torch.cat([logits[:, :1, :], logits[:, :-1, :]], dim=1)
909
+ else:
910
+ output = self.forward(input_ids=x_t[:, -block_size:], use_cache=True, past_key_values=past_key_values, update_past_key_values=False, prev_attention_mask=block_attention_mask, mask_id=mask_id)
911
+ logits = output.logits
912
+ block_attention_mask = output.computed_attention_mask # Track attention mask for any-order causal
913
+ logits = torch.cat([logits[:, :1, :], logits[:, :-1, :]], dim=1)
914
+ logits = logits[:, start:end]
915
+
916
+ # Allow stop_token to be generated naturally (don't block it)
917
+ x_1, p_1t = self.sample_with_top_p(logits, top_p=top_p, temperature=temperature, stop_token_id=None)
918
+ # Select tokens with probability greater than threshold from p_1t
919
+ x1_p = torch.squeeze(torch.gather(p_1t, dim=-1, index=torch.unsqueeze(x_1, -1)), -1)
920
+ x1_p = torch.where(mask_idx[:, start:end], x1_p, -torch.inf)
921
+
922
+ unmask_idx = (x1_p > threshold)
923
+ max_prob_idx = x1_p.argmax(dim=-1)
924
+ unmask_idx[torch.arange(x_1.shape[0], device=self.device), max_prob_idx] = True
925
+ unmask_idx = unmask_idx & mask_idx[:, start:end]
926
+
927
+ # Don't update tokens for finished sequences
928
+ unmask_idx = unmask_idx & (~finished.unsqueeze(-1))
929
+
930
+ x_t[:, start:end][unmask_idx] = x_1[unmask_idx]
931
+
932
+ input_ids = x_t
933
+
934
+ # Per-sequence truncation: truncate each sequence at its stop token
935
+ # Pad shorter sequences to maintain consistent tensor shape
936
+ first_stop_idx = self._get_first_stop_token_idx(input_ids[:, original_input_length:], stop_token)
937
+
938
+ # Calculate the max output length (considering sequences without stop token)
939
+ max_output_len = input_ids.shape[1]
940
+ has_stop = first_stop_idx >= 0
941
+ if has_stop.any():
942
+ # For sequences with stop token: original_input_length + stop_idx + 1 (include stop token)
943
+ # For sequences without: keep full length
944
+ output_lens = torch.where(
945
+ has_stop,
946
+ original_input_length + first_stop_idx + 1,
947
+ torch.tensor(input_ids.shape[1], device=self.device)
948
+ )
949
+ max_output_len = output_lens.max().item()
950
+
951
+ # Truncate to max_output_len
952
+ input_ids = input_ids[:, :max_output_len]
953
+
954
+ # Replace tokens after each sequence's stop token with pad_token_id
955
+ for b in range(batch_size):
956
+ if has_stop[b]:
957
+ stop_pos = original_input_length + first_stop_idx[b].item() + 1
958
+ if stop_pos < input_ids.shape[1]:
959
+ input_ids[b, stop_pos:] = pad_token_id
960
+
961
+ return input_ids
962
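A hedged end-to-end usage sketch for `generate`. The checkpoint path is a placeholder, and loading through `AutoModelForCausalLM` assumes the repository's `auto_map` points this architecture at that auto class; `trust_remote_code=True` is required because the model code ships with the checkpoint. The keyword arguments mirror the signature above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "<repo-or-local-path>"  # placeholder, not an actual repository id
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto"
)

input_ids = tokenizer("Write a haiku about the sea.", return_tensors="pt").input_ids.to(model.device)
output_ids = model.generate(
    input_ids,
    max_new_tokens=128,
    block_size=32,          # decoding block length
    small_block_size=8,     # sub-block refined per inner step
    temperature=0,          # greedy unmasking
    use_block_cache=True,   # reuse per-block KV entries between refinement steps
)
print(tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True))
```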
+
963
+ def sample_with_top_p(self, logits, top_p=0.95, temperature=1.0, mask_token_id=128012, pad_token_id=128004, stop_token_id=None):
964
+ # Mask out special tokens to prevent them from being generated during unmasking
965
+ # mask_token and pad_token should never be generated as actual output
966
+ # stop_token_id can be optionally masked during intra-block unmasking to prevent premature stopping
967
+ logits = logits.clone()
968
+ logits[..., mask_token_id] = -float('inf')
969
+ logits[..., pad_token_id] = -float('inf')
970
+ if stop_token_id is not None:
971
+ logits[..., stop_token_id] = -float('inf')
972
+
973
+ # Calculate probabilities
974
+ if temperature > 0:
975
+ scaled_logits = logits / temperature
976
+ else:
977
+ p_1t = torch.softmax(logits, dim=-1)
978
+ x_1 = p_1t.argmax(dim=-1)
979
+ return x_1, p_1t
980
+
981
+ probs = F.softmax(scaled_logits, dim=-1)
982
+
983
+ sorted_probs, sorted_indices = torch.sort(probs, descending=True)
984
+ cumulative_probs = torch.cumsum(sorted_probs, dim=-1)
985
+
986
+ sorted_indices_to_remove = cumulative_probs > top_p
987
+ sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
988
+ sorted_indices_to_remove[..., 0] = 0
989
+
990
+ indices_to_remove = torch.zeros_like(probs, dtype=torch.bool).scatter_(
991
+ dim=-1, index=sorted_indices, src=sorted_indices_to_remove
992
+ )
993
+
994
+ probs[indices_to_remove] = 0
995
+
996
+ # Renormalize so that the probabilities of remaining tokens sum to 1
997
+ # Clamp the sum to a small positive value to prevent division by zero
998
+ probs_sum = torch.sum(probs, dim=-1, keepdim=True)
999
+ probs_sum = torch.clamp(probs_sum, min=1e-10)
1000
+ normalized_probs = probs / probs_sum
1001
+
1002
+ p_1t = normalized_probs
1003
+ # Handle arbitrary batch sizes: reshape to [batch * seq, vocab], sample, reshape back
1004
+ batch_size, seq_len, vocab_size = p_1t.shape
1005
+ p_1t_flat = p_1t.view(-1, vocab_size)
1006
+ x_1 = torch.multinomial(p_1t_flat, num_samples=1).view(batch_size, seq_len)
1007
+
1008
+ return x_1, p_1t
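For reference, the nucleus (top-p) filtering logic above on a toy distribution, standalone (not calling the method), mirroring the sort/cumsum/shift/scatter steps:

```python
import torch

probs = torch.tensor([[0.50, 0.25, 0.15, 0.07, 0.03]])
top_p = 0.8

sorted_probs, sorted_idx = torch.sort(probs, descending=True)
cum = torch.cumsum(sorted_probs, dim=-1)            # [0.50, 0.75, 0.90, 0.97, 1.00]
remove = cum > top_p
remove[..., 1:] = remove[..., :-1].clone()          # keep the first token crossing top_p
remove[..., 0] = False
to_remove = torch.zeros_like(probs, dtype=torch.bool).scatter_(-1, sorted_idx, remove)

filtered = probs.masked_fill(to_remove, 0.0)
filtered = filtered / filtered.sum(dim=-1, keepdim=True)
print(filtered)  # the 0.50, 0.25 and 0.15 tokens survive, renormalized
```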