Text Generation
Transformers
Safetensors
English
bolmo
custom_code
benjamin committed · verified · Commit 8e42dcc · 1 Parent(s): b8af341

Upload BolmoForCausalLM

README.md ADDED
@@ -0,0 +1,199 @@
1
+ ---
2
+ library_name: transformers
3
+ tags: []
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+ This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
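Until this section is filled in, a minimal loading sketch could look as follows. The repository id `benjamin/bolmo` is a placeholder (substitute the actual id), and `trust_remote_code=True` is required because the checkpoint wires its custom `configuration_bolmo.py`/`modeling_bolmo.py` classes through `auto_map` in `config.json`:

```python
# Minimal loading sketch -- the repository id below is a placeholder, not confirmed by this commit.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

repo_id = "benjamin/bolmo"  # hypothetical id; replace with the real repository

config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,
    torch_dtype=torch.float32,  # matches "dtype": "float32" in config.json
)
print(type(model).__name__)  # BolmoForCausalLM
```

Note that this is a byte-level model (`vocab_size` is 520, with the byte tokenizer settings embedded under `tokenizer_config` in `config.json`), so inputs are expected as byte-level ids produced by the repository's own tokenization code rather than by a standard subword tokenizer.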
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,74 @@
1
+ {
2
+ "add_expanded_embeddings": true,
3
+ "architectures": [
4
+ "BolmoForCausalLM"
5
+ ],
6
+ "attention_bias": false,
7
+ "attention_dropout": 0.0,
8
+ "auto_map": {
9
+ "AutoConfig": "configuration_bolmo.BolmoConfig",
10
+ "AutoModelForCausalLM": "modeling_bolmo.BolmoForCausalLM"
11
+ },
12
+ "bos_token_id": 1,
13
+ "boundary_predictor_lookahead": 1,
14
+ "boundary_threshold": "sample:0",
15
+ "dtype": "float32",
16
+ "eos_token_id": 1,
17
+ "hidden_act": "silu",
18
+ "hidden_size": 2048,
19
+ "initializer_range": 0.02,
20
+ "intermediate_size": 8192,
21
+ "layer_types": [
22
+ "full_attention",
23
+ "full_attention",
24
+ "full_attention",
25
+ "full_attention",
26
+ "full_attention",
27
+ "full_attention",
28
+ "full_attention",
29
+ "full_attention",
30
+ "full_attention",
31
+ "full_attention",
32
+ "full_attention",
33
+ "full_attention",
34
+ "full_attention",
35
+ "full_attention",
36
+ "full_attention",
37
+ "full_attention"
38
+ ],
39
+ "local_intermediate_size": 2816,
40
+ "local_rms_norm_eps": 1e-05,
41
+ "max_position_embeddings": 65536,
42
+ "model_type": "bolmo",
43
+ "num_attention_heads": 16,
44
+ "num_hidden_layers": 16,
45
+ "num_key_value_heads": 16,
46
+ "num_local_decoder_layers": 4,
47
+ "num_local_encoder_layers": 1,
48
+ "num_local_heads": 16,
49
+ "pad_token_id": 0,
50
+ "rms_norm_eps": 1e-06,
51
+ "rope_scaling": null,
52
+ "rope_theta": 10000.0,
53
+ "sliding_window": 4096,
54
+ "subword_vocab_size": 100278,
55
+ "tie_word_embeddings": false,
56
+ "tokenizer_config": {
57
+ "bos_token_id": 1,
58
+ "bpe_token_end_id": 3,
59
+ "eos_token_id": 1,
60
+ "original_identifier": "allenai/dolma2-tokenizer",
61
+ "pad_token_id": 0,
62
+ "special_tokens": [
63
+ "<pad>",
64
+ "<bos>",
65
+ "<eos>",
66
+ "<bpe_token_end>"
67
+ ],
68
+ "special_tokens_first": true,
69
+ "vocab_size": 520
70
+ },
71
+ "transformers_version": "4.57.3",
72
+ "use_cache": true,
73
+ "vocab_size": 520
74
+ }
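A few structural facts follow directly from the values above. A small sketch that derives them, assuming the file is saved locally as `config.json`:

```python
# Derive basic shape information from config.json (values as shown in this commit).
import json

with open("config.json") as f:
    cfg = json.load(f)

head_dim = cfg["hidden_size"] // cfg["num_attention_heads"]  # 2048 // 16 = 128
print("head_dim:", head_dim)
print("global layers:", cfg["num_hidden_layers"], set(cfg["layer_types"]))  # 16 layers, all "full_attention"
print("local encoder/decoder layers:", cfg["num_local_encoder_layers"], cfg["num_local_decoder_layers"])  # 1 and 4
print("byte vocab vs. subword vocab:", cfg["vocab_size"], cfg["subword_vocab_size"])  # 520 vs. 100278
```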
configuration_bolmo.py ADDED
@@ -0,0 +1,235 @@
1
+ from dataclasses import asdict
2
+ from typing import Any
3
+
4
+ from transformers.configuration_utils import PretrainedConfig, layer_type_validation
5
+ from transformers.modeling_rope_utils import rope_config_validation
6
+ from olmo_core.nn.blt.hf.tokenization_bolmo import ByteTokenizerConfig
7
+
8
+ class BolmoConfig(PretrainedConfig):
9
+ r"""
10
+ This is the configuration class to store the configuration of a [`BolmoForCausalLM`]. It is used to instantiate a Bolmo
11
+ model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
12
+ defaults will yield a similar configuration to that of the [allenai/OLMo-3-0725-1B](https://huggingface.co/allenai/OLMo-3-0725-1B).
13
+
14
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
15
+ documentation from [`PretrainedConfig`] for more information.
16
+
17
+
18
+ Args:
19
+ vocab_size (`int`, *optional*, defaults to 520):
20
+ Vocabulary size of the Bolmo model. Defines the number of different tokens that can be represented by the
21
+ `input_ids` passed when calling [`BolmoForCausalLM`]
22
+ hidden_size (`int`, *optional*, defaults to 4096):
23
+ Dimension of the hidden representations.
24
+ intermediate_size (`int`, *optional*, defaults to 11008):
25
+ Dimension of the MLP representations.
26
+ num_hidden_layers (`int`, *optional*, defaults to 32):
27
+ Number of hidden layers in the Transformer decoder.
28
+ num_attention_heads (`int`, *optional*, defaults to 32):
29
+ Number of attention heads for each attention layer in the Transformer decoder.
30
+ num_key_value_heads (`int`, *optional*):
31
+ This is the number of key_value heads that should be used to implement Grouped Query Attention. If
32
+ `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
33
+ `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
34
+ converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
35
+ by meanpooling all the original heads within that group. For more details, check out [this
36
+ paper](https://huggingface.co/papers/2305.13245). If it is not specified, will default to
37
+ `num_attention_heads`.
38
+ hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
39
+ The non-linear activation function (function or string) in the decoder.
40
+ max_position_embeddings (`int`, *optional*, defaults to 2048):
41
+ The maximum sequence length that this model might ever be used with.
42
+ initializer_range (`float`, *optional*, defaults to 0.02):
43
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
44
+ use_cache (`bool`, *optional*, defaults to `True`):
45
+ Whether or not the model should return the last key/values attentions (not used by all models). Only
46
+ relevant if `config.is_decoder=True`.
47
+ pad_token_id (`int`, *optional*, defaults to 1):
48
+ Padding token id.
49
+ bos_token_id (`int`, *optional*):
50
+ Beginning of stream token id.
51
+ eos_token_id (`int`, *optional*, defaults to 50279):
52
+ End of stream token id.
53
+ tie_word_embeddings (`bool`, *optional*, defaults to `False`):
54
+ Whether to tie weight embeddings
55
+ rope_theta (`float`, *optional*, defaults to 10000.0):
56
+ The base period of the RoPE embeddings.
57
+ rope_scaling (`Dict`, *optional*):
58
+ Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply new rope type
59
+ and you expect the model to work on longer `max_position_embeddings`, we recommend you to update this value
60
+ accordingly.
61
+ Expected contents:
62
+ `rope_type` (`str`):
63
+ The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
64
+ 'llama3'], with 'default' being the original RoPE implementation.
65
+ `factor` (`float`, *optional*):
66
+ Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
67
+ most scaling types, a `factor` of x will enable the model to handle sequences of length x *
68
+ original maximum pre-trained length.
69
+ `original_max_position_embeddings` (`int`, *optional*):
70
+ Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
71
+ pretraining.
72
+ `attention_factor` (`float`, *optional*):
73
+ Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
74
+ computation. If unspecified, it defaults to value recommended by the implementation, using the
75
+ `factor` field to infer the suggested value.
76
+ `beta_fast` (`float`, *optional*):
77
+ Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
78
+ ramp function. If unspecified, it defaults to 32.
79
+ `beta_slow` (`float`, *optional*):
80
+ Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
81
+ ramp function. If unspecified, it defaults to 1.
82
+ `short_factor` (`list[float]`, *optional*):
83
+ Only used with 'longrope'. The scaling factor to be applied to short contexts (<
84
+ `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
85
+ size divided by the number of attention heads divided by 2
86
+ `long_factor` (`list[float]`, *optional*):
87
+ Only used with 'longrope'. The scaling factor to be applied to long contexts (>
88
+ `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
89
+ size divided by the number of attention heads divided by 2
90
+ `low_freq_factor` (`float`, *optional*):
91
+ Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
92
+ `high_freq_factor` (`float`, *optional*):
93
+ Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
94
+ attention_bias (`bool`, *optional*, defaults to `False`):
95
+ Whether to use a bias in the query, key, value and output projection layers during self-attention.
96
+ attention_dropout (`float`, *optional*, defaults to 0.0):
97
+ The dropout ratio for the attention probabilities.
98
+ rms_norm_eps (`float`, *optional*, defaults to 1e-05):
99
+ The epsilon used by the rms normalization layers.
100
+ sliding_window (`int`, *optional*, defaults to 4096):
101
+ Size of the sliding window for sliding window attention.
102
+ layer_types (`list`, *optional*):
103
+ Attention pattern for each layer. Defaults to sliding window attention
104
+ for 3 out of 4 layers, and full attention for every 4th layer.
105
+
106
+ ```python
107
+ >>> from olmo_core.nn.blt.hf.configuration_bolmo import BolmoConfig
108
+ >>> from olmo_core.nn.blt.hf.modeling_bolmo import BolmoForCausalLM
109
+ >>> # Initializing a Bolmo style configuration
110
+ >>> configuration = BolmoConfig()
111
+
112
+ >>> # Initializing a model from the Bolmo style configuration
113
+ >>> model = BolmoForCausalLM(configuration)
114
+
115
+ >>> # Accessing the model configuration
116
+ >>> configuration = model.config
117
+ ```
118
+ """
119
+
120
+ model_type = "bolmo"
121
+ keys_to_ignore_at_inference = ["past_key_values"]
122
+ base_model_tp_plan = {
123
+ "layers.*.self_attn.q_proj": "colwise_rep", # we need to replicate here due to the added norm on q and k
124
+ "layers.*.self_attn.k_proj": "colwise_rep", # we need to replicate here due to the added norm on q and k
125
+ "layers.*.self_attn.v_proj": "colwise_rep", # we need to replicate here due to the added norm on q and k
126
+ "layers.*.self_attn.o_proj": "rowwise_rep", # we need to replicate here due to the added norm on q and k
127
+ "layers.*.mlp.gate_proj": "colwise",
128
+ "layers.*.mlp.up_proj": "colwise",
129
+ "layers.*.mlp.down_proj": "rowwise",
130
+ }
131
+ base_model_pp_plan = {
132
+ "embed_tokens": (["input_ids"], ["inputs_embeds"]),
133
+ "layers": (["hidden_states", "attention_mask"], ["hidden_states"]),
134
+ "norm": (["hidden_states"], ["hidden_states"]),
135
+ }
136
+
137
+ def __init__(
138
+ self,
139
+ vocab_size=520,
140
+ hidden_size=4096,
141
+ intermediate_size=11008,
142
+ num_hidden_layers=32,
143
+ num_attention_heads=32,
144
+ num_key_value_heads=None,
145
+ hidden_act="silu",
146
+ max_position_embeddings=2048,
147
+ initializer_range=0.02,
148
+ use_cache=True,
149
+ pad_token_id=1,
150
+ bos_token_id=None,
151
+ eos_token_id=50279,
152
+ tie_word_embeddings=False,
153
+ rope_theta=10000.0,
154
+ rope_scaling=None,
155
+ attention_bias=False,
156
+ attention_dropout=0.0,
157
+ rms_norm_eps=1e-5,
158
+ sliding_window=4096,
159
+ layer_types=None,
160
+ # bolmo config
161
+ add_expanded_embeddings: bool = True,
162
+ boundary_predictor_lookahead: int = 1,
163
+ boundary_threshold: str = "sample:0",
164
+ num_local_encoder_layers: int = 1,
165
+ num_local_decoder_layers: int = 4,
166
+ num_local_heads: int = 16,
167
+ local_intermediate_size: int = 5504,
168
+ local_rms_norm_eps=1e-5,
169
+ subword_vocab_size: int = 100278, # dolma2_tokenizer subword vocab size
170
+ tokenizer_config: ByteTokenizerConfig | dict[str, Any] | None = None,
171
+ **kwargs,
172
+ ):
173
+ super().__init__(
174
+ pad_token_id=pad_token_id,
175
+ bos_token_id=bos_token_id,
176
+ eos_token_id=eos_token_id,
177
+ tie_word_embeddings=tie_word_embeddings,
178
+ **kwargs,
179
+ )
180
+ self.vocab_size = vocab_size
181
+ self.max_position_embeddings = max_position_embeddings
182
+ self.hidden_size = hidden_size
183
+ self.intermediate_size = intermediate_size
184
+ self.num_hidden_layers = num_hidden_layers
185
+ self.num_attention_heads = num_attention_heads
186
+
187
+ # for backward compatibility
188
+ if num_key_value_heads is None:
189
+ num_key_value_heads = num_attention_heads
190
+
191
+ self.num_key_value_heads = num_key_value_heads
192
+ self.hidden_act = hidden_act
193
+ self.initializer_range = initializer_range
194
+ self.use_cache = use_cache
195
+ self.rope_theta = rope_theta
196
+ self.rope_scaling = rope_scaling
197
+ self._rope_scaling_validation()
198
+ self.attention_bias = attention_bias
199
+ self.attention_dropout = attention_dropout
200
+
201
+ self.rms_norm_eps = rms_norm_eps
202
+
203
+ self.sliding_window = sliding_window
204
+ self.layer_types = layer_types
205
+ if self.layer_types is None:
206
+ self.layer_types = [
207
+ "sliding_attention" if (i + 1) % 4 != 0 else "full_attention" for i in range(self.num_hidden_layers)
208
+ ]
209
+ layer_type_validation(self.layer_types)
210
+
211
+ # bolmo configuration
212
+ self.add_expanded_embeddings = add_expanded_embeddings
213
+ self.boundary_predictor_lookahead = boundary_predictor_lookahead
214
+ self.boundary_threshold = boundary_threshold
215
+ self.num_local_encoder_layers = num_local_encoder_layers
216
+ self.num_local_decoder_layers = num_local_decoder_layers
217
+ self.num_local_heads = num_local_heads
218
+ self.local_intermediate_size = local_intermediate_size
219
+ self.local_rms_norm_eps = local_rms_norm_eps
220
+ self.subword_vocab_size = subword_vocab_size
221
+
222
+ if tokenizer_config is None:
223
+ self.tokenizer_config = asdict(ByteTokenizerConfig.bolmo())
224
+ elif isinstance(tokenizer_config, ByteTokenizerConfig):
225
+ self.tokenizer_config = asdict(tokenizer_config)
226
+ else:
227
+ self.tokenizer_config = tokenizer_config
228
+
229
+ def _rope_scaling_validation(self):
230
+ """
231
+ Validate the `rope_scaling` configuration.
232
+ """
233
+ rope_config_validation(self)
234
+
235
+ __all__ = ["BolmoConfig"]
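As a sketch, a `BolmoConfig` matching the checkpoint's `config.json` can be constructed directly. This assumes `olmo_core` is installed, since the module imports `ByteTokenizerConfig` from it; the `tokenizer_config` dict below is abbreviated relative to the full one in `config.json`:

```python
from olmo_core.nn.blt.hf.configuration_bolmo import BolmoConfig

config = BolmoConfig(
    vocab_size=520,
    hidden_size=2048,
    intermediate_size=8192,
    num_hidden_layers=16,
    num_attention_heads=16,
    num_key_value_heads=16,
    max_position_embeddings=65536,
    layer_types=["full_attention"] * 16,
    local_intermediate_size=2816,
    num_local_encoder_layers=1,
    num_local_decoder_layers=4,
    num_local_heads=16,
    subword_vocab_size=100278,
    # A plain dict is accepted for tokenizer_config; abbreviated here for illustration.
    tokenizer_config={"original_identifier": "allenai/dolma2-tokenizer", "vocab_size": 520},
)
print(config.model_type)  # "bolmo"
```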
generation_config.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "eos_token_id": 50279,
4
+ "pad_token_id": 1,
5
+ "transformers_version": "4.57.3"
6
+ }
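These values become the defaults that `generate()` uses when no overrides are passed. A small sketch for inspecting them from a local copy of the repository (per-call arguments such as `max_new_tokens` or `do_sample` override them):

```python
# Read the generation defaults shipped with the checkpoint (assumes the file is in the current directory).
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained(".")
print(gen_cfg.eos_token_id, gen_cfg.pad_token_id)  # 50279, 1
```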
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5facf4f9f174a3420bac74dfb6c0b626b0a0f7d61478bea2fd6c731f8bb0e34b
3
+ size 4998875928
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:307b7023df9cf08d41c5c02b599f805b2774e30b7a612cfacbf1a776dd3135be
3
+ size 876802264
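The two files above are git-lfs pointers: the actual weights are addressed by the SHA-256 `oid` and byte `size` listed. A sketch for verifying downloaded shards against these values (oids and sizes copied from this commit):

```python
# Verify downloaded shards against the git-lfs pointer metadata above.
import hashlib
import os

expected = {
    "model-00001-of-00002.safetensors": (
        "5facf4f9f174a3420bac74dfb6c0b626b0a0f7d61478bea2fd6c731f8bb0e34b", 4998875928),
    "model-00002-of-00002.safetensors": (
        "307b7023df9cf08d41c5c02b599f805b2774e30b7a612cfacbf1a776dd3135be", 876802264),
}

for name, (oid, size) in expected.items():
    assert os.path.getsize(name) == size, f"{name}: unexpected size"
    h = hashlib.sha256()
    with open(name, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    assert h.hexdigest() == oid, f"{name}: checksum mismatch"
    print(name, "OK")
```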
model.safetensors.index.json ADDED
@@ -0,0 +1,271 @@
1
+ {
2
+ "metadata": {
3
+ "total_parameters": 1468911776,
4
+ "total_size": 5875647104
5
+ },
6
+ "weight_map": {
7
+ "lm_head.weight": "model-00002-of-00002.safetensors",
8
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
9
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
10
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
11
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
12
+ "model.layers.0.post_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
13
+ "model.layers.0.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
14
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
15
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
16
+ "model.layers.0.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
17
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
18
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
19
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
20
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
21
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
22
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
23
+ "model.layers.1.post_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
24
+ "model.layers.1.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
25
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
26
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
27
+ "model.layers.1.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
28
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
29
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
30
+ "model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
31
+ "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
32
+ "model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
33
+ "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
34
+ "model.layers.10.post_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
35
+ "model.layers.10.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
36
+ "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
37
+ "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
38
+ "model.layers.10.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
39
+ "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
40
+ "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
41
+ "model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
42
+ "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
43
+ "model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
44
+ "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
45
+ "model.layers.11.post_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
46
+ "model.layers.11.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
47
+ "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
48
+ "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
49
+ "model.layers.11.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
50
+ "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
51
+ "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
52
+ "model.layers.12.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
53
+ "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
54
+ "model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
55
+ "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
56
+ "model.layers.12.post_feedforward_layernorm.weight": "model-00002-of-00002.safetensors",
57
+ "model.layers.12.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
58
+ "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
59
+ "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
60
+ "model.layers.12.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
61
+ "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
62
+ "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
63
+ "model.layers.13.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
64
+ "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
65
+ "model.layers.13.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
66
+ "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
67
+ "model.layers.13.post_feedforward_layernorm.weight": "model-00002-of-00002.safetensors",
68
+ "model.layers.13.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
69
+ "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
70
+ "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
71
+ "model.layers.13.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
72
+ "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
73
+ "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
74
+ "model.layers.14.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
75
+ "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
76
+ "model.layers.14.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
77
+ "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
78
+ "model.layers.14.post_feedforward_layernorm.weight": "model-00002-of-00002.safetensors",
79
+ "model.layers.14.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
80
+ "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
81
+ "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
82
+ "model.layers.14.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
83
+ "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
84
+ "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
85
+ "model.layers.15.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
86
+ "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
87
+ "model.layers.15.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
88
+ "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
89
+ "model.layers.15.post_feedforward_layernorm.weight": "model-00002-of-00002.safetensors",
90
+ "model.layers.15.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
91
+ "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
92
+ "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
93
+ "model.layers.15.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
94
+ "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
95
+ "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
96
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
97
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
98
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
99
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
100
+ "model.layers.2.post_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
101
+ "model.layers.2.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
102
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
103
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
104
+ "model.layers.2.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
105
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
106
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
107
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
108
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
109
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
110
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
111
+ "model.layers.3.post_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
112
+ "model.layers.3.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
113
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
114
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
115
+ "model.layers.3.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
116
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
117
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
118
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
119
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
120
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
121
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
122
+ "model.layers.4.post_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
123
+ "model.layers.4.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
124
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
125
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
126
+ "model.layers.4.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
127
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
128
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
129
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
130
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
131
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
132
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
133
+ "model.layers.5.post_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
134
+ "model.layers.5.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
135
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
136
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
137
+ "model.layers.5.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
138
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
139
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
140
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
141
+ "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
142
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
143
+ "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
144
+ "model.layers.6.post_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
145
+ "model.layers.6.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
146
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
147
+ "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
148
+ "model.layers.6.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
149
+ "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
150
+ "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
151
+ "model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
152
+ "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
153
+ "model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
154
+ "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
155
+ "model.layers.7.post_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
156
+ "model.layers.7.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
157
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
158
+ "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
159
+ "model.layers.7.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
160
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
161
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
162
+ "model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
163
+ "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
164
+ "model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
165
+ "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
166
+ "model.layers.8.post_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
167
+ "model.layers.8.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
168
+ "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
169
+ "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
170
+ "model.layers.8.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
171
+ "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
172
+ "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
173
+ "model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
174
+ "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
175
+ "model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
176
+ "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
177
+ "model.layers.9.post_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
178
+ "model.layers.9.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
179
+ "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
180
+ "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
181
+ "model.layers.9.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
182
+ "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
183
+ "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
184
+ "model.local_decoder.in_projection.bias": "model-00001-of-00002.safetensors",
185
+ "model.local_decoder.in_projection.weight": "model-00001-of-00002.safetensors",
186
+ "model.local_decoder.initial_norm.weight": "model-00001-of-00002.safetensors",
187
+ "model.local_decoder.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
188
+ "model.local_decoder.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
189
+ "model.local_decoder.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
190
+ "model.local_decoder.layers.0.pre_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
191
+ "model.local_decoder.layers.0.pre_xlstm_layernorm.weight": "model-00001-of-00002.safetensors",
192
+ "model.local_decoder.layers.0.xlstm.fgate_preact.bias": "model-00001-of-00002.safetensors",
193
+ "model.local_decoder.layers.0.xlstm.fgate_preact.weight": "model-00001-of-00002.safetensors",
194
+ "model.local_decoder.layers.0.xlstm.igate_preact.bias": "model-00001-of-00002.safetensors",
195
+ "model.local_decoder.layers.0.xlstm.igate_preact.weight": "model-00001-of-00002.safetensors",
196
+ "model.local_decoder.layers.0.xlstm.k.weight": "model-00001-of-00002.safetensors",
197
+ "model.local_decoder.layers.0.xlstm.multihead_norm.weight": "model-00001-of-00002.safetensors",
198
+ "model.local_decoder.layers.0.xlstm.ogate_preact.weight": "model-00001-of-00002.safetensors",
199
+ "model.local_decoder.layers.0.xlstm.out_proj.weight": "model-00001-of-00002.safetensors",
200
+ "model.local_decoder.layers.0.xlstm.q.weight": "model-00001-of-00002.safetensors",
201
+ "model.local_decoder.layers.0.xlstm.v.weight": "model-00001-of-00002.safetensors",
202
+ "model.local_decoder.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
203
+ "model.local_decoder.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
204
+ "model.local_decoder.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
205
+ "model.local_decoder.layers.1.pre_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
206
+ "model.local_decoder.layers.1.pre_xlstm_layernorm.weight": "model-00001-of-00002.safetensors",
207
+ "model.local_decoder.layers.1.xlstm.fgate_preact.bias": "model-00001-of-00002.safetensors",
208
+ "model.local_decoder.layers.1.xlstm.fgate_preact.weight": "model-00001-of-00002.safetensors",
209
+ "model.local_decoder.layers.1.xlstm.igate_preact.bias": "model-00001-of-00002.safetensors",
210
+ "model.local_decoder.layers.1.xlstm.igate_preact.weight": "model-00001-of-00002.safetensors",
211
+ "model.local_decoder.layers.1.xlstm.k.weight": "model-00001-of-00002.safetensors",
212
+ "model.local_decoder.layers.1.xlstm.multihead_norm.weight": "model-00001-of-00002.safetensors",
213
+ "model.local_decoder.layers.1.xlstm.ogate_preact.weight": "model-00001-of-00002.safetensors",
214
+ "model.local_decoder.layers.1.xlstm.out_proj.weight": "model-00001-of-00002.safetensors",
215
+ "model.local_decoder.layers.1.xlstm.q.weight": "model-00001-of-00002.safetensors",
216
+ "model.local_decoder.layers.1.xlstm.v.weight": "model-00001-of-00002.safetensors",
217
+ "model.local_decoder.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
218
+ "model.local_decoder.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
219
+ "model.local_decoder.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
220
+ "model.local_decoder.layers.2.pre_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
221
+ "model.local_decoder.layers.2.pre_xlstm_layernorm.weight": "model-00001-of-00002.safetensors",
222
+ "model.local_decoder.layers.2.xlstm.fgate_preact.bias": "model-00001-of-00002.safetensors",
223
+ "model.local_decoder.layers.2.xlstm.fgate_preact.weight": "model-00001-of-00002.safetensors",
224
+ "model.local_decoder.layers.2.xlstm.igate_preact.bias": "model-00001-of-00002.safetensors",
225
+ "model.local_decoder.layers.2.xlstm.igate_preact.weight": "model-00001-of-00002.safetensors",
226
+ "model.local_decoder.layers.2.xlstm.k.weight": "model-00001-of-00002.safetensors",
227
+ "model.local_decoder.layers.2.xlstm.multihead_norm.weight": "model-00001-of-00002.safetensors",
228
+ "model.local_decoder.layers.2.xlstm.ogate_preact.weight": "model-00001-of-00002.safetensors",
229
+ "model.local_decoder.layers.2.xlstm.out_proj.weight": "model-00001-of-00002.safetensors",
230
+ "model.local_decoder.layers.2.xlstm.q.weight": "model-00001-of-00002.safetensors",
231
+ "model.local_decoder.layers.2.xlstm.v.weight": "model-00001-of-00002.safetensors",
232
+ "model.local_decoder.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
233
+ "model.local_decoder.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
234
+ "model.local_decoder.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
235
+ "model.local_decoder.layers.3.pre_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
236
+ "model.local_decoder.layers.3.pre_xlstm_layernorm.weight": "model-00001-of-00002.safetensors",
237
+ "model.local_decoder.layers.3.xlstm.fgate_preact.bias": "model-00001-of-00002.safetensors",
238
+ "model.local_decoder.layers.3.xlstm.fgate_preact.weight": "model-00001-of-00002.safetensors",
239
+ "model.local_decoder.layers.3.xlstm.igate_preact.bias": "model-00001-of-00002.safetensors",
240
+ "model.local_decoder.layers.3.xlstm.igate_preact.weight": "model-00001-of-00002.safetensors",
241
+ "model.local_decoder.layers.3.xlstm.k.weight": "model-00001-of-00002.safetensors",
242
+ "model.local_decoder.layers.3.xlstm.multihead_norm.weight": "model-00001-of-00002.safetensors",
243
+ "model.local_decoder.layers.3.xlstm.ogate_preact.weight": "model-00001-of-00002.safetensors",
244
+ "model.local_decoder.layers.3.xlstm.out_proj.weight": "model-00001-of-00002.safetensors",
245
+ "model.local_decoder.layers.3.xlstm.q.weight": "model-00001-of-00002.safetensors",
246
+ "model.local_decoder.layers.3.xlstm.v.weight": "model-00001-of-00002.safetensors",
247
+ "model.local_encoder.boundary_predictor_module.k_proj_layer.weight": "model-00001-of-00002.safetensors",
248
+ "model.local_encoder.boundary_predictor_module.q_proj_layer.weight": "model-00001-of-00002.safetensors",
249
+ "model.local_encoder.byte_embedding.weight": "model-00001-of-00002.safetensors",
250
+ "model.local_encoder.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
251
+ "model.local_encoder.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
252
+ "model.local_encoder.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
253
+ "model.local_encoder.layers.0.pre_feedforward_layernorm.weight": "model-00001-of-00002.safetensors",
254
+ "model.local_encoder.layers.0.pre_xlstm_layernorm.weight": "model-00001-of-00002.safetensors",
255
+ "model.local_encoder.layers.0.xlstm.fgate_preact.bias": "model-00001-of-00002.safetensors",
256
+ "model.local_encoder.layers.0.xlstm.fgate_preact.weight": "model-00001-of-00002.safetensors",
257
+ "model.local_encoder.layers.0.xlstm.igate_preact.bias": "model-00001-of-00002.safetensors",
258
+ "model.local_encoder.layers.0.xlstm.igate_preact.weight": "model-00001-of-00002.safetensors",
259
+ "model.local_encoder.layers.0.xlstm.k.weight": "model-00001-of-00002.safetensors",
260
+ "model.local_encoder.layers.0.xlstm.multihead_norm.weight": "model-00001-of-00002.safetensors",
261
+ "model.local_encoder.layers.0.xlstm.ogate_preact.weight": "model-00001-of-00002.safetensors",
262
+ "model.local_encoder.layers.0.xlstm.out_proj.weight": "model-00001-of-00002.safetensors",
263
+ "model.local_encoder.layers.0.xlstm.q.weight": "model-00001-of-00002.safetensors",
264
+ "model.local_encoder.layers.0.xlstm.v.weight": "model-00001-of-00002.safetensors",
265
+ "model.local_encoder.out_projection.bias": "model-00001-of-00002.safetensors",
266
+ "model.local_encoder.out_projection.weight": "model-00001-of-00002.safetensors",
267
+ "model.local_encoder.post_last_block_norm.weight": "model-00001-of-00002.safetensors",
268
+ "model.local_encoder.subword_embedding.weight": "model-00001-of-00002.safetensors",
269
+ "model.norm.weight": "model-00002-of-00002.safetensors"
270
+ }
271
+ }
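The `weight_map` ties each parameter name to the shard that stores it, so individual tensors can be read without instantiating the model. A sketch, assuming the index and both shards sit in the current directory and the `safetensors` package is installed:

```python
# Locate and load a single tensor via the weight_map above.
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.local_encoder.byte_embedding.weight"
shard = index["weight_map"][name]  # -> "model-00001-of-00002.safetensors"
with safe_open(shard, framework="pt") as f:
    tensor = f.get_tensor(name)
print(tensor.shape)
```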
modeling_bolmo.py ADDED
@@ -0,0 +1,1295 @@
1
+ import copy
2
+ from typing import Callable, Optional, Union, cast
3
+ import math
4
+
5
+ import torch
6
+ import torch.nn as nn
7
+ from torch.nn import functional as F
8
+
9
+ from transformers.utils.generic import TransformersKwargs
10
+
11
+ from transformers.activations import ACT2FN
12
+ from transformers.cache_utils import Cache, DynamicCache
13
+ from transformers.generation import GenerationMixin
14
+ from transformers.integrations import use_kernel_forward_from_hub
15
+ from transformers.masking_utils import create_causal_mask, create_sliding_window_causal_mask
16
+ from transformers.modeling_layers import GradientCheckpointingLayer
17
+ from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast
18
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS, dynamic_rope_update
19
+ from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel
20
+ from transformers.processing_utils import Unpack
21
+ from transformers.utils import auto_docstring, can_return_tuple
22
+ from transformers.utils.deprecation import deprecate_kwarg
23
+ from transformers.utils.generic import check_model_inputs
24
+
25
+ from olmo_core.nn.blt.hf.configuration_bolmo import BolmoConfig
26
+ from olmo_core.nn.blt.hf.tokenization_bolmo import ByteTokenizerConfig
27
+ from olmo_core.nn.blt.hf.utils_bolmo import compute_boundary_mask, pad_right, pad_left, MaskState
28
+
29
+ from xlstm.xlstm_large.model import mLSTMLayer, mLSTMLayerConfig, mLSTMLayerStateType, soft_cap, mLSTMBackendConfig
30
+
31
+
32
+ @use_kernel_forward_from_hub("RMSNorm")
33
+ class BolmoRMSNorm(nn.Module):
34
+ def __init__(self, hidden_size, eps=1e-6):
35
+ """
36
+ BolmoRMSNorm is equivalent to T5LayerNorm
37
+ """
38
+ super().__init__()
39
+ self.weight = nn.Parameter(torch.ones(hidden_size))
40
+ self.variance_epsilon = eps
41
+
42
+ def forward(self, hidden_states):
43
+ input_dtype = hidden_states.dtype
44
+ hidden_states = hidden_states.to(torch.float32)
45
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
46
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
47
+ return (self.weight * hidden_states).to(input_dtype)
48
+
49
+ def extra_repr(self):
50
+ return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"
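A quick numerical check of what `BolmoRMSNorm` computes: the input is scaled by the reciprocal root-mean-square of its last dimension (in float32) and multiplied by the learned weight, which is initialized to ones. A sketch, assuming `BolmoRMSNorm` from this file is importable:

```python
import torch

norm = BolmoRMSNorm(hidden_size=8, eps=1e-6)
x = torch.randn(2, 3, 8)
manual = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + 1e-6)  # weight starts as ones
print(torch.allclose(norm(x), manual, atol=1e-6))  # True
```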
51
+
52
+
53
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
54
+ """
55
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
56
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
57
+ """
58
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
59
+ if n_rep == 1:
60
+ return hidden_states
61
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
62
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
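As the docstring states, `repeat_kv` matches `torch.repeat_interleave` along the key/value head dimension; a quick check, assuming `repeat_kv` from this file:

```python
import torch

x = torch.randn(2, 4, 5, 16)  # (batch, num_kv_heads, seq_len, head_dim)
print(torch.equal(repeat_kv(x, 3), torch.repeat_interleave(x, repeats=3, dim=1)))  # True
```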
63
+
64
+
65
+ def eager_attention_forward(
66
+ module: nn.Module,
67
+ query: torch.Tensor,
68
+ key: torch.Tensor,
69
+ value: torch.Tensor,
70
+ attention_mask: Optional[torch.Tensor],
71
+ scaling: float,
72
+ dropout: float = 0.0,
73
+ **kwargs: Unpack[TransformersKwargs],
74
+ ):
75
+ key_states = repeat_kv(key, module.num_key_value_groups)
76
+ value_states = repeat_kv(value, module.num_key_value_groups)
77
+
78
+ attn_weights = torch.matmul(query, key_states.transpose(2, 3)) * scaling
79
+ if attention_mask is not None:
80
+ causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
81
+ attn_weights = attn_weights + causal_mask
82
+
83
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
84
+ attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
85
+ attn_output = torch.matmul(attn_weights, value_states)
86
+ attn_output = attn_output.transpose(1, 2).contiguous()
87
+
88
+ return attn_output, attn_weights
89
+
90
+
91
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
92
+ """Applies Rotary Position Embedding to the query and key tensors.
93
+
94
+ Args:
95
+ q (`torch.Tensor`): The query tensor.
96
+ k (`torch.Tensor`): The key tensor.
97
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
98
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
99
+ position_ids (`torch.Tensor`, *optional*):
100
+ Deprecated and unused.
101
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
102
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
103
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
104
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
105
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
106
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
107
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
108
+ Returns:
109
+ `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
110
+ """
111
+ q_type, k_type = q.dtype, k.dtype
112
+ cos = cos.unsqueeze(unsqueeze_dim)
113
+ sin = sin.unsqueeze(unsqueeze_dim)
114
+ q_embed = (q * cos) + (rotate_half(q) * sin)
115
+ k_embed = (k * cos) + (rotate_half(k) * sin)
116
+ return q_embed.to(q_type), k_embed.to(k_type)
117
+
118
+
119
+ def rotate_half(x):
120
+ """Rotates half the hidden dims of the input."""
121
+ x1 = x[..., : x.shape[-1] // 2]
122
+ x2 = x[..., x.shape[-1] // 2 :]
123
+ return torch.cat((-x2, x1), dim=-1)
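`apply_rotary_pos_emb` rotates each (first-half, second-half) channel pair of q and k by a position-dependent angle, so per-position vector norms are preserved. A small check, assuming `apply_rotary_pos_emb` and `rotate_half` from this file; the cos/sin construction below mirrors a standard RoPE cache with `rope_theta = 10000.0` as in the config:

```python
import torch

b, h, s, d = 1, 2, 4, 8
q, k = torch.randn(b, h, s, d), torch.randn(b, h, s, d)

inv_freq = 1.0 / (10000.0 ** (torch.arange(0, d, 2).float() / d))  # rope_theta = 10000.0
angles = torch.arange(s).float()[:, None] * inv_freq[None, :]      # (seq_len, d/2)
emb = torch.cat((angles, angles), dim=-1)                          # (seq_len, d)
cos, sin = emb.cos()[None], emb.sin()[None]                        # (1, seq_len, d); unsqueezed to heads inside

q_rot, k_rot = apply_rotary_pos_emb(q, k, cos, sin)
print(torch.allclose(q.norm(dim=-1), q_rot.norm(dim=-1), atol=1e-5))  # True
```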
124
+
125
+
126
+ class BolmoAttention(nn.Module):
127
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
128
+
129
+ def __init__(self, config: BolmoConfig, layer_idx: int):
130
+ super().__init__()
131
+ self.config = config
132
+ self.layer_idx = layer_idx
133
+ self.head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
134
+ self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
135
+ self.scaling = self.head_dim**-0.5
136
+ self.attention_dropout = config.attention_dropout
137
+ self.is_causal = True
138
+
139
+ self.q_proj = nn.Linear(
140
+ config.hidden_size, config.num_attention_heads * self.head_dim, bias=config.attention_bias
141
+ )
142
+ self.k_proj = nn.Linear(
143
+ config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.attention_bias
144
+ )
145
+ self.v_proj = nn.Linear(
146
+ config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.attention_bias
147
+ )
148
+ self.o_proj = nn.Linear(
149
+ config.num_attention_heads * self.head_dim, config.hidden_size, bias=config.attention_bias
150
+ )
151
+ self.q_norm = BolmoRMSNorm(config.num_attention_heads * self.head_dim, config.rms_norm_eps)
152
+ self.k_norm = BolmoRMSNorm(config.num_key_value_heads * self.head_dim, config.rms_norm_eps)
153
+ assert config.layer_types is not None
154
+ self.attention_type = config.layer_types[layer_idx]
155
+ self.sliding_window = config.sliding_window if self.attention_type == "sliding_attention" else None
156
+
157
+ @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58")
158
+ def forward(
159
+ self,
160
+ hidden_states: torch.Tensor,
161
+ position_embeddings: tuple[torch.Tensor, torch.Tensor],
162
+ attention_mask: Optional[torch.Tensor],
163
+ past_key_values: Optional[Cache] = None,
164
+ cache_position: Optional[torch.LongTensor] = None,
165
+ **kwargs: Unpack[TransformersKwargs],
166
+ ) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
167
+ input_shape = hidden_states.shape[:-1]
168
+ hidden_shape = (*input_shape, -1, self.head_dim)
169
+
170
+ query_states = self.q_norm(self.q_proj(hidden_states))
171
+ key_states = self.k_norm(self.k_proj(hidden_states))
172
+ value_states = self.v_proj(hidden_states)
173
+
174
+ query_states = query_states.view(hidden_shape).transpose(1, 2)
175
+ key_states = key_states.view(hidden_shape).transpose(1, 2)
176
+ value_states = value_states.view(hidden_shape).transpose(1, 2)
177
+
178
+ cos, sin = position_embeddings
179
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
180
+
181
+ if past_key_values is not None:
182
+ # sin and cos are specific to RoPE models; cache_position needed for the static cache
183
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
184
+ key_states, value_states = past_key_values.update(key_states, value_states, self.layer_idx, cache_kwargs)
185
+
186
+ attention_interface: Callable = eager_attention_forward
187
+ if self.config._attn_implementation != "eager":
188
+ attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
189
+
190
+ attn_output, attn_weights = attention_interface(
191
+ self,
192
+ query_states,
193
+ key_states,
194
+ value_states,
195
+ attention_mask,
196
+ dropout=0.0 if not self.training else self.attention_dropout,
197
+ scaling=self.scaling,
198
+ sliding_window=self.sliding_window,
199
+ **kwargs,
200
+ )
201
+
202
+ attn_output = attn_output.reshape(*input_shape, -1).contiguous()
203
+ attn_output = self.o_proj(attn_output)
204
+ return attn_output, attn_weights
205
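+ # The projections above implement grouped-query attention: num_attention_heads query heads
+ # share num_key_value_heads key/value heads (num_key_value_groups queries per KV head).
+ # q_norm/k_norm apply RMSNorm to the projected queries/keys before RoPE (QK-norm), and
+ # config.layer_types decides per layer whether it attends over the full context
+ # ("full_attention") or only a window of config.sliding_window positions ("sliding_attention").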
+
206
+
207
+ class BolmoMLP(nn.Module):
208
+ def __init__(self, config):
209
+ super().__init__()
210
+ self.config = config
211
+ self.hidden_size = config.hidden_size
212
+ self.intermediate_size = config.intermediate_size
213
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
214
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
215
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
216
+ self.act_fn = ACT2FN[config.hidden_act]
217
+
218
+ def forward(self, x):
219
+ down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
220
+ return down_proj
221
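+ # Gated (SwiGLU-style) feed-forward: the output is down_proj(act_fn(gate_proj(x)) * up_proj(x)),
+ # i.e. the activation of gate_proj multiplicatively gates up_proj before projecting back to
+ # hidden_size, using three linear layers instead of a plain MLP's two.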
+
222
+
223
+ class BolmoDecoderLayer(GradientCheckpointingLayer):
224
+ def __init__(self, config: BolmoConfig, layer_idx: int):
225
+ super().__init__()
226
+ self.hidden_size = config.hidden_size
227
+ self.self_attn = BolmoAttention(config=config, layer_idx=layer_idx)
228
+
229
+ self.mlp = BolmoMLP(config)
230
+ self.post_attention_layernorm = BolmoRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
231
+ self.post_feedforward_layernorm = BolmoRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
232
+
233
+ @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58")
234
+ def forward(
235
+ self,
236
+ hidden_states: torch.Tensor,
237
+ attention_mask: Optional[torch.Tensor] = None,
238
+ position_ids: Optional[torch.LongTensor] = None,
239
+ past_key_values: Optional[Cache] = None,
240
+ use_cache: Optional[bool] = False,
241
+ cache_position: Optional[torch.LongTensor] = None,
242
+ position_embeddings: Optional[tuple[torch.Tensor, torch.Tensor]] = None, # necessary, but kept here for BC
243
+ **kwargs: Unpack[TransformersKwargs],
244
+ ) -> torch.Tensor:
245
+ residual = hidden_states
246
+ attn_out, _ = self.self_attn(
247
+ hidden_states=hidden_states,
248
+ attention_mask=attention_mask,
249
+ position_ids=position_ids,
250
+ past_key_values=past_key_values,
251
+ use_cache=use_cache,
252
+ cache_position=cache_position,
253
+ position_embeddings=position_embeddings,
254
+ **kwargs,
255
+ )
256
+ hidden_states = self.post_attention_layernorm(attn_out)
257
+ hidden_states = residual + hidden_states
258
+
259
+ # Fully Connected
260
+ residual = hidden_states
261
+ mlp_out = self.mlp(hidden_states)
262
+ hidden_states = self.post_feedforward_layernorm(mlp_out)
263
+ hidden_states = residual + hidden_states
264
+
265
+ return hidden_states
266
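+ # Note the residual layout: the norms are applied to the sub-layer outputs
+ # (post_attention_layernorm / post_feedforward_layernorm) before they are added back to the
+ # residual stream, rather than to the sub-layer inputs as in a standard pre-norm block
+ # (similar in spirit to the reordered norms of OLMo-2).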
+
267
+
268
+ class BolmoBoundaryPredictor(nn.Module):
269
+ def __init__(self, config: BolmoConfig):
270
+ super().__init__()
271
+
272
+ self.d_model = config.hidden_size
273
+ self.boundary_threshold = config.boundary_threshold
274
+ self.boundary_predictor_lookahead = config.boundary_predictor_lookahead
275
+ self.q_proj_layer = nn.Linear(self.d_model, self.d_model, bias=False)
276
+ self.k_proj_layer = nn.Linear(self.d_model, self.d_model, bias=False)
277
+
278
+ def forward(
279
+ self,
280
+ hidden_states: torch.Tensor,
281
+ sequence_start_indices: Optional[torch.Tensor] = None,
282
+ epsilon: float = 1e-3,
283
+ ) -> tuple[torch.Tensor, torch.Tensor]:
284
+ if self.boundary_predictor_lookahead == 0:
285
+ # do not use the same rep for q and k: use the current position and the one before it, as in H-Net, and pad with a negative value on the left
286
+ cos_sim = torch.cat([
287
+ torch.ones((hidden_states.shape[0], 1), device=hidden_states.device, dtype=hidden_states.dtype) * -1,
288
+ torch.einsum(
289
+ "b l d, b l d -> b l",
290
+ F.normalize(self.q_proj_layer(hidden_states[:, :-1]), dim=-1),
291
+ F.normalize(self.k_proj_layer(hidden_states[:, 1:]), dim=-1),
292
+ )
293
+ ], dim=1)
294
+ else:
295
+ cos_sim = torch.einsum(
296
+ "b l d, b l d -> b l",
297
+ F.normalize(self.q_proj_layer(hidden_states[:, :-self.boundary_predictor_lookahead]), dim=-1),
298
+ F.normalize(self.k_proj_layer(hidden_states[:, self.boundary_predictor_lookahead:]), dim=-1),
299
+ )
300
+ boundary_logprobs = torch.log1p(-cos_sim.float().clip(max=1.0 - epsilon)) - math.log(2)
301
+ POSITIVE_LOGPROB = 0.0
302
+ NEGATIVE_LOGPROB = -100_000
303
+ if sequence_start_indices is None:
304
+ boundary_logprobs[:, 0] = POSITIVE_LOGPROB
305
+ else:
306
+ pad_mask = torch.arange(boundary_logprobs.shape[1], device=boundary_logprobs.device)[None, :] < sequence_start_indices[:, None]
307
+ boundary_logprobs = boundary_logprobs.masked_fill(pad_mask, NEGATIVE_LOGPROB)
308
+ boundary_logprobs[torch.arange(len(boundary_logprobs), device=boundary_logprobs.device), sequence_start_indices] = POSITIVE_LOGPROB
309
+
310
+ boundary_logprobs = F.pad(boundary_logprobs, (0, self.boundary_predictor_lookahead), "constant", NEGATIVE_LOGPROB)
311
+ boundary_mask = compute_boundary_mask(boundary_logprobs, self.boundary_threshold)
312
+
313
+ return boundary_logprobs, boundary_mask
314
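+ # The boundary score comes from the cosine similarity between projections of neighbouring
+ # hidden states: p(boundary) = (1 - cos_sim) / 2, hence log p = log1p(-cos_sim) - log(2) as
+ # computed above. Sanity check: cos_sim near 1 (very similar states) gives p near 0 (the clip
+ # by 1 - epsilon avoids log(0)), while cos_sim = -1 gives p = 1. compute_boundary_mask
+ # (defined earlier in this file) then thresholds these scores with config.boundary_threshold,
+ # and the lookahead padding keeps the returned tensors aligned with the input length.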
+
315
+
316
+ class BolmoXLSTMLayer(mLSTMLayer):
317
+ def __init__(self, config: BolmoConfig):
318
+ super().__init__(mLSTMLayerConfig(
319
+ embedding_dim=config.hidden_size,
320
+ num_heads=config.num_local_heads,
321
+ mlstm_backend=mLSTMBackendConfig(
322
+ chunkwise_kernel="chunkwise--triton_limit_chunk",
323
+ sequence_kernel="native_sequence__triton",
324
+ step_kernel="triton",
325
+ mode="train",
326
+ return_last_states=True,
327
+ autocast_kernel_dtype="float32",
328
+ )
329
+ ))
330
+
331
+ # original forward adapted to support sequence_start_indices
332
+ # i.e. set the forget gate to zero at the start of sequence
333
+ def _original_forward(
334
+ self, x: torch.Tensor,
335
+ state: mLSTMLayerStateType | None = None,
336
+ sequence_start_indices: Optional[torch.Tensor] = None,
337
+ ) -> tuple[torch.Tensor, mLSTMLayerStateType | None]:
338
+ assert x.ndim == 3, f"Input must have shape [B, S, D], got {x.shape}"
339
+ B, S, _ = x.shape
340
+ if self.config.weight_mode == "single":
341
+ q = self.q(x)
342
+ k = self.k(x)
343
+ v = self.v(x)
344
+ o_preact = self.ogate_preact(x)
345
+ i_preact = soft_cap(
346
+ self.igate_preact(x), cap_value=self.config.gate_soft_cap
347
+ )
348
+ f_preact = soft_cap(
349
+ self.fgate_preact(x), cap_value=self.config.gate_soft_cap
350
+ )
351
+ elif self.config.weight_mode == "fused":
352
+ qkv_opreact = self.qkv_opreact(x)
353
+ q, k, v, o_preact = torch.tensor_split(
354
+ qkv_opreact,
355
+ (
356
+ self.qk_dim,
357
+ 2 * self.qk_dim,
358
+ 2 * self.qk_dim + self.v_dim,
359
+ ),
360
+ dim=-1,
361
+ )
362
+
363
+ if_preact = soft_cap(
364
+ self.ifgate_preact(x), cap_value=self.config.gate_soft_cap
365
+ )
366
+ i_preact, f_preact = torch.tensor_split(
367
+ if_preact, (self.config.num_heads,), dim=-1
368
+ )
369
+ else:
370
+ raise ValueError(f"Unknown weight_mode: {self.config.weight_mode}")
371
+
372
+ q = q.reshape(B, S, self.config.num_heads, -1).transpose(1, 2)
373
+ k = k.reshape(B, S, self.config.num_heads, -1).transpose(1, 2)
374
+ v = v.reshape(B, S, self.config.num_heads, -1).transpose(1, 2)
375
+
376
+ if sequence_start_indices is not None:
377
+ f_preact[torch.arange(B, device=f_preact.device), sequence_start_indices] = -100_000
378
+
379
+ i_preact = i_preact.transpose(1, 2)
380
+ f_preact = f_preact.transpose(1, 2)
381
+ if state is None:
382
+ c_initial, n_initial, m_initial = None, None, None
383
+ else:
384
+ c_initial, n_initial, m_initial = state
385
+
386
+ h, state = self.mlstm_backend(
387
+ q=q,
388
+ k=k,
389
+ v=v,
390
+ i=i_preact,
391
+ f=f_preact,
392
+ c_initial=c_initial,
393
+ n_initial=n_initial,
394
+ m_initial=m_initial,
395
+ )
396
+ expected_h_shape = (
397
+ B,
398
+ self.config.num_heads,
399
+ S,
400
+ self.v_dim // self.config.num_heads,
401
+ )
402
+ assert (
403
+ h.shape == expected_h_shape
404
+ ), f"Got {h.shape}, expected {expected_h_shape}"
405
+
406
+ h = h.transpose(1, 2)
407
+ h_norm = self.multihead_norm(h)
408
+ h_norm = h_norm.reshape(B, S, -1)
409
+
410
+ h_out = self.ogate_act_fn(o_preact) * h_norm
411
+
412
+ y = self.out_proj(h_out)
413
+ return y, state
414
+
415
+ def forward( # type: ignore
416
+ self,
417
+ x: torch.Tensor,
418
+ past_key_values: Optional[dict] = None,
419
+ use_cache: bool = False,
420
+ sequence_start_indices: Optional[torch.Tensor] = None,
421
+ cache_mask: Optional[MaskState] = None
422
+ ):
423
+ if self.training:
424
+ self.mlstm_backend.config.mode = "train"
425
+ else:
426
+ self.mlstm_backend.config.mode = "inference"
427
+
428
+ if use_cache:
429
+ assert past_key_values is not None
430
+
431
+ prev_mode = self.mlstm_backend.config.mode
432
+ state = past_key_values.get("state", None)
433
+
434
+ if cache_mask is not None:
435
+ state_for_model = cast(mLSTMLayerStateType, tuple(cache_mask.selective_get(x, inv=True) for x in state) if state is not None else None)
436
+ else:
437
+ state_for_model = state
438
+
439
+ h, new_state = self._original_forward(
440
+ x,
441
+ state=state_for_model,
442
+ sequence_start_indices=sequence_start_indices
443
+ )
444
+ assert new_state is not None
445
+
446
+ if state is None or cache_mask is None:
447
+ state = new_state
448
+ else:
449
+ if cache_mask is not None:
450
+ for i in range(len(state)):
451
+ cache_mask.selective_put(new_state[i], state[i], inv=True)
452
+
453
+ past_key_values["state"] = state
454
+ self.mlstm_backend.config.mode = prev_mode
455
+
456
+ return h
457
+ else:
458
+ h, _ = super().forward(x)
459
+ return h
460
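+ # Cache-handling sketch: during decoding, cache_mask (a MaskState) selects only the batch rows
+ # that should advance this step; their recurrent (c, n, m) state is read with selective_get,
+ # stepped through the mLSTM backend, and written back with selective_put, while masked rows
+ # keep their previous state. sequence_start_indices pushes the forget-gate pre-activation to a
+ # large negative value at sequence starts, which effectively resets the recurrent state across
+ # packed-sequence boundaries.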
+
461
+ class BolmoLocalLayer(nn.Module):
462
+ def __init__(self, config: BolmoConfig):
463
+ super().__init__()
464
+ self.config = config
465
+ self.hidden_size = config.hidden_size
466
+
467
+ self.act_fn = ACT2FN[config.hidden_act]
468
+
469
+ self.xlstm = BolmoXLSTMLayer(config)
470
+
471
+ local_mlp_config = copy.deepcopy(config)
472
+ local_mlp_config.intermediate_size = config.local_intermediate_size
473
+ self.mlp = BolmoMLP(local_mlp_config)
474
+
475
+ self.pre_xlstm_layernorm = BolmoRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
476
+ self.pre_feedforward_layernorm = BolmoRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
477
+
478
+ def forward(
479
+ self,
480
+ hidden_states: torch.Tensor,
481
+ sequence_start_indices: Optional[torch.Tensor] = None,
482
+ past_key_values: Optional[dict] = None,
483
+ use_cache: Optional[bool] = False,
484
+ cache_mask: Optional[MaskState] = None,
485
+ ) -> torch.Tensor:
486
+ residual = hidden_states
487
+ xlstm_out = self.xlstm(self.pre_xlstm_layernorm(hidden_states), sequence_start_indices=sequence_start_indices, past_key_values=past_key_values["xlstm"] if past_key_values is not None else None, use_cache=use_cache, cache_mask=cache_mask)
488
+ hidden_states = residual + xlstm_out
489
+
490
+ # Fully Connected
491
+ residual = hidden_states
492
+ ffn_out = self.mlp(self.pre_feedforward_layernorm(hidden_states))
493
+ hidden_states = residual + ffn_out
494
+
495
+ return hidden_states
496
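+ # In contrast to BolmoDecoderLayer above, the local layers use a pre-norm residual layout:
+ # pre_xlstm_layernorm / pre_feedforward_layernorm normalise the inputs of the xLSTM and MLP
+ # sub-layers, and the un-normalised outputs are added back to the residual stream.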
+
497
+
498
+ class BolmoLocalEncoder(nn.Module):
499
+ def __init__(self, config: BolmoConfig):
500
+ super().__init__()
501
+ self.config = config
502
+ self.hidden_size = config.hidden_size
503
+ self.add_expanded_embeddings = config.add_expanded_embeddings
504
+
505
+ self.byte_embedding = nn.Embedding(
506
+ config.vocab_size,
507
+ self.hidden_size,
508
+ )
509
+ if self.add_expanded_embeddings:
510
+ self.subword_embedding = nn.Embedding(
511
+ config.subword_vocab_size,
512
+ self.hidden_size,
513
+ )
514
+ else:
515
+ self.subword_embedding = None
516
+
517
+ self.layers = nn.ModuleList(
518
+ [BolmoLocalLayer(config) for _ in range(config.num_local_encoder_layers)]
519
+ )
520
+
521
+ self.post_last_block_norm = BolmoRMSNorm(
522
+ self.hidden_size,
523
+ config.local_rms_norm_eps,
524
+ )
525
+ self.out_projection = nn.Linear(
526
+ self.hidden_size,
527
+ self.hidden_size,
528
+ bias=True,
529
+ )
530
+
531
+ self.boundary_predictor_module = BolmoBoundaryPredictor(config)
532
+
533
+ self.has_cache = False
534
+
535
+ def prepare_inference_cache(self, batch_size: int):
536
+ device = next(self.parameters()).device
537
+ self.has_cache = True
538
+
539
+ self.cache_seqlens = 0
540
+ self.last_h = torch.zeros((batch_size, self.hidden_size), dtype=self.out_projection.weight.dtype, device=device)
541
+ self.layer_states = [{"xlstm": {}} for _ in range(len(self.layers))]
542
+
543
+ def free_inference_cache(self):
544
+ self.has_cache = False
545
+ if hasattr(self, "cache_seqlens"):
546
+ del self.cache_seqlens
547
+ if hasattr(self, "last_h"):
548
+ del self.last_h
549
+ if hasattr(self, "layer_states"):
550
+ del self.layer_states
551
+
552
+ def _embed(self, tokens, expanded_input_ids: Optional[torch.Tensor] = None):
553
+ embeddings = self.byte_embedding(tokens)
554
+ if self.add_expanded_embeddings:
555
+ assert expanded_input_ids is not None and self.subword_embedding is not None
556
+ embeddings = embeddings + self.subword_embedding(expanded_input_ids)
557
+
558
+ return embeddings
559
+
560
+ def _pool(
561
+ self,
562
+ h: torch.Tensor,
563
+ boundary_mask: torch.Tensor | None,
564
+ n_patches: int,
565
+ boundary_state: Optional[MaskState] = None,
566
+ ):
567
+ if self.has_cache and self.cache_seqlens > 0:
568
+ assert boundary_state is not None
569
+ if boundary_state.all():
570
+ assert h.shape[1] == 1
571
+ reduced_h = h
572
+ else:
573
+ reduced_h = h[[], :, :]
574
+ else:
575
+ assert boundary_mask is not None
576
+
577
+ L = h.shape[1]
578
+ token_idx = (
579
+ torch.arange(L, device=h.device)[None, :] + (~boundary_mask).long() * L # type: ignore
580
+ )
581
+ seq_sorted_indices = torch.argsort(token_idx, dim=1)
582
+ index = seq_sorted_indices[:, :n_patches, None].expand(
583
+ -1, -1, h.shape[-1]
584
+ )
585
+
586
+ reduced_h = torch.gather(
587
+ h,
588
+ dim=1,
589
+ index=index,
590
+ )
591
+
592
+ return reduced_h
593
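+ # Pooling sketch: non-boundary positions have their index shifted by +L, so a stable argsort
+ # moves the boundary positions (in order) to the front, and gathering the first n_patches rows
+ # selects exactly the hidden states at boundary bytes. For example, with L = 5 and
+ # boundary_mask = [1, 0, 0, 1, 0], token_idx = [0, 6, 7, 3, 9], and the sorted indices start
+ # with [0, 3], i.e. the two patch representations.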
+
594
+ def forward(
595
+ self,
596
+ input_ids,
597
+ true_boundary_mask: Optional[torch.Tensor] = None,
598
+ boundary_state: Optional[MaskState] = None,
599
+ pad_state: Optional[MaskState] = None,
600
+ expanded_input_ids: Optional[torch.Tensor] = None,
601
+ sequence_start_indices: Optional[torch.Tensor] = None,
602
+ ):
603
+ embeddings = self._embed(input_ids, expanded_input_ids)
604
+
605
+ # pass through encoder layers
606
+ if self.has_cache and self.cache_seqlens > 0:
607
+ assert pad_state is not None
608
+
609
+ # step only those batch positions which are not currently idle (idle = sitting at a boundary position, waiting for the global model)
610
+ # if all batch positions are idle, skip the step entirely
611
+ # all positions being idle only happens if fuse_boundaries=False. In this case, the step where we
612
+ # obtain a new representation from the global model will have all local-encoder positions idle.
613
+ if not pad_state.all():
614
+ h = pad_state.selective_get(embeddings, inv=True)
615
+
616
+ for i, block in enumerate(self.layers):
617
+ h = block(h, past_key_values=self.layer_states[i], use_cache=True, cache_mask=pad_state)
618
+
619
+ if self.post_last_block_norm is not None:
620
+ h = self.post_last_block_norm(h)
621
+
622
+ pad_state.selective_put(h[:, -1, :], self.last_h, inv=True)
623
+
624
+ h = self.last_h.unsqueeze(1)
625
+ else:
626
+ h = embeddings
627
+ for i, block in enumerate(self.layers):
628
+ if self.has_cache:
629
+ use_cache = True
630
+ past_key_values = self.layer_states[i]
631
+ else:
632
+ use_cache = False
633
+ past_key_values = None
634
+
635
+ h = block(h, past_key_values=past_key_values, use_cache=use_cache, sequence_start_indices=sequence_start_indices)
636
+
637
+ if self.post_last_block_norm is not None:
638
+ h = self.post_last_block_norm(h)
639
+
640
+ if self.has_cache:
641
+ self.last_h.copy_(h[:, -1, :])
642
+
643
+ if not self.has_cache or self.cache_seqlens == 0: # only used for prefill
644
+ boundary_logprobs, boundary_mask = self.boundary_predictor_module(
645
+ h,
646
+ sequence_start_indices=sequence_start_indices,
647
+ )
648
+ if boundary_state is not None:
649
+ # can't predict through encoder - must be through prev local decoder step
650
+ boundary_mask[:, -1] = boundary_state.mask
651
+ else:
652
+ boundary_logprobs = boundary_mask = None
653
+
654
+ # overwrite with true boundaries
655
+ if true_boundary_mask is not None:
656
+ boundary_mask = true_boundary_mask
657
+
658
+ patch_embeddings = self._pool(
659
+ h=h,
660
+ boundary_mask=boundary_mask,
661
+ n_patches=int(cast(torch.Tensor, boundary_mask).sum(-1).max().item()) if boundary_mask is not None else 1,
662
+ boundary_state=boundary_state,
663
+ )
664
+ patch_embeddings = self.out_projection(patch_embeddings)
665
+
666
+ if self.has_cache:
667
+ self.cache_seqlens += input_ids.shape[1]
668
+
669
+ return h, patch_embeddings, boundary_logprobs, boundary_mask
670
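+ # The encoder returns the per-byte hidden states h, the pooled patch embeddings (one vector per
+ # predicted boundary, fed to the global transformer), and the boundary log-probabilities/mask
+ # used for pooling. During cached decoding (cache_seqlens > 0) the boundary predictor is
+ # skipped and only non-idle rows are stepped, with the cached last hidden state reused for the
+ # rest.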
+
671
+
672
+ class BolmoLocalDecoder(nn.Module):
673
+ def __init__(self, config: BolmoConfig):
674
+ super().__init__()
675
+ self.config = config
676
+ self.hidden_size = config.hidden_size
677
+
678
+ self.initial_norm = BolmoRMSNorm(
679
+ self.hidden_size,
680
+ eps=config.local_rms_norm_eps,
681
+ )
682
+
683
+ self.in_projection = nn.Linear(
684
+ self.hidden_size,
685
+ self.hidden_size,
686
+ bias=True,
687
+ )
688
+
689
+ self.layers = nn.ModuleList(
690
+ [BolmoLocalLayer(config) for _ in range(config.num_local_decoder_layers)]
691
+ )
692
+
693
+ self.has_cache = False
694
+
695
+ def prepare_inference_cache(self, batch_size: int):
696
+ device = next(self.parameters()).device
697
+ self.has_cache = True
698
+
699
+ self.cache_seqlens = 0
700
+ self.last_value = torch.zeros((batch_size, self.hidden_size), dtype=self.in_projection.weight.dtype, device=device)
701
+ self.layer_states = [{"xlstm": {}} for _ in range(len(self.layers))]
702
+
703
+ def free_inference_cache(self):
704
+ self.has_cache = False
705
+ if hasattr(self, "cache_seqlens"):
706
+ del self.cache_seqlens
707
+ if hasattr(self, "last_value"):
708
+ del self.last_value
709
+ if hasattr(self, "layer_states"):
710
+ del self.layer_states
711
+
712
+ def _depool(
713
+ self,
714
+ embeds: torch.Tensor,
715
+ patch_embeds: torch.Tensor,
716
+ boundary_mask: Optional[torch.Tensor],
717
+ boundary_state: Optional[MaskState] = None,
718
+ sequence_start_indices: Optional[torch.Tensor] = None,
719
+ ) -> torch.Tensor:
720
+ if self.has_cache and self.cache_seqlens > 0:
721
+ assert boundary_state is not None
722
+
723
+ if patch_embeds.numel() > 0:
724
+ # we got a new value from the global model, so must be at boundary position
725
+ h_patch = patch_embeds[:, -1:, :]
726
+ h = embeds + h_patch
727
+
728
+ self.last_value.copy_(h_patch[:, -1])
729
+ else:
730
+ h = embeds + self.last_value.unsqueeze(1)
731
+
732
+ # skip pad positions until we get a new value from the global model
733
+ if patch_embeds.numel() == 0:
734
+ h = boundary_state.selective_get(h, inv=True)
735
+ else:
736
+ boundary_state = None
737
+
738
+ if h.shape[0] > 0:
739
+ for i, layer in enumerate(self.layers):
740
+ h = layer(h, past_key_values=self.layer_states[i], use_cache=True, cache_mask=boundary_state)
741
+
742
+ self.cache_seqlens += h.shape[1]
743
+
744
+ return h
745
+ else:
746
+ assert boundary_mask is not None
747
+
748
+ h_patch = patch_embeds
749
+ prepool_out = h_patch
750
+
751
+ # TODO(benjaminm): clipping is problematic if it happens too much; track clip %.
752
+ plug_back_idx = (torch.cumsum(boundary_mask, dim=1) - 1).clip(min=0, max=prepool_out.shape[1] - 1)
753
+ depool_out = torch.gather(
754
+ prepool_out,
755
+ dim=1,
756
+ index=plug_back_idx.unsqueeze(-1).expand(-1, -1, self.hidden_size),
757
+ )
758
+
759
+ depool_out_modulated = depool_out
760
+ h = depool_out_modulated + embeds
761
+
762
+ for i, layer in enumerate(self.layers):
763
+ if self.has_cache:
764
+ use_cache = True
765
+ past_key_values = self.layer_states[i]
766
+ else:
767
+ use_cache = False
768
+ past_key_values = None
769
+
770
+ h = layer(h, past_key_values=past_key_values, use_cache=use_cache, sequence_start_indices=sequence_start_indices)
771
+
772
+ if self.has_cache:
773
+ self.last_value.copy_(prepool_out[:, -1])
774
+ self.cache_seqlens += h.shape[1]
775
+
776
+ return h
777
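+ # Depooling sketch: cumsum(boundary_mask) - 1 gives, for each byte position, the index of the
+ # most recent patch boundary at or before it, so the gather broadcasts each patch embedding
+ # from the global model over all bytes of that patch. For example, boundary_mask = [1, 0, 0, 1, 0]
+ # yields plug_back_idx = [0, 0, 0, 1, 1]: bytes 0-2 receive patch 0, bytes 3-4 receive patch 1.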
+
778
+ def forward(
779
+ self,
780
+ embeds: torch.Tensor,
781
+ patch_embeds: torch.Tensor,
782
+ boundary_state: Optional[MaskState],
783
+ boundary_mask: torch.Tensor | None,
784
+ sequence_start_indices: Optional[torch.Tensor] = None,
785
+ ) -> torch.Tensor:
786
+ h = self.in_projection(embeds)
787
+ h_patch = self.initial_norm(patch_embeds)
788
+
789
+ return self._depool(
790
+ embeds=h,
791
+ patch_embeds=h_patch,
792
+ boundary_mask=boundary_mask,
793
+ boundary_state=boundary_state,
794
+ sequence_start_indices=sequence_start_indices,
795
+ )
796
+
797
+
798
+ class BolmoRotaryEmbedding(nn.Module):
799
+ inv_freq: torch.Tensor # fix linting for `register_buffer`
800
+
801
+ def __init__(self, config: BolmoConfig, device=None, rope_type: Optional[str] = None):
802
+ super().__init__()
803
+ if rope_type is not None:
804
+ self.rope_type = rope_type
805
+ elif hasattr(config, "rope_scaling") and isinstance(config.rope_scaling, dict):
806
+ # BC: "rope_type" was originally "type"
807
+ self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
808
+ else:
809
+ self.rope_type = "default"
810
+ assert self.rope_type is not None
811
+
812
+ self.max_seq_len_cached = config.max_position_embeddings
813
+ self.original_max_seq_len = config.max_position_embeddings
814
+
815
+ self.config = config
816
+ self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
817
+
818
+ inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device)
819
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
820
+ self.original_inv_freq = self.inv_freq
821
+
822
+ @torch.no_grad()
823
+ @dynamic_rope_update # power user: used with advanced RoPE types (e.g. dynamic rope)
824
+ def forward(self, x, position_ids):
825
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
826
+ position_ids_expanded = position_ids[:, None, :].float()
827
+
828
+ device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != "mps" else "cpu"
829
+ with torch.autocast(device_type=device_type, enabled=False): # Force float32
830
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
831
+ emb = torch.cat((freqs, freqs), dim=-1)
832
+ cos = emb.cos() * self.attention_scaling
833
+ sin = emb.sin() * self.attention_scaling
834
+ return cos, sin
835
+
836
+
837
+ @auto_docstring
838
+ class BolmoPreTrainedModel(PreTrainedModel):
839
+ config: BolmoConfig
840
+ base_model_prefix = "model"
841
+ supports_gradient_checkpointing = True
842
+ _no_split_modules = ["BolmoDecoderLayer"]
843
+ _skip_keys_device_placement = ["past_key_values"]
844
+ _supports_flash_attn = True
845
+ _supports_sdpa = True
846
+ _supports_flex_attn = True
847
+
848
+ _can_compile_fullgraph = True
849
+ _supports_attention_backend = True
850
+ _can_record_outputs = {
851
+ "hidden_states": BolmoDecoderLayer,
852
+ "attentions": BolmoAttention,
853
+ }
854
+
855
+
856
+ @auto_docstring
857
+ class BolmoModel(BolmoPreTrainedModel):
858
+ def __init__(self, config: BolmoConfig):
859
+ super().__init__(config)
860
+ self.padding_idx = config.pad_token_id
861
+ self.vocab_size = config.vocab_size
862
+
863
+ self.local_encoder = BolmoLocalEncoder(config)
864
+ self.local_decoder = BolmoLocalDecoder(config)
865
+
866
+ self.layers = nn.ModuleList(
867
+ [BolmoDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
868
+ )
869
+ self.norm = BolmoRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
870
+ self.gradient_checkpointing = False
871
+ self.rotary_embs = nn.ModuleDict(
872
+ {
873
+ "sliding_attention": BolmoRotaryEmbedding(config=config, rope_type="default"),
874
+ "full_attention": BolmoRotaryEmbedding(config=config),
875
+ }
876
+ )
877
+
878
+ self.tokenizer_config = ByteTokenizerConfig(**config.tokenizer_config)
879
+ self._tokenizer = None
880
+
881
+ # Initialize weights and apply final processing
882
+ self.post_init()
883
+
884
+ def get_input_embeddings(self):
885
+ return self.local_encoder.byte_embedding
886
+
887
+ def set_input_embeddings(self, value: nn.Embedding): # type: ignore
888
+ self.local_encoder.byte_embedding = value
889
+
890
+ @property
891
+ def tokenizer(self):
892
+ if self._tokenizer is None:
893
+ self._tokenizer = self.tokenizer_config.build()
894
+
895
+ return self._tokenizer
896
+
897
+ def prefill_boundary_prediction_forward(
898
+ self,
899
+ input_ids: torch.Tensor,
900
+ expanded_input_ids: Optional[torch.LongTensor] = None,
901
+ sequence_start_indices: Optional[torch.Tensor] = None,
902
+ last_token_is_boundary: bool = False,
903
+ **kwargs,
904
+ ) -> torch.Tensor:
905
+ _, _, _, boundary_mask = self.local_encoder.forward( # type: ignore
906
+ input_ids,
907
+ expanded_input_ids=expanded_input_ids,
908
+ boundary_state=MaskState(torch.full((input_ids.shape[0],), fill_value=last_token_is_boundary, device=input_ids.device, dtype=torch.bool)),
909
+ pad_state=MaskState(torch.zeros((input_ids.shape[0],), device=input_ids.device, dtype=torch.bool)),
910
+ sequence_start_indices=sequence_start_indices,
911
+ )
912
+
913
+ return cast(torch.Tensor, boundary_mask)
914
+
915
+ @check_model_inputs()
916
+ @auto_docstring
917
+ def forward(
918
+ self,
919
+ input_ids: torch.LongTensor,
920
+ expanded_input_ids: Optional[torch.LongTensor] = None,
921
+ attention_mask: Optional[torch.Tensor] = None,
922
+ position_ids: Optional[torch.LongTensor] = None,
923
+ past_key_values: Optional[Cache] = None,
924
+ inputs_embeds: Optional[torch.FloatTensor] = None,
925
+ cache_position: Optional[torch.LongTensor] = None,
926
+ use_cache: Optional[bool] = None,
927
+ boundary_mask: Optional[torch.Tensor] = None,
928
+ boundary_state: Optional[MaskState] = None,
929
+ pad_state: Optional[MaskState] = None,
930
+ sequence_start_indices: Optional[torch.Tensor] = None,
931
+ **kwargs: Unpack[TransformersKwargs],
932
+ ) -> BaseModelOutputWithPast:
933
+ batch_size = input_ids.shape[0]
934
+ device = input_ids.device
935
+
936
+ if self.local_encoder.add_expanded_embeddings and expanded_input_ids is None and input_ids is not None:
937
+ # not optimized
938
+ expanded_input_ids_list: list[torch.Tensor] = []
939
+ for example_idx in range(batch_size):
940
+ expanded_input_ids_list.append(torch.tensor(self.tokenizer.expand_byte_ids(input_ids[example_idx].tolist()), dtype=torch.long, device=device))
941
+ expanded_input_ids = pad_right(expanded_input_ids_list, value=self.tokenizer.pad_token_id, multiple_of=1) # type: ignore
942
+
943
+ h_byte, h_patch, _, boundary_mask = self.local_encoder(
944
+ input_ids=input_ids,
945
+ expanded_input_ids=expanded_input_ids,
946
+ true_boundary_mask=boundary_mask,
947
+ boundary_state=boundary_state,
948
+ pad_state=pad_state,
949
+ )
950
+
951
+ if use_cache and past_key_values is None:
952
+ past_key_values = DynamicCache(config=self.config)
953
+
954
+ if cache_position is None:
955
+ past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
956
+ cache_position: torch.Tensor = torch.arange(
957
+ past_seen_tokens, past_seen_tokens + h_patch.shape[1], device=device
958
+ )
959
+
960
+ if position_ids is None:
961
+ position_ids = cache_position.unsqueeze(0) # type: ignore
962
+
963
+ # It may already have been prepared by e.g. `generate`
964
+ if not isinstance(causal_mask_mapping := attention_mask, dict):
965
+ # Prepare mask arguments
966
+ mask_kwargs = {
967
+ "config": self.config,
968
+ "input_embeds": h_patch,
969
+ "attention_mask": attention_mask,
970
+ "cache_position": cache_position,
971
+ "past_key_values": past_key_values,
972
+ "position_ids": position_ids,
973
+ }
974
+ # Create the masks
975
+ causal_mask_mapping = {
976
+ "full_attention": create_causal_mask(**mask_kwargs),
977
+ "sliding_attention": create_sliding_window_causal_mask(**mask_kwargs),
978
+ }
979
+
980
+ position_embeddings_mapping = {
981
+ "sliding_attention": self.rotary_embs["sliding_attention"](h_byte, position_ids),
982
+ "full_attention": self.rotary_embs["full_attention"](h_byte, position_ids),
983
+ }
984
+
985
+ if h_patch.numel() > 0:
986
+ # we need to convert from right-pad to left-pad and back for prefill
987
+ # since flash attention expects left-padding while the local encoder/decoder expect right-padded global tokens
988
+ # should add better left-pad support but this only affects prefill so OK for now
989
+ # although super inefficient!
990
+ if boundary_mask is not None: # prefill
991
+ n_boundaries = boundary_mask.sum(-1)
992
+
993
+ for i, current_n_boundaries in enumerate(n_boundaries):
994
+ h_patch[i, -current_n_boundaries:] = h_patch[i, :current_n_boundaries].clone()
995
+
996
+ h_patch_after_global = h_patch
997
+
998
+ for decoder_layer in self.layers[: self.config.num_hidden_layers]:
999
+ h_patch_after_global = decoder_layer(
1000
+ h_patch_after_global,
1001
+ attention_mask=causal_mask_mapping[decoder_layer.self_attn.attention_type],
1002
+ position_ids=position_ids,
1003
+ past_key_values=past_key_values,
1004
+ cache_position=cache_position,
1005
+ position_embeddings=position_embeddings_mapping[decoder_layer.self_attn.attention_type],
1006
+ **kwargs,
1007
+ )
1008
+
1009
+ if boundary_mask is not None: # prefill
1010
+ n_boundaries = boundary_mask.sum(-1)
1011
+
1012
+ for i, current_n_boundaries in enumerate(n_boundaries):
1013
+ h_patch_after_global[i, :current_n_boundaries] = h_patch_after_global[i, -current_n_boundaries:].clone()
1014
+ else:
1015
+ h_patch_after_global = h_patch
1016
+
1017
+ h_out = self.local_decoder.forward( # type: ignore
1018
+ embeds=h_byte,
1019
+ patch_embeds=h_patch_after_global,
1020
+ boundary_mask=boundary_mask,
1021
+ boundary_state=boundary_state,
1022
+ sequence_start_indices=sequence_start_indices,
1023
+ )
1024
+ h_out = self.norm(h_out)
1025
+
1026
+ return BaseModelOutputWithPast(
1027
+ last_hidden_state=h_out,
1028
+ past_key_values=past_key_values,
1029
+ )
1030
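+ # End-to-end flow of BolmoModel.forward: bytes are embedded and contextualised by the xLSTM
+ # local encoder, pooled into patch embeddings at predicted boundaries, run through the global
+ # transformer decoder layers (with per-layer-type causal masks and rotary embeddings), and then
+ # depooled by the local decoder back to one hidden state per byte before the final norm.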
+
1031
+
1032
+ @auto_docstring
1033
+ class BolmoForCausalLM(BolmoPreTrainedModel, GenerationMixin):
1034
+ _tied_weights_keys = ["lm_head.weight"]
1035
+ _tp_plan = {"lm_head": "colwise_rep"}
1036
+ _pp_plan = {"lm_head": (["hidden_states"], ["logits"])}
1037
+
1038
+ def __init__(self, config):
1039
+ super().__init__(config)
1040
+ self.model = BolmoModel(config)
1041
+ self.vocab_size = config.vocab_size
1042
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
1043
+
1044
+ # Initialize weights and apply final processing
1045
+ self.post_init()
1046
+
1047
+ def get_output_embeddings(self):
1048
+ return self.lm_head
1049
+
1050
+ def set_output_embeddings(self, new_embeddings: nn.Linear):
1051
+ self.lm_head = new_embeddings
1052
+
1053
+ @can_return_tuple
1054
+ @auto_docstring
1055
+ def forward(
1056
+ self,
1057
+ input_ids: torch.LongTensor,
1058
+ expanded_input_ids: Optional[torch.LongTensor] = None,
1059
+ attention_mask: Optional[torch.Tensor] = None,
1060
+ position_ids: Optional[torch.LongTensor] = None,
1061
+ past_key_values: Optional[Cache] = None,
1062
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1063
+ cache_position: Optional[torch.LongTensor] = None,
1064
+ use_cache: Optional[bool] = None,
1065
+ boundary_mask: Optional[torch.Tensor] = None,
1066
+ boundary_state: Optional[MaskState] = None,
1067
+ pad_state: Optional[MaskState] = None,
1068
+ sequence_start_indices: Optional[torch.Tensor] = None,
1069
+ logits_to_keep: Union[int, torch.Tensor] = 0,
1070
+ **kwargs: Unpack[TransformersKwargs],
1071
+ ) -> CausalLMOutputWithPast:
1072
+ r"""
1073
+ Example:
1074
+
1075
+ ```python
1076
+ >>> from transformers import AutoTokenizer, BolmoForCausalLM
1077
+
1078
+ >>> model = BolmoForCausalLM.from_pretrained("meta-olmo3/Bolmo-2-7b-hf")
1079
+ >>> tokenizer = AutoTokenizer.from_pretrained("meta-olmo3/Bolmo-2-7b-hf")
1080
+
1081
+ >>> prompt = "Hey, are you conscious? Can you talk to me?"
1082
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
1083
+
1084
+ >>> # Generate
1085
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
1086
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
1087
+ "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
1088
+ ```"""
1089
+ outputs: BaseModelOutputWithPast = self.model(
1090
+ input_ids=input_ids,
1091
+ expanded_input_ids=expanded_input_ids,
1092
+ attention_mask=attention_mask,
1093
+ position_ids=position_ids,
1094
+ past_key_values=past_key_values,
1095
+ inputs_embeds=inputs_embeds,
1096
+ cache_position=cache_position,
1097
+ use_cache=use_cache,
1098
+ boundary_mask=boundary_mask,
1099
+ boundary_state=boundary_state,
1100
+ pad_state=pad_state,
1101
+ sequence_start_indices=sequence_start_indices,
1102
+ **kwargs,
1103
+ )
1104
+
1105
+ hidden_states = cast(torch.Tensor, outputs.last_hidden_state)
1106
+ # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
1107
+ slice_indices = slice(-logits_to_keep, None) if isinstance(logits_to_keep, int) else logits_to_keep
1108
+ logits = self.lm_head(hidden_states[:, slice_indices, :])
1109
+
1110
+ return CausalLMOutputWithPast(
1111
+ logits=logits,
1112
+ past_key_values=outputs.past_key_values,
1113
+ hidden_states=outputs.hidden_states,
1114
+ attentions=outputs.attentions,
1115
+ )
1116
+
1117
+ def generate(self, input_ids: list[list[int]], max_new_tokens: int = 20):
1118
+ expand_input_ids = self.model.local_encoder.add_expanded_embeddings
1119
+ batch_size = len(input_ids)
1120
+
1121
+ if expand_input_ids:
1122
+ expanded_input_ids = []
1123
+
1124
+ for i in range(len(input_ids)):
1125
+ expanded_input_ids.append(torch.tensor(self.model.tokenizer.expand_byte_ids(input_ids[i]), device=self.device, dtype=torch.long))
1126
+
1127
+ expanded_input_ids = pad_left(expanded_input_ids, value=self.model.tokenizer.pad_token_id, multiple_of=1) # type: ignore
1128
+ else:
1129
+ expanded_input_ids = None
1130
+
1131
+ byte_input_ids: torch.Tensor = pad_left([torch.tensor(x, device=self.device, dtype=torch.long) for x in input_ids], value=self.model.tokenizer.pad_token_id, multiple_of=1)
1132
+
1133
+ sequence_start_indices = (byte_input_ids == self.model.tokenizer.pad_token_id).sum(-1)
1134
+ batch_size, prompt_len = byte_input_ids.shape
1135
+ finished = torch.zeros(batch_size, dtype=torch.bool, device=self.device)
1136
+
1137
+ boundary_offset = self.model.tokenizer.offset + 256
1138
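+ # Convention assumed by this generation loop: a predicted id below boundary_offset is a plain
+ # next byte, while an id >= boundary_offset encodes the same byte (id - boundary_offset)
+ # together with a patch-boundary signal; bpe_token_end_id serves as a placeholder input for
+ # rows that are waiting on a new patch embedding from the global model.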
+ eos = self.model.tokenizer.eos_token_id
1139
+
1140
+ boundary_mask = self.model.prefill_boundary_prediction_forward( # type: ignore
1141
+ byte_input_ids,
1142
+ expanded_input_ids=expanded_input_ids,
1143
+ sequence_start_indices=sequence_start_indices,
1144
+ )
1145
+
1146
+ self.model.local_encoder.prepare_inference_cache(batch_size)
1147
+ self.model.local_decoder.prepare_inference_cache(batch_size)
1148
+
1149
+ # roll back by one and force decoding to account for lookahead
1150
+ boundary_mask = boundary_mask[:, :-1]
1151
+ # need to roll one byte back and force decoding to detect whether the last byte is a boundary
1152
+ forced_decoding_ids = byte_input_ids[:, -1].cpu().tolist()
1153
+ byte_input_ids = byte_input_ids[:, :-1]
1154
+ expanded_input_ids = expanded_input_ids[:, :-1] if expanded_input_ids is not None else None
1155
+ # stays the same unless last token is pad.
1156
+ sequence_start_indices = (byte_input_ids == self.model.tokenizer.pad_token_id).sum(-1)
1157
+
1158
+ # output container
1159
+ generated = byte_input_ids
1160
+
1161
+ max_n_prefill_patches = boundary_mask.sum(-1).max().item()
1162
+ tokens_generated_plus_prefilled = max_n_prefill_patches
1163
+ bytes_generated = 0
1164
+
1165
+ max_length = max_n_prefill_patches + max_new_tokens
1166
+
1167
+ # generation state
1168
+ boundary_state = MaskState(boundary_mask[:, -1].clone())
1169
+ pad_state = MaskState(torch.zeros(batch_size, dtype=torch.bool, device=self.device))
1170
+ next_tokens = torch.full((batch_size,), self.model.tokenizer.bpe_token_end_id, device=self.device, dtype=torch.long) # type: ignore
1171
+ non_boundary_generated_tokens = [[byte_input_ids[example_idx, -1].item()] for example_idx in range(batch_size)]
1172
+ bytes_since_boundary = (boundary_mask.flip(1).cumsum(-1) == 0).sum(-1)
1173
+ is_first_forward = True
1174
+ global_past_key_values = None
1175
+
1176
+ # TODO: impl
1177
+ stop_token_sequences = []
1178
+
1179
+ while not ((max_length is not None and tokens_generated_plus_prefilled >= max_length) or finished.all()):
1180
+ input_ids_for_model = (
1181
+ generated
1182
+ if is_first_forward
1183
+ else torch.tensor([x[-1] for x in non_boundary_generated_tokens], device=generated.device, dtype=generated.dtype).unsqueeze(1)
1184
+ )
1185
+ assert not (
1186
+ (input_ids_for_model == self.model.tokenizer.bpe_token_end_id) |
1187
+ (input_ids_for_model >= boundary_offset)
1188
+ ).any().item() # type: ignore
1189
+ if expand_input_ids:
1190
+ expanded_input_ids_for_model = torch.zeros_like(input_ids_for_model)
1191
+ for i in range(input_ids_for_model.shape[0]):
1192
+ expanded_input_ids_for_model[i, :] = torch.tensor(self.model.tokenizer.expand_byte_ids(
1193
+ generated[i, :].tolist(),
1194
+ n_last=input_ids_for_model.shape[1],
1195
+ ), device=expanded_input_ids_for_model.device, dtype=expanded_input_ids_for_model.dtype)
1196
+ else:
1197
+ expanded_input_ids_for_model = None
1198
+
1199
+ out = self.forward( # type: ignore
1200
+ input_ids_for_model,
1201
+ expanded_input_ids=expanded_input_ids_for_model,
1202
+ boundary_mask=boundary_mask if is_first_forward else None,
1203
+ boundary_state=boundary_state,
1204
+ pad_state=pad_state,
1205
+ sequence_start_indices=sequence_start_indices,
1206
+ logits_to_keep=1,
1207
+ use_cache=True,
1208
+ past_key_values=global_past_key_values,
1209
+ )
1210
+ next_token_logits = cast(torch.Tensor, out.logits)
1211
+ global_past_key_values = out.past_key_values
1212
+
1213
+ if boundary_state.all():
1214
+ # new token, must not be boundary
1215
+ bytes_since_boundary[:] = 0
1216
+ else:
1217
+ boundary_state.selective_add(1, bytes_since_boundary, inv=True)
1218
+
1219
+ if any(x is not None for x in forced_decoding_ids):
1220
+ # only supported for the first token atm, so len(next_token_logits) == batch_size
1221
+ assert len(next_token_logits) == batch_size and is_first_forward
1222
+ for example_idx in range(batch_size):
1223
+ forced_decoding_id = forced_decoding_ids[example_idx]
1224
+
1225
+ if forced_decoding_id is not None:
1226
+ no_boundary_logit = next_token_logits[example_idx, 0, forced_decoding_id].item()
1227
+ boundary_logit = next_token_logits[example_idx, 0, forced_decoding_id + boundary_offset].item()
1228
+
1229
+ next_token_logits[example_idx, 0, :] = -100_000
1230
+ next_token_logits[example_idx, 0, forced_decoding_id] = no_boundary_logit
1231
+ next_token_logits[example_idx, 0, forced_decoding_id + boundary_offset] = boundary_logit
1232
+
1233
+ forced_decoding_ids[example_idx] = None # only force once
1234
+
1235
+ # TODO: impl non-greedy
1236
+ new_next_tokens = next_token_logits.squeeze(1).argmax(dim=-1)
1237
+
1238
+ if boundary_state.all():
1239
+ tokens_generated_plus_prefilled += 1
1240
+
1241
+ next_tokens = new_next_tokens
1242
+ next_tokens_cpu = next_tokens.cpu()
1243
+ for example_idx in range(batch_size):
1244
+ next_token_cpu = next_tokens_cpu[example_idx].item()
1245
+
1246
+ if next_token_cpu >= boundary_offset:
1247
+ next_token_cpu -= boundary_offset
1248
+
1249
+ non_boundary_generated_tokens[example_idx].append(next_token_cpu)
1250
+ else:
1251
+ next_tokens[:] = self.model.tokenizer.bpe_token_end_id # type: ignore
1252
+ boundary_state.selective_put(new_next_tokens, next_tokens, inv=True)
1253
+ next_tokens_cpu = next_tokens.cpu()
1254
+
1255
+ for example_idx in range(batch_size):
1256
+ next_token_cpu = next_tokens_cpu[example_idx].item()
1257
+
1258
+ if not boundary_state.cpu_mask[example_idx].item():
1259
+ if next_token_cpu >= boundary_offset:
1260
+ next_token_cpu -= boundary_offset
1261
+
1262
+ non_boundary_generated_tokens[example_idx].append(next_token_cpu)
1263
+
1264
+ is_first_forward = False
1265
+
1266
+ boundary_state = MaskState(
1267
+ (next_tokens == self.model.tokenizer.bpe_token_end_id) |
1268
+ (next_tokens >= boundary_offset) |
1269
+ finished
1270
+ ) # type: ignore
1271
+ pad_state = MaskState(
1272
+ (next_tokens == self.model.tokenizer.bpe_token_end_id) |
1273
+ finished
1274
+ )
1275
+
1276
+ # Force EOS for (previously) finished sequences
1277
+ next_tokens = torch.where(finished, torch.full_like(next_tokens, eos), next_tokens)
1278
+
1279
+ # Append next tokens
1280
+ generated = torch.cat([generated, next_tokens.unsqueeze(-1)], dim=1)
1281
+
1282
+ # Handle finished sequences
1283
+ stop_hit = next_tokens.eq(eos) | next_tokens.eq(eos + boundary_offset)
1284
+
1285
+ # Also check for stop tokens if provided
1286
+ # TODO(benjaminm): this is very annoying due to the boundaries
1287
+ # make better
1288
+ if len(stop_token_sequences) > 0:
1289
+ # TODO: implement
1290
+ raise NotImplementedError("stop_token_sequences not implemented yet for Bolmo generation.")
1291
+
1292
+ finished |= stop_hit
1293
+ bytes_generated += 1
1294
+
1295
+ __all__ = ["BolmoForCausalLM", "BolmoModel", "BolmoPreTrainedModel"]
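+ # Minimal usage sketch (illustrative only; the tokenizer helper names are assumptions): the
+ # custom generate() above takes raw byte-level id lists rather than a padded tensor, e.g.
+ #     model = BolmoForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
+ #     byte_ids = model.model.tokenizer.encode("Hello")  # assuming an encode() helper exists
+ #     model.generate([byte_ids], max_new_tokens=20)
+ # As written, generate() accumulates outputs in local buffers (generated /
+ # non_boundary_generated_tokens) rather than returning them.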