jianchen0311 committed on
Commit 2398284 · verified · parent 9262f4f

Upload model

Files changed (5):
  1. README.md +199 -0
  2. config.json +64 -0
  3. dflash.py +277 -0
  4. model.safetensors +3 -0
  5. utils.py +116 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,64 @@
+ {
+   "architectures": [
+     "DFlashDraftModel"
+   ],
+   "attention_bias": true,
+   "attention_dropout": 0.0,
+   "auto_map": {
+     "AutoModel": "dflash.DFlashDraftModel"
+   },
+   "block_size": 8,
+   "bos_token_id": 199998,
+   "dflash_config": {
+     "mask_token_id": 200000,
+     "target_layer_ids": [
+       1,
+       6,
+       11,
+       16,
+       21
+     ]
+   },
+   "dtype": "bfloat16",
+   "eos_token_id": 200002,
+   "head_dim": 64,
+   "hidden_act": "silu",
+   "hidden_size": 2880,
+   "initial_context_length": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 7680,
+   "layer_types": [
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention"
+   ],
+   "max_position_embeddings": 131072,
+   "max_window_layers": 8,
+   "model_type": "qwen3",
+   "num_attention_heads": 64,
+   "num_hidden_layers": 8,
+   "num_key_value_heads": 8,
+   "num_target_layers": 24,
+   "pad_token_id": 199999,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": {
+     "beta_fast": 32.0,
+     "beta_slow": 1.0,
+     "factor": 32.0,
+     "original_max_position_embeddings": 4096,
+     "rope_type": "yarn",
+     "truncate": false
+   },
+   "rope_theta": 150000,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "transformers_version": "4.57.1",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 201088
+ }
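As a quick sanity check on the `rope_scaling` block above: under the usual YaRN convention, the extended context window equals the original window multiplied by the scaling factor, which matches the `max_position_embeddings` declared in this config.

```python
# Values copied from config.json above; checks the standard YaRN relation
# extended_window = original_max_position_embeddings * factor.
rope_scaling = {"factor": 32.0, "original_max_position_embeddings": 4096, "rope_type": "yarn"}
max_position_embeddings = 131072

extended = int(rope_scaling["original_max_position_embeddings"] * rope_scaling["factor"])
assert extended == max_position_embeddings  # 4096 * 32 = 131072
```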
dflash.py ADDED
@@ -0,0 +1,277 @@
+ from typing import Optional, Callable
+ from typing_extensions import Unpack, Tuple
+ import torch
+ from torch import nn
+ from transformers.models.qwen3.modeling_qwen3 import (
+     Qwen3RMSNorm,
+     Qwen3RotaryEmbedding,
+     Qwen3Config,
+     Qwen3PreTrainedModel,
+     Qwen3MLP,
+     GradientCheckpointingLayer,
+     FlashAttentionKwargs,
+     rotate_half,
+     eager_attention_forward,
+     ALL_ATTENTION_FUNCTIONS,
+ )
+ from transformers import DynamicCache
+ from transformers.cache_utils import Cache
+ from .utils import build_target_layer_ids, extract_context_feature, sample
+
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
+     cos = cos.unsqueeze(unsqueeze_dim)
+     sin = sin.unsqueeze(unsqueeze_dim)
+     q_len = q.size(-2)
+     q_embed = (q * cos[..., -q_len:, :]) + (rotate_half(q) * sin[..., -q_len:, :])
+     k_embed = (k * cos) + (rotate_half(k) * sin)
+     return q_embed, k_embed
+
+ class Qwen3DFlashAttention(nn.Module):
+     """Multi-headed attention from 'Attention Is All You Need' paper"""
+
+     def __init__(self, config: Qwen3Config, layer_idx: int):
+         super().__init__()
+         self.config = config
+         self.layer_idx = layer_idx
+         self.head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
+         self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
+         self.scaling = self.head_dim**-0.5
+         self.attention_dropout = config.attention_dropout
+         self.is_causal = False
+         self.q_proj = nn.Linear(
+             config.hidden_size, config.num_attention_heads * self.head_dim, bias=config.attention_bias
+         )
+         self.k_proj = nn.Linear(
+             config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.attention_bias
+         )
+         self.v_proj = nn.Linear(
+             config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.attention_bias
+         )
+         self.o_proj = nn.Linear(
+             config.num_attention_heads * self.head_dim, config.hidden_size, bias=config.attention_bias
+         )
+         self.q_norm = Qwen3RMSNorm(self.head_dim, eps=config.rms_norm_eps)
+         self.k_norm = Qwen3RMSNorm(self.head_dim, eps=config.rms_norm_eps)
+         self.sliding_window = config.sliding_window if config.layer_types[layer_idx] == "sliding_attention" else None
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         target_hidden: torch.Tensor,
+         position_embeddings: tuple[torch.Tensor, torch.Tensor],
+         attention_mask: Optional[torch.Tensor],
+         past_key_values: Optional[Cache] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         **kwargs: Unpack[FlashAttentionKwargs],
+     ) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
+         bsz, q_len = hidden_states.shape[:-1]
+         ctx_len = target_hidden.shape[1]
+         q = self.q_proj(hidden_states)
+         q = q.view(bsz, q_len, -1, self.head_dim)
+         q = self.q_norm(q).transpose(1, 2)
+         # Keys/values attend over the projected target-context features plus the noise block.
+         k_ctx = self.k_proj(target_hidden)
+         k_noise = self.k_proj(hidden_states)
+         v_ctx = self.v_proj(target_hidden)
+         v_noise = self.v_proj(hidden_states)
+         k = torch.cat([k_ctx, k_noise], dim=1).view(bsz, ctx_len + q_len, -1, self.head_dim)
+         v = torch.cat([v_ctx, v_noise], dim=1).view(bsz, ctx_len + q_len, -1, self.head_dim)
+         k = self.k_norm(k).transpose(1, 2)
+         v = v.transpose(1, 2)
+         cos, sin = position_embeddings
+         q, k = apply_rotary_pos_emb(q, k, cos, sin)
+         if past_key_values is not None:
+             cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
+             k, v = past_key_values.update(k, v, self.layer_idx, cache_kwargs)
+         attn_fn: Callable = eager_attention_forward
+         if self.config._attn_implementation != "eager":
+             attn_fn = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
+         attn_output, attn_weights = attn_fn(
+             self,
+             q,
+             k,
+             v,
+             attention_mask,
+             dropout=0.0 if not self.training else self.attention_dropout,
+             scaling=self.scaling,
+             sliding_window=self.sliding_window,
+             **kwargs,
+         )
+         attn_output = attn_output.reshape(bsz, q_len, -1)
+         attn_output = self.o_proj(attn_output)
+         return attn_output, attn_weights
+
+ class Qwen3DFlashDecoderLayer(GradientCheckpointingLayer):
+     def __init__(self, config: Qwen3Config, layer_idx: int):
+         super().__init__()
+         self.hidden_size = config.hidden_size
+         self.self_attn = Qwen3DFlashAttention(config=config, layer_idx=layer_idx)
+         self.mlp = Qwen3MLP(config)
+         self.input_layernorm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.post_attention_layernorm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+
+     def forward(
+         self,
+         target_hidden: Optional[torch.Tensor] = None,
+         hidden_states: Optional[torch.Tensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Cache] = None,
+         output_attentions: Optional[bool] = False,
+         use_cache: Optional[bool] = False,
+         cache_position: Optional[torch.LongTensor] = None,
+         position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,  # necessary, but kept here for BC
+         **kwargs: Unpack[FlashAttentionKwargs],
+     ) -> torch.Tensor:
+         residual = hidden_states
+         hidden_states = self.input_layernorm(hidden_states)
+         hidden_states = self.self_attn(
+             hidden_states=hidden_states,
+             target_hidden=target_hidden,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_value,
+             output_attentions=output_attentions,
+             use_cache=use_cache,
+             cache_position=cache_position,
+             position_embeddings=position_embeddings,
+             **kwargs,
+         )[0]
+         hidden_states = residual + hidden_states
+         residual = hidden_states
+         hidden_states = self.post_attention_layernorm(hidden_states)
+         hidden_states = self.mlp(hidden_states)
+         hidden_states = residual + hidden_states
+         return hidden_states
+
+ class DFlashDraftModel(Qwen3PreTrainedModel):
+     config_class = Qwen3Config
+     _no_split_modules = ["Qwen3DFlashDecoderLayer"]
+
+     def __init__(self, config) -> None:
+         super().__init__(config)
+         self.config = config
+         self.layers = nn.ModuleList(
+             [Qwen3DFlashDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
+         )
+         self.target_layer_ids = self.config.dflash_config.get(
+             "target_layer_ids", build_target_layer_ids(config.num_target_layers, config.num_hidden_layers)
+         )
+         self.norm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.rotary_emb = Qwen3RotaryEmbedding(config)
+         self.fc = nn.Linear(len(self.target_layer_ids) * config.hidden_size, config.hidden_size, bias=False)
+         self.hidden_norm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.block_size = config.block_size
+         self.mask_token_id = self.config.dflash_config.get("mask_token_id", None)
+         self.post_init()
+
+     def forward(
+         self,
+         position_ids: torch.LongTensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         noise_embedding: Optional[torch.Tensor] = None,
+         target_hidden: Optional[torch.Tensor] = None,
+         past_key_values: Optional[Cache] = None,
+         use_cache: bool = False,
+         **kwargs,
+     ) -> torch.Tensor:
+         hidden_states = noise_embedding
+         # Fuse the concatenated target-layer features down to hidden_size.
+         target_hidden = self.hidden_norm(self.fc(target_hidden))
+         position_embeddings = self.rotary_emb(hidden_states, position_ids)
+         for layer in self.layers:
+             hidden_states = layer(
+                 hidden_states=hidden_states,
+                 target_hidden=target_hidden,
+                 attention_mask=attention_mask,
+                 position_ids=position_ids,
+                 past_key_value=past_key_values,
+                 use_cache=use_cache,
+                 position_embeddings=position_embeddings,
+                 **kwargs,
+             )
+         return self.norm(hidden_states)
+
+     @torch.inference_mode()
+     def spec_generate(
+         self,
+         target: nn.Module,
+         input_ids: torch.LongTensor,
+         max_new_tokens: int,
+         stop_token_ids: list[int],
+         temperature: float,
+     ):
+         self.eval()
+         num_input_tokens = input_ids.shape[1]
+         max_length = num_input_tokens + max_new_tokens
+
+         block_size = self.block_size
+         output_ids = torch.full(
+             (1, max_length + block_size),
+             self.mask_token_id,
+             dtype=torch.long,
+             device=target.device,
+         )
+         position_ids = torch.arange(output_ids.shape[1], device=target.device).unsqueeze(0)
+
+         past_key_values_target = DynamicCache()
+         past_key_values_draft = DynamicCache()
+
+         # Prefill stage
+         output = target(
+             input_ids,
+             position_ids=position_ids[:, :num_input_tokens],
+             past_key_values=past_key_values_target,
+             use_cache=True,
+             logits_to_keep=1,
+             output_hidden_states=True,
+         )
+
+         output_ids[:, :num_input_tokens] = input_ids
+         output_ids[:, num_input_tokens:num_input_tokens+1] = sample(output.logits, temperature)
+         target_hidden = extract_context_feature(output.hidden_states, self.target_layer_ids)
+
+         # Decode stage
+         acceptance_lengths = []
+         start = input_ids.shape[1]
+         while start < max_length:
+             block_output_ids = output_ids[:, start : start + block_size].clone()
+             block_position_ids = position_ids[:, start : start + block_size]
+             noise_embedding = target.model.embed_tokens(block_output_ids)
+             draft_logits = target.lm_head(self(
+                 target_hidden=target_hidden,
+                 noise_embedding=noise_embedding,
+                 position_ids=position_ids[:, past_key_values_draft.get_seq_length(): start + block_size],
+                 past_key_values=past_key_values_draft,
+                 use_cache=True,
+                 is_causal=False,
+             )[:, -block_size+1:, :])
+             past_key_values_draft.crop(start)
+             block_output_ids[:, 1:] = sample(draft_logits)
+
+             output = target(
+                 block_output_ids,
+                 position_ids=block_position_ids,
+                 past_key_values=past_key_values_target,
+                 use_cache=True,
+                 output_hidden_states=True,
+             )
+
+             # Accept the longest drafted prefix that agrees with the target's samples.
+             posterior = sample(output.logits, temperature)
+             acceptance_length = (block_output_ids[:, 1:] == posterior[:, :-1]).cumprod(dim=1).sum(dim=1)[0].item()
+             output_ids[:, start : start + acceptance_length + 1] = block_output_ids[:, : acceptance_length + 1]
+             output_ids[:, start + acceptance_length + 1] = posterior[:, acceptance_length]
+             start += acceptance_length + 1
+             past_key_values_target.crop(start)
+             target_hidden = extract_context_feature(output.hidden_states, self.target_layer_ids)[:, :acceptance_length + 1, :]
+             acceptance_lengths.append(acceptance_length + 1)
+             if stop_token_ids is not None and any(
+                 stop_token_id in output_ids[:, num_input_tokens:] for stop_token_id in stop_token_ids
+             ):
+                 break
+         output_ids = output_ids[:, :max_length]
+         output_ids = output_ids[:, output_ids[0] != self.mask_token_id]
+         if stop_token_ids is not None:
+             stop_token_ids = torch.tensor(stop_token_ids, device=output_ids.device)
+             stop_token_indices = torch.isin(output_ids[0][num_input_tokens:], stop_token_ids).nonzero(as_tuple=True)[0]
+             if stop_token_indices.numel() > 0:
+                 output_ids = output_ids[:, : num_input_tokens + stop_token_indices[0] + 1]
+
+         return output_ids
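The acceptance step inside `spec_generate` keeps the longest drafted prefix that matches the target model's samples, via the `cumprod`-over-matches expression above. The same tensor logic on toy token ids (hypothetical values, not model output; the bool mask is cast to int before `cumprod` for portability):

```python
import torch

# Toy illustration of the prefix-acceptance rule used in spec_generate:
# a drafted token survives only if it matches the target's sample for the
# preceding position, and acceptance stops at the first mismatch.
block_output_ids = torch.tensor([[10, 3, 4, 8]])  # first token is already committed
posterior = torch.tensor([[3, 4, 5, 6]])          # target's sample at each position

matches = (block_output_ids[:, 1:] == posterior[:, :-1]).int()  # [[1, 1, 0]]
acceptance_length = matches.cumprod(dim=1).sum(dim=1)[0].item()
assert acceptance_length == 2  # first two drafted tokens accepted, third rejected
```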
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:66ae3e6e93575ff93f1ba9af4a940cc7bd1f8dde3f288fc01b5d30374b02ceb5
+ size 1569547192
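The `model.safetensors` entry is a Git LFS pointer file rather than the weights themselves. A minimal sketch of reading such a pointer, assuming the simple `key value` line format of the git-lfs v1 pointer spec:

```python
# Parse a Git LFS pointer (the three lines shown above) into a dict.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:66ae3e6e93575ff93f1ba9af4a940cc7bd1f8dde3f288fc01b5d30374b02ceb5
size 1569547192
"""

fields = dict(line.split(" ", 1) for line in pointer_text.strip().splitlines())
assert fields["oid"].startswith("sha256:")
assert int(fields["size"]) == 1569547192  # checkpoint size in bytes (~1.5 GB)
```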
utils.py ADDED
@@ -0,0 +1,116 @@
+ import torch
+ from typing import Optional
+ from datasets import load_dataset, Features, Sequence, Value
+
+ def build_target_layer_ids(num_target_layers: int, num_draft_layers: int):
+     if num_draft_layers == 1:
+         return [(num_target_layers // 2)]
+     start = 1
+     end = num_target_layers - 3
+     span = end - start
+     target_layer_ids = [
+         int(round(start + (i * span) / (num_draft_layers - 1)))
+         for i in range(num_draft_layers)
+     ]
+     return target_layer_ids
+
+ def extract_context_feature(
+     hidden_states: list[torch.Tensor],
+     layer_ids: Optional[list[int]],
+ ) -> torch.Tensor:
+     offset = 1
+     selected_states = []
+     for layer_id in layer_ids:
+         selected_states.append(hidden_states[layer_id + offset])
+     target_hidden = torch.cat(selected_states, dim=-1)
+     return target_hidden
+
+ def sample(logits: torch.Tensor, temperature: float = 0.0) -> torch.Tensor:
+     if temperature < 1e-5:
+         return torch.argmax(logits, dim=-1)
+     bsz, seq_len, vocab_size = logits.shape
+     logits = logits.view(-1, vocab_size)
+     logits = logits / temperature
+     probs = torch.softmax(logits, dim=-1)
+     return torch.multinomial(probs, num_samples=1).view(bsz, seq_len)
+
+ def load_and_process_dataset(data_name: str):
+     # Math datasets
+     if data_name == "gsm8k":
+         dataset = load_dataset("openai/gsm8k", "main", split="test")
+         prompt_fmt = "Solve the following math problem. Make sure to put the answer (and only answer) inside \\boxed{{}}.\n\n{question}"
+         dataset = dataset.map(lambda x: {"turns": [prompt_fmt.format(**x)]})
+
+     elif data_name == "math500":
+         dataset = load_dataset("HuggingFaceH4/MATH-500", split="test")
+         prompt_fmt = "Solve the following math problem. Make sure to put the answer (and only answer) inside \\boxed{{}}.\n\n{problem}"
+         dataset = dataset.map(lambda x: {"turns": [prompt_fmt.format(**x)]})
+
+     elif data_name == "aime24":
+         dataset = load_dataset("HuggingFaceH4/aime_2024", split="train")
+         prompt_fmt = "Solve the following math problem. Make sure to put the answer (and only answer) inside \\boxed{{}}.\n\n{problem}"
+         dataset = dataset.map(lambda x: {"turns": [prompt_fmt.format(**x)]})
+
+     elif data_name == "aime25":
+         dataset = load_dataset("MathArena/aime_2025", split="train")
+         prompt_fmt = "Solve the following math problem. Make sure to put the answer (and only answer) inside \\boxed{{}}.\n\n{problem}"
+         dataset = dataset.map(lambda x: {"turns": [prompt_fmt.format(**x)]})
+
+     # Chat datasets
+     elif data_name == "alpaca":
+         dataset = load_dataset("tatsu-lab/alpaca", split="train")
+         dataset = dataset.map(lambda x: {"formatted_input": (f"{x['instruction']}\n\nInput:\n{x['input']}" if x['input'] else x['instruction'])})
+         dataset = dataset.map(lambda x: {"turns": [x["formatted_input"]]})
+
+     elif data_name == "mt-bench":
+         dataset = load_dataset("HuggingFaceH4/mt_bench_prompts", split="train")
+         dataset = dataset.map(lambda x: {"turns": x["prompt"]})
+
+     # Coding datasets
+     elif data_name == "humaneval":
+         dataset = load_dataset("openai/openai_humaneval", split="test")
+         prompt_fmt = "Write a solution to the following problem and make sure that it passes the tests:\n```python\n{prompt}\n```"
+         dataset = dataset.map(lambda x: {"turns": [prompt_fmt.format(**x)]})
+
+     elif data_name == "mbpp":
+         dataset = load_dataset("google-research-datasets/mbpp", "sanitized", split="test")
+         dataset = dataset.map(lambda x: {"turns": [x["prompt"]]})
+
+     elif data_name == "lbpp":
+         LBPP_PY_TEST_URL = "https://huggingface.co/datasets/CohereLabs/lbpp/resolve/main/python/test.parquet"
+         dataset = load_dataset("parquet", data_files={"test": LBPP_PY_TEST_URL})["test"]
+         dataset = dataset.map(lambda x: {"turns": [x["instruction"]]})
+
+     elif data_name == "swe-bench":
+         dataset = load_dataset("princeton-nlp/SWE-bench_Lite", split="test")
+         prompt_fmt = "Problem Statement:\n{problem_statement}\nPlease fix the issue described above."
+         dataset = dataset.map(lambda x: {"turns": [prompt_fmt.format(**x)]})
+
+     elif data_name == "livecodebench":
+         base = "https://huggingface.co/datasets/livecodebench/code_generation_lite/resolve/main/"
+         allowed_files = ["test.jsonl", "test2.jsonl", "test3.jsonl", "test4.jsonl", "test5.jsonl", "test6.jsonl"]
+         urls = [base + fn for fn in allowed_files]
+         dataset = load_dataset("json", data_files={"test": urls})["test"]
+         def format_lcb(doc):
+             system_prompt = (
+                 "You are an expert Python programmer. You will be given a question (problem specification) "
+                 "and will generate a correct Python program that matches the specification and passes all tests. "
+                 "You will NOT return anything except for the program"
+             )
+             question_block = f"### Question:\n{doc['question_content']}"
+             if doc.get("starter_code"):
+                 format_message = "### Format: Use the following code structure:"
+                 code_block = f"```python\n{doc['starter_code']}\n```"
+             else:
+                 format_message = "### Format: Write your code in the following format:"
+                 code_block = "```python\n# YOUR CODE HERE\n```"
+             answer_footer = "### Answer: (use the provided format with backticks)"
+             return f"{system_prompt}\n\n{question_block}\n\n{format_message}\n{code_block}\n\n{answer_footer}"
+         target_features = Features({"turns": Sequence(Value("large_string"))})
+         dataset = dataset.map(
+             lambda x: {"turns": [format_lcb(x)]},
+             remove_columns=dataset.column_names,
+             features=target_features
+         )
+
+     return dataset
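`build_target_layer_ids` spaces the tapped target layers evenly over `[1, num_target_layers - 3]`, rounding to integers. For the 24-layer target and the five tapped layers in this checkpoint, that spacing reproduces the `target_layer_ids` stored in `config.json`. A standalone re-derivation (`spaced_layer_ids` is a hypothetical name; the formula is the same):

```python
# Re-derive the tapped target-layer ids the way build_target_layer_ids does:
# evenly space num_points values over [1, num_target_layers - 3], with rounding.
def spaced_layer_ids(num_target_layers: int, num_points: int) -> list[int]:
    start, end = 1, num_target_layers - 3
    span = end - start
    return [int(round(start + (i * span) / (num_points - 1))) for i in range(num_points)]

# 24 target layers, 5 taps -> matches target_layer_ids in config.json.
assert spaced_layer_ids(24, 5) == [1, 6, 11, 16, 21]
```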