lvj committed
Commit 562c8fb · 1 Parent(s): 69f17e9

Upload folder using huggingface_hub

Files changed (3)
  1. configuration_phi3.py +226 -0
  2. modeling_phi3.py +1186 -0
  3. training_args.bin +0 -0
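
Since this commit ships custom `configuration_phi3.py` and `modeling_phi3.py` alongside the weights, loading the checkpoint goes through transformers' remote-code path rather than the library's bundled Phi-3 classes. A minimal sketch, assuming the repo's `config.json` maps the auto classes to these files; the repo id `lvj/phi3-checkpoint` is a placeholder, not a name given by the commit:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# trust_remote_code=True lets transformers import the configuration_phi3.py and
# modeling_phi3.py files uploaded in this commit instead of its built-in classes.
repo_id = "lvj/phi3-checkpoint"  # placeholder repo id
config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
```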
configuration_phi3.py ADDED
@@ -0,0 +1,226 @@
+ # coding=utf-8
+ # Copyright 2024 Microsoft and the HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """Phi-3 model configuration"""
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class Phi3Config(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`Phi3Model`]. It is used to instantiate a Phi-3
+     model according to the specified arguments, defining the model architecture. Instantiating a configuration with
+     the defaults will yield a configuration similar to that of
+     [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 32064):
+             Vocabulary size of the Phi-3 model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`Phi3Model`].
+         hidden_size (`int`, *optional*, defaults to 3072):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 8192):
+             Dimension of the MLP representations.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the Transformer decoder.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer decoder.
+         num_key_value_heads (`int`, *optional*):
+             This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
+             `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be
+             constructed by meanpooling all the original heads within that group. For more details, check out [this
+             paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
+             `num_attention_heads`.
+         resid_pdrop (`float`, *optional*, defaults to 0.0):
+             Dropout probability for the MLP outputs.
+         embd_pdrop (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the embeddings.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio after computing the attention scores.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 4096):
+             The maximum sequence length that this model might ever be used with.
+         original_max_position_embeddings (`int`, *optional*, defaults to 4096):
+             The maximum sequence length that this model was trained with. This is used to determine the size of the
+             original RoPE embeddings when using long scaling.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-05):
+             The epsilon value used for the RMSNorm.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether to tie the input and output word embeddings.
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         rope_scaling (`dict`, *optional*):
+             The scaling strategy for the RoPE embeddings. If `None`, no scaling is applied. If a dictionary, it must
+             contain the following keys: `type`, `short_factor` and `long_factor`. The `type` must be `longrope` and
+             the `short_factor` and `long_factor` must be lists of numbers with the same length as the hidden size
+             divided by the number of attention heads divided by 2.
+         partial_rotary_factor (`float`, *optional*, defaults to 1.0):
+             Percentage of the query and keys which will have rotary embedding. Must be between 0.0 and 1.0.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             The id of the "beginning-of-sequence" token.
+         eos_token_id (`int`, *optional*, defaults to 32000):
+             The id of the "end-of-sequence" token.
+         pad_token_id (`int`, *optional*, defaults to 32000):
+             The id of the padding token.
+         sliding_window (`int`, *optional*):
+             Sliding window attention window size. If `None`, no sliding window is applied.
+
+     Example:
+
+     ```python
+     >>> from transformers import Phi3Model, Phi3Config
+
+     >>> # Initializing a Phi-3 style configuration
+     >>> configuration = Phi3Config.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
+
+     >>> # Initializing a model from the configuration
+     >>> model = Phi3Model(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "phi3"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=32064,
+         hidden_size=3072,
+         intermediate_size=8192,
+         num_hidden_layers=32,
+         num_attention_heads=32,
+         num_key_value_heads=None,
+         resid_pdrop=0.0,
+         embd_pdrop=0.0,
+         attention_dropout=0.0,
+         hidden_act="silu",
+         max_position_embeddings=4096,
+         original_max_position_embeddings=4096,
+         initializer_range=0.02,
+         rms_norm_eps=1e-5,
+         use_cache=True,
+         tie_word_embeddings=False,
+         rope_theta=10000.0,
+         rope_scaling=None,
+         partial_rotary_factor=1.0,
+         bos_token_id=1,
+         eos_token_id=32000,
+         pad_token_id=32000,
+         sliding_window=None,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
+         self.resid_pdrop = resid_pdrop
+         self.embd_pdrop = embd_pdrop
+         self.attention_dropout = attention_dropout
+         self.hidden_act = hidden_act
+         self.max_position_embeddings = max_position_embeddings
+         self.original_max_position_embeddings = original_max_position_embeddings
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self.partial_rotary_factor = partial_rotary_factor
+         self._rope_scaling_adjustment()
+         self._rope_scaling_validation()
+         self.sliding_window = sliding_window
+
+         super().__init__(
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             pad_token_id=pad_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
+
+     def _rope_scaling_adjustment(self):
+         """
+         Adjust the `type` of the `rope_scaling` configuration for backward compatibility.
+         """
+         if self.rope_scaling is None:
+             return
+
+         rope_scaling_type = self.rope_scaling.get("type", None)
+
+         # For backward compatibility if previous version used "su" or "yarn"
+         if rope_scaling_type is not None and rope_scaling_type in ["su", "yarn"]:
+             self.rope_scaling["type"] = "longrope"
+
+     def _rope_scaling_validation(self):
+         """
+         Validate the `rope_scaling` configuration.
+         """
+         if self.rope_scaling is None:
+             return
+
+         if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 3:
+             raise ValueError(
+                 "`rope_scaling` must be a dictionary with three fields, `type`, `short_factor` and `long_factor`, "
+                 f"got {self.rope_scaling}"
+             )
+         rope_scaling_type = self.rope_scaling.get("type", None)
+         rope_scaling_short_factor = self.rope_scaling.get("short_factor", None)
+         rope_scaling_long_factor = self.rope_scaling.get("long_factor", None)
+         if rope_scaling_type is None or rope_scaling_type not in ["longrope"]:
+             raise ValueError(f"`rope_scaling`'s type field must be one of ['longrope'], got {rope_scaling_type}")
+         if not (
+             isinstance(rope_scaling_short_factor, list)
+             and all(isinstance(x, (int, float)) for x in rope_scaling_short_factor)
+         ):
+             raise ValueError(
+                 f"`rope_scaling`'s short_factor field must be a list of numbers, got {rope_scaling_short_factor}"
+             )
+         rotary_ndims = int(self.hidden_size // self.num_attention_heads * self.partial_rotary_factor)
+         if not len(rope_scaling_short_factor) == rotary_ndims // 2:
+             raise ValueError(
+                 f"`rope_scaling`'s short_factor field must have length {rotary_ndims // 2}, got {len(rope_scaling_short_factor)}"
+             )
+         if not (
+             isinstance(rope_scaling_long_factor, list)
+             and all(isinstance(x, (int, float)) for x in rope_scaling_long_factor)
+         ):
+             raise ValueError(
+                 f"`rope_scaling`'s long_factor field must be a list of numbers, got {rope_scaling_long_factor}"
+             )
+         if not len(rope_scaling_long_factor) == rotary_ndims // 2:
+             raise ValueError(
+                 f"`rope_scaling`'s long_factor field must have length {rotary_ndims // 2}, got {len(rope_scaling_long_factor)}"
+             )
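
As `_rope_scaling_validation` above enforces, a `longrope` scaling dict must carry exactly the keys `type`, `short_factor`, and `long_factor`, and each factor list must have `rotary_ndims // 2` entries. A quick sketch with the default geometry (hidden size 3072, 32 heads, `partial_rotary_factor=1.0`, so a head dim of 96 and 48 factors per list); the all-ones factors are illustrative placeholders, not tuned values:

```python
from configuration_phi3 import Phi3Config  # the file added in this commit

n_factors = (3072 // 32) // 2  # rotary_ndims // 2 == 48 with the default geometry

config = Phi3Config(
    max_position_embeddings=131072,
    original_max_position_embeddings=4096,
    rope_scaling={
        "type": "longrope",  # "su" or "yarn" would be remapped to "longrope"
        "short_factor": [1.0] * n_factors,
        "long_factor": [1.0] * n_factors,  # placeholder values, not tuned factors
    },
)
# A factor list of the wrong length raises, e.g.:
# ValueError: `rope_scaling`'s short_factor field must have length 48, got 4
```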
modeling_phi3.py ADDED
@@ -0,0 +1,1186 @@
+ # coding=utf-8
+ # Copyright 2024 Microsoft and the HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """PyTorch Phi-3 model."""
+
+ from typing import Callable, List, Optional, Tuple, TypedDict, Union
+
+ import torch
+ from torch import nn
+
+ from transformers.activations import ACT2FN
+ from transformers.cache_utils import Cache, DynamicCache, SlidingWindowCache, StaticCache
+ from transformers.generation import GenerationMixin
+ from transformers.modeling_attn_mask_utils import AttentionMaskConverter
+ from transformers.modeling_flash_attention_utils import FlashAttentionKwargs
+ from transformers.modeling_outputs import (
+     BaseModelOutputWithPast,
+     CausalLMOutputWithPast,
+     SequenceClassifierOutputWithPast,
+     TokenClassifierOutput,
+ )
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS
+ from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel
+ from transformers.processing_utils import Unpack
+ from transformers.utils import (
+     add_code_sample_docstrings,
+     add_start_docstrings,
+     add_start_docstrings_to_model_forward,
+     logging,
+     replace_return_docstrings,
+ )
+ from transformers.utils.deprecation import deprecate_kwarg
+ from .configuration_phi3 import Phi3Config
+
+ try:
+     from transformers.utils import LossKwargs
+ except ImportError:
+     # From utils/generic.py in transformers==v4.53.3
+     class LossKwargs(TypedDict, total=False):
+         num_items_in_batch: Optional["torch.Tensor"]
+
+
+ logger = logging.get_logger(__name__)
+
+ _CHECKPOINT_FOR_DOC = "microsoft/Phi-3-mini-4k-instruct"
+ _CONFIG_FOR_DOC = "Phi3Config"
+
+
+ class Phi3MLP(nn.Module):
+     def __init__(self, config):
+         super().__init__()
+
+         self.config = config
+         self.gate_up_proj = nn.Linear(config.hidden_size, 2 * config.intermediate_size, bias=False)
+         self.down_proj = nn.Linear(config.intermediate_size, config.hidden_size, bias=False)
+         self.activation_fn = ACT2FN[config.hidden_act]
+
+     def forward(self, hidden_states: torch.FloatTensor) -> torch.FloatTensor:
+         up_states = self.gate_up_proj(hidden_states)
+
+         gate, up_states = up_states.chunk(2, dim=-1)
+         up_states = up_states * self.activation_fn(gate)
+
+         return self.down_proj(up_states)
+
+
+ def rotate_half(x):
+     """Rotates half the hidden dims of the input."""
+     x1 = x[..., : x.shape[-1] // 2]
+     x2 = x[..., x.shape[-1] // 2 :]
+     return torch.cat((-x2, x1), dim=-1)
+
+
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
+     """
+     This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
+     num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
+     """
+     batch, num_key_value_heads, slen, head_dim = hidden_states.shape
+     if n_rep == 1:
+         return hidden_states
+     hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
+     return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
+
+
+ def eager_attention_forward(
+     module: nn.Module,
+     query: torch.Tensor,
+     key: torch.Tensor,
+     value: torch.Tensor,
+     attention_mask: Optional[torch.Tensor],
+     scaling: float,
+     dropout: float = 0.0,
+     **kwargs,
+ ):
+     key_states = repeat_kv(key, module.num_key_value_groups)
+     value_states = repeat_kv(value, module.num_key_value_groups)
+
+     attn_weights = torch.matmul(query, key_states.transpose(2, 3)) * scaling
+     if attention_mask is not None:
+         causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
+         attn_weights = attn_weights + causal_mask
+
+     attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
+     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
+     attn_output = torch.matmul(attn_weights, value_states)
+     attn_output = attn_output.transpose(1, 2).contiguous()
+
+     return attn_output, attn_weights
+
+
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
+     """Applies Rotary Position Embedding to the query and key tensors.
+
+     Args:
+         q (`torch.Tensor`): The query tensor.
+         k (`torch.Tensor`): The key tensor.
+         cos (`torch.Tensor`): The cosine part of the rotary embedding.
+         sin (`torch.Tensor`): The sine part of the rotary embedding.
+         position_ids (`torch.Tensor`, *optional*):
+             Deprecated and unused.
+         unsqueeze_dim (`int`, *optional*, defaults to 1):
+             The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
+             sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
+             that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
+             k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
+             cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
+             the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
+     Returns:
+         `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
+     """
+     cos = cos.unsqueeze(unsqueeze_dim)
+     sin = sin.unsqueeze(unsqueeze_dim)
+
+     rotary_dim = cos.shape[-1]
+     q_rot, q_pass = q[..., :rotary_dim], q[..., rotary_dim:]
+     k_rot, k_pass = k[..., :rotary_dim], k[..., rotary_dim:]
+
+     q_embed = torch.cat([(q_rot * cos) + (rotate_half(q_rot) * sin), q_pass], dim=-1)
+     k_embed = torch.cat([(k_rot * cos) + (rotate_half(k_rot) * sin), k_pass], dim=-1)
+     return q_embed, k_embed
+
+
+ class Phi3Attention(nn.Module):
+     """Multi-headed attention from 'Attention Is All You Need' paper"""
+
+     def __init__(self, config: Phi3Config, layer_idx: Optional[int] = None):
+         super().__init__()
+         self.config = config
+         self.layer_idx = layer_idx
+         self.head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
+         self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
+         self.num_key_value_heads = config.num_key_value_heads
+         self.scaling = self.head_dim**-0.5
+         self.attention_dropout = config.attention_dropout
+         self.is_causal = True
+
+         op_size = config.num_attention_heads * self.head_dim + 2 * (config.num_key_value_heads * self.head_dim)
+         self.o_proj = nn.Linear(config.num_attention_heads * self.head_dim, config.hidden_size, bias=False)
+         self.qkv_proj = nn.Linear(config.hidden_size, op_size, bias=False)
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         position_embeddings: Tuple[torch.Tensor, torch.Tensor],
+         attention_mask: Optional[torch.Tensor],
+         past_key_value: Optional[Cache] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         **kwargs: Unpack[FlashAttentionKwargs],
+     ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+         input_shape = hidden_states.shape[:-1]
+         hidden_shape = (*input_shape, -1, self.head_dim)
+
+         qkv = self.qkv_proj(hidden_states)
+         query_pos = self.config.num_attention_heads * self.head_dim
+         query_states = qkv[..., :query_pos]
+         key_states = qkv[..., query_pos : query_pos + self.num_key_value_heads * self.head_dim]
+         value_states = qkv[..., query_pos + self.num_key_value_heads * self.head_dim :]
+
+         query_states = query_states.view(hidden_shape).transpose(1, 2)
+         key_states = key_states.view(hidden_shape).transpose(1, 2)
+         value_states = value_states.view(hidden_shape).transpose(1, 2)
+
+         cos, sin = position_embeddings
+         query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
+
+         if past_key_value is not None:
+             # sin and cos are specific to RoPE models; cache_position needed for the static cache
+             cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
+             key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
+
+         attention_interface: Callable = eager_attention_forward
+         if self.config._attn_implementation != "eager":
+             if self.config._attn_implementation == "sdpa" and kwargs.get("output_attentions", False):
+                 logger.warning_once(
+                     "`torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to "
+                     'eager attention. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
+                 )
+             else:
+                 attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
+
+         attn_output, attn_weights = attention_interface(
+             self,
+             query_states,
+             key_states,
+             value_states,
+             attention_mask,
+             dropout=0.0 if not self.training else self.attention_dropout,
+             scaling=self.scaling,
+             sliding_window=getattr(self.config, "sliding_window", None),
+             **kwargs,
+         )
+
+         attn_output = attn_output.reshape(*input_shape, -1).contiguous()
+         attn_output = self.o_proj(attn_output)
+         return attn_output, attn_weights
+
+
+ class Phi3RMSNorm(nn.Module):
+     def __init__(self, hidden_size, eps=1e-6):
+         """
+         Phi3RMSNorm is equivalent to T5LayerNorm
+         """
+         super().__init__()
+         self.weight = nn.Parameter(torch.ones(hidden_size))
+         self.variance_epsilon = eps
+
+     def forward(self, hidden_states):
+         input_dtype = hidden_states.dtype
+         hidden_states = hidden_states.to(torch.float32)
+         variance = hidden_states.pow(2).mean(-1, keepdim=True)
+         hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
+         return self.weight * hidden_states.to(input_dtype)
+
+     def extra_repr(self):
+         return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"
+
+
+ class Phi3DecoderLayer(nn.Module):
+     def __init__(self, config: Phi3Config, layer_idx: int):
+         super().__init__()
+         self.hidden_size = config.hidden_size
+         self.self_attn = Phi3Attention(config=config, layer_idx=layer_idx)
+         self.mlp = Phi3MLP(config)
+         self.input_layernorm = Phi3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.post_attention_layernorm = Phi3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.config = config
+         self.resid_attn_dropout = nn.Dropout(config.resid_pdrop)
+         self.resid_mlp_dropout = nn.Dropout(config.resid_pdrop)
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Cache] = None,
+         output_attentions: Optional[bool] = False,
+         use_cache: Optional[bool] = False,
+         cache_position: Optional[torch.LongTensor] = None,
+         position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,  # necessary, but kept here for BC
+         **kwargs: Unpack[FlashAttentionKwargs],
+     ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+         """
+         Args:
+             hidden_states (`torch.FloatTensor`):
+                 input to the layer of shape `(batch, seq_len, embed_dim)`
+             attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
+                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
+             position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+                 Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
+                 `[0, config.n_positions - 1]`. [What are position IDs?](../glossary#position-ids)
+             past_key_value (`Cache`, *optional*): cached past key and value projection states
+             output_attentions (`bool`, *optional*):
+                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
+                 returned tensors for more detail.
+             use_cache (`bool`, *optional*):
+                 If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
+                 (see `past_key_values`).
+             cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*):
+                 Indices depicting the position of the input sequence tokens in the sequence
+             kwargs (`dict`, *optional*):
+                 Arbitrary kwargs to be ignored, used for FSDP and other methods that inject code
+                 into the model
+         """
+         residual = hidden_states
+
+         hidden_states = self.input_layernorm(hidden_states)
+
+         # Self Attention
+         hidden_states, self_attn_weights = self.self_attn(
+             hidden_states=hidden_states,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_value=past_key_value,
+             output_attentions=output_attentions,
+             use_cache=use_cache,
+             cache_position=cache_position,
+             position_embeddings=position_embeddings,
+             **kwargs,
+         )
+         hidden_states = residual + self.resid_attn_dropout(hidden_states)  # main diff with Llama
+
+         residual = hidden_states
+         hidden_states = self.post_attention_layernorm(hidden_states)
+         hidden_states = self.mlp(hidden_states)
+         hidden_states = residual + self.resid_mlp_dropout(hidden_states)  # main diff with Llama
+
+         outputs = (hidden_states,)
+         if output_attentions:
+             outputs += (self_attn_weights,)
+
+         return outputs
+
+
+ class Phi3RotaryEmbedding(nn.Module):
+     def __init__(self, config: Phi3Config, device=None):
+         super().__init__()
+         # BC: "rope_type" was originally "type"
+         if hasattr(config, "rope_scaling") and config.rope_scaling is not None:
+             self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
+         else:
+             self.rope_type = "default"
+         self.max_seq_len_cached = config.max_position_embeddings
+         self.original_max_seq_len = config.max_position_embeddings
+
+         self.config = config
+         self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
+
+         inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device)
+         self.register_buffer("inv_freq", inv_freq, persistent=False)
+         self.original_inv_freq = self.inv_freq
+
+     def _dynamic_frequency_update(self, position_ids, device):
+         """
+         dynamic RoPE layers should recompute `inv_freq` in the following situations:
+         1 - growing beyond the cached sequence length (allow scaling)
+         2 - the current sequence length is in the original scale (avoid losing precision with small sequences)
+         """
+         seq_len = torch.max(position_ids) + 1
+         if seq_len > self.max_seq_len_cached:  # growth
+             inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device, seq_len=seq_len)
+             self.register_buffer("inv_freq", inv_freq, persistent=False)  # TODO joao: may break with compilation
+             self.max_seq_len_cached = seq_len
+
+         if seq_len < self.original_max_seq_len and self.max_seq_len_cached > self.original_max_seq_len:  # reset
+             # This .to() is needed if the model has been moved to a device after being initialized (because
+             # the buffer is automatically moved, but not the original copy)
+             self.original_inv_freq = self.original_inv_freq.to(device)
+             self.register_buffer("inv_freq", self.original_inv_freq, persistent=False)
+             self.max_seq_len_cached = self.original_max_seq_len
+
+     @torch.no_grad()
+     def forward(self, x, position_ids):
+         if "dynamic" in self.rope_type:
+             self._dynamic_frequency_update(position_ids, device=x.device)
+         elif self.rope_type == "longrope":
+             self._longrope_frequency_update(position_ids, device=x.device)
+
+         # Core RoPE block
+         inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1)
+         position_ids_expanded = position_ids[:, None, :].float()
+         # Force float32 (see https://github.com/huggingface/transformers/pull/29285)
+         device_type = x.device.type
+         device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu"
+         with torch.autocast(device_type=device_type, enabled=False):
+             freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
+             emb = torch.cat((freqs, freqs), dim=-1)
+             cos = emb.cos()
+             sin = emb.sin()
+
+         # Advanced RoPE types (e.g. yarn) apply a post-processing scaling factor, equivalent to scaling attention
+         cos = cos * self.attention_scaling
+         sin = sin * self.attention_scaling
+
+         return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
+
+     def _longrope_frequency_update(self, position_ids, device):
+         """Longrope uses the long factor if the sequence is larger than the original pretraining length, the short factor otherwise."""
+         seq_len = torch.max(position_ids) + 1
+         if hasattr(self.config, "original_max_position_embeddings"):
+             original_max_position_embeddings = self.config.original_max_position_embeddings
+         else:
+             original_max_position_embeddings = self.config.max_position_embeddings
+         if seq_len > original_max_position_embeddings:
+             if not hasattr(self, "long_inv_freq"):
+                 self.long_inv_freq, _ = self.rope_init_fn(
+                     self.config, device, seq_len=original_max_position_embeddings + 1
+                 )
+             self.register_buffer("inv_freq", self.long_inv_freq, persistent=False)
+         else:
+             # This .to() is needed if the model has been moved to a device after being initialized (because
+             # the buffer is automatically moved, but not the original copy)
+             self.original_inv_freq = self.original_inv_freq.to(device)
+             self.register_buffer("inv_freq", self.original_inv_freq, persistent=False)
+
+
+ PHI3_START_DOCSTRING = r"""
+     This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
+     library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
+     heads etc.)
+
+     This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
+     Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
+     usage and behavior.
+
+     Parameters:
+         config ([`Phi3Config`]):
+             Model configuration class with all the parameters of the model. Initializing with a config file does not
+             load the weights associated with the model, only the configuration. Check out the
+             [`~PreTrainedModel.from_pretrained`] method to load the model weights.
+ """
+
+
+ @add_start_docstrings(
+     "The bare Phi3 Model outputting raw hidden-states without any specific head on top.",
+     PHI3_START_DOCSTRING,
+ )
+ class Phi3PreTrainedModel(PreTrainedModel):
+     config_class = Phi3Config
+     base_model_prefix = "model"
+     supports_gradient_checkpointing = True
+     _no_split_modules = ["Phi3DecoderLayer"]
+     _skip_keys_device_placement = ["past_key_values"]
+     _supports_flash_attn_2 = True
+     _supports_sdpa = True
+     _supports_flex_attn = True
+     _supports_cache_class = True
+     _supports_quantized_cache = True
+     _supports_static_cache = True
+     _supports_attention_backend = True
+     _version = "0.0.5"
+
+     def _init_weights(self, module):
+         std = self.config.initializer_range
+         if isinstance(module, nn.Linear):
+             module.weight.data.normal_(mean=0.0, std=std)
+             if module.bias is not None:
+                 module.bias.data.zero_()
+         elif isinstance(module, nn.Embedding):
+             module.weight.data.normal_(mean=0.0, std=std)
+             if module.padding_idx is not None:
+                 module.weight.data[module.padding_idx].zero_()
+
+
+ PHI3_INPUTS_DOCSTRING = r"""
+     Args:
+         input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
+             Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
+             it.
+
+             Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
+             [`PreTrainedTokenizer.__call__`] for details.
+
+             [What are input IDs?](../glossary#input-ids)
+         attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
+             Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
+
+             - 1 for tokens that are **not masked**,
+             - 0 for tokens that are **masked**.
+
+             [What are attention masks?](../glossary#attention-mask)
+
+             Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
+             [`PreTrainedTokenizer.__call__`] for details.
+
+             If `past_key_values` is used, optionally only the last `input_ids` have to be input (see
+             `past_key_values`).
+
+             If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
+             and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
+             information on the default strategy.
+
+             - 1 indicates the head is **not masked**,
+             - 0 indicates the head is **masked**.
+         position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+             Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
+             config.n_positions - 1]`.
+
+             [What are position IDs?](../glossary#position-ids)
+         past_key_values (`Cache` or `tuple(tuple(torch.FloatTensor))`, *optional*):
+             Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
+             blocks) that can be used to speed up sequential decoding. This typically consists of the `past_key_values`
+             returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`.
+
+             Two formats are allowed:
+             - a [`~cache_utils.Cache`] instance, see our
+               [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache);
+             - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of
+               shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`. This is also known as the legacy
+               cache format.
+
+             The model will output the same cache format that is fed as input. If no `past_key_values` are passed, the
+             legacy cache format will be returned.
+
+             If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
+             have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
+             `input_ids` of shape `(batch_size, sequence_length)`.
+         inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
+             Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
+             This is useful if you want more control over how to convert `input_ids` indices into associated vectors
+             than the model's internal embedding lookup matrix.
+         use_cache (`bool`, *optional*):
+             If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
+             (see `past_key_values`).
+         output_attentions (`bool`, *optional*):
+             Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
+             tensors for more detail.
+         output_hidden_states (`bool`, *optional*):
+             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
+             more detail.
+         return_dict (`bool`, *optional*):
+             Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
+         cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*):
+             Indices depicting the position of the input sequence tokens in the sequence. Contrary to `position_ids`,
+             this tensor is not affected by padding. It is used to update the cache in the correct position and to
+             infer the complete sequence length.
+ """
+
+
+ @add_start_docstrings(
+     "The bare Phi3 Model outputting raw hidden-states without any specific head on top.",
+     PHI3_START_DOCSTRING,
+ )
+ class Phi3Model(Phi3PreTrainedModel):
+     """
+     Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Phi3DecoderLayer`]
+
+     Args:
+         config: Phi3Config
+     """
+
+     def __init__(self, config: Phi3Config):
+         super().__init__(config)
+         self.padding_idx = config.pad_token_id
+         self.vocab_size = config.vocab_size
+
+         self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+         self.layers = nn.ModuleList(
+             [Phi3DecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
+         )
+         self.norm = Phi3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.rotary_emb = Phi3RotaryEmbedding(config=config)
+         self.gradient_checkpointing = False
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.embed_tokens
+
+     def set_input_embeddings(self, value):
+         self.embed_tokens = value
+
+     @add_start_docstrings_to_model_forward(PHI3_INPUTS_DOCSTRING)
+     def forward(
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Cache] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         **flash_attn_kwargs: Unpack[FlashAttentionKwargs],
+     ) -> Union[Tuple, BaseModelOutputWithPast]:
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         if (input_ids is None) ^ (inputs_embeds is not None):
+             raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
+
+         if self.gradient_checkpointing and self.training and use_cache:
+             logger.warning_once(
+                 "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
+             )
+             use_cache = False
+
+         if inputs_embeds is None:
+             inputs_embeds = self.embed_tokens(input_ids)
+
+         if use_cache and past_key_values is None:
+             past_key_values = DynamicCache()
+
+         if cache_position is None:
+             past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
+             cache_position = torch.arange(
+                 past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
+             )
+
+         if position_ids is None:
+             position_ids = cache_position.unsqueeze(0)
+
+         causal_mask = self._update_causal_mask(
+             attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions
+         )
+
+         hidden_states = inputs_embeds
+
+         # create position embeddings to be shared across the decoder layers
+         position_embeddings = self.rotary_emb(hidden_states, position_ids)
+
+         # decoder layers
+         all_hidden_states = () if output_hidden_states else None
+         all_self_attns = () if output_attentions else None
+
+         for decoder_layer in self.layers[: self.config.num_hidden_layers]:
+             if output_hidden_states:
+                 all_hidden_states += (hidden_states,)
+
+             if self.gradient_checkpointing and self.training:
+                 layer_outputs = self._gradient_checkpointing_func(
+                     decoder_layer.__call__,
+                     hidden_states,
+                     causal_mask,
+                     position_ids,
+                     past_key_values,
+                     output_attentions,
+                     use_cache,
+                     cache_position,
+                     position_embeddings,
+                 )
+             else:
+                 layer_outputs = decoder_layer(
+                     hidden_states,
+                     attention_mask=causal_mask,
+                     position_ids=position_ids,
+                     past_key_value=past_key_values,
+                     output_attentions=output_attentions,
+                     use_cache=use_cache,
+                     cache_position=cache_position,
+                     position_embeddings=position_embeddings,
+                     **flash_attn_kwargs,
+                 )
+
+             hidden_states = layer_outputs[0]
+
+             if output_attentions:
+                 all_self_attns += (layer_outputs[1],)
+
+         hidden_states = self.norm(hidden_states)
+
+         # add hidden states from the last decoder layer
+         if output_hidden_states:
+             all_hidden_states += (hidden_states,)
+
+         output = BaseModelOutputWithPast(
+             last_hidden_state=hidden_states,
+             past_key_values=past_key_values if use_cache else None,
+             hidden_states=all_hidden_states,
+             attentions=all_self_attns,
+         )
+         return output if return_dict else output.to_tuple()
+
+     def _update_causal_mask(
+         self,
+         attention_mask: torch.Tensor,
+         input_tensor: torch.Tensor,
+         cache_position: torch.Tensor,
+         past_key_values: Cache,
+         output_attentions: bool,
+     ):
+         if self.config._attn_implementation == "flash_attention_2":
+             if attention_mask is not None and past_key_values is not None:
+                 is_padding_right = attention_mask[:, -1].sum().item() != input_tensor.size()[0]
+                 if is_padding_right:
+                     raise ValueError(
+                         "You are attempting to perform batched generation with padding_side='right'"
+                         " this may lead to unexpected behaviour for Flash Attention version of Phi3. Make sure to "
+                         " call `tokenizer.padding_side = 'left'` before tokenizing the input. "
+                     )
+             if attention_mask is not None and 0.0 in attention_mask:
+                 return attention_mask
+             return None
+
+         # For SDPA, when possible, we will rely on its `is_causal` argument instead of its `attn_mask` argument, in
+         # order to dispatch on Flash Attention 2. This feature is not compatible with static cache, as SDPA will fail
+         # to infer the attention mask.
+         past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
+         using_static_cache = isinstance(past_key_values, StaticCache)
+         using_sliding_window_cache = isinstance(past_key_values, SlidingWindowCache)
+
+         # When output_attentions is True, the sdpa implementation's forward method calls the eager implementation's forward
+         if (
+             self.config._attn_implementation == "sdpa"
+             and not (using_static_cache or using_sliding_window_cache)
+             and not output_attentions
+         ):
+             if AttentionMaskConverter._ignore_causal_mask_sdpa(
+                 attention_mask,
+                 inputs_embeds=input_tensor,
+                 past_key_values_length=past_seen_tokens,
+                 sliding_window=self.config.sliding_window,
+                 is_training=self.training,
+             ):
+                 return None
+
+         dtype, device = input_tensor.dtype, input_tensor.device
+         min_dtype = torch.finfo(dtype).min
+         sequence_length = input_tensor.shape[1]
+         # SlidingWindowCache or StaticCache
+         if using_sliding_window_cache or using_static_cache:
+             target_length = past_key_values.get_max_cache_shape()
+         # DynamicCache or no cache
+         else:
+             target_length = (
+                 attention_mask.shape[-1]
+                 if isinstance(attention_mask, torch.Tensor)
+                 else past_seen_tokens + sequence_length + 1
+             )
+
+         # In case the provided attention mask is 2D, we generate a causal mask here (4D).
+         causal_mask = self._prepare_4d_causal_attention_mask_with_cache_position(
+             attention_mask,
+             sequence_length=sequence_length,
+             target_length=target_length,
+             dtype=dtype,
+             device=device,
+             cache_position=cache_position,
+             batch_size=input_tensor.shape[0],
+             config=self.config,
+             past_key_values=past_key_values,
+         )
+
+         if (
+             self.config._attn_implementation == "sdpa"
+             and attention_mask is not None
+             and attention_mask.device.type in ["cuda", "xpu"]
+             and not output_attentions
+         ):
+             # Attend to all tokens in fully masked rows in the causal_mask, for example the relevant first rows when
+             # using left padding. This is required by F.scaled_dot_product_attention memory-efficient attention path.
+             # Details: https://github.com/pytorch/pytorch/issues/110213
+             causal_mask = AttentionMaskConverter._unmask_unattended(causal_mask, min_dtype)
+
+         return causal_mask
+
+     @staticmethod
+     def _prepare_4d_causal_attention_mask_with_cache_position(
+         attention_mask: torch.Tensor,
+         sequence_length: int,
+         target_length: int,
+         dtype: torch.dtype,
+         device: torch.device,
+         cache_position: torch.Tensor,
+         batch_size: int,
+         config: Phi3Config,
+         past_key_values: Cache,
+     ):
+         """
+         Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
+         `(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, does nothing.
+
+         Args:
+             attention_mask (`torch.Tensor`):
+                 A 2D attention mask of shape `(batch_size, key_value_length)` or a 4D attention mask of shape
+                 `(batch_size, 1, query_length, key_value_length)`.
+             sequence_length (`int`):
+                 The sequence length being processed.
+             target_length (`int`):
+                 The target length: when generating with static cache, the mask should be as long as the static cache,
+                 to account for the 0 padding, the part of the cache that is not filled yet.
+             dtype (`torch.dtype`):
+                 The dtype to use for the 4D attention mask.
+             device (`torch.device`):
+                 The device to place the 4D attention mask on.
+             cache_position (`torch.Tensor`):
+                 Indices depicting the position of the input sequence tokens in the sequence.
+             batch_size (`int`):
+                 Batch size.
+             config (`Phi3Config`):
+                 The model's configuration class
+             past_key_values (`Cache`):
+                 The cache class that is being used currently to generate
+         """
+         if attention_mask is not None and attention_mask.dim() == 4:
+             # In this case we assume that the mask comes already in inverted form and requires no inversion or slicing.
+             causal_mask = attention_mask
+         else:
+             min_dtype = torch.finfo(dtype).min
+             causal_mask = torch.full(
+                 (sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device
+             )
+             diagonal_attend_mask = torch.arange(target_length, device=device) > cache_position.reshape(-1, 1)
+             if config.sliding_window is not None:
+                 # if we have a sliding window, we should not attend to tokens beyond the sliding window length, so we
+                 # mask them out here as well; the check is needed to verify whether the current checkpoint was trained
+                 # with sliding window or not
+                 if not isinstance(past_key_values, SlidingWindowCache) or sequence_length > target_length:
+                     sliding_attend_mask = torch.arange(target_length, device=device) <= (
+                         cache_position.reshape(-1, 1) - config.sliding_window
+                     )
+                     diagonal_attend_mask.bitwise_or_(sliding_attend_mask)
+             causal_mask *= diagonal_attend_mask
+             causal_mask = causal_mask[None, None, :, :].expand(batch_size, 1, -1, -1)
+             if attention_mask is not None:
+                 causal_mask = causal_mask.clone()  # copy to contiguous memory for in-place edit
+                 if attention_mask.shape[-1] > target_length:
+                     attention_mask = attention_mask[:, :target_length]
+                 mask_length = attention_mask.shape[-1]
+                 padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :].to(
+                     causal_mask.device
+                 )
+                 padding_mask = padding_mask == 0
+                 causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill(
+                     padding_mask, min_dtype
+                 )
+         return causal_mask
+
+
826
+ class KwargsForCausalLM(FlashAttentionKwargs, LossKwargs): ...
827
+
828
+
829
+ class Phi3ForCausalLM(Phi3PreTrainedModel, GenerationMixin):
830
+ _tied_weights_keys = ["lm_head.weight"]
831
+ _tp_plan = {"lm_head": "colwise_rep"}
832
+ _pp_plan = {"lm_head": (["hidden_states"], ["logits"])}
833
+
834
+ def __init__(self, config):
835
+ super().__init__(config)
836
+ self.model = Phi3Model(config)
837
+ self.vocab_size = config.vocab_size
838
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
839
+
840
+ # Initialize weights and apply final processing
841
+ self.post_init()
842
+
843
+ def get_input_embeddings(self):
844
+ return self.model.embed_tokens
845
+
846
+ def set_input_embeddings(self, value):
847
+ self.model.embed_tokens = value
848
+
849
+ def get_output_embeddings(self):
850
+ return self.lm_head
851
+
852
+ def set_output_embeddings(self, new_embeddings):
853
+ self.lm_head = new_embeddings
854
+
855
+ def set_decoder(self, decoder):
856
+ self.model = decoder
857
+
858
+ def get_decoder(self):
859
+ return self.model
860
+
861
+ @deprecate_kwarg("num_logits_to_keep", version="4.50", new_name="logits_to_keep")
862
+ @add_start_docstrings_to_model_forward(PHI3_INPUTS_DOCSTRING)
863
+ @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
864
+ def forward(
865
+ self,
866
+ input_ids: torch.LongTensor = None,
867
+ attention_mask: Optional[torch.Tensor] = None,
868
+ position_ids: Optional[torch.LongTensor] = None,
869
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
870
+ inputs_embeds: Optional[torch.FloatTensor] = None,
871
+ labels: Optional[torch.LongTensor] = None,
872
+ use_cache: Optional[bool] = None,
873
+ output_attentions: Optional[bool] = None,
874
+ output_hidden_states: Optional[bool] = None,
875
+ return_dict: Optional[bool] = None,
876
+ cache_position: Optional[torch.LongTensor] = None,
877
+ logits_to_keep: Union[int, torch.Tensor] = 0,
878
+ **kwargs: Unpack[KwargsForCausalLM],
879
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
880
+ r"""
881
+ Args:
882
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
883
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
884
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
885
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
886
+
887
+ logits_to_keep (`int` or `torch.Tensor`, *optional*):
888
+ If an `int`, compute logits for the last `logits_to_keep` tokens. If `0`, calculate logits for all
889
+ `input_ids` (special case). Only last token logits are needed for generation, and calculating them only for that
890
+ token can save memory, which becomes pretty significant for long sequences or large vocabulary size.
891
+ If a `torch.Tensor`, must be 1D corresponding to the indices to keep in the sequence length dimension.
892
+ This is useful when using packed tensor format (single dimension for batch and sequence length).
893
+
894
+ Returns:
895
+
896
+ Example:
897
+
898
+ ```python
899
+ >>> from transformers import AutoTokenizer, Phi3ForCausalLM
900
+
901
+ >>> model = Phi3ForCausalLM.from_pretrained("meta-phi3/Phi3-2-7b-hf")
902
+ >>> tokenizer = AutoTokenizer.from_pretrained("meta-phi3/Phi3-2-7b-hf")
903
+
904
+ >>> prompt = "Hey, are you conscious? Can you talk to me?"
905
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
906
+
907
+ >>> # Generate
908
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
909
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
910
+ "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
911
+ ```"""
912
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
913
+ output_hidden_states = (
914
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
915
+ )
916
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
917
+
918
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
919
+ outputs = self.model(
920
+ input_ids=input_ids,
921
+ attention_mask=attention_mask,
922
+ position_ids=position_ids,
923
+ past_key_values=past_key_values,
924
+ inputs_embeds=inputs_embeds,
925
+ use_cache=use_cache,
926
+ output_attentions=output_attentions,
927
+ output_hidden_states=output_hidden_states,
928
+ return_dict=return_dict,
929
+ cache_position=cache_position,
930
+ **kwargs,
931
+ )
932
+
933
+ hidden_states = outputs[0]
934
+ # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
935
+ slice_indices = slice(-logits_to_keep, None) if isinstance(logits_to_keep, int) else logits_to_keep
936
+ logits = self.lm_head(hidden_states[:, slice_indices, :])
937
+
938
+ loss = None
939
+ if labels is not None:
940
+ loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size, **kwargs)
941
+
942
+ if not return_dict:
943
+ output = (logits,) + outputs[1:]
944
+ return (loss,) + output if loss is not None else output
945
+
946
+ return CausalLMOutputWithPast(
947
+ loss=loss,
948
+ logits=logits,
949
+ past_key_values=outputs.past_key_values,
950
+ hidden_states=outputs.hidden_states,
951
+ attentions=outputs.attentions,
952
+ )
953
+
954
+     def prepare_inputs_for_generation(
+         self,
+         input_ids,
+         past_key_values=None,
+         attention_mask=None,
+         inputs_embeds=None,
+         cache_position=None,
+         position_ids=None,
+         use_cache=True,
+         logits_to_keep=None,
+         **kwargs,
+     ):
+         # Overwritten -- this model may need to switch between the short and long rope factors, invalidating the
+         # cache in the process.
+ 
+         # The first time the input length crosses the long/short factor switching point, force the cache to be
+         # recomputed. This slows down that single token position, but it is better than the failure that would
+         # otherwise occur.
+         if (
+             past_key_values
+             and self.config.rope_scaling
+             and input_ids.shape[1] >= self.config.original_max_position_embeddings + 1
+         ):
+             past_length = cache_position[0]
+             if past_length <= self.config.original_max_position_embeddings:
+                 past_key_values = None
+ 
+         model_inputs = super().prepare_inputs_for_generation(
+             input_ids=input_ids,
+             past_key_values=past_key_values,
+             attention_mask=attention_mask,
+             inputs_embeds=inputs_embeds,
+             cache_position=cache_position,
+             position_ids=position_ids,
+             use_cache=use_cache,
+             logits_to_keep=logits_to_keep,
+             **kwargs,
+         )
+         return model_inputs
+ 
+ 
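For context on the cache invalidation above: in Phi-3's long-rope scaling, positions are embedded with the short factors while the sequence fits in the original training window, and with the long factors once it exceeds it, so key/value entries cached under the short factors become stale the moment the boundary is crossed. A hedged sketch of that selection rule (the `short_factor`/`long_factor` names follow the Phi-3 `rope_scaling` config format; the string values here are placeholders):

```python
def pick_rope_factors(seq_len: int, original_max: int, short_factor, long_factor):
    """Return the rescale factors long-rope would apply for a given sequence length."""
    # Beyond the original window the long factors apply to *all* positions, not just
    # the new ones -- which is why prepare_inputs_for_generation drops past_key_values
    # the first time this branch flips.
    return long_factor if seq_len > original_max else short_factor


print(pick_rope_factors(4096, 4096, "short", "long"))  # short -> cached KV still valid
print(pick_rope_factors(4097, 4096, "short", "long"))  # long  -> cached KV must be rebuilt
```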
+ @add_start_docstrings(
+     """
+     The Phi3 Model transformer with a sequence classification head on top (linear layer).
+ 
+     [`Phi3ForSequenceClassification`] uses the last token to perform classification, as other causal models
+     (e.g. GPT-2) do.
+ 
+     Since it classifies on the last token, it needs to know the position of that token. If a `pad_token_id` is
+     defined in the configuration, it finds the last token that is not a padding token in each row. If no
+     `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
+     padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (takes the last value
+     in each row of the batch).
+     """,
+     PHI3_START_DOCSTRING,
+ )
+ class Phi3ForSequenceClassification(Phi3PreTrainedModel):
+     def __init__(self, config):
+         super().__init__(config)
+         self.num_labels = config.num_labels
+         self.model = Phi3Model(config)
+         self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
+ 
+         # Initialize weights and apply final processing
+         self.post_init()
+ 
+     def get_input_embeddings(self):
+         return self.model.embed_tokens
+ 
+     def set_input_embeddings(self, value):
+         self.model.embed_tokens = value
+ 
+     @add_start_docstrings_to_model_forward(PHI3_INPUTS_DOCSTRING)
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
+         r"""
+         labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
+             Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
+             config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss);
+             if `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
+         """
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+ 
+         transformer_outputs = self.model(
+             input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+         hidden_states = transformer_outputs[0]
+         logits = self.score(hidden_states)
+ 
+         if input_ids is not None:
+             batch_size = input_ids.shape[0]
+         else:
+             batch_size = inputs_embeds.shape[0]
+ 
+         if self.config.pad_token_id is None and batch_size != 1:
+             raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
+         if self.config.pad_token_id is None:
+             last_non_pad_token = -1
+         elif input_ids is not None:
+             # To handle both left- and right-padding, take the rightmost token that is not equal to pad_token_id
+             non_pad_mask = (input_ids != self.config.pad_token_id).to(logits.device, torch.int32)
+             token_indices = torch.arange(input_ids.shape[-1], device=logits.device)
+             last_non_pad_token = (token_indices * non_pad_mask).argmax(-1)
+         else:
+             last_non_pad_token = -1
+             logger.warning_once(
+                 f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. Results may be "
+                 "unexpected if using padding tokens in conjunction with `inputs_embeds`."
+             )
+ 
+         pooled_logits = logits[torch.arange(batch_size, device=logits.device), last_non_pad_token]
+ 
+         loss = None
+         if labels is not None:
+             loss = self.loss_function(logits=logits, labels=labels, pooled_logits=pooled_logits, config=self.config)
+ 
+         if not return_dict:
+             output = (pooled_logits,) + transformer_outputs[1:]
+             return ((loss,) + output) if loss is not None else output
+ 
+         return SequenceClassifierOutputWithPast(
+             loss=loss,
+             logits=pooled_logits,
+             past_key_values=transformer_outputs.past_key_values,
+             hidden_states=transformer_outputs.hidden_states,
+             attentions=transformer_outputs.attentions,
+         )
+ 
+ 
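The last-non-pad pooling above leans on a compact trick: multiplying the position indices by a 0/1 non-pad mask and taking `argmax` yields the rightmost non-pad position, which works for both left- and right-padded batches. A self-contained sketch in plain `torch` (the pad id of 0 is purely illustrative):

```python
import torch

pad_token_id = 0  # illustrative pad id
input_ids = torch.tensor(
    [
        [5, 8, 9, 0, 0],  # right-padded: last real token at index 2
        [0, 0, 7, 3, 2],  # left-padded:  last real token at index 4
    ]
)
non_pad_mask = (input_ids != pad_token_id).int()
token_indices = torch.arange(input_ids.shape[-1])
# pad positions contribute 0, so argmax lands on the rightmost real token
last_non_pad_token = (token_indices * non_pad_mask).argmax(-1)
print(last_non_pad_token)  # tensor([2, 4])
```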
+ @add_start_docstrings(
+     """
+     The Phi3 Model transformer with a token classification head on top (a linear layer on top of the hidden-states
+     output), e.g. for Named-Entity-Recognition (NER) tasks.
+     """,
+     PHI3_START_DOCSTRING,
+ )
+ class Phi3ForTokenClassification(Phi3PreTrainedModel):
+     def __init__(self, config):
+         super().__init__(config)
+         self.num_labels = config.num_labels
+         self.model = Phi3Model(config)
+         if getattr(config, "classifier_dropout", None) is not None:
+             classifier_dropout = config.classifier_dropout
+         elif getattr(config, "hidden_dropout", None) is not None:
+             classifier_dropout = config.hidden_dropout
+         else:
+             classifier_dropout = 0.1
+         self.dropout = nn.Dropout(classifier_dropout)
+         self.score = nn.Linear(config.hidden_size, config.num_labels)
+ 
+         # Initialize weights and apply final processing
+         self.post_init()
+ 
+     def get_input_embeddings(self):
+         return self.model.embed_tokens
+ 
+     def set_input_embeddings(self, value):
+         self.model.embed_tokens = value
+ 
+     @add_start_docstrings_to_model_forward(PHI3_INPUTS_DOCSTRING)
+     @add_code_sample_docstrings(
+         checkpoint=_CHECKPOINT_FOR_DOC,
+         output_type=TokenClassifierOutput,
+         config_class=_CONFIG_FOR_DOC,
+     )
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[List[torch.FloatTensor]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[Tuple, TokenClassifierOutput]:
+         r"""
+         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+             Labels for computing the token classification loss. Indices should be in `[0, ...,
+             config.num_labels - 1]` (Cross-Entropy).
+         """
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+ 
+         outputs = self.model(
+             input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+         sequence_output = outputs[0]
+         sequence_output = self.dropout(sequence_output)
+         logits = self.score(sequence_output)
+ 
+         loss = None
+         if labels is not None:
+             loss = self.loss_function(logits, labels, self.config)
+ 
+         if not return_dict:
+             output = (logits,) + outputs[2:]
+             return ((loss,) + output) if loss is not None else output
+ 
+         return TokenClassifierOutput(
+             loss=loss,
+             logits=logits,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
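To make the token-classification shapes concrete, here is a hedged sketch that runs the head on a tiny, randomly initialized config (every size is made up for speed, this is not a trained checkpoint, and it assumes a `transformers` version that exports the Phi-3 classes):

```python
import torch
from transformers import Phi3Config, Phi3ForTokenClassification

# Tiny throwaway configuration -- all sizes here are illustrative only.
config = Phi3Config(
    vocab_size=100,
    hidden_size=32,
    intermediate_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
    num_key_value_heads=4,
    num_labels=5,
)
model = Phi3ForTokenClassification(config)

input_ids = torch.randint(0, config.vocab_size, (2, 10))
labels = torch.randint(0, config.num_labels, (2, 10))  # one label per token
outputs = model(input_ids=input_ids, labels=labels)

print(outputs.logits.shape)  # torch.Size([2, 10, 5]) -- per-token logits
print(outputs.loss)          # scalar cross-entropy averaged over all token positions
```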
training_args.bin CHANGED
Binary files a/training_args.bin and b/training_args.bin differ