Safetensors
xalma
haoranxu committed on
Commit 03efcad · verified · 1 Parent(s): 4b418b2

Upload modeling_xalma.py

Files changed (1)
  1. modeling_xalma.py +1748 -0
modeling_xalma.py ADDED
@@ -0,0 +1,1748 @@
1
+ # coding=utf-8
2
+ # Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
5
+ # and OPT implementations in this library. It has been modified from its
6
+ # original forms to accommodate minor architectural differences compared
7
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
8
+ #
9
+ # Licensed under the Apache License, Version 2.0 (the "License");
10
+ # you may not use this file except in compliance with the License.
11
+ # You may obtain a copy of the License at
12
+ #
13
+ # http://www.apache.org/licenses/LICENSE-2.0
14
+ #
15
+ # Unless required by applicable law or agreed to in writing, software
16
+ # distributed under the License is distributed on an "AS IS" BASIS,
17
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18
+ # See the License for the specific language governing permissions and
19
+ # limitations under the License.
20
+ import math
21
+ from typing import List, Optional, Tuple, Union
22
+
23
+ import torch
24
+ import torch.nn.functional as F
25
+ import torch.utils.checkpoint
26
+ from torch import nn
27
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
28
+
29
+ from transformers.activations import ACT2FN
30
+ from transformers.cache_utils import Cache, DynamicCache, StaticCache
31
+ from transformers.generation import GenerationMixin
32
+ from transformers.modeling_attn_mask_utils import AttentionMaskConverter
33
+ from transformers.modeling_flash_attention_utils import _flash_attention_forward
34
+ from transformers.modeling_outputs import (
35
+ BaseModelOutputWithPast,
36
+ CausalLMOutputWithPast,
37
+ QuestionAnsweringModelOutput,
38
+ SequenceClassifierOutputWithPast,
39
+ TokenClassifierOutput,
40
+ )
41
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS
42
+ from transformers.modeling_utils import PreTrainedModel
43
+ from transformers.pytorch_utils import ALL_LAYERNORM_LAYERS
44
+ from transformers.utils import (
45
+ add_start_docstrings,
46
+ add_start_docstrings_to_model_forward,
47
+ is_flash_attn_greater_or_equal_2_10,
48
+ is_torchdynamo_compiling,
49
+ logging,
50
+ replace_return_docstrings,
51
+ )
52
+ from transformers.models.llama.configuration_llama import LlamaConfig
53
+
54
+
55
+ logger = logging.get_logger(__name__)
56
+
57
+ _CONFIG_FOR_DOC = "LlamaConfig"
58
+
59
+ LANG_TABLE = {
60
+ "en": "English",
61
+ # Group 1:
62
+ "da": "Danish",
63
+ "nl": "Dutch",
64
+ "de": "German",
65
+ "is": "Icelandic",
66
+ "no": "Norwegian",
67
+ "sv": "Swedish",
68
+ "af": "Afrikaans",
69
+ # Group 2:
70
+ "ca": "Catalan",
71
+ "ro": "Romanian",
72
+ "gl": "Galician",
73
+ "it": "Italian",
74
+ "pt": "Portuguese",
75
+ "es": "Spanish",
76
+ # Group 3:
77
+ "bg": "Bulgarian",
78
+ "mk": "Macedonian",
79
+ "sr": "Serbian",
80
+ "uk": "Ukrainian",
81
+ "ru": "Russian",
82
+ # Group 4:
83
+ "id": "Indonesian",
84
+ "ms": "Malay",
85
+ "th": "Thai",
86
+ "vi": "Vietnamese",
87
+ "mg": "Malagasy",
88
+ "fr": "French",
89
+ # Group 5:
90
+ "hu": "Hungarian",
91
+ "el": "Greek",
92
+ "cs": "Czech",
93
+ "pl": "Polish",
94
+ "lt": "Lithuanian",
95
+ "lv": "Latvian",
96
+ # Group 6:
97
+ "ka": "Georgian",
98
+ "zh": "Chinese",
99
+ "ja": "Japanese",
100
+ "ko": "Korean",
101
+ "fi": "Finnish",
102
+ "et": "Estonian",
103
+ # Group 7:
104
+ "gu": "Gujarati",
105
+ "hi": "Hindi",
106
+ "mr": "Marathi",
107
+ "ne": "Nepali",
108
+ "ur": "Urdu",
109
+ # Group 8:
110
+ "az": "Azerbaijani",
111
+ "kk": "Kazakh",
112
+ "ky": "Kyrgyz",
113
+ "tr": "Turkish",
114
+ "uz": "Uzbek",
115
+ "ar": "Arabic",
116
+ "he": "Hebrew",
117
+ "fa": "Persian",
118
+ }
119
+
120
+ GROUP2LANG = {
121
+ 1: ["da", "nl", "de", "is", "no", "sv", "af"],
122
+ 2: ["ca", "ro", "gl", "it", "pt", "es"],
123
+ 3: ["bg", "mk", "sr", "uk", "ru"],
124
+ 4: ["id", "ms", "th", "vi", "mg", "fr"],
125
+ 5: ["hu", "el", "cs", "pl", "lt", "lv"],
126
+ 6: ["ka", "zh", "ja", "ko", "fi", "et"],
127
+ 7: ["gu", "hi", "mr", "ne", "ur"],
128
+ 8: ["az", "kk", "ky", "tr", "uz", "ar", "he", "fa"],
129
+ }
130
+
131
+ LANG2GROUP = {lang: str(group) for group, langs in GROUP2LANG.items() for lang in langs}
132
+ LORA_ALPHA = 2
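+ # Note (descriptive comment, not in the upstream file): each language code above is routed to
+ # exactly one of the eight LoRA groups, e.g. LANG2GROUP["de"] == "1" and LANG2GROUP["zh"] == "6".
+ # LORA_ALPHA scales the merged low-rank update, so the effective weight used below is W + LORA_ALPHA * (B @ A).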
133
+
134
+ def _prepare_4d_causal_attention_mask_with_cache_position(
135
+ attention_mask: torch.Tensor,
136
+ sequence_length: int,
137
+ target_length: int,
138
+ dtype: torch.dtype,
139
+ device: torch.device,
140
+ min_dtype: float,
141
+ cache_position: torch.Tensor,
142
+ batch_size: int,
143
+ ):
144
+ """
145
+ Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
146
+ `(batch_size, key_value_length)`, or, if the input `attention_mask` is already 4D, returns it unchanged.
147
+
148
+ Args:
149
+ attention_mask (`torch.Tensor`):
150
+ A 2D attention mask of shape `(batch_size, key_value_length)` or a 4D attention mask of shape `(batch_size, 1, query_length, key_value_length)`.
151
+ sequence_length (`int`):
152
+ The sequence length being processed.
153
+ target_length (`int`):
154
+ The target length: when generating with a static cache, the mask should be as long as the static cache to account for the 0 padding, i.e. the part of the cache that is not filled yet.
155
+ dtype (`torch.dtype`):
156
+ The dtype to use for the 4D attention mask.
157
+ device (`torch.device`):
158
+ The device to place the 4D attention mask on.
159
+ min_dtype (`float`):
160
+ The minimum value representable with the dtype `dtype`.
161
+ cache_position (`torch.Tensor`):
162
+ Indices depicting the position of the input sequence tokens in the sequence.
163
+ batch_size (`int`):
164
+ Batch size.
165
+ """
166
+ if attention_mask is not None and attention_mask.dim() == 4:
167
+ # In this case we assume that the mask comes already in inverted form and requires no inversion or slicing.
168
+ causal_mask = attention_mask
169
+ else:
170
+ causal_mask = torch.full((sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device)
171
+ if sequence_length != 1:
172
+ causal_mask = torch.triu(causal_mask, diagonal=1)
173
+ causal_mask *= torch.arange(target_length, device=device) > cache_position.reshape(-1, 1)
174
+ causal_mask = causal_mask[None, None, :, :].expand(batch_size, 1, -1, -1)
175
+ if attention_mask is not None:
176
+ causal_mask = causal_mask.clone() # copy to contiguous memory for in-place edit
177
+ mask_length = attention_mask.shape[-1]
178
+ padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :]
179
+ padding_mask = padding_mask == 0
180
+ causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill(
181
+ padding_mask, min_dtype
182
+ )
183
+
184
+ return causal_mask
185
+
186
+
187
+ class LlamaRMSNorm(nn.Module):
188
+ def __init__(self, hidden_size, eps=1e-6):
189
+ """
190
+ LlamaRMSNorm is equivalent to T5LayerNorm
191
+ """
192
+ super().__init__()
193
+ self.weight = nn.Parameter(torch.ones(hidden_size))
194
+ self.variance_epsilon = eps
195
+
196
+ def forward(self, hidden_states):
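+ # Descriptive note: RMSNorm computes weight * x / sqrt(mean(x^2, dim=-1) + eps), with the
+ # statistics accumulated in float32 before casting back to the input dtype.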
197
+ input_dtype = hidden_states.dtype
198
+ hidden_states = hidden_states.to(torch.float32)
199
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
200
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
201
+ return self.weight * hidden_states.to(input_dtype)
202
+
203
+ def extra_repr(self):
204
+ return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"
205
+
206
+
207
+ ALL_LAYERNORM_LAYERS.append(LlamaRMSNorm)
208
+
209
+
210
+ class LlamaRotaryEmbedding(nn.Module):
211
+ def __init__(
212
+ self,
213
+ dim=None,
214
+ max_position_embeddings=2048,
215
+ base=10000,
216
+ device=None,
217
+ scaling_factor=1.0,
218
+ rope_type="default",
219
+ config: Optional[LlamaConfig] = None,
220
+ ):
221
+ super().__init__()
222
+ # TODO (joao): remove the `if` below, only used for BC
223
+ self.rope_kwargs = {}
224
+ if config is None:
225
+ logger.warning_once(
226
+ "`LlamaRotaryEmbedding` can now be fully parameterized by passing the model config through the "
227
+ "`config` argument. All other arguments will be removed in v4.46"
228
+ )
229
+ self.rope_kwargs = {
230
+ "rope_type": rope_type,
231
+ "factor": scaling_factor,
232
+ "dim": dim,
233
+ "base": base,
234
+ "max_position_embeddings": max_position_embeddings,
235
+ }
236
+ self.rope_type = rope_type
237
+ self.max_seq_len_cached = max_position_embeddings
238
+ self.original_max_seq_len = max_position_embeddings
239
+ else:
240
+ # BC: "rope_type" was originally "type"
241
+ if config.rope_scaling is not None:
242
+ self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
243
+ else:
244
+ self.rope_type = "default"
245
+ self.max_seq_len_cached = config.max_position_embeddings
246
+ self.original_max_seq_len = config.max_position_embeddings
247
+
248
+ self.config = config
249
+ self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
250
+
251
+ inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device, **self.rope_kwargs)
252
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
253
+ self.original_inv_freq = self.inv_freq
254
+
255
+ def _dynamic_frequency_update(self, position_ids, device):
256
+ """
257
+ dynamic RoPE layers should recompute `inv_freq` in the following situations:
258
+ 1 - growing beyond the cached sequence length (allow scaling)
259
+ 2 - the current sequence length is in the original scale (avoid losing precision with small sequences)
260
+ """
261
+ seq_len = torch.max(position_ids) + 1
262
+ if seq_len > self.max_seq_len_cached: # growth
263
+ inv_freq, self.attention_scaling = self.rope_init_fn(
264
+ self.config, device, seq_len=seq_len, **self.rope_kwargs
265
+ )
266
+ self.register_buffer("inv_freq", inv_freq, persistent=False) # TODO joao: may break with compilation
267
+ self.max_seq_len_cached = seq_len
268
+
269
+ if seq_len < self.original_max_seq_len and self.max_seq_len_cached > self.original_max_seq_len: # reset
270
+ self.register_buffer("inv_freq", self.original_inv_freq, persistent=False)
271
+ self.max_seq_len_cached = self.original_max_seq_len
272
+
273
+ @torch.no_grad()
274
+ def forward(self, x, position_ids):
275
+ if "dynamic" in self.rope_type:
276
+ self._dynamic_frequency_update(position_ids, device=x.device)
277
+
278
+ # Core RoPE block
279
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1)
280
+ position_ids_expanded = position_ids[:, None, :].float()
281
+ # Force float32 (see https://github.com/huggingface/transformers/pull/29285)
282
+ device_type = x.device.type
283
+ device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu"
284
+ with torch.autocast(device_type=device_type, enabled=False):
285
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
286
+ emb = torch.cat((freqs, freqs), dim=-1)
287
+ cos = emb.cos()
288
+ sin = emb.sin()
289
+
290
+ # Advanced RoPE types (e.g. yarn) apply a post-processing scaling factor, equivalent to scaling attention
291
+ cos = cos * self.attention_scaling
292
+ sin = sin * self.attention_scaling
293
+
294
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
295
+
296
+
297
+ class LlamaLinearScalingRotaryEmbedding(LlamaRotaryEmbedding):
298
+ """LlamaRotaryEmbedding extended with linear scaling. Credits to the Reddit user /u/kaiokendev"""
299
+
300
+ def __init__(self, *args, **kwargs):
301
+ logger.warning_once(
302
+ "`LlamaLinearScalingRotaryEmbedding` is deprecated and will be removed in v4.46. Please use "
303
+ "`LlamaRotaryEmbedding`, which now also does linear scaling (simply pass the model config to __init__)."
304
+ )
305
+ kwargs["rope_type"] = "linear"
306
+ super().__init__(*args, **kwargs)
307
+
308
+
309
+ class LlamaDynamicNTKScalingRotaryEmbedding(LlamaRotaryEmbedding):
310
+ """LlamaRotaryEmbedding extended with Dynamic NTK scaling. Credits to the Reddit users /u/bloc97 and /u/emozilla"""
311
+
312
+ def __init__(self, *args, **kwargs):
313
+ logger.warning_once(
314
+ "`LlamaDynamicNTKScalingRotaryEmbedding` is deprecated and will be removed in v4.46. Please use "
315
+ "`LlamaRotaryEmbedding`, which now also does dynamic ntk scaling (simply pass the model config to "
316
+ "__init__)."
317
+ )
318
+ kwargs["rope_type"] = "dynamic"
319
+ super().__init__(*args, **kwargs)
320
+
321
+
322
+ def rotate_half(x):
323
+ """Rotates half the hidden dims of the input."""
324
+ x1 = x[..., : x.shape[-1] // 2]
325
+ x2 = x[..., x.shape[-1] // 2 :]
326
+ return torch.cat((-x2, x1), dim=-1)
327
+
328
+
329
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
330
+ """Applies Rotary Position Embedding to the query and key tensors.
331
+
332
+ Args:
333
+ q (`torch.Tensor`): The query tensor.
334
+ k (`torch.Tensor`): The key tensor.
335
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
336
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
337
+ position_ids (`torch.Tensor`, *optional*):
338
+ Deprecated and unused.
339
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
340
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
341
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
342
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
343
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
344
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
345
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
346
+ Returns:
347
+ `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
348
+ """
349
+ cos = cos.unsqueeze(unsqueeze_dim)
350
+ sin = sin.unsqueeze(unsqueeze_dim)
351
+ q_embed = (q * cos) + (rotate_half(q) * sin)
352
+ k_embed = (k * cos) + (rotate_half(k) * sin)
353
+ return q_embed, k_embed
354
+
355
+
356
+ class LlamaMLP(nn.Module):
357
+ def __init__(self, config):
358
+ super().__init__()
359
+ self.config = config
360
+ self.hidden_size = config.hidden_size
361
+ self.intermediate_size = config.intermediate_size
362
+ self.lora_size = 512
363
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=config.mlp_bias)
364
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=config.mlp_bias)
365
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=config.mlp_bias)
366
+ self.act_fn = ACT2FN[config.hidden_act]
367
+ self.gate_lora_A = nn.ModuleDict({str(i): nn.Linear(self.hidden_size, self.lora_size, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
368
+ self.gate_lora_B = nn.ModuleDict({str(i): nn.Linear(self.lora_size, self.intermediate_size, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
369
+ self.up_lora_A = nn.ModuleDict({str(i): nn.Linear(self.hidden_size, self.lora_size, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
370
+ self.up_lora_B = nn.ModuleDict({str(i): nn.Linear(self.lora_size, self.intermediate_size, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
371
+ self.down_lora_A = nn.ModuleDict({str(i): nn.Linear(self.intermediate_size, self.lora_size, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
372
+ self.down_lora_B = nn.ModuleDict({str(i): nn.Linear(self.lora_size, self.hidden_size, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
373
+
374
+ def forward(self, x, lang=""):
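+ # Per-language LoRA merge (descriptive note): the effective projection weight is
+ # W + LORA_ALPHA * (B @ A), with the rank-`lora_size` pair (A, B) selected by the
+ # language group LANG2GROUP[lang].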
375
+ gate_proj_weight = self.gate_proj.weight + self.gate_lora_B[LANG2GROUP[lang]].weight @ self.gate_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
376
+ up_proj_weight = self.up_proj.weight + self.up_lora_B[LANG2GROUP[lang]].weight @ self.up_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
377
+ down_proj_weight = self.down_proj.weight + self.down_lora_B[LANG2GROUP[lang]].weight @ self.down_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
378
+
379
+ if self.config.pretraining_tp > 1:
380
+ slice = self.intermediate_size // self.config.pretraining_tp
381
+ gate_proj_slices = gate_proj_weight.split(slice, dim=0)
382
+ up_proj_slices = up_proj_weight.split(slice, dim=0)
383
+ down_proj_slices = down_proj_weight.split(slice, dim=1)
384
+
385
+ gate_proj = torch.cat(
386
+ [F.linear(x, gate_proj_slices[i]) for i in range(self.config.pretraining_tp)], dim=-1
387
+ )
388
+ up_proj = torch.cat([F.linear(x, up_proj_slices[i]) for i in range(self.config.pretraining_tp)], dim=-1)
389
+
390
+ intermediate_states = (self.act_fn(gate_proj) * up_proj).split(slice, dim=2)
391
+ down_proj = [
392
+ F.linear(intermediate_states[i], down_proj_slices[i]) for i in range(self.config.pretraining_tp)
393
+ ]
394
+ down_proj = sum(down_proj)
395
+ else:
396
+ x = self.act_fn(F.linear(x, gate_proj_weight)) * F.linear(x, up_proj_weight)
397
+ down_proj = F.linear(x, down_proj_weight)
398
+ return down_proj
399
+
400
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
401
+ """
402
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
403
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
404
+ """
405
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
406
+ if n_rep == 1:
407
+ return hidden_states
408
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
409
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
410
+
411
+
412
+ class LlamaAttention(nn.Module):
413
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
414
+
415
+ def __init__(self, config: LlamaConfig, layer_idx: Optional[int] = None):
416
+ super().__init__()
417
+ self.config = config
418
+ self.layer_idx = layer_idx
419
+ if layer_idx is None:
420
+ logger.warning_once(
421
+ f"Instantiating {self.__class__.__name__} without passing a `layer_idx` is not recommended and will "
422
+ "lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` "
423
+ "when creating this class."
424
+ )
425
+
426
+ self.attention_dropout = config.attention_dropout
427
+ self.hidden_size = config.hidden_size
428
+ self.num_heads = config.num_attention_heads
429
+ self.head_dim = getattr(config, "head_dim", self.hidden_size // self.num_heads)
430
+ self.num_key_value_heads = config.num_key_value_heads
431
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
432
+ self.max_position_embeddings = config.max_position_embeddings
433
+ self.rope_theta = config.rope_theta
434
+ self.is_causal = True
435
+ self.lora_size = 512
436
+
437
+ self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.attention_bias)
438
+ self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
439
+ self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
440
+ self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=config.attention_bias)
441
+
442
+ self.q_lora_A = nn.ModuleDict({str(i): nn.Linear(self.hidden_size, self.lora_size, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
443
+ self.q_lora_B = nn.ModuleDict({str(i): nn.Linear(self.lora_size, self.num_heads * self.head_dim, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
444
+ self.k_lora_A = nn.ModuleDict({str(i): nn.Linear(self.hidden_size, self.lora_size, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
445
+ self.k_lora_B = nn.ModuleDict({str(i): nn.Linear(self.lora_size, self.num_key_value_heads * self.head_dim, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
446
+ self.v_lora_A = nn.ModuleDict({str(i): nn.Linear(self.hidden_size, self.lora_size, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
447
+ self.v_lora_B = nn.ModuleDict({str(i): nn.Linear(self.lora_size, self.num_key_value_heads * self.head_dim, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
448
+ self.o_lora_A = nn.ModuleDict({str(i): nn.Linear(self.num_heads * self.head_dim, self.lora_size, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
449
+ self.o_lora_B = nn.ModuleDict({str(i): nn.Linear(self.lora_size, self.hidden_size, bias=False) for i in range(1, len(GROUP2LANG) + 1)})
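+ # Descriptive note: every attention projection (q/k/v/o) carries one rank-`lora_size` LoRA pair
+ # (A, B) per language group; the group-specific pair is merged into the base weight in `forward`
+ # via W + LORA_ALPHA * (B @ A).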
450
+
451
+ # TODO (joao): remove in v4.46 (RoPE is computed in the model, not in the decoder layers)
452
+ self.rotary_emb = LlamaRotaryEmbedding(config=self.config)
453
+
454
+ def forward(
455
+ self,
456
+ hidden_states: torch.Tensor,
457
+ attention_mask: Optional[torch.Tensor] = None,
458
+ position_ids: Optional[torch.LongTensor] = None,
459
+ past_key_value: Optional[Cache] = None,
460
+ output_attentions: bool = False,
461
+ use_cache: bool = False,
462
+ cache_position: Optional[torch.LongTensor] = None,
463
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # will become mandatory in v4.46
464
+ lang: str = "",
465
+ **kwargs,
466
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
467
+ q_proj_weight = self.q_proj.weight + self.q_lora_B[LANG2GROUP[lang]].weight @ self.q_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
468
+ k_proj_weight = self.k_proj.weight + self.k_lora_B[LANG2GROUP[lang]].weight @ self.k_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
469
+ v_proj_weight = self.v_proj.weight + self.v_lora_B[LANG2GROUP[lang]].weight @ self.v_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
470
+ o_proj_weight = self.o_proj.weight + self.o_lora_B[LANG2GROUP[lang]].weight @ self.o_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
471
+
472
+ bsz, q_len, _ = hidden_states.size()
473
+
474
+ if self.config.pretraining_tp > 1:
475
+ key_value_slicing = (self.num_key_value_heads * self.head_dim) // self.config.pretraining_tp
476
+ query_slices = q_proj_weight.split(
477
+ (self.num_heads * self.head_dim) // self.config.pretraining_tp, dim=0
478
+ )
479
+ key_slices = k_proj_weight.split(key_value_slicing, dim=0)
480
+ value_slices = v_proj_weight.split(key_value_slicing, dim=0)
481
+
482
+ query_states = [F.linear(hidden_states, query_slices[i]) for i in range(self.config.pretraining_tp)]
483
+ query_states = torch.cat(query_states, dim=-1)
484
+
485
+ key_states = [F.linear(hidden_states, key_slices[i]) for i in range(self.config.pretraining_tp)]
486
+ key_states = torch.cat(key_states, dim=-1)
487
+
488
+ value_states = [F.linear(hidden_states, value_slices[i]) for i in range(self.config.pretraining_tp)]
489
+ value_states = torch.cat(value_states, dim=-1)
490
+
491
+ else:
492
+ query_states = F.linear(hidden_states, q_proj_weight)
493
+ key_states = F.linear(hidden_states, k_proj_weight)
494
+ value_states = F.linear(hidden_states, v_proj_weight)
495
+
496
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
497
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
498
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
499
+
500
+ if position_embeddings is None:
501
+ logger.warning_once(
502
+ "The attention layers in this model are transitioning from computing the RoPE embeddings internally "
503
+ "through `position_ids` (2D tensor with the indexes of the tokens), to using externally computed "
504
+ "`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.46 `position_ids` will be "
505
+ "removed and `position_embeddings` will be mandatory."
506
+ )
507
+ cos, sin = self.rotary_emb(value_states, position_ids)
508
+ else:
509
+ cos, sin = position_embeddings
510
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
511
+
512
+ if past_key_value is not None:
513
+ # sin and cos are specific to RoPE models; cache_position needed for the static cache
514
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
515
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
516
+
517
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
518
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
519
+ attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
520
+
521
+ if attention_mask is not None: # no matter the length, we just slice it
522
+ causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
523
+ attn_weights = attn_weights + causal_mask
524
+
525
+ # upcast attention to fp32
526
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
527
+ attn_weights = nn.functional.dropout(attn_weights, p=self.attention_dropout, training=self.training)
528
+ attn_output = torch.matmul(attn_weights, value_states)
529
+
530
+ if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
531
+ raise ValueError(
532
+ f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
533
+ f" {attn_output.size()}"
534
+ )
535
+
536
+ attn_output = attn_output.transpose(1, 2).contiguous()
537
+
538
+ attn_output = attn_output.reshape(bsz, q_len, -1)
539
+
540
+ if self.config.pretraining_tp > 1:
541
+ attn_output = attn_output.split(self.hidden_size // self.config.pretraining_tp, dim=2)
542
+ o_proj_slices = o_proj_weight.split(self.hidden_size // self.config.pretraining_tp, dim=1)
543
+ attn_output = sum([F.linear(attn_output[i], o_proj_slices[i]) for i in range(self.config.pretraining_tp)])
544
+ else:
545
+ attn_output = F.linear(attn_output, o_proj_weight)
546
+
547
+ if not output_attentions:
548
+ attn_weights = None
549
+
550
+ return attn_output, attn_weights, past_key_value
551
+
552
+
553
+ class LlamaFlashAttention2(LlamaAttention):
554
+ """
555
+ Llama flash attention module. This module inherits from `LlamaAttention` as the weights of the module stay
556
+ untouched. The only required change would be on the forward pass where it needs to correctly call the public API of
557
+ flash attention and deal with padding tokens in case the input contains any of them.
558
+ """
559
+
560
+ def __init__(self, *args, **kwargs):
561
+ super().__init__(*args, **kwargs)
562
+
563
+ # TODO: Should be removed once Flash Attention for RoCm is bumped to 2.1.
564
+ # flash_attn<2.1 generates a top-left aligned causal mask, while what is needed here is bottom-right alignment, which became the default for flash_attn>=2.1. This attribute is used to handle this difference. Reference: https://github.com/Dao-AILab/flash-attention/releases/tag/v2.1.0.
565
+ # Beware that with flash_attn<2.1, using q_seqlen != k_seqlen (except for the case q_seqlen == 1) produces a wrong mask (top-left).
566
+ self._flash_attn_uses_top_left_mask = not is_flash_attn_greater_or_equal_2_10()
567
+
568
+ def forward(
569
+ self,
570
+ hidden_states: torch.Tensor,
571
+ attention_mask: Optional[torch.LongTensor] = None,
572
+ position_ids: Optional[torch.LongTensor] = None,
573
+ past_key_value: Optional[Cache] = None,
574
+ output_attentions: bool = False,
575
+ use_cache: bool = False,
576
+ cache_position: Optional[torch.LongTensor] = None,
577
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # will become mandatory in v4.46
578
+ lang: str = "",
579
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
580
+ if isinstance(past_key_value, StaticCache):
581
+ raise ValueError(
582
+ "`static` cache implementation is not compatible with `attn_implementation==flash_attention_2` "
583
+ "make sure to use `sdpa` in the meantime, and open an issue at https://github.com/huggingface/transformers"
584
+ )
585
+ q_proj_weight = self.q_proj.weight + self.q_lora_B[LANG2GROUP[lang]].weight @ self.q_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
586
+ k_proj_weight = self.k_proj.weight + self.k_lora_B[LANG2GROUP[lang]].weight @ self.k_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
587
+ v_proj_weight = self.v_proj.weight + self.v_lora_B[LANG2GROUP[lang]].weight @ self.v_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
588
+ o_proj_weight = self.o_proj.weight + self.o_lora_B[LANG2GROUP[lang]].weight @ self.o_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
589
+
590
+ output_attentions = False
591
+
592
+ bsz, q_len, _ = hidden_states.size()
593
+
594
+ query_states = F.linear(hidden_states, q_proj_weight)
595
+ key_states = F.linear(hidden_states, k_proj_weight)
596
+ value_states = F.linear(hidden_states, v_proj_weight)
597
+
598
+ # Flash attention requires the input to have the shape
599
+ # batch_size x seq_length x head_dim x hidden_dim
600
+ # therefore we just need to keep the original shape
601
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
602
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
603
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
604
+
605
+ if position_embeddings is None:
606
+ logger.warning_once(
607
+ "The attention layers in this model are transitioning from computing the RoPE embeddings internally "
608
+ "through `position_ids` (2D tensor with the indexes of the tokens), to using externally computed "
609
+ "`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.46 `position_ids` will be "
610
+ "removed and `position_embeddings` will be mandatory."
611
+ )
612
+ cos, sin = self.rotary_emb(value_states, position_ids)
613
+ else:
614
+ cos, sin = position_embeddings
615
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
616
+
617
+ if past_key_value is not None:
618
+ # sin and cos are specific to RoPE models; cache_position needed for the static cache
619
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
620
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
621
+
622
+ # TODO: These transposes are quite inefficient, but Flash Attention requires the layout [batch_size, sequence_length, num_heads, head_dim]. We would need to refactor the KV cache
623
+ # to be able to avoid many of these transpose/reshape/view.
624
+ query_states = query_states.transpose(1, 2)
625
+ key_states = key_states.transpose(1, 2)
626
+ value_states = value_states.transpose(1, 2)
627
+
628
+ dropout_rate = self.attention_dropout if self.training else 0.0
629
+
630
+ # In PEFT, usually we cast the layer norms in float32 for training stability reasons
631
+ # therefore the input hidden states get silently cast to float32. Hence, we need to
632
+ # cast them back to the correct dtype just to be sure everything works as expected.
633
+ # This might slow down training & inference, so it is recommended not to cast the LayerNorms
634
+ # in fp32. (LlamaRMSNorm handles it correctly)
635
+
636
+ input_dtype = query_states.dtype
637
+ if input_dtype == torch.float32:
638
+ if torch.is_autocast_enabled():
639
+ target_dtype = torch.get_autocast_gpu_dtype()
640
+ # Handle the case where the model is quantized
641
+ elif hasattr(self.config, "_pre_quantization_dtype"):
642
+ target_dtype = self.config._pre_quantization_dtype
643
+ else:
644
+ target_dtype = q_proj_weight.dtype
645
+
646
+ logger.warning_once(
647
+ f"The input hidden states seem to be silently cast to float32; this might be related to"
648
+ f" the fact that you have upcast embedding or layer norm layers to float32. We will cast the input back to"
649
+ f" {target_dtype}."
650
+ )
651
+
652
+ query_states = query_states.to(target_dtype)
653
+ key_states = key_states.to(target_dtype)
654
+ value_states = value_states.to(target_dtype)
655
+
656
+ attn_output = _flash_attention_forward(
657
+ query_states,
658
+ key_states,
659
+ value_states,
660
+ attention_mask,
661
+ q_len,
662
+ position_ids=position_ids,
663
+ dropout=dropout_rate,
664
+ sliding_window=getattr(self, "sliding_window", None),
665
+ use_top_left_mask=self._flash_attn_uses_top_left_mask,
666
+ is_causal=self.is_causal,
667
+ )
668
+
669
+ attn_output = attn_output.reshape(bsz, q_len, -1).contiguous()
670
+ attn_output = F.linear(attn_output, o_proj_weight)
671
+
672
+ if not output_attentions:
673
+ attn_weights = None
674
+
675
+ return attn_output, attn_weights, past_key_value
676
+
677
+
678
+ class LlamaSdpaAttention(LlamaAttention):
679
+ """
680
+ Llama attention module using torch.nn.functional.scaled_dot_product_attention. This module inherits from
681
+ `LlamaAttention` as the weights of the module stays untouched. The only changes are on the forward pass to adapt to
682
+ SDPA API.
683
+ """
684
+
685
+ # Adapted from LlamaAttention.forward
686
+ def forward(
687
+ self,
688
+ hidden_states: torch.Tensor,
689
+ attention_mask: Optional[torch.Tensor] = None,
690
+ position_ids: Optional[torch.LongTensor] = None,
691
+ past_key_value: Optional[Cache] = None,
692
+ output_attentions: bool = False,
693
+ use_cache: bool = False,
694
+ cache_position: Optional[torch.LongTensor] = None,
695
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # will become mandatory in v4.46
696
+ lang: str = "",
697
+ **kwargs,
698
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
699
+ q_proj_weight = self.q_proj.weight + self.q_lora_B[LANG2GROUP[lang]].weight @ self.q_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
700
+ k_proj_weight = self.k_proj.weight + self.k_lora_B[LANG2GROUP[lang]].weight @ self.k_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
701
+ v_proj_weight = self.v_proj.weight + self.v_lora_B[LANG2GROUP[lang]].weight @ self.v_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
702
+ o_proj_weight = self.o_proj.weight + self.o_lora_B[LANG2GROUP[lang]].weight @ self.o_lora_A[LANG2GROUP[lang]].weight * LORA_ALPHA
703
+
704
+ if output_attentions:
705
+ # TODO: Improve this warning with e.g. `model.config.attn_implementation = "manual"` once this is implemented.
706
+ logger.warning_once(
707
+ "LlamaModel is using LlamaSdpaAttention, but `torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to the manual attention implementation, "
708
+ 'but specifying the manual implementation will be required from Transformers version v5.0.0 onwards. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
709
+ )
710
+ return super().forward(
711
+ hidden_states=hidden_states,
712
+ attention_mask=attention_mask,
713
+ position_ids=position_ids,
714
+ past_key_value=past_key_value,
715
+ output_attentions=output_attentions,
716
+ use_cache=use_cache,
717
+ cache_position=cache_position,
718
+ position_embeddings=position_embeddings,
+ lang=lang,  # forward the language code so the eager fallback selects the matching LoRA group
719
+ )
720
+
721
+ bsz, q_len, _ = hidden_states.size()
722
+
723
+ query_states = F.linear(hidden_states, q_proj_weight)
724
+ key_states = F.linear(hidden_states, k_proj_weight)
725
+ value_states = F.linear(hidden_states, v_proj_weight)
726
+
727
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
728
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
729
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
730
+
731
+ if position_embeddings is None:
732
+ logger.warning_once(
733
+ "The attention layers in this model are transitioning from computing the RoPE embeddings internally "
734
+ "through `position_ids` (2D tensor with the indexes of the tokens), to using externally computed "
735
+ "`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.46 `position_ids` will be "
736
+ "removed and `position_embeddings` will be mandatory."
737
+ )
738
+ cos, sin = self.rotary_emb(value_states, position_ids)
739
+ else:
740
+ cos, sin = position_embeddings
741
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
742
+
743
+ if past_key_value is not None:
744
+ # sin and cos are specific to RoPE models; cache_position needed for the static cache
745
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
746
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
747
+
748
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
749
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
750
+
751
+ causal_mask = attention_mask
752
+ if attention_mask is not None:
753
+ causal_mask = causal_mask[:, :, :, : key_states.shape[-2]]
754
+
755
+ # SDPA with memory-efficient backend is currently (torch==2.1.2) bugged with non-contiguous inputs with custom attn_mask,
756
+ # Reference: https://github.com/pytorch/pytorch/issues/112577.
757
+ if query_states.device.type == "cuda" and causal_mask is not None:
758
+ query_states = query_states.contiguous()
759
+ key_states = key_states.contiguous()
760
+ value_states = value_states.contiguous()
761
+
762
+ # We dispatch to SDPA's Flash Attention or Efficient kernels via this `is_causal` if statement instead of an inline conditional assignment
763
+ # in SDPA to support both torch.compile's dynamic shapes and full graph options. An inline conditional prevents dynamic shapes from compiling.
764
+ is_causal = True if causal_mask is None and q_len > 1 else False
765
+
766
+ attn_output = torch.nn.functional.scaled_dot_product_attention(
767
+ query_states,
768
+ key_states,
769
+ value_states,
770
+ attn_mask=causal_mask,
771
+ dropout_p=self.attention_dropout if self.training else 0.0,
772
+ is_causal=is_causal,
773
+ )
774
+
775
+ attn_output = attn_output.transpose(1, 2).contiguous()
776
+ attn_output = attn_output.view(bsz, q_len, -1)
777
+
778
+ attn_output = F.linear(attn_output, o_proj_weight)
779
+
780
+ return attn_output, None, past_key_value
781
+
782
+
783
+ LLAMA_ATTENTION_CLASSES = {
784
+ "eager": LlamaAttention,
785
+ "flash_attention_2": LlamaFlashAttention2,
786
+ "sdpa": LlamaSdpaAttention,
787
+ }
788
+
789
+
790
+ class LlamaDecoderLayer(nn.Module):
791
+ def __init__(self, config: LlamaConfig, layer_idx: int):
792
+ super().__init__()
793
+ self.hidden_size = config.hidden_size
794
+
795
+ self.self_attn = LLAMA_ATTENTION_CLASSES[config._attn_implementation](config=config, layer_idx=layer_idx)
796
+
797
+ self.mlp = LlamaMLP(config)
798
+ self.input_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
799
+ self.post_attention_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
800
+
801
+ def forward(
802
+ self,
803
+ hidden_states: torch.Tensor,
804
+ attention_mask: Optional[torch.Tensor] = None,
805
+ position_ids: Optional[torch.LongTensor] = None,
806
+ past_key_value: Optional[Cache] = None,
807
+ output_attentions: Optional[bool] = False,
808
+ use_cache: Optional[bool] = False,
809
+ cache_position: Optional[torch.LongTensor] = None,
810
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # will become mandatory in v4.46
811
+ lang: str = "",
812
+ **kwargs,
813
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
814
+ """
815
+ Args:
816
+ hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
817
+ attention_mask (`torch.FloatTensor`, *optional*):
818
+ attention mask of size `(batch_size, sequence_length)` if flash attention is used or `(batch_size, 1,
819
+ query_sequence_length, key_sequence_length)` if default attention is used.
820
+ output_attentions (`bool`, *optional*):
821
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
822
+ returned tensors for more detail.
823
+ use_cache (`bool`, *optional*):
824
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
825
+ (see `past_key_values`).
826
+ past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
827
+ cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*):
828
+ Indices depicting the position of the input sequence tokens in the sequence
829
+ position_embeddings (`Tuple[torch.FloatTensor, torch.FloatTensor]`, *optional*):
830
+ Tuple containing the cosine and sine positional embeddings of shape `(batch_size, seq_len, head_dim)`,
831
+ with `head_dim` being the embedding dimension of each attention head.
832
+ kwargs (`dict`, *optional*):
833
+ Arbitrary kwargs to be ignored, used for FSDP and other methods that inject code
834
+ into the model
835
+ """
836
+ residual = hidden_states
837
+
838
+ hidden_states = self.input_layernorm(hidden_states)
839
+
840
+ # Self Attention
841
+ hidden_states, self_attn_weights, present_key_value = self.self_attn(
842
+ hidden_states=hidden_states,
843
+ attention_mask=attention_mask,
844
+ position_ids=position_ids,
845
+ past_key_value=past_key_value,
846
+ output_attentions=output_attentions,
847
+ use_cache=use_cache,
848
+ cache_position=cache_position,
849
+ position_embeddings=position_embeddings,
850
+ lang=lang,
851
+ **kwargs,
852
+ )
853
+ hidden_states = residual + hidden_states
854
+
855
+ # Fully Connected
856
+ residual = hidden_states
857
+ hidden_states = self.post_attention_layernorm(hidden_states)
858
+ hidden_states = self.mlp(hidden_states, lang=lang)
859
+ hidden_states = residual + hidden_states
860
+
861
+ outputs = (hidden_states,)
862
+
863
+ if output_attentions:
864
+ outputs += (self_attn_weights,)
865
+
866
+ if use_cache:
867
+ outputs += (present_key_value,)
868
+
869
+ return outputs
870
+
871
+
872
+ LLAMA_START_DOCSTRING = r"""
873
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
874
+ library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
875
+ etc.)
876
+
877
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
878
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
879
+ and behavior.
880
+
881
+ Parameters:
882
+ config ([`LlamaConfig`]):
883
+ Model configuration class with all the parameters of the model. Initializing with a config file does not
884
+ load the weights associated with the model, only the configuration. Check out the
885
+ [`~PreTrainedModel.from_pretrained`] method to load the model weights.
886
+ """
887
+
888
+
889
+ @add_start_docstrings(
890
+ "The bare LLaMA Model outputting raw hidden-states without any specific head on top.",
891
+ LLAMA_START_DOCSTRING,
892
+ )
893
+ class LlamaPreTrainedModel(PreTrainedModel):
894
+ config_class = LlamaConfig
895
+ base_model_prefix = "model"
896
+ supports_gradient_checkpointing = True
897
+ _no_split_modules = ["LlamaDecoderLayer"]
898
+ _skip_keys_device_placement = ["past_key_values"]
899
+ _supports_flash_attn_2 = True
900
+ _supports_sdpa = True
901
+ _supports_cache_class = True
902
+ _supports_quantized_cache = True
903
+ _supports_static_cache = True
904
+
905
+ def _init_weights(self, module):
906
+ std = self.config.initializer_range
907
+ if isinstance(module, nn.Linear):
908
+ module.weight.data.normal_(mean=0.0, std=std)
909
+ if module.bias is not None:
910
+ module.bias.data.zero_()
911
+ elif isinstance(module, nn.Embedding):
912
+ module.weight.data.normal_(mean=0.0, std=std)
913
+ if module.padding_idx is not None:
914
+ module.weight.data[module.padding_idx].zero_()
915
+
916
+
917
+ LLAMA_INPUTS_DOCSTRING = r"""
918
+ Args:
919
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
920
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
921
+ it.
922
+
923
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
924
+ [`PreTrainedTokenizer.__call__`] for details.
925
+
926
+ [What are input IDs?](../glossary#input-ids)
927
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
928
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
929
+
930
+ - 1 for tokens that are **not masked**,
931
+ - 0 for tokens that are **masked**.
932
+
933
+ [What are attention masks?](../glossary#attention-mask)
934
+
935
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
936
+ [`PreTrainedTokenizer.__call__`] for details.
937
+
938
+ If `past_key_values` is used, optionally only the last `input_ids` have to be input (see
939
+ `past_key_values`).
940
+
941
+ If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
942
+ and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
943
+ information on the default strategy.
944
+
945
+ - 1 indicates the head is **not masked**,
946
+ - 0 indicates the head is **masked**.
947
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
948
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
949
+ config.n_positions - 1]`.
950
+
951
+ [What are position IDs?](../glossary#position-ids)
952
+ past_key_values (`Cache` or `tuple(tuple(torch.FloatTensor))`, *optional*):
953
+ Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
954
+ blocks) that can be used to speed up sequential decoding. This typically consists of the `past_key_values`
955
+ returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`.
956
+
957
+ Two formats are allowed:
958
+ - a [`~cache_utils.Cache`] instance, see our
959
+ [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache);
960
+ - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of
961
+ shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`). This is also known as the legacy
962
+ cache format.
963
+
964
+ The model will output the same cache format that is fed as input. If no `past_key_values` are passed, the
965
+ legacy cache format will be returned.
966
+
967
+ If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
968
+ have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids`
969
+ of shape `(batch_size, sequence_length)`.
970
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
971
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
972
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
973
+ model's internal embedding lookup matrix.
974
+ use_cache (`bool`, *optional*):
975
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
976
+ `past_key_values`).
977
+ output_attentions (`bool`, *optional*):
978
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
979
+ tensors for more detail.
980
+ output_hidden_states (`bool`, *optional*):
981
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
982
+ more detail.
983
+ return_dict (`bool`, *optional*):
984
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
985
+ cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*):
986
+ Indices depicting the position of the input sequence tokens in the sequence. Contrarily to `position_ids`,
987
+ this tensor is not affected by padding. It is used to update the cache in the correct position and to infer
988
+ the complete sequence length.
989
+ """
990
+
991
+
992
+ @add_start_docstrings(
993
+ "The bare LLaMA Model outputting raw hidden-states without any specific head on top.",
994
+ LLAMA_START_DOCSTRING,
995
+ )
996
+ class LlamaModel(LlamaPreTrainedModel):
997
+ """
998
+ Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`LlamaDecoderLayer`]
999
+
1000
+ Args:
1001
+ config: LlamaConfig
1002
+ """
1003
+
1004
+ def __init__(self, config: LlamaConfig):
1005
+ super().__init__(config)
1006
+ self.padding_idx = config.pad_token_id
1007
+ self.vocab_size = config.vocab_size
1008
+
1009
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
1010
+ self.layers = nn.ModuleList(
1011
+ [LlamaDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
1012
+ )
1013
+ self.norm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
1014
+ self.rotary_emb = LlamaRotaryEmbedding(config=config)
1015
+ self.gradient_checkpointing = False
1016
+
1017
+ # Initialize weights and apply final processing
1018
+ self.post_init()
1019
+
1020
+ def get_input_embeddings(self):
1021
+ return self.embed_tokens
1022
+
1023
+ def set_input_embeddings(self, value):
1024
+ self.embed_tokens = value
1025
+
1026
+ @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
1027
+ def forward(
1028
+ self,
1029
+ input_ids: torch.LongTensor = None,
1030
+ attention_mask: Optional[torch.Tensor] = None,
1031
+ position_ids: Optional[torch.LongTensor] = None,
1032
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
1033
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1034
+ use_cache: Optional[bool] = None,
1035
+ output_attentions: Optional[bool] = None,
1036
+ output_hidden_states: Optional[bool] = None,
1037
+ return_dict: Optional[bool] = None,
1038
+ cache_position: Optional[torch.LongTensor] = None,
1039
+ lang: str = "",
1040
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
1041
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1042
+ output_hidden_states = (
1043
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1044
+ )
1045
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1046
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1047
+
1048
+ if (input_ids is None) ^ (inputs_embeds is not None):
1049
+ raise ValueError(
1050
+ "You cannot specify both input_ids and inputs_embeds at the same time, and must specify either one"
1051
+ )
1052
+
1053
+ if self.gradient_checkpointing and self.training and use_cache:
1054
+ logger.warning_once(
1055
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
1056
+ )
1057
+ use_cache = False
1058
+
1059
+ if inputs_embeds is None:
1060
+ inputs_embeds = self.embed_tokens(input_ids)
1061
+
1062
+ # kept for BC (non `Cache` `past_key_values` inputs)
1063
+ return_legacy_cache = False
1064
+ if use_cache and not isinstance(past_key_values, Cache):
1065
+ return_legacy_cache = True
1066
+ if past_key_values is None:
1067
+ past_key_values = DynamicCache()
1068
+ else:
1069
+ past_key_values = DynamicCache.from_legacy_cache(past_key_values)
1070
+ logger.warning_once(
1071
+ "We detected that you are passing `past_key_values` as a tuple of tuples. This is deprecated and "
1072
+ "will be removed in v4.47. Please convert your cache or use an appropriate `Cache` class "
1073
+ "(https://huggingface.co/docs/transformers/kv_cache#legacy-cache-format)"
1074
+ )
1075
+
1076
+ if cache_position is None:
1077
+ past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
1078
+ cache_position = torch.arange(
1079
+ past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
1080
+ )
1081
+ if position_ids is None:
1082
+ position_ids = cache_position.unsqueeze(0)
1083
+
1084
+ causal_mask = self._update_causal_mask(
1085
+ attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions
1086
+ )
1087
+ hidden_states = inputs_embeds
1088
+
1089
+ # create position embeddings to be shared across the decoder layers
1090
+ position_embeddings = self.rotary_emb(hidden_states, position_ids)
1091
+
1092
+ # decoder layers
1093
+ all_hidden_states = () if output_hidden_states else None
1094
+ all_self_attns = () if output_attentions else None
1095
+ next_decoder_cache = None
1096
+
1097
+ for decoder_layer in self.layers:
1098
+ if output_hidden_states:
1099
+ all_hidden_states += (hidden_states,)
1100
+
1101
+ if self.gradient_checkpointing and self.training:
1102
+ layer_outputs = self._gradient_checkpointing_func(
1103
+ decoder_layer.__call__,
1104
+ hidden_states,
1105
+ causal_mask,
1106
+ position_ids,
1107
+ past_key_values,
1108
+ output_attentions,
1109
+ use_cache,
1110
+ cache_position,
1111
+ position_embeddings,
1112
+ lang,
1113
+ )
1114
+ else:
1115
+ layer_outputs = decoder_layer(
1116
+ hidden_states,
1117
+ attention_mask=causal_mask,
1118
+ position_ids=position_ids,
1119
+ past_key_value=past_key_values,
1120
+ output_attentions=output_attentions,
1121
+ use_cache=use_cache,
1122
+ cache_position=cache_position,
1123
+ position_embeddings=position_embeddings,
1124
+ lang=lang,
1125
+ )
1126
+
1127
+ hidden_states = layer_outputs[0]
1128
+
1129
+ if use_cache:
1130
+ next_decoder_cache = layer_outputs[2 if output_attentions else 1]
1131
+
1132
+ if output_attentions:
1133
+ all_self_attns += (layer_outputs[1],)
1134
+
1135
+ hidden_states = self.norm(hidden_states)
1136
+
1137
+ # add hidden states from the last decoder layer
1138
+ if output_hidden_states:
1139
+ all_hidden_states += (hidden_states,)
1140
+
1141
+ next_cache = next_decoder_cache if use_cache else None
1142
+ if return_legacy_cache:
1143
+ next_cache = next_cache.to_legacy_cache()
1144
+
1145
+ if not return_dict:
1146
+ return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
1147
+ return BaseModelOutputWithPast(
1148
+ last_hidden_state=hidden_states,
1149
+ past_key_values=next_cache,
1150
+ hidden_states=all_hidden_states,
1151
+ attentions=all_self_attns,
1152
+ )
1153
+
1154
+ def _update_causal_mask(
1155
+ self,
1156
+ attention_mask: torch.Tensor,
1157
+ input_tensor: torch.Tensor,
1158
+ cache_position: torch.Tensor,
1159
+ past_key_values: Cache,
1160
+ output_attentions: bool,
1161
+ ):
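+ # Flash Attention 2 consumes the 2D padding mask directly: return it only when it actually masks
+ # something (contains a 0), otherwise return None and let the kernel assume a purely causal mask.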
1162
+ if self.config._attn_implementation == "flash_attention_2":
1163
+ if attention_mask is not None and 0.0 in attention_mask:
1164
+ return attention_mask
1165
+ return None
1166
+
1167
+ # For SDPA, when possible, we will rely on its `is_causal` argument instead of its `attn_mask` argument, in
1168
+ # order to dispatch on Flash Attention 2. This feature is not compatible with static cache, as SDPA will fail
1169
+ # to infer the attention mask.
1170
+ past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
1171
+ using_static_cache = isinstance(past_key_values, StaticCache)
1172
+
1173
+ # When output attentions is True, sdpa implementation's forward method calls the eager implementation's forward
1174
+ if self.config._attn_implementation == "sdpa" and not using_static_cache and not output_attentions:
1175
+ if AttentionMaskConverter._ignore_causal_mask_sdpa(
1176
+ attention_mask,
1177
+ inputs_embeds=input_tensor,
1178
+ past_key_values_length=past_seen_tokens,
1179
+ is_training=self.training,
1180
+ ):
1181
+ return None
1182
+
1183
+ dtype, device = input_tensor.dtype, input_tensor.device
1184
+ min_dtype = torch.finfo(dtype).min
1185
+ sequence_length = input_tensor.shape[1]
1186
+ if using_static_cache:
1187
+ target_length = past_key_values.get_max_length()
1188
+ else:
1189
+ target_length = (
1190
+ attention_mask.shape[-1]
1191
+ if isinstance(attention_mask, torch.Tensor)
1192
+ else past_seen_tokens + sequence_length + 1
1193
+ )
1194
+
1195
+ # In case the provided `attention_mask` is 2D, we generate a causal (4D) mask here.
1196
+ causal_mask = _prepare_4d_causal_attention_mask_with_cache_position(
1197
+ attention_mask,
1198
+ sequence_length=sequence_length,
1199
+ target_length=target_length,
1200
+ dtype=dtype,
1201
+ device=device,
1202
+ min_dtype=min_dtype,
1203
+ cache_position=cache_position,
1204
+ batch_size=input_tensor.shape[0],
1205
+ )
1206
+
1207
+ if (
1208
+ self.config._attn_implementation == "sdpa"
1209
+ and attention_mask is not None
1210
+ and attention_mask.device.type == "cuda"
1211
+ and not output_attentions
1212
+ ):
1213
+ # Attend to all tokens in fully masked rows in the causal_mask, for example the relevant first rows when
1214
+ # using left padding. This is required by F.scaled_dot_product_attention memory-efficient attention path.
1215
+ # Details: https://github.com/pytorch/pytorch/issues/110213
1216
+ causal_mask = AttentionMaskConverter._unmask_unattended(causal_mask, min_dtype)
1217
+
1218
+ return causal_mask
1219
+
1220
+
1221
+ class XALMAForCausalLM(LlamaPreTrainedModel, GenerationMixin):
1222
+ _tied_weights_keys = ["lm_head.weight"]
1223
+
1224
+ def __init__(self, config):
1225
+ super().__init__(config)
1226
+ self.model = LlamaModel(config)
1227
+ self.vocab_size = config.vocab_size
1228
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
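+ # `lm_head.weight` is listed in `_tied_weights_keys`, so `post_init()` ties it to the input
+ # embeddings when `config.tie_word_embeddings` is set.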
1229
+
1230
+ # Initialize weights and apply final processing
1231
+ self.post_init()
1232
+
1233
+ def get_input_embeddings(self):
1234
+ return self.model.embed_tokens
1235
+
1236
+ def set_input_embeddings(self, value):
1237
+ self.model.embed_tokens = value
1238
+
1239
+ def get_output_embeddings(self):
1240
+ return self.lm_head
1241
+
1242
+ def set_output_embeddings(self, new_embeddings):
1243
+ self.lm_head = new_embeddings
1244
+
1245
+ def set_decoder(self, decoder):
1246
+ self.model = decoder
1247
+
1248
+ def get_decoder(self):
1249
+ return self.model
1250
+
1251
+ @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
1252
+ @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
1253
+ def forward(
1254
+ self,
1255
+ input_ids: torch.LongTensor = None,
1256
+ attention_mask: Optional[torch.Tensor] = None,
1257
+ position_ids: Optional[torch.LongTensor] = None,
1258
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
1259
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1260
+ labels: Optional[torch.LongTensor] = None,
1261
+ use_cache: Optional[bool] = None,
1262
+ output_attentions: Optional[bool] = None,
1263
+ output_hidden_states: Optional[bool] = None,
1264
+ return_dict: Optional[bool] = None,
1265
+ cache_position: Optional[torch.LongTensor] = None,
1266
+ num_logits_to_keep: int = 0,
1267
+ lang: str = "",
1268
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
1269
+ r"""
1270
+ Args:
1271
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1272
+ Labels for computing the language modeling loss. Indices should either be in `[0, ...,
1273
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
1274
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
1275
+
1276
+ num_logits_to_keep (`int`, *optional*):
1277
+ Calculate logits only for the last `num_logits_to_keep` tokens. If `0`, calculate logits for all
1278
+ `input_ids` (special case). Only the last token's logits are needed for generation, and computing them only for
1279
+ that token saves memory, which becomes significant for long sequences or a large vocabulary size.
1280
+
1281
+ Returns:
1282
+
1283
+ Example:
1284
+
1285
+ ```python
1286
+ >>> from transformers import AutoTokenizer, XALMAForCausalLM
1287
+
1288
+ >>> model = XALMAForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
1289
+ >>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
1290
+
1291
+ >>> prompt = "Hey, are you conscious? Can you talk to me?"
1292
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
1293
+
1294
+ >>> # Generate
1295
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
1296
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
1297
+ "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
1298
+ ```"""
1299
+ assert lang, "Language must be provided for XALMA to determine the language module"
1300
+
1301
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1302
+ output_hidden_states = (
1303
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1304
+ )
1305
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1306
+
1307
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
1308
+ outputs = self.model(
1309
+ input_ids=input_ids,
1310
+ attention_mask=attention_mask,
1311
+ position_ids=position_ids,
1312
+ past_key_values=past_key_values,
1313
+ inputs_embeds=inputs_embeds,
1314
+ use_cache=use_cache,
1315
+ output_attentions=output_attentions,
1316
+ output_hidden_states=output_hidden_states,
1317
+ return_dict=return_dict,
1318
+ cache_position=cache_position,
1319
+ lang=lang,
1320
+ )
1321
+
1322
+ hidden_states = outputs[0]
1323
+ if self.config.pretraining_tp > 1:
1324
+ lm_head_slices = self.lm_head.weight.split(self.vocab_size // self.config.pretraining_tp, dim=0)
1325
+ logits = [F.linear(hidden_states, lm_head_slices[i]) for i in range(self.config.pretraining_tp)]
1326
+ logits = torch.cat(logits, dim=-1)
1327
+ else:
1328
+ if labels is None and not is_torchdynamo_compiling():
1329
+ logger.warning_once(
1330
+ "Starting from v4.46, the `logits` model output will have the same type as the model (except at train time, where it will always be FP32)"
1331
+ )
1332
+ # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
1333
+ # TODO: remove the float() operation in v4.46
1334
+ logits = self.lm_head(hidden_states[:, -num_logits_to_keep:, :]).float()
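+ # `-0` is just `0`, so `hidden_states[:, -num_logits_to_keep:, :]` keeps the full sequence when
+ # `num_logits_to_keep == 0`, i.e. logits are computed for every position in that case.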
1335
+
1336
+ loss = None
1337
+ if labels is not None:
1338
+ # Upcast to float if we need to compute the loss to avoid potential precision issues
1339
+ logits = logits.float()
1340
+ # Shift so that tokens < n predict n
1341
+ shift_logits = logits[..., :-1, :].contiguous()
1342
+ shift_labels = labels[..., 1:].contiguous()
1343
+ # Flatten the tokens
1344
+ loss_fct = CrossEntropyLoss()
1345
+ shift_logits = shift_logits.view(-1, self.config.vocab_size)
1346
+ shift_labels = shift_labels.view(-1)
1347
+ # Enable model parallelism
1348
+ shift_labels = shift_labels.to(shift_logits.device)
1349
+ loss = loss_fct(shift_logits, shift_labels)
1350
+
1351
+ if not return_dict:
1352
+ output = (logits,) + outputs[1:]
1353
+ return (loss,) + output if loss is not None else output
1354
+
1355
+ return CausalLMOutputWithPast(
1356
+ loss=loss,
1357
+ logits=logits,
1358
+ past_key_values=outputs.past_key_values,
1359
+ hidden_states=outputs.hidden_states,
1360
+ attentions=outputs.attentions,
1361
+ )
1362
+
1363
+ def prepare_inputs_for_generation(
1364
+ self,
1365
+ input_ids,
1366
+ past_key_values=None,
1367
+ attention_mask=None,
1368
+ inputs_embeds=None,
1369
+ cache_position=None,
1370
+ position_ids=None,
1371
+ use_cache=True,
1372
+ num_logits_to_keep=None,
1373
+ **kwargs,
1374
+ ):
1375
+ # If we have cache: let's slice `input_ids` through `cache_position`, to keep only the unprocessed tokens
1376
+ # Exception 1: when passing input_embeds, input_ids may be missing entries
1377
+ # Exception 2: some generation methods do special slicing of input_ids, so we don't need to do it here
1378
+ if past_key_values is not None:
1379
+ if inputs_embeds is not None: # Exception 1
1380
+ input_ids = input_ids[:, -cache_position.shape[0] :]
1381
+ elif input_ids.shape[1] != cache_position.shape[0]: # Default case (the "else", a no op, is Exception 2)
1382
+ input_ids = input_ids[:, cache_position]
1383
+
1384
+ if attention_mask is not None and position_ids is None:
1385
+ # create position_ids on the fly for batch generation
1386
+ position_ids = attention_mask.long().cumsum(-1) - 1
1387
+ position_ids.masked_fill_(attention_mask == 0, 1)
1388
+ if past_key_values:
1389
+ position_ids = position_ids[:, -input_ids.shape[1] :]
1390
+
1391
+ # This `clone` call is needed to avoid recapturing CUDA graphs with `torch.compile`'s `mode="reduce-overhead"`, as otherwise the input `position_ids` would have a varying stride during decoding. Simply using `.contiguous()` is not sufficient here: in the batch size = 1 case, `position_ids` is already contiguous but its stride still varies, which retriggers a capture.
1392
+ position_ids = position_ids.clone(memory_format=torch.contiguous_format)
1393
+
1394
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
1395
+ if inputs_embeds is not None and cache_position[0] == 0:
1396
+ model_inputs = {"inputs_embeds": inputs_embeds, "input_ids": None}
1397
+ else:
1398
+ # The clone here is for the same reason as for `position_ids`.
1399
+ model_inputs = {"input_ids": input_ids.clone(memory_format=torch.contiguous_format), "inputs_embeds": None}
1400
+
1401
+ if isinstance(past_key_values, StaticCache) and attention_mask.ndim == 2:
1402
+ if model_inputs["inputs_embeds"] is not None:
1403
+ batch_size, sequence_length, _ = model_inputs["inputs_embeds"].shape
1404
+ device = model_inputs["inputs_embeds"].device
1405
+ else:
1406
+ batch_size, sequence_length = model_inputs["input_ids"].shape
1407
+ device = model_inputs["input_ids"].device
1408
+
1409
+ dtype = self.lm_head.weight.dtype
1410
+ min_dtype = torch.finfo(dtype).min
1411
+
1412
+ attention_mask = _prepare_4d_causal_attention_mask_with_cache_position(
1413
+ attention_mask,
1414
+ sequence_length=sequence_length,
1415
+ target_length=past_key_values.get_max_length(),
1416
+ dtype=dtype,
1417
+ device=device,
1418
+ min_dtype=min_dtype,
1419
+ cache_position=cache_position,
1420
+ batch_size=batch_size,
1421
+ )
1422
+
1423
+ if num_logits_to_keep is not None:
1424
+ model_inputs["num_logits_to_keep"] = num_logits_to_keep
1425
+
1426
+ model_inputs.update(
1427
+ {
1428
+ "position_ids": position_ids,
1429
+ "cache_position": cache_position,
1430
+ "past_key_values": past_key_values,
1431
+ "use_cache": use_cache,
1432
+ "attention_mask": attention_mask,
1433
+ "lang": kwargs.get("lang", ""),
1434
+ }
1435
+ )
1436
+ return model_inputs
1437
+
1438
+
1439
+ @add_start_docstrings(
1440
+ """
1441
+ The LLaMa Model transformer with a sequence classification head on top (linear layer).
1442
+
1443
+ [`XALMAForSequenceClassification`] uses the last token in order to do the classification, as other causal models
1444
+ (e.g. GPT-2) do.
1445
+
1446
+ Since it does classification on the last token, it needs to know the position of the last token. If a
1447
+ `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
1448
+ no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
1449
+ padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
1450
+ each row of the batch).
1451
+ """,
1452
+ LLAMA_START_DOCSTRING,
1453
+ )
1454
+ class XALMAForSequenceClassification(LlamaPreTrainedModel):
1455
+ def __init__(self, config):
1456
+ super().__init__(config)
1457
+ self.num_labels = config.num_labels
1458
+ self.model = LlamaModel(config)
1459
+ self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
1460
+
1461
+ # Initialize weights and apply final processing
1462
+ self.post_init()
1463
+
1464
+ def get_input_embeddings(self):
1465
+ return self.model.embed_tokens
1466
+
1467
+ def set_input_embeddings(self, value):
1468
+ self.model.embed_tokens = value
1469
+
1470
+ @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
1471
+ def forward(
1472
+ self,
1473
+ input_ids: Optional[torch.LongTensor] = None,
1474
+ attention_mask: Optional[torch.Tensor] = None,
1475
+ position_ids: Optional[torch.LongTensor] = None,
1476
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
1477
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1478
+ labels: Optional[torch.LongTensor] = None,
1479
+ use_cache: Optional[bool] = None,
1480
+ output_attentions: Optional[bool] = None,
1481
+ output_hidden_states: Optional[bool] = None,
1482
+ return_dict: Optional[bool] = None,
1483
+ lang: str = "",
1484
+ ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
1485
+ r"""
1486
+ labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1487
+ Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
1488
+ config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
1489
+ `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
1490
+ """
1491
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1492
+
1493
+ transformer_outputs = self.model(
1494
+ input_ids,
1495
+ attention_mask=attention_mask,
1496
+ position_ids=position_ids,
1497
+ past_key_values=past_key_values,
1498
+ inputs_embeds=inputs_embeds,
1499
+ use_cache=use_cache,
1500
+ output_attentions=output_attentions,
1501
+ output_hidden_states=output_hidden_states,
1502
+ return_dict=return_dict,
1503
+ lang=lang,
1504
+ )
1505
+ hidden_states = transformer_outputs[0]
1506
+ logits = self.score(hidden_states)
1507
+
1508
+ if input_ids is not None:
1509
+ batch_size = input_ids.shape[0]
1510
+ else:
1511
+ batch_size = inputs_embeds.shape[0]
1512
+
1513
+ if self.config.pad_token_id is None and batch_size != 1:
1514
+ raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
1515
+ if self.config.pad_token_id is None:
1516
+ sequence_lengths = -1
1517
+ else:
1518
+ if input_ids is not None:
1519
+ # if no pad token found, use modulo instead of reverse indexing for ONNX compatibility
1520
+ sequence_lengths = torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1
1521
+ sequence_lengths = sequence_lengths % input_ids.shape[-1]
1522
+ sequence_lengths = sequence_lengths.to(logits.device)
1523
+ else:
1524
+ sequence_lengths = -1
1525
+
1526
+ pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
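+ # Pool by taking, for each sequence in the batch, the logits at the last non-padding position
+ # (or simply the last position when no pad token is defined).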
1527
+
1528
+ loss = None
1529
+ if labels is not None:
1530
+ labels = labels.to(logits.device)
1531
+ if self.config.problem_type is None:
1532
+ if self.num_labels == 1:
1533
+ self.config.problem_type = "regression"
1534
+ elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
1535
+ self.config.problem_type = "single_label_classification"
1536
+ else:
1537
+ self.config.problem_type = "multi_label_classification"
1538
+
1539
+ if self.config.problem_type == "regression":
1540
+ loss_fct = MSELoss()
1541
+ if self.num_labels == 1:
1542
+ loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
1543
+ else:
1544
+ loss = loss_fct(pooled_logits, labels)
1545
+ elif self.config.problem_type == "single_label_classification":
1546
+ loss_fct = CrossEntropyLoss()
1547
+ loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
1548
+ elif self.config.problem_type == "multi_label_classification":
1549
+ loss_fct = BCEWithLogitsLoss()
1550
+ loss = loss_fct(pooled_logits, labels)
1551
+ if not return_dict:
1552
+ output = (pooled_logits,) + transformer_outputs[1:]
1553
+ return ((loss,) + output) if loss is not None else output
1554
+
1555
+ return SequenceClassifierOutputWithPast(
1556
+ loss=loss,
1557
+ logits=pooled_logits,
1558
+ past_key_values=transformer_outputs.past_key_values,
1559
+ hidden_states=transformer_outputs.hidden_states,
1560
+ attentions=transformer_outputs.attentions,
1561
+ )
1562
+
1563
+
1564
+ @add_start_docstrings(
1565
+ """
1566
+ The Llama Model transformer with a span classification head on top for extractive question-answering tasks like
1567
+ SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
1568
+ """,
1569
+ LLAMA_START_DOCSTRING,
1570
+ )
1571
+ class XALMAForQuestionAnswering(LlamaPreTrainedModel):
1572
+ base_model_prefix = "transformer"
1573
+
1574
+ # Copied from transformers.models.bloom.modeling_bloom.BloomForQuestionAnswering.__init__ with Bloom->Llama
1575
+ def __init__(self, config):
1576
+ super().__init__(config)
1577
+ self.transformer = LlamaModel(config)
1578
+ self.qa_outputs = nn.Linear(config.hidden_size, 2)
1579
+
1580
+ # Initialize weights and apply final processing
1581
+ self.post_init()
1582
+
1583
+ def get_input_embeddings(self):
1584
+ return self.transformer.embed_tokens
1585
+
1586
+ def set_input_embeddings(self, value):
1587
+ self.transformer.embed_tokens = value
1588
+
1589
+ @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
1590
+ def forward(
1591
+ self,
1592
+ input_ids: Optional[torch.LongTensor] = None,
1593
+ attention_mask: Optional[torch.FloatTensor] = None,
1594
+ position_ids: Optional[torch.LongTensor] = None,
1595
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
1596
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1597
+ start_positions: Optional[torch.LongTensor] = None,
1598
+ end_positions: Optional[torch.LongTensor] = None,
1599
+ output_attentions: Optional[bool] = None,
1600
+ output_hidden_states: Optional[bool] = None,
1601
+ return_dict: Optional[bool] = None,
1602
+ lang: str = "",
1603
+ ) -> Union[Tuple, QuestionAnsweringModelOutput]:
1604
+ r"""
1605
+ start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1606
+ Labels for position (index) of the start of the labelled span for computing the token classification loss.
1607
+ Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
1608
+ are not taken into account for computing the loss.
1609
+ end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1610
+ Labels for position (index) of the end of the labelled span for computing the token classification loss.
1611
+ Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
1612
+ are not taken into account for computing the loss.
1613
+ """
1614
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1615
+
1616
+ outputs = self.transformer(
1617
+ input_ids,
1618
+ attention_mask=attention_mask,
1619
+ position_ids=position_ids,
1620
+ past_key_values=past_key_values,
1621
+ inputs_embeds=inputs_embeds,
1622
+ output_attentions=output_attentions,
1623
+ output_hidden_states=output_hidden_states,
1624
+ return_dict=return_dict,
1625
+ lang=lang,
1626
+ )
1627
+
1628
+ sequence_output = outputs[0]
1629
+
1630
+ logits = self.qa_outputs(sequence_output)
1631
+ start_logits, end_logits = logits.split(1, dim=-1)
1632
+ start_logits = start_logits.squeeze(-1).contiguous()
1633
+ end_logits = end_logits.squeeze(-1).contiguous()
1634
+
1635
+ total_loss = None
1636
+ if start_positions is not None and end_positions is not None:
1637
+ # If we are on multi-GPU, squeeze the extra dimension and move the positions to the logits' device
1638
+ if len(start_positions.size()) > 1:
1639
+ start_positions = start_positions.squeeze(-1).to(start_logits.device)
1640
+ if len(end_positions.size()) > 1:
1641
+ end_positions = end_positions.squeeze(-1).to(end_logits.device)
1642
+ # sometimes the start/end positions are outside our model inputs; we ignore these terms
1643
+ ignored_index = start_logits.size(1)
1644
+ start_positions = start_positions.clamp(0, ignored_index)
1645
+ end_positions = end_positions.clamp(0, ignored_index)
1646
+
1647
+ loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
1648
+ start_loss = loss_fct(start_logits, start_positions)
1649
+ end_loss = loss_fct(end_logits, end_positions)
1650
+ total_loss = (start_loss + end_loss) / 2
1651
+
1652
+ if not return_dict:
1653
+ output = (start_logits, end_logits) + outputs[2:]
1654
+ return ((total_loss,) + output) if total_loss is not None else output
1655
+
1656
+ return QuestionAnsweringModelOutput(
1657
+ loss=total_loss,
1658
+ start_logits=start_logits,
1659
+ end_logits=end_logits,
1660
+ hidden_states=outputs.hidden_states,
1661
+ attentions=outputs.attentions,
1662
+ )
1663
+
1664
+
1665
+ @add_start_docstrings(
1666
+ """
1667
+ The Llama Model transformer with a token classification head on top (a linear layer on top of the hidden-states
1668
+ output) e.g. for Named-Entity-Recognition (NER) tasks.
1669
+ """,
1670
+ LLAMA_START_DOCSTRING,
1671
+ )
1672
+ class XALMAForTokenClassification(LlamaPreTrainedModel):
1673
+ def __init__(self, config):
1674
+ super().__init__(config)
1675
+ self.num_labels = config.num_labels
1676
+ self.model = LlamaModel(config)
1677
+ if getattr(config, "classifier_dropout", None) is not None:
1678
+ classifier_dropout = config.classifier_dropout
1679
+ elif getattr(config, "hidden_dropout", None) is not None:
1680
+ classifier_dropout = config.hidden_dropout
1681
+ else:
1682
+ classifier_dropout = 0.1
1683
+ self.dropout = nn.Dropout(classifier_dropout)
1684
+ self.score = nn.Linear(config.hidden_size, config.num_labels)
1685
+
1686
+ # Initialize weights and apply final processing
1687
+ self.post_init()
1688
+
1689
+ def get_input_embeddings(self):
1690
+ return self.model.embed_tokens
1691
+
1692
+ def set_input_embeddings(self, value):
1693
+ self.model.embed_tokens = value
1694
+
1695
+ @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
1696
+ def forward(
1697
+ self,
1698
+ input_ids: Optional[torch.LongTensor] = None,
1699
+ attention_mask: Optional[torch.Tensor] = None,
1700
+ position_ids: Optional[torch.LongTensor] = None,
1701
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1702
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1703
+ labels: Optional[torch.LongTensor] = None,
1704
+ use_cache: Optional[bool] = None,
1705
+ output_attentions: Optional[bool] = None,
1706
+ output_hidden_states: Optional[bool] = None,
1707
+ return_dict: Optional[bool] = None,
1708
+ lang: str = "",
1709
+ ) -> Union[Tuple, TokenClassifierOutput]:
1710
+ r"""
1711
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1712
+ Labels for computing the token classification loss. Indices should be in `[0, ...,
1713
+ config.num_labels - 1]`. Tokens with indices set to `-100` are ignored (masked) when computing
1714
+ the loss.
1715
+ """
1716
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1717
+
1718
+ outputs = self.model(
1719
+ input_ids,
1720
+ attention_mask=attention_mask,
1721
+ position_ids=position_ids,
1722
+ past_key_values=past_key_values,
1723
+ inputs_embeds=inputs_embeds,
1724
+ use_cache=use_cache,
1725
+ output_attentions=output_attentions,
1726
+ output_hidden_states=output_hidden_states,
1727
+ return_dict=return_dict,
1728
+ lang=lang,
1729
+ )
1730
+ sequence_output = outputs[0]
1731
+ sequence_output = self.dropout(sequence_output)
1732
+ logits = self.score(sequence_output)
1733
+
1734
+ loss = None
1735
+ if labels is not None:
1736
+ loss_fct = CrossEntropyLoss()
1737
+ loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
1738
+
1739
+ if not return_dict:
1740
+ output = (logits,) + outputs[2:]
1741
+ return ((loss,) + output) if loss is not None else output
1742
+
1743
+ return TokenClassifierOutput(
1744
+ loss=loss,
1745
+ logits=logits,
1746
+ hidden_states=outputs.hidden_states,
1747
+ attentions=outputs.attentions,
1748
+ )