text: string (lengths 1–1.02k)
class_index: int64 (0–10.8k)
source: string (lengths 85–188)
The dropout ratio for the classifier. max_position_embeddings (`int`, *optional*, defaults to 1024): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). init_std (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. encoder_layerdrop (`float`, *optional*, defaults to 0.05): The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop (`float`, *optional*, defaults to 0.05): The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. second_expert_policy (`str`, *optional*, defaults to `"all"`):
2,818
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/configuration_nllb_moe.py
The policy used for sampling the probability of each token being routed to a second expert. normalize_router_prob_before_dropping (`bool`, *optional*, defaults to `False`): Whether or not to normalize the router probabilities before applying a mask based on the experts' capacity (capacity dropping). batch_prioritized_routing (`bool`, *optional*, defaults to `False`): Whether or not to order the tokens by their router probabilities before capacity dropping. This means that the tokens with the highest probabilities will be routed before other tokens that might be further in the sequence. moe_eval_capacity_token_fraction (`float`, *optional*, defaults to 1.0): Fraction of tokens used as capacity during validation; if set to a negative value, the same capacity as during training is used. Should be in the range (0.0, 1.0]. num_experts (`int`, *optional*, defaults to 128):
2,818
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/configuration_nllb_moe.py
Number of experts for each NllbMoeSparseMlp layer. expert_capacity (`int`, *optional*, defaults to 64): Number of tokens that can be stored in each expert. encoder_sparse_step (`int`, *optional*, defaults to 4): Frequency of the sparse layers in the encoder. 4 means that one out of 4 layers will be sparse. decoder_sparse_step (`int`, *optional*, defaults to 4): Frequency of the sparse layers in the decoder. 4 means that one out of 4 layers will be sparse. router_dtype (`str`, *optional*, defaults to `"float32"`): The `dtype` used for the routers. It is preferable to keep the `dtype` as `"float32"`, as specified in the *selective precision* discussion in [the paper](https://arxiv.org/abs/2101.03961). router_ignore_padding_tokens (`bool`, *optional*, defaults to `False`): Whether to ignore padding tokens when routing. If `False`, the padding tokens are not routed to any experts.
2,818
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/configuration_nllb_moe.py
router_bias (`bool`, *optional*, defaults to `False`): Whether or not the classifier of the router should have a bias. moe_token_dropout (`float`, *optional*, defaults to 0.2): Masking rate for MoE expert output masking (EOM), which is implemented via a Dropout2d on the expert outputs. output_router_logits (`bool`, *optional*, defaults to `False`): Whether or not to return the router logits. Only set to `True` to get the auxiliary loss when training. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models).
2,818
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/configuration_nllb_moe.py
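The configuration arguments documented above map directly onto `NllbMoeConfig` keyword arguments (see the `__init__` signature further down). A minimal sketch of overriding the MoE-specific knobs; the values are illustrative, not recommended settings:

```python
from transformers import NllbMoeConfig

# Illustrative values only; the defaults follow the docstring above.
config = NllbMoeConfig(
    num_experts=8,              # experts per NllbMoeSparseMlp layer
    expert_capacity=64,         # tokens each expert can hold
    encoder_sparse_step=4,      # every 4th encoder layer is sparse
    decoder_sparse_step=4,
    second_expert_policy="all",
    moe_token_dropout=0.2,      # expert output masking (EOM) rate
    output_router_logits=True,  # needed to compute the auxiliary loss during training
)
print(config.num_experts, config.encoder_sparse_step)
```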
Example: ```python >>> from transformers import NllbMoeModel, NllbMoeConfig >>> # Initializing a NllbMoe facebook/nllb-moe-54b style configuration >>> configuration = NllbMoeConfig() >>> # Initializing a model from the facebook/nllb-moe-54b style configuration >>> model = NllbMoeModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```""" model_type = "nllb-moe" keys_to_ignore_at_inference = ["past_key_values"] attribute_map = {"num_attention_heads": "encoder_attention_heads", "hidden_size": "d_model"}
2,818
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/configuration_nllb_moe.py
def __init__( self, vocab_size=128112, max_position_embeddings=1024, encoder_layers=12, encoder_ffn_dim=4096, encoder_attention_heads=16, decoder_layers=12, decoder_ffn_dim=4096, decoder_attention_heads=16, encoder_layerdrop=0.05, decoder_layerdrop=0.05, use_cache=True, is_encoder_decoder=True, activation_function="relu", d_model=1024, dropout=0.1, attention_dropout=0.1, activation_dropout=0.0, init_std=0.02, decoder_start_token_id=2, scale_embedding=True, router_bias=False, router_dtype="float32", router_ignore_padding_tokens=False, num_experts=128, expert_capacity=64, encoder_sparse_step=4, decoder_sparse_step=4, router_z_loss_coef=0.001, router_aux_loss_coef=0.001, second_expert_policy="all", normalize_router_prob_before_dropping=False,
2,818
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/configuration_nllb_moe.py
batch_prioritized_routing=False, moe_eval_capacity_token_fraction=1.0, moe_token_dropout=0.2, pad_token_id=1, bos_token_id=0, eos_token_id=2, output_router_logits=False, **kwargs, ): self.vocab_size = vocab_size self.max_position_embeddings = max_position_embeddings self.d_model = d_model self.encoder_ffn_dim = encoder_ffn_dim self.encoder_layers = encoder_layers self.encoder_attention_heads = encoder_attention_heads self.decoder_ffn_dim = decoder_ffn_dim self.decoder_layers = decoder_layers self.decoder_attention_heads = decoder_attention_heads self.dropout = dropout self.attention_dropout = attention_dropout self.activation_dropout = activation_dropout self.activation_function = activation_function self.init_std = init_std self.encoder_layerdrop = encoder_layerdrop self.decoder_layerdrop = decoder_layerdrop
2,818
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/configuration_nllb_moe.py
self.use_cache = use_cache self.num_hidden_layers = encoder_layers self.scale_embedding = scale_embedding # scale factor will be sqrt(d_model) if True self.router_z_loss_coef = router_z_loss_coef self.router_aux_loss_coef = router_aux_loss_coef self.decoder_sparse_step = decoder_sparse_step self.encoder_sparse_step = encoder_sparse_step self.num_experts = num_experts self.expert_capacity = expert_capacity self.router_bias = router_bias if router_dtype not in ["float32", "float16", "bfloat16"]: raise ValueError(f"`router_dtype` must be one of 'float32', 'float16' or 'bfloat16', got {router_dtype}") self.router_dtype = router_dtype
2,818
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/configuration_nllb_moe.py
self.router_ignore_padding_tokens = router_ignore_padding_tokens self.batch_prioritized_routing = batch_prioritized_routing self.second_expert_policy = second_expert_policy self.normalize_router_prob_before_dropping = normalize_router_prob_before_dropping self.moe_eval_capacity_token_fraction = moe_eval_capacity_token_fraction self.moe_token_dropout = moe_token_dropout self.output_router_logits = output_router_logits super().__init__( pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, is_encoder_decoder=is_encoder_decoder, decoder_start_token_id=decoder_start_token_id, **kwargs, )
2,818
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/configuration_nllb_moe.py
class NllbMoeScaledWordEmbedding(nn.Embedding): """ This module overrides nn.Embedding's forward by multiplying the embeddings with the embedding scale. """ def __init__(self, num_embeddings: int, embedding_dim: int, padding_idx: int, embed_scale: Optional[float] = 1.0): super().__init__(num_embeddings, embedding_dim, padding_idx) self.embed_scale = embed_scale def forward(self, input_ids: torch.Tensor): return super().forward(input_ids) * self.embed_scale
2,819
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
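As a quick check of the scaling above, here is a standalone sketch (assuming only `torch` is installed) that reproduces what `NllbMoeScaledWordEmbedding.forward` returns: a plain embedding lookup multiplied by `embed_scale = sqrt(d_model)`:

```python
import math
import torch
import torch.nn as nn

d_model, vocab_size, padding_idx = 16, 100, 1
embed_scale = math.sqrt(d_model)  # used when config.scale_embedding is True

embedding = nn.Embedding(vocab_size, d_model, padding_idx=padding_idx)
input_ids = torch.tensor([[5, 7, 1]])

scaled = embedding(input_ids) * embed_scale  # same result as NllbMoeScaledWordEmbedding.forward
print(scaled.shape)  # torch.Size([1, 3, 16])
```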
class NllbMoeSinusoidalPositionalEmbedding(nn.Module): """This module produces sinusoidal positional embeddings of any length.""" def __init__(self, num_positions: int, embedding_dim: int, padding_idx: Optional[int] = None): super().__init__() self.offset = 2 self.embedding_dim = embedding_dim self.padding_idx = padding_idx self.make_weights(num_positions + self.offset, embedding_dim, padding_idx) def make_weights(self, num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None): emb_weights = self.get_embedding(num_embeddings, embedding_dim, padding_idx) if hasattr(self, "weights"): # in forward put the weights on the correct dtype and device of the param emb_weights = emb_weights.to(dtype=self.weights.dtype, device=self.weights.device) self.register_buffer("weights", emb_weights, persistent=False)
2,820
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
@staticmethod def get_embedding(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None): """ Build sinusoidal embeddings. This matches the implementation in tensor2tensor, but differs slightly from the description in Section 3.5 of "Attention Is All You Need". """ half_dim = embedding_dim // 2 emb = math.log(10000) / (half_dim - 1) emb = torch.exp(torch.arange(half_dim, dtype=torch.int64).float() * -emb) emb = torch.arange(num_embeddings, dtype=torch.int64).float().unsqueeze(1) * emb.unsqueeze(0) emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1).view(num_embeddings, -1) if embedding_dim % 2 == 1: # zero pad emb = torch.cat([emb, torch.zeros(num_embeddings, 1)], dim=1) if padding_idx is not None: emb[padding_idx, :] = 0 return emb.to(torch.get_default_dtype())
2,820
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
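The sinusoidal table built by `get_embedding` is easy to reproduce outside the class; the sketch below uses the same formula and checks that the padding row is zeroed out:

```python
import math
import torch

def sinusoidal_table(num_embeddings, embedding_dim, padding_idx=None):
    half_dim = embedding_dim // 2
    scale = math.log(10000) / (half_dim - 1)
    freqs = torch.exp(torch.arange(half_dim).float() * -scale)
    angles = torch.arange(num_embeddings).float().unsqueeze(1) * freqs.unsqueeze(0)
    table = torch.cat([torch.sin(angles), torch.cos(angles)], dim=1).view(num_embeddings, -1)
    if embedding_dim % 2 == 1:  # zero-pad the last column for odd dimensions
        table = torch.cat([table, torch.zeros(num_embeddings, 1)], dim=1)
    if padding_idx is not None:
        table[padding_idx, :] = 0
    return table

table = sinusoidal_table(num_embeddings=10, embedding_dim=8, padding_idx=1)
print(table.shape)                  # torch.Size([10, 8])
print(table[1].abs().sum().item())  # 0.0 -- the padding row is zeroed
```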
@torch.no_grad() def forward( self, input_ids: torch.Tensor = None, inputs_embeds: torch.Tensor = None, past_key_values_length: int = 0 ): if input_ids is not None: bsz, seq_len = input_ids.size() # Create the position ids from the input token ids. Any padded tokens remain padded. position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length).to( input_ids.device ) else: bsz, seq_len = inputs_embeds.size()[:-1] position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds, past_key_values_length) # expand embeddings if needed max_pos = self.padding_idx + 1 + seq_len + past_key_values_length if max_pos > self.weights.size(0): self.make_weights(max_pos + self.offset, self.embedding_dim, self.padding_idx)
2,820
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
return self.weights.index_select(0, position_ids.view(-1)).view(bsz, seq_len, self.weights.shape[-1]).detach() def create_position_ids_from_inputs_embeds(self, inputs_embeds, past_key_values_length): """ We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids. Args: inputs_embeds: torch.Tensor Returns: torch.Tensor """ input_shape = inputs_embeds.size()[:-1] sequence_length = input_shape[1] position_ids = torch.arange( self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device ) return position_ids.unsqueeze(0).expand(input_shape).contiguous() + past_key_values_length
2,820
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
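The `create_position_ids_from_input_ids` helper called in `forward` above is not part of this excerpt. It is conventionally implemented as a cumulative sum over the non-padding mask so that padded positions keep `padding_idx`; a hedged sketch of that convention:

```python
import torch

def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    # Non-padding tokens get positions padding_idx + 1, padding_idx + 2, ...;
    # padded tokens keep position padding_idx.
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = (torch.cumsum(mask, dim=1) + past_key_values_length) * mask
    return incremental_indices.long() + padding_idx

input_ids = torch.tensor([[0, 4, 5, 1, 1]])  # padding_idx = 1
print(create_position_ids_from_input_ids(input_ids, padding_idx=1))
# tensor([[2, 3, 4, 1, 1]])
```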
class NllbMoeTop2Router(nn.Module): """ Router using a 'tokens choose top-2 experts' assignment. This router uses the same mechanism as in NLLB-MoE from the fairseq repository. Items are sorted by router_probs and then routed to their choice of expert until the expert's expert_capacity is reached. **There is no guarantee that each token is processed by an expert**, or that each expert receives at least one token. The router combining weights are also returned to make sure that the states that are not updated will be masked. """ def __init__(self, config: NllbMoeConfig): super().__init__() self.num_experts = config.num_experts self.expert_capacity = config.expert_capacity self.classifier = nn.Linear(config.hidden_size, self.num_experts, bias=config.router_bias) self.router_ignore_padding_tokens = config.router_ignore_padding_tokens self.dtype = getattr(torch, config.router_dtype)
2,821
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
self.second_expert_policy = config.second_expert_policy self.normalize_router_prob_before_dropping = config.normalize_router_prob_before_dropping self.batch_prioritized_routing = config.batch_prioritized_routing self.moe_eval_capacity_token_fraction = config.moe_eval_capacity_token_fraction def _cast_classifier(self): r""" `bitsandbytes` `Linear8bitLt` layers do not support manual casting. Therefore, we need to check whether they are an instance of the `Linear8bitLt` class by checking special attributes. """ if not (hasattr(self.classifier, "SCB") or hasattr(self.classifier, "CB")): self.classifier = self.classifier.to(self.dtype)
2,821
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
def normalize_router_probabilities(self, router_probs, top_1_mask, top_2_mask): top_1_max_probs = (router_probs * top_1_mask).sum(dim=1) top_2_max_probs = (router_probs * top_2_mask).sum(dim=1) denom_s = torch.clamp(top_1_max_probs + top_2_max_probs, min=torch.finfo(router_probs.dtype).eps) top_1_max_probs = top_1_max_probs / denom_s top_2_max_probs = top_2_max_probs / denom_s return top_1_max_probs, top_2_max_probs
2,821
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
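A small worked example of `normalize_router_probabilities`: for each token, the top-1 and top-2 probabilities are rescaled so they sum to one (toy numbers, two tokens and three experts):

```python
import torch

router_probs = torch.tensor([[0.6, 0.3, 0.1],
                             [0.2, 0.5, 0.3]])
top_1_mask = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
top_2_mask = torch.tensor([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

top_1_max_probs = (router_probs * top_1_mask).sum(dim=1)  # tensor([0.6, 0.5])
top_2_max_probs = (router_probs * top_2_mask).sum(dim=1)  # tensor([0.3, 0.3])
denom_s = torch.clamp(top_1_max_probs + top_2_max_probs, min=torch.finfo(router_probs.dtype).eps)
print(top_1_max_probs / denom_s)  # tensor([0.6667, 0.6250])
print(top_2_max_probs / denom_s)  # tensor([0.3333, 0.3750])
```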
def route_tokens( self, router_logits: torch.Tensor, input_dtype: torch.dtype = torch.float32, padding_mask: Optional[torch.LongTensor] = None, ) -> Tuple: """ Computes the `dispatch_mask` and the `dispatch_weights` for each expert. The masks are adapted to the expert capacity. """ nb_tokens = router_logits.shape[0] # Apply Softmax and cast back to the original `dtype` router_probs = nn.functional.softmax(router_logits, dim=-1, dtype=self.dtype).to(input_dtype) top_1_expert_index = torch.argmax(router_probs, dim=-1) top_1_mask = torch.nn.functional.one_hot(top_1_expert_index, num_classes=self.num_experts) if self.second_expert_policy == "sampling": gumbel = torch.distributions.gumbel.Gumbel(0, 1).rsample router_logits += gumbel(router_logits.shape).to(router_logits.device)
2,821
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
# replace top_1_expert_index with min values logits_except_top_1 = router_logits.masked_fill(top_1_mask.bool(), float("-inf")) top_2_expert_index = torch.argmax(logits_except_top_1, dim=-1) top_2_mask = torch.nn.functional.one_hot(top_2_expert_index, num_classes=self.num_experts) if self.normalize_router_prob_before_dropping: top_1_max_probs, top_2_max_probs = self.normalize_router_probabilities( router_probs, top_1_mask, top_2_mask ) if self.second_expert_policy == "random": top_2_max_probs = (router_probs * top_2_mask).sum(dim=1) sampled = (2 * top_2_max_probs) > torch.rand_like(top_2_max_probs.float()) top_2_mask = top_2_mask * sampled.repeat(self.num_experts, 1).transpose(1, 0)
2,821
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
if padding_mask is not None and not self.router_ignore_padding_tokens: if len(padding_mask.shape) == 4: # only get the last causal mask padding_mask = padding_mask[:, :, -1, :].reshape(-1)[-nb_tokens:] non_padding = ~padding_mask.bool() top_1_mask = top_1_mask * non_padding.unsqueeze(-1).to(top_1_mask.dtype) top_2_mask = top_2_mask * non_padding.unsqueeze(-1).to(top_1_mask.dtype) if self.batch_prioritized_routing: # sort tokens based on their routing probability # to make sure important tokens are routed, first importance_scores = -1 * router_probs.max(dim=1)[0] sorted_top_1_mask = top_1_mask[importance_scores.argsort(dim=0)] sorted_cumsum1 = (torch.cumsum(sorted_top_1_mask, dim=0) - 1) * sorted_top_1_mask locations1 = sorted_cumsum1[importance_scores.argsort(dim=0).argsort(dim=0)]
2,821
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
sorted_top_2_mask = top_2_mask[importance_scores.argsort(dim=0)] sorted_cumsum2 = (torch.cumsum(sorted_top_2_mask, dim=0) - 1) * sorted_top_2_mask locations2 = sorted_cumsum2[importance_scores.argsort(dim=0).argsort(dim=0)] # Update 2nd's location by accounting for locations of 1st locations2 += torch.sum(top_1_mask, dim=0, keepdim=True) else: locations1 = torch.cumsum(top_1_mask, dim=0) - 1 locations2 = torch.cumsum(top_2_mask, dim=0) - 1 # Update 2nd's location by accounting for locations of 1st locations2 += torch.sum(top_1_mask, dim=0, keepdim=True)
2,821
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
if not self.training and self.moe_eval_capacity_token_fraction > 0: self.expert_capacity = math.ceil(self.moe_eval_capacity_token_fraction * nb_tokens) else: capacity = 2 * math.ceil(nb_tokens / self.num_experts) self.expert_capacity = capacity if self.expert_capacity is None else self.expert_capacity # Remove locations outside capacity (tokens whose cumulative position exceeds the expert capacity will not be routed) top_1_mask = top_1_mask * torch.lt(locations1, self.expert_capacity) top_2_mask = top_2_mask * torch.lt(locations2, self.expert_capacity) if not self.normalize_router_prob_before_dropping: top_1_max_probs, top_2_max_probs = self.normalize_router_probabilities( router_probs, top_1_mask, top_2_mask ) # Calculate combine_weights and dispatch_mask gates1 = top_1_max_probs[:, None] * top_1_mask gates2 = top_2_max_probs[:, None] * top_2_mask router_probs = gates1 + gates2
2,821
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
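The capacity rule above is worth seeing with concrete numbers; a sketch of both branches (the training default of twice the perfectly balanced load, versus `moe_eval_capacity_token_fraction` at evaluation time), with illustrative values:

```python
import math

nb_tokens, num_experts = 1024, 128

# Training (or expert_capacity left unset): 2 * ceil(nb_tokens / num_experts)
train_capacity = 2 * math.ceil(nb_tokens / num_experts)
print(train_capacity)  # 16 tokens per expert

# Evaluation with moe_eval_capacity_token_fraction > 0: a fraction of all tokens
moe_eval_capacity_token_fraction = 0.25  # illustrative; the config default is 1.0
eval_capacity = math.ceil(moe_eval_capacity_token_fraction * nb_tokens)
print(eval_capacity)  # 256 tokens per expert
```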
return top_1_mask, router_probs def forward(self, hidden_states: torch.Tensor, padding_mask: Optional[torch.LongTensor] = None) -> Tuple: r""" The hidden states are reshaped to simplify the computation of the router probabilities (combining weights for each expert).
2,821
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
Args: hidden_states (`torch.Tensor`): (batch_size, sequence_length, hidden_dim) from which router probabilities are computed. Returns: top_1_mask (`torch.Tensor` of shape (batch_size, sequence_length)): Index tensor of shape [batch_size, sequence_length] corresponding to the expert selected for each token using the top1 probabilities of the router. router_probabilities (`torch.Tensor` of shape (batch_size, sequence_length, num_experts)): Tensor of shape (batch_size, sequence_length, num_experts) corresponding to the probabilities for each token and expert. Used for routing tokens to experts. router_logits (`torch.Tensor` of shape (batch_size, sequence_length, num_experts)): Logits tensor of shape (batch_size, sequence_length, num_experts) corresponding to raw router logits. This is used later for computing router z-loss. """
2,821
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
self.input_dtype = hidden_states.dtype batch_size, sequence_length, hidden_dim = hidden_states.shape hidden_states = hidden_states.reshape((batch_size * sequence_length), hidden_dim) hidden_states = hidden_states.to(self.dtype) self._cast_classifier() router_logits = self.classifier(hidden_states) top_1_mask, router_probs = self.route_tokens(router_logits, self.input_dtype, padding_mask) return top_1_mask, router_probs
2,821
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
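A hedged usage sketch of the router in isolation, with randomly initialized weights and tiny dimensions; the import path below assumes the module this excerpt is taken from:

```python
import torch
from transformers import NllbMoeConfig
from transformers.models.nllb_moe.modeling_nllb_moe import NllbMoeTop2Router

config = NllbMoeConfig(d_model=32, num_experts=4, expert_capacity=8)
router = NllbMoeTop2Router(config)

hidden_states = torch.randn(2, 6, 32)  # (batch_size, sequence_length, hidden_dim)
top_1_mask, router_probs = router(hidden_states)
# Both outputs are flattened over tokens: (batch_size * sequence_length, num_experts)
print(top_1_mask.shape, router_probs.shape)  # torch.Size([12, 4]) torch.Size([12, 4])
```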
class NllbMoeDenseActDense(nn.Module): def __init__(self, config: NllbMoeConfig, ffn_dim: int): super().__init__() self.fc1 = nn.Linear(config.d_model, ffn_dim) self.fc2 = nn.Linear(ffn_dim, config.d_model) self.dropout = nn.Dropout(config.activation_dropout) self.act = ACT2FN[config.activation_function] def forward(self, hidden_states): hidden_states = self.fc1(hidden_states) hidden_states = self.act(hidden_states) hidden_states = self.dropout(hidden_states) if ( isinstance(self.fc2.weight, torch.Tensor) and hidden_states.dtype != self.fc2.weight.dtype and (self.fc2.weight.dtype != torch.int8 and self.fc2.weight.dtype != torch.uint8) ): hidden_states = hidden_states.to(self.fc2.weight.dtype) hidden_states = self.fc2(hidden_states) return hidden_states
2,822
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
class NllbMoeSparseMLP(nn.Module): r""" Implementation of the NLLB-MoE sparse MLP module. """ def __init__(self, config: NllbMoeConfig, ffn_dim: int, expert_class: nn.Module = NllbMoeDenseActDense): super().__init__() self.router = NllbMoeTop2Router(config) self.moe_token_dropout = config.moe_token_dropout self.token_dropout = nn.Dropout(self.moe_token_dropout) self.num_experts = config.num_experts self.experts = nn.ModuleDict() for idx in range(self.num_experts): self.experts[f"expert_{idx}"] = expert_class(config, ffn_dim)
2,823
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
def forward(self, hidden_states: torch.Tensor, padding_mask: Optional[torch.Tensor] = False): r""" The goal of this forward pass is to have the same number of operations as the equivalent `NllbMoeDenseActDense` (mlp) layer. This means that all of the hidden states should be processed at most twice (since we are using a top-2 gating mechanism). This means that we keep the complexity to O(batch_size x sequence_length x hidden_dim) instead of O(num_experts x batch_size x sequence_length x hidden_dim). 1- Get the `router_probs` from the `router`. The shape of the `router_mask` is `(batch_size X sequence_length, num_experts)` and corresponds to the boolean version of the `router_probs`. The inputs are masked using the `router_mask`. 2- Dispatch the hidden_states to their associated experts. The router probabilities are used to weight the contribution of each expert when updating the masked hidden states.
2,823
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
Args: hidden_states (`torch.Tensor` of shape `(batch_size, sequence_length, hidden_dim)`): The hidden states padding_mask (`torch.Tensor`, *optional*, defaults to `False`): Attention mask. Can be in the causal form or not. Returns: hidden_states (`torch.Tensor` of shape `(batch_size, sequence_length, hidden_dim)`): Updated hidden states router_logits (`torch.Tensor` of shape `(batch_size, sequence_length, num_experts)`): Needed for computing the loss """ batch_size, sequence_length, hidden_dim = hidden_states.shape
2,823
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
top_1_mask, router_probs = self.router(hidden_states, padding_mask) router_mask = router_probs.bool() hidden_states = hidden_states.reshape((batch_size * sequence_length), hidden_dim) masked_hidden_states = torch.einsum("bm,be->ebm", hidden_states, router_mask) for idx, expert in enumerate(self.experts.values()): token_indices = router_mask[:, idx] combining_weights = router_probs[token_indices, idx] expert_output = expert(masked_hidden_states[idx, token_indices]) if self.moe_token_dropout > 0: if self.training: expert_output = self.token_dropout(expert_output) else: expert_output *= 1 - self.moe_token_dropout masked_hidden_states[idx, token_indices] = torch.einsum("b,be->be", combining_weights, expert_output) hidden_states = masked_hidden_states.sum(dim=0).reshape(batch_size, sequence_length, hidden_dim)
2,823
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
top_1_expert_index = torch.argmax(top_1_mask, dim=-1) return hidden_states, (router_probs, top_1_expert_index)
2,823
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
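The dispatch step in the forward pass above hinges on `torch.einsum("bm,be->ebm", ...)`, which produces one masked copy of the token states per expert; a toy illustration of just that operation (float mask for clarity):

```python
import torch

hidden_states = torch.arange(6.0).reshape(3, 2)  # 3 tokens, hidden_dim = 2
router_mask = torch.tensor([[1.0, 0.0],          # token 0 -> expert 0
                            [0.0, 1.0],          # token 1 -> expert 1
                            [1.0, 1.0]])         # token 2 -> both experts

masked_hidden_states = torch.einsum("bm,be->ebm", hidden_states, router_mask)
print(masked_hidden_states.shape)  # torch.Size([2, 3, 2]) -- (num_experts, tokens, hidden_dim)
print(masked_hidden_states[0])     # rows of tokens not routed to expert 0 are zero
```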
class NllbMoeAttention(nn.Module): """Multi-headed attention from 'Attention Is All You Need' paper""" def __init__( self, embed_dim: int, num_heads: int, dropout: float = 0.0, is_decoder: bool = False, bias: bool = True, is_causal: bool = False, config: Optional[NllbMoeConfig] = None, ): super().__init__() self.embed_dim = embed_dim self.num_heads = num_heads self.dropout = dropout self.head_dim = embed_dim // num_heads self.config = config if (self.head_dim * num_heads) != self.embed_dim: raise ValueError( f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}" f" and `num_heads`: {num_heads})." ) self.scaling = self.head_dim**-0.5 self.is_decoder = is_decoder self.is_causal = is_causal
2,824
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias) self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias) self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias) self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias) def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() def forward( self, hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.Tensor] = None, past_key_value: Optional[Tuple[torch.Tensor]] = None, attention_mask: Optional[torch.Tensor] = None, layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: """Input shape: Batch x Time x Channel"""
2,824
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
# if encoder_hidden_states are provided this layer is used as a cross-attention layer # for the decoder is_cross_attention = encoder_hidden_states is not None bsz, tgt_len, _ = hidden_states.size()
2,824
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
# get query proj query_states = self.q_proj(hidden_states) * self.scaling # get key, value proj # `past_key_value[0].shape[2] == encoder_hidden_states.shape[1]` # is checking that the `sequence_length` of the `past_key_value` is the same as # the provided `encoder_hidden_states` to support prefix tuning if ( is_cross_attention and past_key_value is not None and past_key_value[0].shape[2] == encoder_hidden_states.shape[1] ): # reuse k,v, cross_attentions key_states = past_key_value[0] value_states = past_key_value[1] elif is_cross_attention: # cross_attentions key_states = self._shape(self.k_proj(encoder_hidden_states), -1, bsz) value_states = self._shape(self.v_proj(encoder_hidden_states), -1, bsz) elif past_key_value is not None: # reuse k, v, self_attention
2,824
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
key_states = self._shape(self.k_proj(hidden_states), -1, bsz) value_states = self._shape(self.v_proj(hidden_states), -1, bsz) key_states = torch.cat([past_key_value[0], key_states], dim=2) value_states = torch.cat([past_key_value[1], value_states], dim=2) else: # self_attention key_states = self._shape(self.k_proj(hidden_states), -1, bsz) value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
2,824
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
if self.is_decoder: # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states. # Further calls to cross_attention layer can then reuse all cross-attention # key/value_states (first "if" case) # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of # all previous decoder key/value_states. Further calls to uni-directional self-attention # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case) # if encoder bi-directional self-attention `past_key_value` is always `None` past_key_value = (key_states, value_states) proj_shape = (bsz * self.num_heads, -1, self.head_dim) query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape) key_states = key_states.reshape(*proj_shape) value_states = value_states.reshape(*proj_shape)
2,824
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
src_len = key_states.size(1) attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): raise ValueError( f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is" f" {attn_weights.size()}" ) if attention_mask is not None: if attention_mask.size() != (bsz, 1, tgt_len, src_len): raise ValueError( f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}" ) attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) attn_weights = nn.functional.softmax(attn_weights, dim=-1)
2,824
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
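The batched matmuls above operate on tensors flattened to `(bsz * num_heads, seq_len, head_dim)`; a standalone shape walkthrough of the score / softmax / value steps, with illustrative sizes:

```python
import torch
import torch.nn as nn

bsz, num_heads, tgt_len, src_len, head_dim = 2, 4, 5, 7, 8

query_states = torch.randn(bsz * num_heads, tgt_len, head_dim)
key_states = torch.randn(bsz * num_heads, src_len, head_dim)
value_states = torch.randn(bsz * num_heads, src_len, head_dim)

attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
print(attn_weights.shape)  # torch.Size([8, 5, 7]) -- (bsz * num_heads, tgt_len, src_len)

attn_probs = nn.functional.softmax(attn_weights, dim=-1)
attn_output = torch.bmm(attn_probs, value_states)
print(attn_output.shape)   # torch.Size([8, 5, 8]) -- (bsz * num_heads, tgt_len, head_dim)
```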
if layer_head_mask is not None: if layer_head_mask.size() != (self.num_heads,): raise ValueError( f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" f" {layer_head_mask.size()}" ) attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
2,824
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
if output_attentions: # this operation is a bit awkward, but it's required to # make sure that attn_weights keeps its gradient. # In order to do so, attn_weights have to be reshaped # twice and have to be reused in the following attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len) else: attn_weights_reshaped = None attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) attn_output = torch.bmm(attn_probs, value_states) if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): raise ValueError( f"`attn_output` should be of size {(bsz * self.num_heads, tgt_len, self.head_dim)}, but is" f" {attn_output.size()}" )
2,824
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim) attn_output = attn_output.transpose(1, 2) # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be # partitioned across GPUs when using tensor-parallelism. attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) attn_output = self.out_proj(attn_output) return attn_output, attn_weights_reshaped, past_key_value
2,824
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
class NllbMoeEncoderLayer(nn.Module): def __init__(self, config: NllbMoeConfig, is_sparse: bool = False): super().__init__() self.embed_dim = config.d_model self.is_sparse = is_sparse self.self_attn = NllbMoeAttention( embed_dim=self.embed_dim, num_heads=config.encoder_attention_heads, dropout=config.attention_dropout, ) self.attn_dropout = nn.Dropout(config.dropout) self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim) if not self.is_sparse: self.ffn = NllbMoeDenseActDense(config, ffn_dim=config.encoder_ffn_dim) else: self.ffn = NllbMoeSparseMLP(config, ffn_dim=config.encoder_ffn_dim) self.ff_layer_norm = nn.LayerNorm(config.d_model) self.ff_dropout = nn.Dropout(config.activation_dropout)
2,825
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, layer_head_mask: torch.Tensor, output_attentions: bool = False, output_router_logits: bool = False, ) -> torch.Tensor: """ Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. """ residual = hidden_states
2,825
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
hidden_states = self.self_attn_layer_norm(hidden_states) hidden_states, attn_weights, _ = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = self.attn_dropout(hidden_states) hidden_states = residual + hidden_states
2,825
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
residual = hidden_states hidden_states = self.ff_layer_norm(hidden_states) if self.is_sparse: hidden_states, router_states = self.ffn(hidden_states, attention_mask) else: # router_states set to None to track which layers have None gradients. hidden_states, router_states = self.ffn(hidden_states), None hidden_states = self.ff_dropout(hidden_states) hidden_states = residual + hidden_states if hidden_states.dtype == torch.float16 and ( torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any() ): clamp_value = torch.finfo(hidden_states.dtype).max - 1000 hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) outputs = (hidden_states,) if output_attentions: outputs += (attn_weights,) if output_router_logits: outputs += (router_states,) return outputs
2,825
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
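The fp16 clamp at the end of the encoder layer can be exercised on its own; a tiny sketch showing an overflowing activation being pulled back into the representable half-precision range:

```python
import torch

hidden_states = torch.tensor([1.0, float("inf"), -float("inf")], dtype=torch.float16)
if torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any():
    clamp_value = torch.finfo(hidden_states.dtype).max - 1000
    hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
print(hidden_states)  # infinities replaced by roughly +/- (65504 - 1000)
```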
class NllbMoeDecoderLayer(nn.Module): def __init__(self, config: NllbMoeConfig, is_sparse: bool = False): super().__init__() self.embed_dim = config.d_model self.is_sparse = is_sparse self.self_attn = NllbMoeAttention( embed_dim=self.embed_dim, num_heads=config.decoder_attention_heads, dropout=config.attention_dropout, is_decoder=True, ) self.dropout = config.dropout self.activation_fn = ACT2FN[config.activation_function] self.attn_dropout = nn.Dropout(config.dropout)
2,826
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim) self.cross_attention = NllbMoeAttention( self.embed_dim, config.decoder_attention_heads, config.attention_dropout, is_decoder=True ) self.cross_attention_layer_norm = nn.LayerNorm(self.embed_dim) if not self.is_sparse: self.ffn = NllbMoeDenseActDense(config, ffn_dim=config.decoder_ffn_dim) else: self.ffn = NllbMoeSparseMLP(config, ffn_dim=config.decoder_ffn_dim) self.ff_layer_norm = nn.LayerNorm(config.d_model) self.ff_dropout = nn.Dropout(config.activation_dropout)
2,826
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, layer_head_mask: Optional[torch.Tensor] = None, cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_value: Optional[Tuple[torch.Tensor]] = None, output_attentions: Optional[bool] = False, output_router_logits: Optional[bool] = False, use_cache: Optional[bool] = True, ) -> torch.Tensor: """ Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. encoder_hidden_states (`torch.FloatTensor`):
2,826
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size `(encoder_attention_heads,)`. cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of size `(decoder_attention_heads,)`. past_key_value (`Tuple(torch.FloatTensor)`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. """ residual = hidden_states
2,826
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
hidden_states = self.self_attn_layer_norm(hidden_states)
2,826
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
# Self Attention # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None # add present self-attn cache to positions 1,2 of present_key_value tuple hidden_states, self_attn_weights, present_key_value = self.self_attn( hidden_states=hidden_states, past_key_value=self_attn_past_key_value, attention_mask=attention_mask, layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = self.attn_dropout(hidden_states) hidden_states = residual + hidden_states # Cross-Attention Block cross_attn_present_key_value = None cross_attn_weights = None if encoder_hidden_states is not None: residual = hidden_states hidden_states = self.cross_attention_layer_norm(hidden_states)
2,826
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
# cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None hidden_states, cross_attn_weights, cross_attn_present_key_value = self.cross_attention( hidden_states=hidden_states, encoder_hidden_states=encoder_hidden_states, past_key_value=cross_attn_past_key_value, attention_mask=encoder_attention_mask, layer_head_mask=cross_attn_layer_head_mask, output_attentions=output_attentions, ) hidden_states = self.attn_dropout(hidden_states) hidden_states = residual + hidden_states # add cross-attn to positions 3,4 of present_key_value tuple present_key_value += cross_attn_present_key_value # Fully Connected residual = hidden_states
2,826
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
hidden_states = self.ff_layer_norm(hidden_states) if self.is_sparse: hidden_states, router_states = self.ffn(hidden_states, attention_mask) else: hidden_states, router_states = self.ffn(hidden_states), None hidden_states = self.ff_dropout(hidden_states) hidden_states = residual + hidden_states # clamp inf values to enable fp16 training if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any(): clamp_value = torch.finfo(hidden_states.dtype).max - 1000 hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) outputs = (hidden_states, present_key_value) if output_attentions: outputs += (self_attn_weights, cross_attn_weights) if output_router_logits: outputs += (router_states,) return outputs
2,826
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
class NllbMoePreTrainedModel(PreTrainedModel): config_class = NllbMoeConfig base_model_prefix = "model" supports_gradient_checkpointing = True _no_split_modules = ["NllbMoeEncoderLayer", "NllbMoeDecoderLayer"] def _init_weights(self, module): """Initialize the weights""" std = self.config.init_std if isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=std) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.Embedding): module.weight.data.normal_(mean=0.0, std=std) if module.padding_idx is not None: module.weight.data[module.padding_idx].zero_()
2,827
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
class NllbMoeEncoder(NllbMoePreTrainedModel): """ Transformer encoder consisting of *config.encoder_layers* self attention layers. Each layer is a [`NllbMoeEncoderLayer`]. Args: config: NllbMoeConfig embed_tokens (nn.Embedding): output embedding """ def __init__(self, config: NllbMoeConfig, embed_tokens: Optional[nn.Embedding] = None): super().__init__(config) self.dropout = config.dropout self.layerdrop = config.encoder_layerdrop embed_dim = config.d_model self.padding_idx = config.pad_token_id self.max_source_positions = config.max_position_embeddings embed_scale = math.sqrt(embed_dim) if config.scale_embedding else 1.0 self.embed_tokens = NllbMoeScaledWordEmbedding( config.vocab_size, embed_dim, self.padding_idx, embed_scale=embed_scale ) if embed_tokens is not None: self.embed_tokens.weight = embed_tokens.weight
2,828
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
self.embed_positions = NllbMoeSinusoidalPositionalEmbedding( config.max_position_embeddings, embed_dim, self.padding_idx, ) sparse_step = config.encoder_sparse_step self.layers = nn.ModuleList() for i in range(config.encoder_layers): is_sparse = (i + 1) % sparse_step == 0 if sparse_step > 0 else False self.layers.append(NllbMoeEncoderLayer(config, is_sparse)) self.layer_norm = nn.LayerNorm(config.d_model) self.gradient_checkpointing = False # Initialize weights and apply final processing self.post_init()
2,828
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
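Given `encoder_sparse_step`, the loop above marks every `sparse_step`-th layer (1-indexed) as sparse; a quick check of which layers end up sparse with the default 12 layers and step 4:

```python
encoder_layers, sparse_step = 12, 4

is_sparse = [(i + 1) % sparse_step == 0 if sparse_step > 0 else False for i in range(encoder_layers)]
print([i for i, sparse in enumerate(is_sparse) if sparse])  # [3, 7, 11] -> layers 4, 8 and 12
```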
def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_router_logits: Optional[bool] = None, return_dict: Optional[bool] = None, ): r""" Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details.
2,828
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
[What are input IDs?](../glossary#input-ids) attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**.
2,828
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. output_router_logits (`bool`, *optional*): Whether or not to return the logits of all the routers. They are useful for computing the router loss,
2,828
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
and should not be returned during inference. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.return_dict
2,828
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
# retrieve input_ids and inputs_embeds if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask) input_shape = input_ids.size() input_ids = input_ids.view(-1, input_shape[-1]) elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") if inputs_embeds is None: inputs_embeds = self.embed_tokens(input_ids) embed_pos = self.embed_positions(input_ids, inputs_embeds) embed_pos = embed_pos.to(inputs_embeds.device) hidden_states = inputs_embeds + embed_pos hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
2,828
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
# expand attention_mask if attention_mask is not None: # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask(attention_mask, inputs_embeds.dtype) encoder_states = () if output_hidden_states else None all_router_probs = () if output_router_logits else None all_attentions = () if output_attentions else None # check if head_mask has a correct number of layers specified if desired if head_mask is not None: if head_mask.size()[0] != len(self.layers): raise ValueError( f"The head_mask should be specified for {len(self.layers)} layers, but it is for" f" {head_mask.size()[0]}." )
2,828
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) dropout_probability = torch.rand([]) if self.training and (dropout_probability < self.layerdrop): # skip the layer layer_outputs = (None, None, None) else: if self.gradient_checkpointing and self.training: layer_outputs = self._gradient_checkpointing_func( encoder_layer.__call__, hidden_states, attention_mask, (head_mask[idx] if head_mask is not None else None), output_attentions, ) else: layer_outputs = encoder_layer( hidden_states, attention_mask,
2,828
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, output_router_logits=output_router_logits, )
2,828
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
hidden_states = layer_outputs[0] if output_attentions: all_attentions += (layer_outputs[1],) if output_router_logits: all_router_probs += (layer_outputs[-1],) last_hidden_state = self.layer_norm(hidden_states) if output_hidden_states: encoder_states += (last_hidden_state,) if not return_dict: return tuple( v for v in [last_hidden_state, encoder_states, all_attentions, all_router_probs] if v is not None ) return MoEModelOutput( last_hidden_state=last_hidden_state, hidden_states=encoder_states, attentions=all_attentions, router_probs=all_router_probs, )
2,828
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
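A hedged end-to-end sketch of the model with a deliberately tiny, randomly initialized configuration (illustrative sizes only; the released checkpoint is `facebook/nllb-moe-54b`), requesting the router outputs discussed above:

```python
import torch
from transformers import NllbMoeConfig, NllbMoeModel

config = NllbMoeConfig(
    vocab_size=128, d_model=32, encoder_layers=2, decoder_layers=2,
    encoder_ffn_dim=64, decoder_ffn_dim=64,
    encoder_attention_heads=4, decoder_attention_heads=4,
    num_experts=4, expert_capacity=8,
    encoder_sparse_step=2, decoder_sparse_step=2,
)
model = NllbMoeModel(config).eval()

input_ids = torch.randint(4, config.vocab_size, (1, 6))
decoder_input_ids = torch.randint(4, config.vocab_size, (1, 4))
with torch.no_grad():
    outputs = model(
        input_ids=input_ids,
        decoder_input_ids=decoder_input_ids,
        output_router_logits=True,
    )
print(outputs.last_hidden_state.shape)  # torch.Size([1, 4, 32])
```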
class NllbMoeDecoder(NllbMoePreTrainedModel): """ Transformer decoder consisting of *config.decoder_layers* layers. Each layer is a [`NllbMoeDecoderLayer`] Args: config: NllbMoeConfig embed_tokens (nn.Embedding): output embedding """ def __init__(self, config: NllbMoeConfig, embed_tokens: Optional[nn.Embedding] = None): super().__init__(config) self.dropout = config.dropout self.layerdrop = config.decoder_layerdrop self.padding_idx = config.pad_token_id self.max_target_positions = config.max_position_embeddings embed_scale = math.sqrt(config.d_model) if config.scale_embedding else 1.0 self.embed_tokens = NllbMoeScaledWordEmbedding( config.vocab_size, config.d_model, self.padding_idx, embed_scale=embed_scale ) if embed_tokens is not None: self.embed_tokens.weight = embed_tokens.weight
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
self.embed_positions = NllbMoeSinusoidalPositionalEmbedding( config.max_position_embeddings, config.d_model, self.padding_idx, ) sparse_step = config.decoder_sparse_step self.layers = nn.ModuleList() for i in range(config.decoder_layers): is_sparse = (i + 1) % sparse_step == 0 if sparse_step > 0 else False self.layers.append(NllbMoeDecoderLayer(config, is_sparse)) self.layer_norm = nn.LayerNorm(config.d_model) self.gradient_checkpointing = False # Initialize weights and apply final processing self.post_init()
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, head_mask: Optional[torch.Tensor] = None, cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[List[torch.FloatTensor]] = None, inputs_embeds: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_router_logits: Optional[bool] = None, return_dict: Optional[bool] = None, ): r""" Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**.
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
[What are attention masks?](../glossary#attention-mask) encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. encoder_attention_mask (`torch.LongTensor` of shape `(batch_size, encoder_sequence_length)`, *optional*): Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
- 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing cross-attention on hidden heads. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*):
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. output_router_logits (`bool`, *optional*): Whether or not to return the logits of all the routers. They are useful for computing the router loss, and should not be returned during inference. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.return_dict
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
# retrieve input_ids and inputs_embeds if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time") elif input_ids is not None: input_shape = input_ids.size() input_ids = input_ids.view(-1, input_shape[-1]) elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] else: raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds") # past_key_values_length past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0 if inputs_embeds is None: inputs_embeds = self.embed_tokens(input_ids)
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
# create causal mask # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] combined_attention_mask = _prepare_4d_causal_attention_mask( attention_mask, input_shape, inputs_embeds, past_key_values_length ) # expand encoder attention mask if encoder_hidden_states is not None and encoder_attention_mask is not None: # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask( encoder_attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1] ) # embed positions positions = self.embed_positions(input_ids, inputs_embeds, past_key_values_length) positions = positions.to(inputs_embeds.device) hidden_states = inputs_embeds + positions hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
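As a rough illustration of what the causal-mask preparation step produces, the sketch below builds an additive causal mask by hand for a toy sequence with a cache. It is a simplification under assumed sizes (no padding handling) and is not the `_prepare_4d_causal_attention_mask` helper the model actually calls.

```python
import torch

bsz, past_len, tgt_len = 1, 2, 3
dtype = torch.float32

# Disallow everything by default with a large negative value.
mask = torch.full((tgt_len, past_len + tgt_len), torch.finfo(dtype).min, dtype=dtype)

# Each new token may attend to all cached positions plus the positions up to and including itself.
allowed = torch.arange(past_len + tgt_len) <= (torch.arange(tgt_len) + past_len).unsqueeze(-1)
mask = mask.masked_fill(allowed, 0.0)

# Broadcast to the [bsz, 1, tgt_seq_len, src_seq_len] layout expected by the attention layers.
causal_mask = mask[None, None, :, :].expand(bsz, 1, tgt_len, past_len + tgt_len)
```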
if self.gradient_checkpointing and self.training: if use_cache: logger.warning_once( "`use_cache=True` is incompatible with gradient checkpointing. Setting" " `use_cache=False`..." ) use_cache = False # decoder layers all_hidden_states = () if output_hidden_states else None all_self_attns = () if output_attentions else None all_router_probs = () if output_router_logits else None all_cross_attentions = () if output_attentions else None present_key_value_states = () if use_cache else None
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
# check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): if attn_mask is not None: if attn_mask.size()[0] != len(self.layers): raise ValueError( f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" f" {attn_mask.size()[0]}." ) synced_gpus = is_deepspeed_zero3_enabled() or is_fsdp_managed_module(self) for idx, decoder_layer in enumerate(self.layers): if output_hidden_states: all_hidden_states += (hidden_states,) # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) dropout_probability = torch.rand([])
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
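LayerDrop, as used in the loop above, simply skips a whole decoder layer with probability `layerdrop` during training. A minimal sketch of the idea, using a made-up `layerdrop` value and plain linear layers as stand-ins for decoder layers:

```python
import torch
import torch.nn as nn

layerdrop = 0.1  # assumed value for illustration
layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(4)])  # stand-ins for decoder layers
hidden_states = torch.randn(1, 3, 8)

training = True
for layer in layers:
    # During training, draw one uniform sample per layer and skip the layer if it falls below the rate.
    if training and torch.rand([]) < layerdrop:
        continue
    hidden_states = layer(hidden_states)
```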
skip_the_layer = True if self.training and (dropout_probability < self.layerdrop) else False if not skip_the_layer or synced_gpus: layer_head_mask = head_mask[idx] if head_mask is not None else None cross_attn_layer_head_mask = cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None past_key_value = past_key_values[idx] if past_key_values is not None else None
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
# under fsdp or deepspeed zero3 all gpus must run in sync if self.gradient_checkpointing and self.training: if use_cache: logger.warning_once( "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." ) use_cache = False layer_outputs = self._gradient_checkpointing_func( decoder_layer.forward, hidden_states, combined_attention_mask, encoder_hidden_states, encoder_attention_mask, layer_head_mask, cross_attn_layer_head_mask, None, # past_key_value is always None with gradient checkpointing use_cache, output_attentions, ) else:
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
layer_outputs = decoder_layer( hidden_states, attention_mask=combined_attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, layer_head_mask=layer_head_mask, cross_attn_layer_head_mask=cross_attn_layer_head_mask, past_key_value=past_key_value, use_cache=use_cache, output_attentions=output_attentions, output_router_logits=output_router_logits, )
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
hidden_states = layer_outputs[0] if skip_the_layer: continue if use_cache: present_key_value_states += (layer_outputs[1],) if output_attentions: all_self_attns += (layer_outputs[2],) all_cross_attentions += (layer_outputs[3],) if output_router_logits: all_router_probs += (layer_outputs[-1],) hidden_states = self.layer_norm(hidden_states) # Add last layer if output_hidden_states: all_hidden_states += (hidden_states,)
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
if not return_dict: return tuple( v for v in [ hidden_states, present_key_value_states, all_hidden_states, all_self_attns, all_cross_attentions, all_router_probs, ] if v is not None ) return MoEModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, past_key_values=present_key_value_states, hidden_states=all_hidden_states, attentions=all_self_attns, cross_attentions=all_cross_attentions, router_probs=all_router_probs, )
2,829
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
class NllbMoeModel(NllbMoePreTrainedModel): _tied_weights_keys = ["encoder.embed_tokens.weight", "decoder.embed_tokens.weight"] def __init__(self, config: NllbMoeConfig): super().__init__(config) padding_idx, vocab_size = config.pad_token_id, config.vocab_size embed_scale = math.sqrt(config.d_model) if config.scale_embedding else 1.0 self.shared = NllbMoeScaledWordEmbedding(vocab_size, config.d_model, padding_idx, embed_scale=embed_scale) self.encoder = NllbMoeEncoder(config, self.shared) self.decoder = NllbMoeDecoder(config, self.shared) # Initialize weights and apply final processing self.post_init() def get_input_embeddings(self): return self.shared def set_input_embeddings(self, value): self.shared = value self.encoder.embed_tokens = self.shared self.decoder.embed_tokens = self.shared
2,830
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
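The `NllbMoeScaledWordEmbedding` shared between encoder and decoder multiplies looked-up embeddings by `sqrt(d_model)` when `config.scale_embedding` is enabled. The class below is a toy stand-in illustrating that behaviour under assumed sizes, not the library's implementation.

```python
import math
import torch
import torch.nn as nn


class ScaledWordEmbeddingSketch(nn.Embedding):
    """Toy stand-in: a word embedding whose outputs are scaled by a constant factor."""

    def __init__(self, num_embeddings, embedding_dim, padding_idx, embed_scale=1.0):
        super().__init__(num_embeddings, embedding_dim, padding_idx=padding_idx)
        self.embed_scale = embed_scale

    def forward(self, input_ids):
        return super().forward(input_ids) * self.embed_scale


d_model = 16  # illustrative size
emb = ScaledWordEmbeddingSketch(100, d_model, padding_idx=1, embed_scale=math.sqrt(d_model))
out = emb(torch.tensor([[3, 4, 5]]))  # shape (1, 3, 16), scaled by sqrt(16)
```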
def _tie_weights(self): if self.config.tie_word_embeddings: self._tie_or_clone_weights(self.encoder.embed_tokens, self.shared) self._tie_or_clone_weights(self.decoder.embed_tokens, self.shared) def get_encoder(self): return self.encoder def get_decoder(self): return self.decoder
2,830
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
@add_start_docstrings_to_model_forward(NLLB_MOE_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=Seq2SeqMoEModelOutput, config_class=_CONFIG_FOR_DOC) def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, head_mask: Optional[torch.Tensor] = None, decoder_head_mask: Optional[torch.Tensor] = None, cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None,
2,830
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_router_logits: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple[torch.Tensor], Seq2SeqMoEModelOutput]: r""" Returns:
2,830
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
Example: ```python >>> from transformers import AutoTokenizer, NllbMoeModel >>> tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/random-nllb-moe-2-experts") >>> model = NllbMoeModel.from_pretrained("hf-internal-testing/random-nllb-moe-2-experts") >>> input_ids = tokenizer( ... "Studies have been shown that owning a dog is good for you", return_tensors="pt" ... ).input_ids # Batch size 1 >>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1 >>> # preprocess: Prepend decoder_input_ids with start token which is pad token for NllbMoeModel >>> decoder_input_ids = model._shift_right(decoder_input_ids)
2,830
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
>>> # forward pass >>> outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) >>> last_hidden_states = outputs.last_hidden_state ```""" return_dict = return_dict if return_dict is not None else self.config.return_dict if encoder_outputs is None: encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, output_router_logits=output_router_logits, return_dict=return_dict, ) # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True elif return_dict and not isinstance(encoder_outputs, MoEModelOutput): encoder_outputs = MoEModelOutput(
2,830
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
last_hidden_state=encoder_outputs[0], hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, router_probs=encoder_outputs[3] if len(encoder_outputs) > 3 else None, )
2,830
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
# decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn) decoder_outputs = self.decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, head_mask=decoder_head_mask, cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, output_router_logits=output_router_logits, return_dict=return_dict, ) if not return_dict: return decoder_outputs + encoder_outputs
2,830
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
return Seq2SeqMoEModelOutput( past_key_values=decoder_outputs.past_key_values, cross_attentions=decoder_outputs.cross_attentions, last_hidden_state=decoder_outputs.last_hidden_state, encoder_last_hidden_state=encoder_outputs.last_hidden_state, encoder_hidden_states=encoder_outputs.hidden_states, decoder_hidden_states=decoder_outputs.hidden_states, encoder_attentions=encoder_outputs.attentions, decoder_attentions=decoder_outputs.attentions, encoder_router_logits=encoder_outputs.router_probs, decoder_router_logits=decoder_outputs.router_probs, )
2,830
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
class NllbMoeForConditionalGeneration(NllbMoePreTrainedModel, GenerationMixin): base_model_prefix = "model" _tied_weights_keys = ["encoder.embed_tokens.weight", "decoder.embed_tokens.weight", "lm_head.weight"] def __init__(self, config: NllbMoeConfig): super().__init__(config) self.model = NllbMoeModel(config) self.lm_head = nn.Linear(config.d_model, config.vocab_size, bias=False) self.router_z_loss_coef = config.router_z_loss_coef self.router_aux_loss_coef = config.router_aux_loss_coef # Initialize weights and apply final processing self.post_init() def get_encoder(self): return self.model.get_encoder() def get_decoder(self): return self.model.get_decoder() def get_output_embeddings(self): return self.lm_head def set_output_embeddings(self, new_embeddings): self.lm_head = new_embeddings
2,831
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
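Because `lm_head.weight` appears in `_tied_weights_keys`, the output projection typically shares its weight matrix with the shared input embedding when word-embedding tying is enabled. A minimal sketch of that tying, with illustrative sizes rather than the real model's dimensions:

```python
import torch.nn as nn

vocab_size, d_model = 100, 16  # illustrative sizes
shared = nn.Embedding(vocab_size, d_model, padding_idx=1)
lm_head = nn.Linear(d_model, vocab_size, bias=False)

# Tie the projection to the embedding table so both names point at the same parameter.
lm_head.weight = shared.weight
assert lm_head.weight.data_ptr() == shared.weight.data_ptr()
```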
@add_start_docstrings_to_model_forward(NLLB_MOE_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=Seq2SeqMoEOutput, config_class=_CONFIG_FOR_DOC) @add_end_docstrings(NLLB_MOE_GENERATION_EXAMPLE) def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, head_mask: Optional[torch.Tensor] = None, decoder_head_mask: Optional[torch.Tensor] = None, cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None,
2,831
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_router_logits: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple[torch.Tensor], Seq2SeqMoEOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
2,831
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
Returns: """ return_dict = return_dict if return_dict is not None else self.config.return_dict output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_router_logits = ( output_router_logits if output_router_logits is not None else self.config.output_router_logits ) if labels is not None: if decoder_input_ids is None: decoder_input_ids = shift_tokens_right( labels, self.config.pad_token_id, self.config.decoder_start_token_id )
2,831
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
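When only `labels` are given, `decoder_input_ids` are derived by shifting the labels one position to the right and prepending the decoder start token, with the `-100` ignore index replaced by the pad token. The function below is a small sketch of that behaviour using made-up token ids, not the library's `shift_tokens_right`.

```python
import torch

pad_token_id, decoder_start_token_id = 1, 2  # illustrative ids, not taken from a real config


def shift_right_sketch(labels: torch.Tensor) -> torch.Tensor:
    shifted = labels.new_zeros(labels.shape)
    shifted[:, 1:] = labels[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    # -100 is the ignore index for the loss; it must not leak into the decoder inputs.
    shifted.masked_fill_(shifted == -100, pad_token_id)
    return shifted


labels = torch.tensor([[5, 6, 7, -100]])
print(shift_right_sketch(labels))  # tensor([[2, 5, 6, 7]])
```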
outputs = self.model( input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, head_mask=head_mask, decoder_head_mask=decoder_head_mask, cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, output_router_logits=output_router_logits, return_dict=return_dict, ) lm_logits = self.lm_head(outputs[0]) loss = None encoder_aux_loss = None decoder_aux_loss = None
2,831
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
if labels is not None: loss_fct = CrossEntropyLoss(ignore_index=-100) # TODO: check in the config if the router loss is enabled if output_router_logits: encoder_router_logits = outputs[-1] decoder_router_logits = outputs[3 if output_attentions else 4] # Compute the router loss (z_loss + auxiliary loss) for each router in the encoder and decoder encoder_router_logits, encoder_expert_indexes = self._unpack_router_logits(encoder_router_logits) encoder_aux_loss = load_balancing_loss_func(encoder_router_logits, encoder_expert_indexes) decoder_router_logits, decoder_expert_indexes = self._unpack_router_logits(decoder_router_logits) decoder_aux_loss = load_balancing_loss_func(decoder_router_logits, decoder_expert_indexes) loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
2,831
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
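The auxiliary load-balancing term follows the Switch Transformers recipe: for each expert, multiply the fraction of tokens routed to it by the mean router probability it receives, sum over experts, and scale by the number of experts. The function below is a rough sketch of that computation under assumed tensor shapes, not the library's `load_balancing_loss_func`.

```python
import torch


def load_balancing_loss_sketch(router_probs: torch.Tensor, expert_indices: torch.Tensor) -> torch.Tensor:
    """router_probs: [num_tokens, num_experts] softmax outputs; expert_indices: [num_tokens] chosen expert."""
    num_experts = router_probs.shape[-1]
    # Fraction of tokens dispatched to each expert.
    expert_mask = torch.nn.functional.one_hot(expert_indices, num_experts).float()
    tokens_per_expert = expert_mask.mean(dim=0)
    # Mean router probability assigned to each expert.
    router_prob_per_expert = router_probs.mean(dim=0)
    return num_experts * torch.sum(tokens_per_expert * router_prob_per_expert)


probs = torch.softmax(torch.randn(10, 4), dim=-1)
chosen = probs.argmax(dim=-1)
aux = load_balancing_loss_sketch(probs, chosen)  # minimized when routing is perfectly balanced
```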
if output_router_logits and labels is not None: aux_loss = self.router_aux_loss_coef * (encoder_aux_loss + decoder_aux_loss) loss = loss + aux_loss output = (loss,) if loss is not None else () if not return_dict: output += (lm_logits,) if output_router_logits: # only return the loss if they are not None output += ( encoder_aux_loss, decoder_aux_loss, *outputs[1:], ) else: output += outputs[1:] return output
2,831
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py
return Seq2SeqMoEOutput( loss=loss, logits=lm_logits, past_key_values=outputs.past_key_values, cross_attentions=outputs.cross_attentions, encoder_aux_loss=encoder_aux_loss, decoder_aux_loss=decoder_aux_loss, encoder_last_hidden_state=outputs.encoder_last_hidden_state, encoder_hidden_states=outputs.encoder_hidden_states, decoder_hidden_states=outputs.decoder_hidden_states, encoder_attentions=outputs.encoder_attentions, decoder_attentions=outputs.decoder_attentions, encoder_router_logits=outputs.encoder_router_logits, decoder_router_logits=outputs.decoder_router_logits, )
2,831
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/nllb_moe/modeling_nllb_moe.py