| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def convert_tokens_to_string(self, tokens):
"""Converts a sequence of tokens (string) in a single string."""
text = "".join(tokens)
text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors)
    return text | Converts a sequence of tokens (string) into a single string. | convert_tokens_to_string | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper.py | Apache-2.0 |
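The detokenization in the row above is the GPT-2-style byte-level scheme: every character of a token maps back to one raw byte, and the resulting byte string is UTF-8-decoded. A minimal runnable sketch, where `bytes_to_unicode` is reconstructed from the well-known GPT-2 table and `byte_decoder` is simply its inverse (an assumption about how the tokenizer's `byte_decoder` is built):

```python
def bytes_to_unicode():
    """GPT-2's reversible byte <-> printable-unicode mapping."""
    bs = (
        list(range(ord("!"), ord("~") + 1))
        + list(range(ord("¡"), ord("¬") + 1))
        + list(range(ord("®"), ord("ÿ") + 1))
    )
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            # map non-printable bytes to unused codepoints above 255
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, (chr(c) for c in cs)))

byte_encoder = bytes_to_unicode()
byte_decoder = {v: k for k, v in byte_encoder.items()}

def convert_tokens_to_string(tokens):
    # join tokens, map each char back to its byte, then UTF-8 decode
    text = "".join(tokens)
    return bytearray(byte_decoder[c] for c in text).decode("utf-8", errors="replace")
```

Round-tripping non-ASCII text through `byte_encoder` and back recovers the original string.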
def get_prompt_ids(self, text: str, return_tensors="np"):
"""Converts prompt text to IDs that can be passed to [`~WhisperForConditionalGeneration.generate`]."""
batch_encoding = self("<|startofprev|>", " " + text.strip(), add_special_tokens=False)
# Check for special tokens
prompt_text_... | Converts prompt text to IDs that can be passed to [`~WhisperForConditionalGeneration.generate`]. | get_prompt_ids | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper.py | Apache-2.0 |
def _decode_asr(tokenizer, model_outputs, *, return_timestamps, return_language, time_precision):
"""
    Internal method meant to be used only by the ASR pipeline. Handles all the little quirks specific to Whisper
    for the various options not allowed in other seq2seq models.
"""
# =========== Overview... |
Internal method meant to be used only by the ASR pipeline. Handles all the little quirks specific to Whisper
for the various options not allowed in other seq2seq models.
| _decode_asr | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper.py | Apache-2.0 |
def _split_tokens_on_unicode(tokenizer, tokens: List[int]):
"""Combine tokens into words by splitting at any position where the tokens are decoded as valid unicode points."""
decoded_full = tokenizer.decode(tokens, decode_with_timestamps=True)
replacement_char = "\ufffd"
words = []
word_tokens = []... | Combine tokens into words by splitting at any position where the tokens are decoded as valid unicode points. | _split_tokens_on_unicode | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper.py | Apache-2.0 |
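The replacement-character trick behind `_split_tokens_on_unicode` can be shown without a tokenizer: decode a growing byte buffer and cut a boundary whenever the buffer decodes without U+FFFD. A simplified sketch under that assumption (the real Whisper helper additionally guards against tokens that legitimately decode to U+FFFD via the byte decoder):

```python
REPLACEMENT_CHAR = "\ufffd"

def split_bytes_on_unicode(byte_values):
    """Split a UTF-8 byte stream into minimal chunks that decode cleanly."""
    chunks, current = [], []
    for b in byte_values:
        current.append(b)
        decoded = bytes(current).decode("utf-8", errors="replace")
        # a chunk is complete once it decodes without a replacement char
        if REPLACEMENT_CHAR not in decoded:
            chunks.append(decoded)
            current = []
    return chunks
```

Multi-byte characters stay intact because their prefix bytes decode to U+FFFD until the sequence is complete.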
def _split_tokens_on_spaces(tokenizer, tokens: List[int]):
"""Combine tokens into words by splitting at whitespace and punctuation tokens."""
subwords, subword_tokens_list, subword_indices_list = _split_tokens_on_unicode(tokenizer, tokens)
words = []
word_tokens = []
token_indices = []
for subw... | Combine tokens into words by splitting at whitespace and punctuation tokens. | _split_tokens_on_spaces | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper.py | Apache-2.0 |
def _merge_punctuations(words, tokens, indices, prepended, appended):
"""Merges punctuation tokens with neighboring words."""
# prepend punctuations
i = len(words) - 2
j = len(words) - 1
while i >= 0:
if words[i].startswith(" ") and words[i].strip() in prepended:
words[j] = words... | Merges punctuation tokens with neighboring words. | _merge_punctuations | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper.py | Apache-2.0 |
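The truncated merge logic above follows a two-pass pattern: a backward pass attaches opening punctuation to the word after it, then a forward pass attaches closing punctuation to the word before it. A simplified sketch over words only (the real function updates `tokens` and `indices` in lockstep; the punctuation sets here are illustrative defaults, not the pipeline's exact ones):

```python
def merge_punctuations(words, prepended="¿([{-", appended=".,!?:)]}"):
    # backward pass: prepend opening punctuation to the following word
    i, j = len(words) - 2, len(words) - 1
    while i >= 0:
        if words[i].startswith(" ") and words[i].strip() in prepended:
            words[j] = words[i] + words[j]
            words[i] = ""
        else:
            j = i
        i -= 1
    # forward pass: append closing punctuation to the preceding word
    i, j = 0, 1
    while j < len(words):
        if not words[i].endswith(" ") and words[j] in appended:
            words[i] = words[i] + words[j]
            words[j] = ""
        else:
            i = j
        j += 1
    # drop the emptied slots
    return [w for w in words if w]
```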
def _decode_with_timestamps(
self, token_ids, skip_special_tokens=False, time_precision=0.02, segment_size=1500
) -> str:
"""
Timestamp tokens are above the special tokens' id range and are ignored by `decode()`. This method decodes
        given tokens with timestamp tokens annotated, e.g.... |
Timestamp tokens are above the special tokens' id range and are ignored by `decode()`. This method decodes
given tokens with timestamp tokens annotated, e.g. "<|1.08|>".
| _decode_with_timestamps | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper_fast.py | Apache-2.0 |
def _compute_offsets(self, token_ids, time_precision=0.02, segment_size=1500):
"""
Compute offsets for a given tokenized input
Args:
token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`):
List of tokenized input ids. Can be obtained using the `__ca... |
Compute offsets for a given tokenized input
Args:
token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`):
List of tokenized input ids. Can be obtained using the `__call__` method.
time_precision (`float`, *optional*, defaults to 0.02):
... | _compute_offsets | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper_fast.py | Apache-2.0 |
def _preprocess_token_ids(self, token_ids, skip_special_tokens: bool = False):
"""
Pre-process the token ids for decoding by removing the prompt tokens ids and timestamp token ids.
Args:
token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`):
List o... |
Pre-process the token ids for decoding by removing the prompt tokens ids and timestamp token ids.
Args:
token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`):
List of tokenized input ids. Typically, obtained using the `__call__` method of the tokenizer.
... | _preprocess_token_ids | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper_fast.py | Apache-2.0 |
def decode(
self,
token_ids,
skip_special_tokens: bool = False,
clean_up_tokenization_spaces: Optional[bool] = None,
output_offsets: bool = False,
time_precision: float = 0.02,
decode_with_timestamps: bool = False,
normalize: bool = False,
basic_no... |
Converts a sequence of ids into a string, using the tokenizer and vocabulary with options to remove special
tokens and clean up tokenization spaces.
Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.
Args:
token_ids (`Union[int, List[int... | decode | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper_fast.py | Apache-2.0 |
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None) -> List[int]:
"""Build model inputs from a sequence by appending eos_token_id."""
if token_ids_1 is None:
return self.prefix_tokens + token_ids_0 + [self.eos_token_id]
# We don't expect to process pairs, but le... | Build model inputs from a sequence by appending eos_token_id. | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper_fast.py | Apache-2.0 |
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*)... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper_fast.py | Apache-2.0 |
def get_prompt_ids(self, text: str, return_tensors="np"):
"""Converts prompt text to IDs that can be passed to [`~WhisperForConditionalGeneration.generate`]."""
batch_encoding = self("<|startofprev|>", " " + text.strip(), add_special_tokens=False)
# Check for special tokens
prompt_text_... | Converts prompt text to IDs that can be passed to [`~WhisperForConditionalGeneration.generate`]. | get_prompt_ids | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper_fast.py | Apache-2.0 |
def _concatenate_to_cache(self, key, value, query, attention_mask):
"""
This function takes projected key, value states from a single input token and concatenates the states to cached
states from previous steps. This function is slightly adapted from the official Flax repository:
https:/... |
This function takes projected key, value states from a single input token and concatenates the states to cached
states from previous steps. This function is slightly adapted from the official Flax repository:
https://github.com/google/flax/blob/491ce18759622506588784b4fca0e4bf05f8c8cd/flax/line... | _concatenate_to_cache | python | huggingface/transformers | src/transformers/models/xglm/modeling_flax_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/modeling_flax_xglm.py | Apache-2.0 |
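The Flax cache update above can be mimicked in NumPy: a preallocated buffer of shape `(batch, max_length, heads, head_dim)` is written in place at the current index, which then advances by the number of new tokens. A sketch under those assumed shapes (the real Flax version uses `lax.dynamic_update_slice` and also builds a causal attention mask over the filled slots):

```python
import numpy as np

def concatenate_to_cache(cached_key, cached_value, key, value, cache_index):
    """Write new key/value states into a preallocated cache and advance the index."""
    num_new = key.shape[1]
    cached_key[:, cache_index:cache_index + num_new] = key
    cached_value[:, cache_index:cache_index + num_new] = value
    # attention may only look at filled slots: positions < cache_index + num_new
    valid = np.arange(cached_key.shape[1]) < cache_index + num_new
    return cached_key, cached_value, cache_index + num_new, valid
```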
def init_cache(self, batch_size, max_length):
r"""
Args:
batch_size (`int`):
batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
max_length (`int`):
maximum possible length for auto-regressive decodin... |
Args:
batch_size (`int`):
batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
max_length (`int`):
maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized
... | init_cache | python | huggingface/transformers | src/transformers/models/xglm/modeling_flax_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/modeling_flax_xglm.py | Apache-2.0 |
def _create_position_ids_from_input_ids(
input_ids: tf.Tensor, past_key_values_length: int, padding_idx: Optional[int]
) -> tf.Tensor:
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils... |
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
| _create_position_ids_from_input_ids | python | huggingface/transformers | src/transformers/models/xglm/modeling_tf_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/modeling_tf_xglm.py | Apache-2.0 |
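The fairseq `make_positions` scheme described above is easy to reproduce in NumPy: non-padding tokens get positions `padding_idx + 1, padding_idx + 2, ...` while padding tokens keep `padding_idx`. A sketch of that logic:

```python
import numpy as np

def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    # 1 for real tokens, 0 for padding
    mask = (input_ids != padding_idx).astype(np.int64)
    # cumulative count of real tokens gives 1-based positions; padding stays 0
    incremental_indices = (np.cumsum(mask, axis=1) + past_key_values_length) * mask
    # shift everything so padding ends up exactly at padding_idx
    return incremental_indices + padding_idx
```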
def _create_position_ids_from_inputs_embeds(
inputs_embeds: tf.Tensor, past_key_values_length: int, padding_idx: Optional[int]
) -> tf.Tensor:
"""
    We are provided embeddings directly. We cannot infer which are padded, so just generate sequential position ids.
    Args:
        inputs_embeds: tf.Tensor
Re... |
We are provided embeddings directly. We cannot infer which are padded, so just generate sequential position ids.
Args:
    inputs_embeds: tf.Tensor
Returns: tf.Tensor
| _create_position_ids_from_inputs_embeds | python | huggingface/transformers | src/transformers/models/xglm/modeling_tf_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/modeling_tf_xglm.py | Apache-2.0 |
def _make_causal_mask(input_ids_shape: tf.TensorShape, past_key_values_length: int = 0):
"""
    Make causal mask used for uni-directional (auto-regressive) self-attention.
"""
bsz = input_ids_shape[0]
tgt_len = input_ids_shape[1]
mask = tf.ones((tgt_len, tgt_len)) * LARGE_NEGATIVE
mask_cond = tf.range(shape_list(... |
Make causal mask used for uni-directional (auto-regressive) self-attention.
| _make_causal_mask | python | huggingface/transformers | src/transformers/models/xglm/modeling_tf_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/modeling_tf_xglm.py | Apache-2.0 |
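The masking pattern above can be sketched in NumPy: future positions get a large negative additive bias, and cached past positions (if any) are prepended as zeros so they remain attendable. `LARGE_NEGATIVE` here is an assumed stand-in for the model's masking constant:

```python
import numpy as np

LARGE_NEGATIVE = -1e8  # assumed stand-in for the model's masking constant

def make_causal_mask(tgt_len, past_key_values_length=0):
    """Additive causal mask: 0 where attention is allowed, LARGE_NEGATIVE elsewhere."""
    mask = np.full((tgt_len, tgt_len), LARGE_NEGATIVE)
    mask = np.triu(mask, k=1)  # keep the penalty only strictly above the diagonal
    if past_key_values_length > 0:
        # past (cached) positions are always visible
        past = np.zeros((tgt_len, past_key_values_length))
        mask = np.concatenate([past, mask], axis=1)
    return mask
```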
def call(
self,
hidden_states: tf.Tensor,
attention_mask: tf.Tensor | None = None,
encoder_hidden_states: tf.Tensor | None = None,
encoder_attention_mask: tf.Tensor | None = None,
layer_head_mask: tf.Tensor | None = None,
cross_attn_layer_head_mask: tf.Tensor | No... |
Args:
hidden_states (`tf.Tensor`): input to the layer of shape *(batch, seq_len, embed_dim)*
attention_mask (`tf.Tensor`): attention mask of size
*(batch, 1, tgt_len, src_len)* where padding elements are indicated by very large negative values.
encoder_hidden... | call | python | huggingface/transformers | src/transformers/models/xglm/modeling_tf_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/modeling_tf_xglm.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
encoder_hidden_states: np.ndarray | tf.Tensor | None = None,
encoder_attention_mask: np.ndarray | tf.Tensor... |
labels (`np.ndarray` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
        `labels = input_ids`. Indices are selected in `[-100, 0, ..., config.vocab_size]`. All labels se... | call | python | huggingface/transformers | src/transformers/models/xglm/modeling_tf_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/modeling_tf_xglm.py | Apache-2.0 |
def get_embedding(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None):
"""
Build sinusoidal embeddings.
This matches the implementation in tensor2tensor, but differs slightly from the description in Section 3.5 of
"Attention Is All You Need".
"""
... |
Build sinusoidal embeddings.
This matches the implementation in tensor2tensor, but differs slightly from the description in Section 3.5 of
"Attention Is All You Need".
| get_embedding | python | huggingface/transformers | src/transformers/models/xglm/modeling_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/modeling_xglm.py | Apache-2.0 |
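The tensor2tensor-style sinusoidal table described above puts all sines in the first half of the embedding dimension and all cosines in the second half (rather than interleaving them as in the paper). A NumPy sketch of that construction:

```python
import math
import numpy as np

def get_sinusoidal_embedding(num_embeddings, embedding_dim, padding_idx=None):
    """Sinusoidal position table: sin on the first half of dims, cos on the second."""
    half_dim = embedding_dim // 2
    # geometric progression of inverse frequencies
    scale = math.log(10000) / (half_dim - 1)
    inv_freq = np.exp(np.arange(half_dim) * -scale)
    angles = np.arange(num_embeddings)[:, None] * inv_freq[None, :]
    emb = np.concatenate([np.sin(angles), np.cos(angles)], axis=1)
    if embedding_dim % 2 == 1:
        # zero-pad the last dimension for odd sizes
        emb = np.concatenate([emb, np.zeros((num_embeddings, 1))], axis=1)
    if padding_idx is not None:
        emb[padding_idx] = 0
    return emb
```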
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.Tensor] = None,
encoder_attention_mask: Optional[torch.Tensor] = None,
layer_head_mask: Optional[torch.Tensor] = None,
cross_attn_l... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
... | forward | python | huggingface/transformers | src/transformers/models/xglm/modeling_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/modeling_xglm.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.Tensor] = None,
encoder_attention_mask: Optional[torch.Tensor] = None,
he... |
encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*):
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of
the decoder.
encoder_attention_mask (`torch.LongTensor`... | forward | python | huggingface/transformers | src/transformers/models/xglm/modeling_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/modeling_xglm.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.Tensor] = None,
encoder_attention_mask: Optional[torch.Tensor] = None,
he... |
encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*):
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of
the decoder.
encoder_attention_mask (`torch.LongTensor`... | forward | python | huggingface/transformers | src/transformers/models/xglm/modeling_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/modeling_xglm.py | Apache-2.0 |
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequen... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequence has the following format:
- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s></s> B </s>`
Args:
... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/xglm/tokenization_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/tokenization_xglm.py | Apache-2.0 |
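The `<s> X </s>` / `<s> A </s></s> B </s>` layout described in the docstring can be sketched directly. The ids `0` for `<s>` and `2` for `</s>` are assumptions (the usual fairseq convention), not values taken from this file:

```python
BOS, EOS = 0, 2  # assumed <s> and </s> ids (fairseq convention)

def build_inputs_with_special_tokens(token_ids_0, token_ids_1=None):
    if token_ids_1 is None:
        return [BOS] + token_ids_0 + [EOS]                       # <s> X </s>
    return [BOS] + token_ids_0 + [EOS, EOS] + token_ids_1 + [EOS]  # <s> A </s></s> B </s>
```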
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*)... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/xglm/tokenization_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/tokenization_xglm.py | Apache-2.0 |
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefor... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefore a list of zeros is returned.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`,... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/xglm/tokenization_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/tokenization_xglm.py | Apache-2.0 |
def _convert_token_to_id(self, token):
"""Converts a token (str) in an id using the vocab."""
if token in self.fairseq_tokens_to_ids:
return self.fairseq_tokens_to_ids[token]
spm_id = self.sp_model.PieceToId(token)
# Need to return unknown token if the SP model returned 0
        ... | Converts a token (str) into an id using the vocab. | _convert_token_to_id | python | huggingface/transformers | src/transformers/models/xglm/tokenization_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/tokenization_xglm.py | Apache-2.0 |
def _convert_id_to_token(self, index):
"""Converts an index (integer) in a token (str) using the vocab."""
if index in self.fairseq_ids_to_tokens:
return self.fairseq_ids_to_tokens[index]
        return self.sp_model.IdToPiece(index - self.fairseq_offset) | Converts an index (integer) into a token (str) using the vocab. | _convert_id_to_token | python | huggingface/transformers | src/transformers/models/xglm/tokenization_xglm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/tokenization_xglm.py | Apache-2.0 |
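The two conversion rows above implement the fairseq-offset scheme: a handful of special tokens occupy the lowest ids, and every SentencePiece id is shifted up by an offset. A self-contained sketch with a fake SentencePiece model; the specific token table and `fairseq_offset = 4` are illustrative assumptions, not this tokenizer's actual values:

```python
class FakeSPModel:
    """Stand-in for a SentencePiece model; by convention id 0 is <unk>."""
    def __init__(self, pieces):
        self.pieces = pieces
    def PieceToId(self, piece):
        return self.pieces.index(piece) if piece in self.pieces else 0
    def IdToPiece(self, idx):
        return self.pieces[idx]

sp_model = FakeSPModel(["<unk>", "▁hello", "▁world"])
fairseq_tokens_to_ids = {"<s>": 0, "<pad>": 1, "</s>": 2, "<unk>": 3}
fairseq_ids_to_tokens = {v: k for k, v in fairseq_tokens_to_ids.items()}
fairseq_offset = 4  # assumed: number of reserved low ids before the SP vocab

def convert_token_to_id(token):
    if token in fairseq_tokens_to_ids:
        return fairseq_tokens_to_ids[token]
    spm_id = sp_model.PieceToId(token)
    # SentencePiece returns 0 for unknown pieces -> map to the tokenizer's <unk>
    return spm_id + fairseq_offset if spm_id else fairseq_tokens_to_ids["<unk>"]

def convert_id_to_token(index):
    if index in fairseq_ids_to_tokens:
        return fairseq_ids_to_tokens[index]
    return sp_model.IdToPiece(index - fairseq_offset)
```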
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequen... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequence has the following format:
- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s></s> B </s>`
Args:
... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/xglm/tokenization_xglm_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/tokenization_xglm_fast.py | Apache-2.0 |
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefor... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefore a list of zeros is returned.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`,... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/xglm/tokenization_xglm_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xglm/tokenization_xglm_fast.py | Apache-2.0 |
def get_masks(slen, lengths, causal, padding_mask=None):
"""
Generate hidden states mask, and optionally an attention mask.
"""
bs = shape_list(lengths)[0]
if padding_mask is not None:
mask = padding_mask
else:
# assert lengths.max().item() <= slen
alen = tf.range(slen, d... |
Generate hidden states mask, and optionally an attention mask.
| get_masks | python | huggingface/transformers | src/transformers/models/xlm/modeling_tf_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_tf_xlm.py | Apache-2.0 |
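The `get_masks` rows above build two boolean masks from sequence lengths: a padding mask over real positions, and optionally a causal attention mask restricted to those positions. A NumPy sketch of that logic (boolean masks here, whereas the model applies them as additive biases):

```python
import numpy as np

def get_masks(slen, lengths, causal):
    """Return (padding mask, attention mask) from per-sequence lengths."""
    alen = np.arange(slen)
    # mask[b, t] is True for real (non-padding) positions
    mask = alen[None, :] < lengths[:, None]
    if causal:
        # attn[b, i, j]: token i may attend to token j iff j <= i and j is real
        attn_mask = (alen[None, None, :] <= alen[None, :, None]) & mask[:, None, :]
    else:
        attn_mask = mask
    return mask, attn_mask
```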
def call(self, input, mask, kv, cache, head_mask, output_attentions, training=False):
"""
Self-attention (if kv is None) or attention over source sentence (provided by kv).
"""
# Input is (bs, qlen, dim)
# Mask is (bs, klen) (non-causal) or (bs, klen, klen)
bs, qlen, dim ... |
Self-attention (if kv is None) or attention over source sentence (provided by kv).
| call | python | huggingface/transformers | src/transformers/models/xlm/modeling_tf_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_tf_xlm.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
langs: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
... |
labels (`tf.Tensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`config.num_l... | call | python | huggingface/transformers | src/transformers/models/xlm/modeling_tf_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_tf_xlm.py | Apache-2.0 |
def dummy_inputs(self):
"""
Dummy inputs to build the network.
Returns:
tf.Tensor with dummy inputs
"""
# Sometimes XLM has language embeddings so don't forget to build them as well if needed
if self.config.use_lang_emb and self.config.n_langs > 1:
... |
Dummy inputs to build the network.
Returns:
tf.Tensor with dummy inputs
| dummy_inputs | python | huggingface/transformers | src/transformers/models/xlm/modeling_tf_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_tf_xlm.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
langs: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
... |
labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
| call | python | huggingface/transformers | src/transformers/models/xlm/modeling_tf_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_tf_xlm.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
langs: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
... |
start_positions (`tf.Tensor` of shape `(batch_size,)`, *optional*):
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence
... | call | python | huggingface/transformers | src/transformers/models/xlm/modeling_tf_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_tf_xlm.py | Apache-2.0 |
def get_masks(slen, lengths, causal, padding_mask=None):
"""
Generate hidden states mask, and optionally an attention mask.
"""
alen = torch.arange(slen, dtype=torch.long, device=lengths.device)
if padding_mask is not None:
mask = padding_mask
else:
assert lengths.max().item() <=... |
Generate hidden states mask, and optionally an attention mask.
| get_masks | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0 |
def forward(
self, hidden_states: torch.FloatTensor, p_mask: Optional[torch.FloatTensor] = None
) -> torch.FloatTensor:
"""
Args:
hidden_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`):
The final hidden states of the model.
p... |
Args:
hidden_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`):
The final hidden states of the model.
p_mask (`torch.FloatTensor` of shape `(batch_size, seq_len)`, *optional*):
Mask for tokens at invalid position, such as query an... | forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.FloatTensor,
start_states: Optional[torch.FloatTensor] = None,
start_positions: Optional[torch.LongTensor] = None,
p_mask: Optional[torch.FloatTensor] = None,
) -> torch.FloatTensor:
"""
Args:
hidden_states (... |
Args:
hidden_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`):
The final hidden states of the model.
start_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`, *optional*):
The hidden states of the first tok... | forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.FloatTensor,
start_states: Optional[torch.FloatTensor] = None,
start_positions: Optional[torch.LongTensor] = None,
cls_index: Optional[torch.LongTensor] = None,
) -> torch.FloatTensor:
"""
Args:
hidden_states... |
Args:
hidden_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`):
The final hidden states of the model.
start_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`, *optional*):
The hidden states of the first tok... | forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.FloatTensor,
start_positions: Optional[torch.LongTensor] = None,
end_positions: Optional[torch.LongTensor] = None,
cls_index: Optional[torch.LongTensor] = None,
is_impossible: Optional[torch.LongTensor] = None,
p_mask: Optio... |
hidden_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`):
Final hidden states of the model on the sequence tokens.
start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Positions of the first token for the labeled span.
end_p... | forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0 |
def forward(
self, hidden_states: torch.FloatTensor, cls_index: Optional[torch.LongTensor] = None
) -> torch.FloatTensor:
"""
Compute a single vector summary of a sequence hidden states.
Args:
hidden_states (`torch.FloatTensor` of shape `[batch_size, seq_len, hidden_size... |
Compute a single vector summary of a sequence hidden states.
Args:
hidden_states (`torch.FloatTensor` of shape `[batch_size, seq_len, hidden_size]`):
The hidden states of the last layer.
cls_index (`torch.LongTensor` of shape `[batch_size]` or `[batch_size, ...]... | forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0 |
def forward(self, input, mask, kv=None, cache=None, head_mask=None, output_attentions=False):
"""
Self-attention (if kv is None) or attention over source sentence (provided by kv).
"""
# Input is (bs, qlen, dim)
# Mask is (bs, klen) (non-causal) or (bs, klen, klen)
bs, ql... |
Self-attention (if kv is None) or attention over source sentence (provided by kv).
| forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0 |
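The "self-attention if kv is None" pattern in the row above reduces to letting the key/value source default to the query input. A bare-bones single-head sketch with no learned Q/K/V projections (those, plus masking and multi-head reshapes, are what the real module adds):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, kv=None):
    # self-attention if kv is None, otherwise attention over the source sequence
    kv = x if kv is None else kv
    scores = x @ kv.T / np.sqrt(x.shape[-1])  # (qlen, klen)
    return softmax(scores) @ kv               # (qlen, dim)
```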
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
langs: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
lengths: Optional[torch.Te... |
langs (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provide... | forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0 |
def forward(self, x, y=None):
"""Compute the loss, and optionally the scores."""
outputs = ()
if self.asm is False:
scores = self.proj(x)
outputs = (scores,) + outputs
if y is not None:
loss = nn.functional.cross_entropy(scores.view(-1, self.n_... | Compute the loss, and optionally the scores. | forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
langs: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
lengths: Optional[torch.Te... |
langs (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
language ids which can be obtained from the language names by using two conversion mappings provide... | forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
langs: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
lengths: Optional[torch.Te... |
langs (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
language ids which can be obtained from the language names by using two conversion mappings provide... | forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
langs: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
lengths: Optional[torch.Te... |
langs (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
language ids which can be obtained from the language names by using two conversion mappings provide... | forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
langs: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
lengths: Optional[torch.Te... |
langs (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
language ids which can be obtained from the language names by using two conversion mappings provide... | forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0
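The `langs` tensor documented in the rows above is typically just a single language id repeated over the sequence length. A minimal sketch — the `lang2id` values here are illustrative placeholders, not the real mapping shipped with the model:

```python
# Hypothetical lang2id mapping; the real one comes from the tokenizer/config.
lang2id = {"en": 0, "fr": 1}

def make_langs(language, seq_len):
    # One language id per input token, as expected by the `langs` argument.
    return [lang2id[language]] * seq_len

print(make_langs("fr", 4))  # [1, 1, 1, 1]
```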
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
langs: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
lengths: Optional[torch.Te... |
langs (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
language ids which can be obtained from the language names by using two conversion mappings provide... | forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
langs: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
lengths: Optional[torch.Te... |
input_ids (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
[`PreTrainedTokenizer.__call__`] for details.
... | forward | python | huggingface/transformers | src/transformers/models/xlm/modeling_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/modeling_xlm.py | Apache-2.0 |
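Multiple-choice inputs use the `(batch_size, num_choices, sequence_length)` layout described above; models commonly flatten the choice axis before running the encoder and restore it afterwards to score each choice. A self-contained sketch with plain lists:

```python
def flatten_choices(input_ids):
    # (batch, num_choices, seq_len) -> (batch * num_choices, seq_len);
    # the choice axis is restored after encoding to score each choice.
    batch_size, num_choices = len(input_ids), len(input_ids[0])
    flat = [row for choice_set in input_ids for row in choice_set]
    return flat, (batch_size, num_choices)

flat, shape = flatten_choices([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print(flat)   # [[1, 2], [3, 4], [5, 6], [7, 8]]
print(shape)  # (2, 2)
```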
def get_pairs(word):
"""
Return the set of symbol pairs in a word. A word is represented as a tuple of symbols (symbols being variable-length
strings)
"""
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs |
Return the set of symbol pairs in a word. A word is represented as a tuple of symbols (symbols being variable-length
strings)
| get_pairs | python | huggingface/transformers | src/transformers/models/xlm/tokenization_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/tokenization_xlm.py | Apache-2.0 |
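A quick usage sketch of the pair-extraction helper above (reimplemented here so the snippet is self-contained): BPE tokenizers repeatedly merge the most frequent of these adjacent symbol pairs. The `</w>` end-of-word marker is one common convention, shown for illustration:

```python
def get_pairs(word):
    # Collect every pair of adjacent symbols in the word tuple.
    pairs = set()
    prev_char = word[0]
    for char in word[1:]:
        pairs.add((prev_char, char))
        prev_char = char
    return pairs

# A word is a tuple of symbols, e.g. "low" split into characters with an
# end-of-word marker appended to the final symbol.
print(sorted(get_pairs(("l", "o", "w</w>"))))  # [('l', 'o'), ('o', 'w</w>')]
```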
def lowercase_and_remove_accent(text):
"""
Lowercases and strips accents from a piece of text based on
https://github.com/facebookresearch/XLM/blob/master/tools/lowercase_and_remove_accent.py
"""
text = " ".join(text)
text = text.lower()
text = unicodedata.normalize("NFD", text)
output = ... |
Lowercases and strips accents from a piece of text based on
https://github.com/facebookresearch/XLM/blob/master/tools/lowercase_and_remove_accent.py
| lowercase_and_remove_accent | python | huggingface/transformers | src/transformers/models/xlm/tokenization_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/tokenization_xlm.py | Apache-2.0 |
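The accent-stripping step above relies on Unicode NFD decomposition: combining accent marks (category `Mn`) become separate codepoints that can simply be dropped. A minimal self-contained sketch of that approach (not the exact library function):

```python
import unicodedata

def lowercase_and_remove_accent_sketch(tokens):
    # NFD decomposition separates base characters from combining accents
    # (category "Mn"), which are then dropped.
    text = " ".join(tokens).lower()
    text = unicodedata.normalize("NFD", text)
    stripped = "".join(c for c in text if unicodedata.category(c) != "Mn")
    return stripped.split(" ")

print(lowercase_and_remove_accent_sketch(["Héllo", "Wörld"]))  # ['hello', 'world']
```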
def remove_non_printing_char(text):
"""
Port of https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/remove-non-printing-char.perl
"""
output = []
for char in text:
cat = unicodedata.category(char)
if cat.startswith("C"):
continue
output.append(... |
Port of https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/remove-non-printing-char.perl
| remove_non_printing_char | python | huggingface/transformers | src/transformers/models/xlm/tokenization_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/tokenization_xlm.py | Apache-2.0 |
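The Moses port above filters on Unicode general categories: everything in the `C*` family (control, format, surrogate, private-use, unassigned) is removed. A self-contained sketch:

```python
import unicodedata

def remove_non_printing_char_sketch(text):
    # Unicode general categories starting with "C" (Cc control, Cf format, ...)
    # are dropped; every other character is kept verbatim.
    return "".join(c for c in text if not unicodedata.category(c).startswith("C"))

# "\x00" is a control char (Cc) and "\u200b" (zero-width space) is format (Cf).
print(remove_non_printing_char_sketch("ab\x00c\u200bd"))  # abcd
```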
def romanian_preprocessing(text):
"""Sennrich's WMT16 scripts for Romanian preprocessing, used by model `FacebookAI/xlm-mlm-enro-1024`"""
# https://github.com/rsennrich/wmt16-scripts/blob/master/preprocess/normalise-romanian.py
text = text.replace("\u015e", "\u0218").replace("\u015f", "\u0219")
text = t... | Sennrich's WMT16 scripts for Romanian preprocessing, used by model `FacebookAI/xlm-mlm-enro-1024` | romanian_preprocessing | python | huggingface/transformers | src/transformers/models/xlm/tokenization_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/tokenization_xlm.py | Apache-2.0 |
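The replacements above map the legacy cedilla codepoints (Ş/ş, Ţ/ţ) to the correct comma-below forms (Ș/ș, Ț/ț), following Sennrich's WMT16 `normalise-romanian` script. A self-contained sketch of that normalisation step:

```python
def normalise_romanian(text):
    # Cedilla -> comma-below for both S and T, upper and lower case.
    for cedilla, comma in (("\u015e", "\u0218"), ("\u015f", "\u0219"),
                           ("\u0162", "\u021a"), ("\u0163", "\u021b")):
        text = text.replace(cedilla, comma)
    return text

print(normalise_romanian("\u015fcoal\u0103"))  # școală, with comma-below ș
```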
def _tokenize(self, text, lang="en", bypass_tokenizer=False):
"""
Tokenize a string given a language code. For Chinese, Japanese and Thai, we use a language-specific tokenizer.
Otherwise, we use Moses.
Details of tokenization:
- [sacremoses](https://github.com/alvations/sacre... |
Tokenize a string given a language code. For Chinese, Japanese and Thai, we use a language-specific tokenizer.
Otherwise, we use Moses.
Details of tokenization:
- [sacremoses](https://github.com/alvations/sacremoses): port of Moses
- Install with `pip install sacremoses`... | _tokenize | python | huggingface/transformers | src/transformers/models/xlm/tokenization_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/tokenization_xlm.py | Apache-2.0 |
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLM sequence has t... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLM sequence has the following format:
- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s> B </s>`
Args:
token_ids... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/xlm/tokenization_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/tokenization_xlm.py | Apache-2.0 |
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*)... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/xlm/tokenization_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/tokenization_xlm.py | Apache-2.0 |
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLM sequence
pair mask has the following format:
... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLM sequence
pair mask has the following format:
```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
```
If `token_ids_1` is `None`... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/xlm/tokenization_xlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm/tokenization_xlm.py | Apache-2.0 |
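The 0/1 mask shown in the row above can be sketched without the tokenizer class; the `cls`/`sep` ids below are placeholders for `<s>`/`</s>`, not the real vocabulary values:

```python
def create_token_type_ids_sketch(token_ids_0, token_ids_1=None):
    cls, sep = [0], [1]  # placeholder ids for <s> and </s>
    if token_ids_1 is None:
        return [0] * len(cls + token_ids_0 + sep)
    # zeros cover "<s> A </s>", ones cover "B </s>"
    return [0] * len(cls + token_ids_0 + sep) + [1] * len(token_ids_1 + sep)

print(create_token_type_ids_sketch([5, 6], [7, 8, 9]))  # [0, 0, 0, 0, 1, 1, 1, 1]
```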
def create_position_ids_from_input_ids(input_ids, padding_idx):
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
input_ids: jnp.ndarray
padding... |
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
input_ids: jnp.ndarray
padding_idx: int
Returns: jnp.ndarray
| create_position_ids_from_input_ids | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py | Apache-2.0 |
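The fairseq-style numbering documented above can be sketched without any framework: non-pad tokens are numbered cumulatively starting at `padding_idx + 1`, while pad slots keep `padding_idx` itself (equivalent to `cumsum(mask) * mask + padding_idx`):

```python
def create_position_ids_sketch(input_ids, padding_idx):
    # Pad slots keep padding_idx; each non-pad token gets the next position.
    position_ids, pos = [], padding_idx
    for tok in input_ids:
        if tok == padding_idx:
            position_ids.append(padding_idx)
        else:
            pos += 1
            position_ids.append(pos)
    return position_ids

print(create_position_ids_sketch([5, 6, 1, 7], padding_idx=1))  # [2, 3, 1, 4]
```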
def _concatenate_to_cache(self, key, value, query, attention_mask):
"""
This function takes projected key, value states from a single input token and concatenates the states to cached
states from previous steps. This function is slightly adapted from the official Flax repository:
https:/... |
This function takes projected key, value states from a single input token and concatenates the states to cached
states from previous steps. This function is slightly adapted from the official Flax repository:
https://github.com/google/flax/blob/491ce18759622506588784b4fca0e4bf05f8c8cd/flax/line... | _concatenate_to_cache | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py | Apache-2.0 |
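A minimal, framework-free sketch of the cache update described above: each decoding step appends its key/value states and advances the cache index, and attention only looks at the filled slots. The real Flax code writes into preallocated arrays and adjusts the causal mask, which this list-based toy deliberately simplifies:

```python
def update_cache(cache, new_key, new_value):
    # Append this step's key/value states and advance the write index.
    cache["keys"].append(new_key)
    cache["values"].append(new_value)
    cache["index"] += 1
    # Only the filled slots participate in attention.
    return cache["keys"][: cache["index"]], cache["values"][: cache["index"]]

cache = {"keys": [], "values": [], "index": 0}
update_cache(cache, "k0", "v0")
keys, values = update_cache(cache, "k1", "v1")
print(keys)  # ['k0', 'k1']
```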
def init_cache(self, batch_size, max_length):
r"""
Args:
batch_size (`int`):
batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
max_length (`int`):
maximum possible length for auto-regressive decodin... |
Args:
batch_size (`int`):
batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
max_length (`int`):
maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized
... | init_cache | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py | Apache-2.0 |
def create_position_ids_from_input_ids(self, input_ids, past_key_values_length=0):
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding
symbols are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
... |
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding
symbols are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
input_ids: tf.Tensor
Returns: tf.Tensor
| create_position_ids_from_input_ids | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | Apache-2.0 |
def call(
self,
input_ids=None,
position_ids=None,
token_type_ids=None,
inputs_embeds=None,
past_key_values_length=0,
training=False,
):
"""
Applies embeddings based on the input tensors.
Returns:
final_embeddings (`tf.Tensor`):... |
Applies embeddings based on the input tensors.
Returns:
final_embeddings (`tf.Tensor`): output embedding tensor.
| call | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None = None,
... |
encoder_hidden_states (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (`tf.Tens... | call | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None = None,
... |
labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see the `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the
... | call | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | Apache-2.0 |
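The `-100` convention above matches PyTorch's default `ignore_index` for cross-entropy. A sketch of building masked-LM labels where only the masked positions contribute to the loss (position indices here are illustrative):

```python
def build_mlm_labels(input_ids, masked_positions):
    # Keep the original token id at masked positions (the prediction target)
    # and set every other position to -100 so the loss ignores it.
    return [tok if i in masked_positions else -100
            for i, tok in enumerate(input_ids)]

print(build_mlm_labels([5, 6, 7], {1}))  # [-100, 6, -100]
```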
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None = None,
... |
encoder_hidden_states (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (`tf.Tens... | call | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None = None,
... |
labels (`tf.Tensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`config.num_l... | call | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None = None,
... |
labels (`tf.Tensor` of shape `(batch_size,)`, *optional*):
Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices]`
where `num_choices` is the size of the second dimension of the input tensors. (See `input_ids` above)
| call | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None = None,
... |
labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
| call | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None = None,
... |
start_positions (`tf.Tensor` of shape `(batch_size,)`, *optional*):
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
... | call | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py | Apache-2.0 |
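The span clamping described above can be sketched in one line: any start/end position outside `[0, sequence_length]` is pulled back into range, so out-of-sequence answers can be treated as the loss's ignore index:

```python
def clamp_position(pos, sequence_length):
    # Pull out-of-range span positions back into [0, sequence_length].
    return min(max(pos, 0), sequence_length)

print(clamp_position(10, 5))  # 5
print(clamp_position(-2, 5))  # 0
print(clamp_position(3, 5))   # 3
```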
def create_position_ids_from_inputs_embeds(self, inputs_embeds):
"""
We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.
Args:
inputs_embeds: torch.Tensor
Returns: torch.Tensor
"""
input_shape = inp... |
We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.
Args:
inputs_embeds: torch.Tensor
Returns: torch.Tensor
| create_position_ids_from_inputs_embeds | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | Apache-2.0 |
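When only embeddings are available, padding cannot be detected, so the function above falls back to purely sequential ids. A sketch of that fallback:

```python
def position_ids_from_embeds_sketch(seq_length, padding_idx):
    # Without token ids we cannot tell which slots are padding, so every
    # row simply gets padding_idx+1 ... padding_idx+seq_length.
    return list(range(padding_idx + 1, padding_idx + 1 + seq_length))

print(position_ids_from_embeds_sketch(4, padding_idx=1))  # [2, 3, 4, 5]
```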
def __init__(self, config, add_pooling_layer=True):
r"""
add_pooling_layer (bool, *optional*, defaults to `True`):
Whether to add a pooling layer
"""
super().__init__(config)
self.config = config
self.embeddings = XLMRobertaEmbeddings(config)
self.enc... |
add_pooling_layer (bool, *optional*, defaults to `True`):
Whether to add a pooling layer
| __init__ | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
... |
token_type_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,1]`:
- 0 corresponds to a *sentence A* token,
- 1 corresponds to a *sentence B* t... | forward | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
... |
token_type_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,1]`:
- 0 corresponds to a *sentence A* token,
- 1 corresponds to a *sentence B* t... | forward | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
... |
token_type_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,1]`:
- 0 corresponds to a *sentence A* token,
- 1 corresponds to a *sentence B* t... | forward | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
labels: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head... |
input_ids (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
[`PreTrainedTokenizer.__call__`] for details.
... | forward | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
... |
token_type_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,1]`:
- 0 corresponds to a *sentence A* token,
- 1 corresponds to a *sentence B* t... | forward | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
... |
token_type_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,1]`:
- 0 corresponds to a *sentence A* token,
- 1 corresponds to a *sentence B* t... | forward | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | Apache-2.0 |
def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
x: torch.Ten... |
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
input_ids: torch.Tensor
Returns: torch.Tensor
| create_position_ids_from_input_ids | python | huggingface/transformers | src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py | Apache-2.0 |
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequen... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequence has the following format:
- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s></s> B </s>`
Args:
... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py | Apache-2.0 |
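The pair format above uses a doubled separator (`</s></s>`) between the two segments. A sketch of the construction with placeholder ids for `<s>`/`</s>`:

```python
def build_inputs_sketch(token_ids_0, token_ids_1=None):
    bos, eos = [0], [2]  # placeholder ids for <s> and </s>
    if token_ids_1 is None:
        return bos + token_ids_0 + eos
    # Note the doubled separator between the two segments: </s></s>
    return bos + token_ids_0 + eos + eos + token_ids_1 + eos

print(build_inputs_sketch([5], [6]))  # [0, 5, 2, 2, 6, 2]
```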
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*)... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py | Apache-2.0 |
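The mask returned by the method above flags special tokens with 1 and sequence tokens with 0. A sketch following the `<s> A </s>` / `<s> A </s></s> B </s>` layout used by this tokenizer family:

```python
def special_tokens_mask_sketch(token_ids_0, token_ids_1=None):
    # 1 flags an added special token, 0 a sequence token.
    if token_ids_1 is None:
        return [1] + [0] * len(token_ids_0) + [1]
    return [1] + [0] * len(token_ids_0) + [1, 1] + [0] * len(token_ids_1) + [1]

print(special_tokens_mask_sketch([5, 6]))       # [1, 0, 0, 1]
print(special_tokens_mask_sketch([5], [6, 7]))  # [1, 0, 1, 1, 0, 0, 1]
```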
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefor... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefore a list of zeros is returned.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`,... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py | Apache-2.0 |
def _convert_token_to_id(self, token):
"""Converts a token (str) in an id using the vocab."""
if token in self.fairseq_tokens_to_ids:
return self.fairseq_tokens_to_ids[token]
spm_id = self.sp_model.PieceToId(token)
# Need to return unknown token if the SP model returned 0
... | Converts a token (str) into an id using the vocab. | _convert_token_to_id | python | huggingface/transformers | src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py | Apache-2.0
def _convert_id_to_token(self, index):
"""Converts an index (integer) in a token (str) using the vocab."""
if index in self.fairseq_ids_to_tokens:
return self.fairseq_ids_to_tokens[index]
return self.sp_model.IdToPiece(index - self.fairseq_offset) | Converts an index (integer) into a token (str) using the vocab. | _convert_id_to_token | python | huggingface/transformers | src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py | Apache-2.0
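The two conversion methods above reconcile a fairseq-style dictionary (whose first ids are reserved specials) with a SentencePiece model (whose piece ids are shifted by a fixed offset, and which returns 0 for unknown pieces). A sketch of the id alignment — all concrete values below are illustrative, not the real vocabulary:

```python
# Illustrative values only; the real ids come from the fairseq dictionary
# and the trained SentencePiece model.
fairseq_tokens_to_ids = {"<s>": 0, "<pad>": 1, "</s>": 2, "<unk>": 3}
fairseq_offset = 1  # shifts SentencePiece ids past the extra fairseq special

def token_to_id_sketch(token, spm_piece_to_id):
    if token in fairseq_tokens_to_ids:
        return fairseq_tokens_to_ids[token]
    spm_id = spm_piece_to_id.get(token, 0)
    # SentencePiece returns 0 for unknown pieces, so 0 must map to <unk>.
    return spm_id + fairseq_offset if spm_id else fairseq_tokens_to_ids["<unk>"]

spm = {"▁hello": 42}
print(token_to_id_sketch("▁hello", spm))  # 43
print(token_to_id_sketch("xyzzy", spm))   # 3  (<unk>)
print(token_to_id_sketch("<s>", spm))     # 0
```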
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequen... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequence has the following format:
- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s></s> B </s>`
Args:
... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py | Apache-2.0 |
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefor... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefore a list of zeros is returned.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`,... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py | Apache-2.0 |
def create_position_ids_from_inputs_embeds(self, inputs_embeds):
"""
We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.
Args:
inputs_embeds: torch.Tensor
Returns: torch.Tensor
"""
input_shape = inp... |
We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.
Args:
inputs_embeds: torch.Tensor
Returns: torch.Tensor
| create_position_ids_from_inputs_embeds | python | huggingface/transformers | src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | Apache-2.0 |
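When only embeddings are given, padding positions cannot be detected, so the positions are just a sequential range starting at `padding_idx + 1`. A plain-Python sketch of that behavior (the tensor version would use `torch.arange`):

```python
def create_position_ids_from_inputs_embeds(seq_length, padding_idx=1):
    """Sequential position ids: padding_idx+1, padding_idx+2, ..."""
    return list(range(padding_idx + 1, seq_length + padding_idx + 1))
```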
def __init__(self, config, add_pooling_layer=True):
r"""
add_pooling_layer (bool, *optional*, defaults to `True`):
Whether to add a pooling layer
"""
super().__init__(config)
self.config = config
self.embeddings = XLMRobertaXLEmbeddings(config)
self.e... |
add_pooling_layer (bool, *optional*, defaults to `True`):
Whether to add a pooling layer
| __init__ | python | huggingface/transformers | src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
`[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` ... | forward | python | huggingface/transformers | src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | Apache-2.0 |
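The `-100` convention above means those positions contribute nothing to the loss. A hypothetical pure-Python sketch of that masking (the real models delegate to `CrossEntropyLoss(ignore_index=-100)`):

```python
import math

def masked_lm_loss(log_probs, labels, ignore_index=-100):
    """Average negative log-likelihood, skipping positions labeled -100.

    log_probs: one dict per position, mapping token id -> log-probability.
    """
    terms = [-lp[y] for lp, y in zip(log_probs, labels) if y != ignore_index]
    return sum(terms) / len(terms)
```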
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`confi... | forward | python | huggingface/transformers | src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | Apache-2.0 |
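The regression-vs-classification choice above can be sketched as a small dispatch helper; `pick_loss` is a hypothetical name, and the dtype check approximates how Transformers infers `problem_type` from integer vs floating labels:

```python
def pick_loss(num_labels, labels_dtype):
    """One label -> regression (MSE); integer labels -> single-label
    classification; floating multi-label targets -> BCE with logits."""
    if num_labels == 1:
        return "mse"
    if labels_dtype in ("int32", "int64"):
        return "cross_entropy"
    return "bce_with_logits"
```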
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
labels: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head... |
input_ids (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`):
Indices of input sequence tokens in the vocabulary. Indices can be obtained using [`AutoTokenizer`]. See
[`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input... | forward | python | huggingface/transformers | src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
| forward | python | huggingface/transformers | src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | Apache-2.0 |
def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
x: torch.Ten... |
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
input_ids: torch.Tensor
Returns: torch.Tensor
| create_position_ids_from_input_ids | python | huggingface/transformers | src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py | Apache-2.0 |
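The rule above (padding keeps `padding_idx`; real tokens get `padding_idx` plus their 1-based rank among non-padding tokens) can be sketched over plain lists, mirroring fairseq's `utils.make_positions`:

```python
def create_position_ids_from_input_ids(input_ids, padding_idx, past_len=0):
    """Padding tokens keep padding_idx; others get incremental positions."""
    out, count = [], 0
    for tok in input_ids:
        if tok == padding_idx:
            out.append(padding_idx)
        else:
            count += 1
            out.append(padding_idx + count + past_len)
    return out
```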
def rel_shift(self, x, klen=-1):
"""perform relative shift to form the relative attention score."""
x_size = shape_list(x)
x = tf.reshape(x, (x_size[1], x_size[0], x_size[2], x_size[3]))
x = x[1:, ...]
x = tf.reshape(x, (x_size[0], x_size[1] - 1, x_size[2], x_size[3]))
x... | perform relative shift to form the relative attention score. | rel_shift | python | huggingface/transformers | src/transformers/models/xlnet/modeling_tf_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_tf_xlnet.py | Apache-2.0 |
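The reshape trick in `rel_shift` above can be reproduced over plain nested lists, as a sketch of the 2-D case: flatten, drop the first "row" of the transposed `(klen, qlen)` view, then re-chunk to `(qlen, klen - 1)`:

```python
def rel_shift(x, klen=None):
    """Plain-list sketch of the relative-shift reshape trick (2-D case)."""
    qlen, k = len(x), len(x[0])
    flat = [v for row in x for v in row]       # reshape (qlen, k) -> flat
    flat = flat[qlen:]                         # drop first row of the (k, qlen) view
    out = [flat[i * (k - 1):(i + 1) * (k - 1)] for i in range(qlen)]
    if klen is not None:                       # optional final truncation
        out = [row[:klen] for row in out]
    return out
```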
def create_mask(self, qlen, mlen):
"""
Creates causal attention mask. Float mask where 1.0 indicates masked, 0.0 indicates not-masked.
Args:
        qlen: length of the current query segment
        mlen: length of the cached memory segment
```
same_length=False: same_lengt... |
Creates causal attention mask. Float mask where 1.0 indicates masked, 0.0 indicates not-masked.
Args:
        qlen: length of the current query segment
        mlen: length of the cached memory segment
```
same_length=False: same_length=True:
<mlen > < qlen > ... | create_mask | python | huggingface/transformers | src/transformers/models/xlnet/modeling_tf_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_tf_xlnet.py | Apache-2.0 |
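For the `same_length=False` case described above, the mask can be sketched with plain lists: `1.0` marks positions a query may not attend to, so all memory and non-future positions are `0.0` (a simplification of the TF implementation):

```python
def create_mask(qlen, mlen):
    """Causal mask of shape (qlen, mlen + qlen); 1.0 means masked."""
    return [
        [0.0] * mlen + [1.0 if j > i else 0.0 for j in range(qlen)]
        for i in range(qlen)
    ]
```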
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
mems: np.ndarray | tf.Tensor | None = None,
perm_mask: np.ndarray | tf.Tensor | None = None,
target_mapping: np.ndarray | tf.Tensor | None = None,
toke... |
labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the cross entropy classification loss. Indices should be in `[0, ...,
config.vocab_size - 1]`.
Return:
Examples:
```python
>>> import tensorflow as tf
... | call | python | huggingface/transformers | src/transformers/models/xlnet/modeling_tf_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_tf_xlnet.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
mems: np.ndarray | tf.Tensor | None = None,
perm_mask: np.ndarray | tf.Tensor | None = None,
target_mapping: np.ndarray | tf.Tensor | None = None,
toke... |
labels (`tf.Tensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`config.num_l... | call | python | huggingface/transformers | src/transformers/models/xlnet/modeling_tf_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_tf_xlnet.py | Apache-2.0 |