code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def _get_feat_extract_output_lengths(self, input_lengths: torch.LongTensor):
"""
Computes the output length of the convolutional layers
"""
    for _ in range(self.config.num_conv_layers):
input_lengths = (input_lengths - 1) // 2 + 1
return input_lengths |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_speech_to_text.py | Apache-2.0 |
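The length recurrence above can be checked in isolation. The sketch below is a plain-Python restatement (with `num_conv_layers` as a free parameter instead of a config attribute): each strided convolution maps a length `L` to `(L - 1) // 2 + 1`, roughly halving the sequence per layer.

```python
def conv_output_lengths(input_length: int, num_conv_layers: int) -> int:
    # each strided conv layer maps length L to (L - 1) // 2 + 1 (stride 2)
    for _ in range(num_conv_layers):
        input_length = (input_length - 1) // 2 + 1
    return input_length
```

For example, with two conv layers an input of 100 frames shrinks to 50 and then to 25.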
def forward(
self,
input_features,
attention_mask=None,
head_mask=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
r"""
Args:
input_features (`torch.LongTensor` of shape `(batch_size, sequence_length, fe... |
Args:
input_features (`torch.LongTensor` of shape `(batch_size, sequence_length, feature_size)`):
Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be
obtained by loading a `.flac` or `.wav` audio file into an array of typ... | forward | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_speech_to_text.py | Apache-2.0 |
def forward(
self,
input_ids=None,
attention_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
head_mask=None,
cross_attn_head_mask=None,
past_key_values=None,
inputs_embeds=None,
use_cache=None,
output_attentions=... |
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using [`Speech2TextTokenizer`]. See [... | forward | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_speech_to_text.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.Tensor] = None,
... |
input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length, feature_size)`):
Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be obtained
by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `num... | forward | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_speech_to_text.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.Tensor] = None,
... |
input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length, feature_size)`):
Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be obtained
by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `num... | forward | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_speech_to_text.py | Apache-2.0 |
def _make_causal_mask(input_ids_shape: tf.TensorShape, past_key_values_length: int = 0):
"""
    Make the causal mask used for uni-directional (causal) self-attention.
"""
bsz = input_ids_shape[0]
tgt_len = input_ids_shape[1]
mask = tf.ones((tgt_len, tgt_len)) * LARGE_NEGATIVE
mask_cond = tf.range(shape_list(... |
    Make the causal mask used for uni-directional (causal) self-attention.
| _make_causal_mask | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | Apache-2.0 |
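The effect of the mask is easier to see without the TF machinery. Below is a hedged plain-Python sketch of the same idea; `LARGE_NEGATIVE` stands in for the library's constant (its exact value is an assumption here), and the mask gives query position `i` access to key positions up to `past_key_values_length + i` while pushing future positions toward `-inf` before the softmax.

```python
LARGE_NEGATIVE = -1e8  # assumed stand-in for the library's constant

def make_causal_mask(tgt_len: int, past_key_values_length: int = 0):
    src_len = tgt_len + past_key_values_length
    # row i (query position) may attend to keys at absolute positions <= past + i
    return [
        [0.0 if j <= past_key_values_length + i else LARGE_NEGATIVE for j in range(src_len)]
        for i in range(tgt_len)
    ]
```

Adding this bias to the attention scores zeroes out the softmax weight of every future position.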
def _get_embedding(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None) -> tf.Tensor:
"""
Build sinusoidal embeddings. This matches the implementation in tensor2tensor, but differs slightly from the
description in Section 3.5 of "Attention Is All You Need".
"""
... |
Build sinusoidal embeddings. This matches the implementation in tensor2tensor, but differs slightly from the
description in Section 3.5 of "Attention Is All You Need".
| _get_embedding | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | Apache-2.0 |
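The "differs slightly" remark refers to the layout: the tensor2tensor/fairseq variant puts all sine values in the first half of the embedding dimension and all cosines in the second half, rather than interleaving them as in the paper. A minimal pure-Python sketch of that layout (function name and list-of-lists return are illustrative, not the library's API):

```python
import math

def sinusoidal_embeddings(num_embeddings, embedding_dim, padding_idx=None):
    half_dim = embedding_dim // 2
    scale = math.log(10000) / (half_dim - 1)
    inv_freq = [math.exp(-scale * i) for i in range(half_dim)]
    table = []
    for pos in range(num_embeddings):
        # sines for the first half_dim dims, cosines for the second half
        row = [math.sin(pos * f) for f in inv_freq] + [math.cos(pos * f) for f in inv_freq]
        if embedding_dim % 2 == 1:
            row.append(0.0)  # zero-pad odd dimensions
        table.append(row)
    if padding_idx is not None:
        table[padding_idx] = [0.0] * len(table[padding_idx])  # padding row is all zeros
    return table
```

Position 0 therefore yields zeros in the sine half and ones in the cosine half, and the padding row is forced to zero.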
def create_position_ids_from_input_ids(
input_ids: tf.Tensor, padding_idx: int, past_key_values_length: Optional[int] = 0
) -> tf.Tensor:
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding
symbols are ignored. This is modifie... |
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding
symbols are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
        input_ids: tf.Tensor
        padding_idx: int
Returns: tf.Tensor
| create_position_ids_from_input_ids | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | Apache-2.0 |
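The semantics (positions start at `padding_idx + 1`, padding tokens keep `padding_idx` as their position, and cached past tokens offset the count) can be sketched for a single sequence in plain Python, without the tensorized cumsum trick used in the real implementation:

```python
def create_position_ids(input_ids, padding_idx, past_key_values_length=0):
    position_ids = []
    n_real = past_key_values_length  # non-padding tokens seen so far, including the cache
    for tok in input_ids:
        if tok == padding_idx:
            position_ids.append(padding_idx)           # padding keeps padding_idx
        else:
            n_real += 1
            position_ids.append(padding_idx + n_real)  # positions begin at padding_idx + 1
    return position_ids
```

For `padding_idx = 1`, the sequence `[5, 6, 1, 7]` gets positions `[2, 3, 1, 4]`: the pad token is skipped by the counter but keeps its sentinel position.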
def call(
self, hidden_states: tf.Tensor, attention_mask: tf.Tensor, layer_head_mask: tf.Tensor, training: bool = False
):
"""
Args:
hidden_states (`tf.Tensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`tf.Tensor`): attention mask of ... |
Args:
hidden_states (`tf.Tensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`tf.Tensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
layer_head_mas... | call | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | Apache-2.0 |
def call(
self,
hidden_states,
attention_mask: tf.Tensor | None = None,
encoder_hidden_states: tf.Tensor | None = None,
encoder_attention_mask: tf.Tensor | None = None,
layer_head_mask: tf.Tensor | None = None,
cross_attn_layer_head_mask: tf.Tensor | None = None,
... |
Args:
hidden_states (`tf.Tensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`tf.Tensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
encoder_hidden... | call | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | Apache-2.0 |
def _get_feat_extract_output_lengths(self, input_lengths: tf.Tensor):
"""
Computes the output length of the convolutional layers
"""
for _ in range(self.config.num_conv_layers):
input_lengths = (input_lengths - 1) // 2 + 1
return input_lengths |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | Apache-2.0 |
def _get_feat_extract_output_lengths(self, input_lengths: tf.Tensor):
"""
Computes the output length of the convolutional layers
"""
for _ in range(self.config.num_conv_layers):
input_lengths = (input_lengths - 1) // 2 + 1
return input_lengths |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | Apache-2.0 |
def call(
self,
input_features=None,
attention_mask=None,
head_mask=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
training=False,
):
"""
Args:
input_features (`tf.Tensor` of shape `(batch_size, s... |
Args:
input_features (`tf.Tensor` of shape `(batch_size, sequence_length, feature_size)`):
Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be
obtained by loading a `.flac` or `.wav` audio file into an array of type `List... | call | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | Apache-2.0 |
def call(
self,
input_ids=None,
inputs_embeds=None,
attention_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
head_mask=None,
cross_attn_head_mask=None,
past_key_values=None,
use_cache=None,
output_attentions=Non... |
Args:
input_ids (`tf.Tensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using [`Speech2TextTokenizer`]. See [`PreTra... | call | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | Apache-2.0 |
def call(
self,
input_features: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
decoder_input_ids: np.ndarray | tf.Tensor | None = None,
decoder_attention_mask: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor |... |
labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
(ma... | call | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py | Apache-2.0 |
def __call__(self, *args, **kwargs):
"""
When used in normal mode, this method forwards all its arguments to Speech2TextFeatureExtractor's
[`~Speech2TextFeatureExtractor.__call__`] and returns its output. If used in the context
[`~Speech2TextProcessor.as_target_processor`] this method fo... |
When used in normal mode, this method forwards all its arguments to Speech2TextFeatureExtractor's
[`~Speech2TextFeatureExtractor.__call__`] and returns its output. If used in the context
[`~Speech2TextProcessor.as_target_processor`] this method forwards all its arguments to Speech2TextTokenizer... | __call__ | python | huggingface/transformers | src/transformers/models/speech_to_text/processing_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/processing_speech_to_text.py | Apache-2.0 |
def as_target_processor(self):
"""
Temporarily sets the tokenizer for processing the input. Useful for encoding the labels when fine-tuning
Speech2Text.
"""
warnings.warn(
"`as_target_processor` is deprecated and will be removed in v5 of Transformers. You can process ... |
Temporarily sets the tokenizer for processing the input. Useful for encoding the labels when fine-tuning
Speech2Text.
| as_target_processor | python | huggingface/transformers | src/transformers/models/speech_to_text/processing_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/processing_speech_to_text.py | Apache-2.0 |
def set_tgt_lang_special_tokens(self, tgt_lang: str) -> None:
"""Reset the special tokens to the target language setting. prefix=[eos, tgt_lang_code] and suffix=[eos]."""
lang_code_id = self.lang_code_to_id[tgt_lang]
    self.prefix_tokens = [lang_code_id] | Reset the special tokens to the target language setting. prefix=[tgt_lang_code] and suffix=[eos]. | set_tgt_lang_special_tokens | python | huggingface/transformers | src/transformers/models/speech_to_text/tokenization_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/tokenization_speech_to_text.py | Apache-2.0 |
def convert_tokens_to_string(self, tokens: List[str]) -> str:
"""Converts a sequence of tokens (strings for sub-words) in a single string."""
current_sub_tokens = []
out_string = ""
for token in tokens:
# make sure that special tokens are not decoded using sentencepiece model... | Converts a sequence of tokens (strings for sub-words) in a single string. | convert_tokens_to_string | python | huggingface/transformers | src/transformers/models/speech_to_text/tokenization_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/tokenization_speech_to_text.py | Apache-2.0 |
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None) -> List[int]:
"""Build model inputs from a sequence by appending eos_token_id."""
if token_ids_1 is None:
return self.prefix_tokens + token_ids_0 + [self.eos_token_id]
# We don't expect to process pairs, but le... | Build model inputs from a sequence by appending eos_token_id. | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/speech_to_text/tokenization_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/tokenization_speech_to_text.py | Apache-2.0 |
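The single-sequence case is fully shown above: the language-code prefix set by `set_tgt_lang_special_tokens` is prepended and `eos` is appended. A minimal standalone sketch (the pair branch is truncated in the row above, so its plain concatenation here is an assumption, as are the placeholder ids):

```python
def build_inputs_with_special_tokens(prefix_tokens, token_ids_0, eos_token_id, token_ids_1=None):
    # single sequence: prefix + tokens + [eos]
    if token_ids_1 is None:
        return prefix_tokens + token_ids_0 + [eos_token_id]
    # pairs are not expected; assume simple concatenation before the final eos
    return prefix_tokens + token_ids_0 + token_ids_1 + [eos_token_id]
```

With a hypothetical language-code id `9` and `eos_token_id=2`, the tokens `[10, 11]` become `[9, 10, 11, 2]`.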
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*)... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/speech_to_text/tokenization_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/tokenization_speech_to_text.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optiona... |
    token_type_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,
1]`:
- 0 corresponds to a *sentence A* token,
- 1 corresponds to a *se... | forward | python | huggingface/transformers | src/transformers/models/splinter/modeling_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/modeling_splinter.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optiona... |
    token_type_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,
1]`:
- 0 corresponds to a *sentence A* token,
- 1 corresponds to a *se... | forward | python | huggingface/transformers | src/transformers/models/splinter/modeling_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/modeling_splinter.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optiona... |
input_ids (`torch.LongTensor` of shape `(batch_size, num_questions, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
[`PreTrainedTokenizer.__call__`] for details.
... | forward | python | huggingface/transformers | src/transformers/models/splinter/modeling_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/modeling_splinter.py | Apache-2.0 |
def whitespace_tokenize(text):
"""Runs basic whitespace cleaning and splitting on a piece of text."""
text = text.strip()
if not text:
return []
tokens = text.split()
return tokens | Runs basic whitespace cleaning and splitting on a piece of text. | whitespace_tokenize | python | huggingface/transformers | src/transformers/models/splinter/tokenization_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/tokenization_splinter.py | Apache-2.0 |
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
        Build model inputs from a pair of sequences for question answering tasks by concatenating and adding special
tokens. A Splinter sequence has the following fo... |
    Build model inputs from a pair of sequences for question answering tasks by concatenating and adding special
tokens. A Splinter sequence has the following format:
- single sequence: `[CLS] X [SEP]`
- pair of sequences for question answering: `[CLS] question_tokens [QUESTION] . [SEP] con... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/splinter/tokenization_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/tokenization_splinter.py | Apache-2.0 |
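The two formats quoted in the docstring can be sketched directly. The token ids below are hypothetical placeholders, not Splinter's real vocabulary ids; only the ordering of special tokens follows the docstring (`[CLS] X [SEP]` for a single sequence, `[CLS] question [QUESTION] . [SEP] context [SEP]` for a pair):

```python
def splinter_build_inputs(ids_0, ids_1=None, *, cls_id=101, sep_id=102, question_id=104, dot_id=119):
    # cls_id / sep_id / question_id / dot_id are illustrative placeholder ids
    if ids_1 is None:
        return [cls_id] + ids_0 + [sep_id]                                    # [CLS] X [SEP]
    return [cls_id] + ids_0 + [question_id, dot_id, sep_id] + ids_1 + [sep_id]  # QA pair
```

The literal "." token after `[QUESTION]` is part of Splinter's recurring-span pretraining format.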
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*)... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/splinter/tokenization_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/tokenization_splinter.py | Apache-2.0 |
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create the token type IDs corresponding to the sequences passed. [What are token type
IDs?](../glossary#token-type-ids)
Should be overridden in... |
Create the token type IDs corresponding to the sequences passed. [What are token type
IDs?](../glossary#token-type-ids)
Should be overridden in a subclass if the model has a special way of building those.
Args:
token_ids_0 (`List[int]`): The first tokenized sequence.
... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/splinter/tokenization_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/tokenization_splinter.py | Apache-2.0 |
def tokenize(self, text, never_split=None):
"""
Basic Tokenization of a piece of text. Split on "white spaces" only, for sub-word tokenization, see
WordPieceTokenizer.
Args:
**never_split**: (*optional*) list of str
Kept for backward compatibility purposes. N... |
Basic Tokenization of a piece of text. Split on "white spaces" only, for sub-word tokenization, see
WordPieceTokenizer.
Args:
**never_split**: (*optional*) list of str
Kept for backward compatibility purposes. Now implemented directly at the base class level (see
... | tokenize | python | huggingface/transformers | src/transformers/models/splinter/tokenization_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/tokenization_splinter.py | Apache-2.0 |
def _run_strip_accents(self, text):
"""Strips accents from a piece of text."""
text = unicodedata.normalize("NFD", text)
output = []
for char in text:
cat = unicodedata.category(char)
if cat == "Mn":
continue
output.append(char)
... | Strips accents from a piece of text. | _run_strip_accents | python | huggingface/transformers | src/transformers/models/splinter/tokenization_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/tokenization_splinter.py | Apache-2.0 |
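The accent-stripping trick relies on NFD normalization: it decomposes an accented character into its base character plus a combining mark (Unicode category `Mn`), so dropping the marks leaves the plain letters. A self-contained sketch of the same technique:

```python
import unicodedata

def strip_accents(text: str) -> str:
    # NFD splits accented characters into base char + combining mark ("Mn")
    return "".join(
        ch for ch in unicodedata.normalize("NFD", text)
        if unicodedata.category(ch) != "Mn"
    )
```

For example, `"héllo wörld"` becomes `"hello world"`.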
def _run_split_on_punc(self, text, never_split=None):
"""Splits punctuation on a piece of text."""
if never_split is not None and text in never_split:
return [text]
chars = list(text)
i = 0
start_new_word = True
output = []
while i < len(chars):
... | Splits punctuation on a piece of text. | _run_split_on_punc | python | huggingface/transformers | src/transformers/models/splinter/tokenization_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/tokenization_splinter.py | Apache-2.0 |
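Since the loop body is truncated above, here is a hedged standalone sketch of the punctuation-splitting technique as commonly implemented in BERT-style tokenizers: every punctuation character becomes its own token, splitting the surrounding text. The ASCII-range check mirrors the usual convention of treating characters like `$` and `^` as punctuation even though Unicode classifies them otherwise.

```python
import unicodedata

def _is_punctuation(ch: str) -> bool:
    cp = ord(ch)
    # ASCII ranges treated as punctuation even when Unicode disagrees (e.g. "$", "^")
    if 33 <= cp <= 47 or 58 <= cp <= 64 or 91 <= cp <= 96 or 123 <= cp <= 126:
        return True
    return unicodedata.category(ch).startswith("P")

def split_on_punc(text: str):
    output = []
    start_new_word = True
    for ch in text:
        if _is_punctuation(ch):
            output.append([ch])      # punctuation becomes its own token
            start_new_word = True
        else:
            if start_new_word:
                output.append([])
            start_new_word = False
            output[-1].append(ch)
    return ["".join(chars) for chars in output]
```

So `"hi,there!"` splits into `["hi", ",", "there", "!"]`.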
def _tokenize_chinese_chars(self, text):
"""Adds whitespace around any CJK character."""
output = []
for char in text:
cp = ord(char)
if self._is_chinese_char(cp):
output.append(" ")
output.append(char)
output.append(" ")
... | Adds whitespace around any CJK character. | _tokenize_chinese_chars | python | huggingface/transformers | src/transformers/models/splinter/tokenization_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/tokenization_splinter.py | Apache-2.0 |
def _is_chinese_char(self, cp):
"""Checks whether CP is the codepoint of a CJK character."""
# This defines a "chinese character" as anything in the CJK Unicode block:
# https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)
#
# Note that the CJK Unicode block is ... | Checks whether CP is the codepoint of a CJK character. | _is_chinese_char | python | huggingface/transformers | src/transformers/models/splinter/tokenization_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/tokenization_splinter.py | Apache-2.0 |
def _clean_text(self, text):
"""Performs invalid character removal and whitespace cleanup on text."""
output = []
for char in text:
cp = ord(char)
if cp == 0 or cp == 0xFFFD or _is_control(char):
continue
if _is_whitespace(char):
... | Performs invalid character removal and whitespace cleanup on text. | _clean_text | python | huggingface/transformers | src/transformers/models/splinter/tokenization_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/tokenization_splinter.py | Apache-2.0 |
def tokenize(self, text):
"""
Tokenizes a piece of text into its word pieces. This uses a greedy longest-match-first algorithm to perform
tokenization using the given vocabulary.
For example, `input = "unaffable"` will return as output `["un", "##aff", "##able"]`.
Args:
... |
Tokenizes a piece of text into its word pieces. This uses a greedy longest-match-first algorithm to perform
tokenization using the given vocabulary.
For example, `input = "unaffable"` will return as output `["un", "##aff", "##able"]`.
Args:
text: A single token or whitespace... | tokenize | python | huggingface/transformers | src/transformers/models/splinter/tokenization_splinter.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/tokenization_splinter.py | Apache-2.0 |
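The greedy longest-match-first algorithm described above (for a single whitespace-separated word) can be sketched as follows; continuation pieces carry the `##` prefix, and a word with any unmatched span collapses to the unknown token:

```python
def wordpiece_tokenize(word, vocab, unk_token="[UNK]", max_chars_per_word=100):
    if len(word) > max_chars_per_word:
        return [unk_token]
    tokens = []
    start = 0
    while start < len(word):
        end = len(word)
        cur_substr = None
        while start < end:
            substr = word[start:end]
            if start > 0:
                substr = "##" + substr      # continuation pieces carry the "##" prefix
            if substr in vocab:
                cur_substr = substr         # greedy: take the longest match
                break
            end -= 1
        if cur_substr is None:
            return [unk_token]              # an unmatched span fails the whole word
        tokens.append(cur_substr)
        start = end
    return tokens
```

With `vocab = {"un", "##aff", "##able"}`, the word `"unaffable"` tokenizes to `["un", "##aff", "##able"]`, matching the docstring's example.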
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
        Build model inputs from a pair of sequences for question answering tasks by concatenating and adding special
tokens. A Splinter sequence has the following fo... |
    Build model inputs from a pair of sequences for question answering tasks by concatenating and adding special
tokens. A Splinter sequence has the following format:
- single sequence: `[CLS] X [SEP]`
- pair of sequences for question answering: `[CLS] question_tokens [QUESTION] . [SEP] con... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/splinter/tokenization_splinter_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/tokenization_splinter_fast.py | Apache-2.0 |
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create the token type IDs corresponding to the sequences passed. [What are token type
IDs?](../glossary#token-type-ids)
Should be overridden in... |
Create the token type IDs corresponding to the sequences passed. [What are token type
IDs?](../glossary#token-type-ids)
Should be overridden in a subclass if the model has a special way of building those.
Args:
token_ids_0 (`List[int]`): The first tokenized sequence.
... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/splinter/tokenization_splinter_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/splinter/tokenization_splinter_fast.py | Apache-2.0 |
def __init__(self, config, cin, q_groups=1, k_groups=1, v_groups=1):
"""
        config = used for some things; ignored for others (work in progress...)
        cin = input channels = output channels
        groups = number of groups to use in conv1d layers
"""
super().__init__()
if cin % config... |
    config = used for some things; ignored for others (work in progress...)
    cin = input channels = output channels
    groups = number of groups to use in conv1d layers
| __init__ | python | huggingface/transformers | src/transformers/models/squeezebert/modeling_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/modeling_squeezebert.py | Apache-2.0 |
def transpose_for_scores(self, x):
"""
- input: [N, C, W]
- output: [N, C1, W, C2] where C1 is the head index, and C2 is one head's contents
"""
new_x_shape = (x.size()[0], self.num_attention_heads, self.attention_head_size, x.size()[-1]) # [N, C1, C2, W]
x = x.view(*new... |
- input: [N, C, W]
- output: [N, C1, W, C2] where C1 is the head index, and C2 is one head's contents
| transpose_for_scores | python | huggingface/transformers | src/transformers/models/squeezebert/modeling_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/modeling_squeezebert.py | Apache-2.0 |
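The channels-first reshapes in SqueezeBERT's attention can be illustrated with NumPy in place of torch (an assumption for this sketch): split the channel axis `C` into `num_heads * head_size`, then swap the last two axes so each head sees a `[W, C2]` matrix.

```python
import numpy as np

def transpose_for_scores(x: np.ndarray, num_heads: int, head_size: int) -> np.ndarray:
    # x: [N, C, W] with C == num_heads * head_size
    n, c, w = x.shape
    x = x.reshape(n, num_heads, head_size, w)  # [N, C1, C2, W]
    return x.transpose(0, 1, 3, 2)             # [N, C1, W, C2]
```

The companion `transpose_key_for_scores` skips the final transpose, leaving keys in `[N, C1, C2, W]` so that `scores = query @ key` needs no extra transpose.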
def transpose_key_for_scores(self, x):
"""
- input: [N, C, W]
- output: [N, C1, C2, W] where C1 is the head index, and C2 is one head's contents
"""
new_x_shape = (x.size()[0], self.num_attention_heads, self.attention_head_size, x.size()[-1]) # [N, C1, C2, W]
x = x.view(... |
- input: [N, C, W]
- output: [N, C1, C2, W] where C1 is the head index, and C2 is one head's contents
| transpose_key_for_scores | python | huggingface/transformers | src/transformers/models/squeezebert/modeling_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/modeling_squeezebert.py | Apache-2.0 |
def forward(self, hidden_states, attention_mask, output_attentions):
"""
expects hidden_states in [N, C, W] data layout.
The attention_mask data layout is [N, W], and it does not need to be transposed.
"""
mixed_query_layer = self.query(hidden_states)
mixed_key_layer = s... |
expects hidden_states in [N, C, W] data layout.
The attention_mask data layout is [N, W], and it does not need to be transposed.
| forward | python | huggingface/transformers | src/transformers/models/squeezebert/modeling_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/modeling_squeezebert.py | Apache-2.0 |
def __init__(self, config):
"""
- hidden_size = input chans = output chans for Q, K, V (they are all the same ... for now) = output chans for
the module
- intermediate_size = output chans for intermediate layer
- groups = number of groups for all layers in the BertModule. (even... |
- hidden_size = input chans = output chans for Q, K, V (they are all the same ... for now) = output chans for
the module
- intermediate_size = output chans for intermediate layer
- groups = number of groups for all layers in the BertModule. (eventually we could change the interface to... | __init__ | python | huggingface/transformers | src/transformers/models/squeezebert/modeling_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/modeling_squeezebert.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optiona... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/squeezebert/modeling_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/modeling_squeezebert.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optiona... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`confi... | forward | python | huggingface/transformers | src/transformers/models/squeezebert/modeling_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/modeling_squeezebert.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optiona... |
input_ids (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
[`PreTrainedTokenizer.__call__`] for details.
... | forward | python | huggingface/transformers | src/transformers/models/squeezebert/modeling_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/modeling_squeezebert.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optiona... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
| forward | python | huggingface/transformers | src/transformers/models/squeezebert/modeling_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/modeling_squeezebert.py | Apache-2.0 |
def whitespace_tokenize(text):
"""Runs basic whitespace cleaning and splitting on a piece of text."""
text = text.strip()
if not text:
return []
tokens = text.split()
return tokens | Runs basic whitespace cleaning and splitting on a piece of text. | whitespace_tokenize | python | huggingface/transformers | src/transformers/models/squeezebert/tokenization_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/tokenization_squeezebert.py | Apache-2.0 |
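The `whitespace_tokenize` row above is self-contained enough to exercise directly; a minimal pure-Python sketch of the same strip-then-split behavior:

```python
def whitespace_tokenize(text):
    """Runs basic whitespace cleaning and splitting on a piece of text."""
    text = text.strip()
    if not text:
        return []
    return text.split()

print(whitespace_tokenize("  hello   world \n"))  # → ['hello', 'world']
print(whitespace_tokenize("   "))                 # → []
```

Note that `str.split()` with no arguments already collapses runs of whitespace, which is why no regex is needed.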
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A SqueezeBERT sequenc... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A SqueezeBERT sequence has the following format:
- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`
Args:
... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/squeezebert/tokenization_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/tokenization_squeezebert.py | Apache-2.0 |
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*)... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/squeezebert/tokenization_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/tokenization_squeezebert.py | Apache-2.0 |
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A SqueezeBERT sequence
pair mask has the following format... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A SqueezeBERT sequence
pair mask has the following format:
```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
```
If `token_ids_1` is... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/squeezebert/tokenization_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/tokenization_squeezebert.py | Apache-2.0 |
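The token-type mask format documented above (`0`s over `[CLS] A [SEP]`, `1`s over `B [SEP]`) can be sketched in plain Python; the special-token ids 101/102 are BERT-style placeholders chosen here for illustration, not values taken from the row:

```python
def create_token_type_ids(ids_a, ids_b=None):
    # [CLS] A [SEP] -> all zeros; [CLS] A [SEP] B [SEP] -> zeros then ones
    cls, sep = [101], [102]  # illustrative placeholder special-token ids
    if ids_b is None:
        return [0] * len(cls + ids_a + sep)
    return [0] * len(cls + ids_a + sep) + [1] * len(ids_b + sep)

print(create_token_type_ids([7, 8, 9], [4, 5]))  # → [0, 0, 0, 0, 0, 1, 1, 1]
```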
def tokenize(self, text, never_split=None):
"""
Basic Tokenization of a piece of text. For sub-word tokenization, see WordPieceTokenizer.
Args:
never_split (`List[str]`, *optional*)
Kept for backward compatibility purposes. Now implemented directly at the base class ... |
Basic Tokenization of a piece of text. For sub-word tokenization, see WordPieceTokenizer.
Args:
never_split (`List[str]`, *optional*)
Kept for backward compatibility purposes. Now implemented directly at the base class level (see
[`PreTrainedTokenizer.tokeni... | tokenize | python | huggingface/transformers | src/transformers/models/squeezebert/tokenization_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/tokenization_squeezebert.py | Apache-2.0 |
def _run_strip_accents(self, text):
"""Strips accents from a piece of text."""
text = unicodedata.normalize("NFD", text)
output = []
for char in text:
cat = unicodedata.category(char)
if cat == "Mn":
continue
output.append(char)
... | Strips accents from a piece of text. | _run_strip_accents | python | huggingface/transformers | src/transformers/models/squeezebert/tokenization_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/tokenization_squeezebert.py | Apache-2.0 |
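The accent-stripping row relies on Unicode NFD normalization followed by dropping combining marks (category `Mn`); a standalone sketch of that technique:

```python
import unicodedata

def strip_accents(text):
    # NFD splits base characters from combining marks; drop the marks
    # (Unicode category "Mn"), keeping only the base characters.
    text = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in text if unicodedata.category(ch) != "Mn")

print(strip_accents("café naïve"))  # → cafe naive
```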
def _run_split_on_punc(self, text, never_split=None):
"""Splits punctuation on a piece of text."""
if not self.do_split_on_punc or (never_split is not None and text in never_split):
return [text]
chars = list(text)
i = 0
start_new_word = True
output = []
... | Splits punctuation on a piece of text. | _run_split_on_punc | python | huggingface/transformers | src/transformers/models/squeezebert/tokenization_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/tokenization_squeezebert.py | Apache-2.0 |
def _tokenize_chinese_chars(self, text):
"""Adds whitespace around any CJK character."""
output = []
for char in text:
cp = ord(char)
if self._is_chinese_char(cp):
output.append(" ")
output.append(char)
output.append(" ")
... | Adds whitespace around any CJK character. | _tokenize_chinese_chars | python | huggingface/transformers | src/transformers/models/squeezebert/tokenization_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/tokenization_squeezebert.py | Apache-2.0 |
def _is_chinese_char(self, cp):
"""Checks whether CP is the codepoint of a CJK character."""
# This defines a "chinese character" as anything in the CJK Unicode block:
# https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)
#
# Note that the CJK Unicode block is ... | Checks whether CP is the codepoint of a CJK character. | _is_chinese_char | python | huggingface/transformers | src/transformers/models/squeezebert/tokenization_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/tokenization_squeezebert.py | Apache-2.0 |
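The `_is_chinese_char` row checks whether a codepoint falls in the CJK Unified Ideographs blocks; a sketch covering the main block plus Extension A (a subset of the ranges the full tokenizer checks):

```python
def is_cjk_codepoint(cp):
    # Main CJK Unified Ideographs block (U+4E00–U+9FFF) plus
    # Extension A (U+3400–U+4DBF); the library's version tests
    # several further extension blocks as well.
    return (0x4E00 <= cp <= 0x9FFF) or (0x3400 <= cp <= 0x4DBF)

print(is_cjk_codepoint(ord("中")))  # → True
print(is_cjk_codepoint(ord("a")))  # → False
```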
def _clean_text(self, text):
"""Performs invalid character removal and whitespace cleanup on text."""
output = []
for char in text:
cp = ord(char)
if cp == 0 or cp == 0xFFFD or _is_control(char):
continue
if _is_whitespace(char):
... | Performs invalid character removal and whitespace cleanup on text. | _clean_text | python | huggingface/transformers | src/transformers/models/squeezebert/tokenization_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/tokenization_squeezebert.py | Apache-2.0 |
def tokenize(self, text):
"""
Tokenizes a piece of text into its word pieces. This uses a greedy longest-match-first algorithm to perform
tokenization using the given vocabulary.
For example, `input = "unaffable"` will return as output `["un", "##aff", "##able"]`.
Args:
... |
Tokenizes a piece of text into its word pieces. This uses a greedy longest-match-first algorithm to perform
tokenization using the given vocabulary.
For example, `input = "unaffable"` will return as output `["un", "##aff", "##able"]`.
Args:
text: A single token or whitespa... | tokenize | python | huggingface/transformers | src/transformers/models/squeezebert/tokenization_squeezebert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/tokenization_squeezebert.py | Apache-2.0 |
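The WordPiece docstring above describes a greedy longest-match-first scan; a minimal sketch with a toy vocabulary, reproducing the docstring's `"unaffable"` example:

```python
def wordpiece_tokenize(token, vocab, unk="[UNK]", max_chars=100):
    # Greedy longest-match-first: repeatedly take the longest vocab prefix,
    # prefixing continuation pieces with "##".
    if len(token) > max_chars:
        return [unk]
    pieces, start = [], 0
    while start < len(token):
        end, cur = len(token), None
        while start < end:
            sub = token[start:end]
            if start > 0:
                sub = "##" + sub
            if sub in vocab:
                cur = sub
                break
            end -= 1
        if cur is None:  # no vocab piece matches -> whole token is unknown
            return [unk]
        pieces.append(cur)
        start = end
    return pieces

vocab = {"un", "##aff", "##able"}
print(wordpiece_tokenize("unaffable", vocab))  # → ['un', '##aff', '##able']
```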
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A SqueezeBERT sequence has the following format:
- single sequence: `[CLS... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A SqueezeBERT sequence has the following format:
- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`
Args:
... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/squeezebert/tokenization_squeezebert_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/tokenization_squeezebert_fast.py | Apache-2.0 |
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A SqueezeBERT sequence
pair mask has the following format... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A SqueezeBERT sequence
pair mask has the following format:
```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
```
If `token_ids_1` is... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/squeezebert/tokenization_squeezebert_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/squeezebert/tokenization_squeezebert_fast.py | Apache-2.0 |
def rotate_half(x):
"""Rotates half the hidden dims of the input."""
x1 = x[..., : x.shape[-1] // 2]
x2 = x[..., x.shape[-1] // 2 :]
return torch.cat((-x2, x1), dim=-1) | Rotates half the hidden dims of the input. | rotate_half | python | huggingface/transformers | src/transformers/models/stablelm/modeling_stablelm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/stablelm/modeling_stablelm.py | Apache-2.0 |
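`rotate_half` splits the last dimension in two and returns `(-x2, x1)` concatenated; a list-based sketch of the same operation, without the torch dependency:

```python
def rotate_half(x):
    # Split the last dim in halves and concatenate (-x2, x1),
    # mirroring torch.cat((-x2, x1), dim=-1) on a plain list.
    half = len(x) // 2
    x1, x2 = x[:half], x[half:]
    return [-v for v in x2] + x1

print(rotate_half([1.0, 2.0, 3.0, 4.0]))  # → [-3.0, -4.0, 1.0, 2.0]
```

Applied twice, this negates the vector, which is what makes it usable as the "imaginary" component in the rotary-embedding formula `q * cos + rotate_half(q) * sin`.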
def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
"""Applies Rotary Position Embedding to the query and key tensors.
Args:
q (`torch.Tensor`): The query tensor.
k (`torch.Tensor`): The key tensor.
cos (`torch.Tensor`): The cosine part of the rotary embedding.... | Applies Rotary Position Embedding to the query and key tensors.
Args:
q (`torch.Tensor`): The query tensor.
k (`torch.Tensor`): The key tensor.
cos (`torch.Tensor`): The cosine part of the rotary embedding.
sin (`torch.Tensor`): The sine part of the rotary embedding.
positio... | apply_rotary_pos_emb | python | huggingface/transformers | src/transformers/models/stablelm/modeling_stablelm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/stablelm/modeling_stablelm.py | Apache-2.0 |
def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
"""
This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
"""
batch, num_key_value_... |
This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
| repeat_kv | python | huggingface/transformers | src/transformers/models/stablelm/modeling_stablelm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/stablelm/modeling_stablelm.py | Apache-2.0 |
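The `repeat_kv` docstring states the function is equivalent to `torch.repeat_interleave(x, dim=1, repeats=n_rep)`; a nested-list sketch showing the head-duplication pattern used by grouped-query attention:

```python
def repeat_kv(hidden_states, n_rep):
    # hidden_states: nested lists shaped (batch, num_kv_heads, seqlen, head_dim).
    # Each key/value head is duplicated n_rep times *consecutively*
    # (h0, h0, h1, h1, ...), matching repeat_interleave along dim=1.
    return [[head for head in batch for _ in range(n_rep)] for batch in hidden_states]

x = [[[[1, 2]], [[3, 4]]]]  # batch=1, kv_heads=2, seqlen=1, head_dim=2
y = repeat_kv(x, 2)
print(len(y[0]))  # → 4  (num_kv_heads * n_rep query-aligned heads)
```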
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_value: Optional[Tuple[torch.Tensor]] = None,
output_attentions: Optional[bool] = False,
use_cache: Optional[boo... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values... | forward | python | huggingface/transformers | src/transformers/models/stablelm/modeling_stablelm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/stablelm/modeling_stablelm.py | Apache-2.0 |
def _prepare_4d_causal_attention_mask_with_cache_position(
attention_mask: torch.Tensor,
sequence_length: int,
target_length: int,
dtype: torch.dtype,
cache_position: torch.Tensor,
batch_size: int,
**kwargs,
):
"""
Creates a causal 4D mask of s... |
Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
`(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing.
Args:
attention_mask (`torch.Tensor`):
A 2D attention mask of sh... | _prepare_4d_causal_attention_mask_with_cache_position | python | huggingface/transformers | src/transformers/models/stablelm/modeling_stablelm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/stablelm/modeling_stablelm.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[Cache] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
labels: Opt... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`confi... | forward | python | huggingface/transformers | src/transformers/models/stablelm/modeling_stablelm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/stablelm/modeling_stablelm.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[Cache] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
labels: Opt... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`confi... | forward | python | huggingface/transformers | src/transformers/models/stablelm/modeling_stablelm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/stablelm/modeling_stablelm.py | Apache-2.0 |
def rotate_half(x):
"""Rotates half the hidden dims of the input."""
x1 = x[..., : x.shape[-1] // 2]
x2 = x[..., x.shape[-1] // 2 :]
return torch.cat((-x2, x1), dim=-1) | Rotates half the hidden dims of the input. | rotate_half | python | huggingface/transformers | src/transformers/models/starcoder2/modeling_starcoder2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/starcoder2/modeling_starcoder2.py | Apache-2.0 |
def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
"""Applies Rotary Position Embedding to the query and key tensors.
Args:
q (`torch.Tensor`): The query tensor.
k (`torch.Tensor`): The key tensor.
cos (`torch.Tensor`): The cosine part of the rotary embedding.... | Applies Rotary Position Embedding to the query and key tensors.
Args:
q (`torch.Tensor`): The query tensor.
k (`torch.Tensor`): The key tensor.
cos (`torch.Tensor`): The cosine part of the rotary embedding.
sin (`torch.Tensor`): The sine part of the rotary embedding.
positio... | apply_rotary_pos_emb | python | huggingface/transformers | src/transformers/models/starcoder2/modeling_starcoder2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/starcoder2/modeling_starcoder2.py | Apache-2.0 |
def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
"""
This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
"""
batch, num_key_value_... |
This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
| repeat_kv | python | huggingface/transformers | src/transformers/models/starcoder2/modeling_starcoder2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/starcoder2/modeling_starcoder2.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[Cache] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
labels: Opt... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
... | forward | python | huggingface/transformers | src/transformers/models/starcoder2/modeling_starcoder2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/starcoder2/modeling_starcoder2.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[Cache] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
labels: Opt... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`confi... | forward | python | huggingface/transformers | src/transformers/models/starcoder2/modeling_starcoder2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/starcoder2/modeling_starcoder2.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[Cache] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
labels: Opt... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`confi... | forward | python | huggingface/transformers | src/transformers/models/starcoder2/modeling_starcoder2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/starcoder2/modeling_starcoder2.py | Apache-2.0 |
def convert_old_keys_to_new_keys(state_dict_keys: List[str], conversion_mapping=ORIGINAL_TO_CONVERTED_KEY_MAPPING):
"""
This function should be applied only once, on the concatenated keys to efficiently rename using
the key mappings.
"""
output_dict = {}
if state_dict_keys is not None:
o... |
This function should be applied only once, on the concatenated keys to efficiently rename using
the key mappings.
| convert_old_keys_to_new_keys | python | huggingface/transformers | src/transformers/models/superglue/convert_superglue_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superglue/convert_superglue_to_hf.py | Apache-2.0 |
def convert_to_grayscale(
image: ImageInput,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
) -> ImageInput:
"""
Converts an image to grayscale format using the NTSC formula. Only supports numpy arrays and PIL Images. TODO:
support torch and tensorflow grayscale conversion
This functio... |
Converts an image to grayscale format using the NTSC formula. Only supports numpy arrays and PIL Images. TODO:
support torch and tensorflow grayscale conversion
This function is supposed to return a 1-channel image, but it returns a 3-channel image with the same value in each
channel, because of an issue that i... | convert_to_grayscale | python | huggingface/transformers | src/transformers/models/superglue/image_processing_superglue.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superglue/image_processing_superglue.py | Apache-2.0 |
def _is_valid_image(image):
"""images is a PIL Image or a 3D array."""
return is_pil_image(image) or (
is_valid_image(image) and get_image_type(image) != ImageType.PIL and len(image.shape) == 3
) | Checks whether the image is a PIL Image or a 3D array. | _is_valid_image | python | huggingface/transformers | src/transformers/models/superglue/image_processing_superglue.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superglue/image_processing_superglue.py | Apache-2.0
def resize(
self,
image: np.ndarray,
size: Dict[str, int],
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
**kwargs,
):
"""
Resize an image.
Args:
image ... |
Resize an image.
Args:
image (`np.ndarray`):
Image to resize.
size (`Dict[str, int]`):
Dictionary of the form `{"height": int, "width": int}`, specifying the size of the output image.
data_format (`ChannelDimension` or `str`, *optiona... | resize | python | huggingface/transformers | src/transformers/models/superglue/image_processing_superglue.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superglue/image_processing_superglue.py | Apache-2.0 |
def preprocess(
self,
images,
do_resize: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
resample: PILImageResampling = None,
do_rescale: Optional[bool] = None,
rescale_factor: Optional[float] = None,
do_grayscale: Optional[bool] = None,
... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image pairs to preprocess. Expects either a list of 2 images or a list of list of 2 images list with
pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and... | preprocess | python | huggingface/transformers | src/transformers/models/superglue/image_processing_superglue.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superglue/image_processing_superglue.py | Apache-2.0 |
def post_process_keypoint_matching(
self,
outputs: "KeypointMatchingOutput",
target_sizes: Union[TensorType, List[Tuple]],
threshold: float = 0.0,
) -> List[Dict[str, torch.Tensor]]:
"""
Converts the raw output of [`KeypointMatchingOutput`] into lists of keypoints, sc... |
Converts the raw output of [`KeypointMatchingOutput`] into lists of keypoints, scores and descriptors
with coordinates absolute to the original image sizes.
Args:
outputs ([`KeypointMatchingOutput`]):
Raw outputs of the model.
target_sizes (`torch.Tensor`... | post_process_keypoint_matching | python | huggingface/transformers | src/transformers/models/superglue/image_processing_superglue.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superglue/image_processing_superglue.py | Apache-2.0 |
def normalize_keypoints(keypoints: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""
Normalize keypoint locations based on the image shape (height, width)
Args:
keypoints (`torch.Tensor` of shape `(batch_size, num_keypoints, 2)`):
Keypoints locations in (x, y) format.
height (`int`... |
Normalize keypoint locations based on the image shape (height, width)
Args:
keypoints (`torch.Tensor` of shape `(batch_size, num_keypoints, 2)`):
Keypoints locations in (x, y) format.
height (`int`):
Image height.
width (`int`):
Image width.
Returns:
... | normalize_keypoints | python | huggingface/transformers | src/transformers/models/superglue/modeling_superglue.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superglue/modeling_superglue.py | Apache-2.0 |
def log_sinkhorn_iterations(
log_cost_matrix: torch.Tensor,
log_source_distribution: torch.Tensor,
log_target_distribution: torch.Tensor,
num_iterations: int,
) -> torch.Tensor:
"""
Perform Sinkhorn Normalization in Log-space for stability
Args:
log_cost_matrix (`torch.Tensor` of sh... |
Perform Sinkhorn Normalization in Log-space for stability
Args:
log_cost_matrix (`torch.Tensor` of shape `(batch_size, num_rows, num_columns)`):
Logarithm of the cost matrix.
log_source_distribution (`torch.Tensor` of shape `(batch_size, num_rows)`):
Logarithm of the so... | log_sinkhorn_iterations | python | huggingface/transformers | src/transformers/models/superglue/modeling_superglue.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superglue/modeling_superglue.py | Apache-2.0 |
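The Sinkhorn row above normalizes a cost matrix against source and target marginals entirely in log space for numerical stability; a small pure-Python sketch of that alternating row/column normalization (the `logsumexp` helper stands in for `torch.logsumexp`):

```python
import math

def logsumexp(vals):
    # Stable log(sum(exp(v))) via the max-shift trick.
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def log_sinkhorn(Z, log_mu, log_nu, iters):
    # Z: log cost matrix (rows x cols); log_mu/log_nu: log marginals.
    # Alternately rescale rows and columns in log space.
    R, C = len(Z), len(Z[0])
    u, v = [0.0] * R, [0.0] * C
    for _ in range(iters):
        u = [log_mu[i] - logsumexp([Z[i][j] + v[j] for j in range(C)]) for i in range(R)]
        v = [log_nu[j] - logsumexp([Z[i][j] + u[i] for i in range(R)]) for j in range(C)]
    return [[Z[i][j] + u[i] + v[j] for j in range(C)] for i in range(R)]

Z = [[0.0, 0.0], [0.0, 0.0]]
log_mu = [math.log(0.5)] * 2
log_nu = [math.log(0.5)] * 2
P = log_sinkhorn(Z, log_mu, log_nu, 20)
# row and column sums of exp(P) converge to the target marginals (0.5 each)
```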
def log_optimal_transport(scores: torch.Tensor, reg_param: torch.Tensor, iterations: int) -> torch.Tensor:
"""
Perform Differentiable Optimal Transport in Log-space for stability
Args:
scores: (`torch.Tensor` of shape `(batch_size, num_rows, num_columns)`):
Cost matrix.
reg_para... |
Perform Differentiable Optimal Transport in Log-space for stability
Args:
scores: (`torch.Tensor` of shape `(batch_size, num_rows, num_columns)`):
Cost matrix.
reg_param: (`torch.Tensor` of shape `(batch_size, 1, 1)`):
Regularization parameter.
iterations: (`int... | log_optimal_transport | python | huggingface/transformers | src/transformers/models/superglue/modeling_superglue.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superglue/modeling_superglue.py | Apache-2.0 |
def _match_image_pair(
self,
keypoints: torch.Tensor,
descriptors: torch.Tensor,
scores: torch.Tensor,
height: int,
width: int,
mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = No... |
Perform keypoint matching between two images.
Args:
keypoints (`torch.Tensor` of shape `(batch_size, 2, num_keypoints, 2)`):
Keypoints detected in the pair of images.
descriptors (`torch.Tensor` of shape `(batch_size, 2, descriptor_dim, num_keypoints)`):
... | _match_image_pair | python | huggingface/transformers | src/transformers/models/superglue/modeling_superglue.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superglue/modeling_superglue.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.FloatTensor,
labels: Optional[torch.LongTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, KeypointMatchingOutput]:
... |
Examples:
```python
>>> from transformers import AutoImageProcessor, AutoModel
>>> import torch
>>> from PIL import Image
>>> import requests
>>> url = "https://github.com/magicleap/SuperGluePretrainedNetwork/blob/master/assets/phototourism_sample_images/london... | forward | python | huggingface/transformers | src/transformers/models/superglue/modeling_superglue.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superglue/modeling_superglue.py | Apache-2.0 |
def convert_superpoint_checkpoint(checkpoint_url, pytorch_dump_folder_path, save_model, push_to_hub, test_mode=False):
"""
Copy/paste/tweak model's weights to our SuperPoint structure.
"""
print("Downloading original model from checkpoint...")
config = get_superpoint_config()
# load original s... |
Copy/paste/tweak model's weights to our SuperPoint structure.
| convert_superpoint_checkpoint | python | huggingface/transformers | src/transformers/models/superpoint/convert_superpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superpoint/convert_superpoint_to_pytorch.py | Apache-2.0 |
def convert_to_grayscale(
image: ImageInput,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
) -> ImageInput:
"""
Converts an image to grayscale format using the NTSC formula. Only supports numpy arrays and PIL Images. TODO:
support torch and tensorflow grayscale conversion
This functio... |
Converts an image to grayscale format using the NTSC formula. Only supports numpy arrays and PIL Images. TODO:
support torch and tensorflow grayscale conversion
This function is supposed to return a 1-channel image, but it returns a 3-channel image with the same value in each
channel, because of an issue that i... | convert_to_grayscale | python | huggingface/transformers | src/transformers/models/superpoint/image_processing_superpoint.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superpoint/image_processing_superpoint.py | Apache-2.0 |
def resize(
self,
image: np.ndarray,
size: Dict[str, int],
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
**kwargs,
):
"""
Resize an image.
Args:
image ... |
Resize an image.
Args:
image (`np.ndarray`):
Image to resize.
size (`Dict[str, int]`):
Dictionary of the form `{"height": int, "width": int}`, specifying the size of the output image.
data_format (`ChannelDimension` or `str`, *optiona... | resize | python | huggingface/transformers | src/transformers/models/superpoint/image_processing_superpoint.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superpoint/image_processing_superpoint.py | Apache-2.0 |
def preprocess(
self,
images,
do_resize: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
do_rescale: Optional[bool] = None,
rescale_factor: Optional[float] = None,
do_grayscale: Optional[bool] = None,
return_tensors: Optional[Union[str, Tenso... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
passing in images with pixel values between 0 and 1, set `do_rescale=False`.
... | preprocess | python | huggingface/transformers | src/transformers/models/superpoint/image_processing_superpoint.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superpoint/image_processing_superpoint.py | Apache-2.0 |
def post_process_keypoint_detection(
self, outputs: "SuperPointKeypointDescriptionOutput", target_sizes: Union[TensorType, List[Tuple]]
) -> List[Dict[str, "torch.Tensor"]]:
"""
Converts the raw output of [`SuperPointForKeypointDetection`] into lists of keypoints, scores and descriptors
... |
Converts the raw output of [`SuperPointForKeypointDetection`] into lists of keypoints, scores and descriptors
with coordinates absolute to the original image sizes.
Args:
outputs ([`SuperPointKeypointDescriptionOutput`]):
Raw outputs of the model containing keypoint... | post_process_keypoint_detection | python | huggingface/transformers | src/transformers/models/superpoint/image_processing_superpoint.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superpoint/image_processing_superpoint.py | Apache-2.0 |
def remove_keypoints_from_borders(
keypoints: torch.Tensor, scores: torch.Tensor, border: int, height: int, width: int
) -> Tuple[torch.Tensor, torch.Tensor]:
"""Removes keypoints (and their associated scores) that are too close to the border"""
mask_h = (keypoints[:, 0] >= border) & (keypoints[:, 0] < (hei... | Removes keypoints (and their associated scores) that are too close to the border | remove_keypoints_from_borders | python | huggingface/transformers | src/transformers/models/superpoint/modeling_superpoint.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superpoint/modeling_superpoint.py | Apache-2.0 |
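The border filter above keeps only keypoints at least `border` pixels away from every image edge. A NumPy sketch of the same masking logic, with keypoints as (y, x) rows to match the indexing in the truncated code:

```python
import numpy as np

def remove_keypoints_from_borders(keypoints, scores, border, height, width):
    """Drop keypoints (and their scores) lying within `border` pixels of
    any image edge. NumPy sketch; keypoints are (y, x) rows."""
    mask_h = (keypoints[:, 0] >= border) & (keypoints[:, 0] < height - border)
    mask_w = (keypoints[:, 1] >= border) & (keypoints[:, 1] < width - border)
    keep = mask_h & mask_w
    return keypoints[keep], scores[keep]

keypoints = np.array([[2, 2], [10, 10], [2, 30], [15, 15]])
scores = np.array([0.9, 0.8, 0.7, 0.6])
kept, kept_scores = remove_keypoints_from_borders(keypoints, scores, border=4, height=32, width=32)
```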
def top_k_keypoints(keypoints: torch.Tensor, scores: torch.Tensor, k: int) -> Tuple[torch.Tensor, torch.Tensor]:
"""Keeps the k keypoints with highest score"""
if k >= len(keypoints):
return keypoints, scores
scores, indices = torch.topk(scores, k, dim=0)
return keypoints[indices], scores | Keeps the k keypoints with highest score | top_k_keypoints | python | huggingface/transformers | src/transformers/models/superpoint/modeling_superpoint.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superpoint/modeling_superpoint.py | Apache-2.0 |
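An equivalent NumPy sketch of the top-k selection (the original uses `torch.topk`; tie-breaking order may differ):

```python
import numpy as np

def top_k_keypoints(keypoints: np.ndarray, scores: np.ndarray, k: int):
    """Keep the k keypoints with the highest scores, in descending score
    order. NumPy sketch of the torch.topk-based helper above."""
    if k >= len(keypoints):
        return keypoints, scores
    top = np.argsort(scores)[::-1][:k]  # indices of the k largest scores
    return keypoints[top], scores[top]

keypoints = np.array([[0, 0], [5, 5], [9, 9], [2, 7]])
scores = np.array([0.1, 0.9, 0.4, 0.7])
kept, kept_scores = top_k_keypoints(keypoints, scores, k=2)
```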
def simple_nms(scores: torch.Tensor, nms_radius: int) -> torch.Tensor:
"""Applies non-maximum suppression on scores"""
if nms_radius < 0:
raise ValueError("Expected positive values for nms_radius")
def max_pool(x):
return nn.functional.max_pool2d(x, kernel_size=nms_radius * 2 + 1, stride=1,... | Applies non-maximum suppression on scores | simple_nms | python | huggingface/transformers | src/transformers/models/superpoint/modeling_superpoint.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superpoint/modeling_superpoint.py | Apache-2.0 |
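`simple_nms` keeps a score only where it equals the maximum within a `(2 * nms_radius + 1)`-sized window. A single-pass NumPy sketch of that idea (the original applies the max-pool comparison iteratively to break up plateaus, and works on batched tensors):

```python
import numpy as np

def simple_nms(scores: np.ndarray, nms_radius: int) -> np.ndarray:
    """Zero out every score that is not the maximum of its local
    (2 * nms_radius + 1) square window. Single-image, single-pass sketch."""
    height, width = scores.shape
    padded = np.pad(scores, nms_radius, mode="constant")
    local_max = np.empty_like(scores)
    for i in range(height):
        for j in range(width):
            window = padded[i : i + 2 * nms_radius + 1, j : j + 2 * nms_radius + 1]
            local_max[i, j] = window.max()
    return np.where(scores == local_max, scores, 0.0)

scores = np.array([[0.1, 0.9, 0.2],
                   [0.3, 0.4, 0.1],
                   [0.8, 0.2, 0.5]])
suppressed = simple_nms(scores, nms_radius=1)
```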
def _get_pixel_scores(self, encoded: torch.Tensor) -> torch.Tensor:
"""Based on the encoder output, compute the scores for each pixel of the image"""
scores = self.relu(self.conv_score_a(encoded))
scores = self.conv_score_b(scores)
scores = nn.functional.softmax(scores, 1)[:, :-1]
... | Based on the encoder output, compute the scores for each pixel of the image | _get_pixel_scores | python | huggingface/transformers | src/transformers/models/superpoint/modeling_superpoint.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superpoint/modeling_superpoint.py | Apache-2.0 |
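The softmax in `_get_pixel_scores` runs over 65 channels, where the extra channel is SuperPoint's "dustbin" (no-keypoint) bin that `[:, :-1]` drops. The remaining 64 per-cell probabilities are then unfolded into 8x8 pixel blocks. A NumPy sketch of that post-processing, assuming the standard SuperPoint layout of `(batch, 8*8 + 1, H/8, W/8)` logits (the shapes and the unfold step are inferred from the reference implementation, not copied from the truncated code):

```python
import numpy as np

def pixel_scores(logits: np.ndarray, cell: int = 8) -> np.ndarray:
    """Softmax over channels, drop the dustbin, then rearrange each 64-way
    cell distribution into an (8 x 8) pixel block. Illustrative sketch."""
    exps = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exps / exps.sum(axis=1, keepdims=True)
    probs = probs[:, :-1]  # drop the no-keypoint "dustbin" channel
    batch, _, h, w = probs.shape
    probs = probs.reshape(batch, cell, cell, h, w)
    # Interleave cell offsets with cell indices: (b, h, dy, w, dx).
    probs = probs.transpose(0, 3, 1, 4, 2)
    return probs.reshape(batch, h * cell, w * cell)

logits = np.zeros((1, 65, 2, 3))  # uniform logits -> uniform probabilities
scores = pixel_scores(logits)
```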
def _extract_keypoints(self, scores: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Based on their scores, extract the pixels that represent the keypoints that will be used for descriptors computation.
The keypoints are in the form of relative (x, y) coordinates.
"""
_, ... |
Based on their scores, extract the pixels that represent the keypoints that will be used for descriptors computation.
The keypoints are in the form of relative (x, y) coordinates.
| _extract_keypoints | python | huggingface/transformers | src/transformers/models/superpoint/modeling_superpoint.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superpoint/modeling_superpoint.py | Apache-2.0 |
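The thresholding at the heart of `_extract_keypoints` can be sketched in NumPy for a single score map (the real method is batched and also normalizes coordinates to the relative [0, 1] range mentioned in its docstring; the threshold parameter here is illustrative):

```python
import numpy as np

def extract_keypoints(scores: np.ndarray, threshold: float):
    """Keep pixels whose score exceeds `threshold` and return their (x, y)
    coordinates plus scores. Single-image sketch."""
    ys, xs = np.nonzero(scores > threshold)
    keypoints = np.stack([xs, ys], axis=-1)  # (x, y) column order
    return keypoints, scores[ys, xs]

scores = np.array([[0.1, 0.9],
                   [0.8, 0.2]])
keypoints, kp_scores = extract_keypoints(scores, threshold=0.5)
```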
def forward(self, encoded: torch.Tensor, keypoints: torch.Tensor) -> torch.Tensor:
"""Based on the encoder output and the keypoints, compute the descriptors for each keypoint"""
descriptors = self.conv_descriptor_b(self.relu(self.conv_descriptor_a(encoded)))
descriptors = nn.functional.normalize... | Based on the encoder output and the keypoints, compute the descriptors for each keypoint | forward | python | huggingface/transformers | src/transformers/models/superpoint/modeling_superpoint.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superpoint/modeling_superpoint.py | Apache-2.0 |
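The final `nn.functional.normalize` call L2-normalizes each descriptor vector, so descriptor matching can use plain dot products as cosine similarity. An equivalent NumPy sketch:

```python
import numpy as np

def l2_normalize(descriptors: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Scale each descriptor (row) to unit L2 norm, guarding against
    zero vectors with a small epsilon. Illustrative sketch."""
    norms = np.linalg.norm(descriptors, axis=-1, keepdims=True)
    return descriptors / np.maximum(norms, eps)

descriptors = np.array([[3.0, 4.0], [0.0, 2.0]])
unit = l2_normalize(descriptors)
```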
def forward(
self,
pixel_values: torch.FloatTensor,
labels: Optional[torch.LongTensor] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, SuperPointKeypointDescriptionOutput]:
r"""
Examples:
```p... |
Examples:
```python
>>> from transformers import AutoImageProcessor, SuperPointForKeypointDetection
>>> import torch
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(... | forward | python | huggingface/transformers | src/transformers/models/superpoint/modeling_superpoint.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/superpoint/modeling_superpoint.py | Apache-2.0 |
def convert_swiftformer_checkpoint(swiftformer_name, pytorch_dump_folder_path, original_ckpt):
"""
Copy/paste/tweak model's weights to our SwiftFormer structure.
"""
# define default SwiftFormer configuration
config = SwiftFormerConfig()
# dataset (ImageNet-21k only or also fine-tuned on Image... |
Copy/paste/tweak model's weights to our SwiftFormer structure.
| convert_swiftformer_checkpoint | python | huggingface/transformers | src/transformers/models/swiftformer/convert_swiftformer_original_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swiftformer/convert_swiftformer_original_to_hf.py | Apache-2.0 |
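Conversion scripts like `convert_swiftformer_checkpoint` typically build the Hugging Face state dict by renaming keys from the original checkpoint. A generic sketch of that step (the actual SwiftFormer rename rules are elided in the snippet, so the rule list below is purely illustrative):

```python
def rename_state_dict_keys(state_dict: dict, rules: list) -> dict:
    """Apply (old_substring, new_substring) rules to every checkpoint key,
    keeping the tensor values untouched. Generic conversion-script sketch."""
    renamed = {}
    for key, value in state_dict.items():
        for old, new in rules:
            key = key.replace(old, new)
        renamed[key] = value
    return renamed

original = {"network.0.weight": 1, "head.bias": 2}
rules = [("network", "encoder.network"), ("head", "classifier")]  # illustrative rules
converted = rename_state_dict_keys(original, rules)
```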
def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
"""
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
h... |
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate pape... | drop_path | python | huggingface/transformers | src/transformers/models/swiftformer/modeling_swiftformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swiftformer/modeling_swiftformer.py | Apache-2.0 |
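Stochastic depth drops whole residual branches per sample during training and rescales survivors by `1 / keep_prob`, so the expected activation is unchanged. A NumPy sketch of the same logic (the torch version draws the mask with `torch.rand` on the input's device and dtype):

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_path(hidden_states: np.ndarray, drop_prob: float = 0.0, training: bool = False) -> np.ndarray:
    """Stochastic depth: drop entire residual branches per sample during
    training and rescale survivors by 1 / keep_prob. NumPy sketch."""
    if drop_prob == 0.0 or not training:
        return hidden_states
    keep_prob = 1.0 - drop_prob
    # One Bernoulli draw per sample, broadcast across all remaining dims.
    shape = (hidden_states.shape[0],) + (1,) * (hidden_states.ndim - 1)
    keep_mask = rng.random(shape) < keep_prob
    return hidden_states * keep_mask / keep_prob

x = np.ones((8, 3))
dropped = drop_path(x, drop_prob=0.5, training=True)
# Each row is either all zeros (branch dropped) or all 2.0 (kept, rescaled).
```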
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[tuple, ImageClassifierOutputWithNoAttention]:
r"""
labels (`torch... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`config.n... | forward | python | huggingface/transformers | src/transformers/models/swiftformer/modeling_swiftformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swiftformer/modeling_swiftformer.py | Apache-2.0 |
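The convention described in the `labels` docstring — mean-squared error when `config.num_labels == 1`, cross-entropy otherwise — can be sketched in NumPy (the model itself uses `torch.nn.MSELoss` / `CrossEntropyLoss`; this is only the selection logic):

```python
import numpy as np

def classification_or_regression_loss(num_labels: int, logits: np.ndarray, labels: np.ndarray) -> float:
    """Pick the loss by num_labels: MSE for regression (num_labels == 1),
    cross-entropy with integer class labels otherwise. Sketch only."""
    if num_labels == 1:
        return float(np.mean((logits.squeeze(-1) - labels) ** 2))
    # Numerically stable log-softmax, then negative log-likelihood.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

mse = classification_or_regression_loss(1, np.array([[1.0], [2.0]]), np.array([1.0, 2.0]))
ce = classification_or_regression_loss(2, np.array([[0.0, 0.0]]), np.array([0]))
```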
def call(
self,
pixel_values: Optional[tf.Tensor] = None,
labels: Optional[tf.Tensor] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
training: bool = False,
) -> Union[tuple, TFImageClassifierOutputWithNoAttention]:
r"... |
labels (`tf.Tensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`config.num_labe... | call | python | huggingface/transformers | src/transformers/models/swiftformer/modeling_tf_swiftformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swiftformer/modeling_tf_swiftformer.py | Apache-2.0 |
def window_partition(input_feature, window_size):
"""
Partitions the given input into windows.
"""
batch_size, height, width, num_channels = input_feature.shape
input_feature = input_feature.view(
batch_size, height // window_size, window_size, width // window_size, window_size, num_channels... |
Partitions the given input into windows.
| window_partition | python | huggingface/transformers | src/transformers/models/swin/modeling_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_swin.py | Apache-2.0 |
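An equivalent NumPy sketch of the Swin windowing above (the torch version uses `.view`/`.permute`/`.contiguous` on a `(batch, height, width, channels)` tensor; height and width must be divisible by `window_size`):

```python
import numpy as np

def window_partition(input_feature: np.ndarray, window_size: int) -> np.ndarray:
    """Split (batch, H, W, C) features into non-overlapping windows of shape
    (num_windows * batch, window_size, window_size, C). NumPy sketch."""
    batch, height, width, channels = input_feature.shape
    x = input_feature.reshape(
        batch, height // window_size, window_size, width // window_size, window_size, channels
    )
    # Bring the two block indices together, then flatten them into dim 0.
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, channels)

x = np.arange(16).reshape(1, 4, 4, 1)  # pixel value = 4 * row + col
windows = window_partition(x, window_size=2)
```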