| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def call(
self,
input_ids: TFModelInputType | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
input_mask: np.ndarray | tf.Tensor | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
mems: np.ndarray | tf.Tensor | None = None,
per... |
labels (`tf.Tensor` of shape `(batch_size,)`, *optional*):
Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices - 1]`
where `num_choices` is the size of the second dimension of the input tensors. (See `input_ids` above)
| call | python | huggingface/transformers | src/transformers/models/xlnet/modeling_tf_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_tf_xlnet.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
mems: np.ndarray | tf.Tensor | None = None,
perm_mask: np.ndarray | tf.Tensor | None = None,
target_mapping: np.ndarray | tf.Tensor | None = None,
toke... |
labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
| call | python | huggingface/transformers | src/transformers/models/xlnet/modeling_tf_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_tf_xlnet.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
mems: np.ndarray | tf.Tensor | None = None,
perm_mask: np.ndarray | tf.Tensor | None = None,
target_mapping: np.ndarray | tf.Tensor | None = None,
toke... |
start_positions (`tf.Tensor` of shape `(batch_size,)`, *optional*):
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
... | call | python | huggingface/transformers | src/transformers/models/xlnet/modeling_tf_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_tf_xlnet.py | Apache-2.0 |
def build_tf_xlnet_to_pytorch_map(model, config, tf_weights=None):
"""
A map of modules from TF to PyTorch. I use a map to keep the PyTorch model as identical to the original PyTorch
model as possible.
"""
tf_to_pt_map = {}
if hasattr(model, "transformer"):
if hasattr(model, "lm_loss")... |
A map of modules from TF to PyTorch. I use a map to keep the PyTorch model as identical to the original PyTorch
model as possible.
| build_tf_xlnet_to_pytorch_map | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
def load_tf_weights_in_xlnet(model, config, tf_path):
"""Load tf checkpoints in a pytorch model"""
try:
import numpy as np
import tensorflow as tf
except ImportError:
logger.error(
"Loading a TensorFlow model in PyTorch requires TensorFlow to be installed. Please see "
... | Load tf checkpoints in a pytorch model | load_tf_weights_in_xlnet | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
def rel_shift(x, klen=-1):
"""perform relative shift to form the relative attention score."""
x_size = x.shape
x = x.reshape(x_size[1], x_size[0], x_size[2], x_size[3])
x = x[1:, ...]
x = x.reshape(x_size[0], x_size[1] - 1, x_size[2], x_size[3])
# x = x[:, 0:klen, :, :]
... | perform relative shift to form the relative attention score. | rel_shift | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
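The truncated `rel_shift` snippet above already shows the core of the trick: a reshape that swaps the two leading axes (not a transpose), a one-row slice, and a reshape back. A minimal NumPy stand-in for the torch version, assuming a 4-D score tensor laid out as `(qlen, klen, ...)`:

```python
import numpy as np

def rel_shift(x, klen=-1):
    """Realign relative-position attention scores by reshaping with the
    two leading axes swapped, dropping the first row, and reshaping back."""
    qlen, k = x.shape[0], x.shape[1]
    x = x.reshape(k, qlen, *x.shape[2:])     # reshape, NOT transpose
    x = x[1:, ...]                           # drop the first row
    x = x.reshape(qlen, k - 1, *x.shape[2:])
    if klen != -1:
        x = x[:, :klen, ...]                 # keep only klen key positions
    return x

# toy scores: qlen=2, klen=3, one head, one feature
scores = np.arange(6).reshape(2, 3, 1, 1)
shifted = rel_shift(scores, klen=2)
```

Because the first step is a raw reshape, each query row ends up shifted one slot further than the last, which is exactly the alignment relative attention needs.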
def forward(
self, hidden_states: torch.FloatTensor, p_mask: Optional[torch.FloatTensor] = None
) -> torch.FloatTensor:
"""
Args:
hidden_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`):
The final hidden states of the model.
p... |
Args:
hidden_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`):
The final hidden states of the model.
p_mask (`torch.FloatTensor` of shape `(batch_size, seq_len)`, *optional*):
Mask for tokens at invalid position, such as query an... | forward | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.FloatTensor,
start_states: Optional[torch.FloatTensor] = None,
start_positions: Optional[torch.LongTensor] = None,
p_mask: Optional[torch.FloatTensor] = None,
) -> torch.FloatTensor:
"""
Args:
hidden_states (... |
Args:
hidden_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`):
The final hidden states of the model.
start_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`, *optional*):
The hidden states of the first tok... | forward | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.FloatTensor,
start_states: Optional[torch.FloatTensor] = None,
start_positions: Optional[torch.LongTensor] = None,
cls_index: Optional[torch.LongTensor] = None,
) -> torch.FloatTensor:
"""
Args:
hidden_states... |
Args:
hidden_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`):
The final hidden states of the model.
start_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`, *optional*):
The hidden states of the first tok... | forward | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
def forward(
self, hidden_states: torch.FloatTensor, cls_index: Optional[torch.LongTensor] = None
) -> torch.FloatTensor:
"""
Compute a single vector summary of a sequence's hidden states.
Args:
hidden_states (`torch.FloatTensor` of shape `[batch_size, seq_len, hidden_size... |
Compute a single vector summary of a sequence's hidden states.
Args:
hidden_states (`torch.FloatTensor` of shape `[batch_size, seq_len, hidden_size]`):
The hidden states of the last layer.
cls_index (`torch.LongTensor` of shape `[batch_size]` or `[batch_size, ...]... | forward | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
def create_mask(self, qlen, mlen):
"""
Creates causal attention mask. Float mask where 1.0 indicates masked, 0.0 indicates not-masked.
Args:
qlen: Sequence length
mlen: Mask length
::
same_length=False: same_length=True: <mlen > < qlen > <mlen... |
Creates causal attention mask. Float mask where 1.0 indicates masked, 0.0 indicates not-masked.
Args:
qlen: Sequence length
mlen: Mask length
::
same_length=False: same_length=True: <mlen > < qlen > <mlen > < qlen >
^ [0 0 0 0 0 1 1 1 ... | create_mask | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
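For the `same_length=False` case sketched in the (truncated) diagram above, the mask reduces to an upper-triangular matrix offset by the memory length. A sketch using NumPy in place of torch, keeping the docstring's convention of 1.0 = masked, 0.0 = visible:

```python
import numpy as np

def create_causal_mask(qlen, mlen):
    """Causal attention mask with mlen memory slots prepended to the keys.
    Query position i may attend to every memory slot and to current
    positions 0..i; later key columns are masked (1.0)."""
    # for query row i, key column j is in the future when j - i > mlen
    return np.triu(np.ones((qlen, qlen + mlen)), k=mlen + 1)

mask = create_causal_mask(qlen=3, mlen=2)
```

The last query row sees every key, so its row is all zeros.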
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
mems: Optional[torch.Tensor] = None,
perm_mask: Optional[torch.Tensor] = None,
target_mapping: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch... |
mems (`List[torch.FloatTensor]` of length `config.n_layers`):
Contains pre-computed hidden-states (see `mems` output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as `input_ids` as
they have... | forward | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
mems: Optional[torch.Tensor] = None,
perm_mask: Optional[torch.Tensor] = None,
target_mapping: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch... |
mems (`List[torch.FloatTensor]` of length `config.n_layers`):
Contains pre-computed hidden-states (see `mems` output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as `input_ids` as
they have... | forward | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
mems: Optional[torch.Tensor] = None,
perm_mask: Optional[torch.Tensor] = None,
target_mapping: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch... |
mems (`List[torch.FloatTensor]` of length `config.n_layers`):
Contains pre-computed hidden-states (see `mems` output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as `input_ids` as
they have... | forward | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
mems: Optional[torch.Tensor] = None,
perm_mask: Optional[torch.Tensor] = None,
target_mapping: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch... |
mems (`List[torch.FloatTensor]` of length `config.n_layers`):
Contains pre-computed hidden-states (see `mems` output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as `input_ids` as
they have... | forward | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
input_mask: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
mems: Optional[torch.Tensor] = None,
perm_mask: Optional[torch.Ten... |
input_ids (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
[`PreTrainedTokenizer.__call__`] for details.
... | forward | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
mems: Optional[torch.Tensor] = None,
perm_mask: Optional[torch.Tensor] = None,
target_mapping: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch... |
mems (`List[torch.FloatTensor]` of length `config.n_layers`):
Contains pre-computed hidden-states (see `mems` output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as `input_ids` as
they have... | forward | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
mems: Optional[torch.Tensor] = None,
perm_mask: Optional[torch.Tensor] = None,
target_mapping: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch... |
mems (`List[torch.FloatTensor]` of length `config.n_layers`):
Contains pre-computed hidden-states (see `mems` output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as `input_ids` as
they have... | forward | python | huggingface/transformers | src/transformers/models/xlnet/modeling_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/modeling_xlnet.py | Apache-2.0 |
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLNet sequence has... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLNet sequence has the following format:
- single sequence: `X <sep> <cls>`
- pair of sequences: `A <sep> B <sep> <cls>`
Args:
... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/xlnet/tokenization_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/tokenization_xlnet.py | Apache-2.0 |
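The `X <sep> <cls>` / `A <sep> B <sep> <cls>` layout described above can be sketched as plain list concatenation. The `sep_id`/`cls_id` values below are placeholders for illustration, not XLNet's real vocabulary ids:

```python
def build_inputs_with_special_tokens(token_ids_0, token_ids_1=None,
                                     sep_id=998, cls_id=999):
    """XLNet layout: `X <sep> <cls>` for one sequence,
    `A <sep> B <sep> <cls>` for a pair."""
    sep, cls = [sep_id], [cls_id]
    if token_ids_1 is None:
        return token_ids_0 + sep + cls
    return token_ids_0 + sep + token_ids_1 + sep + cls

single = build_inputs_with_special_tokens([10, 11])
pair = build_inputs_with_special_tokens([10, 11], [20])
```

Note that unlike BERT, XLNet puts `<cls>` at the end of the sequence rather than the start.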
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*)... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/xlnet/tokenization_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/tokenization_xlnet.py | Apache-2.0 |
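When the inputs do not already carry special tokens, the mask follows directly from the XLNet layout: every `<sep>` and the trailing `<cls>` are marked 1, everything else 0. A minimal sketch under that assumption:

```python
def get_special_tokens_mask(token_ids_0, token_ids_1=None):
    """1 marks a special token, 0 a sequence token, mirroring the
    `A <sep> (B <sep>) <cls>` layout of the XLNet tokenizer."""
    if token_ids_1 is None:
        return [0] * len(token_ids_0) + [1, 1]           # <sep> <cls>
    return ([0] * len(token_ids_0) + [1]                 # A <sep>
            + [0] * len(token_ids_1) + [1, 1])           # B <sep> <cls>

mask_single = get_special_tokens_mask([10, 11])
mask_pair = get_special_tokens_mask([10, 11], [20])
```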
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet
sequence pair mask has the following format:
... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet
sequence pair mask has the following format:
```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
```
If `token_ids_1` is `Non... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/xlnet/tokenization_xlnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/tokenization_xlnet.py | Apache-2.0 |
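The segment-id layout above can be sketched with list arithmetic. One detail the truncated diagram elides: in the XLNet tokenizer the trailing `<cls>` token gets its own segment id (2) rather than sharing the second sequence's id:

```python
def create_token_type_ids(token_ids_0, token_ids_1=None):
    """Segment ids in the XLNet layout: first sequence plus its <sep>
    get 0, second sequence plus its <sep> get 1, trailing <cls> gets 2."""
    if token_ids_1 is None:
        return [0] * (len(token_ids_0) + 1) + [2]
    return ([0] * (len(token_ids_0) + 1)
            + [1] * (len(token_ids_1) + 1)
            + [2])

single = create_token_type_ids([10, 11])
pair = create_token_type_ids([10, 11], [20, 21, 22])
```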
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLNet sequence has... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLNet sequence has the following format:
- single sequence: `X <sep> <cls>`
- pair of sequences: `A <sep> B <sep> <cls>`
Args:
... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/xlnet/tokenization_xlnet_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/tokenization_xlnet_fast.py | Apache-2.0 |
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet
sequence pair mask has the following format:
... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet
sequence pair mask has the following format:
```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
```
If `token_ids_1` is `Non... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/xlnet/tokenization_xlnet_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlnet/tokenization_xlnet_fast.py | Apache-2.0 |
def create_position_ids_from_inputs_embeds(self, inputs_embeds):
"""
We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.
Args:
inputs_embeds: torch.Tensor
Returns: torch.Tensor
"""
input_shape = inp... |
We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.
Args:
inputs_embeds: torch.Tensor
Returns: torch.Tensor
| create_position_ids_from_inputs_embeds | python | huggingface/transformers | src/transformers/models/xmod/modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xmod/modeling_xmod.py | Apache-2.0 |
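Since raw embeddings carry no pad token to detect, the method simply emits sequential positions starting right after `padding_idx`. A NumPy sketch of that behavior (the real code returns a torch tensor):

```python
import numpy as np

def position_ids_from_inputs_embeds(inputs_embeds, padding_idx=1):
    """Sequential position ids starting at padding_idx + 1, broadcast
    to every sample in the batch."""
    batch_size, seq_len = inputs_embeds.shape[0], inputs_embeds.shape[1]
    ids = np.arange(padding_idx + 1, seq_len + padding_idx + 1)
    return np.broadcast_to(ids, (batch_size, seq_len))

pos = position_ids_from_inputs_embeds(np.zeros((2, 4, 8)))
```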
def set_default_language(self, language: str):
"""
Set the default language code for the model. This is used when the language is not specified in the input.
Args:
language (`str`): The language code, such as `"en_XX"` or `"de_DE"`.
"""
if language not in self.config... |
Set the default language code for the model. This is used when the language is not specified in the input.
Args:
language (`str`): The language code, such as `"en_XX"` or `"de_DE"`.
| set_default_language | python | huggingface/transformers | src/transformers/models/xmod/modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xmod/modeling_xmod.py | Apache-2.0 |
def freeze_embeddings_and_language_adapters(self):
"""
Freeze the embeddings and language adapters of the model. Usually, this is applied before the model is
fine-tuned on a downstream task.
"""
logger.info("Freezing embeddings")
for parameter in self.roberta.embeddings.p... |
Freeze the embeddings and language adapters of the model. Usually, this is applied before the model is
fine-tuned on a downstream task.
| freeze_embeddings_and_language_adapters | python | huggingface/transformers | src/transformers/models/xmod/modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xmod/modeling_xmod.py | Apache-2.0 |
def __init__(self, config, add_pooling_layer=True):
r"""
add_pooling_layer (bool, *optional*, defaults to `True`):
Whether to add a pooling layer
"""
super().__init__(config)
self.config = config
self.embeddings = XmodEmbeddings(config)
self.encoder =... |
add_pooling_layer (bool, *optional*, defaults to `True`):
Whether to add a pooling layer
| __init__ | python | huggingface/transformers | src/transformers/models/xmod/modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xmod/modeling_xmod.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
lang_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional... |
lang_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices of the language adapters that should be activated for each sample, respectively. Default: the index
that corresponds to `self.config.default_language`.
| forward | python | huggingface/transformers | src/transformers/models/xmod/modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xmod/modeling_xmod.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
lang_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
he... |
lang_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices of the language adapters that should be activated for each sample, respectively. Default: the index
that corresponds to `self.config.default_language`.
labels (`torch.LongTensor` of shape... | forward | python | huggingface/transformers | src/transformers/models/xmod/modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xmod/modeling_xmod.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
lang_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
he... |
lang_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices of the language adapters that should be activated for each sample, respectively. Default: the index
that corresponds to `self.config.default_language`.
labels (`torch.LongTensor` of shape... | forward | python | huggingface/transformers | src/transformers/models/xmod/modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xmod/modeling_xmod.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
lang_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
he... |
lang_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices of the language adapters that should be activated for each sample, respectively. Default: the index
that corresponds to `self.config.default_language`.
labels (`torch.LongTensor` of shape... | forward | python | huggingface/transformers | src/transformers/models/xmod/modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xmod/modeling_xmod.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
lang_ids: Optional[torch.LongTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
labels: Optional[torch.LongTensor] = None,
position... |
input_ids (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
[`PreTrainedTokenizer.__call__`] for details.
... | forward | python | huggingface/transformers | src/transformers/models/xmod/modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xmod/modeling_xmod.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
lang_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
he... |
lang_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices of the language adapters that should be activated for each sample, respectively. Default: the index
that corresponds to `self.config.default_language`.
labels (`torch.LongTensor` of shape... | forward | python | huggingface/transformers | src/transformers/models/xmod/modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xmod/modeling_xmod.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
lang_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
he... |
lang_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices of the language adapters that should be activated for each sample, respectively. Default: the index
that corresponds to `self.config.default_language`.
| forward | python | huggingface/transformers | src/transformers/models/xmod/modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xmod/modeling_xmod.py | Apache-2.0 |
def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
x: torch.Ten... |
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
input_ids: torch.Tensor
Returns: torch.Tensor
| create_position_ids_from_input_ids | python | huggingface/transformers | src/transformers/models/xmod/modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/xmod/modeling_xmod.py | Apache-2.0 |
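The fairseq-style trick described above — non-pad tokens get positions `padding_idx + 1, padding_idx + 2, ...` while pad positions stay pinned at `padding_idx` — can be sketched with a cumulative sum over the non-pad mask (NumPy stand-in for the torch version):

```python
import numpy as np

def create_position_ids_from_input_ids(input_ids, padding_idx,
                                       past_key_values_length=0):
    """Cumulatively count non-pad tokens, offset by padding_idx;
    multiplying by the mask keeps pad positions at padding_idx."""
    mask = (input_ids != padding_idx).astype(np.int64)
    incremental = (np.cumsum(mask, axis=1) + past_key_values_length) * mask
    return incremental + padding_idx

ids = np.array([[5, 6, 7, 1, 1]])          # 1 is the pad id here
pos = create_position_ids_from_input_ids(ids, padding_idx=1)
```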
def interpolate_pos_encoding(self, embeddings: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""
This method interpolates the pre-trained position encodings so that the model can be used on higher-resolution
images. It is also adapted to support torch.jit tracing.
... |
This method interpolates the pre-trained position encodings so that the model can be used on higher-resolution
images. It is also adapted to support torch.jit tracing.
Adapted from:
- https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362... | interpolate_pos_encoding | python | huggingface/transformers | src/transformers/models/x_clip/modeling_x_clip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/x_clip/modeling_x_clip.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
causal_attention_mask: torch.Tensor,
output_attentions: Optional[bool] = False,
) -> Tuple[torch.FloatTensor]:
"""
Args:
hidden_states (`torch.FloatTensor`): input to the... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
... | forward | python | huggingface/transformers | src/transformers/models/x_clip/modeling_x_clip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/x_clip/modeling_x_clip.py | Apache-2.0 |
def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
"""
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
h... |
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate pape... | drop_path | python | huggingface/transformers | src/transformers/models/x_clip/modeling_x_clip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/x_clip/modeling_x_clip.py | Apache-2.0 |
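Stochastic depth, as described above, drops the whole residual branch for a random subset of *samples* (one Bernoulli draw per sample, not per element) and rescales survivors by `1 / keep_prob` so the expected value is unchanged. A NumPy sketch of the same logic (the real implementation uses torch tensors):

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_path(x, drop_prob=0.0, training=False):
    """Zero out entire samples in the main path of a residual block;
    identity at eval time or when drop_prob == 0."""
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    # one Bernoulli draw per sample, broadcast across remaining dims
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    keep = (rng.random(shape) < keep_prob).astype(x.dtype)
    return x / keep_prob * keep

identity = drop_path(np.ones((2, 3)), drop_prob=0.5)   # eval: no-op
out = drop_path(np.ones((4, 3)), drop_prob=0.5, training=True)
```

Each row of `out` is either all zeros (dropped) or uniformly `1 / keep_prob` (kept and rescaled).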
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
causal_attention_mask: torch.Tensor,
output_attentions: Optional[bool] = False,
) -> Tuple[torch.FloatTensor]:
"""
Args:
hidden_states (`torch.FloatTensor`): input to the... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
... | forward | python | huggingface/transformers | src/transformers/models/x_clip/modeling_x_clip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/x_clip/modeling_x_clip.py | Apache-2.0 |
def forward(
self,
inputs_embeds,
attention_mask: Optional[torch.Tensor] = None,
causal_attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) ... |
Args:
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert `input_... | forward | python | huggingface/transformers | src/transformers/models/x_clip/modeling_x_clip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/x_clip/modeling_x_clip.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bo... |
Examples:
```python
>>> from transformers import AutoTokenizer, XCLIPTextModel
>>> model = XCLIPTextModel.from_pretrained("microsoft/xclip-base-patch32")
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/xclip-base-patch32")
>>> inputs = tokenizer(["a photo of ... | forward | python | huggingface/transformers | src/transformers/models/x_clip/modeling_x_clip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/x_clip/modeling_x_clip.py | Apache-2.0 |
def forward(
self,
inputs_embeds,
attention_mask: Optional[torch.Tensor] = None,
causal_attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) ... |
Args:
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert `input_... | forward | python | huggingface/transformers | src/transformers/models/x_clip/modeling_x_clip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/x_clip/modeling_x_clip.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, BaseModelOutputWithPooling]:
r"""
Examples:
... |
Examples:
```python
>>> import av
>>> import torch
>>> import numpy as np
>>> from transformers import AutoProcessor, XCLIPVisionModel
>>> from huggingface_hub import hf_hub_download
>>> np.random.seed(0)
>>> def read_video_pyav(container, in... | forward | python | huggingface/transformers | src/transformers/models/x_clip/modeling_x_clip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/x_clip/modeling_x_clip.py | Apache-2.0 |
def get_text_features(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: O... |
Returns:
text_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`): The text embeddings obtained by
applying the projection layer to the pooled output of [`XCLIPTextModel`].
Examples:
```python
>>> from transformers import AutoTokenizer, AutoModel
... | get_text_features | python | huggingface/transformers | src/transformers/models/x_clip/modeling_x_clip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/x_clip/modeling_x_clip.py | Apache-2.0 |
def get_video_features(
self,
pixel_values: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> torch.FloatTensor:
r"""
Returns:
video_f... |
Returns:
video_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`): The video embeddings obtained by
applying the projection layer to the pooled output of [`XCLIPVisionModel`] and
[`XCLIPMultiframeIntegrationTransformer`].
Examples:
```python
... | get_video_features | python | huggingface/transformers | src/transformers/models/x_clip/modeling_x_clip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/x_clip/modeling_x_clip.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
pixel_values: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
return_loss: Optional[bool] = None,
output_attentions... |
return_loss (`bool`, *optional*):
Whether or not to return the contrastive loss.
Examples:
```python
>>> import av
>>> import torch
>>> import numpy as np
>>> from transformers import AutoProcessor, AutoModel
>>> from huggingface_hub import... | forward | python | huggingface/transformers | src/transformers/models/x_clip/modeling_x_clip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/x_clip/modeling_x_clip.py | Apache-2.0 |
def __call__(self, text=None, videos=None, return_tensors=None, **kwargs):
"""
Main method to prepare for the model one or several sequence(s) and image(s). This method forwards the `text`
and `kwargs` arguments to CLIPTokenizerFast's [`~CLIPTokenizerFast.__call__`] if `text` is not `None` to e... |
Main method to prepare for the model one or several sequence(s) and image(s). This method forwards the `text`
and `kwargs` arguments to CLIPTokenizerFast's [`~CLIPTokenizerFast.__call__`] if `text` is not `None` to encode
the text. To prepare the image(s), this method forwards the `videos` and... | __call__ | python | huggingface/transformers | src/transformers/models/x_clip/processing_x_clip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/x_clip/processing_x_clip.py | Apache-2.0 |
def get_max_height_width(
images: List[np.ndarray], input_data_format: Optional[Union[str, ChannelDimension]] = None
) -> List[int]:
"""
Get the maximum height and width across all images in a batch.
"""
if input_data_format is None:
input_data_format = infer_channel_dimension_format(images[... |
Get the maximum height and width across all images in a batch.
| get_max_height_width | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
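The row above finds the common canvas size for a padded batch. A minimal pure-Python sketch of the same idea, assuming images are described as `(height, width)` pairs rather than the arrays the library inspects:

```python
def get_max_height_width(sizes):
    """Return the smallest (height, width) canvas that fits every image.

    `sizes` is a list of (height, width) pairs -- a simplification of the
    library's version, which infers each size from the array itself.
    """
    return (max(h for h, _ in sizes), max(w for _, w in sizes))
```

For a batch of sizes `(480, 640)` and `(600, 400)`, the padded canvas is `(600, 640)`.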
def get_size_with_aspect_ratio(
image_size: Tuple[int, int], size: int, max_size: Optional[int] = None, mod_size: int = 16
) -> Tuple[int, int]:
"""
Computes the output image size given the input image size and the desired output size, with each edge rounded to a multiple of `mod_size`.
Args:
image_size (`Tuple[in... |
Computes the output image size given the input image size and the desired output size, with each edge rounded to a multiple of `mod_size`.
Args:
image_size (`Tuple[int, int]`):
The input image size.
size (`int`):
The desired output size.
max_size (`int`, *optional*):
Th... | get_size_with_aspect_ratio | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
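A runnable sketch of the resize rule documented above: drive the shortest edge toward `size`, cap the longest edge at `max_size`, then snap both edges to a multiple of `mod_size`. The floor rounding used here is an assumption and may differ in detail from the library's implementation:

```python
def size_with_aspect_ratio(image_size, size, max_size=None, mod_size=16):
    height, width = image_size
    short, long = min(height, width), max(height, width)
    # Shrink the target if scaling the short edge would push the long
    # edge past max_size.
    if max_size is not None and size * long / short > max_size:
        size = int(max_size * short / long)
    if height <= width:
        new_h, new_w = size, int(size * width / height)
    else:
        new_h, new_w = int(size * height / width), size
    # Snap both edges down to a multiple of mod_size (floor rounding
    # is an assumption in this sketch).
    new_h = max(mod_size, new_h // mod_size * mod_size)
    new_w = max(mod_size, new_w // mod_size * mod_size)
    return new_h, new_w
```

For a 480x640 image with `size=800`, this yields `(800, 1056)`: the long edge scales to 1066 and is floored to the nearest multiple of 16.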
def get_image_size_for_max_height_width(
input_image: np.ndarray,
max_height: int,
max_width: int,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
) -> Tuple[int, int]:
"""
Computes the output image size given the input image and the maximum allowed height and width. Keep aspec... |
Computes the output image size given the input image and the maximum allowed height and width, keeping the aspect
ratio. Importantly, even if image_height < max_height and image_width < max_width, the image will be resized so
that at least one of its edges equals max_height or max_width.
For example:
-... | get_image_size_for_max_height_width | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
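The bound-fitting resize described above can be sketched in a few lines; note that, as the docstring states, the image is scaled *up* when it is smaller than both bounds:

```python
def size_for_max_height_width(image_size, max_height, max_width):
    """Scale an image, keeping its aspect ratio, so that at least one
    edge exactly hits its bound (a simplified sketch; the library also
    handles channel layout, which is omitted here)."""
    height, width = image_size
    ratio = min(max_height / height, max_width / width)
    return int(round(height * ratio)), int(round(width * ratio))
```

A 200x100 image bounded by 100x100 becomes 100x50; a 50x25 image with the same bounds is enlarged to 100x50.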
def get_resize_output_image_size(
input_image: np.ndarray,
size: Union[int, Tuple[int, int], List[int]],
max_size: Optional[int] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
) -> Tuple[int, int]:
"""
Computes the output image size given the input image size and the d... |
Computes the output image size given the input image size and the desired output size. If the desired output size
is a tuple or list, the output image size is returned as is. If the desired output size is an integer, the output
image size is computed by keeping the aspect ratio of the input image size.
... | get_resize_output_image_size | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def get_numpy_to_framework_fn(arr) -> Callable:
"""
Returns a function that converts a numpy array to the framework of the input array.
Args:
arr (`np.ndarray`): The array to convert.
"""
if isinstance(arr, np.ndarray):
return np.array
if is_tf_available() and is_tf_tensor(arr):... |
Returns a function that converts a numpy array to the framework of the input array.
Args:
arr (`np.ndarray`): The array to convert.
| get_numpy_to_framework_fn | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def safe_squeeze(arr: np.ndarray, axis: Optional[int] = None) -> np.ndarray:
"""
Squeezes an array, but only if the axis specified has dim 1.
"""
if axis is None:
return arr.squeeze()
try:
return arr.squeeze(axis=axis)
except ValueError:
return arr |
Squeezes an array, but only if the axis specified has dim 1.
| safe_squeeze | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def make_pixel_mask(
image: np.ndarray, output_size: Tuple[int, int], input_data_format: Optional[Union[str, ChannelDimension]] = None
) -> np.ndarray:
"""
Make a pixel mask for the image, where 1 indicates a valid pixel and 0 indicates padding.
Args:
image (`np.ndarray`):
Image to ... |
Make a pixel mask for the image, where 1 indicates a valid pixel and 0 indicates padding.
Args:
image (`np.ndarray`):
Image to make the pixel mask for.
output_size (`Tuple[int, int]`):
Output size of the mask.
| make_pixel_mask | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
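The pixel-mask convention above (1 for valid pixels, 0 for padding) can be sketched with nested lists standing in for the library's numpy arrays:

```python
def make_pixel_mask(image_size, output_size):
    """Build a pixel mask for a padded canvas: 1 marks pixels that came
    from the original image, 0 marks bottom/right padding."""
    img_h, img_w = image_size
    out_h, out_w = output_size
    return [[1 if y < img_h and x < img_w else 0 for x in range(out_w)]
            for y in range(out_h)]
```

A 2x3 image padded to 3x4 keeps a 2x3 block of ones in the top-left corner.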
def convert_coco_poly_to_mask(segmentations, height: int, width: int) -> np.ndarray:
"""
Convert a COCO polygon annotation to a mask.
Args:
segmentations (`List[List[float]]`):
List of polygons, each polygon represented by a list of x-y coordinates.
height (`int`):
H... |
Convert a COCO polygon annotation to a mask.
Args:
segmentations (`List[List[float]]`):
List of polygons, each polygon represented by a list of x-y coordinates.
height (`int`):
Height of the mask.
width (`int`):
Width of the mask.
| convert_coco_poly_to_mask | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def prepare_coco_detection_annotation(
image,
target,
return_segmentation_masks: bool = False,
input_data_format: Optional[Union[ChannelDimension, str]] = None,
):
"""
Convert the target in COCO format into the format expected by DETR.
"""
image_height, image_width = get_image_size(image... |
Convert the target in COCO format into the format expected by DETR.
| prepare_coco_detection_annotation | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def masks_to_boxes(masks: np.ndarray) -> np.ndarray:
"""
Compute the bounding boxes around the provided panoptic segmentation masks.
Args:
masks: masks in format `[number_masks, height, width]`, where `number_masks` is the number of masks
Returns:
boxes: bounding boxes in format `[number_masks, 4]` ... |
Compute the bounding boxes around the provided panoptic segmentation masks.
Args:
masks: masks in format `[number_masks, height, width]`, where `number_masks` is the number of masks
Returns:
boxes: bounding boxes in format `[number_masks, 4]` in xyxy format
| masks_to_boxes | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
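The mask-to-box computation can be sketched in pure Python, with nested 0/1 lists of shape `[number_masks][height][width]` standing in for the array-based version above:

```python
def masks_to_boxes(masks):
    """Derive one xyxy bounding box per binary mask: the min/max of the
    x and y coordinates of all nonzero pixels."""
    boxes = []
    for mask in masks:
        xs = [x for row in mask for x, v in enumerate(row) if v]
        ys = [y for y, row in enumerate(mask) if any(row)]
        if not xs:  # empty mask -> degenerate box, by convention here
            boxes.append([0, 0, 0, 0])
            continue
        boxes.append([min(xs), min(ys), max(xs), max(ys)])
    return boxes
```

A mask whose ones occupy rows 1-2 and columns 2-3 produces the box `[2, 1, 3, 2]`.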
def prepare_coco_panoptic_annotation(
image: np.ndarray,
target: Dict,
masks_path: Union[str, pathlib.Path],
return_masks: bool = True,
input_data_format: Union[ChannelDimension, str] = None,
) -> Dict:
"""
Prepare a coco panoptic annotation for YOLOS.
"""
image_height, image_width =... |
Prepare a coco panoptic annotation for YOLOS.
| prepare_coco_panoptic_annotation | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def resize_annotation(
annotation: Dict[str, Any],
orig_size: Tuple[int, int],
target_size: Tuple[int, int],
threshold: float = 0.5,
resample: PILImageResampling = PILImageResampling.NEAREST,
):
"""
Resizes an annotation to a target size.
Args:
annotation (`Dict[str, Any]`):
... |
Resizes an annotation to a target size.
Args:
annotation (`Dict[str, Any]`):
The annotation dictionary.
orig_size (`Tuple[int, int]`):
The original size of the input image.
target_size (`Tuple[int, int]`):
The target size of the image, as returned by... | resize_annotation | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def binary_mask_to_rle(mask):
"""
Converts given binary mask of shape `(height, width)` to the run-length encoding (RLE) format.
Args:
mask (`torch.Tensor` or `numpy.array`):
A binary mask tensor of shape `(height, width)` where 0 denotes background and 1 denotes the target
... |
Converts given binary mask of shape `(height, width)` to the run-length encoding (RLE) format.
Args:
mask (`torch.Tensor` or `numpy.array`):
A binary mask tensor of shape `(height, width)` where 0 denotes background and 1 denotes the target
segment_id or class_id.
Returns:
... | binary_mask_to_rle | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
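The run-length encoding above can be sketched over an already-flattened mask. The output alternates 1-indexed run starts and run lengths, matching the behavior of the library's numpy version (padding with a leading and trailing zero, diffing, then converting boundary positions to lengths):

```python
def binary_mask_to_rle(flat_mask):
    """Run-length encode a flattened binary mask into alternating
    [start, length, start, length, ...] entries with 1-indexed starts."""
    padded = [0] + list(flat_mask) + [0]
    # 1-indexed positions where the value changes: run boundaries
    changes = [i + 1 for i in range(len(padded) - 1) if padded[i] != padded[i + 1]]
    # even entries are run starts; turn the odd entries into run lengths
    return [c - changes[i - 1] if i % 2 else c for i, c in enumerate(changes)]
```

`binary_mask_to_rle([0, 0, 1, 1, 1, 0, 1])` returns `[3, 3, 7, 1]`: a run of three ones starting at position 3, then a single one at position 7.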
def convert_segmentation_to_rle(segmentation):
"""
Converts given segmentation map of shape `(height, width)` to the run-length encoding (RLE) format.
Args:
segmentation (`torch.Tensor` or `numpy.array`):
A segmentation map of shape `(height, width)` where each value denotes a segment o... |
Converts given segmentation map of shape `(height, width)` to the run-length encoding (RLE) format.
Args:
segmentation (`torch.Tensor` or `numpy.array`):
A segmentation map of shape `(height, width)` where each value denotes a segment or class id.
Returns:
`List[List]`: A list ... | convert_segmentation_to_rle | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def remove_low_and_no_objects(masks, scores, labels, object_mask_threshold, num_labels):
"""
Binarize the given masks using `object_mask_threshold` and return the associated values of `masks`, `scores` and
`labels`.
Args:
masks (`torch.Tensor`):
A tensor of shape `(num_queries, hei... |
Binarize the given masks using `object_mask_threshold` and return the associated values of `masks`, `scores` and
`labels`.
Args:
masks (`torch.Tensor`):
A tensor of shape `(num_queries, height, width)`.
scores (`torch.Tensor`):
A tensor of shape `(num_queries)`.
... | remove_low_and_no_objects | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
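The filtering logic above reduces to a keep-list over query indices; here is a sketch with plain lists standing in for the tensors:

```python
def remove_low_and_no_objects(masks, scores, labels, object_mask_threshold, num_labels):
    """Keep only the queries whose score exceeds the threshold and whose
    predicted label is not the no-object class (assumed id == num_labels)."""
    keep = [i for i, (score, label) in enumerate(zip(scores, labels))
            if label != num_labels and score > object_mask_threshold]
    return ([masks[i] for i in keep],
            [scores[i] for i in keep],
            [labels[i] for i in keep])
```

With scores `[0.9, 0.2, 0.8]`, labels `[1, 2, 3]` and `num_labels=3`, only the first query survives a 0.5 threshold: the second is too weak and the third is the no-object class.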
def from_dict(cls, image_processor_dict: Dict[str, Any], **kwargs):
"""
Overrides the `from_dict` method from the base class to make sure parameters are updated if image processor is
created using from_dict and kwargs e.g. `YolosImageProcessor.from_pretrained(checkpoint, size=600,
max_si... |
Overrides the `from_dict` method from the base class to make sure parameters are updated if image processor is
created using from_dict and kwargs e.g. `YolosImageProcessor.from_pretrained(checkpoint, size=600,
max_size=800)`
| from_dict | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def prepare_annotation(
self,
image: np.ndarray,
target: Dict,
format: Optional[AnnotationFormat] = None,
return_segmentation_masks: Optional[bool] = None,
masks_path: Optional[Union[str, pathlib.Path]] = None,
input_data_format: Optional[Union[str, ChannelDimensi... |
Prepare an annotation for feeding into DETR model.
| prepare_annotation | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def resize(
self,
image: np.ndarray,
size: Dict[str, int],
resample: PILImageResampling = PILImageResampling.BILINEAR,
data_format: Optional[ChannelDimension] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
**kwargs,
) -> np.ndarray:
... |
Resize the image to the given size. Size can be a `min_size` (scalar) or a `(height, width)` tuple. If size is an
int, the smaller edge of the image will be matched to this number.
Args:
image (`np.ndarray`):
Image to resize.
size (`Dict[str, int]`):
... | resize | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def resize_annotation(
self,
annotation,
orig_size,
size,
resample: PILImageResampling = PILImageResampling.NEAREST,
) -> Dict:
"""
Resize the annotation to match the resized image. If size is an int, smaller edge of the mask will be matched
to this nu... |
Resize the annotation to match the resized image. If size is an int, the smaller edge of the mask will be matched
to this number.
| resize_annotation | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def rescale(
self,
image: np.ndarray,
rescale_factor: float,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
) -> np.ndarray:
"""
Rescale the image by the given factor. image = image... |
Rescale the image by the given factor: `image = image * rescale_factor`.
Args:
image (`np.ndarray`):
Image to rescale.
rescale_factor (`float`):
The value to use for rescaling.
data_format (`str` or `ChannelDimension`, *optional*):
... | rescale | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def _update_annotation_for_padded_image(
self,
annotation: Dict,
input_image_size: Tuple[int, int],
output_image_size: Tuple[int, int],
padding,
update_bboxes,
) -> Dict:
"""
Update the annotation for a padded image.
"""
new_annotation ... |
Update the annotation for a padded image.
| _update_annotation_for_padded_image | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def _pad_image(
self,
image: np.ndarray,
output_size: Tuple[int, int],
annotation: Optional[Dict[str, Any]] = None,
constant_values: Union[float, Iterable[float]] = 0,
data_format: Optional[ChannelDimension] = None,
input_data_format: Optional[Union[str, ChannelDi... |
Pad an image with zeros to the given size.
| _pad_image | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def pad(
self,
images: List[np.ndarray],
annotations: Optional[List[Dict[str, Any]]] = None,
constant_values: Union[float, Iterable[float]] = 0,
return_pixel_mask: bool = False,
return_tensors: Optional[Union[str, TensorType]] = None,
data_format: Optional[Channel... |
Pads a batch of images with zeros on the bottom and right to the size of the largest height and width
in the batch and optionally returns their corresponding pixel mask.
Args:
images (`List[np.ndarray]`):
Images to pad.
annotations (`List[Dict[str, any]... | pad | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def preprocess(
self,
images: ImageInput,
annotations: Optional[Union[AnnotationType, List[AnnotationType]]] = None,
return_segmentation_masks: Optional[bool] = None,
masks_path: Optional[Union[str, pathlib.Path]] = None,
do_resize: Optional[bool] = None,
size: Op... |
Preprocess an image or a batch of images so that it can be used by the model.
Args:
images (`ImageInput`):
Image or batch of images to preprocess. Expects a single image or a batch of images with pixel values ranging
from 0 to 255. If passing in images with pixel va... | preprocess | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def post_process(self, outputs, target_sizes):
"""
Converts the raw output of [`YolosForObjectDetection`] into final bounding boxes in (top_left_x, top_left_y,
bottom_right_x, bottom_right_y) format. Only supports PyTorch.
Args:
outputs ([`YolosObjectDetectionOutput`]):
... |
Converts the raw output of [`YolosForObjectDetection`] into final bounding boxes in (top_left_x, top_left_y,
bottom_right_x, bottom_right_y) format. Only supports PyTorch.
Args:
outputs ([`YolosObjectDetectionOutput`]):
Raw outputs of the model.
target_s... | post_process | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
def post_process_object_detection(
self, outputs, threshold: float = 0.5, target_sizes: Union[TensorType, List[Tuple]] = None
):
"""
Converts the raw output of [`YolosForObjectDetection`] into final bounding boxes in (top_left_x, top_left_y,
bottom_right_x, bottom_right_y) format. On... |
Converts the raw output of [`YolosForObjectDetection`] into final bounding boxes in (top_left_x, top_left_y,
bottom_right_x, bottom_right_y) format. Only supports PyTorch.
Args:
outputs ([`YolosObjectDetectionOutput`]):
Raw outputs of the model.
threshol... | post_process_object_detection | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos.py | Apache-2.0 |
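The coordinate step of the post-processing above converts the model's normalized center-format boxes into absolute corner coordinates. A sketch for a single box (scoring and thresholding are omitted):

```python
def center_to_corners(box, image_size):
    """Convert one normalized (cx, cy, w, h) box to absolute
    (x0, y0, x1, y1) pixel coordinates for an image of `image_size`
    given as (height, width)."""
    cx, cy, w, h = box
    img_h, img_w = image_size
    return ((cx - w / 2) * img_w, (cy - h / 2) * img_h,
            (cx + w / 2) * img_w, (cy + h / 2) * img_h)
```

A centered box covering half of each dimension of a 100x200 image maps to `(50.0, 25.0, 150.0, 75.0)`.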
def convert_coco_poly_to_mask(segmentations, height: int, width: int, device: torch.device) -> torch.Tensor:
"""
Convert a COCO polygon annotation to a mask.
Args:
segmentations (`List[List[float]]`):
List of polygons, each polygon represented by a list of x-y coordinates.
heigh... |
Convert a COCO polygon annotation to a mask.
Args:
segmentations (`List[List[float]]`):
List of polygons, each polygon represented by a list of x-y coordinates.
height (`int`):
Height of the mask.
width (`int`):
Width of the mask.
| convert_coco_poly_to_mask | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def prepare_coco_detection_annotation(
image,
target,
return_segmentation_masks: bool = False,
input_data_format: Optional[Union[ChannelDimension, str]] = None,
):
"""
Convert the target in COCO format into the format expected by YOLOS.
"""
image_height, image_width = image.size()[-2:]
... |
Convert the target in COCO format into the format expected by YOLOS.
| prepare_coco_detection_annotation | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def masks_to_boxes(masks: torch.Tensor) -> torch.Tensor:
"""
Compute the bounding boxes around the provided panoptic segmentation masks.
Args:
masks: masks in format `[number_masks, height, width]`, where `number_masks` is the number of masks
Returns:
boxes: bounding boxes in format `[number_masks, ... |
Compute the bounding boxes around the provided panoptic segmentation masks.
Args:
masks: masks in format `[number_masks, height, width]`, where `number_masks` is the number of masks
Returns:
boxes: bounding boxes in format `[number_masks, 4]` in xyxy format
| masks_to_boxes | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def rgb_to_id(color):
"""
Converts RGB color to unique ID.
"""
if isinstance(color, torch.Tensor) and len(color.shape) == 3:
if color.dtype == torch.uint8:
color = color.to(torch.int32)
return color[:, :, 0] + 256 * color[:, :, 1] + 256 * 256 * color[:, :, 2]
return int(c... |
Converts RGB color to unique ID.
| rgb_to_id | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
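The packing performed by the row above follows the panoptic COCO convention `id = R + 256*G + 256*256*B`. A scalar sketch, with the inverse mapping added here for illustration (it is not part of the row above):

```python
def rgb_to_id(color):
    """Pack an (R, G, B) triple into a single integer segment id."""
    r, g, b = color
    return r + 256 * g + 256 * 256 * b

def id_to_rgb(segment_id):
    """Unpack a segment id back into its (R, G, B) components."""
    return (segment_id % 256, segment_id // 256 % 256, segment_id // 65536 % 256)
```

The two functions are exact inverses for 8-bit channel values, so any color round-trips through its id.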
def prepare_coco_panoptic_annotation(
image: torch.Tensor,
target: Dict,
masks_path: Union[str, pathlib.Path],
return_masks: bool = True,
input_data_format: Union[ChannelDimension, str] = None,
) -> Dict:
"""
Prepare a coco panoptic annotation for YOLOS.
"""
image_height, image_width... |
Prepare a coco panoptic annotation for YOLOS.
| prepare_coco_panoptic_annotation | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def get_size_with_aspect_ratio(
image_size: Tuple[int, int], size: int, max_size: Optional[int] = None, mod_size: int = 16
) -> Tuple[int, int]:
"""
Computes the output image size given the input image size and the desired output size, with each edge rounded to a multiple of `mod_size`.
Args:
image_size (`Tuple[in... |
Computes the output image size given the input image size and the desired output size, with each edge rounded to a multiple of `mod_size`.
Args:
image_size (`Tuple[int, int]`):
The input image size.
size (`int`):
The desired output size.
max_size (`int`, *optional*):
Th... | get_size_with_aspect_ratio | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def from_dict(cls, image_processor_dict: Dict[str, Any], **kwargs):
"""
Overrides the `from_dict` method from the base class to make sure parameters are updated if image processor is
created using from_dict and kwargs e.g. `YolosImageProcessorFast.from_pretrained(checkpoint, size=600,
ma... |
Overrides the `from_dict` method from the base class to make sure parameters are updated if image processor is
created using from_dict and kwargs e.g. `YolosImageProcessorFast.from_pretrained(checkpoint, size=600,
max_size=800)`
| from_dict | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def prepare_annotation(
self,
image: torch.Tensor,
target: Dict,
format: Optional[AnnotationFormat] = None,
return_segmentation_masks: Optional[bool] = None,
masks_path: Optional[Union[str, pathlib.Path]] = None,
input_data_format: Optional[Union[str, ChannelDimen... |
Prepare an annotation for feeding into YOLOS model.
| prepare_annotation | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def resize(
self,
image: torch.Tensor,
size: SizeDict,
interpolation: "F.InterpolationMode" = None,
**kwargs,
) -> torch.Tensor:
"""
Resize the image to the given size. Size can be `min_size` (scalar) or `(height, width)` tuple. If size is an
int, smal... |
Resize the image to the given size. Size can be a `min_size` (scalar) or a `(height, width)` tuple. If size is an
int, the smaller edge of the image will be matched to this number.
Args:
image (`torch.Tensor`):
Image to resize.
size (`SizeDict`):
... | resize | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def resize_annotation(
self,
annotation: Dict[str, Any],
orig_size: Tuple[int, int],
target_size: Tuple[int, int],
threshold: float = 0.5,
interpolation: "F.InterpolationMode" = None,
):
"""
Resizes an annotation to a target size.
Args:
... |
Resizes an annotation to a target size.
Args:
annotation (`Dict[str, Any]`):
The annotation dictionary.
orig_size (`Tuple[int, int]`):
The original size of the input image.
target_size (`Tuple[int, int]`):
The target s... | resize_annotation | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def _update_annotation_for_padded_image(
self,
annotation: Dict,
input_image_size: Tuple[int, int],
output_image_size: Tuple[int, int],
padding,
update_bboxes,
) -> Dict:
"""
Update the annotation for a padded image.
"""
new_annotation ... |
Update the annotation for a padded image.
| _update_annotation_for_padded_image | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def preprocess(
self,
images: ImageInput,
annotations: Optional[Union[AnnotationType, List[AnnotationType]]] = None,
masks_path: Optional[Union[str, pathlib.Path]] = None,
**kwargs: Unpack[YolosFastImageProcessorKwargs],
) -> BatchFeature:
r"""
annotations (`A... |
annotations (`AnnotationType` or `List[AnnotationType]`, *optional*):
List of annotations associated with the image or batch of images. If annotation is for object
detection, the annotations should be a dictionary with the following keys:
- "image_id" (`int`): The image id.
... | preprocess | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def _preprocess(
self,
images: List["torch.Tensor"],
annotations: Optional[Union[AnnotationType, List[AnnotationType]]],
masks_path: Optional[Union[str, pathlib.Path]],
return_segmentation_masks: bool,
do_resize: bool,
size: SizeDict,
interpolation: Option... |
Preprocess an image or a batch of images so that it can be used by the model.
| _preprocess | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def post_process(self, outputs, target_sizes):
"""
Converts the raw output of [`YolosForObjectDetection`] into final bounding boxes in (top_left_x,
top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.
Args:
outputs ([`YolosObjectDetectionOutput`]):
... |
Converts the raw output of [`YolosForObjectDetection`] into final bounding boxes in (top_left_x,
top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.
Args:
outputs ([`YolosObjectDetectionOutput`]):
Raw outputs of the model.
target_s... | post_process | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def post_process_object_detection(
self, outputs, threshold: float = 0.5, target_sizes: Union[TensorType, List[Tuple]] = None, top_k: int = 100
):
"""
Converts the raw output of [`YolosForObjectDetection`] into final bounding boxes in (top_left_x,
top_left_y, bottom_right_x, bottom_r... |
Converts the raw output of [`YolosForObjectDetection`] into final bounding boxes in (top_left_x,
top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.
Args:
outputs ([`YolosObjectDetectionOutput`]):
Raw outputs of the model.
threshol... | post_process_object_detection | python | huggingface/transformers | src/transformers/models/yolos/image_processing_yolos_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/image_processing_yolos_fast.py | Apache-2.0 |
def __init__(self, config: YolosConfig, add_pooling_layer: bool = True):
r"""
add_pooling_layer (bool, *optional*, defaults to `True`):
Whether to add a pooling layer
"""
super().__init__(config)
self.config = config
self.embeddings = YolosEmbeddings(config)
... |
add_pooling_layer (bool, *optional*, defaults to `True`):
Whether to add a pooling layer
| __init__ | python | huggingface/transformers | src/transformers/models/yolos/modeling_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/modeling_yolos.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.FloatTensor,
labels: Optional[List[Dict]] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, YolosObjectDetectionOutput]:
r... |
labels (`List[Dict]` of len `(batch_size,)`, *optional*):
Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the
following 2 keys: `'class_labels'` and `'boxes'` (the class labels and bounding boxes of an image in the
batch r... | forward | python | huggingface/transformers | src/transformers/models/yolos/modeling_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/modeling_yolos.py | Apache-2.0 |
def get_size_with_aspect_ratio(
image_size: Tuple[int, int], size: int, max_size: Optional[int] = None, mod_size: int = 16
) -> Tuple[int, int]:
"""
Computes the output image size given the input image size and the desired output size, rounded to a multiple of `mod_size`.
Args:
image_size (`Tuple[in... |
Computes the output image size given the input image size and the desired output size, rounded to a multiple of `mod_size`.
Args:
image_size (`Tuple[int, int]`):
The input image size.
size (`int`):
The desired output size.
max_size (`int`, *optional*):
Th... | get_size_with_aspect_ratio | python | huggingface/transformers | src/transformers/models/yolos/modular_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/modular_yolos.py | Apache-2.0 |
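A runnable sketch of the resizing rule the `get_size_with_aspect_ratio` row describes: scale the shorter side to `size`, cap the result when the longer side would exceed `max_size`, and make both sides multiples of `mod_size`. The rounding direction (up, via `ceil`) is an assumption here, since the source code column is truncated:

```python
import math
from typing import Optional, Tuple

def get_size_with_aspect_ratio(
    image_size: Tuple[int, int], size: int, max_size: Optional[int] = None, mod_size: int = 16
) -> Tuple[int, int]:
    height, width = image_size
    # Shrink `size` if scaling the shorter side to it would push the
    # longer side past max_size.
    if max_size is not None:
        min_side, max_side = min(height, width), max(height, width)
        if max_side / min_side * size > max_size:
            size = int(round(max_size * min_side / max_side))
    # Scale the shorter side to `size`, preserving the aspect ratio.
    if height <= width:
        new_h, new_w = size, int(size * width / height)
    else:
        new_h, new_w = int(size * height / width), size
    # Round each side up to the nearest multiple of mod_size.
    new_h = math.ceil(new_h / mod_size) * mod_size
    new_w = math.ceil(new_w / mod_size) * mod_size
    return new_h, new_w

print(get_size_with_aspect_ratio((480, 640), 800))  # both sides divisible by 16
```

Note that rounding up after the `max_size` cap can leave a side slightly above `max_size`; the divisibility constraint wins over the cap in this sketch.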
def post_process(self, outputs, target_sizes):
"""
Converts the raw output of [`YolosForObjectDetection`] into final bounding boxes in (top_left_x,
top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.
Args:
outputs ([`YolosObjectDetectionOutput`]):
... |
Converts the raw output of [`YolosForObjectDetection`] into final bounding boxes in (top_left_x,
top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.
Args:
outputs ([`YolosObjectDetectionOutput`]):
Raw outputs of the model.
target_s... | post_process | python | huggingface/transformers | src/transformers/models/yolos/modular_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/modular_yolos.py | Apache-2.0 |
def post_process_object_detection(
self, outputs, threshold: float = 0.5, target_sizes: Union[TensorType, List[Tuple]] = None, top_k: int = 100
):
"""
Converts the raw output of [`YolosForObjectDetection`] into final bounding boxes in (top_left_x,
top_left_y, bottom_right_x, bottom_r... |
Converts the raw output of [`YolosForObjectDetection`] into final bounding boxes in (top_left_x,
top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.
Args:
outputs ([`YolosObjectDetectionOutput`]):
Raw outputs of the model.
threshol... | post_process_object_detection | python | huggingface/transformers | src/transformers/models/yolos/modular_yolos.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yolos/modular_yolos.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optiona... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/yoso/modeling_yoso.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yoso/modeling_yoso.py | Apache-2.0 |
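The `-100` convention in the labels description above (ignored positions contribute nothing to the loss) can be illustrated with a small NumPy sketch. This is a hand-rolled cross-entropy, not the `torch.nn.CrossEntropyLoss(ignore_index=-100)` the model actually uses:

```python
import numpy as np

def masked_lm_loss(logits: np.ndarray, labels: np.ndarray, ignore_index: int = -100) -> float:
    """Mean cross-entropy over the positions whose label is not ignore_index."""
    # Numerically stable log-softmax over the vocabulary axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    mask = labels != ignore_index                     # positions that count
    token_log_probs = log_probs[mask, labels[mask]]   # log-prob of each supervised token
    return float(-token_log_probs.mean())

logits = np.zeros((3, 4))          # uniform predictions over a 4-token vocab
labels = np.array([1, -100, 2])    # the middle position is masked out
loss = masked_lm_loss(logits, labels)  # averaged over the 2 supervised positions
```

With uniform logits every supervised token has probability 1/4, so the loss is `log(4)` regardless of how many positions carry `-100`.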
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optiona... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if
`confi... | forward | python | huggingface/transformers | src/transformers/models/yoso/modeling_yoso.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yoso/modeling_yoso.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optiona... |
input_ids (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
[`PreTrainedTokenizer.__call__`] for details.
... | forward | python | huggingface/transformers | src/transformers/models/yoso/modeling_yoso.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yoso/modeling_yoso.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optiona... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
| forward | python | huggingface/transformers | src/transformers/models/yoso/modeling_yoso.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/yoso/modeling_yoso.py | Apache-2.0 |
def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
"""
This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
"""
batch, num_key_value_... |
This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
| repeat_kv | python | huggingface/transformers | src/transformers/models/zamba/modeling_zamba.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba/modeling_zamba.py | Apache-2.0 |
def reorder_cache(self, beam_idx: torch.LongTensor):
"""Reorders the cache for beam search, given the selected beam indices."""
for layer_idx in range(len(self.key_cache)):
device = self.key_cache[layer_idx].device
self.key_cache[layer_idx] = self.key_cache[layer_idx].index_selec... | Reorders the cache for beam search, given the selected beam indices. | reorder_cache | python | huggingface/transformers | src/transformers/models/zamba/modeling_zamba.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba/modeling_zamba.py | Apache-2.0 |
def get_seq_length(self, layer_idx: Optional[int] = 0) -> int:
"""Returns the sequence length of the cached states. A layer index can be optionally passed."""
# take any layer that contains cache and not empty tensor
layer_idx = self.transformer_layers[0] if layer_idx not in self.transformer_lay... | Returns the sequence length of the cached states. A layer index can be optionally passed. | get_seq_length | python | huggingface/transformers | src/transformers/models/zamba/modeling_zamba.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba/modeling_zamba.py | Apache-2.0 |