| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def _timesfm_moving_average(arr: torch.Tensor, window_size: int) -> list[torch.Tensor]:
"""Calculates the moving average using PyTorch's convolution function."""
# Pad with zeros to handle initial window positions
arr_padded = F.pad(arr, (window_size - 1, 0), "constant", 0)
# Create a co... | Calculates the moving average using PyTorch's convolution function. | _timesfm_moving_average | python | huggingface/transformers | src/transformers/models/timesfm/modeling_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modeling_timesfm.py | Apache-2.0 |
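The truncated `_timesfm_moving_average` row above zero-pads the input on the left and then convolves with an averaging kernel. A NumPy sketch of the same idea (not the library's exact code, which uses `F.pad` and a PyTorch convolution):

```python
import numpy as np

def moving_average(arr, window_size):
    # Left-pad with zeros so the output has the same length as the input,
    # mirroring F.pad(arr, (window_size - 1, 0), "constant", 0) in the row above.
    padded = np.concatenate([np.zeros(window_size - 1), arr])
    kernel = np.ones(window_size) / window_size
    # A "valid" convolution over the padded signal yields len(arr) outputs.
    return np.convolve(padded, kernel, mode="valid")

print(moving_average(np.array([1.0, 2.0, 3.0, 4.0]), 2))  # [0.5 1.5 2.5 3.5]
```

Note that the first `window_size - 1` outputs are biased toward zero because of the zero padding.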
def forward(self, seq_length=None, position=None):
"""Generates a Tensor of sinusoids with different frequencies.
Args:
seq_length: an optional Python int defining the output sequence length; not needed
if the `position` argument is specified.
position: [B, seq_length], optio... | Generates a Tensor of sinusoids with different frequencies.
Args:
seq_length: an optional Python int defining the output sequence length; not needed
if the `position` argument is specified.
position: [B, seq_length], optional position for each token in the
sequence, onl... | forward | python | huggingface/transformers | src/transformers/models/timesfm/modular_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modular_timesfm.py | Apache-2.0 |
def forward(
self,
past_values: torch.Tensor,
past_values_padding: torch.LongTensor,
freq: torch.Tensor,
output_attentions: bool = False,
output_hidden_states: bool = False,
) -> TimesFmOutput:
r"""
past_values_padding (`torch.LongTensor` of shape `(ba... |
past_values_padding (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
The padding indicator of the time series.
past_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Past values of the time series that serves as input to the model.
freq... | forward | python | huggingface/transformers | src/transformers/models/timesfm/modular_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modular_timesfm.py | Apache-2.0 |
def _prepare_4d_attention_mask(
attention_mask: Optional[torch.Tensor],
sequence_length: int,
dtype: torch.dtype,
device: torch.device,
is_causal: bool = True,
) -> Optional[torch.Tensor]:
"""
Creates 4D attention mask and combines causal and padding masks if ... |
Creates 4D attention mask and combines causal and padding masks if needed.
Args:
attention_mask: Optional tensor of shape (batch_size, seq_length) containing padding mask
sequence_length: Length of the sequence
dtype: Data type of the mask
device: Device... | _prepare_4d_attention_mask | python | huggingface/transformers | src/transformers/models/timesfm/modular_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modular_timesfm.py | Apache-2.0 |
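The `_prepare_4d_attention_mask` row combines a causal mask with a padding mask into one additive `(batch, 1, seq, seq)` mask. A hedged NumPy sketch of that combination (the `-1e9` fill value is an illustrative assumption; the real code uses the dtype's minimum):

```python
import numpy as np

NEG = -1e9  # assumed large-negative fill for blocked positions

def prepare_4d_attention_mask(padding_mask, seq_len, is_causal=True):
    # padding_mask: (batch, seq_len) with 1 = real token, 0 = padding
    bsz = padding_mask.shape[0]
    mask = np.zeros((bsz, 1, seq_len, seq_len))
    if is_causal:
        # key positions above the diagonal (the future) are blocked
        mask += np.triu(np.ones((seq_len, seq_len)), k=1)[None, None] * NEG
    # padded key positions are blocked for every query position
    mask += (1.0 - padding_mask)[:, None, None, :] * NEG
    return mask

m = prepare_4d_attention_mask(np.array([[1, 1, 0]]), 3)
print(m.shape)  # (1, 1, 3, 3)
```

The result is added to the attention scores before softmax, so blocked positions receive effectively zero probability.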
def _timesfm_masked_mean_std(inputs: torch.Tensor, padding: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
"""Calculates mean and standard deviation of `inputs` across axis 1.
It excludes values where `padding` is 1.
Args:
inputs: A PyTorch tensor of shape [b, n, p].
... | Calculates mean and standard deviation of `inputs` across axis 1.
It excludes values where `padding` is 1.
Args:
inputs: A PyTorch tensor of shape [b, n, p].
padding: A PyTorch tensor of shape [b, n, p] with values 0 or 1.
Returns:
A tuple containing the me... | _timesfm_masked_mean_std | python | huggingface/transformers | src/transformers/models/timesfm/modular_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modular_timesfm.py | Apache-2.0 |
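The `_timesfm_masked_mean_std` row computes per-series mean and standard deviation while excluding positions where `padding` is 1. A simplified NumPy sketch (2-D `[b, n]` inputs instead of the row's `[b, n, p]`, and population rather than sample std; both are simplifying assumptions):

```python
import numpy as np

def masked_mean_std(inputs, padding):
    # padding == 1 marks values to exclude from the statistics
    keep = 1.0 - padding
    count = np.maximum(keep.sum(axis=1), 1.0)  # avoid division by zero
    mean = (inputs * keep).sum(axis=1) / count
    var = (((inputs - mean[:, None]) * keep) ** 2).sum(axis=1) / count
    return mean, np.sqrt(var)

mean, std = masked_mean_std(np.array([[1.0, 2.0, 3.0, 99.0]]),
                            np.array([[0.0, 0.0, 0.0, 1.0]]))
print(mean)  # [2.]   (the padded 99.0 is ignored)
```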
def _timesfm_shift_padded_seq(mask: torch.Tensor, seq: torch.Tensor) -> torch.Tensor:
"""Shifts rows of seq based on the first 0 in each row of the mask.
Args:
mask: mask tensor of shape [B, N]
seq: seq tensor of shape [B, N, P]
Returns:
The shifted sequence... | Shifts rows of seq based on the first 0 in each row of the mask.
Args:
mask: mask tensor of shape [B, N]
seq: seq tensor of shape [B, N, P]
Returns:
The shifted sequence.
| _timesfm_shift_padded_seq | python | huggingface/transformers | src/transformers/models/timesfm/modular_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modular_timesfm.py | Apache-2.0 |
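The `_timesfm_shift_padded_seq` row shifts each row of `seq` according to the first 0 in the corresponding row of `mask`. One plausible reading, sketched in NumPy (the roll direction is an assumption; the library's gather-based version may handle the edges differently):

```python
import numpy as np

def shift_padded_seq(mask, seq):
    # mask: (B, N); seq: (B, N, P).  Roll each batch row of `seq` left by
    # the index of the first 0 in its mask row.
    out = np.empty_like(seq)
    for i, row in enumerate(mask):
        zeros = np.flatnonzero(row == 0)
        shift = int(zeros[0]) if zeros.size else 0
        out[i] = np.roll(seq[i], -shift, axis=0)
    return out

seq = np.arange(4).reshape(1, 4, 1)
print(shift_padded_seq(np.array([[1, 1, 0, 1]]), seq)[0, :, 0])  # [2 3 0 1]
```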
def _preprocess(
self, inputs: Sequence[torch.Tensor], freq: Sequence[int]
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Formats and pads raw inputs to feed into the model.
This function both pads each time series to match the context length, and
pads the inputs to meet t... | Formats and pads raw inputs to feed into the model.
This function both pads each time series to match the context length, and
pads the inputs to meet the SPMD shape requirement.
Args:
inputs: A list of 1d Tensors. Each Tensor is the context time series of
a single forecas... | _preprocess | python | huggingface/transformers | src/transformers/models/timesfm/modular_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modular_timesfm.py | Apache-2.0 |
def forward(
self,
past_values: Sequence[torch.Tensor],
freq: Optional[Sequence[Union[torch.Tensor, int]]] = None,
window_size: Optional[int] = None,
future_values: Optional[torch.Tensor] = None,
forecast_context_len: Optional[int] = None,
return_forecast_on_conte... |
window_size (`int`, *optional*):
Window size of the trend + residual decomposition. If None, no decomposition is performed.
future_values (`torch.Tensor`, *optional*):
Optional future time series values to be used for loss computation.
forecast_context_len (`int`, *optional... | forward | python | huggingface/transformers | src/transformers/models/timesfm/modular_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modular_timesfm.py | Apache-2.0 |
def _timesfm_moving_average(arr: torch.Tensor, window_size: int) -> list[torch.Tensor]:
"""Calculates the moving average using PyTorch's convolution function."""
# Pad with zeros to handle initial window positions
arr_padded = F.pad(arr, (window_size - 1, 0), "constant", 0)
# Create a co... | Calculates the moving average using PyTorch's convolution function. | _timesfm_moving_average | python | huggingface/transformers | src/transformers/models/timesfm/modular_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modular_timesfm.py | Apache-2.0 |
def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
"""
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
h... |
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate pape... | drop_path | python | huggingface/transformers | src/transformers/models/timesformer/modeling_timesformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesformer/modeling_timesformer.py | Apache-2.0 |
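The `drop_path` row implements stochastic depth: during training, each sample's residual branch is dropped with probability `drop_prob`, and surviving samples are rescaled so the expectation is unchanged. A NumPy sketch of the same recipe (the `rng` parameter is added here for reproducibility and is not part of the original signature):

```python
import numpy as np

def drop_path(x, drop_prob=0.0, training=False, rng=None):
    # identity at inference time or when nothing is dropped
    if drop_prob == 0.0 or not training:
        return x
    rng = rng if rng is not None else np.random.default_rng()
    keep_prob = 1.0 - drop_prob
    # one Bernoulli draw per sample, broadcast over all remaining dims
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    mask = (rng.random(shape) < keep_prob).astype(x.dtype)
    # dividing by keep_prob preserves E[output] = input
    return x * mask / keep_prob

x = np.ones((4, 3))
out = drop_path(x, drop_prob=0.5, training=True, rng=np.random.default_rng(0))
# each row of `out` is either all zeros or all 2.0 (= 1 / keep_prob)
```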
def forward(
self,
pixel_values: torch.FloatTensor,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple[torch.FloatTensor], BaseModelOutput]:
r"""
Examples:
```pyt... |
Examples:
```python
>>> import av
>>> import numpy as np
>>> from transformers import AutoImageProcessor, TimesformerModel
>>> from huggingface_hub import hf_hub_download
>>> np.random.seed(0)
>>> def read_video_pyav(container, indices):
... ... | forward | python | huggingface/transformers | src/transformers/models/timesformer/modeling_timesformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesformer/modeling_timesformer.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, ImageClassifierOutput]:
... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Squared loss); if
`config.n... | forward | python | huggingface/transformers | src/transformers/models/timesformer/modeling_timesformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesformer/modeling_timesformer.py | Apache-2.0 |
def forward(
self, data: torch.Tensor, observed_indicator: torch.Tensor
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
Parameters:
data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
input for Batch norm calculation
... |
Parameters:
data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
input for Batch norm calculation
observed_indicator (`torch.BoolTensor` of shape `(batch_size, sequence_length, num_input_channels)`):
Calculating the scale on... | forward | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
def forward(
self, data: torch.Tensor, observed_indicator: torch.Tensor
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
Parameters:
data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
input for Batch norm calculation
... |
Parameters:
data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
input for Batch norm calculation
observed_indicator (`torch.BoolTensor` of shape `(batch_size, sequence_length, num_input_channels)`):
Calculating the scale on... | forward | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
def forward(
self, data: torch.Tensor, observed_indicator: Optional[torch.Tensor] = None
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
Parameters:
data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
input for Batch norm ... |
Parameters:
data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
input for Batch norm calculation
Returns:
tuple of `torch.Tensor` of shapes
(`(batch_size, sequence_length, num_input_channels)`,`(batch_size, 1, num_i... | forward | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
def weighted_average(input_tensor: torch.Tensor, weights: Optional[torch.Tensor] = None, dim=None) -> torch.Tensor:
"""
Computes the weighted average of a given tensor across a given `dim`, masking values associated with weight zero,
meaning instead of `nan * 0 = nan` you will get `0 * 0 = 0`.
Args:
... |
Computes the weighted average of a given tensor across a given `dim`, masking values associated with weight zero,
meaning instead of `nan * 0 = nan` you will get `0 * 0 = 0`.
Args:
input_tensor (`torch.FloatTensor`):
Input tensor, of which the average must be computed.
weights ... | weighted_average | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
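The `weighted_average` row masks out zero-weight entries before multiplying, so a `nan` paired with weight 0 contributes 0 instead of poisoning the sum. A NumPy sketch of that trick (clamping the denominator to at least 1.0 mirrors the described behavior but is an assumption about the exact floor):

```python
import numpy as np

def weighted_average(x, weights=None, axis=None):
    if weights is None:
        return x.mean(axis=axis)
    # where the weight is 0, contribute 0 rather than nan * 0 = nan
    weighted = np.where(weights != 0, x * weights, 0.0)
    denom = np.clip(weights.sum(axis=axis), 1.0, None)
    return weighted.sum(axis=axis) / denom

x = np.array([np.nan, 2.0, 4.0])
print(weighted_average(x, weights=np.array([0.0, 1.0, 1.0])))  # 3.0
```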
def _init_weight(self):
"""
Identical to the XLM create_sinusoidal_embeddings except features are not interleaved. The cos features are in
the 2nd half of the vector. [dim // 2:]
"""
n_pos, dim = self.weight.shape
position_enc = np.array(
[[pos / np.power(1000... |
Identical to the XLM create_sinusoidal_embeddings except features are not interleaved. The cos features are in
the 2nd half of the vector. [dim // 2:]
| _init_weight | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
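The `_init_weight` row builds non-interleaved sinusoidal embeddings: sin features fill the first half of each vector and cos features the second half (`[dim // 2:]`). A NumPy sketch of that layout (the base `10000` is assumed from the standard formula, since the row's code is truncated at `np.power(1000...`):

```python
import numpy as np

def sinusoidal_embeddings(n_pos, dim):
    # angle(pos, j) = pos / 10000^(2*(j//2)/dim), as in "Attention Is All You Need"
    position_enc = np.array(
        [[pos / np.power(10000, 2 * (j // 2) / dim) for j in range(dim)]
         for pos in range(n_pos)]
    )
    out = np.zeros((n_pos, dim))
    half = dim // 2 + dim % 2
    out[:, :half] = np.sin(position_enc[:, 0::2])  # sin block first
    out[:, half:] = np.cos(position_enc[:, 1::2])  # cos block second, not interleaved
    return out

emb = sinusoidal_embeddings(4, 6)
# position 0: the sin half is all 0, the cos half is all 1
```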
def forward(
self, input_ids_shape: torch.Size, past_key_values_length: int = 0, position_ids: Optional[torch.Tensor] = None
) -> torch.Tensor:
"""`input_ids_shape` is expected to be [bsz x seqlen]."""
if position_ids is None:
bsz, seq_len = input_ids_shape[:2]
positi... | `input_ids_shape` is expected to be [bsz x seqlen]. | forward | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.FloatTensor,
attention_mask: torch.FloatTensor,
layer_head_mask: torch.FloatTensor,
output_attentions: Optional[bool] = False,
) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor]]:
"""
Args:
hidden_stat... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
... | forward | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.Tensor] = None,
encoder_attention_mask: Optional[torch.Tensor] = None,
layer_head_mask: Optional[torch.Tensor] = None,
cross_attn_l... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
... | forward | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
def forward(
self,
attention_mask: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optio... |
Args:
attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
- 1 for tokens that are **not masked**,
- 0 for tokens that are... | forward | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
def forward(
self,
attention_mask: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.FloatTensor] = None,
encoder_attention_mask: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.Tensor] = None,
cross_attn_head_mask: Optional[torch.Tensor] =... |
Args:
attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
- 1 for tokens that are **not masked**,
- 0 for tokens that are... | forward | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
def get_lagged_subsequences(
self, sequence: torch.Tensor, subsequences_length: int, shift: int = 0
) -> torch.Tensor:
"""
Returns lagged subsequences of a given sequence. Returns a tensor of shape (N, S, C, I),
where S = subsequences_length and I = len(indices), containing lagge... |
Returns lagged subsequences of a given sequence. Returns a tensor of shape (N, S, C, I),
where S = subsequences_length and I = len(indices), containing lagged subsequences. Specifically, lagged[i,
j, :, k] = sequence[i, -indices[k]-S+j, :].
Args:
sequence: Tensor
... | get_lagged_subsequences | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
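The `get_lagged_subsequences` row defines its output by `lagged[i, j, :, k] = sequence[i, -indices[k] - S + j, :]`. A NumPy sketch of exactly that indexing (the `indices` list here is a made-up example; the model takes its lags from the config):

```python
import numpy as np

def lagged_subsequences(sequence, indices, S):
    # sequence: (N, T, C) -> (N, S, C, I) with
    # lagged[i, j, :, k] = sequence[i, -indices[k] - S + j, :]
    T = sequence.shape[1]
    return np.stack(
        [sequence[:, T - lag - S : T - lag, :] for lag in indices], axis=-1
    )

seq = np.arange(6).reshape(1, 6, 1)  # values 0..5 over T=6 steps
out = lagged_subsequences(seq, indices=[0, 2], S=2)
print(out[0, :, 0, 0], out[0, :, 0, 1])  # [4 5] [2 3]
```

Lag 0 yields the most recent `S` steps; lag 2 yields the same window shifted two steps into the past.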
def forward(
self,
past_values: torch.Tensor,
past_time_features: torch.Tensor,
past_observed_mask: torch.Tensor,
static_categorical_features: Optional[torch.Tensor] = None,
static_real_features: Optional[torch.Tensor] = None,
future_values: Optional[torch.Tensor]... |
past_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)` or `(batch_size, sequence_length, input_size)`):
Past values of the time series that serve as context in order to predict the future. The sequence size of
this tensor must be larger than the `context_length` of t... | forward | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
def forward(
self,
past_values: torch.Tensor,
past_time_features: torch.Tensor,
past_observed_mask: torch.Tensor,
static_categorical_features: Optional[torch.Tensor] = None,
static_real_features: Optional[torch.Tensor] = None,
future_values: Optional[torch.Tensor]... |
past_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)` or `(batch_size, sequence_length, input_size)`):
Past values of the time series that serve as context in order to predict the future. The sequence size of
this tensor must be larger than the `context_length` of t... | forward | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
def generate(
self,
past_values: torch.Tensor,
past_time_features: torch.Tensor,
future_time_features: torch.Tensor,
past_observed_mask: Optional[torch.Tensor] = None,
static_categorical_features: Optional[torch.Tensor] = None,
static_real_features: Optional[torch... |
Greedily generate sequences of sample predictions from a model with a probability distribution head.
Parameters:
past_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)` or `(batch_size, sequence_length, input_size)`):
Past values of the time series, that s... | generate | python | huggingface/transformers | src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py | Apache-2.0 |
def to_dict(self) -> Dict[str, Any]:
"""
Serializes this instance to a Python dictionary.
"""
output = super().to_dict()
output.pop("train_transforms", None)
output.pop("val_transforms", None)
output.pop("_not_supports_tensor_input", None)
return output |
Serializes this instance to a Python dictionary.
| to_dict | python | huggingface/transformers | src/transformers/models/timm_wrapper/image_processing_timm_wrapper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timm_wrapper/image_processing_timm_wrapper.py | Apache-2.0 |
def get_image_processor_dict(
cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs
) -> Tuple[Dict[str, Any], Dict[str, Any]]:
"""
Get the image processor dict for the model.
"""
image_processor_filename = kwargs.pop("image_processor_filename", "config.json")... |
Get the image processor dict for the model.
| get_image_processor_dict | python | huggingface/transformers | src/transformers/models/timm_wrapper/image_processing_timm_wrapper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timm_wrapper/image_processing_timm_wrapper.py | Apache-2.0 |
def preprocess(
self,
images: ImageInput,
return_tensors: Optional[Union[str, TensorType]] = "pt",
) -> BatchFeature:
"""
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch o... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch of images
return_tensors (`str` or `TensorType`, *optional*):
The type of tensors to return.
| preprocess | python | huggingface/transformers | src/transformers/models/timm_wrapper/image_processing_timm_wrapper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timm_wrapper/image_processing_timm_wrapper.py | Apache-2.0 |
def _fix_state_dict_key_on_load(key) -> Tuple[str, bool]:
"""
Overrides original method that renames `gamma` and `beta` to `weight` and `bias`.
We don't want this behavior for timm wrapped models. Instead, this method adds a
"timm_model." prefix to enable loading official timm Hub checkp... |
Overrides original method that renames `gamma` and `beta` to `weight` and `bias`.
We don't want this behavior for timm wrapped models. Instead, this method adds a
"timm_model." prefix to enable loading official timm Hub checkpoints.
| _fix_state_dict_key_on_load | python | huggingface/transformers | src/transformers/models/timm_wrapper/modeling_timm_wrapper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timm_wrapper/modeling_timm_wrapper.py | Apache-2.0 |
def load_state_dict(self, state_dict, *args, **kwargs):
"""
Override original method to fix state_dict keys on load for cases when weights are loaded
without using the `from_pretrained` method (e.g., in Trainer to resume from checkpoint).
"""
state_dict = {self._fix_state_dict_ke... |
Override original method to fix state_dict keys on load for cases when weights are loaded
without using the `from_pretrained` method (e.g., in Trainer to resume from checkpoint).
| load_state_dict | python | huggingface/transformers | src/transformers/models/timm_wrapper/modeling_timm_wrapper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timm_wrapper/modeling_timm_wrapper.py | Apache-2.0 |
def _init_weights(self, module):
"""
Initialize weights function to properly initialize Linear layer weights.
Since model architectures may vary, we assume only the classifier requires
initialization, while all other weights should be loaded from the checkpoint.
"""
if is... |
Initialize weights function to properly initialize Linear layer weights.
Since model architectures may vary, we assume only the classifier requires
initialization, while all other weights should be loaded from the checkpoint.
| _init_weights | python | huggingface/transformers | src/transformers/models/timm_wrapper/modeling_timm_wrapper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timm_wrapper/modeling_timm_wrapper.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.FloatTensor,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[Union[bool, List[int]]] = None,
return_dict: Optional[bool] = None,
do_pooling: Optional[bool] = None,
**kwargs,
) -> Union[TimmWrapper... |
output_attentions (`bool`, *optional*):
Whether or not to return the attentions tensors of all attention layers. Not compatible with timm wrapped models.
output_hidden_states (`bool`, *optional*):
Whether or not to return the hidden states of all layers. Not compatible with timm... | forward | python | huggingface/transformers | src/transformers/models/timm_wrapper/modeling_timm_wrapper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timm_wrapper/modeling_timm_wrapper.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.FloatTensor,
labels: Optional[torch.LongTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[Union[bool, List[int]]] = None,
return_dict: Optional[bool] = None,
**kwargs,
) -> Union[Ima... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Squared loss); if
`config.n... | forward | python | huggingface/transformers | src/transformers/models/timm_wrapper/modeling_timm_wrapper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timm_wrapper/modeling_timm_wrapper.py | Apache-2.0 |
def convert_tr_ocr_checkpoint(checkpoint_url, pytorch_dump_folder_path):
"""
Copy/paste/tweak model's weights to our VisionEncoderDecoderModel structure.
"""
# define encoder and decoder configs based on checkpoint_url
encoder_config = ViTConfig(image_size=384, qkv_bias=False)
decoder_config = T... |
Copy/paste/tweak model's weights to our VisionEncoderDecoderModel structure.
| convert_tr_ocr_checkpoint | python | huggingface/transformers | src/transformers/models/trocr/convert_trocr_unilm_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/trocr/convert_trocr_unilm_to_pytorch.py | Apache-2.0 |
def forward(self, input_ids: torch.Tensor, past_key_values_length: int = 0, position_ids: torch.Tensor = None):
"""`input_ids' shape is expected to be [bsz x seqlen]."""
if position_ids is None:
bsz, seq_len = input_ids.shape[:2]
position_ids = torch.arange(
past... | `input_ids' shape is expected to be [bsz x seqlen]. | forward | python | huggingface/transformers | src/transformers/models/trocr/modeling_trocr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/trocr/modeling_trocr.py | Apache-2.0 |
def get_embedding(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None):
"""
Build sinusoidal embeddings. This matches the implementation in tensor2tensor, but differs slightly from the
description in Section 3.5 of "Attention Is All You Need".
"""
half_dim ... |
Build sinusoidal embeddings. This matches the implementation in tensor2tensor, but differs slightly from the
description in Section 3.5 of "Attention Is All You Need".
| get_embedding | python | huggingface/transformers | src/transformers/models/trocr/modeling_trocr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/trocr/modeling_trocr.py | Apache-2.0 |
def create_position_ids_from_input_ids(
self, input_ids: torch.Tensor, padding_idx: int, past_key_values_length: Optional[int] = 0
):
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding
symbols are ignored. This is modified fr... |
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding
symbols are ignored. This is modified from fairseq's `utils.make_positions`.
| create_position_ids_from_input_ids | python | huggingface/transformers | src/transformers/models/trocr/modeling_trocr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/trocr/modeling_trocr.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.Tensor] = None,
encoder_attention_mask: Optional[torch.Tensor] = None,
layer_head_mask: Optional[torch.Tensor] = None,
cross_attn_l... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
... | forward | python | huggingface/transformers | src/transformers/models/trocr/modeling_trocr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/trocr/modeling_trocr.py | Apache-2.0 |
def forward(
self,
input_ids=None,
attention_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
head_mask=None,
cross_attn_head_mask=None,
past_key_values=None,
inputs_embeds=None,
use_cache=None,
output_attentions=... |
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTra... | forward | python | huggingface/transformers | src/transformers/models/trocr/modeling_trocr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/trocr/modeling_trocr.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.FloatTensor] = None,
encoder_attention_mask: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.Tensor] = None,
... |
cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`:
- 1 indicates the head is **not masked**,
- 0 indicates the head is **mas... | forward | python | huggingface/transformers | src/transformers/models/trocr/modeling_trocr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/trocr/modeling_trocr.py | Apache-2.0 |
def __call__(
self,
images: ImageInput = None,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None,
audio=None,
videos=None,
**kwargs: Unpack[TrOCRProcessorKwargs],
) -> BatchFeature:
"""
When used in normal mode,... |
When used in normal mode, this method forwards all its arguments to AutoImageProcessor's
[`~AutoImageProcessor.__call__`] and returns its output. If used in the context
[`~TrOCRProcessor.as_target_processor`] this method forwards all its arguments to TrOCRTokenizer's
[`~TrOCRTokenizer._... | __call__ | python | huggingface/transformers | src/transformers/models/trocr/processing_trocr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/trocr/processing_trocr.py | Apache-2.0 |
def as_target_processor(self):
"""
Temporarily sets the tokenizer for processing the input. Useful for encoding the labels when fine-tuning TrOCR.
"""
warnings.warn(
"`as_target_processor` is deprecated and will be removed in v5 of Transformers. You can process your "
... |
Temporarily sets the tokenizer for processing the input. Useful for encoding the labels when fine-tuning TrOCR.
| as_target_processor | python | huggingface/transformers | src/transformers/models/trocr/processing_trocr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/trocr/processing_trocr.py | Apache-2.0 |
def to_dict(self):
"""
Serializes this instance to a Python dictionary. Override the default [`~PretrainedConfig.to_dict`].
Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
"""
output = copy.deepcopy(self.__dict__)... |
Serializes this instance to a Python dictionary. Override the default [`~PretrainedConfig.to_dict`].
Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
| to_dict | python | huggingface/transformers | src/transformers/models/tvp/configuration_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/configuration_tvp.py | Apache-2.0 |
def resize(
self,
image: np.ndarray,
size: Dict[str, int],
resample: PILImageResampling = PILImageResampling.BILINEAR,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
**kwargs,
) -> ... |
Resize an image.
Args:
image (`np.ndarray`):
Image to resize.
size (`Dict[str, int]`):
Size of the output image. If `size` is of the form `{"height": h, "width": w}`, the output image will
have the size `(h, w)`. If `size` is of t... | resize | python | huggingface/transformers | src/transformers/models/tvp/image_processing_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/image_processing_tvp.py | Apache-2.0 |
def pad_image(
self,
image: np.ndarray,
pad_size: Optional[Dict[str, int]] = None,
constant_values: Union[float, Iterable[float]] = 0,
pad_mode: PaddingMode = PaddingMode.CONSTANT,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Opti... |
Pad an image with zeros to the given size.
Args:
image (`np.ndarray`):
Image to pad.
pad_size (`Dict[str, int]`)
Size of the output image with pad.
constant_values (`Union[float, Iterable[float]]`)
The fill value to us... | pad_image | python | huggingface/transformers | src/transformers/models/tvp/image_processing_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/image_processing_tvp.py | Apache-2.0 |
def preprocess(
self,
videos: Union[ImageInput, List[ImageInput], List[List[ImageInput]]],
do_resize: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
resample: PILImageResampling = None,
do_center_crop: Optional[bool] = None,
crop_size: Optional[Dict... |
Preprocess an image or batch of images.
Args:
videos (`ImageInput` or `List[ImageInput]` or `List[List[ImageInput]]`):
Frames to preprocess.
do_resize (`bool`, *optional*, defaults to `self.do_resize`):
Whether to resize the image.
si... | preprocess | python | huggingface/transformers | src/transformers/models/tvp/image_processing_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/image_processing_tvp.py | Apache-2.0 |
def loss_distance(self, start_time, end_time, candidates_start_time, candidates_end_time, duration):
"""
Measure the distance of mid points.
"""
mid_candidates = torch.div(torch.add(candidates_start_time, candidates_end_time), 2.0)
mid_groundtruth = torch.div(torch.add(start_time... |
Measure the distance of mid points.
| loss_distance | python | huggingface/transformers | src/transformers/models/tvp/modeling_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/modeling_tvp.py | Apache-2.0 |
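The `loss_distance` row above reduces the predicted and ground-truth spans to their midpoints and penalizes the gap between them, normalized by the video duration. A minimal pure-Python sketch of that idea (scalar inputs instead of tensors; not the library implementation):

```python
def midpoint_distance(start, end, cand_start, cand_end, duration):
    """Distance between span midpoints, normalized by the video duration."""
    mid_pred = (cand_start + cand_end) / 2.0
    mid_gt = (start + end) / 2.0
    return abs(mid_pred - mid_gt) / duration

# A 10 s video: ground truth [2, 6] vs. prediction [3, 7]
print(midpoint_distance(2.0, 6.0, 3.0, 7.0, 10.0))  # 0.1
```

Normalizing by `duration` keeps the term comparable across clips of different lengths.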
def forward(self, logits, labels):
"""
This performs the loss computation.
Args:
logits (`torch.FloatTensor`):
The output logits of head module.
labels (`List[torch.FloatTensor]`):
List of tensors ([start, end, duration]), which contains s... |
This performs the loss computation.
Args:
logits (`torch.FloatTensor`):
The output logits of head module.
labels (`List[torch.FloatTensor]`):
List of tensors ([start, end, duration]), which contains start time, end time of the video corresponding... | forward | python | huggingface/transformers | src/transformers/models/tvp/modeling_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/modeling_tvp.py | Apache-2.0 |
def interpolate_pos_encoding(self, embedding: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""
This method allows interpolating the pre-trained position encodings, to be able to use the model on collections of high
resolution images (high resolution videos).
"""
h0 = w0 = 1... |
This method allows interpolating the pre-trained position encodings, to be able to use the model on collections of high
resolution images (high resolution videos).
| interpolate_pos_encoding | python | huggingface/transformers | src/transformers/models/tvp/modeling_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/modeling_tvp.py | Apache-2.0 |
def add_2d_positional_embeddings(self, grid, interpolate_pos_encoding: bool = False):
"""
Args:
grid: (batch_size, height, width, hidden_dim)
interpolate_pos_encoding: (`bool`, *optional*, defaults to `False`):
Whether to interpolate the pre-trained position encod... |
Args:
grid: (batch_size, height, width, hidden_dim)
interpolate_pos_encoding: (`bool`, *optional*, defaults to `False`):
Whether to interpolate the pre-trained position encodings.
Returns:
grid + col_position_embeddings.view(*col_shape): (batch_size, ... | add_2d_positional_embeddings | python | huggingface/transformers | src/transformers/models/tvp/modeling_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/modeling_tvp.py | Apache-2.0 |
def forward(self, grid, interpolate_pos_encoding: bool = False):
"""
Args:
grid: Array of shape (batch_size, num_frames, height, width, num_channels).
It contains processed frames extracted from videos, and is generated by Tvp image preprocessor. Note,
num_fra... |
Args:
grid: Array of shape (batch_size, num_frames, height, width, num_channels).
It contains processed frames extracted from videos, and is generated by Tvp image preprocessor. Note,
num_frames can be 1
interpolate_pos_encoding: (bool, *optional*, defaul... | forward | python | huggingface/transformers | src/transformers/models/tvp/modeling_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/modeling_tvp.py | Apache-2.0 |
def interpolate_pad_encoding(self, prompt: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""
This method allows interpolating the pre-trained pad weights, to be able to use the model on collections of high
resolution images (high resolution videos).
"""
# creates scal... |
This method allows interpolating the pre-trained pad weights, to be able to use the model on collections of high
resolution images (high resolution videos).
| interpolate_pad_encoding | python | huggingface/transformers | src/transformers/models/tvp/modeling_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/modeling_tvp.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
pixel_values: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hi... |
Examples:
```python
>>> import torch
>>> from transformers import AutoConfig, AutoTokenizer, TvpModel
>>> model = TvpModel.from_pretrained("Jiqing/tiny-random-tvp")
>>> tokenizer = AutoTokenizer.from_pretrained("Jiqing/tiny-random-tvp")
>>> pixel_values = torc... | forward | python | huggingface/transformers | src/transformers/models/tvp/modeling_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/modeling_tvp.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
pixel_values: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
labels: Optional[Tuple[torch.Tensor]] = None,
head_mask: Optional[torch.FloatTensor] = None,
outpu... |
labels (`torch.FloatTensor` of shape `(batch_size, 3)`, *optional*):
The labels contain the duration, start time, and end time of the video corresponding to the text.
Examples:
```python
>>> import torch
>>> from transformers import AutoConfig, AutoTokenizer, TvpForVid... | forward | python | huggingface/transformers | src/transformers/models/tvp/modeling_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/modeling_tvp.py | Apache-2.0 |
def __call__(self, text=None, videos=None, return_tensors=None, **kwargs):
"""
Main method to prepare for the model one or several sequence(s) and image(s). This method forwards the `text`
and `kwargs` arguments to BertTokenizerFast's [`~BertTokenizerFast.__call__`] if `text` is not `None` to e... |
Main method to prepare for the model one or several sequence(s) and image(s). This method forwards the `text`
and `kwargs` arguments to BertTokenizerFast's [`~BertTokenizerFast.__call__`] if `text` is not `None` to encode
the text. To prepare the image(s), this method forwards the `videos` and... | __call__ | python | huggingface/transformers | src/transformers/models/tvp/processing_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/processing_tvp.py | Apache-2.0 |
def post_process_video_grounding(self, logits, video_durations):
"""
Compute the time of the video.
Args:
logits (`torch.Tensor`):
The logits output of TvpForVideoGrounding.
video_durations (`float`):
The video's duration.
Returns... |
Compute the time of the video.
Args:
logits (`torch.Tensor`):
The logits output of TvpForVideoGrounding.
video_durations (`float`):
The video's duration.
Returns:
start (`float`):
The start time of the video.
... | post_process_video_grounding | python | huggingface/transformers | src/transformers/models/tvp/processing_tvp.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tvp/processing_tvp.py | Apache-2.0 |
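The `post_process_video_grounding` row above maps model logits plus the clip duration to start/end times. Assuming the grounding head emits a normalized `(start, end)` pair in `[0, 1]` (an assumption for illustration; the exact output format belongs to the model), the post-processing reduces to scaling by the duration:

```python
def post_process_grounding(logits, video_duration):
    """Map normalized (start, end) fractions of the clip to seconds.

    `logits` is assumed to hold values in [0, 1]; this mirrors the
    post-processing idea, not the exact library code.
    """
    start_frac, end_frac = logits
    start = round(start_frac * video_duration, 1)
    end = round(end_frac * video_duration, 1)
    return start, end

print(post_process_grounding((0.25, 0.75), 120.0))  # (30.0, 90.0)
```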
def combine_image_text_embeddings(
image_embeddings,
inputs_embeds,
bbox,
visual_bbox,
attention_mask=None,
num_patches=14,
max_len=0,
image_size=224,
patch_size=16,
):
"""
Combine the image and text embeddings for the input to the encoder/decoder of UDOP.
First, the ima... |
Combine the image and text embeddings for the input to the encoder/decoder of UDOP.
First, the image embeddings are created by checking, for each visual patch, whether it is inside the bounding box of a
token. If it is, the visual patch is combined with the token embedding. Then, the visual bounding boxes are co... | combine_image_text_embeddings | python | huggingface/transformers | src/transformers/models/udop/modeling_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/modeling_udop.py | Apache-2.0 |
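The containment test described above — "for each visual patch, is it inside the bounding box of a token?" — can be sketched in pure Python by placing patch centers on a normalized grid and checking them against a box (illustrative helper names, not the UDOP implementation):

```python
def patch_centers(num_patches):
    """Centers of an n x n patch grid in normalized [0, 1] coordinates."""
    step = 1.0 / num_patches
    return [
        (col * step + step / 2, row * step + step / 2)
        for row in range(num_patches)
        for col in range(num_patches)
    ]

def patches_inside(bbox, num_patches=4):
    """Indices of patches whose center lies inside bbox = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = bbox
    return [
        i
        for i, (cx, cy) in enumerate(patch_centers(num_patches))
        if x0 <= cx <= x1 and y0 <= cy <= y1
    ]

# A token covering the top-left quadrant of a 4x4 patch grid
print(patches_inside((0.0, 0.0, 0.5, 0.5)))  # [0, 1, 4, 5]
```

In the model this decides which visual patch embeddings get combined with which token embeddings.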
def __init__(self, hidden_size, eps=1e-6):
"""
Construct a layernorm module in the Udop style. No bias and no subtraction of mean.
"""
super().__init__()
self.weight = nn.Parameter(torch.ones(hidden_size))
self.variance_epsilon = eps |
Construct a layernorm module in the Udop style. No bias and no subtraction of mean.
| __init__ | python | huggingface/transformers | src/transformers/models/udop/modeling_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/modeling_udop.py | Apache-2.0 |
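"No bias and no subtraction of mean" means this is an RMS-style layer norm (as in T5): activations are rescaled by their root-mean-square rather than centered. A scalar pure-Python sketch of the computation:

```python
import math

def rms_norm(x, weight=None, eps=1e-6):
    """T5/Udop-style layer norm: no mean subtraction, no bias term."""
    if weight is None:
        weight = [1.0] * len(x)
    variance = sum(v * v for v in x) / len(x)  # mean of squares, not centered
    scale = 1.0 / math.sqrt(variance + eps)
    return [w * v * scale for w, v in zip(weight, x)]

out = rms_norm([3.0, 4.0])
print([round(v, 4) for v in out])  # [0.8485, 1.1314]
```

Dropping the mean subtraction and bias saves work and matches the original T5 checkpoint semantics.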
def _relative_position_bucket(relative_position, bidirectional=True, num_buckets=32, max_distance=128):
"""
Adapted from Mesh Tensorflow:
https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
Translate rel... |
Adapted from Mesh Tensorflow:
https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
Translate relative position to a bucket number for relative attention. The relative position is defined as
memory_positi... | _relative_position_bucket | python | huggingface/transformers | src/transformers/models/udop/modeling_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/modeling_udop.py | Apache-2.0 |
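The bucketing scheme this docstring describes (half the buckets for exact small offsets, half for logarithmically growing ranges up to `max_distance`) can be ported to a scalar pure-Python function — a sketch of the T5 scheme, not the vectorized library code:

```python
import math

def relative_position_bucket(relative_position, bidirectional=True,
                             num_buckets=32, max_distance=128):
    """Bucket a relative position (memory_position - query_position), T5-style."""
    bucket = 0
    if bidirectional:
        num_buckets //= 2
        if relative_position > 0:       # positions after the query get their own half
            bucket += num_buckets
        relative_position = abs(relative_position)
    else:
        relative_position = -min(relative_position, 0)
    max_exact = num_buckets // 2
    if relative_position < max_exact:   # small offsets: one bucket each
        return bucket + relative_position
    # larger offsets: logarithmically spaced buckets up to max_distance
    log_bucket = max_exact + int(
        math.log(relative_position / max_exact)
        / math.log(max_distance / max_exact)
        * (num_buckets - max_exact)
    )
    return bucket + min(log_bucket, num_buckets - 1)

print(relative_position_bucket(-3), relative_position_bucket(3))  # 3 19
print(relative_position_bucket(100))  # 31
```

Nearby tokens keep distinct buckets while distant ones share coarse buckets, which is what lets the bias generalize to long sequences.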
def forward(
self,
hidden_states,
mask=None,
key_value_states=None,
position_bias=None,
past_key_value=None,
layer_head_mask=None,
query_length=None,
use_cache=False,
output_attentions=False,
cache_position=None,
):
"""
... |
Self-attention (if key_value_states is None) or attention over source sentence (provided by key_value_states).
| forward | python | huggingface/transformers | src/transformers/models/udop/modeling_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/modeling_udop.py | Apache-2.0 |
def __init__(self, modules: Sequence[RelativePositionBiasBase]):
"""
Class which sums up various computed biases.
Args:
modules (Sequence[RelativePositionBiasBase]):
List of relative bias modules.
"""
super().__init__()
self.biases = nn.Module... |
Class which sums up various computed biases.
Args:
modules (Sequence[RelativePositionBiasBase]):
List of relative bias modules.
| __init__ | python | huggingface/transformers | src/transformers/models/udop/modeling_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/modeling_udop.py | Apache-2.0 |
def create_relative_bias(config: UdopConfig) -> Sequence[RelativePositionBiasBase]:
"""
Creates empty list or one/multiple relative biases.
:param config: Model's configuration. :return: Sequence with created bias modules.
"""
bias_list = []
if hasattr(config, "relative_bias_args"):
for ... |
Creates empty list or one/multiple relative biases.
:param config: Model's configuration. :return: Sequence with created bias modules.
| create_relative_bias | python | huggingface/transformers | src/transformers/models/udop/modeling_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/modeling_udop.py | Apache-2.0 |
def _prepare_4d_causal_attention_mask_with_cache_position(
attention_mask: torch.Tensor,
sequence_length: int,
target_length: int,
dtype: torch.dtype,
cache_position: torch.Tensor,
batch_size: int,
**kwargs,
):
"""
Creates a causal 4D mask of s... |
Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
`(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing.
Args:
attention_mask (`torch.Tensor`):
A 2D attention mask of sh... | _prepare_4d_causal_attention_mask_with_cache_position | python | huggingface/transformers | src/transformers/models/udop/modeling_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/modeling_udop.py | Apache-2.0 |
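Expanding a 2D padding mask into a causal 4D additive mask, as described above, combines two conditions: key position `j` must not lie in the query's future, and token `j` must not be padding. A nested-list sketch (the library version is vectorized over tensors):

```python
NEG_INF = float("-inf")

def causal_mask_with_padding(attention_mask, sequence_length, target_length):
    """Build a (batch, 1, query_len, kv_len) additive mask from a 2D padding mask.

    0.0 means "attend"; -inf means "masked". Query i may attend to key j
    only if j <= i and token j is not padding.
    """
    out = []
    for b in range(len(attention_mask)):
        rows = []
        for i in range(sequence_length):
            row = []
            for j in range(target_length):
                visible = j <= i and attention_mask[b][j] == 1
                row.append(0.0 if visible else NEG_INF)
            rows.append(row)
        out.append([rows])  # head-broadcast dimension of size 1
    return out

# One sequence of length 3 whose last token is padding
mask = causal_mask_with_padding([[1, 1, 0]], 3, 3)
print(mask[0][0][2])  # [0.0, 0.0, -inf]: padded key stays masked even when causally visible
```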
def forward(
self,
input_ids: Optional[Tensor] = None,
attention_mask: Optional[Tensor] = None,
bbox: Optional[Dict[str, Any]] = None,
pixel_values: Optional[Tensor] = None,
visual_bbox: Optional[Dict[str, Any]] = None,
decoder_input_ids: Optional[Tensor] = None,
... |
bbox (`torch.LongTensor` of shape `({0}, 4)`, *optional*):
Bounding boxes of each input sequence tokens. Selected in the range `[0,
config.max_2d_position_embeddings-1]`. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds ... | forward | python | huggingface/transformers | src/transformers/models/udop/modeling_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/modeling_udop.py | Apache-2.0 |
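The `bbox` argument above expects boxes already normalized into `[0, config.max_2d_position_embeddings - 1]`. Layout models commonly use a 0-1000 scale; a sketch of that normalization from pixel coordinates (the helper name and the 1000 scale are the common convention, not taken from this file — check the model config for the exact range):

```python
def normalize_box(box, width, height, scale=1000):
    """Normalize pixel coordinates (x0, y0, x1, y1) to a 0..scale grid."""
    x0, y0, x1, y1 = box
    return [
        int(scale * x0 / width),
        int(scale * y0 / height),
        int(scale * x1 / width),
        int(scale * y1 / height),
    ]

# A word box on a 400x200-pixel page
print(normalize_box((40, 20, 200, 60), width=400, height=200))  # [100, 100, 500, 300]
```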
def forward(
self,
input_ids: Optional[Tensor] = None,
attention_mask: Optional[Tensor] = None,
bbox: Optional[Dict[str, Any]] = None,
pixel_values: Optional[Tensor] = None,
visual_bbox: Optional[Dict[str, Any]] = None,
decoder_input_ids: Optional[Tensor] = None,
... |
bbox (`torch.LongTensor` of shape `({0}, 4)`, *optional*):
Bounding boxes of each input sequence tokens. Selected in the range `[0,
config.max_2d_position_embeddings-1]`. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds ... | forward | python | huggingface/transformers | src/transformers/models/udop/modeling_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/modeling_udop.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[Tensor] = None,
bbox: Optional[Dict[str, Any]] = None,
attention_mask: Optional[Tensor] = None,
pixel_values: Optional[Tensor] = None,
visual_bbox: Optional[Dict[str, Any]] = None,
head_mask: Optional[Tensor] = None,
... |
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using... | forward | python | huggingface/transformers | src/transformers/models/udop/modeling_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/modeling_udop.py | Apache-2.0 |
def __call__(
self,
images: Optional[ImageInput] = None,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None,
# The following is to capture `text_pair` argument that may be passed as a positional argument.
# See transformers.processing_utils... |
This method first forwards the `images` argument to [`~UdopImageProcessor.__call__`]. In case
[`UdopImageProcessor`] was initialized with `apply_ocr` set to `True`, it passes the obtained words and
bounding boxes along with the additional arguments to [`~UdopTokenizer.__call__`] and returns the... | __call__ | python | huggingface/transformers | src/transformers/models/udop/processing_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/processing_udop.py | Apache-2.0 |
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*)... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
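The mask described above is just an indicator of which positions hold special tokens. A minimal sketch over already-built ids (the concrete special-token ids are illustrative):

```python
def special_tokens_mask(token_ids, special_ids):
    """1 for a special token, 0 for a regular sequence token."""
    return [1 if t in special_ids else 0 for t in token_ids]

# Hypothetical vocabulary: 1 = </s>, 0 = <pad>
print(special_tokens_mask([37, 42, 1], special_ids={0, 1}))  # [0, 0, 1]
```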
def _add_eos_if_not_present(self, token_ids: List[int]) -> List[int]:
"""Do not add eos again if user already added it."""
if len(token_ids) > 0 and token_ids[-1] == self.eos_token_id:
warnings.warn(
f"This sequence already has {self.eos_token}. In future versions this behavi... | Do not add eos again if user already added it. | _add_eos_if_not_present | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
use of token type ids, therefore a list ... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
use of token type ids, therefore a list of zeros is returned.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optiona... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A sequence has the fo... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A sequence has the following format:
- single sequence: `X </s>`
- pair of sequences: `A </s> B </s>`
Args:
token_ids_0 (`List[int... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
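The `X </s>` / `A </s> B </s>` formats described above reduce to two concatenations. A sketch (EOS id 1 is an illustrative choice, not read from this file):

```python
def build_inputs_with_special_tokens(ids_0, ids_1=None, eos_token_id=1):
    """T5-style: `X </s>` for one sequence, `A </s> B </s>` for a pair."""
    if ids_1 is None:
        return ids_0 + [eos_token_id]
    return ids_0 + [eos_token_id] + ids_1 + [eos_token_id]

print(build_inputs_with_special_tokens([10, 11]))        # [10, 11, 1]
print(build_inputs_with_special_tokens([10, 11], [20]))  # [10, 11, 1, 20, 1]
```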
def tokenize(self, text: "TextInput", **kwargs) -> List[str]:
"""
Converts a string to a list of tokens. If `self.legacy` is set to `False`, a prefix token is added unless the
first token is special.
"""
if self.legacy or len(text) == 0:
return super().tokenize(text, ... |
Converts a string to a list of tokens. If `self.legacy` is set to `False`, a prefix token is added unless the
first token is special.
| tokenize | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
def convert_tokens_to_string(self, tokens):
"""Converts a sequence of tokens (string) in a single string."""
# since we manually add the prefix space, we have to remove it when decoding
if tokens[0].startswith(SPIECE_UNDERLINE) and self.add_prefix_space:
tokens[0] = tokens[0][1:]
... | Converts a sequence of tokens (string) in a single string. | convert_tokens_to_string | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
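The `SPIECE_UNDERLINE` handling in the row above is SentencePiece detokenization: the "▁" piece prefix marks a word boundary, and the tokenizer strips the one it added itself when `add_prefix_space` is set. A standalone sketch:

```python
SPIECE_UNDERLINE = "\u2581"  # "▁", SentencePiece's word-boundary marker

def tokens_to_string(tokens, add_prefix_space=True):
    """Join SentencePiece pieces back into text."""
    if tokens and tokens[0].startswith(SPIECE_UNDERLINE) and add_prefix_space:
        # drop the prefix space the tokenizer added manually
        tokens = [tokens[0][1:]] + tokens[1:]
    return "".join(tokens).replace(SPIECE_UNDERLINE, " ")

print(tokens_to_string(["\u2581Hello", "\u2581wor", "ld"]))  # Hello world
```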
def call_boxes(
self,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]],
text_pair: Optional[Union[PreTokenizedInput, List[PreTokenizedInput]]] = None,
boxes: Optional[Union[List[List[int]], List[List[List[int]]]]] = None,
word_labels: Optional[U... |
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences with word-level normalized bounding boxes and optional labels.
Args:
text (`str`, `List[str]`, `List[List[str]]`):
The sequence or batch of sequences ... | call_boxes | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
def batch_encode_plus_boxes(
self,
batch_text_or_text_pairs: Union[
List[TextInput],
List[TextInputPair],
List[PreTokenizedInput],
],
is_pair: Optional[bool] = None,
boxes: Optional[List[List[List[int]]]] = None,
word_labels: Optional[L... |
Tokenize and prepare for the model a list of sequences or a list of pairs of sequences.
Args:
batch_text_or_text_pairs (`List[str]`, `List[Tuple[str, str]]`, `List[List[str]]`, `List[Tuple[List[str], List[str]]]`, and for not-fast tokenizers, also `List[List[int]]`, `List[Tuple[List[int], ... | batch_encode_plus_boxes | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
def encode_boxes(
self,
text: Union[TextInput, PreTokenizedInput, EncodedInput],
text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]] = None,
boxes: Optional[List[List[int]]] = None,
word_labels: Optional[List[List[int]]] = None,
add_special_tokens: bool... |
Converts a string to a sequence of ids (integers), using the tokenizer and vocabulary. Same as doing
`self.convert_tokens_to_ids(self.tokenize(text))`.
Args:
text (`str`, `List[str]` or `List[int]`):
The first sequence to be encoded. This can be a string, a list of st... | encode_boxes | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
def encode_plus_boxes(
self,
text: Union[TextInput, PreTokenizedInput],
text_pair: Optional[PreTokenizedInput] = None,
boxes: Optional[List[List[int]]] = None,
word_labels: Optional[List[List[int]]] = None,
add_special_tokens: bool = True,
padding: Union[bool, str... |
Tokenize and prepare for the model a sequence or a pair of sequences.
<Tip warning={true}>
This method is deprecated, `__call__` should be used instead.
</Tip>
Args:
text (`str`, `List[str]` or (for non-fast tokenizers) `List[int]`):
The first seq... | encode_plus_boxes | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
def _batch_prepare_for_model_boxes(
self,
batch_text_or_text_pairs,
is_pair: Optional[bool] = None,
boxes: Optional[List[List[int]]] = None,
word_labels: Optional[List[List[int]]] = None,
add_special_tokens: bool = True,
padding_strategy: PaddingStrategy = Padding... |
Prepares a sequence of input ids, or a pair of sequences of input ids, so that it can be used by the model. It
adds special tokens, truncates sequences if overflowing while taking into account the special tokens and
manages a moving window (with user-defined stride) for overflowing tokens
... | _batch_prepare_for_model_boxes | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
def prepare_for_model_boxes(
self,
text: Union[TextInput, PreTokenizedInput],
text_pair: Optional[PreTokenizedInput] = None,
boxes: Optional[List[List[int]]] = None,
word_labels: Optional[List[int]] = None,
add_special_tokens: bool = True,
padding: Union[bool, str... |
Prepares a sequence or a pair of sequences so that it can be used by the model. It adds special tokens,
truncates sequences if overflowing while taking into account the special tokens and manages a moving window
(with user-defined stride) for overflowing tokens.
Word-level `boxes` are ... | prepare_for_model_boxes | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
def truncate_sequences(
self,
ids: List[int],
token_boxes: List[List[int]],
pair_ids: Optional[List[int]] = None,
pair_token_boxes: Optional[List[List[int]]] = None,
labels: Optional[List[int]] = None,
num_tokens_to_remove: int = 0,
truncation_strategy: Un... |
Truncates a sequence pair in-place following the strategy.
Args:
ids (`List[int]`):
Tokenized input ids of the first sequence. Can be obtained from a string by chaining the `tokenize` and
`convert_tokens_to_ids` methods.
token_boxes (`List[List[i... | truncate_sequences | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
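Truncation here must keep `ids` and `token_boxes` aligned: every dropped token also drops its bounding box. A sketch of the longest-first strategy, which removes one token at a time from whichever sequence is currently longer (illustrative, not the library implementation):

```python
def truncate_longest_first(ids, boxes, pair_ids, pair_boxes, num_tokens_to_remove):
    """Drop tokens (and their boxes) from the longer sequence, one at a time."""
    ids, boxes = list(ids), list(boxes)
    pair_ids, pair_boxes = list(pair_ids), list(pair_boxes)
    for _ in range(num_tokens_to_remove):
        if len(ids) >= len(pair_ids):
            ids.pop()
            boxes.pop()     # keep ids and boxes aligned
        else:
            pair_ids.pop()
            pair_boxes.pop()
    return ids, boxes, pair_ids, pair_boxes

ids, boxes, pair_ids, pair_boxes = truncate_longest_first(
    [1, 2, 3, 4], [[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]],
    [5, 6], [[0, 0, 2, 2], [0, 0, 2, 2]],
    num_tokens_to_remove=2,
)
print(ids, pair_ids)  # [1, 2] [5, 6]
```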
def _pad(
self,
encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding],
max_length: Optional[int] = None,
padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
pad_to_multiple_of: Optional[int] = None,
padding_side: Optional[str] = None,
return_at... |
Pad encoded inputs (on left/right and up to predefined length or max length in the batch)
Args:
encoded_inputs:
Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`).
max_length: maximum length of the returned list and opt... | _pad | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop.py | Apache-2.0 |
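Padding a layout tokenizer's output pads three aligned structures at once: the ids, the boxes, and the attention mask. A sketch assuming pad id 0 and a `[0, 0, 0, 0]` pad box (both illustrative defaults):

```python
def pad_encoded(ids, boxes, max_length, pad_token_id=0, pad_box=(0, 0, 0, 0),
                padding_side="right"):
    """Pad ids and boxes to max_length; the 1/0 attention mask marks real tokens."""
    diff = max_length - len(ids)
    attn = [1] * len(ids)
    if diff <= 0:
        return ids, boxes, attn
    pad_boxes = [list(pad_box) for _ in range(diff)]  # fresh lists, no aliasing
    if padding_side == "right":
        return ids + [pad_token_id] * diff, boxes + pad_boxes, attn + [0] * diff
    return [pad_token_id] * diff + ids, pad_boxes + boxes, [0] * diff + attn

ids, boxes, attn = pad_encoded([7, 8], [[1, 1, 2, 2], [3, 3, 4, 4]], max_length=4)
print(ids, attn)  # [7, 8, 0, 0] [1, 1, 0, 0]
```

Keeping the three lists the same length is what lets the model batch variable-length documents.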
def call_boxes(
self,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]],
text_pair: Optional[Union[PreTokenizedInput, List[PreTokenizedInput]]] = None,
boxes: Optional[Union[List[List[int]], List[List[List[int]]]]] = None,
word_labels: Optional[U... |
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences with word-level normalized bounding boxes and optional labels.
Args:
text (`str`, `List[str]`, `List[List[str]]`):
The sequence or batch of sequences ... | call_boxes | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop_fast.py | Apache-2.0 |
def batch_encode_plus_boxes(
self,
batch_text_or_text_pairs: Union[
List[TextInput],
List[TextInputPair],
List[PreTokenizedInput],
],
is_pair: Optional[bool] = None,
boxes: Optional[List[List[List[int]]]] = None,
word_labels: Optional[L... |
Tokenize and prepare for the model a list of sequences or a list of pairs of sequences.
<Tip warning={true}>
This method is deprecated, `__call__` should be used instead.
</Tip>
Args:
batch_text_or_text_pairs (`List[str]`, `List[Tuple[str, str]]`, `List[List[str]... | batch_encode_plus_boxes | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop_fast.py | Apache-2.0 |
def encode_boxes(
self,
text: Union[TextInput, PreTokenizedInput, EncodedInput],
text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]] = None,
boxes: Optional[List[List[int]]] = None,
word_labels: Optional[List[List[int]]] = None,
add_special_tokens: bool... |
Converts a string to a sequence of ids (integers), using the tokenizer and vocabulary. Same as doing
`self.convert_tokens_to_ids(self.tokenize(text))`.
Args:
text (`str`, `List[str]` or `List[int]`):
The first sequence to be encoded. This can be a string, a list of st... | encode_boxes | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop_fast.py | Apache-2.0 |
def encode_plus_boxes(
self,
text: Union[TextInput, PreTokenizedInput],
text_pair: Optional[PreTokenizedInput] = None,
boxes: Optional[List[List[int]]] = None,
word_labels: Optional[List[List[int]]] = None,
add_special_tokens: bool = True,
padding: Union[bool, str... |
Tokenize and prepare for the model a sequence or a pair of sequences.
<Tip warning={true}>
This method is deprecated, `__call__` should be used instead.
</Tip>
Args:
text (`str`, `List[str]` or (for non-fast tokenizers) `List[int]`):
The first seq... | encode_plus_boxes | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop_fast.py | Apache-2.0 |
def _pad(
self,
encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding],
max_length: Optional[int] = None,
padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
pad_to_multiple_of: Optional[int] = None,
padding_side: Optional[str] = None,
return_at... |
Pad encoded inputs (on left/right and up to predefined length or max length in the batch)
Args:
encoded_inputs:
Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`).
max_length: maximum length of the returned list and opt... | _pad | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop_fast.py | Apache-2.0 |
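The `_pad` docstring mentions padding up to a predefined length with an optional `pad_to_multiple_of`. A minimal sketch of that multiple-of behaviour for a single token-id list (the helper name and pad id are illustrative, not the tokenizer's API):

```python
def pad_to_multiple(ids, pad_id=0, pad_to_multiple_of=8, padding_side="right"):
    """Pad a token-id list so its length is a multiple of pad_to_multiple_of,
    on the requested side. Sketch of the behaviour described above; the real
    _pad also builds attention masks and handles batched inputs."""
    remainder = len(ids) % pad_to_multiple_of
    if remainder == 0:
        return ids
    pad = [pad_id] * (pad_to_multiple_of - remainder)
    return ids + pad if padding_side == "right" else pad + ids
```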
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequen... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequence has the following format:
- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s></s> B </s>`
Args:
... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop_fast.py | Apache-2.0 |
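A sketch of the special-token layout that `build_inputs_with_special_tokens` describes: `<s> X </s>` for a single sequence and `<s> A </s></s> B </s>` for a pair. The `bos_id`/`eos_id` values below are placeholders, not the model's real vocabulary ids:

```python
def build_inputs_with_special_tokens(token_ids_0, token_ids_1=None,
                                     bos_id=0, eos_id=2):
    """Wrap one or two id sequences in the XLM-RoBERTa layout noted above.

    bos_id stands in for <s> and eos_id for </s>; the concrete ids are
    assumptions for illustration.
    """
    if token_ids_1 is None:
        return [bos_id] + token_ids_0 + [eos_id]
    return [bos_id] + token_ids_0 + [eos_id, eos_id] + token_ids_1 + [eos_id]
```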
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefor... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefore a list of zeros is returned.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`,... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/udop/tokenization_udop_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/udop/tokenization_udop_fast.py | Apache-2.0 |
def t5x_attention_lookup(params, i, prefix, layer_name="attention"):
"""Returns the KOQV parameters of (self-)attention. Does not transpose."""
    k_tmp = np.ascontiguousarray(params[f"{prefix}/{prefix}/{layer_name}/key/kernel"][:, i, :, :])
k = k_tmp.reshape(k_tmp.shape[0], k_tmp.shape[1] * k_tmp.shap... | Returns the KOQV parameters of (self-)attention. Does not transpose. | t5x_attention_lookup | python | huggingface/transformers | src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py | Apache-2.0 |
def t5x_mlp_lookup(params, i, prefix, split_mlp_wi=False):
"""Returns the MLP parameters of a layer. Does not transpose."""
if split_mlp_wi:
wi_0 = params[f"{prefix}/{prefix}/mlp/wi_0/kernel"][:, i, :]
wi_1 = params[f"{prefix}/{prefix}/mlp/wi_1/kernel"][:, i, :]
wi = (wi_0, wi_1)
els... | Returns the MLP parameters of a layer. Does not transpose. | t5x_mlp_lookup | python | huggingface/transformers | src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py | Apache-2.0 |
def convert_t5x_to_pytorch(
variables: dict, *, num_layers: int, is_encoder_only: bool, scalable_attention: bool = False
):
"""Converts the parameters from T5X-Flax to Transformers-PyTorch."""
old = traverse_util.flatten_dict(variables["target"])
old = {"/".join(k): v for k, v in old.items()}
# v1.... | Converts the parameters from T5X-Flax to Transformers-PyTorch. | convert_t5x_to_pytorch | python | huggingface/transformers | src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py | Apache-2.0 |
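`convert_t5x_to_pytorch` starts by flattening the nested T5X parameter tree with `traverse_util.flatten_dict` and joining the key tuples with `/`. A pure-Python sketch of that flattening step (not flax's actual implementation):

```python
def flatten_dict(nested, parent=()):
    """Flatten a nested dict into {("a", "b", "c"): value} entries,
    mirroring what traverse_util.flatten_dict produces (sketch only)."""
    flat = {}
    for key, value in nested.items():
        path = parent + (key,)
        if isinstance(value, dict):
            flat.update(flatten_dict(value, path))  # recurse into sub-dicts
        else:
            flat[path] = value
    return flat

# Toy parameter tree standing in for variables["target"]
params = {"encoder": {"mlp": {"wi": 1, "wo": 2}}, "token_embedder": {"embedding": 3}}
flat = {"/".join(k): v for k, v in flatten_dict(params).items()}
```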
def make_state_dict(converted_params, is_encoder_only: bool):
"""Prepares a state dict for the PyTorch model."""
# Make a state dict with torch tensors.
state_dict = collections.OrderedDict([(k, torch.from_numpy(v.copy())) for (k, v) in converted_params.items()])
# Add what is missing.
if "encoder.... | Prepares a state dict for the PyTorch model. | make_state_dict | python | huggingface/transformers | src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py | Apache-2.0 |
def load_t5x_weights_in_t5(model, config, t5x_checkpoint_path, is_encoder_only, scalable_attention):
"""Replaces the params in model with the T5X converted params."""
variables = checkpoints.load_t5x_checkpoint(t5x_checkpoint_path)
converted = convert_t5x_to_pytorch(
variables, num_layers=config.num... | Replaces the params in model with the T5X converted params. | load_t5x_weights_in_t5 | python | huggingface/transformers | src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py | Apache-2.0 |
def convert_t5x_checkpoint_to_pytorch(
t5x_checkpoint_path,
config_file,
pytorch_dump_path,
is_encoder_only: bool = False,
scalable_attention: bool = False,
):
"""Loads the config and model, converts the T5X checkpoint, and saves a PyTorch checkpoint."""
# Initialise PyTorch model
config... | Loads the config and model, converts the T5X checkpoint, and saves a PyTorch checkpoint. | convert_t5x_checkpoint_to_pytorch | python | huggingface/transformers | src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py | Apache-2.0 |
def __init__(self, hidden_size, eps=1e-6):
"""
Construct a layernorm module in the UMT5 style. No bias and no subtraction of mean.
"""
super().__init__()
self.weight = nn.Parameter(torch.ones(hidden_size))
self.variance_epsilon = eps |
Construct a layernorm module in the UMT5 style. No bias and no subtraction of mean.
| __init__ | python | huggingface/transformers | src/transformers/models/umt5/modeling_umt5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/modeling_umt5.py | Apache-2.0 |
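The `UMT5LayerNorm` docstring notes there is no bias and no mean subtraction, i.e. an RMS-style norm. A NumPy sketch of the corresponding forward pass (the real module also computes the variance in float32/float64 and casts half-precision inputs back after normalizing):

```python
import numpy as np

def t5_layer_norm(x, weight, eps=1e-6):
    """RMS-style layer norm: scale by the root-mean-square of the last axis.
    No mean subtraction and no bias, matching the docstring above."""
    variance = np.mean(x.astype(np.float64) ** 2, axis=-1, keepdims=True)
    return weight * (x / np.sqrt(variance + eps))
```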
def _relative_position_bucket(self, relative_position):
"""
Adapted from Mesh Tensorflow:
https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
Translate relative position to a bucket number for relative a... |
Adapted from Mesh Tensorflow:
https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
Translate relative position to a bucket number for relative attention. The relative position is defined as
memory_positi... | _relative_position_bucket | python | huggingface/transformers | src/transformers/models/umt5/modeling_umt5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/modeling_umt5.py | Apache-2.0 |
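A pure-Python sketch of the bucketing scheme `_relative_position_bucket` describes: small relative distances get individual buckets, while larger distances share logarithmically sized buckets up to a maximum distance. The default values mirror common T5 settings and are assumptions here:

```python
import math

def relative_position_bucket(relative_position, bidirectional=True,
                             num_buckets=32, max_distance=128):
    """Map a relative position (memory_pos - query_pos) to a bucket id.
    Sketch of the scheme described above, not the exact model code."""
    bucket = 0
    if bidirectional:
        num_buckets //= 2           # half the buckets for each direction
        if relative_position > 0:
            bucket += num_buckets
        relative_position = abs(relative_position)
    else:
        relative_position = -min(relative_position, 0)
    max_exact = num_buckets // 2
    if relative_position < max_exact:
        return bucket + relative_position   # small distances: exact buckets
    # larger distances: logarithmically spaced buckets, clamped at the end
    val = max_exact + int(
        math.log(relative_position / max_exact)
        / math.log(max_distance / max_exact)
        * (num_buckets - max_exact)
    )
    return bucket + min(val, num_buckets - 1)
```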
def _prepare_4d_causal_attention_mask_with_cache_position(
attention_mask: torch.Tensor,
sequence_length: int,
target_length: int,
dtype: torch.dtype,
cache_position: torch.Tensor,
batch_size: int,
**kwargs,
):
"""
Creates a causal 4D mask of s... |
Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
`(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing.
Args:
attention_mask (`torch.Tensor`):
A 2D attention mask of sh... | _prepare_4d_causal_attention_mask_with_cache_position | python | huggingface/transformers | src/transformers/models/umt5/modeling_umt5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/modeling_umt5.py | Apache-2.0 |
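A NumPy sketch of the mask construction the docstring describes: turn a 2D `(batch, key_length)` padding mask into a 4D additive causal mask. This simplified version skips the `cache_position` and dtype-minimum handling present in the real helper:

```python
import numpy as np

def causal_4d_mask(attention_mask_2d, sequence_length, target_length, min_value=-1e9):
    """Build a (batch, 1, query_len, key_len) additive mask.
    Future positions and padded positions get a large negative value;
    attended positions stay 0. Illustrative sketch only."""
    batch = attention_mask_2d.shape[0]
    # strict upper triangle = future positions, blocked
    causal = np.triu(np.full((sequence_length, target_length), min_value), k=1)
    mask = np.broadcast_to(causal, (batch, 1, sequence_length, target_length)).copy()
    # positions marked 0 in the 2D padding mask are blocked for every query
    padding = attention_mask_2d[:, None, None, :] == 0
    mask[np.broadcast_to(padding, mask.shape)] = min_value
    return mask
```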
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.FloatTensor] = N... |
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. UMT5 is a model with relative position embeddings so
you should be able to pad the inputs on both the right and the left.
Indices can be obtained usi... | forward | python | huggingface/transformers | src/transformers/models/umt5/modeling_umt5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/modeling_umt5.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.FloatTensor] = N... |
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. UMT5 is a model with relative position embeddings so
you should be able to pad the inputs on both the right and the left.
Indices can be obtained usi... | forward | python | huggingface/transformers | src/transformers/models/umt5/modeling_umt5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/modeling_umt5.py | Apache-2.0 |