| code (string, length 66–870k) | docstring (string, length 19–26.7k) | func_name (string, length 1–138) | language (1 class) | repo (string, length 7–68) | path (string, length 5–324) | url (string, length 46–389) | license (7 classes) |
|---|---|---|---|---|---|---|---|
def window_reverse(windows, window_size, height, width):
"""
Merges windows to produce higher resolution features.
"""
num_channels = windows.shape[-1]
windows = windows.view(-1, height // window_size, width // window_size, window_size, window_size, num_channels)
windows = windows.permute(0, 1, ... |
Merges windows to produce higher resolution features.
| window_reverse | python | huggingface/transformers | src/transformers/models/swin/modeling_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_swin.py | Apache-2.0 |
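The `window_reverse` above is the inverse of `window_partition`: both are a reshape–transpose round trip. A minimal NumPy sketch of the same index gymnastics (NumPy stands in for torch here; names are illustrative):

```python
import numpy as np

def window_partition(x, window_size):
    # (batch, height, width, channels) -> (num_windows * batch, ws, ws, channels)
    b, h, w, c = x.shape
    x = x.reshape(b, h // window_size, window_size, w // window_size, window_size, c)
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, c)

def window_reverse(windows, window_size, height, width):
    # Inverse of window_partition: merge windows back into a feature map
    c = windows.shape[-1]
    x = windows.reshape(-1, height // window_size, width // window_size,
                        window_size, window_size, c)
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(-1, height, width, c)

x = np.arange(2 * 8 * 8 * 3).reshape(2, 8, 8, 3).astype(np.float32)
windows = window_partition(x, 4)            # (2 * 4, 4, 4, 3)
restored = window_reverse(windows, 4, 8, 8)
assert windows.shape == (8, 4, 4, 3)
assert np.array_equal(restored, x)          # the round trip is lossless
```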
def interpolate_pos_encoding(self, embeddings: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""
This method interpolates the pre-trained position encodings so the model can be used on higher-resolution
images. It is also adapted to support torch.jit tracing.
... |
This method interpolates the pre-trained position encodings so the model can be used on higher-resolution
images. It is also adapted to support torch.jit tracing.
Adapted from:
- https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362... | interpolate_pos_encoding | python | huggingface/transformers | src/transformers/models/swin/modeling_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_swin.py | Apache-2.0 |
def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
"""
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
h... |
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate pape... | drop_path | python | huggingface/transformers | src/transformers/models/swin/modeling_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_swin.py | Apache-2.0 |
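Stochastic depth, as described in that docstring, drops whole residual branches per sample and rescales the survivors by `1 / keep_prob` so the expected activation is unchanged. A NumPy sketch of the idea (not the library's torch implementation):

```python
import numpy as np

def drop_path(x, drop_prob=0.0, training=False, rng=None):
    # Identity at inference time or when nothing is dropped
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    rng = rng or np.random.default_rng()
    # One Bernoulli draw per sample, broadcast over the remaining dims
    mask = rng.random((x.shape[0],) + (1,) * (x.ndim - 1)) < keep_prob
    # Rescale survivors so the expected output equals the input
    return x * mask / keep_prob

x = np.ones((4, 3))
out = drop_path(x, drop_prob=0.5, training=True, rng=np.random.default_rng(0))
# Each sample is either dropped (all zeros) or rescaled to 2.0
assert set(np.unique(out)) <= {0.0, 2.0}
assert drop_path(x, drop_prob=0.5, training=False) is x
```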
def __init__(self, config, add_pooling_layer=True, use_mask_token=False):
r"""
add_pooling_layer (`bool`, *optional*, defaults to `True`):
Whether or not to apply pooling layer.
use_mask_token (`bool`, *optional*, defaults to `False`):
Whether or not to create and apply m... |
add_pooling_layer (`bool`, *optional*, defaults to `True`):
Whether or not to apply pooling layer.
use_mask_token (`bool`, *optional*, defaults to `False`):
Whether or not to create and apply mask tokens in the embedding layer.
| __init__ | python | huggingface/transformers | src/transformers/models/swin/modeling_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_swin.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
bool_masked_pos: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpola... |
bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, num_patches)`, *optional*):
Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).
| forward | python | huggingface/transformers | src/transformers/models/swin/modeling_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_swin.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
bool_masked_pos: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpola... |
bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, num_patches)`):
Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).
Examples:
```python
>>> from transformers import AutoImageProcessor, SwinForMaskedImageModeling
>>> impo... | forward | python | huggingface/transformers | src/transformers/models/swin/modeling_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_swin.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
labels: Optional[torch.LongTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_en... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1`, a regression loss is computed (mean-squared loss); if
`config.n... | forward | python | huggingface/transformers | src/transformers/models/swin/modeling_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_swin.py | Apache-2.0 |
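The `labels` contract above (regression when `config.num_labels == 1`, cross-entropy otherwise) can be sketched in NumPy; this illustrates the loss selection only, not the Transformers code itself:

```python
import numpy as np

def classification_loss(logits, labels, num_labels):
    # Problem type is inferred from num_labels, as the docstring describes
    if num_labels == 1:
        # Regression: mean-squared error against float targets
        return np.mean((logits.squeeze(-1) - labels) ** 2)
    # Single-label classification: cross-entropy over the label dim
    shifted = logits - logits.max(-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(-1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Uniform logits over 4 classes give a cross-entropy of log(4)
loss = classification_loss(np.zeros((2, 4)), np.array([0, 1]), num_labels=4)
assert np.isclose(loss, np.log(4.0))
```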
def forward(
self,
pixel_values: torch.Tensor,
output_hidden_states: Optional[bool] = None,
output_attentions: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> BackboneOutput:
"""
Returns:
Examples:
```python
>>> from t... |
Returns:
Examples:
```python
>>> from transformers import AutoImageProcessor, AutoBackbone
>>> import torch
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(... | forward | python | huggingface/transformers | src/transformers/models/swin/modeling_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_swin.py | Apache-2.0 |
def window_partition(input_feature: tf.Tensor, window_size: int) -> tf.Tensor:
"""
Partitions the given input into windows.
"""
batch_size, height, width, num_channels = shape_list(input_feature)
input_feature = tf.reshape(
input_feature,
(batch_size, height // window_size, window_si... |
Partitions the given input into windows.
| window_partition | python | huggingface/transformers | src/transformers/models/swin/modeling_tf_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_tf_swin.py | Apache-2.0 |
def window_reverse(windows: tf.Tensor, window_size: int, height: int, width: int) -> tf.Tensor:
"""
Merges windows to produce higher resolution features.
"""
x = tf.shape(windows)[0]
y = tf.cast(height * width / (window_size * window_size), tf.int32)
batch_size = tf.math.floordiv(x, y)
windo... |
Merges windows to produce higher resolution features.
| window_reverse | python | huggingface/transformers | src/transformers/models/swin/modeling_tf_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_tf_swin.py | Apache-2.0 |
def drop_path(
input: tf.Tensor, drop_prob: float = 0.0, training: bool = False, scale_by_keep: bool = True
) -> tf.Tensor:
"""
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
"""
if drop_prob == 0.0 or not training:
return input
keep_prob = 1 - d... |
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
| drop_path | python | huggingface/transformers | src/transformers/models/swin/modeling_tf_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_tf_swin.py | Apache-2.0 |
def normalize_data_format(value: str) -> str:
"""
From tensorflow addons
https://github.com/tensorflow/addons/blob/8cec33fcaaf1cf90aec7bdd55a0fcdbb251ce5c2/tensorflow_addons/utils/keras_utils.py#L71
"""
if value is None:
value = keras.backend.image_data_format()
data_format = value.lower... |
From tensorflow addons
https://github.com/tensorflow/addons/blob/8cec33fcaaf1cf90aec7bdd55a0fcdbb251ce5c2/tensorflow_addons/utils/keras_utils.py#L71
| normalize_data_format | python | huggingface/transformers | src/transformers/models/swin/modeling_tf_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_tf_swin.py | Apache-2.0 |
def call(
self,
pixel_values: tf.Tensor | None = None,
bool_masked_pos: tf.Tensor | None = None,
head_mask: tf.Tensor | None = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
... |
bool_masked_pos (`tf.Tensor` of shape `(batch_size, num_patches)`, *optional*):
Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).
| call | python | huggingface/transformers | src/transformers/models/swin/modeling_tf_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_tf_swin.py | Apache-2.0 |
def call(
self,
pixel_values: tf.Tensor | None = None,
bool_masked_pos: tf.Tensor | None = None,
head_mask: tf.Tensor | None = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
... |
bool_masked_pos (`tf.Tensor` of shape `(batch_size, num_patches)`):
Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).
Returns:
Examples:
```python
>>> from transformers import AutoImageProcessor, TFSwinForMaskedImageModeling
... | call | python | huggingface/transformers | src/transformers/models/swin/modeling_tf_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_tf_swin.py | Apache-2.0 |
def call(
self,
pixel_values: tf.Tensor | None = None,
head_mask: tf.Tensor | None = None,
labels: tf.Tensor | None = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
training:... |
labels (`tf.Tensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`config.num_labe... | call | python | huggingface/transformers | src/transformers/models/swin/modeling_tf_swin.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin/modeling_tf_swin.py | Apache-2.0 |
def pad(
self,
image: np.ndarray,
size: int,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
):
"""
Pad an image to make the height and width divisible by `size`.
Args:
... |
Pad an image to make the height and width divisible by `size`.
Args:
image (`np.ndarray`):
Image to pad.
size (`int`):
The size to make the height and width divisible by.
data_format (`str` or `ChannelDimension`, *optional*):
... | pad | python | huggingface/transformers | src/transformers/models/swin2sr/image_processing_swin2sr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin2sr/image_processing_swin2sr.py | Apache-2.0 |
def preprocess(
self,
images: ImageInput,
do_rescale: Optional[bool] = None,
rescale_factor: Optional[float] = None,
do_pad: Optional[bool] = None,
pad_size: Optional[int] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
data_format: Union[... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
passing in images with pixel values between 0 and 1, set `do_rescale=False`.
... | preprocess | python | huggingface/transformers | src/transformers/models/swin2sr/image_processing_swin2sr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin2sr/image_processing_swin2sr.py | Apache-2.0 |
def pad(self, images: "torch.Tensor", size: int) -> "torch.Tensor":
"""
Pad an image to make the height and width divisible by `size`.
Args:
images (`torch.Tensor`):
Images to pad.
size (`int`):
The size to make the height and width divisi... |
Pad an image to make the height and width divisible by `size`.
Args:
images (`torch.Tensor`):
Images to pad.
size (`int`):
The size to make the height and width divisible by.
Returns:
`torch.Tensor`: The padded images.
... | pad | python | huggingface/transformers | src/transformers/models/swin2sr/image_processing_swin2sr_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin2sr/image_processing_swin2sr_fast.py | Apache-2.0 |
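Padding to a multiple of `size` only needs the complement of each spatial dimension modulo `size`. A NumPy sketch (channel-last layout and `symmetric` padding are assumptions here, not a claim about the library's defaults):

```python
import numpy as np

def pad_to_multiple(image, size):
    # image: (height, width, channels); pad bottom/right so that
    # both height and width become divisible by `size`
    h, w = image.shape[:2]
    pad_h = (size - h % size) % size   # 0 when already divisible
    pad_w = (size - w % size) % size
    return np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode="symmetric")

img = np.ones((10, 13, 3))
padded = pad_to_multiple(img, 8)
assert padded.shape == (16, 16, 3)
```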
def window_partition(input_feature, window_size):
"""
Partitions the given input into windows.
"""
batch_size, height, width, num_channels = input_feature.shape
input_feature = input_feature.view(
batch_size, height // window_size, window_size, width // window_size, window_size, num_channels... |
Partitions the given input into windows.
| window_partition | python | huggingface/transformers | src/transformers/models/swin2sr/modeling_swin2sr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin2sr/modeling_swin2sr.py | Apache-2.0 |
def window_reverse(windows, window_size, height, width):
"""
Merges windows to produce higher resolution features.
"""
num_channels = windows.shape[-1]
windows = windows.view(-1, height // window_size, width // window_size, window_size, window_size, num_channels)
windows = windows.permute(0, 1, ... |
Merges windows to produce higher resolution features.
| window_reverse | python | huggingface/transformers | src/transformers/models/swin2sr/modeling_swin2sr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin2sr/modeling_swin2sr.py | Apache-2.0 |
def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
"""
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
h... |
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate pape... | drop_path | python | huggingface/transformers | src/transformers/models/swin2sr/modeling_swin2sr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin2sr/modeling_swin2sr.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
labels: Optional[torch.LongTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optio... |
Example:
```python
>>> import torch
>>> import numpy as np
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution
>>> processor = AutoImageProcessor.from_pretrained("caidas/sw... | forward | python | huggingface/transformers | src/transformers/models/swin2sr/modeling_swin2sr.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swin2sr/modeling_swin2sr.py | Apache-2.0 |
def window_partition(input_feature, window_size):
"""
Partitions the given input into windows.
"""
batch_size, height, width, num_channels = input_feature.shape
input_feature = input_feature.view(
batch_size, height // window_size, window_size, width // window_size, window_size, num_channels... |
Partitions the given input into windows.
| window_partition | python | huggingface/transformers | src/transformers/models/swinv2/modeling_swinv2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swinv2/modeling_swinv2.py | Apache-2.0 |
def window_reverse(windows, window_size, height, width):
"""
Merges windows to produce higher resolution features.
"""
num_channels = windows.shape[-1]
windows = windows.view(-1, height // window_size, width // window_size, window_size, window_size, num_channels)
windows = windows.permute(0, 1, ... |
Merges windows to produce higher resolution features.
| window_reverse | python | huggingface/transformers | src/transformers/models/swinv2/modeling_swinv2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swinv2/modeling_swinv2.py | Apache-2.0 |
def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
"""
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
h... |
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate pape... | drop_path | python | huggingface/transformers | src/transformers/models/swinv2/modeling_swinv2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swinv2/modeling_swinv2.py | Apache-2.0 |
def interpolate_pos_encoding(self, embeddings: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""
This method interpolates the pre-trained position encodings so the model can be used on higher-resolution
images. It is also adapted to support torch.jit tracing.
... |
This method interpolates the pre-trained position encodings so the model can be used on higher-resolution
images. It is also adapted to support torch.jit tracing.
Adapted from:
- https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362... | interpolate_pos_encoding | python | huggingface/transformers | src/transformers/models/swinv2/modeling_swinv2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swinv2/modeling_swinv2.py | Apache-2.0 |
def __init__(self, config, add_pooling_layer=True, use_mask_token=False):
r"""
add_pooling_layer (`bool`, *optional*, defaults to `True`):
Whether or not to apply pooling layer.
use_mask_token (`bool`, *optional*, defaults to `False`):
Whether or not to create and apply m... |
add_pooling_layer (`bool`, *optional*, defaults to `True`):
Whether or not to apply pooling layer.
use_mask_token (`bool`, *optional*, defaults to `False`):
Whether or not to create and apply mask tokens in the embedding layer.
| __init__ | python | huggingface/transformers | src/transformers/models/swinv2/modeling_swinv2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swinv2/modeling_swinv2.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
bool_masked_pos: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpola... |
bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, num_patches)`, *optional*):
Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).
| forward | python | huggingface/transformers | src/transformers/models/swinv2/modeling_swinv2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swinv2/modeling_swinv2.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
bool_masked_pos: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpola... |
bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, num_patches)`):
Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).
Examples:
```python
>>> from transformers import AutoImageProcessor, Swinv2ForMaskedImageModeling
>>> im... | forward | python | huggingface/transformers | src/transformers/models/swinv2/modeling_swinv2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swinv2/modeling_swinv2.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
labels: Optional[torch.LongTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_en... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`config.n... | forward | python | huggingface/transformers | src/transformers/models/swinv2/modeling_swinv2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swinv2/modeling_swinv2.py | Apache-2.0 |
def forward(
self,
pixel_values: Tensor,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> BackboneOutput:
r"""
Examples:
```python
>>> from transformers import Auto... |
Examples:
```python
>>> from transformers import AutoImageProcessor, AutoBackbone
>>> import torch
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, ... | forward | python | huggingface/transformers | src/transformers/models/swinv2/modeling_swinv2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/swinv2/modeling_swinv2.py | Apache-2.0 |
def rename_base_flax_keys(flax_key_tuple, flax_tensor):
"""
Post renaming of basic JAX keys to pytorch.
"""
if flax_key_tuple[-1] == "kernel" and flax_tensor.ndim == 3:
# expert layer
flax_key_tuple = flax_key_tuple[:-1] + ("weight",)
flax_tensor = torch.permute(flax_tensor, (0, ... |
Post renaming of basic JAX keys to pytorch.
| rename_base_flax_keys | python | huggingface/transformers | src/transformers/models/switch_transformers/convert_big_switch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/switch_transformers/convert_big_switch.py | Apache-2.0 |
def router_z_loss_func(router_logits: torch.Tensor) -> float:
r"""
Compute the router z-loss implemented in PyTorch.
The router z-loss was introduced in [Designing Effective Sparse Expert Models](https://arxiv.org/abs/2202.08906).
It encourages router logits to remain small in an effort to improve stab... |
Compute the router z-loss implemented in PyTorch.
The router z-loss was introduced in [Designing Effective Sparse Expert Models](https://arxiv.org/abs/2202.08906).
It encourages router logits to remain small in an effort to improve stability.
Args:
router_logits (`float`):
Input l... | router_z_loss_func | python | huggingface/transformers | src/transformers/models/switch_transformers/modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/switch_transformers/modeling_switch_transformers.py | Apache-2.0 |
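The z-loss penalizes the squared log-partition function of the router logits, which pushes their magnitudes down. A NumPy sketch with a numerically stable logsumexp:

```python
import numpy as np

def router_z_loss(router_logits):
    # router_logits: (batch, seq_len, num_experts)
    # z = logsumexp over the expert dim; loss = mean of z^2 over tokens
    m = router_logits.max(axis=-1, keepdims=True)
    log_z = m.squeeze(-1) + np.log(np.exp(router_logits - m).sum(axis=-1))
    return (log_z ** 2).mean()

logits = np.zeros((2, 4, 8))
# logsumexp of eight zeros is log(8), so the loss is log(8)^2
assert np.isclose(router_z_loss(logits), np.log(8.0) ** 2)
```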
def load_balancing_loss_func(router_probs: torch.Tensor, expert_indices: torch.Tensor) -> float:
r"""
Computes auxiliary load balancing loss as in Switch Transformer - implemented in Pytorch.
See Switch Transformer (https://arxiv.org/abs/2101.03961) for more details. This function implements the loss
f... |
Computes auxiliary load balancing loss as in Switch Transformer - implemented in Pytorch.
See Switch Transformer (https://arxiv.org/abs/2101.03961) for more details. This function implements the loss
function presented in equations (4) - (6) of the paper. It aims at penalizing cases where the routing betw... | load_balancing_loss_func | python | huggingface/transformers | src/transformers/models/switch_transformers/modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/switch_transformers/modeling_switch_transformers.py | Apache-2.0 |
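Equations (4)–(6) of the Switch Transformer paper multiply, per expert, the fraction of tokens routed to it (hard assignment) by the mean router probability it receives (soft), and sum. A hedged NumPy sketch; the `num_experts ** 2` scaling is one common convention, chosen so that perfectly uniform routing gives a loss of 1.0:

```python
import numpy as np

def load_balancing_loss(router_probs, expert_indices):
    # router_probs: (batch, seq_len, num_experts); expert_indices: (batch, seq_len)
    num_experts = router_probs.shape[-1]
    # Fraction of tokens routed to each expert (hard argmax assignment)
    expert_mask = np.eye(num_experts)[expert_indices]     # one-hot
    tokens_per_expert = expert_mask.mean(axis=(0, 1))
    # Mean router probability assigned to each expert (soft)
    prob_per_expert = router_probs.mean(axis=(0, 1))
    return num_experts ** 2 * np.mean(tokens_per_expert * prob_per_expert)

# Perfectly uniform routing attains the minimum value of 1.0
probs = np.full((2, 8, 4), 0.25)
idx = np.tile(np.arange(4), (2, 2))    # each expert gets 1/4 of the tokens
assert np.isclose(load_balancing_loss(probs, idx), 1.0)
```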
def _compute_router_probabilities(self, hidden_states: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
r"""
Computes router probabilities from input hidden states.
Args:
hidden_states (`torch.Tensor`):
(batch_size, sequence_length, hidden_dim) from which router p... |
Computes router probabilities from input hidden states.
Args:
hidden_states (`torch.Tensor`):
(batch_size, sequence_length, hidden_dim) from which router probabilities are computed.
Returns:
router_probabilities (`torch.Tensor`):
Tensor o... | _compute_router_probabilities | python | huggingface/transformers | src/transformers/models/switch_transformers/modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/switch_transformers/modeling_switch_transformers.py | Apache-2.0 |
def _cast_classifier(self):
r"""
`bitsandbytes` `Linear8bitLt` layers do not support manual casting. Therefore we need to check if they are an
instance of the `Linear8bitLt` class by checking special attributes.
"""
if not (hasattr(self.classifier, "SCB") or hasattr(self.classif... |
`bitsandbytes` `Linear8bitLt` layers do not support manual casting. Therefore we need to check if they are an
instance of the `Linear8bitLt` class by checking special attributes.
| _cast_classifier | python | huggingface/transformers | src/transformers/models/switch_transformers/modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/switch_transformers/modeling_switch_transformers.py | Apache-2.0 |
def forward(self, hidden_states: torch.Tensor) -> Tuple:
r"""
Generic forward function for every Router class. Each Router expects to have the same input hidden states
(`hidden_states`) corresponding to the hidden states for each token, the `expert_capacity` corresponding to the
number o... |
Generic forward function for every Router class. Each Router expects to have the same input hidden states
(`hidden_states`) corresponding to the hidden states for each token, the `expert_capacity` corresponding to the
number of tokens the Router will send to each expert, some Routers can send u... | forward | python | huggingface/transformers | src/transformers/models/switch_transformers/modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/switch_transformers/modeling_switch_transformers.py | Apache-2.0 |
def __init__(self, hidden_size, eps=1e-6):
"""
Construct a layernorm module in the SwitchTransformers style. No bias and no subtraction of mean.
"""
super().__init__()
self.weight = nn.Parameter(torch.ones(hidden_size))
self.variance_epsilon = eps |
Construct a layernorm module in the SwitchTransformers style. No bias and no subtraction of mean.
| __init__ | python | huggingface/transformers | src/transformers/models/switch_transformers/modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/switch_transformers/modeling_switch_transformers.py | Apache-2.0 |
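The SwitchTransformers (T5-style) layer norm is an RMSNorm: scale by the root mean square, with no mean subtraction and no bias. In NumPy:

```python
import numpy as np

def t5_layer_norm(x, weight, eps=1e-6):
    # No mean subtraction and no bias: divide by the RMS, then scale
    variance = (x.astype(np.float64) ** 2).mean(axis=-1, keepdims=True)
    return weight * (x / np.sqrt(variance + eps))

x = np.array([[3.0, 4.0]])
out = t5_layer_norm(x, np.ones(2))
# mean square of [3, 4] is 12.5, so the output has unit mean square
assert np.isclose((out ** 2).mean(), 1.0, atol=1e-5)
```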
def forward(self, hidden_states):
r"""
Hold on, this will be slightly tricky to understand. In the correct order, a MoE layer does the following:
1- Gets the `router_mask` from the router. The shape of the mask is `(batch_size, sequence_length, num_expert)`
and corresponds to the argmax ... |
Hold on, this will be slightly tricky to understand. In the correct order, a MoE layer does the following:
1- Gets the `router_mask` from the router. The shape of the mask is `(batch_size, sequence_length, num_expert)`
and corresponds to the argmax of the `router_probs`. The probabilities are n... | forward | python | huggingface/transformers | src/transformers/models/switch_transformers/modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/switch_transformers/modeling_switch_transformers.py | Apache-2.0 |
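The steps sketched in that docstring — hard top-1 routing from the argmax of `router_probs`, dispatch, and probability-weighted combine — can be illustrated in NumPy. This is a toy sketch, not the capacity-aware library implementation; `experts` here is just a list of callables:

```python
import numpy as np

def switch_moe_forward(hidden_states, router_logits, experts):
    # 1. Softmax + hard top-1 routing: each token goes to its argmax expert
    probs = np.exp(router_logits - router_logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)
    expert_index = probs.argmax(-1)                 # (batch, seq)
    # 2. Dispatch: run each expert only on the tokens assigned to it
    out = hidden_states.copy()
    for i, expert in enumerate(experts):
        token_mask = expert_index == i
        out[token_mask] = expert(hidden_states[token_mask])
    # 3. Combine: scale each token's output by its routing probability
    return out * np.take_along_axis(probs, expert_index[..., None], axis=-1)

hidden = np.ones((1, 2, 4))
logits = np.zeros((1, 2, 2))
logits[..., 0] = 10.0                               # force everything to expert 0
out = switch_moe_forward(hidden, logits, [lambda x: 2 * x, lambda x: 3 * x])
assert np.allclose(out, 2 * hidden, atol=1e-3)
```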
def _relative_position_bucket(relative_position, bidirectional=True, num_buckets=32, max_distance=128):
"""
Adapted from Mesh Tensorflow:
https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
Translate rel... |
Adapted from Mesh Tensorflow:
https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
Translate relative position to a bucket number for relative attention. The relative position is defined as
memory_positi... | _relative_position_bucket | python | huggingface/transformers | src/transformers/models/switch_transformers/modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/switch_transformers/modeling_switch_transformers.py | Apache-2.0 |
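A scalar Python sketch of the T5 bucketing scheme may help: small distances each get their own bucket, larger distances share logarithmically spaced buckets, and in the bidirectional case positive offsets use the upper half of the bucket range:

```python
import math

def relative_position_bucket(relative_position, bidirectional=True,
                             num_buckets=32, max_distance=128):
    # Scalar sketch: relative_position = memory_position - query_position
    bucket = 0
    if bidirectional:
        num_buckets //= 2
        if relative_position > 0:
            bucket += num_buckets      # positive offsets use the upper half
        relative_position = abs(relative_position)
    else:
        relative_position = max(-relative_position, 0)
    max_exact = num_buckets // 2
    if relative_position < max_exact:
        return bucket + relative_position   # one bucket per small distance
    # Log-spaced beyond max_exact, clipped to the last bucket
    val = max_exact + int(
        math.log(relative_position / max_exact)
        / math.log(max_distance / max_exact)
        * (num_buckets - max_exact)
    )
    return bucket + min(val, num_buckets - 1)

assert relative_position_bucket(0) == 0
assert relative_position_bucket(-3) == 3    # distance 3, "past" half
assert relative_position_bucket(3) == 19    # distance 3, "future" half
```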
def forward(
self,
hidden_states,
mask=None,
key_value_states=None,
position_bias=None,
past_key_value=None,
layer_head_mask=None,
query_length=None,
use_cache=False,
output_attentions=False,
cache_position=None,
):
"""
... |
Self-attention (if key_value_states is None) or attention over source sentence (provided by key_value_states).
| forward | python | huggingface/transformers | src/transformers/models/switch_transformers/modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/switch_transformers/modeling_switch_transformers.py | Apache-2.0 |
def _prepare_4d_causal_attention_mask_with_cache_position(
attention_mask: torch.Tensor,
sequence_length: int,
target_length: int,
dtype: torch.dtype,
cache_position: torch.Tensor,
batch_size: int,
**kwargs,
):
"""
Creates a causal 4D mask of s... |
Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
`(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing.
Args:
attention_mask (`torch.Tensor`):
A 2D attention mask of sh... | _prepare_4d_causal_attention_mask_with_cache_position | python | huggingface/transformers | src/transformers/models/switch_transformers/modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/switch_transformers/modeling_switch_transformers.py | Apache-2.0 |
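The construction described above starts from a fully masked `(query, key)` grid, opens every key position at or before the query's cache position, then re-applies the 2D padding mask. A NumPy sketch (`min_value` stands in for the dtype minimum; this is an illustration, not the library function):

```python
import numpy as np

def causal_4d_mask(attention_mask, sequence_length, target_length,
                   cache_position, min_value=np.finfo(np.float32).min):
    # attention_mask: (batch, target_length) of 0/1 padding flags
    batch_size = attention_mask.shape[0]
    # Start fully masked, then open keys at or before each query position
    mask = np.full((sequence_length, target_length), min_value, dtype=np.float32)
    causal = np.arange(target_length)[None, :] <= cache_position[:, None]
    mask = np.where(causal, 0.0, mask)
    mask = np.broadcast_to(mask, (batch_size, 1, sequence_length, target_length)).copy()
    # Re-mask padded key positions using the 2D mask
    padding = attention_mask[:, None, None, :] == 0
    return np.where(padding, min_value, mask)

m = causal_4d_mask(np.ones((1, 4)), 4, 4, np.arange(4))
assert m.shape == (1, 1, 4, 4)
assert (m[0, 0] == 0).sum() == 10   # the lower triangle of a 4x4 is open
```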
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.FloatTensor] = N... |
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. SWITCH_TRANSFORMERS is a model with relative position
embeddings so you should be able to pad the inputs on both the right and the left.
Indices can ... | forward | python | huggingface/transformers | src/transformers/models/switch_transformers/modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/switch_transformers/modeling_switch_transformers.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_... |
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. SWITCH_TRANSFORMERS is a model with relative position
embeddings so you should be able to pad the inputs on both the right and the left.
Indices can ... | forward | python | huggingface/transformers | src/transformers/models/switch_transformers/modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/switch_transformers/modeling_switch_transformers.py | Apache-2.0 |
def t5x_attention_lookup(params, i, prefix, layer_name="attention"):
"""Returns the KOQV parameters of (self-)attention. Does not transpose."""
k = params[f"{prefix}/layers_{i}/{layer_name}/key/kernel"]
o = params[f"{prefix}/layers_{i}/{layer_name}/out/kernel"]
q = params[f"{prefix}/layers_{i}/{layer_na... | Returns the KOQV parameters of (self-)attention. Does not transpose. | t5x_attention_lookup | python | huggingface/transformers | src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py | Apache-2.0 |
def t5x_mlp_lookup(params, i, prefix, split_mlp_wi=False):
"""Returns the MLP parameters of a layer. Does not transpose."""
if split_mlp_wi:
wi_0 = params[f"{prefix}/layers_{i}/mlp/wi_0/kernel"]
wi_1 = params[f"{prefix}/layers_{i}/mlp/wi_1/kernel"]
wi = (wi_0, wi_1)
else:
wi ... | Returns the MLP parameters of a layer. Does not transpose. | t5x_mlp_lookup | python | huggingface/transformers | src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py | Apache-2.0 |
def convert_t5x_to_pytorch(variables: dict, *, num_layers: int, num_decoder_layers: int, is_encoder_only: bool):
"""Converts the parameters from T5X-Flax to Transformers-PyTorch."""
old = traverse_util.flatten_dict(variables["target"])
old = {"/".join(k): v for k, v in old.items()}
# v1.1 models have a... | Converts the parameters from T5X-Flax to Transformers-PyTorch. | convert_t5x_to_pytorch | python | huggingface/transformers | src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py | Apache-2.0 |
def make_state_dict(converted_params, is_encoder_only: bool):
"""Prepares a state dict for the PyTorch model."""
# Make a state dict with torch tensors.
state_dict = collections.OrderedDict([(k, torch.from_numpy(v.copy())) for (k, v) in converted_params.items()])
# Add what is missing.
if "encoder.... | Prepares a state dict for the PyTorch model. | make_state_dict | python | huggingface/transformers | src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py | Apache-2.0 |
def load_t5x_weights_in_t5(model, config, t5x_checkpoint_path, is_encoder_only):
"""Replaces the params in model with the T5X converted params."""
variables = checkpoints.load_t5x_checkpoint(t5x_checkpoint_path)
converted = convert_t5x_to_pytorch(
variables,
num_layers=config.num_layers,
... | Replaces the params in model with the T5X converted params. | load_t5x_weights_in_t5 | python | huggingface/transformers | src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py | Apache-2.0 |
def convert_t5x_checkpoint_to_pytorch(
t5x_checkpoint_path, config_file, pytorch_dump_path, is_encoder_only: bool = False
):
"""Loads the config and model, converts the T5X checkpoint, and saves a PyTorch checkpoint."""
# Initialise PyTorch model
config = T5Config.from_json_file(config_file)
print(f... | Loads the config and model, converts the T5X checkpoint, and saves a PyTorch checkpoint. | convert_t5x_checkpoint_to_pytorch | python | huggingface/transformers | src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py | Apache-2.0 |
def shift_tokens_right(input_ids: jnp.ndarray, pad_token_id: int, decoder_start_token_id: int) -> jnp.ndarray:
"""
Shift input ids one token to the right.
"""
shifted_input_ids = jnp.zeros_like(input_ids)
shifted_input_ids = shifted_input_ids.at[:, 1:].set(input_ids[:, :-1])
shifted_input_ids = ... |
Shift input ids one token to the right.
| shift_tokens_right | python | huggingface/transformers | src/transformers/models/t5/modeling_flax_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_flax_t5.py | Apache-2.0 |
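The row above documents the decoder-input shift used by Flax T5. As a rough, framework-free sketch of that behavior (NumPy standing in for `jnp`, which supports the same `zeros_like`/`where` idiom):

```python
import numpy as np

def shift_tokens_right(input_ids, pad_token_id, decoder_start_token_id):
    # Shift every sequence one position to the right and prepend the
    # decoder start token, mirroring the Flax implementation above.
    shifted = np.zeros_like(input_ids)
    shifted[:, 1:] = input_ids[:, :-1]
    shifted[:, 0] = decoder_start_token_id
    # Label positions marked -100 (ignored in the loss) become pad tokens.
    return np.where(shifted == -100, pad_token_id, shifted)
```

For example, shifting `[[42, 7, 1]]` with a start token of 0 yields `[[0, 42, 7]]`.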
def __call__(self, hidden_states):
"""
        Construct a layernorm module in the T5 style; no bias and no subtraction of mean.
"""
# layer norm should always be calculated in float32
variance = jnp.power(hidden_states.astype("f4"), 2).mean(axis=-1, keepdims=True)
hidden_states ... |
        Construct a layernorm module in the T5 style; no bias and no subtraction of mean.
| __call__ | python | huggingface/transformers | src/transformers/models/t5/modeling_flax_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_flax_t5.py | Apache-2.0 |
def _relative_position_bucket(relative_position, bidirectional=True, num_buckets=32, max_distance=128):
"""
Adapted from Mesh Tensorflow:
https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
Translate rel... |
Adapted from Mesh Tensorflow:
https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
Translate relative position to a bucket number for relative attention. The relative position is defined as
memory_positi... | _relative_position_bucket | python | huggingface/transformers | src/transformers/models/t5/modeling_flax_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_flax_t5.py | Apache-2.0 |
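The bucketing scheme that this row's docstring references (half the buckets for exact small offsets, the other half logarithmically wider out to `max_distance`) can be re-derived as a scalar sketch. This is an illustration of the Mesh-TensorFlow scheme the docstring cites, not the vectorized library code:

```python
import math

def relative_position_bucket(relative_position, bidirectional=True,
                             num_buckets=32, max_distance=128):
    # Scalar sketch: relative_position = memory_position - query_position.
    ret = 0
    n = -relative_position
    if bidirectional:
        num_buckets //= 2
        if n < 0:          # attending to a later position -> upper half
            ret += num_buckets
            n = -n
    else:
        n = max(n, 0)      # causal: clamp future positions
    max_exact = num_buckets // 2
    if n < max_exact:      # small distances get their own bucket
        return ret + n
    # larger distances share logarithmically wider buckets up to max_distance
    bucket = max_exact + int(
        math.log(n / max_exact) / math.log(max_distance / max_exact)
        * (num_buckets - max_exact)
    )
    return ret + min(bucket, num_buckets - 1)
```

With the defaults, offsets beyond `max_distance` all map to the last bucket of their half, which is what lets the bias table stay small.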
def _concatenate_to_cache(self, key, value, query, attention_mask):
"""
This function takes projected key, value states from a single input token and concatenates the states to cached
states from previous steps. This function is slightly adapted from the official Flax repository:
https:/... |
This function takes projected key, value states from a single input token and concatenates the states to cached
states from previous steps. This function is slightly adapted from the official Flax repository:
https://github.com/google/flax/blob/491ce18759622506588784b4fca0e4bf05f8c8cd/flax/line... | _concatenate_to_cache | python | huggingface/transformers | src/transformers/models/t5/modeling_flax_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_flax_t5.py | Apache-2.0 |
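The cached-decoding row above writes one new position's key/value into preallocated buffers. A minimal framework-free illustration of that core update (plain NumPy buffers standing in for Flax cache variables; the names here are illustrative, not the library's):

```python
import numpy as np

def append_to_cache(cached_key, cached_value, cache_index, new_key, new_value):
    # Write the single new position's key/value at cache_index, then advance
    # the write pointer -- the essence of autoregressive KV caching.
    cached_key[:, cache_index] = new_key
    cached_value[:, cache_index] = new_value
    return cached_key, cached_value, cache_index + 1

# Buffers sized for a max decode length of 4, head dim 2, batch 1.
k_cache = np.zeros((1, 4, 2))
v_cache = np.zeros((1, 4, 2))
idx = 0
k_cache, v_cache, idx = append_to_cache(k_cache, v_cache, idx,
                                        np.ones(2), np.full(2, 5.0))
```

The real Flax version additionally maintains an attention mask so queries cannot attend to not-yet-filled cache slots.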
def __call__(
self,
hidden_states,
attention_mask=None,
key_value_states=None,
position_bias=None,
use_cache=False,
output_attentions=False,
deterministic=True,
init_cache=False,
):
"""
Self-attention (if key_value_states is Non... |
Self-attention (if key_value_states is None) or attention over source sentence (provided by key_value_states).
| __call__ | python | huggingface/transformers | src/transformers/models/t5/modeling_flax_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_flax_t5.py | Apache-2.0 |
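The attention rows above (self-attention when `key_value_states` is `None`, cross-attention otherwise) share one core computation. A single-head NumPy sketch of it; note that T5 applies no `1/sqrt(d)` scaling, adding a learned relative position bias to the scores instead:

```python
import numpy as np

def attention_core(query, key, value, position_bias=None):
    # (seq_q, d) @ (d, seq_k) -> (seq_q, seq_k) raw scores.
    scores = query @ key.T
    if position_bias is not None:
        scores = scores + position_bias   # T5's learned relative-position bias
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ value
```

A quick sanity check: an all-zero query gives uniform weights, so the output is the mean of the value rows.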
def init_cache(self, batch_size, max_length, encoder_outputs):
r"""
Args:
batch_size (`int`):
batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
max_length (`int`):
maximum possible length for auto-r... |
Args:
batch_size (`int`):
batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
max_length (`int`):
maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized
... | init_cache | python | huggingface/transformers | src/transformers/models/t5/modeling_flax_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_flax_t5.py | Apache-2.0 |
def encode(
self,
input_ids: jnp.ndarray,
attention_mask: Optional[jnp.ndarray] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
train: bool = False,
params: Optional[dict] =... |
Returns:
Example:
```python
>>> from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration
>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = FlaxT5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
>>> tex... | encode | python | huggingface/transformers | src/transformers/models/t5/modeling_flax_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_flax_t5.py | Apache-2.0 |
def decode(
self,
decoder_input_ids,
encoder_outputs,
encoder_attention_mask: Optional[jnp.ndarray] = None,
decoder_attention_mask: Optional[jnp.ndarray] = None,
past_key_values: Optional[dict] = None,
output_attentions: Optional[bool] = None,
output_hidde... |
Returns:
Example:
```python
>>> from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration
>>> import jax.numpy as jnp
>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = FlaxT5ForConditionalGeneration.from_pretrained("g... | decode | python | huggingface/transformers | src/transformers/models/t5/modeling_flax_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_flax_t5.py | Apache-2.0 |
def decode(
self,
decoder_input_ids,
encoder_outputs,
encoder_attention_mask: Optional[jnp.ndarray] = None,
decoder_attention_mask: Optional[jnp.ndarray] = None,
past_key_values: Optional[dict] = None,
output_attentions: Optional[bool] = None,
output_hidde... |
Returns:
Example:
```python
>>> from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration
>>> import jax.numpy as jnp
>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = FlaxT5ForConditionalGeneration.from_pretrained("g... | decode | python | huggingface/transformers | src/transformers/models/t5/modeling_flax_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_flax_t5.py | Apache-2.0 |
def load_tf_weights_in_t5(model, config, tf_checkpoint_path):
"""Load tf checkpoints in a pytorch model."""
try:
import re
import numpy as np
import tensorflow as tf
except ImportError:
logger.error(
"Loading a TensorFlow model in PyTorch, requires TensorFlow to ... | Load tf checkpoints in a pytorch model. | load_tf_weights_in_t5 | python | huggingface/transformers | src/transformers/models/t5/modeling_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py | Apache-2.0 |
def __init__(self, hidden_size, eps=1e-6):
"""
Construct a layernorm module in the T5 style. No bias and no subtraction of mean.
"""
super().__init__()
self.weight = nn.Parameter(torch.ones(hidden_size))
self.variance_epsilon = eps |
Construct a layernorm module in the T5 style. No bias and no subtraction of mean.
| __init__ | python | huggingface/transformers | src/transformers/models/t5/modeling_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py | Apache-2.0 |
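The T5 layer-norm rows describe root-mean-square normalization: rescale by the RMS of the activations, with no mean subtraction and no bias. A dependency-free sketch over a single vector:

```python
import math

def t5_layer_norm(hidden, weight, eps=1e-6):
    # "variance" here is the mean of squares (no mean subtraction),
    # so dividing by its square root is a pure RMS rescale.
    variance = sum(x * x for x in hidden) / len(hidden)
    inv_rms = 1.0 / math.sqrt(variance + eps)
    return [w * x * inv_rms for w, x in zip(weight, hidden)]
```

As the Flax row above notes, the real modules compute this statistic in float32 before casting back to the working dtype.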
def _relative_position_bucket(relative_position, bidirectional=True, num_buckets=32, max_distance=128):
"""
Adapted from Mesh Tensorflow:
https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
Translate rel... |
Adapted from Mesh Tensorflow:
https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
Translate relative position to a bucket number for relative attention. The relative position is defined as
memory_positi... | _relative_position_bucket | python | huggingface/transformers | src/transformers/models/t5/modeling_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py | Apache-2.0 |
def forward(
self,
hidden_states,
mask=None,
key_value_states=None,
position_bias=None,
past_key_value=None,
layer_head_mask=None,
query_length=None,
use_cache=False,
output_attentions=False,
cache_position=None,
):
"""
... |
Self-attention (if key_value_states is None) or attention over source sentence (provided by key_value_states).
| forward | python | huggingface/transformers | src/transformers/models/t5/modeling_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py | Apache-2.0 |
def _prepare_4d_causal_attention_mask_with_cache_position(
attention_mask: torch.Tensor,
sequence_length: int,
target_length: int,
dtype: torch.dtype,
cache_position: torch.Tensor,
batch_size: int,
**kwargs,
):
"""
Creates a causal 4D mask of s... |
Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
`(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing.
Args:
attention_mask (`torch.Tensor`):
A 2D attention mask of sh... | _prepare_4d_causal_attention_mask_with_cache_position | python | huggingface/transformers | src/transformers/models/t5/modeling_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py | Apache-2.0 |
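The mask-construction rows above can be approximated with NumPy for the simple no-cache case (`query_length == key_length`). This is a sketch of the shape contract only, not the library's exact cache-aware routine:

```python
import numpy as np

def causal_4d_mask(padding_mask_2d, dtype=np.float32):
    # (batch, key_len) 0/1 padding mask -> (batch, 1, key_len, key_len)
    # additive mask: 0 where attention is allowed, dtype-min where it is not.
    batch, key_len = padding_mask_2d.shape
    min_value = np.finfo(dtype).min
    future = np.triu(np.ones((key_len, key_len), dtype=bool), k=1)
    mask = np.where(future, min_value, 0.0).astype(dtype)
    mask = np.broadcast_to(mask, (batch, 1, key_len, key_len))
    # also block attention to padded key positions
    return np.where(padding_mask_2d[:, None, None, :] == 0, min_value, mask)
```

The large negative value is added to attention scores before the softmax, which drives the blocked positions' weights to zero.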
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.FloatTensor] = N... |
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using... | forward | python | huggingface/transformers | src/transformers/models/t5/modeling_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.FloatTensor] = N... |
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using... | forward | python | huggingface/transformers | src/transformers/models/t5/modeling_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_... |
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using... | forward | python | huggingface/transformers | src/transformers/models/t5/modeling_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.Tensor] = None,
... |
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using... | forward | python | huggingface/transformers | src/transformers/models/t5/modeling_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[b... |
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using... | forward | python | huggingface/transformers | src/transformers/models/t5/modeling_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.FloatTensor] = N... |
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using... | forward | python | huggingface/transformers | src/transformers/models/t5/modeling_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py | Apache-2.0 |
def __init__(self, hidden_size, epsilon=1e-6, **kwargs):
"""
        Construct a layernorm module in the T5 style. No bias and no subtraction of mean.
"""
super().__init__(**kwargs)
self.variance_epsilon = epsilon
self.hidden_size = hidden_size |
        Construct a layernorm module in the T5 style. No bias and no subtraction of mean.
| __init__ | python | huggingface/transformers | src/transformers/models/t5/modeling_tf_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_tf_t5.py | Apache-2.0 |
def _relative_position_bucket(relative_position, bidirectional=True, num_buckets=32, max_distance=128):
"""
Adapted from Mesh Tensorflow:
https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
Translate rel... |
Adapted from Mesh Tensorflow:
https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
Translate relative position to a bucket number for relative attention. The relative position is defined as
memory_positi... | _relative_position_bucket | python | huggingface/transformers | src/transformers/models/t5/modeling_tf_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_tf_t5.py | Apache-2.0 |
def call(
self,
hidden_states,
mask=None,
key_value_states=None,
position_bias=None,
past_key_value=None,
layer_head_mask=None,
query_length=None,
use_cache=False,
training=False,
output_attentions=False,
):
"""
... |
Self-attention (if key_value_states is None) or attention over source sentence (provided by key_value_states).
| call | python | huggingface/transformers | src/transformers/models/t5/modeling_tf_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_tf_t5.py | Apache-2.0 |
def project(hidden_states, proj_layer, key_value_states, past_key_value):
"""projects hidden states correctly to key/query states"""
if key_value_states is None:
# self-attn
# (batch_size, n_heads, seq_length, dim_per_head)
hidden_states = shape(pr... | projects hidden states correctly to key/query states | project | python | huggingface/transformers | src/transformers/models/t5/modeling_tf_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_tf_t5.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
decoder_input_ids: np.ndarray | tf.Tensor | None = None,
decoder_attention_mask: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None... |
Returns:
Examples:
```python
>>> from transformers import AutoTokenizer, TFT5Model
>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = TFT5Model.from_pretrained("google-t5/t5-small")
>>> input_ids = tokenizer(
... "Stud... | call | python | huggingface/transformers | src/transformers/models/t5/modeling_tf_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_tf_t5.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
decoder_input_ids: np.ndarray | tf.Tensor | None = None,
decoder_attention_mask: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None... |
labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the cross entropy classification loss. Indices should be in `[0, ...,
config.vocab_size - 1]`.
Returns:
Examples:
```python
>>> from transformers import Aut... | call | python | huggingface/transformers | src/transformers/models/t5/modeling_tf_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_tf_t5.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None = None,
inputs_embeds: np.ndarray | tf.Tensor | None = None,
output_attentions: Optional[bool] = None,
output_... |
Returns:
Examples:
```python
>>> from transformers import AutoTokenizer, TFT5EncoderModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = TFT5EncoderModel.from_pretrained("google-t5/t5-small")
>>> input_ids = tokenizer(
... | call | python | huggingface/transformers | src/transformers/models/t5/modeling_tf_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_tf_t5.py | Apache-2.0 |
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*)... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/t5/tokenization_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/tokenization_t5.py | Apache-2.0 |
def _add_eos_if_not_present(self, token_ids: List[int]) -> List[int]:
"""Do not add eos again if user already added it."""
if len(token_ids) > 0 and token_ids[-1] == self.eos_token_id:
warnings.warn(
f"This sequence already has {self.eos_token}. In future versions this behavi... | Do not add eos again if user already added it. | _add_eos_if_not_present | python | huggingface/transformers | src/transformers/models/t5/tokenization_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/tokenization_t5.py | Apache-2.0 |
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
use of token type ids, therefore a list ... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
use of token type ids, therefore a list of zeros is returned.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optiona... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/t5/tokenization_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/tokenization_t5.py | Apache-2.0 |
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A sequence has the fo... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A sequence has the following format:
- single sequence: `X </s>`
- pair of sequences: `A </s> B </s>`
Args:
token_ids_0 (`List[int... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/t5/tokenization_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/tokenization_t5.py | Apache-2.0 |
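The tokenizer rows above specify the special-token layout: `X </s>` for a single sequence, `A </s> B </s>` for a pair. A minimal sketch of that contract; the eos id of 1 matches the standard T5 sentencepiece vocabulary, but treat it as an assumption here:

```python
EOS_ID = 1  # "</s>" in the standard T5 vocab (assumed for illustration)

def build_inputs_with_special_tokens(token_ids_0, token_ids_1=None):
    # single sequence: X </s>; pair of sequences: A </s> B </s>
    ids = list(token_ids_0) + [EOS_ID]
    if token_ids_1 is not None:
        ids += list(token_ids_1) + [EOS_ID]
    return ids
```

Note there is no separator or cls token; T5 marks sequence boundaries with eos alone, which is why `create_token_type_ids_from_sequences` returns all zeros.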
def tokenize(self, text: "TextInput", **kwargs) -> List[str]:
"""
Converts a string to a list of tokens. If `self.legacy` is set to `False`, a prefix token is added unless the
first token is special.
"""
if self.legacy or len(text) == 0:
return super().tokenize(text, ... |
Converts a string to a list of tokens. If `self.legacy` is set to `False`, a prefix token is added unless the
first token is special.
| tokenize | python | huggingface/transformers | src/transformers/models/t5/tokenization_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/tokenization_t5.py | Apache-2.0 |
def convert_tokens_to_string(self, tokens):
"""Converts a sequence of tokens (string) in a single string."""
# since we manually add the prefix space, we have to remove it when decoding
if tokens[0].startswith(SPIECE_UNDERLINE) and self.add_prefix_space:
tokens[0] = tokens[0][1:]
... | Converts a sequence of tokens (string) in a single string. | convert_tokens_to_string | python | huggingface/transformers | src/transformers/models/t5/tokenization_t5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/tokenization_t5.py | Apache-2.0 |
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A sequence has the fo... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A sequence has the following format:
- single sequence: `X </s>`
- pair of sequences: `A </s> B </s>`
Args:
token_ids_0 (`List[int... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/t5/tokenization_t5_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/tokenization_t5_fast.py | Apache-2.0 |
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
use of token type ids, therefore a list ... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
use of token type ids, therefore a list of zeros is returned.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optiona... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/t5/tokenization_t5_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/tokenization_t5_fast.py | Apache-2.0 |
def convert_table_transformer_checkpoint(checkpoint_url, pytorch_dump_folder_path, push_to_hub):
"""
Copy/paste/tweak model's weights to our DETR structure.
"""
logger.info("Converting model...")
# load original state dict
state_dict = torch.hub.load_state_dict_from_url(checkpoint_url, map_loc... |
Copy/paste/tweak model's weights to our DETR structure.
| convert_table_transformer_checkpoint | python | huggingface/transformers | src/transformers/models/table_transformer/convert_table_transformer_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/table_transformer/convert_table_transformer_to_hf.py | Apache-2.0 |
def convert_table_transformer_checkpoint(checkpoint_url, pytorch_dump_folder_path, push_to_hub):
"""
Copy/paste/tweak model's weights to our DETR structure.
"""
logger.info("Converting model...")
# create HuggingFace model and load state dict
backbone_config = ResNetConfig.from_pretrained(
... |
Copy/paste/tweak model's weights to our DETR structure.
| convert_table_transformer_checkpoint | python | huggingface/transformers | src/transformers/models/table_transformer/convert_table_transformer_to_hf_no_timm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/table_transformer/convert_table_transformer_to_hf_no_timm.py | Apache-2.0 |
def replace_batch_norm(model):
r"""
Recursively replace all `torch.nn.BatchNorm2d` with `TableTransformerFrozenBatchNorm2d`.
Args:
model (torch.nn.Module):
input model
"""
for name, module in model.named_children():
if isinstance(module, nn.BatchNorm2d):
new_... |
Recursively replace all `torch.nn.BatchNorm2d` with `TableTransformerFrozenBatchNorm2d`.
Args:
model (torch.nn.Module):
input model
| replace_batch_norm | python | huggingface/transformers | src/transformers/models/table_transformer/modeling_table_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/table_transformer/modeling_table_transformer.py | Apache-2.0 |
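The recursion in `replace_batch_norm` above -- walk `named_children`, `setattr` a frozen replacement when a batch-norm child is found, otherwise recurse -- can be shown without PyTorch using stand-in classes. All class names below are illustrative, and the sketch omits the weight copying the real function performs:

```python
class Module:
    """Minimal stand-in for torch.nn.Module, just enough for the traversal."""
    def named_children(self):
        return [(n, c) for n, c in vars(self).items() if isinstance(c, Module)]

class Conv(Module): pass
class BatchNorm2d(Module): pass
class FrozenBatchNorm2d(Module): pass

def replace_batch_norm(model):
    # Depth-first: replace matching children in place, recurse into the rest.
    for name, child in model.named_children():
        if isinstance(child, BatchNorm2d):
            setattr(model, name, FrozenBatchNorm2d())
        else:
            replace_batch_norm(child)

class Backbone(Module):
    def __init__(self):
        self.conv = Conv()
        self.bn = BatchNorm2d()

class Net(Module):
    def __init__(self):
        self.backbone = Backbone()

net = Net()
replace_batch_norm(net)
```

Because the replacement happens via `setattr` on the parent, the module tree keeps its attribute names, so checkpoints referencing `backbone.bn` still resolve.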
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
object_queries: Optional[torch.Tensor] = None,
output_attentions: bool = False,
):
"""
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`): attention mask of size
`(batch, 1, target_len, source_len)` where padding elements are indicated by very large negative
... | forward | python | huggingface/transformers | src/transformers/models/table_transformer/modeling_table_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/table_transformer/modeling_table_transformer.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
object_queries: Optional[torch.Tensor] = None,
query_position_embeddings: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.Tensor] = None,
encoder_at... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`): attention mask of size
`(batch, 1, target_len, source_len)` where padding elements are indicated by very large negative
... | forward | python | huggingface/transformers | src/transformers/models/table_transformer/modeling_table_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/table_transformer/modeling_table_transformer.py | Apache-2.0 |
def forward(
self,
inputs_embeds=None,
attention_mask=None,
object_queries=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
r"""
Args:
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_l... |
Args:
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Flattened feature map (output of the backbone + projection layer) that is passed to the encoder.
attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *op... | forward | python | huggingface/transformers | src/transformers/models/table_transformer/modeling_table_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/table_transformer/modeling_table_transformer.py | Apache-2.0 |
def forward(
self,
inputs_embeds=None,
attention_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
object_queries=None,
query_position_embeddings=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
... |
Args:
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
The query embeddings that are passed into the decoder.
attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Mask to avoid pe... | forward | python | huggingface/transformers | src/transformers/models/table_transformer/modeling_table_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/table_transformer/modeling_table_transformer.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.FloatTensor,
pixel_mask: Optional[torch.FloatTensor] = None,
decoder_attention_mask: Optional[torch.FloatTensor] = None,
encoder_outputs: Optional[torch.FloatTensor] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
... |
decoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, num_queries)`, *optional*):
Not used by default. Can be used to mask object queries.
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
Optionally, instead of p... | forward | python | huggingface/transformers | src/transformers/models/table_transformer/modeling_table_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/table_transformer/modeling_table_transformer.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.FloatTensor,
pixel_mask: Optional[torch.FloatTensor] = None,
decoder_attention_mask: Optional[torch.FloatTensor] = None,
encoder_outputs: Optional[torch.FloatTensor] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
... |
decoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, num_queries)`, *optional*):
Not used by default. Can be used to mask object queries.
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
Optionally, instead of p... | forward | python | huggingface/transformers | src/transformers/models/table_transformer/modeling_table_transformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/table_transformer/modeling_table_transformer.py | Apache-2.0 |
def load_tf_weights_in_tapas(model, config, tf_checkpoint_path):
"""
Load tf checkpoints in a PyTorch model. This is an adaptation from load_tf_weights_in_bert
- add cell selection and aggregation heads
- take into account additional token type embedding layers
"""
try:
import re
... |
Load tf checkpoints in a PyTorch model. This is an adaptation from load_tf_weights_in_bert
- add cell selection and aggregation heads
- take into account additional token type embedding layers
| load_tf_weights_in_tapas | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
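At the heart of `load_tf_weights_in_*` functions is a name-mapping loop: TF variable names like `dense/kernel` become PyTorch parameter names like `dense.weight`, with dense kernels transposed along the way. The sketch below illustrates only that core loop on a toy model, using a plain dict of NumPy arrays in place of a real TF checkpoint; the TAPAS-specific parts named in the docstring (cell selection/aggregation heads, extra token type embedding layers) are omitted.

```python
import numpy as np
import torch
from torch import nn


class TinyModel(nn.Module):
    """Toy stand-in with a single dense layer."""

    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(4, 2)


def load_numpy_weights(model: nn.Module, tf_weights: dict) -> nn.Module:
    """Map TF-style variable names to PyTorch parameter names and load them."""
    state = {}
    for tf_name, array in tf_weights.items():
        pt_name = (tf_name.replace("/", ".")
                          .replace("kernel", "weight")
                          .replace("gamma", "weight")
                          .replace("beta", "bias"))
        tensor = torch.from_numpy(array)
        if tf_name.endswith("kernel"):
            tensor = tensor.t()  # TF stores dense kernels as (in, out); PyTorch wants (out, in)
        state[pt_name] = tensor
    model.load_state_dict(state)
    return model


checkpoint = {
    "dense/kernel": np.ones((4, 2), dtype=np.float32),
    "dense/bias": np.zeros(2, dtype=np.float32),
}
model = load_numpy_weights(TinyModel(), checkpoint)
```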
def __init__(self, config, add_pooling_layer=True):
r"""
add_pooling_layer (bool, *optional*, defaults to `True`):
Whether to add a pooling layer
"""
super().__init__(config)
self.config = config
self.embeddings = TapasEmbeddings(config)
self.encoder ... |
add_pooling_layer (bool, *optional*, defaults to `True`):
Whether to add a pooling layer
| __init__ | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
... |
token_type_ids (`torch.LongTensor` of shape `(batch_size, sequence_length, 7)`, *optional*):
Token indices that encode tabular structure. Indices can be obtained using [`AutoTokenizer`]. See this
class for more info.
[What are token type IDs?](../glossary#token-type-ids)
... | forward | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
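The `(batch_size, sequence_length, 7)` shape of `token_type_ids` above reflects TAPAS encoding seven table-structure signals per token (per the TAPAS paper: segment, column, row, previous-label, column-rank, inverse-column-rank, and numeric-relation ids; treat the exact order and vocab sizes below as assumptions mirroring the default config). Each channel gets its own embedding table and the results are summed, which this self-contained sketch demonstrates:

```python
import torch
from torch import nn

# One vocab size per token-type channel, mirroring the default
# TapasConfig.type_vocab_sizes (an assumption here, not imported).
TYPE_VOCAB_SIZES = [3, 256, 256, 2, 256, 256, 10]


class TokenTypeEmbeddings(nn.Module):
    """One embedding table per channel; their outputs are summed, mirroring
    how the embeddings layer consumes (batch, seq_len, 7) token_type_ids."""

    def __init__(self, hidden_size=32):
        super().__init__()
        self.tables = nn.ModuleList(nn.Embedding(v, hidden_size) for v in TYPE_VOCAB_SIZES)

    def forward(self, token_type_ids):  # (batch, seq_len, 7)
        return sum(table(token_type_ids[..., i]) for i, table in enumerate(self.tables))


ids = torch.zeros(1, 10, 7, dtype=torch.long)  # all-zero types, e.g. for padding
emb = TokenTypeEmbeddings()(ids)               # -> (1, 10, 32)
```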
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
... |
token_type_ids (`torch.LongTensor` of shape `(batch_size, sequence_length, 7)`, *optional*):
Token indices that encode tabular structure. Indices can be obtained using [`AutoTokenizer`]. See this
class for more info.
[What are token type IDs?](../glossary#token-type-ids)
... | forward | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
... |
token_type_ids (`torch.LongTensor` of shape `(batch_size, sequence_length, 7)`, *optional*):
Token indices that encode tabular structure. Indices can be obtained using [`AutoTokenizer`]. See this
class for more info.
[What are token type IDs?](../glossary#token-type-ids)
... | forward | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
... |
token_type_ids (`torch.LongTensor` of shape `(batch_size, sequence_length, 7)`, *optional*):
Token indices that encode tabular structure. Indices can be obtained using [`AutoTokenizer`]. See this
class for more info.
[What are token type IDs?](../glossary#token-type-ids)
... | forward | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |