| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
output_attentions: Optional[bool] = False,
) -> Tuple[torch.FloatTensor]:
"""
Args:
hidden_states (`torch.FloatTensor`):
Input to the layer of shape `(batch, seq_... |
Args:
hidden_states (`torch.FloatTensor`):
Input to the layer of shape `(batch, seq_len, embed_dim)`.
attention_mask (`torch.FloatTensor`):
Attention mask of shape `(batch, 1, q_len, k_v_seq_len)` where padding elements are indicated by very large negativ... | forward | python | huggingface/transformers | src/transformers/models/siglip2/modeling_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modeling_siglip2.py | Apache-2.0 |
def forward(
self,
inputs_embeds,
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
) -> BaseModelOutput:
r"""
Args:
inputs_embeds (`torch.FloatTensor` of shape `(b... |
Args:
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert `input_... | forward | python | huggingface/transformers | src/transformers/models/siglip2/modeling_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modeling_siglip2.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.FloatTensor,
attention_mask: torch.Tensor,
spatial_shapes: torch.LongTensor,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
) -> BaseModelOutputWithPooling:
r"""
spatial_sha... |
spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`):
Tensor containing the spatial dimensions (height, width) of the input images.
| forward | python | huggingface/transformers | src/transformers/models/siglip2/modeling_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modeling_siglip2.py | Apache-2.0 |
def trunc_normal_tf_(
tensor: torch.Tensor, mean: float = 0.0, std: float = 1.0, a: float = -2.0, b: float = 2.0
) -> torch.Tensor:
"""Fills the input Tensor with values drawn from a truncated
normal distribution. The values are effectively drawn from the
normal distribution :math:`\\mathcal{N}(\text{me... | Fills the input Tensor with values drawn from a truncated
normal distribution. The values are effectively drawn from the
normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
with values outside :math:`[a, b]` redrawn until they are within
the bounds. The method used for generating the random... | trunc_normal_tf_ | python | huggingface/transformers | src/transformers/models/siglip2/modeling_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modeling_siglip2.py | Apache-2.0 |
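The `trunc_normal_tf_` docstring above describes drawing from a normal distribution and redrawing any value outside `[a, b]`. A minimal pure-Python sketch of that rejection-sampling idea (the function name and loop structure here are illustrative, not the library's implementation):

```python
import random

def trunc_normal(n, mean=0.0, std=1.0, a=-2.0, b=2.0, seed=0):
    # Draw from N(mean, std^2); redraw any sample outside [a, b].
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.gauss(mean, std)
        if a <= x <= b:
            out.append(x)
    return out

samples = trunc_normal(1000)
```

With the default bounds at two standard deviations, roughly 95% of draws are accepted, so the loop terminates quickly.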
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
) -> BaseModelOutputWithPool... |
Examples:
```python
>>> from transformers import AutoTokenizer, Siglip2TextModel
>>> model = Siglip2TextModel.from_pretrained("google/siglip2-base-patch16-224")
>>> tokenizer = AutoTokenizer.from_pretrained("google/siglip2-base-patch16-224")
>>> # important: make sure... | forward | python | huggingface/transformers | src/transformers/models/siglip2/modeling_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modeling_siglip2.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.FloatTensor,
pixel_attention_mask: torch.Tensor,
spatial_shapes: torch.LongTensor,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
) -> BaseModelOutputWithPooling:
r"""
pixel... |
pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*):
Mask to avoid performing attention on padding pixel indices.
spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`):
Tensor containing the spatial dimensions (height, width... | forward | python | huggingface/transformers | src/transformers/models/siglip2/modeling_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modeling_siglip2.py | Apache-2.0 |
def get_text_features(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
) -> torch.FloatTe... |
Returns:
        text_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`): The text embeddings obtained by
applying the projection layer to the pooled output of [`Siglip2TextModel`].
Examples:
```python
>>> from transformers import AutoTokenizer, AutoMode... | get_text_features | python | huggingface/transformers | src/transformers/models/siglip2/modeling_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modeling_siglip2.py | Apache-2.0 |
def get_image_features(
self,
pixel_values: Optional[torch.FloatTensor] = None,
pixel_attention_mask: Optional[torch.Tensor] = None,
spatial_shapes: Optional[torch.LongTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
... |
pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*):
Mask to avoid performing attention on padding pixel indices.
spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`):
Tensor containing the spatial dimensions (height, width... | get_image_features | python | huggingface/transformers | src/transformers/models/siglip2/modeling_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modeling_siglip2.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
pixel_values: Optional[torch.FloatTensor] = None,
pixel_attention_mask: Optional[torch.Tensor] = None,
spatial_shapes: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
... |
pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*):
Mask to avoid performing attention on padding pixel indices.
spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`):
Tensor containing the spatial dimensions (height, width... | forward | python | huggingface/transformers | src/transformers/models/siglip2/modeling_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modeling_siglip2.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
pixel_attention_mask: Optional[torch.Tensor] = None,
spatial_shapes: Optional[torch.LongTensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_s... |
pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*):
Mask to avoid performing attention on padding pixel indices.
spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`):
Tensor containing the spatial dimensions (height, width... | forward | python | huggingface/transformers | src/transformers/models/siglip2/modeling_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modeling_siglip2.py | Apache-2.0 |
def resize_positional_embeddings(
positional_embeddings: torch.Tensor,
spatial_shapes: torch.LongTensor,
max_length: int,
) -> torch.Tensor:
"""
Resize positional embeddings to image-specific size and pad to a fixed size.
Args:
positional_embeddings (`tor... |
Resize positional embeddings to image-specific size and pad to a fixed size.
Args:
positional_embeddings (`torch.Tensor`):
Position embeddings of shape (height, width, embed_dim)
spatial_shapes (`torch.LongTensor`):
Spatial shapes of shape (batch... | resize_positional_embeddings | python | huggingface/transformers | src/transformers/models/siglip2/modular_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modular_siglip2.py | Apache-2.0 |
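The `resize_positional_embeddings` row above resizes a learned `(height, width, embed_dim)` grid to each image's spatial shape, flattens it, and pads to a fixed length. A rough numpy sketch of that idea, using nearest-neighbour sampling where the real implementation interpolates (function name and shapes here are illustrative):

```python
import numpy as np

def resize_and_pad_pos_embeds(pos, spatial_shapes, max_length):
    # pos: (H, W, D) learned grid of position embeddings.
    # For each (h, w), nearest-neighbour resize the grid to (h, w),
    # flatten to (h*w, D) rows, and zero-pad up to max_length rows.
    H, W, D = pos.shape
    batch = []
    for h, w in spatial_shapes:
        rows = np.arange(h) * H // h
        cols = np.arange(w) * W // w
        resized = pos[rows][:, cols].reshape(h * w, D)
        padded = np.zeros((max_length, D), dtype=pos.dtype)
        padded[: h * w] = resized
        batch.append(padded)
    return np.stack(batch)

pos = np.arange(4 * 4 * 2, dtype=np.float32).reshape(4, 4, 2)
out = resize_and_pad_pos_embeds(pos, [(2, 2)], max_length=6)
```

The padded tail rows are zeros and are expected to be masked out by the attention mask downstream.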
def forward(self, pixel_values: torch.FloatTensor, spatial_shapes: torch.LongTensor) -> torch.Tensor:
"""
Args:
pixel_values (`torch.FloatTensor`):
Pixel values of shape (batch_size, max_num_patches, num_channels * patch_size * patch_size)
spatial_shapes (`List[Tu... |
Args:
pixel_values (`torch.FloatTensor`):
Pixel values of shape (batch_size, max_num_patches, num_channels * patch_size * patch_size)
spatial_shapes (`List[Tuple[int, int]]`):
Spatial shapes of shape (batch_size, 2) to resize the positional embeddings to
... | forward | python | huggingface/transformers | src/transformers/models/siglip2/modular_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modular_siglip2.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.FloatTensor,
attention_mask: torch.Tensor,
spatial_shapes: torch.LongTensor,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
) -> BaseModelOutputWithPooling:
r"""
spatial_sha... |
spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`):
Tensor containing the spatial dimensions (height, width) of the input images.
| forward | python | huggingface/transformers | src/transformers/models/siglip2/modular_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modular_siglip2.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.FloatTensor,
pixel_attention_mask: torch.Tensor,
spatial_shapes: torch.LongTensor,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
) -> BaseModelOutputWithPooling:
r"""
pixel... |
pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*):
Mask to avoid performing attention on padding pixel indices.
spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`):
Tensor containing the spatial dimensions (height, width... | forward | python | huggingface/transformers | src/transformers/models/siglip2/modular_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modular_siglip2.py | Apache-2.0 |
def get_image_features(
self,
pixel_values: Optional[torch.FloatTensor] = None,
pixel_attention_mask: Optional[torch.Tensor] = None,
spatial_shapes: Optional[torch.LongTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
... |
pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*):
Mask to avoid performing attention on padding pixel indices.
spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`):
Tensor containing the spatial dimensions (height, width... | get_image_features | python | huggingface/transformers | src/transformers/models/siglip2/modular_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modular_siglip2.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
pixel_values: Optional[torch.FloatTensor] = None,
pixel_attention_mask: Optional[torch.Tensor] = None,
spatial_shapes: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
... |
pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*):
Mask to avoid performing attention on padding pixel indices.
spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`):
Tensor containing the spatial dimensions (height, width... | forward | python | huggingface/transformers | src/transformers/models/siglip2/modular_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modular_siglip2.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
pixel_attention_mask: Optional[torch.Tensor] = None,
spatial_shapes: Optional[torch.LongTensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_s... |
pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*):
Mask to avoid performing attention on padding pixel indices.
spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`):
Tensor containing the spatial dimensions (height, width... | forward | python | huggingface/transformers | src/transformers/models/siglip2/modular_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modular_siglip2.py | Apache-2.0 |
def __call__(
self,
images: Optional[Union[ImageInput, List[ImageInput], List[List[ImageInput]]]] = None,
text: Optional[Union[TextInput, "PreTokenizedInput", List[TextInput], List["PreTokenizedInput"]]] = None,
audio=None,
videos=None,
**kwargs: Unpack[Siglip2ProcessorKw... |
Main method to prepare for the model one or several sequences(s) and image(s). This method forwards the `text`
and `kwargs` arguments to GemmaTokenizerFast's [`~GemmaTokenizerFast.__call__`] if `text` is not `None` to encode
the text. To prepare the image(s), this method forwards the `images` a... | __call__ | python | huggingface/transformers | src/transformers/models/siglip2/processing_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/processing_siglip2.py | Apache-2.0 |
def _resize_output_size_rescale_to_max_len(
height: int, width: int, min_len: Optional[int] = 1, max_len: Optional[int] = None
) -> Tuple[int, int]:
"""
Get the output size of the image after resizing given a dictionary specifying the max and min sizes.
Args:
height (`int`):
Height o... |
Get the output size of the image after resizing given a dictionary specifying the max and min sizes.
Args:
height (`int`):
Height of the input image.
width (`int`):
Width of the input image.
min_len (`int`, *optional*, defaults to 1):
Minimum size of ... | _resize_output_size_rescale_to_max_len | python | huggingface/transformers | src/transformers/models/smolvlm/image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/image_processing_smolvlm.py | Apache-2.0 |
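The `_resize_output_size_rescale_to_max_len` row above computes an output size whose longest edge equals `max_len` while preserving aspect ratio. A minimal sketch under that reading of the docstring (rounding behavior and the `min_len` clamp are assumptions, not the exact library logic):

```python
def rescale_to_max_len(height, width, min_len=1, max_len=None):
    # Scale so the longest edge equals max_len, preserving aspect ratio;
    # clamp both sides to at least min_len so no dimension collapses to 0.
    max_len = max(height, width) if max_len is None else max_len
    scale = max_len / max(height, width)
    new_h = max(int(round(height * scale)), min_len)
    new_w = max(int(round(width * scale)), min_len)
    return new_h, new_w
```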
def _resize_output_size_scale_below_upper_bound(
height: int, width: int, max_len: Optional[Dict[str, int]] = None
) -> Tuple[int, int]:
"""
Get the output size of the image after resizing given a dictionary specifying the max and min sizes.
Args:
height (`int`):
Height of the input ... |
Get the output size of the image after resizing given a dictionary specifying the max and min sizes.
Args:
height (`int`):
Height of the input image.
width (`int`):
Width of the input image.
max_len (`Dict[str, int]`, *optional*, defaults to the maximum size of t... | _resize_output_size_scale_below_upper_bound | python | huggingface/transformers | src/transformers/models/smolvlm/image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/image_processing_smolvlm.py | Apache-2.0 |
def get_resize_output_image_size(
image,
resolution_max_side: int,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
) -> Tuple[int, int]:
"""
Get the output size of the image after resizing given a dictionary specifying the max and min sizes.
Args:
image (`np.ndarray`):
... |
Get the output size of the image after resizing given a dictionary specifying the max and min sizes.
Args:
image (`np.ndarray`):
Image to resize.
resolution_max_side (`int`):
The longest edge of the image will be resized to this value. The shortest edge will be resized t... | get_resize_output_image_size | python | huggingface/transformers | src/transformers/models/smolvlm/image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/image_processing_smolvlm.py | Apache-2.0 |
def get_max_height_width(
images_list: List[List[np.ndarray]], input_data_format: Optional[Union[str, ChannelDimension]] = None
) -> List[int]:
"""
Get the maximum height and width across all images in a batch.
"""
if input_data_format is None:
input_data_format = infer_channel_dimension_for... |
Get the maximum height and width across all images in a batch.
| get_max_height_width | python | huggingface/transformers | src/transformers/models/smolvlm/image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/image_processing_smolvlm.py | Apache-2.0 |
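A minimal sketch of the max-height/width reduction over a nested batch, assuming per-image shapes are available as `(height, width)` tuples rather than raw arrays (the real function infers shapes from the channel dimension format):

```python
def max_height_width(shapes_per_sample):
    # shapes_per_sample: list of lists of (height, width) tuples,
    # one inner list per sample in the batch.
    all_shapes = [hw for sample in shapes_per_sample for hw in sample]
    heights, widths = zip(*all_shapes)
    return max(heights), max(widths)
```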
def make_pixel_mask(
image: np.ndarray, output_size: Tuple[int, int], input_data_format: Optional[Union[str, ChannelDimension]] = None
) -> np.ndarray:
"""
Make a pixel mask for the image, where 1 indicates a valid pixel and 0 indicates padding.
Args:
image (`np.ndarray`):
Image to m... |
Make a pixel mask for the image, where 1 indicates a valid pixel and 0 indicates padding.
Args:
image (`np.ndarray`):
Image to make the pixel mask for.
output_size (`Tuple[int, int]`):
Output size of the mask.
| make_pixel_mask | python | huggingface/transformers | src/transformers/models/smolvlm/image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/image_processing_smolvlm.py | Apache-2.0 |
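The `make_pixel_mask` docstring above says 1 marks valid pixels and 0 marks padding. Since padding is applied to the bottom and right, a sketch of the mask construction might look like this (signature simplified to shapes only):

```python
import numpy as np

def make_pixel_mask(image_shape, output_size):
    # 1 marks pixels belonging to the original image (placed top-left),
    # 0 marks the bottom/right padding region.
    mask = np.zeros(output_size, dtype=np.int64)
    h, w = image_shape
    mask[:h, :w] = 1
    return mask

mask = make_pixel_mask((2, 3), (4, 4))
```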
def convert_to_rgb(
image: np.ndarray,
palette: Optional[PIL.ImagePalette.ImagePalette] = None,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
) -> ImageInput:
"""
Converts an image to RGB format.
Args:
im... |
Converts an image to RGB format.
Args:
image (`np.ndarray`):
The image to convert.
palette (List[int], *optional*):
The palette to use if given.
data_format (ChannelDimension or str, *optional*):
The channel dimension format for the output image. If n... | convert_to_rgb | python | huggingface/transformers | src/transformers/models/smolvlm/image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/image_processing_smolvlm.py | Apache-2.0 |
def resize(
self,
image: np.ndarray,
size: Dict[str, int],
resample: PILImageResampling = PILImageResampling.LANCZOS,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
**kwargs,
) -> n... |
Resize an image. The longest edge of the image is resized to size["longest_edge"], with the shortest edge
resized to keep the input aspect ratio. Can also be used with size["height"] and size["width"].
Args:
image (`np.ndarray`):
Image to resize.
size (`D... | resize | python | huggingface/transformers | src/transformers/models/smolvlm/image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/image_processing_smolvlm.py | Apache-2.0 |
def split_image(
self,
image,
max_image_size: Dict[str, int],
resample: PILImageResampling = PILImageResampling.LANCZOS,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
):
"""
... |
Split an image into squares of side max_image_size and the original image resized to max_image_size.
That means that a single image becomes a sequence of images.
This is a "trick" to spend more compute on each image with no changes in the vision encoder.
1) If one side of the original i... | split_image | python | huggingface/transformers | src/transformers/models/smolvlm/image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/image_processing_smolvlm.py | Apache-2.0 |
def resize_for_vision_encoder(
self,
image: np.ndarray,
vision_encoder_max_size: int,
resample: PILImageResampling = PILImageResampling.LANCZOS,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
... |
Resize images to be multiples of `vision_encoder_max_size` while preserving the aspect ratio.
Args:
image (`np.ndarray`):
Images to resize.
vision_encoder_max_size (`int`):
Maximum size of the output image. If the image is larger than this size, i... | resize_for_vision_encoder | python | huggingface/transformers | src/transformers/models/smolvlm/image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/image_processing_smolvlm.py | Apache-2.0 |
def _pad_image(
self,
image: np.ndarray,
output_size: Tuple[int, int],
constant_values: Union[float, Iterable[float]] = 0,
data_format: Optional[ChannelDimension] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
) -> np.ndarray:
"""
... |
Pad an image with zeros to the given size.
| _pad_image | python | huggingface/transformers | src/transformers/models/smolvlm/image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/image_processing_smolvlm.py | Apache-2.0 |
def pad(
self,
images: List[np.ndarray],
constant_values: Union[float, Iterable[float]] = 0,
return_pixel_mask: bool = True,
return_tensors: Optional[Union[str, TensorType]] = None,
data_format: Optional[ChannelDimension] = None,
input_data_format: Optional[Union[... |
        For a list of images, pads each image at the bottom and right with zeros up to the size of the largest height and width in the batch.
        For each sample in the batch, pads the sample with empty images up to the maximum number of images per sample in the batch. Optionally returns a pixel mask.
... | pad | python | huggingface/transformers | src/transformers/models/smolvlm/image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/image_processing_smolvlm.py | Apache-2.0 |
def preprocess(
self,
images: ImageInput,
do_convert_rgb: Optional[bool] = None,
do_resize: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
resample: PILImageResampling = None,
do_image_splitting: Optional[bool] = None,
do_rescale: Optional[b... |
Preprocess a batch of images.
Args:
images (`ImageInput`):
A list of images to preprocess.
do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
Whether to convert the image to RGB.
do_resize (`bool`, *optional*, defa... | preprocess | python | huggingface/transformers | src/transformers/models/smolvlm/image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/image_processing_smolvlm.py | Apache-2.0 |
def get_number_of_image_patches(self, height: int, width: int, images_kwargs=None):
"""
A utility that returns number of image patches for a given image size.
Args:
height (`int`):
Height of the input image.
width (`int`):
Width of the inp... |
A utility that returns number of image patches for a given image size.
Args:
height (`int`):
Height of the input image.
width (`int`):
Width of the input image.
images_kwargs (`dict`, *optional*)
Any kwargs to override... | get_number_of_image_patches | python | huggingface/transformers | src/transformers/models/smolvlm/image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/image_processing_smolvlm.py | Apache-2.0 |
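The `get_number_of_image_patches` utility above predicts how many patches an image yields after resizing and splitting. A hedged sketch of the arithmetic, with hypothetical defaults (`longest_edge`, `max_size`, and the extra global thumbnail when the image is split are assumptions; the real counts depend on the processor config):

```python
import math

def num_image_patches(height, width, longest_edge=2048, max_size=512):
    # Downscale so the longest edge is at most longest_edge, tile into
    # max_size x max_size squares, and add one global thumbnail patch
    # whenever the image was actually split.
    scale = min(longest_edge / max(height, width), 1.0)
    h, w = int(height * scale), int(width * scale)
    rows, cols = math.ceil(h / max_size), math.ceil(w / max_size)
    n = rows * cols
    return n + 1 if n > 1 else n
```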
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
output_attentions: Optional[bool] = False,
) -> Tuple[torch.FloatTensor]:
"""
Args:
hidden_states (`torch.FloatTensor`):
Input to the layer of shape `(batch, seq_... |
Args:
hidden_states (`torch.FloatTensor`):
Input to the layer of shape `(batch, seq_len, embed_dim)`.
attention_mask (`torch.FloatTensor`):
Attention mask of shape `(batch, 1, q_len, k_v_seq_len)` where padding elements are indicated by very large negativ... | forward | python | huggingface/transformers | src/transformers/models/smolvlm/modeling_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/modeling_smolvlm.py | Apache-2.0 |
def forward(
self,
inputs_embeds,
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, BaseModelOutput]:
r"""
Args:
... |
Args:
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert `input_... | forward | python | huggingface/transformers | src/transformers/models/smolvlm/modeling_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/modeling_smolvlm.py | Apache-2.0 |
def enable_input_require_grads(self):
"""
Enables the gradients for the input embeddings.
This is useful for lora when using gradient checkpointing.
c.f. https://github.com/huggingface/peft/issues/1402#issuecomment-1913675032
Override to set output.requires_grad = True for both... |
Enables the gradients for the input embeddings.
This is useful for lora when using gradient checkpointing.
c.f. https://github.com/huggingface/peft/issues/1402#issuecomment-1913675032
Override to set output.requires_grad = True for both the decoder's and vision model's embeddings.
... | enable_input_require_grads | python | huggingface/transformers | src/transformers/models/smolvlm/modeling_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/modeling_smolvlm.py | Apache-2.0 |
def inputs_merger(
self, input_ids: torch.LongTensor, inputs_embeds: torch.Tensor, image_hidden_states: torch.Tensor
):
"""
This method aims at merging the token embeddings with the image hidden states into one single sequence of vectors that are fed to the transformer LM.
The mergin... |
This method aims at merging the token embeddings with the image hidden states into one single sequence of vectors that are fed to the transformer LM.
The merging happens as follows:
- The text token sequence is: `tok_1 tok_2 tok_3 <fake_token_around_image> <image> <image> ... <image> <fake_toke... | inputs_merger | python | huggingface/transformers | src/transformers/models/smolvlm/modeling_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/modeling_smolvlm.py | Apache-2.0 |
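The `inputs_merger` docstring above describes splicing image hidden states into the token embedding sequence at the `<image>` placeholder positions. A simplified list-based sketch of that merge (the real version operates on tensors with a boolean mask rather than a Python loop):

```python
def merge_inputs(input_ids, inputs_embeds, image_embeds, image_token_id):
    # Walk the token sequence; wherever the placeholder <image> token id
    # appears, substitute the next image patch embedding, in order.
    it = iter(image_embeds)
    return [next(it) if tok == image_token_id else emb
            for tok, emb in zip(input_ids, inputs_embeds)]

merged = merge_inputs([1, 9, 9, 2], ["t1", "t2", "t3", "t4"], ["img_a", "img_b"], 9)
```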
def get_image_features(self, pixel_values: torch.FloatTensor, pixel_attention_mask: torch.LongTensor = None):
"""
Encodes images into continuous embeddings that can be forwarded to the language model.
Args:
pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, image... |
Encodes images into continuous embeddings that can be forwarded to the language model.
Args:
pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`):
The tensors corresponding to the input images.
pixel_attention_mask (`t... | get_image_features | python | huggingface/transformers | src/transformers/models/smolvlm/modeling_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/modeling_smolvlm.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
... |
pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*):
Mask to avoid performing attention on padding pixel indices.
image_hidden_states (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`):
The hidden sta... | forward | python | huggingface/transformers | src/transformers/models/smolvlm/modeling_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/modeling_smolvlm.py | Apache-2.0 |
def enable_input_require_grads(self):
"""
Enables the gradients for the input embeddings. This is useful for fine-tuning adapter weights while keeping
the model weights fixed.
"""
def make_inputs_require_grads(module, input, output):
output.requires_grad_(True)
... |
Enables the gradients for the input embeddings. This is useful for fine-tuning adapter weights while keeping
the model weights fixed.
| enable_input_require_grads | python | huggingface/transformers | src/transformers/models/smolvlm/modeling_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/modeling_smolvlm.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
... |
pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*):
Mask to avoid performing attention on padding pixel indices.
image_hidden_states (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`):
The hidden sta... | forward | python | huggingface/transformers | src/transformers/models/smolvlm/modeling_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/modeling_smolvlm.py | Apache-2.0 |
def get_image_features(self, pixel_values: torch.FloatTensor, pixel_attention_mask: torch.LongTensor = None):
"""
Encodes images into continuous embeddings that can be forwarded to the language model.
Args:
pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, image... |
Encodes images into continuous embeddings that can be forwarded to the language model.
Args:
pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`):
The tensors corresponding to the input images.
pixel_attention_mask (`t... | get_image_features | python | huggingface/transformers | src/transformers/models/smolvlm/modular_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/modular_smolvlm.py | Apache-2.0 |
def _prompt_split_image(
image_seq_len, image_rows, image_cols, fake_token_around_image, image_token, global_image_token
):
"""Prompt with expanded image tokens for when the image is split into patches."""
text_split_images = ""
for n_h in range(image_rows):
for n_w in range(image_cols):
... | Prompt with expanded image tokens for when the image is split into patches. | _prompt_split_image | python | huggingface/transformers | src/transformers/models/smolvlm/processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/processing_smolvlm.py | Apache-2.0 |
def _prompt_single_image(image_seq_len, fake_token_around_image, image_token, global_image_token):
"""Prompt with expanded image tokens for a single image."""
return (
f"{fake_token_around_image}"
+ f"{global_image_token}"
+ f"{image_token}" * image_seq_len
+ f"{fake_token_around... | Prompt with expanded image tokens for a single image. | _prompt_single_image | python | huggingface/transformers | src/transformers/models/smolvlm/processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/processing_smolvlm.py | Apache-2.0 |
def __call__(
self,
images: Union[ImageInput, List[ImageInput], List[List[ImageInput]]] = None,
text: Union[TextInput, "PreTokenizedInput", List[TextInput], List["PreTokenizedInput"]] = None,
audio=None,
videos: VideoInput = None,
**kwargs: Unpack[SmolVLMProcessorKwargs],... |
Processes the input prompts and returns a BatchEncoding.
Example:
```python
>>> import requests
>>> from transformers import SmolVLMProcessor
>>> from transformers.image_utils import load_image
>>> processor = SmolVLMProcessor.from_pretrained("HuggingFaceM4/Sm... | __call__ | python | huggingface/transformers | src/transformers/models/smolvlm/processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/processing_smolvlm.py | Apache-2.0 |
def _process_messages_for_chat_template(
self,
conversations: List[List[Dict[str, str]]],
batch_images: List[ImageInput],
batch_videos: List[VideoInput],
batch_video_metadata: List[List[Dict[str, any]]],
**chat_template_kwargs,
):
"""
Used within `appl... |
Used within `apply_chat_template` when a model has a special way to process conversation history. For example,
video models might want to specify in the prompt the duration of the video, or the frame indices and timestamps that
were sampled. This information cannot be accessed before the video is lo... | _process_messages_for_chat_template | python | huggingface/transformers | src/transformers/models/smolvlm/processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/processing_smolvlm.py | Apache-2.0 |
def _load_video_for_model(
self,
video: Union[str, "VideoInput"],
num_frames: Optional[int] = None,
fps: Optional[int] = None,
backend: str = "opencv",
skip_secs: float = 0.0,
**kwargs,
) -> np.array:
"""
Loads `video` to a numpy array.
... |
Loads `video` to a numpy array.
Args:
video (`str` or `VideoInput`):
The video to convert to the numpy array format. Can be a link to video or local path.
num_frames (`int`, *optional*):
Number of frames to sample uniformly. If not passed, the wh... | _load_video_for_model | python | huggingface/transformers | src/transformers/models/smolvlm/processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/processing_smolvlm.py | Apache-2.0 |
def smolvlm_sample_indices_fn(metadata, max_frames, target_fps, skip_secs=0):
"""
Example sampling function which:
- Uses `max_frames` (if provided) or calculates it from `fps` and metadata.
- Applies a basic center-skip if fewer frames than available, otherwise
optionally skips `skip_secs` ... |
Example sampling function which:
- Uses `max_frames` (if provided) or calculates it from `fps` and metadata.
- Applies a basic center-skip if fewer frames than available, otherwise
optionally skips `skip_secs` from both the start and end.
- Uniformly samples the desired number of frames b... | smolvlm_sample_indices_fn | python | huggingface/transformers | src/transformers/models/smolvlm/video_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/video_processing_smolvlm.py | Apache-2.0 |
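A NumPy sketch of that sampling policy, with the frame count and native fps passed explicitly instead of read from a metadata object (an assumption for self-containment):

```python
import numpy as np

def sample_indices(total_frames, native_fps, max_frames, skip_secs=0.0):
    # drop skip_secs of content from both ends when enough frames remain
    skip = int(skip_secs * native_fps)
    start, end = skip, total_frames - skip
    if end - start <= max_frames:  # skipping was too aggressive: use the full range
        start, end = 0, total_frames
    n = min(max_frames, end - start)
    # uniform sampling over the retained window
    return np.linspace(start, end - 1, n).round().astype(int)

idx = sample_indices(total_frames=100, native_fps=25, max_frames=8, skip_secs=1.0)
```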
def get_max_height_width(videos: list["torch.Tensor"]) -> List[int]:
"""
Get the maximum height and width across all videos in a batch.
"""
max_height = max_width = float("-inf")
for video in videos:
height, width = video.size()[-2:]
max_height = max(height, max_height)
max_w... |
Get the maximum height and width across all videos in a batch.
| get_max_height_width | python | huggingface/transformers | src/transformers/models/smolvlm/video_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/video_processing_smolvlm.py | Apache-2.0 |
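The batch-max computation can be sketched with NumPy arrays standing in for the torch tensors:

```python
import numpy as np

def max_height_width(videos):
    # each video is shaped (..., height, width); scan the trailing two dims
    max_h = max_w = 0
    for v in videos:
        h, w = v.shape[-2:]
        max_h, max_w = max(max_h, h), max(max_w, w)
    return max_h, max_w

batch = [np.zeros((3, 4, 224, 180)), np.zeros((3, 2, 160, 320))]
```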
def get_resize_output_image_size(
video,
resolution_max_side: int,
) -> tuple[int, int]:
"""
Get the output size of the video after resizing, given the maximum allowed size of its longest edge.
Args:
video (`np.ndarray`):
Video to resize.
resolution_max_side (`int`):
... |
Get the output size of the video after resizing, given the maximum allowed size of its longest edge.
Args:
video (`np.ndarray`):
Video to resize.
resolution_max_side (`int`):
The longest edge of the video will be resized to this value. The shortest edge will be resized t... | get_resize_output_image_size | python | huggingface/transformers | src/transformers/models/smolvlm/video_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/video_processing_smolvlm.py | Apache-2.0 |
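A sketch of the aspect-ratio-preserving computation, assuming simple rounding (the actual helper may round or clamp differently):

```python
def resize_output_size(height, width, resolution_max_side):
    # scale so the longest edge hits resolution_max_side, keeping aspect ratio
    scale = resolution_max_side / max(height, width)
    return round(height * scale), round(width * scale)
```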
def resize(
self,
video: "torch.Tensor",
size: SizeDict,
interpolation: "F.InterpolationMode" = None,
antialias: bool = True,
**kwargs,
) -> "torch.Tensor":
"""
Resize a video to `(size["height"], size["width"])`.
Args:
video (`tor... |
Resize a video to `(size["height"], size["width"])`.
Args:
video (`torch.Tensor`):
Video to resize.
size (`SizeDict`):
Dictionary in the format `{"height": int, "width": int}` specifying the size of the output video.
resample (`Interp... | resize | python | huggingface/transformers | src/transformers/models/smolvlm/video_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/video_processing_smolvlm.py | Apache-2.0 |
def pad(
self,
video: "torch.Tensor",
padded_size: tuple[int, int],
fill: int = 0,
return_pixel_mask: bool = True,
):
"""Pads the sample with empty video to the padded_size
Args:
video (`torch.Tensor`):
Video to pad.
pad... | Pads the video sample with fill values up to `padded_size`
Args:
video (`torch.Tensor`):
Video to pad.
padded_size (`Tuple[int, int]`):
Height and width to pad.
fill (`int`, *optional*):
The value to use for the padding.
re... | pad | python | huggingface/transformers | src/transformers/models/smolvlm/video_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/smolvlm/video_processing_smolvlm.py | Apache-2.0 |
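A NumPy sketch of the padding step, assuming `(num_frames, channels, height, width)` videos and bottom/right padding (the real method operates on torch tensors):

```python
import numpy as np

def pad_video(video, padded_size, fill=0):
    # video: (T, C, H, W); pad bottom/right up to padded_size = (H_pad, W_pad)
    t, c, h, w = video.shape
    ph, pw = padded_size
    out = np.full((t, c, ph, pw), fill, dtype=video.dtype)
    out[:, :, :h, :w] = video
    mask = np.zeros((ph, pw), dtype=np.int64)
    mask[:h, :w] = 1  # 1 = real pixels, 0 = padding
    return out, mask

video = np.ones((2, 3, 4, 5))
padded, mask = pad_video(video, (6, 8))
```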
def convert_speecht5_checkpoint(
task,
checkpoint_path,
pytorch_dump_folder_path,
config_path=None,
vocab_path=None,
repo_id=None,
):
"""
Copy/paste/tweak model's weights to transformers design.
"""
if config_path is not None:
config = SpeechT5Config.from_pretrained(confi... |
Copy/paste/tweak model's weights to transformers design.
| convert_speecht5_checkpoint | python | huggingface/transformers | src/transformers/models/speecht5/convert_speecht5_original_pytorch_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/convert_speecht5_original_pytorch_checkpoint_to_pytorch.py | Apache-2.0 |
def zero_mean_unit_var_norm(
input_values: List[np.ndarray], attention_mask: List[np.ndarray], padding_value: float = 0.0
) -> List[np.ndarray]:
"""
Every array in the list is normalized to have zero mean and unit variance
"""
if attention_mask is not None:
attent... |
Every array in the list is normalized to have zero mean and unit variance
| zero_mean_unit_var_norm | python | huggingface/transformers | src/transformers/models/speecht5/feature_extraction_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/feature_extraction_speecht5.py | Apache-2.0 |
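The masked normalization can be sketched in NumPy, taking per-sequence valid lengths instead of an attention mask (a simplification of the helper above):

```python
import numpy as np

def zero_mean_unit_var(values, lengths, padding_value=0.0):
    # normalize each sequence over its valid prefix only, then re-fill the tail
    out = []
    for x, n in zip(values, lengths):
        valid = x[:n]
        norm = (valid - valid.mean()) / np.sqrt(valid.var() + 1e-7)
        padded = np.full_like(x, padding_value)
        padded[:n] = norm
        out.append(padded)
    return out

x = np.array([1.0, 2.0, 3.0, 0.0, 0.0])
(y,) = zero_mean_unit_var([x], [3])
```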
def _extract_mel_features(
self,
one_waveform: np.ndarray,
) -> np.ndarray:
"""
Extracts log-mel filterbank features for one waveform array (unbatched).
"""
log_mel_spec = spectrogram(
one_waveform,
window=self.window,
frame_length=... |
Extracts log-mel filterbank features for one waveform array (unbatched).
| _extract_mel_features | python | huggingface/transformers | src/transformers/models/speecht5/feature_extraction_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/feature_extraction_speecht5.py | Apache-2.0 |
def __call__(
self,
audio: Optional[Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]]] = None,
audio_target: Optional[Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]]] = None,
padding: Union[bool, str, PaddingStrategy] = False,
max_length: Opt... |
Main method to featurize and prepare for the model one or several sequence(s).
Pass in a value for `audio` to extract waveform features. Pass in a value for `audio_target` to extract log-mel
spectrogram features.
Args:
audio (`np.ndarray`, `List[float]`, `List[np.ndarray]`... | __call__ | python | huggingface/transformers | src/transformers/models/speecht5/feature_extraction_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/feature_extraction_speecht5.py | Apache-2.0 |
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
"""
Shift input ids one token to the right.
"""
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
shifted_input_ids[:, 0] = decoder_start_t... |
Shift input ids one token to the right.
| shift_tokens_right | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
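A NumPy sketch of the shift; the `-100` replacement mirrors the usual label-sentinel handling in seq2seq training (an assumption here, since the function body is truncated above):

```python
import numpy as np

def shift_tokens_right(input_ids, pad_token_id, decoder_start_token_id):
    shifted = np.zeros_like(input_ids)
    shifted[:, 1:] = input_ids[:, :-1]
    shifted[:, 0] = decoder_start_token_id
    # replace any -100 label sentinels with the pad token
    shifted[shifted == -100] = pad_token_id
    return shifted

ids = np.array([[5, 6, 7, 8]])
out = shift_tokens_right(ids, pad_token_id=0, decoder_start_token_id=2)
```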
def shift_spectrograms_right(
input_values: torch.Tensor, reduction_factor: int = 1, attention_mask: Optional[torch.Tensor] = None
):
"""
Shift input spectrograms one timestep to the right. Also applies the reduction factor to the sequence length.
"""
# thin out frames for reduction factor
if re... |
Shift input spectrograms one timestep to the right. Also applies the reduction factor to the sequence length.
| shift_spectrograms_right | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def _compute_mask_indices(
shape: Tuple[int, int],
mask_prob: float,
mask_length: int,
attention_mask: Optional[torch.LongTensor] = None,
min_masks: int = 0,
) -> np.ndarray:
"""
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method f... |
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for
ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on
CPU as part of the preprocessing during training.
Args:
shape: T... | _compute_mask_indices | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def compute_num_masked_span(input_length):
"""Given input length, compute how many spans should be masked"""
num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
num_masked_span = max(num_masked_span, min_masks)
# make sure num masked span <= sequence_length
i... | Given input length, compute how many spans should be masked | compute_num_masked_span | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
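A deterministic sketch of the span-count rule above (the real helper typically adds a small random `epsilon` for dithering; here it is a fixed argument, an assumption made for reproducibility):

```python
def compute_num_masked_span(input_length, mask_prob, mask_length, min_masks=0, epsilon=0.0):
    # expected number of spans, then clamp to the configured minimum
    num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
    num_masked_span = max(num_masked_span, min_masks)
    # make sure the masked spans cannot exceed the sequence length
    if num_masked_span * mask_length > input_length:
        num_masked_span = input_length // mask_length
    return num_masked_span
```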
def get_embedding(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None):
"""
Build sinusoidal embeddings. This matches the implementation in tensor2tensor, but differs slightly from the
description in Section 3.5 of "Attention Is All You Need".
"""
half_dim ... |
Build sinusoidal embeddings. This matches the implementation in tensor2tensor, but differs slightly from the
description in Section 3.5 of "Attention Is All You Need".
| get_embedding | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
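A NumPy sketch of the tensor2tensor-style sinusoidal table described above (sin in the first `half_dim` columns, cos in the second, a zero column appended for odd dimensions):

```python
import math
import numpy as np

def sinusoidal_embedding(num_embeddings, embedding_dim, padding_idx=None):
    half_dim = embedding_dim // 2
    # geometric frequency ladder from 1 down to 1/10000
    freq = np.exp(np.arange(half_dim) * -(math.log(10000.0) / (half_dim - 1)))
    angles = np.arange(num_embeddings)[:, None] * freq[None, :]
    emb = np.concatenate([np.sin(angles), np.cos(angles)], axis=1)
    if embedding_dim % 2 == 1:  # zero-pad odd embedding dims
        emb = np.concatenate([emb, np.zeros((num_embeddings, 1))], axis=1)
    if padding_idx is not None:
        emb[padding_idx] = 0
    return emb

table = sinusoidal_embedding(10, 8, padding_idx=0)
```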
def create_position_ids_from_input_ids(
self, input_ids: torch.Tensor, padding_idx: int, past_key_values_length: Optional[int] = 0
):
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding
symbols are ignored. This is modified fr... |
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding
symbols are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
input_ids (`torch.Tensor`): Input tensor.
Returns: torch.Tensor
| create_position_ids_from_input_ids | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
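A NumPy sketch of the position-id rule: non-padding tokens get `padding_idx + 1, padding_idx + 2, ...`, while padding positions keep `padding_idx`:

```python
import numpy as np

def create_position_ids(input_ids, padding_idx, past_len=0):
    mask = (input_ids != padding_idx).astype(np.int64)
    # cumulative count of real tokens, zeroed out again at padding positions
    incremental = (np.cumsum(mask, axis=1) + past_len) * mask
    return incremental + padding_idx  # positions start at padding_idx + 1

ids = np.array([[5, 7, 1, 1]])  # 1 plays the role of the pad id here
pos = create_position_ids(ids, padding_idx=1)
```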
def _get_feat_extract_output_lengths(self, input_lengths: Union[torch.LongTensor, int]):
"""
Computes the output length of the convolutional layers
"""
def _conv_out_length(input_length, kernel_size, stride):
# 1D convolutional layer output length formula taken
#... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
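The 1D conv output-length formula composes across the feature-extractor layers; a sketch (the kernel/stride lists below are wav2vec2-style defaults, assumed here for illustration):

```python
def conv_out_length(input_length, kernel_size, stride):
    # standard 1D conv output length (no padding, dilation 1)
    return (input_length - kernel_size) // stride + 1

def feat_extract_output_length(input_length, conv_kernel, conv_stride):
    for k, s in zip(conv_kernel, conv_stride):
        input_length = conv_out_length(input_length, k, s)
    return input_length

# one second of 16 kHz audio through a wav2vec2-style stack (~50 Hz frame rate)
n = feat_extract_output_length(16000, conv_kernel=[10, 3, 3, 3, 3, 2, 2],
                               conv_stride=[5, 2, 2, 2, 2, 2, 2])
```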
def _mask_hidden_states(
self,
hidden_states: torch.FloatTensor,
mask_time_indices: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
):
"""
Masks extracted features along time axis and/or along feature axis according to
[S... |
Masks extracted features along time axis and/or along feature axis according to
[SpecAugment](https://arxiv.org/abs/1904.08779).
| _mask_hidden_states | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
layer_head_mask: Optional[torch.Tensor] = None,
position_bias: Optional[torch.Tensor] = None,
output_attentions: bool = False,
):
"""
Args:
hidde... |
Args:
hidden_states (`torch.FloatTensor`):
input to the layer of shape `(batch, seq_len, hidden_size)`
attention_mask (`torch.FloatTensor`):
attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very
... | forward | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.Tensor] = None,
encoder_attention_mask: Optional[torch.Tensor] = None,
layer_head_mask: Optional[torch.Tensor] = None,
cross_attn_l... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, hidden_size)`
attention_mask (`torch.FloatTensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
... | forward | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.FloatTensor,
attention_mask: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,... |
Args:
hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, feature_size)`):
Features extracted from the speech or text input by the encoder prenet.
attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
... | forward | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def forward(
self,
hidden_states: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
encoder_hidden_states: Optional[torch.FloatTensor] = None,
encoder_attention_mask: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.Tensor]... |
Args:
hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, feature_size)`):
Features extracted from the speech or text input by the decoder prenet.
attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
... | forward | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def forward(
self, attentions: torch.FloatTensor, input_masks: torch.BoolTensor, output_masks: torch.BoolTensor
) -> torch.Tensor:
"""
Compute the attention loss.
Args:
attentions (`torch.FloatTensor` of shape `(batch_size, layers * heads, output_sequence_length, input_s... |
Compute the attention loss.
Args:
attentions (`torch.FloatTensor` of shape `(batch_size, layers * heads, output_sequence_length, input_sequence_length)`):
Batch of multi-head attention weights
input_masks (`torch.BoolTensor` of shape `(batch_size, input_sequence... | forward | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def __init__(
self,
config: SpeechT5Config,
encoder: Optional[nn.Module] = None,
decoder: Optional[nn.Module] = None,
):
r"""
encoder (`PreTrainedModel`, *optional*):
The encoder model to use.
decoder (`PreTrainedModel`, *optional*):
Th... |
encoder (`PreTrainedModel`, *optional*):
The encoder model to use.
decoder (`PreTrainedModel`, *optional*):
The decoder model to use.
| __init__ | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
decoder_input_values: Optional[torch.Tensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] = None... |
input_values (`torch.Tensor` of shape `(batch_size, sequence_length)`):
Depending on which encoder is being used, the `input_values` are either: float values of the input raw
speech waveform, or indices of input sequence tokens in the vocabulary, or hidden states.
decoder_input_... | forward | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] ... |
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a *.flac* or *.wav* audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (*pip insta... | forward | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
decoder_input_values: Optional[torch.FloatTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTensor] ... |
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using [`SpeechT5Tokenizer`]. See [`~PreTrainedTokenizer.encode`] and
[`~PreTrainedTokenizer.__call__`] for details.
... | forward | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def generate(
self,
input_ids: torch.LongTensor,
attention_mask: Optional[torch.LongTensor] = None,
speaker_embeddings: Optional[torch.FloatTensor] = None,
threshold: float = 0.5,
minlenratio: float = 0.0,
maxlenratio: float = 20.0,
vocoder: Optional[nn.Mo... |
Converts a sequence of input tokens into a sequence of mel spectrograms, which are subsequently turned into a
speech waveform using a vocoder.
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the voca... | generate | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def generate_speech(
self,
input_ids: torch.LongTensor,
speaker_embeddings: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
threshold: float = 0.5,
minlenratio: float = 0.0,
maxlenratio: float = 20.0,
vocoder: Optiona... |
Converts a sequence of input tokens into a sequence of mel spectrograms, which are subsequently turned into a
speech waveform using a vocoder.
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the voca... | generate_speech | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
decoder_input_values: Optional[torch.FloatTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.FloatTens... |
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a *.flac* or *.wav* audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (*pip insta... | forward | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def generate_speech(
self,
input_values: torch.FloatTensor,
speaker_embeddings: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
threshold: float = 0.5,
minlenratio: float = 0.0,
maxlenratio: float = 20.0,
vocoder: Opt... |
Converts a raw speech waveform into a sequence of mel spectrograms, which are subsequently turned back into a
speech waveform using a vocoder.
Args:
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech wavefor... | generate_speech | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def forward(self, spectrogram: torch.FloatTensor) -> torch.FloatTensor:
r"""
spectrogram (`torch.FloatTensor`):
Tensor containing the log-mel spectrograms. Can be batched and of shape `(batch_size, sequence_length,
config.model_in_dim)`, or un-batched and of shape `(sequence_leng... |
spectrogram (`torch.FloatTensor`):
Tensor containing the log-mel spectrograms. Can be batched and of shape `(batch_size, sequence_length,
config.model_in_dim)`, or un-batched and of shape `(sequence_length, config.model_in_dim)`.
Returns:
`torch.FloatTensor`: Tensor... | forward | python | huggingface/transformers | src/transformers/models/speecht5/modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/modeling_speecht5.py | Apache-2.0 |
def convert(self, number):
"""
Converts an individual number passed in string form to spelt-out form
"""
if "." in number:
integer_part, decimal_part = number.split(".")
else:
integer_part, decimal_part = number, "00"
# Extract currency symbol if ... |
Converts an individual number passed in string form to spelt-out form
| convert | python | huggingface/transformers | src/transformers/models/speecht5/number_normalizer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/number_normalizer.py | Apache-2.0 |
def __call__(self, *args, **kwargs):
"""
Processes audio and text input, as well as audio and text targets.
You can process audio by using the argument `audio`, or process audio targets by using the argument
`audio_target`. This forwards the arguments to SpeechT5FeatureExtractor's
... |
Processes audio and text input, as well as audio and text targets.
You can process audio by using the argument `audio`, or process audio targets by using the argument
`audio_target`. This forwards the arguments to SpeechT5FeatureExtractor's
[`~SpeechT5FeatureExtractor.__call__`].
... | __call__ | python | huggingface/transformers | src/transformers/models/speecht5/processing_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/processing_speecht5.py | Apache-2.0 |
def pad(self, *args, **kwargs):
"""
Collates the audio and text inputs, as well as their targets, into a padded batch.
Audio inputs are padded by SpeechT5FeatureExtractor's [`~SpeechT5FeatureExtractor.pad`]. Text inputs are padded
by SpeechT5Tokenizer's [`~SpeechT5Tokenizer.pad`].
... |
Collates the audio and text inputs, as well as their targets, into a padded batch.
Audio inputs are padded by SpeechT5FeatureExtractor's [`~SpeechT5FeatureExtractor.pad`]. Text inputs are padded
by SpeechT5Tokenizer's [`~SpeechT5Tokenizer.pad`].
Valid input combinations are:
... | pad | python | huggingface/transformers | src/transformers/models/speecht5/processing_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/processing_speecht5.py | Apache-2.0 |
def convert_tokens_to_string(self, tokens):
"""Converts a sequence of tokens (string) in a single string."""
current_sub_tokens = []
out_string = ""
prev_is_special = False
for token in tokens:
# make sure that special tokens are not decoded using sentencepiece model
... | Converts a sequence of tokens (string) into a single string. | convert_tokens_to_string | python | huggingface/transformers | src/transformers/models/speecht5/tokenization_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/tokenization_speecht5.py | Apache-2.0
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None) -> List[int]:
"""Build model inputs from a sequence by appending eos_token_id."""
if token_ids_1 is None:
return token_ids_0 + [self.eos_token_id]
# We don't expect to process pairs, but leave the pair logic fo... | Build model inputs from a sequence by appending eos_token_id. | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/speecht5/tokenization_speecht5.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speecht5/tokenization_speecht5.py | Apache-2.0 |
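A minimal sketch of the EOS-appending logic; the pair branch is truncated above, so the version shown here (EOS after each segment) is an assumption kept only for API symmetry:

```python
def build_inputs_with_special_tokens(token_ids_0, token_ids_1=None, eos_token_id=2):
    # single sequence: simply append EOS
    if token_ids_1 is None:
        return token_ids_0 + [eos_token_id]
    # pair handling is hypothetical here; pairs are not expected in practice
    return token_ids_0 + [eos_token_id] + token_ids_1 + [eos_token_id]
```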
def from_encoder_decoder_configs(
cls, encoder_config: PretrainedConfig, decoder_config: PretrainedConfig, **kwargs
) -> PretrainedConfig:
r"""
Instantiate a [`SpeechEncoderDecoderConfig`] (or a derived class) from a pre-trained encoder model
configuration and decoder model configura... |
Instantiate a [`SpeechEncoderDecoderConfig`] (or a derived class) from a pre-trained encoder model
configuration and decoder model configuration.
Returns:
[`SpeechEncoderDecoderConfig`]: An instance of a configuration object
| from_encoder_decoder_configs | python | huggingface/transformers | src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py | Apache-2.0 |
def convert_wav2vec2_checkpoint(
checkpoint_path,
pytorch_dump_folder_path,
dict_path,
config_yaml_path,
encoder_config_path,
decoder_config_path,
add_adapter,
adapter_kernel_size,
adapter_stride,
decoder_start_token_id,
encoder_output_dim,
):
"""
Copy/paste/tweak mod... |
Copy/paste/tweak model's weights to transformers design.
| convert_wav2vec2_checkpoint | python | huggingface/transformers | src/transformers/models/speech_encoder_decoder/convert_mbart_wav2vec2_seq2seq_original_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_encoder_decoder/convert_mbart_wav2vec2_seq2seq_original_to_pytorch.py | Apache-2.0 |
def convert_wav2vec2_checkpoint(
checkpoint_path,
pytorch_dump_folder_path,
dict_path,
encoder_config_path,
decoder_config_path,
vocab_size,
num_decoder_layers,
):
"""
Copy/paste/tweak model's weights to transformers design.
"""
encoder_config = Wav2Vec2Config.from_pretrained... |
Copy/paste/tweak model's weights to transformers design.
| convert_wav2vec2_checkpoint | python | huggingface/transformers | src/transformers/models/speech_encoder_decoder/convert_speech_to_text_wav2vec2_seq2seq_original_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_encoder_decoder/convert_speech_to_text_wav2vec2_seq2seq_original_to_pytorch.py | Apache-2.0 |
def _get_feat_extract_output_lengths(
self, input_lengths: Union[jnp.ndarray, int], add_adapter: Optional[bool] = None
):
"""
Computes the output length of the convolutional layers
"""
add_adapter = self.config.encoder.add_adapter if add_adapter is None else add_adapter
... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py | Apache-2.0 |
def init_cache(self, batch_size, max_length, encoder_outputs):
r"""
Args:
batch_size (`int`):
batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
max_length (`int`):
maximum possible length for auto-r... |
Args:
batch_size (`int`):
batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
max_length (`int`):
maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized
... | init_cache | python | huggingface/transformers | src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py | Apache-2.0 |
def encode(
self,
inputs: jnp.ndarray,
attention_mask: Optional[jnp.ndarray] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
train: bool = False,
freeze_feature_encoder: boo... |
Returns:
Example:
```python
>>> from transformers import FlaxSpeechEncoderDecoderModel
>>> # initialize a wav2vec2-2-bart from pretrained wav2vec2 and bart models. Note that the cross-attention layers will be randomly initialized
>>> model = FlaxSpeechEncoderDecoderMo... | encode | python | huggingface/transformers | src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py | Apache-2.0 |
def decode(
self,
decoder_input_ids,
encoder_outputs,
encoder_attention_mask: Optional[jnp.ndarray] = None,
decoder_attention_mask: Optional[jnp.ndarray] = None,
decoder_position_ids: Optional[jnp.ndarray] = None,
past_key_values: Optional[dict] = None,
ou... |
Returns:
Example:
```python
>>> from transformers import FlaxSpeechEncoderDecoderModel
>>> import jax.numpy as jnp
>>> # initialize a wav2vec2-2-bart from pretrained wav2vec2 and bart models. Note that the cross-attention layers will be randomly initialized
>>... | decode | python | huggingface/transformers | src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py | Apache-2.0 |
def __call__(
self,
inputs: jnp.ndarray,
attention_mask: Optional[jnp.ndarray] = None,
decoder_input_ids: Optional[jnp.ndarray] = None,
decoder_attention_mask: Optional[jnp.ndarray] = None,
decoder_position_ids: Optional[jnp.ndarray] = None,
output_attentions: Opt... |
Returns:
Examples:
```python
>>> from transformers import FlaxSpeechEncoderDecoderModel, AutoTokenizer
>>> # load a fine-tuned wav2vec2-2-bart model
>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained("patrickvonplaten/wav2vec2-2-bart-large")
>>> # load ... | __call__ | python | huggingface/transformers | src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py | Apache-2.0 |
def from_encoder_decoder_pretrained(
cls,
encoder_pretrained_model_name_or_path: Optional[Union[str, os.PathLike]] = None,
decoder_pretrained_model_name_or_path: Optional[Union[str, os.PathLike]] = None,
*model_args,
**kwargs,
) -> FlaxPreTrainedModel:
r"""
In... |
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model
checkpoints.
Params:
encoder_pretrained_model_name_or_path (`Union[str, os.PathLike]`, *optional*):
Information necessary to initiate the encoder. Can be either:
... | from_encoder_decoder_pretrained | python | huggingface/transformers | src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py | Apache-2.0 |
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
"""
Shift input ids one token to the right.
"""
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
if decoder_start_token_id is None:
... |
Shift input ids one token to the right.
| shift_tokens_right | python | huggingface/transformers | src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py | Apache-2.0 |
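The tensor operation in `shift_tokens_right` can be sketched with plain Python lists: every row moves one slot to the right and the freed first slot takes `decoder_start_token_id`. Mapping `-100` back to `pad_token_id` is an assumption here, based on the common transformers convention of marking ignored label positions with `-100`:

```python
def shift_tokens_right(input_ids, pad_token_id, decoder_start_token_id):
    # List-based stand-in for the tensor version shown above.
    if decoder_start_token_id is None:
        raise ValueError("decoder_start_token_id has to be defined")
    shifted = []
    for row in input_ids:
        # drop the last token, prepend the decoder start token
        new_row = [decoder_start_token_id] + row[:-1]
        # labels commonly mark ignored positions with -100; replacing them
        # with pad_token_id follows that convention (an assumption here)
        shifted.append([pad_token_id if tok == -100 else tok for tok in new_row])
    return shifted
```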
def __init__(
self,
config: Optional[PretrainedConfig] = None,
encoder: Optional[PreTrainedModel] = None,
decoder: Optional[PreTrainedModel] = None,
):
r"""
encoder (`PreTrainedModel`, *optional*):
The encoder model to use.
decoder (`PreTrainedMode... |
encoder (`PreTrainedModel`, *optional*):
The encoder model to use.
decoder (`PreTrainedModel`, *optional*):
The decoder model to use.
| __init__ | python | huggingface/transformers | src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py | Apache-2.0 |
def from_encoder_decoder_pretrained(
cls,
encoder_pretrained_model_name_or_path: Optional[str] = None,
decoder_pretrained_model_name_or_path: Optional[str] = None,
*model_args,
**kwargs,
) -> PreTrainedModel:
r"""
Instantiate an encoder and a decoder from one ... |
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model
checkpoints.
The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train
the model, you need to first set it back in training mode... | from_encoder_decoder_pretrained | python | huggingface/transformers | src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py | Apache-2.0 |
def _extract_fbank_features(
self,
waveform: np.ndarray,
) -> np.ndarray:
"""
Get mel-filter bank features using TorchAudio. Note that TorchAudio requires 16-bit signed integers as inputs
and hence the waveform should not be normalized before feature extraction.
"""
... |
Get mel-filter bank features using TorchAudio. Note that TorchAudio requires 16-bit signed integers as inputs
and hence the waveform should not be normalized before feature extraction.
| _extract_fbank_features | python | huggingface/transformers | src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py | Apache-2.0 |
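Because the kaldi-style fbank path expects un-normalized 16-bit-range input, a waveform already normalized to `[-1.0, 1.0]` has to be scaled back up first. A minimal sketch of that scaling (the helper name is hypothetical; it mirrors, but is not, the library code):

```python
def denormalize_to_int16_range(waveform):
    # Scale a float waveform in [-1.0, 1.0] to the 16-bit signed range
    # [-32768, 32767], clamping at the boundaries.
    scale = 2 ** 15  # 32768, the magnitude of the int16 range
    return [max(-scale, min(scale - 1, int(round(s * scale)))) for s in waveform]
```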
def __call__(
self,
raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]],
padding: Union[bool, str, PaddingStrategy] = False,
max_length: Optional[int] = None,
truncation: bool = False,
pad_to_multiple_of: Optional[int] = None,
return_te... |
Main method to featurize and prepare for the model one or several sequence(s).
Args:
raw_speech (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`):
The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
... | __call__ | python | huggingface/transformers | src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py | Apache-2.0 |
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
"""
Shift input ids one token to the right.
"""
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
shifted_input_ids[:, 0] = decoder_start_t... |
Shift input ids one token to the right.
| shift_tokens_right | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_speech_to_text.py | Apache-2.0 |
def get_embedding(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None):
"""
Build sinusoidal embeddings. This matches the implementation in tensor2tensor, but differs slightly from the
description in Section 3.5 of "Attention Is All You Need".
"""
half_dim ... |
Build sinusoidal embeddings. This matches the implementation in tensor2tensor, but differs slightly from the
description in Section 3.5 of "Attention Is All You Need".
| get_embedding | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_speech_to_text.py | Apache-2.0 |
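The tensor2tensor-style table that `get_embedding` builds can be sketched in pure Python: a geometric progression of inverse frequencies, with the first half of each row holding sine terms and the second half cosine terms. The zero-padding column used when `embedding_dim` is odd is omitted here for brevity (an assumption of this sketch):

```python
import math

def sinusoidal_embedding(num_embeddings, embedding_dim, padding_idx=None):
    # Pure-Python sketch of the tensor2tensor-style sinusoidal table.
    half_dim = embedding_dim // 2
    # geometric progression of inverse frequencies, base 10000
    step = math.log(10000.0) / (half_dim - 1)
    inv_freq = [math.exp(-step * i) for i in range(half_dim)]
    table = []
    for pos in range(num_embeddings):
        # sin terms first, cos terms second, matching concat([sin, cos])
        row = [math.sin(pos * f) for f in inv_freq] + [math.cos(pos * f) for f in inv_freq]
        table.append(row)
    if padding_idx is not None:
        table[padding_idx] = [0.0] * embedding_dim  # padding row stays zero
    return table
```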
def create_position_ids_from_input_ids(
self, input_ids: torch.Tensor, padding_idx: int, past_key_values_length: Optional[int] = 0
):
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding
symbols are ignored. This is modified fr... |
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding
symbols are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
x: torch.Tensor
Returns: torch.Tensor
| create_position_ids_from_input_ids | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_speech_to_text.py | Apache-2.0 |
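The cumulative-sum trick from fairseq's `utils.make_positions` can be sketched with lists: non-padding tokens receive positions starting at `padding_idx + 1` (plus any past-key-values offset), while padding tokens keep `padding_idx` itself:

```python
def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    # List-based sketch of the tensor version: cumsum over the non-padding
    # mask, offset by past_key_values_length, then shifted by padding_idx.
    position_ids = []
    for row in input_ids:
        cumsum = 0
        out = []
        for tok in row:
            if tok != padding_idx:
                cumsum += 1
                out.append(cumsum + past_key_values_length + padding_idx)
            else:
                out.append(padding_idx)  # padding symbols are ignored
        position_ids.append(out)
    return position_ids
```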
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
layer_head_mask: torch.Tensor,
output_attentions: bool = False,
) -> torch.Tensor:
"""
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
... | forward | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_speech_to_text.py | Apache-2.0 |
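The `(batch, 1, tgt_len, src_len)` additive mask described in these docstrings is typically built by expanding a `(batch, src_len)` padding mask: kept positions contribute `0.0`, padded positions a very large negative value so softmax drives their attention weight toward zero. A list-based sketch, where `neg_value` stands in for the dtype's minimum (an assumption):

```python
def expand_padding_mask(padding_mask, tgt_len=None, neg_value=-1e9):
    # Turn a (batch, src_len) mask of 1 (keep) / 0 (pad) entries into the
    # additive (batch, 1, tgt_len, src_len) form the attention layer adds
    # to its scores before softmax.
    expanded = []
    for row in padding_mask:
        src = [0.0 if keep else neg_value for keep in row]
        rows = tgt_len if tgt_len is not None else len(row)
        # middle singleton dim broadcasts over attention heads
        expanded.append([[list(src) for _ in range(rows)]])
    return expanded
```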
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.Tensor] = None,
encoder_attention_mask: Optional[torch.Tensor] = None,
layer_head_mask: Optional[torch.Tensor] = None,
cross_attn_l... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
... | forward | python | huggingface/transformers | src/transformers/models/speech_to_text/modeling_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/speech_to_text/modeling_speech_to_text.py | Apache-2.0 |