| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def window_partition(self, hidden_states: torch.Tensor, window_size: int) -> Tuple[torch.Tensor, Tuple[int, int]]:
"""
Args:
Partition into non-overlapping windows with padding if needed.
hidden_states (tensor): input tokens with [batch_size, height, width, channel]. window_size (int... |
Args:
Partition into non-overlapping windows with padding if needed.
hidden_states (tensor): input tokens with [batch_size, height, width, channel]. window_size (int): window
size.
Returns:
windows: windows after partition with [batch_size * num_windows, win... | window_partition | python | huggingface/transformers | src/transformers/models/sam_hq/modeling_sam_hq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/modeling_sam_hq.py | Apache-2.0 |
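The padding arithmetic this docstring describes can be sketched in plain Python. This is a stand-in for the tensor version (function name and example sizes are illustrative, not from the source): the map is padded up to the next multiple of `window_size`, then split into `(padded_h // window_size) * (padded_w // window_size)` windows.

```python
def window_partition_shapes(height, width, window_size):
    """Sketch of the bookkeeping behind window partitioning: pad the
    feature map up to a multiple of window_size, then count windows."""
    pad_h = (window_size - height % window_size) % window_size
    pad_w = (window_size - width % window_size) % window_size
    padded_h, padded_w = height + pad_h, width + pad_w
    num_windows = (padded_h // window_size) * (padded_w // window_size)
    return (padded_h, padded_w), num_windows

# a 14x14 map with 7x7 windows needs no padding and yields 4 windows
print(window_partition_shapes(14, 14, 7))  # ((14, 14), 4)
# a 10x10 map with 7x7 windows is first padded to 14x14
print(window_partition_shapes(10, 10, 7))  # ((14, 14), 4)
```

`window_unpartition` (the next row) inverts this: it reshapes the windows back and crops the padding off using the recorded `padding_shape` and `original_shape`.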
def window_unpartition(
self, windows: torch.Tensor, window_size: int, padding_shape: Tuple[int, int], original_shape: Tuple[int, int]
) -> torch.Tensor:
"""
Window unpartition into original sequences, removing padding.
Args:
hidden_states (tensor):
inp... |
Window unpartition into original sequences, removing padding.
Args:
hidden_states (tensor):
input tokens with [batch_size * num_windows, window_size, window_size, channel].
window_size (int):
window size.
padding_shape (Tuple):
... | window_unpartition | python | huggingface/transformers | src/transformers/models/sam_hq/modeling_sam_hq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/modeling_sam_hq.py | Apache-2.0 |
def __init__(self, config, attention_downsample_rate: int = 2, skip_first_layer_pe: bool = False):
"""
A transformer block with four layers:
(1) self-attention of sparse inputs (2) cross attention of sparse inputs -> dense inputs (3) mlp block on
sparse inputs (4) cross attention... |
A transformer block with four layers:
(1) self-attention of sparse inputs
(2) cross-attention of sparse inputs -> dense inputs
(3) MLP block on sparse inputs
(4) cross-attention of dense inputs -> sparse inputs
Arguments:
config (`SamHQMaskDecoderConfig`):
... | __init__ | python | huggingface/transformers | src/transformers/models/sam_hq/modeling_sam_hq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/modeling_sam_hq.py | Apache-2.0 |
def forward(
self,
image_embeddings: torch.Tensor,
image_positional_embeddings: torch.Tensor,
sparse_prompt_embeddings: torch.Tensor,
dense_prompt_embeddings: torch.Tensor,
multimask_output: bool,
hq_token_only: bool,
intermediate_embeddings: Optional[List... |
Predict high-quality masks given image and prompt embeddings.
Args:
image_embeddings (`torch.Tensor`):
The embeddings from the image encoder.
image_positional_embeddings (`torch.Tensor`):
Positional encoding with the shape of image_embeddings.
... | forward | python | huggingface/transformers | src/transformers/models/sam_hq/modeling_sam_hq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/modeling_sam_hq.py | Apache-2.0 |
def forward(self, input_coords, input_shape=None):
"""Positionally encode points that are normalized to [0,1]."""
coordinates = input_coords.clone()
if input_shape is not None:
coordinates[:, :, :, 0] = coordinates[:, :, :, 0] / input_shape[1]
coordinates[:, :, :, 1] = c... | Positionally encode points that are normalized to [0,1]. | forward | python | huggingface/transformers | src/transformers/models/sam_hq/modeling_sam_hq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/modeling_sam_hq.py | Apache-2.0 |
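The visible slice assignments divide x-coordinates by the width (`input_shape[1]`) and y-coordinates by the height (`input_shape[0]`). A minimal pure-Python sketch of that normalization step (names are illustrative):

```python
def normalize_points(points, input_shape):
    """Sketch of the [0, 1] normalization step: x is divided by the width
    (input_shape[1]), y by the height (input_shape[0])."""
    height, width = input_shape
    return [[x / width, y / height] for x, y in points]

print(normalize_points([[512, 256]], (1024, 1024)))  # [[0.5, 0.25]]
```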
def forward(
self,
input_points: Optional[Tuple[torch.Tensor, torch.Tensor]],
input_labels: Optional[torch.Tensor],
input_boxes: Optional[torch.Tensor],
input_masks: Optional[torch.Tensor],
) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Embeds different types of ... |
Embeds different types of prompts, returning both sparse and dense embeddings.
Args:
points (`torch.Tensor`, *optional*):
point coordinates and labels to embed.
boxes (`torch.Tensor`, *optional*):
boxes to embed
masks (`torch.Tensor`,... | forward | python | huggingface/transformers | src/transformers/models/sam_hq/modeling_sam_hq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/modeling_sam_hq.py | Apache-2.0 |
def get_image_embeddings(
self,
pixel_values,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
):
r"""
Returns the image embeddings by passing the pixel values through the vision encoder... |
Returns the image embeddings by passing the pixel values through the vision encoder.
Args:
pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
Input pixel values
output_attentions (`bool`, *optional*):
Whether... | get_image_embeddings | python | huggingface/transformers | src/transformers/models/sam_hq/modeling_sam_hq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/modeling_sam_hq.py | Apache-2.0 |
def get_prompt_embeddings(
self,
input_points: Optional[torch.FloatTensor] = None,
input_labels: Optional[torch.LongTensor] = None,
input_boxes: Optional[torch.FloatTensor] = None,
input_masks: Optional[torch.LongTensor] = None,
):
r"""
Returns the prompt embe... |
Returns the prompt embeddings by passing the input points, labels, boxes and masks through the prompt encoder.
Args:
input_points (`torch.FloatTensor` of shape `(batch_size, point_batch_size, num_points_per_image, 2)`):
Optional input points for the prompt encoder. The padd... | get_prompt_embeddings | python | huggingface/transformers | src/transformers/models/sam_hq/modeling_sam_hq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/modeling_sam_hq.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
input_points: Optional[torch.FloatTensor] = None,
input_labels: Optional[torch.LongTensor] = None,
input_boxes: Optional[torch.FloatTensor] = None,
input_masks: Optional[torch.LongTensor] = None,
... |
input_points (`torch.FloatTensor` of shape `(batch_size, num_points, 2)`):
Input 2D spatial points, used by the prompt encoder to encode the prompt. Generally yields much
better results. The points can be obtained by passing a list of lists of lists to the processor that will
... | forward | python | huggingface/transformers | src/transformers/models/sam_hq/modeling_sam_hq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/modeling_sam_hq.py | Apache-2.0 |
def forward(
self,
image_embeddings: torch.Tensor,
image_positional_embeddings: torch.Tensor,
sparse_prompt_embeddings: torch.Tensor,
dense_prompt_embeddings: torch.Tensor,
multimask_output: bool,
hq_token_only: bool,
intermediate_embeddings: Optional[List... |
Predict high-quality masks given image and prompt embeddings.
Args:
image_embeddings (`torch.Tensor`):
The embeddings from the image encoder.
image_positional_embeddings (`torch.Tensor`):
Positional encoding with the shape of image_embeddings.
... | forward | python | huggingface/transformers | src/transformers/models/sam_hq/modular_sam_hq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/modular_sam_hq.py | Apache-2.0 |
def get_image_embeddings(
self,
pixel_values,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
):
r"""
Returns the image embeddings by passing the pixel values through the vision encoder... |
Returns the image embeddings by passing the pixel values through the vision encoder.
Args:
pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
Input pixel values
output_attentions (`bool`, *optional*):
Whether... | get_image_embeddings | python | huggingface/transformers | src/transformers/models/sam_hq/modular_sam_hq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/modular_sam_hq.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
input_points: Optional[torch.FloatTensor] = None,
input_labels: Optional[torch.LongTensor] = None,
input_boxes: Optional[torch.FloatTensor] = None,
input_masks: Optional[torch.LongTensor] = None,
... |
input_points (`torch.FloatTensor` of shape `(batch_size, num_points, 2)`):
Input 2D spatial points, used by the prompt encoder to encode the prompt. Generally yields much
better results. The points can be obtained by passing a list of lists of lists to the processor that will
... | forward | python | huggingface/transformers | src/transformers/models/sam_hq/modular_sam_hq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/modular_sam_hq.py | Apache-2.0 |
def __call__(
self,
images: Optional[ImageInput] = None,
# The following is to capture `segmentation_maps`, `input_points`, `input_labels` and `input_boxes`
# arguments that may be passed as a positional argument.
# See transformers.processing_utils.ProcessorMixin.prepare_and_val... |
This method uses [`SamImageProcessor.__call__`] to prepare image(s) for the model. It also prepares 2D
points and bounding boxes for the model if they are provided.
| __call__ | python | huggingface/transformers | src/transformers/models/sam_hq/processing_samhq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/processing_samhq.py | Apache-2.0 |
def _normalize_and_convert(
self,
encoding_image_processor,
original_sizes,
input_points=None,
input_labels=None,
input_boxes=None,
return_tensors="pt",
point_pad_value=-10,
):
"""
Normalize and convert the image processor output to the... |
Normalize and convert the image processor output to the expected format.
| _normalize_and_convert | python | huggingface/transformers | src/transformers/models/sam_hq/processing_samhq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/processing_samhq.py | Apache-2.0 |
def _pad_points_and_labels(self, input_points, input_labels, point_pad_value):
r"""
The method pads the 2D points and labels to the maximum number of points in the batch.
"""
expected_nb_points = max([point.shape[0] for point in input_points])
processed_input_points = []
... |
The method pads the 2D points and labels to the maximum number of points in the batch.
| _pad_points_and_labels | python | huggingface/transformers | src/transformers/models/sam_hq/processing_samhq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/processing_samhq.py | Apache-2.0 |
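The padding scheme the docstring describes can be sketched without tensors. This is a hedged pure-Python stand-in, assuming the default pad value of -10 visible in `_normalize_and_convert`'s signature; padded labels get the same sentinel so they can be recognized downstream:

```python
def pad_points_and_labels(input_points, input_labels, point_pad_value=-10):
    """Sketch of the batch-padding step: every example is padded up to the
    maximum number of points in the batch, using point_pad_value as filler."""
    expected_nb_points = max(len(points) for points in input_points)
    padded_points, padded_labels = [], []
    for points, labels in zip(input_points, input_labels):
        n_missing = expected_nb_points - len(points)
        padded_points.append(points + [[point_pad_value, point_pad_value]] * n_missing)
        padded_labels.append(labels + [point_pad_value] * n_missing)
    return padded_points, padded_labels

points, labels = pad_points_and_labels([[[1, 2]], [[3, 4], [5, 6]]], [[1], [1, 0]])
print(points)  # [[[1, 2], [-10, -10]], [[3, 4], [5, 6]]]
print(labels)  # [[1, -10], [1, 0]]
```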
def _normalize_coordinates(
self, target_size: int, coords: np.ndarray, original_size, is_bounding_box=False
) -> np.ndarray:
"""
Expects a numpy array of length 2 in the final dimension. Requires the original image size in (H,W) format.
"""
old_h, old_w = original_size
... |
Expects a numpy array of length 2 in the final dimension. Requires the original image size in (H,W) format.
| _normalize_coordinates | python | huggingface/transformers | src/transformers/models/sam_hq/processing_samhq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/processing_samhq.py | Apache-2.0 |
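The rescaling this docstring implies can be sketched as follows, under the assumption (SAM-style preprocessing) that the image's longest side is resized to `target_size` while preserving aspect ratio, so point coordinates are scaled by `new/old` per axis:

```python
def normalize_coordinates(target_size, coords, original_size):
    """Sketch of coordinate rescaling from the original (H, W) image to the
    resized model input. Assumes longest-side resizing to target_size."""
    old_h, old_w = original_size
    scale = target_size / max(old_h, old_w)
    new_h = int(old_h * scale + 0.5)
    new_w = int(old_w * scale + 0.5)
    return [[x * (new_w / old_w), y * (new_h / old_h)] for x, y in coords]

# a point at (x=500, y=375) in a 750x1000 (H, W) image, model size 1024
print(normalize_coordinates(1024, [[500, 375]], (750, 1000)))
```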
def _preprocess_input(self, inputs, error_message, expected_nesting=1, dtype=None):
"""
Preprocess input by converting torch tensors to numpy arrays and validating structure.
Args:
inputs: The input to process
error_message: Error message if validation fails
... |
Preprocess input by converting torch tensors to numpy arrays and validating structure.
Args:
inputs: The input to process
error_message: Error message if validation fails
expected_nesting: Expected nesting level (1 for points/labels, 2 for boxes)
dtype: ... | _preprocess_input | python | huggingface/transformers | src/transformers/models/sam_hq/processing_samhq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/processing_samhq.py | Apache-2.0 |
def _check_and_preprocess_points(
self,
input_points=None,
input_labels=None,
input_boxes=None,
):
r"""
Checks and preprocesses the 2D points, labels and bounding boxes. It checks if the inputs are valid and, if they
are, it converts the coordinates of the points ... |
Checks and preprocesses the 2D points, labels and bounding boxes. It checks if the inputs are valid and, if they
are, it converts the coordinates of the points and bounding boxes. If a user passes directly a `torch.Tensor`,
it is converted to a `numpy.ndarray` and then to a `list`.
| _check_and_preprocess_points | python | huggingface/transformers | src/transformers/models/sam_hq/processing_samhq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/processing_samhq.py | Apache-2.0 |
def _to_tensor(self, array, min_dim, return_tensors):
"""
Convert numpy array to tensor and ensure proper dimensionality.
Args:
array: The numpy array to convert
min_dim: The minimum number of dimensions the result should have
return_tensors: The type of tenso... |
Convert numpy array to tensor and ensure proper dimensionality.
Args:
array: The numpy array to convert
min_dim: The minimum number of dimensions the result should have
return_tensors: The type of tensors to return (e.g., "pt" for PyTorch tensors)
Returns:
... | _to_tensor | python | huggingface/transformers | src/transformers/models/sam_hq/processing_samhq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/processing_samhq.py | Apache-2.0 |
def _normalize_batch_coordinates(self, inputs, original_sizes, is_bounding_box=False):
"""
Normalize coordinates based on original sizes.
Args:
inputs: List of coordinate arrays
original_sizes: Original sizes of the images
is_bounding_box: Whether inputs are b... |
Normalize coordinates based on original sizes.
Args:
inputs: List of coordinate arrays
original_sizes: Original sizes of the images
is_bounding_box: Whether inputs are bounding boxes
Returns:
Normalized coordinates as list
| _normalize_batch_coordinates | python | huggingface/transformers | src/transformers/models/sam_hq/processing_samhq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sam_hq/processing_samhq.py | Apache-2.0 |
def load_model(save_dir, model_type, repo_id):
"""
Meta SeamlessM4T is made of 8 main components:
- speech_encoder (#1) and speech_encoder_frontend (#2)
- t2u_model (#3)
- text_encoder (#4) and text_encoder_frontend (#5)
- text_decoder (#6) [and text_decoder_frontend (#5) = equals to text_encode... |
Meta SeamlessM4T is made of 8 main components:
- speech_encoder (#1) and speech_encoder_frontend (#2)
- t2u_model (#3)
- text_encoder (#4) and text_encoder_frontend (#5)
- text_decoder (#6) [and text_decoder_frontend (#5), which equals text_encoder_frontend]
- final_proj (#7)
- vocoder (#8)
... | load_model | python | huggingface/transformers | src/transformers/models/seamless_m4t/convert_fairseq2_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/convert_fairseq2_to_hf.py | Apache-2.0 |
def zero_mean_unit_var_norm(
input_values: List[np.ndarray], attention_mask: List[np.ndarray], padding_value: float = 0.0
) -> List[np.ndarray]:
"""
Every array in the list is normalized to have zero mean and unit variance
"""
if attention_mask is not None:
attent... |
Every array in the list is normalized to have zero mean and unit variance
| zero_mean_unit_var_norm | python | huggingface/transformers | src/transformers/models/seamless_m4t/feature_extraction_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/feature_extraction_seamless_m4t.py | Apache-2.0 |
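The per-array normalization described here is just "subtract the mean, divide by the standard deviation." A minimal pure-Python sketch (the real version works on numpy arrays and respects the attention mask; the epsilon guard against constant inputs is an assumption):

```python
import math

def zero_mean_unit_var_norm(values, epsilon=1e-7):
    """Sketch of per-utterance normalization: subtract the mean and divide
    by the (biased) standard deviation. epsilon avoids division by zero."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / math.sqrt(var + epsilon) for v in values]

normed = zero_mean_unit_var_norm([1.0, 2.0, 3.0, 4.0])
print(normed)  # mean of the result is ~0, variance ~1
```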
def _extract_fbank_features(
self,
waveform: np.ndarray,
) -> np.ndarray:
"""
Get mel-filter bank features using TorchAudio. Note that TorchAudio requires 16-bit signed integers as inputs
and hence the waveform should not be normalized before feature extraction.
"""
... |
Get mel-filter bank features using TorchAudio. Note that TorchAudio requires 16-bit signed integers as inputs
and hence the waveform should not be normalized before feature extraction.
| _extract_fbank_features | python | huggingface/transformers | src/transformers/models/seamless_m4t/feature_extraction_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/feature_extraction_seamless_m4t.py | Apache-2.0 |
def __call__(
self,
raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]],
padding: Union[bool, str, PaddingStrategy] = True,
pad_to_multiple_of: Optional[int] = 2,
max_length: Optional[int] = None,
truncation: bool = False,
return_tensor... |
Main method to featurize and prepare for the model one or several sequence(s).
Args:
raw_speech (`np.ndarray`, `torch.Tensor`, `List[float]`, `List[np.ndarray]`, `List[torch.Tensor]`,
`List[List[float]]`, `List[List[List[float]]]`):
The sequence or batch of sequ... | __call__ | python | huggingface/transformers | src/transformers/models/seamless_m4t/feature_extraction_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/feature_extraction_seamless_m4t.py | Apache-2.0 |
def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
x: torch.Ten... |
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
x: torch.Tensor
Returns: torch.Tensor
| create_position_ids_from_input_ids | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
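The rule stated above ("position numbers begin at padding_idx+1, padding symbols are ignored") can be sketched in pure Python; the real fairseq-derived version computes the same thing vectorized with a cumulative sum over the non-pad mask:

```python
def create_position_ids_from_input_ids(input_ids, padding_idx):
    """Sketch of fairseq-style position ids: non-pad tokens are numbered
    from padding_idx + 1 onward; pad tokens keep padding_idx itself."""
    position_ids = []
    for row in input_ids:
        position, out = 0, []
        for token in row:
            if token == padding_idx:
                out.append(padding_idx)
            else:
                position += 1
                out.append(padding_idx + position)
        position_ids.append(out)
    return position_ids

print(create_position_ids_from_input_ids([[5, 6, 1, 1]], padding_idx=1))
# [[2, 3, 1, 1]]
```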
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
"""
Shift input ids one token to the right.
"""
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
shifted_input_ids[:, 0] = decoder_start_t... |
Shift input ids one token to the right.
| shift_tokens_right | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
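The visible body shows the full recipe: prepend `decoder_start_token_id`, drop the last token, and (in the standard seq2seq pattern) replace `-100` label placeholders with `pad_token_id`. A list-based sketch of the same logic:

```python
def shift_tokens_right(input_ids, pad_token_id, decoder_start_token_id):
    """Sketch of the decoder-input shift: prepend the start token, drop the
    last token, and map -100 label placeholders back to pad_token_id."""
    shifted = []
    for row in input_ids:
        new_row = [decoder_start_token_id] + row[:-1]
        shifted.append([pad_token_id if tok == -100 else tok for tok in new_row])
    return shifted

print(shift_tokens_right([[11, 12, 13, -100]], pad_token_id=0, decoder_start_token_id=2))
# [[2, 11, 12, 13]]
```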
def _compute_new_attention_mask(hidden_states: torch.Tensor, seq_lens: torch.Tensor):
"""
Computes an attention mask of the form `(batch, seq_len)` with an attention for each element in the batch that
stops at the corresponding element in `seq_lens`.
Args:
hidden_states (`torch.FloatTensor` of ... |
Computes an attention mask of the form `(batch, seq_len)` with an attention for each element in the batch that
stops at the corresponding element in `seq_lens`.
Args:
hidden_states (`torch.FloatTensor` of shape `(batch, seq_len, *)`):
The sequences to mask, where `*` is any number of s... | _compute_new_attention_mask | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
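The `(batch, seq_len)` mask described here is a simple comparison of each position against the per-sample length. A pure-Python sketch (the tensor version does this with `torch.arange(seq_len) < seq_lens[:, None]`, which is an assumption about the implementation):

```python
def compute_new_attention_mask(seq_len, seq_lens):
    """Sketch of the length-based mask: position j of row i is 1 while
    j < seq_lens[i] and 0 afterwards."""
    return [[1 if j < n else 0 for j in range(seq_len)] for n in seq_lens]

print(compute_new_attention_mask(5, [3, 5, 1]))
# [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1], [1, 0, 0, 0, 0]]
```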
def format_speech_generation_kwargs(kwargs):
"""
Format kwargs for SeamlessM4T models that generate speech, attribute kwargs to either the text generation or the
speech generation models.
Args:
kwargs (`dict`):
Keyword arguments are of two types:
- Without a prefi... |
Format kwargs for SeamlessM4T models that generate speech, attribute kwargs to either the text generation or the
speech generation models.
Args:
kwargs (`dict`):
Keyword arguments are of two types:
- Without a prefix, they will be entered as `**kwargs` for the `gener... | format_speech_generation_kwargs | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
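The routing rule described above can be sketched as prefix-based splitting. This is a simplified stand-in (the real function also checks each sub-model's `generate` signature before forwarding un-prefixed kwargs, which is omitted here), assuming the `text_`/`speech_` prefixes named in the docstring:

```python
def format_speech_generation_kwargs(kwargs):
    """Sketch of the prefix routing: "text_"/"speech_" prefixed kwargs go to
    one sub-model each; un-prefixed kwargs are shared by both, without
    overriding an explicitly prefixed value."""
    text_kwargs, speech_kwargs = {}, {}
    for key, value in kwargs.items():
        if key.startswith("text_"):
            text_kwargs[key[len("text_"):]] = value
        elif key.startswith("speech_"):
            speech_kwargs[key[len("speech_"):]] = value
        else:
            text_kwargs.setdefault(key, value)
            speech_kwargs.setdefault(key, value)
    return text_kwargs, speech_kwargs

text_kwargs, speech_kwargs = format_speech_generation_kwargs(
    {"max_new_tokens": 10, "text_num_beams": 4, "speech_do_sample": True}
)
print(text_kwargs)    # {'max_new_tokens': 10, 'num_beams': 4}
print(speech_kwargs)  # {'max_new_tokens': 10, 'do_sample': True}
```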
def get_embedding(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None):
"""
Build sinusoidal embeddings.
This matches the implementation in tensor2tensor, but differs slightly from the description in Section 3.5 of
"Attention Is All You Need".
"""
... |
Build sinusoidal embeddings.
This matches the implementation in tensor2tensor, but differs slightly from the description in Section 3.5 of
"Attention Is All You Need".
| get_embedding | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
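A list-based sketch of the tensor2tensor-style sinusoidal table the docstring refers to: half of each vector is sines, half cosines, with frequencies log-spaced from 1 down to 1/10000. Handling of odd `embedding_dim` (an extra zero column) is omitted here, and the exact layout is an assumption about this variant:

```python
import math

def get_embedding(num_embeddings, embedding_dim, padding_idx=None):
    """Sketch of sinusoidal embeddings: first half sin, second half cos,
    with log-spaced frequencies; the padding row (if any) is zeroed."""
    half_dim = embedding_dim // 2
    emb_scale = math.log(10000) / (half_dim - 1)
    table = []
    for pos in range(num_embeddings):
        freqs = [pos * math.exp(-emb_scale * i) for i in range(half_dim)]
        table.append([math.sin(f) for f in freqs] + [math.cos(f) for f in freqs])
    if padding_idx is not None:
        table[padding_idx] = [0.0] * embedding_dim
    return table

table = get_embedding(4, 8, padding_idx=0)
print(table[1][:2])  # position 1: [sin(1), sin(1 * 10000**(-1/3)), ...]
```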
def create_position_ids_from_inputs_embeds(self, inputs_embeds, past_key_values_length):
"""
We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.
Args:
inputs_embeds: torch.Tensor
Returns: torch.Tensor
"""
... |
We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.
Args:
inputs_embeds: torch.Tensor
Returns: torch.Tensor
| create_position_ids_from_inputs_embeds | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
output_attentions: bool = False,
) -> torch.Tensor:
"""
Args:
hidden_states (`torch.FloatTensor`):
input to the layer of shape `(batch, seq_len, embed_dim)`
... |
Args:
hidden_states (`torch.FloatTensor`):
input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`):
attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very
l... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.Tensor] = None,
encoder_attention_mask: Optional[torch.Tensor] = None,
past_key_value: Optional[Tuple[torch.Tensor]] = None,
output... |
Args:
hidden_states (`torch.FloatTensor`):
input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`):
attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very
l... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def compute_last_hidden_states_per_sample(
self,
hidden_states: Tuple[Tuple[torch.Tensor]],
beam_indices: Optional[torch.Tensor] = None,
) -> torch.Tensor:
"""
Computes the last hidden states.
Parameters:
hidden_states (`Tuple[Tuple[torch.Tensor]]`):
... |
Computes the last hidden states.
Parameters:
hidden_states (`Tuple[Tuple[torch.Tensor]]`):
The generated hidden states. Tuple (one element for each generated token) of tuples (one element for
each layer of the decoder) of torch.FloatTensor of shape (batch_si... | compute_last_hidden_states_per_sample | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def __init__(
self,
config: SeamlessM4TConfig,
embed_tokens: Optional[nn.Embedding] = None,
is_t2u_encoder: bool = False,
):
r"""
embed_tokens (`nn.Embedding`, *optional*):
Input embedding
is_t2u_encoder (`bool`, *optional*, defaults to `False`):
... |
embed_tokens (`nn.Embedding`, *optional*):
Input embedding
is_t2u_encoder (`bool`, *optional*, defaults to `False`):
indicates if it belongs to the text-to-units model, in which case it won't have input embeddings
| __init__ | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: O... |
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTra... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def __init__(
self,
config: SeamlessM4TConfig,
embed_tokens_decoder: Optional[nn.Embedding] = None,
):
r"""
embed_tokens_decoder (`nn.Embedding`, *optional*):
input embedding of the decoder.
"""
super().__init__(config)
self.encoder = Seam... |
embed_tokens_decoder (`nn.Embedding`, *optional*):
input embedding of the decoder.
| __init__ | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def __init__(
self,
config: SeamlessM4TConfig,
embed_tokens_decoder: Optional[nn.Embedding] = None,
):
r"""
embed_tokens_decoder (`nn.Embedding`, *optional*):
input embedding of the decoder.
"""
# update config - used principally for bos_token_id ... |
embed_tokens_decoder (`nn.Embedding`, *optional*):
input embedding of the decoder.
| __init__ | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
encoder_outputs: Optional[Tuple[Tuple[torch.Flo... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def forward(self, input_embeds: torch.FloatTensor) -> torch.FloatTensor:
r"""
Converts a log-mel spectrogram into a speech waveform. Passing a batch of log-mel spectrograms returns a batch
of speech waveforms. Passing a single, un-batched log-mel spectrogram returns a single, un-batched speech
... |
Converts a log-mel spectrogram into a speech waveform. Passing a batch of log-mel spectrograms returns a batch
of speech waveforms. Passing a single, un-batched log-mel spectrogram returns a single, un-batched speech
waveform.
Args:
spectrogram (`torch.FloatTensor`):
... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def _get_dur_output_lengths(self, input_ids, dur_out):
"""
Computes the output length after the duration layer.
"""
unit_lengths = (input_ids != self.pad_token_id).sum(1)
# take care of edge cases with no padding or too much padding
unit_lengths = torch.clamp(unit_lengt... |
Computes the output length after the duration layer.
| _get_dur_output_lengths | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def _get_output_hifigan_lengths(self, input_lengths: Union[torch.LongTensor, int]):
"""
Computes the output length of the hifigan convolutional layers
"""
def _conv_out_length(input_length, kernel_size, stride, pad, dilation=1):
# 1D convolutional layer output length formula... |
Computes the output length of the hifigan convolutional layers
| _get_output_hifigan_lengths | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
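The `_conv_out_length` helper visible in the snippet is the standard 1D convolution output-length formula (as documented for `torch.nn.Conv1d`). Since HiFi-GAN upsamples with transposed convolutions, the matching transposed formula is sketched alongside it; treating `output_padding` as 0 is an assumption:

```python
def conv_out_length(input_length, kernel_size, stride, pad, dilation=1):
    """Standard 1D convolution output-length formula."""
    return (input_length + 2 * pad - dilation * (kernel_size - 1) - 1) // stride + 1

def conv_transpose_out_length(input_length, kernel_size, stride, pad, dilation=1):
    """Matching transposed-convolution formula (output_padding assumed 0)."""
    return (input_length - 1) * stride - 2 * pad + dilation * (kernel_size - 1) + 1

# a "same"-padded conv keeps the length; a stride-8 transposed conv upsamples 8x
print(conv_out_length(100, kernel_size=7, stride=1, pad=3))             # 100
print(conv_transpose_out_length(100, kernel_size=16, stride=8, pad=4))  # 800
```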
def forward(
self, input_ids: torch.LongTensor, spkr_id: torch.Tensor, lang_id: torch.Tensor
) -> Tuple[torch.Tensor]:
"""
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
... |
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using [`SeamlessM4TTextToUnitForConditionalGeneration`]. [What are input
IDs?](../glossary#inpu... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
encoder_outputs: Optional[Tuple[Tuple[torch.Flo... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def generate(
self,
input_ids=None,
tgt_lang=None,
generation_config=None,
logits_processor=None,
stopping_criteria=None,
prefix_allowed_tokens_fn=None,
synced_gpus=False,
**kwargs,
):
"""
Generates sequences of token ids.
... |
Generates sequences of token ids.
<Tip warning={true}>
Most generation-controlling parameters are set in `generation_config` which, if not passed, will be set to the
model's default generation configuration. You can override any `generation_config` by passing the corresponding
... | generate | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
encoder_outputs: Optional[Tuple[Tuple[torc... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def generate(
self,
input_features=None,
tgt_lang=None,
generation_config=None,
logits_processor=None,
stopping_criteria=None,
prefix_allowed_tokens_fn=None,
synced_gpus=False,
**kwargs,
):
"""
Generates sequences of token ids.
... |
Generates sequences of token ids.
<Tip warning={true}>
Most generation-controlling parameters are set in `generation_config` which, if not passed, will be set to the
model's default generation configuration. You can override any `generation_config` by passing the corresponding
... | generate | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
encoder_outputs: Optional[Tuple[Tuple[torch.Flo... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def generate(
self,
input_ids: Optional[torch.Tensor] = None,
return_intermediate_token_ids: Optional[bool] = None,
tgt_lang: Optional[str] = None,
spkr_id: Optional[int] = 0,
**kwargs,
) -> Union[torch.Tensor, SeamlessM4TGenerationOutput]:
"""
Generat... |
Generates translated audio waveforms.
<Tip>
This method successively calls the `.generate` function of two different sub-models. You can specify keyword
arguments at two different levels: general arguments that will be passed to both models, or prefixed arguments
that will be ... | generate | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
encoder_outputs: Optional[Tuple[Tuple[torc... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def generate(
self,
input_features: Optional[torch.Tensor] = None,
return_intermediate_token_ids: Optional[bool] = None,
tgt_lang: Optional[str] = None,
spkr_id: Optional[int] = 0,
**kwargs,
) -> Union[torch.Tensor, SeamlessM4TGenerationOutput]:
"""
Ge... |
Generates translated audio waveforms.
<Tip>
This method successively calls the `.generate` function of two different sub-models. You can specify keyword
arguments at two different levels: general arguments that will be passed to both models, or prefixed arguments
that will be ... | generate | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def __init__(self, config, current_modality="text"):
r"""
current_modality (`str`, *optional*, defaults to `"text"`):
Default modality. Used to initialize the model.
"""
super().__init__(config)
self.shared = nn.Embedding(config.vocab_size, config.hidden_size, config... |
current_modality (`str`, *optional*, defaults to `"text"`):
Default modality. Used to initialize the model.
| __init__ | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
input_features: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = N... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def generate(
self,
input_ids: Optional[torch.Tensor] = None,
input_features: Optional[torch.Tensor] = None,
return_intermediate_token_ids: Optional[bool] = None,
tgt_lang: Optional[str] = None,
spkr_id: Optional[int] = 0,
generate_speech: Optional[bool] = True,
... |
Generates translated token ids and/or translated audio waveforms.
<Tip>
This method successively calls the `.generate` function of two different sub-models. You can specify keyword
arguments at two different levels: general arguments that will be passed to both models, or prefixed arg... | generate | python | huggingface/transformers | src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py | Apache-2.0 |
def __call__(self, text=None, audios=None, src_lang=None, tgt_lang=None, **kwargs):
"""
Main method to prepare one or several sequence(s) and audio(s) for the model. This method forwards the `text`
and `kwargs` arguments to SeamlessM4TTokenizerFast's [`~SeamlessM4TTokenizerFast.__call__`] if `t... |
Main method to prepare one or several sequence(s) and audio(s) for the model. This method forwards the `text`
and `kwargs` arguments to SeamlessM4TTokenizerFast's [`~SeamlessM4TTokenizerFast.__call__`] if `text` is not
`None` to encode the text. To prepare the audio(s), this method forwards th... | __call__ | python | huggingface/transformers | src/transformers/models/seamless_m4t/processing_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/processing_seamless_m4t.py | Apache-2.0 |
def __call__(
self,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None,
text_pair: Optional[Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]]] = None,
text_target: Union[TextInput, PreTokenizedInput, List[TextInput], Lis... |
Args:
text (`str`, `List[str]`, `List[List[str]]`, *optional*):
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set
... | __call__ | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | Apache-2.0 |
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*)... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | Apache-2.0 |
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An NLLB sequence has ... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An NLLB sequence has the following format, where `X` represents the sequence:
- `input_ids` (for encoder) `X [eos, src_lang_code]`
- `decoder_input_ids... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | Apache-2.0 |
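The encoder layout described above (`X [eos, src_lang_code]`) is easy to illustrate in isolation. A minimal pure-Python sketch, where `EOS_ID` and the language-code ids are hypothetical placeholders rather than real SeamlessM4T vocabulary entries:

```python
# Sketch of the NLLB-style encoder input layout: X [eos, src_lang_code].
# EOS_ID and LANG_CODE_ID are made-up placeholder ids, not the real vocabulary.
EOS_ID = 3
LANG_CODE_ID = {"eng": 100, "fra": 101}

def build_encoder_inputs(token_ids, src_lang):
    """Append eos and the source language code to the token sequence."""
    return token_ids + [EOS_ID, LANG_CODE_ID[src_lang]]

print(build_encoder_inputs([5, 6, 7], "eng"))  # [5, 6, 7, 3, 100]
```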
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. NLLB does not
make use of token type ids, therefore a lis... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. NLLB does not
make use of token type ids, therefore a list of zeros is returned.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optio... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | Apache-2.0 |
def _build_translation_inputs(
self, raw_inputs, return_tensors: str, src_lang: Optional[str], tgt_lang: Optional[str], **extra_kwargs
):
"""Used by translation pipeline, to prepare inputs for the generate function"""
if src_lang is None or tgt_lang is None:
raise ValueError("Tra... | Used by translation pipeline, to prepare inputs for the generate function | _build_translation_inputs | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | Apache-2.0 |
def tokenize(self, text: "TextInput", **kwargs) -> List[str]:
"""
Converts a string to a list of tokens. If `self.legacy` is set to `False`, a prefix token is added unless the
first token is special.
"""
if self.legacy or len(text) == 0:
return super().tokenize(text, ... |
Converts a string to a list of tokens. If `self.legacy` is set to `False`, a prefix token is added unless the
first token is special.
| tokenize | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | Apache-2.0 |
def _convert_token_to_id(self, token):
"""Converts a token (str) in an id using the vocab."""
spm_id = self.sp_model.PieceToId(token)
# Need to return unknown token if the SP model returned 0
return spm_id + self.fairseq_offset if spm_id else self.unk_token_id | Converts a token (str) in an id using the vocab. | _convert_token_to_id | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | Apache-2.0 |
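The visible return line carries the whole trick: SentencePiece returns piece id 0 when a token is unknown, so the fairseq offset is applied only to nonzero ids. A standalone sketch with placeholder constants (the real offset and unk id depend on the tokenizer):

```python
# Sketch of the fairseq-offset logic in `_convert_token_to_id`.
# FAIRSEQ_OFFSET and UNK_TOKEN_ID are placeholder values for illustration.
FAIRSEQ_OFFSET = 1
UNK_TOKEN_ID = 1

def convert_spm_id(spm_id):
    # SentencePiece uses piece id 0 as its "not in vocab" sentinel,
    # so 0 maps to the unk token instead of 0 + offset.
    return spm_id + FAIRSEQ_OFFSET if spm_id else UNK_TOKEN_ID
```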
def convert_tokens_to_string(self, tokens):
"""Converts a sequence of tokens (strings for sub-words) in a single string."""
# since we manually add the prefix space, we have to remove it when decoding
if tokens[0].startswith(SPIECE_UNDERLINE) and self.add_prefix_space:
tokens[0] = to... | Converts a sequence of tokens (strings for sub-words) in a single string. | convert_tokens_to_string | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | Apache-2.0 |
def set_src_lang_special_tokens(self, src_lang) -> None:
"""Reset the special tokens to the source lang setting.
Prefix=[src_lang_code], suffix = [eos]
"""
self.cur_lang_code = self.convert_tokens_to_ids(src_lang)
self.init_kwargs["src_lang"] = src_lang
if self.cur_lang_... | Reset the special tokens to the source lang setting.
Prefix=[src_lang_code], suffix = [eos]
| set_src_lang_special_tokens | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | Apache-2.0 |
def set_tgt_lang_special_tokens(self, lang: str) -> None:
"""Reset the special tokens to the target lang setting.
Prefix=[eos, tgt_lang_code] and suffix=[eos].
"""
self.cur_lang_code = self.convert_tokens_to_ids(lang)
self.init_kwargs["tgt_lang"] = lang
if self.cur_lang_... | Reset the special tokens to the target lang setting.
Prefix=[eos, tgt_lang_code] and suffix=[eos].
| set_tgt_lang_special_tokens | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py | Apache-2.0 |
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. The special tokens de... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. The special tokens depend on calling set_lang.
A SeamlessM4T sequence has the following format, where `X` represents the sequence:
- `input_ids` (for... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t_fast.py | Apache-2.0 |
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. NLLB does not
make use of token type ids, therefore a lis... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. NLLB does not
make use of token type ids, therefore a list of zeros is returned.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optio... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t_fast.py | Apache-2.0 |
def _build_translation_inputs(
self, raw_inputs, return_tensors: str, src_lang: Optional[str], tgt_lang: Optional[str], **extra_kwargs
):
"""Used by translation pipeline, to prepare inputs for the generate function"""
if src_lang is None or tgt_lang is None:
raise ValueError("Tra... | Used by translation pipeline, to prepare inputs for the generate function | _build_translation_inputs | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t_fast.py | Apache-2.0 |
def set_src_lang_special_tokens(self, src_lang) -> None:
"""Reset the special tokens to the source lang setting.
Prefix=[src_lang_code], suffix = [eos]
"""
self.cur_lang_code = self.convert_tokens_to_ids(src_lang)
if self.cur_lang_code == self.unk_token_id:
logger.wa... | Reset the special tokens to the source lang setting.
Prefix=[src_lang_code], suffix = [eos]
| set_src_lang_special_tokens | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t_fast.py | Apache-2.0 |
def set_tgt_lang_special_tokens(self, lang: str) -> None:
"""Reset the special tokens to the target lang setting.
Prefix=[eos, tgt_lang_code] and suffix=[eos].
"""
self.cur_lang_code = self.convert_tokens_to_ids(lang)
if self.cur_lang_code == self.unk_token_id:
logge... | Reset the special tokens to the target lang setting.
Prefix=[eos, tgt_lang_code] and suffix=[eos].
| set_tgt_lang_special_tokens | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t_fast.py | Apache-2.0 |
def __call__(
self,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None,
text_pair: Optional[Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]]] = None,
text_target: Union[TextInput, PreTokenizedInput, List[TextInput], Lis... |
Args:
text (`str`, `List[str]`, `List[List[str]]`, *optional*):
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set
... | __call__ | python | huggingface/transformers | src/transformers/models/seamless_m4t/tokenization_seamless_m4t_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t/tokenization_seamless_m4t_fast.py | Apache-2.0 |
def load_model(save_dir, model_type, repo_id):
"""
Meta SeamlessM4Tv2 is made of 8 main components:
- speech_encoder (#1) and speech_encoder_frontend (#2)
- t2u_model (#3)
- text_encoder (#4) and text_encoder_frontend (#5)
- text_decoder (#6) [and text_decoder_frontend (#5) = equal to text_enco... |
Meta SeamlessM4Tv2 is made of 8 main components:
- speech_encoder (#1) and speech_encoder_frontend (#2)
- t2u_model (#3)
- text_encoder (#4) and text_encoder_frontend (#5)
- text_decoder (#6) [and text_decoder_frontend (#5) = equal to text_encoder_frontend]
- final_proj (#7)
- vocoder (#8)... | load_model | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/convert_fairseq2_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/convert_fairseq2_to_hf.py | Apache-2.0 |
def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
x: torch.Ten... |
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
x: torch.Tensor
Returns: torch.Tensor
| create_position_ids_from_input_ids | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
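The behaviour described (positions start at `padding_idx + 1`, padding tokens keep `padding_idx`) can be sketched without torch. A pure-Python illustration, not the library's vectorized implementation:

```python
# Pure-Python sketch of fairseq-style `make_positions`: non-padding tokens get
# positions padding_idx+1, padding_idx+2, ...; padding tokens keep padding_idx.
def create_position_ids(input_ids, padding_idx):
    positions = []
    for row in input_ids:
        pos, counter = [], 0
        for tok in row:
            if tok == padding_idx:
                pos.append(padding_idx)
            else:
                counter += 1
                pos.append(padding_idx + counter)
        positions.append(pos)
    return positions

# padding_idx = 1: pads stay at 1, real tokens count up from 2
print(create_position_ids([[5, 6, 1, 1]], padding_idx=1))  # [[2, 3, 1, 1]]
```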
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
"""
Shift input ids one token to the right.
"""
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
shifted_input_ids[:, 0] = decoder_start_t... |
Shift input ids one token to the right.
| shift_tokens_right | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
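The visible code already shows the pattern: prepend the decoder start token, drop the last token, and map any `-100` label to the pad id. A torch-free sketch of the same logic:

```python
# Pure-Python sketch of the "shift input ids one token to the right" logic:
# prepend decoder_start_token_id, drop the last token, map -100 labels to pad.
def shift_right(input_ids, pad_token_id, decoder_start_token_id):
    shifted = []
    for row in input_ids:
        new_row = [decoder_start_token_id] + row[:-1]
        shifted.append([pad_token_id if t == -100 else t for t in new_row])
    return shifted

print(shift_right([[5, 6, 7]], pad_token_id=0, decoder_start_token_id=2))  # [[2, 5, 6]]
```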
def _compute_new_attention_mask(hidden_states: torch.Tensor, seq_lens: torch.Tensor):
"""
Computes an attention mask of the form `(batch, seq_len)` with an attention for each element in the batch that
stops at the corresponding element in `seq_lens`.
Args:
hidden_states (`torch.FloatTensor` of ... |
Computes an attention mask of the form `(batch, seq_len)` with an attention for each element in the batch that
stops at the corresponding element in `seq_lens`.
Args:
hidden_states (`torch.FloatTensor` of shape `(batch, seq_len, *)`):
The sequences to mask, where `*` is any number of s... | _compute_new_attention_mask | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
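The mask construction reduces to "position j of row i is attended iff j < seq_lens[i]". A minimal list-based sketch:

```python
# Sketch of a length-based attention mask: row i attends the first seq_lens[i]
# positions and masks out the rest.
def length_to_mask(seq_len, seq_lens):
    return [[1 if j < n else 0 for j in range(seq_len)] for n in seq_lens]

print(length_to_mask(5, [3, 5]))
# [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
```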
def format_speech_generation_kwargs(kwargs):
"""
Format kwargs for SeamlessM4Tv2 models that generate speech, attribute kwargs to either the text generation or the
speech generation models.
Args:
kwargs (`dict`):
Keyword arguments are of two types:
- Without a pre... |
Format kwargs for SeamlessM4Tv2 models that generate speech, attribute kwargs to either the text generation or the
speech generation models.
Args:
kwargs (`dict`):
Keyword arguments are of two types:
- Without a prefix, they will be entered as `**kwargs` for the `gen... | format_speech_generation_kwargs | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def _apply_chunk_attention(self, attention_mask, hidden_states):
"""
Creates a chunk attention mask. It creates a mask to prevent attention across chunks, ensuring that each
position attends only to positions within its own chunk. If a left chunk overlap is specified
(`speech_encoder_chu... |
Creates a chunk attention mask. It creates a mask to prevent attention across chunks, ensuring that each
position attends only to positions within its own chunk. If a left chunk overlap is specified
(`speech_encoder_chunk_size` in the configuration), the attention mask is adjusted accordingly t... | _apply_chunk_attention | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
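The chunk rule described — each position attends only within its own fixed-size chunk, optionally extended by some chunks to the left — can be sketched as follows. `left_chunks` is an illustrative parameter name standing in for the left-overlap setting:

```python
# Sketch of chunked attention: position i may attend position j only when j
# falls in i's fixed-size chunk, extended `left_chunks` chunks to the left.
def chunk_attention_mask(seq_len, chunk_size, left_chunks=0):
    mask = [[0] * seq_len for _ in range(seq_len)]
    for i in range(seq_len):
        chunk = i // chunk_size
        start = max(0, (chunk - left_chunks) * chunk_size)
        end = min(seq_len, (chunk + 1) * chunk_size)
        for j in range(start, end):
            mask[i][j] = 1
    return mask

# chunk_size=2, no overlap: tokens only see their own chunk
print(chunk_attention_mask(4, 2))
# [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
```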
def get_embedding(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None):
"""
Build sinusoidal embeddings.
This matches the implementation in tensor2tensor, but differs slightly from the description in Section 3.5 of
"Attention Is All You Need".
"""
... |
Build sinusoidal embeddings.
This matches the implementation in tensor2tensor, but differs slightly from the description in Section 3.5 of
"Attention Is All You Need".
| get_embedding | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
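A torch-free sketch of the tensor2tensor-style table: the first half of each embedding vector is sin, the second half cos, with geometrically spaced frequencies, odd dimensions zero-padded, and the padding row zeroed. This mirrors the behaviour described, not the library's exact code:

```python
import math

# Sketch of tensor2tensor-style sinusoidal embeddings: first half sin, second
# half cos, geometrically spaced frequencies, padding row zeroed out.
def sinusoidal_table(num_embeddings, embedding_dim, padding_idx=None):
    half_dim = embedding_dim // 2
    scale = math.log(10000) / (half_dim - 1)
    table = []
    for pos in range(num_embeddings):
        freqs = [pos * math.exp(-scale * i) for i in range(half_dim)]
        row = [math.sin(f) for f in freqs] + [math.cos(f) for f in freqs]
        if embedding_dim % 2 == 1:
            row.append(0.0)  # zero-pad when the dimension is odd
        table.append(row)
    if padding_idx is not None:
        table[padding_idx] = [0.0] * len(table[padding_idx])
    return table

emb = sinusoidal_table(4, 6, padding_idx=0)  # row 0 is all zeros (padding)
```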
def create_position_ids_from_inputs_embeds(self, inputs_embeds, past_key_values_length):
"""
We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.
Args:
inputs_embeds: torch.Tensor
Returns: torch.Tensor
"""
... |
We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.
Args:
inputs_embeds: torch.Tensor
Returns: torch.Tensor
| create_position_ids_from_inputs_embeds | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
output_attentions: bool = False,
) -> torch.Tensor:
"""
Args:
hidden_states (`torch.FloatTensor`):
input to the layer of shape `(batch, seq_len, embed_dim)`
... |
Args:
hidden_states (`torch.FloatTensor`):
input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`):
attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very
l... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.Tensor] = None,
encoder_attention_mask: Optional[torch.Tensor] = None,
past_key_value: Optional[Tuple[torch.Tensor]] = None,
output... |
Args:
hidden_states (`torch.FloatTensor`):
input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`):
attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very
l... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
padding_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = False,
) -> torch.Tensor:
"""
Args:
hidden_states (`torch.FloatTensor`):... |
Args:
hidden_states (`torch.FloatTensor`):
input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`):
attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very
l... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def _indices_to_subwords(self, input_ids):
"""
Returns the corresponding text string for each input id.
"""
if not hasattr(self.generation_config, "id_to_text"):
raise ValueError(
"""This model generation config doesn't have a `id_to_text` key which maps
... |
Returns the corresponding text string for each input id.
| _indices_to_subwords | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def _get_char_input_ids(self, input_ids, subwords_batch, char_count_per_id, pad_token_id=0, unk_token_id=1):
"""
Returns the corresponding character input id for each character of `subwords_batch`.
Args:
input_ids (`torch.Tensor` of shape `(batch_size, sequence_length)`):
... |
Returns the corresponding character input id for each character of `subwords_batch`.
Args:
input_ids (`torch.Tensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
subwords_batch (`List[List[str]]` of shape `(batch... | _get_char_input_ids | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def _hard_upsample(self, hidden_states, durations):
"""
Repeats the time dimension of each sample in the batch based on the corresponding duration.
Args:
hidden_states (`torch.Tensor` of shape `(batch_size, sequence_length, *)`, *optional*):
The sequence to repeat, w... |
Repeats the time dimension of each sample in the batch based on the corresponding duration.
Args:
hidden_states (`torch.Tensor` of shape `(batch_size, sequence_length, *)`, *optional*):
The sequence to repeat, where `*` is any number of sequence-specific dimensions includin... | _hard_upsample | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
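Hard upsampling is the list analogue of `torch.repeat_interleave` along the time axis; a minimal sketch:

```python
# Sketch of hard upsampling: each timestep is repeated durations[t] times,
# so a duration of 0 drops the frame entirely.
def hard_upsample(frames, durations):
    return [f for f, d in zip(frames, durations) for _ in range(d)]

print(hard_upsample(["a", "b", "c"], [2, 0, 3]))  # ['a', 'a', 'c', 'c', 'c']
```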
def __init__(
self,
config: SeamlessM4Tv2Config,
embed_tokens: Optional[nn.Embedding] = None,
is_t2u_encoder: bool = False,
):
r"""
embed_tokens (`nn.Embedding`, *optional*):
Input embedding
is_t2u_encoder (`bool`, *optional*, defaults to `False`):... |
embed_tokens (`nn.Embedding`, *optional*):
Input embedding
is_t2u_encoder (`bool`, *optional*, defaults to `False`):
indicates if it belongs to the text-to-units model, in which case it won't have input embeddings
| __init__ | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: O... |
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTra... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def forward(
self,
char_input_ids: Optional[torch.LongTensor] = None,
char_count_per_id: Optional[torch.LongTensor] = None,
encoder_hidden_states: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
... |
Args:
char_input_ids (`torch.LongTensor` of shape `(batch_size, char_sequence_length)`):
Character indices. The correspondence between characters and indices can be found in `char_to_id`, a
dictionary in the generation configuration.
char_count_per_id (`t... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def __init__(
self,
config: SeamlessM4Tv2Config,
embed_tokens_decoder: Optional[nn.Embedding] = None,
):
r"""
embed_tokens_decoder (`nn.Embedding`, *optional*):
input embedding of the decoder.
"""
super().__init__(config)
self.encoder = Se... |
embed_tokens_decoder (`nn.Embedding`, *optional*):
input embedding of the decoder.
| __init__ | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def __init__(
self,
config: SeamlessM4Tv2Config,
embed_tokens_decoder: Optional[nn.Embedding] = None,
):
r"""
embed_tokens_decoder (`nn.Embedding`, *optional*):
input embedding of the decoder.
"""
# update config - used principally for bos_token_i... |
embed_tokens_decoder (`nn.Embedding`, *optional*):
input embedding of the decoder.
| __init__ | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
char_input_ids: Optional[torch.LongTensor] = None,
char_count_per_id: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor... |
char_input_ids (`torch.LongTensor` of shape `(batch_size, char_sequence_length)`):
Character indices. The correspondence between characters and indices can be found in `char_to_id`, a
dictionary in the generation configuration.
char_count_per_id (`torch.LongTensor` of shape `(ba... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def forward(self, input_embeds: torch.FloatTensor) -> torch.FloatTensor:
r"""
Converts a log-mel spectrogram into a speech waveform. Passing a batch of log-mel spectrograms returns a batch
of speech waveforms. Passing a single, un-batched log-mel spectrogram returns a single, un-batched speech
... |
Converts a log-mel spectrogram into a speech waveform. Passing a batch of log-mel spectrograms returns a batch
of speech waveforms. Passing a single, un-batched log-mel spectrogram returns a single, un-batched speech
waveform.
Args:
spectrogram (`torch.FloatTensor`):
... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def _get_dur_output_lengths(self, input_ids, dur_out):
"""
Computes the output length after the duration layer.
"""
unit_lengths = (input_ids != self.pad_token_id).sum(1)
# take care of edge cases where there is no padding or everything is padding
unit_lengths = torch.clamp(unit_lengt... |
Computes the output length after the duration layer.
| _get_dur_output_lengths | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
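The truncated `_get_dur_output_lengths` snippet counts non-padding tokens per sequence and clamps the result to a valid range. A hedged pure-Python sketch of that computation (the real code uses tensor ops; the `max_len` parameter and exact clamp bounds are assumptions):

```python
def unit_lengths(input_ids, pad_token_id, max_len):
    """Count non-padding tokens per sequence, clamped to [1, max_len]."""
    lengths = []
    for seq in input_ids:
        n = sum(1 for tok in seq if tok != pad_token_id)
        # guard the edge cases noted in the snippet: all-padding or no padding
        lengths.append(min(max(n, 1), max_len))
    return lengths

print(unit_lengths([[5, 6, 0, 0], [0, 0, 0, 0]], pad_token_id=0, max_len=4))  # [2, 1]
```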
def _get_output_hifigan_lengths(self, input_lengths: Union[torch.LongTensor, int]):
"""
Computes the output length of the hifigan convolutional layers
"""
def _conv_out_length(input_length, kernel_size, stride, pad, dilation=1):
# 1D convolutional layer output length formula... |
Computes the output length of the hifigan convolutional layers
| _get_output_hifigan_lengths | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
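The truncated `_conv_out_length` helper applies the standard 1D-convolution output-length formula. A self-contained sketch of that formula, plus the transposed-convolution counterpart that HiFi-GAN-style upsampling layers would use (the transposed variant is an assumption, not shown in the snippet):

```python
import math

def conv1d_out_length(length, kernel_size, stride, pad, dilation=1):
    """Standard 1D convolution output-length formula, as in the snippet."""
    return math.floor((length + 2 * pad - dilation * (kernel_size - 1) - 1) / stride) + 1

def conv_transpose1d_out_length(length, kernel_size, stride, pad):
    """Transposed 1D convolution grows the length instead of shrinking it."""
    return (length - 1) * stride - 2 * pad + kernel_size

print(conv1d_out_length(100, kernel_size=7, stride=1, pad=3))            # 100
print(conv_transpose1d_out_length(100, kernel_size=16, stride=8, pad=4))  # 800
```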
def forward(
self, input_ids: torch.LongTensor, speaker_id: torch.Tensor, lang_id: torch.Tensor
) -> Tuple[torch.Tensor]:
"""
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
... |
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using [`SeamlessM4Tv2TextToUnitForConditionalGeneration`]. [What are input
IDs?](../glossary#in... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
encoder_outputs: Optional[Tuple[Tuple[torch.Flo... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
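The `labels` docstring above notes that indices set to `-100` are ignored by the loss, which matches PyTorch's `CrossEntropyLoss(ignore_index=-100)`. A small pure-Python sketch of that masking behaviour (the log-probabilities are made up for illustration):

```python
def masked_cross_entropy(log_probs, labels, ignore_index=-100):
    """Average negative log-likelihood over positions whose label != ignore_index,
    mirroring torch.nn.CrossEntropyLoss(ignore_index=-100)."""
    total, count = 0.0, 0
    for lp, lab in zip(log_probs, labels):
        if lab == ignore_index:
            continue  # masked position: contributes nothing to loss or count
        total += -lp[lab]
        count += 1
    return total / count

# Two real tokens and one masked one; only the first two enter the loss:
log_probs = [[-0.1, -2.5], [-2.5, -0.1], [-0.7, -0.7]]
print(masked_cross_entropy(log_probs, [0, 1, -100]))  # 0.1
```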
def generate(
self,
input_ids=None,
tgt_lang=None,
generation_config=None,
logits_processor=None,
stopping_criteria=None,
prefix_allowed_tokens_fn=None,
synced_gpus=False,
**kwargs,
):
"""
Generates sequences of token ids.
... |
Generates sequences of token ids.
<Tip warning={true}>
Most generation-controlling parameters are set in `generation_config` which, if not passed, will be set to the
model's default generation configuration. You can override any `generation_config` by passing the corresponding
... | generate | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
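The `generate` docstring explains that parameters default to the model's `generation_config` and that any of them can be overridden per call via matching kwargs. A minimal sketch of that override semantics (illustrative, not the actual `transformers` resolution code):

```python
# Defaults that would normally live in the model's generation config:
default_generation_config = {"num_beams": 1, "max_new_tokens": 256, "do_sample": False}

def resolve_generation_config(defaults, **overrides):
    """Per-call kwargs win over generation-config defaults."""
    cfg = dict(defaults)
    cfg.update(overrides)  # e.g. generate(..., num_beams=4) overrides the default
    return cfg

print(resolve_generation_config(default_generation_config, num_beams=4))
# {'num_beams': 4, 'max_new_tokens': 256, 'do_sample': False}
```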
def forward(
self,
input_features: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
encoder_outputs: Optional[Tuple[Tuple[torc... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def generate(
self,
input_features=None,
tgt_lang=None,
generation_config=None,
logits_processor=None,
stopping_criteria=None,
prefix_allowed_tokens_fn=None,
synced_gpus=False,
**kwargs,
):
"""
Generates sequences of token ids.
... |
Generates sequences of token ids.
<Tip warning={true}>
Most generation-controlling parameters are set in `generation_config` which, if not passed, will be set to the
model's default generation configuration. You can override any `generation_config` by passing the corresponding
... | generate | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
encoder_outputs: Optional[Tuple[Tuple[torch.Flo... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def generate(
self,
input_ids: Optional[torch.Tensor] = None,
return_intermediate_token_ids: Optional[bool] = None,
tgt_lang: Optional[str] = None,
speaker_id: Optional[int] = 0,
**kwargs,
) -> Union[torch.Tensor, SeamlessM4Tv2GenerationOutput]:
"""
Ge... |
Generates translated audio waveforms.
<Tip>
This method successively calls the `.generate` function of two different sub-models. You can specify keyword
arguments at two different levels: general arguments that will be passed to both models, or prefixed arguments
that will be ... | generate | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
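The `generate` docstring above describes two levels of keyword arguments: general ones passed to both sub-models and prefixed ones routed to a single sub-model. A hedged sketch of that routing (the `text_`/`speech_` prefix names are assumptions inferred from the docstring, not copied from the snippet):

```python
def split_generation_kwargs(kwargs, prefixes=("text_", "speech_")):
    """Route kwargs to sub-models: prefixed keys go only to the matching model
    (with the prefix stripped), unprefixed keys go to both."""
    per_model = {p: {} for p in prefixes}
    shared = {}
    for key, value in kwargs.items():
        for p in prefixes:
            if key.startswith(p):
                per_model[p][key[len(p):]] = value
                break
        else:
            shared[key] = value
    # each sub-model sees the shared kwargs overridden by its own prefixed ones
    return {p: {**shared, **specific} for p, specific in per_model.items()}

out = split_generation_kwargs({"max_new_tokens": 64, "text_num_beams": 4})
print(out["text_"])    # {'max_new_tokens': 64, 'num_beams': 4}
print(out["speech_"])  # {'max_new_tokens': 64}
```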