def th_accuracy(pad_outputs: torch.Tensor, pad_targets: torch.Tensor,
                ignore_label: int) -> torch.Tensor:
    """Calculate accuracy.

    Args:
        pad_outputs (Tensor): Prediction tensors (B * Lmax, D).
        pad_targets (LongTensor): Target label tensors (B, Lmax).
        ignore_label (int): Ignore label id.

    Returns:
        torch.Tensor: Accuracy value (0.0 - 1.0).
    """
    pad_pred = pad_outputs.view(pad_targets.size(0), pad_targets.size(1),
                                pad_outputs.size(1)).argmax(2)
    mask = pad_targets != ignore_label
    numerator = torch.sum(
        pad_pred.masked_select(mask) == pad_targets.masked_select(mask))
    denominator = torch.sum(mask)
    return (numerator / denominator).detach()
th_accuracy
python
THUDM/GLM-4-Voice
cosyvoice/utils/common.py
https://github.com/THUDM/GLM-4-Voice/blob/master/cosyvoice/utils/common.py
Apache-2.0
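The masked-accuracy logic of `th_accuracy` can be sketched in NumPy (a hypothetical helper, `th_accuracy_np`, standing in for the torch version; shapes follow the docstring above):

```python
import numpy as np

def th_accuracy_np(pad_outputs: np.ndarray, pad_targets: np.ndarray,
                   ignore_label: int) -> float:
    """NumPy sketch of th_accuracy: accuracy over non-ignored positions."""
    batch, lmax = pad_targets.shape
    # (B * Lmax, D) -> (B, Lmax, D) -> argmax over the class dimension
    pred = pad_outputs.reshape(batch, lmax, -1).argmax(axis=2)
    mask = pad_targets != ignore_label
    return float((pred[mask] == pad_targets[mask]).sum() / mask.sum())

# Two correct predictions plus one padded position that is ignored
outputs = np.array([[0.1, 0.9], [0.9, 0.1], [0.5, 0.5]])  # (B*Lmax, D) = (3, 2)
targets = np.array([[1, 0, -1]])                          # (B, Lmax) = (1, 3)
acc = th_accuracy_np(outputs, targets, ignore_label=-1)
```

Because the third position carries the ignore label, only the two real positions contribute to the ratio.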
def subsequent_mask(
        size: int,
        device: torch.device = torch.device("cpu"),
) -> torch.Tensor:
    """Create mask for subsequent steps (size, size).

    This mask is used only in the decoder, which works in an auto-regressive
    mode: the current step may only attend to the steps to its left. In the
    encoder, full attention is used when streaming is not necessary and the
    sequence is not long; in that case, no attention mask is needed. When
    streaming is needed, chunk-based attention is used in the encoder. See
    subsequent_chunk_mask for the chunk-based attention mask.

    Args:
        size (int): size of mask
        device (torch.device): "cpu", "cuda" or torch.Tensor.device

    Returns:
        torch.Tensor: mask

    Examples:
        >>> subsequent_mask(3)
        [[1, 0, 0],
         [1, 1, 0],
         [1, 1, 1]]
    """
    arange = torch.arange(size, device=device)
    mask = arange.expand(size, size)
    arange = arange.unsqueeze(-1)
    mask = mask <= arange
    return mask
subsequent_mask
python
THUDM/GLM-4-Voice
cosyvoice/utils/mask.py
https://github.com/THUDM/GLM-4-Voice/blob/master/cosyvoice/utils/mask.py
Apache-2.0
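The same lower-triangular mask can be produced in one call with NumPy's `np.tril` (a sketch for comparison, not the repo's code):

```python
import numpy as np

def subsequent_mask_np(size: int) -> np.ndarray:
    # Row i may attend to columns 0..i: its left context and itself
    return np.tril(np.ones((size, size), dtype=bool))

mask = subsequent_mask_np(3)
# matches the docstring example:
# [[1, 0, 0],
#  [1, 1, 0],
#  [1, 1, 1]]
```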
def subsequent_chunk_mask(
        size: int,
        chunk_size: int,
        num_left_chunks: int = -1,
        device: torch.device = torch.device("cpu"),
) -> torch.Tensor:
    """Create mask for subsequent steps (size, size) with chunk size;
    this is for the streaming encoder.

    Args:
        size (int): size of mask
        chunk_size (int): size of chunk
        num_left_chunks (int): number of left chunks
            <0: use full chunk
            >=0: use num_left_chunks
        device (torch.device): "cpu", "cuda" or torch.Tensor.device

    Returns:
        torch.Tensor: mask

    Examples:
        >>> subsequent_chunk_mask(4, 2)
        [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 1, 1],
         [1, 1, 1, 1]]
    """
    ret = torch.zeros(size, size, device=device, dtype=torch.bool)
    for i in range(size):
        if num_left_chunks < 0:
            start = 0
        else:
            start = max((i // chunk_size - num_left_chunks) * chunk_size, 0)
        ending = min((i // chunk_size + 1) * chunk_size, size)
        ret[i, start:ending] = True
    return ret
subsequent_chunk_mask
python
THUDM/GLM-4-Voice
cosyvoice/utils/mask.py
https://github.com/THUDM/GLM-4-Voice/blob/master/cosyvoice/utils/mask.py
Apache-2.0
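The chunk-mask loop translates directly to NumPy; the sketch below mirrors the same index arithmetic and reproduces the docstring example, plus a case with a limited left context:

```python
import numpy as np

def subsequent_chunk_mask_np(size: int, chunk_size: int,
                             num_left_chunks: int = -1) -> np.ndarray:
    """NumPy sketch of subsequent_chunk_mask: same loop, boolean array."""
    ret = np.zeros((size, size), dtype=bool)
    for i in range(size):
        if num_left_chunks < 0:
            start = 0
        else:
            start = max((i // chunk_size - num_left_chunks) * chunk_size, 0)
        ending = min((i // chunk_size + 1) * chunk_size, size)
        ret[i, start:ending] = True
    return ret

m = subsequent_chunk_mask_np(4, 2)          # full left context
m_left = subsequent_chunk_mask_np(6, 2, 1)  # at most 1 left chunk visible
```

With `num_left_chunks=1`, a frame in chunk 2 (rows 4-5) can no longer see chunk 0: its visible window starts at column 2.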
def add_optional_chunk_mask(xs: torch.Tensor,
                            masks: torch.Tensor,
                            use_dynamic_chunk: bool,
                            use_dynamic_left_chunk: bool,
                            decoding_chunk_size: int,
                            static_chunk_size: int,
                            num_decoding_left_chunks: int,
                            enable_full_context: bool = True):
    """Apply optional mask for encoder.

    Args:
        xs (torch.Tensor): padded input, (B, L, D), L for max length
        masks (torch.Tensor): mask for xs, (B, 1, L)
        use_dynamic_chunk (bool): whether to use dynamic chunk or not
        use_dynamic_left_chunk (bool): whether to use dynamic left chunk for
            training.
        decoding_chunk_size (int): decoding chunk size for dynamic chunk
            0: default for training, use random dynamic chunk.
            <0: for decoding, use full chunk.
            >0: for decoding, use fixed chunk size as set.
        static_chunk_size (int): chunk size for static chunk
            training/decoding if it's greater than 0; if use_dynamic_chunk
            is true, this parameter will be ignored
        num_decoding_left_chunks (int): number of left chunks, this is for
            decoding, the chunk size is decoding_chunk_size.
            >=0: use num_decoding_left_chunks
            <0: use all left chunks
        enable_full_context (bool):
            True: chunk size is either [1, 25] or full context (max_len)
            False: chunk size ~ U[1, 25]

    Returns:
        torch.Tensor: chunk mask of the input xs.
    """
    # Whether to use chunk mask or not
    if use_dynamic_chunk:
        max_len = xs.size(1)
        if decoding_chunk_size < 0:
            chunk_size = max_len
            num_left_chunks = -1
        elif decoding_chunk_size > 0:
            chunk_size = decoding_chunk_size
            num_left_chunks = num_decoding_left_chunks
        else:
            # chunk size is either [1, 25] or full context (max_len).
            # Since we use 4x subsampling and allow up to 1 s (100 frames)
            # delay, the maximum frame count is 100 / 4 = 25.
            chunk_size = torch.randint(1, max_len, (1, )).item()
            num_left_chunks = -1
            if chunk_size > max_len // 2 and enable_full_context:
                chunk_size = max_len
            else:
                chunk_size = chunk_size % 25 + 1
                if use_dynamic_left_chunk:
                    max_left_chunks = (max_len - 1) // chunk_size
                    num_left_chunks = torch.randint(0, max_left_chunks,
                                                    (1, )).item()
        chunk_masks = subsequent_chunk_mask(xs.size(1), chunk_size,
                                            num_left_chunks,
                                            xs.device)  # (L, L)
        chunk_masks = chunk_masks.unsqueeze(0)  # (1, L, L)
        chunk_masks = masks & chunk_masks  # (B, L, L)
    elif static_chunk_size > 0:
        num_left_chunks = num_decoding_left_chunks
        chunk_masks = subsequent_chunk_mask(xs.size(1), static_chunk_size,
                                            num_left_chunks,
                                            xs.device)  # (L, L)
        chunk_masks = chunk_masks.unsqueeze(0)  # (1, L, L)
        chunk_masks = masks & chunk_masks  # (B, L, L)
    else:
        chunk_masks = masks
    return chunk_masks
add_optional_chunk_mask
python
THUDM/GLM-4-Voice
cosyvoice/utils/mask.py
https://github.com/THUDM/GLM-4-Voice/blob/master/cosyvoice/utils/mask.py
Apache-2.0
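The core of the static-chunk branch is just a broadcasted AND between the (B, 1, L) pad mask and the (L, L) chunk mask. A NumPy sketch of that combination (helper names here are illustrative, not the repo's):

```python
import numpy as np

def chunk_mask_np(size: int, chunk_size: int) -> np.ndarray:
    # Full-left-context chunk mask, as in subsequent_chunk_mask
    ret = np.zeros((size, size), dtype=bool)
    for i in range(size):
        ending = min((i // chunk_size + 1) * chunk_size, size)
        ret[i, :ending] = True
    return ret

L = 4
pad_masks = np.array([[[1, 1, 1, 0]]], dtype=bool)  # (B, 1, L): last frame is padding
chunk = chunk_mask_np(L, 2)                         # (L, L)
chunk_masks = pad_masks & chunk[None]               # (B, L, L) via broadcasting
```

Broadcasting expands the pad mask over the query dimension, so padded key positions are masked out for every query while the chunk structure is preserved.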
def make_pad_mask(lengths: torch.Tensor, max_len: int = 0) -> torch.Tensor:
    """Make mask tensor containing indices of padded part.

    See description of make_non_pad_mask.

    Args:
        lengths (torch.Tensor): Batch of lengths (B,).

    Returns:
        torch.Tensor: Mask tensor containing indices of padded part.

    Examples:
        >>> lengths = [5, 3, 2]
        >>> make_pad_mask(lengths)
        masks = [[0, 0, 0, 0, 0],
                 [0, 0, 0, 1, 1],
                 [0, 0, 1, 1, 1]]
    """
    batch_size = lengths.size(0)
    max_len = max_len if max_len > 0 else lengths.max().item()
    seq_range = torch.arange(0,
                             max_len,
                             dtype=torch.int64,
                             device=lengths.device)
    seq_range_expand = seq_range.unsqueeze(0).expand(batch_size, max_len)
    seq_length_expand = lengths.unsqueeze(-1)
    mask = seq_range_expand >= seq_length_expand
    return mask
make_pad_mask
python
THUDM/GLM-4-Voice
cosyvoice/utils/mask.py
https://github.com/THUDM/GLM-4-Voice/blob/master/cosyvoice/utils/mask.py
Apache-2.0
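The same range-vs-length comparison works with NumPy broadcasting; a sketch that reproduces the docstring example:

```python
import numpy as np

def make_pad_mask_np(lengths: np.ndarray, max_len: int = 0) -> np.ndarray:
    """NumPy sketch of make_pad_mask: True marks padded positions."""
    max_len = max_len if max_len > 0 else int(lengths.max())
    seq_range = np.arange(max_len)                 # (T,)
    return seq_range[None, :] >= lengths[:, None]  # (B, T) by broadcasting

mask = make_pad_mask_np(np.array([5, 3, 2]))
# [[0, 0, 0, 0, 0],
#  [0, 0, 0, 1, 1],
#  [0, 0, 1, 1, 1]]
```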
def __init__(self, optimizer, *, max_steps, decay_rate=0.5, min_lr=0.0,
             last_epoch=-1, **kwargs):
    """From NeMo: Implementation of the Noam Hold Annealing policy from the
    SqueezeFormer paper.

    Unlike NoamAnnealing, the peak learning rate can be explicitly set for
    this scheduler. The schedule first performs linear warmup, then holds
    the peak LR, then decays with some schedule for the remainder of the
    steps. Therefore the min-lr is still dependent on the hyperparameters
    selected.

    Its schedule is determined by three factors:

    Warmup Steps: Initial stage, where linear warmup occurs until the peak
        LR is reached. Unlike NoamAnnealing, the peak LR is explicitly
        stated here instead of a scaling factor.

    Hold Steps: Intermediate stage, where the peak LR is maintained for
        some number of steps. In this region, the high peak LR allows the
        model to converge faster if training is stable. However, the high
        LR may also cause instability during training. Should usually be a
        significant fraction of training steps (around 30-40% of the
        entire training steps).

    Decay Steps: Final stage, where the LR rapidly decays with some scaling
        rate (set by the decay rate). To attain Noam decay, use 0.5; for
        the SqueezeFormer-recommended decay, use 1.0. The fast decay after
        the prolonged high LR during the hold phase allows for rapid
        convergence.

    References:
        - [Squeezeformer: An Efficient Transformer for Automatic Speech
          Recognition](https://arxiv.org/abs/2206.00888)

    Args:
        optimizer: PyTorch-compatible Optimizer object.
        warmup_steps: Number of training steps in the warmup stage
        warmup_ratio: Ratio of warmup steps to total steps
        hold_steps: Number of training steps to hold the learning rate
            after warmup
        hold_ratio: Ratio of hold steps to total steps
        max_steps: Total number of steps while training, or `None` for
            infinite training
        decay_rate: Float value describing the polynomial decay after the
            hold period. The default value of 0.5 corresponds to Noam decay.
        min_lr: Minimum learning rate.
    """
    self.decay_rate = decay_rate
    super().__init__(optimizer=optimizer,
                     max_steps=max_steps,
                     last_epoch=last_epoch,
                     min_lr=min_lr,
                     **kwargs)
__init__
python
THUDM/GLM-4-Voice
cosyvoice/utils/scheduler.py
https://github.com/THUDM/GLM-4-Voice/blob/master/cosyvoice/utils/scheduler.py
Apache-2.0
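The three-phase shape the docstring describes (linear warmup, hold at peak, polynomial decay) can be illustrated with a toy function. This is an illustrative sketch, not NeMo's exact formula: the name `noam_hold_lr` and the choice to anchor the decay at the end of the hold phase are assumptions made here for clarity.

```python
def noam_hold_lr(step: int, peak_lr: float, warmup_steps: int,
                 hold_steps: int, decay_rate: float = 0.5,
                 min_lr: float = 0.0) -> float:
    # Illustrative warmup -> hold -> polynomial-decay shape;
    # NOT NeMo's exact implementation.
    if step <= warmup_steps:
        return peak_lr * step / max(1, warmup_steps)  # linear warmup
    if step <= warmup_steps + hold_steps:
        return peak_lr                                # hold at peak
    decayed = step - (warmup_steps + hold_steps)
    # decay_rate=0.5 gives a Noam-style 1/sqrt decay
    return max(min_lr, peak_lr * (1.0 + decayed) ** (-decay_rate))

halfway = noam_hold_lr(50, 1e-3, warmup_steps=100, hold_steps=200)   # half of peak
held = noam_hold_lr(150, 1e-3, warmup_steps=100, hold_steps=200)     # peak held
```

Plotting this function over `step` makes the trapezoid-then-decay profile of the schedule easy to see.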
def _median_filter(inputs: torch.Tensor, filter_width: int) -> torch.Tensor:
    """
    Applies a median filter of width `filter_width` along the last dimension
    of the input.

    The `inputs` tensor is assumed to be 3- or 4-dimensional.
    """
    if filter_width <= 0 or filter_width % 2 != 1:
        raise ValueError("`filter_width` should be an odd number")

    pad_width = filter_width // 2
    if inputs.shape[-1] <= pad_width:
        return inputs

    # Pad the left and right edges.
    inputs = nn.functional.pad(inputs, (pad_width, pad_width, 0, 0),
                               mode="reflect")

    # sort() is faster than torch.median
    # (https://github.com/pytorch/pytorch/issues/51450)
    result = inputs.unfold(-1, filter_width, 1).sort()[0][..., pad_width]
    return result
_median_filter
python
THUDM/GLM-4-Voice
speech_tokenizer/generation_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/generation_whisper.py
Apache-2.0
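The pad-then-unfold-then-sort trick translates to NumPy with `sliding_window_view`; a 1-D sketch of the same logic:

```python
import numpy as np

def median_filter_np(x: np.ndarray, filter_width: int) -> np.ndarray:
    """NumPy sketch of _median_filter for a 1-D signal (reflect padding)."""
    if filter_width <= 0 or filter_width % 2 != 1:
        raise ValueError("`filter_width` should be an odd number")
    pad = filter_width // 2
    if x.shape[-1] <= pad:
        return x
    x = np.pad(x, (pad, pad), mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(x, filter_width)
    # The middle element of each sorted window is its median.
    return np.sort(windows, axis=-1)[..., pad]

out = median_filter_np(np.array([1.0, 1.0, 9.0, 1.0, 1.0]), 3)
# the isolated spike (9) is replaced by its neighbours' median
```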
def _dynamic_time_warping(matrix: np.ndarray):
    """
    Measures similarity between two temporal sequences: the input audio and
    the output tokens. Used to generate token-level timestamps.
    """
    output_length, input_length = matrix.shape
    cost = np.ones((output_length + 1, input_length + 1),
                   dtype=np.float32) * np.inf
    trace = -np.ones((output_length + 1, input_length + 1), dtype=np.float32)

    cost[0, 0] = 0
    for j in range(1, input_length + 1):
        for i in range(1, output_length + 1):
            c0 = cost[i - 1, j - 1]
            c1 = cost[i - 1, j]
            c2 = cost[i, j - 1]

            if c0 < c1 and c0 < c2:
                c, t = c0, 0
            elif c1 < c0 and c1 < c2:
                c, t = c1, 1
            else:
                c, t = c2, 2

            cost[i, j] = matrix[i - 1, j - 1] + c
            trace[i, j] = t

    # backtrace
    i = trace.shape[0] - 1
    j = trace.shape[1] - 1
    trace[0, :] = 2
    trace[:, 0] = 1

    text_indices = []
    time_indices = []
    while i > 0 or j > 0:
        text_indices.append(i - 1)
        time_indices.append(j - 1)
        if trace[i, j] == 0:
            i -= 1
            j -= 1
        elif trace[i, j] == 1:
            i -= 1
        elif trace[i, j] == 2:
            j -= 1
        else:
            raise RuntimeError(
                f"Internal error in dynamic time warping. Unexpected "
                f"trace[{i}, {j}]. Please file a bug report."
            )

    text_indices = np.array(text_indices)[::-1]
    time_indices = np.array(time_indices)[::-1]
    return text_indices, time_indices
_dynamic_time_warping
python
THUDM/GLM-4-Voice
speech_tokenizer/generation_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/generation_whisper.py
Apache-2.0
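To see what the DTW recursion and backtrace compute, here is a compact stand-alone version (a sketch, returning (token, frame) pairs rather than the two index arrays) run on a tiny cost matrix whose cheapest path is the diagonal:

```python
import numpy as np

def dtw_path(matrix: np.ndarray):
    """Minimal DTW over a cost matrix (rows: output tokens, cols: frames)."""
    n, m = matrix.shape
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i, j] = matrix[i - 1, j - 1] + min(
                cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    # Backtrace from the bottom-right corner.
    i, j, path = n, m, []
    while i > 0 or j > 0:
        path.append((i - 1, j - 1))
        if i == 0:
            j -= 1
        elif j == 0:
            i -= 1
        else:
            k = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j],
                               cost[i, j - 1]]))
            if k == 0:
                i, j = i - 1, j - 1
            elif k == 1:
                i -= 1
            else:
                j -= 1
    return path[::-1]

# Zero cost on the diagonal -> the alignment follows it
path = dtw_path(np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float))
```

In `_extract_token_timestamps` the matrix passed in is the negated cross-attention matrix, so low cost corresponds to high attention.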
def _extract_token_timestamps(self, generate_outputs, alignment_heads,
                              time_precision=0.02, num_frames=None):
    """
    Calculates token-level timestamps using the encoder-decoder
    cross-attentions and dynamic time-warping (DTW) to map each output token
    to a position in the input audio. If `num_frames` is specified, the
    encoder-decoder cross-attentions will be cropped before applying DTW.

    Returns:
        tensor containing the timestamps in seconds for each predicted token
    """
    # Create a list with `decoder_layers` elements, each a tensor of shape
    # (batch size, attention_heads, output length, input length).
    cross_attentions = []
    for i in range(self.config.decoder_layers):
        cross_attentions.append(
            torch.cat([x[i] for x in generate_outputs.cross_attentions],
                      dim=2))

    # Select specific cross-attention layers and heads. This is a tensor
    # of shape (batch size, num selected, output length, input length).
    weights = torch.stack(
        [cross_attentions[l][:, h] for l, h in alignment_heads])
    weights = weights.permute([1, 0, 2, 3])

    weight_length = None

    if "beam_indices" in generate_outputs:
        # If beam search has been used, the output sequences may have been
        # generated for more timesteps than their sequence_lengths, since
        # the beam search strategy chooses the most probable sequences at
        # the end of the search. In that case, the cross_attentions weights
        # are too long and we have to make sure that they have the right
        # output_length.
        weight_length = (generate_outputs.beam_indices != -1).sum(-1).max()
        weights = weights[:, :, :weight_length]

        # If a beam index is still -1, it means that the associated token
        # id is EOS. We need to replace the index with 0 since index_select
        # gives an error if any of the indexes is -1.
        beam_indices = generate_outputs.beam_indices[:, :weight_length]
        beam_indices = beam_indices.masked_fill(beam_indices == -1, 0)

        # Select the cross attention from the right beam for each output
        # sequence.
        weights = torch.stack(
            [
                torch.index_select(weights[:, :, i, :], dim=0,
                                   index=beam_indices[:, i])
                for i in range(beam_indices.shape[1])
            ],
            dim=2,
        )

    # Make sure timestamps are as long as weights.
    input_length = weight_length or cross_attentions[0].shape[2]
    timestamps = torch.zeros_like(generate_outputs.sequences,
                                  dtype=torch.float32)[:, : input_length + 1]
    batch_size = timestamps.shape[0]

    if num_frames is not None:
        # Two cases:
        # 1. num_frames is the same for each sample -> compute the DTW
        #    matrix for each sample in parallel
        # 2. num_frames differs -> compute the DTW matrix for each sample
        #    sequentially
        # We're using np.unique because num_frames can be int/list/tuple.
        if isinstance(num_frames, int):
            weights = weights[..., : num_frames // 2]
        elif (isinstance(num_frames, (list, tuple, np.ndarray))
              and len(np.unique(num_frames)) == 1):
            weights = weights[..., : num_frames[0] // 2]
        elif (isinstance(num_frames, torch.Tensor)
              and len(torch.unique(num_frames)) == 1):
            weights = weights[..., : num_frames[0] // 2]
        else:
            # num_frames is of shape (batch_size,), whereas batch_size is
            # truly batch_size * num_return_sequences.
            repeat_time = (batch_size if isinstance(num_frames, int)
                           else batch_size // len(num_frames))
            num_frames = np.repeat(num_frames, repeat_time)

    if num_frames is None or isinstance(num_frames, int):
        # Normalize and smooth the weights.
        std = torch.std(weights, dim=-2, keepdim=True, unbiased=False)
        mean = torch.mean(weights, dim=-2, keepdim=True)
        weights = (weights - mean) / std
        weights = _median_filter(weights, self.config.median_filter_width)

        # Average the different cross-attention heads.
        weights = weights.mean(dim=1)

    # Perform dynamic time warping on each element of the batch.
    for batch_idx in range(batch_size):
        if num_frames is not None and isinstance(
                num_frames, (tuple, list, np.ndarray, torch.Tensor)):
            matrix = weights[batch_idx, ..., : num_frames[batch_idx] // 2]

            # Normalize and smooth the weights.
            std = torch.std(matrix, dim=-2, keepdim=True, unbiased=False)
            mean = torch.mean(matrix, dim=-2, keepdim=True)
            matrix = (matrix - mean) / std
            matrix = _median_filter(matrix, self.config.median_filter_width)

            # Average the different cross-attention heads.
            matrix = matrix.mean(dim=0)
        else:
            matrix = weights[batch_idx]

        text_indices, time_indices = _dynamic_time_warping(
            -matrix.cpu().double().numpy())
        jumps = np.pad(np.diff(text_indices), (1, 0),
                       constant_values=1).astype(bool)
        jump_times = time_indices[jumps] * time_precision
        timestamps[batch_idx, 1:] = torch.tensor(jump_times)

    return timestamps
_extract_token_timestamps
python
THUDM/GLM-4-Voice
speech_tokenizer/generation_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/generation_whisper.py
Apache-2.0
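The final step of `_extract_token_timestamps` turns a DTW path into per-token times: wherever `text_indices` advances ("jumps" to the next token), the corresponding `time_indices` entry, scaled by `time_precision`, becomes that token's start time. A stand-alone sketch with made-up indices:

```python
import numpy as np

# Hypothetical DTW output: token 0 spans frames 0-1, token 1 frame 2,
# token 2 frames 3-4.
text_indices = np.array([0, 0, 1, 2, 2])
time_indices = np.array([0, 1, 2, 3, 4])
time_precision = 0.02  # seconds per frame, as in the function above

# A "jump" is where the token index advances; the first position always
# counts (hence the pad with a constant 1 on the left).
jumps = np.pad(np.diff(text_indices), (1, 0), constant_values=1).astype(bool)
jump_times = time_indices[jumps] * time_precision  # start time of each token
# tokens start at 0.00 s, 0.04 s and 0.06 s
```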
def generate(
    self,
    input_features: Optional[torch.Tensor] = None,
    generation_config: Optional[GenerationConfig] = None,
    logits_processor: Optional[LogitsProcessorList] = None,
    stopping_criteria: Optional[StoppingCriteriaList] = None,
    prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None,
    synced_gpus: bool = False,
    return_timestamps: Optional[bool] = None,
    task: Optional[str] = None,
    language: Optional[Union[str, List[str]]] = None,
    is_multilingual: Optional[bool] = None,
    prompt_ids: Optional[torch.Tensor] = None,
    prompt_condition_type: Optional[str] = None,  # first-segment, all-segments
    condition_on_prev_tokens: Optional[bool] = None,
    temperature: Optional[Union[float, Tuple[float, ...]]] = None,
    compression_ratio_threshold: Optional[float] = None,
    logprob_threshold: Optional[float] = None,
    no_speech_threshold: Optional[float] = None,
    num_segment_frames: Optional[int] = None,
    attention_mask: Optional[torch.Tensor] = None,
    time_precision: float = 0.02,
    return_token_timestamps: Optional[bool] = None,
    return_segments: bool = False,
    return_dict_in_generate: Optional[bool] = None,
    **kwargs,
):
    """
    Transcribes or translates log-mel input features to a sequence of
    auto-regressively generated token ids.

    <Tip warning={true}>

    Most generation-controlling parameters are set in `generation_config`
    which, if not passed, will be set to the model's default generation
    configuration. You can override any `generation_config` by passing the
    corresponding parameters to generate(), e.g.
    `.generate(inputs, num_beams=4, do_sample=True)`.

    For an overview of generation strategies and code examples, check out
    the [following guide](./generation_strategies).

    </Tip>

    Parameters:
        input_features (`torch.Tensor` of shape `(batch_size, feature_size, sequence_length)`, *optional*):
            Float values of log-mel features extracted from the raw speech
            waveform.
            The raw speech waveform can be obtained by loading a `.flac` or
            `.wav` audio file into an array of type `List[float]` or a
            `numpy.ndarray`, *e.g.* via the soundfile library
            (`pip install soundfile`). To prepare the array into
            `input_features`, the [`AutoFeatureExtractor`] should be used
            for extracting the mel features, padding and conversion into a
            tensor of type `torch.FloatTensor`. See
            [`~WhisperFeatureExtractor.__call__`] for details.
        generation_config (`~generation.GenerationConfig`, *optional*):
            The generation configuration to be used as base parametrization
            for the generation call. `**kwargs` passed to generate matching
            the attributes of `generation_config` will override them. If
            `generation_config` is not provided, the default will be used,
            which has the following loading priority: 1) from the
            `generation_config.json` model file, if it exists; 2) from the
            model configuration. Please note that unspecified parameters
            will inherit [`~generation.GenerationConfig`]'s default values,
            whose documentation should be checked to parameterize
            generation.
        logits_processor (`LogitsProcessorList`, *optional*):
            Custom logits processors that complement the default logits
            processors built from arguments and generation config. If a
            logit processor is passed that is already created with the
            arguments or a generation config, an error is thrown. This
            feature is intended for advanced users.
        stopping_criteria (`StoppingCriteriaList`, *optional*):
            Custom stopping criteria that complement the default stopping
            criteria built from arguments and a generation config. If a
            stopping criteria is passed that is already created with the
            arguments or a generation config, an error is thrown. This
            feature is intended for advanced users.
        prefix_allowed_tokens_fn (`Callable[[int, torch.Tensor], List[int]]`, *optional*):
            If provided, this function constrains the beam search to
            allowed tokens only at each step. If not provided, no
            constraint is applied.
            This function takes 2 arguments: the batch ID `batch_id` and
            `input_ids`. It has to return a list with the allowed tokens
            for the next generation step conditioned on the batch ID
            `batch_id` and the previously generated tokens `inputs_ids`.
            This argument is useful for constrained generation conditioned
            on the prefix, as described in [Autoregressive Entity
            Retrieval](https://arxiv.org/abs/2010.00904).
        synced_gpus (`bool`, *optional*, defaults to `False`):
            Whether to continue running the while loop until max_length
            (needed for ZeRO stage 3).
        return_timestamps (`bool`, *optional*):
            Whether to return the timestamps with the text. This enables
            the `WhisperTimestampsLogitsProcessor`.
        task (`str`, *optional*):
            Task to use for generation, either "translate" or "transcribe".
            The `model.config.forced_decoder_ids` will be updated
            accordingly.
        language (`str` or list of `str`, *optional*):
            Language token to use for generation, can be either in the form
            of `<|en|>`, `en` or `english`. For batched generation, a list
            of language tokens can be passed. You can find all the possible
            language tokens in the `model.generation_config.lang_to_id`
            dictionary.
        is_multilingual (`bool`, *optional*):
            Whether or not the model is multilingual.
        prompt_ids (`torch.Tensor`, *optional*):
            Rank-1 tensor of token IDs created by passing text to
            [`~WhisperProcessor.get_prompt_ids`] that is provided as a
            prompt to each chunk. This can be used to provide or
            "prompt-engineer" a context for transcription, e.g. custom
            vocabularies or proper nouns to make it more likely to predict
            those words correctly. It cannot be used in conjunction with
            `decoder_start_token_id` as it overwrites this value.
        prompt_condition_type (`str`, *optional*):
            Only relevant for long-form transcription. Condition type of
            `prompt_ids`. 'first-segment' means only the first segment is
            conditioned on `prompt_ids`. 'all-segments' means each segment
            is conditioned on `prompt_ids`.
            Make sure to enable `condition_on_prev_tokens` for
            'all-segments'. Defaults to 'first-segment'. For short-term
            transcription only 'first-segment' is possible.
        condition_on_prev_tokens (`bool`, *optional*):
            Only relevant for long-form transcription. Whether to condition
            each segment on the previous segment. As shown in [the Whisper
            paper](https://cdn.openai.com/papers/whisper.pdf), this can
            help to improve performance.
        temperature (`float` or list of `float`, *optional*):
            The temperature to be used for generation. Passing a single
            `float` value and `do_sample=True` activates generation using
            sampling. For long-form transcription, temperature fallback can
            be activated by passing a list of float values such as
            (0.0, 0.2, 0.4, 0.6, 0.8, 1.0). As shown in [the Whisper
            paper](https://cdn.openai.com/papers/whisper.pdf), this can
            help to improve performance.
        compression_ratio_threshold (`float`, *optional*):
            Only relevant for long-form transcription. If defined, the zlib
            compression rate of each segment will be computed. If the
            compression rate of a segment is higher than
            `compression_ratio_threshold`, temperature fallback is
            activated: the generated segment is discarded and the
            generation is repeated using a higher temperature. The
            intuition behind this feature is that segments with very high
            compression rates suffer from a lot of repetition. The unwanted
            repetition can be reduced by injecting more randomness by
            increasing the temperature. If `compression_ratio_threshold` is
            defined, make sure that `temperature` is a list of values. A
            common value for `compression_ratio_threshold` is 1.35. As
            shown in [the Whisper
            paper](https://cdn.openai.com/papers/whisper.pdf), this can
            help to improve performance.
        logprob_threshold (`float`, *optional*):
            Only relevant for long-form transcription. If defined, the
            average log-probability of each segment will be computed.
            If the log-probability of a given segment is lower than
            `logprob_threshold`, temperature fallback is activated: the
            generated segment is discarded and the generation is repeated
            using a higher temperature. The intuition behind this feature
            is that segments of low log-probability can be improved by
            injecting more randomness by increasing the temperature. If
            `logprob_threshold` is defined, make sure that `temperature` is
            a list of values. A common value for `logprob_threshold` is
            -1.0. As shown in [the Whisper
            paper](https://cdn.openai.com/papers/whisper.pdf), this can
            help to improve performance.
        no_speech_threshold (`float`, *optional*):
            Only relevant for long-form transcription. If defined, the
            "no-speech" token combined with the `logprob_threshold` is used
            to determine whether a segment contains only silence. In this
            case, the transcription for this segment is skipped. As shown
            in [the Whisper
            paper](https://cdn.openai.com/papers/whisper.pdf), this can
            help to improve performance.
        num_segment_frames (`int`, *optional*):
            The number of frames a single segment is made of. If not
            defined, `num_segment_frames` defaults to the model's stride
            times the maximum input length.
        attention_mask (`torch.Tensor`, *optional*):
            `attention_mask` needs to be passed when doing long-form
            transcription using a batch size > 1.
        time_precision (`int`, *optional*, defaults to 0.02):
            The duration of an output token in seconds. *E.g.* 0.02 means
            that a generated token on average accounts for 20 ms.
        return_token_timestamps (`bool`, *optional*):
            Whether to return token-level timestamps with the text. This
            can be used with or without the `return_timestamps` option. To
            get word-level timestamps, use the tokenizer to group the
            tokens into words.
        return_segments (`bool`, *optional*, defaults to `False`):
            Whether to additionally return a list of all segments. Note
            that this option can only be enabled when doing long-form
            transcription.
        return_dict_in_generate (`bool`, *optional*, defaults to `False`):
            Whether or not to return a [`~utils.ModelOutput`] instead of
            just returning the generated tokens. Note that when doing
            long-form transcription, `return_dict_in_generate` can only be
            enabled when `return_segments` is set to True. In this case the
            generation outputs of each segment are added to each segment.
        kwargs (`Dict[str, Any]`, *optional*):
            Ad hoc parametrization of `generate_config` and/or additional
            model-specific kwargs that will be forwarded to the `forward`
            function of the model. If the model is an encoder-decoder
            model, encoder-specific kwargs should not be prefixed and
            decoder-specific kwargs should be prefixed with *decoder_*.

    Return:
        [`~utils.ModelOutput`] or `torch.LongTensor` or `Dict[str, Any]`:
        A [`~utils.ModelOutput`] (if `return_dict_in_generate=True` or when
        `config.return_dict_in_generate=True`) or a `torch.FloatTensor` or
        a dict of segments when `return_segments=True`.

        If the passed input is > 30 seconds / > 3000 mel input features and
        `return_segments=True`, then a dictionary of generated sequence
        ids, called `sequences`, and a list of each generated segment is
        returned.

        Else, if the passed input is <= 30 seconds / <= 3000 mel input
        features, the possible [`~utils.ModelOutput`] types are:

            - [`~generation.GenerateEncoderDecoderOutput`],
            - [`~generation.GenerateBeamEncoderDecoderOutput`]

        Else only the generated output sequence ids are returned.

    Example:

    - *Longform transcription*: To transcribe or translate audios longer
      than 30 seconds, process the audio files without truncation and pass
      all mel features at once to generate.
```python >>> import torch >>> from transformers import AutoProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset, Audio >>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") >>> model.cuda() # doctest: +IGNORE_RESULT >>> # load audios > 30 seconds >>> ds = load_dataset("distil-whisper/meanwhile", "default")["test"] >>> # resample to 16kHz >>> ds = ds.cast_column("audio", Audio(sampling_rate=16000)) >>> # take first 8 audios and retrieve array >>> audio = ds[:8]["audio"] >>> audio = [x["array"] for x in audio] >>> # make sure to NOT truncate the input audio, to return the `attention_mask` and to pad to the longest audio >>> inputs = processor(audio, return_tensors="pt", truncation=False, padding="longest", return_attention_mask=True, sampling_rate=16_000) >>> inputs = inputs.to("cuda", torch.float32) >>> # transcribe audio to ids >>> generated_ids = model.generate(**inputs) >>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> transcription[0] " Folks, if you watch the show, you know, I spent a lot of time right over there. Patiently and astutely scrutinizing the boxwood and mahogany chest set of the day's biggest stories developing the central headline pawns, definitely maneuvering an oso topical night to F6, fainting a classic Sicilian, nade door variation on the news, all the while seeing eight moves deep and patiently marshalling the latest press releases into a fisher's shows in Lip Nitsky attack that culminates in the elegant lethal slow-played, all-passant checkmate that is my nightly monologue. But sometimes, sometimes, folks, I. CHEERING AND APPLAUSE Sometimes I startle away, cubside down in the monkey bars of a condemned playground on a super fun site. Get all hept up on goofballs. Rummage that were discarded tag bag of defective toys. 
Yank out a fist bowl of disembodied doll limbs, toss them on a stained kid's place mat from a defunct dennies. set up a table inside a rusty cargo container down by the Wharf and challenged toothless drifters to the godless bughouse blitz of tournament that is my segment. Meanwhile." ``` - *Shortform transcription*: If passed mel input features are < 30 seconds, the whole audio will be transcribed with a single call to generate. ```python >>> import torch >>> from transformers import AutoProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = processor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> generated_ids = model.generate(inputs=input_features) >>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> transcription ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.' ``` """ # 0. deprecate old inputs if "inputs" in kwargs: input_features = kwargs.pop("inputs") warnings.warn( "The input name `inputs` is deprecated. Please make sure to use `input_features` instead.", FutureWarning, ) # 1. prepare generation config generation_config, kwargs = self._prepare_generation_config(generation_config, **kwargs) # 2. set global generate variables input_stride = self.model.encoder.conv1.stride[0] * self.model.encoder.conv2.stride[0] num_segment_frames = input_stride * self.config.max_source_positions batch_size, total_input_frames = self._retrieve_total_input_frames( input_features=input_features, input_stride=input_stride, kwargs=kwargs ) is_shortform = total_input_frames <= num_segment_frames # 3. 
Make sure generation config is correctly set # Make sure the generation config is correctly set depending on whether timestamps are to be returned or not return_dict_in_generate = self._set_return_outputs( return_dict_in_generate=return_dict_in_generate, return_token_timestamps=return_token_timestamps, logprob_threshold=logprob_threshold, generation_config=generation_config, ) timestamp_begin = self._set_return_timestamps( return_timestamps=return_timestamps, is_shortform=is_shortform, generation_config=generation_config ) self._set_language_and_task( language=language, task=task, is_multilingual=is_multilingual, generation_config=generation_config ) self._set_num_frames( return_token_timestamps=return_token_timestamps, generation_config=generation_config, kwargs=kwargs ) self._set_thresholds_and_condition( generation_config=generation_config, logprob_threshold=logprob_threshold, compression_ratio_threshold=compression_ratio_threshold, no_speech_threshold=no_speech_threshold, condition_on_prev_tokens=condition_on_prev_tokens, ) self._set_prompt_condition_type( generation_config=generation_config, prompt_condition_type=prompt_condition_type, ) kwargs["attention_mask"] = attention_mask # pass self.config for backward compatibility init_tokens = self._retrieve_init_tokens( input_features, batch_size=batch_size, generation_config=generation_config, config=self.config, num_segment_frames=num_segment_frames, kwargs=kwargs, ) # passing `decoder_input_ids` is deprecated - the only exception is for assisted generation # where the input ids are handled explicitly by the generate method self._check_decoder_input_ids(kwargs=kwargs) # 3. 
Retrieve logits processors device = kwargs["encoder_outputs"][0].device if "encoder_outputs" in kwargs else input_features.device begin_index = init_tokens.shape[1] logits_processor = self._retrieve_logit_processors( generation_config=generation_config, logits_processor=logits_processor, begin_index=begin_index, # begin index is index of first generated decoder token num_beams=kwargs.get("num_beams", 1), device=device, ) # 4 Set and retrieve global generation variables self._set_condition_on_prev_tokens( condition_on_prev_tokens=condition_on_prev_tokens, generation_config=generation_config ) temperatures = [temperature] if not isinstance(temperature, (list, tuple)) else temperature temperature = temperatures[0] max_frames, seek = self._retrieve_max_frames_and_seek( batch_size=batch_size, attention_mask=attention_mask, total_input_frames=total_input_frames, is_shortform=is_shortform, ) # 5 Prepare running variables, list for generation num_return_sequences = generation_config.num_return_sequences ( batch_idx_map, cur_bsz, input_features, seek, max_frames, init_tokens, do_condition_on_prev_tokens, ) = self._expand_variables_for_generation( input_features=input_features, seek=seek, max_frames=max_frames, init_tokens=init_tokens, batch_size=batch_size, condition_on_prev_tokens=condition_on_prev_tokens, generation_config=generation_config, ) current_segments = self._prepare_segments( prompt_ids=prompt_ids, batch_size=cur_bsz, generation_config=generation_config, ) # 6 Transcribe audio until we reach the end of all input audios while (seek < max_frames).any(): # 6.1 NOTE: When in longform transcription mode and batch size > 1 we need to dynamically reduce the batch size during the loop # in case one audio finished earlier than another one. 
Thus, we need to keep a table of "previous-index-2-current-index" in order # to know which original audio is being decoded # Set updated index map, duration of previously decoded chunks and number of max frames of current decoding chunk input_features, cur_bsz, batch_idx_map = self._maybe_reduce_batch( input_features=input_features, seek=seek, max_frames=max_frames, cur_bsz=cur_bsz, batch_idx_map=batch_idx_map, ) time_offset = seek * time_precision / input_stride seek_num_frames = (max_frames - seek).clamp(max=num_segment_frames) # 6.2 cut out next 30s segment from input features segment_input = self._get_input_segment( input_features=input_features, seek=seek, seek_num_frames=seek_num_frames, num_segment_frames=num_segment_frames, cur_bsz=cur_bsz, batch_idx_map=batch_idx_map, ) # 6.3 prepare decoder input ids suppress_tokens = _get_attr_from_logit_processors( logits_processor, SuppressTokensLogitsProcessor, "suppress_tokens" ) decoder_input_ids, kwargs = self._prepare_decoder_input_ids( cur_bsz=cur_bsz, init_tokens=init_tokens, current_segments=current_segments, batch_idx_map=batch_idx_map, do_condition_on_prev_tokens=do_condition_on_prev_tokens, prompt_ids=prompt_ids, generation_config=generation_config, config=self.config, device=init_tokens.device, suppress_tokens=suppress_tokens, kwargs=kwargs, ) # 6.4 set max new tokens or max length self._set_max_new_tokens_and_length( config=self.config, decoder_input_ids=decoder_input_ids, generation_config=generation_config, ) # 6.5 Set current `begin_index` for all logit processors if logits_processor is not None: for proc in logits_processor: if hasattr(proc, "set_begin_index"): proc.set_begin_index(decoder_input_ids.shape[-1]) # 6.6 Run generate with fallback ( seek_sequences, seek_outputs, should_skip, do_condition_on_prev_tokens, model_output_type, ) = self.generate_with_fallback( segment_input=segment_input, decoder_input_ids=decoder_input_ids, cur_bsz=cur_bsz, batch_idx_map=batch_idx_map, seek=seek, 
num_segment_frames=num_segment_frames, max_frames=max_frames, temperatures=temperatures, generation_config=generation_config, logits_processor=logits_processor, stopping_criteria=stopping_criteria, prefix_allowed_tokens_fn=prefix_allowed_tokens_fn, synced_gpus=synced_gpus, return_token_timestamps=return_token_timestamps, do_condition_on_prev_tokens=do_condition_on_prev_tokens, is_shortform=is_shortform, batch_size=batch_size, kwargs=kwargs, ) # 6.7 In every generated sequence, split by timestamp tokens and extract segments for i, seek_sequence in enumerate(seek_sequences): prev_i = batch_idx_map[i] if should_skip[i]: seek[prev_i] += seek_num_frames[prev_i] continue segments, segment_offset = self._retrieve_segment( seek_sequence=seek_sequence, seek_outputs=seek_outputs, time_offset=time_offset, timestamp_begin=timestamp_begin, seek_num_frames=seek_num_frames, time_precision=time_precision, input_stride=input_stride, prev_idx=prev_i, idx=i, return_token_timestamps=return_token_timestamps, ) current_segments[prev_i] += segments if is_shortform: seek[prev_i] += max_frames[i] else: seek[prev_i] += segment_offset # 7. Once all segments are added to the list of all segments, called `current_segments`, we extract the predicted # output tokens from the list of dicts. If we use batch size > 1, we make sure to pad the output final_segments = ( [x[1:] for x in current_segments] if (prompt_ids is not None and generation_config.prompt_condition_type == "first-segment") else current_segments ) sequences = _pad_to_max_length( final_segments, generation_config.pad_token_id, device=self.device, padding_side="right" ) # 8. If we return all segments, the predicted output sequences are put under `"sequences"`. 
if return_segments: return {"sequences": sequences, "segments": final_segments} if is_shortform: # add eos token: if generation_config.max_new_tokens is None and generation_config.max_length is None: eos_tokens = torch.full((sequences.shape[0], 1), generation_config.eos_token_id) sequences = torch.cat([sequences, eos_tokens], dim=-1) if return_token_timestamps: outputs = {} outputs["sequences"] = sequences outputs["token_timestamps"] = torch.stack([d["token_timestamps"] for d in seek_outputs], dim=0) else: outputs = sequences if return_dict_in_generate and generation_config.return_dict_in_generate: dict_outputs = self._stack_split_outputs(seek_outputs, model_output_type, sequences.device, kwargs) if num_return_sequences > 1: if hasattr(dict_outputs, "encoder_attentions") and dict_outputs.encoder_attentions is not None: dict_outputs.encoder_attentions = tuple( dict_outputs.encoder_attentions[i][::num_return_sequences] for i in range(len(dict_outputs.encoder_attentions)) ) if ( hasattr(dict_outputs, "encoder_hidden_states") and dict_outputs.encoder_hidden_states is not None ): dict_outputs.encoder_hidden_states = tuple( dict_outputs.encoder_hidden_states[i][::num_return_sequences] for i in range(len(dict_outputs.encoder_hidden_states)) ) if return_token_timestamps: dict_outputs["token_timestamps"] = outputs["token_timestamps"] return dict_outputs return outputs return sequences
Transcribes or translates log-mel input features to a sequence of auto-regressively generated token ids. <Tip warning={true}> Most generation-controlling parameters are set in `generation_config` which, if not passed, will be set to the model's default generation configuration. You can override any `generation_config` by passing the corresponding parameters to generate(), e.g. `.generate(inputs, num_beams=4, do_sample=True)`. For an overview of generation strategies and code examples, check out the [following guide](./generation_strategies). </Tip> Parameters: input_features (`torch.Tensor` of shape `(batch_size, feature_size, sequence_length)`, *optional*): Float values of log-mel features extracted from the raw speech waveform. The raw speech waveform can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_features`, the [`AutoFeatureExtractor`] should be used for extracting the mel features, padding and conversion into a tensor of type `torch.FloatTensor`. See [`~WhisperFeatureExtractor.__call__`] for details. generation_config (`~generation.GenerationConfig`, *optional*): The generation configuration to be used as base parametrization for the generation call. `**kwargs` passed to generate matching the attributes of `generation_config` will override them. If `generation_config` is not provided, the default will be used, which had the following loading priority: 1) from the `generation_config.json` model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit [`~generation.GenerationConfig`]'s default values, whose documentation should be checked to parameterize generation. logits_processor (`LogitsProcessorList`, *optional*): Custom logits processors that complement the default logits processors built from arguments and generation config. 
If a logit processor is passed that is already created with the arguments or a generation config an error is thrown. This feature is intended for advanced users. stopping_criteria (`StoppingCriteriaList`, *optional*): Custom stopping criteria that complement the default stopping criteria built from arguments and a generation config. If a stopping criteria is passed that is already created with the arguments or a generation config an error is thrown. This feature is intended for advanced users. prefix_allowed_tokens_fn (`Callable[[int, torch.Tensor], List[int]]`, *optional*): If provided, this function constraints the beam search to allowed tokens only at each step. If not provided no constraint is applied. This function takes 2 arguments: the batch ID `batch_id` and `input_ids`. It has to return a list with the allowed tokens for the next generation step conditioned on the batch ID `batch_id` and the previously generated tokens `inputs_ids`. This argument is useful for constrained generation conditioned on the prefix, as described in [Autoregressive Entity Retrieval](https://arxiv.org/abs/2010.00904). synced_gpus (`bool`, *optional*, defaults to `False`): Whether to continue running the while loop until max_length (needed for ZeRO stage 3) return_timestamps (`bool`, *optional*): Whether to return the timestamps with the text. This enables the `WhisperTimestampsLogitsProcessor`. task (`str`, *optional*): Task to use for generation, either "translate" or "transcribe". The `model.config.forced_decoder_ids` will be updated accordingly. language (`str` or list of `str`, *optional*): Language token to use for generation, can be either in the form of `<|en|>`, `en` or `english`. For batched generation, a list of language tokens can be passed. You can find all the possible language tokens in the `model.generation_config.lang_to_id` dictionary. is_multilingual (`bool`, *optional*): Whether or not the model is multilingual. 
prompt_ids (`torch.Tensor`, *optional*): Rank-1 tensor of token IDs created by passing text to [`~WhisperProcessor.get_prompt_ids`] that is provided as a prompt to each chunk. This can be used to provide or "prompt-engineer" a context for transcription, e.g. custom vocabularies or proper nouns to make it more likely to predict those words correctly. It cannot be used in conjunction with `decoder_start_token_id` as it overwrites this value. prompt_condition_type (`str`, *optional*): Only relevant for long-form transcription. Condition type of `prompt_ids`. 'first-segment' means only the first segment is conditioned on `prompt_ids`. 'all-segments' means each segment is conditioned on `prompt_ids`. Make sure to enable `condition_on_prev_tokens` for 'all-segments'. Defaults to 'first-segment'. For short-term transcription only 'first-segment' is possible. condition_on_prev_tokens (`bool`, *optional*): Only relevant for long-form transcription. Whether to condition each segment on the previous segment. As shown in the [the Whisper paper](https://cdn.openai.com/papers/whisper.pdf), this can help to improve performance. temperature (`float` or list of `float`, *optional*): The temperature to be used for generation. Passing a single `float` value and `do_sample=True` activates generation using sampling. For long-form transcription, temperature fallback can be activated by passing a list of float values such as (0.0, 0.2, 0.4, 0.6, 0.8, 1.0). As shown in the [the Whisper paper](https://cdn.openai.com/papers/whisper.pdf), this can help to improve performance. compression_ratio_threshold (`float`, *optional*): Only relevant for long-form transcription. If defined, the zlib compression rate of each segment will be computed. If the compression rate of a segment is higher than `compression_ratio_threshold`, temperature fallback is activated: the generated segment is discarded and the generation is repeated using a higher temperature. 
The intuition behind this feature is that segments with very high compression rates suffer from a lot of repetition. The unwanted repetition can be reduced by injecting more randomness by increasing the temperature. If `compression_ratio_threshold` is defined make sure that `temperature` is a list of values. A common value for `compression_ratio_threshold` is 1.35. As shown in the [the Whisper paper](https://cdn.openai.com/papers/whisper.pdf), this can help to improve performance. logprob_threshold (`float`, *optional*): Only relevant for long-form transcription. If defined, the average log-probability of each segment will be computed. If the log-probability of a given segment is lower than `logprob_threshold`, temperature fallback is activated: the generated segment is discarded and the generation is repeated using a higher temperature. The intuition behind this feature is that segments of low log-probability can be improved by injecting more randomness by increasing the temperature. If `logprob_threshold` is defined make sure that `temperature` is a list of values. A common value for `logprob_threshold` is -1.0. As shown in the [the Whisper paper](https://cdn.openai.com/papers/whisper.pdf), this can help to improve performance. no_speech_threshold (`float`, *optional*): Only relevant for long-form transcription. If defined, the "no-speech" token combined with the `logprob_threshold` is used to determine whether a segment contains only silence. In this case, the transcription for this segment is skipped. As shown in the [the Whisper paper](https://cdn.openai.com/papers/whisper.pdf), this can help to improve performance. num_segment_frames (`int`, *optional*): The number of frames a single segment is made of. If not defined, `num_segment_frames` defaults to the model's stride times the maximum input length. attention_mask (`torch.Tensor`, *optional*): `attention_mask` needs to be passed when doing long-form transcription using a batch size > 1. 
time_precision (`int`, *optional*, defaults to 0.02): The duration of output token in seconds. *E.g.* 0.02 means that a generated token on average accounts for 20 ms. return_token_timestamps (`bool`, *optional*): Whether to return token-level timestamps with the text. This can be used with or without the `return_timestamps` option. To get word-level timestamps, use the tokenizer to group the tokens into words. return_segments (`bool`, *optional*, defaults to `False`): Whether to additionally return a list of all segments. Note that this option can only be enabled when doing long-form transcription. return_dict_in_generate (`bool`, *optional*, defaults to `False`): Whether or not to return a [`~utils.ModelOutput`] instead of just returning the generated tokens. Note that when doing long-form transcription, `return_dict_in_generate` can only be enabled when `return_segments` is set True. In this case the generation outputs of each segment is added to each segment. kwargs (`Dict[str, Any]`, *optional*): Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be forwarded to the `forward` function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with *decoder_*. Return: [`~utils.ModelOutput`] or `torch.LongTensor` or `Dict[str, Any]`: A [`~utils.ModelOutput`] (if `return_dict_in_generate=True` or when `config.return_dict_in_generate=True`) or a `torch.FloatTensor` or a dict of segments when `return_segments=True`. If the passed input is > 30 seconds / > 3000 mel input features and `return_segments=True` then a dictionary of generated sequence ids, called `sequences` and a list of each generated segment is returned. 
else if the passed input is <= 30 seconds / >= 3000 mel input features, the possible [`~utils.ModelOutput`] types are: - [`~generation.GenerateEncoderDecoderOutput`], - [`~generation.GenerateBeamEncoderDecoderOutput`] else only the generated output sequence ids are returned. Example: - *Longform transcription*: To transcribe or translate audios longer than 30 seconds, process the audio files without truncation and pass all mel features at once to generate. ```python >>> import torch >>> from transformers import AutoProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset, Audio >>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") >>> model.cuda() # doctest: +IGNORE_RESULT >>> # load audios > 30 seconds >>> ds = load_dataset("distil-whisper/meanwhile", "default")["test"] >>> # resample to 16kHz >>> ds = ds.cast_column("audio", Audio(sampling_rate=16000)) >>> # take first 8 audios and retrieve array >>> audio = ds[:8]["audio"] >>> audio = [x["array"] for x in audio] >>> # make sure to NOT truncate the input audio, to return the `attention_mask` and to pad to the longest audio >>> inputs = processor(audio, return_tensors="pt", truncation=False, padding="longest", return_attention_mask=True, sampling_rate=16_000) >>> inputs = inputs.to("cuda", torch.float32) >>> # transcribe audio to ids >>> generated_ids = model.generate(**inputs) >>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> transcription[0] " Folks, if you watch the show, you know, I spent a lot of time right over there. 
Patiently and astutely scrutinizing the boxwood and mahogany chest set of the day's biggest stories developing the central headline pawns, definitely maneuvering an oso topical night to F6, fainting a classic Sicilian, nade door variation on the news, all the while seeing eight moves deep and patiently marshalling the latest press releases into a fisher's shows in Lip Nitsky attack that culminates in the elegant lethal slow-played, all-passant checkmate that is my nightly monologue. But sometimes, sometimes, folks, I. CHEERING AND APPLAUSE Sometimes I startle away, cubside down in the monkey bars of a condemned playground on a super fun site. Get all hept up on goofballs. Rummage that were discarded tag bag of defective toys. Yank out a fist bowl of disembodied doll limbs, toss them on a stained kid's place mat from a defunct dennies. set up a table inside a rusty cargo container down by the Wharf and challenged toothless drifters to the godless bughouse blitz of tournament that is my segment. Meanwhile." ``` - *Shortform transcription*: If passed mel input features are < 30 seconds, the whole audio will be transcribed with a single call to generate. ```python >>> import torch >>> from transformers import AutoProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = processor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> generated_ids = model.generate(inputs=input_features) >>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> transcription ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.' ```
generate
python
THUDM/GLM-4-Voice
speech_tokenizer/generation_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/generation_whisper.py
Apache-2.0
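The `generate` docstring above describes temperature fallback (`logprob_threshold`, `compression_ratio_threshold`): a segment whose average log-probability is too low, or whose compression ratio is too high, is discarded and regenerated at the next higher temperature. A minimal, hedged sketch of that control flow — the name `generate_fn` and the returned triple are illustrative stand-ins, not the library's API:

```python
def generate_with_temperature_fallback(
    generate_fn, temperatures, logprob_threshold, compression_ratio_threshold
):
    """Retry a segment at increasing temperatures until its average
    log-probability and zlib compression ratio pass the thresholds.
    `generate_fn(t)` is a hypothetical callable returning
    (text, avg_logprob, compression_ratio) for temperature t."""
    text = None
    for temperature in temperatures:
        text, avg_logprob, compression_ratio = generate_fn(temperature)
        too_unlikely = logprob_threshold is not None and avg_logprob < logprob_threshold
        too_repetitive = (
            compression_ratio_threshold is not None
            and compression_ratio > compression_ratio_threshold
        )
        if not (too_unlikely or too_repetitive):
            break  # segment accepted at this temperature
    # if every temperature failed, the last attempt is kept anyway
    return text, temperature

# fake decoder: sampling at t=0.8 escapes the low-probability repetition loop
fake = {0.0: ("a a a a", -1.8, 2.9), 0.4: ("a a a a", -1.4, 2.9), 0.8: ("hello world", -0.4, 1.1)}
print(generate_with_temperature_fallback(fake.get, [0.0, 0.4, 0.8], -1.0, 2.4))
# -> ('hello world', 0.8)
```

This is why the docstring insists that `temperature` be a list of values whenever either threshold is set: with a single temperature there is nothing to fall back to.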
def replace_or_add(lst: List[int], num: int, itr: Iterator[int]):
    """Replace every element of `lst` that occurs in `itr` with `num`; if none occurs, append `num`."""
    found = any(i in lst for i in itr)
    if found:
        lst = [num if i in itr else i for i in lst]
    else:
        lst.append(num)

    return lst
Replace every element of `lst` that occurs in `itr` with `num`; if none occurs, append `num`.
replace_or_add
python
THUDM/GLM-4-Voice
speech_tokenizer/generation_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/generation_whisper.py
Apache-2.0
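A self-contained usage sketch of `replace_or_add` above, as it might be applied to swap a forced language token in an init-token list (the token ids are illustrative):

```python
from typing import Iterator, List

def replace_or_add(lst: List[int], num: int, itr: Iterator[int]) -> List[int]:
    """Replace every element of `lst` that occurs in `itr` with `num`; if none occurs, append `num`."""
    found = any(i in lst for i in itr)
    if found:
        lst = [num if i in itr else i for i in lst]
    else:
        lst.append(num)
    return lst

# a new language token replaces the existing one in the init-token sequence
print(replace_or_add([50258, 50259, 50359], 50260, [50259]))  # -> [50258, 50260, 50359]
# no language token present yet: the new one is appended instead
print(replace_or_add([50258, 50359], 50260, [50259]))  # -> [50258, 50359, 50260]
```

Note that `itr` may be iterated twice (once by `any`, once by the comprehension), so a list rather than a one-shot iterator is passed here.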
def detect_language(
    self,
    input_features: Optional[torch.FloatTensor] = None,
    attention_mask: Optional[torch.LongTensor] = None,
    encoder_outputs: Optional[Union[torch.FloatTensor, BaseModelOutput]] = None,
    generation_config: Optional[GenerationConfig] = None,
    num_segment_frames: int = 3000,
) -> torch.Tensor:
    """
    Detects language from log-mel input features or encoder_outputs

    Parameters:
        input_features (`torch.Tensor` of shape `(batch_size, feature_size, sequence_length)`, *optional*):
            Float values of log-mel features extracted from the raw speech waveform. The raw speech waveform can be
            obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a
            `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into
            `input_features`, the [`AutoFeatureExtractor`] should be used for extracting the mel features, padding
            and conversion into a tensor of type `torch.FloatTensor`. See [`~WhisperFeatureExtractor.__call__`] for
            details.
        encoder_outputs (`tuple(torch.FloatTensor)`, *optional*):
            Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`).
            `last_hidden_state` (of shape `(batch_size, sequence_length, hidden_size)`, *optional*) is a sequence of
            hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the
            decoder.
        generation_config (`~generation.GenerationConfig`, *optional*):
            The generation configuration to be used as base parametrization for the generation call. `**kwargs`
            passed to generate matching the attributes of `generation_config` will override them. If
            `generation_config` is not provided, the default will be used, which has the following loading priority:
            1) from the `generation_config.json` model file, if it exists; 2) from the model configuration. Please
            note that unspecified parameters will inherit [`~generation.GenerationConfig`]'s default values, whose
            documentation should be checked to parameterize generation.
        num_segment_frames (`int`, *optional*, defaults to 3000):
            The number of log-mel frames the model expects.

    Return:
        A `torch.LongTensor` representing the detected language ids.
    """
    if input_features is None and encoder_outputs is None:
        raise ValueError("You have to specify either `input_features` or `encoder_outputs`")
    elif input_features is not None and encoder_outputs is not None:
        raise ValueError("Make sure to specify only one of `input_features` or `encoder_outputs` - not both!")
    elif input_features is not None:
        inputs = {"input_features": input_features[:, :, :num_segment_frames]}
        batch_size = input_features.shape[0]
    elif encoder_outputs is not None:
        inputs = {"encoder_outputs": encoder_outputs}
        # `encoder_outputs[0]` is the last hidden state whether a `BaseModelOutput` or a plain tuple is passed
        batch_size = encoder_outputs[0].shape[0]

    if attention_mask is not None:
        inputs["attention_mask"] = attention_mask

    generation_config = generation_config or self.generation_config

    decoder_input_ids = (
        torch.ones((batch_size, 1), device=self.device, dtype=torch.long)
        * generation_config.decoder_start_token_id
    )

    with torch.no_grad():
        logits = self(**inputs, decoder_input_ids=decoder_input_ids).logits[:, -1]

    non_lang_mask = torch.ones_like(logits[0], dtype=torch.bool)
    non_lang_mask[list(generation_config.lang_to_id.values())] = False

    logits[:, non_lang_mask] = -np.inf

    lang_ids = logits.argmax(-1)

    return lang_ids
Detects language from log-mel input features or encoder_outputs

Parameters:
    input_features (`torch.Tensor` of shape `(batch_size, feature_size, sequence_length)`, *optional*):
        Float values of log-mel features extracted from the raw speech waveform. The raw speech waveform can be
        obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a
        `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into
        `input_features`, the [`AutoFeatureExtractor`] should be used for extracting the mel features, padding and
        conversion into a tensor of type `torch.FloatTensor`. See [`~WhisperFeatureExtractor.__call__`] for
        details.
    encoder_outputs (`tuple(torch.FloatTensor)`, *optional*):
        Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`).
        `last_hidden_state` (of shape `(batch_size, sequence_length, hidden_size)`, *optional*) is a sequence of
        hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
    generation_config (`~generation.GenerationConfig`, *optional*):
        The generation configuration to be used as base parametrization for the generation call. `**kwargs` passed
        to generate matching the attributes of `generation_config` will override them. If `generation_config` is
        not provided, the default will be used, which has the following loading priority: 1) from the
        `generation_config.json` model file, if it exists; 2) from the model configuration. Please note that
        unspecified parameters will inherit [`~generation.GenerationConfig`]'s default values, whose documentation
        should be checked to parameterize generation.
    num_segment_frames (`int`, *optional*, defaults to 3000):
        The number of log-mel frames the model expects.

Return:
    A `torch.LongTensor` representing the detected language ids.
detect_language
python
THUDM/GLM-4-Voice
speech_tokenizer/generation_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/generation_whisper.py
Apache-2.0
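The core of `detect_language` above is a single constrained argmax: every logit that is not a language token is set to `-inf` before taking the maximum. A pure-Python sketch of that step — the 6-token vocabulary and the ids in `lang_token_ids` are made up for illustration:

```python
def detect_language_from_logits(logits, lang_token_ids):
    """Sketch of the final step of `detect_language` above: suppress every
    logit that is not a language token, then take the argmax."""
    masked = [
        value if idx in lang_token_ids else float("-inf")
        for idx, value in enumerate(logits)
    ]
    return max(range(len(masked)), key=masked.__getitem__)

# hypothetical logits over a 6-token vocabulary where only ids 2 and 4 are
# language tokens (the role played by `generation_config.lang_to_id.values()`)
print(detect_language_from_logits([5.0, 9.0, 1.5, 0.0, 2.5, 3.0], {2, 4}))  # -> 4
```

Note that token 1 has the highest raw logit but is not a language token, so it cannot be selected: the constraint turns a free argmax into a choice among language tokens only.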
def _retrieve_compression_ratio(tokens, vocab_size):
    """Compute the compression ratio of the token sequence: the byte length of the raw token bytes divided by the byte length of their zlib-compressed form."""
    length = int(math.log2(vocab_size) / 8) + 1
    token_bytes = b"".join([t.to_bytes(length, "little") for t in tokens.tolist()])
    compression_ratio = len(token_bytes) / len(zlib.compress(token_bytes))

    return compression_ratio
Compute the ratio of raw token byte length to zlib-compressed token byte length
_retrieve_compression_ratio
python
THUDM/GLM-4-Voice
speech_tokenizer/generation_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/generation_whisper.py
Apache-2.0
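A self-contained sketch of the same arithmetic, taking a plain Python list instead of a tensor (the original calls `tokens.tolist()` first). Repetitive token sequences compress well and therefore score a high ratio, which is the usual signal for degenerate, looping transcriptions; the vocabulary size below matches Whisper's but is otherwise just an example value.

```python
import math
import zlib

def compression_ratio(tokens, vocab_size):
    # Pack each token id into the minimal little-endian byte width for the
    # vocabulary, then compare raw byte length against the compressed length.
    length = int(math.log2(vocab_size) / 8) + 1
    token_bytes = b"".join(t.to_bytes(length, "little") for t in tokens)
    return len(token_bytes) / len(zlib.compress(token_bytes))

# A looping sequence compresses far better than a varied one.
looping = compression_ratio([7] * 200, vocab_size=51865)
varied = compression_ratio(list(range(200)), vocab_size=51865)
```

A generation loop can compare the ratio against a threshold (Whisper uses a `compression_ratio_threshold` of 2.4 by default) to reject hallucinated segments.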
def _prepare_4d_causal_attention_mask_with_cache_position( attention_mask: torch.Tensor, sequence_length: int, target_length: int, dtype: torch.dtype, device: torch.device, min_dtype: float, cache_position: torch.Tensor, batch_size: int, ): """ Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape `(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing. Args: attention_mask (`torch.Tensor`): A 2D attention mask of shape `(batch_size, key_value_length)` or a 4D attention mask of shape `(batch_size, 1, query_length, key_value_length)`. sequence_length (`int`): The sequence length being processed. target_length (`int`): The target length: when generating with static cache, the mask should be as long as the static cache, to account for the 0 padding, the part of the cache that is not filled yet. dtype (`torch.dtype`): The dtype to use for the 4D attention mask. device (`torch.device`): The device to place the 4D attention mask on. min_dtype (`float`): The minimum value representable with the dtype `dtype`. cache_position (`torch.Tensor`): Indices depicting the position of the input sequence tokens in the sequence. batch_size (`torch.Tensor`): Batch size. """ if attention_mask is not None and attention_mask.dim() == 4: # In this case we assume that the mask comes already in inverted form and requires no inversion or slicing. 
causal_mask = attention_mask else: causal_mask = torch.full((sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device) if sequence_length != 1: causal_mask = torch.triu(causal_mask, diagonal=1) causal_mask *= torch.arange(target_length, device=device) > cache_position.reshape(-1, 1) causal_mask = causal_mask[None, None, :, :].expand(batch_size, 1, -1, -1) if attention_mask is not None: causal_mask = causal_mask.clone() # copy to contiguous memory for in-place edit mask_length = attention_mask.shape[-1] padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :] padding_mask = padding_mask == 0 causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill( padding_mask, min_dtype ) return causal_mask
Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape `(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing. Args: attention_mask (`torch.Tensor`): A 2D attention mask of shape `(batch_size, key_value_length)` or a 4D attention mask of shape `(batch_size, 1, query_length, key_value_length)`. sequence_length (`int`): The sequence length being processed. target_length (`int`): The target length: when generating with static cache, the mask should be as long as the static cache, to account for the 0 padding, the part of the cache that is not filled yet. dtype (`torch.dtype`): The dtype to use for the 4D attention mask. device (`torch.device`): The device to place the 4D attention mask on. min_dtype (`float`): The minimum value representable with the dtype `dtype`. cache_position (`torch.Tensor`): Indices depicting the position of the input sequence tokens in the sequence. batch_size (`torch.Tensor`): Batch size.
_prepare_4d_causal_attention_mask_with_cache_position
python
THUDM/GLM-4-Voice
speech_tokenizer/modeling_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/modeling_whisper.py
Apache-2.0
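The mask construction above can be sketched in NumPy as a 2-D simplification of the 4-D tensor: start fully blocked, carve out the causal region, and additionally block cache slots beyond each query's absolute position. This sketch uses `-1e9` in place of the dtype minimum and omits the padding-mask merge from the original.

```python
import numpy as np

def causal_mask_sketch(sequence_length, target_length, cache_position, min_value=-1e9):
    # Start with every key position blocked.
    mask = np.full((sequence_length, target_length), min_value, dtype=np.float32)
    if sequence_length != 1:
        # Unblock the diagonal and below (each query sees itself and the past).
        mask = np.triu(mask, k=1)
    # Re-block key positions that lie beyond each query's absolute position.
    mask = mask * (np.arange(target_length) > cache_position.reshape(-1, 1))
    return mask

# One-token generation step: the query sits at absolute position 2 of a
# length-4 static cache, so cache slots 0..2 are visible and slot 3 is not.
step_mask = causal_mask_sketch(1, 4, np.array([2]))
```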
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int): """ Shift input ids one token to the right. """ shifted_input_ids = input_ids.new_zeros(input_ids.shape) shifted_input_ids[:, 1:] = input_ids[:, :-1].clone() shifted_input_ids[:, 0] = decoder_start_token_id if pad_token_id is None: raise ValueError("self.model.config.pad_token_id has to be defined.") # replace possible -100 values in labels by `pad_token_id` shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) return shifted_input_ids
Shift input ids one token to the right.
shift_tokens_right
python
THUDM/GLM-4-Voice
speech_tokenizer/modeling_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/modeling_whisper.py
Apache-2.0
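A NumPy mirror of the shifting behavior above: prepend the decoder start token, drop the last position, and replace any `-100` label fill with padding. The token ids below are arbitrary example values.

```python
import numpy as np

def shift_tokens_right_np(input_ids, pad_token_id, decoder_start_token_id):
    # Shift right by one: position 0 becomes the decoder start token and
    # the last input position falls off the end.
    shifted = np.zeros_like(input_ids)
    shifted[:, 1:] = input_ids[:, :-1]
    shifted[:, 0] = decoder_start_token_id
    # -100 is the ignore index used for loss masking, not a real token id.
    shifted[shifted == -100] = pad_token_id
    return shifted

labels = np.array([[5, 9, -100, -100]])
decoder_input_ids = shift_tokens_right_np(labels, pad_token_id=0, decoder_start_token_id=1)
```

This is how teacher-forced decoder inputs are derived from labels in encoder-decoder training.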
def _compute_mask_indices( shape: Tuple[int, int], mask_prob: float, mask_length: int, attention_mask: Optional[torch.LongTensor] = None, min_masks: int = 0, ) -> np.ndarray: """ Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on CPU as part of the preprocessing during training. Args: shape: The shape for which to compute masks. This should be of a tuple of size 2 where the first element is the batch size and the second element is the length of the axis to span. mask_prob: The percentage of the whole axis (between 0 and 1) which will be masked. The number of independently generated mask spans of length `mask_length` is computed by `mask_prob*shape[1]/mask_length`. Note that due to overlaps, `mask_prob` is an upper bound and the actual percentage will be smaller. mask_length: size of the mask min_masks: minimum number of masked spans attention_mask: A (right-padded) attention mask which independently shortens the feature axis of each batch dimension. 
""" batch_size, sequence_length = shape if mask_length < 1: raise ValueError("`mask_length` has to be bigger than 0.") if mask_length > sequence_length: raise ValueError( f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length}" f" and `sequence_length`: {sequence_length}`" ) # epsilon is used for probabilistic rounding epsilon = np.random.rand(1).item() def compute_num_masked_span(input_length): """Given input length, compute how many spans should be masked""" num_masked_span = int(mask_prob * input_length / mask_length + epsilon) num_masked_span = max(num_masked_span, min_masks) # make sure num masked span <= sequence_length if num_masked_span * mask_length > sequence_length: num_masked_span = sequence_length // mask_length # make sure num_masked span is also <= input_length - (mask_length - 1) if input_length - (mask_length - 1) < num_masked_span: num_masked_span = max(input_length - (mask_length - 1), 0) return num_masked_span # compute number of masked spans in batch input_lengths = ( attention_mask.sum(-1).detach().tolist() if attention_mask is not None else [sequence_length for _ in range(batch_size)] ) # SpecAugment mask to fill spec_aug_mask = np.zeros((batch_size, sequence_length), dtype=bool) spec_aug_mask_idxs = [] max_num_masked_span = compute_num_masked_span(sequence_length) if max_num_masked_span == 0: return spec_aug_mask for input_length in input_lengths: # compute num of masked spans for this input num_masked_span = compute_num_masked_span(input_length) # get random indices to mask spec_aug_mask_idx = np.random.choice( np.arange(input_length - (mask_length - 1)), num_masked_span, replace=False ) # pick first sampled index that will serve as a dummy index to pad vector # to ensure same dimension for all batches due to probabilistic rounding # Picking first sample just pads those vectors twice. 
if len(spec_aug_mask_idx) == 0: # this case can only happen if `input_length` is strictly smaller than # `sequence_length` in which case the last token has to be a padding # token which we can use as a dummy mask id dummy_mask_idx = sequence_length - 1 else: dummy_mask_idx = spec_aug_mask_idx[0] spec_aug_mask_idx = np.concatenate( [spec_aug_mask_idx, np.ones(max_num_masked_span - num_masked_span, dtype=np.int32) * dummy_mask_idx] ) spec_aug_mask_idxs.append(spec_aug_mask_idx) spec_aug_mask_idxs = np.array(spec_aug_mask_idxs) # expand masked indices to masked spans spec_aug_mask_idxs = np.broadcast_to( spec_aug_mask_idxs[:, :, None], (batch_size, max_num_masked_span, mask_length) ) spec_aug_mask_idxs = spec_aug_mask_idxs.reshape(batch_size, max_num_masked_span * mask_length) # add offset to the starting indexes so that indexes now create a span offsets = np.arange(mask_length)[None, None, :] offsets = np.broadcast_to(offsets, (batch_size, max_num_masked_span, mask_length)).reshape( batch_size, max_num_masked_span * mask_length ) spec_aug_mask_idxs = spec_aug_mask_idxs + offsets # ensure that we cannot have indices larger than sequence_length if spec_aug_mask_idxs.max() > sequence_length - 1: spec_aug_mask_idxs[spec_aug_mask_idxs > sequence_length - 1] = sequence_length - 1 # scatter indices to mask np.put_along_axis(spec_aug_mask, spec_aug_mask_idxs, 1, -1) return spec_aug_mask
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on CPU as part of the preprocessing during training. Args: shape: The shape for which to compute masks. This should be of a tuple of size 2 where the first element is the batch size and the second element is the length of the axis to span. mask_prob: The percentage of the whole axis (between 0 and 1) which will be masked. The number of independently generated mask spans of length `mask_length` is computed by `mask_prob*shape[1]/mask_length`. Note that due to overlaps, `mask_prob` is an upper bound and the actual percentage will be smaller. mask_length: size of the mask min_masks: minimum number of masked spans attention_mask: A (right-padded) attention mask which independently shortens the feature axis of each batch dimension.
_compute_mask_indices
python
THUDM/GLM-4-Voice
speech_tokenizer/modeling_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/modeling_whisper.py
Apache-2.0
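A stripped-down sketch of the span masking above: sample span starts per batch row and mask `mask_length` positions from each. It keeps the `mask_prob * sequence_length / mask_length` span count but omits the dummy-index padding and probabilistic-rounding bookkeeping of the original; the shapes and probabilities are example values.

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, sequence_length = 2, 20
mask_prob, mask_length = 0.3, 4

# Same span-count formula as the helper above (epsilon rounding omitted).
num_spans = int(mask_prob * sequence_length / mask_length)
spec_aug_mask = np.zeros((batch_size, sequence_length), dtype=bool)
for b in range(batch_size):
    # Sample span starts so every span fits inside the sequence.
    starts = rng.choice(sequence_length - (mask_length - 1), size=num_spans, replace=False)
    for s in starts:
        spec_aug_mask[b, s:s + mask_length] = True
```

Because spans may overlap in the full algorithm, `mask_prob` is only an upper bound on the masked fraction.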
def compute_num_masked_span(input_length): """Given input length, compute how many spans should be masked""" num_masked_span = int(mask_prob * input_length / mask_length + epsilon) num_masked_span = max(num_masked_span, min_masks) # make sure num masked span <= sequence_length if num_masked_span * mask_length > sequence_length: num_masked_span = sequence_length // mask_length # make sure num_masked span is also <= input_length - (mask_length - 1) if input_length - (mask_length - 1) < num_masked_span: num_masked_span = max(input_length - (mask_length - 1), 0) return num_masked_span
Given input length, compute how many spans should be masked
compute_num_masked_span
python
THUDM/GLM-4-Voice
speech_tokenizer/modeling_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/modeling_whisper.py
Apache-2.0
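The clamping logic of the closure above, restated as a standalone function with the probabilistic rounding term `epsilon` made an explicit argument (in the original it is sampled once per call of the enclosing function):

```python
def num_masked_span_sketch(input_length, sequence_length, mask_prob, mask_length,
                           min_masks=0, epsilon=0.0):
    # Base count, then three clamps applied in the same order as the original.
    num = int(mask_prob * input_length / mask_length + epsilon)
    num = max(num, min_masks)
    # Spans must not cover more than the whole sequence.
    if num * mask_length > sequence_length:
        num = sequence_length // mask_length
    # Each span start needs room for a full span inside the unpadded input.
    if input_length - (mask_length - 1) < num:
        num = max(input_length - (mask_length - 1), 0)
    return num
```

For example, an unpadded input shorter than one span yields zero spans regardless of `mask_prob`.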
def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, layer_head_mask: torch.Tensor, output_attentions: bool = False, ) -> torch.Tensor: """ Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. """ residual = hidden_states hidden_states = self.self_attn_layer_norm(hidden_states) hidden_states, attn_weights, _ = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask if not self.is_causal else None, layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) hidden_states = residual + hidden_states residual = hidden_states hidden_states = self.final_layer_norm(hidden_states) hidden_states = self.activation_fn(self.fc1(hidden_states)) hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training) hidden_states = self.fc2(hidden_states) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) hidden_states = residual + hidden_states if hidden_states.dtype == torch.float16 and ( torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any() ): clamp_value = torch.finfo(hidden_states.dtype).max - 1000 hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) outputs = (hidden_states,) if output_attentions: outputs += (attn_weights,) return outputs
Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
forward
python
THUDM/GLM-4-Voice
speech_tokenizer/modeling_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/modeling_whisper.py
Apache-2.0
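The float16 overflow guard at the end of this layer can be demonstrated in isolation: values that overflow to `inf` in half precision are clamped back just inside the representable range. This is a NumPy sketch; the original uses `torch.finfo` and `torch.clamp` on the layer's hidden states.

```python
import numpy as np

# 70000 exceeds float16's maximum (65504), so the cast produces inf.
hidden = np.array([70000.0, -70000.0, 1.0], dtype=np.float32).astype(np.float16)

if np.isinf(hidden).any() or np.isnan(hidden).any():
    # Same guard as the layer above: pull values just inside the dtype maximum.
    clamp_value = float(np.finfo(np.float16).max) - 1000
    hidden = np.clip(hidden, -clamp_value, clamp_value)
```

The 1000 margin leaves headroom so that subsequent additions do not immediately overflow again.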
def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, layer_head_mask: Optional[torch.Tensor] = None, cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_value: Optional[EncoderDecoderCache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, cache_position: Optional[torch.LongTensor] = None, ) -> torch.Tensor: """ Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. encoder_hidden_states (`torch.FloatTensor`): cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size `(encoder_attention_heads,)`. cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of size `(decoder_attention_heads,)`. past_key_value (`Tuple(torch.FloatTensor)`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. 
""" residual = hidden_states hidden_states = self.self_attn_layer_norm(hidden_states) # Self Attention hidden_states, self_attn_weights, present_key_value = self.self_attn( hidden_states=hidden_states, past_key_value=past_key_value, attention_mask=attention_mask, layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) hidden_states = residual + hidden_states # Cross-Attention Block cross_attn_weights = None if encoder_hidden_states is not None: residual = hidden_states hidden_states = self.encoder_attn_layer_norm(hidden_states) hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, layer_head_mask=cross_attn_layer_head_mask, past_key_value=past_key_value, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) hidden_states = residual + hidden_states # add cross-attn to positions 1 of present_key_value tuple present_key_value = (present_key_value, cross_attn_present_key_value) # Fully Connected residual = hidden_states hidden_states = self.final_layer_norm(hidden_states) hidden_states = self.activation_fn(self.fc1(hidden_states)) hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training) hidden_states = self.fc2(hidden_states) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) hidden_states = residual + hidden_states outputs = (hidden_states,) if output_attentions: outputs += (self_attn_weights, cross_attn_weights) if use_cache: outputs += (present_key_value,) return outputs
Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. encoder_hidden_states (`torch.FloatTensor`): cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size `(encoder_attention_heads,)`. cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of size `(decoder_attention_heads,)`. past_key_value (`Tuple(torch.FloatTensor)`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
forward
python
THUDM/GLM-4-Voice
speech_tokenizer/modeling_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/modeling_whisper.py
Apache-2.0
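The cross-attention wiring in this decoder layer — queries from the decoder, keys and values from the encoder output — can be sketched as single-head scaled dot-product attention. Projection matrices, multiple heads, masking, and caching are omitted for brevity; shapes are example values.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(decoder_states, encoder_states):
    # Queries come from the decoder; keys and values are the encoder output.
    d = decoder_states.shape[-1]
    scores = decoder_states @ encoder_states.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores) @ encoder_states

rng = np.random.default_rng(0)
decoder_h = rng.standard_normal((1, 4, 8))    # (batch, tgt_len, embed_dim)
encoder_h = rng.standard_normal((1, 10, 8))   # (batch, src_len, embed_dim)
attended = cross_attention(decoder_h, encoder_h)
```

The output keeps the decoder's sequence length while mixing in information from all encoder positions, which is why the layer caches the encoder-side key/value states across decoding steps.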
def forward( self, input_features, attention_mask=None, head_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, quantized_token_ids=None ): r""" Args: input_features (`torch.LongTensor` of shape `(batch_size, feature_size, sequence_length)`): Float values of mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_features`, the [`AutoFeatureExtractor`] should be used for extracting the mel features, padding and conversion into a tensor of type `torch.FloatTensor`. See [`~WhisperFeatureExtractor.__call__`] attention_mask (`torch.Tensor`, *optional*): Whisper does not support masking of the `input_features`, this argument is preserved for compatibility, but it is not used. By default the silence in the input log mel spectrogram is ignored. head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
""" # expected_seq_length = self.config.max_source_positions * self.conv1.stride[0] * self.conv2.stride[0] # if input_features.shape[-1] != expected_seq_length: # raise ValueError( # f"Whisper expects the mel input features to be of length {expected_seq_length}, but found {input_features.shape[-1]}. Make sure to pad the input mel features to {expected_seq_length}." # ) batch_size, feature_size, seq_length = input_features.shape seq_length = seq_length // (self.conv1.stride[0] * self.conv2.stride[0]) attention_mask = attention_mask[:, :: self.conv1.stride[0] * self.conv2.stride[0]] if self.config.quantize_causal_block_size is not None: extended_attention_mask = self.get_block_causal_attention_mask(attention_mask, block_size=self.config.quantize_causal_block_size) else: extended_attention_mask = self.get_extended_attention_mask(attention_mask, (batch_size, seq_length)) output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict inputs_embeds = nn.functional.gelu(self.conv1(input_features)) inputs_embeds = nn.functional.gelu(self.conv2(inputs_embeds)) inputs_embeds = inputs_embeds.permute(0, 2, 1) embed_pos = self.embed_positions.weight hidden_states = inputs_embeds + embed_pos[:seq_length] hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None assert attention_mask.shape[-1] == hidden_states.shape[1] # check if head_mask has a correct number of layers specified if desired if head_mask is not None: assert head_mask.size()[0] == ( len(self.layers) ), f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}." 
for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) to_drop = False if self.training: dropout_probability = torch.rand([]) if dropout_probability < self.layerdrop: # skip the layer to_drop = True if to_drop: layer_outputs = (None, None) else: if self.gradient_checkpointing and self.training: layer_outputs = self._gradient_checkpointing_func( encoder_layer.__call__, hidden_states, extended_attention_mask, (head_mask[idx] if head_mask is not None else None), output_attentions, ) else: layer_outputs = encoder_layer( hidden_states, extended_attention_mask, layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) hidden_states = layer_outputs[0] if output_attentions: all_attentions = all_attentions + (layer_outputs[1],) if idx + 1 == self.config.pooling_position and self.config.pooling_kernel_size is not None: hidden_states = hidden_states.permute(0, 2, 1) if hidden_states.shape[-1] % self.config.pooling_kernel_size != 0: hidden_states = torch.nn.functional.pad(hidden_states, ( 0, self.config.pooling_kernel_size - hidden_states.shape[-1] % self.config.pooling_kernel_size)) hidden_states = self.pooling_layer(hidden_states).permute(0, 2, 1) attention_mask = attention_mask[:, ::self.config.pooling_kernel_size] if self.config.quantize_causal_block_size is not None: extended_attention_mask = self.get_block_causal_attention_mask(attention_mask, block_size=self.config.quantize_causal_block_size // self.config.pooling_kernel_size) else: extended_attention_mask = self.get_extended_attention_mask(attention_mask, ( batch_size, seq_length // self.config.pooling_kernel_size)) if idx + 1 == self.config.quantize_position and self.config.quantize_vocab_size is not None: if quantized_token_ids is not None: hidden_states = self.codebook(quantized_token_ids) else: hidden_quantized, indices_flat, 
distances = vector_quantize(hidden_states, self.codebook.weight) quantized_token_ids = indices_flat.reshape(batch_size, hidden_quantized.shape[1]) if self.training: encodings = torch.nn.functional.one_hot(indices_flat, self.config.quantize_vocab_size).float() encodings = encodings * attention_mask.reshape(-1, 1) n = torch.sum(encodings, dim=0) torch.distributed.all_reduce(n, op=torch.distributed.ReduceOp.SUM) self.num_active_codes = n.nonzero().shape[0] if self.config.quantize_ema_decay: hidden_flat = hidden_states.detach().float().reshape(-1, hidden_states.shape[-1]) with torch.autocast(device_type='cuda', dtype=torch.float32): dw = torch.matmul(encodings.t(), hidden_flat) torch.distributed.all_reduce(dw, op=torch.distributed.ReduceOp.SUM) self.ema_count = self.ema_count * self.config.quantize_ema_decay + ( 1 - self.config.quantize_ema_decay) * n total_count = torch.sum(self.ema_count) self.ema_count = (self.ema_count + 1e-5) / ( total_count + self.config.quantize_vocab_size * 1e-5) * total_count self.ema_weight = self.ema_weight * self.config.quantize_ema_decay + ( 1 - self.config.quantize_ema_decay) * dw self.codebook.weight.data = self.ema_weight / self.ema_count.unsqueeze(1) self.quantize_loss = self.config.quantize_loss_scale * self.config.quantize_commit_coefficient * mse_loss_with_mask( hidden_states, hidden_quantized.detach(), attention_mask) self.quantize_ema_count += 1 if self.config.quantize_restart_interval is not None and self.quantize_ema_count % self.config.quantize_restart_interval == 0: rank, world_size = torch.distributed.get_rank(), torch.distributed.get_world_size() segment_vocab_size = self.config.quantize_vocab_size // world_size start_idx = segment_vocab_size * rank ema_count_segment = self.ema_count[start_idx: start_idx + segment_vocab_size] threshold = 1 * ( self.config.quantize_ema_decay ** self.config.quantize_restart_interval) update_indices = (ema_count_segment < threshold).nonzero()[:, 0] + start_idx num_update = 
update_indices.shape[0] mask_flat = attention_mask.reshape(-1) > 0 hidden_selected = hidden_flat[mask_flat] hidden_update = hidden_selected[random.sample(range(len(hidden_selected)), num_update)] num_update = torch.as_tensor([num_update], dtype=torch.long, device=hidden_states.device) num_update_list = [torch.as_tensor([0], dtype=torch.long, device=hidden_states.device) for _ in range(world_size)] torch.distributed.all_gather(num_update_list, num_update) update_indices_list = [ torch.zeros(num.item(), dtype=torch.long, device=hidden_states.device) for num in num_update_list] torch.distributed.all_gather(update_indices_list, update_indices) update_indices = torch.cat(update_indices_list) hidden_update_list = [ torch.zeros(num.item(), hidden_flat.shape[-1], dtype=hidden_update.dtype, device=hidden_states.device) for num in num_update_list] torch.distributed.all_gather(hidden_update_list, hidden_update) hidden_update = torch.cat(hidden_update_list) self.codebook.weight.data[update_indices] = hidden_update self.ema_count[update_indices] = 1 self.ema_weight[update_indices] = hidden_update if torch.distributed.get_rank() == 0: print(f"restart {len(update_indices)} tokens") else: loss = self.config.quantize_loss_scale * ( self.config.quantize_commit_coefficient * mse_loss_with_mask(hidden_states, hidden_quantized.detach(), attention_mask) + mse_loss_with_mask( hidden_quantized, hidden_states.detach(), attention_mask)) self.quantize_loss = loss hidden_states = hidden_states + (hidden_quantized - hidden_states).detach() else: hidden_states = hidden_quantized hidden_states = hidden_states + self.embed_positions2.weight[:hidden_states.shape[1]] if idx + 1 == self.save_hidden_position: import numpy as np import uuid to_save = [] for batch_idx, hidden_state in enumerate(hidden_states): for seq_idx, hidden in enumerate(hidden_state): if attention_mask[batch_idx, seq_idx]: to_save.append(hidden.detach().cpu().numpy()) np.save(os.path.join(self.save_hidden_dir, 
f"{str(uuid.uuid4())}.npy"), to_save) if not self.config.quantize_encoder_only: hidden_states = self.layer_norm(hidden_states) if output_hidden_states: encoder_states = encoder_states + (hidden_states,) if not return_dict: return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None) return QuantizedBaseModelOutput( last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions, quantized_token_ids=quantized_token_ids, )
Args: input_features (`torch.LongTensor` of shape `(batch_size, feature_size, sequence_length)`): Float values of mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_features`, the [`AutoFeatureExtractor`] should be used for extracting the mel features, padding and conversion into a tensor of type `torch.FloatTensor`. See [`~WhisperFeatureExtractor.__call__`] attention_mask (`torch.Tensor`, *optional*): Whisper does not support masking of the `input_features`, this argument is preserved for compatibility, but it is not used. By default the silence in the input log mel spectrogram is ignored. head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
forward
python
THUDM/GLM-4-Voice
speech_tokenizer/modeling_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/modeling_whisper.py
Apache-2.0
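The vector quantization at the heart of this encoder — nearest-codebook lookup plus the straight-through estimator — can be sketched in NumPy. The codebook size and feature dimensions below are made-up small values; the original works in torch, maintains EMA codebook updates, and restarts dead codes, none of which is shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.standard_normal((16, 4))   # (quantize_vocab_size, dim), example sizes
hidden = rng.standard_normal((2, 5, 4))   # (batch, seq, dim)

# Nearest-neighbour lookup: squared distance to every codebook entry.
flat = hidden.reshape(-1, hidden.shape[-1])
distances = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
indices = distances.argmin(-1)
quantized = codebook[indices].reshape(hidden.shape)

# Straight-through estimator: the forward value is the quantized vector, while
# in a real autodiff setting gradients flow through `hidden` unchanged
# (the residual is wrapped in .detach() in the torch code above).
ste_output = hidden + (quantized - hidden)
```

The `indices` tensor is what the encoder returns as `quantized_token_ids`, turning continuous speech features into discrete tokens.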
def forward( self, input_ids=None, attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, head_mask=None, cross_attn_head_mask=None, past_key_values=None, inputs_embeds=None, position_ids=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, cache_position=None, ): r""" Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [`WhisperTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. encoder_attention_mask (`torch.LongTensor` of shape `(batch_size, encoder_sequence_length)`, *optional*): head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules in encoder to avoid performing cross-attention on hidden heads. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. 
past_key_values (`EncoderDecoderCache` or `tuple(tuple(torch.FloatTensor))`, *optional*): Pre-computed hidden-states that can be used to speed up auto-regressive (sequential) decoding. There are four sets of pre-computed hidden-states: key and values states in the self-attention blocks (2) and in the cross-attention blocks (2). The `past_key_values` are returned when `use_cache=True` is passed or when `config.use_cache=True`. Two formats are allowed: - An [`~cache_utils.EncoderDecoderCache`] instance; - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*): Indices depicting the position of the input sequence tokens in the sequence. It is used to update the cache in the correct position and to infer the complete sequence length. """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict # retrieve input_ids and inputs_embeds if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time") elif input_ids is not None: input_shape = input_ids.size() input_ids = input_ids.view(-1, input_shape[-1]) elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] else: raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds") if inputs_embeds is None: inputs_embeds = self.embed_tokens(input_ids) assert encoder_attention_mask.shape[-1] == encoder_hidden_states.shape[1] encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) return_legacy_cache = False return_self_attention_cache = False if use_cache or past_key_values is not None: if isinstance(past_key_values, Cache) and not isinstance(past_key_values, EncoderDecoderCache): return_self_attention_cache = True past_key_values = EncoderDecoderCache(past_key_values, DynamicCache()) elif not isinstance(past_key_values, EncoderDecoderCache): return_legacy_cache = True logger.warning_once( "Passing a tuple of `past_key_values` is deprecated and will be removed in Transformers v4.43.0. " "You should pass an instance of `EncoderDecoderCache` instead, e.g. 
" "`past_key_values=EncoderDecoderCache.from_legacy_cache(past_key_values)`." ) past_key_values = EncoderDecoderCache.from_legacy_cache(past_key_values) past_key_values_length = 0 if cache_position is not None: past_key_values_length = cache_position[0] elif past_key_values is not None: past_key_values_length = past_key_values.get_seq_length() if cache_position is None: cache_position = torch.arange( past_key_values_length, past_key_values_length + input_shape[1], device=inputs_embeds.device ) if position_ids is None: position_ids = cache_position.unsqueeze(0) # embed positions if input_ids is not None: positions = self.embed_positions( input_ids, past_key_values_length=past_key_values_length, position_ids=position_ids ) else: positions = self.embed_positions( inputs_embeds, past_key_values_length=past_key_values_length, position_ids=position_ids ) hidden_states = inputs_embeds + positions.to(inputs_embeds.device) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) causal_mask = self._update_causal_mask( attention_mask, inputs_embeds, cache_position, past_key_values.self_attention_cache if past_key_values is not None else None, output_attentions, ) if self.gradient_checkpointing and self.training: if use_cache: logger.warning_once( "`use_cache = True` is incompatible with gradient checkpointing. Setting `use_cache = False`..." 
) use_cache = False # decoder layers all_hidden_states = () if output_hidden_states else None all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): if attn_mask is not None: assert attn_mask.size()[0] == (len(self.layers)), ( f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" f" {head_mask.size()[0]}." ) for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) if output_hidden_states: all_hidden_states += (hidden_states,) if self.training: dropout_probability = torch.rand([]) if dropout_probability < self.layerdrop: continue if self.gradient_checkpointing and self.training: layer_outputs = self._gradient_checkpointing_func( decoder_layer.__call__, hidden_states, causal_mask, encoder_hidden_states, encoder_extended_attention_mask, # encoder attention mask head_mask[idx] if head_mask is not None else None, cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None, None, # past_key_value output_attentions, use_cache, cache_position, ) else: layer_outputs = decoder_layer( hidden_states, attention_mask=causal_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, layer_head_mask=(head_mask[idx] if head_mask is not None else None), cross_attn_layer_head_mask=( cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None ), past_key_value=past_key_values if use_cache else None, output_attentions=output_attentions, use_cache=use_cache, cache_position=cache_position, ) hidden_states = layer_outputs[0] if output_attentions: all_self_attns += (layer_outputs[1],) if encoder_hidden_states is not None: 
all_cross_attentions += (layer_outputs[2],) hidden_states = self.layer_norm(hidden_states) # add hidden states from the last decoder layer if output_hidden_states: all_hidden_states += (hidden_states,) next_cache = past_key_values if use_cache else None if return_self_attention_cache: next_cache = past_key_values.self_attention_cache if return_legacy_cache: next_cache = past_key_values.to_legacy_cache() if not return_dict: return tuple( v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns, all_cross_attentions] if v is not None ) return BaseModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, past_key_values=next_cache, hidden_states=all_hidden_states, attentions=all_self_attns, cross_attentions=all_cross_attentions, )
Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [`WhisperTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. encoder_attention_mask (`torch.LongTensor` of shape `(batch_size, encoder_sequence_length)`, *optional*): Mask to avoid performing cross-attention on padding token indices of the encoder input. Mask values selected in `[0, 1]`. head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing cross-attention on hidden heads. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. past_key_values (`EncoderDecoderCache` or `tuple(tuple(torch.FloatTensor))`, *optional*): Pre-computed hidden-states that can be used to speed up auto-regressive (sequential) decoding. There are four sets of pre-computed hidden-states: key and values states in the self-attention blocks (2) and in the cross-attention blocks (2). 
The `past_key_values` are returned when `use_cache=True` is passed or when `config.use_cache=True`. Two formats are allowed: - An [`~cache_utils.EncoderDecoderCache`] instance; - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*): Indices depicting the position of the input sequence tokens in the sequence. It is used to update the cache in the correct position and to infer the complete sequence length.
forward
python
THUDM/GLM-4-Voice
speech_tokenizer/modeling_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/modeling_whisper.py
Apache-2.0
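The decoder above derives `cache_position` with `torch.arange(past_key_values_length, past_key_values_length + input_shape[1])`, so only newly fed tokens receive fresh positions during incremental decoding. A minimal pure-Python sketch of that bookkeeping (hypothetical helper name, no torch dependency):

```python
def next_cache_positions(past_length, new_token_count):
    """Positions assigned to newly fed decoder tokens, given how many
    tokens are already stored in the key/value cache. Mirrors the
    torch.arange(past, past + new) call in the decoder forward above."""
    return list(range(past_length, past_length + new_token_count))

# Prefill: 5 prompt tokens enter an empty cache -> positions 0..4.
print(next_cache_positions(0, 5))   # [0, 1, 2, 3, 4]

# Incremental step: with 5 cached tokens, the single new token sits at 5.
print(next_cache_positions(5, 1))   # [5]
```

This is also why, once `past_key_values` is populated, the caller only needs to pass the last `decoder_input_ids` of shape `(batch_size, 1)`.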
def _mask_input_features( self, input_features: torch.FloatTensor, attention_mask: Optional[torch.LongTensor] = None, ): """ Masks extracted features along time axis and/or along feature axis according to [SpecAugment](https://arxiv.org/abs/1904.08779). """ # `config.apply_spec_augment` can set masking to False if not getattr(self.config, "apply_spec_augment", True): return input_features # generate indices & apply SpecAugment along time axis batch_size, hidden_size, sequence_length = input_features.size() if self.config.mask_time_prob > 0 and self.training: # generate indices & apply SpecAugment along time axis mask_time_indices = _compute_mask_indices( (batch_size, sequence_length), mask_prob=self.config.mask_time_prob, mask_length=self.config.mask_time_length, attention_mask=attention_mask, min_masks=self.config.mask_time_min_masks, ) mask_time_indices = torch.tensor(mask_time_indices, device=input_features.device, dtype=torch.bool) mask_time_indices = mask_time_indices[:, None].expand(-1, hidden_size, -1) input_features[mask_time_indices] = 0 if self.config.mask_feature_prob > 0 and self.training: # generate indices & apply SpecAugment along feature axis mask_feature_indices = _compute_mask_indices( (batch_size, hidden_size), mask_prob=self.config.mask_feature_prob, mask_length=self.config.mask_feature_length, min_masks=self.config.mask_feature_min_masks, ) mask_feature_indices = torch.tensor(mask_feature_indices, device=input_features.device, dtype=torch.bool) input_features[mask_feature_indices] = 0 return input_features
Masks extracted features along time axis and/or along feature axis according to [SpecAugment](https://arxiv.org/abs/1904.08779).
_mask_input_features
python
THUDM/GLM-4-Voice
speech_tokenizer/modeling_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/modeling_whisper.py
Apache-2.0
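`_mask_input_features` delegates span selection to `_compute_mask_indices`. The following is a deliberately simplified pure-Python sketch of the SpecAugment idea it implements (choose span starts so roughly `mask_prob` of frames are covered, then zero whole spans); the function name and the span-count heuristic here are illustrative, not the exact upstream algorithm:

```python
import random

def spec_augment_time_mask(num_frames, mask_prob, mask_length, rng):
    """Simplified SpecAugment-style time masking: draw span starts so that
    roughly mask_prob of the frames get covered, then mark whole spans.
    Spans may overlap, so the true coverage can be lower than the target."""
    num_spans = int(mask_prob * num_frames / mask_length)
    masked = [False] * num_frames
    for _ in range(num_spans):
        start = rng.randrange(0, num_frames - mask_length + 1)
        for t in range(start, start + mask_length):
            masked[t] = True
    return masked

rng = random.Random(0)
mask = spec_augment_time_mask(num_frames=100, mask_prob=0.2, mask_length=10, rng=rng)
print(sum(mask))  # between 10 and 20 frames masked (2 spans of length 10)
```

In the real module the boolean indices are then expanded across the feature dimension and used to zero entries of `input_features`, and an `attention_mask` keeps padded frames out of the candidate starts.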
def forward( self, input_features: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.LongTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, head_mask: Optional[torch.Tensor] = None, decoder_head_mask: Optional[torch.Tensor] = None, cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Union[EncoderDecoderCache, Tuple[torch.FloatTensor]]] = None, decoder_inputs_embeds: Optional[Tuple[torch.FloatTensor]] = None, decoder_position_ids: Optional[Tuple[torch.LongTensor]] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, cache_position: Optional[torch.LongTensor] = None, quantized_token_ids: Optional[torch.LongTensor] = None ) -> Union[Tuple[torch.Tensor], Seq2SeqModelOutput]: r""" Returns: Example: ```python >>> import torch >>> from transformers import AutoFeatureExtractor, WhisperModel >>> from datasets import load_dataset >>> model = WhisperVQModel.from_pretrained("openai/whisper-base") >>> feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-base") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id >>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state >>> list(last_hidden_state.shape) [1, 2, 512] ```""" output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) use_cache = use_cache if 
use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict if encoder_outputs is None: input_features = self._mask_input_features(input_features, attention_mask=attention_mask) encoder_outputs = self.encoder( input_features, attention_mask=attention_mask, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, quantized_token_ids=quantized_token_ids ) # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): encoder_outputs = BaseModelOutput( last_hidden_state=encoder_outputs[0], hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, ) # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn) attention_mask = attention_mask[:, ::self.encoder.conv1.stride[0] * self.encoder.conv2.stride[0]] if self.encoder.config.pooling_kernel_size is not None: attention_mask = attention_mask[:, ::self.encoder.config.pooling_kernel_size] decoder_outputs = self.decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, encoder_attention_mask=attention_mask, encoder_hidden_states=encoder_outputs[0], head_mask=decoder_head_mask, cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, position_ids=decoder_position_ids, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, cache_position=cache_position, ) if not return_dict: return decoder_outputs + encoder_outputs return Seq2SeqModelOutput( last_hidden_state=decoder_outputs.last_hidden_state, past_key_values=decoder_outputs.past_key_values, decoder_hidden_states=decoder_outputs.hidden_states, 
decoder_attentions=decoder_outputs.attentions, cross_attentions=decoder_outputs.cross_attentions, encoder_last_hidden_state=encoder_outputs.last_hidden_state, encoder_hidden_states=encoder_outputs.hidden_states, encoder_attentions=encoder_outputs.attentions, )
Returns: Example: ```python >>> import torch >>> from transformers import AutoFeatureExtractor, WhisperModel >>> from datasets import load_dataset >>> model = WhisperVQModel.from_pretrained("openai/whisper-base") >>> feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-base") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id >>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state >>> list(last_hidden_state.shape) [1, 2, 512] ```
forward
python
THUDM/GLM-4-Voice
speech_tokenizer/modeling_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/modeling_whisper.py
Apache-2.0
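Before calling the decoder, `WhisperVQModel.forward` subsamples the frame-level padding mask with `attention_mask[:, ::conv1.stride * conv2.stride]` (and again by `pooling_kernel_size` when set) so its length matches the encoder's downsampled hidden states. A pure-Python sketch of that slicing, using plain lists instead of tensors (the stride values below assume the standard Whisper encoder, conv1 stride 1 and conv2 stride 2):

```python
def downsample_mask(mask, conv_strides, pooling_kernel_size=None):
    """Keep every k-th mask entry, where k is the product of the encoder's
    conv strides, then optionally subsample again by the pooling kernel,
    mirroring the stride slicing in the model forward above."""
    k = 1
    for s in conv_strides:
        k *= s
    out = mask[::k]
    if pooling_kernel_size is not None:
        out = out[::pooling_kernel_size]
    return out

# Standard Whisper: 3000 mel frames -> 1500 encoder states (strides 1 and 2).
frame_mask = [1] * 3000
print(len(downsample_mask(frame_mask, conv_strides=(1, 2))))                      # 1500

# With an additional pooling of 2 (as a VQ-encoder config might use): 1500 -> 750.
print(len(downsample_mask(frame_mask, conv_strides=(1, 2), pooling_kernel_size=2)))  # 750
```

Keeping the mask and the hidden states the same length is what makes the `assert encoder_attention_mask.shape[-1] == encoder_hidden_states.shape[1]` check in the decoder pass.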
def forward( self, input_features: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.LongTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, head_mask: Optional[torch.Tensor] = None, decoder_head_mask: Optional[torch.Tensor] = None, cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Union[EncoderDecoderCache, Tuple[torch.FloatTensor]]] = None, decoder_inputs_embeds: Optional[Tuple[torch.FloatTensor]] = None, decoder_position_ids: Optional[Tuple[torch.LongTensor]] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, cache_position: Optional[torch.LongTensor] = None, quantized_token_ids: Optional[torch.LongTensor] = None ) -> Union[Tuple[torch.Tensor], Seq2SeqLMOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. 
Returns: Example: ```python >>> import torch >>> from transformers import AutoProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en") >>> model = WhisperVQForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = processor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> generated_ids = model.generate(inputs=input_features) >>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> transcription ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.' ```""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict if labels is not None: if decoder_input_ids is None and decoder_inputs_embeds is None: decoder_input_ids = shift_tokens_right( labels, self.config.pad_token_id, self.config.decoder_start_token_id ) outputs = self.model( input_features, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, head_mask=head_mask, decoder_head_mask=decoder_head_mask, cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, decoder_inputs_embeds=decoder_inputs_embeds, decoder_position_ids=decoder_position_ids, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, cache_position=cache_position, quantized_token_ids=quantized_token_ids ) lm_logits = self.proj_out(outputs[0]) loss = None if labels is not None: loss_fct = CrossEntropyLoss() # move labels to correct device to enable PP labels = labels.to(lm_logits.device) loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.reshape(-1)) if self.training and self.model.encoder.quantize_loss 
is not None: loss = loss + self.model.encoder.quantize_loss if not return_dict: output = (lm_logits,) + outputs[1:] return ((loss,) + output) if loss is not None else output return Seq2SeqLMOutput( loss=loss, logits=lm_logits, past_key_values=outputs.past_key_values, decoder_hidden_states=outputs.decoder_hidden_states, decoder_attentions=outputs.decoder_attentions, cross_attentions=outputs.cross_attentions, encoder_last_hidden_state=outputs.encoder_last_hidden_state, encoder_hidden_states=outputs.encoder_hidden_states, encoder_attentions=outputs.encoder_attentions, )
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. Returns: Example: ```python >>> import torch >>> from transformers import AutoProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en") >>> model = WhisperVQForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = processor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> generated_ids = model.generate(inputs=input_features) >>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> transcription ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.' ```
forward
python
THUDM/GLM-4-Voice
speech_tokenizer/modeling_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/modeling_whisper.py
Apache-2.0
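When `labels` are provided but `decoder_input_ids` are not, the forward above builds them via `shift_tokens_right`. A rough pure-Python sketch of that transformation on a single sequence (the real helper operates on batched tensors):

```python
def shift_tokens_right(labels, pad_token_id, decoder_start_token_id):
    """Sketch of how decoder_input_ids are derived from labels: shift the
    sequence right, prepend the decoder start token, and replace the -100
    ignore index with the pad token so embeddings can be looked up."""
    shifted = [decoder_start_token_id] + labels[:-1]
    return [pad_token_id if t == -100 else t for t in shifted]

labels = [10, 11, 12, -100]
print(shift_tokens_right(labels, pad_token_id=0, decoder_start_token_id=50258))
# [50258, 10, 11, 12]
```

The result is the teacher-forcing input: at each position the decoder sees the previous target token, while the loss is still computed against the unshifted `labels`.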
def forward( self, input_ids: torch.LongTensor = None, attention_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Tuple[torch.FloatTensor]] = None, head_mask: Optional[torch.Tensor] = None, cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, cache_position: Optional[torch.LongTensor] = None, ) -> Union[Tuple, CausalLMOutputWithCrossAttentions]: r""" Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) encoder_outputs (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. 
cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. The two additional tensors are only required when the model is used as a decoder in a Sequence to Sequence model. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). 
Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. use_cache (`bool`, *optional*): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*): Indices depicting the position of the input sequence tokens in the sequence. It is used to update the cache in the correct position and to infer the complete sequence length. Returns: Example: ```python >>> from transformers import WhisperForCausalLM, WhisperForConditionalGeneration, WhisperProcessor >>> import torch >>> from datasets import load_dataset >>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2") >>> model = WhisperVQForConditionalGeneration.from_pretrained("openai/whisper-large-v2") >>> assistant_model = WhisperForCausalLM.from_pretrained("distil-whisper/distil-large-v2") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> input_features = processor( ... sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt" ... ).input_features >>> predicted_ids = model.generate(input_features, assistant_model=assistant_model) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0] >>> transcription ' Mr. 
Quilter is the apostle of the middle classes and we are glad to welcome his gospel.' ```""" output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict # If the user passed a tuple or `BaseModelOutput` for encoder_outputs, we extract only the hidden states if isinstance(encoder_outputs, (BaseModelOutput, tuple, list)): encoder_outputs = encoder_outputs[0] # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) outputs = self.model.decoder( input_ids=input_ids, attention_mask=attention_mask, encoder_hidden_states=encoder_outputs, head_mask=head_mask, cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, cache_position=cache_position, ) logits = self.proj_out(outputs[0]) loss = None if labels is not None: labels = labels.to(logits.device) loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) if not return_dict: output = (logits,) + outputs[1:] return (loss,) + output if loss is not None else output return CausalLMOutputWithCrossAttentions( loss=loss, logits=logits, past_key_values=outputs.past_key_values, hidden_states=outputs.hidden_states, attentions=outputs.attentions, cross_attentions=outputs.cross_attentions, )
Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) encoder_outputs (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. 
The two additional tensors are only required when the model is used as a decoder in a Sequence to Sequence model. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. use_cache (`bool`, *optional*): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*): Indices depicting the position of the input sequence tokens in the sequence. It is used to update the cache in the correct position and to infer the complete sequence length. Returns: Example: ```python >>> from transformers import WhisperForCausalLM, WhisperForConditionalGeneration, WhisperProcessor >>> import torch >>> from datasets import load_dataset >>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2") >>> assistant_model = WhisperForCausalLM.from_pretrained("distil-whisper/distil-large-v2") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> input_features = processor( ... sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt" ... ).input_features >>> predicted_ids = model.generate(input_features, assistant_model=assistant_model) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0] >>> transcription ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.' ```
forward
python
THUDM/GLM-4-Voice
speech_tokenizer/modeling_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/modeling_whisper.py
Apache-2.0
def forward( self, input_features: Optional[torch.LongTensor] = None, head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple[torch.Tensor], SequenceClassifierOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). Returns: Example: ```python >>> import torch >>> from transformers import AutoFeatureExtractor, WhisperForAudioClassification >>> from datasets import load_dataset >>> feature_extractor = AutoFeatureExtractor.from_pretrained("sanchit-gandhi/whisper-medium-fleurs-lang-id") >>> model = WhisperForAudioClassification.from_pretrained("sanchit-gandhi/whisper-medium-fleurs-lang-id") >>> ds = load_dataset("google/fleurs", "all", split="validation", streaming=True) >>> sample = next(iter(ds)) >>> inputs = feature_extractor( ... sample["audio"]["array"], sampling_rate=sample["audio"]["sampling_rate"], return_tensors="pt" ... ) >>> input_features = inputs.input_features >>> with torch.no_grad(): ... 
logits = model(input_features).logits >>> predicted_class_ids = torch.argmax(logits).item() >>> predicted_label = model.config.id2label[predicted_class_ids] >>> predicted_label 'Afrikaans' ```""" output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) if self.config.use_weighted_layer_sum: output_hidden_states = True elif output_hidden_states is None: output_hidden_states = self.config.output_hidden_states return_dict = return_dict if return_dict is not None else self.config.use_return_dict if encoder_outputs is None: encoder_outputs = self.encoder( input_features, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) if self.config.use_weighted_layer_sum: hidden_states = encoder_outputs[_HIDDEN_STATES_START_POSITION] hidden_states = torch.stack(hidden_states, dim=1) norm_weights = nn.functional.softmax(self.layer_weights, dim=-1) hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1) else: hidden_states = encoder_outputs[0] hidden_states = self.projector(hidden_states) pooled_output = hidden_states.mean(dim=1) logits = self.classifier(pooled_output) loss = None if labels is not None: loss_fct = CrossEntropyLoss() # move labels to correct device to enable PP labels = labels.to(logits.device) loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1)) if not return_dict: output = (logits,) + encoder_outputs[1:] return ((loss,) + output) if loss is not None else output return SequenceClassifierOutput( loss=loss, logits=logits, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, )
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). Returns: Example: ```python >>> import torch >>> from transformers import AutoFeatureExtractor, WhisperForAudioClassification >>> from datasets import load_dataset >>> feature_extractor = AutoFeatureExtractor.from_pretrained("sanchit-gandhi/whisper-medium-fleurs-lang-id") >>> model = WhisperForAudioClassification.from_pretrained("sanchit-gandhi/whisper-medium-fleurs-lang-id") >>> ds = load_dataset("google/fleurs", "all", split="validation", streaming=True) >>> sample = next(iter(ds)) >>> inputs = feature_extractor( ... sample["audio"]["array"], sampling_rate=sample["audio"]["sampling_rate"], return_tensors="pt" ... ) >>> input_features = inputs.input_features >>> with torch.no_grad(): ... logits = model(input_features).logits >>> predicted_class_ids = torch.argmax(logits).item() >>> predicted_label = model.config.id2label[predicted_class_ids] >>> predicted_label 'Afrikaans' ```
forward
python
THUDM/GLM-4-Voice
speech_tokenizer/modeling_whisper.py
https://github.com/THUDM/GLM-4-Voice/blob/master/speech_tokenizer/modeling_whisper.py
Apache-2.0
def NonMaxSuppression(boxes, scores, threshold): r"""Non-Maximum Suppression The algorithm begins by storing the highest-scoring bounding box, and eliminating any box whose intersection-over-union (IoU) with it is too great. The procedure repeats on the surviving boxes, and so on until there are no boxes left. The stored boxes are returned. NB: The function returns a tuple (mask, indices), where indices index into the input boxes and are sorted according to score, from highest to lowest. indices[i][mask[i]] gives the indices of the surviving boxes from the ith batch, sorted by score. Args: - boxes :math:`(N, n_boxes, 4)` - scores :math:`(N, n_boxes)` - threshold (float): IoU above which to eliminate boxes Outputs: - mask: :math:`(N, n_boxes)` - indices: :math:`(N, n_boxes)` Examples:: >>> boxes = torch.Tensor([[[10., 20., 20., 15.], >>> [24., 22., 50., 54.], >>> [10., 21., 20., 14.5]]]) >>> scores = torch.abs(torch.randn([1, 3])) >>> mask, indices = NonMaxSuppression(boxes, scores, 0.7) >>> #indices are SORTED according to score. >>> surviving_box_indices = indices[mask] """ if boxes.is_cuda: return lib.gpu.non_max_suppression(boxes, scores, threshold) else: return lib.cpu.non_max_suppression(boxes, scores, threshold)
Non-Maximum Suppression The algorithm begins by storing the highest-scoring bounding box, and eliminating any box whose intersection-over-union (IoU) with it is too great. The procedure repeats on the surviving boxes, and so on until there are no boxes left. The stored boxes are returned. NB: The function returns a tuple (mask, indices), where indices index into the input boxes and are sorted according to score, from highest to lowest. indices[i][mask[i]] gives the indices of the surviving boxes from the ith batch, sorted by score. Args: - boxes :math:`(N, n_boxes, 4)` - scores :math:`(N, n_boxes)` - threshold (float): IoU above which to eliminate boxes Outputs: - mask: :math:`(N, n_boxes)` - indices: :math:`(N, n_boxes)` Examples:: >>> boxes = torch.Tensor([[[10., 20., 20., 15.], >>> [24., 22., 50., 54.], >>> [10., 21., 20., 14.5]]]) >>> scores = torch.abs(torch.randn([1, 3])) >>> mask, indices = NonMaxSuppression(boxes, scores, 0.7) >>> #indices are SORTED according to score. >>> surviving_box_indices = indices[mask]
NonMaxSuppression
python
junfu1115/DANet
encoding/functions/customize.py
https://github.com/junfu1115/DANet/blob/master/encoding/functions/customize.py
MIT
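The greedy procedure the docstring describes can be sketched without the repo's CPU/GPU kernels. This torch-free reference version (hypothetical helper names, boxes assumed as `(x1, y1, x2, y2)`) returns the kept indices directly, sorted by score, rather than the `(mask, indices)` pair:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def greedy_nms(boxes, scores, threshold):
    # Indices sorted by score, highest first.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)       # store the highest-scoring survivor
        keep.append(best)
        # eliminate boxes overlapping it too much, repeat on the rest
        order = [i for i in order if iou(boxes[best], boxes[i]) <= threshold]
    return keep
```

With two heavily overlapping boxes and one disjoint box, only the better-scoring member of the overlapping pair survives a 0.5 threshold.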
def pairwise_cosine(X, C, normalize=False): r"""Pairwise Cosine Similarity or Dot-product Similarity Shape: - Input: :math:`X\in\mathcal{R}^{B\times N\times D}` :math:`C\in\mathcal{R}^{K\times D}` :math:`S\in \mathcal{R}^K` (where :math:`B` is batch, :math:`N` is total number of features, :math:`K` is number of codewords, :math:`D` is feature dimensions.) - Output: :math:`E\in\mathcal{R}^{B\times N\times K}` """ if normalize: X = F.normalize(X, dim=2, eps=1e-8) C = F.normalize(C, dim=1, eps=1e-8) return torch.matmul(X, C.t())
Pairwise Cosine Similarity or Dot-product Similarity Shape: - Input: :math:`X\in\mathcal{R}^{B\times N\times D}` :math:`C\in\mathcal{R}^{K\times D}` :math:`S\in \mathcal{R}^K` (where :math:`B` is batch, :math:`N` is total number of features, :math:`K` is number of codewords, :math:`D` is feature dimensions.) - Output: :math:`E\in\mathcal{R}^{B\times N\times K}`
pairwise_cosine
python
junfu1115/DANet
encoding/functions/encoding.py
https://github.com/junfu1115/DANet/blob/master/encoding/functions/encoding.py
MIT
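As a sanity check on the B×N×D by K×D → B×N×K contraction above, a plain-Python sketch of the normalized case (illustrative only; the repo's implementation is the batched `torch.matmul(X, C.t())`):

```python
import math

def pairwise_cosine_ref(X, C):
    # X: B x N x D features, C: K x D codewords, as nested lists.
    # Returns B x N x K cosine similarities (both sides unit-normalized).
    def unit(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / n for x in v]
    C = [unit(c) for c in C]
    return [[[sum(a * b for a, b in zip(unit(x), c)) for c in C]
             for x in batch] for batch in X]
```

Feeding axis-aligned features against the standard-basis codewords should recover the identity pattern.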
def get_deepten(dataset='pascal_voc', backbone='resnet50', pretrained=False, root='~/.encoding/models', **kwargs): r"""DeepTen model from the paper `"Deep TEN: Texture Encoding Network" <https://arxiv.org/pdf/1612.02844v1.pdf>`_ Parameters ---------- dataset : str, default pascal_voc The dataset the model was pretrained on. (pascal_voc, ade20k) pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_deepten(dataset='minc', backbone='resnet50', pretrained=False) >>> print(model) """ from ..datasets import datasets, acronyms model = DeepTen(datasets[dataset.lower()].NUM_CLASS, backbone=backbone, **kwargs) if pretrained: from .model_store import get_model_file model.load_state_dict(torch.load( get_model_file('deepten_%s_%s'%(backbone, acronyms[dataset]), root=root))) return model
DeepTen model from the paper `"Deep TEN: Texture Encoding Network" <https://arxiv.org/pdf/1612.02844v1.pdf>`_ Parameters ---------- dataset : str, default pascal_voc The dataset the model was pretrained on. (pascal_voc, ade20k) pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_deepten(dataset='minc', backbone='resnet50', pretrained=False) >>> print(model)
get_deepten
python
junfu1115/DANet
encoding/models/deepten.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/deepten.py
MIT
def get_model_file(name, root=os.path.join('~', '.encoding', 'models')): r"""Return the location of the pretrained model file on the local file system. This function will download from the online model zoo when the model cannot be found or has a hash mismatch. The root directory will be created if it doesn't exist. Parameters ---------- name : str Name of the model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Returns ------- file_path Path to the requested pretrained model file. """ if name not in _model_sha1: from torchvision.models.resnet import model_urls if name not in model_urls: raise ValueError('Pretrained model for {name} is not available.'.format(name=name)) root = os.path.expanduser(root) return download(model_urls[name], path=root, overwrite=True) file_name = '{name}-{short_hash}'.format(name=name, short_hash=short_hash(name)) root = os.path.expanduser(root) if not os.path.exists(root): os.makedirs(root) file_path = os.path.join(root, file_name+'.pth') sha1_hash = _model_sha1[name] lockfile = os.path.join(root, file_name + '.lock') with portalocker.Lock(lockfile, timeout=300): if os.path.exists(file_path): if check_sha1(file_path, sha1_hash): return file_path else: print('Mismatch in the content of model file {} detected. Downloading again.'.format(file_path)) else: print('Model file {} is not found. Downloading.'.format(file_path)) zip_file_path = os.path.join(root, file_name+'.zip') repo_url = os.environ.get('ENCODING_REPO', encoding_repo_url) if repo_url[-1] != '/': repo_url = repo_url + '/' download(_url_format.format(repo_url=repo_url, file_name=file_name), path=zip_file_path, overwrite=True) with zipfile.ZipFile(zip_file_path) as zf: zf.extractall(root) os.remove(zip_file_path) if check_sha1(file_path, sha1_hash): return file_path else: raise ValueError('Downloaded file has different hash. Please try again.')
Return the location of the pretrained model file on the local file system. This function will download from the online model zoo when the model cannot be found or has a hash mismatch. The root directory will be created if it doesn't exist. Parameters ---------- name : str Name of the model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Returns ------- file_path Path to the requested pretrained model file.
get_model_file
python
junfu1115/DANet
encoding/models/model_store.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/model_store.py
MIT
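The integrity check get_model_file leans on can be reproduced with the standard library. A sketch of a `check_sha1`-style helper (the prefix-match behaviour for shortened expected hashes is an assumption, mirrored from common model-zoo helpers, not taken from this repo):

```python
import hashlib

def check_sha1(filename, sha1_hash):
    # Stream the file through SHA-1 in 1 MiB chunks; accept if the digest
    # starts with the expected (possibly shortened) hash.
    sha1 = hashlib.sha1()
    with open(filename, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha1.update(chunk)
    return sha1.hexdigest().startswith(sha1_hash)
```

get_model_file calls a helper like this twice: once to decide whether a cached `.pth` can be trusted, and once to validate the freshly extracted download.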
def purge(root=os.path.join('~', '.encoding', 'models')): r"""Purge all pretrained model files in local file store. Parameters ---------- root : str, default '~/.encoding/models' Location for keeping the model parameters. """ root = os.path.expanduser(root) files = os.listdir(root) for f in files: if f.endswith(".pth"): os.remove(os.path.join(root, f))
Purge all pretrained model files in local file store. Parameters ---------- root : str, default '~/.encoding/models' Location for keeping the model parameters.
purge
python
junfu1115/DANet
encoding/models/model_store.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/model_store.py
MIT
def get_model(name, **kwargs): """Returns a pre-defined model by name Parameters ---------- name : str Name of the model. pretrained : bool Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Returns ------- Module: The model. """ name = name.lower() if name not in models: raise ValueError('%s\n\t%s' % (str(name), '\n\t'.join(sorted(models.keys())))) net = models[name](**kwargs) return net
Returns a pre-defined model by name Parameters ---------- name : str Name of the model. pretrained : bool Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Returns ------- Module: The model.
get_model
python
junfu1115/DANet
encoding/models/model_zoo.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/model_zoo.py
MIT
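The lookup-and-complain pattern in get_model generalizes to a tiny registry. A self-contained sketch (the closure-based `build_registry` shape is illustrative, not the library's actual module layout):

```python
def build_registry():
    """Name -> constructor map with get_model-style case-insensitive lookup
    and a ValueError that lists every known name on a miss."""
    models = {}

    def register(name, ctor):
        models[name.lower()] = ctor

    def get(name, **kwargs):
        key = name.lower()
        if key not in models:
            raise ValueError('%s\n\t%s' % (name, '\n\t'.join(sorted(models))))
        return models[key](**kwargs)

    return register, get
```

Keyword arguments pass straight through to the registered constructor, exactly as `models[name](**kwargs)` does above.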
def resnet50(pretrained=False, root='~/.encoding/models', **kwargs): """Constructs a ResNet-50 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet """ model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs) if pretrained: model.load_state_dict(torch.load( get_model_file('resnet50', root=root)), strict=False) return model
Constructs a ResNet-50 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet
resnet50
python
junfu1115/DANet
encoding/models/backbone/resnet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/backbone/resnet.py
MIT
def resnet101(pretrained=False, root='~/.encoding/models', **kwargs): """Constructs a ResNet-101 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet """ model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs) if pretrained: model.load_state_dict(torch.load( get_model_file('resnet101', root=root)), strict=False) return model
Constructs a ResNet-101 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet
resnet101
python
junfu1115/DANet
encoding/models/backbone/resnet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/backbone/resnet.py
MIT
def resnet152(pretrained=False, root='~/.encoding/models', **kwargs): """Constructs a ResNet-152 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet """ model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs) if pretrained: model.load_state_dict(torch.load( get_model_file('resnet152', root=root)), strict=False) return model
Constructs a ResNet-152 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet
resnet152
python
junfu1115/DANet
encoding/models/backbone/resnet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/backbone/resnet.py
MIT
def resnet50s(pretrained=False, root='~/.encoding/models', **kwargs): """Constructs a ResNetS-50 model as in PSPNet. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet """ kwargs['deep_stem'] = True model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs) if pretrained: model.load_state_dict(torch.load( get_model_file('resnet50s', root=root)), strict=False) return model
Constructs a ResNetS-50 model as in PSPNet. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet
resnet50s
python
junfu1115/DANet
encoding/models/backbone/resnet_variants.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/backbone/resnet_variants.py
MIT
def resnet101s(pretrained=False, root='~/.encoding/models', **kwargs): """Constructs a ResNetS-101 model as in PSPNet. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet """ kwargs['deep_stem'] = True model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs) if pretrained: model.load_state_dict(torch.load( get_model_file('resnet101s', root=root)), strict=False) return model
Constructs a ResNetS-101 model as in PSPNet. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet
resnet101s
python
junfu1115/DANet
encoding/models/backbone/resnet_variants.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/backbone/resnet_variants.py
MIT
def resnet152s(pretrained=False, root='~/.encoding/models', **kwargs): """Constructs a ResNetS-152 model as in PSPNet. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet """ kwargs['deep_stem'] = True model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs) if pretrained: model.load_state_dict(torch.load( get_model_file('resnet152s', root=root)), strict=False) return model
Constructs a ResNetS-152 model as in PSPNet. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet
resnet152s
python
junfu1115/DANet
encoding/models/backbone/resnet_variants.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/backbone/resnet_variants.py
MIT
def resnext50_32x4d(pretrained=False, root='~/.encoding/models', **kwargs): r"""ResNeXt-50 32x4d model from `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_ Args: pretrained (bool): If True, returns a model pre-trained on ImageNet progress (bool): If True, displays a progress bar of the download to stderr """ kwargs['groups'] = 32 kwargs['bottleneck_width'] = 4 model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs) if pretrained: model.load_state_dict(torch.load( get_model_file('resnext50_32x4d', root=root)), strict=False) return model
ResNeXt-50 32x4d model from `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_ Args: pretrained (bool): If True, returns a model pre-trained on ImageNet progress (bool): If True, displays a progress bar of the download to stderr
resnext50_32x4d
python
junfu1115/DANet
encoding/models/backbone/resnext.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/backbone/resnext.py
MIT
def resnext101_32x8d(pretrained=False, root='~/.encoding/models', **kwargs): r"""ResNeXt-101 32x8d model from `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_ Args: pretrained (bool): If True, returns a model pre-trained on ImageNet progress (bool): If True, displays a progress bar of the download to stderr """ kwargs['groups'] = 32 kwargs['bottleneck_width'] = 8 model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs) if pretrained: model.load_state_dict(torch.load( get_model_file('resnext101_32x8d', root=root)), strict=False) return model
ResNeXt-101 32x8d model from `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_ Args: pretrained (bool): If True, returns a model pre-trained on ImageNet progress (bool): If True, displays a progress bar of the download to stderr
resnext101_32x8d
python
junfu1115/DANet
encoding/models/backbone/resnext.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/backbone/resnext.py
MIT
def wideresnet38(pretrained=False, root='~/.encoding/models', **kwargs): """Constructs a WideResNet-38 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet """ model = WideResNet([3, 3, 6, 3, 1, 1], **kwargs) if pretrained: model.load_state_dict(torch.load( get_model_file('wideresnet38', root=root)), strict=False) return model
Constructs a WideResNet-38 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet
wideresnet38
python
junfu1115/DANet
encoding/models/backbone/wideresnet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/backbone/wideresnet.py
MIT
def wideresnet50(pretrained=False, root='~/.encoding/models', **kwargs): """Constructs a WideResNet-50 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet """ model = WideResNet([3, 3, 6, 6, 3, 1], **kwargs) if pretrained: model.load_state_dict(torch.load( get_model_file('wideresnet50', root=root)), strict=False) return model
Constructs a WideResNet-50 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet
wideresnet50
python
junfu1115/DANet
encoding/models/backbone/wideresnet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/backbone/wideresnet.py
MIT
def xception65(pretrained=False, root='~/.encoding/models', **kwargs): """Constructs an Xception-65 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet """ model = Xception65(**kwargs) if pretrained: model.load_state_dict(torch.load(get_model_file('xception65', root=root))) return model
Constructs an Xception-65 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet
xception65
python
junfu1115/DANet
encoding/models/backbone/xception.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/backbone/xception.py
MIT
def get_atten(dataset='pascal_voc', backbone='resnet50s', pretrained=False, root='~/.encoding/models', **kwargs): r"""ATTEN model from the paper `"Fully Convolutional Network for semantic segmentation" <https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_atten.pdf>`_ Parameters ---------- dataset : str, default pascal_voc The dataset the model was pretrained on. (pascal_voc, ade20k) pretrained : bool, default False Whether to load the pretrained weights for model. pooling_mode : str, default 'avg' Using 'max' pool or 'avg' pool in the Attention module. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_atten(dataset='pascal_voc', backbone='resnet50s', pretrained=False) >>> print(model) """ # infer number of classes from ...datasets import datasets, acronyms model = ATTEN(datasets[dataset.lower()].NUM_CLASS, backbone=backbone, **kwargs) if pretrained: from .model_store import get_model_file model.load_state_dict(torch.load( get_model_file('atten_%s_%s'%(backbone, acronyms[dataset]), root=root))) return model
ATTEN model from the paper `"Fully Convolutional Network for semantic segmentation" <https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_atten.pdf>`_ Parameters ---------- dataset : str, default pascal_voc The dataset the model was pretrained on. (pascal_voc, ade20k) pretrained : bool, default False Whether to load the pretrained weights for model. pooling_mode : str, default 'avg' Using 'max' pool or 'avg' pool in the Attention module. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_atten(dataset='pascal_voc', backbone='resnet50s', pretrained=False) >>> print(model)
get_atten
python
junfu1115/DANet
encoding/models/sseg/atten.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/atten.py
MIT
def parallel_forward(self, inputs, **kwargs): """Multi-GPU Multi-size Evaluation Args: inputs: list of Tensors """ target_gpus = self.device_ids[:len(inputs)] inputs = [(input.unsqueeze(0).cuda(device),) for input, device in zip(inputs, self.device_ids)] replicas = self.replicate(self, target_gpus) kwargs = scatter(kwargs, target_gpus, dim=0) if kwargs else [] if len(inputs) < len(kwargs): inputs.extend([() for _ in range(len(kwargs) - len(inputs))]) elif len(kwargs) < len(inputs): kwargs.extend([{} for _ in range(len(inputs) - len(kwargs))]) outputs = self.parallel_apply(replicas, inputs, kwargs) return outputs
Multi-GPU Multi-size Evaluation Args: inputs: list of Tensors
parallel_forward
python
junfu1115/DANet
encoding/models/sseg/base.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/base.py
MIT
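The input/kwargs padding step in parallel_forward is easy to get wrong, so here is a torch-free sketch of just that bookkeeping (`pad_replica_args` is a hypothetical helper standing in for the scatter/parallel_apply plumbing):

```python
def pad_replica_args(inputs, kwargs_list):
    # parallel_apply expects one (args, kwargs) pair per replica; pad the
    # shorter list with empty tuples / dicts, as parallel_forward does.
    inputs = list(inputs)
    kwargs_list = list(kwargs_list)
    if len(inputs) < len(kwargs_list):
        inputs.extend(() for _ in range(len(kwargs_list) - len(inputs)))
    elif len(kwargs_list) < len(inputs):
        kwargs_list.extend({} for _ in range(len(inputs) - len(kwargs_list)))
    return inputs, kwargs_list
```

After padding, both lists have equal length, so zipping them per replica never drops a positional argument or a keyword dict.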
def get_danet(dataset='pascal_voc', backbone='resnet50', pretrained=False, root='~/.encoding/models', **kwargs): r"""DANet model from the paper `"Dual Attention Network for Scene Segmentation" <https://arxiv.org/abs/1809.02983>`_ """ acronyms = { 'pascal_voc': 'voc', 'pascal_aug': 'voc', 'pcontext': 'pcontext', 'ade20k': 'ade', 'cityscapes': 'cityscapes', } # infer number of classes from ...datasets import datasets, VOCSegmentation, VOCAugSegmentation, ADE20KSegmentation model = DANet(datasets[dataset.lower()].NUM_CLASS, backbone=backbone, root=root, **kwargs) if pretrained: from .model_store import get_model_file model.load_state_dict(torch.load( get_model_file('fcn_%s_%s'%(backbone, acronyms[dataset]), root=root)), strict=False) return model
DANet model from the paper `"Dual Attention Network for Scene Segmentation" <https://arxiv.org/abs/1809.02983>`_
get_danet
python
junfu1115/DANet
encoding/models/sseg/danet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/danet.py
MIT
def get_dran(dataset='pascal_voc', backbone='resnet50', pretrained=False, root='~/.encoding/models', **kwargs): r"""Scene Segmentation with Dual Relation-aware Attention Network """ acronyms = { 'pascal_voc': 'voc', 'pascal_aug': 'voc', 'pcontext': 'pcontext', 'ade20k': 'ade', } # infer number of classes from ...datasets import datasets, VOCSegmentation, VOCAugSegmentation, ADE20KSegmentation model = Dran(datasets[dataset.lower()].NUM_CLASS, backbone=backbone, root=root, **kwargs) if pretrained: from .model_store import get_model_file model.load_state_dict(torch.load( get_model_file('fcn_%s_%s'%(backbone, acronyms[dataset]), root=root)), strict= False) return model
Scene Segmentation with Dual Relation-aware Attention Network
get_dran
python
junfu1115/DANet
encoding/models/sseg/dran.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/dran.py
MIT
def get_encnet(dataset='pascal_voc', backbone='resnet50s', pretrained=False, root='~/.encoding/models', **kwargs): r"""EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- dataset : str, default pascal_voc The dataset the model was pretrained on. (pascal_voc, ade20k) backbone : str, default resnet50s The backbone network. (resnet50s, 101s, 152s) pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet(dataset='pascal_voc', backbone='resnet50s', pretrained=False) >>> print(model) """ kwargs['lateral'] = dataset.lower().startswith('p') # infer number of classes from ...datasets import datasets, acronyms model = EncNet(datasets[dataset.lower()].NUM_CLASS, backbone=backbone, root=root, **kwargs) if pretrained: from ..model_store import get_model_file model.load_state_dict(torch.load( get_model_file('encnet_%s_%s'%(backbone, acronyms[dataset]), root=root))) return model
EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- dataset : str, default pascal_voc The dataset the model was pretrained on. (pascal_voc, ade20k) backbone : str, default resnet50s The backbone network. (resnet50s, 101s, 152s) pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet(dataset='pascal_voc', backbone='resnet50s', pretrained=False) >>> print(model)
get_encnet
python
junfu1115/DANet
encoding/models/sseg/encnet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/encnet.py
MIT
def get_encnet_resnet50_pcontext(pretrained=False, root='~/.encoding/models', **kwargs): r"""EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet_resnet50_pcontext(pretrained=True) >>> print(model) """ return get_encnet('pcontext', 'resnet50s', pretrained, root=root, aux=True, base_size=520, crop_size=480, **kwargs)
EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet_resnet50_pcontext(pretrained=True) >>> print(model)
get_encnet_resnet50_pcontext
python
junfu1115/DANet
encoding/models/sseg/encnet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/encnet.py
MIT
def get_encnet_resnet101_coco(pretrained=False, root='~/.encoding/models', **kwargs): r"""EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet_resnet101_coco(pretrained=True) >>> print(model) """ return get_encnet('coco', 'resnet101s', pretrained, root=root, aux=True, base_size=520, crop_size=480, lateral=True, **kwargs)
EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet_resnet101_coco(pretrained=True) >>> print(model)
get_encnet_resnet101_coco
python
junfu1115/DANet
encoding/models/sseg/encnet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/encnet.py
MIT
def get_encnet_resnet101_pcontext(pretrained=False, root='~/.encoding/models', **kwargs): r"""EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet_resnet101_pcontext(pretrained=True) >>> print(model) """ return get_encnet('pcontext', 'resnet101s', pretrained, root=root, aux=True, base_size=520, crop_size=480, **kwargs)
EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet_resnet101_pcontext(pretrained=True) >>> print(model)
get_encnet_resnet101_pcontext
python
junfu1115/DANet
encoding/models/sseg/encnet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/encnet.py
MIT
def get_encnet_resnet50_ade(pretrained=False, root='~/.encoding/models', **kwargs): r"""EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet_resnet50_ade(pretrained=True) >>> print(model) """ return get_encnet('ade20k', 'resnet50s', pretrained, root=root, aux=True, base_size=520, crop_size=480, **kwargs)
EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet_resnet50_ade(pretrained=True) >>> print(model)
get_encnet_resnet50_ade
python
junfu1115/DANet
encoding/models/sseg/encnet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/encnet.py
MIT
def get_encnet_resnet101_ade(pretrained=False, root='~/.encoding/models', **kwargs): r"""EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet_resnet101_ade(pretrained=True) >>> print(model) """ return get_encnet('ade20k', 'resnet101s', pretrained, root=root, aux=True, base_size=640, crop_size=576, **kwargs)
EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet_resnet101_ade(pretrained=True) >>> print(model)
get_encnet_resnet101_ade
python
junfu1115/DANet
encoding/models/sseg/encnet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/encnet.py
MIT
def get_encnet_resnet152_ade(pretrained=False, root='~/.encoding/models', **kwargs): r"""EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet_resnet152_ade(pretrained=True) >>> print(model) """ return get_encnet('ade20k', 'resnet152s', pretrained, root=root, aux=True, base_size=520, crop_size=480, **kwargs)
EncNet model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_encnet_resnet152_ade(pretrained=True) >>> print(model)
get_encnet_resnet152_ade
python
junfu1115/DANet
encoding/models/sseg/encnet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/encnet.py
MIT
def get_fcfpn(dataset='pascal_voc', backbone='resnet50', pretrained=False, root='~/.encoding/models', **kwargs): r"""FCFPN model from the paper `"Fully Convolutional Network for semantic segmentation" <https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcfpn.pdf>`_ Parameters ---------- dataset : str, default pascal_voc The dataset that model pretrained on. (pascal_voc, ade20k) pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_fcfpn(dataset='pascal_voc', backbone='resnet50s', pretrained=False) >>> print(model) """ acronyms = { 'pascal_voc': 'voc', 'pascal_aug': 'voc', 'ade20k': 'ade', } # infer number of classes from ...datasets import datasets, VOCSegmentation, VOCAugSegmentation, ADE20KSegmentation model = FCFPN(datasets[dataset.lower()].NUM_CLASS, backbone=backbone, **kwargs) if pretrained: from ..model_store import get_model_file model.load_state_dict(torch.load( get_model_file('fcfpn_%s_%s'%(backbone, acronyms[dataset]), root=root))) return model
FCFPN model from the paper `"Fully Convolutional Network for semantic segmentation" <https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcfpn.pdf>`_ Parameters ---------- dataset : str, default pascal_voc The dataset that model pretrained on. (pascal_voc, ade20k) pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_fcfpn(dataset='pascal_voc', backbone='resnet50s', pretrained=False) >>> print(model)
get_fcfpn
python
junfu1115/DANet
encoding/models/sseg/fcfpn.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/fcfpn.py
MIT
def get_fcn(dataset='pascal_voc', backbone='resnet50s', pretrained=False, root='~/.encoding/models', **kwargs): r"""FCN model from the paper `"Fully Convolutional Network for semantic segmentation" <https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf>`_ Parameters ---------- dataset : str, default pascal_voc The dataset that model pretrained on. (pascal_voc, ade20k) pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_fcn(dataset='pascal_voc', backbone='resnet50s', pretrained=False) >>> print(model) """ # infer number of classes from ...datasets import datasets, acronyms model = FCN(datasets[dataset.lower()].NUM_CLASS, backbone=backbone, root=root, **kwargs) if pretrained: from ..model_store import get_model_file model.load_state_dict(torch.load( get_model_file('fcn_%s_%s'%(backbone, acronyms[dataset]), root=root))) return model
FCN model from the paper `"Fully Convolutional Network for semantic segmentation" <https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf>`_ Parameters ---------- dataset : str, default pascal_voc The dataset that model pretrained on. (pascal_voc, ade20k) pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_fcn(dataset='pascal_voc', backbone='resnet50s', pretrained=False) >>> print(model)
get_fcn
python
junfu1115/DANet
encoding/models/sseg/fcn.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/fcn.py
MIT
def get_fcn_resnest50_ade(pretrained=False, root='~/.encoding/models', **kwargs): r"""EncNet-PSP model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_fcn_resnest50_ade(pretrained=True) >>> print(model) """ kwargs['aux'] = True return get_fcn('ade20k', 'resnest50', pretrained, root=root, **kwargs)
EncNet-PSP model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_fcn_resnest50_ade(pretrained=True) >>> print(model)
get_fcn_resnest50_ade
python
junfu1115/DANet
encoding/models/sseg/fcn.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/fcn.py
MIT
def get_fcn_resnest50_pcontext(pretrained=False, root='~/.encoding/models', **kwargs): r"""EncNet-PSP model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_fcn_resnest50_pcontext(pretrained=True) >>> print(model) """ kwargs['aux'] = True return get_fcn('pcontext', 'resnest50', pretrained, root=root, **kwargs)
EncNet-PSP model from the paper `"Context Encoding for Semantic Segmentation" <https://arxiv.org/pdf/1803.08904.pdf>`_ Parameters ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_fcn_resnest50_pcontext(pretrained=True) >>> print(model)
get_fcn_resnest50_pcontext
python
junfu1115/DANet
encoding/models/sseg/fcn.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/fcn.py
MIT
def get_upernet(dataset='pascal_voc', backbone='resnet50s', pretrained=False, root='~/.encoding/models', **kwargs): r"""UperNet model from the paper `"Fully Convolutional Network for semantic segmentation" <https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_upernet.pdf>`_ Parameters ---------- dataset : str, default pascal_voc The dataset that model pretrained on. (pascal_voc, ade20k) pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_upernet(dataset='pascal_voc', backbone='resnet50s', pretrained=False) >>> print(model) """ acronyms = { 'pascal_voc': 'voc', 'pascal_aug': 'voc', 'ade20k': 'ade', } # infer number of classes from ...datasets import datasets, VOCSegmentation, VOCAugSegmentation, ADE20KSegmentation model = UperNet(datasets[dataset.lower()].NUM_CLASS, backbone=backbone, **kwargs) if pretrained: from ..model_store import get_model_file model.load_state_dict(torch.load( get_model_file('upernet_%s_%s'%(backbone, acronyms[dataset]), root=root))) return model
UperNet model from the paper `"Fully Convolutional Network for semantic segmentation" <https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_upernet.pdf>`_ Parameters ---------- dataset : str, default pascal_voc The dataset that model pretrained on. (pascal_voc, ade20k) pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.encoding/models' Location for keeping the model parameters. Examples -------- >>> model = get_upernet(dataset='pascal_voc', backbone='resnet50s', pretrained=False) >>> print(model)
get_upernet
python
junfu1115/DANet
encoding/models/sseg/upernet.py
https://github.com/junfu1115/DANet/blob/master/encoding/models/sseg/upernet.py
MIT
def forward(self, x): """ inputs : x : input feature maps( B X C X H X W) returns : out : attention value + input feature attention: B X (HxW) X (HxW) """ m_batchsize, C, height, width = x.size() proj_query = self.query_conv(x).view(m_batchsize, -1, width*height).permute(0, 2, 1) proj_key = self.key_conv(x).view(m_batchsize, -1, width*height) energy = torch.bmm(proj_query, proj_key) attention = self.softmax(energy) proj_value = self.value_conv(x).view(m_batchsize, -1, width*height) out = torch.bmm(proj_value, attention.permute(0, 2, 1)) out = out.view(m_batchsize, C, height, width) out = self.gamma*out + x return out
inputs : x : input feature maps( B X C X H X W) returns : out : attention value + input feature attention: B X (HxW) X (HxW)
forward
python
junfu1115/DANet
encoding/nn/da_att.py
https://github.com/junfu1115/DANet/blob/master/encoding/nn/da_att.py
MIT
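The forward pass above depends on the module's learned query/key/value 1x1 convolutions and `gamma` parameter. The core spatial-attention arithmetic can be sketched in numpy under the simplifying assumption of identity projections in place of those learned convolutions (toy shapes are also an assumption):

```python
import numpy as np

def position_attention(x, gamma=0.0):
    # x: (B, C, H, W) feature map. Identity maps stand in for the
    # learned query/key/value 1x1 convolutions of the real module.
    B, C, H, W = x.shape
    N = H * W
    query = x.reshape(B, C, N).transpose(0, 2, 1)   # B x N x C
    key = x.reshape(B, C, N)                        # B x C x N
    energy = query @ key                            # B x N x N
    # row-wise softmax over spatial positions
    e = np.exp(energy - energy.max(axis=-1, keepdims=True))
    attention = e / e.sum(axis=-1, keepdims=True)
    value = x.reshape(B, C, N)                      # B x C x N
    out = value @ attention.transpose(0, 2, 1)      # B x C x N
    out = out.reshape(B, C, H, W)
    return gamma * out + x                          # residual blend

feat = np.random.rand(2, 4, 3, 3)
out = position_attention(feat, gamma=0.5)
print(out.shape)  # (2, 4, 3, 3)
```

With `gamma=0` the output equals the input, which matches the module at initialization, where the learnable `gamma` starts at zero.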
def forward(self,x): """ inputs : x : input feature maps( B X C X H X W) returns : out : attention value + input feature attention: B X C X C """ m_batchsize, C, height, width = x.size() proj_query = x.view(m_batchsize, C, -1) proj_key = x.view(m_batchsize, C, -1).permute(0, 2, 1) energy = torch.bmm(proj_query, proj_key) energy_new = torch.max(energy, -1, keepdim=True)[0].expand_as(energy)-energy attention = self.softmax(energy_new) proj_value = x.view(m_batchsize, C, -1) out = torch.bmm(attention, proj_value) out = out.view(m_batchsize, C, height, width) out = self.gamma*out + x return out
inputs : x : input feature maps( B X C X H X W) returns : out : attention value + input feature attention: B X C X C
forward
python
junfu1115/DANet
encoding/nn/da_att.py
https://github.com/junfu1115/DANet/blob/master/encoding/nn/da_att.py
MIT
def forward(self, x,y): """ inputs : x : input feature(N,C,H,W) y:gathering centers(N,K,M) returns : out : compact position attention feature attention map: (H*W)*M """ m_batchsize,C,width ,height = x.size() m_batchsize,K,M = y.size() proj_query = self.conv_query(x).view(m_batchsize,-1,width*height).permute(0,2,1)#BxNxd proj_key = self.conv_key(y).view(m_batchsize,K,-1).permute(0,2,1)#BxdxK energy = torch.bmm(proj_query,proj_key)#BxNxK attention = self.softmax(energy) #BxNxk proj_value = self.conv_value(y).permute(0,2,1) #BxCxK out = torch.bmm(proj_value,attention.permute(0,2,1))#BxCxN out = out.view(m_batchsize,C,width,height) out = self.scale*out + x return out
inputs : x : input feature(N,C,H,W) y:gathering centers(N,K,M) returns : out : compact position attention feature attention map: (H*W)*M
forward
python
junfu1115/DANet
encoding/nn/dran_att.py
https://github.com/junfu1115/DANet/blob/master/encoding/nn/dran_att.py
MIT
def forward(self, x,y): """ inputs : x : input feature(N,C,H,W) y:gathering centers(N,K,H,W) returns : out : compact channel attention feature attention map: K*C """ m_batchsize,C,width ,height = x.size() x_reshape =x.view(m_batchsize,C,-1) B,K,W,H = y.size() y_reshape =y.view(B,K,-1) proj_query = x_reshape #BXC1XN proj_key = y_reshape.permute(0,2,1) #BX(N)XC energy = torch.bmm(proj_query,proj_key) #BXC1XC energy_new = torch.max(energy,-1,keepdim=True)[0].expand_as(energy)-energy attention = self.softmax(energy_new) proj_value = y.view(B,K,-1) #BCN out = torch.bmm(attention,proj_value) #BC1N out = out.view(m_batchsize,C,width ,height) out = x + self.scale*out return out
inputs : x : input feature(N,C,H,W) y:gathering centers(N,K,H,W) returns : out : compact channel attention feature attention map: K*C
forward
python
junfu1115/DANet
encoding/nn/dran_att.py
https://github.com/junfu1115/DANet/blob/master/encoding/nn/dran_att.py
MIT
def forward(self, x,y): """ inputs : x : low level feature(N,C,H,W) y:high level feature(N,C,H,W) returns : out : cross-level gating decoder feature """ low_lvl_feat = self.conv_low(x) high_lvl_feat = upsample(y, low_lvl_feat.size()[2:], **self._up_kwargs) feat_cat = torch.cat([low_lvl_feat,high_lvl_feat],1) low_lvl_feat_refine = self.gamma*self.conv_att(feat_cat)*low_lvl_feat low_high_feat = torch.cat([low_lvl_feat_refine,high_lvl_feat],1) low_high_feat = self.conv_cat(low_high_feat) low_high_feat = self.conv_out(low_high_feat) return low_high_feat
inputs : x : low level feature(N,C,H,W) y:high level feature(N,C,H,W) returns : out : cross-level gating decoder feature
forward
python
junfu1115/DANet
encoding/nn/dran_att.py
https://github.com/junfu1115/DANet/blob/master/encoding/nn/dran_att.py
MIT
def reset_dropblock(start_step, nr_steps, start_value, stop_value, m): """ Example: from functools import partial apply_drop_prob = partial(reset_dropblock, 0, epochs*iters_per_epoch, 0.0, 0.1) net.apply(apply_drop_prob) """ if isinstance(m, DropBlock2D): m.reset_steps(start_step, nr_steps, start_value, stop_value)
Example: from functools import partial apply_drop_prob = partial(reset_dropblock, 0, epochs*iters_per_epoch, 0.0, 0.1) net.apply(apply_drop_prob)
reset_dropblock
python
junfu1115/DANet
encoding/nn/dropblock.py
https://github.com/junfu1115/DANet/blob/master/encoding/nn/dropblock.py
MIT
def __init__(self, smoothing=0.1): """ Constructor for the LabelSmoothing module. :param smoothing: label smoothing factor """ super(LabelSmoothing, self).__init__() self.confidence = 1.0 - smoothing self.smoothing = smoothing
Constructor for the LabelSmoothing module. :param smoothing: label smoothing factor
__init__
python
junfu1115/DANet
encoding/nn/loss.py
https://github.com/junfu1115/DANet/blob/master/encoding/nn/loss.py
MIT
def download(url, path=None, overwrite=False, sha1_hash=None): """Download a given URL Parameters ---------- url : str URL to download path : str, optional Destination path to store downloaded file. By default stores to the current directory with same name as in url. overwrite : bool, optional Whether to overwrite destination file if already exists. sha1_hash : str, optional Expected sha1 hash in hexadecimal digits. Will ignore existing file when hash is specified but doesn't match. Returns ------- str The file path of the downloaded file. """ if path is None: fname = url.split('/')[-1] else: path = os.path.expanduser(path) if os.path.isdir(path): fname = os.path.join(path, url.split('/')[-1]) else: fname = path if overwrite or not os.path.exists(fname) or (sha1_hash and not check_sha1(fname, sha1_hash)): dirname = os.path.dirname(os.path.abspath(os.path.expanduser(fname))) if not os.path.exists(dirname): os.makedirs(dirname) print('Downloading %s from %s...'%(fname, url)) r = requests.get(url, stream=True) if r.status_code != 200: raise RuntimeError("Failed downloading url %s"%url) total_length = r.headers.get('content-length') with open(fname, 'wb') as f: if total_length is None: # no content length header for chunk in r.iter_content(chunk_size=1024): if chunk: # filter out keep-alive new chunks f.write(chunk) else: total_length = int(total_length) for chunk in tqdm(r.iter_content(chunk_size=1024), total=int(total_length / 1024. + 0.5), unit='KB', unit_scale=False, dynamic_ncols=True): f.write(chunk) if sha1_hash and not check_sha1(fname, sha1_hash): raise UserWarning('File {} is downloaded but the content hash does not match. ' \ 'The repo may be outdated or download may be incomplete. ' \ 'If the "repo_url" is overridden, consider switching to ' \ 'the default repo.'.format(fname)) return fname
Download a given URL Parameters ---------- url : str URL to download path : str, optional Destination path to store downloaded file. By default stores to the current directory with same name as in url. overwrite : bool, optional Whether to overwrite destination file if already exists. sha1_hash : str, optional Expected sha1 hash in hexadecimal digits. Will ignore existing file when hash is specified but doesn't match. Returns ------- str The file path of the downloaded file.
download
python
junfu1115/DANet
encoding/utils/files.py
https://github.com/junfu1115/DANet/blob/master/encoding/utils/files.py
MIT
def check_sha1(filename, sha1_hash): """Check whether the sha1 hash of the file content matches the expected hash. Parameters ---------- filename : str Path to the file. sha1_hash : str Expected sha1 hash in hexadecimal digits. Returns ------- bool Whether the file content matches the expected hash. """ sha1 = hashlib.sha1() with open(filename, 'rb') as f: while True: data = f.read(1048576) if not data: break sha1.update(data) return sha1.hexdigest() == sha1_hash
Check whether the sha1 hash of the file content matches the expected hash. Parameters ---------- filename : str Path to the file. sha1_hash : str Expected sha1 hash in hexadecimal digits. Returns ------- bool Whether the file content matches the expected hash.
check_sha1
python
junfu1115/DANet
encoding/utils/files.py
https://github.com/junfu1115/DANet/blob/master/encoding/utils/files.py
MIT
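`check_sha1` pairs with `download` above to validate cached files. A self-contained round trip (the temporary file and payload below are arbitrary choices for illustration):

```python
import hashlib
import os
import tempfile

def check_sha1(filename, sha1_hash):
    # stream in 1 MiB chunks so large files are never loaded whole
    sha1 = hashlib.sha1()
    with open(filename, 'rb') as f:
        while True:
            data = f.read(1048576)
            if not data:
                break
            sha1.update(data)
    return sha1.hexdigest() == sha1_hash

# write a temp file, hash the same payload, then verify
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'hello world')
    path = f.name
digest = hashlib.sha1(b'hello world').hexdigest()
matches = check_sha1(path, digest)
mismatch = check_sha1(path, '0' * 40)
os.remove(path)
print(matches, mismatch)  # True False
```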
def accuracy(output, target, topk=(1,)): """Computes the accuracy over the k top predictions for the specified values of k""" with torch.no_grad(): maxk = max(topk) batch_size = target.size(0) _, pred = output.topk(maxk, 1, True, True) pred = pred.t() correct = pred.eq(target.view(1, -1).expand_as(pred)) res = [] for k in topk: correct_k = correct[:k].view(-1).float().sum(0, keepdim=True) res.append(correct_k.mul_(100.0 / batch_size)) return res
Computes the accuracy over the k top predictions for the specified values of k
accuracy
python
junfu1115/DANet
encoding/utils/metrics.py
https://github.com/junfu1115/DANet/blob/master/encoding/utils/metrics.py
MIT
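The top-k logic above can be restated without torch: a sample counts as correct for a given k if its target class is among the k highest-scoring predictions. A pure-python sketch with hypothetical toy data:

```python
def topk_accuracy(scores, targets, topk=(1,)):
    # scores: list of per-sample score lists; targets: list of class ids.
    # Returns one percentage per requested k, mirroring the torch version.
    maxk = max(topk)
    n = len(targets)
    # indices of the maxk largest scores, best first
    preds = [sorted(range(len(s)), key=lambda i: s[i], reverse=True)[:maxk]
             for s in scores]
    res = []
    for k in topk:
        correct = sum(t in p[:k] for p, t in zip(preds, targets))
        res.append(100.0 * correct / n)
    return res

scores = [[0.1, 0.7, 0.2],   # predicts class 1
          [0.5, 0.3, 0.2],   # predicts class 0
          [0.2, 0.3, 0.5]]   # predicts class 2
targets = [1, 2, 2]
acc = topk_accuracy(scores, targets, topk=(1, 2))
print(acc)  # top-1 and top-2 accuracy in percent
```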
def batch_pix_accuracy(output, target): """Batch Pixel Accuracy Args: predict: input 4D tensor target: label 3D tensor """ _, predict = torch.max(output, 1) predict = predict.cpu().numpy().astype('int64') + 1 target = target.cpu().numpy().astype('int64') + 1 pixel_labeled = np.sum(target > 0) pixel_correct = np.sum((predict == target)*(target > 0)) assert pixel_correct <= pixel_labeled, \ "Correct area should be smaller than Labeled" return pixel_correct, pixel_labeled
Batch Pixel Accuracy Args: predict: input 4D tensor target: label 3D tensor
batch_pix_accuracy
python
junfu1115/DANet
encoding/utils/metrics.py
https://github.com/junfu1115/DANet/blob/master/encoding/utils/metrics.py
MIT
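The +1 shift in `batch_pix_accuracy` exists only to turn the void label -1 into 0 so the `target > 0` mask drops it. The same bookkeeping in plain python, on flattened toy labels (an assumption for illustration):

```python
def pixel_accuracy(predict, target):
    # flattened label lists; target labels < 0 are void and ignored,
    # which is what the +1 shift plus `target > 0` mask achieves above
    labeled = sum(1 for t in target if t >= 0)
    correct = sum(1 for p, t in zip(predict, target) if t >= 0 and p == t)
    return correct, labeled

pred = [0, 1, 1, 2]
gt   = [0, 1, 2, -1]     # last pixel is void
result = pixel_accuracy(pred, gt)
print(result)  # (2, 3)
```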
def batch_intersection_union(output, target, nclass): """Batch Intersection of Union Args: predict: input 4D tensor target: label 3D tensor nclass: number of categories (int) """ _, predict = torch.max(output, 1) mini = 1 maxi = nclass nbins = nclass predict = predict.cpu().numpy().astype('int64') + 1 target = target.cpu().numpy().astype('int64') + 1 predict = predict * (target > 0).astype(predict.dtype) intersection = predict * (predict == target) # areas of intersection and union area_inter, _ = np.histogram(intersection, bins=nbins, range=(mini, maxi)) area_pred, _ = np.histogram(predict, bins=nbins, range=(mini, maxi)) area_lab, _ = np.histogram(target, bins=nbins, range=(mini, maxi)) area_union = area_pred + area_lab - area_inter assert (area_inter <= area_union).all(), \ "Intersection area should be smaller than Union area" return area_inter, area_union
Batch Intersection of Union Args: predict: input 4D tensor target: label 3D tensor nclass: number of categories (int)
batch_intersection_union
python
junfu1115/DANet
encoding/utils/metrics.py
https://github.com/junfu1115/DANet/blob/master/encoding/utils/metrics.py
MIT
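The histogram trick in `batch_intersection_union` is equivalent to per-class counting: intersection counts pixels where prediction and label agree, union is predicted area plus labeled area minus intersection, and void pixels (label < 0) are excluded. A stdlib-only sketch of that counting (flattened toy labels assumed):

```python
def intersection_union(predict, target, nclass):
    # per-class intersection and union counts; target labels < 0 are
    # void and skipped, matching the +1 shift and masking used above
    inter = [0] * nclass
    pred_area = [0] * nclass
    lab_area = [0] * nclass
    for p, t in zip(predict, target):
        if t < 0:          # void pixel: ignored entirely
            continue
        lab_area[t] += 1
        pred_area[p] += 1
        if p == t:
            inter[p] += 1
    union = [pa + la - i for pa, la, i in zip(pred_area, lab_area, inter)]
    return inter, union

pred = [0, 0, 1, 1]
gt   = [0, 1, 1, 1]
inter, union = intersection_union(pred, gt, nclass=2)
print(inter, union)  # [1, 2] [2, 3]
```

Per-class IoU would then be `inter[c] / union[c]` for each class `c` with nonzero union.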
def get_mask_pallete(npimg, dataset='detail'): """Get image color pallete for visualizing masks""" # recovery boundary if dataset == 'pascal_voc': npimg[npimg==21] = 255 # put colormap out_img = Image.fromarray(npimg.squeeze().astype('uint8')) if dataset == 'ade20k': out_img.putpalette(adepallete) elif dataset == 'citys': out_img.putpalette(citypallete) elif dataset in ('detail', 'pascal_voc', 'pascal_aug'): out_img.putpalette(vocpallete) return out_img
Get image color pallete for visualizing masks
get_mask_pallete
python
junfu1115/DANet
encoding/utils/pallete.py
https://github.com/junfu1115/DANet/blob/master/encoding/utils/pallete.py
MIT
def update_bn_stats( model: nn.Module, data_loader: Iterable[Any], num_iters: int = 200 # pyre-ignore ) -> None: """ Recompute and update the batch norm stats to make them more precise. During training both BN stats and the weight are changing after every iteration, so the running average can not precisely reflect the actual stats of the current model. In this function, the BN stats are recomputed with fixed weights, to make the running average more precise. Specifically, it computes the true average of per-batch mean/variance instead of the running average. Args: model (nn.Module): the model whose bn stats will be recomputed. Note that: 1. This function will not alter the training mode of the given model. Users are responsible for setting the layers that needs precise-BN to training mode, prior to calling this function. 2. Be careful if your models contain other stateful layers in addition to BN, i.e. layers whose state can change in forward iterations. This function will alter their state. If you wish them unchanged, you need to either pass in a submodule without those layers, or backup the states. data_loader (iterator): an iterator. Produce data as inputs to the model. num_iters (int): number of iterations to compute the stats. """ bn_layers = get_bn_modules(model) if len(bn_layers) == 0: return # In order to make the running stats only reflect the current batch, the # momentum is disabled. # bn.running_mean = (1 - momentum) * bn.running_mean + momentum * batch_mean # Setting the momentum to 1.0 to compute the stats without momentum. momentum_actual = [bn.momentum for bn in bn_layers] # pyre-ignore for bn in bn_layers: bn.momentum = 1.0 # Note that running_var actually means "running average of variance" running_mean = [ torch.zeros_like(bn.running_mean) for bn in bn_layers # pyre-ignore ] running_var = [torch.zeros_like(bn.running_var) for bn in bn_layers] # pyre-ignore ind = -1 for ind, inputs in enumerate(itertools.islice(data_loader, num_iters)): inputs=inputs.cuda() with torch.no_grad(): # No need to backward model(inputs) for i, bn in enumerate(bn_layers): # Accumulates the bn stats. running_mean[i] += (bn.running_mean - running_mean[i]) / (ind + 1) running_var[i] += (bn.running_var - running_var[i]) / (ind + 1) # We compute the "average of variance" across iterations. assert ind == num_iters - 1, ( "update_bn_stats is meant to run for {} iterations, " "but the dataloader stops at {} iterations.".format(num_iters, ind) ) for i, bn in enumerate(bn_layers): # Sets the precise bn stats. bn.running_mean = running_mean[i] bn.running_var = running_var[i] bn.momentum = momentum_actual[i]
Recompute and update the batch norm stats to make them more precise. During training both BN stats and the weight are changing after every iteration, so the running average can not precisely reflect the actual stats of the current model. In this function, the BN stats are recomputed with fixed weights, to make the running average more precise. Specifically, it computes the true average of per-batch mean/variance instead of the running average. Args: model (nn.Module): the model whose bn stats will be recomputed. Note that: 1. This function will not alter the training mode of the given model. Users are responsible for setting the layers that needs precise-BN to training mode, prior to calling this function. 2. Be careful if your models contain other stateful layers in addition to BN, i.e. layers whose state can change in forward iterations. This function will alter their state. If you wish them unchanged, you need to either pass in a submodule without those layers, or backup the states. data_loader (iterator): an iterator. Produce data as inputs to the model. num_iters (int): number of iterations to compute the stats.
update_bn_stats
python
junfu1115/DANet
encoding/utils/precise_bn.py
https://github.com/junfu1115/DANet/blob/master/encoding/utils/precise_bn.py
MIT
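The accumulation `running_mean[i] += (bn.running_mean - running_mean[i]) / (ind + 1)` in `update_bn_stats` is the standard incremental (online) mean: after processing k batches it holds the exact average of the k per-batch statistics. Isolated as a stdlib sketch:

```python
def incremental_mean(values):
    # running average updated one sample at a time, as in the
    # `running_mean[i] += (x - running_mean[i]) / (ind + 1)` line above
    mean = 0.0
    for ind, x in enumerate(values):
        mean += (x - mean) / (ind + 1)
    return mean

m = incremental_mean([1.0, 2.0, 3.0, 4.0])
print(m)  # 2.5
```

This form avoids summing first and dividing later, which keeps intermediate values bounded.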
def get_bn_modules(model: nn.Module) -> List[nn.Module]: """ Find all BatchNorm (BN) modules that are in training mode. See fvcore.precise_bn.BN_MODULE_TYPES for a list of all modules that are included in this search. Args: model (nn.Module): a model possibly containing BN modules. Returns: list[nn.Module]: all BN modules in the model. """ # Finds all the bn layers. bn_layers = [ m for m in model.modules() if m.training and isinstance(m, BN_MODULE_TYPES) ] return bn_layers
Find all BatchNorm (BN) modules that are in training mode. See fvcore.precise_bn.BN_MODULE_TYPES for a list of all modules that are included in this search. Args: model (nn.Module): a model possibly containing BN modules. Returns: list[nn.Module]: all BN modules in the model.
get_bn_modules
python
junfu1115/DANet
encoding/utils/precise_bn.py
https://github.com/junfu1115/DANet/blob/master/encoding/utils/precise_bn.py
MIT
def get_selabel_vector(target, nclass): r"""Get SE-Loss Label in a batch Args: predict: input 4D tensor target: label 3D tensor (BxHxW) nclass: number of categories (int) Output: 2D tensor (BxnClass) """ batch = target.size(0) tvect = torch.zeros(batch, nclass) for i in range(batch): hist = torch.histc(target[i].data.float(), bins=nclass, min=0, max=nclass-1) vect = hist>0 tvect[i] = vect return tvect
Get SE-Loss Label in a batch Args: predict: input 4D tensor target: label 3D tensor (BxHxW) nclass: number of categories (int) Output: 2D tensor (BxnClass)
get_selabel_vector
python
junfu1115/DANet
encoding/utils/train_helper.py
https://github.com/junfu1115/DANet/blob/master/encoding/utils/train_helper.py
MIT
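`get_selabel_vector` reduces each label map to a class-presence indicator via `torch.histc` followed by `hist > 0`. The same per-image presence vector in plain python (toy flattened label maps are an assumption):

```python
def selabel_vector(target_batch, nclass):
    # per-image binary vector: 1 if the class appears anywhere in
    # that image's label map, else 0 (pure-python sketch)
    out = []
    for labels in target_batch:
        present = set(labels)
        out.append([1 if c in present else 0 for c in range(nclass)])
    return out

batch = [[0, 0, 2], [1, 1, 1]]   # flattened label maps for 2 images
vecs = selabel_vector(batch, nclass=3)
print(vecs)  # [[1, 0, 1], [0, 1, 0]]
```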
def filepath_enumerate(paths): """Enumerate the file paths of all subfiles of the list of paths""" out = [] for path in paths: if os.path.isfile(path): out.append(path) else: for root, dirs, files in os.walk(path): for name in files: out.append(os.path.normpath(os.path.join(root, name))) return out
Enumerate the file paths of all subfiles of the list of paths
filepath_enumerate
python
junfu1115/DANet
tests/lint.py
https://github.com/junfu1115/DANet/blob/master/tests/lint.py
MIT
def _print_summary_map(strm, result_map, ftype): """Print summary of certain result map.""" if len(result_map) == 0: return 0 npass = len([x for k, x in result_map.items() if len(x) == 0]) strm.write('=====%d/%d %s files passed check=====\n' % (npass, len(result_map), ftype)) for fname, emap in result_map.items(): if len(emap) == 0: continue strm.write('%s: %d Errors of %d Categories map=%s\n' % ( fname, sum(emap.values()), len(emap), str(emap))) return len(result_map) - npass
Print summary of certain result map.
_print_summary_map
python
junfu1115/DANet
tests/lint.py
https://github.com/junfu1115/DANet/blob/master/tests/lint.py
MIT
def get_header_guard_dmlc(filename): """Get Header Guard Convention for DMLC Projects. For headers in include, directly use the path For headers in src, use project name plus path Examples: with project-name = dmlc include/dmlc/timer.h -> DMLC_TIMER_H_ src/io/libsvm_parser.h -> DMLC_IO_LIBSVM_PARSER_H_ """ fileinfo = cpplint.FileInfo(filename) file_path_from_root = fileinfo.RepositoryName() inc_list = ['include', 'api', 'wrapper', 'contrib'] if os.name == 'nt': inc_list.append("mshadow") if file_path_from_root.find('src/') != -1 and _HELPER.project_name is not None: idx = file_path_from_root.find('src/') file_path_from_root = _HELPER.project_name + file_path_from_root[idx + 3:] else: idx = file_path_from_root.find("include/") if idx != -1: file_path_from_root = file_path_from_root[idx + 8:] for spath in inc_list: prefix = spath + '/' if file_path_from_root.startswith(prefix): file_path_from_root = re.sub('^' + prefix, '', file_path_from_root) break return re.sub(r'[-./\s]', '_', file_path_from_root).upper() + '_'
Get Header Guard Convention for DMLC Projects. For headers in include, directly use the path For headers in src, use project name plus path Examples: with project-name = dmlc include/dmlc/timer.h -> DMLC_TIMER_H_ src/io/libsvm_parser.h -> DMLC_IO_LIBSVM_PARSER_H_
get_header_guard_dmlc
python
junfu1115/DANet
tests/lint.py
https://github.com/junfu1115/DANet/blob/master/tests/lint.py
MIT
def __init__(self, caption_track: Dict): """Construct a :class:`Caption <Caption>`. :param dict caption_track: Caption track data extracted from ``watch_html``. """ self.url = caption_track.get("baseUrl") # Certain videos have runs instead of simpleText # this handles that edge case name_dict = caption_track['name'] if 'simpleText' in name_dict: self.name = name_dict['simpleText'] else: for el in name_dict['runs']: if 'text' in el: self.name = el['text'] # Use "vssId" instead of "languageCode", fix issue #779 self.code = caption_track["vssId"] # Remove preceding '.' for backwards compatibility, e.g.: # English -> vssId: .en, languageCode: en # English (auto-generated) -> vssId: a.en, languageCode: en self.code = self.code.strip('.')
Construct a :class:`Caption <Caption>`. :param dict caption_track: Caption track data extracted from ``watch_html``.
__init__
python
pytube/pytube
pytube/captions.py
https://github.com/pytube/pytube/blob/master/pytube/captions.py
Unlicense
def json_captions(self) -> dict: """Download and parse the json caption tracks.""" json_captions_url = self.url.replace('fmt=srv3','fmt=json3') text = request.get(json_captions_url) parsed = json.loads(text) assert parsed['wireMagic'] == 'pb3', 'Unexpected captions format' return parsed
Download and parse the json caption tracks.
json_captions
python
pytube/pytube
pytube/captions.py
https://github.com/pytube/pytube/blob/master/pytube/captions.py
Unlicense
def float_to_srt_time_format(d: float) -> str: """Convert decimal durations into proper srt format. :rtype: str :returns: SubRip Subtitle (str) formatted time duration. float_to_srt_time_format(3.89) -> '00:00:03,890' """ fraction, whole = math.modf(d) time_fmt = time.strftime("%H:%M:%S,", time.gmtime(whole)) ms = f"{fraction:.3f}".replace("0.", "") return time_fmt + ms
Convert decimal durations into proper srt format. :rtype: str :returns: SubRip Subtitle (str) formatted time duration. float_to_srt_time_format(3.89) -> '00:00:03,890'
float_to_srt_time_format
python
pytube/pytube
pytube/captions.py
https://github.com/pytube/pytube/blob/master/pytube/captions.py
Unlicense
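Since the helper is a pure function of its float argument, it can be checked in isolation (reproduced standalone below). Note that `time.gmtime` wraps at 24 hours, so durations of a day or more would roll over:

```python
import math
import time

def float_to_srt_time_format(d: float) -> str:
    # Same logic as the method above, reproduced standalone for a quick check.
    fraction, whole = math.modf(d)
    time_fmt = time.strftime("%H:%M:%S,", time.gmtime(whole))
    ms = f"{fraction:.3f}".replace("0.", "")
    return time_fmt + ms

print(float_to_srt_time_format(3.89))    # 00:00:03,890
print(float_to_srt_time_format(3671.5))  # 01:01:11,500
```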
def xml_caption_to_srt(self, xml_captions: str) -> str: """Convert xml caption tracks to "SubRip Subtitle (srt)". :param str xml_captions: XML formatted caption tracks. """ segments = []
    root = ElementTree.fromstring(xml_captions)
    for i, child in enumerate(list(root)):
        text = child.text or ""
        # Collapse newlines, then collapse the double spaces that leaves behind.
        caption = unescape(text.replace("\n", " ").replace("  ", " "))
        try:
            duration = float(child.attrib["dur"])
        except KeyError:
            duration = 0.0
        start = float(child.attrib["start"])
        end = start + duration
        sequence_number = i + 1  # convert from 0-indexed to 1.
        line = "{seq}\n{start} --> {end}\n{text}\n".format(
            seq=sequence_number,
            start=self.float_to_srt_time_format(start),
            end=self.float_to_srt_time_format(end),
            text=caption,
        )
        segments.append(line)
    return "\n".join(segments).strip()
Convert xml caption tracks to "SubRip Subtitle (srt)". :param str xml_captions: XML formatted caption tracks.
xml_caption_to_srt
python
pytube/pytube
pytube/captions.py
https://github.com/pytube/pytube/blob/master/pytube/captions.py
Unlicense
def download(
    self,
    title: str,
    srt: bool = True,
    output_path: Optional[str] = None,
    filename_prefix: Optional[str] = None,
) -> str: """Write the media stream to disk. :param title: Output filename (stem only) for writing media file. If one is not specified, the default filename is used. :type title: str :param srt: Set to True to download srt, false to download xml. Defaults to True. :type srt: bool :param output_path: (optional) Output path for writing media file. If one is not specified, defaults to the current working directory. :type output_path: str or None :param filename_prefix: (optional) A string that will be prepended to the filename. For example a number in a playlist or the name of a series. If one is not specified, nothing will be prepended. This is separate from filename so you can use the default filename but still add a prefix. :type filename_prefix: str or None :rtype: str """ if title.endswith(".srt") or title.endswith(".xml"):
        filename = ".".join(title.split(".")[:-1])
    else:
        filename = title

    if filename_prefix:
        # Prepend the prefix to the filename stem.
        filename = f"{safe_filename(filename_prefix)}{filename}"

    filename = safe_filename(filename)

    filename += f" ({self.code})"

    if srt:
        filename += ".srt"
    else:
        filename += ".xml"

    file_path = os.path.join(target_directory(output_path), filename)

    with open(file_path, "w", encoding="utf-8") as file_handle:
        if srt:
            file_handle.write(self.generate_srt_captions())
        else:
            file_handle.write(self.xml_captions)

    return file_path
Write the media stream to disk. :param title: Output filename (stem only) for writing media file. If one is not specified, the default filename is used. :type title: str :param srt: Set to True to download srt, false to download xml. Defaults to True. :type srt: bool :param output_path: (optional) Output path for writing media file. If one is not specified, defaults to the current working directory. :type output_path: str or None :param filename_prefix: (optional) A string that will be prepended to the filename. For example a number in a playlist or the name of a series. If one is not specified, nothing will be prepended. This is separate from filename so you can use the default filename but still add a prefix. :type filename_prefix: str or None :rtype: str
download
python
pytube/pytube
pytube/captions.py
https://github.com/pytube/pytube/blob/master/pytube/captions.py
Unlicense
def calculate_n(self, initial_n: list): """Converts n to the correct value to prevent throttling.""" if self.calculated_n: return self.calculated_n # First, update all instances of 'b' with the list(initial_n) for i in range(len(self.throttling_array)): if self.throttling_array[i] == 'b': self.throttling_array[i] = initial_n for step in self.throttling_plan: curr_func = self.throttling_array[int(step[0])] if not callable(curr_func): logger.debug(f'{curr_func} is not callable.') logger.debug(f'Throttling array:\n{self.throttling_array}\n') raise ExtractError(f'{curr_func} is not callable.') first_arg = self.throttling_array[int(step[1])] if len(step) == 2: curr_func(first_arg) elif len(step) == 3: second_arg = self.throttling_array[int(step[2])] curr_func(first_arg, second_arg) self.calculated_n = ''.join(initial_n) return self.calculated_n
Converts n to the correct value to prevent throttling.
calculate_n
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
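In effect, `calculate_n` is a tiny interpreter: the `'b'` placeholders are aliased to the mutable character list, and each plan step calls one array element with one or two other elements as arguments. The toy model below uses a made-up array and plan, not taken from any real `base.js`, to show the control flow:

```python
# Toy model of the interpreter loop: a made-up throttling array and plan.
def reverse_in_place(arr):
    arr.reverse()

def rotate_right(arr, e):
    e %= len(arr)
    arr[:] = arr[-e:] + arr[:-e]

initial_n = list("abcd")
throttling_array = [reverse_in_place, rotate_right, 'b', 1]

# Replace the 'b' placeholder with the live character list (aliasing, so
# the functions mutate initial_n itself).
throttling_array = [initial_n if el == 'b' else el for el in throttling_array]

# Each step is (callable index, arg index[, arg index]) into the array.
throttling_plan = [('0', '2'), ('1', '2', '3')]
for step in throttling_plan:
    fn = throttling_array[int(step[0])]
    args = [throttling_array[int(i)] for i in step[1:]]
    fn(*args)

print(''.join(initial_n))  # adcb
```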
def get_signature(self, ciphered_signature: str) -> str: """Decipher the signature. Taking the ciphered signature, applies the transform functions. :param str ciphered_signature: The ciphered signature sent in the ``player_config``. :rtype: str :returns: Decrypted signature required to download the media content. """ signature = list(ciphered_signature) for js_func in self.transform_plan: name, argument = self.parse_function(js_func) # type: ignore signature = self.transform_map[name](signature, argument) logger.debug( "applied transform function\n" "output: %s\n" "js_function: %s\n" "argument: %d\n" "function: %s", "".join(signature), name, argument, self.transform_map[name], ) return "".join(signature)
Decipher the signature. Taking the ciphered signature, applies the transform functions. :param str ciphered_signature: The ciphered signature sent in the ``player_config``. :rtype: str :returns: Decrypted signature required to download the media content.
get_signature
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
def parse_function(self, js_func: str) -> Tuple[str, int]: """Parse the Javascript transform function. Break a JavaScript transform function down into a two element ``tuple`` containing the function name and some integer-based argument. :param str js_func: The JavaScript version of the transform function. :rtype: tuple :returns: two element tuple containing the function name and an argument. **Example**: parse_function('DE.AJ(a,15)') ('AJ', 15) """ logger.debug("parsing transform function") for pattern in self.js_func_patterns: regex = re.compile(pattern) parse_match = regex.search(js_func) if parse_match: fn_name, fn_arg = parse_match.groups() return fn_name, int(fn_arg) raise RegexMatchError( caller="parse_function", pattern="js_func_patterns" )
Parse the Javascript transform function. Break a JavaScript transform function down into a two element ``tuple`` containing the function name and some integer-based argument. :param str js_func: The JavaScript version of the transform function. :rtype: tuple :returns: two element tuple containing the function name and an argument. **Example**: parse_function('DE.AJ(a,15)') ('AJ', 15)
parse_function
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
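The `js_func_patterns` themselves live on the `Cipher` instance and are not shown in this excerpt; a pattern of the shape they target can be sketched against the docstring's own example (the pattern below is illustrative, not the library's exact one):

```python
import re

# A hypothetical pattern of the shape js_func_patterns targets, applied to
# the docstring's example call 'DE.AJ(a,15)'.
pattern = r"\w+\.(\w+)\(\w,(\d+)\)"
m = re.search(pattern, "DE.AJ(a,15)")
print(m.group(1), int(m.group(2)))  # AJ 15
```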
def get_initial_function_name(js: str) -> str: """Extract the name of the function responsible for computing the signature. :param str js: The contents of the base.js asset file. :rtype: str :returns: Function name from regex match """

    function_patterns = [
        r"\b[cs]\s*&&\s*[adf]\.set\([^,]+\s*,\s*encodeURIComponent\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\(",  # noqa: E501
        r"\b[a-zA-Z0-9]+\s*&&\s*[a-zA-Z0-9]+\.set\([^,]+\s*,\s*encodeURIComponent\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\(",  # noqa: E501
        r'(?:\b|[^a-zA-Z0-9$])(?P<sig>[a-zA-Z0-9$]{2})\s*=\s*function\(\s*a\s*\)\s*{\s*a\s*=\s*a\.split\(\s*""\s*\)',  # noqa: E501
        r'(?P<sig>[a-zA-Z0-9$]+)\s*=\s*function\(\s*a\s*\)\s*{\s*a\s*=\s*a\.split\(\s*""\s*\)',  # noqa: E501
        r'(["\'])signature\1\s*,\s*(?P<sig>[a-zA-Z0-9$]+)\(',
        r"\.sig\|\|(?P<sig>[a-zA-Z0-9$]+)\(",
        r"yt\.akamaized\.net/\)\s*\|\|\s*.*?\s*[cs]\s*&&\s*[adf]\.set\([^,]+\s*,\s*(?:encodeURIComponent\s*\()?\s*(?P<sig>[a-zA-Z0-9$]+)\(",  # noqa: E501
        r"\b[cs]\s*&&\s*[adf]\.set\([^,]+\s*,\s*(?P<sig>[a-zA-Z0-9$]+)\(",  # noqa: E501
        r"\b[a-zA-Z0-9]+\s*&&\s*[a-zA-Z0-9]+\.set\([^,]+\s*,\s*(?P<sig>[a-zA-Z0-9$]+)\(",  # noqa: E501
        r"\bc\s*&&\s*a\.set\([^,]+\s*,\s*\([^)]*\)\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\(",  # noqa: E501
        r"\bc\s*&&\s*[a-zA-Z0-9]+\.set\([^,]+\s*,\s*\([^)]*\)\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\(",  # noqa: E501
    ]
    logger.debug("finding initial function name")
    for pattern in function_patterns:
        regex = re.compile(pattern)
        function_match = regex.search(js)
        if function_match:
            logger.debug("finished regex search, matched: %s", pattern)
            # Use the named group: for some patterns (e.g. the quoted
            # "signature" one) group(1) is a quote character, not the name.
            return function_match.group("sig")

    raise RegexMatchError(
        caller="get_initial_function_name", pattern="multiple"
    )
Extract the name of the function responsible for computing the signature. :param str js: The contents of the base.js asset file. :rtype: str :returns: Function name from regex match
get_initial_function_name
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
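For instance, the `a.split("")` pattern above matches a synthetic snippet shaped like the deciphering entry point (`xy` is a made-up function name):

```python
import re

# The a.split("") pattern from the list above, run against a synthetic
# snippet shaped like the deciphering entry point in base.js.
js = 'xy=function(a){a=a.split("");return a.join("")};'
pattern = r'(?P<sig>[a-zA-Z0-9$]+)\s*=\s*function\(\s*a\s*\)\s*{\s*a\s*=\s*a\.split\(\s*""\s*\)'
m = re.search(pattern, js)
print(m.group('sig'))  # xy
```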
def get_transform_plan(js: str) -> List[str]: """Extract the "transform plan". The "transform plan" is the functions that the ciphered signature is cycled through to obtain the actual signature. :param str js: The contents of the base.js asset file. **Example**: ['DE.AJ(a,15)', 'DE.VR(a,3)', 'DE.AJ(a,51)', 'DE.VR(a,3)', 'DE.kT(a,51)', 'DE.kT(a,8)', 'DE.VR(a,3)', 'DE.kT(a,21)'] """ name = re.escape(get_initial_function_name(js)) pattern = r"%s=function\(\w\){[a-z=\.\(\"\)]*;(.*);(?:.+)}" % name logger.debug("getting transform plan") return regex_search(pattern, js, group=1).split(";")
Extract the "transform plan". The "transform plan" is the functions that the ciphered signature is cycled through to obtain the actual signature. :param str js: The contents of the base.js asset file. **Example**: ['DE.AJ(a,15)', 'DE.VR(a,3)', 'DE.AJ(a,51)', 'DE.VR(a,3)', 'DE.kT(a,51)', 'DE.kT(a,8)', 'DE.VR(a,3)', 'DE.kT(a,21)']
get_transform_plan
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
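Run against a synthetic deciphering function, the plan regex captures the semicolon-separated transform calls between the initial `split` and the final `join` (`xy` stands in for the name returned by `get_initial_function_name`; the helper `regex_search` is replaced here by a plain `re.search`):

```python
import re

# The plan regex applied to a synthetic deciphering function.
name = "xy"
js = 'xy=function(a){a=a.split("");DE.AJ(a,15);DE.VR(a,3);return a.join("")}'
pattern = r"%s=function\(\w\){[a-z=\.\(\"\)]*;(.*);(?:.+)}" % name
plan = re.search(pattern, js).group(1).split(";")
print(plan)  # ['DE.AJ(a,15)', 'DE.VR(a,3)']
```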
def get_transform_object(js: str, var: str) -> List[str]: """Extract the "transform object". The "transform object" contains the function definitions referenced in the "transform plan". The ``var`` argument is the obfuscated variable name which contains these functions, for example, given the function call ``DE.AJ(a,15)`` returned by the transform plan, "DE" would be the var. :param str js: The contents of the base.js asset file. :param str var: The obfuscated variable name that stores an object with all functions that descrambles the signature. **Example**: >>> get_transform_object(js, 'DE') ['AJ:function(a){a.reverse()}', 'VR:function(a,b){a.splice(0,b)}', 'kT:function(a,b){var c=a[0];a[0]=a[b%a.length];a[b]=c}'] """ pattern = r"var %s={(.*?)};" % re.escape(var) logger.debug("getting transform object") regex = re.compile(pattern, flags=re.DOTALL) transform_match = regex.search(js) if not transform_match: raise RegexMatchError(caller="get_transform_object", pattern=pattern) return transform_match.group(1).replace("\n", " ").split(", ")
Extract the "transform object". The "transform object" contains the function definitions referenced in the "transform plan". The ``var`` argument is the obfuscated variable name which contains these functions, for example, given the function call ``DE.AJ(a,15)`` returned by the transform plan, "DE" would be the var. :param str js: The contents of the base.js asset file. :param str var: The obfuscated variable name that stores an object with all functions that descrambles the signature. **Example**: >>> get_transform_object(js, 'DE') ['AJ:function(a){a.reverse()}', 'VR:function(a,b){a.splice(0,b)}', 'kT:function(a,b){var c=a[0];a[0]=a[b%a.length];a[b]=c}']
get_transform_object
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
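Against a synthetic transform object, the regex behaves exactly as the docstring's example shows:

```python
import re

# The object regex applied to a synthetic transform object named 'DE'.
js = 'var DE={AJ:function(a){a.reverse()}, VR:function(a,b){a.splice(0,b)}};'
pattern = r"var %s={(.*?)};" % re.escape("DE")
match = re.search(pattern, js, flags=re.DOTALL)
print(match.group(1).split(", "))
# ['AJ:function(a){a.reverse()}', 'VR:function(a,b){a.splice(0,b)}']
```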
def get_transform_map(js: str, var: str) -> Dict: """Build a transform function lookup. Build a lookup table of obfuscated JavaScript function names to the Python equivalents. :param str js: The contents of the base.js asset file. :param str var: The obfuscated variable name that stores an object with all functions that descrambles the signature. """ transform_object = get_transform_object(js, var) mapper = {} for obj in transform_object: # AJ:function(a){a.reverse()} => AJ, function(a){a.reverse()} name, function = obj.split(":", 1) fn = map_functions(function) mapper[name] = fn return mapper
Build a transform function lookup. Build a lookup table of obfuscated JavaScript function names to the Python equivalents. :param str js: The contents of the base.js asset file. :param str var: The obfuscated variable name that stores an object with all functions that descrambles the signature.
get_transform_map
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
def get_throttling_function_name(js: str) -> str: """Extract the name of the function that computes the throttling parameter. :param str js: The contents of the base.js asset file. :rtype: str :returns: The name of the function used to compute the throttling parameter. """ function_patterns = [ # https://github.com/ytdl-org/youtube-dl/issues/29326#issuecomment-865985377 # https://github.com/yt-dlp/yt-dlp/commit/48416bc4a8f1d5ff07d5977659cb8ece7640dcd8 # var Bpa = [iha]; # ... # a.C && (b = a.get("n")) && (b = Bpa[0](b), a.set("n", b), # Bpa.length || iha("")) }}; # In the above case, `iha` is the relevant function name r'a\.[a-zA-Z]\s*&&\s*\([a-z]\s*=\s*a\.get\("n"\)\)\s*&&\s*' r'\([a-z]\s*=\s*([a-zA-Z0-9$]+)(\[\d+\])?\([a-z]\)', ] logger.debug('Finding throttling function name') for pattern in function_patterns: regex = re.compile(pattern) function_match = regex.search(js) if function_match: logger.debug("finished regex search, matched: %s", pattern) if len(function_match.groups()) == 1: return function_match.group(1) idx = function_match.group(2) if idx: idx = idx.strip("[]") array = re.search( r'var {nfunc}\s*=\s*(\[.+?\]);'.format( nfunc=re.escape(function_match.group(1))), js ) if array: array = array.group(1).strip("[]").split(",") array = [x.strip() for x in array] return array[int(idx)] raise RegexMatchError( caller="get_throttling_function_name", pattern="multiple" )
Extract the name of the function that computes the throttling parameter. :param str js: The contents of the base.js asset file. :rtype: str :returns: The name of the function used to compute the throttling parameter.
get_throttling_function_name
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
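Both stages, the regex match and the array lookup it falls back to when the name is indexed, can be traced on a synthetic fragment in the `Bpa`/`iha` shape described in the comment above (both names are the made-up ones from that comment):

```python
import re

# Both stages traced on a synthetic base.js fragment.
js = 'var Bpa=[iha];a.C&&(b=a.get("n"))&&(b=Bpa[0](b),a.set("n",b),Bpa.length||iha(""))'
pattern = (r'a\.[a-zA-Z]\s*&&\s*\([a-z]\s*=\s*a\.get\("n"\)\)\s*&&\s*'
           r'\([a-z]\s*=\s*([a-zA-Z0-9$]+)(\[\d+\])?\([a-z]\)')
m = re.search(pattern, js)
name, idx = m.group(1), m.group(2).strip("[]")
# The match gives 'Bpa[0]', so resolve index 0 of the Bpa array literal.
array = re.search(r'var %s\s*=\s*(\[.+?\]);' % re.escape(name), js)
entries = [x.strip() for x in array.group(1).strip("[]").split(",")]
print(entries[int(idx)])  # iha
```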
def get_throttling_function_code(js: str) -> str: """Extract the raw code for the throttling function. :param str js: The contents of the base.js asset file. :rtype: str :returns: The full code of the function that computes the throttling parameter. """ # Begin by extracting the correct function name
    name = re.escape(get_throttling_function_name(js))

    # Identify where the function is defined
    pattern_start = r"%s=function\(\w\)" % name
    regex = re.compile(pattern_start)
    match = regex.search(js)

    # Extract the code within curly braces for the function itself, and merge any split lines
    code_lines_list = find_object_from_startpoint(js, match.span()[1]).split('\n')
    joined_lines = "".join(code_lines_list)

    # Prepend function definition (e.g. `Dea=function(a)`)
    return match.group(0) + joined_lines
Extract the raw code for the throttling function. :param str js: The contents of the base.js asset file. :rtype: str :returns: The full code of the function that computes the throttling parameter.
get_throttling_function_code
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
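`find_object_from_startpoint` is not shown in this excerpt; at its core it is a brace matcher that returns the balanced `{...}` span starting at a given index. A minimal sketch under that assumption (unlike the real helper, this ignores braces that appear inside JS string literals or regexes):

```python
def find_object_from_startpoint(html, start):
    # Minimal sketch: scan from an opening brace and return the balanced
    # {...} span.
    assert html[start] == '{'
    depth = 0
    for i in range(start, len(html)):
        if html[i] == '{':
            depth += 1
        elif html[i] == '}':
            depth -= 1
            if depth == 0:
                return html[start:i + 1]
    raise ValueError("unbalanced braces")

js = 'Dea=function(a){var b={x:1};return b}'
print(find_object_from_startpoint(js, js.index('{')))  # {var b={x:1};return b}
```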
def get_throttling_function_array(js: str) -> List[Any]: """Extract the "c" array. :param str js: The contents of the base.js asset file. :returns: The array of various integers, arrays, and functions. """ raw_code = get_throttling_function_code(js) array_start = r",c=\[" array_regex = re.compile(array_start) match = array_regex.search(raw_code) array_raw = find_object_from_startpoint(raw_code, match.span()[1] - 1) str_array = throttling_array_split(array_raw) converted_array = [] for el in str_array: try: converted_array.append(int(el)) continue except ValueError: # Not an integer value. pass if el == 'null': converted_array.append(None) continue if el.startswith('"') and el.endswith('"'): # Convert e.g. '"abcdef"' to string without quotation marks, 'abcdef' converted_array.append(el[1:-1]) continue if el.startswith('function'): mapper = ( (r"{for\(\w=\(\w%\w\.length\+\w\.length\)%\w\.length;\w--;\)\w\.unshift\(\w.pop\(\)\)}", throttling_unshift), # noqa:E501 (r"{\w\.reverse\(\)}", throttling_reverse), (r"{\w\.push\(\w\)}", throttling_push), (r";var\s\w=\w\[0\];\w\[0\]=\w\[\w\];\w\[\w\]=\w}", throttling_swap), (r"case\s\d+", throttling_cipher_function), (r"\w\.splice\(0,1,\w\.splice\(\w,1,\w\[0\]\)\[0\]\)", throttling_nested_splice), # noqa:E501 (r";\w\.splice\(\w,1\)}", js_splice), (r"\w\.splice\(-\w\)\.reverse\(\)\.forEach\(function\(\w\){\w\.unshift\(\w\)}\)", throttling_prepend), # noqa:E501 (r"for\(var \w=\w\.length;\w;\)\w\.push\(\w\.splice\(--\w,1\)\[0\]\)}", throttling_reverse), # noqa:E501 ) found = False for pattern, fn in mapper: if re.search(pattern, el): converted_array.append(fn) found = True if found: continue converted_array.append(el) # Replace null elements with array itself for i in range(len(converted_array)): if converted_array[i] is None: converted_array[i] = converted_array return converted_array
Extract the "c" array. :param str js: The contents of the base.js asset file. :returns: The array of various integers, arrays, and functions.
get_throttling_function_array
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
def get_throttling_plan(js: str): """Extract the "throttling plan". The "throttling plan" is a list of tuples used for calling functions in the c array. The first element of the tuple is the index of the function to call, and any remaining elements of the tuple are arguments to pass to that function. :param str js: The contents of the base.js asset file. :returns: The steps of the throttling plan, as tuples of indices into the "c" array. """ raw_code = get_throttling_function_code(js)

    transform_start = r"try{"
    plan_regex = re.compile(transform_start)
    match = plan_regex.search(raw_code)

    transform_plan_raw = find_object_from_startpoint(raw_code, match.span()[1] - 1)

    # Steps are either c[x](c[y]) or c[x](c[y],c[z])
    step_start = r"c\[(\d+)\]\(c\[(\d+)\](,c(\[(\d+)\]))?\)"
    step_regex = re.compile(step_start)
    matches = step_regex.findall(transform_plan_raw)
    transform_steps = []
    for match in matches:
        if match[4] != '':
            transform_steps.append((match[0], match[1], match[4]))
        else:
            transform_steps.append((match[0], match[1]))

    return transform_steps
Extract the "throttling plan". The "throttling plan" is a list of tuples used for calling functions in the c array. The first element of the tuple is the index of the function to call, and any remaining elements of the tuple are arguments to pass to that function. :param str js: The contents of the base.js asset file. :returns: The steps of the throttling plan, as tuples of indices into the "c" array.
get_throttling_plan
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
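The step regex can be traced on a synthetic try-block body; note that `findall` returns five-element tuples here, and index 4 (the innermost group) holds the optional second argument's index:

```python
import re

# The step regex applied to a synthetic try-block body.
raw = "try{c[40](c[14],c[62]),c[29](c[5])}catch(d){return"
step_regex = re.compile(r"c\[(\d+)\]\(c\[(\d+)\](,c(\[(\d+)\]))?\)")
steps = []
for m in step_regex.findall(raw):
    # m[4] is the innermost (\d+) of the optional third operand.
    steps.append((m[0], m[1], m[4]) if m[4] != '' else (m[0], m[1]))
print(steps)  # [('40', '14', '62'), ('29', '5')]
```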
def swap(arr: List, b: int): """Swap positions at b modulus the list length. This function is equivalent to: .. code-block:: javascript function(a, b) { var c=a[0];a[0]=a[b%a.length];a[b]=c } **Example**: >>> swap([1, 2, 3, 4], 2) [3, 2, 1, 4] """ r = b % len(arr) return list(chain([arr[r]], arr[1:r], [arr[0]], arr[r + 1 :]))
Swap positions at b modulus the list length. This function is equivalent to: .. code-block:: javascript function(a, b) { var c=a[0];a[0]=a[b%a.length];a[b]=c } **Example**: >>> swap([1, 2, 3, 4], 2) [3, 2, 1, 4]
swap
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
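Since `swap` returns a new list rather than mutating its argument, the docstring's example can be checked directly (function reproduced standalone here):

```python
from itertools import chain

def swap(arr, b):
    # Same pure-function logic as above, reproduced standalone.
    r = b % len(arr)
    return list(chain([arr[r]], arr[1:r], [arr[0]], arr[r + 1:]))

print(swap([1, 2, 3, 4], 2))  # [3, 2, 1, 4]
print(swap([1, 2, 3, 4], 6))  # [3, 2, 1, 4] -- 6 % 4 == 2
```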
def throttling_reverse(arr: list): """Reverses the input list. Needs to do an in-place reversal so that the passed list gets changed. To accomplish this, we create a reversed copy, and then change each individual element. """ reverse_copy = arr.copy()[::-1]

    for i in range(len(reverse_copy)):
        arr[i] = reverse_copy[i]
Reverses the input list. Needs to do an in-place reversal so that the passed list gets changed. To accomplish this, we create a reversed copy, and then change each individual element.
throttling_reverse
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
def throttling_unshift(d: list, e: int): """Rotates the elements of the list to the right. In the javascript, the operation is as follows: for(e=(e%d.length+d.length)%d.length;e--;)d.unshift(d.pop()) """ e = throttling_mod_func(d, e) new_arr = d[-e:] + d[:-e] d.clear() for el in new_arr: d.append(el)
Rotates the elements of the list to the right. In the javascript, the operation is as follows: for(e=(e%d.length+d.length)%d.length;e--;)d.unshift(d.pop())
throttling_unshift
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
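`throttling_mod_func` is not shown in this excerpt; assuming it is the JS-style non-negative modulus suggested by the loop condition `(e%d.length+d.length)%d.length`, the rotation can be checked standalone:

```python
def throttling_mod_func(d, e):
    # Assumed JS-style non-negative modulus (this helper is not shown in
    # the excerpt above).
    return (e % len(d) + len(d)) % len(d)

def throttling_unshift(d, e):
    e = throttling_mod_func(d, e)
    new_arr = d[-e:] + d[:-e]  # rotate right by e
    d.clear()
    for el in new_arr:
        d.append(el)

d = [1, 2, 3, 4, 5]
throttling_unshift(d, 2)
print(d)  # [4, 5, 1, 2, 3]
```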
def throttling_cipher_function(d: list, e: str): """This ciphers d with e to generate a new list. In the javascript, the operation is as follows: var h = [A-Za-z0-9-_], f = 96; // simplified from switch-case loop d.forEach( function(l,m,n){ this.push( n[m]=h[ (h.indexOf(l)-h.indexOf(this[m])+m-32+f--)%h.length ] ) }, e.split("") ) """ h = list('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_') f = 96 # by naming it "this" we can more closely reflect the js this = list(e) # This is so we don't run into weirdness with enumerate while # we change the input list copied_list = d.copy() for m, l in enumerate(copied_list): bracket_val = (h.index(l) - h.index(this[m]) + m - 32 + f) % len(h) this.append( h[bracket_val] ) d[m] = h[bracket_val] f -= 1
This ciphers d with e to generate a new list. In the javascript, the operation is as follows: var h = [A-Za-z0-9-_], f = 96; // simplified from switch-case loop d.forEach( function(l,m,n){ this.push( n[m]=h[ (h.indexOf(l)-h.indexOf(this[m])+m-32+f--)%h.length ] ) }, e.split("") )
throttling_cipher_function
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
def throttling_nested_splice(d: list, e: int): """Nested splice function in throttling js. In the javascript, the operation is as follows: function(d,e){ e=(e%d.length+d.length)%d.length; d.splice( 0, 1, d.splice( e, 1, d[0] )[0] ) } While testing, all this seemed to do is swap element 0 and e, but the actual process is preserved in case there was an edge case that was not considered. """ e = throttling_mod_func(d, e) inner_splice = js_splice( d, e, 1, d[0] ) js_splice( d, 0, 1, inner_splice[0] )
Nested splice function in throttling js. In the javascript, the operation is as follows: function(d,e){ e=(e%d.length+d.length)%d.length; d.splice( 0, 1, d.splice( e, 1, d[0] )[0] ) } While testing, all this seemed to do is swap element 0 and e, but the actual process is preserved in case there was an edge case that was not considered.
throttling_nested_splice
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
def throttling_prepend(d: list, e: int): """ In the javascript, the operation is as follows: function(d,e){ e=(e%d.length+d.length)%d.length; d.splice(-e).reverse().forEach( function(f){ d.unshift(f) } ) } Effectively, this moves the last e elements of d to the beginning. """ start_len = len(d) # First, calculate e e = throttling_mod_func(d, e) # Then do the prepending new_arr = d[-e:] + d[:-e] # And update the input list d.clear() for el in new_arr: d.append(el) end_len = len(d) assert start_len == end_len
In the javascript, the operation is as follows: function(d,e){ e=(e%d.length+d.length)%d.length; d.splice(-e).reverse().forEach( function(f){ d.unshift(f) } ) } Effectively, this moves the last e elements of d to the beginning.
throttling_prepend
python
pytube/pytube
pytube/cipher.py
https://github.com/pytube/pytube/blob/master/pytube/cipher.py
Unlicense
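Under the same assumption about `throttling_mod_func` (the JS-style non-negative modulus, not shown in this excerpt), the prepend reduces to the same right-rotation as `throttling_unshift`, and the length invariant asserted at the end of the function holds by construction:

```python
def throttling_mod_func(d, e):
    # Assumed JS-style non-negative modulus (helper not shown above).
    return (e % len(d) + len(d)) % len(d)

def throttling_prepend(d, e):
    start_len = len(d)
    e = throttling_mod_func(d, e)
    # Move the last e elements of d to the beginning.
    new_arr = d[-e:] + d[:-e]
    d.clear()
    for el in new_arr:
        d.append(el)
    assert len(d) == start_len

d = list('abcdef')
throttling_prepend(d, 8)  # 8 % 6 == 2: last two elements move to the front
print(d)  # ['e', 'f', 'a', 'b', 'c', 'd']
```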