| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def forward(
self,
hidden_states: torch.Tensor,
original_hidden_states: torch.Tensor,
layer_idx: int,
attention_mask: Optional[torch.Tensor] = None,
past_key_value: Optional[ZambaHybridDynamicCache] = None,
output_attentions: Optional[bool] = False,
use_ca... |
Args:
hidden_states (`torch.FloatTensor`): output of previous Mamba layer of shape `(batch, seq_len, embed_dim)`
original_hidden_states (`torch.FloatTensor`): word embedding output of shape `(batch, seq_len, embed_dim)`.
This is concatenated with `hidden_states` (which i... | forward | python | huggingface/transformers | src/transformers/models/zamba/modeling_zamba.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba/modeling_zamba.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
original_hidden_states: Optional[torch.Tensor] = None,
layer_idx: Optional[int] = None,
attention_mask: Optional[torch.Tensor] = None,
causal_mask: Optional[torch.Tensor] = None,
past_key_value: Optional[ZambaHybridD... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
`(batch, sequence_length)` where padding elements are indicated by 0.
past_key_value ... | forward | python | huggingface/transformers | src/transformers/models/zamba/modeling_zamba.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba/modeling_zamba.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
original_hidden_states: Optional[torch.Tensor] = None,
layer_idx: Optional[int] = None,
attention_mask: Optional[torch.Tensor] = None,
causal_mask: Optional[torch.Tensor] = None,
past_key_value: Optional[ZambaHybridD... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
original_hidden_states (`torch.FloatTensor`): word embedding output that will be concatenated with
hidden activations to form the input of the shared transformer layer.
... | forward | python | huggingface/transformers | src/transformers/models/zamba/modeling_zamba.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba/modeling_zamba.py | Apache-2.0 |
def _check_and_enable_flash_attn_2(
cls,
config,
torch_dtype: Optional[torch.dtype] = None,
device_map: Optional[Union[str, Dict[str, int]]] = None,
hard_check_only: bool = False,
check_device_map: bool = False,
):
"""
Overloads `PreTrainedModel._check... |
Overloads `PreTrainedModel._check_and_enable_flash_attn_2` so as to DISABLE Flash Attention 2 by default on Zamba models.
Flash attention 2 is currently not supported in the HuggingFace implementation of Zamba v1.
| _check_and_enable_flash_attn_2 | python | huggingface/transformers | src/transformers/models/zamba/modeling_zamba.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba/modeling_zamba.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[ZambaHybridDynamicCache] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
... | forward | python | huggingface/transformers | src/transformers/models/zamba/modeling_zamba.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba/modeling_zamba.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
inputs_embeds: Optional[torch.FloatTen... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`confi... | forward | python | huggingface/transformers | src/transformers/models/zamba/modeling_zamba.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba/modeling_zamba.py | Apache-2.0 |
def reorder_cache(self, beam_idx: torch.LongTensor):
"""Reorders the cache for beam search, given the selected beam indices."""
for layer_idx in range(len(self.key_cache)):
device = self.key_cache[layer_idx].device
self.key_cache[layer_idx] = self.key_cache[layer_idx].index_selec... | Reorders the cache for beam search, given the selected beam indices. | reorder_cache | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
def get_seq_length(self, layer_idx: Optional[int] = 0) -> int:
"""Returns the sequence length of the cached states. A layer index can be optionally passed."""
# take any layer that contains cache and not empty tensor
layer_idx = self.transformer_layers[0] if layer_idx not in self.transformer_lay... | Returns the sequence length of the cached states. A layer index can be optionally passed. | get_seq_length | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
"""
This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
"""
batch, num_key_value_... |
This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
| repeat_kv | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
def rotate_half(x):
"""Rotates half the hidden dims of the input."""
x1 = x[..., : x.shape[-1] // 2]
x2 = x[..., x.shape[-1] // 2 :]
return torch.cat((-x2, x1), dim=-1) | Rotates half the hidden dims of the input. | rotate_half | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
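`rotate_half` is one of the few rows above shown in full, so its behaviour can be demonstrated directly; a NumPy transcription of the same three lines:

```python
import numpy as np

def rotate_half(x: np.ndarray) -> np.ndarray:
    # Split the last dimension in half and swap the halves, negating the second.
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return np.concatenate((-x2, x1), axis=-1)

out = rotate_half(np.array([1.0, 2.0, 3.0, 4.0]))
# Halves (1, 2) and (3, 4) become (-3, -4) and (1, 2).
```

Applying it twice negates the input (`rotate_half(rotate_half(x)) == -x`), which is what makes it act as a 90° rotation in each 2-D subspace.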
def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
"""Applies Rotary Position Embedding to the query and key tensors.
Args:
q (`torch.Tensor`): The query tensor.
k (`torch.Tensor`): The key tensor.
cos (`torch.Tensor`): The cosine part of the rotary embedding.... | Applies Rotary Position Embedding to the query and key tensors.
Args:
q (`torch.Tensor`): The query tensor.
k (`torch.Tensor`): The key tensor.
cos (`torch.Tensor`): The cosine part of the rotary embedding.
sin (`torch.Tensor`): The sine part of the rotary embedding.
positio... | apply_rotary_pos_emb | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
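The `apply_rotary_pos_emb` row above is truncated, but the standard RoPE formulation it implements is `q * cos + rotate_half(q) * sin` (and likewise for `k`). A NumPy sketch verifying the defining property — the rotation preserves per-position norms:

```python
import numpy as np

def rotate_half(x):
    x1, x2 = x[..., : x.shape[-1] // 2], x[..., x.shape[-1] // 2 :]
    return np.concatenate((-x2, x1), axis=-1)

def apply_rotary_pos_emb(q, k, cos, sin):
    # Element-wise rotation in 2-D subspaces; cos/sin broadcast over batch/heads.
    return q * cos + rotate_half(q) * sin, k * cos + rotate_half(k) * sin

rng = np.random.default_rng(0)
q = rng.standard_normal((1, 1, 2, 4))   # (batch, heads, seq_len, head_dim)
k = rng.standard_normal((1, 1, 2, 4))
theta = rng.standard_normal((2, 2))     # one angle per (position, dim pair)
cos = np.cos(np.concatenate([theta, theta], axis=-1))
sin = np.sin(np.concatenate([theta, theta], axis=-1))
q_rot, k_rot = apply_rotary_pos_emb(q, k, cos, sin)
```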
def pad_tensor_by_size(input_tensor: torch.Tensor, pad_size: int):
"""
Padding the input tensor with `pad_size` on the seq_len dim (dim=1)
    Assumes that we only have tensors of rank 4 or 3
"""
pad_shape = (0, 0, 0, 0, 0, pad_size, 0, 0) if len(input_tensor.shape) == 4 else (0, 0, 0, pad_size, 0, 0)
... |
Padding the input tensor with `pad_size` on the seq_len dim (dim=1)
    Assumes that we only have tensors of rank 4 or 3
| pad_tensor_by_size | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
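The `pad_shape` tuples visible in the truncated code above zero-pad only the seq_len dimension (dim=1), on the right. An equivalent NumPy sketch:

```python
import numpy as np

def pad_tensor_by_size(input_tensor: np.ndarray, pad_size: int) -> np.ndarray:
    # Zero-pad seq_len (dim=1) on the right; rank-4 and rank-3 tensors supported,
    # mirroring the (0, 0, ..., 0, pad_size, 0, 0) torch.nn.functional.pad shapes.
    if input_tensor.ndim == 4:
        pad_width = [(0, 0), (0, pad_size), (0, 0), (0, 0)]
    else:
        pad_width = [(0, 0), (0, pad_size), (0, 0)]
    return np.pad(input_tensor, pad_width, mode="constant", constant_values=0)

padded3 = pad_tensor_by_size(np.ones((2, 5, 3)), 3)        # (2, 8, 3)
padded4 = pad_tensor_by_size(np.ones((2, 5, 3, 4)), 3)     # (2, 8, 3, 4)
```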
def reshape_into_chunks(input_tensor, pad_size, chunk_size):
"""
Padding input_tensor with `pad_size` on the seq_len dim (dim=1) and
simultaneously splitting it into chunk sequences.
Assumes that we only have tensors of rank 4 or 3
"""
# [bsz, seq_len, ...] -> [bsz, seq_len multiple of c... |
Padding input_tensor with `pad_size` on the seq_len dim (dim=1) and
simultaneously splitting it into chunk sequences.
Assumes that we only have tensors of rank 4 or 3
| reshape_into_chunks | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
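`reshape_into_chunks` pads seq_len so it divides the chunk size, then splits it into a chunk axis. A simplified NumPy sketch of that shape bookkeeping (the real Mamba2 code handles the 4-D head layout slightly differently):

```python
import numpy as np

def reshape_into_chunks(input_tensor: np.ndarray, pad_size: int, chunk_size: int) -> np.ndarray:
    # [bsz, seq_len, ...] -> [bsz, n_chunks, chunk_size, ...], zero-padding seq_len first.
    padded = np.pad(
        input_tensor,
        [(0, 0), (0, pad_size)] + [(0, 0)] * (input_tensor.ndim - 2),
    )
    bsz, new_len = padded.shape[0], padded.shape[1]
    return padded.reshape(bsz, new_len // chunk_size, chunk_size, *padded.shape[2:])

chunked = reshape_into_chunks(np.ones((2, 10, 4)), pad_size=2, chunk_size=4)
# 10 + 2 = 12 timesteps -> 3 chunks of 4
```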
def segment_sum(input_tensor):
"""
More stable segment sum calculation. Uses cumulative sums and masking instead of direct subtractions.
"""
chunk_size = input_tensor.size(-1)
# 1. expand input tensor to have an additional dimension and repeat along that dimension
# [..., chunk_size] -> [..., ch... |
More stable segment sum calculation. Uses cumulative sums and masking instead of direct subtractions.
| segment_sum | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
def __init__(self, config: Zamba2Config, num_fwd_mem_blocks=None, block_id: Optional[int] = None):
"""
This MLP layer contributes to tied transformer blocks aimed at increasing compute without increasing model size. Because this layer
is tied, un-tied adapter modules (formally same as LoRA, but ... |
This MLP layer contributes to tied transformer blocks aimed at increasing compute without increasing model size. Because this layer
is tied, un-tied adapter modules (formally same as LoRA, but used in the base model) are added to the up and gate projectors to increase expressivity with a small memory o... | __init__ | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
original_hidden_states: torch.Tensor,
layer_idx: int,
attention_mask: Optional[torch.Tensor] = None,
past_key_value: Optional[Zamba2HybridDynamicCache] = None,
output_attentions: Optional[bool] = False,
posit... |
Args:
hidden_states (`torch.FloatTensor`): output of previous Mamba layer of shape `(batch, seq_len, embed_dim)`
original_hidden_states (`torch.FloatTensor`): word embedding output of shape `(batch, seq_len, embed_dim)`.
This is concatenated with `hidden_states` (which i... | forward | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
original_hidden_states: Optional[torch.Tensor] = None,
layer_idx: Optional[int] = None,
attention_mask: Optional[torch.Tensor] = None,
causal_mask: Optional[torch.Tensor] = None,
past_key_value: Optional[Zamba2Hybrid... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
`(batch, sequence_length)` where padding elements are indicated by 0.
past_key_value ... | forward | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
original_hidden_states: Optional[torch.Tensor] = None,
layer_idx: Optional[int] = None,
attention_mask: Optional[torch.Tensor] = None,
causal_mask: Optional[torch.Tensor] = None,
past_key_value: Optional[Zamba2Hybrid... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
original_hidden_states (`torch.FloatTensor`): word embedding output that will be concatenated with
hidden activations to form the input of the shared transformer layer.
... | forward | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[Zamba2HybridDynamicCache] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
... | forward | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
inputs_embeds: Optional[torch.FloatTen... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`confi... | forward | python | huggingface/transformers | src/transformers/models/zamba2/modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modeling_zamba2.py | Apache-2.0 |
def get_seq_length(self, layer_idx: Optional[int] = 0) -> int:
"""Returns the sequence length of the cached states. A layer index can be optionally passed."""
# take any layer that contains cache and not empty tensor
layer_idx = self.transformer_layers[0] if layer_idx not in self.transformer_lay... | Returns the sequence length of the cached states. A layer index can be optionally passed. | get_seq_length | python | huggingface/transformers | src/transformers/models/zamba2/modular_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modular_zamba2.py | Apache-2.0 |
def __init__(self, config: Zamba2Config, num_fwd_mem_blocks=None, block_id: Optional[int] = None):
"""
This MLP layer contributes to tied transformer blocks aimed at increasing compute without increasing model size. Because this layer
is tied, un-tied adapter modules (formally same as LoRA, but ... |
This MLP layer contributes to tied transformer blocks aimed at increasing compute without increasing model size. Because this layer
is tied, un-tied adapter modules (formally same as LoRA, but used in the base model) are added to the up and gate projectors to increase expressivity with a small memory o... | __init__ | python | huggingface/transformers | src/transformers/models/zamba2/modular_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modular_zamba2.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
original_hidden_states: torch.Tensor,
layer_idx: int,
attention_mask: Optional[torch.Tensor] = None,
past_key_value: Optional[Zamba2HybridDynamicCache] = None,
output_attentions: Optional[bool] = False,
posit... |
Args:
hidden_states (`torch.FloatTensor`): output of previous Mamba layer of shape `(batch, seq_len, embed_dim)`
original_hidden_states (`torch.FloatTensor`): word embedding output of shape `(batch, seq_len, embed_dim)`.
This is concatenated with `hidden_states` (which i... | forward | python | huggingface/transformers | src/transformers/models/zamba2/modular_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modular_zamba2.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
original_hidden_states: Optional[torch.Tensor] = None,
layer_idx: Optional[int] = None,
attention_mask: Optional[torch.Tensor] = None,
causal_mask: Optional[torch.Tensor] = None,
past_key_value: Optional[Zamba2Hybrid... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
original_hidden_states (`torch.FloatTensor`): word embedding output that will be concatenated with
hidden activations to form the input of the shared transformer layer.
... | forward | python | huggingface/transformers | src/transformers/models/zamba2/modular_zamba2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zamba2/modular_zamba2.py | Apache-2.0 |
def convert_zoedepth_checkpoint(model_name, pytorch_dump_folder_path, push_to_hub):
"""
Copy/paste/tweak model's weights to our ZoeDepth structure.
"""
# define ZoeDepth configuration based on URL
config, _ = get_zoedepth_config(model_name)
# load original model
original_model = torch.hub.... |
Copy/paste/tweak model's weights to our ZoeDepth structure.
| convert_zoedepth_checkpoint | python | huggingface/transformers | src/transformers/models/zoedepth/convert_zoedepth_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/convert_zoedepth_to_hf.py | Apache-2.0 |
def resize(
self,
image: np.ndarray,
size: Dict[str, int],
keep_aspect_ratio: bool = False,
ensure_multiple_of: int = 1,
resample: PILImageResampling = PILImageResampling.BILINEAR,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_form... |
Resize an image to target size `(size["height"], size["width"])`. If `keep_aspect_ratio` is `True`, the image
is resized to the largest possible size such that the aspect ratio is preserved. If `ensure_multiple_of` is
set, the image is resized to a size that is a multiple of this value.
... | resize | python | huggingface/transformers | src/transformers/models/zoedepth/image_processing_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/image_processing_zoedepth.py | Apache-2.0 |
def pad_image(
self,
image: np.array,
mode: PaddingMode = PaddingMode.REFLECT,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
):
"""
Pad an image as done in the original ZoeDepth im... |
Pad an image as done in the original ZoeDepth implementation.
Padding fixes the boundary artifacts in the output depth map.
Boundary artifacts are sometimes caused by the fact that the model is trained on NYU raw dataset
which has a black or white border around the image. This function... | pad_image | python | huggingface/transformers | src/transformers/models/zoedepth/image_processing_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/image_processing_zoedepth.py | Apache-2.0 |
def preprocess(
self,
images: ImageInput,
do_pad: Optional[bool] = None,
do_rescale: Optional[bool] = None,
rescale_factor: Optional[float] = None,
do_normalize: Optional[bool] = None,
image_mean: Optional[Union[float, List[float]]] = None,
image_std: Opti... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
passing in images with pixel values between 0 and 1, set `do_rescale=False`.
... | preprocess | python | huggingface/transformers | src/transformers/models/zoedepth/image_processing_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/image_processing_zoedepth.py | Apache-2.0 |
def resize(
self,
images: "torch.Tensor",
size: SizeDict,
keep_aspect_ratio: bool = False,
ensure_multiple_of: int = 1,
interpolation: Optional["F.InterpolationMode"] = None,
) -> "torch.Tensor":
"""
Resize an image or batched images to target size `(si... |
Resize an image or batched images to target size `(size["height"], size["width"])`. If `keep_aspect_ratio` is `True`, the image
is resized to the largest possible size such that the aspect ratio is preserved. If `ensure_multiple_of` is
set, the image is resized to a size that is a multiple of th... | resize | python | huggingface/transformers | src/transformers/models/zoedepth/image_processing_zoedepth_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/image_processing_zoedepth_fast.py | Apache-2.0 |
def _pad_images(
self,
images: "torch.Tensor",
):
"""
Args:
image (`torch.Tensor`):
Image to pad.
"""
height, width = get_image_size(images, channel_dim=ChannelDimension.FIRST)
pad_height = int(np.sqrt(height / 2) * 3)
pad_... |
Args:
image (`torch.Tensor`):
Image to pad.
| _pad_images | python | huggingface/transformers | src/transformers/models/zoedepth/image_processing_zoedepth_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/image_processing_zoedepth_fast.py | Apache-2.0 |
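The `_pad_images` snippet above shows the padding heuristic explicitly: each spatial dimension is reflect-padded by `int(sqrt(dim / 2) * 3)` to suppress the border artifacts described in the `pad_image` docstring. A NumPy sketch for a single channels-first image (function name is illustrative):

```python
import numpy as np

def pad_image_reflect(image: np.ndarray) -> np.ndarray:
    # Reflect-pad H and W by int(sqrt(dim / 2) * 3), mirroring the ZoeDepth
    # heuristic that hides the black/white NYU-raw border from the model.
    _, height, width = image.shape  # (channels, height, width)
    pad_h = int(np.sqrt(height / 2) * 3)
    pad_w = int(np.sqrt(width / 2) * 3)
    return np.pad(image, [(0, 0), (pad_h, pad_h), (pad_w, pad_w)], mode="reflect")

padded = pad_image_reflect(np.ones((3, 480, 640)))
```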
def forward(self, hidden_states: List[torch.Tensor], patch_height, patch_width) -> List[torch.Tensor]:
"""
Args:
hidden_states (`List[torch.FloatTensor]`, each of shape `(batch_size, sequence_length + 1, hidden_size)`):
List of hidden states from the backbone.
"""
... |
Args:
hidden_states (`List[torch.FloatTensor]`, each of shape `(batch_size, sequence_length + 1, hidden_size)`):
List of hidden states from the backbone.
| forward | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def forward(self, hidden_states: List[torch.Tensor], patch_height, patch_width) -> List[torch.Tensor]:
"""
Args:
hidden_states (`List[torch.FloatTensor]`, each of shape `(batch_size, sequence_length, hidden_size)` or `(batch_size, hidden_size, height, width)`):
List of hidden... |
Args:
hidden_states (`List[torch.FloatTensor]`, each of shape `(batch_size, sequence_length, hidden_size)` or `(batch_size, hidden_size, height, width)`):
List of hidden states from the backbone.
| forward | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def __init__(self, n_classes=256, act=torch.softmax):
"""Compute log binomial distribution for n_classes
Args:
n_classes (`int`, *optional*, defaults to 256):
Number of output classes.
act (`torch.nn.Module`, *optional*, defaults to `torch.softmax`):
... | Compute log binomial distribution for n_classes
Args:
n_classes (`int`, *optional*, defaults to 256):
Number of output classes.
act (`torch.nn.Module`, *optional*, defaults to `torch.softmax`):
Activation function to apply to the output.
| __init__ | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def forward(self, probabilities, temperature=1.0, eps=1e-4):
"""Compute the log binomial distribution for probabilities.
Args:
probabilities (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`):
Tensor containing probabilities of each class.
temp... | Compute the log binomial distribution for probabilities.
Args:
probabilities (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`):
Tensor containing probabilities of each class.
temperature (`float` or `torch.Tensor` of shape `(batch_size, num_channels, ... | forward | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def __init__(
self,
config,
in_features,
condition_dim,
n_classes=256,
bottleneck_factor=2,
):
"""Per-pixel MLP followed by a Conditional Log Binomial softmax.
Args:
in_features (`int`):
Number of input channels in the main... | Per-pixel MLP followed by a Conditional Log Binomial softmax.
Args:
in_features (`int`):
Number of input channels in the main feature.
condition_dim (`int`):
Number of input channels in the condition feature.
n_classes (`int`, *optional*, defa... | __init__ | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def forward(self, main_feature, condition_feature):
"""
Args:
main_feature (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`):
Main feature.
condition_feature (torch.Tensor of shape `(batch_size, num_channels, height, width)`):
C... |
Args:
main_feature (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`):
Main feature.
condition_feature (torch.Tensor of shape `(batch_size, num_channels, height, width)`):
Condition feature.
Returns:
`torch.Tensor`:... | forward | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def __init__(self, config, n_bins=16, mlp_dim=256, min_depth=1e-3, max_depth=10):
"""Bin center regressor network.
Can be "normed" or "unnormed". If "normed", bin centers are bounded on the (min_depth, max_depth) interval.
Args:
config (`int`):
Model configuration.
... | Bin center regressor network.
Can be "normed" or "unnormed". If "normed", bin centers are bounded on the (min_depth, max_depth) interval.
Args:
config (`int`):
Model configuration.
n_bins (`int`, *optional*, defaults to 16):
Number of bin centers... | __init__ | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def forward(self, x):
"""
Returns tensor of bin_width vectors (centers). One vector b for every pixel
"""
x = self.conv1(x)
x = self.act1(x)
x = self.conv2(x)
bin_centers = self.act2(x)
if self.bin_centers_type == "normed":
bin_centers = bin_c... |
Returns tensor of bin_width vectors (centers). One vector b for every pixel
| forward | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
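For the "normed" case above, bounding bin centers on `(min_depth, max_depth)` is conventionally done (AdaBins-style) by normalising positive widths to sum to one, scaling them to the depth range, and taking cumulative midpoints. A NumPy sketch of that design — a hypothetical reconstruction, not the exact `transformers` code:

```python
import numpy as np

def normed_bin_centers(widths_logits: np.ndarray, min_depth: float = 1e-3, max_depth: float = 10.0) -> np.ndarray:
    # Positive, normalised widths partition (min_depth, max_depth);
    # centers are the midpoints of consecutive bin edges.
    w = np.maximum(widths_logits, 0.0) + 1e-3   # keep every width strictly positive
    w = w / w.sum()
    widths = (max_depth - min_depth) * w
    edges = np.concatenate(([min_depth], min_depth + np.cumsum(widths)))
    return 0.5 * (edges[:-1] + edges[1:])

centers = normed_bin_centers(np.ones(16))  # 16 evenly spaced centers in (1e-3, 10)
```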
def __init__(
self,
config,
n_bins,
n_attractors=16,
min_depth=1e-3,
max_depth=10,
memory_efficient=False,
):
"""
Attractor layer for bin centers. Bin centers are bounded on the interval (min_depth, max_depth)
"""
super().__init... |
Attractor layer for bin centers. Bin centers are bounded on the interval (min_depth, max_depth)
| __init__ | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def forward(self, x, prev_bin, prev_bin_embedding=None, interpolate=True):
"""
The forward pass of the attractor layer. This layer predicts the new bin centers based on the previous bin centers
and the attractor points (the latter are predicted by the MLP).
Args:
x (`torch.T... |
The forward pass of the attractor layer. This layer predicts the new bin centers based on the previous bin centers
and the attractor points (the latter are predicted by the MLP).
Args:
x (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`):
Feature ... | forward | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def __init__(
self,
config,
n_bins,
n_attractors=16,
min_depth=1e-3,
max_depth=10,
memory_efficient=True,
):
"""
Attractor layer for bin centers. Bin centers are unbounded
"""
super().__init__()
self.n_attractors = n_at... |
Attractor layer for bin centers. Bin centers are unbounded
| __init__ | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def forward(self, x, prev_bin, prev_bin_embedding=None, interpolate=True):
"""
The forward pass of the attractor layer. This layer predicts the new bin centers based on the previous bin centers
and the attractor points (the latter are predicted by the MLP).
Args:
x (`torch.T... |
The forward pass of the attractor layer. This layer predicts the new bin centers based on the previous bin centers
and the attractor points (the latter are predicted by the MLP).
Args:
x (`torch.Tensor` of shape (batch_size, num_channels, height, width)`):
Feature b... | forward | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def __init__(self, in_features, out_features, mlp_dim=128):
"""Projector MLP.
Args:
in_features (`int`):
Number of input channels.
out_features (`int`):
Number of output channels.
mlp_dim (`int`, *optional*, defaults to 128):
... | Projector MLP.
Args:
in_features (`int`):
Number of input channels.
out_features (`int`):
Number of output channels.
mlp_dim (`int`, *optional*, defaults to 128):
Hidden dimension.
| __init__ | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def __init__(self, config):
"""ViT-like transformer block
Args:
config (`ZoeDepthConfig`):
Model configuration class defining the model architecture.
"""
super().__init__()
in_channels = config.bottleneck_features
self.transformer_encoder = ... | ViT-like transformer block
Args:
config (`ZoeDepthConfig`):
Model configuration class defining the model architecture.
| __init__ | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def positional_encoding_1d(self, batch_size, sequence_length, embedding_dim, device="cpu", dtype=torch.float32):
"""Generate positional encodings
Args:
sequence_length (int): Sequence length
embedding_dim (int): Embedding dimension
Returns:
torch.Tensor: Pos... | Generate positional encodings
Args:
sequence_length (int): Sequence length
embedding_dim (int): Embedding dimension
Returns:
torch.Tensor: Positional encodings.
| positional_encoding_1d | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def forward(self, x):
"""Forward pass
Args:
x (torch.Tensor - NCHW): Input feature tensor
Returns:
torch.Tensor - Transformer output embeddings of shape (batch_size, sequence_length, embedding_dim)
"""
embeddings = self.embedding_convPxP(x).flatten(2) #... | Forward pass
Args:
x (torch.Tensor - NCHW): Input feature tensor
Returns:
torch.Tensor - Transformer output embeddings of shape (batch_size, sequence_length, embedding_dim)
| forward | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.FloatTensor,
labels: Optional[torch.LongTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple[torch.Tensor], DepthEstimatorOutp... |
labels (`torch.LongTensor` of shape `(batch_size, height, width)`, *optional*):
Ground truth depth estimation maps for computing the loss.
Examples:
```python
>>> from transformers import AutoImageProcessor, ZoeDepthForDepthEstimation
>>> import torch
>>> im... | forward | python | huggingface/transformers | src/transformers/models/zoedepth/modeling_zoedepth.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/zoedepth/modeling_zoedepth.py | Apache-2.0 |
def values_override(self) -> Optional[Mapping[str, Any]]:
"""
Dictionary of keys to override in the model's config before exporting
Returns:
Dictionary with the keys (and their corresponding values) to override
"""
if hasattr(self._config, "use_cache"):
r... |
Dictionary of keys to override in the model's config before exporting
Returns:
Dictionary with the keys (and their corresponding values) to override
| values_override | python | huggingface/transformers | src/transformers/onnx/config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/config.py | Apache-2.0 |
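A minimal sketch of the override logic above, using a stand-in config object (the real method inspects a `PretrainedConfig` instance):

```python
class StubConfig:
    """Stand-in for a PretrainedConfig carrying a `use_cache` flag."""
    use_cache = True

def values_override(config):
    # KV caching is disabled for ONNX export, since cached past states
    # would change the graph signature between decoding steps.
    if hasattr(config, "use_cache"):
        return {"use_cache": False}
    return None
```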
def is_torch_support_available(self) -> bool:
"""
Whether the installed PyTorch version meets the minimum required to export the model.
Returns:
`bool`: Whether the installed version of PyTorch is compatible with the model.
"""
if is_torch_available():
from transformers.utils import... |
Whether the installed PyTorch version meets the minimum required to export the model.
Returns:
`bool`: Whether the installed version of PyTorch is compatible with the model.
| is_torch_support_available | python | huggingface/transformers | src/transformers/onnx/config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/config.py | Apache-2.0 |
def use_external_data_format(num_parameters: int) -> bool:
"""
Flag indicating if the model requires using external data format
Args:
num_parameters: Number of parameters in the model
Returns:
True if model.num_parameters() * size_of(float32) >= 2Gb False otherwi... |
Flag indicating if the model requires using external data format
Args:
num_parameters: Number of parameters in the model
Returns:
True if model.num_parameters() * size_of(float32) >= 2 GB, False otherwise
| use_external_data_format | python | huggingface/transformers | src/transformers/onnx/config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/config.py | Apache-2.0 |
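The threshold in the record above comes from protobuf's 2 GB serialization cap: a float32 checkpoint larger than that must be exported with ONNX external data files. A self-contained sketch of that check:

```python
EXTERNAL_DATA_FORMAT_SIZE_LIMIT = 2 * (1024 ** 3)  # protobuf's 2 GB hard cap, in bytes
FLOAT32_SIZE = 4  # bytes per float32 parameter

def use_external_data_format(num_parameters: int) -> bool:
    # A float32 model whose serialized weights exceed the 2 GB protobuf
    # limit cannot fit in a single ONNX file.
    return num_parameters * FLOAT32_SIZE >= EXTERNAL_DATA_FORMAT_SIZE_LIMIT
```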
def generate_dummy_inputs(
self,
preprocessor: Union["PreTrainedTokenizerBase", "FeatureExtractionMixin", "ImageProcessingMixin"],
batch_size: int = -1,
seq_length: int = -1,
num_choices: int = -1,
is_pair: bool = False,
framework: Optional[TensorType] = None,
... |
Generate inputs to provide to the ONNX exporter for the specific framework
Args:
preprocessor: ([`PreTrainedTokenizerBase`], [`FeatureExtractionMixin`], or [`ImageProcessingMixin`]):
The preprocessor associated with this model configuration.
batch_size (`int`, *... | generate_dummy_inputs | python | huggingface/transformers | src/transformers/onnx/config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/config.py | Apache-2.0 |
def flatten_output_collection_property(cls, name: str, field: Iterable[Any]) -> Dict[str, Any]:
"""
Flatten any potential nested structure expanding the name of the field with the index of the element within the
structure.
Args:
name: The name of the nested structure
... |
Flatten any potential nested structure expanding the name of the field with the index of the element within the
structure.
Args:
name: The name of the nested structure
field: The structure to, potentially, be flattened
Returns:
(Dict[str, Any]): Out... | flatten_output_collection_property | python | huggingface/transformers | src/transformers/onnx/config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/config.py | Apache-2.0 |
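A simplified sketch of the flattening described above: each element's index is folded into the expanded field name (the real implementation also handles nested tuples, omitted here):

```python
def flatten_output_collection_property(name: str, field) -> dict:
    # Expand e.g. "past_key_values" + [a, b] into
    # {"past_key_values.0": a, "past_key_values.1": b}.
    return {f"{name}.{idx}": item for idx, item in enumerate(field)}

flat = flatten_output_collection_property("past_key_values", [("k0", "v0"), ("k1", "v1")])
```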
def num_layers(self) -> int:
"""
The number of layers attribute retrieved from the model config. Override this for model configs where the
number of layers attribute is not called `num_layers`.
"""
if not hasattr(self._config, "num_layers"):
raise AttributeError(
... |
The number of layers attribute retrieved from the model config. Override this for model configs where the
number of layers attribute is not called `num_layers`.
| num_layers | python | huggingface/transformers | src/transformers/onnx/config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/config.py | Apache-2.0 |
def num_attention_heads(self) -> int:
"""
The number of attention heads attribute retrieved from the model config. Override this for model configs where
the number of attention heads attribute is not called `num_attention_heads`.
"""
if not hasattr(self._config, "num_attention_he... |
The number of attention heads attribute retrieved from the model config. Override this for model configs where
the number of attention heads attribute is not called `num_attention_heads`.
| num_attention_heads | python | huggingface/transformers | src/transformers/onnx/config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/config.py | Apache-2.0 |
def fill_with_past_key_values_(
self, inputs_or_outputs: Mapping[str, Mapping[int, str]], direction: str, inverted_values_shape: bool = False
):
"""
Fill the `inputs_or_outputs` mapping with past_key_values dynamic axes, according to the given direction.
Args:
inputs_or_outputs: The mapping to f... |
Fill the `inputs_or_outputs` mapping with past_key_values dynamic axes, according to the given direction.
Args:
inputs_or_outputs: The mapping to fill.
direction: either "inputs" or "outputs", it specifies whether input_or_outputs is the input mapping or the
output mapping, this is impo... | fill_with_past_key_values_ | python | huggingface/transformers | src/transformers/onnx/config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/config.py | Apache-2.0 |
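A sketch of how that mapping is filled: inputs are conventionally named `past_key_values.*` and outputs `present.*`, with the batch and sequence axes marked dynamic (the exact axis labels here follow the common transformers convention and are an assumption for this sketch):

```python
def fill_with_past_key_values(inputs_or_outputs: dict, direction: str, num_layers: int) -> None:
    # "inputs" get "past_key_values.*" names, "outputs" get "present.*";
    # axis 0 (batch) and axis 2 (sequence) are dynamic, the rest static.
    if direction not in ("inputs", "outputs"):
        raise ValueError(f'direction must be "inputs" or "outputs", got {direction!r}')
    name = "past_key_values" if direction == "inputs" else "present"
    for i in range(num_layers):
        inputs_or_outputs[f"{name}.{i}.key"] = {0: "batch", 2: "past_sequence + sequence"}
        inputs_or_outputs[f"{name}.{i}.value"] = {0: "batch", 2: "past_sequence + sequence"}

common_inputs = {}
fill_with_past_key_values(common_inputs, "inputs", num_layers=2)
```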
def check_onnxruntime_requirements(minimum_version: Version):
"""
Check that onnxruntime is installed and that the installed version is recent enough
Raises:
ImportError: If onnxruntime is not installed or the installed version is too old
"""
try:
import onnxruntime
# Parse the vers... |
Check that onnxruntime is installed and that the installed version is recent enough
Raises:
ImportError: If onnxruntime is not installed or the installed version is too old
| check_onnxruntime_requirements | python | huggingface/transformers | src/transformers/onnx/convert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/convert.py | Apache-2.0 |
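A dependency-free sketch of the version gate above. The real check parses `onnxruntime.__version__` with `packaging.version.parse`; here a naive tuple comparison stands in (pre-release suffixes are ignored):

```python
def parse_version(version: str) -> tuple:
    # "1.4.0" -> (1, 4, 0); pre-release tags are dropped in this sketch.
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def check_onnxruntime_version(installed: str, minimum: str = "1.4.0") -> None:
    if parse_version(installed) < parse_version(minimum):
        raise ImportError(
            f"Found onnxruntime {installed}, but at least {minimum} is required "
            "to enable all the conversion options."
        )
```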
def export_pytorch(
preprocessor: Union["PreTrainedTokenizer", "FeatureExtractionMixin", "ProcessorMixin"],
model: "PreTrainedModel",
config: OnnxConfig,
opset: int,
output: Path,
tokenizer: Optional["PreTrainedTokenizer"] = None,
device: str = "cpu",
) -> Tuple[List[str], List[str]]:
""... |
Export a PyTorch model to an ONNX Intermediate Representation (IR)
Args:
preprocessor: ([`PreTrainedTokenizer`], [`FeatureExtractionMixin`] or [`ProcessorMixin`]):
The preprocessor used for encoding the data.
model ([`PreTrainedModel`]):
The model to export.
con... | export_pytorch | python | huggingface/transformers | src/transformers/onnx/convert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/convert.py | Apache-2.0 |
def export_tensorflow(
preprocessor: Union["PreTrainedTokenizer", "FeatureExtractionMixin"],
model: "TFPreTrainedModel",
config: OnnxConfig,
opset: int,
output: Path,
tokenizer: Optional["PreTrainedTokenizer"] = None,
) -> Tuple[List[str], List[str]]:
"""
Export a TensorFlow model to an ... |
Export a TensorFlow model to an ONNX Intermediate Representation (IR)
Args:
preprocessor: ([`PreTrainedTokenizer`] or [`FeatureExtractionMixin`]):
The preprocessor used for encoding the data.
model ([`TFPreTrainedModel`]):
The model to export.
config ([`~onnx.co... | export_tensorflow | python | huggingface/transformers | src/transformers/onnx/convert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/convert.py | Apache-2.0 |
def export(
preprocessor: Union["PreTrainedTokenizer", "FeatureExtractionMixin", "ProcessorMixin"],
model: Union["PreTrainedModel", "TFPreTrainedModel"],
config: OnnxConfig,
opset: int,
output: Path,
tokenizer: Optional["PreTrainedTokenizer"] = None,
device: str = "cpu",
) -> Tuple[List[str]... |
Export a PyTorch or TensorFlow model to an ONNX Intermediate Representation (IR)
Args:
preprocessor: ([`PreTrainedTokenizer`], [`FeatureExtractionMixin`] or [`ProcessorMixin`]):
The preprocessor used for encoding the data.
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
... | export | python | huggingface/transformers | src/transformers/onnx/convert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/convert.py | Apache-2.0 |
def supported_features_mapping(
*supported_features: str, onnx_config_cls: Optional[str] = None
) -> Dict[str, Callable[[PretrainedConfig], OnnxConfig]]:
"""
Generate the mapping between the supported features and their corresponding OnnxConfig for a given model.
Args:
*supported_features: The ... |
Generate the mapping between the supported features and their corresponding OnnxConfig for a given model.
Args:
*supported_features: The names of the supported features.
onnx_config_cls: The OnnxConfig full name corresponding to the model.
Returns:
The dictionary mapping a feature... | supported_features_mapping | python | huggingface/transformers | src/transformers/onnx/features.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/features.py | Apache-2.0 |
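A sketch of that feature-to-constructor mapping using `functools.partial`, with a stand-in config class (the real function binds a task-specific constructor of the named `OnnxConfig` subclass; the `-with-past` stripping shown here is an assumption for the sketch):

```python
from functools import partial

class StubOnnxConfig:
    """Stand-in for an OnnxConfig subclass."""
    def __init__(self, config, task="default"):
        self.config = config
        self.task = task

def supported_features_mapping(*supported_features, onnx_config_cls=None):
    if onnx_config_cls is None:
        raise ValueError("An OnnxConfig class must be provided")
    mapping = {}
    for feature in supported_features:
        # "causal-lm-with-past" and "causal-lm" share the same base task
        task = feature.replace("-with-past", "")
        mapping[feature] = partial(onnx_config_cls, task=task)
    return mapping

features = supported_features_mapping(
    "default", "causal-lm", "causal-lm-with-past", onnx_config_cls=StubOnnxConfig
)
```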
def get_supported_features_for_model_type(
model_type: str, model_name: Optional[str] = None
) -> Dict[str, Callable[[PretrainedConfig], OnnxConfig]]:
"""
Tries to retrieve the feature -> OnnxConfig constructor map from the model type.
Args:
model_type (`str`):
... |
Tries to retrieve the feature -> OnnxConfig constructor map from the model type.
Args:
model_type (`str`):
The model type to retrieve the supported features for.
model_name (`str`, *optional*):
The name attribute of the model object, only used fo... | get_supported_features_for_model_type | python | huggingface/transformers | src/transformers/onnx/features.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/features.py | Apache-2.0 |
def _validate_framework_choice(framework: str):
"""
Validates if the framework requested for the export is both correct and available, otherwise throws an
exception.
"""
if framework not in ["pt", "tf"]:
raise ValueError(
f"Only two frameworks are supp... |
Validates if the framework requested for the export is both correct and available, otherwise throws an
exception.
| _validate_framework_choice | python | huggingface/transformers | src/transformers/onnx/features.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/features.py | Apache-2.0 |
def get_model_class_for_feature(feature: str, framework: str = "pt") -> Type:
"""
Attempts to retrieve an AutoModel class from a feature name.
Args:
feature (`str`):
The feature required.
framework (`str`, *optional*, defaults to `"pt"`):
... |
Attempts to retrieve an AutoModel class from a feature name.
Args:
feature (`str`):
The feature required.
framework (`str`, *optional*, defaults to `"pt"`):
The framework to use for the export.
Returns:
The AutoModel class co... | get_model_class_for_feature | python | huggingface/transformers | src/transformers/onnx/features.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/features.py | Apache-2.0 |
def determine_framework(model: str, framework: Optional[str] = None) -> str:
"""
Determines the framework to use for the export.
The priority is in the following order:
1. User input via `framework`.
2. If local checkpoint is provided, use the same framework as the check... |
Determines the framework to use for the export.
The priority is in the following order:
1. User input via `framework`.
2. If local checkpoint is provided, use the same framework as the checkpoint.
3. Available framework in environment, with priority given to PyTorch... | determine_framework | python | huggingface/transformers | src/transformers/onnx/features.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/features.py | Apache-2.0 |
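The three-step priority above can be sketched as follows. The checkpoint filenames and the availability flags are assumptions standing in for the library's real constants and `is_torch_available()`/`is_tf_available()` checks:

```python
import os

WEIGHTS_NAME = "pytorch_model.bin"  # assumed PyTorch checkpoint filename
TF2_WEIGHTS_NAME = "tf_model.h5"    # assumed TensorFlow checkpoint filename

def determine_framework(model: str, framework=None, torch_available=True, tf_available=True) -> str:
    # 1. An explicit user choice always wins.
    if framework is not None:
        return framework
    # 2. For a local checkpoint, match the framework of the saved weights.
    if os.path.isdir(model):
        if os.path.isfile(os.path.join(model, WEIGHTS_NAME)):
            return "pt"
        if os.path.isfile(os.path.join(model, TF2_WEIGHTS_NAME)):
            return "tf"
    # 3. Otherwise fall back to whatever is installed, PyTorch first.
    if torch_available:
        return "pt"
    if tf_available:
        return "tf"
    raise EnvironmentError("Neither PyTorch nor TensorFlow is available.")
```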
def get_model_from_feature(
feature: str, model: str, framework: Optional[str] = None, cache_dir: Optional[str] = None
) -> Union["PreTrainedModel", "TFPreTrainedModel"]:
"""
Attempts to retrieve a model from a model's name and the feature to be enabled.
Args:
feature (`... |
Attempts to retrieve a model from a model's name and the feature to be enabled.
Args:
feature (`str`):
The feature required.
model (`str`):
The name of the model to export.
framework (`str`, *optional*, defaults to `None`):
... | get_model_from_feature | python | huggingface/transformers | src/transformers/onnx/features.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/features.py | Apache-2.0 |
def check_supported_model_or_raise(
model: Union["PreTrainedModel", "TFPreTrainedModel"], feature: str = "default"
) -> Tuple[str, Callable]:
"""
Check whether or not the model has the requested features.
Args:
model: The model to export.
feature: The name of... |
Check whether or not the model has the requested features.
Args:
model: The model to export.
feature: The name of the feature to check if it is available.
Returns:
(str) The type of the model (OnnxConfig) The OnnxConfig instance holding the model export pro... | check_supported_model_or_raise | python | huggingface/transformers | src/transformers/onnx/features.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/features.py | Apache-2.0 |
def get_preprocessor(model_name: str) -> Optional[Union["AutoTokenizer", "AutoFeatureExtractor", "AutoProcessor"]]:
"""
Gets a preprocessor (tokenizer, feature extractor or processor) that is available for `model_name`.
Args:
model_name (`str`): Name of the model for which a preprocessor are loaded... |
Gets a preprocessor (tokenizer, feature extractor or processor) that is available for `model_name`.
Args:
model_name (`str`): Name of the model for which a preprocessor is loaded.
Returns:
`Optional[Union[AutoTokenizer, AutoFeatureExtractor, AutoProcessor]]`:
If a processor i... | get_preprocessor | python | huggingface/transformers | src/transformers/onnx/utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/utils.py | Apache-2.0 |
def ffmpeg_read(bpayload: bytes, sampling_rate: int) -> np.ndarray:
"""
Helper function to read an audio file through ffmpeg.
"""
ar = f"{sampling_rate}"
ac = "1"
format_for_conversion = "f32le"
ffmpeg_command = [
"ffmpeg",
"-i",
"pipe:0",
"-ac",
ac,
... |
Helper function to read an audio file through ffmpeg.
| ffmpeg_read | python | huggingface/transformers | src/transformers/pipelines/audio_classification.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/audio_classification.py | Apache-2.0 |
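The snippet above shells out to ffmpeg and decodes its raw output. A stdlib sketch of the two halves: the command reconstruction (flag spelling past the visible `-ac` is an assumption based on the `f32le` format variable) and the little-endian float32 decode that `np.frombuffer` performs in the real code:

```python
import struct

def build_ffmpeg_read_command(sampling_rate: int) -> list:
    # Read the encoded file from stdin, downmix to mono, resample, and
    # write raw little-endian float32 samples to stdout.
    return [
        "ffmpeg",
        "-i", "pipe:0",
        "-ac", "1",
        "-ar", str(sampling_rate),
        "-f", "f32le",
        "-hide_banner",
        "-loglevel", "quiet",
        "pipe:1",
    ]

def decode_f32le(payload: bytes) -> list:
    # stdlib stand-in for np.frombuffer(payload, dtype=np.float32)
    return list(struct.unpack(f"<{len(payload) // 4}f", payload))
```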
def __call__(
self,
inputs: Union[np.ndarray, bytes, str],
**kwargs,
):
"""
Classify the sequence(s) given as inputs. See the [`AutomaticSpeechRecognitionPipeline`] documentation for more
information.
Args:
inputs (`np.ndarray` or `bytes` or `str`... |
Classify the sequence(s) given as inputs. See the [`AutomaticSpeechRecognitionPipeline`] documentation for more
information.
Args:
inputs (`np.ndarray` or `bytes` or `str` or `dict`):
The input is either:
- `str` that is the filename of the aud... | __call__ | python | huggingface/transformers | src/transformers/pipelines/audio_classification.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/audio_classification.py | Apache-2.0 |
def ffmpeg_read(bpayload: bytes, sampling_rate: int) -> np.ndarray:
"""
Helper function to read an audio file through ffmpeg.
"""
ar = f"{sampling_rate}"
ac = "1"
format_for_conversion = "f32le"
ffmpeg_command = [
"ffmpeg",
"-i",
"pipe:0",
"-ac",
ac,
... |
Helper function to read an audio file through ffmpeg.
| ffmpeg_read | python | huggingface/transformers | src/transformers/pipelines/audio_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/audio_utils.py | Apache-2.0 |
def ffmpeg_microphone(
sampling_rate: int,
chunk_length_s: float,
format_for_conversion: str = "f32le",
ffmpeg_input_device: Optional[str] = None,
ffmpeg_additional_args: Optional[list[str]] = None,
):
"""
Helper function to read audio from a microphone using ffmpeg. The default input device... |
Helper function to read audio from a microphone using ffmpeg. The default input device will be used unless another
input device is specified using the `ffmpeg_input_device` argument. Uses 'alsa' on Linux, 'avfoundation' on MacOS and
'dshow' on Windows.
Arguments:
sampling_rate (`int`):
... | ffmpeg_microphone | python | huggingface/transformers | src/transformers/pipelines/audio_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/audio_utils.py | Apache-2.0 |
def ffmpeg_microphone_live(
sampling_rate: int,
chunk_length_s: float,
stream_chunk_s: Optional[int] = None,
stride_length_s: Optional[Union[Tuple[float, float], float]] = None,
format_for_conversion: str = "f32le",
ffmpeg_input_device: Optional[str] = None,
ffmpeg_additional_args: Optional[... |
Helper function to read audio from a microphone using ffmpeg. This will output `partial` overlapping chunks starting
from `stream_chunk_s` (if it is defined) until `chunk_length_s` is reached. It will make use of striding to avoid
errors on the "sides" of the various chunks. The default input device will b... | ffmpeg_microphone_live | python | huggingface/transformers | src/transformers/pipelines/audio_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/audio_utils.py | Apache-2.0 |
def chunk_bytes_iter(iterator, chunk_len: int, stride: Tuple[int, int], stream: bool = False):
"""
Reads raw bytes from an iterator and yields chunks of length `chunk_len`. Optionally adds `stride` to each chunk to
get overlaps. `stream` is used to return partial results even if a full `chunk_len` is not yet... |
Reads raw bytes from an iterator and yields chunks of length `chunk_len`. Optionally adds `stride` to each chunk to
get overlaps. `stream` is used to return partial results even if a full `chunk_len` is not yet available.
| chunk_bytes_iter | python | huggingface/transformers | src/transformers/pipelines/audio_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/audio_utils.py | Apache-2.0 |
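A simplified re-implementation of the striding logic described above (the `stream` partial-result mode is omitted): each yielded chunk carries its (left, right) stride in bytes, and consecutive chunks overlap by `left + right` bytes.

```python
def chunk_bytes_iter(iterator, chunk_len: int, stride: tuple):
    acc = b""
    stride_left, stride_right = stride
    if stride_left + stride_right >= chunk_len:
        raise ValueError("Stride needs to be strictly smaller than chunk_len")
    _stride_left = 0  # the very first chunk has no left context
    for raw in iterator:
        acc += raw
        while len(acc) >= chunk_len:
            yield {"raw": acc[:chunk_len], "stride": (_stride_left, stride_right)}
            _stride_left = stride_left
            # keep the overlap so the next chunk re-covers the strided region
            acc = acc[chunk_len - stride_left - stride_right:]
    if len(acc) > stride_left:
        # final partial chunk, emitted without a right stride
        yield {"raw": acc, "stride": (_stride_left, 0)}
```

For example, chunking `b"abcdefgh"` with `chunk_len=4` and `stride=(1, 1)` yields overlapping chunks `abcd`, `cdef`, `efgh` plus a final `gh` remainder.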
def _ffmpeg_stream(ffmpeg_command, buflen: int):
"""
Internal function to create the generator of data through ffmpeg
"""
bufsize = 2**24  # 16 MB
try:
with subprocess.Popen(ffmpeg_command, stdout=subprocess.PIPE, bufsize=bufsize) as ffmpeg_process:
while True:
raw... |
Internal function to create the generator of data through ffmpeg
| _ffmpeg_stream | python | huggingface/transformers | src/transformers/pipelines/audio_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/audio_utils.py | Apache-2.0 |
def _get_microphone_name():
"""
Retrieve the microphone name on Windows.
"""
command = ["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", ""]
try:
ffmpeg_devices = subprocess.run(command, text=True, stderr=subprocess.PIPE, encoding="utf-8")
microphone_lines = [line for line i... |
Retrieve the microphone name on Windows.
| _get_microphone_name | python | huggingface/transformers | src/transformers/pipelines/audio_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/audio_utils.py | Apache-2.0 |
def rescale_stride(stride, ratio):
"""
Rescales the stride values from audio space to tokens/logits space.
(160_000, 16_000, 16_000) -> (2000, 200, 200) for instance.
"""
# Shape is [B, SEQ] for tokens
# [B, SEQ, V] for logits
new_strides = []
for input_n, left, right in stride:
... |
Rescales the stride values from audio space to tokens/logits space.
(160_000, 16_000, 16_000) -> (2000, 200, 200) for instance.
| rescale_stride | python | huggingface/transformers | src/transformers/pipelines/automatic_speech_recognition.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/automatic_speech_recognition.py | Apache-2.0 |
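The docstring's example, `(160_000, 16_000, 16_000) -> (2000, 200, 200)`, follows from multiplying each element by the ratio of output length to input length. A list-of-tuples sketch of that rescaling (the real function operates per batch element on tensor shapes):

```python
def rescale_stride(stride, ratio):
    # Map (chunk_len, left, right) tuples from audio-sample space into
    # token/logit space; ratio = model_seq_len / audio_input_len.
    new_strides = []
    for input_n, left, right in stride:
        new_strides.append(
            (int(round(input_n * ratio)), int(round(left * ratio)), int(round(right * ratio)))
        )
    return new_strides
```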
def __call__(
self,
inputs: Union[np.ndarray, bytes, str],
**kwargs,
):
"""
Transcribe the audio sequence(s) given as inputs to text. See the [`AutomaticSpeechRecognitionPipeline`]
documentation for more information.
Args:
inputs (`np.ndarray` or ... |
Transcribe the audio sequence(s) given as inputs to text. See the [`AutomaticSpeechRecognitionPipeline`]
documentation for more information.
Args:
inputs (`np.ndarray` or `bytes` or `str` or `dict`):
The input is either:
- `str` that is either ... | __call__ | python | huggingface/transformers | src/transformers/pipelines/automatic_speech_recognition.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/automatic_speech_recognition.py | Apache-2.0 |
def infer_framework_load_model(
model,
config: AutoConfig,
model_classes: Optional[Dict[str, Tuple[type]]] = None,
task: Optional[str] = None,
framework: Optional[str] = None,
**model_kwargs,
):
"""
Select framework (TensorFlow or PyTorch) to use from the `model` passed. Returns a tuple ... |
Select framework (TensorFlow or PyTorch) to use from the `model` passed. Returns a tuple (framework, model).
If `model` is instantiated, this function will just infer the framework from the model class. Otherwise `model` is
actually a checkpoint name and this method will try to instantiate it using `model... | infer_framework_load_model | python | huggingface/transformers | src/transformers/pipelines/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/base.py | Apache-2.0 |
def infer_framework_from_model(
model,
model_classes: Optional[Dict[str, Tuple[type]]] = None,
task: Optional[str] = None,
framework: Optional[str] = None,
**model_kwargs,
):
"""
Select framework (TensorFlow or PyTorch) to use from the `model` passed. Returns a tuple (framework, model).
... |
Select framework (TensorFlow or PyTorch) to use from the `model` passed. Returns a tuple (framework, model).
If `model` is instantiated, this function will just infer the framework from the model class. Otherwise `model` is
actually a checkpoint name and this method will try to instantiate it using `model... | infer_framework_from_model | python | huggingface/transformers | src/transformers/pipelines/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/base.py | Apache-2.0 |
def get_framework(model, revision: Optional[str] = None):
"""
Select framework (TensorFlow or PyTorch) to use.
Args:
model (`str`, [`PreTrainedModel`] or [`TFPreTrainedModel`]):
If both frameworks are installed, picks the one corresponding to the model passed (either a model class or
... |
Select framework (TensorFlow or PyTorch) to use.
Args:
model (`str`, [`PreTrainedModel`] or [`TFPreTrainedModel`]):
If both frameworks are installed, picks the one corresponding to the model passed (either a model class or
the model name). If no specific model is provided, defa... | get_framework | python | huggingface/transformers | src/transformers/pipelines/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/base.py | Apache-2.0 |
def get_default_model_and_revision(
targeted_task: Dict, framework: Optional[str], task_options: Optional[Any]
) -> Tuple[str, str]:
"""
Select a default model to use for a given task. Defaults to pytorch if ambiguous.
Args:
targeted_task (`Dict`):
Dictionary representing the given t... |
Select a default model to use for a given task. Defaults to pytorch if ambiguous.
Args:
targeted_task (`Dict`):
Dictionary representing the given task; it should contain the default models
framework (`str`, *optional*):
"pt", "tf" or None, representing a specific framework if it ... | get_default_model_and_revision | python | huggingface/transformers | src/transformers/pipelines/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/base.py | Apache-2.0 |
def load_assistant_model(
model: "PreTrainedModel",
assistant_model: Optional[Union[str, "PreTrainedModel"]],
assistant_tokenizer: Optional[PreTrainedTokenizer],
) -> Tuple[Optional["PreTrainedModel"], Optional[PreTrainedTokenizer]]:
"""
Prepares the assistant model and the assistant tokenizer for a... |
Prepares the assistant model and the assistant tokenizer for a pipeline whose model can call `generate`.
Args:
model ([`PreTrainedModel`]):
The main model that will be used by the pipeline to make predictions.
assistant_model (`str` or [`PreTrainedModel`], *optional*):
... | load_assistant_model | python | huggingface/transformers | src/transformers/pipelines/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/base.py | Apache-2.0 |
def save_binary(self, data: Union[dict, List[dict]]) -> str:
"""
Save the provided data object as pickle-formatted binary data on disk.
Args:
data (`dict` or list of `dict`): The data to store.
Returns:
`str`: Path where the data has been saved.
""... |
Save the provided data object as pickle-formatted binary data on disk.
Args:
data (`dict` or list of `dict`): The data to store.
Returns:
`str`: Path where the data has been saved.
| save_binary | python | huggingface/transformers | src/transformers/pipelines/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/base.py | Apache-2.0 |
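A minimal sketch of pickling pipeline output to disk and returning the path. The uuid-based filename under the temp directory is an assumption; the real method derives the path from the data format's configured output path:

```python
import os
import pickle
import tempfile
import uuid

def save_binary(data) -> str:
    # Pickle the predictions to a uniquely named file and hand back its path.
    path = os.path.join(tempfile.gettempdir(), f"{uuid.uuid4().hex}.pickle")
    with open(path, "wb") as f:
        pickle.dump(data, f)
    return path
```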
def from_str(
format: str,
output_path: Optional[str],
input_path: Optional[str],
column: Optional[str],
overwrite=False,
) -> "PipelineDataFormat":
"""
Creates an instance of the right subclass of [`~pipelines.PipelineDataFormat`] depending on `format`.
... |
Creates an instance of the right subclass of [`~pipelines.PipelineDataFormat`] depending on `format`.
Args:
format (`str`):
The format of the desired pipeline. Acceptable values are `"json"`, `"csv"` or `"pipe"`.
output_path (`str`, *optional*):
... | from_str | python | huggingface/transformers | src/transformers/pipelines/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/base.py | Apache-2.0 |
def save(self, data: List[dict]):
"""
Save the provided data object with the representation for the current [`~pipelines.PipelineDataFormat`].
Args:
data (`List[dict]`): The data to store.
"""
with open(self.output_path, "w") as f:
if len(data) > 0:
... |
Save the provided data object with the representation for the current [`~pipelines.PipelineDataFormat`].
Args:
data (`List[dict]`): The data to store.
| save | python | huggingface/transformers | src/transformers/pipelines/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/base.py | Apache-2.0 |
def save_pretrained(
self,
save_directory: Union[str, os.PathLike],
safe_serialization: bool = True,
**kwargs,
):
"""
Save the pipeline's model and tokenizer.
Args:
save_directory (`str` or `os.PathLike`):
A path to the directory w... |
Save the pipeline's model and tokenizer.
Args:
save_directory (`str` or `os.PathLike`):
A path to the directory to save to. It will be created if it doesn't exist.
safe_serialization (`bool`):
Whether to save the model using `safetensors` or t... | save_pretrained | python | huggingface/transformers | src/transformers/pipelines/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/base.py | Apache-2.0 |
def device_placement(self):
"""
Context manager allowing tensor allocation on the user-specified device in a framework-agnostic way.
Returns:
Context manager
Examples:
```python
# Explicitly ask for tensor allocation on CUDA device :0
pipe = pipeline(... |
Context manager allowing tensor allocation on the user-specified device in a framework-agnostic way.
Returns:
Context manager
Examples:
```python
# Explicitly ask for tensor allocation on CUDA device :0
pipe = pipeline(..., device=0)
with pipe.device... | device_placement | python | huggingface/transformers | src/transformers/pipelines/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/base.py | Apache-2.0 |
def check_model_type(self, supported_models: Union[List[str], dict]):
"""
Check if the model class is supported by the pipeline.
Args:
supported_models (`List[str]` or `dict`):
The list of models supported by the pipeline, or a dictionary with model class values.
... |
Check if the model class is supported by the pipeline.
Args:
supported_models (`List[str]` or `dict`):
The list of models supported by the pipeline, or a dictionary with model class values.
| check_model_type | python | huggingface/transformers | src/transformers/pipelines/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/base.py | Apache-2.0 |
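A sketch of that support check, comparing the model's class name against the supported set and warning (rather than raising) on a mismatch; the stand-in class and the exact warning text are assumptions:

```python
import warnings

class StubModel:
    """Stand-in for a loaded PreTrainedModel instance."""

def check_model_type(model, supported_models) -> None:
    # Accept a list of class names or a dict whose values are the
    # supported class names, mirroring the docstring above.
    if isinstance(supported_models, dict):
        supported_models = list(supported_models.values())
    model_name = type(model).__name__
    if model_name not in supported_models:
        warnings.warn(
            f"The model '{model_name}' is not supported for this pipeline. "
            f"Supported models are {supported_models}."
        )
```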
def __call__(self, inputs: Union[str, List[str], "Image.Image", List["Image.Image"]] = None, **kwargs):
"""
Predict the depth(s) of the image(s) passed as inputs.
Args:
inputs (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
The pipeline handles three types of... |
Predict the depth(s) of the image(s) passed as inputs.
Args:
inputs (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
The pipeline handles three types of images:
- A string containing an HTTP link pointing to an image
- A string contain... | __call__ | python | huggingface/transformers | src/transformers/pipelines/depth_estimation.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/depth_estimation.py | Apache-2.0 |
def apply_tesseract(image: "Image.Image", lang: Optional[str], tesseract_config: Optional[str]):
"""Applies Tesseract OCR on a document image, and returns recognized words + normalized bounding boxes."""
# apply OCR
data = pytesseract.image_to_data(image, lang=lang, output_type="dict", config=tesseract_conf... | Applies Tesseract OCR on a document image, and returns recognized words + normalized bounding boxes. | apply_tesseract | python | huggingface/transformers | src/transformers/pipelines/document_question_answering.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/document_question_answering.py | Apache-2.0 |
def __call__(
self,
image: Union["Image.Image", str],
question: Optional[str] = None,
word_boxes: Optional[Tuple[str, List[float]]] = None,
**kwargs,
):
"""
Answer the question(s) given as inputs by using the document(s). A document is defined as an image and ... |
Answer the question(s) given as inputs by using the document(s). A document is defined as an image and an
optional list of (word, box) tuples which represent the text in the document. If the `word_boxes` are not
provided, it will use the Tesseract OCR engine (if available) to extract the words ... | __call__ | python | huggingface/transformers | src/transformers/pipelines/document_question_answering.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/document_question_answering.py | Apache-2.0 |
def __call__(self, inputs: Union[str, List[str], "Image.Image", List["Image.Image"]] = None, **kwargs):
"""
Assign labels to the image(s) passed as inputs.
Args:
inputs (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
The pipeline handles three types of images... |
Assign labels to the image(s) passed as inputs.
Args:
inputs (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
The pipeline handles three types of images:
            - A string containing an HTTP link pointing to an image
- A string containing a l... | __call__ | python | huggingface/transformers | src/transformers/pipelines/image_classification.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/image_classification.py | Apache-2.0 |
def __call__(self, inputs=None, **kwargs) -> Union[Predictions, List[Prediction]]:
"""
Perform segmentation (detect masks & classes) in the image(s) passed as inputs.
Args:
inputs (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
The pipeline handles three type... |
Perform segmentation (detect masks & classes) in the image(s) passed as inputs.
Args:
inputs (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
The pipeline handles three types of images:
- A string containing an HTTP(S) link pointing to an image
... | __call__ | python | huggingface/transformers | src/transformers/pipelines/image_segmentation.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/image_segmentation.py | Apache-2.0 |
def add_images_to_messages(
messages: dict, images: Optional[Union[str, List[str], "Image.Image", List["Image.Image"]]]
):
"""
Retrieve and combine images from the chat and the images passed as input.
"""
if images is None:
images = []
elif not isinstance(images, Iterable) or isinstance(... |
Retrieve and combine images from the chat and the images passed as input.
| add_images_to_messages | python | huggingface/transformers | src/transformers/pipelines/image_text_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/image_text_to_text.py | Apache-2.0 |
def __call__(
self, images: Union[str, List[str], "Image.Image", List["Image.Image"]], **kwargs
) -> Union["Image.Image", List["Image.Image"]]:
"""
Transform the image(s) passed as inputs.
Args:
images (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
... |
Transform the image(s) passed as inputs.
Args:
images (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
The pipeline handles three types of images:
            - A string containing an HTTP link pointing to an image
- A string containing a local pa... | __call__ | python | huggingface/transformers | src/transformers/pipelines/image_to_image.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/image_to_image.py | Apache-2.0 |
def __call__(self, inputs: Union[str, List[str], "Image.Image", List["Image.Image"]] = None, **kwargs):
"""
Assign labels to the image(s) passed as inputs.
Args:
inputs (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
The pipeline handles three types of images... |
Assign labels to the image(s) passed as inputs.
Args:
inputs (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
The pipeline handles three types of images:
            - A string containing an HTTP(S) link pointing to an image
- A string containing ... | __call__ | python | huggingface/transformers | src/transformers/pipelines/image_to_text.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/image_to_text.py | Apache-2.0 |
def __call__(self, *args, **kwargs) -> Union[Predictions, List[Prediction]]:
"""
Detect objects (bounding boxes & classes) in the image(s) passed as inputs.
Args:
inputs (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
The pipeline handles three types of image... |
Detect objects (bounding boxes & classes) in the image(s) passed as inputs.
Args:
inputs (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
The pipeline handles three types of images:
- A string containing an HTTP(S) link pointing to an image
... | __call__ | python | huggingface/transformers | src/transformers/pipelines/object_detection.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/object_detection.py | Apache-2.0 |
def _get_bounding_box(self, box: "torch.Tensor") -> Dict[str, int]:
"""
Turns list [xmin, xmax, ymin, ymax] into dict { "xmin": xmin, ... }
Args:
box (`torch.Tensor`): Tensor containing the coordinates in corners format.
Returns:
bbox (`Dict[str, int]`): Dict co... |
Turns list [xmin, xmax, ymin, ymax] into dict { "xmin": xmin, ... }
Args:
box (`torch.Tensor`): Tensor containing the coordinates in corners format.
Returns:
bbox (`Dict[str, int]`): Dict containing the coordinates in corners format.
| _get_bounding_box | python | huggingface/transformers | src/transformers/pipelines/object_detection.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/object_detection.py | Apache-2.0 |
def __init__(self, loader, infer, params, loader_batch_size=None):
"""
Roughly equivalent to
```
for item in loader:
yield infer(item, **params)
```
Arguments:
loader (`torch.utils.data.DataLoader` or `Iterable`):
... |
Roughly equivalent to
```
for item in loader:
yield infer(item, **params)
```
Arguments:
loader (`torch.utils.data.DataLoader` or `Iterable`):
The iterator that will be used to apply `infer` on.
... | __init__ | python | huggingface/transformers | src/transformers/pipelines/pt_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/pt_utils.py | Apache-2.0 |
def loader_batch_item(self):
"""
Return item located at `loader_batch_index` within the current `loader_batch_data`.
"""
if isinstance(self._loader_batch_data, torch.Tensor):
# Batch data is simple tensor, just fetch the slice
result = self._loader_batch_data[self... |
Return item located at `loader_batch_index` within the current `loader_batch_data`.
| loader_batch_item | python | huggingface/transformers | src/transformers/pipelines/pt_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/pt_utils.py | Apache-2.0 |