| code (string, 66–870k chars) | docstring (string, 19–26.7k chars) | func_name (string, 1–138 chars) | language (1 class) | repo (string, 7–68 chars) | path (string, 5–324 chars) | url (string, 46–389 chars) | license (7 classes) |
|---|---|---|---|---|---|---|---|
def forward(
self,
input_features: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
encoder_outputs: Optional[Tuple[Tuple[torc... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def generate(
self,
input_features: Optional[torch.Tensor] = None,
return_intermediate_token_ids: Optional[bool] = None,
tgt_lang: Optional[str] = None,
speaker_id: Optional[int] = 0,
**kwargs,
) -> Union[torch.Tensor, SeamlessM4Tv2GenerationOutput]:
"""
... |
Generates translated audio waveforms.
<Tip>
This method successively calls the `.generate` function of two different sub-models. You can specify keyword
arguments at two different levels: general arguments that will be passed to both models, or prefixed arguments
that will be ... | generate | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def __init__(self, config, current_modality="text"):
r"""
current_modality (`str`, *optional*, defaults to `"text"`):
Default modality. Used to initialize the model.
"""
super().__init__(config)
self.shared = nn.Embedding(config.vocab_size, config.hidden_size, config... |
current_modality (`str`, *optional*, defaults to `"text"`):
Default modality. Used to initialize the model.
| __init__ | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
input_features: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = N... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the
... | forward | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def generate(
self,
input_ids: Optional[torch.Tensor] = None,
input_features: Optional[torch.Tensor] = None,
return_intermediate_token_ids: Optional[bool] = None,
tgt_lang: Optional[str] = None,
speaker_id: Optional[int] = 0,
generate_speech: Optional[bool] = True... |
Generates translated token ids and/or translated audio waveforms.
<Tip>
This method successively calls the `.generate` function of two different sub-models. You can specify keyword
arguments at two different levels: general arguments that will be passed to both models, or prefixed arg... | generate | python | huggingface/transformers | src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py | Apache-2.0 |
def convert_segformer_checkpoint(model_name, checkpoint_path, pytorch_dump_folder_path):
"""
Copy/paste/tweak model's weights to our SegFormer structure.
"""
# load default SegFormer configuration
config = SegformerConfig()
encoder_only = False
# set attributes based on model_name
repo... |
Copy/paste/tweak model's weights to our SegFormer structure.
| convert_segformer_checkpoint | python | huggingface/transformers | src/transformers/models/segformer/convert_segformer_original_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/segformer/convert_segformer_original_to_pytorch.py | Apache-2.0 |
def from_dict(cls, image_processor_dict: Dict[str, Any], **kwargs):
"""
        Overrides the `from_dict` method from the base class to preserve support for the deprecated `reduce_labels` key in old configs
"""
image_processor_dict = image_processor_dict.copy()
if "reduce_labels" in image_processor_d... |
Overrides the `from_dict` method from the base class to preserve support for the deprecated `reduce_labels` key in old configs
| from_dict | python | huggingface/transformers | src/transformers/models/segformer/image_processing_segformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/segformer/image_processing_segformer.py | Apache-2.0 |
def resize(
self,
image: np.ndarray,
size: Dict[str, int],
resample: PILImageResampling = PILImageResampling.BILINEAR,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
**kwargs,
) -> ... |
Resize an image to `(size["height"], size["width"])`.
Args:
image (`np.ndarray`):
Image to resize.
size (`Dict[str, int]`):
Dictionary in the format `{"height": int, "width": int}` specifying the size of the output image.
resample (`P... | resize | python | huggingface/transformers | src/transformers/models/segformer/image_processing_segformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/segformer/image_processing_segformer.py | Apache-2.0 |
def preprocess(
self,
images: ImageInput,
segmentation_maps: Optional[ImageInput] = None,
do_resize: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
resample: PILImageResampling = None,
do_rescale: Optional[bool] = None,
rescale_factor: Optio... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
passing in images with pixel values between 0 and 1, set `do_rescale=False`.
... | preprocess | python | huggingface/transformers | src/transformers/models/segformer/image_processing_segformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/segformer/image_processing_segformer.py | Apache-2.0 |
def post_process_semantic_segmentation(self, outputs, target_sizes: Optional[List[Tuple]] = None):
"""
Converts the output of [`SegformerForSemanticSegmentation`] into semantic segmentation maps. Only supports PyTorch.
Args:
outputs ([`SegformerForSemanticSegmentation`]):
... |
Converts the output of [`SegformerForSemanticSegmentation`] into semantic segmentation maps. Only supports PyTorch.
Args:
outputs ([`SegformerForSemanticSegmentation`]):
Raw outputs of the model.
target_sizes (`List[Tuple]` of length `batch_size`, *optional*):
... | post_process_semantic_segmentation | python | huggingface/transformers | src/transformers/models/segformer/image_processing_segformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/segformer/image_processing_segformer.py | Apache-2.0 |
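The post-processing described in this row reduces, at its core, to an argmax over the class dimension of the logits. A minimal NumPy sketch of that step (the library additionally resizes each map to its per-image `target_sizes` entry by interpolating the logits, which is omitted here):

```python
import numpy as np

def post_process_semantic_segmentation(logits: np.ndarray) -> list:
    # logits: (batch, num_labels, height, width).
    # Returns one (height, width) class-index map per image.
    return [np.argmax(image_logits, axis=0) for image_logits in logits]
```

This is a sketch of the idea, not the library's exact implementation, which works on torch tensors and supports resizing to arbitrary target sizes.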
def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
"""
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
h... |
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate pape... | drop_path | python | huggingface/transformers | src/transformers/models/segformer/modeling_segformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/segformer/modeling_segformer.py | Apache-2.0 |
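The per-sample stochastic-depth operation described above can be sketched in NumPy (a minimal illustration of the same logic, not the library's exact torch implementation): draw one Bernoulli mask per sample, zero out dropped samples wholesale, and rescale survivors by `1 / keep_prob` so the expected output matches the input.

```python
import numpy as np

def drop_path(x: np.ndarray, drop_prob: float = 0.0, training: bool = False) -> np.ndarray:
    """Per-sample stochastic depth: zero whole samples, rescale survivors."""
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    # One Bernoulli draw per sample (first dim), broadcast over remaining dims
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    mask = np.floor(keep_prob + np.random.rand(*shape))  # 0 or 1 per sample
    # Divide by keep_prob so E[output] == input
    return x / keep_prob * mask
```

At inference (`training=False`) the function is the identity, which is why stochastic depth needs no separate rescaling step at test time.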
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
labels: Optional[torch.LongTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, SegFormerImageC... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1`, a regression loss is computed (Mean-Square loss); if
`config.n... | forward | python | huggingface/transformers | src/transformers/models/segformer/modeling_segformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/segformer/modeling_segformer.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.FloatTensor,
labels: Optional[torch.LongTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, SemanticSegmenterOutput]:
... |
labels (`torch.LongTensor` of shape `(batch_size, height, width)`, *optional*):
Ground truth semantic segmentation maps for computing the loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels > 1`, a classification loss is computed (Cross-Entropy).
... | forward | python | huggingface/transformers | src/transformers/models/segformer/modeling_segformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/segformer/modeling_segformer.py | Apache-2.0 |
def call(
self,
pixel_values: tf.Tensor,
labels: tf.Tensor | None = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, TFSemanticSegmenterOutput]:
r"""
labe... |
labels (`tf.Tensor` of shape `(batch_size, height, width)`, *optional*):
Ground truth semantic segmentation maps for computing the loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels > 1`, a (per-pixel) classification loss is computed
(Cross-E... | call | python | huggingface/transformers | src/transformers/models/segformer/modeling_tf_segformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/segformer/modeling_tf_segformer.py | Apache-2.0 |
def mask_to_rgb(
self,
image: np.ndarray,
palette: Optional[List[Tuple[int, int]]] = None,
data_format: Optional[Union[str, ChannelDimension]] = None,
) -> np.ndarray:
"""Converts a segmentation map to RGB format.
Args:
image (`np.ndarray`):
... | Converts a segmentation map to RGB format.
Args:
image (`np.ndarray`):
Segmentation map with dimensions (height, width) where pixel values represent the class index.
palette (`List[Tuple[int, int]]`, *optional*, defaults to `None`):
Palette to use to conv... | mask_to_rgb | python | huggingface/transformers | src/transformers/models/seggpt/image_processing_seggpt.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seggpt/image_processing_seggpt.py | Apache-2.0 |
def resize(
self,
image: np.ndarray,
size: Dict[str, int],
resample: PILImageResampling = PILImageResampling.BICUBIC,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
**kwargs,
) -> n... |
Resize an image to `(size["height"], size["width"])`.
Args:
image (`np.ndarray`):
Image to resize.
size (`Dict[str, int]`):
Dictionary in the format `{"height": int, "width": int}` specifying the size of the output image.
resample (`P... | resize | python | huggingface/transformers | src/transformers/models/seggpt/image_processing_seggpt.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seggpt/image_processing_seggpt.py | Apache-2.0 |
def _preprocess_step(
self,
images: ImageInput,
do_resize: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
resample: PILImageResampling = None,
do_rescale: Optional[bool] = None,
rescale_factor: Optional[float] = None,
do_normalize: Optional[... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
passing in images with pixel values between 0 and 1, set `do_rescale=False`.
... | _preprocess_step | python | huggingface/transformers | src/transformers/models/seggpt/image_processing_seggpt.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seggpt/image_processing_seggpt.py | Apache-2.0 |
def preprocess(
self,
images: Optional[ImageInput] = None,
prompt_images: Optional[ImageInput] = None,
prompt_masks: Optional[ImageInput] = None,
do_resize: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
resample: PILImageResampling = None,
... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
passing in images with pixel values between 0 and 1, set `do_rescale=False`.
... | preprocess | python | huggingface/transformers | src/transformers/models/seggpt/image_processing_seggpt.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seggpt/image_processing_seggpt.py | Apache-2.0 |
def post_process_semantic_segmentation(
self, outputs, target_sizes: Optional[List[Tuple[int, int]]] = None, num_labels: Optional[int] = None
):
"""
Converts the output of [`SegGptImageSegmentationOutput`] into segmentation maps. Only supports
PyTorch.
Args:
outp... |
Converts the output of [`SegGptImageSegmentationOutput`] into segmentation maps. Only supports
PyTorch.
Args:
outputs ([`SegGptImageSegmentationOutput`]):
Raw outputs of the model.
target_sizes (`List[Tuple[int, int]]`, *optional*):
List ... | post_process_semantic_segmentation | python | huggingface/transformers | src/transformers/models/seggpt/image_processing_seggpt.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seggpt/image_processing_seggpt.py | Apache-2.0 |
def get_rel_pos(self, q_size: int, k_size: int, rel_pos: torch.Tensor) -> torch.Tensor:
"""
Get relative positional embeddings according to the relative positions of
query and key sizes.
Args:
q_size (int):
size of the query.
k_size (int):
... |
Get relative positional embeddings according to the relative positions of
query and key sizes.
Args:
q_size (int):
size of the query.
k_size (int):
size of key k.
rel_pos (`torch.Tensor`):
relative position... | get_rel_pos | python | huggingface/transformers | src/transformers/models/seggpt/modeling_seggpt.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seggpt/modeling_seggpt.py | Apache-2.0 |
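For the simple case where query and key have the same size, the lookup this row describes can be sketched as indexing a table of `2*size - 1` relative-position embeddings by the offset `q - k`, shifted to be non-negative. (The library additionally rescales the coordinates when the query and key resolutions differ, which this sketch omits.)

```python
import numpy as np

def get_rel_pos(q_size: int, k_size: int, rel_pos: np.ndarray) -> np.ndarray:
    # rel_pos: (q_size + k_size - 1, head_dim) table of embeddings.
    # Sketch assumes q_size == k_size; offsets land in [0, q_size + k_size - 2].
    q = np.arange(q_size)[:, None]
    k = np.arange(k_size)[None, :]
    rel = q - k + (k_size - 1)  # shift so the smallest offset maps to index 0
    return rel_pos[rel]         # (q_size, k_size, head_dim)
```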
def add_decomposed_rel_pos(
self,
attn: torch.Tensor,
query: torch.Tensor,
rel_pos_h: torch.Tensor,
rel_pos_w: torch.Tensor,
q_size: Tuple[int, int],
k_size: Tuple[int, int],
) -> torch.Tensor:
"""
Calculate decomposed Relative Positional Embed... |
Calculate decomposed Relative Positional Embeddings from :paper:`mvitv2`.
https://github.com/facebookresearch/mvit/blob/19786631e330df9f3622e5402b4a419a263a2c80/mvit/models/attention.py
Args:
attn (`torch.Tensor`):
attention map.
query (`torch.Tensor`):
... | add_decomposed_rel_pos | python | huggingface/transformers | src/transformers/models/seggpt/modeling_seggpt.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seggpt/modeling_seggpt.py | Apache-2.0 |
def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
"""
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
h... |
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate pape... | drop_path | python | huggingface/transformers | src/transformers/models/seggpt/modeling_seggpt.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seggpt/modeling_seggpt.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.Tensor,
prompt_pixel_values: torch.Tensor,
prompt_masks: torch.Tensor,
bool_masked_pos: Optional[torch.BoolTensor] = None,
feature_ensemble: Optional[bool] = None,
embedding_type: Optional[str] = None,
labels: Optiona... |
prompt_pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
Prompt pixel values. Prompt pixel values can be obtained using [`AutoImageProcessor`]. See
[`SegGptImageProcessor.__call__`] for details.
prompt_masks (`torch.FloatTensor` of shape `(... | forward | python | huggingface/transformers | src/transformers/models/seggpt/modeling_seggpt.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seggpt/modeling_seggpt.py | Apache-2.0 |
def forward(
self,
prompt_masks: torch.FloatTensor,
pred_masks: torch.FloatTensor,
labels: torch.FloatTensor,
bool_masked_pos: torch.BoolTensor,
):
"""Computes the L1 loss between the predicted masks and the ground truth masks.
Args:
prompt_masks ... | Computes the L1 loss between the predicted masks and the ground truth masks.
Args:
prompt_masks (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
Pixel values from mask prompt.
pred_masks (`torch.FloatTensor` of shape `(batch_size, num_channels... | forward | python | huggingface/transformers | src/transformers/models/seggpt/modeling_seggpt.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seggpt/modeling_seggpt.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.Tensor,
prompt_pixel_values: torch.Tensor,
prompt_masks: torch.Tensor,
bool_masked_pos: Optional[torch.BoolTensor] = None,
feature_ensemble: Optional[bool] = None,
embedding_type: Optional[str] = None,
labels: Optiona... |
prompt_pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
Prompt pixel values. Prompt pixel values can be obtained using [`AutoImageProcessor`]. See
[`SegGptImageProcessor.__call__`] for details.
prompt_masks (`torch.FloatTensor` of shape `(... | forward | python | huggingface/transformers | src/transformers/models/seggpt/modeling_seggpt.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/seggpt/modeling_seggpt.py | Apache-2.0 |
def convert_sew_checkpoint(
checkpoint_path, pytorch_dump_folder_path, config_path=None, dict_path=None, is_finetuned=True
):
"""
Copy/paste/tweak model's weights to transformers design.
"""
if is_finetuned:
model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task(
[c... |
Copy/paste/tweak model's weights to transformers design.
| convert_sew_checkpoint | python | huggingface/transformers | src/transformers/models/sew/convert_sew_original_pytorch_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/convert_sew_original_pytorch_checkpoint_to_pytorch.py | Apache-2.0 |
def _get_feat_extract_output_lengths(self, input_lengths: Union[torch.LongTensor, int]):
"""
Computes the output length of the convolutional layers
"""
def _conv_out_length(input_length, kernel_size, stride):
# 1D convolutional layer output length formula taken
#... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/sew/modeling_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modeling_sew.py | Apache-2.0 |
def _compute_mask_indices(
shape: Tuple[int, int],
mask_prob: float,
mask_length: int,
attention_mask: Optional[torch.LongTensor] = None,
min_masks: int = 0,
) -> np.ndarray:
"""
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method f... |
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for
ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on
CPU as part of the preprocessing during training.
Args:
shape: T... | _compute_mask_indices | python | huggingface/transformers | src/transformers/models/sew/modeling_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modeling_sew.py | Apache-2.0 |
def compute_num_masked_span(input_length):
"""Given input length, compute how many spans should be masked"""
num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
num_masked_span = max(num_masked_span, min_masks)
# make sure num masked span <= sequence_length
i... | Given input length, compute how many spans should be masked | compute_num_masked_span | python | huggingface/transformers | src/transformers/models/sew/modeling_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modeling_sew.py | Apache-2.0 |
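The span-count computation above can be sketched as follows. Note one assumption: in the library, `epsilon` is drawn at random so that the truncation rounds probabilistically; this sketch uses a fixed small value for determinism.

```python
def compute_num_masked_span(input_length: int, mask_prob: float, mask_length: int,
                            min_masks: int = 0, epsilon: float = 1e-5) -> int:
    """Given input length, compute how many spans should be masked."""
    # Expected number of spans; epsilon guards against float truncation
    num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
    num_masked_span = max(num_masked_span, min_masks)
    # Each span occupies mask_length positions, so cap at this bound
    return min(num_masked_span, input_length // mask_length)
```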
def _mask_hidden_states(
self,
hidden_states: torch.FloatTensor,
mask_time_indices: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
):
"""
Masks extracted features along time axis and/or along feature axis according to
[S... |
Masks extracted features along time axis and/or along feature axis according to
[SpecAugment](https://arxiv.org/abs/1904.08779).
| _mask_hidden_states | python | huggingface/transformers | src/transformers/models/sew/modeling_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modeling_sew.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
mask_time_indices: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optio... |
mask_time_indices (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
masked extracted features in *config.proj_codevector_dim* space.
| forward | python | huggingface/transformers | src/transformers/models/sew/modeling_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modeling_sew.py | Apache-2.0 |
def __init__(self, config, target_lang: Optional[str] = None):
r"""
target_lang (`str`, *optional*):
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
adapter.<lang>.bin. Only relevant when using an instance of [`SEWForCTC`] wi... |
target_lang (`str`, *optional*):
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
adapter.<lang>.bin. Only relevant when using an instance of [`SEWForCTC`] with adapters. Uses 'eng' by
default.
| __init__ | python | huggingface/transformers | src/transformers/models/sew/modeling_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modeling_sew.py | Apache-2.0 |
def tie_weights(self):
"""
This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when
passing `target_lang=...` to `from_pretrained(...)`.
This method is **not** supposed to be called by the user and is prone to be changed in the future.... |
This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when
passing `target_lang=...` to `from_pretrained(...)`.
This method is **not** supposed to be called by the user and is prone to be changed in the future.
| tie_weights | python | huggingface/transformers | src/transformers/models/sew/modeling_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modeling_sew.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameter will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be r... |
Calling this function will disable the gradient computation for the feature encoder so that its parameter will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/sew/modeling_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modeling_sew.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.sew.parameters():
param.requ... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/sew/modeling_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modeling_sew.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = None... |
labels (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*):
Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to
the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size -... | forward | python | huggingface/transformers | src/transformers/models/sew/modeling_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modeling_sew.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be ... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/sew/modeling_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modeling_sew.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.sew.parameters():
param.requ... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/sew/modeling_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modeling_sew.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = None... |
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta... | forward | python | huggingface/transformers | src/transformers/models/sew/modeling_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modeling_sew.py | Apache-2.0 |
def _get_feat_extract_output_lengths(self, input_lengths: Union[torch.LongTensor, int]):
"""
Computes the output length of the convolutional layers
"""
def _conv_out_length(input_length, kernel_size, stride):
# 1D convolutional layer output length formula taken
#... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/sew/modular_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modular_sew.py | Apache-2.0 |
def _mask_hidden_states(
self,
hidden_states: torch.FloatTensor,
mask_time_indices: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
):
"""
Masks extracted features along time axis and/or along feature axis according to
[S... |
Masks extracted features along time axis and/or along feature axis according to
[SpecAugment](https://arxiv.org/abs/1904.08779).
| _mask_hidden_states | python | huggingface/transformers | src/transformers/models/sew/modular_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modular_sew.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
mask_time_indices: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optio... |
mask_time_indices (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
masked extracted features in *config.proj_codevector_dim* space.
| forward | python | huggingface/transformers | src/transformers/models/sew/modular_sew.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew/modular_sew.py | Apache-2.0 |
def to_dict(self):
"""
Serializes this instance to a Python dictionary.
"""
output = super().to_dict()
output["hidden_dropout"] = output.pop("_hidden_dropout")
return output |
Serializes this instance to a Python dictionary.
| to_dict | python | huggingface/transformers | src/transformers/models/sew_d/configuration_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/configuration_sew_d.py | Apache-2.0 |
def convert_sew_checkpoint(
checkpoint_path, pytorch_dump_folder_path, config_path=None, dict_path=None, is_finetuned=True
):
"""
Copy/paste/tweak model's weights to transformers design.
"""
if is_finetuned:
model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task(
[c... |
Copy/paste/tweak model's weights to transformers design.
| convert_sew_checkpoint | python | huggingface/transformers | src/transformers/models/sew_d/convert_sew_d_original_pytorch_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/convert_sew_d_original_pytorch_checkpoint_to_pytorch.py | Apache-2.0 |
def _compute_mask_indices(
shape: Tuple[int, int],
mask_prob: float,
mask_length: int,
attention_mask: Optional[torch.LongTensor] = None,
min_masks: int = 0,
) -> np.ndarray:
"""
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method f... |
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for
ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on
CPU as part of the preprocessing during training.
Args:
shape: T... | _compute_mask_indices | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
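A toy pure-Python version of the span masking described above can make the idea concrete (the real `_compute_mask_indices` is vectorized with NumPy, supports an attention mask, and handles span-overflow edge cases; the function name and `seed` argument here are illustrative):

```python
import random

def compute_mask_indices_sketch(sequence_length, mask_prob, mask_length, min_masks=0, seed=0):
    """Toy SpecAugment-style span masking: pick span starts at random and
    mark `mask_length` consecutive positions from each start."""
    rng = random.Random(seed)
    epsilon = rng.random()  # random rounding, as in the original
    num_spans = max(int(mask_prob * sequence_length / mask_length + epsilon), min_masks)
    num_spans = min(num_spans, sequence_length // mask_length)
    mask = [False] * sequence_length
    # distinct start positions; spans may still overlap each other
    starts = rng.sample(range(sequence_length - mask_length + 1), num_spans)
    for start in starts:
        for i in range(start, start + mask_length):
            mask[i] = True
    return mask
```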
def compute_num_masked_span(input_length):
"""Given input length, compute how many spans should be masked"""
num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
num_masked_span = max(num_masked_span, min_masks)
# make sure num masked span <= sequence_length
i... | Given input length, compute how many spans should be masked | compute_num_masked_span | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
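The span-count rule quoted in the code column can be isolated as a standalone sketch (with `epsilon` as an explicit argument rather than a random draw, which is an assumption for determinism):

```python
def compute_num_masked_span(input_length, mask_prob, mask_length, min_masks=0, epsilon=0.0):
    """Span count proportional to input length, floored, raised to
    min_masks, then capped so all spans fit in the sequence."""
    num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
    num_masked_span = max(num_masked_span, min_masks)
    # make sure the masked spans cannot exceed the sequence length
    if num_masked_span * mask_length > input_length:
        num_masked_span = input_length // mask_length
    return num_masked_span
```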
def build_relative_position(query_size, key_size, bucket_size=-1, max_position=-1, device=None):
"""
Build relative position according to the query and key
We assume the absolute position of query \\(P_q\\) ranges from (0, query_size) and the absolute position of key
\\(P_k\\) ranges from (0, key_s... |
Build relative position according to the query and key
We assume the absolute position of query \(P_q\) ranges from (0, query_size) and the absolute position of key
\(P_k\) ranges from (0, key_size). The relative positions from query to key are \(R_{q \rightarrow k} = P_q -
P_k\)
Args:
... | build_relative_position | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
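A pure-Python sketch of the relative-position construction described above, without the bucketing (`bucket_size`) and clamping (`max_position`) handling of the real helper:

```python
def build_relative_position_sketch(query_size, key_size):
    """Relative-position matrix: entry [q][k] = P_q - P_k."""
    return [[q - k for k in range(key_size)] for q in range(query_size)]
```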
def forward(self, x):
"""
Call the module
Args:
x (`torch.tensor`): The input tensor to apply dropout
"""
if self.training and self.drop_prob > 0:
return XDropout.apply(x, self.get_context())
return x |
Call the module
Args:
x (`torch.tensor`): The input tensor to apply dropout
| forward | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
def forward(
self,
hidden_states,
attention_mask,
output_attentions=False,
query_states=None,
relative_pos=None,
rel_embeddings=None,
):
"""
Call the module
Args:
hidden_states (`torch.FloatTensor`):
Input s... |
Call the module
Args:
hidden_states (`torch.FloatTensor`):
Input states to the module usually the output from previous layer, it will be the Q,K and V in
*Attention(Q,K,V)*
attention_mask (`torch.BoolTensor`):
An attention mask m... | forward | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
def _get_feat_extract_output_lengths(self, input_lengths: Union[torch.LongTensor, int]):
"""
Computes the output length of the convolutional layers
"""
def _conv_out_length(input_length, kernel_size, stride):
# 1D convolutional layer output length formula taken
#... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
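The `_conv_out_length` helper referenced above applies the standard 1D convolution output-length formula per layer; a standalone sketch (the kernel/stride values in the test are wav2vec2-style illustrations, not necessarily SEW-D's actual config):

```python
def get_feat_extract_output_lengths(input_length, kernel_sizes, strides):
    """Iterate the 1D conv output-length formula over each layer:
    out = floor((in - kernel) / stride) + 1  (no padding, dilation 1)."""
    for kernel_size, stride in zip(kernel_sizes, strides):
        input_length = (input_length - kernel_size) // stride + 1
    return input_length
```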
def _mask_hidden_states(
self,
hidden_states: torch.FloatTensor,
mask_time_indices: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
):
"""
Masks extracted features along time axis and/or along feature axis according to
[S... |
Masks extracted features along time axis and/or along feature axis according to
[SpecAugment](https://arxiv.org/abs/1904.08779).
| _mask_hidden_states | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
mask_time_indices: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optio... |
mask_time_indices (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices to mask extracted features for contrastive loss. When in training mode, the model learns to predict
masked extracted features in *config.proj_codevector_dim* space.
| forward | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
def __init__(self, config, target_lang: Optional[str] = None):
r"""
target_lang (`str`, *optional*):
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
adapter.<lang>.bin. Only relevant when using an instance of [`SEWDForCTC`] w... |
target_lang (`str`, *optional*):
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
adapter.<lang>.bin. Only relevant when using an instance of [`SEWDForCTC`] with adapters. Uses 'eng' by
default.
| __init__ | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
def tie_weights(self):
"""
This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when
passing `target_lang=...` to `from_pretrained(...)`.
This method is **not** supposed to be called by the user and is prone to be changed in the future.... |
This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when
passing `target_lang=...` to `from_pretrained(...)`.
This method is **not** supposed to be called by the user and is prone to be changed in the future.
| tie_weights | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be r... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.sew_d.parameters():
param.re... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = None... |
labels (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*):
Labels for connectionist temporal classification. Note that `target_length` has to be smaller than or equal to
the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size -... | forward | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be ... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.sew_d.parameters():
param.re... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = None... |
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta... | forward | python | huggingface/transformers | src/transformers/models/sew_d/modeling_sew_d.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/sew_d/modeling_sew_d.py | Apache-2.0 |
def convert(
shieldgemma_checkpoint_path: str,
gemma_checkpoint_path: str,
config: ShieldGemma2Config,
target_dtype: torch.dtype,
) -> ConversionResult:
"""Loads Orbax checkpoint from `input_path` and converts it to HF tree."""
checkpointer = obc.PyTreeCheckpointer()
sg2_ckpt = checkpointer... | Loads Orbax checkpoint from `input_path` and converts it to HF tree. | convert | python | huggingface/transformers | src/transformers/models/shieldgemma2/convert_shieldgemma2_weights_orbax_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/shieldgemma2/convert_shieldgemma2_weights_orbax_to_hf.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
pixel_values: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[Union[List[torch.FloatTensor], Cach... |
Returns:
A `ShieldGemma2ImageClassifierOutputWithNoAttention` instance containing the logits and probabilities
associated with the model predicting the `Yes` or `No` token as the response to that prompt, captured in the
following properties.
* `logits` (`t... | forward | python | huggingface/transformers | src/transformers/models/shieldgemma2/modeling_shieldgemma2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/shieldgemma2/modeling_shieldgemma2.py | Apache-2.0 |
def __init__(
self, image_processor, tokenizer, chat_template=None, image_seq_length=256, policy_definitions=None, **kwargs
):
"""A processor for the ShieldGemma 2 model.
Args:
image_processor: The image processor to use, typically a `Gemma3ImageProcessorFast` instance.
... | A processor for the ShieldGemma 2 model.
Args:
image_processor: The image processor to use, typically a `Gemma3ImageProcessorFast` instance.
tokenizer: The tokenizer to use, typically a `GemmaTokenizerFast` instance.
chat_template: The chat template to use with this processo... | __init__ | python | huggingface/transformers | src/transformers/models/shieldgemma2/processing_shieldgemma2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/shieldgemma2/processing_shieldgemma2.py | Apache-2.0 |
def __call__(
self,
images: ImageInput = None,
text=None,
videos=None,
audio=None,
**kwargs: Unpack[ShieldGemma2ProcessorKwargs],
) -> BatchFeature:
"""Generates a batch of inputs from the provided images.
ShieldGemma was trained to classify image con... | Generates a batch of inputs from the provided images.
ShieldGemma was trained to classify image content for policy compliance using a specific prompt construction.
This processor generates a batch of such prompts from the provided images by:
1. Creating a list of conversations, one for each `... | __call__ | python | huggingface/transformers | src/transformers/models/shieldgemma2/processing_shieldgemma2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/shieldgemma2/processing_shieldgemma2.py | Apache-2.0 |
def split_encoderblock_layers(state_dict: dict) -> dict:
"""
Split the encoderblock weight into layers. In some cases they are concatenated in
the original checkpoints.
"""
# Make shallow copy
state_dict = state_dict.copy()
# Split encoderblock weight into layers
keys = list(state_dict.k... |
Split the encoderblock weight into layers. In some cases they are concatenated in
the original checkpoints.
| split_encoderblock_layers | python | huggingface/transformers | src/transformers/models/siglip/convert_siglip_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/convert_siglip_to_hf.py | Apache-2.0 |
def convert_siglip_checkpoint(model_name, pytorch_dump_folder_path, verify_logits=True, push_to_hub=False):
"""
Copy/paste/tweak model's weights to our SigLIP structure.
"""
# Define default SigLIP configuration
config = get_siglip_config(model_name)
# Get checkpoint
checkpoint = model_nam... |
Copy/paste/tweak model's weights to our SigLIP structure.
| convert_siglip_checkpoint | python | huggingface/transformers | src/transformers/models/siglip/convert_siglip_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/convert_siglip_to_hf.py | Apache-2.0 |
def preprocess(
self,
images: ImageInput,
do_resize: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
resample: PILImageResampling = None,
do_rescale: Optional[bool] = None,
rescale_factor: Optional[float] = None,
do_normalize: Optional[bool] ... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
passing in images with pixel values between 0 and 1, set `do_rescale=False`.
... | preprocess | python | huggingface/transformers | src/transformers/models/siglip/image_processing_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/image_processing_siglip.py | Apache-2.0 |
def trunc_normal_tf_(
tensor: torch.Tensor, mean: float = 0.0, std: float = 1.0, a: float = -2.0, b: float = 2.0
) -> torch.Tensor:
"""Fills the input Tensor with values drawn from a truncated
normal distribution. The values are effectively drawn from the
normal distribution :math:`\\mathcal{N}(\text{me... | Fills the input Tensor with values drawn from a truncated
normal distribution. The values are effectively drawn from the
normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
with values outside :math:`[a, b]` redrawn until they are within
the bounds. The method used for generating the random... | trunc_normal_tf_ | python | huggingface/transformers | src/transformers/models/siglip/modeling_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/modeling_siglip.py | Apache-2.0 |
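The docstring above describes redrawing out-of-bounds values; the actual `trunc_normal_tf_` generates the values differently (an inverse-CDF method on the GPU tensor), but the redraw description can be sketched directly in pure Python:

```python
import random

def trunc_normal_sketch(n, mean=0.0, std=1.0, a=-2.0, b=2.0, seed=0):
    """Truncated-normal samples by rejection: redraw any value outside
    [a, b] until it lands within the bounds."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.gauss(mean, std)
        if a <= x <= b:
            out.append(x)
    return out
```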
def interpolate_pos_encoding(self, embeddings: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""
This method allows interpolating the pre-trained position encodings, to be able to use the model on higher resolution
images. This method is also adapted to support torch.jit tracing and n... |
This method allows interpolating the pre-trained position encodings, to be able to use the model on higher resolution
images. This method is also adapted to support torch.jit tracing and no class embeddings.
Adapted from:
- https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fa... | interpolate_pos_encoding | python | huggingface/transformers | src/transformers/models/siglip/modeling_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/modeling_siglip.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
output_attentions: Optional[bool] = False,
) -> Tuple[torch.FloatTensor]:
"""
Args:
hidden_states (`torch.FloatTensor`):
Input to the layer of shape `(batch, seq_... |
Args:
hidden_states (`torch.FloatTensor`):
Input to the layer of shape `(batch, seq_len, embed_dim)`.
attention_mask (`torch.FloatTensor`):
Attention mask of shape `(batch, 1, q_len, k_v_seq_len)` where padding elements are indicated by very large negativ... | forward | python | huggingface/transformers | src/transformers/models/siglip/modeling_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/modeling_siglip.py | Apache-2.0 |
def forward(
self,
inputs_embeds,
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
) -> BaseModelOutput:
r"""
Args:
inputs_embeds (`torch.FloatTensor` of shape `(b... |
Args:
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert `input_... | forward | python | huggingface/transformers | src/transformers/models/siglip/modeling_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/modeling_siglip.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
) -> BaseModelOutputWithPool... |
Examples:
```python
>>> from transformers import AutoTokenizer, SiglipTextModel
>>> model = SiglipTextModel.from_pretrained("google/siglip-base-patch16-224")
>>> tokenizer = AutoTokenizer.from_pretrained("google/siglip-base-patch16-224")
>>> # important: make sure to ... | forward | python | huggingface/transformers | src/transformers/models/siglip/modeling_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/modeling_siglip.py | Apache-2.0 |
def forward(
self,
pixel_values,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_encoding: bool = False,
) -> BaseModelOutputWithPooling:
r"""
Examples:
```python
>>> from PIL import Image... |
Examples:
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, SiglipVisionModel
>>> model = SiglipVisionModel.from_pretrained("google/siglip-base-patch16-224")
>>> processor = AutoProcessor.from_pretrained("google... | forward | python | huggingface/transformers | src/transformers/models/siglip/modeling_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/modeling_siglip.py | Apache-2.0 |
def get_text_features(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
) -> torch.FloatTe... |
Returns:
text_features (`torch.FloatTensor` of shape `(batch_size, output_dim`): The text embeddings obtained by
applying the projection layer to the pooled output of [`SiglipTextModel`].
Examples:
```python
>>> from transformers import AutoTokenizer, AutoModel... | get_text_features | python | huggingface/transformers | src/transformers/models/siglip/modeling_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/modeling_siglip.py | Apache-2.0 |
def get_image_features(
self,
pixel_values: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_encoding: bool = False,
) -> torch.FloatTensor:
r"""
Returns:
ima... |
Returns:
image_features (`torch.FloatTensor` of shape `(batch_size, output_dim`): The image embeddings obtained by
applying the projection layer to the pooled output of [`SiglipVisionModel`].
Examples:
```python
>>> from PIL import Image
>>> import requ... | get_image_features | python | huggingface/transformers | src/transformers/models/siglip/modeling_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/modeling_siglip.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
pixel_values: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
return_loss: Optional[bool] = None,
output_attentions... |
return_loss (`bool`, *optional*):
Whether or not to return the contrastive loss.
Examples:
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, AutoModel
>>> import torch
>>> model = AutoModel.... | forward | python | huggingface/transformers | src/transformers/models/siglip/modeling_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/modeling_siglip.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_encoding: bool = False,
) -> ImageClassifierOutput:
r"... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`config.n... | forward | python | huggingface/transformers | src/transformers/models/siglip/modeling_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/modeling_siglip.py | Apache-2.0 |
def __call__(
self,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None,
images: ImageInput = None,
padding: Union[bool, str, PaddingStrategy] = False,
truncation: Union[bool, str, TruncationStrategy] = None,
max_length: Optional[int... |
Main method to prepare for the model one or several sequences(s) and image(s). This method forwards the `text`
and `kwargs` arguments to SiglipTokenizer's [`~SiglipTokenizer.__call__`] if `text` is not `None` to encode
the text. To prepare the image(s), this method forwards the `images` argumen... | __call__ | python | huggingface/transformers | src/transformers/models/siglip/processing_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/processing_siglip.py | Apache-2.0 |
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*)... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/siglip/tokenization_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/tokenization_siglip.py | Apache-2.0 |
def _add_eos_if_not_present(self, token_ids: List[int]) -> List[int]:
"""Do not add eos again if user already added it."""
if len(token_ids) > 0 and token_ids[-1] == self.eos_token_id:
warnings.warn(
f"This sequence already has {self.eos_token}. In future versions this behavi... | Do not add eos again if user already added it. | _add_eos_if_not_present | python | huggingface/transformers | src/transformers/models/siglip/tokenization_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/tokenization_siglip.py | Apache-2.0 |
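The eos-append logic above can be sketched standalone (the eos id 1 is an assumption for illustration; the real tokenizer reads it from `self.eos_token_id` and also emits the deprecation warning shown):

```python
def add_eos_if_not_present(token_ids, eos_token_id=1):
    """Append EOS only if it is not already the final token."""
    if token_ids and token_ids[-1] == eos_token_id:
        return token_ids
    return token_ids + [eos_token_id]
```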
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
use of token type ids, therefore a list ... |
Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
use of token type ids, therefore a list of zeros is returned.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optiona... | create_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/siglip/tokenization_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/tokenization_siglip.py | Apache-2.0 |
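Since T5-style models ignore token type ids, the helper above reduces to a list of zeros over the full built sequence; a sketch (eos id 1 assumed for illustration):

```python
def create_token_type_ids(token_ids_0, token_ids_1=None, eos=1):
    """All-zero token type ids covering each sequence plus its EOS."""
    eos_ids = [eos]
    if token_ids_1 is None:
        return [0] * len(token_ids_0 + eos_ids)
    return [0] * len(token_ids_0 + eos_ids + token_ids_1 + eos_ids)
```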
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A sequence has the fo... |
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A sequence has the following format:
- single sequence: `X </s>`
- pair of sequences: `A </s> B </s>`
Args:
token_ids_0 (`List[int... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/siglip/tokenization_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/tokenization_siglip.py | Apache-2.0 |
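The `X </s>` / `A </s> B </s>` construction described above can be sketched as (with 1 as the assumed id of `</s>`):

```python
def build_inputs_with_special_tokens(token_ids_0, token_ids_1=None, eos=1):
    """Single sequence: `X </s>`; pair: `A </s> B </s>`."""
    if token_ids_1 is None:
        return token_ids_0 + [eos]
    return token_ids_0 + [eos] + token_ids_1 + [eos]
```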
def canonicalize_text(self, text, *, keep_punctuation_exact_string=None):
"""Returns canonicalized `text` (punctuation removed).
Args:
text (`str`):
String to be canonicalized.
keep_punctuation_exact_string (`str`, *optional*):
If provided, then th... | Returns canonicalized `text` (punctuation removed.
Args:
text (`str`):
String to be canonicalized.
keep_punctuation_exact_string (`str`, *optional*):
If provided, then this exact string is kept. For example providing '{}' will keep any occurrences of '{}'... | canonicalize_text | python | huggingface/transformers | src/transformers/models/siglip/tokenization_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/tokenization_siglip.py | Apache-2.0 |
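A standalone sketch of this canonicalization (lowercasing, whitespace collapsing, and the exact punctuation set are assumptions based on the docstring, not necessarily identical to the original implementation):

```python
import string

def canonicalize_text_sketch(text, keep_punctuation_exact_string=None):
    """Drop punctuation (optionally preserving one exact string),
    lowercase, and collapse runs of whitespace."""
    def strip_punct(part):
        return part.translate(str.maketrans("", "", string.punctuation))
    if keep_punctuation_exact_string:
        # split around the kept string so it survives punctuation removal
        text = keep_punctuation_exact_string.join(
            strip_punct(part) for part in text.split(keep_punctuation_exact_string)
        )
    else:
        text = strip_punct(text)
    return " ".join(text.lower().split())
```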
def tokenize(self, text: "TextInput", add_special_tokens=False, **kwargs) -> List[str]:
"""
Converts a string to a list of tokens.
"""
tokens = super().tokenize(SPIECE_UNDERLINE + text.replace(SPIECE_UNDERLINE, " "), **kwargs)
if len(tokens) > 1 and tokens[0] == SPIECE_UNDERLINE... |
Converts a string to a list of tokens.
| tokenize | python | huggingface/transformers | src/transformers/models/siglip/tokenization_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/tokenization_siglip.py | Apache-2.0 |
def convert_tokens_to_string(self, tokens):
"""Converts a sequence of tokens (string) into a single string."""
current_sub_tokens = []
out_string = ""
prev_is_special = False
for token in tokens:
# make sure that special tokens are not decoded using sentencepiece model
... | Converts a sequence of tokens (string) into a single string. | convert_tokens_to_string | python | huggingface/transformers | src/transformers/models/siglip/tokenization_siglip.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip/tokenization_siglip.py | Apache-2.0 |
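The real method delegates decoding of non-special pieces to the SentencePiece model; a simplified sketch that only handles the "▁" whitespace marker and emits special tokens verbatim:

```python
SPIECE_UNDERLINE = "▁"

def convert_tokens_to_string_sketch(tokens, special_tokens=frozenset()):
    """Concatenate pieces, turning '▁' markers into spaces; special
    tokens pass through untouched."""
    out, chunk = "", []
    for token in tokens:
        if token in special_tokens:
            out += "".join(chunk).replace(SPIECE_UNDERLINE, " ") + token
            chunk = []
        else:
            chunk.append(token)
    out += "".join(chunk).replace(SPIECE_UNDERLINE, " ")
    return out.strip()
```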
def get_siglip2_config(model_name: str) -> Siglip2Config:
"""
Create a configuration for the Siglip2 model based on the model name.
"""
_, variant, patch, _ = model_name.split("-")
patch_size = int(patch[-2:])
num_patches = 256
common_options = COMMON_CONFIG_PARAMS[variant]
vision_conf... |
Create a configuration for the Siglip2 model based on the model name.
| get_siglip2_config | python | huggingface/transformers | src/transformers/models/siglip2/convert_siglip2_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/convert_siglip2_to_hf.py | Apache-2.0 |
def flatten_nested_dict(params: dict, parent_key: str = "", sep: str = "/") -> dict:
"""
Flatten a nested original checkpoint dictionary into a flat dictionary.
"""
items = []
for k, v in params.items():
new_key = parent_key + sep + k if parent_key else k
if isinstance(v, collections... |
Flatten a nested original checkpoint dictionary into a flat dictionary.
| flatten_nested_dict | python | huggingface/transformers | src/transformers/models/siglip2/convert_siglip2_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/convert_siglip2_to_hf.py | Apache-2.0 |
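The flattening helper above can be reproduced almost verbatim in pure Python (using a `dict` instance check in place of the original's `collections.abc.MutableMapping`):

```python
def flatten_nested_dict(params, parent_key="", sep="/"):
    """Flatten a nested dict into '/'-joined keys, mirroring how
    checkpoint parameter trees are linearized."""
    items = []
    for k, v in params.items():
        new_key = parent_key + sep + k if parent_key else k
        if isinstance(v, dict):
            items.extend(flatten_nested_dict(v, new_key, sep=sep).items())
        else:
            items.append((new_key, v))
    return dict(items)
```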
def split_encoderblock_layers(state_dict: dict) -> dict:
"""
Split the encoderblock weight into layers. In some cases they are concatenated in
the original checkpoints.
"""
# Make shallow copy
state_dict = state_dict.copy()
# Split encoderblock weight into layers
keys = list(state_dict.k... |
Split the encoderblock weight into layers. In some cases they are concatenated in
the original checkpoints.
| split_encoderblock_layers | python | huggingface/transformers | src/transformers/models/siglip2/convert_siglip2_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/convert_siglip2_to_hf.py | Apache-2.0 |
def merge_qkv_for_head(state_dict: dict, config: Siglip2Config) -> dict:
"""
Merge the q/k/v weights and biases for the attention head.
"""
# Make shallow copy
state_dict = state_dict.copy()
# Read and process q/k/v weights and biases
qkv_weights, qkv_biases = [], []
for name in ["query"... |
Merge the q/k/v weights and biases for the attention head.
| merge_qkv_for_head | python | huggingface/transformers | src/transformers/models/siglip2/convert_siglip2_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/convert_siglip2_to_hf.py | Apache-2.0 |
def convert_old_keys_to_new_keys(state_dict_keys: list) -> dict:
"""
This function should be applied only once, on the concatenated keys to efficiently rename using
the key mappings.
"""
output_dict = {}
if state_dict_keys is not None:
old_text = "\n".join(state_dict_keys)
new_te... |
This function should be applied only once, on the concatenated keys to efficiently rename using
the key mappings.
| convert_old_keys_to_new_keys | python | huggingface/transformers | src/transformers/models/siglip2/convert_siglip2_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/convert_siglip2_to_hf.py | Apache-2.0 |
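The trick here is to join every key into one newline-separated string, run each regex substitution once over the whole text, then zip old keys to new ones. A self-contained sketch (the two mapping patterns are invented examples, not the real SigLIP2 mapping):

```python
import re

# hypothetical mapping: regex pattern -> replacement
ORIGINAL_TO_CONVERTED_KEY_MAPPING = {
    r"params/img/(\w+)": r"vision_model.\1",
    r"kernel$": "weight",
}


def convert_old_keys_to_new_keys(state_dict_keys: list) -> dict:
    """Rename all keys in one pass over a single concatenated string."""
    output_dict = {}
    if state_dict_keys is not None:
        old_text = "\n".join(state_dict_keys)
        new_text = old_text
        for pattern, replacement in ORIGINAL_TO_CONVERTED_KEY_MAPPING.items():
            # MULTILINE so ^/$ anchor at each key, not just the whole blob
            new_text = re.sub(pattern, replacement, new_text, flags=re.MULTILINE)
        output_dict = dict(zip(old_text.split("\n"), new_text.split("\n")))
    return output_dict
```

Applying the patterns once to the concatenated text is much faster than looping regexes over each key individually, which is why the docstring says it should run only once.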
def create_image(width, height):
"""
Helper function to create an image with a blue circle on a red background.
"""
image = Image.new("RGB", (width, height), color="red")
draw = ImageDraw.Draw(image)
center_x = image.width // 2
center_y = image.height // 2
radius = min(center_x, center_y... |
Helper function to create an image with a blue circle on a red background.
| create_image | python | huggingface/transformers | src/transformers/models/siglip2/convert_siglip2_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/convert_siglip2_to_hf.py | Apache-2.0 |
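The original helper uses PIL's `ImageDraw`; a PIL-free sketch of the same geometry on a plain pixel grid. The radius expression is truncated in the source, so the `// 2` here is an assumption:

```python
RED, BLUE = (255, 0, 0), (0, 0, 255)


def create_image(width: int, height: int):
    """Red background with a centered blue circle, as a list of RGB rows
    (stand-in for the PIL version; radius choice is an assumption)."""
    cx, cy = width // 2, height // 2
    radius = min(cx, cy) // 2  # assumed; the original expression is truncated
    return [
        [BLUE if (x - cx) ** 2 + (y - cy) ** 2 <= radius**2 else RED
         for x in range(width)]
        for y in range(height)
    ]
```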
def convert_siglip2_checkpoint(model_name, pytorch_dump_folder_path, verify_logits=True, push_to_hub=False):
"""
Copy/paste/tweak model's weights to our Siglip2 structure.
"""
# Define Siglip2 configuration
config = get_siglip2_config(model_name)
checkpoint = MODEL_NAME_TO_CHECKPOINT_PATH[mode... |
Copy/paste/tweak model's weights to our Siglip2 structure.
| convert_siglip2_checkpoint | python | huggingface/transformers | src/transformers/models/siglip2/convert_siglip2_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/convert_siglip2_to_hf.py | Apache-2.0 |
def get_image_size_for_max_num_patches(
image_height: int, image_width: int, patch_size: int, max_num_patches: int, eps: float = 1e-5
) -> Tuple[int, int]:
"""
Determine image size based on the max number of patches; ensure dimensions are divisible by the patch size and the image is at least 1 patch.
Args:
... |
Determine image size based on the max number of patches; ensure dimensions are divisible by the patch size and the image is at least 1 patch.
Args:
image_height (`int`):
Original image height.
image_width (`int`):
Original image width.
patch_size (`int`):
Patch ... | get_image_size_for_max_num_patches | python | huggingface/transformers | src/transformers/models/siglip2/image_processing_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/image_processing_siglip2.py | Apache-2.0 |
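The constraint being solved: scale the image so `(h/patch) * (w/patch) <= max_num_patches` with both dimensions rounded up to multiples of `patch_size` and at least one patch each. The original uses a numeric search; this sketch starts from a closed-form area-based scale and corrects downward if rounding overshoots (assumes `max_num_patches >= 1`):

```python
import math


def get_image_size_for_max_num_patches(
    image_height: int, image_width: int, patch_size: int, max_num_patches: int
) -> tuple:
    """Target (height, width): patch-aligned, at least 1 patch per side,
    and at most max_num_patches patches in total."""

    def scale_to_size(scale: float) -> tuple:
        h = max(math.ceil(image_height * scale / patch_size), 1) * patch_size
        w = max(math.ceil(image_width * scale / patch_size), 1) * patch_size
        return h, w

    # initial guess from area: scale^2 * h * w / patch^2 ~= max_num_patches
    scale = math.sqrt(max_num_patches * patch_size**2 / (image_height * image_width))
    h, w = scale_to_size(scale)
    # ceiling can overshoot the budget; shrink until it is respected
    while (h // patch_size) * (w // patch_size) > max_num_patches:
        scale *= 0.99
        h, w = scale_to_size(scale)
    return h, w
```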
def convert_image_to_patches(image: np.ndarray, patch_size: int) -> np.ndarray:
"""
Convert 3D array image of shape (image_height, image_width, num_channels) into 2D array of patches of shape
(num_patches_height * num_patches_width, patch_size * patch_size * num_channels).
"""
image_height, image_wi... |
Convert 3D array image of shape (image_height, image_width, num_channels) into 2D array of patches of shape
(num_patches_height * num_patches_width, patch_size * patch_size * num_channels).
| convert_image_to_patches | python | huggingface/transformers | src/transformers/models/siglip2/image_processing_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/image_processing_siglip2.py | Apache-2.0 |
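The original performs this HWC-to-patches conversion with a numpy reshape/transpose; an equivalent explicit-loop sketch on nested lists makes the patch layout visible (patches in row-major order, each flattened as rows then columns then channels):

```python
def convert_image_to_patches(image, patch_size: int):
    """(H, W, C) nested lists -> list of flattened patches of length
    patch_size * patch_size * C. Assumes H and W are multiples of patch_size."""
    image_height = len(image)
    image_width = len(image[0])
    patches = []
    for top in range(0, image_height, patch_size):
        for left in range(0, image_width, patch_size):
            patch = []
            for y in range(top, top + patch_size):
                for x in range(left, left + patch_size):
                    patch.extend(image[y][x])  # flatten the channel dim
            patches.append(patch)
    return patches
```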
def pad_along_first_dim(array: np.ndarray, target_length: int, pad_value: int = 0) -> Tuple[np.ndarray, np.ndarray]:
"""
Pad the array along the first dimension.
"""
current_length = array.shape[0]
padding_length = target_length - current_length
mask = np.ones((target_length,), dtype=np.int32)
... |
Pad the array along the first dimension.
| pad_along_first_dim | python | huggingface/transformers | src/transformers/models/siglip2/image_processing_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/image_processing_siglip2.py | Apache-2.0 |
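The padding step brings every image's patch sequence to a fixed length and returns a 0/1 mask marking real rows, matching the visible body above. A list-based sketch:

```python
def pad_along_first_dim(rows: list, target_length: int, pad_value: int = 0):
    """Pad an (N, D) list of rows to (target_length, D); mask[i] == 1 marks
    a real row, 0 a padded one."""
    current_length = len(rows)
    padding_length = target_length - current_length
    mask = [1] * target_length
    if padding_length > 0:
        row_width = len(rows[0])
        rows = rows + [[pad_value] * row_width for _ in range(padding_length)]
        mask[current_length:] = [0] * padding_length
    return rows, mask
```

The mask is what downstream attention uses to ignore the padded patches.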
def preprocess(
self,
images: ImageInput,
do_resize: Optional[bool] = None,
resample: Optional["PILImageResampling"] = None,
do_rescale: Optional[bool] = None,
rescale_factor: Optional[float] = None,
do_normalize: Optional[bool] = None,
image_mean: Optiona... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
passing in images with pixel values between 0 and 1, set `do_rescale=False`.
... | preprocess | python | huggingface/transformers | src/transformers/models/siglip2/image_processing_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/image_processing_siglip2.py | Apache-2.0 |
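Within the preprocess pipeline (resize, rescale, normalize, patchify, pad), the two purely numeric steps can be sketched in isolation. The function name here is hypothetical, and mean = std = 0.5 is an assumption to check against the checkpoint's processor config:

```python
def rescale_and_normalize(pixels, rescale_factor=1 / 255, mean=0.5, std=0.5):
    """Map 0..255 pixel values to [0, 1] via rescale_factor, then
    standardize with the configured mean/std."""
    return [((p * rescale_factor) - mean) / std for p in pixels]
```

With mean = std = 0.5 this maps the 0..255 range onto roughly [-1, 1], the usual input range for SigLIP-style vision towers.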
def convert_image_to_patches(image: "torch.Tensor", patch_size: int) -> "torch.Tensor":
"""
Convert 3D tensor image of shape (num_channels, image_height, image_width) into 2D tensor of patches of shape
(num_patches_height * num_patches_width, patch_size * patch_size * num_channels).
"""
num_channels... |
Convert 3D tensor image of shape (num_channels, image_height, image_width) into 2D tensor of patches of shape
(num_patches_height * num_patches_width, patch_size * patch_size * num_channels).
| convert_image_to_patches | python | huggingface/transformers | src/transformers/models/siglip2/image_processing_siglip2_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/image_processing_siglip2_fast.py | Apache-2.0 |
def pad_along_first_dim(
tensor: "torch.Tensor", target_length: int, pad_value: int = 0
) -> Tuple["torch.Tensor", "torch.Tensor"]:
"""
Pad the tensor along the first dimension.
"""
current_length = tensor.shape[0]
padding_length = target_length - current_length
mask = torch.ones((target_len... |
Pad the tensor along the first dimension.
| pad_along_first_dim | python | huggingface/transformers | src/transformers/models/siglip2/image_processing_siglip2_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/image_processing_siglip2_fast.py | Apache-2.0 |
def resize_positional_embeddings(
positional_embeddings: torch.Tensor,
spatial_shapes: torch.LongTensor,
max_length: int,
) -> torch.Tensor:
"""
Resize positional embeddings to image-specific size and pad to a fixed size.
Args:
positional_embeddings (`tor... |
Resize positional embeddings to image-specific size and pad to a fixed size.
Args:
positional_embeddings (`torch.Tensor`):
Position embeddings of shape (height, width, embed_dim)
spatial_shapes (`torch.LongTensor`):
Spatial shapes of shape (batch... | resize_positional_embeddings | python | huggingface/transformers | src/transformers/models/siglip2/modeling_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modeling_siglip2.py | Apache-2.0 |
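The real function resizes the learned (height, width, embed_dim) grid with bilinear interpolation per image and then pads the flattened result to `max_length`. A nearest-neighbor stand-in (simpler than bilinear, and without the padding step) shows the resize-then-flatten idea; the signature here is a simplification, not the real API:

```python
def resize_positional_embeddings(pos_grid, target_h: int, target_w: int):
    """Resize an (H, W, D) positional-embedding grid to (target_h, target_w, D)
    by nearest neighbor, then flatten to (target_h * target_w, D)."""
    source_h, source_w = len(pos_grid), len(pos_grid[0])
    resized = []
    for y in range(target_h):
        src_y = y * source_h // target_h  # nearest source row
        row = []
        for x in range(target_w):
            src_x = x * source_w // target_w  # nearest source column
            row.append(pos_grid[src_y][src_x])
        resized.append(row)
    # flatten (target_h, target_w, D) -> (target_h * target_w, D)
    return [emb for row in resized for emb in row]
```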
def forward(self, pixel_values: torch.FloatTensor, spatial_shapes: torch.LongTensor) -> torch.Tensor:
"""
Args:
pixel_values (`torch.FloatTensor`):
Pixel values of shape (batch_size, max_num_patches, num_channels * patch_size * patch_size)
spatial_shapes (`List[Tu... |
Args:
pixel_values (`torch.FloatTensor`):
Pixel values of shape (batch_size, max_num_patches, num_channels * patch_size * patch_size)
spatial_shapes (`List[Tuple[int, int]]`):
Spatial shapes of shape (batch_size, 2) to resize the positional embeddings to
... | forward | python | huggingface/transformers | src/transformers/models/siglip2/modeling_siglip2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/siglip2/modeling_siglip2.py | Apache-2.0 |
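This forward projects each flattened patch with a linear layer and adds the positional embedding already resized to that image's spatial shape. A pure-Python stand-in for the `nn.Linear` + addition step (a sketch of the data flow, not the module itself):

```python
def embed_patches(pixel_values, weight, bias, pos_embeddings):
    """pixel_values: (num_patches, patch_dim) rows; weight: (embed_dim,
    patch_dim); bias: (embed_dim,); pos_embeddings: (num_patches, embed_dim).
    Returns projected patches with positional embeddings added."""
    out = []
    for patch, pos in zip(pixel_values, pos_embeddings):
        # linear projection: one dot product per output dimension
        projected = [
            sum(p * w for p, w in zip(patch, w_row)) + b
            for w_row, b in zip(weight, bias)
        ]
        out.append([v + q for v, q in zip(projected, pos)])
    return out
```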