| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def forward(
self,
input_ids: torch.LongTensor = None,
pixel_values: torch.FloatTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
inputs_embeds:... |
vision_feature_layers (`Union[int, List[int]]`, *optional*):
        The vision feature layer, or the list of layer indices from which to select
        the vision features.
| forward | python | huggingface/transformers | src/transformers/models/vipllava/modular_vipllava.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vipllava/modular_vipllava.py | Apache-2.0 |
def forward(
self,
input_ids: torch.LongTensor = None,
pixel_values: torch.FloatTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
inputs_embeds:... |
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
... | forward | python | huggingface/transformers | src/transformers/models/vipllava/modular_vipllava.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vipllava/modular_vipllava.py | Apache-2.0 |
def from_encoder_decoder_configs(
cls, encoder_config: PretrainedConfig, decoder_config: PretrainedConfig, **kwargs
) -> PretrainedConfig:
r"""
Instantiate a [`VisionEncoderDecoderConfig`] (or a derived class) from a pre-trained encoder model
configuration and decoder model configura... |
Instantiate a [`VisionEncoderDecoderConfig`] (or a derived class) from a pre-trained encoder model
configuration and decoder model configuration.
Returns:
[`VisionEncoderDecoderConfig`]: An instance of a configuration object
| from_encoder_decoder_configs | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py | Apache-2.0 |
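The composition performed by `from_encoder_decoder_configs` can be sketched with plain dicts standing in for `PretrainedConfig` objects — the composite nests both sub-configs and marks the decoder half as a decoder with cross-attention. Illustrative only; the real method returns a `VisionEncoderDecoderConfig` instance:

```python
# Hedged sketch: plain dicts stand in for PretrainedConfig objects.
def from_encoder_decoder_configs(encoder_config, decoder_config, **kwargs):
    decoder_config = dict(decoder_config)
    # the decoder half of an encoder-decoder stack needs these flags set
    decoder_config["is_decoder"] = True
    decoder_config["add_cross_attention"] = True
    return {"encoder": dict(encoder_config), "decoder": decoder_config, **kwargs}

config = from_encoder_decoder_configs(
    {"hidden_size": 768}, {"hidden_size": 1024}, tie_word_embeddings=False
)
```

Any extra `**kwargs` (here the hypothetical `tie_word_embeddings`) land on the composite config, not on either sub-config.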
def get_decoder_config(
self, encoder_config: PretrainedConfig, decoder_config: PretrainedConfig, feature: str = "default"
) -> OnnxConfig:
r"""
Returns ONNX decoder config for `VisionEncoderDecoder` model.
Args:
encoder_config (`PretrainedConfig`):
The e... |
Returns ONNX decoder config for `VisionEncoderDecoder` model.
Args:
encoder_config (`PretrainedConfig`):
The encoder model's configuration to use when exporting to ONNX.
decoder_config (`PretrainedConfig`):
The decoder model's configuration to us... | get_decoder_config | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py | Apache-2.0 |
def init_cache(self, batch_size, max_length, encoder_outputs):
r"""
Args:
batch_size (`int`):
                Batch size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
max_length (`int`):
maximum possible length for auto-r... |
Args:
batch_size (`int`):
                Batch size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
            max_length (`int`):
                Maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized
... | init_cache | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py | Apache-2.0 |
def encode(
self,
pixel_values: jnp.ndarray,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
train: bool = False,
params: Optional[dict] = None,
dropout_rng: PRNGKey = None,
):
... |
Returns:
Example:
```python
>>> from transformers import AutoImageProcessor, FlaxVisionEncoderDecoderModel
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.... | encode | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py | Apache-2.0 |
def decode(
self,
decoder_input_ids,
encoder_outputs,
decoder_attention_mask: Optional[jnp.ndarray] = None,
decoder_position_ids: Optional[jnp.ndarray] = None,
past_key_values: Optional[dict] = None,
output_attentions: Optional[bool] = None,
output_hidden_... |
Returns:
Example:
```python
>>> from transformers import AutoImageProcessor, FlaxVisionEncoderDecoderModel
>>> import jax.numpy as jnp
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
... | decode | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py | Apache-2.0 |
def __call__(
self,
pixel_values: jnp.ndarray,
decoder_input_ids: Optional[jnp.ndarray] = None,
decoder_attention_mask: Optional[jnp.ndarray] = None,
decoder_position_ids: Optional[jnp.ndarray] = None,
output_attentions: Optional[bool] = None,
output_hidden_states... |
Returns:
Examples:
```python
>>> from transformers import FlaxVisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Imag... | __call__ | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py | Apache-2.0 |
def from_encoder_decoder_pretrained(
cls,
encoder_pretrained_model_name_or_path: Optional[Union[str, os.PathLike]] = None,
decoder_pretrained_model_name_or_path: Optional[Union[str, os.PathLike]] = None,
*model_args,
**kwargs,
) -> FlaxPreTrainedModel:
r"""
In... |
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model
checkpoints.
Params:
encoder_pretrained_model_name_or_path (`Union[str, os.PathLike]`, *optional*):
Information necessary to initiate the encoder. Can be either:
... | from_encoder_decoder_pretrained | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py | Apache-2.0 |
def from_encoder_decoder_pretrained(
cls,
encoder_pretrained_model_name_or_path: Optional[str] = None,
decoder_pretrained_model_name_or_path: Optional[str] = None,
*model_args,
**kwargs,
) -> TFPreTrainedModel:
r"""
Instantiate an encoder and a decoder from on... |
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model
checkpoints.
Params:
encoder_pretrained_model_name_or_path (`str`, *optional*):
Information necessary to initiate the encoder. Can be either:
... | from_encoder_decoder_pretrained | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py | Apache-2.0 |
def call(
self,
pixel_values: np.ndarray | tf.Tensor | None = None,
decoder_input_ids: np.ndarray | tf.Tensor | None = None,
decoder_attention_mask: np.ndarray | tf.Tensor | None = None,
encoder_outputs: Optional[Union[Tuple, TFBaseModelOutput]] = None,
past_key_values: O... |
Returns:
Examples:
```python
>>> from transformers import AutoImageProcessor, AutoTokenizer, TFVisionEncoderDecoderModel
>>> from PIL import Image
>>> import requests
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")... | call | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py | Apache-2.0 |
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
"""
Shift input ids one token to the right.
"""
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
if decoder_start_token_id is None:
... |
Shift input ids one token to the right.
| shift_tokens_right | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py | Apache-2.0 |
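The shifting logic above is simple enough to sketch in full. A pure-Python version with lists standing in for torch tensors (the real function uses tensor ops such as `masked_fill_` and also validates `pad_token_id`):

```python
# Hedged sketch of the right-shift used to build decoder inputs from labels:
# prepend the decoder start token, drop the last token, and map any -100
# (ignored-label marker) back to the pad token id.
def shift_tokens_right(input_ids, pad_token_id, decoder_start_token_id):
    if decoder_start_token_id is None:
        raise ValueError("Make sure to set the decoder_start_token_id attribute of the model's configuration.")
    shifted = []
    for row in input_ids:
        new_row = [decoder_start_token_id] + row[:-1]
        # -100 positions are ignored by the loss but must be valid token ids here
        new_row = [pad_token_id if tok == -100 else tok for tok in new_row]
        shifted.append(new_row)
    return shifted

print(shift_tokens_right([[5, -100, 7, 8]], pad_token_id=0, decoder_start_token_id=2))
# -> [[2, 5, 0, 7]]
```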
def __init__(
self,
config: Optional[PretrainedConfig] = None,
encoder: Optional[PreTrainedModel] = None,
decoder: Optional[PreTrainedModel] = None,
):
r"""
encoder (`PreTrainedModel`, *optional*):
The encoder model to use.
decoder (`PreTrainedMode... |
encoder (`PreTrainedModel`, *optional*):
The encoder model to use.
decoder (`PreTrainedModel`, *optional*):
The decoder model to use.
| __init__ | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py | Apache-2.0 |
def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs):
r"""
Example:
```python
>>> from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer
>>> from PIL import Image
>>> import requests
>>> image_processor = ... |
Example:
```python
>>> from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer
>>> from PIL import Image
>>> import requests
>>> image_processor = AutoImageProcessor.from_pretrained("ydshieh/vit-gpt2-coco-en")
>>> decoder_tokenizer... | from_pretrained | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py | Apache-2.0 |
def from_encoder_decoder_pretrained(
cls,
encoder_pretrained_model_name_or_path: Optional[str] = None,
decoder_pretrained_model_name_or_path: Optional[str] = None,
*model_args,
**kwargs,
) -> PreTrainedModel:
r"""
Instantiate an encoder and a decoder from one ... |
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model
checkpoints.
The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train
the model, you need to first set it back in training mode... | from_encoder_decoder_pretrained | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.BoolTensor] = None,
encoder_outputs: Optional[Tuple[torch.FloatTensor]] = None,
past_key_values: Optional[Tupl... |
decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using [`PreTrainedTokenizer`]. See [`PreTrainedTokenizer.encode`] and
[`PreTrainedTokenize... | forward | python | huggingface/transformers | src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py | Apache-2.0 |
def get_text_features(
self,
input_ids,
attention_mask=None,
position_ids=None,
token_type_ids=None,
params: Optional[dict] = None,
dropout_rng: jax.random.PRNGKey = None,
train=False,
):
r"""
Args:
input_ids (`numpy.ndarray... |
Args:
input_ids (`numpy.ndarray` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using [`PreTrainedTokenizer`]. See [`Pre... | get_text_features | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py | Apache-2.0 |
def get_image_features(
self, pixel_values, params: Optional[dict] = None, dropout_rng: jax.random.PRNGKey = None, train=False
):
r"""
Args:
pixel_values (`numpy.ndarray` of shape `(batch_size, num_channels, height, width)`):
Pixel values. Padding will be ignored ... |
Args:
pixel_values (`numpy.ndarray` of shape `(batch_size, num_channels, height, width)`):
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained
using [`ImageFeatureExtractionMixin`]. See [`ImageFeatureExtractionMixin.__... | get_image_features | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py | Apache-2.0 |
def from_vision_text_pretrained(
cls,
vision_model_name_or_path: Optional[str] = None,
text_model_name_or_path: Optional[str] = None,
*model_args,
**kwargs,
) -> FlaxPreTrainedModel:
"""
Params:
vision_model_name_or_path (`str`, *optional*, default... |
Params:
vision_model_name_or_path (`str`, *optional*, defaults to `None`):
Information necessary to initiate the vision model. Can be either:
- A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
- A p... | from_vision_text_pretrained | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py | Apache-2.0 |
def get_text_features(
self,
input_ids=None,
attention_mask=None,
position_ids=None,
token_type_ids=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
r"""
Returns:
text_features (`tf.Tensor` of sh... |
Returns:
        text_features (`tf.Tensor` of shape `(batch_size, output_dim)`): The text embeddings obtained by applying
the projection layer to the pooled output of [`TFCLIPTextModel`].
Examples:
```python
>>> from transformers import TFVisionTextDualEncoderModel, Au... | get_text_features | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py | Apache-2.0 |
def get_image_features(
self,
pixel_values=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
r"""
Returns:
            image_features (`tf.Tensor` of shape `(batch_size, output_dim)`): The image embeddings obtained by applying
... |
Returns:
        image_features (`tf.Tensor` of shape `(batch_size, output_dim)`): The image embeddings obtained by applying
the projection layer to the pooled output of [`TFCLIPVisionModel`].
Examples:
```python
>>> from PIL import Image
>>> import requests
... | get_image_features | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py | Apache-2.0 |
def call(
self,
input_ids: tf.Tensor | None = None,
pixel_values: tf.Tensor | None = None,
attention_mask: tf.Tensor | None = None,
position_ids: tf.Tensor | None = None,
return_loss: Optional[bool] = None,
token_type_ids: tf.Tensor | None = None,
output_a... |
Returns:
Examples:
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import (
... TFVisionTextDualEncoderModel,
... VisionTextDualEncoderProcessor,
... AutoImageProcessor,
... AutoTokenizer,
... | call | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py | Apache-2.0 |
def from_vision_text_pretrained(
cls,
vision_model_name_or_path: Optional[str] = None,
text_model_name_or_path: Optional[str] = None,
*model_args,
**kwargs,
) -> TFPreTrainedModel:
"""
Params:
vision_model_name_or_path (`str`, *optional*, defaults ... |
Params:
vision_model_name_or_path (`str`, *optional*, defaults to `None`):
Information necessary to initiate the vision model. Can be either:
- A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
- A p... | from_vision_text_pretrained | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py | Apache-2.0 |
def dummy_inputs(self):
"""
Dummy inputs to build the network.
Returns:
`Dict[str, tf.Tensor]`: The dummy inputs.
"""
input_ids = tf.constant(DUMMY_INPUTS, dtype=tf.int32)
batch_size, seq_len = input_ids.shape
VISION_DUMMY_INPUTS = tf.random.uniform(... |
Dummy inputs to build the network.
Returns:
`Dict[str, tf.Tensor]`: The dummy inputs.
| dummy_inputs | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py | Apache-2.0 |
def __init__(
self,
config: Optional[VisionTextDualEncoderConfig] = None,
vision_model: Optional[PreTrainedModel] = None,
text_model: Optional[PreTrainedModel] = None,
):
r"""
vision_model (`PreTrainedModel`):
The vision model to use.
text_model (`... |
vision_model (`PreTrainedModel`):
The vision model to use.
text_model (`PreTrainedModel`):
The text model to use.
| __init__ | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py | Apache-2.0 |
def get_text_features(
self,
input_ids=None,
attention_mask=None,
position_ids=None,
token_type_ids=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
r"""
Returns:
text_features (`torch.FloatTenso... |
Returns:
        text_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`): The text embeddings obtained by
applying the projection layer to the pooled output of [`CLIPTextModel`].
Examples:
```python
>>> from transformers import VisionTextDualEncoderModel... | get_text_features | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py | Apache-2.0 |
def get_image_features(
self,
pixel_values=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
r"""
Returns:
            image_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`): The image embeddings obtained by
... |
Returns:
        image_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`): The image embeddings obtained by
applying the projection layer to the pooled output of [`CLIPVisionModel`].
Examples:
```python
>>> from PIL import Image
>>> import reques... | get_image_features | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
pixel_values: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
return_loss: Optional[bool] = None,
token_type_ids: O... |
return_loss (`bool`, *optional*):
Whether or not to return the contrastive loss.
Examples:
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import (
... VisionTextDualEncoderModel,
... VisionTextDualEncod... | forward | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py | Apache-2.0 |
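`return_loss` asks the model for the CLIP-style contrastive objective: image and text embeddings are compared with scaled dot products, then scored with symmetric cross-entropy against the matching diagonal. A pure-Python sketch of the idea — list-based and illustrative only; the model computes this on torch tensors, and `contrastive_loss` / `embeds` here are hypothetical names:

```python
import math

def contrastive_loss(image_embeds, text_embeds, logit_scale=1.0):
    n = len(image_embeds)
    # similarity logits: logits[i][j] = scale * <image_i, text_j>
    logits = [[logit_scale * sum(a * b for a, b in zip(img, txt))
               for txt in text_embeds] for img in image_embeds]

    def cross_entropy(rows):
        # mean cross-entropy with the diagonal (matching pair) as the target class
        total = 0.0
        for i, row in enumerate(rows):
            log_z = math.log(sum(math.exp(x) for x in row))
            total += log_z - row[i]
        return total / n

    text_logits = [list(col) for col in zip(*logits)]  # text-to-image direction
    return 0.5 * (cross_entropy(logits) + cross_entropy(text_logits))

embeds = [[1.0, 0.0], [0.0, 1.0]]
```

With perfectly matched orthonormal embeddings, raising the logit scale sharpens the softmax and drives the loss toward zero — which is why CLIP-style models learn a temperature.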
def from_vision_text_pretrained(
cls,
vision_model_name_or_path: Optional[str] = None,
text_model_name_or_path: Optional[str] = None,
*model_args,
**kwargs,
) -> PreTrainedModel:
"""
Params:
vision_model_name_or_path (`str`, *optional*, defaults to... |
Params:
vision_model_name_or_path (`str`, *optional*, defaults to `None`):
Information necessary to initiate the vision model. Can be either:
- A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
- A p... | from_vision_text_pretrained | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py | Apache-2.0 |
def __call__(
self,
images: Optional[ImageInput] = None,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None,
audio=None,
videos=None,
**kwargs: Unpack[VisionTextDualEncoderProcessorKwargs],
) -> BatchEncoding:
"""
... |
    Main method to prepare one or several sequence(s) and image(s) for the model. This method forwards the `text`
and `kwargs` arguments to VisionTextDualEncoderTokenizer's [`~PreTrainedTokenizer.__call__`] if `text` is not
`None` to encode the text. To prepare the image(s), this method forwards t... | __call__ | python | huggingface/transformers | src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py | Apache-2.0 |
def convert_visual_bert_checkpoint(checkpoint_path, pytorch_dump_folder_path):
"""
Copy/paste/tweak model's weights to our VisualBERT structure.
"""
assert checkpoint_path.split("/")[-1] in ACCEPTABLE_CHECKPOINTS, (
f"The checkpoint provided must be in {ACCEPTABLE_CHECKPOINTS}."
)
# Ge... |
Copy/paste/tweak model's weights to our VisualBERT structure.
| convert_visual_bert_checkpoint | python | huggingface/transformers | src/transformers/models/visual_bert/convert_visual_bert_original_pytorch_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/visual_bert/convert_visual_bert_original_pytorch_checkpoint_to_pytorch.py | Apache-2.0 |
def __init__(self, config, add_pooling_layer=True):
r"""
        add_pooling_layer (`bool`, *optional*, defaults to `True`):
            Whether to add a pooling layer.
"""
super().__init__(config)
self.config = config
self.embeddings = VisualBertEmbeddings(config)
self.enc... |
    add_pooling_layer (`bool`, *optional*, defaults to `True`):
        Whether to add a pooling layer.
| __init__ | python | huggingface/transformers | src/transformers/models/visual_bert/modeling_visual_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/visual_bert/modeling_visual_bert.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.LongTensor] = None,
in... |
visual_embeds (`torch.FloatTensor` of shape `(batch_size, visual_seq_length, visual_embedding_dim)`, *optional*):
        The embedded representation of the visual inputs, generally derived using an object detector.
visual_attention_mask (`torch.FloatTensor` of shape `(batch_size, visual_seq_... | forward | python | huggingface/transformers | src/transformers/models/visual_bert/modeling_visual_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/visual_bert/modeling_visual_bert.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.LongTensor] = None,
in... |
visual_embeds (`torch.FloatTensor` of shape `(batch_size, visual_seq_length, visual_embedding_dim)`, *optional*):
        The embedded representation of the visual inputs, generally derived using an object detector.
visual_attention_mask (`torch.FloatTensor` of shape `(batch_size, visual_seq_... | forward | python | huggingface/transformers | src/transformers/models/visual_bert/modeling_visual_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/visual_bert/modeling_visual_bert.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.LongTensor] = None,
in... |
input_ids (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
[`PreTrainedTokenizer.__call__`] for details.
... | forward | python | huggingface/transformers | src/transformers/models/visual_bert/modeling_visual_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/visual_bert/modeling_visual_bert.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.LongTensor] = None,
in... |
visual_embeds (`torch.FloatTensor` of shape `(batch_size, visual_seq_length, visual_embedding_dim)`, *optional*):
        The embedded representation of the visual inputs, generally derived using an object detector.
visual_attention_mask (`torch.FloatTensor` of shape `(batch_size, visual_seq_... | forward | python | huggingface/transformers | src/transformers/models/visual_bert/modeling_visual_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/visual_bert/modeling_visual_bert.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.LongTensor] = None,
in... |
visual_embeds (`torch.FloatTensor` of shape `(batch_size, visual_seq_length, visual_embedding_dim)`, *optional*):
        The embedded representation of the visual inputs, generally derived using an object detector.
visual_attention_mask (`torch.FloatTensor` of shape `(batch_size, visual_seq_... | forward | python | huggingface/transformers | src/transformers/models/visual_bert/modeling_visual_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/visual_bert/modeling_visual_bert.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.LongTensor] = None,
in... |
visual_embeds (`torch.FloatTensor` of shape `(batch_size, visual_seq_length, visual_embedding_dim)`, *optional*):
        The embedded representation of the visual inputs, generally derived using an object detector.
visual_attention_mask (`torch.FloatTensor` of shape `(batch_size, visual_seq_... | forward | python | huggingface/transformers | src/transformers/models/visual_bert/modeling_visual_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/visual_bert/modeling_visual_bert.py | Apache-2.0 |
def convert_vit_checkpoint(model_name, pytorch_dump_folder_path, base_model=True):
"""
Copy/paste/tweak model's weights to our ViT structure.
"""
# define default ViT configuration
config = ViTConfig()
# patch_size
if model_name[-1] == "8":
config.patch_size = 8
# set labels if ... |
Copy/paste/tweak model's weights to our ViT structure.
| convert_vit_checkpoint | python | huggingface/transformers | src/transformers/models/vit/convert_dino_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit/convert_dino_to_pytorch.py | Apache-2.0 |
def convert_vit_checkpoint(vit_name, pytorch_dump_folder_path):
"""
Copy/paste/tweak model's weights to our ViT structure.
"""
# define default ViT configuration
config = ViTConfig()
base_model = False
# load original model from timm
timm_model = timm.create_model(vit_name, pretrained=... |
Copy/paste/tweak model's weights to our ViT structure.
| convert_vit_checkpoint | python | huggingface/transformers | src/transformers/models/vit/convert_vit_timm_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit/convert_vit_timm_to_pytorch.py | Apache-2.0 |
def resize(
self,
image: np.ndarray,
size: Dict[str, int],
resample: PILImageResampling = PILImageResampling.BILINEAR,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
**kwargs,
) -> ... |
Resize an image to `(size["height"], size["width"])`.
Args:
image (`np.ndarray`):
Image to resize.
size (`Dict[str, int]`):
Dictionary in the format `{"height": int, "width": int}` specifying the size of the output image.
resample (`P... | resize | python | huggingface/transformers | src/transformers/models/vit/image_processing_vit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit/image_processing_vit.py | Apache-2.0 |
def preprocess(
self,
images: ImageInput,
do_resize: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
resample: PILImageResampling = None,
do_rescale: Optional[bool] = None,
rescale_factor: Optional[float] = None,
do_normalize: Optional[bool] ... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
passing in images with pixel values between 0 and 1, set `do_rescale=False`.
... | preprocess | python | huggingface/transformers | src/transformers/models/vit/image_processing_vit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit/image_processing_vit.py | Apache-2.0 |
def interpolate_pos_encoding(self, embeddings, height, width) -> tf.Tensor:
"""
This method allows interpolating the pre-trained position encodings so that the model can be used on higher
resolution images.
Source:
https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac95... |
This method allows interpolating the pre-trained position encodings so that the model can be used on higher
resolution images.
Source:
https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362a/vision_transformer.py#L174
| interpolate_pos_encoding | python | huggingface/transformers | src/transformers/models/vit/modeling_tf_vit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit/modeling_tf_vit.py | Apache-2.0 |
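The interpolation above exists because a ViT pre-trained at 224x224 stores one position embedding per patch of a fixed grid; at a higher resolution the patch grid grows, so the embedding grid must be resized to match. A rough pure-Python sketch of the idea using nearest-neighbor lookup (the actual models interpolate the embedding tensor bicubically, this is only the grid bookkeeping):

```python
def patch_grid(image_size: int, patch_size: int) -> int:
    """Number of patches along one side of a square image."""
    return image_size // patch_size

def resize_position_grid(grid, new_size):
    """Nearest-neighbor resize of a square grid of embedding vectors.
    `grid` is a list of lists (old_size x old_size) of arbitrary values."""
    old_size = len(grid)
    out = []
    for i in range(new_size):
        src_i = min(int(i * old_size / new_size), old_size - 1)
        row = []
        for j in range(new_size):
            src_j = min(int(j * old_size / new_size), old_size - 1)
            row.append(grid[src_i][src_j])
        out.append(row)
    return out
```

For example, going from 224x224 to 384x384 with 16-pixel patches grows the grid from 14x14 to 24x24 embeddings.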
def call(
self,
pixel_values: TFModelInputType | None = None,
head_mask: np.ndarray | tf.Tensor | None = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_encoding: Optional[bool] = None,
return_dict: Opti... |
labels (`tf.Tensor` or `np.ndarray` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss). If
... | call | python | huggingface/transformers | src/transformers/models/vit/modeling_tf_vit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit/modeling_tf_vit.py | Apache-2.0 |
def interpolate_pos_encoding(self, embeddings: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""
This method allows interpolating the pre-trained position encodings so that the model can be used on higher resolution
images. This method is also adapted to support torch.jit tracing.
... |
This method allows interpolating the pre-trained position encodings so that the model can be used on higher resolution
images. This method is also adapted to support torch.jit tracing.
Adapted from:
- https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362... | interpolate_pos_encoding | python | huggingface/transformers | src/transformers/models/vit/modeling_vit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit/modeling_vit.py | Apache-2.0 |
def __init__(self, config: ViTConfig, add_pooling_layer: bool = True, use_mask_token: bool = False):
r"""
add_pooling_layer (bool, *optional*, defaults to `True`):
Whether to add a pooling layer
use_mask_token (`bool`, *optional*, defaults to `False`):
Whether to use a ma... |
add_pooling_layer (bool, *optional*, defaults to `True`):
Whether to add a pooling layer
use_mask_token (`bool`, *optional*, defaults to `False`):
Whether to use a mask token for masked image modeling.
| __init__ | python | huggingface/transformers | src/transformers/models/vit/modeling_vit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit/modeling_vit.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
bool_masked_pos: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_enc... |
bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, num_patches)`, *optional*):
Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).
| forward | python | huggingface/transformers | src/transformers/models/vit/modeling_vit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit/modeling_vit.py | Apache-2.0 |
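`bool_masked_pos` is simply one boolean flag per patch. A sketch of building such a mask for masked image modeling; `random_patch_mask` is a hypothetical helper for illustration, not part of the library:

```python
import random

def random_patch_mask(num_patches: int, mask_ratio: float, seed: int = 0):
    """Return a list of 0/1 flags marking which patches are masked (1)."""
    rng = random.Random(seed)
    num_masked = int(num_patches * mask_ratio)
    masked_indices = rng.sample(range(num_patches), num_masked)
    mask = [0] * num_patches
    for i in masked_indices:
        mask[i] = 1
    return mask
```

A 224x224 image with 16-pixel patches has 196 patches, so the mask is a length-196 vector (converted to a `torch.BoolTensor` before the forward call).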
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
bool_masked_pos: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_enc... |
bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, num_patches)`):
Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).
Examples:
```python
>>> from transformers import AutoImageProcessor, ViTForMaskedImageModeling
>>> impor... | forward | python | huggingface/transformers | src/transformers/models/vit/modeling_vit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit/modeling_vit.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_encoding: Option... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss). If
`config.n... | forward | python | huggingface/transformers | src/transformers/models/vit/modeling_vit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit/modeling_vit.py | Apache-2.0 |
def get_absolute_positions(self, abs_pos_embeddings, has_cls_token, height, width):
"""
Calculate absolute positional embeddings. If needed, resize embeddings and remove cls_token dimension for the
original embeddings.
Args:
abs_pos_embeddings (`torch.Tensor`):
... |
Calculate absolute positional embeddings. If needed, resize embeddings and remove cls_token dimension for the
original embeddings.
Args:
abs_pos_embeddings (`torch.Tensor`):
Absolute positional embeddings with (1, num_position, num_channels).
has_cls_tok... | get_absolute_positions | python | huggingface/transformers | src/transformers/models/vitdet/modeling_vitdet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitdet/modeling_vitdet.py | Apache-2.0 |
def get_rel_pos(q_size, k_size, rel_pos):
"""
Get relative positional embeddings according to the relative positions of query and key sizes.
Args:
q_size (`int`):
Size of query q.
k_size (`int`):
Size of key k.
rel_pos (`torch.Tensor`):
Relative p... |
Get relative positional embeddings according to the relative positions of query and key sizes.
Args:
q_size (`int`):
Size of query q.
k_size (`int`):
Size of key k.
rel_pos (`torch.Tensor`):
Relative position embeddings (num_embeddings, num_channels)... | get_rel_pos | python | huggingface/transformers | src/transformers/models/vitdet/modeling_vitdet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitdet/modeling_vitdet.py | Apache-2.0 |
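At its core, the relative-position lookup indexes a table of length `q_size + k_size - 1` by the offset `q - k`, shifted so the smallest offset maps to index 0. A pure-Python sketch for the equal-size case (the real `get_rel_pos` additionally rescales coordinates and interpolates the table when query and key sizes differ):

```python
def relative_position_index(q_size: int, k_size: int):
    """Index into a (q_size + k_size - 1)-entry table for each (q, k) pair."""
    # offset q - k ranges over [-(k_size - 1), q_size - 1]; shift so it starts at 0
    return [[q - k + (k_size - 1) for k in range(k_size)] for q in range(q_size)]
```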
def add_decomposed_relative_positions(attn, queries, rel_pos_h, rel_pos_w, q_size, k_size):
"""
Calculate decomposed Relative Positional Embeddings as introduced in
[MViT2](https://github.com/facebookresearch/mvit/blob/19786631e330df9f3622e5402b4a419a263a2c80/mvit/models/attention.py).
Args:
at... |
Calculate decomposed Relative Positional Embeddings as introduced in
[MViT2](https://github.com/facebookresearch/mvit/blob/19786631e330df9f3622e5402b4a419a263a2c80/mvit/models/attention.py).
Args:
attn (`torch.Tensor`):
Attention map.
queries (`torch.Tensor`):
Query... | add_decomposed_relative_positions | python | huggingface/transformers | src/transformers/models/vitdet/modeling_vitdet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitdet/modeling_vitdet.py | Apache-2.0 |
def __init__(self, config, input_size=None):
"""
Args:
config (`VitDetConfig`):
Model configuration.
input_size (`Tuple[int]`, *optional*):
Input resolution, only required in case relative position embeddings are added.
"""
super().... |
Args:
config (`VitDetConfig`):
Model configuration.
input_size (`Tuple[int]`, *optional*):
Input resolution, only required in case relative position embeddings are added.
| __init__ | python | huggingface/transformers | src/transformers/models/vitdet/modeling_vitdet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitdet/modeling_vitdet.py | Apache-2.0 |
def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
"""
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
h... |
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate pape... | drop_path | python | huggingface/transformers | src/transformers/models/vitdet/modeling_vitdet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitdet/modeling_vitdet.py | Apache-2.0 |
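Stochastic depth keeps each sample's residual branch with probability `1 - drop_prob` and rescales survivors by `1 / keep_prob` so the expected activation is unchanged. A minimal sketch on plain Python lists standing in for a batch of feature vectors:

```python
import random

def drop_path(batch, drop_prob=0.0, training=False, seed=None):
    """Per-sample stochastic depth: zero out whole samples, rescale the rest."""
    if drop_prob == 0.0 or not training:
        return batch
    keep_prob = 1.0 - drop_prob
    rng = random.Random(seed)
    out = []
    for sample in batch:  # each sample is a list of values
        keep = rng.random() < keep_prob
        out.append([v / keep_prob if keep else 0.0 for v in sample])
    return out
```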
def __init__(self, config, in_channels, out_channels, bottleneck_channels):
"""
Args:
config (`VitDetConfig`):
Model configuration.
in_channels (`int`):
Number of input channels.
out_channels (`int`):
Number of output ch... |
Args:
config (`VitDetConfig`):
Model configuration.
in_channels (`int`):
Number of input channels.
out_channels (`int`):
Number of output channels.
bottleneck_channels (`int`):
Number of output chann... | __init__ | python | huggingface/transformers | src/transformers/models/vitdet/modeling_vitdet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitdet/modeling_vitdet.py | Apache-2.0 |
def window_partition(hidden_state, window_size):
"""
Partition into non-overlapping windows with padding if needed.
Args:
hidden_state (`torch.Tensor`):
Input tokens with [batch_size, height, width, num_channels].
window_size (`int`):
Window size.
Returns:
... |
Partition into non-overlapping windows with padding if needed.
Args:
hidden_state (`torch.Tensor`):
Input tokens with [batch_size, height, width, num_channels].
window_size (`int`):
Window size.
Returns:
`tuple(torch.FloatTensor)` comprising various element... | window_partition | python | huggingface/transformers | src/transformers/models/vitdet/modeling_vitdet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitdet/modeling_vitdet.py | Apache-2.0 |
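`window_partition` above and `window_unpartition` below are inverses: tile a padded grid into fixed-size windows, then stitch the windows back and crop the padding. A pure-Python sketch on nested lists (the real code operates on 4-D tensors):

```python
def window_partition(grid, window_size):
    """Split an H x W grid (list of lists) into window_size x window_size
    tiles, zero-padding on the bottom/right if needed."""
    h, w = len(grid), len(grid[0])
    pad_h = (window_size - h % window_size) % window_size
    pad_w = (window_size - w % window_size) % window_size
    padded = [row + [0] * pad_w for row in grid]
    padded += [[0] * (w + pad_w) for _ in range(pad_h)]
    windows = []
    for i in range(0, h + pad_h, window_size):
        for j in range(0, w + pad_w, window_size):
            windows.append([row[j:j + window_size] for row in padded[i:i + window_size]])
    return windows, (h + pad_h, w + pad_w)

def window_unpartition(windows, window_size, pad_hw, hw):
    """Inverse of window_partition: stitch tiles back and crop the padding."""
    ph, pw = pad_hw
    grid = [[0] * pw for _ in range(ph)]
    idx = 0
    for i in range(0, ph, window_size):
        for j in range(0, pw, window_size):
            win = windows[idx]
            idx += 1
            for di in range(window_size):
                for dj in range(window_size):
                    grid[i + di][j + dj] = win[di][dj]
    return [row[: hw[1]] for row in grid[: hw[0]]]
```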
def window_unpartition(windows, window_size, pad_height_width, height_width):
"""
Window unpartition into original sequences, removing padding.
Args:
windows (`torch.Tensor`):
Input tokens with [batch_size * num_windows, window_size, window_size, num_channels].
window_size (`... |
Window unpartition into original sequences, removing padding.
Args:
windows (`torch.Tensor`):
Input tokens with [batch_size * num_windows, window_size, window_size, num_channels].
window_size (`int`):
Window size.
pad_height_width (`Tuple[int]`):
... | window_unpartition | python | huggingface/transformers | src/transformers/models/vitdet/modeling_vitdet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitdet/modeling_vitdet.py | Apache-2.0 |
def caffe2_msra_fill(module: nn.Module) -> None:
"""
Initialize `module.weight` using the "MSRAFill" implemented in Caffe2. Also initializes `module.bias` to 0.
Source: https://detectron2.readthedocs.io/en/latest/_modules/fvcore/nn/weight_init.html.
Args:
module (torch.nn.Module): module to in... |
Initialize `module.weight` using the "MSRAFill" implemented in Caffe2. Also initializes `module.bias` to 0.
Source: https://detectron2.readthedocs.io/en/latest/_modules/fvcore/nn/weight_init.html.
Args:
module (torch.nn.Module): module to initialize.
| caffe2_msra_fill | python | huggingface/transformers | src/transformers/models/vitdet/modeling_vitdet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitdet/modeling_vitdet.py | Apache-2.0 |
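Per the linked fvcore source, "MSRAFill" corresponds to He initialization in fan-out mode with ReLU gain, i.e. weights drawn from a normal distribution with std sqrt(2 / fan_out). A sketch of the std computation for a conv weight of shape `(out_channels, in_channels, kh, kw)`:

```python
def msra_std(shape, mode="fan_out"):
    """He-init std for a conv weight shape (out_c, in_c, kh, kw).
    Caffe2's MSRAFill matches mode='fan_out' with ReLU gain sqrt(2)."""
    out_c, in_c, kh, kw = shape
    fan = (out_c if mode == "fan_out" else in_c) * kh * kw
    return (2.0 / fan) ** 0.5
```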
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, BaseModelOutput]:
... |
Examples:
```python
>>> from transformers import VitDetConfig, VitDetModel
>>> import torch
>>> config = VitDetConfig()
>>> model = VitDetModel(config)
>>> pixel_values = torch.randn(1, 3, 224, 224)
>>> with torch.no_grad():
... outputs = ... | forward | python | huggingface/transformers | src/transformers/models/vitdet/modeling_vitdet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitdet/modeling_vitdet.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.Tensor,
output_hidden_states: Optional[bool] = None,
output_attentions: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> BackboneOutput:
r"""
Examples:
```python
>>> from transformers impor... |
Examples:
```python
>>> from transformers import VitDetConfig, VitDetBackbone
>>> import torch
>>> config = VitDetConfig()
>>> model = VitDetBackbone(config)
>>> pixel_values = torch.randn(1, 3, 224, 224)
>>> with torch.no_grad():
... outp... | forward | python | huggingface/transformers | src/transformers/models/vitdet/modeling_vitdet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitdet/modeling_vitdet.py | Apache-2.0 |
def to_dict(self):
"""
Serializes this instance to a Python dictionary. Overrides the default [`~PretrainedConfig.to_dict`].
Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
"""
output = copy.deepcopy(self.__dict__)
... |
Serializes this instance to a Python dictionary. Overrides the default [`~PretrainedConfig.to_dict`].
Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
| to_dict | python | huggingface/transformers | src/transformers/models/vitmatte/configuration_vitmatte.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitmatte/configuration_vitmatte.py | Apache-2.0 |
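The override is needed because `VitMatteConfig` nests a backbone config object that must itself be expanded into a plain dictionary. A sketch of the recursive pattern with made-up config classes:

```python
import copy

class SubConfig:
    """Stand-in for a nested backbone config."""
    def __init__(self, depth=12):
        self.depth = depth

    def to_dict(self):
        return copy.deepcopy(self.__dict__)

class Config:
    """Stand-in for a composite config holding a backbone config."""
    def __init__(self, backbone_config=None, hidden_size=384):
        self.backbone_config = backbone_config or SubConfig()
        self.hidden_size = hidden_size

    def to_dict(self):
        output = copy.deepcopy(self.__dict__)
        # nested config objects are not serializable; expand them recursively
        output["backbone_config"] = self.backbone_config.to_dict()
        return output
```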
def pad_image(
self,
image: np.ndarray,
size_divisibility: int = 32,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
) -> np.ndarray:
"""
Args:
image (`np.ndarray`):
... |
Args:
image (`np.ndarray`):
Image to pad.
size_divisibility (`int`, *optional*, defaults to 32):
The width and height of the image will be padded to be divisible by this number.
data_format (`ChannelDimension` or `str`, *optional*, defaults to... | pad_image | python | huggingface/transformers | src/transformers/models/vitmatte/image_processing_vitmatte.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitmatte/image_processing_vitmatte.py | Apache-2.0 |
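Padding to a multiple of `size_divisibility` is plain modular arithmetic, and only the bottom and right edges are padded so existing pixel coordinates stay valid. A sketch of the pad computation:

```python
def pad_amounts(height, width, size_divisibility=32):
    """Extra rows/cols needed so both dims are divisible by size_divisibility."""
    pad_h = (size_divisibility - height % size_divisibility) % size_divisibility
    pad_w = (size_divisibility - width % size_divisibility) % size_divisibility
    return pad_h, pad_w
```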
def preprocess(
self,
images: ImageInput,
trimaps: ImageInput,
do_rescale: Optional[bool] = None,
rescale_factor: Optional[float] = None,
do_normalize: Optional[bool] = None,
image_mean: Optional[Union[float, List[float]]] = None,
image_std: Optional[Union... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
passing in images with pixel values between 0 and 1, set `do_rescale=False`.
... | preprocess | python | huggingface/transformers | src/transformers/models/vitmatte/image_processing_vitmatte.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitmatte/image_processing_vitmatte.py | Apache-2.0 |
def preprocess(
self,
images: list["torch.Tensor"],
trimaps: list["torch.Tensor"],
**kwargs: Unpack[VitMatteFastImageProcessorKwargs],
) -> BatchFeature:
r"""
trimaps (`list[torch.Tensor]`):
The trimaps to preprocess.
"""
validate_kwargs(ca... |
trimaps (`list[torch.Tensor]`):
The trimaps to preprocess.
| preprocess | python | huggingface/transformers | src/transformers/models/vitmatte/image_processing_vitmatte_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitmatte/image_processing_vitmatte_fast.py | Apache-2.0 |
def _prepare_input_trimaps(
self, trimaps: ImageInput, device: Optional["torch.device"] = None
) -> list["torch.Tensor"]:
"""
Prepare input trimaps for processing; this cannot yet deal with nested lists.
Args:
trimaps (`ImageInput`):
The input trimaps to b... |
Prepare input trimaps for processing; this cannot yet deal with nested lists.
Args:
trimaps (`ImageInput`):
The input trimaps to be processed; should not be nested.
device (`torch.device`, *optional*, defaults to `self.device`):
The device to process th... | _prepare_input_trimaps | python | huggingface/transformers | src/transformers/models/vitmatte/image_processing_vitmatte_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitmatte/image_processing_vitmatte_fast.py | Apache-2.0 |
def _pad_image(
self,
images: "torch.tensor",
size_divisibility: int = 32,
) -> "torch.tensor":
"""
Pads an image or a batch of images with a constant value so that width and height are divisible by size_divisibility
Args:
image (`torch.Tensor`):
Image ... |
Pads an image or a batch of images with a constant value so that width and height are divisible by size_divisibility
Args:
image (`torch.Tensor`):
Image to pad.
size_divisibility (`int`, *optional*, defaults to 32):
The width and height of the image will be pa... | _pad_image | python | huggingface/transformers | src/transformers/models/vitmatte/image_processing_vitmatte_fast.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitmatte/image_processing_vitmatte_fast.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
labels: Optional[torch.Tensor] = None,
return_dict: Optional[bool] = None,
):
r"""
labels (`torch.Lon... |
labels (`torch.LongTensor` of shape `(batch_size, height, width)`, *optional*):
Ground truth image matting for computing the loss.
Examples:
```python
>>> from transformers import VitMatteImageProcessor, VitMatteForImageMatting
>>> import torch
>>> from PIL... | forward | python | huggingface/transformers | src/transformers/models/vitmatte/modeling_vitmatte.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitmatte/modeling_vitmatte.py | Apache-2.0 |
def convert_old_keys_to_new_keys(state_dict_keys: Optional[dict] = None):
"""
This function should be applied only once, on the concatenated keys to efficiently rename using
the key mappings.
"""
output_dict = {}
if state_dict_keys is not None:
old_text = "\n".join(state_dict_keys)
... |
This function should be applied only once, on the concatenated keys to efficiently rename using
the key mappings.
| convert_old_keys_to_new_keys | python | huggingface/transformers | src/transformers/models/vitpose/convert_vitpose_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose/convert_vitpose_to_hf.py | Apache-2.0 |
def box_to_center_and_scale(
box: Union[Tuple, List, np.ndarray],
image_width: int,
image_height: int,
normalize_factor: float = 200.0,
padding_factor: float = 1.25,
):
"""
Encodes a bounding box in COCO format into (center, scale).
Args:
box (`Tuple`, `List`, or `np.ndarray`):
... |
Encodes a bounding box in COCO format into (center, scale).
Args:
box (`Tuple`, `List`, or `np.ndarray`):
Bounding box in COCO format (top_left_x, top_left_y, width, height).
image_width (`int`):
Image width.
image_height (`int`):
Image height.
... | box_to_center_and_scale | python | huggingface/transformers | src/transformers/models/vitpose/image_processing_vitpose.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose/image_processing_vitpose.py | Apache-2.0 |
def coco_to_pascal_voc(bboxes: np.ndarray) -> np.ndarray:
"""
Converts bounding boxes from the COCO format to the Pascal VOC format.
In other words, converts from (top_left_x, top_left_y, width, height) format
to (top_left_x, top_left_y, bottom_right_x, bottom_right_y).
Args:
bboxes (`np.n... |
Converts bounding boxes from the COCO format to the Pascal VOC format.
In other words, converts from (top_left_x, top_left_y, width, height) format
to (top_left_x, top_left_y, bottom_right_x, bottom_right_y).
Args:
bboxes (`np.ndarray` of shape `(batch_size, 4)):
Bounding boxes in... | coco_to_pascal_voc | python | huggingface/transformers | src/transformers/models/vitpose/image_processing_vitpose.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose/image_processing_vitpose.py | Apache-2.0 |
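The conversion itself is one line of arithmetic per box. A pure-Python sketch on lists (the real implementation is vectorized over a NumPy array):

```python
def coco_to_pascal_voc(bboxes):
    """(top_left_x, top_left_y, width, height) -> (x1, y1, x2, y2) per box."""
    return [[x, y, x + w, y + h] for x, y, w, h in bboxes]
```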
def get_keypoint_predictions(heatmaps: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
"""Get keypoint predictions from score maps.
Args:
heatmaps (`np.ndarray` of shape `(batch_size, num_keypoints, height, width)`):
Model predicted heatmaps.
Returns:
tuple: A tuple containing ag... | Get keypoint predictions from score maps.
Args:
heatmaps (`np.ndarray` of shape `(batch_size, num_keypoints, height, width)`):
Model predicted heatmaps.
Returns:
tuple: A tuple containing aggregated results.
- coords (`np.ndarray` of shape `(batch_size, num_keypoints, 2)`)... | get_keypoint_predictions | python | huggingface/transformers | src/transformers/models/vitpose/image_processing_vitpose.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose/image_processing_vitpose.py | Apache-2.0 |
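Extracting keypoints from heatmaps reduces to an argmax per keypoint channel: the flat index of the maximum score is unraveled into `(x, y)`. A pure-Python sketch on nested lists (the real code also marks coordinates invalid when the max score is non-positive):

```python
def keypoint_predictions(heatmaps):
    """heatmaps: list (one per keypoint) of H x W score grids.
    Returns ([x, y] of the max per keypoint, the max scores)."""
    coords, scores = [], []
    for hm in heatmaps:
        # scan every cell, keeping the best (score, x, y) triple
        best = max((v, x, y) for y, row in enumerate(hm) for x, v in enumerate(row))
        scores.append(best[0])
        coords.append([best[1], best[2]])
    return coords, scores
```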
def post_dark_unbiased_data_processing(coords: np.ndarray, batch_heatmaps: np.ndarray, kernel: int = 3) -> np.ndarray:
"""DARK post-pocessing. Implemented by unbiased_data_processing.
Paper references:
- Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimati... | DARK post-processing. Implemented by unbiased_data_processing.
Paper references:
- Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).
- Zhang et al. Distribution-Aware Coordinate Representation for Human Pose Estimation (CVPR 2020).
Ar... | post_dark_unbiased_data_processing | python | huggingface/transformers | src/transformers/models/vitpose/image_processing_vitpose.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose/image_processing_vitpose.py | Apache-2.0 |
def transform_preds(coords: np.ndarray, center: np.ndarray, scale: np.ndarray, output_size: np.ndarray) -> np.ndarray:
"""Get final keypoint predictions from heatmaps and apply scaling and
translation to map them back to the image.
Note:
num_keypoints: K
Args:
coords (`np.ndarray` of s... | Get final keypoint predictions from heatmaps and apply scaling and
translation to map them back to the image.
Note:
num_keypoints: K
Args:
coords (`np.ndarray` of shape `(num_keypoints, ndims)`):
* If ndims=2, coords are predicted keypoint locations.
* If ndims=4, c... | transform_preds | python | huggingface/transformers | src/transformers/models/vitpose/image_processing_vitpose.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose/image_processing_vitpose.py | Apache-2.0 |
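Following the mmpose convention assumed throughout this file, `scale` is expressed in units of `normalize_factor` (200) pixels, so mapping a heatmap coordinate back to the image multiplies by `scale * 200 / output_size` and then recenters on the box. A sketch for a single coordinate under that convention:

```python
def transform_pred(coord, center, scale, output_size, normalize_factor=200.0):
    """Map one heatmap coordinate back to image space.
    `scale` is in units of normalize_factor pixels (COCO convention)."""
    sx = scale[0] * normalize_factor / output_size[0]
    sy = scale[1] * normalize_factor / output_size[1]
    x = coord[0] * sx + center[0] - scale[0] * normalize_factor * 0.5
    y = coord[1] * sy + center[1] - scale[1] * normalize_factor * 0.5
    return [x, y]
```

As a sanity check, the center of the heatmap should map back to the box center.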
def get_warp_matrix(theta: float, size_input: np.ndarray, size_dst: np.ndarray, size_target: np.ndarray):
"""
Calculate the transformation matrix under the constraint of unbiased. Paper ref: Huang et al. The Devil is in the
Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020)... |
Calculate the transformation matrix under the constraint of unbiased. Paper ref: Huang et al. The Devil is in the
Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).
Source: https://github.com/open-mmlab/mmpose/blob/master/mmpose/core/post_processing/post_transforms.py
... | get_warp_matrix | python | huggingface/transformers | src/transformers/models/vitpose/image_processing_vitpose.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose/image_processing_vitpose.py | Apache-2.0 |
def affine_transform(
self,
image: np.array,
center: Tuple[float],
scale: Tuple[float],
rotation: float,
size: Dict[str, int],
data_format: Optional[ChannelDimension] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
) -> np.arr... |
Apply an affine transformation to an image.
Args:
image (`np.array`):
Image to transform.
center (`Tuple[float]`):
Center of the bounding box (x, y).
scale (`Tuple[float]`):
Scale of the bounding box with respect to he... | affine_transform | python | huggingface/transformers | src/transformers/models/vitpose/image_processing_vitpose.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose/image_processing_vitpose.py | Apache-2.0 |
def preprocess(
self,
images: ImageInput,
boxes: Union[List[List[float]], np.ndarray],
do_affine_transform: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
do_rescale: Optional[bool] = None,
rescale_factor: Optional[float] = None,
do_normaliz... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
passing in images with pixel values between 0 and 1, set `do_rescale=False`.
... | preprocess | python | huggingface/transformers | src/transformers/models/vitpose/image_processing_vitpose.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose/image_processing_vitpose.py | Apache-2.0 |
def keypoints_from_heatmaps(
self,
heatmaps: np.ndarray,
center: np.ndarray,
scale: np.ndarray,
kernel: int = 11,
):
"""
Get final keypoint predictions from heatmaps and transform them back to
the image.
Args:
heatmaps (`np.ndarray... |
Get final keypoint predictions from heatmaps and transform them back to
the image.
Args:
heatmaps (`np.ndarray` of shape `(batch_size, num_keypoints, height, width])`):
Model predicted heatmaps.
center (`np.ndarray` of shape `(batch_size, 2)`):
... | keypoints_from_heatmaps | python | huggingface/transformers | src/transformers/models/vitpose/image_processing_vitpose.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose/image_processing_vitpose.py | Apache-2.0 |
def post_process_pose_estimation(
self,
outputs: "VitPoseEstimatorOutput",
boxes: Union[List[List[List[float]]], np.ndarray],
kernel_size: int = 11,
threshold: Optional[float] = None,
target_sizes: Union[TensorType, List[Tuple]] = None,
):
"""
Transfor... |
Transform the heatmaps into keypoint predictions and transform them back to the image.
Args:
outputs (`VitPoseEstimatorOutput`):
VitPoseForPoseEstimation model outputs.
boxes (`List[List[List[float]]]` or `np.ndarray`):
List or array of bounding ... | post_process_pose_estimation | python | huggingface/transformers | src/transformers/models/vitpose/image_processing_vitpose.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose/image_processing_vitpose.py | Apache-2.0 |
def flip_back(output_flipped, flip_pairs, target_type="gaussian-heatmap"):
"""Flip the flipped heatmaps back to the original form.
Args:
output_flipped (`torch.tensor` of shape `(batch_size, num_keypoints, height, width)`):
The output heatmaps obtained from the flipped images.
flip_... | Flip the flipped heatmaps back to the original form.
Args:
output_flipped (`torch.tensor` of shape `(batch_size, num_keypoints, height, width)`):
The output heatmaps obtained from the flipped images.
flip_pairs (`torch.Tensor` of shape `(num_keypoints, 2)`):
Pairs of keypoin... | flip_back | python | huggingface/transformers | src/transformers/models/vitpose/modeling_vitpose.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose/modeling_vitpose.py | Apache-2.0 |
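Undoing a horizontal test-time flip has two parts: mirror each heatmap left-right, then swap the channels of mirrored keypoint pairs (e.g. left wrist and right wrist). A pure-Python sketch on nested lists:

```python
def flip_back(heatmaps, flip_pairs):
    """Undo a horizontal flip: mirror each H x W heatmap left-right,
    then swap the channels of each mirrored keypoint pair."""
    out = [[row[::-1] for row in hm] for hm in heatmaps]
    for a, b in flip_pairs:
        out[a], out[b] = out[b], out[a]
    return out
```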
def forward(
self,
pixel_values: torch.Tensor,
dataset_index: Optional[torch.Tensor] = None,
flip_pairs: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
... |
dataset_index (`torch.Tensor` of shape `(batch_size,)`):
Index to use in the Mixture-of-Experts (MoE) blocks of the backbone.
This corresponds to the dataset index used during training, e.g. for a single dataset, index 0 refers to the corresponding dataset. For the multiple datasets i... | forward | python | huggingface/transformers | src/transformers/models/vitpose/modeling_vitpose.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose/modeling_vitpose.py | Apache-2.0 |
def forward(
self,
pixel_values: torch.Tensor,
dataset_index: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
):... |
dataset_index (`torch.Tensor` of shape `(batch_size,)`):
Index to use in the Mixture-of-Experts (MoE) blocks of the backbone.
This corresponds to the dataset index used during training, e.g. index 0 refers to COCO.
Examples:
```python
>>> from transformers imp... | forward | python | huggingface/transformers | src/transformers/models/vitpose_backbone/modeling_vitpose_backbone.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vitpose_backbone/modeling_vitpose_backbone.py | Apache-2.0 |
def forward(
self, spectrogram: torch.FloatTensor, global_conditioning: Optional[torch.FloatTensor] = None
) -> torch.FloatTensor:
r"""
Converts a spectrogram into a speech waveform.
Args:
spectrogram (`torch.FloatTensor` of shape `(batch_size, config.spectrogram_bins, s... |
Converts a spectrogram into a speech waveform.
Args:
spectrogram (`torch.FloatTensor` of shape `(batch_size, config.spectrogram_bins, sequence_length)`):
Tensor containing the spectrograms.
global_conditioning (`torch.FloatTensor` of shape `(batch_size, config.s... | forward | python | huggingface/transformers | src/transformers/models/vits/modeling_vits.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vits/modeling_vits.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
speaker_id: Optional[int] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,... |
speaker_id (`int`, *optional*):
Which speaker embedding to use. Only used for multispeaker models.
labels (`torch.FloatTensor` of shape `(batch_size, config.spectrogram_bins, sequence_length)`, *optional*):
Float values of target spectrogram. Timesteps set to `-100.0` are ignore... | forward | python | huggingface/transformers | src/transformers/models/vits/modeling_vits.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vits/modeling_vits.py | Apache-2.0 |
def normalize_text(self, input_string):
"""Lowercase the input string, respecting any special token ids that may be part or entirely upper-cased."""
all_vocabulary = list(self.encoder.keys()) + list(self.added_tokens_encoder.keys())
filtered_text = ""
i = 0
        while i < len(input_s... | Lowercase the input string, respecting any special token ids that may be partly or entirely upper-cased. | normalize_text | python | huggingface/transformers | src/transformers/models/vits/tokenization_vits.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vits/tokenization_vits.py | Apache-2.0 |
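The casing-preserving lowercase pass that `normalize_text` performs can be sketched in plain Python. This is a simplified illustration, not the tokenizer's actual implementation: the `special_tokens` argument stands in for the vocabulary lookup the real method builds from `self.encoder` and `self.added_tokens_encoder`.

```python
def normalize_text(text, special_tokens):
    """Lowercase `text`, leaving any exact (cased) special-token match intact."""
    out = ""
    i = 0
    while i < len(text):
        for token in special_tokens:
            if text[i : i + len(token)] == token:
                out += token          # keep the special token's casing
                i += len(token)
                break
        else:
            out += text[i].lower()    # ordinary character: lowercase it
            i += 1
    return out

print(normalize_text("Say <PAD> Loudly", ["<PAD>"]))  # say <PAD> loudly
```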
def prepare_for_tokenization(
self, text: str, is_split_into_words: bool = False, normalize: Optional[bool] = None, **kwargs
) -> Tuple[str, Dict[str, Any]]:
"""
Performs any necessary transformations before tokenization.
This method should pop the arguments from kwargs and return t... |
Performs any necessary transformations before tokenization.
This method should pop the arguments from kwargs and return the remaining `kwargs` as well. We test the
`kwargs` at the end of the encoding process to be sure all the arguments have been used.
Args:
text (`str`):
... | prepare_for_tokenization | python | huggingface/transformers | src/transformers/models/vits/tokenization_vits.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vits/tokenization_vits.py | Apache-2.0 |
def _tokenize(self, text: str) -> List[str]:
"""Tokenize a string by inserting the `<pad>` token at the boundary between adjacent characters."""
tokens = list(text)
if self.add_blank:
interspersed = [self._convert_id_to_token(0)] * (len(tokens) * 2 + 1)
interspersed[1::2... | Tokenize a string by inserting the `<pad>` token at the boundary between adjacent characters. | _tokenize | python | huggingface/transformers | src/transformers/models/vits/tokenization_vits.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vits/tokenization_vits.py | Apache-2.0 |
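The slice assignment in `_tokenize` above is compact; seen in isolation, it is a sketch of the following interspersing trick (assuming the blank token is `<pad>`, i.e. whatever token id 0 maps to):

```python
def intersperse_blank(tokens, blank="<pad>"):
    """Insert `blank` at every character boundary, including both ends.

    For n input tokens the result has 2*n + 1 entries: blanks sit at the
    even indices and the original tokens at the odd indices.
    """
    interspersed = [blank] * (len(tokens) * 2 + 1)
    interspersed[1::2] = tokens
    return interspersed

print(intersperse_blank(list("hi")))  # ['<pad>', 'h', '<pad>', 'i', '<pad>']
```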
def get_2d_sincos_pos_embed(embed_dim, grid_size, add_cls_token=False):
"""
Create 2D sin/cos positional embeddings.
Args:
embed_dim (`int`):
Embedding dimension.
grid_size (`int`):
The grid height and width.
add_cls_token (`bool`, *optional*, defaults to `Fa... |
Create 2D sin/cos positional embeddings.
Args:
embed_dim (`int`):
Embedding dimension.
grid_size (`int`):
The grid height and width.
add_cls_token (`bool`, *optional*, defaults to `False`):
Whether or not to add a classification (CLS) token.
Ret... | get_2d_sincos_pos_embed | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_tf_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_tf_vit_mae.py | Apache-2.0 |
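The sin/cos construction behind `get_2d_sincos_pos_embed` is framework-independent; the TensorFlow and PyTorch variants in this file compute the same values. A NumPy sketch of the 1-D building block and the 2-D grid version (helper names are mine; the `w`-first meshgrid ordering follows the reference code):

```python
import numpy as np

def sincos_1d(embed_dim, pos):
    """pos: (M,) positions -> (M, embed_dim) sin/cos embedding."""
    if embed_dim % 2 != 0:
        raise ValueError("embed_dim must be even")
    omega = np.arange(embed_dim // 2, dtype=float)
    omega /= embed_dim / 2.0
    omega = 1.0 / 10000**omega                          # (D/2,) frequencies
    out = np.einsum("m,d->md", pos.reshape(-1), omega)  # (M, D/2) outer product
    return np.concatenate([np.sin(out), np.cos(out)], axis=1)

def sincos_2d(embed_dim, grid_size, add_cls_token=False):
    """(grid_size**2, embed_dim) embedding; optional zero row for [CLS]."""
    grid_h = np.arange(grid_size, dtype=float)
    grid_w = np.arange(grid_size, dtype=float)
    grid = np.meshgrid(grid_w, grid_h)                  # here w goes first
    emb_h = sincos_1d(embed_dim // 2, np.asarray(grid[0]).reshape(-1))
    emb_w = sincos_1d(embed_dim // 2, np.asarray(grid[1]).reshape(-1))
    pos_embed = np.concatenate([emb_h, emb_w], axis=1)
    if add_cls_token:
        pos_embed = np.concatenate([np.zeros((1, embed_dim)), pos_embed], axis=0)
    return pos_embed

print(sincos_2d(8, 4, add_cls_token=True).shape)  # (17, 8)
```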
def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
"""
    embed_dim: output dimension for each position. pos: a list of positions to be encoded, of size (M,). out: (M, D).
"""
if embed_dim % 2 != 0:
raise ValueError("embed_dim must be even")
omega = tf.range(embed_dim // 2, dtype="float32")
... |
    embed_dim: output dimension for each position. pos: a list of positions to be encoded, of size (M,). out: (M, D).
| get_1d_sincos_pos_embed_from_grid | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_tf_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_tf_vit_mae.py | Apache-2.0 |
def interpolate_pos_encoding(self, embeddings, height, width) -> tf.Tensor:
"""
        This method interpolates the pre-trained position encodings so that the model can be used on higher
resolution images.
Source:
https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac95... |
        This method interpolates the pre-trained position encodings so that the model can be used on higher
resolution images.
Source:
https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362a/vision_transformer.py#L174
| interpolate_pos_encoding | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_tf_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_tf_vit_mae.py | Apache-2.0 |
def random_masking(self, sequence: tf.Tensor, noise: tf.Tensor | None = None):
"""
Perform per-sample random masking by per-sample shuffling. Per-sample shuffling is done by argsort random
noise.
Args:
sequence (`tf.Tensor` of shape `(batch_size, sequence_length, dim)`)
... |
Perform per-sample random masking by per-sample shuffling. Per-sample shuffling is done by argsort random
noise.
Args:
sequence (`tf.Tensor` of shape `(batch_size, sequence_length, dim)`)
noise (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) which is
... | random_masking | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_tf_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_tf_vit_mae.py | Apache-2.0 |
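The argsort-of-noise shuffle that `random_masking` describes can be sketched with NumPy. This is a simplified, framework-free illustration; the `mask_ratio` parameter is mine, and the returned mask follows the docstring's convention of 0 = keep, 1 = remove:

```python
import numpy as np

def random_masking(sequence, mask_ratio=0.75, noise=None):
    """Per-sample random masking via argsort of uniform noise.

    sequence: (batch, seq_len, dim). Returns the kept patches, a binary
    mask (0 = keep, 1 = remove) in the original order, and ids_restore.
    """
    batch, seq_len, _ = sequence.shape
    len_keep = int(seq_len * (1 - mask_ratio))
    if noise is None:
        noise = np.random.rand(batch, seq_len)
    ids_shuffle = np.argsort(noise, axis=1)        # small noise -> kept
    ids_restore = np.argsort(ids_shuffle, axis=1)  # undoes the shuffle
    ids_keep = ids_shuffle[:, :len_keep]
    kept = np.take_along_axis(sequence, ids_keep[:, :, None], axis=1)
    mask = np.ones((batch, seq_len))
    mask[:, :len_keep] = 0                         # 0 for the kept positions
    mask = np.take_along_axis(mask, ids_restore, axis=1)
    return kept, mask, ids_restore

x = np.arange(2 * 8 * 4, dtype=float).reshape(2, 8, 4)
kept, mask, _ = random_masking(x, mask_ratio=0.5)
print(kept.shape, int(mask.sum()))  # (2, 4, 4) 8
```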
def call(
self,
pixel_values: TFModelInputType | None = None,
noise: Optional[tf.Tensor] = None,
head_mask: np.ndarray | tf.Tensor | None = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = N... |
Returns:
Examples:
```python
>>> from transformers import AutoImageProcessor, TFViTMAEModel
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream... | call | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_tf_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_tf_vit_mae.py | Apache-2.0 |
def interpolate_pos_encoding(self, embeddings) -> tf.Tensor:
"""
        This method is a modified version of the interpolation function for the ViT-MAE decoder, one that
        interpolates the pre-trained decoder position encodings so that the model can be used on higher
resolution image... |
        This method is a modified version of the interpolation function for the ViT-MAE decoder, one that
        interpolates the pre-trained decoder position encodings so that the model can be used on higher
resolution images.
Source:
https://github.com/facebookresearch/dino/blo... | interpolate_pos_encoding | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_tf_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_tf_vit_mae.py | Apache-2.0 |
def patchify(self, pixel_values, interpolate_pos_encoding: bool = False):
"""
Args:
pixel_values (`tf.Tensor` of shape `(batch_size, height, width, num_channels)` or `(batch_size, num_channels, height, width)`):
Pixel values.
interpolate_pos_encoding (`bool`, defa... |
Args:
pixel_values (`tf.Tensor` of shape `(batch_size, height, width, num_channels)` or `(batch_size, num_channels, height, width)`):
Pixel values.
            interpolate_pos_encoding (`bool`, *optional*, defaults to `False`):
                Interpolation flag passed in from the forward pass.
... | patchify | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_tf_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_tf_vit_mae.py | Apache-2.0 |
def unpatchify(self, patchified_pixel_values, original_image_size: Optional[Tuple[int, int]] = None):
"""
Args:
            patchified_pixel_values (`tf.Tensor` of shape `(batch_size, num_patches, patch_size**2 * num_channels)`):
Patchified pixel values.
original_image_size (`... |
Args:
            patchified_pixel_values (`tf.Tensor` of shape `(batch_size, num_patches, patch_size**2 * num_channels)`):
Patchified pixel values.
original_image_size (`Tuple[int, int]`, *optional*):
Original image size.
Returns:
`tf.Tensor` of ... | unpatchify | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_tf_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_tf_vit_mae.py | Apache-2.0 |
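`patchify` and `unpatchify` are an inverse reshape/transpose pair. A NumPy sketch for square patch grids in the channels-last layout the TF docstrings use (the real methods additionally handle `interpolate_pos_encoding` and a non-square `original_image_size`):

```python
import numpy as np

def patchify(images, patch_size):
    """(batch, height, width, channels) -> (batch, num_patches, patch_size**2 * channels)."""
    b, h, w, c = images.shape
    nh, nw = h // patch_size, w // patch_size
    x = images.reshape(b, nh, patch_size, nw, patch_size, c)
    x = x.transpose(0, 1, 3, 2, 4, 5)              # (b, nh, nw, p, p, c)
    return x.reshape(b, nh * nw, patch_size**2 * c)

def unpatchify(patches, patch_size, image_size):
    """Inverse of patchify."""
    b = patches.shape[0]
    nh, nw = image_size[0] // patch_size, image_size[1] // patch_size
    x = patches.reshape(b, nh, nw, patch_size, patch_size, -1)
    x = x.transpose(0, 1, 3, 2, 4, 5)              # swap the block axes back
    return x.reshape(b, image_size[0], image_size[1], -1)

img = np.random.rand(1, 8, 8, 3)
round_trip = unpatchify(patchify(img, 4), 4, (8, 8))
print(np.allclose(round_trip, img))  # True
```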
def forward_loss(self, pixel_values, pred, mask, interpolate_pos_encoding: bool = False):
"""
Args:
pixel_values (`tf.Tensor` of shape `(batch_size, height, width, num_channels)`):
Pixel values.
pred (`tf.Tensor` of shape `(batch_size, num_patches, patch_size**2 *... |
Args:
pixel_values (`tf.Tensor` of shape `(batch_size, height, width, num_channels)`):
Pixel values.
            pred (`tf.Tensor` of shape `(batch_size, num_patches, patch_size**2 * num_channels)`):
Predicted pixel values.
mask (`tf.Tensor` of shape `(bat... | forward_loss | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_tf_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_tf_vit_mae.py | Apache-2.0 |
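The loss that `forward_loss` computes is a mean squared error averaged only over the removed patches. A NumPy sketch of that reduction (the real method works on patchified targets and can also normalize target pixels per patch when `norm_pix_loss` is configured):

```python
import numpy as np

def masked_mse_loss(pred, target, mask):
    """MSE averaged over masked (removed) patches only.

    pred, target: (batch, num_patches, patch_dim); mask: (batch, num_patches)
    with 1 for masked patches and 0 for visible ones.
    """
    per_patch = ((pred - target) ** 2).mean(axis=-1)   # per-patch MSE
    return float((per_patch * mask).sum() / mask.sum())

pred = np.zeros((1, 4, 6))
target = np.ones((1, 4, 6))
mask = np.array([[1.0, 1.0, 0.0, 0.0]])
print(masked_mse_loss(pred, target, mask))  # 1.0
```

Changing a visible patch's target leaves the loss untouched, which is the point of masking the reduction.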
def call(
self,
pixel_values: TFModelInputType | None = None,
noise: Optional[tf.Tensor] = None,
head_mask: np.ndarray | tf.Tensor | None = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = N... |
Returns:
Examples:
```python
>>> from transformers import AutoImageProcessor, TFViTMAEForPreTraining
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(ur... | call | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_tf_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_tf_vit_mae.py | Apache-2.0 |
def get_2d_sincos_pos_embed(embed_dim, grid_size, add_cls_token=False):
"""
Create 2D sin/cos positional embeddings.
Args:
embed_dim (`int`):
Embedding dimension.
grid_size (`int`):
The grid height and width.
add_cls_token (`bool`, *optional*, defaults to `Fa... |
Create 2D sin/cos positional embeddings.
Args:
embed_dim (`int`):
Embedding dimension.
grid_size (`int`):
The grid height and width.
add_cls_token (`bool`, *optional*, defaults to `False`):
Whether or not to add a classification (CLS) token.
Ret... | get_2d_sincos_pos_embed | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_vit_mae.py | Apache-2.0 |
def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
"""
    embed_dim: output dimension for each position. pos: a list of positions to be encoded, of size (M,). out: (M, D).
"""
if embed_dim % 2 != 0:
raise ValueError("embed_dim must be even")
omega = np.arange(embed_dim // 2, dtype=float)
ome... |
    embed_dim: output dimension for each position. pos: a list of positions to be encoded, of size (M,). out: (M, D).
| get_1d_sincos_pos_embed_from_grid | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_vit_mae.py | Apache-2.0 |
def interpolate_pos_encoding(self, embeddings: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""
        This method interpolates the pre-trained position encodings so that the model can be used on higher resolution
images. This method is also adapted to support torch.jit tracing.
... |
        This method interpolates the pre-trained position encodings so that the model can be used on higher resolution
images. This method is also adapted to support torch.jit tracing.
Adapted from:
- https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362... | interpolate_pos_encoding | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_vit_mae.py | Apache-2.0 |
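The real `interpolate_pos_encoding` resamples a 2-D grid of patch position embeddings (bicubic interpolation in the reference implementations linked above). A deliberately simplified 1-D NumPy analogue conveys the idea: each embedding dimension is resampled independently onto a new set of positions. The function name and the linear (rather than bicubic) scheme are mine:

```python
import numpy as np

def resize_pos_embed_1d(pos_embed, new_len):
    """Linearly resample (old_len, dim) position embeddings to new_len rows.

    A 1-D stand-in for the 2-D bicubic interpolation used by ViT models;
    every embedding dimension is interpolated independently.
    """
    old_len, dim = pos_embed.shape
    old_x = np.linspace(0.0, 1.0, old_len)
    new_x = np.linspace(0.0, 1.0, new_len)
    return np.stack(
        [np.interp(new_x, old_x, pos_embed[:, d]) for d in range(dim)], axis=1
    )

pe = np.arange(4, dtype=float)[:, None] * np.ones((1, 2))  # (4, 2) toy embeddings
print(resize_pos_embed_1d(pe, 7).shape)  # (7, 2)
```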
def random_masking(self, sequence, noise=None):
"""
Perform per-sample random masking by per-sample shuffling. Per-sample shuffling is done by argsort random
noise.
Args:
sequence (`torch.LongTensor` of shape `(batch_size, sequence_length, dim)`)
noise (`torch.Fl... |
Perform per-sample random masking by per-sample shuffling. Per-sample shuffling is done by argsort random
noise.
Args:
sequence (`torch.LongTensor` of shape `(batch_size, sequence_length, dim)`)
noise (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optio... | random_masking | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_vit_mae.py | Apache-2.0 |