| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def __init__(self, indices, num_segments, batch_dims=0):
"""
Creates an index
Args:
indices (`torch.LongTensor`, same shape as a *values* Tensor to which the indices refer):
Tensor containing the indices.
num_segments (`torch.LongTensor`):
... |
Creates an index
Args:
indices (`torch.LongTensor`, same shape as a *values* Tensor to which the indices refer):
Tensor containing the indices.
num_segments (`torch.LongTensor`):
Scalar tensor, the number of segments. All elements in a batched se... | __init__ | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
def __init__(self, outer_index, inner_index):
"""
Combines indices i and j into pairs (i, j). The result is an index where each segment (i, j) is the
intersection of segments i and j. For example if the inputs represent table cells indexed by respectively rows
and columns the output will... |
Combines indices i and j into pairs (i, j). The result is an index where each segment (i, j) is the
intersection of segments i and j. For example if the inputs represent table cells indexed by respectively rows
and columns the output will be a table indexed by (row, column) pairs, i.e. by cell.... | __init__ | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
def project_outer(self, index):
"""Projects an index with the same index set onto the outer components."""
indices = torch.div(index.indices, self.inner_index.num_segments, rounding_mode="floor").type(torch.long)
return IndexMap(indices=indices, num_segments=self.outer_index.num_segments, batch_... | Projects an index with the same index set onto the outer components. | project_outer | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
def project_inner(self, index):
"""Projects an index with the same index set onto the inner components."""
return IndexMap(
indices=torch.fmod(index.indices, self.inner_index.num_segments)
.type(torch.float)
.floor()
.type(torch.long),
num_segm... | Projects an index with the same index set onto the inner components. | project_inner | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
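The two projections above reduce to floor division and modulo on the combined segment ids. A minimal sketch in plain Python (not the torch/TF implementations; `combine` is a hypothetical helper added for illustration): a pair (outer, inner) is encoded as `outer * num_inner + inner`, so floor division recovers the outer component (cf. `torch.div(..., rounding_mode="floor")`) and modulo recovers the inner one (cf. `torch.fmod`).

```python
def combine(outer, inner, num_inner):
    """Encode the pair (outer, inner) as a single segment id."""
    return outer * num_inner + inner

def project_outer(combined, num_inner):
    """Recover the outer component (floor division)."""
    return combined // num_inner

def project_inner(combined, num_inner):
    """Recover the inner component (modulo)."""
    return combined % num_inner

# A table with 3 columns: rows are the outer index, columns the inner index.
num_inner = 3
cell = combine(1, 2, num_inner)  # row 1, column 2
print(cell, project_outer(cell, num_inner), project_inner(cell, num_inner))
# 5 1 2
```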
def gather(values, index, name="segmented_gather"):
"""
Gathers from *values* using the index map. For each element in the domain of the index map this operation looks up
a value for that index in *values*. Two elements from the same segment always get assigned the same value.
Args:
values (`to... |
Gathers from *values* using the index map. For each element in the domain of the index map this operation looks up
a value for that index in *values*. Two elements from the same segment always get assigned the same value.
Args:
values (`torch.Tensor` of shape (B1, ..., Bn, num_segments, V1, ...)):... | gather | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
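The gather semantics can be sketched with plain list indexing (an illustration, not the batched torch implementation): every position looks up the value of its segment, so positions that share a segment id necessarily receive the same value.

```python
def gather(values, indices):
    # Each element of `indices` selects one entry of `values`;
    # equal segment ids therefore yield equal outputs.
    return [values[i] for i in indices]

segment_values = [10.0, 20.0, 30.0]   # one value per segment
token_segment = [0, 0, 1, 2, 2, 2]    # segment id of each token
print(gather(segment_values, token_segment))
# [10.0, 10.0, 20.0, 30.0, 30.0, 30.0]
```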
def flatten(index, name="segmented_flatten"):
"""
Flattens a batched index map (which is typically of shape batch_size, seq_length) to a 1d index map. This operation
relabels the segments to keep batch elements distinct. The k-th batch element will have indices shifted by
*num_segments* * (k - 1). The r... |
Flattens a batched index map (which is typically of shape batch_size, seq_length) to a 1d index map. This operation
relabels the segments to keep batch elements distinct. The k-th batch element will have indices shifted by
*num_segments* * (k - 1). The result is a tensor with *num_segments* multiplied by t... | flatten | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
def range_index_map(batch_shape, num_segments, name="range_index_map"):
"""
Constructs an index map equal to range(num_segments).
Args:
batch_shape (`torch.Size`):
Batch shape
num_segments (`int`):
Number of segments
name (`str`, *optional*, defaults to 'rang... |
Constructs an index map equal to range(num_segments).
Args:
batch_shape (`torch.Size`):
Batch shape
num_segments (`int`):
Number of segments
name (`str`, *optional*, defaults to 'range_index_map'):
Name for the operation. Currently not used
Retu... | range_index_map | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
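The construction above amounts to broadcasting `range(num_segments)` over the batch. A hedged plain-Python sketch (nested lists stand in for tensors):

```python
import math

def range_index_map(batch_shape, num_segments):
    """One copy of range(num_segments) per batch element."""
    batch_size = math.prod(batch_shape)  # total number of batch elements
    return [list(range(num_segments)) for _ in range(batch_size)]

print(range_index_map((2,), 3))
# [[0, 1, 2], [0, 1, 2]]
```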
def _segment_reduce(values, index, segment_reduce_fn, name):
"""
Applies a segment reduction segment-wise.
Args:
values (`torch.Tensor`):
Tensor with segment values.
index (`IndexMap`):
IndexMap.
segment_reduce_fn (`str`):
Name for the reduce oper... |
Applies a segment reduction segment-wise.
Args:
values (`torch.Tensor`):
Tensor with segment values.
index (`IndexMap`):
IndexMap.
segment_reduce_fn (`str`):
Name for the reduce operation. One of "sum", "mean", "max" or "min".
name (`str`):
... | _segment_reduce | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
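The segment reduction itself can be illustrated without tensors: bucket values by segment id, then apply the chosen reduction per bucket. This is a simplified sketch (the torch/TF versions handle batching and use each op's identity element for empty segments; here empty segments simply reduce to 0.0):

```python
def segment_reduce(values, indices, num_segments, reduce_fn="sum"):
    """Bucket `values` by segment id, then reduce each bucket."""
    buckets = [[] for _ in range(num_segments)]
    for value, segment in zip(values, indices):
        buckets[segment].append(value)
    ops = {
        "sum": lambda b: float(sum(b)),
        "mean": lambda b: sum(b) / len(b) if b else 0.0,
        "max": lambda b: max(b) if b else 0.0,
        "min": lambda b: min(b) if b else 0.0,
    }
    return [ops[reduce_fn](b) for b in buckets]

values = [1.0, 2.0, 3.0, 4.0]
indices = [0, 0, 1, 1]          # segment id per value; segment 2 stays empty
print(segment_reduce(values, indices, 3, "mean"))
# [1.5, 3.5, 0.0]
```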
def compute_column_logits(
sequence_output, column_output_weights, column_output_bias, cell_index, cell_mask, allow_empty_column_selection
):
"""
Computes the column logits.
Args:
sequence_output (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Also known... |
Computes the column logits.
Args:
sequence_output (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Also known as last_hidden_state. Sequence of hidden-states at the output of the last layer of the model.
column_output_weights (`torch.FloatTensor` of shap... | compute_column_logits | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
def _single_column_cell_selection_loss(token_logits, column_logits, labels, cell_index, col_index, cell_mask):
"""
Computes the loss for cell selection constrained to a single column. The loss is a hierarchical log-likelihood. The
model first predicts a column and then selects cells within that column (cond... |
Computes the loss for cell selection constrained to a single column. The loss is a hierarchical log-likelihood. The
model first predicts a column and then selects cells within that column (conditioned on the column). Cells outside
the selected column are never selected.
Args:
token_logits (`to... | _single_column_cell_selection_loss | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
def _calculate_aggregate_mask(answer, pooled_output, cell_selection_preference, labels, aggregation_classifier):
"""
Finds examples where the model should select cells with no aggregation.
Returns a mask that determines for which examples the model should select answers directly from the table, without
... |
Finds examples where the model should select cells with no aggregation.
Returns a mask that determines for which examples the model should select answers directly from the table, without
any aggregation function. If the answer is a piece of text the case is unambiguous as aggregation functions only
ap... | _calculate_aggregate_mask | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
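The decision described above can be caricatured as a simple rule. This sketch is heavily simplified and hypothetical (the real function thresholds the aggregation classifier's "no aggregation" probability against `cell_selection_preference` and inspects the cell labels; here that is collapsed into a boolean input): an example aggregates only when it has a scalar answer and the model does not already prefer plain cell selection.

```python
import math

def calculate_aggregate_mask(answers, prefers_cell_selection):
    """answers: scalar answer per example, NaN when the answer is text.
    prefers_cell_selection: whether the model favours picking cells directly.
    Returns 1.0 -> use an aggregation function, 0.0 -> select cells."""
    mask = []
    for ans, prefer_cells in zip(answers, prefers_cell_selection):
        has_scalar = not math.isnan(ans)
        mask.append(1.0 if has_scalar and not prefer_cells else 0.0)
    return mask

print(calculate_aggregate_mask([36.0, float("nan")], [False, False]))
# [1.0, 0.0]
```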
def _calculate_aggregation_loss_known(
logits_aggregation, aggregate_mask, aggregation_labels, use_answer_as_supervision, num_aggregation_labels
):
"""
Calculates aggregation loss when its type is known during training.
In the weakly supervised setting, the only known information is that for cell selec... |
Calculates aggregation loss when its type is known during training.
In the weakly supervised setting, the only known information is that for cell selection examples, "no aggregation"
should be predicted. For other examples (those that require aggregation), no loss is accumulated. In the setting
where ... | _calculate_aggregation_loss_known | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
def _calculate_aggregation_loss_unknown(logits_aggregation, aggregate_mask):
"""
Calculates aggregation loss in the case of answer supervision.
Args:
logits_aggregation (`torch.FloatTensor` of shape `(batch_size, num_aggregation_labels)`):
Logits per aggregation operation.
aggre... |
Calculates aggregation loss in the case of answer supervision.
Args:
logits_aggregation (`torch.FloatTensor` of shape `(batch_size, num_aggregation_labels)`):
Logits per aggregation operation.
aggregate_mask (`torch.FloatTensor` of shape `(batch_size, )`):
A mask set to... | _calculate_aggregation_loss_unknown | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
def _calculate_aggregation_loss(
logits_aggregation,
aggregate_mask,
aggregation_labels,
use_answer_as_supervision,
num_aggregation_labels,
aggregation_loss_weight,
):
"""
Calculates the aggregation loss per example.
Args:
logits_aggregation (`torch.FloatTensor` of shape `(b... |
Calculates the aggregation loss per example.
Args:
logits_aggregation (`torch.FloatTensor` of shape `(batch_size, num_aggregation_labels)`):
Logits per aggregation operation.
aggregate_mask (`torch.FloatTensor` of shape `(batch_size, )`):
A mask set to 1 for examples th... | _calculate_aggregation_loss | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
def _calculate_expected_result(
dist_per_cell, numeric_values, numeric_values_scale, input_mask_float, logits_aggregation, config
):
"""
Calculates the expected result given cell and aggregation probabilities.
Args:
dist_per_cell (`torch.distributions.Bernoulli`):
Cell selection dis... |
Calculates the expected result given cell and aggregation probabilities.
Args:
dist_per_cell (`torch.distributions.Bernoulli`):
Cell selection distribution for each cell.
numeric_values (`torch.FloatTensor` of shape `(batch_size, seq_length)`):
Numeric values of every t... | _calculate_expected_result | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
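The expected result is a soft mixture over aggregation operators: cell probabilities weight every numeric value, giving expected COUNT, SUM, and AVERAGE, which are then mixed by the aggregation probabilities. A minimal sketch (plain Python; the real function also rescales by `numeric_values_scale`, masks non-table tokens, and supports a Gumbel-softmax variant):

```python
def expected_result(cell_probs, values, aggregation_probs):
    """aggregation_probs: probabilities for (SUM, AVERAGE, COUNT)."""
    count = sum(cell_probs)                              # expected COUNT
    total = sum(p * v for p, v in zip(cell_probs, values))  # expected SUM
    average = total / max(count, 1e-9)  # epsilon guards empty selections
    p_sum, p_avg, p_count = aggregation_probs
    return p_sum * total + p_avg * average + p_count * count

print(expected_result([1.0, 1.0, 0.0], [2.0, 3.0, 7.0], (1.0, 0.0, 0.0)))
# 5.0
```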
def _calculate_regression_loss(
answer,
aggregate_mask,
dist_per_cell,
numeric_values,
numeric_values_scale,
input_mask_float,
logits_aggregation,
config,
):
"""
Calculates the regression loss per example.
Args:
answer (`torch.FloatTensor` of shape `(batch_size,)`):
... |
Calculates the regression loss per example.
Args:
answer (`torch.FloatTensor` of shape `(batch_size,)`):
Answer for every example in the batch. NaN if there is no scalar answer.
aggregate_mask (`torch.FloatTensor` of shape `(batch_size,)`):
A mask set to 1 for examples ... | _calculate_regression_loss | python | huggingface/transformers | src/transformers/models/tapas/modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tapas.py | Apache-2.0 |
def call(
self,
input_ids: Optional[tf.Tensor] = None,
position_ids: Optional[tf.Tensor] = None,
token_type_ids: Optional[tf.Tensor] = None,
inputs_embeds: Optional[tf.Tensor] = None,
training: bool = False,
) -> tf.Tensor:
"""
Applies embedding based ... |
Applies embedding based on inputs tensor.
Returns:
final_embeddings (`tf.Tensor`): output embedding tensor.
| call | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None = None,
... |
Returns:
Examples:
```python
>>> from transformers import AutoTokenizer, TapasModel
>>> import pandas as pd
>>> tokenizer = AutoTokenizer.from_pretrained("google/tapas-base")
>>> model = TapasModel.from_pretrained("google/tapas-base")
>>> data = {
... | call | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None = None,
... |
labels (`tf.Tensor` or `np.ndarray` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked),... | call | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def call(self, sequence_output, cell_index, cell_mask, allow_empty_column_selection) -> tf.Tensor:
"""
Computes the column logits.
Args:
sequence_output (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
Also known as last_hidden_state. Sequence of ... |
Computes the column logits.
Args:
sequence_output (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
Also known as last_hidden_state. Sequence of hidden-states at the output of the last layer of the
model.
cell_index (`ProductIn... | call | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None = None,
... |
table_mask (`tf.Tensor` of shape `(batch_size, seq_length)`, *optional*):
Mask for the table. Indicates which tokens belong to the table (1). Question tokens, table headers and
padding are 0.
labels (`tf.Tensor` of shape `(batch_size, seq_length)`, *optional*):
Label... | call | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def call(
self,
input_ids: TFModelInputType | None = None,
attention_mask: np.ndarray | tf.Tensor | None = None,
token_type_ids: np.ndarray | tf.Tensor | None = None,
position_ids: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Tensor | None = None,
... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`confi... | call | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def __init__(self, indices, num_segments, batch_dims=0):
"""
Creates an index.
Args:
indices: <int32> Tensor of indices, same shape as `values`.
num_segments: <int32> Scalar tensor, the number of segments. All elements
in a batched segmented tensor must have the ... |
Creates an index.
Args:
indices: <int32> Tensor of indices, same shape as `values`.
num_segments: <int32> Scalar tensor, the number of segments. All elements
in a batched segmented tensor must have the same number of segments (although many segments can be empty).
... | __init__ | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def __init__(self, outer_index, inner_index):
"""
Combines indices i and j into pairs (i, j). The result is an index where each segment (i, j) is the
intersection of segments i and j. For example if the inputs represent table cells indexed by respectively rows
and columns the output will... |
Combines indices i and j into pairs (i, j). The result is an index where each segment (i, j) is the
intersection of segments i and j. For example if the inputs represent table cells indexed by respectively rows
and columns the output will be a table indexed by (row, column) pairs, i.e. by cell.... | __init__ | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def project_outer(self, index):
"""Projects an index with the same index set onto the outer components."""
return IndexMap(
indices=tf.math.floordiv(index.indices, self.inner_index.num_segments),
num_segments=self.outer_index.num_segments,
batch_dims=index.batch_dims,... | Projects an index with the same index set onto the outer components. | project_outer | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def project_inner(self, index):
"""Projects an index with the same index set onto the inner components."""
return IndexMap(
indices=tf.math.floormod(index.indices, self.inner_index.num_segments),
num_segments=self.inner_index.num_segments,
batch_dims=index.batch_dims,... | Projects an index with the same index set onto the inner components. | project_inner | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def flatten(index, name="segmented_flatten"):
"""
Flattens a batched index map to a 1d index map. This operation relabels the segments to keep batch elements
distinct. The k-th batch element will have indices shifted by `num_segments` * (k - 1). The result is a tensor with
`num_segments` multiplied by t... |
Flattens a batched index map to a 1d index map. This operation relabels the segments to keep batch elements
distinct. The k-th batch element will have indices shifted by `num_segments` * (k - 1). The result is a tensor with
`num_segments` multiplied by the number of elements in the batch.
Args:
... | flatten | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def range_index_map(batch_shape, num_segments, name="range_index_map"):
"""
Constructs an index map equal to range(num_segments).
Args:
batch_shape (`tf.Tensor`):
Batch shape
num_segments (`int`):
Number of segments
name (`str`, *optional*, defaults to 'range... |
Constructs an index map equal to range(num_segments).
Args:
batch_shape (`tf.Tensor`):
Batch shape
num_segments (`int`):
Number of segments
name (`str`, *optional*, defaults to 'range_index_map'):
Name for the operation. Currently not used
Retur... | range_index_map | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def _segment_reduce(values, index, segment_reduce_fn, name):
"""
Applies a segment reduction segment-wise.
Args:
values (`tf.Tensor`):
Tensor with segment values.
index (`IndexMap`):
IndexMap.
segment_reduce_fn (`str`):
Name for the reduce operati... |
Applies a segment reduction segment-wise.
Args:
values (`tf.Tensor`):
Tensor with segment values.
index (`IndexMap`):
IndexMap.
segment_reduce_fn (`str`):
Name for the reduce operation. One of "sum", "mean", "max" or "min".
name (`str`):
... | _segment_reduce | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def _single_column_cell_selection_loss(token_logits, column_logits, labels, cell_index, col_index, cell_mask):
"""
Computes the loss for cell selection constrained to a single column. The loss is a hierarchical log-likelihood. The
model first predicts a column and then selects cells within that column (cond... |
Computes the loss for cell selection constrained to a single column. The loss is a hierarchical log-likelihood. The
model first predicts a column and then selects cells within that column (conditioned on the column). Cells outside
the selected column are never selected.
Args:
token_logits (`tf... | _single_column_cell_selection_loss | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def _calculate_aggregate_mask(answer, pooled_output, cell_selection_preference, labels, aggregation_classifier):
"""
Finds examples where the model should select cells with no aggregation.
Returns a mask that determines for which examples the model should select answers directly from the table, without
... |
Finds examples where the model should select cells with no aggregation.
Returns a mask that determines for which examples the model should select answers directly from the table, without
any aggregation function. If the answer is a piece of text the case is unambiguous as aggregation functions only
ap... | _calculate_aggregate_mask | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def _calculate_aggregation_loss_known(
logits_aggregation, aggregate_mask, aggregation_labels, use_answer_as_supervision, num_aggregation_labels
):
"""
Calculates aggregation loss when its type is known during training.
In the weakly supervised setting, the only known information is that for cell selec... |
Calculates aggregation loss when its type is known during training.
In the weakly supervised setting, the only known information is that for cell selection examples, "no aggregation"
should be predicted. For other examples (those that require aggregation), no loss is accumulated. In the setting
where ... | _calculate_aggregation_loss_known | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def _calculate_aggregation_loss_unknown(logits_aggregation, aggregate_mask):
"""
Calculates aggregation loss in the case of answer supervision.
Args:
logits_aggregation (`tf.Tensor` of shape `(batch_size, num_aggregation_labels)`):
Logits per aggregation operation.
aggregate_mas... |
Calculates aggregation loss in the case of answer supervision.
Args:
logits_aggregation (`tf.Tensor` of shape `(batch_size, num_aggregation_labels)`):
Logits per aggregation operation.
aggregate_mask (`tf.Tensor` of shape `(batch_size, )`):
A mask set to 1 for examples ... | _calculate_aggregation_loss_unknown | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def _calculate_aggregation_loss(
logits_aggregation,
aggregate_mask,
aggregation_labels,
use_answer_as_supervision,
num_aggregation_labels,
aggregation_loss_weight,
):
"""
Calculates the aggregation loss per example.
Args:
logits_aggregation (`tf.Tensor` of shape `(batch_siz... |
Calculates the aggregation loss per example.
Args:
logits_aggregation (`tf.Tensor` of shape `(batch_size, num_aggregation_labels)`):
Logits per aggregation operation.
aggregate_mask (`tf.Tensor` of shape `(batch_size, )`):
A mask set to 1 for examples that should use ag... | _calculate_aggregation_loss | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def _calculate_expected_result(
dist_per_cell, numeric_values, numeric_values_scale, input_mask_float, logits_aggregation, config
):
"""
Calculates the expected result given cell and aggregation probabilities.
Args:
dist_per_cell (`tfp.distributions.Bernoulli`):
Cell selection distr... |
Calculates the expected result given cell and aggregation probabilities.
Args:
dist_per_cell (`tfp.distributions.Bernoulli`):
Cell selection distribution for each cell.
numeric_values (`tf.Tensor` of shape `(batch_size, seq_length)`):
Numeric values of every token. Nan ... | _calculate_expected_result | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def _calculate_regression_loss(
answer,
aggregate_mask,
dist_per_cell,
numeric_values,
numeric_values_scale,
input_mask_float,
logits_aggregation,
config,
):
"""
Calculates the regression loss per example.
Args:
answer (`tf.Tensor` of shape `(batch_size,)`):
... |
Calculates the regression loss per example.
Args:
answer (`tf.Tensor` of shape `(batch_size,)`):
Answer for every example in the batch. NaN if there is no scalar answer.
aggregate_mask (`tf.Tensor` of shape `(batch_size,)`):
A mask set to 1 for examples that should use ... | _calculate_regression_loss | python | huggingface/transformers | src/transformers/models/tapas/modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/modeling_tf_tapas.py | Apache-2.0 |
def whitespace_tokenize(text):
"""Runs basic whitespace cleaning and splitting on a piece of text."""
text = text.strip()
if not text:
return []
tokens = text.split()
return tokens | Runs basic whitespace cleaning and splitting on a piece of text. | whitespace_tokenize | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
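This helper is the one complete function in the dump, so a quick usage check is easy to reproduce (`str.split()` with no argument splits on any whitespace run, including tabs):

```python
def whitespace_tokenize(text):
    """Runs basic whitespace cleaning and splitting on a piece of text."""
    text = text.strip()
    if not text:
        return []
    return text.split()

print(whitespace_tokenize("  What is\tthe total?  "))
# ['What', 'is', 'the', 'total?']
```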
def create_segment_token_type_ids_from_sequences(
self, query_ids: List[int], table_values: List[TableValue]
) -> List[int]:
"""
Creates the segment token type IDs according to the query token IDs and a list of table values.
Args:
query_ids (`List[int]`): list of token I... |
Creates the segment token type IDs according to the query token IDs and a list of table values.
Args:
query_ids (`List[int]`): list of token IDs corresponding to the query.
table_values (`List[TableValue]`): list of table values, which are named tuples containing the
... | create_segment_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def create_column_token_type_ids_from_sequences(
self, query_ids: List[int], table_values: List[TableValue]
) -> List[int]:
"""
Creates the column token type IDs according to the query token IDs and a list of table values.
Args:
query_ids (`List[int]`): list of token IDs... |
Creates the column token type IDs according to the query token IDs and a list of table values.
Args:
query_ids (`List[int]`): list of token IDs corresponding to the query.
table_values (`List[TableValue]`): list of table values, which are named tuples containing the
... | create_column_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def create_row_token_type_ids_from_sequences(
self, query_ids: List[int], table_values: List[TableValue]
) -> List[int]:
"""
Creates the row token type IDs according to the query token IDs and a list of table values.
Args:
query_ids (`List[int]`): list of token IDs corre... |
Creates the row token type IDs according to the query token IDs and a list of table values.
Args:
query_ids (`List[int]`): list of token IDs corresponding to the query.
table_values (`List[TableValue]`): list of table values, which are named tuples containing the
t... | create_row_token_type_ids_from_sequences | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a question and flattened table for question answering or sequence classification tasks
by concatenating and adding special tokens.
... |
Build model inputs from a question and flattened table for question answering or sequence classification tasks
by concatenating and adding special tokens.
Args:
token_ids_0 (`List[int]`): The ids of the question.
token_ids_1 (`List[int]`, *optional*): The ids of the fla... | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
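The concatenation pattern can be sketched as follows. This is an assumption-laden illustration, not the tokenizer's exact code: it assumes BERT-style special tokens with the usual ids 101 (`[CLS]`) and 102 (`[SEP]`), and a `[CLS] question [SEP] table` layout.

```python
def build_inputs_with_special_tokens(token_ids_0, token_ids_1=None,
                                     cls_id=101, sep_id=102):
    # cls_id/sep_id default to the customary BERT [CLS]/[SEP] ids (assumed).
    if token_ids_1 is None:
        return [cls_id] + token_ids_0 + [sep_id]
    # Question tokens, separator, then the flattened table tokens.
    return [cls_id] + token_ids_0 + [sep_id] + token_ids_1

print(build_inputs_with_special_tokens([7, 8], [9, 10, 11]))
# [101, 7, 8, 102, 9, 10, 11]
```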
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of question IDs.
token_ids_1 (`List[int]`, *o... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def __call__(
self,
table: "pd.DataFrame",
queries: Optional[
Union[
TextInput,
PreTokenizedInput,
EncodedInput,
List[TextInput],
List[PreTokenizedInput],
List[EncodedInput],
]... |
Main method to tokenize and prepare for the model one or several sequence(s) related to a table.
Args:
table (`pd.DataFrame`):
Table containing tabular data. Note that all cell values must be text. Use *.astype(str)* on a Pandas
dataframe to convert it to st... | __call__ | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def batch_encode_plus(
self,
table: "pd.DataFrame",
queries: Optional[
Union[
List[TextInput],
List[PreTokenizedInput],
List[EncodedInput],
]
] = None,
answer_coordinates: Optional[List[List[Tuple]]] = None,
... |
Prepare a table and a list of strings for the model.
<Tip warning={true}>
This method is deprecated, `__call__` should be used instead.
</Tip>
Args:
table (`pd.DataFrame`):
Table containing tabular data. Note that all cell values must be text. Use... | batch_encode_plus | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_question_tokens(self, query):
"""Tokenizes the query, taking into account the max and min question length."""
query_tokens = self.tokenize(query)
if self.max_question_length is not None and len(query_tokens) > self.max_question_length:
logger.warning("Skipping query as its ... | Tokenizes the query, taking into account the max and min question length. | _get_question_tokens | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def encode(
self,
table: "pd.DataFrame",
query: Optional[
Union[
TextInput,
PreTokenizedInput,
EncodedInput,
]
] = None,
add_special_tokens: bool = True,
padding: Union[bool, str, PaddingStrategy] = F... |
Prepare a table and a string for the model. This method does not return token type IDs, attention masks, etc.
        which are necessary for the model to work correctly. Use this method if you want to build your processing on
your own, otherwise refer to `__call__`.
Args:
table (`... | encode | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def encode_plus(
self,
table: "pd.DataFrame",
query: Optional[
Union[
TextInput,
PreTokenizedInput,
EncodedInput,
]
] = None,
answer_coordinates: Optional[List[Tuple]] = None,
answer_text: Optional[Li... |
Prepare a table and a string for the model.
Args:
table (`pd.DataFrame`):
Table containing tabular data. Note that all cell values must be text. Use *.astype(str)* on a Pandas
dataframe to convert it to string.
query (`str` or `List[str]`):
... | encode_plus | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def prepare_for_model(
self,
raw_table: "pd.DataFrame",
raw_query: Union[
TextInput,
PreTokenizedInput,
EncodedInput,
],
tokenized_table: Optional[TokenizedTable] = None,
query_tokens: Optional[TokenizedTable] = None,
answer_coo... |
        Prepares a sequence of input ids so that it can be used by the model. It adds special tokens, truncates
sequences if overflowing while taking into account the special tokens.
Args:
raw_table (`pd.DataFrame`):
The original table before any transformation (like tokeniz... | prepare_for_model | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_truncated_table_rows(
self,
query_tokens: List[str],
tokenized_table: TokenizedTable,
num_rows: int,
num_columns: int,
max_length: int,
truncation_strategy: Union[str, TapasTruncationStrategy],
) -> Tuple[int, int]:
"""
Truncates a seq... |
Truncates a sequence pair in-place following the strategy.
Args:
query_tokens (`List[str]`):
List of strings corresponding to the tokenized query.
tokenized_table (`TokenizedTable`):
Tokenized table
num_rows (`int`):
T... | _get_truncated_table_rows | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _tokenize_table(
self,
table=None,
):
"""
Tokenizes column headers and cell texts of a table.
Args:
            table (`pd.DataFrame`):
                Table.
        Returns:
            `TokenizedTable`: TokenizedTable object.
"""
tokenized_rows = []
tokenized_row ... |
Tokenizes column headers and cell texts of a table.
Args:
            table (`pd.DataFrame`):
                Table.
        Returns:
            `TokenizedTable`: TokenizedTable object.
| _tokenize_table | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_table_values(self, table, num_columns, num_rows, num_tokens) -> Generator[TableValue, None, None]:
"""Iterates over partial table and returns token, column and row indexes."""
for tc in table.selected_tokens:
# First row is header row.
if tc.row_index >= num_rows + 1:
... | Iterates over partial table and returns token, column and row indexes. | _get_table_values | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_table_boundaries(self, table):
"""Return maximal number of rows, columns and tokens."""
max_num_tokens = 0
max_num_columns = 0
max_num_rows = 0
for tc in table.selected_tokens:
max_num_columns = max(max_num_columns, tc.column_index + 1)
max_num_ro... | Return maximal number of rows, columns and tokens. | _get_table_boundaries | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_max_num_tokens(self, question_tokens, tokenized_table, num_columns, num_rows, max_length):
"""Computes max number of tokens that can be squeezed into the budget."""
token_budget = self._get_token_budget(question_tokens, max_length)
_, _, max_num_tokens = self._get_table_boundaries(token... | Computes max number of tokens that can be squeezed into the budget. | _get_max_num_tokens | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_numeric_column_ranks(self, column_ids, row_ids, table):
"""Returns column ranks for all numeric columns."""
ranks = [0] * len(column_ids)
inv_ranks = [0] * len(column_ids)
# original code from tf_example_utils.py of the original implementation
if table is not None:
... | Returns column ranks for all numeric columns. | _get_numeric_column_ranks | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_numeric_sort_key_fn(self, table_numeric_values, value):
"""
Returns the sort key function for comparing value to table values. The function returned will be a suitable
        input for the `key` param of `sort()`. See number_annotation_utils._get_numeric_sort_key_fn for details.
Args:
... |
Returns the sort key function for comparing value to table values. The function returned will be a suitable
        input for the `key` param of `sort()`. See number_annotation_utils._get_numeric_sort_key_fn for details.
Args:
table_numeric_values: Numeric values of a column
val... | _get_numeric_sort_key_fn | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_numeric_relations(self, question, column_ids, row_ids, table):
"""
Returns numeric relations embeddings
Args:
question: Question object.
column_ids: Maps word piece position to column id.
row_ids: Maps word piece position to row id.
table... |
Returns numeric relations embeddings
Args:
question: Question object.
column_ids: Maps word piece position to column id.
row_ids: Maps word piece position to row id.
table: The table containing the numeric cell values.
| _get_numeric_relations | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_numeric_values(self, table, column_ids, row_ids):
"""Returns numeric values for computation of answer loss."""
numeric_values = [float("nan")] * len(column_ids)
if table is not None:
num_rows = table.shape[0]
num_columns = table.shape[1]
for col_in... | Returns numeric values for computation of answer loss. | _get_numeric_values | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_numeric_values_scale(self, table, column_ids, row_ids):
"""Returns a scale to each token to down weigh the value of long words."""
numeric_values_scale = [1.0] * len(column_ids)
if table is None:
return numeric_values_scale
num_rows = table.shape[0]
        num_co... | Returns a scale for each token to down-weight the value of long words. | _get_numeric_values_scale | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_all_answer_ids_from_coordinates(
self,
column_ids,
row_ids,
answers_list,
):
"""Maps lists of answer coordinates to token indexes."""
answer_ids = [0] * len(column_ids)
found_answers = set()
all_answers = set()
for answers in answers_l... | Maps lists of answer coordinates to token indexes. | _get_all_answer_ids_from_coordinates | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_all_answer_ids(self, column_ids, row_ids, answer_coordinates):
"""
Maps answer coordinates of a question to token indexes.
In the SQA format (TSV), the coordinates are given as (row, column) tuples. Here, we first swap them to
(column, row) format before calling _get_all_answer... |
Maps answer coordinates of a question to token indexes.
In the SQA format (TSV), the coordinates are given as (row, column) tuples. Here, we first swap them to
(column, row) format before calling _get_all_answer_ids_from_coordinates.
| _get_all_answer_ids | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _find_tokens(self, text, segment):
"""Return start index of segment in text or None."""
logging.info(f"text: {text} {segment}")
for index in range(1 + len(text) - len(segment)):
for seg_index, seg_token in enumerate(segment):
if text[index + seg_index].piece != se... | Return start index of segment in text or None. | _find_tokens | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
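The linear scan described above can be sketched standalone; here tokens are plain strings rather than the tokenizer's token objects with a `.piece` attribute (an illustrative simplification):

```python
def find_segment_start(text_tokens, segment_tokens):
    """Return the start index of segment_tokens inside text_tokens, or None."""
    n, m = len(text_tokens), len(segment_tokens)
    for index in range(1 + n - m):
        # compare the window starting at `index` against the segment
        if text_tokens[index : index + m] == segment_tokens:
            return index
    return None

print(find_segment_start(["the", "big", "cat"], ["big", "cat"]))  # 1
print(find_segment_start(["the", "big", "cat"], ["dog"]))         # None
```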
def _find_answer_coordinates_from_answer_text(
self,
tokenized_table,
answer_text,
):
"""Returns all occurrences of answer_text in the table."""
logging.info(f"answer text: {answer_text}")
for row_index, row in enumerate(tokenized_table.rows):
if row_index... | Returns all occurrences of answer_text in the table. | _find_answer_coordinates_from_answer_text | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _find_answer_ids_from_answer_texts(
self,
column_ids,
row_ids,
tokenized_table,
answer_texts,
):
"""Maps question with answer texts to the first matching token indexes."""
answer_ids = [0] * len(column_ids)
for answer_text in answer_texts:
... | Maps question with answer texts to the first matching token indexes. | _find_answer_ids_from_answer_texts | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_answer_ids(self, column_ids, row_ids, answer_coordinates):
"""Maps answer coordinates of a question to token indexes."""
answer_ids, missing_count = self._get_all_answer_ids(column_ids, row_ids, answer_coordinates)
if missing_count:
raise ValueError("Couldn't find all answe... | Maps answer coordinates of a question to token indexes. | _get_answer_ids | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _pad(
self,
encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding],
max_length: Optional[int] = None,
padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
pad_to_multiple_of: Optional[int] = None,
padding_side: Optional[str] = None,
return_at... |
Pad encoded inputs (on left/right and up to predefined length or max length in the batch)
Args:
encoded_inputs:
Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`).
max_length: maximum length of the returned list and opt... | _pad | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_mean_cell_probs(self, probabilities, segment_ids, row_ids, column_ids):
"""Computes average probability per cell, aggregating over tokens."""
coords_to_probs = collections.defaultdict(list)
for i, prob in self._get_cell_token_probs(probabilities, segment_ids, row_ids, column_ids):
... | Computes average probability per cell, aggregating over tokens. | _get_mean_cell_probs | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
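A self-contained sketch of the per-cell averaging, with hypothetical flat lists standing in for the tokenizer's encoded tensors; row/column id 0 marks question and padding tokens, mirroring the convention above (the real method also filters by segment ids):

```python
import collections

def mean_cell_probs(token_probs, row_ids, column_ids):
    """Average per-token probabilities over the tokens of each (column, row) cell."""
    coords_to_probs = collections.defaultdict(list)
    for prob, row_id, col_id in zip(token_probs, row_ids, column_ids):
        if col_id > 0 and row_id > 0:  # skip question/padding tokens
            coords_to_probs[(col_id - 1, row_id - 1)].append(prob)
    return {coords: sum(p) / len(p) for coords, p in coords_to_probs.items()}

# Two tokens fall in cell (0, 0), one in cell (0, 1):
print(mean_cell_probs([0.25, 0.75, 0.125], [1, 1, 2], [1, 1, 1]))
# {(0, 0): 0.5, (0, 1): 0.125}
```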
def convert_logits_to_predictions(self, data, logits, logits_agg=None, cell_classification_threshold=0.5):
"""
Converts logits of [`TapasForQuestionAnswering`] to actual predicted answer coordinates and optional
aggregation indices.
The original implementation, on which this function is... |
Converts logits of [`TapasForQuestionAnswering`] to actual predicted answer coordinates and optional
aggregation indices.
The original implementation, on which this function is based, can be found
[here](https://github.com/google-research/tapas/blob/4908213eb4df7aa988573350278b44c4dbe3... | convert_logits_to_predictions | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def tokenize(self, text, never_split=None):
"""
Basic Tokenization of a piece of text. For sub-word tokenization, see WordPieceTokenizer.
Args:
never_split (`List[str]`, *optional*)
Kept for backward compatibility purposes. Now implemented directly at the base class ... |
Basic Tokenization of a piece of text. For sub-word tokenization, see WordPieceTokenizer.
Args:
never_split (`List[str]`, *optional*)
Kept for backward compatibility purposes. Now implemented directly at the base class level (see
[`PreTrainedTokenizer.tokeni... | tokenize | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _run_strip_accents(self, text):
"""Strips accents from a piece of text."""
text = unicodedata.normalize("NFD", text)
output = []
for char in text:
cat = unicodedata.category(char)
if cat == "Mn":
continue
output.append(char)
... | Strips accents from a piece of text. | _run_strip_accents | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _run_split_on_punc(self, text, never_split=None):
"""Splits punctuation on a piece of text."""
if not self.do_split_on_punc or (never_split is not None and text in never_split):
return [text]
chars = list(text)
i = 0
start_new_word = True
output = []
... | Splits punctuation on a piece of text. | _run_split_on_punc | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _tokenize_chinese_chars(self, text):
"""Adds whitespace around any CJK character."""
output = []
for char in text:
cp = ord(char)
if self._is_chinese_char(cp):
output.append(" ")
output.append(char)
output.append(" ")
... | Adds whitespace around any CJK character. | _tokenize_chinese_chars | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _is_chinese_char(self, cp):
"""Checks whether CP is the codepoint of a CJK character."""
# This defines a "chinese character" as anything in the CJK Unicode block:
# https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)
#
# Note that the CJK Unicode block is ... | Checks whether CP is the codepoint of a CJK character. | _is_chinese_char | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
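The codepoint test above reduces to a chain of range checks over the CJK ideograph Unicode blocks; a standalone version:

```python
def is_cjk_codepoint(cp):
    """True if cp lies in one of the CJK ideograph blocks used by BERT-style tokenizers.
    Note this deliberately excludes Japanese kana and Hangul."""
    return (
        0x4E00 <= cp <= 0x9FFF       # CJK Unified Ideographs
        or 0x3400 <= cp <= 0x4DBF    # Extension A
        or 0x20000 <= cp <= 0x2A6DF  # Extension B
        or 0x2A700 <= cp <= 0x2B73F  # Extension C
        or 0x2B740 <= cp <= 0x2B81F  # Extension D
        or 0x2B820 <= cp <= 0x2CEAF  # Extension E
        or 0xF900 <= cp <= 0xFAFF    # Compatibility Ideographs
        or 0x2F800 <= cp <= 0x2FA1F  # Compatibility Supplement
    )

print(is_cjk_codepoint(ord("中")))  # True
print(is_cjk_codepoint(ord("a")))  # False
```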
def _clean_text(self, text):
"""Performs invalid character removal and whitespace cleanup on text."""
output = []
for char in text:
cp = ord(char)
if cp == 0 or cp == 0xFFFD or _is_control(char):
continue
if _is_whitespace(char):
... | Performs invalid character removal and whitespace cleanup on text. | _clean_text | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def tokenize(self, text):
"""
Tokenizes a piece of text into its word pieces. This uses a greedy longest-match-first algorithm to perform
tokenization using the given vocabulary.
For example, `input = "unaffable"` will return as output `["un", "##aff", "##able"]`.
Args:
... |
Tokenizes a piece of text into its word pieces. This uses a greedy longest-match-first algorithm to perform
tokenization using the given vocabulary.
For example, `input = "unaffable"` will return as output `["un", "##aff", "##able"]`.
Args:
text: A single token or whitespa... | tokenize | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
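The greedy longest-match-first loop can be sketched against a toy vocabulary (the real tokenizer takes `unk_token` and `max_input_chars_per_word` from its config; the names here are illustrative):

```python
def wordpiece_tokenize(word, vocab, unk_token="[UNK]", max_input_chars_per_word=100):
    """Greedy longest-match-first sub-word tokenization over a fixed vocabulary."""
    if len(word) > max_input_chars_per_word:
        return [unk_token]
    sub_tokens, start = [], 0
    while start < len(word):
        end = len(word)
        cur_substr = None
        while start < end:
            substr = word[start:end]
            if start > 0:
                substr = "##" + substr  # continuation pieces carry the '##' prefix
            if substr in vocab:
                cur_substr = substr  # longest vocabulary match wins
                break
            end -= 1
        if cur_substr is None:
            return [unk_token]  # no prefix of the remainder is in the vocabulary
        sub_tokens.append(cur_substr)
        start = end
    return sub_tokens

vocab = {"un", "##aff", "##able"}
print(wordpiece_tokenize("unaffable", vocab))  # ['un', '##aff', '##able']
```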
def _process_date_pattern(dp):
"""Compute a regex for each date pattern to use as a prefilter."""
pattern, mask = dp
regex = pattern
regex = regex.replace(".", re.escape("."))
regex = regex.replace("-", re.escape("-"))
regex = regex.replace(" ", r"\s+")
for field, field_regex in _FIELD_TO_RE... | Compute a regex for each date pattern to use as a prefilter. | _process_date_pattern | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
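The replace chain visible above can be completed into a runnable prefilter builder; the field regexes below are illustrative stand-ins for the real `_FIELD_TO_REGEX` table, which covers more strftime fields:

```python
import re

# Illustrative field regexes; assumed, not the library's exact table.
FIELD_TO_REGEX = {
    "%Y": r"\d{4}",
    "%y": r"\d{2}",
    "%m": r"[01]?\d",
    "%d": r"[0-3]?\d",
}

def compile_date_prefilter(pattern):
    """Turn a strftime-style date pattern into a cheap anchored regex prefilter."""
    regex = pattern
    regex = regex.replace(".", re.escape("."))
    regex = regex.replace("-", re.escape("-"))
    regex = regex.replace(" ", r"\s+")
    for field, field_regex in FIELD_TO_REGEX.items():
        regex = regex.replace(field, field_regex)
    return re.compile("^" + regex + "$")

prefilter = compile_date_prefilter("%Y-%m-%d")
print(bool(prefilter.match("2021-07-04")))  # True
print(bool(prefilter.match("not a date")))  # False
```

The point of the prefilter is to reject most candidate strings cheaply before attempting the much slower `datetime.strptime` parse.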
def _get_numeric_value_from_date(date, mask):
"""Converts date (datetime Python object) to a NumericValue object with a Date object value."""
if date.year < _MIN_YEAR or date.year > _MAX_YEAR:
raise ValueError(f"Invalid year: {date.year}")
new_date = Date()
if mask.year:
new_date.year =... | Converts date (datetime Python object) to a NumericValue object with a Date object value. | _get_numeric_value_from_date | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _parse_date(text):
"""Attempts to format a text as a standard date string (yyyy-mm-dd)."""
text = re.sub(r"Sept\b", "Sep", text)
for in_pattern, mask, regex in _PROCESSED_DATE_PATTERNS:
if not regex.match(text):
continue
try:
date = datetime.datetime.strptime(text... | Attempts to format a text as a standard date string (yyyy-mm-dd). | _parse_date | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _parse_number(text):
"""Parses simple cardinal and ordinals numbers."""
for suffix in _ORDINAL_SUFFIXES:
if text.endswith(suffix):
text = text[: -len(suffix)]
break
text = text.replace(",", "")
try:
value = float(text)
except ValueError:
        return Non... | Parses simple cardinal and ordinal numbers. | _parse_number | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
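A runnable version of the cardinal/ordinal parse, following the suffix-stripping logic visible above (the nan/inf guard is assumed from the same pattern, since `float()` accepts strings like "nan"):

```python
import math

_ORDINAL_SUFFIXES = ["st", "nd", "rd", "th"]

def parse_number(text):
    """Parse simple cardinals and ordinals as floats; return None on failure."""
    for suffix in _ORDINAL_SUFFIXES:
        if text.endswith(suffix):
            text = text[: -len(suffix)]  # "3rd" -> "3"
            break
    text = text.replace(",", "")  # "1,234" -> "1234"
    try:
        value = float(text)
    except ValueError:
        return None
    if math.isnan(value) or math.isinf(value):
        return None  # reject inputs like "nan" or "inf" that float() accepts
    return value

print(parse_number("1,234"))  # 1234.0
print(parse_number("3rd"))    # 3.0
print(parse_number("abc"))    # None
```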
def get_all_spans(text, max_ngram_length):
"""
Split a text into all possible ngrams up to 'max_ngram_length'. Split points are white space and punctuation.
Args:
text: Text to split.
max_ngram_length: maximal ngram length.
Yields:
Spans, tuples of begin-end index.
"""
start_i... |
Split a text into all possible ngrams up to 'max_ngram_length'. Split points are white space and punctuation.
Args:
text: Text to split.
max_ngram_length: maximal ngram length.
Yields:
Spans, tuples of begin-end index.
| get_all_spans | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
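A runnable sketch of the span enumeration, using `str.isalnum()` as the word-character test (a simplification: the original splits on an explicit punctuation set):

```python
def get_all_spans(text, max_ngram_length):
    """Yield (begin, end) character spans for every ngram of word-like chunks,
    up to max_ngram_length words per span."""
    start_indexes = []
    for index, char in enumerate(text):
        if not char.isalnum():
            continue  # split points: whitespace and punctuation
        if index == 0 or not text[index - 1].isalnum():
            start_indexes.append(index)  # a new word begins here
        if index + 1 == len(text) or not text[index + 1].isalnum():
            # a word ends here: emit spans starting at the last few word starts
            for start_index in start_indexes[-max_ngram_length:]:
                yield start_index, index + 1

print(list(get_all_spans("a test", max_ngram_length=2)))
# [(0, 1), (0, 6), (2, 6)]
```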
def parse_text(text):
"""
Extracts longest number and date spans.
Args:
text: text to annotate
Returns:
List of longest numeric value spans.
"""
span_dict = collections.defaultdict(list)
for match in _NUMBER_PATTERN.finditer(text):
span_text = text[match.start() : match... |
Extracts longest number and date spans.
Args:
text: text to annotate
Returns:
List of longest numeric value spans.
| parse_text | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_value_as_primitive_value(numeric_value):
"""Maps a NumericValue proto to a float or tuple of float."""
if numeric_value.float_value is not None:
return numeric_value.float_value
if numeric_value.date is not None:
date = numeric_value.date
value_tuple = [None, None, None]
... | Maps a NumericValue proto to a float or tuple of float. | _get_value_as_primitive_value | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _consolidate_numeric_values(row_index_to_values, min_consolidation_fraction, debug_info):
"""
Finds the most common numeric values in a column and returns them
Args:
row_index_to_values:
For each row index all the values in that cell.
min_consolidation_fraction:
... |
Finds the most common numeric values in a column and returns them
Args:
row_index_to_values:
For each row index all the values in that cell.
min_consolidation_fraction:
Fraction of cells that need to have consolidated value.
debug_info:
Additional in... | _consolidate_numeric_values | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
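The consolidation idea — keep a column's values only when a single value type (e.g. number vs. date) covers enough of its cells — can be sketched with plain Python types standing in for the `NumericValue` objects (the real implementation compares parsed value kinds, not Python types):

```python
import collections

def consolidate_column(row_index_to_values, min_consolidation_fraction=0.7):
    """Keep one value per row if a single type dominates the column, else nothing."""
    type_counts = collections.Counter()
    for values in row_index_to_values.values():
        for value_type in {type(v) for v in values}:  # count each type once per cell
            type_counts[value_type] += 1
    if not type_counts:
        return {}
    best_type, count = type_counts.most_common(1)[0]
    if count < min_consolidation_fraction * len(row_index_to_values):
        return {}  # no type is common enough: drop the column
    return {
        row: next(v for v in values if type(v) is best_type)
        for row, values in row_index_to_values.items()
        if any(type(v) is best_type for v in values)
    }

# Floats dominate, so the stray string parse in row 1 is dropped:
print(consolidate_column({0: [1.0], 1: [2.0, "2000"], 2: [3.0]}))
# {0: 1.0, 1: 2.0, 2: 3.0}
```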
def _get_numeric_values(text):
"""Parses text and returns numeric values."""
numeric_spans = parse_text(text)
return itertools.chain(*(span.values for span in numeric_spans)) | Parses text and returns numeric values. | _get_numeric_values | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def _get_column_values(table, col_index):
"""
Parses text in column and returns a dict mapping row_index to values. This is the _get_column_values function from
number_annotation_utils.py of the original implementation
Args:
table: Pandas dataframe
col_index: integer, indicating the index o... |
Parses text in column and returns a dict mapping row_index to values. This is the _get_column_values function from
number_annotation_utils.py of the original implementation
Args:
table: Pandas dataframe
col_index: integer, indicating the index of the column to get the numeric values of
| _get_column_values | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def get_numeric_relation(value, other_value, sort_key_fn):
"""Compares two values and returns their relation or None."""
value = sort_key_fn(value)
other_value = sort_key_fn(other_value)
if value == other_value:
return Relation.EQ
if value < other_value:
return Relation.LT
if val... | Compares two values and returns their relation or None. | get_numeric_relation | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def add_numeric_values_to_question(question):
"""Adds numeric value spans to a question."""
original_text = question
question = normalize_for_match(question)
numeric_spans = parse_text(question)
return Question(original_text=original_text, text=question, numeric_spans=numeric_spans) | Adds numeric value spans to a question. | add_numeric_values_to_question | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def filter_invalid_unicode_from_table(table):
"""
    Removes invalid unicode from the table. Checks whether a table cell text contains an invalid unicode encoding. If so,
    resets the table cell text to an empty string and logs a warning for each invalid cell.
Args:
table: table to clean.
"""
# to do... |
    Removes invalid unicode from the table. Checks whether a table cell text contains an invalid unicode encoding. If so,
    resets the table cell text to an empty string and logs a warning for each invalid cell.
Args:
table: table to clean.
| filter_invalid_unicode_from_table | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def add_numeric_table_values(table, min_consolidation_fraction=0.7, debug_info=None):
"""
Parses text in table column-wise and adds the consolidated values. Consolidation refers to finding values with a
    common type (date or number).
Args:
table:
Table to annotate.
min_consol... |
Parses text in table column-wise and adds the consolidated values. Consolidation refers to finding values with a
    common type (date or number).
Args:
table:
Table to annotate.
min_consolidation_fraction:
Fraction of cells in a column that need to have consolidated va... | add_numeric_table_values | python | huggingface/transformers | src/transformers/models/tapas/tokenization_tapas.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py | Apache-2.0 |
def resize(
self,
image: np.ndarray,
size: Dict[str, int],
resample: PILImageResampling = PILImageResampling.BILINEAR,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
**kwargs,
) -> ... |
    Resize an image. The shortest edge of the image is resized to `size["shortest_edge"]`, with the longest edge
resized to keep the input aspect ratio. Both the height and width are resized to be divisible by 32.
Args:
image (`np.ndarray`):
Image to resize.
... | resize | python | huggingface/transformers | src/transformers/models/textnet/image_processing_textnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/textnet/image_processing_textnet.py | Apache-2.0 |
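One plausible way to compute the output size under the constraints described above (shortest edge fixed, aspect ratio kept, both sides rounded to a multiple of 32); the exact rounding used by the TextNet processor may differ, so treat this as a sketch:

```python
def get_resize_output_size(height, width, shortest_edge, size_divisor=32):
    """Scale so the shorter side equals `shortest_edge` (keeping aspect ratio),
    then round both sides to a positive multiple of `size_divisor`."""
    scale = shortest_edge / min(height, width)
    new_height = max(size_divisor, round(height * scale / size_divisor) * size_divisor)
    new_width = max(size_divisor, round(width * scale / size_divisor) * size_divisor)
    return new_height, new_width

# A 480x640 image with shortest_edge=640: the short side scales to 640,
# the long side to ~853, which rounds up to 864 (a multiple of 32).
print(get_resize_output_size(480, 640, shortest_edge=640))  # (640, 864)
```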
def preprocess(
self,
images: ImageInput,
do_resize: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
size_divisor: Optional[int] = None,
resample: PILImageResampling = None,
do_center_crop: Optional[bool] = None,
crop_size: Optional[int] = No... |
Preprocess an image or batch of images.
Args:
images (`ImageInput`):
Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
passing in images with pixel values between 0 and 1, set `do_rescale=False`.
... | preprocess | python | huggingface/transformers | src/transformers/models/textnet/image_processing_textnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/textnet/image_processing_textnet.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
labels: Optional[torch.LongTensor] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> ImageClassifierOutputWithNoAttention:
r"""
labels (`torch.Long... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`config.n... | forward | python | huggingface/transformers | src/transformers/models/textnet/modeling_textnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/textnet/modeling_textnet.py | Apache-2.0 |
def forward(
self, pixel_values: Tensor, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None
) -> Union[Tuple[Tuple], BackboneOutput]:
r"""
Examples:
```python
>>> import torch
>>> import requests
>>> from PIL import Image
... |
Examples:
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoBackbone
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, ... | forward | python | huggingface/transformers | src/transformers/models/textnet/modeling_textnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/textnet/modeling_textnet.py | Apache-2.0 |
def get_nested_attr(obj, key):
"""Recursively retrieves an attribute from an object, handling list/tuple indexing if present."""
parts = key.split(".")
for part in parts:
match = re.match(r"(.*)\[(\d+)\]", part) # Handle list indexing like `layers[0]`
if match:
attr_name, index ... | Recursively retrieves an attribute from an object, handling list/tuple indexing if present. | get_nested_attr | python | huggingface/transformers | src/transformers/models/timesfm/convert_timesfm_orignal_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/convert_timesfm_orignal_to_hf.py | Apache-2.0 |
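The `get_nested_attr` row is truncated after the regex match. A plausible completion under the structure the visible lines show (the continuation and the example object are assumptions based on those lines):

```python
import re
from types import SimpleNamespace

def get_nested_attr(obj, key):
    """Recursively retrieve an attribute, handling list indexing like `layers[0]`."""
    for part in key.split("."):
        match = re.match(r"(.*)\[(\d+)\]", part)  # e.g. "layers[0]" -> ("layers", "0")
        if match:
            attr_name, index = match.group(1), int(match.group(2))
            obj = getattr(obj, attr_name)[index]
        else:
            obj = getattr(obj, part)
    return obj

# Hypothetical nested object for illustration only.
model = SimpleNamespace(decoder=SimpleNamespace(layers=[SimpleNamespace(weight=3.0)]))
```

With this, `get_nested_attr(model, "decoder.layers[0].weight")` walks `decoder`, indexes `layers[0]`, and returns the `weight` attribute.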
def forward(self, seq_length=None, position=None):
"""Generates a Tensor of sinusoids with different frequencies.
Args:
seq_length: an optional Python int defining the output sequence length.
if the `position` argument is specified.
position: [B, seq_length], optio... | Generates a Tensor of sinusoids with different frequencies.
Args:
seq_length: an optional Python int defining the output sequence length.
if the `position` argument is specified.
position: [B, seq_length], optional position for each token in the
sequence, onl... | forward | python | huggingface/transformers | src/transformers/models/timesfm/modeling_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modeling_timesfm.py | Apache-2.0 |
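The sinusoidal-position row above generates sinusoids at geometrically spaced frequencies. A stdlib-only sketch for a single position — the half-split `[sin..., cos...]` layout and the timescale bounds are common conventions, not necessarily this model's exact ones:

```python
import math

def sinusoidal_embedding(position, dim, min_timescale=1.0, max_timescale=1e4):
    """Return a `dim`-sized sinusoid vector for one integer position
    (assumed layout: first half sines, second half cosines)."""
    half = dim // 2
    # Geometric progression of inverse timescales between min and max.
    log_inc = math.log(max_timescale / min_timescale) / max(half - 1, 1)
    inv_timescales = [min_timescale * math.exp(-i * log_inc) for i in range(half)]
    sins = [math.sin(position * s) for s in inv_timescales]
    coss = [math.cos(position * s) for s in inv_timescales]
    return sins + coss
```

At `position=0` every sine term is 0 and every cosine term is 1, which is a quick sanity check on the layout.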
def forward(
self,
past_values: torch.Tensor,
past_values_padding: torch.LongTensor,
freq: torch.Tensor,
output_attentions: bool = False,
output_hidden_states: bool = False,
) -> TimesFmOutput:
r"""
past_values_padding (`torch.LongTensor` of shape `(ba... |
past_values_padding (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
The padding indicator of the time series.
past_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Past values of the time series that serves as input to the model.
freq... | forward | python | huggingface/transformers | src/transformers/models/timesfm/modeling_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modeling_timesfm.py | Apache-2.0 |
def _prepare_4d_attention_mask(
attention_mask: Optional[torch.Tensor],
sequence_length: int,
dtype: torch.dtype,
device: torch.device,
is_causal: bool = True,
) -> Optional[torch.Tensor]:
"""
Creates 4D attention mask and combines causal and padding masks if ... |
Creates 4D attention mask and combines causal and padding masks if needed.
Args:
attention_mask: Optional tensor of shape (batch_size, seq_length) containing padding mask
sequence_length: Length of the sequence
dtype: Data type of the mask
device: Device... | _prepare_4d_attention_mask | python | huggingface/transformers | src/transformers/models/timesfm/modeling_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modeling_timesfm.py | Apache-2.0 |
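The `_prepare_4d_attention_mask` row combines a causal mask with an optional padding mask. A single-example, pure-Python sketch of the combination logic (the real helper is batched and tensor-valued; the 1-for-real / 0-for-padding convention is the usual Hugging Face one and assumed here):

```python
NEG_INF = float("-inf")

def combined_attention_mask(padding_mask, is_causal=True):
    """Build an additive (seq, seq) mask from a per-token padding mask.
    Masked positions get -inf so they vanish after softmax."""
    n = len(padding_mask)
    mask = [[0.0] * n for _ in range(n)]
    for i in range(n):          # query position
        for j in range(n):      # key position
            if padding_mask[j] == 0:      # padded key: always masked
                mask[i][j] = NEG_INF
            elif is_causal and j > i:     # future key: masked if causal
                mask[i][j] = NEG_INF
    return mask
```

Note that padding masks keys (columns) for every query, while causality masks only the upper triangle.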
def _timesfm_masked_mean_std(inputs: torch.Tensor, padding: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
"""Calculates mean and standard deviation of `inputs` across axis 1.
It excludes values where `padding` is 1.
Args:
inputs: A PyTorch tensor of shape [b, n, p].
... | Calculates mean and standard deviation of `inputs` across axis 1.
It excludes values where `padding` is 1.
Args:
inputs: A PyTorch tensor of shape [b, n, p].
padding: A PyTorch tensor of shape [b, n, p] with values 0 or 1.
Returns:
A tuple containing the me... | _timesfm_masked_mean_std | python | huggingface/transformers | src/transformers/models/timesfm/modeling_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modeling_timesfm.py | Apache-2.0 |
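The `_timesfm_masked_mean_std` row computes statistics while excluding positions where `padding` is 1. A one-dimensional sketch of the same idea (the batched helper works on `[b, n, p]` tensors; the unbiased `n-1` estimator and the all-padded fallback are assumptions):

```python
import math

def masked_mean_std(values, padding):
    """Mean and std of `values` at positions where padding == 0."""
    kept = [v for v, p in zip(values, padding) if p == 0]
    if not kept:
        return 0.0, 1.0  # assumed fallback for an all-padded series
    mean = sum(kept) / len(kept)
    if len(kept) < 2:
        return mean, 0.0
    # Unbiased (n-1) variance over the non-padded values only.
    var = sum((v - mean) ** 2 for v in kept) / (len(kept) - 1)
    return mean, math.sqrt(var)
```

The padded outlier in the usage below does not move the statistics: `masked_mean_std([1.0, 2.0, 3.0, 100.0], [0, 0, 0, 1])` gives mean 2.0 and std 1.0.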
def _timesfm_shift_padded_seq(mask: torch.Tensor, seq: torch.Tensor) -> torch.Tensor:
"""Shifts rows of seq based on the first 0 in each row of the mask.
Args:
mask: mask tensor of shape [B, N]
seq: seq tensor of shape [B, N, P]
Returns:
The shifted sequence... | Shifts rows of seq based on the first 0 in each row of the mask.
Args:
mask: mask tensor of shape [B, N]
seq: seq tensor of shape [B, N, P]
Returns:
The shifted sequence.
| _timesfm_shift_padded_seq | python | huggingface/transformers | src/transformers/models/timesfm/modeling_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modeling_timesfm.py | Apache-2.0 |
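The `_timesfm_shift_padded_seq` row shifts each row of `seq` based on the first 0 in the corresponding row of `mask`. A per-row, list-based sketch — the left-rotation direction is an assumption from the docstring, and the real helper operates on `[B, N, P]` tensors:

```python
def shift_padded_seq(mask, seq):
    """Rotate each row of `seq` left by the index of the first 0 in `mask`."""
    shifted = []
    for row_mask, row in zip(mask, seq):
        # Index of the first 0 in this row's mask; no 0 means no shift.
        k = row_mask.index(0) if 0 in row_mask else 0
        shifted.append(row[k:] + row[:k])
    return shifted
```

For example, a mask row `[1, 0, 1]` has its first 0 at index 1, so the sequence row `[10, 20, 30]` rotates to `[20, 30, 10]`.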
def _preprocess(
self, inputs: Sequence[torch.Tensor], freq: Sequence[int]
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Formats and pads raw inputs to feed into the model.
This function both pads each time series to match the context length, and
pads the inputs to meet t... | Formats and pads raw inputs to feed into the model.
This function both pads each time series to match the context length, and
pads the inputs to meet the SPMD shape requirement.
Args:
inputs: A list of 1d Tensors. Each Tensor is the context time series of
a single forecas... | _preprocess | python | huggingface/transformers | src/transformers/models/timesfm/modeling_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modeling_timesfm.py | Apache-2.0 |
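The `_preprocess` row pads each context series to the model's context length. A sketch of the per-series step — left-padding with zeros and a 1-means-padded indicator are assumptions read off the surrounding docstrings, not the library's verified behavior:

```python
def pad_to_context(series, context_len, pad_value=0.0):
    """Left-pad (or truncate) a 1-D series to `context_len`; return the
    padded values plus a 0/1 padding indicator (1 = padded position)."""
    series = list(series)[-context_len:]   # keep the most recent points
    n_pad = context_len - len(series)
    values = [pad_value] * n_pad + series
    padding = [1.0] * n_pad + [0.0] * len(series)
    return values, padding
```

Left-padding keeps the newest observations adjacent to the forecast horizon, which is why the padding indicator leads rather than trails the real values.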
def forward(
self,
past_values: Sequence[torch.Tensor],
freq: Optional[Sequence[Union[torch.Tensor, int]]] = None,
window_size: Optional[int] = None,
future_values: Optional[torch.Tensor] = None,
forecast_context_len: Optional[int] = None,
return_forecast_on_conte... |
window_size (`int`, *optional*):
Window size of trend + residual decomposition. If None then we do not do decomposition.
future_values (`torch.Tensor`, *optional*):
Optional future time series values to be used for loss computation.
forecast_context_len (`int`, *optional... | forward | python | huggingface/transformers | src/transformers/models/timesfm/modeling_timesfm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/timesfm/modeling_timesfm.py | Apache-2.0 |