| INSTRUCTION | RESPONSE |
|---|---|
Sort a batch first tensor by some specified lengths.
Parameters
----------
tensor : torch.FloatTensor, required.
A batch first PyTorch tensor.
sequence_lengths : torch.LongTensor, required.
A tensor representing the lengths of some dimension of the tensor which
we want to sort b... | def sort_batch_by_length(tensor: torch.Tensor, sequence_lengths: torch.Tensor):
"""
Sort a batch first tensor by some specified lengths.
Parameters
----------
tensor : torch.FloatTensor, required.
A batch first PyTorch tensor.
sequence_lengths : torch.LongTensor, required.
A ten... |
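The response column above is truncated, so here is a minimal, self-contained sketch of the sorting behavior the docstring describes, in plain PyTorch. The function name and the exact return convention are illustrative assumptions, not the library's confirmed API.

```python
import torch

def sort_batch_by_length_sketch(tensor: torch.Tensor, sequence_lengths: torch.Tensor):
    # Sort lengths descending (as pack_padded_sequence requires) and reorder
    # the batch dimension of the tensor to match.
    sorted_lengths, permutation_index = sequence_lengths.sort(0, descending=True)
    sorted_tensor = tensor.index_select(0, permutation_index)
    # Indices that restore the original batch order afterwards.
    _, restoration_indices = permutation_index.sort(0, descending=False)
    return sorted_tensor, sorted_lengths, restoration_indices, permutation_index

batch = torch.randn(3, 5, 4)            # (batch_size, num_timesteps, dim)
lengths = torch.tensor([2, 5, 3])
sorted_batch, _, restore, _ = sort_batch_by_length_sketch(batch, lengths)
assert torch.equal(sorted_batch.index_select(0, restore), batch)
```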
Given the output from a ``Seq2SeqEncoder``, with shape ``(batch_size, sequence_length,
encoding_dim)``, this method returns the final hidden state for each element of the batch,
giving a tensor of shape ``(batch_size, encoding_dim)``. This is not as simple as
``encoder_outputs[:, -1]``, because the sequenc... | def get_final_encoder_states(encoder_outputs: torch.Tensor,
mask: torch.Tensor,
bidirectional: bool = False) -> torch.Tensor:
"""
Given the output from a ``Seq2SeqEncoder``, with shape ``(batch_size, sequence_length,
encoding_dim)``, this method retu... |
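For the unidirectional case, the core trick is to index each batch element at its last unmasked timestep. A minimal sketch follows (the bidirectional concatenation mentioned in the signature is omitted; all names are illustrative):

```python
import torch

def final_encoder_states_sketch(encoder_outputs: torch.Tensor,
                                mask: torch.Tensor) -> torch.Tensor:
    # Index of the last non-padded timestep for each batch element.
    last_positions = mask.sum(dim=1) - 1                     # (batch_size,)
    batch_indices = torch.arange(encoder_outputs.size(0))
    return encoder_outputs[batch_indices, last_positions]    # (batch_size, encoding_dim)

outputs = torch.randn(2, 4, 3)
mask = torch.tensor([[1, 1, 0, 0], [1, 1, 1, 1]])
print(final_encoder_states_sketch(outputs, mask).shape)  # torch.Size([2, 3])
```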
Computes and returns an element-wise dropout mask for a given tensor, where
each element in the mask is dropped out with probability dropout_probability.
Note that the mask is NOT applied to the tensor - the tensor is passed to retain
the correct CUDA tensor type for the mask.
Parameters
----------... | def get_dropout_mask(dropout_probability: float, tensor_for_masking: torch.Tensor):
"""
Computes and returns an element-wise dropout mask for a given tensor, where
each element in the mask is dropped out with probability dropout_probability.
Note that the mask is NOT applied to the tensor - the tensor i... |
``torch.nn.functional.softmax(vector)`` does not work if some elements of ``vector`` should be
masked. This performs a softmax on just the non-masked portions of ``vector``. Passing
``None`` in for the mask is also acceptable; you'll just get a regular softmax.
``vector`` can have an arbitrary number of ... | def masked_softmax(vector: torch.Tensor,
mask: torch.Tensor,
dim: int = -1,
memory_efficient: bool = False,
mask_fill_value: float = -1e32) -> torch.Tensor:
"""
``torch.nn.functional.softmax(vector)`` does not work if some elements of `... |
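A sketch of the ``memory_efficient`` strategy the signature hints at: fill masked positions with a very negative value so they receive (near-)zero probability after the softmax. This is an illustration under that assumption, not the library's exact implementation.

```python
import torch
import torch.nn.functional as F

def masked_softmax_sketch(vector: torch.Tensor,
                          mask: torch.Tensor,
                          dim: int = -1,
                          mask_fill_value: float = -1e32) -> torch.Tensor:
    if mask is None:
        return F.softmax(vector, dim=dim)
    # Masked positions get a huge negative logit, hence ~0 probability.
    return F.softmax(vector.masked_fill((1 - mask).bool(), mask_fill_value), dim=dim)

scores = torch.tensor([[1.0, 2.0, 3.0]])
mask = torch.tensor([[1, 1, 0]])
print(masked_softmax_sketch(scores, mask))  # third entry is ~0
```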
``torch.nn.functional.log_softmax(vector)`` does not work if some elements of ``vector`` should be
masked. This performs a log_softmax on just the non-masked portions of ``vector``. Passing
``None`` in for the mask is also acceptable; you'll just get a regular log_softmax.
``vector`` can have an arbitrar... | def masked_log_softmax(vector: torch.Tensor, mask: torch.Tensor, dim: int = -1) -> torch.Tensor:
"""
``torch.nn.functional.log_softmax(vector)`` does not work if some elements of ``vector`` should be
masked. This performs a log_softmax on just the non-masked portions of ``vector``. Passing
``None`` in... |
Calculates a max along certain dimensions of masked values.
Parameters
----------
vector : ``torch.Tensor``
The vector to calculate the max over; unmasked parts are assumed to already be zeros.
mask : ``torch.Tensor``
The mask of the vector. It must be broadcastable with vector.
dim : ``int``
... | def masked_max(vector: torch.Tensor,
mask: torch.Tensor,
dim: int,
keepdim: bool = False,
min_val: float = -1e7) -> torch.Tensor:
"""
Calculates a max along certain dimensions of masked values.
Parameters
----------
vector : ``torch.Tensor`... |
Flips a padded tensor along the time dimension without affecting masked entries.
Parameters
----------
padded_sequence : ``torch.Tensor``
The tensor to flip along the time dimension.
Assumed to be of dimensions (batch size, num timesteps, ...)
sequence_lengths : ... | def masked_flip(padded_sequence: torch.Tensor,
sequence_lengths: List[int]) -> torch.Tensor:
"""
Flips a padded tensor along the time dimension without affecting masked entries.
Parameters
----------
padded_sequence : ``torch.Tensor``
The tensor to flip a... |
Calculates a mean along certain dimensions of masked values.
Parameters
----------
vector : ``torch.Tensor``
The vector to calculate the mean over.
mask : ``torch.Tensor``
The mask of the vector. It must be broadcastable with vector.
dim : ``int``
The dimension to calculate the mean along.
k... | def masked_mean(vector: torch.Tensor,
mask: torch.Tensor,
dim: int,
keepdim: bool = False,
eps: float = 1e-8) -> torch.Tensor:
"""
Calculates a mean along certain dimensions of masked values.
Parameters
----------
vector : ``torch.Tenso... |
Perform Viterbi decoding in log space over a sequence given a transition matrix
specifying pairwise (transition) potentials between tags and a matrix of shape
(sequence_length, num_tags) specifying unary potentials for possible tags per
timestep.
Parameters
----------
tag_sequence : torch.Tenso... | def viterbi_decode(tag_sequence: torch.Tensor,
transition_matrix: torch.Tensor,
tag_observations: Optional[List[int]] = None):
"""
Perform Viterbi decoding in log space over a sequence given a transition matrix
specifying pairwise (transition) potentials between tags an... |
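A compact log-space Viterbi sketch over the same inputs the docstring describes: unary potentials of shape (sequence_length, num_tags) and a (num_tags, num_tags) transition matrix. It ignores ``tag_observations`` and is an illustration rather than the library's implementation.

```python
import torch

def viterbi_sketch(tag_sequence: torch.Tensor, transitions: torch.Tensor):
    sequence_length, num_tags = tag_sequence.shape
    scores = tag_sequence[0]            # best log score ending in each tag
    backpointers = []
    for t in range(1, sequence_length):
        # combined[i, j] = score ending at i, transitioning to j, emitting step t.
        combined = scores.unsqueeze(1) + transitions + tag_sequence[t].unsqueeze(0)
        scores, best_prev = combined.max(dim=0)
        backpointers.append(best_prev)
    best_score, best_last = scores.max(dim=0)
    path = [best_last.item()]
    for best_prev in reversed(backpointers):
        path.append(best_prev[path[-1]].item())
    return list(reversed(path)), best_score.item()

unary = torch.tensor([[0.7, 0.3], [0.4, 0.6], [0.1, 0.9]]).log()
trans = torch.tensor([[0.8, 0.2], [0.5, 0.5]]).log()
print(viterbi_sketch(unary, trans))  # ([1, 1, 1], approx -3.21)
```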
Takes the dictionary of tensors produced by a ``TextField`` and returns a mask
with 0 where the tokens are padding, and 1 otherwise. We also handle ``TextFields``
wrapped by an arbitrary number of ``ListFields``, where the number of wrapping ``ListFields``
is given by ``num_wrapping_dims``.
If ``num_w... | def get_text_field_mask(text_field_tensors: Dict[str, torch.Tensor],
num_wrapping_dims: int = 0) -> torch.LongTensor:
"""
Takes the dictionary of tensors produced by a ``TextField`` and returns a mask
with 0 where the tokens are padding, and 1 otherwise. We also handle ``TextFields`... |
Takes a matrix of vectors and a set of weights over the rows in the matrix (which we call an
"attention" vector), and returns a weighted sum of the rows in the matrix. This is the typical
computation performed after an attention mechanism.
Note that while we call this a "matrix" of vectors and an attentio... | def weighted_sum(matrix: torch.Tensor, attention: torch.Tensor) -> torch.Tensor:
"""
Takes a matrix of vectors and a set of weights over the rows in the matrix (which we call an
"attention" vector), and returns a weighted sum of the rows in the matrix. This is the typical
computation performed after an... |
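The "weighted sum of rows" the docstring describes is, in the simplest 2-D attention case, a single batched matrix multiplication. A minimal sketch:

```python
import torch

attention = torch.softmax(torch.randn(2, 5), dim=-1)   # (batch, num_rows)
matrix = torch.randn(2, 5, 7)                          # (batch, num_rows, dim)

# (batch, 1, num_rows) x (batch, num_rows, dim) -> (batch, 1, dim)
weighted = attention.unsqueeze(1).bmm(matrix).squeeze(1)
print(weighted.shape)  # torch.Size([2, 7])
```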
Computes the cross entropy loss of a sequence, weighted with respect to
some user provided weights. Note that the weighting here is not the same as
in the :func:`torch.nn.CrossEntropyLoss()` criterion, which is weighting
classes; here we are weighting the loss contribution from particular elements
in th... | def sequence_cross_entropy_with_logits(logits: torch.FloatTensor,
targets: torch.LongTensor,
weights: torch.FloatTensor,
average: str = "batch",
label_smoothing: fl... |
Replaces all masked values in ``tensor`` with ``replace_with``. ``mask`` must be broadcastable
to the same shape as ``tensor``. We require that ``tensor.dim() == mask.dim()``, as otherwise we
won't know which dimensions of the mask to unsqueeze.
This just does ``tensor.masked_fill()``, except the pytorch ... | def replace_masked_values(tensor: torch.Tensor, mask: torch.Tensor, replace_with: float) -> torch.Tensor:
"""
Replaces all masked values in ``tensor`` with ``replace_with``. ``mask`` must be broadcastable
to the same shape as ``tensor``. We require that ``tensor.dim() == mask.dim()``, as otherwise we
w... |
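Given the ``tensor.dim() == mask.dim()`` requirement noted above, the operation reduces to a ``masked_fill`` on the inverted mask. A hedged sketch, with an illustrative name:

```python
import torch

def replace_masked_values_sketch(tensor: torch.Tensor,
                                 mask: torch.Tensor,
                                 replace_with: float) -> torch.Tensor:
    # mask is 1 for values to KEEP; fill everywhere it is 0.
    return tensor.masked_fill((1 - mask).bool(), replace_with)

t = torch.tensor([[1.0, 2.0, 3.0]])
m = torch.tensor([[1, 0, 1]])
print(replace_masked_values_sketch(t, m, -1e7))  # [[1.0, -1e7, 3.0]]
```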
A check for tensor equality (by value). We make sure that the tensors have the same shape,
then check all of the entries in the tensor for equality. We additionally allow the input
tensors to be lists or dictionaries, where we then do the above check on every position in the
list / item in the dictionary.... | def tensors_equal(tensor1: torch.Tensor, tensor2: torch.Tensor, tolerance: float = 1e-12) -> bool:
"""
A check for tensor equality (by value). We make sure that the tensors have the same shape,
then check all of the entries in the tensor for equality. We additionally allow the input
tensors to be list... |
In order to `torch.load()` a GPU-trained model onto a CPU (or specific GPU),
you have to supply a `map_location` function. Call this with
the desired `cuda_device` to get the function that `torch.load()` needs. | def device_mapping(cuda_device: int):
"""
In order to `torch.load()` a GPU-trained model onto a CPU (or specific GPU),
you have to supply a `map_location` function. Call this with
the desired `cuda_device` to get the function that `torch.load()` needs.
"""
def inner_device_mapping(storage: torc... |
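``torch.load`` calls its ``map_location`` with a storage and a location string, so a closure over the target device is all that is needed. A minimal sketch of this pattern (the checkpoint path is hypothetical):

```python
import torch

def device_mapping_sketch(cuda_device: int):
    def inner_device_mapping(storage, location):
        # cuda_device < 0 means "keep everything on the CPU".
        return storage.cuda(cuda_device) if cuda_device >= 0 else storage
    return inner_device_mapping

# model_state = torch.load("model.th", map_location=device_mapping_sketch(-1))
```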
Combines a list of tensors using element-wise operations and concatenation, specified by a
``combination`` string. The string refers to (1-indexed) positions in the input tensor list,
and looks like ``"1,2,1+2,3-1"``.
We allow the following kinds of combinations: ``x``, ``x*y``, ``x+y``, ``x-y``, and ``x/... | def combine_tensors(combination: str, tensors: List[torch.Tensor]) -> torch.Tensor:
"""
Combines a list of tensors using element-wise operations and concatenation, specified by a
``combination`` string. The string refers to (1-indexed) positions in the input tensor list,
and looks like ``"1,2,1+2,3-1"`... |
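A toy parser for the combination language described above, restricted to single-digit (1-indexed) tensor references for brevity; this is an illustrative sketch, not the library's parser.

```python
import torch

def combine_tensors_sketch(combination: str, tensors):
    def piece(expr: str) -> torch.Tensor:
        if expr.isdigit():
            return tensors[int(expr) - 1]            # "1" means tensors[0]
        first, op, second = tensors[int(expr[0]) - 1], expr[1], tensors[int(expr[2]) - 1]
        if op == "*": return first * second
        if op == "+": return first + second
        if op == "-": return first - second
        if op == "/": return first / second
        raise ValueError("bad operation: " + op)
    # Each comma-separated piece is computed and concatenated on the last dim.
    return torch.cat([piece(p) for p in combination.split(",")], dim=-1)

x, y = torch.randn(2, 3), torch.randn(2, 3)
print(combine_tensors_sketch("1,2,1*2", [x, y]).shape)  # torch.Size([2, 9])
```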
Return zero-based index in the sequence of the last item whose value is equal to obj. Raises a
ValueError if there is no such item.
Parameters
----------
sequence : ``Sequence[T]``
obj : ``T``
Returns
-------
zero-based index associated to the position of the last item equal to obj | def _rindex(sequence: Sequence[T], obj: T) -> int:
"""
Return zero-based index in the sequence of the last item whose value is equal to obj. Raises a
ValueError if there is no such item.
Parameters
----------
sequence : ``Sequence[T]``
obj : ``T``
Returns
-------
zero-based in... |
Like :func:`combine_tensors`, but does a weighted (linear) multiplication while combining.
This is a separate function from ``combine_tensors`` because we try to avoid instantiating
large intermediate tensors during the combination, which is possible because we know that we're
going to be multiplying by a w... | def combine_tensors_and_multiply(combination: str,
tensors: List[torch.Tensor],
weights: torch.nn.Parameter) -> torch.Tensor:
"""
Like :func:`combine_tensors`, but does a weighted (linear) multiplication while combining.
This is a separate fu... |
For use with :func:`combine_tensors`. This function computes the resultant dimension when
calling ``combine_tensors(combination, tensors)``, when the tensor dimension is known. This is
necessary for knowing the sizes of weight matrices when building models that use
``combine_tensors``.
Parameters
... | def get_combined_dim(combination: str, tensor_dims: List[int]) -> int:
"""
For use with :func:`combine_tensors`. This function computes the resultant dimension when
calling ``combine_tensors(combination, tensors)``, when the tensor dimension is known. This is
necessary for knowing the sizes of weight ... |
A numerically stable computation of logsumexp. This is mathematically equivalent to
`tensor.exp().sum(dim, keepdim=keepdim).log()`. This function is typically used for summing log
probabilities.
Parameters
----------
tensor : torch.FloatTensor, required.
A tensor of arbitrary size.
dim : ... | def logsumexp(tensor: torch.Tensor,
dim: int = -1,
keepdim: bool = False) -> torch.Tensor:
"""
A numerically stable computation of logsumexp. This is mathematically equivalent to
`tensor.exp().sum(dim, keepdim=keepdim).log()`. This function is typically used for summing log
pro... |
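The standard stabilization is to subtract the max before exponentiating; a sketch, with a case where the naive version underflows:

```python
import torch

def logsumexp_sketch(tensor: torch.Tensor, dim: int = -1, keepdim: bool = False):
    # exp() of (tensor - max) is always <= 1, so it can never overflow,
    # and at least one term is exactly 1, so the sum never underflows to 0.
    max_score, _ = tensor.max(dim, keepdim=True)
    result = (tensor - max_score).exp().sum(dim, keepdim=True).log() + max_score
    return result if keepdim else result.squeeze(dim)

log_probs = torch.tensor([-1000.0, -1001.0])
print(logsumexp_sketch(log_probs))      # tensor(-999.6867)
print(log_probs.exp().sum().log())      # tensor(-inf): the naive version underflows
```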
This is a subroutine for :func:`~batched_index_select`. The given ``indices`` of size
``(batch_size, d_1, ..., d_n)`` indexes into dimension 2 of a target tensor, which has size
``(batch_size, sequence_length, embedding_size)``. This function returns a vector that
correctly indexes into the flattened target... | def flatten_and_batch_shift_indices(indices: torch.Tensor,
sequence_length: int) -> torch.Tensor:
"""
This is a subroutine for :func:`~batched_index_select`. The given ``indices`` of size
``(batch_size, d_1, ..., d_n)`` indexes into dimension 2 of a target tensor, which h... |
The given ``indices`` of size ``(batch_size, d_1, ..., d_n)`` indexes into the sequence
dimension (dimension 2) of the target, which has size ``(batch_size, sequence_length,
embedding_size)``.
This function returns selected values in the target with respect to the provided indices, which
have size ``(b... | def batched_index_select(target: torch.Tensor,
indices: torch.LongTensor,
flattened_indices: Optional[torch.LongTensor] = None) -> torch.Tensor:
"""
The given ``indices`` of size ``(batch_size, d_1, ..., d_n)`` indexes into the sequence
dimension (dimension ... |
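A sketch of the flatten-and-shift trick that both this row and the previous one describe: offset each batch's indices so they address a flattened ``(batch_size * sequence_length, embedding_size)`` view. Names are illustrative.

```python
import torch

def batched_index_select_sketch(target: torch.Tensor,
                                indices: torch.LongTensor) -> torch.Tensor:
    batch_size, sequence_length, embedding_size = target.shape
    # Shift batch b's indices by b * sequence_length into the flattened target.
    offsets = torch.arange(batch_size).unsqueeze(1) * sequence_length
    flat_indices = (indices + offsets).view(-1)
    flat_target = target.reshape(batch_size * sequence_length, embedding_size)
    return flat_target.index_select(0, flat_indices).view(*indices.shape, embedding_size)

target = torch.arange(24.0).view(2, 4, 3)     # (batch=2, seq=4, dim=3)
indices = torch.tensor([[0, 3], [1, 1]])      # (batch=2, d_1=2)
print(batched_index_select_sketch(target, indices).shape)  # torch.Size([2, 2, 3])
```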
The given ``indices`` of size ``(set_size, subset_size)`` specifies subsets of the ``target``
that each of the set_size rows should select. The ``target`` has size
``(batch_size, sequence_length, embedding_size)``, and the resulting selected tensor has size
``(batch_size, set_size, subset_size, embedding_size... | def flattened_index_select(target: torch.Tensor,
indices: torch.LongTensor) -> torch.Tensor:
"""
The given ``indices`` of size ``(set_size, subset_size)`` specifies subsets of the ``target``
that each of the set_size rows should select. The ``target`` has size
``(batch_size, seq... |
Returns a range vector with the desired size, starting at 0. The CUDA implementation
is meant to avoid copying data from the CPU to the GPU. | def get_range_vector(size: int, device: int) -> torch.Tensor:
"""
Returns a range vector with the desired size, starting at 0. The CUDA implementation
is meant to avoid copying data from the CPU to the GPU.
"""
if device > -1:
return torch.cuda.LongTensor(size, device=device).fill_(1).cumsum(0) - 1
... |
Places the given values (designed for distances) into ``num_total_buckets`` semi-logscale
buckets, with ``num_identity_buckets`` of these capturing single values.
The default settings will bucket values into the following buckets:
[0, 1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+].
Parameters
----------... | def bucket_values(distances: torch.Tensor,
num_identity_buckets: int = 4,
num_total_buckets: int = 10) -> torch.Tensor:
"""
Places the given values (designed for distances) into ``num_total_buckets`` semi-logscale
buckets, with ``num_identity_buckets`` of these capturing s... |
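A sketch of the bucketing arithmetic: identity buckets for small values, then floor(log2) buckets above them, clamped to the bucket count. It reproduces the documented bucket boundaries but is an illustration, with names assumed.

```python
import math
import torch

def bucket_values_sketch(distances: torch.Tensor,
                         num_identity_buckets: int = 4,
                         num_total_buckets: int = 10) -> torch.Tensor:
    # Values above the identity range fall into floor(log2)-scale buckets.
    logspace = (distances.float().clamp(min=1).log() / math.log(2)).floor().long() \
        + (num_identity_buckets - 1)
    use_identity = (distances <= num_identity_buckets).long()
    combined = use_identity * distances + (1 - use_identity) * logspace
    return combined.clamp(0, num_total_buckets - 1)

d = torch.tensor([0, 1, 4, 5, 8, 40, 200])
print(bucket_values_sketch(d))  # tensor([0, 1, 4, 5, 6, 8, 9])
```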
Add begin/end of sentence tokens to the batch of sentences.
Given a batch of sentences with size ``(batch_size, timesteps)`` or
``(batch_size, timesteps, dim)`` this returns a tensor of shape
``(batch_size, timesteps + 2)`` or ``(batch_size, timesteps + 2, dim)`` respectively.
Returns both the new tens... | def add_sentence_boundary_token_ids(tensor: torch.Tensor,
mask: torch.Tensor,
sentence_begin_token: Any,
sentence_end_token: Any) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Add begin/end of sentence tokens... |
Remove begin/end of sentence embeddings from the batch of sentences.
Given a batch of sentences with size ``(batch_size, timesteps, dim)``
this returns a tensor of shape ``(batch_size, timesteps - 2, dim)`` after removing
the beginning and end sentence markers. The sentences are assumed to be padded on the... | def remove_sentence_boundaries(tensor: torch.Tensor,
mask: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Remove begin/end of sentence embeddings from the batch of sentences.
Given a batch of sentences with size ``(batch_size, timesteps, dim)``
this returns a tens... |
Implements the frequency-based positional encoding described
in `Attention Is All You Need
<https://www.semanticscholar.org/paper/Attention-Is-All-You-Need-Vaswani-Shazeer/0737da0767d77606169cbf4187b83e1ab62f6077>`_ .
Adds sinusoids of different frequencies to a ``Tensor``. A sinusoid of a
different fr... | def add_positional_features(tensor: torch.Tensor,
min_timescale: float = 1.0,
max_timescale: float = 1.0e4):
# pylint: disable=line-too-long
"""
Implements the frequency-based positional encoding described
in `Attention Is All You Need
<https:/... |
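A self-contained sketch of the sinusoidal features themselves, assuming an even hidden dimension: a geometric ladder of timescales, with the first half of the dims taking sin and the second half cos. Broadcasting adds the same features to every batch element.

```python
import math
import torch

def positional_features_sketch(timesteps: int, hidden_dim: int,
                               min_timescale: float = 1.0,
                               max_timescale: float = 1.0e4) -> torch.Tensor:
    num_timescales = hidden_dim // 2                      # assumes even hidden_dim
    positions = torch.arange(timesteps, dtype=torch.float)
    # Geometric progression of inverse timescales from 1/min down to 1/max.
    log_increment = math.log(max_timescale / min_timescale) / max(num_timescales - 1, 1)
    inverse_timescales = min_timescale * torch.exp(
        torch.arange(num_timescales, dtype=torch.float) * -log_increment)
    scaled = positions.unsqueeze(1) * inverse_timescales.unsqueeze(0)
    return torch.cat([scaled.sin(), scaled.cos()], dim=1)  # (timesteps, hidden_dim)

tensor = torch.randn(2, 6, 8)                             # (batch, timesteps, dim)
tensor = tensor + positional_features_sketch(6, 8)        # broadcast over the batch
```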
Produce N identical layers. | def clone(module: torch.nn.Module, num_copies: int) -> torch.nn.ModuleList:
"""Produce N identical layers."""
return torch.nn.ModuleList([copy.deepcopy(module) for _ in range(num_copies)]) |
Given a (possibly higher order) tensor of ids with shape
(d1, ..., dn, sequence_length)
Return a view that's (d1 * ... * dn, sequence_length).
If the original tensor is 1-d or 2-d, return it as is. | def combine_initial_dims(tensor: torch.Tensor) -> torch.Tensor:
"""
Given a (possibly higher order) tensor of ids with shape
(d1, ..., dn, sequence_length)
Return a view that's (d1 * ... * dn, sequence_length).
If the original tensor is 1-d or 2-d, return it as is.
"""
if tensor.dim() <= 2:
... |
Given a tensor of embeddings with shape
(d1 * ... * dn, sequence_length, embedding_dim)
and the original shape
(d1, ..., dn, sequence_length),
return the reshaped tensor of embeddings with shape
(d1, ..., dn, sequence_length, embedding_dim).
If the original size is 1-d or 2-d, return it as is. | def uncombine_initial_dims(tensor: torch.Tensor, original_size: torch.Size) -> torch.Tensor:
"""
Given a tensor of embeddings with shape
(d1 * ... * dn, sequence_length, embedding_dim)
and the original shape
(d1, ..., dn, sequence_length),
return the reshaped tensor of embeddings with shape
... |
Checks if the string occurs in the table, and if it does, returns the names of the columns
under which it occurs. If it does not, returns an empty list. | def _string_in_table(self, candidate: str) -> List[str]:
"""
Checks if the string occurs in the table, and if it does, returns the names of the columns
under which it occurs. If it does not, returns an empty list.
"""
candidate_column_names: List[str] = []
# First check i... |
These are the transformation rules used to normalize cell and column names in Sempre. See
``edu.stanford.nlp.sempre.tables.StringNormalizationUtils.characterNormalize`` and
``edu.stanford.nlp.sempre.tables.TableTypeSystem.canonicalizeName``. We reproduce those
rules here to normalize and canoni... | def normalize_string(string: str) -> str:
"""
These are the transformation rules used to normalize cell and column names in Sempre. See
``edu.stanford.nlp.sempre.tables.StringNormalizationUtils.characterNormalize`` and
``edu.stanford.nlp.sempre.tables.TableTypeSystem.canonicalizeName``. ... |
Takes a logical form as a lisp string and returns a nested list representation of the lisp.
For example, "(count (division first))" would get mapped to ['count', ['division', 'first']]. | def lisp_to_nested_expression(lisp_string: str) -> List:
"""
Takes a logical form as a lisp string and returns a nested list representation of the lisp.
For example, "(count (division first))" would get mapped to ['count', ['division', 'first']].
"""
stack: List = []
current_expression: List = [... |
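A stack-based parser for this lisp-to-nested-list conversion is only a few lines; the sketch below reproduces the documented example output, with an illustrative function name.

```python
def lisp_to_nested_sketch(lisp_string: str) -> list:
    tokens = lisp_string.replace("(", " ( ").replace(")", " ) ").split()
    stack: list = []
    current: list = []
    for token in tokens:
        if token == "(":
            stack.append(current)   # remember the enclosing expression
            current = []
        elif token == ")":
            finished, current = current, stack.pop()
            current.append(finished)
        else:
            current.append(token)
    return current[0]               # unwrap the single top-level expression

print(lisp_to_nested_sketch("(count (division first))"))
# ['count', ['division', 'first']]
```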
Parameters
----------
batch : ``List[List[str]]``, required
A list of tokenized sentences.
Returns
-------
A tuple of tensors, the first representing activations (batch_size, 3, num_timesteps, 1024) and
the second a mask (batch_size, num_timesteps). | def batch_to_embeddings(self, batch: List[List[str]]) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Parameters
----------
batch : ``List[List[str]]``, required
A list of tokenized sentences.
Returns
-------
A tuple of tensors, the first representing a... |
Computes the ELMo embeddings for a single tokenized sentence.
Please note that ELMo has internal state and will give different results for the same input.
See the comment under the class definition.
Parameters
----------
sentence : ``List[str]``, required
A tokenize... | def embed_sentence(self, sentence: List[str]) -> numpy.ndarray:
"""
Computes the ELMo embeddings for a single tokenized sentence.
Please note that ELMo has internal state and will give different results for the same input.
See the comment under the class definition.
Parameters
... |
Computes the ELMo embeddings for a batch of tokenized sentences.
Please note that ELMo has internal state and will give different results for the same input.
See the comment under the class definition.
Parameters
----------
batch : ``List[List[str]]``, required
A li... | def embed_batch(self, batch: List[List[str]]) -> List[numpy.ndarray]:
"""
Computes the ELMo embeddings for a batch of tokenized sentences.
Please note that ELMo has internal state and will give different results for the same input.
See the comment under the class definition.
Pa... |
Computes the ELMo embeddings for an iterable of sentences.
Please note that ELMo has internal state and will give different results for the same input.
See the comment under the class definition.
Parameters
----------
sentences : ``Iterable[List[str]]``, required
An ... | def embed_sentences(self,
sentences: Iterable[List[str]],
batch_size: int = DEFAULT_BATCH_SIZE) -> Iterable[numpy.ndarray]:
"""
Computes the ELMo embeddings for an iterable of sentences.
Please note that ELMo has internal state and will give differ... |
Computes ELMo embeddings from an input_file where each line contains a sentence tokenized by whitespace.
The ELMo embeddings are written out in HDF5 format, where each sentence embedding
is saved in a dataset with the line number in the original file as the key.
Parameters
----------
... | def embed_file(self,
input_file: IO,
output_file_path: str,
output_format: str = "all",
batch_size: int = DEFAULT_BATCH_SIZE,
forget_sentences: bool = False,
use_sentence_keys: bool = False) -> None:
... |
Add the field to the existing fields mapping.
If we have already indexed the Instance, then we also index `field`, so
it is necessary to supply the vocab. | def add_field(self, field_name: str, field: Field, vocab: Vocabulary = None) -> None:
"""
Add the field to the existing fields mapping.
If we have already indexed the Instance, then we also index `field`, so
it is necessary to supply the vocab.
"""
self.fields[field_name]... |
Increments counts in the given ``counter`` for all of the vocabulary items in all of the
``Fields`` in this ``Instance``. | def count_vocab_items(self, counter: Dict[str, Dict[str, int]]):
"""
Increments counts in the given ``counter`` for all of the vocabulary items in all of the
``Fields`` in this ``Instance``.
"""
for field in self.fields.values():
field.count_vocab_items(counter) |
Indexes all fields in this ``Instance`` using the provided ``Vocabulary``.
This `mutates` the current object, it does not return a new ``Instance``.
A ``DataIterator`` will call this on each pass through a dataset; we use the ``indexed``
flag to make sure that indexing only happens once.
... | def index_fields(self, vocab: Vocabulary) -> None:
"""
Indexes all fields in this ``Instance`` using the provided ``Vocabulary``.
This `mutates` the current object, it does not return a new ``Instance``.
A ``DataIterator`` will call this on each pass through a dataset; we use the ``index... |
Returns a dictionary of padding lengths, keyed by field name. Each ``Field`` returns a
mapping from padding keys to actual lengths, and we just key that dictionary by field name. | def get_padding_lengths(self) -> Dict[str, Dict[str, int]]:
"""
Returns a dictionary of padding lengths, keyed by field name. Each ``Field`` returns a
mapping from padding keys to actual lengths, and we just key that dictionary by field name.
"""
lengths = {}
for field_n... |
Pads each ``Field`` in this instance to the lengths given in ``padding_lengths`` (which is
keyed by field name, then by padding key, the same as the return value in
:func:`get_padding_lengths`), returning a list of torch tensors for each field.
If ``padding_lengths`` is omitted, we will call ``... | def as_tensor_dict(self,
padding_lengths: Dict[str, Dict[str, int]] = None) -> Dict[str, DataArray]:
"""
Pads each ``Field`` in this instance to the lengths given in ``padding_lengths`` (which is
keyed by field name, then by padding key, the same as the return value in
... |
Return the full name (including module) of the given class. | def full_name(cla55: Optional[type]) -> str:
"""
Return the full name (including module) of the given class.
"""
# Special case to handle None:
if cla55 is None:
return "?"
if issubclass(cla55, Initializer) and cla55 not in [Initializer, PretrainedModelInitializer]:
init_fn = cl... |
Find the name (if any) that a subclass was registered under.
We do this simply by iterating through the registry until we
find it. | def _get_config_type(cla55: type) -> Optional[str]:
"""
Find the name (if any) that a subclass was registered under.
We do this simply by iterating through the registry until we
find it.
"""
# Special handling for pytorch RNN types:
if cla55 == torch.nn.RNN:
return "rnn"
elif cla... |
Inspect the docstring and get the comments for each parameter. | def _docspec_comments(obj) -> Dict[str, str]:
"""
Inspect the docstring and get the comments for each parameter.
"""
# Sometimes our docstring is on the class, and sometimes it's on the initializer,
# so we've got to check both.
class_docstring = getattr(obj, '__doc__', None)
init_docstring ... |
Create the ``Config`` for a class by reflecting on its ``__init__``
method and applying a few hacks. | def _auto_config(cla55: Type[T]) -> Config[T]:
"""
Create the ``Config`` for a class by reflecting on its ``__init__``
method and applying a few hacks.
"""
typ3 = _get_config_type(cla55)
# Don't include self, or vocab
names_to_ignore = {"self", "vocab"}
# Hack for RNNs
if cla55 in ... |
Pretty-print a config in sort-of-JSON+comments. | def render_config(config: Config, indent: str = "") -> str:
"""
Pretty-print a config in sort-of-JSON+comments.
"""
# Add four spaces to the indent.
new_indent = indent + " "
return "".join([
# opening brace + newline
"{\n",
# "type": "...", (if present)
... |
Render a single config item, with the provided indent | def _render(item: ConfigItem, indent: str = "") -> str:
"""
Render a single config item, with the provided indent
"""
optional = item.default_value != _NO_DEFAULT
if is_configurable(item.annotation):
rendered_annotation = f"{item.annotation} (configurable)"
else:
rendered_annota... |
Return a mapping {registered_name -> subclass_name}
for the registered subclasses of `cla55`. | def _valid_choices(cla55: type) -> Dict[str, str]:
"""
Return a mapping {registered_name -> subclass_name}
for the registered subclasses of `cla55`.
"""
valid_choices: Dict[str, str] = {}
if cla55 not in Registrable._registry:
raise ValueError(f"{cla55} is not a known Registrable class"... |
Convert `url` into a hashed filename in a repeatable way.
If `etag` is specified, append its hash to the url's, delimited
by a period. | def url_to_filename(url: str, etag: str = None) -> str:
"""
Convert `url` into a hashed filename in a repeatable way.
If `etag` is specified, append its hash to the url's, delimited
by a period.
"""
url_bytes = url.encode('utf-8')
url_hash = sha256(url_bytes)
filename = url_hash.hexdiges... |
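The hashing scheme is simple enough to sketch end-to-end: hash the URL, and append a hash of the ETag when one is known, so a changed resource gets a fresh cache entry. The URL below is illustrative.

```python
from hashlib import sha256

def url_to_filename_sketch(url: str, etag: str = None) -> str:
    filename = sha256(url.encode("utf-8")).hexdigest()
    if etag:
        # A new ETag for the same URL yields a different cache filename.
        filename += "." + sha256(etag.encode("utf-8")).hexdigest()
    return filename

print(url_to_filename_sketch("https://example.com/model.tar.gz", etag="abc123"))
```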
Return the url and etag (which may be ``None``) stored for `filename`.
Raise ``FileNotFoundError`` if `filename` or its stored metadata do not exist. | def filename_to_url(filename: str, cache_dir: str = None) -> Tuple[str, str]:
"""
Return the url and etag (which may be ``None``) stored for `filename`.
Raise ``FileNotFoundError`` if `filename` or its stored metadata do not exist.
"""
if cache_dir is None:
cache_dir = CACHE_DIRECTORY
c... |
Given something that might be a URL (or might be a local path),
determine which. If it's a URL, download the file and cache it, and
return the path to the cached file. If it's already a local path,
make sure the file exists and then return the path. | def cached_path(url_or_filename: Union[str, Path], cache_dir: str = None) -> str:
"""
Given something that might be a URL (or might be a local path),
determine which. If it's a URL, download the file and cache it, and
return the path to the cached file. If it's already a local path,
make sure the fi... |
Given something that might be a URL (or might be a local path),
determine whether it's a URL or an existing file path. | def is_url_or_existing_file(url_or_filename: Union[str, Path, None]) -> bool:
"""
Given something that might be a URL (or might be a local path),
determine whether it's a URL or an existing file path.
"""
if url_or_filename is None:
return False
url_or_filename = os.path.expanduser(str(ur... |
Split a full s3 path into the bucket name and path. | def split_s3_path(url: str) -> Tuple[str, str]:
"""Split a full s3 path into the bucket name and path."""
parsed = urlparse(url)
if not parsed.netloc or not parsed.path:
raise ValueError("bad s3 path {}".format(url))
bucket_name = parsed.netloc
s3_path = parsed.path
# Remove '/' at begin... |
Wrapper function for s3 requests in order to create more helpful error
messages. | def s3_request(func: Callable):
"""
Wrapper function for s3 requests in order to create more helpful error
messages.
"""
@wraps(func)
def wrapper(url: str, *args, **kwargs):
try:
return func(url, *args, **kwargs)
except ClientError as exc:
if int(exc.resp... |
Check ETag on S3 object. | def s3_etag(url: str) -> Optional[str]:
"""Check ETag on S3 object."""
s3_resource = boto3.resource("s3")
bucket_name, s3_path = split_s3_path(url)
s3_object = s3_resource.Object(bucket_name, s3_path)
return s3_object.e_tag |
Pull a file directly from S3. | def s3_get(url: str, temp_file: IO) -> None:
"""Pull a file directly from S3."""
s3_resource = boto3.resource("s3")
bucket_name, s3_path = split_s3_path(url)
s3_resource.Bucket(bucket_name).download_fileobj(s3_path, temp_file) |
Given a URL, look for the corresponding dataset in the local cache.
If it's not there, download it. Then return the path to the cached file. | def get_from_cache(url: str, cache_dir: str = None) -> str:
"""
Given a URL, look for the corresponding dataset in the local cache.
If it's not there, download it. Then return the path to the cached file.
"""
if cache_dir is None:
cache_dir = CACHE_DIRECTORY
os.makedirs(cache_dir, exist... |
Extract a de-duped collection (set) of text from a file.
Expected file format is one item per line. | def read_set_from_file(filename: str) -> Set[str]:
"""
Extract a de-duped collection (set) of text from a file.
Expected file format is one item per line.
"""
collection = set()
with open(filename, 'r') as file_:
for line in file_:
collection.add(line.rstrip())
return col... |
Processes the text2sql data into the following directory structure:
``dataset/{query_split, question_split}/{train,dev,test}.json``
for datasets which have train, dev and test splits, or:
``dataset/{query_split, question_split}/{split_{split_id}}.json``
for datasets which use cross validation.
... | def main(output_directory: str, data: str) -> None:
"""
Processes the text2sql data into the following directory structure:
``dataset/{query_split, question_split}/{train,dev,test}.json``
for datasets which have train, dev and test splits, or:
``dataset/{query_split, question_split}/{split_{split... |
Apply dropout to this layer, for this whole mini-batch.
The dropout probability is ``layer_index / total_layers * undecayed_dropout_prob`` if ``layer_index``
and ``total_layers`` are specified; otherwise the undecayed dropout probability is used directly.
Parameters
----------
layer_input ``torch.FloatTensor`` re... | def forward(self,
layer_input: torch.Tensor,
layer_output: torch.Tensor,
layer_index: int = None,
total_layers: int = None) -> torch.Tensor:
# pylint: disable=arguments-differ
"""
Apply dropout to this layer, for this whole mini-bat... |
See ``PlaceholderType.resolve`` | def resolve(self, other: Type) -> Optional[Type]:
"""See ``PlaceholderType.resolve``"""
if not isinstance(other, NltkComplexType):
return None
expected_second = ComplexType(NUMBER_TYPE,
ComplexType(ANY_TYPE, ComplexType(ComplexType(ANY_TYPE, ANY_... |
See ``PlaceholderType.resolve`` | def resolve(self, other: Type) -> Optional[Type]:
"""See ``PlaceholderType.resolve``"""
if not isinstance(other, NltkComplexType):
return None
resolved_second = NUMBER_TYPE.resolve(other.second)
if not resolved_second:
return None
return CountType(other.first) |
Reads an NLVR dataset and returns a JSON representation containing sentences, labels, correct and
incorrect logical forms. The output will contain at most `max_num_logical_forms` logical forms
each in both correct and incorrect lists. The output format is:
``[{"id": str, "label": str, "sentence": str, "... | def process_data(input_file: str,
output_file: str,
max_path_length: int,
max_num_logical_forms: int,
ignore_agenda: bool,
write_sequences: bool) -> None:
"""
Reads an NLVR dataset and returns a JSON representation containing s... |
This method lets you take advantage of spacy's batch processing.
Default implementation is to just iterate over the texts and call ``split_sentences``. | def batch_split_sentences(self, texts: List[str]) -> List[List[str]]:
"""
This method lets you take advantage of spacy's batch processing.
Default implementation is to just iterate over the texts and call ``split_sentences``.
"""
return [self.split_sentences(text) for text in tex... |
An iterator over the entire dataset, yielding all sentences processed. | def dataset_iterator(self, file_path: str) -> Iterator[OntonotesSentence]:
"""
An iterator over the entire dataset, yielding all sentences processed.
"""
for conll_file in self.dataset_path_iterator(file_path):
yield from self.sentence_iterator(conll_file) |
An iterator returning file_paths in a directory
containing CONLL-formatted files. | def dataset_path_iterator(file_path: str) -> Iterator[str]:
"""
An iterator returning file_paths in a directory
containing CONLL-formatted files.
"""
logger.info("Reading CONLL sentences from dataset files at: %s", file_path)
for root, _, files in list(os.walk(file_path))... |
An iterator over CONLL formatted files which yields documents, regardless
of the number of document annotations in a particular file. This is useful
for conll data which has been preprocessed, such as the preprocessing which
takes place for the 2012 CONLL Coreference Resolution task. | def dataset_document_iterator(self, file_path: str) -> Iterator[List[OntonotesSentence]]:
"""
An iterator over CONLL formatted files which yields documents, regardless
of the number of document annotations in a particular file. This is useful
for conll data which has been preprocessed, s... |
An iterator over the sentences in an individual CONLL formatted file. | def sentence_iterator(self, file_path: str) -> Iterator[OntonotesSentence]:
"""
An iterator over the sentences in an individual CONLL formatted file.
"""
for document in self.dataset_document_iterator(file_path):
for sentence in document:
yield sentence |
For a given coref label, add it to the currently open span(s), complete span(s), or
ignore it if it is outside of all spans. This method mutates the clusters and coref_stacks
dictionaries.
Parameters
----------
label : ``str``
The coref label for this word.
w... | def _process_coref_span_annotations_for_word(label: str,
word_index: int,
clusters: DefaultDict[int, List[Tuple[int, int]]],
coref_stacks: DefaultDict[int, List[int]]) -> No... |
Given a sequence of different label types for a single word and the current
span label we are inside, compute the BIO tag for each label and append to a list.
Parameters
----------
annotations: ``List[str]``
A list of labels to compute BIO tags for.
span_labels : ``L... | def _process_span_annotations_for_word(annotations: List[str],
span_labels: List[List[str]],
current_span_labels: List[Optional[str]]) -> None:
"""
Given a sequence of different label types for a single word and the cu... |
Apply dropout to input tensor.
Parameters
----------
input_tensor: ``torch.FloatTensor``
A tensor of shape ``(batch_size, num_timesteps, embedding_dim)``
Returns
-------
output: ``torch.FloatTensor``
A tensor of shape ``(batch_size, num_timesteps... | def forward(self, input_tensor):
# pylint: disable=arguments-differ
"""
Apply dropout to input tensor.
Parameters
----------
input_tensor: ``torch.FloatTensor``
A tensor of shape ``(batch_size, num_timesteps, embedding_dim)``
Returns
-------
... |
Compute and return the metric. Optionally also call :func:`self.reset`. | def get_metric(self, reset: bool) -> Union[float, Tuple[float, ...], Dict[str, float], Dict[str, List[float]]]:
"""
Compute and return the metric. Optionally also call :func:`self.reset`.
"""
raise NotImplementedError |
If you actually passed gradient-tracking Tensors to a Metric, there will be
a huge memory leak, because it will prevent garbage collection for the computation
graph. This method ensures that you're using tensors directly and that they are on
the CPU. | def unwrap_to_tensors(*tensors: torch.Tensor):
"""
If you actually passed gradient-tracking Tensors to a Metric, there will be
a huge memory leak, because it will prevent garbage collection for the computation
graph. This method ensures that you're using tensors directly and that they ar... |
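The fix the docstring describes is just ``detach()`` plus ``cpu()`` on every tensor argument; a minimal sketch under that assumption:

```python
import torch

def unwrap_to_tensors_sketch(*tensors):
    # detach() drops the autograd graph; cpu() keeps metric bookkeeping off the GPU.
    return (t.detach().cpu() if isinstance(t, torch.Tensor) else t for t in tensors)

predictions = torch.randn(4, requires_grad=True)
gold = torch.randn(4)
predictions, gold = unwrap_to_tensors_sketch(predictions, gold)
print(predictions.requires_grad)  # False
```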
Replaces abstract variables in text with their concrete counterparts. | def replace_variables(sentence: List[str],
sentence_variables: Dict[str, str]) -> Tuple[List[str], List[str]]:
"""
Replaces abstract variables in text with their concrete counterparts.
"""
tokens = []
tags = []
for token in sentence:
if token not in sentence_variabl... |
Cleans up and unifies a SQL query. This involves unifying quoted strings
and splitting brackets which aren't formatted consistently in the data. | def clean_and_split_sql(sql: str) -> List[str]:
"""
Cleans up and unifies a SQL query. This involves unifying quoted strings
and splitting brackets which aren't formatted consistently in the data.
"""
sql_tokens: List[str] = []
for token in sql.strip().split():
token = token.replace('"',... |
Some examples in the text2sql datasets use ID as a column reference to the
column of a table which has a primary key. This causes problems if you are trying
to constrain a grammar to only produce the column names directly, because you don't
know what ID refers to. So instead of dealing with that, we just re... | def resolve_primary_keys_in_schema(sql_tokens: List[str],
schema: Dict[str, List[TableColumn]]) -> List[str]:
"""
Some examples in the text2sql datasets use ID as a column reference to the
column of a table which has a primary key. This causes problems if you are trying
... |
Reads a schema from the text2sql data, returning a dictionary
mapping table names to their columns and respective types.
This handles columns in an arbitrary order and also allows
either ``{Table, Field}`` or ``{Table, Field} Name`` as headers,
because both appear in the data. It also uppercases table a... | def read_dataset_schema(schema_path: str) -> Dict[str, List[TableColumn]]:
"""
Reads a schema from the text2sql data, returning a dictionary
mapping table names to their columns and respective types.
This handles columns in an arbitrary order and also allows
either ``{Table, Field}`` or ``{Table, Fi... |
A utility function for reading in text2sql data. The blob is
the result of loading the json from a file produced by the script
``scripts/reformat_text2sql_data.py``.
Parameters
----------
data : ``JsonDict``
use_all_sql : ``bool``, optional (default = False)
Whether to use all of the sq... | def process_sql_data(data: List[JsonDict],
use_all_sql: bool = False,
use_all_queries: bool = False,
remove_unneeded_aliases: bool = False,
schema: Dict[str, List[TableColumn]] = None) -> Iterable[SqlData]:
"""
A utility functio... |
This function exists because PyTorch RNNs require that their inputs be sorted
before being passed as input. As all of our Seq2xxxEncoders use this functionality,
it is provided in a base class. This method can be called on any module which
takes as input a ``PackedSequence`` and some ``hidden_st... | def sort_and_run_forward(self,
module: Callable[[PackedSequence, Optional[RnnState]],
Tuple[Union[PackedSequence, torch.Tensor], RnnState]],
inputs: torch.Tensor,
mask: torch.Tensor,
... |
Returns an initial state for use in an RNN. Additionally, this method handles
the batch size changing across calls by mutating the state to append initial states
for new elements in the batch. Finally, it also handles sorting the states
with respect to the sequence lengths of elements in the bat... | def _get_initial_states(self,
batch_size: int,
num_valid: int,
sorting_indices: torch.LongTensor) -> Optional[RnnState]:
"""
Returns an initial state for use in an RNN. Additionally, this method handles
the batch... |
After the RNN has run forward, the states need to be updated.
This method just sets the state to the updated new state, performing
several pieces of book-keeping along the way - namely, unsorting the
states and ensuring that the states of completely padded sequences are
not updated. Fina... | def _update_states(self,
final_states: RnnStateStorage,
restoration_indices: torch.LongTensor) -> None:
"""
After the RNN has run forward, the states need to be updated.
This method just sets the state to the updated new state, performing
sev... |
Takes a list of valid target action sequences and creates a mapping from all possible
(valid) action prefixes to allowed actions given that prefix. While the method is called
``construct_prefix_tree``, we're actually returning a map that has as keys the paths to
`all internal nodes of the trie`, and as val... | def construct_prefix_tree(targets: Union[torch.Tensor, List[List[List[int]]]],
target_mask: Optional[torch.Tensor] = None) -> List[Dict[Tuple[int, ...], Set[int]]]:
"""
Takes a list of valid target action sequences and creates a mapping from all possible
(valid) action prefixes to ... |
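The prefix-to-allowed-actions map is easy to sketch for plain Python lists (the tensor and mask handling in the signature is omitted; the function name is illustrative):

```python
from typing import Dict, List, Set, Tuple

def prefix_tree_sketch(targets: List[List[int]]) -> Dict[Tuple[int, ...], Set[int]]:
    # Map every action-sequence prefix to the set of actions allowed next.
    allowed: Dict[Tuple[int, ...], Set[int]] = {}
    for sequence in targets:
        for i in range(len(sequence)):
            allowed.setdefault(tuple(sequence[:i]), set()).add(sequence[i])
    return allowed

print(prefix_tree_sketch([[1, 2, 3], [1, 4, 5]]))
# {(): {1}, (1,): {2, 4}, (1, 2): {3}, (1, 4): {5}}
```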
Convert the string to a Value object.
Args:
original_string (basestring): Original string
corenlp_value (basestring): Optional value returned from CoreNLP
Returns:
Value | def to_value(original_string, corenlp_value=None):
"""Convert the string to Value object.
Args:
original_string (basestring): Original string
corenlp_value (basestring): Optional value returned from CoreNLP
Returns:
Value
"""
if isinstance(original_string, Value):
# ... |
Convert a list of strings to a list of Values
Args:
original_strings (list[basestring])
corenlp_values (list[basestring or None])
Returns:
list[Value] | def to_value_list(original_strings, corenlp_values=None):
"""Convert a list of strings to a list of Values
Args:
original_strings (list[basestring])
corenlp_values (list[basestring or None])
Returns:
list[Value]
"""
assert isinstance(original_strings, (list, tuple, set))
... |
Return True if the predicted denotation is correct.
Args:
target_values (list[Value])
predicted_values (list[Value])
Returns:
bool | def check_denotation(target_values, predicted_values):
"""Return True if the predicted denotation is correct.
Args:
target_values (list[Value])
predicted_values (list[Value])
Returns:
bool
"""
# Check size
if len(target_values) != len(predicted_values):
return Fa... |
Try to parse into a number.
Return:
the number (int or float) if successful; otherwise None. | def parse(text):
"""Try to parse into a number.
Return:
the number (int or float) if successful; otherwise None.
"""
try:
return int(text)
except ValueError:
try:
amount = float(text)
assert not isnan(amount) an... |
Try to parse into a date.
Return:
tuple (year, month, date) if successful; otherwise None. | def parse(text):
"""Try to parse into a date.
Return:
tuple (year, month, date) if successful; otherwise None.
"""
try:
ymd = text.lower().split('-')
assert len(ymd) == 3
year = -1 if ymd[0] in ('xx', 'xxxx') else int(ymd[0])
m... |
Given a sequence tensor, extract spans and return representations of
them. Span representation can be computed in many different ways,
such as concatenation of the start and end spans, attention over the
vectors contained inside the span, etc.
Parameters
----------
seque... | def forward(self, # pylint: disable=arguments-differ
sequence_tensor: torch.FloatTensor,
span_indices: torch.LongTensor,
sequence_mask: torch.LongTensor = None,
span_indices_mask: torch.LongTensor = None):
"""
Given a sequence tensor, extra... |
serialization_directory : str, required.
The directory containing the serialized weights.
device: int, default = -1
The device to run the evaluation on.
data: str, default = None
The data to evaluate on. By default, we use the validation data from
the original experiment.
pre... | def main(serialization_directory: str,
device: int,
data: str,
prefix: str,
domain: str = None):
"""
serialization_directory : str, required.
The directory containing the serialized weights.
device: int, default = -1
The device to run the evaluation on.
... |
Takes an initial state object, a means of transitioning from state to state, and a
supervision signal, and uses the supervision to train the transition function to pick
"good" states.
This function should typically return a ``loss`` key during training, which the ``Model``
will use as i... | def decode(self,
initial_state: State,
transition_function: TransitionFunction,
supervision: SupervisionType) -> Dict[str, torch.Tensor]:
"""
Takes an initial state object, a means of transitioning from state to state, and a
supervision signal, and us... |
Returns the state of the scheduler as a ``dict``. | def state_dict(self) -> Dict[str, Any]:
"""
Returns the state of the scheduler as a ``dict``.
"""
return {key: value for key, value in self.__dict__.items() if key != 'optimizer'} |
Load the scheduler's state.
Parameters
----------
state_dict : ``Dict[str, Any]``
Scheduler state. Should be an object returned from a call to ``state_dict``. | def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
"""
Load the scheduler's state.
Parameters
----------
state_dict : ``Dict[str, Any]``
Scheduler state. Should be an object returned from a call to ``state_dict``.
"""
self.__dict__.update(s... |
Parameters
----------
text_field_input : ``Dict[str, torch.Tensor]``
A dictionary that was the output of a call to ``TextField.as_tensor``. Each tensor in
here is assumed to have a shape roughly similar to ``(batch_size, sequence_length)``
(perhaps with an extra trai... | def forward(self, # pylint: disable=arguments-differ
text_field_input: Dict[str, torch.Tensor],
num_wrapping_dims: int = 0) -> torch.Tensor:
"""
Parameters
----------
text_field_input : ``Dict[str, torch.Tensor]``
A dictionary that was the out... |
Identifies the best prediction given the results from the submodels.
Parameters
----------
subresults : List[Dict[str, torch.Tensor]]
Results of each submodel.
Returns
-------
The index of the best submodel. | def ensemble(subresults: List[Dict[str, torch.Tensor]]) -> torch.Tensor:
"""
Identifies the best prediction given the results from the submodels.
Parameters
----------
subresults : List[Dict[str, torch.Tensor]]
Results of each submodel.
Returns
-------
The index of the best sub... |
Parameters
----------
inputs : ``torch.Tensor``, required.
A Tensor of shape ``(batch_size, sequence_length, hidden_size)``.
mask : ``torch.LongTensor``, required.
A binary mask of shape ``(batch_size, sequence_length)`` representing the
non-padded elements in... | def forward(self, # pylint: disable=arguments-differ
inputs: torch.Tensor,
mask: torch.LongTensor) -> torch.Tensor:
"""
Parameters
----------
inputs : ``torch.Tensor``, required.
A Tensor of shape ``(batch_size, sequence_length, hidden_size)``... |
Parameters
----------
inputs : ``PackedSequence``, required.
A batch first ``PackedSequence`` to run the stacked LSTM over.
initial_state : ``Tuple[torch.Tensor, torch.Tensor]``, optional, (default = None)
A tuple (state, memory) representing the initial hidden state and ... | def _lstm_forward(self,
inputs: PackedSequence,
initial_state: Optional[Tuple[torch.Tensor, torch.Tensor]] = None) -> \
Tuple[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]:
"""
Parameters
----------
inputs : ``PackedSequence``, r... |
Load the pre-trained weights from the file. | def load_weights(self, weight_file: str) -> None:
"""
Load the pre-trained weights from the file.
"""
requires_grad = self.requires_grad
with h5py.File(cached_path(weight_file), 'r') as fin:
for i_layer, lstms in enumerate(
zip(self.forward_layers... |