| code (string, lengths 66–870k) | docstring (string, lengths 19–26.7k) | func_name (string, lengths 1–138) | language (1 class) | repo (string, lengths 7–68) | path (string, lengths 5–324) | url (string, lengths 46–389) | license (7 classes) |
|---|---|---|---|---|---|---|---|
def as_tensor_dict(self,
padding_lengths = None,
cuda_device = -1,
verbose = False) :
# This complex return type is actually predefined elsewhere a... |
This method converts this ``Batch`` into a set of pytorch Tensors that can be passed
through a model. In order for the tensors to be valid tensors, all ``Instances`` in this
batch need to be padded to the same lengths wherever padding is necessary, so we do that
first, then we combine ... | as_tensor_dict | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset.py | MIT |
def add_field(self, field_name , field , vocab = None) :
u"""
Add the field to the existing fields mapping.
If we have already indexed the Instance, then we also index `field`, so
it is necessary to supply the vocab.
"""
self.fields[field_name... |
Add the field to the existing fields mapping.
If we have already indexed the Instance, then we also index `field`, so
it is necessary to supply the vocab.
| add_field | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/instance.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/instance.py | MIT |
def index_fields(self, vocab ) :
u"""
Indexes all fields in this ``Instance`` using the provided ``Vocabulary``.
    This `mutates` the current object; it does not return a new ``Instance``.
A ``DataIterator`` will call this on each pass through a dataset; we use the ``inde... |
Indexes all fields in this ``Instance`` using the provided ``Vocabulary``.
This `mutates` the current object; it does not return a new ``Instance``.
A ``DataIterator`` will call this on each pass through a dataset; we use the ``indexed``
flag to make sure that indexing only happens once... | index_fields | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/instance.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/instance.py | MIT |
def get_padding_lengths(self) :
u"""
Returns a dictionary of padding lengths, keyed by field name. Each ``Field`` returns a
mapping from padding keys to actual lengths, and we just key that dictionary by field name.
"""
lengths = {}
for field_... |
Returns a dictionary of padding lengths, keyed by field name. Each ``Field`` returns a
mapping from padding keys to actual lengths, and we just key that dictionary by field name.
| get_padding_lengths | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/instance.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/instance.py | MIT |
def pop_max_vocab_size(params ) :
u"""
max_vocab_size is allowed to be either an int or a Dict[str, int] (or nothing).
But it could also be a string representing an int (in the case of environment variable
substitution). So we need some complex logic to handle it.
... |
max_vocab_size is allowed to be either an int or a Dict[str, int] (or nothing).
But it could also be a string representing an int (in the case of environment variable
substitution). So we need some complex logic to handle it.
| pop_max_vocab_size | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/vocabulary.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/vocabulary.py | MIT |
def save_to_files(self, directory ) :
u"""
Persist this Vocabulary to files so it can be reloaded later.
Each namespace corresponds to one file.
Parameters
----------
directory : ``str``
The directory where we save the serialized vocabulary.
... |
Persist this Vocabulary to files so it can be reloaded later.
Each namespace corresponds to one file.
Parameters
----------
directory : ``str``
The directory where we save the serialized vocabulary.
| save_to_files | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/vocabulary.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/vocabulary.py | MIT |
def from_files(cls, directory ) :
u"""
Loads a ``Vocabulary`` that was serialized using ``save_to_files``.
Parameters
----------
directory : ``str``
The directory containing the serialized vocabulary.
"""
logger.info(u"Loading token... |
Loads a ``Vocabulary`` that was serialized using ``save_to_files``.
Parameters
----------
directory : ``str``
The directory containing the serialized vocabulary.
| from_files | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/vocabulary.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/vocabulary.py | MIT |
def set_from_file(self,
filename ,
is_padded = True,
oov_token = DEFAULT_OOV_TOKEN,
namespace = u"tokens"):
u"""
If you already have a vocabulary file for a trained model somewhere, and you really... |
If you already have a vocabulary file for a trained model somewhere, and you really want to
use that vocabulary file instead of just setting the vocabulary from a dataset, for
whatever reason, you can do that with this method. You must specify the namespace to use,
and we assume that y... | set_from_file | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/vocabulary.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/vocabulary.py | MIT |
def from_instances(cls,
instances ,
min_count = None,
max_vocab_size = None,
non_padded_namespaces = DEFAULT_NON_PADDED_NAMESPACES,
... |
Constructs a vocabulary given a collection of `Instances` and some parameters.
We count all of the vocabulary items in the instances, then pass those counts
and the other parameters to :func:`__init__`. See that method for a description
of what the other parameters do.
| from_instances | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/vocabulary.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/vocabulary.py | MIT |
def from_params(cls, params , instances = None): # type: ignore
u"""
    There are two possible ways to build a vocabulary: from a
collection of instances, using :func:`Vocabulary.from_instances`, or
from a pre-saved vocabulary, using :func:`Vocabulary.from_... |
There are two possible ways to build a vocabulary: from a
collection of instances, using :func:`Vocabulary.from_instances`, or
from a pre-saved vocabulary, using :func:`Vocabulary.from_files`.
You can also extend a pre-saved vocabulary with a collection of instances
using this metho... | from_params | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/vocabulary.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/vocabulary.py | MIT |
def _extend(self,
counter = None,
min_count = None,
max_vocab_size = None,
non_padded_namespaces = DEFAULT_NON_PADDED_NAMESPACES,
pretrained_files ... |
This method can be used for extending an already generated vocabulary.
It takes the same parameters as the Vocabulary initializer. The token-to-index
and index-to-token mappings of the calling vocabulary will be retained.
It is an in-place operation, so None will be returned.
| _extend | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/vocabulary.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/vocabulary.py | MIT |
def extend_from_instances(self,
params ,
instances = ()) :
u"""
Extends an already generated vocabulary using a collection of instances.
"""
min_count = params.pop(u"min_count", None)
... |
Extends an already generated vocabulary using a collection of instances.
| extend_from_instances | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/vocabulary.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/vocabulary.py | MIT |
def add_token_to_namespace(self, token , namespace = u'tokens') :
u"""
Adds ``token`` to the index, if it is not already present. Either way, we return the index of
the token.
"""
if not isinstance(token, unicode):
raise ValueError(u"Vocabulary tokens ... |
Adds ``token`` to the index, if it is not already present. Either way, we return the index of
the token.
| add_token_to_namespace | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/vocabulary.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/vocabulary.py | MIT |
def text_to_instance(self, # type: ignore
utterances ,
sql_query = None) :
# pylint: disable=arguments-differ
u"""
Parameters
----------
utterances: ``List[str]``, required.
List of utterance... |
Parameters
----------
utterances: ``List[str]``, required.
List of utterances in the interaction, the last element is the current utterance.
sql_query: ``str``, optional
The SQL query, given as label during training or validation.
| text_to_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/atis.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/atis.py | MIT |
def text_to_instance(self, # type: ignore
tokens ,
ccg_categories = None,
original_pos_tags = None,
modified_pos_tags = None,
predicate_arg_categories ... |
We take `pre-tokenized` input here, because we don't have a tokenizer in this class.
Parameters
----------
tokens : ``List[str]``, required.
The tokens in a given sentence.
ccg_categories : ``List[str]``, optional (default = None).
The CCG categories fo... | text_to_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/ccgbank.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/ccgbank.py | MIT |
def text_to_instance(self, # type: ignore
tokens ,
pos_tags = None,
chunk_tags = None,
ner_tags = None) :
u"""
We take `pre-tokenized` input here, b... |
We take `pre-tokenized` input here, because we don't have a tokenizer in this class.
| text_to_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/conll2003.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/conll2003.py | MIT |
def read(self, file_path ) :
u"""
Returns an ``Iterable`` containing all the instances
in the specified dataset.
If ``self.lazy`` is False, this calls ``self._read()``,
ensures that the result is a list, then returns the resulting list.
If ``sel... |
Returns an ``Iterable`` containing all the instances
in the specified dataset.
If ``self.lazy`` is False, this calls ``self._read()``,
ensures that the result is a list, then returns the resulting list.
If ``self.lazy`` is True, this returns an object whose
``__iter__`... | read | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/dataset_reader.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/dataset_reader.py | MIT |
def text_to_instance(self, # type: ignore
sentence ,
structured_representations ,
labels = None,
target_sequences = None,
identifier ... |
Parameters
----------
sentence : ``str``
The query sentence.
structured_representations : ``List[List[List[JsonDict]]]``
A list of Json representations of all the worlds. See expected format in this class' docstring.
labels : ``List[str]`` (optional)
... | text_to_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/nlvr.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/nlvr.py | MIT |
def _ontonotes_subset(ontonotes_reader ,
file_path ,
domain_identifier ) :
u"""
Iterates over the Ontonotes 5.0 dataset using an optional domain identifier.
If the domain identifier is present, on... |
Iterates over the Ontonotes 5.0 dataset using an optional domain identifier.
If the domain identifier is present, only examples which contain the domain
identifier in the file path are yielded.
| _ontonotes_subset | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/ontonotes_ner.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/ontonotes_ner.py | MIT |
def text_to_instance(self, # type: ignore
tokens ,
ner_tags = None) :
u"""
We take `pre-tokenized` input here, because we don't have a tokenizer in this class.
"""
# pylint: disable=arguments-differ
... |
We take `pre-tokenized` input here, because we don't have a tokenizer in this class.
| text_to_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/ontonotes_ner.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/ontonotes_ner.py | MIT |
def text_to_instance(self, # type: ignore
tokens ,
pos_tags = None,
gold_tree = None) :
u"""
We take `pre-tokenized` input here, because we don't have a tokenizer in this class.
Para... |
We take `pre-tokenized` input here, because we don't have a tokenizer in this class.
Parameters
----------
tokens : ``List[str]``, required.
The tokens in a given sentence.
pos_tags : ``List[str]``, optional (default = None).
The POS tags for the words in... | text_to_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/penn_tree_bank.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/penn_tree_bank.py | MIT |
def _strip_functional_tags(self, tree ) :
u"""
Removes all functional tags from constituency labels in an NLTK tree.
We also strip off anything after a =, - or | character, because these
are functional tags which we don't want to use.
This modification is done in-pla... |
Removes all functional tags from constituency labels in an NLTK tree.
We also strip off anything after a =, - or | character, because these
are functional tags which we don't want to use.
This modification is done in-place.
| _strip_functional_tags | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/penn_tree_bank.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/penn_tree_bank.py | MIT |
def _get_gold_spans(self, # pylint: disable=arguments-differ
tree ,
index ,
typed_spans ) :
u"""
Recursively construct the gold spans from an nltk ``Tree``.
Labels are the constituen... |
Recursively construct the gold spans from an nltk ``Tree``.
Labels are the constituents, and in the case of nested constituents
with the same spans, labels are concatenated in parent-child order.
For example, ``(S (NP (D the) (N man)))`` would have an ``S-NP`` label
for the oute... | _get_gold_spans | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/penn_tree_bank.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/penn_tree_bank.py | MIT |
def _ontonotes_subset(ontonotes_reader ,
file_path ,
domain_identifier ) :
u"""
Iterates over the Ontonotes 5.0 dataset using an optional domain identifier.
If the domain identifier is present, on... |
Iterates over the Ontonotes 5.0 dataset using an optional domain identifier.
If the domain identifier is present, only examples which contain the domain
identifier in the file path are yielded.
| _ontonotes_subset | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/semantic_role_labeling.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/semantic_role_labeling.py | MIT |
def text_to_instance(self, # type: ignore
tokens ,
verb_label ,
tags = None) :
u"""
We take `pre-tokenized` input here, along with a verb label. The verb label should be a
one... |
We take `pre-tokenized` input here, along with a verb label. The verb label should be a
one-hot binary vector, the same length as the tokens, indicating the position of the verb
to find arguments for.
| text_to_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/semantic_role_labeling.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/semantic_role_labeling.py | MIT |
def text_to_instance(self, tokens , tags = None) : # type: ignore
u"""
We take `pre-tokenized` input here, because we don't have a tokenizer in this class.
"""
# pylint: disable=arguments-differ
fields = {}
sequence = T... |
We take `pre-tokenized` input here, because we don't have a tokenizer in this class.
| text_to_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/sequence_tagging.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/sequence_tagging.py | MIT |
def text_to_instance(self, tokens , sentiment = None) : # type: ignore
u"""
We take `pre-tokenized` input here, because we don't have a tokenizer in this class.
Parameters
----------
tokens : ``List[str]``, required.
The tokens in a given s... |
We take `pre-tokenized` input here, because we don't have a tokenizer in this class.
Parameters
----------
tokens : ``List[str]``, required.
The tokens in a given sentence.
sentiment : ``str``, optional (default = None).
The sentiment for this sentence.
... | text_to_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/stanford_sentiment_tree_bank.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/stanford_sentiment_tree_bank.py | MIT |
def text_to_instance(self, # type: ignore
words ,
upos_tags ,
dependencies = None) :
# pylint: disable=arguments-differ
u"""
Parameters
----------
wor... |
Parameters
----------
words : ``List[str]``, required.
The words in the sentence to be encoded.
upos_tags : ``List[str]``, required.
The universal dependencies POS tags for each word.
dependencies : ``List[Tuple[str, int]]``, optional (default = None)
... | text_to_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/universal_dependencies.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/universal_dependencies.py | MIT |
def text_to_instance(self, # type: ignore
question ,
table_lines ,
example_lisp_string = None,
dpd_output = None,
tokenized_question = None) ... |
Reads text inputs and makes an instance. WikitableQuestions dataset provides tables as TSV
files, which we use for training.
Parameters
----------
question : ``str``
Input question
table_lines : ``List[str]``
The table content itself, as a list o... | text_to_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/wikitables.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/wikitables.py | MIT |
def _parse_example_line(lisp_string ) :
u"""
Training data in WikitableQuestions comes with examples in the form of lisp strings in the format:
(example (id <example-id>)
(utterance <question>)
(context (graph tables.TableKnowled... |
Training data in WikitableQuestions comes with examples in the form of lisp strings in the format:
(example (id <example-id>)
(utterance <question>)
(context (graph tables.TableKnowledgeGraph <table-filename>))
(targetValue (list (descr... | _parse_example_line | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/wikitables.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/wikitables.py | MIT |
def canonicalize_clusters(clusters ) :
u"""
    The CONLL 2012 data includes 2 annotated spans which are identical,
but have different ids. This checks all clusters for spans which are
identical, and if it finds any, merges the clusters... |
The CONLL 2012 data includes 2 annotated spans which are identical,
but have different ids. This checks all clusters for spans which are
identical, and if it finds any, merges the clusters containing the
identical spans.
| canonicalize_clusters | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/coreference_resolution/conll.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/coreference_resolution/conll.py | MIT |
def text_to_instance(self, # type: ignore
sentences ,
gold_clusters = None) :
# pylint: disable=arguments-differ
u"""
Parameters
----------
sentences : ``List[List... |
Parameters
----------
sentences : ``List[List[str]]``, required.
A list of lists representing the tokenised words and sentences in the document.
gold_clusters : ``Optional[List[List[Tuple[int, int]]]]``, optional (default = None)
A list of all clusters in the doc... | text_to_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/coreference_resolution/conll.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/coreference_resolution/conll.py | MIT |
def text_to_instance(self, # type: ignore
sentence ,
gold_clusters = None) :
# pylint: disable=arguments-differ
u"""
Parameters
----------
        sentence : ``List[Token]``,... |
Parameters
----------
sentence : ``List[Token]``, required.
The already tokenised sentence to analyse.
gold_clusters : ``Optional[List[List[Tuple[int, int]]]]``, optional (default = None)
A list of all clusters in the sentence, represented as word spans. Each cl... | text_to_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/coreference_resolution/winobias.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/coreference_resolution/winobias.py | MIT |
def dataset_path_iterator(file_path ) :
u"""
An iterator returning file_paths in a directory
containing CONLL-formatted files.
"""
logger.info(u"Reading CONLL sentences from dataset files at: %s", file_path)
for root, _, files in list(os.walk(file_path... |
An iterator returning file_paths in a directory
containing CONLL-formatted files.
| dataset_path_iterator | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/ontonotes.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/ontonotes.py | MIT |
def dataset_document_iterator(self, file_path ) :
u"""
An iterator over CONLL formatted files which yields documents, regardless
of the number of document annotations in a particular file. This is useful
for conll data which has been preprocessed, ... |
An iterator over CONLL formatted files which yields documents, regardless
of the number of document annotations in a particular file. This is useful
for conll data which has been preprocessed, such as the preprocessing which
takes place for the 2012 CONLL Coreference Resolution task.
... | dataset_document_iterator | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/ontonotes.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/ontonotes.py | MIT |
def sentence_iterator(self, file_path ) :
u"""
An iterator over the sentences in an individual CONLL formatted file.
"""
for document in self.dataset_document_iterator(file_path):
for sentence in document:
yield sentence |
An iterator over the sentences in an individual CONLL formatted file.
| sentence_iterator | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/ontonotes.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/ontonotes.py | MIT |
def _process_coref_span_annotations_for_word(label ,
word_index ,
clusters ,
coref_stacks ) ... |
For a given coref label, add it to the currently open span(s), complete span(s), or
ignore it if it is outside all spans. This method mutates the clusters and coref_stacks
dictionaries.
Parameters
----------
label : ``str``
The coref label for this word.
... | _process_coref_span_annotations_for_word | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/ontonotes.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/ontonotes.py | MIT |
def _process_span_annotations_for_word(annotations ,
span_labels ,
current_span_labels ) :
u"""
Given a sequence of different label types for a single word and the c... |
Given a sequence of different label types for a single word and the current
span label we are inside, compute the BIO tag for each label and append to a list.
Parameters
----------
annotations: ``List[str]``
A list of labels to compute BIO tags for.
span_lab... | _process_span_annotations_for_word | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/ontonotes.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/ontonotes.py | MIT |
def enumerate_spans(sentence ,
offset = 0,
max_span_width = None,
min_span_width = 1,
filter_function = None) :
u"""
Given a sentence, return all token spans ... |
Given a sentence, return all token spans within the sentence. Spans are `inclusive`.
Additionally, you can provide a maximum and minimum span width, which will be used
to exclude spans outside of this range.
Finally, you can provide a function mapping ``List[T] -> bool``, which will
be applied to ... | enumerate_spans | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/span_utils.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/span_utils.py | MIT |
def bio_tags_to_spans(tag_sequence ,
classes_to_ignore = None) :
u"""
Given a sequence corresponding to BIO tags, extracts spans.
Spans are inclusive and can be of zero length, representing a single word span.
Ill-formed spans are also i... |
Given a sequence corresponding to BIO tags, extracts spans.
Spans are inclusive and can be of zero length, representing a single word span.
Ill-formed spans are also included (i.e. those which do not start with a "B-LABEL"),
as otherwise it is possible to get a perfect precision score whilst still predi... | bio_tags_to_spans | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/span_utils.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/span_utils.py | MIT |
def iob1_tags_to_spans(tag_sequence ,
classes_to_ignore = None) :
u"""
Given a sequence corresponding to IOB1 tags, extracts spans.
Spans are inclusive and can be of zero length, representing a single word span.
Ill-formed spans are als... |
Given a sequence corresponding to IOB1 tags, extracts spans.
Spans are inclusive and can be of zero length, representing a single word span.
Ill-formed spans are also included (i.e., those where "B-LABEL" is not preceded
by "I-LABEL" or "B-LABEL").
Parameters
----------
tag_sequence : List... | iob1_tags_to_spans | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/span_utils.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/span_utils.py | MIT |
def bioul_tags_to_spans(tag_sequence ,
classes_to_ignore = None) :
u"""
Given a sequence corresponding to BIOUL tags, extracts spans.
Spans are inclusive and can be of zero length, representing a single word span.
Ill-formed spans are ... |
Given a sequence corresponding to BIOUL tags, extracts spans.
Spans are inclusive and can be of zero length, representing a single word span.
Ill-formed spans are not allowed and will raise ``InvalidTagSequence``.
This function works properly when the spans are unlabeled (i.e., your labels are
simp... | bioul_tags_to_spans | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/span_utils.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/span_utils.py | MIT |
def to_bioul(tag_sequence , encoding = u"IOB1") :
u"""
Given a tag sequence encoded with IOB1 labels, recode to BIOUL.
In the IOB1 scheme, I is a token inside a span, O is a token outside
a span and B is the beginning of span immediately following another
span of the same... |
Given a tag sequence encoded with IOB1 labels, recode to BIOUL.
In the IOB1 scheme, I is a token inside a span, O is a token outside
a span and B is the beginning of span immediately following another
span of the same type.
In the BIO scheme, I is a token inside a span, O is a token outside
a... | to_bioul | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/span_utils.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/dataset_utils/span_utils.py | MIT |
def pick_paragraphs(self,
evidence_files ,
question = None,
answer_texts = None) :
u"""
Given a list of evidence documents, return a list of paragraphs to use as training
examples.... |
Given a list of evidence documents, return a list of paragraphs to use as training
examples. Each paragraph returned will be made into one training example.
To aid in picking the best paragraph, you can also optionally pass the question text or the
answer strings. Note, though, that ... | pick_paragraphs | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/reading_comprehension/triviaqa.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/reading_comprehension/triviaqa.py | MIT |
def normalize_text(text ) :
u"""
Performs a normalization that is very similar to that done by the normalization functions in
SQuAD and TriviaQA.
This involves splitting and rejoining the text, and could be a somewhat expensive operation.
"""
return u' '.join([token
... |
Performs a normalization that is very similar to that done by the normalization functions in
SQuAD and TriviaQA.
This involves splitting and rejoining the text, and could be a somewhat expensive operation.
| normalize_text | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/reading_comprehension/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/reading_comprehension/util.py | MIT |
def char_span_to_token_span(token_offsets ,
character_span ) :
u"""
Converts a character span from a passage into the corresponding token span in the tokenized
version of the passage. If you pass in a character... |
Converts a character span from a passage into the corresponding token span in the tokenized
version of the passage. If you pass in a character span that does not correspond to complete
tokens in the tokenized version, we'll do our best, but the behavior is officially undefined.
We return an error flag... | char_span_to_token_span | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/reading_comprehension/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/reading_comprehension/util.py | MIT |
def find_valid_answer_spans(passage_tokens ,
answer_texts ) :
u"""
Finds a list of token spans in ``passage_tokens`` that match the given ``answer_texts``. This
tries to find all spans that would evaluate to correct given the SQuAD a... |
Finds a list of token spans in ``passage_tokens`` that match the given ``answer_texts``. This
tries to find all spans that would evaluate to correct given the SQuAD and TriviaQA official
evaluation scripts, which do some normalization of the input text.
Note that this could return duplicate spans! T... | find_valid_answer_spans | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/reading_comprehension/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/reading_comprehension/util.py | MIT |
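Finding all answer spans amounts to sliding the (normalized) answer token sequence along the passage tokens. A minimal sketch, using plain lowercasing in place of the official scripts' full normalization (the function body below is an assumption, not the library code):

```python
def find_valid_answer_spans(passage_tokens, answer_texts):
    # Returns all inclusive (start, end) token spans matching any answer.
    passage = [t.lower() for t in passage_tokens]
    spans = []
    for answer in answer_texts:
        answer_tokens = answer.lower().split()
        n = len(answer_tokens)
        for start in range(len(passage) - n + 1):
            if passage[start:start + n] == answer_tokens:
                spans.append((start, start + n - 1))
    return spans

tokens = "the cat sat on the mat".split()
print(find_valid_answer_spans(tokens, ["the mat", "cat"]))  # [(4, 5), (1, 1)]
```

As the docstring warns, nothing deduplicates the output: if two answer strings normalize to the same tokens, the same span appears twice.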
def make_reading_comprehension_instance(question_tokens ,
passage_tokens ,
token_indexers ,
passage_text ,
t... |
Converts a question, a passage, and an optional answer (or answers) to an ``Instance`` for use
in a reading comprehension model.
Creates an ``Instance`` with at least these fields: ``question`` and ``passage``, both
``TextFields``; and ``metadata``, a ``MetadataField``. Additionally, if both ``answer... | make_reading_comprehension_instance | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/dataset_readers/reading_comprehension/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/dataset_readers/reading_comprehension/util.py | MIT |
def as_tensor(self,
padding_lengths ,
cuda_device = -1) :
u"""
Given a set of specified padding lengths, actually pad the data in this field and return a
torch Tensor (or a more complex data structure) of the correct shape. We ... |
Given a set of specified padding lengths, actually pad the data in this field and return a
torch Tensor (or a more complex data structure) of the correct shape. We also take a
couple of parameters that are important when constructing torch Tensors.
Parameters
----------
... | as_tensor | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/fields/field.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/fields/field.py | MIT |
def get_padding_lengths(self) :
u"""
The ``TextField`` has a list of ``Tokens``, and each ``Token`` gets converted into arrays by
(potentially) several ``TokenIndexers``. This method gets the max length (over tokens)
associated with each of these arrays.
"""
... |
The ``TextField`` has a list of ``Tokens``, and each ``Token`` gets converted into arrays by
(potentially) several ``TokenIndexers``. This method gets the max length (over tokens)
associated with each of these arrays.
| get_padding_lengths | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/fields/text_field.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/fields/text_field.py | MIT |
def sort_by_padding(instances ,
sorting_keys , # pylint: disable=invalid-sequence-index
vocab ,
padding_noise = 0.0) :
u"""
Sorts the instances by their padding lengths, using the... |
Sorts the instances by their padding lengths, using the keys in
``sorting_keys`` (in the order in which they are provided). ``sorting_keys`` is a list of
``(field_name, padding_key)`` tuples.
| sort_by_padding | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/iterators/bucket_iterator.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/iterators/bucket_iterator.py | MIT |
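Sorting by noisy padding lengths keeps similarly-sized instances together while the noise prevents every epoch from producing identical buckets. A simplified sketch where each "instance" is just a dict of field padding lengths (the real library sorts Instance objects; this shape is an assumption for illustration):

```python
import random

def sort_by_padding(instances, sorting_keys, padding_noise=0.1):
    # sorting_keys: list of (field_name, padding_key) tuples, applied in order.
    def noisy_key(instance):
        return [
            instance[field_name][padding_key]
            * (1.0 + random.uniform(-padding_noise, padding_noise))
            for field_name, padding_key in sorting_keys
        ]
    return sorted(instances, key=noisy_key)

instances = [{"tokens": {"num_tokens": n}} for n in (17, 3, 9)]
ordered = sort_by_padding(instances, [("tokens", "num_tokens")], padding_noise=0.0)
print([i["tokens"]["num_tokens"] for i in ordered])  # [3, 9, 17]
```

With `padding_noise=0.0` the sort is exact; with a small positive noise, near-ties can swap order from epoch to epoch.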
def add_epoch_number(batch , epoch ) :
u"""
Add the epoch number to the batch instances as a MetadataField.
"""
for instance in batch.instances:
instance.fields[u'epoch_num'] = MetadataField(epoch)
return batch |
Add the epoch number to the batch instances as a MetadataField.
| add_epoch_number | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/iterators/data_iterator.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/iterators/data_iterator.py | MIT |
def __call__(self,
instances ,
num_epochs = None,
shuffle = True,
cuda_device = -1) :
u"""
Returns a generator that yields batches over the given dataset
for the given nu... |
Returns a generator that yields batches over the given dataset
for the given number of epochs. If ``num_epochs`` is not specified,
it will yield batches forever.
Parameters
----------
instances : ``Iterable[Instance]``
The instances in the dataset. IMPORTANT... | __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/iterators/data_iterator.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/iterators/data_iterator.py | MIT |
def _take_instances(self,
instances ,
max_instances = None) :
u"""
Take the next `max_instances` instances from the given dataset.
If `max_instances` is `None`, then just take all instances fro... |
Take the next `max_instances` instances from the given dataset.
If `max_instances` is `None`, then just take all instances from the dataset.
If `max_instances` is not `None`, each call resumes where the previous one
left off, and when you get to the end of the dataset you start again fr... | _take_instances | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/iterators/data_iterator.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/iterators/data_iterator.py | MIT |
def _memory_sized_lists(self,
instances ) :
u"""
Breaks the dataset into "memory-sized" lists of instances,
which it yields up one at a time until it gets through a full epoch.
For example, if the dataset is alrea... |
Breaks the dataset into "memory-sized" lists of instances,
which it yields up one at a time until it gets through a full epoch.
For example, if the dataset is already an in-memory list, and each epoch
represents one pass through the dataset, it just yields back the dataset.
Whe... | _memory_sized_lists | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/iterators/data_iterator.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/iterators/data_iterator.py | MIT |
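The "memory-sized lists" behavior can be sketched as a chunking generator: with no limit it materializes the whole dataset as one list, otherwise it yields fixed-size chunks so a lazy dataset never has to fit in memory at once (names and signature are illustrative, not the library's private method):

```python
def memory_sized_lists(instances, max_instances_in_memory=None):
    # With no limit, one pass through the data is one yielded list.
    if max_instances_in_memory is None:
        yield list(instances)
        return
    chunk = []
    for instance in instances:
        chunk.append(instance)
        if len(chunk) == max_instances_in_memory:
            yield chunk
            chunk = []
    if chunk:  # trailing partial chunk
        yield chunk

print(list(memory_sized_lists(range(7), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```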
def _ensure_batch_is_sufficiently_small(self, batch_instances ) :
u"""
If self._maximum_samples_per_batch is specified, then split the batch into smaller
sub-batches if it exceeds the maximum size.
"""
if self._maximum_samples_per_batch i... |
If self._maximum_samples_per_batch is specified, then split the batch into smaller
sub-batches if it exceeds the maximum size.
| _ensure_batch_is_sufficiently_small | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/iterators/data_iterator.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/iterators/data_iterator.py | MIT |
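One way to enforce a maximum-samples cap is to greedily grow a sub-batch until adding the next instance would make `count * max_length` exceed the limit (a sketch under that assumption; the library's actual splitting strategy may differ, and here a batch is reduced to just each instance's length):

```python
def split_batch(batch_lengths, max_samples):
    # Split so each sub-batch satisfies len(sub) * max(sub) <= max_samples.
    sub_batches, current = [], []
    for length in batch_lengths:
        candidate = current + [length]
        if len(candidate) * max(candidate) > max_samples and current:
            sub_batches.append(current)
            current = [length]  # a single oversized instance stays alone
        else:
            current = candidate
    if current:
        sub_batches.append(current)
    return sub_batches

print(split_batch([10, 12, 9, 40, 8], max_samples=60))  # [[10, 12, 9], [40], [8]]
```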
def get_num_batches(self, instances ) :
u"""
Returns the number of batches that ``dataset`` will be split into; if you want to track
progress through the batch with the generator produced by ``__call__``, this could be
useful.
"""
if is_lazy(insta... |
Returns the number of batches that ``dataset`` will be split into; if you want to track
progress through the batch with the generator produced by ``__call__``, this could be
useful.
| get_num_batches | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/iterators/data_iterator.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/iterators/data_iterator.py | MIT |
def split_words(self, sentence ) :
u"""
Splits a sentence into word tokens. We handle four kinds of things: words with punctuation
that should be ignored as a special case (Mr. Mrs., etc.), contractions/genitives (isn't,
don't, Matt's), and beginning and ending punctua... |
Splits a sentence into word tokens. We handle four kinds of things: words with punctuation
that should be ignored as a special case (Mr. Mrs., etc.), contractions/genitives (isn't,
don't, Matt's), and beginning and ending punctuation ("antennagate", (parentheticals), and
such.).
... | split_words | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/tokenizers/word_splitter.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/tokenizers/word_splitter.py | MIT |
def tokens_to_indices(self,
tokens ,
vocabulary ,
index_name ) :
u"""
Takes a list of tokens and converts them to one or more sets of indices.
This could be just ... |
Takes a list of tokens and converts them to one or more sets of indices.
This could be just an ID for each token from the vocabulary.
Or it could split each token into characters and return one ID per character.
Or (for instance, in the case of byte-pair encoding) there might not be a c... | tokens_to_indices | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/token_indexers/token_indexer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/token_indexers/token_indexer.py | MIT |
def pad_token_sequence(self,
tokens ,
desired_num_tokens ,
padding_lengths ) :
u"""
This method pads a list of tokens to ``desired_num_to... |
This method pads a list of tokens to ``desired_num_tokens`` and returns a padded copy of the
input tokens. If the input token list is longer than ``desired_num_tokens`` then it will be
truncated.
``padding_lengths`` is used to provide supplemental padding parameters which are needed
... | pad_token_sequence | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/data/token_indexers/token_indexer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/data/token_indexers/token_indexer.py | MIT |
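The pad-or-truncate contract described above is simple to sketch for the common case of integer token ids (the `padding_token` default is an assumption; real indexers get padding values from `padding_lengths` and may pad nested structures):

```python
def pad_token_sequence(tokens, desired_num_tokens, padding_token=0):
    # Truncate if too long, then right-pad up to the desired length.
    padded = list(tokens[:desired_num_tokens])
    padded.extend([padding_token] * (desired_num_tokens - len(padded)))
    return padded

print(pad_token_sequence([4, 8, 15], 5))              # [4, 8, 15, 0, 0]
print(pad_token_sequence([4, 8, 15, 16, 23, 42], 4))  # [4, 8, 15, 16]
```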
def archive_model(serialization_dir ,
weights = _DEFAULT_WEIGHTS,
files_to_archive = None) :
u"""
Archive the model weights, its training configuration, and its
vocabulary to `model.tar.gz`. Include the additional ``files_to_archive``
i... |
Archive the model weights, its training configuration, and its
vocabulary to `model.tar.gz`. Include the additional ``files_to_archive``
if provided.
Parameters
----------
serialization_dir: ``str``
The directory where the weights and vocabulary are written out.
weights: ``str``, o... | archive_model | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/archival.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/archival.py | MIT |
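The archiving step itself is plain `tarfile` work: bundle the weights (and, in the real function, the config and vocabulary) into `model.tar.gz` inside the serialization directory. A stdlib sketch of just that step (not the library implementation; it archives only the weights file):

```python
import os
import tarfile
import tempfile

def archive_model(serialization_dir, weights="weights.th", archive_name="model.tar.gz"):
    # Write <serialization_dir>/model.tar.gz containing the weights file.
    archive_path = os.path.join(serialization_dir, archive_name)
    with tarfile.open(archive_path, "w:gz") as archive:
        archive.add(os.path.join(serialization_dir, weights), arcname=weights)
    return archive_path

with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "weights.th"), "wb") as f:
        f.write(b"fake-weights")
    path = archive_model(tmp)
    with tarfile.open(path) as archive:
        print(archive.getnames())  # ['weights.th']
```

Using `arcname` keeps the archive layout flat regardless of where the serialization directory lives on disk.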
def load_archive(archive_file ,
cuda_device = -1,
overrides = u"",
weights_file = None) :
u"""
Instantiates an Archive from an archived `tar.gz` file.
Parameters
----------
archive_file: ``str``
The archive file... |
Instantiates an Archive from an archived `tar.gz` file.
Parameters
----------
archive_file: ``str``
The archive file to load the model from.
weights_file: ``str``, optional (default = None)
The weights file to use. If unspecified, weights.th in the archive_file will be used.
c... | load_archive | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/archival.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/archival.py | MIT |
def forward(self, # type: ignore
words ,
pos_tags ,
metadata ,
head_tags = None,
head_indices = None) :
... |
Parameters
----------
words : Dict[str, torch.LongTensor], required
The output of ``TextField.as_array()``, which should typically be passed directly to a
``TextFieldEmbedder``. This output is a dictionary mapping keys to ``TokenIndexer``
tensors. At its mos... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/biaffine_dependency_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/biaffine_dependency_parser.py | MIT |
def _construct_loss(self,
head_tag_representation ,
child_tag_representation ,
attended_arcs ,
head_indices ,
head_tags ,
... |
Computes the arc and tag loss for a sequence given gold head indices and tags.
Parameters
----------
head_tag_representation : ``torch.Tensor``, required.
A tensor of shape (batch_size, sequence_length, tag_representation_dim),
which will be used to generate pre... | _construct_loss | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/biaffine_dependency_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/biaffine_dependency_parser.py | MIT |
def _greedy_decode(self,
head_tag_representation ,
child_tag_representation ,
attended_arcs ,
mask ) :
u"""
Decodes the head... |
Decodes the head and head tag predictions by decoding the unlabeled arcs
independently for each word and then again, predicting the head tags of
these greedily chosen arcs independently. Note that this method of decoding
is not guaranteed to produce trees (i.e. there may be multiple ro... | _greedy_decode | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/biaffine_dependency_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/biaffine_dependency_parser.py | MIT |
def _mst_decode(self,
head_tag_representation ,
child_tag_representation ,
attended_arcs ,
mask ) :
u"""
Decodes the head and head tag p... |
Decodes the head and head tag predictions using the Edmonds' Algorithm
for finding minimum spanning trees on directed graphs. Nodes in the
graph are the words in the sentence, and between each pair of nodes,
there is an edge in each direction, where the weight of the edge corresponds
... | _mst_decode | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/biaffine_dependency_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/biaffine_dependency_parser.py | MIT |
def _get_head_tags(self,
head_tag_representation ,
child_tag_representation ,
head_indices ) :
u"""
Decodes the head tags given the head and child tag representations
and a ... |
Decodes the head tags given the head and child tag representations
and a tensor of head indices to compute tags for. Note that these are
either gold or predicted heads, depending on whether this function is
being called to compute the loss, or if it's being called during inference.
... | _get_head_tags | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/biaffine_dependency_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/biaffine_dependency_parser.py | MIT |
def _get_mask_for_eval(self,
mask ,
pos_tags ) :
u"""
Dependency evaluation excludes words that are punctuation.
Here, we create a new mask to exclude word indices which
have a "punctuat... |
Dependency evaluation excludes words that are punctuation.
Here, we create a new mask to exclude word indices which
have a "punctuation-like" part of speech tag.
Parameters
----------
mask : ``torch.LongTensor``, required.
The original mask.
pos_tags : ``... | _get_mask_for_eval | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/biaffine_dependency_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/biaffine_dependency_parser.py | MIT |
def forward(self, # type: ignore
tokens ,
label = None) :
# pylint: disable=arguments-differ
u"""
Parameters
----------
tokens : Dict[str, torch.LongTensor], required
... |
Parameters
----------
tokens : Dict[str, torch.LongTensor], required
The output of ``TextField.as_array()``.
label : torch.LongTensor, optional (default = None)
A variable representing the label for each instance in the batch.
Returns
-------
... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/biattentive_classification_network.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/biattentive_classification_network.py | MIT |
def decode(self, output_dict ) :
u"""
Does a simple argmax over the class probabilities, converts indices to string labels, and
adds a ``"label"`` key to the dictionary with the result.
"""
predictions = output_dict[u"class_probab... |
Does a simple argmax over the class probabilities, converts indices to string labels, and
adds a ``"label"`` key to the dictionary with the result.
| decode | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/biattentive_classification_network.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/biattentive_classification_network.py | MIT |
def forward(self, # type: ignore
premise ,
hypothesis ,
label = None,
metadata = None # pylint:disable=unused-argument
) ... |
Parameters
----------
premise : Dict[str, torch.LongTensor]
The premise from a ``TextField``
hypothesis : Dict[str, torch.LongTensor]
The hypothesis from a ``TextField``
label : torch.LongTensor, optional (default = None)
The label for the pa... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/bimpm.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/bimpm.py | MIT |
def forward(self, # type: ignore
tokens ,
spans ,
metadata ,
pos_tags = None,
span_labels = None) :
... |
Parameters
----------
tokens : Dict[str, torch.LongTensor], required
The output of ``TextField.as_array()``, which should typically be passed directly to a
``TextFieldEmbedder``. This output is a dictionary mapping keys to ``TokenIndexer``
tensors. At its mo... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/constituency_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/constituency_parser.py | MIT |
def decode(self, output_dict ) :
u"""
Constructs an NLTK ``Tree`` given the scored spans. We also switch to exclusive
span ends when constructing the tree representation, because it makes indexing
into lists cleaner for ranges of text, ra... |
Constructs an NLTK ``Tree`` given the scored spans. We also switch to exclusive
span ends when constructing the tree representation, because it makes indexing
into lists cleaner for ranges of text, rather than individual indices.
Finally, for batch prediction, we will have padded spans... | decode | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/constituency_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/constituency_parser.py | MIT |
def construct_trees(self,
predictions ,
all_spans ,
num_spans ,
sentences ,
pos_tags = None) :
... |
Construct ``nltk.Tree``'s for each batch element by greedily nesting spans.
The trees use exclusive end indices, which contrasts with how spans are
represented in the rest of the model.
Parameters
----------
predictions : ``torch.FloatTensor``, required.
A t... | construct_trees | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/constituency_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/constituency_parser.py | MIT |
def resolve_overlap_conflicts_greedily(spans ) :
u"""
Given a set of spans, removes spans which overlap by evaluating the difference
in probability between one being labeled and the other explicitly having no label
and vice-versa. The worst c... |
Given a set of spans, removes spans which overlap by evaluating the difference
in probability between one being labeled and the other explicitly having no label
and vice-versa. The worst case time complexity of this method is ``O(k * n^4)`` where ``n``
is the length of the sentence that... | resolve_overlap_conflicts_greedily | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/constituency_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/constituency_parser.py | MIT |
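The greedy resolution loop can be sketched with score triples standing in for the span objects: repeatedly find an overlapping pair and drop the lower-scoring member until no conflicts remain (a simplified stand-in; the real function compares labeled-vs-no-label probability differences rather than a single score):

```python
def spans_overlap(span1, span2):
    # Inclusive (start, end) spans overlap if neither ends before the other begins.
    return span1[0] <= span2[1] and span2[0] <= span1[1]

def resolve_overlap_conflicts_greedily(spans):
    # spans: list of (start, end, score) triples.
    spans = list(spans)
    conflicts = True
    while conflicts:
        conflicts = False
        for i in range(len(spans)):
            for j in range(i + 1, len(spans)):
                if spans_overlap(spans[i][:2], spans[j][:2]):
                    loser = i if spans[i][2] < spans[j][2] else j
                    del spans[loser]
                    conflicts = True
                    break
            if conflicts:
                break
    return spans

print(resolve_overlap_conflicts_greedily([(0, 3, 0.9), (2, 5, 0.4), (6, 8, 0.7)]))
# [(0, 3, 0.9), (6, 8, 0.7)]
```

The restart-after-every-deletion loop is what gives the poor worst-case complexity the docstring mentions.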
def construct_tree_from_spans(spans_to_labels ,
sentence ,
pos_tags = None) :
u"""
Parameters
----------
spans_to_labels : ``Dict[Tuple[int, int], str]``, required.
... |
Parameters
----------
spans_to_labels : ``Dict[Tuple[int, int], str]``, required.
A mapping from spans to constituency labels.
sentence : ``List[str]``, required.
A list of tokens forming the sentence to be parsed.
pos_tags : ``List[str]``, optional (defa... | construct_tree_from_spans | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/constituency_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/constituency_parser.py | MIT |
def forward(self, # type: ignore
tokens ,
tags = None,
metadata = None) :
# pylint: disable=arguments-differ
u"""
Parameters
----------
t... |
Parameters
----------
tokens : ``Dict[str, torch.LongTensor]``, required
The output of ``TextField.as_array()``, which should typically be passed directly to a
``TextFieldEmbedder``. This output is a dictionary mapping keys to ``TokenIndexer``
tensors. At it... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/crf_tagger.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/crf_tagger.py | MIT |
def decode(self, output_dict ) :
u"""
Converts the tag ids to the actual tags.
``output_dict["tags"]`` is a list of lists of tag_ids,
so we use an ugly nested list comprehension.
"""
output_dict[u"tags"] = [
... |
Converts the tag ids to the actual tags.
``output_dict["tags"]`` is a list of lists of tag_ids,
so we use an ugly nested list comprehension.
| decode | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/crf_tagger.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/crf_tagger.py | MIT |
def forward(self, # type: ignore
premise ,
hypothesis ,
label = None,
metadata = None) :
# pylint: disable=arguments-differ
... |
Parameters
----------
premise : Dict[str, torch.LongTensor]
From a ``TextField``
hypothesis : Dict[str, torch.LongTensor]
From a ``TextField``
label : torch.IntTensor, optional, (default = None)
From a ``LabelField``
metadata : ``List[... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/decomposable_attention.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/decomposable_attention.py | MIT |
def _load(cls,
config ,
serialization_dir ,
weights_file = None,
cuda_device = -1) :
u"""
Ensembles don't have vocabularies or weights of their own, so they override _load.
"""
model_params = config.ge... |
Ensembles don't have vocabularies or weights of their own, so they override _load.
| _load | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/ensemble.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/ensemble.py | MIT |
def forward(self, # type: ignore
premise ,
hypothesis ,
label = None,
metadata = None # pylint:disable=unused-argument
) ... |
Parameters
----------
premise : Dict[str, torch.LongTensor]
From a ``TextField``
hypothesis : Dict[str, torch.LongTensor]
From a ``TextField``
label : torch.IntTensor, optional (default = None)
From a ``LabelField``
metadata : ``List[D... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/esim.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/esim.py | MIT |
def get_regularization_penalty(self) :
u"""
Computes the regularization penalty for the model.
Returns 0 if the model was not configured to use regularization.
"""
if self._regularizer is None:
return 0.0
else:
return s... |
Computes the regularization penalty for the model.
Returns 0 if the model was not configured to use regularization.
| get_regularization_penalty | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/model.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/model.py | MIT |
def get_parameters_for_histogram_tensorboard_logging( # pylint: disable=invalid-name
self) :
u"""
Returns the name of model parameters used for logging histograms to tensorboard.
"""
return [name for name, _ in self.named_parameters()] |
Returns the name of model parameters used for logging histograms to tensorboard.
| get_parameters_for_histogram_tensorboard_logging | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/model.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/model.py | MIT |
def forward_on_instances(self,
instances ) :
u"""
Takes a list of :class:`~allennlp.data.instance.Instance`s, converts that text into
arrays using this model's :class:`Vocabulary`, passes those arrays through
:... |
Takes a list of :class:`~allennlp.data.instance.Instance`s, converts that text into
arrays using this model's :class:`Vocabulary`, passes those arrays through
:func:`self.forward()` and :func:`self.decode()` (which by default does nothing)
and returns the result. Before returning the ... | forward_on_instances | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/model.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/model.py | MIT |
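The un-batching step at the end of `forward_on_instances` can be sketched without tensors: each output whose first dimension matches the batch size is split across per-instance dicts, and anything else is skipped (the library warns about such unseparable outputs). Names and the list-of-lists shape here are illustrative assumptions:

```python
def separate_batched_output(output_dict, batch_size):
    # Turn {name: [per-instance values]} into one dict per instance.
    per_instance = [{} for _ in range(batch_size)]
    for name, batched_values in output_dict.items():
        if len(batched_values) != batch_size:
            continue  # cannot be split back into batch elements
        for instance_dict, value in zip(per_instance, batched_values):
            instance_dict[name] = value
    return per_instance

outputs = {"tags": [["O", "B-PER"], ["O", "O"]], "loss": [0.3, 0.5]}
print(separate_batched_output(outputs, 2))
# [{'tags': ['O', 'B-PER'], 'loss': 0.3}, {'tags': ['O', 'O'], 'loss': 0.5}]
```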
def _get_prediction_device(self) :
u"""
This method checks the device of the model parameters to determine the cuda_device
this model should be run on for predictions. If there are no parameters, it returns -1.
Returns
-------
The cuda device this model should run... |
This method checks the device of the model parameters to determine the cuda_device
this model should be run on for predictions. If there are no parameters, it returns -1.
Returns
-------
The cuda device this model should run on for predictions.
| _get_prediction_device | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/model.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/model.py | MIT |
def _maybe_warn_for_unseparable_batches(self, output_key ):
u"""
This method warns once if a user implements a model which returns a dictionary with
values which we are unable to split back up into elements of the batch. This is controlled
by a class attribute ``_warn_for_unseperable... |
This method warns once if a user implements a model which returns a dictionary with
values which we are unable to split back up into elements of the batch. This is controlled
by a class attribute ``_warn_for_unseperable_batches`` because it would be extremely verbose
otherwise.
| _maybe_warn_for_unseparable_batches | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/model.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/model.py | MIT |
def _load(cls,
config ,
serialization_dir ,
weights_file = None,
cuda_device = -1) :
u"""
Instantiates an already-trained model, based on the experiment
configuration and some optional overrides.
"""
... |
Instantiates an already-trained model, based on the experiment
configuration and some optional overrides.
| _load | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/model.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/model.py | MIT |
def load(cls,
config ,
serialization_dir ,
weights_file = None,
cuda_device = -1) :
u"""
Instantiates an already-trained model, based on the experiment
configuration and some optional overrides.
Parameter... |
Instantiates an already-trained model, based on the experiment
configuration and some optional overrides.
Parameters
----------
config: Params
The configuration that was used to train the model. It should definitely
have a `model` section, and should pro... | load | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/model.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/model.py | MIT |
def forward(self, # type: ignore
tokens ,
verb_indicator ,
tags = None,
metadata = None) :
# pylint: disable=arguments-differ
u"... |
Parameters
----------
tokens : Dict[str, torch.LongTensor], required
The output of ``TextField.as_array()``, which should typically be passed directly to a
``TextFieldEmbedder``. This output is a dictionary mapping keys to ``TokenIndexer``
tensors. At its mo... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_role_labeler.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_role_labeler.py | MIT |
def decode(self, output_dict ) :
u"""
Does constrained viterbi decoding on class probabilities output in :func:`forward`. The
constraint simply specifies that the output tags must be a valid BIO sequence. We add a
``"tags"`` key to the ... |
Does constrained viterbi decoding on class probabilities output in :func:`forward`. The
constraint simply specifies that the output tags must be a valid BIO sequence. We add a
``"tags"`` key to the dictionary with the result.
| decode | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_role_labeler.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_role_labeler.py | MIT |
def get_viterbi_pairwise_potentials(self):
u"""
Generate a matrix of pairwise transition potentials for the BIO labels.
The only constraint implemented here is that I-XXX labels must be preceded
by either an identical I-XXX tag or a B-XXX tag. In order to achieve this
constraint,... |
Generate a matrix of pairwise transition potentials for the BIO labels.
The only constraint implemented here is that I-XXX labels must be preceded
by either an identical I-XXX tag or a B-XXX tag. In order to achieve this
constraint, pairs of labels which do not satisfy this constraint h... | get_viterbi_pairwise_potentials | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_role_labeler.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_role_labeler.py | MIT |
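The BIO transition constraint described above can be sketched as a labels-by-labels matrix where forbidden transitions (into I-X from anything other than B-X or I-X) get negative infinity, so Viterbi decoding never selects them (nested lists stand in for the library's tensor; the helper name is an assumption):

```python
def bio_transition_potentials(labels):
    # labels: index -> tag string, e.g. ["O", "B-ARG0", "I-ARG0", ...]
    num_labels = len(labels)
    matrix = [[0.0] * num_labels for _ in range(num_labels)]
    for i, previous in enumerate(labels):
        for j, current in enumerate(labels):
            if current.startswith("I-"):
                # I-X may only follow B-X or I-X.
                if previous not in ("B-" + current[2:], current):
                    matrix[i][j] = float("-inf")
    return matrix

labels = ["O", "B-ARG0", "I-ARG0", "B-ARG1", "I-ARG1"]
potentials = bio_transition_potentials(labels)
print(potentials[labels.index("O")][labels.index("I-ARG0")])       # -inf
print(potentials[labels.index("B-ARG0")][labels.index("I-ARG0")])  # 0.0
```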
def write_to_conll_eval_file(prediction_file ,
gold_file ,
verb_index ,
sentence ,
prediction ,
gold_labels ):
u"... |
Prints predicate argument predictions and gold labels for a single verbal
predicate in a sentence to two provided file references.
Parameters
----------
prediction_file : TextIO, required.
A file reference to print predictions to.
gold_file : TextIO, required.
A file reference ... | write_to_conll_eval_file | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_role_labeler.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_role_labeler.py | MIT |
def convert_bio_tags_to_conll_format(labels ):
u"""
Converts BIO formatted SRL tags to the format required for evaluation with the
official CONLL 2005 perl script. Spans are represented by bracketed labels,
with the labels of words inside spans being the same as those outside spans.
Beginn... |
Converts BIO formatted SRL tags to the format required for evaluation with the
official CONLL 2005 perl script. Spans are represented by bracketed labels,
with the labels of words inside spans being the same as those outside spans.
Beginning spans always have an opening bracket and a closing asterisk (e... | convert_bio_tags_to_conll_format | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_role_labeler.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_role_labeler.py | MIT |
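The bracket notation described above can be sketched directly from well-formed BIO input: a span's first token gets "(LABEL*", its last token gains a closing ")", and everything outside spans is "*" (a sketch assuming valid BIO sequences; the library function may handle malformed input differently):

```python
def convert_bio_tags_to_conll_format(labels):
    conll = []
    for i, label in enumerate(labels):
        if label == "O":
            conll.append("*")
            continue
        token = "*"
        if label.startswith("B-"):
            token = "(" + label[2:] + token  # open the span
        next_label = labels[i + 1] if i + 1 < len(labels) else "O"
        if next_label != "I-" + label[2:]:
            token = token + ")"  # close the span on its last token
        conll.append(token)
    return conll

print(convert_bio_tags_to_conll_format(["B-ARG0", "I-ARG0", "O", "B-V"]))
# ['(ARG0*', '*)', '*', '(V*)']
```

Note how a single-token span like `B-V` both opens and closes on the same token, giving "(V*)".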
def forward(self, # type: ignore
tokens ,
tags = None,
metadata = None) :
# pylint: disable=arguments-differ
u"""
Parameters
----------
t... |
Parameters
----------
tokens : Dict[str, torch.LongTensor], required
The output of ``TextField.as_array()``, which should typically be passed directly to a
``TextFieldEmbedder``. This output is a dictionary mapping keys to ``TokenIndexer``
tensors. At its mo... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/simple_tagger.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/simple_tagger.py | MIT |
def decode(self, output_dict ) :
u"""
Does a simple position-wise argmax over each token, converts indices to string labels, and
adds a ``"tags"`` key to the dictionary with the result.
"""
all_predictions = output_dict[u'class_pr... |
Does a simple position-wise argmax over each token, converts indices to string labels, and
adds a ``"tags"`` key to the dictionary with the result.
| decode | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/simple_tagger.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/simple_tagger.py | MIT |
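The position-wise argmax this entry describes amounts to picking the best label at each token independently, with no transition constraints. A minimal pure-Python sketch (the real method operates on a batched probability tensor and the model's vocabulary):

```python
def decode_tags(class_probabilities, index_to_label):
    """Position-wise argmax: choose the highest-probability label
    for each token independently."""
    tags = []
    for token_probs in class_probabilities:
        best = max(range(len(token_probs)), key=token_probs.__getitem__)
        tags.append(index_to_label[best])
    return tags

vocab = {0: "O", 1: "B-PER", 2: "I-PER"}
probs = [[0.1, 0.8, 0.1], [0.2, 0.1, 0.7], [0.9, 0.05, 0.05]]
tags = decode_tags(probs, vocab)  # -> ['B-PER', 'I-PER', 'O']
```

Because each position is decoded in isolation, this can emit invalid tag sequences (e.g. an I- tag with no preceding B-), which is exactly why the SRL model above pairs its decoder with Viterbi pairwise potentials.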
def forward(self, # type: ignore
text ,
spans ,
span_labels = None,
metadata = None) :
# pylint: disable=arguments-differ
u"""
... |
Parameters
----------
text : ``Dict[str, torch.LongTensor]``, required.
The output of a ``TextField`` representing the text of
the document.
spans : ``torch.IntTensor``, required.
A tensor of shape (batch_size, num_spans, 2), representing the inclusiv... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/coreference_resolution/coref.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/coreference_resolution/coref.py | MIT |
def decode(self, output_dict ):
u"""
Converts the list of spans and predicted antecedent indices into clusters
of spans for each element in the batch.
Parameters
----------
output_dict : ``Dict[str, torch.Tensor]``, required.
The resul... |
Converts the list of spans and predicted antecedent indices into clusters
of spans for each element in the batch.
Parameters
----------
output_dict : ``Dict[str, torch.Tensor]``, required.
The result of calling :func:`forward` on an instance or batch of instances.
... | decode | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/coreference_resolution/coref.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/coreference_resolution/coref.py | MIT |
def _generate_valid_antecedents(num_spans_to_keep ,
max_antecedents ,
device ):
... |
This method generates possible antecedents per span which survived the pruning
stage. This procedure is `generic across the batch`. The reason this is the case is
that each span in a batch can be coreferent with any previous span, but here we
are computing the possible `indices` of thes... | _generate_valid_antecedents | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/coreference_resolution/coref.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/coreference_resolution/coref.py | MIT |
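The "generic across the batch" index computation described here can be sketched without tensors: each span at position `i` considers the `max_antecedents` spans immediately before it, and a mask flags offsets that fall before the start of the document. The AllenNLP version produces the same indices as batched torch tensors on a given device.

```python
def generate_valid_antecedents(num_spans_to_keep, max_antecedents):
    """For each span index i, list candidate antecedent indices
    i-1 .. i-max_antecedents, plus a parallel validity mask that is
    False where the offset runs past the start of the document."""
    indices, mask = [], []
    for i in range(num_spans_to_keep):
        row = [i - offset for offset in range(1, max_antecedents + 1)]
        indices.append(row)
        mask.append([idx >= 0 for idx in row])
    return indices, mask

idx, mask = generate_valid_antecedents(4, 2)
```

Span 0 has no valid antecedents (its whole row is masked), while later spans see up to two predecessors each.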
def _compute_span_pair_embeddings(self,
top_span_embeddings ,
antecedent_embeddings ,
antecedent_offsets ):
u"""
Computes an embedding r... |
Computes an embedding representation of pairs of spans for the pairwise scoring function
to consider. This includes both the original span representations, the element-wise
similarity of the span representations, and an embedding representation of the distance
between the two spans.
... | _compute_span_pair_embeddings | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/coreference_resolution/coref.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/coreference_resolution/coref.py | MIT |
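The pairwise representation described above concatenates the two span embeddings, their element-wise product, and a distance embedding. A minimal sketch on plain lists (the real method does this for every (span, antecedent) pair in a batched tensor, with the distance bucketed before lookup):

```python
def span_pair_features(target, antecedent, distance_embedding):
    """Pairwise features for the coreference scorer: the two span
    embeddings, their element-wise similarity (product), and an
    embedding of the distance between the spans, concatenated."""
    product = [t * a for t, a in zip(target, antecedent)]
    return target + antecedent + product + distance_embedding

feats = span_pair_features([1.0, 2.0], [3.0, 0.5], [0.1])
```

With 2-dimensional span embeddings and a 1-dimensional distance embedding, the pair representation has 2 + 2 + 2 + 1 = 7 dimensions.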
def _compute_antecedent_gold_labels(top_span_labels ,
antecedent_labels ):
u"""
Generates a binary indicator for every pair of spans. This label is one if and
only if the pair of spans belong to the same cluster. The labels ... |
Generates a binary indicator for every pair of spans. This label is one if and
only if the pair of spans belong to the same cluster. The labels are augmented
with a dummy antecedent at the zeroth position, which represents the prediction
that a span does not have any antecedent.
... | _compute_antecedent_gold_labels | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/coreference_resolution/coref.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/coreference_resolution/coref.py | MIT |
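The label augmentation described here, i.e. a dummy antecedent at position zero that fires only when no real antecedent shares the span's cluster, can be sketched in pure Python (the real method builds these indicators as tensors from pruned span labels; `None` stands in here for a span with no gold cluster):

```python
def antecedent_gold_labels(span_labels, antecedent_indices):
    """For each span, a 0/1 vector over [dummy] + candidate antecedents:
    1 where the candidate shares the span's gold cluster id; the dummy
    slot at position 0 is 1 only when no real antecedent matches."""
    rows = []
    for i, label in enumerate(span_labels):
        matches = [
            1 if label is not None and j >= 0 and span_labels[j] == label else 0
            for j in antecedent_indices[i]
        ]
        dummy = 0 if any(matches) else 1
        rows.append([dummy] + matches)
    return rows

labels = [0, None, 0]            # spans 0 and 2 belong to gold cluster 0
ante = [[-1, -2], [0, -1], [1, 0]]
gold = antecedent_gold_labels(labels, ante)
```

Here span 2 correctly points back to span 0, while spans with no valid coreferent predecessor get the dummy label.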