| code (string, 66-870k chars) | docstring (string, 19-26.7k chars) | func_name (string, 1-138 chars) | language (1 class) | repo (string, 7-68 chars) | path (string, 5-324 chars) | url (string, 46-389 chars) | license (7 classes) |
|---|---|---|---|---|---|---|---|
def _compute_coreference_scores(self,
pairwise_embeddings ,
top_span_mention_scores ,
antecedent_mention_scores ,
antecede... |
Computes scores for every pair of spans. Additionally, a dummy label is included,
representing the decision that the span is not coreferent with anything. For the dummy
label, the score is always zero. For the true antecedent spans, the score consists of
the pairwise antecedent score an... | _compute_coreference_scores | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/coreference_resolution/coref.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/coreference_resolution/coref.py | MIT |
def forward(self, # type: ignore
source_tokens ,
target_tokens = None) :
# pylint: disable=arguments-differ
u"""
Decoder logic for producing the entire target sequence.
Pa... |
Decoder logic for producing the entire target sequence.
Parameters
----------
source_tokens : Dict[str, torch.LongTensor]
The output of ``TextField.as_array()`` applied on the source ``TextField``. This will be
passed through a ``TextFieldEmbedder`` and then throu... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/encoder_decoders/simple_seq2seq.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/encoder_decoders/simple_seq2seq.py | MIT |
def _prepare_decode_step_input(self,
input_indices ,
decoder_hidden_state = None,
encoder_outputs = None,
encoder_outputs_mask ... |
Given the input indices for the current timestep of the decoder, and all the encoder
outputs, compute the input at the current timestep. Note: This method is agnostic to
whether the indices are gold indices or the predictions made by the decoder at the last
timestep. So, this can be us... | _prepare_decode_step_input | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/encoder_decoders/simple_seq2seq.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/encoder_decoders/simple_seq2seq.py | MIT |
def _get_loss(logits ,
targets ,
target_mask ) :
u"""
Takes logits (unnormalized outputs from the decoder) of size (batch_size,
num_decoding_steps, num_classes), target indices of size (batc... |
Takes logits (unnormalized outputs from the decoder) of size (batch_size,
num_decoding_steps, num_classes), target indices of size (batch_size, num_decoding_steps+1)
and corresponding masks of size (batch_size, num_decoding_steps+1), and computes cross
entropy loss while taking the... | _get_loss | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/encoder_decoders/simple_seq2seq.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/encoder_decoders/simple_seq2seq.py | MIT |
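The `_get_loss` row above describes a masked, shifted-target cross entropy: the logits at step *t* are scored against target *t + 1*, and padded positions are excluded from the average. A minimal pure-Python sketch of that idea for a single sequence (illustrative only, not the AllenNLP implementation; all names here are hypothetical):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def masked_sequence_loss(logits, targets, mask):
    """Average negative log-likelihood over the unmasked decoding steps.

    logits:  [num_steps][num_classes]   (one sequence, for clarity)
    targets: [num_steps + 1]            (includes the start symbol at index 0)
    mask:    [num_steps + 1]            (1 = real token, 0 = padding)
    Step t's logits are scored against targets[t + 1] (the "shifted" target).
    """
    total, count = 0.0, 0
    for t, step_logits in enumerate(logits):
        if mask[t + 1] == 0:  # skip padded positions entirely
            continue
        probs = softmax(step_logits)
        total += -math.log(probs[targets[t + 1]])
        count += 1
    return total / max(count, 1)
```

With uniform logits the loss reduces to `log(num_classes)`, which makes a convenient sanity check.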
def decode(self, output_dict ) :
u"""
This method overrides ``Model.decode``, which gets called after ``Model.forward``, at test
time, to finalize predictions. The logic for the decoder part of the encoder-decoder lives
within the ``forwa... |
This method overrides ``Model.decode``, which gets called after ``Model.forward``, at test
time, to finalize predictions. The logic for the decoder part of the encoder-decoder lives
within the ``forward`` method.
This method trims the output predictions to the first end symbol, replace... | decode | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/encoder_decoders/simple_seq2seq.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/encoder_decoders/simple_seq2seq.py | MIT |
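The `decode` row above finalizes predictions by mapping predicted indices back to tokens and trimming at the first end symbol. A hedged sketch of just that trimming step (the `@end@` marker and the helper name are assumptions for illustration, not AllenNLP's actual constants):

```python
END_SYMBOL = "@end@"  # hypothetical end-of-sequence marker

def trim_and_map(predicted_indices, index_to_token):
    """Map predicted indices to tokens, stopping at the first end symbol."""
    tokens = []
    for idx in predicted_indices:
        token = index_to_token[idx]
        if token == END_SYMBOL:
            break  # everything after the first end symbol is discarded
        tokens.append(token)
    return tokens
```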
def ensemble(subresults ) :
u"""
Identifies the best prediction given the results from the submodels.
Parameters
----------
index : int
The index within this index to ensemble
subresults : List[Dict[str, torch.Tensor]]
Returns
-----... |
Identifies the best prediction given the results from the submodels.
Parameters
----------
index : int
The index within this index to ensemble
subresults : List[Dict[str, torch.Tensor]]
Returns
-------
The index of the best submodel.
| ensemble | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/reading_comprehension/bidaf_ensemble.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/reading_comprehension/bidaf_ensemble.py | MIT |
def forward(self, # type: ignore
sentence ,
worlds ,
actions ,
agenda ,
identifier = None,
labels ... |
Decoder logic for producing type constrained target sequences that maximize coverage of
their respective agendas, and minimize a denotation based loss.
| forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_coverage_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_coverage_semantic_parser.py | MIT |
def _get_checklist_info(self,
agenda ,
all_actions ):
... |
Takes an agenda and a list of all actions and returns a target checklist against which the
checklist at each state will be compared to compute a loss, indices of ``terminal_actions``,
and a ``checklist_mask`` that indicates which of the terminal actions are relevant for
checklist loss c... | _get_checklist_info | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_coverage_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_coverage_semantic_parser.py | MIT |
def _get_state_cost(self, state ) :
u"""
        Return the cost of a finished state. Since it is a finished state, the group size will be 1,
and hence we'll return just one cost.
"""
if not state.is_finished():
raise RuntimeError(u"_get_state_co... |
Return the cost of a finished state. Since it is a finished state, the group size will be 1,
and hence we'll return just one cost.
| _get_state_cost | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_coverage_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_coverage_semantic_parser.py | MIT |
def _get_state_info(self, state) :
u"""
        This method is here for debugging purposes, in case you want to look at what the model
is learning. It may be inefficient to call it while training the model on real data.
"""
if len(state.batch_indices) == 1 and state... |
This method is here for debugging purposes, in case you want to look at what the model
is learning. It may be inefficient to call it while training the model on real data.
| _get_state_info | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_coverage_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_coverage_semantic_parser.py | MIT |
def is_finished(self) :
u"""This method is identical to ``WikiTablesDecoderState.is_finished``."""
if len(self.batch_indices) != 1:
raise RuntimeError(u"is_finished() is only defined with a group_size of 1")
return self.grammar_state[0].is_finished() | This method is identical to ``WikiTablesDecoderState.is_finished``. | is_finished | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_state.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_state.py | MIT |
def take_step(self, # type: ignore
state ,
max_actions = None,
allowed_actions = None) :
u"""
Given a ``NlvrDecoderState``, returns a list of next states that are sorted by their scores.... |
Given a ``NlvrDecoderState``, returns a list of next states that are sorted by their scores.
This method is very similar to ``WikiTablesDecoderStep._take_step``. The differences are
that depending on the type of supervision being used, we may not have a notion of
"allowed actions" here,... | take_step | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | MIT |
def _get_predicted_embedding_addition(state ) :
u"""
Computes checklist balance, uses it to get the embeddings of desired terminal actions yet to
be produced by the decoder, and returns their sum for the decoder to add it to the predicted
embedding to bias... |
Computes checklist balance, uses it to get the embeddings of desired terminal actions yet to
be produced by the decoder, and returns their sum for the decoder to add it to the predicted
embedding to bias the prediction towards missing actions.
| _get_predicted_embedding_addition | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | MIT |
def _get_next_state_info_with_agenda(
state ,
considered_actions ,
action_logits ,
action_mask ):
... |
We return a list of log probabilities and checklist states corresponding to next actions that are
not padding. This method is applicable to the case where we do not have target action
sequences and are relying on agendas for training.
| _get_next_state_info_with_agenda | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | MIT |
def _get_next_state_info_without_agenda(state ,
considered_actions ,
action_logits ,
action_mask ):
... |
We return a list of log probabilities corresponding to actions that are not padding. This
method is related to the training scenario where we have target action sequences for
training.
| _get_next_state_info_without_agenda | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | MIT |
def attend_on_sentence(self,
query ,
encoder_outputs ,
encoder_output_mask ) :
u"""
This method is almost identical to ``WikiTablesDecoderStep.atten... |
This method is almost identical to ``WikiTablesDecoderStep.attend_on_question``. We just
don't return the attention weights.
Given a query (which is typically the decoder hidden state), compute an attention over the
output of the sentence encoder, and return a weighted sum of the senten... | attend_on_sentence | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | MIT |
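The `attend_on_sentence` row above computes an attention over the encoder outputs given a query and returns the attention-weighted sum. A pure-Python dot-product-attention sketch of the same pattern (illustrative; the actual similarity function in the library is configurable, and padded positions are masked out here by forcing their scores to negative infinity):

```python
import math

def attend(query, encoder_outputs, mask):
    """Dot-product attention: softmax(query . outputs) -> weighted sum.

    query:           [dim]
    encoder_outputs: [seq_len][dim]
    mask:            [seq_len]  (1 = real token, 0 = padding)
    Returns (attended_vector, attention_weights).
    """
    scores = []
    for out, m in zip(encoder_outputs, mask):
        s = sum(q * o for q, o in zip(query, out))
        scores.append(s if m else float("-inf"))  # padding gets zero weight
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(query)
    attended = [sum(w * out[d] for w, out in zip(weights, encoder_outputs))
                for d in range(dim)]
    return attended, weights
```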
def _get_action_embeddings(state ,
actions_to_embed ) :
u"""
This method is identical to ``WikiTablesDecoderStep._get_action_embeddings``
Returns an embedded representation for all actions in ``ac... |
This method is identical to ``WikiTablesDecoderStep._get_action_embeddings``
Returns an embedded representation for all actions in ``actions_to_embed``, using the state
in ``NlvrDecoderState``.
Parameters
----------
state : ``NlvrDecoderState``
The current s... | _get_action_embeddings | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | MIT |
def _compute_new_states(cls,
state ,
action_logprobs ,
hidden_state ,
memory_cell ,
action_embedding... |
This method is very similar to ``WikiTablesDecoderStep._compute_new_states``.
The difference here is that we also keep track of checklists if they are passed to this
method.
| _compute_new_states | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_decoder_step.py | MIT |
def forward(self, # type: ignore
sentence ,
worlds ,
actions ,
identifier = None,
target_action_sequences = None,
... |
Decoder logic for producing type constrained target sequences, trained to maximize marginal
likelihood over a set of approximate logical forms.
| forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_direct_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_direct_semantic_parser.py | MIT |
def _get_action_strings(cls,
possible_actions ,
action_indices ) :
u"""
Takes a list of possible actions and indices of decoded actions into those possible actions
... |
Takes a list of possible actions and indices of decoded actions into those possible actions
for a batch and returns sequences of action strings. We assume ``action_indices`` is a dict
mapping batch indices to k-best decoded sequence lists.
| _get_action_strings | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_semantic_parser.py | MIT |
def _embed_actions(self, actions ):
u"""
Given all of the possible actions for all batch instances, produce an embedding for them.
Th... |
Given all of the possible actions for all batch instances, produce an embedding for them.
There will be significant overlap in this list, as the production rules from the grammar
are shared across all batch instances. Our returned tensor has an embedding for each
`unique` action, so we... | _embed_actions | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_semantic_parser.py | MIT |
def decode(self, output_dict ) :
u"""
This method overrides ``Model.decode``, which gets called after ``Model.forward``, at test
time, to finalize predictions. We only transform the action string sequences into logical
forms here.
... |
This method overrides ``Model.decode``, which gets called after ``Model.forward``, at test
time, to finalize predictions. We only transform the action string sequences into logical
forms here.
| decode | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_semantic_parser.py | MIT |
def _check_state_denotations(self, state) :
u"""
Returns whether action history in the state evaluates to the correct denotations over all
worlds. Only defined when the state is finished.
"""
assert state.is_finished(), u"Cannot compute denotations for unfinished sta... |
Returns whether action history in the state evaluates to the correct denotations over all
worlds. Only defined when the state is finished.
| _check_state_denotations | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/nlvr/nlvr_semantic_parser.py | MIT |
def _get_predicted_embedding_addition(state ,
unlinked_terminal_indices ,
unlinked_checklist_balance ) :
u"""
Gets the embeddings of desired unlinked terminal ... |
Gets the embeddings of desired unlinked terminal actions yet to be produced by the decoder,
and returns their sum for the decoder to add it to the predicted embedding to bias the
prediction towards missing actions.
| _get_predicted_embedding_addition | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_decoder_step.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_decoder_step.py | MIT |
def attend_on_question(self,
query ,
encoder_outputs ,
encoder_output_mask ) :
u"""
Given a query (which is typically the decoder hidden state), com... |
Given a query (which is typically the decoder hidden state), compute an attention over the
output of the question encoder, and return a weighted sum of the question representations
given this attention. We also return the attention weights themselves.
This is a simple computation, but... | attend_on_question | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_decoder_step.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_decoder_step.py | MIT |
def _get_actions_to_consider(state ):
u"""
The ``WikiTablesDecoderState`` d... |
The ``WikiTablesDecoderState`` defines a set of actions that are valid in the current
grammar state for each group element. This method gets that set of actions and separates
them into actions that can be embedded and actions that need to be linked.
This method goes through all of the... | _get_actions_to_consider | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_decoder_step.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_decoder_step.py | MIT |
def _get_action_embeddings(state ,
actions_to_embed ):
... |
Returns an embedded representation for all actions in ``actions_to_embed``, using the state
in ``WikiTablesDecoderState``.
Parameters
----------
state : ``WikiTablesDecoderState``
The current state. We'll use this to get the global action embeddings.
action... | _get_action_embeddings | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_decoder_step.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_decoder_step.py | MIT |
def _get_entity_action_logits(self,
state ,
actions_to_link ,
attention_weights ,
linked_checklist_balance = None): ... |
Returns scores for each action in ``actions_to_link`` that are derived from the linking
scores between the question and the table entities, and the current attention on the
question. The intuition is that if we're paying attention to a particular word in the
question, we should tend to... | _get_entity_action_logits | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_decoder_step.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_decoder_step.py | MIT |
def forward(self, # type: ignore
question ,
table ,
world ,
actions ,
agenda ,
example_lisp_stri... |
Parameters
----------
question : Dict[str, torch.LongTensor]
The output of ``TextField.as_array()`` applied on the question ``TextField``. This will
be passed through a ``TextFieldEmbedder`` and then through an encoder.
table : ``Dict[str, torch.LongTensor]``
... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_erm_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_erm_semantic_parser.py | MIT |
def _get_checklist_info(agenda ,
all_actions ,
terminal_productions ,
max_num_terminals ) :
u"""
Takes an agenda, ... |
Takes an agenda, a list of all actions, a set of terminal productions in the corresponding
world, and a length to pad the checklist vectors to, and returns a target checklist against
which the checklist at each state will be compared to compute a loss, indices of
``terminal_actions``, a... | _get_checklist_info | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_erm_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_erm_semantic_parser.py | MIT |
def get_metrics(self, reset = False) :
u"""
The base class returns a dict with dpd accuracy, denotation accuracy, and logical form
percentage metrics. We add the agenda coverage metric here.
"""
metrics = super(WikiTablesErmSemanticParser, self).get_metri... |
The base class returns a dict with dpd accuracy, denotation accuracy, and logical form
percentage metrics. We add the agenda coverage metric here.
| get_metrics | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_erm_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_erm_semantic_parser.py | MIT |
def forward(self, # type: ignore
question ,
table ,
world ,
actions ,
example_lisp_string = None,
targ... |
In this method we encode the table entities, link them to words in the question, then
encode the question. Then we set up the initial state for the decoder, and pass that
state off to either a DecoderTrainer, if we're training, or a BeamSearch for inference,
if we're not.
Param... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_mml_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_mml_semantic_parser.py | MIT |
def _get_initial_state_and_scores(self,
question ,
table ,
world ,
actions ... |
Does initial preparation and creates an initial state for both the semantic parsers. Note
that the checklist state is optional, and the ``WikiTablesMmlParser`` is not expected to
pass it.
| _get_initial_state_and_scores | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | MIT |
def _get_neighbor_indices(worlds ,
num_entities ,
tensor ) :
u"""
This method returns the indices of each entity's neighbors. A tensor
is accepted as a parameter for copying purp... |
This method returns the indices of each entity's neighbors. A tensor
is accepted as a parameter for copying purposes.
Parameters
----------
worlds : ``List[WikiTablesWorld]``
num_entities : ``int``
tensor : ``torch.Tensor``
Used for copying the const... | _get_neighbor_indices | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | MIT |
def _get_type_vector(worlds ,
num_entities ,
tensor ) :
u"""
Produces the one hot encoding for each entity's type. In addition,
a map from a flattened entity index t... |
Produces the one hot encoding for each entity's type. In addition,
a map from a flattened entity index to type is returned to combine
entity type operations into one method.
Parameters
----------
worlds : ``List[WikiTablesWorld]``
num_entities : ``int``
... | _get_type_vector | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | MIT |
def _get_linking_probabilities(self,
worlds ,
linking_scores ,
question_mask ,
entity_type_dict ) ... |
Produces the probability of an entity given a question word and type. The logic below
separates the entities by type since the softmax normalization term sums over entities
of a single type.
Parameters
----------
worlds : ``List[WikiTablesWorld]``
linking_scores... | _get_linking_probabilities | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | MIT |
def get_metrics(self, reset = False) :
u"""
We track three metrics here:
1. dpd_acc, which is the percentage of the time that our best output action sequence is
in the set of action sequences provided by DPD. This is an easy-to-compute lower bound
... |
We track three metrics here:
1. dpd_acc, which is the percentage of the time that our best output action sequence is
in the set of action sequences provided by DPD. This is an easy-to-compute lower bound
on denotation accuracy for the set of examples where we actually have... | get_metrics | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | MIT |
def _embed_actions(self, actions ):
... |
Given all of the possible actions for all batch instances, produce an embedding for them.
There will be significant overlap in this list, as the production rules from the grammar
are shared across all batch instances. Our returned tensor has an embedding for each
`unique` action, so we... | _embed_actions | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | MIT |
def _map_entity_productions(linking_scores ,
worlds ,
actions ):
... |
Constructs a map from ``(batch_index, action_index)`` to ``(batch_index * entity_index)``.
That is, some actions correspond to terminal productions of entities from our table. We
need to find those actions and map them to their corresponding entity indices, where the
entity index is it... | _map_entity_productions | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | MIT |
def decode(self, output_dict ) :
u"""
This method overrides ``Model.decode``, which gets called after ``Model.forward``, at test
time, to finalize predictions. This is (confusingly) a separate notion from the "decoder"
in "encoder/decode... |
This method overrides ``Model.decode``, which gets called after ``Model.forward``, at test
time, to finalize predictions. This is (confusingly) a separate notion from the "decoder"
in "encoder/decoder", where that decoder logic lives in ``WikiTablesDecoderStep``.
This method trims the... | decode | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/models/semantic_parsing/wikitables/wikitables_semantic_parser.py | MIT |
def forward(self, inputs , # pylint: disable=arguments-differ
# pylint: disable=unused-argument
initial_state = None) :
u"""
Parameters
----------
inputs : ``PackedSequence``, required.
... |
Parameters
----------
inputs : ``PackedSequence``, required.
A batch first ``PackedSequence`` to run the stacked LSTM over.
initial_state : Tuple[torch.Tensor, torch.Tensor], optional, (default = None)
Currently, this is ignored.
Returns
-------
... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/alternating_highway_lstm.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/alternating_highway_lstm.py | MIT |
def forward(self, # pylint: disable=arguments-differ
inputs ,
initial_state = None):
u"""
Parameters
----------
inputs : PackedSequence, required.
A tensor of shape (batch_size, num_t... |
Parameters
----------
inputs : PackedSequence, required.
A tensor of shape (batch_size, num_timesteps, input_size)
to apply the LSTM over.
initial_state : Tuple[torch.Tensor, torch.Tensor], optional, (default = None)
A tuple (state, memory) represent... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/augmented_lstm.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/augmented_lstm.py | MIT |
def multi_perspective_match(vector1 ,
vector2 ,
weight ) :
u"""
Calculate multi-perspective cosine matching between time-steps of vectors
of the same length.
Parameters
... |
Calculate multi-perspective cosine matching between time-steps of vectors
of the same length.
Parameters
----------
vector1 : ``torch.Tensor``
A tensor of shape ``(batch, seq_len, hidden_size)``
vector2 : ``torch.Tensor``
A tensor of shape ``(batch, seq_len or 1, hidden_size)``... | multi_perspective_match | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/bimpm_matching.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/bimpm_matching.py | MIT |
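The `multi_perspective_match` row above computes, for each perspective weight vector, the cosine similarity of the two element-wise reweighted inputs. A scalar, single-timestep pure-Python sketch of that core operation (not the batched torch implementation; the epsilon guard is an assumption):

```python
import math

def cosine(u, v, eps=1e-8):
    # Cosine similarity with a small epsilon to avoid division by zero.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + eps)

def multi_perspective_match(v1, v2, weights):
    """One matching value per perspective: cosine similarity of the two
    vectors after element-wise reweighting by that perspective's weights.

    weights: [num_perspectives][hidden_size]
    """
    return [cosine([w * a for w, a in zip(wk, v1)],
                   [w * b for w, b in zip(wk, v2)])
            for wk in weights]
```

Identical inputs give a matching value of (approximately) 1 under every perspective, which is a quick sanity check.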
def multi_perspective_match_pairwise(vector1 ,
vector2 ,
weight ,
eps = 1e-8) :
u"""
Calculate multi-perspective cosine matching between eac... |
Calculate multi-perspective cosine matching between each time step of
one vector and each time step of another vector.
Parameters
----------
vector1 : ``torch.Tensor``
A tensor of shape ``(batch, seq_len1, hidden_size)``
vector2 : ``torch.Tensor``
A tensor of shape ``(batch, se... | multi_perspective_match_pairwise | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/bimpm_matching.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/bimpm_matching.py | MIT |
def forward(self,
context_1 ,
mask_1 ,
context_2 ,
mask_2 ) :
# pylint: disable=arguments-differ
u"""
Given the forward (or backward... |
Given the forward (or backward) representations of sentence1 and sentence2, apply four bilateral
matching functions between them in one direction.
Parameters
----------
context_1 : ``torch.Tensor``
Tensor of shape (batch_size, seq_len1, hidden_dim) representing the ... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/bimpm_matching.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/bimpm_matching.py | MIT |
def allowed_transitions(constraint_type , labels ) :
u"""
Given labels and a constraint type, returns the allowed transitions. It will
additionally include transitions for the start and end states, which are used
by the conditional random field.
Parameters... |
Given labels and a constraint type, returns the allowed transitions. It will
additionally include transitions for the start and end states, which are used
by the conditional random field.
Parameters
----------
constraint_type : ``str``, required
Indicates which constraint to apply. Cur... | allowed_transitions | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/conditional_random_field.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/conditional_random_field.py | MIT |
def is_transition_allowed(constraint_type ,
from_tag ,
from_entity ,
to_tag ,
to_entity ):
u"""
Given a constraint type and strings ``from_tag`` and ``to_tag`` that
represent the origi... |
Given a constraint type and strings ``from_tag`` and ``to_tag`` that
represent the origin and destination of the transition, return whether
the transition is allowed under the given constraint type.
Parameters
----------
constraint_type : ``str``, required
Indicates which constraint to... | is_transition_allowed | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/conditional_random_field.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/conditional_random_field.py | MIT |
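The transition rule this docstring describes can be illustrated with a torn-down, torch-free sketch covering only the BIO scheme (the real method also handles IOB1 and BIOUL plus the special START/END states; `bio_transition_allowed` is a hypothetical helper, not part of the source):

```python
def bio_transition_allowed(from_tag, from_entity, to_tag, to_entity):
    """BIO-only sketch of the constraint check described above.

    An I-x tag is only reachable from B-x or I-x with the same entity
    label x; O and any B-* tag are always reachable.
    """
    if to_tag in ("O", "B"):
        return True
    if to_tag == "I":
        return from_tag in ("B", "I") and from_entity == to_entity
    return False
```

For example, `B-PER -> I-PER` is allowed, while `O -> I-PER` and `B-PER -> I-LOC` are not.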
def _input_likelihood(self, logits , mask ) :
u"""
Computes the (batch_size,) denominator term for the log-likelihood, which is the
sum of the likelihoods across all possible state sequences.
"""
batch_size, sequence_length, num_tags = log... |
Computes the (batch_size,) denominator term for the log-likelihood, which is the
sum of the likelihoods across all possible state sequences.
| _input_likelihood | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/conditional_random_field.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/conditional_random_field.py | MIT |
def _joint_likelihood(self,
logits ,
tags ,
mask ) :
u"""
Computes the numerator term for the log-likelihood, which is just score(inputs, tags)
"""
batc... |
Computes the numerator term for the log-likelihood, which is just score(inputs, tags)
| _joint_likelihood | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/conditional_random_field.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/conditional_random_field.py | MIT |
def viterbi_tags(self,
logits ,
mask ) :
u"""
Uses viterbi algorithm to find most likely tags for the given inputs.
If constraints are applied, disallows all other transitions.
"""
... |
Uses viterbi algorithm to find most likely tags for the given inputs.
If constraints are applied, disallows all other transitions.
| viterbi_tags | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/conditional_random_field.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/conditional_random_field.py | MIT |
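The Viterbi decoding this docstring refers to can be sketched over plain nested lists, ignoring batching, masking, and constraints (`viterbi_decode` and its toy scores are illustrative, not the library's API):

```python
def viterbi_decode(emissions, transitions):
    """Return the highest-scoring tag path and its score.

    emissions:   per-timestep lists of per-tag scores.
    transitions: transitions[i][j] = score of moving from tag i to tag j.
    """
    num_tags = len(emissions[0])
    # scores[t][j]: best score of any path ending in tag j at step t
    scores = [emissions[0][:]]
    backpointers = []
    for emission in emissions[1:]:
        prev = scores[-1]
        step_scores, step_back = [], []
        for j in range(num_tags):
            best_i = max(range(num_tags),
                         key=lambda i: prev[i] + transitions[i][j])
            step_scores.append(prev[best_i] + transitions[best_i][j] + emission[j])
            step_back.append(best_i)
        scores.append(step_scores)
        backpointers.append(step_back)
    # Trace the best final tag back through the stored backpointers.
    best_last = max(range(num_tags), key=lambda j: scores[-1][j])
    path = [best_last]
    for step_back in reversed(backpointers):
        path.append(step_back[path[-1]])
    return list(reversed(path)), scores[-1][best_last]
```

With uniform transitions the decoder simply picks the argmax tag at each step.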
def forward(self, # pylint: disable=arguments-differ
inputs ,
word_inputs = None) :
u"""
Parameters
----------
inputs: ``torch.Tensor``, required.
Shape ``(batch_siz... |
Parameters
----------
inputs: ``torch.Tensor``, required.
Shape ``(batch_size, timesteps, 50)`` of character ids representing the current batch.
word_inputs : ``torch.Tensor``, required.
If you passed a cached vocab, you can in addition pass a tensor of shape
... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/elmo.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/elmo.py | MIT |
def batch_to_ids(batch ) :
u"""
Converts a batch of tokenized sentences to a tensor representing the sentences with encoded characters
(len(batch), max sentence length, max word length).
Parameters
----------
batch : ``List[List[str]]``, required
A list of... |
Converts a batch of tokenized sentences to a tensor representing the sentences with encoded characters
(len(batch), max sentence length, max word length).
Parameters
----------
batch : ``List[List[str]]``, required
A list of tokenized sentences.
Returns
-------
A tensor of... | batch_to_ids | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/elmo.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/elmo.py | MIT |
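The output shape `batch_to_ids` describes (pad every sentence to the longest one, and every word to a fixed character width) can be mimicked without torch as a nested-list builder; note the character-id scheme below is a stand-in, not ELMo's real 50-character encoding:

```python
def batch_to_char_ids(batch, max_word_length=5, pad_id=0):
    """Map tokenized sentences to a (len(batch), max_sent_len, max_word_length)
    nested list of character ids, zero-padded like the real utility."""
    max_sent_len = max(len(sentence) for sentence in batch)
    out = []
    for sentence in batch:
        rows = []
        for word in sentence:
            chars = [ord(c) % 256 + 1 for c in word[:max_word_length]]
            rows.append(chars + [pad_id] * (max_word_length - len(chars)))
        # Pad short sentences with all-zero "words".
        rows += [[pad_id] * max_word_length
                 for _ in range(max_sent_len - len(sentence))]
        out.append(rows)
    return out
```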
def forward(self, inputs ) : # pylint: disable=arguments-differ
u"""
Compute context insensitive token embeddings for ELMo representations.
Parameters
----------
inputs: ``torch.Tensor``
Shape ``(batch_size, sequence_length, 50... |
Compute context insensitive token embeddings for ELMo representations.
Parameters
----------
inputs: ``torch.Tensor``
Shape ``(batch_size, sequence_length, 50)`` of character ids representing the
current batch.
Returns
-------
Dict with ... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/elmo.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/elmo.py | MIT |
def forward(self, # pylint: disable=arguments-differ
inputs ,
word_inputs = None) :
u"""
Parameters
----------
inputs: ``torch.Tensor``, required.
Shape ``(batch_s... |
Parameters
----------
inputs: ``torch.Tensor``, required.
Shape ``(batch_size, timesteps, 50)`` of character ids representing the current batch.
word_inputs : ``torch.Tensor``, required.
If you passed a cached vocab, you can in addition pass a tensor of shape ``(... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/elmo.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/elmo.py | MIT |
def create_cached_cnn_embeddings(self, tokens ) :
u"""
Given a list of tokens, this method precomputes word representations
by running just the character convolutions and highway layers of elmo,
essentially creating uncontextual word vectors. On subsequent forward passes... |
Given a list of tokens, this method precomputes word representations
by running just the character convolutions and highway layers of elmo,
essentially creating uncontextual word vectors. On subsequent forward passes,
the word ids are looked up from an embedding, rather than being compu... | create_cached_cnn_embeddings | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/elmo.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/elmo.py | MIT |
def forward(self, # pylint: disable=arguments-differ
inputs ,
mask ) :
"""
Parameters
----------
inputs : ``torch.Tensor``, required.
A Tensor of shape ``(batch_size, sequence_length, hidden_size)``... |
Parameters
----------
inputs : ``torch.Tensor``, required.
A Tensor of shape ``(batch_size, sequence_length, hidden_size)``.
mask : ``torch.LongTensor``, required.
A binary mask of shape ``(batch_size, sequence_length)`` representing the
non-padded el... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/elmo_lstm.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/elmo_lstm.py | MIT |
def _lstm_forward(self,
inputs ,
initial_state = None) \
:
"""
Parameters
----------
inputs : ``PackedSequence``, re... |
Parameters
----------
inputs : ``PackedSequence``, required.
A batch first ``PackedSequence`` to run the stacked LSTM over.
initial_state : ``Tuple[torch.Tensor, torch.Tensor]``, optional, (default = None)
A tuple (state, memory) representing the initial hidden s... | _lstm_forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/elmo_lstm.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/elmo_lstm.py | MIT |
def load_weights(self, weight_file ) :
"""
Load the pre-trained weights from the file.
"""
requires_grad = self.requires_grad
with h5py.File(cached_path(weight_file), 'r') as fin:
for i_layer, lstms in enumerate(
zip(self.forward_layers... |
Load the pre-trained weights from the file.
| load_weights | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/elmo_lstm.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/elmo_lstm.py | MIT |
def sort_and_run_forward(self,
module
,
inputs ,
mask ,
... |
This function exists because Pytorch RNNs require that their inputs be sorted
before being passed as input. As all of our Seq2xxxEncoders use this functionality,
it is provided in a base class. This method can be called on any module which
takes as input a ``PackedSequence`` and some ``... | sort_and_run_forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/encoder_base.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/encoder_base.py | MIT |
def _get_initial_states(self,
batch_size ,
num_valid ,
sorting_indices ) :
u"""
Returns an initial state for use in an RNN. Additionally, this method handles
the batc... |
Returns an initial state for use in an RNN. Additionally, this method handles
the batch size changing across calls by mutating the state to append initial states
for new elements in the batch. Finally, it also handles sorting the states
with respect to the sequence lengths of elements i... | _get_initial_states | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/encoder_base.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/encoder_base.py | MIT |
def _update_states(self,
final_states ,
restoration_indices ) :
u"""
After the RNN has run forward, the states need to be updated.
This method just sets the state to the updated new state, performing
se... |
After the RNN has run forward, the states need to be updated.
This method just sets the state to the updated new state, performing
several pieces of book-keeping along the way - namely, unsorting the
states and ensuring that the states of completely padded sequences are
not upda... | _update_states | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/encoder_base.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/encoder_base.py | MIT |
def forward(self, input_tensor):
# pylint: disable=arguments-differ
u"""
Apply dropout to input tensor.
Parameters
----------
input_tensor: ``torch.FloatTensor``
A tensor of shape ``(batch_size, num_timesteps, embedding_dim)``
Returns
-------... |
Apply dropout to input tensor.
Parameters
----------
input_tensor: ``torch.FloatTensor``
A tensor of shape ``(batch_size, num_timesteps, embedding_dim)``
Returns
-------
output: ``torch.FloatTensor``
A tensor of shape ``(batch_size, num_... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/input_variational_dropout.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/input_variational_dropout.py | MIT |
def forward(self, # pylint: disable=arguments-differ
inputs ,
batch_lengths ,
initial_state = None):
u"""
Parameters
----------
inputs : ``torch.FloatTensor``, requir... |
Parameters
----------
inputs : ``torch.FloatTensor``, required.
A tensor of shape (batch_size, num_timesteps, input_size)
to apply the LSTM over.
batch_lengths : ``List[int]``, required.
A list of length batch_size containing the lengths of the sequen... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/lstm_cell_with_projection.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/lstm_cell_with_projection.py | MIT |
def forward(self, tensors , # pylint: disable=arguments-differ
mask = None) :
u"""
        Compute a weighted average of the ``tensors``. The input tensors can be any shape
with at least two dimensions, but must all be the same shape.
... |
        Compute a weighted average of the ``tensors``. The input tensors can be any shape
with at least two dimensions, but must all be the same shape.
When ``do_layer_norm=True``, the ``mask`` is required input. If the ``tensors`` are
dimensioned ``(dim_0, ..., dim_{n-1}, dim_n)``, then the... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/scalar_mix.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/scalar_mix.py | MIT |
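Setting aside layer norm and masking, the scalar mix described above reduces to a softmax-normalised weighted sum scaled by ``gamma``; a pure-Python sketch over flat lists (`scalar_mix` here is a simplified stand-in for the module's forward pass):

```python
import math

def scalar_mix(tensors, weights, gamma=1.0):
    """Compute gamma * sum_k softmax(weights)_k * tensors[k], elementwise."""
    exps = [math.exp(w) for w in weights]
    total = sum(exps)
    norm = [e / total for e in exps]
    mixed = [0.0] * len(tensors[0])
    for w, tensor in zip(norm, tensors):
        for i, value in enumerate(tensor):
            mixed[i] += w * value
    return [gamma * v for v in mixed]
```

With equal weights this is just the mean of the layers, scaled by ``gamma``.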
def forward(self, # pylint: disable=arguments-differ
span_embeddings ,
span_mask ,
num_spans_to_keep ):
... |
Extracts the top-k scoring spans with respect to the scorer. We additionally return
the indices of the top-k in their original order, not ordered by score, so that we
can rely on the ordering to consider the previous k spans as antecedents for each
span later.
Parameters
... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/span_pruner.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/span_pruner.py | MIT |
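The pruning behaviour described above (keep the top-k scores, but report the survivors in their original order, not score order) is easy to show in pure Python (`prune_spans` is an illustrative helper, not the module's API):

```python
def prune_spans(scores, k):
    """Return indices of the k highest-scoring spans, sorted back into
    their original order, plus the corresponding scores."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    kept = sorted(top)  # restore original span ordering
    return kept, [scores[i] for i in kept]
```

Keeping the original ordering is what lets later code treat "the previous k spans" as candidate antecedents.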
def forward(self, # pylint: disable=arguments-differ
inputs ,
initial_state = None):
u"""
Parameters
----------
inputs : ``PackedSequence``, required.
A batch first ``PackedSequence``... |
Parameters
----------
inputs : ``PackedSequence``, required.
A batch first ``PackedSequence`` to run the stacked LSTM over.
initial_state : Tuple[torch.Tensor, torch.Tensor], optional, (default = None)
A tuple (state, memory) representing the initial hidden state... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/stacked_alternating_lstm.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/stacked_alternating_lstm.py | MIT |
def forward(self, # pylint: disable=arguments-differ
inputs ,
initial_state = None):
u"""
Parameters
----------
inputs : ``PackedSequence``, required.
A batch first ``PackedSequence``... |
Parameters
----------
inputs : ``PackedSequence``, required.
A batch first ``PackedSequence`` to run the stacked LSTM over.
initial_state : Tuple[torch.Tensor, torch.Tensor], optional, (default = None)
A tuple (state, memory) representing the initial hidden state... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/stacked_bidirectional_lstm.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/stacked_bidirectional_lstm.py | MIT |
def forward(self, # pylint: disable=arguments-differ
inputs ,
mask = None) :
u"""
Parameters
----------
inputs : ``torch.FloatTensor``, required.
A tensor of shape (batch_size, timesteps, inpu... |
Parameters
----------
inputs : ``torch.FloatTensor``, required.
A tensor of shape (batch_size, timesteps, input_dim)
mask : ``torch.FloatTensor``, optional (default = None).
A tensor of shape (batch_size, timesteps).
Returns
-------
A ten... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/seq2seq_encoders/multi_head_self_attention.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/seq2seq_encoders/multi_head_self_attention.py | MIT |
def forward(self, tensor_1 , tensor_2 ) :
# pylint: disable=arguments-differ
u"""
Takes two tensors of the same shape, such as ``(batch_size, length_1, length_2,
embedding_dim)``. Computes a (possibly parameterized) similarity on the final dimens... |
Takes two tensors of the same shape, such as ``(batch_size, length_1, length_2,
embedding_dim)``. Computes a (possibly parameterized) similarity on the final dimension
and returns a tensor with one less dimension, such as ``(batch_size, length_1, length_2)``.
| forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/similarity_functions/similarity_function.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/similarity_functions/similarity_function.py | MIT |
def forward(self, # pylint: disable=arguments-differ
sequence_tensor ,
span_indices ,
sequence_mask = None,
span_indices_mask = None):
u"""
Given a sequence tensor, extr... |
Given a sequence tensor, extract spans and return representations of
them. Span representation can be computed in many different ways,
such as concatenation of the start and end spans, attention over the
vectors contained inside the span, etc.
Parameters
----------
... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/span_extractors/span_extractor.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/span_extractors/span_extractor.py | MIT |
def forward(self, # pylint: disable=arguments-differ
text_field_input ,
num_wrapping_dims = 0) :
u"""
Parameters
----------
text_field_input : ``Dict[str, torch.Tensor]``
A dictionary that was the ou... |
Parameters
----------
text_field_input : ``Dict[str, torch.Tensor]``
A dictionary that was the output of a call to ``TextField.as_tensor``. Each tensor in
here is assumed to have a shape roughly similar to ``(batch_size, sequence_length)``
(perhaps with an e... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/text_field_embedders/text_field_embedder.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/text_field_embedders/text_field_embedder.py | MIT |
def forward(self, # pylint: disable=arguments-differ
inputs ,
word_inputs = None) :
u"""
Parameters
----------
inputs: ``torch.Tensor``
Shape ``(batch_size, timesteps, 50)`` of character ids representin... |
Parameters
----------
inputs: ``torch.Tensor``
Shape ``(batch_size, timesteps, 50)`` of character ids representing the current batch.
word_inputs : ``torch.Tensor``, optional.
If you passed a cached vocab, you can in addition pass a tensor of shape
``... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/token_embedders/elmo_token_embedder.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/token_embedders/elmo_token_embedder.py | MIT |
def from_params(cls, vocab , params ) : # type: ignore
u"""
We need the vocabulary here to know how many items we need to embed, and we look for a
``vocab_namespace`` key in the parameter dictionary to know which vocabulary to use. If
you know beforehand... |
We need the vocabulary here to know how many items we need to embed, and we look for a
``vocab_namespace`` key in the parameter dictionary to know which vocabulary to use. If
you know beforehand exactly how many embeddings you need, or aren't using a vocabulary
mapping for the things g... | from_params | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/token_embedders/embedding.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/token_embedders/embedding.py | MIT |
def _read_pretrained_embeddings_file(file_uri ,
embedding_dim ,
vocab ,
namespace = u"tokens") :
u"""
    Returns an embedding matrix for the given vocabulary u... |
    Returns an embedding matrix for the given vocabulary using the pretrained embeddings
contained in the given file. Embeddings for tokens not found in the pretrained embedding file
are randomly initialized using a normal distribution with mean and standard deviation equal to
those of the pretrained embe... | _read_pretrained_embeddings_file | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/token_embedders/embedding.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/token_embedders/embedding.py | MIT |
def _read_embeddings_from_text_file(file_uri ,
embedding_dim ,
vocab ,
namespace = u"tokens") :
u"""
    Read pre-trained word vectors from a possibly compressed... |
    Read pre-trained word vectors from a possibly compressed text file, possibly contained
inside an archive with multiple files. The text file is assumed to be utf-8 encoded with
space-separated fields: [word] [dim 1] [dim 2] ...
Lines that contain more numerical tokens than ``embedding_dim`` raise a ... | _read_embeddings_from_text_file | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/token_embedders/embedding.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/token_embedders/embedding.py | MIT |
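The space-separated format this reader expects ([word] [dim 1] [dim 2] ...) can be parsed in isolation; a minimal sketch that skips rows whose dimensionality disagrees, where the real reader logs a warning (`parse_embedding_lines` is a hypothetical helper):

```python
def parse_embedding_lines(lines, embedding_dim):
    """Parse 'word v1 v2 ...' lines into a dict, skipping malformed rows."""
    embeddings = {}
    for line in lines:
        fields = line.rstrip().split(" ")
        if len(fields) - 1 != embedding_dim:
            continue  # wrong number of values for this word; skip it
        embeddings[fields[0]] = [float(x) for x in fields[1:]]
    return embeddings
```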
def _read_embeddings_from_hdf5(embeddings_filename ,
embedding_dim ,
vocab ,
namespace = u"tokens") :
u"""
Reads from a hdf5 formatted file. The embedding matrix is assumed to... |
Reads from a hdf5 formatted file. The embedding matrix is assumed to
be keyed by 'embedding' and of size ``(num_tokens, embedding_dim)``.
| _read_embeddings_from_hdf5 | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/token_embedders/embedding.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/token_embedders/embedding.py | MIT |
def __len__(self) :
u""" Hack for tqdm: no need for explicitly passing ``total=file.num_tokens`` """
if self.num_tokens:
return self.num_tokens
raise AttributeError(u'an object of type EmbeddingsTextFile has "len()" only if the underlying '
... | Hack for tqdm: no need for explicitly passing ``total=file.num_tokens`` | __len__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/token_embedders/embedding.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/token_embedders/embedding.py | MIT |
def _get_num_tokens_from_first_line(line ) :
u""" This function takes in input a string and if it contains 1 or 2 integers, it assumes the
largest one it the number of tokens. Returns None if the line doesn't match that pattern. """
fields = line.split(u' ')
        if 1 <= l... | This function takes a string as input and, if it contains 1 or 2 integers, assumes the
largest one is the number of tokens. Returns None if the line doesn't match that pattern. | _get_num_tokens_from_first_line | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/token_embedders/embedding.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/token_embedders/embedding.py | MIT |
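The header heuristic this docstring describes (one or two integers on the first line, largest is the token count) can be reproduced directly in pure Python:

```python
def num_tokens_from_first_line(line):
    """If the line holds one or two integers, treat the largest as the
    token count (the other being the embedding dim); else return None."""
    fields = line.split(" ")
    if 1 <= len(fields) <= 2:
        try:
            return max(int(field) for field in fields)
        except ValueError:
            return None  # not all fields were integers
    return None
```

This correctly treats a GloVe-style data line like `"the 0.1 0.2"` as a non-header.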
def forward(self, inputs , offsets ) :
u"""
Parameters
----------
inputs: ``torch.Tensor``, required
A ``(batch_size, num_timesteps)`` tensor representing the byte-pair encodings
for the current batch.
offsets: ``to... |
Parameters
----------
inputs: ``torch.Tensor``, required
A ``(batch_size, num_timesteps)`` tensor representing the byte-pair encodings
for the current batch.
offsets: ``torch.Tensor``, required
A ``(batch_size, max_sequence_length)`` tensor representi... | forward | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/modules/token_embedders/openai_transformer_embedder.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/modules/token_embedders/openai_transformer_embedder.py | MIT |
def uniform_unit_scaling(tensor , nonlinearity = u"linear"):
u"""
    An initialiser which preserves output variance for approximately Gaussian
distributed inputs. This boils down to initialising layers using a uniform
distribution in the range ``(-sqrt(3/dim[0]) * scale, sqrt(3 / dim[0]) *... |
    An initialiser which preserves output variance for approximately Gaussian
distributed inputs. This boils down to initialising layers using a uniform
distribution in the range ``(-sqrt(3/dim[0]) * scale, sqrt(3 / dim[0]) * scale)``, where
``dim[0]`` is equal to the input dimension of the parameter and th... | uniform_unit_scaling | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/initializers.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/initializers.py | MIT |
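The range formula in this docstring, ``(-sqrt(3/dim[0]) * scale, sqrt(3/dim[0]) * scale)``, can be sketched for a plain nested list instead of a torch tensor (`uniform_unit_scaling_bound` and `init_matrix` are illustrative names, and the scale shown corresponds to the "linear" nonlinearity):

```python
import math
import random

def uniform_unit_scaling_bound(input_dim, scale=1.0):
    """Half-width of the init range described in the docstring."""
    return math.sqrt(3.0 / input_dim) * scale

def init_matrix(rows, cols, scale=1.0, rng=random):
    """Fill a rows x cols weight matrix uniformly in (-bound, bound),
    where dim[0] (= rows) plays the role of the input dimension."""
    bound = uniform_unit_scaling_bound(rows, scale)
    return [[rng.uniform(-bound, bound) for _ in range(cols)]
            for _ in range(rows)]
```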
def block_orthogonal(tensor ,
split_sizes ,
gain = 1.0) :
u"""
An initializer which allows initializing model parameters in "blocks". This is helpful
in the case of recurrent models which use multiple gates applied to linear proj... |
An initializer which allows initializing model parameters in "blocks". This is helpful
in the case of recurrent models which use multiple gates applied to linear projections,
which can be computed efficiently if they are concatenated together. However, they are
separate parameters which should be initi... | block_orthogonal | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/initializers.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/initializers.py | MIT |
def lstm_hidden_bias(tensor ) :
u"""
Initialize the biases of the forget gate to 1, and all other gates to 0,
following Jozefowicz et al., An Empirical Exploration of Recurrent Network Architectures
"""
# gates are (b_hi|b_hf|b_hg|b_ho) of shape (4*hidden_size)
tensor.data.ze... |
Initialize the biases of the forget gate to 1, and all other gates to 0,
following Jozefowicz et al., An Empirical Exploration of Recurrent Network Architectures
| lstm_hidden_bias | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/initializers.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/initializers.py | MIT |
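The bias layout named in the comment, ``(b_hi|b_hf|b_hg|b_ho)`` of shape ``(4*hidden_size)``, makes the forget-gate slice easy to show on a plain list (a list-based sketch of the tensor operation):

```python
def lstm_hidden_bias(bias):
    """Zero everything, then set the forget-gate block (the second
    quarter of the bias vector) to 1, following Jozefowicz et al."""
    hidden_size = len(bias) // 4
    bias = [0.0] * len(bias)
    bias[hidden_size:2 * hidden_size] = [1.0] * hidden_size
    return bias
```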
def __init__(self,
initializers = None,
prevent_regexes = None) :
u"""
Parameters
----------
initializers : ``List[Tuple[str, Initializer]]``, optional (default = [])
A list mapping parameter r... |
Parameters
----------
initializers : ``List[Tuple[str, Initializer]]``, optional (default = [])
A list mapping parameter regexes to initializers. We will check each parameter against
each regex in turn, and apply the initializer paired with the first matching regex, if
... | __init__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/initializers.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/initializers.py | MIT |
def __call__(self, module ) :
u"""
Applies an initializer to all parameters in a module that match one of the regexes we were
given in this object's constructor. Does nothing to parameters that do not match.
Parameters
----------
module : torch.nn... |
Applies an initializer to all parameters in a module that match one of the regexes we were
given in this object's constructor. Does nothing to parameters that do not match.
Parameters
----------
module : torch.nn.Module, required.
The Pytorch module to apply the in... | __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/initializers.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/initializers.py | MIT |
def from_params(cls, params = ()) : # type: ignore
u"""
Converts a Params object into an InitializerApplicator. The json should
be formatted as follows::
[
["parameter_regex_match1",
{
... |
Converts a Params object into an InitializerApplicator. The json should
be formatted as follows::
[
["parameter_regex_match1",
{
"type": "normal"
"mean": 0.01
"std": 0.1
... | from_params | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/initializers.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/initializers.py | MIT |
def batch_tensor_dicts(tensor_dicts ,
remove_trailing_dimension = False) :
u"""
Takes a list of tensor dictionaries, where each dictionary is assumed to have matching keys,
and returns a single dictionary with all tensors w... |
Takes a list of tensor dictionaries, where each dictionary is assumed to have matching keys,
and returns a single dictionary with all tensors with the same key batched together.
Parameters
----------
tensor_dicts : ``List[Dict[str, torch.Tensor]]``
The list of tensor dictionaries to batch.... | batch_tensor_dicts | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
def get_mask_from_sequence_lengths(sequence_lengths , max_length ) :
u"""
Given a variable of shape ``(batch_size,)`` that represents the sequence lengths of each batch
element, this function returns a ``(batch_size, max_length)`` mask variable. For example, if
our input... |
Given a variable of shape ``(batch_size,)`` that represents the sequence lengths of each batch
element, this function returns a ``(batch_size, max_length)`` mask variable. For example, if
our input was ``[2, 2, 3]``, with a ``max_length`` of 4, we'd return
``[[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0]]`... | get_mask_from_sequence_lengths | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
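The worked example in this docstring (lengths ``[2, 2, 3]`` with ``max_length`` 4) is easy to verify with a torch-free version over nested lists (`mask_from_lengths` is an illustrative stand-in):

```python
def mask_from_lengths(sequence_lengths, max_length):
    """Build a (batch, max_length) binary mask from per-sequence lengths."""
    return [[1 if position < length else 0 for position in range(max_length)]
            for length in sequence_lengths]
```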
def sort_batch_by_length(tensor , sequence_lengths ):
u"""
Sort a batch first tensor by some specified lengths.
Parameters
----------
tensor : torch.FloatTensor, required.
A batch first Pytorch tensor.
sequence_lengths : torch.LongTensor, required.
A te... |
Sort a batch first tensor by some specified lengths.
Parameters
----------
tensor : torch.FloatTensor, required.
A batch first Pytorch tensor.
sequence_lengths : torch.LongTensor, required.
A tensor representing the lengths of some dimension of the tensor which
we want to s... | sort_batch_by_length | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
def get_final_encoder_states(encoder_outputs ,
mask ,
bidirectional = False) :
u"""
Given the output from a ``Seq2SeqEncoder``, with shape ``(batch_size, sequence_length,
encoding_dim)``, this method ret... |
Given the output from a ``Seq2SeqEncoder``, with shape ``(batch_size, sequence_length,
encoding_dim)``, this method returns the final hidden state for each element of the batch,
giving a tensor of shape ``(batch_size, encoding_dim)``. This is not as simple as
``encoder_outputs[:, -1]``, because the se... | get_final_encoder_states | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
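The indexing subtlety this docstring stresses (the final state sits at index ``sequence_length - 1`` for each row, not at ``-1``) is clearest in a small pure-Python version covering only the forward direction; the bidirectional case additionally splits each encoding in half:

```python
def final_encoder_states(encoder_outputs, mask):
    """Pick each sequence's last non-padded vector from
    (batch, seq_len, dim)-shaped nested lists."""
    finals = []
    for outputs, row_mask in zip(encoder_outputs, mask):
        length = sum(row_mask)          # number of real (non-padded) steps
        finals.append(outputs[length - 1])
    return finals
```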
def get_dropout_mask(dropout_probability , tensor_for_masking ):
u"""
Computes and returns an element-wise dropout mask for a given tensor, where
each element in the mask is dropped out with probability dropout_probability.
Note that the mask is NOT applied to the tensor - the tensor ... |
Computes and returns an element-wise dropout mask for a given tensor, where
each element in the mask is dropped out with probability dropout_probability.
Note that the mask is NOT applied to the tensor - the tensor is passed to retain
the correct CUDA tensor type for the mask.
Parameters
-----... | get_dropout_mask | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
def masked_softmax(vector, mask):
u"""
``torch.nn.functional.softmax(vector)`` does not work if some elements of ``vector`` should be
masked. This performs a softmax on just the non-masked portions of ``vector``. Passing
``None`` in for the mask is also acceptable; you'll just get a regular softmax.
... |
``torch.nn.functional.softmax(vector)`` does not work if some elements of ``vector`` should be
masked. This performs a softmax on just the non-masked portions of ``vector``. Passing
``None`` in for the mask is also acceptable; you'll just get a regular softmax.
We assume that both ``vector`` and ``m... | masked_softmax | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
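A pure-Python sketch of the masked softmax described above, restricting the softmax to unmasked entries and zeroing the rest (the torch version handles broadcasting and the all-masked case slightly differently):

```python
import math

def masked_softmax(vector, mask=None):
    """Softmax over the unmasked entries only; masked entries get 0."""
    if mask is None:
        mask = [1] * len(vector)
    exps = [math.exp(v) if m else 0.0 for v, m in zip(vector, mask)]
    total = sum(exps) or 1.0  # avoid dividing by zero for an all-masked row
    return [e / total for e in exps]
```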
def masked_log_softmax(vector, mask):
u"""
``torch.nn.functional.log_softmax(vector)`` does not work if some elements of ``vector`` should be
masked. This performs a log_softmax on just the non-masked portions of ``vector``. Passing
``None`` in for the mask is also acceptable; you'll just get a regula... |
``torch.nn.functional.log_softmax(vector)`` does not work if some elements of ``vector`` should be
masked. This performs a log_softmax on just the non-masked portions of ``vector``. Passing
``None`` in for the mask is also acceptable; you'll just get a regular log_softmax.
We assume that both ``vect... | masked_log_softmax | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
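A common way to implement this, sketched in NumPy: add the log of the mask (plus a tiny epsilon) to the logits, which leaves kept entries unchanged and pushes masked entries toward negative infinity before a regular log-softmax:

```python
import numpy as np

def masked_log_softmax(vector, mask=None):
    if mask is not None:
        # log(1) == 0 leaves kept entries alone; log(~0) is hugely negative,
        # driving the probability of masked entries toward zero.
        vector = vector + np.log(mask + 1e-45)
    shifted = vector - vector.max(axis=-1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

log_probs = masked_log_softmax(np.array([1.0, 2.0, 3.0]),
                               np.array([1.0, 1.0, 0.0]))
```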
def masked_max(vector ,
mask ,
dim ,
keepdim = False,
min_val = -1e7) :
u"""
To calculate the max along certain dimensions of masked values
Parameters
----------
vector : ``torch.Tensor... |
To calculate the max along certain dimensions of masked values
Parameters
----------
vector : ``torch.Tensor``
The vector over which to calculate the max; unmasked parts are assumed to already be zeros
mask : ``torch.Tensor``
The mask of the vector. It must be broadcastable with vector.
dim : ``int``
... | masked_max | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
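The `min_val` parameter explains the trick: masked positions are replaced with a value small enough that they can never win the max. A NumPy sketch of the same logic:

```python
import numpy as np

def masked_max(vector, mask, axis, keepdims=False, min_val=-1e7):
    # Overwrite masked positions with a very small value so they can
    # never be selected as the max.
    replaced = np.where(mask.astype(bool), vector, min_val)
    return replaced.max(axis=axis, keepdims=keepdims)

result = masked_max(np.array([[1.0, 9.0, 2.0]]),
                    np.array([[1, 0, 1]]), axis=-1)
# 9.0 is masked out, so the max of the unmasked entries is 2.0
```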
def masked_mean(vector ,
mask ,
dim ,
keepdim = False,
eps = 1e-8) :
u"""
To calculate the mean along certain dimensions of masked values
Parameters
----------
vector : ``torch.Tens... |
To calculate the mean along certain dimensions of masked values
Parameters
----------
vector : ``torch.Tensor``
The vector over which to calculate the mean.
mask : ``torch.Tensor``
The mask of the vector. It must be broadcastable with vector.
dim : ``int``
The dimension along which to calculate the mean
... | masked_mean | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
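The masked mean is a weighted sum divided by the count of unmasked entries, with `eps` guarding the all-masked case. A NumPy sketch:

```python
import numpy as np

def masked_mean(vector, mask, axis, keepdims=False, eps=1e-8):
    total = (vector * mask).sum(axis=axis, keepdims=keepdims)
    count = mask.sum(axis=axis, keepdims=keepdims)
    # eps keeps the division finite when an entire slice is masked.
    return total / np.maximum(count, eps)

result = masked_mean(np.array([[2.0, 100.0, 4.0]]),
                     np.array([[1, 0, 1]]), axis=-1)
# 100.0 is masked out, so the mean of the remaining entries is (2 + 4) / 2
```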
def viterbi_decode(tag_sequence ,
transition_matrix ,
tag_observations = None):
u"""
Perform Viterbi decoding in log space over a sequence given a transition matrix
specifying pairwise (transition) potentials between tags a... |
Perform Viterbi decoding in log space over a sequence given a transition matrix
specifying pairwise (transition) potentials between tags and a matrix of shape
(sequence_length, num_tags) specifying unary potentials for possible tags per
timestep.
Parameters
----------
tag_sequence : torch.... | viterbi_decode | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
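The core algorithm can be sketched in NumPy (the `tag_observations` constraint handling of the library version is omitted here). At each step the best score for every current tag is the max over previous tags of score + transition + unary, with backpointers recorded for the final traceback:

```python
import numpy as np

def viterbi_decode(tag_sequence, transition_matrix):
    # tag_sequence: (sequence_length, num_tags) log-space unary potentials.
    # transition_matrix: (num_tags, num_tags) log-space pairwise potentials,
    # where entry [i, j] scores moving from tag i to tag j.
    score = tag_sequence[0]
    backpointers = []
    for unary in tag_sequence[1:]:
        # candidates[i, j]: best score ending in tag i, then moving to tag j.
        candidates = score[:, None] + transition_matrix + unary[None, :]
        backpointers.append(candidates.argmax(axis=0))
        score = candidates.max(axis=0)
    best_path = [int(score.argmax())]
    for pointers in reversed(backpointers):
        best_path.append(int(pointers[best_path[-1]]))
    best_path.reverse()
    return best_path, float(score.max())

unary = np.log(np.array([[0.9, 0.1], [0.1, 0.9], [0.9, 0.1]]))
sticky = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))  # strong self-transitions
path, best_score = viterbi_decode(unary, sticky)
# The sticky transitions override the middle step's unary preference for tag 1,
# so the best path is [0, 0, 0] rather than the greedy [0, 1, 0].
```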
def get_text_field_mask(text_field_tensors ,
num_wrapping_dims = 0) :
u"""
Takes the dictionary of tensors produced by a ``TextField`` and returns a mask
with 0 where the tokens are padding, and 1 otherwise. We also handle ``TextFields... |
Takes the dictionary of tensors produced by a ``TextField`` and returns a mask
with 0 where the tokens are padding, and 1 otherwise. We also handle ``TextFields``
wrapped by an arbitrary number of ``ListFields``, where the number of wrapping ``ListFields``
is given by ``num_wrapping_dims``.
If ``... | get_text_field_mask | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
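For the simplest case (a single indexer, no wrapping `ListFields`), the mask is just "index is non-zero", since padding conventionally uses index 0. A NumPy sketch; the key name `"tokens"` and the ids are made up for illustration:

```python
import numpy as np

# A TextField typically yields a dict of index arrays; padding uses index 0.
text_field_tensors = {"tokens": np.array([[12, 7, 31, 2],
                                          [45, 9,  0, 0]])}

def get_text_field_mask(text_field_tensors):
    indices = next(iter(text_field_tensors.values()))
    # Non-zero indices are real tokens; zeros are padding.
    return (indices != 0).astype(int)

mask = get_text_field_mask(text_field_tensors)
```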
def _last_dimension_applicator(function_to_apply ,
tensor ,
mask = None):
u"""
Takes a tensor with 3 or more dimensions and applies a function over th... |
Takes a tensor with 3 or more dimensions and applies a function over the last dimension. We
assume the tensor has shape ``(batch_size, ..., sequence_length)`` and that the mask (if given)
has shape ``(batch_size, sequence_length)``. We first unsqueeze and expand the mask so that it
has the same shape... | _last_dimension_applicator | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
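The expand-flatten-apply-restore pattern the docstring describes can be sketched in NumPy; `apply_over_last_dim` is a hypothetical name and the row-wise function here is a trivial stand-in for softmax-like operations:

```python
import numpy as np

def apply_over_last_dim(function, tensor, mask=None):
    # Broadcast a (batch_size, sequence_length) mask up to the tensor's
    # shape, flatten both to 2D, apply the function row-wise, then restore
    # the original shape.
    if mask is not None:
        while mask.ndim < tensor.ndim:
            mask = np.expand_dims(mask, 1)
        mask = np.broadcast_to(mask, tensor.shape).reshape(-1, tensor.shape[-1])
    flat = tensor.reshape(-1, tensor.shape[-1])
    return function(flat, mask).reshape(tensor.shape)

zero_masked = lambda v, m: v if m is None else v * m  # stand-in function
tensor = np.ones((2, 3, 4))
mask = np.array([[1, 1, 0, 0],
                 [1, 1, 1, 1]])
out = apply_over_last_dim(zero_masked, tensor, mask)
```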
def weighted_sum(matrix , attention ) :
u"""
Takes a matrix of vectors and a set of weights over the rows in the matrix (which we call an
"attention" vector), and returns a weighted sum of the rows in the matrix. This is the typical
computation performed after a... |
Takes a matrix of vectors and a set of weights over the rows in the matrix (which we call an
"attention" vector), and returns a weighted sum of the rows in the matrix. This is the typical
computation performed after an attention mechanism.
Note that while we call this a "matrix" of vectors and an att... | weighted_sum | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
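For the simple unbatched case the weighted sum is just a vector-matrix product; the library version additionally handles extra batch and query dimensions. A NumPy sketch:

```python
import numpy as np

matrix = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [2.0, 2.0]])           # (num_rows, embedding_dim)
attention = np.array([0.5, 0.25, 0.25])   # one weight per row, summing to 1

# Each output dimension is the attention-weighted combination of the rows.
result = attention @ matrix
```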
def sequence_cross_entropy_with_logits(logits ,
targets ,
weights ,
batch_average = True,
label_smoothing... |
Computes the cross entropy loss of a sequence, weighted with respect to
some user provided weights. Note that the weighting here is not the same as
in the :func:`torch.nn.CrossEntropyLoss()` criterion, which is weighting
classes; here we are weighting the loss contribution from particular elements
... | sequence_cross_entropy_with_logits | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
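The element-wise weighting can be sketched in NumPy: compute the per-timestep negative log-likelihood, multiply by the weights (so padded timesteps contribute nothing), average per sequence, then average over the batch. This mirrors the `batch_average=True` behaviour; label smoothing is omitted:

```python
import numpy as np

def sequence_cross_entropy_with_logits(logits, targets, weights):
    # logits: (batch, seq_len, num_classes); targets, weights: (batch, seq_len)
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # Negative log-likelihood of the gold class at every timestep.
    nll = -np.take_along_axis(log_probs, targets[..., None], axis=-1)[..., 0]
    # Weighted per-sequence average: padded steps carry weight 0.
    per_sequence = (nll * weights).sum(axis=-1) / (weights.sum(axis=-1) + 1e-13)
    return per_sequence.mean()

logits = np.array([[[10.0, -10.0],     # confident, correct prediction
                    [10.0, -10.0]]])   # confident, wrong -- but padded out
targets = np.array([[0, 1]])
weights = np.array([[1, 0]])
loss = sequence_cross_entropy_with_logits(logits, targets, weights)
# The wrong prediction has weight 0, so the loss is essentially zero.
```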
def replace_masked_values(tensor , mask , replace_with ) :
u"""
Replaces all masked values in ``tensor`` with ``replace_with``. ``mask`` must be broadcastable
to the same shape as ``tensor``. We require that ``tensor.dim() == mask.dim()``, as otherwise we
... |
Replaces all masked values in ``tensor`` with ``replace_with``. ``mask`` must be broadcastable
to the same shape as ``tensor``. We require that ``tensor.dim() == mask.dim()``, as otherwise we
won't know which dimensions of the mask to unsqueeze.
| replace_masked_values | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/nn/util.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/nn/util.py | MIT |
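A one-line NumPy sketch of the operation, with the same-dimensionality check the docstring requires:

```python
import numpy as np

def replace_masked_values(tensor, mask, replace_with):
    # Require equal dimensionality, mirroring the docstring's constraint;
    # broadcasting then handles any size-1 dimensions in the mask.
    assert tensor.ndim == mask.ndim
    return np.where(mask.astype(bool), tensor, replace_with)

out = replace_masked_values(np.array([[1.0, 2.0, 3.0]]),
                            np.array([[1, 0, 1]]), -1e7)
```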