| code (stringlengths 66–870k) | docstring (stringlengths 19–26.7k) | func_name (stringlengths 1–138) | language (stringclasses 1 value) | repo (stringlengths 7–68) | path (stringlengths 5–324) | url (stringlengths 46–389) | license (stringclasses 7 values) |
|---|---|---|---|---|---|---|---|
def test_get_module_root(self):
u"""
When a user runs ``allennlp test-install``, we have no idea where
they're running it from, so we do an ``os.chdir`` to the _module_
root in order to get all the paths in the fixtures to resolve properly.
The logic within ``allennlp test-insta... |
When a user runs ``allennlp test-install``, we have no idea where
they're running it from, so we do an ``os.chdir`` to the _module_
root in order to get all the paths in the fixtures to resolve properly.
The logic within ``allennlp test-install`` is pretty hard to test in
its e... | test_get_module_root | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/tests/commands/test_install_test.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/tests/commands/test_install_test.py | MIT |
def head_callback(_):
u"""
Writing this as a callback allows different responses to different HEAD requests.
In our case, we're going to change the ETag header every `change_etag_every`
requests, which will allow us to simulate having a new version of the file.
"""
nonloc... |
Writing this as a callback allows different responses to different HEAD requests.
In our case, we're going to change the ETag header every `change_etag_every`
requests, which will allow us to simulate having a new version of the file.
| head_callback | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/tests/common/file_utils_test.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/tests/common/file_utils_test.py | MIT |
def set_up_s3_bucket(bucket_name = u"my-bucket", s3_objects = None):
u"""Creates a mock s3 bucket optionally with objects uploaded from local files."""
s3_client = boto3.client(u"s3")
s3_client.create_bucket(Bucket=bucket_name)
for filename, key in s3_objects or []:
s... | Creates a mock s3 bucket optionally with objects uploaded from local files. | set_up_s3_bucket | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/tests/common/file_utils_test.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/tests/common/file_utils_test.py | MIT |
def test_s3_bucket(self):
u"""This just ensures the bucket gets set up correctly."""
set_up_s3_bucket()
s3_client = boto3.client(u"s3")
buckets = s3_client.list_buckets()[u"Buckets"]
assert len(buckets) == 1
assert buckets[0][u"Name"] == u"my-bucket" | This just ensures the bucket gets set up correctly. | test_s3_bucket | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/tests/common/file_utils_test.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/tests/common/file_utils_test.py | MIT |
def test_from_params_valid_vocab_extension_thoroughly(self):
u'''
Tests for Valid Vocab Extension thoroughly: Vocab extension is valid
    when overlapping namespaces have the same padding behaviour (padded/non-padded)
Summary of namespace paddings in this test:
original_vocab namespaces... |
Tests for Valid Vocab Extension thoroughly: Vocab extension is valid
    when overlapping namespaces have the same padding behaviour (padded/non-padded)
Summary of namespace paddings in this test:
original_vocab namespaces
tokens0 padded
tokens1 non-padded
... | test_from_params_valid_vocab_extension_thoroughly | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/tests/data/vocabulary_test.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/tests/data/vocabulary_test.py | MIT |
def test_from_instances_exclusive_embeddings_file_inside_archive(self):
u""" Just for ensuring there are no problems when reading pretrained tokens from an archive """
# Read embeddings file from archive
archive_path = unicode(self.TEST_DIR / u"embeddings-archive.zip")
with zipfile.ZipF... | Just for ensuring there are no problems when reading pretrained tokens from an archive | test_from_instances_exclusive_embeddings_file_inside_archive | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/tests/data/vocabulary_test.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/tests/data/vocabulary_test.py | MIT |
def test_forward_pass_runs_correctly(self):
u"""
Check to make sure a forward pass on an ensemble of two identical copies of a model yields the same
results as the model itself.
"""
bidaf_ensemble = BidafEnsemble([self.model, self.model])
batch = Batch(self.instances)
... |
Check to make sure a forward pass on an ensemble of two identical copies of a model yields the same
results as the model itself.
| test_forward_pass_runs_correctly | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/tests/models/reading_comprehension/bidaf_ensemble_test.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/tests/models/reading_comprehension/bidaf_ensemble_test.py | MIT |
def score(self, logits, tags):
u"""
Computes the likelihood score for the given sequence of tags,
given the provided logits (and the transition weights in the CRF model)
"""
# Start with transitions from START and to END
total = self.transitions_from_start[tags[0]] + self... |
Computes the likelihood score for the given sequence of tags,
given the provided logits (and the transition weights in the CRF model)
| score | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/tests/modules/conditional_random_field_test.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/tests/modules/conditional_random_field_test.py | MIT |
def _load_sentences_embeddings(self):
u"""
Load the test sentences and the expected LM embeddings.
These files loaded in this method were created with a batch-size of 3.
Due to idiosyncrasies with TensorFlow, the 30 sentences in sentences.json are split into 3 files in which
the... |
Load the test sentences and the expected LM embeddings.
These files loaded in this method were created with a batch-size of 3.
Due to idiosyncrasies with TensorFlow, the 30 sentences in sentences.json are split into 3 files in which
the k-th sentence in each is from batch k.
T... | _load_sentences_embeddings | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/tests/modules/elmo_test.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/tests/modules/elmo_test.py | MIT |
def create_small_test_fixture(output_dir = u'/tmp') :
u"""
This is how I created the transformer_model.tar.gz.
After running this, go to the specified output dir and run
tar -czvf transformer_model.tar.gz model/
In case you need to regenerate the fixture for some reason.
"""
... |
This is how I created the transformer_model.tar.gz.
After running this, go to the specified output dir and run
tar -czvf transformer_model.tar.gz model/
In case you need to regenerate the fixture for some reason.
| create_small_test_fixture | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/tests/modules/token_embedders/openai_transformer_embedder_test.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/tests/modules/token_embedders/openai_transformer_embedder_test.py | MIT |
def test_html(self):
u"""
The pip-installed version of allennlp (currently) requires the config explorer HTML
to be hardcoded into the server file. But when iterating on it, it's easier to use the
/debug/ endpoint, which points at `config_explorer.html`, so that you don't have to
... |
The pip-installed version of allennlp (currently) requires the config explorer HTML
to be hardcoded into the server file. But when iterating on it, it's easier to use the
/debug/ endpoint, which points at `config_explorer.html`, so that you don't have to
restart the server every time yo... | test_html | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/tests/service/config_explorer_test.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/tests/service/config_explorer_test.py | MIT |
def test_rnn_hack(self):
u"""
Behind the scenes, when you try to create a torch RNN,
it just calls torch.RNNBase with an extra parameter.
This test is to make sure that works correctly.
"""
response = self.client.get(u'/api/config/?class=torch.nn.modules.rnn.LSTM')
... |
Behind the scenes, when you try to create a torch RNN,
it just calls torch.RNNBase with an extra parameter.
This test is to make sure that works correctly.
| test_rnn_hack | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/tests/service/config_explorer_test.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/tests/service/config_explorer_test.py | MIT |
def step(self, closure=None):
u"""
Performs a single optimization step.
Parameters
----------
closure : ``callable``, optional.
A closure that reevaluates the model and returns the loss.
"""
loss = None
if closure is not None:
loss... |
Performs a single optimization step.
Parameters
----------
closure : ``callable``, optional.
A closure that reevaluates the model and returns the loss.
| step | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/optimizers.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/optimizers.py | MIT |
def sparse_clip_norm(parameters, max_norm, norm_type=2) :
u"""Clips gradient norm of an iterable of parameters.
The norm is computed over all gradients together, as if they were
concatenated into a single vector. Gradients are modified in-place.
Supports sparse gradients.
Parameters
--... | Clips gradient norm of an iterable of parameters.
The norm is computed over all gradients together, as if they were
concatenated into a single vector. Gradients are modified in-place.
Supports sparse gradients.
Parameters
----------
parameters : ``(Iterable[torch.Tensor])``
An iterable... | sparse_clip_norm | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
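The `sparse_clip_norm` row above describes clipping the global norm of all gradients at once, as if they were concatenated into one vector. A minimal pure-Python sketch of the dense L2 case (the real function also handles sparse gradients and other `norm_type` values, which this sketch deliberately omits):

```python
import math

def clip_norm(gradients, max_norm):
    """Scale a list of gradient vectors (plain lists of floats) in place so
    their combined L2 norm is at most max_norm. Returns the pre-clip norm.
    Simplified sketch: dense gradients and L2 norm only."""
    total_norm = math.sqrt(sum(g * g for grad in gradients for g in grad))
    clip_coef = max_norm / (total_norm + 1e-6)  # epsilon avoids division by zero
    if clip_coef < 1:
        for grad in gradients:
            for i in range(len(grad)):
                grad[i] *= clip_coef
    return total_norm
```

Because the scaling is applied only when `clip_coef < 1`, gradients already inside the ball are left untouched, matching the usual clipping semantics.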
def move_optimizer_to_cuda(optimizer):
u"""
Move the optimizer state to GPU, if necessary.
After calling, any parameter specific state in the optimizer
will be located on the same device as the parameter.
"""
for param_group in optimizer.param_groups:
for param in param_group[u'params']:... |
Move the optimizer state to GPU, if necessary.
After calling, any parameter specific state in the optimizer
will be located on the same device as the parameter.
| move_optimizer_to_cuda | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def time_to_str(timestamp ) :
u"""
Convert seconds past Epoch to human readable string.
"""
datetimestamp = datetime.datetime.fromtimestamp(timestamp)
return u'{:04d}-{:02d}-{:02d}-{:02d}-{:02d}-{:02d}'.format(
datetimestamp.year, datetimestamp.month, datetimestamp.day,
... |
Convert seconds past Epoch to human readable string.
| time_to_str | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
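The `time_to_str` row is short enough to restate whole. A self-contained sketch of the same formatting, using UTC for determinism where the original uses `datetime.datetime.fromtimestamp` (local time):

```python
import datetime

def time_to_str(timestamp):
    """Convert seconds past the Epoch to a 'YYYY-MM-DD-HH-MM-SS' string.
    UTC is used here for reproducibility; the original uses local time."""
    dt = datetime.datetime.fromtimestamp(timestamp, tz=datetime.timezone.utc)
    return u'{:04d}-{:02d}-{:02d}-{:02d}-{:02d}-{:02d}'.format(
        dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second)
```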
def __init__(self,
model ,
optimizer ,
iterator ,
train_dataset ,
validation_dataset = None,
patience = None,
... |
Parameters
----------
model : ``Model``, required.
An AllenNLP model to be optimized. Pytorch Modules can also be optimized if
their ``forward`` method returns a dictionary with a "loss" key, containing a
scalar tensor representing the loss function to be opt... | __init__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def _rescale_gradients(self) :
u"""
Performs gradient rescaling. Is a no-op if gradient rescaling is not enabled.
"""
if self._grad_norm:
parameters_to_clip = [p for p in self._model.parameters()
if p.grad is not None]
... |
Performs gradient rescaling. Is a no-op if gradient rescaling is not enabled.
| _rescale_gradients | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def _data_parallel(self, batch):
u"""
Do the forward pass using multiple GPUs. This is a simplification
of torch.nn.parallel.data_parallel to support the allennlp model
interface.
"""
inputs, module_kwargs = scatter_kwargs((), batch, self._cuda_devices, 0)
used_d... |
Do the forward pass using multiple GPUs. This is a simplification
of torch.nn.parallel.data_parallel to support the allennlp model
interface.
| _data_parallel | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def _batch_loss(self, batch , for_training ) :
u"""
Does a forward pass on the given batch and returns the ``loss`` value in the result.
If ``for_training`` is `True` also applies regularization penalty.
"""
if self._multiple_gpu:
outp... |
Does a forward pass on the given batch and returns the ``loss`` value in the result.
If ``for_training`` is `True` also applies regularization penalty.
| _batch_loss | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def _get_metrics(self, total_loss , num_batches , reset = False) :
u"""
Gets the metrics but sets ``"loss"`` to
the total loss divided by the ``num_batches`` so that
the ``"loss"`` metric is "average loss per batch".
"""
metrics = self._... |
Gets the metrics but sets ``"loss"`` to
the total loss divided by the ``num_batches`` so that
the ``"loss"`` metric is "average loss per batch".
| _get_metrics | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
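The `_get_metrics` docstring says the trainer reports `"loss"` as total loss divided by the number of batches. A hedged sketch of just that bookkeeping (the metric names and the guard for zero batches are assumptions):

```python
def get_metrics(metric_values, total_loss, num_batches):
    """Return a copy of metric_values with 'loss' set to the average
    loss per batch, as the row above describes. Sketch only."""
    metrics = dict(metric_values)
    metrics["loss"] = float(total_loss / num_batches) if num_batches > 0 else 0.0
    return metrics
```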
def _train_epoch(self, epoch ) :
u"""
Trains one epoch and returns metrics.
"""
logger.info(u"Epoch %d/%d", epoch, self._num_epochs - 1)
logger.info("Peak CPU memory usage MB: {peak_memory_mb()}")
for gpu, memory in list(gpu_memory_mb().items()):
... |
Trains one epoch and returns metrics.
| _train_epoch | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def _should_stop_early(self, metric_history ) :
u"""
        Uses patience and the validation metric to determine if training should stop early.
"""
if self._patience and self._patience < len(metric_history):
# Pylint can't figure out that in this branch `self._pati... |
Uses patience and the validation metric to determine if training should stop early.
| _should_stop_early | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
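The `_should_stop_early` row only shows the patience check, so the following is a sketch of the general idea rather than AllenNLP's exact rule: stop when the best value inside the last `patience` epochs is no better than the best value before them.

```python
def should_stop_early(metric_history, patience, higher_is_better=True):
    """Simplified patience-based early stopping. metric_history is one
    validation metric per epoch; semantics are an assumption, not the
    library's exact behaviour."""
    if patience is None or len(metric_history) <= patience:
        return False
    recent = metric_history[-patience:]
    earlier = metric_history[:-patience]
    if higher_is_better:
        return max(recent) <= max(earlier)
    return min(recent) >= min(earlier)
```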
def _parameter_and_gradient_statistics_to_tensorboard(self, # pylint: disable=invalid-name
epoch ,
batch_grad_norm ) :
u"""
Send the mean and std of all parameters and gra... |
Send the mean and std of all parameters and gradients to tensorboard, as well
as logging the average gradient norm.
| _parameter_and_gradient_statistics_to_tensorboard | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def _histograms_to_tensorboard(self, epoch , histogram_parameters ) :
u"""
Send histograms of parameters to tensorboard.
"""
for name, param in self._model.named_parameters():
if name in histogram_parameters:
self._tensorboard.add_train_his... |
Send histograms of parameters to tensorboard.
| _histograms_to_tensorboard | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def _metrics_to_tensorboard(self,
epoch ,
train_metrics ,
val_metrics = None) :
u"""
Sends all of the train metrics (and validation metrics, if provided) to tensorboard.
"""
... |
Sends all of the train metrics (and validation metrics, if provided) to tensorboard.
| _metrics_to_tensorboard | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def _metrics_to_console(self, # pylint: disable=no-self-use
train_metrics ,
val_metrics = None) :
u"""
Logs all of the train metrics (and validation metrics, if provided) to the console.
"""
val_metrics = val_metr... |
Logs all of the train metrics (and validation metrics, if provided) to the console.
| _metrics_to_console | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def _validation_loss(self) :
u"""
Computes the validation loss. Returns it and the number of batches.
"""
logger.info(u"Validating")
self._model.eval()
if self._validation_iterator is not None:
val_iterator = self._validation_iterator
... |
Computes the validation loss. Returns it and the number of batches.
| _validation_loss | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def train(self) :
u"""
Trains the supplied model with the supplied parameters.
"""
try:
epoch_counter, validation_metric_per_epoch = self._restore_checkpoint()
except RuntimeError:
traceback.print_exc()
raise ConfigurationError... |
Trains the supplied model with the supplied parameters.
| train | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def _save_checkpoint(self,
epoch ,
val_metric_per_epoch ,
is_best = None) :
u"""
Saves a checkpoint of the model to self._serialization_dir.
Is a no-op if self._serializa... |
Saves a checkpoint of the model to self._serialization_dir.
Is a no-op if self._serialization_dir is None.
Parameters
----------
epoch : Union[int, str], required.
The epoch of training. If the checkpoint is saved in the middle
of an epoch, the paramete... | _save_checkpoint | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def find_latest_checkpoint(self) :
u"""
Return the location of the latest model and training state files.
If there isn't a valid checkpoint then return None.
"""
have_checkpoint = (self._serialization_dir is not None and
any(u"model_st... |
Return the location of the latest model and training state files.
If there isn't a valid checkpoint then return None.
| find_latest_checkpoint | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
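The `find_latest_checkpoint` row returns the latest model and training-state files, or `None` if there is no valid checkpoint. A sketch over a list of filenames; the `model_state_epoch_N.th` naming pattern is assumed from AllenNLP's convention, and mid-epoch (timestamped) checkpoints are ignored here:

```python
import re

def find_latest_checkpoint(filenames):
    """Pick the highest integer epoch from names like
    'model_state_epoch_3.th' and return the matching file pair,
    or None when nothing matches. Filename pattern is an assumption."""
    epochs = []
    for name in filenames:
        match = re.match(r"model_state_epoch_(\d+)\.th", name)
        if match:
            epochs.append(int(match.group(1)))
    if not epochs:
        return None
    latest = max(epochs)
    return ("model_state_epoch_%d.th" % latest,
            "training_state_epoch_%d.th" % latest)
```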
def _restore_checkpoint(self) :
u"""
Restores a model from a serialization_dir to the last saved checkpoint.
This includes an epoch count and optimizer state, which is serialized separately
from model parameters. This function should only be used to continue tr... |
Restores a model from a serialization_dir to the last saved checkpoint.
This includes an epoch count and optimizer state, which is serialized separately
from model parameters. This function should only be used to continue training -
if you wish to load a model for inference/load parts ... | _restore_checkpoint | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/trainer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/trainer.py | MIT |
def __call__(self, # type: ignore
predicted_indices ,
predicted_labels ,
gold_indices ,
gold_labels ,
mask = None):
u"""
Parameters
---... |
Parameters
----------
predicted_indices : ``torch.Tensor``, required.
A tensor of head index predictions of shape (batch_size, timesteps).
predicted_labels : ``torch.Tensor``, required.
A tensor of arc label predictions of shape (batch_size, timesteps).
g... | __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/attachment_scores.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/attachment_scores.py | MIT |
def get_metric(self, reset = False):
u"""
Returns
-------
The accumulated metrics as a dictionary.
"""
unlabeled_attachment_score = 0.0
labeled_attachment_score = 0.0
unlabeled_exact_match = 0.0
labeled_exact_match = 0.0
if self._tota... |
Returns
-------
The accumulated metrics as a dictionary.
| get_metric | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/attachment_scores.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/attachment_scores.py | MIT |
def __call__(self, value):
u"""
Parameters
----------
value : ``float``
The value to average.
"""
self._total_value += list(self.unwrap_to_tensors(value))[0]
self._count += 1 |
Parameters
----------
value : ``float``
The value to average.
| __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/average.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/average.py | MIT |
def get_metric(self, reset = False):
u"""
Returns
-------
The average of all values that were passed to ``__call__``.
"""
average_value = self._total_value / self._count if self._count > 0 else 0
if reset:
self.reset()
return average_valu... |
Returns
-------
The average of all values that were passed to ``__call__``.
| get_metric | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/average.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/average.py | MIT |
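The two `average.py` rows above together describe a running average metric. A self-contained sketch combining them, minus the tensor-unwrapping the original does in `__call__`:

```python
class Average(object):
    """Running average of scalar values passed to __call__.
    Sketch of the two rows above; no tensor handling."""
    def __init__(self):
        self._total_value = 0.0
        self._count = 0

    def __call__(self, value):
        self._total_value += value
        self._count += 1

    def get_metric(self, reset=False):
        # Guard against division by zero before any value arrives.
        average = self._total_value / self._count if self._count > 0 else 0
        if reset:
            self.reset()
        return average

    def reset(self):
        self._total_value = 0.0
        self._count = 0
```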
def __call__(self,
predictions ,
gold_labels ,
mask = None):
u"""
Parameters
----------
predictions : ``torch.Tensor``, required.
A tensor of predictions of shape (batch_size,... |
Parameters
----------
predictions : ``torch.Tensor``, required.
A tensor of predictions of shape (batch_size, ...).
gold_labels : ``torch.Tensor``, required.
A tensor of the same shape as ``predictions``.
mask: ``torch.Tensor``, optional (default = None).... | __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/boolean_accuracy.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/boolean_accuracy.py | MIT |
def __call__(self,
predictions ,
gold_labels ,
mask = None):
u"""
Parameters
----------
predictions : ``torch.Tensor``, required.
A tensor of predictions of shape (batch_size,... |
Parameters
----------
predictions : ``torch.Tensor``, required.
A tensor of predictions of shape (batch_size, ..., num_classes).
gold_labels : ``torch.Tensor``, required.
A tensor of integer class label of shape (batch_size, ...). It must be the same
... | __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/categorical_accuracy.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/categorical_accuracy.py | MIT |
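The categorical-accuracy row compares the argmax of each prediction row against an integer gold label, optionally under a mask. A pure-Python sketch on lists (the real metric operates on batched tensors and supports top-k, which this omits):

```python
def categorical_accuracy(predictions, gold_labels, mask=None):
    """Fraction of unmasked positions where argmax(scores) == gold label.
    predictions: list of per-class score lists; mask: optional 0/1 list."""
    correct, total = 0, 0
    for i, (scores, gold) in enumerate(zip(predictions, gold_labels)):
        if mask is not None and not mask[i]:
            continue
        predicted = max(range(len(scores)), key=lambda j: scores[j])
        correct += int(predicted == gold)
        total += 1
    return correct / total if total > 0 else 0.0
```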
def __call__(self, # type: ignore
top_spans ,
antecedent_indices ,
predicted_antecedents ,
metadata_list ):
u"""
Parameters
----------
top_spans : ``torch.Tens... |
Parameters
----------
top_spans : ``torch.Tensor``
(start, end) indices for all spans kept after span pruning in the model.
Expected shape: (batch_size, num_spans, 2)
antecedent_indices : ``torch.Tensor``
For each span, the indices of all allowed ante... | __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/conll_coref_scores.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/conll_coref_scores.py | MIT |
def b_cubed(clusters, mention_to_gold):
u"""
Averaged per-mention precision and recall.
<https://pdfs.semanticscholar.org/cfe3/c24695f1c14b78a5b8e95bcbd1c666140fd1.pdf>
"""
numerator, denominator = 0, 0
for cluster in clusters:
if len(cluster) == 1:
... |
Averaged per-mention precision and recall.
<https://pdfs.semanticscholar.org/cfe3/c24695f1c14b78a5b8e95bcbd1c666140fd1.pdf>
| b_cubed | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/conll_coref_scores.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/conll_coref_scores.py | MIT |
def muc(clusters, mention_to_gold):
u"""
Counts the mentions in each predicted cluster which need to be re-allocated in
order for each predicted cluster to be contained by the respective gold cluster.
<http://aclweb.org/anthology/M/M95/M95-1005.pdf>
"""
true_p, all_p = 0,... |
Counts the mentions in each predicted cluster which need to be re-allocated in
order for each predicted cluster to be contained by the respective gold cluster.
<http://aclweb.org/anthology/M/M95/M95-1005.pdf>
| muc | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/conll_coref_scores.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/conll_coref_scores.py | MIT |
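The `muc` row counts links: each predicted cluster of size n contributes n−1 possible links, and a link is wrong for every extra gold partition (or unaligned mention) the cluster spans. A sketch of that counting, reconstructed from the standard MUC definition rather than copied from the truncated row:

```python
def muc(clusters, mention_to_gold):
    """MUC true-positive and total link counts over predicted clusters."""
    true_p, all_p = 0, 0
    for cluster in clusters:
        all_p += len(cluster) - 1
        linked_gold = set()
        unmatched = 0
        for mention in cluster:
            if mention in mention_to_gold:
                linked_gold.add(mention_to_gold[mention])
            else:
                unmatched += 1
        # Each distinct gold partition (and each unaligned mention) costs a link.
        true_p += len(cluster) - (len(linked_gold) + unmatched)
    return true_p, all_p
```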
def phi4(gold_clustering, predicted_clustering):
u"""
Subroutine for ceafe. Computes the mention F measure between gold and
predicted mentions in a cluster.
"""
return 2 * len([mention for mention in gold_clustering if mention in predicted_clustering])\
/ float(len... |
Subroutine for ceafe. Computes the mention F measure between gold and
predicted mentions in a cluster.
| phi4 | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/conll_coref_scores.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/conll_coref_scores.py | MIT |
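The `phi4` row is nearly complete: the mention F measure between one gold and one predicted cluster is twice the overlap divided by the summed sizes. Restated as a runnable sketch:

```python
def phi4(gold_clustering, predicted_clustering):
    """Mention F measure between a gold and a predicted cluster:
    2 * |overlap| / (|gold| + |predicted|)."""
    overlap = len([m for m in gold_clustering if m in predicted_clustering])
    return 2 * overlap / float(len(gold_clustering) + len(predicted_clustering))
```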
def ceafe(clusters, gold_clusters):
u"""
Computes the Constrained EntityAlignment F-Measure (CEAF) for evaluating coreference.
Gold and predicted mentions are aligned into clusterings which maximise a metric - in
this case, the F measure between gold and predicted clusters.
<ht... |
Computes the Constrained EntityAlignment F-Measure (CEAF) for evaluating coreference.
Gold and predicted mentions are aligned into clusterings which maximise a metric - in
this case, the F measure between gold and predicted clusters.
<https://www.semanticscholar.org/paper/On-Coreferen... | ceafe | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/conll_coref_scores.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/conll_coref_scores.py | MIT |
def __call__(self, # type: ignore
logits ,
mask = None):
u"""
Parameters
----------
logits : ``torch.Tensor``, required.
A tensor of unnormalized log probabilities of shape (batch_size, ..., num_classes).... |
Parameters
----------
logits : ``torch.Tensor``, required.
A tensor of unnormalized log probabilities of shape (batch_size, ..., num_classes).
mask: ``torch.Tensor``, optional (default = None).
A masking tensor of shape (batch_size, ...).
| __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/entropy.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/entropy.py | MIT |
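The entropy row takes unnormalized log probabilities. A single-row sketch: softmax the logits (with the usual max-subtraction for numerical stability), then take the Shannon entropy in nats; batching and masking from the original are omitted:

```python
import math

def entropy(logits):
    """Shannon entropy (nats) of softmax(logits) for one row of
    unnormalized log probabilities. Sketch: no mask, no batch."""
    m = max(logits)                      # stabilize the exponentials
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0)
```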
def __call__(self, predicted_trees , gold_trees ) : # type: ignore
u"""
Parameters
----------
predicted_trees : ``List[Tree]``
A list of predicted NLTK Trees to compute score for.
gold_trees : ``List[Tree]``
A list of gold NLTK... |
Parameters
----------
predicted_trees : ``List[Tree]``
A list of predicted NLTK Trees to compute score for.
gold_trees : ``List[Tree]``
A list of gold NLTK Trees to use as a reference.
| __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/evalb_bracketing_scorer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/evalb_bracketing_scorer.py | MIT |
def get_metric(self, reset = False):
u"""
Returns
-------
The average precision, recall and f1.
"""
recall = self._correct_predicted_brackets / self._gold_brackets if self._gold_brackets > 0 else 0.0
precision = self._correct_predicted_brackets / self._predi... |
Returns
-------
The average precision, recall and f1.
| get_metric | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/evalb_bracketing_scorer.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/evalb_bracketing_scorer.py | MIT |
def __call__(self,
predictions ,
gold_labels ,
mask = None):
u"""
Parameters
----------
predictions : ``torch.Tensor``, required.
A tensor of predictions of shape (batch_size,... |
Parameters
----------
predictions : ``torch.Tensor``, required.
A tensor of predictions of shape (batch_size, ..., num_classes).
gold_labels : ``torch.Tensor``, required.
A tensor of integer class label of shape (batch_size, ...). It must be the same
... | __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/f1_measure.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/f1_measure.py | MIT |
def get_metric(self, reset = False):
u"""
Returns
-------
A tuple of the following metrics based on the accumulated count statistics:
precision : float
recall : float
f1-measure : float
"""
precision = float(self._true_positives) / float(self... |
Returns
-------
A tuple of the following metrics based on the accumulated count statistics:
precision : float
recall : float
f1-measure : float
| get_metric | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/f1_measure.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/f1_measure.py | MIT |
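The F1 row derives precision, recall, and F1 from accumulated counts. A sketch with a small epsilon guarding the zero denominators (the epsilon trick matches common practice; treat the exact constant as an assumption):

```python
def precision_recall_f1(true_positives, false_positives, false_negatives):
    """Precision, recall, and F1 from accumulated count statistics."""
    precision = true_positives / float(true_positives + false_positives + 1e-13)
    recall = true_positives / float(true_positives + false_negatives + 1e-13)
    f1 = 2.0 * precision * recall / (precision + recall + 1e-13)
    return precision, recall, f1
```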
def __call__(self,
predictions ,
gold_labels ,
mask ):
u"""
Parameters
----------
predictions : ``torch.Tensor``, required.
A tensor of predictions.
gold_labels : ``tor... |
Parameters
----------
predictions : ``torch.Tensor``, required.
A tensor of predictions.
gold_labels : ``torch.Tensor``, required.
A tensor corresponding to some gold label to evaluate against.
mask: ``torch.Tensor``, optional (default = None).
... | __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/metric.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/metric.py | MIT |
def __init__(self,
vocabulary ,
tag_namespace = u"tags",
ignore_classes = None,
label_encoding = u"BIO") :
u"""
Parameters
----------
vocabulary : ``Vocabulary``, required.
... |
Parameters
----------
vocabulary : ``Vocabulary``, required.
A vocabulary containing the tag namespace.
tag_namespace : str, required.
This metric assumes that a BIO format is used in which the
labels are of the format: ["B-LABEL", "I-LABEL"].
... | __init__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/span_based_f1_measure.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/span_based_f1_measure.py | MIT |
def __call__(self,
predictions ,
gold_labels ,
mask = None,
prediction_map = None):
u"""
Parameters
----------
predictions : ``torch.Tensor``, req... |
Parameters
----------
predictions : ``torch.Tensor``, required.
A tensor of predictions of shape (batch_size, sequence_length, num_classes).
gold_labels : ``torch.Tensor``, required.
A tensor of integer class label of shape (batch_size, sequence_length). It must ... | __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/span_based_f1_measure.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/span_based_f1_measure.py | MIT |
def _handle_continued_spans(spans ) :
u"""
        The official CONLL 2012 evaluation script for SRL treats continued spans (i.e. spans which
have a `C-` prepended to another valid tag) as part of the span that they are continuing.
This is basically a... |
        The official CONLL 2012 evaluation script for SRL treats continued spans (i.e. spans which
have a `C-` prepended to another valid tag) as part of the span that they are continuing.
This is basically a massive hack to allow SRL models which produce a linear sequence of
predictions to do s... | _handle_continued_spans | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/span_based_f1_measure.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/span_based_f1_measure.py | MIT |
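The continuation-merging trick described above can be sketched in a few lines. This is my own simplified reconstruction — it assumes `(label, (start, end))` tuples and at most one base span per label — not the exact AllenNLP code:

```python
def merge_continued_spans(spans):
    # Fold each "C-X" continuation into the base "X" span by widening
    # the base span to cover both (min start, max end).
    base = {label: (start, end)
            for label, (start, end) in spans if not label.startswith("C-")}
    for label, (start, end) in spans:
        if label.startswith("C-") and label[2:] in base:
            b_start, b_end = base[label[2:]]
            base[label[2:]] = (min(b_start, start), max(b_end, end))
    return {(label, span) for label, span in base.items()}

print(merge_continued_spans([("ARG1", (2, 3)), ("V", (4, 4)), ("C-ARG1", (5, 6))]))
# the C-ARG1 span is absorbed into ARG1, giving (2, 6)
```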
def get_metric(self, reset = False):
u"""
Returns
-------
A Dict per label containing following the span based metrics:
precision : float
recall : float
f1-measure : float
Additionally, an ``overall`` key is included, which provides the precision,
... |
Returns
-------
A Dict per label containing following the span based metrics:
precision : float
recall : float
f1-measure : float
Additionally, an ``overall`` key is included, which provides the precision,
recall and f1-measure for all spans.
| get_metric | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/span_based_f1_measure.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/span_based_f1_measure.py | MIT |
def __call__(self, best_span_string, answer_strings):
u"""
Parameters
----------
value : ``float``
The value to average.
"""
exact_match = squad_eval.metric_max_over_ground_truths(
squad_eval.exact_match_score,
best_span_string,... |
Parameters
----------
value : ``float``
The value to average.
| __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/squad_em_and_f1.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/squad_em_and_f1.py | MIT |
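`metric_max_over_ground_truths` (from the official SQuAD evaluation script) simply scores the prediction against every reference answer and keeps the best score. A sketch with a toy exact-match metric — the lowercasing here is illustrative; the real script also strips articles and punctuation:

```python
def metric_max_over_ground_truths(metric_fn, prediction, ground_truths):
    # A prediction counts as correct if it matches ANY reference answer.
    return max(metric_fn(prediction, gt) for gt in ground_truths)

def exact_match(prediction, gold):
    return float(prediction.strip().lower() == gold.strip().lower())

print(metric_max_over_ground_truths(exact_match, "Paris",
                                    ["the capital", "paris"]))  # 1.0
```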
def get_metric(self, reset = False) :
u"""
Returns
-------
Average exact match and F1 score (in that order) as computed by the official SQuAD script
over all inputs.
"""
exact_match = self._total_em / self._count if self._count > 0 else... |
Returns
-------
Average exact match and F1 score (in that order) as computed by the official SQuAD script
over all inputs.
| get_metric | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/squad_em_and_f1.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/squad_em_and_f1.py | MIT |
def __call__(self, logical_form , example_lisp_string ): # type: ignore
u"""
Parameters
----------
example_lisp_string : ``str``
The value to average.
"""
denotation_correct = self.evaluate_logical_form(logical_form, example_lisp_string)
if de... |
Parameters
----------
example_lisp_string : ``str``
The value to average.
| __call__ | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/wikitables_accuracy.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/wikitables_accuracy.py | MIT |
def _create_sempre_executor(self) :
u"""
Creates a server running SEMPRE that we can send logical forms to for evaluation. This
uses inter-process communication, because SEMPRE is java code. We also need to be careful
to clean up the process when our program exits.
"""
... |
Creates a server running SEMPRE that we can send logical forms to for evaluation. This
uses inter-process communication, because SEMPRE is java code. We also need to be careful
to clean up the process when our program exits.
| _create_sempre_executor | python | plasticityai/magnitude | pymagnitude/third_party/allennlp/training/metrics/wikitables_accuracy.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/allennlp/training/metrics/wikitables_accuracy.py | MIT |
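The process-lifecycle pattern the docstring describes — start a helper process lazily, and make sure it is killed at interpreter exit — can be sketched with the standard library. `ensure_server` and the dummy command are illustrative, not AllenNLP's actual API:

```python
import atexit
import subprocess
import sys

_process = None

def ensure_server(command):
    # Start the external process on first use and register a cleanup
    # hook so it does not outlive our program.
    global _process
    if _process is None:
        _process = subprocess.Popen(command)
        atexit.register(_process.terminate)
    return _process

# Stand-in for the Java server: a child that just sleeps.
proc = ensure_server([sys.executable, "-c", "import time; time.sleep(60)"])
print(proc.poll())  # None while the helper is still running
```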
def clear(self):
"""Remove all entries from the cache"""
with self.lock:
# If really clear()ing a full cache, clean up self._data first to
            # give garbage collection a chance to reduce memory usage.
# Instantiating "[_MARKER] * size" will temporarily have 2 lists
... | Remove all entries from the cache | clear | python | plasticityai/magnitude | pymagnitude/third_party/repoze/lru/__init__.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/repoze/lru/__init__.py | MIT |
def get(self, key, default=None):
"""Return value for key. If not in cache, return default"""
self.lookups += 1
try:
pos, val = self._data[key]
self.hits += 1
except KeyError:
self.misses += 1
return default
self.clock_refs[pos] = T... | Return value for key. If not in cache, return default | get | python | plasticityai/magnitude | pymagnitude/third_party/repoze/lru/__init__.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/repoze/lru/__init__.py | MIT |
def put(self, key, val):
"""Add key to the cache with value val"""
# These do not change or they are just references, no need for locking.
maxpos = self.maxpos
clock_refs = self.clock_refs
clock_keys = self.clock_keys
data = self._data
with self.lock:
... | Add key to the cache with value val | put | python | plasticityai/magnitude | pymagnitude/third_party/repoze/lru/__init__.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/repoze/lru/__init__.py | MIT |
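The `clock_refs`/`clock_keys` names above come from the CLOCK eviction scheme repoze.lru uses: each slot carries a reference bit, and a "hand" sweeps the slots, clearing bits until it finds an unreferenced victim. A stripped-down sketch of that idea — my own simplification, without repoze.lru's locking or hit/miss statistics:

```python
class ClockCache:
    def __init__(self, size):
        self.keys = [None] * size    # slot -> key
        self.refs = [False] * size   # slot -> "recently used" bit
        self.data = {}               # key -> (slot, value)
        self.hand = 0                # next slot the clock hand inspects
        self.size = size

    def get(self, key, default=None):
        entry = self.data.get(key)
        if entry is None:
            return default
        slot, value = entry
        self.refs[slot] = True       # a hit gives the entry a second chance
        return value

    def put(self, key, value):
        if key in self.data:         # update in place, keep the slot
            slot, _ = self.data[key]
            self.data[key] = (slot, value)
            self.refs[slot] = True
            return
        # Sweep: clear reference bits until an unreferenced slot turns up.
        while self.refs[self.hand]:
            self.refs[self.hand] = False
            self.hand = (self.hand + 1) % self.size
        slot = self.hand
        if self.keys[slot] is not None:
            del self.data[self.keys[slot]]   # evict the old occupant
        self.keys[slot] = key
        self.data[key] = (slot, value)
        self.refs[slot] = True
        self.hand = (self.hand + 1) % self.size

cache = ClockCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)       # full: the hand sweeps and evicts an older entry
print(cache.get("c"))   # 3
```

CLOCK only approximates LRU, but it avoids reordering a linked list on every hit, which is why repoze.lru uses it.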
def clear(self):
"""Remove all entries from the cache"""
with self.lock:
# If really clear()ing a full cache, clean up self._data first to
            # give garbage collection a chance to reduce memory usage.
# Instantiating "[_MARKER] * size" will temporarily have 2 lists
... | Remove all entries from the cache | clear | python | plasticityai/magnitude | pymagnitude/third_party/repoze/lru/__init__.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/repoze/lru/__init__.py | MIT |
def get(self, key, default=None):
"""Return value for key. If not in cache or expired, return default"""
self.lookups += 1
try:
pos, val, expires = self._data[key]
except KeyError:
self.misses += 1
return default
if expires > time.time():
... | Return value for key. If not in cache or expired, return default | get | python | plasticityai/magnitude | pymagnitude/third_party/repoze/lru/__init__.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/repoze/lru/__init__.py | MIT |
def put(self, key, val, timeout=None):
"""Add key to the cache with value val
key will expire in $timeout seconds. If key is already in cache, val
and timeout will be updated.
"""
# These do not change or they are just references, no need for locking.
maxpos = self.maxpo... | Add key to the cache with value val
key will expire in $timeout seconds. If key is already in cache, val
and timeout will be updated.
| put | python | plasticityai/magnitude | pymagnitude/third_party/repoze/lru/__init__.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/repoze/lru/__init__.py | MIT |
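The timeout behaviour described above — each entry stores an absolute expiry time, and stale reads behave as misses — can be sketched without the slot machinery. Names here are mine; repoze.lru's `ExpiringLRUCache` additionally reuses the CLOCK eviction shown earlier:

```python
import time

class ExpiringCache:
    def __init__(self, default_timeout=3600):
        self.default_timeout = default_timeout
        self._data = {}              # key -> (value, absolute expiry time)

    def put(self, key, value, timeout=None):
        if timeout is None:
            timeout = self.default_timeout
        self._data[key] = (value, time.time() + timeout)

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        value, expires = entry
        if expires <= time.time():
            del self._data[key]      # lazily drop the stale entry
            return default
        return value

cache = ExpiringCache()
cache.put("fresh", 1)
cache.put("stale", 2, timeout=-1)    # expires immediately
print(cache.get("fresh"), cache.get("stale", "miss"))  # 1 miss
```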
def get_default_args(remove_self, func, rargs, kwargs):
"""
returns a dictionary of arg_name:default_values for the input function
"""
if hasattr(inspect, 'getfullargspec'):
argspec = inspect.getfullargspec(func)
arg... |
returns a dictionary of arg_name:default_values for the input function
| get_default_args | python | plasticityai/magnitude | pymagnitude/third_party/repoze/lru/__init__.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/repoze/lru/__init__.py | MIT |
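The truncated body above relies on the fact that `inspect.getfullargspec` aligns `defaults` with the *last* positional arguments. A self-contained sketch of that mapping, without the original's `remove_self`/`rargs`/`kwargs` handling (and ignoring keyword-only defaults):

```python
import inspect

def get_default_args(func):
    spec = inspect.getfullargspec(func)
    defaults = spec.defaults or ()
    if not defaults:
        return {}
    # defaults line up with the trailing entries of spec.args
    return dict(zip(spec.args[-len(defaults):], defaults))

def example(a, b=2, c="x"):
    pass

print(get_default_args(example))  # {'b': 2, 'c': 'x'}
```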
def __init__(self, maxsize=None, timeout=_DEFAULT_TIMEOUT):
"""Create cache decorator factory.
- maxsize : the default size for created caches.
        - timeout : the default expiration time for created caches.
"""
self._maxsize = maxsize
self._timeout = timeout
self._c... | Create cache decorator factory.
- maxsize : the default size for created caches.
        - timeout : the default expiration time for created caches.
| __init__ | python | plasticityai/magnitude | pymagnitude/third_party/repoze/lru/__init__.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/repoze/lru/__init__.py | MIT |
def lrucache(self, name=None, maxsize=None):
"""Named arguments:
- name (optional) is a string, and should be unique amongst all caches
- maxsize (optional) is an int, overriding any default value set by
the constructor
"""
name, maxsize, _ = self._resolve_set... | Named arguments:
- name (optional) is a string, and should be unique amongst all caches
- maxsize (optional) is an int, overriding any default value set by
the constructor
| lrucache | python | plasticityai/magnitude | pymagnitude/third_party/repoze/lru/__init__.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/repoze/lru/__init__.py | MIT |
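The decorator-factory pattern above — named caches that can later be cleared by name — looks roughly like this. Eviction statistics and thread safety are stripped out, and the crude FIFO eviction is my own stand-in for the real LRU:

```python
import functools

class CacheMaker:
    def __init__(self):
        self._caches = {}            # name -> cache dict

    def lrucache(self, name, maxsize=128):
        if name in self._caches:
            raise KeyError("cache %r already registered" % name)
        cache = self._caches[name] = {}

        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args):
                if args not in cache:
                    if len(cache) >= maxsize:
                        cache.pop(next(iter(cache)))  # crude FIFO eviction
                    cache[args] = func(*args)
                return cache[args]
            return wrapper
        return decorator

    def clear(self, *names):
        # With no names, clear every registered cache.
        for name in names or list(self._caches):
            self._caches[name].clear()

maker = CacheMaker()
calls = []

@maker.lrucache("squares", maxsize=32)
def square(x):
    calls.append(x)
    return x * x

square(3); square(3)
print(calls)   # the second call was served from the cache
```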
def clear(self, *names):
"""Clear the given cache(s).
If no 'names' are passed, clear all caches.
"""
if len(names) == 0:
names = self._cache.keys()
for name in names:
self._cache[name].clear() | Clear the given cache(s).
If no 'names' are passed, clear all caches.
| clear | python | plasticityai/magnitude | pymagnitude/third_party/repoze/lru/__init__.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/repoze/lru/__init__.py | MIT |
def mytrace(cursor, statement, bindings):
"Called just before executing each statement"
print ("SQL:",statement)
if bindings:
print ("Bindings:",bindings)
return True # if you return False then execution is aborted | Called just before executing each statement | mytrace | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/example-code.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/example-code.py | MIT |
def authorizer(operation, paramone, paramtwo, databasename, triggerorview):
"""Called when each operation is prepared. We can return SQLITE_OK, SQLITE_DENY or
SQLITE_IGNORE"""
# find the operation name
print (apsw.mapping_authorizer_function[operation], paramone, paramtwo, databasename, triggerorview)
... | Called when each operation is prepared. We can return SQLITE_OK, SQLITE_DENY or
SQLITE_IGNORE | authorizer | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/example-code.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/example-code.py | MIT |
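The same authorizer hook exists in the standard library's `sqlite3` module, with a similar (but not identical) callback signature to APSW's. A runnable sketch that hides a column by returning `SQLITE_IGNORE`, which makes reads of that column come back as NULL:

```python
import sqlite3

def authorizer(action, arg1, arg2, dbname, source):
    # For SQLITE_READ, arg1 is the table and arg2 the column being read.
    if action == sqlite3.SQLITE_READ and arg2 == "salary":
        return sqlite3.SQLITE_IGNORE   # reads of this column yield NULL
    return sqlite3.SQLITE_OK

con = sqlite3.connect(":memory:")
con.execute("create table staff(name, salary)")
con.execute("insert into staff values ('alice', 90000)")
con.set_authorizer(authorizer)
row = con.execute("select name, salary from staff").fetchone()
print(row)  # the salary column is masked to None
```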
def testSanity(self):
"Check all parts compiled and are present"
# check some error codes etc are present - picked first middle and last from lists in code
apsw.SQLError
apsw.MisuseError
apsw.NotADBError
apsw.ThreadingViolationError
apsw.BindingsError
apsw... | Check all parts compiled and are present | testSanity | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testMemoryLeaks(self):
"MemoryLeaks: Run with a memory profiler such as valgrind and debug Python"
# make and toss away a bunch of db objects, cursors, functions etc - if you use memory profiling then
# simple memory leaks will show up
c=self.db.cursor()
c.execute("create tab... | MemoryLeaks: Run with a memory profiler such as valgrind and debug Python | testMemoryLeaks | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testExecTracing(self):
"Verify tracing of executed statements and bindings"
self.db.setexectrace(None)
c=self.db.cursor()
        cmds=[] # this is manipulated in tracefunc
def tracefunc(cursor, cmd, bindings):
cmds.append( (cmd, bindings) )
return True
... | Verify tracing of executed statements and bindings | testExecTracing | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def strnumcollate(s1, s2):
"return -1 if s1<s2, +1 if s1>s2 else 0. Items are string head and numeric tail"
# split values into two parts - the head and the numeric tail
values=[s1,s2]
for vn,v in enumerate(values):
for i in range(len(v),0,-1):
... | return -1 if s1<s2, +1 if s1>s2 else 0. Items are string head and numeric tail | strnumcollate | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
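The head/number split that `strnumcollate` performs is the classic "natural sort" trick. Expressed as a sort key rather than a comparator (my own regex-based reconstruction of the same split):

```python
import re

def strnum_key(value):
    # Split "file10" into ("file", 10) so numeric tails compare numerically.
    head, tail = re.match(r"(.*?)(\d*)$", value).groups()
    return (head, int(tail) if tail else -1)

print(sorted(["file10", "file2", "file1"], key=strnum_key))
# numeric tails sort as numbers, not strings: file1, file2, file10
```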
def testStringsWithNulls(self):
"Verify that strings with nulls in them are handled correctly"
c=self.db.cursor()
c.execute("create table foo(row,str)")
vals=("a simple string",
"a simple string\0with a null",
"a string\0with two\0nulls",
"or ev... | Verify that strings with nulls in them are handled correctly | testStringsWithNulls | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testMakeSqliteMsgFromException(self):
"Test C function that converts exception into SQLite error code"
class Source:
def Create1(self, *args):
e=apsw.IOError()
e.extendedresult=apsw.SQLITE_IOERR_ACCESS
raise e
def Create2(self,... | Test C function that converts exception into SQLite error code | testMakeSqliteMsgFromException | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testClosingChecks(self):
"Check closed connection is correctly detected"
cur=self.db.cursor()
rowid=next(cur.execute("create table foo(x blob); insert into foo values(zeroblob(98765)); select rowid from foo"))[0]
blob=self.db.blobopen("main", "foo", "x", rowid, True)
blob.clo... | Check closed connection is correctly detected | testClosingChecks | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testLargeObjects(self):
"Verify handling of large strings/blobs (>2GB) [Python 2.5+, 64 bit platform]"
if not is64bit:
return
# For binary/blobs I use an anonymous area slightly larger than 2GB chunk of memory, but don't touch any of it
import mmap
f=mmap.mmap(-1,... | Verify handling of large strings/blobs (>2GB) [Python 2.5+, 64 bit platform] | testLargeObjects | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testErrorCodes(self):
"Verify setting of result codes on error/exception"
fname=TESTFILEPREFIX+"gunk-errcode-test"
write_whole_file(fname, "wb", b("A")*8192)
db=None
try:
# The exception could be thrown on either of these lines
# depending on several f... | Verify setting of result codes on error/exception | testErrorCodes | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testIssue4(self):
"Issue 4: Error messages and SQLite ticket 3063"
connection = apsw.Connection(":memory:")
cursor = connection.cursor()
cursor.execute("CREATE TABLE A_TABLE (ID ABC PRIMARY KEY NOT NULL)")
try:
cursor.execute("INSERT INTO A_TABLE VALUES (NULL)")
... | Issue 4: Error messages and SQLite ticket 3063 | testIssue4 | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testIssue15(self):
"Issue 15: Release GIL during calls to prepare"
self.db.cursor().execute("create table foo(x)")
self.db.cursor().execute("begin exclusive")
db2=apsw.Connection(TESTFILEPREFIX+"testdb")
db2.setbusytimeout(30000)
t=ThreadRunner(db2.cursor().execute, "... | Issue 15: Release GIL during calls to prepare | testIssue15 | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testIssue31(self):
"Issue 31: GIL & SQLite mutexes with heavy threading, threadsafe errors from SQLite"
randomnumbers=[random.randint(0,10000) for _ in range(10000)]
cursor=self.db.cursor()
cursor.execute("create table foo(x)")
cursor.execute("begin")
for num in rand... | Issue 31: GIL & SQLite mutexes with heavy threading, threadsafe errors from SQLite | testIssue31 | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testIssue50(self):
"Issue 50: Check Blob.read return value on eof"
# first get what the system returns on eof
if iswindows:
f=open("nul", "rb")
else:
f=open("/dev/null", "rb")
try:
# deliberately hit eof
f.read()
# n... | Issue 50: Check Blob.read return value on eof | testIssue50 | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testIssue142(self):
"Issue 142: bytes from system during dump"
orig_strftime=time.strftime
orig_getuser=getpass.getuser
fh=[]
try:
time.strftime=lambda arg: BYTES(r"gjkTIMEJUNKhgjhg\xfe\xdf")
getpass.getuser=lambda : BYTES(r"\x81\x82\x83gjkhgUSERJUNKjh... | Issue 142: bytes from system during dump | testIssue142 | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testTicket2158(self):
"Check we are not affected by SQLite ticket #2158"
# https://sqlite.org/cvstrac/tktview?tn=2158
def dummy(x,y):
if x<y: return -1
if x>y: return 1
return 0
self.db.createcollation("dummy", dummy)
cur=self.db.cursor()
... | Check we are not affected by SQLite ticket #2158 | testTicket2158 | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testIssue199(self):
"Backup API should accept Connection subclasses"
# https://github.com/rogerbinns/apsw/issues/199
class subclass(apsw.Connection):
pass
dbsub=subclass("")
dbsub.cursor().execute("create table a(b);insert into a values(3);")
b=self.db.b... | Backup API should accept Connection subclasses | testIssue199 | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testPysqliteRecursiveIssue(self):
"Check an issue that affected pysqlite"
# https://code.google.com/p/pysqlite/source/detail?r=260ee266d6686e0f87b0547c36b68a911e6c6cdb
cur=self.db.cursor()
cur.execute("create table a(x); create table b(y);")
def foo():
yield (1,)
... | Check an issue that affected pysqlite | testPysqliteRecursiveIssue | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testStatementCacheZeroSize(self):
"Rerun statement cache tests with a zero sized/disabled cache"
self.db=apsw.Connection(TESTFILEPREFIX+"testdb", statementcachesize=-1)
self.testStatementCache(-1) | Rerun statement cache tests with a zero sized/disabled cache | testStatementCacheZeroSize | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testWikipedia(self):
"Use front page of wikipedia to check unicode handling"
# the text also includes characters that can't be represented in 16 bits
text=u("""WIKIPEDIA\nEnglish\nThe Free Encyclopedia\n2 386 000+ articles\nDeutsch\nDie freie Enzyklop\\u00e4die\n753 000+ Artikel\nFran\\u00e7... | Use front page of wikipedia to check unicode handling | testWikipedia | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testPickle(self, module=None):
"Verify data etc can be pickled"
if module==None:
import pickle
self.testPickle(pickle)
try:
import cPickle
self.testPickle(cPickle)
except ImportError:
pass
ret... | Verify data etc can be pickled | testPickle | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testBlobReadError(self):
"Ensure blob read errors are handled well"
cur=self.db.cursor()
cur.execute("create table ioerror (x, blob)")
cur.execute("insert into ioerror (rowid,x,blob) values (2,3,x'deadbeef')")
blob=self.db.blobopen("main", "ioerror", "blob", 2, False)
... | Ensure blob read errors are handled well | testBlobReadError | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testVFSWithWAL(self):
"Verify VFS using WAL where possible"
apsw.connection_hooks.append(lambda c: c.cursor().execute("pragma journal_mode=WAL"))
try:
self.testVFS()
finally:
apsw.connection_hooks.pop() | Verify VFS using WAL where possible | testVFSWithWAL | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testzzFaultInjection(self):
"Deliberately inject faults to exercise all code paths"
if not hasattr(apsw, "faultdict"):
return
def dummy(*args):
1/0
def dummy2(*args):
return 7
# The 1/0 in these tests is to cause a ZeroDivisionError so
... | Deliberately inject faults to exercise all code paths | testzzFaultInjection | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testzzForkChecker(self):
"Test detection of using objects across fork"
# need to free up everything that already exists
self.db.close()
gc.collect()
# install it
apsw.fork_checker()
# return some objects
def getstuff():
db=apsw.Connection("... | Test detection of using objects across fork | testzzForkChecker | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def testdb(filename=TESTFILEPREFIX+"testdb2", vfsname="apswtest", closedb=True, mode=None, attachdb=None):
"This method causes all parts of a vfs to be executed"
gc.collect() # free any existing db handles
for suf in "", "-journal", "x", "x-journal":
deletefile(filename+suf)
db=apsw.Connection... | This method causes all parts of a vfs to be executed | testdb | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def setup(write=write):
"""Call this if importing this test suite as it will ensure tests
we can't run are removed etc. It will also print version
information."""
print_version_info(write)
if hasattr(apsw, "config"):
apsw.config(apsw.SQLITE_CONFIG_MEMSTATUS, True) # ensure memory tracking... | Call this if importing this test suite as it will ensure tests
we can't run are removed etc. It will also print version
information. | setup | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tests.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tests.py | MIT |
def fmtfloat(n, decimals=3, total=None):
"Work around borken python float formatting"
s="%0.*f" % (decimals, n)
if total:
s=(" "*total+s)[-total:]
    return s | Work around broken python float formatting | fmtfloat | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tools/apswtrace.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tools/apswtrace.py | MIT |
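The `%0.*f` idiom above takes the precision from an argument rather than hard-coding it in the format string (like printf's `%.*f`), and the optional `total` right-justifies the result into a fixed column width. A runnable restatement of the same function:

```python
def fmtfloat(n, decimals=3, total=None):
    # "%0.*f" reads the precision from the argument tuple.
    s = "%0.*f" % (decimals, n)
    if total:
        s = (" " * total + s)[-total:]   # right-justify into `total` columns
    return s

print(repr(fmtfloat(3.14159)))           # '3.142'
print(repr(fmtfloat(2.5, 1, total=8)))   # '     2.5'
```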
def __init__(self, stdin=None, stdout=None, stderr=None, encoding="utf8", args=None, db=None):
"""Create instance, set defaults and do argument processing."""
super(Shell, self).__init__()
# The parameter doc has to be in main class doc as sphinx
# ignores any described here
self... | Create instance, set defaults and do argument processing. | __init__ | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tools/shell.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tools/shell.py | MIT |
def _ensure_db(self):
"The database isn't opened until first use. This function ensures it is now open."
if not self._db:
if not self.dbfilename:
self.dbfilename=":memory:"
self._db=apsw.Connection(self.dbfilename, flags=apsw.SQLITE_OPEN_URI | apsw.SQLITE_OPEN_RE... | The database isn't opened until first use. This function ensures it is now open. | _ensure_db | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tools/shell.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tools/shell.py | MIT |
def _set_db(self, newv):
"Sets the open database (or None) and filename"
(db, dbfilename)=newv
if self._db:
self._db.close(True)
self._db=None
self._db=db
self.dbfilename=dbfilename | Sets the open database (or None) and filename | _set_db | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tools/shell.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tools/shell.py | MIT |
def process_args(self, args):
"""Process command line options specified in args. It is safe to
call this multiple times. We try to be compatible with SQLite shell
argument parsing.
:param args: A list of string options. Do not include the
program as args[0]
:retur... | Process command line options specified in args. It is safe to
call this multiple times. We try to be compatible with SQLite shell
argument parsing.
:param args: A list of string options. Do not include the
program as args[0]
:returns: A tuple of (databasefilename, initfil... | process_args | python | plasticityai/magnitude | pymagnitude/third_party/_apsw/tools/shell.py | https://github.com/plasticityai/magnitude/blob/master/pymagnitude/third_party/_apsw/tools/shell.py | MIT |