hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
18b971f458f568de22a1bea39a8dbe0fcc8c9194 | 41 | py | Python | stackable/__init__.py | productaize/stackable | 1172d255bfbf79a65a9509c04f470f1049d748bf | [
"MIT"
] | 1 | 2017-10-19T09:21:54.000Z | 2017-10-19T09:21:54.000Z | stackable/__init__.py | productaize/stackable | 1172d255bfbf79a65a9509c04f470f1049d748bf | [
"MIT"
] | 1 | 2017-10-28T17:51:36.000Z | 2017-10-28T17:51:36.000Z | stackable/__init__.py | miraculixx/stackable | e21787077e004879a0a1f6fe46be61289794d8db | [
"MIT"
] | null | null | null | from .stackable import StackableSettings | 41 | 41 | 0.878049 | 4 | 41 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
18ffd0ea67075d08637e6a35706d5c2b8ad0a536 | 116 | py | Python | src/tools/email_validator.py | jsmuniz7/wishlist | 9ba4d3d8877e1838588aaf80bdd7a1ec7dd0400d | [
"MIT"
] | null | null | null | src/tools/email_validator.py | jsmuniz7/wishlist | 9ba4d3d8877e1838588aaf80bdd7a1ec7dd0400d | [
"MIT"
] | null | null | null | src/tools/email_validator.py | jsmuniz7/wishlist | 9ba4d3d8877e1838588aaf80bdd7a1ec7dd0400d | [
"MIT"
] | null | null | null | import re
EMAIL_REGEX = re.compile(r"[^@]+@[^@]+\.[^@]+")
def is_valid(email):
return EMAIL_REGEX.match(email) | 19.333333 | 47 | 0.62069 | 16 | 116 | 4.3125 | 0.6875 | 0.289855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12069 | 116 | 6 | 48 | 19.333333 | 0.676471 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
7a18f20fd1d1b80a54aaddfcc12071946767998c | 219 | py | Python | iterator_pipe.py | AshwinParanjape/pipedream | 2828a5df5d45655da485351d0bdae94f2e084781 | [
"MIT"
] | null | null | null | iterator_pipe.py | AshwinParanjape/pipedream | 2828a5df5d45655da485351d0bdae94f2e084781 | [
"MIT"
] | null | null | null | iterator_pipe.py | AshwinParanjape/pipedream | 2828a5df5d45655da485351d0bdae94f2e084781 | [
"MIT"
] | null | null | null | class PipeFunction():
def __init__(self, func):
self.func = func
def __call__(self, *args, **kwargs):
return self.func(*args, **kwargs)
def __ror__(self, other):
return self(other)
| 21.9 | 41 | 0.598174 | 26 | 219 | 4.576923 | 0.461538 | 0.201681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.26484 | 219 | 9 | 42 | 24.333333 | 0.73913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.285714 | 0.857143 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
e1380b3bccde4dfc81a9b77c6646d1d2ef5d8399 | 6,134 | py | Python | tfbasemodels/utils/layer_utils.py | ldfrancis/tfbasemodels | cbdd13f907dfd4c32b7d5aa9e147bd71b0243368 | [
"MIT"
] | null | null | null | tfbasemodels/utils/layer_utils.py | ldfrancis/tfbasemodels | cbdd13f907dfd4c32b7d5aa9e147bd71b0243368 | [
"MIT"
] | 1 | 2021-02-15T22:16:23.000Z | 2021-02-15T22:16:23.000Z | tfbasemodels/utils/layer_utils.py | ldfrancis/tfbasemodels | cbdd13f907dfd4c32b7d5aa9e147bd71b0243368 | [
"MIT"
] | null | null | null | import tensorflow as tf
def conv2d_bn(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None,
dilation_rate=(1, 1), groups=1, activation=None, use_bias=True,
kernel_initializer='glorot_uniform', bias_initializer='zeros',
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, bias_constraint=None, axis=-1, momentum=0.99, epsilon=0.001, center=True,
scale=True,
beta_initializer='zeros', gamma_initializer='ones',
moving_mean_initializer='zeros', moving_variance_initializer='ones',
beta_regularizer=None, gamma_regularizer=None, beta_constraint=None,
gamma_constraint=None, renorm=False, renorm_clipping=None, renorm_momentum=0.99,
fused=None, trainable=True, virtual_batch_size=None, adjustment=None, name=None, bn=True,
**kwargs):
"""Performs convolution-->batch_norm-->activation
"""
def _op(x):
if name:
conv_name = name + "_conv"
bn_name = name + "_bn"
act_name = name + "_act"
x = tf.keras.layers.Conv2D(filters, kernel_size, strides=strides, padding=padding, data_format=data_format,
dilation_rate=dilation_rate, groups=groups, use_bias=use_bias,
kernel_initializer=kernel_initializer, bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer, bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint, bias_constraint=bias_constraint,
name=conv_name)(x)
if bn:
x = tf.keras.layers.BatchNormalization(axis=3, momentum=momentum, epsilon=epsilon, center=center,
scale=scale,
beta_initializer=beta_initializer, gamma_initializer=gamma_initializer,
moving_mean_initializer=moving_mean_initializer,
moving_variance_initializer=moving_variance_initializer,
beta_regularizer=beta_regularizer, gamma_regularizer=gamma_regularizer,
beta_constraint=beta_constraint,
gamma_constraint=gamma_constraint, renorm=renorm,
renorm_clipping=renorm_clipping, renorm_momentum=renorm_momentum,
fused=fused, trainable=trainable, virtual_batch_size=virtual_batch_size,
adjustment=adjustment, name=bn_name,
)(x)
if activation: x = activation(x, name=act_name)
return x
return _op
def bn_conv2d(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None,
dilation_rate=(1, 1), groups=1, activation=None, use_bias=True,
kernel_initializer='glorot_uniform', bias_initializer='zeros',
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, bias_constraint=None, axis=-1, momentum=0.99, epsilon=0.001, center=True,
scale=True,
beta_initializer='zeros', gamma_initializer='ones',
moving_mean_initializer='zeros', moving_variance_initializer='ones',
beta_regularizer=None, gamma_regularizer=None, beta_constraint=None,
gamma_constraint=None, renorm=False, renorm_clipping=None, renorm_momentum=0.99,
fused=None, trainable=True, virtual_batch_size=None, adjustment=None, name=None,
**kwargs):
"""Performs batch_norm-->activation-->convolution. used in a residual block with
preactivation
"""
def _op(x):
if name:
conv_name = name + "_conv"
bn_name = name + "_bn"
act_name = name + "_act"
x = tf.keras.layers.BatchNormalization(axis=3, momentum=momentum, epsilon=epsilon, center=center,
scale=scale, beta_initializer=beta_initializer,
gamma_initializer=gamma_initializer,
moving_mean_initializer=moving_mean_initializer,
moving_variance_initializer=moving_variance_initializer,
beta_regularizer=beta_regularizer, gamma_regularizer=gamma_regularizer,
beta_constraint=beta_constraint, gamma_constraint=gamma_constraint,
renorm=renorm, renorm_clipping=renorm_clipping,
renorm_momentum=renorm_momentum, fused=fused, trainable=trainable,
virtual_batch_size=virtual_batch_size, adjustment=adjustment,
name=conv_name, )(x)
if activation: x = activation(x, name=act_name)
x = tf.keras.layers.Conv2D(filters, kernel_size, strides=strides, padding=padding, data_format=data_format,
dilation_rate=dilation_rate, groups=groups, use_bias=use_bias,
kernel_initializer=kernel_initializer, bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer, bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer, kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint, name=bn_name)(x)
return x
return _op
| 63.237113 | 123 | 0.571405 | 567 | 6,134 | 5.871252 | 0.132275 | 0.045059 | 0.037849 | 0.028837 | 0.937519 | 0.931211 | 0.931211 | 0.931211 | 0.931211 | 0.931211 | 0 | 0.009613 | 0.355559 | 6,134 | 96 | 124 | 63.895833 | 0.832532 | 0.022498 | 0 | 0.666667 | 0 | 0 | 0.018087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051282 | false | 0 | 0.012821 | 0 | 0.115385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e1586cc7ff2a99644c18e6f638fbb4db6d3b73c8 | 12,393 | py | Python | data/sampler/multi_scale_sampler.py | apple/ml-cvnets | 84d992f413e52c0468f86d23196efd9dad885e6f | [
"AML"
] | 209 | 2021-10-30T08:32:10.000Z | 2022-03-31T16:18:03.000Z | data/sampler/multi_scale_sampler.py | apple/ml-cvnets | 84d992f413e52c0468f86d23196efd9dad885e6f | [
"AML"
] | 12 | 2021-12-04T10:47:11.000Z | 2022-03-31T15:39:40.000Z | data/sampler/multi_scale_sampler.py | apple/ml-cvnets | 84d992f413e52c0468f86d23196efd9dad885e6f | [
"AML"
] | 50 | 2021-11-01T08:15:02.000Z | 2022-03-29T08:17:34.000Z | #
# For licensing see accompanying LICENSE file.
# Copyright (C) 2022 Apple Inc. All Rights Reserved.
#
import random
import argparse
from utils import logger
from typing import Optional
from common import DEFAULT_IMAGE_WIDTH, DEFAULT_IMAGE_HEIGHT
from . import register_sampler, BaseSamplerDP, BaseSamplerDDP
from .utils import _image_batch_pairs
@register_sampler(name="multi_scale_sampler")
class MultiScaleSampler(BaseSamplerDP):
"""
Multi-scale Batch Sampler for data parallel
Args:
opts: command line argument
n_data_samples (int): Number of samples in the dataset
is_training (Optional[bool]): Training or validation mode. Default: False
"""
def __init__(
self,
opts,
n_data_samples: int,
is_training: Optional[bool] = False,
*args,
**kwargs
) -> None:
super().__init__(
opts=opts, n_data_samples=n_data_samples, is_training=is_training
)
crop_size_w: int = getattr(
opts, "sampler.msc.crop_size_width", DEFAULT_IMAGE_WIDTH
)
crop_size_h: int = getattr(
opts, "sampler.msc.crop_size_height", DEFAULT_IMAGE_HEIGHT
)
min_crop_size_w: int = getattr(opts, "sampler.msc.min_crop_size_width", 160)
max_crop_size_w: int = getattr(opts, "sampler.msc.max_crop_size_width", 320)
min_crop_size_h: int = getattr(opts, "sampler.msc.min_crop_size_height", 160)
max_crop_size_h: int = getattr(opts, "sampler.msc.max_crop_size_height", 320)
scale_inc: bool = getattr(opts, "sampler.msc.scale_inc", False)
scale_ep_intervals: list or int = getattr(
opts, "sampler.msc.ep_intervals", [40]
)
scale_inc_factor: float = getattr(opts, "sampler.msc.scale_inc_factor", 0.25)
check_scale_div_factor: int = getattr(opts, "sampler.msc.check_scale", 32)
max_img_scales: int = getattr(opts, "sampler.msc.max_n_scales", 10)
if isinstance(scale_ep_intervals, int):
scale_ep_intervals = [scale_ep_intervals]
self.min_crop_size_w = min_crop_size_w
self.max_crop_size_w = max_crop_size_w
self.min_crop_size_h = min_crop_size_h
self.max_crop_size_h = max_crop_size_h
self.crop_size_w = crop_size_w
self.crop_size_h = crop_size_h
self.scale_inc_factor = scale_inc_factor
self.scale_ep_intervals = scale_ep_intervals
self.max_img_scales = max_img_scales
self.check_scale_div_factor = check_scale_div_factor
self.scale_inc = scale_inc
if is_training:
self.img_batch_tuples = _image_batch_pairs(
crop_size_h=self.crop_size_h,
crop_size_w=self.crop_size_w,
batch_size_gpu0=self.batch_size_gpu0,
n_gpus=self.n_gpus,
max_scales=self.max_img_scales,
check_scale_div_factor=self.check_scale_div_factor,
min_crop_size_w=self.min_crop_size_w,
max_crop_size_w=self.max_crop_size_w,
min_crop_size_h=self.min_crop_size_h,
max_crop_size_h=self.max_crop_size_h,
)
# over-ride the batch-size
self.img_batch_tuples = [
(h, w, self.batch_size_gpu0) for h, w, b in self.img_batch_tuples
]
else:
self.img_batch_tuples = [(crop_size_h, crop_size_w, self.batch_size_gpu0)]
@classmethod
def add_arguments(cls, parser: argparse.ArgumentParser):
group = parser.add_argument_group(
title="Multi-scale sampler", description="Multi-scale sampler"
)
group.add_argument(
"--sampler.msc.crop-size-width",
default=DEFAULT_IMAGE_WIDTH,
type=int,
help="Base crop size (along width) during training",
)
group.add_argument(
"--sampler.msc.crop-size-height",
default=DEFAULT_IMAGE_HEIGHT,
type=int,
help="Base crop size (along height) during training",
)
group.add_argument(
"--sampler.msc.min-crop-size-width",
default=160,
type=int,
help="Min. crop size along width during training",
)
group.add_argument(
"--sampler.msc.max-crop-size-width",
default=320,
type=int,
help="Max. crop size along width during training",
)
group.add_argument(
"--sampler.msc.min-crop-size-height",
default=160,
type=int,
help="Min. crop size along height during training",
)
group.add_argument(
"--sampler.msc.max-crop-size-height",
default=320,
type=int,
help="Max. crop size along height during training",
)
group.add_argument(
"--sampler.msc.max-n-scales",
default=5,
type=int,
help="Max. scales in variable batch sampler. For example, [0.25, 0.5, 0.75, 1, 1.25] ",
)
group.add_argument(
"--sampler.msc.check-scale",
default=32,
type=int,
help="Image scales should be divisible by this factor",
)
group.add_argument(
"--sampler.msc.ep-intervals",
default=[40],
type=int,
help="Epoch intervals at which scales are adjusted",
)
group.add_argument(
"--sampler.msc.scale-inc-factor",
default=0.25,
type=float,
help="Factor by which we should increase the scale",
)
group.add_argument(
"--sampler.msc.scale-inc",
action="store_true",
help="Increase image scales during training",
)
return parser
def __iter__(self):
if self.shuffle:
random.seed(self.epoch)
random.shuffle(self.img_indices)
random.shuffle(self.img_batch_tuples)
start_index = 0
n_samples = len(self.img_indices)
while start_index < n_samples:
crop_h, crop_w, batch_size = random.choice(self.img_batch_tuples)
end_index = min(start_index + batch_size, n_samples)
batch_ids = self.img_indices[start_index:end_index]
n_batch_samples = len(batch_ids)
if len(batch_ids) != batch_size:
batch_ids += self.img_indices[: (batch_size - n_batch_samples)]
start_index += batch_size
if len(batch_ids) > 0:
batch = [(crop_h, crop_w, b_id) for b_id in batch_ids]
yield batch
def update_scales(self, epoch, is_master_node=False, *args, **kwargs):
pass
def __repr__(self):
repr_str = "{}(".format(self.__class__.__name__)
repr_str += (
"\n\t base_im_size=(h={}, w={}), "
"\n\t base_batch_size={} "
"\n\t scales={} "
"\n\t scale_inc={} "
"\n\t scale_inc_factor={} "
"\n\t ep_intervals={}".format(
self.crop_size_h,
self.crop_size_w,
self.batch_size_gpu0,
self.img_batch_tuples,
self.scale_inc,
self.scale_inc_factor,
self.scale_ep_intervals,
)
)
repr_str += "\n)"
return repr_str
@register_sampler(name="multi_scale_sampler_ddp")
class MultiScaleSamplerDDP(BaseSamplerDDP):
"""
Multi-scale Batch Sampler for distributed data parallel
Args:
opts: command line argument
n_data_samples (int): Number of samples in the dataset
is_training (Optional[bool]): Training or validation mode. Default: False
"""
def __init__(
self,
opts,
n_data_samples: int,
is_training: Optional[bool] = False,
*args,
**kwargs
) -> None:
super().__init__(
opts=opts, n_data_samples=n_data_samples, is_training=is_training
)
crop_size_w: int = getattr(
opts, "sampler.msc.crop_size_width", DEFAULT_IMAGE_WIDTH
)
crop_size_h: int = getattr(
opts, "sampler.msc.crop_size_height", DEFAULT_IMAGE_HEIGHT
)
min_crop_size_w: int = getattr(opts, "sampler.msc.min_crop_size_width", 160)
max_crop_size_w: int = getattr(opts, "sampler.msc.max_crop_size_width", 320)
min_crop_size_h: int = getattr(opts, "sampler.msc.min_crop_size_height", 160)
max_crop_size_h: int = getattr(opts, "sampler.msc.max_crop_size_height", 320)
scale_inc: bool = getattr(opts, "sampler.msc.scale_inc", False)
scale_ep_intervals: list or int = getattr(
opts, "sampler.msc.ep_intervals", [40]
)
scale_inc_factor: float = getattr(opts, "sampler.msc.scale_inc_factor", 0.25)
check_scale_div_factor: int = getattr(opts, "sampler.msc.check_scale", 32)
max_img_scales: int = getattr(opts, "sampler.msc.max_n_scales", 10)
self.crop_size_h = crop_size_h
self.crop_size_w = crop_size_w
self.min_crop_size_h = min_crop_size_h
self.max_crop_size_h = max_crop_size_h
self.min_crop_size_w = min_crop_size_w
self.max_crop_size_w = max_crop_size_w
self.scale_inc_factor = scale_inc_factor
self.scale_ep_intervals = scale_ep_intervals
self.max_img_scales = max_img_scales
self.check_scale_div_factor = check_scale_div_factor
self.scale_inc = scale_inc
if is_training:
self.img_batch_tuples = _image_batch_pairs(
crop_size_h=self.crop_size_h,
crop_size_w=self.crop_size_w,
batch_size_gpu0=self.batch_size_gpu0,
n_gpus=self.num_replicas,
max_scales=self.max_img_scales,
check_scale_div_factor=self.check_scale_div_factor,
min_crop_size_w=self.min_crop_size_w,
max_crop_size_w=self.max_crop_size_w,
min_crop_size_h=self.min_crop_size_h,
max_crop_size_h=self.max_crop_size_h,
)
self.img_batch_tuples = [
(h, w, self.batch_size_gpu0) for h, w, b in self.img_batch_tuples
]
else:
self.img_batch_tuples = [
(self.crop_size_h, self.crop_size_w, self.batch_size_gpu0)
]
def __iter__(self):
if self.shuffle:
random.seed(self.epoch)
indices_rank_i = self.img_indices[
self.rank : len(self.img_indices) : self.num_replicas
]
random.shuffle(indices_rank_i)
else:
indices_rank_i = self.img_indices[
self.rank : len(self.img_indices) : self.num_replicas
]
start_index = 0
n_samples_rank_i = len(indices_rank_i)
while start_index < n_samples_rank_i:
crop_h, crop_w, batch_size = random.choice(self.img_batch_tuples)
end_index = min(start_index + batch_size, n_samples_rank_i)
batch_ids = indices_rank_i[start_index:end_index]
n_batch_samples = len(batch_ids)
if n_batch_samples != batch_size:
batch_ids += indices_rank_i[: (batch_size - n_batch_samples)]
start_index += batch_size
if len(batch_ids) > 0:
batch = [(crop_h, crop_w, b_id) for b_id in batch_ids]
yield batch
def update_scales(self, epoch, is_master_node=False, *args, **kwargs):
pass
def __repr__(self):
repr_str = "{}(".format(self.__class__.__name__)
repr_str += (
"\n\t base_im_size=(h={}, w={}), "
"\n\t base_batch_size={} "
"\n\t scales={} "
"\n\t scale_inc={} "
"\n\t scale_inc_factor={} "
"\n\t ep_intervals={}".format(
self.crop_size_h,
self.crop_size_w,
self.batch_size_gpu0,
self.img_batch_tuples,
self.scale_inc,
self.scale_inc_factor,
self.scale_ep_intervals,
)
)
repr_str += "\n )"
return repr_str
| 35.408571 | 99 | 0.593803 | 1,591 | 12,393 | 4.242615 | 0.102451 | 0.109037 | 0.045333 | 0.068444 | 0.834667 | 0.792741 | 0.775111 | 0.744444 | 0.739852 | 0.72563 | 0 | 0.010817 | 0.313725 | 12,393 | 349 | 100 | 35.510029 | 0.782834 | 0.046236 | 0 | 0.657343 | 0 | 0.003497 | 0.153663 | 0.08066 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031469 | false | 0.006993 | 0.024476 | 0 | 0.073427 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e181fbc4dd43e4884c80335280100bdd670e42cc | 5,605 | py | Python | src/tests/presale/test_style.py | pajowu/pretix | d6985123b4528f134ead71ce0a4613c9a309fd2c | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-04-25T00:11:00.000Z | 2020-04-25T00:11:00.000Z | src/tests/presale/test_style.py | pajowu/pretix | d6985123b4528f134ead71ce0a4613c9a309fd2c | [
"ECL-2.0",
"Apache-2.0"
] | 7 | 2019-07-08T10:29:54.000Z | 2020-01-08T17:32:07.000Z | src/tests/presale/test_style.py | pajowu/pretix | d6985123b4528f134ead71ce0a4613c9a309fd2c | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | import datetime
import os.path
from django.conf import settings
from django.test import TestCase, override_settings
from django_scopes import scopes_disabled
from pretix.base.models import Event, Organizer
from pretix.multidomain.models import KnownDomain
from pretix.presale.style import regenerate_css, regenerate_organizer_css
class StyleTest(TestCase):
@scopes_disabled()
def setUp(self):
super().setUp()
self.orga = Organizer.objects.create(name='CCC', slug='ccc')
self.event = Event.objects.create(
organizer=self.orga, name='30C3', slug='30c3',
date_from=datetime.datetime(2013, 12, 26, tzinfo=datetime.timezone.utc),
live=True
)
def test_organizer_generate_css_for_inherited_events(self):
self.orga.settings.primary_color = "#33c33c"
regenerate_organizer_css.apply(args=(self.orga.pk,))
self.orga.settings.flush()
assert self.orga.settings.presale_css_file
with open(os.path.join(settings.MEDIA_ROOT, self.orga.settings.presale_css_file), 'r') as c:
assert '#33c33c' in c.read()
self.event.settings.flush()
assert self.event.settings.presale_css_file
with open(os.path.join(settings.MEDIA_ROOT, self.event.settings.presale_css_file), 'r') as c:
assert '#33c33c' in c.read()
def test_organizer_generate_css_only_for_inherited_events(self):
self.orga.settings.primary_color = "#33c33c"
self.event.settings.primary_color = "#34c34c"
regenerate_organizer_css.apply(args=(self.orga.pk,))
self.orga.settings.flush()
assert self.orga.settings.presale_css_file
with open(os.path.join(settings.MEDIA_ROOT, self.orga.settings.presale_css_file), 'r') as c:
assert '#33c33c' in c.read()
self.event.settings.flush()
assert self.event.settings.presale_css_file
with open(os.path.join(settings.MEDIA_ROOT, self.event.settings.presale_css_file), 'r') as c:
assert '#34c34c' not in c.read()
assert '#33c33c' not in c.read()
def test_event_generate_css_individually(self):
self.orga.settings.primary_color = "#33c33c"
self.event.settings.primary_color = "#34c34c"
regenerate_css.apply(args=(self.event.pk,))
self.event.settings.flush()
assert self.event.settings.presale_css_file
with open(os.path.join(settings.MEDIA_ROOT, self.event.settings.presale_css_file), 'r') as c:
assert '#34c34c' in c.read()
assert '#33c33c' not in c.read()
regenerate_organizer_css.apply(args=(self.orga.pk,))
self.event.settings.flush()
assert self.event.settings.presale_css_file
with open(os.path.join(settings.MEDIA_ROOT, self.event.settings.presale_css_file), 'r') as c:
assert '#34c34c' in c.read()
assert '#33c33c' not in c.read()
def test_event_generate_css_new_file(self):
self.event.settings.primary_color = "#34c34c"
regenerate_css.apply(args=(self.event.pk,))
self.event.settings.flush()
fname = self.event.settings.presale_css_file
self.event.settings.primary_color = "#ff00ff"
regenerate_css.apply(args=(self.event.pk,))
self.event.settings.flush()
assert self.event.settings.presale_css_file != fname
def test_event_generate_css_cache_file(self):
self.event.settings.primary_color = "#34c34c"
regenerate_css.apply(args=(self.event.pk,))
self.event.settings.flush()
fname = self.event.settings.presale_css_file
self.event.settings.primary_color = "#34c34c"
regenerate_css.apply(args=(self.event.pk,))
self.event.settings.flush()
assert self.event.settings.presale_css_file == fname
@override_settings(
MEDIA_URL="https://usercontent.pretix.space/media/",
SITE_URL="https://pretix.eu"
)
def test_seperate_media_domain(self):
self.event.settings.primary_color = "#34c34c"
regenerate_css.apply(args=(self.event.pk,))
self.event.settings.flush()
with open(os.path.join(settings.MEDIA_ROOT, self.event.settings.presale_css_file), 'r') as c:
assert 'https://pretix.eu/static/' in c.read()
@override_settings(
MEDIA_URL="https://usercontent.pretix.space/media/",
SITE_URL="https://pretix.eu"
)
def test_seperate_media_domain_and_organizer_domain(self):
KnownDomain.objects.create(domainname="test.pretix.eu", organizer=self.orga)
self.event.settings.primary_color = "#34c34c"
regenerate_css.apply(args=(self.event.pk,))
self.event.settings.flush()
with open(os.path.join(settings.MEDIA_ROOT, self.event.settings.presale_css_file), 'r') as c:
assert 'https://test.pretix.eu/static/' in c.read()
@override_settings(
STATIC_URL="https://static.pretix.files/static/",
MEDIA_URL="https://usercontent.pretix.space/media/",
SITE_URL="https://pretix.eu"
)
def test_seperate_media_domain_and_static_domain(self):
KnownDomain.objects.create(domainname="test.pretix.eu", organizer=self.orga)
self.event.settings.primary_color = "#34c34c"
regenerate_css.apply(args=(self.event.pk,))
self.event.settings.flush()
with open(os.path.join(settings.MEDIA_ROOT, self.event.settings.presale_css_file), 'r') as c:
assert 'https://static.pretix.files/static/' in c.read()
assert 'https://test.pretix.eu/static/' not in c.read()
| 42.142857 | 101 | 0.676004 | 739 | 5,605 | 4.947226 | 0.120433 | 0.108315 | 0.162746 | 0.114333 | 0.84081 | 0.805799 | 0.796772 | 0.796772 | 0.780908 | 0.770241 | 0 | 0.021338 | 0.197324 | 5,605 | 132 | 102 | 42.462121 | 0.791287 | 0 | 0 | 0.651376 | 0 | 0 | 0.092953 | 0 | 0 | 0 | 0 | 0 | 0.192661 | 1 | 0.082569 | false | 0 | 0.073395 | 0 | 0.165138 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e1aa5d7b47694aefa7c410426751e26350a1c1e7 | 4,441 | py | Python | pa1-skeleton/pa1-data/8/www.stanford.edu_class_cs221_progAssignments_PA2_distanceCalculator.py | yzhong94/cs276-spring-2019 | a4780a9f88b8c535146040fe11bb513c91c5693b | [
"MIT"
] | null | null | null | pa1-skeleton/pa1-data/8/www.stanford.edu_class_cs221_progAssignments_PA2_distanceCalculator.py | yzhong94/cs276-spring-2019 | a4780a9f88b8c535146040fe11bb513c91c5693b | [
"MIT"
] | null | null | null | pa1-skeleton/pa1-data/8/www.stanford.edu_class_cs221_progAssignments_PA2_distanceCalculator.py | yzhong94/cs276-spring-2019 | a4780a9f88b8c535146040fe11bb513c91c5693b | [
"MIT"
] | null | null | null | distancecalculator py licensing information please do not distribute or publish solutions to this project you are free to use and extend these projects for educational purposes the pacman ai projects were developed at uc berkeley primarily by john denero denero cs berkeley edu and dan klein klein cs berkeley edu for more info see http inst eecs berkeley edu cs188 sp09 pacman html this file contains a distancer object which computes and caches the shortest path between any two points in the maze it returns a manhattan distance between two points if the maze distance has not yet been calculated example distancer distancer gamestate data layout distancer getdistance 1 1 10 10 the distancer object also serves as an example of sharing data safely among agents via a global dictionary distancemap and performing asynchronous computation via threads these examples may help you in designing your own objects but you shouldn t need to modify the distancer code in order to use its distances import threading sys time random class distancer def __init__ self layout background true default 10000 initialize with distancer layout changing default is unnecessary this will start computing maze distances in the background and use them as soon as they are ready in the meantime it returns manhattan distance to compute all maze distances on initialization set background false self _distances none self default default start computing distances in the background when the dc finishes it will fill in self _distances for us dc distancecalculator dc setattr layout self dc setdaemon true if background dc start else dc run def getdistance self pos1 pos2 the getdistance function is the only one you ll need after you create the object if self _distances none return manhattandistance pos1 pos2 if isint pos1 and isint pos2 return self getdistanceongrid pos1 pos2 pos1grids getgrids2d pos1 pos2grids getgrids2d pos2 bestdistance self default for pos1snap snap1distance in pos1grids for pos2snap snap2distance in pos2grids griddistance self getdistanceongrid pos1snap pos2snap distance griddistance snap1distance snap2distance if bestdistance distance bestdistance distance return bestdistance def getdistanceongrid self pos1 pos2 key pos1 pos2 if key in self _distances return self _distances key else raise exception positions not in grid str key def isreadyformazedistance self return self _distances none def manhattandistance x y return abs x 0 y 0 abs x 1 y 1 def isint pos x y pos return x int x and y int y def getgrids2d pos grids for x xdistance in getgrids1d pos 0 for y ydistance in getgrids1d pos 1 grids append x y xdistance ydistance return grids def getgrids1d x intx int x if x int x return x 0 return intx x intx intx 1 intx 1 x machinery for computing maze distances distancemap distancemapsemaphore threading semaphore 1 distancethread none def waitondistancecalculator t global distancethread if distancethread none time sleep t class distancecalculator threading thread def setattr self layout distancer default 10000 self layout layout self distancer distancer self default default def run self global distancemap distancethread distancemapsemaphore acquire if self layout walls not in distancemap if distancethread none raise exception multiple distance threads distancethread self distances computedistances self layout print sys stdout distancer switching to maze distances distancemap self layout walls distances distancethread none else distances distancemap self layout walls distancemapsemaphore release 
self distancer _distances distances def computedistances layout distances allnodes layout walls aslist false for source in allnodes dist closed for node in allnodes dist node sys maxint import util queue util priorityqueue queue push source 0 dist source 0 while not queue isempty node queue pop if node in closed continue closed node true nodedist dist node adjacent x y node if not layout iswall x y 1 adjacent append x y 1 if not layout iswall x y 1 adjacent append x y 1 if not layout iswall x 1 y adjacent append x 1 y if not layout iswall x 1 y adjacent append x 1 y for other in adjacent if not other in dist continue olddist dist other newdist nodedist 1 if newdist olddist dist other newdist queue push other newdist for target in allnodes distances target source dist target return distances def getdistanceongrid distances pos1 pos2 key pos1 pos2 if key in distances return distances key return 100000
| 2,220.5 | 4,440 | 0.841702 | 703 | 4,441 | 5.301565 | 0.334282 | 0.004293 | 0.004025 | 0.018245 | 0.072444 | 0.053662 | 0.053662 | 0.053662 | 0.03971 | 0.03971 | 0 | 0.022472 | 0.158298 | 4,441 | 1 | 4,441 | 4,441 | 0.974585 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 1 | null | null | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
bee8ce414338fd6a02b5d8eeb0d53458e87baece | 31 | py | Python | pynrc/maths/__init__.py | JarronL/pyNRC | 0354f0635dd4c5391ca3a769fa9e5ead83661c30 | [
"MIT"
] | 6 | 2017-01-09T05:11:11.000Z | 2022-02-25T18:30:41.000Z | pynrc/maths/__init__.py | JarronL/pyNRC | 0354f0635dd4c5391ca3a769fa9e5ead83661c30 | [
"MIT"
] | 24 | 2017-05-25T04:45:47.000Z | 2022-02-25T18:31:48.000Z | pynrc/maths/__init__.py | JarronL/pyNRC | 0354f0635dd4c5391ca3a769fa9e5ead83661c30 | [
"MIT"
] | 11 | 2017-01-27T22:40:55.000Z | 2021-03-30T18:41:46.000Z | from webbpsf_ext import robust
| 15.5 | 30 | 0.870968 | 5 | 31 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
beee347296e99a4b8f3afc70a64e076b7f439b8b | 37 | py | Python | src/finance_stats/linear_regression/__init__.py | pralphv/hkportfolioanalysis-backend | 6dbf6f17e6ebd95e28ee042126b34408dde4f520 | [
"MIT"
] | null | null | null | src/finance_stats/linear_regression/__init__.py | pralphv/hkportfolioanalysis-backend | 6dbf6f17e6ebd95e28ee042126b34408dde4f520 | [
"MIT"
] | 1 | 2021-03-31T19:44:25.000Z | 2021-03-31T19:44:25.000Z | src/finance_stats/linear_regression/__init__.py | pralphv/hkportfolioanalysis-backend | 6dbf6f17e6ebd95e28ee042126b34408dde4f520 | [
"MIT"
] | 1 | 2020-11-27T17:56:38.000Z | 2020-11-27T17:56:38.000Z | from .api import calculate_beta_alpha | 37 | 37 | 0.891892 | 6 | 37 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
55dd6bb18ddaf3e0cf027b8214252701ec602a00 | 2,448 | py | Python | tests/fixtures/cloud_formation.py | scherniavsky/lizzy | b2bca022967042032aa5e4c3226d760cadc8db49 | [
"Apache-2.0"
] | 21 | 2015-06-01T13:54:04.000Z | 2017-06-04T01:04:25.000Z | tests/fixtures/cloud_formation.py | scherniavsky/lizzy | b2bca022967042032aa5e4c3226d760cadc8db49 | [
"Apache-2.0"
] | 187 | 2015-06-02T06:29:35.000Z | 2017-05-27T12:37:54.000Z | tests/fixtures/cloud_formation.py | scherniavsky/lizzy | b2bca022967042032aa5e4c3226d760cadc8db49 | [
"Apache-2.0"
] | 12 | 2017-06-09T01:10:24.000Z | 2019-05-16T10:44:34.000Z | GOOD_CF_DEFINITION = {
"AWSTemplateFormatVersion": "2010-09-09",
"Mappings": {
"Senza": {
"Info": {
"StackName": "abc",
"StackVersion": "2"
}
}
},
"Parameters": {},
"Resources": {
"AppServerConfig": {
"Properties": {
"ImageId": "image-id",
"InstanceType": "t2.micro",
"UserData": {
"Fn::Base64": "#taupage-config\napplication_id: abc\nsource: pierone.example.com/lizzy/lizzy:12\n"
}
},
"Type": "AWS::AutoScaling::LaunchConfiguration"
}
}
}
GOOD_CF_DEFINITION_WITH_UNUSUAL_AUTOSCALING_RESOURCE = {
"AWSTemplateFormatVersion": "2010-09-09",
"Mappings": {
"Senza": {
"Info": {
"StackName": "abc",
"StackVersion": "2"
}
}
},
"Parameters": {},
"Resources": {
"NiceClusterConfig": {
"Properties": {
"ImageId": "image-id",
"InstanceType": "t2.micro",
"UserData": {
"Fn::Base64": "#taupage-config\napplication_id: abc\nsource: pierone.example.com/lizzy/lizzy:12\n"
}
},
"Type": "AWS::AutoScaling::LaunchConfiguration"
}
}
}
BAD_CF_MISSING_TAUPAGE_AUTOSCALING_GROUP = {
"AWSTemplateFormatVersion": "2010-09-09",
"Mappings": {
"Senza": {
"Info": {
"StackName": "abc",
"StackVersion": "2"
}
}
},
"Parameters": {},
"Resources": {
"AppServerConfig": {
"Properties": {
"ImageId": "image-id",
"InstanceType": "t2.micro",
"UserData": {}
},
"Type": "AWS::Else"
}
}
}
BAD_CF_DEFINITION = {
"AWSTemplateFormatVersion": "2010-09-09",
"Mappings": {
"Senza": {
"Info": {
"StackName": "abc",
"StackVersion": "2"
}
}
},
"Parameters": {},
"Resources": {
"AppServerConfig": {
"Properties": {
"ImageId": "image-id",
"InstanceType": "t2.micro",
"UserData": {}
},
"Type": "AWS::AutoScaling::LaunchConfiguration"
}
}
}
| 25.5 | 118 | 0.424428 | 152 | 2,448 | 6.723684 | 0.315789 | 0.109589 | 0.117417 | 0.125245 | 0.863992 | 0.863992 | 0.863992 | 0.863992 | 0.863992 | 0.863992 | 0 | 0.033637 | 0.417075 | 2,448 | 95 | 119 | 25.768421 | 0.682551 | 0 | 0 | 0.608696 | 0 | 0.021739 | 0.397876 | 0.140114 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
55f06e10a82c7edd0b420a9820f1a853e87d29d0 | 153 | py | Python | migration/migration/doctype/migration_setting/test_migration_setting.py | Ravn10/migration | 54c07bffae7caa3230c65b42f760f103bf0a7f73 | [
"MIT"
] | null | null | null | migration/migration/doctype/migration_setting/test_migration_setting.py | Ravn10/migration | 54c07bffae7caa3230c65b42f760f103bf0a7f73 | [
"MIT"
] | null | null | null | migration/migration/doctype/migration_setting/test_migration_setting.py | Ravn10/migration | 54c07bffae7caa3230c65b42f760f103bf0a7f73 | [
"MIT"
] | null | null | null | # Copyright (c) 2021, Firsterp and Contributors
# See license.txt
# import frappe
import unittest
class TestMigrationSetting(unittest.TestCase):
pass
| 17 | 47 | 0.79085 | 18 | 153 | 6.722222 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 0.137255 | 153 | 8 | 48 | 19.125 | 0.886364 | 0.490196 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
3600ea30688f379498ca661a468ccc730cb1643c | 4,102 | py | Python | functions/predictionLambda/index.py | chriscoombs/aws-comparing-algorithms-performance-mlops-cdk | 6d3888f3ecd667ee76dc473edba37a608786ed2e | [
"Apache-2.0"
] | 3 | 2021-01-06T18:57:16.000Z | 2021-05-07T07:06:58.000Z | functions/predictionLambda/index.py | chriscoombs/aws-comparing-algorithms-performance-mlops-cdk | 6d3888f3ecd667ee76dc473edba37a608786ed2e | [
"Apache-2.0"
] | null | null | null | functions/predictionLambda/index.py | chriscoombs/aws-comparing-algorithms-performance-mlops-cdk | 6d3888f3ecd667ee76dc473edba37a608786ed2e | [
"Apache-2.0"
] | 4 | 2020-10-30T05:05:24.000Z | 2021-05-07T04:04:58.000Z | from __future__ import print_function
import json
import os
import boto3
import uuid
import random
import math
sagemakerclient = boto3.client('sagemaker-runtime')
s3client=boto3.client('s3')
def handler(event, context):
bucket_name=os.environ['bucket']
prediction_trials=3
if "xgboostresult" in event["training"].keys():
prefix='xgboost_dataset'
response = s3client.list_objects_v2(Bucket = bucket_name)
file_keys = [obj['Key'] for obj in response['Contents'] if (obj['Key'].find('Xgboost/test')!=-1)]
file_key=file_keys[0]
location='/tmp/' + prefix
try:
os.mkdir(location)
except FileExistsError:
print("Directory already created!")
# downloading the testing samples file for linear model
try:
s3client.download_file(bucket_name,file_key, location + '/' + 'testing_sample.csv')
except Exception as e:
print('Unable to download file!')
file_test=location + '/' + 'testing_sample.csv'
test_data = [l for l in open(file_test, 'r')]
sum=0
successful_predictions=prediction_trials
for _ in range(1,prediction_trials+1):
#selecting a random sample to perform prediction
sample=random.choice(test_data).split(' ')
actual_age=sample[0]
payload=sample[1:] #removing actual age from the sample
payload=' '.join(map(str, payload))
try:
response=sagemakerclient.invoke_endpoint(
EndpointName=event["input"]["xgb_endpoint_name"],
ContentType='libsvm',
Body=payload
)
result=json.loads(response['Body'].read().decode())
accuracy=str(round(100-((abs(float(result)-float(actual_age))/float(actual_age))*100),2))
sum=sum+float(accuracy)
except Exception as e:
print(e)
successful_predictions-=1
xgboost_avg_final_accuarcy=sum/successful_predictions
return {
'prediction_result': xgboost_avg_final_accuarcy,
'endpoint_name': event["input"]["xgb_endpoint_name"]
}
elif "llresult" in event["training"].keys():
prefix='ln_dataset'
response = s3client.list_objects_v2(Bucket = bucket_name)
file_keys = [obj['Key'] for obj in response['Contents'] if (obj['Key'].find('Linear/test')!=-1)]
file_key=file_keys[0]
location='/tmp/' + prefix
try:
os.mkdir(location)
except FileExistsError:
print("Directory already created!")
# downloading the testing samples file for linear model
try:
s3client.download_file(bucket_name,file_key, location + '/' + 'testing_sample.csv')
except Exception as e:
print('Unable to download file!')
file_test=location + '/' + 'testing_sample.csv'
test_data = [l for l in open(file_test, 'r')]
sum=0
successful_predictions=prediction_trials
for _ in range(1,prediction_trials+1):
#selecting a random sample to perform prediction
sample=random.choice(test_data).split(',')
actual_age=sample[0]
payload=sample[1:] #removing actual age from the sample
payload=','.join(map(str, payload))
try:
response=sagemakerclient.invoke_endpoint(
EndpointName=event["input"]["ll_endpoint_name"],
ContentType='text/csv',
Body=payload
)
result=json.loads(response['Body'].read().decode())
linear_result=result['predictions'][0]['score']
accuracy=str(round(100-((abs(float(linear_result)-float(actual_age))/float(actual_age))*100),2))
sum=sum+float(accuracy)
except Exception as e:
print(e)
successful_predictions-=1
linear_avg_final_accuarcy=sum/successful_predictions
return {
'prediction_result': linear_avg_final_accuarcy,
'endpoint_name': event["input"]["ll_endpoint_name"]
}
else:
return {
'message': 'something wrong has happened!'
}
| 39.066667 | 108 | 0.627255 | 475 | 4,102 | 5.237895 | 0.263158 | 0.028939 | 0.022508 | 0.038585 | 0.835209 | 0.791801 | 0.770096 | 0.73955 | 0.73955 | 0.651125 | 0 | 0.014061 | 0.25451 | 4,102 | 104 | 109 | 39.442308 | 0.799542 | 0.066065 | 0 | 0.56701 | 0 | 0 | 0.143866 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010309 | false | 0 | 0.072165 | 0 | 0.113402 | 0.072165 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3608a5429e2ec4d73c87e4360b18039a7034558b | 66 | py | Python | okta/config.py | adrianlazar-okta/okta.adrian-lazar.com | 10ed29642bff61a887965b2e59ab146a22471fb3 | [
"Apache-2.0"
] | null | null | null | okta/config.py | adrianlazar-okta/okta.adrian-lazar.com | 10ed29642bff61a887965b2e59ab146a22471fb3 | [
"Apache-2.0"
] | null | null | null | okta/config.py | adrianlazar-okta/okta.adrian-lazar.com | 10ed29642bff61a887965b2e59ab146a22471fb3 | [
"Apache-2.0"
] | null | null | null | class Config:
SECRET_KEY = '89b4f6ac5922d8685d182175e97f97e8' | 33 | 51 | 0.80303 | 5 | 66 | 10.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.403509 | 0.136364 | 66 | 2 | 51 | 33 | 0.508772 | 0 | 0 | 0 | 0 | 0 | 0.484848 | 0.484848 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
3610d959e23d4aabea1da039469ea330486ed6b9 | 56 | py | Python | winejournal/blueprints/regions/__init__.py | rickandersonaia/wine-journal | 9664c8ed8df9eb853562c500e888490a61a6e44d | [
"CC-BY-4.0"
] | null | null | null | winejournal/blueprints/regions/__init__.py | rickandersonaia/wine-journal | 9664c8ed8df9eb853562c500e888490a61a6e44d | [
"CC-BY-4.0"
] | 5 | 2021-02-08T20:22:06.000Z | 2021-09-07T23:52:33.000Z | winejournal/blueprints/regions/__init__.py | rickandersonaia/wine-journal | 9664c8ed8df9eb853562c500e888490a61a6e44d | [
"CC-BY-4.0"
] | 2 | 2018-06-27T15:03:38.000Z | 2020-03-14T15:40:34.000Z | from winejournal.blueprints.regions.views import regions | 56 | 56 | 0.892857 | 7 | 56 | 7.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053571 | 56 | 1 | 56 | 56 | 0.943396 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
3649b25b5d42655cdb665931f6377e88c0834bb7 | 374 | py | Python | web/controllers/account/account.py | jxsrjw4/flask_learning | 17097a5c6a911da29a50dd1865552de6b7f8d9c5 | [
"Apache-2.0"
] | null | null | null | web/controllers/account/account.py | jxsrjw4/flask_learning | 17097a5c6a911da29a50dd1865552de6b7f8d9c5 | [
"Apache-2.0"
] | null | null | null | web/controllers/account/account.py | jxsrjw4/flask_learning | 17097a5c6a911da29a50dd1865552de6b7f8d9c5 | [
"Apache-2.0"
] | null | null | null | from flask import Blueprint, render_template
route_account = Blueprint("account_page", __name__)
@route_account.route('/index')
def index():
return render_template("/account/index.html")
@route_account.route('/info')
def info():
return render_template("/account/info.html")
@route_account.route('/set')
def set():
return render_template('/account/set.html') | 24.933333 | 51 | 0.737968 | 48 | 374 | 5.479167 | 0.333333 | 0.212928 | 0.193916 | 0.307985 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106952 | 374 | 15 | 52 | 24.933333 | 0.787425 | 0 | 0 | 0 | 0 | 0 | 0.216 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.090909 | 0.272727 | 0.636364 | 0.181818 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
3660790e0de0683ff622d7182b498c939c3e596c | 12,023 | py | Python | sdk/python/pulumi_rancher2/namespace.py | mitchellmaler/pulumi-rancher2 | e6ca44b58b5b10c12a4e628e61aa8d98330f0863 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_rancher2/namespace.py | mitchellmaler/pulumi-rancher2 | e6ca44b58b5b10c12a4e628e61aa8d98330f0863 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_rancher2/namespace.py | mitchellmaler/pulumi-rancher2 | e6ca44b58b5b10c12a4e628e61aa8d98330f0863 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from typing import Union
from . import utilities, tables
class Namespace(pulumi.CustomResource):
annotations: pulumi.Output[dict]
"""
Annotations for Node Pool object (map)
"""
container_resource_limit: pulumi.Output[dict]
"""
Default containers resource limits on namespace (List maxitem:1)
* `limitsCpu` (`str`) - Limit for limits cpu in namespace (string)
* `limitsMemory` (`str`) - Limit for limits memory in namespace (string)
* `requestsCpu` (`str`) - Limit for requests cpu in namespace (string)
* `requestsMemory` (`str`) - Limit for requests memory in namespace (string)
"""
description: pulumi.Output[str]
"""
A namespace description (string)
"""
labels: pulumi.Output[dict]
"""
Labels for Node Pool object (map)
"""
name: pulumi.Output[str]
"""
The name of the namespace (string)
"""
project_id: pulumi.Output[str]
"""
The project id where assign namespace. It's on the form `project_id=<cluster_id>:<id>`. Updating `<id>` part on same `<cluster_id>` namespace will be moved between projects (string)
"""
resource_quota: pulumi.Output[dict]
"""
Resource quota for namespace. Rancher v2.1.x or higher (list maxitems:1)
* `limit` (`dict`) - Resource quota limit for namespace (list maxitems:1)
* `configMaps` (`str`) - Limit for config maps in namespace (string)
* `limitsCpu` (`str`) - Limit for limits cpu in namespace (string)
* `limitsMemory` (`str`) - Limit for limits memory in namespace (string)
* `persistentVolumeClaims` (`str`) - Limit for persistent volume claims in namespace (string)
* `pods` (`str`) - Limit for pods in namespace (string)
* `replicationControllers` (`str`) - Limit for replication controllers in namespace (string)
* `requestsCpu` (`str`) - Limit for requests cpu in namespace (string)
* `requestsMemory` (`str`) - Limit for requests memory in namespace (string)
* `requestsStorage` (`str`) - Limit for requests storage in namespace (string)
* `secrets` (`str`) - Limit for secrets in namespace (string)
* `services` (`str`)
* `servicesLoadBalancers` (`str`) - Limit for services load balancers in namespace (string)
* `servicesNodePorts` (`str`) - Limit for services node ports in namespace (string)
"""
wait_for_cluster: pulumi.Output[bool]
"""
Wait for cluster becomes active. Default `false` (bool)
"""
def __init__(__self__, resource_name, opts=None, annotations=None, container_resource_limit=None, description=None, labels=None, name=None, project_id=None, resource_quota=None, wait_for_cluster=None, __props__=None, __name__=None, __opts__=None):
"""
Provides a Rancher v2 Namespace resource. This can be used to create namespaces for Rancher v2 environments and retrieve their information.
> This content is derived from https://github.com/terraform-providers/terraform-provider-rancher2/blob/master/website/docs/r/namespace.html.markdown.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[dict] annotations: Annotations for Node Pool object (map)
:param pulumi.Input[dict] container_resource_limit: Default containers resource limits on namespace (List maxitem:1)
:param pulumi.Input[str] description: A namespace description (string)
:param pulumi.Input[dict] labels: Labels for Node Pool object (map)
:param pulumi.Input[str] name: The name of the namespace (string)
:param pulumi.Input[str] project_id: The project id where assign namespace. It's on the form `project_id=<cluster_id>:<id>`. Updating `<id>` part on same `<cluster_id>` namespace will be moved between projects (string)
:param pulumi.Input[dict] resource_quota: Resource quota for namespace. Rancher v2.1.x or higher (list maxitems:1)
:param pulumi.Input[bool] wait_for_cluster: Wait for cluster becomes active. Default `false` (bool)
The **container_resource_limit** object supports the following:
* `limitsCpu` (`pulumi.Input[str]`) - Limit for limits cpu in namespace (string)
* `limitsMemory` (`pulumi.Input[str]`) - Limit for limits memory in namespace (string)
* `requestsCpu` (`pulumi.Input[str]`) - Limit for requests cpu in namespace (string)
* `requestsMemory` (`pulumi.Input[str]`) - Limit for requests memory in namespace (string)
The **resource_quota** object supports the following:
* `limit` (`pulumi.Input[dict]`) - Resource quota limit for namespace (list maxitems:1)
* `configMaps` (`pulumi.Input[str]`) - Limit for config maps in namespace (string)
* `limitsCpu` (`pulumi.Input[str]`) - Limit for limits cpu in namespace (string)
* `limitsMemory` (`pulumi.Input[str]`) - Limit for limits memory in namespace (string)
* `persistentVolumeClaims` (`pulumi.Input[str]`) - Limit for persistent volume claims in namespace (string)
* `pods` (`pulumi.Input[str]`) - Limit for pods in namespace (string)
* `replicationControllers` (`pulumi.Input[str]`) - Limit for replication controllers in namespace (string)
* `requestsCpu` (`pulumi.Input[str]`) - Limit for requests cpu in namespace (string)
* `requestsMemory` (`pulumi.Input[str]`) - Limit for requests memory in namespace (string)
* `requestsStorage` (`pulumi.Input[str]`) - Limit for requests storage in namespace (string)
* `secrets` (`pulumi.Input[str]`) - Limit for secrets in namespace (string)
* `services` (`pulumi.Input[str]`)
* `servicesLoadBalancers` (`pulumi.Input[str]`) - Limit for services load balancers in namespace (string)
* `servicesNodePorts` (`pulumi.Input[str]`) - Limit for services node ports in namespace (string)
"""
if __name__ is not None:
warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
resource_name = __name__
if __opts__ is not None:
warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
opts = __opts__
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = dict()
__props__['annotations'] = annotations
__props__['container_resource_limit'] = container_resource_limit
__props__['description'] = description
__props__['labels'] = labels
__props__['name'] = name
if project_id is None:
raise TypeError("Missing required property 'project_id'")
__props__['project_id'] = project_id
__props__['resource_quota'] = resource_quota
__props__['wait_for_cluster'] = wait_for_cluster
super(Namespace, __self__).__init__(
'rancher2:index/namespace:Namespace',
resource_name,
__props__,
opts)

    @staticmethod
    def get(resource_name, id, opts=None, annotations=None, container_resource_limit=None, description=None, labels=None, name=None, project_id=None, resource_quota=None, wait_for_cluster=None):
        """
        Get an existing Namespace resource's state with the given name, id, and optional extra
        properties used to qualify the lookup.

        :param str resource_name: The unique name of the resulting resource.
        :param str id: The unique provider ID of the resource to lookup.
        :param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[dict] annotations: Annotations for the Namespace object (map)
        :param pulumi.Input[dict] container_resource_limit: Default container resource limits for the namespace (list maxitems:1)
        :param pulumi.Input[str] description: A namespace description (string)
        :param pulumi.Input[dict] labels: Labels for the Namespace object (map)
        :param pulumi.Input[str] name: The name of the namespace (string)
        :param pulumi.Input[str] project_id: The project id to which the namespace is assigned. It has the form `project_id=<cluster_id>:<id>`. Updating the `<id>` part while keeping the same `<cluster_id>` moves the namespace to the new project (string)
        :param pulumi.Input[dict] resource_quota: Resource quota for the namespace. Requires Rancher v2.1.x or higher (list maxitems:1)
        :param pulumi.Input[bool] wait_for_cluster: Wait for the cluster to become active. Default `false` (bool)

        The **container_resource_limit** object supports the following:

          * `limitsCpu` (`pulumi.Input[str]`) - Limit for limits cpu in namespace (string)
          * `limitsMemory` (`pulumi.Input[str]`) - Limit for limits memory in namespace (string)
          * `requestsCpu` (`pulumi.Input[str]`) - Limit for requests cpu in namespace (string)
          * `requestsMemory` (`pulumi.Input[str]`) - Limit for requests memory in namespace (string)

        The **resource_quota** object supports the following:

          * `limit` (`pulumi.Input[dict]`) - Resource quota limit for namespace (list maxitems:1)
            * `configMaps` (`pulumi.Input[str]`) - Limit for config maps in namespace (string)
            * `limitsCpu` (`pulumi.Input[str]`) - Limit for limits cpu in namespace (string)
            * `limitsMemory` (`pulumi.Input[str]`) - Limit for limits memory in namespace (string)
            * `persistentVolumeClaims` (`pulumi.Input[str]`) - Limit for persistent volume claims in namespace (string)
            * `pods` (`pulumi.Input[str]`) - Limit for pods in namespace (string)
            * `replicationControllers` (`pulumi.Input[str]`) - Limit for replication controllers in namespace (string)
            * `requestsCpu` (`pulumi.Input[str]`) - Limit for requests cpu in namespace (string)
            * `requestsMemory` (`pulumi.Input[str]`) - Limit for requests memory in namespace (string)
            * `requestsStorage` (`pulumi.Input[str]`) - Limit for requests storage in namespace (string)
            * `secrets` (`pulumi.Input[str]`) - Limit for secrets in namespace (string)
            * `services` (`pulumi.Input[str]`)
            * `servicesLoadBalancers` (`pulumi.Input[str]`) - Limit for services load balancers in namespace (string)
            * `servicesNodePorts` (`pulumi.Input[str]`) - Limit for services node ports in namespace (string)
        """
        opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))

        __props__ = dict()

        __props__["annotations"] = annotations
        __props__["container_resource_limit"] = container_resource_limit
        __props__["description"] = description
        __props__["labels"] = labels
        __props__["name"] = name
        __props__["project_id"] = project_id
        __props__["resource_quota"] = resource_quota
        __props__["wait_for_cluster"] = wait_for_cluster
        return Namespace(resource_name, opts=opts, __props__=__props__)

    def translate_output_property(self, prop):
        return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

    def translate_input_property(self, prop):
        return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
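
# --- Illustrative usage (not part of the generated SDK) ---------------------
# A minimal sketch of how the Namespace resource documented in the docstrings
# above could be declared, assuming this module is importable as
# `pulumi_rancher2`; the ids and quota values are placeholders.
#
#     import pulumi_rancher2 as rancher2
#
#     ns = rancher2.Namespace("example-ns",
#         project_id="<cluster_id>:<id>",        # form: project_id=<cluster_id>:<id>
#         description="Namespace managed by Pulumi",
#         container_resource_limit={
#             "limitsCpu": "20m",
#             "limitsMemory": "20Mi",
#             "requestsCpu": "1m",
#             "requestsMemory": "1Mi",
#         },
#         resource_quota={
#             "limit": {
#                 "limitsCpu": "100m",
#                 "limitsMemory": "100Mi",
#                 "requestsStorage": "1Gi",
#             },
#         },
#         wait_for_cluster=False)
#
#     # Looking up an existing namespace by its provider id:
#     existing = rancher2.Namespace.get("imported-ns", id="<namespace_id>")
# -----------------------------------------------------------------------------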
| 60.115 | 251 | 0.670964 | 1,426 | 12,023 | 5.490884 | 0.143058 | 0.073052 | 0.067433 | 0.07765 | 0.773052 | 0.758876 | 0.742529 | 0.742529 | 0.715964 | 0.683142 | 0 | 0.002141 | 0.223156 | 12,023 | 199 | 252 | 60.417085 | 0.836188 | 0.502953 | 0 | 0.03125 | 1 | 0 | 0.154374 | 0.024485 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0.015625 | 0.09375 | 0.03125 | 0.34375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
367f4d91b6625612700002ff7b5a530f03e03748 | 7,091 | py | Python | src/genie/libs/parser/iosxr/tests/ShowOspfv3Interface/cli/equal/golden1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/iosxr/tests/ShowOspfv3Interface/cli/equal/golden1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/iosxr/tests/ShowOspfv3Interface/cli/equal/golden1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z | expected_output = {
"vrf": {
"default": {
"address_family": {
"ipv6": {
"instance": {
"mpls1": {
"instance_id": {
0: {
"areas": {
0: {
"interfaces": {
"GigabitEthernet0/0/0/0": {
"enable": "up",
"line_protocol": "up",
"link_local_address": "fe80:100:10::1",
"interface_id": 7,
"router_id": "10.94.1.1",
"network_type": "POINT_TO_POINT",
"cost": 1,
"bfd": {
"bfd_status": "enabled",
"interval": 150,
"multiplier": 3,
"mode": "Default",
},
"transmit_delay": 1,
"state": "POINT_TO_POINT",
"hello_interval": 10,
"dead_interval": 40,
"wait_interval": 40,
"retransmit_interval": 5,
"hello_timer": "00:00:08",
"index": "1/1/1",
"flood_queue_length": 0,
"next": "0(0)/0(0)/0(0)",
"last_flood_scan_length": 1,
"max_flood_scan_length": 4,
"last_flood_scan_time_msec": 0,
"max_flood_scan_time_msec": 0,
"statistics": {
"num_nbrs_suppress_hello": 0,
"refrence_count": 6,
},
"adjacent_neighbors": {
"nbr_count": 1,
"adj_nbr_count": 1,
"neighbor": "10.220.100.100",
},
},
"Loopback0": {
"enable": "up",
"line_protocol": "up",
"link_local_address": "fe80::8849:faff:fe9c:f9b6",
"interface_id": 6,
"router_id": "10.94.1.1",
"network_type": "LOOPBACK",
"cost": 0,
"loopback_txt": "Loopback interface is treated as a stub Host",
},
"GigabitEthernet0/0/0/1": {
"enable": "up",
"line_protocol": "up",
"link_local_address": "fe80:100:20::1",
"interface_id": 8,
"router_id": "10.94.1.1",
"network_type": "POINT_TO_POINT",
"cost": 1,
"bfd": {
"bfd_status": "enabled",
"interval": 150,
"multiplier": 3,
"mode": "Default",
},
"transmit_delay": 1,
"state": "POINT_TO_POINT",
"hello_interval": 10,
"dead_interval": 40,
"wait_interval": 40,
"retransmit_interval": 5,
"hello_timer": "00:00:08",
"index": "1/2/2",
"flood_queue_length": 0,
"next": "0(0)/0(0)/0(0)",
"last_flood_scan_length": 1,
"max_flood_scan_length": 4,
"last_flood_scan_time_msec": 0,
"max_flood_scan_time_msec": 0,
"statistics": {
"num_nbrs_suppress_hello": 0,
"refrence_count": 6,
},
"adjacent_neighbors": {
"nbr_count": 1,
"adj_nbr_count": 1,
"neighbor": "10.145.95.95",
},
},
}
}
}
}
}
}
}
}
}
}
}
}
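
# Orientation note (added for readability): the dictionary above nests
# vrf -> address family -> OSPFv3 instance -> instance id -> area -> interface,
# so a single parsed value is reached with a chain of lookups, e.g.
#
#     cost = expected_output["vrf"]["default"]["address_family"]["ipv6"] \
#         ["instance"]["mpls1"]["instance_id"][0]["areas"][0] \
#         ["interfaces"]["GigabitEthernet0/0/0/0"]["cost"]    # == 1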
| 63.882883 | 115 | 0.18869 | 305 | 7,091 | 4.085246 | 0.311475 | 0.020867 | 0.021669 | 0.019262 | 0.785714 | 0.785714 | 0.785714 | 0.785714 | 0.76565 | 0.731942 | 0 | 0.083878 | 0.74108 | 7,091 | 110 | 116 | 64.463636 | 0.594771 | 0 | 0 | 0.572727 | 0 | 0 | 0.18869 | 0.042166 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
36d2192020121aa69f7cc948e5148237e5c88240 | 7,018 | py | Python | tests/test_standardqueryoperators/test_skip_while.py | rlugojr/RxPy | 9f9b1de0ab833e53b0d1626a3b43a6c9424f01ec | [
"ECL-2.0",
"Apache-2.0"
] | 78 | 2015-01-22T23:57:01.000Z | 2021-06-04T15:16:22.000Z | tests/test_standardqueryoperators/test_skip_while.py | rlugojr/RxPy | 9f9b1de0ab833e53b0d1626a3b43a6c9424f01ec | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2015-10-19T12:59:57.000Z | 2015-10-19T12:59:57.000Z | tests/test_standardqueryoperators/test_skip_while.py | rlugojr/RxPy | 9f9b1de0ab833e53b0d1626a3b43a6c9424f01ec | [
"ECL-2.0",
"Apache-2.0"
] | 11 | 2015-02-16T20:43:45.000Z | 2018-05-30T11:46:50.000Z | from rx.observable import Observable
from rx.testing import TestScheduler, ReactiveTest, is_prime
on_next = ReactiveTest.on_next
on_completed = ReactiveTest.on_completed
on_error = ReactiveTest.on_error
subscribe = ReactiveTest.subscribe
subscribed = ReactiveTest.subscribed
disposed = ReactiveTest.disposed
created = ReactiveTest.created

def test_skip_while_complete_before():
    scheduler = TestScheduler()
    xs = scheduler.create_hot_observable(on_next(90, -1), on_next(110, -1), on_next(210, 2), on_next(260, 5), on_next(290, 13), on_next(320, 3), on_completed(330), on_next(350, 7), on_next(390, 4), on_next(410, 17), on_next(450, 8), on_next(500, 23), on_completed(600))
    invoked = 0

    def create():
        def predicate(x):
            nonlocal invoked
            invoked += 1
            return is_prime(x)
        return xs.skip_while(predicate)

    results = scheduler.start(create)
    results.messages.assert_equal(on_completed(330))
    xs.subscriptions.assert_equal(subscribe(200, 330))
    assert(invoked == 4)


def test_skip_while_complete_after():
    scheduler = TestScheduler()
    xs = scheduler.create_hot_observable(on_next(90, -1), on_next(110, -1), on_next(210, 2), on_next(260, 5), on_next(290, 13), on_next(320, 3), on_next(350, 7), on_next(390, 4), on_next(410, 17), on_next(450, 8), on_next(500, 23), on_completed(600))
    invoked = 0

    def create():
        def predicate(x):
            nonlocal invoked
            invoked += 1
            return is_prime(x)
        return xs.skip_while(predicate)

    results = scheduler.start(create)
    results.messages.assert_equal(on_next(390, 4), on_next(410, 17), on_next(450, 8), on_next(500, 23), on_completed(600))
    xs.subscriptions.assert_equal(subscribe(200, 600))
    assert(invoked == 6)


def test_skip_while_error_before():
    ex = 'ex'
    scheduler = TestScheduler()
    xs = scheduler.create_hot_observable(on_next(90, -1), on_next(110, -1), on_next(210, 2), on_next(260, 5), on_error(270, ex), on_next(290, 13), on_next(320, 3), on_next(350, 7), on_next(390, 4), on_next(410, 17), on_next(450, 8), on_next(500, 23), on_completed(600))
    invoked = 0

    def create():
        def predicate(x):
            nonlocal invoked
            invoked += 1
            return is_prime(x)
        return xs.skip_while(predicate)

    results = scheduler.start(create)
    results.messages.assert_equal(on_error(270, ex))
    xs.subscriptions.assert_equal(subscribe(200, 270))
    assert(invoked == 2)


def test_skip_while_error_after():
    ex = 'ex'
    scheduler = TestScheduler()
    xs = scheduler.create_hot_observable(on_next(90, -1), on_next(110, -1), on_next(210, 2), on_next(260, 5), on_next(290, 13), on_next(320, 3), on_next(350, 7), on_next(390, 4), on_next(410, 17), on_next(450, 8), on_next(500, 23), on_error(600, ex))
    invoked = 0

    def create():
        def predicate(x):
            nonlocal invoked
            invoked += 1
            return is_prime(x)
        return xs.skip_while(predicate)

    results = scheduler.start(create)
    results.messages.assert_equal(on_next(390, 4), on_next(410, 17), on_next(450, 8), on_next(500, 23), on_error(600, ex))
    xs.subscriptions.assert_equal(subscribe(200, 600))
    assert(invoked == 6)


def test_skip_while_dispose_before():
    scheduler = TestScheduler()
    xs = scheduler.create_hot_observable(on_next(90, -1), on_next(110, -1), on_next(210, 2), on_next(260, 5), on_next(290, 13), on_next(320, 3), on_next(350, 7), on_next(390, 4), on_next(410, 17), on_next(450, 8), on_next(500, 23), on_completed(600))
    invoked = 0

    def create():
        def predicate(x):
            nonlocal invoked
            invoked += 1
            return is_prime(x)
        return xs.skip_while(predicate)

    results = scheduler.start(create, disposed=300)
    results.messages.assert_equal()
    xs.subscriptions.assert_equal(subscribe(200, 300))
    assert(invoked == 3)


def test_skip_while_dispose_after():
    scheduler = TestScheduler()
    xs = scheduler.create_hot_observable(on_next(90, -1), on_next(110, -1), on_next(210, 2), on_next(260, 5), on_next(290, 13), on_next(320, 3), on_next(350, 7), on_next(390, 4), on_next(410, 17), on_next(450, 8), on_next(500, 23), on_completed(600))
    invoked = 0

    def create():
        def predicate(x):
            nonlocal invoked
            invoked += 1
            return is_prime(x)
        return xs.skip_while(predicate)

    results = scheduler.start(create, disposed=470)
    results.messages.assert_equal(on_next(390, 4), on_next(410, 17), on_next(450, 8))
    xs.subscriptions.assert_equal(subscribe(200, 470))
    assert(invoked == 6)


def test_skip_while_zero():
    scheduler = TestScheduler()
    xs = scheduler.create_hot_observable(on_next(90, -1), on_next(110, -1), on_next(205, 100), on_next(210, 2), on_next(260, 5), on_next(290, 13), on_next(320, 3), on_next(350, 7), on_next(390, 4), on_next(410, 17), on_next(450, 8), on_next(500, 23), on_completed(600))
    invoked = 0

    def create():
        def predicate(x):
            nonlocal invoked
            invoked += 1
            return is_prime(x)
        return xs.skip_while(predicate)

    results = scheduler.start(create)
    results.messages.assert_equal(on_next(205, 100), on_next(210, 2), on_next(260, 5), on_next(290, 13), on_next(320, 3), on_next(350, 7), on_next(390, 4), on_next(410, 17), on_next(450, 8), on_next(500, 23), on_completed(600))
    xs.subscriptions.assert_equal(subscribe(200, 600))
    assert(invoked == 1)


def test_skip_while_throw():
    scheduler = TestScheduler()
    xs = scheduler.create_hot_observable(on_next(90, -1), on_next(110, -1), on_next(210, 2), on_next(260, 5), on_next(290, 13), on_next(320, 3), on_next(350, 7), on_next(390, 4), on_next(410, 17), on_next(450, 8), on_next(500, 23), on_completed(600))
    ex = 'ex'
    invoked = 0

    def create():
        def predicate(x):
            nonlocal invoked
            invoked += 1
            if invoked == 3:
                raise Exception(ex)
            return is_prime(x)
        return xs.skip_while(predicate)

    results = scheduler.start(create)
    results.messages.assert_equal(on_error(290, ex))
    xs.subscriptions.assert_equal(subscribe(200, 290))
    assert(invoked == 3)


def test_skip_while_index():
    scheduler = TestScheduler()
    xs = scheduler.create_hot_observable(on_next(90, -1), on_next(110, -1), on_next(210, 2), on_next(260, 5), on_next(290, 13), on_next(320, 3), on_next(350, 7), on_next(390, 4), on_next(410, 17), on_next(450, 8), on_next(500, 23), on_completed(600))

    def create():
        def predicate(x, i):
            return i < 5
        return xs.skip_while(predicate)

    results = scheduler.start(create)
    results.messages.assert_equal(on_next(390, 4), on_next(410, 17), on_next(450, 8), on_next(500, 23), on_completed(600))
    xs.subscriptions.assert_equal(subscribe(200, 600))
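
# A comment-only sketch (using the same testing helpers as above) of the
# behaviour these tests cover: skip_while drops items while the predicate
# holds and then relays everything from the first non-matching item onward.
#
#     scheduler = TestScheduler()
#     xs = scheduler.create_hot_observable(on_next(210, 2), on_next(260, 9), on_completed(300))
#     results = scheduler.start(lambda: xs.skip_while(is_prime))
#     # 2 is prime and is skipped; 9 is not, so on_next(260, 9) and
#     # on_completed(300) are the only expected messages.
#     results.messages.assert_equal(on_next(260, 9), on_completed(300))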
| 42.533333 | 269 | 0.6499 | 1,062 | 7,018 | 4.072505 | 0.074388 | 0.176185 | 0.029133 | 0.03237 | 0.889711 | 0.864277 | 0.837919 | 0.798613 | 0.798613 | 0.798613 | 0 | 0.121365 | 0.211029 | 7,018 | 164 | 270 | 42.792683 | 0.659744 | 0 | 0 | 0.683824 | 0 | 0 | 0.000855 | 0 | 0 | 0 | 0 | 0 | 0.191176 | 1 | 0.198529 | false | 0 | 0.014706 | 0.007353 | 0.345588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
36d85ced708f206280d6f6e0964fcb861f54052c | 195 | py | Python | spacys_pos_tagging/__init__.py | barreira/spacys_pos_tagging | 59dbebc13a3e3066121456d2ead99adee463db68 | [
"MIT"
] | null | null | null | spacys_pos_tagging/__init__.py | barreira/spacys_pos_tagging | 59dbebc13a3e3066121456d2ead99adee463db68 | [
"MIT"
] | null | null | null | spacys_pos_tagging/__init__.py | barreira/spacys_pos_tagging | 59dbebc13a3e3066121456d2ead99adee463db68 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
from .app import process_args
from .pos_tagging import add_tokenizer_exceptions, generate_dependencies_graph, \
generate_information, generate_pos_chart, generate_tagged_text
| 32.5 | 81 | 0.861538 | 26 | 195 | 6.038462 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005556 | 0.076923 | 195 | 5 | 82 | 39 | 0.866667 | 0.087179 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7fcb1c388eaea8524c54e2c08c219e1b7ca077cc | 25 | py | Python | iotas/devices/moorescloud/engineroom/__init__.py | cheshrkat/holideck | 4ee6e11d728ef24c32670e6b15a16de27c8f7406 | [
"MIT"
] | 2 | 2018-01-03T09:58:00.000Z | 2019-05-15T07:26:33.000Z | iotas/devices/moorescloud/engineroom/__init__.py | jpwarren/holideck | 4fd39f5cf5bd3a1749d5c57f1a73ea70fd9a8f49 | [
"MIT"
] | null | null | null | iotas/devices/moorescloud/engineroom/__init__.py | jpwarren/holideck | 4fd39f5cf5bd3a1749d5c57f1a73ea70fd9a8f49 | [
"MIT"
] | null | null | null | print "Hello engineroom"
| 12.5 | 24 | 0.8 | 3 | 25 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 25 | 1 | 25 | 25 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0.64 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
3d04df4c6ab8160292db45ed496fd5e66252d04e | 4,417 | py | Python | loans/migrations/0005_auto_20220225_0824.py | prateekmohanty63/microfinance | 39839c0d378be4ccc40a9dde5dc38a10773a38a1 | [
"MIT"
] | 1 | 2022-02-25T18:39:44.000Z | 2022-02-25T18:39:44.000Z | loans/migrations/0005_auto_20220225_0824.py | prateekmohanty63/microfinance | 39839c0d378be4ccc40a9dde5dc38a10773a38a1 | [
"MIT"
] | null | null | null | loans/migrations/0005_auto_20220225_0824.py | prateekmohanty63/microfinance | 39839c0d378be4ccc40a9dde5dc38a10773a38a1 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.4 on 2022-02-25 08:24
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('loans', '0004_delete_organizationsettings'),
    ]

    operations = [
        migrations.RenameField(
            model_name='product',
            old_name='label',
            new_name='name',
        ),
        migrations.RemoveField(
            model_name='productconfig',
            name='status',
        ),
        migrations.AddField(
            model_name='feeconfig',
            name='created_at',
            field=models.DateTimeField(auto_now_add=True, default=django.utils.timezone.now),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='feeconfig',
            name='created_user',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='feeconfig',
            name='last_updated',
            field=models.DateTimeField(auto_now=True),
        ),
        migrations.AddField(
            model_name='interestconfig',
            name='created_at',
            field=models.DateTimeField(auto_now_add=True, default=django.utils.timezone.now),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='interestconfig',
            name='created_user',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='interestconfig',
            name='last_updated',
            field=models.DateTimeField(auto_now=True),
        ),
        migrations.AddField(
            model_name='loan',
            name='created_user',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='payment',
            name='created_user',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='paymentconfig',
            name='created_at',
            field=models.DateTimeField(auto_now_add=True, default=django.utils.timezone.now),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='paymentconfig',
            name='created_user',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='paymentconfig',
            name='last_updated',
            field=models.DateTimeField(auto_now=True),
        ),
        migrations.AddField(
            model_name='product',
            name='created_user',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='productconfig',
            name='created_user',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='productconfig',
            name='current',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='productconfig',
            name='label',
            field=models.CharField(blank=True, max_length=200, null=True),
        ),
        migrations.AlterField(
            model_name='productconfig',
            name='default_on_day',
            field=models.IntegerField(null=True),
        ),
        migrations.AlterField(
            model_name='productconfig',
            name='length',
            field=models.IntegerField(null=True),
        ),
        migrations.AlterField(
            model_name='productconfig',
            name='overdue_on_day',
            field=models.IntegerField(null=True),
        ),
    ]
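
# For orientation only: the AddField/AlterField operations above correspond,
# roughly, to model fields like the following sketch (hypothetical; the real
# definitions presumably live in the loans app's models.py):
#
#     class ProductConfig(models.Model):
#         ...
#         created_user = models.ForeignKey(settings.AUTH_USER_MODEL, blank=True,
#                                          null=True, on_delete=models.SET_NULL)
#         current = models.BooleanField(default=False)
#         label = models.CharField(max_length=200, blank=True, null=True)
#         default_on_day = models.IntegerField(null=True)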
| 36.808333 | 134 | 0.602898 | 443 | 4,417 | 5.819413 | 0.176072 | 0.069822 | 0.133825 | 0.157099 | 0.799845 | 0.799845 | 0.769201 | 0.719938 | 0.698991 | 0.698991 | 0 | 0.006975 | 0.285941 | 4,417 | 119 | 135 | 37.117647 | 0.810399 | 0.010188 | 0 | 0.778761 | 1 | 0 | 0.108009 | 0.007323 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035398 | 0 | 0.061947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3d2976d917723c64f856e08fc6d860e9fb9ba216 | 45 | py | Python | lynxfall/utils/__init__.py | Fates-List-Archive/lynxfall | cf0a4671e5a2cc7a354d0908c09f4840fef4037c | [
"MIT"
] | null | null | null | lynxfall/utils/__init__.py | Fates-List-Archive/lynxfall | cf0a4671e5a2cc7a354d0908c09f4840fef4037c | [
"MIT"
] | null | null | null | lynxfall/utils/__init__.py | Fates-List-Archive/lynxfall | cf0a4671e5a2cc7a354d0908c09f4840fef4037c | [
"MIT"
] | null | null | null | from .fastapi import *
from .string import *
| 15 | 22 | 0.733333 | 6 | 45 | 5.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 45 | 2 | 23 | 22.5 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3d2d30bc731f999ef152a01901941c0acde7e018 | 195 | py | Python | src/mcookbook/config/live.py | s0undt3ch/mommas-cookbook | ccca526eee9241f12674cad8c1e1da1a900cef82 | [
"Apache-2.0"
] | 2 | 2022-01-02T23:47:32.000Z | 2022-01-07T11:14:15.000Z | src/mcookbook/config/live.py | UfSoft/mommas-cookbook | ccca526eee9241f12674cad8c1e1da1a900cef82 | [
"Apache-2.0"
] | 1 | 2022-01-17T12:47:37.000Z | 2022-01-17T12:47:37.000Z | src/mcookbook/config/live.py | s0undt3ch/mommas-cookbook | ccca526eee9241f12674cad8c1e1da1a900cef82 | [
"Apache-2.0"
] | 1 | 2022-01-10T18:49:36.000Z | 2022-01-10T18:49:36.000Z | """
Live configuration schema.
"""
from __future__ import annotations
from mcookbook.config.base import BaseConfig


class LiveConfig(BaseConfig):
    """
    Live configuration schema.
    """
| 15 | 44 | 0.723077 | 19 | 195 | 7.210526 | 0.684211 | 0.248175 | 0.335766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 195 | 12 | 45 | 16.25 | 0.85625 | 0.271795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
180799d6ce52c52412ac9bedb9415fc9e4cb91ab | 8,922 | py | Python | main.py | anglefish/NCP_data | 86965f5a8ffc3a0cb1323e33c6986e97f1e87325 | [
"MIT"
] | 2 | 2020-02-12T03:14:33.000Z | 2020-02-12T09:59:13.000Z | main.py | anglefish/NCP_data | 86965f5a8ffc3a0cb1323e33c6986e97f1e87325 | [
"MIT"
] | null | null | null | main.py | anglefish/NCP_data | 86965f5a8ffc3a0cb1323e33c6986e97f1e87325 | [
"MIT"
] | 2 | 2020-02-14T02:45:24.000Z | 2020-02-16T15:30:46.000Z | from mpython import *
import urequests
import network
import json
import math
import time
bmp = bytearray([\
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X10,0X00,0X00,0X04,0X00,0X00,0X00,0X03,0X80,0X00,0X00,0X00,0X3E,0X00,
0X00,0X00,0XF8,0X00,0X00,0X07,0XC0,0X00,0X00,0X0F,0X8F,0XF8,0X00,0X03,0XFF,0XC0,
0X00,0X00,0XF8,0X00,0X00,0X07,0XC0,0X00,0X00,0X3F,0XFF,0XF8,0X00,0X03,0XFF,0XF8,
0X02,0X1F,0XFC,0X00,0X00,0X0F,0XFF,0X80,0X1F,0XFF,0XFF,0XF8,0X00,0X7F,0XFF,0XF8,
0X3E,0X1F,0XFF,0XC0,0X0F,0XFF,0XFF,0X80,0X1F,0XFF,0XFF,0XF8,0X00,0X7B,0XFF,0XF8,
0X3E,0X1F,0XFF,0XF0,0X0F,0XFF,0XFF,0X80,0X1F,0XFF,0XFF,0XF8,0X0E,0XF8,0X1E,0X00,
0X1E,0X1F,0XFF,0XE0,0X0F,0XFF,0XFF,0X80,0X1F,0XF8,0X1E,0X00,0X0E,0XFB,0XFF,0XF0,
0X1F,0XDF,0XFF,0XE0,0X0F,0XFF,0XFF,0XC0,0X1F,0XC5,0XFF,0X80,0X1E,0XF3,0XFF,0XF0,
0XFF,0XC0,0XFF,0XE0,0X0F,0XFF,0X80,0X00,0X07,0XFF,0XFF,0X80,0X1E,0XF3,0XFF,0XF0,
0XFF,0XC0,0X00,0X20,0X00,0X0F,0X80,0X00,0X07,0XBF,0XFF,0XC0,0X1D,0XF3,0XFF,0X00,
0XFF,0XEF,0XFC,0X00,0X07,0XFF,0XFF,0XE0,0X3F,0XBF,0XFF,0X80,0X3D,0XF0,0XFF,0XFC,
0XFF,0XEF,0XFF,0X00,0X3F,0XFF,0XFF,0XE0,0X3F,0XBF,0X07,0X80,0X3D,0XF7,0XFF,0XFC,
0XDF,0X0F,0XFF,0X80,0X3F,0XFF,0XFF,0XE0,0X7F,0XBC,0X07,0XF8,0X3D,0XE7,0XFF,0XFC,
0X1F,0X1F,0XFF,0X80,0X3F,0XFF,0XFF,0XE0,0X7F,0XBE,0X0F,0XF8,0X3F,0XF7,0XFF,0XFC,
0X1F,0XDF,0XFF,0X80,0X3F,0XFF,0XFF,0XF0,0X7F,0XBE,0X0F,0XF8,0X1F,0XF0,0X00,0X00,
0X3F,0XFF,0X07,0X80,0X3E,0X1F,0X80,0X00,0X0F,0XA1,0XFF,0XF8,0X1F,0XFF,0XFF,0XC0,
0X7F,0XFF,0X0F,0X80,0X00,0X1F,0X80,0X40,0X0F,0X8F,0XFF,0XF8,0X0F,0XF3,0XFF,0XF0,
0X7F,0XFF,0X0F,0X80,0X1E,0X1F,0X07,0XC0,0X0F,0X9F,0XFF,0XC0,0X03,0XE3,0XFF,0XF0,
0X7F,0X9F,0X0F,0X80,0X3E,0X1F,0X03,0XC0,0X7F,0X9F,0XF7,0XC0,0X03,0XE3,0XF3,0XF8,
0X7F,0X1F,0X0F,0X80,0X3E,0X1F,0X03,0XE0,0X7F,0X9F,0X07,0X80,0X03,0XC3,0XFF,0XF8,
0X0F,0X9E,0X0F,0X00,0X3C,0X1F,0X03,0XE0,0X3F,0X87,0XFF,0X80,0X03,0XC3,0XFF,0X78,
0X0F,0XBE,0X1F,0X00,0X3F,0X1F,0X0F,0XE0,0X3F,0X8F,0XFF,0X00,0X03,0XC3,0XFF,0X78,
0X0F,0XBE,0X0F,0XE0,0X3F,0XFF,0XFF,0XE0,0X3F,0X8F,0XFF,0XF0,0X03,0XC3,0XE0,0X78,
0X08,0X3E,0X0F,0XE0,0X3F,0XFF,0XFF,0XE0,0X0F,0X83,0XFF,0XF0,0X03,0XC3,0XFF,0XF8,
0X00,0X3E,0X0F,0XF0,0X3F,0XFF,0XFF,0XC0,0X0F,0X87,0XFF,0XF0,0X03,0XC3,0XFF,0XF8,
0X00,0X3E,0X07,0XF0,0X1F,0XFF,0XFF,0X00,0X0F,0X87,0XE3,0XF0,0X03,0XC3,0XE7,0XF0,
0X00,0X00,0X03,0XE0,0X00,0XFF,0X80,0X00,0X0F,0X83,0XC0,0X30,0X00,0X03,0XE0,0XF0,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X81,0X00,0X00,0X00,0X03,0XE0,0X70,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X0F,0XC0,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X0F,0X80,0X00,0X01,0XFF,0XF8,0X00,0X00,0X00,0XC0,0X00,0X00,0X00,0X03,0XE0,
0X00,0X0F,0XF0,0X00,0X0F,0XFF,0XFE,0X00,0X00,0X01,0XE0,0X00,0X00,0X00,0XC3,0XE0,
0X18,0XFF,0XFE,0X00,0X1F,0XFF,0XFF,0X00,0X00,0X03,0XF0,0X00,0X00,0X00,0XF3,0XE0,
0X1F,0XFF,0XFF,0X80,0X1F,0XFF,0XFF,0X80,0X00,0X03,0XFB,0X00,0X00,0X01,0XF3,0XE0,
0X3F,0XFF,0XFF,0XC0,0X1F,0XC0,0X1F,0XC0,0X00,0X01,0XF7,0X80,0X00,0X01,0XF7,0XC0,
0X3F,0XFF,0XFF,0XC0,0X3F,0X00,0X07,0XC0,0X00,0X3E,0XEF,0XC0,0X3F,0XF9,0XFF,0XFC,
0X3F,0XFF,0X9F,0XE0,0X3E,0XFF,0XF7,0XC0,0X00,0X3E,0X5F,0XC0,0X1F,0XFD,0XFF,0XFC,
0X7F,0X1F,0X07,0XE0,0X3E,0XFF,0XF3,0XE0,0X0E,0X7E,0X3F,0X80,0X1F,0XFF,0XFF,0XF8,
0X7C,0X1F,0X07,0XE0,0X3E,0XFF,0XF3,0XE0,0X1F,0X7E,0X3F,0X00,0X1F,0X7F,0XFF,0XF8,
0X7C,0X3F,0X03,0XE0,0X3E,0X1F,0XF3,0XE0,0X1F,0X7C,0X7E,0X00,0X1E,0X3F,0XC7,0XF8,
0XFC,0X3F,0X03,0XE0,0X3C,0X1E,0X01,0XE0,0X1E,0X7C,0XFC,0XC0,0X1F,0XFF,0XC7,0XC0,
0XFC,0X3F,0X03,0XE0,0X7C,0XFF,0XC1,0XF0,0X3E,0XFD,0XFD,0XE0,0X1F,0XFF,0XC7,0XC0,
0XFC,0X3F,0X03,0XE0,0X7C,0XFF,0XE1,0XF0,0X3E,0XFB,0XFB,0XF0,0X1F,0XFC,0XC7,0XC0,
0XFC,0X3F,0X07,0XE0,0X7C,0XFF,0XE1,0XF0,0X3C,0XFF,0XF3,0XF8,0X1E,0X3D,0XFF,0XF8,
0XFE,0X3F,0X07,0XE0,0X7C,0XFF,0XF9,0XF0,0X3C,0XFF,0XE1,0XF8,0X1F,0XFD,0XFF,0XF8,
0X7F,0XFF,0X3F,0XE0,0X7C,0X0F,0X3D,0XF0,0X7C,0XFF,0XC0,0XF0,0X1F,0XFD,0XFF,0XF8,
0X7F,0XFF,0XFF,0XC0,0X7C,0X0F,0X3F,0XF0,0X1C,0XFF,0X80,0X60,0X1F,0XFD,0XFF,0XF8,
0X3F,0XFF,0XFF,0XC0,0X7D,0XFF,0XFD,0XF0,0X00,0XFF,0X00,0X00,0X1E,0X3C,0XFF,0X80,
0X1F,0XFF,0XFF,0X80,0X7F,0XFF,0XF9,0XF0,0X00,0XFE,0X00,0X00,0X1E,0X7C,0X0F,0X80,
0X07,0XFF,0XFF,0X00,0X3E,0XFF,0XF1,0XF0,0X00,0XFC,0X03,0X00,0X0E,0X7C,0X0F,0X80,
0X00,0XFF,0XF0,0X00,0X3F,0X80,0X03,0XE0,0X01,0XFF,0XFF,0X00,0X0F,0X78,0X0F,0X80,
0X00,0X7E,0X00,0X00,0X1F,0X80,0X0F,0XE0,0X03,0XFF,0XFF,0X80,0X0F,0X79,0XFF,0XF8,
0X00,0X7E,0X00,0X00,0X1F,0XFF,0XFF,0XE0,0X07,0XFF,0XFF,0X80,0X00,0X01,0XFF,0XFC,
0X00,0X7E,0X00,0X00,0X0F,0XFF,0XFF,0XC0,0X03,0XFF,0XFF,0X80,0X00,0X01,0XFF,0XFC,
0X00,0X7E,0X00,0X00,0X03,0XFF,0XFF,0X00,0X01,0XDF,0XFE,0X00,0X00,0X01,0XFF,0XFC,
0X00,0X3E,0X00,0X00,0X00,0XFF,0XF8,0X00,0X00,0X80,0X00,0X00,0X00,0X01,0XFF,0XE0,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00
])
heart = bytearray([\
0X00,0X00,0X3C,0X3C,0X7E,0X7E,0XFF,0XFF,0XFF,0XFF,0XFF,0XFF,0XFF,0XFF,0X7F,0XFE,
0X7F,0XFE,0X3F,0XFC,0X1F,0XF8,0X0F,0XF0,0X07,0XE0,0X03,0XC0,0X01,0X80,0X00,0X00,
])
# Draw the boot splash image
oled.bitmap(0, 0, bmp, 128, 64, 0)
oled.show()
time.sleep(2)
# Draw software information
oled.fill(0)
oled.DispChar("疫情数据播报系统 v1.0", 2, 0, 1)
oled.DispChar("Made with by 于儿", 2, 16, 1)
oled.hline(2,38,124,1)
oled.DispChar("武汉加油!中国加油!", 10, 45,1)
oled.bitmap(63, 16, heart, 16, 16, 1)
oled.show()
time.sleep(2)
oled.fill(0)
oled.DispChar('疫情数据查询', 30, 10, 1)
myUI=UI(oled)
myUI.ProgressBar(10,30,110,8,30)
oled.DispChar('正在连接Wifi...', 25, 40, 1)
oled.show()
# Connect to Wi-Fi
my_wifi = wifi()
my_wifi.connectWiFi('SSID', 'password')
oled.fill(0)
oled.DispChar('疫情数据查询', 30, 10, 1)
myUI.ProgressBar(10,30,110,8,60)
oled.DispChar('获取疫情数据...', 25, 40, 1)
oled.show()
# Fetch the epidemic data
_response = urequests.post('https://www.planplus.cn/ncovdata')
resp_json = _response.json()
_response.close()
data = resp_json['city'] # per-city epidemic data
confirm = resp_json['confirm'] # history of confirmed cases
suspect = resp_json['suspect'] # history of suspected cases
dead = resp_json['dead'] # history of deaths
heal = resp_json['heal'] # history of recoveries
total = resp_json['total'] # nationwide totals
date = resp_json['day'].split(' ')[0] # date of the data (year-month-day only)
oled.fill(0)
oled.DispChar('疫情数据查询', 30, 10, 1)
myUI.ProgressBar(10,30,110,8,100)
oled.DispChar('数据加载完成...', 25, 40, 1)
time.sleep_ms(200)
index = None # index of the currently selected city

# Draw a history curve
# d: list of historical values
# name: label for the curve
def draw_line(d, name):
    num = len(d)
    max = d[num-1]
    step = math.floor(120/num)
    oled.fill(0)
    oled.DispChar(str(max), 8, 2)
    oled.DispChar(name, 8, 16)
    oled.hline(4,62,120,1)
    oled.vline(4,2,60,1)
    x1 = 4
    y1 = 62
    for i in range(1,num):
        x2 = x1+step
        y2 = 62 - round(60*d[i]/max)
        oled.line(x1, y1, x2, y2, 1)
        x1 = x2
        y1 = y2
    oled.show()

# Draw the nationwide totals
def draw_total():
    oled.fill(0)
    oled.DispChar('确诊:%d' % (total['confirm']), 0, 0, 1)
    oled.DispChar('疑似:%d' % (total['suspect']), 0, 16, 1)
    oled.DispChar('治愈:%d' % (total['heal']), 0, 32, 1)
    oled.DispChar('死亡:%d' % (total['dead']), 0, 48, 1)
    oled.DispChar('日期', 98, 32, 1)
    oled.DispChar(date[-5:], 90, 48, 1)
    oled.show()

# Draw data for the selected city/province
def draw_info():
    global index
    d = data[index]
    y = 0
    oled.fill(0)
    oled.DispChar(d['name'], 0, y, 1)
    oled.DispChar(date, 60, y, 1)
    y+=16
    oled.DispChar('确诊 %d 例' % (d['confirm']), 0, y, 1)
    if d['heal']>0:
        y+=16
        oled.DispChar('治愈 %d 例' % (d['heal']), 0, y, 1)
    if d['dead']>0:
        y+=16
        oled.DispChar('死亡 %d 例' % (d['dead']), 0, y, 1)
    oled.show()

# Handler for a button A press
def on_button_a_press(_):
    global index
    if index==None:
        index = len(data)-1
    else:
        index -= 1
        if index<0:
            index = len(data)-1
    draw_info()

# Handler for a button B press
def on_button_b_press(_):
    global index
    if index == None:
        index = 0
    else:
        index += 1
        if index>=len(data):
            index = 0
    draw_info()
button_a.irq(trigger=Pin.IRQ_FALLING, handler=on_button_a_press)
button_b.irq(trigger=Pin.IRQ_FALLING, handler=on_button_b_press)
draw_total()
while True:
    if(touchPad_P.read() < 100):
        draw_line(confirm, "确诊人数")
    elif(touchPad_Y.read() < 100):
        draw_line(suspect, "疑似人数")
    elif(touchPad_T.read() < 100):
        draw_line(heal, "治愈人数")
    elif(touchPad_H.read() < 100):
        draw_line(dead, "死亡人数")
    elif(touchPad_O.read() < 100):
        draw_total()
    elif(touchPad_N.read() < 100):
        index = 0
        draw_info()
    time.sleep_ms(100)
18595c83dabe14470c8aabb26d4b3fb0df2abd19 | 45,720 | py | Python | tests/providers/box/test_provider.py | icereval/waterbutler | ef09929b688547eb63d83d8f7fcf168f841ca910 | [
"Apache-2.0"
] | 65 | 2015-01-23T03:22:04.000Z | 2022-01-11T22:33:19.000Z | tests/providers/box/test_provider.py | cslzchen/waterbutler | e4e07727e06885a752c9251e5731f5627f646da3 | [
"Apache-2.0"
] | 300 | 2015-02-16T16:45:02.000Z | 2022-01-31T14:49:07.000Z | tests/providers/box/test_provider.py | cslzchen/waterbutler | e4e07727e06885a752c9251e5731f5627f646da3 | [
"Apache-2.0"
] | 76 | 2015-01-20T20:45:17.000Z | 2021-07-30T13:18:10.000Z | import io
import json
from http import HTTPStatus
import pytest
import aiohttpretty
from waterbutler.core import streams
from waterbutler.core import exceptions
from waterbutler.core.path import WaterButlerPath
from waterbutler.providers.box import BoxProvider
from waterbutler.providers.box.metadata import (BoxRevision,
BoxFileMetadata,
BoxFolderMetadata)
from waterbutler.providers.box.settings import NONCHUNKED_UPLOAD_LIMIT
from tests.utils import MockCoroutine
from tests.providers.box.fixtures import (intra_fixtures,
revision_fixtures,
root_provider_fixtures,)
@pytest.fixture
def auth():
return {
'name': 'cat',
'email': 'cat@cat.com',
}
@pytest.fixture
def credentials():
return {'token': 'wrote harry potter'}
@pytest.fixture
def other_credentials():
return {'token': 'wrote lord of the rings'}
@pytest.fixture
def settings():
return {'folder': '11446498'}
@pytest.fixture
def provider(auth, credentials, settings):
return BoxProvider(auth, credentials, settings)
@pytest.fixture
def other_provider(auth, other_credentials, settings):
return BoxProvider(auth, other_credentials, settings)
@pytest.fixture
def file_content():
return b'SLEEP IS FOR THE WEAK GO SERVE STREAMS'
@pytest.fixture
def file_sha_b64():
return '2jmj7l5rSw0yVb/vlWAYkK/YBwk='
@pytest.fixture
def file_like(file_content):
return io.BytesIO(file_content)
@pytest.fixture
def file_stream(file_like):
return streams.FileStreamReader(file_like)
class TestValidatePath:
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_validate_v1_path_file(self, provider, root_provider_fixtures):
file_id = '5000948880'
good_url = provider.build_url('files', file_id, fields='id,name,path_collection')
bad_url = provider.build_url('folders', file_id, fields='id,name,path_collection')
aiohttpretty.register_json_uri('GET', good_url,
body=root_provider_fixtures['file_metadata']['entries'][0],
status=200)
aiohttpretty.register_uri('GET', bad_url, status=404)
try:
wb_path_v1 = await provider.validate_v1_path('/' + file_id)
except Exception as exc:
pytest.fail(str(exc))
with pytest.raises(exceptions.NotFoundError) as exc:
await provider.validate_v1_path('/' + file_id + '/')
assert exc.value.code == HTTPStatus.NOT_FOUND
wb_path_v0 = await provider.validate_path('/' + file_id)
assert wb_path_v1 == wb_path_v0
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_validate_v1_path_folder(self, provider, root_provider_fixtures):
provider.folder = '0'
folder_id = '11446498'
good_url = provider.build_url('folders', folder_id, fields='id,name,path_collection')
bad_url = provider.build_url('files', folder_id, fields='id,name,path_collection')
aiohttpretty.register_json_uri('GET', good_url,
body=root_provider_fixtures['folder_object_metadata'],
status=200)
aiohttpretty.register_uri('GET', bad_url, status=404)
try:
wb_path_v1 = await provider.validate_v1_path('/' + folder_id + '/')
except Exception as exc:
pytest.fail(str(exc))
with pytest.raises(exceptions.NotFoundError) as exc:
await provider.validate_v1_path('/' + folder_id)
assert exc.value.code == HTTPStatus.NOT_FOUND
wb_path_v0 = await provider.validate_path('/' + folder_id + '/')
assert wb_path_v1 == wb_path_v0
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_validate_path_root(self, provider):
path = await provider.validate_path('/')
assert path.is_dir
assert len(path.parts) == 1
assert path.name == ''
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_validate_v1_path_root(self, provider):
path = await provider.validate_v1_path('/')
assert path.is_dir
assert len(path.parts) == 1
assert path.name == ''
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_validate_v1_path_bad_path(self, provider):
with pytest.raises(exceptions.NotFoundError) as e:
await provider.validate_v1_path('/bulbasaur')
assert e.value.message == 'Could not retrieve file or directory /bulbasaur'
assert e.value.code == 404
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_validate_path_bad_path(self, provider):
with pytest.raises(exceptions.MetadataError) as e:
await provider.validate_path('/bulbasaur/charmander')
assert e.value.message == 'Could not find /bulbasaur/charmander'
assert e.value.code == 404
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_validate_path(self, provider, root_provider_fixtures):
provider.folder = '0'
folder_id = '0'
good_url = provider.build_url('folders', folder_id, 'items', fields='id,name,type', limit=1000)
aiohttpretty.register_json_uri('GET', good_url,
body=root_provider_fixtures['revalidate_metadata'],
status=200)
result = await provider.validate_path('/bulbasaur')
assert result == WaterButlerPath('/bulbasaur', folder=False)
class TestDownload:
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_download(self, provider, root_provider_fixtures):
item = root_provider_fixtures['file_metadata']['entries'][0]
path = WaterButlerPath('/triangles.txt', _ids=(provider.folder, item['id']))
metadata_url = provider.build_url('files', item['id'])
content_url = provider.build_url('files', item['id'], 'content')
aiohttpretty.register_json_uri('GET', metadata_url, body=item)
aiohttpretty.register_uri('GET', content_url, body=b'better', auto_length=True)
result = await provider.download(path)
content = await result.read()
assert content == b'better'
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_download_revision(self, provider, root_provider_fixtures):
revision = '21753842'
item = root_provider_fixtures['file_metadata']['entries'][0]
path = WaterButlerPath('/triangles.txt', _ids=(provider.folder, item['id']))
metadata_url = provider.build_url('files', item['id'])
content_url = provider.build_url('files', item['id'], 'content', version=revision)
aiohttpretty.register_json_uri('GET', metadata_url, body=item)
aiohttpretty.register_uri('GET', content_url, body=b'better', auto_length=True)
result = await provider.download(path, revision)
content = await result.read()
assert content == b'better'
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_download_not_found(self, provider, root_provider_fixtures):
item = root_provider_fixtures['file_metadata']['entries'][0]
path = WaterButlerPath('/vectors.txt', _ids=(provider.folder, None))
metadata_url = provider.build_url('files', item['id'])
aiohttpretty.register_uri('GET', metadata_url, status=404)
with pytest.raises(exceptions.DownloadError) as e:
await provider.download(path)
assert e.value.code == 404
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_download_range(self, provider, root_provider_fixtures):
item = root_provider_fixtures['file_metadata']['entries'][0]
path = WaterButlerPath('/triangles.txt', _ids=(provider.folder, item['id']))
metadata_url = provider.build_url('files', item['id'])
content_url = provider.build_url('files', item['id'], 'content')
aiohttpretty.register_json_uri('GET', metadata_url, body=item)
aiohttpretty.register_uri('GET', content_url, body=b'be', auto_length=True, status=206)
result = await provider.download(path, range=(0,1))
assert result.partial
content = await result.read()
assert content == b'be'
assert aiohttpretty.has_call(method='GET', uri=content_url,
headers={'Authorization': 'Bearer wrote harry potter',
'Accept-Encoding': '', 'Range': 'bytes=0-1'})
class TestUpload:
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_upload_create(self, provider, root_provider_fixtures, file_stream):
path = WaterButlerPath('/newfile', _ids=(provider.folder, None))
upload_url = provider._build_upload_url('files', 'content')
upload_metadata = root_provider_fixtures['upload_metadata']
aiohttpretty.register_json_uri('POST', upload_url, status=201, body=upload_metadata)
metadata, created = await provider.upload(file_stream, path)
expected = BoxFileMetadata(upload_metadata['entries'][0], path).serialized()
assert metadata.serialized() == expected
assert created is True
assert path.identifier_path == metadata.path
assert aiohttpretty.has_call(method='POST', uri=upload_url)
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_upload_conflict_keep(self, provider, root_provider_fixtures, file_stream):
upload_metadata = root_provider_fixtures['upload_metadata']
item = upload_metadata['entries'][0]
path = WaterButlerPath('/newfile', _ids=(provider.folder, item['id']))
upload_url = provider._build_upload_url('files', 'content')
aiohttpretty.register_json_uri('POST', upload_url, status=201, body=upload_metadata)
metadata_url = provider.build_url('files', path.identifier)
aiohttpretty.register_json_uri('GET', metadata_url, body=upload_metadata)
list_url = provider.build_url('folders', item['path_collection']['entries'][1]['id'],
'items', fields='id,name,type', limit=1000)
aiohttpretty.register_json_uri('GET', list_url,
body=root_provider_fixtures['folder_list_metadata'])
metadata, created = await provider.upload(file_stream, path, conflict='keep')
expected = BoxFileMetadata(item, path).serialized()
# since the metadata for the renamed conflict file isn't actually saved, this one is odd to
# test.
assert metadata.serialized() == expected
assert created is True
assert path.identifier_path == metadata.path
assert aiohttpretty.has_call(method='POST', uri=upload_url)
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_upload_update(self, provider, root_provider_fixtures, file_stream):
upload_metadata = root_provider_fixtures['upload_metadata']
item_to_overwrite = root_provider_fixtures['folder_list_metadata']['entries'][0]
path = WaterButlerPath('/newfile', _ids=(provider.folder, item_to_overwrite['id']))
upload_url = provider._build_upload_url('files', item_to_overwrite['id'], 'content')
aiohttpretty.register_json_uri('POST', upload_url, status=201, body=upload_metadata)
metadata, created = await provider.upload(file_stream, path)
expected = BoxFileMetadata(upload_metadata['entries'][0], path).serialized()
assert metadata.serialized() == expected
assert created is False
assert aiohttpretty.has_call(method='POST', uri=upload_url)
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_upload_checksum_mismatch(self, provider, root_provider_fixtures, file_stream):
path = WaterButlerPath('/newfile', _ids=(provider.folder, None))
upload_url = provider._build_upload_url('files', 'content')
aiohttpretty.register_json_uri('POST', upload_url, status=201,
body=root_provider_fixtures['checksum_mismatch_metadata'])
with pytest.raises(exceptions.UploadChecksumMismatchError):
await provider.upload(file_stream, path)
assert aiohttpretty.has_call(method='POST', uri=upload_url)
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_upload_limit_contiguous_upload(self, provider, file_stream):
assert file_stream.size == 38
provider.NONCHUNKED_UPLOAD_LIMIT = 40
provider.metadata = MockCoroutine()
provider._contiguous_upload = MockCoroutine(return_value={'id': '345'})
provider._chunked_upload = MockCoroutine()
path = WaterButlerPath('/foobah/', _ids=('0', '1'))
await provider.upload(file_stream, path)
provider._contiguous_upload.assert_called_with(path, file_stream)
assert not provider._chunked_upload.called
provider.NONCHUNKED_UPLOAD_LIMIT = NONCHUNKED_UPLOAD_LIMIT
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_upload_limit_chunked_upload(self, provider, file_stream):
assert file_stream.size == 38
provider.NONCHUNKED_UPLOAD_LIMIT = 15
provider.metadata = MockCoroutine()
provider._contiguous_upload = MockCoroutine()
provider._chunked_upload = MockCoroutine(return_value={'id': '345'})
path = WaterButlerPath('/foobah/', _ids=('0', '1'))
await provider.upload(file_stream, path)
provider._chunked_upload.assert_called_with(path, file_stream)
assert not provider._contiguous_upload.called
provider.NONCHUNKED_UPLOAD_LIMIT = NONCHUNKED_UPLOAD_LIMIT
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_chunked_upload(self, provider, file_stream, file_sha_b64,
root_provider_fixtures):
assert file_stream.size == 38
session_metadata = root_provider_fixtures['create_session_metadata']
parts_manifest = [
root_provider_fixtures['upload_part_one'],
root_provider_fixtures['upload_part_two'],
]
file_metadata = root_provider_fixtures['upload_commit_metadata']
provider._create_chunked_upload_session = MockCoroutine(return_value=session_metadata)
provider._upload_parts = MockCoroutine(return_value=parts_manifest)
provider._complete_chunked_upload_session = MockCoroutine(return_value=file_metadata)
path = WaterButlerPath('/foobah/', _ids=('0', '1'))
await provider._chunked_upload(path, file_stream)
provider._create_chunked_upload_session.assert_called_with(path, file_stream)
provider._upload_parts.assert_called_with(file_stream, session_metadata)
provider._complete_chunked_upload_session.assert_called_with(session_metadata,
parts_manifest,
file_sha_b64)
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_create_upload_session_new_file(self, provider, file_stream):
"""Check that the chunked upload session creation is sending the proper data payload to
the appropriate URL when creating a new file in the root of the project.
"""
path = WaterButlerPath('/newfile', _ids=(provider.folder, None))
session_url = provider._build_upload_url('files', 'upload_sessions')
aiohttpretty.register_json_uri(
'POST',
session_url,
status=201,
body={'dummy': 'data'},
)
await provider._create_chunked_upload_session(path, file_stream)
assert aiohttpretty.has_call(
method='POST',
uri=session_url,
data=json.dumps({
'folder_id': provider.folder,
'file_name': 'newfile',
'file_size': 38,
}, sort_keys=True),
)
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_create_upload_session_new_file_nonroot(self, provider, file_stream):
"""Check that the chunked upload session creation is sending the proper data payload to
the appropriate URL when creating a new file in a subdirectory.
"""
subfolder_id = '444444444'
path = WaterButlerPath('/subdir/newfile', _ids=(provider.folder, subfolder_id, None))
session_url = provider._build_upload_url('files', 'upload_sessions')
aiohttpretty.register_json_uri(
'POST',
session_url,
status=201,
body={'dummy': 'data'},
)
await provider._create_chunked_upload_session(path, file_stream)
assert aiohttpretty.has_call(
method='POST',
uri=session_url,
data=json.dumps({
'folder_id': subfolder_id,
'file_name': 'newfile',
'file_size': 38,
}, sort_keys=True),
)
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_create_upload_session_existing_file(self, provider, file_stream):
"""Check that the chunked upload session creation is sending the proper data payload to
the appropriate URL when updating an existing file in the root of the project.
"""
path = WaterButlerPath('/newfile', _ids=(provider.folder, '2345'))
session_url = 'https://upload.box.com/api/2.0/files/2345/upload_sessions'
aiohttpretty.register_json_uri(
'POST',
session_url,
status=201,
body={'dummy': 'data'},
)
await provider._create_chunked_upload_session(path, file_stream)
assert aiohttpretty.has_call(
method='POST',
uri=session_url,
data=json.dumps({
'file_name': 'newfile',
'file_size': 38,
}, sort_keys=True),
)
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_create_upload_session_existing_file_nonroot(self, provider, file_stream):
"""Check that the chunked upload session creation is sending the proper data payload to
the appropriate URL when updating an existing file in a subdirectory.
"""
path = WaterButlerPath('/subdir/newfile', _ids=(provider.folder, '44444444', '2345'))
session_url = 'https://upload.box.com/api/2.0/files/2345/upload_sessions'
aiohttpretty.register_json_uri(
'POST',
session_url,
status=201,
body={'dummy': 'data'},
)
await provider._create_chunked_upload_session(path, file_stream)
assert aiohttpretty.has_call(
method='POST',
uri=session_url,
data=json.dumps({
'file_name': 'newfile',
'file_size': 38,
}, sort_keys=True),
)
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_upload_parts(self, provider, root_provider_fixtures):
responses = [
{
'body': json.dumps(root_provider_fixtures['upload_part_one']),
'status': 201,
'headers': {'Content-Type': 'application/json'},
},
{
'body': json.dumps(root_provider_fixtures['upload_part_two']),
'status': 201,
'headers': {'Content-Type': 'application/json'},
}
]
session_url = 'https://upload.box.com/api/2.0/files/upload_sessions/fake_session_id'
aiohttpretty.register_json_uri(
'PUT',
session_url,
status=HTTPStatus.CREATED,
responses=responses
)
session_metadata = root_provider_fixtures['create_session_metadata']
stream = streams.StringStream('tenbytestr'.encode() * 2)
parts_metadata = await provider._upload_parts(stream, session_metadata)
expected_response = [
{
'offset': 10,
'part_id': '37B0FB1B',
'sha1': '3ff00d99585b8da363f9f9955e791ed763e111c1',
'size': 10
},
{
'offset': 20,
'part_id': '1872DEDA',
'sha1': '0ae5fc290c5c5414cdda245ab712a8440376284a',
'size': 10
}
]
assert parts_metadata == expected_response
assert len(aiohttpretty.calls) == 2
for call in aiohttpretty.calls:
assert call['method'] == 'PUT'
assert call['uri'] == session_url
call_one = aiohttpretty.calls[0]
assert call_one['headers'] == {
'Authorization': 'Bearer wrote harry potter',
'Content-Length': '10',
'Content-Range': 'bytes 0-9/20',
'Content-Type:': 'application/octet-stream',
'Digest': 'sha={}'.format('pz4mZbOEOesBeUhR1THUF1Oq1bI=')
}
call_two = aiohttpretty.calls[1]
assert call_two['headers'] == {
'Authorization': 'Bearer wrote harry potter',
'Content-Length': '10',
'Content-Range': 'bytes 10-19/20',
'Content-Type:': 'application/octet-stream',
'Digest': 'sha={}'.format('pz4mZbOEOesBeUhR1THUF1Oq1bI=')
}
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_complete_chunked_upload_session(self, provider, root_provider_fixtures):
commit_url = 'https://upload.box.com/api/2.0/files/upload_sessions/fake_session_id/commit'
aiohttpretty.register_json_uri(
'POST',
commit_url,
status=201,
body=root_provider_fixtures['upload_commit_metadata']
)
session_metadata = root_provider_fixtures['create_session_metadata']
entry = await provider._complete_chunked_upload_session(
session_metadata,
root_provider_fixtures['formated_parts'],
'fake_sha'
)
assert root_provider_fixtures['upload_commit_metadata']['entries'][0] == entry
assert aiohttpretty.has_call(method='POST', uri=commit_url)
class TestDelete:
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_delete_file(self, provider, root_provider_fixtures):
item = root_provider_fixtures['file_metadata']['entries'][0]
path = WaterButlerPath('/{}'.format(item['name']), _ids=(provider.folder, item['id']))
url = provider.build_url('files', path.identifier)
aiohttpretty.register_uri('DELETE', url, status=204)
await provider.delete(path)
assert aiohttpretty.has_call(method='DELETE', uri=url)
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_delete_folder(self, provider, root_provider_fixtures):
item = root_provider_fixtures['folder_object_metadata']
path = WaterButlerPath('/{}/'.format(item['name']), _ids=(provider.folder, item['id']))
url = provider.build_url('folders', path.identifier, recursive=True)
aiohttpretty.register_uri('DELETE', url, status=204)
await provider.delete(path)
assert aiohttpretty.has_call(method='DELETE', uri=url)
@pytest.mark.asyncio
async def test_must_not_be_none(self, provider):
path = WaterButlerPath('/Goats', _ids=(provider.folder, None))
with pytest.raises(exceptions.NotFoundError) as e:
await provider.delete(path)
assert e.value.code == 404
assert str(path) in e.value.message
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_delete_root_no_confirm(self, provider, root_provider_fixtures):
path = WaterButlerPath('/', _ids=('0'))
with pytest.raises(exceptions.DeleteError) as e:
await provider.delete(path)
assert e.value.message == 'confirm_delete=1 is required for deleting root provider folder'
assert e.value.code == 400
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_delete_root(self, provider, root_provider_fixtures):
item = root_provider_fixtures['file_metadata']['entries'][0]
path = WaterButlerPath('/newfile', _ids=(provider.folder, item['id']))
root_path = WaterButlerPath('/', _ids=('0'))
url = provider.build_url('folders', root_path.identifier, 'items',
fields='id,name,size,modified_at,etag,total_count',
offset=(0), limit=1000)
aiohttpretty.register_json_uri('GET', url,
body=root_provider_fixtures['one_entry_folder_list_metadata'])
url = provider.build_url('files', item['id'], fields='id,name,path_collection')
delete_url = provider.build_url('files', path.identifier)
aiohttpretty.register_json_uri('GET', url,
body=root_provider_fixtures['file_metadata']['entries'][0])
aiohttpretty.register_json_uri('DELETE', delete_url, status=204)
await provider.delete(root_path, 1)
assert aiohttpretty.has_call(method='DELETE', uri=delete_url)
class TestMetadata:
@pytest.mark.asyncio
async def test_must_not_be_none(self, provider):
path = WaterButlerPath('/Goats', _ids=(provider.folder, None))
with pytest.raises(exceptions.NotFoundError) as e:
await provider.metadata(path)
assert e.value.code == 404
assert str(path) in e.value.message
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_revision_metadata(self, provider, root_provider_fixtures, revision_fixtures):
list_metadata = revision_fixtures['revisions_list_metadata']
item = list_metadata['entries'][0]
path = WaterButlerPath('/goats', _ids=(provider.folder, item['id']))
url = provider.build_url('files', path.identifier, 'versions')
aiohttpretty.register_json_uri('GET', url, body=list_metadata)
result = await provider.metadata(path, revision=item['id'])
expected = BoxFileMetadata(item, path)
assert result == expected
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_revision_metadata_error(self, provider, root_provider_fixtures,
revision_fixtures):
list_metadata = revision_fixtures['revisions_list_metadata']
item = list_metadata['entries'][0]
path = WaterButlerPath('/goats', _ids=(provider.folder, item['id']))
url = provider.build_url('files', path.identifier, 'versions')
aiohttpretty.register_json_uri('GET', url, body=list_metadata)
with pytest.raises(exceptions.NotFoundError) as e:
await provider.metadata(path, revision='this is a bad revision id')
assert e.value.code == 404
assert str(path) in e.value.message
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_metadata_bad_response(self, provider, root_provider_fixtures):
item = root_provider_fixtures['file_metadata']['entries'][0]
path = WaterButlerPath('/goats', _ids=(provider.folder, item['id']))
url = provider.build_url('files', path.identifier)
aiohttpretty.register_json_uri('GET', url, body=None)
with pytest.raises(exceptions.NotFoundError) as e:
await provider.metadata(path)
assert e.value.code == 404
assert str(path) in e.value.message
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_folder_metadata(self, provider, root_provider_fixtures):
item = root_provider_fixtures['folder_object_metadata']
path = WaterButlerPath('/goats/', _ids=(provider.folder, item['id']))
url = provider.build_url('folders', path.identifier)
aiohttpretty.register_json_uri('GET', url, body=item)
result = await provider.metadata(path, raw=True, folder=True)
assert result == item
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_metadata(self, provider, root_provider_fixtures):
path = WaterButlerPath('/', _ids=(provider.folder, ))
list_url = provider.build_url('folders', provider.folder, 'items',
fields='id,name,size,modified_at,etag,total_count',
offset=0, limit=1000)
list_metadata = root_provider_fixtures['folder_list_metadata']
aiohttpretty.register_json_uri('GET', list_url, body=list_metadata)
result = await provider.metadata(path)
expected = []
for x in list_metadata['entries']:
if x['type'] == 'file':
expected.append(BoxFileMetadata(x, path.child(x['name'])))
else:
expected.append(BoxFolderMetadata(x, path.child(x['name'], folder=True)))
assert result == expected
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_metadata_raw(self, provider, root_provider_fixtures):
item = root_provider_fixtures['folder_list_metadata']
path = WaterButlerPath('/', _ids=(provider.folder, ))
list_url = provider.build_url('folders', provider.folder, 'items',
fields='id,name,size,modified_at,etag,total_count',
offset=0, limit=1000)
aiohttpretty.register_json_uri('GET', list_url, body=item)
result = await provider.metadata(path, raw=True)
assert result == item
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_metadata_nested(self, provider, root_provider_fixtures):
item = root_provider_fixtures['file_metadata']['entries'][0]
path = WaterButlerPath('/name.txt', _ids=(provider, item['id']))
file_url = provider.build_url('files', path.identifier)
aiohttpretty.register_json_uri('GET', file_url, body=item)
result = await provider.metadata(path)
expected = BoxFileMetadata(item, path)
assert result == expected
assert aiohttpretty.has_call(method='GET', uri=file_url)
assert result.extra == {
'etag': '3',
'hashes': {
'sha1': '134b65991ed521fcfe4724b7d814ab8ded5185dc',
},
}
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_metadata_missing(self, provider):
path = WaterButlerPath('/Something', _ids=(provider.folder, None))
with pytest.raises(exceptions.NotFoundError) as exc:
await provider.metadata(path)
assert exc.value.code == HTTPStatus.NOT_FOUND
class TestRevisions:
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_get_revisions(self, provider, root_provider_fixtures, revision_fixtures):
item = root_provider_fixtures['file_metadata']['entries'][0]
revisions_list = revision_fixtures['revisions_list_metadata']
path = WaterButlerPath('/name.txt', _ids=(provider, item['id']))
file_url = provider.build_url('files', path.identifier)
revisions_url = provider.build_url('files', path.identifier, 'versions')
aiohttpretty.register_json_uri('GET', file_url, body=item)
aiohttpretty.register_json_uri('GET', revisions_url, body=revisions_list)
result = await provider.revisions(path)
expected = [
BoxRevision(each)
for each in [item] + revisions_list['entries']
]
assert result == expected
assert aiohttpretty.has_call(method='GET', uri=file_url)
assert aiohttpretty.has_call(method='GET', uri=revisions_url)
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_get_revisions_free_account(self, provider, root_provider_fixtures):
item = root_provider_fixtures['file_metadata']['entries'][0]
path = WaterButlerPath('/name.txt', _ids=(provider, item['id']))
file_url = provider.build_url('files', path.identifier)
revisions_url = provider.build_url('files', path.identifier, 'versions')
aiohttpretty.register_json_uri('GET', file_url, body=item)
aiohttpretty.register_json_uri('GET', revisions_url, body={}, status=403)
result = await provider.revisions(path)
expected = [BoxRevision(item)]
assert result == expected
assert aiohttpretty.has_call(method='GET', uri=file_url)
assert aiohttpretty.has_call(method='GET', uri=revisions_url)
class TestIntraCopy:
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_intra_copy_file(self, provider, root_provider_fixtures):
item = root_provider_fixtures['file_metadata']['entries'][0]
src_path = WaterButlerPath('/name.txt', _ids=(provider, item['id']))
dest_path = WaterButlerPath('/charmander/name.txt', _ids=(provider, item['id']))
file_url = provider.build_url('files', src_path.identifier, 'copy')
aiohttpretty.register_json_uri('POST', file_url, body=item)
result = await provider.intra_copy(provider, src_path, dest_path)
expected = (BoxFileMetadata(item, dest_path), True)
assert result == expected
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_intra_copy_file_replace(self, provider, root_provider_fixtures):
item = root_provider_fixtures['file_metadata']['entries'][0]
src_path = WaterButlerPath('/name.txt', _ids=(provider, item['id']))
dest_path = WaterButlerPath('/charmander/name.txt', _ids=(provider, item['id'], item['id']))
file_url = provider.build_url('files', src_path.identifier, 'copy')
delete_url = provider.build_url('files', dest_path.identifier)
aiohttpretty.register_uri('DELETE', delete_url, status=204)
aiohttpretty.register_json_uri('POST', file_url, body=item)
result = await provider.intra_copy(provider, src_path, dest_path)
expected = (BoxFileMetadata(item, dest_path), False)
assert result == expected
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_intra_copy_folder(self, provider, intra_fixtures, root_provider_fixtures):
item = intra_fixtures['intra_folder_metadata']
list_metadata = root_provider_fixtures['folder_list_metadata']
src_path = WaterButlerPath('/name/', _ids=(provider, item['id']))
dest_path = WaterButlerPath('/charmander/name/', _ids=(provider, item['id']))
file_url = provider.build_url('folders', src_path.identifier, 'copy')
list_url = provider.build_url('folders', item['id'], 'items',
fields='id,name,size,modified_at,etag,total_count',
offset=0, limit=1000)
aiohttpretty.register_json_uri('GET', list_url, body=list_metadata)
aiohttpretty.register_json_uri('POST', file_url, body=item)
expected_folder = BoxFolderMetadata(item, dest_path)
expected_folder._children = []
for child_item in list_metadata['entries']:
child_path = dest_path.child(child_item['name'], folder=(child_item['type'] == 'folder'))
serialized_child = provider._serialize_item(child_item, child_path)
expected_folder._children.append(serialized_child)
expected = (expected_folder, True)
result = await provider.intra_copy(provider, src_path, dest_path)
assert result == expected
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_intra_copy_folder_replace(self, provider, intra_fixtures, root_provider_fixtures):
item = intra_fixtures['intra_folder_metadata']
list_metadata = root_provider_fixtures['folder_list_metadata']
src_path = WaterButlerPath('/name/', _ids=(provider, item['id']))
dest_path = WaterButlerPath('/charmander/name/', _ids=(provider, item['id'], item['id']))
file_url = provider.build_url('folders', src_path.identifier, 'copy')
delete_url = provider.build_url('folders', dest_path.identifier, recursive=True)
list_url = provider.build_url('folders', item['id'], 'items',
fields='id,name,size,modified_at,etag,total_count',
offset=0, limit=1000)
aiohttpretty.register_json_uri('GET', list_url, body=list_metadata)
aiohttpretty.register_uri('DELETE', delete_url, status=204)
aiohttpretty.register_json_uri('POST', file_url, body=item)
expected_folder = BoxFolderMetadata(item, dest_path)
expected_folder._children = []
for child_item in list_metadata['entries']:
child_path = dest_path.child(child_item['name'], folder=(child_item['type'] == 'folder'))
serialized_child = provider._serialize_item(child_item, child_path)
expected_folder._children.append(serialized_child)
expected = (expected_folder, False)
result = await provider.intra_copy(provider, src_path, dest_path)
assert result == expected
assert aiohttpretty.has_call(method='DELETE', uri=delete_url)
class TestIntraMove:
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_intra_move_file(self, provider, root_provider_fixtures):
item = root_provider_fixtures['file_metadata']['entries'][0]
src_path = WaterButlerPath('/name.txt', _ids=(provider, item['id']))
dest_path = WaterButlerPath('/charmander/name.txt', _ids=(provider, item['id']))
file_url = provider.build_url('files', src_path.identifier)
aiohttpretty.register_json_uri('PUT', file_url, body=item)
result = await provider.intra_move(provider, src_path, dest_path)
expected = (BoxFileMetadata(item, dest_path), True)
assert result == expected
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_intra_move_file_replace(self, provider, root_provider_fixtures):
item = root_provider_fixtures['file_metadata']['entries'][0]
src_path = WaterButlerPath('/name.txt', _ids=(provider, item['id']))
dest_path = WaterButlerPath('/charmander/name.txt', _ids=(provider, item['id'], item['id']))
file_url = provider.build_url('files', src_path.identifier)
delete_url = provider.build_url('files', dest_path.identifier)
aiohttpretty.register_uri('DELETE', delete_url, status=204)
aiohttpretty.register_json_uri('PUT', file_url, body=item)
result = await provider.intra_move(provider, src_path, dest_path)
expected = (BoxFileMetadata(item, dest_path), False)
assert result == expected
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_intra_move_folder(self, provider, intra_fixtures, root_provider_fixtures):
item = intra_fixtures['intra_folder_metadata']
list_metadata = root_provider_fixtures['folder_list_metadata']
src_path = WaterButlerPath('/name/', _ids=(provider, item['id']))
dest_path = WaterButlerPath('/charmander/name/', _ids=(provider, item['id']))
file_url = provider.build_url('folders', src_path.identifier)
list_url = provider.build_url('folders', item['id'], 'items',
fields='id,name,size,modified_at,etag,total_count',
offset=0, limit=1000)
aiohttpretty.register_json_uri('PUT', file_url, body=item)
aiohttpretty.register_json_uri('GET', list_url, body=list_metadata)
expected_folder = BoxFolderMetadata(item, dest_path)
expected_folder._children = []
for child_item in list_metadata['entries']:
child_path = dest_path.child(child_item['name'], folder=(child_item['type'] == 'folder'))
serialized_child = provider._serialize_item(child_item, child_path)
expected_folder._children.append(serialized_child)
expected = (expected_folder, True)
result = await provider.intra_move(provider, src_path, dest_path)
assert result == expected
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_intra_move_folder_replace(self, provider, intra_fixtures, root_provider_fixtures):
item = intra_fixtures['intra_folder_metadata']
list_metadata = root_provider_fixtures['folder_list_metadata']
src_path = WaterButlerPath('/name/', _ids=(provider, item['id']))
dest_path = WaterButlerPath('/charmander/name/', _ids=(provider, item['id'], item['id']))
file_url = provider.build_url('folders', src_path.identifier)
delete_url = provider.build_url('folders', dest_path.identifier, recursive=True)
list_url = provider.build_url('folders', item['id'], 'items',
fields='id,name,size,modified_at,etag,total_count',
offset=0, limit=1000)
aiohttpretty.register_json_uri('PUT', file_url, body=item)
aiohttpretty.register_uri('DELETE', delete_url, status=204)
aiohttpretty.register_json_uri('GET', list_url, body=list_metadata)
expected_folder = BoxFolderMetadata(item, dest_path)
expected_folder._children = []
for child_item in list_metadata['entries']:
child_path = dest_path.child(child_item['name'], folder=(child_item['type'] == 'folder'))
serialized_child = provider._serialize_item(child_item, child_path)
expected_folder._children.append(serialized_child)
expected = (expected_folder, False)
result = await provider.intra_move(provider, src_path, dest_path)
assert result == expected
assert aiohttpretty.has_call(method='DELETE', uri=delete_url)
class TestCreateFolder:
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_must_be_folder(self, provider):
path = WaterButlerPath('/Just a poor file from a poor folder', _ids=(provider.folder, None))
with pytest.raises(exceptions.CreateFolderError) as e:
await provider.create_folder(path)
assert e.value.code == 400
assert e.value.message == 'Path must be a directory'
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_id_must_be_none(self, provider):
path = WaterButlerPath('/Just a poor file from a poor folder/',
_ids=(provider.folder, 'someid'))
assert path.identifier is not None
with pytest.raises(exceptions.FolderNamingConflict) as e:
await provider.create_folder(path)
assert e.value.code == 409
assert e.value.message == ('Cannot create folder "Just a poor file from a poor folder", '
'because a file or folder already exists with that name')
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_already_exists(self, provider):
url = provider.build_url('folders')
data_url = provider.build_url('folders', provider.folder)
path = WaterButlerPath('/50 shades of nope/', _ids=(provider.folder, None))
aiohttpretty.register_json_uri('POST', url, status=409)
aiohttpretty.register_json_uri('GET', data_url, body={
'id': provider.folder,
'type': 'folder',
'name': 'All Files',
'path_collection': {
'entries': []
}
})
with pytest.raises(exceptions.FolderNamingConflict) as e:
await provider.create_folder(path)
assert e.value.code == 409
assert e.value.message == ('Cannot create folder "50 shades of nope", because a file or '
'folder already exists with that name')
@pytest.mark.asyncio
@pytest.mark.aiohttpretty
async def test_returns_metadata(self, provider, root_provider_fixtures):
url = provider.build_url('folders')
folder_metadata = root_provider_fixtures['folder_object_metadata']
folder_metadata['name'] = '50 shades of nope'
path = WaterButlerPath('/50 shades of nope/', _ids=(provider.folder, None))
aiohttpretty.register_json_uri('POST', url, status=201, body=folder_metadata)
resp = await provider.create_folder(path)
assert resp.kind == 'folder'
assert resp.name == '50 shades of nope'
assert resp.path == '/{}/'.format(folder_metadata['id'])
assert isinstance(resp, BoxFolderMetadata)
assert path.identifier_path == '/' + folder_metadata['id'] + '/'
class TestOperations:
def test_can_duplicate_names(self, provider):
assert provider.can_duplicate_names() is False
def test_shares_storage_root(self, provider, other_provider):
assert provider.shares_storage_root(other_provider) is False
assert provider.shares_storage_root(provider) is True
def test_can_intra_move(self, provider, other_provider):
assert provider.can_intra_move(other_provider) is False
assert provider.can_intra_move(provider) is True
def test_can_intra_copy(self, provider, other_provider):
assert provider.can_intra_copy(other_provider) is False
assert provider.can_intra_copy(provider) is True
| 39.965035 | 103 | 0.654418 | 5,138 | 45,720 | 5.576684 | 0.065979 | 0.035598 | 0.057237 | 0.040135 | 0.857397 | 0.823893 | 0.787701 | 0.754057 | 0.71975 | 0.699124 | 0 | 0.013252 | 0.235849 | 45,720 | 1,143 | 104 | 40 | 0.806881 | 0.002078 | 0 | 0.604734 | 0 | 0.001183 | 0.113009 | 0.025118 | 0 | 0 | 0 | 0 | 0.131361 | 1 | 0.016568 | false | 0 | 0.015385 | 0.011834 | 0.055621 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a110175fff1a446182484fb34dcfb1151caec9e8 | 6,511 | py | Python | tests/test_transferFrom.py | pacdao/gov-token | 7ce7b8916c70e145f311e4190507b245dbeb2c12 | ["MIT"] | null | null | null | tests/test_transferFrom.py | pacdao/gov-token | 7ce7b8916c70e145f311e4190507b245dbeb2c12 | ["MIT"] | null | null | null | tests/test_transferFrom.py | pacdao/gov-token | 7ce7b8916c70e145f311e4190507b245dbeb2c12 | ["MIT"] | null | null | null |
#!/usr/bin/python3
import brownie
import pytest
@pytest.mark.skip(reason="no transfer")
def test_sender_balance_decreases(accounts, token):
sender_balance = token.balanceOf(accounts[0])
amount = sender_balance // 4
token.approve(accounts[1], amount, {"from": accounts[0]})
token.transferFrom(accounts[0], accounts[2], amount, {"from": accounts[1]})
assert token.balanceOf(accounts[0]) == sender_balance - amount
@pytest.mark.skip(reason="no transfer")
def test_receiver_balance_increases(accounts, token):
receiver_balance = token.balanceOf(accounts[2])
amount = token.balanceOf(accounts[0]) // 4
token.approve(accounts[1], amount, {"from": accounts[0]})
token.transferFrom(accounts[0], accounts[2], amount, {"from": accounts[1]})
assert token.balanceOf(accounts[2]) == receiver_balance + amount
@pytest.mark.skip(reason="no transfer")
def test_caller_balance_not_affected(accounts, token):
caller_balance = token.balanceOf(accounts[1])
amount = token.balanceOf(accounts[0])
token.approve(accounts[1], amount, {"from": accounts[0]})
token.transferFrom(accounts[0], accounts[2], amount, {"from": accounts[1]})
assert token.balanceOf(accounts[1]) == caller_balance
@pytest.mark.skip(reason="no transfer")
def test_caller_approval_affected(accounts, token):
approval_amount = token.balanceOf(accounts[0])
transfer_amount = approval_amount // 4
token.approve(accounts[1], approval_amount, {"from": accounts[0]})
token.transferFrom(accounts[0], accounts[2], transfer_amount, {"from": accounts[1]})
assert (
token.allowance(accounts[0], accounts[1]) == approval_amount - transfer_amount
)
@pytest.mark.skip(reason="no transfer")
def test_receiver_approval_not_affected(accounts, token):
approval_amount = token.balanceOf(accounts[0])
transfer_amount = approval_amount // 4
token.approve(accounts[1], approval_amount, {"from": accounts[0]})
token.approve(accounts[2], approval_amount, {"from": accounts[0]})
token.transferFrom(accounts[0], accounts[2], transfer_amount, {"from": accounts[1]})
assert token.allowance(accounts[0], accounts[2]) == approval_amount
@pytest.mark.skip(reason="no transfer")
def test_total_supply_not_affected(accounts, token):
total_supply = token.totalSupply()
amount = token.balanceOf(accounts[0])
token.approve(accounts[1], amount, {"from": accounts[0]})
token.transferFrom(accounts[0], accounts[2], amount, {"from": accounts[1]})
assert token.totalSupply() == total_supply
@pytest.mark.skip(reason="no transfer")
def test_returns_true(accounts, token):
amount = token.balanceOf(accounts[0])
token.approve(accounts[1], amount, {"from": accounts[0]})
tx = token.transferFrom(accounts[0], accounts[2], amount, {"from": accounts[1]})
assert tx.return_value is True
@pytest.mark.skip(reason="no transfer")
def test_transfer_full_balance(accounts, token):
amount = token.balanceOf(accounts[0])
receiver_balance = token.balanceOf(accounts[2])
token.approve(accounts[1], amount, {"from": accounts[0]})
token.transferFrom(accounts[0], accounts[2], amount, {"from": accounts[1]})
assert token.balanceOf(accounts[0]) == 0
assert token.balanceOf(accounts[2]) == receiver_balance + amount
@pytest.mark.skip(reason="no transfer")
def test_transfer_zero_tokens(accounts, token):
sender_balance = token.balanceOf(accounts[0])
receiver_balance = token.balanceOf(accounts[2])
token.approve(accounts[1], sender_balance, {"from": accounts[0]})
token.transferFrom(accounts[0], accounts[2], 0, {"from": accounts[1]})
assert token.balanceOf(accounts[0]) == sender_balance
assert token.balanceOf(accounts[2]) == receiver_balance
@pytest.mark.skip(reason="no transfer")
def test_transfer_zero_tokens_without_approval(accounts, token):
sender_balance = token.balanceOf(accounts[0])
receiver_balance = token.balanceOf(accounts[2])
token.transferFrom(accounts[0], accounts[2], 0, {"from": accounts[1]})
assert token.balanceOf(accounts[0]) == sender_balance
assert token.balanceOf(accounts[2]) == receiver_balance
@pytest.mark.skip(reason="no transfer")
def test_insufficient_balance(accounts, token):
balance = token.balanceOf(accounts[0])
token.approve(accounts[1], balance + 1, {"from": accounts[0]})
with brownie.reverts():
token.transferFrom(accounts[0], accounts[2], balance + 1, {"from": accounts[1]})
def test_insufficient_approval(accounts, token, owner):
token.mint(accounts[0], 10 ** 18, {"from": owner})
balance = token.balanceOf(accounts[0])
token.approve(accounts[1], balance - 1, {"from": accounts[0]})
with brownie.reverts():
token.transferFrom(accounts[0], accounts[2], balance, {"from": accounts[1]})
def test_no_approval(accounts, token):
balance = token.balanceOf(accounts[0])
with brownie.reverts():
token.transferFrom(accounts[0], accounts[2], balance, {"from": accounts[1]})
def test_revoked_approval(accounts, token):
balance = token.balanceOf(accounts[0])
token.approve(accounts[1], balance, {"from": accounts[0]})
token.approve(accounts[1], 0, {"from": accounts[0]})
with brownie.reverts():
token.transferFrom(accounts[0], accounts[2], balance, {"from": accounts[1]})
@pytest.mark.skip(reason="no transfer")
def test_transfer_to_self(accounts, token):
sender_balance = token.balanceOf(accounts[0])
amount = sender_balance // 4
token.approve(accounts[0], sender_balance, {"from": accounts[0]})
token.transferFrom(accounts[0], accounts[0], amount, {"from": accounts[0]})
assert token.balanceOf(accounts[0]) == sender_balance
assert token.allowance(accounts[0], accounts[0]) == sender_balance - amount
@pytest.mark.skip(reason="no transfer")
def test_transfer_to_self_no_approval(accounts, token):
token.mint(accounts[0], 10 ** 18, {"from": accounts[0]})
amount = token.balanceOf(accounts[0])
with brownie.reverts():
token.transferFrom(accounts[0], accounts[0], amount, {"from": accounts[0]})
@pytest.mark.skip(reason="no transfer")
def test_transfer_event_fires(accounts, token):
amount = token.balanceOf(accounts[0])
token.approve(accounts[1], amount, {"from": accounts[0]})
tx = token.transferFrom(accounts[0], accounts[2], amount, {"from": accounts[1]})
assert len(tx.events) == 1
assert tx.events["Transfer"].values() == [accounts[0], accounts[2], amount]
| 35.194595 | 88 | 0.706036 | 827 | 6,511 | 5.446191 | 0.076179 | 0.135879 | 0.156306 | 0.112345 | 0.886545 | 0.868783 | 0.845471 | 0.829707 | 0.805284 | 0.751554 | 0 | 0.026255 | 0.134234 | 6,511 | 184 | 89 | 35.38587 | 0.772751 | 0.002611 | 0 | 0.630252 | 0 | 0 | 0.046512 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.142857 | false | 0 | 0.016807 | 0 | 0.159664 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a13cde07156344fefbd207230420683d0650de37 | 889 | py | Python | code_salad/rattle/rattle/widgets.py | Fledermann/code_salad | 150f0ba8749368519beedd5aba7f8329ae4df042 | ["MIT"] | 6 | 2019-02-01T01:05:00.000Z | 2019-02-02T07:47:51.000Z | code_salad/rattle/rattle/widgets.py | Fledermann/code_salad | 150f0ba8749368519beedd5aba7f8329ae4df042 | ["MIT"] | null | null | null | code_salad/rattle/rattle/widgets.py | Fledermann/code_salad | 150f0ba8749368519beedd5aba7f8329ae4df042 | ["MIT"] | null | null | null |
class Input:
def __init__(self, id_, callback):
self.id_ = id_
self.callback = callback
self.type = 'input'
self.code = f'<input type="text" id="{self.id_}" class="widget">'
    def __setattr__(self, key, value):
        # Forward user-facing attribute updates to the page callback, then store the value.
        if key not in ('id_', 'callback', 'type', 'code'):
            self.callback(self.id_, key, value)
        self.__dict__[key] = value
class Label:
def __init__(self, id_, callback):
self.id_ = id_
self.callback = callback
self.type = 'label'
self.__dict__['text'] = ''
self.code = f'<p id="{self.id_}" class="widget"></p>'
    def __setattr__(self, key, value):
        # Store the new value first so the regenerated markup reflects it (e.g. an updated
        # `text`), then rebuild the rendered code and notify the page callback.
        self.__dict__[key] = value
        if key not in ('id_', 'callback', 'type', 'code'):
            self.code = f'<p id="{self.id_}" class="widget">{self.text}</p>'
            self.callback(self.id_, key, value)
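
# A minimal usage sketch (the `render` callback is hypothetical, not part of rattle): any
# assignment to a user-facing widget attribute is forwarded to the callback as
# (widget id, attribute name, new value), which the caller can act on.
if __name__ == '__main__':
    def render(id_, key, value):
        print(f'update #{id_}: {key} = {value!r}')

    label = Label('status', render)
    label.text = 'ready'   # prints: update #status: text = 'ready'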
| 29.633333 | 76 | 0.551181 | 114 | 889 | 3.938596 | 0.184211 | 0.120267 | 0.124722 | 0.08686 | 0.835189 | 0.792873 | 0.792873 | 0.792873 | 0.792873 | 0.672606 | 0 | 0 | 0.278965 | 889 | 29 | 77 | 30.655172 | 0.700468 | 0 | 0 | 0.636364 | 0 | 0 | 0.212838 | 0.033784 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a15648cf056c6e478aa64e5a9530deb77a45af0e | 8,138 | py | Python | tests/test_confidence_intervals.py | EdgarTeixeira/eul | 8ce3e567d43b41cae0217c0cb3f953ee9e9d8565 | ["MIT"] | null | null | null | tests/test_confidence_intervals.py | EdgarTeixeira/eul | 8ce3e567d43b41cae0217c0cb3f953ee9e9d8565 | ["MIT"] | null | null | null | tests/test_confidence_intervals.py | EdgarTeixeira/eul | 8ce3e567d43b41cae0217c0cb3f953ee9e9d8565 | ["MIT"] | null | null | null |
import dsul.confidence_intervals as ci
import numpy as np
from scipy import stats
class TestAnalyticalMethods:
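    # Each test below simulates 1000 samples, builds an 80% interval for each (alpha=0.2) and
    # counts how often the true parameter is covered.  With a uniform prior, the posterior for
    # the true coverage probability is Beta(successes + 1, failures + 1); the assertion then
    # checks that the nominal 0.8 lies inside the central 90% credible interval, which for
    # 1000 trials means the observed coverage rate has to land roughly between 0.78 and 0.82.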
def test_mean_ci_with_known_stddev(self):
population_dist = stats.norm(0, 1)
successes = 0
trials = 1000
for i in range(trials):
sample = population_dist.rvs(30)
results = ci.mean_ci(sample, alpha=0.2, std_dev=1)
successes += int(results.contains(0.0))
coverage = successes / trials
posterior = stats.beta(successes + 1, trials - successes + 1)
        is_credible = posterior.ppf(0.05) <= 0.8 <= posterior.ppf(0.95)
assert is_credible, self.error_msg(coverage, 0.8)
def test_mean_ci_with_unknown_stddev(self):
population_dist = stats.norm(0, 1)
successes = 0
trials = 1000
for i in range(trials):
sample = population_dist.rvs(30)
results = ci.mean_ci(sample, alpha=0.2)
successes += int(results.contains(0.0))
coverage = successes / trials
posterior = stats.beta(successes + 1, trials - successes + 1)
        is_credible = posterior.ppf(0.05) <= 0.8 <= posterior.ppf(0.95)
assert is_credible, self.error_msg(coverage, 0.8)
def test_variance_ci(self):
population_dist = stats.norm(0, 1)
successes = 0
trials = 1000
for i in range(trials):
sample = population_dist.rvs(30)
results = ci.variance_ci(sample, alpha=0.2)
successes += int(results.contains(1.0))
coverage = successes / trials
posterior = stats.beta(successes + 1, trials - successes + 1)
        is_credible = posterior.ppf(0.05) <= 0.8 <= posterior.ppf(0.95)
assert is_credible, self.error_msg(coverage, 0.8)
def test_standard_deviation_ci(self):
population_dist = stats.norm(0, 1)
successes = 0
trials = 1000
for i in range(trials):
sample = population_dist.rvs(30)
results = ci.stddev_ci(sample, alpha=0.2)
successes += int(results.contains(1.0))
coverage = successes / trials
posterior = stats.beta(successes + 1, trials - successes + 1)
        is_credible = posterior.ppf(0.05) <= 0.8 <= posterior.ppf(0.95)
assert is_credible, self.error_msg(coverage, 0.8)
def test_proportion_ci(self):
population_dist = stats.bernoulli(0.7)
successes = 0
trials = 1000
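        # Half the trials pass the raw 0/1 sample and half pass the sample proportion,
        # exercising both input forms the test expects proportion_ci to accept.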
for i in range(trials // 2):
sample = population_dist.rvs(30)
results = ci.proportion_ci(sample, alpha=0.2)
            successes += int(results.contains(0.7))
for i in range(trials - trials // 2):
sample = population_dist.rvs(30)
results = ci.proportion_ci(sample.mean(), alpha=0.2)
            successes += int(results.contains(0.7))
coverage = successes / trials
posterior = stats.beta(successes + 1, trials - successes + 1)
        is_credible = posterior.ppf(0.05) <= 0.8 <= posterior.ppf(0.95)
assert is_credible, self.error_msg(coverage, 0.8)
def error_msg(self, observed, expected):
return f"Observed Coverage: {observed}\nExpected Coverage: {expected}"
def test_jackknife_distribution():
sample = [1, 2, 3]
jack = ci.SimpleJackknife(sample, np.mean)
jack.run(n_jobs=1)
jackknife_values = jack.jackknife_samples_
ground_truth = np.asarray([(2 + 3)/2, (1 + 3)/2, (1 + 2)/2])
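    # i.e. the leave-one-out means of [1, 2, 3]: drop 1 -> 2.5, drop 2 -> 2.0, drop 3 -> 1.5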
assert np.allclose(jackknife_values, ground_truth)
class TestSimpleBootstrap:
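    # Each test draws a N(0, 1) sample, bootstraps the sample mean, and checks that the
    # percentile, normal-approximation and BCa bootstrap intervals each land within 0.01 of
    # the analytic z-interval (known sigma = 1) for two-sided, left-sided and right-sided cases.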
def test_symmetric_ci(self):
sample = np.random.normal(0, 1, size=1000)
theo_ci = ci.mean_ci(sample, alpha=0.1, std_dev=1)
bootstrap = ci.SimpleBootstrap(sample, np.mean)
bootstrap.run(iterations=1000)
# Test Percentile CI
boots_ci = bootstrap.confidence_interval(alpha=0.1)
lower_dist = abs(boots_ci.lower - theo_ci.lower)
assert lower_dist < 0.01, self.error_msg(boots_ci.lower, theo_ci.lower)
upper_dist = abs(boots_ci.upper - theo_ci.upper)
assert upper_dist < 0.01, self.error_msg(boots_ci.upper, theo_ci.upper)
# Test Normal CI
boots_ci = bootstrap.confidence_interval(
alpha=0.1, bootstrap_ci=ci.BootstrapCI.normal)
lower_dist = abs(boots_ci.lower - theo_ci.lower)
assert lower_dist < 0.01, self.error_msg(boots_ci.lower, theo_ci.lower)
upper_dist = abs(boots_ci.upper - theo_ci.upper)
assert upper_dist < 0.01, self.error_msg(boots_ci.upper, theo_ci.upper)
# Test BCA CI
boots_ci = bootstrap.confidence_interval(
alpha=0.1, bootstrap_ci=ci.BootstrapCI.bca)
lower_dist = abs(boots_ci.lower - theo_ci.lower)
assert lower_dist < 0.01, self.error_msg(boots_ci.lower, theo_ci.lower)
upper_dist = abs(boots_ci.upper - theo_ci.upper)
assert upper_dist < 0.01, self.error_msg(boots_ci.upper, theo_ci.upper)
def test_left_ci(self):
sample = np.random.normal(0, 1, size=1000)
theo_ci = ci.mean_ci(
sample, alpha=0.1, std_dev=1, interval_type=ci.IntervalType.left)
bootstrap = ci.SimpleBootstrap(sample, np.mean)
bootstrap.run(iterations=1000)
# Test Percentile CI
boots_ci = bootstrap.confidence_interval(
alpha=0.1, interval_type=ci.IntervalType.left)
assert boots_ci.upper == theo_ci.upper, "Upper confidence should be infinite for a one-sided left CI"
lower_dist = abs(boots_ci.lower - theo_ci.lower)
assert lower_dist < 0.01, self.error_msg(boots_ci.lower, theo_ci.lower)
# Test Normal CI
boots_ci = bootstrap.confidence_interval(
alpha=0.1, interval_type=ci.IntervalType.left, bootstrap_ci=ci.BootstrapCI.normal)
assert boots_ci.upper == theo_ci.upper, "Upper confidence should be infinite for a one-sided left CI"
lower_dist = abs(boots_ci.lower - theo_ci.lower)
assert lower_dist < 0.01, self.error_msg(boots_ci.lower, theo_ci.lower)
# Test BCA CI
boots_ci = bootstrap.confidence_interval(
alpha=0.1, interval_type=ci.IntervalType.left, bootstrap_ci=ci.BootstrapCI.bca)
assert boots_ci.upper == theo_ci.upper, "Upper confidence should be infinite for a one-sided left CI"
lower_dist = abs(boots_ci.lower - theo_ci.lower)
assert lower_dist < 0.01, self.error_msg(boots_ci.lower, theo_ci.lower)
def test_right_ci(self):
sample = np.random.normal(0, 1, size=1000)
theo_ci = ci.mean_ci(
sample, alpha=0.1, std_dev=1, interval_type=ci.IntervalType.right)
bootstrap = ci.SimpleBootstrap(sample, np.mean)
bootstrap.run(iterations=1000)
# Test Percentile CI
boots_ci = bootstrap.confidence_interval(
alpha=0.1, interval_type=ci.IntervalType.right)
assert boots_ci.lower == theo_ci.lower, "Lower confidence should be -infinity for a one-sided right CI"
upper_dist = abs(boots_ci.upper - theo_ci.upper)
assert upper_dist < 0.01, self.error_msg(boots_ci.upper, theo_ci.upper)
# Test Normal CI
boots_ci = bootstrap.confidence_interval(
alpha=0.1, interval_type=ci.IntervalType.right, bootstrap_ci=ci.BootstrapCI.normal)
assert boots_ci.lower == theo_ci.lower, "Lower confidence should be -infinity for a one-sided right CI"
upper_dist = abs(boots_ci.upper - theo_ci.upper)
assert upper_dist < 0.01, self.error_msg(boots_ci.upper, theo_ci.upper)
# Test BCA CI
boots_ci = bootstrap.confidence_interval(
alpha=0.1, interval_type=ci.IntervalType.right, bootstrap_ci=ci.BootstrapCI.bca)
assert boots_ci.lower == theo_ci.lower, "Lower confidence should be -infinity for a one-sided right CI"
upper_dist = abs(boots_ci.upper - theo_ci.upper)
assert upper_dist < 0.01, self.error_msg(boots_ci.upper, theo_ci.upper)
    def error_msg(self, boots, theoretical):
        return f"Bootstrap value: {boots}\nTheoretical value: {theoretical}"
| 38.752381 | 111 | 0.652372 | 1,135 | 8,138 | 4.496916 | 0.097797 | 0.053487 | 0.039969 | 0.047022 | 0.884796 | 0.869514 | 0.869514 | 0.869514 | 0.864224 | 0.85815 | 0 | 0.037338 | 0.243057 | 8,138 | 209 | 112 | 38.937799 | 0.791234 | 0.016835 | 0 | 0.6875 | 0 | 0 | 0.059817 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.076389 | false | 0 | 0.020833 | 0.013889 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a18d612204766093a085902ff66f339e5b57b496 | 44,549 | py | Python | nfl_stats/player.py | ppourmand/nfl-stats-py | 05e5ed6e1fc551721f564d7498e757fe79f54c82 | ["MIT"] | 7 | 2018-08-21T16:29:42.000Z | 2021-08-13T16:35:57.000Z | nfl_stats/player.py | ppourmand/nfl-stats-py | 05e5ed6e1fc551721f564d7498e757fe79f54c82 | ["MIT"] | null | null | null | nfl_stats/player.py | ppourmand/nfl-stats-py | 05e5ed6e1fc551721f564d7498e757fe79f54c82 | ["MIT"] | 3 | 2018-08-21T16:55:13.000Z | 2019-05-21T14:55:35.000Z |
import os
import yaml
from pathlib import Path
from bs4 import BeautifulSoup
import requests
class QB:
    def __init__(self, name: str):
self.name: str = name
self.number: str = ""
self.position: str = "QB"
self.team: str = ""
self.games_played: int = 0
self.games_started: int = 0
self.passes_completed: int = 0
self.passes_attempted: int = 0
self.pass_completion_percentage: float = 0.0
self.yards_gained_by_passing: int = 0
self.passing_touchdowns: int = 0
self.passing_touchdown_percentage: float = 0.0
self.interceptions: int = 0
self.interception_percentage: float = 0.0
self.longest_completed_pass: int = 0
self.yards_gained_per_pass_attempt: float = 0.0
self.yards_gained_per_pass_completion: float = 0.0
self.qb_rating: float = 0.0
self.times_sacked: int = 0
self.yards_lost_due_to_sacks: int = 0
self.approximate_value: int = 0
self.year = ""
    def set_stats(self, year: str) -> None:
"""
Gets the statistics from the scraped site and sets appriopiate variables.
TODO:
- have better exception handling for requests/bs4
Parameters:
year - the year to get statistics for
Returns:
None
"""
first_letter_lastname = list(self.name.split()[1])[0].upper() # lol so convoluted
first_four_lastname = self.name.split()[1][0:4]
first_two_firstname = self.name.split()[0][0:2]
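        # e.g. "Tom Brady" yields "B", "Brad" and "To", i.e. the pro-football-reference
        # page id "BradTo00"; the hard-coded "00" suffix below assumes the desired player
        # is the first one registered under that name prefix.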
self.year = year
if self.is_player_stats_cached():
print(">> Player stats cached for year, setting from cache")
self.set_player_stats_from_cache()
return
print(">> Retrieving data from pro-football-reference.com")
request_url = "https://www.pro-football-reference.com/players/{}/{}{}00.htm".format(first_letter_lastname,
first_four_lastname,
first_two_firstname)
try:
response = requests.get(request_url)
soup = BeautifulSoup(response.text, 'html.parser')
stats_for_year = soup.find("tr", {"id": "passing.{}".format(year)})
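            # Each position class reads its own per-season table row: "passing" here,
            # "receiving_and_rushing" for WR, "rushing_and_receiving" for RB and "kicking" for K.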
self.team = stats_for_year.find("td", {"data-stat": "team"}).find('a').text
self.position = stats_for_year.find("td", {"data-stat": "pos"}).text
self.number = stats_for_year.find("td", {"data-stat": "uniform_number"}).text
self.games_played = int(stats_for_year.find("td", {"data-stat": "g"}).text)
self.games_started = int(stats_for_year.find("td", {"data-stat": "gs"}).text)
self.passes_completed = int(stats_for_year.find("td", {"data-stat": "pass_cmp"}).text)
self.passes_attempted = int(stats_for_year.find("td", {"data-stat": "pass_att"}).text)
self.pass_completion_percentage = float(stats_for_year.find("td", {"data-stat": "pass_cmp_perc"}).text)
self.yards_gained_by_passing = int(stats_for_year.find("td", {"data-stat": "pass_yds"}).text)
self.passing_touchdowns = int(stats_for_year.find("td", {"data-stat": "pass_td"}).text)
self.passing_touchdown_percentage = float(stats_for_year.find("td", {"data-stat": "pass_td_perc"}).text)
self.interceptions = int(stats_for_year.find("td", {"data-stat": "pass_int"}).text)
self.interception_percentage = float(stats_for_year.find("td", {"data-stat": "pass_int_perc"}).text)
self.longest_completed_pass = int(stats_for_year.find("td", {"data-stat": "pass_long"}).text)
self.yards_gained_per_pass_attempt = float(stats_for_year.find("td", {"data-stat": "pass_yds_per_att"}).text)
self.yards_gained_per_pass_completion = float(stats_for_year.find("td", {"data-stat": "pass_yds_per_cmp"}).text)
self.qb_rating = float(stats_for_year.find("td", {"data-stat": "qbr"}).text)
self.times_sacked = int(stats_for_year.find("td", {"data-stat": "pass_sacked"}).text)
self.yards_lost_due_to_sacks = int(stats_for_year.find("td", {"data-stat": "pass_sacked_yds"}).text)
self.approximate_value = int(stats_for_year.find("td", {"data-stat": "av"}).text)
# save it to its own file for later retrieval
self.save_stats_to_yaml()
except Exception as e:
print(e)
def print_stats(self) -> None:
print("Year: {}".format(self.year))
print("Name: {}".format(self.name))
print("Number: {}".format(self.number))
print("Position : {}".format(self.position))
print("Team: {}".format(self.team))
print("Games played: {}".format(self.games_played))
print("Games started: {}".format(self.games_started))
print("Passes completed: {}".format(self.passes_completed))
print("Passes attempted: {}".format(self.passes_attempted))
print("Pass completion %: {}".format(self.pass_completion_percentage))
print("Yards gained by passing: {}".format(self.yards_gained_by_passing))
print("Passing touchdowns: {}".format(self.passing_touchdowns))
print("Passing touchdown percentage: {}".format(self.passing_touchdown_percentage))
print("Interceptions: {}".format(self.interceptions))
print("Interception percentage: {}".format(self.interception_percentage))
print("Longest completed pass: {}".format(self.longest_completed_pass))
print("Yards gained per pass attempted: {}".format(self.yards_gained_per_pass_attempt))
print("Yards gained per pass completion: {}".format(self.yards_gained_per_pass_completion))
print("Rating: {}".format(self.qb_rating))
print("Times sacked: {}".format(self.times_sacked))
print("Yards lost due to sacks: {}".format(self.yards_lost_due_to_sacks))
print("Approximate value: {}".format(self.approximate_value))
def save_stats_to_yaml(self) -> None:
"""
Saves statistical data to a yaml file
Returns:
- None
"""
# if the directory for proper organization doesn't exist, make it
directory = "./players/QB/{}_{}/".format(self.name.split()[0], self.name.split()[1])
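        # e.g. stats for "Tom Brady" in 2019 end up in ./players/QB/Tom_Brady/2019.yaml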
if not os.path.exists(directory):
os.makedirs(directory)
data = {
"name": self.name,
"number": self.number,
"position": self.position,
"team": self.team,
"games_played": self.games_played,
"games_started": self.games_started,
"passes_completed": self.passes_completed,
"passes_attempted": self.passes_attempted,
"pass_completion_perc": self.pass_completion_percentage,
"yards_gained_by_passing": self.yards_gained_by_passing,
"passing_touchdowns": self.passing_touchdowns,
"passing_touchdown_perc": self.passing_touchdown_percentage,
"interceptions": self.interceptions,
"interception_perc": self.interception_percentage,
"longest_completed_pass": self.longest_completed_pass,
"yards_gained_per_pass_attempt": self.yards_gained_per_pass_attempt,
"yards_gained_per_pass_completion": self.yards_gained_per_pass_completion,
"qb_rating": self.qb_rating,
"times_sacked": self.times_sacked,
"yards_lost_due_to_sacks": self.yards_lost_due_to_sacks,
"approximate_value": self.approximate_value
}
with open("{}/{}.yaml".format(directory, self.year), "w") as file:
yaml.dump(data, file, default_flow_style=False)
def is_player_stats_cached(self) -> bool:
"""
Checks if the player + year yaml file exists
Returns:
- bool: True if player file exists, False otherwise
"""
player_file = Path("./players/QB/{}_{}/{}.yaml".format(self.name.split()[0], self.name.split()[1], self.year))
return player_file.is_file()
def set_player_stats_from_cache(self) -> None:
"""
Reads the corresponding yaml file and set player stats.
Returns:
- None
"""
player_file = "./players/QB/{}_{}/{}.yaml".format(self.name.split()[0], self.name.split()[1], self.year)
with open(player_file, "r") as file:
try:
yaml_data = yaml.safe_load(file)
self.name = yaml_data["name"]
self.number = yaml_data["number"]
self.team = yaml_data["team"]
self.games_played = yaml_data["games_played"]
self.games_started = yaml_data["games_started"]
self.passes_completed = yaml_data["passes_completed"]
self.passes_attempted = yaml_data["passes_attempted"]
self.pass_completion_percentage = yaml_data["pass_completion_perc"]
self.yards_gained_by_passing = yaml_data["yards_gained_by_passing"]
self.passing_touchdowns = yaml_data["passing_touchdowns"]
self.passing_touchdown_percentage = yaml_data["passing_touchdown_perc"]
self.interceptions = yaml_data["interceptions"]
self.interception_percentage = yaml_data["interception_perc"]
self.longest_completed_pass = yaml_data["longest_completed_pass"]
self.yards_gained_per_pass_attempt = yaml_data["yards_gained_per_pass_attempt"]
self.yards_gained_per_pass_completion = yaml_data["yards_gained_per_pass_completion"]
self.qb_rating = yaml_data["qb_rating"]
self.times_sacked = yaml_data["times_sacked"]
self.yards_lost_due_to_sacks = yaml_data["yards_lost_due_to_sacks"]
self.approximate_value = yaml_data["approximate_value"]
except yaml.YAMLError as e:
print(e)
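
# A minimal usage sketch (player name and season are illustrative):
#
#     qb = QB('Tom Brady')
#     qb.set_stats('2019')   # scrapes pro-football-reference, or loads the local YAML cache
#     qb.print_stats()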
class WR:
def __init__(self, name: str):
self.name: str = name
self.number: int = 0
self.team: str = ""
self.position: str = "WR"
self.games_played: int = 0
self.games_started: int = 0
self.pass_targets: int = 0
self.receptions: int = 0
self.receiving_yards: int = 0
self.yards_per_reception: float = 0.0
self.receiving_touchdowns: int = 0
self.longest_reception: int = 0
self.receptions_per_game: float = 0.0
self.receiving_yards_per_game: float = 0.0
self.rush_attempts: int = 0
self.rushing_yards: int = 0
self.rushing_touchdowns: int = 0
self.longest_rushing_attempt: int = 0
self.rushing_yards_per_attempt: float = 0.0
self.rushing_yards_per_game: float = 0.0
self.rushing_attempts_per_game: float = 0.0
self.touches: int = 0
self.approximate_value: int = 0
self.fumbles: int = 0
def set_stats(self, year: str) -> None:
"""
Given the year, scrape pro-football-reference to get stats
Parameters:
year - the year to get statistics for
Returns:
- None
"""
first_letter_lastname = list(self.name.split()[1])[0].upper() # lol so convoluted
first_four_lastname = self.name.split()[1][0:4]
first_two_firstname = self.name.split()[0][0:2]
self.year = year
if self.is_player_stats_cached():
print(">> Player stats cached for year, setting from cache")
self.set_stats_from_cache()
return
print(">> Retrieving data from pro-football-reference.com")
request_url = "https://www.pro-football-reference.com/players/{}/{}{}00.htm".format(first_letter_lastname,
first_four_lastname,
first_two_firstname)
try:
response = requests.get(request_url)
soup = BeautifulSoup(response.text, 'html.parser')
stats_for_year = soup.find("tr", {"id": "receiving_and_rushing.{}".format(year)})
self.number = int(stats_for_year.find("td", {"data-stat": "uniform_number"}).text)
self.team = stats_for_year.find("td", {"data-stat": "team"}).find('a').text
self.position = stats_for_year.find("td", {"data-stat":"pos"}).text
self.games_played = int(stats_for_year.find("td", {"data-stat":"g"}).text)
self.games_started = int(stats_for_year.find("td", {"data-stat":"gs"}).text)
self.pass_targets = int(stats_for_year.find("td", {"data-stat":"targets"}).text)
self.receptions = int(stats_for_year.find("td", {"data-stat":"rec"}).text)
self.receiving_yards = int(stats_for_year.find("td", {"data-stat":"rec_yds"}).text)
self.yards_per_reception = float(stats_for_year.find("td", {"data-stat":"rec_yds_per_rec"}).text)
self.receiving_touchdowns = int(stats_for_year.find("td", {"data-stat":"rec_td"}).text)
self.longest_reception = int(stats_for_year.find("td", {"data-stat":"rec_long"}).text)
self.receptions_per_game = float(stats_for_year.find("td", {"data-stat":"rec_per_g"}).text)
self.receiving_yards_per_game = float(stats_for_year.find("td", {"data-stat":"rec_yds_per_g"}).text)
self.rush_attempts = int(stats_for_year.find("td", {"data-stat":"rush_att"}).text)
self.rushing_yards = int(stats_for_year.find("td", {"data-stat":"rush_yds"}).text)
self.rushing_touchdowns = int(stats_for_year.find("td", {"data-stat":"rush_td"}).text)
self.longest_rushing_attempt = int(stats_for_year.find("td", {"data-stat":"rush_long"}).text)
self.rushing_yards_per_attempt = float(stats_for_year.find("td", {"data-stat":"rush_yds_per_att"}).text)
self.rushing_yards_per_game = float(stats_for_year.find("td", {"data-stat":"rush_yds_per_g"}).text)
self.rushing_attempts_per_game = float(stats_for_year.find("td", {"data-stat":"rush_att_per_g"}).text)
self.touches = int(stats_for_year.find("td", {"data-stat":"touches"}).text)
self.fumbles = int(stats_for_year.find("td", {"data-stat":"fumbles"}).text)
self.approximate_value = int(stats_for_year.find("td", {"data-stat":"av"}).text)
self.save_stats_to_yaml()
except Exception as e:
print(e)
def save_stats_to_yaml(self) -> None:
"""
Saves data scraped to a yaml file for caching.
Returns:
- None
"""
# if the directory for proper organization doesn't exist, make it
directory = "./players/WR/{}_{}/".format(self.name.split()[0], self.name.split()[1])
if not os.path.exists(directory):
os.makedirs(directory)
data = {
"name": self.name,
"number": self.number,
"team": self.team,
"position": self.position,
"games_played": self.games_played,
"games_started": self.games_started,
"pass_targets": self.pass_targets,
"receptions": self.receptions,
"receiving_yards": self.receiving_yards,
"yards_per_reception": self.yards_per_reception,
"receiving_touchdowns": self.receiving_touchdowns,
"longest_reception": self.longest_reception,
"receptions_per_game": self.receptions_per_game,
"receiving_yards_per_game": self.receiving_yards_per_game,
"rush_attempts": self.rush_attempts,
"rushing_yards": self.rushing_yards,
"rushing_touchdowns": self.rushing_touchdowns,
"longest_rushing_attempt": self.longest_rushing_attempt,
"rushing_yards_per_attempt": self.rushing_yards_per_attempt,
"rushing_yards_per_game": self.rushing_yards_per_game,
"rushing_attempts_per_game": self.rushing_attempts_per_game,
"touches": self.touches,
"approximate_value": self.approximate_value,
"fumbles": self.fumbles
}
with open("{}/{}.yaml".format(directory, self.year), "w") as file:
yaml.dump(data, file, default_flow_style=False)
def set_stats_from_cache(self) -> None:
player_file = "./players/WR/{}_{}/{}.yaml".format(self.name.split()[0], self.name.split()[1], self.year)
with open(player_file, "r") as file:
try:
yaml_data = yaml.safe_load(file)
self.name = yaml_data["name"]
self.number = yaml_data["number"]
self.team = yaml_data["team"]
self.position = yaml_data["position"]
self.games_played = yaml_data["games_played"]
self.games_started = yaml_data["games_started"]
self.pass_targets = yaml_data["pass_targets"]
self.receptions = yaml_data["receptions"]
self.receiving_yards = yaml_data["receiving_yards"]
self.yards_per_reception = yaml_data["yards_per_reception"]
self.receiving_touchdowns = yaml_data["receiving_touchdowns"]
self.longest_reception = yaml_data["longest_reception"]
self.receptions_per_game = yaml_data["receptions_per_game"]
self.receiving_yards_per_game = yaml_data["receiving_yards_per_game"]
self.rush_attempts = yaml_data["rush_attempts"]
self.rushing_yards = yaml_data["rushing_yards"]
self.rushing_touchdowns = yaml_data["rushing_touchdowns"]
self.longest_rushing_attempt = yaml_data["longest_rushing_attempt"]
self.rushing_yards_per_attempt = yaml_data["rushing_yards_per_attempt"]
self.rushing_yards_per_game = yaml_data["rushing_yards_per_game"]
self.rushing_attempts_per_game = yaml_data["rushing_attempts_per_game"]
self.touches = yaml_data["touches"]
self.approximate_value = yaml_data["approximate_value"]
self.fumbles = yaml_data["fumbles"]
except yaml.YAMLError as e:
print(e)
    def print_stats(self) -> None:
        print("Name: {}".format(self.name))
        print("Number: {}".format(self.number))
        print("Team: {}".format(self.team))
        print("Position: {}".format(self.position))
        print("Games played: {}".format(self.games_played))
        print("Games started: {}".format(self.games_started))
        print("Pass targets: {}".format(self.pass_targets))
        print("Receptions: {}".format(self.receptions))
        print("Receiving yards: {}".format(self.receiving_yards))
        print("Yards per reception: {}".format(self.yards_per_reception))
        print("Receiving touchdowns: {}".format(self.receiving_touchdowns))
        print("Longest reception: {}".format(self.longest_reception))
        print("Receptions per game: {}".format(self.receptions_per_game))
        print("Receiving yards per game: {}".format(self.receiving_yards_per_game))
        print("Rush attempts: {}".format(self.rush_attempts))
        print("Rushing yards: {}".format(self.rushing_yards))
        print("Rushing touchdowns: {}".format(self.rushing_touchdowns))
        print("Longest rushing attempt: {}".format(self.longest_rushing_attempt))
        print("Rushing yards per attempt: {}".format(self.rushing_yards_per_attempt))
        print("Rushing yards per game: {}".format(self.rushing_yards_per_game))
        print("Rushing attempts per game: {}".format(self.rushing_attempts_per_game))
        print("Touches: {}".format(self.touches))
        print("Approximate value: {}".format(self.approximate_value))
        print("Fumbles: {}".format(self.fumbles))
def is_player_stats_cached(self) -> bool:
player_file = Path("./players/WR/{}_{}/{}.yaml".format(self.name.split()[0], self.name.split()[1], self.year))
return player_file.is_file()
class RB:
def __init__(self, name: str):
self.name: str = name
self.number: int = 0
self.team: str = ""
self.position: str = "RB"
self.games_played: int = 0
self.games_started: int = 0
self.rushing_attempts: int = 0
self.rushing_yards: int = 0
self.rushing_touchdowns: int = 0
self.longest_rushing_attempt: int = 0
self.rushing_yards_per_attempt: float = 0.0
self.rushing_yards_per_game: float = 0.0
self.rushing_attempts_per_game: float = 0.0
self.pass_targets: int = 0
self.receptions: int = 0
self.receiving_yards: int = 0
self.receiving_yards_per_reception: float = 0.0
self.receiving_touchdowns: int = 0
self.longest_reception: int = 0
self.receptions_per_game: float = 0.0
self.receiving_yards_per_game: float = 0.0
self.approximate_value: int = 0
self.fumbles: int = 0
def set_stats(self, year: str) -> None:
"""
Given the year, scrape pro-football-reference to get stats
Parameters:
year - the year to get statistics for
Returns:
- None
"""
first_letter_lastname = list(self.name.split()[1])[0].upper() # lol so convoluted
first_four_lastname = self.name.split()[1][0:4]
first_two_firstname = self.name.split()[0][0:2]
self.year = year
if self.is_player_stats_cached():
print(">> Player stats cached for year, setting from cache")
self.set_stats_from_cache()
return
print(">> Retrieving data from pro-football-reference.com")
request_url = "https://www.pro-football-reference.com/players/{}/{}{}00.htm".format(first_letter_lastname,
first_four_lastname,
first_two_firstname)
try:
response = requests.get(request_url)
soup = BeautifulSoup(response.text, 'html.parser')
stats_for_year = soup.find("tr", {"id": "rushing_and_receiving.{}".format(year)})
self.number = int(stats_for_year.find("td", {"data-stat": "uniform_number"}).text)
self.team = stats_for_year.find("td", {"data-stat": "team"}).find('a').text
self.games_played = int(stats_for_year.find("td", {"data-stat": "g"}).text)
self.games_started = int(stats_for_year.find("td", {"data-stat": "gs"}).text)
self.rushing_attempts = int(stats_for_year.find("td", {"data-stat": "rush_att"}).text)
self.rushing_yards = int(stats_for_year.find("td", {"data-stat": "rush_yds"}).text)
self.rushing_touchdowns = int(stats_for_year.find("td", {"data-stat": "rush_td"}).text)
self.longest_rushing_attempt = int(stats_for_year.find("td", {"data-stat": "rush_long"}).text)
self.rushing_yards_per_attempt = float(stats_for_year.find("td", {"data-stat": "rush_yds_per_att"}).text)
self.rushing_yards_per_game = float(stats_for_year.find("td", {"data-stat": "rush_yds_per_g"}).text)
self.rushing_attempts_per_game = float(stats_for_year.find("td", {"data-stat": "rush_att_per_g"}).text)
self.pass_targets = int(stats_for_year.find("td", {"data-stat": "targets"}).text)
self.receptions = int(stats_for_year.find("td", {"data-stat": "rec"}).text)
self.receiving_yards = int(stats_for_year.find("td", {"data-stat": "rec_yds"}).text)
self.receiving_yards_per_reception = float(stats_for_year.find("td", {"data-stat": "rec_yds_per_rec"}).text)
self.receiving_touchdowns = int(stats_for_year.find("td", {"data-stat": "rec_td"}).text)
self.longest_reception = int(stats_for_year.find("td", {"data-stat": "rec_long"}).text)
self.receptions_per_game = float(stats_for_year.find("td", {"data-stat": "rec_per_g"}).text)
self.receiving_yards_per_game = float(stats_for_year.find("td", {"data-stat": "rec_yds_per_g"}).text)
self.approximate_value = int(stats_for_year.find("td", {"data-stat": "av"}).text)
self.fumbles = int(stats_for_year.find("td", {"data-stat": "fumbles"}).text)
self.save_stats()
except Exception as e:
print(e)
def save_stats(self) -> None:
directory = "./players/RB/{}_{}/".format(self.name.split()[0], self.name.split()[1])
if not os.path.exists(directory):
os.makedirs(directory)
data = {
"name": self.name,
"number": self.number,
"team": self.team,
"games_played": self.games_played,
"games_started": self.games_started,
"rushing_attempts": self.rushing_attempts,
"rushing_yards": self.rushing_yards,
"rushing_touchdowns": self.rushing_touchdowns,
"longest_rushing_attempt": self.longest_rushing_attempt,
"rushing_yards_per_attempt": self.rushing_yards_per_attempt,
"rushing_yards_per_game": self.rushing_yards_per_game,
"rushing_attempts_per_game": self.rushing_attempts_per_game,
"pass_targets": self.pass_targets,
"receptions": self.receptions,
"receiving_yards": self.receiving_yards,
"receiving_yards_per_reception": self.receiving_yards_per_reception,
"receiving_touchdowns": self.receiving_touchdowns,
"longest_reception": self.longest_reception,
"receptions_per_game": self.receptions_per_game,
"receiving_yards_per_game": self.receiving_yards_per_game,
"approximate_value": self.approximate_value,
"fumbles": self.fumbles
}
with open("{}/{}.yaml".format(directory, self.year), "w") as file:
yaml.dump(data, file, default_flow_style=False)
def set_stats_from_cache(self) -> None:
player_file = "./players/RB/{}_{}/{}.yaml".format(self.name.split()[0], self.name.split()[1], self.year)
with open(player_file, "r") as file:
try:
yaml_data = yaml.safe_load(file)
self.number = yaml_data["number"]
self.team = yaml_data["team"]
self.games_played = yaml_data["games_played"]
self.games_started = yaml_data["games_started"]
self.rushing_attempts = yaml_data["rushing_attempts"]
self.rushing_yards = yaml_data["rushing_yards"]
self.rushing_touchdowns = yaml_data["rushing_touchdowns"]
self.longest_rushing_attempt = yaml_data["longest_rushing_attempt"]
self.rushing_yards_per_attempt = yaml_data["rushing_yards_per_attempt"]
self.rushing_yards_per_game = yaml_data["rushing_yards_per_game"]
self.rushing_attempts_per_game = yaml_data["rushing_attempts_per_game"]
self.pass_targets = yaml_data["pass_targets"]
self.receptions = yaml_data["receptions"]
self.receiving_yards = yaml_data["receiving_yards"]
self.receiving_yards_per_reception = yaml_data["receiving_yards_per_reception"]
self.receiving_touchdowns = yaml_data["receiving_touchdowns"]
self.longest_reception = yaml_data["longest_reception"]
self.receptions_per_game = yaml_data["receptions_per_game"]
self.receiving_yards_per_game = yaml_data["receiving_yards_per_game"]
self.approximate_value = yaml_data["approximate_value"]
self.fumbles = yaml_data["fumbles"]
except yaml.YAMLError as e:
print(e)
def is_player_stats_cached(self) -> bool:
player_file = Path("./players/RB/{}_{}/{}.yaml".format(self.name.split()[0], self.name.split()[1], self.year))
return player_file.is_file()
def print_stats(self) -> None:
print("Year: {}".format(self.year))
print("Name: {}".format(self.name))
print("Number: {}".format(self.number))
print("Position : {}".format(self.position))
print("Team: {}".format(self.team))
print("Games played: {}".format(self.games_played))
print("Games started: {}".format(self.games_started))
print("Rushing attempts: {}".format(self.rushing_attempts))
print("Rushing yards: {}".format(self.rushing_yards))
print("Rushing touchdowns: {}".format(self.rushing_touchdowns))
print("Longest rushing attempt: {}".format(self.longest_rushing_attempt))
print("Rushing yards per attempt: {}".format(self.rushing_yards_per_attempt))
print("Rushing yards per game: {}".format(self.rushing_yards_per_game))
print("Pass targets: {}".format(self.pass_targets))
print("Receptions: {}".format(self.receptions))
print("Receiving yards: {}".format(self.receiving_yards))
print("Receiving yards per reception: {}".format(self.receiving_yards_per_reception))
print("Receiving touchdowns: {}".format(self.receiving_touchdowns))
print("Longest reception: {}".format(self.longest_reception))
print("Receptions per game: {}".format(self.receptions_per_game))
print("Receiving yards per game: {}".format(self.receiving_yards_per_game))
print("Approximate value: {}".format(self.approximate_value))
print("Fumbles: {}".format(self.fumbles))
class K:
def __init__(self, name: str):
self.name: str = name
self.number: int = 0
self.team: str = ""
self.position: str = "K"
self.games_played: int = 0
self.games_started: int = 0
self.field_goal_attempts_20_to_29: int = 0
self.field_goals_made_20_to_29: int = 0
self.field_goal_attempts_30_to_39: int = 0
self.fields_goals_made_30_to_39: int = 0
self.field_goal_attempts_40_to_49: int = 0
self.field_goals_made_40_to_49: int = 0
self.field_goal_attempts_50_plus: int = 0
self.field_goals_made_50_plus: int = 0
self.longest_field_goal_made: int = 0
self.total_field_goals_attempted: int = 0
self.total_field_goals_made: int = 0
self.extra_points_attempted: int = 0
self.extra_points_made: int = 0
self.approximate_value: int = 0
def set_stats(self, year: str) -> None:
first_letter_lastname = list(self.name.split()[1])[0].upper() # lol so convoluted
first_four_lastname = self.name.split()[1][0:4]
first_two_firstname = self.name.split()[0][0:2]
self.year = year
if self.is_player_stats_cached():
print(">> Player stats cached for year, setting from cache")
self.set_stats_from_cache()
return
print(">> Retrieving data from pro-football-reference.com")
request_url = "https://www.pro-football-reference.com/players/{}/{}{}00.htm".format(first_letter_lastname,
first_four_lastname,
first_two_firstname)
try:
response = requests.get(request_url)
soup = BeautifulSoup(response.text, 'html.parser')
stats_for_year = soup.find("tr", {"id": "kicking.{}".format(year)})
self.number = int(stats_for_year.find("td", {"data-stat": "uniform_number"}).text)
self.team = stats_for_year.find("td", {"data-stat": "team"}).find('a').text
self.games_played = int(stats_for_year.find("td", {"data-stat": "g"}).text)
self.games_started = int(stats_for_year.find("td", {"data-stat": "gs"}).text)
self.field_goal_attempts_20_to_29 = int(stats_for_year.find("td", {"data-stat": "fga2"}).text)
self.field_goals_made_20_to_29 = int(stats_for_year.find("td", {"data-stat": "fgm2"}).text)
self.field_goal_attempts_30_to_39 = int(stats_for_year.find("td", {"data-stat": "fga3"}).text)
self.field_goals_made_30_to_39 = int(stats_for_year.find("td", {"data-stat": "fgm3"}).text)
self.field_goal_attempts_40_to_49 = int(stats_for_year.find("td", {"data-stat": "fga4"}).text)
self.field_goals_made_40_to_49 = int(stats_for_year.find("td", {"data-stat": "fgm4"}).text)
self.field_goal_attempts_50_plus = int(stats_for_year.find("td", {"data-stat": "fga5"}).text)
self.field_goals_made_50_plus = int(stats_for_year.find("td", {"data-stat": "fgm5"}).text)
self.longest_field_goal_made = int(stats_for_year.find("td", {"data-stat": "fg_long"}).text)
self.total_field_goals_attempted = int(stats_for_year.find("td", {"data-stat": "fga"}).text)
self.total_field_goals_made = int(stats_for_year.find("td", {"data-stat": "fgm"}).text)
self.extra_points_attempted = int(stats_for_year.find("td", {"data-stat": "xpa"}).text)
self.extra_points_made = int(stats_for_year.find("td", {"data-stat": "xpm"}).text)
self.approximate_value = int(stats_for_year.find("td", {"data-stat": "av"}).text)
self.save_stats()
except Exception as e:
print(e)
def save_stats(self) -> None:
directory = "./players/K/{}_{}/".format(self.name.split()[0], self.name.split()[1])
if not os.path.exists(directory):
os.makedirs(directory)
data = {
"name": self.name,
"number": self.number,
"team": self.team,
"games_played": self.games_played,
"games_started": self.games_started,
"field_goal_attempts_20_to_29": self.field_goal_attempts_20_to_29,
"field_goals_made_20_to_29": self.field_goals_made_20_to_29,
"field_goal_attempts_30_to_39": self.field_goal_attempts_30_to_39,
"field_goals_made_30_to_39": self.fields_goals_made_30_to_39,
"field_goal_attempts_40_to_49": self.field_goal_attempts_40_to_49,
"field_goals_made_40_to_49": self.field_goals_made_40_to_49,
"field_goal_attempts_50_plus": self.field_goal_attempts_50_plus,
"field_goals_made_50_plus": self.field_goals_made_50_plus,
"longest_field_goal_made": self.longest_field_goal_made,
"total_field_goals_attempted": self.total_field_goals_attempted,
"total_field_goals_made": self.total_field_goals_made,
"extra_points_attempted": self.extra_points_attempted,
"extra_points_made": self.extra_points_made,
"approximate_value": self.approximate_value
}
with open("{}/{}.yaml".format(directory, self.year), "w") as file:
yaml.dump(data, file, default_flow_style=False)
def set_stats_from_cache(self) -> None:
player_file = "./players/K/{}_{}/{}.yaml".format(self.name.split()[0], self.name.split()[1], self.year)
with open(player_file, "r") as file:
try:
yaml_data = yaml.safe_load(file)
self.number = yaml_data["number"]
self.team = yaml_data["team"]
self.games_played = yaml_data["games_played"]
self.games_started = yaml_data["games_started"]
self.field_goal_attempts_20_to_29 = yaml_data["field_goal_attempts_20_to_29"]
self.field_goals_made_20_to_29 = yaml_data["field_goals_made_20_to_29"]
self.field_goal_attempts_30_to_39 = yaml_data["field_goal_attempts_30_to_39"]
self.field_goals_made_30_to_39 = yaml_data["field_goals_made_30_to_39"]
self.field_goal_attempts_40_to_49 = yaml_data["field_goal_attempts_40_to_49"]
self.field_goals_made_40_to_49 = yaml_data["field_goals_made_40_to_49"]
self.field_goal_attempts_50_plus = yaml_data["field_goal_attempts_50_plus"]
self.field_goals_made_50_plus = yaml_data["field_goals_made_50_plus"]
self.longest_field_goal_made = yaml_data["longest_field_goal_made"]
self.total_field_goals_attempted = yaml_data["total_field_goals_attempted"]
self.total_field_goals_made = yaml_data["total_field_goals_made"]
self.extra_points_attempted = yaml_data["extra_points_attempted"]
self.extra_points_made = yaml_data["extra_points_made"]
self.approximate_value = yaml_data["approximate_value"]
except yaml.YAMLError as e:
print(e)
def is_player_stats_cached(self) -> bool:
player_file = Path("./players/K/{}_{}/{}.yaml".format(self.name.split()[0], self.name.split()[1], self.year))
return player_file.is_file()
def print_stats(self) -> None:
print("Year: {}".format(self.year))
print("Name: {}".format(self.name))
print("Number: {}".format(self.number))
print("Team: {}".format(self.team))
print("Position: {}".format(self.position))
print("Games played: {}".format(self.games_played))
print("Games started: {}".format(self.games_started))
print("Field goal attempts (20-29): {}".format(self.field_goal_attempts_20_to_29))
print("Field goals made (20-29): {}".format(self.field_goals_made_20_to_29))
print("Field goal attempts (30-39): {}".format(self.field_goal_attempts_30_to_39))
print("Field goals made (30-39): {}".format(self.fields_goals_made_30_to_39))
print("Field goal attempts (40-49): {}".format(self.field_goal_attempts_40_to_49))
print("Field goals made (40-49): {}".format(self.field_goals_made_40_to_49))
print("Field goal attempts (50+): {}".format(self.field_goal_attempts_50_plus))
print("Field goals made (50+): {}".format(self.field_goals_made_50_plus))
print("Longest field goal: {} yd".format(self.longest_field_goal_made))
print("Total field goals attempted: {}".format(self.total_field_goals_attempted))
print("Total field goals mdae: {}".format(self.total_field_goals_made))
print("Extra points attempted: {}".format(self.extra_points_attempted))
print("Extra points made: {}".format(self.extra_points_made))
print("Approximate value: {}".format(self.approximate_value))
class TE:
def __init__(self, name: str):
self.name: str = name
self.number: int = 0
self.team: str = ""
self.position: str = "TE"
self.games_played: int = 0
self.games_started: int = 0
self.targets: int = 0
self.receptions: int = 0
self.receiving_yards: int = 0
self.receiving_touchdowns: int = 0
self.longest_reception: int = 0
self.touches: int = 0
self.all_purpose_yards: int = 0
self.fumbles: int = 0
self.approximate_value: int = 0
def set_stats(self, year: str) -> None:
first_letter_lastname = self.name.split()[1][0].upper()  # first letter of the last name, upper-cased
first_four_lastname = self.name.split()[1][0:4]
first_two_firstname = self.name.split()[0][0:2]
self.year = year
if self.is_player_stats_cached():
print(">> Player stats cached for year, setting from cache")
self.set_stats_from_cache()
return
print(">> Retrieving data from pro-football-reference.com")
request_url = "https://www.pro-football-reference.com/players/{}/{}{}00.htm".format(first_letter_lastname,
first_four_lastname,
first_two_firstname)
try:
response = requests.get(request_url)
soup = BeautifulSoup(response.text, 'html.parser')
stats_for_year = soup.find("tr", {"id": "receiving_and_rushing.{}".format(year)})
self.number = int(stats_for_year.find("td", {"data-stat": "uniform_number"}).text)
self.team = stats_for_year.find("td", {"data-stat": "team"}).find('a').text
self.games_played = int(stats_for_year.find("td", {"data-stat": "g"}).text)
self.games_started = int(stats_for_year.find("td", {"data-stat": "gs"}).text)
self.targets = int(stats_for_year.find("td", {"data-stat": "targets"}).text)
self.receptions = int(stats_for_year.find("td", {"data-stat": "rec"}).text)
self.receiving_yards = int(stats_for_year.find("td", {"data-stat": "rec_yds"}).text)
self.receiving_touchdowns = int(stats_for_year.find("td", {"data-stat": "rec_td"}).text)
self.longest_reception = int(stats_for_year.find("td", {"data-stat": "rec_long"}).text)
self.touches = int(stats_for_year.find("td", {"data-stat": "touches"}).text)
self.all_purpose_yards = int(stats_for_year.find("td", {"data-stat": "all_purpose_yds"}).text)
self.fumbles = int(stats_for_year.find("td", {"data-stat": "fumbles"}).text)
self.approximate_value = int(stats_for_year.find("td", {"data-stat": "av"}).text)
self.save_stats()
except Exception as e:
print(e)
def save_stats(self) -> None:
directory = "./players/TE/{}_{}/".format(self.name.split()[0], self.name.split()[1])
if not os.path.exists(directory):
os.makedirs(directory)
data = {
"name": self.name,
"number": self.number,
"team": self.team,
"games_played": self.games_played,
"games_started": self.games_started,
"targets": self.targets,
"receptions": self.receptions,
"receiving_yards": self.receiving_yards,
"receiving_touchdowns": self.receiving_touchdowns,
"longest_reception": self.longest_reception,
"touches": self.touches,
"all_purpose_yards": self.all_purpose_yards,
"fumbles": self.fumbles,
"approximate_value": self.approximate_value
}
with open("{}/{}.yaml".format(directory, self.year), "w") as file:
yaml.dump(data, file, default_flow_style=False)
def set_stats_from_cache(self) -> None:
player_file = "./players/TE/{}_{}/{}.yaml".format(self.name.split()[0], self.name.split()[1], self.year)
with open(player_file, "r") as file:
try:
yaml_data = yaml.safe_load(file)
self.number = yaml_data["number"]
self.team = yaml_data["team"]
self.games_played = yaml_data["games_played"]
self.games_started = yaml_data["games_started"]
self.targets = yaml_data["targets"]
self.receptions = yaml_data["receptions"]
self.receiving_yards = yaml_data["receiving_yards"]
self.receiving_touchdowns = yaml_data["receiving_touchdowns"]
self.longest_reception = yaml_data["longest_reception"]
self.touches = yaml_data["touches"]
self.all_purpose_yards = yaml_data["all_purpose_yards"]
self.fumbles = yaml_data["fumbles"]
self.approximate_value = yaml_data["approximate_value"]
except yaml.YAMLError as e:
print(e)
def is_player_stats_cached(self) -> bool:
player_file = Path("./players/TE/{}_{}/{}.yaml".format(self.name.split()[0], self.name.split()[1], self.year))
return player_file.is_file()
def print_stats(self) -> None:
print("Year: {}".format(self.year))
print("Name: {}".format(self.name))
print("Number: {}".format(self.number))
print("Team: {}".format(self.team))
print("Position: {}".format(self.position))
print("Games played: {}".format(self.games_played))
print("Games started: {}".format(self.games_started))
print("Targets: {}".format(self.targets))
print("Receptions: {}".format(self.receptions))
print("Receiving yards: {}".format(self.receiving_yards))
print("Receiving touchdowns: {}".format(self.receiving_touchdowns))
print("Longest reception: {}".format(self.longest_reception))
print("Touches: {}".format(self.touches))
print("All purpose yards: {}".format(self.all_purpose_yards))
print("Fumbles: {}".format(self.fumbles))
print("Approximate Value: {}".format(self.approximate_value)) | 53.480192 | 124 | 0.612 | 5,496 | 44,549 | 4.658297 | 0.040939 | 0.028709 | 0.046871 | 0.05937 | 0.853371 | 0.801305 | 0.758144 | 0.722444 | 0.691508 | 0.673502 | 0 | 0.012651 | 0.256571 | 44,549 | 833 | 125 | 53.480192 | 0.760379 | 0.023659 | 0 | 0.659634 | 0 | 0 | 0.181178 | 0.044124 | 0 | 0 | 0 | 0.0012 | 0 | 1 | 0.042194 | false | 0.084388 | 0.007032 | 0 | 0.070323 | 0.184248 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
a1b0e3d07cb899e80d24e6d4a712708169cf768f | 1,954 | py | Python | tests/helper/test_list_helper.py | lucasjoao/tcc | 508326a57af1283d2f00aacd2cfefc48afdec4eb | [
"MIT"
] | 1 | 2020-10-13T23:12:25.000Z | 2020-10-13T23:12:25.000Z | tests/helper/test_list_helper.py | lucasjoao/tcc | 508326a57af1283d2f00aacd2cfefc48afdec4eb | [
"MIT"
] | null | null | null | tests/helper/test_list_helper.py | lucasjoao/tcc | 508326a57af1283d2f00aacd2cfefc48afdec4eb | [
"MIT"
] | null | null | null | import unittest
from src.helper import list_helper as lh
class TestsListHelper(unittest.TestCase):
def test_remove_duplicates_lists_ok(self):
list_of_lists = [['Abacate', 'Abacate', 'Granola'], ['Abacate', 'Granola'], ['Abacate', 'Granola']]
result = lh.list_helper.remove_duplicates_list_of_lists(list_of_lists)
self.assertEqual(len(result), 2)
self.assertIn(('Abacate', 'Abacate', 'Granola'), result)
self.assertIn(('Abacate', 'Granola'), result)
def test_remove_duplicates_lists_not_remove(self):
list_of_lists = [['Abacate', 'Abacate', 'Granola'], ['Abacate', 'Granola']]
result = lh.list_helper.remove_duplicates_list_of_lists(list_of_lists)
self.assertEqual(len(result), len(list_of_lists))
self.assertIn(('Abacate', 'Abacate', 'Granola'), result)
self.assertIn(('Abacate', 'Granola'), result)
def test_remove_duplicates_list_empty(self):
result = lh.list_helper.remove_duplicates_list_of_lists([])
self.assertEqual(len(result), 0)
def test_remove_duplicates_dict_ok(self):
list_of_dicts = [{'key0': 1}, {'key1': 2, 'key2': 3}, {'key0': 1}]
result = lh.list_helper.remove_duplicates_list_of_dicts(list_of_dicts)
self.assertEqual(len(result), 2)
self.assertIn({'key0': 1}, result)
self.assertIn({'key1': 2, 'key2': 3}, result)
def test_remove_duplicates_dict_not_remove(self):
list_of_dicts = [{'key0': 1}, {'key1': 2, 'key2': 3}]
result = lh.list_helper.remove_duplicates_list_of_dicts(list_of_dicts)
self.assertEqual(len(result), len(list_of_dicts))
self.assertIn({'key0': 1}, result)
self.assertIn({'key1': 2, 'key2': 3}, result)
def test_remove_duplicates_dict_empty(self):
result = lh.list_helper.remove_duplicates_list_of_dicts([])
self.assertEqual(len(result), 0)
if __name__ == '__main__':
unittest.main()
| 38.313725 | 107 | 0.668884 | 249 | 1,954 | 4.907631 | 0.156627 | 0.07856 | 0.072013 | 0.11293 | 0.895254 | 0.829787 | 0.829787 | 0.774141 | 0.774141 | 0.718494 | 0 | 0.018773 | 0.18219 | 1,954 | 50 | 108 | 39.08 | 0.745932 | 0 | 0 | 0.457143 | 0 | 0 | 0.109519 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.171429 | false | 0 | 0.057143 | 0 | 0.257143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a1dbf2a8a9f97a895136ca1a707a203fe868467e | 5,913 | py | Python | three_class_problem/three_class_svm_rf.py | J-E-11/MLBio | d2a85ab109dee7ebc8df749f2dd03cf7d976c5c1 | [
"MIT"
] | null | null | null | three_class_problem/three_class_svm_rf.py | J-E-11/MLBio | d2a85ab109dee7ebc8df749f2dd03cf7d976c5c1 | [
"MIT"
] | null | null | null | three_class_problem/three_class_svm_rf.py | J-E-11/MLBio | d2a85ab109dee7ebc8df749f2dd03cf7d976c5c1 | [
"MIT"
] | null | null | null | '''
Author: Jinwan Huang
Three-class classification using SVM and Random Forest.
Prints cross-validated scores and confusion matrices,
both before and after batch-effect removal.
'''
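# Editor's note (assumption, inferred from the code below): merged.csv is expected to hold
# one row per cell, with a 'Cell' identifier column, a 'sign' label column taking the values
# negative/positive/normal, and numeric feature columns; features_from_xgboost.csv lists the
# names of the features selected upstream, one per row.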
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import cross_validate
from sklearn.model_selection import cross_val_predict
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
df = pd.read_csv('merged.csv',index_col=0)
#print(df.head())
label = df['sign']
data = df.drop(['Cell', 'sign'], axis=1)
feature_list = pd.read_csv('features_from_xgboost.csv', index_col=0)
feature_list = feature_list.values.tolist()
feature_list = [k[0] for k in feature_list]
data = data[feature_list]
########
model = svm.SVC(kernel='linear')
score = cross_validate(model, data, label, cv=10, scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'])
#print("score ", score)
results = pd.DataFrame.from_dict(score)
#print(results)
print('SVM Acc: %.3f' % results['test_accuracy'].mean(),
'SVM Precision: %.3f' % results['test_precision_macro'].mean(),
'SVM Recall: %.3f' % results['test_recall_macro'].mean(),
'SVM F1: %.3f' % results['test_f1_macro'].mean())
y_pred = cross_val_predict(model, data, label, cv=10)
cm = confusion_matrix(label, y_pred)
print(cm)
cm = cm/sum(cm)
a = pd.DataFrame(data=cm, index=['negative','positive','normal'],columns=['negative','positive','normal'])
ax = sns.heatmap(a, cmap="YlGnBu", annot=True, square=True)
ax.set_xlabel("Predicted")
ax.set_ylabel("True")
ax.set_title("Confusion Matrix for SVM Without Batch Effect")
b, t = plt.ylim() # discover the values for bottom and top
b += 0.5 # Add 0.5 to the bottom
t -= 0.5 # Subtract 0.5 from the top
plt.ylim(b, t) # update the ylim(bottom, top) values
plt.savefig('SVM_cm.png')
plt.clf()
######
model = RandomForestClassifier(max_depth=30, n_estimators=100, random_state=0)
score = cross_validate(model, data, label, cv=10, scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'])
results = pd.DataFrame.from_dict(score)
print('RF Acc: %.3f' % results['test_accuracy'].mean(),
'RF Precision: %.3f' % results['test_precision_macro'].mean(),
'RF Recall: %.3f' % results['test_recall_macro'].mean(),
'RF F1: %.3f' % results['test_f1_macro'].mean())
y_pred = cross_val_predict(model, data, label, cv=10)
cm = confusion_matrix(label, y_pred)
print(cm)
cm = cm/sum(cm)
a = pd.DataFrame(data=cm, index=['negative','positive','normal'],columns=['negative','positive','normal'])
ax = sns.heatmap(a, cmap="YlGnBu", annot=True, square=True)
ax.set_xlabel("Predicted")
ax.set_ylabel("True")
ax.set_title("Confusion Matrix for Random Forest Without Batch Effect")
b, t = plt.ylim() # discover the values for bottom and top
b += 0.5 # Add 0.5 to the bottom
t -= 0.5 # Subtract 0.5 from the top
plt.ylim(b, t) # update the ylim(bottom, top) values
plt.savefig('rf_cm.png')
plt.clf()
#for data with batch effect removed
df = pd.read_csv('merged_rm_be.csv')
df = df.drop(df.columns[[0]], axis=1)
cleaned_data = df.dropna()
label = cleaned_data['sign']
data = cleaned_data.drop(['sign'], axis=1)
feature_list = pd.read_csv('features_from_xgboost_rm_be.csv', index_col=0)
feature_list = feature_list.values.tolist()
feature_list = [k[0] for k in feature_list]
data = data[feature_list]
########
model = svm.SVC(kernel='linear')
score = cross_validate(model, data, label, cv=10, scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'])
#print("score ", score)
results = pd.DataFrame.from_dict(score)
#print(results)
print('SVM+Batch effect removal Acc: %.3f' % results['test_accuracy'].mean(),
'SVM+Batch effect removal Precision: %.3f' % results['test_precision_macro'].mean(),
'SVM+Batch effect removal Recall: %.3f' % results['test_recall_macro'].mean(),
'SVM+Batch effect removal F1: %.3f' % results['test_f1_macro'].mean())
y_pred = cross_val_predict(model, data, label, cv=10)
cm = confusion_matrix(label, y_pred)
cm = cm/sum(cm)
a = pd.DataFrame(data=cm, index=['negative','positive','normal'],columns=['negative','positive','normal'])
ax = sns.heatmap(a, cmap="YlGnBu", annot=True, square=True)
ax.set_xlabel("Predicted")
ax.set_ylabel("True")
ax.set_title("Confusion Matrix for SVM After Batch Effect")
b, t = plt.ylim() # discover the values for bottom and top
b += 0.5 # Add 0.5 to the bottom
t -= 0.5 # Subtract 0.5 from the top
plt.ylim(b, t) # update the ylim(bottom, top) values
plt.savefig('SVM_cm_rm_be.png')
plt.clf()
######
model = RandomForestClassifier(max_depth=30, n_estimators=100, random_state=0)
score = cross_validate(model, data, label, cv=10, scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'])
results = pd.DataFrame.from_dict(score)
print('RF+Batch effect removal Acc: %.3f' % results['test_accuracy'].mean(),
'RF+Batch effect removal Precision: %.3f' % results['test_precision_macro'].mean(),
'RF+Batch effect removal Recall: %.3f' % results['test_recall_macro'].mean(),
'RF+Batch effect removal F1: %.3f' % results['test_f1_macro'].mean())
y_pred = cross_val_predict(model, data, label, cv=10)
cm = confusion_matrix(label, y_pred)
print(cm)
cm = cm/sum(cm)
a = pd.DataFrame(data=cm, index=['negative','positive','normal'],columns=['negative','positive','normal'])
ax = sns.heatmap(a, cmap="YlGnBu", annot=True, square=True)
ax.set_xlabel("Predicted")
ax.set_ylabel("True")
ax.set_title("Confusion Matrix for Random Forest After Batch Effect")
b, t = plt.ylim() # discover the values for bottom and top
b += 0.5 # Add 0.5 to the bottom
t -= 0.5 # Subtract 0.5 from the top
plt.ylim(b, t) # update the ylim(bottom, top) values
plt.savefig('rf_cm_rm_be.png')
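# --- Editor's sketch (not part of the original script) ---------------------------------
# The score/confusion-matrix block above is repeated four times; a helper like the one
# below could express the same evaluation procedure once. The function name and its
# parameters are illustrative assumptions, not existing project API.
def evaluate_and_plot(model, data, label, title, outfile):
    """Run 10-fold CV, print macro-averaged metrics and save a normalised confusion matrix."""
    scoring = ['accuracy', 'precision_macro', 'recall_macro', 'f1_macro']
    res = pd.DataFrame.from_dict(cross_validate(model, data, label, cv=10, scoring=scoring))
    print(title,
          'Acc: %.3f' % res['test_accuracy'].mean(),
          'Precision: %.3f' % res['test_precision_macro'].mean(),
          'Recall: %.3f' % res['test_recall_macro'].mean(),
          'F1: %.3f' % res['test_f1_macro'].mean())
    y_pred = cross_val_predict(model, data, label, cv=10)
    cm = confusion_matrix(label, y_pred)
    cm = cm / cm.sum(axis=0)  # same column-wise normalisation as above
    classes = ['negative', 'positive', 'normal']
    ax = sns.heatmap(pd.DataFrame(cm, index=classes, columns=classes),
                     cmap="YlGnBu", annot=True, square=True)
    ax.set_xlabel("Predicted")
    ax.set_ylabel("True")
    ax.set_title(title)
    b, t = plt.ylim()  # the same ylim workaround used above may be needed here
    plt.ylim(b + 0.5, t - 0.5)
    plt.savefig(outfile)
    plt.clf()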
| 33.219101 | 118 | 0.713174 | 926 | 5,913 | 4.411447 | 0.154428 | 0.035251 | 0.050918 | 0.031334 | 0.851163 | 0.84284 | 0.825214 | 0.809058 | 0.767931 | 0.74541 | 0 | 0.018708 | 0.123119 | 5,913 | 177 | 119 | 33.40678 | 0.769142 | 0.131913 | 0 | 0.60177 | 0 | 0 | 0.283738 | 0.011065 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.097345 | 0 | 0.097345 | 0.061947 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b81caec1686bb9328ba6f5b4a421a9381a516193 | 47 | py | Python | ml_models/__init__.py | sampathweb/imagnet-predictor-api | 76a939809621d2e00b957bfeb44ea70d51bcbfc5 | [
"MIT"
] | null | null | null | ml_models/__init__.py | sampathweb/imagnet-predictor-api | 76a939809621d2e00b957bfeb44ea70d51bcbfc5 | [
"MIT"
] | null | null | null | ml_models/__init__.py | sampathweb/imagnet-predictor-api | 76a939809621d2e00b957bfeb44ea70d51bcbfc5 | [
"MIT"
] | null | null | null | from .classifiers import ImagenetModel # noqa
| 23.5 | 46 | 0.808511 | 5 | 47 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148936 | 47 | 1 | 47 | 47 | 0.95 | 0.085106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
62b94cf463ebc1b9695c3b78ade8e6b3646c7318 | 67 | py | Python | aisikl/components/separator.py | itmat0/votr | 626b3b1bf1f21d2891b0cc71bbbb1c8f52cd4909 | [
"Apache-2.0"
] | 8 | 2015-01-13T23:15:18.000Z | 2021-08-19T12:35:00.000Z | aisikl/components/separator.py | itmat0/votr | 626b3b1bf1f21d2891b0cc71bbbb1c8f52cd4909 | [
"Apache-2.0"
] | 99 | 2015-01-14T22:09:50.000Z | 2021-05-20T19:07:07.000Z | aisikl/components/separator.py | itmat0/votr | 626b3b1bf1f21d2891b0cc71bbbb1c8f52cd4909 | [
"Apache-2.0"
] | 12 | 2016-02-10T21:35:42.000Z | 2021-08-19T12:35:50.000Z |
from .control import Control
class Separator(Control):
pass
| 9.571429 | 28 | 0.731343 | 8 | 67 | 6.125 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208955 | 67 | 6 | 29 | 11.166667 | 0.924528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
62c2b62124a1fe9c9d4ea0471a986609a1616f9e | 91 | py | Python | src/aceinna/devices/upgrade_workers/__init__.py | LukaszChl/ros_openimu | 1bcf547fa42ee7c7dcc856c1d4eb5702d301b059 | [
"Apache-2.0"
] | 6 | 2021-03-18T16:18:53.000Z | 2022-01-18T15:32:15.000Z | src/aceinna/devices/upgrade_workers/__init__.py | LukaszChl/ros_openimu | 1bcf547fa42ee7c7dcc856c1d4eb5702d301b059 | [
"Apache-2.0"
] | 11 | 2020-12-22T16:19:20.000Z | 2022-02-11T11:03:25.000Z | src/aceinna/devices/upgrade_workers/__init__.py | LukaszChl/ros_openimu | 1bcf547fa42ee7c7dcc856c1d4eb5702d301b059 | [
"Apache-2.0"
] | 11 | 2021-04-12T03:00:28.000Z | 2022-03-25T19:53:43.000Z | from .firmware_worker import FirmwareUpgradeWorker
from .sdk_worker import SDKUpgradeWorker | 45.5 | 50 | 0.901099 | 10 | 91 | 8 | 0.7 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 91 | 2 | 51 | 45.5 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1a03f4c3a7055f77a88d28f90f2e835dab25fc9e | 19,236 | py | Python | purequant/trade.py | Silverbulelt/PureQuant | 98ddac1c0d8a40a46fad893567fe5a574b09aa53 | [
"MIT"
] | 3 | 2020-08-28T10:46:38.000Z | 2021-12-14T02:44:27.000Z | purequant/trade.py | TradV/PureQuant | 98ddac1c0d8a40a46fad893567fe5a574b09aa53 | [
"MIT"
] | null | null | null | purequant/trade.py | TradV/PureQuant | 98ddac1c0d8a40a46fad893567fe5a574b09aa53 | [
"MIT"
] | 2 | 2020-08-15T22:59:41.000Z | 2021-06-07T14:00:42.000Z | # -*- coding:utf-8 -*-
"""
Trading module
Author: eternal ranger
Date: 2020/07/09
email: interstella.ranger2020@gmail.com
"""
from purequant.exchange.okex import spot_api as okexspot
from purequant.exchange.okex import futures_api as okexfutures
from purequant.exchange.okex import swap_api as okexswap
from purequant.exchange.huobi import huobi_futures as huobifutures
from purequant.utils.time_tools import ts_to_datetime_str, utctime_str_to_ts
class OkexFutures:
"""okex交割合约操作 https://www.okex.com/docs/zh/#futures-README"""
def __init__(self, access_key, secret_key, passphrase, instrument_id):
self.access_key = access_key
self.secret_key = secret_key
self.passphrase = passphrase
self.instrument_id = instrument_id
self.okex_futures = okexfutures.FutureAPI(self.access_key, self.secret_key, self.passphrase, self.instrument_id)
def buy(self, price, size, order_type):
receipt = self.okex_futures.buy(self.instrument_id, price, size, order_type)
return receipt
def sell(self, price, size, order_type):
receipt = self.okex_futures.sell(self.instrument_id, price, size, order_type)
return receipt
def sellshort(self, price, size, order_type):
receipt = self.okex_futures.sellshort(self.instrument_id, price, size, order_type)
return receipt
def buytocover(self, price, size, order_type):
receipt = self.okex_futures.buytocover(self.instrument_id, price, size, order_type)
return receipt
def BUY(self, cover_short_price, cover_short_size, open_long_price, open_long_size, order_type):
receipt = self.okex_futures.BUY(self.instrument_id, cover_short_price, cover_short_size, open_long_price, open_long_size, order_type)
return receipt
def SELL(self, cover_long_price, cover_long_size, open_short_price, open_short_size, order_type):
receipt = self.okex_futures.SELL(self.instrument_id, cover_long_price, cover_long_size, open_short_price, open_short_size, order_type)
return receipt
def get_order_list(self, state, limit):
receipt = self.okex_futures.get_order_list(self.instrument_id, state=state, limit=limit)
return receipt
def revoke_order(self, order_id):
receipt = self.okex_futures.revoke_order(self.instrument_id, order_id)
if receipt['error_code'] == "0":
return '撤单成功'
else:
return '撤单失败' + receipt['error_message']
def get_order_info(self, order_id):
receipt = self.okex_futures.get_order_info(self.instrument_id, order_id)
return receipt
def get_kline(self, time_frame):
if time_frame == "1m" or time_frame == "1M":
granularity = '60'
elif time_frame == '3m' or time_frame == "3M":
granularity = '180'
elif time_frame == '5m' or time_frame == "5M":
granularity = '300'
elif time_frame == '15m' or time_frame == "15M":
granularity = '900'
elif time_frame == '30m' or time_frame == "30M":
granularity = '1800'
elif time_frame == '1h' or time_frame == "1H":
granularity = '3600'
elif time_frame == '2h' or time_frame == "2H":
granularity = '7200'
elif time_frame == '4h' or time_frame == "4H":
granularity = '14400'
elif time_frame == '6h' or time_frame == "6H":
granularity = '21600'
elif time_frame == '12h' or time_frame == "12H":
granularity = '43200'
elif time_frame == '1d' or time_frame == "1D":
granularity = '86400'
else:
return 'k线周期错误,只支持【1min 3min 5min 15min 30min 1hour 2hour 4hour 6hour 12hour 1day】'
receipt = self.okex_futures.get_kline(self.instrument_id, granularity=granularity)
return receipt
def get_position(self):
receipt = self.okex_futures.get_specific_position(instrument_id=self.instrument_id)
return receipt
def get_ticker(self):
receipt = self.okex_futures.get_specific_ticker(instrument_id=self.instrument_id)
return receipt
def get_contract_value(self):
receipt = self.okex_futures.get_products()
result = {}
for item in receipt:
result[item['instrument_id']] = item['contract_val']
return result
class OkexSpot:
"""okex现货操作 https://www.okex.com/docs/zh/#spot-README"""
def __init__(self, access_key, secret_key, passphrase, instrument_id):
self.access_key = access_key
self.secret_key = secret_key
self.passphrase = passphrase
self.instrument_id = instrument_id
self.okex_spot = okexspot.SpotAPI(self.access_key, self.secret_key, self.passphrase)
def buy(self, type, price, size, order_type, notional=""):
receipt = self.okex_spot.take_order(instrument_id=self.instrument_id, side="buy", type=type, size=size, price=price, order_type=order_type, notional=notional)
order_id = int(receipt['order_id'])
if receipt['result'] == False:
return '交易提醒:' + self.instrument_id + '买入开多失败' + receipt['error_message']
else:
order_info = self.okex_spot.get_order_info(self.instrument_id, order_id=order_id)
if order_info['state'] == '2':
return "交易提醒:【{}】订单【买入开多】完全成交!成交均价【{}】 数量【{}】 成交金额【{}】".format(self.instrument_id, order_info['price_avg'], order_info['filled_size'], round(float(order_info['filled_size']) * float(order_info['price_avg']), 2))
else:
return "交易提醒:【{}】订单【买入开多】失败!".format(self.instrument_id)
def sell(self, type, price, size, order_type):
receipt = self.okex_spot.take_order(instrument_id=self.instrument_id, side="sell", type=type, size=size, price=price, order_type=order_type)
order_id = int(receipt['order_id'])
if receipt['result'] == False:
return '交易提醒:' + self.instrument_id + '卖出平多失败' + receipt['error_message']
else:
order_info = self.okex_spot.get_order_info(self.instrument_id, order_id=order_id)
if order_info['state'] == '2':
return "交易提醒:【{}】订单【卖出平多】完全成交!成交均价【{}】 数量【{}】 成交金额【{}】".format(self.instrument_id, order_info['price_avg'], order_info['filled_size'], round(float(order_info['filled_size']) * float(order_info['price_avg']), 2))
else:
return "交易提醒:【{}】订单【卖出平多】失败!".format(self.instrument_id)
def get_order_list(self, state, limit):
receipt = self.okex_spot.get_orders_list(self.instrument_id, state=state, limit=limit)
return receipt
def revoke_order(self, order_id):
receipt = self.okex_spot.revoke_order(self.instrument_id, order_id)
if receipt['error_code'] == "0":
return '撤单成功'
else:
return '撤单失败' + receipt['error_message']
def get_order_info(self, order_id):
receipt = self.okex_spot.get_order_info(self.instrument_id, order_id)
return receipt
def get_kline(self, time_frame):
if time_frame == "1m" or time_frame == "1M":
granularity = '60'
elif time_frame == '3m' or time_frame == "3M":
granularity = '180'
elif time_frame == '5m' or time_frame == "5M":
granularity = '300'
elif time_frame == '15m' or time_frame == "15M":
granularity = '900'
elif time_frame == '30m' or time_frame == "30M":
granularity = '1800'
elif time_frame == '1h' or time_frame == "1H":
granularity = '3600'
elif time_frame == '2h' or time_frame == "2H":
granularity = '7200'
elif time_frame == '4h' or time_frame == "4H":
granularity = '14400'
elif time_frame == '6h' or time_frame == "6H":
granularity = '21600'
elif time_frame == '12h' or time_frame == "12H":
granularity = '43200'
elif time_frame == '1d' or time_frame == "1D":
granularity = '86400'
else:
return 'k线周期错误,只支持【1min 3min 5min 15min 30min 1hour 2hour 4hour 6hour 12hour 1day】'
receipt = self.okex_spot.get_kline(self.instrument_id, granularity=granularity)
return receipt
def get_position(self):
receipt = self.okex_spot.get_position(self.instrument_id)
return receipt
class OkexSwap:
"""okex永续合约操作 https://www.okex.com/docs/zh/#swap-README"""
def __init__(self, access_key, secret_key, passphrase, instrument_id):
self.access_key = access_key
self.secret_key = secret_key
self.passphrase = passphrase
self.instrument_id = instrument_id
self.okex_swap = okexswap.SwapAPI(self.access_key, self.secret_key, self.passphrase)
def buy(self, price, size, order_type):
receipt = self.okex_swap.take_order(instrument_id=self.instrument_id, price=price, size=size, order_type=order_type, type=1)
if receipt['error_code'] != "0":
return '交易提醒:' + self.instrument_id + '买入开多失败' + receipt['error_message']
else:
order_id = receipt['order_id']
order_info = self.okex_swap.get_order_info(self.instrument_id, order_id=order_id)
if order_info['state'] == '2':
return "交易提醒:【{}】订单【买入开多】完全成交!成交均价【{}】 数量【{}】 成交金额【{}】".format(self.instrument_id, order_info['price_avg'],
order_info['filled_qty'], round(
int(order_info['contract_val']) * int(order_info['filled_qty']) * float(order_info['price_avg']), 2))
else:
return "交易提醒:【{}】订单【买入开多 】失败!".format(self.instrument_id)
def sell(self, price, size, order_type):
receipt = self.okex_swap.take_order(instrument_id=self.instrument_id, price=price, size=size, order_type=order_type,
type=3)
if receipt['error_code'] != "0":
return '交易提醒:' + self.instrument_id + '卖出平多失败' + receipt['error_message']
else:
order_id = receipt['order_id']
order_info = self.okex_swap.get_order_info(self.instrument_id, order_id=order_id)
if order_info['state'] == '2':
return "交易提醒:【{}】订单【卖出平多】完全成交!成交均价【{}】 数量【{}】 成交金额【{}】".format(self.instrument_id, order_info['price_avg'],
order_info['filled_qty'], round(
int(order_info['contract_val']) * int(order_info['filled_qty']) * float(
order_info['price_avg']), 2))
else:
return "交易提醒:【{}】订单【卖出平多】失败!".format(self.instrument_id)
def sellshort(self, price, size, order_type):
receipt = self.okex_swap.take_order(instrument_id=self.instrument_id, price=price, size=size, order_type=order_type,
type=2)
if receipt['error_code'] != "0":
return '交易提醒:' + self.instrument_id + '卖出开空失败' + receipt['error_message']
else:
order_id = receipt['order_id']
order_info = self.okex_swap.get_order_info(self.instrument_id, order_id=order_id)
if order_info['state'] == '2':
return "交易提醒:【{}】订单【卖出开空】完全成交!成交均价【{}】 数量【{}】 成交金额【{}】".format(self.instrument_id, order_info['price_avg'],
order_info['filled_qty'], round(
int(order_info['contract_val']) * int(order_info['filled_qty']) * float(
order_info['price_avg']), 2))
else:
return "交易提醒:【{}】订单【卖出开空】失败!".format(self.instrument_id)
def buytocover(self, price, size, order_type):
receipt = self.okex_swap.take_order(instrument_id=self.instrument_id, price=price, size=size, order_type=order_type,
type=4)
if receipt['error_code'] != "0":
return '交易提醒:' + self.instrument_id + '买入平空失败' + receipt['error_message']
else:
order_id = receipt['order_id']
order_info = self.okex_swap.get_order_info(self.instrument_id, order_id=order_id)
if order_info['state'] == '2':
return "交易提醒:【{}】订单【买入平空】完全成交!成交均价【{}】 数量【{}】 成交金额【{}】".format(self.instrument_id, order_info['price_avg'],
order_info['filled_qty'], round(
int(order_info['contract_val']) * int(order_info['filled_qty']) * float(
order_info['price_avg']), 2))
else:
return "交易提醒:【{}】订单【买入平空】失败!".format(self.instrument_id)
def BUY(self, cover_short_price, cover_short_size, open_long_price, open_long_size, order_type):
receipt1 = self.buytocover(cover_short_price, cover_short_size, order_type)
receipt2 = self.buy(open_long_price, open_long_size, order_type)
return receipt1 + receipt2
def SELL(self, cover_long_price, cover_long_size, open_short_price, open_short_size, order_type):
receipt1 = self.sell(cover_long_price, cover_long_size, order_type)
receipt2 = self.sellshort(open_short_price, open_short_size, order_type)
return receipt1 + receipt2
def get_order_list(self, state, limit):
receipt = self.okex_swap.get_order_list(self.instrument_id, state=state, limit=limit)
return receipt
def revoke_order(self, order_id):
receipt = self.okex_swap.revoke_order(self.instrument_id, order_id)
if receipt['error_code'] == "0":
return '撤单成功'
else:
return '撤单失败' + receipt['error_message']
def get_order_info(self, order_id):
receipt = self.okex_swap.get_order_info(self.instrument_id, order_id)
return receipt
def get_kline(self, time_frame):
if time_frame == "1m" or time_frame == "1M":
granularity = '60'
elif time_frame == '3m' or time_frame == "3M":
granularity = '180'
elif time_frame == '5m' or time_frame == "5M":
granularity = '300'
elif time_frame == '15m' or time_frame == "15M":
granularity = '900'
elif time_frame == '30m' or time_frame == "30M":
granularity = '1800'
elif time_frame == '1h' or time_frame == "1H":
granularity = '3600'
elif time_frame == '2h' or time_frame == "2H":
granularity = '7200'
elif time_frame == '4h' or time_frame == "4H":
granularity = '14400'
elif time_frame == '6h' or time_frame == "6H":
granularity = '21600'
elif time_frame == '12h' or time_frame == "12H":
granularity = '43200'
elif time_frame == '1d' or time_frame == "1D":
granularity = '86400'
else:
return 'k线周期错误,只支持【1min 3min 5min 15min 30min 1hour 2hour 4hour 6hour 12hour 1day】'
receipt = self.okex_swap.get_kline(self.instrument_id, granularity=granularity)
return receipt
def get_position(self):
receipt = self.okex_swap.get_specific_position(self.instrument_id)
direction = receipt['holding'][0]['side']
amount = int(receipt['holding'][0]['position'])
price = float(receipt['holding'][0]['avg_cost'])
if amount == 0:
direction = None
result = {'direction': direction, 'amount': amount, 'price': price}
return result
def get_contract_value(self):
receipt = self.okex_swap.get_instruments()
result = {}
for item in receipt:
result[item['instrument_id']] = item['contract_val']
return result
class HuobiFutures:
"""火币合约 https://huobiapi.github.io/docs/dm/v1/cn/#5ea2e0cde2"""
def __init__(self, access_key, secret_key, instrument_id):
self.access_key = access_key
self.secret_key = secret_key
self.instrument_id = instrument_id
self.huobi_futures = huobifutures.HuobiFutures(self.access_key, self.secret_key)
def buy(self, price, size, order_type):
"""
Open a long position (buy) on a Huobi delivery futures contract.
Only the current-quarter and next-quarter contracts are supported, at a fixed 20x leverage.
The contract is taken from self.instrument_id, e.g. 'BTC-201225'.
:param price: order price
:param size: order quantity
:param order_type: 0: limit order
1: post-only (maker only)
2: fill or kill (FOK)
3: immediate or cancel (IOC)
4: opponent (best counterparty) price
:return:
"""
symbol = self.instrument_id[0:3]
if self.instrument_id[6:8] == '03' or self.instrument_id[6:8] == '09':
contract_type = "quarter"
elif self.instrument_id[6:8] == '06' or self.instrument_id[6:8] == '12':
contract_type = "next_quarter"
else:
return "交易提醒:合约ID错误,只可输入当季或者次季合约ID,请重新输入!"
contract_code = self.instrument_id[0:3] + self.instrument_id[4:10]
order_price_type = 0
if order_type == 0:
order_price_type = 'limit'
elif order_type == 1:
order_price_type = "post_only"
elif order_type == 2:
order_price_type = "fok"
elif order_type == 3:
order_price_type = "ioc"
elif order_type == 4:
order_price_type = "opponent"
return self.huobi_futures.send_contract_order(symbol=symbol, contract_type=contract_type, contract_code=contract_code,
client_order_id='', price=price, volume=size, direction='buy',
offset='open', lever_rate=20, order_price_type=order_price_type)
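# Editor's illustration (not part of the original class): a hypothetical call of the
# method above; the credentials and contract ID are placeholders, not real values.
#   k = HuobiFutures("ACCESS_KEY", "SECRET_KEY", "BTC-201225")
#   k.buy(price=18000, size=1, order_type=0)  # 0 -> limit order, per the mapping above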
def get_kline(self, time_frame):
if self.instrument_id[6:8] == '03' or self.instrument_id[6:8] == '09':
symbol = "BTC_CQ"
elif self.instrument_id[6:8] == '06' or self.instrument_id[6:8] == '12':
symbol = "BTC_NQ"
else:
return "交易提醒:合约ID错误,只可输入当季或者次季合约ID,请重新输入!"
if time_frame == '1m' or time_frame == '1M':
period = '1min'
elif time_frame == '5m' or time_frame == '5M':
period = '5min'
elif time_frame == '15m' or time_frame == '15M':
period = '15min'
elif time_frame == '30m' or time_frame == '30M':
period = '30min'
elif time_frame == '1h' or time_frame == '1H':
period = '60min'
elif time_frame == '4h' or time_frame == '4H':
period = '4hour'
elif time_frame == '1d' or time_frame == '1D':
period = '1day'
else:
return "k线周期错误,k线周期只能是【1m, 5m, 15m, 30m, 1h, 4h, 1d】之一"
records = self.huobi_futures.get_contract_kline(symbol=symbol, period=period)['data']
klines = []
for item in records:
klines.append([ts_to_datetime_str(item['id']), item['open'], item['high'], item['low'], item['close'], item['vol'], round(item['amount'], 2)])
klines.reverse()
return klines
| 46.351807 | 227 | 0.602984 | 2,385 | 19,236 | 4.607128 | 0.101887 | 0.095013 | 0.100473 | 0.034401 | 0.832454 | 0.809428 | 0.774572 | 0.755643 | 0.726975 | 0.708591 | 0 | 0.029574 | 0.275785 | 19,236 | 414 | 228 | 46.463768 | 0.75917 | 0.032751 | 0 | 0.67052 | 0 | 0 | 0.10632 | 0.01331 | 0 | 0 | 0 | 0 | 0 | 1 | 0.109827 | false | 0.026012 | 0.014451 | 0 | 0.291908 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a7e96e9aeffc7f557a0fadb74163d89332c13438 | 39 | py | Python | indra/assemblers/figaro/__init__.py | zebulon2/indra | 7727ddcab52ad8012eb6592635bfa114e904bd48 | [
"BSD-2-Clause"
] | 1 | 2020-12-27T14:37:10.000Z | 2020-12-27T14:37:10.000Z | indra/assemblers/figaro/__init__.py | zebulon2/indra | 7727ddcab52ad8012eb6592635bfa114e904bd48 | [
"BSD-2-Clause"
] | null | null | null | indra/assemblers/figaro/__init__.py | zebulon2/indra | 7727ddcab52ad8012eb6592635bfa114e904bd48 | [
"BSD-2-Clause"
] | null | null | null | from .assembler import FigaroAssembler
| 19.5 | 38 | 0.871795 | 4 | 39 | 8.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c503dd5e31fa6dc0fb9d3de90e13d49ba8e2090b | 127 | py | Python | minato_namikaze/botmain/bot/lib/util/__init__.py | Onii-Chan-Discord/yondaime-hokage | d6e75405a9d30b37bfb4fd588f02c0de813c92b1 | [
"Apache-2.0"
] | 1 | 2021-11-04T13:20:36.000Z | 2021-11-04T13:20:36.000Z | minato_namikaze/bot_files/lib/util/__init__.py | ooliver1/yondaime-hokage | 3552887dc022c8ace13de9dae01392b9471e5f58 | [
"Apache-2.0"
] | null | null | null | minato_namikaze/bot_files/lib/util/__init__.py | ooliver1/yondaime-hokage | 3552887dc022c8ace13de9dae01392b9471e5f58 | [
"Apache-2.0"
] | null | null | null | from .post_user_stats import *
from .privacy_vote import *
from .setup_server import *
from .util import *
from .vars import *
| 21.166667 | 30 | 0.76378 | 19 | 127 | 4.894737 | 0.578947 | 0.430108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15748 | 127 | 5 | 31 | 25.4 | 0.869159 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c532135c6b2c0b59a3a755199e429b8aa6c2b7ce | 181 | py | Python | modules/2.79/bpy/ops/dpaint.py | cmbasnett/fake-bpy-module | acb8b0f102751a9563e5b5e5c7cd69a4e8aa2a55 | [
"MIT"
] | null | null | null | modules/2.79/bpy/ops/dpaint.py | cmbasnett/fake-bpy-module | acb8b0f102751a9563e5b5e5c7cd69a4e8aa2a55 | [
"MIT"
] | null | null | null | modules/2.79/bpy/ops/dpaint.py | cmbasnett/fake-bpy-module | acb8b0f102751a9563e5b5e5c7cd69a4e8aa2a55 | [
"MIT"
] | null | null | null | def bake():
pass
def output_toggle(output='A'):
pass
def surface_slot_add():
pass
def surface_slot_remove():
pass
def type_toggle(type='CANVAS'):
pass
| 8.619048 | 31 | 0.635359 | 25 | 181 | 4.36 | 0.48 | 0.256881 | 0.256881 | 0.330275 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.248619 | 181 | 20 | 32 | 9.05 | 0.801471 | 0 | 0 | 0.5 | 0 | 0 | 0.039106 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
c53437cd4b1477936a19cf9a7bc412eccac09f28 | 101,167 | py | Python | test/test_esapi.py | diabonas/tpm2-pytss | 1ae429c2e96d16632414d2b16023c65beae32015 | [
"BSD-2-Clause"
] | null | null | null | test/test_esapi.py | diabonas/tpm2-pytss | 1ae429c2e96d16632414d2b16023c65beae32015 | [
"BSD-2-Clause"
] | null | null | null | test/test_esapi.py | diabonas/tpm2-pytss | 1ae429c2e96d16632414d2b16023c65beae32015 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/python3 -u
"""
SPDX-License-Identifier: BSD-2
"""
import unittest
from tpm2_pytss import *
from .TSS2_BaseTest import TSS2_EsapiTest
class TestEsys(TSS2_EsapiTest):
def testGetRandom(self):
r = self.ectx.GetRandom(5)
self.assertEqual(len(r), 5)
with self.assertRaises(TypeError):
self.ectx.GetRandom("foo")
with self.assertRaises(TypeError):
self.ectx.GetRandom(5, session1="baz")
with self.assertRaises(TypeError):
self.ectx.GetRandom(5, session2=object())
with self.assertRaises(TypeError):
self.ectx.GetRandom(5, session3=56.7)
def testCreatePrimary(self):
inSensitive = TPM2B_SENSITIVE_CREATE()
inPublic = TPM2B_PUBLIC()
outsideInfo = TPM2B_DATA()
creationPCR = TPML_PCR_SELECTION()
inPublic.publicArea.type = TPM2_ALG.ECC
inPublic.publicArea.nameAlg = TPM2_ALG.SHA1
inPublic.publicArea.objectAttributes = (
TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SIGN_ENCRYPT
| TPMA_OBJECT.RESTRICTED
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.SENSITIVEDATAORIGIN
)
inPublic.publicArea.parameters.eccDetail.scheme.scheme = TPM2_ALG.ECDSA
inPublic.publicArea.parameters.eccDetail.scheme.details.ecdsa.hashAlg = (
TPM2_ALG.SHA256
)
inPublic.publicArea.parameters.eccDetail.symmetric.algorithm = TPM2_ALG.NULL
inPublic.publicArea.parameters.eccDetail.kdf.scheme = TPM2_ALG.NULL
inPublic.publicArea.parameters.eccDetail.curveID = TPM2_ECC.NIST_P256
handle, public, creation_data, digest, ticket = self.ectx.CreatePrimary(
inSensitive, inPublic, ESYS_TR.OWNER, outsideInfo, creationPCR,
)
self.assertNotEqual(handle, 0)
self.assertEqual(type(public), TPM2B_PUBLIC)
self.assertEqual(type(creation_data), TPM2B_CREATION_DATA)
self.assertEqual(type(digest), TPM2B_DIGEST)
self.assertEqual(type(ticket), TPMT_TK_CREATION)
self.ectx.FlushContext(handle)
handle, _, _, _, _ = self.ectx.CreatePrimary(inSensitive,)
self.assertNotEqual(handle, 0)
self.ectx.FlushContext(handle)
handle, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
self.assertNotEqual(handle, 0)
self.ectx.FlushContext(handle)
handle, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, "ecc256")
self.assertNotEqual(handle, 0)
self.ectx.FlushContext(handle)
handle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "ecc256", creationPCR="sha256:4,6,7"
)
self.assertNotEqual(handle, 0)
self.ectx.FlushContext(handle)
with self.assertRaises(TypeError):
self.ectx.CreatePrimary(
TPM2B_DATA, inPublic, ESYS_TR.OWNER, outsideInfo, creationPCR,
)
with self.assertRaises(TypeError):
self.ectx.CreatePrimary(
inSensitive,
b"should not work",
ESYS_TR.OWNER,
outsideInfo,
creationPCR,
)
with self.assertRaises(TypeError):
self.ectx.CreatePrimary(
inSensitive, inPublic, object(), outsideInfo, creationPCR,
)
with self.assertRaises(TypeError):
self.ectx.CreatePrimary(
inSensitive, inPublic, ESYS_TR.OWNER, object(), creationPCR,
)
with self.assertRaises(TypeError):
self.ectx.CreatePrimary(
inSensitive, inPublic, ESYS_TR.OWNER, outsideInfo, TPM2B_SENSITIVE(),
)
with self.assertRaises(TypeError):
handle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "ecc256", session1=object()
)
with self.assertRaises(TypeError):
handle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "ecc256", session2=object()
)
with self.assertRaises(TypeError):
handle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "ecc256", session3=object()
)
def testPCRRead(self):
pcrsels = TPML_PCR_SELECTION.parse("sha1:3+sha256:all")
_, _, digests = self.ectx.PCR_Read(pcrsels)
self.assertEqual(len(digests[0]), 20)
for d in digests[1:]:
self.assertEqual(len(d), 32)
def test_plain_NV_define_write_read_undefine(self):
nv_public = TPM2B_NV_PUBLIC(
nvPublic=TPMS_NV_PUBLIC(
nvIndex=TPM2_HC.NV_INDEX_FIRST,
nameAlg=TPM2_ALG.SHA256,
attributes=TPMA_NV.parse("ownerread|ownerwrite|authread|authwrite"),
dataSize=32,
)
)
# No password NV index
nv_index = self.ectx.NV_DefineSpace(None, nv_public)
self.ectx.NV_Write(nv_index, b"hello world")
value = self.ectx.NV_Read(nv_index, 11)
self.assertEqual(bytes(value), b"hello world")
public, name = self.ectx.NV_ReadPublic(nv_index)
self.assertEqual(public.nvPublic.nvIndex, TPM2_HC.NV_INDEX_FIRST)
self.assertEqual(public.nvPublic.nameAlg, TPM2_ALG.SHA256)
self.assertEqual(
public.nvPublic.attributes,
TPMA_NV.parse("ownerread|ownerwrite|authread|authwrite|written"),
)
self.assertEqual(public.nvPublic.authPolicy.size, 0)
self.assertEqual(public.nvPublic.dataSize, 32)
# Algorithm id (UINT16) followed by SHA256 len of name bytes
self.assertEqual(len(name), 2 + 32)
n = str(name)
self.assertEqual(len(n), 68)
self.assertTrue(isinstance(n, str))
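# Editor's illustration (assumption about the byte layout noted above): the algorithm id
# could be recovered with int.from_bytes(bytes(name)[:2], "big") == TPM2_ALG.SHA256.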
self.ectx.NV_UndefineSpace(nv_index)
with self.assertRaises(TSS2_Exception):
public, name = self.ectx.NV_ReadPublic(nv_index)
def test_hierarchychangeauth(self):
self.ectx.HierarchyChangeAuth(ESYS_TR.OWNER, "passwd")
# force esys to forget about the 'good' password
self.ectx.setAuth(ESYS_TR.OWNER, "badpasswd")
with self.assertRaises(TSS2_Exception):
self.ectx.HierarchyChangeAuth(ESYS_TR.OWNER, "anotherpasswd")
def test_fulltest_YES(self):
self.ectx.SelfTest(True)
def test_fulltest_NO(self):
self.ectx.SelfTest(False)
def test_incremental_self_test(self):
algs = TPML_ALG.parse("rsa,ecc,xor,aes,cbc")
self.ectx.IncrementalSelfTest(algs)
self.ectx.IncrementalSelfTest("rsa,ecc,xor,aes,cbc")
with self.assertRaises(TypeError):
self.ectx.IncrementalSelfTest(None)
with self.assertRaises(TypeError):
self.ectx.IncrementalSelfTest(object())
with self.assertRaises(TypeError):
self.ectx.IncrementalSelfTest(session1=45.9)
with self.assertRaises(TypeError):
self.ectx.IncrementalSelfTest(session2=object())
with self.assertRaises(TypeError):
self.ectx.IncrementalSelfTest(session3=TPM2B_PUBLIC())
def test_incremental_resume_test(self):
algs = TPML_ALG.parse("rsa,ecc,xor,aes,cbc")
self.ectx.IncrementalSelfTest(algs)
toDo, rc = self.ectx.GetTestResult()
self.assertEqual(type(toDo), TPM2B_MAX_BUFFER)
self.assertEqual(rc, TPM2_RC.SUCCESS)
with self.assertRaises(TypeError):
self.ectx.GetTestResult(session1=45.7)
with self.assertRaises(TypeError):
self.ectx.GetTestResult(session2=TPM2B_DATA())
with self.assertRaises(TypeError):
self.ectx.GetTestResult(session3=object())
def test_hmac_session(self):
sym = TPMT_SYM_DEF(
algorithm=TPM2_ALG.XOR,
keyBits=TPMU_SYM_KEY_BITS(exclusiveOr=TPM2_ALG.SHA256),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
hmac_session = self.ectx.StartAuthSession(
tpmKey=ESYS_TR.NONE,
bind=ESYS_TR.NONE,
nonceCaller=None,
sessionType=TPM2_SE.HMAC,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.assertTrue(hmac_session)
self.ectx.HierarchyChangeAuth(ESYS_TR.OWNER, "passwd", session1=hmac_session)
# force esys to forget about the 'good' password
self.ectx.setAuth(ESYS_TR.OWNER, "badpasswd")
with self.assertRaises(TSS2_Exception):
self.ectx.HierarchyChangeAuth(
ESYS_TR.OWNER, "anotherpasswd", session1=hmac_session
)
# test some bad params
with self.assertRaises(TypeError):
self.ectx.StartAuthSession(
object, ESYS_TR.NONE, None, TPM2_SE.HMAC, sym, TPM2_ALG.SHA256
)
with self.assertRaises(TypeError):
self.ectx.StartAuthSession(
ESYS_TR.NONE, object(), None, TPM2_SE.HMAC, sym, TPM2_ALG.SHA256
)
with self.assertRaises(TypeError):
self.ectx.StartAuthSession(
ESYS_TR.NONE,
ESYS_TR.NONE,
TPM2B_PUBLIC(),
TPM2_SE.HMAC,
sym,
TPM2_ALG.SHA256,
)
with self.assertRaises(ValueError):
self.ectx.StartAuthSession(
ESYS_TR.NONE, ESYS_TR.NONE, None, 8745635, sym, TPM2_ALG.SHA256
)
with self.assertRaises(TypeError):
self.ectx.StartAuthSession(
ESYS_TR.NONE, ESYS_TR.NONE, None, object(), sym, TPM2_ALG.SHA256
)
with self.assertRaises(TypeError):
self.ectx.StartAuthSession(
ESYS_TR.NONE, ESYS_TR.NONE, None, TPM2_SE.HMAC, 42, TPM2_ALG.SHA256
)
with self.assertRaises(ValueError):
self.ectx.StartAuthSession(
ESYS_TR.NONE, ESYS_TR.NONE, None, TPM2_SE.HMAC, sym, 8395847
)
with self.assertRaises(TypeError):
self.ectx.StartAuthSession(
ESYS_TR.NONE, ESYS_TR.NONE, None, TPM2_SE.HMAC, sym, TPM2B_SYM_KEY()
)
def test_start_authSession_enckey(self):
inSensitive = TPM2B_SENSITIVE_CREATE()
handle, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, "rsa2048:aes128cfb")
sym = TPMT_SYM_DEF(
algorithm=TPM2_ALG.XOR,
keyBits=TPMU_SYM_KEY_BITS(exclusiveOr=TPM2_ALG.SHA256),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
session = self.ectx.StartAuthSession(
tpmKey=handle,
bind=ESYS_TR.NONE,
nonceCaller=None,
sessionType=TPM2_SE.POLICY,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.TRSess_SetAttributes(
session, (TPMA_SESSION.ENCRYPT | TPMA_SESSION.DECRYPT)
)
random = self.ectx.GetRandom(4, session1=session)
self.assertEqual(len(random), 4)
def test_start_authSession_enckey_bindkey(self):
inSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("password"))
)
handle, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, "rsa2048:aes128cfb")
sym = TPMT_SYM_DEF(
algorithm=TPM2_ALG.XOR,
keyBits=TPMU_SYM_KEY_BITS(exclusiveOr=TPM2_ALG.SHA256),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
session = self.ectx.StartAuthSession(
tpmKey=handle,
bind=handle,
nonceCaller=None,
sessionType=TPM2_SE.POLICY,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.TRSess_SetAttributes(
session, (TPMA_SESSION.ENCRYPT | TPMA_SESSION.DECRYPT)
)
random = self.ectx.GetRandom(4, session1=session)
self.assertEqual(len(random), 4)
def test_TRSess_SetAttributes(self):
inSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("password"))
)
handle, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, "rsa2048:aes128cfb")
sym = TPMT_SYM_DEF(
algorithm=TPM2_ALG.XOR,
keyBits=TPMU_SYM_KEY_BITS(exclusiveOr=TPM2_ALG.SHA256),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
session = self.ectx.StartAuthSession(
tpmKey=handle,
bind=handle,
nonceCaller=None,
sessionType=TPM2_SE.POLICY,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.TRSess_SetAttributes(
session, (TPMA_SESSION.ENCRYPT | TPMA_SESSION.DECRYPT)
)
with self.assertRaises(TypeError):
self.ectx.TRSess_SetAttributes(
object(), (TPMA_SESSION.ENCRYPT | TPMA_SESSION.DECRYPT)
)
with self.assertRaises(TypeError):
self.ectx.TRSess_SetAttributes(session, 67.5)
def test_start_authSession_noncecaller(self):
inSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("password"))
)
handle, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, "rsa2048:aes128cfb")
sym = TPMT_SYM_DEF(
algorithm=TPM2_ALG.XOR,
keyBits=TPMU_SYM_KEY_BITS(exclusiveOr=TPM2_ALG.SHA256),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
session = self.ectx.StartAuthSession(
tpmKey=handle,
bind=handle,
nonceCaller=TPM2B_NONCE(b"thisisthirtytwocharslastichecked"),
sessionType=TPM2_SE.POLICY,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.TRSess_SetAttributes(
session, (TPMA_SESSION.ENCRYPT | TPMA_SESSION.DECRYPT)
)
random = self.ectx.GetRandom(4, session1=session)
self.assertEqual(len(random), 4)
def test_start_authSession_noncecaller_bad(self):
sym = TPMT_SYM_DEF(
algorithm=TPM2_ALG.XOR,
keyBits=TPMU_SYM_KEY_BITS(exclusiveOr=TPM2_ALG.SHA256),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
with self.assertRaises(TypeError):
self.ectx.StartAuthSession(
tpmKey=ESYS_TR.NONE,
bind=ESYS_TR.NONE,
nonceCaller=object(),
sessionType=TPM2_SE.HMAC,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
def test_create(self):
inSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("password"))
)
parentHandle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "rsa2048:aes128cfb"
)
childInSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("childpassword"))
)
alg = "rsa2048"
childInPublic = TPM2B_PUBLIC(TPMT_PUBLIC.parse(alg))
outsideInfo = TPM2B_DATA()
creationPCR = TPML_PCR_SELECTION()
priv, pub, _, _, _ = self.ectx.Create(
parentHandle, childInSensitive, childInPublic, outsideInfo, creationPCR
)
self.assertEqual(type(priv), TPM2B_PRIVATE)
self.assertEqual(type(pub), TPM2B_PUBLIC)
priv, pub, _, _, _ = self.ectx.Create(
parentHandle, childInSensitive, childInPublic, outsideInfo
)
self.assertEqual(type(priv), TPM2B_PRIVATE)
self.assertEqual(type(pub), TPM2B_PUBLIC)
priv, pub, _, _, _ = self.ectx.Create(
parentHandle, childInSensitive, childInPublic, creationPCR=creationPCR
)
self.assertEqual(type(priv), TPM2B_PRIVATE)
self.assertEqual(type(pub), TPM2B_PUBLIC)
priv, pub, _, _, _ = self.ectx.Create(
parentHandle,
childInSensitive,
childInPublic,
creationPCR="sha256:1,2,3,4,5",
)
self.assertEqual(type(priv), TPM2B_PRIVATE)
self.assertEqual(type(pub), TPM2B_PUBLIC)
with self.assertRaises(TypeError):
self.ectx.Create(
34.945, childInSensitive, childInPublic, outsideInfo, creationPCR
)
with self.assertRaises(TypeError):
self.ectx.Create(
parentHandle, object(), childInPublic, outsideInfo, creationPCR
)
with self.assertRaises(TypeError):
self.ectx.Create(
parentHandle, childInSensitive, 56, outsideInfo, creationPCR
)
with self.assertRaises(TypeError):
self.ectx.Create(
parentHandle, childInSensitive, childInPublic, None, creationPCR
)
with self.assertRaises(TypeError):
self.ectx.Create(
parentHandle, childInSensitive, childInPublic, outsideInfo, object
)
def test_load(self):
inSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("password"))
)
parentHandle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "rsa2048:aes128cfb"
)
childInSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("childpassword"))
)
priv, pub, _, _, _ = self.ectx.Create(parentHandle, childInSensitive)
childHandle = self.ectx.Load(parentHandle, priv, pub)
self.assertTrue(childHandle)
with self.assertRaises(TypeError):
self.ectx.Load(42.5, priv, pub)
with self.assertRaises(TypeError):
self.ectx.Load(parentHandle, TPM2B_DATA(), pub)
with self.assertRaises(TypeError):
self.ectx.Load(parentHandle, priv, object())
def test_readpublic(self):
inSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("password"))
)
parentHandle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "rsa2048:aes128cfb"
)
childInSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("childpassword"))
)
priv, pub, _, _, _ = self.ectx.Create(parentHandle, childInSensitive)
childHandle = self.ectx.Load(parentHandle, priv, pub)
pubdata, name, qname = self.ectx.ReadPublic(childHandle)
self.assertTrue(
isinstance(pubdata, TPM2B_PUBLIC),
f"Expected TPM2B_PUBLIC, got: {type(pubdata)}",
)
self.assertTrue(
isinstance(name, TPM2B_NAME), f"Expected TPM2B_NAME, got: {type(name)}"
)
self.assertTrue(
isinstance(qname, TPM2B_NAME), f"Expected TPM2B_NAME, got: {type(qname)}"
)
self.assertEqual(pubdata.publicArea.type, TPM2_ALG.RSA)
self.assertEqual(pubdata.publicArea.nameAlg, TPM2_ALG.SHA256)
self.assertEqual(name.size, 34)  # 2-byte nameAlg + 32-byte SHA-256 digest
self.assertEqual(qname.size, 34)
with self.assertRaises(TypeError):
self.ectx.ReadPublic(object())
with self.assertRaises(TypeError):
self.ectx.ReadPublic(childHandle, session1=object)
with self.assertRaises(TypeError):
self.ectx.ReadPublic(childHandle, session2="foo")
with self.assertRaises(TypeError):
self.ectx.ReadPublic(childHandle, session3=42.5)
def test_MakeCredential(self):
inSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("password"))
)
parentHandle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "rsa2048:aes128cfb"
)
alg = "rsa2048"
attrs = (
TPMA_OBJECT.RESTRICTED
| TPMA_OBJECT.DECRYPT
| TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SENSITIVEDATAORIGIN
)
childInPublic = TPM2B_PUBLIC(TPMT_PUBLIC.parse(alg=alg, objectAttributes=attrs))
childInSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("childpassword"))
)
priv, pub, _, _, _ = self.ectx.Create(
parentHandle, childInSensitive, childInPublic
)
childHandle = self.ectx.Load(parentHandle, priv, pub)
primaryKeyName = self.ectx.ReadPublic(parentHandle)[1]
credential = TPM2B_DIGEST("this is my credential")
# this can be done without a key as in the tpm2-tools project, but for simplicity
# use the TPM command, which uses the PUBLIC portion of the object and thus
# needs no auth.
credentialBlob, secret = self.ectx.MakeCredential(
childHandle, credential, primaryKeyName
)
self.assertEqual(type(credentialBlob), TPM2B_ID_OBJECT)
self.assertEqual(type(secret), TPM2B_ENCRYPTED_SECRET)
credentialBlob, secret = self.ectx.MakeCredential(
childHandle, "this is my credential", bytes(primaryKeyName)
)
self.assertEqual(type(credentialBlob), TPM2B_ID_OBJECT)
self.assertEqual(type(secret), TPM2B_ENCRYPTED_SECRET)
with self.assertRaises(TypeError):
self.ectx.MakeCredential(42.5, credential, primaryKeyName)
with self.assertRaises(TypeError):
self.ectx.MakeCredential(childHandle, object(), primaryKeyName)
with self.assertRaises(TypeError):
self.ectx.MakeCredential(childHandle, credential, object())
with self.assertRaises(TypeError):
self.ectx.MakeCredential(
childHandle, credential, primaryKeyName, session1="bar"
)
with self.assertRaises(TypeError):
self.ectx.MakeCredential(
childHandle, credential, primaryKeyName, session2=54.6
)
with self.assertRaises(TypeError):
self.ectx.MakeCredential(
childHandle, credential, primaryKeyName, session3=object()
)
def test_ActivateCredential(self):
inSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("password"))
)
parentHandle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "rsa2048:aes128cfb"
)
alg = "rsa2048"
attrs = (
TPMA_OBJECT.RESTRICTED
| TPMA_OBJECT.DECRYPT
| TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SENSITIVEDATAORIGIN
)
childInPublic = TPM2B_PUBLIC(TPMT_PUBLIC.parse(alg=alg, objectAttributes=attrs))
childInSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("childpassword"))
)
priv, pub, _, _, _ = self.ectx.Create(
parentHandle, childInSensitive, childInPublic
)
childHandle = self.ectx.Load(parentHandle, priv, pub)
primaryKeyName = self.ectx.ReadPublic(parentHandle)[1]
credential = TPM2B_DIGEST("this is my credential")
# this can be done without a key as in the tpm2-tools project, but for simplicity
# use the TPM command, which uses the PUBLIC portion of the object and thus
# needs no auth.
credentialBlob, secret = self.ectx.MakeCredential(
childHandle, credential, primaryKeyName
)
self.ectx.setAuth(childHandle, "childpassword")
certInfo = self.ectx.ActivateCredential(
parentHandle, childHandle, credentialBlob, secret
)
self.assertEqual(bytes(certInfo), b"this is my credential")
with self.assertRaises(TypeError):
self.ectx.ActivateCredential(object(), childHandle, credentialBlob, secret)
with self.assertRaises(TypeError):
self.ectx.ActivateCredential(parentHandle, 76.4, credentialBlob, secret)
with self.assertRaises(TypeError):
self.ectx.ActivateCredential(
parentHandle, childHandle, TPM2B_PUBLIC(), secret
)
with self.assertRaises(TypeError):
self.ectx.ActivateCredential(
parentHandle, childHandle, credentialBlob, object()
)
with self.assertRaises(TypeError):
self.ectx.ActivateCredential(
parentHandle, childHandle, credentialBlob, secret, session1="foo"
)
with self.assertRaises(TypeError):
self.ectx.ActivateCredential(
parentHandle, childHandle, credentialBlob, secret, session2=object()
)
with self.assertRaises(TypeError):
self.ectx.ActivateCredential(
parentHandle, childHandle, credentialBlob, secret, session3=65.4
)
def test_unseal(self):
inSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("password"))
)
parentHandle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "rsa2048:aes128cfb"
)
attrs = (
TPMA_OBJECT.USERWITHAUTH | TPMA_OBJECT.FIXEDPARENT | TPMA_OBJECT.FIXEDTPM
)
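# A keyedhash object with a NULL scheme is a sealed-data object; the secret to seal goes in the sensitive data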
templ = TPMT_PUBLIC(
type=TPM2_ALG.KEYEDHASH, objectAttributes=attrs, nameAlg=TPM2_ALG.SHA256
)
templ.parameters.keyedHashDetail.scheme.scheme = TPM2_ALG.NULL
templ.parameters.keyedHashDetail.scheme.details.hmac.hashAlg = TPM2_ALG.SHA256
childInPublic = TPM2B_PUBLIC(templ)
childInSensitive = TPM2B_SENSITIVE_CREATE(
# TODO make sure this works without the buffer, and for other SIMPLE TPM2B types
TPMS_SENSITIVE_CREATE(
userAuth=TPM2B_AUTH("childpassword"),
data=TPM2B_SENSITIVE_DATA(b"sealedsecret"),
)
)
priv, pub, _, _, _ = self.ectx.Create(
parentHandle, childInSensitive, childInPublic
)
childHandle = self.ectx.Load(parentHandle, priv, pub)
self.ectx.setAuth(childHandle, "childpassword")
secret = self.ectx.Unseal(childHandle)
self.assertEqual(bytes(secret), b"sealedsecret")
with self.assertRaises(TypeError):
self.ectx.Unseal(45.2)
with self.assertRaises(TypeError):
self.ectx.Unseal(childHandle, session1=object())
with self.assertRaises(TypeError):
self.ectx.Unseal(childHandle, session2=67.4)
with self.assertRaises(TypeError):
self.ectx.Unseal(childHandle, session3="bar")
def test_objectChangeAuth(self):
inSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("password"))
)
parentHandle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "rsa2048:aes128cfb"
)
alg = "rsa2048"
attrs = (
TPMA_OBJECT.RESTRICTED
| TPMA_OBJECT.DECRYPT
| TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SENSITIVEDATAORIGIN
)
childInPublic = TPM2B_PUBLIC(TPMT_PUBLIC.parse(alg=alg, objectAttributes=attrs))
childInSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("childpassword"))
)
priv, pub, _, _, _ = self.ectx.Create(
parentHandle, childInSensitive, childInPublic
)
childHandle = self.ectx.Load(parentHandle, priv, pub)
# force an error
self.ectx.setAuth(childHandle, "BADchildpassword")
with self.assertRaises(TSS2_Exception):
self.ectx.ObjectChangeAuth(childHandle, parentHandle, "newauth")
self.ectx.setAuth(childHandle, "childpassword")
self.ectx.ObjectChangeAuth(childHandle, parentHandle, TPM2B_AUTH("newauth"))
self.ectx.ObjectChangeAuth(childHandle, parentHandle, b"anotherauth")
self.ectx.ObjectChangeAuth(childHandle, parentHandle, "yetanotherone")
with self.assertRaises(TypeError):
self.ectx.ObjectChangeAuth("bad", parentHandle, "yetanotherone")
with self.assertRaises(TypeError):
self.ectx.ObjectChangeAuth(childHandle, 56.7, "yetanotherone")
with self.assertRaises(TypeError):
self.ectx.ObjectChangeAuth(childHandle, parentHandle, object())
def test_createloaded(self):
inSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("password"))
)
parentHandle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "rsa2048:aes128cfb"
)
templ = TPMT_PUBLIC.parse(
alg="rsa2048", objectAttributes=TPMA_OBJECT.DEFAULT_TPM2_TOOLS_CREATE_ATTRS
)
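# CreateLoaded expects the public template as a marshaled TPM2B_TEMPLATE rather than a TPM2B_PUBLIC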
childInPublic = TPM2B_TEMPLATE(templ.Marshal())
childInSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("childpassword"))
)
childHandle, priv, pub = self.ectx.CreateLoaded(
parentHandle, childInSensitive, childInPublic
)
self.assertNotEqual(childHandle, 0)
self.assertEqual(type(priv), TPM2B_PRIVATE)
self.assertEqual(type(pub), TPM2B_PUBLIC)
childHandle, priv, pub = self.ectx.CreateLoaded(parentHandle, childInSensitive)
self.assertNotEqual(childHandle, 0)
self.assertEqual(type(priv), TPM2B_PRIVATE)
self.assertEqual(type(pub), TPM2B_PUBLIC)
with self.assertRaises(TypeError):
self.ectx.CreateLoaded(65.4, childInSensitive, childInPublic)
with self.assertRaises(TypeError):
self.ectx.CreateLoaded(parentHandle, "1223", childInPublic)
with self.assertRaises(TypeError):
self.ectx.CreateLoaded(parentHandle, childInSensitive, object())
with self.assertRaises(TypeError):
self.ectx.CreateLoaded(parentHandle, childInSensitive, session1=56.7)
with self.assertRaises(TypeError):
self.ectx.CreateLoaded(parentHandle, childInSensitive, session2=b"baz")
with self.assertRaises(TypeError):
self.ectx.CreateLoaded(parentHandle, childInSensitive, session3=object())
def test_rsa_enc_dec(self):
inSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("password"))
)
parentHandle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "rsa2048:aes128cfb"
)
templ = TPMT_PUBLIC.parse(
alg="rsa2048", objectAttributes=TPMA_OBJECT.DEFAULT_TPM2_TOOLS_CREATE_ATTRS
)
childInPublic = TPM2B_TEMPLATE(templ.Marshal())
childInSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("childpassword"))
)
childHandle, _, _ = self.ectx.CreateLoaded(
parentHandle, childInSensitive, childInPublic
)
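# Round-trip a short message through the TPM with the RSAES (PKCS#1 v1.5) padding scheme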
message = TPM2B_PUBLIC_KEY_RSA("hello world")
scheme = TPMT_RSA_DECRYPT(scheme=TPM2_ALG.RSAES)
outData = self.ectx.RSA_Encrypt(childHandle, message, scheme)
message2 = self.ectx.RSA_Decrypt(childHandle, outData, scheme)
self.assertEqual(bytes(message), bytes(message2))
outData = self.ectx.RSA_Encrypt(childHandle, "hello world", scheme)
message2 = self.ectx.RSA_Decrypt(childHandle, outData, scheme)
self.assertEqual(bytes(message), bytes(message2))
# negative test RSA_Encrypt
with self.assertRaises(TypeError):
self.ectx.RSA_Encrypt(45.6, message, scheme)
with self.assertRaises(TypeError):
self.ectx.RSA_Encrypt(childHandle, TPM2B_PUBLIC(), scheme)
with self.assertRaises(TypeError):
self.ectx.RSA_Encrypt(childHandle, message, "foo")
with self.assertRaises(TypeError):
self.ectx.RSA_Encrypt(childHandle, message, scheme, session1=object())
with self.assertRaises(TypeError):
self.ectx.RSA_Encrypt(childHandle, message, scheme, session2="foo")
with self.assertRaises(TypeError):
self.ectx.RSA_Encrypt(childHandle, message, scheme, session3=52.6)
# negative test RSA_Decrypt
with self.assertRaises(TypeError):
self.ectx.RSA_Decrypt(56.2, outData, scheme)
with self.assertRaises(TypeError):
self.ectx.RSA_Decrypt(childHandle, object(), scheme)
with self.assertRaises(TypeError):
self.ectx.RSA_Decrypt(childHandle, outData, TPM2_ALG.RSAES)
with self.assertRaises(TypeError):
self.ectx.RSA_Decrypt(childHandle, outData, scheme, session1=67.9)
with self.assertRaises(TypeError):
self.ectx.RSA_Decrypt(childHandle, outData, scheme, session2="foo")
with self.assertRaises(TypeError):
self.ectx.RSA_Decrypt(childHandle, outData, scheme, session3=object())
def test_rsa_enc_dec_with_label(self):
inSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("password"))
)
parentHandle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "rsa2048:aes128cfb"
)
templ = TPMT_PUBLIC.parse(
alg="rsa2048", objectAttributes=TPMA_OBJECT.DEFAULT_TPM2_TOOLS_CREATE_ATTRS
)
childInPublic = TPM2B_TEMPLATE(templ.Marshal())
childInSensitive = TPM2B_SENSITIVE_CREATE(
TPMS_SENSITIVE_CREATE(userAuth=TPM2B_AUTH("childpassword"))
)
childHandle, _, _ = self.ectx.CreateLoaded(
parentHandle, childInSensitive, childInPublic
)
message = TPM2B_PUBLIC_KEY_RSA("hello world")
scheme = TPMT_RSA_DECRYPT(scheme=TPM2_ALG.RSAES)
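# If a label is supplied it must be NUL terminated, hence the trailing \0 below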
outData = self.ectx.RSA_Encrypt(
childHandle, message, scheme, label=b"my label\0"
)
message2 = self.ectx.RSA_Decrypt(
childHandle, outData, scheme, label=b"my label\0"
)
self.assertEqual(bytes(message), bytes(message2))
def test_ECDH_KeyGen(self):
inSensitive = TPM2B_SENSITIVE_CREATE()
parentHandle, _, _, _, _ = self.ectx.CreatePrimary(
inSensitive, "ecc256:aes128cfb"
)
zPoint, pubPoint = self.ectx.ECDH_KeyGen(parentHandle)
self.assertNotEqual(zPoint, None)
self.assertNotEqual(pubPoint, None)
with self.assertRaises(TypeError):
self.ectx.ECDH_KeyGen(56.8)
with self.assertRaises(TypeError):
self.ectx.ECDH_KeyGen(parentHandle, session1="baz")
with self.assertRaises(TypeError):
self.ectx.ECDH_KeyGen(parentHandle, session2=object())
with self.assertRaises(TypeError):
self.ectx.ECDH_KeyGen(parentHandle, session3=45.6)
def test_ECDH_ZGen(self):
alg = "ecc256:ecdh"
attrs = (
TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.DECRYPT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.SENSITIVEDATAORIGIN
)
inPublic = TPM2B_PUBLIC(TPMT_PUBLIC.parse(alg=alg, objectAttributes=attrs))
inSensitive = TPM2B_SENSITIVE_CREATE(TPMS_SENSITIVE_CREATE())
parentHandle, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
inPoint = TPM2B_ECC_POINT(
TPMS_ECC_POINT(
x=binascii.unhexlify(
"25db1f8bbcfabc31f8176acbb2f840a3b6a5d340659d37eed9fd5247f514d598"
),
y=binascii.unhexlify(
"ed623e3dd20908cf583c814bbf657e08ab9f40ffea51da21298ce24deb344ccc"
),
)
)
outPoint = self.ectx.ECDH_ZGen(parentHandle, inPoint)
self.assertEqual(type(outPoint), TPM2B_ECC_POINT)
with self.assertRaises(TypeError):
self.ectx.ECDH_ZGen(object(), inPoint)
with self.assertRaises(TypeError):
self.ectx.ECDH_ZGen(parentHandle, "boo")
with self.assertRaises(TypeError):
self.ectx.ECDH_ZGen(parentHandle, inPoint, session1="baz")
with self.assertRaises(TypeError):
self.ectx.ECDH_ZGen(parentHandle, inPoint, session2=object())
with self.assertRaises(TypeError):
self.ectx.ECDH_ZGen(parentHandle, inPoint, session3=89.6)
def test_ECC_Parameters(self):
params = self.ectx.ECC_Parameters(TPM2_ECC_CURVE.NIST_P256)
self.assertEqual(type(params), TPMS_ALGORITHM_DETAIL_ECC)
with self.assertRaises(ValueError):
self.ectx.ECC_Parameters(42)
with self.assertRaises(TypeError):
self.ectx.ECC_Parameters(TPM2B_DATA())
with self.assertRaises(TypeError):
self.ectx.ECC_Parameters(TPM2_ECC_CURVE.NIST_P256, session1="foo")
with self.assertRaises(TypeError):
self.ectx.ECC_Parameters(TPM2_ECC_CURVE.NIST_P256, session2=object())
with self.assertRaises(TypeError):
self.ectx.ECC_Parameters(TPM2_ECC_CURVE.NIST_P256, session3=TPM2B_DATA())
def test_ZGen_2Phase(self):
alg = "ecc256:ecdh-sha256"
attrs = (
TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.DECRYPT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.SENSITIVEDATAORIGIN
)
inPublic = TPM2B_PUBLIC(TPMT_PUBLIC.parse(alg=alg, objectAttributes=attrs))
inSensitive = TPM2B_SENSITIVE_CREATE(TPMS_SENSITIVE_CREATE())
eccHandle, outPublic, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
curveId = TPM2_ECC.NIST_P256
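# EC_Ephemeral returns the ephemeral public point Q and a counter the TPM uses to recreate the ephemeral private key in ZGen_2Phase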
Q, counter = self.ectx.EC_Ephemeral(curveId)
inQsB = TPM2B_ECC_POINT(outPublic.publicArea.unique.ecc)
inQeB = Q
Z1, Z2 = self.ectx.ZGen_2Phase(eccHandle, inQsB, inQeB, TPM2_ALG.ECDH, counter)
self.assertEqual(type(Z1), TPM2B_ECC_POINT)
self.assertEqual(type(Z2), TPM2B_ECC_POINT)
with self.assertRaises(TypeError):
self.ectx.ZGen_2Phase(42.5, inQsB, inQeB, TPM2_ALG.ECDH, counter)
with self.assertRaises(TypeError):
self.ectx.ZGen_2Phase(eccHandle, "hello", inQeB, TPM2_ALG.ECDH, counter)
with self.assertRaises(TypeError):
self.ectx.ZGen_2Phase(eccHandle, inQsB, object(), TPM2_ALG.ECDH, counter)
with self.assertRaises(TypeError):
self.ectx.ZGen_2Phase(eccHandle, inQsB, inQeB, object(), counter)
with self.assertRaises(TypeError):
self.ectx.ZGen_2Phase(eccHandle, inQsB, inQeB, TPM2_ALG.ECDH, "baz")
with self.assertRaises(ValueError):
self.ectx.ZGen_2Phase(eccHandle, inQsB, inQeB, TPM2_ALG.ECDH, 2 ** 18)
with self.assertRaises(TypeError):
self.ectx.ZGen_2Phase(
eccHandle, inQsB, inQeB, TPM2_ALG.ECDH, counter, session1=object()
)
with self.assertRaises(TypeError):
self.ectx.ZGen_2Phase(
eccHandle, inQsB, inQeB, TPM2_ALG.ECDH, counter, session2="baz"
)
with self.assertRaises(TypeError):
self.ectx.ZGen_2Phase(
eccHandle, inQsB, inQeB, TPM2_ALG.ECDH, counter, session3=45.5
)
def test_EncryptDecrypt(self):
inSensitive = TPM2B_SENSITIVE_CREATE()
parentHandle = self.ectx.CreatePrimary(inSensitive, "ecc")[0]
inPublic = TPM2B_TEMPLATE(TPMT_PUBLIC.parse(alg="aes").Marshal())
aesKeyHandle = self.ectx.CreateLoaded(parentHandle, inSensitive, inPublic)[0]
ivIn = TPM2B_IV(b"thisis16byteszxc")
inData = TPM2B_MAX_BUFFER(b"this is data to encrypt")
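# The second argument is the decrypt flag: False encrypts inData, True decrypts the ciphertext back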
outCipherText, outIV = self.ectx.EncryptDecrypt(
aesKeyHandle, False, TPM2_ALG.CFB, ivIn, inData
)
self.assertEqual(len(outIV), len(ivIn))
outData, outIV2 = self.ectx.EncryptDecrypt(
aesKeyHandle, True, TPM2_ALG.CFB, ivIn, outCipherText
)
self.assertEqual(bytes(inData), bytes(outData))
self.assertEqual(bytes(outIV), bytes(outIV2))
# test plain bytes for data
ivIn = b"thisis16byteszxc"
inData = b"this is data to encrypt"
outCipherText, outIV = self.ectx.EncryptDecrypt(
aesKeyHandle, False, TPM2_ALG.CFB, ivIn, inData
)
self.assertEqual(len(outIV), len(ivIn))
outData, outIV2 = self.ectx.EncryptDecrypt(
aesKeyHandle, True, TPM2_ALG.CFB, ivIn, outCipherText
)
self.assertEqual(inData, bytes(outData))
self.assertEqual(bytes(outIV), bytes(outIV2))
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt(42.5, True, TPM2_ALG.CFB, ivIn, outCipherText)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt(
aesKeyHandle, object(), TPM2_ALG.CFB, ivIn, outCipherText
)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt(aesKeyHandle, True, object(), ivIn, outCipherText)
with self.assertRaises(ValueError):
self.ectx.EncryptDecrypt(aesKeyHandle, True, 42, ivIn, outCipherText)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt(
aesKeyHandle, True, TPM2_ALG.CFB, TPM2B_PUBLIC(), outCipherText
)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt(aesKeyHandle, True, TPM2_ALG.CFB, ivIn, None)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt(
aesKeyHandle, True, TPM2_ALG.CFB, ivIn, outCipherText, session1=object()
)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt(
aesKeyHandle, True, TPM2_ALG.CFB, ivIn, outCipherText, session2="foo"
)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt(
aesKeyHandle, True, TPM2_ALG.CFB, ivIn, outCipherText, session3=12.3
)
def test_EncryptDecrypt2(self):
inSensitive = TPM2B_SENSITIVE_CREATE()
parentHandle = self.ectx.CreatePrimary(inSensitive, "ecc")[0]
inPublic = TPM2B_TEMPLATE(TPMT_PUBLIC.parse(alg="aes").Marshal())
aesKeyHandle = self.ectx.CreateLoaded(parentHandle, inSensitive, inPublic)[0]
ivIn = TPM2B_IV(b"thisis16byteszxc")
inData = TPM2B_MAX_BUFFER(b"this is data to encrypt")
outCipherText, outIV = self.ectx.EncryptDecrypt2(
aesKeyHandle, False, TPM2_ALG.CFB, ivIn, inData
)
self.assertEqual(len(outIV), len(ivIn))
outData, outIV2 = self.ectx.EncryptDecrypt2(
aesKeyHandle, True, TPM2_ALG.CFB, ivIn, outCipherText
)
self.assertEqual(bytes(inData), bytes(outData))
self.assertEqual(bytes(outIV), bytes(outIV2))
ivIn = b"thisis16byteszxc"
inData = b"this is data to encrypt"
outCipherText, outIV = self.ectx.EncryptDecrypt2(
aesKeyHandle, False, TPM2_ALG.CFB, ivIn, inData
)
self.assertEqual(len(outIV), len(ivIn))
outData, outIV2 = self.ectx.EncryptDecrypt2(
aesKeyHandle, True, TPM2_ALG.CFB, ivIn, outCipherText
)
self.assertEqual(inData, bytes(outData))
self.assertEqual(bytes(outIV), bytes(outIV2))
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt2(42.5, True, TPM2_ALG.CFB, ivIn, outCipherText)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt2(
aesKeyHandle, object(), TPM2_ALG.CFB, ivIn, outCipherText
)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt2(aesKeyHandle, True, object(), ivIn, outCipherText)
with self.assertRaises(ValueError):
self.ectx.EncryptDecrypt2(aesKeyHandle, True, 42, ivIn, outCipherText)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt2(
aesKeyHandle, True, TPM2_ALG.CFB, TPM2B_PUBLIC(), outCipherText
)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt2(aesKeyHandle, True, TPM2_ALG.CFB, ivIn, None)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt2(
aesKeyHandle, True, TPM2_ALG.CFB, ivIn, outCipherText, session1=object()
)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt2(
aesKeyHandle, True, TPM2_ALG.CFB, ivIn, outCipherText, session2="foo"
)
with self.assertRaises(TypeError):
self.ectx.EncryptDecrypt2(
aesKeyHandle, True, TPM2_ALG.CFB, ivIn, outCipherText, session3=12.3
)
def test_Hash(self):
# Null hierarchy default
digest, ticket = self.ectx.Hash(b"1234", TPM2_ALG.SHA256)
self.assertNotEqual(digest, None)
self.assertNotEqual(ticket, None)
d = bytes(digest)
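# Expected value: SHA-256 of b"1234"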
c = binascii.unhexlify(
"03ac674216f3e15c761ee1a5e255f067953623c8b388b4459e13f978d7c846f4"
)
self.assertEqual(c, d)
# Owner hierarchy set
digest, ticket = self.ectx.Hash(b"1234", TPM2_ALG.SHA256, ESYS_TR.OWNER)
self.assertNotEqual(digest, None)
self.assertNotEqual(ticket, None)
d = bytes(digest)
c = binascii.unhexlify(
"03ac674216f3e15c761ee1a5e255f067953623c8b388b4459e13f978d7c846f4"
)
self.assertEqual(c, d)
# Test TPM2B_MAX_BUFFER
inData = TPM2B_MAX_BUFFER(b"1234")
digest, ticket = self.ectx.Hash(inData, TPM2_ALG.SHA256)
self.assertNotEqual(digest, None)
self.assertNotEqual(ticket, None)
d = bytes(digest)
c = binascii.unhexlify(
"03ac674216f3e15c761ee1a5e255f067953623c8b388b4459e13f978d7c846f4"
)
self.assertEqual(c, d)
# Test str input
inData = TPM2B_MAX_BUFFER("1234")
digest, ticket = self.ectx.Hash(inData, TPM2_ALG.SHA256)
self.assertNotEqual(digest, None)
self.assertNotEqual(ticket, None)
d = bytes(digest)
c = binascii.unhexlify(
"03ac674216f3e15c761ee1a5e255f067953623c8b388b4459e13f978d7c846f4"
)
self.assertEqual(c, d)
with self.assertRaises(TypeError):
self.ectx.Hash(object(), TPM2_ALG.SHA256)
with self.assertRaises(TypeError):
self.ectx.Hash(inData, "baz")
with self.assertRaises(ValueError):
self.ectx.Hash(inData, 42)
with self.assertRaises(TypeError):
self.ectx.Hash(inData, TPM2_ALG.SHA256, session1=56.7)
with self.assertRaises(TypeError):
self.ectx.Hash(inData, TPM2_ALG.SHA256, session2="baz")
with self.assertRaises(TypeError):
self.ectx.Hash(inData, TPM2_ALG.SHA256, session3=object())
def test_HMAC(self):
attrs = (
TPMA_OBJECT.SIGN_ENCRYPT
| TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SENSITIVEDATAORIGIN
)
templ = TPMT_PUBLIC.parse(alg="hmac", objectAttributes=attrs)
inPublic = TPM2B_PUBLIC(templ)
inSensitive = TPM2B_SENSITIVE_CREATE(TPMS_SENSITIVE_CREATE())
primaryHandle = self.ectx.CreatePrimary(inSensitive, inPublic)[0]
# Test bytes
hmac = self.ectx.HMAC(primaryHandle, b"1234", TPM2_ALG.SHA256)
self.assertNotEqual(hmac, None)
self.assertEqual(len(bytes(hmac)), 32)
# Test str
hmac = self.ectx.HMAC(primaryHandle, "1234", TPM2_ALG.SHA256)
self.assertNotEqual(hmac, None)
self.assertEqual(len(bytes(hmac)), 32)
# Test Native
inData = TPM2B_MAX_BUFFER("1234")
hmac = self.ectx.HMAC(primaryHandle, inData, TPM2_ALG.SHA256)
self.assertNotEqual(hmac, None)
self.assertEqual(len(bytes(hmac)), 32)
with self.assertRaises(TypeError):
self.ectx.HMAC(45.6, inData, TPM2_ALG.SHA256)
with self.assertRaises(TypeError):
self.ectx.HMAC(primaryHandle, object(), TPM2_ALG.SHA256)
with self.assertRaises(TypeError):
self.ectx.HMAC(primaryHandle, inData, "baz")
with self.assertRaises(ValueError):
self.ectx.HMAC(primaryHandle, inData, 42)
with self.assertRaises(TypeError):
self.ectx.HMAC(primaryHandle, inData, TPM2_ALG.SHA256, session1=object())
with self.assertRaises(TypeError):
self.ectx.HMAC(primaryHandle, inData, TPM2_ALG.SHA256, session2="object")
with self.assertRaises(TypeError):
self.ectx.HMAC(primaryHandle, inData, TPM2_ALG.SHA256, session3=45.6)
def test_StirRandom(self):
self.ectx.StirRandom(b"1234")
self.ectx.StirRandom("1234")
self.ectx.StirRandom(TPM2B_SENSITIVE_DATA("1234"))
with self.assertRaises(TypeError):
self.ectx.StirRandom(object())
with self.assertRaises(TypeError):
self.ectx.StirRandom("1234", session1=56.7)
with self.assertRaises(TypeError):
self.ectx.StirRandom("1234", session2="foo")
with self.assertRaises(TypeError):
self.ectx.StirRandom("1234", session3=object())
def test_HMAC_Sequence(self):
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="hmac",
objectAttributes=(
TPMA_OBJECT.SIGN_ENCRYPT
| TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SENSITIVEDATAORIGIN
),
)
)
inSensitive = TPM2B_SENSITIVE_CREATE(TPMS_SENSITIVE_CREATE())
handle = self.ectx.CreatePrimary(inSensitive, inPublic)[0]
seqHandle = self.ectx.HMAC_Start(handle, None, TPM2_ALG.SHA256)
self.assertNotEqual(seqHandle, 0)
self.ectx.FlushContext(seqHandle)
seqHandle = self.ectx.HMAC_Start(handle, b"1234", TPM2_ALG.SHA256)
self.assertNotEqual(seqHandle, 0)
self.ectx.FlushContext(seqHandle)
seqHandle = self.ectx.HMAC_Start(handle, "1234", TPM2_ALG.SHA256)
self.assertNotEqual(seqHandle, 0)
self.ectx.FlushContext(seqHandle)
seqHandle = self.ectx.HMAC_Start(handle, TPM2B_AUTH(b"1234"), TPM2_ALG.SHA256)
self.assertNotEqual(seqHandle, 0)
# self.ectx.setAuth(seqHandle, b"1234")
self.ectx.SequenceUpdate(seqHandle, "here is some data")
self.ectx.SequenceUpdate(seqHandle, b"more data but byte string")
self.ectx.SequenceUpdate(seqHandle, TPM2B_MAX_BUFFER("native data format"))
self.ectx.SequenceUpdate(seqHandle, None)
digest, ticket = self.ectx.SequenceComplete(seqHandle, None)
self.assertNotEqual(digest, None)
self.assertNotEqual(ticket, None)
self.assertEqual(len(digest), 32)
with self.assertRaises(TypeError):
self.ectx.HMAC_Start(45.6, "1234", TPM2_ALG.SHA256)
with self.assertRaises(TypeError):
self.ectx.HMAC_Start(handle, dict(), TPM2_ALG.SHA256)
with self.assertRaises(TypeError):
self.ectx.HMAC_Start(handle, "1234", object())
with self.assertRaises(ValueError):
self.ectx.HMAC_Start(handle, "1234", 42)
with self.assertRaises(TypeError):
self.ectx.HMAC_Start(handle, "1234", TPM2_ALG.SHA256, session1="baz")
with self.assertRaises(TypeError):
self.ectx.HMAC_Start(handle, "1234", TPM2_ALG.SHA256, session2=object())
with self.assertRaises(TypeError):
self.ectx.HMAC_Start(handle, "1234", TPM2_ALG.SHA256, session3=45.6)
def test_HashSequence(self):
seqHandle = self.ectx.HashSequenceStart(None, TPM2_ALG.SHA256)
self.assertNotEqual(seqHandle, 0)
self.ectx.FlushContext(seqHandle)
seqHandle = self.ectx.HashSequenceStart(b"1234", TPM2_ALG.SHA256)
self.assertNotEqual(seqHandle, 0)
self.ectx.FlushContext(seqHandle)
seqHandle = self.ectx.HashSequenceStart("1234", TPM2_ALG.SHA256)
self.assertNotEqual(seqHandle, 0)
self.ectx.FlushContext(seqHandle)
seqHandle = self.ectx.HashSequenceStart(TPM2B_AUTH(b"1234"), TPM2_ALG.SHA256)
self.assertNotEqual(seqHandle, 0)
self.ectx.setAuth(seqHandle, b"1234")
self.ectx.SequenceUpdate(seqHandle, "here is some data")
self.ectx.SequenceUpdate(seqHandle, b"more data but byte string")
self.ectx.SequenceUpdate(seqHandle, TPM2B_MAX_BUFFER("native data format"))
self.ectx.SequenceUpdate(seqHandle, None)
digest, ticket = self.ectx.SequenceComplete(seqHandle, "AnotherBuffer")
self.assertNotEqual(digest, None)
self.assertNotEqual(ticket, None)
e = binascii.unhexlify(
"a02271d78e351c6e9e775b0570b440d3ac37ad6c02a3b69df940f3f893f80d41"
)
d = bytes(digest)
self.assertEqual(e, d)
with self.assertRaises(TypeError):
self.ectx.HashSequenceStart(object(), TPM2_ALG.SHA256)
with self.assertRaises(TypeError):
self.ectx.HashSequenceStart(b"1234", "dssdf")
with self.assertRaises(ValueError):
self.ectx.HashSequenceStart(b"1234", 42)
with self.assertRaises(TypeError):
self.ectx.HashSequenceStart(b"1234", TPM2_ALG.SHA256, session1="baz")
with self.assertRaises(TypeError):
self.ectx.HashSequenceStart(b"1234", TPM2_ALG.SHA256, session2=56.7)
with self.assertRaises(TypeError):
self.ectx.HashSequenceStart(b"1234", TPM2_ALG.SHA256, session3=TPM2B_DATA())
with self.assertRaises(TypeError):
self.ectx.SequenceUpdate(56.7, "here is some data")
with self.assertRaises(TypeError):
self.ectx.SequenceUpdate(seqHandle, [])
with self.assertRaises(TypeError):
self.ectx.SequenceUpdate(seqHandle, "here is some data", sequence1="foo")
with self.assertRaises(TypeError):
self.ectx.SequenceUpdate(seqHandle, "here is some data", sequence2=object())
with self.assertRaises(TypeError):
self.ectx.SequenceUpdate(seqHandle, "here is some data", sequence3=78.23)
with self.assertRaises(TypeError):
self.ectx.SequenceComplete(78.25, "AnotherBuffer")
with self.assertRaises(TypeError):
self.ectx.SequenceComplete(seqHandle, [])
with self.assertRaises(TypeError):
self.ectx.SequenceComplete(seqHandle, "AnotherBuffer", hierarchy=object())
with self.assertRaises(ValueError):
self.ectx.SequenceComplete(seqHandle, "AnotherBuffer", hierarchy=42)
with self.assertRaises(TypeError):
self.ectx.SequenceComplete(seqHandle, "AnotherBuffer", session1=42.67)
with self.assertRaises(TypeError):
self.ectx.SequenceComplete(seqHandle, "AnotherBuffer", session2="baz")
with self.assertRaises(TypeError):
self.ectx.SequenceComplete(seqHandle, "AnotherBuffer", session3=object())
def test_EventSequenceComplete(self):
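# An event sequence is started with TPM2_ALG.NULL; completing it extends PCR 16 and returns the per-bank digests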
seqHandle = self.ectx.HashSequenceStart(TPM2B_AUTH(b"1234"), TPM2_ALG.NULL)
self.assertNotEqual(seqHandle, 0)
self.ectx.setAuth(seqHandle, b"1234")
self.ectx.SequenceUpdate(seqHandle, "here is some data")
self.ectx.SequenceUpdate(seqHandle, b"more data but byte string")
self.ectx.SequenceUpdate(seqHandle, TPM2B_MAX_BUFFER("native data format"))
self.ectx.SequenceUpdate(seqHandle, None)
pcrs = self.ectx.EventSequenceComplete(
ESYS_TR.PCR16, seqHandle, "AnotherBuffer"
)
self.assertEqual(type(pcrs), TPML_DIGEST_VALUES)
with self.assertRaises(TypeError):
self.ectx.EventSequenceComplete(object(), seqHandle, None)
with self.assertRaises(ValueError):
self.ectx.EventSequenceComplete(42, seqHandle, None)
with self.assertRaises(TypeError):
self.ectx.EventSequenceComplete(ESYS_TR.PCR16, 46.5, None)
with self.assertRaises(TypeError):
self.ectx.EventSequenceComplete(ESYS_TR.PCR16, seqHandle, object())
with self.assertRaises(TypeError):
self.ectx.EventSequenceComplete(
ESYS_TR.PCR16, seqHandle, None, session1=67.34
)
with self.assertRaises(TypeError):
self.ectx.EventSequenceComplete(
ESYS_TR.PCR16, seqHandle, None, session2="boo"
)
with self.assertRaises(TypeError):
self.ectx.EventSequenceComplete(
ESYS_TR.PCR16, seqHandle, None, session3=object()
)
def test_ContextSave_ContextLoad(self):
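# Save the primary's context, reload it, and check that the reloaded handle resolves to the same name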
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa:rsassa-sha256",
objectAttributes=TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SIGN_ENCRYPT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inSensitive = TPM2B_SENSITIVE_CREATE()
handle, outpub, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
ctx = self.ectx.ContextSave(handle)
nhandle = self.ectx.ContextLoad(ctx)
name = self.ectx.TR_GetName(nhandle)
self.assertEqual(bytes(outpub.getName()), bytes(name))
def test_FlushContext(self):
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa:rsassa-sha256",
objectAttributes=TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SIGN_ENCRYPT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inSensitive = TPM2B_SENSITIVE_CREATE()
handle, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
self.ectx.FlushContext(handle)
with self.assertRaises(TSS2_Exception) as e:
self.ectx.TR_GetName(handle)
self.assertEqual(e.exception.error, lib.TSS2_ESYS_RC_BAD_TR)
def test_EvictControl(self):
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa:rsassa-sha256",
objectAttributes=TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SIGN_ENCRYPT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inSensitive = TPM2B_SENSITIVE_CREATE()
handle, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
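# Persist the transient key at 0x81000081, then evict it again via its persistent handle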
self.ectx.EvictControl(
ESYS_TR.OWNER, handle, 0x81000081, session1=ESYS_TR.PASSWORD
)
phandle = self.ectx.TR_FromTPMPublic(0x81000081)
self.ectx.EvictControl(
ESYS_TR.OWNER, phandle, 0x81000081, session1=ESYS_TR.PASSWORD
)
with self.assertRaises(TSS2_Exception) as e:
self.ectx.TR_FromTPMPublic(0x81000081)
self.assertEqual(e.exception.error, TPM2_RC.HANDLE)
def test_GetCapability(self):
more = True
while more:
more, capdata = self.ectx.GetCapability(
TPM2_CAP.COMMANDS, TPM2_CC.FIRST, lib.TPM2_MAX_CAP_CC
)
for c in capdata.data.command:
pass
def test_TestParms(self):
parms = TPMT_PUBLIC_PARMS(type=TPM2_ALG.RSA)
parms.parameters.rsaDetail.symmetric.algorithm = TPM2_ALG.NULL
parms.parameters.rsaDetail.scheme.scheme = TPM2_ALG.NULL
parms.parameters.rsaDetail.keyBits = 2048
parms.parameters.rsaDetail.exponent = 0
self.ectx.TestParms(parms)
parms.parameters.rsaDetail.keyBits = 1234
with self.assertRaises(TSS2_Exception) as e:
self.ectx.TestParms(parms)
self.assertEqual(e.exception.error, TPM2_RC.VALUE)
self.assertEqual(e.exception.parameter, 1)
def test_ReadClock(self):
ctime = self.ectx.ReadClock()
self.assertGreater(ctime.time, 0)
self.assertGreater(ctime.clockInfo.clock, 0)
def test_ClockSet(self):
newtime = 0xFA1AFE1
self.ectx.ClockSet(ESYS_TR.OWNER, newtime, session1=ESYS_TR.PASSWORD)
ntime = self.ectx.ReadClock()
self.assertGreaterEqual(ntime.clockInfo.clock, newtime)
with self.assertRaises(TSS2_Exception) as e:
self.ectx.ClockSet(ESYS_TR.OWNER, 0, session1=ESYS_TR.PASSWORD)
self.assertEqual(e.exception.error, TPM2_RC.VALUE)
def test_ClockRateAdjust(self):
self.ectx.ClockRateAdjust(
ESYS_TR.OWNER, TPM2_CLOCK.COARSE_SLOWER, session1=ESYS_TR.PASSWORD
)
def test_NV_UndefineSpaceSpecial(self):
# pre-generated TPM2_PolicyCommandCode(TPM2_CC_NV_UndefineSpaceSpecial)
pol = b"\x1d-\xc4\x85\xe1w\xdd\xd0\xa4\n4I\x13\xce\xebB\x0c\xaa\t<BX}.\x1b\x13+\x15|\xcb]\xb0"
nvpub = TPM2B_NV_PUBLIC(
nvPublic=TPMS_NV_PUBLIC(
nvIndex=0x1000000,
nameAlg=TPM2_ALG.SHA256,
attributes=TPMA_NV.PPWRITE
| TPMA_NV.PPREAD
| TPMA_NV.PLATFORMCREATE
| TPMA_NV.POLICY_DELETE,
authPolicy=pol,
dataSize=8,
)
)
nvhandle = self.ectx.NV_DefineSpace(b"", nvpub, authHandle=ESYS_TR.RH_PLATFORM)
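# An index with POLICY_DELETE set can only be undefined through a policy session satisfying its authPolicy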
session = self.ectx.StartAuthSession(
ESYS_TR.NONE,
ESYS_TR.NONE,
None,
TPM2_SE.POLICY,
TPMT_SYM_DEF(algorithm=TPM2_ALG.NULL),
TPM2_ALG.SHA256,
)
self.ectx.PolicyCommandCode(session, TPM2_CC.NV_UndefineSpaceSpecial)
self.ectx.NV_UndefineSpaceSpecial(
nvhandle, session1=session, session2=ESYS_TR.PASSWORD
)
def test_NV_ReadPublic(self):
nvpub = TPM2B_NV_PUBLIC(
nvPublic=TPMS_NV_PUBLIC(
nvIndex=0x1000000,
nameAlg=TPM2_ALG.SHA256,
attributes=TPMA_NV.OWNERWRITE | TPMA_NV.OWNERREAD,
authPolicy=b"",
dataSize=8,
)
)
nvhandle = self.ectx.NV_DefineSpace(b"", nvpub)
pubout, name = self.ectx.NV_ReadPublic(nvhandle)
self.assertEqual(nvpub.getName().name, name.name)
def test_NV_Increment(self):
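# Define an 8-byte NV counter index (TPM2_NT.COUNTER shifted into the TPMA_NV attributes)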
nvpub = TPM2B_NV_PUBLIC(
nvPublic=TPMS_NV_PUBLIC(
nvIndex=0x1000000,
nameAlg=TPM2_ALG.SHA256,
attributes=TPMA_NV.OWNERWRITE
| TPMA_NV.OWNERREAD
| (TPM2_NT.COUNTER << TPMA_NV.TPM2_NT_SHIFT),
authPolicy=b"",
dataSize=8,
)
)
nvhandle = self.ectx.NV_DefineSpace(b"", nvpub)
self.ectx.NV_Increment(nvhandle, authHandle=ESYS_TR.RH_OWNER)
data = self.ectx.NV_Read(nvhandle, 8, 0, authHandle=ESYS_TR.RH_OWNER)
counter = int.from_bytes(data.buffer, byteorder="big")
self.assertEqual(counter, 1)
def test_NV_Extend(self):
nvpub = TPM2B_NV_PUBLIC(
nvPublic=TPMS_NV_PUBLIC(
nvIndex=0x1000000,
nameAlg=TPM2_ALG.SHA256,
attributes=TPMA_NV.OWNERWRITE
| TPMA_NV.OWNERREAD
| (TPM2_NT.EXTEND << TPMA_NV.TPM2_NT_SHIFT),
authPolicy=b"",
dataSize=32,
)
)
nvhandle = self.ectx.NV_DefineSpace(b"", nvpub)
edata = b"\xFF" * 32
self.ectx.NV_Extend(nvhandle, edata, authHandle=ESYS_TR.RH_OWNER)
data = self.ectx.NV_Read(nvhandle, 32, 0, authHandle=ESYS_TR.RH_OWNER)
edigest = b"\xbb\xa9\x1c\xa8]\xc9\x14\xb2\xec>\xfb\x9e\x16\xe7&{\xf9\x19;\x145\r \xfb\xa8\xa8\xb4\x06s\n\xe3\n"
self.assertEqual(edigest, bytes(data))
def test_NV_SetBits(self):
nvpub = TPM2B_NV_PUBLIC(
nvPublic=TPMS_NV_PUBLIC(
nvIndex=0x1000000,
nameAlg=TPM2_ALG.SHA256,
attributes=TPMA_NV.OWNERWRITE
| TPMA_NV.OWNERREAD
| (TPM2_NT.BITS << TPMA_NV.TPM2_NT_SHIFT),
authPolicy=b"",
dataSize=8,
)
)
nvhandle = self.ectx.NV_DefineSpace(b"", nvpub)
bits = 0b1010
self.ectx.NV_SetBits(nvhandle, bits, authHandle=ESYS_TR.RH_OWNER)
data = self.ectx.NV_Read(nvhandle, 8, 0, authHandle=ESYS_TR.RH_OWNER)
b = bits.to_bytes(length=8, byteorder="big")
self.assertEqual(b, bytes(data))
def test_NV_WriteLock(self):
nvpub = TPM2B_NV_PUBLIC(
nvPublic=TPMS_NV_PUBLIC(
nvIndex=0x1000000,
nameAlg=TPM2_ALG.SHA256,
attributes=TPMA_NV.OWNERWRITE
| TPMA_NV.OWNERREAD
| TPMA_NV.WRITE_STCLEAR,
authPolicy=b"",
dataSize=8,
)
)
nvhandle = self.ectx.NV_DefineSpace(b"", nvpub)
self.ectx.NV_WriteLock(nvhandle, authHandle=ESYS_TR.RH_OWNER)
indata = b"12345678"
with self.assertRaises(TSS2_Exception) as e:
self.ectx.NV_Write(nvhandle, indata, authHandle=ESYS_TR.RH_OWNER)
self.assertEqual(e.exception.error, TPM2_RC.NV_LOCKED)
def test_NV_GlobalWriteLock(self):
nvpub = TPM2B_NV_PUBLIC(
nvPublic=TPMS_NV_PUBLIC(
nvIndex=0x1000000,
nameAlg=TPM2_ALG.SHA256,
attributes=TPMA_NV.OWNERWRITE | TPMA_NV.OWNERREAD | TPMA_NV.GLOBALLOCK,
authPolicy=b"",
dataSize=8,
)
)
nvhandle = self.ectx.NV_DefineSpace(b"", nvpub)
self.ectx.NV_GlobalWriteLock()
indata = b"12345678"
with self.assertRaises(TSS2_Exception) as e:
self.ectx.NV_Write(nvhandle, indata, authHandle=ESYS_TR.RH_OWNER)
self.assertEqual(e.exception.error, TPM2_RC.NV_LOCKED)
def test_NV_ReadLock(self):
nvpub = TPM2B_NV_PUBLIC(
nvPublic=TPMS_NV_PUBLIC(
nvIndex=0x1000000,
nameAlg=TPM2_ALG.SHA256,
attributes=TPMA_NV.OWNERWRITE
| TPMA_NV.OWNERREAD
| TPMA_NV.READ_STCLEAR,
authPolicy=b"",
dataSize=8,
)
)
nvhandle = self.ectx.NV_DefineSpace(b"", nvpub)
indata = b"12345678"
self.ectx.NV_Write(nvhandle, indata, authHandle=ESYS_TR.RH_OWNER)
self.ectx.NV_ReadLock(nvhandle, authHandle=ESYS_TR.RH_OWNER)
with self.assertRaises(TSS2_Exception) as e:
self.ectx.NV_Read(nvhandle, 8, authHandle=ESYS_TR.RH_OWNER)
self.assertEqual(e.exception.error, TPM2_RC.NV_LOCKED)
def test_NV_ChangeAuth(self):
# pre-generated TPM2_PolicyCommandCode(TPM2_CC_NV_ChangeAuth)
pol = b"D^\xd9S`\x1a\x04U\x04U\t\x99\xbf,\xbb)\x92\xcb\xa2\xdb\xb5\x12\x1b\xcf\x03\x86\x9fe\xb5\x0c&\xe5"
nvpub = TPM2B_NV_PUBLIC(
nvPublic=TPMS_NV_PUBLIC(
nvIndex=0x1000000,
nameAlg=TPM2_ALG.SHA256,
attributes=TPMA_NV.OWNERWRITE | TPMA_NV.OWNERREAD | TPMA_NV.AUTHREAD,
authPolicy=pol,
dataSize=8,
)
)
nvhandle = self.ectx.NV_DefineSpace(b"first", nvpub)
self.ectx.NV_Write(nvhandle, b"sometest", authHandle=ESYS_TR.RH_OWNER)
self.ectx.NV_Read(nvhandle, 8, authHandle=nvhandle)
session = self.ectx.StartAuthSession(
ESYS_TR.NONE,
ESYS_TR.NONE,
None,
TPM2_SE.POLICY,
TPMT_SYM_DEF(algorithm=TPM2_ALG.NULL),
TPM2_ALG.SHA256,
)
self.ectx.PolicyCommandCode(session, TPM2_CC.NV_ChangeAuth)
self.ectx.NV_ChangeAuth(nvhandle, b"second", session1=session)
self.ectx.NV_Read(nvhandle, 8, authHandle=nvhandle)
def test_NV_Certify(self):
nvpub = TPM2B_NV_PUBLIC(
nvPublic=TPMS_NV_PUBLIC(
nvIndex=0x1000000,
nameAlg=TPM2_ALG.SHA256,
attributes=TPMA_NV.OWNERWRITE | TPMA_NV.OWNERREAD,
authPolicy=b"",
dataSize=8,
)
)
nvhandle = self.ectx.NV_DefineSpace(b"", nvpub)
self.ectx.NV_Write(nvhandle, b"sometest", authHandle=ESYS_TR.RH_OWNER)
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa:rsassa-sha256",
objectAttributes=TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SIGN_ENCRYPT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inSensitive = TPM2B_SENSITIVE_CREATE()
eccHandle, signPub, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
qualifyingData = TPM2B_DATA(b"qdata")
inScheme = TPMT_SIG_SCHEME(scheme=TPM2_ALG.NULL)
certifyInfo, _ = self.ectx.NV_Certify(
eccHandle,
nvhandle,
qualifyingData,
inScheme,
8,
authHandle=ESYS_TR.RH_OWNER,
session1=ESYS_TR.PASSWORD,
session2=ESYS_TR.PASSWORD,
)
att, _ = TPMS_ATTEST.Unmarshal(bytes(certifyInfo))
self.assertEqual(att.magic, TPM2_GENERATED_VALUE(0xFF544347))
self.assertEqual(att.type, TPM2_ST.ATTEST_NV)
self.assertEqual(bytes(att.extraData), b"qdata")
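# The attested index name reflects TPMA_NV.WRITTEN, which was set by the NV_Write above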
nvpub.nvPublic.attributes = nvpub.nvPublic.attributes | TPMA_NV.WRITTEN
self.assertEqual(bytes(att.attested.nv.indexName), bytes(nvpub.getName()))
self.assertEqual(att.attested.nv.offset, 0)
self.assertEqual(att.attested.nv.nvContents.buffer, b"sometest")
def test_Certify(self):
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa:rsassa-sha256",
objectAttributes=TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SIGN_ENCRYPT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inSensitive = TPM2B_SENSITIVE_CREATE()
eccHandle = self.ectx.CreatePrimary(inSensitive, inPublic)[0]
qualifyingData = TPM2B_DATA()
inScheme = TPMT_SIG_SCHEME(scheme=TPM2_ALG.NULL)
certifyInfo, signature = self.ectx.Certify(
eccHandle, eccHandle, qualifyingData, inScheme
)
self.assertEqual(type(certifyInfo), TPM2B_ATTEST)
self.assertNotEqual(len(certifyInfo), 0)
self.assertEqual(type(signature), TPMT_SIGNATURE)
certifyInfo, signature = self.ectx.Certify(
eccHandle, eccHandle, b"12345678", inScheme
)
self.assertEqual(type(certifyInfo), TPM2B_ATTEST)
self.assertNotEqual(len(certifyInfo), 0)
self.assertEqual(type(signature), TPMT_SIGNATURE)
with self.assertRaises(TypeError):
certifyInfo, signature = self.ectx.Certify(
TPM2B_ATTEST(), eccHandle, qualifyingData, inScheme
)
with self.assertRaises(TypeError):
certifyInfo, signature = self.ectx.Certify(
eccHandle, 2.0, qualifyingData, inScheme
)
with self.assertRaises(TypeError):
certifyInfo, signature = self.ectx.Certify(
eccHandle, eccHandle, TPM2B_PUBLIC(), inScheme
)
with self.assertRaises(TypeError):
certifyInfo, signature = self.ectx.Certify(
eccHandle, eccHandle, qualifyingData, TPM2B_PRIVATE()
)
with self.assertRaises(TypeError):
self.ectx.Certify(
eccHandle, eccHandle, qualifyingData, inScheme, session1=56.7
)
with self.assertRaises(TypeError):
self.ectx.Certify(
eccHandle, eccHandle, qualifyingData, inScheme, session2="foo"
)
with self.assertRaises(TypeError):
self.ectx.Certify(
eccHandle, eccHandle, qualifyingData, inScheme, session3=object()
)
def test_CertifyCreation(self):
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa:rsassa-sha256",
objectAttributes=TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SIGN_ENCRYPT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inSensitive = TPM2B_SENSITIVE_CREATE()
eccHandle, _, _, creationHash, creationTicket = self.ectx.CreatePrimary(
inSensitive, inPublic
)
qualifyingData = TPM2B_DATA()
inScheme = TPMT_SIG_SCHEME(scheme=TPM2_ALG.NULL)
certifyInfo, signature = self.ectx.CertifyCreation(
eccHandle, eccHandle, qualifyingData, creationHash, inScheme, creationTicket
)
self.assertEqual(type(certifyInfo), TPM2B_ATTEST)
self.assertNotEqual(len(certifyInfo), 0)
self.assertEqual(type(signature), TPMT_SIGNATURE)
with self.assertRaises(TypeError):
self.ectx.CertifyCreation(
45.6, eccHandle, qualifyingData, creationHash, inScheme, creationTicket
)
with self.assertRaises(TypeError):
self.ectx.CertifyCreation(
eccHandle,
object(),
qualifyingData,
creationHash,
inScheme,
creationTicket,
)
with self.assertRaises(TypeError):
self.ectx.CertifyCreation(
eccHandle,
eccHandle,
TPM2B_PUBLIC(),
creationHash,
inScheme,
creationTicket,
)
with self.assertRaises(TypeError):
self.ectx.CertifyCreation(
eccHandle, eccHandle, qualifyingData, object(), inScheme, creationTicket
)
with self.assertRaises(TypeError):
self.ectx.CertifyCreation(
eccHandle, eccHandle, qualifyingData, creationHash, [], creationTicket
)
with self.assertRaises(TypeError):
self.ectx.CertifyCreation(
eccHandle, eccHandle, qualifyingData, creationHash, inScheme, 56.7
)
with self.assertRaises(TypeError):
self.ectx.CertifyCreation(
eccHandle,
eccHandle,
qualifyingData,
creationHash,
inScheme,
creationTicket,
session1=56.7,
)
with self.assertRaises(TypeError):
self.ectx.CertifyCreation(
eccHandle,
eccHandle,
qualifyingData,
creationHash,
inScheme,
creationTicket,
session2=object(),
)
with self.assertRaises(TypeError):
self.ectx.CertifyCreation(
eccHandle,
eccHandle,
qualifyingData,
creationHash,
inScheme,
creationTicket,
session3="baz",
)
def test_Vendor_TCG_Test(self):
with self.assertRaises(TSS2_Exception):
self.ectx.Vendor_TCG_Test(b"random data")
in_cdata = TPM2B_DATA(b"other bytes")._cdata
with self.assertRaises(TSS2_Exception):
self.ectx.Vendor_TCG_Test(in_cdata)
with self.assertRaises(TypeError):
self.ectx.Vendor_TCG_Test(None)
with self.assertRaises(TypeError):
self.ectx.Vendor_TCG_Test(TPM2B_PUBLIC())
def test_FieldUpgradeStart(self):
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa:rsassa-sha256",
objectAttributes=TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SIGN_ENCRYPT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inSensitive = TPM2B_SENSITIVE_CREATE()
keyhandle, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
with self.assertRaises(TSS2_Exception) as e:
self.ectx.FieldUpgradeStart(
ESYS_TR.PLATFORM,
keyhandle,
b"",
TPMT_SIGNATURE(sigAlg=TPM2_ALG.NULL),
session1=ESYS_TR.PASSWORD,
)
self.assertEqual(e.exception.error, TPM2_RC.COMMAND_CODE)
def test_FieldUpgradeData(self):
with self.assertRaises(TSS2_Exception) as e:
self.ectx.FieldUpgradeData(b"")
self.assertEqual(e.exception.error, TPM2_RC.COMMAND_CODE)
def test_FirmwareRead(self):
with self.assertRaises(TSS2_Exception) as e:
self.ectx.FirmwareRead(0)
self.assertEqual(e.exception.error, TPM2_RC.COMMAND_CODE)
def test_shutdown_no_arg(self):
self.ectx.Shutdown(TPM2_SU.STATE)
def test_shutdown_state(self):
self.ectx.Shutdown(TPM2_SU.STATE)
def test_shutdown_clear(self):
self.ectx.Shutdown(TPM2_SU.CLEAR)
def test_shutdown_bad(self):
with self.assertRaises(TypeError):
self.ectx.Shutdown(object())
with self.assertRaises(ValueError):
self.ectx.Shutdown(42)
with self.assertRaises(TypeError):
self.ectx.Shutdown(session1=object())
with self.assertRaises(TypeError):
self.ectx.Shutdown(session2=object())
with self.assertRaises(TypeError):
self.ectx.Shutdown(session3=object())
def test_policyrestart(self):
sym = TPMT_SYM_DEF(
algorithm=TPM2_ALG.XOR,
keyBits=TPMU_SYM_KEY_BITS(exclusiveOr=TPM2_ALG.SHA256),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
session = self.ectx.StartAuthSession(
tpmKey=ESYS_TR.NONE,
bind=ESYS_TR.NONE,
nonceCaller=None,
sessionType=TPM2_SE.TRIAL,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.PolicyRestart(session)
with self.assertRaises(TypeError):
self.ectx.PolicyRestart(object())
with self.assertRaises(TypeError):
self.ectx.PolicyRestart(session, session1=4.5)
with self.assertRaises(TypeError):
self.ectx.PolicyRestart(session, session2=object())
with self.assertRaises(TypeError):
self.ectx.PolicyRestart(session, session3=33.666)
def test_duplicate(self):
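# Build the duplication policy (auth value + TPM2_CC.Duplicate) in a trial session, then satisfy it in a real policy session before calling Duplicate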
sym = TPMT_SYM_DEF(
algorithm=TPM2_ALG.XOR,
keyBits=TPMU_SYM_KEY_BITS(exclusiveOr=TPM2_ALG.SHA256),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
session = self.ectx.StartAuthSession(
tpmKey=ESYS_TR.NONE,
bind=ESYS_TR.NONE,
nonceCaller=None,
sessionType=TPM2_SE.TRIAL,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.PolicyAuthValue(session)
self.ectx.PolicyCommandCode(session, TPM2_CC.Duplicate)
policyDigest = self.ectx.PolicyGetDigest(session)
self.ectx.FlushContext(session)
session = None
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa2048",
objectAttributes=TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.RESTRICTED
| TPMA_OBJECT.DECRYPT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inSensitive = TPM2B_SENSITIVE_CREATE()
primary1, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
primary2, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa2048:aes128cfb",
objectAttributes=TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.RESTRICTED
| TPMA_OBJECT.DECRYPT
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inPublic.publicArea.authPolicy = policyDigest
priv, pub, _, _, _ = self.ectx.Create(primary1, inSensitive, inPublic)
childHandle = self.ectx.Load(primary1, priv, pub)
session = self.ectx.StartAuthSession(
tpmKey=ESYS_TR.NONE,
bind=ESYS_TR.NONE,
nonceCaller=None,
sessionType=TPM2_SE.POLICY,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.PolicyAuthValue(session)
self.ectx.PolicyCommandCode(session, TPM2_CC.Duplicate)
encryptionKey = TPM2B_DATA("is sixteen bytes")
sym = TPMT_SYM_DEF_OBJECT(
algorithm=TPM2_ALG.AES,
keyBits=TPMU_SYM_KEY_BITS(aes=128),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
encryptionKeyOut, duplicate, symSeed = self.ectx.Duplicate(
childHandle, primary2, encryptionKey, sym, session1=session
)
self.assertEqual(type(encryptionKeyOut), TPM2B_DATA)
self.assertEqual(type(duplicate), TPM2B_PRIVATE)
self.assertEqual(type(symSeed), TPM2B_ENCRYPTED_SECRET)
with self.assertRaises(TypeError):
self.ectx.Duplicate(6.7, primary2, encryptionKey, sym, session1=session)
with self.assertRaises(TypeError):
self.ectx.Duplicate(
childHandle, object(), encryptionKey, sym, session1=session
)
with self.assertRaises(TypeError):
self.ectx.Duplicate(
childHandle, primary2, TPM2B_PUBLIC(), sym, session1=session
)
with self.assertRaises(TypeError):
self.ectx.Duplicate(
childHandle, primary2, encryptionKey, b"1234", session1=session
)
with self.assertRaises(TypeError):
self.ectx.Duplicate(
childHandle, primary2, encryptionKey, sym, session1=7.89
)
with self.assertRaises(TypeError):
self.ectx.Duplicate(
childHandle,
primary2,
encryptionKey,
sym,
session1=session,
session2="foo",
)
with self.assertRaises(TypeError):
self.ectx.Duplicate(
childHandle,
primary2,
encryptionKey,
sym,
session1=session,
session2=ESYS_TR.PASSWORD,
session3="foo",
)
def test_PolicyAuthValue(self):
sym = TPMT_SYM_DEF(
algorithm=TPM2_ALG.XOR,
keyBits=TPMU_SYM_KEY_BITS(exclusiveOr=TPM2_ALG.SHA256),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
session = self.ectx.StartAuthSession(
tpmKey=ESYS_TR.NONE,
bind=ESYS_TR.NONE,
nonceCaller=None,
sessionType=TPM2_SE.TRIAL,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.PolicyAuthValue(session)
with self.assertRaises(TypeError):
self.ectx.PolicyAuthValue(b"1234")
def test_PolicyCommandCode(self):
sym = TPMT_SYM_DEF(
algorithm=TPM2_ALG.XOR,
keyBits=TPMU_SYM_KEY_BITS(exclusiveOr=TPM2_ALG.SHA256),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
session = self.ectx.StartAuthSession(
tpmKey=ESYS_TR.NONE,
bind=ESYS_TR.NONE,
nonceCaller=None,
sessionType=TPM2_SE.TRIAL,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.PolicyCommandCode(session, TPM2_CC.Duplicate)
with self.assertRaises(TypeError):
self.ectx.PolicyCommandCode(b"1234", TPM2_CC.Duplicate)
with self.assertRaises(TypeError):
self.ectx.PolicyCommandCode(session, b"12345")
with self.assertRaises(ValueError):
self.ectx.PolicyCommandCode(session, 42)
def test_PolicyGetDigest(self):
sym = TPMT_SYM_DEF(
algorithm=TPM2_ALG.XOR,
keyBits=TPMU_SYM_KEY_BITS(exclusiveOr=TPM2_ALG.SHA256),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
session = self.ectx.StartAuthSession(
tpmKey=ESYS_TR.NONE,
bind=ESYS_TR.NONE,
nonceCaller=None,
sessionType=TPM2_SE.TRIAL,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.PolicyAuthValue(session)
self.ectx.PolicyCommandCode(session, TPM2_CC.Duplicate)
policyDigest = self.ectx.PolicyGetDigest(session)
self.assertEqual(type(policyDigest), TPM2B_DIGEST)
def test_rewrap(self):
sym = TPMT_SYM_DEF(
algorithm=TPM2_ALG.XOR,
keyBits=TPMU_SYM_KEY_BITS(exclusiveOr=TPM2_ALG.SHA256),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
session = self.ectx.StartAuthSession(
tpmKey=ESYS_TR.NONE,
bind=ESYS_TR.NONE,
nonceCaller=None,
sessionType=TPM2_SE.TRIAL,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.PolicyAuthValue(session)
self.ectx.PolicyCommandCode(session, TPM2_CC.Duplicate)
policyDigest = self.ectx.PolicyGetDigest(session)
self.ectx.FlushContext(session)
session = None
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa2048",
objectAttributes=TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.RESTRICTED
| TPMA_OBJECT.DECRYPT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inSensitive = TPM2B_SENSITIVE_CREATE()
primary1, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
primary2, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa2048:aes128cfb",
objectAttributes=TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.RESTRICTED
| TPMA_OBJECT.DECRYPT
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inPublic.publicArea.authPolicy = policyDigest
priv, pub, _, _, _ = self.ectx.Create(primary1, inSensitive, inPublic)
childHandle = self.ectx.Load(primary1, priv, pub)
session = self.ectx.StartAuthSession(
tpmKey=ESYS_TR.NONE,
bind=ESYS_TR.NONE,
nonceCaller=None,
sessionType=TPM2_SE.POLICY,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.PolicyAuthValue(session)
self.ectx.PolicyCommandCode(session, TPM2_CC.Duplicate)
encryptionKey = TPM2B_DATA("is sixteen bytes")
sym = TPMT_SYM_DEF_OBJECT(
algorithm=TPM2_ALG.AES,
keyBits=TPMU_SYM_KEY_BITS(aes=128),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
_, duplicate, symSeed = self.ectx.Duplicate(
childHandle, primary2, encryptionKey, sym, session1=session
)
keyName = pub.publicArea.getName()
duplicate, symSeed = self.ectx.Rewrap(
primary2, primary1, duplicate, keyName, symSeed
)
self.assertEqual(type(duplicate), TPM2B_PRIVATE)
self.assertEqual(type(symSeed), TPM2B_ENCRYPTED_SECRET)
with self.assertRaises(TypeError):
self.ectx.Rewrap(67.3, primary1, duplicate, keyName, symSeed)
with self.assertRaises(TypeError):
self.ectx.Rewrap(primary2, object(), duplicate, keyName, symSeed)
with self.assertRaises(TypeError):
self.ectx.Rewrap(primary2, primary1, TPM2B_NAME(), keyName, symSeed)
with self.assertRaises(TypeError):
self.ectx.Rewrap(primary2, primary1, duplicate, TPM2B_PRIVATE(), symSeed)
with self.assertRaises(TypeError):
self.ectx.Rewrap(
primary2, primary1, duplicate, keyName, symSeed, session1="goo"
)
with self.assertRaises(TypeError):
self.ectx.Rewrap(
primary2, primary1, duplicate, keyName, symSeed, session2=45.6
)
with self.assertRaises(TypeError):
self.ectx.Rewrap(
primary2, primary1, duplicate, keyName, symSeed, session3=object()
)
def test_Import(self):
sym = TPMT_SYM_DEF(
algorithm=TPM2_ALG.XOR,
keyBits=TPMU_SYM_KEY_BITS(exclusiveOr=TPM2_ALG.SHA256),
mode=TPMU_SYM_MODE(aes=TPM2_ALG.CFB),
)
session = self.ectx.StartAuthSession(
tpmKey=ESYS_TR.NONE,
bind=ESYS_TR.NONE,
nonceCaller=None,
sessionType=TPM2_SE.TRIAL,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.PolicyAuthValue(session)
self.ectx.PolicyCommandCode(session, TPM2_CC.Duplicate)
policyDigest = self.ectx.PolicyGetDigest(session)
self.ectx.FlushContext(session)
session = None
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa2048",
objectAttributes=TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.RESTRICTED
| TPMA_OBJECT.DECRYPT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inSensitive = TPM2B_SENSITIVE_CREATE()
primary1, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
primary2, _, _, _, _ = self.ectx.CreatePrimary(inSensitive, inPublic)
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa2048:aes128cfb",
objectAttributes=TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.RESTRICTED
| TPMA_OBJECT.DECRYPT
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inPublic.publicArea.authPolicy = policyDigest
priv, pub, _, _, _ = self.ectx.Create(primary1, inSensitive, inPublic)
childHandle = self.ectx.Load(primary1, priv, pub)
session = self.ectx.StartAuthSession(
tpmKey=ESYS_TR.NONE,
bind=ESYS_TR.NONE,
nonceCaller=None,
sessionType=TPM2_SE.POLICY,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.PolicyAuthValue(session)
self.ectx.PolicyCommandCode(session, TPM2_CC.Duplicate)
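# Duplicate without an inner wrapper (NULL symmetric); primary1 and primary2 are built from the same template, so they are the same primary key and Import succeeds under primary1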
sym = TPMT_SYM_DEF_OBJECT(algorithm=TPM2_ALG.NULL,)
encryptionKey, duplicate, symSeed = self.ectx.Duplicate(
childHandle, primary2, None, sym, session1=session
)
private = self.ectx.Import(
primary1, encryptionKey, pub, duplicate, symSeed, sym
)
self.assertEqual(type(private), TPM2B_PRIVATE)
with self.assertRaises(TypeError):
self.ectx.Import(98.5, encryptionKey, pub, duplicate, symSeed, sym)
with self.assertRaises(TypeError):
self.ectx.Import(primary1, TPM2B_ECC_POINT(), pub, duplicate, symSeed, sym)
with self.assertRaises(TypeError):
self.ectx.Import(
primary1, encryptionKey, TPM2B_DATA(), duplicate, symSeed, sym
)
with self.assertRaises(TypeError):
self.ectx.Import(primary1, encryptionKey, pub, object(), symSeed, sym)
with self.assertRaises(TypeError):
self.ectx.Import(primary1, encryptionKey, pub, duplicate, None, sym)
with self.assertRaises(TypeError):
self.ectx.Import(
primary1, encryptionKey, pub, duplicate, symSeed, TPM2B_PUBLIC()
)
with self.assertRaises(TypeError):
self.ectx.Import(
primary1, encryptionKey, pub, duplicate, symSeed, sym, session1="boo"
)
with self.assertRaises(TypeError):
self.ectx.Import(
primary1, encryptionKey, pub, duplicate, symSeed, sym, session2=object()
)
with self.assertRaises(TypeError):
self.ectx.Import(
primary1, encryptionKey, pub, duplicate, symSeed, sym, session3=4.5
)
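# test_Quote: create an ECDSA signing primary and request PCR quotes, passing the PCR selection both as a "sha256:1,2,3,4" string and as a TPML_PCR_SELECTION.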
def test_Quote(self):
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="ecc:ecdsa",
objectAttributes=TPMA_OBJECT.SIGN_ENCRYPT
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SENSITIVEDATAORIGIN,
)
)
inSensitive = TPM2B_SENSITIVE_CREATE(TPMS_SENSITIVE_CREATE())
parentHandle = self.ectx.CreatePrimary(inSensitive, inPublic)[0]
quote, signature = self.ectx.Quote(
parentHandle, "sha256:1,2,3,4", TPM2B_DATA(b"123456789")
)
self.assertEqual(type(quote), TPM2B_ATTEST)
self.assertEqual(type(signature), TPMT_SIGNATURE)
quote, signature = self.ectx.Quote(
parentHandle, TPML_PCR_SELECTION.parse("sha256:1,2,3,4"), TPM2B_DATA()
)
self.assertEqual(type(quote), TPM2B_ATTEST)
self.assertEqual(type(signature), TPMT_SIGNATURE)
quote, signature = self.ectx.Quote(
parentHandle,
"sha256:1,2,3,4",
TPM2B_DATA(),
inScheme=TPMT_SIG_SCHEME(scheme=TPM2_ALG.NULL),
)
self.assertEqual(type(quote), TPM2B_ATTEST)
self.assertEqual(type(signature), TPMT_SIGNATURE)
with self.assertRaises(TypeError):
self.ectx.Quote(42.0, "sha256:1,2,3,4", TPM2B_DATA())
with self.assertRaises(TypeError):
self.ectx.Quote(parentHandle, b"sha256:1,2,3,4")
with self.assertRaises(TypeError):
self.ectx.Quote(parentHandle, "sha256:1,2,3,4", qualifyingData=object())
with self.assertRaises(TypeError):
self.ectx.Quote(parentHandle, "sha256:1,2,3,4", inScheme=87)
with self.assertRaises(TypeError):
self.ectx.Quote(parentHandle, "sha256:1,2,3,4", session1="foo")
with self.assertRaises(TypeError):
self.ectx.Quote(parentHandle, "sha256:1,2,3,4", session2=25.68)
with self.assertRaises(TypeError):
self.ectx.Quote(parentHandle, "sha256:1,2,3,4", session3=object())
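# test_GetSessionAuditDigest: run GetCapability under an audit HMAC session, then have the signing key attest the session's audit digest.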
def test_GetSessionAuditDigest(self):
inPublic = TPM2B_PUBLIC(
TPMT_PUBLIC.parse(
alg="rsa2048:rsassa:null",
objectAttributes=TPMA_OBJECT.SIGN_ENCRYPT
| TPMA_OBJECT.FIXEDPARENT
| TPMA_OBJECT.FIXEDTPM
| TPMA_OBJECT.USERWITHAUTH
| TPMA_OBJECT.SENSITIVEDATAORIGIN
| TPMA_OBJECT.RESTRICTED,
)
)
inSensitive = TPM2B_SENSITIVE_CREATE(TPMS_SENSITIVE_CREATE())
signHandle = self.ectx.CreatePrimary(inSensitive, inPublic)[0]
sym = TPMT_SYM_DEF(algorithm=TPM2_ALG.NULL,)
session = self.ectx.StartAuthSession(
tpmKey=ESYS_TR.NONE,
bind=ESYS_TR.NONE,
nonceCaller=None,
sessionType=TPM2_SE.HMAC,
symmetric=sym,
authHash=TPM2_ALG.SHA256,
)
self.ectx.TRSess_SetAttributes(
session, TPMA_SESSION.AUDIT | TPMA_SESSION.CONTINUESESSION
)
self.ectx.GetCapability(
TPM2_CAP.COMMANDS, TPM2_CC.FIRST, lib.TPM2_MAX_CAP_CC, session1=session
)
auditInfo, signature = self.ectx.GetSessionAuditDigest(
signHandle, session, b"1234"
)
self.assertEqual(type(auditInfo), TPM2B_ATTEST)
self.assertEqual(type(signature), TPMT_SIGNATURE)
with self.assertRaises(TypeError):
self.ectx.GetSessionAuditDigest(45.89, session, b"1234")
with self.assertRaises(TypeError):
self.ectx.GetSessionAuditDigest(signHandle, object(), b"1234")
with self.assertRaises(TypeError):
self.ectx.GetSessionAuditDigest(signHandle, session, list())
with self.assertRaises(TypeError):
self.ectx.GetSessionAuditDigest(
signHandle, session, b"1234", privacyAdminHandle=45.6
)
with self.assertRaises(ValueError):
self.ectx.GetSessionAuditDigest(
signHandle, session, b"1234", privacyAdminHandle=ESYS_TR.LOCKOUT
)
with self.assertRaises(TypeError):
self.ectx.GetSessionAuditDigest(
signHandle, session, b"1234", session1="baz"
)
with self.assertRaises(TypeError):
self.ectx.GetSessionAuditDigest(
signHandle, session, b"1234", session2=object()
)
with self.assertRaises(TypeError):
self.ectx.GetSessionAuditDigest(
signHandle, session, b"1234", session3=12.723
)
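# test_PP_Commands: physical presence is not asserted here, so the TPM is expected to reject the call with TPM2_RC.PP; the remaining cases cover argument validation.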
def test_PP_Commands(self):
with self.assertRaises(TSS2_Exception) as e:
self.ectx.PP_Commands(TPML_CC(), TPML_CC(), session1=ESYS_TR.PASSWORD)
self.assertEqual(e.exception.error, TPM2_RC.PP)
self.assertEqual(e.exception.session, 1)
with self.assertRaises(TypeError):
self.ectx.PP_Commands(b"bad setList", TPML_CC(), session1=ESYS_TR.PASSWORD)
with self.assertRaises(TypeError):
self.ectx.PP_Commands(TPML_CC(), None, session1=ESYS_TR.PASSWORD)
with self.assertRaises(TypeError):
self.ectx.PP_Commands(TPML_CC(), TPML_CC(), session1=b"0xF1F1")
with self.assertRaises(TypeError):
self.ectx.PP_Commands(TPML_CC(), TPML_CC(), session2=b"0xF1F1")
with self.assertRaises(TypeError):
self.ectx.PP_Commands(TPML_CC(), TPML_CC(), session3=b"0xF1F1")
with self.assertRaises(TypeError):
self.ectx.PP_Commands(TPML_CC(), TPML_CC(), authHandle="platform")
def test_SetAlgorithmSet(self):
self.ectx.SetAlgorithmSet(0)
with self.assertRaises(TypeError):
self.ectx.SetAlgorithmSet([1, 2, 3])
with self.assertRaises(TypeError):
self.ectx.SetAlgorithmSet(session2=set(3, 2, 1))
with self.assertRaises(TypeError):
self.ectx.SetAlgorithmSet(session1=set(4, 3, 2))
with self.assertRaises(TypeError):
self.ectx.SetAlgorithmSet(session3=set(5, 4, 3))
with self.assertRaises(TypeError):
self.ectx.SetAlgorithmSet(authHandle=None)
def test_DictionaryAttackLockReset(self):
self.ectx.DictionaryAttackLockReset()
with self.assertRaises(TSS2_Exception) as e:
self.ectx.DictionaryAttackLockReset(lockHandle=ESYS_TR.RH_OWNER)
self.assertEqual(e.exception.error, 132)
self.assertEqual(e.exception.handle, 1)
with self.assertRaises(TypeError):
self.ectx.DictionaryAttackLockReset([1, 2, 3])
with self.assertRaises(TypeError):
self.ectx.DictionaryAttackLockReset(session2=set(3, 2, 1))
with self.assertRaises(TypeError):
self.ectx.DictionaryAttackLockReset(session1=set(4, 3, 2))
with self.assertRaises(TypeError):
self.ectx.DictionaryAttackLockReset(session3=set(5, 4, 3))
with self.assertRaises(TypeError):
self.ectx.DictionaryAttackLockReset(lockHandle=None)
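# test_DictionaryAttackParameters: the three positional arguments correspond to newMaxTries, newRecoveryTime and lockoutRecovery, and all must be integers.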
def test_DictionaryAttackParameters(self):
self.ectx.DictionaryAttackParameters(1, 2, 3)
with self.assertRaises(TypeError):
self.ectx.DictionaryAttackParameters(None, 2, 3)
with self.assertRaises(TypeError):
self.ectx.DictionaryAttackParameters(1, None, 3)
with self.assertRaises(TypeError):
self.ectx.DictionaryAttackParameters(1, 2, None)
with self.assertRaises(TypeError):
self.ectx.DictionaryAttackParameters(1, 2, 3, session2=set(3, 2, 1))
with self.assertRaises(TypeError):
self.ectx.DictionaryAttackParameters(1, 2, 3, session1=set(4, 3, 2))
with self.assertRaises(TypeError):
self.ectx.DictionaryAttackParameters(1, 2, 3, session3=set(5, 4, 3))
with self.assertRaises(TypeError):
self.ectx.DictionaryAttackParameters(1, 2, 3, lockHandle=None)
if __name__ == "__main__":
unittest.main()
| 34.49267 | 119 | 0.617751 | 9,976 | 101,167 | 6.098637 | 0.060345 | 0.073767 | 0.092373 | 0.118212 | 0.854438 | 0.829553 | 0.808202 | 0.76476 | 0.707627 | 0.655243 | 0 | 0.034156 | 0.286635 | 101,167 | 2,932 | 120 | 34.504434 | 0.808865 | 0.010181 | 0 | 0.584158 | 0 | 0.00135 | 0.034235 | 0.008431 | 0 | 0 | 0.001668 | 0.000341 | 0.19982 | 1 | 0.035104 | false | 0.021152 | 0.006301 | 0 | 0.041854 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3dbb29e50dd0ef73a0cedadacde7895ae71a1426 | 1,903 | py | Python | src/dijkstra/main.py | Andre-Gilbert/Dijkstra-Shortest-Path | 05d3f13a2e30ac1c6962cb2ca4b67928747bbb35 | [
"MIT"
] | 1 | 2021-06-11T08:09:09.000Z | 2021-06-11T08:09:09.000Z | src/dijkstra/main.py | Andre-Gilbert/Pathfinding-Algorithms | 05d3f13a2e30ac1c6962cb2ca4b67928747bbb35 | [
"MIT"
] | null | null | null | src/dijkstra/main.py | Andre-Gilbert/Pathfinding-Algorithms | 05d3f13a2e30ac1c6962cb2ca4b67928747bbb35 | [
"MIT"
] | null | null | null | """Runs Dijkstra's Algorithm.
Typical usage example:
v_a = Vertex('A')
v_b = Vertex('B')
v_c = Vertex('C')
v_d = Vertex('D')
v_e = Vertex('E')
v_f = Vertex('F')
vertices = {v_a, v_b, v_c, v_d, v_e, v_f}
e_a_b = Edge(v_a, v_b, 7)
e_a_c = Edge(v_a, v_c, 9)
e_a_f = Edge(v_a, v_f, 14)
e_b_c = Edge(v_b, v_c, 10)
e_b_d = Edge(v_b, v_d, 15)
e_c_d = Edge(v_c, v_d, 11)
e_c_f = Edge(v_c, v_f, 2)
e_d_e = Edge(v_d, v_e, 6)
e_f_e = Edge(v_f, v_e, 9)
edges = {e_a_b, e_a_c, e_a_f, e_b_c, e_b_d, e_c_d, e_c_f, e_d_e, e_f_e}
graph = Graph(vertices, edges)
print("\nDijkstra Lazy Version:")
print("-" * 31)
dijkstra_lazy(graph, v_a, v_e)
# Reset visited vertices
for vertex in graph.vertices:
vertex.visited = False
print("\nDijkstra Eager Version:")
print("-" * 31)
dijkstra_eager(graph, v_a, v_e)
"""
from algorithms import dijkstra_eager, dijkstra_lazy
from data_structures import Edge, Graph, Vertex
# Example usage
if __name__ == "__main__":
v_a = Vertex('A')
v_b = Vertex('B')
v_c = Vertex('C')
v_d = Vertex('D')
v_e = Vertex('E')
v_f = Vertex('F')
vertices = {v_a, v_b, v_c, v_d, v_e, v_f}
e_a_b = Edge(v_a, v_b, 7)
e_a_c = Edge(v_a, v_c, 9)
e_a_f = Edge(v_a, v_f, 14)
e_b_c = Edge(v_b, v_c, 10)
e_b_d = Edge(v_b, v_d, 15)
e_c_d = Edge(v_c, v_d, 11)
e_c_f = Edge(v_c, v_f, 2)
e_d_e = Edge(v_d, v_e, 6)
e_f_e = Edge(v_f, v_e, 9)
edges = {e_a_b, e_a_c, e_a_f, e_b_c, e_b_d, e_c_d, e_c_f, e_d_e, e_f_e}
graph = Graph(vertices, edges)
print("\nDijkstra Lazy Version:")
print("-" * 31)
dijkstra_lazy(graph, v_a, v_e)
# Reset visited vertices
for vertex in graph.vertices:
vertex.visited = False
print("\nDijkstra Eager Version:")
print("-" * 31)
dijkstra_eager(graph, v_a, v_e)
| 25.716216 | 75 | 0.590646 | 400 | 1,903 | 2.4225 | 0.115 | 0.092879 | 0.037152 | 0.043344 | 0.848297 | 0.848297 | 0.848297 | 0.848297 | 0.848297 | 0.848297 | 0 | 0.024079 | 0.258014 | 1,903 | 73 | 76 | 26.068493 | 0.662181 | 0.491855 | 0 | 0.068966 | 0 | 0 | 0.067921 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.068966 | 0 | 0.068966 | 0.137931 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3dc5bea3f3fe3abb1e7657553a7d3eb2926caa4c | 5,812 | py | Python | modules/old/partition.py | jperezvisaires/tfg-intphys | 8c32c383bf00c00b0fc627ba7bf1192bc3011c40 | [
"MIT"
] | 1 | 2020-01-25T19:43:45.000Z | 2020-01-25T19:43:45.000Z | modules/old/partition.py | jperezvisaires/tfg-intphys | 8c32c383bf00c00b0fc627ba7bf1192bc3011c40 | [
"MIT"
] | null | null | null | modules/old/partition.py | jperezvisaires/tfg-intphys | 8c32c383bf00c00b0fc627ba7bf1192bc3011c40 | [
"MIT"
] | null | null | null | import numpy as np
import h5py
BASE_PATH = "train"
def data_dictionaries_segmentation(num_scenes, path_hdf5, num_initial=1, test=False, base_path=BASE_PATH):
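"""Map each scene frame in the HDF5 file to its segmentation-mask path and return the (partition, targets) dictionaries."""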
frames = 100
list_samples = []
list_targets_mask = []
for scene in range(num_initial, num_initial+num_scenes):
scene_path = "{}/{:05d}".format(base_path, scene)
with h5py.File(path_hdf5, "r") as f:
if scene_path in f:
for i in range(frames):
path_image = scene_path + "/scene/scene_{:03d}".format(i+1)
list_samples.append(path_image)
path_mask = scene_path + "/masks/masks_{:03d}".format(i+1)
list_targets_mask.append(path_mask)
partition = partition_dictionary(list_samples, num_scenes, test)
targets = targets_dictionary(list_samples, list_targets_mask)
return partition, targets
def data_dictionaries_depth(num_scenes, path_hdf5, num_initial=1, test=False, base_path=BASE_PATH):
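"""Same as data_dictionaries_segmentation, but pair each scene frame with its depth-map path."""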
frames = 100
list_samples = []
list_targets_depth = []
for scene in range(num_initial, num_initial+num_scenes):
scene_path = "{}/{:05d}".format(base_path, scene)
with h5py.File(path_hdf5, "r") as f:
if scene_path in f:
for i in range(frames):
path_image = scene_path + "/scene/scene_{:03d}".format(i+1)
list_samples.append(path_image)
path_depth = scene_path + "/depth/depth_{:03d}".format(i+1)
list_targets_depth.append(path_depth)
partition = partition_dictionary(list_samples, num_scenes, test)
targets = targets_dictionary(list_samples, list_targets_depth)
return partition, targets
def data_dictionaries_prediction(num_scenes, path_hdf5, input_frames=3, prediction_frame=5, space_frames=5, num_initial=1, test=False, base_path=BASE_PATH):
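"""Group input_frames mask paths (spaced space_frames apart) per sample and target the mask prediction_frame steps after the last input."""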
frames = 100
last_frame = frames - prediction_frame - (input_frames - 1) * space_frames
list_samples = []
list_targets = []
for scene in range(num_initial, num_initial+num_scenes):
scene_path = "{}/{:05d}".format(base_path, scene)
with h5py.File(path_hdf5, "r") as f:
if scene_path in f:
for i in range(last_frame):
list_samples_memory = []
for j in range(input_frames):
path_image = scene_path + "/masks/masks_{:03d}".format((i+1) + (j*space_frames))
list_samples_memory.append(path_image)
list_samples.append(list_samples_memory)
path_image = scene_path + "/masks/masks_{:03d}".format((i+1) + (input_frames - 1) * space_frames + prediction_frame)
list_targets.append(path_image)
partition = partition_dictionary_prediction(list_samples, num_scenes, test, last_frame, prediction_frame)
targets = targets_dictionary_prediction(list_samples, list_targets)
return partition, targets
def partition_dictionary(list_samples, num_scenes, test):
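"""Split the samples 80/10/10 into train/validation/test when test is True, otherwise 90/10 into train/validation."""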
frames = 100
if test:
train_scenes = int(num_scenes * 0.8)
train_frames = train_scenes * frames
vali_scenes = int(num_scenes * 0.1)
vali_frames = vali_scenes * frames
list_samples_train = list_samples[:train_frames]
list_samples_vali = list_samples[train_frames:(train_frames + vali_frames)]
list_samples_test = list_samples[(train_frames + vali_frames):]
partition = {"train": list_samples_train,
"validation": list_samples_vali,
"test": list_samples_test}
else:
train_scenes = int(num_scenes * 0.9)
train_frames = train_scenes * frames
list_samples_train = list_samples[:train_frames]
list_samples_vali = list_samples[train_frames:]
partition = {"train": list_samples_train,
"validation": list_samples_vali}
return partition
def partition_dictionary_prediction(list_samples, num_scenes, test, last_frame=85, prediction_frame=5):
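"""Same split as partition_dictionary, but with last_frame samples per scene instead of 100."""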
frames = 100 - (100 - last_frame)
if test:
train_scenes = int(num_scenes * 0.8)
train_frames = train_scenes * frames
vali_scenes = int(num_scenes * 0.1)
vali_frames = vali_scenes * frames
list_samples_train = list_samples[:train_frames]
list_samples_vali = list_samples[train_frames:(train_frames + vali_frames)]
list_samples_test = list_samples[(train_frames + vali_frames):]
partition = {"train": list_samples_train,
"validation": list_samples_vali,
"test": list_samples_test}
else:
train_scenes = int(num_scenes * 0.9)
train_frames = train_scenes * frames
list_samples_train = list_samples[:train_frames]
list_samples_vali = list_samples[train_frames:]
partition = {"train": list_samples_train,
"validation": list_samples_vali}
return partition
def targets_dictionary(list_samples, list_targets):
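"""Pair each sample path with its corresponding target path."""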
targets = {}
for i in range(len(list_samples)):
targets[list_samples[i]] = list_targets[i]
return targets
def targets_dictionary_prediction(list_samples, list_targets):
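"""Pair the first path of each multi-frame sample with its target path."""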
targets = {}
for i in range(len(list_samples)):
targets[list_samples[i][0]] = list_targets[i]
return targets | 31.934066 | 156 | 0.603751 | 673 | 5,812 | 4.858841 | 0.093611 | 0.178287 | 0.088073 | 0.067278 | 0.838532 | 0.801835 | 0.761468 | 0.725076 | 0.715902 | 0.715902 | 0 | 0.018905 | 0.308328 | 5,812 | 182 | 157 | 31.934066 | 0.794527 | 0 | 0 | 0.691589 | 0 | 0 | 0.03733 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065421 | false | 0 | 0.018692 | 0 | 0.149533 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3de3a755a17fe3a53719680d56e07c84e42ff934 | 31 | py | Python | ceefax/fonts/size4/__init__.py | mscroggs/CEEFAX | 8e7a075de1809064b77360da24ebbbaa409c3bf2 | [
"MIT"
] | 1 | 2020-03-28T15:53:22.000Z | 2020-03-28T15:53:22.000Z | ceefax/fonts/size4/__init__.py | mscroggs/CEEFAX | 8e7a075de1809064b77360da24ebbbaa409c3bf2 | [
"MIT"
] | 1 | 2021-02-05T13:43:52.000Z | 2021-02-05T13:43:52.000Z | ceefax/fonts/size4/__init__.py | mscroggs/CEEFAX | 8e7a075de1809064b77360da24ebbbaa409c3bf2 | [
"MIT"
] | null | null | null | from .default import size4font
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0.129032 | 31 | 1 | 31 | 31 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9a968578f2168d5076859916a63a3f01182075bf | 132 | py | Python | R_CNN.py | scain40/OpenCVCVImageComparisson | 368d901233111606fb2f0ecbce4447dd9c149fd0 | [
"MIT"
] | null | null | null | R_CNN.py | scain40/OpenCVCVImageComparisson | 368d901233111606fb2f0ecbce4447dd9c149fd0 | [
"MIT"
] | null | null | null | R_CNN.py | scain40/OpenCVCVImageComparisson | 368d901233111606fb2f0ecbce4447dd9c149fd0 | [
"MIT"
] | null | null | null | import cv2 as cv
# Need to get and train a model for this, look for an appropriate data set large enough to support all of this
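# A minimal sketch of one possible starting point, using OpenCV's DNN module with a pretrained
# TensorFlow detection model (the file names below are hypothetical placeholders):
# net = cv.dnn.readNetFromTensorflow('frozen_inference_graph.pb', 'graph.pbtxt')
# blob = cv.dnn.blobFromImage(image, size=(300, 300), swapRB=True)
# net.setInput(blob)
# detections = net.forward()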
| 33 | 111 | 0.75 | 26 | 132 | 3.807692 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009901 | 0.234848 | 132 | 3 | 112 | 44 | 0.970297 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9acc2d83d8aeb14e6765c6f688afbb0f2e240ec7 | 141 | py | Python | setenum/django.py | azhpushkin/setenum | 3f6b1d69b16fc243bf740f6c927f6a6824804391 | [
"MIT"
] | null | null | null | setenum/django.py | azhpushkin/setenum | 3f6b1d69b16fc243bf740f6c927f6a6824804391 | [
"MIT"
] | null | null | null | setenum/django.py | azhpushkin/setenum | 3f6b1d69b16fc243bf740f6c927f6a6824804391 | [
"MIT"
] | null | null | null | from django.db.models.enums import ChoicesMeta
from . import SetEnumMeta
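# Combining the two metaclasses lets a SetEnum subclass also behave as a Django Choices enum (e.g. usable as model field choices).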
class DjangoChoicesSetEnumMeta(SetEnumMeta, ChoicesMeta):
pass
| 23.5 | 57 | 0.822695 | 15 | 141 | 7.733333 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120567 | 141 | 5 | 58 | 28.2 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
9ae26f0697ce2913e2b3679233d0bba9a224cd55 | 31 | py | Python | Modules/03_termcolor.py | milenpenev/Python_Advanced | 2f32012dd682fa9541bbf5fa155f6bdbcfa946de | [
"MIT"
] | null | null | null | Modules/03_termcolor.py | milenpenev/Python_Advanced | 2f32012dd682fa9541bbf5fa155f6bdbcfa946de | [
"MIT"
] | null | null | null | Modules/03_termcolor.py | milenpenev/Python_Advanced | 2f32012dd682fa9541bbf5fa155f6bdbcfa946de | [
"MIT"
] | null | null | null | from termcolor import colored
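# Example usage (assuming an ANSI-capable terminal): print(colored("hello", "red", attrs=["bold"]))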
| 10.333333 | 29 | 0.83871 | 4 | 31 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 31 | 2 | 30 | 15.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b11cdaeae334636debebc6644310dfab982a74cc | 12,914 | py | Python | devilry/devilry_admin/tests/assignment/examiners/test_remove_groups_from_examiner.py | devilry/devilry-django | 9ae28e462dfa4cfee966ebacbca04ade9627e715 | [
"BSD-3-Clause"
] | 29 | 2015-01-18T22:56:23.000Z | 2020-11-10T21:28:27.000Z | devilry/devilry_admin/tests/assignment/examiners/test_remove_groups_from_examiner.py | devilry/devilry-django | 9ae28e462dfa4cfee966ebacbca04ade9627e715 | [
"BSD-3-Clause"
] | 786 | 2015-01-06T16:10:18.000Z | 2022-03-16T11:10:50.000Z | devilry/devilry_admin/tests/assignment/examiners/test_remove_groups_from_examiner.py | devilry/devilry-django | 9ae28e462dfa4cfee966ebacbca04ade9627e715 | [
"BSD-3-Clause"
] | 15 | 2015-04-06T06:18:43.000Z | 2021-02-24T12:28:30.000Z | import mock
from django import test
from django.contrib import messages
from django.http import Http404
from cradmin_legacy import cradmin_testhelpers
from model_bakery import baker
from devilry.apps.core.models import Examiner
from devilry.devilry_admin.views.assignment.examiners import remove_groups_from_examiner
class TestRemoveGroupsToExaminerView(test.TestCase, cradmin_testhelpers.TestCaseMixin):
"""
NOTE: Much of the functionality for this view is tested in
test_groupview_base.test_groupviewmixin.TestGroupViewMixin
and test_basemultiselectview.TestBaseMultiselectView.
"""
viewclass = remove_groups_from_examiner.RemoveGroupsToExaminerView
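# Helper: build a mock cradmin instance whose get_devilryrole_for_requestuser() returns the given role.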
def __mockinstance_with_devilryrole(self, devilryrole):
mockinstance = mock.MagicMock()
mockinstance.get_devilryrole_for_requestuser.return_value = devilryrole
return mockinstance
def test_404_if_relatedexaminer_is_inactive(self):
testassignment = baker.make('core.Assignment', long_name='Test Assignment')
relatedexaminer = baker.make('core.RelatedExaminer', period=testassignment.period,
active=False)
baker.make('core.AssignmentGroup', parentnode=testassignment)
with self.assertRaises(Http404):
self.mock_getrequest(
cradmin_role=testassignment,
viewkwargs={'relatedexaminer_id': relatedexaminer.id},
cradmin_instance=self.__mockinstance_with_devilryrole('departmentadmin'))
def test_title(self):
testassignment = baker.make('core.Assignment', long_name='Test Assignment')
relatedexaminer = baker.make('core.RelatedExaminer', period=testassignment.period,
user__fullname='Test User')
baker.make('core.Examiner', assignmentgroup__parentnode=testassignment, relatedexaminer=relatedexaminer)
mockresponse = self.mock_http200_getrequest_htmls(
cradmin_role=testassignment,
viewkwargs={'relatedexaminer_id': relatedexaminer.id},
cradmin_instance=self.__mockinstance_with_devilryrole('departmentadmin'))
self.assertIn(
'Remove students from Test User',
mockresponse.selector.one('title').alltext_normalized)
def test_h1(self):
testassignment = baker.make('core.Assignment', long_name='Test Assignment')
relatedexaminer = baker.make('core.RelatedExaminer', period=testassignment.period,
user__fullname='Test User')
baker.make('core.Examiner', assignmentgroup__parentnode=testassignment, relatedexaminer=relatedexaminer)
mockresponse = self.mock_http200_getrequest_htmls(
cradmin_role=testassignment,
viewkwargs={'relatedexaminer_id': relatedexaminer.id},
cradmin_instance=self.__mockinstance_with_devilryrole('departmentadmin'))
self.assertEqual(
'Remove students from Test User',
mockresponse.selector.one('h1').alltext_normalized)
def test_submit_button_text(self):
testassignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_start')
relatedexaminer = baker.make('core.RelatedExaminer', period=testassignment.period)
baker.make('core.Examiner', assignmentgroup__parentnode=testassignment, relatedexaminer=relatedexaminer)
mockresponse = self.mock_http200_getrequest_htmls(
cradmin_role=testassignment,
viewkwargs={'relatedexaminer_id': relatedexaminer.id},
cradmin_instance=self.__mockinstance_with_devilryrole('departmentadmin'))
self.assertEqual(
'Remove students',
mockresponse.selector.one('.cradmin-legacy-multiselect2-target-formfields .btn').alltext_normalized)
def test_exclude_groups_that_does_not_have_the_examiner(self):
testassignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_start')
relatedexaminer = baker.make('core.RelatedExaminer', period=testassignment.period)
baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.AssignmentGroup', parentnode=testassignment)
group_with_the_examiner = baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.Examiner', assignmentgroup=group_with_the_examiner, relatedexaminer=relatedexaminer)
mockresponse = self.mock_http200_getrequest_htmls(
cradmin_role=testassignment,
viewkwargs={'relatedexaminer_id': relatedexaminer.id},
cradmin_instance=self.__mockinstance_with_devilryrole('departmentadmin'))
self.assertEqual(
1,
mockresponse.selector.count('.cradmin-legacy-listbuilder-itemvalue'))
def test_no_groups_on_examiner(self):
testassignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_start')
relatedexaminer = baker.make('core.RelatedExaminer', period=testassignment.period)
baker.make('core.AssignmentGroup', parentnode=testassignment)
messagesmock = mock.MagicMock()
self.mock_http302_getrequest(
cradmin_role=testassignment,
messagesmock=messagesmock,
viewkwargs={'relatedexaminer_id': relatedexaminer.id},
cradmin_instance=self.__mockinstance_with_devilryrole('departmentadmin'))
messagesmock.add.assert_called_once_with(
messages.INFO,
'No students to remove.',
'')
def test_get_querycount(self):
testassignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_start')
relatedexaminer = baker.make('core.RelatedExaminer', period=testassignment.period)
baker.make('core.AssignmentGroup', parentnode=testassignment, _quantity=10)
testgroup = baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.Examiner', assignmentgroup=testgroup, relatedexaminer=relatedexaminer)
with self.assertNumQueries(11):
self.mock_getrequest(
cradmin_role=testassignment,
viewkwargs={'relatedexaminer_id': relatedexaminer.id},
cradmin_instance=self.__mockinstance_with_devilryrole('departmentadmin'))
def test_post_ok(self):
testassignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_start')
relatedexaminer = baker.make('core.RelatedExaminer', period=testassignment.period)
testgroup1 = baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.Examiner', assignmentgroup=testgroup1, relatedexaminer=relatedexaminer)
testgroup2 = baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.Examiner', assignmentgroup=testgroup2, relatedexaminer=relatedexaminer)
self.assertEqual(2, Examiner.objects.count())
self.mock_http302_postrequest(
cradmin_role=testassignment,
cradmin_instance=self.__mockinstance_with_devilryrole('departmentadmin'),
viewkwargs={'relatedexaminer_id': relatedexaminer.id},
requestkwargs={
'data': {'selected_items': [str(testgroup1.id), str(testgroup2.id)]}
})
self.assertEqual(0, Examiner.objects.count())
def test_post_successmessage_no_projectgroups(self):
testassignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_start')
relatedexaminer = baker.make('core.RelatedExaminer', period=testassignment.period)
testgroup1 = baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.Candidate', assignment_group=testgroup1)
baker.make('core.Examiner', assignmentgroup=testgroup1, relatedexaminer=relatedexaminer)
testgroup2 = baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.Examiner', assignmentgroup=testgroup2, relatedexaminer=relatedexaminer)
baker.make('core.Candidate', assignment_group=testgroup2)
messagesmock = mock.MagicMock()
self.mock_http302_postrequest(
cradmin_role=testassignment,
messagesmock=messagesmock,
cradmin_instance=self.__mockinstance_with_devilryrole('departmentadmin'),
viewkwargs={'relatedexaminer_id': relatedexaminer.id},
requestkwargs={
'data': {'selected_items': [str(testgroup1.id), str(testgroup2.id)]}
})
messagesmock.add.assert_called_once_with(
messages.SUCCESS,
'Removed 2 students.',
'')
def test_post_successmessage_with_single_projectgroups(self):
testassignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_start')
relatedexaminer = baker.make('core.RelatedExaminer', period=testassignment.period)
testgroup = baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.Candidate', assignment_group=testgroup)
baker.make('core.Candidate', assignment_group=testgroup)
baker.make('core.Examiner', assignmentgroup=testgroup, relatedexaminer=relatedexaminer)
messagesmock = mock.MagicMock()
self.mock_http302_postrequest(
cradmin_role=testassignment,
messagesmock=messagesmock,
cradmin_instance=self.__mockinstance_with_devilryrole('departmentadmin'),
viewkwargs={'relatedexaminer_id': relatedexaminer.id},
requestkwargs={
'data': {'selected_items': [str(testgroup.id)]}
})
messagesmock.add.assert_called_once_with(
messages.SUCCESS,
'Removed 1 project group with 2 students.',
'')
def test_post_successmessage_with_multiple_projectgroups(self):
testassignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_start')
relatedexaminer = baker.make('core.RelatedExaminer', period=testassignment.period)
testgroup1 = baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.Candidate', assignment_group=testgroup1, _quantity=10)
baker.make('core.Examiner', assignmentgroup=testgroup1, relatedexaminer=relatedexaminer)
testgroup2 = baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.Candidate', assignment_group=testgroup2, _quantity=8)
baker.make('core.Examiner', assignmentgroup=testgroup2, relatedexaminer=relatedexaminer)
messagesmock = mock.MagicMock()
self.mock_http302_postrequest(
cradmin_role=testassignment,
messagesmock=messagesmock,
cradmin_instance=self.__mockinstance_with_devilryrole('departmentadmin'),
viewkwargs={'relatedexaminer_id': relatedexaminer.id},
requestkwargs={
'data': {'selected_items': [str(testgroup1.id), str(testgroup2.id)]}
})
messagesmock.add.assert_called_once_with(
messages.SUCCESS,
'Removed 2 project groups with 18 students.',
'')
def test_post_querycount(self):
testassignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_start')
relatedexaminer = baker.make('core.RelatedExaminer', period=testassignment.period)
testgroup1 = baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.Candidate', assignment_group=testgroup1, _quantity=10)
baker.make('core.Examiner', assignmentgroup=testgroup1, relatedexaminer=relatedexaminer)
testgroup2 = baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.Candidate', assignment_group=testgroup2, _quantity=10)
baker.make('core.Examiner', assignmentgroup=testgroup2, relatedexaminer=relatedexaminer)
testgroup3 = baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.Candidate', assignment_group=testgroup3, _quantity=3)
baker.make('core.Examiner', assignmentgroup=testgroup3, relatedexaminer=relatedexaminer)
testgroup4 = baker.make('core.AssignmentGroup', parentnode=testassignment)
baker.make('core.Candidate', assignment_group=testgroup4, _quantity=3)
baker.make('core.Examiner', assignmentgroup=testgroup4, relatedexaminer=relatedexaminer)
with self.assertNumQueries(6):
self.mock_postrequest(
cradmin_role=testassignment,
cradmin_instance=self.__mockinstance_with_devilryrole('departmentadmin'),
viewkwargs={'relatedexaminer_id': relatedexaminer.id},
requestkwargs={
'data': {'selected_items': [str(testgroup1.id), str(testgroup2.id),
str(testgroup3.id), str(testgroup4.id)]}
})
| 56.640351 | 112 | 0.708533 | 1,176 | 12,914 | 7.562075 | 0.133503 | 0.068818 | 0.086248 | 0.056674 | 0.834364 | 0.820083 | 0.805353 | 0.774767 | 0.743619 | 0.742157 | 0 | 0.009634 | 0.196221 | 12,914 | 227 | 113 | 56.889868 | 0.84711 | 0.013241 | 0 | 0.693069 | 0 | 0 | 0.17753 | 0.039783 | 0 | 0 | 0 | 0 | 0.064356 | 1 | 0.064356 | false | 0 | 0.039604 | 0 | 0.118812 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b12f6cf932f8019f02ae308cc48dc365276a07e2 | 41 | py | Python | markovdwp/priors/__init__.py | ivannz/MarkovDWP | f10ed7a331ddd9b7fc28c4cab3b05b2352a9ee2b | [
"MIT"
] | null | null | null | markovdwp/priors/__init__.py | ivannz/MarkovDWP | f10ed7a331ddd9b7fc28c4cab3b05b2352a9ee2b | [
"MIT"
] | null | null | null | markovdwp/priors/__init__.py | ivannz/MarkovDWP | f10ed7a331ddd9b7fc28c4cab3b05b2352a9ee2b | [
"MIT"
] | null | null | null | from .implicit import ImplicitSlicePrior
| 20.5 | 40 | 0.878049 | 4 | 41 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b147be10d6b8ee4a2f16490cfdd487f2f6226240 | 47 | py | Python | views/__init__.py | andywar65/bimblog | 011e42b20d0847827715e5b5f0d269106471ad2c | [
"BSD-2-Clause"
] | null | null | null | views/__init__.py | andywar65/bimblog | 011e42b20d0847827715e5b5f0d269106471ad2c | [
"BSD-2-Clause"
] | null | null | null | views/__init__.py | andywar65/bimblog | 011e42b20d0847827715e5b5f0d269106471ad2c | [
"BSD-2-Clause"
] | null | null | null | from .building import *
from .station import *
| 15.666667 | 23 | 0.744681 | 6 | 47 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 2 | 24 | 23.5 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b159f12075e086e970d9117a06e22155164ce150 | 108 | py | Python | code/ch07-confirmation-emails/infrastructure/app_secrets.py | rcastleman/twilio-and-sendgrid-python-course | b1aa8e02725cf9580edb4b0310537fac33a685b0 | [
"MIT"
] | 29 | 2021-01-08T13:46:35.000Z | 2022-03-30T04:31:29.000Z | code/ch07-confirmation-emails/infrastructure/app_secrets.py | rcastleman/twilio-and-sendgrid-python-course | b1aa8e02725cf9580edb4b0310537fac33a685b0 | [
"MIT"
] | 2 | 2021-08-29T15:11:23.000Z | 2022-03-05T00:01:46.000Z | code/ch07-confirmation-emails/infrastructure/app_secrets.py | rcastleman/twilio-and-sendgrid-python-course | b1aa8e02725cf9580edb4b0310537fac33a685b0 | [
"MIT"
] | 30 | 2021-07-02T00:14:58.000Z | 2022-03-06T00:47:53.000Z | from typing import Optional
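# Placeholders for the SendGrid credentials; presumably populated at runtime (e.g. from a local, git-ignored settings file) so that real keys are never committed.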
sendgrid_key_name: Optional[str] = None
sendgrid_api_key: Optional[str] = None
| 21.6 | 39 | 0.805556 | 16 | 108 | 5.1875 | 0.625 | 0.26506 | 0.361446 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12037 | 108 | 4 | 40 | 27 | 0.873684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b169770791fb6c995c9e0d5871e888c1cc97494e | 32 | py | Python | dumbtest.py | mkseth4774/ine-guide-to-network-programmability-python-course-files | 35c49dfcf8e8f1b69435987a00fb9a236b803d9f | [
"MIT"
] | null | null | null | dumbtest.py | mkseth4774/ine-guide-to-network-programmability-python-course-files | 35c49dfcf8e8f1b69435987a00fb9a236b803d9f | [
"MIT"
] | null | null | null | dumbtest.py | mkseth4774/ine-guide-to-network-programmability-python-course-files | 35c49dfcf8e8f1b69435987a00fb9a236b803d9f | [
"MIT"
] | null | null | null | print('This is python working!') | 32 | 32 | 0.75 | 5 | 32 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.827586 | 0 | 0 | 0 | 0 | 0 | 0.69697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
b1aa242e3500f953ac946e554632a04298860e36 | 11,480 | py | Python | tests/test_main.py | larryon/gimme-aws-creds | 4aa880f7c8b187b0dc0623bf335da2c909f7e047 | [
"Apache-2.0"
] | 1 | 2021-04-10T13:49:48.000Z | 2021-04-10T13:49:48.000Z | tests/test_main.py | larryon/gimme-aws-creds | 4aa880f7c8b187b0dc0623bf335da2c909f7e047 | [
"Apache-2.0"
] | 1 | 2020-05-06T14:28:02.000Z | 2020-05-06T14:28:02.000Z | tests/test_main.py | elzayat/gimme-aws-creds | 839f5041e9099c030c40ad54a92e4873f659e129 | [
"Apache-2.0"
] | 1 | 2021-08-20T13:10:26.000Z | 2021-08-20T13:10:26.000Z | import unittest
from mock import patch
from gimme_aws_creds import errors
from gimme_aws_creds.main import GimmeAWSCreds
from gimme_aws_creds.common import RoleSet
class TestMain(unittest.TestCase):
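# Minimal fixtures: two IdP/role pairs and two app entries reused across the selection tests below.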
APP_INFO = [
RoleSet(idp='idp', role='test1', friendly_account_name='', friendly_role_name=''),
RoleSet(idp='idp', role='test2', friendly_account_name='', friendly_role_name='')
]
AWS_INFO = [
{'name': 'test1'},
{'name': 'test2'}
]
@patch('builtins.input', return_value='-1')
def test_choose_roles_app_neg1(self, mock):
creds = GimmeAWSCreds()
self.assertRaises(errors.GimmeAWSCredsExitBase, creds._choose_roles, self.APP_INFO)
self.assertRaises(errors.GimmeAWSCredsExitBase, creds._choose_app, self.AWS_INFO)
@patch('builtins.input', return_value='0')
def test_choose_roles_app_0(self, mock):
creds = GimmeAWSCreds()
selections = creds._choose_roles(self.APP_INFO)
self.assertEqual(selections, {self.APP_INFO[0].role})
selections = creds._choose_roles(self.APP_INFO)
self.assertEqual(selections, {self.APP_INFO[0].role})
@patch('builtins.input', return_value='1')
def test_choose_roles_app_1(self, mock):
creds = GimmeAWSCreds()
selections = creds._choose_roles(self.APP_INFO)
self.assertEqual(selections, {self.APP_INFO[1].role})
selections = creds._choose_roles(self.APP_INFO)
self.assertEqual(selections, {self.APP_INFO[1].role})
@patch('builtins.input', return_value='2')
def test_choose_roles_app_2(self, mock):
creds = GimmeAWSCreds()
self.assertRaises(errors.GimmeAWSCredsExitBase, creds._choose_roles, self.APP_INFO)
self.assertRaises(errors.GimmeAWSCredsExitBase, creds._choose_app, self.AWS_INFO)
@patch('builtins.input', return_value='a')
def test_choose_roles_app_a(self, mock):
creds = GimmeAWSCreds()
self.assertRaises(errors.GimmeAWSCredsExitBase, creds._choose_roles, self.APP_INFO)
self.assertRaises(errors.GimmeAWSCredsExitBase, creds._choose_app, self.AWS_INFO)
def test_get_selected_app_from_config_0(self):
creds = GimmeAWSCreds()
selection = creds._get_selected_app('test1', self.AWS_INFO)
self.assertEqual(selection, self.AWS_INFO[0])
def test_get_selected_app_from_config_1(self):
creds = GimmeAWSCreds()
selection = creds._get_selected_app('test2', self.AWS_INFO)
self.assertEqual(selection, self.AWS_INFO[1])
@patch('builtins.input', return_value='0')
def test_missing_app_from_config(self, mock):
creds = GimmeAWSCreds()
selection = creds._get_selected_app('test3', self.AWS_INFO)
self.assertEqual(selection, self.AWS_INFO[0])
def test_get_selected_roles_from_config_0(self):
creds = GimmeAWSCreds()
selections = creds._get_selected_roles('test1', self.APP_INFO)
self.assertEqual(selections, {'test1'})
def test_get_selected_roles_from_config_1(self):
creds = GimmeAWSCreds()
selections = creds._get_selected_roles('test2', self.APP_INFO)
self.assertEqual(selections, {'test2'})
def test_get_selected_roles_multiple(self):
creds = GimmeAWSCreds()
selections = creds._get_selected_roles('test1, test2', self.APP_INFO)
self.assertEqual(selections, {'test1', 'test2'})
def test_get_selected_roles_multiple_list(self):
creds = GimmeAWSCreds()
selections = creds._get_selected_roles(['test1', 'test2'], self.APP_INFO)
self.assertEqual(selections, {'test1', 'test2'})
def test_get_selected_roles_all(self):
creds = GimmeAWSCreds()
selections = creds._get_selected_roles('all', self.APP_INFO)
self.assertEqual(selections, {'test1', 'test2'})
@patch('builtins.input', return_value='0')
def test_missing_role_from_config(self, mock):
creds = GimmeAWSCreds()
selections = creds._get_selected_roles('test3', self.APP_INFO)
self.assertEqual(selections, {'test1'})
def test_get_partition_aws(self):
creds = GimmeAWSCreds()
partition = creds._get_partition_from_saml_acs('https://signin.aws.amazon.com/saml')
self.assertEqual(partition, 'aws')
def test_get_partition_china(self):
creds = GimmeAWSCreds()
partition = creds._get_partition_from_saml_acs('https://signin.amazonaws.cn/saml')
self.assertEqual(partition, 'aws-cn')
def test_get_partition_govcloud(self):
creds = GimmeAWSCreds()
partition = creds._get_partition_from_saml_acs('https://signin.amazonaws-us-gov.com/saml')
self.assertEqual(partition, 'aws-us-gov')
def test_get_partition_unknown(self):
creds = GimmeAWSCreds()
self.assertRaises(errors.GimmeAWSCredsExitBase, creds._get_partition_from_saml_acs,
'https://signin.amazonaws-foo.com/saml')
def test_parse_role_arn_base_path(self):
creds = GimmeAWSCreds()
arn = "arn:aws:iam::123456789012:role/okta-1234-role"
self.assertEqual(creds._parse_role_arn(arn),
{
'account': '123456789012',
'path': '/',
'role': 'okta-1234-role'
})
def test_parse_role_arn_extended_path(self):
creds = GimmeAWSCreds()
arn = "arn:aws:iam::123456789012:role/a/really/extended/path/okta-1234-role"
self.assertEqual(creds._parse_role_arn(arn),
{
'account': '123456789012',
'path': '/a/really/extended/path/',
'role': 'okta-1234-role'
})
def test_get_alias_from_friendly_name_no_alias(self):
creds = GimmeAWSCreds()
friendly_name = "Account: 123456789012"
self.assertEqual(creds._get_alias_from_friendly_name(friendly_name), None)
def test_get_alias_from_friendly_name_with_alias(self):
creds = GimmeAWSCreds()
friendly_name = "Account: my-account-org (123456789012)"
self.assertEqual(creds._get_alias_from_friendly_name(friendly_name), "my-account-org")
def test_get_profile_name_accrole_resolve_alias_do_not_include_paths(self):
"Testing the acc-role, with alias resolution, and not including full role path"
creds = GimmeAWSCreds()
naming_data = {'account': '123456789012', 'role': 'administrator', 'path': '/administrator/'}
role = RoleSet(idp='arn:aws:iam::123456789012:saml-provider/my-okta-provider',
role='arn:aws:iam::123456789012:role/administrator/administrator',
friendly_account_name='Account: my-org-master (123456789012)',
friendly_role_name='administrator/administrator')
cred_profile = 'acc-role'
resolve_alias = 'True'
include_path = 'False'
self.assertEqual(creds.get_profile_name(cred_profile, include_path, naming_data, resolve_alias, role), "my-org-master-administrator")
def test_get_profile_accrole_name_do_not_resolve_alias_do_not_include_paths(self):
"Testing the acc-role, without alias resolution, and not including full role path"
creds = GimmeAWSCreds()
naming_data = {'account': '123456789012', 'role': 'administrator', 'path': '/administrator/'}
role = RoleSet(idp='arn:aws:iam::123456789012:saml-provider/my-okta-provider',
role='arn:aws:iam::123456789012:role/administrator/administrator',
friendly_account_name='Account: my-org-master (123456789012)',
friendly_role_name='administrator/administrator')
cred_profile = 'acc-role'
resolve_alias = 'False'
include_path = 'False'
self.assertEqual(creds.get_profile_name(cred_profile, include_path, naming_data, resolve_alias, role),
"123456789012-administrator")
def test_get_profile_accrole_name_do_not_resolve_alias_include_paths(self):
"Testing the acc-role, without alias resolution, and including full role path"
creds = GimmeAWSCreds()
naming_data = {'account': '123456789012', 'role': 'administrator', 'path': '/some/long/extended/path/'}
role = RoleSet(idp='arn:aws:iam::123456789012:saml-provider/my-okta-provider',
role='arn:aws:iam::123456789012:role/some/long/extended/path/administrator',
friendly_account_name='Account: my-org-master (123456789012)',
friendly_role_name='administrator/administrator')
cred_profile = 'acc-role'
resolve_alias = 'False'
include_path = 'True'
self.assertEqual(creds.get_profile_name(cred_profile, include_path, naming_data, resolve_alias, role),
"123456789012-/some/long/extended/path/administrator")
def test_get_profile_name_role(self):
"Testing the role"
creds = GimmeAWSCreds()
naming_data = {'account': '123456789012', 'role': 'administrator', 'path': '/some/long/extended/path/'}
role = RoleSet(idp='arn:aws:iam::123456789012:saml-provider/my-okta-provider',
role='arn:aws:iam::123456789012:role/some/long/extended/path/administrator',
friendly_account_name='Account: my-org-master (123456789012)',
friendly_role_name='administrator/administrator')
cred_profile = 'role'
resolve_alias = 'False'
include_path = 'True'
self.assertEqual(creds.get_profile_name(cred_profile, include_path, naming_data, resolve_alias, role),
'administrator')
def test_get_profile_name_default(self):
"Testing the default"
creds = GimmeAWSCreds()
naming_data = {'account': '123456789012', 'role': 'administrator', 'path': '/some/long/extended/path/'}
role = RoleSet(idp='arn:aws:iam::123456789012:saml-provider/my-okta-provider',
role='arn:aws:iam::123456789012:role/some/long/extended/path/administrator',
friendly_account_name='Account: my-org-master (123456789012)',
friendly_role_name='administrator/administrator')
cred_profile = 'default'
resolve_alias = 'False'
include_path = 'True'
self.assertEqual(creds.get_profile_name(cred_profile, include_path, naming_data, resolve_alias, role),
'default')
def test_get_profile_name_else(self):
"testing else statement in get_profile_name"
creds = GimmeAWSCreds()
naming_data = {'account': '123456789012', 'role': 'administrator', 'path': '/some/long/extended/path/'}
role = RoleSet(idp='arn:aws:iam::123456789012:saml-provider/my-okta-provider',
role='arn:aws:iam::123456789012:role/some/long/extended/path/administrator',
friendly_account_name='Account: my-org-master (123456789012)',
friendly_role_name='administrator/administrator')
cred_profile = 'foo'
resolve_alias = 'False'
include_path = 'True'
self.assertEqual(creds.get_profile_name(cred_profile, include_path, naming_data, resolve_alias, role),
'foo')
| 46.104418 | 141 | 0.658188 | 1,287 | 11,480 | 5.585859 | 0.087024 | 0.027264 | 0.026429 | 0.040896 | 0.913062 | 0.887189 | 0.84824 | 0.775908 | 0.739741 | 0.706357 | 0 | 0.049977 | 0.226132 | 11,480 | 248 | 142 | 46.290323 | 0.75923 | 0.027439 | 0 | 0.545 | 0 | 0.005 | 0.235453 | 0.106882 | 0 | 0 | 0 | 0 | 0.165 | 1 | 0.14 | false | 0 | 0.025 | 0 | 0.18 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
490ad735f8dd482e7e0233134c45dd3b7bb725ae | 14,799 | py | Python | tests/geo_commands_test.py | leenr/aioredis | ffe15297f2b5205c9fbe6b18eeebcc340a32f677 | [
"MIT"
] | 1 | 2021-01-03T00:57:16.000Z | 2021-01-03T00:57:16.000Z | tests/geo_commands_test.py | idfumg/aioredis | b4b823e4dc453256ba8173eb82568c321e712da5 | [
"MIT"
] | 1 | 2022-02-16T14:38:57.000Z | 2022-02-16T14:38:57.000Z | tests/geo_commands_test.py | idfumg/aioredis | b4b823e4dc453256ba8173eb82568c321e712da5 | [
"MIT"
] | null | null | null | import pytest
from aioredis import GeoPoint, GeoMember
from _testutils import redis_version
@pytest.mark.run_loop
@redis_version(
3, 2, 0, reason='GEOADD is available since redis >= 3.2.0')
async def test_geoadd(redis):
res = await redis.geoadd('geodata', 13.361389, 38.115556, 'Palermo')
assert res == 1
res = await redis.geoadd(
'geodata',
15.087269, 37.502669, 'Catania',
12.424315, 37.802105, 'Marsala'
)
assert res == 2
@pytest.mark.run_loop
@redis_version(
3, 2, 0, reason='GEODIST is available since redis >= 3.2.0')
async def test_geodist(redis):
res = await redis.geoadd(
'geodata',
13.361389, 38.115556, 'Palermo',
15.087269, 37.502669, 'Catania'
)
assert res == 2
res = await redis.geodist('geodata', 'Palermo', 'Catania')
assert res == 166274.1516
res = await redis.geodist('geodata', 'Palermo', 'Catania', 'km')
assert res == 166.2742
@pytest.mark.run_loop
@redis_version(
3, 2, 0, reason='GEOHASH is available since redis >= 3.2.0')
async def test_geohash(redis):
res = await redis.geoadd(
'geodata',
13.361389, 38.115556, 'Palermo',
15.087269, 37.502669, 'Catania'
)
assert res == 2
res = await redis.geohash(
'geodata', 'Palermo', encoding='utf-8'
)
assert res == ['sqc8b49rny0']
res = await redis.geohash(
'geodata', 'Palermo', 'Catania', encoding='utf-8'
)
assert res == ['sqc8b49rny0', 'sqdtr74hyu0']
@pytest.mark.run_loop
@redis_version(
3, 2, 0, reason='GEOPOS is available since redis >= 3.2.0')
async def test_geopos(redis):
res = await redis.geoadd(
'geodata',
13.361389, 38.115556, 'Palermo',
15.087269, 37.502669, 'Catania'
)
assert res == 2
res = await redis.geopos('geodata', 'Palermo')
assert res == [
GeoPoint(longitude=13.36138933897018433, latitude=38.11555639549629859)
]
res = await redis.geopos('geodata', 'Catania', 'Palermo')
assert res == [
GeoPoint(longitude=15.087267458438873, latitude=37.50266842333162),
GeoPoint(longitude=13.36138933897018433, latitude=38.11555639549629859)
]
@pytest.mark.run_loop
@redis_version(
3, 2, 0, reason='GEO* is available since redis >= 3.2.0')
async def test_geo_not_exist_members(redis):
res = await redis.geoadd('geodata', 13.361389, 38.115556, 'Palermo')
assert res == 1
res = await redis.geoadd(
'geodata',
15.087269, 37.502669, 'Catania',
12.424315, 37.802105, 'Marsala'
)
assert res == 2
res = await redis.geohash(
'geodata', 'NotExistMember'
)
assert res == [None]
res = await redis.geodist('geodata', 'NotExistMember', 'Catania')
assert res is None
res = await redis.geopos(
'geodata', 'Palermo', 'NotExistMember', 'Catania'
)
assert res == [
GeoPoint(
longitude=13.36138933897018433,
latitude=38.11555639549629859
),
None,
GeoPoint(longitude=15.087267458438873, latitude=37.50266842333162)
]
@pytest.mark.run_loop
@redis_version(
3, 2, 0, reason='GEORADIUS is available since redis >= 3.2.0')
async def test_georadius_validation(redis):
res = await redis.geoadd(
'geodata',
13.361389, 38.115556, 'Palermo', 15.087269, 37.502669, 'Catania'
)
assert res == 2
with pytest.raises(TypeError):
res = await redis.georadius(
'geodata', 15, 37, 200, 'km', count=1.3, encoding='utf-8'
)
with pytest.raises(TypeError):
res = await redis.georadius(
'geodata', 15, 37, '200', 'km', encoding='utf-8'
)
with pytest.raises(ValueError):
res = await redis.georadius(
'geodata', 15, 37, 200, 'k', encoding='utf-8'
)
with pytest.raises(ValueError):
res = await redis.georadius(
'geodata', 15, 37, 200, 'km', sort='DESV', encoding='utf-8'
)
@pytest.mark.run_loop
@redis_version(
3, 2, 0, reason='GEORADIUS is available since redis >= 3.2.0')
async def test_georadius(redis):
res = await redis.geoadd(
'geodata',
13.361389, 38.115556, 'Palermo', 15.087269, 37.502669, 'Catania'
)
assert res == 2
res = await redis.georadius(
'geodata', 15, 37, 200, 'km', encoding='utf-8'
)
assert res == ['Palermo', 'Catania']
res = await redis.georadius(
'geodata', 15, 37, 200, 'km', count=1, encoding='utf-8'
)
assert res == ['Catania']
res = await redis.georadius(
'geodata', 15, 37, 200, 'km', sort='ASC', encoding='utf-8'
)
assert res == ['Catania', 'Palermo']
res = await redis.georadius(
'geodata', 15, 37, 200, 'km', with_dist=True, encoding='utf-8'
)
assert res == [
GeoMember(member='Palermo', dist=190.4424, coord=None, hash=None),
GeoMember(member='Catania', dist=56.4413, coord=None, hash=None)
]
res = await redis.georadius(
'geodata', 15, 37, 200, 'km',
with_dist=True, with_coord=True, encoding='utf-8'
)
assert res == [
GeoMember(
member='Palermo', dist=190.4424, hash=None,
coord=GeoPoint(
longitude=13.36138933897018433, latitude=38.11555639549629859
)
),
GeoMember(
member='Catania', dist=56.4413, hash=None,
coord=GeoPoint(
longitude=15.087267458438873, latitude=37.50266842333162
),
)
]
res = await redis.georadius(
'geodata', 15, 37, 200, 'km',
with_dist=True, with_coord=True, with_hash=True, encoding='utf-8'
)
assert res == [
GeoMember(
member='Palermo', dist=190.4424, hash=3479099956230698,
coord=GeoPoint(
longitude=13.36138933897018433, latitude=38.11555639549629859
)
),
GeoMember(
member='Catania', dist=56.4413, hash=3479447370796909,
coord=GeoPoint(
longitude=15.087267458438873, latitude=37.50266842333162
),
)
]
res = await redis.georadius(
'geodata', 15, 37, 200, 'km',
with_coord=True, with_hash=True, encoding='utf-8'
)
assert res == [
GeoMember(
member='Palermo', dist=None, hash=3479099956230698,
coord=GeoPoint(
longitude=13.36138933897018433, latitude=38.11555639549629859
)
),
GeoMember(
member='Catania', dist=None, hash=3479447370796909,
coord=GeoPoint(
longitude=15.087267458438873, latitude=37.50266842333162
),
)
]
res = await redis.georadius(
'geodata', 15, 37, 200, 'km', with_coord=True, encoding='utf-8'
)
assert res == [
GeoMember(
member='Palermo', dist=None, hash=None,
coord=GeoPoint(
longitude=13.36138933897018433, latitude=38.11555639549629859
)
),
GeoMember(
member='Catania', dist=None, hash=None,
coord=GeoPoint(
longitude=15.087267458438873, latitude=37.50266842333162
),
)
]
res = await redis.georadius(
'geodata', 15, 37, 200, 'km', count=1, sort='DESC',
with_hash=True, encoding='utf-8'
)
assert res == [
GeoMember(
member='Palermo', dist=None, hash=3479099956230698, coord=None
)
]
@pytest.mark.run_loop
@redis_version(
3, 2, 0, reason='GEORADIUSBYMEMBER is available since redis >= 3.2.0')
async def test_georadiusbymember(redis):
res = await redis.geoadd(
'geodata',
13.361389, 38.115556, 'Palermo',
15.087269, 37.502669, 'Catania'
)
assert res == 2
res = await redis.georadiusbymember(
'geodata', 'Palermo', 200, 'km', with_dist=True, encoding='utf-8'
)
assert res == [
GeoMember(member='Palermo', dist=0.0, coord=None, hash=None),
GeoMember(member='Catania', dist=166.2742, coord=None, hash=None)
]
res = await redis.georadiusbymember(
'geodata', 'Palermo', 200, 'km', encoding='utf-8'
)
assert res == ['Palermo', 'Catania']
res = await redis.georadiusbymember(
'geodata', 'Palermo', 200, 'km',
with_dist=True, with_coord=True, encoding='utf-8'
)
assert res == [
GeoMember(
member='Palermo', dist=0.0, hash=None,
coord=GeoPoint(13.361389338970184, 38.1155563954963)
),
GeoMember(
member='Catania', dist=166.2742, hash=None,
coord=GeoPoint(15.087267458438873, 37.50266842333162)
)
]
res = await redis.georadiusbymember(
'geodata', 'Palermo', 200, 'km',
with_dist=True, with_coord=True, with_hash=True, encoding='utf-8'
)
assert res == [
GeoMember(
member='Palermo', dist=0.0, hash=3479099956230698,
coord=GeoPoint(13.361389338970184, 38.1155563954963)
),
GeoMember(
member='Catania', dist=166.2742, hash=3479447370796909,
coord=GeoPoint(15.087267458438873, 37.50266842333162)
)
]
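# The *_binary variants below repeat the same calls without encoding='utf-8', so member names come back as bytes.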
@pytest.mark.run_loop
@redis_version(
3, 2, 0, reason='GEOHASH is available since redis >= 3.2.0')
async def test_geohash_binary(redis):
res = await redis.geoadd(
'geodata',
13.361389, 38.115556, 'Palermo',
15.087269, 37.502669, 'Catania'
)
assert res == 2
res = await redis.geohash(
'geodata', 'Palermo'
)
assert res == [b'sqc8b49rny0']
res = await redis.geohash(
'geodata', 'Palermo', 'Catania'
)
assert res == [b'sqc8b49rny0', b'sqdtr74hyu0']
@pytest.mark.run_loop
@redis_version(
3, 2, 0, reason='GEORADIUS is available since redis >= 3.2.0')
async def test_georadius_binary(redis):
res = await redis.geoadd(
'geodata',
13.361389, 38.115556, 'Palermo', 15.087269, 37.502669, 'Catania'
)
assert res == 2
res = await redis.georadius(
'geodata', 15, 37, 200, 'km'
)
assert res == [b'Palermo', b'Catania']
res = await redis.georadius(
'geodata', 15, 37, 200, 'km', count=1
)
assert res == [b'Catania']
res = await redis.georadius(
'geodata', 15, 37, 200, 'km', sort='ASC'
)
assert res == [b'Catania', b'Palermo']
res = await redis.georadius(
'geodata', 15, 37, 200, 'km', with_dist=True
)
assert res == [
GeoMember(member=b'Palermo', dist=190.4424, coord=None, hash=None),
GeoMember(member=b'Catania', dist=56.4413, coord=None, hash=None)
]
res = await redis.georadius(
'geodata', 15, 37, 200, 'km',
with_dist=True, with_coord=True
)
assert res == [
GeoMember(
member=b'Palermo', dist=190.4424, hash=None,
coord=GeoPoint(
longitude=13.36138933897018433, latitude=38.11555639549629859
)
),
GeoMember(
member=b'Catania', dist=56.4413, hash=None,
coord=GeoPoint(
longitude=15.087267458438873, latitude=37.50266842333162
),
)
]
res = await redis.georadius(
'geodata', 15, 37, 200, 'km',
with_dist=True, with_coord=True, with_hash=True
)
assert res == [
GeoMember(
member=b'Palermo', dist=190.4424, hash=3479099956230698,
coord=GeoPoint(
longitude=13.36138933897018433, latitude=38.11555639549629859
)
),
GeoMember(
member=b'Catania', dist=56.4413, hash=3479447370796909,
coord=GeoPoint(
longitude=15.087267458438873, latitude=37.50266842333162
),
)
]
res = await redis.georadius(
'geodata', 15, 37, 200, 'km',
with_coord=True, with_hash=True
)
assert res == [
GeoMember(
member=b'Palermo', dist=None, hash=3479099956230698,
coord=GeoPoint(
longitude=13.36138933897018433, latitude=38.11555639549629859
)
),
GeoMember(
member=b'Catania', dist=None, hash=3479447370796909,
coord=GeoPoint(
longitude=15.087267458438873, latitude=37.50266842333162
),
)
]
res = await redis.georadius(
'geodata', 15, 37, 200, 'km', with_coord=True
)
assert res == [
GeoMember(
member=b'Palermo', dist=None, hash=None,
coord=GeoPoint(
longitude=13.36138933897018433, latitude=38.11555639549629859
)
),
GeoMember(
member=b'Catania', dist=None, hash=None,
coord=GeoPoint(
longitude=15.087267458438873, latitude=37.50266842333162
),
)
]
res = await redis.georadius(
'geodata', 15, 37, 200, 'km', count=1, sort='DESC',
with_hash=True
)
assert res == [
GeoMember(
member=b'Palermo', dist=None, hash=3479099956230698, coord=None
)
]
@pytest.mark.run_loop
@redis_version(
3, 2, 0, reason='GEORADIUSBYMEMBER is available since redis >= 3.2.0')
async def test_georadiusbymember_binary(redis):
res = await redis.geoadd(
'geodata',
13.361389, 38.115556, 'Palermo',
15.087269, 37.502669, 'Catania'
)
assert res == 2
res = await redis.georadiusbymember(
'geodata', 'Palermo', 200, 'km', with_dist=True
)
assert res == [
GeoMember(member=b'Palermo', dist=0.0, coord=None, hash=None),
GeoMember(member=b'Catania', dist=166.2742, coord=None, hash=None)
]
res = await redis.georadiusbymember(
'geodata', 'Palermo', 200, 'km',
with_dist=True, with_coord=True
)
assert res == [
GeoMember(
member=b'Palermo', dist=0.0, hash=None,
coord=GeoPoint(13.361389338970184, 38.1155563954963)
),
GeoMember(
member=b'Catania', dist=166.2742, hash=None,
coord=GeoPoint(15.087267458438873, 37.50266842333162)
)
]
res = await redis.georadiusbymember(
'geodata', 'Palermo', 200, 'km',
with_dist=True, with_coord=True, with_hash=True
)
assert res == [
GeoMember(
member=b'Palermo', dist=0.0, hash=3479099956230698,
coord=GeoPoint(13.361389338970184, 38.1155563954963)
),
GeoMember(
member=b'Catania', dist=166.2742, hash=3479447370796909,
coord=GeoPoint(15.087267458438873, 37.50266842333162)
)
]
| 29.017647 | 79 | 0.576661 | 1,631 | 14,799 | 5.187002 | 0.05886 | 0.050118 | 0.081442 | 0.05721 | 0.958038 | 0.947754 | 0.928014 | 0.900591 | 0.885106 | 0.867849 | 0 | 0.196781 | 0.29056 | 14,799 | 509 | 80 | 29.074656 | 0.60901 | 0 | 0 | 0.569196 | 0 | 0 | 0.120076 | 0 | 0 | 0 | 0 | 0 | 0.109375 | 1 | 0 | false | 0 | 0.006696 | 0 | 0.006696 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4935547f5095df8eea3dd6cabaa1ef98a2af4774 | 142 | py | Python | mvqag/train/__init__.py | VirkSaab/Medical-VQA | 6d77963cc81940fc680a18d931e0d88a3264f5fa | [
"MIT"
] | null | null | null | mvqag/train/__init__.py | VirkSaab/Medical-VQA | 6d77963cc81940fc680a18d931e0d88a3264f5fa | [
"MIT"
] | null | null | null | mvqag/train/__init__.py | VirkSaab/Medical-VQA | 6d77963cc81940fc680a18d931e0d88a3264f5fa | [
"MIT"
] | null | null | null | from .train import *
from .metrics import *
from .mixup import *
from .losses import *
from .lr_schedulers import *
from .ranger import Ranger | 23.666667 | 28 | 0.760563 | 20 | 142 | 5.35 | 0.45 | 0.46729 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161972 | 142 | 6 | 29 | 23.666667 | 0.89916 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
495cb787a5e749f2d36743e178503e662c81a80a | 88 | py | Python | sldWriter/__init__.py | milieuinfo/arcgis2qgs | d0d2e4da7cb9d6f08ef82b6c6f94d3340b96f9ba | [
"MIT"
] | 18 | 2015-11-24T20:08:35.000Z | 2021-11-12T15:10:22.000Z | sldWriter/__init__.py | canglangshushu/arcgis2qgs | d0d2e4da7cb9d6f08ef82b6c6f94d3340b96f9ba | [
"MIT"
] | 1 | 2021-04-08T14:21:01.000Z | 2021-04-08T14:21:01.000Z | sldWriter/__init__.py | canglangshushu/arcgis2qgs | d0d2e4da7cb9d6f08ef82b6c6f94d3340b96f9ba | [
"MIT"
] | 9 | 2017-06-09T04:31:52.000Z | 2021-11-22T11:32:14.000Z | from _sldWriter import *
from _layer import *
from _style import *
from _filter import * | 22 | 24 | 0.784091 | 12 | 88 | 5.416667 | 0.5 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170455 | 88 | 4 | 25 | 22 | 0.890411 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
499d2a09c5f9bc7d52d94643bcdf733d6bfe190e | 169 | py | Python | pages/themes/modulesAndPackages/examples/my_app/helper_module.py | WWWCourses/ProgressBG-Python-UniCredit-Slides | 87539aa2f73738370ac8df865cf3a1adac447391 | [
"MIT"
] | null | null | null | pages/themes/modulesAndPackages/examples/my_app/helper_module.py | WWWCourses/ProgressBG-Python-UniCredit-Slides | 87539aa2f73738370ac8df865cf3a1adac447391 | [
"MIT"
] | null | null | null | pages/themes/modulesAndPackages/examples/my_app/helper_module.py | WWWCourses/ProgressBG-Python-UniCredit-Slides | 87539aa2f73738370ac8df865cf3a1adac447391 | [
"MIT"
] | null | null | null | print( "__file__:", __file__)
print( "__name__:", __name__)
def get_user_name():
return input('Enter your name:')
def greet(user_name):
print(f'Hello, {user_name}!') | 21.125 | 33 | 0.704142 | 24 | 169 | 4.125 | 0.541667 | 0.242424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112426 | 169 | 8 | 34 | 21.125 | 0.66 | 0 | 0 | 0 | 0 | 0 | 0.311765 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 6 |
b8c790ff3195376678e1bbf691240f066d10b098 | 16,019 | py | Python | authors/apps/profiles/tests/test_user_profiles.py | andela/ah-backend-sparta | fcc394e486a736993702bfa1e6fd9e9b189b93ae | [
"BSD-3-Clause"
] | null | null | null | authors/apps/profiles/tests/test_user_profiles.py | andela/ah-backend-sparta | fcc394e486a736993702bfa1e6fd9e9b189b93ae | [
"BSD-3-Clause"
] | 11 | 2019-03-25T14:38:23.000Z | 2019-04-18T08:02:10.000Z | authors/apps/profiles/tests/test_user_profiles.py | andela/ah-backend-sparta | fcc394e486a736993702bfa1e6fd9e9b189b93ae | [
"BSD-3-Clause"
] | 5 | 2019-06-12T08:22:58.000Z | 2020-02-07T08:26:37.000Z | from authors.apps.authentication.tests.test_base import BaseTestCase
from rest_framework import status
from .test_data import(
register_user1_data,
register_user2_data,
update_user_profile_data1,
update_user_profile_data2,
update_user_profile_data_with_wrong_url,
update_user_profile_data3,
register_user3_data
)
class TestUserProfiles(BaseTestCase):
"""
Class to handle tests of profile management
"""
def test_updating_users_profiles(self):
"""
Method to test updating a user profile
"""
user_token1 = self.create_user(register_user1_data)
response1 = self.client.put('/api/profiles/maria22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
self.assertEqual(response1.data["username"], "maria22")
self.assertEqual(response1.data["firstname"], update_user_profile_data1["firstname"])
self.assertEqual(response1.data["lastname"], update_user_profile_data1["lastname"])
self.assertEqual(response1.data["bio"], update_user_profile_data1["bio"])
self.assertEqual(response1.data["image"], update_user_profile_data1["image"])
self.assertEqual(response1.status_code, status.HTTP_200_OK)
def test_getting_all_user_profiles(self):
"""
Method to test getting all user profiles
"""
user_token1 = self.create_user(register_user1_data)
self.client.put('/api/profiles/maria22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
#Get all user Profiles
response = self.client.get('/api/profiles/', HTTP_AUTHORIZATION=user_token1, format='json')
self.assertEqual(response.data["profiles"][0]["username"], "maria22")
self.assertEqual(response.data["profiles"][0]["firstname"], update_user_profile_data1['firstname'])
self.assertEqual(response.data["profiles"][0]["lastname"], update_user_profile_data1['lastname'])
self.assertEqual(response.data["profiles"][0]["bio"], update_user_profile_data1['bio'])
self.assertEqual(response.data["profiles"][0]["image"], update_user_profile_data1['image'])
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_getting_specific_user_profile(self):
"""
Method to test getting a single user profile
"""
user_token1 = self.create_user(register_user1_data)
self.client.put('/api/profiles/maria22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
#Get all user Profiles
response = self.client.get('/api/profiles/maria22', HTTP_AUTHORIZATION=user_token1, format='json')
self.assertEqual(response.data["username"], "maria22")
self.assertEqual(response.data["firstname"], update_user_profile_data1["firstname"])
self.assertEqual(response.data["lastname"], update_user_profile_data1["lastname"])
self.assertEqual(response.data["bio"], update_user_profile_data1["bio"])
self.assertEqual(response.data["image"], update_user_profile_data1["image"])
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_user_updating_profile_which_is_not_theirs(self):
"""
Method to test whether users are updating a profile that is not their profile
"""
user_token1 = self.create_user(register_user1_data)
user_token2 = self.create_user(register_user2_data)
response1 = self.client.put('/api/profiles/joan22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
response2 = self.client.put('/api/profiles/maria22', update_user_profile_data2, HTTP_AUTHORIZATION=user_token2, format='json')
self.assertEqual(response1.data["message"], "You can only edit your Profile")
self.assertEqual(response2.data["message"], "You can only edit your Profile")
self.assertEqual(response2.status_code, status.HTTP_403_FORBIDDEN)
def test_raise_exception_when_url_is_wrong(self):
"""
Method to test raise exception incase the image url is invalid
"""
user_token = self.create_user(register_user2_data)
response = self.client.put('/api/profiles/joan22', update_user_profile_data_with_wrong_url, HTTP_AUTHORIZATION=user_token, format='json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(response.data["errors"]["image"][0].title(), 'Enter A Valid Url.')
def test_user_following_another_user(self):
"""
Method to test user can follow another user
"""
user_token1 = self.create_user(register_user1_data)
user_token2 = self.create_user(register_user2_data)
response1 = self.client.put('/api/profiles/joan22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
response2 = self.client.put('/api/profiles/maria22', update_user_profile_data2, HTTP_AUTHORIZATION=user_token2, format='json')
response3 = self.client.post('/api/profiles/maria22/follow', HTTP_AUTHORIZATION=user_token2, format='json')
response4 = self.client.post('/api/profiles/joan22/follow', HTTP_AUTHORIZATION=user_token1, format='json')
self.assertEqual(response3.data["message"], "You are now following maria22")
self.assertEqual(response4.data["message"], "You are now following joan22")
self.assertEqual(response3.status_code, status.HTTP_201_CREATED)
self.assertEqual(response4.status_code, status.HTTP_201_CREATED)
def test_user_can_unfollow_another_user(self):
"""
Method to test a user can unfollow another user
"""
user_token1 = self.create_user(register_user1_data)
user_token2 = self.create_user(register_user2_data)
response1 = self.client.put('/api/profiles/joan22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
response2 = self.client.put('/api/profiles/maria22', update_user_profile_data2, HTTP_AUTHORIZATION=user_token2, format='json')
response3 = self.client.post('/api/profiles/maria22/follow', HTTP_AUTHORIZATION=user_token2, format='json')
response4 = self.client.post('/api/profiles/joan22/follow', HTTP_AUTHORIZATION=user_token1, format='json')
response5 = self.client.delete('/api/profiles/maria22/follow', HTTP_AUTHORIZATION=user_token2, format='json')
self.assertEqual(response5.data["message"], "You have unfollowed maria22")
self.assertEqual(response5.status_code, status.HTTP_204_NO_CONTENT)
def test_user_cant_unfollow_themselves(self):
"""
Method to test a user unfollowing themselves
"""
user_token1 = self.create_user(register_user1_data)
user_token2 = self.create_user(register_user2_data)
response1 = self.client.put('/api/profiles/joan22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
response2 = self.client.put('/api/profiles/maria22', update_user_profile_data2, HTTP_AUTHORIZATION=user_token2, format='json')
response3 = self.client.post('/api/profiles/maria22/follow', HTTP_AUTHORIZATION=user_token2, format='json')
response4 = self.client.post('/api/profiles/joan22/follow', HTTP_AUTHORIZATION=user_token1, format='json')
response5 = self.client.delete('/api/profiles/maria22/follow', HTTP_AUTHORIZATION=user_token1, format='json')
self.assertEqual(response5.data["errors"][0], "You can not unfollow your self")
self.assertEqual(response5.status_code, status.HTTP_400_BAD_REQUEST)
def test_user_cant_unfollow_a_user_for_two_times(self):
"""
Method to test a user unfollowing a user they have already unfollowed
"""
user_token1 = self.create_user(register_user1_data)
user_token2 = self.create_user(register_user2_data)
response1 = self.client.put('/api/profiles/joan22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
response2 = self.client.put('/api/profiles/maria22', update_user_profile_data2, HTTP_AUTHORIZATION=user_token2, format='json')
response3 = self.client.post('/api/profiles/maria22/follow', HTTP_AUTHORIZATION=user_token2, format='json')
response4 = self.client.post('/api/profiles/joan22/follow', HTTP_AUTHORIZATION=user_token1, format='json')
response5 = self.client.delete('/api/profiles/maria22/follow', HTTP_AUTHORIZATION=user_token2, format='json')
response6 = self.client.delete('/api/profiles/maria22/follow', HTTP_AUTHORIZATION=user_token2, format='json')
self.assertEqual(response6.data["errors"][0], "You have already unfollowed maria22")
self.assertEqual(response6.status_code, status.HTTP_400_BAD_REQUEST)
def test_view_users_your_following(self):
"""
Method to getting a list of users one is following
"""
user_token1 = self.create_user(register_user1_data)
user_token2 = self.create_user(register_user2_data)
response1 = self.client.put('/api/profiles/joan22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
response2 = self.client.put('/api/profiles/maria22', update_user_profile_data2, HTTP_AUTHORIZATION=user_token2, format='json')
response3 = self.client.post('/api/profiles/maria22/follow', HTTP_AUTHORIZATION=user_token2, format='json')
response4 = self.client.post('/api/profiles/joan22/follow', HTTP_AUTHORIZATION=user_token1, format='json')
response5 = self.client.get('/api/profiles/maria22/followers', HTTP_AUTHORIZATION=user_token1, format='json')
self.assertEqual(response5.data[0]["username"], "joan22")
self.assertEqual(response5.status_code, status.HTTP_200_OK)
def test_view_users_that_are_following_you(self):
"""
Method to test getting alist of users that are following you
"""
user_token1 = self.create_user(register_user1_data)
user_token2 = self.create_user(register_user2_data)
response1 = self.client.put('/api/profiles/joan22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
response2 = self.client.put('/api/profiles/maria22', update_user_profile_data2, HTTP_AUTHORIZATION=user_token2, format='json')
response3 = self.client.post('/api/profiles/maria22/follow', HTTP_AUTHORIZATION=user_token2, format='json')
response4 = self.client.post('/api/profiles/joan22/follow', HTTP_AUTHORIZATION=user_token1, format='json')
response5 = self.client.get('/api/profiles/maria22/following', HTTP_AUTHORIZATION=user_token1, format='json')
self.assertEqual(response5.data[0]["username"], "joan22")
self.assertEqual(response5.status_code, status.HTTP_200_OK)
def test_user_to_follow_does_not_exist(self):
"""
Method to test following a user who does not exist
"""
user_token1 = self.create_user(register_user1_data)
user_token2 = self.create_user(register_user2_data)
response1 = self.client.put('/api/profiles/joan22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
response2 = self.client.put('/api/profiles/maria22', update_user_profile_data2, HTTP_AUTHORIZATION=user_token2, format='json')
response3 = self.client.post('/api/profiles/maria22hghghgh/follow', HTTP_AUTHORIZATION=user_token2, format='json')
self.assertEqual(response3.data["detail"], "Not found.")
self.assertEqual(response3.status_code, status.HTTP_404_NOT_FOUND)
def test_following_a_user_you_already_follow(self):
"""
Method to test following a user you already follow
"""
user_token1 = self.create_user(register_user1_data)
user_token2 = self.create_user(register_user2_data)
response1 = self.client.put('/api/profiles/joan22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
response2 = self.client.put('/api/profiles/maria22', update_user_profile_data2, HTTP_AUTHORIZATION=user_token2, format='json')
response3 = self.client.post('/api/profiles/maria22/follow', HTTP_AUTHORIZATION=user_token2, format='json')
response4 = self.client.post('/api/profiles/joan22/follow', HTTP_AUTHORIZATION=user_token1, format='json')
response5 = self.client.post('/api/profiles/joan22/follow', HTTP_AUTHORIZATION=user_token1, format='json')
self.assertEqual(response5.data["errors"][0], "You already follow this user")
def test_a_user_following_themselves(self):
"""
Method to test following a user you already follow
"""
user_token1 = self.create_user(register_user1_data)
user_token2 = self.create_user(register_user2_data)
response2 = self.client.put('/api/profiles/maria22', update_user_profile_data2, HTTP_AUTHORIZATION=user_token2, format='json')
response3 = self.client.post('/api/profiles/maria22/follow', HTTP_AUTHORIZATION=user_token1, format='json')
self.assertEqual(response3.data["errors"][0], "You can not follow your self")
def test_user_is_not_following_anyone(self):
"""
Method to test user does not follow any one
"""
user_token1 = self.create_user(register_user3_data)
response1 = self.client.put('/api/profiles/claire22', update_user_profile_data3, HTTP_AUTHORIZATION=user_token1, format='json')
response2 = self.client.get('/api/profiles/claire22/following', HTTP_AUTHORIZATION=user_token1, format='json')
self.assertEqual(response2.data["errors"][0], "You are currently not following anyone")
def test_user_does_not_have_followers(self):
"""
Method to test user does not user following them
"""
user_token1 = self.create_user(register_user3_data)
response1 = self.client.put('/api/profiles/claire22', update_user_profile_data3, HTTP_AUTHORIZATION=user_token1, format='json')
response2 = self.client.get('/api/profiles/claire22/followers', HTTP_AUTHORIZATION=user_token1, format='json')
self.assertEqual(response2.data["errors"][0], "You dont have any followers")
def test_a_given_user_does_not_have_followers(self):
"""
Method to test user does not have followers
"""
user_token1 = self.create_user(register_user1_data)
user_token2 = self.create_user(register_user2_data)
response1 = self.client.put('/api/profiles/joan22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
response2 = self.client.put('/api/profiles/maria22', update_user_profile_data2, HTTP_AUTHORIZATION=user_token2, format='json')
response3 = self.client.get('/api/profiles/joan22/followers', HTTP_AUTHORIZATION=user_token1, format='json')
self.assertEqual(response3.data["errors"][0], "joan22 does not have any followers")
self.assertEqual(response3.status_code, status.HTTP_400_BAD_REQUEST)
def test_a_given_user_does_not_have_users_following(self):
"""
Method to test user does not have users they are following
"""
user_token1 = self.create_user(register_user1_data)
user_token2 = self.create_user(register_user2_data)
response1 = self.client.put('/api/profiles/joan22', update_user_profile_data1, HTTP_AUTHORIZATION=user_token1, format='json')
response2 = self.client.put('/api/profiles/maria22', update_user_profile_data2, HTTP_AUTHORIZATION=user_token2, format='json')
response3 = self.client.get('/api/profiles/joan22/following', HTTP_AUTHORIZATION=user_token1, format='json')
self.assertEqual(response3.data["errors"][0], "joan22 is currently not following anyone")
self.assertEqual(response3.status_code, status.HTTP_400_BAD_REQUEST) | 61.141221 | 146 | 0.721456 | 1,987 | 16,019 | 5.547559 | 0.072974 | 0.052617 | 0.110496 | 0.08328 | 0.873991 | 0.845868 | 0.810306 | 0.77175 | 0.762769 | 0.715141 | 0 | 0.033278 | 0.165241 | 16,019 | 262 | 147 | 61.141221 | 0.791056 | 0.063674 | 0 | 0.484848 | 0 | 0 | 0.167204 | 0.079141 | 0 | 0 | 0 | 0 | 0.284848 | 1 | 0.109091 | false | 0 | 0.018182 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b8dc5f50da6602264cc8186b2620dab9eb0c181c | 3,438 | py | Python | src/radixsort.py | endere/Data-structures | 2eb522406c348b6ca26c47f8790b77088cc8cae5 | [
"MIT"
] | 1 | 2017-06-19T22:35:34.000Z | 2017-06-19T22:35:34.000Z | src/radixsort.py | endere/Data-structures | 2eb522406c348b6ca26c47f8790b77088cc8cae5 | [
"MIT"
] | 1 | 2017-07-13T00:53:06.000Z | 2017-07-13T00:53:06.000Z | src/radixsort.py | endere/Data-structures-2nd-half | 2eb522406c348b6ca26c47f8790b77088cc8cae5 | [
"MIT"
] | null | null | null | """Radix sort data structure."""
def radix_sort(array):
"""Radix sort data structure function."""
if len(array) <= 1:
return array
buckets = {'0': [], '1': [], '2': [], '3': [], '4': [], '5': [], '6': [], '7': [], '8': [], '9': []}
for i in range(len(str(max(array)))):
for x in range(len(array)):
if not isinstance(array[x], int):
raise TypeError('Must be an integer, please try again.')
try:
buckets[str(array[x])[-i - 1]].append(array[x])
except IndexError:
buckets['0'].append(array[x])
array = []
for z in range(10):
array += buckets[str(z)]
buckets[str(z)] = []
return array
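# Illustrative usage (hedged example added for clarity; the input values are arbitrary).
# radix_sort expects a list of non-negative integers and returns a sorted list:
#     radix_sort([170, 45, 75, 90, 802, 24, 2, 66])
#     # -> [2, 24, 45, 66, 75, 90, 170, 802]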
if __name__ == '__main__':  # pragma: no cover
import random
import datetime
from functools import reduce
times = []
num_runs = 500
string_length = 5
for i in range(num_runs):
data = [i for i in range(string_length)]
timeA = datetime.datetime.now()
radix_sort(data)
timeB = datetime.datetime.now()
times.append(timeB - timeA)
average_time = reduce(lambda x, y: x + y, times) / len(times)
print(' ')
print('pre sorted')
print('Number of runs: ', num_runs)
print('Length of lists to sort: ', string_length)
print('Average time: ', str(average_time)[-8:], 'seconds')
times = []  # reset timings so each configuration reports its own average
string_length = 100
for i in range(num_runs):
data = [i for i in range(string_length)]
timeA = datetime.datetime.now()
radix_sort(data)
timeB = datetime.datetime.now()
times.append(timeB - timeA)
average_time = reduce(lambda x, y: x + y, times) / len(times)
print(' ')
print('pre sorted')
print('Number of runs: ', num_runs)
print('Length of lists to sort: ', string_length)
print('Average time: ', str(average_time)[-8:], 'seconds')
times = []
string_length = 5
for i in range(num_runs):
data = [i for i in range(string_length)][::-1]
timeA = datetime.datetime.now()
radix_sort(data)
timeB = datetime.datetime.now()
times.append(timeB - timeA)
average_time = reduce(lambda x, y: x + y, times) / len(times)
print(' ')
print('reverse order.')
print('Number of runs: ', num_runs)
print('Length of lists to sort: ', string_length)
print('Average time: ', str(average_time)[-8:], 'seconds')
times = []
string_length = 100
for i in range(num_runs):
data = [i for i in range(string_length)][::-1]
timeA = datetime.datetime.now()
radix_sort(data)
timeB = datetime.datetime.now()
times.append(timeB - timeA)
average_time = reduce(lambda x, y: x + y, times) / len(times)
print(' ')
print('reverse order.')
print('Number of runs: ', num_runs)
print('Length of lists to sort: ', string_length)
print('Average time: ', str(average_time)[-8:], 'seconds')
times = []
for i in range(num_runs):
data = random.sample(range(string_length), string_length)
timeA = datetime.datetime.now()
radix_sort(data)
timeB = datetime.datetime.now()
times.append(timeB - timeA)
average_time = reduce(lambda x, y: x + y, times) / len(times)
print(' ')
print('random Order.')
print('Number of runs: ', num_runs)
print('Length of lists to sort: ', string_length)
print('Average time: ', str(average_time)[-8:], 'seconds')
| 36.574468 | 104 | 0.579116 | 453 | 3,438 | 4.284768 | 0.178808 | 0.092736 | 0.030912 | 0.056672 | 0.758887 | 0.758887 | 0.758887 | 0.747553 | 0.747553 | 0.747553 | 0 | 0.013095 | 0.267016 | 3,438 | 93 | 105 | 36.967742 | 0.757143 | 0.022978 | 0 | 0.735632 | 0 | 0 | 0.129032 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011494 | false | 0 | 0.034483 | 0 | 0.068966 | 0.287356 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b8fdb0b8b089b1d81d79ffeb9581f2f9d782e180 | 73 | py | Python | CA117/Lab_2/pi_12.py | PRITI1999/OneLineWonders | 91a7368e0796e5a3b5839c9165f9fbe5460879f5 | [
"MIT"
] | 6 | 2016-02-04T00:15:20.000Z | 2019-10-13T13:53:16.000Z | CA117/Lab_2/pi_12.py | PRITI1999/OneLineWonders | 91a7368e0796e5a3b5839c9165f9fbe5460879f5 | [
"MIT"
] | 2 | 2016-03-14T04:01:36.000Z | 2019-10-16T12:45:34.000Z | CA117/Lab_2/pi_12.py | PRITI1999/OneLineWonders | 91a7368e0796e5a3b5839c9165f9fbe5460879f5 | [
"MIT"
] | 10 | 2016-02-09T14:38:32.000Z | 2021-05-25T08:16:26.000Z | print("{:.{}f}".format(__import__('math').pi,__import__('sys').argv[1]))
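# The one-liner above prints math.pi formatted to the number of decimal places given
# as the first command-line argument, e.g. `python pi_12.py 3` prints 3.142.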
| 36.5 | 72 | 0.630137 | 10 | 73 | 3.8 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013889 | 0.013699 | 73 | 1 | 73 | 73 | 0.513889 | 0 | 0 | 0 | 0 | 0 | 0.191781 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
77167f13690bee4065c037d37ba6dea8809804b8 | 129 | py | Python | core/context_processors.py | Wambere/django-template | 4ec67371a5582b1a322df8f4d1c30c06b4ddb5a3 | [
"MIT"
] | null | null | null | core/context_processors.py | Wambere/django-template | 4ec67371a5582b1a322df8f4d1c30c06b4ddb5a3 | [
"MIT"
] | 2 | 2015-11-27T10:40:42.000Z | 2015-11-27T10:52:17.000Z | core/context_processors.py | Wambere/django-template | 4ec67371a5582b1a322df8f4d1c30c06b4ddb5a3 | [
"MIT"
] | null | null | null | from django.contrib.sites.models import Site
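# Context processor that exposes the current Site object as ``site`` in every
# template; register it in the project's template context-processors setting to use it.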
def site_processor(request):
return { 'site': Site.objects.get_current() }
| 25.8 | 50 | 0.728682 | 17 | 129 | 5.411765 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155039 | 129 | 4 | 51 | 32.25 | 0.844037 | 0 | 0 | 0 | 0 | 0 | 0.032 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
7750618843503753a981a446d71ea006203241b6 | 208 | py | Python | plugins/red_canary/komand_red_canary/triggers/__init__.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 46 | 2019-06-05T20:47:58.000Z | 2022-03-29T10:18:01.000Z | plugins/red_canary/komand_red_canary/triggers/__init__.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 386 | 2019-06-07T20:20:39.000Z | 2022-03-30T17:35:01.000Z | plugins/red_canary/komand_red_canary/triggers/__init__.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 43 | 2019-07-09T14:13:58.000Z | 2022-03-28T12:04:46.000Z | # GENERATED BY KOMAND SDK - DO NOT EDIT
from .new_activity_monitor_matches.trigger import NewActivityMonitorMatches
from .new_detections.trigger import NewDetections
from .new_events.trigger import NewEvents
| 41.6 | 75 | 0.860577 | 27 | 208 | 6.444444 | 0.703704 | 0.12069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100962 | 208 | 4 | 76 | 52 | 0.930481 | 0.177885 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
91fd7928a242ba820a031bf05aa9e37a03dae2b1 | 29,646 | py | Python | tests/environments/execution/test_fargate_task_environment.py | vnsn/prefect | 972345597975155dba9e3232bcc430d0a6258a37 | [
"Apache-2.0"
] | 1 | 2021-05-12T12:47:12.000Z | 2021-05-12T12:47:12.000Z | tests/environments/execution/test_fargate_task_environment.py | vnsn/prefect | 972345597975155dba9e3232bcc430d0a6258a37 | [
"Apache-2.0"
] | 3 | 2022-03-12T09:21:41.000Z | 2022-03-26T08:11:51.000Z | tests/environments/execution/test_fargate_task_environment.py | vnsn/prefect | 972345597975155dba9e3232bcc430d0a6258a37 | [
"Apache-2.0"
] | null | null | null | from unittest.mock import MagicMock
import cloudpickle
import prefect
import pytest
from botocore.exceptions import ClientError
from prefect import Flow, config
from prefect.executors import LocalDaskExecutor
from prefect.environments import FargateTaskEnvironment
from prefect.storage import Docker
from prefect.utilities.configuration import set_temporary_config
def test_create_fargate_task_environment():
environment = FargateTaskEnvironment()
assert environment.executor is not None
assert environment.labels == set()
assert environment.on_start is None
assert environment.on_exit is None
assert environment.metadata == {}
assert environment.logger.name == "prefect.FargateTaskEnvironment"
def test_create_fargate_task_environment_with_executor():
executor = LocalDaskExecutor()
environment = FargateTaskEnvironment(executor=executor)
assert environment.executor is executor
def test_create_fargate_task_environment_labels():
environment = FargateTaskEnvironment(labels=["foo"])
assert environment.labels == set(["foo"])
def test_create_fargate_task_environment_callbacks():
def f():
pass
environment = FargateTaskEnvironment(labels=["foo"], on_start=f, on_exit=f)
assert environment.labels == set(["foo"])
assert environment.on_start is f
assert environment.on_exit is f
def test_fargate_task_environment_dependencies():
environment = FargateTaskEnvironment()
assert environment.dependencies == ["boto3", "botocore"]
def test_create_fargate_task_environment_aws_creds_provided():
environment = FargateTaskEnvironment(
labels=["foo"],
aws_access_key_id="id",
aws_secret_access_key="secret",
aws_session_token="session",
region_name="region",
)
assert environment.labels == set(["foo"])
assert environment.aws_access_key_id == "id"
assert environment.aws_secret_access_key == "secret"
assert environment.aws_session_token == "session"
assert environment.region_name == "region"
def test_create_fargate_task_environment_aws_creds_environment(monkeypatch):
monkeypatch.setenv("AWS_ACCESS_KEY_ID", "id")
monkeypatch.setenv("AWS_SECRET_ACCESS_KEY", "secret")
monkeypatch.setenv("AWS_SESSION_TOKEN", "session")
monkeypatch.setenv("REGION_NAME", "region")
environment = FargateTaskEnvironment(labels=["foo"])
assert environment.labels == set(["foo"])
assert environment.aws_access_key_id == "id"
assert environment.aws_secret_access_key == "secret"
assert environment.aws_session_token == "session"
assert environment.region_name == "region"
def test_parse_task_definition_kwargs():
environment = FargateTaskEnvironment()
kwarg_dict = {
"family": "test",
"taskRoleArn": "test",
"executionRoleArn": "test",
"networkMode": "test",
"containerDefinitions": "test",
"volumes": "test",
"placementConstraints": "test",
"requiresCompatibilities": "test",
"cpu": "test",
"memory": "test",
"tags": "test",
"pidMode": "test",
"ipcMode": "test",
"proxyConfiguration": "test",
"inferenceAccelerators": "test",
}
task_definition_kwargs, task_run_kwargs = environment._parse_kwargs(kwarg_dict)
assert task_definition_kwargs == kwarg_dict
assert task_run_kwargs == {"placementConstraints": "test", "tags": "test"}
def test_parse_task_run_kwargs():
environment = FargateTaskEnvironment()
kwarg_dict = {
"cluster": "test",
"taskDefinition": "test",
"count": "test",
"startedBy": "test",
"group": "test",
"placementConstraints": "test",
"placementStrategy": "test",
"platformVersion": "test",
"networkConfiguration": "test",
"tags": "test",
"enableECSManagedTags": "test",
"propagateTags": "test",
}
task_definition_kwargs, task_run_kwargs = environment._parse_kwargs(kwarg_dict)
assert task_run_kwargs == kwarg_dict
assert task_definition_kwargs == {"placementConstraints": "test", "tags": "test"}
def test_parse_task_definition_and_run_kwargs():
environment = FargateTaskEnvironment()
def_kwarg_dict = {
"family": "test",
"taskRoleArn": "test",
"executionRoleArn": "test",
"networkMode": "test",
"containerDefinitions": "test",
"volumes": "test",
"placementConstraints": "test",
"requiresCompatibilities": "test",
"cpu": "test",
"memory": "test",
"tags": "test",
"pidMode": "test",
"ipcMode": "test",
"proxyConfiguration": "test",
"inferenceAccelerators": "test",
}
run_kwarg_dict = {
"cluster": "test",
"taskDefinition": "test",
"count": "test",
"startedBy": "test",
"group": "test",
"placementConstraints": "test",
"placementStrategy": "test",
"platformVersion": "test",
"networkConfiguration": "test",
"tags": "test",
"enableECSManagedTags": "test",
"propagateTags": "test",
}
kwarg_dict = {
"family": "test",
"taskRoleArn": "test",
"executionRoleArn": "test",
"networkMode": "test",
"containerDefinitions": "test",
"volumes": "test",
"placementConstraints": "test",
"requiresCompatibilities": "test",
"cpu": "test",
"memory": "test",
"tags": "test",
"pidMode": "test",
"ipcMode": "test",
"proxyConfiguration": "test",
"inferenceAccelerators": "test",
"cluster": "test",
"taskDefinition": "test",
"count": "test",
"startedBy": "test",
"group": "test",
"placementStrategy": "test",
"platformVersion": "test",
"networkConfiguration": "test",
"enableECSManagedTags": "test",
"propagateTags": "test",
}
task_definition_kwargs, task_run_kwargs = environment._parse_kwargs(kwarg_dict)
assert task_definition_kwargs == def_kwarg_dict
assert task_run_kwargs == run_kwarg_dict
def test_parse_task_kwargs_invalid_value_removed():
environment = FargateTaskEnvironment()
kwarg_dict = {"test": "not_real"}
task_definition_kwargs, task_run_kwargs = environment._parse_kwargs(kwarg_dict)
assert task_definition_kwargs == {}
assert task_run_kwargs == {}
def test_setup_definition_exists(monkeypatch):
existing_task_definition = {
"containerDefinitions": [
{
"environment": [
{"name": "PREFECT__CLOUD__GRAPHQL", "value": config.cloud.graphql},
{"name": "PREFECT__CLOUD__USE_LOCAL_SECRETS", "value": "false"},
{
"name": "PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudFlowRunner",
},
{
"name": "PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudTaskRunner",
},
{"name": "PREFECT__LOGGING__LOG_TO_CLOUD", "value": "true"},
{
"name": "PREFECT__LOGGING__EXTRA_LOGGERS",
"value": str(config.logging.extra_loggers),
},
],
"name": "flow-container",
"image": "test/image:tag",
"command": [
"/bin/sh",
"-c",
"python -c 'import prefect; prefect.environments.execution.load_and_run_flow()'",
],
}
],
}
boto3_client = MagicMock()
boto3_client.describe_task_definition.return_value = {
"taskDefinition": existing_task_definition
}
monkeypatch.setattr("boto3.client", MagicMock(return_value=boto3_client))
environment = FargateTaskEnvironment()
environment.setup(
Flow(
"test",
storage=Docker(registry_url="test", image_name="image", image_tag="tag"),
)
)
assert boto3_client.describe_task_definition.called
assert not boto3_client.register_task_definition.called
def test_setup_definition_changed(monkeypatch):
existing_task_definition = {
"containerDefinitions": [
{
"environment": [
{"name": "PREFECT__CLOUD__GRAPHQL", "value": config.cloud.graphql},
{"name": "PREFECT__CLOUD__USE_LOCAL_SECRETS", "value": "false"},
{
"name": "PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudFlowRunner",
},
{
"name": "PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudTaskRunner",
},
{"name": "PREFECT__LOGGING__LOG_TO_CLOUD", "value": "true"},
{
"name": "PREFECT__LOGGING__EXTRA_LOGGERS",
"value": str(config.logging.extra_loggers),
},
],
"name": "flow-container",
"image": "test/image:tag",
"command": [
"/bin/sh",
"-c",
"python -c 'import prefect; prefect.environments.execution.load_and_run_flow()'",
],
}
],
"memory": 256,
"cpu": 512,
}
boto3_client = MagicMock()
boto3_client.describe_task_definition.return_value = {
"taskDefinition": existing_task_definition
}
monkeypatch.setattr("boto3.client", MagicMock(return_value=boto3_client))
environment = FargateTaskEnvironment(memory=256, cpu=1024)
with pytest.raises(ValueError):
environment.setup(
Flow(
"test",
storage=Docker(
registry_url="test", image_name="image", image_tag="newtag"
),
)
)
assert boto3_client.describe_task_definition.called
assert not boto3_client.register_task_definition.called
def test_validate_definition_not_changed_when_env_out_of_order(monkeypatch):
existing_task_definition = {
"containerDefinitions": [
{
"environment": [
{
"name": "PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudFlowRunner",
},
{"name": "PREFECT__CLOUD__USE_LOCAL_SECRETS", "value": "false"},
{
"name": "PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudTaskRunner",
},
{"name": "PREFECT__LOGGING__LOG_TO_CLOUD", "value": "true"},
{
"name": "PREFECT__LOGGING__EXTRA_LOGGERS",
"value": str(config.logging.extra_loggers),
},
# This is added first in _render_task_definition_kwargs, so it's at the end now
{"name": "PREFECT__CLOUD__GRAPHQL", "value": config.cloud.graphql},
],
"name": "flow-container",
"image": "test/image:tag",
"command": [
"/bin/sh",
"-c",
"python -c 'import prefect; prefect.environments.execution.load_and_run_flow()'",
],
}
],
"memory": 256,
"cpu": 512,
}
boto3_client = MagicMock()
boto3_client.describe_task_definition.return_value = {
"taskDefinition": existing_task_definition
}
monkeypatch.setattr("boto3.client", MagicMock(return_value=boto3_client))
environment = FargateTaskEnvironment(memory=256, cpu=512)
environment.setup(
Flow(
"test",
storage=Docker(registry_url="test", image_name="image", image_tag="tag"),
)
)
assert boto3_client.describe_task_definition.called
assert not boto3_client.register_task_definition.called
def test_validate_definition_not_changed_when_out_of_order_in_second_container(
monkeypatch,
):
existing_task_definition = {
"containerDefinitions": [
{
"environment": [
{
"name": "PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudFlowRunner",
},
{"name": "PREFECT__CLOUD__USE_LOCAL_SECRETS", "value": "false"},
{
"name": "PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudTaskRunner",
},
{"name": "PREFECT__LOGGING__LOG_TO_CLOUD", "value": "true"},
{
"name": "PREFECT__LOGGING__EXTRA_LOGGERS",
"value": str(config.logging.extra_loggers),
},
# This is added first in _render_task_definition_kwargs, so it's at the end now
{"name": "PREFECT__CLOUD__GRAPHQL", "value": config.cloud.graphql},
],
"name": "flow-container",
"image": "test/image:tag",
"command": [
"/bin/sh",
"-c",
"python -c 'import prefect; prefect.environments.execution.load_and_run_flow()'",
],
},
{
"environment": [
{
"name": "foo",
"value": "bar",
},
{
"name": "foo2",
"value": "bar2",
},
{"name": "PREFECT__CLOUD__GRAPHQL", "value": config.cloud.graphql},
{"name": "PREFECT__CLOUD__USE_LOCAL_SECRETS", "value": "false"},
{
"name": "PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudFlowRunner",
},
{
"name": "PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudTaskRunner",
},
{"name": "PREFECT__LOGGING__LOG_TO_CLOUD", "value": "true"},
{
"name": "PREFECT__LOGGING__EXTRA_LOGGERS",
"value": str(config.logging.extra_loggers),
},
],
"secrets": [
{"name": "1", "valueFrom": "1"},
{"name": "2", "valueFrom": "2"},
],
"mountPoints": [
{"sourceVolume": "1", "containerPath": "1", "readOnly": False},
{"sourceVolume": "2", "containerPath": "2", "readOnly": False},
],
"extraHosts": [
{"hostname": "1", "ipAddress": "1"},
{"hostname": "2", "ipAddress": "2"},
],
"volumesFrom": [
{"sourceContainer": "1", "readOnly": False},
{"sourceContainer": "2", "readOnly": False},
],
"ulimits": [
{"name": "cpu", "softLimit": 1, "hardLimit": 1},
{"name": "memlock", "softLimit": 2, "hardLimit": 2},
],
"portMappings": [
{"containerPort": 80, "hostPort": 80, "protocol": "tcp"},
{"containerPort": 81, "hostPort": 81, "protocol": "tcp"},
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {},
"secretOptions": [
{"name": "1", "valueFrom": "1"},
{"name": "2", "valueFrom": "2"},
],
},
"name": "some-other-container",
"image": "test/image:tag",
"command": [
"/bin/sh",
],
},
],
"memory": 256,
"cpu": 512,
}
boto3_client = MagicMock()
boto3_client.describe_task_definition.return_value = {
"taskDefinition": existing_task_definition
}
monkeypatch.setattr("boto3.client", MagicMock(return_value=boto3_client))
environment = FargateTaskEnvironment(
memory=256,
cpu=512,
containerDefinitions=[
{},
{
"environment": [
{
"name": "foo2",
"value": "bar2",
},
{
"name": "foo",
"value": "bar",
},
],
"secrets": [
{"name": "2", "valueFrom": "2"},
{"name": "1", "valueFrom": "1"},
],
"mountPoints": [
{"sourceVolume": "2", "containerPath": "2", "readOnly": False},
{"sourceVolume": "1", "containerPath": "1", "readOnly": False},
],
"extraHosts": [
{"hostname": "2", "ipAddress": "2"},
{"hostname": "1", "ipAddress": "1"},
],
"volumesFrom": [
{"sourceContainer": "2", "readOnly": False},
{"sourceContainer": "1", "readOnly": False},
],
"ulimits": [
{"name": "memlock", "softLimit": 2, "hardLimit": 2},
{"name": "cpu", "softLimit": 1, "hardLimit": 1},
],
"portMappings": [
{"containerPort": 81, "hostPort": 81, "protocol": "tcp"},
{"containerPort": 80, "hostPort": 80, "protocol": "tcp"},
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {},
"secretOptions": [
{"name": "2", "valueFrom": "2"},
{"name": "1", "valueFrom": "1"},
],
},
"name": "some-other-container",
"image": "test/image:tag",
"command": [
"/bin/sh",
],
},
],
)
environment.setup(
Flow(
"test",
storage=Docker(registry_url="test", image_name="image", image_tag="tag"),
)
)
assert boto3_client.describe_task_definition.called
assert not boto3_client.register_task_definition.called
def test_validate_definition_not_changed_when_names_are_in_arn(monkeypatch):
existing_task_definition = {
"containerDefinitions": [
{
"environment": [
{"name": "PREFECT__CLOUD__GRAPHQL", "value": config.cloud.graphql},
{"name": "PREFECT__CLOUD__USE_LOCAL_SECRETS", "value": "false"},
{
"name": "PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudFlowRunner",
},
{
"name": "PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudTaskRunner",
},
{"name": "PREFECT__LOGGING__LOG_TO_CLOUD", "value": "true"},
{
"name": "PREFECT__LOGGING__EXTRA_LOGGERS",
"value": str(config.logging.extra_loggers),
},
],
"name": "flow-container",
"image": "test/image:tag",
"command": [
"/bin/sh",
"-c",
"python -c 'import prefect; prefect.environments.execution.load_and_run_flow()'",
],
}
],
"taskRoleArn": "arn:aws:iam::000000000000:role/my-role-name",
"memory": 256,
"cpu": 512,
}
boto3_client = MagicMock()
boto3_client.describe_task_definition.return_value = {
"taskDefinition": existing_task_definition
}
monkeypatch.setattr("boto3.client", MagicMock(return_value=boto3_client))
environment = FargateTaskEnvironment(
memory=256, cpu=512, taskRoleArn="my-role-name"
)
environment.setup(
Flow(
"test",
storage=Docker(registry_url="test", image_name="image", image_tag="tag"),
)
)
assert boto3_client.describe_task_definition.called
assert not boto3_client.register_task_definition.called
def test_setup_definition_register(monkeypatch):
boto3_client = MagicMock()
boto3_client.describe_task_definition.side_effect = ClientError({}, None)
boto3_client.register_task_definition.return_value = {}
monkeypatch.setattr("boto3.client", MagicMock(return_value=boto3_client))
environment = FargateTaskEnvironment(
family="test",
containerDefinitions=[
{
"name": "flow-container",
"image": "image",
"command": [],
"environment": [],
"essential": True,
}
],
)
environment.setup(
Flow(
"test",
storage=Docker(registry_url="test", image_name="image", image_tag="tag"),
)
)
assert boto3_client.describe_task_definition.called
assert boto3_client.register_task_definition.called
assert boto3_client.register_task_definition.call_args[1]["family"] == "test"
assert boto3_client.register_task_definition.call_args[1][
"containerDefinitions"
] == [
{
"name": "flow-container",
"image": "test/image:tag",
"command": [
"/bin/sh",
"-c",
"python -c 'import prefect; prefect.environments.execution.load_and_run_flow()'",
],
"environment": [
{
"name": "PREFECT__CLOUD__GRAPHQL",
"value": prefect.config.cloud.graphql,
},
{"name": "PREFECT__CLOUD__USE_LOCAL_SECRETS", "value": "false"},
{
"name": "PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudFlowRunner",
},
{
"name": "PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudTaskRunner",
},
{"name": "PREFECT__LOGGING__LOG_TO_CLOUD", "value": "true"},
{
"name": "PREFECT__LOGGING__EXTRA_LOGGERS",
"value": "[]",
},
],
"essential": True,
}
]
def test_setup_definition_register_no_definitions(monkeypatch):
boto3_client = MagicMock()
boto3_client.describe_task_definition.side_effect = ClientError({}, None)
boto3_client.register_task_definition.return_value = {}
monkeypatch.setattr("boto3.client", MagicMock(return_value=boto3_client))
environment = FargateTaskEnvironment(family="test")
environment.setup(
Flow(
"test",
storage=Docker(registry_url="test", image_name="image", image_tag="tag"),
)
)
assert boto3_client.describe_task_definition.called
assert boto3_client.register_task_definition.called
assert boto3_client.register_task_definition.call_args[1]["family"] == "test"
assert boto3_client.register_task_definition.call_args[1][
"containerDefinitions"
] == [
{
"environment": [
{
"name": "PREFECT__CLOUD__GRAPHQL",
"value": prefect.config.cloud.graphql,
},
{"name": "PREFECT__CLOUD__USE_LOCAL_SECRETS", "value": "false"},
{
"name": "PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudFlowRunner",
},
{
"name": "PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS",
"value": "prefect.engine.cloud.CloudTaskRunner",
},
{"name": "PREFECT__LOGGING__LOG_TO_CLOUD", "value": "true"},
{
"name": "PREFECT__LOGGING__EXTRA_LOGGERS",
"value": "[]",
},
],
"name": "flow-container",
"image": "test/image:tag",
"command": [
"/bin/sh",
"-c",
"python -c 'import prefect; prefect.environments.execution.load_and_run_flow()'",
],
}
]
def test_execute_run_task(monkeypatch):
boto3_client = MagicMock()
boto3_client.run_task.return_value = {}
monkeypatch.setattr("boto3.client", MagicMock(return_value=boto3_client))
with set_temporary_config({"cloud.auth_token": "test"}):
environment = FargateTaskEnvironment(
cluster="test", family="test", taskDefinition="test"
)
environment.execute(
Flow(
"test",
storage=Docker(
registry_url="test", image_name="image", image_tag="tag"
),
),
)
assert boto3_client.run_task.called
assert boto3_client.run_task.call_args[1]["taskDefinition"] == "test"
assert boto3_client.run_task.call_args[1]["overrides"] == {
"containerOverrides": [
{
"name": "flow-container",
"environment": [
{
"name": "PREFECT__CLOUD__AUTH_TOKEN",
"value": prefect.config.cloud.get("auth_token"),
},
{"name": "PREFECT__CONTEXT__FLOW_RUN_ID", "value": "unknown"},
{"name": "PREFECT__CONTEXT__IMAGE", "value": "test/image:tag"},
],
}
]
}
assert boto3_client.run_task.call_args[1]["launchType"] == "FARGATE"
assert boto3_client.run_task.call_args[1]["cluster"] == "test"
def test_execute_run_task_agent_token(monkeypatch):
boto3_client = MagicMock()
boto3_client.run_task.return_value = {}
monkeypatch.setattr("boto3.client", MagicMock(return_value=boto3_client))
with set_temporary_config({"cloud.agent.auth_token": "test"}):
environment = FargateTaskEnvironment(
cluster="test", family="test", taskDefinition="test"
)
environment.execute(
Flow(
"test",
storage=Docker(
registry_url="test", image_name="image", image_tag="tag"
),
),
)
assert boto3_client.run_task.called
assert boto3_client.run_task.call_args[1]["taskDefinition"] == "test"
assert boto3_client.run_task.call_args[1]["overrides"] == {
"containerOverrides": [
{
"name": "flow-container",
"environment": [
{
"name": "PREFECT__CLOUD__AUTH_TOKEN",
"value": prefect.config.cloud.agent.get("auth_token"),
},
{"name": "PREFECT__CONTEXT__FLOW_RUN_ID", "value": "unknown"},
{"name": "PREFECT__CONTEXT__IMAGE", "value": "test/image:tag"},
],
}
]
}
assert boto3_client.run_task.call_args[1]["launchType"] == "FARGATE"
assert boto3_client.run_task.call_args[1]["cluster"] == "test"
def test_environment_run():
class MyExecutor(LocalDaskExecutor):
submit_called = False
def submit(self, *args, **kwargs):
self.submit_called = True
return super().submit(*args, **kwargs)
global_dict = {}
@prefect.task
def add_to_dict():
global_dict["run"] = True
executor = MyExecutor()
environment = FargateTaskEnvironment(executor=executor)
flow = prefect.Flow("test", tasks=[add_to_dict], environment=environment)
environment.run(flow=flow)
assert global_dict.get("run") is True
assert executor.submit_called
def test_roundtrip_cloudpickle():
environment = FargateTaskEnvironment(cluster="test")
assert environment.task_run_kwargs == {"cluster": "test"}
new = cloudpickle.loads(cloudpickle.dumps(environment))
assert isinstance(new, FargateTaskEnvironment)
assert new.task_run_kwargs == {"cluster": "test"}
| 35.589436 | 101 | 0.524759 | 2,411 | 29,646 | 6.120282 | 0.095396 | 0.0492 | 0.026498 | 0.024939 | 0.836202 | 0.804147 | 0.76579 | 0.754541 | 0.732516 | 0.717539 | 0 | 0.010589 | 0.350165 | 29,646 | 832 | 102 | 35.632212 | 0.755359 | 0.005228 | 0 | 0.673525 | 0 | 0 | 0.257834 | 0.100414 | 0 | 0 | 0 | 0 | 0.08642 | 1 | 0.034294 | false | 0.001372 | 0.02332 | 0 | 0.061728 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6234fdb0c4a54ec34ec40a936c42840238437f70 | 432 | py | Python | tinynn/__init__.py | lx120/tinynn | 88b941a706700ca7f6b1cc4ae7f271df7049348c | [
"MIT"
] | 217 | 2019-08-19T07:23:57.000Z | 2022-03-31T13:00:46.000Z | tinynn/__init__.py | lx120/tinynn | 88b941a706700ca7f6b1cc4ae7f271df7049348c | [
"MIT"
] | 19 | 2019-09-06T17:11:13.000Z | 2022-03-12T00:02:47.000Z | tinynn/__init__.py | lx120/tinynn | 88b941a706700ca7f6b1cc4ae7f271df7049348c | [
"MIT"
] | 57 | 2019-09-06T12:12:24.000Z | 2022-03-29T03:33:01.000Z | from tinynn.core import initializer
from tinynn.core import layer
from tinynn.core import loss
from tinynn.core import model
from tinynn.core import net
from tinynn.core import optimizer
from tinynn.utils import data_iterator
from tinynn.utils import dataset
from tinynn.utils import downloader
from tinynn.utils import math
from tinynn.utils import metric
from tinynn.utils import seeder
from tinynn.utils import structured_param
| 28.8 | 41 | 0.847222 | 67 | 432 | 5.432836 | 0.298507 | 0.357143 | 0.288462 | 0.403846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122685 | 432 | 14 | 42 | 30.857143 | 0.960422 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
657a7f8379db58ebf99b8a894722e21163bd7d98 | 122 | py | Python | examples/starkex-cairo/starkware/cairo/lang/compiler/preprocessor/preprocessor_error.py | LatticeLabVentures/BeamNet | e4a755dbc52b4eaef73074b22d4431df88394b4a | [
"CC0-1.0"
] | null | null | null | examples/starkex-cairo/starkware/cairo/lang/compiler/preprocessor/preprocessor_error.py | LatticeLabVentures/BeamNet | e4a755dbc52b4eaef73074b22d4431df88394b4a | [
"CC0-1.0"
] | null | null | null | examples/starkex-cairo/starkware/cairo/lang/compiler/preprocessor/preprocessor_error.py | LatticeLabVentures/BeamNet | e4a755dbc52b4eaef73074b22d4431df88394b4a | [
"CC0-1.0"
] | null | null | null | from starkware.cairo.lang.compiler.error_handling import LocationError
class PreprocessorError(LocationError):
pass
| 20.333333 | 70 | 0.836066 | 13 | 122 | 7.769231 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106557 | 122 | 5 | 71 | 24.4 | 0.926606 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
65b08e60ee75726625f2a00d780a2bf3c0d00373 | 418 | py | Python | alphalogic_api/objects/__init__.py | Alphaopen/alphalogic-api | fc3f7c9fbb17d06bc03363e59d770b3b0885cb1a | [
"MIT"
] | null | null | null | alphalogic_api/objects/__init__.py | Alphaopen/alphalogic-api | fc3f7c9fbb17d06bc03363e59d770b3b0885cb1a | [
"MIT"
] | null | null | null | alphalogic_api/objects/__init__.py | Alphaopen/alphalogic-api | fc3f7c9fbb17d06bc03363e59d770b3b0885cb1a | [
"MIT"
] | 1 | 2018-09-26T20:46:07.000Z | 2018-09-26T20:46:07.000Z | # -*- coding: utf-8 -*-
from alphalogic_api.objects.object import Root, Object
from alphalogic_api.objects.command import Command
from alphalogic_api.objects.event import Event, MajorEvent, MinorEvent, CriticalEvent, BlockerEvent, TrivialEvent
from alphalogic_api.objects.parameter import Parameter, ParameterBool, ParameterLong, \
ParameterDouble, ParameterDatetime, ParameterString, ParameterList, ParameterDict | 59.714286 | 113 | 0.832536 | 44 | 418 | 7.818182 | 0.568182 | 0.162791 | 0.197674 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002639 | 0.093301 | 418 | 7 | 114 | 59.714286 | 0.905013 | 0.050239 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.8 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
65ccb2e4c32ed003b213a0c073c56c1a8c211312 | 33 | py | Python | audio2rgb/__init__.py | beckdac/audio2rgb | ec894caeca8b19588dea2106b2d5006e9d41ee98 | [
"BSD-3-Clause"
] | 1 | 2018-09-26T01:09:35.000Z | 2018-09-26T01:09:35.000Z | audio2rgb/__init__.py | beckdac/audio2rgb | ec894caeca8b19588dea2106b2d5006e9d41ee98 | [
"BSD-3-Clause"
] | null | null | null | audio2rgb/__init__.py | beckdac/audio2rgb | ec894caeca8b19588dea2106b2d5006e9d41ee98 | [
"BSD-3-Clause"
] | null | null | null | from .audio2rgb import Audio2RGB
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 0.121212 | 33 | 1 | 33 | 33 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
65d88922f95ca0b03e8ed2bf62a7a167964b2250 | 86 | py | Python | mc2pbrt/block/__init__.py | PbrtCraft/mc2pbrt | 145c844a7ec0fb3ee3c83dba67c2f8cdcf888ed4 | [
"MIT"
] | 3 | 2019-10-04T17:56:50.000Z | 2019-11-11T13:39:24.000Z | mc2pbrt/block/__init__.py | PbrtCraft/mc2pbrt | 145c844a7ec0fb3ee3c83dba67c2f8cdcf888ed4 | [
"MIT"
] | 8 | 2019-10-12T03:53:23.000Z | 2022-03-12T00:01:21.000Z | mc2pbrt/block/__init__.py | PbrtCraft/mc2pbrt | 145c844a7ec0fb3ee3c83dba67c2f8cdcf888ed4 | [
"MIT"
] | 1 | 2019-02-12T23:41:00.000Z | 2019-02-12T23:41:00.000Z | from block.blocksolver import BlockSolver
from block.blockcreator import BlockCreator
| 28.666667 | 43 | 0.883721 | 10 | 86 | 7.6 | 0.5 | 0.236842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 86 | 2 | 44 | 43 | 0.974359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
65e0019c66a61789890e7cb76d98033ef4754570 | 10,209 | py | Python | sansred/attenuation_constants.py | SayaTakeuchi1010/_combine | 31d511e07068603274db017212cfbfd228ab7d23 | [
"Unlicense"
] | 5 | 2018-09-28T14:05:17.000Z | 2020-09-15T21:53:28.000Z | sansred/attenuation_constants.py | SayaTakeuchi1010/_combine | 31d511e07068603274db017212cfbfd228ab7d23 | [
"Unlicense"
] | 80 | 2018-04-26T15:00:41.000Z | 2022-03-03T15:32:53.000Z | sansred/attenuation_constants.py | SayaTakeuchi1010/_combine | 31d511e07068603274db017212cfbfd228ab7d23 | [
"Unlicense"
] | 9 | 2018-04-20T18:30:54.000Z | 2021-12-28T15:38:41.000Z | """
Calibration constants for SANS.
Defines *attenuation* dictionary with::
attenuation['NGB30|NG7|NGB']['att|att_err|lambda'] = [value, ...]
"""
import re
import json
from numpy import array
attenuation = {}
# From NCNR_Utils.ipf in IGOR reduction:
# 08/27/2019 bbm: modified NGBatt0 so that it is the same length as all the others.
ngb_source_string = """\
// new calibrations MAY 2013
root:myGlobals:Attenuators:NGBatt0 = {1,1,1,1,1,1,1,1,1,1,1,1}
root:myGlobals:Attenuators:NGBatt1 = {0.512,0.474,0.418,0.392,0.354,0.325,0.294,0.27,0.255,0.222,0.185,0.155}
root:myGlobals:Attenuators:NGBatt2 = {0.268,0.227,0.184,0.16,0.129,0.108,0.0904,0.0777,0.0689,0.0526,0.0372,0.0263}
root:myGlobals:Attenuators:NGBatt3 = {0.135,0.105,0.0769,0.0629,0.0455,0.0342,0.0266,0.0212,0.0178,0.0117,0.007,0.00429}
root:myGlobals:Attenuators:NGBatt4 = {0.0689,0.0483,0.0324,0.0249,0.016,0.0109,0.00782,0.00583,0.0046,0.00267,0.00135,0.000752}
root:myGlobals:Attenuators:NGBatt5 = {0.0348,0.0224,0.0136,0.00979,0.0056,0.00347,0.0023,0.0016,0.0012,0.000617,0.000282,0.000155}
root:myGlobals:Attenuators:NGBatt6 = {0.018,0.0105,0.00586,0.00398,0.00205,0.00115,0.000709,0.000467,0.000335,0.000157,7.08e-05,4e-05}
root:myGlobals:Attenuators:NGBatt7 = {0.00466,0.00226,0.00104,0.000632,0.00026,0.000123,6.8e-05,4.26e-05,3e-05,1.5e-05,8e-06,4e-06}
root:myGlobals:Attenuators:NGBatt8 = {0.00121,0.000488,0.000187,0.000101,3.52e-05,1.61e-05,9.79e-06,7.68e-06,4.4e-06,2e-06,1e-06,5e-07}
root:myGlobals:Attenuators:NGBatt9 = {0.000312,0.000108,3.53e-05,1.78e-05,6.6e-06,4.25e-06,2e-06,1.2e-06,9e-07,4e-07,1.6e-07,9e-08}
root:myGlobals:Attenuators:NGBatt10 = {8.5e-05,2.61e-05,8.24e-06,4.47e-06,2.53e-06,9e-07,5e-07,3e-07,2e-07,1e-07,4e-08,2e-08}
// percent errors as measured, MAY 2013 values
// zero error for zero attenuators, large values put in for unknown values (either 2% or 5%)
root:myGlobals:Attenuators:NGBatt0_err = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
root:myGlobals:Attenuators:NGBatt1_err = {0.174,0.256,0.21,0.219,0.323,0.613,0.28,0.135,0.195,0.216,0.214,19.8}
root:myGlobals:Attenuators:NGBatt2_err = {0.261,0.458,0.388,0.419,0.354,0.668,0.321,0.206,0.302,0.305,0.315,31.1}
root:myGlobals:Attenuators:NGBatt3_err = {0.319,0.576,0.416,0.448,0.431,0.688,0.37,0.247,0.368,0.375,0.41,50.6}
root:myGlobals:Attenuators:NGBatt4_err = {0.41,0.611,0.479,0.515,0.461,0.715,0.404,0.277,0.416,0.436,0.576,111}
root:myGlobals:Attenuators:NGBatt5_err = {0.549,0.684,0.503,0.542,0.497,0.735,0.428,0.3,0.456,0.538,1.08,274}
root:myGlobals:Attenuators:NGBatt6_err = {0.61,0.712,0.528,0.571,0.52,0.749,0.446,0.333,0.515,0.836,2.28,5}
root:myGlobals:Attenuators:NGBatt7_err = {0.693,0.76,0.556,0.607,0.554,0.774,0.516,0.56,0.924,5,5,5}
root:myGlobals:Attenuators:NGBatt8_err = {0.771,0.813,0.59,0.657,0.612,0.867,0.892,1.3,5,5,5,5}
root:myGlobals:Attenuators:NGBatt9_err = {0.837,0.867,0.632,0.722,0.751,1.21,5,5,5,5,5,5}
root:myGlobals:Attenuators:NGBatt10_err = {0.892,0.921,0.715,0.845,1.09,5,5,5,5,5,5,5}
"""
ngb30_source_string = """\
// new calibration done June 2007, John Barker
root:myGlobals:Attenuators:ng3att0 = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1 }
root:myGlobals:Attenuators:ng3att1 = {0.444784,0.419,0.3935,0.3682,0.3492,0.3132,0.2936,0.2767,0.2477,0.22404}
root:myGlobals:Attenuators:ng3att2 = {0.207506,0.1848,0.1629,0.1447,0.1292,0.1056,0.09263,0.08171,0.06656,0.0546552}
root:myGlobals:Attenuators:ng3att3 = {0.092412,0.07746,0.06422,0.05379,0.04512,0.03321,0.02707,0.02237,0.01643,0.0121969}
root:myGlobals:Attenuators:ng3att4 = {0.0417722,0.03302,0.02567,0.02036,0.01604,0.01067,0.00812,0.006316,0.00419,0.00282411}
root:myGlobals:Attenuators:ng3att5 = {0.0187129,0.01397,0.01017,0.007591,0.005668,0.003377,0.002423,0.001771,0.001064,0.000651257}
root:myGlobals:Attenuators:ng3att6 = {0.00851048,0.005984,0.004104,0.002888,0.002029,0.001098,0.0007419,0.0005141,0.000272833,0.000150624}
root:myGlobals:Attenuators:ng3att7 = {0.00170757,0.001084,0.0006469,0.0004142,0.0002607,0.0001201,7.664e-05,4.06624e-05,1.77379e-05,7.30624e-06}
root:myGlobals:Attenuators:ng3att8 = {0.000320057,0.0001918,0.0001025,6.085e-05,3.681e-05,1.835e-05,6.74002e-06,3.25288e-06,1.15321e-06,3.98173e-07}
root:myGlobals:Attenuators:ng3att9 = {6.27682e-05,3.69e-05,1.908e-05,1.196e-05,8.738e-06,6.996e-06,6.2901e-07,2.60221e-07,7.49748e-08,2.08029e-08}
root:myGlobals:Attenuators:ng3att10 = {1.40323e-05,8.51e-06,5.161e-06,4.4e-06,4.273e-06,1.88799e-07,5.87021e-08,2.08169e-08,4.8744e-09,1.08687e-09}
// percent errors as measured, May 2007 values
// zero error for zero attenuators, appropriate average values put in for unknown values
root:myGlobals:Attenuators:ng3att0_err = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
root:myGlobals:Attenuators:ng3att1_err = {0.15,0.142,0.154,0.183,0.221,0.328,0.136,0.13,0.163,0.15}
root:myGlobals:Attenuators:ng3att2_err = {0.25,0.257,0.285,0.223,0.271,0.405,0.212,0.223,0.227,0.25}
root:myGlobals:Attenuators:ng3att3_err = {0.3,0.295,0.329,0.263,0.323,0.495,0.307,0.28,0.277,0.3}
root:myGlobals:Attenuators:ng3att4_err = {0.35,0.331,0.374,0.303,0.379,0.598,0.367,0.322,0.33,0.35}
root:myGlobals:Attenuators:ng3att5_err = {0.4,0.365,0.418,0.355,0.454,0.745,0.411,0.367,0.485,0.4}
root:myGlobals:Attenuators:ng3att6_err = {0.45,0.406,0.473,0.385,0.498,0.838,0.454,0.49,0.5,0.5}
root:myGlobals:Attenuators:ng3att7_err = {0.6,0.554,0.692,0.425,0.562,0.991,0.715,0.8,0.8,0.8}
root:myGlobals:Attenuators:ng3att8_err = {0.7,0.705,0.927,0.503,0.691,1.27,1,1,1,1}
root:myGlobals:Attenuators:ng3att9_err = {1,0.862,1.172,0.799,1.104,1.891,1.5,1.5,1.5,1.5}
root:myGlobals:Attenuators:ng3att10_err = {1.5,1.054,1.435,1.354,1.742,2,2,2,2,2}
"""
ng7_source_string = """\
// New calibration, June 2007, John Barker
root:myGlobals:Attenuators:ng7att0 = {1, 1, 1, 1, 1, 1, 1, 1 ,1,1}
root:myGlobals:Attenuators:ng7att1 = {0.448656,0.4192,0.3925,0.3661,0.3458,0.3098,0.2922,0.2738,0.2544,0.251352}
root:myGlobals:Attenuators:ng7att2 = {0.217193,0.1898,0.1682,0.148,0.1321,0.1076,0.0957,0.08485,0.07479,0.0735965}
root:myGlobals:Attenuators:ng7att3 = {0.098019,0.07877,0.06611,0.05429,0.04548,0.03318,0.02798,0.0234,0.02004,0.0202492}
root:myGlobals:Attenuators:ng7att4 = {0.0426904,0.03302,0.02617,0.02026,0.0158,0.01052,0.008327,0.006665,0.005745,0.00524807}
root:myGlobals:Attenuators:ng7att5 = {0.0194353,0.01398,0.01037,0.0075496,0.005542,0.003339,0.002505,0.001936,0.001765,0.00165959}
root:myGlobals:Attenuators:ng7att6 = {0.00971666,0.005979,0.004136,0.002848,0.001946,0.001079,0.0007717,0.000588,0.000487337,0.000447713}
root:myGlobals:Attenuators:ng7att7 = {0.00207332,0.001054,0.0006462,0.0003957,0.0002368,0.0001111,7.642e-05,4.83076e-05,3.99401e-05,3.54814e-05}
root:myGlobals:Attenuators:ng7att8 = {0.000397173,0.0001911,0.0001044,5.844e-05,3.236e-05,1.471e-05,6.88523e-06,4.06541e-06,3.27333e-06,2.81838e-06}
root:myGlobals:Attenuators:ng7att9 = {9.43625e-05,3.557e-05,1.833e-05,1.014e-05,6.153e-06,1.64816e-06,6.42353e-07,3.42132e-07,2.68269e-07,2.2182e-07}
root:myGlobals:Attenuators:ng7att10 = {2.1607e-05,7.521e-06,2.91221e-06,1.45252e-06,7.93451e-07,1.92309e-07,5.99279e-08,2.87928e-08,2.19862e-08,1.7559e-08}
// percent errors as measured, May 2007 values
// zero error for zero attenuators, appropriate average values put in for unknown values
root:myGlobals:Attenuators:ng7att0_err = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
root:myGlobals:Attenuators:ng7att1_err = {0.2,0.169,0.1932,0.253,0.298,0.4871,0.238,0.245,0.332,0.3}
root:myGlobals:Attenuators:ng7att2_err = {0.3,0.305,0.3551,0.306,0.37,0.6113,0.368,0.413,0.45,0.4}
root:myGlobals:Attenuators:ng7att3_err = {0.4,0.355,0.4158,0.36,0.4461,0.7643,0.532,0.514,0.535,0.5}
root:myGlobals:Attenuators:ng7att4_err = {0.45,0.402,0.4767,0.415,0.5292,0.9304,0.635,0.588,0.623,0.6}
root:myGlobals:Attenuators:ng7att5_err = {0.5,0.447,0.5376,0.487,0.6391,1.169,0.708,0.665,0.851,0.8}
root:myGlobals:Attenuators:ng7att6_err = {0.6,0.501,0.6136,0.528,0.8796,1.708,0.782,0.874,1,1}
root:myGlobals:Attenuators:ng7att7_err = {0.8,0.697,0.9149,0.583,1.173,2.427,1.242,2,2,2}
root:myGlobals:Attenuators:ng7att8_err = {1,0.898,1.24,0.696,1.577,3.412,3,3,3,3}
root:myGlobals:Attenuators:ng7att9_err = {1.5,1.113,1.599,1.154,2.324,4.721,5,5,5,5}
root:myGlobals:Attenuators:ng7att10_err = {1.5,1.493,5,5,5,5,5,5,5,5}
"""
def compile_source(source, db):
source = source.replace("ng3", "ngb30")
preface = re.compile(r"[ \t]*root:myGlobals:Attenuators:(\S*)att([0-9]+)\s*=\s*\{(.*)\}")
err_preface = re.compile(r"[ \t]*root:myGlobals:Attenuators:(\S*)att([0-9]+)_err\s*=\s*\{(.*)\}")
for att in preface.findall(source):
instr = att[0].upper()
atten = att[1]
db.setdefault(instr, {})
db[instr].setdefault("att", {})
db[instr]["att"][atten] = array(json.loads('[' + att[2] + ']'), dtype="float")
for att in err_preface.findall(source):
instr = att[0].upper()
atten = att[1]
db.setdefault(instr, {})
db[instr].setdefault("att_err", {})
db[instr]["att_err"][atten] = array(json.loads('[' + att[2] + ']'), dtype="float")
compile_source(ngb30_source_string, attenuation)
attenuation['NGB30']['lambda'] = array([4, 5, 6, 7, 8, 10, 12, 14, 17, 20], dtype="float")
compile_source(ng7_source_string, attenuation)
attenuation['NG7']['lambda'] = array([4, 5, 6, 7, 8, 10, 12, 14, 17, 20], dtype="float")
compile_source(ngb_source_string, attenuation)
attenuation['NGB']['lambda'] = array([3, 4, 5, 6, 8, 10, 12, 14, 16, 20, 25, 30], dtype="float")
def make_csv_files():
import numpy as np
import csv
for instr in attenuation:
att = attenuation[instr]
for key, label in [['att', '_attenuator'], ['att_err', '_attenuator_error']]:
cols = [att['lambda']]
header_items = ["lambda"]
index = 0
rowdict = att[key]
while str(index) in rowdict:
cols.append(rowdict[str(index)])
header_items.append("att%d" % (index,))
index += 1
a = np.vstack(cols).T
with open(instr + label + ".csv", "wt") as fout:
fout.write(",".join(header_items) + "\r\n")
writer = csv.writer(fout)
writer.writerows(a.tolist())
| 66.72549 | 158 | 0.699089 | 2,064 | 10,209 | 3.426357 | 0.319283 | 0.125 | 0.230769 | 0.013575 | 0.208286 | 0.175198 | 0.15724 | 0.127404 | 0.118071 | 0.118071 | 0 | 0.351598 | 0.086492 | 10,209 | 153 | 159 | 66.72549 | 0.406712 | 0.025957 | 0 | 0.103175 | 0 | 0.547619 | 0.829408 | 0.703502 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015873 | false | 0 | 0.039683 | 0 | 0.055556 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
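A brief usage sketch for the calibration table built in the file above; this is an editorial illustration with placeholder instrument, attenuator and wavelength choices, not part of the dataset row. It only relies on the documented structure attenuation[instrument]['att'|'att_err'|'lambda'].
# Assumes the attenuation dict defined above is in scope (e.g. from sansred.attenuation_constants import attenuation).
from numpy import interp

instr = 'NG7'        # one of 'NGB30', 'NG7', 'NGB' (placeholder choice)
atten = '3'          # attenuator number, stored as a string key (placeholder choice)
wavelength = 6.0     # neutron wavelength in Angstroms (placeholder choice)

lam = attenuation[instr]['lambda']
trans = attenuation[instr]['att'][atten]
trans_err_pct = attenuation[instr]['att_err'][atten]

# interpolate the transmission factor (and its percent error) at the requested wavelength
factor = interp(wavelength, lam, trans)
factor_err_pct = interp(wavelength, lam, trans_err_pct)
print(instr, atten, wavelength, factor, factor_err_pct)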
65e56831fa8bf75ebb671948e09f0f2814a9c3c9 | 151 | py | Python | tests/conftest.py | cybojenix/python-sforce | eb2ab1a7030797c1dcb1449f663bcd6abfefb213 | [
"MIT"
] | null | null | null | tests/conftest.py | cybojenix/python-sforce | eb2ab1a7030797c1dcb1449f663bcd6abfefb213 | [
"MIT"
] | null | null | null | tests/conftest.py | cybojenix/python-sforce | eb2ab1a7030797c1dcb1449f663bcd6abfefb213 | [
"MIT"
] | null | null | null | import pytest
from faker import Factory as FakerFactory
@pytest.fixture(scope="session")
def faker():
return FakerFactory.create(locale="en_GB")
| 18.875 | 46 | 0.768212 | 20 | 151 | 5.75 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125828 | 151 | 7 | 47 | 21.571429 | 0.871212 | 0 | 0 | 0 | 0 | 0 | 0.07947 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
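A minimal example of a test that could consume the session-scoped fixture above; the test name and assertion are illustrative only.
def test_faker_generates_a_name(faker):
    # 'faker' is the en_GB Faker instance created by the fixture in conftest.py
    name = faker.name()
    assert isinstance(name, str) and len(name) > 0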
029c42bf7d1a47a3b84023e76eff1047c2f214da | 4,980 | py | Python | nalu-plots/master_plot.py | ashishrana160796/calcnet-experiments | 6c7d30393f3f790b768335ec5f1559087c16ba75 | [
"MIT"
] | null | null | null | nalu-plots/master_plot.py | ashishrana160796/calcnet-experiments | 6c7d30393f3f790b768335ec5f1559087c16ba75 | [
"MIT"
] | null | null | null | nalu-plots/master_plot.py | ashishrana160796/calcnet-experiments | 6c7d30393f3f790b768335ec5f1559087c16ba75 | [
"MIT"
] | null | null | null | from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np
# defining figure size and dpi for image quality.
fig = plt.figure(figsize=(8, 6), dpi=108)
#================
# First subplot.
#================
# Set up the axes for the first plot.
ax = fig.add_subplot(2, 2, 1, projection='3d')
phi = 1.61803399
# Two functions for multiplication and creating transformation matrix W.
sigmoid = lambda y: 1 / (1 + pow(phi, -y))
tanh = lambda x: ((pow(phi, x) - pow(phi, -x))/(pow(phi, x)+pow(phi, -x)))
# Generate data for plotting.
X = np.arange(-10, 10, 0.25)
Y = np.arange(-10, 10, 0.25)
X, Y = np.meshgrid(X, Y)
# Final function for Transformation Matrix W.
Z = tanh(X)*sigmoid(Y)
# Title labelling, axis labelling and axis rotation.
ax.set_xlabel('tanh(x)', fontsize=14)
ax.set_ylabel('sigmoid(y)', fontsize=14)
ax.set_zlabel('W', fontsize=14)
ax.view_init(30,-130)
ax.set_title('G-NAC')
# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
# Customize the z axis.
ax.set_zlim(-2.0, 2.0)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
#=================
# Second subplot.
#=================
# Set up the axes for the second plot.
ax = fig.add_subplot(2, 2, 2, projection='3d')
e = 2.71828
# Two functions for multiplication and creating transformation matrix W.
sigmoid = lambda y: 1 / (1 + pow(e, -y))
tanh = lambda x: ((pow(e, x) - pow(e, -x))/(pow(e, x)+pow(e, -x)))
# Generate data for plotting.
X = np.arange(-10, 10, 0.25)
Y = np.arange(-10, 10, 0.25)
X, Y = np.meshgrid(X, Y)
# Final function for Transformation Matrix W.
Z = tanh(X)*sigmoid(Y)
# Title labelling, axis labelling and axis rotation.
ax.set_xlabel('tanh(x)', fontsize=14)
ax.set_ylabel('sigmoid(y)', fontsize=14)
ax.set_zlabel('W', fontsize=14)
ax.view_init(37,-135)
ax.set_title('NAC')
# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
# Customize the z axis.
ax.set_zlim(-2.0, 2.0)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
ax.view_init(37,-135)
#===============
# Third subplot
#===============
# Set up the axes for the third plot.
ax = fig.add_subplot(2, 2, 3, projection='3d')
e = 1.618
# Sub-function declaration for NALU units.
sigmoid = lambda y: 1 / (1 + pow(phi, -y))
tanh = lambda x: ((pow(phi, x)-pow(phi, -x))/(pow(phi, x)+pow(phi, -x)))
# Learned gate declaration.
g_x = lambda x: 1 / (1 + pow(phi, -x))
# Small epsilon value for truly defined exponential space.
eps = 10e-2
# log-exponential space defined.
m_y = lambda y: pow(phi, np.log(np.abs(y)+eps))
# Generate data for plotting.
X = np.arange(-10, 10, 0.25)
Y = np.arange(-10, 10, 0.25)
X, Y = np.meshgrid(X, Y)
# Final function for Transformation Matrix W.
W_nac = tanh(X)*sigmoid(Y)
# Final function for G-NALU.
Z = ( (g_x(Y)*W_nac) + ((1-g_x(Y))*m_y(Y)) )
# Title labelling, axis labelling and axis rotation.
ax.set_xlabel('tanh(x)', fontsize=12)
ax.set_ylabel('sig-M/G(y), exp(log(|y|+eps))', fontsize=12)
ax.set_zlabel('Y', fontsize=14)
ax.set_title('G-NALU')
ax.view_init(32,145)
# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
# Customize the z axis.
ax.set_zlim(-2.0, 2.0)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
#===============
# Fourth subplot
#===============
# Set up the axes for the fourth plot.
ax = fig.add_subplot(2, 2, 4, projection='3d')
e = 2.732
# Sub-function declaration for NALU units.
sigmoid = lambda y: 1 / (1 + np.exp(-y))
tanh = lambda x: ((np.exp(x)-np.exp(-x))/(np.exp(x)+np.exp(-x)))
# Learned gate declaration.
g_x = lambda x: 1 / (1 + np.exp(-x))
# Small epsilon value for truly defined exponential space.
eps = 10e-2
# log-exponential space defined.
m_y = lambda y: np.exp(np.log(np.abs(y)+eps))
# Generate data for plotting.
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
# Final function for Transformation Matrix W.
W_nac = tanh(X)*sigmoid(Y)
# Final function for NALU.
Z = ( (g_x(Y)*W_nac) + ((1-g_x(Y))*m_y(Y)) )
# Title labelling, axis labelling and axis rotation.
ax.set_xlabel('tanh(x)', fontsize=12)
ax.set_ylabel('sig-M/G(y), exp(log(|y|+eps))', fontsize=12)
ax.set_zlabel('Y', fontsize=14)
ax.set_title('NALU')
ax.view_init(32,145)
# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
# Customize the z axis.
ax.set_zlim(-2.0, 2.0)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
####################
# final plot display
####################
plt.show()
| 26.631016 | 74 | 0.654217 | 842 | 4,980 | 3.793349 | 0.159145 | 0.031309 | 0.019724 | 0.020038 | 0.881966 | 0.864746 | 0.86005 | 0.86005 | 0.86005 | 0.86005 | 0 | 0.047473 | 0.149799 | 4,980 | 186 | 75 | 26.774194 | 0.706897 | 0.31245 | 0 | 0.693182 | 0 | 0 | 0.046805 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.056818 | 0 | 0.056818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
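The four surfaces above visualise the (G-)NAC weight construction W = tanh(W_hat) * sigmoid(M_hat). The short numpy sketch below shows how such a weight behaves in a forward pass; it is an editorial illustration with made-up parameter values, not code from the repository above.
import numpy as np

def nac_forward(x, w_hat, m_hat):
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    w = np.tanh(w_hat) * sigmoid(m_hat)   # the surface plotted in the 'NAC' panel; entries are pushed toward {-1, 0, 1}
    return x @ w                          # an ordinary linear layer using the constrained weights

x = np.array([[2.0, 3.0]])
w_hat = np.array([[5.0], [5.0]])          # large values saturate tanh and sigmoid near 1
m_hat = np.array([[5.0], [5.0]])
print(nac_forward(x, w_hat, m_hat))       # close to [[5.0]], i.e. the layer computes 2 + 3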
02a67d7f54eacce0e525ab9f29667297afc05071 | 36 | py | Python | pymx/codegen/__init__.py | cla7aye15I4nd/Pymx | a294e196ad0510171ede15e947a89539485a28f3 | [
"MIT"
] | 3 | 2020-03-29T23:58:28.000Z | 2020-05-29T12:27:54.000Z | pymx/codegen/__init__.py | cla7aye15I4nd/Pymx | a294e196ad0510171ede15e947a89539485a28f3 | [
"MIT"
] | null | null | null | pymx/codegen/__init__.py | cla7aye15I4nd/Pymx | a294e196ad0510171ede15e947a89539485a28f3 | [
"MIT"
] | null | null | null | """ codegen """
from . import riscv | 12 | 19 | 0.611111 | 4 | 36 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194444 | 36 | 3 | 19 | 12 | 0.758621 | 0.194444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
02d02fc07d8c5aaa3e4ab21511d89951373bc0e9 | 149 | py | Python | copper/managers/__init__.py | cinepost/Copperfield_FX | 1900b506d0a407a3fb5774ab129b984a547ee0b5 | [
"Unlicense"
] | 6 | 2016-07-28T13:59:34.000Z | 2021-12-28T05:44:15.000Z | copper/managers/__init__.py | cinepost/Copperfield_FX | 1900b506d0a407a3fb5774ab129b984a547ee0b5 | [
"Unlicense"
] | 5 | 2016-06-30T10:19:25.000Z | 2022-03-11T23:19:01.000Z | copper/managers/__init__.py | cinepost/Copperfield_FX | 1900b506d0a407a3fb5774ab129b984a547ee0b5 | [
"Unlicense"
] | 3 | 2019-03-18T05:17:10.000Z | 2020-02-14T06:56:40.000Z | from .root_network import ROOT_Network
from .rop_network import ROP_Network
from .obj_network import OBJ_Network
from .cop_network import COP_Network | 37.25 | 38 | 0.872483 | 24 | 149 | 5.083333 | 0.291667 | 0.42623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100671 | 149 | 4 | 39 | 37.25 | 0.910448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
02de67bc9ca941823d9089fc082d1e47ff4f55a5 | 1,152 | py | Python | source/functions/PortScanner.py | GucciHsuan/CampusCyberInspectionTool2021 | 86636f777192e492f4342519e30a975a6a58b8ab | [
"MIT"
] | null | null | null | source/functions/PortScanner.py | GucciHsuan/CampusCyberInspectionTool2021 | 86636f777192e492f4342519e30a975a6a58b8ab | [
"MIT"
] | null | null | null | source/functions/PortScanner.py | GucciHsuan/CampusCyberInspectionTool2021 | 86636f777192e492f4342519e30a975a6a58b8ab | [
"MIT"
] | null | null | null | import socket
class Scanport:
    @staticmethod
    def portscannerTCP():
        ip = input('\033[0;31;42mInput IP which you want to know open ports:\n\033[0m').strip() # get the target host's ip-address
        for port in range(1, 65536): # check all TCP ports
            serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # create a new socket
            serv.settimeout(0.5) # do not hang on filtered ports
            try:
                if serv.connect_ex((ip, port)) == 0: # 0 means the connection succeeded, so the port is open
                    print('\033[32m[OPEN]\033[0m Port open :', port) # print open port number
            finally:
                serv.close() # close connection

    @staticmethod
    def portscannerUDP():
        ip = input('\033[0;31;42mInput IP which you want to know open ports:\n\033[0m').strip() # get the target host's ip-address
        for port in range(1, 65536): # best-effort check: UDP is connectionless, so results are approximate
            serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # create a new socket
            serv.settimeout(0.5)
            try:
                serv.connect((ip, port)) # connecting the UDP socket lets ICMP errors surface on recv
                serv.send(b'') # send an empty probe datagram
                serv.recv(1024) # any reply means the port is open
                print('\033[32m[OPEN]\033[0m Port open :', port) # print open port number
            except socket.timeout:
                pass # no reply: open or filtered, cannot be distinguished without raw sockets
            except OSError:
                pass # ICMP port unreachable was returned: port is closed
            finally:
                serv.close() # close connection
#Scanport.portscanner() | 54.857143 | 124 | 0.601563 | 156 | 1,152 | 4.416667 | 0.346154 | 0.029028 | 0.029028 | 0.03193 | 0.870827 | 0.870827 | 0.870827 | 0.870827 | 0.870827 | 0.870827 | 0 | 0.063725 | 0.291667 | 1,152 | 21 | 125 | 54.857143 | 0.780637 | 0.262153 | 0 | 0.7 | 0 | 0.1 | 0.23389 | 0.050119 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.05 | 0 | 0.2 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
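A single-port helper using the same connect_ex idea as the class above; the host, port and timeout values are placeholders chosen for illustration.
import socket

def is_tcp_port_open(host, port, timeout=0.5):
    # returns True when the TCP handshake to (host, port) succeeds within the timeout
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

print(is_tcp_port_open('127.0.0.1', 80))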
f304fc5af95c4990e344ac4100e59718559efd40 | 23,083 | py | Python | semanticVisitor.py | lamhacker/VPL-Compiler | c54850f95342504e962b990c5cac17679e7069a9 | [
"Apache-2.0"
] | 2 | 2020-01-28T12:41:09.000Z | 2020-04-25T13:31:43.000Z | semanticVisitor.py | lamhacker/VPL-Compiler | c54850f95342504e962b990c5cac17679e7069a9 | [
"Apache-2.0"
] | null | null | null | semanticVisitor.py | lamhacker/VPL-Compiler | c54850f95342504e962b990c5cac17679e7069a9 | [
"Apache-2.0"
] | null | null | null | # Generated from /Users/franklan123/Desktop/VPL_ANTLR/VPL/VPL.g by ANTLR 4.7
from antlr4 import *
from VPLVisitor import VPLVisitor
if __name__ is not None and "." in __name__:
from .VPLParser import VPLParser
else:
from VPLParser import VPLParser
import sys
'''
Semantic Checker
This class checks 6 kinds of errors
(1): Error 1, function redefined
    Description: two functions have the same name and the same parameters.
Example: func mymin(a,b,c)end func mymin(a,b,c)end
(2): Error 2, parameter redefined
    Description: a function declares the same parameter name more than once
Example: func mymin(a,a)end
(3): Error 3, variable redefined
    Description: a function declares the same variable name more than once
Example: func mymin(a,b) var c,c;end
(4): Error 4, variable and function argument have the same name
Description: A variable and a function argument with the same name
Example: func mymin(a,b) var a;end
(5): Error 5, variable undefined
Description: variable undefined
Example: func mymin(a,b) var a; d = 2 end
(6): Error 6, assign to a variable that does not have value
Description: assign to a variable that does not have value
Example: func mymin(a,b) var c; a = c end
'''
parameterList = []
variableList = []
allIdentList = []
functionNameList = []
#newList = []
#eList = []
# This class defines a complete generic visitor for a parse tree produced by VPLParser.
class semanticVisitor(ParseTreeVisitor):
# Visit a parse tree produced by VPLParser#program.
def visitProgram(self, ctx:VPLParser.ProgramContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#function_declaration.
def visitFunction_declaration(self, ctx:VPLParser.Function_declarationContext):
functionNameList.append(ctx.IDENT().getText())
# clear allidentlist
global allIdentList
allIdentList = []
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#parameter.
def visitParameter(self, ctx:VPLParser.ParameterContext):
if ctx.name() == None:
return self.visitChildren(ctx)
parameter = ctx.name().getText().split(',')
# check same functions name
if len(functionNameList) != len(set(functionNameList)):
for i in range(len(parameterList)):
if len(parameterList[i]) == len(parameter):
count = 0
for j in range(len(parameter)):
if parameter[j] in parameterList[i]:
count = count + 1
if count == len(parameterList[i]):
print("Error 1, function redefined")
sys.exit()
parameterList.append(parameter)
if len(parameter) != len(set(parameter)):
print("Error 2, parameter redefined")
sys.exit()
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#parameterName.
def visitParameterName(self, ctx:VPLParser.ParameterNameContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#multParameterName.
def visitMultParameterName(self, ctx:VPLParser.MultParameterNameContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#variable_declaration.
def visitVariable_declaration(self, ctx:VPLParser.Variable_declarationContext):
if ctx.name() == None:
return self.visitChildren(ctx)
variableName = ctx.name().getText().split(',')
variableList.append(variableName)
if len(variableName) != len(set(variableName)):
print("Error 3, variable redefined")
sys.exit()
targetParameter = parameterList[-1]
targetVariable = variableList[-1]
targetParameterLength = len(targetParameter)
targetVariableLength = len(targetVariable)
for i in range(targetParameterLength):
for j in range(targetVariableLength):
if targetVariable[j] == targetParameter[i]:
print("Error 4, variable and function argument have the same name")
sys.exit()
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#condition.
def visitCondition(self, ctx:VPLParser.ConditionContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#whileloop.
def visitWhileloop(self, ctx:VPLParser.WhileloopContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#assign.
def visitAssign(self, ctx:VPLParser.AssignContext):
# get all the parameters in this scope
targetParameter = []
targetParameter = parameterList[-1]
    # get all the variables in this scope
targetVariable = []
if len(variableList) != 0:
targetVariable = variableList[-1]
# merge all the parameters and variables
canUseList = targetParameter + targetVariable
# get all idents which are assigned
allIdentList.append(ctx.IDENT().getText())
# check whether left hand side is defined
temp = 0
if ctx.IDENT().getText() in targetParameter:
temp = temp + 1
if ctx.IDENT().getText() in targetVariable:
temp = temp + 1
if temp == 0:
print("Error 5, variable undefined")
sys.exit()
# check right hand side
# a list contain elements that can generate further
newList = []
# a list contain elements that cannot generate further
eList = []
# get all assign IDENT
assignedIdentList = []
if len(allIdentList) > 1:
for i in range(len(allIdentList) - 1):
assignedIdentList.append(allIdentList[i])
rightHandSide = ctx.expression()
while str(type(rightHandSide)) == "<class 'VPLParser.VPLParser.ParenthesisExpressionContext'>":
rightHandSide = rightHandSide.expression()
# check whether e is a number
if str(type(rightHandSide)) == "<class 'VPLParser.VPLParser.NumExpressionContext'>":
return self.visitChildren(ctx)
# check whether e is an ident, if it is, check whether it is defined,and have value
if str(type(rightHandSide)) == "<class 'VPLParser.VPLParser.IdenetExpressionContext'>":
# check define
if rightHandSide.getText() not in canUseList:
print("Error 5, variable undefined")
sys.exit()
# handle first ident
if len(assignedIdentList) == 0:
if(len(variableList) != 0):
# if first ident does not in the variable list, throw error
if rightHandSide.getText() in variableList[-1]:
print("Error 6, assign to a variable that does not have value")
sys.exit()
# handle the ident after the first, if ident does not belong to parameters and assignIdent,throw error
if len(assignedIdentList) != 0 and rightHandSide.getText() not in assignedIdentList and rightHandSide.getText() not in targetParameter:
print("Error 6, assign to a variable that does not have value")
sys.exit()
return self.visitChildren(ctx)
else:
newList.append(ctx.expression())
i = 0
#check whether E generate operation(+,-,*,/,min) or (E) or ident/num
while i < len(newList):
rhs = newList[i].expression()
# if E generate operation
if type(rhs) is list:
while str(type(rhs[0])) == "<class 'VPLParser.VPLParser.ParenthesisExpressionContext'>":
rhs[0] = rhs[0].expression()
while str(type(rhs[1])) == "<class 'VPLParser.VPLParser.ParenthesisExpressionContext'>":
rhs[1] = rhs[1].expression()
# check whether left hand side is Ident, if it is a Ident, then put into the atom(cannot generate further)list
if str(type(rhs[0])) == "<class 'VPLParser.VPLParser.IdenetExpressionContext'>":
eList.append(rhs[0])
# check whether left hand side is number, if it is a number, then ignore it
elif str(type(rhs[0])) == "<class 'VPLParser.VPLParser.NumExpressionContext'>":
pass
# if it is not a Ident or number, then it can generate further
else:
newList.append(rhs[0])
# check whether right hand side is Ident, if it is a Ident, then put into the atom(cannot generate further)list
if str(type(rhs[1])) == "<class 'VPLParser.VPLParser.IdenetExpressionContext'>":
eList.append(rhs[1])
# check whether left hand side is number, if it is a number, then ignore it
elif str(type(rhs[1])) == "<class 'VPLParser.VPLParser.NumExpressionContext'>":
pass
# if it is not a Ident or number, then it can generate further
else:
newList.append(rhs[1])
# if E generate non operation
else:
newList.append(rhs)
i = i + 1
# check whether all the Ident is defined
for i in range(len(eList)):
if eList[i].getText() not in canUseList:
print("Error 5, variable undefined")
sys.exit()
# handle first ident
if len(assignedIdentList) == 0:
if(len(variableList) != 0):
for i in range (len(eList)):
if eList[i].getText() in targetVariable:
print("Error 6, assign to a variable that does not have value")
sys.exit()
# handle the rest ident
if len(assignedIdentList) != 0:
for i in range (len(eList)):
if eList[i].getText() not in assignedIdentList and eList[i].getText() not in targetParameter:
print("Error 6, assign to a variable that does not have value")
sys.exit()
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#noneStatement.
def visitNoneStatement(self, ctx:VPLParser.NoneStatementContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#nest_statement.
def visitNest_statement(self, ctx:VPLParser.Nest_statementContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#addExpression.
def visitAddExpression(self, ctx:VPLParser.AddExpressionContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#minusExpression.
def visitMinusExpression(self, ctx:VPLParser.MinusExpressionContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#multExpression.
def visitMultExpression(self, ctx:VPLParser.MultExpressionContext):
return self.visitChildren(ctx)
  # Visit a parse tree produced by VPLParser#divExpression.
  def visitDivExpression(self, ctx:VPLParser.DivExpressionContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#minExpression.
def visitMinExpression(self, ctx:VPLParser.MinExpressionContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#parenthesisExpression.
def visitParenthesisExpression(self, ctx:VPLParser.ParenthesisExpressionContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#idenetExpression.
def visitIdenetExpression(self, ctx:VPLParser.IdenetExpressionContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#numExpression.
def visitNumExpression(self, ctx:VPLParser.NumExpressionContext):
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#lessThan.
def visitLessThan(self, ctx:VPLParser.LessThanContext):
# get all the parameters in this scope
targetParameter = parameterList[-1]
    # get all the variables in this scope
targetVariable = []
if len(variableList) != 0:
targetVariable = variableList[-1]
# merge all the parameters and variables
canUseList = targetParameter + targetVariable
# get all assign IDENT
assignedIdentList = []
if len(allIdentList) > 0:
for i in range(len(allIdentList)):
assignedIdentList.append(allIdentList[i])
# a list contain elements that can generate further
newList = []
# a list contain elements that cannot generate further
eList = []
leftHandSide = ctx.expression()
while str(type(leftHandSide)) == "<class 'VPLParser.VPLParser.ParenthesisExpressionContext'>":
leftHandSide = leftHandSide.expression()
# check whether e is a number
if str(type(leftHandSide)) == "<class 'VPLParser.VPLParser.NumExpressionContext'>":
return self.visitChildren(ctx)
# check whether e is an ident, if it is, check whether it is defined,and have value
if str(type(leftHandSide)) == "<class 'VPLParser.VPLParser.IdenetExpressionContext'>":
# check define
if leftHandSide.getText() not in canUseList:
print("Error 5, variable undefined")
sys.exit()
# handle first ident
if len(assignedIdentList) == 0:
if(len(variableList) != 0):
# if first ident does not in the variable list, throw error
if leftHandSide.getText() in variableList[-1]:
print("Error 6, assign to a variable that does not have value")
sys.exit()
# handle the ident after the first, if ident does not belong to parameters and assignIdent,throw error
if len(assignedIdentList) != 0 and leftHandSide.getText() not in assignedIdentList and leftHandSide.getText() not in targetParameter:
print("Error 6, assign to a variable that does not have value")
sys.exit()
return self.visitChildren(ctx)
else:
newList.append(leftHandSide)
i = 0
#check whether E generate operation(+,-,*,/,min) or (E) or ident/num
while i < len(newList):
# if E generate operation
lhs = newList[i].expression()
if type(lhs) is list:
while str(type(lhs[0])) == "<class 'VPLParser.VPLParser.ParenthesisExpressionContext'>":
lhs[0] = lhs[0].expression()
while str(type(lhs[1])) == "<class 'VPLParser.VPLParser.ParenthesisExpressionContext'>":
lhs[1] = lhs[1].expression()
# check whether left hand side is Ident, if it is a Ident, then put into the atom(cannot generate further)list
if str(type(lhs[0])) == "<class 'VPLParser.VPLParser.IdenetExpressionContext'>":
eList.append(lhs[0])
# check whether left hand side is number, if it is a number, then ignore it
elif str(type(lhs[0])) == "<class 'VPLParser.VPLParser.NumExpressionContext'>":
pass
# if it is not a Ident or number, then it can generate further
else:
newList.append(lhs[0])
# check whether right hand side is Ident, if it is a Ident, then put into the atom(cannot generate further)list
if str(type(lhs[1])) == "<class 'VPLParser.VPLParser.IdenetExpressionContext'>":
eList.append(lhs[1])
# check whether left hand side is number, if it is a number, then ignore it
elif str(type(lhs[1])) == "<class 'VPLParser.VPLParser.NumExpressionContext'>":
pass
# if it is not a Ident or number, then it can generate further
else:
newList.append(lhs[1])
# if E generate non operation
else:
newList.append(lhs)
i = i + 1
# check whether all the Ident is defined
for i in range(len(eList)):
if eList[i].getText() not in canUseList:
print("Error 5, variable undefined")
sys.exit()
# handle first ident
if len(assignedIdentList) == 0:
if(len(variableList) != 0):
for i in range (len(eList)):
if eList[i].getText() in targetVariable:
print("Error 6, assign to a variable that does not have value")
sys.exit()
# handle the rest ident
if len(assignedIdentList) != 0:
for i in range (len(eList)):
if eList[i].getText() not in assignedIdentList and eList[i].getText() not in targetParameter:
print("Error 6, assign to a variable that does not have value")
sys.exit()
return self.visitChildren(ctx)
# Visit a parse tree produced by VPLParser#largeThan.
def visitLargeThan(self, ctx:VPLParser.LargeThanContext):
# get all the parameters in this scope
targetParameter = parameterList[-1]
    # get all the variables in this scope
targetVariable = []
if len(variableList) != 0:
targetVariable = variableList[-1]
# merge all the parameters and variables
canUseList = targetParameter + targetVariable
# get all assign IDENT
assignedIdentList = []
if len(allIdentList) > 0:
for i in range(len(allIdentList)):
assignedIdentList.append(allIdentList[i])
# a list contain elements that can generate further
newList = []
# a list contain elements that cannot generate further
eList = []
leftHandSide = ctx.expression()
while str(type(leftHandSide)) == "<class 'VPLParser.VPLParser.ParenthesisExpressionContext'>":
leftHandSide = leftHandSide.expression()
# check whether e is a number
if str(type(leftHandSide)) == "<class 'VPLParser.VPLParser.NumExpressionContext'>":
return self.visitChildren(ctx)
# check whether e is an ident, if it is, check whether it is defined,and have value
if str(type(leftHandSide)) == "<class 'VPLParser.VPLParser.IdenetExpressionContext'>":
# check define
if leftHandSide.getText() not in canUseList:
print("Error 5, variable undefined")
sys.exit()
# handle first ident
if len(assignedIdentList) == 0:
if(len(variableList) != 0):
# if first ident does not in the variable list, throw error
if leftHandSide.getText() in variableList[-1]:
print("Error 6, assign to a variable that does not have value")
sys.exit()
# handle the ident after the first, if ident does not belong to parameters and assignIdent,throw error
if len(assignedIdentList) != 0 and leftHandSide.getText() not in assignedIdentList and leftHandSide.getText() not in targetParameter:
print("Error 6, assign to a variable that does not have value")
sys.exit()
return self.visitChildren(ctx)
else:
newList.append(leftHandSide)
i = 0
#check whether E generate operation(+,-,*,/,min) or (E) or ident/num
while i < len(newList):
# if E generate operation
lhs = newList[i].expression()
if type(lhs) is list:
while str(type(lhs[0])) == "<class 'VPLParser.VPLParser.ParenthesisExpressionContext'>":
lhs[0] = lhs[0].expression()
while str(type(lhs[1])) == "<class 'VPLParser.VPLParser.ParenthesisExpressionContext'>":
lhs[1] = lhs[1].expression()
# check whether left hand side is Ident, if it is a Ident, then put into the atom(cannot generate further)list
if str(type(lhs[0])) == "<class 'VPLParser.VPLParser.IdenetExpressionContext'>":
eList.append(lhs[0])
# check whether left hand side is number, if it is a number, then ignore it
elif str(type(lhs[0])) == "<class 'VPLParser.VPLParser.NumExpressionContext'>":
pass
# if it is not a Ident or number, then it can generate further
else:
newList.append(lhs[0])
# check whether right hand side is Ident, if it is a Ident, then put into the atom(cannot generate further)list
if str(type(lhs[1])) == "<class 'VPLParser.VPLParser.IdenetExpressionContext'>":
eList.append(lhs[1])
# check whether left hand side is number, if it is a number, then ignore it
elif str(type(lhs[1])) == "<class 'VPLParser.VPLParser.NumExpressionContext'>":
pass
# if it is not a Ident or number, then it can generate further
else:
newList.append(lhs[1])
# if E generate non operation
else:
newList.append(lhs)
i = i + 1
# check whether all the Ident is defined
for i in range(len(eList)):
if eList[i].getText() not in canUseList:
print("Error 5, variable undefined")
sys.exit()
# handle first ident
if len(assignedIdentList) == 0:
if(len(variableList) != 0):
for i in range (len(eList)):
if eList[i].getText() in targetVariable:
print("Error 6, assign to a variable that does not have value")
sys.exit()
# handle the rest ident
if len(assignedIdentList) != 0:
for i in range (len(eList)):
if eList[i].getText() not in assignedIdentList and eList[i].getText() not in targetParameter:
print("Error 6, assign to a variable that does not have value")
sys.exit()
return self.visitChildren(ctx)
del VPLParser
| 43.47081 | 147 | 0.592817 | 2,537 | 23,083 | 5.386677 | 0.091841 | 0.021221 | 0.048807 | 0.055173 | 0.76211 | 0.745719 | 0.732328 | 0.700132 | 0.687399 | 0.673935 | 0 | 0.008495 | 0.321709 | 23,083 | 530 | 148 | 43.55283 | 0.864342 | 0.223195 | 0 | 0.711039 | 1 | 0 | 0.144877 | 0.075152 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068182 | false | 0.019481 | 0.016234 | 0.048701 | 0.181818 | 0.074675 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
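A sketch of how a checker like the one above is typically driven with the ANTLR Python runtime; VPLLexer and the program start rule are assumptions inferred from the generated VPLParser/ProgramContext names, and the sample source string is illustrative only.
from antlr4 import InputStream, CommonTokenStream
from VPLLexer import VPLLexer          # assumed name of the lexer generated from VPL.g
from VPLParser import VPLParser
from semanticVisitor import semanticVisitor

source = "func mymin(a,b) var c; c = a end"   # illustrative VPL snippet
lexer = VPLLexer(InputStream(source))
parser = VPLParser(CommonTokenStream(lexer))
tree = parser.program()               # 'program' start rule assumed from ProgramContext
semanticVisitor().visit(tree)         # prints an error message and exits on the first violation found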
f31b3cb5488149633dce6d935479b505c9386962 | 17,186 | py | Python | s2v_key_case_and_sense_variations_test.py | joshweir/sense2vec-rest | b11ee59661b63146afc2acf1283486c0ab21b15d | [
"CC0-1.0"
] | null | null | null | s2v_key_case_and_sense_variations_test.py | joshweir/sense2vec-rest | b11ee59661b63146afc2acf1283486c0ab21b15d | [
"CC0-1.0"
] | null | null | null | s2v_key_case_and_sense_variations_test.py | joshweir/sense2vec-rest | b11ee59661b63146afc2acf1283486c0ab21b15d | [
"CC0-1.0"
] | null | null | null | import pytest
from s2v_key_case_and_sense_variations import S2vKeyCaseAndSenseVariations
@pytest.fixture
def s2v_mock():
from sense2vec import Sense2Vec
import numpy as np
s2v = Sense2Vec(shape=(16, 4))
s2v.add('New_York|GPE', np.asarray([1, 1, 1, 1], dtype=np.float32))
s2v.add('New_York|NOUN', np.asarray([1, 2, 1, 1], dtype=np.float32))
s2v.add('big|ADJ', np.asarray([2, 5, 4, 2], dtype=np.float32))
s2v.add('BIG|ADJ', np.asarray([2, 5, 4, 1], dtype=np.float32))
s2v.add('apple|NOUN', np.asarray([1, 3, 9, 3], dtype=np.float32))
s2v.add('big_apple|NOUN', np.asarray([6, 6, 6, 6], dtype=np.float32))
s2v.add('Big_Apple|NOUN', np.asarray([6, 6, 6, 6], dtype=np.float32))
s2v.add('Big_Apple|LOC', np.asarray([6, 6, 6, 6], dtype=np.float32))
s2v.add('Big_apple|NOUN', np.asarray([6, 6, 6, 6], dtype=np.float32))
s2v.add('BIG_apple|NOUN', np.asarray([6, 6, 6, 6], dtype=np.float32))
s2v.add('BIG_Apple|NOUN', np.asarray([6, 6, 6, 6], dtype=np.float32))
s2v.add('BIG_APPLE|NOUN', np.asarray([6, 6, 6, 6], dtype=np.float32))
s2v.add('black|NOUN', np.asarray([6, 6, 6, 6], dtype=np.float32))
s2v.add('black|ADJ', np.asarray([5, 5, 5, 5], dtype=np.float32))
s2v.add('blue|NOUN', np.asarray([6, 6, 6, 6], dtype=np.float32))
s2v.add('blue_big_apple|NOUN', np.asarray([6, 6, 6, 6], dtype=np.float32))
return s2v
@pytest.fixture
def the_service(s2v_mock):
from s2v_util import S2vUtil
from s2v_senses import S2vSenses
from s2v_key_case_and_sense_variations import S2vKeyCaseAndSenseVariations
s2v_util = S2vUtil(s2v_mock)
s2v_senses = S2vSenses(s2v_util)
return S2vKeyCaseAndSenseVariations(s2v_util, s2v_senses)
def test_noun_senses_with_matching_key_are_equal_priority_when_phrase_is_proper(the_service, s2v_mock):
k = [
{ 'wordsense': 'New_York|LOC', 'required': True },
]
result = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
phrase_is_proper = True,
)
expected = [
# LOC and GPE senses are synonymous, LOC does not exist for this key in s2v, therefore GPE is selected
{ 'key': [{ 'wordsense': 'New_York|GPE', 'required': True, 'is_joined': False }], 'priority': 1 },
{ 'key': [{ 'wordsense': 'New_York|NOUN', 'required': True, 'is_joined': False }], 'priority': 1 },
]
assert result == expected
def test_proper_noun_phrase_is_correctly_identified_without_phrase_is_proper_input_flagged(the_service, s2v_mock):
k = [
{ 'wordsense': 'New_York|LOC', 'required': True },
]
result = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
)
result_phrase_is_proper_flagged = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
phrase_is_proper = True,
)
expected = [
# LOC and GPE senses are synonymous, LOC does not exist for this key in s2v, therefore GPE is selected
{ 'key': [{ 'wordsense': 'New_York|GPE', 'required': True, 'is_joined': False }], 'priority': 1 },
{ 'key': [{ 'wordsense': 'New_York|NOUN', 'required': True, 'is_joined': False }], 'priority': 1 },
]
assert result == result_phrase_is_proper_flagged == expected
def test_ranks_joined_keys_higher_than_non_joined_then_ranks_better_case_matches_higher_and_non_proper_phrase_correctly_identified(the_service, s2v_mock):
k = [
{ 'wordsense': 'big|ADJ', 'required': False },
{ 'wordsense': 'apple|NOUN', 'required': False },
]
result = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
)
result_phrase_is_proper_flagged = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
phrase_is_proper = False,
)
expected = [
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'big_apple|NOUN'}], 'priority': 1},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'Big_apple|NOUN'}], 'priority': 2},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'Big_Apple|NOUN'}], 'priority': 3},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'BIG_apple|NOUN'}], 'priority': 4},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'BIG_Apple|NOUN'}], 'priority': 5},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'BIG_APPLE|NOUN'}], 'priority': 6},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'Big_Apple|LOC'}], 'priority': 7},
{'key': [
{'is_joined': False, 'required': False, 'wordsense': 'big|ADJ'},
{'is_joined': False, 'required': False, 'wordsense': 'apple|NOUN'}], 'priority': 8},
{'key': [
{'is_joined': False, 'required': False, 'wordsense': 'BIG|ADJ'},
{'is_joined': False, 'required': False, 'wordsense': 'apple|NOUN'}], 'priority': 9},
]
assert result == result_phrase_is_proper_flagged == expected
def test_ranks_joined_keys_higher_than_non_joined_then_ranks_non_proper_phrases_higher_when_flagged_as_not_proper_phrase(the_service, s2v_mock):
k = [
{ 'wordsense': 'BIG|ADJ', 'required': False },
{ 'wordsense': 'APPLE|NOUN', 'required': False },
]
result = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
phrase_is_proper = False,
)
expected = [
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'big_apple|NOUN'}], 'priority': 1},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'BIG_APPLE|NOUN'}], 'priority': 2},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'BIG_Apple|NOUN'}], 'priority': 3},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'BIG_apple|NOUN'}], 'priority': 4},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'Big_Apple|NOUN'}], 'priority': 5},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'Big_apple|NOUN'}], 'priority': 6},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'Big_Apple|LOC'}], 'priority': 7},
{'key': [
{'is_joined': False, 'required': False, 'wordsense': 'BIG|ADJ'},
{'is_joined': False, 'required': False, 'wordsense': 'apple|NOUN'}], 'priority': 8},
{'key': [
{'is_joined': False, 'required': False, 'wordsense': 'big|ADJ'},
{'is_joined': False, 'required': False, 'wordsense': 'apple|NOUN'}], 'priority': 9},
]
assert result == expected
def test_ranks_based_on_key_match_and_matching_keys_equal_priority_when_flagged_as_proper_phrase(the_service, s2v_mock):
k = [
{ 'wordsense': 'BIG|ADJ', 'required': False },
{ 'wordsense': 'APPLE|NOUN', 'required': False },
]
result = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
)
result_phrase_is_proper_flagged = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
phrase_is_proper = True,
)
expected = [
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'BIG_APPLE|NOUN'}], 'priority': 1},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'BIG_Apple|NOUN'}], 'priority': 2},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'BIG_apple|NOUN'}], 'priority': 3},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'Big_Apple|LOC'}], 'priority': 4},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'Big_Apple|NOUN'}], 'priority': 4},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'Big_apple|NOUN'}], 'priority': 5},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'big_apple|NOUN'}], 'priority': 6},
{'key': [
{'is_joined': False, 'required': False, 'wordsense': 'BIG|ADJ'},
{'is_joined': False, 'required': False, 'wordsense': 'apple|NOUN'}], 'priority': 7},
{'key': [
{'is_joined': False, 'required': False, 'wordsense': 'big|ADJ'},
{'is_joined': False, 'required': False, 'wordsense': 'apple|NOUN'}], 'priority': 8},
]
assert result == result_phrase_is_proper_flagged == expected
def test_ranks_joined_keys_higher_than_non_joined_then_ranks_better_case_matches_higher_2(the_service, s2v_mock):
k = [
{ 'wordsense': 'Big|ADJ', 'required': False },
{ 'wordsense': 'Apple|NOUN', 'required': False },
]
result = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
)
expected = [
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'Big_Apple|LOC'}], 'priority': 1},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'Big_Apple|NOUN'}], 'priority': 1},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'Big_apple|NOUN'}], 'priority': 2},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'BIG_Apple|NOUN'}], 'priority': 3},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'BIG_APPLE|NOUN'}], 'priority': 4},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'BIG_apple|NOUN'}], 'priority': 5},
{'key': [{'is_joined': True, 'required': True, 'wordsense': 'big_apple|NOUN'}], 'priority': 6},
{'key': [{'is_joined': False, 'required': False, 'wordsense': 'BIG|ADJ'}, {'is_joined': False, 'required': False, 'wordsense': 'apple|NOUN'}], 'priority': 7},
{'key': [{'is_joined': False, 'required': False, 'wordsense': 'big|ADJ'}, {'is_joined': False, 'required': False, 'wordsense': 'apple|NOUN'}], 'priority': 8},
]
assert result == expected
def test_joins_the_last_two_words_of_a_phrase_if_they_exist_as_a_compound(the_service, s2v_mock):
k = [
{ 'wordsense': 'black|ADJ', 'required': False },
{ 'wordsense': 'big|ADJ', 'required': False },
{ 'wordsense': 'apple|NOUN', 'required': False },
]
result = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
)
expected = [
{'key': [{'wordsense': 'black|ADJ', 'required': False, 'is_joined': False}, {'wordsense': 'big_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 1},
{'key': [{'wordsense': 'black|ADJ', 'required': False, 'is_joined': False}, {'wordsense': 'Big_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 2},
{'key': [{'wordsense': 'black|ADJ', 'required': False, 'is_joined': False}, {'wordsense': 'Big_Apple|LOC', 'required': True, 'is_joined': True}], 'priority': 3},
{'key': [{'wordsense': 'black|ADJ', 'required': False, 'is_joined': False}, {'wordsense': 'Big_Apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 3},
{'key': [{'wordsense': 'black|ADJ', 'required': False, 'is_joined': False}, {'wordsense': 'BIG_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 4},
{'key': [{'wordsense': 'black|ADJ', 'required': False, 'is_joined': False}, {'wordsense': 'BIG_Apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 5},
{'key': [{'wordsense': 'black|ADJ', 'required': False, 'is_joined': False}, {'wordsense': 'BIG_APPLE|NOUN', 'required': True, 'is_joined': True}], 'priority': 6},
{'key': [{'wordsense': 'black|ADJ', 'required': False, 'is_joined': False}, {'wordsense': 'big|ADJ', 'required': False, 'is_joined': False}, {'wordsense': 'apple|NOUN', 'required': False, 'is_joined': False}], 'priority': 7},
{'key': [{'wordsense': 'black|ADJ', 'required': False, 'is_joined': False}, {'wordsense': 'BIG|ADJ', 'required': False, 'is_joined': False}, {'wordsense': 'apple|NOUN', 'required': False, 'is_joined': False}], 'priority': 8}
]
assert result == expected
def test_3_word_all_required_compound(the_service, s2v_mock):
k = [
{ 'wordsense': 'blue|NOUN', 'required': True },
{ 'wordsense': 'big|ADJ', 'required': True },
{ 'wordsense': 'apple|NOUN', 'required': True },
]
result = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
)
expected = [
{'key': [{'wordsense': 'blue_big_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 1},
{'key': [{'wordsense': 'blue|NOUN', 'required': True, 'is_joined': False}, {'wordsense': 'big_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 2},
{'key': [{'wordsense': 'blue|NOUN', 'required': True, 'is_joined': False}, {'wordsense': 'Big_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 3},
{'key': [{'wordsense': 'blue|NOUN', 'required': True, 'is_joined': False}, {'wordsense': 'Big_Apple|LOC', 'required': True, 'is_joined': True}], 'priority': 4},
{'key': [{'wordsense': 'blue|NOUN', 'required': True, 'is_joined': False}, {'wordsense': 'Big_Apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 4},
{'key': [{'wordsense': 'blue|NOUN', 'required': True, 'is_joined': False}, {'wordsense': 'BIG_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 5},
{'key': [{'wordsense': 'blue|NOUN', 'required': True, 'is_joined': False}, {'wordsense': 'BIG_Apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 6},
{'key': [{'wordsense': 'blue|NOUN', 'required': True, 'is_joined': False}, {'wordsense': 'BIG_APPLE|NOUN', 'required': True, 'is_joined': True}], 'priority': 7},
{'key': [{'wordsense': 'blue|NOUN', 'required': True, 'is_joined': False}, {'wordsense': 'big|ADJ', 'required': True, 'is_joined': False}, {'wordsense': 'apple|NOUN', 'required': True, 'is_joined': False}], 'priority': 8},
{'key': [{'wordsense': 'blue|NOUN', 'required': True, 'is_joined': False}, {'wordsense': 'BIG|ADJ', 'required': True, 'is_joined': False}, {'wordsense': 'apple|NOUN', 'required': True, 'is_joined': False}], 'priority': 9}
]
assert result == expected
def test_3_word_all_required_compound_must_only_phrase_join(the_service, s2v_mock):
k = [
{ 'wordsense': 'blue|NOUN', 'required': True },
{ 'wordsense': 'big|ADJ', 'required': True },
{ 'wordsense': 'apple|NOUN', 'required': True },
]
result = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
must_only_phrase_join_for_compound_phrases = True,
)
expected = [
{'key': [{'wordsense': 'blue_big_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 1},
]
assert result == expected
def test_3_word_all_required_compound_must_only_phrase_join_2(the_service, s2v_mock):
k = [
{ 'wordsense': 'big|ADJ', 'required': False },
{ 'wordsense': 'apple|NOUN', 'required': True },
]
result = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
must_only_phrase_join_for_compound_phrases = True,
)
expected = [
{'key': [{'wordsense': 'big_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 1},
{'key': [{'wordsense': 'Big_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 2},
{'key': [{'wordsense': 'Big_Apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 3},
{'key': [{'wordsense': 'BIG_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 4},
{'key': [{'wordsense': 'BIG_Apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 5},
{'key': [{'wordsense': 'BIG_APPLE|NOUN', 'required': True, 'is_joined': True}], 'priority': 6},
{'key': [{'wordsense': 'Big_Apple|LOC', 'required': True, 'is_joined': True}], 'priority': 7}
]
assert result == expected
def test_limit(the_service, s2v_mock):
k = [
{ 'wordsense': 'blue|NOUN', 'required': True },
{ 'wordsense': 'big|ADJ', 'required': True },
{ 'wordsense': 'apple|NOUN', 'required': True },
]
result = the_service.call(
k,
attempt_phrase_join_for_compound_phrases = True,
flag_joined_phrase_variations = True,
limit = 4,
)
expected = [
{'key': [{'wordsense': 'blue_big_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 1},
{'key': [{'wordsense': 'blue|NOUN', 'required': True, 'is_joined': False}, {'wordsense': 'big_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 2},
{'key': [{'wordsense': 'blue|NOUN', 'required': True, 'is_joined': False}, {'wordsense': 'Big_apple|NOUN', 'required': True, 'is_joined': True}], 'priority': 3},
{'key': [{'wordsense': 'blue|NOUN', 'required': True, 'is_joined': False}, {'wordsense': 'Big_Apple|LOC', 'required': True, 'is_joined': True}], 'priority': 4},
]
assert result == expected
| 56.163399 | 232 | 0.623298 | 2,113 | 17,186 | 4.821581 | 0.053005 | 0.081665 | 0.064782 | 0.092265 | 0.92874 | 0.924421 | 0.919121 | 0.90852 | 0.90852 | 0.895563 | 0 | 0.016161 | 0.175492 | 17,186 | 305 | 233 | 56.347541 | 0.702823 | 0.011696 | 0 | 0.575972 | 0 | 0 | 0.324991 | 0 | 0 | 0 | 0 | 0 | 0.038869 | 1 | 0.045936 | false | 0 | 0.024735 | 0 | 0.077739 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
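A standalone sketch of the idea the fixtures above exercise: enumerating case variants of a 'word|SENSE' key and keeping the ones the model knows. This is not the implementation of S2vKeyCaseAndSenseVariations; the helper names are made up, and it only assumes that Sense2Vec supports 'key in s2v' membership checks.
def case_variants(word, sense):
    # build as-given / lower / Title / UPPER variants of a possibly multi-word key
    words = word.split('_')
    forms = {
        '_'.join(words),                      # as given
        '_'.join(w.lower() for w in words),   # big_apple
        '_'.join(w.title() for w in words),   # Big_Apple
        '_'.join(w.upper() for w in words),   # BIG_APPLE
    }
    return ['%s|%s' % (f, sense) for f in sorted(forms)]

def known_variants(s2v, word, sense):
    # keep only the variant keys that exist in the sense2vec table
    return [key for key in case_variants(word, sense) if key in s2v]

# e.g. with the s2v_mock fixture above:
# known_variants(s2v_mock, 'big_apple', 'NOUN') -> ['BIG_APPLE|NOUN', 'Big_Apple|NOUN', 'big_apple|NOUN']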
b88e842231bee5065d436a1fa2fda924bd6b234d | 100 | py | Python | utils/__init__.py | GabyRumc/DeepCovidXR | 3cc48cd6a9d545d8b10383b1f34dad16b0b998d2 | [
"MIT"
] | 12 | 2020-12-01T01:21:35.000Z | 2021-08-18T07:39:17.000Z | utils/__init__.py | GabyRumc/DeepCovidXR | 3cc48cd6a9d545d8b10383b1f34dad16b0b998d2 | [
"MIT"
] | 8 | 2020-11-03T15:10:25.000Z | 2021-03-06T13:50:55.000Z | utils/__init__.py | GabyRumc/DeepCovidXR | 3cc48cd6a9d545d8b10383b1f34dad16b0b998d2 | [
"MIT"
] | 10 | 2020-11-25T07:49:14.000Z | 2021-11-04T19:36:07.000Z | from .NIH_util import nihUtils
from .img_util import imgUtils
from .model_util import trainFeatures
| 25 | 37 | 0.85 | 15 | 100 | 5.466667 | 0.6 | 0.365854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 100 | 3 | 38 | 33.333333 | 0.931818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b27fc9b1d8d54a7cfd1706dec0357525f94be528 | 23 | py | Python | kay_package/__init__.py | KathyRoma/lambda_packages | 8942337ae070915b7abb7f34a23ee7e43a9b3230 | [
"MIT"
] | null | null | null | kay_package/__init__.py | KathyRoma/lambda_packages | 8942337ae070915b7abb7f34a23ee7e43a9b3230 | [
"MIT"
] | 1 | 2020-07-07T23:01:46.000Z | 2020-07-07T23:01:46.000Z | kay_package/__init__.py | KathyRoma/lambda_packages | 8942337ae070915b7abb7f34a23ee7e43a9b3230 | [
"MIT"
] | 1 | 2020-07-07T22:57:29.000Z | 2020-07-07T22:57:29.000Z | from .k_fun import Kfun | 23 | 23 | 0.826087 | 5 | 23 | 3.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b2819881981fc947eaa46bfab91b5b44a3086934 | 179 | py | Python | dryapp/drydrop/app/models/session.py | harperreed/drydrop | 3c9463a9e789f2e8e5f8df43fb10bb9f44a7ba78 | [
"BSD-3-Clause"
] | 1 | 2015-11-05T00:49:41.000Z | 2015-11-05T00:49:41.000Z | dryapp/drydrop/app/models/session.py | harperreed/drydrop | 3c9463a9e789f2e8e5f8df43fb10bb9f44a7ba78 | [
"BSD-3-Clause"
] | null | null | null | dryapp/drydrop/app/models/session.py | harperreed/drydrop | 3c9463a9e789f2e8e5f8df43fb10bb9f44a7ba78 | [
"BSD-3-Clause"
] | null | null | null | # -*- mode: python; coding: utf-8 -*-
import logging
import google.appengine.ext.db as db
from drydrop.app.core.model import Model
class Session(db.Expando, Model):
pass | 22.375 | 40 | 0.709497 | 27 | 179 | 4.703704 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006711 | 0.167598 | 179 | 8 | 41 | 22.375 | 0.845638 | 0.195531 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.6 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
a2e3efe4f705a958a76e17e43a0d50023efeb0d0 | 81 | py | Python | src/pengbot/context.py | mariocesar/pengbot | 070854f92ac1314ee56f7f6cb9d27430b8f0fda8 | [
"MIT"
] | 1 | 2020-09-21T13:52:04.000Z | 2020-09-21T13:52:04.000Z | src/pengbot/context.py | mariocesar/pengbot | 070854f92ac1314ee56f7f6cb9d27430b8f0fda8 | [
"MIT"
] | null | null | null | src/pengbot/context.py | mariocesar/pengbot | 070854f92ac1314ee56f7f6cb9d27430b8f0fda8 | [
"MIT"
] | null | null | null | from pengbot.utils import AttributeDict
class Context(AttributeDict):
pass
| 13.5 | 39 | 0.790123 | 9 | 81 | 7.111111 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160494 | 81 | 5 | 40 | 16.2 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
a2ea084344a9ed8ab4b11e9aa7167e6505818d2b | 35,714 | py | Python | cpdb/pinboard/tests/models/test_pinboard.py | invinst/CPDBv2_backend | b4e96d620ff7a437500f525f7e911651e4a18ef9 | [
"Apache-2.0"
] | 25 | 2018-07-20T22:31:40.000Z | 2021-07-15T16:58:41.000Z | cpdb/pinboard/tests/models/test_pinboard.py | invinst/CPDBv2_backend | b4e96d620ff7a437500f525f7e911651e4a18ef9 | [
"Apache-2.0"
] | 13 | 2018-06-18T23:08:47.000Z | 2022-02-10T07:38:25.000Z | cpdb/pinboard/tests/models/test_pinboard.py | invinst/CPDBv2_backend | b4e96d620ff7a437500f525f7e911651e4a18ef9 | [
"Apache-2.0"
] | 6 | 2018-05-17T21:59:43.000Z | 2020-11-17T00:30:26.000Z | from datetime import datetime
from django.test.testcases import TestCase
import pytz
from robber.expect import expect
from data.factories import (
AllegationFactory, OfficerFactory, OfficerAllegationFactory, InvestigatorAllegationFactory,
PoliceWitnessFactory,
AttachmentFileFactory,
)
from pinboard.factories import PinboardFactory, ExamplePinboardFactory
from trr.factories import TRRFactory
class PinboardTestCase(TestCase):
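    """Unit tests for the Pinboard model: cloning and duplication, string
    representation, pinned item accessors, and the relevant coaccusals /
    complaints / documents queries."""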
def test_clone(self):
officer_1 = OfficerFactory(id=1)
officer_2 = OfficerFactory(id=2)
allegation_1 = AllegationFactory(crid='123abc')
allegation_2 = AllegationFactory(crid='456def')
trr_1 = TRRFactory(id=1, officer=OfficerFactory(id=3))
trr_2 = TRRFactory(id=2, officer=OfficerFactory(id=4))
pinboard = PinboardFactory(
title='Pinboard title',
description='Pinboard title',
officers=(officer_1, officer_2),
allegations=(allegation_1, allegation_2),
trrs=(trr_1, trr_2),
)
cloned_pinboard = pinboard.clone()
cloned_pinboard.refresh_from_db()
officers = set(pinboard.officers.all().values_list('id', flat=True))
allegations = set(pinboard.allegations.all().values_list('crid', flat=True))
trrs = set(pinboard.trrs.all().values_list('id', flat=True))
cloned_officers = set(cloned_pinboard.officers.all().values_list('id', flat=True))
cloned_allegations = set(cloned_pinboard.allegations.all().values_list('crid', flat=True))
cloned_trrs = set(cloned_pinboard.trrs.all().values_list('id', flat=True))
expect(pinboard.title).to.eq(cloned_pinboard.title)
expect(pinboard.description).to.eq(cloned_pinboard.description)
expect(officers).to.eq(cloned_officers)
expect(allegations).to.eq(cloned_allegations)
expect(trrs).to.eq(cloned_trrs)
expect(cloned_pinboard.source_pinboard).to.eq(pinboard)
def test_clone_duplicate(self):
pinboard = PinboardFactory(
title='Pinboard title',
description='Pinboard title',
officers=(),
allegations=(),
trrs=(),
)
cloned_pinboard = pinboard.clone(is_duplicated=True)
expect(cloned_pinboard.title).to.eq('Pinboard title copy')
def test_clone_duplicate_with_copy_in_title(self):
pinboard = PinboardFactory(
title='Pinboard title copy',
description='Pinboard title',
officers=(),
allegations=(),
trrs=(),
)
cloned_pinboard = pinboard.clone(is_duplicated=True)
expect(cloned_pinboard.title).to.eq('Pinboard title copy 2')
def test_clone_duplicate_with_copy_number_in_title(self):
pinboard = PinboardFactory(
title='Pinboard title copy 2',
description='Pinboard title',
officers=(),
allegations=(),
trrs=(),
)
cloned_pinboard = pinboard.clone(is_duplicated=True)
expect(cloned_pinboard.title).to.eq('Pinboard title copy 3')
def test_clone_duplicate_with_copy_copy_number_in_title(self):
pinboard = PinboardFactory(
title='Pinboard title copy copy',
description='Pinboard title',
officers=(),
allegations=(),
trrs=(),
)
cloned_pinboard = pinboard.clone(is_duplicated=True)
expect(cloned_pinboard.title).to.eq('Pinboard title copy copy 2')
def test_clone_duplicate_with_copy_number_in_middle_of_title(self):
pinboard = PinboardFactory(
title='Pinboard title copy 2d',
description='Pinboard title',
officers=(),
allegations=(),
trrs=(),
)
cloned_pinboard = pinboard.clone(is_duplicated=True)
expect(cloned_pinboard.title).to.eq('Pinboard title copy 2d copy')
def test_str(self):
pinboard = PinboardFactory(
id='abcd1234',
title='Pinboard title',
)
pinboard_no_title = PinboardFactory(
id='dcba4321',
title='',
)
expect(str(pinboard)).to.eq('abcd1234 - Pinboard title')
expect(str(pinboard_no_title)).to.eq('dcba4321')
def test_is_empty(self):
officer_1 = OfficerFactory(id=1)
officer_2 = OfficerFactory(id=2)
allegation_1 = AllegationFactory(crid='123abc')
allegation_2 = AllegationFactory(crid='456def')
trr_1 = TRRFactory(id=1, officer=OfficerFactory(id=3))
trr_2 = TRRFactory(id=2, officer=OfficerFactory(id=4))
pinboard_full = PinboardFactory(
officers=(officer_1, officer_2),
allegations=(allegation_1, allegation_2),
trrs=(trr_1, trr_2),
)
pinboard_with_officers = PinboardFactory(
officers=(officer_1, officer_2),
)
pinboard_with_allegations = PinboardFactory(
allegations=(allegation_1, allegation_2),
)
pinboard_with_trrs = PinboardFactory(
trrs=(trr_1, trr_2),
)
pinboard_empty = PinboardFactory()
expect(pinboard_full.is_empty).to.be.false()
expect(pinboard_with_officers.is_empty).to.be.false()
expect(pinboard_with_allegations.is_empty).to.be.false()
expect(pinboard_with_trrs.is_empty).to.be.false()
expect(pinboard_empty.is_empty).to.be.true()
def test_example_pinboards(self):
officer_1 = OfficerFactory(id=1)
officer_2 = OfficerFactory(id=2)
allegation_1 = AllegationFactory(crid='123abc')
allegation_2 = AllegationFactory(crid='456def')
trr_1 = TRRFactory(id=1, officer=OfficerFactory(id=3))
trr_2 = TRRFactory(id=2, officer=OfficerFactory(id=4))
pinboard_full = PinboardFactory(
officers=(officer_1, officer_2),
allegations=(allegation_1, allegation_2),
trrs=(trr_1, trr_2),
)
pinboard_with_officers = PinboardFactory(
officers=(officer_1, officer_2),
)
pinboard_with_allegations = PinboardFactory(
allegations=(allegation_1, allegation_2),
)
pinboard_with_trrs = PinboardFactory(
trrs=(trr_1, trr_2),
)
pinboard_empty = PinboardFactory()
e_pinboard_1 = PinboardFactory(
title='Example pinboard 1',
description='Example pinboard 1',
)
example_pinboard_1 = ExamplePinboardFactory(pinboard=e_pinboard_1)
e_pinboard_2 = PinboardFactory(
title='Example pinboard 1',
description='Example pinboard 1',
)
example_pinboard_2 = ExamplePinboardFactory(pinboard=e_pinboard_2)
expect(pinboard_empty.example_pinboards).to.have.length(2)
expect(pinboard_empty.example_pinboards).to.contain(example_pinboard_1)
expect(pinboard_empty.example_pinboards).to.contain(example_pinboard_2)
expect(pinboard_with_officers.example_pinboards).to.be.none()
expect(pinboard_with_allegations.example_pinboards).to.be.none()
expect(pinboard_with_trrs.example_pinboards).to.be.none()
expect(pinboard_full.example_pinboards).to.be.none()
def test_all_officers(self):
officer_1 = OfficerFactory(id=8562, first_name='Jerome', last_name='Finnigan')
officer_2 = OfficerFactory(id=8563, first_name='Edward', last_name='May')
officer_3 = OfficerFactory(id=8564, first_name='Joe', last_name='Parker')
officer_4 = OfficerFactory(id=8565, first_name='Jane', last_name='Marry')
officer_5 = OfficerFactory(id=8566, first_name='John', last_name='Parker')
officer_6 = OfficerFactory(id=8567, first_name='William', last_name='People')
officer_7 = OfficerFactory(id=8568, first_name='Zin', last_name='Flagg')
allegation_1 = AllegationFactory(
crid='123',
is_officer_complaint=False,
incident_date=datetime(2005, 12, 31, tzinfo=pytz.utc)
)
allegation_2 = AllegationFactory(
crid='456',
is_officer_complaint=True,
incident_date=datetime(2006, 12, 31, tzinfo=pytz.utc)
)
allegation_3 = AllegationFactory(
crid='789',
is_officer_complaint=False,
incident_date=datetime(2007, 12, 31, tzinfo=pytz.utc)
)
trr_1 = TRRFactory(
id=1,
officer=officer_6,
trr_datetime=datetime(2008, 12, 31, tzinfo=pytz.utc)
)
trr_2 = TRRFactory(
id=2,
officer=officer_7,
trr_datetime=datetime(2009, 12, 31, tzinfo=pytz.utc)
)
OfficerAllegationFactory(id=1, officer=officer_3, allegation=allegation_1)
OfficerAllegationFactory(id=2, officer=officer_4, allegation=allegation_2)
OfficerAllegationFactory(id=3, officer=officer_2, allegation=allegation_2)
OfficerAllegationFactory(id=4, officer=officer_5, allegation=allegation_3)
pinboard = PinboardFactory(
description='abc',
)
pinboard.officers.set([officer_1, officer_2])
pinboard.allegations.set([allegation_1, allegation_2])
pinboard.trrs.set([trr_1, trr_2])
expected_all_officers = {officer_2, officer_4, officer_1, officer_3, officer_6, officer_7}
expect(set(pinboard.all_officers)).to.eq(expected_all_officers)
def test_all_officer_ids(self):
officer_1 = OfficerFactory(id=8562, first_name='Jerome', last_name='Finnigan')
officer_2 = OfficerFactory(id=8563, first_name='Edward', last_name='May')
officer_3 = OfficerFactory(id=8564, first_name='Joe', last_name='Parker')
officer_4 = OfficerFactory(id=8565, first_name='Jane', last_name='Marry')
officer_5 = OfficerFactory(id=8566, first_name='John', last_name='Parker')
officer_6 = OfficerFactory(id=8567, first_name='William', last_name='People')
officer_7 = OfficerFactory(id=8568, first_name='Zin', last_name='Flagg')
allegation_1 = AllegationFactory(
crid='123',
is_officer_complaint=False,
incident_date=datetime(2005, 12, 31, tzinfo=pytz.utc)
)
allegation_2 = AllegationFactory(
crid='456',
is_officer_complaint=True,
incident_date=datetime(2006, 12, 31, tzinfo=pytz.utc)
)
allegation_3 = AllegationFactory(
crid='789',
is_officer_complaint=False,
incident_date=datetime(2007, 12, 31, tzinfo=pytz.utc)
)
trr_1 = TRRFactory(
id=1,
officer=officer_6,
trr_datetime=datetime(2008, 12, 31, tzinfo=pytz.utc)
)
trr_2 = TRRFactory(
id=2,
officer=officer_7,
trr_datetime=datetime(2009, 12, 31, tzinfo=pytz.utc)
)
OfficerAllegationFactory(id=1, officer=officer_3, allegation=allegation_1)
OfficerAllegationFactory(id=2, officer=officer_4, allegation=allegation_2)
OfficerAllegationFactory(id=3, officer=officer_2, allegation=allegation_2)
OfficerAllegationFactory(id=4, officer=officer_5, allegation=allegation_3)
pinboard = PinboardFactory(
description='abc',
)
pinboard.officers.set([officer_1, officer_2])
pinboard.allegations.set([allegation_1, allegation_2])
pinboard.trrs.set([trr_1, trr_2])
expected_all_officer_ids = {officer_2.id, officer_4.id, officer_1.id, officer_3.id, officer_6.id, officer_7.id}
expect(set(pinboard.all_officer_ids)).to.eq(expected_all_officer_ids)
def test_officer_ids(self):
pinned_officer_1 = OfficerFactory(id=1)
pinned_officer_2 = OfficerFactory(id=2)
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.officers.set([pinned_officer_1, pinned_officer_2])
expect(list(pinboard.officer_ids)).to.eq([1, 2])
def test_crids(self):
pinned_allegation_1 = AllegationFactory(crid='1')
pinned_allegation_2 = AllegationFactory(crid='2')
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.allegations.set([pinned_allegation_1, pinned_allegation_2])
expect(list(pinboard.crids)).to.eq(['1', '2'])
def test_trr_ids(self):
pinned_trr_1 = TRRFactory(id=1)
pinned_trr_2 = TRRFactory(id=2)
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.trrs.set([pinned_trr_1, pinned_trr_2])
expect(list(pinboard.trr_ids)).to.eq([1, 2])
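    # The tests below exercise Pinboard.relevant_coaccusals: officers who share
    # complaints with the pinned officers, complaints or TRRs, ordered by the
    # number of such shared complaints (coaccusal_count), highest first.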
def test_relevant_coaccusals(self):
pinned_officer_1 = OfficerFactory(id=1)
pinned_officer_2 = OfficerFactory(id=2)
pinned_allegation_1 = AllegationFactory(crid='1')
pinned_allegation_2 = AllegationFactory(crid='2')
pinned_trr = TRRFactory(officer__id=77)
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.officers.set([pinned_officer_1, pinned_officer_2])
pinboard.allegations.set([pinned_allegation_1, pinned_allegation_2])
pinboard.trrs.set([pinned_trr])
not_relevant_allegation = AllegationFactory(crid='999')
officer_coaccusal_11 = OfficerFactory(id=11)
officer_coaccusal_21 = OfficerFactory(id=21)
OfficerFactory(id=99, first_name='Not Relevant', last_name='Officer')
allegation_11 = AllegationFactory(crid='11')
allegation_12 = AllegationFactory(crid='12')
allegation_13 = AllegationFactory(crid='13')
allegation_14 = AllegationFactory(crid='14')
OfficerAllegationFactory(allegation=allegation_11, officer=pinned_officer_1)
OfficerAllegationFactory(allegation=allegation_12, officer=pinned_officer_1)
OfficerAllegationFactory(allegation=allegation_13, officer=pinned_officer_1)
OfficerAllegationFactory(allegation=allegation_14, officer=pinned_officer_1)
OfficerAllegationFactory(allegation=allegation_11, officer=officer_coaccusal_11)
OfficerAllegationFactory(allegation=allegation_12, officer=officer_coaccusal_11)
OfficerAllegationFactory(allegation=allegation_13, officer=officer_coaccusal_11)
OfficerAllegationFactory(allegation=allegation_14, officer=officer_coaccusal_11)
OfficerAllegationFactory(allegation=not_relevant_allegation, officer=officer_coaccusal_11)
allegation_21 = AllegationFactory(crid='21')
allegation_22 = AllegationFactory(crid='22')
allegation_23 = AllegationFactory(crid='23')
OfficerAllegationFactory(allegation=allegation_21, officer=pinned_officer_2)
OfficerAllegationFactory(allegation=allegation_22, officer=pinned_officer_2)
OfficerAllegationFactory(allegation=allegation_23, officer=pinned_officer_2)
OfficerAllegationFactory(allegation=allegation_21, officer=officer_coaccusal_21)
OfficerAllegationFactory(allegation=allegation_22, officer=officer_coaccusal_21)
OfficerAllegationFactory(allegation=allegation_23, officer=officer_coaccusal_21)
OfficerAllegationFactory(allegation=not_relevant_allegation, officer=officer_coaccusal_21)
allegation_coaccusal_12 = OfficerFactory(id=12)
allegation_coaccusal_22 = OfficerFactory(id=22)
OfficerAllegationFactory(allegation=pinned_allegation_1, officer=allegation_coaccusal_12)
OfficerAllegationFactory(allegation=pinned_allegation_2, officer=allegation_coaccusal_12)
OfficerAllegationFactory(allegation=not_relevant_allegation, officer=allegation_coaccusal_12)
OfficerAllegationFactory(allegation=pinned_allegation_2, officer=allegation_coaccusal_22)
OfficerAllegationFactory(allegation=not_relevant_allegation, officer=allegation_coaccusal_22)
relevant_coaccusals = list(pinboard.relevant_coaccusals)
expect(relevant_coaccusals).to.have.length(5)
expect(relevant_coaccusals[0].id).to.eq(11)
expect(relevant_coaccusals[0].coaccusal_count).to.eq(4)
expect(relevant_coaccusals[1].id).to.eq(21)
expect(relevant_coaccusals[1].coaccusal_count).to.eq(3)
expect(relevant_coaccusals[2].id).to.eq(12)
expect(relevant_coaccusals[2].coaccusal_count).to.eq(2)
expect({relevant_coaccusal.id for relevant_coaccusal in relevant_coaccusals[3:5]}).to.eq({22, 77})
expect(relevant_coaccusals[3].coaccusal_count).to.eq(1)
expect(relevant_coaccusals[4].coaccusal_count).to.eq(1)
def test_relevant_complaints_coaccusal_count_via_trr(self):
officer_coaccusal_11 = OfficerFactory(id=11)
officer_coaccusal_21 = OfficerFactory(id=21)
OfficerFactory(id=99, first_name='Not Relevant', last_name='Officer')
pinned_officer_1 = OfficerFactory(id=1)
pinned_officer_2 = OfficerFactory(id=2)
pinned_allegation_1 = AllegationFactory(crid='1')
pinned_allegation_2 = AllegationFactory(crid='2')
pinned_trr_1 = TRRFactory(officer__id=77)
pinned_trr_2 = TRRFactory(officer=officer_coaccusal_11)
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.officers.set([pinned_officer_1, pinned_officer_2])
pinboard.allegations.set([pinned_allegation_1, pinned_allegation_2])
pinboard.trrs.set([pinned_trr_1, pinned_trr_2])
not_relevant_allegation = AllegationFactory(crid='999')
allegation_11 = AllegationFactory(crid='11')
allegation_12 = AllegationFactory(crid='12')
allegation_13 = AllegationFactory(crid='13')
allegation_14 = AllegationFactory(crid='14')
OfficerAllegationFactory(allegation=allegation_11, officer=pinned_officer_1)
OfficerAllegationFactory(allegation=allegation_12, officer=pinned_officer_1)
OfficerAllegationFactory(allegation=allegation_13, officer=pinned_officer_1)
OfficerAllegationFactory(allegation=allegation_14, officer=pinned_officer_1)
OfficerAllegationFactory(allegation=allegation_11, officer=officer_coaccusal_11)
OfficerAllegationFactory(allegation=allegation_12, officer=officer_coaccusal_11)
OfficerAllegationFactory(allegation=allegation_13, officer=officer_coaccusal_11)
OfficerAllegationFactory(allegation=allegation_14, officer=officer_coaccusal_11)
OfficerAllegationFactory(allegation=not_relevant_allegation, officer=officer_coaccusal_11)
allegation_21 = AllegationFactory(crid='21')
allegation_22 = AllegationFactory(crid='22')
allegation_23 = AllegationFactory(crid='23')
OfficerAllegationFactory(allegation=allegation_21, officer=pinned_officer_2)
OfficerAllegationFactory(allegation=allegation_22, officer=pinned_officer_2)
OfficerAllegationFactory(allegation=allegation_23, officer=pinned_officer_2)
OfficerAllegationFactory(allegation=allegation_21, officer=officer_coaccusal_21)
OfficerAllegationFactory(allegation=allegation_22, officer=officer_coaccusal_21)
OfficerAllegationFactory(allegation=allegation_23, officer=officer_coaccusal_21)
OfficerAllegationFactory(allegation=not_relevant_allegation, officer=officer_coaccusal_21)
allegation_coaccusal_12 = OfficerFactory(id=12)
allegation_coaccusal_22 = OfficerFactory(id=22)
OfficerAllegationFactory(allegation=pinned_allegation_1, officer=allegation_coaccusal_12)
OfficerAllegationFactory(allegation=pinned_allegation_2, officer=allegation_coaccusal_12)
OfficerAllegationFactory(allegation=not_relevant_allegation, officer=allegation_coaccusal_12)
OfficerAllegationFactory(allegation=pinned_allegation_2, officer=allegation_coaccusal_22)
OfficerAllegationFactory(allegation=not_relevant_allegation, officer=allegation_coaccusal_22)
relevant_coaccusals = list(pinboard.relevant_coaccusals)
expect(relevant_coaccusals).to.have.length(5)
expect(relevant_coaccusals[0].id).to.eq(11)
expect(relevant_coaccusals[0].coaccusal_count).to.eq(5)
expect(relevant_coaccusals[1].id).to.eq(21)
expect(relevant_coaccusals[1].coaccusal_count).to.eq(3)
expect(relevant_coaccusals[2].id).to.eq(12)
expect(relevant_coaccusals[2].coaccusal_count).to.eq(2)
expect({relevant_coaccusal.id for relevant_coaccusal in relevant_coaccusals[3:5]}).to.eq({22, 77})
expect(relevant_coaccusals[3].coaccusal_count).to.eq(1)
expect(relevant_coaccusals[4].coaccusal_count).to.eq(1)
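    # The tests below exercise Pinboard.relevant_complaints: complaints linked to
    # the pinned items via accused officers, investigators or police witnesses,
    # excluding complaints that are themselves pinned, newest incident first.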
def test_relevant_complaints_via_accused_officers(self):
pinned_officer_1 = OfficerFactory(id=1)
pinned_officer_2 = OfficerFactory(id=2)
pinned_officer_3 = OfficerFactory(id=3)
relevant_allegation_1 = AllegationFactory(crid='1', incident_date=datetime(2002, 2, 21, tzinfo=pytz.utc))
relevant_allegation_2 = AllegationFactory(crid='2', incident_date=datetime(2002, 2, 22, tzinfo=pytz.utc))
AllegationFactory(crid='not relevant')
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.officers.set([pinned_officer_1, pinned_officer_2, pinned_officer_3])
OfficerAllegationFactory(officer=pinned_officer_1, allegation=relevant_allegation_1)
OfficerAllegationFactory(officer=pinned_officer_2, allegation=relevant_allegation_2)
relevant_complaints = list(pinboard.relevant_complaints)
expect(relevant_complaints).to.have.length(2)
expect(relevant_complaints[0].crid).to.eq('2')
expect(relevant_complaints[1].crid).to.eq('1')
def test_relevant_complaints_filter_out_pinned_allegations(self):
pinned_officer_1 = OfficerFactory(id=1)
pinned_officer_2 = OfficerFactory(id=2)
pinned_allegation_1 = AllegationFactory(crid='1')
pinned_allegation_2 = AllegationFactory(crid='2')
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.officers.set([pinned_officer_1, pinned_officer_2])
pinboard.allegations.set([pinned_allegation_1, pinned_allegation_2])
OfficerAllegationFactory(officer=pinned_officer_1, allegation=pinned_allegation_1)
OfficerAllegationFactory(officer=pinned_officer_2, allegation=pinned_allegation_2)
AllegationFactory(crid='not relevant')
expect(list(pinboard.relevant_complaints)).to.have.length(0)
def test_relevant_complaints_via_investigator(self):
pinned_investigator_1 = OfficerFactory(id=1)
pinned_investigator_2 = OfficerFactory(id=2)
pinned_investigator_3 = OfficerFactory(id=3)
not_relevant_officer = OfficerFactory(id=999)
relevant_allegation_1 = AllegationFactory(crid='1', incident_date=datetime(2002, 2, 21, tzinfo=pytz.utc))
relevant_allegation_2 = AllegationFactory(crid='2', incident_date=datetime(2002, 2, 22, tzinfo=pytz.utc))
relevant_allegation_3 = AllegationFactory(crid='3', incident_date=datetime(2002, 2, 23, tzinfo=pytz.utc))
not_relevant_allegation = AllegationFactory(crid='999')
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.officers.set([pinned_investigator_1, pinned_investigator_2, pinned_investigator_3])
InvestigatorAllegationFactory(investigator__officer=pinned_investigator_1, allegation=relevant_allegation_1)
InvestigatorAllegationFactory(investigator__officer=pinned_investigator_2, allegation=relevant_allegation_2)
InvestigatorAllegationFactory(investigator__officer=pinned_investigator_3, allegation=relevant_allegation_3)
InvestigatorAllegationFactory(investigator__officer=not_relevant_officer, allegation=not_relevant_allegation)
relevant_complaints = list(pinboard.relevant_complaints)
expect(relevant_complaints).to.have.length(3)
expect(relevant_complaints[0].crid).to.eq('3')
expect(relevant_complaints[1].crid).to.eq('2')
expect(relevant_complaints[2].crid).to.eq('1')
def test_relevant_complaints_via_police_witnesses(self):
pinned_officer_1 = OfficerFactory(id=1)
pinned_officer_2 = OfficerFactory(id=2)
not_relevant_officer = OfficerFactory(id=999)
relevant_allegation_11 = AllegationFactory(crid='11', incident_date=datetime(2002, 2, 21, tzinfo=pytz.utc))
relevant_allegation_12 = AllegationFactory(crid='12', incident_date=datetime(2002, 2, 22, tzinfo=pytz.utc))
relevant_allegation_21 = AllegationFactory(crid='21', incident_date=datetime(2002, 2, 23, tzinfo=pytz.utc))
not_relevant_allegation = AllegationFactory(crid='999')
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.officers.set([pinned_officer_1, pinned_officer_2])
PoliceWitnessFactory(allegation=relevant_allegation_11, officer=pinned_officer_1)
PoliceWitnessFactory(allegation=relevant_allegation_12, officer=pinned_officer_1)
PoliceWitnessFactory(allegation=relevant_allegation_21, officer=pinned_officer_2)
PoliceWitnessFactory(allegation=not_relevant_allegation, officer=not_relevant_officer)
relevant_complaints = list(pinboard.relevant_complaints)
expect(relevant_complaints).to.have.length(3)
expect(relevant_complaints[0].crid).to.eq('21')
expect(relevant_complaints[1].crid).to.eq('12')
expect(relevant_complaints[2].crid).to.eq('11')
def test_relevant_complaints_order_officers(self):
pinned_officer_1 = OfficerFactory(id=1, allegation_count=3)
pinned_officer_2 = OfficerFactory(id=2)
pinned_officer_3 = OfficerFactory(id=3)
officer_4 = OfficerFactory(id=4, allegation_count=2)
officer_5 = OfficerFactory(id=5, allegation_count=4)
relevant_allegation_1 = AllegationFactory(crid='1', incident_date=datetime(2002, 2, 21, tzinfo=pytz.utc))
relevant_allegation_2 = AllegationFactory(crid='2', incident_date=datetime(2002, 2, 22, tzinfo=pytz.utc))
AllegationFactory(crid='not relevant')
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.officers.set([pinned_officer_1, pinned_officer_2, pinned_officer_3])
OfficerAllegationFactory(officer=pinned_officer_1, allegation=relevant_allegation_1)
OfficerAllegationFactory(officer=officer_4, allegation=relevant_allegation_1)
OfficerAllegationFactory(officer=officer_5, allegation=relevant_allegation_1)
OfficerAllegationFactory(officer=pinned_officer_2, allegation=relevant_allegation_2)
relevant_complaints = list(pinboard.relevant_complaints)
expect(relevant_complaints).to.have.length(2)
expect(relevant_complaints[0].crid).to.eq('2')
expect(relevant_complaints[0].prefetched_officer_allegations).to.have.length(1)
expect(relevant_complaints[0].prefetched_officer_allegations[0].officer.id).to.eq(2)
expect(relevant_complaints[1].crid).to.eq('1')
expect(relevant_complaints[1].prefetched_officer_allegations).to.have.length(3)
expect(relevant_complaints[1].prefetched_officer_allegations[0].officer.id).to.eq(5)
expect(relevant_complaints[1].prefetched_officer_allegations[1].officer.id).to.eq(1)
expect(relevant_complaints[1].prefetched_officer_allegations[2].officer.id).to.eq(4)
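    # The tests below exercise Pinboard.relevant_documents: attachments of type
    # 'document' with show=True on complaints connected to the pinned items,
    # ordered by the complaint's incident date, newest first.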
def test_relevant_documents_via_accused_officers(self):
pinned_officer_1 = OfficerFactory(id=1)
pinned_officer_2 = OfficerFactory(id=2)
pinned_officer_3 = OfficerFactory(id=3)
relevant_allegation_1 = AllegationFactory(crid='1', incident_date=datetime(2002, 2, 21, tzinfo=pytz.utc))
relevant_allegation_2 = AllegationFactory(crid='2', incident_date=datetime(2002, 2, 22, tzinfo=pytz.utc))
not_relevant_allegation = AllegationFactory(crid='not relevant')
relevant_document_1 = AttachmentFileFactory(
id=1, file_type='document', allegation=relevant_allegation_1, show=True
)
relevant_document_2 = AttachmentFileFactory(
id=2, file_type='document', allegation=relevant_allegation_2, show=True
)
AttachmentFileFactory(
id=998, file_type='document', title='relevant but not show', allegation=relevant_allegation_1, show=False
)
AttachmentFileFactory(
id=999, file_type='document', title='not relevant', allegation=not_relevant_allegation, show=True
)
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.officers.set([pinned_officer_1, pinned_officer_2, pinned_officer_3])
OfficerAllegationFactory(officer=pinned_officer_1, allegation=relevant_allegation_1)
OfficerAllegationFactory(officer=pinned_officer_2, allegation=relevant_allegation_2)
relevant_documents = list(pinboard.relevant_documents)
expect(relevant_documents).to.have.length(2)
expect(relevant_documents[0].id).to.eq(relevant_document_2.id)
expect(relevant_documents[0].allegation.crid).to.eq('2')
expect(relevant_documents[1].id).to.eq(relevant_document_1.id)
expect(relevant_documents[1].allegation.crid).to.eq('1')
def test_relevant_documents_with_pinned_allegations(self):
pinned_officer_1 = OfficerFactory(id=1)
pinned_officer_2 = OfficerFactory(id=2)
pinned_allegation_1 = AllegationFactory(crid='1', incident_date=datetime(2002, 2, 21, tzinfo=pytz.utc))
pinned_allegation_2 = AllegationFactory(crid='2', incident_date=datetime(2002, 2, 22, tzinfo=pytz.utc))
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.officers.set([pinned_officer_1, pinned_officer_2])
pinboard.allegations.set([pinned_allegation_1, pinned_allegation_2])
OfficerAllegationFactory(officer=pinned_officer_1, allegation=pinned_allegation_1)
OfficerAllegationFactory(officer=pinned_officer_2, allegation=pinned_allegation_2)
not_relevant_allegation = AllegationFactory(crid='not relevant')
relevant_document_1 = AttachmentFileFactory(
id=1, file_type='document', allegation=pinned_allegation_1, show=True
)
relevant_document_2 = AttachmentFileFactory(
id=2, file_type='document', allegation=pinned_allegation_2, show=True
)
AttachmentFileFactory(
id=998, file_type='document', title='relevant but not show', allegation=pinned_allegation_1, show=False
)
AttachmentFileFactory(
id=999, file_type='document', title='not relevant', allegation=not_relevant_allegation, show=True
)
relevant_documents = list(pinboard.relevant_documents)
expect(relevant_documents).to.have.length(2)
expect(relevant_documents[0].id).to.eq(relevant_document_2.id)
expect(relevant_documents[0].allegation.crid).to.eq('2')
expect(relevant_documents[1].id).to.eq(relevant_document_1.id)
expect(relevant_documents[1].allegation.crid).to.eq('1')
def test_relevant_documents_order_officers(self):
pinned_officer_1 = OfficerFactory(id=1, allegation_count=3)
pinned_officer_2 = OfficerFactory(id=2)
pinned_officer_3 = OfficerFactory(id=3)
officer_4 = OfficerFactory(id=4, allegation_count=2)
officer_5 = OfficerFactory(id=5, allegation_count=4)
relevant_allegation_1 = AllegationFactory(crid='1', incident_date=datetime(2002, 2, 21, tzinfo=pytz.utc))
relevant_allegation_2 = AllegationFactory(crid='2', incident_date=datetime(2002, 2, 22, tzinfo=pytz.utc))
not_relevant_allegation = AllegationFactory(crid='999')
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.officers.set([pinned_officer_1, pinned_officer_2, pinned_officer_3])
OfficerAllegationFactory(officer=pinned_officer_1, allegation=relevant_allegation_1)
OfficerAllegationFactory(officer=officer_4, allegation=relevant_allegation_1)
OfficerAllegationFactory(officer=officer_5, allegation=relevant_allegation_1)
OfficerAllegationFactory(officer=pinned_officer_2, allegation=relevant_allegation_2)
relevant_document_1 = AttachmentFileFactory(
id=1, file_type='document', allegation=relevant_allegation_1, show=True
)
relevant_document_2 = AttachmentFileFactory(
id=2, file_type='document', allegation=relevant_allegation_2, show=True
)
AttachmentFileFactory(
id=998, file_type='document', title='relevant but not show', allegation=relevant_allegation_1, show=False
)
AttachmentFileFactory(
id=999, file_type='document', title='not relevant', allegation=not_relevant_allegation, show=True
)
relevant_documents = list(pinboard.relevant_documents)
expect(relevant_documents).to.have.length(2)
expect(relevant_documents[0].id).to.eq(relevant_document_2.id)
expect(relevant_documents[0].allegation.crid).to.eq('2')
expect(relevant_documents[0].allegation.prefetched_officer_allegations).to.have.length(1)
expect(relevant_documents[0].allegation.prefetched_officer_allegations[0].officer.id).to.eq(2)
expect(relevant_documents[1].id).to.eq(relevant_document_1.id)
expect(relevant_documents[1].allegation.crid).to.eq('1')
expect(relevant_documents[1].allegation.prefetched_officer_allegations).to.have.length(3)
expect(relevant_documents[1].allegation.prefetched_officer_allegations[0].officer.id).to.eq(5)
expect(relevant_documents[1].allegation.prefetched_officer_allegations[1].officer.id).to.eq(1)
expect(relevant_documents[1].allegation.prefetched_officer_allegations[2].officer.id).to.eq(4)
def test_relevant_documents_not_showing_audios_and_videos(self):
pinned_officer_1 = OfficerFactory(id=1)
relevant_allegation = AllegationFactory(crid='1', incident_date=datetime(2002, 2, 21, tzinfo=pytz.utc))
OfficerAllegationFactory(officer=pinned_officer_1, allegation=relevant_allegation)
AttachmentFileFactory(file_type='document', allegation=relevant_allegation, show=True)
AttachmentFileFactory(file_type='document', allegation=relevant_allegation, show=True)
AttachmentFileFactory(file_type='audio', allegation=relevant_allegation, show=True)
AttachmentFileFactory(file_type='video', allegation=relevant_allegation, show=True)
pinboard = PinboardFactory(
title='Test pinboard',
description='Test description',
)
pinboard.officers.set([pinned_officer_1])
relevant_documents = list(pinboard.relevant_documents)
expect(relevant_documents).to.have.length(2)
expect(relevant_documents[0].file_type).to.eq('document')
expect(relevant_documents[1].file_type).to.eq('document')
| 48.923288 | 119 | 0.705886 | 3,953 | 35,714 | 6.097647 | 0.041993 | 0.043146 | 0.022652 | 0.011201 | 0.907899 | 0.879895 | 0.870768 | 0.856455 | 0.816752 | 0.794432 | 0 | 0.039349 | 0.19519 | 35,714 | 729 | 120 | 48.990398 | 0.799255 | 0 | 0 | 0.68254 | 0 | 0 | 0.042 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039683 | false | 0 | 0.011111 | 0 | 0.052381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0c28263b0e8837a9f7a76340b7ca27bba468b49b | 328 | py | Python | EventFilter/SiStripRawToDigi/python/ExcludedFEDListProducer_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | EventFilter/SiStripRawToDigi/python/ExcludedFEDListProducer_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | EventFilter/SiStripRawToDigi/python/ExcludedFEDListProducer_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
import EventFilter.SiStripRawToDigi.siStripExcludedFEDListProducer_cfi
SiStripExcludedFEDListProducer = EventFilter.SiStripRawToDigi.siStripExcludedFEDListProducer_cfi.siStripExcludedFEDListProducer.clone()
SiStripExcludedFEDListProducer.ProductLabel = cms.InputTag("rawDataCollector")
| 54.666667 | 135 | 0.905488 | 23 | 328 | 12.826087 | 0.608696 | 0.183051 | 0.386441 | 0.40678 | 0.610169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039634 | 328 | 5 | 136 | 65.6 | 0.936508 | 0 | 0 | 0 | 0 | 0 | 0.04878 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a73dee5fee50fb1575ac569836a612e429e4f061 | 20 | py | Python | softmock/__init__.py | web-trump/soft-mock | 88f5cb48c30499735e688d49a47682976094142f | [
"MIT"
] | 4 | 2021-02-08T07:35:57.000Z | 2021-02-09T09:55:04.000Z | softmock/__init__.py | web-trump/softmock | 88f5cb48c30499735e688d49a47682976094142f | [
"MIT"
] | null | null | null | softmock/__init__.py | web-trump/softmock | 88f5cb48c30499735e688d49a47682976094142f | [
"MIT"
] | null | null | null | from . import entry
| 10 | 19 | 0.75 | 3 | 20 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 20 | 1 | 20 | 20 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a79b7d192561d1a71ff87bb568c50661b986bf91 | 180 | py | Python | src/test/datascience/serverConfigFiles/remotePassword.py | ChaseKnowlden/vscode-jupyter | 9bdaf87f0b6dcd717c508e9023350499a6093f97 | [
"MIT"
] | 615 | 2020-11-11T22:55:28.000Z | 2022-03-30T21:48:08.000Z | src/test/datascience/serverConfigFiles/remotePassword.py | ChaseKnowlden/vscode-jupyter | 9bdaf87f0b6dcd717c508e9023350499a6093f97 | [
"MIT"
] | 8,428 | 2020-11-11T22:06:43.000Z | 2022-03-31T23:42:34.000Z | src/test/datascience/serverConfigFiles/remotePassword.py | ChaseKnowlden/vscode-jupyter | 9bdaf87f0b6dcd717c508e9023350499a6093f97 | [
"MIT"
] | 158 | 2020-11-12T07:49:02.000Z | 2022-03-27T20:50:20.000Z | c.NotebookApp.ip = "*"
c.NotebookApp.port = 9799
c.NotebookApp.open_browser = False
# Python
c.NotebookApp.password = "sha1:74182e119a7b:e1b98bbba98f9ada3fd714eda9652437e80082e2"
| 25.714286 | 85 | 0.805556 | 19 | 180 | 7.578947 | 0.684211 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.224242 | 0.083333 | 180 | 6 | 86 | 30 | 0.648485 | 0.033333 | 0 | 0 | 0 | 0 | 0.345029 | 0.339181 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
ac28081dff4321dc988a4b497d58eb45d8addd62 | 138 | py | Python | robots/robot1/config.py | memristor/mep2 | bc5cddacba3d740f791f3454b8cb51bda83ce202 | [
"MIT"
] | 5 | 2018-11-27T15:15:00.000Z | 2022-02-10T21:44:13.000Z | robots/robot1/config.py | memristor/mep2 | bc5cddacba3d740f791f3454b8cb51bda83ce202 | [
"MIT"
] | 2 | 2018-10-20T15:48:40.000Z | 2018-11-20T05:11:33.000Z | robots/robot1/config.py | memristor/mep2 | bc5cddacba3d740f791f3454b8cb51bda83ce202 | [
"MIT"
] | 1 | 2020-02-07T12:44:47.000Z | 2020-02-07T12:44:47.000Z | from modules.default_config import motion, collision_wait_suspend, pathfind
motion.can.iface='can0'
collision_wait_suspend.wait_time = 20
| 34.5 | 75 | 0.855072 | 20 | 138 | 5.6 | 0.75 | 0.232143 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023438 | 0.072464 | 138 | 3 | 76 | 46 | 0.851563 | 0 | 0 | 0 | 0 | 0 | 0.028986 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
3ba9748486ce8180842dbac181dc355d5a707717 | 2,029 | py | Python | Plot_Graph.py | evanxia1018/CSE539_Project_LLE | 881ea2278c39c16716e5de83dd8abbd267806a35 | [
"MIT"
] | 2 | 2019-11-10T02:04:52.000Z | 2020-04-19T03:51:51.000Z | Plot_Graph.py | SarahLynnePu/CSE539_Project_LLE | 881ea2278c39c16716e5de83dd8abbd267806a35 | [
"MIT"
] | null | null | null | Plot_Graph.py | SarahLynnePu/CSE539_Project_LLE | 881ea2278c39c16716e5de83dd8abbd267806a35 | [
"MIT"
] | 3 | 2017-12-28T14:09:24.000Z | 2020-04-19T04:25:03.000Z | from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
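# Plotting helpers for visualizing sample data (e.g. LLE embeddings) in 3D, 2D
# and 1D; the *_color variants split points by a binary label column.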
def plot3D(sample_list):
sample_x = [x[0] for x in sample_list]
sample_y = [x[1] for x in sample_list]
sample_z = [x[2] for x in sample_list]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(sample_x, sample_y, sample_z)
ax.set_xlabel('X Label')
    ax.set_ylabel('Y Label')
    ax.set_zlabel('Z Label')
def plot3D_color(data, labels):
data = np.array(data)
labels = np.array(labels)
x1 = data[labels[:, 0] == 1, 0]
y1 = data[labels[:, 0] == 1, 1]
z1 = data[labels[:, 0] == 1, 2]
x0 = data[labels[:, 0] == 0, 0]
y0 = data[labels[:, 0] == 0, 1]
z0 = data[labels[:, 0] == 0, 2]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x1, y1, z1, s=3)
ax.scatter(x0, y0, z0, s=3)
ax.set_xlabel('X Label')
    ax.set_ylabel('Y Label')
    ax.set_zlabel('Z Label')
plt.show()
def plot2D_color(data, labels):
data = np.array(data)
labels = np.array(labels)
x1 = data[labels[:, 0] == 1, 0]
y1 = data[labels[:, 0] == 1, 1]
x0 = data[labels[:, 0] == 0, 0]
y0 = data[labels[:, 0] == 0, 1]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x1, y1, s=3)
ax.scatter(x0, y0, s=3)
ax.set_xlabel('X Label')
    ax.set_ylabel('Y Label')
plt.show()
def plot1D_color(data, labels):
data = np.array(data)
labels = np.array(labels)
x1 = data[labels[:, 0] == 1, 0]
x0 = data[labels[:, 0] == 0, 0]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x1, 0, s=3)
ax.scatter(x0, 0, s=3)
ax.set_xlabel('X Label')
plt.show()
def plot2D(sample_list):
sample_x = [x[0] for x in sample_list]
sample_y = [x[1] for x in sample_list]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(sample_x, sample_y)
ax.set_xlabel('X Label')
    ax.set_ylabel('Y Label')
| 27.794521 | 46 | 0.587974 | 342 | 2,029 | 3.380117 | 0.149123 | 0.155709 | 0.114187 | 0.083045 | 0.865052 | 0.826125 | 0.782007 | 0.765571 | 0.765571 | 0.765571 | 0 | 0.064392 | 0.234598 | 2,029 | 72 | 47 | 28.180556 | 0.679974 | 0 | 0 | 0.698413 | 0 | 0 | 0.042878 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.079365 | false | 0 | 0.047619 | 0 | 0.126984 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3bc300f4118d1a352e007d76f8da405f45d7b10d | 1,027 | py | Python | tests/test_small_hessian_eigenvalue.py | elsandal/pyclesperanto_prototype | 7bda828813b86b44b63d73d5e8f466d9769cded1 | [
"BSD-3-Clause"
] | 2 | 2020-07-01T06:20:44.000Z | 2020-07-01T09:36:48.000Z | tests/test_small_hessian_eigenvalue.py | elsandal/pyclesperanto_prototype | 7bda828813b86b44b63d73d5e8f466d9769cded1 | [
"BSD-3-Clause"
] | null | null | null | tests/test_small_hessian_eigenvalue.py | elsandal/pyclesperanto_prototype | 7bda828813b86b44b63d73d5e8f466d9769cded1 | [
"BSD-3-Clause"
] | 1 | 2020-06-29T18:40:54.000Z | 2020-06-29T18:40:54.000Z | import pyclesperanto_prototype as cle
import numpy as np
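# Each test compares cle.small_hessian_eigenvalue against a hand-written expected
# image: per pixel, the smaller eigenvalue of the local Hessian matrix.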
def test_small_hessian_eigenvalue_2d():
test = np.asarray([
[1, -1],
[1, -1]
])
reference_small_hessian_eigenvalue = np.asarray([
[-2, 0],
[-2, 0]
])
small_hessian_eigenvalue = cle.small_hessian_eigenvalue(test)
print(small_hessian_eigenvalue)
assert np.allclose(reference_small_hessian_eigenvalue, small_hessian_eigenvalue)
def test_small_hessian_eigenvalue_3d():
test = np.asarray([
[
[1, -1],
[1, -1]
], [
[2, -2],
[2, -2]
],
])
reference_small_hessian_eigenvalue = np.asarray([
[
[-2.1, -1.1],
[-2.1, -1.1]
], [
[-4.1, 0],
[-4.1, 0]
],
])
small_hessian_eigenvalue = cle.small_hessian_eigenvalue(test)
print(small_hessian_eigenvalue)
assert np.allclose(reference_small_hessian_eigenvalue, small_hessian_eigenvalue, atol=0.1)
| 21.851064 | 94 | 0.567673 | 116 | 1,027 | 4.706897 | 0.206897 | 0.307692 | 0.564103 | 0.227106 | 0.860806 | 0.754579 | 0.754579 | 0.542125 | 0.542125 | 0.542125 | 0 | 0.047887 | 0.308666 | 1,027 | 46 | 95 | 22.326087 | 0.721127 | 0 | 0 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 1 | 0.055556 | false | 0 | 0.055556 | 0 | 0.111111 | 0.055556 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ce33ff9441f5106fdde9d924a5f6568d2eae9b8f | 4,570 | py | Python | lib/ad/protocol/test/test_ldapfilter.py | sfu-rcg/python-ad | a4b80d73130a0b12cc17fecbfca73d995d074d32 | [
"MIT"
] | 9 | 2015-03-28T16:05:01.000Z | 2021-05-15T02:53:44.000Z | lib/ad/protocol/test/test_ldapfilter.py | sfu-rcg/python-ad | a4b80d73130a0b12cc17fecbfca73d995d074d32 | [
"MIT"
] | null | null | null | lib/ad/protocol/test/test_ldapfilter.py | sfu-rcg/python-ad | a4b80d73130a0b12cc17fecbfca73d995d074d32 | [
"MIT"
] | 7 | 2015-04-14T13:04:52.000Z | 2021-02-24T12:50:05.000Z | #
# This file is part of Python-AD. Python-AD is free software that is made
# available under the MIT license. Consult the file "LICENSE" that is
# distributed together with this file for the exact licensing terms.
#
# Python-AD is copyright (c) 2007 by the Python-AD authors. See the file
# "AUTHORS" for a complete overview.
from nose.tools import assert_raises
from ad.protocol import ldapfilter
class TestLDAPFilterParser(object):
"""Test suite for ad.protocol.ldapfilter."""
def test_equals(self):
filt = '(type=value)'
parser = ldapfilter.Parser()
res = parser.parse(filt)
assert isinstance(res, ldapfilter.EQUALS)
assert res.type == 'type'
assert res.value == 'value'
def test_and(self):
filt = '(&(type=value)(type2=value2))'
parser = ldapfilter.Parser()
res = parser.parse(filt)
assert isinstance(res, ldapfilter.AND)
assert len(res.terms) == 2
assert isinstance(res.terms[0], ldapfilter.EQUALS)
assert isinstance(res.terms[1], ldapfilter.EQUALS)
def test_and_multi_term(self):
filt = '(&(type=value)(type2=value2)(type3=value3))'
parser = ldapfilter.Parser()
res = parser.parse(filt)
assert isinstance(res, ldapfilter.AND)
assert len(res.terms) == 3
assert isinstance(res.terms[0], ldapfilter.EQUALS)
assert isinstance(res.terms[1], ldapfilter.EQUALS)
assert isinstance(res.terms[2], ldapfilter.EQUALS)
def test_or(self):
filt = '(|(type=value)(type2=value2))'
parser = ldapfilter.Parser()
res = parser.parse(filt)
assert isinstance(res, ldapfilter.OR)
assert len(res.terms) == 2
assert isinstance(res.terms[0], ldapfilter.EQUALS)
assert isinstance(res.terms[1], ldapfilter.EQUALS)
def test_or_multi_term(self):
filt = '(|(type=value)(type2=value2)(type3=value3))'
parser = ldapfilter.Parser()
res = parser.parse(filt)
assert isinstance(res, ldapfilter.OR)
assert len(res.terms) == 3
assert isinstance(res.terms[0], ldapfilter.EQUALS)
assert isinstance(res.terms[1], ldapfilter.EQUALS)
assert isinstance(res.terms[2], ldapfilter.EQUALS)
def test_not(self):
filt = '(!(type=value))'
parser = ldapfilter.Parser()
res = parser.parse(filt)
assert isinstance(res, ldapfilter.NOT)
assert isinstance(res.term, ldapfilter.EQUALS)
def test_lte(self):
filt = '(type<=value)'
parser = ldapfilter.Parser()
res = parser.parse(filt)
assert isinstance(res, ldapfilter.LTE)
assert res.type == 'type'
assert res.value == 'value'
def test_gte(self):
filt = '(type>=value)'
parser = ldapfilter.Parser()
res = parser.parse(filt)
assert isinstance(res, ldapfilter.GTE)
assert res.type == 'type'
assert res.value == 'value'
def test_approx(self):
filt = '(type~=value)'
parser = ldapfilter.Parser()
res = parser.parse(filt)
assert isinstance(res, ldapfilter.APPROX)
assert res.type == 'type'
assert res.value == 'value'
def test_present(self):
filt = '(type=*)'
parser = ldapfilter.Parser()
res = parser.parse(filt)
assert isinstance(res, ldapfilter.PRESENT)
assert res.type == 'type'
def test_escape(self):
filt = r'(type=\5c\00\2a)'
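        # \5c, \00 and \2a are the escaped forms of '\', NUL and '*' in an LDAP filter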
parser = ldapfilter.Parser()
res = parser.parse(filt)
assert res.value == '\\\x00*'
def test_error_incomplete_term(self):
parser = ldapfilter.Parser()
filt = '('
assert_raises(ldapfilter.Error, parser.parse, filt)
filt = '(type'
assert_raises(ldapfilter.Error, parser.parse, filt)
filt = '(type='
assert_raises(ldapfilter.Error, parser.parse, filt)
filt = '(type=)'
assert_raises(ldapfilter.Error, parser.parse, filt)
def test_error_not_multi_term(self):
parser = ldapfilter.Parser()
filt = '(!(type=value)(type2=value2))'
assert_raises(ldapfilter.Error, parser.parse, filt)
def test_error_illegal_operator(self):
parser = ldapfilter.Parser()
filt = '($(type=value)(type2=value2))'
assert_raises(ldapfilter.Error, parser.parse, filt)
def test_error_illegal_character(self):
parser = ldapfilter.Parser()
filt = '(type=val*e)'
assert_raises(ldapfilter.Error, parser.parse, filt)
| 34.885496 | 73 | 0.624508 | 540 | 4,570 | 5.218519 | 0.161111 | 0.119234 | 0.14159 | 0.097587 | 0.783889 | 0.78247 | 0.75692 | 0.742016 | 0.725692 | 0.725692 | 0 | 0.011675 | 0.250328 | 4,570 | 130 | 74 | 35.153846 | 0.810858 | 0.077024 | 0 | 0.576923 | 0 | 0 | 0.08797 | 0.048027 | 0 | 0 | 0 | 0 | 0.413462 | 1 | 0.144231 | false | 0 | 0.019231 | 0 | 0.173077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0212812ec45bb338ea2b0a68a2fefaa4836c519e | 2,008 | py | Python | mentor/userform/dbtests.py | JarettSutula/GoatBoat | 53dd978048c6757b537e5988793b29a5ef1b7b52 | [
"MIT"
] | null | null | null | mentor/userform/dbtests.py | JarettSutula/GoatBoat | 53dd978048c6757b537e5988793b29a5ef1b7b52 | [
"MIT"
] | null | null | null | mentor/userform/dbtests.py | JarettSutula/GoatBoat | 53dd978048c6757b537e5988793b29a5ef1b7b52 | [
"MIT"
] | null | null | null | from django.test import TestCase
import os
import unittest
import pymongo
from dotenv import load_dotenv
import certifi
# Create your tests here.
class TestDatabaseMethods(unittest.TestCase):
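    """Smoke tests for the MongoDB Atlas connection built from the .env credentials."""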
def test_dbconnect(self):
ca = certifi.where()
# load the .env file in local directories for DB access.
load_dotenv()
DB_USERNAME = os.getenv('DB_USERNAME')
DB_PASSWORD = os.getenv('DB_PASSWORD')
connection_string = "mongodb+srv://"+DB_USERNAME+":"+DB_PASSWORD+"@gb-mentoring-cluster.jhwgr.mongodb.net/?retryWrites=true&w=majority"
client = pymongo.MongoClient(connection_string, tlsCAfile = ca)
db_handle = client.get_database('gbmDB')
self.assertEqual(db_handle.list_collection_names()[0], 'users')
def test_dbcount(self):
ca = certifi.where()
# load the .env file in local directories for DB access.
load_dotenv()
DB_USERNAME = os.getenv('DB_USERNAME')
DB_PASSWORD = os.getenv('DB_PASSWORD')
connection_string = "mongodb+srv://"+DB_USERNAME+":"+DB_PASSWORD+"@gb-mentoring-cluster.jhwgr.mongodb.net/?retryWrites=true&w=majority"
client = pymongo.MongoClient(connection_string, tlsCAfile = ca)
db_handle = client.get_database('gbmDB')
db_collection = db_handle.users
self.assertNotEqual(db_collection.estimated_document_count(), 0)
def connect_db(self):
ca = certifi.where()
# load the .env file in local directories for DB access.
load_dotenv()
DB_USERNAME = os.getenv('DB_USERNAME')
DB_PASSWORD = os.getenv('DB_PASSWORD')
connection_string = "mongodb+srv://"+DB_USERNAME+":"+DB_PASSWORD+"@gb-mentoring-cluster.jhwgr.mongodb.net/?retryWrites=true&w=majority"
client = pymongo.MongoClient(connection_string, tlsCAfile = ca)
db_handle = client.get_database('gbmDB')
return db_handle
if __name__ == '__main__':
unittest.main()
| 31.873016 | 143 | 0.673307 | 242 | 2,008 | 5.363636 | 0.289256 | 0.069337 | 0.046225 | 0.09245 | 0.718798 | 0.718798 | 0.718798 | 0.718798 | 0.718798 | 0.718798 | 0 | 0.001269 | 0.215139 | 2,008 | 62 | 144 | 32.387097 | 0.822335 | 0.093626 | 0 | 0.567568 | 0 | 0 | 0.189085 | 0.112459 | 0 | 0 | 0 | 0 | 0.054054 | 1 | 0.081081 | false | 0.162162 | 0.162162 | 0 | 0.297297 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
021569aefbdfb0e80abb739c0353d8327f38176b | 13,394 | py | Python | tests/handlers/test_embed.py | byte-power/redash | a12fe525cc2043bbe89e2717b336bb6ab4eae11c | [
"BSD-2-Clause"
] | 1 | 2022-01-18T01:23:03.000Z | 2022-01-18T01:23:03.000Z | tests/handlers/test_embed.py | byte-power/polydash | 9d63424a2c82a7199471fc3b42816b24a193037f | [
"BSD-2-Clause"
] | 1 | 2021-03-02T02:01:42.000Z | 2021-03-02T02:01:42.000Z | tests/handlers/test_embed.py | byte-power/redash | a12fe525cc2043bbe89e2717b336bb6ab4eae11c | [
"BSD-2-Clause"
] | 1 | 2021-03-19T12:23:50.000Z | 2021-03-19T12:23:50.000Z | import time
from tests import BaseTestCase
from redash.models import db
from redash import settings
from redash.authentication import get_embed_signature, encode_params
class TestUnembedables(BaseTestCase):
def test_not_embedable(self):
query = self.factory.create_query()
res = self.make_request("get", "/api/queries/{0}".format(query.id))
self.assertEqual(res.status_code, 200)
self.assertIn("frame-ancestors 'none'", res.headers["Content-Security-Policy"])
self.assertEqual(res.headers["X-Frame-Options"], "deny")
class TestEmbedVisualization(BaseTestCase):
    def test_success(self):
vis = self.factory.create_visualization()
vis.query_rel.latest_query_data = self.factory.create_query_result()
db.session.add(vis.query_rel)
res = self.make_request(
"get",
"/embed/query/{}/visualization/{}".format(vis.query_rel.id, vis.id),
is_json=False,
)
self.assertEqual(res.status_code, 200)
self.assertIn("frame-ancestors *", res.headers["Content-Security-Policy"])
self.assertNotIn("X-Frame-Options", res.headers)
class TestUnembedablesDashboard(BaseTestCase):
def test_not_embedable(self):
dashboard = self.factory.create_dashboard()
res = self.make_request("get", "/api/dashboards/embed/{0}".format(dashboard.id))
self.assertEqual(res.status_code, 200)
self.assertIn("frame-ancestors 'none'", res.headers["Content-Security-Policy"])
self.assertEqual(res.headers["X-Frame-Options"], "deny")
class TestEmbedDashboardList(BaseTestCase):
def setUp(self):
super(TestEmbedDashboardList, self).setUp()
def test_get_success(self):
admin = self.factory.create_admin()
dashboard1 = self.factory.create_dashboard(is_archived=True, is_draft=True)
dashboard2 = self.factory.create_dashboard(is_archived=True, is_draft=False)
dashboard3 = self.factory.create_dashboard(is_archived=False, is_draft=False)
dashboard4 = self.factory.create_dashboard(is_archived=False, is_draft=False)
rv = self.make_request("get", "/api/dashboards/embed", user=admin)
assert len(rv.json["results"]) == 2
def test_get_not_admin(self):
dashboard1 = self.factory.create_dashboard(is_archived=True, is_draft=True)
dashboard2 = self.factory.create_dashboard(is_archived=True, is_draft=False)
dashboard3 = self.factory.create_dashboard(is_archived=False, is_draft=False)
rv = self.make_request("get", "/api/dashboards/embed")
self.assertEqual(rv.status_code, 403)
class TestEmbedDashboard(BaseTestCase):
def setUp(self):
super(TestEmbedDashboard, self).setUp()
self.dashboard = self.factory.create_dashboard()
self.application = self.factory.create_application()
db.session.flush()
self.basic_url = "http://localhost/{}".format(self.factory.org.slug)
self.embed_url = "/embed/dashboard/{}".format(self.dashboard.id)
def test_not_add_dashboard_to_application(self):
timestamp = int(time.time())
params = {
"secret_key": self.application.secret_key,
"timestamp": str(timestamp),
}
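        # Build the signed embed URL: encode the query params, sign the full URL
        # (base URL + path + params) with the application's secret token and the
        # timestamp, then append the signature as an extra query argument.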
s = encode_params(params)
url = "?".join([self.embed_url, s])
signature = get_embed_signature(self.application.secret_token, self.basic_url+url, timestamp)
path = "{}&signature={}".format(url, signature)
print(path)
res = self.make_request(
"get",
path,
user=False,
is_json=False,
)
self.assertEqual(res.status_code, 403)
#
# This is a bad way to write these tests, but the way Flask works doesn't make it easy to write them properly...
#
def test_success(self):
self.factory.create_application_dashboard(application_id=self.application.id, dashboard_id=self.dashboard.id)
timestamp = int(time.time())
params = {
"secret_key": self.application.secret_key,
"timestamp": str(timestamp),
"max_age": "3600",
"p_countries": "['us', 'ke', 'en']",
"p_type": "游戏",
"p_time": "['2021年01月01日', '2022年12月31日']",
}
s = encode_params(params)
url = "?".join([self.embed_url, s])
signature = get_embed_signature(self.application.secret_token, self.basic_url+url, timestamp)
path = "{}&signature={}".format(url, signature)
res = self.make_request(
"get",
path,
user=False,
is_json=False,
)
self.assertEqual(res.status_code, 200)
self.assertIn("frame-ancestors *", res.headers["Content-Security-Policy"])
self.assertNotIn("X-Frame-Options", res.headers)
def test_works_for_logged_in_user(self):
application = self.factory.create_application(name="test_works_for_logged_in_user")
self.factory.create_application_dashboard(application_id=application.id, dashboard_id=self.dashboard.id)
timestamp = int(time.time())
params = {
"secret_key": application.secret_key,
"timestamp": str(timestamp),
}
s = encode_params(params)
url = "?".join([self.embed_url, s])
signature = get_embed_signature(application.secret_token, self.basic_url+url, timestamp)
path = "{}&signature={}".format(url, signature)
res = self.make_request(
"get",
path,
is_json=False,
)
self.assertEqual(res.status_code, 200)
    def test_bad_secret_token(self):
        application = self.factory.create_application(name="test_bad_secret_token")
self.factory.create_application_dashboard(application_id=application.id, dashboard_id=self.dashboard.id)
timestamp = int(time.time())
params = {
"secret_key": application.secret_key,
"timestamp": str(timestamp),
}
s = encode_params(params)
url = "?".join([self.embed_url, s])
signature = get_embed_signature("bad_api_serect", self.basic_url+url, timestamp)
path = "{}&signature={}".format(url, signature)
res = self.make_request(
"get",
path,
is_json=False,
)
self.assertEqual(res.status_code, 401)
    def test_deactivated_application(self):
        application = self.factory.create_application(name="test_deactivated_application", active=False)
self.factory.create_application_dashboard(application_id=application.id, dashboard_id=self.dashboard.id)
timestamp = int(time.time())
params = {
"secret_key": application.secret_key,
"timestamp": str(timestamp),
}
s = encode_params(params)
url = "?".join([self.embed_url, s])
signature = get_embed_signature(application.secret_token, self.basic_url+url, timestamp)
path = "{}&signature={}".format(url, signature)
res = self.make_request(
"get",
path,
is_json=False,
)
self.assertEqual(res.status_code, 401)
def test_expired_timestamp(self):
application = self.factory.create_application(name="test_expired_timestamp")
self.factory.create_application_dashboard(application_id=application.id, dashboard_id=self.dashboard.id)
timestamp = int(time.time())
timestamp = timestamp - 10 - 1
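        # The allowed signature age appears to be 10 seconds in this setup; the -(10 + 1)
        # and +(10 + 1) second offsets used in this test push the timestamp just outside
        # that window in both directions, so both requests should be rejected
        # (an assumption inferred from the offsets; the actual limit is a server-side setting).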
params = {
"secret_key": application.secret_key,
"timestamp": str(timestamp),
}
s = encode_params(params)
url = "?".join([self.embed_url, s])
signature = get_embed_signature(application.secret_token, self.basic_url+url, timestamp)
path = "{}&signature={}".format(url, signature)
res = self.make_request(
"get",
path,
is_json=False,
)
self.assertEqual(res.status_code, 401)
timestamp = int(time.time())
timestamp = timestamp + 10 + 1
params = {
"secret_key": application.secret_key,
"timestamp": str(timestamp),
}
s = encode_params(params)
url = "?".join([self.embed_url, s])
signature = get_embed_signature(application.secret_token, self.basic_url+url, timestamp)
path = "{}&signature={}".format(url, signature)
res = self.make_request(
"get",
path,
is_json=False,
)
self.assertEqual(res.status_code, 401)
def test_no_timestamp(self):
application = self.factory.create_application(name="test_no_timestamp")
self.factory.create_application_dashboard(application_id=application.id, dashboard_id=self.dashboard.id)
timestamp = int(time.time())
params = {
"secret_key": application.secret_key,
}
s = encode_params(params)
url = "?".join([self.embed_url, s])
signature = get_embed_signature(application.secret_token, self.basic_url+url, timestamp)
path = "{}&signature={}".format(url, signature)
res = self.make_request(
"get",
path,
is_json=False,
)
self.assertEqual(res.status_code, 401)
def test_no_signature(self):
application = self.factory.create_application(name="test_no_signature")
self.factory.create_application_dashboard(application_id=application.id, dashboard_id=self.dashboard.id)
timestamp = int(time.time())
params = {
"secret_key": application.secret_key,
"timestamp": str(timestamp),
}
s = encode_params(params)
url = "?".join([self.embed_url, s])
res = self.make_request(
"get",
url,
is_json=False,
)
self.assertEqual(res.status_code, 401)
# TODO: this should be applied to the new API endpoint
class TestPublicDashboard(BaseTestCase):
def test_success(self):
dashboard = self.factory.create_dashboard()
api_key = self.factory.create_api_key(object=dashboard)
res = self.make_request(
"get",
"/public/dashboards/{}".format(api_key.api_key),
user=False,
is_json=False,
)
self.assertEqual(res.status_code, 200)
self.assertIn("frame-ancestors *", res.headers["Content-Security-Policy"])
self.assertNotIn("X-Frame-Options", res.headers)
def test_works_for_logged_in_user(self):
dashboard = self.factory.create_dashboard()
api_key = self.factory.create_api_key(object=dashboard)
res = self.make_request(
"get", "/public/dashboards/{}".format(api_key.api_key), is_json=False
)
self.assertEqual(res.status_code, 200)
def test_bad_token(self):
res = self.make_request(
"get", "/public/dashboards/bad-token", user=False, is_json=False
)
self.assertEqual(res.status_code, 302)
def test_inactive_token(self):
dashboard = self.factory.create_dashboard()
api_key = self.factory.create_api_key(object=dashboard, active=False)
res = self.make_request(
"get",
"/public/dashboards/{}".format(api_key.api_key),
user=False,
is_json=False,
)
self.assertEqual(res.status_code, 302)
# Not relevant for now, as tokens in api_keys table are only created for dashboards. Once this changes, we should
# add this test.
# def test_token_doesnt_belong_to_dashboard(self):
# pass
class TestAPIPublicDashboard(BaseTestCase):
def test_success(self):
dashboard = self.factory.create_dashboard()
api_key = self.factory.create_api_key(object=dashboard)
res = self.make_request(
"get",
"/api/dashboards/public/{}".format(api_key.api_key),
user=False,
is_json=False,
)
self.assertEqual(res.status_code, 200)
self.assertIn("frame-ancestors *", res.headers["Content-Security-Policy"])
self.assertNotIn("X-Frame-Options", res.headers)
def test_works_for_logged_in_user(self):
dashboard = self.factory.create_dashboard()
api_key = self.factory.create_api_key(object=dashboard)
res = self.make_request(
"get", "/api/dashboards/public/{}".format(api_key.api_key), is_json=False
)
self.assertEqual(res.status_code, 200)
def test_bad_token(self):
res = self.make_request(
"get", "/api/dashboards/public/bad-token", user=False, is_json=False
)
self.assertEqual(res.status_code, 404)
def test_inactive_token(self):
dashboard = self.factory.create_dashboard()
api_key = self.factory.create_api_key(object=dashboard, active=False)
res = self.make_request(
"get",
"/api/dashboards/public/{}".format(api_key.api_key),
user=False,
is_json=False,
)
self.assertEqual(res.status_code, 404)
# Not relevant for now, as tokens in api_keys table are only created for dashboards. Once this changes, we should
# add this test.
# def test_token_doesnt_belong_to_dashboard(self):
# pass | 38.268571 | 117 | 0.632149 | 1,541 | 13,394 | 5.28488 | 0.112265 | 0.054028 | 0.08141 | 0.048625 | 0.849337 | 0.833988 | 0.809185 | 0.798011 | 0.772102 | 0.745457 | 0 | 0.009827 | 0.247872 | 13,394 | 350 | 118 | 38.268571 | 0.79859 | 0.039794 | 0 | 0.694158 | 0 | 0 | 0.100553 | 0.041404 | 0 | 0 | 0 | 0.002857 | 0.116838 | 1 | 0.079038 | false | 0 | 0.017182 | 0 | 0.120275 | 0.003436 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
65f0dfc179ffee87cc586e5c320937ee560df45f | 17,430 | py | Python | redbird/test/common/test_common.py | Miksus/red-base | 4c272e8cb2325b51f6293f608a773e011b1d05da | [
"MIT"
] | null | null | null | redbird/test/common/test_common.py | Miksus/red-base | 4c272e8cb2325b51f6293f608a773e011b1d05da | [
"MIT"
] | null | null | null | redbird/test/common/test_common.py | Miksus/red-base | 4c272e8cb2325b51f6293f608a773e011b1d05da | [
"MIT"
] | null | null | null |
import configparser
from typing import Optional
import re, os
import json
from dotenv import load_dotenv
import pytest
import responses
import requests
from redbird.repos.rest import RESTRepo
from redbird.repos.sqlalchemy import SQLRepo
from redbird.repos.memory import MemoryRepo
from redbird.repos.mongo import MongoRepo
from redbird.oper import greater_equal, greater_than, less_equal, less_than, not_equal
from pydantic import BaseModel, Field
TEST_CASES = [
pytest.param("memory"),
pytest.param("memory-dict"),
pytest.param("sql-dict"),
pytest.param("sql-pydantic"),
pytest.param("sql-orm"),
pytest.param("sql-pydantic-orm"),
pytest.param("mongo"),
pytest.param("mongo-mock"),
pytest.param("http-rest"),
]
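# TEST_CASES feeds the `repo` fixture through indirect parametrization, so every test in
# the classes below runs once per backend (memory, SQL, Mongo, REST). The fixture itself
# lives in conftest.py (not shown); a rough sketch of what it is assumed to do:
#
#     @pytest.fixture
#     def repo(request):
#         if request.param == "memory":
#             return MemoryRepo(model=Item)          # hypothetical model
#         elif request.param.startswith("sql"):
#             return SQLRepo(engine=..., table="items")
#         ...                                        # and so on for mongo / http-rest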
# ------------------------
# ACTUAL TESTS
# ------------------------
class RepoTests:
def populate(self, repo, items=None):
if items is None:
items = [
dict(id="a", name="Jack", age=20),
dict(id="b", name="John", age=30),
dict(id="c", name="James", age=30),
dict(id="d", name="Johnny", age=30),
dict(id="e", name="Jesse", age=40),
]
for d in items:
item = repo.to_item(d)
repo.add(item)
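# RepoTests is a mixin: populate() seeds the same five known items into whichever
# backend the `repo` fixture provides, so the assertions below can compare exact results.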
@pytest.mark.parametrize(
'repo',
TEST_CASES,
indirect=True
)
class TestAPI:
def test_session(self, repo):
repo.session.remove()
def test_has_attributes(self, repo):
# Test given attrs are found
attrs = [
"session", "model",
"add", "insert", "update", "upsert", "delete",
"filter_by",
]
for attr in attrs:
getattr(repo, attr)
def test_has_attributes_result(self, repo):
attrs = [
"query_", "repo", "query",
"first", "all", "limit", "last",
"update", "delete",
"count",
]
filter = repo.filter_by()
for attr in attrs:
getattr(filter, attr)
@pytest.mark.parametrize(
'repo',
TEST_CASES,
indirect=True
)
class TestPopulated(RepoTests):
def populate(self, repo, items=None):
if items is None:
items = [
dict(id="a", name="Jack", age=20),
dict(id="b", name="John", age=30),
dict(id="c", name="James", age=30),
dict(id="d", name="Johnny", age=30),
dict(id="e", name="Jesse", age=40),
]
for d in items:
item = repo.to_item(d)
repo.add(item)
def test_filter_by_first(self, repo):
self.populate(repo)
Item = repo.model
assert repo.filter_by(age=30).first() == Item(id="b", name="John", age=30)
def test_filter_by_last(self, repo):
self.populate(repo)
Item = repo.model
assert repo.filter_by(age=30).last() == Item(id="d", name="Johnny", age=30)
def test_filter_by_limit(self, repo):
self.populate(repo)
Item = repo.model
assert repo.filter_by(age=30).limit(2) == [
Item(id="b", name="John", age=30),
Item(id="c", name="James", age=30),
]
def test_filter_by_all(self, repo):
self.populate(repo)
Item = repo.model
assert repo.filter_by(age=30).all() == [
Item(id="b", name="John", age=30),
Item(id="c", name="James", age=30),
Item(id="d", name="Johnny", age=30),
]
def test_filter_by_update(self, repo):
self.populate(repo)
Item = repo.model
repo.filter_by(age=30).update(name="Something")
assert repo.filter_by().all() == [
Item(id="a", name="Jack", age=20),
Item(id="b", name="Something", age=30),
Item(id="c", name="Something", age=30),
Item(id="d", name="Something", age=30),
Item(id="e", name="Jesse", age=40),
]
def test_filter_by_replace(self, repo):
self.populate(repo)
Item = repo.model
repo.filter_by(id="b").replace(id="b", name="Something")
if repo.model == dict and not isinstance(repo, SQLRepo):
expected = [
Item(id="a", name="Jack", age=20),
Item(id="c", name="James", age=30),
Item(id="d", name="Johnny", age=30),
Item(id="e", name="Jesse", age=40),
Item(id="b", name="Something"),
]
else:
expected = [
Item(id="a", name="Jack", age=20),
Item(id="c", name="James", age=30),
Item(id="d", name="Johnny", age=30),
Item(id="e", name="Jesse", age=40),
Item(id="b", name="Something", age=None),
]
assert repo.filter_by().all() == expected
def test_filter_by_replace_dict(self, repo):
self.populate(repo)
Item = repo.model
repo.filter_by(id="b").replace({"id": "b", "name": "Something"})
if repo.model == dict and not isinstance(repo, SQLRepo):
expected = [
Item(id="a", name="Jack", age=20),
Item(id="c", name="James", age=30),
Item(id="d", name="Johnny", age=30),
Item(id="e", name="Jesse", age=40),
Item(id="b", name="Something"),
]
else:
expected = [
Item(id="a", name="Jack", age=20),
Item(id="c", name="James", age=30),
Item(id="d", name="Johnny", age=30),
Item(id="e", name="Jesse", age=40),
Item(id="b", name="Something", age=None),
]
assert repo.filter_by().all() == expected
def test_filter_by_delete(self, repo):
self.populate(repo)
Item = repo.model
repo.filter_by(age=30).delete()
assert repo.filter_by().all() == [
Item(id="a", name="Jack", age=20),
#Item(id="b", name="John", age=30),
#Item(id="c", name="James", age=30),
#Item(id="d", name="Johnny", age=30),
Item(id="e", name="Jesse", age=40),
]
def test_filter_by_count(self, repo):
self.populate(repo)
Item = repo.model
assert repo.filter_by(age=30).count() == 3
def test_filter_by_less_than(self, repo):
self.populate(repo)
if isinstance(repo, RESTRepo):
pytest.xfail("RESTRepo does not support operations (yet)")
Item = repo.model
assert repo.filter_by(age=less_than(40)).all() == [
Item(id="a", name="Jack", age=20),
Item(id="b", name="John", age=30),
Item(id="c", name="James", age=30),
Item(id="d", name="Johnny", age=30),
#Item(id="e", name="Jesse", age=40),
]
def test_getitem(self, repo):
self.populate(repo)
Item = repo.model
assert repo["b"] == Item(id="b", name="John", age=30)
def test_getitem_missing(self, repo):
self.populate(repo)
with pytest.raises(KeyError):
repo["not_found"]
def test_delitem(self, repo):
self.populate(repo)
Item = repo.model
del repo["b"]
assert repo.filter_by().all() == [
Item(id="a", name="Jack", age=20),
#Item(id="b", name="John", age=30),
Item(id="c", name="James", age=30),
Item(id="d", name="Johnny", age=30),
Item(id="e", name="Jesse", age=40),
]
@pytest.mark.skip(reason="This is a minor issue")
def test_delitem_missing(self, repo):
self.populate(repo)
with pytest.raises(KeyError):
del repo["not_found"]
def test_setitem(self, repo):
self.populate(repo)
Item = repo.model
repo["d"] = {"name": "Johnny boy"}
assert repo.filter_by().all() == [
Item(id="a", name="Jack", age=20),
Item(id="b", name="John", age=30),
Item(id="c", name="James", age=30),
Item(id="d", name="Johnny boy", age=30),
Item(id="e", name="Jesse", age=40),
]
@pytest.mark.skip(reason="This is a minor issue")
def test_setitem_missing(self, repo):
self.populate(repo)
with pytest.raises(KeyError):
repo["not_found"] = {"name": "something"}
@pytest.mark.parametrize(
'repo',
TEST_CASES,
indirect=True
)
class TestFilteringOperations(RepoTests):
def test_greater_than(self, repo):
self.populate(repo)
if isinstance(repo, RESTRepo):
pytest.xfail("RESTRepo does not support operations (yet)")
Item = repo.model
assert repo.filter_by(age=greater_than(20)).all() == [
#Item(id="a", name="Jack", age=20),
Item(id="b", name="John", age=30),
Item(id="c", name="James", age=30),
Item(id="d", name="Johnny", age=30),
Item(id="e", name="Jesse", age=40),
]
def test_less_than(self, repo):
self.populate(repo)
if isinstance(repo, RESTRepo):
pytest.xfail("RESTRepo does not support operations (yet)")
Item = repo.model
assert repo.filter_by(age=less_than(30)).all() == [
Item(id="a", name="Jack", age=20),
#Item(id="b", name="John", age=30),
#Item(id="c", name="James", age=30),
#Item(id="d", name="Johnny", age=30),
#Item(id="e", name="Jesse", age=40),
]
def test_greater_equal(self, repo):
self.populate(repo)
if isinstance(repo, RESTRepo):
pytest.xfail("RESTRepo does not support operations (yet)")
Item = repo.model
assert repo.filter_by(age=greater_equal(30)).all() == [
#Item(id="a", name="Jack", age=20),
Item(id="b", name="John", age=30),
Item(id="c", name="James", age=30),
Item(id="d", name="Johnny", age=30),
Item(id="e", name="Jesse", age=40),
]
def test_less_equal(self, repo):
self.populate(repo)
if isinstance(repo, RESTRepo):
pytest.xfail("RESTRepo does not support operations (yet)")
Item = repo.model
assert repo.filter_by(age=less_equal(30)).all() == [
Item(id="a", name="Jack", age=20),
Item(id="b", name="John", age=30),
Item(id="c", name="James", age=30),
Item(id="d", name="Johnny", age=30),
#Item(id="e", name="Jesse", age=40),
]
def test_not_equal(self, repo):
self.populate(repo)
if isinstance(repo, RESTRepo):
pytest.xfail("RESTRepo does not support operations (yet)")
Item = repo.model
assert repo.filter_by(age=not_equal(30)).all() == [
Item(id="a", name="Jack", age=20),
#Item(id="b", name="John", age=30),
#Item(id="c", name="James", age=30),
#Item(id="d", name="Johnny", age=30),
Item(id="e", name="Jesse", age=40),
]
@pytest.mark.parametrize(
'repo',
TEST_CASES,
indirect=True
)
class TestEmpty:
def test_add(self, repo):
Item = repo.model
assert repo.filter_by().all() == []
repo.add(Item(id="a", name="Jack", age=20))
assert repo.filter_by().all() == [Item(id="a", name="Jack", age=20)]
repo.add(Item(id="b", name="John", age=30))
assert repo.filter_by().all() == [
Item(id="a", name="Jack", age=20),
Item(id="b", name="John", age=30)
]
def test_update(self, repo):
Item = repo.model
repo.add(Item(id="a", name="Jack", age=20))
repo.add(Item(id="b", name="John", age=30))
repo.update(Item(id="a", name="Max"))
items = repo.filter_by().all()
assert items == [
Item(id="a", name="Max", age=20),
Item(id="b", name="John", age=30),
]
def test_replace(self, repo):
Item = repo.model
repo.add(Item(id="a", name="Jack", age=20))
repo.add(Item(id="b", name="John", age=30))
repo.replace(Item(id="a", name="Max"))
if repo.model == dict and not isinstance(repo, SQLRepo):
expected = [
Item(id="a", name="Max"),
Item(id="b", name="John", age=30),
]
else:
expected = [
Item(id="a", name="Max", age=None),
Item(id="b", name="John", age=30),
]
items = sorted(repo.filter_by().all(), key=lambda x: repo.get_field_value(x, "id"))
assert items == expected
def test_add_exist(self, repo):
Item = repo.model
repo.add(Item(id="a", name="Jack", age=20))
with pytest.raises(Exception):
repo.add(Item(id="a", name="John", age=30))
def test_add_exist_ignore(self, repo):
Item = repo.model
repo.add(Item(id="a", name="Jack", age=20), if_exists="ignore")
repo.add(Item(id="a", name="John", age=30), if_exists="ignore")
assert repo.filter_by().all() == [
Item(id="a", name="Jack", age=20),
]
def test_add_exist_update(self, repo):
Item = repo.model
repo.add(Item(id="a", name="Jack", age=20), if_exists="update")
assert repo.filter_by().all() == [
Item(id="a", name="Jack", age=20),
]
repo.add(Item(id="a", name="John", age=30), if_exists="update")
assert repo.filter_by().all() == [
Item(id="a", name="John", age=30),
]
def test_get_by(self, repo):
Item = repo.model
repo.add(Item(id="a", name="Max", age=20))
repo.add(Item(id="b", name="John", age=30))
assert list(repo) == [
Item(id="a", name="Max", age=20),
Item(id="b", name="John", age=30),
]
repo.get_by("a").update(age=40)
assert list(repo) == [
Item(id="a", name="Max", age=40),
Item(id="b", name="John", age=30),
]
repo.get_by("b").delete()
assert list(repo) == [
Item(id="a", name="Max", age=40),
]
@pytest.mark.parametrize(
'repo',
TEST_CASES,
indirect=True
)
class TestMalformedData(RepoTests):
def test_filter_raise(self, repo):
class MalformedItem(BaseModel):
id: str
name: int
age: int
self.populate(repo, [
{"id": "a", "name": 1, "age": 20},
{"id": "b", "name": "Jack", "age": 20},
{"id": "c", "name": "James", "age": 30},
{"id": "d", "name": 2, "age": 30},
{"id": "e", "name": 3, "age": 30},
])
repo.model = MalformedItem
with pytest.raises(ValueError):
repo.filter_by().all()
def test_filter_warn(self, repo):
class MalformedItem(BaseModel):
id: str
name: int
age: int
self.populate(repo, [
{"id": "a", "name": 1, "age": 20},
{"id": "b", "name": "Jack", "age": 20},
{"id": "c", "name": "James", "age": 30},
{"id": "d", "name": 2, "age": 30},
{"id": "e", "name": 3, "age": 30},
])
repo.model = MalformedItem
repo.errors_query = "warn"
with pytest.warns(UserWarning):
repo.filter_by().all()
def test_filter_discard(self, repo):
class MalformedItem(BaseModel):
id: str
name: int
age: int
self.populate(repo, [
{"id": "a", "name": 1, "age": 20},
{"id": "b", "name": "Jack", "age": 20},
{"id": "c", "name": "James", "age": 30},
{"id": "d", "name": 2, "age": 30},
{"id": "e", "name": 3, "age": 30},
])
repo.model = MalformedItem
repo.errors_query = "discard"
Item = MalformedItem
assert repo.filter_by().all() == [
Item(id="a", name=1, age=20),
Item(id="d", name=2, age=30),
Item(id="e", name=3, age=30),
]
def test_validate_success(self, repo):
self.populate(repo, [
{"id": "a", "name": 1, "age": 20},
{"id": "b", "name": "Jack", "age": 30},
{"id": "c", "name": "James", "age": 40},
])
repo.filter_by().validate()
def test_validate_fail(self, repo):
class MalformedItem(BaseModel):
id: str
name: int
age: int
self.populate(repo, [
{"id": "a", "name": 1, "age": 20},
{"id": "b", "name": "Jack", "age": 30},
{"id": "c", "name": "James", "age": 40},
])
repo.model = MalformedItem
with pytest.raises(ValueError):
repo.filter_by().validate()
def test_validate_fail_max_shown(self, repo):
class MalformedItem(BaseModel):
id: str
name: int
age: int
self.populate(repo, [
{"id": "a", "name": 1, "age": 20},
{"id": "b", "name": "Jack", "age": 30},
{"id": "c", "name": "James", "age": 40},
])
repo.model = MalformedItem
with pytest.raises(ValueError):
repo.filter_by().validate(max_shown=1) | 30.578947 | 91 | 0.500287 | 2,199 | 17,430 | 3.894498 | 0.073215 | 0.079869 | 0.037599 | 0.053947 | 0.82263 | 0.812938 | 0.795423 | 0.778491 | 0.763779 | 0.721158 | 0 | 0.027365 | 0.316523 | 17,430 | 570 | 92 | 30.578947 | 0.691513 | 0.035055 | 0 | 0.622768 | 0 | 0 | 0.091672 | 0 | 0 | 0 | 0 | 0 | 0.066964 | 1 | 0.087054 | false | 0 | 0.03125 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
65fc5ec8893c123148e8b0b3c1755499ac3c8ebb | 26 | py | Python | src/query/__init__.py | josemanuel179/practica3-python | 1580769d971ce945c304211face9cef8cf0debd0 | [
"Apache-2.0"
] | null | null | null | src/query/__init__.py | josemanuel179/practica3-python | 1580769d971ce945c304211face9cef8cf0debd0 | [
"Apache-2.0"
] | null | null | null | src/query/__init__.py | josemanuel179/practica3-python | 1580769d971ce945c304211face9cef8cf0debd0 | [
"Apache-2.0"
] | null | null | null | from .query import Query
| 13 | 25 | 0.769231 | 4 | 26 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192308 | 26 | 1 | 26 | 26 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5a124ba2069248bd2d3ac1dc15b59a3f15760c02 | 287 | py | Python | bsdploy/tests/test_templates.py | Mattlk13/bsdploy | 9a677da925b59a0178d2e0d6f85bb4c86a3eb399 | [
"BSD-3-Clause"
] | 152 | 2015-01-07T17:54:48.000Z | 2022-02-25T15:47:14.000Z | bsdploy/tests/test_templates.py | Mattlk13/bsdploy | 9a677da925b59a0178d2e0d6f85bb4c86a3eb399 | [
"BSD-3-Clause"
] | 71 | 2015-01-02T12:48:11.000Z | 2020-05-28T10:36:10.000Z | bsdploy/tests/test_templates.py | Mattlk13/bsdploy | 9a677da925b59a0178d2e0d6f85bb4c86a3eb399 | [
"BSD-3-Clause"
] | 25 | 2015-01-07T14:00:36.000Z | 2020-07-29T16:28:54.000Z | def test_dhcp_host_dhclient_exit_hooks():
pass
def test_jails_host_pf_conf():
pass
def test_jails_host_ezjail_conf():
pass
def test_jails_host_freebsd_conf():
pass
def test_zfs_auto_snapshot_010_zfs_snapshot():
pass
def test_jails_host_pkg_conf():
pass
| 12.478261 | 46 | 0.756098 | 45 | 287 | 4.222222 | 0.4 | 0.221053 | 0.289474 | 0.336842 | 0.463158 | 0.252632 | 0 | 0 | 0 | 0 | 0 | 0.012766 | 0.181185 | 287 | 22 | 47 | 13.045455 | 0.795745 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
5a1d29d33d08c196b9ed764a6a8f613682603ce6 | 3,914 | py | Python | Backend/notifications.py | sahilb2/SelfImprovementWebApp | 2ba232c4e150efb4ef41689ed049d29293e2086e | [
"MIT"
] | 1 | 2021-02-24T04:59:03.000Z | 2021-02-24T04:59:03.000Z | Backend/notifications.py | sahilb2/SelfImprovementWebApp | 2ba232c4e150efb4ef41689ed049d29293e2086e | [
"MIT"
] | null | null | null | Backend/notifications.py | sahilb2/SelfImprovementWebApp | 2ba232c4e150efb4ef41689ed049d29293e2086e | [
"MIT"
] | 1 | 2018-12-09T00:53:21.000Z | 2018-12-09T00:53:21.000Z | import sqlite3
def insertFit(username, ex, message, day, hour, minute):
dbase = sqlite3.connect("Notifications.db")
act = dbase.cursor()
row = (username, ex, message, day, hour, minute)
act.execute("INSERT INTO Fitness (user, ex1, message1, day1, hour1, min1) VALUES (?,?,?,?,?,?)", row)
dbase.commit()
dbase.close()
return True
def insertRem(username, rem, message, day, hour, minute):
dbase = sqlite3.connect("Notifications.db")
act = dbase.cursor()
row = (username, rem, message, day, hour, minute)
act.execute("INSERT INTO Reminders (user, rem1, message1, day1, hour1, min1) VALUES (?,?,?,?,?,?)", row)
dbase.commit()
dbase.close()
return True
# Given a user, return all of that user's reminders from Notifications.db as a {reminder: message} dictionary.
def returnAllReminders(username):
dbase = sqlite3.connect("Notifications.db")
act = dbase.cursor()
row = act.execute("SELECT * FROM Reminders where user = ?", (format(username),))
dic = {}
for obj in row:
dic[obj[1]] = obj[2]
dbase.commit()
dbase.close()
return dic
#print(returnAllReminders("brian"))
def deleteRem(username, rem):
dbase = sqlite3.connect("Notifications.db")
act = dbase.cursor()
info = (rem, username)
act.execute("DELETE FROM Reminders WHERE rem1 = ? AND user = ?", info)
dbase.commit()
dbase.close()
return True
def updateRem(username, rem, day, hour, minute):
dbase = sqlite3.connect("Notifications.db")
act = dbase.cursor()
change = (day, hour, minute)
row = (day, hour, minute, username, rem)
act.execute("UPDATE Reminders SET day1 = ?, hour1 = ?, min1 = ? WHERE user = ? AND rem1 = ?", row)
dbase.commit()
dbase.close()
return True
def deleteFit(username, ex):
dbase = sqlite3.connect("Notifications.db")
act = dbase.cursor()
info = (ex, username)
act.execute("DELETE FROM Fitness WHERE ex1 = ? AND user = ?", info)
dbase.commit()
dbase.close()
return True
def updateFit(username, rem, day, hour, minute):
dbase = sqlite3.connect("Notifications.db")
act = dbase.cursor()
change = (day, hour, minute)
row = (day, hour, minute, username, rem)
act.execute("UPDATE Fitness SET day1 = ?, hour1 = ?, min1 = ? WHERE user = ? AND ex1 = ?", row)
dbase.commit()
dbase.close()
return True
def getDayRem(username, rem):
dbase = sqlite3.connect("Notifications.db")
act = dbase.cursor()
change = (username, rem)
day = act.execute("SELECT * from Reminders where user = ? and rem1 = ?", change)
for row in day:
x = row[2]
dbase.commit()
dbase.close()
return x
def getHourRem(username, rem):
dbase = sqlite3.connect("Notifications.db")
act = dbase.cursor()
change = (username, rem)
day = act.execute("SELECT * from Reminders where user = ? and rem1 = ?", change)
for row in day:
x = row[3]
dbase.commit()
dbase.close()
return x
def getMinRem(username, rem):
dbase = sqlite3.connect("Notifications.db")
act = dbase.cursor()
change = (username, rem)
day = act.execute("SELECT * from Reminders where user = ? and rem1 = ?", change)
for row in day:
x = row[4]
dbase.commit()
dbase.close()
return x
def getDayFit(username, rem):
dbase = sqlite3.connect("Notifications.db")
act = dbase.cursor()
change = (username, rem)
day = act.execute("SELECT * from Fitness where user = ? and rem1 = ?", change)
for row in day:
x = row[2]
dbase.commit()
dbase.close()
return x
def getHourFit(username, rem):
dbase = sqlite3.connect("Notifications.db")
act = dbase.cursor()
change = (username, rem)
    hour = act.execute("SELECT * from Fitness where user = ? and ex1 = ?", change)
for row in hour:
x = row[3]
dbase.commit()
dbase.close()
return x
def getMinFit(username, rem):
dbase = sqlite3.connect("Notifications.db")
act = dbase.cursor()
change = (username, rem)
    minute = act.execute("SELECT * from Fitness where user = ? and ex1 = ?", change)
for row in minute:
x = row[4]
dbase.commit()
dbase.close()
return x
#print(getHourRem("brian", "pushups"))
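
# The six getDay/getHour/getMin helpers above differ only in the table, key column and
# row index they use. A possible consolidation (a sketch only, not called by the code
# above; column positions are assumed from the INSERT statements at the top of this file):
def _get_field(table, key_column, row_index, username, key):
    dbase = sqlite3.connect("Notifications.db")
    act = dbase.cursor()
    # Table and column names cannot be bound as parameters, so they are formatted in;
    # the user-supplied values stay parameterized.
    query = "SELECT * FROM {} WHERE user = ? AND {} = ?".format(table, key_column)
    value = None
    for row in act.execute(query, (username, key)):
        value = row[row_index]
    dbase.close()
    return value

# e.g. getHourRem(user, rem) would become _get_field("Reminders", "rem1", 3, user, rem)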
| 27.56338 | 105 | 0.678079 | 526 | 3,914 | 5.045627 | 0.142586 | 0.078749 | 0.093067 | 0.156745 | 0.831198 | 0.810098 | 0.789751 | 0.753203 | 0.694047 | 0.630369 | 0 | 0.014674 | 0.164282 | 3,914 | 141 | 106 | 27.758865 | 0.796698 | 0.041645 | 0 | 0.719008 | 0 | 0 | 0.25587 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107438 | false | 0 | 0.008264 | 0 | 0.223141 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5a60ce1504537d8bdb27460a2868ae94d4f52afb | 138 | py | Python | metametrics/__init__.py | donsheehy/metametrics | 592f8617e9947ab190cd1b1872d30b7bbc933280 | [
"MIT"
] | 1 | 2021-10-04T18:24:38.000Z | 2021-10-04T18:24:38.000Z | metametrics/__init__.py | donsheehy/metametrics | 592f8617e9947ab190cd1b1872d30b7bbc933280 | [
"MIT"
] | null | null | null | metametrics/__init__.py | donsheehy/metametrics | 592f8617e9947ab190cd1b1872d30b7bbc933280 | [
"MIT"
] | null | null | null | # metametrics/__init__.py
from metametrics.point import Point
from metametrics.metric_functions import naiveHD,greedyHD,calcGreedy,l_inf
| 27.6 | 74 | 0.862319 | 18 | 138 | 6.277778 | 0.722222 | 0.265487 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07971 | 138 | 4 | 75 | 34.5 | 0.889764 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5a88d569a41f15c0ab0f52745b4936aa727dfe1a | 327 | py | Python | arrays/tests/test_merge_sorted_array.py | ahcode0919/python-ds-algorithms | 0d617b78c50b6c18da40d9fa101438749bfc82e1 | [
"MIT"
] | null | null | null | arrays/tests/test_merge_sorted_array.py | ahcode0919/python-ds-algorithms | 0d617b78c50b6c18da40d9fa101438749bfc82e1 | [
"MIT"
] | null | null | null | arrays/tests/test_merge_sorted_array.py | ahcode0919/python-ds-algorithms | 0d617b78c50b6c18da40d9fa101438749bfc82e1 | [
"MIT"
] | 3 | 2020-10-07T20:24:45.000Z | 2020-12-16T04:53:19.000Z | from arrays.merge_sorted_array import merge_sorted_array
def test_merge_sorted_array():
assert merge_sorted_array([2, 0], 1, [1], 1) == [1, 2]
assert merge_sorted_array([1, 2, 3, 0, 0, 0], 3, [2, 4, 5], 3) == [1, 2, 2, 3, 4, 5]
assert merge_sorted_array([4, 5, 6, 0, 0, 0], 3, [1, 2, 3], 3) == [1, 2, 3, 4, 5, 6]
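    # Signature inferred from the calls above: merge_sorted_array(nums1, m, nums2, n)
    # merges the first n elements of nums2 into nums1 (whose first m entries are valid
    # and whose trailing zeros are placeholders) and returns the merged, sorted list.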
| 40.875 | 88 | 0.584098 | 67 | 327 | 2.656716 | 0.238806 | 0.370787 | 0.539326 | 0.370787 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157692 | 0.204893 | 327 | 7 | 89 | 46.714286 | 0.526923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.6 | 1 | 0.2 | true | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5a8ad8ee76a2d92cc6e0085f4052b5282a1e63b5 | 5,281 | py | Python | services/configuration/migrations/0012_auto_20151125_0031.py | rtubio/server | 3bb15f4d4dcd543d6f95d1fda2cb737de0bb9a9b | [
"Apache-2.0"
] | 4 | 2015-03-23T16:34:53.000Z | 2017-12-12T11:41:54.000Z | services/configuration/migrations/0012_auto_20151125_0031.py | rtubio/server | 3bb15f4d4dcd543d6f95d1fda2cb737de0bb9a9b | [
"Apache-2.0"
] | 42 | 2015-01-08T22:21:04.000Z | 2021-12-13T19:48:44.000Z | services/configuration/migrations/0012_auto_20151125_0031.py | rtubio/server | 3bb15f4d4dcd543d6f95d1fda2cb737de0bb9a9b | [
"Apache-2.0"
] | 2 | 2015-04-04T15:23:35.000Z | 2017-07-23T23:14:06.000Z | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
('configuration', '0011_auto_20151125_0016'),
]
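    # Every operation below adds a nullable DateTimeField (defaulting to timezone.now)
    # that stores a per-day start or end time for the once/daily/weekly availability rules.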
operations = [
migrations.AddField(
model_name='availabilityruledaily',
name='ending_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='Ending time for the rule'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruledaily',
name='starting_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='Starting time for the rule'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleonce',
name='ending_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='Ending time for the rule'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleonce',
name='starting_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='Starting time for the rule'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='f_e_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='End time on Friday'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='f_s_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='Start time on Friday'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='m_e_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='End time on Monday'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='m_s_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='Start time on Monday'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='r_e_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='End time on Thursday'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='r_s_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='Start time on Thursday'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='s_e_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='End time on Saturday'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='s_s_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='Start time on Saturday'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='t_e_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='End time on Tuesday'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='t_s_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='Start time on Tuesday'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='w_e_time',
            field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='End time on Wednesday'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='w_s_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='Start time on Wednesday'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='x_e_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='End time on Sunday'),
preserve_default=True,
),
migrations.AddField(
model_name='availabilityruleweekly',
name='x_s_time',
field=models.DateTimeField(default=django.utils.timezone.now, null=True, verbose_name='Start time on Sunday'),
preserve_default=True,
),
]
| 42.58871 | 128 | 0.628101 | 533 | 5,281 | 6.046904 | 0.114447 | 0.064846 | 0.112007 | 0.150791 | 0.941359 | 0.941359 | 0.910332 | 0.910332 | 0.910332 | 0.910332 | 0 | 0.004393 | 0.267184 | 5,281 | 123 | 129 | 42.934959 | 0.828424 | 0.003977 | 0 | 0.683761 | 0 | 0 | 0.18353 | 0.07094 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025641 | 0 | 0.051282 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5a8b51fad47ca6bc74ac1661bae152e4e9dc4fc5 | 130 | py | Python | chainerui/tasks/__init__.py | Bartzi/chainerui | b6d4cc9f570a440a80c536f2c7c1ba596d2a45ed | [
"MIT"
] | 185 | 2017-12-15T09:24:07.000Z | 2022-01-20T11:20:13.000Z | chainerui/tasks/__init__.py | yutiansut/chainerui | 058f00ec2b9a8a6ccd4f2b2f2bfc80869b771ea1 | [
"MIT"
] | 191 | 2017-12-15T09:14:52.000Z | 2022-02-17T14:09:19.000Z | chainerui/tasks/__init__.py | yutiansut/chainerui | 058f00ec2b9a8a6ccd4f2b2f2bfc80869b771ea1 | [
"MIT"
] | 29 | 2017-12-15T09:40:45.000Z | 2022-03-13T11:21:11.000Z | from chainerui.tasks.collect_results import collect_results # NOQA
from chainerui.tasks.crawl_result import crawl_result # NOQA
| 43.333333 | 67 | 0.846154 | 18 | 130 | 5.888889 | 0.5 | 0.245283 | 0.339623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107692 | 130 | 2 | 68 | 65 | 0.913793 | 0.069231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ce55939d7a8974eb19c445008f3a13630e8d9983 | 185 | py | Python | tweet/forms.py | amandalynnes/twitterclone | 28638c178e329c597e8848ff42f46aedfa2d0712 | [
"MIT"
] | null | null | null | tweet/forms.py | amandalynnes/twitterclone | 28638c178e329c597e8848ff42f46aedfa2d0712 | [
"MIT"
] | null | null | null | tweet/forms.py | amandalynnes/twitterclone | 28638c178e329c597e8848ff42f46aedfa2d0712 | [
"MIT"
] | 1 | 2021-06-30T03:09:03.000Z | 2021-06-30T03:09:03.000Z | from django import forms
from twitteruser.models import TwitterUser
class TweetForm(forms.Form):
title = forms.CharField(max_length=40)
body = forms.CharField(max_length=140)
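
# Typical usage in a view might look like the following (illustrative only; the actual
# view is not part of this module):
#     form = TweetForm(request.POST)
#     if form.is_valid():
#         title = form.cleaned_data['title']
#         body = form.cleaned_data['body']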
| 23.125 | 42 | 0.778378 | 25 | 185 | 5.68 | 0.64 | 0.197183 | 0.239437 | 0.323944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031447 | 0.140541 | 185 | 7 | 43 | 26.428571 | 0.861635 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ce852764f73b503a7ff34edd6363c9c0510b1e17 | 45 | py | Python | app/utils/misc/__init__.py | Arkosh744/telega-bot-mongo | 3a24a3b571a383b1bca3518455f95f6ee899464e | [
"MIT"
] | 18 | 2021-11-06T15:47:13.000Z | 2022-03-22T20:29:53.000Z | app/utils/misc/__init__.py | im-koala/HairRecolorBot | dddcd556236a624b5dcd6ddbf3f15602b9edfbcf | [
"MIT"
] | 4 | 2022-02-07T19:06:03.000Z | 2022-02-12T09:17:24.000Z | app/utils/misc/__init__.py | im-koala/HairRecolorBot | dddcd556236a624b5dcd6ddbf3f15602b9edfbcf | [
"MIT"
] | 7 | 2021-11-06T15:48:26.000Z | 2022-03-22T20:29:56.000Z | from .throttling_decorator import rate_limit
| 22.5 | 44 | 0.888889 | 6 | 45 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cec9df6d776f3b47ea49c1ad8708c5f22edb3304 | 1,977 | py | Python | tests/test_plotting.py | pevisscher/camelot | d93423e0dc78b64e3f2881714c91709d9241d163 | [
"MIT"
] | 1 | 2021-03-11T14:50:46.000Z | 2021-03-11T14:50:46.000Z | tests/test_plotting.py | pevisscher/camelot | d93423e0dc78b64e3f2881714c91709d9241d163 | [
"MIT"
] | null | null | null | tests/test_plotting.py | pevisscher/camelot | d93423e0dc78b64e3f2881714c91709d9241d163 | [
"MIT"
] | 2 | 2020-08-03T16:55:31.000Z | 2021-03-29T12:19:52.000Z | # -*- coding: utf-8 -*-
import os
import pytest
import camelot
testdir = os.path.dirname(os.path.abspath(__file__))
testdir = os.path.join(testdir, "files")
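# Each test below returns a matplotlib figure produced by camelot.plot(); the
# mpl_image_compare marker (from pytest-mpl) compares it against the stored baseline
# image in files/baseline_plots, with remove_text=True stripping text from the figure
# before the comparison.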
@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_text_plot():
filename = os.path.join(testdir, "foo.pdf")
tables = camelot.read_pdf(filename)
return camelot.plot(tables[0], kind="text")
@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_grid_plot():
filename = os.path.join(testdir, "foo.pdf")
tables = camelot.read_pdf(filename)
return camelot.plot(tables[0], kind="grid")
@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_lattice_contour_plot():
filename = os.path.join(testdir, "foo.pdf")
tables = camelot.read_pdf(filename)
return camelot.plot(tables[0], kind="contour")
@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_stream_contour_plot():
filename = os.path.join(testdir, "tabula/12s0324.pdf")
tables = camelot.read_pdf(filename, flavor="stream")
return camelot.plot(tables[0], kind="contour")
@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_line_plot():
filename = os.path.join(testdir, "foo.pdf")
tables = camelot.read_pdf(filename)
return camelot.plot(tables[0], kind="line")
@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_joint_plot():
filename = os.path.join(testdir, "foo.pdf")
tables = camelot.read_pdf(filename)
return camelot.plot(tables[0], kind="joint")
@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_textedge_plot():
filename = os.path.join(testdir, "tabula/12s0324.pdf")
tables = camelot.read_pdf(filename, flavor="stream")
return camelot.plot(tables[0], kind="textedge")
| 32.409836 | 85 | 0.742539 | 283 | 1,977 | 4.968198 | 0.159011 | 0.042674 | 0.056899 | 0.096728 | 0.877667 | 0.877667 | 0.877667 | 0.86771 | 0.86771 | 0.86771 | 0 | 0.01139 | 0.111786 | 1,977 | 60 | 86 | 32.95 | 0.789294 | 0.010622 | 0 | 0.575 | 0 | 0 | 0.136643 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.175 | false | 0 | 0.075 | 0 | 0.425 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0c8926284f73bd402854448e1cc9af0e56af1e78 | 196 | py | Python | flask_wiki/views.py | gcavalcante8808/flask-wiki | a2c0af2e7fa6ce64faeb38e678a2e207ff63f3a6 | [
"BSD-2-Clause"
] | null | null | null | flask_wiki/views.py | gcavalcante8808/flask-wiki | a2c0af2e7fa6ce64faeb38e678a2e207ff63f3a6 | [
"BSD-2-Clause"
] | 35 | 2015-10-08T21:00:22.000Z | 2021-06-25T15:29:41.000Z | flask_wiki/views.py | gcavalcante8808/flask-wiki | a2c0af2e7fa6ce64faeb38e678a2e207ff63f3a6 | [
"BSD-2-Clause"
] | 1 | 2019-07-09T14:17:48.000Z | 2019-07-09T14:17:48.000Z | from flask_restful import Resource, Api
class HomeView(Resource):
def get(self, page_id):
return {'page_id': page_id}
def post(self, page_id):
return {'page_id': page_id} | 24.5 | 39 | 0.663265 | 29 | 196 | 4.241379 | 0.517241 | 0.292683 | 0.162602 | 0.260163 | 0.455285 | 0.455285 | 0.455285 | 0.455285 | 0 | 0 | 0 | 0 | 0.22449 | 196 | 8 | 40 | 24.5 | 0.809211 | 0 | 0 | 0.333333 | 0 | 0 | 0.071066 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |