hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
523f9700ed13c3bae87c8fbe0de30f129cc31497 | 45 | py | Python | server/util/__init__.py | jjojala/results | bfcf6820ff4b2dd05d8974bc98b0a59bc6c3585f | [
"Apache-2.0"
] | null | null | null | server/util/__init__.py | jjojala/results | bfcf6820ff4b2dd05d8974bc98b0a59bc6c3585f | [
"Apache-2.0"
] | 7 | 2015-11-25T22:26:25.000Z | 2016-10-18T22:14:35.000Z | server/util/__init__.py | jjojala/results | bfcf6820ff4b2dd05d8974bc98b0a59bc6c3585f | [
"Apache-2.0"
] | null | null | null | from .patch import patch, diff, PatchConflict | 45 | 45 | 0.822222 | 6 | 45 | 6.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 45 | 1 | 45 | 45 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
87797c782ed9e6a7b609e077090ffd33cb3f5483 | 134 | py | Python | Game/lib/interface/__init__.py | brunnossanttos/Game.JackPot | 323cbdaedbf3032dee438a37342bfa18a2f98c82 | [
"MIT"
] | null | null | null | Game/lib/interface/__init__.py | brunnossanttos/Game.JackPot | 323cbdaedbf3032dee438a37342bfa18a2f98c82 | [
"MIT"
] | null | null | null | Game/lib/interface/__init__.py | brunnossanttos/Game.JackPot | 323cbdaedbf3032dee438a37342bfa18a2f98c82 | [
"MIT"
] | null | null | null | def linha(tam=65):
return '\033[36m-' * tam
def cabecalho(txt):
print(linha())
print(txt.center(65))
print(linha())
| 14.888889 | 28 | 0.589552 | 19 | 134 | 4.157895 | 0.578947 | 0.253165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 0.216418 | 134 | 8 | 29 | 16.75 | 0.666667 | 0 | 0 | 0.333333 | 0 | 0 | 0.067164 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 6 |
5e910efd3c321feafa27ff83a1356fc991d2cda2 | 7,860 | py | Python | t2vretrieval/models/criterion.py | aranciokov/ranp | ac9213e4f33b808258acd9dcf5ab08e3902642ed | [
"MIT"
] | 3 | 2022-03-18T08:09:41.000Z | 2022-03-23T08:42:03.000Z | t2vretrieval/models/criterion.py | aranciokov/ranp | ac9213e4f33b808258acd9dcf5ab08e3902642ed | [
"MIT"
] | null | null | null | t2vretrieval/models/criterion.py | aranciokov/ranp | ac9213e4f33b808258acd9dcf5ab08e3902642ed | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import framework.configbase
import framework.ops
def cosine_sim(im, s):
'''cosine similarity between all the image and sentence pairs
'''
inner_prod = im.mm(s.t())
im_norm = torch.sqrt((im**2).sum(1).view(-1, 1) + 1e-18)
s_norm = torch.sqrt((s**2).sum(1).view(1, -1) + 1e-18)
sim = inner_prod / (im_norm * s_norm)
return sim
class ContrastiveLoss(nn.Module):
'''compute contrastive loss
'''
def __init__(self, margin=0, max_violation=False, direction='bi', topk=1):
'''Args:
direction: i2t for negative sentence, t2i for negative image, bi for both
'''
super(ContrastiveLoss, self).__init__()
self.margin = margin
self.max_violation = max_violation
self.direction = direction
self.topk = topk
def forward(self, scores, margin=None, average_batch=True, batch_relevance=None, threshold_pos=1.):
'''
Args:
scores: image-sentence score matrix, (batch, batch)
the same row of im and s are positive pairs, different rows are negative pairs
batch_relevance: image-sentence relevancy matrix (batch, batch)
'''
if margin is None:
margin = self.margin
batch_size = scores.size(0)
diagonal = scores.diag().view(batch_size, 1) # positive pairs
# mask to clear diagonals which are positive pairs
pos_masks = torch.eye(batch_size).bool().to(scores.device)
batch_topk = min(batch_size, self.topk)
if self.direction == 'i2t' or self.direction == 'bi':
d1 = diagonal.expand_as(scores) # same column for im2s (negative sentence)
# compare every diagonal score to scores in its column
# caption retrieval
cost_s = (margin + scores - d1).clamp(min=0)
cost_s = cost_s.masked_fill(pos_masks, 0)
if batch_relevance is not None:
cost_s[batch_relevance >= threshold_pos] = 0
if self.max_violation:
cost_s, _ = torch.topk(cost_s, batch_topk, dim=1)
cost_s = cost_s / batch_topk
if average_batch:
cost_s = cost_s / batch_size
else:
if average_batch:
cost_s = cost_s / (batch_size * (batch_size - 1))
cost_s = torch.sum(cost_s)
if self.direction == 't2i' or self.direction == 'bi':
d2 = diagonal.t().expand_as(scores) # same row for s2im (negative image)
# compare every diagonal score to scores in its row
cost_im = (margin + scores - d2).clamp(min=0)
cost_im = cost_im.masked_fill(pos_masks, 0)
if batch_relevance is not None:
cost_im[batch_relevance >= threshold_pos] = 0
if self.max_violation:
cost_im, _ = torch.topk(cost_im, batch_topk, dim=0)
cost_im = cost_im / batch_topk
if average_batch:
cost_im = cost_im / batch_size
else:
if average_batch:
cost_im = cost_im / (batch_size * (batch_size - 1))
cost_im = torch.sum(cost_im)
if self.direction == 'i2t':
return cost_s
elif self.direction == 't2i':
return cost_im
else:
return cost_s + cost_im
class ContrastiveLossHP(nn.Module):
'''compute contrastive loss
'''
def __init__(self, margin=0, margin_pos=0, max_violation=False, direction='bi', topk=1):
'''Args:
direction: i2t for negative sentence, t2i for negative image, bi for both
'''
super(ContrastiveLossHP, self).__init__()
self.margin = margin
self.margin_pos = margin_pos
self.max_violation = max_violation
self.direction = direction
self.topk = topk
def forward(self, scores, margin=None, average_batch=True, batch_relevance=None, threshold_pos=1., margin_pos=None):
'''
Args:
scores: image-sentence score matrix, (batch, batch)
the same row of im and s are positive pairs, different rows are negative pairs
batch_relevance: image-sentence relevancy matrix (batch, batch)
'''
if margin is None:
margin = self.margin
if margin_pos is None:
margin_pos = self.margin_pos
batch_size = scores.size(0)
diagonal = scores.diag().view(batch_size, 1) # positive pairs
# mask to clear diagonals which are positive pairs
pos_masks = torch.eye(batch_size).bool().to(scores.device)
batch_topk = min(batch_size, self.topk)
if self.direction == 'i2t' or self.direction == 'bi':
d1 = diagonal.expand_as(scores) # same column for im2s (negative sentence)
# compare every diagonal score to scores in its column
# caption retrieval
cost_s = (margin + scores - d1).clamp(min=0)
cost_s = cost_s.masked_fill(pos_masks, 0)
if batch_relevance is not None:
cost_s[batch_relevance >= threshold_pos] = 0
if self.max_violation:
cost_s, _ = torch.topk(cost_s, batch_topk, dim=1)
cost_s = cost_s / batch_topk
if average_batch:
cost_s = cost_s / batch_size
else:
if average_batch:
cost_s = cost_s / (batch_size * (batch_size - 1))
cost_s = torch.sum(cost_s)
# we want a copy of scores in order to compute the negative-masked version as well
hp_scores = scores.clone() # scores are the (v_i, q_j)
hp_scores[batch_relevance < threshold_pos] = 1 # mask the negatives; positives have now score <= 1
hp_scores = hp_scores.masked_fill(pos_masks, 1)
# we want to pick the argmin (hardest positive)
hp_scores, _ = torch.topk(hp_scores, batch_topk, dim=1, largest=False)
# -> s(q, v+)
hp_cost_s = (margin_pos + scores - hp_scores).clamp(min=0)
# we need to mask hp_cost again because 'scores' is not pos-masked
hp_cost_s = hp_cost_s.masked_fill(pos_masks, 0)
if batch_relevance is not None:
hp_cost_s[batch_relevance >= threshold_pos] = 0
hp_cost_s, _ = torch.topk(hp_cost_s, batch_topk, dim=1)
hp_cost_s = hp_cost_s / batch_topk
if average_batch:
hp_cost_s = hp_cost_s / batch_size
hp_cost_s = torch.sum(hp_cost_s)
if self.direction == 't2i' or self.direction == 'bi':
d2 = diagonal.t().expand_as(scores) # same row for s2im (negative image)
# compare every diagonal score to scores in its row
cost_im = (margin + scores - d2).clamp(min=0)
cost_im = cost_im.masked_fill(pos_masks, 0)
if batch_relevance is not None:
cost_im[batch_relevance >= threshold_pos] = 0
if self.max_violation:
cost_im, _ = torch.topk(cost_im, batch_topk, dim=0)
cost_im = cost_im / batch_topk
if average_batch:
cost_im = cost_im / batch_size
else:
if average_batch:
cost_im = cost_im / (batch_size * (batch_size - 1))
cost_im = torch.sum(cost_im)
# we want a copy of scores in order to compute the negative-masked version as well
hp_scores_im = scores.clone() # scores are the (v_i, q_j)
hp_scores_im[batch_relevance < threshold_pos] = 1 # mask the negatives; positives have now score <= 1
hp_scores_im = hp_scores_im.masked_fill(pos_masks, 1)
# we want to pick the argmin (hardest positive)
hp_scores_im, _ = torch.topk(hp_scores_im, batch_topk, dim=0, largest=False)
# -> s(q, v+)
hp_cost_im = (margin_pos + scores - hp_scores_im).clamp(min=0)
# we need to mask hp_cost again because 'scores' is not pos-masked
hp_cost_im = hp_cost_im.masked_fill(pos_masks, 0)
if batch_relevance is not None:
hp_cost_im[batch_relevance >= threshold_pos] = 0
hp_cost_im, _ = torch.topk(hp_cost_im, batch_topk, dim=0)
hp_cost_im = hp_cost_im / batch_topk
if average_batch:
hp_cost_im = hp_cost_im / batch_size
hp_cost_im = torch.sum(hp_cost_im)
if self.direction == 'i2t':
return cost_s, hp_cost_s
elif self.direction == 't2i':
return cost_im, hp_cost_im
else:
return cost_s + cost_im, hp_cost_s + hp_cost_im
| 37.075472 | 118 | 0.659796 | 1,186 | 7,860 | 4.12226 | 0.114671 | 0.047044 | 0.028636 | 0.016363 | 0.896093 | 0.878503 | 0.858867 | 0.812436 | 0.796482 | 0.769278 | 0 | 0.014262 | 0.24173 | 7,860 | 211 | 119 | 37.251185 | 0.80604 | 0.226972 | 0 | 0.647482 | 0 | 0 | 0.006051 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035971 | false | 0 | 0.028777 | 0 | 0.129496 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0dc63506330fdc60018049e930eed55c9aea71d2 | 27 | py | Python | entity/__init__.py | tuannguyendang/montypython | c0b8ff7a8130e811ba16bfab8d5e013eac37f432 | [
"Apache-2.0"
] | null | null | null | entity/__init__.py | tuannguyendang/montypython | c0b8ff7a8130e811ba16bfab8d5e013eac37f432 | [
"Apache-2.0"
] | null | null | null | entity/__init__.py | tuannguyendang/montypython | c0b8ff7a8130e811ba16bfab8d5e013eac37f432 | [
"Apache-2.0"
] | null | null | null | from .user import User, db
| 13.5 | 26 | 0.740741 | 5 | 27 | 4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 27 | 1 | 27 | 27 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
216d4eb41692418c9a789beb8f220409a6f64736 | 17,825 | py | Python | polynomials_on_simplices/polynomial/polynomials_unit_simplex_bases.py | FAndersson/polynomials_on_simplices | f015a4772c817bfa99b0d6b726667a38a174b064 | [
"MIT"
] | 1 | 2021-03-17T11:41:21.000Z | 2021-03-17T11:41:21.000Z | polynomials_on_simplices/polynomial/polynomials_unit_simplex_bases.py | FAndersson/polynomials_on_simplices | f015a4772c817bfa99b0d6b726667a38a174b064 | [
"MIT"
] | null | null | null | polynomials_on_simplices/polynomial/polynomials_unit_simplex_bases.py | FAndersson/polynomials_on_simplices | f015a4772c817bfa99b0d6b726667a38a174b064 | [
"MIT"
] | null | null | null | """Functionality for working with different bases for polynomials on the unit simplex, and for converting between
these bases.
"""
import copy
import numpy as np
from polynomials_on_simplices.polynomial.polynomials_monomial_basis import (
dual_monomial_basis, dual_monomial_basis_fn, dual_vector_valued_monomial_basis,
dual_vector_valued_monomial_basis_fn, monomial_basis, monomial_basis_fn, monomial_basis_fn_latex,
monomial_basis_fn_latex_compact, monomial_basis_latex, monomial_basis_latex_compact,
unique_identifier_monomial_basis, vector_valued_monomial_basis, vector_valued_monomial_basis_fn)
from polynomials_on_simplices.polynomial.polynomials_unit_simplex_bernstein_basis import (
PolynomialBernstein, bernstein_basis, bernstein_basis_fn, bernstein_basis_fn_latex,
bernstein_basis_fn_latex_compact, bernstein_basis_latex, bernstein_basis_latex_compact, dual_bernstein_basis,
dual_bernstein_basis_fn, dual_vector_valued_bernstein_basis, dual_vector_valued_bernstein_basis_fn,
unique_identifier_bernstein_basis, vector_valued_bernstein_basis, vector_valued_bernstein_basis_fn)
from polynomials_on_simplices.polynomial.polynomials_unit_simplex_lagrange_basis import (
PolynomialLagrange, dual_lagrange_basis, dual_lagrange_basis_fn, dual_vector_valued_lagrange_basis,
dual_vector_valued_lagrange_basis_fn, lagrange_basis, lagrange_basis_fn, lagrange_basis_fn_latex,
lagrange_basis_fn_latex_compact, lagrange_basis_latex, lagrange_basis_latex_compact,
unique_identifier_lagrange_basis, vector_valued_lagrange_basis, vector_valued_lagrange_basis_fn)
def convert_polynomial_to_basis(p, target_basis):
r"""
Convert a polynomial in :math:`\mathcal{P}_r (\Delta_c^m)` to the given basis.
:param p: Polynomial expanded in some basis.
:param str target_basis: Unique identifier for the basis we want to expand the polynomial in.
:return: Polynomial expanded in the given basis.
"""
if p.basis() == target_basis:
return copy.deepcopy(p)
if target_basis == unique_identifier_monomial_basis():
return p.to_monomial_basis()
m = p.domain_dimension()
n = p.target_dimension()
r = p.degree()
coeff = np.empty(p.coeff.shape)
dual_basis = dual_polynomial_basis(r, m, target_basis)
if n == 1:
for i in range(len(coeff)):
coeff[i] = dual_basis[i](p)
else:
# Handle each component of p separately
for i in range(len(coeff)):
for j in range(n):
coeff[i][j] = dual_basis[i](p[j])
if target_basis == unique_identifier_lagrange_basis():
return PolynomialLagrange(coeff, r, m)
else:
if target_basis == unique_identifier_bernstein_basis():
return PolynomialBernstein(coeff, r, m)
else:
raise ValueError("Unknown polynomial basis")
def polynomial_basis_fn(nu, r, basis):
r"""
Generate a basis polynomial in the space :math:`\mathcal{P}_r(\Delta_c^n)` (where n is equal to the length
of nu) in the given basis.
:param nu: Multi-index indicating which basis polynomial should be generated.
:type nu: int or :class:`~polynomials_on_simplices.algebra.multiindex.MultiIndex` or Tuple[int, ...]
:param int r: Degree of polynomial.
:param str basis: Unique identifier for the basis we should generate a base polynomial for.
:return: The base polynomial as specified by nu, r and basis.
:rtype: Implementation of :class:`~polynomials_on_simplices.polynomial.polynomials_base.PolynomialBase`.
"""
if basis == unique_identifier_monomial_basis():
return monomial_basis_fn(nu)
if basis == unique_identifier_lagrange_basis():
return lagrange_basis_fn(nu, r)
if basis == unique_identifier_bernstein_basis():
return bernstein_basis_fn(nu, r)
raise ValueError("Unknown polynomial basis")
def polynomial_basis(r, n, basis):
r"""
Generate all base polynomials for the space :math:`\mathcal{P}_r(\Delta_c^n)` in the given basis.
:param int r: Degree of the polynomial space.
:param int n: Dimension of the unit simplex.
:param str basis: Unique identifier for the basis we should generate base polynomials for.
:return: List of base polynomials in the specified basis.
"""
if basis == unique_identifier_monomial_basis():
return monomial_basis(r, n)
if basis == unique_identifier_lagrange_basis():
return lagrange_basis(r, n)
if basis == unique_identifier_bernstein_basis():
return bernstein_basis(r, n)
raise ValueError("Unknown polynomial basis")
def vector_valued_polynomial_basis_fn(nu, r, i, n, basis):
r"""
Generate a basis polynomial for the space :math:`\mathcal{P}_r(\Delta_c^m, \mathbb{R}^n)` (where m is equal to
the length of nu) in the given basis.
:param nu: Multi-index indicating which basis polynomial should be generated.
:type nu: int or :class:`~polynomials_on_simplices.algebra.multiindex.MultiIndex` or Tuple[int, ...]
:param int r: Degree of polynomial.
:param int i: Index of the vector component that is non-zero.
:param int n: Dimension of the target.
:param str basis: Unique identifier for the basis we should generate a base polynomial for.
:return: The base polynomial as specified by nu, r and basis.
:rtype: Implementation of :class:`~polynomials_on_simplices.polynomial.polynomials_base.PolynomialBase`.
"""
if basis == unique_identifier_monomial_basis():
return vector_valued_monomial_basis_fn(nu, i, n)
if basis == unique_identifier_lagrange_basis():
return vector_valued_lagrange_basis_fn(nu, r, i, n)
if basis == unique_identifier_bernstein_basis():
return vector_valued_bernstein_basis_fn(nu, r, i, n)
raise ValueError("Unknown polynomial basis")
def vector_valued_polynomial_basis(r, m, n, basis, ordering="interleaved"):
r"""
Generate all base polynomials for the space :math:`\mathcal{P}_r(\Delta_c^m, \mathbb{R}^n)` in the given basis.
:param int r: Degree of the polynomial space.
:param int m: Dimension of the domain.
:param int n: Dimension of the target.
:param str basis: Unique identifier for the basis we should generate base polynomials for.
:param str ordering: How the vector valued basis functions are ordered. Can be "sequential" or "interleaved".
For sequential, sorting is first done on the index of the component that is non-zero, and then the non-zero
component is sorted in the same way as the scalar valued basis functions. For "interleaved" basis functions
are first sorted on their non-zero component in the same way as scalar valued basis functions, and then they
are sorted on the index of the component that is non-zero.
:return: List of base polynomials in the specified basis.
"""
if n == 1:
return polynomial_basis(r, m, basis)
if basis == unique_identifier_monomial_basis():
return vector_valued_monomial_basis(r, m, n, ordering)
if basis == unique_identifier_lagrange_basis():
return vector_valued_lagrange_basis(r, m, n, ordering)
if basis == unique_identifier_bernstein_basis():
return vector_valued_bernstein_basis(r, m, n, ordering)
raise ValueError("Unknown polynomial basis")
def dual_polynomial_basis_fn(mu, r, basis):
r"""
Generate a dual basis function to a polynomial basis, i.e. the linear map
:math:`q_{\mu, r} : \mathcal{P}_r(\Delta_c^n) \to \mathbb{R}` such that
.. math::
q_{\mu, r}(p_{\nu, r}) = \delta_{\mu, \nu},
where :math:`p_{\nu, r}` is the degree r basis polynomial indexed by the multi-index :math:`\nu` in the given
basis and
.. math::
\delta_{\mu, \nu} = \begin{cases}
1 & \mu = \nu \\
0 & \text{else}
\end{cases}.
:param mu: Multi-index indicating which dual basis function should be generated.
:type mu: int or :class:`~polynomials_on_simplices.algebra.multiindex.MultiIndex` or Tuple[int, ...]
:param int r: Degree of polynomial space.
:param str basis: Unique identifier for the basis we should generate a dual base function for.
:return: The dual basis function as specified by mu, r and basis.
:rtype: Callable :math:`q_{\mu, r}(p)`.
"""
if basis == unique_identifier_monomial_basis():
return dual_monomial_basis_fn(mu)
if basis == unique_identifier_lagrange_basis():
return dual_lagrange_basis_fn(mu, r)
if basis == unique_identifier_bernstein_basis():
return dual_bernstein_basis_fn(mu, r)
raise ValueError("Unknown polynomial basis")
def dual_polynomial_basis(r, n, basis):
r"""
Generate all dual base functions for the space :math:`\mathcal{P}_r(\Delta_c^n)` in the given basis (i.e. a basis
for :math:`\mathcal{P}_r(\Delta_c^n)^*`).
:param int r: Degree of the polynomial space.
:param int n: Dimension of the domain.
:param str basis: Unique identifier for the basis we should generate dual base functions for.
:return: List of dual base functions.
:rtype: List[callable `q(p)`].
"""
if basis == unique_identifier_monomial_basis():
return dual_monomial_basis(r, n)
if basis == unique_identifier_lagrange_basis():
return dual_lagrange_basis(r, n)
if basis == unique_identifier_bernstein_basis():
return dual_bernstein_basis(r, n)
raise ValueError("Unknown polynomial basis")
def dual_vector_valued_polynomial_basis_fn(mu, r, i, n, basis):
r"""
Generate a dual basis function to a vector valued polynomial basis, i.e. the linear map
:math:`q_{\mu, i} : \mathcal{P}_r(\mathbb{R}^m, \mathbb{R}^n) \to \mathbb{R}` that satisfies
.. math::
q_{\mu, i}(p_{\nu, j}) = \delta_{\mu, \nu} \delta_{i, j},
where :math:`p_{\nu, j}` is the degree :math:`|\nu|` vector valued basis polynomial indexed by the
multi-index :math:`\nu` with a non-zero i:th component in the given basis (see
:func:`vector_valued_polynomial_basis_fn`) and
.. math::
\delta_{\mu, \nu} = \begin{cases}
1 & \mu = \nu \\
0 & \text{else}
\end{cases}.
:param mu: Multi-index indicating which dual basis function should be generated.
:type mu: int or :class:`~polynomials_on_simplices.algebra.multiindex.MultiIndex` or Tuple[int, ...].
:param int r: Degree of polynomial space.
:param int i: Integer indicating which dual basis function should be generated.
:param int n: Dimension of the target.
:param str basis: Unique identifier for the basis we should generate a dual base function for.
:return: The dual basis function as specified by mu, r and i.
:rtype: Callable :math:`q_{\mu, i}(p)`.
"""
if basis == unique_identifier_monomial_basis():
return dual_vector_valued_monomial_basis_fn(mu, i, n)
if basis == unique_identifier_lagrange_basis():
return dual_vector_valued_lagrange_basis_fn(mu, r, i, n)
if basis == unique_identifier_bernstein_basis():
return dual_vector_valued_bernstein_basis_fn(mu, r, i, n)
raise ValueError("Unknown polynomial basis")
def dual_vector_valued_polynomial_basis(r, m, n, basis, ordering="interleaved"):
r"""
Generate all dual base functions for the space :math:`\mathcal{P}_r(\mathbb{R}^m, \mathbb{R}^n)` in the
given basis (i.e. the basis for :math:`\mathcal{P}_r(\mathbb{R}^m, \mathbb{R}^n)^*`).
See :func:`dual_vector_valued_polynomial_basis_fn`.
:param int r: Degree of the polynomial space.
:param int m: Dimension of the domain.
:param int n: Dimension of the target.
:param str basis: Unique identifier for the basis we should generate dual base functions for.
:param str ordering: How the vector valued basis functions are ordered. Can be "sequential" or "interleaved".
For sequential, sorting is first done on the index of the component that is non-zero, and then the non-zero
component is sorted in the same way as the scalar valued basis functions. For "interleaved" basis functions
are first sorted on their non-zero component in the same way as scalar valued basis functions, and then they
are sorted on the index of the component that is non-zero.
:return: List of dual base functions.
:rtype: List[callable `q(p)`].
"""
if basis == unique_identifier_monomial_basis():
return dual_vector_valued_monomial_basis(r, m, n, ordering)
if basis == unique_identifier_lagrange_basis():
return dual_vector_valued_lagrange_basis(r, m, n, ordering)
if basis == unique_identifier_bernstein_basis():
return dual_vector_valued_bernstein_basis(r, m, n, ordering)
raise ValueError("Unknown polynomial basis")
def polynomial_basis_fn_latex(nu, r, basis):
r"""
Generate Latex string for a basis polynomial for the space :math:`\mathcal{P}_r(\mathbb{R}^n)` (where n is equal
to the length of nu) in the given basis.
:param nu: Multi-index indicating which basis polynomial should be generated.
:type nu: int or :class:`~polynomials_on_simplices.algebra.multiindex.MultiIndex` or Tuple[int, ...]
:param int r: Degree of polynomial.
:param str basis: Unique identifier for the basis we should generate a basis polynomial Latex string for.
:return: Latex string for the base polynomial as specified by nu, r and basis.
:rtype: str
.. rubric:: Examples
>>> polynomial_basis_fn_latex(3, 3, unique_identifier_monomial_basis())
'x^3'
>>> polynomial_basis_fn_latex((1, 1, 1), 3, unique_identifier_bernstein_basis())
'6 x_1 x_2 x_3'
"""
if basis == unique_identifier_monomial_basis():
return monomial_basis_fn_latex(nu)
if basis == unique_identifier_lagrange_basis():
return lagrange_basis_fn_latex(nu, r)
if basis == unique_identifier_bernstein_basis():
return bernstein_basis_fn_latex(nu, r)
raise ValueError("Unknown polynomial basis")
def polynomial_basis_fn_latex_compact(nu, r, basis):
r"""
Generate compact Latex string for a basis polynomial for the space :math:`\mathcal{P}_r(\mathbb{R}^n)` (where n
is equal to the length of nu) in the given basis, using the common shorthand notation for the given basis.
:param nu: Multi-index indicating which basis polynomial should be generated.
:type nu: int or :class:`~polynomials_on_simplices.algebra.multiindex.MultiIndex` or Tuple[int, ...]
:param int r: Degree of polynomial.
:param str basis: Unique identifier for the basis we should generate a basis polynomial Latex string for.
:return: Latex string for the base polynomial as specified by nu, r and basis.
:rtype: str
.. rubric:: Examples
>>> polynomial_basis_fn_latex_compact(3, 3, unique_identifier_monomial_basis())
'x^3'
>>> polynomial_basis_fn_latex_compact((1, 1), 3, unique_identifier_monomial_basis())
'x^{(1, 1)}'
>>> polynomial_basis_fn_latex_compact((1, 1, 1), 3, unique_identifier_bernstein_basis())
'b_{(1, 1, 1), 3}(x)'
"""
if basis == unique_identifier_monomial_basis():
return monomial_basis_fn_latex_compact(nu)
if basis == unique_identifier_lagrange_basis():
return lagrange_basis_fn_latex_compact(nu, r)
if basis == unique_identifier_bernstein_basis():
return bernstein_basis_fn_latex_compact(nu, r)
raise ValueError("Unknown polynomial basis")
def polynomial_basis_latex(r, n, basis):
r"""
Generate Latex strings for all base polynomials for the space :math:`\mathcal{P}_r(\Delta_c^n)` in the given
basis.
:param int r: Degree of the polynomial space.
:param int n: Dimension of the unit simplex.
:param str basis: Unique identifier for the basis we should generate base polynomial Latex strings for.
:return: List of Latex strings for each base polynomial in the specified basis.
:rtype: List[str]
.. rubric:: Examples
>>> polynomial_basis_latex(2,1,unique_identifier_monomial_basis())
['1', 'x', 'x^2']
>>> polynomial_basis_latex(2,2,unique_identifier_bernstein_basis())
['(1 - x_1 - x_2)^2', '2 x_1 (1 - x_1 - x_2)', 'x_1^2', '2 x_2 (1 - x_1 - x_2)', '2 x_1 x_2', 'x_2^2']
"""
if basis == unique_identifier_monomial_basis():
return monomial_basis_latex(r, n)
if basis == unique_identifier_lagrange_basis():
return lagrange_basis_latex(r, n)
if basis == unique_identifier_bernstein_basis():
return bernstein_basis_latex(r, n)
raise ValueError("Unknown polynomial basis")
def polynomial_basis_latex_compact(r, n, basis):
r"""
Generate compact Latex strings for all base polynomials for the space :math:`\mathcal{P}_r(\Delta_c^n)` in the
given basis.
:param int r: Degree of the polynomial space.
:param int n: Dimension of the unit simplex.
:param str basis: Unique identifier for the basis we should generate base polynomial Latex strings for.
:return: List of Latex strings for each base polynomials in the specified basis.
:rtype: List[str]
.. rubric:: Examples
>>> polynomial_basis_latex_compact(2,1,unique_identifier_monomial_basis())
['1', 'x', 'x^2']
>>> polynomial_basis_latex_compact(1,2,unique_identifier_bernstein_basis())
['b_{(0, 0), 1}(x)', 'b_{(1, 0), 1}(x)', 'b_{(0, 1), 1}(x)']
"""
if basis == unique_identifier_monomial_basis():
return monomial_basis_latex_compact(r, n)
if basis == unique_identifier_lagrange_basis():
return lagrange_basis_latex_compact(r, n)
if basis == unique_identifier_bernstein_basis():
return bernstein_basis_latex_compact(r, n)
raise ValueError("Unknown polynomial basis")
if __name__ == "__main__":
import doctest
doctest.testmod()
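The `if __name__ == "__main__"` guard above runs the `>>>` examples embedded in the docstrings through `doctest`. A minimal, self-contained sketch of the same pattern (the `monomial_latex` helper below is a hypothetical stand-in for illustration, not part of the module above):

```python
import doctest


def monomial_latex(nu):
    """Return a Latex string for the monomial x^nu.

    >>> monomial_latex(1)
    'x'
    >>> monomial_latex(3)
    'x^3'
    """
    return 'x' if nu == 1 else 'x^{0}'.format(nu)


# testmod() collects and executes the >>> examples in every docstring
# of the current module; result.failed counts mismatched outputs.
result = doctest.testmod()
assert result.failed == 0
```

Embedding examples this way keeps the documentation executable, so the docstrings cannot silently drift out of sync with the implementation.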
| 46.419271 | 117 | 0.711585 | 2,587 | 17,825 | 4.68535 | 0.065327 | 0.084481 | 0.090092 | 0.068311 | 0.885158 | 0.827407 | 0.772131 | 0.757611 | 0.744906 | 0.70613 | 0 | 0.005209 | 0.192314 | 17,825 | 383 | 118 | 46.54047 | 0.836702 | 0.535091 | 0 | 0.36129 | 0 | 0 | 0.045328 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083871 | false | 0 | 0.03871 | 0 | 0.387097 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
21e379b33534f46cb9532ac14f5517f762186c3a | 1,640 | py | Python | S11/tensornet/data/downloader.py | abishek-raju/EVA4B2 | 189f4062c85d91f43c1381087a9c89ff794e5428 | [
"Apache-2.0"
] | null | null | null | S11/tensornet/data/downloader.py | abishek-raju/EVA4B2 | 189f4062c85d91f43c1381087a9c89ff794e5428 | [
"Apache-2.0"
] | null | null | null | S11/tensornet/data/downloader.py | abishek-raju/EVA4B2 | 189f4062c85d91f43c1381087a9c89ff794e5428 | [
"Apache-2.0"
] | null | null | null | import os
from torchvision import datasets

def download_cifar10(path=None, train=True, transform=None):
    """Download CIFAR10 dataset

    Args:
        path (str, optional): Path where dataset will be downloaded.
            If no path provided, data will be downloaded in a pre-defined
            directory. (default: None)
        train (bool, optional): If True, download the training data else
            download the test data. (default: True)
        transform (tensornet.Transformations, optional): Data transformations
            to be applied on the data. (default: None)

    Returns:
        Downloaded dataset.
    """
    if path is None:
        path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'cifar10')
    return datasets.CIFAR10(
        path, train=train, download=True, transform=transform
    )


def download_mnist(path=None, train=True, transform=None):
    """Download MNIST dataset

    Args:
        path (str, optional): Path where dataset will be downloaded.
            If no path provided, data will be downloaded in a pre-defined
            directory. (default: None)
        train (bool, optional): If True, download the training data else
            download the test data. (default: True)
        transform (tensornet.Transformations, optional): Data transformations
            to be applied on the data. (default: None)

    Returns:
        Downloaded dataset.
    """
    if path is None:
        path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'mnist')
    return datasets.MNIST(
        path, train=train, download=True, transform=transform
    )
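Both helpers above share the same default-path convention. The sketch below isolates that logic so it can be exercised without torchvision or a network connection; `default_data_path` is an illustrative name introduced here, not part of the module:

```python
import os


def default_data_path(name, path=None):
    """Mirror the fallback used by download_cifar10/download_mnist:
    when no explicit path is given, store the data in a directory
    named `name` next to this module."""
    if path is None:
        path = os.path.join(os.path.dirname(os.path.abspath(__file__)), name)
    return path
```

An explicit path always wins, so callers can redirect the download location without touching the module-relative default.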
| 32.8 | 82 | 0.64939 | 198 | 1,640 | 5.328283 | 0.247475 | 0.073934 | 0.060664 | 0.032227 | 0.872038 | 0.872038 | 0.872038 | 0.716588 | 0.716588 | 0.716588 | 0 | 0.006617 | 0.262805 | 1,640 | 49 | 83 | 33.469388 | 0.866005 | 0.581098 | 0 | 0.285714 | 0 | 0 | 0.020761 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
df00c262f23c33cd0cc1256e7568ff28d540498f | 68 | py | Python | airbyte-integrations/bases/base-singer/base_singer/__init__.py | rajatariya21/airbyte | 11e70a7a96e2682b479afbe6f709b9a5fe9c4a8d | [
"MIT"
] | 6,215 | 2020-09-21T13:45:56.000Z | 2022-03-31T21:21:45.000Z | airbyte-integrations/bases/base-singer/base_singer/__init__.py | rajatariya21/airbyte | 11e70a7a96e2682b479afbe6f709b9a5fe9c4a8d | [
"MIT"
] | 8,448 | 2020-09-21T00:43:50.000Z | 2022-03-31T23:56:06.000Z | airbyte-integrations/bases/base-singer/base_singer/__init__.py | rajatariya21/airbyte | 11e70a7a96e2682b479afbe6f709b9a5fe9c4a8d | [
"MIT"
] | 1,251 | 2020-09-20T05:48:47.000Z | 2022-03-31T10:41:29.000Z | from .singer_helpers import * # noqa
from .source import * # noqa
| 22.666667 | 37 | 0.705882 | 9 | 68 | 5.222222 | 0.666667 | 0.425532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205882 | 68 | 2 | 38 | 34 | 0.87037 | 0.132353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
df108d7aff5982e6c8f958d3b6117429d5e34a02 | 84 | py | Python | jeri/core/models/fields/value.py | fmorgner/jeri | 5b33411c0e25375e3e5928fc044581a24c56f3ad | [
"BSD-3-Clause"
] | null | null | null | jeri/core/models/fields/value.py | fmorgner/jeri | 5b33411c0e25375e3e5928fc044581a24c56f3ad | [
"BSD-3-Clause"
] | null | null | null | jeri/core/models/fields/value.py | fmorgner/jeri | 5b33411c0e25375e3e5928fc044581a24c56f3ad | [
"BSD-3-Clause"
] | null | null | null | from jeri.core.models.fields.base import Field


class StringField(Field):
    pass
| 14 | 46 | 0.761905 | 12 | 84 | 5.333333 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154762 | 84 | 5 | 47 | 16.8 | 0.901408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
df2950ff51ae397a312852ed9352d7e50064ae54 | 1,698 | py | Python | run_evm.py | DarionRichie/simple_evm | bdcfbf7da345ba170db92cdd5ad7f62df7ead4d2 | [
"MIT"
] | null | null | null | run_evm.py | DarionRichie/simple_evm | bdcfbf7da345ba170db92cdd5ad7f62df7ead4d2 | [
"MIT"
] | null | null | null | run_evm.py | DarionRichie/simple_evm | bdcfbf7da345ba170db92cdd5ad7f62df7ead4d2 | [
"MIT"
] | null | null | null |
import binascii
from simple_evm import VM
ADDRESS = b'\x85\x82\xa2\x89\x02\xb9\xae\x93\xfc\x03\xdd\xb4\xae\xae\xe1\x8e\x85\x93\x12\xc1'
SENDER = b'\xae\x03\x02\x18\x87\xc2\x22\x37\x6f\x12\xca\xf0\x21\x44\xa1\x2f\x19\x19\x92\xcf'
code_hex = '60806040526004361061006c5763ffffffff7c010000000000000000000000000000000000000000000000000000000060003504166312065fe08114610071578063455259cb14610098578063858af522146100ad57806395dd7a55146100c2578063afc874d2146100d9575b600080fd5b34801561007d57600080fd5b506100866100ee565b60408051918252519081900360200190f35b3480156100a457600080fd5b506100866100f3565b3480156100b957600080fd5b506100866100f7565b3480156100ce57600080fd5b506100d76100fc565b005b3480156100e557600080fd5b506100d7610139565b333190565b3a90565b602a90565b6000805b7f80000000000000000000000000000000000000000000000000000000000000008110156101355760003b9150600101610100565b5050565b604080517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152600e60248201527f616c776179732072657665727473000000000000000000000000000000000000604482015290519081900360640190fd00a165627a7a72305820645df686b4a16d5a69fc6d841fc9ad700528c14b35ca5629e11b154a9d3dff890029'
code_bytes = binascii.unhexlify(code_hex)
msg = {
    'data': b'\x12\x06\x5f\xe0',
    'value': 0,
    'origin': SENDER,
    'sender': SENDER,
    'address': ADDRESS
}

state = {
    ADDRESS: {
        "balance": 100000000000000000,
        "nonce": 0,
        'code': code_bytes,
        "storage": {}
    },
    SENDER: {
        "balance": 100000000000000000,
        "nonce": 0,
        "storage": {}
    },
}

m = VM(state, msg)

# Step the VM until the program counter stops advancing (halt or revert).
pc = None
while pc != m.pc:
    pc = m.pc
    r = m.step()

print("return value")
print(r)
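The driver loop above runs the VM until the program counter reaches a fixed point. The sketch below reproduces that loop against a trivial stand-in machine; `CountdownVM` is hypothetical and only mimics the `pc`/`step()` surface of `simple_evm.VM` used here:

```python
class CountdownVM:
    """Hypothetical stand-in for simple_evm.VM: pc advances a fixed
    number of times, after which the machine halts."""

    def __init__(self, steps):
        self.pc = 0
        self._steps = steps

    def step(self):
        # Advance pc until the step budget is exhausted, then stay put.
        if self.pc < self._steps:
            self.pc += 1
        return self.pc


m = CountdownVM(3)
pc = None
# Same fixed-point loop as above: stop once step() no longer moves pc.
while pc != m.pc:
    pc = m.pc
    r = m.step()
```

Comparing `pc` before and after each `step()` call is what lets the loop terminate on any halting or reverting program, without inspecting opcodes.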
| 43.538462 | 933 | 0.804476 | 112 | 1,698 | 12.151786 | 0.589286 | 0.010287 | 0.044085 | 0.045555 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.599735 | 0.111307 | 1,698 | 38 | 934 | 44.684211 | 0.302187 | 0 | 0 | 0.181818 | 0 | 0.060606 | 0.694166 | 0.636417 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.060606 | 0 | 0.060606 | 0.060606 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
10d01d525a3c1c0a70074c3623d6fbe9abfa11c0 | 583 | py | Python | stars/_import.py | Shimenrock/shimenrock.weblogic_toolset | 6b5e4637fae3c626f686248c6281e1856c802831 | [
"MIT"
] | null | null | null | stars/_import.py | Shimenrock/shimenrock.weblogic_toolset | 6b5e4637fae3c626f686248c6281e1856c802831 | [
"MIT"
] | null | null | null | stars/_import.py | Shimenrock/shimenrock.weblogic_toolset | 6b5e4637fae3c626f686248c6281e1856c802831 | [
"MIT"
] | null | null | null | from . import console
from . import cve_2014_4210
from . import cve_2016_0638
from . import cve_2016_3510
from . import cve_2017_3248
from . import cve_2017_3506
from . import cve_2017_10271
from . import cve_2018_2628
from . import cve_2018_2893
from . import cve_2018_2894
from . import cve_2018_3191
from . import cve_2018_3245
from . import cve_2018_3252
from . import cve_2019_2618
from . import cve_2019_2725
from . import cve_2019_2729
from . import cve_2019_2888
from . import cve_2019_2890
from . import cve_2020_2551
from . import cve_2020_2555
from . import cve_2020_2883
| 26.5 | 28 | 0.819897 | 103 | 583 | 4.252427 | 0.291262 | 0.479452 | 0.593607 | 0.232877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.322645 | 0.144082 | 583 | 21 | 29 | 27.761905 | 0.55511 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
8006e1afe921e09e3546c1c8dd18b0e8545d5440 | 445 | py | Python | pymysql/tests/__init__.py | mathieulongtin/PyMySQL | 41d9e6bff7e0a6d74d8aa327b6cb64d6e17baf7e | [
"MIT"
] | 1 | 2017-11-08T08:15:45.000Z | 2017-11-08T08:15:45.000Z | pymysql/tests/__init__.py | mathieulongtin/PyMySQL | 41d9e6bff7e0a6d74d8aa327b6cb64d6e17baf7e | [
"MIT"
] | null | null | null | pymysql/tests/__init__.py | mathieulongtin/PyMySQL | 41d9e6bff7e0a6d74d8aa327b6cb64d6e17baf7e | [
"MIT"
] | 4 | 2016-10-12T23:54:55.000Z | 2020-07-25T23:28:25.000Z | from pymysql.tests.test_issues import *
from pymysql.tests.test_basic import *
from pymysql.tests.test_nextset import *
from pymysql.tests.test_DictCursor import *
from pymysql.tests.test_connection import TestConnection
from pymysql.tests.test_SSCursor import *
from pymysql.tests.thirdparty import *
if __name__ == "__main__":
    try:
        import unittest2 as unittest
    except ImportError:
        import unittest
    unittest.main()
| 27.8125 | 56 | 0.770787 | 56 | 445 | 5.875 | 0.392857 | 0.234043 | 0.340426 | 0.364742 | 0.316109 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002681 | 0.161798 | 445 | 15 | 57 | 29.666667 | 0.879357 | 0 | 0 | 0 | 0 | 0 | 0.017978 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.769231 | 0 | 0.769231 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8019a04cc26ef5faf42c53e7d9cb20a7a8376b1c | 41 | py | Python | deepchem/models/tensorgraph/optimizers.py | hl2500/deepchem | 09ed9c04110eb822c2d6c9be61c27da4939896f6 | [
"MIT"
] | 1 | 2020-06-23T03:59:15.000Z | 2020-06-23T03:59:15.000Z | deepchem/models/tensorgraph/optimizers.py | evenhe/deepchem | 9d0fc5554b286117ae08b21b3f15877b06a1009e | [
"MIT"
] | null | null | null | deepchem/models/tensorgraph/optimizers.py | evenhe/deepchem | 9d0fc5554b286117ae08b21b3f15877b06a1009e | [
"MIT"
] | 1 | 2021-02-24T04:58:32.000Z | 2021-02-24T04:58:32.000Z | from deepchem.models.optimizers import *
| 20.5 | 40 | 0.829268 | 5 | 41 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
801c19f8b7796e5536631041f8089db4e399acf8 | 102 | py | Python | test/fixtures/use_function.py | python-rope/pylsp-rope | 431415560779881b57048dc563802705f7556bca | [
"MIT"
] | 16 | 2021-10-03T07:18:20.000Z | 2022-03-28T00:11:53.000Z | test/fixtures/use_function.py | python-rope/pylsp-rope | 431415560779881b57048dc563802705f7556bca | [
"MIT"
] | 7 | 2021-10-03T06:37:42.000Z | 2021-11-02T17:13:27.000Z | test/fixtures/use_function.py | python-rope/pylsp-rope | 431415560779881b57048dc563802705f7556bca | [
"MIT"
] | null | null | null | def add(a, b):
    return a + b


def main():
    a, b = 10, 20
    print(f"{a} + {b} = {add(a, b)}")
| 12.75 | 37 | 0.411765 | 20 | 102 | 2.1 | 0.5 | 0.238095 | 0.238095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 0.333333 | 102 | 7 | 38 | 14.571429 | 0.558824 | 0 | 0 | 0 | 0 | 0 | 0.22549 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.6 | 0.2 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
802e91133d9af3bb060cf0a19a45a1e62b8451b5 | 11,090 | py | Python | spreadflow_xslt/test/test_xslt_proc.py | znerol/spreadflow-xslt | adb6d97d703d561f17878291e4a67130e101536a | [
"MIT"
] | null | null | null | spreadflow_xslt/test/test_xslt_proc.py | znerol/spreadflow-xslt | adb6d97d703d561f17878291e4a67130e101536a | [
"MIT"
] | null | null | null | spreadflow_xslt/test/test_xslt_proc.py | znerol/spreadflow-xslt | adb6d97d703d561f17878291e4a67130e101536a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-

from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals

import codecs
import copy
import os

from twisted.internet import defer
from mock import Mock
from testtools import TestCase, run_test_with
from testtools.twistedsupport import AsynchronousDeferredRunTest

from spreadflow_core.scheduler import Scheduler
from spreadflow_delta.test.matchers import MatchesSendDeltaItemInvocation
from spreadflow_xslt.proc import XSLT

FIXTURE_DIRECTORY = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'fixtures')

class XSLTTransformUnitTest(TestCase):

    @run_test_with(AsynchronousDeferredRunTest)
    @defer.inlineCallbacks
    def test_spec_document_example(self):
        """
        Operates on fixtures/01-spec-document-example.*
        see: https://www.w3.org/TR/xslt#section-Examples
        """
        xsl_path = os.path.join(FIXTURE_DIRECTORY, '01-spec-document-example.xsl')
        pipe = XSLT(xsl_path)

        input_data = b''
        input_path = os.path.join(FIXTURE_DIRECTORY, '01-spec-document-example.xml')
        with open(input_path, 'rb') as input_file:
            input_data = input_file.read()

        expected_data = b''
        expected_path = os.path.join(FIXTURE_DIRECTORY, '01-spec-document-example.html')
        with open(expected_path, 'rb') as expected_file:
            expected_data = expected_file.read()

        item = {
            'inserts': ['a'],
            'deletes': [],
            'data': {
                'a': {
                    'content': input_data
                }
            }
        }

        expected = copy.deepcopy(item)
        expected['data']['a']['content'] = expected_data
        matches = MatchesSendDeltaItemInvocation(expected, pipe)
        send = Mock(spec=Scheduler.send)
        yield pipe(item, send)
        self.assertEqual(send.call_count, 1)
        self.assertThat(send.call_args, matches)

    @run_test_with(AsynchronousDeferredRunTest)
    @defer.inlineCallbacks
    def test_spec_data_example(self):
        """
        Operates on fixtures/02-spec-data-example.*
        see: https://www.w3.org/TR/xslt#section-Examples
        """
        xsl_path = os.path.join(FIXTURE_DIRECTORY, '02-spec-data-example.xsl')
        pipe = XSLT(xsl_path, key='custom_content', destkey='custom_result', coiterate=None)

        input_data = b''
        input_path = os.path.join(FIXTURE_DIRECTORY, '02-spec-data-example.xml')
        with open(input_path, 'rb') as input_file:
            input_data = input_file.read()

        expected_data = b''
        expected_path = os.path.join(FIXTURE_DIRECTORY, '02-spec-data-example.html')
        with open(expected_path, 'rb') as expected_file:
            expected_data = expected_file.read()

        item = {
            'inserts': ['b'],
            'deletes': [],
            'data': {
                'b': {
                    'custom_content': input_data
                }
            }
        }

        expected = copy.deepcopy(item)
        expected['data']['b']['custom_result'] = expected_data
        matches = MatchesSendDeltaItemInvocation(expected, pipe)
        send = Mock(spec=Scheduler.send)
        yield pipe(item, send)
        self.assertEqual(send.call_count, 1)
        self.assertThat(send.call_args, matches)

    @run_test_with(AsynchronousDeferredRunTest)
    @defer.inlineCallbacks
    def test_literal_strparam(self):
        """
        Operates on fixtures/03-literal-strparam.*
        see: https://www.w3.org/TR/xslt#section-Examples
        """
        xsl_path = os.path.join(FIXTURE_DIRECTORY, '03-literal-strparam.xsl')
        pipe = XSLT(xsl_path, strparams={'extract_id': 'South'})

        input_data = b''
        input_path = os.path.join(FIXTURE_DIRECTORY, '03-literal-strparam-data.xml')
        with open(input_path, 'rb') as input_file:
            input_data = input_file.read()

        expected_data = b''
        expected_path = os.path.join(FIXTURE_DIRECTORY, '03-literal-strparam-expected.xml')
        with open(expected_path, 'rb') as expected_file:
            expected_data = expected_file.read()

        item = {
            'inserts': ['b'],
            'deletes': [],
            'data': {
                'b': {
                    'content': input_data
                }
            }
        }

        expected = copy.deepcopy(item)
        expected['data']['b']['content'] = expected_data
        matches = MatchesSendDeltaItemInvocation(expected, pipe)
        send = Mock(spec=Scheduler.send)
        yield pipe(item, send)
        self.assertEqual(send.call_count, 1)
        self.assertThat(send.call_args, matches)

    @run_test_with(AsynchronousDeferredRunTest)
    @defer.inlineCallbacks
    def test_literal_rawparam(self):
        """
        Operates on fixtures/04-literal-param.*
        see: https://www.w3.org/TR/xslt#section-Examples
        """
        xsl_path = os.path.join(FIXTURE_DIRECTORY, '04-literal-param.xsl')
        pipe = XSLT(xsl_path, params={'extract_pos': '2'})

        input_data = b''
        input_path = os.path.join(FIXTURE_DIRECTORY, '04-literal-param-data.xml')
        with open(input_path, 'rb') as input_file:
            input_data = input_file.read()

        expected_data = b''
        expected_path = os.path.join(FIXTURE_DIRECTORY, '04-literal-param-expected.xml')
        with open(expected_path, 'rb') as expected_file:
            expected_data = expected_file.read()

        item = {
            'inserts': ['b'],
            'deletes': [],
            'data': {
                'b': {
                    'content': input_data
                }
            }
        }

        expected = copy.deepcopy(item)
        expected['data']['b']['content'] = expected_data
        matches = MatchesSendDeltaItemInvocation(expected, pipe)
        send = Mock(spec=Scheduler.send)
        yield pipe(item, send)
        self.assertEqual(send.call_count, 1)
        self.assertThat(send.call_args, matches)

    @run_test_with(AsynchronousDeferredRunTest)
    @defer.inlineCallbacks
    def test_dynamic_strparam(self):
        """
        Operates on fixtures/05-dynamic-strparam.*
        see: https://www.w3.org/TR/xslt#section-Examples
        """
        xsl_path = os.path.join(FIXTURE_DIRECTORY, '05-dynamic-strparam.xsl')
        pipe = XSLT(xsl_path, strparamskey='params')

        input_data = b''
        input_path = os.path.join(FIXTURE_DIRECTORY, '05-dynamic-strparam-data.xml')
        with open(input_path, 'rb') as input_file:
            input_data = input_file.read()

        expected_data = b''
        expected_path = os.path.join(FIXTURE_DIRECTORY, '05-dynamic-strparam-expected.xml')
        with open(expected_path, 'rb') as expected_file:
            expected_data = expected_file.read()

        item = {
            'inserts': ['b'],
            'deletes': [],
            'data': {
                'b': {
                    'params': {
                        'extract_id': 'West',
                    },
                    'content': input_data
                }
            }
        }

        expected = copy.deepcopy(item)
        expected['data']['b']['content'] = expected_data
        matches = MatchesSendDeltaItemInvocation(expected, pipe)
        send = Mock(spec=Scheduler.send)
        yield pipe(item, send)
        self.assertEqual(send.call_count, 1)
        self.assertThat(send.call_args, matches)

    @run_test_with(AsynchronousDeferredRunTest)
    @defer.inlineCallbacks
    def test_dynamic_rawparam(self):
        """
        Operates on fixtures/06-dynamic-param.*
        see: https://www.w3.org/TR/xslt#section-Examples
        """
        xsl_path = os.path.join(FIXTURE_DIRECTORY, '06-dynamic-param.xsl')
        pipe = XSLT(xsl_path, paramskey='params')

        input_data = b''
        input_path = os.path.join(FIXTURE_DIRECTORY, '06-dynamic-param-data.xml')
        with open(input_path, 'rb') as input_file:
            input_data = input_file.read()

        expected_data = b''
        expected_path = os.path.join(FIXTURE_DIRECTORY, '06-dynamic-param-expected.xml')
        with open(expected_path, 'rb') as expected_file:
            expected_data = expected_file.read()

        item = {
            'inserts': ['b'],
            'deletes': [],
            'data': {
                'b': {
                    'params': {
                        'extract_pos': '1',
                    },
                    'content': input_data
                }
            }
        }

        expected = copy.deepcopy(item)
        expected['data']['b']['content'] = expected_data
        matches = MatchesSendDeltaItemInvocation(expected, pipe)
        send = Mock(spec=Scheduler.send)
        yield pipe(item, send)
        self.assertEqual(send.call_count, 1)
        self.assertThat(send.call_args, matches)

    @run_test_with(AsynchronousDeferredRunTest)
    @defer.inlineCallbacks
    def test_no_input_doc(self):
        """
        Operates on fixtures/07-no-input-doc.*
        """
        xsl_path = os.path.join(FIXTURE_DIRECTORY, '07-no-input-doc.xsl')
        pipe = XSLT(xsl_path, strparams={'who': 'slartibartfast'}, key=None, destkey='content')

        expected_data = b''
        expected_path = os.path.join(FIXTURE_DIRECTORY, '07-no-input-doc-expected.xml')
        with open(expected_path, 'rb') as expected_file:
            expected_data = expected_file.read()

        item = {
            'inserts': ['b'],
            'deletes': [],
            'data': {
                'b': {
                }
            }
        }

        expected = copy.deepcopy(item)
        expected['data']['b']['content'] = expected_data
        matches = MatchesSendDeltaItemInvocation(expected, pipe)
        send = Mock(spec=Scheduler.send)
        yield pipe(item, send)
        self.assertEqual(send.call_count, 1)
        self.assertThat(send.call_args, matches)

    @run_test_with(AsynchronousDeferredRunTest)
    @defer.inlineCallbacks
    def test_encoded_output(self):
        """
        Operates on fixtures/08-encoded-output.*
        """
        xsl_path = os.path.join(FIXTURE_DIRECTORY, '08-encoded-output.xsl')
        pipe = XSLT(xsl_path, strparams={'who': 'Birgitta Jónsdóttir'}, key=None, destkey='content', encoding='utf-8')

        expected_data = ''
        expected_path = os.path.join(FIXTURE_DIRECTORY, '08-encoded-output-expected.xml')
        with codecs.open(expected_path, encoding='utf-8') as expected_file:
            expected_data = expected_file.read()

        item = {
            'inserts': ['b'],
            'deletes': [],
            'data': {
                'b': {
                }
            }
        }

        expected = copy.deepcopy(item)
        expected['data']['b']['content'] = expected_data
        matches = MatchesSendDeltaItemInvocation(expected, pipe)
        send = Mock(spec=Scheduler.send)
        yield pipe(item, send)
        self.assertEqual(send.call_count, 1)
        self.assertThat(send.call_args, matches)
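Every test above follows the same mock-and-assert pattern around `Scheduler.send`. Stripped of the XSLT specifics it reduces to the skeleton below; the pass-through `pipe` is a trivial stand-in for the processor, and stdlib `unittest.mock` is used here in place of the `mock` backport imported above:

```python
from unittest.mock import Mock


def pipe(item, send):
    # Trivial stand-in for the XSLT port: a real processor would
    # transform item['data'][key]['content'] before forwarding.
    send(item)


send = Mock()
item = {'inserts': ['a'], 'deletes': [], 'data': {'a': {'content': b'<doc/>'}}}
pipe(item, send)

# The tests assert on exactly these two things: the port forwarded
# one message, and the forwarded delta matches the expectation.
assert send.call_count == 1
assert send.call_args[0][0] is item
```

Asserting on the mock rather than on return values is what lets the tests run the port exactly as the scheduler would, including the Deferred-based `yield`.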
| 33.810976 | 118 | 0.588278 | 1,187 | 11,090 | 5.31171 | 0.103623 | 0.060904 | 0.036479 | 0.04885 | 0.84885 | 0.818715 | 0.800159 | 0.788422 | 0.788422 | 0.733069 | 0 | 0.01002 | 0.289089 | 11,090 | 327 | 119 | 33.914373 | 0.789701 | 0.058972 | 0 | 0.63786 | 0 | 0 | 0.106047 | 0.050083 | 0 | 0 | 0 | 0 | 0.065844 | 1 | 0.032922 | false | 0 | 0.053498 | 0 | 0.090535 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8053ee9849cbada17bdf127f561a167ecdceeb4f | 30,174 | py | Python | src/pyensae/languages/PigLexer.py | mohamedelkansouli/Ensae_py2 | e54a05f90c6aa6e2a5065eac9f9ec10aca64b46a | [
"MIT"
] | null | null | null | src/pyensae/languages/PigLexer.py | mohamedelkansouli/Ensae_py2 | e54a05f90c6aa6e2a5065eac9f9ec10aca64b46a | [
"MIT"
] | null | null | null | src/pyensae/languages/PigLexer.py | mohamedelkansouli/Ensae_py2 | e54a05f90c6aa6e2a5065eac9f9ec10aca64b46a | [
"MIT"
] | null | null | null | # Generated from \Pig.g4 by ANTLR 4.7
from antlr4 import *
from io import StringIO
from typing.io import TextIO
import sys
def serializedATN():
    with StringIO() as buf:
        buf.write("\3\u608b\ua72a\u8133\ub9ed\u417c\u3be7\u7786\u5964\2\\")
        buf.write("\u02e3\b\1\4\2\t\2\4\3\t\3\4\4\t\4\4\5\t\5\4\6\t\6\4\7")
        buf.write("\t\7\4\b\t\b\4\t\t\t\4\n\t\n\4\13\t\13\4\f\t\f\4\r\t\r")
        buf.write("\4\16\t\16\4\17\t\17\4\20\t\20\4\21\t\21\4\22\t\22\4\23")
        buf.write("\t\23\4\24\t\24\4\25\t\25\4\26\t\26\4\27\t\27\4\30\t\30")
        buf.write("\4\31\t\31\4\32\t\32\4\33\t\33\4\34\t\34\4\35\t\35\4\36")
        buf.write("\t\36\4\37\t\37\4 \t \4!\t!\4\"\t\"\4#\t#\4$\t$\4%\t%")
        buf.write("\4&\t&\4\'\t\'\4(\t(\4)\t)\4*\t*\4+\t+\4,\t,\4-\t-\4.")
        buf.write("\t.\4/\t/\4\60\t\60\4\61\t\61\4\62\t\62\4\63\t\63\4\64")
        buf.write("\t\64\4\65\t\65\4\66\t\66\4\67\t\67\48\t8\49\t9\4:\t:")
        buf.write("\4;\t;\4<\t<\4=\t=\4>\t>\4?\t?\4@\t@\4A\tA\4B\tB\4C\t")
        buf.write("C\4D\tD\4E\tE\4F\tF\4G\tG\4H\tH\4I\tI\4J\tJ\4K\tK\4L\t")
        buf.write("L\4M\tM\4N\tN\4O\tO\4P\tP\4Q\tQ\4R\tR\4S\tS\4T\tT\4U\t")
        buf.write("U\4V\tV\4W\tW\4X\tX\4Y\tY\4Z\tZ\4[\t[\4\\\t\\\4]\t]\4")
        buf.write("^\t^\4_\t_\4`\t`\4a\ta\4b\tb\3\2\3\2\3\2\3\2\3\2\3\2\3")
        buf.write("\2\3\3\3\3\3\3\3\3\3\3\3\4\3\4\3\4\3\4\3\4\3\4\3\4\3\5")
        buf.write("\3\5\3\5\3\5\3\5\3\5\3\5\3\5\3\6\3\6\3\6\3\6\3\6\3\6\3")
        buf.write("\7\3\7\3\7\3\7\3\7\3\7\3\7\3\7\3\b\3\b\3\b\3\b\3\b\3\b")
        buf.write("\3\b\3\b\3\b\3\t\3\t\3\t\3\t\3\t\3\t\3\t\3\t\3\n\3\n\3")
        buf.write("\n\3\n\3\n\3\13\3\13\3\13\3\13\3\13\3\13\3\f\3\f\3\f\3")
        buf.write("\f\3\f\3\f\3\r\3\r\3\r\3\r\3\r\3\r\3\16\3\16\3\16\3\16")
        buf.write("\3\16\3\17\3\17\3\17\3\20\3\20\3\20\3\20\3\21\3\21\3\21")
        buf.write("\3\21\3\22\3\22\3\22\3\23\3\23\3\23\3\24\3\24\3\24\3\24")
        buf.write("\3\24\3\24\3\25\3\25\3\25\3\25\3\25\3\25\3\26\3\26\3\26")
        buf.write("\3\26\3\26\3\26\3\27\3\27\3\27\3\27\3\27\3\27\3\27\3\27")
        buf.write("\3\27\3\30\3\30\3\31\3\31\3\31\3\31\3\31\3\31\3\31\3\31")
        buf.write("\3\31\3\32\3\32\3\32\3\32\3\32\3\32\3\32\3\32\3\32\3\32")
        buf.write("\3\33\3\33\3\33\3\33\3\33\3\33\3\34\3\34\3\34\3\34\3\35")
        buf.write("\3\35\3\35\3\36\3\36\3\36\3\36\3\37\3\37\3\37\3\37\3\37")
        buf.write("\3\37\3\37\3\37\3\37\3 \3 \3 \3 \3 \3 \3 \3 \3!\3!\3!")
        buf.write("\3!\3!\3\"\3\"\3\"\3\"\3#\3#\3#\3#\3#\3$\3$\3$\3$\3%\3")
        buf.write("%\3%\3%\3%\3&\3&\3&\3&\3&\3&\3\'\3\'\3\'\3\'\3\'\3\'\3")
        buf.write("\'\3(\3(\3(\3(\3(\3(\3(\3(\3(\3(\3)\3)\3)\3)\3)\3)\3)")
        buf.write("\3)\3)\3)\3*\3*\3*\3*\3+\3+\3+\3+\3+\3+\3,\3,\3,\3,\3")
        buf.write("-\3-\3-\3.\3.\3.\3.\3.\3/\3/\3/\3/\3/\3/\3/\3\60\3\60")
        buf.write("\3\60\3\60\3\60\3\60\3\60\3\60\3\61\3\61\3\61\3\61\3\61")
        buf.write("\3\61\3\62\3\62\3\62\3\62\3\62\3\62\3\62\3\62\3\62\3\62")
        buf.write("\3\63\3\63\3\63\3\63\3\63\3\64\3\64\3\64\3\64\3\64\3\64")
        buf.write("\3\65\3\65\3\65\3\65\3\65\3\65\3\66\3\66\3\66\3\66\3\66")
        buf.write("\3\66\3\66\3\67\3\67\3\67\3\67\3\67\3\67\3\67\38\38\3")
        buf.write("8\38\38\38\39\39\39\39\39\39\39\3:\3:\3:\3:\3:\3:\3;\3")
        buf.write(";\3;\3;\3;\3;\3;\3<\3<\3<\3<\3<\3=\3=\3=\3=\3=\3=\3>\3")
        buf.write(">\3>\3>\3>\3?\3?\3@\3@\3A\3A\3B\3B\3C\3C\3C\3C\3C\3C\7")
        buf.write("C\u0243\nC\fC\16C\u0246\13C\3D\3D\3D\5D\u024b\nD\3D\3")
        buf.write("D\5D\u024f\nD\3E\6E\u0252\nE\rE\16E\u0253\3F\3F\5F\u0258")
        buf.write("\nF\3G\3G\3G\5G\u025d\nG\3G\5G\u0260\nG\3H\3H\5H\u0264")
        buf.write("\nH\3I\3I\3I\3I\3I\3I\3I\3I\3I\3I\3I\7I\u0271\nI\fI\16")
        buf.write("I\u0274\13I\3I\3I\3J\3J\7J\u027a\nJ\fJ\16J\u027d\13J\3")
        buf.write("J\3J\3K\3K\3L\3L\3M\3M\3M\3M\7M\u0289\nM\fM\16M\u028c")
        buf.write("\13M\3N\3N\3N\3N\7N\u0292\nN\fN\16N\u0295\13N\3N\3N\3")
        buf.write("N\3O\3O\3O\3O\3O\3O\3O\3O\3O\3O\3O\3O\3O\3O\3O\3O\3O\3")
        buf.write("O\3O\3O\3O\3O\5O\u02b0\nO\3P\3P\3P\3P\3P\3P\3P\3P\3P\3")
        buf.write("P\5P\u02bc\nP\3Q\3Q\5Q\u02c0\nQ\3R\3R\3S\3S\3T\3T\3U\3")
        buf.write("U\3V\3V\3W\3W\3X\3X\3Y\3Y\3Z\3Z\3[\3[\3\\\3\\\3]\3]\3")
        buf.write("^\3^\3_\3_\3`\3`\3a\3a\3b\3b\4\u028a\u0293\2c\3\3\5\4")
        buf.write("\7\5\t\6\13\7\r\b\17\t\21\n\23\13\25\f\27\r\31\16\33\17")
        buf.write("\35\20\37\21!\22#\23%\24\'\25)\26+\27-\30/\31\61\32\63")
        buf.write("\33\65\34\67\359\36;\37= ?!A\"C#E$G%I&K\'M(O)Q*S+U,W-")
        buf.write("Y.[/]\60_\61a\62c\63e\64g\65i\66k\67m8o9q:s;u<w=y>{?}")
        buf.write("\2\177\2\u0081\2\u0083\2\u0085@\u0087\2\u0089A\u008bB")
        buf.write("\u008dC\u008fD\u0091E\u0093F\u0095G\u0097H\u0099I\u009b")
        buf.write("J\u009d\2\u009f\2\u00a1K\u00a3L\u00a5M\u00a7N\u00a9O\u00ab")
        buf.write("P\u00adQ\u00afR\u00b1S\u00b3T\u00b5U\u00b7V\u00b9W\u00bb")
        buf.write("X\u00bdY\u00bfZ\u00c1[\u00c3\\\3\2\16\4\2C\\c|\5\2//\61")
        buf.write("\61<<\4\2NNnn\4\2GGgg\4\2--//\4\2HHhh\6\2\f\f\17\17))")
        buf.write("^^\t\2))^^ddhhppttvv\5\2\62;CHch\3\2bb\5\2\13\f\16\17")
        buf.write("\"\"\4\2\f\f\17\17\2\u02f8\2\3\3\2\2\2\2\5\3\2\2\2\2\7")
        buf.write("\3\2\2\2\2\t\3\2\2\2\2\13\3\2\2\2\2\r\3\2\2\2\2\17\3\2")
        buf.write("\2\2\2\21\3\2\2\2\2\23\3\2\2\2\2\25\3\2\2\2\2\27\3\2\2")
        buf.write("\2\2\31\3\2\2\2\2\33\3\2\2\2\2\35\3\2\2\2\2\37\3\2\2\2")
        buf.write("\2!\3\2\2\2\2#\3\2\2\2\2%\3\2\2\2\2\'\3\2\2\2\2)\3\2\2")
        buf.write("\2\2+\3\2\2\2\2-\3\2\2\2\2/\3\2\2\2\2\61\3\2\2\2\2\63")
        buf.write("\3\2\2\2\2\65\3\2\2\2\2\67\3\2\2\2\29\3\2\2\2\2;\3\2\2")
        buf.write("\2\2=\3\2\2\2\2?\3\2\2\2\2A\3\2\2\2\2C\3\2\2\2\2E\3\2")
        buf.write("\2\2\2G\3\2\2\2\2I\3\2\2\2\2K\3\2\2\2\2M\3\2\2\2\2O\3")
        buf.write("\2\2\2\2Q\3\2\2\2\2S\3\2\2\2\2U\3\2\2\2\2W\3\2\2\2\2Y")
        buf.write("\3\2\2\2\2[\3\2\2\2\2]\3\2\2\2\2_\3\2\2\2\2a\3\2\2\2\2")
        buf.write("c\3\2\2\2\2e\3\2\2\2\2g\3\2\2\2\2i\3\2\2\2\2k\3\2\2\2")
        buf.write("\2m\3\2\2\2\2o\3\2\2\2\2q\3\2\2\2\2s\3\2\2\2\2u\3\2\2")
        buf.write("\2\2w\3\2\2\2\2y\3\2\2\2\2{\3\2\2\2\2\u0085\3\2\2\2\2")
        buf.write("\u0089\3\2\2\2\2\u008b\3\2\2\2\2\u008d\3\2\2\2\2\u008f")
        buf.write("\3\2\2\2\2\u0091\3\2\2\2\2\u0093\3\2\2\2\2\u0095\3\2\2")
        buf.write("\2\2\u0097\3\2\2\2\2\u0099\3\2\2\2\2\u009b\3\2\2\2\2\u00a1")
        buf.write("\3\2\2\2\2\u00a3\3\2\2\2\2\u00a5\3\2\2\2\2\u00a7\3\2\2")
        buf.write("\2\2\u00a9\3\2\2\2\2\u00ab\3\2\2\2\2\u00ad\3\2\2\2\2\u00af")
        buf.write("\3\2\2\2\2\u00b1\3\2\2\2\2\u00b3\3\2\2\2\2\u00b5\3\2\2")
        buf.write("\2\2\u00b7\3\2\2\2\2\u00b9\3\2\2\2\2\u00bb\3\2\2\2\2\u00bd")
        buf.write("\3\2\2\2\2\u00bf\3\2\2\2\2\u00c1\3\2\2\2\2\u00c3\3\2\2")
        buf.write("\2\3\u00c5\3\2\2\2\5\u00cc\3\2\2\2\7\u00d1\3\2\2\2\t\u00d8")
        buf.write("\3\2\2\2\13\u00e0\3\2\2\2\r\u00e6\3\2\2\2\17\u00ee\3\2")
        buf.write("\2\2\21\u00f7\3\2\2\2\23\u00ff\3\2\2\2\25\u0104\3\2\2")
        buf.write("\2\27\u010a\3\2\2\2\31\u0110\3\2\2\2\33\u0116\3\2\2\2")
        buf.write("\35\u011b\3\2\2\2\37\u011e\3\2\2\2!\u0122\3\2\2\2#\u0126")
        buf.write("\3\2\2\2%\u0129\3\2\2\2\'\u012c\3\2\2\2)\u0132\3\2\2\2")
        buf.write("+\u0138\3\2\2\2-\u013e\3\2\2\2/\u0147\3\2\2\2\61\u0149")
        buf.write("\3\2\2\2\63\u0152\3\2\2\2\65\u015c\3\2\2\2\67\u0162\3")
        buf.write("\2\2\29\u0166\3\2\2\2;\u0169\3\2\2\2=\u016d\3\2\2\2?\u0176")
        buf.write("\3\2\2\2A\u017e\3\2\2\2C\u0183\3\2\2\2E\u0187\3\2\2\2")
        buf.write("G\u018c\3\2\2\2I\u0190\3\2\2\2K\u0195\3\2\2\2M\u019b\3")
        buf.write("\2\2\2O\u01a2\3\2\2\2Q\u01ac\3\2\2\2S\u01b6\3\2\2\2U\u01ba")
        buf.write("\3\2\2\2W\u01c0\3\2\2\2Y\u01c4\3\2\2\2[\u01c7\3\2\2\2")
        buf.write("]\u01cc\3\2\2\2_\u01d3\3\2\2\2a\u01db\3\2\2\2c\u01e1\3")
        buf.write("\2\2\2e\u01eb\3\2\2\2g\u01f0\3\2\2\2i\u01f6\3\2\2\2k\u01fc")
        buf.write("\3\2\2\2m\u0203\3\2\2\2o\u020a\3\2\2\2q\u0210\3\2\2\2")
        buf.write("s\u0217\3\2\2\2u\u021d\3\2\2\2w\u0224\3\2\2\2y\u0229\3")
        buf.write("\2\2\2{\u022f\3\2\2\2}\u0234\3\2\2\2\177\u0236\3\2\2\2")
        buf.write("\u0081\u0238\3\2\2\2\u0083\u023a\3\2\2\2\u0085\u023c\3")
        buf.write("\2\2\2\u0087\u024e\3\2\2\2\u0089\u0251\3\2\2\2\u008b\u0255")
        buf.write("\3\2\2\2\u008d\u0259\3\2\2\2\u008f\u0261\3\2\2\2\u0091")
        buf.write("\u0265\3\2\2\2\u0093\u0277\3\2\2\2\u0095\u0280\3\2\2\2")
        buf.write("\u0097\u0282\3\2\2\2\u0099\u0284\3\2\2\2\u009b\u028d\3")
        buf.write("\2\2\2\u009d\u02af\3\2\2\2\u009f\u02bb\3\2\2\2\u00a1\u02bf")
        buf.write("\3\2\2\2\u00a3\u02c1\3\2\2\2\u00a5\u02c3\3\2\2\2\u00a7")
        buf.write("\u02c5\3\2\2\2\u00a9\u02c7\3\2\2\2\u00ab\u02c9\3\2\2\2")
        buf.write("\u00ad\u02cb\3\2\2\2\u00af\u02cd\3\2\2\2\u00b1\u02cf\3")
        buf.write("\2\2\2\u00b3\u02d1\3\2\2\2\u00b5\u02d3\3\2\2\2\u00b7\u02d5")
        buf.write("\3\2\2\2\u00b9\u02d7\3\2\2\2\u00bb\u02d9\3\2\2\2\u00bd")
        buf.write("\u02db\3\2\2\2\u00bf\u02dd\3\2\2\2\u00c1\u02df\3\2\2\2")
        buf.write("\u00c3\u02e1\3\2\2\2\u00c5\u00c6\7f\2\2\u00c6\u00c7\7")
        buf.write("g\2\2\u00c7\u00c8\7h\2\2\u00c8\u00c9\7k\2\2\u00c9\u00ca")
        buf.write("\7p\2\2\u00ca\u00cb\7g\2\2\u00cb\4\3\2\2\2\u00cc\u00cd")
        buf.write("\7n\2\2\u00cd\u00ce\7q\2\2\u00ce\u00cf\7c\2\2\u00cf\u00d0")
        buf.write("\7f\2\2\u00d0\6\3\2\2\2\u00d1\u00d2\7h\2\2\u00d2\u00d3")
        buf.write("\7k\2\2\u00d3\u00d4\7n\2\2\u00d4\u00d5\7v\2\2\u00d5\u00d6")
        buf.write("\7g\2\2\u00d6\u00d7\7t\2\2\u00d7\b\3\2\2\2\u00d8\u00d9")
        buf.write("\7h\2\2\u00d9\u00da\7q\2\2\u00da\u00db\7t\2\2\u00db\u00dc")
        buf.write("\7g\2\2\u00dc\u00dd\7c\2\2\u00dd\u00de\7e\2\2\u00de\u00df")
        buf.write("\7j\2\2\u00df\n\3\2\2\2\u00e0\u00e1\7q\2\2\u00e1\u00e2")
        buf.write("\7t\2\2\u00e2\u00e3\7f\2\2\u00e3\u00e4\7g\2\2\u00e4\u00e5")
        buf.write("\7t\2\2\u00e5\f\3\2\2\2\u00e6\u00e7\7c\2\2\u00e7\u00e8")
        buf.write("\7t\2\2\u00e8\u00e9\7t\2\2\u00e9\u00ea\7c\2\2\u00ea\u00eb")
        buf.write("\7p\2\2\u00eb\u00ec\7i\2\2\u00ec\u00ed\7g\2\2\u00ed\16")
        buf.write("\3\2\2\2\u00ee\u00ef\7f\2\2\u00ef\u00f0\7k\2\2\u00f0\u00f1")
        buf.write("\7u\2\2\u00f1\u00f2\7v\2\2\u00f2\u00f3\7k\2\2\u00f3\u00f4")
        buf.write("\7p\2\2\u00f4\u00f5\7e\2\2\u00f5\u00f6\7v\2\2\u00f6\20")
        buf.write("\3\2\2\2\u00f7\u00f8\7e\2\2\u00f8\u00f9\7q\2\2\u00f9\u00fa")
        buf.write("\7i\2\2\u00fa\u00fb\7t\2\2\u00fb\u00fc\7q\2\2\u00fc\u00fd")
        buf.write("\7w\2\2\u00fd\u00fe\7r\2\2\u00fe\22\3\2\2\2\u00ff\u0100")
        buf.write("\7l\2\2\u0100\u0101\7q\2\2\u0101\u0102\7k\2\2\u0102\u0103")
        buf.write("\7p\2\2\u0103\24\3\2\2\2\u0104\u0105\7e\2\2\u0105\u0106")
        buf.write("\7t\2\2\u0106\u0107\7q\2\2\u0107\u0108\7u\2\2\u0108\u0109")
        buf.write("\7u\2\2\u0109\26\3\2\2\2\u010a\u010b\7w\2\2\u010b\u010c")
        buf.write("\7p\2\2\u010c\u010d\7k\2\2\u010d\u010e\7q\2\2\u010e\u010f")
        buf.write("\7p\2\2\u010f\30\3\2\2\2\u0110\u0111\7u\2\2\u0111\u0112")
        buf.write("\7r\2\2\u0112\u0113\7n\2\2\u0113\u0114\7k\2\2\u0114\u0115")
        buf.write("\7v\2\2\u0115\32\3\2\2\2\u0116\u0117\7k\2\2\u0117\u0118")
        buf.write("\7p\2\2\u0118\u0119\7v\2\2\u0119\u011a\7q\2\2\u011a\34")
        buf.write("\3\2\2\2\u011b\u011c\7k\2\2\u011c\u011d\7h\2\2\u011d\36")
        buf.write("\3\2\2\2\u011e\u011f\7c\2\2\u011f\u0120\7n\2\2\u0120\u0121")
        buf.write("\7n\2\2\u0121 \3\2\2\2\u0122\u0123\7c\2\2\u0123\u0124")
        buf.write("\7p\2\2\u0124\u0125\7{\2\2\u0125\"\3\2\2\2\u0126\u0127")
        buf.write("\7c\2\2\u0127\u0128\7u\2\2\u0128$\3\2\2\2\u0129\u012a")
buf.write("\7d\2\2\u012a\u012b\7{\2\2\u012b&\3\2\2\2\u012c\u012d")
buf.write("\7w\2\2\u012d\u012e\7u\2\2\u012e\u012f\7k\2\2\u012f\u0130")
buf.write("\7p\2\2\u0130\u0131\7i\2\2\u0131(\3\2\2\2\u0132\u0133")
buf.write("\7k\2\2\u0133\u0134\7p\2\2\u0134\u0135\7p\2\2\u0135\u0136")
buf.write("\7g\2\2\u0136\u0137\7t\2\2\u0137*\3\2\2\2\u0138\u0139")
buf.write("\7q\2\2\u0139\u013a\7w\2\2\u013a\u013b\7v\2\2\u013b\u013c")
buf.write("\7g\2\2\u013c\u013d\7t\2\2\u013d,\3\2\2\2\u013e\u013f")
buf.write("\7Q\2\2\u013f\u0140\7P\2\2\u0140\u0141\7U\2\2\u0141\u0142")
buf.write("\7E\2\2\u0142\u0143\7J\2\2\u0143\u0144\7G\2\2\u0144\u0145")
buf.write("\7O\2\2\u0145\u0146\7C\2\2\u0146.\3\2\2\2\u0147\u0148")
buf.write("\7,\2\2\u0148\60\3\2\2\2\u0149\u014a\7r\2\2\u014a\u014b")
buf.write("\7c\2\2\u014b\u014c\7t\2\2\u014c\u014d\7c\2\2\u014d\u014e")
buf.write("\7n\2\2\u014e\u014f\7n\2\2\u014f\u0150\7g\2\2\u0150\u0151")
buf.write("\7n\2\2\u0151\62\3\2\2\2\u0152\u0153\7r\2\2\u0153\u0154")
buf.write("\7c\2\2\u0154\u0155\7t\2\2\u0155\u0156\7v\2\2\u0156\u0157")
buf.write("\7k\2\2\u0157\u0158\7v\2\2\u0158\u0159\7k\2\2\u0159\u015a")
buf.write("\7q\2\2\u015a\u015b\7p\2\2\u015b\64\3\2\2\2\u015c\u015d")
buf.write("\7i\2\2\u015d\u015e\7t\2\2\u015e\u015f\7q\2\2\u015f\u0160")
buf.write("\7w\2\2\u0160\u0161\7r\2\2\u0161\66\3\2\2\2\u0162\u0163")
buf.write("\7c\2\2\u0163\u0164\7p\2\2\u0164\u0165\7f\2\2\u01658\3")
buf.write("\2\2\2\u0166\u0167\7q\2\2\u0167\u0168\7t\2\2\u0168:\3")
buf.write("\2\2\2\u0169\u016a\7p\2\2\u016a\u016b\7q\2\2\u016b\u016c")
buf.write("\7v\2\2\u016c<\3\2\2\2\u016d\u016e\7i\2\2\u016e\u016f")
buf.write("\7g\2\2\u016f\u0170\7p\2\2\u0170\u0171\7g\2\2\u0171\u0172")
buf.write("\7t\2\2\u0172\u0173\7c\2\2\u0173\u0174\7v\2\2\u0174\u0175")
buf.write("\7g\2\2\u0175>\3\2\2\2\u0176\u0177\7h\2\2\u0177\u0178")
buf.write("\7n\2\2\u0178\u0179\7c\2\2\u0179\u017a\7v\2\2\u017a\u017b")
buf.write("\7v\2\2\u017b\u017c\7g\2\2\u017c\u017d\7p\2\2\u017d@\3")
buf.write("\2\2\2\u017e\u017f\7g\2\2\u017f\u0180\7x\2\2\u0180\u0181")
buf.write("\7c\2\2\u0181\u0182\7n\2\2\u0182B\3\2\2\2\u0183\u0184")
buf.write("\7c\2\2\u0184\u0185\7u\2\2\u0185\u0186\7e\2\2\u0186D\3")
buf.write("\2\2\2\u0187\u0188\7f\2\2\u0188\u0189\7g\2\2\u0189\u018a")
buf.write("\7u\2\2\u018a\u018b\7e\2\2\u018bF\3\2\2\2\u018c\u018d")
buf.write("\7k\2\2\u018d\u018e\7p\2\2\u018e\u018f\7v\2\2\u018fH\3")
buf.write("\2\2\2\u0190\u0191\7n\2\2\u0191\u0192\7q\2\2\u0192\u0193")
buf.write("\7p\2\2\u0193\u0194\7i\2\2\u0194J\3\2\2\2\u0195\u0196")
buf.write("\7h\2\2\u0196\u0197\7n\2\2\u0197\u0198\7q\2\2\u0198\u0199")
buf.write("\7c\2\2\u0199\u019a\7v\2\2\u019aL\3\2\2\2\u019b\u019c")
buf.write("\7f\2\2\u019c\u019d\7q\2\2\u019d\u019e\7w\2\2\u019e\u019f")
buf.write("\7d\2\2\u019f\u01a0\7n\2\2\u01a0\u01a1\7g\2\2\u01a1N\3")
buf.write("\2\2\2\u01a2\u01a3\7e\2\2\u01a3\u01a4\7j\2\2\u01a4\u01a5")
buf.write("\7c\2\2\u01a5\u01a6\7t\2\2\u01a6\u01a7\7c\2\2\u01a7\u01a8")
buf.write("\7t\2\2\u01a8\u01a9\7t\2\2\u01a9\u01aa\7c\2\2\u01aa\u01ab")
buf.write("\7{\2\2\u01abP\3\2\2\2\u01ac\u01ad\7d\2\2\u01ad\u01ae")
buf.write("\7{\2\2\u01ae\u01af\7v\2\2\u01af\u01b0\7g\2\2\u01b0\u01b1")
buf.write("\7c\2\2\u01b1\u01b2\7t\2\2\u01b2\u01b3\7t\2\2\u01b3\u01b4")
buf.write("\7c\2\2\u01b4\u01b5\7{\2\2\u01b5R\3\2\2\2\u01b6\u01b7")
buf.write("\7d\2\2\u01b7\u01b8\7c\2\2\u01b8\u01b9\7i\2\2\u01b9T\3")
buf.write("\2\2\2\u01ba\u01bb\7v\2\2\u01bb\u01bc\7w\2\2\u01bc\u01bd")
buf.write("\7r\2\2\u01bd\u01be\7n\2\2\u01be\u01bf\7g\2\2\u01bfV\3")
buf.write("\2\2\2\u01c0\u01c1\7o\2\2\u01c1\u01c2\7c\2\2\u01c2\u01c3")
buf.write("\7r\2\2\u01c3X\3\2\2\2\u01c4\u01c5\7k\2\2\u01c5\u01c6")
buf.write("\7u\2\2\u01c6Z\3\2\2\2\u01c7\u01c8\7p\2\2\u01c8\u01c9")
buf.write("\7w\2\2\u01c9\u01ca\7n\2\2\u01ca\u01cb\7n\2\2\u01cb\\")
buf.write("\3\2\2\2\u01cc\u01cd\7u\2\2\u01cd\u01ce\7v\2\2\u01ce\u01cf")
buf.write("\7t\2\2\u01cf\u01d0\7g\2\2\u01d0\u01d1\7c\2\2\u01d1\u01d2")
buf.write("\7o\2\2\u01d2^\3\2\2\2\u01d3\u01d4\7v\2\2\u01d4\u01d5")
buf.write("\7j\2\2\u01d5\u01d6\7t\2\2\u01d6\u01d7\7q\2\2\u01d7\u01d8")
buf.write("\7w\2\2\u01d8\u01d9\7i\2\2\u01d9\u01da\7j\2\2\u01da`\3")
buf.write("\2\2\2\u01db\u01dc\7u\2\2\u01dc\u01dd\7v\2\2\u01dd\u01de")
buf.write("\7q\2\2\u01de\u01df\7t\2\2\u01df\u01e0\7g\2\2\u01e0b\3")
buf.write("\2\2\2\u01e1\u01e2\7o\2\2\u01e2\u01e3\7c\2\2\u01e3\u01e4")
buf.write("\7r\2\2\u01e4\u01e5\7t\2\2\u01e5\u01e6\7g\2\2\u01e6\u01e7")
buf.write("\7f\2\2\u01e7\u01e8\7w\2\2\u01e8\u01e9\7e\2\2\u01e9\u01ea")
buf.write("\7g\2\2\u01ead\3\2\2\2\u01eb\u01ec\7u\2\2\u01ec\u01ed")
buf.write("\7j\2\2\u01ed\u01ee\7k\2\2\u01ee\u01ef\7r\2\2\u01eff\3")
buf.write("\2\2\2\u01f0\u01f1\7e\2\2\u01f1\u01f2\7c\2\2\u01f2\u01f3")
buf.write("\7e\2\2\u01f3\u01f4\7j\2\2\u01f4\u01f5\7g\2\2\u01f5h\3")
buf.write("\2\2\2\u01f6\u01f7\7k\2\2\u01f7\u01f8\7p\2\2\u01f8\u01f9")
buf.write("\7r\2\2\u01f9\u01fa\7w\2\2\u01fa\u01fb\7v\2\2\u01fbj\3")
buf.write("\2\2\2\u01fc\u01fd\7q\2\2\u01fd\u01fe\7w\2\2\u01fe\u01ff")
buf.write("\7v\2\2\u01ff\u0200\7r\2\2\u0200\u0201\7w\2\2\u0201\u0202")
buf.write("\7v\2\2\u0202l\3\2\2\2\u0203\u0204\7u\2\2\u0204\u0205")
buf.write("\7v\2\2\u0205\u0206\7f\2\2\u0206\u0207\7g\2\2\u0207\u0208")
buf.write("\7t\2\2\u0208\u0209\7t\2\2\u0209n\3\2\2\2\u020a\u020b")
buf.write("\7u\2\2\u020b\u020c\7v\2\2\u020c\u020d\7f\2\2\u020d\u020e")
buf.write("\7k\2\2\u020e\u020f\7p\2\2\u020fp\3\2\2\2\u0210\u0211")
buf.write("\7u\2\2\u0211\u0212\7v\2\2\u0212\u0213\7f\2\2\u0213\u0214")
buf.write("\7q\2\2\u0214\u0215\7w\2\2\u0215\u0216\7v\2\2\u0216r\3")
buf.write("\2\2\2\u0217\u0218\7n\2\2\u0218\u0219\7k\2\2\u0219\u021a")
buf.write("\7o\2\2\u021a\u021b\7k\2\2\u021b\u021c\7v\2\2\u021ct\3")
buf.write("\2\2\2\u021d\u021e\7u\2\2\u021e\u021f\7c\2\2\u021f\u0220")
buf.write("\7o\2\2\u0220\u0221\7r\2\2\u0221\u0222\7n\2\2\u0222\u0223")
buf.write("\7g\2\2\u0223v\3\2\2\2\u0224\u0225\7n\2\2\u0225\u0226")
buf.write("\7g\2\2\u0226\u0227\7h\2\2\u0227\u0228\7v\2\2\u0228x\3")
buf.write("\2\2\2\u0229\u022a\7t\2\2\u022a\u022b\7k\2\2\u022b\u022c")
buf.write("\7i\2\2\u022c\u022d\7j\2\2\u022d\u022e\7v\2\2\u022ez\3")
buf.write("\2\2\2\u022f\u0230\7h\2\2\u0230\u0231\7w\2\2\u0231\u0232")
buf.write("\7n\2\2\u0232\u0233\7n\2\2\u0233|\3\2\2\2\u0234\u0235")
buf.write("\4\62;\2\u0235~\3\2\2\2\u0236\u0237\t\2\2\2\u0237\u0080")
buf.write("\3\2\2\2\u0238\u0239\7a\2\2\u0239\u0082\3\2\2\2\u023a")
buf.write("\u023b\t\3\2\2\u023b\u0084\3\2\2\2\u023c\u0244\5\177@")
buf.write("\2\u023d\u0243\5}?\2\u023e\u0243\5\177@\2\u023f\u0243")
buf.write("\5\u0081A\2\u0240\u0241\7<\2\2\u0241\u0243\7<\2\2\u0242")
buf.write("\u023d\3\2\2\2\u0242\u023e\3\2\2\2\u0242\u023f\3\2\2\2")
buf.write("\u0242\u0240\3\2\2\2\u0243\u0246\3\2\2\2\u0244\u0242\3")
buf.write("\2\2\2\u0244\u0245\3\2\2\2\u0245\u0086\3\2\2\2\u0246\u0244")
buf.write("\3\2\2\2\u0247\u024a\5\u0089E\2\u0248\u0249\7\60\2\2\u0249")
buf.write("\u024b\5\u0089E\2\u024a\u0248\3\2\2\2\u024a\u024b\3\2")
buf.write("\2\2\u024b\u024f\3\2\2\2\u024c\u024d\7\60\2\2\u024d\u024f")
buf.write("\5\u0089E\2\u024e\u0247\3\2\2\2\u024e\u024c\3\2\2\2\u024f")
buf.write("\u0088\3\2\2\2\u0250\u0252\5}?\2\u0251\u0250\3\2\2\2\u0252")
buf.write("\u0253\3\2\2\2\u0253\u0251\3\2\2\2\u0253\u0254\3\2\2\2")
buf.write("\u0254\u008a\3\2\2\2\u0255\u0257\5\u0089E\2\u0256\u0258")
buf.write("\t\4\2\2\u0257\u0256\3\2\2\2\u0257\u0258\3\2\2\2\u0258")
buf.write("\u008c\3\2\2\2\u0259\u025f\5\u0087D\2\u025a\u025c\t\5")
buf.write("\2\2\u025b\u025d\t\6\2\2\u025c\u025b\3\2\2\2\u025c\u025d")
buf.write("\3\2\2\2\u025d\u025e\3\2\2\2\u025e\u0260\5\u0087D\2\u025f")
buf.write("\u025a\3\2\2\2\u025f\u0260\3\2\2\2\u0260\u008e\3\2\2\2")
buf.write("\u0261\u0263\5\u008dG\2\u0262\u0264\t\7\2\2\u0263\u0262")
buf.write("\3\2\2\2\u0263\u0264\3\2\2\2\u0264\u0090\3\2\2\2\u0265")
buf.write("\u0272\7)\2\2\u0266\u0271\n\b\2\2\u0267\u0268\7^\2\2\u0268")
buf.write("\u0271\t\t\2\2\u0269\u026a\7^\2\2\u026a\u026b\7w\2\2\u026b")
buf.write("\u026c\3\2\2\2\u026c\u026d\t\n\2\2\u026d\u026e\t\n\2\2")
buf.write("\u026e\u026f\t\n\2\2\u026f\u0271\t\n\2\2\u0270\u0266\3")
buf.write("\2\2\2\u0270\u0267\3\2\2\2\u0270\u0269\3\2\2\2\u0271\u0274")
buf.write("\3\2\2\2\u0272\u0270\3\2\2\2\u0272\u0273\3\2\2\2\u0273")
buf.write("\u0275\3\2\2\2\u0274\u0272\3\2\2\2\u0275\u0276\7)\2\2")
buf.write("\u0276\u0092\3\2\2\2\u0277\u027b\7b\2\2\u0278\u027a\n")
buf.write("\13\2\2\u0279\u0278\3\2\2\2\u027a\u027d\3\2\2\2\u027b")
buf.write("\u0279\3\2\2\2\u027b\u027c\3\2\2\2\u027c\u027e\3\2\2\2")
buf.write("\u027d\u027b\3\2\2\2\u027e\u027f\7b\2\2\u027f\u0094\3")
buf.write("\2\2\2\u0280\u0281\7&\2\2\u0281\u0096\3\2\2\2\u0282\u0283")
buf.write("\t\f\2\2\u0283\u0098\3\2\2\2\u0284\u0285\7/\2\2\u0285")
buf.write("\u0286\7/\2\2\u0286\u028a\3\2\2\2\u0287\u0289\n\r\2\2")
buf.write("\u0288\u0287\3\2\2\2\u0289\u028c\3\2\2\2\u028a\u028b\3")
buf.write("\2\2\2\u028a\u0288\3\2\2\2\u028b\u009a\3\2\2\2\u028c\u028a")
buf.write("\3\2\2\2\u028d\u028e\7\61\2\2\u028e\u028f\7,\2\2\u028f")
buf.write("\u0293\3\2\2\2\u0290\u0292\13\2\2\2\u0291\u0290\3\2\2")
buf.write("\2\u0292\u0295\3\2\2\2\u0293\u0294\3\2\2\2\u0293\u0291")
buf.write("\3\2\2\2\u0294\u0296\3\2\2\2\u0295\u0293\3\2\2\2\u0296")
buf.write("\u0297\7,\2\2\u0297\u0298\7\61\2\2\u0298\u009c\3\2\2\2")
buf.write("\u0299\u029a\7g\2\2\u029a\u02b0\7s\2\2\u029b\u029c\7i")
buf.write("\2\2\u029c\u02b0\7v\2\2\u029d\u029e\7n\2\2\u029e\u02b0")
buf.write("\7v\2\2\u029f\u02a0\7i\2\2\u02a0\u02a1\7v\2\2\u02a1\u02b0")
buf.write("\7g\2\2\u02a2\u02a3\7n\2\2\u02a3\u02a4\7v\2\2\u02a4\u02b0")
buf.write("\7g\2\2\u02a5\u02a6\7p\2\2\u02a6\u02a7\7g\2\2\u02a7\u02b0")
buf.write("\7s\2\2\u02a8\u02a9\7o\2\2\u02a9\u02aa\7c\2\2\u02aa\u02ab")
buf.write("\7v\2\2\u02ab\u02ac\7e\2\2\u02ac\u02ad\7j\2\2\u02ad\u02ae")
buf.write("\7g\2\2\u02ae\u02b0\7u\2\2\u02af\u0299\3\2\2\2\u02af\u029b")
buf.write("\3\2\2\2\u02af\u029d\3\2\2\2\u02af\u029f\3\2\2\2\u02af")
buf.write("\u02a2\3\2\2\2\u02af\u02a5\3\2\2\2\u02af\u02a8\3\2\2\2")
buf.write("\u02b0\u009e\3\2\2\2\u02b1\u02b2\7?\2\2\u02b2\u02bc\7")
buf.write("?\2\2\u02b3\u02bc\7>\2\2\u02b4\u02b5\7>\2\2\u02b5\u02bc")
buf.write("\7?\2\2\u02b6\u02bc\7@\2\2\u02b7\u02b8\7@\2\2\u02b8\u02bc")
buf.write("\7?\2\2\u02b9\u02ba\7#\2\2\u02ba\u02bc\7?\2\2\u02bb\u02b1")
buf.write("\3\2\2\2\u02bb\u02b3\3\2\2\2\u02bb\u02b4\3\2\2\2\u02bb")
buf.write("\u02b6\3\2\2\2\u02bb\u02b7\3\2\2\2\u02bb\u02b9\3\2\2\2")
buf.write("\u02bc\u00a0\3\2\2\2\u02bd\u02c0\5\u009dO\2\u02be\u02c0")
buf.write("\5\u009fP\2\u02bf\u02bd\3\2\2\2\u02bf\u02be\3\2\2\2\u02c0")
buf.write("\u00a2\3\2\2\2\u02c1\u02c2\7<\2\2\u02c2\u00a4\3\2\2\2")
buf.write("\u02c3\u02c4\7=\2\2\u02c4\u00a6\3\2\2\2\u02c5\u02c6\7")
buf.write("*\2\2\u02c6\u00a8\3\2\2\2\u02c7\u02c8\7+\2\2\u02c8\u00aa")
buf.write("\3\2\2\2\u02c9\u02ca\7}\2\2\u02ca\u00ac\3\2\2\2\u02cb")
buf.write("\u02cc\7\177\2\2\u02cc\u00ae\3\2\2\2\u02cd\u02ce\7]\2")
buf.write("\2\u02ce\u00b0\3\2\2\2\u02cf\u02d0\7_\2\2\u02d0\u00b2")
buf.write("\3\2\2\2\u02d1\u02d2\7%\2\2\u02d2\u00b4\3\2\2\2\u02d3")
buf.write("\u02d4\7?\2\2\u02d4\u00b6\3\2\2\2\u02d5\u02d6\7.\2\2\u02d6")
buf.write("\u00b8\3\2\2\2\u02d7\u02d8\7\60\2\2\u02d8\u00ba\3\2\2")
buf.write("\2\u02d9\u02da\7\61\2\2\u02da\u00bc\3\2\2\2\u02db\u02dc")
buf.write("\7\'\2\2\u02dc\u00be\3\2\2\2\u02dd\u02de\7-\2\2\u02de")
buf.write("\u00c0\3\2\2\2\u02df\u02e0\7/\2\2\u02e0\u00c2\3\2\2\2")
buf.write("\u02e1\u02e2\7A\2\2\u02e2\u00c4\3\2\2\2\24\2\u0242\u0244")
buf.write("\u024a\u024e\u0253\u0257\u025c\u025f\u0263\u0270\u0272")
buf.write("\u027b\u028a\u0293\u02af\u02bb\u02bf\2")
        return buf.getvalue()


class PigLexer(Lexer):

    atn = ATNDeserializer().deserialize(serializedATN())

    decisionsToDFA = [DFA(ds, i) for i, ds in enumerate(atn.decisionToState)]
    DEFINE = 1
    LOAD = 2
    FILTER = 3
    FOREACH = 4
    ORDER = 5
    ARRANGE = 6
    DISTINCT = 7
    COGROUP = 8
    JOIN = 9
    CROSS = 10
    UNION = 11
    SPLIT = 12
    INTO = 13
    IF = 14
    ALL = 15
    ANY = 16
    AS = 17
    BY = 18
    USING = 19
    INNER = 20
    OUTER = 21
    ONSCHEMA = 22
    STAR = 23
    PARALLEL = 24
    PARTITION = 25
    GROUP = 26
    AND = 27
    OR = 28
    NOT = 29
    GENERATE = 30
    FLATTEN = 31
    EVAL = 32
    ASC = 33
    DESC = 34
    INT = 35
    LONG = 36
    FLOAT = 37
    DOUBLE = 38
    CHARARRAY = 39
    BYTEARRAY = 40
    BAG = 41
    TUPLE = 42
    MAP = 43
    IS = 44
    NULL = 45
    STREAM = 46
    THROUGH = 47
    STORE = 48
    MAPREDUCE = 49
    SHIP = 50
    CACHE = 51
    INPUT = 52
    OUTPUT = 53
    ERROR = 54
    STDIN = 55
    STDOUT = 56
    LIMIT = 57
    SAMPLE = 58
    LEFT = 59
    RIGHT = 60
    FULL = 61
    IDENTIFIER = 62
    INTEGER = 63
    LONGINTEGER = 64
    DOUBLENUMBER = 65
    FLOATNUMBER = 66
    QUOTEDSTRING = 67
    EXECCOMMAND = 68
    DOLLAR = 69
    WS = 70
    SL_COMMENT = 71
    ML_COMMENT = 72
    FILTEROP = 73
    COLON = 74
    SEMI_COLON = 75
    LEFT_PAREN = 76
    RIGHT_PAREN = 77
    LEFT_CURLYP = 78
    RIGHT_CURLYP = 79
    LEFT_BRACKET = 80
    RIGHT_BRACKET = 81
    POUND = 82
    EQUAL = 83
    COMMA = 84
    PERIOD = 85
    DIV = 86
    PERCENT = 87
    PLUS = 88
    MINUS = 89
    QMARK = 90
    channelNames = [u"DEFAULT_TOKEN_CHANNEL", u"HIDDEN"]

    modeNames = ["DEFAULT_MODE"]

    literalNames = ["<INVALID>",
            "'define'", "'load'", "'filter'", "'foreach'", "'order'", "'arrange'",
            "'distinct'", "'cogroup'", "'join'", "'cross'", "'union'", "'split'",
            "'into'", "'if'", "'all'", "'any'", "'as'", "'by'", "'using'",
            "'inner'", "'outer'", "'ONSCHEMA'", "'*'", "'parallel'", "'partition'",
            "'group'", "'and'", "'or'", "'not'", "'generate'", "'flatten'",
            "'eval'", "'asc'", "'desc'", "'int'", "'long'", "'float'", "'double'",
            "'chararray'", "'bytearray'", "'bag'", "'tuple'", "'map'", "'is'",
            "'null'", "'stream'", "'through'", "'store'", "'mapreduce'",
            "'ship'", "'cache'", "'input'", "'output'", "'stderr'", "'stdin'",
            "'stdout'", "'limit'", "'sample'", "'left'", "'right'", "'full'",
            "'$'", "':'", "';'", "'('", "')'", "'{'", "'}'", "'['", "']'",
            "'#'", "'='", "','", "'.'", "'/'", "'%'", "'+'", "'-'", "'?'"]

    symbolicNames = ["<INVALID>",
            "DEFINE", "LOAD", "FILTER", "FOREACH", "ORDER", "ARRANGE", "DISTINCT",
            "COGROUP", "JOIN", "CROSS", "UNION", "SPLIT", "INTO", "IF",
            "ALL", "ANY", "AS", "BY", "USING", "INNER", "OUTER", "ONSCHEMA",
            "STAR", "PARALLEL", "PARTITION", "GROUP", "AND", "OR", "NOT",
            "GENERATE", "FLATTEN", "EVAL", "ASC", "DESC", "INT", "LONG",
            "FLOAT", "DOUBLE", "CHARARRAY", "BYTEARRAY", "BAG", "TUPLE",
            "MAP", "IS", "NULL", "STREAM", "THROUGH", "STORE", "MAPREDUCE",
            "SHIP", "CACHE", "INPUT", "OUTPUT", "ERROR", "STDIN", "STDOUT",
            "LIMIT", "SAMPLE", "LEFT", "RIGHT", "FULL", "IDENTIFIER", "INTEGER",
            "LONGINTEGER", "DOUBLENUMBER", "FLOATNUMBER", "QUOTEDSTRING",
            "EXECCOMMAND", "DOLLAR", "WS", "SL_COMMENT", "ML_COMMENT", "FILTEROP",
            "COLON", "SEMI_COLON", "LEFT_PAREN", "RIGHT_PAREN", "LEFT_CURLYP",
            "RIGHT_CURLYP", "LEFT_BRACKET", "RIGHT_BRACKET", "POUND", "EQUAL",
            "COMMA", "PERIOD", "DIV", "PERCENT", "PLUS", "MINUS", "QMARK"]

    ruleNames = ["DEFINE", "LOAD", "FILTER", "FOREACH", "ORDER", "ARRANGE",
                 "DISTINCT", "COGROUP", "JOIN", "CROSS", "UNION", "SPLIT",
                 "INTO", "IF", "ALL", "ANY", "AS", "BY", "USING", "INNER",
                 "OUTER", "ONSCHEMA", "STAR", "PARALLEL", "PARTITION",
                 "GROUP", "AND", "OR", "NOT", "GENERATE", "FLATTEN", "EVAL",
                 "ASC", "DESC", "INT", "LONG", "FLOAT", "DOUBLE", "CHARARRAY",
                 "BYTEARRAY", "BAG", "TUPLE", "MAP", "IS", "NULL", "STREAM",
                 "THROUGH", "STORE", "MAPREDUCE", "SHIP", "CACHE", "INPUT",
                 "OUTPUT", "ERROR", "STDIN", "STDOUT", "LIMIT", "SAMPLE",
                 "LEFT", "RIGHT", "FULL", "DIGIT", "LETTER", "SPECIALCHAR",
                 "FSSPECIALCHAR", "IDENTIFIER", "FLOATINGPOINT", "INTEGER",
                 "LONGINTEGER", "DOUBLENUMBER", "FLOATNUMBER", "QUOTEDSTRING",
                 "EXECCOMMAND", "DOLLAR", "WS", "SL_COMMENT", "ML_COMMENT",
                 "STRFILTEROP", "NUMFILTEROP", "FILTEROP", "COLON", "SEMI_COLON",
                 "LEFT_PAREN", "RIGHT_PAREN", "LEFT_CURLYP", "RIGHT_CURLYP",
                 "LEFT_BRACKET", "RIGHT_BRACKET", "POUND", "EQUAL", "COMMA",
                 "PERIOD", "DIV", "PERCENT", "PLUS", "MINUS", "QMARK"]

    grammarFileName = "Pig.g4"

    def __init__(self, input=None, output: TextIO = sys.stdout):
        super().__init__(input, output)
        self.checkVersion("4.7")
        self._interp = LexerATNSimulator(
            self, self.atn, self.decisionsToDFA, PredictionContextCache())
        self._actions = None
        self._predicates = None
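The generated lexer exposes two parallel tables, `literalNames` and `symbolicNames`, both indexed by the numeric token type constants above. A minimal illustrative sketch of how client code typically maps a token type back to a printable name; the tiny tables here are a hand-made excerpt (matching DEFINE = 1, LOAD = 2, FILTER = 3), not the full generated ones, and `display_name` is a hypothetical helper, not part of the ANTLR runtime:

```python
# Hand-made excerpt of the parallel name tables generated by ANTLR.
LITERAL_NAMES = ["<INVALID>", "'define'", "'load'", "'filter'"]
SYMBOLIC_NAMES = ["<INVALID>", "DEFINE", "LOAD", "FILTER"]


def display_name(token_type: int) -> str:
    """Prefer the literal spelling; fall back to the symbolic name."""
    if 0 <= token_type < len(LITERAL_NAMES) and LITERAL_NAMES[token_type] != "<INVALID>":
        return LITERAL_NAMES[token_type]
    if 0 <= token_type < len(SYMBOLIC_NAMES):
        return SYMBOLIC_NAMES[token_type]
    return "<UNKNOWN>"


print(display_name(2))  # 'load'
```

This mirrors how ANTLR runtimes render tokens in error messages: literal spelling when one exists, symbolic name otherwise.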
# ---------------------------------------------------------------------------
# File: dfirtrack_config/tests/main/test_main_config_forms.py
# Repo: thomas-kropeit/dfirtrack (license: Apache-2.0)
# ---------------------------------------------------------------------------
from dfirtrack_artifacts.models import Artifactstatus
from dfirtrack_config.forms import MainConfigForm
from dfirtrack_main.models import Casestatus
class MainConfigFormTestCase(TestCase):
"""main config form tests"""
@classmethod
def setUpTestData(cls):
pass
def test_main_config_system_name_editable_form_label(self):
"""test form label"""
# get object
form = MainConfigForm()
# compare
self.assertEqual(
form.fields['system_name_editable'].label,
'Make system name editable (may require service restart)',
)
def test_main_config_capitalization_form_label(self):
"""test form label"""
# get object
form = MainConfigForm()
# compare
self.assertEqual(
form.fields['capitalization'].label, 'Capitalization of system names'
)
def test_main_config_main_overview_form_label(self):
"""test form label"""
# get object
form = MainConfigForm()
# compare
self.assertEqual(form.fields['main_overview'].label, 'Main overview page')
def test_main_config_artifactstatus_form_label(self):
"""test form label"""
# get object
form = MainConfigForm()
# compare
self.assertEqual(
form.fields['artifactstatus_open'].label,
'Artifactstatus to be considered open',
)
self.assertEqual(
form.fields['artifactstatus_requested'].label,
'Artifactstatus setting the artifact requested time',
)
self.assertEqual(
form.fields['artifactstatus_acquisition'].label,
'Artifactstatus setting the artifact acquisition time',
)
def test_main_config_casestatus_form_label(self):
"""test form label"""
# get object
form = MainConfigForm()
# compare
self.assertEqual(
form.fields['casestatus_open'].label, 'Casestatus to be considered open'
)
self.assertEqual(
form.fields['casestatus_start'].label,
'Casestatus setting the case start time',
)
self.assertEqual(
form.fields['casestatus_end'].label, 'Casestatus setting the case end time'
)
def test_main_config_statushistory_entry_numbers_form_label(self):
"""test form label"""
# get object
form = MainConfigForm()
# compare
self.assertEqual(
form.fields['statushistory_entry_numbers'].label,
'Show only this number of last statushistory entries',
)
def test_main_config_cron_form_label(self):
"""test form label"""
# get object
form = MainConfigForm()
# compare
self.assertEqual(
form.fields['cron_export_path'].label,
'Export files created by scheduled tasks to this path',
)
self.assertEqual(
form.fields['cron_username'].label,
'Use this username for scheduled tasks (just for logging, does not have to exist)',
)
def test_main_config_form_empty(self):
"""test minimum form requirements / INVALID"""
# get object
form = MainConfigForm(
data={
'cron_export_path': '/tmp',
}
)
# compare
self.assertFalse(form.is_valid())
def test_main_config_form_statushistory_entry_numbers_filled(self):
"""test minimum form requirements / INVALID"""
# get object
form = MainConfigForm(
data={
'statushistory_entry_numbers': 9,
'cron_export_path': '/tmp',
}
)
# compare
self.assertFalse(form.is_valid())
def test_main_config_form_cron_export_path_filled(self):
"""test minimum form requirements / INVALID"""
# get object
form = MainConfigForm(
data={
'statushistory_entry_numbers': 8,
'cron_export_path': '/tmp',
}
)
# compare
self.assertFalse(form.is_valid())
def test_main_config_form_cron_username_filled(self):
"""test minimum form requirements / INVALID"""
# get object
form = MainConfigForm(
data={
'statushistory_entry_numbers': 7,
'cron_export_path': '/tmp',
'cron_username': 'cron',
}
)
# compare
self.assertFalse(form.is_valid())
def test_main_config_form_capitalization_filled(self):
"""test minimum form requirements / INVALID"""
# get object
form = MainConfigForm(
data={
'statushistory_entry_numbers': 7,
'cron_export_path': '/tmp',
'cron_username': 'cron',
'capitalization': 'capitalization_keep',
}
)
# compare
self.assertFalse(form.is_valid())
def test_main_config_form_main_overview_filled(self):
"""test minimum form requirements / VALID"""
# get object
form = MainConfigForm(
data={
'statushistory_entry_numbers': 6,
'cron_export_path': '/tmp',
'cron_username': 'cron',
'capitalization': 'capitalization_keep',
'main_overview': 'main_overview_system',
}
)
# compare
self.assertTrue(form.is_valid())
def test_main_config_form_different_artifactstatus(self):
"""test custom field validation"""
        # create objects
artifactstatus_1 = Artifactstatus.objects.create(
artifactstatus_name='artifactstatus_1'
).artifactstatus_id
artifactstatus_2 = Artifactstatus.objects.create(
artifactstatus_name='artifactstatus_2'
).artifactstatus_id
artifactstatus_3 = Artifactstatus.objects.create(
artifactstatus_name='artifactstatus_3'
).artifactstatus_id
artifactstatus_4 = Artifactstatus.objects.create(
artifactstatus_name='artifactstatus_4'
).artifactstatus_id
artifactstatus_5 = Artifactstatus.objects.create(
artifactstatus_name='artifactstatus_5'
).artifactstatus_id
artifactstatus_6 = Artifactstatus.objects.create(
artifactstatus_name='artifactstatus_6'
).artifactstatus_id
# get object
form = MainConfigForm(
data={
'statushistory_entry_numbers': 5,
'cron_export_path': '/tmp',
'cron_username': 'cron',
'capitalization': 'capitalization_keep',
'main_overview': 'main_overview_system',
'artifactstatus_requested': [
artifactstatus_1,
artifactstatus_2,
artifactstatus_3,
],
'artifactstatus_acquisition': [
artifactstatus_4,
artifactstatus_5,
artifactstatus_6,
],
}
)
# compare
self.assertTrue(form.is_valid())
def test_main_config_form_same_artifactstatus(self):
"""test custom field validation"""
        # create objects
artifactstatus_1 = Artifactstatus.objects.create(
artifactstatus_name='artifactstatus_1'
).artifactstatus_id
artifactstatus_2 = Artifactstatus.objects.create(
artifactstatus_name='artifactstatus_2'
).artifactstatus_id
artifactstatus_3 = Artifactstatus.objects.create(
artifactstatus_name='artifactstatus_3'
).artifactstatus_id
artifactstatus_4 = Artifactstatus.objects.create(
artifactstatus_name='artifactstatus_4'
).artifactstatus_id
artifactstatus_5 = Artifactstatus.objects.create(
artifactstatus_name='artifactstatus_5'
).artifactstatus_id
# get object
form = MainConfigForm(
data={
'statushistory_entry_numbers': 4,
'cron_export_path': '/tmp',
'cron_username': 'cron',
'capitalization': 'capitalization_keep',
'main_overview': 'main_overview_system',
'artifactstatus_requested': [
artifactstatus_1,
artifactstatus_2,
artifactstatus_3,
],
'artifactstatus_acquisition': [
artifactstatus_3,
artifactstatus_4,
artifactstatus_5,
],
}
)
# compare
self.assertFalse(form.is_valid())
self.assertEqual(
form.errors['artifactstatus_requested'],
['Same artifactstatus were chosen for requested and acquisition time.'],
)
self.assertEqual(
form.errors['artifactstatus_acquisition'],
['Same artifactstatus were chosen for requested and acquisition time.'],
)
def test_main_config_form_different_casestatus(self):
"""test custom field validation"""
        # create objects
casestatus_1 = Casestatus.objects.create(
casestatus_name='casestatus_1'
).casestatus_id
casestatus_2 = Casestatus.objects.create(
casestatus_name='casestatus_2'
).casestatus_id
casestatus_3 = Casestatus.objects.create(
casestatus_name='casestatus_3'
).casestatus_id
casestatus_4 = Casestatus.objects.create(
casestatus_name='casestatus_4'
).casestatus_id
casestatus_5 = Casestatus.objects.create(
casestatus_name='casestatus_5'
).casestatus_id
casestatus_6 = Casestatus.objects.create(
casestatus_name='casestatus_6'
).casestatus_id
# get object
form = MainConfigForm(
data={
'statushistory_entry_numbers': 3,
'cron_export_path': '/tmp',
'cron_username': 'cron',
'capitalization': 'capitalization_keep',
'main_overview': 'main_overview_system',
'casestatus_start': [
casestatus_1,
casestatus_2,
casestatus_3,
],
'casestatus_end': [
casestatus_4,
casestatus_5,
casestatus_6,
],
}
)
# compare
self.assertTrue(form.is_valid())
def test_main_config_form_same_casestatus(self):
"""test custom field validation"""
        # create objects
casestatus_1 = Casestatus.objects.create(
casestatus_name='casestatus_1'
).casestatus_id
casestatus_2 = Casestatus.objects.create(
casestatus_name='casestatus_2'
).casestatus_id
casestatus_3 = Casestatus.objects.create(
casestatus_name='casestatus_3'
).casestatus_id
casestatus_4 = Casestatus.objects.create(
casestatus_name='casestatus_4'
).casestatus_id
casestatus_5 = Casestatus.objects.create(
casestatus_name='casestatus_5'
).casestatus_id
# get object
form = MainConfigForm(
data={
'statushistory_entry_numbers': 2,
'cron_export_path': '/tmp',
'cron_username': 'cron',
'capitalization': 'capitalization_keep',
'main_overview': 'main_overview_system',
'casestatus_start': [
casestatus_1,
casestatus_2,
casestatus_3,
],
'casestatus_end': [
casestatus_3,
casestatus_4,
casestatus_5,
],
}
)
# compare
self.assertFalse(form.is_valid())
self.assertEqual(
form.errors['casestatus_start'],
['Same casestatus were chosen for start and end time.'],
)
self.assertEqual(
form.errors['casestatus_end'],
['Same casestatus were chosen for start and end time.'],
)
def test_main_config_form_path_not_existent(self):
"""test custom field validation"""
# get object
form = MainConfigForm(
data={
'statushistory_entry_numbers': 6,
'cron_export_path': '/this_path_does_not_exist',
'cron_username': 'cron',
'capitalization': 'capitalization_keep',
'main_overview': 'main_overview_system',
}
)
# compare
self.assertFalse(form.is_valid())
self.assertEqual(
form.errors['cron_export_path'], ['Export path does not exist.']
)
def test_main_config_form_path_no_write_permission(self):
"""test custom field validation"""
# get object
form = MainConfigForm(
data={
'statushistory_entry_numbers': 6,
'cron_export_path': '/root',
'cron_username': 'cron',
'capitalization': 'capitalization_keep',
'main_overview': 'main_overview_system',
}
)
# compare
self.assertFalse(form.is_valid())
self.assertEqual(
form.errors['cron_export_path'], ['No write permission for export path.']
)
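The overlap tests above imply a set-intersection rule in the form's custom validation: the statuses chosen for the start/requested field and the end/acquisition field must be disjoint, and a violation attaches the same error message to both fields. A framework-free sketch of that rule follows; it is an assumed reconstruction for illustration, not the actual `MainConfigForm` clean logic:

```python
def disjoint_field_errors(first_ids, second_ids, message):
    """Return per-field errors when the two selections overlap, else {}."""
    if set(first_ids) & set(second_ids):
        return {'first': [message], 'second': [message]}
    return {}


# Mirrors test_main_config_form_same_casestatus: status 3 appears in both lists.
errors = disjoint_field_errors(
    [1, 2, 3], [3, 4, 5],
    'Same casestatus were chosen for start and end time.',
)
print(errors)
```

Attaching the identical message to both fields is what lets the tests assert on `form.errors['casestatus_start']` and `form.errors['casestatus_end']` separately.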
# ---------------------------------------------------------------------------
# File: cakechat/dialog_model/inference/candidates/__init__.py
# Repo: sketscripter/emotional-chatbot-cakechat (license: Apache-2.0)
# ---------------------------------------------------------------------------
from cakechat.dialog_model.inference.candidates.sampling import SamplingCandidatesGenerator
# ---------------------------------------------------------------------------
# File: backslash/archiveable.py
# Repo: oren0e/backslash-python (license: BSD-3-Clause)
# ---------------------------------------------------------------------------
] | null | null | null | class Archiveable():
def toggle_archived(self):
self.client.api.call_function('toggle_archived', {self._get_id_key(): self.id})
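`Archiveable` is a mixin: it assumes the host object supplies `client`, `id`, and a `_get_id_key()` method. A hedged sketch of that composition, using stand-in client/API classes rather than the real backslash client (the mixin is repeated so the sketch is self-contained, and `Session` is a hypothetical host type):

```python
class Archiveable():
    def toggle_archived(self):
        self.client.api.call_function('toggle_archived', {self._get_id_key(): self.id})


class FakeApi:
    """Stand-in API that records calls instead of hitting a server."""

    def __init__(self):
        self.calls = []

    def call_function(self, name, params):
        self.calls.append((name, params))


class FakeClient:
    def __init__(self):
        self.api = FakeApi()


class Session(Archiveable):
    """Hypothetical host; real backslash objects provide these members."""

    def __init__(self, client, id):
        self.client = client
        self.id = id

    def _get_id_key(self):
        return 'session_id'


client = FakeClient()
Session(client, 7).toggle_archived()
print(client.api.calls)  # [('toggle_archived', {'session_id': 7})]
```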
# ---------------------------------------------------------------------------
# File: oscar/lib/python2.7/site-packages/faker/providers/user_agent/en_US/__init__.py
# Repo: sainjusajan/django-oscar (license: BSD-3-Clause)
# ---------------------------------------------------------------------------
] | null | null | null | from .. import Provider as UserAgentProvider
class Provider(UserAgentProvider):
pass
# models.py (Tjoha1994/ship_in_transit_simulator, MIT)
""" This module provides classes that can be used to set up and
run simulation models of a ship in transit.
"""
import numpy as np
import math
import matplotlib.pyplot as plt
import pandas as pd
from collections import defaultdict
from typing import NamedTuple, List
import random
class ShipConfiguration(NamedTuple):
dead_weight_tonnage: float
coefficient_of_deadweight_to_displacement: float
bunkers: float
ballast: float
length_of_ship: float
width_of_ship: float
added_mass_coefficient_in_surge: float
added_mass_coefficient_in_sway: float
added_mass_coefficient_in_yaw: float
mass_over_linear_friction_coefficient_in_surge: float
mass_over_linear_friction_coefficient_in_sway: float
mass_over_linear_friction_coefficient_in_yaw: float
nonlinear_friction_coefficient__in_surge: float
nonlinear_friction_coefficient__in_sway: float
nonlinear_friction_coefficient__in_yaw: float
class EnvironmentConfiguration(NamedTuple):
current_velocity_component_from_north: float
current_velocity_component_from_east: float
wind_speed: float
wind_direction: float
class SimulationConfiguration(NamedTuple):
route_name: str
initial_north_position_m: float
initial_east_position_m: float
initial_yaw_angle_rad: float
initial_forward_speed_m_per_s: float
initial_sideways_speed_m_per_s: float
initial_yaw_rate_rad_per_s: float
initial_propeller_shaft_speed_rad_per_s: float
machinery_system_operating_mode: int
integration_step: float
simulation_time: float
class SimplifiedPropulsionSimulationConfiguration(NamedTuple):
route_name: str
initial_north_position_m: float
initial_east_position_m: float
initial_yaw_angle_rad: float
initial_forward_speed_m_per_s: float
initial_sideways_speed_m_per_s: float
initial_yaw_rate_rad_per_s: float
initial_thrust_force: float
machinery_system_operating_mode: int
integration_step: float
simulation_time: float
class DriftSimulationConfiguration(NamedTuple):
initial_north_position_m: float
initial_east_position_m: float
initial_yaw_angle_rad: float
initial_forward_speed_m_per_s: float
initial_sideways_speed_m_per_s: float
initial_yaw_rate_rad_per_s: float
integration_step: float
simulation_time: float
class LoadOnPowerSources(NamedTuple):
load_on_main_engine: float
load_on_electrical: float
load_percentage_on_main_engine: float
load_percentage_on_electrical: float
class MachineryModeParams(NamedTuple):
main_engine_capacity: float
electrical_capacity: float
shaft_generator_state: str
class IcebergConfiguration(NamedTuple):
mass_tonnage: float
coefficient_of_deadweight_to_displacement: float
waterlinelength_of_iceberg: float
width_of_iceberg: float
height_of_iceberg: float
shape_of_iceberg: str
size_of_iceberg: str
added_mass_coefficient_in_surge: float
added_mass_coefficient_in_sway: float
added_mass_coefficient_in_yaw: float
mass_over_linear_friction_coefficient_in_surge: float
mass_over_linear_friction_coefficient_in_sway: float
mass_over_linear_friction_coefficient_in_yaw: float
nonlinear_friction_coefficient__in_surge: float
nonlinear_friction_coefficient__in_sway: float
nonlinear_friction_coefficient__in_yaw: float
class ZonesConfiguration(NamedTuple):
n_pos: float
e_pos: float
object_radius: float
coll_radius: float
excl_radius: float
zone1_radius: float
zone2_radius: float
zone3_radius: float
class IceCost(NamedTuple):
disconnect_cost: float
light_col_cost: float
medium_col_cost: float
severe_col_cost: float
towing_cost: float
disconnect_time_cost: float
towing_time_cost: float
Ki_lowerbound_severe: float
Ki_lowerbound_medium: float
Ki_lowerbound_light: float
class MachineryMode:
def __init__(self, params: MachineryModeParams, name: str = "main"):
self.main_engine_capacity = params.main_engine_capacity
self.electrical_capacity = params.electrical_capacity
self.shaft_generator_state = params.shaft_generator_state
self.available_propulsion_power = 0
self.available_propulsion_power_main_engine = 0
self.available_propulsion_power_electrical = 0
self.name = name
def update_available_propulsion_power(self, hotel_load):
if self.shaft_generator_state == 'MOTOR':
self.available_propulsion_power = self.main_engine_capacity + self.electrical_capacity - hotel_load
self.available_propulsion_power_main_engine = self.main_engine_capacity
self.available_propulsion_power_electrical = self.electrical_capacity - hotel_load
elif self.shaft_generator_state == 'GEN':
self.available_propulsion_power = self.main_engine_capacity - hotel_load
self.available_propulsion_power_main_engine = self.main_engine_capacity - hotel_load
self.available_propulsion_power_electrical = 0
else: # shaft_generator_state == 'off'
self.available_propulsion_power = self.main_engine_capacity
self.available_propulsion_power_main_engine = self.main_engine_capacity
self.available_propulsion_power_electrical = 0
def distribute_load(self, load_perc, hotel_load):
total_load_propulsion = load_perc * self.available_propulsion_power
if self.shaft_generator_state == 'MOTOR':
load_main_engine = min(total_load_propulsion, self.main_engine_capacity)
load_electrical = total_load_propulsion + hotel_load - load_main_engine
load_percentage_electrical = load_electrical / self.electrical_capacity
if self.main_engine_capacity == 0:
load_percentage_main_engine = 0
else:
load_percentage_main_engine = load_main_engine / self.main_engine_capacity
elif self.shaft_generator_state == 'GEN':
# Here the rule is that electrical handles hotel as far as possible
load_electrical = min(hotel_load, self.electrical_capacity)
load_main_engine = total_load_propulsion + hotel_load - load_electrical
load_percentage_main_engine = load_main_engine / self.main_engine_capacity
if self.electrical_capacity == 0:
load_percentage_electrical = 0
else:
load_percentage_electrical = load_electrical / self.electrical_capacity
else: # shaft_generator_state == 'off'
load_main_engine = total_load_propulsion
load_electrical = hotel_load
load_percentage_main_engine = load_main_engine / self.main_engine_capacity
load_percentage_electrical = load_electrical / self.electrical_capacity
return LoadOnPowerSources(
load_on_main_engine=load_main_engine,
load_on_electrical=load_electrical,
load_percentage_on_main_engine=load_percentage_main_engine,
load_percentage_on_electrical=load_percentage_electrical
)
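The 'GEN' branch above lets the electrical side cover the hotel load as far as possible, with the main engine picking up propulsion plus any remainder. A standalone sketch of that arithmetic, using hypothetical capacities (2.16 MW main engine, 0.59 MW electrical, 0.2 MW hotel load; these numbers are illustrative, not taken from any configuration in this repository):

```python
# Mirror of the 'GEN' branch in MachineryMode.distribute_load,
# with hypothetical capacities.
main_engine_capacity = 2_160_000.0   # W
electrical_capacity = 590_000.0      # W
hotel_load = 200_000.0               # W

# In 'GEN' mode the shaft generator produces electricity, so available
# propulsion power is what the main engine can spare after the hotel load.
available_propulsion_power = main_engine_capacity - hotel_load

load_perc = 0.5
total_load_propulsion = load_perc * available_propulsion_power

# Electrical side handles the hotel load as far as it can; the main
# engine covers propulsion plus whatever the electrical side cannot.
load_electrical = min(hotel_load, electrical_capacity)
load_main_engine = total_load_propulsion + hotel_load - load_electrical

print(load_main_engine)   # 980000.0
```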
class MachineryModes:
def __init__(self, list_of_modes: List[MachineryMode]):
self.list_of_modes = list_of_modes
class MachinerySystemConfiguration(NamedTuple):
hotel_load: float
machinery_modes: MachineryModes
rated_speed_main_engine_rpm: float
linear_friction_main_engine: float
linear_friction_hybrid_shaft_generator: float
gear_ratio_between_main_engine_and_propeller: float
gear_ratio_between_hybrid_shaft_generator_and_propeller: float
propeller_inertia: float
propeller_speed_to_torque_coefficient: float
propeller_diameter: float
propeller_speed_to_thrust_force_coefficient: float
rudder_angle_to_sway_force_coefficient: float
rudder_angle_to_yaw_force_coefficient: float
max_rudder_angle_degrees: float
class SimplifiedPropulsionMachinerySystemConfiguration(NamedTuple):
hotel_load: float
machinery_modes: MachineryModes
thrust_force_dynamic_time_constant: float
rudder_angle_to_sway_force_coefficient: float
rudder_angle_to_yaw_force_coefficient: float
max_rudder_angle_degrees: float
class ShipModel:
''' Creates a ship model object that can be used to simulate a ship in transit
    The ship model is propelled by a single propeller and steered by a rudder.
The propeller is powered by either the main engine, an auxiliary motor
referred to as the hybrid shaft generator, or both. The model contains the
following states:
- North position of ship
- East position of ship
- Yaw angle (relative to north axis)
- Surge velocity (forward)
- Sway velocity (sideways)
- Yaw rate
- Propeller shaft speed
Simulation results are stored in the instance variable simulation_results
'''
def __init__(self, ship_config: ShipConfiguration,
machinery_config: MachinerySystemConfiguration,
environment_config: EnvironmentConfiguration,
simulation_config: SimulationConfiguration):
route_name = simulation_config.route_name
if route_name != 'none':
# Route following
self.navigate = NavigationSystem(route_name)
self.next_wpt = 1
self.prev_wpt = 0
payload = 0.9 * (ship_config.dead_weight_tonnage - ship_config.bunkers)
lsw = ship_config.dead_weight_tonnage / ship_config.coefficient_of_deadweight_to_displacement \
- ship_config.dead_weight_tonnage
self.mass = lsw + payload + ship_config.bunkers + ship_config.ballast
self.l_ship = ship_config.length_of_ship # 80
self.w_ship = ship_config.width_of_ship # 16.0
self.x_g = 0
self.i_z = self.mass * (self.l_ship ** 2 + self.w_ship ** 2) / 12
# zero-frequency added mass
self.x_du, self.y_dv, self.n_dr = self.set_added_mass(ship_config.added_mass_coefficient_in_surge,
ship_config.added_mass_coefficient_in_sway,
ship_config.added_mass_coefficient_in_yaw)
self.t_surge = ship_config.mass_over_linear_friction_coefficient_in_surge
self.t_sway = ship_config.mass_over_linear_friction_coefficient_in_sway
self.t_yaw = ship_config.mass_over_linear_friction_coefficient_in_yaw
self.ku = ship_config.nonlinear_friction_coefficient__in_surge # 2400.0 # non-linear friction coeff in surge
self.kv = ship_config.nonlinear_friction_coefficient__in_sway # 4000.0 # non-linear friction coeff in sway
self.kr = ship_config.nonlinear_friction_coefficient__in_yaw # 400.0 # non-linear friction coeff in yaw
# Machinery system params
self.machinery_modes = machinery_config.machinery_modes
self.hotel_load = machinery_config.hotel_load # 200000 # 0.2 MW
self.update_available_propulsion_power()
mode = simulation_config.machinery_system_operating_mode
self.mode = self.machinery_modes.list_of_modes[mode]
# self.p_rated_me = machinery_config.mcr_main_engine # 2160000 # 2.16 MW
# self.p_rated_hsg = machinery_config.mcr_hybrid_shaft_generator # 590000 # 0.59 MW
self.w_rated_me = machinery_config.rated_speed_main_engine_rpm * np.pi / 30 # 1000 * np.pi / 30 # rated speed
self.d_me = machinery_config.linear_friction_main_engine # 68.0 # linear friction for main engine speed
self.d_hsg = machinery_config.linear_friction_hybrid_shaft_generator # 57.0 # linear friction for HSG speed
self.r_me = machinery_config.gear_ratio_between_main_engine_and_propeller # 0.6 # gear ratio between main engine and propeller
self.r_hsg = machinery_config.gear_ratio_between_hybrid_shaft_generator_and_propeller # 0.6 # gear ratio between main engine and propeller
self.jp = machinery_config.propeller_inertia # 6000 # propeller inertia
self.kp = machinery_config.propeller_speed_to_torque_coefficient # 7.5 # constant relating omega to torque
self.dp = machinery_config.propeller_diameter # 3.1 # propeller diameter
self.kt = machinery_config.propeller_speed_to_thrust_force_coefficient # 1.7 # constant relating omega to thrust force
self.shaft_speed_max = 1.1 * self.w_rated_me * self.r_me # Used for saturation of power sources
self.c_rudder_v = machinery_config.rudder_angle_to_sway_force_coefficient # 50000.0 # tuning param for simplified rudder response model
self.c_rudder_r = machinery_config.rudder_angle_to_yaw_force_coefficient # 500000.0 # tuning param for simplified rudder response model
self.rudder_ang_max = machinery_config.max_rudder_angle_degrees * np.pi / 180 # 30 * np.pi / 180 # Maximal rudder angle deflection (both ways)
# Environmental conditions
self.vel_c = np.array([environment_config.current_velocity_component_from_north,
environment_config.current_velocity_component_from_east,
0.0])
self.wind_dir = environment_config.wind_direction
self.wind_speed = environment_config.wind_speed
# Operational parameters used to calculate loading percent on each power source
self.p_rel_rated_hsg = 0.0
self.p_rel_rated_me = 0.0
# Configure machinery system according to self.mso
# self.mso_mode = simulation_config.machinery_system_operating_mode
# self.mode_selector(machinery_config.mcr_main_engine,
# machinery_config.mcr_hybrid_shaft_generator)
# Initial states (can be altered using self.set_state_vector(x))
self.n = simulation_config.initial_north_position_m
self.e = simulation_config.initial_east_position_m
self.psi = simulation_config.initial_yaw_angle_rad
self.u = simulation_config.initial_forward_speed_m_per_s
self.v = simulation_config.initial_sideways_speed_m_per_s
self.r = simulation_config.initial_yaw_rate_rad_per_s
self.omega = simulation_config.initial_propeller_shaft_speed_rad_per_s
self.x = self.update_state_vector()
        self.states = [[] for _ in range(7)]  # One history list per state; filled by store_states
# Differentials
self.d_n = self.d_e = self.d_psi = 0
self.d_u = self.d_v = self.d_r = 0
self.d_omega = 0
# Set up ship control systems
self.initialize_shaft_speed_controller(kp=0.05, ki=0.005)
self.initialize_ship_speed_controller(kp=7, ki=0.13)
self.initialize_ship_heading_controller(kp=4, kd=90, ki=0.005)
self.initialize_heading_filter(kp=0.5, kd=10, t=5000)
# Set up integration
self.int = EulerInt() # Instantiate the Euler integrator
self.int.set_dt(simulation_config.integration_step)
self.int.set_sim_time(simulation_config.simulation_time)
# Instantiate ship draw plotting
self.drw = ShipDraw() # Instantiate the ship drawing class
self.ship_drawings = [[], []] # Arrays for storing ship drawing data
# Fuel
self.fuel_cons_me = 0.0 # Initial fuel cons for ME
self.fuel_cons_electrical = 0.0 # Initial fuel cons for HSG
self.fuel_cons = 0.0 # Initial total fuel cons
self.power_me = [] # Array for storing ME power cons. data
self.power_hsg = [] # Array for storing HSG power cons. data
self.me_rated = [] # Array for storing ME rated power data
self.hsg_rated = [] # Array for storing HSG rated power data
self.load_hist = [] # Array for storing load percentage history
self.fuel_rate_me = [] # Array for storing ME fuel cons. rate
self.fuel_rate_hsg = [] # Array for storing HSG fuel cons. rate
self.fuel_me = [] # Array for storing ME fuel cons.
self.fuel_hsg = [] # Array for storing HSG fuel cons.
self.fuel = [] # Array for storing total fuel cons
self.fuel_rate = []
self.load_perc_me = []
self.load_perc_hsg = []
self.power_total = []
self.power_prop = []
# Wind effect on ship
self.rho_a = 1.2
self.h_f = 8.0 # mean height above water seen from the front
self.h_s = 8.0 # mean height above water seen from the side
        self.proj_area_f = self.w_ship * self.h_f  # Projected area from the front
self.proj_area_l = self.l_ship * self.h_s # Projected area from the side
self.cx = 0.5
self.cy = 0.7
self.cn = 0.08
# Fuel consumption function parameters
self.a_me = 128.89
self.b_me = -168.93
self.c_me = 246.76
self.a_dg = 180.71
self.b_dg = -289.90
self.c_dg = 324.90
self.simulation_results = defaultdict(list)
def update_available_propulsion_power(self):
for mode in self.machinery_modes.list_of_modes:
mode.update_available_propulsion_power(self.hotel_load)
def set_added_mass(self, surge_coeff, sway_coeff, yaw_coeff):
''' Sets the added mass in surge due to surge motion, sway due
to sway motion and yaw due to yaw motion according to given coeffs.
args:
surge_coeff (float): Added mass coefficient in surge direction due to surge motion
sway_coeff (float): Added mass coefficient in sway direction due to sway motion
yaw_coeff (float): Added mass coefficient in yaw direction due to yaw motion
returns:
x_du (float): Added mass in surge
y_dv (float): Added mass in sway
n_dr (float): Added mass in yaw
'''
x_du = self.mass * surge_coeff
y_dv = self.mass * sway_coeff
n_dr = self.i_z * yaw_coeff
return x_du, y_dv, n_dr
def mode_selector(self, mode: int):
self.mode = self.machinery_modes.list_of_modes[mode]
def spec_fuel_cons_me(self, load_perc):
""" Calculate fuel consumption rate for the main engine.
Args:
load_perc (float): The fraction of the mcr load on the ME
Returns:
Number of kilograms of fuel per second used by ME
"""
rate = self.a_me * load_perc ** 2 + self.b_me * load_perc + self.c_me
return rate / 3.6e9
def spec_fuel_cons_dg(self, load_perc):
""" Calculate fuel consumption rate for a diesel generator.
Args:
load_perc (float): The fraction of the mcr load on the DG
Returns:
Number of kilograms of fuel per second used by DG
"""
rate = self.a_dg * load_perc ** 2 + self.b_dg * load_perc + self.c_dg
return rate / 3.6e9
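Both consumption curves are quadratics in the load fraction; dividing by 3.6e9 converts a specific-fuel-consumption value in g/kWh into kg of fuel per joule, so multiplying by the engine load in watts yields kg/s. A sketch using the ME coefficients above at a hypothetical 80 % load point:

```python
# Specific fuel consumption curve for the main engine
# (coefficients as in spec_fuel_cons_me above).
a_me, b_me, c_me = 128.89, -168.93, 246.76

load_perc = 0.8                                        # fraction of MCR (hypothetical)
sfoc = a_me * load_perc**2 + b_me * load_perc + c_me   # g/kWh
rate_per_watt = sfoc / 3.6e9                           # kg of fuel per joule

load_watts = 0.8 * 2_160_000.0           # hypothetical 2.16 MW engine at 80 %
fuel_rate = load_watts * rate_per_watt   # kg of fuel per second

print(round(sfoc, 4))   # 194.1056
```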
def load_perc(self, load_perc):
""" Calculates the load percentage on the main engine and the diesel_gens based on the
operating mode of the machinery system (MSO-mode).
Args:
load_perc (float): Current load on the machinery system as a fraction of the
total power that can be delivered by the machinery system in the current mode.
Returns:
load_perc_me (float): Current load on the ME as a fraction of ME MCR
load_perc_hsg (float): Current load on the HSG as a fraction of HSG MCR
"""
load_data = self.mode.distribute_load(load_perc=load_perc, hotel_load=self.hotel_load)
return load_data.load_percentage_on_main_engine, load_data.load_percentage_on_electrical
def fuel_consumption(self, load_perc):
'''
Args:
load_perc (float): The fraction of produced power over the online power production capacity.
Returns:
rate_me (float): Fuel consumption rate for the main engine
rate_hsg (float): Fuel consumption rate for the HSG
fuel_cons_me (float): Accumulated fuel consumption for the ME
fuel_cons_hsg (float): Accumulated fuel consumption for the HSG
fuel_cons (float): Total accumulated fuel consumption for the ship
'''
'''
if self.mso_mode == 1:
load_me = load_perc * self.p_rated_me + self.hotel_load
load_perc_me = load_me / self.p_rated_me
rate_me = load_me * self.spec_fuel_cons_me(load_perc_me)
rate_hsg = 0.0
elif self.mso_mode == 2:
load_me = load_perc * self.p_rated_me
load_perc_me = load_me / self.p_rated_me
load_hsg = self.hotel_load
load_perc_hsg = load_hsg / self.p_rated_hsg
rate_me = load_me * self.spec_fuel_cons_me(load_perc_me)
rate_hsg = load_hsg * self.spec_fuel_cons_dg(load_perc_hsg)
elif self.mso_mode == 3:
load_hsg = (load_perc * self.p_rated_hsg + self.hotel_load)
load_perc_hsg = load_hsg / self.p_rated_hsg
rate_me = 0.0
rate_hsg = load_hsg * self.spec_fuel_cons_dg(load_perc_hsg)
'''
load_data = self.mode.distribute_load(load_perc=load_perc, hotel_load=self.hotel_load)
if load_data.load_on_main_engine == 0:
rate_me = 0
else:
rate_me = load_data.load_on_main_engine \
* self.spec_fuel_cons_me(load_data.load_percentage_on_main_engine)
if load_data.load_percentage_on_electrical == 0:
rate_electrical = 0
else:
rate_electrical = load_data.load_on_electrical \
* self.spec_fuel_cons_dg(load_data.load_percentage_on_electrical)
self.fuel_cons_me = self.fuel_cons_me + rate_me * self.int.dt
self.fuel_cons_electrical = self.fuel_cons_electrical + rate_electrical * self.int.dt
self.fuel_cons = self.fuel_cons + (rate_me + rate_electrical) * self.int.dt
return rate_me, rate_electrical, self.fuel_cons_me, self.fuel_cons_electrical, self.fuel_cons
def get_wind_force(self):
        ''' This method calculates the forces due to the relative
        wind speed, acting on the ship in surge, sway and yaw
        direction.
:return: Wind force acting in surge, sway and yaw
'''
uw = self.wind_speed * np.cos(self.wind_dir - self.psi)
vw = self.wind_speed * np.sin(self.wind_dir - self.psi)
u_rw = uw - self.u
v_rw = vw - self.v
gamma_rw = -np.arctan2(v_rw, u_rw)
wind_rw2 = u_rw ** 2 + v_rw ** 2
c_x = -self.cx * np.cos(gamma_rw)
c_y = self.cy * np.sin(gamma_rw)
c_n = self.cn * np.sin(2 * gamma_rw)
tau_coeff = 0.5 * self.rho_a * wind_rw2
tau_u = tau_coeff * c_x * self.proj_area_f
tau_v = tau_coeff * c_y * self.proj_area_l
tau_n = tau_coeff * c_n * self.proj_area_l * self.l_ship
return np.array([tau_u, tau_v, tau_n])
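The wind model first forms the relative wind in the body frame and then applies quadratic drag coefficients to it. A minimal numeric check of the relative-wind geometry, for a hypothetical case where the wind is aligned with the ship's heading so the answer is easy to verify by hand:

```python
import numpy as np

# Relative-wind geometry from get_wind_force: a 10 m/s wind aligned with the
# heading, ship moving forward at 5 m/s, so the relative wind is 5 m/s head-on.
wind_speed, wind_dir, psi = 10.0, 0.0, 0.0
u, v = 5.0, 0.0

uw = wind_speed * np.cos(wind_dir - psi)   # wind component along the hull
vw = wind_speed * np.sin(wind_dir - psi)   # wind component across the hull
u_rw, v_rw = uw - u, vw - v                # relative wind in body frame
gamma_rw = -np.arctan2(v_rw, u_rw)         # relative wind angle of attack
wind_rw2 = u_rw**2 + v_rw**2               # squared relative wind speed

print(u_rw, v_rw, wind_rw2)   # 5.0 0.0 25.0
```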
def update_state_vector(self):
''' Update the state vector according to the individual state values
'''
return np.array([self.n, self.e, self.psi, self.u, self.v, self.r, self.omega])
def set_north_pos(self, val):
''' Set the north position of the ship and update the state vector
'''
self.n = val
self.x = self.update_state_vector()
def set_east_pos(self, val):
''' Set the east position of the ship and update the state vector
'''
self.e = val
self.x = self.update_state_vector()
def set_yaw_angle(self, val):
''' Set the yaw angle of the ship and update the state vector
'''
self.psi = val
self.x = self.update_state_vector()
def set_surge_speed(self, val):
''' Set the surge speed of the ship and update the state vector
'''
self.u = val
self.x = self.update_state_vector()
def set_sway_speed(self, val):
''' Set the sway speed of the ship and update the state vector
'''
self.v = val
self.x = self.update_state_vector()
def set_yaw_rate(self, val):
''' Set the yaw rate of the ship and update the state vector
'''
self.r = val
self.x = self.update_state_vector()
def set_shaft_speed(self, val):
''' Set the propeller shaft speed and update the state vector
'''
self.omega = val
self.x = self.update_state_vector()
def initialize_shaft_speed_controller(self, kp, ki):
''' This method sets up and configures the shaft speed
controller of the ship
'''
self.shaft_speed_controller = ControllerLib()
self.shaft_speed_controller.set_kp(kp)
self.shaft_speed_controller.set_ki(ki)
def initialize_ship_speed_controller(self, kp, ki):
''' This method sets up and configures the ship speed
controller.
'''
self.ship_speed_controller = ControllerLib()
self.ship_speed_controller.set_kp(kp)
self.ship_speed_controller.set_ki(ki)
def initialize_ship_heading_controller(self, kp, kd, ki):
''' This method sets up and configures the ship heading
controller.
'''
self.ship_heading_controller = ControllerLib()
self.ship_heading_controller.set_kp(kp)
self.ship_heading_controller.set_kd(-kd)
self.ship_heading_controller.set_ki(ki)
def initialize_heading_filter(self, kp, kd, t):
        ''' This method sets up and configures a low-pass filter
        to smooth the heading setpoint signal for the ship
        heading controller.
'''
self.ship_heading_filter = ControllerLib()
self.ship_heading_filter.set_kp(kp)
self.ship_heading_filter.set_kd(kd)
self.ship_heading_filter.set_T(t)
def loadperc_from_speedref(self, speed_ref):
''' Calculates suitable machinery load percentage for the ship to
track the speed reference signal. The shaft speed controller
is used to calculate suitable shaft speed to follow the desired
ship speed and suitable load percentage to follow the calculated
shaft speed. The load percentage is the fraction of the produced
power over the total power capacity in the current configuration.
'''
ref_shaft_speed = self.ship_speed_controller.pi_ctrl(speed_ref, self.u, self.int.dt, -550, 550)
ref_shaft_speed = ControllerLib.sat(ref_shaft_speed, 0, self.shaft_speed_max)
load_perc = self.shaft_speed_controller.pi_ctrl(ref_shaft_speed, self.omega, self.int.dt)
load_perc = ControllerLib.sat(load_perc, 0, 1.1)
return load_perc
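ControllerLib is defined elsewhere in this module and is not shown in this excerpt. The cascaded loop above boils down to a PI law plus saturation of the integrator and output. A hypothetical minimal version of that pattern (gains and signature chosen for illustration, not the actual ControllerLib API):

```python
# Hypothetical minimal PI-with-saturation, sketching the pattern used in
# loadperc_from_speedref (the real ControllerLib lives elsewhere in this file).
def sat(val, low, high):
    """Clamp val to the interval [low, high]."""
    return max(low, min(high, val))

class MiniPI:
    def __init__(self, kp, ki):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def pi_ctrl(self, ref, meas, dt, i_low=-1e9, i_high=1e9):
        error = ref - meas
        # Anti-windup: clamp the accumulated integral term.
        self.integral = sat(self.integral + error * dt, i_low, i_high)
        return self.kp * error + self.ki * self.integral

ctrl = MiniPI(kp=0.5, ki=0.25)                  # hypothetical gains
out = ctrl.pi_ctrl(ref=10.0, meas=0.0, dt=1.0)  # error 10 -> 0.5*10 + 0.25*10
print(out)   # 7.5
```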
def rudderang_from_headingref(self, heading_ref):
        ''' This method finds a suitable rudder angle for the ship to
        sail with the heading specified by "heading_ref" by using a
        PID controller. The rudder angle is saturated according to
        |self.rudder_ang_max|. The method should be called from within
        the simulation loop if the user wants the ship to follow a
        specified heading reference signal.
'''
rudder_ang = self.ship_heading_controller.pid_ctrl(heading_ref, self.psi, self.int.dt)
rudder_ang = ControllerLib.sat(rudder_ang, -self.rudder_ang_max, self.rudder_ang_max)
return rudder_ang
def rudderang_from_route(self):
''' This method finds a suitable rudder angle for the ship to follow
a predefined route specified in the "navigate"-instantiation of the
"NavigationSystem"-class.
'''
self.next_wpt, self.prev_wpt = self.navigate.next_wpt(self.next_wpt, self.n, self.e)
psi_d = self.navigate.los_guidance(self.next_wpt, self.n, self.e)
return self.rudderang_from_headingref(psi_d)
def print_next_wpt(self, ship_id):
''' Prints a string with the ship identification (ship_id)
and its next waypoint, if the next waypoint is specified
'''
if self.next_wpt != self.navigate.next_wpt(self.next_wpt, self.n, self.e)[0]:
print('Current target waypoint for ' + ship_id + ' is: ' + str(self.next_wpt))
def set_next_wpt(self, wpt):
        ''' Sets the next waypoint to "wpt", where "wpt" is the index
        of the waypoint referring to the list of waypoints making
        up the route specified in the instantiation "navigate" of
        the class "NavigationSystem"
        '''
self.next_wpt = wpt
def three_dof_kinematics(self):
        ''' Updates the time differentials of the north position, east
position and yaw angle. Should be called in the simulation
loop before the integration step.
'''
vel = np.array([self.u, self.v, self.r])
dx = np.dot(self.rotation(), vel)
self.d_n = dx[0]
self.d_e = dx[1]
self.d_psi = dx[2]
def rotation(self):
        ''' Specifies the rotation matrix for rotations about the z-axis, such that
        "North-east-down-fixed coordinates" = rotation x "body-fixed coordinates".
'''
return np.array([[np.cos(self.psi), -np.sin(self.psi), 0],
[np.sin(self.psi), np.cos(self.psi), 0],
[0, 0, 1]])
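This is the standard planar rotation: multiplying the body-frame velocity [u, v, r] by it yields the NED-frame rates [d_n, d_e, d_psi], as done in three_dof_kinematics. A quick sanity check for a ship heading due east:

```python
import numpy as np

# A ship heading due east (psi = 90 degrees) moving forward at 5 m/s should
# accumulate east position only, which the rotation matrix confirms.
psi = np.pi / 2
R = np.array([[np.cos(psi), -np.sin(psi), 0],
              [np.sin(psi),  np.cos(psi), 0],
              [0,            0,           1]])

d_n, d_e, d_psi = R @ np.array([5.0, 0.0, 0.0])
print(round(d_n, 12), d_e, d_psi)   # 0.0 5.0 0.0
```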
def three_dof_kinetics(self, f_thrust, rudder_angle):
''' Calculates accelerations of the ship, as a funciton
of thrust-force, rudder angle, wind forces and the
states in the previous time-step.
'''
# System matrices (did not include added mass yet)
M_rb = np.array([[self.mass + self.x_du, 0, 0],
[0, self.mass + self.y_dv, self.mass * self.x_g],
[0, self.mass * self.x_g, self.i_z + self.n_dr]])
C_rb = np.array([[0, 0, -self.mass * (self.x_g * self.r + self.v)],
[0, 0, self.mass * self.u],
[self.mass * (self.x_g * self.r + self.v), -self.mass * self.u, 0]])
D = np.array([[self.mass / self.t_surge, 0, 0],
[0, self.mass / self.t_sway, 0],
[0, 0, self.i_z / self.t_yaw]])
D2 = np.array([[self.ku * self.u, 0, 0],
[0, self.kv * self.v, 0],
[0, 0, self.kr * self.r]])
# Forces acting (replace zero vectors with suitable functions)
f_rudder_v, f_rudder_r = self.rudder(rudder_angle)
F_wind = self.get_wind_force()
F_waves = np.array([0, 0, 0])
F_ctrl = np.array([f_thrust, f_rudder_v, f_rudder_r])
# assembling state vector
vel = np.array([self.u, self.v, self.r])
# Transforming current velocity to ship frame
v_c = np.dot(np.linalg.inv(self.rotation()), self.vel_c)
u_r = self.u - v_c[0]
v_r = self.v - v_c[1]
C_a = np.array([[0, 0, self.y_dv * v_r],
[0, 0, -self.x_du * u_r],
[-self.y_dv * v_r, self.x_du * u_r, 0]])
# Kinetic equation
M_inv = np.linalg.inv(M_rb)
dx = np.dot(M_inv, -np.dot(C_rb, vel) - np.dot(C_a, vel - v_c) - np.dot(D + D2, vel - v_c)
+ F_wind + F_waves + F_ctrl)
self.d_u = dx[0]
self.d_v = dx[1]
self.d_r = dx[2]
def rudder(self, delta):
        ''' This method takes in the rudder angle and returns
        the force in sway and yaw generated by the rudder.
        args:
            delta (float): The rudder angle in radians
        returns:
v_force (float): The force in sway-direction generated by the rudder
r_force (float): The yaw-torque generated by the rudder
'''
u_c = np.dot(np.linalg.inv(self.rotation()), self.vel_c)[0]
v_force = -self.c_rudder_v * delta * (self.u - u_c)
r_force = -self.c_rudder_r * delta * (self.u - u_c)
return v_force, r_force
def shaft_eq(self, torque_main_engine, torque_hsg):
''' Updates the time differential of the shaft speed
equation.
'''
eq_me = (torque_main_engine - self.d_me * self.omega) / self.r_me
eq_hsg = (torque_hsg - self.d_hsg * self.omega) / self.r_hsg
self.d_omega = (eq_me + eq_hsg - self.kp * self.omega ** 2) / self.jp
def thrust(self):
''' Updates the thrust force based on the shaft speed (self.omega)
'''
return self.dp ** 4 * self.kt * self.omega * abs(self.omega)
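Thrust grows with the square of shaft speed; writing it as omega * |omega| instead of omega ** 2 keeps the sign correct when the propeller reverses. A standalone check with hypothetical propeller parameters (the 3.1 m diameter and kt = 1.7 echoed from the constructor's example comments):

```python
# Quadratic thrust law from thrust(), with hypothetical propeller parameters.
dp = 3.1    # propeller diameter [m]
kt = 1.7    # speed-to-thrust-force coefficient

def thrust(omega):
    # omega * abs(omega) rather than omega**2 preserves sign in reverse.
    return dp**4 * kt * omega * abs(omega)

print(round(thrust(20.0), 3))             # 62799.428
print(thrust(-20.0) == -thrust(20.0))     # True
```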
def main_engine_torque(self, load_perc):
''' Returns the torque of the main engine as a
function of the load percentage parameter
'''
# if self.omega >= 1 * np.pi / 30:
# return load_perc * self.p_rel_rated_me / self.omega
# else:
# return 0
# return min(load_perc * self.p_rel_rated_me / (self.omega + 0.1), self.p_rel_rated_me / 5 * np.pi / 30)
return min(load_perc * self.mode.available_propulsion_power_main_engine / (self.omega + 0.1),
self.mode.available_propulsion_power_main_engine / 5 * np.pi / 30)
def hsg_torque(self, load_perc):
''' Returns the torque of the HSG as a
function of the load percentage parameter
'''
# if self.omega >= 100 * np.pi / 30:
# return load_perc * self.p_rel_rated_hsg / self.omega
# else:
# return 0
# return min(load_perc * self.p_rel_rated_hsg / (self.omega + 0.1), self.p_rel_rated_hsg / 5 * np.pi / 30)
return min(load_perc * self.mode.available_propulsion_power_electrical / (self.omega + 0.1),
self.mode.available_propulsion_power_electrical / 5 * np.pi / 30)
def update_differentials(self, load_perc, rudder_angle):
''' This method should be called in the simulation loop. It will
update the full differential equation of the ship.
'''
self.three_dof_kinematics()
self.shaft_eq(self.main_engine_torque(load_perc), self.hsg_torque(load_perc))
self.three_dof_kinetics(self.thrust(), rudder_angle)
def integrate_differentials(self):
        ''' Integrates the differential equation one time step ahead using
        the Euler integration method with parameters set in the
        int-instantiation of the "EulerInt"-class.
'''
self.set_north_pos(self.int.integrate(self.n, self.d_n))
self.set_east_pos(self.int.integrate(self.e, self.d_e))
self.set_yaw_angle(self.int.integrate(self.psi, self.d_psi))
self.set_surge_speed(self.int.integrate(self.u, self.d_u))
self.set_sway_speed(self.int.integrate(self.v, self.d_v))
self.set_yaw_rate(self.int.integrate(self.r, self.d_r))
self.set_shaft_speed(self.int.integrate(self.omega, self.d_omega))
def store_states(self):
        ''' Appends the current value of each state to an array. This
        is convenient when plotting. The method should be called within
        the simulation loop each time step. Afterwards, an array
        containing for example the north position for each time step
        is obtained as ...states[0]
        '''
self.states[0].append(self.n)
self.states[1].append(self.e)
self.states[2].append(self.psi)
self.states[3].append(self.u)
self.states[4].append(self.v)
self.states[5].append(self.r)
self.states[6].append(self.omega)
def ship_snap_shot(self):
''' This method is used to store a map-view snap shot of
the ship at the given north-east position and heading.
It uses the ShipDraw-class. To plot a map view of the
n-th ship snap-shot, use:
plot(ship_drawings[1][n], ship_drawings[0][n])
'''
x, y = self.drw.local_coords()
x_ned, y_ned = self.drw.rotate_coords(x, y, self.psi)
x_ned_trans, y_ned_trans = self.drw.translate_coords(x_ned, y_ned, self.n, self.e)
self.ship_drawings[0].append(x_ned_trans)
self.ship_drawings[1].append(y_ned_trans)
def store_simulation_data(self, load_perc):
load_perc_me, load_perc_hsg = self.load_perc(load_perc)
self.simulation_results['time [s]'].append(self.int.time)
self.simulation_results['north position [m]'].append(self.n)
self.simulation_results['east position [m]'].append(self.e)
        self.simulation_results['yaw angle [deg]'].append(self.psi * 180 / np.pi)
self.simulation_results['forward speed[m/s]'].append(self.u)
self.simulation_results['sideways speed [m/s]'].append(self.v)
self.simulation_results['yaw rate [deg/sec]'].append(self.r * 180 / np.pi)
self.simulation_results['propeller shaft speed [rpm]'].append(self.omega * 30 / np.pi)
self.simulation_results['commanded load fraction [-]'].append(load_perc)
self.simulation_results['commanded load fraction me [-]'].append(load_perc_me)
self.simulation_results['commanded load fraction hsg [-]'].append(load_perc_hsg)
load_data = self.mode.distribute_load(load_perc=load_perc, hotel_load=self.hotel_load)
self.simulation_results['power me [kw]'].append(load_data.load_on_main_engine / 1000)
self.simulation_results['available power me [kw]'].append(self.mode.main_engine_capacity / 1000)
self.simulation_results['power electrical [kw]'].append(load_data.load_on_electrical / 1000)
self.simulation_results['available power electrical [kw]'].append(self.mode.electrical_capacity / 1000)
self.simulation_results['power [kw]'].append((load_data.load_on_electrical
+ load_data.load_on_main_engine) / 1000)
self.simulation_results['propulsion power [kw]'].append((load_perc
* self.mode.available_propulsion_power) / 1000)
rate_me, rate_hsg, cons_me, cons_hsg, cons = self.fuel_consumption(load_perc)
self.simulation_results['fuel rate me [kg/s]'].append(rate_me)
self.simulation_results['fuel rate hsg [kg/s]'].append(rate_hsg)
self.simulation_results['fuel rate [kg/s]'].append(rate_me + rate_hsg)
self.simulation_results['fuel consumption me [kg]'].append(cons_me)
self.simulation_results['fuel consumption hsg [kg]'].append(cons_hsg)
self.simulation_results['fuel consumption [kg]'].append(cons)
self.simulation_results['motor torque [Nm]'].append(self.main_engine_torque(load_perc))
self.simulation_results['thrust force [kN]'].append(self.thrust() / 1000)
self.fuel_me.append(cons_me)
self.fuel_hsg.append(cons_hsg)
self.fuel.append(cons)
class ShipModelSimplifiedPropulsion:
''' Creates a ship model object that can be used to simulate a ship in transit
The ship is propelled by a single propeller and steered by a rudder.
The propeller is powered by either the main engine, an auxiliary motor
referred to as the hybrid shaft generator, or both. The model contains the
following states:
- North position of ship
- East position of ship
- Yaw angle (relative to north axis)
- Surge velocity (forward)
- Sway velocity (sideways)
- Yaw rate
- Thrust force
Simulation results are stored in the instance variable simulation_results
'''
def __init__(self, ship_config: ShipConfiguration,
machinery_config: SimplifiedPropulsionMachinerySystemConfiguration,
environment_config: EnvironmentConfiguration,
simulation_config: SimplifiedPropulsionSimulationConfiguration):
route_name = simulation_config.route_name
if route_name != 'none':
# Route following
self.navigate = NavigationSystem(route_name)
self.next_wpt = 1
self.prev_wpt = 0
payload = 0.9 * (ship_config.dead_weight_tonnage - ship_config.bunkers)
lsw = ship_config.dead_weight_tonnage / ship_config.coefficient_of_deadweight_to_displacement \
- ship_config.dead_weight_tonnage
self.mass = lsw + payload + ship_config.bunkers + ship_config.ballast
self.l_ship = ship_config.length_of_ship # 80
self.w_ship = ship_config.width_of_ship # 16.0
self.x_g = 0
self.i_z = self.mass * (self.l_ship ** 2 + self.w_ship ** 2) / 12
# zero-frequency added mass
self.x_du, self.y_dv, self.n_dr = self.set_added_mass(ship_config.added_mass_coefficient_in_surge,
ship_config.added_mass_coefficient_in_sway,
ship_config.added_mass_coefficient_in_yaw)
self.t_surge = ship_config.mass_over_linear_friction_coefficient_in_surge
self.t_sway = ship_config.mass_over_linear_friction_coefficient_in_sway
self.t_yaw = ship_config.mass_over_linear_friction_coefficient_in_yaw
self.ku = ship_config.nonlinear_friction_coefficient__in_surge # 2400.0 # non-linear friction coeff in surge
self.kv = ship_config.nonlinear_friction_coefficient__in_sway # 4000.0 # non-linear friction coeff in sway
self.kr = ship_config.nonlinear_friction_coefficient__in_yaw # 400.0 # non-linear friction coeff in yaw
# Machinery system params
self.machinery_modes = machinery_config.machinery_modes
self.hotel_load = machinery_config.hotel_load # 200000 # 0.2 MW
self.update_available_propulsion_power()
mode = simulation_config.machinery_system_operating_mode
self.mode = self.machinery_modes.list_of_modes[mode]
self.thrust = simulation_config.initial_thrust_force
self.d_thrust = 0
self.k_thrust = 2160 / 790
self.thrust_time_constant = machinery_config.thrust_force_dynamic_time_constant
self.c_rudder_v = machinery_config.rudder_angle_to_sway_force_coefficient
self.c_rudder_r = machinery_config.rudder_angle_to_yaw_force_coefficient # 500000.0 # tuning param for simplified rudder response model
self.rudder_ang_max = machinery_config.max_rudder_angle_degrees * np.pi / 180 # 30 * np.pi / 180 # Maximal rudder angle deflection (both ways)
# Environmental conditions
self.vel_c = np.array([environment_config.current_velocity_component_from_north,
environment_config.current_velocity_component_from_east,
0.0])
self.wind_dir = environment_config.wind_direction
self.wind_speed = environment_config.wind_speed
# Operational parameters used to calculate loading percent on each power source
self.p_rel_rated_hsg = 0.0
self.p_rel_rated_me = 0.0
# Initial states (can be altered using self.set_state_vector(x))
self.n = simulation_config.initial_north_position_m
self.e = simulation_config.initial_east_position_m
self.psi = simulation_config.initial_yaw_angle_rad
self.u = simulation_config.initial_forward_speed_m_per_s
self.v = simulation_config.initial_sideways_speed_m_per_s
self.r = simulation_config.initial_yaw_rate_rad_per_s
self.x = self.update_state_vector()
self.states = [[] for _ in range(7)]  # Lists for storing state time series
# Differentials
self.d_n = self.d_e = self.d_psi = 0
self.d_u = self.d_v = self.d_r = 0
# Set up ship control systems
self.initialize_ship_speed_controller(kp=7, ki=0.13)
self.initialize_ship_heading_controller(kp=4, kd=90, ki=0.005)
self.initialize_heading_filter(kp=0.5, kd=10, t=5000)
# Set up integration
self.int = EulerInt() # Instantiate the Euler integrator
self.int.set_dt(simulation_config.integration_step)
self.int.set_sim_time(simulation_config.simulation_time)
# Instantiate ship draw plotting
self.drw = ShipDraw() # Instantiate the ship drawing class
self.ship_drawings = [[], []] # Arrays for storing ship drawing data
# Fuel
self.fuel_cons_me = 0.0 # Initial fuel cons for ME
self.fuel_cons_electrical = 0.0 # Initial fuel cons for HSG
self.fuel_cons = 0.0 # Initial total fuel cons
self.power_me = [] # Array for storing ME power cons. data
self.power_hsg = [] # Array for storing HSG power cons. data
self.me_rated = [] # Array for storing ME rated power data
self.hsg_rated = [] # Array for storing HSG rated power data
self.load_hist = [] # Array for storing load percentage history
self.fuel_rate_me = [] # Array for storing ME fuel cons. rate
self.fuel_rate_hsg = [] # Array for storing HSG fuel cons. rate
self.fuel_me = [] # Array for storing ME fuel cons.
self.fuel_hsg = [] # Array for storing HSG fuel cons.
self.fuel = [] # Array for storing total fuel cons
self.fuel_rate = []
self.load_perc_me = []
self.load_perc_hsg = []
self.power_total = []
self.power_prop = []
# Wind effect on ship
self.rho_a = 1.2
self.h_f = 8.0 # mean height above water seen from the front
self.h_s = 8.0 # mean height above water seen from the side
self.proj_area_f = self.w_ship * self.h_f # Projected area seen from the front
self.proj_area_l = self.l_ship * self.h_s # Projected area from the side
self.cx = 0.5
self.cy = 0.7
self.cn = 0.08
# Fuel consumption function parameters
self.a_me = 128.89
self.b_me = -168.93
self.c_me = 246.76
self.a_dg = 180.71
self.b_dg = -289.90
self.c_dg = 324.90
self.simulation_results = defaultdict(list)
def update_available_propulsion_power(self):
for mode in self.machinery_modes.list_of_modes:
mode.update_available_propulsion_power(self.hotel_load)
def set_added_mass(self, surge_coeff, sway_coeff, yaw_coeff):
''' Sets the added mass in surge due to surge motion, sway due
to sway motion and yaw due to yaw motion according to given coeffs.
args:
surge_coeff (float): Added mass coefficient in surge direction due to surge motion
sway_coeff (float): Added mass coefficient in sway direction due to sway motion
yaw_coeff (float): Added mass coefficient in yaw direction due to yaw motion
returns:
x_du (float): Added mass in surge
y_dv (float): Added mass in sway
n_dr (float): Added mass in yaw
'''
x_du = self.mass * surge_coeff
y_dv = self.mass * sway_coeff
n_dr = self.i_z * yaw_coeff
return x_du, y_dv, n_dr
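The added-mass terms are simple scalings of the rigid-body mass and yaw inertia. A standalone sketch with illustrative values (the mass, inertia and coefficients below are made up, not taken from any ship configuration):

```python
def added_mass(mass, i_z, surge_coeff, sway_coeff, yaw_coeff):
    # Surge and sway added mass scale with ship mass; yaw with yaw inertia
    return mass * surge_coeff, mass * sway_coeff, i_z * yaw_coeff

# Illustrative values only
x_du, y_dv, n_dr = added_mass(mass=1.0e7, i_z=5.0e9,
                              surge_coeff=0.05, sway_coeff=0.6, yaw_coeff=0.3)
```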
def mode_selector(self, mode: int):
self.mode = self.machinery_modes.list_of_modes[mode]
def spec_fuel_cons_me(self, load_perc):
""" Calculate fuel consumption rate for the main engine.
Args:
load_perc (float): The fraction of the mcr load on the ME
Returns:
Number of kilograms of fuel per second used by ME
"""
rate = self.a_me * load_perc ** 2 + self.b_me * load_perc + self.c_me
return rate / 3.6e9
def spec_fuel_cons_dg(self, load_perc):
""" Calculate fuel consumption rate for a diesel generator.
Args:
load_perc (float): The fraction of the mcr load on the DG
Returns:
Number of kilograms of fuel per second used by DG
"""
rate = self.a_dg * load_perc ** 2 + self.b_dg * load_perc + self.c_dg
return rate / 3.6e9
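Both specific-fuel-consumption methods evaluate a quadratic curve in load fraction and divide by 3.6e9 to convert from g/kWh to kg/J (1 g/kWh = 1e-3 kg per 3.6e6 J). A standalone sketch using the main-engine coefficients defined in __init__:

```python
def spec_fuel_cons(load_frac, a, b, c):
    # Quadratic SFOC curve in g/kWh, converted to kg of fuel per joule
    sfoc_g_per_kwh = a * load_frac ** 2 + b * load_frac + c
    return sfoc_g_per_kwh / 3.6e9

# Main-engine coefficients from this model (a_me, b_me, c_me)
rate = spec_fuel_cons(0.8, a=128.89, b=-168.93, c=246.76)
```

Multiplying this rate by the power drawn from the engine [W] gives the fuel mass flow in kg/s, which is how it is used in fuel_consumption below.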
def load_perc(self, load_perc):
""" Calculates the load percentage on the main engine and the diesel_gens based on the
operating mode of the machinery system (MSO-mode).
Args:
load_perc (float): Current load on the machinery system as a fraction of the
total power that can be delivered by the machinery system in the current mode.
Returns:
load_perc_me (float): Current load on the ME as a fraction of ME MCR
load_perc_hsg (float): Current load on the HSG as a fraction of HSG MCR
"""
load_data = self.mode.distribute_load(load_perc=load_perc, hotel_load=self.hotel_load)
return load_data.load_percentage_on_main_engine, load_data.load_percentage_on_electrical
def fuel_consumption(self, load_perc):
'''
Args:
load_perc (float): The fraction of produced power over the online power production capacity.
Returns:
rate_me (float): Fuel consumption rate for the main engine
rate_hsg (float): Fuel consumption rate for the HSG
fuel_cons_me (float): Accumulated fuel consumption for the ME
fuel_cons_hsg (float): Accumulated fuel consumption for the HSG
fuel_cons (float): Total accumulated fuel consumption for the ship
'''
load_data = self.mode.distribute_load(load_perc=load_perc, hotel_load=self.hotel_load)
if load_data.load_on_main_engine == 0:
rate_me = 0
else:
rate_me = load_data.load_on_main_engine \
* self.spec_fuel_cons_me(load_data.load_percentage_on_main_engine)
if load_data.load_percentage_on_electrical == 0:
rate_electrical = 0
else:
rate_electrical = load_data.load_on_electrical \
* self.spec_fuel_cons_dg(load_data.load_percentage_on_electrical)
self.fuel_cons_me = self.fuel_cons_me + rate_me * self.int.dt
self.fuel_cons_electrical = self.fuel_cons_electrical + rate_electrical * self.int.dt
self.fuel_cons = self.fuel_cons + (rate_me + rate_electrical) * self.int.dt
return rate_me, rate_electrical, self.fuel_cons_me, self.fuel_cons_electrical, self.fuel_cons
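The accumulated consumption above is a forward-Euler sum: each call adds rate times the integrator time step. A minimal sketch of that accumulation:

```python
def accumulate_fuel(rates, dt):
    # Forward-Euler accumulation of fuel mass [kg] from rates [kg/s]
    total = 0.0
    for rate in rates:
        total += rate * dt
    return total

# Three 10 s steps at roughly 0.05 kg/s burn about 1.6 kg in total
burned = accumulate_fuel([0.05, 0.05, 0.06], dt=10.0)
```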
def get_wind_force(self):
''' This method calculates the forces due to the relative
wind speed, acting on the ship in surge, sway and yaw
direction.
:return: Wind force acting in surge, sway and yaw
'''
uw = self.wind_speed * np.cos(self.wind_dir - self.psi)
vw = self.wind_speed * np.sin(self.wind_dir - self.psi)
u_rw = uw - self.u
v_rw = vw - self.v
gamma_rw = -np.arctan2(v_rw, u_rw)
wind_rw2 = u_rw ** 2 + v_rw ** 2
c_x = -self.cx * np.cos(gamma_rw)
c_y = self.cy * np.sin(gamma_rw)
c_n = self.cn * np.sin(2 * gamma_rw)
tau_coeff = 0.5 * self.rho_a * wind_rw2
tau_u = tau_coeff * c_x * self.proj_area_f
tau_v = tau_coeff * c_y * self.proj_area_l
tau_n = tau_coeff * c_n * self.proj_area_l * self.l_ship
return np.array([tau_u, tau_v, tau_n])
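The same wind-force computation as a standalone function. The projected areas, ship length and coefficients below are illustrative placeholders, not the configured ship's values; the formulas mirror the method above:

```python
import numpy as np

def wind_force(wind_speed, wind_dir, psi, u, v,
               rho_a=1.2, area_front=128.0, area_side=640.0, l_ship=80.0,
               cx=0.5, cy=0.7, cn=0.08):
    # Relative-wind forces in surge, sway and yaw
    uw = wind_speed * np.cos(wind_dir - psi)
    vw = wind_speed * np.sin(wind_dir - psi)
    u_rw, v_rw = uw - u, vw - v
    gamma = -np.arctan2(v_rw, u_rw)
    q = 0.5 * rho_a * (u_rw ** 2 + v_rw ** 2)  # dynamic pressure
    tau_u = q * (-cx * np.cos(gamma)) * area_front
    tau_v = q * (cy * np.sin(gamma)) * area_side
    tau_n = q * (cn * np.sin(2 * gamma)) * area_side * l_ship
    return np.array([tau_u, tau_v, tau_n])

# Wind aligned with the body x-axis on a stationary ship
tau = wind_force(wind_speed=10.0, wind_dir=np.pi, psi=0.0, u=0.0, v=0.0)
```

With the relative wind along the body x-axis, the sway force and yaw moment vanish by symmetry and only a surge force remains.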
def update_state_vector(self):
''' Update the state vector according to the individual state values
'''
return np.array([self.n, self.e, self.psi, self.u, self.v, self.r])
def set_north_pos(self, val):
''' Set the north position of the ship and update the state vector
'''
self.n = val
self.x = self.update_state_vector()
def set_east_pos(self, val):
''' Set the east position of the ship and update the state vector
'''
self.e = val
self.x = self.update_state_vector()
def set_yaw_angle(self, val):
''' Set the yaw angle of the ship and update the state vector
'''
self.psi = val
self.x = self.update_state_vector()
def set_surge_speed(self, val):
''' Set the surge speed of the ship and update the state vector
'''
self.u = val
self.x = self.update_state_vector()
def set_sway_speed(self, val):
''' Set the sway speed of the ship and update the state vector
'''
self.v = val
self.x = self.update_state_vector()
def set_yaw_rate(self, val):
''' Set the yaw rate of the ship and update the state vector
'''
self.r = val
self.x = self.update_state_vector()
def initialize_shaft_speed_controller(self, kp, ki):
''' This method sets up and configures the shaft speed
controller of the ship
'''
self.shaft_speed_controller = ControllerLib()
self.shaft_speed_controller.set_kp(kp)
self.shaft_speed_controller.set_ki(ki)
def initialize_ship_speed_controller(self, kp, ki):
''' This method sets up and configures the ship speed
controller.
'''
self.ship_speed_controller = ControllerLib()
self.ship_speed_controller.set_kp(kp)
self.ship_speed_controller.set_ki(ki)
def initialize_ship_heading_controller(self, kp, kd, ki):
''' This method sets up and configures the ship heading
controller.
'''
self.ship_heading_controller = ControllerLib()
self.ship_heading_controller.set_kp(kp)
self.ship_heading_controller.set_kd(-kd)
self.ship_heading_controller.set_ki(ki)
def initialize_heading_filter(self, kp, kd, t):
''' This method sets up and configures a low pass filter
to smooth the heading setpoint signal for the ship
heading controller.
'''
self.ship_heading_filter = ControllerLib()
self.ship_heading_filter.set_kp(kp)
self.ship_heading_filter.set_kd(kd)
self.ship_heading_filter.set_T(t)
def loadperc_from_speedref(self, speed_ref):
''' Calculates suitable machinery load percentage for the ship to
track the speed reference signal. The shaft speed controller
is used to calculate suitable shaft speed to follow the desired
ship speed and suitable load percentage to follow the calculated
shaft speed. The load percentage is the fraction of the produced
power over the total power capacity in the current configuration.
'''
ref_shaft_speed = self.ship_speed_controller.pi_ctrl(speed_ref, self.u, self.int.dt, -550, 550)
ref_shaft_speed = ControllerLib.sat(ref_shaft_speed, 0, self.shaft_speed_max)
load_perc = self.shaft_speed_controller.pi_ctrl(ref_shaft_speed, self.omega, self.int.dt)
load_perc = ControllerLib.sat(load_perc, 0, 1.1)
return load_perc
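ControllerLib's internals are not shown in this chunk; the cascaded controllers above are assumed to be standard PI loops with output saturation, roughly as sketched here (the signature is illustrative, not ControllerLib's actual API):

```python
def pi_step(ref, meas, integral, kp, ki, dt, out_min, out_max):
    # One step of a PI controller with output saturation
    err = ref - meas
    integral = integral + err * dt
    out = kp * err + ki * integral
    out = max(out_min, min(out_max, out))
    return out, integral

# Speed-controller gains from __init__ (kp=7, ki=0.13); load clamped to [0, 1.1]
load, integ = pi_step(ref=8.0, meas=7.0, integral=0.0,
                      kp=7.0, ki=0.13, dt=0.5, out_min=0.0, out_max=1.1)
```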
def rudderang_from_headingref(self, heading_ref):
''' This method finds a suitable rudder angle for the ship to
sail with the heading specified by "heading_ref" by using
PID-controller. The rudder angle is saturated according to
|self.rudder_ang_max|. The method should be called from within the
simulation loop if the user wants the ship to follow a specified
heading reference signal.
'''
rudder_ang = self.ship_heading_controller.pid_ctrl(heading_ref, self.psi, self.int.dt)
rudder_ang = ControllerLib.sat(rudder_ang, -self.rudder_ang_max, self.rudder_ang_max)
return rudder_ang
def rudderang_from_route(self):
''' This method finds a suitable rudder angle for the ship to follow
a predefined route specified in the "navigate"-instantiation of the
"NavigationSystem"-class.
'''
self.next_wpt, self.prev_wpt = self.navigate.next_wpt(self.next_wpt, self.n, self.e)
psi_d = self.navigate.los_guidance(self.next_wpt, self.n, self.e)
return self.rudderang_from_headingref(psi_d)
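The NavigationSystem class is defined elsewhere; a common line-of-sight guidance law it might implement looks like the sketch below (the lookahead distance and the helper itself are illustrative, not the class's actual code):

```python
import numpy as np

def los_heading(wpt_prev, wpt_next, n, e, lookahead=500.0):
    # Heading that points at the path a fixed lookahead distance ahead
    # of the ship's projection onto the current path segment
    d_n = wpt_next[0] - wpt_prev[0]
    d_e = wpt_next[1] - wpt_prev[1]
    path_angle = np.arctan2(d_e, d_n)
    # Cross-track error: lateral offset from the path line
    rel_n, rel_e = n - wpt_prev[0], e - wpt_prev[1]
    cross_track = -rel_n * np.sin(path_angle) + rel_e * np.cos(path_angle)
    return path_angle - np.arctan2(cross_track, lookahead)

# Ship on a due-north leg, exactly on the line: desired heading is north (0)
psi_d = los_heading((0.0, 0.0), (1000.0, 0.0), n=100.0, e=0.0)
```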
def print_next_wpt(self, ship_id):
''' Prints a string with the ship identification (ship_id)
and its next waypoint, if the next waypoint is specified
'''
if self.next_wpt != self.navigate.next_wpt(self.next_wpt, self.n, self.e)[0]:
print('Current target waypoint for ' + ship_id + ' is: ' + str(self.next_wpt))
def set_next_wpt(self, wpt):
''' Sets the next waypoint to "wpt", where "wpt" is the index
of the waypoint refering to the list of waypoints making
up the route specified in the instantiation "navigate" of
the class "NavigationSystem"
'''
self.next_wpt = wpt
def three_dof_kinematics(self):
''' Updates the time differentials of the north position, east
position and yaw angle. Should be called in the simulation
loop before the integration step.
'''
vel = np.array([self.u, self.v, self.r])
dx = np.dot(self.rotation(), vel)
self.d_n = dx[0]
self.d_e = dx[1]
self.d_psi = dx[2]
def rotation(self):
''' Specifies the rotation matrix for rotations about the z-axis, such that
"North-east-down-fixed coordinates" = rotation x "body-fixed coordinates".
'''
return np.array([[np.cos(self.psi), -np.sin(self.psi), 0],
[np.sin(self.psi), np.cos(self.psi), 0],
[0, 0, 1]])
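A quick standalone check of this matrix, confirming that it maps body-frame velocities [u, v, r] to NED rates:

```python
import numpy as np

def rotation(psi):
    # Yaw rotation about the z-axis: v_ned = R(psi) @ v_body
    return np.array([[np.cos(psi), -np.sin(psi), 0.0],
                     [np.sin(psi),  np.cos(psi), 0.0],
                     [0.0, 0.0, 1.0]])

# Pure surge at a 90 degree heading becomes pure east velocity in NED
v_ned = rotation(np.pi / 2) @ np.array([5.0, 0.0, 0.0])
```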
def three_dof_kinetics(self, load_perc, rudder_angle):
''' Calculates accelerations of the ship, as a function
of thrust-force, rudder angle, wind forces and the
states in the previous time-step.
'''
# System matrices (rigid-body terms plus zero-frequency added mass)
M_rb = np.array([[self.mass + self.x_du, 0, 0],
[0, self.mass + self.y_dv, self.mass * self.x_g],
[0, self.mass * self.x_g, self.i_z + self.n_dr]])
C_rb = np.array([[0, 0, -self.mass * (self.x_g * self.r + self.v)],
[0, 0, self.mass * self.u],
[self.mass * (self.x_g * self.r + self.v), -self.mass * self.u, 0]])
D = np.array([[self.mass / self.t_surge, 0, 0],
[0, self.mass / self.t_sway, 0],
[0, 0, self.i_z / self.t_yaw]])
D2 = np.array([[self.ku * self.u, 0, 0],
[0, self.kv * self.v, 0],
[0, 0, self.kr * self.r]])
# Forces acting (replace zero vectors with suitable functions)
f_rudder_v, f_rudder_r = self.rudder(rudder_angle)
self.update_thrust(load_perc)
F_wind = self.get_wind_force()
F_waves = np.array([0, 0, 0])
F_ctrl = np.array([self.thrust, f_rudder_v, f_rudder_r])
# assembling state vector
vel = np.array([self.u, self.v, self.r])
# Transforming current velocity to ship frame
v_c = np.dot(np.linalg.inv(self.rotation()), self.vel_c)
u_r = self.u - v_c[0]
v_r = self.v - v_c[1]
C_a = np.array([[0, 0, self.y_dv * v_r],
[0, 0, -self.x_du * u_r],
[-self.y_dv * v_r, self.x_du * u_r, 0]])
# Kinetic equation
M_inv = np.linalg.inv(M_rb)
dx = np.dot(M_inv, -np.dot(C_rb, vel) - np.dot(C_a, vel - v_c) - np.dot(D + D2, vel - v_c)
+ F_wind + F_waves + F_ctrl)
self.d_u = dx[0]
self.d_v = dx[1]
self.d_r = dx[2]
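At its core the kinetic equation solves M @ dv = (forces) for the body-frame accelerations. A reduced sketch with diagonal matrices, dropping the Coriolis and current terms for brevity (all numbers are illustrative):

```python
import numpy as np

def accelerations(M, D, vel, forces):
    # Solve M @ dv = forces - D @ vel for the body-frame accelerations
    return np.linalg.solve(M, forces - D @ vel)

M = np.diag([1.0e7, 1.2e7, 5.0e9])    # inertia incl. added mass (illustrative)
D = np.diag([5.0e4, 1.0e5, 1.0e8])    # linear damping (illustrative)
vel = np.array([5.0, 0.0, 0.0])       # 5 m/s surge
forces = np.array([5.0e5, 0.0, 0.0])  # thrust only
dv = accelerations(M, D, vel, forces)
```

Using np.linalg.solve instead of explicitly inverting M (as the method above does) is the numerically preferred form of the same computation.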
def rudder(self, delta):
''' This method takes in the rudder angle and returns
the forces in sway and yaw generated by the rudder.
args:
delta (float): The rudder angle in radians
returns:
v_force (float): The force in sway-direction generated by the rudder
r_force (float): The yaw-torque generated by the rudder
'''
u_c = np.dot(np.linalg.inv(self.rotation()), self.vel_c)[0]
v_force = -self.c_rudder_v * delta * (self.u - u_c)
r_force = -self.c_rudder_r * delta * (self.u - u_c)
return v_force, r_force
def update_thrust(self, load_perc):
''' Updates the thrust force based on engine power
'''
power = load_perc * (self.mode.available_propulsion_power_main_engine
+ self.mode.available_propulsion_power_electrical)
self.d_thrust = (-self.k_thrust * self.thrust + power) / self.thrust_time_constant
self.thrust = self.thrust + self.int.dt * self.d_thrust
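update_thrust implements a first-order lag: thrust approaches power / k_thrust with the configured time constant. A sketch with illustrative values (the time constant and step size below are assumptions, not the configured ones):

```python
def step_thrust(thrust, power, k_thrust=2160 / 790, tau=30.0, dt=0.5):
    # One Euler step of d(thrust)/dt = (power - k_thrust * thrust) / tau
    d_thrust = (-k_thrust * thrust + power) / tau
    return thrust + dt * d_thrust

# Run to steady state: thrust settles at power / k_thrust
thrust = 0.0
for _ in range(5000):
    thrust = step_thrust(thrust, power=1.0e6)
```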
def update_differentials(self, load_perc, rudder_angle):
''' This method should be called in the simulation loop. It will
update the full differential equation of the ship.
'''
self.three_dof_kinematics()
self.three_dof_kinetics(load_perc=load_perc, rudder_angle=rudder_angle)
def integrate_differentials(self):
''' Integrates the differential equation one time step ahead using
the Euler integration method with parameters set in the
int-instantiation of the "EulerInt"-class.
'''
self.set_north_pos(self.int.integrate(self.n, self.d_n))
self.set_east_pos(self.int.integrate(self.e, self.d_e))
self.set_yaw_angle(self.int.integrate(self.psi, self.d_psi))
self.set_surge_speed(self.int.integrate(self.u, self.d_u))
self.set_sway_speed(self.int.integrate(self.v, self.d_v))
self.set_yaw_rate(self.int.integrate(self.r, self.d_r))
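EulerInt's integrate method is assumed here to be a plain forward-Euler step; for reference:

```python
def euler_step(x, dx, dt):
    # Forward Euler: x_{k+1} = x_k + dt * dx_k
    return x + dt * dx

# Constant yaw rate of 0.01 rad/s over 100 steps of 0.5 s gives 0.5 rad
psi = 0.0
for _ in range(100):
    psi = euler_step(psi, dx=0.01, dt=0.5)
```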
def store_states(self):
''' Appends the current value of each state to an array. This
is convenient when plotting. The method should be called within
the simulation loop each time step. Then afterwards, an array
containing for example the north position for each time step
is obtained as ...states[0]
'''
self.states[0].append(self.n)
self.states[1].append(self.e)
self.states[2].append(self.psi)
self.states[3].append(self.u)
self.states[4].append(self.v)
self.states[5].append(self.r)
self.states[6].append(self.thrust)
def ship_snap_shot(self):
''' This method is used to store a map-view snap shot of
the ship at the given north-east position and heading.
It uses the ShipDraw-class. To plot a map view of the
n-th ship snap-shot, use:
plot(ship_drawings[1][n], ship_drawings[0][n])
'''
x, y = self.drw.local_coords()
x_ned, y_ned = self.drw.rotate_coords(x, y, self.psi)
x_ned_trans, y_ned_trans = self.drw.translate_coords(x_ned, y_ned, self.n, self.e)
self.ship_drawings[0].append(x_ned_trans)
self.ship_drawings[1].append(y_ned_trans)
def store_simulation_data(self, load_perc):
load_perc_me, load_perc_hsg = self.load_perc(load_perc)
self.simulation_results['time [s]'].append(self.int.time)
self.simulation_results['north position [m]'].append(self.n)
self.simulation_results['east position [m]'].append(self.e)
self.simulation_results['yaw angle [deg]'].append(self.psi * 180 / np.pi)
self.simulation_results['forward speed [m/s]'].append(self.u)
self.simulation_results['sideways speed [m/s]'].append(self.v)
self.simulation_results['yaw rate [deg/sec]'].append(self.r * 180 / np.pi)
self.simulation_results['commanded load fraction [-]'].append(load_perc)
self.simulation_results['commanded load fraction me [-]'].append(load_perc_me)
self.simulation_results['commanded load fraction hsg [-]'].append(load_perc_hsg)
load_data = self.mode.distribute_load(load_perc=load_perc, hotel_load=self.hotel_load)
self.simulation_results['power me [kw]'].append(load_data.load_on_main_engine / 1000)
self.simulation_results['available power me [kw]'].append(self.mode.main_engine_capacity / 1000)
self.simulation_results['power electrical [kw]'].append(load_data.load_on_electrical / 1000)
self.simulation_results['available power electrical [kw]'].append(self.mode.electrical_capacity / 1000)
self.simulation_results['power [kw]'].append((load_data.load_on_electrical
+ load_data.load_on_main_engine) / 1000)
self.simulation_results['propulsion power [kw]'].append((load_perc
* self.mode.available_propulsion_power) / 1000)
rate_me, rate_hsg, cons_me, cons_hsg, cons = self.fuel_consumption(load_perc)
self.simulation_results['fuel rate me [kg/s]'].append(rate_me)
self.simulation_results['fuel rate hsg [kg/s]'].append(rate_hsg)
self.simulation_results['fuel rate [kg/s]'].append(rate_me + rate_hsg)
self.simulation_results['fuel consumption me [kg]'].append(cons_me)
self.simulation_results['fuel consumption hsg [kg]'].append(cons_hsg)
self.simulation_results['fuel consumption [kg]'].append(cons)
self.fuel_me.append(cons_me)
self.fuel_hsg.append(cons_hsg)
self.fuel.append(cons)
self.simulation_results['thrust force [kN]'].append(self.thrust / 1000)
class ShipModelWithoutPropulsion:
''' Creates a ship model object that can be used to simulate a ship drifting freely
The model contains the following states:
- North position of ship
- East position of ship
- Yaw angle (relative to north axis)
- Surge velocity (forward)
- Sway velocity (sideways)
- Yaw rate
Simulation results are stored in the instance variable simulation_results
'''
def __init__(self, ship_config: ShipConfiguration,
environment_config: EnvironmentConfiguration,
simulation_config: DriftSimulationConfiguration):
payload = 0.9 * (ship_config.dead_weight_tonnage - ship_config.bunkers)
lsw = ship_config.dead_weight_tonnage / ship_config.coefficient_of_deadweight_to_displacement \
- ship_config.dead_weight_tonnage
self.mass = lsw + payload + ship_config.bunkers + ship_config.ballast
self.l_ship = ship_config.length_of_ship # 80
self.w_ship = ship_config.width_of_ship # 16.0
self.x_g = 0
self.i_z = self.mass * (self.l_ship ** 2 + self.w_ship ** 2) / 12
# zero-frequency added mass
self.x_du, self.y_dv, self.n_dr = self.set_added_mass(ship_config.added_mass_coefficient_in_surge,
ship_config.added_mass_coefficient_in_sway,
ship_config.added_mass_coefficient_in_yaw)
self.t_surge = ship_config.mass_over_linear_friction_coefficient_in_surge
self.t_sway = ship_config.mass_over_linear_friction_coefficient_in_sway
self.t_yaw = ship_config.mass_over_linear_friction_coefficient_in_yaw
self.ku = ship_config.nonlinear_friction_coefficient__in_surge # 2400.0 # non-linear friction coeff in surge
self.kv = ship_config.nonlinear_friction_coefficient__in_sway # 4000.0 # non-linear friction coeff in sway
self.kr = ship_config.nonlinear_friction_coefficient__in_yaw # 400.0 # non-linear friction coeff in yaw
# Environmental conditions
self.vel_c = np.array([environment_config.current_velocity_component_from_north,
environment_config.current_velocity_component_from_east,
0.0])
self.wind_dir = environment_config.wind_direction
self.wind_speed = environment_config.wind_speed
# Initial states (can be altered using self.set_state_vector(x))
self.n = simulation_config.initial_north_position_m
self.e = simulation_config.initial_east_position_m
self.psi = simulation_config.initial_yaw_angle_rad
self.u = simulation_config.initial_forward_speed_m_per_s
self.v = simulation_config.initial_sideways_speed_m_per_s
self.r = simulation_config.initial_yaw_rate_rad_per_s
self.x = self.update_state_vector()
self.states = [[] for _ in range(6)]  # Lists for storing state time series
# Differentials
self.d_n = self.d_e = self.d_psi = 0
self.d_u = self.d_v = self.d_r = 0
# Set up integration
self.int = EulerInt() # Instantiate the Euler integrator
self.int.set_dt(simulation_config.integration_step)
self.int.set_sim_time(simulation_config.simulation_time)
# Instantiate ship draw plotting
self.drw = ShipDraw() # Instantiate the ship drawing class
self.ship_drawings = [[], []] # Arrays for storing ship drawing data
# Wind effect on ship
self.rho_a = 1.2
self.h_f = 8.0 # mean height above water seen from the front
self.h_s = 8.0 # mean height above water seen from the side
self.proj_area_f = self.w_ship * self.h_f # Projected area seen from the front
self.proj_area_l = self.l_ship * self.h_s # Projected area from the side
self.cx = 0.5
self.cy = 0.7
self.cn = 0.08
self.simulation_results = defaultdict(list)
def set_added_mass(self, surge_coeff, sway_coeff, yaw_coeff):
''' Sets the added mass in surge due to surge motion, sway due
to sway motion and yaw due to yaw motion according to given coeffs.
args:
surge_coeff (float): Added mass coefficient in surge direction due to surge motion
sway_coeff (float): Added mass coefficient in sway direction due to sway motion
yaw_coeff (float): Added mass coefficient in yaw direction due to yaw motion
returns:
x_du (float): Added mass in surge
y_dv (float): Added mass in sway
n_dr (float): Added mass in yaw
'''
x_du = self.mass * surge_coeff
y_dv = self.mass * sway_coeff
n_dr = self.i_z * yaw_coeff
return x_du, y_dv, n_dr
def get_wind_force(self):
''' This method calculates the forces due to the relative
wind speed, acting on the ship in surge, sway and yaw
direction.
:return: Wind force acting in surge, sway and yaw
'''
uw = self.wind_speed * np.cos(self.wind_dir - self.psi)
vw = self.wind_speed * np.sin(self.wind_dir - self.psi)
u_rw = uw - self.u
v_rw = vw - self.v
gamma_rw = -np.arctan2(v_rw, u_rw)
wind_rw2 = u_rw ** 2 + v_rw ** 2
c_x = -self.cx * np.cos(gamma_rw)
c_y = self.cy * np.sin(gamma_rw)
c_n = self.cn * np.sin(2 * gamma_rw)
tau_coeff = 0.5 * self.rho_a * wind_rw2
tau_u = tau_coeff * c_x * self.proj_area_f
tau_v = tau_coeff * c_y * self.proj_area_l
tau_n = tau_coeff * c_n * self.proj_area_l * self.l_ship
return np.array([tau_u, tau_v, tau_n])
def update_state_vector(self):
''' Update the state vector according to the individual state values
'''
return np.array([self.n, self.e, self.psi, self.u, self.v, self.r])
def set_north_pos(self, val):
''' Set the north position of the ship and update the state vector
'''
self.n = val
self.x = self.update_state_vector()
def set_east_pos(self, val):
''' Set the east position of the ship and update the state vector
'''
self.e = val
self.x = self.update_state_vector()
def set_yaw_angle(self, val):
''' Set the yaw angle of the ship and update the state vector
'''
self.psi = val
self.x = self.update_state_vector()
def set_surge_speed(self, val):
''' Set the surge speed of the ship and update the state vector
'''
self.u = val
self.x = self.update_state_vector()
def set_sway_speed(self, val):
''' Set the sway speed of the ship and update the state vector
'''
self.v = val
self.x = self.update_state_vector()
def set_yaw_rate(self, val):
''' Set the yaw rate of the ship and update the state vector
'''
self.r = val
self.x = self.update_state_vector()
def three_dof_kinematics(self):
''' Updates the time differentials of the north position, east
position and yaw angle. Should be called in the simulation
loop before the integration step.
'''
vel = np.array([self.u, self.v, self.r])
dx = np.dot(self.rotation(), vel)
self.d_n = dx[0]
self.d_e = dx[1]
self.d_psi = dx[2]
def rotation(self):
''' Specifies the rotation matrix for rotations about the z-axis, such that
"North-east-down-fixed coordinates" = rotation x "body-fixed coordinates".
'''
return np.array([[np.cos(self.psi), -np.sin(self.psi), 0],
[np.sin(self.psi), np.cos(self.psi), 0],
[0, 0, 1]])
def three_dof_kinetics(self):
''' Calculates accelerations of the ship, as a function
of wind forces and the states in the previous time-step.
'''
# System matrices (rigid-body terms plus zero-frequency added mass)
M_rb = np.array([[self.mass + self.x_du, 0, 0],
[0, self.mass + self.y_dv, self.mass * self.x_g],
[0, self.mass * self.x_g, self.i_z + self.n_dr]])
C_rb = np.array([[0, 0, -self.mass * (self.x_g * self.r + self.v)],
[0, 0, self.mass * self.u],
[self.mass * (self.x_g * self.r + self.v), -self.mass * self.u, 0]])
D = np.array([[self.mass / self.t_surge, 0, 0],
[0, self.mass / self.t_sway, 0],
[0, 0, self.i_z / self.t_yaw]])
D2 = np.array([[self.ku * self.u, 0, 0],
[0, self.kv * self.v, 0],
[0, 0, self.kr * self.r]])
F_wind = self.get_wind_force()
F_waves = np.array([0, 0, 0])
# assembling state vector
vel = np.array([self.u, self.v, self.r])
# Transforming current velocity to ship frame
v_c = np.dot(np.linalg.inv(self.rotation()), self.vel_c)
u_r = self.u - v_c[0]
v_r = self.v - v_c[1]
C_a = np.array([[0, 0, self.y_dv * v_r],
[0, 0, -self.x_du * u_r],
[-self.y_dv * v_r, self.x_du * u_r, 0]])
# Kinetic equation
M_inv = np.linalg.inv(M_rb)
dx = np.dot(M_inv, -np.dot(C_rb, vel) - np.dot(C_a, vel - v_c) - np.dot(D + D2, vel - v_c)
+ F_wind)
self.d_u = dx[0]
self.d_v = dx[1]
self.d_r = dx[2]
def update_differentials(self):
''' This method should be called in the simulation loop. It will
update the full differential equation of the ship.
'''
self.three_dof_kinematics()
self.three_dof_kinetics()
def integrate_differentials(self):
''' Integrates the differential equation one time step ahead using
the Euler integration method, with parameters set in the
"EulerInt" instance stored in self.int.
'''
self.set_north_pos(self.int.integrate(self.n, self.d_n))
self.set_east_pos(self.int.integrate(self.e, self.d_e))
self.set_yaw_angle(self.int.integrate(self.psi, self.d_psi))
self.set_surge_speed(self.int.integrate(self.u, self.d_u))
self.set_sway_speed(self.int.integrate(self.v, self.d_v))
self.set_yaw_rate(self.int.integrate(self.r, self.d_r))
def store_states(self):
''' Appends the current value of each state to an array. This
is convenient when plotting. The method should be called within
the simulation loop each time step. Afterwards, an array
containing for example the north position for each time step
is obtained as ...states[0]
'''
self.states[0].append(self.n)
self.states[1].append(self.e)
self.states[2].append(self.psi)
self.states[3].append(self.u)
self.states[4].append(self.v)
self.states[5].append(self.r)
def ship_snap_shot(self):
''' This method is used to store a map-view snap shot of
the ship at the given north-east position and heading.
It uses the ShipDraw-class. To plot a map view of the
n-th ship snap-shot, use:
plot(ship_drawings[1][n], ship_drawings[0][n])
'''
x, y = self.drw.local_coords()
x_ned, y_ned = self.drw.rotate_coords(x, y, self.psi)
x_ned_trans, y_ned_trans = self.drw.translate_coords(x_ned, y_ned, self.n, self.e)
self.ship_drawings[0].append(x_ned_trans)
self.ship_drawings[1].append(y_ned_trans)
def store_simulation_data(self):
self.simulation_results['time [s]'].append(self.int.time)
self.simulation_results['north position [m]'].append(self.n)
self.simulation_results['east position [m]'].append(self.e)
self.simulation_results['yaw angle [deg]'].append(self.psi * 180 / np.pi)
self.simulation_results['forward speed [m/s]'].append(self.u)
self.simulation_results['sideways speed [m/s]'].append(self.v)
self.simulation_results['yaw rate [deg/sec]'].append(self.r * 180 / np.pi)
self.simulation_results['wind speed [m/sec]'].append(self.wind_speed)
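The kinematics update and Euler integration pattern above can be condensed into a stand-alone sketch (the `simulate` helper below is hypothetical, assuming constant body-frame velocities):

```python
import numpy as np

def rotation(psi):
    # Rotation about the z-axis: maps body-fixed velocities to NED-frame rates
    return np.array([[np.cos(psi), -np.sin(psi), 0.0],
                     [np.sin(psi), np.cos(psi), 0.0],
                     [0.0, 0.0, 1.0]])

def simulate(u, v, r, dt=0.01, steps=1000):
    # Euler-integrate north/east position and yaw for constant body velocities
    n = e = psi = 0.0
    for _ in range(steps):
        d_n, d_e, d_psi = rotation(psi) @ np.array([u, v, r])
        n, e, psi = n + d_n * dt, e + d_e * dt, psi + d_psi * dt
    return n, e, psi

# Pure surge at 2 m/s for 10 s: the ship ends 20 m further north
n, e, psi = simulate(u=2.0, v=0.0, r=0.0, dt=0.1, steps=100)
```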
class ControllerLib:
''' This class offers the following set of controllers :
- P-controller
- PI-controller
- PD-controller
- PID-controller
- A second order filter
- Signal saturation
'''
def __init__(self):
self.kp = 1.0
self.ki = 1.0
self.kd = 1.0
self.t = 1.0
self.prev_error = 0.0
self.error_i = 0.0
def set_error_i(self, val):
''' Reset/set the value of the error integral to "val".
Useful for PI and PID controllers.
'''
self.error_i = val
def set_kp(self, val):
''' Set the proportional gain constant
'''
self.kp = val
def set_kd(self, val):
''' Set the gain constant for the derivative term
'''
self.kd = val
def set_ki(self, val):
''' Set the gain constant for the integral term
'''
self.ki = val
def set_T(self, val):
''' Set the time constant. Only relevant
for the second-order filter.
'''
self.t = val
def p_ctrl(self, ref, meas):
''' Uses a proportional control law to calculate a control output
'''
error = ref - meas
return self.kp * error
def pi_ctrl(self, ref, meas, dt, *args):
''' Uses a proportional-integral control law to calculate a control
output. The optional arguments specify lower and upper limits for the
error integral: pi_ctrl(ref, meas, dt, lower, upper)
'''
error = ref - meas
error_i = self.error_i + error * dt
if args:
error_i = self.sat(error_i, args[0], args[1])
self.error_i = error_i
return error * self.kp + error_i * self.ki
def pd_ctrl(self, ref, meas, dt):
''' Uses a proportional-derivative control law to calculate a control
output
'''
error = ref - meas
d_error = (error - self.prev_error) / dt
self.prev_error = error
return error * self.kp - d_error * self.kd
def pid_ctrl(self, ref, meas, dt, *args):
''' Uses a proportional-integral-derivative control law to calculate
a control output. The optional arguments specify lower and upper
limits for the error integral: pid_ctrl(ref, meas, dt, lower, upper)
'''
error = ref - meas
d_error = (error - self.prev_error) / dt
error_i = self.error_i + error * dt
if args:
error_i = self.sat(error_i, args[0], args[1])
self.prev_error = error
self.error_i = error_i
return error * self.kp - d_error * self.kd + error_i * self.ki
def filter_2(self, ref, x, v):
''' Calculates the two time differentials dx and dv which may be
integrated to "smooth out" the reference signal "ref"
'''
dx = v
dv = (self.kp * (ref - x) - self.kd * v) / self.t
return dx, dv
@staticmethod
def sat(val, low, hi):
''' Saturate the input val such that it remains
between "low" and "hi"
'''
return max(low, min(val, hi))
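A minimal stand-alone sketch of the PI law with integral saturation (anti-windup), mirroring `pi_ctrl` and `sat` above; the `PI` class name and the gain values are illustrative:

```python
class PI:
    # Minimal PI controller mirroring ControllerLib.pi_ctrl with integral saturation
    def __init__(self, kp, ki):
        self.kp, self.ki = kp, ki
        self.error_i = 0.0

    def step(self, ref, meas, dt, low=None, hi=None):
        error = ref - meas
        self.error_i += error * dt
        if low is not None and hi is not None:
            # Anti-windup: keep the integrated error within [low, hi]
            self.error_i = max(low, min(self.error_i, hi))
        return self.kp * error + self.ki * self.error_i

ctrl = PI(kp=2.0, ki=0.5)
u1 = ctrl.step(ref=1.0, meas=0.0, dt=0.1)  # 2.0 * 1.0 + 0.5 * 0.1
```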
class EulerInt:
''' Provides methods relevant for using the
Euler method to integrate an ODE.
Usage:
integrator = EulerInt()
while integrator.time <= integrator.sim_time:
dx = f(x)
x = integrator.integrate(x, dx)
integrator.next_time()
'''
def __init__(self):
self.dt = 0.01
self.sim_time = 10
self.time = 0.0
self.times = []
self.global_times = []
def set_dt(self, val):
''' Sets the integrator step length
'''
self.dt = val
def set_sim_time(self, val):
''' Sets the upper time integration limit
'''
self.sim_time = val
def set_time(self, val):
''' Sets the time variable to "val"
'''
self.time = val
def next_time(self, time_shift=0):
''' Increment the time variable to the next time instance
and store in an array
'''
self.time = self.time + self.dt
self.times.append(self.time)
self.global_times.append(self.time + time_shift)
def integrate(self, x, dx):
''' Performs the Euler integration step
'''
return x + dx * self.dt
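The usage pattern can be sketched end-to-end. Below, a trimmed-down integrator with the same `integrate()`/`next_time()` interface solves dx/dt = -x, whose exact solution at t = 1 is exp(-1) ~ 0.368; explicit Euler approaches this as dt shrinks:

```python
class MiniEulerInt:
    # Trimmed-down integrator with the same integrate()/next_time() interface
    def __init__(self, dt, sim_time):
        self.dt, self.sim_time, self.time = dt, sim_time, 0.0

    def integrate(self, x, dx):
        return x + dx * self.dt

    def next_time(self):
        self.time += self.dt

def decay(x0=1.0, dt=0.001, sim_time=1.0):
    # Solve dx/dt = -x with explicit Euler
    integrator = MiniEulerInt(dt, sim_time)
    x = x0
    while integrator.time < integrator.sim_time:
        x = integrator.integrate(x, -x)
        integrator.next_time()
    return x

x_final = decay()  # close to exp(-1) for small dt
```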
class ShipDraw:
''' This class is used to calculate the coordinates of each
corner of an 80 m long and 20 m wide ship seen from above,
and to rotate and translate the coordinates according to
the ship heading and position
'''
def __init__(self):
self.l = 80.0
self.b = 20.0
def local_coords(self):
''' Here the ship is pointing along the local
x-axis with its midship point at the origin.
1 denotes the left back corner
2 denotes the left starting point of the bow curvature
3 denotes the bow
4 the right starting point of the bow curve
5 the right back corner
'''
x1, y1 = -self.l / 2, -self.b / 2
x2, y2 = self.l / 4, -self.b / 2
x3, y3 = self.l / 2, 0.0
x4, y4 = self.l / 4, self.b / 2
x5, y5 = -self.l / 2, self.b / 2
x = np.array([x1, x2, x3, x4, x5, x1])
y = np.array([y1, y2, y3, y4, y5, y1])
return x, y
def rotate_coords(self, x, y, psi):
''' Rotates the ship by an angle psi
'''
x_t = np.cos(psi) * x - np.sin(psi) * y
y_t = np.sin(psi) * x + np.cos(psi) * y
return x_t, y_t
def translate_coords(self, x_ned, y_ned, north, east):
''' Takes in coordinates of the corners of the ship (in the NED frame)
and translates them in the north and east direction according to
"north" and "east"
'''
x_t = x_ned + north
y_t = y_ned + east
return x_t, y_t
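The rotation law used by `rotate_coords` can be checked in isolation; a point one metre ahead of midship, rotated 90 degrees, should end up purely to the side:

```python
import numpy as np

def rotate_coords(x, y, psi):
    # Same rotation law as ShipDraw.rotate_coords
    x_t = np.cos(psi) * x - np.sin(psi) * y
    y_t = np.sin(psi) * x + np.cos(psi) * y
    return x_t, y_t

# A point 1 m ahead of midship, rotated 90 degrees, ends up 1 m to the side
x_t, y_t = rotate_coords(np.array([1.0]), np.array([0.0]), np.pi / 2)
```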
class NavigationSystem:
''' This class provides a way of following a predefined route using
a line-of-sight (LOS) guidance law. The path to the text file where
the route is specified is given as an argument when instantiating the
class. The route text file is formatted as follows:
x1 y1
x2 y2
...
where (x1,y1) are the coordinates to the first waypoint,
(x2,y2) to the second, etc.
'''
def __init__(self, route):
self.load_waypoints(route)
self.ra = 600 # Radius of acceptance for waypoints
self.r = 450 # Lookahead distance
def load_waypoints(self, route):
''' Reads the file containing the route and stores it as an
array of north positions and an array of east positions
'''
self.data = np.loadtxt(route)
self.north = []
self.east = []
for i in range(int(np.size(self.data) / 2)):
self.north.append(self.data[i][0])
self.east.append(self.data[i][1])
def next_wpt(self, k, N, E):
''' Returns the index of the next and current waypoint. The method, if
called at each time step, will detect when the ship has arrived
close enough to a waypoint to proceed to the next waypoint. Example
of usage in the method "rudderang_from_route()" from the ShipDyn-class.
'''
if (self.north[k] - N) ** 2 + (
self.east[k] - E) ** 2 <= self.ra ** 2: # Check that we are within circle of acceptance
if len(self.north) > k + 1: # If number of waypoints are greater than current waypoint index
return k + 1, k # Then move on to next waypoint and let current become previous
else:
return k, k # At the end of the route, let the next wpt also be the previous wpt
else:
return k, k - 1
def los_guidance(self, k, x, y):
''' Returns the desired heading (i.e. reference signal to
a ship heading controller). The parameter "k" is the
index of the next waypoint.
'''
dx = self.north[k] - self.north[k - 1]
dy = self.east[k] - self.east[k - 1]
alpha_k = math.atan2(dy, dx)
e_ct = -(x - self.north[k - 1]) * math.sin(alpha_k) + (y - self.east[k - 1]) * math.cos(alpha_k)
if e_ct ** 2 >= self.r ** 2:
e_ct = math.copysign(0.99 * self.r, e_ct)  # Saturate while preserving the sign of the cross-track error
delta = math.sqrt(self.r ** 2 - e_ct ** 2)
chi_r = math.atan(-e_ct / delta)
return alpha_k + chi_r
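The LOS law above can be reproduced as a stand-alone function (a sketch rather than a byte-for-byte copy: here the cross-track error is clamped symmetrically). A ship sitting exactly on a due-north path should be commanded straight along it, and a ship east of the path should be commanded back towards it:

```python
import math

def los_heading(wp_prev, wp_next, pos, lookahead=450.0):
    # Lookahead-based line-of-sight guidance; waypoints and pos are (north, east)
    dx = wp_next[0] - wp_prev[0]
    dy = wp_next[1] - wp_prev[1]
    alpha_k = math.atan2(dy, dx)  # path-tangential angle
    # Cross-track error relative to the straight line between waypoints
    e_ct = (-(pos[0] - wp_prev[0]) * math.sin(alpha_k)
            + (pos[1] - wp_prev[1]) * math.cos(alpha_k))
    e_ct = max(-0.99 * lookahead, min(e_ct, 0.99 * lookahead))
    delta = math.sqrt(lookahead ** 2 - e_ct ** 2)
    return alpha_k + math.atan(-e_ct / delta)

# On a due-north path with zero cross-track error the commanded heading is 0 (north)
psi_d = los_heading((0.0, 0.0), (1000.0, 0.0), (500.0, 0.0))
```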
class StaticObstacle:
''' This class is used to define a static obstacle. It can only make
circular obstacles. The class is instantiated with the following
input parameters:
- n_pos: The north coordinate of the center of the obstacle.
- e_pos: The east coordinate of the center of the obstacle.
- radius: The radius of the obstacle.
'''
def __init__(self, n_pos, e_pos, radius):
self.n = n_pos
self.e = e_pos
self.r = radius
def distance(self, n_ship, e_ship):
''' Returns the distance from a ship with coordinates (north, east) =
(n_ship, e_ship) to the closest point on the periphery of the
circular obstacle.
'''
rad_2 = (n_ship - self.n) ** 2 + (e_ship - self.e) ** 2
rad = np.sqrt(abs(rad_2))
return rad - self.r
def plot_obst(self, ax):
''' Plots the circular obstacle in a map view on the given axes.
'''
ax.add_patch(plt.Circle((self.e, self.n), radius=self.r, fill=True, color='grey'))
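The clearance computed by `distance` reduces to the centre distance minus the obstacle radius; a stand-alone sketch:

```python
import math

def obstacle_clearance(n_ship, e_ship, n_obst, e_obst, radius):
    # Distance from the ship to the nearest point on the obstacle periphery
    # (negative when the ship is inside the obstacle circle)
    return math.hypot(n_ship - n_obst, e_ship - e_obst) - radius

clearance = obstacle_clearance(100.0, 0.0, 0.0, 0.0, 30.0)  # 100 m from centre, 30 m radius
```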
class Zones:
def __init__(self, z_config: ZonesConfiguration, iceberg_config: IcebergConfiguration):
self.n = z_config.n_pos
self.e = z_config.e_pos
self.r = z_config.coll_radius
self.r0 = z_config.excl_radius
self.r1 = z_config.zone1_radius
self.r2 = z_config.zone2_radius
self.r3 = z_config.zone3_radius
self.collimargin = 0.5 * (iceberg_config.waterlinelength_of_iceberg + z_config.object_radius)
def distance(self, n_iceberg, e_iceberg):
''' Returns the distance from an iceberg with coordinates (north, east) =
(n_iceberg, e_iceberg) to the center of the circular zones.
'''
rad_2 = (n_iceberg - self.n) ** 2 + (e_iceberg - self.e) ** 2
rad = np.sqrt(abs(rad_2))
return rad
def d_to_north(self, n_iceberg):
rad = abs(n_iceberg - self.n)
return rad
def d_to_east(self, e_iceberg):
rad = abs(e_iceberg - self.e)
return rad
def cpa_zone(self, d_to_s):
"""to calculate which zone the cpa (closest point of approach). d_to_s is the distance between the iceberg center and zone center"""
if d_to_s - self.collimargin - self.r <= 0:
cpazone = -1 # "Collision Zone"
elif d_to_s - self.collimargin - self.r0 <= 0:
cpazone = 0 # "Exclusion Zone"
elif d_to_s - self.collimargin - self.r1 <= 0:
cpazone = 1 # "Zone 1"
elif d_to_s - self.collimargin - self.r2 <= 0:
cpazone = 2 # "Zone 2"
elif d_to_s - self.collimargin - self.r3 <= 0:
cpazone = 3 # "Zone 3"
else:
cpazone = 4 # "outside all zones"
return cpazone
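The zone classification walks the radii from the innermost circle outwards; a stand-alone sketch with illustrative radii and margin:

```python
def classify_cpa_zone(d_to_s, margin, radii):
    # radii = (collision, exclusion, zone1, zone2, zone3), innermost first
    for zone, r in zip((-1, 0, 1, 2, 3), radii):
        if d_to_s - margin - r <= 0:
            return zone
    return 4  # outside all zones

radii = (50.0, 200.0, 500.0, 1000.0, 2000.0)
zone = classify_cpa_zone(d_to_s=600.0, margin=40.0, radii=radii)  # 560 m effective distance
```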
def d_to_exclusion(self, n_iceberg, e_iceberg):
''' Returns the distance from an iceberg with coordinates (north, east) =
(n_iceberg, e_iceberg) to the closest point on the periphery of the
exclusion zone, accounting for the collision margin.
'''
rad_2 = (n_iceberg - self.n) ** 2 + (e_iceberg - self.e) ** 2
rad = np.sqrt(abs(rad_2))
return rad - self.r0 - self.collimargin
def colli_event(self, n_iceberg, e_iceberg):
rad_2 = (n_iceberg - self.n) ** 2 + (e_iceberg - self.e) ** 2
rad = np.sqrt(abs(rad_2))
if rad - self.collimargin - self.r <= 0:
return 1
else:
return 0
def breach_exclusion(self, n_iceberg, e_iceberg):
rad_2 = (n_iceberg - self.n) ** 2 + (e_iceberg - self.e) ** 2
rad = np.sqrt(abs(rad_2))
if rad - self.collimargin - self.r0 <= 0:
return 1
else:
return 0
def plot_coll(self):
''' Returns a circle patch for plotting the collision zone in a map view.
'''
return plt.Circle((self.e, self.n), radius=self.r + self.collimargin, fill=False, color='red')
def plot_excl(self):
''' Returns a circle patch for plotting the exclusion zone in a map view.
'''
return plt.Circle((self.e, self.n), radius=self.r0 + self.collimargin, fill=False, color='red')
def plot_zone1(self):
''' Returns a circle patch for plotting zone 1 in a map view.
'''
return plt.Circle((self.e, self.n), radius=self.r1 + self.collimargin, fill=False, color='orange')
def plot_zone2(self):
''' Returns a circle patch for plotting zone 2 in a map view.
'''
return plt.Circle((self.e, self.n), radius=self.r2 + self.collimargin, fill=False, color='blue')
def plot_zone3(self):
''' Returns a circle patch for plotting zone 3 in a map view.
'''
return plt.Circle((self.e, self.n), radius=self.r3 + self.collimargin, fill=False, color='green')
class IcebergDraw:
''' This class is used to calculate the coordinates of each
corner of an iceberg (with length and width taken from the iceberg
configuration) seen from above, and to rotate and translate the
coordinates according to the iceberg heading and position
'''
def __init__(self, iceberg_config: IcebergConfiguration):
self.l = iceberg_config.waterlinelength_of_iceberg
self.b = iceberg_config.width_of_iceberg
def local_coords(self):
''' Here the iceberg is pointing along the local
x-axis with its center at the origin.
1 denotes the left back corner
2 denotes the left starting point of the bow curvature
3 denotes the bow
4 the right starting point of the bow curve
5 the right back corner
'''
x1, y1 = -self.l / 2, -self.b / 2
x2, y2 = self.l / 4, -self.b / 2
x3, y3 = self.l / 2, 0.0
x4, y4 = self.l / 4, self.b / 2
x5, y5 = -self.l / 2, self.b / 2
x = np.array([x1, x2, x3, x4, x5, x1])
y = np.array([y1, y2, y3, y4, y5, y1])
return x, y
def rotate_coords(self, x, y, psi):
''' Rotates the iceberg by an angle psi
'''
x_t = np.cos(psi) * x - np.sin(psi) * y
y_t = np.sin(psi) * x + np.cos(psi) * y
return x_t, y_t
def translate_coords(self, x_ned, y_ned, north, east):
''' Takes in coordinates of the corners of the iceberg (in the NED frame)
and translates them in the north and east direction according to
"north" and "east"
'''
x_t = x_ned + north
y_t = y_ned + east
return x_t, y_t
class IcebergDriftingModel1:
''' Creates an iceberg model object that can be used to simulate an iceberg drifting freely.
The model contains the following states:
- North position of iceberg
- East position of iceberg
- Yaw angle (relative to north axis)
- Surge velocity (forward)
- Sway velocity (sideways)
- Yaw rate
Simulation results are stored in the instance variable simulation_results
'''
def __init__(self, iceberg_config: IcebergConfiguration,
environment_config: EnvironmentConfiguration,
simulation_config: DriftSimulationConfiguration):
payload = 0.9 * (iceberg_config.mass_tonnage)
lsw = iceberg_config.mass_tonnage / iceberg_config.coefficient_of_deadweight_to_displacement \
- iceberg_config.mass_tonnage
self.mass = lsw + payload
self.l_iceberg = iceberg_config.waterlinelength_of_iceberg # 80
self.w_iceberg = iceberg_config.width_of_iceberg # 16.0
self.x_g = 0
self.i_z = self.mass * (self.l_iceberg ** 2 + self.w_iceberg ** 2) / 12
# zero-frequency added mass
self.x_du, self.y_dv, self.n_dr = self.set_added_mass(iceberg_config.added_mass_coefficient_in_surge,
iceberg_config.added_mass_coefficient_in_sway,
iceberg_config.added_mass_coefficient_in_yaw)
self.t_surge = iceberg_config.mass_over_linear_friction_coefficient_in_surge
self.t_sway = iceberg_config.mass_over_linear_friction_coefficient_in_sway
self.t_yaw = iceberg_config.mass_over_linear_friction_coefficient_in_yaw
self.ku = iceberg_config.nonlinear_friction_coefficient__in_surge # 2400.0 # non-linear friction coeff in surge
self.kv = iceberg_config.nonlinear_friction_coefficient__in_sway # 4000.0 # non-linear friction coeff in sway
self.kr = iceberg_config.nonlinear_friction_coefficient__in_yaw # 400.0 # non-linear friction coeff in yaw
# Environmental conditions
self.vel_c = np.array([environment_config.current_velocity_component_from_north,
environment_config.current_velocity_component_from_east,
0.0])
self.wind_dir = environment_config.wind_direction
self.wind_speed = environment_config.wind_speed
# Initial states (can be altered using self.set_state_vector(x))
self.n = simulation_config.initial_north_position_m
self.e = simulation_config.initial_east_position_m
self.psi = simulation_config.initial_yaw_angle_rad
self.u = simulation_config.initial_forward_speed_m_per_s
self.v = simulation_config.initial_sideways_speed_m_per_s
self.r = simulation_config.initial_yaw_rate_rad_per_s
self.x = self.update_state_vector()
self.states = [[] for _ in range(6)]  # One list per state, filled by store_states()
# Initial states (save as local values)
self.n_initial = simulation_config.initial_north_position_m
self.e_initial = simulation_config.initial_east_position_m
self.psi_initial = simulation_config.initial_yaw_angle_rad
self.u_initial = simulation_config.initial_forward_speed_m_per_s
self.v_initial = simulation_config.initial_sideways_speed_m_per_s
self.r_initial = simulation_config.initial_yaw_rate_rad_per_s
# Differentials
self.d_n = self.d_e = self.d_psi = 0
self.d_u = self.d_v = self.d_r = 0
# Set up integration
self.int = EulerInt() # Instantiate the Euler integrator
self.int.set_dt(simulation_config.integration_step)
self.int.set_sim_time(simulation_config.simulation_time)
# Instantiate iceberg draw plotting
self.drw = IcebergDraw(iceberg_config)  # Instantiate the iceberg drawing class
self.iceberg_drawings = [[], []]  # Arrays for storing iceberg drawing data
# Wind effect on iceberg
self.rho_a = 1.2
self.h_f = 8.0 # mean height above water seen from the front
self.h_s = 8.0 # mean height above water seen from the side
self.proj_area_f = self.w_iceberg * self.h_f  # Projected area from the front
self.proj_area_l = self.l_iceberg * self.h_s # Projected area from the side
self.cx = 0.5
self.cy = 0.7
self.cn = 0.08
self.simulation_results = defaultdict(list)
def set_added_mass(self, surge_coeff, sway_coeff, yaw_coeff):
''' Sets the added mass in surge due to surge motion, sway due
to sway motion and yaw due to yaw motion according to given coeffs.
args:
surge_coeff (float): Added mass coefficient in surge direction due to surge motion
sway_coeff (float): Added mass coefficient in sway direction due to sway motion
yaw_coeff (float): Added mass coefficient in yaw direction due to yaw motion
returns:
x_du (float): Added mass in surge
y_dv (float): Added mass in sway
n_dr (float): Added mass in yaw
'''
x_du = self.mass * surge_coeff
y_dv = self.mass * sway_coeff
n_dr = self.i_z * yaw_coeff
return x_du, y_dv, n_dr
def get_wind_force(self):
''' This method calculates the forces due to the relative
wind speed, acting on the iceberg in surge, sway and yaw
direction.
:return: Wind force acting in surge, sway and yaw
'''
uw = self.wind_speed * np.cos(self.wind_dir - self.psi)
vw = self.wind_speed * np.sin(self.wind_dir - self.psi)
u_rw = uw - self.u
v_rw = vw - self.v
gamma_rw = -np.arctan2(v_rw, u_rw)
wind_rw2 = u_rw ** 2 + v_rw ** 2
c_x = -self.cx * np.cos(gamma_rw)
c_y = self.cy * np.sin(gamma_rw)
c_n = self.cn * np.sin(2 * gamma_rw)
tau_coeff = 0.5 * self.rho_a * wind_rw2
tau_u = tau_coeff * c_x * self.proj_area_f
tau_v = tau_coeff * c_y * self.proj_area_l
tau_n = tau_coeff * c_n * self.proj_area_l * self.l_iceberg
return np.array([tau_u, tau_v, tau_n])
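The wind-force law in `get_wind_force` can be exercised in isolation. In the stand-alone sketch below, the default areas, length and coefficients are illustrative values of the same kind set in `__init__`. Wind aligned with the surge axis acting on a stationary body should produce a pure surge force, with no sway force or yaw moment:

```python
import numpy as np

def wind_force(wind_speed, wind_dir, psi, u, v,
               rho_a=1.2, area_f=160.0, area_l=640.0, loa=80.0,
               cx=0.5, cy=0.7, cn=0.08):
    # Relative wind components in the body frame
    uw = wind_speed * np.cos(wind_dir - psi)
    vw = wind_speed * np.sin(wind_dir - psi)
    u_rw, v_rw = uw - u, vw - v
    gamma_rw = -np.arctan2(v_rw, u_rw)  # relative wind angle
    q = 0.5 * rho_a * (u_rw ** 2 + v_rw ** 2)  # dynamic pressure
    tau_u = q * (-cx * np.cos(gamma_rw)) * area_f
    tau_v = q * (cy * np.sin(gamma_rw)) * area_l
    tau_n = q * (cn * np.sin(2 * gamma_rw)) * area_l * loa
    return np.array([tau_u, tau_v, tau_n])

# Wind aligned with the surge axis on a stationary body: no sway force, no yaw moment
tau = wind_force(wind_speed=10.0, wind_dir=np.pi, psi=0.0, u=0.0, v=0.0)
```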
def update_state_vector(self):
''' Update the state vector according to the individual state values
'''
return np.array([self.n, self.e, self.psi, self.u, self.v, self.r])
def set_north_pos(self, val):
''' Set the north position of the iceberg and update the state vector
'''
self.n = val
self.x = self.update_state_vector()
def set_east_pos(self, val):
''' Set the east position of the iceberg and update the state vector
'''
self.e = val
self.x = self.update_state_vector()
def set_yaw_angle(self, val):
''' Set the yaw angle of the iceberg and update the state vector
'''
self.psi = val
self.x = self.update_state_vector()
def set_surge_speed(self, val):
''' Set the surge speed of the iceberg and update the state vector
'''
self.u = val
self.x = self.update_state_vector()
def set_sway_speed(self, val):
''' Set the sway speed of the iceberg and update the state vector
'''
self.v = val
self.x = self.update_state_vector()
def set_yaw_rate(self, val):
''' Set the yaw rate of the iceberg and update the state vector
'''
self.r = val
self.x = self.update_state_vector()
def three_dof_kinematics(self):
''' Updates the time differentials of the north position, east
position and yaw angle. Should be called in the simulation
loop before the integration step.
'''
vel = np.array([self.u, self.v, self.r])
dx = np.dot(self.rotation(), vel)
self.d_n = dx[0]
self.d_e = dx[1]
self.d_psi = dx[2]
def rotation(self):
''' Specifies the rotation matrix for rotations about the z-axis, such that
"North-east-down-fixed coordinates" = rotation x "body-fixed coordinates".
'''
return np.array([[np.cos(self.psi), -np.sin(self.psi), 0],
[np.sin(self.psi), np.cos(self.psi), 0],
[0, 0, 1]])
def three_dof_kinetics(self):
''' Calculates accelerations of the iceberg, as a function
of wind forces and the states in the previous time-step.
'''
# System matrices (rigid-body terms plus zero-frequency added mass)
M_rb = np.array([[self.mass + self.x_du, 0, 0],
[0, self.mass + self.y_dv, self.mass * self.x_g],
[0, self.mass * self.x_g, self.i_z + self.n_dr]])
C_rb = np.array([[0, 0, -self.mass * (self.x_g * self.r + self.v)],
[0, 0, self.mass * self.u],
[self.mass * (self.x_g * self.r + self.v), -self.mass * self.u, 0]])
D = np.array([[self.mass / self.t_surge, 0, 0],
[0, self.mass / self.t_sway, 0],
[0, 0, self.i_z / self.t_yaw]])
D2 = np.array([[self.ku * self.u, 0, 0],
[0, self.kv * self.v, 0],
[0, 0, self.kr * self.r]])
F_wind = self.get_wind_force()
F_waves = np.array([0, 0, 0])  # Wave forces are neglected in this model
# assembling state vector
vel = np.array([self.u, self.v, self.r])
# Transforming current velocity to ship frame
v_c = np.dot(np.linalg.inv(self.rotation()), self.vel_c)
u_r = self.u - v_c[0]
v_r = self.v - v_c[1]
C_a = np.array([[0, 0, self.y_dv * v_r],
[0, 0, -self.x_du * u_r],
[-self.y_dv * v_r, self.x_du * u_r, 0]])
# Kinetic equation
M_inv = np.linalg.inv(M_rb)
dx = np.dot(M_inv, -np.dot(C_rb, vel) - np.dot(C_a, vel - v_c) - np.dot(D + D2, vel - v_c)
+ F_wind)
self.d_u = dx[0]
self.d_v = dx[1]
self.d_r = dx[2]
def update_differentials(self):
''' This method should be called in the simulation loop. It will
update the full differential equation of the iceberg.
'''
self.three_dof_kinematics()
self.three_dof_kinetics()
def integrate_differentials(self):
''' Integrates the differential equation one time step ahead using
the Euler integration method, with parameters set in the
"EulerInt" instance stored in self.int.
'''
self.set_north_pos(self.int.integrate(self.n, self.d_n))
self.set_east_pos(self.int.integrate(self.e, self.d_e))
self.set_yaw_angle(self.int.integrate(self.psi, self.d_psi))
self.set_surge_speed(self.int.integrate(self.u, self.d_u))
self.set_sway_speed(self.int.integrate(self.v, self.d_v))
self.set_yaw_rate(self.int.integrate(self.r, self.d_r))
def store_states(self):
''' Appends the current value of each state to an array. This
is convenient when plotting. The method should be called within
the simulation loop each time step. Afterwards, an array
containing for example the north position for each time step
is obtained as ...states[0]
'''
self.states[0].append(self.n)
self.states[1].append(self.e)
self.states[2].append(self.psi)
self.states[3].append(self.u)
self.states[4].append(self.v)
self.states[5].append(self.r)
def iceberg_snap_shot(self):
''' This method is used to store a map-view snapshot of
the iceberg at the given north-east position and heading.
It uses the IcebergDraw-class. To plot a map view of the
n-th iceberg snapshot, use:
plot(iceberg_drawings[1][n], iceberg_drawings[0][n])
'''
x, y = self.drw.local_coords()
x_ned, y_ned = self.drw.rotate_coords(x, y, self.psi)
x_ned_trans, y_ned_trans = self.drw.translate_coords(x_ned, y_ned, self.n, self.e)
self.iceberg_drawings[0].append(x_ned_trans)
self.iceberg_drawings[1].append(y_ned_trans)
def store_simulation_data(self):
self.simulation_results['time [s]'].append(self.int.time)
self.simulation_results['north position [m]'].append(self.n)
self.simulation_results['east position [m]'].append(self.e)
self.simulation_results['yaw angle [deg]'].append(self.psi * 180 / np.pi)
self.simulation_results['forward speed [m/s]'].append(self.u)
self.simulation_results['sideways speed [m/s]'].append(self.v)
self.simulation_results['yaw rate [deg/sec]'].append(self.r * 180 / np.pi)
self.simulation_results['wind speed [m/sec]'].append(self.wind_speed)
self.simulation_results['wind direction [rad]'].append(self.wind_dir)
def restore_to_intial(self):
self.n = self.n_initial
self.e = self.e_initial
self.psi = self.psi_initial
self.u = self.u_initial
self.v = self.v_initial
self.r = self.r_initial
class DistanceSimulation:
"""his class is for simulate drift multiple times to get a distribution of
collision event and
time to collision,
closest point of approach,
impact point in case of collision,
when and where iceberg breach the exclusion zone,
when and where the iceberg breach zone 1
when and where the iceberg breach zone 2
when and where the iceberg beach zone 3"""
def __init__(self, round, iceberg_config: IcebergConfiguration, simulation_config: DriftSimulationConfiguration,
environment_config: EnvironmentConfiguration, z_config: ZonesConfiguration):
self.distance_results = defaultdict(list)
self.iceberg = IcebergDriftingModel1(iceberg_config, environment_config, simulation_config)
self.zones_config = Zones(z_config, iceberg_config)
self.cpa_point = np.empty(6).tolist()
self.col_point = np.empty(3).tolist()
self.exc_point = np.empty(3).tolist()
self.zone1_point = np.empty(3).tolist()
self.zone2_point = np.empty(3).tolist()
self.zone3_point = np.empty(3).tolist()
self.breach_event = np.empty(5).tolist()
self.round_results = defaultdict(list)
# Number of simulation rounds
self.n = round
self.dis_lists = np.empty(self.n, dtype=object)
self.t_lists = np.empty(self.n, dtype=object)
self.d_n_lists = np.empty(self.n, dtype=object)
self.d_e_lists = np.empty(self.n, dtype=object)
self.d_exc_lists = np.empty(self.n, dtype=object)
self.d_zone1_lists = np.empty(self.n, dtype=object)
self.d_zone2_lists = np.empty(self.n, dtype=object)
self.d_zone3_lists = np.empty(self.n, dtype=object)
self.n_lists = np.empty(self.n, dtype=object)
self.e_lists = np.empty(self.n, dtype=object)
self.windS_lists = np.empty(self.n, dtype=object)
self.windD_lists = np.empty(self.n, dtype=object)
self.u_lists = np.empty(self.n, dtype=object)
self.v_lists = np.empty(self.n, dtype=object)
self.yaw_lists = np.empty(self.n, dtype=object)
self.yaw_angle_lists = np.empty(self.n, dtype=object)
def simulation(self):
max_wind_speed = 25
self.distance_results.clear()
self.iceberg.simulation_results.clear()
self.iceberg.restore_to_intial()
self.iceberg.int.time = 0
continue_simulation = True
while self.iceberg.int.time <= self.iceberg.int.sim_time and continue_simulation:
#self.iceberg.wind_speed = random.random() * max_wind_speed
self.iceberg.update_differentials()
self.iceberg.integrate_differentials()
self.iceberg.store_simulation_data()
col = self.zones_config.colli_event(self.iceberg.n, self.iceberg.e)
d = self.zones_config.distance(self.iceberg.n, self.iceberg.e)
d_to_exc = d - self.zones_config.r0 - self.zones_config.collimargin
d_to_zone1 = d - self.zones_config.r1 - self.zones_config.collimargin
d_to_zone2 = d - self.zones_config.r2 - self.zones_config.collimargin
d_to_zone3 = d - self.zones_config.r3 - self.zones_config.collimargin
dn = self.zones_config.d_to_north(self.iceberg.n)
de = self.zones_config.d_to_east(self.iceberg.e)
t = self.iceberg.int.time
self.distance_results['Time [s]'].append(t)
self.distance_results['Distance between iceberg and structure [m]'].append(d)
self.distance_results['Distance between iceberg and structure in north direction [m]'].append(dn)
self.distance_results['Distance between iceberg and structure in east direction [m]'].append(de)
self.distance_results['Distance to exclusion zone'].append(d_to_exc)
self.distance_results['Distance to zone 1'].append(d_to_zone1)
self.distance_results['Distance to zone 2'].append(d_to_zone2)
self.distance_results['Distance to zone 3'].append(d_to_zone3)
if col == 1:
continue_simulation = False
print('Collision occurred at: ', self.iceberg.int.time, 's')
print("Closest point to Structure:", self.zones_config.distance(self.iceberg.n, self.iceberg.e), 'm',
"CPA:", self.iceberg.n, self.iceberg.e)
elif self.iceberg.n > self.zones_config.r3 + self.zones_config.n:
continue_simulation = False
self.iceberg.int.next_time()
def cpa(self):
"""distance_list = self.distance_results['Distance between iceberg and structure [m]']
time_list = self.distance_results['Time [s]']
d_north_list = distance_results['Distance between iceberg and structure in north direction [m]']
d_east_list = distance_results['Distance between iceberg and structure in east direction [m]']"""
cpa_d = min(self.distance_results['Distance between iceberg and structure [m]'])
cpa_idx = self.distance_results['Distance between iceberg and structure [m]'].index(cpa_d)
cpa_time = self.distance_results['Time [s]'][cpa_idx]
cpa_loc = np.empty(2).tolist()
cpa_loc[0] = self.iceberg.simulation_results['north position [m]'][cpa_idx]
cpa_loc[1] = self.iceberg.simulation_results['east position [m]'][cpa_idx]
cpazone = self.zones_config.cpa_zone(cpa_d)
cpa_speed_u = self.iceberg.simulation_results['forward speed [m/s]'][cpa_idx]
cpa_speed_v = self.iceberg.simulation_results['sideways speed [m/s]'][cpa_idx]
self.cpa_point = [cpa_d, cpazone, cpa_time, cpa_loc, cpa_speed_u, cpa_speed_v]
if cpazone == -1:
col = 1
exc_breach = 1
zone1_breach = 1
zone2_breach = 1
zone3_breach = 1
self.col_point = [cpa_time, self.iceberg.simulation_results['north position [m]'][cpa_idx],
self.iceberg.simulation_results['east position [m]'][cpa_idx]]
d_to_exc = self.distance_results['Distance to exclusion zone']
exc_idx = list(map(lambda i: i <= 0, d_to_exc)).index(True)
self.exc_point = [self.iceberg.simulation_results['time [s]'][exc_idx],
self.iceberg.simulation_results['north position [m]'][exc_idx],
self.iceberg.simulation_results['east position [m]'][exc_idx]]
d_to_zone1 = self.distance_results['Distance to zone 1']
zone1_idx = list(map(lambda i: i <= 0, d_to_zone1)).index(True)
self.zone1_point = [self.iceberg.simulation_results['time [s]'][zone1_idx],
self.iceberg.simulation_results['north position [m]'][zone1_idx],
self.iceberg.simulation_results['east position [m]'][zone1_idx]]
d_to_zone2 = self.distance_results['Distance to zone 2']
zone2_idx = list(map(lambda i: i <= 0, d_to_zone2)).index(True)
self.zone2_point = [self.iceberg.simulation_results['time [s]'][zone2_idx],
self.iceberg.simulation_results['north position [m]'][zone2_idx],
self.iceberg.simulation_results['east position [m]'][zone2_idx]]
d_to_zone3 = self.distance_results['Distance to zone 3']
zone3_idx = list(map(lambda i: i <= 0, d_to_zone3)).index(True)
self.zone3_point = [self.iceberg.simulation_results['time [s]'][zone3_idx],
self.iceberg.simulation_results['north position [m]'][zone3_idx],
self.iceberg.simulation_results['east position [m]'][zone3_idx]]
elif cpazone == 0:
col = 0
exc_breach = 1
zone1_breach = 1
zone2_breach = 1
zone3_breach = 1
d_to_exc = self.distance_results['Distance to exclusion zone']
exc_idx = list(map(lambda i: i <= 0, d_to_exc)).index(True)
self.exc_point = [self.iceberg.simulation_results['time [s]'][exc_idx],
self.iceberg.simulation_results['north position [m]'][exc_idx],
self.iceberg.simulation_results['east position [m]'][exc_idx]]
d_to_zone1 = self.distance_results['Distance to zone 1']
zone1_idx = list(map(lambda i: i <= 0, d_to_zone1)).index(True)
self.zone1_point = [self.iceberg.simulation_results['time [s]'][zone1_idx],
self.iceberg.simulation_results['north position [m]'][zone1_idx],
self.iceberg.simulation_results['east position [m]'][zone1_idx]]
d_to_zone2 = self.distance_results['Distance to zone 2']
zone2_idx = list(map(lambda i: i <= 0, d_to_zone2)).index(True)
self.zone2_point = [self.iceberg.simulation_results['time [s]'][zone2_idx],
self.iceberg.simulation_results['north position [m]'][zone2_idx],
self.iceberg.simulation_results['east position [m]'][zone2_idx]]
d_to_zone3 = self.distance_results['Distance to zone 3']
zone3_idx = list(map(lambda i: i <= 0, d_to_zone3)).index(True)
self.zone3_point = [self.iceberg.simulation_results['time [s]'][zone3_idx],
self.iceberg.simulation_results['north position [m]'][zone3_idx],
self.iceberg.simulation_results['east position [m]'][zone3_idx]]
elif cpazone == 1:
col = 0
exc_breach = 0
zone1_breach = 1
zone2_breach = 1
zone3_breach = 1
d_to_zone1 = self.distance_results['Distance to zone 1']
zone1_idx = list(map(lambda i: i <= 0, d_to_zone1)).index(True)
self.zone1_point = [self.iceberg.simulation_results['time [s]'][zone1_idx],
self.iceberg.simulation_results['north position [m]'][zone1_idx],
self.iceberg.simulation_results['east position [m]'][zone1_idx]]
d_to_zone2 = self.distance_results['Distance to zone 2']
zone2_idx = list(map(lambda i: i <= 0, d_to_zone2)).index(True)
self.zone2_point = [self.iceberg.simulation_results['time [s]'][zone2_idx],
self.iceberg.simulation_results['north position [m]'][zone2_idx],
self.iceberg.simulation_results['east position [m]'][zone2_idx]]
d_to_zone3 = self.distance_results['Distance to zone 3']
zone3_idx = list(map(lambda i: i <= 0, d_to_zone3)).index(True)
self.zone3_point = [self.iceberg.simulation_results['time [s]'][zone3_idx],
self.iceberg.simulation_results['north position [m]'][zone3_idx],
self.iceberg.simulation_results['east position [m]'][zone3_idx]]
elif cpazone == 2:
col = 0
exc_breach = 0
zone1_breach = 0
zone2_breach = 1
zone3_breach = 1
d_to_zone2 = self.distance_results['Distance to zone 2']
zone2_idx = list(map(lambda i: i <= 0, d_to_zone2)).index(True)
self.zone2_point = [self.iceberg.simulation_results['time [s]'][zone2_idx],
self.iceberg.simulation_results['north position [m]'][zone2_idx],
self.iceberg.simulation_results['east position [m]'][zone2_idx]]
d_to_zone3 = self.distance_results['Distance to zone 3']
zone3_idx = list(map(lambda i: i <= 0, d_to_zone3)).index(True)
self.zone3_point = [self.iceberg.simulation_results['time [s]'][zone3_idx],
self.iceberg.simulation_results['north position [m]'][zone3_idx],
self.iceberg.simulation_results['east position [m]'][zone3_idx]]
elif cpazone == 3:
col = 0
exc_breach = 0
zone1_breach = 0
zone2_breach = 0
zone3_breach = 1
d_to_zone3 = self.distance_results['Distance to zone 3']
zone3_idx = list(map(lambda i: i <= 0, d_to_zone3)).index(True)
self.zone3_point = [self.iceberg.simulation_results['time [s]'][zone3_idx],
self.iceberg.simulation_results['north position [m]'][zone3_idx],
self.iceberg.simulation_results['east position [m]'][zone3_idx]]
else:
col = 0
exc_breach = 0
zone1_breach = 0
zone2_breach = 0
zone3_breach = 0
self.breach_event = [col, exc_breach, zone1_breach, zone2_breach, zone3_breach]
def multsim(self):
"""this function is to conduct multiple simulations and record data fro each simulation"""
n = 1
self.round_results.clear()
while n <= self.n:
self.simulation()
self.cpa()
self.round_results['simulation round'].append(n)
self.round_results['distance between the closest point of approach (cpa) and the structure'].append(
self.cpa_point[0])
self.round_results['zone of closest point of approach (cpa)'].append(self.cpa_point[1])
self.round_results['time when iceberg reaches the closest point of approach (cpa)'].append(
self.cpa_point[2])
self.round_results['location of the closest point of approach (cpa)'].append(self.cpa_point[3])
self.round_results['breach event'].append(self.breach_event)
self.round_results['where the iceberg breach the collision zone'].append(self.col_point[1:3])
self.round_results['when the iceberg breach the collision zone'].append(self.col_point[0])
self.round_results['where the iceberg breach the exclusion zone'].append(self.exc_point[1:3])
self.round_results['when the iceberg breach the exclusion zone'].append(self.exc_point[0])
self.round_results['where the iceberg breach the zone 1'].append(self.zone1_point[1:3])
self.round_results['when the iceberg breach the zone 1'].append(self.zone1_point[0])
self.round_results['where the iceberg breach the zone 2'].append(self.zone2_point[1:3])
self.round_results['when the iceberg breach the zone 2'].append(self.zone2_point[0])
self.round_results['where the iceberg breach the zone 3'].append(self.zone3_point[1:3])
self.round_results['when the iceberg breach the zone 3'].append(self.zone3_point[0])
self.round_results['forward speed when the iceberg approach the structure'].append(self.cpa_point[4])
self.round_results['sideways speed when the iceberg approach the structure'].append(self.cpa_point[5])
self.t_lists[n - 1] = self.distance_results['Time [s]']
self.dis_lists[n - 1] = self.distance_results['Distance between iceberg and structure [m]']
self.d_n_lists[n - 1] = self.distance_results[
'Distance between iceberg and structure in north direction [m]']
self.d_e_lists[n - 1] = self.distance_results[
'Distance between iceberg and structure in east direction [m]']
self.d_exc_lists[n - 1] = self.distance_results['Distance to exclusion zone']
self.d_zone1_lists[n - 1] = self.distance_results['Distance to zone 1']
self.d_zone2_lists[n - 1] = self.distance_results['Distance to zone 2']
self.d_zone3_lists[n - 1] = self.distance_results['Distance to zone 3']
self.n_lists[n - 1] = self.iceberg.simulation_results['north position [m]']
self.e_lists[n - 1] = self.iceberg.simulation_results['east position [m]']
self.windS_lists[n - 1] = self.iceberg.simulation_results['wind speed [m/sec]']
self.windD_lists[n - 1] = self.iceberg.simulation_results['wind direction [radius]']
self.u_lists[n - 1] = self.iceberg.simulation_results['forward speed [m/s]']
self.v_lists[n - 1] = self.iceberg.simulation_results['sideways speed [m/s]']
self.yaw_lists[n - 1] = self.iceberg.simulation_results['yaw rate [deg/sec]']
self.yaw_angle_lists[n - 1] = self.iceberg.simulation_results['yaw angle [deg]']
n += 1
def col_pro(self):
prob = self.round_results['location of the closest point of approach (cpa)'].count(-1) / self.n
return prob
def exc_pro(self):
prob = self.round_results['location of the closest point of approach (cpa)'].count(0) / self.n
return prob
def zone1_pro(self):
prob = self.round_results['location of the closest point of approach (cpa)'].count(1) / self.n
return prob
def zone2_pro(self):
prob = self.round_results['location of the closest point of approach (cpa)'].count(2) / self.n
return prob
def zone3_pro(self):
prob = self.round_results['location of the closest point of approach (cpa)'].count(3) / self.n
return prob
def outside_pro(self):
prob = 1 - self.round_results['location of the closest point of approach (cpa)'].count(4) / self.n
return prob
class Cost:
def __init__(self, multi_simulation: DistanceSimulation, ice_cost_config: IceCost,
env_config: EnvironmentConfiguration):
self.dsim = multi_simulation
self.tow_s_prob = 0.7
self.disconnect_s_prob = 0.98
self.icecost = ice_cost_config
self.env = env_config
def col_kinetics(self, col_velocity_2):
kinetics = 0.5 * self.dsim.iceberg.mass * col_velocity_2
return kinetics
def col_type(self, col_velocity_2):
if self.col_kinetics(col_velocity_2) >= self.icecost.Ki_lowerbound_severe:
col_type = "Severe Collision"
elif self.col_kinetics(col_velocity_2) >= self.icecost.Ki_lowerbound_medium:
col_type = "Medium Collision"
elif self.col_kinetics(col_velocity_2) >= self.icecost.Ki_lowerbound_light:
col_type = "Light Collision"
else:
col_type = "Collision can be ignored"
return col_type
def update_tow_success(self, tcpa):
if tcpa < self.icecost.towing_time_cost:
self.tow_s_prob = 0
elif tcpa >= 1.5 * self.icecost.towing_time_cost:
self.tow_s_prob = 0.8
def cost_cal(self, col_event, col_velocity_2):
col_type = self.col_type(col_velocity_2)
if col_type == "Light Collision":
col_cost = self.icecost.light_col_cost
elif col_type == "Medium Collision":
col_cost = self.icecost.medium_col_cost
elif col_type == "Severe Collision":
col_cost = self.icecost.severe_col_cost
else:
col_cost = 0
col_pro = col_event * np.array([1 - self.tow_s_prob, 1 - self.disconnect_s_prob, 1])
operation_cost = np.array([self.icecost.towing_cost, self.icecost.disconnect_cost, 0])
mean_cost = col_pro * col_cost + operation_cost
return mean_cost
def cost_msim(self):
n = self.dsim.n
m = 1
total_cost = np.zeros(3)
col_times = 0
while m <= n:
idx = m - 1
col_event = self.dsim.round_results['breach event'][idx][0]
tcpa = self.dsim.round_results['time when iceberg reaches the closest point of approach (cpa)'][idx]
col_velocity_2 = self.dsim.round_results['forward speed when the iceberg approach the structure'][
idx] ** 2 + \
self.dsim.round_results['sideways speed when the iceberg approach the structure'][idx] ** 2
self.update_tow_success(tcpa)
cost_single = self.cost_cal(col_event, col_velocity_2)
total_cost += cost_single
col_times += col_event
m += 1
average_cost = total_cost / n
return col_times, average_cost
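For reference, the expected-cost arithmetic in `cost_cal` can be exercised in isolation. This sketch rewrites the same bookkeeping with plain lists (no numpy) and made-up probabilities and costs; `mean_cost` here is a stand-alone illustration, not part of the simulation classes above:

```python
def mean_cost(col_event, tow_s_prob, disconnect_s_prob,
              col_cost, towing_cost, disconnect_cost):
    # Probability that each option (tow, disconnect, do nothing) still ends
    # in a collision; col_event is 0 or 1 from the breach-event record.
    col_pro = [col_event * (1 - tow_s_prob),
               col_event * (1 - disconnect_s_prob),
               col_event * 1.0]
    operation_cost = [towing_cost, disconnect_cost, 0.0]
    # Expected cost = P(collision) * collision cost + fixed operation cost.
    return [p * col_cost + c for p, c in zip(col_pro, operation_cost)]

# A certain collision (col_event=1) with 70% towing success and
# 98% disconnect success, all cost figures invented for illustration:
costs = mean_cost(1, 0.7, 0.98, 100.0, 5.0, 8.0)
# costs is approximately [35.0, 10.0, 100.0]
```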
| 45.390764 | 152 | 0.629857 | 17,918 | 127,775 | 4.264315 | 0.044592 | 0.029814 | 0.018964 | 0.020888 | 0.853628 | 0.828552 | 0.811119 | 0.798306 | 0.777196 | 0.753141 | 0 | 0.014987 | 0.278826 | 127,775 | 2,814 | 153 | 45.406894 | 0.814201 | 0.238803 | 0 | 0.67675 | 0 | 0 | 0.050178 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104982 | false | 0 | 0.004152 | 0 | 0.236655 | 0.003559 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6343322a4092f4a78d49f8d24b339d799f986f81 | 26 | py | Python | connectFourLab/game/__init__.py | yuriharrison/connect-four-lab | 8c90535df91a45e8976b368f3cea4558478abe64 | [
"MIT"
] | null | null | null | connectFourLab/game/__init__.py | yuriharrison/connect-four-lab | 8c90535df91a45e8976b368f3cea4558478abe64 | [
"MIT"
] | null | null | null | connectFourLab/game/__init__.py | yuriharrison/connect-four-lab | 8c90535df91a45e8976b368f3cea4558478abe64 | [
"MIT"
] | null | null | null | from .game import RunGame
| 13 | 25 | 0.807692 | 4 | 26 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
636c59041befa426bc7a8469477671983a887031 | 211 | py | Python | rest_fhir/mixins/vread.py | weynelucas/django-rest-fhir | 560a0aadd0cfa43b6dc58f995c86015f6eefb768 | [
"MIT"
] | 2 | 2021-05-07T12:16:27.000Z | 2021-12-16T20:45:36.000Z | rest_fhir/mixins/vread.py | weynelucas/django-rest-fhir | 560a0aadd0cfa43b6dc58f995c86015f6eefb768 | [
"MIT"
] | 3 | 2021-05-10T19:40:33.000Z | 2021-06-27T14:24:47.000Z | rest_fhir/mixins/vread.py | weynelucas/django-rest-fhir | 560a0aadd0cfa43b6dc58f995c86015f6eefb768 | [
"MIT"
] | 1 | 2021-08-09T22:00:22.000Z | 2021-08-09T22:00:22.000Z | from .conditional_read import ConditionalReadMixin
class VReadResourceMixin(ConditionalReadMixin):
def vread(self, request, *args, **kwargs):
return self.conditional_read(request, *args, **kwargs)
| 30.142857 | 62 | 0.763033 | 21 | 211 | 7.571429 | 0.666667 | 0.188679 | 0.213836 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137441 | 211 | 6 | 63 | 35.166667 | 0.873626 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
63becbd4a16bc4710be0af8db94244112b0ef416 | 119 | py | Python | server/blueprints/store/controllers/__init__.py | Soopro/totoro | 6be1af50496340ded9879a6450c8208ac9f97e72 | [
"MIT"
] | null | null | null | server/blueprints/store/controllers/__init__.py | Soopro/totoro | 6be1af50496340ded9879a6450c8208ac9f97e72 | [
"MIT"
] | null | null | null | server/blueprints/store/controllers/__init__.py | Soopro/totoro | 6be1af50496340ded9879a6450c8208ac9f97e72 | [
"MIT"
] | 1 | 2019-10-31T06:11:41.000Z | 2019-10-31T06:11:41.000Z | # coding=utf-8
from __future__ import absolute_import
from .base import *
from .book import *
from .category import *
| 17 | 38 | 0.764706 | 17 | 119 | 5.058824 | 0.588235 | 0.348837 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01 | 0.159664 | 119 | 6 | 39 | 19.833333 | 0.85 | 0.10084 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
892b6e246a2db52a0fa882db7e7d4953aee7eb29 | 3,088 | py | Python | BAAlgorithmUtils/AVLTreeUtilTest.py | BenArvin/BAAlgorithmUtils | 3af05cdaf70081e114ed1ca1713799cc0f6b861b | [
"MIT"
] | 7 | 2018-12-29T09:04:48.000Z | 2022-02-22T02:38:27.000Z | BAAlgorithmUtils/AVLTreeUtilTest.py | BenArvin/BAAlgorithmUtils | 3af05cdaf70081e114ed1ca1713799cc0f6b861b | [
"MIT"
] | null | null | null | BAAlgorithmUtils/AVLTreeUtilTest.py | BenArvin/BAAlgorithmUtils | 3af05cdaf70081e114ed1ca1713799cc0f6b861b | [
"MIT"
] | 2 | 2019-01-07T08:03:38.000Z | 2022-02-22T02:38:29.000Z | #!/usr/bin/python
# -*- coding: UTF-8 -*-
import sys, os
sys.path.append('../')
from BAAlgorithmUtils.AVLTreeUtil import AVLTree
def start1():
    print('\n********************************')
    trainSamples = [
        {'key': 3, 'content': '3-1'},
        {'key': 2, 'content': '2-1'},
        {'key': 1, 'content': '1-1'},
        {'key': 4, 'content': '4-1'},
        {'key': 5, 'content': '5-1'},
        {'key': 6, 'content': '6-1'},
        {'key': 7, 'content': '7-1'},
        {'key': 16, 'content': '16-1'},
        {'key': 15, 'content': '15-1'},
        {'key': 14, 'content': '14-1'},
        {'key': 13, 'content': '13-1'},
        {'key': 12, 'content': '12-1'},
        {'key': 11, 'content': '11-1'},
        {'key': 10, 'content': '10-1'},
        {'key': 8, 'content': '8-1'},
        {'key': 9, 'content': '9-1'},
    ]
    trainsString = ''
    avlTree = AVLTree()
    for sample in trainSamples:
        trainsString = trainsString + ', ' + str(sample['key'])
        avlTree.set(sample['key'], sample['content'])
    print('Sample: ' + trainsString[2 : len(trainsString)])
    print('\nAVL Tree:')
    avlTree.fullPrint()
    print('********************************\n')
def start2():
    print('\n********************************')
    trainSamples = [
        {'key': 10, 'content': '10-1'},
        {'key': 8, 'content': '8-1'},
        {'key': 12, 'content': '12-1'},
        {'key': 7, 'content': '7-1'},
        {'key': 9, 'content': '9-1'},
        {'key': 11, 'content': '11-1'},
        {'key': 13, 'content': '13-1'},
        {'key': 6, 'content': '6-1'},
    ]
    trainsString = ''
    avlTree = AVLTree()
    for sample in trainSamples:
        trainsString = trainsString + ', ' + str(sample['key'])
        avlTree.set(sample['key'], sample['content'])
    print('Sample: ' + trainsString[2 : len(trainsString)])
    print('\nAVL Tree:')
    avlTree.fullPrint()
    print('\nAfter delete 8:')
    avlTree.delete(8)
    avlTree.fullPrint()
    print('\nAfter delete 10:')
    avlTree.delete(10)
    avlTree.fullPrint()
    print('\nAfter add 14:')
    avlTree.set(14, '14-1')
    avlTree.fullPrint()
    print('********************************\n')
def start3():
    print('\n********************************')
    trainSamples = [
        {'key': 10, 'content': '10-1'},
        {'key': 8, 'content': '8-1'},
        {'key': 12, 'content': '12-1'},
        {'key': 7, 'content': '7-1'},
        {'key': 9, 'content': '9-1'},
        {'key': 11, 'content': '11-1'},
        {'key': 13, 'content': '13-1'},
        {'key': 6, 'content': '6-1'},
    ]
    trainsString = ''
    avlTree = AVLTree()
    for sample in trainSamples:
        trainsString = trainsString + ', ' + str(sample['key'])
        avlTree.set(sample['key'], sample['content'])
    print('Sample: ' + trainsString[2 : len(trainsString)])
    print('\nAVL Tree:')
    avlTree.fullPrint()
    print('\nAfter delete 10:')
    avlTree.delete(10)
    avlTree.fullPrint()
    print('********************************\n')


if __name__ == '__main__':
    start1()
    start2()
start3() | 31.510204 | 63 | 0.460168 | 329 | 3,088 | 4.294833 | 0.167173 | 0.082095 | 0.104034 | 0.076433 | 0.784855 | 0.764331 | 0.748762 | 0.704176 | 0.704176 | 0.704176 | 0 | 0.066005 | 0.249352 | 3,088 | 98 | 64 | 31.510204 | 0.543572 | 0.012306 | 0 | 0.722222 | 0 | 0 | 0.268941 | 0.066907 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.022222 | 0 | 0.055556 | 0.177778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
895ef7b360faac4d33955e2c67b68485d3e0721c | 41 | py | Python | src/astrildvisual/rays/__init__.py | Christovis/wys-ars | bb15f2d392842f9b32de12b5db5c86079bc97105 | [
"MIT"
] | 3 | 2021-07-27T14:45:58.000Z | 2022-01-31T21:09:46.000Z | src/astrildvisual/rays/__init__.py | Christovis/wys-ars | bb15f2d392842f9b32de12b5db5c86079bc97105 | [
"MIT"
] | 1 | 2021-11-03T10:47:45.000Z | 2021-11-03T10:47:45.000Z | src/astrildvisual/rays/__init__.py | Christovis/wys-ars | bb15f2d392842f9b32de12b5db5c86079bc97105 | [
"MIT"
] | 1 | 2021-11-03T10:17:34.000Z | 2021-11-03T10:17:34.000Z | from .visuals import maps_with_vel_field
| 20.5 | 40 | 0.878049 | 7 | 41 | 4.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8963c43972e009e28a784b2a9b0849ec01e7ef7b | 54 | py | Python | util/encode_util.py | lzpsgh/AscTrio | f969beece5dc93d29063da03793521bc54b814dd | [
"MIT"
] | 5 | 2021-07-21T06:50:51.000Z | 2022-03-31T04:18:28.000Z | util/encode_util.py | lzpsgh/AscTrio | f969beece5dc93d29063da03793521bc54b814dd | [
"MIT"
] | null | null | null | util/encode_util.py | lzpsgh/AscTrio | f969beece5dc93d29063da03793521bc54b814dd | [
"MIT"
] | 1 | 2022-03-28T01:50:03.000Z | 2022-03-28T01:50:03.000Z | # coding : utf-8
# @Time : 2021/4/30 11:47 PM
| 18 | 32 | 0.5 | 9 | 54 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0.333333 | 54 | 2 | 33 | 27 | 0.416667 | 0.907407 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
896a39c5791384de17dfce2b48058275fad0de3f | 1,340 | py | Python | 2_convnets/1_mnist/model_zoo.py | rcassani/learning-deep | 577447571936a7d6039195f216a90d5a9aed3fb7 | [
"MIT"
] | 1 | 2019-12-07T11:02:24.000Z | 2019-12-07T11:02:24.000Z | 2_convnets/1_mnist/model_zoo.py | rcassani/learning-deep | 577447571936a7d6039195f216a90d5a9aed3fb7 | [
"MIT"
] | 1 | 2020-04-24T01:21:41.000Z | 2020-06-09T19:59:40.000Z | 2_convnets/1_mnist/model_zoo.py | rcassani/learning-deep | 577447571936a7d6039195f216a90d5a9aed3fb7 | [
"MIT"
] | 1 | 2019-05-25T21:47:12.000Z | 2019-05-25T21:47:12.000Z | import torch.nn as nn
class small_cnn(nn.Module):
    # Model
    def __init__(self):
        super(small_cnn, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 10, kernel_size=3, padding=1),
            nn.ReLU(True),
        )
        self.classifier = nn.Sequential(
            nn.Linear(10*28*28, 100),
            nn.ReLU(True),
            nn.Linear(100, 10),
            nn.Softmax()
        )

    def forward(self, x):
        x = x.view([x.size(0),1,28,28])
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x


class medium_cnn(nn.Module):
    # Model
    def __init__(self):
        super(medium_cnn, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 10, kernel_size=3, padding=1),
            nn.ReLU(True),
            nn.Conv2d(10, 20, kernel_size=3, padding=1),
            nn.ReLU(True)
        )
        self.classifier = nn.Sequential(
            nn.Linear(20*28*28, 100),
            nn.ReLU(True),
            nn.Linear(100, 10),
            nn.Softmax()
        )

    def forward(self, x):
        x = x.view([x.size(0),1,28,28])
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
return x | 27.346939 | 56 | 0.478358 | 175 | 1,340 | 3.531429 | 0.205714 | 0.02589 | 0.080906 | 0.045307 | 0.894822 | 0.894822 | 0.894822 | 0.894822 | 0.791262 | 0.791262 | 0 | 0.075721 | 0.379104 | 1,340 | 49 | 57 | 27.346939 | 0.667067 | 0.008209 | 0 | 0.682927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097561 | false | 0 | 0.02439 | 0 | 0.219512 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
982a5d868b8674d0f9ee5a397fa37ac39e444522 | 938 | py | Python | ml_models/randomwalk.py | annakoretchko/algo_trading | 9ca1b9307c4d477888e5f2e7f6d4f57a03ca3399 | [
"MIT"
] | 1 | 2022-01-12T14:49:52.000Z | 2022-01-12T14:49:52.000Z | ml_models/randomwalk.py | webclinic017/algo_trading-3 | 0ce3657dc7295ca6496f270f943f3e670ae199d2 | [
"MIT"
] | null | null | null | ml_models/randomwalk.py | webclinic017/algo_trading-3 | 0ce3657dc7295ca6496f270f943f3e670ae199d2 | [
"MIT"
] | 1 | 2021-09-10T17:50:44.000Z | 2021-09-10T17:50:44.000Z | In [26]: symbol = '.SPX'
In [27]: data = pd.DataFrame(raw[symbol])
In [28]: lags = 5
cols = []
for lag in range(1, lags + 1):
col = 'lag_{}'.format(lag) 1
data[col] = data[symbol].shift(lag) 2
cols.append(col) 3
In [29]: data.head(7)
Out[29]: .SPX lag_1 lag_2 lag_3 lag_4 lag_5
Date
2010-01-01 NaN NaN NaN NaN NaN NaN
2010-01-04 1132.99 NaN NaN NaN NaN NaN
2010-01-05 1136.52 1132.99 NaN NaN NaN NaN
2010-01-06 1137.14 1136.52 1132.99 NaN NaN NaN
2010-01-07 1141.69 1137.14 1136.52 1132.99 NaN NaN
2010-01-08 1144.98 1141.69 1137.14 1136.52 1132.99 NaN
2010-01-11 1146.98 1144.98 1141.69 1137.14 1136.52 1132.99
In [30]: data.dropna(inplace=True) | 40.782609 | 73 | 0.479744 | 148 | 938 | 3 | 0.358108 | 0.202703 | 0.202703 | 0.162162 | 0.466216 | 0.445946 | 0.412162 | 0.263514 | 0.202703 | 0.135135 | 0 | 0.375228 | 0.414712 | 938 | 23 | 74 | 40.782609 | 0.433515 | 0 | 0 | 0 | 0 | 0 | 0.01065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
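The lag construction in the `In [28]` cell above can be reproduced without pandas. A minimal stand-alone sketch (`make_lags` is illustrative only; the price values are sample inputs taken from the table above):

```python
def make_lags(series, lags):
    # Column 'lag_k' holds the series shifted down by k positions; the first
    # k entries have no predecessor, so pad with None (pandas uses NaN).
    cols = {}
    for lag in range(1, lags + 1):
        cols['lag_{}'.format(lag)] = [None] * lag + series[:-lag]
    return cols

prices = [1132.99, 1136.52, 1137.14, 1141.69]
lagged = make_lags(prices, 2)
# lagged['lag_1'] == [None, 1132.99, 1136.52, 1137.14]
# lagged['lag_2'] == [None, None, 1132.99, 1136.52]
```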
9831798c6a6fd25639f7f94cbdfb340bdefe8553 | 22,527 | py | Python | tests/likelihoodtest/likelihoodtest.py | plewis/phycas | 9f5a4d9b2342dab907d14a46eb91f92ad80a5605 | [
"MIT"
] | 3 | 2015-09-24T23:12:57.000Z | 2021-04-12T07:07:01.000Z | tests/likelihoodtest/likelihoodtest.py | plewis/phycas | 9f5a4d9b2342dab907d14a46eb91f92ad80a5605 | [
"MIT"
] | null | null | null | tests/likelihoodtest/likelihoodtest.py | plewis/phycas | 9f5a4d9b2342dab907d14a46eb91f92ad80a5605 | [
"MIT"
] | 1 | 2015-11-23T10:35:43.000Z | 2015-11-23T10:35:43.000Z | # This example is designed to check the likelihood calculation under most models
# supported by Phycas. A data set is simulated under the most complex model, and
# analyzed under a spectrum of simpler models. The data set is saved as a nexus file
# complete with PAUP blocks that allow verification of Phycas's likelihood
# calculations by PAUP. A second sweep of models is done for a real data set
# (nyldna4.nex) and a paup command file is written to allow verification of Phycas'
# likelihood calculations.
from phycas import *
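The reporting pattern repeated throughout `tryAllModels` below -- compute a log-likelihood under some model settings, then print the difference from the reference value of the richest model -- can be factored into a small helper. This is an illustration only; `compare_to_reference` is not part of Phycas and the lnL values in the example are made up:

```python
def compare_to_reference(results):
    # results: list of (model_name, lnL) pairs; the first pair is the
    # reference model (here, the richest one, e.g. GTR+G+I).
    ref_name, ref_lnL = results[0]
    lines = ['%s lnL = %.5f (this is the reference lnL)' % (ref_name, ref_lnL)]
    for name, lnL in results[1:]:
        lines.append('%s lnL = %.5f (%.5f worse than the reference lnL)'
                     % (name, lnL, ref_lnL - lnL))
    return lines

# compare_to_reference([('GTR+G+I', -2000.0), ('GTR', -2050.5)])[1]
# == 'GTR lnL = -2050.50000 (50.50000 worse than the reference lnL)'
```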
def tryAllModels(fn):
    # Create string containing PAUP commands that will be added to the end of the
    # file fn to check results - we will add more commands to this string as we go
    paup_commands = []
    #paup_commands.append('\n[!\n***** HKY+G+I (estimate everything) *****]')
    paup_commands.append('\n[!\n***** GTR+G+I (estimate everything) *****]')
    #paup_commands.append('lset nst=2 variant=hky basefreq=estimate tratio=estimate rates=gamma shape=estimate pinvar=estimate;')
    paup_commands.append('lset nst=6 basefreq=estimate rmatrix=estimate rates=gamma shape=estimate pinvar=estimate;')
    paup_commands.append('lscores 1 / userbrlen;')
    print
    print '************* Testing GTRModel *******************'

    # Compute likelihood using the GTR+G+I model
    print '\nGTR+G+I model'
    model.type = 'gtr'
    model.relrates = [1.8, 4.0, 1.5, 1.2, 5.0, 1.0]
    model.state_freqs = [0.1, 0.2, 0.3, 0.4]
    model.num_rates = 4
    model.gamma_shape = 1.2
    model.pinvar_model = True
    model.pinvar = 0.3
    lnL = like()
    ref_lnL = lnL
    print 'lnL = %.5f (this is the reference lnL)' % (lnL)
    paup_commands.append('\n[!\n***** GTR+G+I (using GTRModel) *****]')
    paup_commands.append('lset nst=6 basefreq=(0.1 0.2 0.3) rmatrix=(1.8 4.0 1.5 1.2 5.0) rates=gamma shape=1.2 pinvar=0.3;')
    paup_commands.append('lscores 1 / userbrlen;')
    paup_commands.append('[!Phycas GTR+G+I lnL = %.5f]' % lnL)

    # Compute likelihood using the GTR+I model
    print '\nGTR+I model'
    model.type = 'gtr'
    model.relrates = [1.8, 4.0, 1.5, 1.2, 5.0, 1.0]
    model.state_freqs = [0.1, 0.2, 0.3, 0.4]
    model.num_rates = 1
    model.pinvar_model = True
    model.pinvar = 0.3
    lnL = like()
    #ref_lnL = lnL
    print 'lnL = %.5f (%.5f worse than the reference lnL)' % (lnL, ref_lnL - lnL)
    paup_commands.append('\n[!\n***** GTR+I (using GTRModel) *****]')
    paup_commands.append('lset nst=6 basefreq=(0.1 0.2 0.3) rmatrix=(1.8 4.0 1.5 1.2 5.0) rates=equal pinvar=0.3;')
    paup_commands.append('lscores 1 / userbrlen;')
    paup_commands.append('[!Phycas GTR+I lnL = %.5f]' % lnL)

    # Compute likelihood using the GTR+G model
    print '\nGTR+G model'
    model.type = 'gtr'
    model.relrates = [1.8, 4.0, 1.5, 1.2, 5.0, 1.0]
    model.state_freqs = [0.1, 0.2, 0.3, 0.4]
    model.num_rates = 4
    model.gamma_shape = 1.2
    model.pinvar_model = False
    lnL = like()
    print 'lnL = %.5f (%.5f worse than the reference lnL)' % (lnL, ref_lnL - lnL)
    paup_commands.append('\n[!\n***** GTR+G (using GTRModel) *****]')
    paup_commands.append('lset nst=6 basefreq=(0.1 0.2 0.3) rmatrix=(1.8 4.0 1.5 1.2 5.0) rates=gamma shape=1.2 pinvar=0.0;')
    paup_commands.append('lscores 1 / userbrlen;')
    paup_commands.append('[!Phycas GTR+G lnL = %.5f]' % lnL)

    # ************* temporary below here **************
    #print '\nGTR+psr model'
    #phycas.model = Likelihood.GTRModel()
    #phycas.model.setRelRates([1.8, 4.0, 1.5, 1.2, 5.0, 1.0])
    #phycas.model.setNucleotideFreqs(0.1, 0.2, 0.3, 0.4)
    #phycas.model.setNGammaRates(4)
    #phycas.model.setShape(1.2)
    #phycas.model.setNotPinvarModel()
    #phycas.likelihood.usePatternSpecificRates()
    #phycas.likelihood.replaceModel(phycas.model)
    #phycas.likelihood.prepareForLikelihood(phycas.tree)
    #lnL = phycas.likelihood.calcLnL(phycas.tree)
    #print 'lnL = %.5f (%.5f worse than the reference lnL)' % (lnL, ref_lnL - lnL)
    #paup_commands.append('\n[!\n***** GTR+psr (actually, using GTR+G since no way to do psr in PAUP*) *****]')
    #paup_commands.append('lset nst=6 basefreq=(0.1 0.2 0.3) rmatrix=(1.8 4.0 1.5 1.2 5.0) rates=gamma shape=1.2 pinvar=0.0;')
    #paup_commands.append('lscores 1 / userbrlen;')
    #paup_commands.append('[!Phycas GTR+G lnL = %.5f]' % lnL)
    #phycas.likelihood.doNotUsePatternSpecificRates()
    # ************* temporary above here **************

    # Compute likelihood using the GTR model
    print '\nGTR model'
    model.type = 'gtr'
    model.relrates = [1.8, 4.0, 1.5, 1.2, 5.0, 1.0]
    model.state_freqs = [0.1, 0.2, 0.3, 0.4]
    model.num_rates = 1
    model.pinvar_model = False
    model.pinvar = 0.3
    lnL = like()
    print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
    paup_commands.append('\n[!\n***** GTR (using GTRModel) *****]')
    paup_commands.append('lset nst=6 basefreq=(0.1 0.2 0.3) rmatrix=(1.8 4.0 1.5 1.2 5.0) rates=equal pinvar=0.0;')
    paup_commands.append('lscores 1 / userbrlen;')
    paup_commands.append('[!Phycas GTR lnL = %.5f]' % lnL)

    print
    print '************* Testing HKYModel *******************'

    # Compute likelihood using the HKY+G+I model
    print '\nHKY+G+I model'
    model.type = 'hky'
    model.kappa = 4.0
    model.state_freqs = [0.1, 0.2, 0.3, 0.4]
    model.num_rates = 4
    model.gamma_shape = 1.2
    model.pinvar_model = True
    model.pinvar = 0.3
    lnL = like()
    #ref_lnL = lnL
    print 'lnL = %.5f (%.5f worse than the reference lnL)' % (lnL, ref_lnL - lnL)
    paup_commands.append('\n[!\n***** HKY+G+I (using HKYModel) *****]')
    paup_commands.append('lset nst=2 variant=hky basefreq=(0.1 0.2 0.3) tratio=1.8333333 rates=gamma shape=1.2 pinvar=0.3;')
    paup_commands.append('lscores 1 / userbrlen;')
    paup_commands.append('[!Phycas HKY+G+I lnL = %.5f]' % lnL)

    # Compute likelihood using the HKY+I model
    print '\nHKY+I model'
    model.type = 'hky'
    model.kappa = 4.0
    model.state_freqs = [0.1, 0.2, 0.3, 0.4]
    model.num_rates = 1
    #model.gamma_shape = 1.2
    model.pinvar_model = True
    model.pinvar = 0.3
    lnL = like()
    #ref_lnL = lnL
    print 'lnL = %.5f (%.5f worse than the reference lnL)' % (lnL, ref_lnL - lnL)
    paup_commands.append('\n[!\n***** HKY+I (using HKYModel) *****]')
    paup_commands.append('lset nst=2 variant=hky basefreq=(0.1 0.2 0.3) tratio=1.8333333 rates=equal pinvar=0.3;')
    paup_commands.append('lscores 1 / userbrlen;')
    paup_commands.append('[!Phycas HKY+I lnL = %.5f]' % lnL)

    # Compute likelihood using the HKY+G model
    print '\nHKY+G model'
    model.type = 'hky'
    model.kappa = 4.0
    model.state_freqs = [0.1, 0.2, 0.3, 0.4]
    model.num_rates = 4
    model.gamma_shape = 1.2
    model.pinvar_model = False
    #model.pinvar = 0.3
    lnL = like()
    print 'lnL = %.5f (%.5f worse than the reference lnL)' % (lnL, ref_lnL - lnL)
    paup_commands.append('\n[!\n***** HKY+G (using HKYModel) *****]')
    paup_commands.append('lset nst=2 variant=hky basefreq=(0.1 0.2 0.3) tratio=1.8333333 rates=gamma shape=1.2 pinvar=0.0;')
    paup_commands.append('lscores 1 / userbrlen;')
    paup_commands.append('[!Phycas HKY+G lnL = %.5f]' % lnL)

    # Compute likelihood using the HKY model
    print '\nHKY model'
    model.type = 'hky'
    model.kappa = 4.0
    model.state_freqs = [0.1, 0.2, 0.3, 0.4]
    model.num_rates = 1
    #model.gamma_shape = 1.2
    model.pinvar_model = False
    #model.pinvar = 0.3
    lnL = like()
    print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
    paup_commands.append('\n[!\n***** HKY (using HKYModel) *****]')
    paup_commands.append('lset nst=2 variant=hky basefreq=(0.1 0.2 0.3) tratio=1.8333333 rates=equal pinvar=0.0;')
    paup_commands.append('lscores 1 / userbrlen;')
    paup_commands.append('[!Phycas HKY lnL = %.5f]' % lnL)

    # Compute likelihood using the F81+G+I model
    print '\nF81+G+I model'
    model.type = 'hky'
    model.kappa = 1.0
    model.state_freqs = [0.1, 0.2, 0.3, 0.4]
    model.num_rates = 4
    model.gamma_shape = 1.2
    model.pinvar_model = True
    model.pinvar = 0.3
    lnL = like()
    print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
    paup_commands.append('[!\n***** F81+G+I (using HKYModel) *****]')
    paup_commands.append('lset nst=1 basefreq=(0.1 0.2 0.3) rates=gamma shape=1.2 pinvar=0.3;')
    paup_commands.append('lscores 1 / userbrlen;')
    paup_commands.append('[!Phycas F81+G+I lnL = %.5f]' % lnL)

    # Compute likelihood using the F81+I model
    print '\nF81+I model'
    model.type = 'hky'
    model.kappa = 1.0
    model.state_freqs = [0.1, 0.2, 0.3, 0.4]
    model.num_rates = 1
    #model.gamma_shape = 1.2
    model.pinvar_model = True
    model.pinvar = 0.3
    lnL = like()
    print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
    paup_commands.append('[!\n***** F81+I (using HKYModel) *****]')
    paup_commands.append('lset nst=1 basefreq=(0.1 0.2 0.3) rates=equal pinvar=0.3;')
    paup_commands.append('lscores 1 / userbrlen;')
    paup_commands.append('[!Phycas F81+I lnL = %.5f]' % lnL)

    # Compute likelihood using the F81+G model
    print '\nF81+G model'
    model.type = 'hky'
    model.kappa = 1.0
    model.state_freqs = [0.1, 0.2, 0.3, 0.4]
    model.num_rates = 4
    model.gamma_shape = 1.2
    model.pinvar_model = False
    #model.pinvar = 0.3
    lnL = like()
    print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
    paup_commands.append('[!\n***** F81+G (using HKYModel) *****]')
    paup_commands.append('lset nst=1 basefreq=(0.1 0.2 0.3) rates=gamma shape=1.2 pinvar=0.0;')
    paup_commands.append('lscores 1 / userbrlen;')
    paup_commands.append('[!Phycas F81+G lnL = %.5f]' % lnL)

    # Compute likelihood using the F81 model
    print '\nF81 model'
    model.type = 'hky'
    model.kappa = 1.0
    model.state_freqs = [0.1, 0.2, 0.3, 0.4]
    model.num_rates = 1
    #model.gamma_shape = 1.2
    model.pinvar_model = False
    #model.pinvar = 0.3
    lnL = like()
    print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
    paup_commands.append('[!\n***** F81 (using HKYModel) *****]')
    paup_commands.append('lset nst=1 basefreq=(0.1 0.2 0.3) rates=equal pinvar=0.0;')
    paup_commands.append('lscores 1 / userbrlen;')
    paup_commands.append('[!Phycas F81 lnL = %.5f]' % lnL)
# Compute likelihood using the K80+G+I model
print '\nK80+G+I model'
model.type = 'hky'
model.kappa = 4.0
model.state_freqs = [0.25, 0.25, 0.25, 0.25]
model.num_rates = 4
model.gamma_shape = 1.2
model.pinvar_model = True
model.pinvar = 0.3
lnL = like()
print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
paup_commands.append('[!\n***** K80+G+I (using HKYModel) *****]')
paup_commands.append('lset nst=2 basefreq=equal tratio=2.0 rates=gamma shape=1.2 pinvar=0.3;')
paup_commands.append('lscores 1 / userbrlen;')
paup_commands.append('[!Phycas K80+G+I lnL = %.5f]' % lnL)
# Compute likelihood using the K80+I model
print '\nK80+I model'
model.type = 'hky'
model.kappa = 4.0
model.state_freqs = [0.25, 0.25, 0.25, 0.25]
model.num_rates = 1
#model.gamma_shape = 1.2
model.pinvar_model = True
model.pinvar = 0.3
lnL = like()
print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
paup_commands.append('[!\n***** K80+I (using HKYModel) *****]')
paup_commands.append('lset nst=2 basefreq=equal tratio=2.0 rates=equal pinvar=0.3;')
paup_commands.append('lscores 1 / userbrlen;')
paup_commands.append('[!Phycas K80+I lnL = %.5f]' % lnL)
# Compute likelihood using the K80+G model
print '\nK80+G model'
model.type = 'hky'
model.kappa = 4.0
model.state_freqs = [0.25, 0.25, 0.25, 0.25]
model.num_rates = 4
model.gamma_shape = 1.2
model.pinvar_model = False
#model.pinvar = 0.3
lnL = like()
print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
paup_commands.append('[!\n***** K80+G (using HKYModel) *****]')
paup_commands.append('lset nst=2 basefreq=equal tratio=2.0 rates=gamma shape=1.2 pinvar=0.0;')
paup_commands.append('lscores 1 / userbrlen;')
paup_commands.append('[!Phycas K80+G lnL = %.5f]' % lnL)
# Compute likelihood using the K80 model
print '\nK80 model'
model.type = 'hky'
model.kappa = 4.0
model.state_freqs = [0.25, 0.25, 0.25, 0.25]
model.num_rates = 1
#model.gamma_shape = 1.2
model.pinvar_model = False
#model.pinvar = 0.3
lnL = like()
print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
paup_commands.append('[!\n***** K80 (using HKYModel) *****]')
paup_commands.append('lset nst=2 basefreq=equal tratio=2.0 rates=equal pinvar=0.0;')
paup_commands.append('lscores 1 / userbrlen;')
paup_commands.append('[!Phycas K80 lnL = %.5f]' % lnL)
# Compute likelihood using the JC+G+I model
print '\nJC+G+I model'
model.type = 'hky'
model.kappa = 1.0
model.state_freqs = [0.25, 0.25, 0.25, 0.25]
model.num_rates = 4
model.gamma_shape = 1.2
model.pinvar_model = True
model.pinvar = 0.3
lnL = like()
print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
paup_commands.append('[!\n***** JC+G+I (using HKYModel) *****]')
paup_commands.append('lset nst=1 basefreq=equal rates=gamma shape=1.2 pinvar=0.3;')
paup_commands.append('lscores 1 / userbrlen;')
paup_commands.append('[!Phycas JC+G+I lnL = %.5f]' % lnL)
# Compute likelihood using the JC+I model
print '\nJC+I model'
model.type = 'hky'
model.kappa = 1.0
model.state_freqs = [0.25, 0.25, 0.25, 0.25]
model.num_rates = 1
#model.gamma_shape = 1.2
model.pinvar_model = True
model.pinvar = 0.3
lnL = like()
print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
paup_commands.append('[!\n***** JC+I (using HKYModel) *****]')
paup_commands.append('lset nst=1 basefreq=equal rates=equal pinvar=0.3;')
paup_commands.append('lscores 1 / userbrlen;')
paup_commands.append('[!Phycas JC+I lnL = %.5f]' % lnL)
# Compute likelihood using the JC+G model
print '\nJC+G model'
model.type = 'hky'
model.kappa = 1.0
model.state_freqs = [0.25, 0.25, 0.25, 0.25]
model.num_rates = 4
model.gamma_shape = 1.2
model.pinvar_model = False
#model.pinvar = 0.3
lnL = like()
print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
paup_commands.append('[!\n***** JC+G (using HKYModel) *****]')
paup_commands.append('lset nst=1 basefreq=equal rates=gamma shape=1.2 pinvar=0.0;')
paup_commands.append('lscores 1 / userbrlen;')
paup_commands.append('[!Phycas JC+G lnL = %.5f]' % lnL)
# Compute likelihood using the JC model
print '\nJC model'
model.type = 'hky'
model.kappa = 1.0
model.state_freqs = [0.25, 0.25, 0.25, 0.25]
model.num_rates = 1
#model.gamma_shape = 1.2
model.pinvar_model = False
#model.pinvar = 0.3
lnL = like()
print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
paup_commands.append('[!\n***** JC (using HKYModel) *****]')
paup_commands.append('lset nst=1 basefreq=equal rates=equal pinvar=0.0;')
paup_commands.append('lscores 1 / userbrlen;')
paup_commands.append('[!Phycas JC lnL = %.5f]' % lnL)
print
print '************** Testing JCModel *******************'
# Compute likelihood using the JC+G+I model
print '\nJC+G+I model'
model.type = 'jc'
#model.kappa = 1.0
#model.state_freqs = [0.25, 0.25, 0.25, 0.25]
model.num_rates = 4
model.gamma_shape = 1.2
model.pinvar_model = True
model.pinvar = 0.3
lnL = like()
print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
paup_commands.append('[!\n***** JC+G+I (using JCModel) *****]')
paup_commands.append('lset nst=1 basefreq=equal rates=gamma shape=1.2 pinvar=0.3;')
paup_commands.append('lscores 1 / userbrlen;')
paup_commands.append('[!Phycas JC+G+I lnL = %.5f]' % lnL)
# Compute likelihood using the JC+I model
print '\nJC+I model'
model.type = 'jc'
#model.kappa = 1.0
#model.state_freqs = [0.25, 0.25, 0.25, 0.25]
model.num_rates = 1
#model.gamma_shape = 1.2
model.pinvar_model = True
model.pinvar = 0.3
lnL = like()
print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
paup_commands.append('[!\n***** JC+I (using JCModel) *****]')
paup_commands.append('lset nst=1 basefreq=equal rates=equal pinvar=0.3;')
paup_commands.append('lscores 1 / userbrlen;')
paup_commands.append('[!Phycas JC+I lnL = %.5f]' % lnL)
# Compute likelihood using the JC+G model
print '\nJC+G model'
model.type = 'jc'
#model.kappa = 1.0
#model.state_freqs = [0.25, 0.25, 0.25, 0.25]
model.num_rates = 4
model.gamma_shape = 1.2
model.pinvar_model = False
#model.pinvar = 0.3
lnL = like()
print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
paup_commands.append('[!\n***** JC+G (using JCModel) *****]')
paup_commands.append('lset nst=1 basefreq=equal rates=gamma shape=1.2 pinvar=0.0;')
paup_commands.append('lscores 1 / userbrlen;')
paup_commands.append('[!Phycas JC+G lnL = %.5f]' % lnL)
# Compute likelihood using the JC model
print '\nJC model'
model.type = 'jc'
#model.kappa = 1.0
#model.state_freqs = [0.25, 0.25, 0.25, 0.25]
model.num_rates = 1
#model.gamma_shape = 1.2
model.pinvar_model = False
#model.pinvar = 0.3
lnL = like()
print 'lnL = %.5f (%.5f worse than reference lnL)' % (lnL, ref_lnL - lnL)
paup_commands.append('[!\n***** JC (using JCModel) *****]')
paup_commands.append('lset nst=1 basefreq=equal rates=equal pinvar=0.0;')
paup_commands.append('lscores 1 / userbrlen;')
paup_commands.append('[!Phycas JC lnL = %.5f]' % lnL)
# Add a PAUP block to the file named fn to make it easy to check the results
f = open(fn, 'a')
f.write('\n')
f.write('\nbegin paup;')
f.write('\n set criterion=likelihood storebrlen;')
f.write('\nend;')
f.write('\n')
f.write('\nbegin trees;')
f.write('\n translate')
for i,nm in enumerate(blob.taxon_labels):
if nm.count(' ') > 0:
f.write("\n %d '%s'" % (i+1, nm))
else:
f.write("\n %d %s" % (i+1, nm))
if i < len(blob.taxon_labels) - 1:
f.write(',')
else:
f.write(';')
f.write('\n utree t = %s' % model_tree_str)
f.write('\nend;')
f.write('\n')
f.write('\nbegin paup;')
f.write('\nlog file=paup.log start replace;\n')
f.write('\n'.join(paup_commands))
f.write('\n\nlog stop;')
f.write('\nend;')
f.write('\n')
f.close()
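The per-model blocks above all repeat the same three steps: set the model parameters, compute the likelihood, and record the matching PAUP commands. As an illustrative refactor (not part of the original script), the same pattern can be driven from a table of settings; the attribute names mirror the ones used above, but the helper itself is hypothetical:

```python
# Illustrative refactor: drive the repeated set-parameters/compute/record
# blocks from a table of model settings instead of unrolled copies.
MODEL_TABLE = [
    # (label,  kappa, state_freqs,              num_rates, gamma_shape, pinvar)
    ('JC',     1.0,  [0.25, 0.25, 0.25, 0.25],  1, None, None),
    ('JC+G',   1.0,  [0.25, 0.25, 0.25, 0.25],  4, 1.2,  None),
    ('K80+I',  4.0,  [0.25, 0.25, 0.25, 0.25],  1, None, 0.3),
    ('F81',    1.0,  [0.1, 0.2, 0.3, 0.4],      1, None, None),
]

def configure_model(model, kappa, freqs, num_rates, shape, pinvar):
    """Apply one table row to a model object with the attributes used above."""
    model.type = 'hky'
    model.kappa = kappa
    model.state_freqs = freqs
    model.num_rates = num_rates
    if shape is not None:
        model.gamma_shape = shape
    model.pinvar_model = pinvar is not None
    model.pinvar = pinvar if pinvar is not None else 0.0
```

Each iteration would then call like(), print the lnL comparison line, and append the matching PAUP commands, exactly as the unrolled blocks do.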
def simulateData(fn):
# NOT YET READY FOR PARTITIONED VERSION
# Define the names of the taxa to use when the simulated data set is saved to a file
phycas.taxon_names = ['P. parksii', 'P. articulata', 'P._gracilis', 'P. macrophylla']
# Create a simulation model
#phycas.model = Likelihood.HKYModel()
phycas.model = Likelihood.GTRModel()
#phycas.model.setKappa(4.0)
phycas.model.setRelRates([1.8, 4.0, 1.5, 1.2, 5.0, 1.0])
phycas.model.setNGammaRates(4)
phycas.model.setShape(1.2)
phycas.model.setNucleotideFreqs(0.1, 0.2, 0.3, 0.4)
phycas.model.setPinvarModel()
phycas.model.setPinvar(0.3)
# Create a likelihood object to orchestrate both simulations and likelihood calculations
phycas.likelihood = Likelihood.TreeLikelihood(phycas.model)
# Prepare the tree for simulation (i.e. equip nodes with transition matrices)
phycas.likelihood.prepareForSimulation(phycas.tree)
# Simulation settings
phycas.r.setSeed(13579)
phycas.sim_nreps = 1 # ignored at present
phycas.sim_outfile = 'simout.nex'
#num_sites = 5000
num_sites = 100000
# Create a SimData object to hold the simulated data
sim_data = Likelihood.SimData()
# Simulate num_sites of data and store in sim_data
# Use the function simulateFirst (rather than just simulate) in order
# to force calculation of transition probabilities
phycas.likelihood.simulateFirst(sim_data, phycas.tree, phycas.r, num_sites)
# Save simulated data to a NEXUS file using taxon_names, datatype=dna and
# using the symbols a, c, g, and t for state codes 0, 1, 2, and 3, respectively
sim_data.saveToNexusFile('simulated.nex', phycas.taxon_names, 'dna', ('a','c','g','t'))
# Copy the simulated data from sim_data to phycas.likelihood so that
# we can compute the likelihood for the simulated data
phycas.likelihood.copyDataFromSimData(sim_data)
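The simulation model above combines discrete-gamma rate heterogeneity (shape 1.2, four categories) with a proportion of invariable sites (pinvar 0.3). As a minimal sketch of what that rate model means per site, independent of Phycas: a site is invariable with probability pinvar, and otherwise draws a gamma-distributed rate rescaled so the expected rate over all sites stays 1. (A continuous gamma is shown here for brevity; Phycas discretizes it into num_rates categories.)

```python
import random

def draw_site_rate(shape, pinvar, rng=random):
    """Relative rate for one site under a gamma(+I) model.

    With probability pinvar the site is invariable (rate 0.0); otherwise
    the rate is gamma-distributed with the given shape and rescaled by
    1/(1 - pinvar) so the expected rate across all sites equals 1.
    """
    if rng.random() < pinvar:
        return 0.0
    # gammavariate(shape, 1/shape) has mean 1; rescale to offset the zeros
    return rng.gammavariate(shape, 1.0 / shape) / (1.0 - pinvar)
```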
def createCommandFile(fn, dataf):
outf = open(fn, 'w')
outf.write('#nexus\n\n')
outf.write('begin paup;\n')
outf.write(" set nowarnroot;\n")
outf.write(" exe '%s';\n" % dataf)
outf.write('end;\n')
outf.close()
if __name__ == '__main__':
print
print '+------------------------------------------------+'
print '| Analyzing nyldna4.nex |'
print '+------------------------------------------------+'
dataf = getPhycasTestData('nyldna4.nex')
blob = readFile(dataf)
nchar = blob.characters.getMatrix().getNChar()
partition.validate(nchar)
# Create a model tree
model_tree_str = '(1:0.1,2:0.15,(3:0.025,4:0.15):0.05);'
model_tree = TreeCollection(newick=model_tree_str)
like.data_source = blob.characters
like.tree_source = model_tree
like.starting_edgelen_dist = None
like.store_site_likes = False
createCommandFile('check.nex', dataf)
tryAllModels('check.nex')
#doingSimTest = False
#if doingSimTest:
# print
# print '+------------------------------------------------+'
# print '| Analyzing Simulated Data |'
# print '+------------------------------------------------+'
#
# simulateData('simulated.nex')
# tryAllModels('simulated.nex')
#else:
# d = os.path.dirname(__file__)
# o = open(os.path.join(d, 'reference_output','simulated.nex'), "rU")
# t = open("simulated.nex", "w")
# t.write(o.read())
# t.close()
# o.close()
| 40.370968 | 129 | 0.616638 | 3,445 | 22,527 | 3.952685 | 0.08418 | 0.095175 | 0.138797 | 0.015863 | 0.745612 | 0.743482 | 0.729676 | 0.724536 | 0.711023 | 0.670485 | 0 | 0.054583 | 0.208683 | 22,527 | 557 | 130 | 40.443447 | 0.709301 | 0.220269 | 0 | 0.647059 | 0 | 0.053708 | 0.346194 | 0.007854 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.002558 | null | null | 0.148338 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
984a3f7aca26815a33be01423d93ac7187208611 | 209 | py | Python | scripts/pylint_custom_plugin/tests/test_files/copyright_header_acceptable.py | vincenttran-msft/azure-sdk-for-python | 348b56f9f03eeb3f7b502eed51daf494ffff874d | [
"MIT"
] | 1 | 2022-02-01T18:50:12.000Z | 2022-02-01T18:50:12.000Z | scripts/pylint_custom_plugin/tests/test_files/copyright_header_acceptable.py | vincenttran-msft/azure-sdk-for-python | 348b56f9f03eeb3f7b502eed51daf494ffff874d | [
"MIT"
] | null | null | null | scripts/pylint_custom_plugin/tests/test_files/copyright_header_acceptable.py | vincenttran-msft/azure-sdk-for-python | 348b56f9f03eeb3f7b502eed51daf494ffff874d | [
"MIT"
] | null | null | null | # ------------------------------------
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
# ------------------------------------
class Something():
def __init__(self):
pass
| 20.9 | 38 | 0.425837 | 15 | 209 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162679 | 209 | 9 | 39 | 23.222222 | 0.485714 | 0.679426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
98b4e1643108d473a3f73c90f156756074f8bbb6 | 102 | py | Python | tests/exog/random/random_exog_150_40.py | shaido987/pyaf | b9afd089557bed6b90b246d3712c481ae26a1957 | [
"BSD-3-Clause"
] | 377 | 2016-10-13T20:52:44.000Z | 2022-03-29T18:04:14.000Z | tests/exog/random/random_exog_150_40.py | ysdede/pyaf | b5541b8249d5a1cfdc01f27fdfd99b6580ed680b | [
"BSD-3-Clause"
] | 160 | 2016-10-13T16:11:53.000Z | 2022-03-28T04:21:34.000Z | tests/exog/random/random_exog_150_40.py | ysdede/pyaf | b5541b8249d5a1cfdc01f27fdfd99b6580ed680b | [
"BSD-3-Clause"
] | 63 | 2017-03-09T14:51:18.000Z | 2022-03-27T20:52:57.000Z | import tests.exog.test_random_exogenous as testrandexog
testrandexog.test_random_exogenous(150, 40)
7f3718b38514dd26f81177c48998fa881c34d511 | 4,751 | py | Python | daiquiri/serve/tests/test_viewsets.py | agy-why/daiquiri | 4d3e2ce51e202d5a8f1df404a0094a4e018dcb4d | [
"Apache-2.0"
] | 14 | 2018-12-23T18:35:02.000Z | 2021-12-15T04:55:12.000Z | daiquiri/serve/tests/test_viewsets.py | agy-why/daiquiri | 4d3e2ce51e202d5a8f1df404a0094a4e018dcb4d | [
"Apache-2.0"
] | 40 | 2018-12-20T12:44:05.000Z | 2022-03-21T11:35:20.000Z | daiquiri/serve/tests/test_viewsets.py | agy-why/daiquiri | 4d3e2ce51e202d5a8f1df404a0094a4e018dcb4d | [
"Apache-2.0"
] | 5 | 2019-05-16T08:03:35.000Z | 2021-08-23T20:03:11.000Z | from django.test import TestCase
from test_generator.viewsets import TestViewsetMixin
class ServeTestCase(TestCase):
databases = ('default', 'data', 'tap', 'oai')
fixtures = (
'auth.json',
'metadata.json',
'jobs.json',
'queryjobs.json'
)
users = (
('admin', 'admin'),
('user', 'user'),
('test', 'test'),
('anonymous', None),
)
class PublicRowTests(TestViewsetMixin, ServeTestCase):
url_names = {
'viewset': 'serve:row'
}
status_map = {
'list_viewset': {
'admin': 200, 'user': 200, 'test': 200, 'anonymous': 200
}
}
def _test_list_viewset(self, username):
self.assert_list_viewset(username, query_params={
'schema': 'daiquiri_data_obs',
'table': 'stars'
})
class InternalRowTests(TestViewsetMixin, ServeTestCase):
url_names = {
'viewset': 'serve:row'
}
status_map = {
'list_viewset': {
'admin': 200, 'user': 200, 'test': 200, 'anonymous': 404
}
}
def _test_list_viewset(self, username):
self.assert_list_viewset(username, query_params={
'schema': 'daiquiri_data_sim',
'table': 'halos'
})
class PrivateRowTests(TestViewsetMixin, ServeTestCase):
url_names = {
'viewset': 'serve:row'
}
status_map = {
'list_viewset': {
'admin': 404, 'user': 404, 'test': 200, 'anonymous': 404
}
}
def _test_list_viewset(self, username):
self.assert_list_viewset(username, query_params={
'schema': 'daiquiri_data_test',
'table': 'test'
})
class NotFoundRowTests(TestViewsetMixin, ServeTestCase):
url_names = {
'viewset': 'serve:row'
}
status_map = {
'list_viewset': {
'admin': 404, 'user': 404, 'test': 404, 'anonymous': 404
}
}
def _test_non_existing_schema_viewset(self, username):
self.assert_list_viewset(username, query_params={
'schema': 'non_existing',
'table': 'stars'
})
def _test_non_existing_table_viewset(self, username):
self.assert_list_viewset(username, query_params={
'schema': 'daiquiri_data_obs',
'table': 'non_existing'
})
def _test_non_existing_user_table_viewset(self, username):
self.assert_list_viewset(username, query_params={
'schema': 'daiquiri_user_user',
'table': 'non_existing'
})
class PublicColumnTests(TestViewsetMixin, ServeTestCase):
url_names = {
'viewset': 'serve:column'
}
status_map = {
'list_viewset': {
'admin': 200, 'user': 200, 'test': 200, 'anonymous': 200
}
}
def _test_list_viewset(self, username):
self.assert_list_viewset(username, query_params={
'schema': 'daiquiri_data_obs',
'table': 'stars'
})
class InternalColumnTests(TestViewsetMixin, ServeTestCase):
url_names = {
'viewset': 'serve:column'
}
status_map = {
'list_viewset': {
'admin': 200, 'user': 200, 'test': 200, 'anonymous': 404
}
}
def _test_list_viewset(self, username):
self.assert_list_viewset(username, query_params={
'schema': 'daiquiri_data_sim',
'table': 'halos'
})
class PrivateColumnTests(TestViewsetMixin, ServeTestCase):
url_names = {
'viewset': 'serve:column'
}
status_map = {
'list_viewset': {
'admin': 404, 'user': 404, 'test': 200, 'anonymous': 404
}
}
def _test_list_viewset(self, username):
self.assert_list_viewset(username, query_params={
'schema': 'daiquiri_data_test',
'table': 'test'
})
class NotFoundColumnTests(TestViewsetMixin, ServeTestCase):
url_names = {
'viewset': 'serve:column'
}
status_map = {
'list_viewset': {
'admin': 404, 'user': 404, 'test': 404, 'anonymous': 404
}
}
def _test_non_existing_schema_viewset(self, username):
self.assert_list_viewset(username, query_params={
'schema': 'non_existing',
'table': 'stars'
})
def _test_non_existing_table_viewset(self, username):
self.assert_list_viewset(username, query_params={
'schema': 'daiquiri_data_obs',
'table': 'non_existing'
})
def _test_non_existing_user_table_viewset(self, username):
self.assert_list_viewset(username, query_params={
'schema': 'daiquiri_user_user',
'table': 'non_existing'
})
| 23.874372 | 68 | 0.568933 | 449 | 4,751 | 5.714922 | 0.13363 | 0.111458 | 0.088854 | 0.10756 | 0.858145 | 0.858145 | 0.858145 | 0.858145 | 0.858145 | 0.858145 | 0 | 0.028872 | 0.300147 | 4,751 | 198 | 69 | 23.994949 | 0.742857 | 0 | 0 | 0.675676 | 0 | 0 | 0.194696 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 1 | 0.081081 | false | 0 | 0.013514 | 0 | 0.283784 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7faefc43afce1ef5c3f56a6ac68d0c19e2ba7181 | 3,101 | py | Python | epi/tests/test_chap_04_01_parity_of_a_word.py | totoro72/pt1 | 92cffb9b36ebe2023243446e560e54200b0bd6e9 | [
"MIT"
] | null | null | null | epi/tests/test_chap_04_01_parity_of_a_word.py | totoro72/pt1 | 92cffb9b36ebe2023243446e560e54200b0bd6e9 | [
"MIT"
] | 17 | 2020-09-04T16:35:48.000Z | 2022-03-02T03:21:39.000Z | epi/tests/test_chap_04_01_parity_of_a_word.py | totoro72/pt1 | 92cffb9b36ebe2023243446e560e54200b0bd6e9 | [
"MIT"
] | null | null | null | import unittest
from chap_04_01_parity_of_a_word import (
count_bits_naive,
count_bits_better,
compute_parity_w_cache,
compute_parity_log,
right_prop_rightmost_set_bit,
compute_x_mod_power_of_2,
is_x_power_of_2
)
class TestParity(unittest.TestCase):
def test_count_bits_naive(self):
self.assertEqual(count_bits_naive(0), 0)
self.assertEqual(count_bits_naive(1), 1)
self.assertEqual(count_bits_naive(2), 1)
self.assertEqual(count_bits_naive(3), 2)
self.assertEqual(count_bits_naive(7), 3)
self.assertEqual(count_bits_naive(15), 4)
with self.assertRaises(ValueError):
count_bits_naive(-1)
def test_count_bits_better(self):
self.assertEqual(count_bits_better(0), 0)
self.assertEqual(count_bits_better(1), 1)
self.assertEqual(count_bits_better(2), 1)
self.assertEqual(count_bits_better(3), 2)
self.assertEqual(count_bits_better(7), 3)
self.assertEqual(count_bits_better(15), 4)
with self.assertRaises(ValueError):
count_bits_naive(-1)
def test_compute_parity_w_cache(self):
self.assertEqual(compute_parity_w_cache(0), 0)
self.assertEqual(compute_parity_w_cache(1), 1)
self.assertEqual(compute_parity_w_cache(2), 1)
self.assertEqual(compute_parity_w_cache(3), 0)
self.assertEqual(compute_parity_w_cache(7), 1)
self.assertEqual(compute_parity_w_cache(15), 0)
# works with negatives!
self.assertEqual(compute_parity_w_cache(-1), 0)
self.assertEqual(compute_parity_w_cache(0x0001000100010001), 0)
self.assertEqual(compute_parity_w_cache(0x00010011ffff000f), 1)
def test_compute_parity_log(self):
self.assertEqual(compute_parity_log(0), 0)
self.assertEqual(compute_parity_log(1), 1)
self.assertEqual(compute_parity_log(2), 1)
self.assertEqual(compute_parity_log(3), 0)
self.assertEqual(compute_parity_log(7), 1)
self.assertEqual(compute_parity_log(15), 0)
# works with negatives!
self.assertEqual(compute_parity_log(-1), 0)
self.assertEqual(compute_parity_log(0x0001000100010001), 0)
self.assertEqual(compute_parity_log(0x00010011ffff000f), 1)
def test_right_prop_rightmost_set_bit(self):
# 0101 0000 = 2^4 + 2^6 = 16 + 64 = 80
# 0101 1111 = 80 + 15 = 95
self.assertEqual(right_prop_rightmost_set_bit(80), 95)
# works for negative too
# 1111....0000 = -1 - 15 = -16
# 1111....1111 = -1
self.assertEqual(right_prop_rightmost_set_bit(-16), -1)
def test_compute_x_mod_power_of_2(self):
self.assertEqual(compute_x_mod_power_of_2(77, 3), 5)
def test_is_x_power_of_2(self):
self.assertEqual(is_x_power_of_2(1), True)
self.assertEqual(is_x_power_of_2(2), True)
self.assertEqual(is_x_power_of_2(4), True)
self.assertEqual(is_x_power_of_2(32), True)
self.assertEqual(is_x_power_of_2(1024), True)
self.assertEqual(is_x_power_of_2(1023), False)
| 38.283951 | 71 | 0.691067 | 444 | 3,101 | 4.45045 | 0.148649 | 0.296053 | 0.211538 | 0.255061 | 0.823381 | 0.704453 | 0.361842 | 0.181174 | 0.105263 | 0.055668 | 0 | 0.08648 | 0.20574 | 3,101 | 80 | 72 | 38.7625 | 0.715794 | 0.056433 | 0 | 0.065574 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024674 | 0 | 0.672131 | 1 | 0.114754 | false | 0 | 0.032787 | 0 | 0.163934 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f687ed029da5b261be3f9a563f24ce9fb63a8a78 | 13,095 | py | Python | src/ostorlab/agent/message/proto/v3/asset/ip/v6/whois/whois_pb2.py | bbhunter/ostorlab | 968fe4e5b927c0cd159594c13b73f95b71150154 | [
"Apache-2.0"
] | 113 | 2022-02-21T09:30:14.000Z | 2022-03-31T21:54:26.000Z | src/ostorlab/agent/message/proto/v3/asset/ip/v6/whois/whois_pb2.py | bbhunter/ostorlab | 968fe4e5b927c0cd159594c13b73f95b71150154 | [
"Apache-2.0"
] | 2 | 2022-02-25T10:56:55.000Z | 2022-03-24T13:08:06.000Z | src/ostorlab/agent/message/proto/v3/asset/ip/v6/whois/whois_pb2.py | bbhunter/ostorlab | 968fe4e5b927c0cd159594c13b73f95b71150154 | [
"Apache-2.0"
] | 20 | 2022-02-28T14:25:04.000Z | 2022-03-30T23:01:11.000Z | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: v3/asset/ip/v6/whois/whois.proto
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='v3/asset/ip/v6/whois/whois.proto',
package='v3.asset.ip.v6.whois',
syntax='proto2',
serialized_options=None,
create_key=_descriptor._internal_create_key,
serialized_pb=b'\n v3/asset/ip/v6/whois/whois.proto\x12\x14v3.asset.ip.v6.whois\"L\n\x07Network\x12\x0c\n\x04\x63idr\x18\x01 \x01(\t\x12\x0c\n\x04name\x18\x02 \x01(\t\x12\x0e\n\x06handle\x18\x03 \x01(\t\x12\x15\n\rparent_handle\x18\x04 \x01(\t\"6\n\x07\x43ontact\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04kind\x18\x02 \x01(\t\x12\x0f\n\x07\x61\x64\x64ress\x18\x03 \x01(\t\"F\n\x06\x45ntity\x12\x0c\n\x04name\x18\x01 \x01(\t\x12.\n\x07\x63ontact\x18\x02 \x01(\x0b\x32\x1d.v3.asset.ip.v6.whois.Contact\"\x88\x02\n\x07Message\x12\x0c\n\x04host\x18\x01 \x02(\t\x12\x0c\n\x04mask\x18\x02 \x01(\t\x12\x12\n\x07version\x18\x03 \x02(\x05:\x01\x36\x12\x14\n\x0c\x61sn_registry\x18\x04 \x01(\t\x12\x12\n\nasn_number\x18\x05 \x01(\x05\x12\x18\n\x10\x61sn_country_code\x18\x06 \x01(\t\x12\x10\n\x08\x61sn_date\x18\x07 \x01(\t\x12\x17\n\x0f\x61sn_description\x18\x08 \x01(\t\x12.\n\x07network\x18\t \x01(\x0b\x32\x1d.v3.asset.ip.v6.whois.Network\x12.\n\x08\x65ntities\x18\n \x03(\x0b\x32\x1c.v3.asset.ip.v6.whois.Entity'
)
_NETWORK = _descriptor.Descriptor(
name='Network',
full_name='v3.asset.ip.v6.whois.Network',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='cidr', full_name='v3.asset.ip.v6.whois.Network.cidr', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='name', full_name='v3.asset.ip.v6.whois.Network.name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='handle', full_name='v3.asset.ip.v6.whois.Network.handle', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='parent_handle', full_name='v3.asset.ip.v6.whois.Network.parent_handle', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=58,
serialized_end=134,
)
_CONTACT = _descriptor.Descriptor(
name='Contact',
full_name='v3.asset.ip.v6.whois.Contact',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='v3.asset.ip.v6.whois.Contact.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='kind', full_name='v3.asset.ip.v6.whois.Contact.kind', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='address', full_name='v3.asset.ip.v6.whois.Contact.address', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=136,
serialized_end=190,
)
_ENTITY = _descriptor.Descriptor(
name='Entity',
full_name='v3.asset.ip.v6.whois.Entity',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='v3.asset.ip.v6.whois.Entity.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='contact', full_name='v3.asset.ip.v6.whois.Entity.contact', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=192,
serialized_end=262,
)
_MESSAGE = _descriptor.Descriptor(
name='Message',
full_name='v3.asset.ip.v6.whois.Message',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='host', full_name='v3.asset.ip.v6.whois.Message.host', index=0,
number=1, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='mask', full_name='v3.asset.ip.v6.whois.Message.mask', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='version', full_name='v3.asset.ip.v6.whois.Message.version', index=2,
number=3, type=5, cpp_type=1, label=2,
has_default_value=True, default_value=6,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='asn_registry', full_name='v3.asset.ip.v6.whois.Message.asn_registry', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='asn_number', full_name='v3.asset.ip.v6.whois.Message.asn_number', index=4,
number=5, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='asn_country_code', full_name='v3.asset.ip.v6.whois.Message.asn_country_code', index=5,
number=6, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='asn_date', full_name='v3.asset.ip.v6.whois.Message.asn_date', index=6,
number=7, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='asn_description', full_name='v3.asset.ip.v6.whois.Message.asn_description', index=7,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='network', full_name='v3.asset.ip.v6.whois.Message.network', index=8,
number=9, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='entities', full_name='v3.asset.ip.v6.whois.Message.entities', index=9,
number=10, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=265,
serialized_end=529,
)
_ENTITY.fields_by_name['contact'].message_type = _CONTACT
_MESSAGE.fields_by_name['network'].message_type = _NETWORK
_MESSAGE.fields_by_name['entities'].message_type = _ENTITY
DESCRIPTOR.message_types_by_name['Network'] = _NETWORK
DESCRIPTOR.message_types_by_name['Contact'] = _CONTACT
DESCRIPTOR.message_types_by_name['Entity'] = _ENTITY
DESCRIPTOR.message_types_by_name['Message'] = _MESSAGE
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Network = _reflection.GeneratedProtocolMessageType('Network', (_message.Message,), {
'DESCRIPTOR' : _NETWORK,
'__module__' : 'v3.asset.ip.v6.whois.whois_pb2'
# @@protoc_insertion_point(class_scope:v3.asset.ip.v6.whois.Network)
})
_sym_db.RegisterMessage(Network)
Contact = _reflection.GeneratedProtocolMessageType('Contact', (_message.Message,), {
'DESCRIPTOR' : _CONTACT,
'__module__' : 'v3.asset.ip.v6.whois.whois_pb2'
# @@protoc_insertion_point(class_scope:v3.asset.ip.v6.whois.Contact)
})
_sym_db.RegisterMessage(Contact)
Entity = _reflection.GeneratedProtocolMessageType('Entity', (_message.Message,), {
'DESCRIPTOR' : _ENTITY,
'__module__' : 'v3.asset.ip.v6.whois.whois_pb2'
# @@protoc_insertion_point(class_scope:v3.asset.ip.v6.whois.Entity)
})
_sym_db.RegisterMessage(Entity)
Message = _reflection.GeneratedProtocolMessageType('Message', (_message.Message,), {
'DESCRIPTOR' : _MESSAGE,
'__module__' : 'v3.asset.ip.v6.whois.whois_pb2'
# @@protoc_insertion_point(class_scope:v3.asset.ip.v6.whois.Message)
})
_sym_db.RegisterMessage(Message)
# @@protoc_insertion_point(module_scope)
| 43.795987 | 1,014 | 0.738755 | 1,830 | 13,095 | 5.004372 | 0.091257 | 0.053287 | 0.038327 | 0.05962 | 0.778117 | 0.758899 | 0.735095 | 0.719917 | 0.68028 | 0.627211 | 0 | 0.042955 | 0.123559 | 13,095 | 298 | 1,015 | 43.942953 | 0.754988 | 0.035052 | 0 | 0.680451 | 1 | 0.003759 | 0.189926 | 0.15278 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.015038 | 0 | 0.015038 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f68f15d3e56a51b1033c217916b97c3b8da1167c | 82 | py | Python | tests/test_cli.py | maggie-jiayizhang/wals3 | fa8f9a9cf968920f16859a075a08502988c9ec4d | [
"Apache-2.0"
] | 86 | 2015-03-18T07:47:54.000Z | 2022-03-27T10:16:01.000Z | tests/test_cli.py | maggie-jiayizhang/wals3 | fa8f9a9cf968920f16859a075a08502988c9ec4d | [
"Apache-2.0"
] | 21 | 2015-01-15T14:13:06.000Z | 2021-11-24T11:17:59.000Z | tests/test_cli.py | maggie-jiayizhang/wals3 | fa8f9a9cf968920f16859a075a08502988c9ec4d | [
"Apache-2.0"
] | 16 | 2015-02-13T05:31:11.000Z | 2022-03-03T08:54:37.000Z | from wals3.scripts import initializedb
def test_init():
assert initializedb
| 13.666667 | 38 | 0.780488 | 10 | 82 | 6.3 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014706 | 0.170732 | 82 | 5 | 39 | 16.4 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f6db8d71dadd33c1f146c737b36b24ff658ba08a | 21,068 | py | Python | src/nodes.py | Agent-Hellboy/find-unused-import | 81f555e5ae42a906a2db6e3049ffe23f7c2c2535 | [
"MIT"
] | 2 | 2022-01-07T14:50:17.000Z | 2022-01-23T10:30:33.000Z | src/nodes.py | Agent-Hellboy/find-unused-import | 81f555e5ae42a906a2db6e3049ffe23f7c2c2535 | [
"MIT"
] | 1 | 2022-01-11T13:55:29.000Z | 2022-01-11T13:55:29.000Z | src/nodes.py | Agent-Hellboy/find-unused-import | 81f555e5ae42a906a2db6e3049ffe23f7c2c2535 | [
"MIT"
] | null | null | null | import re
OBJECTS = []
def get_class_str(node_repr):
node_str = type(node_repr)
ptrn = re.search(r"<(.*?)>", str(node_str))
return ptrn.group(1).split("'")[1]
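`get_class_str` above recovers the dotted class path by regex-matching the repr of the node's type. For reference only (not part of the original module), the same string is available directly from `__module__` and `__qualname__`; this standalone sketch restates the original function so the snippet runs on its own, and checks that the two approaches agree on a real AST node:

```python
import ast
import re


def get_class_str(node_repr):
    # Original approach: pull "pkg.Class" out of "<class 'pkg.Class'>"
    ptrn = re.search(r"<(.*?)>", str(type(node_repr)))
    return ptrn.group(1).split("'")[1]


def get_class_str_direct(node_repr):
    # Same result without parsing the repr string
    cls = type(node_repr)
    return f"{cls.__module__}.{cls.__qualname__}"


node = ast.parse("x = 1").body[0]
assert get_class_str(node) == get_class_str_direct(node)
print(get_class_str_direct(node))  # dotted path such as 'ast.Assign' (module name varies by Python version)
```

Note the module prefix differs across interpreters (`_ast` on older Pythons, `ast` on newer ones), which is why the lookup keys in a table like `AST_NODES` below are version-sensitive.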
class AST:
@classmethod
def parse_node(self, obj):
pass
class Add:
@classmethod
def parse_node(self, obj):
pass
class And:
@classmethod
def parse_node(self, obj):
pass
class AnnAssign:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.target)].parse_node(obj.target)
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
AST_NODES[get_class_str(obj.annotation)].parse_node(obj.annotation)
class Assert:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.test)].parse_node(obj.test)
AST_NODES[get_class_str(obj.msg)].parse_node(obj.msg)
class Assign:
@classmethod
def parse_node(self, obj):
for node in obj.targets:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
class AsyncFor:
@classmethod
def parse_node(self, obj):
for node in obj.body:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.orelse:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.target)].parse_node(obj.target)
AST_NODES[get_class_str(obj.iter)].parse_node(obj.iter)
class AsyncFunctionDef:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.args)].parse_node(obj.args)
for node in obj.body:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.decorator_list:
AST_NODES[get_class_str(node)].parse_node(node)
OBJECTS.append(obj.name)
class AsyncWith:
@classmethod
def parse_node(self, obj):
for node in obj.body:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.items:
AST_NODES[get_class_str(node)].parse_node(node)
class Attribute:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
OBJECTS.append(obj.attr)
class AugAssign:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.target)].parse_node(obj.target)
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
class AugLoad:
@classmethod
def parse_node(self, obj):
pass
class AugStore:
@classmethod
def parse_node(self, obj):
pass
class Await:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
class BinOp:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.left)].parse_node(obj.left)
AST_NODES[get_class_str(obj.right)].parse_node(obj.right)
class BitAnd:
@classmethod
def parse_node(self, obj):
pass
class BitOr:
@classmethod
def parse_node(self, obj):
pass
class BitXor:
@classmethod
def parse_node(self, obj):
pass
class BoolOp:
@classmethod
def parse_node(self, obj):
for node in obj.values:
AST_NODES[get_class_str(node)].parse_node(node)
class Break:
@classmethod
def parse_node(self, obj):
pass
class Bytes:
@classmethod
def parse_node(self, obj):
pass
class Call:
@classmethod
def parse_node(self, obj):
for node in obj.args:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.keywords:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.func)].parse_node(obj.func)
class ClassDef:
@classmethod
def parse_node(self, obj):
OBJECTS.append(obj.name)
for node in obj.bases:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.keywords:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.body:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.decorator_list:
AST_NODES[get_class_str(node)].parse_node(node)
# ClassDef.starargs and ClassDef.kwargs were removed in Python 3.5;
# star/double-star arguments in bases now arrive via obj.bases and obj.keywords.
class Compare:
@classmethod
def parse_node(self, obj):
for node in obj.comparators:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.left)].parse_node(obj.left)
class Constant:
@classmethod
def parse_node(self, obj):
OBJECTS.append(obj.value)
class Continue:
@classmethod
def parse_node(self, obj):
pass
class Del:
@classmethod
def parse_node(self, obj):
pass
class Delete:
@classmethod
def parse_node(self, obj):
for node in obj.targets:
AST_NODES[get_class_str(node)].parse_node(node)
class Dict:
@classmethod
def parse_node(self, obj):
for node in obj.keys:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.values:
AST_NODES[get_class_str(node)].parse_node(node)
class DictComp:
@classmethod
def parse_node(self, obj):
for node in obj.generators:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.key)].parse_node(obj.key)
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
class Div:
@classmethod
def parse_node(self, obj):
pass
class Ellipsis:
@classmethod
def parse_node(self, obj):
pass
class Eq:
@classmethod
def parse_node(self, obj):
pass
class ExceptHandler:
@classmethod
def parse_node(self, obj):
for node in obj.body:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.type)].parse_node(obj.type)
OBJECTS.append(obj.name)
class Expr:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
class Expression:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.body)].parse_node(obj.body)
class ExtSlice:
@classmethod
def parse_node(self, obj):
pass
class FloorDiv:
@classmethod
def parse_node(self, obj):
pass
class For:
@classmethod
def parse_node(self, obj):
for node in obj.body:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.orelse:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.target)].parse_node(obj.target)
AST_NODES[get_class_str(obj.iter)].parse_node(obj.iter)
class FormattedValue:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
class FunctionDef:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.args)].parse_node(obj.args)
for node in obj.body:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.decorator_list:
AST_NODES[get_class_str(node)].parse_node(node)
OBJECTS.append(obj.name)
class FunctionType:
@classmethod
def parse_node(self, obj):
pass
class GeneratorExp:
@classmethod
def parse_node(self, obj):
for node in obj.generators:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.elt)].parse_node(obj.elt)
class Global:
@classmethod
def parse_node(self, obj):
for name in obj.names:
OBJECTS.append(name)
class Gt:
@classmethod
def parse_node(self, obj):
pass
class GtE:
@classmethod
def parse_node(self, obj):
pass
class If:
@classmethod
def parse_node(self, obj):
for node in obj.body:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.orelse:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.test)].parse_node(obj.test)
class IfExp:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.test)].parse_node(obj.test)
AST_NODES[get_class_str(obj.body)].parse_node(obj.body)
AST_NODES[get_class_str(obj.orelse)].parse_node(obj.orelse)
class Import:
@classmethod
def parse_node(self, obj):
pass
class ImportFrom:
@classmethod
def parse_node(self, obj):
pass
class In:
@classmethod
def parse_node(self, obj):
pass
class Index:
@classmethod
def parse_node(self, obj):
pass
class Interactive:
@classmethod
def parse_node(self, obj):
pass
class Invert:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.operand)].parse_node(obj.operand)
class Is:
@classmethod
def parse_node(self, obj):
pass
class IsNot:
@classmethod
def parse_node(self, obj):
pass
class JoinedStr:
@classmethod
def parse_node(self, obj):
for node in obj.values:
AST_NODES[get_class_str(node)].parse_node(node)
class LShift:
@classmethod
def parse_node(self, obj):
pass
class Lambda:
@classmethod
def parse_node(self, obj):
# Lambda.body is a single expression node, not a list
AST_NODES[get_class_str(obj.body)].parse_node(obj.body)
AST_NODES[get_class_str(obj.args)].parse_node(obj.args)
class List:
@classmethod
def parse_node(self, obj):
for node in obj.elts:
AST_NODES[get_class_str(node)].parse_node(node)
class ListComp:
@classmethod
def parse_node(self, obj):
for node in obj.generators:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.elt)].parse_node(obj.elt)
class Load:
@classmethod
def parse_node(self, obj):
pass
class Lt:
@classmethod
def parse_node(self, obj):
pass
class LtE:
@classmethod
def parse_node(self, obj):
pass
class MatMult:
@classmethod
def parse_node(self, obj):
pass
class Mod:
@classmethod
def parse_node(self, obj):
pass
class Module:
@classmethod
def parse_node(self, obj):
for node in obj.body:
AST_NODES[get_class_str(node)].parse_node(node)
class Mult:
@classmethod
def parse_node(self, obj):
pass
class Name:
@classmethod
def parse_node(self, obj):
OBJECTS.append(obj.id)
class NameConstant:
@classmethod
def parse_node(self, obj):
pass
class NamedExpr:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
AST_NODES[get_class_str(obj.target)].parse_node(obj.target)
class NodeTransformer:
@classmethod
def parse_node(self, obj):
pass
class NodeVisitor:
@classmethod
def parse_node(self, obj):
pass
class Nonlocal:
@classmethod
def parse_node(self, obj):
for name in obj.names:
OBJECTS.append(name)
class Not:
@classmethod
def parse_node(self, obj):
pass
class NotEq:
@classmethod
def parse_node(self, obj):
pass
class NotIn:
@classmethod
def parse_node(self, obj):
pass
class Num:
@classmethod
def parse_node(self, obj):
pass
class Or:
@classmethod
def parse_node(self, obj):
pass
class Param:
@classmethod
def parse_node(self, obj):
pass
class Pass:
@classmethod
def parse_node(self, obj):
pass
class Pow:
@classmethod
def parse_node(self, obj):
pass
class RShift:
@classmethod
def parse_node(self, obj):
pass
class Raise:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.exc)].parse_node(obj.exc)
AST_NODES[get_class_str(obj.cause)].parse_node(obj.cause)
class Return:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
class Set:
@classmethod
def parse_node(self, obj):
for node in obj.elts:
AST_NODES[get_class_str(node)].parse_node(node)
class SetComp:
@classmethod
def parse_node(self, obj):
for node in obj.generators:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.elt)].parse_node(obj.elt)
class Slice:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.lower)].parse_node(obj.lower)
AST_NODES[get_class_str(obj.upper)].parse_node(obj.upper)
AST_NODES[get_class_str(obj.step)].parse_node(obj.step)
class Starred:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
class Store:
@classmethod
def parse_node(self, obj):
pass
class Str:
@classmethod
def parse_node(self, obj):
pass
class Sub:
@classmethod
def parse_node(self, obj):
pass
class Subscript:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
AST_NODES[get_class_str(obj.slice)].parse_node(obj.slice)
class Suite:
@classmethod
def parse_node(self, obj):
pass
class Try:
@classmethod
def parse_node(self, obj):
for node in obj.body:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.orelse:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.handlers:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.finalbody:
AST_NODES[get_class_str(node)].parse_node(node)
class Tuple:
@classmethod
def parse_node(self, obj):
for node in obj.elts:
AST_NODES[get_class_str(node)].parse_node(node)
class TypeIgnore:
@classmethod
def parse_node(self, obj):
pass
class UAdd:
@classmethod
def parse_node(self, obj):
pass
class USub:
@classmethod
def parse_node(self, obj):
pass
class UnaryOp:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.operand)].parse_node(obj.operand)
class While:
@classmethod
def parse_node(self, obj):
for node in obj.body:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.orelse:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.test)].parse_node(obj.test)
class With:
@classmethod
def parse_node(self, obj):
for node in obj.body:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.items:
AST_NODES[get_class_str(node)].parse_node(node)
class Yield:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
class YieldFrom:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
class _ABC:
@classmethod
def parse_node(self, obj):
pass
class alias:
@classmethod
def parse_node(self, obj):
OBJECTS.append(obj.name)
OBJECTS.append(obj.asname)
class arg:
@classmethod
def parse_node(self, obj):
OBJECTS.append(obj.arg)
AST_NODES[get_class_str(obj.annotation)].parse_node(obj.annotation)
class arguments:
@classmethod
def parse_node(self, obj):
for node in obj.posonlyargs:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.kwonlyargs:
AST_NODES[get_class_str(node)].parse_node(node)
for node in obj.args:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.vararg)].parse_node(obj.vararg)
AST_NODES[get_class_str(obj.kwarg)].parse_node(obj.kwarg)
class boolop:
@classmethod
def parse_node(self, obj):
pass
class cmpop:
@classmethod
def parse_node(self, obj):
pass
class comprehension:
@classmethod
def parse_node(self, obj):
for node in obj.ifs:
AST_NODES[get_class_str(node)].parse_node(node)
AST_NODES[get_class_str(obj.target)].parse_node(obj.target)
AST_NODES[get_class_str(obj.iter)].parse_node(obj.iter)
class excepthandler:
@classmethod
def parse_node(self, obj):
pass
class expr:
@classmethod
def parse_node(self, obj):
pass
class expr_context:
@classmethod
def parse_node(self, obj):
pass
class keyword:
@classmethod
def parse_node(self, obj):
OBJECTS.append(obj.arg)
AST_NODES[get_class_str(obj.value)].parse_node(obj.value)
class mod:
@classmethod
def parse_node(self, obj):
pass
class operator:
@classmethod
def parse_node(self, obj):
pass
class slice:
@classmethod
def parse_node(self, obj):
pass
class stmt:
@classmethod
def parse_node(self, obj):
pass
class type_ignore:
@classmethod
def parse_node(self, obj):
pass
class unaryop:
@classmethod
def parse_node(self, obj):
pass
class withitem:
@classmethod
def parse_node(self, obj):
AST_NODES[get_class_str(obj.context_expr)].parse_node(obj.context_expr)
AST_NODES[get_class_str(obj.optional_vars)].parse_node(obj.optional_vars)
AST_NODES = {
"_ast.AST": AST,
"_ast.Add": Add,
"_ast.And": And,
"_ast.AnnAssign": AnnAssign,
"_ast.Assert": Assert,
"_ast.Assign": Assign,
"_ast.AsyncFor": AsyncFor,
"_ast.AsyncFunctionDef": AsyncFunctionDef,
"_ast.AsyncWith": AsyncWith,
"_ast.Attribute": Attribute,
"_ast.AugAssign": AugAssign,
"_ast.AugLoad": AugLoad,
"_ast.AugStore": AugStore,
"_ast.Await": Await,
"_ast.BinOp": BinOp,
"_ast.BitAnd": BitAnd,
"_ast.BitOr": BitOr,
"_ast.BitXor": BitXor,
"_ast.BoolOp": BoolOp,
"_ast.Break": Break,
"ast.Bytes": Bytes,
"_ast.Call": Call,
"_ast.ClassDef": ClassDef,
"_ast.Compare": Compare,
"_ast.Constant": Constant,
"_ast.Continue": Continue,
"_ast.Del": Del,
"_ast.Delete": Delete,
"_ast.Dict": Dict,
"_ast.DictComp": DictComp,
"_ast.Div": Div,
"ast.Ellipsis": Ellipsis,
"_ast.Eq": Eq,
"_ast.ExceptHandler": ExceptHandler,
"_ast.Expr": Expr,
"_ast.Expression": Expression,
"_ast.ExtSlice": ExtSlice,
"_ast.FloorDiv": FloorDiv,
"_ast.For": For,
"_ast.FormattedValue": FormattedValue,
"_ast.FunctionDef": FunctionDef,
"_ast.FunctionType": FunctionType,
"_ast.GeneratorExp": GeneratorExp,
"_ast.Global": Global,
"_ast.Gt": Gt,
"_ast.GtE": GtE,
"_ast.If": If,
"_ast.IfExp": IfExp,
"_ast.Import": Import,
"_ast.ImportFrom": ImportFrom,
"_ast.In": In,
"_ast.Index": Index,
"_ast.Interactive": Interactive,
"_ast.Invert": Invert,
"_ast.Is": Is,
"_ast.IsNot": IsNot,
"_ast.JoinedStr": JoinedStr,
"_ast.LShift": LShift,
"_ast.Lambda": Lambda,
"_ast.List": List,
"_ast.ListComp": ListComp,
"_ast.Load": Load,
"_ast.Lt": Lt,
"_ast.LtE": LtE,
"_ast.MatMult": MatMult,
"_ast.Mod": Mod,
"_ast.Module": Module,
"_ast.Mult": Mult,
"_ast.Name": Name,
"ast.NameConstant": NameConstant,
"_ast.NamedExpr": NamedExpr,
"ast.NodeTransformer": NodeTransformer,
"ast.NodeVisitor": NodeVisitor,
"_ast.Nonlocal": Nonlocal,
"_ast.Not": Not,
"_ast.NotEq": NotEq,
"_ast.NotIn": NotIn,
"ast.Num": Num,
"_ast.Or": Or,
"_ast.Param": Param,
"_ast.Pass": Pass,
"_ast.Pow": Pow,
"_ast.RShift": RShift,
"_ast.Raise": Raise,
"_ast.Return": Return,
"_ast.Set": Set,
"_ast.SetComp": SetComp,
"_ast.Slice": Slice,
"_ast.Starred": Starred,
"_ast.Store": Store,
"ast.Str": Str,
"_ast.Sub": Sub,
"_ast.Subscript": Subscript,
"_ast.Suite": Suite,
"_ast.Try": Try,
"_ast.Tuple": Tuple,
"_ast.TypeIgnore": TypeIgnore,
"_ast.UAdd": UAdd,
"_ast.USub": USub,
"_ast.UnaryOp": UnaryOp,
"_ast.While": While,
"_ast.With": With,
"_ast.Yield": Yield,
"_ast.YieldFrom": YieldFrom,
"ast._ABC": _ABC,
"_ast.alias": alias,
"_ast.arg": arg,
"_ast.arguments": arguments,
"_ast.boolop": boolop,
"_ast.cmpop": cmpop,
"_ast.comprehension": comprehension,
"_ast.excepthandler": excepthandler,
"_ast.expr": expr,
"_ast.expr_context": expr_context,
"_ast.keyword": keyword,
"_ast.mod": mod,
"_ast.operator": operator,
"_ast.slice": slice,
"_ast.stmt": stmt,
"_ast.type_ignore": type_ignore,
"_ast.unaryop": unaryop,
"_ast.withitem": withitem,
}
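The `AST_NODES` table above maps each AST class path to a handler whose `parse_node` recursively dispatches on child nodes and appends harvested names to `OBJECTS`. Purely for illustration (this is a standalone sketch, not the module's own API), the same name-harvesting effect for a few node kinds can be achieved with the standard library's `ast.walk`, avoiding the per-class dispatch:

```python
import ast

collected = []


def collect_names(tree):
    # Breadth-first walk; append the same kinds of names the dispatch
    # classes above push into OBJECTS (Name.id, def/class names, import aliases).
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            collected.append(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            collected.append(node.name)
        elif isinstance(node, ast.alias):
            collected.append(node.name)


tree = ast.parse("import os\n\ndef f(x):\n    return os.path\n")
collect_names(tree)
print(sorted(collected))  # ['f', 'os', 'os']
```

The dispatch-table design trades this compactness for explicit per-node control over which attributes get visited and which names get recorded.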
| 21.497959 | 81 | 0.64064 | 2,830 | 21,068 | 4.526502 | 0.063958 | 0.16089 | 0.180952 | 0.219048 | 0.747151 | 0.745589 | 0.733255 | 0.725059 | 0.543326 | 0.52943 | 0 | 0.000126 | 0.245396 | 21,068 | 979 | 82 | 21.519918 | 0.805636 | 0 | 0 | 0.612329 | 0 | 0 | 0.065312 | 0.000997 | 0 | 0 | 0 | 0 | 0.00274 | 1 | 0.168493 | false | 0.091781 | 0.006849 | 0 | 0.343836 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
f6e20f724e1e44cadec0c3278458f9f4a941e3b3 | 36 | py | Python | Instance/config.py | simonkairu/pitches | 6e475297f3f9c91dd601bb9ae9f774c6bb849ca9 | [
"MIT"
] | null | null | null | Instance/config.py | simonkairu/pitches | 6e475297f3f9c91dd601bb9ae9f774c6bb849ca9 | [
"MIT"
] | null | null | null | Instance/config.py | simonkairu/pitches | 6e475297f3f9c91dd601bb9ae9f774c6bb849ca9 | [
"MIT"
] | null | null | null | SECRET_KEY = 'wvkwcw6nflmYWLrdlLUVcg'
| 18 | 35 | 0.888889 | 3 | 36 | 10.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028571 | 0.027778 | 36 | 1 | 36 | 36 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.611111 | 0.611111 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
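Instance config modules like the one above are usually consumed at application startup (in a Flask app, typically via `app.config.from_pyfile`). Below is a stdlib-only sketch of that loading step; the file contents, the `instance_config` module name, and the UPPERCASE-only filter are illustrative assumptions, not taken from this repository:

```python
import importlib.util
import os
import tempfile

# Illustrative config source standing in for an Instance/config.py file
cfg_src = "SECRET_KEY = 'example-key'\n"
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as fh:
    fh.write(cfg_src)
    path = fh.name

# Execute the config file as a throwaway module
spec = importlib.util.spec_from_file_location("instance_config", path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
os.unlink(path)

# Keep only UPPERCASE names, mirroring Flask's config convention
settings = {k: v for k, v in vars(module).items() if k.isupper()}
print(settings)  # {'SECRET_KEY': 'example-key'}
```

Keeping secrets in an untracked instance folder, as this repo does, means the value never lands in version control; the loader only sees it at runtime.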
100086e7fb33cea9af3340db96c87496054077b3 | 121 | py | Python | staff_management_models/staff_group_payments/admin.py | reimibeta/django-staff-management-models | 20fd718d18d39f333fb9e1ac231154db86a9b91c | [
"Apache-2.0"
] | null | null | null | staff_management_models/staff_group_payments/admin.py | reimibeta/django-staff-management-models | 20fd718d18d39f333fb9e1ac231154db86a9b91c | [
"Apache-2.0"
] | null | null | null | staff_management_models/staff_group_payments/admin.py | reimibeta/django-staff-management-models | 20fd718d18d39f333fb9e1ac231154db86a9b91c | [
"Apache-2.0"
] | null | null | null | from staff_management_models.staff_group_payments.class_admins.staff_worker_payment_admin import StaffWorkerPaymentAdmin
| 60.5 | 120 | 0.942149 | 15 | 121 | 7.066667 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033058 | 121 | 1 | 121 | 121 | 0.905983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1018f41969b218b1df5e3d8cce148c92bee0d665 | 118 | py | Python | EduSim/Envs/TMS/Agent.py | bigdata-ustc/EduSim | 849eed229c24615e5f2c3045036311e83c22ea68 | [
"MIT"
] | 18 | 2019-11-11T03:45:35.000Z | 2022-02-09T15:31:51.000Z | EduSim/Envs/TMS/Agent.py | ghzhao78506/EduSim | cb10e952eb212d8a9344143f889207b5cd48ba9d | [
"MIT"
] | 3 | 2020-10-23T01:05:57.000Z | 2021-03-16T12:12:24.000Z | EduSim/Envs/TMS/Agent.py | bigdata-ustc/EduSim | 849eed229c24615e5f2c3045036311e83c22ea68 | [
"MIT"
] | 6 | 2020-06-09T21:32:00.000Z | 2022-03-12T00:25:18.000Z | # coding: utf-8
# 2020/5/7 @ tongshiwei
from EduSim.SimOS import RandomAgent
class TMSAgent(RandomAgent):
pass
| 13.111111 | 36 | 0.728814 | 16 | 118 | 5.375 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072165 | 0.177966 | 118 | 8 | 37 | 14.75 | 0.814433 | 0.29661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
63e320bd742e48993b6094fca258ab13228711a6 | 3,247 | py | Python | tests/test_reading.py | emilioschepis/cdesf2 | 4a98b8608abd9422d818b5feca720e6bacd19642 | [
"MIT"
] | null | null | null | tests/test_reading.py | emilioschepis/cdesf2 | 4a98b8608abd9422d818b5feca720e6bacd19642 | [
"MIT"
] | null | null | null | tests/test_reading.py | emilioschepis/cdesf2 | 4a98b8608abd9422d818b5feca720e6bacd19642 | [
"MIT"
] | null | null | null | from datetime import datetime, timezone, timedelta
from pm4py.objects.log import obj
from cdesf2.utils import read_csv, read_xes
def test_read_csv():
# TODO: Is there a reason to replace " " in the activity name with "_"?
event_stream_test = read_csv("demo/Detail_Supplier_IW-Frozen.csv")
assert isinstance(event_stream_test, obj.EventStream)
assert len(event_stream_test) == 5000
first_event = event_stream_test[0]
assert first_event["case:concept:name"] == "Case 496"
assert first_event["concept:name"] == "Process Creation"
assert isinstance(first_event["time:timestamp"], datetime)
assert first_event["time:timestamp"] == datetime(2010, 9, 21, 9, 0, 13)
second_event = event_stream_test[1]
assert second_event["case:concept:name"] == "Case 12186"
assert second_event["concept:name"] == "Process Creation"
assert isinstance(second_event["time:timestamp"], datetime)
assert second_event["time:timestamp"] == datetime(2010, 9, 21, 9, 0, 21)
second_to_last_event = event_stream_test[-2]
assert second_to_last_event["case:concept:name"] == "Case 8848"
assert second_to_last_event["concept:name"] == "ME Fabrication Checker"
assert isinstance(second_to_last_event["time:timestamp"], datetime)
assert second_to_last_event["time:timestamp"] == datetime(2011, 4, 27, 8, 0, 59)
last_event = event_stream_test[-1]
assert last_event["case:concept:name"] == "Case 10634"
assert last_event["concept:name"] == "ME Fabrication Checker"
assert isinstance(last_event["time:timestamp"], datetime)
assert last_event["time:timestamp"] == datetime(2011, 4, 27, 9, 0, 0)
def test_read_xes():
event_stream_test = read_xes("demo/running-example.xes")
assert isinstance(event_stream_test, obj.EventStream)
assert len(event_stream_test) == 42
tzinfo = timezone(timedelta(seconds=3600))
first_event = event_stream_test[0]
assert first_event["case:concept:name"] == "1"
assert first_event["concept:name"] == "register request"
assert isinstance(first_event["time:timestamp"], datetime)
assert first_event["time:timestamp"] == datetime(2010, 12, 30, 11, 2, tzinfo=tzinfo)
second_event = event_stream_test[1]
assert second_event["case:concept:name"] == "1"
assert second_event["concept:name"] == "examine thoroughly"
assert isinstance(second_event["time:timestamp"], datetime)
assert second_event["time:timestamp"] == datetime(
2010, 12, 31, 10, 6, tzinfo=tzinfo
)
second_to_last_event = event_stream_test[-2]
assert second_to_last_event["case:concept:name"] == "4"
assert second_to_last_event["concept:name"] == "decide"
assert isinstance(second_to_last_event["time:timestamp"], datetime)
assert second_to_last_event["time:timestamp"] == datetime(
2011, 1, 9, 12, 2, tzinfo=tzinfo
)
last_event = event_stream_test[-1]
assert last_event["case:concept:name"] == "4"
assert last_event["concept:name"] == "reject request"
assert isinstance(last_event["time:timestamp"], datetime)
assert last_event["time:timestamp"] == datetime(2011, 1, 12, 15, 44, tzinfo=tzinfo)
| 42.168831 | 88 | 0.716046 | 446 | 3,247 | 4.988789 | 0.213004 | 0.080899 | 0.129438 | 0.186966 | 0.762247 | 0.717303 | 0.705618 | 0.651685 | 0.648989 | 0.608539 | 0 | 0.045918 | 0.154912 | 3,247 | 76 | 89 | 42.723684 | 0.764942 | 0.02125 | 0 | 0.305085 | 0 | 0 | 0.21568 | 0.018262 | 0 | 0 | 0 | 0.013158 | 0.610169 | 1 | 0.033898 | false | 0 | 0.101695 | 0 | 0.135593 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
89d69cafac7cb233d1ca2659e1e0221da3b1a53d | 5,867 | py | Python | pirates/leveleditor/worldData/JungleTestIslandA.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 81 | 2018-04-08T18:14:24.000Z | 2022-01-11T07:22:15.000Z | pirates/leveleditor/worldData/JungleTestIslandA.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 4 | 2018-09-13T20:41:22.000Z | 2022-01-08T06:57:00.000Z | pirates/leveleditor/worldData/JungleTestIslandA.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 26 | 2018-05-26T12:49:27.000Z | 2021-09-11T09:11:59.000Z | from pandac.PandaModules import Point3, VBase3
objectStruct = {'Locator Links': [['1157574848.27sdnaik', '1157574884.56sdnaik', 'Bi-directional'], ['1157574928.53sdnaik', '1157574884.58sdnaik', 'Bi-directional'], ['1157574928.66sdnaik', '1157574992.98sdnaik', 'Bi-directional'], ['1157574992.97sdnaik', '1157574848.3sdnaik', 'Bi-directional']],'Objects': {'1157574780.95sdnaik': {'Type': 'Island','Name': 'JungleTestIslandA','File': '','Objects': {'1157485817.84sdnaik': {'Type': 'Island Game Area','File': 'jungle_area_a','Hpr': Point3(0.0, 0.0, 0.0),'Objects': {'1157574928.53sdnaik': {'Type': 'Locator Node','Name': 'portal_interior_1','GridPos': Point3(-3.925, -10.962, 221.101),'Hpr': VBase3(-153.319, 0.0, 0.0),'Pos': Point3(406.169, 250.261, 8.467),'Scale': VBase3(1.0, 1.0, 1.0)},'1157574928.66sdnaik': {'Type': 'Locator Node','Name': 'portal_interior_2','GridPos': Point3(-542.421, -643.139, 218.437),'Hpr': VBase3(96.892, 0.711, -0.076),'Pos': Point3(-120.867, -365.049, -0.26),'Scale': VBase3(1.0, 1.0, 1.0)},'1164917329.8sdnaik': {'Type': 'Locator Node','Name': 'portal_interior_3','GridPos': Point3(-1156.107, 144.641, 521.905),'Hpr': VBase3(-61.504, -3.202, 1.38),'Pos': Point3(-352.561, 241.702, 3.28),'Scale': VBase3(1.0, 1.0, 1.0)}},'Pos': Point3(-803.546, -97.061, 518.625),'Scale': VBase3(1.0, 1.0, 1.0),'Visual': {'Model': 'models/jungles/jungle_a_zero'}},'1157574848.27sdnaik': {'Type': 'Locator Node','Name': 'portal_exterior_1','Hpr': VBase3(-18.331, 0.0, 0.0),'Pos': Point3(-219.917, -319.235, 0.595),'Scale': VBase3(1.0, 1.0, 1.0)},'1157574848.3sdnaik': {'Type': 'Locator Node','Name': 'portal_exterior_2','Hpr': VBase3(68.97, 0.0, 0.0),'Pos': Point3(-285.103, -58.817, 44.049),'Scale': VBase3(1.0, 1.0, 1.0)},'1157574853.39sdnaik': {'Type': 'Locator Node','Name': 'portal_exterior_1','Hpr': VBase3(-18.331, 0.0, 0.0),'Pos': Point3(-219.917, -319.235, 0.595),'Scale': VBase3(1.0, 1.0, 1.0)},'1157574861.95sdnaik': {'Type': 'Locator Node','Name': 'portal_exterior_2','Hpr': VBase3(68.97, 0.0, 0.0),'Pos': Point3(-285.103, 
-58.817, 44.049),'Scale': VBase3(1.0, 1.0, 1.0)},'1157574884.55sdnaik': {'Type': 'Connector Tunnel','File': '','Hpr': Point3(0.0, 0.0, 0.0),'Objects': {'1157574884.56sdnaik': {'Type': 'Locator Node','Name': 'portal_connector_1','Hpr': VBase3(-90.0, 0.0, 0.0),'Pos': Point3(0.0, 3.262, 0.0),'Scale': VBase3(1.0, 1.0, 1.0)},'1157574884.58sdnaik': {'Type': 'Locator Node','Name': 'portal_connector_2','GridPos': Point3(-313.651, -175.767, 131.091),'Hpr': VBase3(90.0, 0.0, 0.0),'Pos': Point3(95.197, 150.0, 0.0),'Scale': VBase3(1.0, 1.0, 1.0)}},'Pos': Point3(-252.603, -296.91, 259.504),'Scale': VBase3(1.0, 1.0, 1.0),'Visual': {'Model': 'models/tunnels/tunnel_cave_left'}},'1157574992.92sdnaik': {'Type': 'Connector Tunnel','File': '','Hpr': Point3(0.0, 0.0, 0.0),'Objects': {'1157574992.97sdnaik': {'Type': 'Locator Node','Name': 'portal_connector_1','Hpr': VBase3(-90.0, 0.0, 0.0),'Pos': Point3(0.0, 3.262, 0.0),'Scale': VBase3(1.0, 1.0, 1.0)},'1157574992.98sdnaik': {'Type': 'Locator Node','Name': 'portal_connector_2','GridPos': Point3(-488.418, -323.438, 197.465),'Hpr': VBase3(90.0, 0.0, 0.0),'Pos': Point3(95.197, 150.0, 0.0),'Scale': VBase3(1.0, 1.0, 1.0)}},'Pos': Point3(-514.185, 421.494, 276.169),'Scale': VBase3(1.0, 1.0, 1.0),'Visual': {'Model': 'models/tunnels/tunnel_cave_left'}},'1161902850.89sdnaik': {'Type': 'Locator Node','Name': 'portal_exterior_1','Hpr': VBase3(-18.331, 0.0, 0.0),'Pos': Point3(-219.917, -319.235, 0.595),'Scale': VBase3(1.0, 1.0, 1.0)},'1161902852.94sdnaik': {'Type': 'Locator Node','Name': 'portal_exterior_2','Hpr': VBase3(68.97, 0.0, 0.0),'Pos': Point3(-285.103, -58.817, 44.049),'Scale': VBase3(1.0, 1.0, 1.0)},'1161986237.39sdnaik': {'Type': 'Player Spawn Node','Hpr': Point3(0.0, 0.0, 0.0),'Pos': Point3(-199.458, -320.558, 0.843),'Scale': VBase3(1.0, 1.0, 1.0),'Spawnables': 'All'}},'Visual': {'Model': 'models/islands/bilgewater_zero'}}},'Node Links': [],'Layers': {},'ObjectIds': {'1157485817.84sdnaik': 
'["Objects"]["1157574780.95sdnaik"]["Objects"]["1157485817.84sdnaik"]','1157574780.95sdnaik': '["Objects"]["1157574780.95sdnaik"]','1157574848.27sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1157574848.27sdnaik"]','1157574848.3sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1157574848.3sdnaik"]','1157574853.39sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1157574853.39sdnaik"]','1157574861.95sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1157574861.95sdnaik"]','1157574884.55sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1157574884.55sdnaik"]','1157574884.56sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1157574884.55sdnaik"]["Objects"]["1157574884.56sdnaik"]','1157574884.58sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1157574884.55sdnaik"]["Objects"]["1157574884.58sdnaik"]','1157574928.53sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1157485817.84sdnaik"]["Objects"]["1157574928.53sdnaik"]','1157574928.66sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1157485817.84sdnaik"]["Objects"]["1157574928.66sdnaik"]','1157574992.92sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1157574992.92sdnaik"]','1157574992.97sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1157574992.92sdnaik"]["Objects"]["1157574992.97sdnaik"]','1157574992.98sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1157574992.92sdnaik"]["Objects"]["1157574992.98sdnaik"]','1161902850.89sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1161902850.89sdnaik"]','1161902852.94sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1161902852.94sdnaik"]','1161986237.39sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1161986237.39sdnaik"]','1164917329.8sdnaik': '["Objects"]["1157574780.95sdnaik"]["Objects"]["1157485817.84sdnaik"]["Objects"]["1164917329.8sdnaik"]'}} | 2,933.5 | 5,820 | 0.668144 | 833 | 5,867 | 4.663866 | 0.204082 | 0.033462 | 0.033977 | 0.035006 | 0.569627 | 0.527413 | 
0.459974 | 0.442986 | 0.322523 | 0.287001 | 0 | 0.30018 | 0.050622 | 5,867 | 2 | 5,820 | 2,933.5 | 0.397307 | 0 | 0 | 0 | 0 | 0 | 0.581288 | 0.263463 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
89e7148b94fbb6ff67562cec6d68812dbd3e3bf6 | 48 | py | Python | cvdd/models/__init__.py | altescy/cvdd | 57e4fe0fd30a6d2b67651ce076b63a9a8a6e7c7a | [
"MIT"
] | 5 | 2021-07-11T08:40:43.000Z | 2021-07-19T05:08:11.000Z | cvdd/models/__init__.py | altescy/cvdd | 57e4fe0fd30a6d2b67651ce076b63a9a8a6e7c7a | [
"MIT"
] | null | null | null | cvdd/models/__init__.py | altescy/cvdd | 57e4fe0fd30a6d2b67651ce076b63a9a8a6e7c7a | [
"MIT"
] | null | null | null | from cvdd.models.cvdd import CVDD # noqa: F401
| 24 | 47 | 0.75 | 8 | 48 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 0.166667 | 48 | 1 | 48 | 48 | 0.825 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
89efdfa0780385a69fa5893afa10657f79b8c298 | 9,188 | py | Python | market_place_proj/app_market/migrations/0002_initial.py | grand-roman/market_place | 219e9c6f108fc4edd7508e0f00078c0c78583f47 | [
"BSD-2-Clause"
] | null | null | null | market_place_proj/app_market/migrations/0002_initial.py | grand-roman/market_place | 219e9c6f108fc4edd7508e0f00078c0c78583f47 | [
"BSD-2-Clause"
] | null | null | null | market_place_proj/app_market/migrations/0002_initial.py | grand-roman/market_place | 219e9c6f108fc4edd7508e0f00078c0c78583f47 | [
"BSD-2-Clause"
] | null | null | null | # Generated by Django 3.2.8 on 2021-12-23 12:58
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import mptt.fields

class Migration(migrations.Migration):

    initial = True

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('app_market', '0001_initial'),
    ]

    operations = [
        migrations.AddField(
            model_name='review',
            name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='reviews', to=settings.AUTH_USER_MODEL, verbose_name='user'),
        ),
        migrations.AddField(
            model_name='relatedgoodgroup',
            name='discount',
            field=models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='related_discounts', to='app_market.discount', verbose_name='discount'),
        ),
        migrations.AddField(
            model_name='relatedgoodgroup',
            name='group1',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='group1', to='app_market.goodgroup', verbose_name='Product category 1'),
        ),
        migrations.AddField(
            model_name='relatedgoodgroup',
            name='group2',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='group2', to='app_market.goodgroup', verbose_name='Product category 2'),
        ),
        migrations.AddField(
            model_name='payment',
            name='order',
            field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='pay', to='app_market.order', verbose_name='order'),
        ),
        migrations.AddField(
            model_name='orderdetail',
            name='cart',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='order_detail', to='app_market.cart', verbose_name='Cart'),
        ),
        migrations.AddField(
            model_name='orderdetail',
            name='discount',
            field=models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='detail', to='app_market.discount', verbose_name='discount'),
        ),
        migrations.AddField(
            model_name='orderdetail',
            name='order',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='detail', to='app_market.order', verbose_name='order'),
        ),
        migrations.AddField(
            model_name='orderdetail',
            name='user',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='order_detail', to=settings.AUTH_USER_MODEL, verbose_name='order'),
        ),
        migrations.AddField(
            model_name='order',
            name='cart_discount',
            field=models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='order', to='app_market.discount', verbose_name='discount'),
        ),
        migrations.AddField(
            model_name='order',
            name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='order', to=settings.AUTH_USER_MODEL, verbose_name='user'),
        ),
        migrations.AddField(
            model_name='goodview',
            name='good',
            field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='views', to='app_market.good', verbose_name='good'),
        ),
        migrations.AddField(
            model_name='goodview',
            name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='views', to=settings.AUTH_USER_MODEL, verbose_name='user'),
        ),
        migrations.AddField(
            model_name='good',
            name='category',
            field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='good', to='app_market.category', verbose_name='Product category'),
        ),
        migrations.AddField(
            model_name='good',
            name='discount',
            field=models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='good', to='app_market.discount', verbose_name='discount'),
        ),
        migrations.AddField(
            model_name='good',
            name='files',
            field=models.ManyToManyField(blank=True, related_name='catalog', to='app_market.MediaFiles', verbose_name='related files'),
        ),
        migrations.AddField(
            model_name='good',
            name='group',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='group', to='app_market.goodgroup', verbose_name='group of products for discount'),
        ),
        migrations.AddField(
            model_name='good',
            name='image',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.PROTECT, related_name='good', to='app_market.mediafiles', verbose_name='image for good'),
        ),
        migrations.AddField(
            model_name='good',
            name='tag',
            field=models.ManyToManyField(related_name='good', to='app_market.Tag', verbose_name='Tags'),
        ),
        migrations.AddField(
            model_name='discount',
            name='variants',
            field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='discount', to='app_market.discountvariants', verbose_name='discount options'),
        ),
        migrations.AddField(
            model_name='category',
            name='discount',
            field=models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='category', to='app_market.discount', verbose_name='discount'),
        ),
        migrations.AddField(
            model_name='category',
            name='icon',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='icon', to='app_market.mediafiles', verbose_name='related files'),
        ),
        migrations.AddField(
            model_name='category',
            name='parent',
            field=mptt.fields.TreeForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='children', to='app_market.category'),
        ),
        migrations.AddField(
            model_name='catalog',
            name='delivery',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='catalog', to='app_market.delivery', verbose_name='delivery'),
        ),
        migrations.AddField(
            model_name='catalog',
            name='discount',
            field=models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='catalog', to='app_market.discount', verbose_name='discount'),
        ),
        migrations.AddField(
            model_name='catalog',
            name='good',
            field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='catalog', to='app_market.good', verbose_name='product'),
        ),
        migrations.AddField(
            model_name='catalog',
            name='seller',
            field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='catalog', to='app_market.seller', verbose_name='seller'),
        ),
        migrations.AddField(
            model_name='cartsale',
            name='discount',
            field=models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='cart_sale', to='app_market.discount', verbose_name='discount'),
        ),
        migrations.AddField(
            model_name='cart',
            name='cart_discount',
            field=models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='cart', to='app_market.discount', verbose_name='discount'),
        ),
        migrations.AddField(
            model_name='cart',
            name='catalog',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='cart', to='app_market.catalog', verbose_name='catalog_good'),
        ),
        migrations.AddField(
            model_name='cart',
            name='good',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='cart', to='app_market.good', verbose_name='good'),
        ),
        migrations.AddField(
            model_name='cart',
            name='user',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='cart', to=settings.AUTH_USER_MODEL, verbose_name='user'),
        ),
    ]
| 51.044444 | 203 | 0.641707 | 1,024 | 9,188 | 5.587891 | 0.088867 | 0.04474 | 0.128626 | 0.150996 | 0.835722 | 0.829081 | 0.736281 | 0.708668 | 0.686299 | 0.686299 | 0 | 0.003507 | 0.224205 | 9,188 | 179 | 204 | 51.329609 | 0.799242 | 0.004898 | 0 | 0.639535 | 1 | 0 | 0.15797 | 0.009846 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.023256 | 0 | 0.046512 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
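The migration above attaches many `ForeignKey` fields whose delete rule is either `django.db.models.deletion.PROTECT` or `CASCADE`. As a rough, framework-free sketch of what those two rules mean (plain Python, no Django required; `delete_user`, `ProtectedError`, and the dict-based "tables" here are illustrative stand-ins, not part of the project):

```python
class ProtectedError(Exception):
    """Raised when deleting a row that a PROTECT reference still points to."""


def delete_user(users, orders, user_id, on_delete="PROTECT"):
    """Delete a user; `orders` maps order_id -> user_id, mimicking a ForeignKey column."""
    referencing = [oid for oid, uid in orders.items() if uid == user_id]
    if referencing:
        if on_delete == "PROTECT":
            # PROTECT refuses the delete while referencing rows exist
            raise ProtectedError(f"user {user_id} is referenced by orders {referencing}")
        if on_delete == "CASCADE":
            # CASCADE removes the referencing rows along with the target
            for oid in referencing:
                del orders[oid]
    users.discard(user_id)
```

Under this reading, the migration's heavy use of `PROTECT` (e.g. on `order.user` and `catalog.good`) means the related business records block deletion of what they reference, while the `CASCADE` fields (e.g. `review.user`) are cleaned up automatically.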
d6089b133347f222fbcfd38934ebc034c6fbe312 | 232 | py | Python | strategies.py | JMan-Zx/game_of_hog | 9e75fe246905092fc37e46c314cfee04316b1371 | [
"MIT"
] | null | null | null | strategies.py | JMan-Zx/game_of_hog | 9e75fe246905092fc37e46c314cfee04316b1371 | [
"MIT"
] | null | null | null | strategies.py | JMan-Zx/game_of_hog | 9e75fe246905092fc37e46c314cfee04316b1371 | [
"MIT"
] | null | null | null | from game_matrix import game_matrix
def optimal(player_score, opponent_score):
    # keep only the recommended action; the win_rate element is not needed here
    return game_matrix(player_score, opponent_score)[0]


def roll_4(player_score, opponent_score):
    return 4
| 25.777778 | 55 | 0.780172 | 36 | 232 | 4.722222 | 0.555556 | 0.176471 | 0.335294 | 0.423529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015306 | 0.155172 | 232 | 8 | 56 | 29 | 0.852041 | 0.142241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
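The `optimal` strategy above discards the second element of whatever `game_matrix` returns, keeping only the recommended number of dice. Since `game_matrix` itself is not shown, here is a hedged stand-in for the underlying idea: choose the roll count maximizing expected turn score in a Pig/Hog-style dice turn where any die showing 1 busts the turn. All names are illustrative, and the real game of Hog has extra special-case rules this sketch ignores:

```python
from fractions import Fraction


def expected_turn_score(num_rolls):
    """Expected score for rolling `num_rolls` fair dice when any 1 busts the turn.

    P(no die shows 1) = (5/6)**n; given no 1, each die averages (2+3+4+5+6)/5 = 4.
    Exact rational arithmetic avoids float-rounding artifacts near ties.
    """
    p_safe = Fraction(5, 6) ** num_rolls
    return p_safe * 4 * num_rolls


def best_roll_count(max_rolls=10):
    # Pick the roll count with the highest expected score (first one on ties)
    return max(range(1, max_rolls + 1), key=expected_turn_score)
```

For this simplified model the expected value `4n * (5/6)**n` peaks at n = 5 (tied exactly with n = 6), which is why Pig-style analyses often recommend rolling 5 or 6 dice.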
d640cdcd379049a0929051fb7f9350b9d263d736 | 32 | py | Python | aiotg/__init__.py | domclick/aiotg | 9228af2727ea88ee60b46678511dd701b965afe3 | [
"MIT"
] | 435 | 2015-06-30T17:50:42.000Z | 2022-03-24T10:08:59.000Z | aiotg/__init__.py | domclick/aiotg | 9228af2727ea88ee60b46678511dd701b965afe3 | [
"MIT"
] | 65 | 2015-08-14T10:25:49.000Z | 2021-07-29T14:01:56.000Z | aiotg/__init__.py | domclick/aiotg | 9228af2727ea88ee60b46678511dd701b965afe3 | [
"MIT"
] | 71 | 2015-07-20T22:14:47.000Z | 2022-01-21T11:20:59.000Z | from aiotg.bot import * # noqa
| 16 | 31 | 0.6875 | 5 | 32 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21875 | 32 | 1 | 32 | 32 | 0.88 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d65674ebd0afd658cb831b24a3c0347e6d767352 | 209 | py | Python | modules/debug/sources/__init__.py | AnthonyEdvalson/Machina | fefb058591dd7b62817c75277d5ca0eb6dbd8c3a | [
"MIT"
] | null | null | null | modules/debug/sources/__init__.py | AnthonyEdvalson/Machina | fefb058591dd7b62817c75277d5ca0eb6dbd8c3a | [
"MIT"
] | null | null | null | modules/debug/sources/__init__.py | AnthonyEdvalson/Machina | fefb058591dd7b62817c75277d5ca0eb6dbd8c3a | [
"MIT"
] | null | null | null | from modules.debug.sources.brokensource import BrokenSource
from modules.debug.sources.source1 import Source1
from modules.debug.sources.source2 import Source2
sources = [Source1, Source2, BrokenSource]
| 34.833333 | 60 | 0.822967 | 25 | 209 | 6.88 | 0.32 | 0.19186 | 0.27907 | 0.401163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0.110048 | 209 | 5 | 61 | 41.8 | 0.892473 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c3884df21f8506c3208765a75e80a5447431a310 | 66 | py | Python | devito/passes/equations/__init__.py | speglich/devito | b636f7694eb6a1e19b0f2c48f44ff63613029a7b | [
"MIT"
] | 1 | 2021-03-25T21:23:03.000Z | 2021-03-25T21:23:03.000Z | devito/passes/equations/__init__.py | speglich/devito | b636f7694eb6a1e19b0f2c48f44ff63613029a7b | [
"MIT"
] | 52 | 2020-10-12T19:29:09.000Z | 2022-03-10T14:05:22.000Z | devito/passes/equations/__init__.py | alisiahkoohi/devito | f535a44dff12de2837eb6e3217a65ffb2d371cb8 | [
"MIT"
] | 1 | 2020-06-02T03:31:11.000Z | 2020-06-02T03:31:11.000Z | from .linearity import * # noqa
from .buffering import * # noqa
| 22 | 32 | 0.69697 | 8 | 66 | 5.75 | 0.625 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212121 | 66 | 2 | 33 | 33 | 0.884615 | 0.136364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c3bf84eddf3053d89dbd376832d9725192054646 | 131 | py | Python | xvision/api/face.py | jimmysue/xvision | bf5aa567a197b3e4c9fdd285c80b4f7512d14d7a | [
"MIT"
] | 3 | 2021-04-08T10:50:53.000Z | 2021-11-15T07:26:16.000Z | xvision/api/face.py | jimmysue/xvision | bf5aa567a197b3e4c9fdd285c80b4f7512d14d7a | [
"MIT"
] | 3 | 2021-08-05T07:40:52.000Z | 2021-11-16T05:53:29.000Z | xvision/api/face.py | jimmysue/xvision | bf5aa567a197b3e4c9fdd285c80b4f7512d14d7a | [
"MIT"
] | 1 | 2021-12-15T05:57:48.000Z | 2021-12-15T05:57:48.000Z |
def load_face_detector(checkpoint, *args, **kwargs):
    pass


def load_face_alignmentor(checkpoint, *args, **kwargs):
    pass
| 14.555556 | 55 | 0.709924 | 16 | 131 | 5.5625 | 0.5625 | 0.157303 | 0.247191 | 0.539326 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167939 | 131 | 8 | 56 | 16.375 | 0.816514 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
c3d665569858a64b4a4b5c00838a5c871acc2751 | 26,046 | py | Python | anygraph/unittests/test_linkers.py | gemerden/anygraph | c20cab82ad4a7f4117690a445e136c2b0e84f0f3 | [
"MIT"
] | 10 | 2020-06-11T14:11:58.000Z | 2021-12-31T11:59:26.000Z | anygraph/unittests/test_linkers.py | gemerden/anygraph | c20cab82ad4a7f4117690a445e136c2b0e84f0f3 | [
"MIT"
] | null | null | null | anygraph/unittests/test_linkers.py | gemerden/anygraph | c20cab82ad4a7f4117690a445e136c2b0e84f0f3 | [
"MIT"
] | null | null | null | import unittest
from anygraph import One, Many
class TestLinkers(unittest.TestCase):

    def test_one(self):
        class TestOne(object):
            next = One()

            def __init__(self, name):
                self.name = name

        bob = TestOne('bob')
        ann = TestOne('ann')

        bob.next = ann
        assert bob.next is ann
        assert list(TestOne.next.iterate(bob)) == [bob, ann]

        del bob.next
        assert bob.next is None

        bob.next = ann
        ann.next = bob
        assert bob.next is ann
        assert ann.next is bob

        bob.next = bob
        assert bob.next is bob

    def test_triangle(self):
        class TestOne(object):
            next = One()

            def __init__(self, name):
                self.name = name

        bob = TestOne('bob')
        ann = TestOne('ann')
        kik = TestOne('kik')

        bob.next = ann
        ann.next = kik
        kik.next = bob
        assert bob.next is ann
        assert ann.next is kik
        assert kik.next is bob
        assert list(TestOne.next.iterate(bob)) == [bob, ann, kik]

        del ann.next
        assert ann.next is None
        assert bob.next is ann
        assert kik.next is bob

    def test_many(self):
        class TestMany(object):
            nexts = Many()

            def __init__(self, name):
                self.name = name

        bob = TestMany('bob')
        ann = TestMany('ann')
        pete = TestMany('pete')
        howy = TestMany('howy')

        bob.nexts.include(ann, pete)
        pete.nexts.include(howy)
        assert ann in bob.nexts
        assert howy in pete.nexts
        assert howy not in bob.nexts
        assert list(TestMany.nexts.iterate(bob)) == [bob, ann, pete, howy]

    def test_cyclic(self):
        class TestMany(object):
            nexts = Many(cyclic=False)

            def __init__(self, name):
                self.name = name

        bob = TestMany('bob')
        ann = TestMany('ann')
        pete = TestMany('pete')
        howy = TestMany('howy')

        bob.nexts.include(ann, pete)
        pete.nexts.include(howy)
        with self.assertRaises(ValueError):
            howy.nexts.include(bob)

    def test_to_self(self):
        class TestMany(object):
            nexts = Many(to_self=False)

            def __init__(self, name):
                self.name = name

        bob = TestMany('bob')
        with self.assertRaises(ValueError):
            bob.nexts.include(bob)
        with self.assertRaises(ValueError):
            bob.nexts = [bob]

    def test_on_link(self):
        on_link_results = []
        on_unlink_results = []

        def on_link(one, two):
            on_link_results.append((one, two))

        def on_unlink(one, two):
            on_unlink_results.append((one, two))

        class TestMany(object):
            nexts = Many(on_link=on_link, on_unlink=on_unlink)

            def __init__(self, name):
                self.name = name

        bob = TestMany('bob')
        ann = TestMany('ann')
        pete = TestMany('pete')
        howy = TestMany('howy')

        bob.nexts.include(ann, pete)
        pete.nexts.include(howy)
        assert on_link_results == [(bob, ann), (bob, pete), (pete, howy)]

        del bob.nexts
        assert on_unlink_results == [(bob, ann), (bob, pete)]

    def test_visitor(self):
        class TestMany(object):
            nexts = Many()

            def __init__(self, name):
                self.name = name

        bob = TestMany('bob')
        ann = TestMany('ann')
        pete = TestMany('pete')
        howy = TestMany('howy')

        bob.nexts.include(ann, pete)
        ann.nexts.include(howy)
        pete.nexts.include(howy)

        people = []

        def on_visit(obj):
            people.append(obj)

        TestMany.nexts.visit(bob, on_visit=on_visit, breadth_first=False)
        assert people == [bob, ann, howy, pete]

        del people[:]
        TestMany.nexts.visit(bob, on_visit=on_visit, breadth_first=True)
        assert people == [bob, ann, pete, howy]

    def test_builder_many_by_name(self):
        class TestBuilder(object):
            nexts = Many()

            def __init__(self, name, iterable=()):
                self.name = name
                self.items = list(iterable)

            def extend(self, *items):
                self.items.extend(items)

            def __iter__(self):
                for item in self.items:
                    yield item

        bob = TestBuilder('bob')
        ann = TestBuilder('ann')
        pete = TestBuilder('pete')
        howy = TestBuilder('howy')

        bob.extend(ann, pete)
        pete.extend(howy, bob)
        ann.extend(pete)
        howy.extend(ann)

        assert bob.nexts == set()
        TestBuilder.nexts.build(bob)
        assert bob.nexts == {ann, pete}
        assert howy.nexts == {ann}

    def test_builder_many_by_func_cyclic(self):
        class TestBuilder(object):
            nexts = Many()

            def __init__(self, name, iterable=()):
                self.name = name
                self.items = list(iterable)

            def extend(self, *items):
                self.items.extend(items)

        bob = TestBuilder('bob')
        ann = TestBuilder('ann')
        pete = TestBuilder('pete')
        howy = TestBuilder('howy')

        bob.extend(ann, pete)
        pete.extend(howy, bob)
        ann.extend(pete)
        howy.extend(ann)

        TestBuilder.nexts.build(bob, key=lambda obj: obj.items)
        assert bob.nexts == {ann, pete}
        assert pete.nexts == {howy, bob}
        assert howy.nexts == {ann}
        assert TestBuilder.nexts.in_cycle(bob)
        assert TestBuilder.nexts.is_cyclic(bob)

    def test_builder_one(self):
        class TestBuilder(object):
            next = One()

            def __init__(self, name, other=None):
                self.name = name
                self.other = other

            def __str__(self):
                return self.name

        bob = TestBuilder('bob')
        ann = TestBuilder('ann')
        pip = TestBuilder('pip')

        bob.other = ann
        ann.other = pip

        assert bob.next is None
        TestBuilder.next.build(bob, 'other')
        assert bob.next == ann
        assert ann.next == pip
class TestDoubleLinkers(unittest.TestCase):

    def test_one_one(self):
        class TestOneOne(object):
            left = One('right')
            right = One('left')

            def __init__(self, name):
                self.name = name

        bob = TestOneOne('bob')
        ann = TestOneOne('ann')

        bob.left = ann
        assert bob.left is ann
        assert ann.right is bob
        assert list(TestOneOne.left.iterate(bob)) == [bob, ann]

        del bob.left
        assert bob.left is None
        assert ann.right is None

        bob.left = ann
        ann.right = None
        assert bob.left is None
        assert ann.right is None

        bob.left = bob
        assert bob.left is bob
        assert bob.right is bob

        del bob.left
        assert bob.left is None
        assert bob.right is None

    def test_triangle(self):
        class TestOne(object):
            next = One('prev')
            prev = One('next')

            def __init__(self, name):
                self.name = name

        bob = TestOne('bob')
        ann = TestOne('ann')
        kik = TestOne('kik')

        bob.next = ann
        ann.next = kik
        kik.next = bob
        assert bob.next is ann
        assert ann.next is kik
        assert kik.next is bob
        assert bob.prev is kik
        assert kik.prev is ann
        assert ann.prev is bob
        assert list(TestOne.next.iterate(bob)) == [bob, ann, kik]

        del ann.prev
        assert ann.prev is None
        assert bob.next is None
        assert ann.next is kik
        assert kik.next is bob

    def test_many_many(self):
        class TestMany(object):
            nexts = Many('prevs')
            prevs = Many('nexts')

            def __init__(self, name):
                self.name = name

        bob = TestMany('bob')
        ann = TestMany('ann')
        pete = TestMany('pete')
        howy = TestMany('howy')

        bob.nexts.include(ann, pete)
        pete.nexts.include(howy)
        assert ann in bob.nexts
        assert bob in ann.prevs
        assert howy in pete.nexts
        assert howy not in bob.nexts
        assert list(TestMany.nexts.iterate(bob)) == [bob, ann, pete, howy]

    def test_one_many(self):
        class TestOneMany(object):
            parent = One('children')
            children = Many('parent')

            def __init__(self, name):
                self.name = name

        bob = TestOneMany('bob')
        ann = TestOneMany('ann')
        pete = TestOneMany('pete')
        howy = TestOneMany('howy')

        bob.children = [ann, pete]
        howy.parent = pete
        assert ann in bob.children
        assert bob is ann.parent
        assert howy in pete.children
        assert howy not in bob.children
        assert howy.parent.parent is bob
        assert list(TestOneMany.children.iterate(bob)) == [bob, ann, pete, howy]
        assert list(TestOneMany.parent.iterate(howy)) == [howy, pete, bob]

        del bob.children
        assert ann not in bob.children
        assert bob is not ann.parent
        assert howy.parent.parent is None

    def test_pairs(self):
        class Test(object):
            other = One('other')

            def __init__(self, name):
                self.name = name

            def __str__(self):
                return self.name

        bob = Test('bob')
        ann = Test('ann')
        kik = Test('kik')

        bob.other = ann
        assert bob.other is ann
        assert ann.other is bob
        assert list(Test.other.iterate(bob)) == [bob, ann]

        kik.other = ann
        assert bob.other is None
        assert kik.other is ann
        assert ann.other is kik

    def test_non_directed(self):
        class Test(object):
            others = Many('others')

            def __init__(self, name):
                self.name = name

            def __str__(self):
                return self.name

        bob = Test('bob')
        ann = Test('ann')
        kik = Test('kik')

        bob.others.include(ann)
        bob.others.include(kik)
        assert ann in bob.others
        assert kik in bob.others
        assert bob in ann.others
        assert bob in kik.others

        kik.others.include(ann)
        assert kik in ann.others
        assert ann in kik.others
        assert ann in bob.others
        assert kik in bob.others
        assert bob in ann.others
        assert bob in kik.others
        assert list(Test.others.iterate(bob)) == [bob, ann, kik]

        kik.others.exclude(ann)
        assert kik not in ann.others
        assert ann not in kik.others
        assert ann in bob.others
        assert kik in bob.others
        assert bob in ann.others
        assert bob in kik.others

    def test_cyclic(self):
        class TestMany(object):
            nexts = Many('prevs', cyclic=False)
            prevs = Many('nexts')

            def __init__(self, name):
                self.name = name

        bob = TestMany('bob')
        ann = TestMany('ann')
        pete = TestMany('pete')
        howy = TestMany('howy')

        bob.nexts.include(ann, pete)
        pete.nexts.include(howy)
        with self.assertRaises(ValueError):
            howy.nexts.include(bob)
        with self.assertRaises(ValueError):
            bob.prevs.include(howy)

    def test_on_link(self):
        on_link_results = []
        on_unlink_results = []

        def on_link(one, two):
            on_link_results.append((one, two))

        def on_unlink(one, two):
            on_unlink_results.append((one, two))

        class TestMany(object):
            nexts = Many('prevs', on_link=on_link, on_unlink=on_unlink)
            prevs = Many('nexts')

            def __init__(self, name):
                self.name = name

        bob = TestMany('bob')
        ann = TestMany('ann')
        pete = TestMany('pete')
        howy = TestMany('howy')

        bob.nexts.include(ann, pete)
        pete.nexts.include(howy)
        assert on_link_results == [(bob, ann), (bob, pete), (pete, howy)]

        del bob.nexts
        assert on_unlink_results == [(bob, ann), (bob, pete)]

    def test_custom_id(self):
        class TestMany(object):
            nexts = Many('prevs', get_id=lambda obj: hash(obj))
            prevs = Many('nexts')

            def __init__(self, name):
                self.name = name

        bob = TestMany('bob')
        ann = TestMany('ann')

        bob.nexts.include(ann)
        assert ann in bob.nexts
        assert bob in ann.prevs

    def test_visitor(self):
        class TestMany(object):
            nexts = Many('prevs')
            prevs = Many('nexts')

            def __init__(self, name):
                self.name = name

        bob = TestMany('bob')
        ann = TestMany('ann')
        pete = TestMany('pete')
        howy = TestMany('howy')

        bob.nexts.include(ann, pete)
        ann.nexts.include(howy)
        pete.nexts.include(howy)

        people = []

        def on_visit(obj):
            people.append(obj)

        TestMany.nexts.visit(bob, on_visit=on_visit, breadth_first=False)
        assert people == [bob, ann, howy, pete]

        del people[:]
        TestMany.nexts.visit(bob, on_visit=on_visit, breadth_first=True)
        assert people == [bob, ann, pete, howy]

    def test_builder_many_by_name(self):
        class TestBuilder(object):
            nexts = Many('prevs')
            prevs = Many('nexts')

            def __init__(self, name, iterable=()):
                self.name = name
                self.items = list(iterable)

            def extend(self, *items):
                self.items.extend(items)

            def __iter__(self):
                for item in self.items:
                    yield item

            def __str__(self):
                return self.name

        bob = TestBuilder('bob')
        ann = TestBuilder('ann')
        pete = TestBuilder('pete')
        howy = TestBuilder('howy')

        bob.extend(ann, pete)
        pete.extend(howy, bob)
        ann.extend(pete)
        howy.extend(ann)

        assert bob.nexts == set()
        assert ann.prevs == set()
        TestBuilder.nexts.build(bob)
        assert bob.nexts == {ann, pete}
        assert ann.prevs == {bob, howy}
        assert howy.nexts == {ann}
        assert howy.prevs == {pete}

    def test_builder_many_by_func(self):
        class TestBuilder(object):
            nexts = Many('prevs')
            prevs = Many('nexts')

            def __init__(self, name, iterable=()):
                self.name = name
                self.items = list(iterable)

            def extend(self, *items):
                self.items.extend(items)

            def __str__(self):
                return self.name

        bob = TestBuilder('bob')
        ann = TestBuilder('ann')
        pete = TestBuilder('pete')
        howy = TestBuilder('howy')

        bob.extend(ann, pete)
        pete.extend(howy, bob)
        ann.extend(pete)
        howy.extend(ann)

        assert bob.nexts == set()
        assert ann.prevs == set()
        TestBuilder.nexts.build(bob, key=lambda obj: obj.items)
        assert bob.nexts == {ann, pete}
        assert ann.prevs == {bob, howy}
        assert howy.nexts == {ann}
        assert howy.prevs == {pete}

    def test_builder_one(self):
        class TestBuilder(object):
            next = One('prev')
            prev = One('next')

            def __init__(self, name, other=None):
                self.name = name
                self.other = other

            def __str__(self):
                return self.name

        bob = TestBuilder('bob')
        ann = TestBuilder('ann')
        pip = TestBuilder('pip')

        bob.other = ann
        ann.other = pip

        assert bob.next is None
        assert ann.prev is None
        TestBuilder.next.build(bob, 'other')
        assert bob.next == ann
        assert ann.prev == bob
        assert ann.next == pip
        assert pip.prev == ann

    def test_gather(self):
        class Test(object):
            nexts = Many('prevs')
            prevs = Many('nexts')

            def __init__(self, name):
                self.name = name

            def __repr__(self):
                return self.name

        bob = Test('bob')
        ann = Test('ann')
        pete = Test('pete')
        howy = Test('howy')

        bob.nexts.include(ann)
        ann.prevs.include(pete)
        pete.nexts.include(howy, bob)
        assert Test.nexts.gather(bob) == [bob, ann, pete, howy]

    def test_gather_pairs_directed(self):
        class Test(object):
            nexts = Many('prevs')
            prevs = Many('nexts')

            def __init__(self, name):
                self.name = name

            def __repr__(self):
                return self.name

        nodes = [Test(str(i)) for i in range(5)]
        for node in nodes:  # fully linked
            node.nexts.include(*nodes)

        assert len(nodes[0].nexts) == 5
        assert len(nodes[0].prevs) == 5

        nexts_pairs = Test.nexts.gather_pairs(nodes[0])
        assert len(nexts_pairs) == 25
        prevs_pairs = Test.prevs.gather_pairs(nodes[0])
        assert len(prevs_pairs) == 25

    def test_gather_pairs_non_directed(self):
        class Test(object):
            friends = Many('friends')

            def __init__(self, name):
                self.name = name

            def __repr__(self):
                return self.name

        nodes = [Test(str(i)) for i in range(5)]
        for node in nodes:  # fully linked
            node.friends.include(*nodes)

        assert len(nodes[0].friends) == 5

        nexts_pairs = Test.friends.gather_pairs(nodes[0])
        assert len(nexts_pairs) == 25

    def test_find_directed(self):
        class Test(object):
            nexts = Many('prevs')
            prevs = Many('nexts')

            def __init__(self, name):
                self.name = name

            def __repr__(self):
                return self.name

        nodes = [Test(i) for i in range(5)]
        for i, node1 in enumerate(nodes):
            for j, node2 in enumerate(nodes):
                if (i + j) % 2 == 1:
                    node1.nexts.include(node2)

        found = Test.nexts.find(nodes[0], filter=lambda n: n.name in (1, 2))
        assert len(found) == 2

    def test_find_non_directed(self):
        class Test(object):
            friends = Many('friends')

            def __init__(self, name):
                self.name = name

            def __repr__(self):
                return self.name

        nodes = [Test(i) for i in range(5)]
        for i, node1 in enumerate(nodes):
            for j, node2 in enumerate(nodes):
                if (i + j) % 2 == 1:
                    node1.friends.include(node2)

        found = Test.friends.find(nodes[0], filter=lambda n: n.name in (1, 2))
        assert len(found) == 2

    def test_reachable_directed(self):
        class Test(object):
            nexts = Many('prevs')
            prevs = Many('nexts')

            def __init__(self, name):
                self.name = name

            def __repr__(self):
                return self.name

        nodes = [Test(i) for i in range(5)]
        for i, node1 in enumerate(nodes):
            for j, node2 in enumerate(nodes):
                if j == i + 1:
                    node1.nexts.include(node2)

        reachable = Test.nexts.reachable(nodes[0], nodes[4])
        assert reachable
        reachable = Test.nexts.reachable(nodes[1], nodes[0])
        assert not reachable
def test_reachable_non_directed(self):
class Test(object):
friends = Many('friends')
def __init__(self, name):
self.name = name
def __repr__(self):
return self.name
nodes = [Test(i) for i in range(5)]
for i, node1 in enumerate(nodes):
for j, node2 in enumerate(nodes):
if j == i + 1:
node1.friends.include(node2)
reachable = Test.friends.reachable(nodes[0], nodes[4])
assert reachable
reachable = Test.friends.reachable(nodes[1], nodes[0])
assert reachable
def test_walk_directed(self):
class Test(object):
nexts = Many('prevs')
prevs = Many('nexts')
def __init__(self, name):
self.name = name
def __repr__(self):
return self.name
nodes = [Test(i) for i in range(5)]
for i, node1 in enumerate(nodes):
for j, node2 in enumerate(nodes):
if j in (i + 1, i + 2):
node1.nexts.include(node2)
visited = list(Test.nexts.walk(nodes[0], key=lambda n: n.nexts[0] if len(n.nexts) else None))
assert visited == nodes
def test_walk_non_directed(self):
class Test(object):
friends = Many('friends')
def __init__(self, name):
self.name = str(name)
def __repr__(self):
return self.name
nodes = [Test(i) for i in range(5)]
for i, node1 in enumerate(nodes):
for j, node2 in enumerate(nodes):
if j in (i + 1, i + 2):
node1.friends.include(node2)
visited = []
for node in Test.friends.walk(nodes[0], key=lambda n: n.friends[0]):
visited.append(node)
if len(visited) == 5:
break
assert len(set(visited)) == 2
def test_endpoints(self):
class Test(object):
nexts = Many(install=True)
def __init__(self, name):
self.name = str(name)
def __repr__(self):
return self.name
nodes = [Test(i) for i in range(7)]
for i, node1 in enumerate(nodes):
for j, node2 in enumerate(nodes):
if j in (i + 1, i + 2):
node1.nexts.include(node2)
assert nodes[0].endpoints() == [nodes[-1]]
def test_install_one(self):
class TestMany(object):
one = One('one', install=True)
def __init__(self, name):
self.name = name
bob = TestMany('bob')
ann = TestMany('ann')
bob.one = ann
assert ann.one is bob
assert bob.one is ann
assert bob.iterate.__name__ == 'iterate'
assert list(bob.iterate()) == [bob, ann]
assert bob.shortest_path.__name__ == 'shortest_path'
assert bob.shortest_path(ann) == [bob, ann]
assert bob.in_cycle()
def test_install_many(self):
class TestMany(object):
nexts = Many(install=True)
def __init__(self, name):
self.name = name
bob = TestMany('bob')
ann = TestMany('ann')
bob.nexts.include(ann)
ann.nexts.include(bob)
assert ann in bob.nexts
assert bob in ann.nexts
assert bob.iterate.__name__ == 'iterate'
assert list(bob.iterate()) == [bob, ann]
assert bob.shortest_path.__name__ == 'shortest_path'
assert bob.shortest_path(ann) == [bob, ann]
assert bob.in_cycle()
def test_install_error(self):
with self.assertRaises(RuntimeError):
class TestMany(object):
nexts = Many(install=True)
prevs = Many(install=True)
def test_install_some(self):
class Test(object):
nexts = Many(install=('iterate',))
t1 = Test()
t2 = Test()
t1.nexts.include(t2)
assert list(t1.iterate()) == [t1, t2]
with self.assertRaises(AttributeError):
t1.in_cycle()
def test_install_non_directed(self):
class Friend(object):
friends = Many('friends', install=True)
def __init__(self, name):
self.name = name
bob = Friend('bob')
ann = Friend('ann')
bob.friends.include(ann)
assert ann in bob.friends
assert bob in ann.friends
assert bob.iterate.__name__ == 'iterate'
assert list(bob.iterate()) == [bob, ann]
assert bob.shortest_path.__name__ == 'shortest_path'
assert ann.shortest_path(bob) == [ann, bob]
assert bob.in_cycle()
def test_directed_image(self):
names = ('ann', 'bob', 'fred', 'betty', 'eric', 'charley', 'claudia', 'lars', 'jan')
class Object(object):
nexts = Many(install=True)
def __init__(self, key):
self.key = key
nodes = [Object(name) for name in names]
for _ in range(20):
choice(nodes).nexts.include(choice(nodes))
try:
nodes[0].save_image('/data/nexts.png', label_getter=lambda obj: obj.key, view=False)
except RuntimeError as error: # graphviz not installed
print(error)
def test_undirected_image(self):
names = ('ann', 'bob', 'fred', 'betty', 'eric', 'charley', 'claudia', 'lars', 'jan', 'imogen') # len() == 10
class Person(object):
friends = Many('friends', install=True)
def __init__(self, name):
self.name = name
nodes = [Person(name) for name in names]
for _ in range((len(names) * len(names)) // 3):
person1 = choice(nodes)
person2 = choice(nodes)
if person1 is not person2:
person1.friends.include(person2)
try:
nodes[0].save_image('/data/friends.png', label_getter=lambda obj: obj.name, view=False)
except RuntimeError as error: # graphviz not installed
print(error)
# --- ansible/library/openstack-ansible-modules-master_zip_old/tests/test_keystone_service.py (repo: ggwena/BadCops, MIT license) ---
import keystone_service
import mock
from nose.tools import assert_equal, assert_list_equal, assert_is_none
from nose import SkipTest
def setup():
keystone = mock.MagicMock()
service = mock.Mock(id="b6a7ff03f2574cd9b5c7c61186e0d781",
type="identity",
description="Keystone Identity Service")
    # Mock reserves the 'name' keyword in its constructor (it names the mock
    # object itself), so the .name attribute must be assigned after creation
service.name = "keystone"
keystone.services.list = mock.Mock(return_value=[service])
endpoint = mock.Mock(id="600759628a214eb7b3acde39b1e85180",
service_id="b6a7ff03f2574cd9b5c7c61186e0d781",
publicurl="http://192.168.206.130:5000/v2.0",
internalurl="http://192.168.206.130:5000/v2.0",
adminurl="http://192.168.206.130:35357/v2.0",
region="RegionOne")
keystone.endpoints.list = mock.Mock(return_value=[endpoint])
return keystone
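# Illustrative sketch (not part of the original tests): the reason setup() assigns
# service.name after construction is that Mock reserves the 'name' keyword to name
# the mock object itself, so name="..." in the initializer never creates a .name
# attribute. Shown with the stdlib unittest.mock, equivalent to the 'mock' package.
def _demo_mock_name_kwarg():
    from unittest import mock as _mock
    named = _mock.Mock(name="keystone")
    assert isinstance(named.name, _mock.Mock)  # .name is a child mock, not "keystone"
    plain = _mock.Mock()
    plain.name = "keystone"  # plain attribute assignment works as expected
    assert plain.name == "keystone"
    return plain.name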
@mock.patch('keystone_service.ensure_endpoint_absent')
@mock.patch('keystone_service.ensure_service_absent')
@mock.patch('keystone_service.ensure_endpoint_present')
@mock.patch('keystone_service.ensure_service_present')
def test_dispatch_service_present(mock_ensure_service_present,
mock_ensure_endpoint_present,
mock_ensure_service_absent,
mock_ensure_endpoint_absent):
""" Dispatch: service present """
# Setup
mock_ensure_service_present.return_value = (True, None)
mock_ensure_endpoint_present.return_value = (True, None)
manager = mock.MagicMock()
manager.attach_mock(mock_ensure_service_present, 'ensure_service_present')
manager.attach_mock(mock_ensure_service_absent, 'ensure_service_absent')
manager.attach_mock(mock_ensure_endpoint_present,
'ensure_endpoint_present')
manager.attach_mock(mock_ensure_endpoint_absent,
'ensure_endpoint_absent')
keystone = setup()
name = "keystone"
service_type = "identity"
description = "Keystone Identity Service"
state = "present"
public_url = "http://192.168.206.130:5000/v2.0"
internal_url = "http://192.168.206.130:5000/v2.0"
admin_url = "http://192.168.206.130:35357/v2.0"
region = "RegionOne"
check_mode = False
# Code under test
keystone_service.dispatch(keystone, name, service_type, description,
public_url, internal_url, admin_url, region, state, check_mode)
expected_calls = [mock.call.ensure_service_present(keystone, name,
service_type,
description,
check_mode),
mock.call.ensure_endpoint_present(keystone, name,
public_url,
internal_url,
admin_url,
region,
check_mode)]
assert_equal(manager.mock_calls, expected_calls)
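# Illustrative sketch of the attach_mock pattern used above: attaching child mocks
# to a parent Mock records every call in the parent's mock_calls list in invocation
# order, which is what lets these tests assert call ordering across functions.
def _demo_attach_mock_ordering():
    from unittest import mock as _mock
    parent = _mock.Mock()
    parent.attach_mock(_mock.Mock(), 'first')
    parent.attach_mock(_mock.Mock(), 'second')
    parent.first(1)
    parent.second('a')
    return parent.mock_calls == [_mock.call.first(1), _mock.call.second('a')]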
@mock.patch('keystone_service.ensure_endpoint_absent')
@mock.patch('keystone_service.ensure_service_absent')
@mock.patch('keystone_service.ensure_endpoint_present')
@mock.patch('keystone_service.ensure_service_present')
def test_dispatch_service_absent(mock_ensure_service_present,
mock_ensure_endpoint_present,
mock_ensure_service_absent,
mock_ensure_endpoint_absent):
""" Dispatch: service absent """
# Setup
mock_ensure_service_absent.return_value = True
mock_ensure_endpoint_absent.return_value = True
manager = mock.MagicMock()
manager.attach_mock(mock_ensure_service_present, 'ensure_service_present')
manager.attach_mock(mock_ensure_service_absent, 'ensure_service_absent')
manager.attach_mock(mock_ensure_endpoint_present,
'ensure_endpoint_present')
manager.attach_mock(mock_ensure_endpoint_absent,
'ensure_endpoint_absent')
keystone = setup()
name = "keystone"
service_type = "identity"
description = "Keystone Identity Service"
region = "RegionOne"
state = "absent"
public_url = "http://192.168.206.130:5000/v2.0"
internal_url = "http://192.168.206.130:5000/v2.0"
admin_url = "http://192.168.206.130:35357/v2.0"
check_mode = False
# Code under test
keystone_service.dispatch(keystone, name, service_type, description,
public_url, internal_url, admin_url, region, state, check_mode)
expected_calls = [
mock.call.ensure_endpoint_absent(keystone, name, check_mode),
mock.call.ensure_service_absent(keystone, name, check_mode)
]
assert_list_equal(manager.mock_calls, expected_calls)
def test_ensure_service_present_when_present():
""" ensure_services_present when the service is present"""
# Setup
keystone = setup()
name = "keystone"
service_type = "identity"
description = "Keystone Identity Service"
check_mode = False
# Code under test
(changed, id) = keystone_service.ensure_service_present(keystone, name,
service_type, description, check_mode)
# Assertions
assert not changed
assert_equal(id, "b6a7ff03f2574cd9b5c7c61186e0d781")
def test_ensure_service_present_when_present_check():
""" ensure_services_present when the service is present, check mode"""
# Setup
keystone = setup()
name = "keystone"
service_type = "identity"
description = "Keystone Identity Service"
check_mode = True
# Code under test
(changed, id) = keystone_service.ensure_service_present(keystone, name,
service_type, description, check_mode)
# Assertions
assert not changed
assert_equal(id, "b6a7ff03f2574cd9b5c7c61186e0d781")
def test_ensure_service_present_when_absent():
""" ensure_services_present when the service is absent"""
# Setup
keystone = setup()
service = mock.Mock(id="a7ebed35051147d4abbe2ee049eeb346")
keystone.services.create = mock.Mock(return_value=service)
name = "nova"
service_type = "compute"
description = "Compute Service"
check_mode = False
# Code under test
(changed, id) = keystone_service.ensure_service_present(keystone, name,
service_type, description, check_mode)
# Assertions
assert changed
assert_equal(id, "a7ebed35051147d4abbe2ee049eeb346")
keystone.services.create.assert_called_with(name=name,
service_type=service_type,
description=description)
def test_ensure_service_present_when_absent_check():
""" ensure_services_present when the service is absent, check mode"""
# Setup
keystone = setup()
service = mock.Mock(id="a7ebed35051147d4abbe2ee049eeb346")
keystone.services.create = mock.Mock(return_value=service)
name = "nova"
service_type = "compute"
description = "Compute Service"
check_mode = True
# Code under test
(changed, id) = keystone_service.ensure_service_present(keystone, name,
service_type, description, check_mode)
# Assertions
assert changed
assert_equal(id, None)
assert not keystone.services.create.called
def test_get_endpoint_present():
""" get_endpoint when endpoint is present """
keystone = setup()
endpoint = keystone_service.get_endpoint(keystone, "keystone")
assert_equal(endpoint.id, "600759628a214eb7b3acde39b1e85180")
def test_ensure_endpoint_present_when_present():
""" ensure_endpoint_present when the endpoint is present """
# Setup
keystone = setup()
name = "keystone"
public_url = "http://192.168.206.130:5000/v2.0"
internal_url = "http://192.168.206.130:5000/v2.0"
admin_url = "http://192.168.206.130:35357/v2.0"
region = "RegionOne"
check_mode = False
# Code under test
(changed, id) = keystone_service.ensure_endpoint_present(keystone, name,
public_url, internal_url, admin_url, region,
check_mode)
# Assertions
assert not changed
assert_equal(id, "600759628a214eb7b3acde39b1e85180")
def test_ensure_endpoint_present_when_present_check():
""" ensure_endpoint_present when the endpoint is present, check mode"""
# Setup
keystone = setup()
name = "keystone"
public_url = "http://192.168.206.130:5000/v2.0"
internal_url = "http://192.168.206.130:5000/v2.0"
admin_url = "http://192.168.206.130:35357/v2.0"
region = "RegionOne"
check_mode = True
# Code under test
(changed, id) = keystone_service.ensure_endpoint_present(keystone, name,
public_url, internal_url, admin_url, region, check_mode)
# Assertions
assert not changed
assert_equal(id, "600759628a214eb7b3acde39b1e85180")
def test_ensure_endpoint_present_when_absent():
""" ensure_endpoint_present when the endpoint is absent """
# Setup
keystone = setup()
# Mock out the endpoints create
endpoint = mock.Mock(id="622386d836b14fd986d9cec7504d208a",
publicurl="http://192.168.206.130:8774/v2/%(tenant_id)s",
internalurl="http://192.168.206.130:8774/v2/%(tenant_id)s",
adminurl="http://192.168.206.130:8774/v2/%(tenant_id)s",
region="RegionOne")
keystone.endpoints.create = mock.Mock(return_value=endpoint)
# We need to add a service, but not an endpoint
service = mock.Mock(id="0ad62de6cfe044c7a77ad3a7f2851b5d",
type="compute",
description="Compute Service")
service.name = "nova"
keystone.services.list.return_value.append(service)
name = "nova"
public_url = "http://192.168.206.130:8774/v2/%(tenant_id)s"
internal_url = "http://192.168.206.130:8774/v2/%(tenant_id)s"
admin_url = "http://192.168.206.130:8774/v2/%(tenant_id)s"
region = "RegionOne"
check_mode = False
# Code under test
(changed, id) = keystone_service.ensure_endpoint_present(keystone, name,
public_url, internal_url, admin_url, region,
check_mode)
# Assertions
assert changed
assert_equal(id, "622386d836b14fd986d9cec7504d208a")
keystone.endpoints.create.assert_called_with(
service_id="0ad62de6cfe044c7a77ad3a7f2851b5d",
publicurl="http://192.168.206.130:8774/v2/%(tenant_id)s",
internalurl="http://192.168.206.130:8774/v2/%(tenant_id)s",
adminurl="http://192.168.206.130:8774/v2/%(tenant_id)s",
region="RegionOne")
def test_ensure_endpoint_present_when_absent_check():
""" ensure_endpoint_present when the endpoint is absent, check mode"""
# Setup
keystone = setup()
# We need to add a service, but not an endpoint
service = mock.Mock(id="0ad62de6cfe044c7a77ad3a7f2851b5d",
type="compute",
description="Compute Service")
service.name = "nova"
keystone.services.list.return_value.append(service)
name = "nova"
public_url = "http://192.168.206.130:8774/v2/%(tenant_id)s"
internal_url = "http://192.168.206.130:8774/v2/%(tenant_id)s"
admin_url = "http://192.168.206.130:8774/v2/%(tenant_id)s"
region = "RegionOne"
check_mode = True
# Code under test
(changed, id) = keystone_service.ensure_endpoint_present(keystone, name,
public_url, internal_url, admin_url, region, check_mode)
# Assertions
assert changed
assert_is_none(id)
assert not keystone.endpoints.create.called
# --- dataset/bdd100k/__init__.py (repo: masszhou/lane_detector, MIT license) ---
from .bdd100k_loader import DatasetBDD100K
# --- src/an_MakeFigs.py (repo: mbonnema/SWAV, CNRI-Python license) ---
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Nov 12 09:16:31 2021
@author: mbonnema
"""
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.lines as mlines
import matplotlib.colors as colors
import matplotlib.ticker as ticker
from matplotlib import cm
import datetime
import pandas as pd
import geopandas as geo
import numpy as np
import contextily as ctx
from InterpS1 import InterpS1
import scipy
from shapely.geometry import shape
from SumArea import SumArea
from SumArea import SumAreaSq
from Smooth import Smooth
import ee
ee.Initialize()
def add_basemap(ax, zoom, url='http://tile.stamen.com/terrain/tileZ/tileX/tileY.png'):
xmin, xmax, ymin, ymax = ax.axis()
    # pass the tile URL through; older contextily versions used url= instead of source=
    basemap, extent = ctx.bounds2img(xmin, ymin, xmax, ymax, zoom=zoom, source=url)
ax.imshow(basemap, extent=extent, interpolation='bilinear')
# restore original x/y limits
ax.axis((xmin, xmax, ymin, ymax))
#--Figure 1a-------------------------------------------------------------------
def Fig1a(lat,lon,Ltype,A_d):
area = A_d
fig = plt.figure(figsize=(8,5), dpi=300)
ax = fig.add_subplot()
ax.set_aspect('equal')
world = geo.read_file(geo.datasets.get_path('naturalearth_lowres'))
worldPlot=world.plot(ax=ax, color='white', edgecolor='black', linewidth=0.5)
Lakesdf = pd.DataFrame(
{'LakeType': Ltype,
'Latitude': lat,
'Longitude': lon})
Lakesgdf = geo.GeoDataFrame(
Lakesdf, geometry=geo.points_from_xy(Lakesdf.Longitude, Lakesdf.Latitude))
colorMapmax = []
for t in Ltype:
if t == 2:
colorMapmax.append('#ff0000')
elif t == 1:
colorMapmax.append('#0000ff')
markerSize = []
for a in area:
if a < 10:
markerSize.append(0.1)
elif a < 100 and a >= 10:
markerSize.append(1)
elif a < 1000 and a >= 100:
markerSize.append(5)
elif a < 10000 and a >=1000:
markerSize.append(10)
elif a >= 10000:
markerSize.append(20)
LakePlot = Lakesgdf.plot(ax=ax, c=colorMapmax, markersize=markerSize, linewidths=0, marker='o')
BlueLake = mlines.Line2D([], [], color='blue', marker='o',markersize=3, label='Lakes', linewidth = 0)
RedRes = mlines.Line2D([], [], color='Red', marker='o',markersize=3, label='Reservoirs', linewidth = 0)
lakesize0 = mlines.Line2D([], [], color='grey', marker='o',markersize=np.sqrt(0.1), label='1 $\mathregular{km^{2}}$', linewidth = 0)
lakesize1 = mlines.Line2D([], [], color='grey', marker='o',markersize=np.sqrt(1), label='10 $\mathregular{km^{2}}$', linewidth = 0)
lakesize2 = mlines.Line2D([], [], color='grey', marker='o',markersize=np.sqrt(5), label='100 $\mathregular{km^{2}}$', linewidth = 0)
lakesize3 = mlines.Line2D([], [], color='grey', marker='o',markersize=np.sqrt(10), label='1000 $\mathregular{km^{2}}$', linewidth = 0)
lakesize4 = mlines.Line2D([], [], color='grey', marker='o',markersize=np.sqrt(20), label='>10000 $\mathregular{km^{2}}$', linewidth = 0)
plt.legend(handles=[BlueLake,RedRes,lakesize0,lakesize1,lakesize2,lakesize3,lakesize4], loc="lower left", fontsize=8)
plt.gca().axes.get_xaxis().set_visible(False)
plt.gca().axes.get_yaxis().set_visible(False)
plt.ylim([-60,90])
plt.xlim([-180,180])
plt.show()
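# Illustrative sketch (not in the original script): the marker-size binning Fig1a
# applies to lake areas, factored into a pure function so the thresholds are explicit.
def _marker_size_for_area(a):
    if a < 10:
        return 0.1
    elif a < 100:
        return 1
    elif a < 1000:
        return 5
    elif a < 10000:
        return 10
    return 20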
#--Figure 1b-------------------------------------------------------------------
def Fig1b(lat, lon, Amin, Amax, A_d, Ltype):
area = A_d
markerScale = 1
fig = plt.figure(figsize=(8,5), dpi=300)
ax = fig.add_subplot()
ax.set_aspect('equal')
world = geo.read_file(geo.datasets.get_path('naturalearth_lowres'))
worldPlot=world.plot(ax=ax, color='white', edgecolor='black', linewidth=0.5)
states = geo.read_file('MapData/usa-states-census-2014.shp')
statePlot=states.plot(ax=ax, color='white', edgecolor='black', linewidth=0.5)
Lakesdf = pd.DataFrame(
{'LakeType': Ltype,
'Latitude': lat,
'Longitude': lon})
Lakesgdf = geo.GeoDataFrame(
Lakesdf, geometry=geo.points_from_xy(Lakesdf.Longitude, Lakesdf.Latitude))
colorMapmin = []
for t in Ltype:
if t == 2:
colorMapmin.append('#ff8888')
elif t == 1:
colorMapmin.append('#8888ff')
markerSizemin = []
for a in Amin:
markerSizemin.append(a*markerScale)
#markerSizemin.append((a/markerScale)**3)
colorMapmax = []
for t in Ltype:
if t == 2:
colorMapmax.append('#ff0000')
elif t == 1:
colorMapmax.append('#0000ff')
markerSizemax = []
for a in Amax:
markerSizemax.append(a*markerScale)
#markerSizemax.append((a/markerScale)**3)
LakePlotmax = Lakesgdf.plot(ax=ax, c=colorMapmax, markersize=markerSizemax, marker='.', linewidth=1.5)
LakePlotmin = Lakesgdf.plot(ax=ax, c=colorMapmin, markersize=markerSizemin, marker='.')
BlueLake = mlines.Line2D([], [], color='blue', marker='o',markersize=3, label='Lakes', linewidth = 0)
RedRes = mlines.Line2D([], [], color='Red', marker='o',markersize=3, label='Reservoirs', linewidth = 0)
lakesize0 = mlines.Line2D([], [], color='grey', marker='o',markersize=np.sqrt(1), label='1 $\mathregular{km^{2}}$', linewidth = 0)
lakesize1 = mlines.Line2D([], [], color='grey', marker='o',markersize=np.sqrt(5), label='5 $\mathregular{km^{2}}$', linewidth = 0)
lakesize2 = mlines.Line2D([], [], color='grey', marker='o',markersize=np.sqrt(10), label='10 $\mathregular{km^{2}}$', linewidth = 0)
lakesize3 = mlines.Line2D([], [], color='grey', marker='o',markersize=np.sqrt(20), label='20 $\mathregular{km^{2}}$', linewidth = 0)
lakesize_blank = mlines.Line2D([], [], color='white', marker='o',markersize=0, label='', linewidth = 0)
lakesize4 = mlines.Line2D([], [], color='grey', marker='o',markersize=np.sqrt(100), label='100 $\mathregular{km^{2}}$', linewidth = 0)
plt.legend(handles=[BlueLake,RedRes,lakesize0,lakesize1,lakesize2,lakesize3,lakesize_blank,lakesize4, lakesize_blank], loc="lower left", fontsize=4)
plt.gca().axes.get_xaxis().set_visible(False)
plt.gca().axes.get_yaxis().set_visible(False)
plt.ylim([32,50])
plt.xlim([-125,-115])
plt.show()
#--Figure 3a-------------------------------------------------------------------
def Fig3a(D_int,A_int,WE_int,LE_int,Type, D_int_jrc, A_int_jrc):
Asum,Dates = SumArea(A_int,D_int)
Asum_jrc_raw, Dates_jrc = SumArea(A_int_jrc,D_int_jrc)
Asum_jrc = []
for a in Asum_jrc_raw:
a = float(a)
Asum_jrc.append(a)
Asum_jrc = np.array(Asum_jrc)
#WEsum,Dates = SumArea(WE_int,D_int)
#LEsum,Dates = SumArea(LE_int,D_int)
#AsumUp = np.array(Asum) + np.array(WEsum)
#AsumDown = np.array(Asum) - np.array(LEsum)
formatDates = []
for d in Dates:
formatDates.append(datetime.datetime.fromtimestamp(d/1000))
lake_ids = []
res_ids = []
for key in Type:
if Type[key] == 1:
lake_ids.append(key)
else:
res_ids.append(key)
D_int_lake = {k: D_int[k] for k in D_int.keys() & lake_ids }
D_int_res = {k: D_int[k] for k in D_int.keys() & res_ids }
A_int_lake = {k: A_int[k] for k in A_int.keys() & lake_ids }
A_int_res = {k: A_int[k] for k in A_int.keys() & res_ids }
#WE_int_lake = {k: WE_int[k] for k in WE_int.keys() & lake_ids }
#WE_int_res = {k: WE_int[k] for k in WE_int.keys() & res_ids }
#LE_int_lake = {k: LE_int[k] for k in LE_int.keys() & lake_ids }
#LE_int_res = {k: LE_int[k] for k in LE_int.keys() & res_ids }
Asum_lake, Dates_lake = SumArea(A_int_lake,D_int_lake)
#WEsum_lake, Dates_lake = SumArea(WE_int_lake,D_int_lake)
#LEsum_lake, Dates_lake = SumArea(LE_int_lake,D_int_lake)
Asum_res, Dates_res = SumArea(A_int_res,D_int_res)
#WEsum_res, Dates_res = SumArea(WE_int_res,D_int_res)
#LEsum_res, Dates_res = SumArea(LE_int_res,D_int_res)
print(np.mean(Asum_res))
print(np.mean(Asum_lake))
'''
AsumUp_lake = np.array(Asum_lake) + np.array(WEsum_lake)
AsumDown_lake = np.array(Asum_lake) - np.array(LEsum_lake)
AsumUp_res = np.array(Asum_res) + np.array(WEsum_res)
AsumDown_res = np.array(Asum_res) - np.array(LEsum_res)
'''
fig = plt.figure(figsize=(4,2), dpi=300)
ax = fig.add_subplot()
'''
ax.plot(formatDates,Asum,c='purple',label='All SWB',linewidth=1)
#plt.fill_between(formatDates,AsumUp,AsumDown,color='purple',alpha=0.1)
ax.plot(formatDates,Asum_lake,c='blue',label='Natural Lakes',linewidth=1)
#plt.fill_between(formatDates,AsumUp_lake,AsumDown_lake,color='blue',alpha=0.1)
ax.plot(formatDates,Asum_res,c='red',label='Artificial Reservoirs',linewidth=1)
#plt.fill_between(formatDates,AsumUp_res,AsumDown_res,color='red',alpha=0.1)
ax.plot(formatDates,Asum_jrc,c='black',label='Lakes > 10,000 km2',linewidth=1)
'''
A1 = Asum_jrc
A2 = Asum_jrc + Asum_lake
A3 = A2 + Asum_res
plt.fill_between(formatDates,A1,color='black',alpha=0.3)
plt.fill_between(formatDates,A2, A1, color='blue',alpha=0.3)
plt.fill_between(formatDates,A3, A2, color='red',alpha=0.3)
plt.gca().set_ylabel('Total Water Surface Area $\mathregular{km^{2}}$', fontsize = 12)
plt.title('World Lake and Reservoir Surface Area')
plt.gca().set_ylim(0,2500000)
years = mdates.YearLocator()
months = mdates.MonthLocator()
years_fmt = mdates.DateFormatter('%Y')
plt.gca().xaxis.set_major_formatter(years_fmt)
plt.gca().xaxis.set_major_locator(years)
plt.gca().xaxis.set_minor_locator(months)
plt.gca().set_xlim(datetime.date(2017,1,1),datetime.date(2020,1,1))
plt.gca().format_xdata = mdates.DateFormatter('%Y-%m-%d')
#plt.gca().format_ydata = lambda x: '$%0.9f' % x # format the price.
#plt.gca().y_labels = ax.get_yticks()
#plt.gca().yaxis.set_major_formatter(ticker.FormatStrFormatter('%0.1e'))
plt.ticklabel_format(axis="y", style="sci", scilimits=(0,0), useMathText=True)
#plt.gca().grid(True)
#plt.gca().legend(frameon=False, loc='upper right', ncol=1)
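# Illustrative sketch: Fig3a builds cumulative layers (A1, A2, A3) before calling
# fill_between, so each shaded band's thickness corresponds to one component
# (JRC large lakes, then natural lakes, then reservoirs). The layering reduces to:
def _stack_layers(jrc, lake, res):
    a1 = list(jrc)
    a2 = [x + y for x, y in zip(a1, lake)]
    a3 = [x + y for x, y in zip(a2, res)]
    return a1, a2, a3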
#--Figure 3b-------------------------------------------------------------------
def Fig3b(D_int,A_int,WE_int,LE_int,Type):
Asum,Dates = SumArea(A_int,D_int)
Avg = np.mean(Asum)
Asum = np.array(Asum) - Avg
WEsum,Dates = SumAreaSq(WE_int,D_int)
LEsum,Dates = SumAreaSq(LE_int,D_int)
AsumUp = Smooth(np.array(Asum) + np.array(WEsum))
AsumDown = Smooth(np.array(Asum) - np.array(LEsum))
Asum = Smooth(Asum)
formatDates = []
for d in Dates:
formatDates.append(datetime.datetime.fromtimestamp(d/1000))
lake_ids = []
res_ids = []
for key in Type:
if Type[key] == 1:
lake_ids.append(key)
else:
res_ids.append(key)
D_int_lake = {k: D_int[k] for k in D_int.keys() & lake_ids }
D_int_res = {k: D_int[k] for k in D_int.keys() & res_ids }
A_int_lake = {k: A_int[k] for k in A_int.keys() & lake_ids }
A_int_res = {k: A_int[k] for k in A_int.keys() & res_ids }
WE_int_lake = {k: WE_int[k] for k in WE_int.keys() & lake_ids }
WE_int_res = {k: WE_int[k] for k in WE_int.keys() & res_ids }
LE_int_lake = {k: LE_int[k] for k in LE_int.keys() & lake_ids }
LE_int_res = {k: LE_int[k] for k in LE_int.keys() & res_ids }
Asum_lake, Dates_lake = SumArea(A_int_lake,D_int_lake)
WEsum_lake, Dates_lake = SumAreaSq(WE_int_lake,D_int_lake)
LEsum_lake, Dates_lake = SumAreaSq(LE_int_lake,D_int_lake)
Asum_res, Dates_res = SumArea(A_int_res,D_int_res)
WEsum_res, Dates_res = SumAreaSq(WE_int_res,D_int_res)
LEsum_res, Dates_res = SumAreaSq(LE_int_res,D_int_res)
Avg_lake = np.mean(Asum_lake)
Asum_lake = np.array(Asum_lake) - Avg_lake
Avg_res = np.mean(Asum_res)
Asum_res = np.array(Asum_res) - Avg_res
AsumUp_lake = Smooth(np.array(Asum_lake) + np.array(WEsum_lake))
AsumDown_lake = Smooth(np.array(Asum_lake) - np.array(LEsum_lake))
AsumUp_res = Smooth(np.array(Asum_res) + np.array(WEsum_res))
AsumDown_res = Smooth(np.array(Asum_res) - np.array(LEsum_res))
Asum_lake = Smooth(Asum_lake)
Asum_res = Smooth(Asum_res)
fig = plt.figure(figsize=(6,3), dpi=200)
ax = fig.add_subplot()
ax.plot(formatDates,Asum,c='purple',label='All SWB',linewidth=1)
#plt.fill_between(formatDates,AsumUp,AsumDown,color='purple',alpha=0.1)
ax.plot(formatDates,Asum_lake,c='blue',label='Natural Lakes',linewidth=1)
#plt.fill_between(formatDates,AsumUp_lake,AsumDown_lake,color='blue',alpha=0.1)
ax.plot(formatDates,Asum_res,c='red',label='Artificial Reservoirs',linewidth=1)
#plt.fill_between(formatDates,AsumUp_res,AsumDown_res,color='red',alpha=0.1)
plt.gca().set_ylabel('Water Surface Area Anomalies $\mathregular{km^{2}}$', fontsize = 12)
plt.title('Global Lake and Reservoir Surface Area Anomalies')
plt.gca().set_ylim(-15000,15000)
years = mdates.YearLocator()
months = mdates.MonthLocator()
years_fmt = mdates.DateFormatter('%Y')
plt.gca().xaxis.set_major_formatter(years_fmt)
plt.gca().xaxis.set_major_locator(years)
plt.gca().xaxis.set_minor_locator(months)
plt.gca().set_xlim(datetime.date(2017,1,1),datetime.date(2020,1,1))
plt.gca().format_xdata = mdates.DateFormatter('%Y-%m-%d')
    plt.gca().format_ydata = lambda x: '$%1.2f' % x  # format interactive y-axis readout
plt.gca().grid(True)
plt.gca().legend(frameon=False, loc='lower right', ncol=1,fontsize = 12)
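# Illustrative sketch: the anomaly transform used in Fig3b (and, in percent form,
# in Fig3c) is just mean-removal, optionally scaled by the record mean.
def _anomaly(series, percent=False):
    avg = sum(series) / float(len(series))
    anom = [a - avg for a in series]
    if percent:
        anom = [a / avg * 100 for a in anom]
    return anom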
#--Figure 3c-------------------------------------------------------------------
def Fig3c(D_int,A_int,WE_int,LE_int,Type):
Asum,Dates = SumArea(A_int,D_int)
Avg = np.mean(Asum)
Asum = np.array(Asum) - Avg
WEsum,Dates = SumAreaSq(WE_int,D_int)
LEsum,Dates = SumAreaSq(LE_int,D_int)
AsumUp = Smooth((np.array(Asum) + np.array(WEsum))/Avg*100)
AsumDown = Smooth((np.array(Asum) - np.array(LEsum))/Avg*100)
Asum = Smooth(Asum/Avg*100)
formatDates = []
for d in Dates:
formatDates.append(datetime.datetime.fromtimestamp(d/1000))
lake_ids = []
res_ids = []
for key in Type:
if Type[key] == 1:
lake_ids.append(key)
else:
res_ids.append(key)
D_int_lake = {k: D_int[k] for k in D_int.keys() & lake_ids }
D_int_res = {k: D_int[k] for k in D_int.keys() & res_ids }
A_int_lake = {k: A_int[k] for k in A_int.keys() & lake_ids }
A_int_res = {k: A_int[k] for k in A_int.keys() & res_ids }
WE_int_lake = {k: WE_int[k] for k in WE_int.keys() & lake_ids }
WE_int_res = {k: WE_int[k] for k in WE_int.keys() & res_ids }
LE_int_lake = {k: LE_int[k] for k in LE_int.keys() & lake_ids }
LE_int_res = {k: LE_int[k] for k in LE_int.keys() & res_ids }
Asum_lake, Dates_lake = SumArea(A_int_lake,D_int_lake)
WEsum_lake, Dates_lake = SumAreaSq(WE_int_lake,D_int_lake)
LEsum_lake, Dates_lake = SumAreaSq(LE_int_lake,D_int_lake)
Asum_res, Dates_res = SumArea(A_int_res,D_int_res)
WEsum_res, Dates_res = SumAreaSq(WE_int_res,D_int_res)
LEsum_res, Dates_res = SumAreaSq(LE_int_res,D_int_res)
Avg_lake = np.mean(Asum_lake)
Asum_lake = np.array(Asum_lake) - Avg_lake
Avg_res = np.mean(Asum_res)
Asum_res = np.array(Asum_res) - Avg_res
AsumUp_lake = Smooth((np.array(Asum_lake) + np.array(WEsum_lake))/Avg_lake*100)
AsumDown_lake = Smooth((np.array(Asum_lake) - np.array(LEsum_lake))/Avg_lake*100)
AsumUp_res = Smooth((np.array(Asum_res) + np.array(WEsum_res))/Avg_res*100)
AsumDown_res = Smooth((np.array(Asum_res) - np.array(LEsum_res))/Avg_res*100)
Asum_lake = Smooth(Asum_lake/Avg_lake*100)
Asum_res = Smooth(Asum_res/Avg_res*100)
fig = plt.figure(figsize=(6,3), dpi=200)
ax = fig.add_subplot()
ax.plot(formatDates,Asum,c='purple',label='All SWB',linewidth=1)
#plt.fill_between(formatDates,AsumUp,AsumDown,color='purple',alpha=0.1)
ax.plot(formatDates,Asum_lake,c='blue',label='Natural Lakes',linewidth=1)
#plt.fill_between(formatDates,AsumUp_lake,AsumDown_lake,color='blue',alpha=0.1)
ax.plot(formatDates,Asum_res,c='red',label='Artificial Reservoirs',linewidth=1)
#plt.fill_between(formatDates,AsumUp_res,AsumDown_res,color='red',alpha=0.1)
plt.gca().set_ylabel('Relative Water Surface Area Anomalies [%]', fontsize = 12)
plt.title('Global Lake and Reservoir Surface Area Anomalies')
plt.gca().set_ylim(-4,4)
years = mdates.YearLocator()
months = mdates.MonthLocator()
years_fmt = mdates.DateFormatter('%Y')
plt.gca().xaxis.set_major_formatter(years_fmt)
plt.gca().xaxis.set_major_locator(years)
plt.gca().xaxis.set_minor_locator(months)
plt.gca().set_xlim(datetime.date(2017,1,1),datetime.date(2020,1,1))
plt.gca().format_xdata = mdates.DateFormatter('%Y-%m-%d')
    plt.gca().format_ydata = lambda y: '%1.2f' % y # format the cursor's y readout
plt.gca().grid(True)
plt.gca().legend(frameon=False, loc='lower right', ncol=1,fontsize = 12)
#--Figure 3b-------------------------------------------------------------------
def Fig3b_alt(D_int,A_int,LakesByBasin):
fig = plt.figure(figsize=(8,5), dpi=300)
ax = fig.add_subplot()
ax.set_aspect('equal')
world = geo.read_file(geo.datasets.get_path('naturalearth_lowres'))
worldPlot=world.plot(ax=ax, color='white', edgecolor='black', linewidth=0.5)
cmap = cm.get_cmap('Blues', 256)
basins = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_2")
basinNumbers = basins.aggregate_array('PFAF_ID').getInfo()
lakes = ee.FeatureCollection('users/matthewbonnema/HydroLAKES')
lakes = lakes.filter(ee.Filter.gte('Lake_area',1))
PolyList = []
AV = []
#basinNumbers = basinNumbers[0:2]
#print(basinNumbers[-1])
for N in basinNumbers:
print('Loading geometry for basin: '+str(N))
b = ee.Feature(basins.filter(ee.Filter.eq('PFAF_ID',N)).first())
Geo= b.geometry().simplify(100)
Geometry = Geo.getInfo()
geo_shape = shape(Geometry)
PolyList.append(geo_shape)
print('\tGeometry loaded')
print('Aggregating lake area for basin: '+str(N))
lakeID = LakesByBasin[str(N)]
#basinLakes = lakes.filterBounds(Geo)
#lakeID = basinLakes.aggregate_array('Hylak_id').getInfo()
A = {k: A_int[k] for k in A_int.keys() & lakeID }
D = {k: D_int[k] for k in D_int.keys() & lakeID }
Asum,Dates = SumArea(A,D)
Asum = np.array(Asum)
Amax = np.max(Asum)
Amin = np.min(Asum)
Avar = Amax-Amin
AV.append(Avar)
print('\tFinished aggregation')
maxVar = np.max(AV)
C = np.array(AV)/maxVar
#print(C)
n = len(PolyList)
i = 0
for P,c in zip(PolyList,C):
i = i+1
print('Drawing shape number '+str(i)+'/'+str(n))
c = float(c)
try:
if P.geom_type == 'Polygon':
plt.fill(*P.exterior.xy,c=cmap(c))
elif P.geom_type == 'MultiPolygon':
for geom in P.geoms:
plt.fill(*geom.exterior.xy,c=cmap(c))
print('\tFinished drawing')
except:
print('\tFailed to draw polygon')
norm = colors.Normalize(vmin=0, vmax=maxVar, clip=False)
plt.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax)
plt.gca().axes.get_xaxis().set_visible(False)
plt.gca().axes.get_yaxis().set_visible(False)
plt.ylim([-60,90])
plt.xlim([-180,180])
plt.show()
#--Figure 3c-------------------------------------------------------------------
def Fig3c_alt(D_int,A_int,LakesByBasin):
fig = plt.figure(figsize=(8,5), dpi=300)
ax = fig.add_subplot()
ax.set_aspect('equal')
world = geo.read_file(geo.datasets.get_path('naturalearth_lowres'))
worldPlot=world.plot(ax=ax, color='white', edgecolor='black', linewidth=0.5)
cmap = cm.get_cmap('Blues', 256)
basins = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_2")
basinNumbers = basins.aggregate_array('PFAF_ID').getInfo()
lakes = ee.FeatureCollection('users/matthewbonnema/HydroLAKES')
lakes = lakes.filter(ee.Filter.gte('Lake_area',1))
PolyList = []
AV = []
#basinNumbers = basinNumbers[0:2]
#print(basinNumbers[-1])
for N in basinNumbers:
print('Loading geometry for basin: '+str(N))
b = ee.Feature(basins.filter(ee.Filter.eq('PFAF_ID',N)).first())
Geo= b.geometry().simplify(100)
Geometry = Geo.getInfo()
geo_shape = shape(Geometry)
PolyList.append(geo_shape)
print('\tGeometry loaded')
print('Aggregating lake area for basin: '+str(N))
lakeID = LakesByBasin[str(N)]
#basinLakes = lakes.filterBounds(Geo)
#lakeID = basinLakes.aggregate_array('Hylak_id').getInfo()
A = {k: A_int[k] for k in A_int.keys() & lakeID }
D = {k: D_int[k] for k in D_int.keys() & lakeID }
Asum,Dates = SumArea(A,D)
Asum = np.array(Asum)
Amax = np.max(Asum)
Amin = np.min(Asum)
Avar = Amax-Amin
Amean = np.mean(Asum)
AvarP = Avar/Amean*100
AV.append(AvarP)
print('\tFinished aggregation')
maxVar = np.max(AV)
C = np.array(AV)/maxVar
#print(C)
n = len(PolyList)
i = 0
for P,c in zip(PolyList,C):
i = i+1
print('Drawing shape number '+str(i)+'/'+str(n))
c = float(c)
try:
if P.geom_type == 'Polygon':
plt.fill(*P.exterior.xy,c=cmap(c),alpha=0.8)
elif P.geom_type == 'MultiPolygon':
for geom in P.geoms:
plt.fill(*geom.exterior.xy,c=cmap(c),alpha=0.8)
print('\tFinished drawing')
except:
print('\tFailed to draw polygon')
norm = colors.Normalize(vmin=0, vmax=maxVar, clip=False)
plt.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax)
plt.gca().axes.get_xaxis().set_visible(False)
plt.gca().axes.get_yaxis().set_visible(False)
plt.ylim([-60,90])
plt.xlim([-180,180])
plt.show()
#--Figure 4a-------------------------------------------------------------------
def Fig4a(Avp, A_d):
Avp_subdiv = []
labels = []
positions = []
sub = 2
n_sub = range(sub*4)
for n in n_sub:
Avp_sel = Avp[np.array([A_d>=10**(n/2),A_d<10**((n+1)/2)]).all(axis=0)]
Avp_subdiv.append(Avp_sel)
positions.append(np.mean([10**(n/2),10**((n+1)/2)]))
#positions.append(10**(n/2))
if n % 2 == 0:
labels.append('$10^{'+str(n//2)+'}$')
else:
labels.append('$10^{'+str(n)+'/2}$')
Avp_subdiv.append(Avp[A_d>=10000])
labels.append('>$10^4$')
positions.append(np.mean([10**(8/2),10**(9/2)]))
w = 0.42
width = lambda p, w: 10**(np.log10(p)+w/2.)-10**(np.log10(p)-w/2.)
widths = width(positions,w)
#widths[-1] = width(positions[-1],w*1.4)
fig = plt.figure(figsize=(8,5), dpi=300)
ax = fig.add_subplot()
ax.boxplot(Avp_subdiv,showfliers=False,positions=positions,widths=widths)
ax.set_ylabel('Relative Water Surface Area Variability [%]', fontsize = 12)
    ax.set_xlabel(r'Nominal Surface Area $\mathregular{km^{2}}$', fontsize = 12)
plt.title('Global SWB Area Variability')
#ax.boxplot(Avp_subdiv,showfliers=False,labels=labels,widths=0.7)
ax.set_xscale('log')
ax.minorticks_off()
#--Figure 4b-------------------------------------------------------------------
def Fig4b(Avp, A_d, Ltype):
Avp_lake = Avp[Ltype==1]
Avp_res = Avp[Ltype==2]
A_d_lake = A_d[Ltype==1]
A_d_res = A_d[Ltype==2]
Avp_subdiv_lake = []
Avp_subdiv_res= []
labels = []
positionsl = []
positionsr = []
sub = 2
n_sub = range(sub*4)
for n in n_sub:
Avp_sel_l = Avp_lake[np.array([A_d_lake>=10**(n/2),A_d_lake<10**((n+1)/2)]).all(axis=0)]
Avp_sel_r = Avp_res[np.array([A_d_res>=10**(n/2),A_d_res<10**((n+1)/2)]).all(axis=0)]
#t,p = scipy.stats.ttest_ind(Avp_sel_l ,Avp_sel_r, equal_var=0)
#print(n/2,t,p)
Avp_subdiv_lake.append(Avp_sel_l)
Avp_subdiv_res.append(Avp_sel_r)
'''
positionsl.append(np.mean([np.mean([10**(n/2),10**((n+1)/2)]),10**(n/2)]))
positionsr.append(np.mean([np.mean([10**(n/2),10**((n+1)/2)]),10**((n+1)/2)]))
'''
positionsl.append(np.mean([10**(n/2),10**((n+1)/2)]))
positionsr.append(np.mean([10**(n/2),10**((n+1)/2)]) - 0.02*np.mean([10**(n/2),10**((n+1)/2)]))
#positions.append(10**(n/2))
if n % 2 == 0:
labels.append('>$10^{'+str(n//2)+'}$')
else:
labels.append('>$10^{'+str(n)+'/2}$')
Avp_subdiv_lake.append(Avp[A_d>=10000])
#labels.append('>$10^4$')
positionsl.append(np.mean([10**(8/2),10**(9/2)]))
w = 0.38
width = lambda p, w: 10**(np.log10(p)+w/2.)-10**(np.log10(p)-w/2.)
widthsl = width(positionsl,w)
widthsr = width(positionsr,w)*0.92
#widths[-1] = width(positions[-1],w*1.4)
fig = plt.figure(figsize=(8,5), dpi=300)
ax = fig.add_subplot()
bpl = ax.boxplot(Avp_subdiv_lake,showfliers=False,positions=positionsl,widths=widthsl,notch=True)
bpr = ax.boxplot(Avp_subdiv_res,showfliers=False,positions=positionsr,widths=widthsr,notch=True)
for element in ['boxes', 'whiskers', 'fliers', 'means', 'medians', 'caps']:
plt.setp(bpl[element], color='blue')
for element in ['boxes', 'whiskers', 'fliers', 'means', 'medians', 'caps']:
plt.setp(bpr[element], color='red')
ax.set_ylabel('Relative Water Surface Area Variability [%]', fontsize = 12)
    ax.set_xlabel(r'Nominal Surface Area $\mathregular{km^{2}}$', fontsize = 12)
plt.title('Global Lake and Reservoir Area Variability')
ax.set_xscale('log')
ax.minorticks_off()
#--Figure 4c-------------------------------------------------------------------
def Fig4c(Av, A_d, Ltype):
Av_lake = Av[Ltype==1]
Av_res = Av[Ltype==2]
A_d_lake = A_d[Ltype==1]
A_d_res = A_d[Ltype==2]
Av_total_lake = []
Av_total_res= []
Av_total = []
positions = []
sub = 2
n_sub = range(sub*4)
for n in n_sub:
Av_sel_l = Av_lake[np.array([A_d_lake>=10**(n/2),A_d_lake<10**((n+1)/2)]).all(axis=0)]
Av_sel_r = Av_res[np.array([A_d_res>=10**(n/2),A_d_res<10**((n+1)/2)]).all(axis=0)]
Av_total_lake.append(np.sum(Av_sel_l))
Av_total_res.append(np.sum(Av_sel_r))
Av_total.append(np.sum(Av_sel_l)+np.sum(Av_sel_r))
positions.append(np.mean([10**(n/2),10**((n+1)/2)]))
Av_total_gt = np.sum(Av[A_d>=10000])
fig = plt.figure(figsize=(8,5), dpi=300)
ax = fig.add_subplot()
w = 0.42
width = lambda p, w: 10**(np.log10(p)+w/2.)-10**(np.log10(p)-w/2.)
widths = width(positions,w)
ax.bar(x=positions,height=Av_total_res, width=widths, bottom=Av_total_lake, color='red')
print(np.sum(Av_total_res))
positions.append(np.mean([10**(8/2),10**(9/2)]))
widths = width(positions,w)
Av_total_lake.append(Av_total_gt)
ax.bar(x=positions,height=Av_total_lake, width=widths, color='blue')
print(np.sum(Av_total_lake))
    ax.set_ylabel(r'Total Surface Area Amplitude $\mathregular{km^{2}}$', fontsize = 12)
    ax.set_xlabel(r'Nominal Surface Area $\mathregular{km^{2}}$', fontsize = 12)
plt.title('Total Lake and Reservoir Area Amplitude')
ax.set_xscale('log')
    ax.minorticks_off()
# --- app.py (galenguyer/vigilant, MIT) ---
from vigilant import app
# --- limix_ext/logreg/__init__.py (glimix/limix-ext, MIT) ---
from .predict import predict
# --- src/gestureInfo.py (DennisMelamed/crazyFrog, MIT) ---
#
# Author: Owen Levin
#
from numpy import dot
import rospkg
rospack = rospkg.RosPack()
macro_folder = rospack.get_path('crazy_frog') + "/macros/"
file_ext = ".csv"
# Index of Gestures
gestures = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,22,23]
num_var = [22,23]
Digit = 14
Negate = 15
numbers = [0,1,2,3,4,5,6,7,8,9,Digit,Negate]
Run = 10
END = 11
RecordMacro = 12
CallMacro = 13
Repeat = 16
MoveX = 17
MoveY = 18
MoveZ = 19
Wait = 20
actions = [CallMacro, MoveX, MoveY, MoveZ, Wait]
NOP = 21
SetNumberVar = 22
CallNumberVar = 15
def make_num_var_dict():
rec_num_dict = {}
not_END = gestures[:END]+gestures[END+1:]
for gesture in gestures:
rec_num_dict[gesture] = gestures
    return rec_num_dict
idle_legal_gestures = {
0: numbers+[END], # 0 is the digit 0
1: numbers+[END], # 1 is the digit 1
2: numbers+[END], # 2 is the digit 2
3: numbers+[END], # 3 is the digit 3
4: numbers+[END], # 4 is the digit 4
5: numbers+[END], # 5 is the digit 5
6: numbers+[END], # 6 is the digit 6
7: numbers+[END], # 7 is the digit 7
8: numbers+[END], # 8 is the digit 8
9: numbers+[END], # 9 is the digit 9
Digit: numbers[:10]+[Negate], # 14 is the digit/number command. The next value should be the next digit in the number being constructed.
Negate: numbers[:10], # 15 is negation of a digit. (Can be used before any digit in a number to change the sign)
Run: [END,Digit,CallNumberVar], # 10 is the run command which takes a number (which previously recorded macro to run) as a parameter
END: [Run,RecordMacro,SetNumberVar], # 11 is the end scope/ cancel gesture (note: one frequently will have entered a scope from just ending a previous scope)
}                                       # 21 is No op which, if recognized, does nothing, but is mentioned here for completeness
number_var_name_legal_gestures = make_num_var_dict()
recording_legal_gestures = {
0: numbers+[END], # 0 is the digit 0
1: numbers+[END], # 1 is the digit 1
2: numbers+[END], # 2 is the digit 2
3: numbers+[END], # 3 is the digit 3
4: numbers+[END], # 4 is the digit 4
5: numbers+[END], # 5 is the digit 5
6: numbers+[END], # 6 is the digit 6
7: numbers+[END], # 7 is the digit 7
8: numbers+[END], # 8 is the digit 8
9: numbers+[END], # 9 is the digit 9
Digit: numbers[:10]+[Negate], # 14 is the digit/number command. The next value should be the next digit in the number being constructed.
Negate: numbers[:10], # 15 is negation of a digit. (Can be used before any digit in a number to change the sign)
END: [END,RecordMacro,SetNumberVar,CallMacro,Repeat,MoveX,MoveY,MoveZ,Wait], # 11 is the end scope/ cancel gesture (note: one frequently will have entered a scope from just ending a previous scope)
RecordMacro: [END,Digit,CallNumberVar], # 12 is the record macro command. The current scope is set to Recording and the next expected value should be a digit
CallMacro: [END,Digit,CallNumberVar], # 13 is the call macro commmand. The next expected value(s) should be a digit(s) (which previously recorded macro to call)
Repeat: [END,Digit,CallNumberVar], # 16 is the repeat command which starts a Repeating scope and takes first a number parameter n, then an arbitrary number of actions to repeat x times.
# (If n is 0 or negative, nothing happens)
MoveX: [END,Digit,CallNumberVar]+actions, # 17 is move in the x direction (left is negative, right is positive). Expects a number (of decimeters) to move next
MoveY: [END,Digit,CallNumberVar]+actions, # 18 is move in the y direction (backward is negative, forward is positive)). Expects a number (of decimeters) to move next
MoveZ: [END,Digit,CallNumberVar]+actions, # 19 is move in the z direction (down is negative, up is positive). Expects a number (of decimeters) to move next
Wait: [END,Digit,CallNumberVar]+actions, # 20 is the wait command. (Expects a number (of seconds) to wait next
}                                       # 21 is No op which, if recognized, does nothing, but is mentioned here for completeness
repeating_legal_gestures = {
0: numbers+[END], # 0 is the digit 0
1: numbers+[END], # 1 is the digit 1
2: numbers+[END], # 2 is the digit 2
3: numbers+[END], # 3 is the digit 3
4: numbers+[END], # 4 is the digit 4
5: numbers+[END], # 5 is the digit 5
6: numbers+[END], # 6 is the digit 6
7: numbers+[END], # 7 is the digit 7
8: numbers+[END], # 8 is the digit 8
9: numbers+[END], # 9 is the digit 9
Digit: numbers[:10]+[Negate], # 14 is the digit/number command. The next value should be the next digit in the number being constructed.
Negate: numbers[:10], # 15 is negation of a digit. (Can be used before any digit in a number to change the sign)
END: [END,RecordMacro,SetNumberVar,CallMacro,Repeat,MoveX,MoveY,MoveZ,Wait], # 11 is the end scope/ cancel gesture (note: one frequently will have entered a scope from just ending a previous scope)
RecordMacro: [END,Digit,CallNumberVar], # 12 is the record macro command. The current scope is set to Recording and the next expected value should be a digit
CallMacro: [END,Digit,CallNumberVar], # 13 is the call macro commmand. The next expected value(s) should be a digit(s) (which previously recorded macro to call)
Repeat: [END,Digit,CallNumberVar], # 16 is the repeat command which starts a Repeating scope and takes first a number parameter n, then an arbitrary number of actions to repeat x times.
# (If n is 0 or negative, nothing happens)
MoveX: [END,Digit,CallNumberVar]+actions, # 17 is move in the x direction (left is negative, right is positive). Expects a number (of decimeters) to move next
MoveY: [END,Digit,CallNumberVar]+actions, # 18 is move in the y direction (backward is negative, forward is positive)). Expects a number (of decimeters) to move next
MoveZ: [END,Digit,CallNumberVar]+actions, # 19 is move in the z direction (down is negative, up is positive). Expects a number (of decimeters) to move next
Wait: [END,Digit,CallNumberVar]+actions, # 20 is the wait command. (Expects a number (of seconds) to wait next
}
legal_gestures_in_scope = { # Each scope is mapped to a dictionary containing gestures allowed after the previous input
"Idle": idle_legal_gestures,
RecordMacro: recording_legal_gestures,
Repeat: repeating_legal_gestures,
}
# --- nBody-master/nbody/utils/__init__.py (craigboger/nbody-sim, MIT) ---
from .Counter import Counter
# --- apps/account/models.py (picsldev/pyerp, MIT) ---
# Libraries in local folders
from .submodels.accountmove import PyAccountMove
from .submodels.accountplan import PyAccountPlan
# --- saopy/qoi/__init__.py (CityPulse/CP_Resourcemanagement, MIT) ---
import saopy.model
from saopy.model import qoi___Accuracy as Accuracy
from saopy.model import qoi___Age as Age
from saopy.model import qoi___Authority as Authority
from saopy.model import qoi___Bandwidth as Bandwidth
from saopy.model import qoi___Completeness as Completeness
from saopy.model import qoi___Confidentiality as Confidentiality
from saopy.model import qoi___Correctness as Correctness
from saopy.model import qoi___Cost as Cost
from saopy.model import qoi___Deviation as Deviation
from saopy.model import qoi___Encryption as Encryption
from saopy.model import qoi___EnergyConsumption as EnergyConsumption
from saopy.model import qoi___Frequency as Frequency
from saopy.model import qoi___Jitter as Jitter
from saopy.model import qoi___Latency as Latency
from saopy.model import qoi___LicenceDefinition as LicenceDefinition
from saopy.model import qoi___MayBePublished as MayBePublished
from saopy.model import qoi___MayBeUsed as MayBeUsed
from saopy.model import qoi___MonetaryConsumption as MonetaryConsumption
from saopy.model import qoi___NetworkConsumption as NetworkConsumption
from saopy.model import qoi___NetworkPerformance as NetworkPerformance
from saopy.model import qoi___Ordered as Ordered
from saopy.model import qoi___PacketLoss as PacketLoss
from saopy.model import qoi___Precision as Precision
from saopy.model import qoi___PublicKey as PublicKey
from saopy.model import qoi___Quality as Quality
from saopy.model import qoi___Queuing as Queuing
from saopy.model import qoi___QueuingType as QueuingType
from saopy.model import qoi___Reputation as Reputation
from saopy.model import qoi___Resolution as Resolution
from saopy.model import qoi___Security as Security
from saopy.model import qoi___Signing as Signing
from saopy.model import qoi___Throughput as Throughput
from saopy.model import qoi___Timeliness as Timeliness
from saopy.model import qoi___Volatility as Volatility
# --- ADS1115_4Channel.py (JoseSalamancaCoy/RACIMO_AIRE, MIT) ---
# Distributed with a free-will license.
# Use it any way you want, profit or free, provided it fits in the licenses of its associated works.
# ADS1115
# This code is designed to work with the ADS1115_I2CADC I2C Mini Module available from ControlEverything.com.
# https://www.controleverything.com/content/Analog-Digital-Converters?sku=ADS1115_I2CADC#tabs-0-product_tabset-2

from OmegaExpansion import onionI2C
import time

# Get I2C bus
i2c = onionI2C.OnionI2C()

# ADS1115 address, 0x48(72); configuration register, 0x01(01)
# Config LSB 0x83: continuous conversion mode, 128SPS, +/- 2.048V
# Config MSB selects the single-ended input (AINN = GND):
#   0xC4 = AIN0, 0xD4 = AIN1, 0xE4 = AIN2, 0xF4 = AIN3
for channel, config_msb in enumerate([0xC4, 0xD4, 0xE4, 0xF4]):
    i2c.writeBytes(0x48, 0x01, [config_msb, 0x83])
    time.sleep(0.5)

    # Read data back from 0x00(00), 2 bytes: raw_adc MSB, raw_adc LSB
    data = i2c.readBytes(0x48, 0x00, 2)

    # Convert the data (16-bit two's complement; note 65536, not 65535)
    raw_adc = data[0] * 256 + data[1]
    if raw_adc > 32767:
        raw_adc -= 65536

    # Output data to screen
    print("Digital Value of Analog Input on Channel-%d: %d" % (channel, raw_adc))
# --- models.py (sunbingqi/CS224N_Final_Project, MIT) ---
"""Top-level model classes.
Author:
Chris Chute (chute@stanford.edu)
"""
import layers
import torch
import torch.nn as nn
import torch.nn.functional as F
from util import masked_softmax
class BiDAFBASE(nn.Module):
"""Baseline BiDAF model for SQuAD.
Based on the paper:
"Bidirectional Attention Flow for Machine Comprehension"
by Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi
(https://arxiv.org/abs/1611.01603).
Follows a high-level structure commonly found in SQuAD models:
- Embedding layer: Embed word indices to get word vectors.
- Encoder layer: Encode the embedded sequence.
- Attention layer: Apply an attention mechanism to the encoded sequence.
- Model encoder layer: Encode the sequence again.
- Output layer: Simple layer (e.g., fc + softmax) to get final outputs.
Args:
word_vectors (torch.Tensor): Pre-trained word vectors.
hidden_size (int): Number of features in the hidden state at each layer.
drop_prob (float): Dropout probability.
"""
def __init__(self, word_vectors, hidden_size, drop_prob=0.):
super(BiDAFBASE, self).__init__()
self.emb = layers.EmbeddingBASE(word_vectors=word_vectors,
hidden_size=hidden_size,
drop_prob=drop_prob)
self.enc = layers.RNNEncoder(input_size=hidden_size,
hidden_size=hidden_size,
num_layers=1,
drop_prob=drop_prob)
self.att = layers.BiDAFAttention(hidden_size=2 * hidden_size,
drop_prob=drop_prob)
self.mod = layers.RNNEncoder(input_size=8 * hidden_size,
hidden_size=hidden_size,
num_layers=2,
drop_prob=drop_prob)
self.out = layers.BiDAFOutput(hidden_size=hidden_size,
drop_prob=drop_prob)
def forward(self, cw_idxs, qw_idxs):
c_mask = torch.zeros_like(cw_idxs) != cw_idxs
q_mask = torch.zeros_like(qw_idxs) != qw_idxs
c_len, q_len = c_mask.sum(-1), q_mask.sum(-1)
c_emb = self.emb(cw_idxs) # (batch_size, c_len, hidden_size)
q_emb = self.emb(qw_idxs) # (batch_size, q_len, hidden_size)
c_enc = self.enc(c_emb, c_len) # (batch_size, c_len, 2 * hidden_size)
q_enc = self.enc(q_emb, q_len) # (batch_size, q_len, 2 * hidden_size)
att = self.att(c_enc, q_enc,
c_mask, q_mask) # (batch_size, c_len, 8 * hidden_size)
mod = self.mod(att, c_len) # (batch_size, c_len, 2 * hidden_size)
out = self.out(att, mod, c_mask) # 2 tensors, each (batch_size, c_len)
return out
class BiDAF(nn.Module):
"""Baseline BiDAF model for SQuAD + Character Embedding.
Based on the paper:
"Bidirectional Attention Flow for Machine Comprehension"
by Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi
(https://arxiv.org/abs/1611.01603).
Follows a high-level structure commonly found in SQuAD models:
- Embedding layer: Embed word indices to get word vectors.
- Encoder layer: Encode the embedded sequence.
- Attention layer: Apply an attention mechanism to the encoded sequence.
- Model encoder layer: Encode the sequence again.
- Output layer: Simple layer (e.g., fc + softmax) to get final outputs.
Args:
word_vectors (torch.Tensor): Pre-trained word vectors.
hidden_size (int): Number of features in the hidden state at each layer.
drop_prob (float): Dropout probability.
"""
def __init__(self, word_vectors, hidden_size, character_vectors, char_channel_size, char_channel_width, drop_prob=0.):
super(BiDAF, self).__init__()
self.emb = layers.Embedding(word_vectors=word_vectors,
hidden_size=hidden_size,
character_vectors=character_vectors,
char_channel_size=char_channel_size,
char_channel_width=char_channel_width,
drop_prob=drop_prob)
self.enc = layers.RNNEncoder(input_size=hidden_size,
hidden_size=hidden_size,
num_layers=1,
drop_prob=drop_prob)
self.att = layers.BiDAFAttention(hidden_size=2 * hidden_size,
drop_prob=drop_prob)
self.mod = layers.RNNEncoder(input_size=8 * hidden_size,
hidden_size=hidden_size,
num_layers=2,
drop_prob=drop_prob)
self.out = layers.BiDAFOutput(hidden_size=hidden_size,
drop_prob=drop_prob)
def forward(self, cw_idxs, qw_idxs, cc_idxs, qc_idxs):
c_mask = torch.zeros_like(cw_idxs) != cw_idxs
q_mask = torch.zeros_like(qw_idxs) != qw_idxs
c_len, q_len = c_mask.sum(-1), q_mask.sum(-1)
c_emb = self.emb(cw_idxs, cc_idxs) # (batch_size, c_len, hidden_size)
q_emb = self.emb(qw_idxs, qc_idxs) # (batch_size, q_len, hidden_size)
c_enc = self.enc(c_emb, c_len) # (batch_size, c_len, 2 * hidden_size)
q_enc = self.enc(q_emb, q_len) # (batch_size, q_len, 2 * hidden_size)
att = self.att(c_enc, q_enc,
c_mask, q_mask) # (batch_size, c_len, 8 * hidden_size)
mod = self.mod(att, c_len) # (batch_size, c_len, 2 * hidden_size)
out = self.out(att, mod, c_mask) # 2 tensors, each (batch_size, c_len)
return out
class QANet(nn.Module):
    """QANet model for SQuAD.
    Based on the paper:
    "QANet: Combining Local Convolution with Global Self-Attention for
    Reading Comprehension" (https://arxiv.org/abs/1804.09541).
    Convolutional + self-attention encoder blocks replace BiDAF's
    recurrent encoders; the three model-encoder passes share weights.
    """
def __init__(self, word_vectors, character_vectors, hidden_size, char_channel_size, char_channel_width, pad=0, drop_prob=0.1, num_head=1):
super().__init__()
self.emb = layers.Embedding(word_vectors=word_vectors,
hidden_size=hidden_size,
character_vectors=character_vectors,
char_channel_size=char_channel_size,
char_channel_width=char_channel_width,
drop_prob=drop_prob)
self.enc = layers.EncoderBlock(conv_num=4, hidden_size=hidden_size, num_head=num_head, k=7, drop_prob=drop_prob)
self.att = layers.BiDAFAttention(hidden_size=hidden_size, drop_prob=drop_prob)
self.att_conv = nn.Conv1d(hidden_size * 4, hidden_size, kernel_size=1, bias=False)
nn.init.xavier_uniform_(self.att_conv.weight)
self.mod = nn.ModuleList([layers.EncoderBlock(conv_num=2, hidden_size=hidden_size, num_head=num_head, k=5, drop_prob=drop_prob) for _ in range(7)])
self.out = layers.QANetOutput(hidden_size=hidden_size)
def forward(self, cw_idxs, qw_idxs, cc_idxs, qc_idxs):
c_mask = torch.zeros_like(cw_idxs) != cw_idxs
q_mask = torch.zeros_like(qw_idxs) != qw_idxs
c_emb = self.emb(cw_idxs, cc_idxs) # (batch_size, c_len, hidden_size)
q_emb = self.emb(qw_idxs, qc_idxs) # (batch_size, q_len, hidden_size)
c_enc = self.enc(c_emb.transpose(1, 2), c_mask, 1, 1)
q_enc = self.enc(q_emb.transpose(1, 2), q_mask, 1, 1)
c_enc = c_enc.permute((0, 2, 1))
q_enc = q_enc.permute((0, 2, 1))
        att = self.att(c_enc, q_enc, c_mask, q_mask)  # (batch_size, c_len, 4 * hidden_size)
att = att.permute((0, 2, 1))
att = self.att_conv(att)
        # Three passes through the same (weight-shared) encoder stack produce
        # the three model encodings consumed by the output layer, as in QANet.
        for i, enc_block in enumerate(self.mod):
            att = enc_block(att, c_mask, i * (2 + 2) + 1, 7)
        mod_1 = att
        for i, enc_block in enumerate(self.mod):
            att = enc_block(att, c_mask, i * (2 + 2) + 1, 7)
        mod_2 = att
        for i, enc_block in enumerate(self.mod):
            att = enc_block(att, c_mask, i * (2 + 2) + 1, 7)
        mod_3 = att
out = self.out(mod_1, mod_2, mod_3, c_mask)
return out | 43.927083 | 155 | 0.589874 | 1,137 | 8,434 | 4.092348 | 0.152155 | 0.118203 | 0.057167 | 0.073071 | 0.848055 | 0.832796 | 0.827853 | 0.790243 | 0.785515 | 0.759725 | 0 | 0.019424 | 0.316339 | 8,434 | 192 | 156 | 43.927083 | 0.787548 | 0.275907 | 0 | 0.663636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054545 | false | 0 | 0.045455 | 0 | 0.154545 | 0.018182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
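The `forward` passes above derive padding masks directly from the word-index tensors with `torch.zeros_like(idxs) != idxs`, i.e. a position is a real token iff its index is non-zero, and then take `mask.sum(-1)` as the sequence length. A plain-Python sketch of that idiom (pad index assumed to be 0):

```python
def padding_mask(idx_rows):
    """True where a real token is present; index 0 is assumed to be padding."""
    return [[i != 0 for i in row] for row in idx_rows]

def lengths(mask_rows):
    """Per-example sequence length, mirroring c_mask.sum(-1)."""
    return [sum(row) for row in mask_rows]

batch = [[4, 7, 2, 0, 0],
         [9, 1, 0, 0, 0]]
mask = padding_mask(batch)
print(lengths(mask))  # [3, 2]
```

The same booleans double as the attention masks handed to `BiDAFAttention`, so padded positions never receive probability mass in the softmax.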
145599ea0d8b8585ada70d693246b6d2c121413b | 37 | py | Python | folder2/testfile2.py | aviljoenn/pynetlab | 14e2f12217b6d7137c5e047574cef36b475382dc | [
"Apache-2.0"
] | null | null | null | folder2/testfile2.py | aviljoenn/pynetlab | 14e2f12217b6d7137c5e047574cef36b475382dc | [
"Apache-2.0"
] | null | null | null | folder2/testfile2.py | aviljoenn/pynetlab | 14e2f12217b6d7137c5e047574cef36b475382dc | [
"Apache-2.0"
] | null | null | null | print("This is file 2 in folder 2")
| 12.333333 | 35 | 0.675676 | 8 | 37 | 3.125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 0.216216 | 37 | 2 | 36 | 18.5 | 0.793103 | 0 | 0 | 0 | 0 | 0 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
145b774ac040ddf73e25781f110fbd00bf78a816 | 18 | py | Python | Chapter1/R-1/9.py | GeorgeGkas/Data_Structures_and_Algorithms_in_Python | c4f8b590ab2dd008504e639607c62d5e5760009a | [
"MIT"
] | 1 | 2017-05-18T09:43:38.000Z | 2017-05-18T09:43:38.000Z | Chapter1/R-1/9.py | GeorgeGkas/Data_Structures_and_Algorithms_in_Python | c4f8b590ab2dd008504e639607c62d5e5760009a | [
"MIT"
] | null | null | null | Chapter1/R-1/9.py | GeorgeGkas/Data_Structures_and_Algorithms_in_Python | c4f8b590ab2dd008504e639607c62d5e5760009a | [
"MIT"
] | null | null | null | range(50, 81, 10)
| 9 | 17 | 0.611111 | 4 | 18 | 2.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4 | 0.166667 | 18 | 1 | 18 | 18 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
147379d82fcdc1ad756e06bf99062f91ee672b0b | 66 | py | Python | examples/max/ex3.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | examples/max/ex3.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | examples/max/ex3.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | print(max([10, '1', '100', 90, '111', '2'], key=lambda x:int(x)))
| 33 | 65 | 0.515152 | 13 | 66 | 2.615385 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206897 | 0.121212 | 66 | 1 | 66 | 66 | 0.37931 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
1ad3189b352da64045f42099eead2a05d277cf32 | 11,202 | py | Python | test/unit/test_client.py | smarter-travel-media/artifacts | cdb29a17ede0924b122b3905a500442c62ae53b7 | [
"MIT"
] | 1 | 2017-02-10T20:55:17.000Z | 2017-02-10T20:55:17.000Z | test/unit/test_client.py | smarter-travel-media/artifacts | cdb29a17ede0924b122b3905a500442c62ae53b7 | [
"MIT"
] | 4 | 2015-12-21T19:26:05.000Z | 2016-05-06T14:30:35.000Z | test/unit/test_client.py | smarter-travel-media/artifacts | cdb29a17ede0924b122b3905a500442c62ae53b7 | [
"MIT"
] | 1 | 2016-07-30T07:07:48.000Z | 2016-07-30T07:07:48.000Z | # -*- coding: utf-8 -*-
"""
"""
import mock
import pytest
import requests
@pytest.fixture
def version_dao():
from stac.http import VersionApiDao
return mock.Mock(spec=VersionApiDao)
@pytest.fixture
def url_generator():
from stac.client import MavenArtifactUrlGenerator
return mock.Mock(spec=MavenArtifactUrlGenerator)
class TestMavenArtifactoryClient(object):
def test_get_version_url(self, version_dao, url_generator):
from stac.client import GenericArtifactoryClient, GenericArtifactoryClientConfig
url_generator.get_url.return_value = ('https://www.example.com/artifactory/libs-release/'
'com/example/services/login/3.9.1/login-3.9.1.jar')
config = GenericArtifactoryClientConfig()
config.is_integration = False
config.http_dao = version_dao
config.url_generator = url_generator
client = GenericArtifactoryClient(config)
url = client.get_version_url('com.example.services.login', 'jar', '3.9.1')
assert ('https://www.example.com/artifactory/libs-release/'
'com/example/services/login/3.9.1/login-3.9.1.jar') == url
def test_get_latest_version_snapshot(self, version_dao, url_generator):
from stac.client import GenericArtifactoryClient, GenericArtifactoryClientConfig
version_dao.get_most_recent_versions.return_value = ['1.3.0-SNAPSHOT']
config = GenericArtifactoryClientConfig()
config.is_integration = True
config.http_dao = version_dao
config.url_generator = url_generator
maven_client = GenericArtifactoryClient(config)
version = maven_client.get_latest_version('com.example.users.service')
assert '1.3.0-SNAPSHOT' == version
def test_get_latest_version_snapshot_no_results(self, version_dao, url_generator):
from stac.client import GenericArtifactoryClient, GenericArtifactoryClientConfig
from stac.exceptions import NoMatchingVersionsError
request = mock.Mock(spec=requests.Request)
response = mock.Mock(spec=requests.Response)
response.status_code = 404
error = requests.HTTPError("Something bad", request=request, response=response)
version_dao.get_most_recent_versions.side_effect = error
config = GenericArtifactoryClientConfig()
config.is_integration = True
config.http_dao = version_dao
config.url_generator = url_generator
maven_client = GenericArtifactoryClient(config)
with pytest.raises(NoMatchingVersionsError):
maven_client.get_latest_version('com.example.users.service')
def test_get_latest_version_snapshot_only_release_results(self, version_dao, url_generator):
from stac.client import GenericArtifactoryClient, GenericArtifactoryClientConfig
from stac.exceptions import NoMatchingVersionsError
version_dao.get_most_recent_versions.return_value = []
config = GenericArtifactoryClientConfig()
config.is_integration = True
config.http_dao = version_dao
config.url_generator = url_generator
maven_client = GenericArtifactoryClient(config)
with pytest.raises(NoMatchingVersionsError):
maven_client.get_latest_version('com.example.users.service')
def test_get_latest_version_release(self, version_dao, url_generator):
from stac.client import GenericArtifactoryClient, GenericArtifactoryClientConfig
version_dao.get_most_recent_release.return_value = '4.13.4'
config = GenericArtifactoryClientConfig()
config.is_integration = False
config.http_dao = version_dao
config.url_generator = url_generator
maven_client = GenericArtifactoryClient(config)
version = maven_client.get_latest_version('com.example.users.service')
assert '4.13.4' == version
def test_get_latest_version_release_no_results(self, version_dao, url_generator):
from stac.client import GenericArtifactoryClient, GenericArtifactoryClientConfig
from stac.exceptions import NoMatchingVersionsError
request = mock.Mock(spec=requests.Request)
response = mock.Mock(spec=requests.Response)
response.status_code = 404
error = requests.HTTPError("Something bad", request=request, response=response)
version_dao.get_most_recent_release.side_effect = error
config = GenericArtifactoryClientConfig()
config.is_integration = False
config.http_dao = version_dao
config.url_generator = url_generator
maven_client = GenericArtifactoryClient(config)
with pytest.raises(NoMatchingVersionsError):
maven_client.get_latest_version('com.example.users.service')
def test_get_latest_versions_bad_limit(self):
from stac.client import GenericArtifactoryClient, GenericArtifactoryClientConfig
config = GenericArtifactoryClientConfig()
maven_client = GenericArtifactoryClient(config)
with pytest.raises(ValueError):
maven_client.get_latest_versions('com.example.users.service', limit=0)
def test_get_latest_versions_snapshot(self, version_dao, url_generator):
from stac.client import GenericArtifactoryClient, GenericArtifactoryClientConfig
version_dao.get_most_recent_versions.return_value = [
'1.3.0-SNAPSHOT', '1.2.1-SNAPSHOT', '1.1.0-SNAPSHOT']
config = GenericArtifactoryClientConfig()
config.is_integration = True
config.http_dao = version_dao
config.url_generator = url_generator
maven_client = GenericArtifactoryClient(config)
versions = maven_client.get_latest_versions('com.example.users.service', limit=3)
expected = [
'1.3.0-SNAPSHOT',
'1.2.1-SNAPSHOT',
'1.1.0-SNAPSHOT'
]
assert expected == versions
def test_get_latest_versions_snapshot_no_results(self, version_dao, url_generator):
from stac.client import GenericArtifactoryClient, GenericArtifactoryClientConfig
from stac.exceptions import NoMatchingVersionsError
request = mock.Mock(spec=requests.Request)
response = mock.Mock(spec=requests.Response)
response.status_code = 404
error = requests.HTTPError("Something bad", request=request, response=response)
version_dao.get_most_recent_versions.side_effect = error
config = GenericArtifactoryClientConfig()
config.is_integration = True
config.http_dao = version_dao
config.url_generator = url_generator
maven_client = GenericArtifactoryClient(config)
with pytest.raises(NoMatchingVersionsError):
maven_client.get_latest_versions('com.example.users.service')
def test_get_latest_versions_snapshot_only_release_results(self, version_dao, url_generator):
from stac.client import GenericArtifactoryClient, GenericArtifactoryClientConfig
from stac.exceptions import NoMatchingVersionsError
version_dao.get_most_recent_versions.return_value = []
config = GenericArtifactoryClientConfig()
config.is_integration = True
config.http_dao = version_dao
config.url_generator = url_generator
maven_client = GenericArtifactoryClient(config)
with pytest.raises(NoMatchingVersionsError):
maven_client.get_latest_versions('com.example.users.service')
def test_get_latest_versions_release(self, version_dao, url_generator):
from stac.client import GenericArtifactoryClient, GenericArtifactoryClientConfig
version_dao.get_most_recent_versions.return_value = ['1.2.1', '1.2.0', '1.1.1']
config = GenericArtifactoryClientConfig()
config.is_integration = False
config.http_dao = version_dao
config.url_generator = url_generator
maven_client = GenericArtifactoryClient(config)
versions = maven_client.get_latest_versions('com.example.users.service', limit=3)
expected = [
'1.2.1',
'1.2.0',
'1.1.1'
]
assert expected == versions
def test_get_latest_versions_release_no_results(self, version_dao, url_generator):
from stac.client import GenericArtifactoryClient, GenericArtifactoryClientConfig
from stac.exceptions import NoMatchingVersionsError
request = mock.Mock(spec=requests.Request)
response = mock.Mock(spec=requests.Response)
response.status_code = 404
error = requests.HTTPError("Something bad", request=request, response=response)
version_dao.get_most_recent_versions.side_effect = error
config = GenericArtifactoryClientConfig()
config.is_integration = False
config.http_dao = version_dao
config.url_generator = url_generator
maven_client = GenericArtifactoryClient(config)
with pytest.raises(NoMatchingVersionsError):
maven_client.get_latest_versions('com.example.users.service')
def test_get_latest_versions_release_only_snapshot_results(self, version_dao, url_generator):
from stac.client import GenericArtifactoryClient, GenericArtifactoryClientConfig
from stac.exceptions import NoMatchingVersionsError
version_dao.get_most_recent_versions.return_value = []
config = GenericArtifactoryClientConfig()
config.is_integration = False
config.http_dao = version_dao
config.url_generator = url_generator
maven_client = GenericArtifactoryClient(config)
with pytest.raises(NoMatchingVersionsError):
maven_client.get_latest_versions('com.example.users.service')
class TestMavenArtifactUrlGenerator(object):
def test_get_version_url_with_descriptor(self):
from stac.client import MavenArtifactUrlGenerator
gen = MavenArtifactUrlGenerator('https://corp.example.com/artifactory', 'libs-release-local')
url = gen.get_url('com.example.services', 'locations', 'jar', '4.5.1', 'sources')
assert ('https://corp.example.com/artifactory/libs-release-local/' +
'com/example/services/locations/4.5.1/locations-4.5.1-sources.jar') == url
def test_get_version_url_without_descriptor(self):
from stac.client import MavenArtifactUrlGenerator
gen = MavenArtifactUrlGenerator('https://corp.example.com/artifactory', 'libs-release-local')
url = gen.get_url('com.example.services', 'locations', 'war', '4.5.1', None)
assert ('https://corp.example.com/artifactory/libs-release-local/' +
'com/example/services/locations/4.5.1/locations-4.5.1.war') == url
def test_parse_full_name_group_and_artifact():
from stac.client import _parse_full_name
name = 'com.example.services.auth'
group, artifact = _parse_full_name(name)
assert 'com.example.services' == group
assert 'auth' == artifact
def test_parse_full_name_single_name():
from stac.client import _parse_full_name
name = 'my-python-lib'
group, artifact = _parse_full_name(name)
assert '' == group
assert 'my-python-lib' == artifact
| 39.864769 | 101 | 0.719247 | 1,210 | 11,202 | 6.409091 | 0.08595 | 0.058801 | 0.032495 | 0.046422 | 0.915667 | 0.90303 | 0.867054 | 0.850419 | 0.829014 | 0.825919 | 0 | 0.011023 | 0.198268 | 11,202 | 280 | 102 | 40.007143 | 0.852466 | 0.001875 | 0 | 0.693467 | 0 | 0.020101 | 0.110097 | 0.050752 | 0 | 0 | 0 | 0 | 0.055276 | 1 | 0.095477 | false | 0 | 0.145729 | 0 | 0.261307 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
210fcc7d2a18ce64a4348dbf5cceabde6b713f51 | 134 | py | Python | tests/test_clean.py | jonparrott/readme_renderer | d356c29635bf06eb4b150e6ae2a224c952a15f1d | [
"Apache-2.0"
] | null | null | null | tests/test_clean.py | jonparrott/readme_renderer | d356c29635bf06eb4b150e6ae2a224c952a15f1d | [
"Apache-2.0"
] | null | null | null | tests/test_clean.py | jonparrott/readme_renderer | d356c29635bf06eb4b150e6ae2a224c952a15f1d | [
"Apache-2.0"
] | null | null | null | from readme_renderer.clean import clean
def test_invalid_link():
assert clean('<a href="http://exam](ple.com">foo</a>') is None
| 22.333333 | 66 | 0.708955 | 22 | 134 | 4.181818 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126866 | 134 | 5 | 67 | 26.8 | 0.786325 | 0 | 0 | 0 | 0 | 0 | 0.283582 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2156b3977eed64fe4d6f067d04dbccc1bdd6dd75 | 17,213 | py | Python | test/unit/transport/test_parser.py | doesitblend/ncclient | 0730e23ce5de8f8615c9ff5e63d6fc9bba8b3cf1 | [
"Apache-2.0"
] | null | null | null | test/unit/transport/test_parser.py | doesitblend/ncclient | 0730e23ce5de8f8615c9ff5e63d6fc9bba8b3cf1 | [
"Apache-2.0"
] | null | null | null | test/unit/transport/test_parser.py | doesitblend/ncclient | 0730e23ce5de8f8615c9ff5e63d6fc9bba8b3cf1 | [
"Apache-2.0"
] | null | null | null | import os
import sys
import unittest
from mock import patch
import paramiko
from ncclient import manager
from ncclient.transport.ssh import SSHSession
from ncclient.operations.third_party.juniper.rpc import *
from ncclient.operations import RaiseMode
from ncclient.transport.parser import DefaultXMLParser
try:
import selectors
except ImportError:
import selectors2 as selectors
class TestSession(unittest.TestCase):
@unittest.skipIf(sys.version_info.major == 2, "test not supported < Python3")
@patch('ncclient.transport.SSHSession.connected')
@patch('paramiko.channel.Channel.send_ready')
@patch('paramiko.channel.Channel.send')
@patch('ncclient.transport.ssh.SSHSession.close')
@patch('paramiko.channel.Channel.recv')
@patch('ncclient.transport.SSHSession')
@patch('selectors.DefaultSelector.select')
@patch('ncclient.operations.rpc.uuid4')
def test_filter_xml_sax_on(self, mock_uuid4, mock_select, mock_session, mock_recv,
mock_close, mock_send, mock_send_ready, mock_connected):
mock_send.return_value = True
mock_send_ready.return_value = -1
mock_uuid4.return_value = type('dummy', (), {'urn': "urn:uuid:e0a7abe3-fffa-11e5-b78e-b8e85604f858"})
device_handler = manager.make_device_handler({'name': 'junos', 'use_filter': True})
rpc = '<get-software-information/>'
mock_recv.side_effect = self._read_file('get-software-information.xml')
session = SSHSession(device_handler)
session._connected = True
session._channel = paramiko.Channel("c100")
session.parser = session._device_handler.get_xml_parser(session)
obj = ExecuteRpc(session, device_handler, raise_mode=RaiseMode.ALL)
obj._filter_xml = '<multi-routing-engine-results><multi-routing-engine-item><re-name/></multi-routing-engine-item></multi-routing-engine-results>'
session.run()
resp = obj.request(rpc)._NCElement__doc[0]
self.assertEqual(len(resp.xpath('multi-routing-engine-item/re-name')), 2)
        # filter_xml does not include software-information, so the response won't contain it
        self.assertEqual(len(resp.xpath('multi-routing-engine-item/software-information')), 0)
@unittest.skipIf(sys.version_info.major == 2, "test not supported < Python3")
@patch('ncclient.transport.SSHSession.connected')
@patch('paramiko.channel.Channel.send_ready')
@patch('paramiko.channel.Channel.send')
@patch('ncclient.transport.ssh.SSHSession.close')
@patch('paramiko.channel.Channel.recv')
@patch('ncclient.transport.SSHSession')
@patch('selectors.DefaultSelector.select')
@patch('ncclient.operations.rpc.uuid4')
def test_filter_xml_delimiter_rpc_reply(self, mock_uuid4, mock_select,
mock_session, mock_recv, mock_close,
mock_send, mock_send_ready,
mock_connected):
mock_send.return_value = True
mock_send_ready.return_value = -1
mock_uuid4.return_value = type('dummy', (), {'urn': "urn:uuid:e0a7abe3-fffa-11e5-b78e-b8e85604f858"})
device_handler = manager.make_device_handler({'name': 'junos', 'use_filter': True})
rpc = '<get-software-information/>'
mock_recv.side_effect = self._read_file('get-software-information.xml')[:-1] + [b"</rpc-reply>]]>", b"]]>"]
session = SSHSession(device_handler)
session._connected = True
session._channel = paramiko.Channel("c100")
session.parser = session._device_handler.get_xml_parser(session)
obj = ExecuteRpc(session, device_handler, raise_mode=RaiseMode.ALL)
obj._filter_xml = '<multi-routing-engine-results><multi-routing-engine-item><re-name/></multi-routing-engine-item></multi-routing-engine-results>'
session.run()
resp = obj.request(rpc)._NCElement__doc[0]
self.assertEqual(len(resp.xpath('multi-routing-engine-item/re-name')), 2)
self.assertEqual(len(resp.xpath('multi-routing-engine-item/software-information')), 0)
@unittest.skipIf(sys.version_info.major == 2, "test not supported < Python3")
@patch('ncclient.transport.SSHSession.connected')
@patch('paramiko.channel.Channel.send_ready')
@patch('paramiko.channel.Channel.send')
@patch('ncclient.transport.ssh.SSHSession.close')
@patch('paramiko.channel.Channel.recv')
@patch('ncclient.transport.SSHSession')
@patch('selectors.DefaultSelector.select')
@patch('ncclient.operations.rpc.uuid4')
def test_filter_xml_delimiter_multiple_rpc_reply(self, mock_uuid4, mock_select,
mock_session, mock_recv, mock_close,
mock_send, mock_send_ready,
mock_connected):
mock_send.return_value = True
mock_send_ready.return_value = -1
mock_uuid4.return_value = type('dummy', (), {'urn': "urn:uuid:e0a7abe3-fffa-11e5-b78e-b8e85604f858"})
device_handler = manager.make_device_handler({'name': 'junos', 'use_filter': True})
rpc = '<get-software-information/>'
mock_recv.side_effect = self._read_file('get-software-information.xml')[:-1] + [b"</rpc-reply>]]>",
b"]]><rpc-reply>"] + \
self._read_file('get-software-information.xml')[1:]
session = SSHSession(device_handler)
session._connected = True
session._channel = paramiko.Channel("c100")
session.parser = session._device_handler.get_xml_parser(session)
obj = ExecuteRpc(session, device_handler, raise_mode=RaiseMode.ALL)
obj._filter_xml = '<multi-routing-engine-results><multi-routing-engine-item><re-name/></multi-routing-engine-item></multi-routing-engine-results>'
session.run()
resp = obj.request(rpc)._NCElement__doc[0]
self.assertEqual(len(resp.xpath('multi-routing-engine-item/re-name')), 2)
self.assertEqual(len(resp.xpath('multi-routing-engine-item/software-information')), 0)
@unittest.skipIf(sys.version_info.major == 2, "test not supported < Python3")
@patch('ncclient.transport.SSHSession.connected')
@patch('paramiko.channel.Channel.send_ready')
@patch('paramiko.channel.Channel.send')
@patch('ncclient.transport.ssh.SSHSession.close')
@patch('paramiko.channel.Channel.recv')
@patch('ncclient.transport.SSHSession')
@patch('selectors.DefaultSelector.select')
@patch('ncclient.operations.rpc.uuid4')
def test_filter_xml_delimiter_multiple_rpc_in_parallel(self, mock_uuid4, mock_select,
mock_session, mock_recv, mock_close,
mock_send, mock_send_ready,
mock_connected):
mock_send.return_value = True
mock_send_ready.return_value = -1
mock_uuid4.side_effect = [type('xyz', (), {'urn': "urn:uuid:ddef40cb-5745-481d-974d-7188f9f2bb33"}),
type('pqr', (), {'urn': "urn:uuid:549ef9d1-024a-4fd0-88bf-047d25f0870d"})]
device_handler = manager.make_device_handler({'name': 'junos', 'use_filter': True})
rpc = '<get-software-information/>'
mock_recv.side_effect = [b"""
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:junos="http://xml.juniper.net/junos/19.2I0/junos" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:ddef40cb-5745-481d-974d-7188f9f2bb33">
<ospf-neighbor-information xmlns="http://xml.juniper.net/junos/19.2I0/junos-routing">
<ospf-neighbor>
<neighbor-address>13.1.1.2</neighbor-address>
<interface-name>ge-0/0/0.1</interface-name>
<ospf-neighbor-state>Exchange</ospf-neighbor-state>
<neighbor-id>2.2.2.2</neighbor-id>
<neighbor-priority>128</neighbor-priority>
<activity-timer>36</activity-timer>
<ospf-area>0.0.0.0</ospf-area>
<options>0x52</options>
<dr-address>13.1.1.1</dr-address>
<bdr-address>13.1.1.2</bdr-address>
<neighbor-up-time junos:seconds="17812">
04:56:52
</neighbor-up-time>
<neighbor-adjacency-time junos:seconds="17812">
04:56:52
</neighbor-adjacency-time>
<master-slave>slave</master-slave>
<sequence-number>0x204b6fd</sequence-number>
<dbd-retransmit-time>3</dbd-retransmit-time>
<lsreq-retransmit-time>0</lsreq-retransmit-time>
<lsa-list>
Link state retransmission list:
Type LSA ID Adv rtr Seq
Router 1.1.1.1 1.1.1.1 0x80000019
OpaqArea 1.0.0.1 1.1.1.1 0x80000011
Router 3.3.3.3 3.3.3.3 0x80000004
Network 23.1.1.2 3.3.3.3 0x80000001
OpaqArea 1.0.0.1 2.2.2.2 0x80000002
OpaqArea 1.0.0.1 3.3.3.3 0x80000002
OpaqArea 1.0.0.3 1.1.1.1 0x80000002
OpaqArea 1.0.0.3 3.3.3.3 0x80000001
OpaqArea 1.0.0.4 2.2.2.2 0x80000001
</lsa-list>
<ospf-neighbor-topology>
<ospf-topology-name>default</ospf-topology-name>
<ospf-topology-id>0</ospf-topology-id>
<ospf-neighbor-topology-state>Forward Only</ospf-neighbor-topology-state>
</ospf-neighbor-topology>
</ospf-neighbor>
</ospf-neighbor-information>
</rpc-reply>]]>]]><rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:junos="http://xml.juniper.net/junos/19.2I0/junos" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:549ef9d1-024a-4fd0-88bf-047d25f0870d">
<pfe-statistics>
<pfe-traffic-statistics>
<pfe-input-packets>22450</pfe-input-packets>
<input-pps>0</input-pps>
<pfe-output-packets>31992</pfe-output-packets>
<output-pps>0</output-pps>
<pfe-fabric-input>0</pfe-fabric-input>
<pfe-fabric-input-pps>0</pfe-fabric-input-pps>
<pfe-fabric-output>0</pfe-fabric-output>
<pfe-fabric-output-pps>0</pfe-fabric-output-pps>
</pfe-traffic-statistics></pfe-statistics></rpc-reply>]]>]]>"""]
session = SSHSession(device_handler)
session._connected = True
session._channel = paramiko.Channel("c100")
session.parser = session._device_handler.get_xml_parser(session)
        obj = ExecuteRpc(session, device_handler, raise_mode=RaiseMode.ALL)
obj._filter_xml = '<multi-routing-engine-results><multi-routing-engine-item><re-name/></multi-routing-engine-item></multi-routing-engine-results>'
session.run()
resp = obj.request(rpc)._NCElement__doc[0]
self.assertEqual(len(resp.xpath('pfe-traffic-statistics')), 1)
@unittest.skipIf(sys.version_info.major == 2, "test not supported < Python3")
@patch('ncclient.transport.SSHSession.connected')
@patch('paramiko.channel.Channel.send_ready')
@patch('paramiko.channel.Channel.send')
@patch('ncclient.transport.ssh.SSHSession.close')
@patch('paramiko.channel.Channel.recv')
@patch('ncclient.transport.SSHSession')
@patch('selectors.DefaultSelector.select')
@patch('ncclient.operations.rpc.uuid4')
def test_filter_xml_delimiter_splited_rpc_reply(self, mock_uuid4, mock_select,
mock_session, mock_recv, mock_close,
mock_send, mock_send_ready,
mock_connected):
mock_send.return_value = True
mock_send_ready.return_value = -1
mock_uuid4.return_value = type('dummy', (), {'urn': "urn:uuid:e0a7abe3-fffa-11e5-b78e-b8e85604f858"})
device_handler = manager.make_device_handler({'name': 'junos', 'use_filter': True})
rpc = '<get-software-information/>'
mock_recv.side_effect = self._read_file('get-software-information.xml')[:-1] + [b"</rpc", b"-reply>]]>",
b"]]><rpc-reply>"] + \
self._read_file('get-software-information.xml')[1:]
session = SSHSession(device_handler)
session._connected = True
session._channel = paramiko.Channel("c100")
session.parser = session._device_handler.get_xml_parser(session)
obj = ExecuteRpc(session, device_handler, raise_mode=RaiseMode.ALL)
obj._filter_xml = '<multi-routing-engine-results><multi-routing-engine-item><re-name/></multi-routing-engine-item></multi-routing-engine-results>'
session.run()
resp = obj.request(rpc)._NCElement__doc[0]
self.assertEqual(len(resp.xpath('multi-routing-engine-item/re-name')), 2)
self.assertEqual(len(resp.xpath('multi-routing-engine-item/software-information')), 0)
@unittest.skipIf(sys.version_info.major == 2, "test not supported < Python3")
@patch('ncclient.transport.SSHSession.connected')
@patch('paramiko.channel.Channel.send_ready')
@patch('paramiko.channel.Channel.send')
@patch('ncclient.transport.ssh.SSHSession.close')
@patch('paramiko.channel.Channel.recv')
@patch('ncclient.transport.SSHSession')
@patch('selectors.DefaultSelector.select')
@patch('ncclient.operations.rpc.uuid4')
def test_use_filter_xml_without_sax_input(self, mock_uuid4, mock_select,
mock_session, mock_recv,
mock_close, mock_send,
mock_send_ready,
mock_connected):
mock_send.return_value = True
mock_send_ready.return_value = -1
mock_uuid4.return_value = type('dummy', (), {'urn': "urn:uuid:e0a7abe3-fffa-11e5-b78e-b8e85604f858"})
device_handler = manager.make_device_handler({'name': 'junos', 'use_filter': True})
rpc = '<get-software-information/>'
mock_recv.side_effect = self._read_file('get-software-information.xml')
session = SSHSession(device_handler)
session._connected = True
session._channel = paramiko.Channel("c100")
session.parser = session._device_handler.get_xml_parser(session)
obj = ExecuteRpc(session, device_handler, raise_mode=RaiseMode.ALL)
obj._filter_xml = None
session.run()
resp = obj.request(rpc)._NCElement__doc[0]
self.assertEqual(len(resp.xpath('multi-routing-engine-item/re-name')), 2)
self.assertEqual(len(resp.xpath('multi-routing-engine-item/software-information')), 2)
@unittest.skipIf(sys.version_info.major == 2, "test not supported < Python3")
@patch('ncclient.transport.SSHSession.connected')
@patch('paramiko.channel.Channel.send_ready')
@patch('paramiko.channel.Channel.send')
@patch('ncclient.transport.ssh.SSHSession.close')
@patch('paramiko.channel.Channel.recv')
@patch('ncclient.transport.SSHSession')
@patch('selectors.DefaultSelector.select')
@patch('ncclient.operations.rpc.uuid4')
def test_use_filter_False(self, mock_uuid4, mock_select,
mock_session, mock_recv,
mock_close, mock_send,
mock_send_ready,
mock_connected):
mock_send.return_value = True
mock_send_ready.return_value = -1
mock_uuid4.return_value = type('dummy', (), {'urn': "urn:uuid:e0a7abe3-fffa-11e5-b78e-b8e85604f858"})
device_handler = manager.make_device_handler({'name': 'junos', 'use_filter': False})
rpc = '<get-software-information/>'
mock_recv.side_effect = self._read_file('get-software-information.xml')
session = SSHSession(device_handler)
session._connected = True
session._channel = paramiko.Channel("c100")
session.parser = session._device_handler.get_xml_parser(session)
obj = ExecuteRpc(session, device_handler, raise_mode=RaiseMode.ALL)
obj._filter_xml = '<multi-routing-engine-results><multi-routing-engine-item><re-name/></multi-routing-engine-item></multi-routing-engine-results>'
session.run()
resp = obj.request(rpc)._NCElement__doc[0]
self.assertEqual(len(resp.xpath('multi-routing-engine-item/re-name')), 2)
# use_filter is False, so filter_xml is ignored and the full response,
# including software-information, is returned
self.assertEqual(len(resp.xpath('multi-routing-engine-item/software-information')), 2)
self.assertIsInstance(session.parser, DefaultXMLParser)
def _read_file(self, fname):
fpath = os.path.join(os.path.dirname(__file__),
'rpc-reply', fname)
with open(fpath, "rb") as fp:
lines = fp.readlines()
return lines
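The mocked `recv` side_effect lists above feed the session byte chunks in which the NETCONF 1.0 end-of-message delimiter `]]>]]>` can itself be split across reads. A hedged sketch of the reassembly idea (not ncclient's actual parser; `DELIM` and `split_messages` are illustrative names):

```python
# NETCONF 1.0 frames each message with the ]]>]]> delimiter (RFC 6242).
DELIM = b"]]>]]>"

def split_messages(chunks):
    """Buffer raw recv() chunks and yield each complete framed message."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        # A single chunk may complete zero, one or several messages.
        while DELIM in buf:
            msg, _, buf = buf.partition(DELIM)
            yield msg

# The delimiter straddles chunk boundaries, as in the mocked tests above:
chunks = [b"<rpc-reply>1</rpc-reply>]]>", b"]]><rpc-reply>2</rpc-reply>]]", b">]]>"]
msgs = list(split_messages(chunks))
print(msgs)  # [b'<rpc-reply>1</rpc-reply>', b'<rpc-reply>2</rpc-reply>']
```

This is why the tests slice and re-join the fixture lines: they deliberately break the delimiter across `recv` calls to exercise the parser's buffering.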
# --- File: code/abc085_a_02.py (KoyanagiHitoshi/AtCoder, MIT) ---
print("2018"+input()[4:])
# --- File: nz_snow_tools/util/write_fsca_to_netcdf.py (jonoconway/nz_snow_tools, MIT) ---
"""
Writes arrays of fractional snow covered area to a netCDF file.
"""
import netCDF4 as nc
from time import strftime, gmtime
import numpy as np
# nztm_to_wgs84 is called in create_lat_lons_for_NZTMgrid below but was not
# imported; assumed to be provided by this package's util.utils module
from nz_snow_tools.util.utils import nztm_to_wgs84
def create_ncvar_temperaure(ds, no_time=False):
if no_time == False:
temp_var = ds.createVariable('air_temperature', 'f4', ('time', 'northing', 'easting',), zlib=True, complevel=4)
else:
temp_var = ds.createVariable('air_temperature', 'f4', ('northing', 'easting',), zlib=True, complevel=4)
temp_var.setncatts({
'long_name': 'surface air temperature',
'standard_name': 'air_temperature',
'units': 'K',
'description': "mean value over previous 1 hour",
'cell_methods': 'time: mean',
'missing': -9999.,
'valid_min': 230.,
'valid_max': 333.
})
return temp_var
def create_ncvar_shortwave(ds, no_time=False):
if no_time == False:
temp_var = ds.createVariable('surface_downwelling_shortwave_flux', 'f8', ('time', 'northing', 'easting',),
zlib=True, complevel=4)
else:
temp_var = ds.createVariable('surface_downwelling_shortwave_flux', 'f8', ('northing', 'easting',), zlib=True,
complevel=4)
temp_var.setncatts({
'long_name': 'downwelling shortwave radiation flux at surface',
'standard_name': 'surface_downwelling_shortwave_flux_in_air',
'units': 'W / m^2',
'description': "mean value over previous 1 hour",
'cell_methods': 'time: mean',
'missing': -9999.,
'valid_min': 0.,
'valid_max': 1500.
})
return temp_var
def create_ncvar_precipitation(ds, no_time=False):
if no_time == False:
precip_var = ds.createVariable('precipitation_amount', 'f8', ('time', 'northing', 'easting',), zlib=True, complevel=4)
else:
precip_var = ds.createVariable('precipitation_amount', 'f8', ('northing', 'easting',), zlib=True, complevel=4)
precip_var.setncatts({
'long_name': 'precipitation amount (mm)',
'standard_name': 'precipitation_amount',
'units': 'mm',
'description': "total value over previous 1 hour",
'cell_methods': 'time: sum',
'missing': -9999.,
'valid_min': 0.,
'valid_max': 2000.
})
return precip_var
def create_ncvar_fsca(ds):
fsca_var = ds.createVariable('fsca', 'u8', ('time', 'northing', 'easting',), zlib=True, complevel=4)
fsca_var.setncatts({
'long_name': 'fractional snow covered area'
# 'missing': -9999.,
# 'valid_min': 0,
# 'valid_max': 100
})
return fsca_var
def create_ncvar_swe(ds):
swe_var = ds.createVariable('swe', 'f4', ('time', 'northing', 'easting',), zlib=True, complevel=4)
swe_var.setncatts({
'long_name': 'snow water equivalent',
'missing': -9999.
# 'valid_min': 0,
# 'valid_max': 100
})
return swe_var
def create_ncvar_acc(ds):
acc_var = ds.createVariable('acc', 'f4', ('time', 'northing', 'easting',), zlib=True, complevel=4)
acc_var.setncatts({
'long_name': 'snowfall in mm snow water equivalent',
'missing': -9999.
# 'valid_min': 0,
# 'valid_max': 100
})
return acc_var
def create_ncvar_melt(ds):
melt_var = ds.createVariable('melt', 'f4', ('time', 'northing', 'easting',), zlib=True, complevel=4)
melt_var.setncatts({
'long_name': 'melt in mm snow water equivalent',
'missing': -9999.
# 'valid_min': 0,
# 'valid_max': 100
})
return melt_var
def create_lat_lons_for_NZTMgrid(extent_w=1.2e6, extent_e=1.4e6, extent_n=5.13e6, extent_s=4.82e6, resolution=250):
"""create grids of latitude and longitude corresponding to grid centres of data in nztm grid
"""
# create coordinates
x_centres = np.arange(extent_w + resolution / 2, extent_e, resolution)
y_centres = np.arange(extent_s + resolution / 2, extent_n, resolution)
y_array, x_array = np.meshgrid(y_centres, x_centres, indexing='ij')
lat_array, lon_array = nztm_to_wgs84(y_array, x_array)
return lat_array, lon_array
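The grid-centre construction above offsets each coordinate by half a cell so values sit at cell centres, not edges. A standalone sketch with the default NZTM extents, checking the resulting grid dimensions (the variable names mirror the function; the printed sizes are what the defaults imply):

```python
import numpy as np

# Default domain: 250 m cells, 1.2e6-1.4e6 m east, 4.82e6-5.13e6 m north
extent_w, extent_e = 1.2e6, 1.4e6
extent_s, extent_n = 4.82e6, 5.13e6
resolution = 250

# Half-cell offset puts coordinates at cell centres
x_centres = np.arange(extent_w + resolution / 2, extent_e, resolution)
y_centres = np.arange(extent_s + resolution / 2, extent_n, resolution)
# indexing='ij' gives (northing, easting) ordered arrays
y_array, x_array = np.meshgrid(y_centres, x_centres, indexing='ij')

print(x_centres.size, y_centres.size, x_array.shape)  # 800 1240 (1240, 800)
```

The `indexing='ij'` choice matters: it makes row index correspond to northing and column index to easting, matching the `('northing', 'easting')` dimension order used in the netCDF variables below.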
def write_nztm_grids_to_netcdf(fname, list_of_data_arrays, var_names, datetime_list, northings, eastings, lat_array,
lon_array, elevation, no_time=False):
"""
Write a netCDF file containing fractional snow covered area data
:param fname: string, full pathname of file to be created
:param list_of_data_arrays: list, list containing data arrays to be saved [[time, northings, eastings],[time, northings, eastings]]
:param var_names: list of strings corresponding to names of data arrays
:param datetime_list: list of datetime objects corresponding to data
:param northings: vector containing northings associated with data grid
:param eastings: vector containing eastings associated with data grid
:param lat_array: array containing latitudes of data grid
:param lon_array: array containing longitudes of data grid
:param elevation: array containing elevation of data grid
:return:
"""
ds = nc.Dataset(fname, 'w')
# add common attributes
ds.institution = "Bodeker Scientific"
ds.title = ''
ds.source = ''
ds.history = ''
ds.references = ''
ds.author = ''
ds.email = ''
ds.created = strftime("%Y-%m-%d %H:%M:%S", gmtime())
if no_time == False:
ds.featureType = "timeSeries"
else:
ds.comment = 'timestamp {}'.format(datetime_list.strftime('%Y%m%d%H%M'))
ds.Conventions = "CF-1.6"
if no_time == False:
ds.createDimension('time', )
t = ds.createVariable('time', 'f8', ('time',))
t.long_name = "time"
t.units = 'days since 1900-01-01 00:00:00'
t[:] = nc.date2num(datetime_list, units=t.units)
ds.createDimension('northing', len(northings))
ds.createDimension('easting', len(eastings))
ds.createDimension('latitude', len(northings))
ds.createDimension('longitude', len(eastings))
# add northing and easting dimensions as well as lat/lon variables
t = ds.createVariable('northing', 'f8', ('northing',))
t.axis = 'Y'
t.long_name = "northing in NZTM"
t.units = 'metres'
t[:] = northings
t = ds.createVariable('easting', 'f8', ('easting',))
t.axis = 'X'
t.long_name = "easting in NZTM"
t.units = 'metres'
t[:] = eastings
t = ds.createVariable('lat', 'f8', ('northing', 'easting',))
t.long_name = "latitude"
t.standard_name = "latitude"
t.units = "degrees_north"
t[:] = lat_array
t = ds.createVariable('lon', 'f8', ('northing', 'easting',))
t.long_name = "longitude"
t.standard_name = "longitude"
t.units = "degrees_east"
t[:] = lon_array
elevation_var = ds.createVariable('elevation', 'f8', ('northing', 'easting',), fill_value=-9999.)
elevation_var.long_name = "elevation (meters)"
elevation_var.standard_name = "surface_altitude"
elevation_var.units = "meters"
elevation_var[:] = elevation
if 'precipitation_amount' in var_names:
precip_var = create_ncvar_precipitation(ds, no_time=no_time)
precip_var[:] = list_of_data_arrays[var_names.index('precipitation_amount')]
if 'air_temperature' in var_names:
temp_var = create_ncvar_temperaure(ds, no_time=no_time)
temp_var[:] = list_of_data_arrays[var_names.index('air_temperature')]
if 'surface_downwelling_shortwave_flux' in var_names:
temp_var = create_ncvar_shortwave(ds, no_time=no_time)
temp_var[:] = list_of_data_arrays[var_names.index('surface_downwelling_shortwave_flux')]
if 'fsca' in var_names:
fsca_var = create_ncvar_fsca(ds)
fsca_var[:] = list_of_data_arrays[var_names.index('fsca')]
if 'swe' in var_names:
swe_var = create_ncvar_swe(ds)
swe_var[:] = list_of_data_arrays[var_names.index('swe')]
ds.close()
def setup_nztm_grid_netcdf(fname, list_of_data_arrays, var_names, datetime_list, northings, eastings, lat_array,
lon_array, elevation, no_time=False):
"""
Set up a netCDF file for fractional snow covered area data: create the
dimensions, coordinates and empty data variables, and return the open dataset
:param fname: string, full pathname of file to be created
:param list_of_data_arrays: list, list containing data arrays to be saved [[time, northings, eastings],[time, northings, eastings]]
:param var_names: list of strings corresponding to names of data arrays
:param datetime_list: list of datetime objects corresponding to data
:param northings: vector containing northings associated with data grid
:param eastings: vector containing eastings associated with data grid
:param lat_array: array containing latitudes of data grid
:param lon_array: array containing longitudes of data grid
:param elevation: array containing elevation of data grid
:return: netCDF4 Dataset with variables created but not yet populated
"""
ds = nc.Dataset(fname, 'w')
# add common attributes
ds.institution = "Bodeker Scientific"
ds.title = ''
ds.source = ''
ds.history = ''
ds.references = ''
ds.author = ''
ds.email = ''
ds.created = strftime("%Y-%m-%d %H:%M:%S", gmtime())
if no_time == False:
ds.featureType = "timeSeries"
else:
ds.comment = 'timestamp {}'.format(datetime_list.strftime('%Y%m%d%H%M'))
ds.Conventions = "CF-1.6"
if no_time == False:
ds.createDimension('time', )
t = ds.createVariable('time', 'f8', ('time',))
t.long_name = "time"
t.units = 'days since 1900-01-01 00:00:00'
t[:] = nc.date2num(datetime_list, units=t.units)
ds.createDimension('northing', len(northings))
ds.createDimension('easting', len(eastings))
ds.createDimension('latitude', len(northings))
ds.createDimension('longitude', len(eastings))
# add northing and easting dimensions as well as lat/lon variables
t = ds.createVariable('northing', 'f8', ('northing',))
t.axis = 'Y'
t.long_name = "northing in NZTM"
t.units = 'metres'
t[:] = northings
t = ds.createVariable('easting', 'f8', ('easting',))
t.axis = 'X'
t.long_name = "easting in NZTM"
t.units = 'metres'
t[:] = eastings
t = ds.createVariable('lat', 'f8', ('northing', 'easting',))
t.long_name = "latitude"
t.standard_name = "latitude"
t.units = "degrees_north"
t[:] = lat_array
t = ds.createVariable('lon', 'f8', ('northing', 'easting',))
t.long_name = "longitude"
t.standard_name = "longitude"
t.units = "degrees_east"
t[:] = lon_array
elevation_var = ds.createVariable('elevation', 'f8', ('northing', 'easting',), fill_value=-9999.)
elevation_var.long_name = "elevation (meters)"
elevation_var.standard_name = "surface_altitude"
elevation_var.units = "meters"
elevation_var[:] = elevation
if 'precipitation_amount' in var_names:
precip_var = create_ncvar_precipitation(ds, no_time=no_time)
if 'air_temperature' in var_names:
temp_var = create_ncvar_temperaure(ds, no_time=no_time)
if 'surface_downwelling_shortwave_flux' in var_names:
temp_var = create_ncvar_shortwave(ds, no_time=no_time)
if 'fsca' in var_names:
fsca_var = create_ncvar_fsca(ds)
if 'swe' in var_names:
swe_var = create_ncvar_swe(ds)
if 'acc' in var_names:
acc_var = create_ncvar_acc(ds)
if 'melt' in var_names:
melt_var = create_ncvar_melt(ds)
return ds
# --- File: models/backbone/__init__.py (trankha1655/pan_pp.origin, Apache-2.0) ---
from .builder import build_backbone
from .resnet import resnet18, resnet50, resnet101
__all__ = ['resnet18', 'resnet50', 'resnet101', 'build_backbone']
# --- File: optionvisualizer/option_formulas.py (GBERESEARCH/optionvisualizer, MIT) ---
"""
Option Pricing and Greeks formulas
"""
import numpy as np
from optionvisualizer.utilities import Utils
# pylint: disable=invalid-name
class Option():
"""
Calculate Black Scholes Option Price and Greeks
"""
@staticmethod
def price(opt_params, params):
"""
Black Scholes Option Price
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Black Scholes Option Price.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
if opt_params['option'] == "call":
opt_price = (
(opt_params['S'] * params['carry'] * params['Nd1'])
- (opt_params['K'] * np.exp(-opt_params['r'] * opt_params['T'])
* params['Nd2']))
elif opt_params['option'] == "put":
opt_price = (
(opt_params['K'] * np.exp(-opt_params['r'] * opt_params['T'])
* params['minusNd2'])
- (opt_params['S'] * params['carry'] * params['minusNd1']))
else:
# raising avoids a NameError from the unbound opt_price below
raise ValueError("Please supply an option type, 'put' or 'call'")
np.nan_to_num(opt_price, copy=False)
return opt_price
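As a standalone check of the call branch above (a sketch, not the class's own code path; `bs_call` is our helper name), the Black-Scholes price can be reproduced with only the stdlib, using `erf` for the normal CDF:

```python
from math import erf, exp, log, sqrt

def bs_call(S, K, T, r, q, sigma):
    # Standard normal CDF via erf (stdlib only)
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(S / K) + (r - q + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    # carry = exp(-q * T) in the dividend-yield model (b = r - q)
    return S * exp(-q * T) * N(d1) - K * exp(-r * T) * N(d2)

# The class defaults: S=K=100, T=0.25, r=5%, q=0, sigma=20%
price = bs_call(S=100, K=100, T=0.25, r=0.05, q=0.0, sigma=0.2)
print(price)  # roughly 4.6
```

The `carry` factor in the class plays the role of `exp(-q * T)` here; with no dividend yield it collapses to 1.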
@staticmethod
def delta(opt_params, params):
"""
Sensitivity of the option price to changes in asset price
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Option Delta.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
if opt_params['option'] == 'call':
opt_delta = params['carry'] * params['Nd1']
if opt_params['option'] == 'put':
opt_delta = params['carry'] * (params['Nd1'] - 1)
return opt_delta
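The analytic call delta above (`carry * N(d1)`, with `carry = 1` when `q = 0`) can be sanity-checked against a central finite difference of the price. A self-contained sketch, independent of the class internals:

```python
from math import erf, exp, log, sqrt

# Standard normal CDF via erf (stdlib only)
N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_price(S, K=100.0, T=0.25, r=0.05, sigma=0.2):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return S * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

S, h = 100.0, 1e-4
# Central difference: dC/dS ~ (C(S+h) - C(S-h)) / (2h)
numeric = (call_price(S + h) - call_price(S - h)) / (2 * h)
d1 = (log(S / 100.0) + (0.05 + 0.5 * 0.2 ** 2) * 0.25) / (0.2 * sqrt(0.25))
analytic = N(d1)
print(abs(numeric - analytic) < 1e-6)  # True
```

The same bump-and-reprice pattern is a useful cross-check for any of the Greeks in this module.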
@staticmethod
def theta(opt_params, params):
"""
Sensitivity of the option price to changes in time to maturity
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Option Theta.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
if opt_params['option'] == 'call':
opt_theta = (
((-opt_params['S']
* params['carry']
* params['nd1']
* opt_params['sigma'])
/ (2 * np.sqrt(opt_params['T']))
- (params['b'] - opt_params['r'])
* (opt_params['S']
* params['carry']
* params['Nd1'])
- (opt_params['r'] * opt_params['K'])
* np.exp(-opt_params['r'] * opt_params['T'])
* params['Nd2'])
/ 100)
if opt_params['option'] == 'put':
opt_theta = (
((-opt_params['S']
* params['carry']
* params['nd1']
* opt_params['sigma'] )
/ (2 * np.sqrt(opt_params['T']))
+ (params['b'] - opt_params['r'])
* (opt_params['S']
* params['carry']
* params['minusNd1'])
+ (opt_params['r'] * opt_params['K'])
* np.exp(-opt_params['r'] * opt_params['T'])
* params['minusNd2'])
/ 100)
return opt_theta
@staticmethod
def gamma(opt_params, params):
"""
Sensitivity of delta to changes in the underlying asset price
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Option Gamma.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
opt_gamma = ((params['nd1'] * params['carry'])
/ (opt_params['S'] * opt_params['sigma']
* np.sqrt(opt_params['T'])))
return opt_gamma
@staticmethod
def vega(opt_params, params):
"""
Sensitivity of the option price to changes in volatility
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Option Vega.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
opt_vega = ((opt_params['S'] * params['carry']
* params['nd1'] * np.sqrt(opt_params['T'])) / 100)
return opt_vega
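The division by 100 above rescales vega to a price change per one percentage-point move in implied volatility. A sketch with the class defaults (`q = 0`, so `carry = 1`):

```python
from math import exp, log, sqrt, pi

# Standard normal pdf (the nd1 term in the class)
n = lambda x: exp(-0.5 * x * x) / sqrt(2.0 * pi)

S, K, T, r, sigma = 100.0, 100.0, 0.25, 0.05, 0.2
d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
# Vega = S * n(d1) * sqrt(T); /100 expresses it per 1 vol point
vega_per_point = S * n(d1) * sqrt(T) / 100.0
print(vega_per_point)  # roughly 0.196
```

So for these inputs, a 20% -> 21% volatility move changes the option price by roughly 0.196.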
@staticmethod
def rho(opt_params, params):
"""
Sensitivity of the option price to changes in the risk free rate
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Option Rho.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
if opt_params['option'] == 'call':
opt_rho = (
(opt_params['T'] * opt_params['K']
* np.exp(-opt_params['r'] * opt_params['T']) * params['Nd2'])
/ 10000)
if opt_params['option'] == 'put':
opt_rho = (
(-opt_params['T'] * opt_params['K']
* np.exp(-opt_params['r'] * opt_params['T'])
* params['minusNd2'])
/ 10000)
return opt_rho
@staticmethod
def vanna(opt_params, params):
"""
DdeltaDvol, DvegaDspot
Sensitivity of delta to changes in volatility
Sensitivity of vega to changes in the asset price
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Option Vanna.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
opt_vanna = (
(((-params['carry'] * params['d2'])
/ opt_params['sigma']) * params['nd1']) / 100)
return opt_vanna
@classmethod
def vomma(cls, opt_params, params):
"""
DvegaDvol, Vega Convexity, Volga, Vol Gamma
Sensitivity of vega to changes in volatility
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Option Vomma.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
opt_vomma = (
(cls.vega(opt_params, params) * (
(params['d1'] * params['d2']) / (opt_params['sigma']))) / 100)
return opt_vomma
@staticmethod
def charm(opt_params, params):
"""
DdeltaDtime, Delta Bleed
Sensitivity of delta to changes in time to maturity
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Option Charm.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
if opt_params['option'] == 'call':
opt_charm = (
(-params['carry'] * ((params['nd1'] * (
(params['b'] / (opt_params['sigma']
* np.sqrt(opt_params['T'])))
- (params['d2'] / (2 * opt_params['T']))))
+ ((params['b'] - opt_params['r']) * params['Nd1'])))
/ 100)
if opt_params['option'] == 'put':
opt_charm = (
(-params['carry'] * (
(params['nd1'] * (
(params['b']
/ (opt_params['sigma'] * np.sqrt(opt_params['T'])))
- (params['d2'] / (2 * opt_params['T']))))
- ((params['b'] - opt_params['r']) * params['minusNd1'])))
/ 100)
return opt_charm
@classmethod
def zomma(cls, opt_params, params):
"""
DgammaDvol
Sensitivity of gamma to changes in volatility
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Option Zomma.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
opt_zomma = (
(cls.gamma(opt_params, params) * (
(params['d1'] * params['d2'] - 1)
/ opt_params['sigma'])) / 100)
return opt_zomma
@classmethod
def speed(cls, opt_params, params):
"""
DgammaDspot
Sensitivity of gamma to changes in asset price
3rd derivative of option price with respect to spot
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Option Speed.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
opt_speed = -(
cls.gamma(opt_params, params) * (1 + (
params['d1'] / (opt_params['sigma']
* np.sqrt(opt_params['T']))))
/ opt_params['S'])
return opt_speed
@classmethod
def color(cls, opt_params, params):
"""
DgammaDtime, Gamma Bleed, Gamma Theta
Sensitivity of gamma to changes in time to maturity
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Option Color.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
opt_color = (
(cls.gamma(opt_params, params) * (
(opt_params['r'] - params['b']) + (
(params['b'] * params['d1'])
/ (opt_params['sigma'] * np.sqrt(opt_params['T'])))
+ ((1 - params['d1'] * params['d2']) / (2 * opt_params['T']))))
/ 100)
return opt_color
@classmethod
def ultima(cls, opt_params, params):
"""
DvommaDvol
Sensitivity of vomma to changes in volatility
3rd derivative of option price wrt volatility
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Option Ultima.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
opt_ultima = (
(cls.vomma(opt_params, params) * (
(1 / opt_params['sigma']) * (params['d1'] * params['d2']
- (params['d1'] / params['d2'])
- (params['d2'] / params['d1']) - 1)))
/ 100)
return opt_ultima
@classmethod
def vega_bleed(cls, opt_params, params):
"""
DvegaDtime
Sensitivity of vega to changes in time to maturity.
Parameters
----------
opt_params : Dict
S : Float
Underlying Stock Price. The default is 100.
K : Float
Strike Price. The default is 100.
T : Float
Time to Maturity. The default is 0.25 (3 months).
r : Float
Interest Rate. The default is 0.05 (5%).
q : Float
Dividend Yield. The default is 0.
sigma : Float
Implied Volatility. The default is 0.2 (20%).
option : Str
Option type, Put or Call. The default is 'call'
params : Dict
Dictionary of key parameters; used for refreshing distribution.
Returns
-------
Float
Option Vega Bleed.
"""
# Update distribution parameters
params = Utils.refresh_dist_params(
opt_params=opt_params, params=params)
opt_vega_bleed = (
(cls.vega(opt_params, params)
* (opt_params['r']
- params['b']
+ ((params['b'] * params['d1'])
/ (opt_params['sigma'] * np.sqrt(opt_params['T'])))
- ((1 + (params['d1'] * params['d2']) )
/ (2 * opt_params['T']))))
/ 100)
return opt_vega_bleed
@classmethod
def return_options(cls, opt_dict, params):
"""
Calculate option prices to be used in payoff diagrams.
Parameters
----------
opt_dict : Dict
Dictionary of option pricing parameters
params : Dict
Dictionary of key parameters
Returns
-------
From 1 to 4 sets of option values:
Cx_0: Current option price; Float.
Cx: Terminal Option payoff, varying by strike; Array
Cx_G: Current option value, varying by strike; Array
"""
# Dictionary to store option legs
option_legs = {}
# create array of 1000 equally spaced points between 75% of
# initial underlying price and 125%
option_legs['SA'] = np.linspace(
0.75 * opt_dict['S'], 1.25 * opt_dict['S'], 1000)
opt_params = {
'S':opt_dict['S'],
'K':opt_dict['K1'],
'T':opt_dict['T1'],
'r':opt_dict['r'],
'q':opt_dict['q'],
'sigma':opt_dict['sigma'],
'option':opt_dict['option1'],
}
# Calculate the current price of option 1
option_legs['C1_0'] = cls.price(opt_params=opt_params, params=params)
# Calculate the prices at maturity across the range of
# underlying prices in SA for option 1
change_params = {'S':option_legs['SA'], 'T':0}
opt_params.update(change_params)
option_legs['C1'] = cls.price(opt_params=opt_params, params=params)
# Calculate the current prices across the range of
# underlying prices in SA for option 1
change_params = {'T':opt_dict['T1']}
opt_params.update(change_params)
option_legs['C1_G'] = cls.price(opt_params=opt_params, params=params)
if opt_dict['legs'] > 1:
# Calculate the current price of option 2
change_params = {'S':opt_dict['S'],
'K':opt_dict['K2'],
'T':opt_dict['T2'],
'option':opt_dict['option2']}
opt_params.update(change_params)
option_legs['C2_0'] = cls.price(
opt_params=opt_params, params=params)
# Calculate the prices at maturity across the range of
# underlying prices in SA for option 2
change_params = {'S':option_legs['SA'], 'T':0}
opt_params.update(change_params)
option_legs['C2'] = cls.price(opt_params=opt_params, params=params)
# Calculate the current prices across the range of
# underlying prices in SA for option 2
change_params = {'T':opt_dict['T2']}
opt_params.update(change_params)
option_legs['C2_G'] = cls.price(
opt_params=opt_params, params=params)
if opt_dict['legs'] > 2:
# Calculate the current price of option 3
change_params = {'S':opt_dict['S'],
'K':opt_dict['K3'],
'T':opt_dict['T3'],
'option':opt_dict['option3']}
opt_params.update(change_params)
option_legs['C3_0'] = cls.price(
opt_params=opt_params, params=params)
# Calculate the prices at maturity across the range of
# underlying prices in SA for option 3
change_params = {'S':option_legs['SA'], 'T':0}
opt_params.update(change_params)
option_legs['C3'] = cls.price(opt_params=opt_params, params=params)
# Calculate the current prices across the range of
# underlying prices in SA for option 3
change_params = {'T':opt_dict['T3']}
opt_params.update(change_params)
option_legs['C3_G'] = cls.price(
opt_params=opt_params, params=params)
if opt_dict['legs'] > 3:
# Calculate the current price of option 4
change_params = {'S':opt_dict['S'],
'K':opt_dict['K4'],
'T':opt_dict['T4'],
'option':opt_dict['option4']}
opt_params.update(change_params)
option_legs['C4_0'] = cls.price(
opt_params=opt_params, params=params)
# Calculate the prices at maturity across the range of
# underlying prices in SA for option 4
change_params = {'S':option_legs['SA'], 'T':0}
opt_params.update(change_params)
option_legs['C4'] = cls.price(
opt_params=opt_params, params=params)
# Calculate the current prices across the range of
# underlying prices in SA for option 4
change_params = {'T':opt_dict['T4']}
opt_params.update(change_params)
option_legs['C4_G'] = cls.price(
opt_params=opt_params, params=params)
return option_legs
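return_options builds its payoff curves by revaluing each leg over a 1000-point grid of underlying prices, and at T=0 a call leg collapses to max(S - K, 0). A standalone sketch of that terminal-payoff step for a two-leg bull call spread over the same 75%-125% grid (pure Python instead of numpy so it runs anywhere; the 95/105 strikes are hypothetical):

```python
S0 = 100.0
n = 1000
lo, hi = 0.75 * S0, 1.25 * S0
# 1000 equally spaced points, mirroring the np.linspace 'SA' array above
SA = [lo + i * (hi - lo) / (n - 1) for i in range(n)]

def call_payoff(S, K):
    """Terminal (T=0) value of a call: max(S - K, 0)."""
    return max(S - K, 0.0)

# hypothetical bull call spread: long the 95 call, short the 105 call
spread = [call_payoff(s, 95.0) - call_payoff(s, 105.0) for s in SA]
```

The payoff is flat at zero below the long strike, rises linearly between the strikes, and caps at the 10-point strike width above the short strike.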
# --- PhdTester/phdTester/graph.py (repo: Koldar/phdTester, license: MIT) ---
import abc
import os
from typing import Any, Iterable, Tuple, List, Dict
class ISingleDirectedGraph(abc.ABC):
@abc.abstractmethod
def add_vertex(self, payload, aid: Any = None) -> Any:
pass
@abc.abstractmethod
def get_vertex(self, aid) -> Any:
pass
@abc.abstractmethod
def contains_vertex(self, aid) -> bool:
pass
@abc.abstractmethod
def remove_vertex(self, aid) -> None:
pass
@abc.abstractmethod
def vertices(self) -> Iterable[Tuple[Any, Any]]:
pass
def __getitem__(self, item) -> Any:
return self.get_vertex(item)
def __setitem__(self, key, value) -> None:
self.add_vertex(aid=key, payload=value)
def __contains__(self, item) -> bool:
return self.contains_vertex(item)
@abc.abstractmethod
def add_edge(self, source, sink, payload) -> None:
pass
@abc.abstractmethod
def get_edge(self, source, sink) -> Any:
pass
@abc.abstractmethod
def contains_edge(self, source, sink) -> bool:
pass
@abc.abstractmethod
def remove_edge(self, source, sink) -> None:
pass
@abc.abstractmethod
def edges(self, source: Any = None, sink: Any = None) -> Iterable[Tuple[Any, Any, Any]]:
pass
@abc.abstractmethod
def successors(self, source: Any) -> Iterable[Any]:
pass
@abc.abstractmethod
def predecessors(self, sink: Any) -> Iterable[Any]:
pass
@abc.abstractmethod
def out_edges(self, source: Any) -> Iterable[Tuple[Any, Any, Any]]:
"""
:param source: the key of the node whose out edges we want to compute
:return: edges going out from source. it returns an iterable of 3 elements: the key of the source,
the key of the sink and the payload attached to the edge
"""
pass
@abc.abstractmethod
def in_edges(self, source: Any) -> Iterable[Tuple[Any, Any, Any]]:
"""
:param source: the key of the node whose in edges we want to compute
:return: edges going in source. it returns an iterable of 3 elements: the key of the source,
the key of the sink and the payload attached to the edge
"""
pass
def in_degree(self, n: Any) -> int:
"""
:param n: the key of the vertex
:return: number of edges going in n
"""
return len(list(self.in_edges(n)))
def out_degree(self, n: Any) -> int:
"""
:param n: the key of a vertex
:return: number of edges going out from n
"""
return len(list(self.out_edges(n)))
@property
def roots(self) -> Iterable[Tuple[Any, Any]]:
"""
roots are vertices in the graph which have no predecessors
:return: an iterable of tuples where the first element is the key of a root while the second one is the payload
associated to the vertex
"""
for n, payload in self.vertices():
if self.in_degree(n) == 0:
yield (n, payload)
class IMultiDirectedGraph(abc.ABC):
@abc.abstractmethod
def add_vertex(self, payload, aid: Any = None) -> Any:
pass
@abc.abstractmethod
def get_vertex(self, aid) -> Any:
pass
@abc.abstractmethod
def contains_vertex(self, aid) -> bool:
pass
@abc.abstractmethod
def remove_vertex(self, aid) -> None:
pass
@abc.abstractmethod
def vertices(self) -> Iterable[Tuple[Any, Any]]:
pass
def __getitem__(self, item) -> Any:
return self.get_vertex(item)
def __setitem__(self, key, value) -> None:
self.add_vertex(aid=key, payload=value)
def __contains__(self, item) -> bool:
return self.contains_vertex(item)
@abc.abstractmethod
def add_edge(self, source, sink, payload):
"""
add a new edge in the graph. between the same source and sink there can be multiple albeit different edges
:param source: the source of the edge
:param sink: the sink of the edge
:param payload: the payload attached to the given edge
:return:
"""
pass
@abc.abstractmethod
def get_edge(self, source, sink) -> Iterable[Tuple[Any, Any, Any]]:
"""
get the edges between a source and a sink
:param source:
:param sink:
:return:
"""
pass
@abc.abstractmethod
def contains_edge(self, source, sink) -> bool:
pass
@abc.abstractmethod
def remove_edge(self, source, sink, payload):
pass
@abc.abstractmethod
def edges(self, source: Any = None, sink: Any = None) -> Iterable[Tuple[Any, Any, Any]]:
pass
@abc.abstractmethod
def successors(self, source: Any) -> Iterable[Any]:
"""
nodes which are connected by a directed edge whose source is `source`
:param source:
:return:
"""
pass
@abc.abstractmethod
def predecessors(self, sink: Any) -> Iterable[Any]:
"""
nodes which are connected by a directed edge whose sink is `sink`
:param sink:
:return:
"""
pass
@abc.abstractmethod
def out_edges(self, source: Any) -> Iterable[Tuple[Any, Any, Any]]:
"""
:param source: the key of the node whose out edges we want to compute
:return: edges going out from source. it returns an iterable of 3 elements: the key of the source,
the key of the sink and the payload attached to the edge
"""
pass
@abc.abstractmethod
def in_edges(self, source: Any) -> Iterable[Tuple[Any, Any, Any]]:
"""
:param source: the key of the node whose in edges we want to compute
:return: edges going in source. it returns an iterable of 3 elements: the key of the source,
the key of the sink and the payload attached to the edge
"""
pass
def in_degree(self, n: Any) -> int:
"""
:param n: the key of the vertex
:return: number of edges going in n
"""
return len(list(self.in_edges(n)))
def out_degree(self, n: Any) -> int:
"""
:param n: the key of a vertex
:return: number of edges going out from n
"""
return len(list(self.out_edges(n)))
@property
def roots(self) -> Iterable[Tuple[Any, Any]]:
"""
roots are vertices in the graph which have no predecessors
:return: an iterable of tuples where the first element is the key of a root while the second one is the payload
associated to the vertex
"""
for n, payload in self.vertices():
if self.in_degree(n) == 0:
yield (n, payload)
class SimpleSingleDirectedGraph(ISingleDirectedGraph):
next_id = 0
def __init__(self):
self._vertices = {} # Dict[Any, Any]
self._edges = {} # Dict[Any, Dict[Any, Any]]
@staticmethod
def generate_vertex_id() -> int:
result = SimpleSingleDirectedGraph.next_id
SimpleSingleDirectedGraph.next_id += 1
return result
def check_vertex(self, vertex_name: str):
# the requested vertex may not exist in the graph
if vertex_name not in self._vertices:
raise KeyError("node named '{}' not found. available names are {}".format(vertex_name, ', '.join(map(str, self._vertices.keys()))))
def add_vertex(self, payload, aid: Any = None) -> Any:
if aid is None:
aid = SimpleSingleDirectedGraph.generate_vertex_id()
if aid in self._vertices:
raise KeyError(f"key {aid} is already indexing a vertex!")
self._vertices[aid] = payload
return aid
def get_vertex(self, aid) -> Any:
if aid not in self._vertices:
raise KeyError(f"key {aid} is not indexing a vertex in the graph!")
return self._vertices[aid]
def contains_vertex(self, aid: Any) -> bool:
return aid in self._vertices
def remove_vertex(self, aid) -> None:
if aid not in self:
raise KeyError(f"vertex {aid} not found!")
# remove all the edges involved in the vertex
edges_to_remove = []
for s, t, payload in self.edges(source=aid, sink=None):
edges_to_remove.append((s, t))
for s, t, payload in self.edges(source=None, sink=aid):
edges_to_remove.append((s, t))
for s, t in edges_to_remove:
self.remove_edge(s, t)
del self._vertices[aid]
def vertices(self) -> Iterable[Tuple[Any, Any]]:
return map(lambda k: (k, self._vertices[k]), self._vertices)
def add_edge(self, source, sink, payload) -> None:
if source not in self._edges:
self._edges[source] = {}
if sink in self._edges[source]:
raise KeyError(f"edge {source}->{sink} already exists")
self._edges[source][sink] = payload
def get_edge(self, source, sink) -> Any:
return self._edges[source][sink]
def contains_edge(self, source, sink) -> bool:
if source not in self._edges:
return False
if sink not in self._edges[source]:
return False
return True
def remove_edge(self, source, sink) -> None:
del self._edges[source][sink]
def edges(self, source: Any = None, sink: Any = None) -> Iterable[Tuple[Any, Any, Any]]:
sources = self._vertices.keys() if source is None else [source]
sinks = self._vertices.keys() if sink is None else [sink]
for source in sources:
if source not in self._edges:
continue
for sink in self._edges[source]:
if sink not in sinks:
continue
yield (source, sink, self._edges[source][sink])
def successors(self, n: Any) -> Iterable[Any]:
self.check_vertex(n)
# the vertex may have no successors; in that case just end the generator
# (PEP 479 turns a StopIteration raised here into a RuntimeError)
if n not in self._edges:
return
for sink, payload in self._edges[n].items():
yield sink
def predecessors(self, n: Any) -> Iterable[Any]:
self.check_vertex(n)
for source in self._edges:
if n in self._edges[source]:
yield source
def out_edges(self, n: Any) -> Iterable[Tuple[Any, Any, Any]]:
self.check_vertex(n)
# the vertex may have no successors; in that case just end the generator
# (PEP 479 turns a StopIteration raised here into a RuntimeError)
if n not in self._edges:
return
for sink, payload in self._edges[n].items():
yield (n, sink, payload)
def in_edges(self, n: Any) -> Iterable[Tuple[Any, Any, Any]]:
self.check_vertex(n)
for source in self._edges:
if n in self._edges[source]:
yield (source, n, self._edges[source][n])
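A usage sketch of the single-graph API. Since the snippet must run on its own, it re-implements the same nested-dict edge storage in a compact hypothetical `MiniDigraph` rather than importing the class:

```python
class MiniDigraph:
    """Compact stand-in for SimpleSingleDirectedGraph: one edge per (source, sink)."""
    def __init__(self):
        self._vertices = {}   # id -> payload
        self._edges = {}      # source -> {sink: payload}

    def add_vertex(self, aid, payload):
        self._vertices[aid] = payload

    def add_edge(self, source, sink, payload):
        self._edges.setdefault(source, {})[sink] = payload

    def in_degree(self, n):
        return sum(1 for src in self._edges if n in self._edges[src])

    def roots(self):
        # roots are vertices with no incoming edges
        return [n for n in self._vertices if self.in_degree(n) == 0]

g = MiniDigraph()
for name in "abc":
    g.add_vertex(name, name.upper())
g.add_edge("a", "b", "a->b")
g.add_edge("a", "c", "a->c")
g.add_edge("b", "c", "b->c")
```

Here "a" is the only root, and "c" has in-degree 2, exactly the shape the `roots` property and `in_degree` helper of the class compute.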
class SimpleMultiDirectedGraph(IMultiDirectedGraph):
next_id = 0
def __init__(self):
self._vertices: Dict[Any, Any] = {}
self._edges: Dict[Any, Dict[Any, List[Any]]] = {}
@staticmethod
def generate_vertex_id() -> int:
result = SimpleMultiDirectedGraph.next_id
SimpleMultiDirectedGraph.next_id += 1
return result
def check_vertex(self, vertex_name: str):
# the requested vertex may not exist in the graph
if vertex_name not in self._vertices:
raise KeyError("node named '{}' not found. available names are {}".format(
vertex_name,
', '.join(map(str, self._vertices.keys()))
))
def add_vertex(self, payload, aid: Any = None) -> Any:
if aid is None:
aid = SimpleMultiDirectedGraph.generate_vertex_id()
if aid in self._vertices:
raise KeyError(f"key {aid} is already indexing a vertex!")
self._vertices[aid] = payload
return aid
def get_vertex(self, aid) -> Any:
if aid not in self._vertices:
raise KeyError(f"key {aid} is not indexing a vertex in the graph!")
return self._vertices[aid]
def contains_vertex(self, aid: Any) -> bool:
return aid in self._vertices
def remove_vertex(self, aid) -> None:
if aid not in self:
raise KeyError(f"vertex {aid} not found!")
# remove all the edges involved in the vertex
edges_to_remove = []
for s, t, payload in self.edges(source=aid, sink=None):
edges_to_remove.append((s, t, payload))
for s, t, payload in self.edges(source=None, sink=aid):
edges_to_remove.append((s, t, payload))
for s, t, payload in edges_to_remove:
self.remove_edge(s, t, payload)
del self._vertices[aid]
def vertices(self) -> Iterable[Tuple[Any, Any]]:
return map(lambda k: (k, self._vertices[k]), self._vertices)
def add_edge(self, source, sink, payload):
if source not in self._edges:
self._edges[source] = {}
if sink not in self._edges[source]:
self._edges[source][sink] = []
self._edges[source][sink].append(payload)
def get_edge(self, source, sink) -> Iterable[Tuple[Any, Any, Any]]:
for payload in self._edges[source][sink]:
yield (source, sink, payload)
def contains_edge(self, source, sink) -> bool:
if source not in self._edges:
return False
if sink not in self._edges[source]:
return False
return True
def remove_edge(self, source, sink, payload):
if source not in self._edges:
raise KeyError(f"source {source} has no outgoing edges!")
if sink not in self._edges[source]:
raise KeyError(f"source {source} has no outgoing edges towards {sink}!")
self._edges[source][sink].remove(payload)
if len(self._edges[source][sink]) == 0:
del self._edges[source][sink]
def edges(self, source: Any = None, sink: Any = None) -> Iterable[Tuple[Any, Any, Any]]:
sources = self._vertices.keys() if source is None else [source]
sinks = self._vertices.keys() if sink is None else [sink]
for source in sources:
if source not in self._edges:
continue
for sink in self._edges[source]:
if sink not in sinks:
continue
for payload in self._edges[source][sink]:
yield (source, sink, payload)
def successors(self, n: Any) -> Iterable[Any]:
self.check_vertex(n)
# the vertex may have no successors; in that case just end the generator
# (PEP 479 turns a StopIteration raised here into a RuntimeError)
if n not in self._edges:
return
visited = set()
for sink, payload in self._edges[n].items():
if sink in visited:
continue
visited.add(sink)
yield sink
def predecessors(self, n: Any) -> Iterable[Any]:
self.check_vertex(n)
visited = set()
for source in self._edges:
if n in self._edges[source]:
if source in visited:
continue
visited.add(source)
yield source
def out_edges(self, n: Any) -> Iterable[Tuple[Any, Any, Any]]:
self.check_vertex(n)
# the vertex may have no successors; in that case just end the generator
# (PEP 479 turns a StopIteration raised here into a RuntimeError)
if n not in self._edges:
return
for sink, edges in self._edges[n].items():
for payload in edges:
yield (n, sink, payload)
def in_edges(self, n: Any) -> Iterable[Tuple[Any, Any, Any]]:
self.check_vertex(n)
for source in self._edges:
if n in self._edges[source]:
for payload in self._edges[source][n]:
yield (source, n, payload)
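What distinguishes SimpleMultiDirectedGraph is its `source -> sink -> [payloads]` storage: parallel edges coexist in the list, and remove_edge drops only one payload, deleting the bucket when it empties. A standalone sketch of that invariant, with plain dicts and functions standing in for the class:

```python
# minimal model of the multigraph edge store: source -> sink -> [payloads]
edges = {}

def add_edge(edges, source, sink, payload):
    edges.setdefault(source, {}).setdefault(sink, []).append(payload)

def remove_edge(edges, source, sink, payload):
    edges[source][sink].remove(payload)
    if not edges[source][sink]:
        del edges[source][sink]   # drop the empty bucket, as the class does

add_edge(edges, "a", "b", "first")
add_edge(edges, "a", "b", "second")   # parallel edge between the same endpoints
remove_edge(edges, "a", "b", "first")
remaining = list(edges["a"]["b"])     # the parallel edge survives
remove_edge(edges, "a", "b", "second")
bucket_dropped = "b" not in edges.get("a", {})
```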
class IMultiDirectedHyperGraph(abc.ABC):
"""
A hypergraph. Each edge is a hyperedge with one source and several sinks,
and each hyperedge has a single payload attached.
Between the same source and sinks there can be several hyperedges.
"""
class HyperEdge(object):
"""
Represents a hyperedge inside a hypergraph.
For example
A -> (B, C, D)
"""
def __init__(self, source: Any, sinks: Iterable[Any], payload: Any):
"""
Create a new hyper edge
:param source: the vertex representing the source of the hyperedge
:param sinks: sorted list of the sinks of the hyperedge
:param payload: object attached to the hyperedge
"""
self.source = source
self.sinks = list(sinks)
self.payload = payload
def is_compliant(self, source: Any, sinks: Iterable[Any]) -> bool:
"""
:param source:
:param sinks:
:return: true if this hyperedge has exactly the given source and the same set of sinks
"""
return self.source == source and set(self.sinks) == set(sinks)
def is_laid_on(self, vertices: Iterable[Any]) -> bool:
"""
:param vertices: the set of vertices id to handle
:return: true if all the endpoints belong to the given set, false otherwise
"""
return self.source in vertices and all(map(lambda sink: sink in vertices, self.sinks))
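Both predicates reduce to membership and set comparisons over the endpoints, so the order of sinks never matters to is_compliant. A standalone sketch, with a hypothetical `Edge` mirroring HyperEdge's two methods:

```python
class Edge:
    """Standalone mirror of HyperEdge's is_compliant / is_laid_on predicates."""
    def __init__(self, source, sinks, payload):
        self.source, self.sinks, self.payload = source, list(sinks), payload

    def is_compliant(self, source, sinks):
        # sinks are compared as sets, so their order does not matter
        return self.source == source and set(self.sinks) == set(sinks)

    def is_laid_on(self, vertices):
        # every endpoint (source and all sinks) must lie in the given set
        return self.source in vertices and all(s in vertices for s in self.sinks)

e = Edge("a", ["b", "c", "d"], payload=None)
```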
def __str__(self) -> str:
return f"{self.source} -> {self.sinks}"
@abc.abstractmethod
def add_vertex(self, payload, aid: Any = None) -> Any:
"""
Insert a new vertex in the hypergraph
:param payload: the payload attached to the vertex
:param aid: the id of the vertex; if None, an id is generated automatically
:raises KeyError: if aid is already the id of a vertex in this graph
:return: the id of the newly added vertex
"""
pass
@abc.abstractmethod
def get_vertex(self, aid: Any) -> Any:
"""
:param aid: the id of the vertex we're looking for
:raises KeyError: if aid is not an id of a vertex in the graph
:return: the payload of the vertex we're looking for
"""
pass
@abc.abstractmethod
def contains_vertex(self, aid: Any) -> bool:
"""
:param aid: the id of the vertex we're looking for
:return: true if aid is the id of a vertex inside the graph, false otherwise
"""
pass
@abc.abstractmethod
def vertices(self) -> Iterable[Tuple[Any, Any]]:
"""
Iterable of all the vertices inside the graph
:return:
"""
pass
@abc.abstractmethod
def size(self) -> int:
"""
:return: number of vertices in the graph
"""
pass
def is_empty(self) -> bool:
"""
:return: true if the graph has no vertex, false otherwise
"""
return self.size() == 0
def get_vertices_number(self) -> int:
return self.size()
@abc.abstractmethod
def add_edge(self, source: Any, sinks: Iterable[Any], payload) -> "IMultiDirectedHyperGraph.HyperEdge":
"""
add a new hyper edge in the hyper graph. between the same source and sink there can be
multiple albeit different edges
:param source: the source of the edge
:param sinks: a sorted list of sinks of the hyper edge
:param payload: the payload attached to the given hyper edge
:return: the hyperedge representing the edge we want
"""
pass
@abc.abstractmethod
def get_edge(self, source: Any, sinks: Iterable[Any]) -> Iterable[HyperEdge]:
"""
get the edges between a source and several hyperedge sinks
:param source: the source of the hyperedge we want
:param sinks: the sinks of the hyper edge we want
:return: an iterable of all the hyper edges having source `source` and exactly the sinks in `sinks`
"""
pass
@abc.abstractmethod
def contains_edge(self, source: Any, sinks: Iterable[Any]) -> bool:
"""
Check if an hyper edge with such source and sinks exists in the graph
:param source: the id of the source of the hyperedge
:param sinks: the ids of the sinks of the hyperedge
:return: true if there is at least one hyper edge with exactly the source and the sinks given, false otherwise
"""
pass
@abc.abstractmethod
def successors(self, source: Any) -> Iterable[Any]:
"""
vertices which are connected by a directed hyperedge whose source is `source`
:param source: id of the node handle
:return: iterable of all the vertices which are successors to the node `source`
"""
pass
@abc.abstractmethod
def predecessors(self, sink: Any) -> Iterable[Any]:
"""
vertices which are the source of a directed hyperedge having `sink` among its sinks
:param sink: id of a sink vertex
:return: iterable of source vertices of hyperedges with at least one sink equal to `sink`
"""
pass
@abc.abstractmethod
def edges(self) -> Iterable[HyperEdge]:
"""
:return: iterable of all the hyper edges in the graph
"""
pass
@abc.abstractmethod
def out_edges(self, source: Any) -> Iterable[HyperEdge]:
"""
iterable of hyper edges going out from a vertex
:param source: the key of the vertex whose out edges we want to compute
:return: hyper edges going out from source.
"""
pass
@abc.abstractmethod
def in_edges(self, source: Any) -> Iterable[HyperEdge]:
"""
iterable of hyper edges going in a vertex
:param source: the key of the node whose in edges we want to compute
:return: hyperedges going into the vertex, i.e. those having it among their sinks
"""
pass
def in_degree(self, n: Any) -> int:
"""
:param n: the key of the vertex
:return: number of hyper edges going inside the vertex n
"""
return len(list(self.in_edges(n)))
def out_degree(self, n: Any) -> int:
"""
:param n: the key of a vertex
:return: number of hyper edges going out from n
"""
return len(list(self.out_edges(n)))
@property
def roots(self) -> Iterable[Tuple[Any, Any]]:
"""
roots are vertices in the graph which have no predecessors
:return: an iterable of tuples where the first element is the key of a root while the second one is the payload
associated to the vertex
"""
for n, payload in self.vertices():
if self.in_degree(n) == 0:
yield (n, payload)
class DefaultMultiDirectedHyperGraph(IMultiDirectedHyperGraph):
next_id = 0
def __init__(self):
self._vertices: Dict[Any, Any] = {}
self._edges: List["IMultiDirectedHyperGraph.HyperEdge"] = []
"""
List of hyper edges.
"""
@staticmethod
def _generate_vertex_id() -> int:
"""
generates a new vertex id
:return:
"""
result = DefaultMultiDirectedHyperGraph.next_id
DefaultMultiDirectedHyperGraph.next_id += 1
return result
def add_vertex(self, payload, aid: Any = None) -> Any:
if aid is None:
aid = DefaultMultiDirectedHyperGraph._generate_vertex_id()
if aid in self._vertices:
raise KeyError(f"key {aid} is already an id of a vertex in this hypergraph!")
self._vertices[aid] = payload
return aid
def get_vertex(self, aid: Any) -> Any:
return self._vertices[aid]
def contains_vertex(self, aid: Any) -> bool:
return aid in self._vertices
def vertices(self) -> Iterable[Tuple[Any, Any]]:
yield from self._vertices.items()
def size(self) -> int:
return len(self._vertices)
def add_edge(self, source: Any, sinks: Iterable[Any], payload) -> "IMultiDirectedHyperGraph.HyperEdge":
result = IMultiDirectedHyperGraph.HyperEdge(source=source, sinks=list(sinks), payload=payload)
self._edges.append(result)
return result
def get_edge(self, source: Any, sinks: Iterable[Any]) -> Iterable[IMultiDirectedHyperGraph.HyperEdge]:
for edge in self._edges:
if edge.is_compliant(source, sinks):
yield edge
def contains_edge(self, source: Any, sinks: Iterable[Any]) -> bool:
# true as soon as any hyperedge matches both the source and the sinks
return any(edge.is_compliant(source, sinks) for edge in self._edges)
def successors(self, source: Any) -> Iterable[Any]:
visited = set()
for edge in self._edges:
if edge.source == source:
for sink in edge.sinks:
if sink not in visited:
visited.add(sink)
yield sink
def predecessors(self, sink: Any) -> Iterable[Any]:
visited = set()
for edge in self._edges:
if sink in edge.sinks and edge.source not in visited:
visited.add(edge.source)
yield edge.source
def edges(self) -> Iterable[IMultiDirectedHyperGraph.HyperEdge]:
yield from self._edges
def out_edges(self, source: Any) -> Iterable[IMultiDirectedHyperGraph.HyperEdge]:
for edge in self._edges:
if edge.source == source:
yield edge
def in_edges(self, source: Any) -> Iterable[IMultiDirectedHyperGraph.HyperEdge]:
for edge in self._edges:
if source in edge.sinks:
yield edge
def generate_image(self, output_file: str):
with open(f"{output_file}.dot", "w") as dotfile:
dotfile.write("digraph {\n")
dotfile.write("\trankdir=\"TB\";\n")
# add all edges
for index, hyperedge in enumerate(self._edges):
dotfile.write(f"\tN{hyperedge.source} -> HE{index:04d} [arrowhead=\"none\"];\n")
for sink in hyperedge.sinks:
dotfile.write(f"\tHE{index:04d} -> N{sink};\n")
# add all vertices of graph
for index, vertex in self._vertices.items():
dotfile.write(f"\tN{index} [label=\"{index}\"];\n")
# add all vertices of hyper edges
for index, hyperedge in enumerate(self._edges):
dotfile.write(f"\tHE{index:04d} [shape=\"point\", label=\"\"];\n")
dotfile.write("}\n")
os.system(f"dot -Tsvg -o \"{output_file}.svg\" \"{output_file}.dot\"")
os.remove(f"{output_file}.dot")
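generate_image shells out to Graphviz's `dot`, so it fails where Graphviz is not installed. The emission step itself is pure string building; a standalone sketch (hypothetical `hypergraph_to_dot`, edges given as `(source, sinks)` pairs) producing the same point-node encoding without invoking `dot`:

```python
def hypergraph_to_dot(edges):
    """Render hyperedges as Graphviz text; each hyperedge becomes a point node."""
    lines = ["digraph {", '\trankdir="TB";']
    vertices = set()
    for index, (source, sinks) in enumerate(edges):
        vertices.add(source)
        vertices.update(sinks)
        # the source connects to the point node, which fans out to the sinks
        lines.append(f'\tN{source} -> HE{index:04d} [arrowhead="none"];')
        for sink in sinks:
            lines.append(f"\tHE{index:04d} -> N{sink};")
    for v in sorted(vertices):
        lines.append(f'\tN{v} [label="{v}"];')
    for index in range(len(edges)):
        lines.append(f'\tHE{index:04d} [shape="point", label=""];')
    lines.append("}")
    return "\n".join(lines)

dot = hypergraph_to_dot([(0, [1, 2]), (1, [2])])
```

The resulting text can be written to a `.dot` file and fed to `dot -Tsvg` exactly as generate_image does.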
# --- Config/__init__.py (repo: Naopil/EldenBot, license: MIT) ---
from .ConfigLoader import GlobalConfig
# --- examples/NIPS/MNIST/noisy_sequence_detection/__init__.py (repo: MarcRoigVilamala/DeepProbCEP, license: MIT) ---
from .scenario001 import *
# --- delete2.py (repo: Naxaes/TextEditor, license: MIT) ---
from tkinter import *
root = Tk()
text_area = Frame(root, bd=2, relief=SUNKEN)
text_area.grid_rowconfigure(0, weight=1)
text_area.grid_columnconfigure(0, weight=1)
scrollbar_x = Scrollbar(text_area, orient=HORIZONTAL)
scrollbar_x.grid(row=1, column=0, sticky=E + W)
scrollbar_y = Scrollbar(text_area)
scrollbar_y.grid(row=0, column=1, sticky=N + S)
text = Text(text_area, wrap=NONE, bd=0, xscrollcommand=scrollbar_x.set, yscrollcommand=scrollbar_y.set)
text.grid(row=0, column=0, sticky=N+S+E+W)
scrollbar_x.config(command=text.xview)
scrollbar_y.config(command=text.yview)
text_area.pack(side=TOP)
output_area = Frame(root, bd=2, relief=SUNKEN)
output_area.grid_rowconfigure(0, weight=1)
output_area.grid_columnconfigure(0, weight=1)
scrollbar_x = Scrollbar(output_area, orient=HORIZONTAL)
scrollbar_x.grid(row=1, column=0, sticky=E + W)
scrollbar_y = Scrollbar(output_area)
scrollbar_y.grid(row=0, column=1, sticky=N + S)
text = Text(output_area, wrap=NONE, bd=0, xscrollcommand=scrollbar_x.set, yscrollcommand=scrollbar_y.set)
text.grid(row=0, column=0, sticky=N+S+E+W)
scrollbar_x.config(command=text.xview)
scrollbar_y.config(command=text.yview)
output_area.pack(side=BOTTOM)
mainloop()
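The two widget blocks above are line-for-line duplicates apart from the parent frame's pack side and the name bound to the Text widget (which the second block silently overwrites). A hedged refactoring sketch (the `make_scrollable_text` helper is hypothetical, not part of the original script) that builds one scrollable Text area:

```python
from tkinter import (Frame, Scrollbar, Text, SUNKEN, HORIZONTAL,
                     NONE, N, S, E, W)

def make_scrollable_text(root, side):
    """Build a bordered frame holding a Text widget with x/y scrollbars."""
    area = Frame(root, bd=2, relief=SUNKEN)
    area.grid_rowconfigure(0, weight=1)
    area.grid_columnconfigure(0, weight=1)
    scrollbar_x = Scrollbar(area, orient=HORIZONTAL)
    scrollbar_x.grid(row=1, column=0, sticky=E + W)
    scrollbar_y = Scrollbar(area)
    scrollbar_y.grid(row=0, column=1, sticky=N + S)
    text = Text(area, wrap=NONE, bd=0,
                xscrollcommand=scrollbar_x.set,
                yscrollcommand=scrollbar_y.set)
    text.grid(row=0, column=0, sticky=N + S + E + W)
    scrollbar_x.config(command=text.xview)
    scrollbar_y.config(command=text.yview)
    area.pack(side=side)
    return text
```

With this helper the script body reduces to two calls such as `editor = make_scrollable_text(root, TOP)` and `output = make_scrollable_text(root, BOTTOM)`, keeping separate references to each Text widget.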
25dff0016c69482af3215732d728bd4d52c10521 | 10,272 | py | Python | db/migrations/versions/8723fb0cc02e_create_tables.py | chriamue/flask-unchained-react-spa | 610e099f3ece508f4c8a62d3704e4cc49f869194 | [
"MIT"
] | 5 | 2018-10-15T15:33:32.000Z | 2021-01-13T23:03:48.000Z | db/migrations/versions/8723fb0cc02e_create_tables.py | chriamue/flask-unchained-react-spa | 610e099f3ece508f4c8a62d3704e4cc49f869194 | [
"MIT"
] | 15 | 2018-10-15T20:14:21.000Z | 2022-03-15T19:15:09.000Z | db/migrations/versions/8723fb0cc02e_create_tables.py | chriamue/flask-unchained-react-spa | 610e099f3ece508f4c8a62d3704e4cc49f869194 | [
"MIT"
] | 4 | 2018-10-15T15:59:25.000Z | 2020-04-11T17:48:35.000Z | """create tables
Revision ID: 8723fb0cc02e
Revises:
Create Date: 2018-04-23 10:21:06.996560
"""
from alembic import op
import sqlalchemy as sa
import flask_unchained.bundles.sqlalchemy as flask_sqlalchemy_bundle
# revision identifiers, used by Alembic.
revision = '8723fb0cc02e'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('category',
sa.Column('name', sa.String(length=32), nullable=False),
sa.Column('slug', sa.String(length=32), nullable=False),
sa.Column('id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('created_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.Column('updated_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.PrimaryKeyConstraint('id', name=op.f('pk_category'))
)
op.create_table('contact_submission',
sa.Column('name', sa.String(length=64), nullable=False),
sa.Column('email', sa.String(length=64), nullable=False),
sa.Column('message', sa.Text(), nullable=False),
sa.Column('id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('created_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.Column('updated_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.PrimaryKeyConstraint('id', name=op.f('pk_contact_submission'))
)
op.create_table('role',
sa.Column('name', sa.String(length=64), nullable=False),
sa.Column('id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('created_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.Column('updated_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.PrimaryKeyConstraint('id', name=op.f('pk_role'))
)
op.create_index(op.f('ix_role_name'), 'role', ['name'], unique=True)
op.create_table('tag',
sa.Column('name', sa.String(length=32), nullable=False),
sa.Column('slug', sa.String(length=32), nullable=False),
sa.Column('id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('created_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.Column('updated_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.PrimaryKeyConstraint('id', name=op.f('pk_tag'))
)
op.create_table('user',
sa.Column('email', sa.String(length=64), nullable=False),
sa.Column('password', sa.String(), nullable=False),
sa.Column('active', sa.Boolean(name='active'), nullable=False),
sa.Column('confirmed_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), nullable=True),
sa.Column('username', sa.String(length=64), nullable=False),
sa.Column('first_name', sa.String(length=64), nullable=False),
sa.Column('last_name', sa.String(length=64), nullable=False),
sa.Column('id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('created_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.Column('updated_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.PrimaryKeyConstraint('id', name=op.f('pk_user'))
)
op.create_index(op.f('ix_user_email'), 'user', ['email'], unique=True)
op.create_index(op.f('ix_user_username'), 'user', ['username'], unique=True)
op.create_table('series',
sa.Column('title', sa.String(length=100), nullable=False),
sa.Column('slug', sa.String(length=100), nullable=False),
sa.Column('file_path', sa.String(length=255), nullable=True),
sa.Column('header_image', sa.String(length=255), nullable=True),
sa.Column('summary', sa.Text(), nullable=False),
sa.Column('category_id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=True),
sa.Column('id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('created_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.Column('updated_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.ForeignKeyConstraint(['category_id'], ['category.id'], name=op.f('fk_series_category_id_category')),
sa.PrimaryKeyConstraint('id', name=op.f('pk_series'))
)
op.create_table('user_role',
sa.Column('user_id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('role_id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('created_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.Column('updated_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.ForeignKeyConstraint(['role_id'], ['role.id'], name=op.f('fk_user_role_role_id_role')),
sa.ForeignKeyConstraint(['user_id'], ['user.id'], name=op.f('fk_user_role_user_id_user')),
sa.PrimaryKeyConstraint('user_id', 'role_id', name=op.f('pk_user_role'))
)
op.create_table('series_article',
sa.Column('part', sa.Integer(), nullable=False),
sa.Column('series_id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('created_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.Column('updated_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.ForeignKeyConstraint(['series_id'], ['series.id'], name=op.f('fk_series_article_series_id_series')),
sa.PrimaryKeyConstraint('id', name=op.f('pk_series_article'))
)
op.create_table('series_tag',
sa.Column('series_id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('tag_id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('created_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.Column('updated_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.ForeignKeyConstraint(['series_id'], ['series.id'], name=op.f('fk_series_tag_series_id_series')),
sa.ForeignKeyConstraint(['tag_id'], ['tag.id'], name=op.f('fk_series_tag_tag_id_tag')),
sa.PrimaryKeyConstraint('series_id', 'tag_id', name=op.f('pk_series_tag'))
)
op.create_table('article',
sa.Column('title', sa.String(length=100), nullable=False),
sa.Column('slug', sa.String(length=100), nullable=False),
sa.Column('publish_date', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), nullable=False),
sa.Column('last_updated', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), nullable=True),
sa.Column('file_path', sa.String(length=255), nullable=True),
sa.Column('header_image', sa.String(length=255), nullable=True),
sa.Column('preview', sa.Text(), nullable=False),
sa.Column('html', sa.Text(), nullable=False),
sa.Column('author_id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('article_series_id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=True),
sa.Column('category_id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=True),
sa.Column('id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('created_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.Column('updated_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.ForeignKeyConstraint(['article_series_id'], ['series_article.id'], name=op.f('fk_article_article_series_id_series_article')),
sa.ForeignKeyConstraint(['author_id'], ['user.id'], name=op.f('fk_article_author_id_user')),
sa.ForeignKeyConstraint(['category_id'], ['category.id'], name=op.f('fk_article_category_id_category')),
sa.PrimaryKeyConstraint('id', name=op.f('pk_article'))
)
op.create_table('article_tag',
sa.Column('article_id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('tag_id', flask_sqlalchemy_bundle.sqla.types.BigInteger(), nullable=False),
sa.Column('created_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.Column('updated_at', flask_sqlalchemy_bundle.sqla.types.DateTime(timezone=True), server_default=sa.text('now()'), nullable=False),
sa.ForeignKeyConstraint(['article_id'], ['article.id'], name=op.f('fk_article_tag_article_id_article')),
sa.ForeignKeyConstraint(['tag_id'], ['tag.id'], name=op.f('fk_article_tag_tag_id_tag')),
sa.PrimaryKeyConstraint('article_id', 'tag_id', name=op.f('pk_article_tag'))
)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('article_tag')
op.drop_table('article')
op.drop_table('series_tag')
op.drop_table('series_article')
op.drop_table('user_role')
op.drop_table('series')
op.drop_index(op.f('ix_user_username'), table_name='user')
op.drop_index(op.f('ix_user_email'), table_name='user')
op.drop_table('user')
op.drop_table('tag')
op.drop_index(op.f('ix_role_name'), table_name='role')
op.drop_table('role')
op.drop_table('contact_submission')
op.drop_table('category')
# ### end Alembic commands ###
| 65.012658 | 137 | 0.72868 | 1,415 | 10,272 | 5.077739 | 0.074912 | 0.07794 | 0.127349 | 0.146138 | 0.827418 | 0.807794 | 0.767989 | 0.732916 | 0.709255 | 0.694363 | 0 | 0.008598 | 0.094237 | 10,272 | 157 | 138 | 65.426752 | 0.76365 | 0.027551 | 0 | 0.371429 | 0 | 0 | 0.170016 | 0.034767 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014286 | false | 0.007143 | 0.021429 | 0 | 0.035714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d394ab8ef93d9fb13c17a4577026f96307df8d84 | 118 | py | Python | kaggle_utils/__init__.py | Ynakatsuka/kaggle_utils | a23b62745bd7881e1a91c74e17612189d1f08784 | [
"MIT"
] | 122 | 2019-10-13T02:47:18.000Z | 2022-02-21T02:01:06.000Z | kaggle_utils/__init__.py | Ynakatsuka/kaggle_utils | a23b62745bd7881e1a91c74e17612189d1f08784 | [
"MIT"
] | null | null | null | kaggle_utils/__init__.py | Ynakatsuka/kaggle_utils | a23b62745bd7881e1a91c74e17612189d1f08784 | [
"MIT"
] | 19 | 2019-08-01T02:23:09.000Z | 2022-01-15T11:22:01.000Z | from . import features
from . import models
from . import preprocess
from . import utils
from . import visualizations
| 19.666667 | 28 | 0.788136 | 15 | 118 | 6.2 | 0.466667 | 0.537634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169492 | 118 | 5 | 29 | 23.6 | 0.94898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d3a336a964acd8b72725345a3b5c1c93f17c03c0 | 79 | py | Python | ContactHands/contact_hands_two_stream/engine/__init__.py | seoyoon130/Graduation_Project | 9082cb93fb4f73c3a1577f63e906e6eb7f147dc4 | [
"Apache-2.0"
] | 26 | 2020-10-20T01:58:26.000Z | 2022-02-24T11:48:10.000Z | ContactHands/contact_hands_two_stream/engine/__init__.py | seoyoon130/Graduation_Project | 9082cb93fb4f73c3a1577f63e906e6eb7f147dc4 | [
"Apache-2.0"
] | 5 | 2020-10-21T05:39:08.000Z | 2021-09-17T13:57:29.000Z | contact_hands_two_stream/engine/__init__.py | cvlab-stonybrook/ContactHands | 6aba9a5f098b50529e589b7835264df9264844e9 | [
"MIT"
] | 1 | 2022-02-24T11:48:14.000Z | 2022-02-24T11:48:14.000Z | from .custom_arg_parser import *
from .custom_predictor import CustomPredictor
| 26.333333 | 45 | 0.860759 | 10 | 79 | 6.5 | 0.7 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101266 | 79 | 2 | 46 | 39.5 | 0.915493 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6c9ea02da1861cdcf3b0ad69f985e65134c63078 | 78 | py | Python | models/__init__.py | mnanchev/restful_password_generator | 31b58f4cccae35e8c9aacf3817425e73c20ca41c | [
"MIT"
] | null | null | null | models/__init__.py | mnanchev/restful_password_generator | 31b58f4cccae35e8c9aacf3817425e73c20ca41c | [
"MIT"
] | 1 | 2021-12-30T10:11:23.000Z | 2022-01-05T16:56:16.000Z | models/__init__.py | mnanchev/restful_password_generator | 31b58f4cccae35e8c9aacf3817425e73c20ca41c | [
"MIT"
] | null | null | null | from models.creator import CreatorModel
from models.secret import SecretModel
| 26 | 39 | 0.871795 | 10 | 78 | 6.8 | 0.7 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 78 | 2 | 40 | 39 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6cccf4ca8893430e484d7391559264b807dc4373 | 807 | py | Python | hyaml/methods/units.py | latera/hyaml | 4dc020434d25c182f8477ddd5582398e51501274 | [
"Apache-2.0"
] | 3 | 2020-04-12T15:55:11.000Z | 2021-08-02T16:26:21.000Z | hyaml/methods/units.py | latera/hyaml | 4dc020434d25c182f8477ddd5582398e51501274 | [
"Apache-2.0"
] | null | null | null | hyaml/methods/units.py | latera/hyaml | 4dc020434d25c182f8477ddd5582398e51501274 | [
"Apache-2.0"
] | null | null | null | from datetime import timedelta
def bytes(number):
return int(number)
_base = 1024
_kilo = _base ** 1
_mega = _base ** 2
_giga = _base ** 3
_tera = _base ** 4
def kilobytes(number):
return bytes(number) * _kilo
def megabytes(number):
return bytes(number) * _mega
def gigabytes(number):
return bytes(number) * _giga
def terabytes(number):
return bytes(number) * _tera
def seconds(number):
return timedelta(seconds=number)
def minutes(number):
return timedelta(minutes=number)
def hours(number):
return timedelta(hours=number)
def days(number):
return timedelta(days=number)
def weeks(number):
return timedelta(weeks=number)
def months(number):
return timedelta(days=number * 30)
def years(number):
return timedelta(days=number * 365)
| 13.913793 | 39 | 0.692689 | 103 | 807 | 5.300971 | 0.31068 | 0.263736 | 0.269231 | 0.168498 | 0.17033 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020155 | 0.200743 | 807 | 57 | 40 | 14.157895 | 0.826357 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.033333 | 0.4 | 0.833333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6ce18246c4105be740bfe93810eae477752832d9 | 100 | py | Python | opfu/time.py | XavierDingRotman/OptionsFutures | bab0de0d66efe39f05e9ddf59460ec76547d9ada | [
"Apache-2.0"
] | 1 | 2020-07-05T20:54:15.000Z | 2020-07-05T20:54:15.000Z | opfu/time.py | XavierDingRotman/OptionsFutures | bab0de0d66efe39f05e9ddf59460ec76547d9ada | [
"Apache-2.0"
] | null | null | null | opfu/time.py | XavierDingRotman/OptionsFutures | bab0de0d66efe39f05e9ddf59460ec76547d9ada | [
"Apache-2.0"
] | null | null | null | def get_T(trans_date, mature_date, base=365.2425):
return (mature_date - trans_date).days/base
| 25 | 50 | 0.75 | 17 | 100 | 4.117647 | 0.647059 | 0.257143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08046 | 0.13 | 100 | 3 | 51 | 33.333333 | 0.724138 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
9f0f318a910353a60ce7b9fd45787385b356b1b0 | 31 | py | Python | encoder_decoder/__init__.py | uyuutosa/naughty_parrot | 5047530f190950d1dfbf2a52decea4b8769c20ae | [
"MIT"
] | null | null | null | encoder_decoder/__init__.py | uyuutosa/naughty_parrot | 5047530f190950d1dfbf2a52decea4b8769c20ae | [
"MIT"
] | null | null | null | encoder_decoder/__init__.py | uyuutosa/naughty_parrot | 5047530f190950d1dfbf2a52decea4b8769c20ae | [
"MIT"
] | null | null | null | from .encoder_decoder import *
| 15.5 | 30 | 0.806452 | 4 | 31 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4cb443f8f3c8715058c667317142792cf7c9daeb | 94 | py | Python | src/datasets/__init__.py | chao1224/SGNN-EBM | bda4c486e8ecb9775b635757dbe1071878be7b8a | [
"MIT"
] | null | null | null | src/datasets/__init__.py | chao1224/SGNN-EBM | bda4c486e8ecb9775b635757dbe1071878be7b8a | [
"MIT"
] | 1 | 2022-03-25T01:47:18.000Z | 2022-03-25T01:50:12.000Z | src/datasets/__init__.py | chao1224/SGNN-EBM | bda4c486e8ecb9775b635757dbe1071878be7b8a | [
"MIT"
] | null | null | null | from .molecule_dataset import MoleculeDataset
from .knowledge_graph_dataset import PPI_dataset | 47 | 48 | 0.904255 | 12 | 94 | 6.75 | 0.666667 | 0.320988 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074468 | 94 | 2 | 48 | 47 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4cd4d01ff5582fed967630dda76ec0f5c5e71936 | 319 | py | Python | CrashCourse/makeing_pizzas.py | axetang/AxePython | 3b517fa3123ce2e939680ad1ae14f7e602d446a6 | [
"Apache-2.0"
] | 1 | 2019-01-04T05:47:50.000Z | 2019-01-04T05:47:50.000Z | CrashCourse/makeing_pizzas.py | axetang/AxePython | 3b517fa3123ce2e939680ad1ae14f7e602d446a6 | [
"Apache-2.0"
] | null | null | null | CrashCourse/makeing_pizzas.py | axetang/AxePython | 3b517fa3123ce2e939680ad1ae14f7e602d446a6 | [
"Apache-2.0"
] | null | null | null | #import pizza as p
from pizza import make_pizza as mp
#pizza.make_pizza(16, 'pepperoni')
#pizza.make_pizza(12, "mushroom", "green peppers", 'extra cheese')
#make_pizza(16, 'pepperoni')
#make_pizza(12, "mushroom", "green peppers", 'extra cheese')
mp(16, 'pepperoni')
mp(12, "mushroom", "green peppers", 'extra cheese')
| 31.9 | 66 | 0.717868 | 47 | 319 | 4.765957 | 0.319149 | 0.200893 | 0.200893 | 0.294643 | 0.522321 | 0.522321 | 0.375 | 0.375 | 0 | 0 | 0 | 0.042254 | 0.109718 | 319 | 9 | 67 | 35.444444 | 0.746479 | 0.630094 | 0 | 0 | 0 | 0 | 0.371681 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e223e4d9c816bdad72dcf3a58fe296841eb603c9 | 87 | py | Python | app/post/__init__.py | ppolle/blog | 9fcaccfb56e47d3b0b9850df77217d50da854d39 | [
"Unlicense"
] | null | null | null | app/post/__init__.py | ppolle/blog | 9fcaccfb56e47d3b0b9850df77217d50da854d39 | [
"Unlicense"
] | null | null | null | app/post/__init__.py | ppolle/blog | 9fcaccfb56e47d3b0b9850df77217d50da854d39 | [
"Unlicense"
] | null | null | null | from flask import Blueprint
post = Blueprint('post',__name__)
from . import views,forms | 29 | 33 | 0.793103 | 12 | 87 | 5.416667 | 0.666667 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114943 | 87 | 3 | 34 | 29 | 0.844156 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
e28862a91351314af37b1163a06dc93ff56d22d1 | 148 | py | Python | tests/test_maybe/test_nothing_singleton.py | ftruzzi/returns | 42fbb70ffe68e01a848908021cf8ed1a6704b13d | [
"BSD-2-Clause"
] | 2,233 | 2019-01-31T09:15:58.000Z | 2022-03-31T22:13:19.000Z | tests/test_maybe/test_nothing_singleton.py | ftruzzi/returns | 42fbb70ffe68e01a848908021cf8ed1a6704b13d | [
"BSD-2-Clause"
] | 1,062 | 2019-01-30T22:44:03.000Z | 2022-03-31T17:52:22.000Z | tests/test_maybe/test_nothing_singleton.py | ftruzzi/returns | 42fbb70ffe68e01a848908021cf8ed1a6704b13d | [
"BSD-2-Clause"
] | 105 | 2019-03-27T10:15:19.000Z | 2022-02-28T21:34:41.000Z | from returns.maybe import _Nothing
def test_nothing_singleton():
"""Ensures `_Nothing` is a singleton."""
assert _Nothing() is _Nothing()
| 21.142857 | 44 | 0.716216 | 18 | 148 | 5.555556 | 0.666667 | 0.18 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168919 | 148 | 6 | 45 | 24.666667 | 0.813008 | 0.22973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e28ef86a319c7442f0171b76280b0851068fae9a | 48 | py | Python | babyai_sr/rl/utils/__init__.py | thomasaunger/babyai_sr | 27fba5fb960640ebc405de83d5ab75b8c6d50ad7 | [
"BSD-3-Clause"
] | null | null | null | babyai_sr/rl/utils/__init__.py | thomasaunger/babyai_sr | 27fba5fb960640ebc405de83d5ab75b8c6d50ad7 | [
"BSD-3-Clause"
] | null | null | null | babyai_sr/rl/utils/__init__.py | thomasaunger/babyai_sr | 27fba5fb960640ebc405de83d5ab75b8c6d50ad7 | [
"BSD-3-Clause"
] | null | null | null | from babyai_sr.rl.utils.penv import ParallelEnv
| 24 | 47 | 0.854167 | 8 | 48 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2c3cd00cef5978d123932a805305d94813db80b5 | 60 | py | Python | src/rpc/__init__.py | Ariam27/rpc | e6b5b1db9edb93805bcce0e81cae4c65f81fbbe6 | [
"MIT"
] | null | null | null | src/rpc/__init__.py | Ariam27/rpc | e6b5b1db9edb93805bcce0e81cae4c65f81fbbe6 | [
"MIT"
] | null | null | null | src/rpc/__init__.py | Ariam27/rpc | e6b5b1db9edb93805bcce0e81cae4c65f81fbbe6 | [
"MIT"
] | null | null | null | from rpc.server import Server
from rpc.client import Client
| 20 | 29 | 0.833333 | 10 | 60 | 5 | 0.5 | 0.28 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 60 | 2 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2c3d504a3435eacff6f659c0a10169c8437adfc8 | 66 | py | Python | integration/ros2_workspace/src/test_py_package/test_py_package/test.py | rotu/colcon-bundle | b57dd328dca2750b31c6303e587d70913a9dfe9d | [
"Apache-2.0"
] | 31 | 2018-10-19T18:16:37.000Z | 2021-07-05T06:54:38.000Z | integration/ros2_workspace/src/test_py_package/test_py_package/test.py | rotu/colcon-bundle | b57dd328dca2750b31c6303e587d70913a9dfe9d | [
"Apache-2.0"
] | 113 | 2018-10-24T17:33:50.000Z | 2022-02-08T20:36:19.000Z | integration/ros2_workspace/src/test_py_package/test_py_package/test.py | rotu/colcon-bundle | b57dd328dca2750b31c6303e587d70913a9dfe9d | [
"Apache-2.0"
] | 30 | 2018-10-19T18:16:08.000Z | 2022-03-24T01:21:27.000Z | import rclpy
def main():
print('Test py package succeeded!') | 13.2 | 39 | 0.681818 | 9 | 66 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19697 | 66 | 5 | 39 | 13.2 | 0.849057 | 0 | 0 | 0 | 0 | 0 | 0.38806 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2c5a397d59db4b044cb33df45aa49a13f9419944 | 29 | py | Python | netflow/__init__.py | JackFram/Neural-Flow | 83cea7aa933fa9650b42271ba4205208814d047b | [
"Apache-2.0"
] | 1 | 2022-01-24T16:27:51.000Z | 2022-01-24T16:27:51.000Z | netflow/__init__.py | JackFram/Neural-Flow | 83cea7aa933fa9650b42271ba4205208814d047b | [
"Apache-2.0"
] | null | null | null | netflow/__init__.py | JackFram/Neural-Flow | 83cea7aa933fa9650b42271ba4205208814d047b | [
"Apache-2.0"
] | null | null | null | from .fx_interpreter import * | 29 | 29 | 0.827586 | 4 | 29 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e2bf7ac4e9d738eb8638e8426b11ad3683099635 | 150 | py | Python | modules/exception.py | alfie-max/Publish | 0014cc45f2496c44c9171353dc42a58d73dd5490 | [
"MIT"
] | 1 | 2016-03-08T07:17:46.000Z | 2016-03-08T07:17:46.000Z | modules/exception.py | alfie-max/Publish | 0014cc45f2496c44c9171353dc42a58d73dd5490 | [
"MIT"
] | 5 | 2021-03-18T19:55:25.000Z | 2022-03-11T23:11:27.000Z | modules/exception.py | alfie-max/Publish | 0014cc45f2496c44c9171353dc42a58d73dd5490 | [
"MIT"
] | null | null | null | class UnhandledException(Exception): pass
class AuthorizationError(Exception): pass
class NetworkError(Exception): pass
class Failed(Exception): pass
| 30 | 41 | 0.84 | 16 | 150 | 7.875 | 0.4375 | 0.412698 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 150 | 4 | 42 | 37.5 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
e2c28465d71df4ca517906ac46851858992a534f | 89 | py | Python | tests/test_version.py | CanoaPBC/basesqlmodel | 7425b75660bb22972cec940cf3b336f3021c7d21 | [
"MIT"
] | 29 | 2021-09-17T22:44:24.000Z | 2022-03-10T13:43:24.000Z | tests/test_version.py | CanoaPBC/basesqlmodel | 7425b75660bb22972cec940cf3b336f3021c7d21 | [
"MIT"
] | 6 | 2021-09-22T09:53:11.000Z | 2022-01-17T12:09:18.000Z | tests/test_version.py | CanoaPBC/basesqlmodel | 7425b75660bb22972cec940cf3b336f3021c7d21 | [
"MIT"
] | 4 | 2022-01-07T06:29:09.000Z | 2022-03-12T11:20:07.000Z | import basesqlmodel
def test_version():
assert basesqlmodel.__version__ == "0.1.0"
| 14.833333 | 46 | 0.730337 | 11 | 89 | 5.454545 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0.157303 | 89 | 5 | 47 | 17.8 | 0.76 | 0 | 0 | 0 | 0 | 0 | 0.05618 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e2dfd7cf127b419990c5db74e130062dc089bd0b | 40 | py | Python | pycoincap/__init__.py | natemago/pycoincap | 45a4d6d3e467bd8ef6fb039a351bd6cf6c409a68 | [
"MIT"
] | 17 | 2017-10-03T18:45:12.000Z | 2022-01-11T18:24:23.000Z | pycoincap/__init__.py | natemago/pycoincap | 45a4d6d3e467bd8ef6fb039a351bd6cf6c409a68 | [
"MIT"
] | 8 | 2017-10-29T10:31:54.000Z | 2019-11-07T12:36:17.000Z | pycoincap/__init__.py | natemago/pycoincap | 45a4d6d3e467bd8ef6fb039a351bd6cf6c409a68 | [
"MIT"
] | 20 | 2017-10-29T20:19:21.000Z | 2021-07-14T07:39:33.000Z | from pycoincap.core import CryptoMarket
| 20 | 39 | 0.875 | 5 | 40 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.972222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
39398079e7a8f44c08538665a4d5fbd4898766cd | 139 | py | Python | models/__init__.py | Luodian/Learning-Invariant-Representations-and-Risks | f3058fe50e86660ca0c17ba0df41ece9af64c557 | [
"MIT"
] | 17 | 2021-04-22T03:24:38.000Z | 2022-03-30T03:12:09.000Z | models/__init__.py | Luodian/Learning-Invariant-Representations-and-Risks | f3058fe50e86660ca0c17ba0df41ece9af64c557 | [
"MIT"
] | 5 | 2021-12-10T10:12:26.000Z | 2022-03-31T00:01:58.000Z | models/__init__.py | Luodian/Learning-Invariant-Representations-and-Risks | f3058fe50e86660ca0c17ba0df41ece9af64c557 | [
"MIT"
] | 3 | 2021-05-19T06:12:14.000Z | 2021-12-17T09:27:49.000Z | from .models import *
from .LeNet import LeNet
from .ResNet import *
from .heads import *
from .AlexNet import *
from .CountingNet import * | 23.166667 | 26 | 0.755396 | 19 | 139 | 5.526316 | 0.421053 | 0.380952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165468 | 139 | 6 | 26 | 23.166667 | 0.905172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1a7f5da86975d5bc73d48bb7d4f5db50a117ec2e | 26 | py | Python | database/__init__.py | HazemMeqdad/anti-raid | 9beef999a5e7ecb81e61343d430746e7963b966f | [
"MIT"
] | 1 | 2021-10-17T11:39:20.000Z | 2021-10-17T11:39:20.000Z | database/__init__.py | HazemMeqdad/anti-raid | 9beef999a5e7ecb81e61343d430746e7963b966f | [
"MIT"
] | null | null | null | database/__init__.py | HazemMeqdad/anti-raid | 9beef999a5e7ecb81e61343d430746e7963b966f | [
"MIT"
] | null | null | null | from database.db import *
| 13 | 25 | 0.769231 | 4 | 26 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1aaf88a975d35ee968ab0bb666f75f40d07c0f9a | 118 | py | Python | bitmovin_api_sdk/encoding/configurations/audio/dolby_digital_plus/customdata/__init__.py | bitmovin/bitmovin-api-sdk-python | 5a85147669c84b8ca411cf2d4dbdddc92d85bbe7 | [
"MIT"
] | 11 | 2019-07-03T10:41:16.000Z | 2022-02-25T21:48:06.000Z | bitmovin_api_sdk/encoding/configurations/audio/dolby_digital_plus/customdata/__init__.py | bitmovin/bitmovin-api-sdk-python | 5a85147669c84b8ca411cf2d4dbdddc92d85bbe7 | [
"MIT"
] | 8 | 2019-11-23T00:01:25.000Z | 2021-04-29T12:30:31.000Z | bitmovin_api_sdk/encoding/configurations/audio/dolby_digital_plus/customdata/__init__.py | bitmovin/bitmovin-api-sdk-python | 5a85147669c84b8ca411cf2d4dbdddc92d85bbe7 | [
"MIT"
] | 13 | 2020-01-02T14:58:18.000Z | 2022-03-26T12:10:30.000Z | from bitmovin_api_sdk.encoding.configurations.audio.dolby_digital_plus.customdata.customdata_api import CustomdataApi
| 59 | 117 | 0.915254 | 15 | 118 | 6.866667 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033898 | 118 | 1 | 118 | 118 | 0.903509 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
46fa53291b89c3606d59539411e61102b46d079b | 398 | py | Python | django/contrib/messages/tests/__init__.py | egenerat/gae-django | f12379483cf3917ed3cb46ca5ff0b94daf89fc50 | [
"MIT"
] | 3 | 2016-07-08T23:49:32.000Z | 2018-04-15T22:55:01.000Z | django/contrib/messages/tests/__init__.py | egenerat/gae-django | f12379483cf3917ed3cb46ca5ff0b94daf89fc50 | [
"MIT"
] | 27 | 2017-02-05T15:57:04.000Z | 2018-04-15T22:57:26.000Z | django/contrib/messages/tests/__init__.py | egenerat/gae-django | f12379483cf3917ed3cb46ca5ff0b94daf89fc50 | [
"MIT"
] | null | null | null | from django.contrib.messages.tests.cookie import CookieTest
from django.contrib.messages.tests.fallback import FallbackTest
from django.contrib.messages.tests.middleware import MiddlewareTest
from django.contrib.messages.tests.session import SessionTest
from django.contrib.messages.tests.user_messages import \
UserMessagesTest, LegacyFallbackTest
| 56.857143 | 80 | 0.766332 | 42 | 398 | 7.238095 | 0.404762 | 0.164474 | 0.279605 | 0.411184 | 0.493421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178392 | 398 | 6 | 81 | 66.333333 | 0.929664 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.833333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
204e837f7c14943c6508decb3709405d6f4a7e25 | 148 | py | Python | storyhub/sdk/service/output/OutputBoolean.py | wilzbach/hub-sdk-python | 3dd2a3112276068080751198cb1fcc29975c0136 | [
"Apache-2.0"
] | null | null | null | storyhub/sdk/service/output/OutputBoolean.py | wilzbach/hub-sdk-python | 3dd2a3112276068080751198cb1fcc29975c0136 | [
"Apache-2.0"
] | null | null | null | storyhub/sdk/service/output/OutputBoolean.py | wilzbach/hub-sdk-python | 3dd2a3112276068080751198cb1fcc29975c0136 | [
"Apache-2.0"
] | null | null | null | from storyhub.sdk.service.output.OutputBase import OutputBase
class OutputBoolean(OutputBase):
"""
A service output boolean type.
"""
| 18.5 | 61 | 0.722973 | 16 | 148 | 6.6875 | 0.75 | 0.242991 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.182432 | 148 | 7 | 62 | 21.142857 | 0.884298 | 0.202703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6461d2f38811009d551a78be09c74f412d603b57 | 168 | py | Python | NintbotForDiscord/Utils.py | nint8835/DiscordUserBot | 63e99c916680492cdfd4b00b7df5ef5c9640a204 | [
"MIT"
] | 3 | 2015-12-24T18:17:44.000Z | 2016-05-20T07:19:29.000Z | NintbotForDiscord/Utils.py | nint8835/DiscordSelfBot | 63e99c916680492cdfd4b00b7df5ef5c9640a204 | [
"MIT"
] | null | null | null | NintbotForDiscord/Utils.py | nint8835/DiscordSelfBot | 63e99c916680492cdfd4b00b7df5ef5c9640a204 | [
"MIT"
] | 1 | 2020-01-20T11:30:18.000Z | 2020-01-20T11:30:18.000Z | import discord
from .Types import DiscordChannel
def channel_is_private(channel: DiscordChannel) -> bool:
return isinstance(channel, discord.abc.PrivateChannel)
| 21 | 58 | 0.803571 | 19 | 168 | 7 | 0.736842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 168 | 7 | 59 | 24 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
b37b02c311ced4b2372b0b554bd8f0bb95fcb2c9 | 971 | py | Python | tests/test_depends.py | marvingabler/fastapi-limiter | fa272dd4d4a3fcf56ad98f5bb6fd81babce007dd | [
"Apache-2.0"
] | 118 | 2020-11-09T07:21:41.000Z | 2022-03-31T15:40:45.000Z | tests/test_depends.py | marvingabler/fastapi-limiter | fa272dd4d4a3fcf56ad98f5bb6fd81babce007dd | [
"Apache-2.0"
] | 18 | 2020-12-30T10:52:25.000Z | 2022-03-11T07:32:54.000Z | tests/test_depends.py | marvingabler/fastapi-limiter | fa272dd4d4a3fcf56ad98f5bb6fd81babce007dd | [
"Apache-2.0"
] | 24 | 2021-01-07T04:08:57.000Z | 2022-03-17T09:37:44.000Z | from time import sleep
from starlette.testclient import TestClient
from examples.main import app
def test_limiter():
with TestClient(app) as client:
response = client.get("/")
assert response.status_code == 200
client.get("/")
response = client.get("/")
assert response.status_code == 429
sleep(5)
response = client.get("/")
assert response.status_code == 200
def test_limiter_multiple():
with TestClient(app) as client:
response = client.get("/multiple")
assert response.status_code == 200
response = client.get("/multiple")
assert response.status_code == 429
sleep(5)
response = client.get("/multiple")
assert response.status_code == 200
response = client.get("/multiple")
assert response.status_code == 429
sleep(10)
response = client.get("/multiple")
assert response.status_code == 200
| 23.682927 | 43 | 0.61792 | 108 | 971 | 5.453704 | 0.231481 | 0.137521 | 0.2309 | 0.325976 | 0.779287 | 0.779287 | 0.779287 | 0.750424 | 0.544992 | 0.456706 | 0 | 0.03966 | 0.272915 | 971 | 40 | 44 | 24.275 | 0.794618 | 0 | 0 | 0.740741 | 0 | 0 | 0.050463 | 0 | 0 | 0 | 0 | 0 | 0.296296 | 1 | 0.074074 | false | 0 | 0.111111 | 0 | 0.185185 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
37466b094c002e1c2619a912073292a3fc2e7a79 | 159 | py | Python | workflow/ignite/decorators/__init__.py | Aiwizo/ml-workflow | 88e104fce571dd3b76914626a52f9001342c07cc | [
"Apache-2.0"
] | 4 | 2020-09-23T15:39:24.000Z | 2021-09-12T22:11:00.000Z | workflow/ignite/decorators/__init__.py | Aiwizo/ml-workflow | 88e104fce571dd3b76914626a52f9001342c07cc | [
"Apache-2.0"
] | 4 | 2020-09-23T15:07:39.000Z | 2020-10-30T10:26:24.000Z | workflow/ignite/decorators/__init__.py | Aiwizo/ml-workflow | 88e104fce571dd3b76914626a52f9001342c07cc | [
"Apache-2.0"
] | null | null | null | from .evaluate import evaluate
from .step import step
from .to_device import to_device
from .train import train
from .update_cpu_model import update_cpu_model
| 26.5 | 46 | 0.842767 | 26 | 159 | 4.923077 | 0.384615 | 0.125 | 0.21875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125786 | 159 | 5 | 47 | 31.8 | 0.920863 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |