# ============================================================================
# File: pySpatialTools/utils/artificial_data/artificial_features.py
# Repo: tgquintela/pySpatialTools @ e028008 (MIT); 8 stars
# ============================================================================
"""
Artificial features
-------------------
Module which groups artificial data creation of features for testing or other
needings in this package.
"""
import numpy as np
def continuous_array_features(n, n_feats):
"""Array-like continuous features.
Parameters
----------
n: int
the number of elements we want to consider.
n_feats: int
the number of features we want to consider.
Returns
-------
features: np.ndarray
the random features we want to compute.
"""
features = np.random.random((n, n_feats))
return features
def categorical_array_features(n, n_feats):
"""Array-like categorical features.
Parameters
----------
n: int
the number of the elements to create their features.
    n_feats: int or list
        the number of features, or the number of categories per feature.
Returns
-------
    features: np.ndarray
        the random features we want to compute.
"""
n_feats = [n_feats] if type(n_feats) == int else n_feats
features = []
for fea in n_feats:
features.append(np.random.randint(0, fea, n))
features = np.stack(features, axis=1)
return features
def continuous_dict_features(n, n_feats):
"""Listdict-like continuous features.
Parameters
----------
n: int
the number of the elements to create their features.
n_feats: int
the number of features.
Returns
-------
features: list
the random features we want to compute.
"""
features = []
for i in range(n):
fea = np.unique(np.random.randint(0, n_feats, n_feats))
features.append(dict(zip(fea, np.random.random(len(fea)))))
return features
def categorical_dict_features(n, n_feats):
"""Listdict-like categorical features.
Parameters
----------
n: int
the number of the elements to create their features.
n_feats: int
the number of features.
Returns
-------
features: list
the random features we want to compute.
"""
max_v = np.random.randint(1, n)
features = []
for i in range(n):
fea = np.unique(np.random.randint(0, n_feats, n_feats))
features.append(dict(zip(fea, np.random.randint(0, max_v, len(fea)))))
return features
def continuous_agg_array_features(n, n_feats, ks):
"""Array-like continuous aggregated features.
Parameters
----------
n: int
the number of the elements to create their features.
    n_feats: int
        the number of features.
    ks: int
        the number of perturbations.
Returns
-------
features: np.ndarray
the random features we want to compute.
"""
features = []
for k in range(ks):
features.append(continuous_array_features(n, n_feats))
features = np.stack(features, axis=2)
return features
def categorical_agg_array_features(n, n_feats, ks):
"""Array-like categorical aggregated features.
Parameters
----------
n: int
the number of the elements to create their features.
n_feats: int
the number of features.
ks: int
the number of perturbations.
Returns
-------
    features: np.ndarray
        the random features we want to compute.
"""
features = []
for k in range(ks):
features.append(categorical_array_features(n, n_feats))
features = np.stack(features, axis=2)
return features
def continuous_agg_dict_features(n, n_feats, ks):
"""Listdict-like continuous aggregated features.
Parameters
----------
n: int
the number of the elements to create their features.
n_feats: int
the number of features.
ks: int
the number of perturbations.
Returns
-------
features: list
the random features we want to compute.
"""
features = []
for k in range(ks):
features.append(continuous_dict_features(n, n_feats))
return features
def categorical_agg_dict_features(n, n_feats, ks):
"""Listdict-like categorical aggregated features.
Parameters
----------
n: int
the number of the elements to create their features.
n_feats: int
the number of features.
ks: int
the number of perturbations.
Returns
-------
features: list
the random features we want to compute.
"""
features = []
for k in range(ks):
features.append(categorical_dict_features(n, n_feats))
return features


# ============================================================================
# File: tests/payments/capture_payments_integration_test.py
# Repo: riaz-bordie-cko/checkout-sdk-python @ d9bc073 (MIT)
# ============================================================================
from __future__ import absolute_import
from checkout_sdk.payments.payments import CaptureRequest
from tests.checkout_test_utils import new_uuid, assert_response, new_idempotency_key, retriable
from tests.payments.payments_test_utils import make_card_payment
def test_should_full_capture_card_payment(default_api):
payment_response = make_card_payment(default_api)
capture_request = CaptureRequest()
capture_request.reference = new_uuid()
capture_response = retriable(callback=default_api.payments.capture_payment,
payment_id=payment_response.id,
capture_request=capture_request)
assert_response(capture_response,
'reference',
'action_id',
'_links')
def test_should_partially_capture_card_payment(default_api):
payment_response = make_card_payment(default_api)
capture_request = CaptureRequest()
capture_request.reference = new_uuid()
capture_request.amount = 5
capture_response = retriable(callback=default_api.payments.capture_payment,
payment_id=payment_response.id,
capture_request=capture_request)
assert_response(capture_response,
'reference',
'action_id',
'_links')
def test_should_full_capture_card_payment_idempotently(default_api):
payment_response = make_card_payment(default_api)
capture_request = CaptureRequest()
capture_request.reference = new_uuid()
capture_request.amount = 2
idempotency_key = new_idempotency_key()
capture_response_1 = retriable(callback=default_api.payments.capture_payment,
payment_id=payment_response.id,
capture_request=capture_request,
idempotency_key=idempotency_key)
assert_response(capture_response_1, 'action_id')
capture_response_2 = retriable(callback=default_api.payments.capture_payment,
payment_id=payment_response.id,
capture_request=capture_request,
idempotency_key=idempotency_key)
assert_response(capture_response_2, 'action_id')
assert capture_response_1.action_id == capture_response_2.action_id
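
# Note: these integration tests assume a `default_api` pytest fixture
# (configured elsewhere in the test suite with sandbox credentials) and the
# `retriable`/`assert_response` helpers imported above.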


# ============================================================================
# File: core/helpers/flask/__init__.py
# Repo: N7SALab/HEV @ d449a2b (Apache-2.0)
# ============================================================================
from .auth import *
from .config import *
from .auth_creds import *


# ============================================================================
# File: sunnysouth/suppliers/services/__init__.py
# Repo: EdwinBaeza05/django-genricsl-app @ a8759d6 (MIT)
# ============================================================================
from .suppliers import SupplierCreateService


# ============================================================================
# File: attic/orbit_conventional.py
# Repo: megbedell/PSOAP @ b1cf6d1 (MIT); 25 stars
# ============================================================================
import numpy as np
from scipy.optimize import fsolve, minimize
from psoap import constants as C
class Binary:
'''
A binary orbit that delivers astrometric position, relative astrometric position (B relative to A), and radial velocities of A and B.
'''
def __init__(self, a, e, i, omega, Omega, T0, M_tot, M_2, gamma, obs_dates=None, **kwargs):
self.a = a # [AU] semi-major axis
self.e = e # eccentricity
self.i = i # [deg] inclination
self.omega = omega # [deg] argument of periastron
self.Omega = Omega # [deg] east of north
self.T0 = T0 # [JD]
self.M_tot = M_tot # [M_sun]
self.M_2 = M_2 # [M_sun]
self.gamma = gamma # [km/s]
# Update the RV quantities
self.recalculate()
# If we are going to be repeatedly predicting the orbit at a sequence of dates,
# just store them to the object.
self.obs_dates = obs_dates
self.param_dict = {"a":self.a, "e":self.e, "i":self.i, "omega":self.omega,
"Omega":self.Omega, "T0":self.T0, "M_tot":self.M_tot, "M_2":self.M_2, "gamma":self.gamma}
def recalculate(self):
'''
Recalculates derivative RV quantities when other parameters are updated.
'''
# Calculate the following RV quantities
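        # q is the mass ratio M_2/M_1; P follows Kepler's third law,
        # P^2 = 4 pi^2 a^3 / (G M_tot), converted from seconds to days; K is
        # the radial-velocity semi-amplitude of the primary,
        # K = sqrt(G / (1 - e^2)) * M_2 sin(i) / sqrt(M_tot * a), in km/s.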
self.q = self.M_2 / (self.M_tot - self.M_2) # [M2/M1]
self.P = np.sqrt(4 * np.pi**2 / (C.G * self.M_tot * C.M_sun) * (self.a * C.AU)**3) / (60 * 60 * 24)# [days]
self.K = np.sqrt(C.G/(1 - self.e**2)) * self.M_2 * C.M_sun * np.sin(self.i * np.pi/180.) / np.sqrt(self.M_tot * C.M_sun * self.a * C.AU) * 1e-5 # [km/s]
def update_parameters(self, param_values, param_list):
'''
param_values is numpy array of values
param_list is list of strings of the names of the parameters
'''
for (value, key) in zip(param_values, param_list):
self.param_dict[key] = value
def theta(self, t):
        '''Calculate the true anomaly for the A-B orbit.
        Input is in days.'''
        # t is input in days; take a modulus of the period
t = (t - self.T0) % self.P
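        # Solve Kepler's equation E - e sin(E) = M for the eccentric anomaly
        # E, where the mean anomaly M = 2 pi t / P also serves as the initial
        # guess for the root finder; the true anomaly then follows from
        # tan(th/2) = sqrt((1 + e)/(1 - e)) tan(E/2).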
f = lambda E: E - self.e * np.sin(E) - 2 * np.pi * t/self.P
E0 = 2 * np.pi * t / self.P
E = fsolve(f, E0)[0]
th = 2 * np.arctan(np.sqrt((1 + self.e)/(1 - self.e)) * np.tan(E/2.))
if E < np.pi:
return th
else:
return th + 2 * np.pi
def v1_f(self, f):
        '''Calculate the component of A's velocity based on only the inner orbit.
        f is the true anomaly of this inner orbit.'''
return self.K * (np.cos(self.omega * np.pi/180 + f) + self.e * np.cos(self.omega * np.pi/180))
def v2_f(self, f):
        '''Calculate the component of B's velocity based on only the inner orbit.
        f is the true anomaly of this inner orbit.'''
return -self.K/self.q * (np.cos(self.omega * np.pi/180 + f) + self.e * np.cos(self.omega * np.pi/180))
# Get the position of A in the plane of the orbit
def xy_A(self, f):
# find the reduced radius
r = self.a * (1 - self.e**2) / (1 + self.e * np.cos(f)) # [AU]
r1 = r * self.M_2 / self.M_tot # [AU]
x = r1 * np.cos(f)
y = r1 * np.sin(f)
return (x,y)
# Get the position of B in the plane of the orbit
def xy_B(self, f):
# find the reduced radius
r = self.a * (1 - self.e**2) / (1 + self.e * np.cos(f)) # [AU]
r2 = -r * (self.M_tot - self.M_2) / self.M_tot # [AU]
x = r2 * np.cos(f)
y = r2 * np.sin(f)
return (x,y)
def xy_AB(self, f):
r = self.a * (1 - self.e**2) / (1 + self.e * np.cos(f)) # [AU]
x = r * np.cos(f)
y = r * np.sin(f)
return (x,y)
# position of A relative to center of mass
def XY_A(self, f):
Omega = self.Omega * np.pi / 180
omega = self.omega * np.pi / 180 # add in pi to swap the periapse
i = self.i * np.pi / 180
# find the reduced semi-major axis
a1 = self.a * self.M_2 / self.M_tot
r1 = a1 * (1 - self.e**2) / (1 + self.e * np.cos(f)) # [AU]
x = r1 / a1 * np.cos(f)
y = r1 / a1 * np.sin(f)
# Calculate Thiele-Innes elements
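        # The Thiele-Innes constants (A, B, F, G) fold Omega, omega and i into
        # a linear map from in-plane coordinates (x, y) to sky coordinates:
        # X = A x + F y (north) and Y = B x + G y (east).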
A = a1 * (np.cos(omega) * np.cos(Omega) - np.sin(omega) * np.sin(Omega) * np.cos(i))
B = a1 * (np.cos(omega) * np.sin(Omega) + np.sin(omega) * np.cos(Omega) * np.cos(i))
F = a1 * (-np.sin(omega) * np.cos(Omega) - np.cos(omega) * np.sin(Omega) * np.cos(i))
G = a1 * (-np.sin(omega) * np.sin(Omega) + np.cos(omega) * np.cos(Omega) * np.cos(i))
X = A * x + F * y
Y = B * x + G * y
return (X, Y) # [AU]
# position of B relative to center of mass
def XY_B(self, f):
Omega = self.Omega * np.pi / 180
omega = self.omega * np.pi / 180
i = self.i * np.pi / 180
# find the reduced radius
a2 = self.a * (self.M_tot - self.M_2) / self.M_tot
r2 = a2 * (1 - self.e**2) / (1 + self.e * np.cos(f)) # [AU]
x = r2 / a2 * np.cos(f)
y = r2 / a2 * np.sin(f)
# Calculate Thiele-Innes elements
A = a2 * (np.cos(omega) * np.cos(Omega) - np.sin(omega) * np.sin(Omega) * np.cos(i))
B = a2 * (np.cos(omega) * np.sin(Omega) + np.sin(omega) * np.cos(Omega) * np.cos(i))
F = a2 * (-np.sin(omega) * np.cos(Omega) - np.cos(omega) * np.sin(Omega) * np.cos(i))
G = a2 * (-np.sin(omega) * np.sin(Omega) + np.cos(omega) * np.cos(Omega) * np.cos(i))
X = A * x + F * y
Y = B * x + G * y
return (X, Y) # [AU]
def XY_AB(self, f):
Omega = self.Omega * np.pi / 180
omega = self.omega * np.pi / 180
i = self.i * np.pi / 180
r = self.a * (1 - self.e**2) / (1 + self.e * np.cos(f))
x = r / self.a * np.cos(f)
y = r / self.a * np.sin(f)
# Calculate Thiele-Innes elements
A = self.a * (np.cos(omega) * np.cos(Omega) - np.sin(omega) * np.sin(Omega) * np.cos(i))
B = self.a * (np.cos(omega) * np.sin(Omega) + np.sin(omega) * np.cos(Omega) * np.cos(i))
F = self.a * (-np.sin(omega) * np.cos(Omega) - np.cos(omega) * np.sin(Omega) * np.cos(i))
G = self.a * (-np.sin(omega) * np.sin(Omega) + np.cos(omega) * np.cos(Omega) * np.cos(i))
X = A * x + F * y
Y = B * x + G * y
# X is north, Y is east.
return (X, Y) # [AU]
def get_orbit(self, t):
'''
        Given a time, calculate all of the orbital quantities we might be interested in.
        returns (v_A, v_B, (x,y) of A, (x,y) of B, and x,y of B relative to A)
'''
# Get the true anomoly "f" from time
f = self.theta(t)
# Feed this into the orbit equation and add the systemic velocity
vA = self.v1_f(f) + self.gamma
vB = self.v2_f(f) + self.gamma
XY_A = self.XY_A(f)
XY_B = self.XY_B(f)
XY_AB = self.XY_AB(f)
xy_A = self.xy_A(f)
xy_B = self.xy_B(f)
xy_AB = self.xy_AB(f)
return (vA, vB, XY_A, XY_B, XY_AB, xy_A, xy_B, xy_AB)
def get_component_orbits(self, dates=None):
'''
Return both vA and vB for all dates provided.
'''
if dates is None and self.obs_dates is None:
raise RuntimeError("Must provide input dates or specify observation dates upon creation of orbit object.")
if dates is None and self.obs_dates is not None:
dates = self.obs_dates
dates = np.atleast_1d(dates)
N = len(dates)
vAs = np.empty(N, dtype=np.float64)
vBs = np.empty(N, dtype=np.float64)
XY_As = np.empty((N, 2), dtype=np.float64)
XY_Bs = np.empty((N, 2), dtype=np.float64)
XY_ABs = np.empty((N, 2), dtype=np.float64)
xy_As = np.empty((N, 2), dtype=np.float64)
xy_Bs = np.empty((N, 2), dtype=np.float64)
xy_ABs = np.empty((N, 2), dtype=np.float64)
for i,date in enumerate(dates):
vA, vB, XY_A, XY_B, XY_AB, xy_A, xy_B, xy_AB = self.get_orbit(date)
vAs[i] = vA
vBs[i] = vB
XY_As[i] = np.array(XY_A)
XY_Bs[i] = np.array(XY_B)
XY_ABs[i] = np.array(XY_AB)
xy_As[i] = np.array(xy_A)
xy_Bs[i] = np.array(xy_B)
xy_ABs[i] = np.array(xy_AB)
return (vAs, vBs, XY_As, XY_Bs, XY_ABs, xy_As, xy_Bs, xy_ABs)
def get_component_fits(self, dates=None):
'''
Return both vA, vB, rho_AB, and theta_AB, for all dates provided.
These are mainly as inputs to a fit.
'''
if dates is None and self.obs_dates is None:
raise RuntimeError("Must provide input dates or specify observation dates upon creation of orbit object.")
if dates is None and self.obs_dates is not None:
dates = self.obs_dates
dates = np.atleast_1d(dates)
N = len(dates)
vAs = np.empty(N, dtype=np.float64)
vBs = np.empty(N, dtype=np.float64)
rho_ABs = np.empty(N, dtype=np.float64)
theta_ABs = np.empty(N, dtype=np.float64)
for i,date in enumerate(dates):
vA, vB, XY_A, XY_B, XY_AB, xy_A, xy_B, xy_AB = self.get_orbit(date)
vAs[i] = vA
vBs[i] = vB
# Calculate rho, theta from XY_AB
X, Y = XY_AB
rho = np.sqrt(X**2 + Y**2) # [AU]
theta = np.arctan2(Y, X) * 180/np.pi # [Deg]
if theta < 0: # ensure that 0 <= theta <= 360
theta += 360.
rho_ABs[i] = rho
theta_ABs[i] = theta
return (vAs, vBs, rho_ABs, theta_ABs)
class Triple:
'''
    Techniques for solving for a triple-star orbit.
'''
def __init__(self, a_in, e_in, i_in, omega_in, Omega_in, T0_in, a_out, e_out, i_out, omega_out, Omega_out, T0_out, M_1, M_2, M_3, gamma, obs_dates=None, **kwargs):
self.a_in = a_in # [AU]
self.e_in = e_in #
self.i_in = i_in # [deg]
self.omega_in = omega_in # [deg]
self.Omega_in = Omega_in # [deg]
self.T0_in = T0_in # [JD]
self.a_out = a_out # [AU]
self.e_out = e_out
self.i_out = i_out # [deg]
self.omega_out = omega_out # [deg]
self.Omega_out = Omega_out # [deg]
self.T0_out = T0_out # [JD]
self.M_1 = M_1 # [M_sun]
self.M_2 = M_2 # [M_sun]
self.M_3 = M_3 # [M_sun]
self.gamma = gamma # [km/s]
self.recalculate()
# If we are going to be repeatedly predicting the orbit at a sequence of dates,
# just store them to the object.
self.obs_dates = obs_dates
self.param_dict = {"a_in":self.a_in, "e_in":self.e_in, "i_in":self.i_in, "omega_in":self.omega_in, "Omega_in":self.Omega_in, "T0_in":self.T0_in, "a_out":self.a_out, "e_out":self.e_out, "i_out":self.i_out, "omega_out":self.omega_out, "Omega_out":self.Omega_out, "T0_out":self.T0_out, "M_1":self.M_1, "M_2":self.M_2, "M_3":self.M_3, "gamma":self.gamma}
def recalculate(self):
'''
Update all of the derived quantities.
'''
# Calculate the following RV quantities
self.P_in = np.sqrt(4 * np.pi**2 / (C.G * (self.M_1 + self.M_2) * C.M_sun) * (self.a_in * C.AU)**3) / (60 * 60 * 24)# [days]
self.K_in = np.sqrt(C.G/(1 - self.e_in**2)) * self.M_2 * C.M_sun * np.sin(self.i_in * np.pi/180.) / np.sqrt((self.M_1 + self.M_2) * C.M_sun * self.a_in * C.AU) * 1e-5 # [km/s]
self.P_out = np.sqrt(4 * np.pi**2 / (C.G * (self.M_1 + self.M_2 + self.M_3) * C.M_sun) * (self.a_out * C.AU)**3) / (60 * 60 * 24) # [days]
self.K_out = np.sqrt(C.G/(1 - self.e_out**2)) * self.M_3 * C.M_sun * np.sin(self.i_out * np.pi/180.) / np.sqrt((self.M_1 + self.M_2 + self.M_3) * C.M_sun * self.a_out * C.AU) * 1e-5 # [km/s]
def update_parameters(self, param_values, param_list):
'''
param_values is numpy array of values
param_list is list of strings of the names of the parameters
'''
for (value, key) in zip(param_values, param_list):
self.param_dict[key] = value
def theta_in(self, t):
        '''Calculate the true anomaly for the A-B orbit.'''
        # t is input in days; take a modulus of the period
t = (t - self.T0_in) % self.P_in
f = lambda E: E - self.e_in * np.sin(E) - 2 * np.pi * t/self.P_in
E0 = 2 * np.pi * t / self.P_in
E = fsolve(f, E0)[0]
th = 2 * np.arctan(np.sqrt((1 + self.e_in)/(1 - self.e_in)) * np.tan(E/2.))
if E < np.pi:
return th
else:
return th + 2 * np.pi
def theta_out(self, t):
        '''Calculate the true anomaly for the (A-B) - C orbit.'''
        # t is input in days; take a modulus of the period
t = (t - self.T0_out) % self.P_out
f = lambda E: E - self.e_out * np.sin(E) - 2 * np.pi * t/self.P_out
E0 = 2 * np.pi * t / self.P_out
E = fsolve(f, E0)[0]
th = 2 * np.arctan(np.sqrt((1 + self.e_out)/(1 - self.e_out)) * np.tan(E/2.))
if E < np.pi:
return th
else:
return th + 2 * np.pi
def v1_f(self, f):
        '''Calculate the component of A's velocity based on only the inner orbit.
        f is the true anomaly of this inner orbit.'''
return self.K_in * (np.cos(self.omega_in * np.pi/180 + f) + self.e_in * np.cos(self.omega_in * np.pi/180))
def v2_f(self, f):
        '''Calculate the component of B's velocity based on only the inner orbit.
        f is the true anomaly of this inner orbit.'''
return -self.K_in * self.M_1/self.M_2 * (np.cos(self.omega_in * np.pi/180 + f) + self.e_in * np.cos(self.omega_in * np.pi/180))
def v3_f(self, f):
        '''Calculate the velocity of (A-B) based only on the outer orbit.
        f is the true anomaly of the outer orbit'''
return self.K_out * (np.cos(self.omega_out * np.pi/180 + f) + self.e_out * np.cos(self.omega_out * np.pi/180))
def v3_f_C(self, f):
        '''Calculate the velocity of C based only on the outer orbit.
        f is the true anomaly of the outer orbit
'''
return -self.K_out * (self.M_1 + self.M_2)/ self.M_3 * (np.cos(self.omega_out * np.pi/180 + f) + self.e_out * np.cos(self.omega_out * np.pi/180))
# absolute position of the AB center of mass in the plane of the orbit
def xy_AB(self, f):
# find the reduced radius
r = self.a_out * (1 - self.e_out**2) / (1 + self.e_out * np.cos(f)) # [AU]
r1 = r * self.M_3 / (self.M_1 + self.M_2 + self.M_3) # [AU]
x = r1 * np.cos(f)
y = r1 * np.sin(f)
return (x,y)
# absolute position of C in the plane of the orbit
def xy_C(self, f):
# find the reduced radius
r = self.a_out * (1 - self.e_out**2) / (1 + self.e_out * np.cos(f)) # [AU]
r2 = -r * (self.M_1 + self.M_2) / (self.M_1 + self.M_2 + self.M_3) # [AU]
x = r2 * np.cos(f)
y = r2 * np.sin(f)
return (x,y)
# absolute position of AB center of mass
def XY_AB(self, f):
# find the reduced radius
r = self.a_out * (1 - self.e_out**2) / (1 + self.e_out * np.cos(f)) # [AU]
r1 = r * self.M_3 / (self.M_1 + self.M_2 + self.M_3) # [AU]
Omega = self.Omega_out * np.pi / 180
omega = self.omega_out * np.pi / 180 # add in pi to swap the periapse
i = self.i_out * np.pi / 180
X = r1 * (np.cos(Omega) * np.cos(omega + f) - np.sin(Omega) * np.sin(omega + f) * np.cos(i))
Y = r1 * (np.sin(Omega) * np.cos(omega + f) + np.cos(Omega) * np.sin(omega + f) * np.cos(i))
return (X, Y) # [AU]
# absolute position of C
def XY_C(self, f):
# find the reduced radius
r = self.a_out * (1 - self.e_out**2) / (1 + self.e_out * np.cos(f)) # [AU]
r2 = -r * (self.M_1 + self.M_2) / (self.M_1 + self.M_2 + self.M_3) # [AU]
Omega = self.Omega_out * np.pi / 180
omega = self.omega_out * np.pi / 180
i = self.i_out * np.pi / 180
X = r2 * (np.cos(Omega) * np.cos(omega + f) - np.sin(Omega) * np.sin(omega + f) * np.cos(i))
Y = r2 * (np.sin(Omega) * np.cos(omega + f) + np.cos(Omega) * np.sin(omega + f) * np.cos(i))
return (X, Y) # [AU]
# position of A relative to center of mass of AB in the plane of the orbit
def xy_A_loc(self, f):
# find the reduced radius
r = self.a_in * (1 - self.e_in**2) / (1 + self.e_in * np.cos(f)) # [AU]
r1 = r * self.M_2 / (self.M_1 + self.M_2) # [AU]
x = r1 * np.cos(f)
y = r1 * np.sin(f)
return (x,y)
# position of B relative to center of mass of AB in the plane of the orbit
def xy_B_loc(self, f):
# find the reduced radius
r = self.a_in * (1 - self.e_in**2) / (1 + self.e_in * np.cos(f)) # [AU]
r2 = -r * self.M_1 / (self.M_1 + self.M_2) # [AU]
x = r2 * np.cos(f)
y = r2 * np.sin(f)
return (x,y)
# position of A relative to center of mass of AB (projected)
def XY_A_loc(self, f):
# find the reduced radius
r = self.a_in * (1 - self.e_in**2) / (1 + self.e_in * np.cos(f)) # [AU]
r1 = r * self.M_2 / (self.M_1 + self.M_2) # [AU]
Omega = self.Omega_in * np.pi / 180
omega = self.omega_in * np.pi / 180 # add in pi to swap the periapse
i = self.i_in * np.pi / 180
X = r1 * (np.cos(Omega) * np.cos(omega + f) - np.sin(Omega) * np.sin(omega + f) * np.cos(i))
Y = r1 * (np.sin(Omega) * np.cos(omega + f) + np.cos(Omega) * np.sin(omega + f) * np.cos(i))
return (X, Y) # [AU]
# position of B relative to center of mass of AB (projected)
def XY_B_loc(self, f):
# find the reduced radius
r = self.a_in * (1 - self.e_in**2) / (1 + self.e_in * np.cos(f)) # [AU]
r2 = -r * self.M_1 / (self.M_1 + self.M_2) # [AU]
Omega = self.Omega_in * np.pi / 180
omega = self.omega_in * np.pi / 180
i = self.i_in * np.pi / 180
X = r2 * (np.cos(Omega) * np.cos(omega + f) - np.sin(Omega) * np.sin(omega + f) * np.cos(i))
Y = r2 * (np.sin(Omega) * np.cos(omega + f) + np.cos(Omega) * np.sin(omega + f) * np.cos(i))
return (X, Y) # [AU]
def get_orbit(self, t):
'''
        Given a time, calculate all of the orbital quantities we might be interested in.
        returns (v_A, v_B, (x,y) of A, (x,y) of B, and x,y of B relative to A)
'''
# Get the true anomoly "f" from time
f_in = self.theta_in(t)
f_out = self.theta_out(t)
# Feed this into the orbit equation and add the systemic velocity
vA = self.v1_f(f_in) + self.v3_f(f_out) + self.gamma
vB = self.v2_f(f_in) + self.v3_f(f_out) + self.gamma
vC = self.v3_f_C(f_out) + self.gamma
# Absolute positions of AB center of mass, and C component.
XY_AB = self.XY_AB(f_out)
XY_C = self.XY_C(f_out)
# Positions of A and B relative to AB center of mass.
XY_A_loc = self.XY_A_loc(f_in)
XY_B_loc = self.XY_B_loc(f_in)
# Absolute positions of A and B
XY_A = np.array(XY_A_loc) + np.array(XY_AB)
XY_B = np.array(XY_B_loc) + np.array(XY_AB)
# Orbital positions in the plane
xy_AB = self.xy_AB(f_out)
xy_C = self.xy_C(f_out)
xy_A_loc = self.xy_A_loc(f_in)
xy_B_loc = self.xy_B_loc(f_in)
return (vA, vB, vC, XY_A, XY_B, XY_C, XY_AB, XY_A_loc, XY_B_loc, xy_C, xy_AB, xy_A_loc, xy_B_loc)
def get_component_orbits(self, dates=None):
'''
Return both vA and vB for all dates provided.
'''
if dates is None and self.obs_dates is None:
raise RuntimeError("Must provide input dates or specify observation dates upon creation of orbit object.")
if dates is None and self.obs_dates is not None:
dates = self.obs_dates
dates = np.atleast_1d(dates)
N = len(dates)
vAs = np.empty(N, dtype=np.float64)
vBs = np.empty(N, dtype=np.float64)
vCs = np.empty(N, dtype=np.float64)
XY_As = np.empty((N, 2), dtype=np.float64)
XY_Bs = np.empty((N, 2), dtype=np.float64)
XY_Cs = np.empty((N, 2), dtype=np.float64)
XY_ABs = np.empty((N, 2), dtype=np.float64)
XY_A_locs = np.empty((N, 2), dtype=np.float64)
XY_B_locs = np.empty((N, 2), dtype=np.float64)
xy_Cs = np.empty((N, 2), dtype=np.float64)
xy_ABs = np.empty((N, 2), dtype=np.float64)
xy_A_locs = np.empty((N, 2), dtype=np.float64)
xy_B_locs = np.empty((N, 2), dtype=np.float64)
for i,date in enumerate(dates):
vA, vB, vC, XY_A, XY_B, XY_C, XY_AB, XY_A_loc, XY_B_loc, xy_C, xy_AB, xy_A_loc, xy_B_loc = self.get_orbit(date)
vAs[i] = vA
vBs[i] = vB
vCs[i] = vC
XY_As[i] = np.array(XY_A)
XY_Bs[i] = np.array(XY_B)
XY_Cs[i] = np.array(XY_C)
XY_ABs[i] = np.array(XY_AB)
XY_A_locs[i] = np.array(XY_A_loc)
XY_B_locs[i] = np.array(XY_B_loc)
xy_Cs[i] = np.array(xy_C)
xy_ABs[i] = np.array(xy_AB)
xy_A_locs[i] = np.array(xy_A_loc)
xy_B_locs[i] = np.array(xy_B_loc)
return (vAs, vBs, vCs, XY_As, XY_Bs, XY_Cs, XY_ABs, XY_A_locs, XY_B_locs, xy_Cs, xy_ABs, xy_A_locs, xy_B_locs)
def get_component_fits(self, dates=None):
'''
Return the vA, vB, vC, rho_AB, theta_AB, rho_AC, theta_AC for all dates provided.
These are mainly as inputs to a fit.
'''
if dates is None and self.obs_dates is None:
raise RuntimeError("Must provide input dates or specify observation dates upon creation of orbit object.")
if dates is None and self.obs_dates is not None:
dates = self.obs_dates
dates = np.atleast_1d(dates)
N = len(dates)
vAs = np.empty(N, dtype=np.float64)
vBs = np.empty(N, dtype=np.float64)
vCs = np.empty(N, dtype=np.float64)
rho_ABs = np.empty(N, dtype=np.float64)
theta_ABs = np.empty(N, dtype=np.float64)
rho_ACs = np.empty(N, dtype=np.float64)
theta_ACs = np.empty(N, dtype=np.float64)
for i,date in enumerate(dates):
vA, vB, vC, XY_A, XY_B, XY_C, XY_AB, XY_A_loc, XY_B_loc, xy_C, xy_AB, xy_A_loc, xy_B_loc = self.get_orbit(date)
vAs[i] = vA
vBs[i] = vB
vCs[i] = vC
# For AB pair
# Calculate rho, theta from XY_A, XY_B, and XY_C
X_A, Y_A = XY_A
X_B, Y_B = XY_B
X_C, Y_C = XY_C
rho_AB = np.sqrt((X_B - X_A)**2 + (Y_B - Y_A)**2) # [AU]
theta_AB = np.arctan2((Y_B - Y_A), (X_B - X_A)) * 180/np.pi # [Deg]
if theta_AB < 0: # ensure that 0 <= theta <= 360
theta_AB += 360.
rho_ABs[i] = rho_AB
theta_ABs[i] = theta_AB
rho_AC = np.sqrt((X_C - X_A)**2 + (Y_C - Y_A)**2) # [AU]
theta_AC = np.arctan2((Y_C - Y_A), (X_C - X_A)) * 180/np.pi # [Deg]
if theta_AC < 0:
theta_AC += 360.
rho_ACs[i] = rho_AC
theta_ACs[i] = theta_AC
return (vAs, vBs, vCs, rho_ABs, theta_ABs, rho_ACs, theta_ACs)
models = {"Binary":Binary, "Triple":Triple}


# ============================================================================
# File: paw/tests/test_gen_hcat.py
# Repo: tehw0lf/paw @ adc2b11 (MIT); 9 stars
# ============================================================================
from .base import paw_test
class gen_hcat_test(paw_test):
def test_gen_hcat_cmd_one_h(self):
self.paw.hcat = True
self.paw.patterns[0] = ["%h"]
self.paw.gen_hcat_cmd()
self.assertEqual(len(self.paw.catstrs), 1)
self.assertEqual(self.paw.catstrs, {0: "-a 3 -1 ABCDEF ?1"})
def test_gen_hcat_cmd_one_dh(self):
self.paw.hcat = True
self.paw.patterns[0] = ["%dh"]
self.paw.gen_hcat_cmd()
self.assertEqual(len(self.paw.catstrs), 1)
self.assertEqual(self.paw.catstrs, {0: "-a 3 -1 0123456789ABCDEF ?1"})
def test_gen_hcat_cmd_one_di(self):
self.paw.hcat = True
self.paw.patterns[0] = ["%di"]
self.paw.gen_hcat_cmd()
self.assertEqual(len(self.paw.catstrs), 1)
self.assertEqual(self.paw.catstrs, {0: "-a 3 -2 0123456789abcdef ?2"})
def test_gen_hcat_cmd_one_i(self):
self.paw.hcat = True
self.paw.patterns[0] = ["%i"]
self.paw.gen_hcat_cmd()
self.assertEqual(len(self.paw.catstrs), 1)
self.assertEqual(self.paw.catstrs, {0: "-a 3 -2 abcdef ?2"})
def test_gen_hcat_cmd_single(self):
self.paw.hcat = True
self.paw.patterns[0] = ["%h", "%i", "%d", "%s", "%s"]
self.paw.gen_hcat_cmd()
self.assertEqual(len(self.paw.catstrs), 1)
self.assertEqual(
self.paw.catstrs, {0: "-a 3 -1 ABCDEF -2 abcdef ?1?2?d?s?s"}
)
def test_gen_hcat_cmd_multi(self):
self.paw.hcat = True
self.paw.patterns[0] = ["%h", "%i", "%d", "%s", "%s"]
self.paw.patterns[1] = ["%h", "%i", "%i", "%s", "%u", "%d"]
self.paw.gen_hcat_cmd()
self.assertEqual(len(self.paw.catstrs), 2)
self.assertEqual(
self.paw.catstrs,
{
0: "-a 3 -1 ABCDEF -2 abcdef ?1?2?d?s?s",
1: "-a 3 -1 ABCDEF -2 abcdef ?1?2?2?s?u?d",
},
)
def test_gen_hcat_cmd_empty_pattern(self):
self.paw.hcat = True
self.paw.patterns[0] = [""]
self.paw.gen_hcat_cmd()
self.assertEqual(self.paw.wcount, 1)


# ============================================================================
# File: src/project_name/script.py
# Repo: hlop3z/dexter @ 5ee7306 (MIT)
# ============================================================================
def hello_world():
print("hello world from Script") | 27.5 | 36 | 0.709091 | 8 | 55 | 4.75 | 0.75 | 0.526316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163636 | 55 | 2 | 36 | 27.5 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0.410714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |


# ============================================================================
# File: coringa/ledgers/tests/models/__init__.py
# Repo: joyinsky/coringa @ 79ad781 (MIT)
# ============================================================================
from .ledger import *
from .account import *
from .transactions import *


# ============================================================================
# File: proxy/mqtt/pyRC5/RC5/__init__.py
# Repo: aehparta/koti-devices @ f189f34 (MIT)
# ============================================================================
from .RC5 import RC5


# ============================================================================
# File: yamaps/settings.py
# Repo: herald-it/django-yamaps @ 8c5d710 (MIT)
# ============================================================================
from django.conf import settings
YAMAPS_ADDRESS_MODEL = settings.YAMAPS_ADDRESS_MODEL


# ============================================================================
# File: executions/atis/funql/query.py
# Repo: JasperGuo/MeaningRepresentationBenchmark @ b61e8ed (MIT); 9 stars
# ============================================================================
# coding=utf8
import re
import mysql.connector
from pprint import pprint
from .transform import transform
db = None
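# get_connection lazily opens a single cached MySQL connection to the local
# `atis` database and reuses it until the handle has been closed.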
def get_connection():
global db
if db and db.is_connected():
return db
else:
db = mysql.connector.connect(
host="localhost",
user="root",
passwd="123456",
database="atis",
auth_plugin='mysql_native_password'
)
return db
def close_connection():
if db is not None and db.is_connected():
db.close()
def format_headers(header):
s = header.replace("( ", "(").replace(" )", ")").strip().lower()
return s
def get_result(sql):
db = get_connection()
_sql = sql
cursor = db.cursor()
cursor.execute(_sql)
# print(cursor.description)
headers = cursor.description
results = cursor.fetchall()
formatted_results = list()
for x in results:
r = dict()
for value, header in zip(x, headers):
r[format_headers(header[0])] = value
formatted_results.append(r)
# pprint(formatted_results)
return formatted_results
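# get_result returns each row as a dict keyed by the lower-cased column name,
# e.g. [{'flight_id': 102}, ...] (illustrative values, not real query output).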
"""
Entity Type
"""
def build_entity(entity_type, entity_value):
return {entity_type: entity_value}
def answer(values):
return values
def get_entity_value(entity, key=None):
assert isinstance(entity, dict)
if key:
entity_type = key
entity_value = entity[key]
else:
entity_type = list(entity.keys())[0]
entity_value = entity[entity_type].replace("_", " ")
if entity_value == 'st louis':
entity_value = 'st. louis'
elif entity_value == 'st petersburg':
entity_value = 'st. petersburg'
elif entity_value == 'st paul':
entity_value = 'st. paul'
return entity_type, entity_value
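# For example, get_entity_value({'city_name': 'st_louis'}) returns
# ('city_name', 'st. louis'): underscores become spaces, and the three
# "st"-prefixed cities are rewritten to their dotted database spellings.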
def meal_code_all():
sql = "SELECT distinct meal_code FROM food_service"
return get_result(sql)
def airport_all():
sql = "SELECT distinct airport_code FROM airport"
return get_result(sql)
def aircraft_all():
sql = "SELECT distinct aircraft_code FROM aircraft"
return get_result(sql)
def city_all():
sql = "SELECT distinct city_name FROM city"
return get_result(sql)
def fare_basis_code_all():
sql = "SELECT distinct fare_basis_code FROM fare_basis"
return get_result(sql)
def airline_all():
sql = "SELECT distinct airline_code FROM airline"
return get_result(sql)
def flight_all():
sql = 'SELECT DISTINCT flight_id FROM flight'
return get_result(sql)
def booking_class_t_all():
sql = "SELECT distinct class_description FROM class_of_service"
return get_result(sql)
def class_of_service_all():
sql = 'SELECT DISTINCT class_of_service_1.booking_class FROM class_of_service class_of_service_1'
return get_result(sql)
def ground_transport_all():
sql = "SELECT distinct transport_type FROM ground_service"
return get_result(sql)
def abbrev(argument):
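    # Accept either a single entity dict or a list of entity dicts; the same
    # normalization pattern recurs in most of the predicates below.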
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
results = list()
for e in entities:
sql = "SELECT DISTINCT airline_1.airline_code FROM airline airline_1 WHERE airline_1.airline_name like '%" + e['airline_code'] + "%'"
results += get_result(sql)
return results
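# Note: abbrev receives the (possibly partial) airline *name* under the
# 'airline_code' key and resolves it to an airline code with a LIKE match.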
def capacity(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
results = list()
flight_number_template = "SELECT flight_1.flight_number, aircraft_1.capacity FROM aircraft as aircraft_1 JOIN flight as flight_1 on aircraft_1.aircraft_code = flight_1.aircraft_code_sequence WHERE flight_1.flight_number = %s;"
flight_id_template = "SELECT flight_1.flight_id, aircraft_1.capacity FROM aircraft as aircraft_1 JOIN flight as flight_1 on aircraft_1.aircraft_code = flight_1.aircraft_code_sequence WHERE flight_1.flight_id = %s;"
aircraft_code_template = "SELECT DISTINCT aircraft_1.aircraft_code, aircraft_1.capacity FROM aircraft aircraft_1 WHERE aircraft_1.aircraft_code = '%s'"
for e in entities:
if 'aircraft_code' in e:
entity_type, entity_name = get_entity_value(e, key='aircraft_code')
sql = aircraft_code_template % entity_name
elif 'flight_id' in e:
entity_type, entity_name = get_entity_value(e, key='flight_id')
# flight id
sql = flight_id_template % entity_name
else:
# entity_type == 'flight_number':
entity_type, entity_name = get_entity_value(e, key='flight_number')
sql = flight_number_template % entity_name
results += get_result(sql)
return results
def flight_number(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT flight_number FROM flight WHERE flight_id IN %s" % flight_id
results = get_result(sql)
return results
def time_elapsed(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT flight_id, time_elapsed FROM flight WHERE flight_id IN %s" % flight_id
return get_result(sql)
def time_elapsed_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight_id FROM flight WHERE time_elapsed = %s" % entity_value_1
return get_result(sql)
def minimum_connection_time(airport_code):
entity_type_1, entity_value_1 = get_entity_value(airport_code)
sql = "SELECT DISTINCT airport_1.minimum_connect_time FROM airport airport_1 WHERE airport_1.airport_code = '%s'" % (
entity_value_1)
return get_result(sql)
def miles_distant(entity_1, entity_2):
"""
_miles_distant
:entity_type: (airport_code, city_name)
:entity_type: (city_name, city_name)
"""
entity_type_1, entity_value_1 = get_entity_value(entity_1)
entity_type_2, entity_value_2 = get_entity_value(entity_2)
if entity_type_1 == 'airport_code' and entity_type_2 == 'city_name':
sql = "SELECT airport_service.miles_distant FROM airport_service JOIN city ON city.city_code = airport_service.city_code WHERE city.city_name = '%s' AND airport_service.airport_code = '%s'" % (
entity_value_2, entity_value_1)
else:
sql = "SELECT distinct airport_service.miles_distant FROM airport_service JOIN city ON airport_service.city_code = city.city_code WHERE city.city_name = '%s' AND airport_service.airport_code IN (SELECT T1.airport_code FROM airport_service AS T1 JOIN city AS T2 ON T1.city_code = T2.city_code WHERE T2.city_name = '%s');" % (
entity_value_1, entity_value_2)
return get_result(sql)
def minutes_distant(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
key = 'city_name' if 'city_name' in entities[0] else 'airport_code'
values = "(%s)" % ','.join(['"%s"' % e[key] for e in entities])
if key == 'city_name':
sql = "SELECT minutes_distant, city_name FROM airport_service JOIN city ON airport_service.city_code = city.city_code WHERE city.city_name IN %s" % (values)
else:
# airport_code
sql = "SELECT minutes_distant FROM airport_service WHERE airport_code IN %s" % values
results = get_result(sql)
return results
def services_1(airline_code):
entity_type_1, entity_value_1 = get_entity_value(airline_code)
sql = "SELECT city.city_name, flight.to_airport FROM flight JOIN airport_service ON flight.to_airport = airport_service.airport_code JOIN city ON city.city_code = airport_service.city_code WHERE flight.airline_code = '%s'" % (entity_value_1)
results = get_result(sql)
return results
def services_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
if entity_type_1 == 'city_name':
sql = "SELECT flight.airline_code FROM flight JOIN airport_service ON flight.to_airport = airport_service.airport_code JOIN city ON city.city_code = airport_service.city_code WHERE city.city_name = '%s'" % (
entity_value_1)
else:
assert entity_type_1 == 'airport_code'
sql = "SELECT DISTINCT flight.airline_code FROM flight WHERE flight.to_airport = '%s'" % (
entity_value_1,)
results = get_result(sql)
return results
def services(entity_1, entity_2):
entity_type_1, entity_value_1 = get_entity_value(entity_1)
entity_type_2, entity_value_2 = get_entity_value(entity_2)
if entity_type_2 == 'city_name':
sql = "SELECT DISTINCT flight.airline_code, city.city_name FROM flight JOIN airport_service ON flight.to_airport = airport_service.airport_code JOIN city ON city.city_code = airport_service.city_code WHERE flight.airline_code = '%s' AND city.city_name = '%s'" % (
entity_value_1, entity_value_2)
else:
assert entity_type_2 == 'airport_code'
sql = "SELECT DISTINCT flight.airline_code, flight.to_airport FROM flight WHERE flight.airline_code = '%s' AND flight.to_airport = '%s'" % (
entity_value_1, entity_value_2,)
results = get_result(sql)
return results
def airport(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
value_set = set()
for entity in entities:
# Airport code
for key in ['from_airport', 'to_airport', 'airport_code']:
if key in entity and entity[key] not in value_set:
results.append({key: entity[key]})
value_set.add(entity[key])
break
return results
def aircraft(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
value_set = set()
for entity in entities:
# Airport code
for key in ['aircraft_code']:
if key in entity and entity[key] not in value_set:
results.append({key: entity[key]})
value_set.add(entity[key])
break
return results
def airline(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
value_set = set()
for entity in entities:
# Airport code
for key in ['airline_code']:
if key in entity and entity[key] not in value_set:
results.append({key: entity[key]})
value_set.add(entity[key])
break
return results
def city(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
value_set = set()
for entity in entities:
# Airport code
for key in ['city_name']:
if key in entity and entity[key] not in value_set:
results.append({key: entity[key]})
value_set.add(entity[key])
break
return results
def time_zone_code(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
value_set = set()
for entity in entities:
# Airport code
for key in ['time_zone_code']:
if key in entity and entity[key] not in value_set:
results.append({key: entity[key]})
value_set.add(entity[key])
break
return results
def flight(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
value_set = set()
for entity in entities:
# Airport code
for key in ['flight_id']:
if key in entity and entity[key] not in value_set:
results.append({key: entity[key]})
value_set.add(entity[key])
break
return results
def taxi(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
value_set = set()
for entity in entities:
# Airport code
for key in ['transport_type']:
if key in entity and "TAXI" in entity[key]:
results.append({key: entity[key]})
value_set.add(entity[key])
break
return results
def air_taxi_operation(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
value_set = set()
for entity in entities:
# Airport code
for key in ['transport_type']:
if key in entity and "AIR TAXI OPERATION" == entity[key]:
results.append({key: entity[key]})
value_set.add(entity[key])
break
return results
def limousine(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
value_set = set()
for entity in entities:
# Airport code
for key in ['transport_type']:
if key in entity and "LIMOUSINE" == entity[key]:
results.append({key: entity[key]})
value_set.add(entity[key])
break
return results
def rapid_transit(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
value_set = set()
for entity in entities:
# Airport code
for key in ['transport_type']:
if key in entity and "RAPID TRANSIT" == entity[key]:
results.append({key: entity[key]})
value_set.add(entity[key])
break
return results
def rental_car(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
value_set = set()
for entity in entities:
# Airport code
for key in ['transport_type']:
if key in entity and "RENTAL CAR" == entity[key]:
results.append({key: entity[key]})
value_set.add(entity[key])
break
return results
def ground_transport(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
value_set = set()
for entity in entities:
# Airport code
for key in ['transport_type']:
results.append({key: entity[key]})
value_set.add(entity[key])
break
return results
def turboprop(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
et = 'aircraft_code' if 'aircraft_code' in entities[0] else 'flight_id'
values = "(%s)" % ','.join(['"%s"' % e[et] for e in entities])
if et == 'aircraft_code':
sql = "SELECT aircraft_code FROM aircraft WHERE aircraft_code IN %s AND propulsion = 'TURBOPROP'" % values
else:
sql = "SELECT flight_id FROM flight JOIN aircraft ON flight.aircraft_code_sequence = aircraft.aircraft_code WHERE propulsion = 'TURBOPROP' AND flight_id IN %s" % values
results = get_result(sql)
return results
def jet(argument):
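"""
Keep only aircraft_code / flight_id entities whose aircraft uses
JET propulsion.
"""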
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
et = 'aircraft_code' if 'aircraft_code' in entities[0] else 'flight_id'
values = "(%s)" % ','.join(['"%s"' % e[et] for e in entities])
if et == 'aircraft_code':
sql = "SELECT aircraft_code FROM aircraft WHERE aircraft_code IN %s AND propulsion = 'JET'" % values
else:
sql = "SELECT flight_id FROM flight JOIN aircraft ON flight.aircraft_code_sequence = aircraft.aircraft_code WHERE propulsion = 'JET' AND flight_id IN %s" % values
results = get_result(sql)
return results
def economy(argument):
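"""
Restrict the given flights to those offering an economy fare basis.
"""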
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
sql = "SELECT flight_fare.flight_id FROM flight_fare JOIN fare ON flight_fare.fare_id = fare.fare_id JOIN fare_basis ON fare.fare_basis_code = fare_basis.fare_basis_code WHERE fare_basis.economy = 'YES'"
results = get_result(sql)
results = intersection(entities, results)
return results
def connecting(argument):
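"""
Restrict the given flights to those with at least one connection.
"""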
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = 'SELECT DISTINCT flight_id FROM flight WHERE connections > 0 AND flight_id IN %s ' % flight_id
results = get_result(sql)
return results
def discounted(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN flight_fare ON flight.flight_id = flight_fare.flight_id JOIN fare ON fare.fare_id = flight_fare.fare_id JOIN fare_basis ON fare.fare_basis_code = fare_basis.fare_basis_code WHERE fare_basis.discounted = 'YES' AND flight.flight_id IN %s" % flight_id
results = get_result(sql)
return results
def nonstop(argument):
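"""
Restrict the given flights to those with zero stops.
"""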
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
sql = 'SELECT flight.flight_id FROM flight WHERE flight.stops = 0'
results = get_result(sql)
results = intersection(entities, results)
return results
def daily(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
sql = "SELECT flight_id FROM flight WHERE flight_days = 'daily'"
results = get_result(sql)
results = intersection(entities, results)
return results
def today(argument):
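"""
Flights available on 1991-06-22, the date this executor treats as "today".
"""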
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
sql = "SELECT flight.flight_id FROM flight JOIN days on flight.flight_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.year = 1991 AND date_day.month_number = 6 AND date_day.day_number = 22"
results = get_result(sql)
results = intersection(entities, results)
return results
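# after_day_2 / before_day_2 are not date-constrained here; both simply
# return the full flight set.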
def after_day_2(entity):
return flight_all()
def before_day_2(entity):
return flight_all()
def tomorrow(argument):
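"""
Flights available on 1991-01-20, the date this executor treats as "tomorrow".
"""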
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
sql = "SELECT flight.flight_id FROM flight JOIN days on flight.flight_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.year = 1991 AND date_day.month_number = 1 AND date_day.day_number = 20"
results = get_result(sql)
results = intersection(entities, results)
return results
def overnight(argument):
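"""
Treated as a no-op filter: the input entities are returned unchanged.
"""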
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
return entities
def tonight(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT flight_id FROM flight WHERE departure_time BETWEEN %d AND %d AND flight_id IN %s" % (
1800, 2359, flight_id)
results = get_result(sql)
return results
def day_number_return_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT flight.flight_id FROM flight JOIN flight_fare ON flight.flight_id = flight_fare.flight_id JOIN fare ON flight_fare.fare_id = fare.fare_id JOIN fare_basis ON fare.fare_basis_code = fare_basis.fare_basis_code JOIN days ON fare_basis.basis_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.day_number = %s" % (
entity_value_1)
results = get_result(sql)
return results
def tomorrow_arrival(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT flight.flight_id FROM flight JOIN days on flight.flight_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.year = 1991 AND date_day.month_number = 1 AND date_day.day_number = 20 AND flight.departure_time > flight.arrival_time AND flight.flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def day_after_tomorrow(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN days on flight.flight_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.year = 1991 AND date_day.month_number = 1 AND date_day.day_number = 21"
results = get_result(sql)
results = intersection(entities, results)
return results
def oneway(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
sql = 'SELECT flight.flight_id FROM flight JOIN flight_fare ON flight.flight_id = flight_fare.flight_id JOIN fare ON flight_fare.fare_id = fare.fare_id WHERE fare.round_trip_required = "NO"'
results = get_result(sql)
results = intersection(entities, results)
return results
def round_trip(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
sql = 'SELECT flight.flight_id FROM flight JOIN flight_fare ON flight.flight_id = flight_fare.flight_id JOIN fare ON flight_fare.fare_id = fare.fare_id WHERE fare.round_trip_required IS NOT NULL'
results = get_result(sql)
results = intersection(entities, results)
return results
def weekday(argument):
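"""
Keep flights that operate on every weekday (Monday through Friday);
ground-transport entities pass through unchanged.
"""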
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
for entity in entities:
if 'flight_id' not in entity:
assert 'transport_type' in entity
results.append(entity)
else:
sql = "SELECT distinct day_name FROM flight JOIN days ON flight.flight_days = days.days_code WHERE flight_id = %s AND day_name IN ('MONDAY', 'TUESDAY', 'WEDNESDAY', 'THURSDAY', 'FRIDAY')" % entity['flight_id']
tmp = get_result(sql)
if len(tmp) == 5:
results.append(entity)
return results
def airline_2(airline_code):
entity_type_1, entity_value_1 = get_entity_value(airline_code)
sql = "SELECT flight_id FROM flight WHERE airline_code = '%s'" % (
entity_value_1)
return get_result(sql)
def aircraft_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN equipment_sequence ON flight.aircraft_code_sequence = equipment_sequence.aircraft_code_sequence WHERE equipment_sequence.aircraft_code = '%s'" % (
entity_value_1)
return get_result(sql)
def manufacturer_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT aircraft.aircraft_code , flight.flight_id FROM flight JOIN aircraft ON flight.aircraft_code_sequence = aircraft.aircraft_code WHERE aircraft.manufacturer = '%s'" % (
entity_value_1)
results = get_result(sql)
return results
def meal(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
for entity in entities:
sql = "SELECT food_service.meal_description FROM flight JOIN food_service ON flight.meal_code = food_service.meal_code WHERE flight_id = %s" % (
entity['flight_id'])
results += get_result(sql)
return results
def loc_t_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
if entity_type_1 == 'city_name':
sql = "SELECT T.airport_code FROM airport_service AS T JOIN city ON T.city_code = city.city_code WHERE city.city_name = '%s';" % (
entity_value_1)
elif entity_type_1 == 'state_name':
sql = "SELECT T.airport_code FROM airport_service AS T JOIN city ON T.city_code = city.city_code JOIN state ON city.state_code = state.state_code WHERE state.state_name = '%s';" % (
entity_value_1)
else:
assert entity_type_1 == 'time_zone_code'
sql = "SELECT city_name FROM city WHERE time_zone_code = '%s'" % (
entity_value_1)
return get_result(sql)
def loc_t_1(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
if entity_type_1 == 'airport_code':
sql = "SELECT city.city_name FROM airport_service AS T JOIN city ON T.city_code = city.city_code WHERE T.airport_code = '%s';" % (
entity_value_1)
else:
assert entity_type_1 == 'city_name'
sql = "SELECT time_zone_code FROM city WHERE city_name = '%s'" % (
entity_value_1)
results = get_result(sql)
return results
def during_day_2(entity):
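"""
Map a day-period word (morning, afternoon, ...) to a departure-time
window; "late night" wraps past midnight and uses a date-based query.
"""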
entity_type_1, entity_value_1 = get_entity_value(entity)
period_map = {
"morning": [0, 1200],
"afternoon": [1200, 1800],
"early": [0, 800],
"evening": [1800, 2200],
"pm": [1200, 2400],
"late": [601, 1759],
"breakfast": [600, 900],
"late evening": [2000, 2400],
"late night": [2159, 301],
"daytime": [600, 1800]
}
if entity_value_1 == 'late night':
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN days ON flight.flight_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.year = 1991 AND date_day.month_number = 3 AND ( (date_day.day_number = 21 AND flight.departure_time > 2159) OR (date_day.day_number = 22 AND flight.departure_time < 301))"
else:
start, end = period_map[entity_value_1]
sql = "SELECT DISTINCT flight_1.flight_id FROM flight flight_1 WHERE flight_1.departure_time BETWEEN %d AND %d" % (
start, end)
results = get_result(sql)
return results
def during_day_arrival_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
processed_day_period = entity_value_1
period_map = {
"morning": [0, 1200],
"afternoon": [1200, 1800],
"early": [0, 800],
"evening": [1800, 2200],
"pm": [1200, 2400],
"late": [601, 1759],
"breakfast": [600, 900],
"late evening": [2000, 2400],
"daytime": [600, 1800],
"late night": [2159, 301],
'mealtime': [1700, 2000]
}
if processed_day_period == 'late night':
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN days ON flight.flight_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE flight.flight_id = %s AND date_day.year = 1991 AND date_day.month_number = 3 AND ( (date_day.day_number = 21 AND flight.arrival_time > 2159) OR (date_day.day_number = 22 AND flight.arrival_time < 301))"
else:
start, end = period_map[processed_day_period]
sql = "SELECT DISTINCT flight_1.flight_id FROM flight flight_1 WHERE flight_1.arrival_time BETWEEN %d AND %d" % (
start, end)
results = get_result(sql)
return results
def day_number_arrival_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN days on flight.flight_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.year = 1991 AND ((date_day.day_number = %s AND flight.arrival_time < flight.departure_time) OR (date_day.day_number = %s))" % (
str(int(entity_value_1) - 1), entity_value_1)
results = get_result(sql)
return results
def flight_number_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight_id FROM flight WHERE flight_number = '%s'" % entity_value_1
results = get_result(sql)
return results
def aircraft_basis_type_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT aircraft_code FROM aircraft WHERE basic_type = '%s'" % (
entity_value_1)
results = get_result(sql)
return results
def from_2(entity):
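"""
_from(x,"<airport_code>:_ap"/"<city_name>:_ci")
Flights departing from the given airport(s) or city(ies).
"""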
if isinstance(entity, dict):
entity_type_1, entity_value_1 = get_entity_value(entity)
if entity_type_1 == 'airport_code':
sql = "SELECT DISTINCT flight_id FROM flight WHERE flight.from_airport = '%s'" % (
entity_value_1)
else:
# entity_type == 'city_name'
sql = "SELECT DISTINCT flight_1.flight_id FROM flight AS flight_1 JOIN airport_service AS airport_service_1 ON flight_1.from_airport = airport_service_1.airport_code JOIN city AS city_1 ON airport_service_1.city_code = city_1.city_code WHERE city_1.city_name = '%s'" % (
entity_value_1)
results = get_result(sql)
else:
assert isinstance(entity, list)
if len(entity) == 0:
return list()
entity_type_1, entity_value_1 = get_entity_value(entity[0])
values = "(%s)" % ','.join(
['"%s"' % e[entity_type_1] for e in entity])
if entity_type_1 == 'airport_code':
sql = "SELECT DISTINCT flight_id FROM flight WHERE flight.from_airport IN %s" % (
values)
else:
# city_name
sql = "SELECT DISTINCT flight_1.flight_id FROM flight AS flight_1 JOIN airport_service AS airport_service_1 ON flight_1.from_airport = airport_service_1.airport_code JOIN city AS city_1 ON airport_service_1.city_code = city_1.city_code WHERE city_1.city_name IN %s" % (
values)
results = get_result(sql)
return results
def from_1(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT flight_1.from_airport, city_1.city_name FROM flight AS flight_1 JOIN airport_service AS airport_service_1 ON flight_1.from_airport = airport_service_1.airport_code JOIN city AS city_1 ON airport_service_1.city_code = city_1.city_code WHERE flight_1.flight_id in %s" % (
flight_id)
results = get_result(sql)
return results
def to_2(entity):
"""
_to(x,"mke:_ap"/"indianapolis:_ci")
"""
if isinstance(entity, dict):
entity_type_1, entity_value_1 = get_entity_value(entity)
if entity_type_1 == 'airport_code':
sql = "SELECT DISTINCT flight_id FROM flight WHERE flight.to_airport = '%s'" % (
entity_value_1)
elif entity_type_1 == 'city_name':
sql = "SELECT DISTINCT flight_1.flight_id FROM flight AS flight_1 JOIN airport_service AS airport_service_1 ON flight_1.to_airport = airport_service_1.airport_code JOIN city AS city_1 ON airport_service_1.city_code = city_1.city_code WHERE city_1.city_name = '%s'" % (
entity_value_1)
else:
# entity_type == 'state_name':
sql = "SELECT DISTINCT flight_1.flight_id FROM flight AS flight_1 JOIN airport_service AS airport_service_1 ON flight_1.to_airport = airport_service_1.airport_code JOIN city AS city_1 ON airport_service_1.city_code = city_1.city_code JOIN state ON city_1.state_code = state.state_code WHERE state.state_name = '%s'" % (
entity_value_1)
results = get_result(sql)
else:
assert isinstance(entity, list)
if len(entity) == 0:
return list()
entity_type_1, entity_value_1 = get_entity_value(entity[0])
values = "(%s)" % ','.join(
['"%s"' % e[entity_type_1] for e in entity])
if entity_type_1 == 'airport_code':
sql = "SELECT DISTINCT flight_id FROM flight WHERE flight.to_airport IN %s" % (
values)
else:
# city_name
sql = "SELECT DISTINCT flight_1.flight_id FROM flight AS flight_1 JOIN airport_service AS airport_service_1 ON flight_1.to_airport = airport_service_1.airport_code JOIN city AS city_1 ON airport_service_1.city_code = city_1.city_code WHERE city_1.city_name IN %s" % (
values)
results = get_result(sql)
return results
def to_1(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
results = list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT flight_1.to_airport, city_1.city_name FROM flight AS flight_1 JOIN airport_service AS airport_service_1 ON flight_1.to_airport = airport_service_1.airport_code JOIN city AS city_1 ON airport_service_1.city_code = city_1.city_code WHERE flight_1.flight_id in %s" % (
flight_id)
results = get_result(sql)
return results
def airport_1(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
if len(entities) == 0:
return list()
city_names = "(%s)" % ','.join(['"%s"' % e['city_name'] for e in entities])
sql = 'SELECT airport_service.airport_code FROM airport_service JOIN city ON city.city_code = airport_service.city_code WHERE city.city_name IN %s' % (
city_names)
results = get_result(sql)
return results
def airline_1(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT airline_code FROM flight WHERE flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def booking_class_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT flight_fare.flight_id FROM flight_fare JOIN fare ON flight_fare.fare_id = fare.fare_id JOIN fare_basis ON fare.fare_basis_code = fare_basis.fare_basis_code JOIN class_of_service ON fare_basis.booking_class = class_of_service.booking_class WHERE class_of_service.class_description = '%s'" % entity_value_1
results = get_result(sql)
return results
def booking_class_1(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT class_of_service.class_description FROM flight_fare JOIN fare ON flight_fare.fare_id = fare.fare_id JOIN fare_basis ON fare.fare_basis_code = fare_basis.fare_basis_code JOIN class_of_service ON fare_basis.booking_class = class_of_service.booking_class WHERE flight_fare.flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def from_airport_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
if entity_type_1 == 'city_name':
sql = "SELECT DISTINCT T3.transport_type FROM airport_service AS T1 JOIN city AS T2 ON T1.city_code = T2.city_code JOIN ground_service AS T3 ON T1.airport_code = T3.airport_code WHERE T2.city_name = '%s'" % entity_value_1
else:
assert entity_type_1 == 'airport_code'
sql = "SELECT DISTINCT ground_service_1.transport_type, ground_service_1.airport_code FROM ground_service ground_service_1 WHERE ground_service_1.airport_code = '%s'" % entity_value_1
results = get_result(sql)
return results
def to_city_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
assert entity_type_1 == 'city_name'
sql = "SELECT DISTINCT ground_service_1.transport_type, city_1.city_name FROM ground_service AS ground_service_1 JOIN city AS city_1 ON ground_service_1.city_code = city_1.city_code WHERE city_1.city_name = '%s'" % (
entity_value_1)
results = get_result(sql)
return results
def meal_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
if entity_type_1 == 'meal_code':
sql = "SELECT flight_id FROM flight WHERE meal_code = '%s'" % (entity_value_1)
else:
sql = "SELECT flight_id FROM flight JOIN food_service ON flight.meal_code = food_service.meal_code WHERE food_service.meal_description = '%s'" % (
entity_value_1)
results = get_result(sql)
return results
def meal_code_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT flight_id FROM flight WHERE meal_code = '%s'" % (entity_value_1)
results = get_result(sql)
return results
def day_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN days on flight.flight_days = days.days_code WHERE days.day_name = '%s'" % (
entity_value_1)
results = get_result(sql)
return results
def day_return_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN flight_fare ON flight.flight_id = flight_fare.flight_id JOIN fare ON flight_fare.fare_id = fare.fare_id JOIN fare_basis ON fare.fare_basis_code = fare_basis.fare_basis_code JOIN days ON fare_basis.basis_days = days.days_code WHERE days.day_name = '%s'" % (
entity_value_1)
results = get_result(sql)
return results
def year_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight_id FROM flight JOIN days ON flight.flight_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.year = %s" % (
entity_value_1)
results = get_result(sql)
return results
def day_arrival_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN days on flight.flight_days = days.days_code WHERE days.day_name = '%s'" % (
entity_value_1)
results = get_result(sql)
return results
def day_number_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT flight.flight_id FROM flight JOIN days on flight.flight_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.year = 1991 AND date_day.day_number = %s" % (
entity_value_1)
results = get_result(sql)
return results
def next_days_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN days on flight.flight_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.year = 1991 AND date_day.month_number = 1 AND date_day.day_number BETWEEN 20 and %s" % (
int(entity_value_1) + 20 )
results = get_result(sql)
return results
def month_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
month_map = {
"january": 1,
"february": 2,
"march": 3,
"april": 4,
"may": 5,
"june": 6,
"july": 7,
"august": 8,
"september": 9,
"october": 10,
"november": 11,
"december": 12
}
sql = "SELECT flight.flight_id FROM flight JOIN days on flight.flight_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.year = 1991 AND date_day.month_number = %s" % (
month_map[entity_value_1])
results = get_result(sql)
return results
def month_arrival_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
month_map = {
"january": 1,
"february": 2,
"march": 3,
"april": 4,
"may": 5,
"june": 6,
"july": 7,
"august": 8,
"september": 9,
"october": 10,
"november": 11,
"december": 12
}
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN days on flight.flight_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.year = 1991 AND date_day.month_number = %s" % (
month_map[entity_value_1])
results = get_result(sql)
return results
def month_return_2(entity):
"""
_month_return(x, "june:_mn")
:entity_type (flight_id, month)
"""
entity_type_1, entity_value_1 = get_entity_value(entity)
month_map = {
"january": 1,
"february": 2,
"march": 3,
"april": 4,
"may": 5,
"june": 6,
"july": 7,
"august": 8,
"september": 9,
"october": 10,
"november": 11,
"december": 12
}
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN flight_fare ON flight.flight_id = flight_fare.flight_id JOIN fare ON flight_fare.fare_id = fare.fare_id JOIN fare_basis ON fare.fare_basis_code = fare_basis.fare_basis_code JOIN days ON fare_basis.basis_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.year = 1991 AND date_day.month_number = %s" % (
month_map[entity_value_1])
results = get_result(sql)
return results
def days_from_today_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT flight.flight_id FROM flight JOIN days on flight.flight_days = days.days_code JOIN date_day ON days.day_name = date_day.day_name WHERE date_day.year = 1991 AND date_day.month_number = 5 AND date_day.day_number = %s" % (
int(entity_value_1) + 27)
results = get_result(sql)
return results
def stop_1(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT city.city_name, flight_stop.stop_airport FROM flight JOIN flight_stop ON flight.flight_id = flight_stop.flight_id JOIN airport_service ON flight_stop.stop_airport = airport_service.airport_code JOIN city ON city.city_code = airport_service.city_code WHERE flight.flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def stop_2(entity):
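"""
Flights that make a stop in the given city or at the given airport;
accepts a single entity or a list of entities.
"""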
if isinstance(entity, dict):
entity_type_1, entity_value_1 = get_entity_value(entity)
if entity_type_1 == 'city_name':
sql = "SELECT flight.flight_id FROM flight JOIN flight_stop ON flight.flight_id = flight_stop.flight_id JOIN airport_service ON flight_stop.stop_airport = airport_service.airport_code JOIN city ON city.city_code = airport_service.city_code WHERE city.city_name = '%s'" % (
entity_value_1)
elif entity_type_1 == 'airport_code':
sql = "SELECT flight_stop.flight_id FROM flight_stop WHERE flight_stop.stop_airport = '%s'" % (
entity_value_1)
results = get_result(sql)
else:
assert isinstance(entity, list)
if len(entity) == 0:
return list()
entity_type_1, entity_value_1 = get_entity_value(entity[0])
values = "(%s)" % ','.join(
['"%s"' % e[entity_type_1] for e in entity])
if entity_type_1 == 'city_name':
sql = "SELECT flight.flight_id FROM flight JOIN flight_stop ON flight.flight_id = flight_stop.flight_id JOIN airport_service ON flight_stop.stop_airport = airport_service.airport_code JOIN city ON city.city_code = airport_service.city_code WHERE city.city_name IN %s" % (
values)
elif entity_type_1 == 'airport_code':
sql = "SELECT flight_stop.flight_id FROM flight_stop WHERE flight_stop.stop_airport IN %s" % (
values)
results = get_result(sql)
return results
def stop_arrival_time(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT flight_stop.arrival_time, city.city_name FROM flight_stop JOIN airport_service ON flight_stop.stop_airport = airport_service.airport_code JOIN city ON city.city_code = airport_service.city_code WHERE flight_stop.flight_id IN %s" % (flight_id)
return get_result(sql)
def stops(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT flight_id, stops FROM flight WHERE flight.flight_id IN %s" % (flight_id)
return get_result(sql)
def stops_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight_id FROM flight WHERE stops = %s" % (
entity_value_1)
results = get_result(sql)
return results
def class_type_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT flight_fare.flight_id FROM flight_fare JOIN fare ON flight_fare.fare_id = fare.fare_id JOIN fare_basis ON fare.fare_basis_code = fare_basis.fare_basis_code WHERE fare_basis.class_type = '%s'" % (
entity_value_1)
results = get_result(sql)
return results
def fare_basis_code_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight_id FROM flight_fare JOIN fare ON flight_fare.fare_id = fare.fare_id WHERE fare.fare_basis_code = '%s'" % (entity_value_1)
results = get_result(sql)
return results
def fare_basis_code(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT fare.fare_basis_code FROM flight_fare JOIN fare ON flight_fare.fare_id = fare.fare_id WHERE flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def has_meal(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT flight_id FROM flight WHERE meal_code is not NULL AND flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def has_stops(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = 'SELECT T1.flight_id FROM flight AS T1 JOIN flight_stop AS T2 ON T1.flight_id = T2.flight_id WHERE T1.flight_id IN %s' % (
flight_id)
results = get_result(sql)
return results
def less_than_fare_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN flight_fare ON flight.flight_id = flight_fare.flight_id JOIN fare ON fare.fare_id = flight_fare.fare_id WHERE fare.one_direction_cost < %s" % (entity_value_1)
results = get_result(sql)
return results
def fare_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight.flight_id FROM flight JOIN flight_fare ON flight.flight_id = flight_fare.flight_id JOIN fare ON fare.fare_id = flight_fare.fare_id WHERE fare.one_direction_cost = '%s'" % (
entity_value_1)
results = get_result(sql)
return results
def fare(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT flight.flight_id, fare.one_direction_cost FROM flight JOIN flight_fare ON flight.flight_id = flight_fare.flight_id JOIN fare ON fare.fare_id = flight_fare.fare_id WHERE flight.flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def fare_time(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT flight.flight_id, fare.one_direction_cost, flight.departure_time FROM flight JOIN flight_fare ON flight.flight_id = flight_fare.flight_id JOIN fare ON fare.fare_id = flight_fare.fare_id WHERE flight.flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def ground_fare(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
transport_types = "(%s)" % ','.join(['"%s"' % e['transport_type'] for e in entities])
sql = "SELECT ground_fare FROM ground_service WHERE transport_type IN %s" % (
transport_types)
results = get_result(sql)
return results
def aircraft_1(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT equipment_sequence.aircraft_code FROM flight JOIN equipment_sequence ON flight.aircraft_code_sequence = equipment_sequence.aircraft_code_sequence WHERE flight.flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def flight_aircraft(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT flight.flight_id, equipment_sequence.aircraft_code FROM flight JOIN equipment_sequence ON flight.aircraft_code_sequence = equipment_sequence.aircraft_code_sequence WHERE flight.flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def flight_airline(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
results = list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT flight.flight_id, airline_code FROM flight WHERE flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def departure_time_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight_1.flight_id FROM flight flight_1 WHERE flight_1.departure_time = %s" % (
entity_value_1)
results = get_result(sql)
return results
def departure_time(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT flight_1.departure_time FROM flight flight_1 WHERE flight_1.flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def arrival_time(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT flight_1.arrival_time FROM flight flight_1 WHERE flight_1.flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def arrival_time_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight_1.flight_id FROM flight flight_1 WHERE flight_1.arrival_time = %s" % (
entity_value_1)
results = get_result(sql)
return results
def approx_return_time_2(entity):
return approx_arrival_time_2(entity)
def approx_arrival_time_2(entity):
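"""
Flights arriving within roughly half an hour of the requested time;
the time string is expected to end in 00, 15, 30 or 45.
"""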
entity_type_1, entity_value_1 = get_entity_value(entity)
processed_arrival_time = entity_value_1
if len(processed_arrival_time) == 4:
if processed_arrival_time[2:] == '00':
start_time = "%d%d" % (int(processed_arrival_time[:2]) - 1, 30)
end_time = "%d%d" % (int(processed_arrival_time[:2]), 30)
elif processed_arrival_time[2:] == '15':
start_time = "%d%d" % (int(processed_arrival_time[:2]) - 1, 45)
end_time = "%d%d" % (int(processed_arrival_time[:2]), 45)
elif processed_arrival_time[2:] == '30':
start_time = "%d00" % (int(processed_arrival_time[:2]))
end_time = "%d00" % (int(processed_arrival_time[:2]) + 1)
else:
assert processed_arrival_time[2:] == '45'
start_time = "%d%d" % (int(processed_arrival_time[:2]), 15)
end_time = "%d%d" % (int(processed_arrival_time[:2]) + 1, 15)
else:
if processed_arrival_time[1:] == '00':
start_time = "%d%d" % (int(processed_arrival_time[:1]) - 1, 30)
end_time = "%d%d" % (int(processed_arrival_time[:1]), 30)
elif processed_arrival_time[1:] == '15':
start_time = "%d%d" % (int(processed_arrival_time[:1]) - 1, 45)
end_time = "%d%d" % (int(processed_arrival_time[:1]), 45)
elif processed_arrival_time[1:] == '30':
start_time = "%d00" % (int(processed_arrival_time[:1]))
end_time = "%d00" % (int(processed_arrival_time[:1]) + 1)
else:
assert processed_arrival_time[1:] == '45'
start_time = "%d%d" % (int(processed_arrival_time[:1]), 15)
end_time = "%d%d" % (int(processed_arrival_time[:1]) + 1, 15)
sql = "SELECT DISTINCT flight_1.flight_id FROM flight flight_1 WHERE flight_1.arrival_time >= %s AND flight_1.arrival_time <= %s" % (
start_time, end_time)
results = get_result(sql)
return results
def airline_name(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
results = list()
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(
['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT airline_name FROM flight JOIN airline ON flight.airline_code = airline.airline_code WHERE flight.flight_id IN %s" % (
flight_id)
results = get_result(sql)
return results
def flight_fare(argument):
return fare(argument)
def restriction_code(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(
['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT restriction.restriction_code FROM flight_fare JOIN fare ON flight_fare.fare_id = fare.fare_id JOIN restriction ON fare.restriction_code = restriction.restriction_code WHERE flight_fare.flight_id IN %s" % (
flight_id)
return get_result(sql)
def approx_departure_time_2(entity):
"""
_approx_departure_time()
"""
entity_type_1, entity_value_1 = get_entity_value(entity)
processed_departure_time = entity_value_1
if len(processed_departure_time) == 4:
if processed_departure_time[2:] == '00':
start_time = "%d%d" % (int(processed_departure_time[:2]) - 1, 30)
end_time = "%d%d" % (int(processed_departure_time[:2]), 30)
elif processed_departure_time[2:] == '15':
start_time = "%d%d" % (int(processed_departure_time[:2]) - 1, 45)
end_time = "%d%d" % (int(processed_departure_time[:2]), 45)
elif processed_departure_time[2:] == '30':
start_time = "%d00" % (int(processed_departure_time[:2]))
end_time = "%d00" % (int(processed_departure_time[:2]) + 1)
else:
assert processed_departure_time[2:] == '45'
start_time = "%d%d" % (int(processed_departure_time[:2]), 15)
end_time = "%d%d" % (int(processed_departure_time[:2]) + 1, 15)
sql = "SELECT DISTINCT flight_1.flight_id FROM flight flight_1 WHERE flight_1.departure_time >= %s AND flight_1.departure_time <= %s" % (
start_time, end_time)
elif len(processed_departure_time) == 3:
if processed_departure_time[1:] == '00':
start_time = "%d%d" % (int(processed_departure_time[:1]) - 1, 30)
end_time = "%d%d" % (int(processed_departure_time[:1]), 30)
elif processed_departure_time[1:] == '15':
start_time = "%d%d" % (int(processed_departure_time[:1]) - 1, 45)
end_time = "%d%d" % (int(processed_departure_time[:1]), 45)
elif processed_departure_time[1:] == '30':
start_time = "%d00" % (int(processed_departure_time[:1]))
end_time = "%d00" % (int(processed_departure_time[:1]) + 1)
else:
assert processed_departure_time[1:] == '45'
start_time = "%d%d" % (int(processed_departure_time[:1]), 15)
end_time = "%d%d" % (int(processed_departure_time[:1]) + 1, 15)
sql = "SELECT DISTINCT flight_1.flight_id FROM flight flight_1 WHERE flight_1.departure_time >= %s AND flight_1.departure_time <= %s" % (
start_time, end_time)
elif processed_departure_time == "0":
start_time = "2330"
end_time = "30"
sql = "SELECT DISTINCT flight_1.flight_id FROM flight flight_1 WHERE (flight_1.departure_time >= %s OR flight_1.departure_time <= %s)" % (
start_time, end_time)
results = get_result(sql)
return results
def larger_than_stops_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight_id FROM flight WHERE stops > %s" % (
entity_value_1)
results = get_result(sql)
return results
def larger_than_arrival_time_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight.flight_id FROM flight WHERE arrival_time > %s" % entity_value_1
results = get_result(sql)
return results
def less_than_arrival_time_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight.flight_id FROM flight WHERE arrival_time < %s" % entity_value_1
results = get_result(sql)
return results
def larger_than_departure_time_2(entity):
if isinstance(entity, dict):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight.flight_id FROM flight WHERE departure_time > %s" % entity_value_1
results = get_result(sql)
else:
assert isinstance(entity, list)
flight_ids = "(%s)" % ','.join(
['"%s"' % e['flight_id'] for e in entity])
sql = "SELECT DISTINCT flight.flight_id FROM flight WHERE departure_time > (SELECT MAX(T.departure_time) FROM flight AS T WHERE T.flight_id IN %s)" % flight_ids
results = get_result(sql)
return results
def less_than_departure_time_2(entity):
entity_type_1, entity_value_1 = get_entity_value(entity)
sql = "SELECT DISTINCT flight.flight_id FROM flight WHERE departure_time < %s" % entity_value_1
results = get_result(sql)
return results
def larger_than_capacity_2(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
# flight id
valid_key = ''
if 'aircraft_code' in entities[0]:
valid_key = 'aircraft_code'
else:
raise Exception("Invalid key in larger_than_capacity_2")
values = "(%s)" % ','.join(['"%s"' % e[valid_key] for e in entities])
sql = "SELECT DISTINCT aircraft_1.aircraft_code FROM aircraft aircraft_1 WHERE aircraft_1.capacity > (SELECT MAX(T.capacity) FROM aircraft AS T WHERE T.aircraft_code IN %s)" % values
results = get_result(sql)
return results
def intersection(*args):
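"""
Intersect the arguments on the first key they all share and return a
list of {key: value} dicts; transport_type entries keep their extra fields.
"""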
keys = {}
all_entities = list()
for arg in args:
if len(arg) == 0:
return list()
if isinstance(arg, dict):
if len(keys) == 0:
keys = set(arg.keys())
else:
keys &= set(arg.keys())
else:
assert isinstance(arg, list) and isinstance(arg[0], dict)
all_entities += arg
if len(keys) == 0:
keys = set(arg[0].keys())
else:
keys &= set(arg[0].keys())
if len(keys) == 0:
return list()
valid_key = list(keys)[0]
results = set()
for aidx, arg in enumerate(args):
tmp = set()
if isinstance(arg, list):
for a in arg:
tmp.add(a[valid_key])
else:
tmp.add(arg[valid_key])
if aidx == 0:
results = tmp
else:
results &= tmp
return_results = list()
for r in results:
info = {valid_key: r}
if valid_key == 'transport_type':
for e in all_entities:
if valid_key in e and e[valid_key] == r:
info.update(e)
return_results.append(info)
return return_results
def not_(argument):
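"""
Complement of the given flight_id / airline_code / aircraft_code /
city_name entities within their table.
"""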
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
# flight id
valid_key = ''
if 'flight_id' in entities[0]:
valid_key = 'flight_id'
elif 'airline_code' in entities[0]:
valid_key = 'airline_code'
elif 'aircraft_code' in entities[0]:
valid_key = 'aircraft_code'
elif 'city_name' in entities[0]:
valid_key = 'city_name'
else:
raise Exception("Invalid key in Not")
values = "(%s)" % ','.join(['"%s"' % e[valid_key] for e in entities])
if valid_key == 'flight_id':
sql = 'SELECT flight_id FROM flight WHERE flight_id NOT IN %s' % values
elif valid_key == 'airline_code':
sql = "SELECT distinct airline_code FROM airline WHERE airline_code NOT IN %s" % values
elif valid_key == 'aircraft_code':
sql = "SELECT distinct aircraft_code FROM aircraft WHERE aircraft_code NOT IN %s" % values
elif valid_key == 'city_name':
sql = "SELECT distinct city_name FROM city WHERE city_name NOT IN %s" % values
results = get_result(sql)
return results
def or_(*args):
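"""
Union of the arguments on the first key they all share (the
counterpart of intersection).
"""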
keys = {}
for arg in args:
if len(arg) == 0:
return list()
if isinstance(arg, dict):
if len(keys) == 0:
keys = set(arg.keys())
else:
keys &= set(arg.keys())
else:
assert isinstance(arg, list) and isinstance(arg[0], dict)
if len(keys) == 0:
keys = set(arg[0].keys())
else:
keys &= set(arg[0].keys())
if len(keys) == 0:
return list()
valid_key = list(keys)[0]
results = set()
for aidx, arg in enumerate(args):
tmp = set()
if isinstance(arg, list):
for a in arg:
tmp.add(a[valid_key])
else:
tmp.add(arg[valid_key])
if aidx == 0:
results = tmp
else:
results |= tmp
return_results = list()
for r in results:
return_results.append({valid_key: r})
return return_results
def count_(argument):
if isinstance(argument, list):
return len(argument)
if argument is not None:
return 1
return 0
def max_(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
valid_key = None
keys = set()
for e in entities:
keys |= set(e.keys())
keys = keys & {'one_direction_cost', 'arrival_time', 'departure_time'}
if len(keys) > 0:
valid_key = list(keys)[0]
max_value = max([e[valid_key] for e in entities])
return max_value
else:
return 0.0
def min_(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
valid_key = None
keys = set()
for e in entities:
keys |= set(e.keys())
keys = keys & {'one_direction_cost', 'arrival_time', 'departure_time'}
if len(keys) > 0:
valid_key = list(keys)[0]
min_value = min([e[valid_key] for e in entities])
return min_value
else:
return 0.0
def argmin_departure_time(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "select flight.flight_id, flight.departure_time FROM flight WHERE flight.flight_id IN %s" % flight_id
results = get_result(sql)
min_time = min([r['departure_time'] for r in results])
return_results = [r for r in results if r['departure_time'] == min_time]
return return_results
def argmax_arrival_time(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "select flight.flight_id, flight.arrival_time FROM flight WHERE flight.flight_id IN %s" % flight_id
results = get_result(sql)
max_time = max([r['arrival_time'] for r in results])
return_results = [r for r in results if r['arrival_time'] == max_time]
return return_results
def argmax_departure_time(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "select flight.flight_id, flight.departure_time FROM flight WHERE flight.flight_id IN %s" % flight_id
results = get_result(sql)
max_time = max([r['departure_time'] for r in results])
return_results = [r for r in results if r['departure_time'] == max_time]
return return_results
def argmin_arrival_time(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "select flight.flight_id, flight.arrival_time FROM flight WHERE flight.flight_id IN %s" % flight_id
results = get_result(sql)
min_time = min([r['arrival_time'] for r in results])
return_results = [r for r in results if r['arrival_time'] == min_time]
return return_results
def argmin_fare(argument):
results = fare(argument)
if len(results) == 0:
return list()
min_fare = min([r['one_direction_cost'] for r in results])
return_results = [
r for r in results if r['one_direction_cost'] == min_fare]
return return_results
def argmax_fare(argument):
results = fare(argument)
max_fare = max([r['one_direction_cost'] for r in results])
return_results = [
r for r in results if r['one_direction_cost'] == max_fare]
return return_results
def argmax_capacity(argument):
results = capacity(argument)
max_capacity = max([r['capacity'] for r in results])
return_results = [
r for r in results if r['capacity'] == max_capacity]
return return_results
def argmin_capacity(argument):
results = capacity(argument)
min_capacity = min([r['capacity'] for r in results])
return_results = [
r for r in results if r['capacity'] == min_capacity]
return return_results
def sum_capacity(argument):
results = capacity(argument)
total_capacity = sum([r['capacity'] for r in results])
return total_capacity
def sum_stops(argument):
results = stops(argument)
total_stops = sum([r['stops'] for r in results])
return total_stops
def argmax_stops(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT flight_id, stops FROM flight WHERE flight_id IN %s" % (flight_id)
results = get_result(sql)
max_stops = max([r['stops'] for r in results])
return_results = [
r for r in results if r['stops'] == max_stops]
return return_results
def argmin_stops(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
flight_id = "(%s)" % ','.join(['"%s"' % e['flight_id'] for e in entities])
sql = "SELECT DISTINCT flight_id, stops FROM flight WHERE flight_id IN %s" % (
flight_id)
results = get_result(sql)
min_stops = min([r['stops'] for r in results])
return_results = [
r for r in results if r['stops'] == min_stops]
return return_results
def argmin_miles_distant_2(argument):
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
assert 'airport_code' in entities[0]
key = 'airport_code'
values = "(%s)" % ','.join(
['"%s"' % e[key] for e in entities])
sql = "SELECT airport_service.miles_distant, airport_service.airport_code FROM airport_service JOIN city ON city.city_code = airport_service.city_code WHERE airport_service.airport_code IN %s AND airport_service.miles_distant > 0 ORDER BY airport_service.miles_distant ASC LIMIT 1" % values
results = get_result(sql)
return results
def argmin_time_elapsed(argument):
results = time_elapsed(argument)
min_time = min([r['time_elapsed'] for r in results])
return_results = [
r for r in results if r['time_elapsed'] == min_time]
return return_results
def argmax_count(argument):
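"""
Among the given airlines, return the airline_code operating the most
distinct flights.
"""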
if isinstance(argument, dict):
entities = [argument]
elif isinstance(argument, list):
entities = argument
else:
raise Exception("Not Supported Argument Type", argument)
if len(entities) == 0:
return list()
assert 'airline_code' in entities[0]
key = 'airline_code'
values = "(%s)" % ','.join(
['"%s"' % e[key] for e in entities])
sql = "SELECT airline_code FROM flight WHERE airline_code IN %s GROUP BY airline_code order by count(DISTINCT flight_id) LIMIT 1" % values
results = get_result(sql)
return results
def equals(argument_1, argument_2):
if isinstance(argument_1, list):
entities_1 = argument_1
elif isinstance(argument_1, dict):
entities_1 = [argument_1]
if isinstance(argument_2, list):
entities_2 = argument_2
elif isinstance(argument_2, dict):
entities_2 = [argument_2]
for e1 in entities_1:
is_found = False
for e2 in entities_2:
is_match = True
for k, v in e1.items():
if k not in e2 or e2[k].lower() != v.lower():
is_match = False
if is_match:
is_found = True
break
if not is_found:
return False
return True
def named_1(values):
return values
if __name__ == '__main__':
values = answer(argmin_capacity(aircraft(intersection(not_(turboprop(
aircraft_all())), larger_than_capacity_2(turboprop(aircraft_all()))))))
print(values)
data = list()
with open('../../../data/atis/atis_funql_train.tsv', 'r') as f:
for line in f:
line = line.strip()
data.append(line.split('\t'))
for idx, (question, funql) in enumerate(data):
print(idx)
print(question)
print(funql)
expression = transform(funql)
print(expression)
results = eval(expression)
print(results)
print('====\n\n')
| 35.650998 | 399 | 0.65446 | 10,838 | 80,393 | 4.621332 | 0.029433 | 0.042327 | 0.028032 | 0.035659 | 0.880845 | 0.850278 | 0.830751 | 0.806653 | 0.787306 | 0.76762 | 0 | 0.016784 | 0.243342 | 80,393 | 2,254 | 400 | 35.666815 | 0.806592 | 0.007277 | 0 | 0.690022 | 0 | 0.043161 | 0.31536 | 0.056856 | 0 | 0 | 0 | 0 | 0.011211 | 1 | 0.08352 | false | 0.001121 | 0.002242 | 0.003924 | 0.201233 | 0.006726 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8fa50f2fa764cce220a09a3dc58d14516a09f779 | 6,735 | py | Python | cinder/volume/drivers/nec/volume.py | lightsey/cinder | e03d68e42e57a63f8d0f3e177fb4287290612b24 | [
"Apache-2.0"
] | 571 | 2015-01-01T17:47:26.000Z | 2022-03-23T07:46:36.000Z | cinder/volume/drivers/nec/volume.py | lightsey/cinder | e03d68e42e57a63f8d0f3e177fb4287290612b24 | [
"Apache-2.0"
] | 37 | 2015-01-22T23:27:04.000Z | 2021-02-05T16:38:48.000Z | cinder/volume/drivers/nec/volume.py | lightsey/cinder | e03d68e42e57a63f8d0f3e177fb4287290612b24 | [
"Apache-2.0"
] | 841 | 2015-01-04T17:17:11.000Z | 2022-03-31T12:06:51.000Z | #
# Copyright (c) 2016 NEC Corporation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Drivers for M-Series Storage."""
from cinder import interface
from cinder.volume import driver
from cinder.volume.drivers.nec import volume_common
from cinder.volume.drivers.nec import volume_helper
from cinder.zonemanager import utils as fczm_utils
@interface.volumedriver
class MStorageISCSIDriver(volume_helper.MStorageDSVDriver,
driver.ISCSIDriver):
"""M-Series Storage Snapshot iSCSI Driver.
.. code-block:: none
Version history:
1.8.1 - First open source driver version.
1.8.2 - Code refactoring.
1.9.1 - Support optimal path for non-disruptive backup.
1.9.2 - Support manage/unmanage and manage/unmanage snapshot.
Delete an unused configuration
parameter (ldset_controller_node_name).
Fixed bug #1705001: driver fails to start.
1.10.1 - Support automatic configuration of SAN access control.
Fixed bug #1753375: SAN access remains permitted on the
source node.
1.10.2 - Delete max volumes per pool limit.
1.10.3 - Add faster clone status check.
Fixed bug #1777385: driver removed access permission from
                     the destination node after live-migration.
Fixed bug #1778669: LUNs of detached volumes are never reused.
1.11.1 - Add support python 3.
Add support for multi-attach.
Add support of more than 4 iSCSI portals for a node.
Add support to revert a volume to a snapshot.
Add support storage assist retype and fixed bug #1838955:
a volume in NEC Storage was left undeleted when the volume
was retyped to another storage.
"""
VERSION = '1.11.1'
CI_WIKI_NAME = 'NEC_Cinder_CI'
def __init__(self, *args, **kwargs):
super(MStorageISCSIDriver, self).__init__(*args, **kwargs)
self._set_config(self.configuration, self.host,
self.__class__.__name__)
@staticmethod
def get_driver_options():
return volume_common.mstorage_opts
def ensure_export(self, context, volume):
pass
def get_volume_stats(self, refresh=False):
return self.iscsi_get_volume_stats(refresh)
def initialize_connection(self, volume, connector):
return self.iscsi_initialize_connection(volume, connector)
def terminate_connection(self, volume, connector, **kwargs):
return self.iscsi_terminate_connection(volume, connector)
def initialize_connection_snapshot(self, snapshot, connector, **kwargs):
return self.iscsi_initialize_connection_snapshot(snapshot,
connector,
**kwargs)
def terminate_connection_snapshot(self, snapshot, connector, **kwargs):
return self.iscsi_terminate_connection_snapshot(snapshot,
connector,
**kwargs)
@interface.volumedriver
class MStorageFCDriver(volume_helper.MStorageDSVDriver,
driver.FibreChannelDriver):
"""M-Series Storage Snapshot FC Driver.
.. code-block:: none
Version history:
1.8.1 - First open source driver version.
1.8.2 - Code refactoring.
1.9.1 - Support optimal path for non-disruptive backup.
1.9.2 - Support manage/unmanage and manage/unmanage snapshot.
Delete an unused configuration
parameter (ldset_controller_node_name).
Fixed bug #1705001: driver fails to start.
1.10.1 - Support automatic configuration of SAN access control.
Fixed bug #1753375: SAN access remains permitted on the
source node.
1.10.2 - Delete max volumes per pool limit.
1.10.3 - Add faster clone status check.
Fixed bug #1777385: driver removed access permission from
                     the destination node after live-migration.
Fixed bug #1778669: LUNs of detached volumes are never reused.
1.11.1 - Add support python 3.
Add support for multi-attach.
Add support of more than 4 iSCSI portals for a node.
Add support to revert a volume to a snapshot.
Add support storage assist retype and fixed bug #1838955:
a volume in NEC Storage was left undeleted when the volume
was retyped to another storage.
"""
VERSION = '1.11.1'
CI_WIKI_NAME = 'NEC_Cinder_CI'
def __init__(self, *args, **kwargs):
super(MStorageFCDriver, self).__init__(*args, **kwargs)
self._set_config(self.configuration, self.host,
self.__class__.__name__)
@staticmethod
def get_driver_options():
return volume_common.mstorage_opts
def ensure_export(self, context, volume):
pass
def get_volume_stats(self, refresh=False):
return self.fc_get_volume_stats(refresh)
def initialize_connection(self, volume, connector):
conn_info = self.fc_initialize_connection(volume, connector)
fczm_utils.add_fc_zone(conn_info)
return conn_info
def terminate_connection(self, volume, connector, **kwargs):
conn_info = self.fc_terminate_connection(volume, connector)
fczm_utils.remove_fc_zone(conn_info)
return conn_info
def initialize_connection_snapshot(self, snapshot, connector, **kwargs):
return self.fc_initialize_connection_snapshot(snapshot,
connector,
**kwargs)
def terminate_connection_snapshot(self, snapshot, connector, **kwargs):
return self.fc_terminate_connection_snapshot(snapshot,
connector,
**kwargs)
| 41.067073 | 79 | 0.624202 | 776 | 6,735 | 5.264175 | 0.265464 | 0.019584 | 0.045043 | 0.0306 | 0.765483 | 0.740269 | 0.720441 | 0.666585 | 0.652142 | 0.648715 | 0 | 0.031472 | 0.31121 | 6,735 | 163 | 80 | 41.319018 | 0.849105 | 0.47513 | 0 | 0.646154 | 0 | 0 | 0.011505 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.246154 | false | 0.030769 | 0.076923 | 0.153846 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
8fcd33e9c77adb6c1d1e0671fb6b4e5ceb626528 | 30 | py | Python | moelog_main/moelog/__init__.py | moe001/moelog | b0ba6f287e39b5d9fa8ac3781212fed4cbe8d428 | [
"MIT"
] | null | null | null | moelog_main/moelog/__init__.py | moe001/moelog | b0ba6f287e39b5d9fa8ac3781212fed4cbe8d428 | [
"MIT"
] | null | null | null | moelog_main/moelog/__init__.py | moe001/moelog | b0ba6f287e39b5d9fa8ac3781212fed4cbe8d428 | [
"MIT"
] | null | null | null | from .__main import get_logger | 30 | 30 | 0.866667 | 5 | 30 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8fe2c1c5c025900272b0a546b451e2da94b6aae8 | 147 | py | Python | deckparser/__init__.py | venidera/deckparser | 88e9e141cb7ef2cdea2994f7664157d9eae6ec4f | [
"Apache-2.0"
] | 13 | 2018-07-26T20:18:30.000Z | 2022-03-18T07:18:34.000Z | deckparser/__init__.py | venidera/deckparser | 88e9e141cb7ef2cdea2994f7664157d9eae6ec4f | [
"Apache-2.0"
] | null | null | null | deckparser/__init__.py | venidera/deckparser | 88e9e141cb7ef2cdea2994f7664157d9eae6ec4f | [
"Apache-2.0"
] | 3 | 2021-04-23T21:46:59.000Z | 2021-08-30T11:24:07.000Z | from deckparser.newave2dicts import newave2dicts
from deckparser.decomp2dicts import decomp2dicts
from deckparser.suishi2dicts import suishi2dicts
| 36.75 | 48 | 0.897959 | 15 | 147 | 8.8 | 0.4 | 0.318182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044444 | 0.081633 | 147 | 3 | 49 | 49 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8fee364c92f116fed2af18003f0d5f9dd86ffcf7 | 240 | py | Python | tests/testenvironment.py | sujaysundar/FBLAQuiz | 26ff86eb5e4d8ea22c16203ce447d2be11a9ff5e | [
"MIT"
] | null | null | null | tests/testenvironment.py | sujaysundar/FBLAQuiz | 26ff86eb5e4d8ea22c16203ce447d2be11a9ff5e | [
"MIT"
] | 1 | 2021-03-15T17:55:26.000Z | 2021-03-15T21:39:11.000Z | tests/testenvironment.py | sujaysundar/FBLAQuiz | 26ff86eb5e4d8ea22c16203ce447d2be11a9ff5e | [
"MIT"
] | null | null | null | #Test to check if the environment is set to execute a Python program. If the below statement prints on the console, it indicates the environment is set correctly to run a Python program.
print('Python environment is setup correctly.') | 80 | 188 | 0.783333 | 39 | 240 | 4.820513 | 0.589744 | 0.207447 | 0.170213 | 0.202128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179167 | 240 | 3 | 189 | 80 | 0.954315 | 0.770833 | 0 | 0 | 0 | 0 | 0.730769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
8ffbacc63f9e112bc2a9d4c661425da1249ffb08 | 93 | py | Python | notebooks/Python3-Language/07-Modules and Packages/MainPackage/SubPackage/subscript.py | binakot/Python3-Course | c555fc7376c45f4b2dedb6d57363c0070831c1e1 | [
"MIT"
] | null | null | null | notebooks/Python3-Language/07-Modules and Packages/MainPackage/SubPackage/subscript.py | binakot/Python3-Course | c555fc7376c45f4b2dedb6d57363c0070831c1e1 | [
"MIT"
] | null | null | null | notebooks/Python3-Language/07-Modules and Packages/MainPackage/SubPackage/subscript.py | binakot/Python3-Course | c555fc7376c45f4b2dedb6d57363c0070831c1e1 | [
"MIT"
] | null | null | null | def hello_subscript():
print('Hello subscript')
def hello_indeed():
print('Indeed Hello') | 18.6 | 25 | 0.731183 | 12 | 93 | 5.5 | 0.416667 | 0.242424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11828 | 93 | 5 | 26 | 18.6 | 0.804878 | 0 | 0 | 0 | 0 | 0 | 0.287234 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
8915f44dbe677bf86c448f3e8b4a603af6dd4dbb | 2,538 | py | Python | airfoil_db/exceptions.py | usuaero/AirfoilDatabase | 74addddf501b56d65151bf8cf3394897c6f8dd66 | [
"MIT"
] | 14 | 2020-04-30T18:47:32.000Z | 2022-02-13T19:08:35.000Z | airfoil_db/exceptions.py | usuaero/AirfoilDatabase | 74addddf501b56d65151bf8cf3394897c6f8dd66 | [
"MIT"
] | 4 | 2020-04-03T20:20:32.000Z | 2020-10-11T15:51:49.000Z | airfoil_db/exceptions.py | usuaero/AirfoilDatabase | 74addddf501b56d65151bf8cf3394897c6f8dd66 | [
"MIT"
] | 8 | 2019-11-05T06:03:43.000Z | 2022-02-28T17:31:40.000Z | """Custom exceptions used in the airfoil class."""
class DatabaseBoundsError(Exception):
"""An exception thrown when the inputs to the airfoil database fall outside the database bounds.
Attributes
----------
airfoil : str
The name of the airfoil for which this exception occurred.
inputs_dict : dict
The arguments passed to the airfoil.
exception_indices : list
The indices at which the arguments fell outside the database bounds.
message : str
A message about the error.
"""
def __init__(self, airfoil, exception_indices, inputs_dict):
self.airfoil = airfoil
self.exception_indices = exception_indices
self.inputs_dict = inputs_dict
self.message = "The inputs to the airfoil database fell outside the bounds of available data."
super().__init__(self.message)
def __str__(self):
return self.message+" Airfoil: {0}".format(self.airfoil)
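# A minimal usage sketch (illustrative only; `airfoil.get_CL` and its arguments
# are hypothetical, not part of this module):
#
#     try:
#         coefs = airfoil.get_CL(alpha, Rey, Mach)
#     except DatabaseBoundsError as err:
#         bad_rows = err.exception_indices   # indices of the out-of-bounds inputs
#         print(err)                         # message ends with "Airfoil: <name>"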
class PolyFitBoundsError(Exception):
"""An exception thrown when the inputs to the airfoil polynomial fits fall outside the bounds.
Attributes
----------
airfoil : str
The name of the airfoil for which this exception occurred.
inputs_dict : dict
The arguments passed to the airfoil.
exception_indices : list
The indices at which the arguments fell outside the bounds.
message : str
A message about the error.
"""
def __init__(self, airfoil, exception_indices, inputs_dict):
self.airfoil = airfoil
self.exception_indices = exception_indices
self.inputs_dict = inputs_dict
self.message = "The inputs to the airfoil polynomial fits fell outside the bounds of available data."
super().__init__(self.message)
def __str__(self):
return self.message+" Airfoil: {0}".format(self.airfoil)
class CamberSolverNotConvergedError(Exception):
"""An exception thrown when the camber line solver fails to converge.
Attributes
----------
airfoil : str
The name of the airfoil for which this exception occurred.
final_error : float
The final approximate error of the solver.
message : str
A message about the error.
"""
def __init__(self, airfoil, final_error):
self.airfoil = airfoil
self.final_error = final_error
self.message = "The camber line solver for {0} failed to converge. Final error: {1}".format(self.airfoil, self.final_error)
super().__init__(self.message) | 30.95122 | 131 | 0.675335 | 313 | 2,538 | 5.306709 | 0.191693 | 0.060205 | 0.043347 | 0.033715 | 0.760987 | 0.760987 | 0.731487 | 0.71463 | 0.71463 | 0.71463 | 0 | 0.002096 | 0.248227 | 2,538 | 82 | 132 | 30.95122 | 0.868449 | 0.438928 | 0 | 0.666667 | 0 | 0 | 0.198904 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.208333 | false | 0 | 0 | 0.083333 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
64ea4a1f86094096c96af6e5e678266870203f3d | 4,794 | py | Python | tests/marginalization_test.py | LucasFidon/label-set-loss-functions | dffa112b9789c83eaa6b05103b94e55e36738f7d | [
"BSD-3-Clause"
] | 3 | 2021-09-28T10:17:36.000Z | 2022-01-30T01:08:08.000Z | tests/marginalization_test.py | LucasFidon/label-set-loss-functions | dffa112b9789c83eaa6b05103b94e55e36738f7d | [
"BSD-3-Clause"
] | null | null | null | tests/marginalization_test.py | LucasFidon/label-set-loss-functions | dffa112b9789c83eaa6b05103b94e55e36738f7d | [
"BSD-3-Clause"
] | null | null | null | import unittest
import torch
import numpy as np
from label_set_loss_functions.convertor import softmax_marginalize, log_softmax_marginalize
class TestMarginalizationFunctions(unittest.TestCase):
def test_softmax_marginalize(self):
num_classes = 3 # labels 0 to 2
labels_superset_map = {
3: [1, 2],
4: [0, 1, 2],
}
# Define 1d example
target = torch.tensor( # shape: (batch size, num voxels)
[[0, 0, 0, 1, 2, 0, 0, 3, 4, 0]]
)
# First score map for a very good segmentation
score_map1 = 100. * torch.tensor( # shape: (batch size, num classes, num voxels)
[[
[1., 1., 1., 0., 0., 1., 1., 0., 1., 1.],
[0., 0., 0., 1., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]
]]
)
# Second score map for a very good segmentation (equivalent to the first one)
score_map2 = 100. * torch.tensor( # shape: (batch size, num classes, num voxels)
[[
[1., 1., 1., 0., 0., 1., 1., 0., 0., 1.],
[0., 0., 0., 1., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 1., 0., 0., 1., 0.01, 0.]
]]
)
# Expected output probabilities (same for the two previous examples)
expected_out = np.array( # shape: (batch size, num classes, num voxels)
[
[1., 1., 1., 0., 0., 1., 1., 0., 1. / len(labels_superset_map[4]), 1.],
[0., 0., 0., 1., 0., 0., 0., 1. / len(labels_superset_map[3]), 1. / len(labels_superset_map[4]), 0.],
[0., 0., 0., 0., 1., 0., 0., 1. / len(labels_superset_map[3]), 1. / len(labels_superset_map[4]), 0.]
]
)
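        # For fully labelled voxels the marginalized target stays one-hot; for a
        # voxel whose target is a label set (3 -> {1, 2}, 4 -> {0, 1, 2}) the mass
        # is split uniformly over the set: 1/2 on classes 1 and 2 for label 3, and
        # 1/3 on every class for label 4, exactly as encoded in expected_out above.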
# Compute the softmax + marginalization
marg_proba1, marg_target = softmax_marginalize(
flat_input=score_map1, flat_target=target, labels_superset_map=labels_superset_map)
self.assertAlmostEqual(np.linalg.norm(marg_target.cpu().numpy() - expected_out), 0.)
self.assertAlmostEqual(np.linalg.norm(marg_proba1.cpu().numpy() - expected_out), 0.)
marg_proba2, _ = softmax_marginalize(
flat_input=score_map2, flat_target=target, labels_superset_map=labels_superset_map)
self.assertAlmostEqual(np.linalg.norm(marg_proba2.cpu().numpy() - expected_out), 0.)
def test_log_softmax_marginalize(self):
num_classes = 3 # labels 0 to 2
labels_superset_map = {
3: [1, 2],
4: [0, 1, 2],
}
# Define 1d example
target = torch.tensor( # shape: (batch size, num voxels)
[[0, 0, 0, 1, 2, 0, 0, 3, 4, 0]]
)
# First score map for a very good segmentation
score_map1 = 100. * torch.tensor( # shape: (batch size, num classes, num voxels)
[[
[1., 1., 1., 0., 0., 1., 1., 0., 1., 1.],
[0., 0., 0., 1., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]
]]
)
# Second score map for a very good segmentation (equivalent to the first one)
score_map2 = 100. * torch.tensor( # shape: (batch size, num classes, num voxels)
[[
[1., 1., 1., 0., 0., 1., 1., 0., 0., 1.],
[0., 0., 0., 1., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 1., 0., 0., 1., 0.01, 0.]
]]
)
# Expected output probabilities (same for the two previous examples)
expected_out = np.array( # shape: (batch size, num classes, num voxels)
[
[1., 1., 1., 0., 0., 1., 1., 0., 1. / len(labels_superset_map[4]), 1.],
[0., 0., 0., 1., 0., 0., 0., 1. / len(labels_superset_map[3]), 1. / len(labels_superset_map[4]), 0.],
[0., 0., 0., 0., 1., 0., 0., 1. / len(labels_superset_map[3]), 1. / len(labels_superset_map[4]), 0.]
]
)
# Compute the softmax + marginalization
log_marg_proba1, marg_target = log_softmax_marginalize(
flat_input=score_map1, flat_target=target, labels_superset_map=labels_superset_map)
marg_proba1 = torch.exp(log_marg_proba1)
self.assertAlmostEqual(np.linalg.norm(marg_target.cpu().numpy() - expected_out), 0.)
self.assertAlmostEqual(np.linalg.norm(marg_proba1.cpu().numpy() - expected_out), 0.)
log_marg_proba2, _ = log_softmax_marginalize(
flat_input=score_map2, flat_target=target, labels_superset_map=labels_superset_map)
marg_proba2 = torch.exp(log_marg_proba2)
self.assertAlmostEqual(np.linalg.norm(marg_proba2.cpu().numpy() - expected_out), 0.)
if __name__ == '__main__':
unittest.main()
| 47.94 | 117 | 0.52816 | 648 | 4,794 | 3.733025 | 0.125 | 0.064489 | 0.052088 | 0.033072 | 0.86482 | 0.86482 | 0.86234 | 0.86234 | 0.86234 | 0.86234 | 0 | 0.08263 | 0.308302 | 4,794 | 99 | 118 | 48.424242 | 0.646864 | 0.177096 | 0 | 0.650602 | 0 | 0 | 0.002039 | 0 | 0 | 0 | 0 | 0 | 0.072289 | 1 | 0.024096 | false | 0 | 0.048193 | 0 | 0.084337 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
56d256ae1f0db6695c62f45cbfc74696bc474aa9 | 179 | py | Python | l5kit/l5kit/configs/__init__.py | stefaniespeichert/l5kit | e7ef272b80d71c5080891b27f478c6d3e001774e | [
"Apache-2.0"
] | null | null | null | l5kit/l5kit/configs/__init__.py | stefaniespeichert/l5kit | e7ef272b80d71c5080891b27f478c6d3e001774e | [
"Apache-2.0"
] | null | null | null | l5kit/l5kit/configs/__init__.py | stefaniespeichert/l5kit | e7ef272b80d71c5080891b27f478c6d3e001774e | [
"Apache-2.0"
] | null | null | null | from .config import config_data_to_config, load_config_data, save_config_data, schema_v4
__all__ = ["schema_v4", "save_config_data", "load_config_data", "config_data_to_config"]
| 44.75 | 88 | 0.821229 | 28 | 179 | 4.535714 | 0.357143 | 0.472441 | 0.188976 | 0.283465 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012121 | 0.078212 | 179 | 3 | 89 | 59.666667 | 0.757576 | 0 | 0 | 0 | 0 | 0 | 0.346369 | 0.117318 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
713e51d3a9b6bd6cf9f6a08dde23d14e11ea05f2 | 40 | py | Python | src/SimpleSurface/__init__.py | dylan-bennett/ImageSurfacePlus | 229d4a4ea3766addea1b2596e994bcb0060fe78f | [
"MIT"
] | null | null | null | src/SimpleSurface/__init__.py | dylan-bennett/ImageSurfacePlus | 229d4a4ea3766addea1b2596e994bcb0060fe78f | [
"MIT"
] | null | null | null | src/SimpleSurface/__init__.py | dylan-bennett/ImageSurfacePlus | 229d4a4ea3766addea1b2596e994bcb0060fe78f | [
"MIT"
] | null | null | null | from .SimpleSurface import SimpleSurface | 40 | 40 | 0.9 | 4 | 40 | 9 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 40 | 1 | 40 | 40 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8531eb54d299989cfec3ebb6b1c0894c08aa0ba4 | 3,115 | py | Python | tests/artist_app/forms.py | moshthepitt/django-vega-admin | 865774e51b3a2c2df81fec1f212acc3bdcea9eaa | [
"MIT"
] | 4 | 2018-11-24T14:46:45.000Z | 2020-12-04T08:49:28.000Z | tests/artist_app/forms.py | moshthepitt/django-vega-admin | 865774e51b3a2c2df81fec1f212acc3bdcea9eaa | [
"MIT"
] | 40 | 2018-11-17T11:34:35.000Z | 2020-06-26T11:27:01.000Z | tests/artist_app/forms.py | moshthepitt/django-vega-admin | 865774e51b3a2c2df81fec1f212acc3bdcea9eaa | [
"MIT"
] | null | null | null | """
Module for vega-admin test forms
"""
from django import forms
from crispy_forms.bootstrap import Field, FormActions
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Layout, Submit
from vega_admin.forms import ListViewSearchForm
from .models import Artist, Song
class ArtistForm(forms.ModelForm):
"""
Artist ModelForm class
"""
class Meta:
model = Artist
fields = ["name"]
def __init__(self, *args, **kwargs):
self.request = kwargs.pop("request", None)
self.vega_extra_kwargs = kwargs.pop("vega_extra_kwargs", dict())
super().__init__(*args, **kwargs)
self.helper = FormHelper()
self.helper.form_tag = True
self.helper.render_required_fields = True
self.helper.form_show_labels = True
self.helper.html5_required = True
self.helper.include_media = False
self.helper.form_id = "artist"
self.helper.layout = Layout(
Field("name"),
FormActions(
Submit("submitBtn", "Submit", css_class="btn-success btn-block")
),
)
class UpdateArtistForm(forms.ModelForm):
"""
Artist ModelForm class with nothing extra
"""
class Meta:
model = Artist
fields = ["name"]
def __init__(self, *args, **kwargs):
self.request = kwargs.pop("request", None)
self.vega_extra_kwargs = kwargs.pop("vega_extra_kwargs", dict())
super().__init__(*args, **kwargs)
self.helper = FormHelper()
self.helper.form_tag = True
self.helper.render_required_fields = True
self.helper.form_show_labels = True
self.helper.html5_required = True
self.helper.include_media = False
self.helper.form_id = "artist-update"
self.helper.layout = Layout(
Field("name"),
FormActions(
Submit("submitBtn", "Submit", css_class="btn-success btn-block")
),
)
class PlainArtistForm(forms.ModelForm):
"""
Artist ModelForm class with nothing extra
"""
class Meta:
model = Artist
fields = ["name"]
class SongForm(forms.ModelForm):
"""
Artist ModelForm class
"""
class Meta:
model = Song
fields = ["name", "artist"]
def __init__(self, *args, **kwargs):
self.request = kwargs.pop("request", None)
self.vega_extra_kwargs = kwargs.pop("vega_extra_kwargs", dict())
super().__init__(*args, **kwargs)
self.helper = FormHelper()
self.helper.form_tag = True
self.helper.render_required_fields = True
self.helper.form_show_labels = True
self.helper.html5_required = True
self.helper.include_media = False
self.helper.form_id = "song"
self.helper.layout = Layout(
Field("name"),
Field("artist"),
FormActions(
Submit("submitBtn", "Submit", css_class="btn-success btn-block")
),
)
class CustomSearchForm(ListViewSearchForm):
"""Custom search form"""
| 27.8125 | 80 | 0.608026 | 337 | 3,115 | 5.421365 | 0.192878 | 0.131363 | 0.091954 | 0.063492 | 0.78763 | 0.78763 | 0.770662 | 0.770662 | 0.72578 | 0.72578 | 0 | 0.001333 | 0.277689 | 3,115 | 111 | 81 | 28.063063 | 0.810667 | 0.058106 | 0 | 0.733333 | 0 | 0 | 0.084935 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.08 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8587bf8be336929623c4cf3c572700cfb8fb4e15 | 38 | py | Python | a2ml/api/auger/impl/exceptions.py | augerai/a2ml | 9d9ce0ac1b51cc81f1cb5ae331c4523131bc6a86 | [
"Apache-2.0"
] | 30 | 2019-07-01T13:23:27.000Z | 2022-03-16T21:19:33.000Z | a2ml/api/auger/impl/exceptions.py | augerai/a2ml | 9d9ce0ac1b51cc81f1cb5ae331c4523131bc6a86 | [
"Apache-2.0"
] | 234 | 2019-07-04T13:56:15.000Z | 2021-11-04T10:12:55.000Z | a2ml/api/auger/impl/exceptions.py | augerai/a2ml | 9d9ce0ac1b51cc81f1cb5ae331c4523131bc6a86 | [
"Apache-2.0"
] | 13 | 2019-07-04T14:00:34.000Z | 2020-07-13T11:18:44.000Z | class AugerException(Exception): pass
| 19 | 37 | 0.842105 | 4 | 38 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
a479b5aee6b6361f23b6e70b655a41e860a5e11a | 27 | py | Python | src/euler_python_package/euler_python/medium/p235.py | wilsonify/euler | 5214b776175e6d76a7c6d8915d0e062d189d9b79 | [
"MIT"
] | null | null | null | src/euler_python_package/euler_python/medium/p235.py | wilsonify/euler | 5214b776175e6d76a7c6d8915d0e062d189d9b79 | [
"MIT"
] | null | null | null | src/euler_python_package/euler_python/medium/p235.py | wilsonify/euler | 5214b776175e6d76a7c6d8915d0e062d189d9b79 | [
"MIT"
] | null | null | null | def problem235():
pass
| 9 | 17 | 0.62963 | 3 | 27 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 0.259259 | 27 | 2 | 18 | 13.5 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
a4996f2f61ec19d372ba844ca1e058cf9c420e27 | 190 | py | Python | surfice.app/Contents/Resources/script/jstartup.py | ningfei/surf-ice | 11a978d922f53abd02c0aa1b6896443f7f1af9e1 | [
"BSD-2-Clause"
] | 59 | 2016-04-28T05:54:56.000Z | 2022-02-08T20:10:32.000Z | surfice.app/Contents/Resources/script/jstartup.py | ningfei/surf-ice | 11a978d922f53abd02c0aa1b6896443f7f1af9e1 | [
"BSD-2-Clause"
] | 30 | 2017-01-08T07:58:33.000Z | 2022-03-15T21:07:50.000Z | surfice.app/Contents/Resources/script/jstartup.py | ningfei/surf-ice | 11a978d922f53abd02c0aa1b6896443f7f1af9e1 | [
"BSD-2-Clause"
] | 15 | 2017-03-03T14:07:52.000Z | 2022-03-21T17:01:14.000Z | import gl
gl.resetdefaults()
gl.meshload('/Users/chris/afni/pial/lh.sphere')
gl.overlayload('/Users/chris/afni/pial/lh.HCP-MMP1.annot')
gl.atlas2node('/Users/chris/afni/pial/mynodes.node')
| 31.666667 | 58 | 0.763158 | 30 | 190 | 4.833333 | 0.566667 | 0.206897 | 0.289655 | 0.372414 | 0.275862 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01087 | 0.031579 | 190 | 5 | 59 | 38 | 0.777174 | 0 | 0 | 0 | 0 | 0 | 0.563158 | 0.563158 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a4a557e6c4ba2bc417550dc9ed9398f4db1dfe1b | 159 | py | Python | pytracer/utility/__init__.py | zjiayao/pyTracer | c2b4ef299ecbdca1c519059488f7cd2438943ee4 | [
"MIT"
] | 9 | 2017-11-20T18:17:27.000Z | 2022-01-27T23:00:31.000Z | pytracer/utility/__init__.py | zjiayao/pyTracer | c2b4ef299ecbdca1c519059488f7cd2438943ee4 | [
"MIT"
] | 4 | 2021-06-08T19:03:51.000Z | 2022-03-11T23:18:44.000Z | pytracer/utility/__init__.py | zjiayao/pyTracer | c2b4ef299ecbdca1c519059488f7cd2438943ee4 | [
"MIT"
] | 1 | 2017-11-20T22:48:01.000Z | 2017-11-20T22:48:01.000Z | """
__init__.py
pytracer.utility package
Created by Jiayao on Aug 13, 2017
"""
from __future__ import absolute_import
from pytracer.utility.utility import *
| 15.9 | 38 | 0.786164 | 22 | 159 | 5.272727 | 0.727273 | 0.258621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043796 | 0.138365 | 159 | 9 | 39 | 17.666667 | 0.80292 | 0.45283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8efca5e68bf561b02920ef8ef4466674561572c4 | 1,763 | py | Python | tests/test_comparision_error_messages.py | pawel-slowik/ipset-country | e14994eb44d977c59c147a81fabc4c3f3cd8c392 | [
"MIT"
] | null | null | null | tests/test_comparision_error_messages.py | pawel-slowik/ipset-country | e14994eb44d977c59c147a81fabc4c3f3cd8c392 | [
"MIT"
] | null | null | null | tests/test_comparision_error_messages.py | pawel-slowik/ipset-country | e14994eb44d977c59c147a81fabc4c3f3cd8c392 | [
"MIT"
] | null | null | null | import ipaddress
from ipset import ComparisionResult, comparision_error_messages
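# Naming convention used in these fixtures: `ipdeny_missing` holds networks that
# RIPEstat reports but IPdeny does not, and `ripestat_missing` the reverse; hence
# the expected messages "present in RIPEstat but not in IPdeny" and
# "present in IPdeny but not in RIPEstat" below.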
def test_ipdeny_missing() -> None:
comparision = ComparisionResult(
common_networks=[ipaddress.IPv4Network("1.1.1.0/24")],
ipdeny_missing=[ipaddress.IPv4Network("2.2.2.0/24")],
ripestat_missing=[],
differences_count=1,
)
error_messages = tuple(comparision_error_messages(comparision))
assert len(error_messages) == 2
assert "total number of differences: 1" in error_messages
assert "networks present in RIPEstat but not in IPdeny: 2.2.2.0/24" in error_messages
def test_ripestat_missing() -> None:
comparision = ComparisionResult(
common_networks=[ipaddress.IPv4Network("1.1.1.0/24")],
ipdeny_missing=[],
ripestat_missing=[ipaddress.IPv4Network("3.3.3.0/24")],
differences_count=1,
)
error_messages = tuple(comparision_error_messages(comparision))
assert len(error_messages) == 2
assert "total number of differences: 1" in error_messages
assert "networks present in IPdeny but not in RIPEstat: 3.3.3.0/24" in error_messages
def test_both_missing() -> None:
comparision = ComparisionResult(
common_networks=[ipaddress.IPv4Network("1.1.1.0/24")],
ipdeny_missing=[ipaddress.IPv4Network("2.2.2.0/24")],
ripestat_missing=[ipaddress.IPv4Network("3.3.3.0/24")],
differences_count=2,
)
error_messages = tuple(comparision_error_messages(comparision))
assert len(error_messages) == 3
assert "networks present in RIPEstat but not in IPdeny: 2.2.2.0/24" in error_messages
assert "networks present in IPdeny but not in RIPEstat: 3.3.3.0/24" in error_messages
assert "total number of differences: 2" in error_messages
| 41 | 89 | 0.711855 | 237 | 1,763 | 5.130802 | 0.147679 | 0.181743 | 0.086349 | 0.013158 | 0.90625 | 0.886513 | 0.881579 | 0.870066 | 0.870066 | 0.870066 | 0 | 0.056591 | 0.178106 | 1,763 | 42 | 90 | 41.97619 | 0.782609 | 0 | 0 | 0.638889 | 0 | 0 | 0.222348 | 0 | 0 | 0 | 0 | 0 | 0.277778 | 1 | 0.083333 | false | 0 | 0.055556 | 0 | 0.138889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f10eed758d65b7fc6da7151a2ce36acd72e30248 | 12,676 | py | Python | raytracing/tests/testsSpecialtyLenses.py | gregsadetsky/RayTracing | 3d11ed91014a47bddc797495ca2af059005e810d | [
"MIT"
] | 1 | 2021-04-20T09:38:05.000Z | 2021-04-20T09:38:05.000Z | raytracing/tests/testsSpecialtyLenses.py | gregsadetsky/RayTracing | 3d11ed91014a47bddc797495ca2af059005e810d | [
"MIT"
] | null | null | null | raytracing/tests/testsSpecialtyLenses.py | gregsadetsky/RayTracing | 3d11ed91014a47bddc797495ca2af059005e810d | [
"MIT"
] | 2 | 2021-04-20T09:38:06.000Z | 2022-02-20T23:45:18.000Z | import envtest # modifies path
import matplotlib.pyplot as plt
from raytracing import *
class TestAchromatDoubletLens(envtest.RaytracingTestCase):
def testInit(self):
achromat = AchromatDoubletLens(fa=-100.0, fb=-103.6, R1=-52.0, R2=49.9, R3=600.0, tc1=2.0, tc2=4.0, te=7.7,
n1=N_BAK4.n(0.5876), n2=SF5.n(0.5876), diameter=25.4, url='https://www.test.com',
label="testInit Doublet")
self.assertIsNotNone(achromat)
def testAchromatShift(self):
achromat855 = AchromatDoubletLens(fa=200.0, fb=193.2, R1=134.0, R2=-109.2, R3=-515.2,
tc1=8.2, tc2=5.0, te=10.1, mat1=N_LAK22, mat2=N_SF6HT,
diameter=50.8,
url='https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=259',
label="AC508-200-B", wavelengthRef=0.855)
achromat700 = AchromatDoubletLens(fa=200.0, fb=193.2, R1=134.0, R2=-109.2, R3=-515.2,
tc1=8.2, tc2=5.0, te=10.1, mat1=N_LAK22, mat2=N_SF6HT,
diameter=50.8,
url='https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=259',
label="AC508-200-B", wavelength=0.700, wavelengthRef=0.855)
thorlabs_focal_value_855 = 0
thorlabs_focal_value_700 = 0.010 # mm
diff = abs(thorlabs_focal_value_855 - thorlabs_focal_value_700)
diffFocal = abs((-1 / achromat855.C) - (-1 / achromat700.C))
print(-1 / achromat855.C, -1 / achromat700.C)
print(diff, diffFocal)
self.assertAlmostEqual(diff, diffFocal, places=3)
def testWarnThickness(self):
with self.assertWarns(UserWarning):
achromat = AchromatDoubletLens(fa=125.0, fb=122.0, R1=77.6, R2=-55.9, R3=-160.8, tc1=4.0, tc2=2.8, te=5.0,
n1=N_BK7.n(0.5876), n2=N_SF5.n(0.5876), diameter=25.4,
url='https://www.test.com', label="testThickness Doublet")
def testWarnBackFocalLength(self):
with self.assertWarns(UserWarning):
achromat = AchromatDoubletLens(fa=30.0, fb=22.9, R1=20.89, R2=-16.73, R3=-79.8, tc1=12, tc2=2.0, te=8.8,
n1=N_BAF10.n(0.5876), n2=N_SF6HT.n(0.5876), diameter=25.4,
url='https://www.test.com', label="TestBackFocalLength Doublet")
def testWarnEffectiveFocalLength(self):
with self.assertWarns(UserWarning):
achromat = AchromatDoubletLens(fa=150.00, fb=126.46, R1=92.05, R2=-72.85, R3=-305.87, tc1=23.2, tc2=23.1,
te=36.01, n1=1.6700, n2=1.8467, diameter=75, url="https://www.test.com",
label="TestEffectiveFocalLength Doublet")
def testPointsOfInterest(self):
z = 10
achromat = AchromatDoubletLens(fa=-100.0, fb=-103.6, R1=-52.0, R2=49.9, R3=600.0, tc1=2.0, tc2=4.0, te=7.7,
n1=N_BAK4.n(0.5876), n2=SF5.n(0.5876), diameter=25.4, url='https://www.test.com',
label="testPoI Doublet")
points = achromat.pointsOfInterest(z)
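        # Expected values follow from the lens's ray transfer (ABCD) matrix:
        # f = -1/C is the effective focal length, the front and back principal
        # planes sit at z - (1 - D)/C and z + L + (1 - A)/C respectively, and the
        # focal points lie one focal length outside these planes (ff = p1 - f,
        # fb = p2 + f).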
f = -1.0 / achromat.C
p1 = z - (1 - achromat.D) / achromat.C
ff = p1 - f
p2 = z + achromat.L + (1 - achromat.A) / achromat.C
fb = p2 + f
self.assertIsNotNone(achromat)
self.assertAlmostEqual(points[0]['z'], ff)
self.assertAlmostEqual(points[1]['z'], fb)
class TestAchromatDoubletLensSubclasses(envtest.RaytracingTestCase):
def setUp(self) -> None:
self.subclasses = AchromatDoubletLens.__subclasses__()
def testSubclassesInit(self):
fails = []
for subclass in self.subclasses:
achromat = subclass()
try:
self.assertIsNotNone(achromat)
except AssertionError:
fails.append('{} not properly initiated.'.format(subclass.__name__))
self.assertEqual([], fails)
def testPointsOfInterest(self):
fails = []
z = 0
for subclass in self.subclasses:
achromat = subclass()
points = achromat.pointsOfInterest(z)
f = -1.0 / achromat.C
p1 = z - (1 - achromat.D) / achromat.C
ff = p1 - f
p2 = z + achromat.L + (1 - achromat.A) / achromat.C
fb = p2 + f
try:
self.assertAlmostEqual(points[0]['z'], ff)
self.assertAlmostEqual(points[1]['z'], fb)
except AssertionError:
fails.append('{} has the wrong points of interest.'.format(subclass.__name__))
self.assertEqual([], fails)
class TestSingletLens(envtest.RaytracingTestCase):
def testInit(self):
achromat = SingletLens(f=75.0, fb=72.0, R1=38.6, R2=100000, tc=4.1, te=2.0, n=N_BK7.n(0.5876),
diameter=25.4, url='https://www.test.com', label="TestInit Singlet")
self.assertIsNotNone(achromat)
def testWarnThickness(self):
with self.assertWarns(UserWarning):
achromat = SingletLens(f=63.52, fb=62.41, R1=77.6, R2=-55.9, tc=4.0, te=5.0, n=N_BK7.n(0.5876),
diameter=25.4, url='https://www.test.com', label="testThickness Singlet")
def testWarnBackFocalLength(self):
with self.assertWarns(UserWarning):
achromat = SingletLens(f=15.9, fb=22.9, R1=20.89, R2=-16.73, tc=12, te=1.9, n=N_BAF10.n(0.5876),
diameter=25.4, url='https://www.test.com',
label="TestBackFocalLength Singlet")
def testWarnEffectiveFocalLength(self):
with self.assertWarns(UserWarning):
achromat = SingletLens(f=150.00, fb=57.82, R1=92.05, R2=-72.85, tc=23.2, te=4.8, n=1.6700,
diameter=75, url="https://www.test.com",
label="TestEffectiveFocalLength Singlet")
class TestSingletLensSubclasses(envtest.RaytracingTestCase):
def setUp(self) -> None:
self.subclasses = SingletLens.__subclasses__()
def testSubclassesInit(self):
fails = []
for subclass in self.subclasses:
achromat = subclass()
try:
self.assertIsNotNone(achromat)
except AssertionError:
fails.append('{} not properly initiated.'.format(subclass.__name__))
self.assertEqual([], fails)
def testPointsOfInterest(self):
fails = []
z = 0
for subclass in self.subclasses:
achromat = subclass()
points = achromat.pointsOfInterest(z)
f = -1.0 / achromat.C
p1 = z - (1 - achromat.D) / achromat.C
ff = p1 - f
p2 = z + achromat.L + (1 - achromat.A) / achromat.C
fb = p2 + f
try:
self.assertAlmostEqual(points[0]['z'], ff)
self.assertAlmostEqual(points[1]['z'], fb)
except AssertionError:
fails.append('{} has the wrong points of interest.'.format(subclass.__name__))
self.assertEqual([], fails)
class TestObjectives(envtest.RaytracingTestCase):
def testInit(self):
objective = Objective(f=180 / 40, NA=0.8, focusToFocusLength=40, backAperture=7, workingDistance=2,
magnification=40, fieldNumber=22, label='TestInit Objective', url="https://www.test.com")
self.assertIsNotNone(objective)
def testWarnNotFullyTested(self):
Objective.warningDisplayed = False
with self.assertWarns(FutureWarning):
objective = Objective(f=180 / 40, NA=0.8, focusToFocusLength=40, backAperture=7, workingDistance=2,
magnification=40, fieldNumber=22, label='TestWarn Objective', url="https://www.test.com")
self.assertTrue(Objective.warningDisplayed)
def testFlipOrientation(self):
original = Objective(f=180 / 40, NA=0.8, focusToFocusLength=40, backAperture=7, workingDistance=2,
magnification=40, fieldNumber=22, label='TestFlip Objective', url="https://www.test.com")
flipped = Objective(f=180 / 40, NA=0.8, focusToFocusLength=40, backAperture=7, workingDistance=2,
magnification=40, fieldNumber=22, label='TestFlip Objective', url="https://www.test.com")
flipped.flipOrientation()
self.assertFalse(original.isFlipped)
self.assertTrue(flipped.isFlipped)
self.assertNotEqual(original.frontVertex, flipped.frontVertex)
self.assertNotEqual(original.backVertex, flipped.backVertex)
def testPointsOfInterest(self):
z = 10
objective = Objective(f=180/40, NA=0.8, focusToFocusLength=40, backAperture=7, workingDistance=2,
magnification=40, fieldNumber=22, label='TestPoI Objective', url="https://www.test.com")
points = objective.pointsOfInterest(z)
ff = z + objective.focusToFocusLength
fb = z
self.assertAlmostEqual(points[0]['z'], fb)
self.assertAlmostEqual(points[1]['z'], ff)
def testPointsOfInterestFlipped(self):
z = 10
objective = Objective(f=180/40, NA=0.8, focusToFocusLength=40, backAperture=7, workingDistance=2,
magnification=40, fieldNumber=22, label='TestPoI Objective', url="https://www.test.com")
points = objective.pointsOfInterest(z)
ff = z + objective.focusToFocusLength
fb = z
self.assertAlmostEqual(points[0]['z'], fb)
self.assertAlmostEqual(points[1]['z'], ff)
objective.flipOrientation()
points = objective.pointsOfInterest(z)
self.assertAlmostEqual(points[0]['z'], ff)
self.assertAlmostEqual(points[1]['z'], fb)
class TestObjectivesSubclasses(envtest.RaytracingTestCase):
def setUp(self) -> None:
self.subclasses = Objective.__subclasses__()
def testSubclassesInit(self):
fails = []
for subclass in self.subclasses:
objective = subclass()
try:
self.assertIsNotNone(objective)
except AssertionError:
fails.append('{} not properly initiated.'.format(subclass.__name__))
self.assertEqual([], fails)
def testFlipOrientation(self):
fails = []
for subclass in self.subclasses:
original = subclass()
flipped = subclass().flipOrientation()
try:
self.assertFalse(original.isFlipped)
self.assertTrue(flipped.isFlipped)
self.assertNotEqual(original.frontVertex, flipped.frontVertex)
self.assertNotEqual(original.backVertex, flipped.backVertex)
except AssertionError:
fails.append('{} was not properly flipped.'.format(subclass.__name__))
self.assertEqual([], fails)
def testPointsOfInterest(self):
fails = []
z = 10
for subclass in self.subclasses:
objective = subclass()
points = objective.pointsOfInterest(z)
ff = z + objective.focusToFocusLength
fb = z
try:
self.assertAlmostEqual(points[0]['z'], fb)
self.assertAlmostEqual(points[1]['z'], ff)
except AssertionError:
fails.append('{} has the wrong point of interest.'.format(subclass.__name__))
self.assertEqual([], fails)
def testPointsOfInterestFlipped(self):
fails = []
z = 10
for subclass in self.subclasses:
objective = subclass()
points = objective.pointsOfInterest(z)
ff = z + objective.focusToFocusLength
fb = z
try:
self.assertAlmostEqual(points[0]['z'], fb)
self.assertAlmostEqual(points[1]['z'], ff)
objective.flipOrientation()
points = objective.pointsOfInterest(z)
self.assertAlmostEqual(points[0]['z'], ff)
self.assertAlmostEqual(points[1]['z'], fb)
except AssertionError:
fails.append('{} has the wrong point of interest.'.format(subclass.__name__))
self.assertEqual([], fails)
if __name__ == '__main__':
envtest.main()
| 41.97351 | 123 | 0.570054 | 1,377 | 12,676 | 5.190269 | 0.156137 | 0.055828 | 0.068001 | 0.031482 | 0.816007 | 0.798237 | 0.779768 | 0.754582 | 0.660837 | 0.640129 | 0 | 0.074125 | 0.30609 | 12,676 | 301 | 124 | 42.112957 | 0.738404 | 0.001262 | 0 | 0.748936 | 0 | 0 | 0.081457 | 0.003792 | 0 | 0 | 0 | 0 | 0.246809 | 1 | 0.110638 | false | 0 | 0.012766 | 0 | 0.148936 | 0.008511 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f126a3cfe2492746abe25760c9f9d645fd0a7635 | 96 | py | Python | venv/lib/python3.8/site-packages/poetry/core/_vendor/packaging/version.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/poetry/core/_vendor/packaging/version.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/poetry/core/_vendor/packaging/version.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/42/60/e5/81c797274b0f34273639ee3275aea54a1208b4aec2d6765599df92ab98 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.46875 | 0 | 96 | 1 | 96 | 96 | 0.427083 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f143d0d1af76b31c236c5b80e3c10a69694c82af | 33 | py | Python | utils/__init__.py | itsnamgyu/reid-metric | 437e02ebad510b482f620a293fd8c7baa4f42ad6 | [
"MIT"
] | null | null | null | utils/__init__.py | itsnamgyu/reid-metric | 437e02ebad510b482f620a293fd8c7baa4f42ad6 | [
"MIT"
] | null | null | null | utils/__init__.py | itsnamgyu/reid-metric | 437e02ebad510b482f620a293fd8c7baa4f42ad6 | [
"MIT"
] | null | null | null | from . import distmat, evaluation | 33 | 33 | 0.818182 | 4 | 33 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
74e6c1c320320014a0a8231fb87f9b159950b7ab | 24 | py | Python | ISS_Info/__init__.py | Quarantine-Projects/ISS-Info | 286cb9d0f5b6804424c65547f9cd14ae69fe6842 | [
"MIT"
] | 16 | 2020-03-22T13:54:10.000Z | 2022-01-08T04:22:13.000Z | ISS_Info/__init__.py | Quarantine-Projects/ISS-Info | 286cb9d0f5b6804424c65547f9cd14ae69fe6842 | [
"MIT"
] | 2 | 2020-03-22T14:00:44.000Z | 2021-08-02T10:01:05.000Z | ISS_Info/__init__.py | Quarantine-Projects/ISS-Info | 286cb9d0f5b6804424c65547f9cd14ae69fe6842 | [
"MIT"
] | 3 | 2020-03-22T16:34:25.000Z | 2020-09-17T04:48:22.000Z | from .ISS_Info import *
| 12 | 23 | 0.75 | 4 | 24 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
74ed8a179f4e26c344789e1f829df6fd641aa37f | 7,204 | py | Python | test/test_clone_media.py | cvisionai/tator-py | 89d5ed3bfc3e824bdf73b6c0fc43180e09dc3782 | [
"MIT"
] | 2 | 2020-06-11T02:17:43.000Z | 2021-01-27T14:41:07.000Z | test/test_clone_media.py | cvisionai/tator-py | 89d5ed3bfc3e824bdf73b6c0fc43180e09dc3782 | [
"MIT"
] | 39 | 2020-06-08T15:12:47.000Z | 2022-03-31T20:05:17.000Z | test/test_clone_media.py | cvisionai/tator-py | 89d5ed3bfc3e824bdf73b6c0fc43180e09dc3782 | [
"MIT"
] | 1 | 2020-06-13T00:09:10.000Z | 2020-06-13T00:09:10.000Z | import tempfile
import os
import time
import tator
from ._common import assert_vector_equal
def test_clone_multi(host, token, project, multi_type, multi):
tator_api = tator.get_api(host, token)
multi_obj = tator_api.get_media(multi)
response = tator_api.clone_media_list(project=project,
media_id=[multi],
clone_media_spec={'dest_project': project,
'dest_section': 'Cloned multi',
'dest_type': multi_type})
assert isinstance(response, tator.models.CreateListResponse)
assert len(response.id) == 1
def test_clone_videos(host, token, project, video_type, video):
tator_api = tator.get_api(host, token)
video_obj = tator_api.get_media(video)
response = tator_api.clone_media_list(project=project,
media_id=[video],
clone_media_spec={'dest_project': project,
'dest_section': 'Cloned video',
'dest_type': video_type})
assert isinstance(response, tator.models.CreateListResponse)
assert len(response.id) == 1
cloned_video = tator_api.get_media(response.id[0])
with tempfile.TemporaryDirectory() as temp_dir:
outpath = os.path.join(temp_dir, "video.mp4")
for progress in tator.download_media(tator_api, cloned_video, outpath):
print(f"Cloned video download progress: {progress}%")
assert(os.path.exists(outpath))
def test_clone_images(host, token, project, image_type, image):
tator_api = tator.get_api(host, token)
image_obj = tator_api.get_media(image)
response = tator_api.clone_media_list(project=project,
media_id=[image],
clone_media_spec={'dest_project': project,
'dest_section': 'Cloned image',
'dest_type': image_type})
assert isinstance(response, tator.models.CreateListResponse)
assert len(response.id) == 1
cloned_image = tator_api.get_media(response.id[0])
with tempfile.TemporaryDirectory() as temp_dir:
outpath = os.path.join(temp_dir, "image.jpg")
for progress in tator.download_media(tator_api, cloned_image, outpath):
print(f"Cloned image download progress: {progress}%")
assert(os.path.exists(outpath))
def test_clone_multi_util_same_host(host, token, project, multi_type, multi):
tator_api = tator.get_api(host, token)
query_params = {'project': project, 'media_id': [multi]}
section = 'Cloned media util same host'
created_ids = []
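    # tator.util.clone_media_list is a generator: each iteration yields a progress
    # tuple (num_created, num_total, response, id_map), so the clone work advances
    # only as the loop below consumes it.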
generator = tator.util.clone_media_list(tator_api, query_params, project, {}, multi_type,
section)
for num_created, num_total, response, id_map in generator:
print(f"Created {num_created} of {num_total} files...")
created_ids.append(response.id)
print(f"Finished creating {num_created} files!")
assert len(created_ids) == 1
def test_clone_videos_util_same_host(host, token, project, video_type, video):
tator_api = tator.get_api(host, token)
query_params = {'project': project, 'media_id': [video]}
section = 'Cloned media util same host'
created_ids = []
generator = tator.util.clone_media_list(tator_api, query_params, project, {}, video_type,
section)
for num_created, num_total, response, id_map in generator:
print(f"Created {num_created} of {num_total} files...")
created_ids.append(response.id)
print(f"Finished creating {num_created} files!")
assert len(created_ids) == 1
def test_clone_images_util_same_host(host, token, project, image_type, image):
tator_api = tator.get_api(host, token)
query_params = {'project': project, 'media_id': [image]}
section = 'Cloned media util same host'
created_ids = []
generator = tator.util.clone_media_list(tator_api, query_params, project, {}, image_type,
section)
for num_created, num_total, response, id_map in generator:
print(f"Created {num_created} of {num_total} files...")
created_ids.append(response.id)
print(f"Finished creating {num_created} files!")
assert len(created_ids) == 1
def test_clone_multi_util_different_host(host, token, project, multi_type, multi, video):
tator_api = tator.get_api(host, token)
dest_api = tator.get_api(host, token)
assert tator_api is not dest_api
query_params = {'project': project, 'media_id': [multi]}
section = 'Cloned media util different host'
created_ids = []
generator = tator.util.clone_media_list(tator_api, query_params, project, {video:video},
multi_type, section, dest_api)
for num_created, num_total, response, id_map in generator:
print(f"Created {num_created} of {num_total} files...")
created_ids.append(response.id)
print(f"Finished creating {num_created} files!")
tator.util.clone_media_list(tator_api, query_params, project, multi_type,
'Cloned media util different host', tator_api)
assert len(created_ids) == 1
def test_clone_videos_util_different_host(host, token, project, video_type, video):
tator_api = tator.get_api(host, token)
dest_api = tator.get_api(host, token)
assert tator_api is not dest_api
query_params = {'project': project, 'media_id': [video]}
section = 'Cloned media util different host'
created_ids = []
generator = tator.util.clone_media_list(tator_api, query_params, project, {}, video_type,
section, dest_api)
for num_created, num_total, response, id_map in generator:
print(f"Created {num_created} of {num_total} files...")
created_ids.append(response.id)
print(f"Finished creating {num_created} files!")
tator.util.clone_media_list(tator_api, query_params, project, video_type,
'Cloned media util different host', tator_api)
assert len(created_ids) == 1
def test_clone_images_util_different_host(host, token, project, image_type, image):
tator_api = tator.get_api(host, token)
dest_api = tator.get_api(host, token)
assert tator_api is not dest_api
query_params = {'project': project, 'media_id': [image]}
section = 'Cloned media util different host'
created_ids = []
generator = tator.util.clone_media_list(tator_api, query_params, project, {}, image_type,
section, dest_api)
for num_created, num_total, response, id_map in generator:
print(f"Created {num_created} of {num_total} files...")
created_ids.append(response.id)
print(f"Finished creating {num_created} files!")
assert len(created_ids) == 1
| 49.682759 | 93 | 0.630067 | 885 | 7,204 | 4.853107 | 0.083616 | 0.059604 | 0.058673 | 0.039115 | 0.935506 | 0.912224 | 0.901281 | 0.892433 | 0.892433 | 0.837253 | 0 | 0.002286 | 0.271238 | 7,204 | 144 | 94 | 50.027778 | 0.81581 | 0 | 0 | 0.697674 | 0 | 0 | 0.148251 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 1 | 0.069767 | false | 0 | 0.03876 | 0 | 0.108527 | 0.108527 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
74fc0eabee25e7601e6264e4d9059cb3dc42c624 | 10,394 | py | Python | tests/test_adapter.py | alight-analytics/rfc5424-logging-handler | 1bf1b2b567ab741aebf4d0417b8a6d6f466739e2 | [
"BSD-3-Clause"
] | null | null | null | tests/test_adapter.py | alight-analytics/rfc5424-logging-handler | 1bf1b2b567ab741aebf4d0417b8a6d6f466739e2 | [
"BSD-3-Clause"
] | null | null | null | tests/test_adapter.py | alight-analytics/rfc5424-logging-handler | 1bf1b2b567ab741aebf4d0417b8a6d6f466739e2 | [
"BSD-3-Clause"
] | null | null | null | import logging
import pytest
from mock import patch
from conftest import (
address, message, sd1, sd2
)
from rfc5424logging import Rfc5424SysLogHandler, Rfc5424SysLogAdapter, NOTICE, NILVALUE
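# The leading <N> in each expected byte string is the RFC 5424 PRI value, computed
# as facility * 8 + severity. Everything here logs to the user facility (1), so
# INFO (severity 6) yields <14>, NOTICE (5) <13>, WARNING (4) <12>, ERROR (3) <11>,
# CRITICAL (2) <10>, ALERT (1) <9>, EMERGENCY (0) <8>, and DEBUG (7) <15>.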
@pytest.mark.parametrize("handler_kwargs,adapter_kwargs,logger_kwargs,expected", [
(
{'address': address, 'structured_data': sd1, 'appname': 'my_appname', 'hostname': 'my-hostname', 'procid': "1234"},
{},
{'extra': {'structured_data': sd2, 'msgid': 'my_msgid'}},
b'<14>1 2000-01-01T17:11:11.111111+06:00 my-hostname my_appname 1234'
b' my_msgid [my_sd_id1@32473 my_key1="my_value1"][my_sd_id2@32473 my_key2="my_value2"] '
b'\xef\xbb\xbfThis is an interesting message'
), (
{'address': address, 'structured_data': sd1, 'appname': 'my_appname', 'hostname': 'my-hostname', 'procid': "1234"},
{'enable_extra_levels': True},
{'extra': {'structured_data': sd2, 'msgid': 'my_msgid'}},
b'<14>1 2000-01-01T17:11:11.111111+06:00 my-hostname my_appname 1234'
b' my_msgid [my_sd_id1@32473 my_key1="my_value1"][my_sd_id2@32473 my_key2="my_value2"] '
b'\xef\xbb\xbfThis is an interesting message'
), (
{'address': address, 'structured_data': sd1, 'appname': 'my_appname', 'hostname': 'my-hostname', 'procid': "1234"},
{'extra': {'structured_data': sd2, 'msgid': 'my_msgid'}},
{},
b'<14>1 2000-01-01T17:11:11.111111+06:00 my-hostname my_appname 1234'
b' my_msgid [my_sd_id1@32473 my_key1="my_value1"][my_sd_id2@32473 my_key2="my_value2"] '
b'\xef\xbb\xbfThis is an interesting message'
), (
{'address': address, 'structured_data': sd1, 'appname': 'my_appname', 'hostname': 'my-hostname', 'procid': "1234"},
{'extra': {'structured_data': sd2, 'msgid': 'my_msgid'}, 'enable_extra_levels': True},
{},
b'<14>1 2000-01-01T17:11:11.111111+06:00 my-hostname my_appname 1234'
b' my_msgid [my_sd_id1@32473 my_key1="my_value1"][my_sd_id2@32473 my_key2="my_value2"] '
b'\xef\xbb\xbfThis is an interesting message'
), (
{'address': address, 'structured_data': sd1, 'appname': 'my_appname', 'hostname': 'my-hostname', 'procid': "1234"},
{'enable_extra_levels': True},
{'structured_data': sd2, 'msgid': 'my_msgid'},
b'<14>1 2000-01-01T17:11:11.111111+06:00 my-hostname my_appname 1234'
b' my_msgid [my_sd_id1@32473 my_key1="my_value1"][my_sd_id2@32473 my_key2="my_value2"] '
b'\xef\xbb\xbfThis is an interesting message'
), (
{'address': address, 'structured_data': sd1, 'appname': 'my_appname', 'hostname': 'my-hostname', 'procid': "1234"},
{'enable_extra_levels': True},
{'procid': 'some_procid'},
b'<14>1 2000-01-01T17:11:11.111111+06:00 my-hostname my_appname some_procid'
b' - [my_sd_id1@32473 my_key1="my_value1"] '
b'\xef\xbb\xbfThis is an interesting message'
), (
{'address': address, 'structured_data': sd1, 'appname': 'my_appname', 'hostname': 'my-hostname', 'procid': "1234"},
{'enable_extra_levels': True},
{'appname': 'some_appname'},
b'<14>1 2000-01-01T17:11:11.111111+06:00 my-hostname some_appname 1234'
b' - [my_sd_id1@32473 my_key1="my_value1"] '
b'\xef\xbb\xbfThis is an interesting message'
), (
{'address': address, 'structured_data': sd1, 'appname': 'my_appname', 'hostname': 'my-hostname', 'procid': "1234"},
{'enable_extra_levels': True},
{'hostname': 'some-hostname'},
b'<14>1 2000-01-01T17:11:11.111111+06:00 some-hostname my_appname 1234'
b' - [my_sd_id1@32473 my_key1="my_value1"] '
b'\xef\xbb\xbfThis is an interesting message'
), (
{'address': address},
{'enable_extra_levels': True},
{'hostname': NILVALUE, 'appname': NILVALUE, 'procid': NILVALUE},
b'<14>1 2000-01-01T17:11:11.111111+06:00 - - - - - '
b'\xef\xbb\xbfThis is an interesting message'
)
])
def test_adapter(logger, handler_kwargs, adapter_kwargs, logger_kwargs, expected):
sh = Rfc5424SysLogHandler(**handler_kwargs)
logger.addHandler(sh)
adapter = Rfc5424SysLogAdapter(logger, **adapter_kwargs)
with patch.object(sh, 'socket') as syslog_socket:
adapter.info(message, **logger_kwargs)
syslog_socket.sendto.assert_called_once_with(expected, address)
syslog_socket.sendto.reset_mock()
adapter.log(logging.INFO, message, **logger_kwargs)
syslog_socket.sendto.assert_called_once_with(expected, address)
syslog_socket.sendto.reset_mock()
logger.removeHandler(sh)
def test_log(logger_with_udp_handler):
expected_msg = (b'<13>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111'
b' - - \xef\xbb\xbfThis is an interesting message')
logger, syslog_socket = logger_with_udp_handler
adapter = Rfc5424SysLogAdapter(logger, enable_extra_levels=True)
adapter.log(NOTICE, message)
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
def test_log_not_enabled(adapter_with_udp_handler):
expected_msg = (b'<12>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111'
b' - - \xef\xbb\xbfThis is an interesting message')
adapter, syslog_socket = adapter_with_udp_handler
adapter.log(NOTICE, message)
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
def test_emergency(logger_with_udp_handler):
expected_msg = (b'<8>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111'
b' - - \xef\xbb\xbfThis is an interesting message')
logger, syslog_socket = logger_with_udp_handler
adapter = Rfc5424SysLogAdapter(logger, enable_extra_levels=True)
adapter.emerg(message)
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
def test_emergency_not_enabled(adapter_with_udp_handler):
expected_msg = (b'<10>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111'
b' - - \xef\xbb\xbfThis is an interesting message')
adapter, syslog_socket = adapter_with_udp_handler
adapter.emergency(message)
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
def test_alert(logger_with_udp_handler):
expected_msg = (b'<9>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111'
b' - - \xef\xbb\xbfThis is an interesting message')
logger, syslog_socket = logger_with_udp_handler
adapter = Rfc5424SysLogAdapter(logger, enable_extra_levels=True)
adapter.alert(message)
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
def test_alert_not_enabled(adapter_with_udp_handler):
expected_msg = (b'<10>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111'
b' - - \xef\xbb\xbfThis is an interesting message')
adapter, syslog_socket = adapter_with_udp_handler
adapter.alert(message)
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
def test_notice(logger_with_udp_handler):
expected_msg = (b'<13>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111'
b' - - \xef\xbb\xbfThis is an interesting message')
logger, syslog_socket = logger_with_udp_handler
adapter = Rfc5424SysLogAdapter(logger, enable_extra_levels=True)
adapter.notice(message)
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
def test_notice_not_enabled(adapter_with_udp_handler):
expected_msg = (b'<12>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111'
b' - - \xef\xbb\xbfThis is an interesting message')
adapter, syslog_socket = adapter_with_udp_handler
adapter.notice(message)
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
def test_empty_msg(logger_with_udp_handler):
logger, syslog_socket = logger_with_udp_handler
adapter = Rfc5424SysLogAdapter(logger, enable_extra_levels=True)
expected_msg = b'<15>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111 - -'
adapter.debug()
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
syslog_socket.sendto.reset_mock()
expected_msg = b'<14>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111 - -'
adapter.info()
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
syslog_socket.sendto.reset_mock()
expected_msg = b'<13>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111 - -'
adapter.notice()
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
syslog_socket.sendto.reset_mock()
expected_msg = b'<12>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111 - -'
adapter.warning()
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
syslog_socket.sendto.reset_mock()
expected_msg = b'<11>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111 - -'
adapter.error()
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
syslog_socket.sendto.reset_mock()
expected_msg = b'<10>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111 - -'
adapter.critical()
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
syslog_socket.sendto.reset_mock()
expected_msg = b'<9>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111 - -'
adapter.alert()
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
syslog_socket.sendto.reset_mock()
expected_msg = b'<8>1 2000-01-01T17:11:11.111111+06:00 testhostname root 111 - -'
adapter.emergency()
syslog_socket.sendto.assert_called_once_with(expected_msg, address)
def test_extras(logger_with_udp_handler):
logger, syslog_socket = logger_with_udp_handler
adapter = Rfc5424SysLogAdapter(logger, enable_extra_levels=True, extra={"a": 1, "c": 3})
expected_return = ("aaaa", {'extra': dict(a=1, b=2, c="c")})
# Make sure passed extra argument overrides that of the adapter instance
assert adapter.process("aaaa", kwargs={'extra': {"b": 2, "c": "c"}}) == expected_return
# Make sure adapter instance variables aren't overwritten
assert {"a": 1, "c": 3} == adapter.extra
# Test invalid extra argument
with pytest.raises(TypeError):
adapter = Rfc5424SysLogAdapter(logger, enable_extra_levels=True, extra="i_am_not_a_dict")
| 48.12037 | 123 | 0.695594 | 1,446 | 10,394 | 4.757261 | 0.086445 | 0.066289 | 0.07065 | 0.043611 | 0.870621 | 0.865242 | 0.863788 | 0.8446 | 0.830063 | 0.830063 | 0 | 0.110775 | 0.172311 | 10,394 | 215 | 124 | 48.344186 | 0.688829 | 0.014816 | 0 | 0.629213 | 0 | 0.140449 | 0.380911 | 0.100821 | 0 | 0 | 0 | 0 | 0.11236 | 1 | 0.061798 | false | 0 | 0.02809 | 0 | 0.089888 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
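# Illustrative usage sketch, not part of the test module above: it wires up the
# handler/adapter pair that these tests exercise. The import path (rfc5424logging)
# and the UDP target address are assumptions for demonstration.
import logging
from rfc5424logging import Rfc5424SysLogHandler, Rfc5424SysLogAdapter

demo_logger = logging.getLogger("demo")
demo_logger.addHandler(Rfc5424SysLogHandler(address=("127.0.0.1", 514)))

# enable_extra_levels adds the RFC 5424 severities (notice, alert, emergency, ...)
adapter = Rfc5424SysLogAdapter(demo_logger, enable_extra_levels=True)
adapter.notice("service started", appname="my_appname", procid="1234")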
7414f615ec34402d8bc00d1d7e3bf4f46248972e | 537 | py | Python | tests/test_priority.py | j127/todoist-taskwarrior | 24511f22c500371eab1b54afc13c499f9672d779 | [
"MIT"
] | 12 | 2019-01-20T18:09:42.000Z | 2021-11-01T18:25:19.000Z | tests/test_priority.py | j127/todoist-taskwarrior | 24511f22c500371eab1b54afc13c499f9672d779 | [
"MIT"
] | 4 | 2019-08-05T04:53:14.000Z | 2021-10-16T01:44:52.000Z | tests/test_priority.py | richnetdesign/todoist-taskwarrior | 6a0792a3ed8972b546a8ae9caa0dae7f0e1d2f57 | [
"MIT"
] | 5 | 2019-08-01T19:22:45.000Z | 2021-08-19T23:10:45.000Z | """ Priority Tests
Test conversions between Todoist and Taskwarrior priorities.
"""
import pytest
from todoist_taskwarrior import utils
def test_priorities():
assert utils.parse_priority(1) is None
assert utils.parse_priority(2) == 'L'
assert utils.parse_priority(3) == 'M'
assert utils.parse_priority(4) == 'H'
def test_priorities_str():
assert utils.parse_priority('1') is None
assert utils.parse_priority('2') == 'L'
assert utils.parse_priority('3') == 'M'
assert utils.parse_priority('4') == 'H'
| 25.571429 | 60 | 0.696462 | 71 | 537 | 5.098592 | 0.352113 | 0.243094 | 0.353591 | 0.530387 | 0.59116 | 0.59116 | 0.59116 | 0.59116 | 0.59116 | 0.59116 | 0 | 0.017897 | 0.167598 | 537 | 20 | 61 | 26.85 | 0.791946 | 0.141527 | 0 | 0 | 0 | 0 | 0.022124 | 0 | 0 | 0 | 0 | 0.05 | 0.666667 | 1 | 0.166667 | true | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
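# Minimal sketch of a parse_priority consistent with the assertions above; the
# real implementation lives in todoist_taskwarrior.utils and may differ in detail.
_PRIORITY_MAP = {2: 'L', 3: 'M', 4: 'H'}

def parse_priority(priority):
    # Todoist priority 1 (lowest) maps to "no priority" on the Taskwarrior side
    return _PRIORITY_MAP.get(int(priority))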
74a12d2fa8a3123b1e18f735d02402e62c4b96c3 | 28,303 | py | Python | cisco-ios-xr/ydk/models/cisco_ios_xr/Cisco_IOS_XR_ncs1k_macsec_ea_oper.py | bopopescu/ACI | dd717bc74739eeed4747b3ea9e36b239580df5e1 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | cisco-ios-xr/ydk/models/cisco_ios_xr/Cisco_IOS_XR_ncs1k_macsec_ea_oper.py | bopopescu/ACI | dd717bc74739eeed4747b3ea9e36b239580df5e1 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | cisco-ios-xr/ydk/models/cisco_ios_xr/Cisco_IOS_XR_ncs1k_macsec_ea_oper.py | bopopescu/ACI | dd717bc74739eeed4747b3ea9e36b239580df5e1 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-07-22T04:04:44.000Z | 2020-07-22T04:04:44.000Z | """ Cisco_IOS_XR_ncs1k_macsec_ea_oper
This module contains a collection of YANG definitions
for Cisco IOS\-XR ncs1k\-macsec\-ea package operational data.
This module contains definitions
for the following management objects\:
ncs1k\-macsec\-oper\: Macsec data
Copyright (c) 2013\-2017 by Cisco Systems, Inc.
All rights reserved.
"""
from collections import OrderedDict
from ydk.types import Entity, EntityPath, Identity, Enum, YType, YLeaf, YLeafList, YList, LeafDataList, Bits, Empty, Decimal64
from ydk.filters import YFilter
from ydk.errors import YError, YModelError
from ydk.errors.error_handler import handle_type_error as _handle_type_error
class Ncs1kCipherSuit(Enum):
"""
Ncs1kCipherSuit (Enum Class)
Ncs1k cipher suite
.. data:: gcm_aes_256 = 0
GCM AES 256
.. data:: gcm_aes_128 = 1
GCM AES 128
.. data:: gcm_aes_xpn_256 = 2
GCM AES XPN 256
"""
gcm_aes_256 = Enum.YLeaf(0, "gcm-aes-256")
gcm_aes_128 = Enum.YLeaf(1, "gcm-aes-128")
gcm_aes_xpn_256 = Enum.YLeaf(2, "gcm-aes-xpn-256")
class Ncs1KMacsecOper(Entity):
"""
Macsec data
.. attribute:: ncs1k_macsec_ctrlr_names
All Macsec operational data
**type**\: :py:class:`Ncs1KMacsecCtrlrNames <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ncs1k_macsec_ea_oper.Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames>`
"""
_prefix = 'ncs1k-macsec-ea-oper'
_revision = '2015-11-09'
def __init__(self):
super(Ncs1KMacsecOper, self).__init__()
self._top_entity = None
self.yang_name = "ncs1k-macsec-oper"
self.yang_parent_name = "Cisco-IOS-XR-ncs1k-macsec-ea-oper"
self.is_top_level_class = True
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_container_classes = OrderedDict([("ncs1k-macsec-ctrlr-names", ("ncs1k_macsec_ctrlr_names", Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames))])
self._child_list_classes = OrderedDict([])
self._leafs = OrderedDict()
self.ncs1k_macsec_ctrlr_names = Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames()
self.ncs1k_macsec_ctrlr_names.parent = self
self._children_name_map["ncs1k_macsec_ctrlr_names"] = "ncs1k-macsec-ctrlr-names"
self._children_yang_names.add("ncs1k-macsec-ctrlr-names")
self._segment_path = lambda: "Cisco-IOS-XR-ncs1k-macsec-ea-oper:ncs1k-macsec-oper"
class Ncs1KMacsecCtrlrNames(Entity):
"""
All Macsec operational data
.. attribute:: ncs1k_macsec_ctrlr_name
Interface name
**type**\: list of :py:class:`Ncs1KMacsecCtrlrName <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ncs1k_macsec_ea_oper.Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName>`
"""
_prefix = 'ncs1k-macsec-ea-oper'
_revision = '2015-11-09'
def __init__(self):
super(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames, self).__init__()
self.yang_name = "ncs1k-macsec-ctrlr-names"
self.yang_parent_name = "ncs1k-macsec-oper"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_container_classes = OrderedDict([])
self._child_list_classes = OrderedDict([("ncs1k-macsec-ctrlr-name", ("ncs1k_macsec_ctrlr_name", Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName))])
self._leafs = OrderedDict()
self.ncs1k_macsec_ctrlr_name = YList(self)
self._segment_path = lambda: "ncs1k-macsec-ctrlr-names"
self._absolute_path = lambda: "Cisco-IOS-XR-ncs1k-macsec-ea-oper:ncs1k-macsec-oper/%s" % self._segment_path()
def __setattr__(self, name, value):
self._perform_setattr(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames, [], name, value)
class Ncs1KMacsecCtrlrName(Entity):
"""
Interface name
.. attribute:: name (key)
Port name
**type**\: str
**pattern:** [a\-zA\-Z0\-9./\-]+
.. attribute:: ncs1k_status_info
controller data
**type**\: :py:class:`Ncs1KStatusInfo <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ncs1k_macsec_ea_oper.Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo>`
"""
_prefix = 'ncs1k-macsec-ea-oper'
_revision = '2015-11-09'
def __init__(self):
super(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName, self).__init__()
self.yang_name = "ncs1k-macsec-ctrlr-name"
self.yang_parent_name = "ncs1k-macsec-ctrlr-names"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['name']
self._child_container_classes = OrderedDict([("ncs1k-status-info", ("ncs1k_status_info", Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo))])
self._child_list_classes = OrderedDict([])
self._leafs = OrderedDict([
('name', YLeaf(YType.str, 'name')),
])
self.name = None
self.ncs1k_status_info = Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo()
self.ncs1k_status_info.parent = self
self._children_name_map["ncs1k_status_info"] = "ncs1k-status-info"
self._children_yang_names.add("ncs1k-status-info")
self._segment_path = lambda: "ncs1k-macsec-ctrlr-name" + "[name='" + str(self.name) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-ncs1k-macsec-ea-oper:ncs1k-macsec-oper/ncs1k-macsec-ctrlr-names/%s" % self._segment_path()
def __setattr__(self, name, value):
self._perform_setattr(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName, ['name'], name, value)
class Ncs1KStatusInfo(Entity):
"""
controller data
.. attribute:: encrypt_sc_status
Encrypt Secure Channel Status
**type**\: :py:class:`EncryptScStatus <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ncs1k_macsec_ea_oper.Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.EncryptScStatus>`
.. attribute:: decrypt_sc_status
Decrypt Secure Channel Status
**type**\: :py:class:`DecryptScStatus <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ncs1k_macsec_ea_oper.Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.DecryptScStatus>`
.. attribute:: replay_window_size
Replay Window Size
**type**\: int
**range:** 0..4294967295
.. attribute:: must_secure
Must Secure
**type**\: bool
.. attribute:: secure_mode
Secure Mode
**type**\: int
**range:** 0..4294967295
"""
_prefix = 'ncs1k-macsec-ea-oper'
_revision = '2015-11-09'
def __init__(self):
super(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo, self).__init__()
self.yang_name = "ncs1k-status-info"
self.yang_parent_name = "ncs1k-macsec-ctrlr-name"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_container_classes = OrderedDict([("encrypt-sc-status", ("encrypt_sc_status", Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.EncryptScStatus)), ("decrypt-sc-status", ("decrypt_sc_status", Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.DecryptScStatus))])
self._child_list_classes = OrderedDict([])
self._leafs = OrderedDict([
('replay_window_size', YLeaf(YType.uint32, 'replay-window-size')),
('must_secure', YLeaf(YType.boolean, 'must-secure')),
('secure_mode', YLeaf(YType.uint32, 'secure-mode')),
])
self.replay_window_size = None
self.must_secure = None
self.secure_mode = None
self.encrypt_sc_status = Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.EncryptScStatus()
self.encrypt_sc_status.parent = self
self._children_name_map["encrypt_sc_status"] = "encrypt-sc-status"
self._children_yang_names.add("encrypt-sc-status")
self.decrypt_sc_status = Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.DecryptScStatus()
self.decrypt_sc_status.parent = self
self._children_name_map["decrypt_sc_status"] = "decrypt-sc-status"
self._children_yang_names.add("decrypt-sc-status")
self._segment_path = lambda: "ncs1k-status-info"
def __setattr__(self, name, value):
self._perform_setattr(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo, ['replay_window_size', 'must_secure', 'secure_mode'], name, value)
class EncryptScStatus(Entity):
"""
Encrypt Secure Channel Status
.. attribute:: protection_enabled
Protection Enabled
**type**\: bool
.. attribute:: secure_channel_id
Secure Channel Id
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: confidentiality_offset
Confidentiality offset
**type**\: int
**range:** 0..4294967295
.. attribute:: cipher_suite
Cipher Suite
**type**\: :py:class:`Ncs1kCipherSuit <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ncs1k_macsec_ea_oper.Ncs1kCipherSuit>`
.. attribute:: initial_packet_number
Initial Packet Number
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: secure_tag_length
Secure Tag Length
**type**\: int
**range:** 0..4294967295
.. attribute:: max_packet_number
Maximum Packet Number
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: recent_packet_number
Recent Packet Number
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: active_association
Active Associations
**type**\: list of :py:class:`ActiveAssociation <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ncs1k_macsec_ea_oper.Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.EncryptScStatus.ActiveAssociation>`
"""
_prefix = 'ncs1k-macsec-ea-oper'
_revision = '2015-11-09'
def __init__(self):
super(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.EncryptScStatus, self).__init__()
self.yang_name = "encrypt-sc-status"
self.yang_parent_name = "ncs1k-status-info"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_container_classes = OrderedDict([])
self._child_list_classes = OrderedDict([("active-association", ("active_association", Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.EncryptScStatus.ActiveAssociation))])
self._leafs = OrderedDict([
('protection_enabled', YLeaf(YType.boolean, 'protection-enabled')),
('secure_channel_id', YLeaf(YType.uint64, 'secure-channel-id')),
('confidentiality_offset', YLeaf(YType.uint32, 'confidentiality-offset')),
('cipher_suite', YLeaf(YType.enumeration, 'cipher-suite')),
('initial_packet_number', YLeaf(YType.uint64, 'initial-packet-number')),
('secure_tag_length', YLeaf(YType.uint32, 'secure-tag-length')),
('max_packet_number', YLeaf(YType.uint64, 'max-packet-number')),
('recent_packet_number', YLeaf(YType.uint64, 'recent-packet-number')),
])
self.protection_enabled = None
self.secure_channel_id = None
self.confidentiality_offset = None
self.cipher_suite = None
self.initial_packet_number = None
self.secure_tag_length = None
self.max_packet_number = None
self.recent_packet_number = None
self.active_association = YList(self)
self._segment_path = lambda: "encrypt-sc-status"
def __setattr__(self, name, value):
self._perform_setattr(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.EncryptScStatus, ['protection_enabled', 'secure_channel_id', 'confidentiality_offset', 'cipher_suite', 'initial_packet_number', 'secure_tag_length', 'max_packet_number', 'recent_packet_number'], name, value)
class ActiveAssociation(Entity):
"""
Active Associations
.. attribute:: association_number
Association Number
**type**\: int
**range:** 0..255
.. attribute:: device_association_number
Device Association Number
**type**\: int
**range:** 0..255
.. attribute:: short_secure_channel_id
Short Secure Channel Id
**type**\: int
**range:** 0..4294967295
.. attribute:: programmed_time
Key Programmed Time
**type**\: str
**length:** 0..30
.. attribute:: key_crc
32bit CRC of Programmed Key
**type**\: str
**pattern:** [0\-9a\-fA\-F]{1,8}
.. attribute:: xpn_salt
XPN Salt
**type**\: list of str
**pattern:** [0\-9a\-fA\-F]{1,8}
"""
_prefix = 'ncs1k-macsec-ea-oper'
_revision = '2015-11-09'
def __init__(self):
super(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.EncryptScStatus.ActiveAssociation, self).__init__()
self.yang_name = "active-association"
self.yang_parent_name = "encrypt-sc-status"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_container_classes = OrderedDict([])
self._child_list_classes = OrderedDict([])
self._leafs = OrderedDict([
('association_number', YLeaf(YType.uint8, 'association-number')),
('device_association_number', YLeaf(YType.uint8, 'device-association-number')),
('short_secure_channel_id', YLeaf(YType.uint32, 'short-secure-channel-id')),
('programmed_time', YLeaf(YType.str, 'programmed-time')),
('key_crc', YLeaf(YType.str, 'key-crc')),
('xpn_salt', YLeafList(YType.str, 'xpn-salt')),
])
self.association_number = None
self.device_association_number = None
self.short_secure_channel_id = None
self.programmed_time = None
self.key_crc = None
self.xpn_salt = []
self._segment_path = lambda: "active-association"
def __setattr__(self, name, value):
self._perform_setattr(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.EncryptScStatus.ActiveAssociation, ['association_number', 'device_association_number', 'short_secure_channel_id', 'programmed_time', 'key_crc', 'xpn_salt'], name, value)
class DecryptScStatus(Entity):
"""
Decrypt Secure Channel Status
.. attribute:: protection_enabled
Protection Enabled
**type**\: bool
.. attribute:: secure_channel_id
Secure Channel Id
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: confidentiality_offset
Confidentiality offset
**type**\: int
**range:** 0..4294967295
.. attribute:: cipher_suite
Cipher Suite
**type**\: :py:class:`Ncs1kCipherSuit <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ncs1k_macsec_ea_oper.Ncs1kCipherSuit>`
.. attribute:: initial_packet_number
Initial Packet Number
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: secure_tag_length
Secure Tag Length
**type**\: int
**range:** 0..4294967295
.. attribute:: max_packet_number
Maximum Packet Number
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: recent_packet_number
Recent Packet Number
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: active_association
Active Associations
**type**\: list of :py:class:`ActiveAssociation <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ncs1k_macsec_ea_oper.Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.DecryptScStatus.ActiveAssociation>`
"""
_prefix = 'ncs1k-macsec-ea-oper'
_revision = '2015-11-09'
def __init__(self):
super(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.DecryptScStatus, self).__init__()
self.yang_name = "decrypt-sc-status"
self.yang_parent_name = "ncs1k-status-info"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_container_classes = OrderedDict([])
self._child_list_classes = OrderedDict([("active-association", ("active_association", Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.DecryptScStatus.ActiveAssociation))])
self._leafs = OrderedDict([
('protection_enabled', YLeaf(YType.boolean, 'protection-enabled')),
('secure_channel_id', YLeaf(YType.uint64, 'secure-channel-id')),
('confidentiality_offset', YLeaf(YType.uint32, 'confidentiality-offset')),
('cipher_suite', YLeaf(YType.enumeration, 'cipher-suite')),
('initial_packet_number', YLeaf(YType.uint64, 'initial-packet-number')),
('secure_tag_length', YLeaf(YType.uint32, 'secure-tag-length')),
('max_packet_number', YLeaf(YType.uint64, 'max-packet-number')),
('recent_packet_number', YLeaf(YType.uint64, 'recent-packet-number')),
])
self.protection_enabled = None
self.secure_channel_id = None
self.confidentiality_offset = None
self.cipher_suite = None
self.initial_packet_number = None
self.secure_tag_length = None
self.max_packet_number = None
self.recent_packet_number = None
self.active_association = YList(self)
self._segment_path = lambda: "decrypt-sc-status"
def __setattr__(self, name, value):
self._perform_setattr(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.DecryptScStatus, ['protection_enabled', 'secure_channel_id', 'confidentiality_offset', 'cipher_suite', 'initial_packet_number', 'secure_tag_length', 'max_packet_number', 'recent_packet_number'], name, value)
class ActiveAssociation(Entity):
"""
Active Associations
.. attribute:: association_number
Association Number
**type**\: int
**range:** 0..255
.. attribute:: device_association_number
Device Association Number
**type**\: int
**range:** 0..255
.. attribute:: short_secure_channel_id
Short Secure Channel Id
**type**\: int
**range:** 0..4294967295
.. attribute:: programmed_time
Key Programmed Time
**type**\: str
**length:** 0..30
.. attribute:: key_crc
32bit CRC of Programmed Key
**type**\: str
**pattern:** [0\-9a\-fA\-F]{1,8}
.. attribute:: xpn_salt
XPN Salt
**type**\: list of str
**pattern:** [0\-9a\-fA\-F]{1,8}
"""
_prefix = 'ncs1k-macsec-ea-oper'
_revision = '2015-11-09'
def __init__(self):
super(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.DecryptScStatus.ActiveAssociation, self).__init__()
self.yang_name = "active-association"
self.yang_parent_name = "decrypt-sc-status"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_container_classes = OrderedDict([])
self._child_list_classes = OrderedDict([])
self._leafs = OrderedDict([
('association_number', YLeaf(YType.uint8, 'association-number')),
('device_association_number', YLeaf(YType.uint8, 'device-association-number')),
('short_secure_channel_id', YLeaf(YType.uint32, 'short-secure-channel-id')),
('programmed_time', YLeaf(YType.str, 'programmed-time')),
('key_crc', YLeaf(YType.str, 'key-crc')),
('xpn_salt', YLeafList(YType.str, 'xpn-salt')),
])
self.association_number = None
self.device_association_number = None
self.short_secure_channel_id = None
self.programmed_time = None
self.key_crc = None
self.xpn_salt = []
self._segment_path = lambda: "active-association"
def __setattr__(self, name, value):
self._perform_setattr(Ncs1KMacsecOper.Ncs1KMacsecCtrlrNames.Ncs1KMacsecCtrlrName.Ncs1KStatusInfo.DecryptScStatus.ActiveAssociation, ['association_number', 'device_association_number', 'short_secure_channel_id', 'programmed_time', 'key_crc', 'xpn_salt'], name, value)
def clone_ptr(self):
self._top_entity = Ncs1KMacsecOper()
return self._top_entity
| 45.503215 | 341 | 0.495001 | 2,212 | 28,303 | 6.033906 | 0.084087 | 0.039559 | 0.113284 | 0.12235 | 0.861467 | 0.832322 | 0.799431 | 0.762419 | 0.705702 | 0.689968 | 0 | 0.038919 | 0.417164 | 28,303 | 621 | 342 | 45.57649 | 0.770187 | 0.22192 | 0 | 0.621739 | 0 | 0.008696 | 0.176176 | 0.063067 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069565 | false | 0 | 0.021739 | 0 | 0.156522 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
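# Illustrative read sketch using ydk-py services; the device address and
# credentials are placeholders and the provider arguments may vary by release.
from ydk.services import CRUDService
from ydk.providers import NetconfServiceProvider
from ydk.models.cisco_ios_xr import Cisco_IOS_XR_ncs1k_macsec_ea_oper as ncs1k_macsec

provider = NetconfServiceProvider(address="10.0.0.1", username="admin", password="admin")
crud = CRUDService()

macsec_filter = ncs1k_macsec.Ncs1KMacsecOper()   # top-level container as read filter
macsec = crud.read(provider, macsec_filter)      # returns populated operational data
for ctrlr in macsec.ncs1k_macsec_ctrlr_names.ncs1k_macsec_ctrlr_name:
    print(ctrlr.name, ctrlr.ncs1k_status_info.replay_window_size)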
7784babf7e420c99d61debc3d7788a565f4b44a0 | 25 | py | Python | python/testData/psi/unified/Annotations.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/psi/unified/Annotations.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/psi/unified/Annotations.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | def f(a:1) -> list: pass
| 12.5 | 24 | 0.56 | 6 | 25 | 2.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0.2 | 25 | 1 | 25 | 25 | 0.65 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | false | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
778dcf6273450f635101259e4046333677c8d4b4 | 142 | py | Python | app/admin/__init__.py | saury2013/online_book | 53cf56b6a8e088011224559e90be4d23b2f604f9 | [
"MIT"
] | null | null | null | app/admin/__init__.py | saury2013/online_book | 53cf56b6a8e088011224559e90be4d23b2f604f9 | [
"MIT"
] | 5 | 2021-03-18T20:34:55.000Z | 2022-03-11T23:24:30.000Z | app/admin/__init__.py | saury2013/online_book | 53cf56b6a8e088011224559e90be4d23b2f604f9 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from flask import Blueprint
admin = Blueprint("admin", __name__)
import app.admin.views  # imported after the blueprint is created to avoid a circular import
from app.models import site | 17.75 | 36 | 0.725352 | 20 | 142 | 4.95 | 0.65 | 0.282828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008264 | 0.147887 | 142 | 8 | 37 | 17.75 | 0.809917 | 0.147887 | 0 | 0 | 0 | 0 | 0.041667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
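# Illustrative app-factory sketch (assumed, not part of this package) showing how
# the admin blueprint defined above is attached to the Flask application.
from flask import Flask
from app.admin import admin

def create_app():
    application = Flask(__name__)
    application.register_blueprint(admin, url_prefix="/admin")
    return application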
77cf37bac636c4a8c1a906e92788fe9affb05270 | 4,731 | py | Python | src/transforms.py | aguarda/ADLPCC | 07c2b976a2c0db3de2bce8213551789c600d9f83 | [
"Apache-2.0"
] | 2 | 2021-11-19T17:02:04.000Z | 2021-11-26T22:50:23.000Z | src/transforms.py | aguarda/ADLPCC | 07c2b976a2c0db3de2bce8213551789c600d9f83 | [
"Apache-2.0"
] | 1 | 2021-07-13T21:18:37.000Z | 2021-07-13T23:44:47.000Z | src/transforms.py | aguarda/ADLPCC | 07c2b976a2c0db3de2bce8213551789c600d9f83 | [
"Apache-2.0"
] | 3 | 2021-07-09T02:40:08.000Z | 2022-03-02T03:26:15.000Z | import tensorflow.compat.v1 as tf
import tensorflow_compression as tfc
class AnalysisTransform(tf.keras.layers.Layer):
"""The analysis transform."""
def __init__(self, num_filters, *args, **kwargs):
self.num_filters = num_filters
super(AnalysisTransform, self).__init__(*args, **kwargs)
def build(self, input_shape):
self._layers = [
tfc.SignalConv3D(
self.num_filters, (5, 5, 5), name="layer_0", corr=True, strides_down=2,
padding="same_reflect", use_bias=True,
activation=tf.sigmoid),
tfc.SignalConv3D(
self.num_filters, (5, 5, 5), name="layer_1", corr=True, strides_down=2,
padding="same_reflect", use_bias=True,
activation=tf.sigmoid),
tfc.SignalConv3D(
self.num_filters, (5, 5, 5), name="layer_2", corr=True, strides_down=2,
padding="same_reflect", use_bias=False,
activation=None),
]
super(AnalysisTransform, self).build(input_shape)
def call(self, tensor):
for layer in self._layers:
tensor = layer(tensor)
return tensor
class SynthesisTransform(tf.keras.layers.Layer):
"""The synthesis transform."""
def __init__(self, num_filters, *args, **kwargs):
self.num_filters = num_filters
super(SynthesisTransform, self).__init__(*args, **kwargs)
def build(self, input_shape):
self._layers = [
tfc.SignalConv3D(
self.num_filters, (5, 5, 5), name="layer_0", corr=False, strides_up=2,
padding="same_reflect", use_bias=True,
activation=tf.sigmoid),
tfc.SignalConv3D(
self.num_filters, (5, 5, 5), name="layer_1", corr=False, strides_up=2,
padding="same_reflect", use_bias=True,
activation=tf.sigmoid),
tfc.SignalConv3D(
1, (5, 5, 5), name="layer_2", corr=False, strides_up=2,
padding="same_reflect", use_bias=True,
activation=tf.sigmoid),
]
super(SynthesisTransform, self).build(input_shape)
def call(self, tensor):
for layer in self._layers:
tensor = layer(tensor)
return tensor
class HyperAnalysisTransform(tf.keras.layers.Layer):
"""The analysis transform for the entropy model parameters."""
def __init__(self, num_filters, *args, **kwargs):
self.num_filters = num_filters
super(HyperAnalysisTransform, self).__init__(*args, **kwargs)
def build(self, input_shape):
self._layers = [
tfc.SignalConv3D(
self.num_filters, (3, 3, 3), name="layer_0", corr=True, strides_down=1,
padding="same_zeros", use_bias=True,
activation=tf.nn.relu),
tfc.SignalConv3D(
self.num_filters, (3, 3, 3), name="layer_1", corr=True, strides_down=2,
padding="same_zeros", use_bias=True,
activation=tf.nn.relu),
tfc.SignalConv3D(
self.num_filters, (3, 3, 3), name="layer_2", corr=True, strides_down=2,
padding="same_zeros", use_bias=False,
activation=None),
]
super(HyperAnalysisTransform, self).build(input_shape)
def call(self, tensor):
for layer in self._layers:
tensor = layer(tensor)
return tensor
class HyperSynthesisTransform(tf.keras.layers.Layer):
"""The synthesis transform for the entropy model parameters."""
def __init__(self, num_filters, *args, **kwargs):
self.num_filters = num_filters
super(HyperSynthesisTransform, self).__init__(*args, **kwargs)
def build(self, input_shape):
self._layers = [
tfc.SignalConv3D(
self.num_filters, (3, 3, 3), name="layer_0", corr=False, strides_up=2,
padding="same_zeros", use_bias=True, kernel_parameterizer=None,
activation=tf.nn.relu),
tfc.SignalConv3D(
self.num_filters, (3, 3, 3), name="layer_1", corr=False, strides_up=2,
padding="same_zeros", use_bias=True, kernel_parameterizer=None,
activation=tf.nn.relu),
tfc.SignalConv3D(
self.num_filters, (3, 3, 3), name="layer_2", corr=False, strides_up=1,
padding="same_zeros", use_bias=True, kernel_parameterizer=None,
activation=None),
]
super(HyperSynthesisTransform, self).build(input_shape)
def call(self, tensor):
for layer in self._layers:
tensor = layer(tensor)
return tensor
| 38.463415 | 87 | 0.590573 | 545 | 4,731 | 4.900917 | 0.119266 | 0.08611 | 0.099588 | 0.090603 | 0.880195 | 0.880195 | 0.862224 | 0.801572 | 0.801572 | 0.801572 | 0 | 0.022063 | 0.291059 | 4,731 | 122 | 88 | 38.778689 | 0.774299 | 0.034454 | 0 | 0.673469 | 0 | 0 | 0.047504 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.122449 | false | 0 | 0.020408 | 0 | 0.22449 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
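# Quick shape-check sketch (illustration only; assumes a 64^3 single-channel voxel
# block, 32 filters, and that this module is importable as `transforms`).
import tensorflow.compat.v1 as tf
from transforms import AnalysisTransform, SynthesisTransform  # assumed module name

def _demo_shapes():
    x = tf.random.uniform([1, 64, 64, 64, 1])   # [batch, depth, height, width, channels]
    y = AnalysisTransform(32)(x)                 # three stride-2 convs -> [1, 8, 8, 8, 32]
    x_hat = SynthesisTransform(32)(y)            # upsamples back to the input resolution
    return y.shape, x_hat.shape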
77d47182ff1318fdc069605c01addd7f86127dc9 | 225 | py | Python | mrl/g_models/all.py | DarkMatterAI/mrl | e000c3570d4461c3054c882697cce55217ede552 | [
"MIT"
] | 4 | 2021-11-16T09:29:55.000Z | 2021-12-27T17:55:32.000Z | mrl/g_models/all.py | DarkMatterAI/mrl | e000c3570d4461c3054c882697cce55217ede552 | [
"MIT"
] | null | null | null | mrl/g_models/all.py | DarkMatterAI/mrl | e000c3570d4461c3054c882697cce55217ede552 | [
"MIT"
] | 3 | 2021-11-16T09:41:41.000Z | 2021-12-27T17:55:33.000Z | from ..imports import *
from ..core import *
from ..torch_imports import *
from ..torch_core import *
from ..layers import *
from ..dataloaders import *
from .generative_base import *
from .lstm_lm import *
from .vae import * | 25 | 30 | 0.737778 | 31 | 225 | 5.225806 | 0.387097 | 0.493827 | 0.209877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 225 | 9 | 31 | 25 | 0.852632 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
77e78d05f90f90a6cf27916406d8bd0926127e49 | 6,204 | py | Python | forecaster/models_baseline.py | weanl/myJan | 063e1e5f279cc6e7b70cee85b8f1004c4a5e6506 | [
"MIT"
] | 2 | 2019-02-24T16:46:45.000Z | 2020-02-21T07:39:35.000Z | forecaster/models_baseline.py | weanl/myJan | 063e1e5f279cc6e7b70cee85b8f1004c4a5e6506 | [
"MIT"
] | null | null | null | forecaster/models_baseline.py | weanl/myJan | 063e1e5f279cc6e7b70cee85b8f1004c4a5e6506 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Jan 9 21:55:18 2019
@author: weanl
"""
'''
0. averModelScore(DBID='210100063')
average forecast
1. arModelScore(DBID='210100063')
auto regression
2. ardwtModelScore(DBID='210100063')
auto regression with DWT
3. rnndwtModelScore(DBID='210100063')
rnn with DWT
4. ourdwtModelScore(DBID='210100063')
MDWT-based Seq2Seq
*. arimaTrain()
auto regression integrated moving average
8.
'''
'''
'''
import numpy as np
from sklearn import linear_model
from models import splitIndex
'''
'''
FileDir = '../given_data_1128/'
LookBack = int(128)
ForecastStep = 10
'''
0. averModelScore(DBID='210100063')
'''
def averModelScore(DBID='210100063'):
print('---> call ---> averModelScore')
# 1 load data
file_path = FileDir + DBID + '/' + 'SimpleScoreData.npz'
SimpleScoreData = np.load(file_path)
agg_scorex_seqs = SimpleScoreData['agg_scorex_seqs']
agg_scorey_seqs = SimpleScoreData['agg_scorey_seqs']
score_max = SimpleScoreData['score_max']
score_min = SimpleScoreData['score_min']
instance_num = agg_scorex_seqs.shape[0]
# 2 split
trainIndex, testIndex = splitIndex(instance_num, 0.9)
x_train = agg_scorex_seqs[trainIndex]
y_train = agg_scorey_seqs[trainIndex]
x_test = agg_scorex_seqs[testIndex]
y_test = agg_scorey_seqs[testIndex]
print('x_train.shape = ', x_train.shape)
print('y_train.shape = ', y_train.shape)
print('x_test.shape = ', x_test.shape)
print('y_test.shape = ', y_test.shape)
train_num = x_train.shape[0]
test_num = x_test.shape[0]
# 3 model and predict
# no model for average forecast
inputs_test = x_test
y_test_pred = []
inputs_train = x_train
y_train_pred = []
for step in range(ForecastStep):
print('step = ', step, '\tinputs_test.shape=',inputs_test.shape,
'\tinputs_train.shape=', inputs_train.shape)
x_test_its = inputs_test[:, step:]
y_test_pred_its = np.mean(x_test_its, axis=1).reshape(test_num, 1)
y_test_pred.append(y_test_pred_its)
inputs_test = np.concatenate([inputs_test, y_test_pred_its], axis=1)
x_train_its = inputs_train[:, step:]
y_train_pred_its = np.mean(x_train_its, axis=1).reshape(train_num, 1)
y_train_pred.append(y_train_pred_its)
inputs_train = np.concatenate([inputs_train, y_train_pred_its], axis=1)
y_test_pred = np.concatenate(y_test_pred, axis=1)
y_train_pred = np.concatenate(y_train_pred, axis=1)
'''
y_test_pred = arModel.predict(X=x_test)
y_train_pred = arModel.predict(X=x_train)
'''
# 4 save as .npz file
y_test_pred = (y_test_pred * (score_max - score_min) + score_max + score_min) / 2
y_test = (y_test * (score_max - score_min) + score_max + score_min) / 2
y_train_pred = (y_train_pred * (score_max - score_min) + score_max + score_min) / 2
y_train = (y_train * (score_max - score_min) + score_max + score_min) / 2
SavePath = FileDir + DBID + '/' + 'score_forecast_averModelScore'
np.savez_compressed(SavePath, y_test_pred=y_test_pred, y_test=y_test,
y_train_pred=y_train_pred, y_train=y_train)
print('---> return from ---> averModelScore')
return 0
'''
1. arModelScore(DBID='210100063')
'''
def arModelScore(DBID='210100063'):
print('---> call ---> arModelScore')
# 1 load data
file_path = FileDir + DBID + '/' + 'SimpleScoreData.npz'
SimpleScoreData = np.load(file_path)
agg_scorex_seqs = SimpleScoreData['agg_scorex_seqs']
agg_scorey_seqs = SimpleScoreData['agg_scorey_seqs']
score_max = SimpleScoreData['score_max']
score_min = SimpleScoreData['score_min']
instance_num = agg_scorex_seqs.shape[0]
# 2 split
trainIndex, testIndex = splitIndex(instance_num, 0.9)
x_train = agg_scorex_seqs[trainIndex]
y_train = agg_scorey_seqs[trainIndex]
x_test = agg_scorex_seqs[testIndex]
y_test = agg_scorey_seqs[testIndex]
print('x_train.shape = ', x_train.shape)
print('y_train.shape = ', y_train.shape)
print('x_test.shape = ', x_test.shape)
print('y_test.shape = ', y_test.shape)
# 3 model and predict
arModel = linear_model.LinearRegression()
arModel.fit(X=x_train, y=y_train[:, :1])
arModel_coef = arModel.coef_
print('arModel_coef.shape = ', arModel_coef.shape)
print('arModel_coef: \n', arModel_coef)
inputs_test = x_test
y_test_pred = []
inputs_train = x_train
y_train_pred = []
for step in range(ForecastStep):
print('step = ', step, '\tinputs_test.shape=',inputs_test.shape,
'\tinputs_train.shape=', inputs_train.shape)
x_test_its = inputs_test[:, step:]
y_test_pred_its = arModel.predict(X=x_test_its)
y_test_pred.append(y_test_pred_its)
inputs_test = np.concatenate([inputs_test, y_test_pred_its], axis=1)
x_train_its = inputs_train[:, step:]
y_train_pred_its = arModel.predict(X=x_train_its)
y_train_pred.append(y_train_pred_its)
inputs_train = np.concatenate([inputs_train, y_train_pred_its], axis=1)
y_test_pred = np.concatenate(y_test_pred, axis=1)
y_train_pred = np.concatenate(y_train_pred, axis=1)
'''
y_test_pred = arModel.predict(X=x_test)
y_train_pred = arModel.predict(X=x_train)
'''
# 4 save as .npz file
y_test_pred = (y_test_pred * (score_max - score_min) + score_max + score_min) / 2
y_test = (y_test * (score_max - score_min) + score_max + score_min) / 2
y_train_pred = (y_train_pred * (score_max - score_min) + score_max + score_min) / 2
y_train = (y_train * (score_max - score_min) + score_max + score_min) / 2
SavePath = FileDir + DBID + '/' + 'score_forecast_arModelScore'
np.savez_compressed(SavePath, y_test_pred=y_test_pred, y_test=y_test, y_train_pred=y_train_pred, y_train=y_train, arModel_coef=arModel_coef)
print('---> return from ---> arModelScore')
return 0
if __name__ == '__main__':
# 0
averModelScore(DBID='210100063')
# 1
#arModelScore(DBID='210100063')
# 2
# END OF FILE
| 27.945946 | 144 | 0.672953 | 877 | 6,204 | 4.402509 | 0.13455 | 0.060606 | 0.055944 | 0.074592 | 0.729604 | 0.709661 | 0.701373 | 0.701373 | 0.701373 | 0.701373 | 0 | 0.034811 | 0.203578 | 6,204 | 221 | 145 | 28.072398 | 0.74661 | 0.047066 | 0 | 0.705882 | 0 | 0 | 0.122358 | 0.019003 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019608 | false | 0 | 0.029412 | 0 | 0.068627 | 0.156863 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
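# Illustrative follow-up (not in the original script): load the forecasts that
# averModelScore / arModelScore save and report the test-set MAE. The directory
# layout mirrors FileDir above and is an assumption.
import numpy as np

def report_mae(dbid='210100063', model='arModelScore', file_dir='../given_data_1128/'):
    data = np.load(file_dir + dbid + '/score_forecast_' + model + '.npz')
    mae = np.mean(np.abs(data['y_test_pred'] - data['y_test']))
    print(model, 'test MAE =', mae)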
77eef3077b935882e0cdfea7b11e15dbd9ab071d | 4,495 | py | Python | tests/clean/auth/test_perm_inspector.py | bahnlink/pyclean | 558d75341082472606788e088809831f6ea543c0 | [
"MIT"
] | null | null | null | tests/clean/auth/test_perm_inspector.py | bahnlink/pyclean | 558d75341082472606788e088809831f6ea543c0 | [
"MIT"
] | 2 | 2021-03-25T21:49:39.000Z | 2021-06-01T22:12:00.000Z | tests/clean/auth/test_perm_inspector.py | bahnlink/pyclean | 558d75341082472606788e088809831f6ea543c0 | [
"MIT"
] | 1 | 2018-06-07T17:31:56.000Z | 2018-06-07T17:31:56.000Z | from clean.entities.token import UserToken
from clean.auth.perm_inspector import PermissionInspector
def test_not_allow_if_user_is_not_set():
ut = UserToken(username='crl', email='', photo_url='',
app_meta={}, user_meta={}, admin=False, staff=False, scopes={}).to_dict()
pv = PermissionInspector(user={}, rule={}, allow_super=False)
assert pv.verify(ut) is False
def test_allow_just_with_token():
ut = UserToken(username='crl', email='', photo_url='',
app_meta={}, user_meta={}, admin=False, staff=False, scopes={}).to_dict()
pv = PermissionInspector(user={'req': 'user'}, rule={}, allow_super=False)
assert pv.verify(ut) is True
def test_admin_can_do_anything():
ut = UserToken(username='crl', email='', photo_url='',
app_meta={}, user_meta={}, admin=True, staff=False, scopes={}).to_dict()
pv = PermissionInspector(user={'req': 'staff'},
rule={'path': 'scopes.users.actions', 'op': 'in', 'value': 'w'},
allow_super=True)
assert pv.verify(ut) is True
def test_admin_cannot_do_anything_if_is_not_allow():
scopes = {
'users': {
'actions': ['w', 'r', 'd', 'u']
}
}
ut = UserToken(username='crl', email='', photo_url='',
app_meta={}, user_meta={}, admin=True, staff=False, scopes=scopes).to_dict()
pv = PermissionInspector(user={'req': 'staff'},
rule={'path': 'scopes.users.actions', 'op': 'in', 'value': 'w'},
allow_super=False)
assert pv.verify(ut) is False
def test_staff_need_permissions():
ut = UserToken(username='crl', email='', photo_url='',
app_meta={}, user_meta={}, admin=False, staff=True, scopes={}).to_dict()
pv = PermissionInspector(user={'req': 'staff'},
rule={'path': 'scopes.users.actions', 'op': 'in', 'value': 'w'},
allow_super=False)
assert pv.verify(ut) is False
def test_staff_have_permissions():
ut = UserToken(username='crl', email='', photo_url='',
app_meta={}, user_meta={}, admin=False, staff=True,
scopes={'users': {'actions': ['w']}}).to_dict()
pv = PermissionInspector(user={'req': 'staff'},
rule={'path': 'scopes.users.actions', 'op': 'in', 'value': 'w'},
allow_super=False)
assert pv.verify(ut) is True
def test_even_admin_needs_scopes_if_allow_super_is_false():
ut = UserToken(username='crl', email='', photo_url='',
app_meta={}, user_meta={}, admin=True, staff=True,
scopes={'users': {'actions': ['w']}}).to_dict()
pv = PermissionInspector(user={'req': 'staff'},
rule={'path': 'scopes.users.actions', 'op': 'in', 'value': 'w'},
allow_super=False)
assert pv.verify(ut) is False
def test_only_match_allow_admin():
ut = UserToken(username='crl', email='', photo_url='',
app_meta={}, user_meta={}, admin=True, staff=True,
scopes={'users': {'actions': ['w']}}).to_dict()
pv = PermissionInspector(user={'req': 'staff', 'match': 'min'},
rule={'path': 'scopes.users.actions', 'op': 'in', 'value': 'w'},
allow_super=False)
assert pv.verify(ut) is False
def test_get_permissions_admin():
ut = UserToken(username='crl', email='', admin=True, staff=True, special=1).to_dict()
pv = PermissionInspector()
assert pv.get_user_type(ut) == 'admin'
def test_get_permissions_staff():
ut = UserToken(username='crl', email='', admin=False, staff=True, special=1).to_dict()
pv = PermissionInspector()
assert pv.get_user_type(ut) == 'staff'
def test_get_permissions_special():
ut = UserToken(username='crl', email='', admin=False, staff=False, special=1).to_dict()
pv = PermissionInspector()
assert pv.get_user_type(ut) == 1
def test_get_permissions_user():
ut = UserToken(username='crl', email='', admin=False, staff=False, special=0).to_dict()
pv = PermissionInspector()
assert pv.get_user_type(ut) == 'user'
def test_get_permissions_user_2():
ut = UserToken(username='crl', email='', admin=False, staff=False, special='').to_dict()
pv = PermissionInspector()
assert pv.get_user_type(ut) == 'user'
| 37.14876 | 95 | 0.580645 | 535 | 4,495 | 4.665421 | 0.128972 | 0.036458 | 0.098958 | 0.114583 | 0.857372 | 0.83734 | 0.820513 | 0.820513 | 0.805689 | 0.788862 | 0 | 0.001767 | 0.244716 | 4,495 | 120 | 96 | 37.458333 | 0.733432 | 0 | 0 | 0.560976 | 0 | 0 | 0.084316 | 0 | 0 | 0 | 0 | 0 | 0.158537 | 1 | 0.158537 | false | 0 | 0.02439 | 0 | 0.182927 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
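# Illustrative guard sketch built from the behaviour asserted above; the endpoint
# wrapper and the error type are assumptions for demonstration only.
from clean.auth.perm_inspector import PermissionInspector

can_write_users = PermissionInspector(
    user={'req': 'staff'},
    rule={'path': 'scopes.users.actions', 'op': 'in', 'value': 'w'},
    allow_super=True,
)

def create_user(token_dict, payload):
    if not can_write_users.verify(token_dict):
        raise PermissionError('write access to users required')
    return payload  # ... continue with the actual use case ...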
7ac5aac3850296889c92e30e626e56e764660540 | 33 | py | Python | backend/server_delta/server_delta_app/managers/patrimony/__init__.py | dalmarcogd/challenge_ms | 761f0a588b4c309cf6e226d306df3609c1179b4c | [
"MIT"
] | null | null | null | backend/server_delta/server_delta_app/managers/patrimony/__init__.py | dalmarcogd/challenge_ms | 761f0a588b4c309cf6e226d306df3609c1179b4c | [
"MIT"
] | 13 | 2020-06-05T18:26:43.000Z | 2021-06-10T20:36:13.000Z | backend/server_delta/server_delta_app/managers/patrimony/__init__.py | dalmarcogd/challenge_ms | 761f0a588b4c309cf6e226d306df3609c1179b4c | [
"MIT"
] | null | null | null | from .patrimony_manager import *
| 16.5 | 32 | 0.818182 | 4 | 33 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7ac849ffd4989a1cc5ddf7652e9b65313d01544d | 83 | py | Python | users/templates/users/__init__.py | XinZhewu/learning_log | 7c42b5aaaad22c8cde59493599dd6484df9950e2 | [
"MIT"
] | 1 | 2019-07-04T09:52:34.000Z | 2019-07-04T09:52:34.000Z | users/templates/users/__init__.py | XinZhewu/learning_log | 7c42b5aaaad22c8cde59493599dd6484df9950e2 | [
"MIT"
] | null | null | null | users/templates/users/__init__.py | XinZhewu/learning_log | 7c42b5aaaad22c8cde59493599dd6484df9950e2 | [
"MIT"
] | null | null | null | # coding=utf-8
# @Time : 2018/3/5 22:36
# @Author : XinZhewu_568580410@qq.com
| 16.6 | 38 | 0.638554 | 14 | 83 | 3.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.298507 | 0.192771 | 83 | 4 | 39 | 20.75 | 0.477612 | 0.903614 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7adeb987d070e20fbe9d4cb3395bb4a94acdaf6d | 71 | py | Python | Python studying/Codes of examples/2.4.1-while3.py | BoyangSheng/Skill-studing | 974c37365fff72e2c7b1e27ae52cb267c7070c9e | [
"Apache-2.0"
] | 1 | 2020-12-09T07:58:01.000Z | 2020-12-09T07:58:01.000Z | Python studying/Codes of examples/2.4.1-while3.py | BoyangSheng/Skill-studying | 974c37365fff72e2c7b1e27ae52cb267c7070c9e | [
"Apache-2.0"
] | null | null | null | Python studying/Codes of examples/2.4.1-while3.py | BoyangSheng/Skill-studying | 974c37365fff72e2c7b1e27ae52cb267c7070c9e | [
"Apache-2.0"
] | null | null | null | x = 1
y = 1
while y <= 10000:
y = y * (x + 1)
x += 1
print(x) | 11.833333 | 21 | 0.380282 | 15 | 71 | 1.8 | 0.4 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219512 | 0.422535 | 71 | 6 | 22 | 11.833333 | 0.439024 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
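# Equivalent sketch (illustration only): the loop above keeps multiplying y by the
# next integer, so it finds the smallest n with n! > 10000 and prints 8.
import math
from itertools import count

print(next(n for n in count(1) if math.factorial(n) > 10000))  # 8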
7ae64ae58cc99f5069a382781141e1f78ece9147 | 29 | py | Python | activeteacher/modeling/proposal_generator/__init__.py | HunterJ-Lin/ActiveTeacher | 7e15c5fa118a0b5779f44811be2be41fe2f9d169 | [
"Apache-2.0"
] | 1 | 2022-03-09T12:36:58.000Z | 2022-03-09T12:36:58.000Z | activeteacher/modeling/proposal_generator/__init__.py | HunterJ-Lin/ActiveTeacher | 7e15c5fa118a0b5779f44811be2be41fe2f9d169 | [
"Apache-2.0"
] | null | null | null | activeteacher/modeling/proposal_generator/__init__.py | HunterJ-Lin/ActiveTeacher | 7e15c5fa118a0b5779f44811be2be41fe2f9d169 | [
"Apache-2.0"
] | null | null | null | from .rpn import PseudoLabRPN | 29 | 29 | 0.862069 | 4 | 29 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bb2a8f6e85681bdc39c00d4b674d7879dd3cfa32 | 40 | py | Python | torchcam/__init__.py | Tramac/pytorch-cam | 39c35e39d33c45255dbfdd025eea5476a6a5ff55 | [
"Apache-2.0"
] | 29 | 2019-06-19T12:54:03.000Z | 2022-03-06T17:04:38.000Z | torchcam/__init__.py | Tramac/pytorch-cam | 39c35e39d33c45255dbfdd025eea5476a6a5ff55 | [
"Apache-2.0"
] | 1 | 2020-11-05T06:42:14.000Z | 2020-11-05T06:49:52.000Z | torchcam/__init__.py | Tramac/pytorch-cam | 39c35e39d33c45255dbfdd025eea5476a6a5ff55 | [
"Apache-2.0"
] | 6 | 2019-08-28T21:45:54.000Z | 2021-07-12T15:21:39.000Z | from .helper import *
from .cam import * | 20 | 21 | 0.725 | 6 | 40 | 4.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175 | 40 | 2 | 22 | 20 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7047f7d28648711f25651a58fd19eaed119082e9 | 254 | py | Python | code/rds_config.py | aws-samples/amazon-rds-proxy-demo | 322d13c8d2419c2441813de4ef18f07bc56d2305 | [
"MIT-0"
] | 6 | 2021-01-06T15:07:25.000Z | 2022-03-23T07:12:52.000Z | code/rds_config.py | aws-samples/amazon-rds-proxy-demo | 322d13c8d2419c2441813de4ef18f07bc56d2305 | [
"MIT-0"
] | null | null | null | code/rds_config.py | aws-samples/amazon-rds-proxy-demo | 322d13c8d2419c2441813de4ef18f07bc56d2305 | [
"MIT-0"
] | null | null | null | #rds_host = "rds-auroramysql-demodb.cluster-c9di9qmu8xqw.us-east-1.rds.amazonaws.com"
rds_host = "rds-proxy-demo.proxy-c9di9qmu8xqw.us-east-1.rds.amazonaws.com"
db_username = "<replace-with-your-db-user>"
db_password = "<replace-with-your-db-password>"
| 50.8 | 86 | 0.771654 | 40 | 254 | 4.8 | 0.5 | 0.072917 | 0.104167 | 0.197917 | 0.354167 | 0.354167 | 0.354167 | 0 | 0 | 0 | 0 | 0.033195 | 0.051181 | 254 | 4 | 87 | 63.5 | 0.763485 | 0.334646 | 0 | 0 | 0 | 0.333333 | 0.708333 | 0.708333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
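# Safer variant (sketch): read the same settings from environment variables so
# credentials are never committed; the variable names here are assumptions.
import os

rds_host = os.environ.get("RDS_PROXY_ENDPOINT",
                          "rds-proxy-demo.proxy-c9di9qmu8xqw.us-east-1.rds.amazonaws.com")
db_username = os.environ["DB_USERNAME"]
db_password = os.environ["DB_PASSWORD"]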
70861ab32a27e245ecf19efecf712565b6638923 | 20,957 | py | Python | nonebot/adapters/qqguild/api/handle.py | nonebot/adapter-qqguild | a3e4d353bfdaafb296743bc0f15ed5d643c64d85 | [
"MIT"
] | 39 | 2021-12-23T14:26:41.000Z | 2022-03-22T14:11:19.000Z | nonebot/adapters/qqguild/api/handle.py | nonebot/adapter-qqguild | a3e4d353bfdaafb296743bc0f15ed5d643c64d85 | [
"MIT"
] | 4 | 2022-01-22T17:59:50.000Z | 2022-03-22T12:40:10.000Z | nonebot/adapters/qqguild/api/handle.py | nonebot/adapter-qqguild | a3e4d353bfdaafb296743bc0f15ed5d643c64d85 | [
"MIT"
] | 2 | 2022-01-16T02:38:51.000Z | 2022-03-01T15:48:36.000Z | from datetime import date, datetime
from typing import TYPE_CHECKING, List, Optional
from pydantic import parse_obj_as
from nonebot.drivers import Request
from .model import *
from .request import _request, _exclude_none
if TYPE_CHECKING:
from nonebot.adapters.qqguild.bot import Bot
from nonebot.adapters.qqguild.adapter import Adapter
async def _get_guild(adapter: "Adapter", bot: "Bot", guild_id: int) -> Guild:
request = Request(
"GET",
adapter.get_api_base() / f"guilds/{guild_id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(Guild, await _request(adapter, bot, request))
async def _me(adapter: "Adapter", bot: "Bot") -> User:
request = Request(
"GET",
adapter.get_api_base() / f"users/@me",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(User, await _request(adapter, bot, request))
async def _guilds(
adapter: "Adapter",
bot: "Bot",
before: Optional[str] = ...,
after: Optional[str] = ...,
limit: Optional[float] = ...,
) -> List[Guild]:
request = Request(
"GET",
adapter.get_api_base() / f"users/@me/guilds",
params=_exclude_none({"before": before, "after": after, "limit": limit}),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(List[Guild], await _request(adapter, bot, request))
async def _get_channels(adapter: "Adapter", bot: "Bot", guild_id: int) -> List[Channel]:
request = Request(
"GET",
adapter.get_api_base() / f"guilds/{guild_id}/channels",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(List[Channel], await _request(adapter, bot, request))
async def _post_channels(
adapter: "Adapter", bot: "Bot", guild_id: int, **data
) -> List[Channel]:
request = Request(
"POST",
adapter.get_api_base() / f"guilds/{guild_id}/channels",
json=ChannelCreate(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(List[Channel], await _request(adapter, bot, request))
async def _get_channel(adapter: "Adapter", bot: "Bot", channel_id: int) -> Channel:
request = Request(
"GET",
adapter.get_api_base() / f"channels/{channel_id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(Channel, await _request(adapter, bot, request))
async def _patch_channel(
adapter: "Adapter", bot: "Bot", channel_id: int, **data
) -> Channel:
request = Request(
"PATCH",
adapter.get_api_base() / f"channels/{channel_id}",
json=ChannelUpdate(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(Channel, await _request(adapter, bot, request))
async def _delete_channel(adapter: "Adapter", bot: "Bot", channel_id: int) -> None:
request = Request(
"DELETE",
adapter.get_api_base() / f"channels/{channel_id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _get_members(
adapter: "Adapter",
bot: "Bot",
guild_id: int,
after: Optional[str] = ...,
limit: Optional[float] = ...,
) -> List[Member]:
request = Request(
"GET",
adapter.get_api_base() / f"guilds/{guild_id}/members",
params=_exclude_none({"after": after, "limit": limit}),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(List[Member], await _request(adapter, bot, request))
async def _get_member(
adapter: "Adapter", bot: "Bot", guild_id: int, user_id: int
) -> Member:
request = Request(
"GET",
adapter.get_api_base() / f"guilds/{guild_id}/members/{user_id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(Member, await _request(adapter, bot, request))
async def _delete_member(
adapter: "Adapter", bot: "Bot", guild_id: int, user_id: int, **data
) -> None:
request = Request(
"DELETE",
adapter.get_api_base() / f"guilds/{guild_id}/members/{user_id}",
json=DeleteMemberBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _get_guild_roles(
adapter: "Adapter", bot: "Bot", guild_id: int
) -> GetGuildRolesReturn:
request = Request(
"GET",
adapter.get_api_base() / f"guilds/{guild_id}/roles",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(GetGuildRolesReturn, await _request(adapter, bot, request))
async def _post_guild_role(
adapter: "Adapter", bot: "Bot", guild_id: int, **data
) -> PostGuildRoleReturn:
request = Request(
"POST",
adapter.get_api_base() / f"guilds/{guild_id}/roles",
json=PostGuildRoleBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(PostGuildRoleReturn, await _request(adapter, bot, request))
async def _patch_guild_role(
adapter: "Adapter", bot: "Bot", guild_id: int, role_id: int, **data
) -> PatchGuildRoleReturn:
request = Request(
"PATCH",
adapter.get_api_base() / f"guilds/{guild_id}/roles/{role_id}",
json=PatchGuildRoleBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(PatchGuildRoleReturn, await _request(adapter, bot, request))
async def _delete_guild_role(
adapter: "Adapter", bot: "Bot", guild_id: int, role_id: int
) -> None:
request = Request(
"DELETE",
adapter.get_api_base() / f"guilds/{guild_id}/roles/{role_id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _put_guild_member_role(
adapter: "Adapter", bot: "Bot", guild_id: int, role_id: int, user_id: int, **data
) -> None:
request = Request(
"PUT",
adapter.get_api_base() / f"guilds/{guild_id}/members/{user_id}/roles/{role_id}",
json=PutGuildMemberRoleBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _delete_guild_member_role(
adapter: "Adapter", bot: "Bot", guild_id: int, role_id: int, user_id: int, **data
) -> None:
request = Request(
"DELETE",
adapter.get_api_base() / f"guilds/{guild_id}/members/{user_id}/roles/{role_id}",
json=DeleteGuildMemberRoleBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _get_channel_permissions(
adapter: "Adapter", bot: "Bot", channel_id: int, user_id: int
) -> ChannelPermissions:
request = Request(
"GET",
adapter.get_api_base() / f"channels/{channel_id}/members/{user_id}/permissions",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(ChannelPermissions, await _request(adapter, bot, request))
async def _put_channel_permissions(
adapter: "Adapter", bot: "Bot", channel_id: int, user_id: int, **data
) -> None:
request = Request(
"PUT",
adapter.get_api_base() / f"channels/{channel_id}/members/{user_id}/permissions",
json=PutChannelPermissionsBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _get_channel_roles_permissions(
adapter: "Adapter", bot: "Bot", channel_id: int, role_id: int
) -> ChannelPermissions:
request = Request(
"GET",
adapter.get_api_base() / f"channels/{channel_id}/roles/{role_id}/permissions",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(ChannelPermissions, await _request(adapter, bot, request))
async def _put_channel_roles_permissions(
adapter: "Adapter", bot: "Bot", channel_id: int, role_id: int, **data
) -> None:
request = Request(
"PUT",
adapter.get_api_base() / f"channels/{channel_id}/roles/{role_id}/permissions",
json=PutChannelRolesPermissionsBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _get_message_of_id(
adapter: "Adapter", bot: "Bot", channel_id: int, message_id: str
) -> Message:
request = Request(
"GET",
adapter.get_api_base() / f"channels/{channel_id}/messages/{message_id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(Message, await _request(adapter, bot, request))
async def _post_messages(
adapter: "Adapter", bot: "Bot", channel_id: int, **data
) -> Message:
request = Request(
"POST",
adapter.get_api_base() / f"channels/{channel_id}/messages",
json=MessageSend(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(Message, await _request(adapter, bot, request))
async def _post_dms(adapter: "Adapter", bot: "Bot", **data) -> List[DMS]:
request = Request(
"POST",
adapter.get_api_base() / f"users/@me/dms",
json=PostDmsBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(List[DMS], await _request(adapter, bot, request))
async def _post_dms_messages(
adapter: "Adapter", bot: "Bot", guild_id: int, **data
) -> List[Message]:
request = Request(
"POST",
adapter.get_api_base() / f"dms/{guild_id}/messages",
json=MessageSend(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(List[Message], await _request(adapter, bot, request))
async def _patch_guild_mute(
adapter: "Adapter", bot: "Bot", guild_id: int, **data
) -> None:
request = Request(
"PATCH",
adapter.get_api_base() / f"guilds/{guild_id}/mute",
json=PatchGuildMuteBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _patch_guild_member_mute(
adapter: "Adapter", bot: "Bot", guild_id: int, user_id: int, **data
) -> None:
request = Request(
"PATCH",
adapter.get_api_base() / f"guilds/{guild_id}/members/{user_id}/mute",
json=PatchGuildMemberMuteBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _post_guild_announces(
adapter: "Adapter", bot: "Bot", guild_id: int, **data
) -> None:
request = Request(
"POST",
adapter.get_api_base() / f"guilds/{guild_id}/announces",
json=PostGuildAnnouncesBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _delete_guild_announces(
adapter: "Adapter", bot: "Bot", guild_id: int, message_id: str
) -> None:
request = Request(
"DELETE",
adapter.get_api_base() / f"guilds/{guild_id}/announces/{message_id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _post_channel_announces(
adapter: "Adapter", bot: "Bot", channel_id: int, **data
) -> Announces:
request = Request(
"POST",
adapter.get_api_base() / f"channels/{channel_id}/announces",
json=PostChannelAnnouncesBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(Announces, await _request(adapter, bot, request))
async def _delete_channel_announces(
adapter: "Adapter", bot: "Bot", channel_id: int, message_id: str
) -> None:
request = Request(
"DELETE",
adapter.get_api_base() / f"channels/{channel_id}/announces/{message_id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _get_schedules(
adapter: "Adapter", bot: "Bot", channel_id: int, **data
) -> List[Schedule]:
request = Request(
"GET",
adapter.get_api_base() / f"channels/{channel_id}/schedules",
json=GetSchedulesBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(List[Schedule], await _request(adapter, bot, request))
async def _post_schedule(
adapter: "Adapter", bot: "Bot", channel_id: int, **data
) -> Schedule:
request = Request(
"POST",
adapter.get_api_base() / f"channels/{channel_id}/schedules",
json=ScheduleCreate(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(Schedule, await _request(adapter, bot, request))
async def _get_schedule(
adapter: "Adapter", bot: "Bot", channel_id: int, schedule_id: int
) -> Schedule:
request = Request(
"GET",
adapter.get_api_base() / f"channels/{channel_id}/schedules/{schedule_id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(Schedule, await _request(adapter, bot, request))
async def _patch_schedule(
adapter: "Adapter", bot: "Bot", channel_id: int, schedule_id: int, **data
) -> Schedule:
request = Request(
"PATCH",
adapter.get_api_base() / f"channels/{channel_id}/schedules/{schedule_id}",
json=ScheduleUpdate(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(Schedule, await _request(adapter, bot, request))
async def _delete_schedule(
adapter: "Adapter", bot: "Bot", channel_id: int, schedule_id: int
) -> None:
request = Request(
"DELETE",
adapter.get_api_base() / f"channels/{channel_id}/schedules/{schedule_id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _audio_control(
adapter: "Adapter", bot: "Bot", channel_id: int, **data
) -> None:
request = Request(
"POST",
adapter.get_api_base() / f"channels/{channel_id}/audio",
json=AudioControl(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _get_guild_api_permission(
adapter: "Adapter", bot: "Bot", guild_id: int
) -> List[APIPermission]:
request = Request(
"GET",
adapter.get_api_base() / f"guilds/{guild_id}/api_permission",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(List[APIPermission], await _request(adapter, bot, request))
async def _post_api_permission_demand(
adapter: "Adapter", bot: "Bot", guild_id: int, **data
) -> List[APIPermissionDemand]:
request = Request(
"POST",
adapter.get_api_base() / f"guilds/{guild_id}/api_permission/demand",
json=PostApiPermissionDemandBody(**data).dict(exclude_none=True),
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(
List[APIPermissionDemand], await _request(adapter, bot, request)
)
async def _url_get(adapter: "Adapter", bot: "Bot") -> UrlGetReturn:
request = Request(
"GET",
adapter.get_api_base() / f"gateway",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(UrlGetReturn, await _request(adapter, bot, request))
async def _shard_url_get(adapter: "Adapter", bot: "Bot") -> ShardUrlGetReturn:
request = Request(
"GET",
adapter.get_api_base() / f"gateway/bot",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(ShardUrlGetReturn, await _request(adapter, bot, request))
async def _put_message_reaction(
adapter: "Adapter", bot: "Bot", channel_id: int, message_id: str, type: int, id: str
) -> None:
request = Request(
"PUT",
adapter.get_api_base()
/ f"channels/{channel_id}/messages/{message_id}/reactions/{type}/{id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _delete_own_message_reaction(
adapter: "Adapter", bot: "Bot", channel_id: int, message_id: str, type: int, id: str
) -> None:
request = Request(
"DELETE",
adapter.get_api_base()
/ f"channels/{channel_id}/messages/{message_id}/reactions/{type}/{id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _put_pins_message(
adapter: "Adapter", bot: "Bot", channel_id: int, message_id: str
) -> None:
request = Request(
"PUT",
adapter.get_api_base() / f"channels/{channel_id}/pins/{message_id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _delete_pins_message(
adapter: "Adapter", bot: "Bot", channel_id: int, message_id: str
) -> None:
request = Request(
"DELETE",
adapter.get_api_base() / f"channels/{channel_id}/pins/{message_id}",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return await _request(adapter, bot, request)
async def _get_pins_message(
adapter: "Adapter", bot: "Bot", channel_id: int
) -> PinsMessage:
request = Request(
"GET",
adapter.get_api_base() / f"channels/{channel_id}/pins",
headers={"Authorization": adapter.get_authorization(bot.bot_info)},
)
return parse_obj_as(PinsMessage, await _request(adapter, bot, request))
API_HANDLERS = {
"get_guild": _get_guild,
"me": _me,
"guilds": _guilds,
"get_channels": _get_channels,
"post_channels": _post_channels,
"get_channel": _get_channel,
"patch_channel": _patch_channel,
"delete_channel": _delete_channel,
"get_members": _get_members,
"get_member": _get_member,
"delete_member": _delete_member,
"get_guild_roles": _get_guild_roles,
"post_guild_role": _post_guild_role,
"patch_guild_role": _patch_guild_role,
"delete_guild_role": _delete_guild_role,
"put_guild_member_role": _put_guild_member_role,
"delete_guild_member_role": _delete_guild_member_role,
"get_channel_permissions": _get_channel_permissions,
"put_channel_permissions": _put_channel_permissions,
"get_channel_roles_permissions": _get_channel_roles_permissions,
"put_channel_roles_permissions": _put_channel_roles_permissions,
"get_message_of_id": _get_message_of_id,
"post_messages": _post_messages,
"post_dms": _post_dms,
"post_dms_messages": _post_dms_messages,
"patch_guild_mute": _patch_guild_mute,
"patch_guild_member_mute": _patch_guild_member_mute,
"post_guild_announces": _post_guild_announces,
"delete_guild_announces": _delete_guild_announces,
"post_channel_announces": _post_channel_announces,
"delete_channel_announces": _delete_channel_announces,
"get_schedules": _get_schedules,
"post_schedule": _post_schedule,
"get_schedule": _get_schedule,
"patch_schedule": _patch_schedule,
"delete_schedule": _delete_schedule,
"audio_control": _audio_control,
"get_guild_api_permission": _get_guild_api_permission,
"post_api_permission_demand": _post_api_permission_demand,
"url_get": _url_get,
"shard_url_get": _shard_url_get,
"put_message_reaction": _put_message_reaction,
"delete_own_message_reaction": _delete_own_message_reaction,
"put_pins_message": _put_pins_message,
"delete_pins_message": _delete_pins_message,
"get_pins_message": _get_pins_message,
}
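# Added sketch (illustration only): API_HANDLERS maps public API names to the private
# handler coroutines above. A dispatcher built on it could look like the following;
# `_call_api_example` is a hypothetical helper, not part of the original adapter.
async def _call_api_example(adapter: "Adapter", bot: "Bot", api: str, **data):
    handler = API_HANDLERS.get(api)
    if handler is None:
        raise ValueError(f"Unsupported API: {api}")
    return await handler(adapter, bot, **data)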
| 35.641156 | 88 | 0.677912 | 2,522 | 20,957 | 5.324742 | 0.044409 | 0.068508 | 0.058232 | 0.068508 | 0.850696 | 0.812793 | 0.802442 | 0.771465 | 0.717626 | 0.657309 | 0 | 0 | 0.183566 | 20,957 | 587 | 89 | 35.701874 | 0.784863 | 0 | 0 | 0.520408 | 0 | 0 | 0.169442 | 0.084459 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.016327 | 0 | 0.110204 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
70a400bdd36de699a6f1a74c1612061975e132a5 | 28 | py | Python | src/config/__init__.py | natureyoo/MLVU-DA-Detection | 2e5c1648466f76fd1e32156ada5cb04fe8be59b3 | [
"Apache-2.0"
] | 60 | 2021-04-20T07:35:12.000Z | 2022-03-23T13:55:40.000Z | src/config/__init__.py | natureyoo/MLVU-DA-Detection | 2e5c1648466f76fd1e32156ada5cb04fe8be59b3 | [
"Apache-2.0"
] | 2 | 2021-12-24T03:44:07.000Z | 2022-02-13T02:13:18.000Z | src/config/__init__.py | natureyoo/MLVU-DA-Detection | 2e5c1648466f76fd1e32156ada5cb04fe8be59b3 | [
"Apache-2.0"
] | 2 | 2021-08-19T01:31:42.000Z | 2021-11-08T05:35:26.000Z | from .config import get_cfg
| 14 | 27 | 0.821429 | 5 | 28 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5632b2b921349c1fcf87b09eec8d974ed747422c | 96 | py | Python | venv/lib/python3.8/site-packages/future/utils/__init__.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/future/utils/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/future/utils/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/c2/cb/d7/b0ac7e0d7662721435d1176697e09632a4bbf5134dca799dbdf212a425 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.427083 | 0 | 96 | 1 | 96 | 96 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
56365718283d19da58c0ba7714f9ca3b06fa78dd | 30 | py | Python | dbl_us_to/__init__.py | MrSpinne/DiscordBotsLists.py | 43fd6252e27b05768216e866266934f13923fcf6 | [
"MIT"
] | null | null | null | dbl_us_to/__init__.py | MrSpinne/DiscordBotsLists.py | 43fd6252e27b05768216e866266934f13923fcf6 | [
"MIT"
] | null | null | null | dbl_us_to/__init__.py | MrSpinne/DiscordBotsLists.py | 43fd6252e27b05768216e866266934f13923fcf6 | [
"MIT"
] | 1 | 2021-01-14T15:02:48.000Z | 2021-01-14T15:02:48.000Z | from .client import DBLClient
| 15 | 29 | 0.833333 | 4 | 30 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
56897951f89508156cb56f175101d00b45bac660 | 35,074 | py | Python | iter8_analytics/tests/unit/v2/experiment_test.py | klee989/iter8-analytics | 742e8d79f11e517b62b065383dc71ad956f0db26 | [
"Apache-2.0"
] | 14 | 2019-11-14T01:30:32.000Z | 2021-09-10T06:03:51.000Z | iter8_analytics/tests/unit/v2/experiment_test.py | klee989/iter8-analytics | 742e8d79f11e517b62b065383dc71ad956f0db26 | [
"Apache-2.0"
] | 120 | 2019-12-09T21:17:37.000Z | 2021-07-21T00:21:17.000Z | iter8_analytics/tests/unit/v2/experiment_test.py | klee989/iter8-analytics | 742e8d79f11e517b62b065383dc71ad956f0db26 | [
"Apache-2.0"
] | 14 | 2020-04-01T15:40:39.000Z | 2021-08-19T14:23:40.000Z | """Tests for module iter8_analytics.api.v2"""
# standard python stuff
import copy
from datetime import datetime, timezone, timedelta
import json
import logging
import os
# python libraries
import requests_mock
from iter8_analytics import fastapi_app
from iter8_analytics.api.v2.types import \
ExperimentResource, AggregatedMetricsAnalysis, VersionAssessmentsAnalysis, \
WinnerAssessmentAnalysis, WeightsAnalysis, VersionWeight
from iter8_analytics.config import env_config
import iter8_analytics.constants as constants
from iter8_analytics.api.v2.examples.examples_canary import \
er_example, er_example_step1, er_example_step2, er_example_step3, \
am_response, va_response, wa_response, w_response, mr_example, mocked_mr_example
from iter8_analytics.api.v2.examples.examples_ab import \
ab_er_example, ab_er_example_step1, ab_er_example_step2, ab_er_example_step3, \
ab_am_response, ab_va_response, ab_wa_response, ab_w_response, ab_mr_example
from iter8_analytics.api.v2.examples.examples_abn import \
abn_er_example, abn_er_example_step1, abn_er_example_step2, abn_er_example_step3, \
abn_am_response, abn_va_response, abn_wa_response, abn_w_response, abn_mr_example
from iter8_analytics.api.v2.metrics import get_aggregated_metrics
from iter8_analytics.api.v2.experiment import get_version_assessments, get_winner_assessment, \
get_weights, get_analytics_results
logger = logging.getLogger('iter8_analytics')
if not logger.hasHandlers():
fastapi_app.config_logger(env_config[constants.LOG_LEVEL])
logger.info(env_config)
# class TestExperiment:
# """Test Iter8 v2 experiment"""
def test_input_object():
ExperimentResource(** er_example)
ExperimentResource(** er_example_step1)
ExperimentResource(** er_example_step2)
ExperimentResource(** er_example_step3)
def test_experiment_response_objects():
AggregatedMetricsAnalysis(** am_response)
VersionAssessmentsAnalysis(** va_response)
WinnerAssessmentAnalysis(** wa_response)
WeightsAnalysis(** w_response)
def test_aggregated_metrics_endpoint():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
response_json = json.load(open(file_path))
mock.get(er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=response_json)
expr = ExperimentResource(** er_example)
agm = get_aggregated_metrics(
expr.convert_to_float()).convert_to_quantity()
assert agm.data['request-count'].data['default'].value == \
response_json['data']['result'][0]['value'][1]
ercopy = copy.deepcopy(er_example)
del ercopy["status"]["metrics"]
expr = ExperimentResource(** ercopy)
agm = get_aggregated_metrics(
expr.convert_to_float()).convert_to_quantity()
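# Added sketch (hypothetical fixture, not part of the original tests): the
# mock-the-Prometheus-URL setup above recurs in most tests below; with pytest and the
# requests-mock pytest plugin available it could be factored out as follows. The
# plugin-provided `requests_mock` fixture shadows the imported module inside the
# fixture body, which is the plugin's normal usage.
import pytest  # assumption: pytest is not imported elsewhere in this module


@pytest.fixture
def mocked_prometheus_sketch(requests_mock):
    file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
                             'prometheus_sample_response.json')
    requests_mock.get(
        er_example["status"]["metrics"][0]["metricObj"]["spec"]["urlTemplate"],
        json=json.load(open(file_path)))
    return requests_mock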
def test_version_assessment_endpoint():
expr = ExperimentResource(** er_example_step1)
get_version_assessments(expr.convert_to_float())
def test_winner_assessment_endpoint():
expr = ExperimentResource(** er_example_step2)
get_winner_assessment(expr.convert_to_float())
def test_weights_endpoint():
expr = ExperimentResource(** er_example_step3)
get_weights(expr.convert_to_float())
def test_analytics_assessment_endpoint():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
expr = ExperimentResource(** er_example)
get_analytics_results(expr.convert_to_float()).convert_to_quantity()
def test_mock_metrics():
ercopy = copy.deepcopy(er_example)
ercopy["status"]["metrics"] = mocked_mr_example
expr = ExperimentResource(** ercopy)
agm = get_aggregated_metrics(
expr.convert_to_float())
logger.info(agm)
assert agm.data['request-count'].data['default'].value > 100.0
assert agm.data['request-count'].data['canary'].value > 100.0
assert agm.data['mean-latency'].data['default'].value > 15.0
assert agm.data['mean-latency'].data['canary'].value > 9.0
def test_mock_metrics_with_negative_elapsed():
ercopy = copy.deepcopy(er_example)
ercopy["status"]["metrics"] = mocked_mr_example
expr = ExperimentResource(** ercopy)
expr.status.startTime = datetime.now(timezone.utc) + timedelta(hours=10)
agm = get_aggregated_metrics(
expr.convert_to_float())
logger.info(agm)
assert agm.data['request-count'].data['default'].value > 0
assert agm.data['request-count'].data['canary'].value > 0
assert agm.data['mean-latency'].data['default'].value > 0
assert agm.data['mean-latency'].data['canary'].value > 0
def test_am_without_candidates():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(er_example)
del example['spec']['versionInfo']['candidates']
expr = ExperimentResource(** example)
get_aggregated_metrics(expr.convert_to_float()).convert_to_quantity()
def test_analytics_assessment_conformance():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(er_example)
del example['spec']['versionInfo']['candidates']
example['spec']['strategy']['testingPattern'] = 'Conformance'
expr = ExperimentResource(** example)
get_analytics_results(expr.convert_to_float()).convert_to_quantity()
def test_version_assessment_conformance():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(er_example)
del example['spec']['versionInfo']['candidates']
example['spec']['strategy']['testingPattern'] = 'Conformance'
expr = ExperimentResource(** example)
resp = get_analytics_results(
expr.convert_to_float()).convert_to_quantity()
assert resp.version_assessments.data == {'default': [True]}
def test_va_without_am():
expr = ExperimentResource(** er_example)
try:
get_version_assessments(expr.convert_to_float())
except AttributeError:
pass
def test_wa_without_va():
expr = ExperimentResource(** er_example)
try:
get_winner_assessment(expr.convert_to_float())
except AttributeError:
pass
def test_w_without_wa():
expr = ExperimentResource(** er_example)
try:
get_weights(expr.convert_to_float())
except AttributeError:
pass
def test_no_prometheus_response():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_no_response.json')
mock.get(er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
expr = ExperimentResource(** er_example)
resp = get_aggregated_metrics(
expr.convert_to_float()).convert_to_quantity()
expected_response = {
"request-count": {
"max": None,
"min": None,
"data": {
"default": {
"max": None,
"min": None,
"sample_size": None,
"value": None
},
"canary": {
"max": None,
"min": None,
"sample_size": None,
"value": None
}
}
},
"mean-latency": {
"max": None,
"min": None,
"data": {
"default": {
"max": None,
"min": None,
"sample_size": None,
"value": None
},
"canary": {
"max": None,
"min": None,
"sample_size": None,
"value": None
}
}
}
}
assert resp.data == expected_response
def test_va_with_no_metric_value():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_no_response.json')
mock.get(er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
expr = ExperimentResource(** er_example)
resp = get_aggregated_metrics(
expr.convert_to_float()).convert_to_quantity()
example = copy.deepcopy(er_example_step1)
example['status']['analysis']["aggregatedMetrics"] = resp
expr = ExperimentResource(** example)
resp2 = get_version_assessments(expr.convert_to_float())
assert resp2.data == {'default': [False], 'canary': [False]}
def test_va_without_mean_latency_metric():
example = copy.deepcopy(er_example_step1)
example['status']['analysis']['aggregatedMetrics']["data"].pop(
'mean-latency', None)
expr = ExperimentResource(** example)
resp = get_version_assessments(expr.convert_to_float())
assert resp.message == \
"Error: ; Warning: Aggregated metric object for mean-latency metric is unavailable.; Info: "
def test_canary_passing_criteria():
example = copy.deepcopy(er_example_step1)
example['spec']['criteria']['objectives'][0]['upperLimit'] = 500
expr = ExperimentResource(** example)
resp = get_version_assessments(expr.convert_to_float())
assert resp.data == {'default': [True], 'canary': [True]}
def test_canary_failing_upperlimit_criteria():
expr = ExperimentResource(** er_example_step1)
resp = get_version_assessments(expr.convert_to_float())
assert resp.data == {'default': [True], 'canary': [False]}
def test_canary_failing_lowerlimit_criteria():
example = copy.deepcopy(er_example_step1)
example['spec']['criteria']['objectives'][0].pop('upperLimit')
example['spec']['criteria']['objectives'][0]['lowerLimit'] = 500
expr = ExperimentResource(** example)
resp = get_version_assessments(expr.convert_to_float())
assert resp.data == {'default': [False], 'canary': [False]}
def test_canary_is_winner():
example = copy.deepcopy(er_example_step2)
example['status']['analysis']['versionAssessments'] = {
"data": {
"default": [
True
],
"canary": [
True
]
},
"message": "All ok"
}
expr = ExperimentResource(** example)
resp = get_winner_assessment(expr.convert_to_float())
assert resp.data.winnerFound == wa_response['data']['winnerFound']
assert resp.data.winner == wa_response['data']['winner']
def test_analytics_assessment_conformance_winner():
example = copy.deepcopy(er_example_step2)
example['status']['analysis']['versionAssessments'] = {
"data": {
"default": [
True
]
},
"message": "All ok"
}
del example['spec']['versionInfo']['candidates']
example['spec']['strategy']['testingPattern'] = 'Conformance'
expr = ExperimentResource(** example)
resp = get_winner_assessment(expr.convert_to_float())
assert resp.data.winnerFound is True
assert resp.data.winner == 'default'
def test_default_is_winner():
example = copy.deepcopy(er_example_step2)
example['status']['analysis']['versionAssessments'] = {
"data": {
"default": [
True
],
"canary": [
False
]
},
"message": "All ok"
}
expr = ExperimentResource(** example)
resp = get_winner_assessment(expr.convert_to_float())
assert resp.data.winnerFound == wa_response['data']['winnerFound']
assert resp.data.winner == 'default'
def test_no_winner():
example = copy.deepcopy(er_example_step2)
example['status']['analysis']['versionAssessments'] = {
"data": {
"default": [
False
],
"canary": [
False
]
},
"message": "All ok"
}
expr = ExperimentResource(** example)
resp = get_winner_assessment(expr.convert_to_float())
assert resp.data.winnerFound is False
def test_weights_with_winner():
expr = ExperimentResource(** er_example_step3)
resp = get_weights(expr.convert_to_float())
expected_resp = [
VersionWeight(name="default", value=5),
VersionWeight(name="canary", value=95)
]
assert resp.data == expected_resp
def test_weights_with_no_winner():
example = copy.deepcopy(er_example_step3)
example['status']['analysis']['winnerAssessment']['data'] = {
"winnerFound": False
}
expr = ExperimentResource(** example)
resp = get_weights(expr.convert_to_float())
assert resp.data == w_response['data']
def test_inc_old_weights_and_best_versions_and_canary_winner():
example = copy.deepcopy(er_example_step3)
example['status']['currentWeightDistribution'] = [{
"name": "default",
"value": 70
}, {
"name": "canary",
"value": 30
}]
expected_resp = [{
"name": "default",
"value": 5
}, {
"name": "canary",
"value": 95
}]
expr = ExperimentResource(** example)
resp = get_weights(expr.convert_to_float())
assert resp.data == expected_resp
def test_inc_old_weights_and_no_best_versions():
example = copy.deepcopy(er_example_step3)
example['status']['currentWeightDistribution'] = [{
"name": "default",
"value": 50
}, {
"name": "canary",
"value": 50
}]
example["status"]["analysis"]["winnerAssessment"]["data"] = {
"winnerFound": False
}
expected_resp = [{
"name": "default",
"value": 95
}, {
"name": "canary",
"value": 5
}]
expr = ExperimentResource(** example)
resp = get_weights(expr.convert_to_float())
assert resp.data == expected_resp
def test_set_weights_config():
example = copy.deepcopy(er_example_step3)
example['status']['currentWeightDistribution'] = [{
"name": "default",
"value": 50
}, {
"name": "canary",
"value": 50
}]
example['spec']['strategy']['weights'] = {
"maxCandidateWeight": 53,
"maxCandidateWeightIncrement": 2,
"algorithm": 'Progressive'
}
expected_resp = [{
"name": "default",
"value": 48
}, {
"name": "canary",
"value": 52
}]
expr = ExperimentResource(** example)
resp = get_weights(expr.convert_to_float())
assert resp.data == expected_resp
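# Arithmetic behind the expected weights above (as exercised by this test): the winning
# canary starts at 50 and may move by at most maxCandidateWeightIncrement = 2, giving
# 52, which stays under maxCandidateWeight = 53; the default then receives the
# remaining 100 - 52 = 48.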
def test_using_previous_metric_status():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_no_response.json')
mock.get(er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(er_example_step1)
example['status']['metrics'] = mr_example
expr = ExperimentResource(** example)
resp = get_aggregated_metrics(
expr.convert_to_float()).convert_to_quantity()
expected_response = copy.deepcopy(am_response)
assert resp.data['mean-latency'].data['default'].value == \
expected_response['data']['mean-latency']['data']['default']['value']
########## A/B TESTS #############
def test_ab_input_object():
ExperimentResource(** ab_er_example)
ExperimentResource(** ab_er_example_step1)
ExperimentResource(** ab_er_example_step2)
ExperimentResource(** ab_er_example_step3)
def test_experiment_ab_response_objects():
AggregatedMetricsAnalysis(** ab_am_response)
VersionAssessmentsAnalysis(** ab_va_response)
WinnerAssessmentAnalysis(** ab_wa_response)
WeightsAnalysis(** ab_w_response)
def test_ab_aggregated_metrics_endpoint():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
response_json = json.load(open(file_path))
mock.get(ab_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=response_json)
expr = ExperimentResource(** ab_er_example)
agm = get_aggregated_metrics(
expr.convert_to_float()).convert_to_quantity()
assert agm.data['request-count'].data['default'].value == \
response_json['data']['result'][0]['value'][1]
def test_ab_version_assessment_endpoint():
expr = ExperimentResource(** ab_er_example_step1)
get_version_assessments(expr.convert_to_float())
def test_ab_winner_assessment_endpoint():
expr = ExperimentResource(** ab_er_example_step2)
get_winner_assessment(expr.convert_to_float())
def test_ab_weights_endpoint():
expr = ExperimentResource(** ab_er_example_step3)
get_weights(expr.convert_to_float())
def test_ab_analytics_assessment_endpoint():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(ab_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
expr = ExperimentResource(** ab_er_example)
get_analytics_results(expr.convert_to_float()).convert_to_quantity()
def test_ab_reward_only_analytics_assessment_endpoint():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(ab_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
ab_expr = copy.deepcopy(ab_er_example)
del ab_expr["spec"]["criteria"]["objectives"]
expr = ExperimentResource(** ab_expr)
get_analytics_results(expr.convert_to_float()).convert_to_quantity()
def test_ab_am_without_candidates():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(ab_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(ab_er_example)
del example['spec']['versionInfo']['candidates']
expr = ExperimentResource(** example)
get_aggregated_metrics(expr.convert_to_float()).convert_to_quantity()
def test_ab_analytics_assessment_conformance():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(ab_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(ab_er_example)
del example['spec']['versionInfo']['candidates']
example['spec']['strategy']['testingPattern'] = 'Conformance'
expr = ExperimentResource(** example)
get_analytics_results(expr.convert_to_float()).convert_to_quantity()
def test_conformance_without_objectives():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(ab_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(ab_er_example)
del example['spec']['versionInfo']['candidates']
del example['spec']['criteria']['objectives']
del example['spec']['criteria']['rewards']
example['spec']['strategy']['testingPattern'] = 'Conformance'
expr = ExperimentResource(** example)
get_analytics_results(expr.convert_to_float()).convert_to_quantity()
def test_ab_version_assessment_conformance():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(ab_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(ab_er_example)
del example['spec']['versionInfo']['candidates']
example['spec']['strategy']['testingPattern'] = 'Conformance'
expr = ExperimentResource(** example)
resp = get_analytics_results(
expr.convert_to_float()).convert_to_quantity()
assert resp.version_assessments.data == {'default': [True]}
def test_ab_va_without_am():
expr = ExperimentResource(** ab_er_example)
try:
get_version_assessments(expr.convert_to_float())
except AttributeError:
pass
def test_ab_wa_without_va():
expr = ExperimentResource(** ab_er_example)
try:
get_winner_assessment(expr.convert_to_float())
except AttributeError:
pass
def test_ab_w_without_wa():
expr = ExperimentResource(** ab_er_example)
try:
get_weights(expr.convert_to_float())
except AttributeError:
pass
def test_ab_without_reward():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(ab_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(ab_er_example)
del example['spec']['criteria']['rewards']
expr = ExperimentResource(** example)
resp = get_analytics_results(
expr.convert_to_float()).convert_to_quantity()
assert "No reward metric in experiment" in resp.winner_assessment.message
assert resp.winner_assessment.data.winnerFound is False
def test_ab_without_reward_metric_config():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(ab_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(ab_er_example)
del example["status"]["metrics"][2]
expr = ExperimentResource(** example)
resp = get_analytics_results(
expr.convert_to_float()).convert_to_quantity()
assert "reward metric values are not available" in \
resp.winner_assessment.message
assert resp.winner_assessment.data.winnerFound is False
def test_ab_without_reward_for_feasible_version():
example = copy.deepcopy(ab_er_example_step2)
del example['status']['analysis']['aggregatedMetrics']['data']['business-revenue']['data']['canary']
expr = ExperimentResource(** example)
resp = get_winner_assessment(expr.convert_to_float())
assert "reward value for feasible version canary is not available" in \
resp.message
def test_ab_using_previous_metric_status():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_no_response.json')
mock.get(ab_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(ab_er_example_step1)
example['status']['metrics'] = ab_mr_example[:2]
expr = ExperimentResource(** example)
resp = get_aggregated_metrics(
expr.convert_to_float()).convert_to_quantity()
expected_response = copy.deepcopy(ab_am_response)
assert resp.data['mean-latency'].data['default'].value == \
expected_response['data']['mean-latency']['data']['default']['value']
########## A/B/N TESTS #############
def test_abn_input_object():
ExperimentResource(** abn_er_example)
ExperimentResource(** abn_er_example_step1)
ExperimentResource(** abn_er_example_step2)
ExperimentResource(** abn_er_example_step3)
def test_experiment_abn_response_objects():
AggregatedMetricsAnalysis(** abn_am_response)
VersionAssessmentsAnalysis(** abn_va_response)
WinnerAssessmentAnalysis(** abn_wa_response)
WeightsAnalysis(** abn_w_response)
def test_abn_aggregated_metrics_endpoint():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
response_json = json.load(open(file_path))
mock.get(abn_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=response_json)
expr = ExperimentResource(** abn_er_example)
agm = get_aggregated_metrics(
expr.convert_to_float()).convert_to_quantity()
assert agm.data['request-count'].data['default'].value == \
response_json['data']['result'][0]['value'][1]
def test_abn_version_assessment_endpoint():
expr = ExperimentResource(** abn_er_example_step1)
get_version_assessments(expr.convert_to_float())
def test_abn_winner_assessment_endpoint():
expr = ExperimentResource(** abn_er_example_step2)
get_winner_assessment(expr.convert_to_float())
def test_abn_weights_endpoint():
expr = ExperimentResource(** abn_er_example_step3)
get_weights(expr.convert_to_float())
def test_abn_analytics_assessment_endpoint():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(abn_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
expr = ExperimentResource(** abn_er_example)
get_analytics_results(expr.convert_to_float()).convert_to_quantity()
def test_abn_am_without_candidates():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(abn_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(abn_er_example)
del example['spec']['versionInfo']['candidates']
expr = ExperimentResource(** example)
get_aggregated_metrics(expr.convert_to_float()).convert_to_quantity()
def test_abn_analytics_assessment_conformance():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(abn_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(abn_er_example)
del example['spec']['versionInfo']['candidates']
example['spec']['strategy']['testingPattern'] = 'Conformance'
expr = ExperimentResource(** example)
get_analytics_results(expr.convert_to_float()).convert_to_quantity()
def test_abn_version_assessment_conformance():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(abn_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(abn_er_example)
del example['spec']['versionInfo']['candidates']
example['spec']['strategy']['testingPattern'] = 'Conformance'
expr = ExperimentResource(** example)
resp = get_analytics_results(
expr.convert_to_float()).convert_to_quantity()
assert resp.version_assessments.data == {'default': [True]}
def test_abn_va_without_am():
expr = ExperimentResource(** abn_er_example)
try:
get_version_assessments(expr.convert_to_float())
except AttributeError:
pass
def test_abn_wa_without_va():
expr = ExperimentResource(** abn_er_example)
try:
get_winner_assessment(expr.convert_to_float())
except AttributeError:
pass
def test_abn_w_without_wa():
expr = ExperimentResource(** abn_er_example)
try:
get_weights(expr.convert_to_float())
except AttributeError:
pass
def test_abn_without_reward():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(abn_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(abn_er_example)
del example['spec']['criteria']['rewards']
expr = ExperimentResource(** example)
resp = get_analytics_results(
expr.convert_to_float()).convert_to_quantity()
assert "No reward metric in experiment" in \
resp.winner_assessment.message
assert resp.winner_assessment.data.winnerFound is False
def test_abn_without_reward_metric_config():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_response.json')
mock.get(abn_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(abn_er_example)
del example["status"]["metrics"][2]
expr = ExperimentResource(** example)
resp = get_analytics_results(
expr.convert_to_float()).convert_to_quantity()
assert "reward metric values are not available" in \
resp.winner_assessment.message
assert resp.winner_assessment.data.winnerFound is False
def test_abn_general():
example = copy.deepcopy(abn_er_example_step2)
expr = ExperimentResource(** example)
resp = get_winner_assessment(expr.convert_to_float())
assert resp.data.winnerFound is True
assert resp.data.winner == 'canary1'
def test_abn_without_reward_for_feasible_version():
example = copy.deepcopy(abn_er_example_step2)
del example['status']['analysis']['aggregatedMetrics']['data']['business-revenue']['data']['canary1']
expr = ExperimentResource(** example)
resp = get_winner_assessment(expr.convert_to_float())
assert "reward value for feasible version canary1 is not available" in \
resp.message
assert resp.data.winnerFound is True
assert resp.data.winner == 'canary2'
def test_abn_with_better_reward_but_not_feasible():
example = copy.deepcopy(abn_er_example_step2)
example['status']['analysis']['versionAssessments']['data']['canary1'] = [False]
expr = ExperimentResource(** example)
resp = get_winner_assessment(expr.convert_to_float())
assert resp.data.winnerFound is True
assert resp.data.winner == 'canary2'
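# Added note (interpretation of the three A/B/n winner tests above): the winner is the
# version with the best reward among the versions that passed all objectives, so
# canary1 is skipped when it is infeasible or its reward value is missing, and canary2
# wins instead.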
def test_abn_using_previous_metric_status():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_no_response.json')
mock.get(abn_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(abn_er_example_step1)
example['status']['metrics'] = abn_mr_example[:2]
expr = ExperimentResource(** example)
resp = get_aggregated_metrics(
expr.convert_to_float()).convert_to_quantity()
expected_response = copy.deepcopy(abn_am_response)
assert resp.data['mean-latency'].data['default'].value == \
expected_response['data']['mean-latency']['data']['default']['value']
def test_abn_using_previous_metric_status_none():
with requests_mock.mock(real_http=True) as mock:
file_path = os.path.join(os.path.dirname(__file__), 'data/prom_responses',
'prometheus_sample_no_response.json')
mock.get(abn_er_example["status"]["metrics"][0]["metricObj"]
["spec"]["urlTemplate"], json=json.load(open(file_path)))
example = copy.deepcopy(abn_er_example)
example['status']['metrics'] = abn_mr_example[:2]
expr = ExperimentResource(** example)
resp = get_aggregated_metrics(
expr.convert_to_float()).convert_to_quantity()
assert resp.data['mean-latency'].data['default'].value is None
| 40.038813 | 105 | 0.648172 | 3,908 | 35,074 | 5.494882 | 0.052712 | 0.049036 | 0.039955 | 0.055323 | 0.864068 | 0.830027 | 0.807907 | 0.773121 | 0.762038 | 0.743318 | 0 | 0.006223 | 0.221189 | 35,074 | 875 | 106 | 40.084571 | 0.779909 | 0.004533 | 0 | 0.698895 | 0 | 0 | 0.152041 | 0.027453 | 0 | 0 | 0 | 0 | 0.071823 | 1 | 0.096685 | false | 0.013812 | 0.020718 | 0 | 0.117403 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
56987d4e6df8953908642327feff8385409f1ca3 | 84 | py | Python | itez/beneficiary/models.py | chriford/itez | 107efc0a25a4fb35a4db24a62bae2b8b958927da | [
"MIT"
] | null | null | null | itez/beneficiary/models.py | chriford/itez | 107efc0a25a4fb35a4db24a62bae2b8b958927da | [
"MIT"
] | null | null | null | itez/beneficiary/models.py | chriford/itez | 107efc0a25a4fb35a4db24a62bae2b8b958927da | [
"MIT"
] | null | null | null | from django.db import models
from django.utils.translation import gettext_lazy as _
| 28 | 54 | 0.845238 | 13 | 84 | 5.307692 | 0.769231 | 0.289855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 84 | 2 | 55 | 42 | 0.932432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3b6de893f6ed64dc52767770328093f8398b5166 | 2,306 | py | Python | Magphi/write_output_csv.py | BiologicalScientist/Magphi | 6d04f6f7192d9c0ef8bfb5c9f4a032433c01de59 | [
"MIT"
] | null | null | null | Magphi/write_output_csv.py | BiologicalScientist/Magphi | 6d04f6f7192d9c0ef8bfb5c9f4a032433c01de59 | [
"MIT"
] | null | null | null | Magphi/write_output_csv.py | BiologicalScientist/Magphi | 6d04f6f7192d9c0ef8bfb5c9f4a032433c01de59 | [
"MIT"
] | null | null | null | import csv
import os


def write_seed_hit_matrix(master_seed_hits, seed_pairs, out_path):
    with open(os.path.join(out_path, 'contig_hit_matrix.csv'), 'w') as out_file:
        header = ['genome'] + list(seed_pairs.keys())
        writer = csv.DictWriter(out_file, fieldnames=header)
        # Write the field names or header row in file
        writer.writeheader()
        # Write the remaining lines
        keys = list(master_seed_hits.keys())
        keys.sort()
        for line in keys:
            writer.writerow(master_seed_hits[line])
    out_file.close()


def write_annotation_num_matrix(master_annotation_hits, seed_pairs, out_path):
    with open(os.path.join(out_path, 'annotation_num_matrix.csv'), 'w') as out_file:
        header = ['genome'] + list(seed_pairs.keys())
        writer = csv.DictWriter(out_file, fieldnames=header)
        # Write the field names or header row in file
        writer.writeheader()
        # Write the remaining lines
        keys = list(master_annotation_hits.keys())
        keys.sort()
        for line in keys:
            writer.writerow(master_annotation_hits[line])
    out_file.close()


def write_seed_hit_evidence(master_seed_evidence, seed_pairs, out_path):
    with open(os.path.join(out_path, 'master_seed_evidence.csv'), 'w') as out_file:
        header = ['genome'] + list(seed_pairs.keys())
        writer = csv.DictWriter(out_file, fieldnames=header)
        # Write the field names or header row in file
        writer.writeheader()
        # Write the remaining lines
        keys = list(master_seed_evidence.keys())
        keys.sort()
        for line in keys:
            writer.writerow(master_seed_evidence[line])
    out_file.close()


def write_inter_seed_dist(master_inter_seed_dist, seed_pairs, out_path):
    with open(os.path.join(out_path, 'inter_seed_distance.csv'), 'w') as out_file:
        header = ['genome'] + list(seed_pairs.keys())
        writer = csv.DictWriter(out_file, fieldnames=header)
        # Write the field names or header row in file
        writer.writeheader()
        # Write the remaining lines
        keys = list(master_inter_seed_dist.keys())
        keys.sort()
        for line in keys:
            writer.writerow(master_inter_seed_dist[line])
    out_file.close()
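# Possible consolidation (added sketch, not part of the original module): the four
# writers above differ only in the output file name and the dictionary they serialise,
# so a shared helper could look like this.
def _write_matrix_sketch(data, seed_pairs, out_path, file_name):
    with open(os.path.join(out_path, file_name), 'w') as out_file:
        writer = csv.DictWriter(
            out_file, fieldnames=['genome'] + list(seed_pairs.keys()))
        # Header row, then one row per genome in sorted order
        writer.writeheader()
        for genome in sorted(data):
            writer.writerow(data[genome])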
| 32.478873 | 84 | 0.656548 | 311 | 2,306 | 4.62701 | 0.151125 | 0.058374 | 0.033357 | 0.044475 | 0.803336 | 0.803336 | 0.786657 | 0.747741 | 0.747741 | 0.747741 | 0 | 0 | 0.24111 | 2,306 | 70 | 85 | 32.942857 | 0.822286 | 0.120989 | 0 | 0.571429 | 0 | 0 | 0.05996 | 0.046085 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.047619 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3b929dd4ea418ffa5a0da38af6ff0f94d489b974 | 661 | py | Python | python/stock/time_chk.py | gangserver/py_test | 869bdfa5c94c3b6a15b87e0c3de6b2cdaca821f4 | [
"Apache-2.0"
] | null | null | null | python/stock/time_chk.py | gangserver/py_test | 869bdfa5c94c3b6a15b87e0c3de6b2cdaca821f4 | [
"Apache-2.0"
] | null | null | null | python/stock/time_chk.py | gangserver/py_test | 869bdfa5c94c3b6a15b87e0c3de6b2cdaca821f4 | [
"Apache-2.0"
] | null | null | null | import timeit
iteration_test = """
for i in itr :
pass
"""
print(timeit.timeit(iteration_test, setup='itr = list(range(10000))', number=1000))
print(timeit.timeit(iteration_test, setup='itr = tuple(range(10000))', number=1000))
print(timeit.timeit(iteration_test, setup='itr = set(range(10000))', number=1000))
search_test = """
import random
x = random.randint(0, len(itr)-1)
if x in itr :
pass
"""
print(timeit.timeit(search_test, setup='itr = set(range(10000))', number=1000))
print(timeit.timeit(search_test, setup='itr = list(range(10000))', number=1000))
print(timeit.timeit(search_test, setup='itr = tuple(range(10000))', number=1000))
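# Added note (interpretation, not part of the original script): iteration time is
# expected to be broadly similar for list, tuple and set, while the membership test is
# expected to be far faster on the set, since sets use hashing (average O(1) lookup)
# and list/tuple membership is a linear O(n) scan. A dict behaves like the set here,
# e.g.:
print(timeit.timeit(search_test, setup='itr = dict.fromkeys(range(10000))', number=1000))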
| 23.607143 | 84 | 0.70348 | 97 | 661 | 4.71134 | 0.268041 | 0.14442 | 0.223195 | 0.262582 | 0.833698 | 0.833698 | 0.794311 | 0.68709 | 0.538293 | 0.538293 | 0 | 0.095727 | 0.114977 | 661 | 27 | 85 | 24.481481 | 0.68547 | 0 | 0 | 0.235294 | 0 | 0 | 0.365152 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.117647 | 0.117647 | 0 | 0.117647 | 0.352941 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
d91549db35bf092776b203bd6616797c48a3f92e | 94 | py | Python | svgd/model/__init__.py | nel215/chainer-svgd | fd6bc0611cfb2aedb44ddfae25677344d17f64a5 | [
"Apache-2.0"
] | 1 | 2020-09-27T07:22:28.000Z | 2020-09-27T07:22:28.000Z | svgd/model/__init__.py | nel215/chainer-svgd | fd6bc0611cfb2aedb44ddfae25677344d17f64a5 | [
"Apache-2.0"
] | null | null | null | svgd/model/__init__.py | nel215/chainer-svgd | fd6bc0611cfb2aedb44ddfae25677344d17f64a5 | [
"Apache-2.0"
] | null | null | null | from svgd.model.multivariate_normal import MVN # noqa
from svgd.model.gmm import GMM # noqa
| 31.333333 | 54 | 0.787234 | 15 | 94 | 4.866667 | 0.6 | 0.219178 | 0.356164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148936 | 94 | 2 | 55 | 47 | 0.9125 | 0.095745 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d92ec7aec07dd671a55c496f28ea788249d1b0c0 | 10,821 | py | Python | test/svid/jwtsvid/test_jwt_svid_validator.py | scotte/py-spiffe | a91217629dd6d9c8bf6e3b485e724cf598e26aa6 | [
"Apache-2.0"
] | null | null | null | test/svid/jwtsvid/test_jwt_svid_validator.py | scotte/py-spiffe | a91217629dd6d9c8bf6e3b485e724cf598e26aa6 | [
"Apache-2.0"
] | null | null | null | test/svid/jwtsvid/test_jwt_svid_validator.py | scotte/py-spiffe | a91217629dd6d9c8bf6e3b485e724cf598e26aa6 | [
"Apache-2.0"
] | null | null | null | import pytest
import datetime
from calendar import timegm
from pyspiffe.svid.jwt_svid_validator import (
JwtSvidValidator,
INVALID_INPUT_ERROR,
AUDIENCE_NOT_MATCH_ERROR,
)
from pyspiffe.svid.exceptions import (
TokenExpiredError,
InvalidClaimError,
InvalidAlgorithmError,
InvalidTypeError,
)
"""
validate_claims tests
"""
@pytest.mark.parametrize(
'test_input_claim,test_input_audience, expected',
[
(
{
'exp': timegm(
(
datetime.datetime.utcnow() + datetime.timedelta(hours=24)
).utctimetuple()
),
'aud': 'None',
'sub': 'spiffeid://somwhere.over.the',
},
None,
INVALID_INPUT_ERROR.format('expected_audience cannot be empty'),
),
(
{
'exp': timegm(
(
datetime.datetime.utcnow() + datetime.timedelta(hours=24)
).utctimetuple()
),
'aud': ['test'],
'sub': 'spiffeid://somwhere.over.the',
},
[],
INVALID_INPUT_ERROR.format('expected_audience cannot be empty'),
),
],
)
def test_invalid_input_validate_claims(test_input_claim, test_input_audience, expected):
with pytest.raises(ValueError) as exception:
JwtSvidValidator().validate_claims(test_input_claim, test_input_audience)
assert str(exception.value) == expected
@pytest.mark.parametrize(
'test_input_claim,test_input_audience, expected',
[
(
{
'exp': timegm(
(
datetime.datetime.utcnow() + datetime.timedelta(hours=24)
).utctimetuple()
),
'aud': None,
'sub': 'spiffeid://somwhere.over.the',
},
['even more things'],
str(InvalidClaimError('aud')),
),
(
{
'exp': timegm(
(
datetime.datetime.utcnow() + datetime.timedelta(hours=24)
).utctimetuple()
),
'aud': [],
'sub': 'spiffeid://somwhere.over.the',
},
['even more things'],
str(InvalidClaimError('aud')),
),
(
{
'exp': timegm(
(
datetime.datetime.utcnow() + datetime.timedelta(hours=24)
).utctimetuple()
),
'aud': [''],
'sub': 'spiffeid://somwhere.over.the',
},
[''],
str(InvalidClaimError('audience_claim cannot be empty')),
),
(
{
'exp': timegm(
(
datetime.datetime.utcnow() + datetime.timedelta(hours=24)
).utctimetuple()
),
'aud': ['', '', ''],
'sub': 'spiffeid://somwhere.over.the',
},
['test'],
str(InvalidClaimError('audience_claim cannot be empty')),
),
(
{
'exp': timegm(
(
datetime.datetime.utcnow() + datetime.timedelta(hours=24)
).utctimetuple()
),
'aud': ['something'],
'sub': 'spiffeid://somwhere.over.the',
},
[''],
str(InvalidClaimError(AUDIENCE_NOT_MATCH_ERROR)),
),
(
{
'exp': timegm(
(
datetime.datetime.utcnow() + datetime.timedelta(hours=24)
).utctimetuple()
),
'aud': ['something'],
'sub': 'spiffeid://somwhere.over.the',
},
['something', 'matters'],
str(InvalidClaimError(AUDIENCE_NOT_MATCH_ERROR)),
),
(
{
'exp': timegm(
(
datetime.datetime.utcnow() + datetime.timedelta(hours=24)
).utctimetuple()
),
'aud': ['something'],
'sub': 'spiffeid://somwhere.over.the',
},
['else', 'matters'],
str(InvalidClaimError(AUDIENCE_NOT_MATCH_ERROR)),
),
],
)
def test_invalid_aud_validate_claim(test_input_claim, test_input_audience, expected):
with pytest.raises(InvalidClaimError) as exception:
JwtSvidValidator().validate_claims(test_input_claim, test_input_audience)
assert str(exception.value) == expected
@pytest.mark.parametrize(
'test_input_claim, test_input_audience',
[
(
{
'exp': '1611075778',
'aud': ['something', 'more things', 'even more things'],
'sub': 'spiffeid://somwhere.over.the',
},
['something'],
),
(
{
'exp': timegm(
(
datetime.datetime.utcnow() - datetime.timedelta(hours=24)
).utctimetuple()
),
'aud': ['something', 'more things', 'even more things'],
'sub': 'spiffeid://somwhere.over.the',
},
['even more things'],
),
(
{
'exp': timegm(
(
datetime.datetime.utcnow() - datetime.timedelta(hours=1)
).utctimetuple()
),
'aud': ['something', 'more things', 'even more things'],
'sub': 'spiffeid://somwhere.over.the',
},
['even more things'],
),
(
{
'exp': timegm(
(
datetime.datetime.utcnow() - datetime.timedelta(minutes=1)
).utctimetuple()
),
'aud': ['something', 'more things', 'even more things'],
'sub': 'spiffeid://somwhere.over.the',
},
['more things'],
),
],
)
def test_token_expired_validate_claim(test_input_claim, test_input_audience):
with pytest.raises(TokenExpiredError) as exception:
JwtSvidValidator().validate_claims(test_input_claim, test_input_audience)
assert str(exception.value) == str(TokenExpiredError())
@pytest.mark.parametrize(
'test_input_claim, test_input_audience, expected',
[
({'aud': 'ttt', 'exp': 'ttt'}, [], str(InvalidClaimError('sub'))),
({'sub': 'ttt', 'exp': 'ttt'}, ['ttt'], str(InvalidClaimError('aud'))),
({'sub': 'ttt', 'aud': 'ttt'}, ['ttt'], str(InvalidClaimError('exp'))),
(
{'sub': 'ttt', 'aud': 'ttt', 'exp': ''},
[],
str(InvalidClaimError('exp')),
),
({}, [], str(InvalidClaimError('aud'))),
],
)
def test_missing_required_claim_validate_claims(
test_input_claim, test_input_audience, expected
):
with pytest.raises(InvalidClaimError) as exception:
JwtSvidValidator().validate_claims(test_input_claim, test_input_audience)
assert str(exception.value) == expected
@pytest.mark.parametrize(
'test_input_claim, test_input_audience',
[
(
{
'exp': timegm(
(
datetime.datetime.utcnow() + datetime.timedelta(hours=24)
).utctimetuple()
),
'aud': ['something'],
'sub': 'spiffeid://somwhere.over.the',
},
['something'],
),
(
{
'exp': timegm(
(
datetime.datetime.utcnow() + datetime.timedelta(minutes=5)
).utctimetuple()
),
'aud': ['something', 'more things'],
'sub': 'spiffeid://somwhere.over.the',
},
['something', 'more things'],
),
(
{
'exp': timegm(
(
datetime.datetime.utcnow() + datetime.timedelta(minutes=15)
).utctimetuple()
),
'aud': ['something', 'more things', 'even more things'],
'sub': 'spiffeid://somwhere.over.the',
},
['something', 'more things'],
),
],
)
def test_valid_input_validate_claims(test_input_claim, test_input_audience):
JwtSvidValidator().validate_claims(test_input_claim, test_input_audience)
assert True
"""
validate_header tests
"""
@pytest.mark.parametrize(
'test_input_header, expected',
[
(
None,
INVALID_INPUT_ERROR.format('header alg cannot be empty'),
),
(
'',
INVALID_INPUT_ERROR.format('header alg cannot be empty'),
),
(
{'tttt': 'eee'},
INVALID_INPUT_ERROR.format('header alg cannot be empty'),
),
({'alg': ''}, INVALID_INPUT_ERROR.format('header alg cannot be empty')),
],
)
def test_invalid_input_validate_header(test_input_header, expected):
with pytest.raises(ValueError) as exception:
JwtSvidValidator().validate_header(test_input_header)
assert str(exception.value) == expected
@pytest.mark.parametrize(
'test_input_header, expected',
[
({'alg': 'eee'}, str(InvalidAlgorithmError('eee'))),
({'alg': 'RS256 RS384'}, str(InvalidAlgorithmError('RS256 RS384'))),
],
)
def test_invalid_algorithm_validate_header(test_input_header, expected):
with pytest.raises(InvalidAlgorithmError) as exception:
JwtSvidValidator().validate_header(test_input_header)
assert str(exception.value) == expected
@pytest.mark.parametrize(
'test_input_header, expected',
[
({'alg': 'RS256', 'typ': 'xxx'}, str(InvalidTypeError('xxx'))),
],
)
def test_invalid_type_validate_header(test_input_header, expected):
with pytest.raises(InvalidTypeError) as exception:
JwtSvidValidator().validate_header(test_input_header)
assert str(exception.value) == expected
@pytest.mark.parametrize(
'test_input_header',
[
({'alg': 'RS256', 'typ': 'JOSE'}),
({'alg': 'PS512', 'typ': 'JWT'}),
({'alg': 'ES384', 'typ': ''}),
({'alg': 'PS256'}),
],
)
def test_valid_validate_header(test_input_header):
JwtSvidValidator().validate_header(test_input_header)
assert True
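# Editor's sketch (not part of the original file): in practice the two
# validators exercised above are applied together when checking a decoded
# JWT-SVID; the protected header first, then the payload claims:
#
#     validator = JwtSvidValidator()
#     validator.validate_header({'alg': 'RS256', 'typ': 'JWT'})
#     validator.validate_claims(claims, ['spark'])
#
# Both calls raise (ValueError, InvalidAlgorithmError, InvalidTypeError,
# InvalidClaimError, TokenExpiredError) rather than returning False, which is
# why the happy-path tests simply call them and assert True.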
| 30.226257 | 88 | 0.485722 | 844 | 10,821 | 6.03436 | 0.104265 | 0.07422 | 0.046731 | 0.072256 | 0.828392 | 0.81131 | 0.809346 | 0.779894 | 0.759866 | 0.660711 | 0 | 0.009515 | 0.378431 | 10,821 | 357 | 89 | 30.310924 | 0.747696 | 0 | 0 | 0.538941 | 0 | 0 | 0.161599 | 0.048536 | 0 | 0 | 0 | 0 | 0.028037 | 1 | 0.028037 | false | 0 | 0.015576 | 0 | 0.043614 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d935d30b90ce6629dc7cb9915b551868717a017b | 105 | py | Python | mispy/__init__.py | Te-k/mispy | 6d523d6f134d2bd38ec8264be74e73b68403da65 | [
"Apache-2.0"
] | 7 | 2019-06-06T15:30:42.000Z | 2021-09-13T08:40:09.000Z | mispy/__init__.py | Te-k/mispy | 6d523d6f134d2bd38ec8264be74e73b68403da65 | [
"Apache-2.0"
] | 10 | 2017-08-01T06:50:04.000Z | 2019-04-12T12:25:28.000Z | mispy/__init__.py | Te-k/mispy | 6d523d6f134d2bd38ec8264be74e73b68403da65 | [
"Apache-2.0"
] | 4 | 2017-11-23T16:57:40.000Z | 2019-04-02T13:43:17.000Z | from .misp import MispTag, MispEvent, MispServer, MispAttribute, MispShadowAttribute, MispTransportError
| 52.5 | 104 | 0.857143 | 9 | 105 | 10 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 105 | 1 | 105 | 105 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d93aee6c4425a37f7df8865d0b50b72c611ca8b6 | 199 | py | Python | best movie.py | Mukulphougat/Python | 4a12e5cb4ddff81dc6675695fcd1bdba8f2e2dd2 | [
"MIT"
] | null | null | null | best movie.py | Mukulphougat/Python | 4a12e5cb4ddff81dc6675695fcd1bdba8f2e2dd2 | [
"MIT"
] | null | null | null | best movie.py | Mukulphougat/Python | 4a12e5cb4ddff81dc6675695fcd1bdba8f2e2dd2 | [
"MIT"
] | null | null | null | # favourite movies
movies = input("\nEnter your favourite movies: ")
def favourite_movies(movies):
    # Return the message instead of printing it here: the original body
    # printed inside the function and returned None, so the print() call
    # below displayed "None" after the message.
    return "These are my favourite movies:- " + movies.title() + "!"
print(favourite_movies(movies)) | 39.8 | 68 | 0.723618 | 24 | 199 | 5.916667 | 0.458333 | 0.528169 | 0.591549 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130653 | 199 | 5 | 69 | 39.8 | 0.820809 | 0.080402 | 0 | 0 | 0 | 0 | 0.351648 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d947cd5b4f7e52a147c8d0008807e9c02d4cde7a | 91,949 | py | Python | test/unit/test_ibm_analytics_engine_api_v2.py | IBM/ibm-iae-python-sdk | 7d6abbb0fce1a703bfc953392024ca2d811b566b | [
"Apache-2.0"
] | null | null | null | test/unit/test_ibm_analytics_engine_api_v2.py | IBM/ibm-iae-python-sdk | 7d6abbb0fce1a703bfc953392024ca2d811b566b | [
"Apache-2.0"
] | 10 | 2021-07-22T06:29:53.000Z | 2022-03-11T07:50:24.000Z | test/unit/test_ibm_analytics_engine_api_v2.py | IBM/ibm-iae-python-sdk | 7d6abbb0fce1a703bfc953392024ca2d811b566b | [
"Apache-2.0"
] | 1 | 2020-06-29T15:15:15.000Z | 2020-06-29T15:15:15.000Z | # -*- coding: utf-8 -*-
# (C) Copyright IBM Corp. 2021.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Unit Tests for IbmAnalyticsEngineApiV2
"""
from datetime import datetime, timezone
from ibm_cloud_sdk_core.authenticators.no_auth_authenticator import NoAuthAuthenticator
from ibm_cloud_sdk_core.utils import datetime_to_string, string_to_datetime
import inspect
import json
import os
import pytest
import re
import responses
import urllib
from iaesdk.ibm_analytics_engine_api_v2 import *
_service = IbmAnalyticsEngineApiV2(
authenticator=NoAuthAuthenticator()
)
_base_url = 'https://ibm-analytics-engine-api.cloud.ibm.com'
_service.set_service_url(_base_url)
##############################################################################
# Start of Service: AnalyticsEnginesV2
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = IbmAnalyticsEngineApiV2.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, IbmAnalyticsEngineApiV2)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = IbmAnalyticsEngineApiV2.new_instance(
)
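# Editor's sketch (not part of the original tests): outside unit tests the
# client is constructed with real credentials instead of NoAuthAuthenticator,
# e.g. with an IAM API key:
#
#     from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
#     service = IbmAnalyticsEngineApiV2(authenticator=IAMAuthenticator('<apikey>'))
#     service.set_service_url('https://ibm-analytics-engine-api.cloud.ibm.com')
#
# IAMAuthenticator is part of ibm_cloud_sdk_core; the '<apikey>' placeholder
# is of course hypothetical.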
class TestGetAllAnalyticsEngines():
"""
Test Class for get_all_analytics_engines
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
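    # Editor's note (not part of the original file): returning a compiled
    # pattern for URLs that may end in '/' works because `responses.add`
    # accepts either a string URL or a compiled regex to match requests
    # against. This helper is repeated verbatim in every test class of this
    # generated file.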
@responses.activate
def test_get_all_analytics_engines_all_params(self):
"""
get_all_analytics_engines()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines')
responses.add(responses.GET,
url,
status=200)
# Invoke method
response = _service.get_all_analytics_engines()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
class TestGetAnalyticsEngineById():
"""
Test Class for get_analytics_engine_by_id
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_analytics_engine_by_id_all_params(self):
"""
get_analytics_engine_by_id()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString')
mock_response = '{"id": "id", "name": "name", "service_plan": "service_plan", "hardware_size": "hardware_size", "software_package": "software_package", "domain": "domain", "creation_time": "2019-01-01T12:00:00.000Z", "commision_time": "2019-01-01T12:00:00.000Z", "decommision_time": "2019-01-01T12:00:00.000Z", "deletion_time": "2019-01-01T12:00:00.000Z", "state_change_time": "2019-01-01T12:00:00.000Z", "state": "state", "nodes": [{"id": 2, "fqdn": "fqdn", "type": "type", "state": "state", "public_ip": "public_ip", "private_ip": "private_ip", "state_change_time": "2019-01-01T12:00:00.000Z", "commission_time": "2019-01-01T12:00:00.000Z"}], "user_credentials": {"user": "user"}, "service_endpoints": {"phoenix_jdbc": "phoenix_jdbc", "ambari_console": "ambari_console", "livy": "livy", "spark_history_server": "spark_history_server", "oozie_rest": "oozie_rest", "hive_jdbc": "hive_jdbc", "notebook_gateway_websocket": "notebook_gateway_websocket", "notebook_gateway": "notebook_gateway", "webhdfs": "webhdfs", "ssh": "ssh", "spark_sql": "spark_sql"}, "service_endpoints_ip": {"phoenix_jdbc": "phoenix_jdbc", "ambari_console": "ambari_console", "livy": "livy", "spark_history_server": "spark_history_server", "oozie_rest": "oozie_rest", "hive_jdbc": "hive_jdbc", "notebook_gateway_websocket": "notebook_gateway_websocket", "notebook_gateway": "notebook_gateway", "webhdfs": "webhdfs", "ssh": "ssh", "spark_sql": "spark_sql"}, "private_endpoint_whitelist": ["private_endpoint_whitelist"]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
# Invoke method
response = _service.get_analytics_engine_by_id(
instance_guid,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_get_analytics_engine_by_id_value_error(self):
"""
test_get_analytics_engine_by_id_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString')
mock_response = '{"id": "id", "name": "name", "service_plan": "service_plan", "hardware_size": "hardware_size", "software_package": "software_package", "domain": "domain", "creation_time": "2019-01-01T12:00:00.000Z", "commision_time": "2019-01-01T12:00:00.000Z", "decommision_time": "2019-01-01T12:00:00.000Z", "deletion_time": "2019-01-01T12:00:00.000Z", "state_change_time": "2019-01-01T12:00:00.000Z", "state": "state", "nodes": [{"id": 2, "fqdn": "fqdn", "type": "type", "state": "state", "public_ip": "public_ip", "private_ip": "private_ip", "state_change_time": "2019-01-01T12:00:00.000Z", "commission_time": "2019-01-01T12:00:00.000Z"}], "user_credentials": {"user": "user"}, "service_endpoints": {"phoenix_jdbc": "phoenix_jdbc", "ambari_console": "ambari_console", "livy": "livy", "spark_history_server": "spark_history_server", "oozie_rest": "oozie_rest", "hive_jdbc": "hive_jdbc", "notebook_gateway_websocket": "notebook_gateway_websocket", "notebook_gateway": "notebook_gateway", "webhdfs": "webhdfs", "ssh": "ssh", "spark_sql": "spark_sql"}, "service_endpoints_ip": {"phoenix_jdbc": "phoenix_jdbc", "ambari_console": "ambari_console", "livy": "livy", "spark_history_server": "spark_history_server", "oozie_rest": "oozie_rest", "hive_jdbc": "hive_jdbc", "notebook_gateway_websocket": "notebook_gateway_websocket", "notebook_gateway": "notebook_gateway", "webhdfs": "webhdfs", "ssh": "ssh", "spark_sql": "spark_sql"}, "private_endpoint_whitelist": ["private_endpoint_whitelist"]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"instance_guid": instance_guid,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_analytics_engine_by_id(**req_copy)
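# Editor's note (not part of the original file): the loop above is the SDK
# generator's standard "drop one required argument" check. For each required
# parameter it copies the kwargs with exactly that value set to None and
# asserts the client raises ValueError locally, before any HTTP request is
# attempted. The same pattern recurs in every *_value_error test below.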
class TestGetAnalyticsEngineStateById():
"""
Test Class for get_analytics_engine_state_by_id
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_analytics_engine_state_by_id_all_params(self):
"""
get_analytics_engine_state_by_id()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/state')
mock_response = '{"state": "state"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
# Invoke method
response = _service.get_analytics_engine_state_by_id(
instance_guid,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_get_analytics_engine_state_by_id_value_error(self):
"""
test_get_analytics_engine_state_by_id_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/state')
mock_response = '{"state": "state"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"instance_guid": instance_guid,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_analytics_engine_state_by_id(**req_copy)
class TestCreateCustomizationRequest():
"""
Test Class for create_customization_request
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_create_customization_request_all_params(self):
"""
create_customization_request()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/customization_requests')
mock_response = '{"request_id": 10}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
        # Construct a dict representation of an AnalyticsEngineCustomActionScript model
analytics_engine_custom_action_script_model = {}
analytics_engine_custom_action_script_model['source_type'] = 'http'
analytics_engine_custom_action_script_model['script_path'] = 'testString'
analytics_engine_custom_action_script_model['source_props'] = { 'foo': 'bar' }
        # Construct a dict representation of an AnalyticsEngineCustomAction model
analytics_engine_custom_action_model = {}
analytics_engine_custom_action_model['name'] = 'testString'
analytics_engine_custom_action_model['type'] = 'bootstrap'
analytics_engine_custom_action_model['script'] = analytics_engine_custom_action_script_model
analytics_engine_custom_action_model['script_params'] = ['testString']
# Set up parameter values
instance_guid = 'testString'
target = 'all'
custom_actions = [analytics_engine_custom_action_model]
# Invoke method
response = _service.create_customization_request(
instance_guid,
target,
custom_actions,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['target'] == 'all'
assert req_body['custom_actions'] == [analytics_engine_custom_action_model]
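        # Editor's note (not part of the original file): `responses.calls`
        # records each intercepted request, so decoding calls[0].request.body
        # lets the test assert the exact JSON payload the SDK serialized.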
@responses.activate
def test_create_customization_request_value_error(self):
"""
test_create_customization_request_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/customization_requests')
mock_response = '{"request_id": 10}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
        # Construct a dict representation of an AnalyticsEngineCustomActionScript model
analytics_engine_custom_action_script_model = {}
analytics_engine_custom_action_script_model['source_type'] = 'http'
analytics_engine_custom_action_script_model['script_path'] = 'testString'
analytics_engine_custom_action_script_model['source_props'] = { 'foo': 'bar' }
        # Construct a dict representation of an AnalyticsEngineCustomAction model
analytics_engine_custom_action_model = {}
analytics_engine_custom_action_model['name'] = 'testString'
analytics_engine_custom_action_model['type'] = 'bootstrap'
analytics_engine_custom_action_model['script'] = analytics_engine_custom_action_script_model
analytics_engine_custom_action_model['script_params'] = ['testString']
# Set up parameter values
instance_guid = 'testString'
target = 'all'
custom_actions = [analytics_engine_custom_action_model]
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"instance_guid": instance_guid,
"target": target,
"custom_actions": custom_actions,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.create_customization_request(**req_copy)
class TestGetAllCustomizationRequests():
"""
Test Class for get_all_customization_requests
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_all_customization_requests_all_params(self):
"""
get_all_customization_requests()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/customization_requests')
mock_response = '[{"id": "id"}]'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
# Invoke method
response = _service.get_all_customization_requests(
instance_guid,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_get_all_customization_requests_value_error(self):
"""
test_get_all_customization_requests_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/customization_requests')
mock_response = '[{"id": "id"}]'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"instance_guid": instance_guid,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_all_customization_requests(**req_copy)
class TestGetCustomizationRequestById():
"""
Test Class for get_customization_request_by_id
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_customization_request_by_id_all_params(self):
"""
get_customization_request_by_id()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/customization_requests/testString')
mock_response = '{"id": "id", "run_status": "run_status", "run_details": {"overall_status": "overall_status", "details": [{"node_name": "node_name", "node_type": "node_type", "start_time": "start_time", "end_time": "end_time", "time_taken": "time_taken", "status": "status", "log_file": "log_file"}]}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
request_id = 'testString'
# Invoke method
response = _service.get_customization_request_by_id(
instance_guid,
request_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_get_customization_request_by_id_value_error(self):
"""
test_get_customization_request_by_id_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/customization_requests/testString')
mock_response = '{"id": "id", "run_status": "run_status", "run_details": {"overall_status": "overall_status", "details": [{"node_name": "node_name", "node_type": "node_type", "start_time": "start_time", "end_time": "end_time", "time_taken": "time_taken", "status": "status", "log_file": "log_file"}]}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
request_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"instance_guid": instance_guid,
"request_id": request_id,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_customization_request_by_id(**req_copy)
class TestResizeCluster():
"""
Test Class for resize_cluster
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_resize_cluster_all_params(self):
"""
resize_cluster()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/resize')
mock_response = '{"request_id": "request_id"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a ResizeClusterRequestAnalyticsEngineResizeClusterByComputeNodesRequest model
resize_cluster_request_model = {}
resize_cluster_request_model['compute_nodes_count'] = 38
# Set up parameter values
instance_guid = 'testString'
body = resize_cluster_request_model
# Invoke method
response = _service.resize_cluster(
instance_guid,
body,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body == body
@responses.activate
def test_resize_cluster_value_error(self):
"""
test_resize_cluster_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/resize')
mock_response = '{"request_id": "request_id"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a ResizeClusterRequestAnalyticsEngineResizeClusterByComputeNodesRequest model
resize_cluster_request_model = {}
resize_cluster_request_model['compute_nodes_count'] = 38
# Set up parameter values
instance_guid = 'testString'
body = resize_cluster_request_model
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"instance_guid": instance_guid,
"body": body,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.resize_cluster(**req_copy)
class TestResetClusterPassword():
"""
Test Class for reset_cluster_password
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_reset_cluster_password_all_params(self):
"""
reset_cluster_password()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/reset_password')
mock_response = '{"id": "id", "user_credentials": {"user": "user", "password": "password"}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
# Invoke method
response = _service.reset_cluster_password(
instance_guid,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_reset_cluster_password_value_error(self):
"""
test_reset_cluster_password_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/reset_password')
mock_response = '{"id": "id", "user_credentials": {"user": "user", "password": "password"}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"instance_guid": instance_guid,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.reset_cluster_password(**req_copy)
class TestConfigureLogging():
"""
Test Class for configure_logging
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_configure_logging_all_params(self):
"""
configure_logging()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/log_config')
responses.add(responses.PUT,
url,
status=202)
        # Construct a dict representation of an AnalyticsEngineLoggingNodeSpec model
analytics_engine_logging_node_spec_model = {}
analytics_engine_logging_node_spec_model['node_type'] = 'management'
analytics_engine_logging_node_spec_model['components'] = ['ambari-server']
        # Construct a dict representation of an AnalyticsEngineLoggingServer model
analytics_engine_logging_server_model = {}
analytics_engine_logging_server_model['type'] = 'logdna'
analytics_engine_logging_server_model['credential'] = 'testString'
analytics_engine_logging_server_model['api_host'] = 'testString'
analytics_engine_logging_server_model['log_host'] = 'testString'
analytics_engine_logging_server_model['owner'] = 'testString'
# Set up parameter values
instance_guid = 'testString'
log_specs = [analytics_engine_logging_node_spec_model]
log_server = analytics_engine_logging_server_model
# Invoke method
response = _service.configure_logging(
instance_guid,
log_specs,
log_server,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 202
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['log_specs'] == [analytics_engine_logging_node_spec_model]
assert req_body['log_server'] == analytics_engine_logging_server_model
@responses.activate
def test_configure_logging_value_error(self):
"""
test_configure_logging_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/log_config')
responses.add(responses.PUT,
url,
status=202)
        # Construct a dict representation of an AnalyticsEngineLoggingNodeSpec model
analytics_engine_logging_node_spec_model = {}
analytics_engine_logging_node_spec_model['node_type'] = 'management'
analytics_engine_logging_node_spec_model['components'] = ['ambari-server']
        # Construct a dict representation of an AnalyticsEngineLoggingServer model
analytics_engine_logging_server_model = {}
analytics_engine_logging_server_model['type'] = 'logdna'
analytics_engine_logging_server_model['credential'] = 'testString'
analytics_engine_logging_server_model['api_host'] = 'testString'
analytics_engine_logging_server_model['log_host'] = 'testString'
analytics_engine_logging_server_model['owner'] = 'testString'
# Set up parameter values
instance_guid = 'testString'
log_specs = [analytics_engine_logging_node_spec_model]
log_server = analytics_engine_logging_server_model
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"instance_guid": instance_guid,
"log_specs": log_specs,
"log_server": log_server,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.configure_logging(**req_copy)
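# Editor's note (not part of the original file): configure_logging and
# delete_logging_config are mocked with HTTP 202 (Accepted) rather than 200,
# the conventional status for requests processed asynchronously; accordingly
# the mocks define no response body and the tests assert only the status code.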
class TestGetLoggingConfig():
"""
Test Class for get_logging_config
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_logging_config_all_params(self):
"""
get_logging_config()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/log_config')
mock_response = '{"log_specs": [{"node_type": "management", "components": ["ambari-server"]}], "log_server": {"type": "logdna", "credential": "credential", "api_host": "api_host", "log_host": "log_host", "owner": "owner"}, "log_config_status": [{"node_type": "management", "node_id": "node_id", "action": "action", "status": "status"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
# Invoke method
response = _service.get_logging_config(
instance_guid,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_get_logging_config_value_error(self):
"""
test_get_logging_config_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/log_config')
mock_response = '{"log_specs": [{"node_type": "management", "components": ["ambari-server"]}], "log_server": {"type": "logdna", "credential": "credential", "api_host": "api_host", "log_host": "log_host", "owner": "owner"}, "log_config_status": [{"node_type": "management", "node_id": "node_id", "action": "action", "status": "status"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"instance_guid": instance_guid,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_logging_config(**req_copy)
class TestDeleteLoggingConfig():
"""
Test Class for delete_logging_config
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_logging_config_all_params(self):
"""
delete_logging_config()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/log_config')
responses.add(responses.DELETE,
url,
status=202)
# Set up parameter values
instance_guid = 'testString'
# Invoke method
response = _service.delete_logging_config(
instance_guid,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 202
@responses.activate
def test_delete_logging_config_value_error(self):
"""
test_delete_logging_config_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/log_config')
responses.add(responses.DELETE,
url,
status=202)
# Set up parameter values
instance_guid = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"instance_guid": instance_guid,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.delete_logging_config(**req_copy)
class TestUpdatePrivateEndpointWhitelist():
"""
Test Class for update_private_endpoint_whitelist
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_update_private_endpoint_whitelist_all_params(self):
"""
update_private_endpoint_whitelist()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/private_endpoint_whitelist')
mock_response = '{"private_endpoint_whitelist": ["private_endpoint_whitelist"]}'
responses.add(responses.PATCH,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
ip_ranges = ['testString']
action = 'add'
# Invoke method
response = _service.update_private_endpoint_whitelist(
instance_guid,
ip_ranges,
action,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['ip_ranges'] == ['testString']
assert req_body['action'] == 'add'
@responses.activate
def test_update_private_endpoint_whitelist_value_error(self):
"""
test_update_private_endpoint_whitelist_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/v2/analytics_engines/testString/private_endpoint_whitelist')
mock_response = '{"private_endpoint_whitelist": ["private_endpoint_whitelist"]}'
responses.add(responses.PATCH,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
instance_guid = 'testString'
ip_ranges = ['testString']
action = 'add'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"instance_guid": instance_guid,
"ip_ranges": ip_ranges,
"action": action,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.update_private_endpoint_whitelist(**req_copy)
# endregion
##############################################################################
# End of Service: AnalyticsEnginesV2
##############################################################################
##############################################################################
# Start of Model Tests
##############################################################################
# region
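# Editor's note (not part of the original file): every TestModel_* case below
# runs the same round trip on a generated model class:
#
#     model = SomeModel.from_dict(model_json)        # dict -> model
#     model2 = SomeModel(**SomeModel.from_dict(model_json).__dict__)
#     assert model == model2                         # equality after rebuild
#     assert model.to_dict() == model_json           # to_dict inverts from_dict
#
# 'SomeModel' stands in for the concrete class under test; the checks prove
# the dict round trip is lossless and the keyword constructor mirrors
# from_dict.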
class TestModel_AnalyticsEngine():
"""
Test Class for AnalyticsEngine
"""
def test_analytics_engine_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngine
"""
# Construct dict forms of any model objects needed in order to build this model.
analytics_engine_cluster_node_model = {} # AnalyticsEngineClusterNode
analytics_engine_cluster_node_model['id'] = 38
analytics_engine_cluster_node_model['fqdn'] = 'testString'
analytics_engine_cluster_node_model['type'] = 'testString'
analytics_engine_cluster_node_model['state'] = 'testString'
analytics_engine_cluster_node_model['public_ip'] = 'testString'
analytics_engine_cluster_node_model['private_ip'] = 'testString'
analytics_engine_cluster_node_model['state_change_time'] = "2019-01-01T12:00:00Z"
analytics_engine_cluster_node_model['commission_time'] = "2019-01-01T12:00:00Z"
analytics_engine_user_credentials_model = {} # AnalyticsEngineUserCredentials
analytics_engine_user_credentials_model['user'] = 'testString'
service_endpoints_model = {} # ServiceEndpoints
service_endpoints_model['phoenix_jdbc'] = 'testString'
service_endpoints_model['ambari_console'] = 'testString'
service_endpoints_model['livy'] = 'testString'
service_endpoints_model['spark_history_server'] = 'testString'
service_endpoints_model['oozie_rest'] = 'testString'
service_endpoints_model['hive_jdbc'] = 'testString'
service_endpoints_model['notebook_gateway_websocket'] = 'testString'
service_endpoints_model['notebook_gateway'] = 'testString'
service_endpoints_model['webhdfs'] = 'testString'
service_endpoints_model['ssh'] = 'testString'
service_endpoints_model['spark_sql'] = 'testString'
        # Construct a json representation of an AnalyticsEngine model
analytics_engine_model_json = {}
analytics_engine_model_json['id'] = 'testString'
analytics_engine_model_json['name'] = 'testString'
analytics_engine_model_json['service_plan'] = 'testString'
analytics_engine_model_json['hardware_size'] = 'testString'
analytics_engine_model_json['software_package'] = 'testString'
analytics_engine_model_json['domain'] = 'testString'
analytics_engine_model_json['creation_time'] = "2019-01-01T12:00:00Z"
analytics_engine_model_json['commision_time'] = "2019-01-01T12:00:00Z"
analytics_engine_model_json['decommision_time'] = "2019-01-01T12:00:00Z"
analytics_engine_model_json['deletion_time'] = "2019-01-01T12:00:00Z"
analytics_engine_model_json['state_change_time'] = "2019-01-01T12:00:00Z"
analytics_engine_model_json['state'] = 'testString'
analytics_engine_model_json['nodes'] = [analytics_engine_cluster_node_model]
analytics_engine_model_json['user_credentials'] = analytics_engine_user_credentials_model
analytics_engine_model_json['service_endpoints'] = service_endpoints_model
analytics_engine_model_json['service_endpoints_ip'] = service_endpoints_model
analytics_engine_model_json['private_endpoint_whitelist'] = ['testString']
# Construct a model instance of AnalyticsEngine by calling from_dict on the json representation
analytics_engine_model = AnalyticsEngine.from_dict(analytics_engine_model_json)
assert analytics_engine_model != False
        # Rebuild a second AnalyticsEngine instance from the dict form of the first (from_dict, then the keyword constructor)
analytics_engine_model_dict = AnalyticsEngine.from_dict(analytics_engine_model_json).__dict__
analytics_engine_model2 = AnalyticsEngine(**analytics_engine_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_model == analytics_engine_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_model_json2 = analytics_engine_model.to_dict()
assert analytics_engine_model_json2 == analytics_engine_model_json
class TestModel_AnalyticsEngineClusterNode():
"""
Test Class for AnalyticsEngineClusterNode
"""
def test_analytics_engine_cluster_node_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineClusterNode
"""
        # Construct a json representation of an AnalyticsEngineClusterNode model
analytics_engine_cluster_node_model_json = {}
analytics_engine_cluster_node_model_json['id'] = 38
analytics_engine_cluster_node_model_json['fqdn'] = 'testString'
analytics_engine_cluster_node_model_json['type'] = 'testString'
analytics_engine_cluster_node_model_json['state'] = 'testString'
analytics_engine_cluster_node_model_json['public_ip'] = 'testString'
analytics_engine_cluster_node_model_json['private_ip'] = 'testString'
analytics_engine_cluster_node_model_json['state_change_time'] = "2019-01-01T12:00:00Z"
analytics_engine_cluster_node_model_json['commission_time'] = "2019-01-01T12:00:00Z"
# Construct a model instance of AnalyticsEngineClusterNode by calling from_dict on the json representation
analytics_engine_cluster_node_model = AnalyticsEngineClusterNode.from_dict(analytics_engine_cluster_node_model_json)
assert analytics_engine_cluster_node_model != False
        # Rebuild a second AnalyticsEngineClusterNode instance from the dict form of the first (from_dict, then the keyword constructor)
analytics_engine_cluster_node_model_dict = AnalyticsEngineClusterNode.from_dict(analytics_engine_cluster_node_model_json).__dict__
analytics_engine_cluster_node_model2 = AnalyticsEngineClusterNode(**analytics_engine_cluster_node_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_cluster_node_model == analytics_engine_cluster_node_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_cluster_node_model_json2 = analytics_engine_cluster_node_model.to_dict()
assert analytics_engine_cluster_node_model_json2 == analytics_engine_cluster_node_model_json
class TestModel_AnalyticsEngineCreateCustomizationResponse():
"""
Test Class for AnalyticsEngineCreateCustomizationResponse
"""
def test_analytics_engine_create_customization_response_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineCreateCustomizationResponse
"""
        # Construct a json representation of an AnalyticsEngineCreateCustomizationResponse model
analytics_engine_create_customization_response_model_json = {}
analytics_engine_create_customization_response_model_json['request_id'] = 38
# Construct a model instance of AnalyticsEngineCreateCustomizationResponse by calling from_dict on the json representation
analytics_engine_create_customization_response_model = AnalyticsEngineCreateCustomizationResponse.from_dict(analytics_engine_create_customization_response_model_json)
assert analytics_engine_create_customization_response_model != False
        # Rebuild a second AnalyticsEngineCreateCustomizationResponse instance from the dict form of the first (from_dict, then the keyword constructor)
analytics_engine_create_customization_response_model_dict = AnalyticsEngineCreateCustomizationResponse.from_dict(analytics_engine_create_customization_response_model_json).__dict__
analytics_engine_create_customization_response_model2 = AnalyticsEngineCreateCustomizationResponse(**analytics_engine_create_customization_response_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_create_customization_response_model == analytics_engine_create_customization_response_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_create_customization_response_model_json2 = analytics_engine_create_customization_response_model.to_dict()
assert analytics_engine_create_customization_response_model_json2 == analytics_engine_create_customization_response_model_json
class TestModel_AnalyticsEngineCustomAction():
"""
Test Class for AnalyticsEngineCustomAction
"""
def test_analytics_engine_custom_action_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineCustomAction
"""
# Construct dict forms of any model objects needed in order to build this model.
analytics_engine_custom_action_script_model = {} # AnalyticsEngineCustomActionScript
analytics_engine_custom_action_script_model['source_type'] = 'http'
analytics_engine_custom_action_script_model['script_path'] = 'testString'
analytics_engine_custom_action_script_model['source_props'] = { 'foo': 'bar' }
        # Construct a json representation of an AnalyticsEngineCustomAction model
analytics_engine_custom_action_model_json = {}
analytics_engine_custom_action_model_json['name'] = 'testString'
analytics_engine_custom_action_model_json['type'] = 'bootstrap'
analytics_engine_custom_action_model_json['script'] = analytics_engine_custom_action_script_model
analytics_engine_custom_action_model_json['script_params'] = ['testString']
# Construct a model instance of AnalyticsEngineCustomAction by calling from_dict on the json representation
analytics_engine_custom_action_model = AnalyticsEngineCustomAction.from_dict(analytics_engine_custom_action_model_json)
assert analytics_engine_custom_action_model != False
        # Rebuild a second AnalyticsEngineCustomAction instance from the dict form of the first (from_dict, then the keyword constructor)
analytics_engine_custom_action_model_dict = AnalyticsEngineCustomAction.from_dict(analytics_engine_custom_action_model_json).__dict__
analytics_engine_custom_action_model2 = AnalyticsEngineCustomAction(**analytics_engine_custom_action_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_custom_action_model == analytics_engine_custom_action_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_custom_action_model_json2 = analytics_engine_custom_action_model.to_dict()
assert analytics_engine_custom_action_model_json2 == analytics_engine_custom_action_model_json
class TestModel_AnalyticsEngineCustomActionScript():
"""
Test Class for AnalyticsEngineCustomActionScript
"""
def test_analytics_engine_custom_action_script_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineCustomActionScript
"""
        # Construct a json representation of an AnalyticsEngineCustomActionScript model
analytics_engine_custom_action_script_model_json = {}
analytics_engine_custom_action_script_model_json['source_type'] = 'http'
analytics_engine_custom_action_script_model_json['script_path'] = 'testString'
analytics_engine_custom_action_script_model_json['source_props'] = { 'foo': 'bar' }
# Construct a model instance of AnalyticsEngineCustomActionScript by calling from_dict on the json representation
analytics_engine_custom_action_script_model = AnalyticsEngineCustomActionScript.from_dict(analytics_engine_custom_action_script_model_json)
assert analytics_engine_custom_action_script_model != False
        # Rebuild a second AnalyticsEngineCustomActionScript instance from the dict form of the first (from_dict, then the keyword constructor)
analytics_engine_custom_action_script_model_dict = AnalyticsEngineCustomActionScript.from_dict(analytics_engine_custom_action_script_model_json).__dict__
analytics_engine_custom_action_script_model2 = AnalyticsEngineCustomActionScript(**analytics_engine_custom_action_script_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_custom_action_script_model == analytics_engine_custom_action_script_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_custom_action_script_model_json2 = analytics_engine_custom_action_script_model.to_dict()
assert analytics_engine_custom_action_script_model_json2 == analytics_engine_custom_action_script_model_json
class TestModel_AnalyticsEngineCustomizationRequestCollectionItem():
"""
Test Class for AnalyticsEngineCustomizationRequestCollectionItem
"""
def test_analytics_engine_customization_request_collection_item_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineCustomizationRequestCollectionItem
"""
        # Construct a json representation of an AnalyticsEngineCustomizationRequestCollectionItem model
analytics_engine_customization_request_collection_item_model_json = {}
analytics_engine_customization_request_collection_item_model_json['id'] = 'testString'
# Construct a model instance of AnalyticsEngineCustomizationRequestCollectionItem by calling from_dict on the json representation
analytics_engine_customization_request_collection_item_model = AnalyticsEngineCustomizationRequestCollectionItem.from_dict(analytics_engine_customization_request_collection_item_model_json)
assert analytics_engine_customization_request_collection_item_model != False
        # Rebuild a second AnalyticsEngineCustomizationRequestCollectionItem instance from the dict form of the first (from_dict, then the keyword constructor)
analytics_engine_customization_request_collection_item_model_dict = AnalyticsEngineCustomizationRequestCollectionItem.from_dict(analytics_engine_customization_request_collection_item_model_json).__dict__
analytics_engine_customization_request_collection_item_model2 = AnalyticsEngineCustomizationRequestCollectionItem(**analytics_engine_customization_request_collection_item_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_customization_request_collection_item_model == analytics_engine_customization_request_collection_item_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_customization_request_collection_item_model_json2 = analytics_engine_customization_request_collection_item_model.to_dict()
assert analytics_engine_customization_request_collection_item_model_json2 == analytics_engine_customization_request_collection_item_model_json
class TestModel_AnalyticsEngineCustomizationRunDetails():
"""
Test Class for AnalyticsEngineCustomizationRunDetails
"""
def test_analytics_engine_customization_run_details_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineCustomizationRunDetails
"""
# Construct dict forms of any model objects needed in order to build this model.
analytics_engine_node_level_customization_run_details_model = {} # AnalyticsEngineNodeLevelCustomizationRunDetails
analytics_engine_node_level_customization_run_details_model['node_name'] = 'testString'
analytics_engine_node_level_customization_run_details_model['node_type'] = 'testString'
analytics_engine_node_level_customization_run_details_model['start_time'] = 'testString'
analytics_engine_node_level_customization_run_details_model['end_time'] = 'testString'
analytics_engine_node_level_customization_run_details_model['time_taken'] = 'testString'
analytics_engine_node_level_customization_run_details_model['status'] = 'testString'
analytics_engine_node_level_customization_run_details_model['log_file'] = 'testString'
analytics_engine_customization_run_details_run_details_model = {} # AnalyticsEngineCustomizationRunDetailsRunDetails
analytics_engine_customization_run_details_run_details_model['overall_status'] = 'testString'
analytics_engine_customization_run_details_run_details_model['details'] = [analytics_engine_node_level_customization_run_details_model]
        # Construct a json representation of an AnalyticsEngineCustomizationRunDetails model
analytics_engine_customization_run_details_model_json = {}
analytics_engine_customization_run_details_model_json['id'] = 'testString'
analytics_engine_customization_run_details_model_json['run_status'] = 'testString'
analytics_engine_customization_run_details_model_json['run_details'] = analytics_engine_customization_run_details_run_details_model
# Construct a model instance of AnalyticsEngineCustomizationRunDetails by calling from_dict on the json representation
analytics_engine_customization_run_details_model = AnalyticsEngineCustomizationRunDetails.from_dict(analytics_engine_customization_run_details_model_json)
assert analytics_engine_customization_run_details_model != False
        # Rebuild a second AnalyticsEngineCustomizationRunDetails instance from the dict form of the first (from_dict, then the keyword constructor)
analytics_engine_customization_run_details_model_dict = AnalyticsEngineCustomizationRunDetails.from_dict(analytics_engine_customization_run_details_model_json).__dict__
analytics_engine_customization_run_details_model2 = AnalyticsEngineCustomizationRunDetails(**analytics_engine_customization_run_details_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_customization_run_details_model == analytics_engine_customization_run_details_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_customization_run_details_model_json2 = analytics_engine_customization_run_details_model.to_dict()
assert analytics_engine_customization_run_details_model_json2 == analytics_engine_customization_run_details_model_json
class TestModel_AnalyticsEngineCustomizationRunDetailsRunDetails():
"""
Test Class for AnalyticsEngineCustomizationRunDetailsRunDetails
"""
def test_analytics_engine_customization_run_details_run_details_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineCustomizationRunDetailsRunDetails
"""
# Construct dict forms of any model objects needed in order to build this model.
analytics_engine_node_level_customization_run_details_model = {} # AnalyticsEngineNodeLevelCustomizationRunDetails
analytics_engine_node_level_customization_run_details_model['node_name'] = 'testString'
analytics_engine_node_level_customization_run_details_model['node_type'] = 'testString'
analytics_engine_node_level_customization_run_details_model['start_time'] = 'testString'
analytics_engine_node_level_customization_run_details_model['end_time'] = 'testString'
analytics_engine_node_level_customization_run_details_model['time_taken'] = 'testString'
analytics_engine_node_level_customization_run_details_model['status'] = 'testString'
analytics_engine_node_level_customization_run_details_model['log_file'] = 'testString'
        # Construct a json representation of an AnalyticsEngineCustomizationRunDetailsRunDetails model
analytics_engine_customization_run_details_run_details_model_json = {}
analytics_engine_customization_run_details_run_details_model_json['overall_status'] = 'testString'
analytics_engine_customization_run_details_run_details_model_json['details'] = [analytics_engine_node_level_customization_run_details_model]
# Construct a model instance of AnalyticsEngineCustomizationRunDetailsRunDetails by calling from_dict on the json representation
analytics_engine_customization_run_details_run_details_model = AnalyticsEngineCustomizationRunDetailsRunDetails.from_dict(analytics_engine_customization_run_details_run_details_model_json)
assert analytics_engine_customization_run_details_run_details_model != False
# Construct a model instance of AnalyticsEngineCustomizationRunDetailsRunDetails by calling from_dict on the json representation
analytics_engine_customization_run_details_run_details_model_dict = AnalyticsEngineCustomizationRunDetailsRunDetails.from_dict(analytics_engine_customization_run_details_run_details_model_json).__dict__
analytics_engine_customization_run_details_run_details_model2 = AnalyticsEngineCustomizationRunDetailsRunDetails(**analytics_engine_customization_run_details_run_details_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_customization_run_details_run_details_model == analytics_engine_customization_run_details_run_details_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_customization_run_details_run_details_model_json2 = analytics_engine_customization_run_details_run_details_model.to_dict()
assert analytics_engine_customization_run_details_run_details_model_json2 == analytics_engine_customization_run_details_run_details_model_json
class TestModel_AnalyticsEngineLoggingConfigDetails():
"""
Test Class for AnalyticsEngineLoggingConfigDetails
"""
def test_analytics_engine_logging_config_details_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineLoggingConfigDetails
"""
# Construct dict forms of any model objects needed in order to build this model.
analytics_engine_logging_node_spec_model = {} # AnalyticsEngineLoggingNodeSpec
analytics_engine_logging_node_spec_model['node_type'] = 'management'
analytics_engine_logging_node_spec_model['components'] = ['ambari-server']
analytics_engine_logging_server_model = {} # AnalyticsEngineLoggingServer
analytics_engine_logging_server_model['type'] = 'logdna'
analytics_engine_logging_server_model['credential'] = 'testString'
analytics_engine_logging_server_model['api_host'] = 'testString'
analytics_engine_logging_server_model['log_host'] = 'testString'
analytics_engine_logging_server_model['owner'] = 'testString'
analytics_engine_logging_config_status_model = {} # AnalyticsEngineLoggingConfigStatus
analytics_engine_logging_config_status_model['node_type'] = 'management'
analytics_engine_logging_config_status_model['node_id'] = 'testString'
analytics_engine_logging_config_status_model['action'] = 'testString'
analytics_engine_logging_config_status_model['status'] = 'testString'
# Construct a json representation of a AnalyticsEngineLoggingConfigDetails model
analytics_engine_logging_config_details_model_json = {}
analytics_engine_logging_config_details_model_json['log_specs'] = [analytics_engine_logging_node_spec_model]
analytics_engine_logging_config_details_model_json['log_server'] = analytics_engine_logging_server_model
analytics_engine_logging_config_details_model_json['log_config_status'] = [analytics_engine_logging_config_status_model]
# Construct a model instance of AnalyticsEngineLoggingConfigDetails by calling from_dict on the json representation
analytics_engine_logging_config_details_model = AnalyticsEngineLoggingConfigDetails.from_dict(analytics_engine_logging_config_details_model_json)
assert analytics_engine_logging_config_details_model != False
# Construct a model instance of AnalyticsEngineLoggingConfigDetails by calling from_dict on the json representation
analytics_engine_logging_config_details_model_dict = AnalyticsEngineLoggingConfigDetails.from_dict(analytics_engine_logging_config_details_model_json).__dict__
analytics_engine_logging_config_details_model2 = AnalyticsEngineLoggingConfigDetails(**analytics_engine_logging_config_details_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_logging_config_details_model == analytics_engine_logging_config_details_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_logging_config_details_model_json2 = analytics_engine_logging_config_details_model.to_dict()
assert analytics_engine_logging_config_details_model_json2 == analytics_engine_logging_config_details_model_json
class TestModel_AnalyticsEngineLoggingConfigStatus():
"""
Test Class for AnalyticsEngineLoggingConfigStatus
"""
def test_analytics_engine_logging_config_status_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineLoggingConfigStatus
"""
# Construct a json representation of a AnalyticsEngineLoggingConfigStatus model
analytics_engine_logging_config_status_model_json = {}
analytics_engine_logging_config_status_model_json['node_type'] = 'management'
analytics_engine_logging_config_status_model_json['node_id'] = 'testString'
analytics_engine_logging_config_status_model_json['action'] = 'testString'
analytics_engine_logging_config_status_model_json['status'] = 'testString'
# Construct a model instance of AnalyticsEngineLoggingConfigStatus by calling from_dict on the json representation
analytics_engine_logging_config_status_model = AnalyticsEngineLoggingConfigStatus.from_dict(analytics_engine_logging_config_status_model_json)
assert analytics_engine_logging_config_status_model != False
# Construct a model instance of AnalyticsEngineLoggingConfigStatus by calling from_dict on the json representation
analytics_engine_logging_config_status_model_dict = AnalyticsEngineLoggingConfigStatus.from_dict(analytics_engine_logging_config_status_model_json).__dict__
analytics_engine_logging_config_status_model2 = AnalyticsEngineLoggingConfigStatus(**analytics_engine_logging_config_status_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_logging_config_status_model == analytics_engine_logging_config_status_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_logging_config_status_model_json2 = analytics_engine_logging_config_status_model.to_dict()
assert analytics_engine_logging_config_status_model_json2 == analytics_engine_logging_config_status_model_json
class TestModel_AnalyticsEngineLoggingNodeSpec():
"""
Test Class for AnalyticsEngineLoggingNodeSpec
"""
def test_analytics_engine_logging_node_spec_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineLoggingNodeSpec
"""
# Construct a json representation of a AnalyticsEngineLoggingNodeSpec model
analytics_engine_logging_node_spec_model_json = {}
analytics_engine_logging_node_spec_model_json['node_type'] = 'management'
analytics_engine_logging_node_spec_model_json['components'] = ['ambari-server']
# Construct a model instance of AnalyticsEngineLoggingNodeSpec by calling from_dict on the json representation
analytics_engine_logging_node_spec_model = AnalyticsEngineLoggingNodeSpec.from_dict(analytics_engine_logging_node_spec_model_json)
assert analytics_engine_logging_node_spec_model != False
# Construct a model instance of AnalyticsEngineLoggingNodeSpec by calling from_dict on the json representation
analytics_engine_logging_node_spec_model_dict = AnalyticsEngineLoggingNodeSpec.from_dict(analytics_engine_logging_node_spec_model_json).__dict__
analytics_engine_logging_node_spec_model2 = AnalyticsEngineLoggingNodeSpec(**analytics_engine_logging_node_spec_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_logging_node_spec_model == analytics_engine_logging_node_spec_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_logging_node_spec_model_json2 = analytics_engine_logging_node_spec_model.to_dict()
assert analytics_engine_logging_node_spec_model_json2 == analytics_engine_logging_node_spec_model_json
class TestModel_AnalyticsEngineLoggingServer():
"""
Test Class for AnalyticsEngineLoggingServer
"""
def test_analytics_engine_logging_server_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineLoggingServer
"""
# Construct a json representation of a AnalyticsEngineLoggingServer model
analytics_engine_logging_server_model_json = {}
analytics_engine_logging_server_model_json['type'] = 'logdna'
analytics_engine_logging_server_model_json['credential'] = 'testString'
analytics_engine_logging_server_model_json['api_host'] = 'testString'
analytics_engine_logging_server_model_json['log_host'] = 'testString'
analytics_engine_logging_server_model_json['owner'] = 'testString'
# Construct a model instance of AnalyticsEngineLoggingServer by calling from_dict on the json representation
analytics_engine_logging_server_model = AnalyticsEngineLoggingServer.from_dict(analytics_engine_logging_server_model_json)
assert analytics_engine_logging_server_model != False
# Construct a model instance of AnalyticsEngineLoggingServer by calling from_dict on the json representation
analytics_engine_logging_server_model_dict = AnalyticsEngineLoggingServer.from_dict(analytics_engine_logging_server_model_json).__dict__
analytics_engine_logging_server_model2 = AnalyticsEngineLoggingServer(**analytics_engine_logging_server_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_logging_server_model == analytics_engine_logging_server_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_logging_server_model_json2 = analytics_engine_logging_server_model.to_dict()
assert analytics_engine_logging_server_model_json2 == analytics_engine_logging_server_model_json
class TestModel_AnalyticsEngineNodeLevelCustomizationRunDetails():
"""
Test Class for AnalyticsEngineNodeLevelCustomizationRunDetails
"""
def test_analytics_engine_node_level_customization_run_details_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineNodeLevelCustomizationRunDetails
"""
# Construct a json representation of a AnalyticsEngineNodeLevelCustomizationRunDetails model
analytics_engine_node_level_customization_run_details_model_json = {}
analytics_engine_node_level_customization_run_details_model_json['node_name'] = 'testString'
analytics_engine_node_level_customization_run_details_model_json['node_type'] = 'testString'
analytics_engine_node_level_customization_run_details_model_json['start_time'] = 'testString'
analytics_engine_node_level_customization_run_details_model_json['end_time'] = 'testString'
analytics_engine_node_level_customization_run_details_model_json['time_taken'] = 'testString'
analytics_engine_node_level_customization_run_details_model_json['status'] = 'testString'
analytics_engine_node_level_customization_run_details_model_json['log_file'] = 'testString'
# Construct a model instance of AnalyticsEngineNodeLevelCustomizationRunDetails by calling from_dict on the json representation
analytics_engine_node_level_customization_run_details_model = AnalyticsEngineNodeLevelCustomizationRunDetails.from_dict(analytics_engine_node_level_customization_run_details_model_json)
assert analytics_engine_node_level_customization_run_details_model != False
# Construct a model instance of AnalyticsEngineNodeLevelCustomizationRunDetails by calling from_dict on the json representation
analytics_engine_node_level_customization_run_details_model_dict = AnalyticsEngineNodeLevelCustomizationRunDetails.from_dict(analytics_engine_node_level_customization_run_details_model_json).__dict__
analytics_engine_node_level_customization_run_details_model2 = AnalyticsEngineNodeLevelCustomizationRunDetails(**analytics_engine_node_level_customization_run_details_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_node_level_customization_run_details_model == analytics_engine_node_level_customization_run_details_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_node_level_customization_run_details_model_json2 = analytics_engine_node_level_customization_run_details_model.to_dict()
assert analytics_engine_node_level_customization_run_details_model_json2 == analytics_engine_node_level_customization_run_details_model_json
class TestModel_AnalyticsEngineResetClusterPasswordResponse():
"""
Test Class for AnalyticsEngineResetClusterPasswordResponse
"""
def test_analytics_engine_reset_cluster_password_response_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineResetClusterPasswordResponse
"""
# Construct dict forms of any model objects needed in order to build this model.
analytics_engine_reset_cluster_password_response_user_credentials_model = {} # AnalyticsEngineResetClusterPasswordResponseUserCredentials
analytics_engine_reset_cluster_password_response_user_credentials_model['user'] = 'testString'
analytics_engine_reset_cluster_password_response_user_credentials_model['password'] = 'testString'
# Construct a json representation of a AnalyticsEngineResetClusterPasswordResponse model
analytics_engine_reset_cluster_password_response_model_json = {}
analytics_engine_reset_cluster_password_response_model_json['id'] = 'testString'
analytics_engine_reset_cluster_password_response_model_json['user_credentials'] = analytics_engine_reset_cluster_password_response_user_credentials_model
# Construct a model instance of AnalyticsEngineResetClusterPasswordResponse by calling from_dict on the json representation
analytics_engine_reset_cluster_password_response_model = AnalyticsEngineResetClusterPasswordResponse.from_dict(analytics_engine_reset_cluster_password_response_model_json)
assert analytics_engine_reset_cluster_password_response_model != False
# Construct a model instance of AnalyticsEngineResetClusterPasswordResponse by calling from_dict on the json representation
analytics_engine_reset_cluster_password_response_model_dict = AnalyticsEngineResetClusterPasswordResponse.from_dict(analytics_engine_reset_cluster_password_response_model_json).__dict__
analytics_engine_reset_cluster_password_response_model2 = AnalyticsEngineResetClusterPasswordResponse(**analytics_engine_reset_cluster_password_response_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_reset_cluster_password_response_model == analytics_engine_reset_cluster_password_response_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_reset_cluster_password_response_model_json2 = analytics_engine_reset_cluster_password_response_model.to_dict()
assert analytics_engine_reset_cluster_password_response_model_json2 == analytics_engine_reset_cluster_password_response_model_json
class TestModel_AnalyticsEngineResetClusterPasswordResponseUserCredentials():
"""
Test Class for AnalyticsEngineResetClusterPasswordResponseUserCredentials
"""
def test_analytics_engine_reset_cluster_password_response_user_credentials_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineResetClusterPasswordResponseUserCredentials
"""
# Construct a json representation of a AnalyticsEngineResetClusterPasswordResponseUserCredentials model
analytics_engine_reset_cluster_password_response_user_credentials_model_json = {}
analytics_engine_reset_cluster_password_response_user_credentials_model_json['user'] = 'testString'
analytics_engine_reset_cluster_password_response_user_credentials_model_json['password'] = 'testString'
# Construct a model instance of AnalyticsEngineResetClusterPasswordResponseUserCredentials by calling from_dict on the json representation
analytics_engine_reset_cluster_password_response_user_credentials_model = AnalyticsEngineResetClusterPasswordResponseUserCredentials.from_dict(analytics_engine_reset_cluster_password_response_user_credentials_model_json)
assert analytics_engine_reset_cluster_password_response_user_credentials_model != False
# Construct a model instance of AnalyticsEngineResetClusterPasswordResponseUserCredentials by calling from_dict on the json representation
analytics_engine_reset_cluster_password_response_user_credentials_model_dict = AnalyticsEngineResetClusterPasswordResponseUserCredentials.from_dict(analytics_engine_reset_cluster_password_response_user_credentials_model_json).__dict__
analytics_engine_reset_cluster_password_response_user_credentials_model2 = AnalyticsEngineResetClusterPasswordResponseUserCredentials(**analytics_engine_reset_cluster_password_response_user_credentials_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_reset_cluster_password_response_user_credentials_model == analytics_engine_reset_cluster_password_response_user_credentials_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_reset_cluster_password_response_user_credentials_model_json2 = analytics_engine_reset_cluster_password_response_user_credentials_model.to_dict()
assert analytics_engine_reset_cluster_password_response_user_credentials_model_json2 == analytics_engine_reset_cluster_password_response_user_credentials_model_json
class TestModel_AnalyticsEngineResizeClusterResponse():
"""
Test Class for AnalyticsEngineResizeClusterResponse
"""
def test_analytics_engine_resize_cluster_response_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineResizeClusterResponse
"""
# Construct a json representation of a AnalyticsEngineResizeClusterResponse model
analytics_engine_resize_cluster_response_model_json = {}
analytics_engine_resize_cluster_response_model_json['request_id'] = 'testString'
# Construct a model instance of AnalyticsEngineResizeClusterResponse by calling from_dict on the json representation
analytics_engine_resize_cluster_response_model = AnalyticsEngineResizeClusterResponse.from_dict(analytics_engine_resize_cluster_response_model_json)
assert analytics_engine_resize_cluster_response_model != False
# Construct a model instance of AnalyticsEngineResizeClusterResponse by calling from_dict on the json representation
analytics_engine_resize_cluster_response_model_dict = AnalyticsEngineResizeClusterResponse.from_dict(analytics_engine_resize_cluster_response_model_json).__dict__
analytics_engine_resize_cluster_response_model2 = AnalyticsEngineResizeClusterResponse(**analytics_engine_resize_cluster_response_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_resize_cluster_response_model == analytics_engine_resize_cluster_response_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_resize_cluster_response_model_json2 = analytics_engine_resize_cluster_response_model.to_dict()
assert analytics_engine_resize_cluster_response_model_json2 == analytics_engine_resize_cluster_response_model_json
class TestModel_AnalyticsEngineState():
"""
Test Class for AnalyticsEngineState
"""
def test_analytics_engine_state_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineState
"""
# Construct a json representation of a AnalyticsEngineState model
analytics_engine_state_model_json = {}
analytics_engine_state_model_json['state'] = 'testString'
# Construct a model instance of AnalyticsEngineState by calling from_dict on the json representation
analytics_engine_state_model = AnalyticsEngineState.from_dict(analytics_engine_state_model_json)
assert analytics_engine_state_model != False
# Construct a model instance of AnalyticsEngineState by calling from_dict on the json representation
analytics_engine_state_model_dict = AnalyticsEngineState.from_dict(analytics_engine_state_model_json).__dict__
analytics_engine_state_model2 = AnalyticsEngineState(**analytics_engine_state_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_state_model == analytics_engine_state_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_state_model_json2 = analytics_engine_state_model.to_dict()
assert analytics_engine_state_model_json2 == analytics_engine_state_model_json
class TestModel_AnalyticsEngineUserCredentials():
"""
Test Class for AnalyticsEngineUserCredentials
"""
def test_analytics_engine_user_credentials_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineUserCredentials
"""
# Construct a json representation of a AnalyticsEngineUserCredentials model
analytics_engine_user_credentials_model_json = {}
analytics_engine_user_credentials_model_json['user'] = 'testString'
# Construct a model instance of AnalyticsEngineUserCredentials by calling from_dict on the json representation
analytics_engine_user_credentials_model = AnalyticsEngineUserCredentials.from_dict(analytics_engine_user_credentials_model_json)
assert analytics_engine_user_credentials_model != False
# Construct a model instance of AnalyticsEngineUserCredentials by calling from_dict on the json representation
analytics_engine_user_credentials_model_dict = AnalyticsEngineUserCredentials.from_dict(analytics_engine_user_credentials_model_json).__dict__
analytics_engine_user_credentials_model2 = AnalyticsEngineUserCredentials(**analytics_engine_user_credentials_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_user_credentials_model == analytics_engine_user_credentials_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_user_credentials_model_json2 = analytics_engine_user_credentials_model.to_dict()
assert analytics_engine_user_credentials_model_json2 == analytics_engine_user_credentials_model_json
class TestModel_AnalyticsEngineWhitelistResponse():
"""
Test Class for AnalyticsEngineWhitelistResponse
"""
def test_analytics_engine_whitelist_response_serialization(self):
"""
Test serialization/deserialization for AnalyticsEngineWhitelistResponse
"""
# Construct a json representation of a AnalyticsEngineWhitelistResponse model
analytics_engine_whitelist_response_model_json = {}
analytics_engine_whitelist_response_model_json['private_endpoint_whitelist'] = ['testString']
# Construct a model instance of AnalyticsEngineWhitelistResponse by calling from_dict on the json representation
analytics_engine_whitelist_response_model = AnalyticsEngineWhitelistResponse.from_dict(analytics_engine_whitelist_response_model_json)
assert analytics_engine_whitelist_response_model != False
# Construct a model instance of AnalyticsEngineWhitelistResponse by calling from_dict on the json representation
analytics_engine_whitelist_response_model_dict = AnalyticsEngineWhitelistResponse.from_dict(analytics_engine_whitelist_response_model_json).__dict__
analytics_engine_whitelist_response_model2 = AnalyticsEngineWhitelistResponse(**analytics_engine_whitelist_response_model_dict)
# Verify the model instances are equivalent
assert analytics_engine_whitelist_response_model == analytics_engine_whitelist_response_model2
# Convert model instance back to dict and verify no loss of data
analytics_engine_whitelist_response_model_json2 = analytics_engine_whitelist_response_model.to_dict()
assert analytics_engine_whitelist_response_model_json2 == analytics_engine_whitelist_response_model_json
class TestModel_ServiceEndpoints():
"""
Test Class for ServiceEndpoints
"""
def test_service_endpoints_serialization(self):
"""
Test serialization/deserialization for ServiceEndpoints
"""
# Construct a json representation of a ServiceEndpoints model
service_endpoints_model_json = {}
service_endpoints_model_json['phoenix_jdbc'] = 'testString'
service_endpoints_model_json['ambari_console'] = 'testString'
service_endpoints_model_json['livy'] = 'testString'
service_endpoints_model_json['spark_history_server'] = 'testString'
service_endpoints_model_json['oozie_rest'] = 'testString'
service_endpoints_model_json['hive_jdbc'] = 'testString'
service_endpoints_model_json['notebook_gateway_websocket'] = 'testString'
service_endpoints_model_json['notebook_gateway'] = 'testString'
service_endpoints_model_json['webhdfs'] = 'testString'
service_endpoints_model_json['ssh'] = 'testString'
service_endpoints_model_json['spark_sql'] = 'testString'
# Construct a model instance of ServiceEndpoints by calling from_dict on the json representation
service_endpoints_model = ServiceEndpoints.from_dict(service_endpoints_model_json)
assert service_endpoints_model != False
# Construct a model instance of ServiceEndpoints by calling from_dict on the json representation
service_endpoints_model_dict = ServiceEndpoints.from_dict(service_endpoints_model_json).__dict__
service_endpoints_model2 = ServiceEndpoints(**service_endpoints_model_dict)
# Verify the model instances are equivalent
assert service_endpoints_model == service_endpoints_model2
# Convert model instance back to dict and verify no loss of data
service_endpoints_model_json2 = service_endpoints_model.to_dict()
assert service_endpoints_model_json2 == service_endpoints_model_json
class TestModel_ResizeClusterRequestAnalyticsEngineResizeClusterByComputeNodesRequest():
"""
Test Class for ResizeClusterRequestAnalyticsEngineResizeClusterByComputeNodesRequest
"""
def test_resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_serialization(self):
"""
Test serialization/deserialization for ResizeClusterRequestAnalyticsEngineResizeClusterByComputeNodesRequest
"""
# Construct a json representation of a ResizeClusterRequestAnalyticsEngineResizeClusterByComputeNodesRequest model
resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model_json = {}
resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model_json['compute_nodes_count'] = 38
# Construct a model instance of ResizeClusterRequestAnalyticsEngineResizeClusterByComputeNodesRequest by calling from_dict on the json representation
resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model = ResizeClusterRequestAnalyticsEngineResizeClusterByComputeNodesRequest.from_dict(resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model_json)
assert resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model != False
# Construct a model instance of ResizeClusterRequestAnalyticsEngineResizeClusterByComputeNodesRequest by calling from_dict on the json representation
resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model_dict = ResizeClusterRequestAnalyticsEngineResizeClusterByComputeNodesRequest.from_dict(resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model_json).__dict__
resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model2 = ResizeClusterRequestAnalyticsEngineResizeClusterByComputeNodesRequest(**resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model_dict)
# Verify the model instances are equivalent
assert resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model == resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model2
# Convert model instance back to dict and verify no loss of data
resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model_json2 = resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model.to_dict()
assert resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model_json2 == resize_cluster_request_analytics_engine_resize_cluster_by_compute_nodes_request_model_json
class TestModel_ResizeClusterRequestAnalyticsEngineResizeClusterByTaskNodesRequest():
"""
Test Class for ResizeClusterRequestAnalyticsEngineResizeClusterByTaskNodesRequest
"""
def test_resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_serialization(self):
"""
Test serialization/deserialization for ResizeClusterRequestAnalyticsEngineResizeClusterByTaskNodesRequest
"""
# Construct a json representation of a ResizeClusterRequestAnalyticsEngineResizeClusterByTaskNodesRequest model
resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model_json = {}
resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model_json['task_nodes_count'] = 38
# Construct a model instance of ResizeClusterRequestAnalyticsEngineResizeClusterByTaskNodesRequest by calling from_dict on the json representation
resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model = ResizeClusterRequestAnalyticsEngineResizeClusterByTaskNodesRequest.from_dict(resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model_json)
assert resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model != False
# Construct a model instance of ResizeClusterRequestAnalyticsEngineResizeClusterByTaskNodesRequest by calling from_dict on the json representation
resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model_dict = ResizeClusterRequestAnalyticsEngineResizeClusterByTaskNodesRequest.from_dict(resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model_json).__dict__
resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model2 = ResizeClusterRequestAnalyticsEngineResizeClusterByTaskNodesRequest(**resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model_dict)
# Verify the model instances are equivalent
assert resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model == resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model2
# Convert model instance back to dict and verify no loss of data
resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model_json2 = resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model.to_dict()
assert resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model_json2 == resize_cluster_request_analytics_engine_resize_cluster_by_task_nodes_request_model_json
# endregion
##############################################################################
# End of Model Tests
##############################################################################
| 51.310826 | 1,498 | 0.732058 | 9,864 | 91,949 | 6.380576 | 0.034976 | 0.12131 | 0.040198 | 0.027885 | 0.881598 | 0.856891 | 0.798008 | 0.731625 | 0.692491 | 0.646891 | 0 | 0.008593 | 0.19377 | 91,949 | 1,791 | 1,499 | 51.339475 | 0.840406 | 0.210203 | 0 | 0.500537 | 0 | 0.008593 | 0.139436 | 0.03254 | 0 | 0 | 0 | 0 | 0.106337 | 1 | 0.063373 | false | 0.038668 | 0.011815 | 0 | 0.138561 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
79453538a003d572ab0794e0bb23bbe7764998d8 | 149 | py | Python | jumper/entities/weapon.py | ccmikechen/Jumper-Game | b68a03cdfee27cea2bfb321f77b57ce80904bef6 | [
"MIT"
] | null | null | null | jumper/entities/weapon.py | ccmikechen/Jumper-Game | b68a03cdfee27cea2bfb321f77b57ce80904bef6 | [
"MIT"
] | null | null | null | jumper/entities/weapon.py | ccmikechen/Jumper-Game | b68a03cdfee27cea2bfb321f77b57ce80904bef6 | [
"MIT"
] | 1 | 2017-12-19T17:42:52.000Z | 2017-12-19T17:42:52.000Z | from jumper.entities.item import Item
class Weapon(Item):
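# Base item type for weapons. trigger() is a no-op stub here and is presumably
# meant to be overridden by concrete weapon subclasses (an assumption based on
# the stub body below; the rest of the game code is not shown).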
def trigger(self, env):
pass
def get_type(self):
return 'weapon'
| 16.555556 | 37 | 0.637584 | 20 | 149 | 4.7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.268456 | 149 | 8 | 38 | 18.625 | 0.862385 | 0 | 0 | 0 | 0 | 0 | 0.040268 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.166667 | 0.166667 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
79612f2794c1da3be1e55cd63bcf4f98020a9388 | 11,334 | py | Python | tests/show/test_show_show.py | questionlp/wwdtm | f3cf3399c22bf19e369e6e0250e7c72de0be3a90 | [
"Apache-2.0"
] | null | null | null | tests/show/test_show_show.py | questionlp/wwdtm | f3cf3399c22bf19e369e6e0250e7c72de0be3a90 | [
"Apache-2.0"
] | 1 | 2022-01-17T04:25:49.000Z | 2022-01-17T04:25:49.000Z | tests/show/test_show_show.py | questionlp/wwdtm | f3cf3399c22bf19e369e6e0250e7c72de0be3a90 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# vim: set noai syntax=python ts=4 sw=4:
#
# Copyright (c) 2018-2021 Linh Pham
# wwdtm is released under the terms of the Apache License 2.0
"""Testing for object :py:class:`wwdtm.show.Show`
"""
import json
from typing import Any, Dict
import pytest
from wwdtm.show import Show
@pytest.mark.skip
def get_connect_dict() -> Dict[str, Any]:
"""Read in database connection settings and return values as a
dictionary.
"""
with open("config.json", "r") as config_file:
config_dict = json.load(config_file)
if "database" in config_dict:
return config_dict["database"]
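# The "database" block of config.json is passed straight through as the
# connect_dict for each Show instance. An illustrative (not authoritative)
# shape, assuming standard MySQL connector keyword arguments:
# {
#   "database": {
#     "host": "localhost",
#     "user": "wwdtm",
#     "password": "...",
#     "database": "wwdtm"
#   }
# }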
def test_show_retrieve_all():
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_all`
"""
show = Show(connect_dict=get_connect_dict())
shows = show.retrieve_all()
assert shows, "No shows could be retrieved"
assert "id" in shows[0], "No Show ID returned for the first list item"
def test_show_retrieve_all_details():
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_all_details`
"""
show = Show(connect_dict=get_connect_dict())
shows = show.retrieve_all_details()
assert shows, "No shows could be retrieved"
assert "date" in shows[0], "'date' was not returned for the first list item"
assert "host" in shows[0], "'host' was not returned for first list item"
def test_show_retrieve_all_ids():
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_all_ids`
"""
show = Show(connect_dict=get_connect_dict())
ids = show.retrieve_all_ids()
assert ids, "No show IDs could be retrieved"
def test_show_retrieve_all_dates():
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_all_dates`
"""
show = Show(connect_dict=get_connect_dict())
dates = show.retrieve_all_dates()
assert dates, "No show dates could be retrieved"
def test_show_retrieve_all_dates_tuple():
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_all_dates_tuple`
"""
show = Show(connect_dict=get_connect_dict())
dates = show.retrieve_all_dates_tuple()
assert dates, "No show dates could be retrieved"
assert isinstance(dates[0], tuple), "First list item is not a tuple"
def test_show_retrieve_all_show_years_months():
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_all_show_years_months`
"""
show = Show(connect_dict=get_connect_dict())
dates = show.retrieve_all_show_years_months()
assert dates, "No dates could be retrieved"
assert isinstance(dates[0], str), "First list item is not a string"
def test_show_retrieve_all_show_years_months_tuple():
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_all_shows_years_months_tuple`
"""
show = Show(connect_dict=get_connect_dict())
dates = show.retrieve_all_shows_years_months_tuple()
assert dates, "No dates could be retrieved"
assert isinstance(dates[0], tuple), "First list item is not a tuple"
@pytest.mark.parametrize("year, month, day", [(2020, 4, 25)])
def test_show_retrieve_by_date(year: int, month: int, day: int):
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_by_date`
:param year: Four digit year to test retrieving a show's information
:param month: One or two digit month to test retrieving a show's
information
:param day: One or two digit day to test retrieving a show's
information
"""
show = Show(connect_dict=get_connect_dict())
info = show.retrieve_by_date(year, month, day)
assert info, f"Show for date {year:04d}-{month:02d}-{day:02d} not found"
assert "date" in info, (f"'date' was not returned for show "
f"{year:04d}-{month:02d}-{day:02d}")
@pytest.mark.parametrize("date", ["2018-10-27"])
def test_show_retrieve_by_date_string(date: str):
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_by_date_string`
:param date: Show date string in ``YYYY-MM-DD`` format to test
retrieving a show's information
"""
show = Show(connect_dict=get_connect_dict())
info = show.retrieve_by_date_string(date)
assert info, f"Show for date {date} not found"
assert "date" in info, f"'date' was not returned for show {date}"
@pytest.mark.parametrize("show_id", [1162])
def test_show_retrieve_by_id(show_id: int):
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_by_id`
:param show_id: Show ID to test retrieving show information
"""
show = Show(connect_dict=get_connect_dict())
info = show.retrieve_by_id(show_id)
assert info, f"Show ID {show_id} not found"
assert "date" in info, f"'date' was not returned for ID {show_id}"
@pytest.mark.parametrize("year", [2018])
def test_show_retrieve_by_year(year: int):
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_by_year`
:param year: Four digit year to test retrieving show information
"""
show = Show(connect_dict=get_connect_dict())
shows = show.retrieve_by_year(year)
assert shows, f"No shows could be retrieved for year {year:04d}"
assert "id" in shows[0], (f"'id' was not returned for the first list item "
f"for year {year:04d}")
@pytest.mark.parametrize("year, month", [(1998, 1), (2018, 10)])
def test_show_retrieve_by_year_month(year: int, month: int):
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_by_year_month`
:param year: Four digit year to test retrieving show information
:param month: One or two digit month to test retrieving show
information
"""
show = Show(connect_dict=get_connect_dict())
shows = show.retrieve_by_year_month(year, month)
assert shows, (f"No shows could be retrieved for year/month "
f"{year:04d}-{month:02d}")
assert "id" in shows[0], (f"'id' was not returned for the first list "
f"item for year/month {year:02d}-{month:04d}")
@pytest.mark.parametrize("year, month, day", [(2020, 4, 25)])
def test_show_retrieve_details_by_date(year: int, month: int, day: int):
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_details_by_date`
:param year: Four digit year to test retrieving show details
:param month: One or two digit month to test retrieving show details
:param day: One or two digit day to test retrieving show details
"""
show = Show(connect_dict=get_connect_dict())
info = show.retrieve_details_by_date(year, month, day)
assert info, f"Show for date {year:04d}-{month:02d}-{day:02d} not found"
assert "date" in info, (f"'date' was not returned for show "
f"{year:04d}-{month:02d}-{day:02d}")
assert "host" in info, (f"'host' was not returned for show "
f"{year:04d}-{month:02d}-{day:02d}")
@pytest.mark.parametrize("date", ["2018-10-27"])
def test_show_retrieve_details_by_date_string(date: str):
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_details_by_date_string`
:param date: Show date string in ``YYYY-MM-DD`` format to test
retrieving show details
"""
show = Show(connect_dict=get_connect_dict())
info = show.retrieve_details_by_date_string(date)
assert info, f"Show for date {date} not found"
assert "date" in info, f"'date' was not returned for show {date}"
assert "host" in info, f"'host' was not returned for show {date}"
@pytest.mark.parametrize("show_id", [1162, 1246])
def test_show_retrieve_details_by_id(show_id: int):
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_details_by_id`
:param show_id: Show ID to test retrieving show details
"""
show = Show(connect_dict=get_connect_dict())
info = show.retrieve_details_by_id(show_id)
assert info, f"Show ID {show_id} not found"
assert "date" in info, f"'date' was not returned for ID {show_id}"
assert "host" in info, f"'host' was not returned for ID {show_id}"
@pytest.mark.parametrize("year", [2021])
def test_show_retrieve_details_by_year(year: int):
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_details_by_year`
:param year: Four digit year to test retrieving show details
"""
show = Show(connect_dict=get_connect_dict())
info = show.retrieve_details_by_year(year)
assert info, f"No shows could be retrieved for year {year:04d}"
assert "date" in info[0], (f"'date' was not returned for first list "
f"item for year {year:04d}")
assert "host" in info[0], (f"'host' was not returned for first list "
f"item for year {year:04d}")
@pytest.mark.parametrize("year, month", [(2020, 4)])
def test_show_retrieve_details_by_year_month(year: int, month: int):
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_details_by_year_month`
:param year: Four digit year to test retrieving show details
:param month: One or two digit month to test retrieving show details
"""
show = Show(connect_dict=get_connect_dict())
info = show.retrieve_details_by_year_month(year, month)
assert info, (f"No shows could be retrieved for year/month "
f"{year:04d}-{month:02d}")
assert "date" in info[0], (f"'date' was not returned for first list item "
f"for year/month {year:04d}-{month:02d}")
assert "host" in info[0], (f"'host' was not returned for first list item "
f"for year/month {year:04d}-{month:02d}")
@pytest.mark.parametrize("year", [2018])
def test_show_retrieve_months_by_year(year: int):
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_months_by_year`
:param year: Four digit year to test retrieving a list of months
"""
show = Show(connect_dict=get_connect_dict())
months = show.retrieve_months_by_year(year)
assert months, f"No months could be retrieved for year {year:04d}"
def test_show_retrieve_recent():
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_recent`
"""
show = Show(connect_dict=get_connect_dict())
shows = show.retrieve_recent()
assert shows, "No shows could be retrieved"
assert "id" in shows[0], "No Show ID returned for the first list item"
def test_show_retrieve_recent_details():
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_recent_details`
"""
show = Show(connect_dict=get_connect_dict())
shows = show.retrieve_recent_details()
assert shows, "No shows could be retrieved"
assert "date" in shows[0], "'date' was not returned for the first list item"
assert "host" in shows[0], "'host' was not returned for first list item"
@pytest.mark.parametrize("year", [2018])
def test_show_retrieve_scores_by_year(year: int):
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_scores_by_year`
:param year: Four digit year to test retrieving scores for a show
year
"""
show = Show(connect_dict=get_connect_dict())
scores = show.retrieve_scores_by_year(year)
assert scores, f"No scores could be retrieved by year {year:04d}"
assert isinstance(scores[0], tuple), "First list item is not a tuple"
def test_show_retrieve_years():
"""Testing for :py:meth:`wwdtm.show.Show.retrieve_years`
"""
show = Show(connect_dict=get_connect_dict())
years = show.retrieve_years()
assert years, f"No years could be retrieved"
assert isinstance(years[0], int), "First list item is not a number"
| 36.918567 | 83 | 0.684401 | 1,722 | 11,334 | 4.328107 | 0.074332 | 0.106266 | 0.040118 | 0.056085 | 0.904468 | 0.889306 | 0.86113 | 0.826244 | 0.795787 | 0.733933 | 0 | 0.018433 | 0.195871 | 11,334 | 306 | 84 | 37.039216 | 0.79932 | 0.260985 | 0 | 0.44898 | 0 | 0 | 0.288188 | 0.033395 | 0 | 0 | 0 | 0 | 0.326531 | 1 | 0.156463 | false | 0 | 0.027211 | 0 | 0.190476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
79635db127c8ed6164d6c727a589c3efbab2a7ac | 3,173 | py | Python | tests/test_frameshift_identification.py | mldmort/adVNTR | 412398924bc7f2ed1fa38c0c5456998d9cdd5b5a | [
"BSD-3-Clause"
] | 34 | 2017-11-19T18:54:18.000Z | 2022-03-04T15:06:55.000Z | tests/test_frameshift_identification.py | mldmort/adVNTR | 412398924bc7f2ed1fa38c0c5456998d9cdd5b5a | [
"BSD-3-Clause"
] | 42 | 2018-02-16T02:01:57.000Z | 2021-09-13T16:33:15.000Z | tests/test_frameshift_identification.py | mldmort/adVNTR | 412398924bc7f2ed1fa38c0c5456998d9cdd5b5a | [
"BSD-3-Clause"
] | 12 | 2018-02-26T15:18:55.000Z | 2021-01-27T13:02:39.000Z | import unittest
from advntr.reference_vntr import ReferenceVNTR
from advntr.vntr_finder import VNTRFinder
class TestFrameshiftIdentification(unittest.TestCase):
def get_reference_vntr(self, ru_count=10):
pattern = 'ACGTACGT'
ref_vntr = ReferenceVNTR(1, pattern, 1000, 'chr1', None, None)
ref_vntr.repeat_segments = [pattern] * ru_count
return ref_vntr
def get_vntr_finder(self):
vntr_finder = VNTRFinder(self.get_reference_vntr())
return vntr_finder
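# The tests below call identify_frameshift() with a fixed average per-base-pair
# coverage (14x), a varying number of observed indels, and an expected indel
# transition probability of 1/coverage. The intent, inferred from the arguments
# rather than from the VNTRFinder implementation, is that observed indel counts
# near or above the coverage indicate a true frameshift, while counts close to
# zero are treated as sequencing noise.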
def test_frameshift_in_uniform_coverage(self):
vntr_finder = self.get_vntr_finder()
avg_bp_coverage = 14.0
observed_indels = 14
expected_indel_transitions = 1 / avg_bp_coverage
frameshift = vntr_finder.identify_frameshift(avg_bp_coverage, observed_indels, expected_indel_transitions)
self.assertEqual(frameshift, True)
def test_frameshift_with_high_coverage(self):
vntr_finder = self.get_vntr_finder()
avg_bp_coverage = 14.0
observed_indels = 18
expected_indel_transitions = 1 / avg_bp_coverage
frameshift = vntr_finder.identify_frameshift(avg_bp_coverage, observed_indels, expected_indel_transitions)
self.assertEqual(frameshift, True)
def test_frameshift_with_low_coverage(self):
vntr_finder = self.get_vntr_finder()
avg_bp_coverage = 14.0
observed_indels = 7
expected_indel_transitions = 1 / avg_bp_coverage
frameshift = vntr_finder.identify_frameshift(avg_bp_coverage, observed_indels, expected_indel_transitions)
self.assertEqual(frameshift, True)
def test_frameshift_with_extremely_low_coverage(self):
vntr_finder = self.get_vntr_finder()
avg_bp_coverage = 14.0
observed_indels = 3
expected_indel_transitions = 1 / avg_bp_coverage
frameshift = vntr_finder.identify_frameshift(avg_bp_coverage, observed_indels, expected_indel_transitions)
self.assertEqual(frameshift, True)
def test_normal_vntr_with_high_error_in_uniform_coverage(self):
vntr_finder = self.get_vntr_finder()
avg_bp_coverage = 14.0
observed_indels = 2
expected_indel_transitions = 1 / avg_bp_coverage
frameshift = vntr_finder.identify_frameshift(avg_bp_coverage, observed_indels, expected_indel_transitions)
self.assertEqual(frameshift, False)
def test_normal_vntr_with_low_error_in_uniform_coverage(self):
vntr_finder = self.get_vntr_finder()
avg_bp_coverage = 14.0
observed_indels = 1
expected_indel_transitions = 1 / avg_bp_coverage
frameshift = vntr_finder.identify_frameshift(avg_bp_coverage, observed_indels, expected_indel_transitions)
self.assertEqual(frameshift, False)
def test_normal_vntr_without_error_in_uniform_coverage(self):
vntr_finder = self.get_vntr_finder()
avg_bp_coverage = 14.0
observed_indels = 0
expected_indel_transitions = 1 / avg_bp_coverage
frameshift = vntr_finder.identify_frameshift(avg_bp_coverage, observed_indels, expected_indel_transitions)
self.assertEqual(frameshift, False)
| 39.17284 | 114 | 0.737788 | 389 | 3,173 | 5.568123 | 0.14653 | 0.11542 | 0.126039 | 0.071099 | 0.811173 | 0.802862 | 0.802862 | 0.802862 | 0.802862 | 0.802862 | 0 | 0.017766 | 0.201702 | 3,173 | 80 | 115 | 39.6625 | 0.837347 | 0 | 0 | 0.57377 | 0 | 0 | 0.003782 | 0 | 0 | 0 | 0 | 0 | 0.114754 | 1 | 0.147541 | false | 0 | 0.04918 | 0 | 0.245902 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7970c0c3efb6e36219fa1d003ac7d2ac48996ced | 45 | py | Python | backend/services/__init__.py | Psy-Rat/Todomadoro | 692893307575e6d23b2ecb4c86b0f00289fd3c30 | [
"MIT"
] | 2 | 2021-01-12T19:58:22.000Z | 2021-01-14T08:54:34.000Z | backend/services/__init__.py | Psy-Rat/Todomadoro | 692893307575e6d23b2ecb4c86b0f00289fd3c30 | [
"MIT"
] | null | null | null | backend/services/__init__.py | Psy-Rat/Todomadoro | 692893307575e6d23b2ecb4c86b0f00289fd3c30 | [
"MIT"
] | 2 | 2021-01-13T20:31:44.000Z | 2021-01-14T08:54:28.000Z | from .hello_service import hello, fakeDataApI | 45 | 45 | 0.866667 | 6 | 45 | 6.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7980b3d10980e91c9bfa0fbf629a90595b7c86fa | 100 | py | Python | views.py | henghasaint/forum | 7ab1ce5f6d3aaca5e4145b9a29935e0114dabd1d | [
"Apache-2.0"
] | null | null | null | views.py | henghasaint/forum | 7ab1ce5f6d3aaca5e4145b9a29935e0114dabd1d | [
"Apache-2.0"
] | null | null | null | views.py | henghasaint/forum | 7ab1ce5f6d3aaca5e4145b9a29935e0114dabd1d | [
"Apache-2.0"
] | null | null | null | from django.http import HttpResponse
def index(request):
return HttpResponse("Hello, World!lcs")
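# To actually serve this view it would typically be wired into urls.py, e.g.
# (illustrative only; the project's real URL configuration is not shown here):
#     from django.urls import path
#     from . import views
#     urlpatterns = [path('', views.index, name='index')]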
| 16.666667 | 40 | 0.78 | 13 | 100 | 6 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 100 | 5 | 41 | 20 | 0.886364 | 0 | 0 | 0 | 0 | 0 | 0.16 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
79bc7dbc08e9dfc2657c0035175bd05d6b647a88 | 194 | py | Python | lib/partitioning/__init__.py | xuzhiying9510/ncflow | 3f6f4a5b2c13ac8f6375b097b35f6c55b18d212e | [
"Artistic-1.0-cl8"
] | 15 | 2021-09-24T14:03:52.000Z | 2022-03-28T15:44:21.000Z | lib/partitioning/__init__.py | xuzhiying9510/ncflow | 3f6f4a5b2c13ac8f6375b097b35f6c55b18d212e | [
"Artistic-1.0-cl8"
] | 1 | 2021-12-14T09:05:29.000Z | 2021-12-16T11:55:55.000Z | lib/partitioning/__init__.py | xuzhiying9510/ncflow | 3f6f4a5b2c13ac8f6375b097b35f6c55b18d212e | [
"Artistic-1.0-cl8"
] | 5 | 2020-12-23T15:24:40.000Z | 2022-01-06T09:42:38.000Z | from .utils import *
from .hard_coded_partitioning import *
from .leader_election import *
from .networkx_partitioning import *
from .spectral_clustering import *
from .fm_partitioning import *
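# Package aggregator: the star imports above re-export each partitioning
# strategy (hard-coded, leader-election, networkx, spectral, FM) so callers can
# import them directly from lib.partitioning (assumed intent of this __init__).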
| 27.714286 | 38 | 0.814433 | 24 | 194 | 6.333333 | 0.5 | 0.328947 | 0.289474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123711 | 194 | 6 | 39 | 32.333333 | 0.894118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
8daccf33a6bd9f94be46699835e299df7a5c61ed | 114 | py | Python | tests/abstracts/test_workflow.py | pcorpet/zenaton-python | 099c41627482a6fdc619833c2bec59dbc68cbcd4 | [
"MIT"
] | 28 | 2017-09-19T11:53:22.000Z | 2019-12-17T12:18:43.000Z | tests/abstracts/test_workflow.py | pcorpet/zenaton-python | 099c41627482a6fdc619833c2bec59dbc68cbcd4 | [
"MIT"
] | 21 | 2018-10-25T14:47:56.000Z | 2020-07-28T14:56:03.000Z | tests/abstracts/test_workflow.py | pcorpet/zenaton-python | 099c41627482a6fdc619833c2bec59dbc68cbcd4 | [
"MIT"
] | 2 | 2019-06-17T06:38:05.000Z | 2019-07-24T09:46:00.000Z | from zenaton.abstracts.workflow import Workflow
def test_has_handle():
assert hasattr(Workflow(), 'handle')
| 19 | 47 | 0.763158 | 14 | 114 | 6.071429 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 114 | 5 | 48 | 22.8 | 0.858586 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8dd1685a4ab1c46c6e02b47c080be568dd97a565 | 21 | py | Python | test.py | r-y-zadeh/MAC | 744f3785c1c029265a27a240ce60e49fc59d8eed | [
"Apache-2.0"
] | null | null | null | test.py | r-y-zadeh/MAC | 744f3785c1c029265a27a240ce60e49fc59d8eed | [
"Apache-2.0"
] | null | null | null | test.py | r-y-zadeh/MAC | 744f3785c1c029265a27a240ce60e49fc59d8eed | [
"Apache-2.0"
] | null | null | null | from app import User
| 10.5 | 20 | 0.809524 | 4 | 21 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5c22822844b7a19877e45ae4a6e3f3239b14d37c | 615 | py | Python | tests/test_Mini_CSV_Wrangler.py | Ankita0105/Mini_CSV_Wrangler_TakeHomeTest | b06c02b25f06c8fc39684a8ebf48753e63ee6381 | [
"MIT"
] | null | null | null | tests/test_Mini_CSV_Wrangler.py | Ankita0105/Mini_CSV_Wrangler_TakeHomeTest | b06c02b25f06c8fc39684a8ebf48753e63ee6381 | [
"MIT"
] | null | null | null | tests/test_Mini_CSV_Wrangler.py | Ankita0105/Mini_CSV_Wrangler_TakeHomeTest | b06c02b25f06c8fc39684a8ebf48753e63ee6381 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
test_Mini_CSV_Wrangler
----------------------------------
Tests for `Mini_CSV_Wrangler` module.
"""
import unittest
import Mini_CSV_Wrangler
from Mini_CSV_Wrangler import CSV_wrangler
from Mini_CSV_Wrangler.CSV_wrangler import wrangleCSV
class TestMini_csv_wrangler(unittest.TestCase):
def setUp(self):
pass
def test_something(self):
assert Mini_CSV_Wrangler.__version__
wrangleCSV('/Users/ankitasharma/PycharProjects/Mini_CSV_Wrangler_TakeHomeTest/Mini_CSV_Wrangler/configdata.txt')
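# Note: the path above is machine-specific; running this test on another
# machine would require pointing wrangleCSV at a locally available
# configdata.txt (assumption about the intended setup).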
def tearDown(self):
pass
| 19.83871 | 120 | 0.708943 | 74 | 615 | 5.527027 | 0.472973 | 0.295844 | 0.293399 | 0.09291 | 0.146699 | 0.146699 | 0 | 0 | 0 | 0 | 0 | 0.001919 | 0.152846 | 615 | 30 | 121 | 20.5 | 0.783109 | 0.226016 | 0 | 0.166667 | 0 | 0 | 0.20985 | 0.20985 | 0 | 0 | 0 | 0 | 0.083333 | 1 | 0.25 | false | 0.166667 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
30a197a439e6dbb3b56c7a365e42fda8e14f5f91 | 159 | py | Python | python/incometax/contributions/__init__.py | cryp2knight/pinoy-sweldo | 759be2977a9857ede3f0cc35926b05f3106ddc26 | [
"MIT"
] | null | null | null | python/incometax/contributions/__init__.py | cryp2knight/pinoy-sweldo | 759be2977a9857ede3f0cc35926b05f3106ddc26 | [
"MIT"
] | null | null | null | python/incometax/contributions/__init__.py | cryp2knight/pinoy-sweldo | 759be2977a9857ede3f0cc35926b05f3106ddc26 | [
"MIT"
] | null | null | null | from .pagibig_contribution import PagIBIGContribution
from .philhealth_contribution import PhilHealthContribution
from .sss_contribution import SSSContribution | 53 | 59 | 0.91195 | 15 | 159 | 9.466667 | 0.6 | 0.380282 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069182 | 159 | 3 | 60 | 53 | 0.959459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
30b59696220bf53853a643d8e8d6273a31eaa504 | 114 | py | Python | src/the_tale/the_tale/game/heroes/jinjaglobals.py | devapromix/the-tale | 2a10efd3270734f8cf482b4cfbc5353ef8f0494c | [
"BSD-3-Clause"
] | 1 | 2020-04-02T11:51:20.000Z | 2020-04-02T11:51:20.000Z | src/the_tale/the_tale/game/heroes/jinjaglobals.py | devapromix/the-tale | 2a10efd3270734f8cf482b4cfbc5353ef8f0494c | [
"BSD-3-Clause"
] | null | null | null | src/the_tale/the_tale/game/heroes/jinjaglobals.py | devapromix/the-tale | 2a10efd3270734f8cf482b4cfbc5353ef8f0494c | [
"BSD-3-Clause"
] | null | null | null |
import smart_imports
smart_imports.all()
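# smart_imports.all() resolves this module's dependencies at import time, which
# is presumably how names such as dext_jinja2 and conf become available below
# without explicit import statements (assumption based on typical smart_imports
# usage in this codebase).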
@dext_jinja2.jinjaglobal
def heroes_conf():
return conf.settings
| 11.4 | 24 | 0.780702 | 15 | 114 | 5.666667 | 0.8 | 0.282353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010204 | 0.140351 | 114 | 9 | 25 | 12.666667 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
a5068619419f31c772035d468548d1280e68686f | 206 | py | Python | 8_kyu/Fundamentals_Return.py | UlrichBerntien/Codewars-Katas | bbd025e67aa352d313564d3862db19fffa39f552 | [
"MIT"
] | null | null | null | 8_kyu/Fundamentals_Return.py | UlrichBerntien/Codewars-Katas | bbd025e67aa352d313564d3862db19fffa39f552 | [
"MIT"
] | null | null | null | 8_kyu/Fundamentals_Return.py | UlrichBerntien/Codewars-Katas | bbd025e67aa352d313564d3862db19fffa39f552 | [
"MIT"
] | null | null | null | def add(a,b):
return a + b
def multiply(a,b):
return a * b
def divide(a,b):
return a / b
def mod(a,b):
return a % b
def exponent(a,b):
return a ** b
def subt(a,b):
return a - b
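# Illustrative expectations for these arithmetic helpers (not part of the kata submission):
#   add(2, 3) == 5, subt(2, 3) == -1, multiply(2, 3) == 6
#   divide(6, 3) == 2, mod(7, 2) == 1, exponent(2, 10) == 1024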
| 11.444444 | 18 | 0.533981 | 42 | 206 | 2.619048 | 0.238095 | 0.218182 | 0.436364 | 0.490909 | 0.681818 | 0.590909 | 0 | 0 | 0 | 0 | 0 | 0 | 0.315534 | 206 | 17 | 19 | 12.117647 | 0.780142 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
eb7003898f524a3c1acb0157ff40287dc50be99f | 7,065 | py | Python | intrepid/jobs_pb2.py | intrepid-geophysics/intrepid-protobuf-py | e01a11e139b0ed3bb9500a8153939d7acfa8b3b4 | [
"Apache-2.0"
] | 1 | 2020-07-08T04:41:52.000Z | 2020-07-08T04:41:52.000Z | intrepid/jobs_pb2.py | intrepid-geophysics/intrepid-protobuf-py | e01a11e139b0ed3bb9500a8153939d7acfa8b3b4 | [
"Apache-2.0"
] | null | null | null | intrepid/jobs_pb2.py | intrepid-geophysics/intrepid-protobuf-py | e01a11e139b0ed3bb9500a8153939d7acfa8b3b4 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: jobs.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='jobs.proto',
package='protocols',
syntax='proto2',
serialized_options=b'\n!com.geomodeller.protocols.dc.jobs',
create_key=_descriptor._internal_create_key,
serialized_pb=b'\n\njobs.proto\x12\tprotocols\"k\n\nForwardJob\x12\x0e\n\x06job_id\x18\x01 \x02(\x05\x12\x13\n\x0bproject_url\x18\x02 \x02(\t\x12\x1d\n\x15status_descriptor_url\x18\x03 \x02(\t\x12\x0c\n\x04\x63\x61se\x18\x04 \x02(\t\x12\x0b\n\x03run\x18\x05 \x02(\t\"m\n\x0cInversionJob\x12\x0e\n\x06job_id\x18\x01 \x02(\x05\x12\x13\n\x0bproject_url\x18\x02 \x02(\t\x12\x1d\n\x15status_descriptor_url\x18\x03 \x02(\t\x12\x0c\n\x04\x63\x61se\x18\x04 \x02(\t\x12\x0b\n\x03run\x18\x05 \x02(\tB#\n!com.geomodeller.protocols.dc.jobs'
)
_FORWARDJOB = _descriptor.Descriptor(
name='ForwardJob',
full_name='protocols.ForwardJob',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='job_id', full_name='protocols.ForwardJob.job_id', index=0,
number=1, type=5, cpp_type=1, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='project_url', full_name='protocols.ForwardJob.project_url', index=1,
number=2, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='status_descriptor_url', full_name='protocols.ForwardJob.status_descriptor_url', index=2,
number=3, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='case', full_name='protocols.ForwardJob.case', index=3,
number=4, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='run', full_name='protocols.ForwardJob.run', index=4,
number=5, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=25,
serialized_end=132,
)
_INVERSIONJOB = _descriptor.Descriptor(
name='InversionJob',
full_name='protocols.InversionJob',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='job_id', full_name='protocols.InversionJob.job_id', index=0,
number=1, type=5, cpp_type=1, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='project_url', full_name='protocols.InversionJob.project_url', index=1,
number=2, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='status_descriptor_url', full_name='protocols.InversionJob.status_descriptor_url', index=2,
number=3, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='case', full_name='protocols.InversionJob.case', index=3,
number=4, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='run', full_name='protocols.InversionJob.run', index=4,
number=5, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=134,
serialized_end=243,
)
DESCRIPTOR.message_types_by_name['ForwardJob'] = _FORWARDJOB
DESCRIPTOR.message_types_by_name['InversionJob'] = _INVERSIONJOB
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
ForwardJob = _reflection.GeneratedProtocolMessageType('ForwardJob', (_message.Message,), {
'DESCRIPTOR' : _FORWARDJOB,
'__module__' : 'jobs_pb2'
# @@protoc_insertion_point(class_scope:protocols.ForwardJob)
})
_sym_db.RegisterMessage(ForwardJob)
InversionJob = _reflection.GeneratedProtocolMessageType('InversionJob', (_message.Message,), {
'DESCRIPTOR' : _INVERSIONJOB,
'__module__' : 'jobs_pb2'
# @@protoc_insertion_point(class_scope:protocols.InversionJob)
})
_sym_db.RegisterMessage(InversionJob)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
| 42.053571 | 530 | 0.749186 | 925 | 7,065 | 5.412973 | 0.140541 | 0.051128 | 0.079688 | 0.070102 | 0.768924 | 0.748951 | 0.729778 | 0.729778 | 0.729778 | 0.709007 | 0 | 0.03267 | 0.124841 | 7,065 | 167 | 531 | 42.305389 | 0.777131 | 0.045718 | 0 | 0.699301 | 1 | 0.006993 | 0.176523 | 0.135364 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.027972 | 0 | 0.027972 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
eb9c9f5cd6cfaab3bae593376abaa1ccedc944d5 | 637 | py | Python | catalyst/tools/meters/__init__.py | elephantmipt/catalyst | 6c706e4859ed7c58e5e6a5b7634176bffd0e2465 | [
"Apache-2.0"
] | 2 | 2019-04-19T21:34:31.000Z | 2019-05-02T22:50:25.000Z | catalyst/tools/meters/__init__.py | elephantmipt/catalyst | 6c706e4859ed7c58e5e6a5b7634176bffd0e2465 | [
"Apache-2.0"
] | null | null | null | catalyst/tools/meters/__init__.py | elephantmipt/catalyst | 6c706e4859ed7c58e5e6a5b7634176bffd0e2465 | [
"Apache-2.0"
] | 1 | 2020-12-02T18:42:31.000Z | 2020-12-02T18:42:31.000Z | # flake8: noqa
from catalyst.tools.meters.meter import Meter
from catalyst.tools.meters.apmeter import APMeter
from catalyst.tools.meters.aucmeter import AUCMeter
from catalyst.tools.meters.averagevaluemeter import AverageValueMeter
from catalyst.tools.meters.classerrormeter import ClassErrorMeter
from catalyst.tools.meters.confusionmeter import ConfusionMeter
from catalyst.tools.meters.mapmeter import mAPMeter
from catalyst.tools.meters.movingaveragevaluemeter import (
MovingAverageValueMeter,
)
from catalyst.tools.meters.msemeter import MSEMeter
from catalyst.tools.meters.ppv_tpr_f1_meter import PrecisionRecallF1ScoreMeter
| 45.5 | 78 | 0.868132 | 75 | 637 | 7.333333 | 0.266667 | 0.218182 | 0.309091 | 0.418182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005102 | 0.076923 | 637 | 13 | 79 | 49 | 0.930272 | 0.018838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.833333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eba9a705ba8731c2bb8f98c6b6a3c4cfc2b0e8d0 | 38,097 | py | Python | instances/passenger_demand/pas-20210421-2109-int16e/64.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int16e/64.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int16e/64.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null |
"""
PASSENGERS
"""
numPassengers = 3755
passenger_arriving = (
(6, 6, 5, 6, 1, 0, 11, 10, 4, 4, 3, 0), # 0
(6, 9, 11, 9, 2, 0, 6, 9, 4, 7, 1, 0), # 1
(2, 3, 4, 2, 3, 0, 6, 6, 8, 7, 0, 0), # 2
(7, 12, 9, 7, 4, 0, 6, 5, 11, 9, 1, 0), # 3
(2, 4, 13, 7, 4, 0, 7, 8, 5, 8, 0, 0), # 4
(2, 14, 8, 4, 4, 0, 10, 7, 11, 5, 2, 0), # 5
(6, 8, 11, 2, 5, 0, 3, 15, 5, 5, 3, 0), # 6
(6, 8, 7, 2, 3, 0, 7, 9, 3, 2, 0, 0), # 7
(6, 15, 8, 7, 2, 0, 8, 9, 8, 3, 3, 0), # 8
(5, 14, 14, 8, 1, 0, 3, 11, 6, 5, 1, 0), # 9
(3, 6, 14, 4, 4, 0, 11, 14, 3, 6, 3, 0), # 10
(4, 7, 8, 1, 1, 0, 10, 5, 12, 2, 2, 0), # 11
(8, 5, 6, 2, 2, 0, 9, 18, 8, 5, 5, 0), # 12
(4, 9, 11, 3, 4, 0, 8, 14, 5, 4, 2, 0), # 13
(5, 13, 9, 6, 5, 0, 7, 8, 3, 8, 1, 0), # 14
(5, 12, 10, 3, 1, 0, 5, 8, 8, 4, 5, 0), # 15
(7, 8, 9, 8, 0, 0, 12, 9, 8, 7, 1, 0), # 16
(1, 7, 5, 4, 0, 0, 11, 9, 6, 6, 3, 0), # 17
(4, 11, 11, 7, 3, 0, 10, 14, 6, 10, 3, 0), # 18
(4, 16, 12, 7, 1, 0, 8, 8, 4, 8, 4, 0), # 19
(3, 13, 13, 5, 3, 0, 8, 4, 7, 10, 5, 0), # 20
(8, 12, 10, 7, 2, 0, 3, 11, 7, 1, 3, 0), # 21
(5, 5, 11, 3, 2, 0, 10, 13, 7, 5, 3, 0), # 22
(1, 9, 7, 1, 5, 0, 7, 19, 4, 12, 5, 0), # 23
(7, 11, 10, 4, 2, 0, 9, 10, 9, 4, 2, 0), # 24
(2, 9, 8, 5, 3, 0, 2, 7, 4, 12, 3, 0), # 25
(4, 8, 16, 5, 2, 0, 11, 7, 8, 7, 4, 0), # 26
(7, 18, 6, 4, 1, 0, 8, 11, 8, 10, 3, 0), # 27
(3, 20, 8, 5, 3, 0, 3, 7, 15, 6, 3, 0), # 28
(2, 12, 7, 7, 1, 0, 9, 11, 10, 6, 1, 0), # 29
(5, 19, 8, 4, 5, 0, 7, 21, 9, 5, 0, 0), # 30
(6, 6, 12, 2, 3, 0, 11, 8, 4, 5, 4, 0), # 31
(0, 9, 7, 5, 4, 0, 5, 13, 6, 4, 4, 0), # 32
(3, 13, 13, 5, 4, 0, 14, 14, 2, 5, 4, 0), # 33
(3, 10, 9, 5, 1, 0, 9, 10, 7, 1, 1, 0), # 34
(3, 10, 11, 3, 1, 0, 3, 8, 1, 7, 2, 0), # 35
(2, 5, 5, 5, 3, 0, 7, 10, 5, 6, 1, 0), # 36
(7, 13, 10, 6, 3, 0, 6, 7, 10, 7, 5, 0), # 37
(5, 14, 5, 5, 4, 0, 11, 13, 7, 7, 5, 0), # 38
(5, 8, 4, 5, 3, 0, 8, 18, 13, 4, 2, 0), # 39
(7, 6, 9, 2, 0, 0, 5, 8, 8, 5, 1, 0), # 40
(6, 17, 5, 7, 2, 0, 5, 15, 12, 5, 4, 0), # 41
(5, 6, 7, 1, 4, 0, 7, 8, 8, 3, 3, 0), # 42
(1, 7, 5, 3, 0, 0, 5, 7, 5, 3, 2, 0), # 43
(5, 3, 4, 7, 2, 0, 5, 10, 5, 9, 3, 0), # 44
(5, 9, 10, 6, 2, 0, 9, 9, 4, 8, 3, 0), # 45
(3, 9, 6, 6, 4, 0, 5, 14, 6, 5, 2, 0), # 46
(8, 11, 9, 6, 3, 0, 9, 18, 6, 4, 3, 0), # 47
(9, 14, 6, 7, 1, 0, 5, 14, 6, 6, 1, 0), # 48
(8, 10, 8, 4, 1, 0, 9, 12, 8, 5, 1, 0), # 49
(1, 16, 5, 3, 2, 0, 5, 16, 4, 7, 1, 0), # 50
(3, 17, 6, 4, 2, 0, 6, 13, 3, 3, 4, 0), # 51
(6, 14, 10, 2, 5, 0, 11, 9, 7, 10, 3, 0), # 52
(5, 14, 7, 7, 5, 0, 3, 11, 12, 6, 3, 0), # 53
(8, 9, 7, 7, 5, 0, 5, 9, 4, 7, 2, 0), # 54
(5, 11, 4, 3, 0, 0, 10, 8, 10, 9, 2, 0), # 55
(2, 12, 8, 6, 5, 0, 7, 6, 4, 4, 4, 0), # 56
(4, 7, 11, 4, 2, 0, 6, 10, 5, 5, 3, 0), # 57
(1, 8, 10, 4, 1, 0, 3, 14, 2, 4, 3, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
station_arriving_intensity = (
(4.239442493415277, 10.874337121212122, 12.79077763496144, 10.138043478260869, 11.428846153846154, 7.610869565217392), # 0
(4.27923521607648, 10.995266557940518, 12.859864860039991, 10.194503019323673, 11.51450641025641, 7.608275422705315), # 1
(4.318573563554774, 11.114402244668911, 12.927312196515281, 10.249719806763286, 11.598358974358975, 7.60560193236715), # 2
(4.357424143985952, 11.231615625000002, 12.993070372750644, 10.303646739130434, 11.680326923076926, 7.60284945652174), # 3
(4.395753565505805, 11.346778142536477, 13.057090117109396, 10.356236714975847, 11.760333333333335, 7.600018357487922), # 4
(4.433528436250122, 11.459761240881035, 13.11932215795487, 10.407442632850241, 11.838301282051281, 7.597108997584541), # 5
(4.470715364354698, 11.570436363636365, 13.179717223650389, 10.457217391304349, 11.914153846153846, 7.594121739130435), # 6
(4.507280957955322, 11.678674954405162, 13.238226042559269, 10.50551388888889, 11.987814102564105, 7.591056944444445), # 7
(4.543191825187787, 11.784348456790122, 13.294799343044847, 10.552285024154589, 12.059205128205129, 7.587914975845411), # 8
(4.578414574187884, 11.88732831439394, 13.34938785347044, 10.597483695652175, 12.12825, 7.584696195652175), # 9
(4.612915813091406, 11.987485970819305, 13.401942302199371, 10.64106280193237, 12.194871794871796, 7.581400966183574), # 10
(4.646662150034143, 12.084692869668913, 13.452413417594972, 10.682975241545895, 12.25899358974359, 7.578029649758455), # 11
(4.679620193151888, 12.178820454545454, 13.500751928020566, 10.723173913043478, 12.320538461538462, 7.574582608695652), # 12
(4.71175655058043, 12.26974016905163, 13.546908561839473, 10.761611714975846, 12.37942948717949, 7.5710602053140095), # 13
(4.743037830455566, 12.357323456790127, 13.590834047415022, 10.798241545893719, 12.435589743589743, 7.567462801932367), # 14
(4.773430640913081, 12.441441761363635, 13.632479113110538, 10.833016304347826, 12.488942307692309, 7.563790760869566), # 15
(4.802901590088772, 12.521966526374861, 13.671794487289347, 10.86588888888889, 12.539410256410257, 7.560044444444445), # 16
(4.831417286118428, 12.598769195426486, 13.708730898314768, 10.896812198067634, 12.586916666666667, 7.556224214975846), # 17
(4.8589443371378405, 12.671721212121213, 13.74323907455013, 10.925739130434785, 12.631384615384619, 7.552330434782609), # 18
(4.8854493512828014, 12.740694020061728, 13.775269744358756, 10.952622584541063, 12.67273717948718, 7.5483634661835755), # 19
(4.910898936689104, 12.805559062850728, 13.804773636103969, 10.9774154589372, 12.710897435897436, 7.544323671497584), # 20
(4.935259701492538, 12.866187784090906, 13.831701478149103, 11.000070652173914, 12.74578846153846, 7.540211413043479), # 21
(4.958498253828894, 12.922451627384962, 13.856003998857469, 11.020541062801932, 12.777333333333331, 7.5360270531400975), # 22
(4.980581201833967, 12.97422203633558, 13.877631926592404, 11.038779589371982, 12.805455128205129, 7.531770954106282), # 23
(5.001475153643547, 13.021370454545455, 13.896535989717222, 11.054739130434783, 12.830076923076923, 7.52744347826087), # 24
(5.0211467173934246, 13.063768325617284, 13.91266691659526, 11.068372584541065, 12.851121794871794, 7.523044987922706), # 25
(5.039562501219393, 13.101287093153758, 13.925975435589832, 11.079632850241545, 12.86851282051282, 7.518575845410628), # 26
(5.056689113257243, 13.133798200757575, 13.936412275064265, 11.088472826086958, 12.88217307692308, 7.514036413043479), # 27
(5.072493161642767, 13.161173092031426, 13.943928163381893, 11.09484541062802, 12.89202564102564, 7.509427053140097), # 28
(5.086941254511755, 13.183283210578004, 13.948473828906026, 11.09870350241546, 12.89799358974359, 7.504748128019324), # 29
(5.1000000000000005, 13.200000000000001, 13.950000000000001, 11.100000000000001, 12.9, 7.5), # 30
(5.112219245524297, 13.213886079545453, 13.948855917874395, 11.099765849673204, 12.89926985815603, 7.4934020156588375), # 31
(5.124174680306906, 13.227588636363638, 13.945456038647343, 11.099067973856208, 12.897095035460993, 7.483239613526571), # 32
(5.135871675191815, 13.241105965909092, 13.93984891304348, 11.097913235294119, 12.893498936170213, 7.469612293853072), # 33
(5.147315601023018, 13.254436363636366, 13.93208309178744, 11.096308496732028, 12.888504964539008, 7.452619556888223), # 34
(5.158511828644501, 13.267578124999998, 13.922207125603865, 11.094260620915033, 12.882136524822696, 7.432360902881893), # 35
(5.169465728900256, 13.280529545454549, 13.91026956521739, 11.091776470588236, 12.874417021276598, 7.408935832083959), # 36
(5.180182672634271, 13.293288920454547, 13.896318961352657, 11.088862908496733, 12.865369858156027, 7.382443844744294), # 37
(5.190668030690537, 13.305854545454546, 13.8804038647343, 11.08552679738562, 12.855018439716313, 7.352984441112776), # 38
(5.200927173913044, 13.318224715909091, 13.862572826086955, 11.081775, 12.843386170212765, 7.32065712143928), # 39
(5.21096547314578, 13.330397727272729, 13.842874396135267, 11.077614379084968, 12.830496453900707, 7.285561385973679), # 40
(5.220788299232737, 13.342371874999998, 13.821357125603866, 11.073051797385622, 12.816372695035462, 7.247796734965852), # 41
(5.230401023017903, 13.354145454545458, 13.798069565217393, 11.068094117647059, 12.801038297872342, 7.207462668665667), # 42
(5.239809015345269, 13.365716761363636, 13.773060265700483, 11.06274820261438, 12.784516666666667, 7.164658687323005), # 43
(5.249017647058824, 13.377084090909092, 13.746377777777779, 11.05702091503268, 12.76683120567376, 7.119484291187739), # 44
(5.258032289002557, 13.388245738636364, 13.718070652173916, 11.050919117647059, 12.748005319148938, 7.072038980509745), # 45
(5.266858312020461, 13.399200000000002, 13.688187439613529, 11.044449673202614, 12.72806241134752, 7.022422255538898), # 46
(5.275501086956522, 13.409945170454547, 13.656776690821255, 11.037619444444445, 12.707025886524825, 6.970733616525071), # 47
(5.283965984654732, 13.420479545454548, 13.623886956521739, 11.030435294117646, 12.68491914893617, 6.9170725637181425), # 48
(5.292258375959079, 13.430801420454543, 13.589566787439615, 11.022904084967323, 12.66176560283688, 6.861538597367982), # 49
(5.300383631713555, 13.440909090909088, 13.553864734299518, 11.015032679738564, 12.63758865248227, 6.804231217724471), # 50
(5.308347122762149, 13.450800852272728, 13.516829347826087, 11.006827941176471, 12.612411702127659, 6.7452499250374816), # 51
(5.316154219948849, 13.460475, 13.47850917874396, 10.998296732026144, 12.58625815602837, 6.684694219556889), # 52
(5.3238102941176475, 13.469929829545457, 13.438952777777779, 10.98944591503268, 12.559151418439718, 6.622663601532567), # 53
(5.331320716112533, 13.479163636363635, 13.398208695652173, 10.980282352941177, 12.531114893617023, 6.559257571214393), # 54
(5.338690856777493, 13.488174715909091, 13.356325483091787, 10.970812908496733, 12.502171985815604, 6.494575628852241), # 55
(5.3459260869565215, 13.496961363636363, 13.313351690821257, 10.961044444444445, 12.472346099290782, 6.428717274695986), # 56
(5.353031777493607, 13.505521875000003, 13.269335869565218, 10.950983823529413, 12.441660638297872, 6.361782008995502), # 57
(5.360013299232737, 13.513854545454544, 13.224326570048309, 10.940637908496733, 12.410139007092198, 6.293869332000667), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(6, 6, 5, 6, 1, 0, 11, 10, 4, 4, 3, 0), # 0
(12, 15, 16, 15, 3, 0, 17, 19, 8, 11, 4, 0), # 1
(14, 18, 20, 17, 6, 0, 23, 25, 16, 18, 4, 0), # 2
(21, 30, 29, 24, 10, 0, 29, 30, 27, 27, 5, 0), # 3
(23, 34, 42, 31, 14, 0, 36, 38, 32, 35, 5, 0), # 4
(25, 48, 50, 35, 18, 0, 46, 45, 43, 40, 7, 0), # 5
(31, 56, 61, 37, 23, 0, 49, 60, 48, 45, 10, 0), # 6
(37, 64, 68, 39, 26, 0, 56, 69, 51, 47, 10, 0), # 7
(43, 79, 76, 46, 28, 0, 64, 78, 59, 50, 13, 0), # 8
(48, 93, 90, 54, 29, 0, 67, 89, 65, 55, 14, 0), # 9
(51, 99, 104, 58, 33, 0, 78, 103, 68, 61, 17, 0), # 10
(55, 106, 112, 59, 34, 0, 88, 108, 80, 63, 19, 0), # 11
(63, 111, 118, 61, 36, 0, 97, 126, 88, 68, 24, 0), # 12
(67, 120, 129, 64, 40, 0, 105, 140, 93, 72, 26, 0), # 13
(72, 133, 138, 70, 45, 0, 112, 148, 96, 80, 27, 0), # 14
(77, 145, 148, 73, 46, 0, 117, 156, 104, 84, 32, 0), # 15
(84, 153, 157, 81, 46, 0, 129, 165, 112, 91, 33, 0), # 16
(85, 160, 162, 85, 46, 0, 140, 174, 118, 97, 36, 0), # 17
(89, 171, 173, 92, 49, 0, 150, 188, 124, 107, 39, 0), # 18
(93, 187, 185, 99, 50, 0, 158, 196, 128, 115, 43, 0), # 19
(96, 200, 198, 104, 53, 0, 166, 200, 135, 125, 48, 0), # 20
(104, 212, 208, 111, 55, 0, 169, 211, 142, 126, 51, 0), # 21
(109, 217, 219, 114, 57, 0, 179, 224, 149, 131, 54, 0), # 22
(110, 226, 226, 115, 62, 0, 186, 243, 153, 143, 59, 0), # 23
(117, 237, 236, 119, 64, 0, 195, 253, 162, 147, 61, 0), # 24
(119, 246, 244, 124, 67, 0, 197, 260, 166, 159, 64, 0), # 25
(123, 254, 260, 129, 69, 0, 208, 267, 174, 166, 68, 0), # 26
(130, 272, 266, 133, 70, 0, 216, 278, 182, 176, 71, 0), # 27
(133, 292, 274, 138, 73, 0, 219, 285, 197, 182, 74, 0), # 28
(135, 304, 281, 145, 74, 0, 228, 296, 207, 188, 75, 0), # 29
(140, 323, 289, 149, 79, 0, 235, 317, 216, 193, 75, 0), # 30
(146, 329, 301, 151, 82, 0, 246, 325, 220, 198, 79, 0), # 31
(146, 338, 308, 156, 86, 0, 251, 338, 226, 202, 83, 0), # 32
(149, 351, 321, 161, 90, 0, 265, 352, 228, 207, 87, 0), # 33
(152, 361, 330, 166, 91, 0, 274, 362, 235, 208, 88, 0), # 34
(155, 371, 341, 169, 92, 0, 277, 370, 236, 215, 90, 0), # 35
(157, 376, 346, 174, 95, 0, 284, 380, 241, 221, 91, 0), # 36
(164, 389, 356, 180, 98, 0, 290, 387, 251, 228, 96, 0), # 37
(169, 403, 361, 185, 102, 0, 301, 400, 258, 235, 101, 0), # 38
(174, 411, 365, 190, 105, 0, 309, 418, 271, 239, 103, 0), # 39
(181, 417, 374, 192, 105, 0, 314, 426, 279, 244, 104, 0), # 40
(187, 434, 379, 199, 107, 0, 319, 441, 291, 249, 108, 0), # 41
(192, 440, 386, 200, 111, 0, 326, 449, 299, 252, 111, 0), # 42
(193, 447, 391, 203, 111, 0, 331, 456, 304, 255, 113, 0), # 43
(198, 450, 395, 210, 113, 0, 336, 466, 309, 264, 116, 0), # 44
(203, 459, 405, 216, 115, 0, 345, 475, 313, 272, 119, 0), # 45
(206, 468, 411, 222, 119, 0, 350, 489, 319, 277, 121, 0), # 46
(214, 479, 420, 228, 122, 0, 359, 507, 325, 281, 124, 0), # 47
(223, 493, 426, 235, 123, 0, 364, 521, 331, 287, 125, 0), # 48
(231, 503, 434, 239, 124, 0, 373, 533, 339, 292, 126, 0), # 49
(232, 519, 439, 242, 126, 0, 378, 549, 343, 299, 127, 0), # 50
(235, 536, 445, 246, 128, 0, 384, 562, 346, 302, 131, 0), # 51
(241, 550, 455, 248, 133, 0, 395, 571, 353, 312, 134, 0), # 52
(246, 564, 462, 255, 138, 0, 398, 582, 365, 318, 137, 0), # 53
(254, 573, 469, 262, 143, 0, 403, 591, 369, 325, 139, 0), # 54
(259, 584, 473, 265, 143, 0, 413, 599, 379, 334, 141, 0), # 55
(261, 596, 481, 271, 148, 0, 420, 605, 383, 338, 145, 0), # 56
(265, 603, 492, 275, 150, 0, 426, 615, 388, 343, 148, 0), # 57
(266, 611, 502, 279, 151, 0, 429, 629, 390, 347, 151, 0), # 58
(266, 611, 502, 279, 151, 0, 429, 629, 390, 347, 151, 0), # 59
)
passenger_arriving_rate = (
(4.239442493415277, 8.699469696969697, 7.674466580976864, 4.055217391304347, 2.2857692307692306, 0.0, 7.610869565217392, 9.143076923076922, 6.082826086956521, 5.1163110539845755, 2.174867424242424, 0.0), # 0
(4.27923521607648, 8.796213246352414, 7.715918916023995, 4.077801207729468, 2.3029012820512818, 0.0, 7.608275422705315, 9.211605128205127, 6.116701811594203, 5.1439459440159965, 2.1990533115881035, 0.0), # 1
(4.318573563554774, 8.891521795735128, 7.7563873179091685, 4.099887922705314, 2.3196717948717946, 0.0, 7.60560193236715, 9.278687179487179, 6.1498318840579715, 5.170924878606112, 2.222880448933782, 0.0), # 2
(4.357424143985952, 8.9852925, 7.795842223650386, 4.121458695652173, 2.336065384615385, 0.0, 7.60284945652174, 9.34426153846154, 6.18218804347826, 5.197228149100257, 2.246323125, 0.0), # 3
(4.395753565505805, 9.07742251402918, 7.834254070265637, 4.142494685990338, 2.352066666666667, 0.0, 7.600018357487922, 9.408266666666668, 6.213742028985508, 5.222836046843758, 2.269355628507295, 0.0), # 4
(4.433528436250122, 9.167808992704828, 7.8715932947729215, 4.1629770531400965, 2.367660256410256, 0.0, 7.597108997584541, 9.470641025641024, 6.244465579710145, 5.247728863181948, 2.291952248176207, 0.0), # 5
(4.470715364354698, 9.25634909090909, 7.907830334190233, 4.182886956521739, 2.382830769230769, 0.0, 7.594121739130435, 9.531323076923076, 6.274330434782609, 5.271886889460156, 2.3140872727272725, 0.0), # 6
(4.507280957955322, 9.34293996352413, 7.942935625535561, 4.2022055555555555, 2.397562820512821, 0.0, 7.591056944444445, 9.590251282051284, 6.303308333333334, 5.295290417023708, 2.3357349908810323, 0.0), # 7
(4.543191825187787, 9.427478765432097, 7.976879605826908, 4.220914009661835, 2.4118410256410256, 0.0, 7.587914975845411, 9.647364102564103, 6.3313710144927535, 5.317919737217938, 2.3568696913580243, 0.0), # 8
(4.578414574187884, 9.509862651515151, 8.009632712082263, 4.23899347826087, 2.4256499999999996, 0.0, 7.584696195652175, 9.702599999999999, 6.358490217391305, 5.339755141388175, 2.377465662878788, 0.0), # 9
(4.612915813091406, 9.589988776655444, 8.041165381319622, 4.256425120772947, 2.438974358974359, 0.0, 7.581400966183574, 9.755897435897436, 6.384637681159421, 5.360776920879748, 2.397497194163861, 0.0), # 10
(4.646662150034143, 9.66775429573513, 8.071448050556983, 4.273190096618357, 2.4517987179487175, 0.0, 7.578029649758455, 9.80719487179487, 6.409785144927537, 5.380965367037988, 2.4169385739337823, 0.0), # 11
(4.679620193151888, 9.743056363636363, 8.100451156812339, 4.289269565217391, 2.4641076923076923, 0.0, 7.574582608695652, 9.85643076923077, 6.433904347826087, 5.400300771208226, 2.4357640909090907, 0.0), # 12
(4.71175655058043, 9.815792135241303, 8.128145137103683, 4.304644685990338, 2.475885897435898, 0.0, 7.5710602053140095, 9.903543589743592, 6.456967028985507, 5.418763424735789, 2.4539480338103257, 0.0), # 13
(4.743037830455566, 9.8858587654321, 8.154500428449014, 4.3192966183574875, 2.4871179487179482, 0.0, 7.567462801932367, 9.948471794871793, 6.478944927536231, 5.4363336189660085, 2.471464691358025, 0.0), # 14
(4.773430640913081, 9.953153409090907, 8.179487467866322, 4.33320652173913, 2.4977884615384616, 0.0, 7.563790760869566, 9.991153846153846, 6.499809782608695, 5.452991645244214, 2.488288352272727, 0.0), # 15
(4.802901590088772, 10.017573221099887, 8.203076692373608, 4.346355555555555, 2.507882051282051, 0.0, 7.560044444444445, 10.031528205128204, 6.519533333333333, 5.468717794915738, 2.504393305274972, 0.0), # 16
(4.831417286118428, 10.079015356341188, 8.22523853898886, 4.358724879227053, 2.517383333333333, 0.0, 7.556224214975846, 10.069533333333332, 6.538087318840581, 5.483492359325907, 2.519753839085297, 0.0), # 17
(4.8589443371378405, 10.13737696969697, 8.245943444730077, 4.370295652173914, 2.5262769230769235, 0.0, 7.552330434782609, 10.105107692307694, 6.55544347826087, 5.4972956298200515, 2.5343442424242424, 0.0), # 18
(4.8854493512828014, 10.192555216049382, 8.265161846615253, 4.381049033816424, 2.534547435897436, 0.0, 7.5483634661835755, 10.138189743589743, 6.571573550724637, 5.510107897743501, 2.5481388040123454, 0.0), # 19
(4.910898936689104, 10.244447250280581, 8.282864181662381, 4.3909661835748794, 2.542179487179487, 0.0, 7.544323671497584, 10.168717948717948, 6.58644927536232, 5.5219094544415865, 2.5611118125701453, 0.0), # 20
(4.935259701492538, 10.292950227272724, 8.299020886889462, 4.400028260869565, 2.5491576923076917, 0.0, 7.540211413043479, 10.196630769230767, 6.600042391304348, 5.53268059125964, 2.573237556818181, 0.0), # 21
(4.958498253828894, 10.337961301907969, 8.313602399314481, 4.408216425120773, 2.555466666666666, 0.0, 7.5360270531400975, 10.221866666666664, 6.6123246376811595, 5.542401599542987, 2.584490325476992, 0.0), # 22
(4.980581201833967, 10.379377629068463, 8.326579155955441, 4.415511835748792, 2.5610910256410255, 0.0, 7.531770954106282, 10.244364102564102, 6.623267753623189, 5.551052770636961, 2.5948444072671157, 0.0), # 23
(5.001475153643547, 10.417096363636363, 8.337921593830332, 4.421895652173912, 2.5660153846153846, 0.0, 7.52744347826087, 10.264061538461538, 6.632843478260869, 5.558614395886888, 2.6042740909090907, 0.0), # 24
(5.0211467173934246, 10.451014660493826, 8.347600149957156, 4.427349033816426, 2.5702243589743587, 0.0, 7.523044987922706, 10.280897435897435, 6.641023550724639, 5.565066766638103, 2.6127536651234564, 0.0), # 25
(5.039562501219393, 10.481029674523006, 8.355585261353898, 4.431853140096617, 2.5737025641025637, 0.0, 7.518575845410628, 10.294810256410255, 6.647779710144927, 5.570390174235932, 2.6202574186307515, 0.0), # 26
(5.056689113257243, 10.507038560606059, 8.361847365038559, 4.435389130434783, 2.5764346153846156, 0.0, 7.514036413043479, 10.305738461538462, 6.653083695652175, 5.574564910025706, 2.6267596401515148, 0.0), # 27
(5.072493161642767, 10.52893847362514, 8.366356898029135, 4.437938164251207, 2.578405128205128, 0.0, 7.509427053140097, 10.313620512820512, 6.656907246376812, 5.5775712653527565, 2.632234618406285, 0.0), # 28
(5.086941254511755, 10.546626568462402, 8.369084297343615, 4.439481400966184, 2.579598717948718, 0.0, 7.504748128019324, 10.318394871794872, 6.659222101449276, 5.57938953156241, 2.6366566421156006, 0.0), # 29
(5.1000000000000005, 10.56, 8.370000000000001, 4.44, 2.58, 0.0, 7.5, 10.32, 6.660000000000001, 5.58, 2.64, 0.0), # 30
(5.112219245524297, 10.571108863636361, 8.369313550724637, 4.439906339869282, 2.5798539716312057, 0.0, 7.4934020156588375, 10.319415886524823, 6.659859509803923, 5.579542367149758, 2.6427772159090903, 0.0), # 31
(5.124174680306906, 10.582070909090909, 8.367273623188405, 4.439627189542483, 2.5794190070921985, 0.0, 7.483239613526571, 10.317676028368794, 6.659440784313724, 5.578182415458937, 2.6455177272727273, 0.0), # 32
(5.135871675191815, 10.592884772727274, 8.363909347826088, 4.439165294117647, 2.5786997872340423, 0.0, 7.469612293853072, 10.314799148936169, 6.658747941176471, 5.575939565217392, 2.6482211931818185, 0.0), # 33
(5.147315601023018, 10.603549090909091, 8.359249855072465, 4.438523398692811, 2.5777009929078014, 0.0, 7.452619556888223, 10.310803971631206, 6.657785098039217, 5.572833236714976, 2.6508872727272728, 0.0), # 34
(5.158511828644501, 10.614062499999998, 8.353324275362318, 4.437704248366013, 2.576427304964539, 0.0, 7.432360902881893, 10.305709219858157, 6.65655637254902, 5.568882850241546, 2.6535156249999994, 0.0), # 35
(5.169465728900256, 10.624423636363638, 8.346161739130434, 4.436710588235294, 2.5748834042553193, 0.0, 7.408935832083959, 10.299533617021277, 6.655065882352941, 5.564107826086956, 2.6561059090909094, 0.0), # 36
(5.180182672634271, 10.634631136363637, 8.337791376811595, 4.435545163398693, 2.573073971631205, 0.0, 7.382443844744294, 10.29229588652482, 6.65331774509804, 5.558527584541062, 2.6586577840909094, 0.0), # 37
(5.190668030690537, 10.644683636363636, 8.32824231884058, 4.4342107189542475, 2.5710036879432625, 0.0, 7.352984441112776, 10.28401475177305, 6.651316078431372, 5.5521615458937195, 2.661170909090909, 0.0), # 38
(5.200927173913044, 10.654579772727272, 8.317543695652173, 4.43271, 2.568677234042553, 0.0, 7.32065712143928, 10.274708936170212, 6.649065, 5.545029130434782, 2.663644943181818, 0.0), # 39
(5.21096547314578, 10.664318181818182, 8.305724637681159, 4.431045751633987, 2.566099290780141, 0.0, 7.285561385973679, 10.264397163120565, 6.646568627450981, 5.537149758454106, 2.6660795454545454, 0.0), # 40
(5.220788299232737, 10.673897499999997, 8.29281427536232, 4.429220718954248, 2.563274539007092, 0.0, 7.247796734965852, 10.253098156028368, 6.643831078431373, 5.5285428502415455, 2.6684743749999993, 0.0), # 41
(5.230401023017903, 10.683316363636365, 8.278841739130435, 4.427237647058823, 2.560207659574468, 0.0, 7.207462668665667, 10.240830638297872, 6.640856470588235, 5.519227826086957, 2.6708290909090913, 0.0), # 42
(5.239809015345269, 10.692573409090908, 8.26383615942029, 4.4250992810457515, 2.556903333333333, 0.0, 7.164658687323005, 10.227613333333332, 6.637648921568627, 5.509224106280192, 2.673143352272727, 0.0), # 43
(5.249017647058824, 10.701667272727272, 8.247826666666667, 4.422808366013072, 2.5533662411347517, 0.0, 7.119484291187739, 10.213464964539007, 6.634212549019608, 5.498551111111111, 2.675416818181818, 0.0), # 44
(5.258032289002557, 10.71059659090909, 8.23084239130435, 4.420367647058823, 2.5496010638297872, 0.0, 7.072038980509745, 10.198404255319149, 6.630551470588235, 5.487228260869566, 2.6776491477272724, 0.0), # 45
(5.266858312020461, 10.71936, 8.212912463768117, 4.417779869281045, 2.5456124822695037, 0.0, 7.022422255538898, 10.182449929078015, 6.626669803921568, 5.475274975845411, 2.67984, 0.0), # 46
(5.275501086956522, 10.727956136363636, 8.194066014492753, 4.415047777777778, 2.5414051773049646, 0.0, 6.970733616525071, 10.165620709219858, 6.6225716666666665, 5.462710676328501, 2.681989034090909, 0.0), # 47
(5.283965984654732, 10.736383636363637, 8.174332173913044, 4.412174117647059, 2.536983829787234, 0.0, 6.9170725637181425, 10.147935319148935, 6.618261176470588, 5.449554782608695, 2.6840959090909093, 0.0), # 48
(5.292258375959079, 10.744641136363633, 8.15374007246377, 4.409161633986929, 2.5323531205673757, 0.0, 6.861538597367982, 10.129412482269503, 6.613742450980394, 5.435826714975845, 2.6861602840909082, 0.0), # 49
(5.300383631713555, 10.752727272727268, 8.13231884057971, 4.406013071895425, 2.527517730496454, 0.0, 6.804231217724471, 10.110070921985816, 6.6090196078431385, 5.421545893719807, 2.688181818181817, 0.0), # 50
(5.308347122762149, 10.760640681818181, 8.110097608695652, 4.4027311764705885, 2.5224823404255314, 0.0, 6.7452499250374816, 10.089929361702126, 6.604096764705883, 5.406731739130435, 2.6901601704545453, 0.0), # 51
(5.316154219948849, 10.768379999999999, 8.087105507246376, 4.399318692810457, 2.517251631205674, 0.0, 6.684694219556889, 10.069006524822695, 6.5989780392156865, 5.391403671497584, 2.6920949999999997, 0.0), # 52
(5.3238102941176475, 10.775943863636364, 8.063371666666667, 4.395778366013072, 2.5118302836879436, 0.0, 6.622663601532567, 10.047321134751774, 6.593667549019608, 5.375581111111111, 2.693985965909091, 0.0), # 53
(5.331320716112533, 10.783330909090907, 8.038925217391304, 4.392112941176471, 2.5062229787234043, 0.0, 6.559257571214393, 10.024891914893617, 6.5881694117647065, 5.359283478260869, 2.6958327272727267, 0.0), # 54
(5.338690856777493, 10.790539772727271, 8.013795289855072, 4.388325163398693, 2.5004343971631204, 0.0, 6.494575628852241, 10.001737588652482, 6.58248774509804, 5.342530193236715, 2.697634943181818, 0.0), # 55
(5.3459260869565215, 10.79756909090909, 7.988011014492754, 4.384417777777777, 2.494469219858156, 0.0, 6.428717274695986, 9.977876879432625, 6.576626666666667, 5.325340676328502, 2.6993922727272723, 0.0), # 56
(5.353031777493607, 10.804417500000001, 7.96160152173913, 4.380393529411765, 2.4883321276595742, 0.0, 6.361782008995502, 9.953328510638297, 6.570590294117648, 5.307734347826087, 2.7011043750000003, 0.0), # 57
(5.360013299232737, 10.811083636363634, 7.934595942028984, 4.376255163398692, 2.4820278014184396, 0.0, 6.293869332000667, 9.928111205673758, 6.564382745098039, 5.289730628019323, 2.7027709090909084, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# initial entropy
entropy = 258194110137029475889902652135037600173
# index for seed sequence child
child_seed_index = (
1, # 0
63, # 1
)
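# Illustrative sketch (not part of the original instance data): one way the
# entropy value and child_seed_index entries above can be turned into
# independent, reproducible generators via numpy's SeedSequence, following the
# documentation linked in the docstring. Assumes numpy is available; the
# _children/_streams names are hypothetical and only shown for clarity.
from numpy.random import SeedSequence, default_rng
_children = SeedSequence(entropy).spawn(max(child_seed_index) + 1)
_streams = [default_rng(_children[i]) for i in child_seed_index]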
| 113.722388 | 214 | 0.730582 | 5,147 | 38,097 | 5.405479 | 0.234311 | 0.310546 | 0.245849 | 0.465818 | 0.325929 | 0.325929 | 0.325498 | 0.325498 | 0.325498 | 0.325498 | 0 | 0.820147 | 0.118487 | 38,097 | 334 | 215 | 114.062874 | 0.008308 | 0.031787 | 0 | 0.202532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.015823 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ebdff6d19988383e59772b211f07066d15b30be5 | 49 | py | Python | tapioca_amarilis/__init__.py | imoveisamarilis/tapioca-amarilis | 86b5a34e2a4a47960f25d011b6f0fd6027129efb | [
"MIT"
] | null | null | null | tapioca_amarilis/__init__.py | imoveisamarilis/tapioca-amarilis | 86b5a34e2a4a47960f25d011b6f0fd6027129efb | [
"MIT"
] | null | null | null | tapioca_amarilis/__init__.py | imoveisamarilis/tapioca-amarilis | 86b5a34e2a4a47960f25d011b6f0fd6027129efb | [
"MIT"
] | null | null | null | from .tapioca_amarilis import AmarilisV1 # noqa
| 24.5 | 48 | 0.816327 | 6 | 49 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 0.142857 | 49 | 1 | 49 | 49 | 0.904762 | 0.081633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
694c21c5045304b6bce61d013e54d7ea7c67f723 | 17,026 | py | Python | pyVmomi/_typeinfo_hmsdrs.py | xweichu/pyvmomi | 77aedef02974a63517a079c482e49fd9890c09a4 | [
"Apache-2.0"
] | null | null | null | pyVmomi/_typeinfo_hmsdrs.py | xweichu/pyvmomi | 77aedef02974a63517a079c482e49fd9890c09a4 | [
"Apache-2.0"
] | null | null | null | pyVmomi/_typeinfo_hmsdrs.py | xweichu/pyvmomi | 77aedef02974a63517a079c482e49fd9890c09a4 | [
"Apache-2.0"
] | null | null | null | # ******* WARNING - AUTO GENERATED CODE - DO NOT EDIT *******
from .VmomiSupport import CreateDataType, CreateManagedType
from .VmomiSupport import CreateEnumType
from .VmomiSupport import AddVersion, AddVersionParent
from .VmomiSupport import AddBreakingChangesInfo
from .VmomiSupport import F_LINK, F_LINKABLE
from .VmomiSupport import F_OPTIONAL, F_SECRET
from .VmomiSupport import newestVersions, ltsVersions
from .VmomiSupport import dottedVersions, oldestVersions
AddVersion("vmodl.query.version.version4", "", "", 0, "vim25")
AddVersion("vmodl.query.version.version3", "", "", 0, "vim25")
AddVersion("vmodl.query.version.version2", "", "", 0, "vim25")
AddVersion("hmsdrs.version.version6", "hmsdrsrv", "6.0", 0, "hmsdrsrv")
AddVersion("vmodl.query.version.version1", "", "", 0, "vim25")
AddVersion("vim.version.version8", "vim25", "5.1", 0, "vim25")
AddVersion("vim.version.version9", "vim25", "5.5", 0, "vim25")
AddVersion("vim.version.version6", "vim25", "4.1", 0, "vim25")
AddVersion("vim.version.version7", "vim25", "5.0", 0, "vim25")
AddVersion("vim.version.version1", "vim2", "2.0", 0, "vim25")
AddVersion("vim.version.version4", "vim25", "2.5u2server", 0, "vim25")
AddVersion("vim.version.version5", "vim25", "4.0", 0, "vim25")
AddVersion("vim.version.version2", "vim25", "2.5", 0, "vim25")
AddVersion("vim.version.version3", "vim25", "2.5u2", 0, "vim25")
AddVersion("vmodl.version.version0", "", "", 0, "vim25")
AddVersion("vmodl.version.version1", "", "", 0, "vim25")
AddVersion("vmodl.version.version2", "", "", 0, "vim25")
AddVersion("vmodl.reflect.version.version1", "reflect", "1.0", 0, "reflect")
AddVersionParent("vmodl.query.version.version4", "vmodl.query.version.version4")
AddVersionParent("vmodl.query.version.version4", "vmodl.query.version.version3")
AddVersionParent("vmodl.query.version.version4", "vmodl.query.version.version2")
AddVersionParent("vmodl.query.version.version4", "vmodl.query.version.version1")
AddVersionParent("vmodl.query.version.version4", "vmodl.version.version0")
AddVersionParent("vmodl.query.version.version4", "vmodl.version.version1")
AddVersionParent("vmodl.query.version.version4", "vmodl.version.version2")
AddVersionParent("vmodl.query.version.version3", "vmodl.query.version.version3")
AddVersionParent("vmodl.query.version.version3", "vmodl.query.version.version2")
AddVersionParent("vmodl.query.version.version3", "vmodl.query.version.version1")
AddVersionParent("vmodl.query.version.version3", "vmodl.version.version0")
AddVersionParent("vmodl.query.version.version3", "vmodl.version.version1")
AddVersionParent("vmodl.query.version.version2", "vmodl.query.version.version2")
AddVersionParent("vmodl.query.version.version2", "vmodl.query.version.version1")
AddVersionParent("vmodl.query.version.version2", "vmodl.version.version0")
AddVersionParent("vmodl.query.version.version2", "vmodl.version.version1")
AddVersionParent("hmsdrs.version.version6", "vmodl.query.version.version4")
AddVersionParent("hmsdrs.version.version6", "vmodl.query.version.version3")
AddVersionParent("hmsdrs.version.version6", "vmodl.query.version.version2")
AddVersionParent("hmsdrs.version.version6", "hmsdrs.version.version6")
AddVersionParent("hmsdrs.version.version6", "vmodl.query.version.version1")
AddVersionParent("hmsdrs.version.version6", "vim.version.version8")
AddVersionParent("hmsdrs.version.version6", "vim.version.version9")
AddVersionParent("hmsdrs.version.version6", "vim.version.version6")
AddVersionParent("hmsdrs.version.version6", "vim.version.version7")
AddVersionParent("hmsdrs.version.version6", "vim.version.version1")
AddVersionParent("hmsdrs.version.version6", "vim.version.version4")
AddVersionParent("hmsdrs.version.version6", "vim.version.version5")
AddVersionParent("hmsdrs.version.version6", "vim.version.version2")
AddVersionParent("hmsdrs.version.version6", "vim.version.version3")
AddVersionParent("hmsdrs.version.version6", "vmodl.version.version0")
AddVersionParent("hmsdrs.version.version6", "vmodl.version.version1")
AddVersionParent("hmsdrs.version.version6", "vmodl.version.version2")
AddVersionParent("hmsdrs.version.version6", "vmodl.reflect.version.version1")
AddVersionParent("vmodl.query.version.version1", "vmodl.query.version.version1")
AddVersionParent("vmodl.query.version.version1", "vmodl.version.version0")
AddVersionParent("vim.version.version8", "vmodl.query.version.version4")
AddVersionParent("vim.version.version8", "vmodl.query.version.version3")
AddVersionParent("vim.version.version8", "vmodl.query.version.version2")
AddVersionParent("vim.version.version8", "vmodl.query.version.version1")
AddVersionParent("vim.version.version8", "vim.version.version8")
AddVersionParent("vim.version.version8", "vim.version.version6")
AddVersionParent("vim.version.version8", "vim.version.version7")
AddVersionParent("vim.version.version8", "vim.version.version1")
AddVersionParent("vim.version.version8", "vim.version.version4")
AddVersionParent("vim.version.version8", "vim.version.version5")
AddVersionParent("vim.version.version8", "vim.version.version2")
AddVersionParent("vim.version.version8", "vim.version.version3")
AddVersionParent("vim.version.version8", "vmodl.version.version0")
AddVersionParent("vim.version.version8", "vmodl.version.version1")
AddVersionParent("vim.version.version8", "vmodl.version.version2")
AddVersionParent("vim.version.version8", "vmodl.reflect.version.version1")
AddVersionParent("vim.version.version9", "vmodl.query.version.version4")
AddVersionParent("vim.version.version9", "vmodl.query.version.version3")
AddVersionParent("vim.version.version9", "vmodl.query.version.version2")
AddVersionParent("vim.version.version9", "vmodl.query.version.version1")
AddVersionParent("vim.version.version9", "vim.version.version8")
AddVersionParent("vim.version.version9", "vim.version.version9")
AddVersionParent("vim.version.version9", "vim.version.version6")
AddVersionParent("vim.version.version9", "vim.version.version7")
AddVersionParent("vim.version.version9", "vim.version.version1")
AddVersionParent("vim.version.version9", "vim.version.version4")
AddVersionParent("vim.version.version9", "vim.version.version5")
AddVersionParent("vim.version.version9", "vim.version.version2")
AddVersionParent("vim.version.version9", "vim.version.version3")
AddVersionParent("vim.version.version9", "vmodl.version.version0")
AddVersionParent("vim.version.version9", "vmodl.version.version1")
AddVersionParent("vim.version.version9", "vmodl.version.version2")
AddVersionParent("vim.version.version9", "vmodl.reflect.version.version1")
AddVersionParent("vim.version.version6", "vmodl.query.version.version3")
AddVersionParent("vim.version.version6", "vmodl.query.version.version2")
AddVersionParent("vim.version.version6", "vmodl.query.version.version1")
AddVersionParent("vim.version.version6", "vim.version.version6")
AddVersionParent("vim.version.version6", "vim.version.version1")
AddVersionParent("vim.version.version6", "vim.version.version4")
AddVersionParent("vim.version.version6", "vim.version.version5")
AddVersionParent("vim.version.version6", "vim.version.version2")
AddVersionParent("vim.version.version6", "vim.version.version3")
AddVersionParent("vim.version.version6", "vmodl.version.version0")
AddVersionParent("vim.version.version6", "vmodl.version.version1")
AddVersionParent("vim.version.version7", "vmodl.query.version.version4")
AddVersionParent("vim.version.version7", "vmodl.query.version.version3")
AddVersionParent("vim.version.version7", "vmodl.query.version.version2")
AddVersionParent("vim.version.version7", "vmodl.query.version.version1")
AddVersionParent("vim.version.version7", "vim.version.version6")
AddVersionParent("vim.version.version7", "vim.version.version7")
AddVersionParent("vim.version.version7", "vim.version.version1")
AddVersionParent("vim.version.version7", "vim.version.version4")
AddVersionParent("vim.version.version7", "vim.version.version5")
AddVersionParent("vim.version.version7", "vim.version.version2")
AddVersionParent("vim.version.version7", "vim.version.version3")
AddVersionParent("vim.version.version7", "vmodl.version.version0")
AddVersionParent("vim.version.version7", "vmodl.version.version1")
AddVersionParent("vim.version.version7", "vmodl.version.version2")
AddVersionParent("vim.version.version7", "vmodl.reflect.version.version1")
AddVersionParent("vim.version.version1", "vmodl.query.version.version1")
AddVersionParent("vim.version.version1", "vim.version.version1")
AddVersionParent("vim.version.version1", "vmodl.version.version0")
AddVersionParent("vim.version.version4", "vmodl.query.version.version1")
AddVersionParent("vim.version.version4", "vim.version.version1")
AddVersionParent("vim.version.version4", "vim.version.version4")
AddVersionParent("vim.version.version4", "vim.version.version2")
AddVersionParent("vim.version.version4", "vim.version.version3")
AddVersionParent("vim.version.version4", "vmodl.version.version0")
AddVersionParent("vim.version.version5", "vmodl.query.version.version2")
AddVersionParent("vim.version.version5", "vmodl.query.version.version1")
AddVersionParent("vim.version.version5", "vim.version.version1")
AddVersionParent("vim.version.version5", "vim.version.version4")
AddVersionParent("vim.version.version5", "vim.version.version5")
AddVersionParent("vim.version.version5", "vim.version.version2")
AddVersionParent("vim.version.version5", "vim.version.version3")
AddVersionParent("vim.version.version5", "vmodl.version.version0")
AddVersionParent("vim.version.version5", "vmodl.version.version1")
AddVersionParent("vim.version.version2", "vmodl.query.version.version1")
AddVersionParent("vim.version.version2", "vim.version.version1")
AddVersionParent("vim.version.version2", "vim.version.version2")
AddVersionParent("vim.version.version2", "vmodl.version.version0")
AddVersionParent("vim.version.version3", "vmodl.query.version.version1")
AddVersionParent("vim.version.version3", "vim.version.version1")
AddVersionParent("vim.version.version3", "vim.version.version2")
AddVersionParent("vim.version.version3", "vim.version.version3")
AddVersionParent("vim.version.version3", "vmodl.version.version0")
AddVersionParent("vmodl.version.version0", "vmodl.version.version0")
AddVersionParent("vmodl.version.version1", "vmodl.version.version0")
AddVersionParent("vmodl.version.version1", "vmodl.version.version1")
AddVersionParent("vmodl.version.version2", "vmodl.version.version0")
AddVersionParent("vmodl.version.version2", "vmodl.version.version1")
AddVersionParent("vmodl.version.version2", "vmodl.version.version2")
AddVersionParent("vmodl.reflect.version.version1", "vmodl.version.version0")
AddVersionParent("vmodl.reflect.version.version1", "vmodl.version.version1")
AddVersionParent("vmodl.reflect.version.version1", "vmodl.version.version2")
AddVersionParent("vmodl.reflect.version.version1", "vmodl.reflect.version.version1")
newestVersions.Add("hmsdrs.version.version6")
ltsVersions.Add("hmsdrs.version.version6")
dottedVersions.Add("hmsdrs.version.version6")
oldestVersions.Add("hmsdrs.version.version6")
CreateDataType("hmsdrs.HbrDrmCollectionMoveSpec", "HmsSdrsHbrDrmCollectionMoveSpec", "vmodl.DynamicData", "hmsdrs.version.version6", [("collectionId", "string", "hmsdrs.version.version6", 0), ("diskSpecs", "hmsdrs.HbrDrmDiskMoveSpec[]", "hmsdrs.version.version6", 0)])
CreateDataType("hmsdrs.HbrDrmDiskBase", "HmsSdrsHbrDrmDiskBase", "vmodl.DynamicData", "hmsdrs.version.version6", [("diskId", "string", "hmsdrs.version.version6", 0), ("collectionId", "string", "hmsdrs.version.version6", 0), ("name", "string", "hmsdrs.version.version6", 0), ("datastoreMoId", "string", "hmsdrs.version.version6", 0), ("spaceRequirement", "long", "hmsdrs.version.version6", 0)])
CreateDataType("hmsdrs.HbrDrmDiskCollection", "HmsSdrsHbrDrmCollection", "vmodl.DynamicData", "hmsdrs.version.version6", [("collectionId", "string", "hmsdrs.version.version6", 0), ("name", "string", "hmsdrs.version.version6", 0), ("placeholderVmMoId", "string", "hmsdrs.version.version6", F_OPTIONAL), ("disks", "hmsdrs.HbrDrmDiskBase[]", "hmsdrs.version.version6", 0)])
CreateDataType("hmsdrs.HbrDrmDiskMoveSpec", "HmsSdrsHbrDrmMoveSpec", "vmodl.DynamicData", "hmsdrs.version.version6", [("diskId", "string", "hmsdrs.version.version6", 0), ("sourceDatastoreMoId", "string", "hmsdrs.version.version6", 0), ("destinationDatastoreMoId", "string", "hmsdrs.version.version6", 0)])
CreateDataType("hmsdrs.ImmovableHbrDrmDisk", "HmsSdrsImmovableHbrDrmDisk", "hmsdrs.HbrDrmDiskBase", "hmsdrs.version.version6", None)
CreateManagedType("hmsdrs.ReplicaMoveManager", "HmsReplicaMoveManager", "vmodl.ManagedObject", "hmsdrs.version.version6", None, [("queryCollections", "HmsReplicaMoveManagerQueryCollections", "hmsdrs.version.version6", (("datastoreMoIds", "string[]", "hmsdrs.version.version6", F_OPTIONAL, None),), (F_OPTIONAL, "hmsdrs.HbrDrmDiskCollection[]", "hmsdrs.HbrDrmDiskCollection[]"), "System.View", None), ("retrieveCurrentMoveTasks", "HmsReplicaMoveManagerRetrieveCurrentMoveTasks", "hmsdrs.version.version6", (("datastoreMoIds", "string[]", "hmsdrs.version.version6", F_OPTIONAL, None),), (F_OPTIONAL, "hmsdrs.ReplicaMoveManager.MoveTaskInfo[]", "hmsdrs.ReplicaMoveManager.MoveTaskInfo[]"), "System.View", None), ("moveCollection", "HmsReplicaMoveManagerMoveCollection_Task", "hmsdrs.version.version6", (("spec", "hmsdrs.HbrDrmCollectionMoveSpec", "hmsdrs.version.version6", 0, None),("allowCrossHostMove", "boolean", "hmsdrs.version.version6", F_OPTIONAL, None),), (0, "vim.Task", "void"), "Host.Hbr.HbrManagement", ["hmsdrs.fault.InvalidCollection", "hmsdrs.fault.MoveAlreadyInProgress", "hmsdrs.fault.DiskNotMovable", "hmsdrs.fault.CannotMoveAcrossHosts", ])])
CreateDataType("hmsdrs.ReplicaMoveManager.MoveTaskInfo", "HmsReplicaMoveManagerMoveTaskInfo", "vmodl.DynamicData", "hmsdrs.version.version6", [("moveSpec", "hmsdrs.HbrDrmCollectionMoveSpec", "hmsdrs.version.version6", 0), ("task", "vim.Task", "hmsdrs.version.version6", 0)])
CreateManagedType("hmsdrs.SdrsServiceInstance", "HmsSdrsServiceInstance", "vmodl.ManagedObject", "hmsdrs.version.version6", [("content", "hmsdrs.SdrsServiceInstanceContent", "hmsdrs.version.version6", 0, "System.Anonymous")], [("currentTime", "HmsSdrsServiceInstanceCurrentTime", "hmsdrs.version.version6", (), (0, "vmodl.DateTime", "vmodl.DateTime"), "System.View", None), ("ping", "HmsSdrsServiceInstancePing", "hmsdrs.version.version6", (), (0, "void", "void"), "System.Anonymous", None)])
CreateDataType("hmsdrs.SdrsServiceInstanceContent", "HmsSdrsServiceInstanceContent", "vmodl.DynamicData", "hmsdrs.version.version6", [("instanceUuid", "string", "hmsdrs.version.version6", 0), ("solutionUsername", "string", "hmsdrs.version.version6", 0), ("sessionManager", "hmsdrs.SdrsSessionManager", "hmsdrs.version.version6", 0), ("replicaMoveManager", "hmsdrs.ReplicaMoveManager", "hmsdrs.version.version6", 0)])
CreateManagedType("hmsdrs.SdrsSessionManager", "HmsSdrsSessionManager", "vmodl.ManagedObject", "hmsdrs.version.version6", None, [("loginByToken", "HmsSdrsSessionManagerLoginByToken", "hmsdrs.version.version6", (("delegatedToken", "string", "hmsdrs.version.version6", 0, None),("locale", "string", "hmsdrs.version.version6", F_OPTIONAL, None),), (0, "void", "void"), "System.Anonymous", ["vim.fault.InvalidLogin", "hmsdrs.fault.AlreadyLoggedIn", "hmsdrs.fault.CannotVerifyCredentialsFault", ]), ("logout", "HmsSdrsSessionManagerLogout", "hmsdrs.version.version6", (), (0, "void", "void"), "System.View", None)])
CreateDataType("hmsdrs.fault.SdrsFault", "HmsSdrsFault", "vmodl.MethodFault", "hmsdrs.version.version6", [("originalMessage", "string", "hmsdrs.version.version6", F_OPTIONAL)])
CreateDataType("hmsdrs.HbrDrmDisk", "HmsSdrsHbrDrmDisk", "hmsdrs.HbrDrmDiskBase", "hmsdrs.version.version6", [("movable", "boolean", "hmsdrs.version.version6", 0), ("datastoresForSingleHostMove", "vim.Datastore[]", "hmsdrs.version.version6", F_OPTIONAL)])
CreateDataType("hmsdrs.fault.AlreadyLoggedIn", "HmsSdrsFaultAlreadyLoggedIn", "hmsdrs.fault.SdrsFault", "hmsdrs.version.version6", None)
CreateDataType("hmsdrs.fault.CannotMoveAcrossHosts", "HmsSdrsFaultCannotMoveAcrossHosts", "hmsdrs.fault.SdrsFault", "hmsdrs.version.version6", None)
CreateDataType("hmsdrs.fault.CannotVerifyCredentialsFault", "HmsSdrsFaultCannotVerifyCredentialsFault", "hmsdrs.fault.SdrsFault", "hmsdrs.version.version6", None)
CreateDataType("hmsdrs.fault.DiskNotMovable", "HmsSdrsFaultDiskNotMovable", "hmsdrs.fault.SdrsFault", "hmsdrs.version.version6", [("nonMovableDisksIds", "string[]", "hmsdrs.version.version6", 0)])
CreateDataType("hmsdrs.fault.InvalidCollection", "HmsSdrsFaultInvalidCollection", "hmsdrs.fault.SdrsFault", "hmsdrs.version.version6", None)
CreateDataType("hmsdrs.fault.MoveAlreadyInProgress", "HmsSdrsFaultMoveAlreadyInProgress", "hmsdrs.fault.SdrsFault", "hmsdrs.version.version6", [("taskInProgress", "vim.Task", "hmsdrs.version.version6", 0)])
CreateDataType("hmsdrs.fault.MoveCanceled", "HmsSdrsFaultMoveCanceled", "hmsdrs.fault.SdrsFault", "hmsdrs.version.version6", None)
| 91.537634 | 1,159 | 0.783096 | 1,722 | 17,026 | 7.734611 | 0.086527 | 0.11187 | 0.16788 | 0.04625 | 0.75779 | 0.711089 | 0.47451 | 0.280952 | 0.113147 | 0.075531 | 0 | 0.028472 | 0.040761 | 17,026 | 185 | 1,160 | 92.032432 | 0.787044 | 0.003465 | 0 | 0 | 1 | 0 | 0.647215 | 0.404067 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.044199 | 0 | 0.044199 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
15e2a1d0df664dbad0b2a610394581778b632b87 | 5,755 | py | Python | test/test_models.py | seung-lab/pytorch-emvision | 09b10afdd5d6326e8c704dbeca05b9668fadeee1 | [
"MIT"
] | null | null | null | test/test_models.py | seung-lab/pytorch-emvision | 09b10afdd5d6326e8c704dbeca05b9668fadeee1 | [
"MIT"
] | null | null | null | test/test_models.py | seung-lab/pytorch-emvision | 09b10afdd5d6326e8c704dbeca05b9668fadeee1 | [
"MIT"
] | 1 | 2020-03-18T18:38:50.000Z | 2020-03-18T18:38:50.000Z | import torch
import emvision
import unittest
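# Smoke tests: each case builds one emvision model variant (on GPU if available, else CPU)
# and runs a single forward pass on random input.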
class Tester(unittest.TestCase):
def test_rsunet(self):
from emvision.models import RSUNet
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = RSUNet(width=[3,4,5,6]).to(device)
x = torch.randn(1,3,20,256,256).to(device)
y = net(x)
# print(y.size())
def test_rsunet_gn(self):
from emvision.models import rsunet_gn
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = rsunet_gn(width=[2,4,6,8], group=2).to(device)
x = torch.randn(1,2,20,256,256).to(device)
y = net(x)
# print(y.size())
def test_rsunet_prelu(self):
from emvision.models import rsunet_act
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = rsunet_act(width=[3,4,5,6], act='PReLU', init=0.1).to(device)
x = torch.randn(1,3,20,256,256).to(device)
y = net(x)
# print(y.size())
def test_rsunet_prelu_gn(self):
from emvision.models import rsunet_act_gn
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = rsunet_act_gn(width=[2,4,6,8], group=2, act='PReLU', init=0.1).to(device)
x = torch.randn(1,2,20,256,256).to(device)
y = net(x)
# print(y.size())
def test_rsunet_leaky_relu(self):
from emvision.models import rsunet_act
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = rsunet_act(width=[3,4,5,6], act='LeakyReLU', negative_slope=0.1).to(device)
x = torch.randn(1,3,20,256,256).to(device)
y = net(x)
# print(y.size())
def test_rsunet_leaky_relu_gn(self):
from emvision.models import rsunet_act_gn
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = rsunet_act_gn(width=[2,4,6,8], group=2, act='LeakyReLU', negative_slope=0.1).to(device)
x = torch.randn(1,2,20,256,256).to(device)
y = net(x)
# print(y.size())
def test_rsunet_elu(self):
from emvision.models import rsunet_act
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = rsunet_act(width=[3,4,5,6], act='ELU').to(device)
x = torch.randn(1,3,20,256,256).to(device)
y = net(x)
# print(y.size())
def test_rsunet_elu_gn(self):
from emvision.models import rsunet_act_gn
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = rsunet_act_gn(width=[2,4,6,8], group=2, act='ELU').to(device)
x = torch.randn(1,2,20,256,256).to(device)
y = net(x)
# print(y.size())
def test_rsunet_2d3d(self):
from emvision.models import rsunet_2d3d
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = rsunet_2d3d(width=[3,4,5,6], depth2d=2).to(device)
x = torch.randn(1,3,20,256,256).to(device)
y = net(x)
# print(y.size())
def test_rsunet_2d3d_gn(self):
from emvision.models import rsunet_2d3d_gn
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = rsunet_2d3d_gn(width=[2,4,6,8], group=2).to(device)
x = torch.randn(1,2,20,256,256).to(device)
y = net(x)
# print(y.size())
def test_vrunet(self):
from emvision.models import vrunet
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = vrunet(width=[2,4,6,8]).to(device)
x = torch.randn(1,2,48,148,148).to(device)
y = net(x)
# (48,148,148) -> (20,60,60)
print("VRUnet: {} -> {}".format(x.size(), y.size()))
def test_vrunet_nearest(self):
from emvision.models import vrunet
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = vrunet(width=[2,4,6,8], mode='nearest').to(device)
x = torch.randn(1,2,48,148,148).to(device)
y = net(x)
# (48,148,148) -> (20,60,60)
print("VRUnet: {} -> {}".format(x.size(), y.size()))
def test_dynamic_rsunet(self):
from emvision.models import dynamic_rsunet
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = dynamic_rsunet(width=[2,4,6,8], unroll=3).to(device)
x = torch.randn(1,2,20,256,256).to(device)
y1 = net(x)
y2 = net(x, unroll=1)
y3 = net(x, unroll=2)
y4 = net(x, unroll=3)
def test_rsunet_act_nn(self):
from emvision.models import rsunet_act_nn
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = rsunet_act_nn(width=[2,4,6,8]).to(device)
x = torch.randn(1,2,20,256,256).to(device)
y = net(x)
# print(y.size())
def test_rsunet_act_nn_gn(self):
from emvision.models import rsunet_act_nn_gn
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = rsunet_act_nn_gn(width=[2,4,6,8], group=2).to(device)
x = torch.randn(1,2,20,256,256).to(device)
y = net(x)
# print(y.size())
def test_rsunet_zfactor(self):
from emvision.models import rsunet_act
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = rsunet_act(width=[3,4,5,6], zfactor=[1,2,2], act='ELU').to(device)
x = torch.randn(1,3,20,256,256).to(device)
y = net(x)
if __name__ == '__main__':
print('torch version =', torch.__version__)
print('cuda version =', torch.version.cuda)
print('cudnn version =', torch.backends.cudnn.version())
print('cuda available?', torch.cuda.is_available())
unittest.main()
| 39.417808 | 101 | 0.602259 | 902 | 5,755 | 3.720621 | 0.078714 | 0.076281 | 0.055721 | 0.101311 | 0.893921 | 0.875447 | 0.855185 | 0.820918 | 0.79708 | 0.795888 | 0 | 0.066091 | 0.234926 | 5,755 | 145 | 102 | 39.689655 | 0.696116 | 0.042572 | 0 | 0.522523 | 0 | 0 | 0.046588 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.144144 | false | 0 | 0.171171 | 0 | 0.324324 | 0.054054 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
15e6b1b76c1d36c2578fe512e91dade46e8e3fa3 | 297 | py | Python | tools/lib_alignments/__init__.py | RedPillGroup/RedPillFaceSwap | 2313d4fe0e6e7b579ce450d556ad4fd6bd0cbed6 | [
"MIT"
] | 1 | 2020-04-14T22:16:50.000Z | 2020-04-14T22:16:50.000Z | tools/lib_alignments/__init__.py | RedPillGroup/RedPillFaceSwap | 2313d4fe0e6e7b579ce450d556ad4fd6bd0cbed6 | [
"MIT"
] | null | null | null | tools/lib_alignments/__init__.py | RedPillGroup/RedPillFaceSwap | 2313d4fe0e6e7b579ce450d556ad4fd6bd0cbed6 | [
"MIT"
] | 2 | 2020-04-14T22:17:09.000Z | 2020-10-30T03:01:13.000Z | from tools.lib_alignments.media import AlignmentData, ExtractedFaces, Faces, Frames
from tools.lib_alignments.annotate import Annotate
from tools.lib_alignments.jobs import Check, Draw, Extract, Legacy, Reformat, RemoveAlignments, Sort, Spatial
from tools.lib_alignments.jobs_manual import Manual
| 59.4 | 109 | 0.851852 | 39 | 297 | 6.358974 | 0.538462 | 0.145161 | 0.193548 | 0.354839 | 0.209677 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087542 | 297 | 4 | 110 | 74.25 | 0.915129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c63925560585002e64736e717f57ac53718a8d4e | 26 | py | Python | dtbot/bitso/dao/mssql/__init__.py | oxsoftdev/dt-bot | e06633134e382e340a7d017e300f65492720c514 | [
"MIT"
] | null | null | null | dtbot/bitso/dao/mssql/__init__.py | oxsoftdev/dt-bot | e06633134e382e340a7d017e300f65492720c514 | [
"MIT"
] | null | null | null | dtbot/bitso/dao/mssql/__init__.py | oxsoftdev/dt-bot | e06633134e382e340a7d017e300f65492720c514 | [
"MIT"
] | null | null | null | from .Store import Store
| 13 | 25 | 0.769231 | 4 | 26 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192308 | 26 | 1 | 26 | 26 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c63c759bc1ec6dc66f92bf74078363ce8f9a6841 | 106 | py | Python | get_together/context_processors.py | alysivji/GetTogether | 403d9945fff019701de41d081ad4452e771e1ce1 | [
"BSD-2-Clause"
] | 446 | 2018-01-21T09:22:41.000Z | 2022-03-25T17:46:12.000Z | get_together/context_processors.py | alysivji/GetTogether | 403d9945fff019701de41d081ad4452e771e1ce1 | [
"BSD-2-Clause"
] | 272 | 2018-01-03T16:55:39.000Z | 2022-03-11T23:12:30.000Z | get_together/context_processors.py | alysivji/GetTogether | 403d9945fff019701de41d081ad4452e771e1ce1 | [
"BSD-2-Clause"
] | 100 | 2018-01-27T02:04:15.000Z | 2021-09-09T09:02:21.000Z | from django.conf import settings
def theme_engine(request):
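    """Context processor that exposes the theme configuration to all templates."""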
return {"theme": settings.THEME_CONFIG}
| 17.666667 | 43 | 0.764151 | 14 | 106 | 5.642857 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141509 | 106 | 5 | 44 | 21.2 | 0.868132 | 0 | 0 | 0 | 0 | 0 | 0.04717 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
d672cdc364b28bcca969b01accbbefe123fc4f1e | 10,656 | py | Python | pyjets/read_nc.py | airam-software/pyjets | 7693b007955f17a8e5ab8e49301485b33c8d113d | [
"BSD-3-Clause"
] | null | null | null | pyjets/read_nc.py | airam-software/pyjets | 7693b007955f17a8e5ab8e49301485b33c8d113d | [
"BSD-3-Clause"
] | null | null | null | pyjets/read_nc.py | airam-software/pyjets | 7693b007955f17a8e5ab8e49301485b33c8d113d | [
"BSD-3-Clause"
] | null | null | null | '''
Copyright (c) 2021 María Rodríguez
Process netcdf data
'''
import datetime as dt
import fnmatch
import os
import h5py
import numpy as np
import pandas as pd
from netCDF4 import Dataset, num2date
def convertnc2h5(study_variable, model_name, period, level, path):
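    """Read the netCDF file for the given variable/model/period, keep the winter (DJF)
    time steps and cache them to HDF5; return lat, lon, grid type and the selected data."""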
if period == 'piControl' or period == 'abrupt-4xCO2':
file_name = find_file(study_variable, path, level, model_name, period)
file_name = file_name[:-3]
nc = Dataset(path + '/' + file_name + '.nc') # Read nc file
nc_time = nc.variables['time']
dates = num2date(nc_time[:], units=nc_time.units,
calendar=nc_time.calendar) # Select dates for three months separately
print('-----')
print(study_variable, model_name, period)
print(dates[0], dates[-1], dates.shape[0] / 12)
lat_names = ['lat', 'latitude', 'nav_lat']
lon_names = ['lon', 'longitude', 'nav_lon']
lat_name = [i for i in nc.variables.keys() if i in lat_names] # Check if any item in one list is in another
lat_name = lat_name[0]
if not lat_name:
print('Error')
exit()
lon_name = [i for i in nc.variables.keys() if i in lon_names]
lon_name = lon_name[0]
if not lon_name:
print('Error')
exit()
nc_lat = nc.variables[lat_name]
nc_lon = nc.variables[lon_name]
nc_lat_data = nc_lat[:].data
nc_lon_data = nc_lon[:].data
x = nc.variables[study_variable]
x_data = np.squeeze(x[:].data)
if len(nc_lat_data.shape) == 1:
grid_type = 'sq'
if nc_lat_data.shape[0] != x_data.shape[1] or \
nc_lon_data.shape[0] != x_data.shape[2]:
print('Inconsistency latitude or longitude')
if len(nc_lat_data.shape) == 2:
grid_type = 'nonsq'
if nc_lat_data.shape[0] != x_data.shape[1]:
print('Inconsistency latitude')
if nc_lon_data.shape[1] != x_data.shape[2]:
print('Inconsistency longitude')
x_data_sel, dates_selection = select_season(x_data, dates, 'winter')
nc.close()
elif period == 'jra':
file_name = find_file(study_variable, path, level, model_name, period)
file_name = file_name[:-3]
if study_variable == 'ua':
variable = 'UGRD_GDS0_ISBL'
elif study_variable == 'zg':
variable = 'HGT_GDS0_ISBL'
nc = Dataset(path + '/' + file_name + '.nc') # Read nc file
nc_time = nc.variables['initial_time0_hours']
# '''Select only some years'''
# year_start = 1958
# year_end = 2010
# time_start = date2index(dt.datetime(year_start, 1, 1), nc_time, select='nearest')
# time_end = date2index(dt.datetime(year_end, 12, 16), nc_time, select='nearest')
# dates = num2date(nc_time[time_start:time_end + 1], units=nc_time.units, calendar=nc_time.calendar)
# print(dates[0], dates[-1], dates.shape[0] / 12)
'''All dates'''
dates = num2date(nc_time[:], units=nc_time.units, calendar=nc_time.calendar)
print('-----')
print(study_variable, model_name, period)
print(dates[0], dates[-1], dates.shape[0] / 12)
nc_lat = nc.variables['lat'] # Other option nc_lat = nc.variables['g0_lat_2']
nc_lat_data = nc_lat[:].data
nc_lon = nc.variables['lon'] # Other option: nc_lon = nc.variables['g0_lon_3']
nc_lon_data = nc_lon[:].data
x = nc.variables[variable]
x_data = np.squeeze(x[:].data)
# '''Plots'''
# lon, lat = np.meshgrid(nc_lon_data, nc_lat_data)
# gridx, gridy = lon, lat
# import matplotlib.pyplot as plt
# plt.contourf(gridx, gridy, x_data[3, :, :])
# plt.show()
if len(nc_lat_data.shape) == 1:
grid_type = 'sq'
if nc_lat_data.shape[0] != x_data.shape[1] or \
nc_lon_data.shape[0] != x_data.shape[2]:
print('Inconsistency latitude or longitude')
if len(nc_lat_data.shape) == 2:
grid_type = 'nonsq'
if nc_lat_data.shape[0] != x_data.shape[1]:
print('Inconsistency latitude')
if nc_lon_data.shape[1] != x_data.shape[2]:
print('Inconsistency longitude')
nc.close()
x_data_sel, dates_selection = select_season(x_data, dates, 'winter')
'''Save data as hdf5'''
hf = h5py.File(path + '/h5files/' + file_name + '.h5', 'w')
hf.create_dataset('nc_lat_data', data=nc_lat_data)
hf.create_dataset('nc_lon_data', data=nc_lon_data)
hf.create_dataset('grid_type', data=grid_type)
hf.create_dataset('x_data_sel', data=x_data_sel)
hf.close()
return nc_lat_data, nc_lon_data, grid_type, x_data_sel
def h5fileinfo(study_variable, model_name, period, level, path):
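    """Load lat/lon, grid type and the cached winter-season data from the matching HDF5 file."""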
if period == 'piControl' or period == 'abrupt-4xCO2':
if study_variable == 'tos':
str_aux = '_Omon_'
else:
str_aux = '_Amon_'
for file in os.listdir(path):
if fnmatch.fnmatch(file, study_variable + level +
str_aux + model_name + '_' + period + '_*'):
file_name = file
h5name = file_name
elif period == 'jra' and study_variable == 'ua':
h5name = 'ua010_jra-55_195801-201512_remapped.h5'
elif period == 'jra' and study_variable == 'zg':
h5name = 'zg010_jra-55_195801-201606_mm_remapped.h5'
h5file = h5py.File(path + h5name, 'r')
nc_lat_data = h5file['nc_lat_data'][:]
nc_lon_data = h5file['nc_lon_data'][:]
    grid_type = h5file['grid_type'][()]  # scalar read; h5py 3.x no longer provides .value
x_data_sel = h5file['x_data_sel'][:]
h5file.close()
return nc_lat_data, nc_lon_data, grid_type, x_data_sel
def select_season(x_data, dates, season):
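    """Keep only the time steps of the given season; only 'winter' (Dec/Jan/Feb) is handled."""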
df = pd.DataFrame({'Date': dates})
df['Date'] = [dt.date(i.year, i.month, i.day) for i in df.Date] # Convert DatetimeNoLeap to date object
df['Month'] = [i.month for i in df.Date] # Add new columns to dataframe
if season == 'winter':
df1 = df.loc[df['Month'].isin([1, 2, 12])]
dates_selection = df1.index.values
x_data = x_data[dates_selection, :, :]
df1.to_excel('./results/dates.xlsx')
return x_data, df1['Date']
def find_file(study_variable, path, level, model_name, period):
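    """Return the file name in `path` matching the variable, level, model name and period naming scheme."""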
if study_variable == 'tos':
str_aux = '_Omon_'
else:
str_aux = '_Amon_'
if period != 'jra':
for file in os.listdir(path):
if fnmatch.fnmatch(file, study_variable + level +
str_aux + model_name + '_' + period + '_*'):
file_name = file
else:
for file in os.listdir(path):
if fnmatch.fnmatch(file, study_variable + level + '_' +
model_name + '-55_*'):
file_name = file
return file_name
def ncfileinfo(study_variable, model_name, period, level, path):
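    """Read the netCDF file and return the dates of the selected winter (DJF) time steps."""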
if period == 'piControl' or period == 'abrupt-4xCO2':
file_name = find_file(study_variable, path, level, model_name, period)
file_name = file_name[:-3]
nc = Dataset(path + '/' + file_name + '.nc') # Read nc file
nc_time = nc.variables['time']
dates = num2date(nc_time[:], units=nc_time.units,
calendar=nc_time.calendar) # Select dates for three months separately
print('-----')
print(study_variable, model_name, period)
print(dates[0], dates[-1], dates.shape[0] / 12)
lat_names = ['lat', 'latitude', 'nav_lat']
lon_names = ['lon', 'longitude', 'nav_lon']
lat_name = [i for i in nc.variables.keys() if i in lat_names] # Check if any item in one list is in another
lat_name = lat_name[0]
if not lat_name:
print('Error')
exit()
lon_name = [i for i in nc.variables.keys() if i in lon_names]
lon_name = lon_name[0]
if not lon_name:
print('Error')
exit()
nc_lat = nc.variables[lat_name]
nc_lon = nc.variables[lon_name]
nc_lat_data = nc_lat[:].data
nc_lon_data = nc_lon[:].data
x = nc.variables[study_variable]
x_data = np.squeeze(x[:].data)
if len(nc_lat_data.shape) == 1:
grid_type = 'sq'
if nc_lat_data.shape[0] != x_data.shape[1] or \
nc_lon_data.shape[0] != x_data.shape[2]:
print('Inconsistency latitude or longitude')
if len(nc_lat_data.shape) == 2:
grid_type = 'nonsq'
if nc_lat_data.shape[0] != x_data.shape[1]:
print('Inconsistency latitude')
if nc_lon_data.shape[1] != x_data.shape[2]:
print('Inconsistency longitude')
x_data_sel, dates_selection = select_season(x_data, dates, 'winter')
nc.close()
elif period == 'jra':
file_name = find_file(study_variable, path, level, model_name, period)
file_name = file_name[:-3]
if study_variable == 'ua':
variable = 'UGRD_GDS0_ISBL'
elif study_variable == 'zg':
variable = 'HGT_GDS0_ISBL'
nc = Dataset(path + '/' + file_name + '.nc') # Read nc file
nc_time = nc.variables['initial_time0_hours']
'''All dates'''
dates = num2date(nc_time[:], units=nc_time.units, calendar=nc_time.calendar)
print('-----')
print(study_variable, model_name, period)
print(dates[0], dates[-1], dates.shape[0] / 12)
nc_lat = nc.variables['lat'] # Other option nc_lat = nc.variables['g0_lat_2']
nc_lat_data = nc_lat[:].data
nc_lon = nc.variables['lon'] # Other option: nc_lon = nc.variables['g0_lon_3']
nc_lon_data = nc_lon[:].data
x = nc.variables[variable]
x_data = np.squeeze(x[:].data)
if len(nc_lat_data.shape) == 1:
grid_type = 'sq'
if nc_lat_data.shape[0] != x_data.shape[1] or \
nc_lon_data.shape[0] != x_data.shape[2]:
print('Inconsistency latitude or longitude')
if len(nc_lat_data.shape) == 2:
grid_type = 'nonsq'
if nc_lat_data.shape[0] != x_data.shape[1]:
print('Inconsistency latitude')
if nc_lon_data.shape[1] != x_data.shape[2]:
print('Inconsistency longitude')
nc.close()
x_data_sel, dates_selection = select_season(x_data, dates, 'winter')
return dates_selection
| 34.153846 | 116 | 0.578829 | 1,443 | 10,656 | 4.020097 | 0.121275 | 0.037063 | 0.048095 | 0.038614 | 0.789691 | 0.778831 | 0.76573 | 0.76573 | 0.76573 | 0.746251 | 0 | 0.024739 | 0.290634 | 10,656 | 311 | 117 | 34.263666 | 0.742691 | 0.101258 | 0 | 0.796209 | 0 | 0 | 0.102107 | 0.008325 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023697 | false | 0 | 0.033175 | 0 | 0.080569 | 0.132701 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d674fa34c753b60d858ae20d67cc7249980a8e49 | 34 | py | Python | nubia_score/__init__.py | LazerLambda/nubia | 7316dfa15822f8c879ab849aa0f6b04a9ecf231a | [
"MIT"
] | 53 | 2020-05-01T06:37:18.000Z | 2022-03-15T15:50:15.000Z | nubia_score/__init__.py | LazerLambda/nubia | 7316dfa15822f8c879ab849aa0f6b04a9ecf231a | [
"MIT"
] | 1 | 2020-05-01T06:54:30.000Z | 2020-05-01T09:00:15.000Z | nubia_score/__init__.py | LazerLambda/nubia | 7316dfa15822f8c879ab849aa0f6b04a9ecf231a | [
"MIT"
] | 13 | 2020-05-01T08:55:36.000Z | 2022-03-11T15:17:22.000Z | from nubia_score.nubia import *
| 8.5 | 31 | 0.764706 | 5 | 34 | 5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 34 | 3 | 32 | 11.333333 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d69070338b6a6ba6a997dd8cdad2714e6ff89c12 | 80 | py | Python | game/__init__.py | justinmeister/spaceinvaders-spyral | e735751db21a7595741fb8a62a875a08c95955af | [
"MIT"
] | 1 | 2017-09-01T07:36:57.000Z | 2017-09-01T07:36:57.000Z | game/__init__.py | justinmeister/spaceinvaders-spyral | e735751db21a7595741fb8a62a875a08c95955af | [
"MIT"
] | null | null | null | game/__init__.py | justinmeister/spaceinvaders-spyral | e735751db21a7595741fb8a62a875a08c95955af | [
"MIT"
] | null | null | null | import spyral
from . import level
def main():
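    """Start the game by pushing the first level onto spyral's scene stack."""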
spyral.director.push(level.Level1()) | 16 | 40 | 0.7375 | 11 | 80 | 5.363636 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014493 | 0.1375 | 80 | 5 | 40 | 16 | 0.84058 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d693c8ecbacfe6eb6ad3176fd32d1bbea8390b43 | 104 | py | Python | Lecture/Example/lec05/1_apply_twice.py | zzr2311559/CS61a-fall2020 | 60b1d53945d2b2caeb01421841da6307c633f562 | [
"MIT"
] | null | null | null | Lecture/Example/lec05/1_apply_twice.py | zzr2311559/CS61a-fall2020 | 60b1d53945d2b2caeb01421841da6307c633f562 | [
"MIT"
] | null | null | null | Lecture/Example/lec05/1_apply_twice.py | zzr2311559/CS61a-fall2020 | 60b1d53945d2b2caeb01421841da6307c633f562 | [
"MIT"
] | null | null | null | def apply_twice(f, x):
return f(f(x))
def square(x):
return x * x
result = apply_twice(square, 2)
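# result == square(square(2)) == 16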
| 13 | 31 | 0.644231 | 20 | 104 | 3.25 | 0.45 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012048 | 0.201923 | 104 | 7 | 32 | 14.857143 | 0.771084 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.4 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
d6ae2f71c37cbdbf0b5b26057befdc74d6582393 | 79 | py | Python | flubber/__init__.py | braincow/flubber | 691f1fd441d03fc9e093d9dabc7fd8940d1bef27 | [
"MIT"
] | null | null | null | flubber/__init__.py | braincow/flubber | 691f1fd441d03fc9e093d9dabc7fd8940d1bef27 | [
"MIT"
] | 9 | 2018-04-02T07:41:11.000Z | 2018-04-12T08:40:53.000Z | flubber/__init__.py | braincow/flubber | 691f1fd441d03fc9e093d9dabc7fd8940d1bef27 | [
"MIT"
] | null | null | null | import gi
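# Pin the GObject Introspection typelib versions (Gtk 3.0, Notify 0.7) before these modules are imported.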
gi.require_version('Gtk', '3.0')
gi.require_version('Notify', '0.7')
| 19.75 | 35 | 0.696203 | 14 | 79 | 3.785714 | 0.642857 | 0.339623 | 0.603774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054795 | 0.075949 | 79 | 3 | 36 | 26.333333 | 0.671233 | 0 | 0 | 0 | 0 | 0 | 0.189873 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d6c3b917f71264a63bf3a7db044def0053796375 | 337 | py | Python | pykotor/resource/formats/tpc/__init__.py | NickHugi/PyKotor | cab1089f8a8a135861bef45340203718d39f5e1f | [
"MIT"
] | 1 | 2022-02-21T15:17:28.000Z | 2022-02-21T15:17:28.000Z | pykotor/resource/formats/tpc/__init__.py | NickHugi/PyKotor | cab1089f8a8a135861bef45340203718d39f5e1f | [
"MIT"
] | 1 | 2022-03-12T16:06:23.000Z | 2022-03-12T16:06:23.000Z | pykotor/resource/formats/tpc/__init__.py | NickHugi/PyKotor | cab1089f8a8a135861bef45340203718d39f5e1f | [
"MIT"
] | null | null | null | from pykotor.resource.formats.tpc.data import TPC, TPCTextureFormat
from pykotor.resource.formats.tpc.io_tpc import TPCBinaryReader, TPCBinaryWriter
from pykotor.resource.formats.tpc.io_tga import TPCTGAWriter
from pykotor.resource.formats.tpc.io_bmp import TPCBMPWriter
from pykotor.resource.formats.tpc.auto import load_tpc, write_tpc
| 56.166667 | 80 | 0.863501 | 48 | 337 | 5.958333 | 0.375 | 0.192308 | 0.332168 | 0.454545 | 0.527972 | 0.325175 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068249 | 337 | 5 | 81 | 67.4 | 0.910828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d6d4b94e16ee0c3404cf7806927dc1fca4e43e71 | 2,068 | py | Python | tests/test_person.py | alexandre-labs/twodo_repositories | 470ee780a00e913a1496aed65945ecd59f136334 | [
"MIT"
] | null | null | null | tests/test_person.py | alexandre-labs/twodo_repositories | 470ee780a00e913a1496aed65945ecd59f136334 | [
"MIT"
] | null | null | null | tests/test_person.py | alexandre-labs/twodo_repositories | 470ee780a00e913a1496aed65945ecd59f136334 | [
"MIT"
] | null | null | null | import pytest
from twodo_domain.person import Person
from twodo_repositories.person import IPersonRepository, IPersonRepositoryException
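# The first four tests each omit one abstract repository method and expect TypeError on instantiation;
# the last test checks that errors raised inside a repository surface as IPersonRepositoryException.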
def test_person_repo_missing_create_method():
class TestPersonRepository(IPersonRepository):
def retrieve(self, person_data):
pass
def update(self, person_data):
pass
def delete(self, person_data):
pass
with pytest.raises(TypeError):
TestPersonRepository()
def test_person_repo_missing_update_method():
class TestPersonRepository(IPersonRepository):
def create(self, person):
pass
def retrieve(self, person_data):
pass
def delete(self, person_data):
pass
with pytest.raises(TypeError):
TestPersonRepository()
def test_person_repo_missing_retrieve_method():
class TestPersonRepository(IPersonRepository):
def create(self, person):
pass
def update(self, person_data):
pass
def delete(self, person_data):
pass
with pytest.raises(TypeError):
TestPersonRepository()
def test_person_repo_missing_delete_method():
class TestPersonRepository(IPersonRepository):
def create(self, person):
pass
def retrieve(self, person_data):
pass
def update(self, person_data):
pass
with pytest.raises(TypeError):
TestPersonRepository()
def test_raising_subclass_of_person_repository_exception():
class ConnectionTimeOut(IPersonRepositoryException):
pass
class TestPersonRepository(IPersonRepository):
def create(self, person):
raise ConnectionTimeOut
def retrieve(self, person_data):
pass
def update(self, person_data):
pass
def delete(self, person_data):
pass
person_repo = TestPersonRepository()
with pytest.raises(IPersonRepositoryException):
        person_repo.create(Person('Alexandre', 'alexandre@inter.net'))
| 23.5 | 83 | 0.658607 | 198 | 2,068 | 6.666667 | 0.186869 | 0.121212 | 0.127273 | 0.163636 | 0.709848 | 0.655303 | 0.655303 | 0.609091 | 0.609091 | 0.609091 | 0 | 0 | 0.272244 | 2,068 | 87 | 84 | 23.770115 | 0.877076 | 0 | 0 | 0.775862 | 0 | 0 | 0.01354 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.362069 | false | 0.275862 | 0.051724 | 0 | 0.517241 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
ba3d8c81275b8db5d5dff64c26b7991fd75914b9 | 113 | py | Python | steerclear/settings/unix_settings.py | beblount/Steer-Clear-Backend-Web | 2aca521bad5f9a09c912f8e546f46bd39610544f | [
"MIT"
] | null | null | null | steerclear/settings/unix_settings.py | beblount/Steer-Clear-Backend-Web | 2aca521bad5f9a09c912f8e546f46bd39610544f | [
"MIT"
] | null | null | null | steerclear/settings/unix_settings.py | beblount/Steer-Clear-Backend-Web | 2aca521bad5f9a09c912f8e546f46bd39610544f | [
"MIT"
] | null | null | null | SQLALCHEMY_DATABASE_URI = 'sqlite:////tmp/steerclear.db'
TEST_SQLALCHEMY_DATABASE_URI = 'sqlite:////tmp/test.db'
| 37.666667 | 56 | 0.769912 | 15 | 113 | 5.466667 | 0.533333 | 0.439024 | 0.512195 | 0.658537 | 0.731707 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053097 | 113 | 2 | 57 | 56.5 | 0.766355 | 0 | 0 | 0 | 0 | 0 | 0.442478 | 0.442478 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ba4119cddb47a4671e4e3d6f3442a71cf511ede1 | 5,228 | py | Python | tests/test_museumapi.py | VinayArora404219/museum-api | 8c3871a1c39ec3183a9d43183d64c9d2538825a3 | [
"MIT"
] | null | null | null | tests/test_museumapi.py | VinayArora404219/museum-api | 8c3871a1c39ec3183a9d43183d64c9d2538825a3 | [
"MIT"
] | null | null | null | tests/test_museumapi.py | VinayArora404219/museum-api | 8c3871a1c39ec3183a9d43183d64c9d2538825a3 | [
"MIT"
] | null | null | null | """
Tests for museumapi module.
"""
import json
import logging
import unittest
import os
import sys
from unittest.mock import patch, Mock
from museum_api.museumapi import MuseumAPI
logging.basicConfig(
filename='logs/test_museumapi_error.log',
level=logging.ERROR,
format='[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s',
datefmt='%H:%M:%S'
)
class TestMuseumAPI(unittest.TestCase):
"""
Tests functionality of MuseumAPI class.
"""
@classmethod
def setUpClass(cls) -> None:
"""
sets up mAPIObj to be used in all the test functions.
"""
cls.mAPIObj = MuseumAPI()
@patch('museum_api.museumapi.requests.get')
def test_getting_object_ids_when_response_is_ok(self, mock_get):
"""
Tests getting object ids from museum API when response is ok.
:param mock_get: mocked method get of requests module.
"""
actual_object_ids_data = None
tmp_object_ids_data = None
# getting mocked data from the file
try:
with open(os.path.join(os.path.abspath(os.path.dirname(__file__)),
'correct_data/object_ids_resp.json'), 'r', encoding='utf-8') \
as data:
actual_object_ids_data = json.load(data)
with open(os.path.join(os.path.abspath(os.path.dirname(__file__)),
'correct_data/object_ids_resp.json'), 'r', encoding='utf-8') \
as data:
tmp_object_ids_data = json.load(data)
except FileNotFoundError as fn_fe:
logging.error('File not found : %s', fn_fe.args[-1])
sys.exit(1)
# Configure the mock to return a response with an OK status code. Also, the mock should have
# a `json()` method that returns a dictionary object.
mock_get.return_value = Mock(ok=True)
mock_get.return_value.json.return_value = tmp_object_ids_data
# Call the service, which will send a request to the server.
mocked_object_ids_data = self.mAPIObj.get_all_object_ids()
# If the request is sent successfully, then I expect a response to be returned.
self.assertDictEqual(mocked_object_ids_data, actual_object_ids_data)
@patch('museum_api.museumapi.requests.get')
def test_getting_object_ids_when_response_is_not_ok(self, mock_get):
"""
Tests getting object ids from museum API when response is not ok.
:param mock_get: mocked method get of requests module.
"""
# Configure the mock to not return a response with an OK status code.
mock_get.return_value.ok = False
# Call the service, which will send a request to the server.
object_ids_data = self.mAPIObj.get_all_object_ids()
        # If the response contains an error, no object ids should be returned.
self.assertIsNone(object_ids_data)
@patch('museum_api.museumapi.requests.get')
def test_getting_object_when_response_is_ok(self, mock_get):
"""
Tests getting object from museum API when response is ok.
:param mock_get: mocked method get of requests module.
"""
        actual_object_data = None
tmp_object_data = None
# getting mocked data from the file
try:
with open(os.path.join(os.path.abspath(os.path.dirname(__file__)),
'correct_data/object_resp.json'), 'r', encoding='utf-8') \
as data:
actual_object_data = json.load(data)
            # mock the response with the same object fixture that the assertion compares against
            with open(os.path.join(os.path.abspath(os.path.dirname(__file__)),
                                   'correct_data/object_resp.json'), 'r', encoding='utf-8')\
as data:
tmp_object_data = json.load(data)
except FileNotFoundError as fn_fe:
logging.error('File not found : %s', fn_fe.args[-1])
sys.exit(1)
# Configure the mock to return a response with an OK status code. Also, the mock should have
# a `json()` method that returns a dictionary object.
mock_get.return_value = Mock(ok=True)
mock_get.return_value.json.return_value = tmp_object_data
# Call the service, which will send a request to the server.
mocked_object_data = self.mAPIObj.get_object_for_id(1)
# If the request is sent successfully, then I expect a response to be returned.
self.assertDictEqual(mocked_object_data, actual_object_data)
@patch('museum_api.museumapi.requests.get')
def test_getting_object_when_response_is_not_ok(self, mock_get):
"""
Tests getting object from museum API when response is not ok.
:param mock_get: mocked method get of requests module.
"""
# Configure the mock to not return a response with an OK status code.
mock_get.return_value.ok = False
# Call the service, which will send a request to the server.
object_data = self.mAPIObj.get_object_for_id(1)
        # If the response contains an error, it must return None.
self.assertIsNone(object_data)
if __name__ == '__main__':
unittest.main()
| 38.441176 | 100 | 0.639059 | 707 | 5,228 | 4.507779 | 0.188119 | 0.053655 | 0.040791 | 0.033888 | 0.807342 | 0.804205 | 0.787888 | 0.787888 | 0.787888 | 0.787888 | 0 | 0.002634 | 0.27391 | 5,228 | 135 | 101 | 38.725926 | 0.836934 | 0.303366 | 0 | 0.391304 | 0 | 0.014493 | 0.125688 | 0.090935 | 0 | 0 | 0 | 0.007407 | 0.057971 | 1 | 0.072464 | false | 0 | 0.101449 | 0 | 0.188406 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bac5fb393b25c64375b36fa4868b793b8349cedb | 96 | py | Python | venv/lib/python3.8/site-packages/poetry/core/spdx/updater.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/poetry/core/spdx/updater.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/poetry/core/spdx/updater.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/a5/59/c7/21bc506d29e89823b8ccfa01f15735c723136bc29109083e0cb6a8f687 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.458333 | 0 | 96 | 1 | 96 | 96 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bae4eefb32643ebc8300f850daba590a47a5c3fd | 343 | py | Python | http_utils/recs/__init__.py | Kolyada/grouple-recsys-production | 11ec687a18084dcabb25fcc90f0a0ba51da6d24c | [
"MIT"
] | null | null | null | http_utils/recs/__init__.py | Kolyada/grouple-recsys-production | 11ec687a18084dcabb25fcc90f0a0ba51da6d24c | [
"MIT"
] | 4 | 2022-02-03T09:52:29.000Z | 2022-03-31T20:43:48.000Z | http_utils/recs/__init__.py | Kolyada/grouple-recsys-production | 11ec687a18084dcabb25fcc90f0a0ba51da6d24c | [
"MIT"
] | 3 | 2022-03-25T05:51:05.000Z | 2022-03-29T06:55:23.000Z | from .exploration_recommendations_handler import ExplorationRecommendationsHandler
from .personal_similar_items_handler import PersonalSimilarItemsHandler
from .recommendation_handler import RecommendHandler
from .similar_items_handler import SimilarItemsHandler
from .top_popular_recommendation_handler import TopPopularRecommendationHandler
| 57.166667 | 82 | 0.927114 | 31 | 343 | 9.903226 | 0.516129 | 0.211726 | 0.123779 | 0.162866 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058309 | 343 | 5 | 83 | 68.6 | 0.950464 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
240340e7d07f4f45e99b9c0a7de3ac96e934231a | 44 | py | Python | basic/import.py | Simba1992/python_programs | 1d23804e882a94fb34d1f2f846d0ee772d272568 | [
"Apache-2.0"
] | null | null | null | basic/import.py | Simba1992/python_programs | 1d23804e882a94fb34d1f2f846d0ee772d272568 | [
"Apache-2.0"
] | null | null | null | basic/import.py | Simba1992/python_programs | 1d23804e882a94fb34d1f2f846d0ee772d272568 | [
"Apache-2.0"
] | null | null | null | from function import test
print(test(50))
| 14.666667 | 26 | 0.75 | 7 | 44 | 4.714286 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 0.159091 | 44 | 2 | 27 | 22 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
241473c59d70dce8cb503a8d1922379b63959485 | 146 | py | Python | frappe_health_ec/frappe_health_ec/doctype/patient/test_patient.py | lapillaga/frappe_health_ec | 5675e3da97550b6e8cf10d15d342144818ec2fee | [
"MIT"
] | null | null | null | frappe_health_ec/frappe_health_ec/doctype/patient/test_patient.py | lapillaga/frappe_health_ec | 5675e3da97550b6e8cf10d15d342144818ec2fee | [
"MIT"
] | null | null | null | frappe_health_ec/frappe_health_ec/doctype/patient/test_patient.py | lapillaga/frappe_health_ec | 5675e3da97550b6e8cf10d15d342144818ec2fee | [
"MIT"
] | null | null | null | # Copyright (c) 2021, Lugo S.A.S and Contributors
# See license.txt
# import frappe
import unittest
class TestPatient(unittest.TestCase):
pass
| 16.222222 | 49 | 0.760274 | 21 | 146 | 5.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0.150685 | 146 | 8 | 50 | 18.25 | 0.862903 | 0.527397 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
0304267d492aee060a2162eb70babc462bba1ab7 | 73 | py | Python | ppcd/transforms/__init__.py | geoyee/PdRSCD | 4a1a7256320f006c15e3e5b5b238fdfba8198853 | [
"Apache-2.0"
] | 44 | 2021-04-21T02:41:55.000Z | 2022-03-09T03:01:16.000Z | ppcd/transforms/__init__.py | MinZHANG-WHU/PdRSCD | 612976225201d78adc7ff99529ada17b41fedc5d | [
"Apache-2.0"
] | 2 | 2021-09-30T07:52:47.000Z | 2022-02-12T09:05:35.000Z | ppcd/transforms/__init__.py | MinZHANG-WHU/PdRSCD | 612976225201d78adc7ff99529ada17b41fedc5d | [
"Apache-2.0"
] | 6 | 2021-07-23T02:18:39.000Z | 2022-01-14T01:15:50.000Z | from .transforms import *
from .enhance import *
from . import functional | 24.333333 | 25 | 0.780822 | 9 | 73 | 6.333333 | 0.555556 | 0.350877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150685 | 73 | 3 | 26 | 24.333333 | 0.919355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
033015a113d7564bc1822258480a9e14bea77be5 | 116 | py | Python | src/domain/errors/invalid_image_to_normalize_failure.py | OzielFilho/ProjetoFinalPdi | c9e6fe415f1a985d6eeac204580d3ab623026665 | [
"MIT"
] | null | null | null | src/domain/errors/invalid_image_to_normalize_failure.py | OzielFilho/ProjetoFinalPdi | c9e6fe415f1a985d6eeac204580d3ab623026665 | [
"MIT"
] | null | null | null | src/domain/errors/invalid_image_to_normalize_failure.py | OzielFilho/ProjetoFinalPdi | c9e6fe415f1a985d6eeac204580d3ab623026665 | [
"MIT"
] | null | null | null | from domain.errors.image_failure import ImageFailure
class InvalidImageToNormalizeFailure(ImageFailure):
pass
| 19.333333 | 52 | 0.844828 | 11 | 116 | 8.818182 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112069 | 116 | 5 | 53 | 23.2 | 0.941748 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
0345a447c1cb133f5b2eb35c6bb1edbcadc5b697 | 1,091 | py | Python | output/fed14/early_lm_run.py | imsure318/ecir2017-fusion | f6ba449c8cad3e8c488283d6f8d90924305ec034 | [
"BSD-4-Clause"
] | 1 | 2017-11-09T15:21:30.000Z | 2017-11-09T15:21:30.000Z | output/fed14/early_lm_run.py | iai-group/ecir2017-fusion | 3b020a7c99666666c34b0439648edc39fbab0441 | [
"BSD-4-Clause"
] | null | null | null | output/fed14/early_lm_run.py | iai-group/ecir2017-fusion | 3b020a7c99666666c34b0439648edc39fbab0441 | [
"BSD-4-Clause"
] | 4 | 2017-01-31T07:45:33.000Z | 2020-08-08T17:53:36.000Z | from nordlys.logic.fusion.fusion_scorer_early_lm import EarlyFusionScorer
if __name__ == "__main__":
index_name = "fedweb14"
assoc_file = "data/trecfed/assoc_fed14.txt"
assoc_mode = 1
retr_params = {"lambda": 0.1}
ef = EarlyFusionScorer(index_name, assoc_file, assoc_mode, retr_params, num = 10000)
ef.load_associations()
query_file = "data/trecfed/query_2014.txt"
queries = ef.load_queries(query_file)
outputfile = "output/fed14/elm_fed14_1.txt"
    #e:early-fusion lm:language modeling fed14:federated 14 1:binary
ef.score_queries(queries, outputfile)
index_name = "fedweb14"
assoc_file = "data/trecfed/assoc_feb14.txt"
assoc_mode1 = 2
retr_params = {"lambda": 0.1}
ef = EarlyFusionScorer(index_name, assoc_file, assoc_mode1, retr_params, num = 10000)
ef.load_associations()
query_file = "data/trecfed/query_2014.txt"
queries = ef.load_queries(query_file)
outputfile = "output/fed14/elm_fed14_2.txt"
    #e:early-fusion lm:language modeling fed14:federated 14 2:association mode 2
ef.score_queries(queries, outputfile)
| 40.407407 | 89 | 0.730522 | 151 | 1,091 | 4.980132 | 0.311258 | 0.047872 | 0.079787 | 0.058511 | 0.837766 | 0.837766 | 0.837766 | 0.837766 | 0.726064 | 0.726064 | 0 | 0.059146 | 0.163153 | 1,091 | 26 | 90 | 41.961538 | 0.764513 | 0.11549 | 0 | 0.545455 | 0 | 0 | 0.209761 | 0.172378 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.045455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cee161ceb52bd6d204be2dbdadace990e8f4d2f3 | 161 | py | Python | pineboolib/fllegacy/fljasperengine.py | Aulla/pineboo | 3ad6412d365a6ad65c3bb2bdc03f5798d7c37004 | [
"MIT"
] | 2 | 2017-12-10T23:06:16.000Z | 2017-12-10T23:06:23.000Z | pineboolib/fllegacy/fljasperengine.py | Aulla/pineboo | 3ad6412d365a6ad65c3bb2bdc03f5798d7c37004 | [
"MIT"
] | 36 | 2017-11-05T21:13:47.000Z | 2020-08-26T15:56:15.000Z | pineboolib/fllegacy/fljasperengine.py | Aulla/pineboo | 3ad6412d365a6ad65c3bb2bdc03f5798d7c37004 | [
"MIT"
] | 8 | 2017-11-05T15:56:31.000Z | 2019-04-25T16:32:28.000Z | """FLJasperEngine module."""
from PyQt6 import QtCore # type: ignore[import]
class FLJasperEngine(QtCore.QObject):
"""FLJasperEngine class."""
pass
| 16.1 | 48 | 0.695652 | 16 | 161 | 7 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007463 | 0.167702 | 161 | 9 | 49 | 17.888889 | 0.828358 | 0.409938 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
3001dbc404c08a9f2a0a71f157869804700c4bf2 | 37,217 | py | Python | DXFImporter/lib/ezdxf/tools/pattern.py | tapnair/DXFImporter | c86cde0b4420ca7d0c5e3569675acd2d4426667f | [
"MIT"
] | 2 | 2021-07-28T03:52:02.000Z | 2021-07-31T05:08:11.000Z | DXFImporter/lib/ezdxf/tools/pattern.py | tapnair/DXFImporter | c86cde0b4420ca7d0c5e3569675acd2d4426667f | [
"MIT"
] | 1 | 2020-04-28T17:52:26.000Z | 2020-10-07T01:28:56.000Z | DXFImporter/lib/ezdxf/tools/pattern.py | tapnair/DXFImporter | c86cde0b4420ca7d0c5e3569675acd2d4426667f | [
"MIT"
] | 1 | 2021-07-31T05:08:12.000Z | 2021-07-31T05:08:12.000Z | # Purpose: Standard definitions
# Created: 08.07.2015
# Copyright (c) 2015-2020, Manfred Moitzi
# License: MIT License
# pattern type: predefined (1)
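# Each pattern maps its name to a list of hatch line definitions: [angle, base_point, offset, dash_length_items].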
PATTERN_OLD = {
"ANSI31": [[45.0, (0.0, 0.0), (-0.0884, 0.0884), []]],
"ANSI32": [
[45.0, (0.0, 0.0), (-0.2652, 0.2652), []],
[45.0, (0.1768, 0.0), (-0.2652, 0.2652), []],
],
"ANSI33": [
[45.0, (20.0, 0.0), (-0.1768, 0.1768), []],
[45.0, (20.1768, 0.0), (-0.1768, 0.1768), [0.125, -0.0625]],
],
"ANSI34": [
[45.0, (0.0, 0.0), (-0.5303, 0.5303), []],
[45.0, (0.1768, 0.0), (-0.5303, 0.5303), []],
[45.0, (0.3536, 0.0), (-0.5303, 0.5303), []],
[45.0, (0.5303, 0.0), (-0.5303, 0.5303), []],
],
"ANSI35": [
[45.0, (-40.0, -10.0), (-0.1768, 0.1768), []],
[45.0, (-39.8232, -10.0), (-0.1768, 0.1768), [0.3125, -0.0625, 0.0, -0.0625]],
],
"ANSI36": [
[45.0, (-40.0, -10.0), (0.0663, 0.2431), [0.3125, -0.0625, 0.0, -0.0625]]
],
"ANSI37": [
[45.0, (0.0, 0.0), (-0.0884, 0.0884), []],
[135.0, (0.0, 0.0), (-0.0884, -0.0884), []],
],
"ANSI38": [
[45.0, (0.0, 0.0), (-0.0884, 0.0884), []],
[135.0, (0.0, 0.0), (-0.2652, 0.0884), [0.3125, -0.1875]],
],
"ACAD_ISO02W100": [[0.0, (0.0, 0.0), (0.0, 0.5), [1.2, -0.3]]],
"ACAD_ISO03W100": [[0.0, (0.0, 0.0), (0.0, 0.5), [1.2, -1.8]]],
"ACAD_ISO04W100": [[0.0, (0.0, 0.0), (0.0, 0.5), [2.4, -0.3, 0.05, -0.3]]],
"ACAD_ISO05W100": [
[0.0, (0.0, 0.0), (0.0, 0.5), [2.4, -0.3, 0.05, -0.3, 0.05, -0.3]]
],
"ACAD_ISO06W100": [
[0.0, (0.0, 0.0), (0.0, 0.5), [2.4, -0.3, 0.05, -0.3, 0.05, -0.65]],
[0.0, (0.0, 0.0), (0.0, 0.5), [-3.4, 0.05, -0.3]],
],
"ACAD_ISO07W100": [[0.0, (0.0, 0.0), (0.0, 0.5), [0.05, -0.3]]],
"ACAD_ISO08W100": [[0.0, (0.0, 0.0), (0.0, 0.5), [2.4, -0.3, 0.6, -0.3]]],
"ACAD_ISO09W100": [
[0.0, (0.0, 0.0), (0.0, 0.5), [2.4, -0.3, 0.6, -0.3, 0.6, -0.3]]
],
"ACAD_ISO10W100": [[0.0, (0.0, 0.0), (0.0, 0.5), [1.2, -0.3, 0.05, -0.3]]],
"ACAD_ISO11W100": [
[0.0, (0.0, 0.0), (0.0, 0.5), [1.2, -0.3, 1.2, -0.3, 0.05, -0.3]]
],
"ACAD_ISO12W100": [
[0.0, (0.0, 0.0), (0.0, 0.5), [1.2, -0.3, 0.05, -0.3, 0.05, -0.3]]
],
"ACAD_ISO13W100": [
[0.0, (0.0, 0.0), (0.0, 0.5), [1.2, -0.3, 1.2, -0.3, 0.05, -0.65]],
[0.0, (0.0, 0.0), (0.0, 0.5), [-3.35, 0.05, -0.3]],
],
"ACAD_ISO14W100": [
[0.0, (0.0, 0.0), (0.0, 0.5), [1.2, -0.3, 0.05, -0.3, 0.05, -0.65]],
[0.0, (0.0, 0.0), (0.0, 0.5), [-2.2, 0.05, -0.3]],
],
"ACAD_ISO15W100": [
[0.0, (0.0, 0.0), (0.0, 0.5), [1.2, -0.3, 1.2, -0.3, 0.05, -1.0]],
[0.0, (0.0, 0.0), (0.0, 0.5), [-3.35, 0.05, -0.3, 0.05, -0.3]],
],
"ANGLE": [
[0.0, (0.0, 0.0), (0.0, 0.275), [0.2, -0.075]],
[90.0, (0.0, 0.0), (-0.275, 0.0), [0.2, -0.075]],
],
"AR-B816": [
[0.0, (0.0, 0.0), (0.0, 0.8), []],
[90.0, (0.0, 0.0), (-0.8, 0.8), [0.8, -0.8]],
],
"AR-B816C": [
[0.0, (0.0, 0.0), (0.8, 0.8), [1.5625, -0.0375]],
[0.0, (-0.8, 0.0375), (0.8, 0.8), [1.5625, -0.0375]],
[90.0, (0.0, 0.0), (-0.8, 0.8), [-0.8375, 0.7625]],
[90.0, (-0.0375, 0.0), (-0.8, 0.8), [-0.8375, 0.7625]],
],
"AR-B88": [
[0.0, (0.0, 0.0), (0.0, 0.8), []],
[90.0, (0.0, 0.0), (-0.4, 0.8), [0.8, -0.8]],
],
"AR-BRELM": [
[0.0, (0.0, 0.0), (0.0, 2.667), [3.8125, -0.1875]],
[0.0, (0.0, 1.125), (0.0, 2.667), [3.8125, -0.1875]],
[0.0, (1.0, 1.3335), (0.0, 2.667), [1.8125, -0.1875]],
[0.0, (1.0, 2.4585), (0.0, 2.667), [1.8125, -0.1875]],
[90.0, (0.0, 0.0), (-4.0, 0.0), [1.125, -1.542]],
[90.0, (-0.1875, 0.0), (-4.0, 0.0), [1.125, -1.542]],
[90.0, (1.0, 1.3335), (-2.0, 0.0), [1.125, -1.542]],
[90.0, (0.8125, 1.3335), (-2.0, 0.0), [1.125, -1.542]],
],
"AR-BRSTD": [
[0.0, (0.0, 0.0), (0.0, 2.667), []],
[90.0, (0.0, 0.0), (-4.0, 2.667), [2.667, -2.667]],
],
"AR-CONC": [
[50.0, (0.0, 0.0), (1.4345, -0.1255), [0.15, -1.65]],
[355.0, (0.0, 0.0), (-0.2775, 1.5044), [0.12, -1.32]],
[100.4514, (0.1195, -0.0105), (1.157, 1.3789), [0.1275, -1.4023]],
[46.1842, (0.0, 0.4), (2.1345, -0.331), [0.225, -2.475]],
[96.6356, (0.1779, 0.3724), (1.8693, 1.9482), [0.1912, -2.1034]],
[351.1842, (0.0, 0.4), (1.8693, 1.9482), [0.18, -1.98]],
[21.0, (0.2, 0.3), (1.1938, -0.8052), [0.15, -1.65]],
[326.0, (0.2, 0.3), (0.4866, 1.4503), [0.12, -1.32]],
[71.4514, (0.2995, 0.2329), (1.6804, 0.6451), [0.1275, -1.4023]],
[37.5, (0.0, 0.0), (0.0243, 0.6658), [0.0, -1.304, 0.0, -1.34, 0.0, -1.325]],
[7.5, (0.0, 0.0), (0.5261, 0.7888), [0.0, -0.764, 0.0, -1.274, 0.0, -0.505]],
[327.5, (-0.446, 0.0), (1.0676, -0.0451), [0.0, -0.5, 0.0, -1.56, 0.0, -2.07]],
[317.5, (-0.646, 0.0), (1.1664, 0.2002), [0.0, -0.65, 0.0, -1.036, 0.0, -1.47]],
],
"AR-HBONE": [
[45.0, (0.0, 0.0), (0.0, 1.1314), [2.4, -0.8]],
[135.0, (0.5657, 0.5657), (0.0, 1.1314), [2.4, -0.8]],
],
"AR-PARQ1": [
[90.0, (0.0, 0.0), (-2.4, 2.4), [2.4, -2.4]],
[90.0, (0.4, 0.0), (-2.4, 2.4), [2.4, -2.4]],
[90.0, (0.8, 0.0), (-2.4, 2.4), [2.4, -2.4]],
[90.0, (1.2, 0.0), (-2.4, 2.4), [2.4, -2.4]],
[90.0, (1.6, 0.0), (-2.4, 2.4), [2.4, -2.4]],
[90.0, (2.0, 0.0), (-2.4, 2.4), [2.4, -2.4]],
[90.0, (2.4, 0.0), (-2.4, 2.4), [2.4, -2.4]],
[0.0, (0.0, 2.4), (2.4, -2.4), [2.4, -2.4]],
[0.0, (0.0, 2.8), (2.4, -2.4), [2.4, -2.4]],
[0.0, (0.0, 3.2), (2.4, -2.4), [2.4, -2.4]],
[0.0, (0.0, 3.6), (2.4, -2.4), [2.4, -2.4]],
[0.0, (0.0, 4.0), (2.4, -2.4), [2.4, -2.4]],
[0.0, (0.0, 4.4), (2.4, -2.4), [2.4, -2.4]],
[0.0, (0.0, 4.8), (2.4, -2.4), [2.4, -2.4]],
],
"AR-RROOF": [
[0.0, (0.0, 0.0), (2.2, 1.0), [15.0, -2.0, 5.0, -1.0]],
[0.0, (1.33, 0.5), (-1.0, 1.33), [3.0, -0.33, 6.0, -0.75]],
[0.0, (0.5, 0.85), (5.2, 0.67), [8.0, -1.4, 4.0, -1.0]],
],
"AR-RSHKE": [
[0.0, (0.0, 0.0), (2.55, 1.2), [0.6, -0.5, 0.7, -0.3, 0.9, -0.4]],
[0.0, (0.6, 0.05), (2.55, 1.2), [0.5, -1.9, 0.4, -0.6]],
[0.0, (1.8, -0.075), (2.55, 1.2), [0.3, -3.1]],
[90.0, (0.0, 0.0), (-0.85, 1.2), [1.15, -3.65]],
[90.0, (0.6, 0.0), (-0.85, 1.2), [1.125, -3.675]],
[90.0, (1.1, 0.0), (-0.85, 1.2), [1.05, -3.75]],
[90.0, (1.8, -0.075), (-0.85, 1.2), [1.15, -3.65]],
[90.0, (2.1, -0.075), (-0.85, 1.2), [1.15, -3.65]],
[90.0, (3.0, 0.0), (-0.85, 1.2), [1.1, -3.7]],
],
"AR-SAND": [
[37.5, (0.0, 0.0), (-0.063, 1.9268), [0.0, -1.52, 0.0, -1.7, 0.0, -1.625]],
[7.5, (0.0, 0.0), (1.7698, 2.8221), [0.0, -0.82, 0.0, -1.37, 0.0, -0.525]],
[327.5, (-1.23, 0.0), (3.1141, 0.0057), [0.0, -0.5, 0.0, -1.8, 0.0, -2.35]],
[317.5, (-1.23, 0.0), (3.0061, 0.8777), [0.0, -0.25, 0.0, -1.18, 0.0, -1.35]],
],
"BOX": [
[90.0, (0.0, 0.0), (-1.0, 0.0), []],
[90.0, (0.25, 0.0), (-1.0, 0.0), []],
[0.0, (0.0, 0.0), (0.0, 1.0), [-0.25, 0.25]],
[0.0, (0.0, 0.25), (0.0, 1.0), [-0.25, 0.25]],
[0.0, (0.0, 0.5), (0.0, 1.0), [0.25, -0.25]],
[0.0, (0.0, 0.75), (0.0, 1.0), [0.25, -0.25]],
[90.0, (0.5, 0.0), (-1.0, 0.0), [0.25, -0.25]],
[90.0, (0.75, 0.0), (-1.0, 0.0), [0.25, -0.25]],
],
"BRASS": [
[0.0, (0.0, 0.0), (0.0, 0.25), []],
[0.0, (0.0, 0.125), (0.0, 0.25), [0.125, -0.0625]],
],
"BRICK": [
[0.0, (0.0, 0.0), (0.0, 0.25), []],
[90.0, (0.0, 0.0), (-0.5, 0.0), [0.25, -0.25]],
[90.0, (0.25, 0.0), (-0.5, 0.0), [-0.25, 0.25]],
],
"BRSTONE": [
[0.0, (0.0, 0.0), (0.0, 0.33), []],
[90.0, (0.9, 0.0), (-0.5, 0.33), [0.33, -0.33]],
[90.0, (0.8, 0.0), (-0.5, 0.33), [0.33, -0.33]],
[0.0, (0.9, 0.055), (0.5, 0.33), [-0.9, 0.1]],
[0.0, (0.9, 0.11), (0.5, 0.33), [-0.9, 0.1]],
[0.0, (0.9, 0.165), (0.5, 0.33), [-0.9, 0.1]],
[0.0, (0.9, 0.22), (0.5, 0.33), [-0.9, 0.1]],
[0.0, (0.9, 0.275), (0.5, 0.33), [-0.9, 0.1]],
],
"CLAY": [
[0.0, (0.0, 0.0), (0.0, 0.1875), []],
[0.0, (0.0, 0.0312), (0.0, 0.1875), []],
[0.0, (0.0, 0.0625), (0.0, 0.1875), []],
[0.0, (0.0, 0.125), (0.0, 0.1875), [0.1875, -0.125]],
],
"CORK": [
[0.0, (0.0, 0.0), (0.0, 0.125), []],
[135.0, (0.0625, -0.0625), (-0.25, -0.25), [0.1768, -0.1768]],
[135.0, (0.0938, -0.0625), (-0.25, -0.25), [0.1768, -0.1768]],
[135.0, (0.125, -0.0625), (-0.25, -0.25), [0.1768, -0.1768]],
],
"CROSS": [
[0.0, (0.0, 0.0), (0.25, 0.25), [0.125, -0.375]],
[90.0, (0.0625, -0.0625), (-0.25, 0.25), [0.125, -0.375]],
],
"DASH": [[0.0, (0.0, 0.0), (0.125, 0.125), [0.125, -0.125]]],
"DOLMIT": [
[0.0, (0.0, 0.0), (0.0, 0.25), []],
[45.0, (0.0, 0.0), (-0.5, 0.5), [0.3536, -0.7071]],
],
"DOTS": [[0.0, (0.0, 0.0), (0.0312, 0.0625), [0.0, -0.0625]]],
"EARTH": [
[0.0, (0.0, 0.0), (0.25, 0.25), [0.25, -0.25]],
[0.0, (0.0, 0.0938), (0.25, 0.25), [0.25, -0.25]],
[0.0, (0.0, 0.1875), (0.25, 0.25), [0.25, -0.25]],
[90.0, (0.0312, 0.2188), (-0.25, 0.25), [0.25, -0.25]],
[90.0, (0.125, 0.2188), (-0.25, 0.25), [0.25, -0.25]],
[90.0, (0.2188, 0.2188), (-0.25, 0.25), [0.25, -0.25]],
],
"ESCHER": [
[60.0, (0.0, 0.0), (-1.2, -0.0), [1.1, -0.1]],
[180.0, (0.0, 0.0), (0.6, -1.0392), [1.1, -0.1]],
[300.0, (0.0, 0.0), (1.2, -0.0), [1.1, -0.1]],
[60.0, (0.1, 0.0), (-1.2, -0.0), [0.2, -1.0]],
[300.0, (0.1, 0.0), (1.2, -0.0), [0.2, -1.0]],
[60.0, (-0.05, 0.0866), (-1.2, -0.0), [0.2, -1.0]],
[180.0, (-0.05, 0.0866), (0.6, -1.0392), [0.2, -1.0]],
[300.0, (-0.05, -0.0866), (1.2, -0.0), [0.2, -1.0]],
[180.0, (-0.05, -0.0866), (0.6, -1.0392), [0.2, -1.0]],
[60.0, (-0.4, 0.0), (-1.2, -0.0), [0.2, -1.0]],
[300.0, (-0.4, 0.0), (1.2, -0.0), [0.2, -1.0]],
[60.0, (0.2, -0.3464), (-1.2, -0.0), [0.2, -1.0]],
[180.0, (0.2, -0.3464), (0.6, -1.0392), [0.2, -1.0]],
[300.0, (0.2, 0.3464), (1.2, -0.0), [0.2, -1.0]],
[180.0, (0.2, 0.3464), (0.6, -1.0392), [0.2, -1.0]],
[0.0, (0.2, 0.1732), (-0.6, 1.0392), [0.7, -0.5]],
[0.0, (0.2, -0.1732), (-0.6, 1.0392), [0.7, -0.5]],
[120.0, (0.05, 0.2598), (-1.2, 0.0), [0.7, -0.5]],
[120.0, (-0.25, 0.0866), (-1.2, 0.0), [0.7, -0.5]],
[240.0, (-0.25, -0.0866), (0.6, -1.0392), [0.7, -0.5]],
[240.0, (0.05, -0.2598), (0.6, -1.0392), [0.7, -0.5]],
],
"FLEX": [
[0.0, (0.0, 0.0), (0.0, 0.25), [0.25, -0.25]],
[45.0, (0.25, 0.0), (0.0, 0.25), [0.0625, -0.2286, 0.0625, -0.3536]],
],
"GOST_GLASS": [
[45.0, (0.0, 0.0), (0.8485, -0.0), [0.5, -0.7]],
[45.0, (0.2121, 0.0), (0.8485, -0.0), [0.2, -1.0]],
[45.0, (0.0, 0.2121), (0.8485, -0.0), [0.2, -1.0]],
],
"GOST_WOOD": [
[90.0, (0.0, 0.0), (1.2, -0.0), [2.0, -0.4]],
[90.0, (0.4, -0.4), (1.2, -0.0), [1.2, -0.3, 0.6, -0.3]],
[90.0, (0.8, -1.0), (1.2, -0.0), [2.0, -0.4]],
],
"GOST_GROUND": [
[45.0, (0.0, 0.0), (2.8284, -0.0), [4.0]],
[45.0, (0.6, 0.0), (2.8284, -0.0), [4.0]],
[45.0, (1.2, 0.0), (2.8284, -0.0), [4.0]],
],
"GRASS": [
[90.0, (0.0, 0.0), (-0.7071, 0.7071), [0.1875, -1.2267]],
[45.0, (0.0, 0.0), (-0.7071, 0.7071), [0.1875, -0.8125]],
[135.0, (0.0, 0.0), (-0.7071, -0.7071), [0.1875, -0.8125]],
],
"GRATE": [
[0.0, (0.0, 0.0), (0.0, 0.0312), []],
[90.0, (0.0, 0.0), (-0.125, 0.0), []],
],
"GRAVEL": [
[228.0128, (0.72, 1.0), (-8.0, -9.0), [0.1345, -13.3191]],
[184.9697, (0.63, 0.9), (12.0, 1.0), [0.2309, -22.8559]],
[132.5104, (0.4, 0.88), (10.0, -11.0), [0.1628, -16.116]],
[267.2737, (0.01, 0.63), (1.0, 20.0), [0.2102, -20.8136]],
[292.8337, (0.0, 0.42), (-5.0, 12.0), [0.2062, -20.4094]],
[357.2737, (0.08, 0.23), (-20.0, 1.0), [0.2102, -20.8136]],
[37.6942, (0.29, 0.22), (-13.0, -10.0), [0.278, -27.5248]],
[72.2553, (0.51, 0.39), (7.0, 22.0), [0.2625, -25.9863]],
[121.4296, (0.59, 0.64), (-8.0, 13.0), [0.2109, -20.8841]],
[175.2364, (0.48, 0.82), (11.0, -1.0), [0.2408, -11.8008]],
[222.3974, (0.24, 0.84), (-12.0, -11.0), [0.3114, -30.8334]],
[138.8141, (1.0, 0.62), (-7.0, 6.0), [0.1063, -10.5238]],
[171.4692, (0.92, 0.69), (13.0, -2.0), [0.2022, -20.0215]],
[225.0, (0.72, 0.72), (-0.0, -1.0), [0.1414, -1.2728]],
[203.1986, (0.65, 0.84), (5.0, 2.0), [0.0762, -7.5396]],
[291.8014, (0.58, 0.81), (-1.0, 3.0), [0.1077, -5.2775]],
[30.9638, (0.62, 0.71), (3.0, 2.0), [0.1749, -5.656]],
[161.5651, (0.77, 0.8), (2.0, -1.0), [0.1265, -3.0358]],
[16.3895, (0.0, 0.81), (10.0, 3.0), [0.1772, -17.5428]],
[70.3462, (0.17, 0.86), (-4.0, -11.0), [0.1487, -14.7174]],
[293.1986, (0.77, 1.0), (-2.0, 5.0), [0.1523, -7.4635]],
[343.6105, (0.83, 0.86), (-10.0, 3.0), [0.1772, -17.5428]],
[339.444, (0.0, 0.19), (-5.0, 2.0), [0.1709, -8.3731]],
[294.7751, (0.16, 0.13), (-5.0, 11.0), [0.1432, -14.1746]],
[66.8014, (0.78, 0.0), (2.0, 5.0), [0.1523, -7.4635]],
[17.354, (0.84, 0.14), (-13.0, -4.0), [0.1676, -16.5954]],
[69.444, (0.29, 0.0), (-2.0, -5.0), [0.0854, -8.4586]],
[101.3099, (0.72, 0.0), (-1.0, 4.0), [0.051, -5.048]],
[165.9638, (0.71, 0.05), (3.0, -1.0), [0.2062, -3.917]],
[186.009, (0.51, 0.1), (10.0, 1.0), [0.191, -18.9139]],
[303.6901, (0.62, 0.62), (-1.0, 2.0), [0.1442, -3.4613]],
[353.1572, (0.7, 0.5), (17.0, -2.0), [0.2518, -24.9276]],
[60.9454, (0.95, 0.47), (-4.0, -7.0), [0.103, -10.1927]],
[90.0, (1.0, 0.56), (-1.0, 1.0), [0.06, -0.94]],
[120.2564, (0.49, 0.13), (4.0, -7.0), [0.1389, -13.7535]],
[48.0128, (0.42, 0.25), (8.0, 9.0), [0.2691, -13.1846]],
[0.0, (0.6, 0.45), (1.0, 1.0), [0.26, -0.74]],
[325.3048, (0.86, 0.45), (10.0, -7.0), [0.1581, -15.6533]],
[254.0546, (0.99, 0.36), (-1.0, -4.0), [0.1456, -7.1345]],
[207.646, (0.95, 0.22), (-19.0, -10.0), [0.2371, -23.4695]],
[175.4261, (0.74, 0.11), (-13.0, 1.0), [0.2508, -24.8291]],
],
"HEX": [
[0.0, (0.0, 0.0), (0.0, 0.2165), [0.125, -0.25]],
[120.0, (0.0, 0.0), (-0.1875, -0.1083), [0.125, -0.25]],
[60.0, (0.125, 0.0), (-0.1875, 0.1083), [0.125, -0.25]],
],
"HONEY": [
[0.0, (0.0, 0.0), (0.1875, 0.1083), [0.125, -0.25]],
[120.0, (0.0, 0.0), (-0.1875, 0.1083), [0.125, -0.25]],
[60.0, (0.0, 0.0), (0.0, 0.2165), [-0.25, 0.125]],
],
"HOUND": [
[0.0, (0.0, 0.0), (0.25, 0.0625), [1.0, -0.5]],
[90.0, (0.0, 0.0), (-0.0625, -0.25), [1.0, -0.5]],
],
"INSUL": [
[0.0, (0.0, 0.0), (0.0, 0.375), []],
[0.0, (0.0, 0.125), (0.0, 0.375), [0.125, -0.125]],
[0.0, (0.0, 0.25), (0.0, 0.375), [0.125, -0.125]],
],
"LINE": [[0.0, (0.0, 0.0), (0.0, 0.125), []]],
"MUDST": [[0.0, (0.0, 0.0), (0.5, 0.25), [0.25, -0.25, 0.0, -0.25, 0.0, -0.25]]],
"NET": [[0.0, (0.0, 0.0), (0.0, 0.125), []], [90.0, (0.0, 0.0), (-0.125, 0.0), []]],
"NET3": [
[0.0, (0.0, 0.0), (0.0, 0.125), []],
[60.0, (0.0, 0.0), (-0.1083, 0.0625), []],
[120.0, (0.0, 0.0), (-0.1083, -0.0625), []],
],
"PLAST": [
[0.0, (0.0, 0.0), (0.0, 0.25), []],
[0.0, (0.0, 0.0312), (0.0, 0.25), []],
[0.0, (0.0, 0.0625), (0.0, 0.25), []],
],
"PLASTI": [
[0.0, (0.0, 0.0), (0.0, 0.25), []],
[0.0, (0.0, 0.0312), (0.0, 0.25), []],
[0.0, (0.0, 0.0625), (0.0, 0.25), []],
[0.0, (0.0, 0.1562), (0.0, 0.25), []],
],
"SACNCR": [
[45.0, (0.0, 0.0), (-0.0663, 0.0663), []],
[45.0, (0.0663, 0.0), (-0.0663, 0.0663), [0.0, -0.0938]],
],
"SQUARE": [
[0.0, (0.0, 0.0), (0.0, 0.125), [0.125, -0.125]],
[90.0, (0.0, 0.0), (-0.125, 0.0), [0.125, -0.125]],
],
"STARS": [
[0.0, (0.0, 0.0), (0.0, 0.2165), [0.125, -0.125]],
[60.0, (0.0, 0.0), (-0.1875, 0.1083), [0.125, -0.125]],
[120.0, (0.0625, 0.1083), (-0.1875, -0.1083), [0.125, -0.125]],
],
"STEEL": [
[45.0, (0.0, 0.0), (-0.0884, 0.0884), []],
[45.0, (0.0, 0.0625), (-0.0884, 0.0884), []],
],
"SWAMP": [
[0.0, (0.0, 0.0), (0.5, 0.866), [0.125, -0.875]],
[90.0, (0.0625, 0.0), (-0.5, 0.866), [0.0625, -1.6696]],
[90.0, (0.0781, 0.0), (-0.5, 0.866), [0.05, -1.6821]],
[90.0, (0.0469, 0.0), (-0.5, 0.866), [0.05, -1.6821]],
[60.0, (0.0938, 0.0), (-0.5, 0.866), [0.04, -0.96]],
[120.0, (0.0312, 0.0), (-1.0, 0.0), [0.04, -0.96]],
],
"TRANS": [
[0.0, (0.0, 0.0), (0.0, 0.25), []],
[0.0, (0.0, 0.125), (0.0, 0.25), [0.125, -0.125]],
],
"TRIANG": [
[60.0, (0.0, 0.0), (-0.1875, 0.3248), [0.1875, -0.1875]],
[120.0, (0.0, 0.0), (-0.375, 0.0), [0.1875, -0.1875]],
[0.0, (-0.0938, 0.1624), (0.1875, 0.3248), [0.1875, -0.1875]],
],
"ZIGZAG": [
[0.0, (0.0, 0.0), (0.125, 0.125), [0.125, -0.125]],
[90.0, (0.125, 0.0), (-0.125, 0.125), [0.125, -0.125]],
],
}
PATTERN_NEW = {
"ANSI31": [[45.0, (0.0, 0.0), (-2.2627, 2.2627), []]],
"ANSI32": [
[45.0, (0.0, 0.0), (-6.7882, 6.7882), []],
[45.0, (4.5255, 0.0), (-6.7882, 6.7882), []],
],
"ANSI33": [
[45.0, (512.0, 0.0), (-4.5255, 4.5255), []],
[45.0, (516.5255, 0.0), (-4.5255, 4.5255), [3.2, -1.6]],
],
"ANSI34": [
[45.0, (0.0, 0.0), (-13.5765, 13.5765), []],
[45.0, (4.5255, 0.0), (-13.5765, 13.5765), []],
[45.0, (9.051, 0.0), (-13.5765, 13.5765), []],
[45.0, (13.5765, 0.0), (-13.5765, 13.5765), []],
],
"ANSI35": [
[45.0, (-1024.0, -256.0), (-4.5255, 4.5255), []],
[45.0, (-1019.4745, -256.0), (-4.5255, 4.5255), [8.0, -1.6, 0.0, -1.6]],
],
"ANSI36": [[45.0, (-1024.0, -256.0), (1.6971, 6.2225), [8.0, -1.6, 0.0, -1.6]]],
"ANSI37": [
[45.0, (0.0, 0.0), (-2.2627, 2.2627), []],
[135.0, (0.0, 0.0), (-2.2627, -2.2627), []],
],
"ANSI38": [
[45.0, (0.0, 0.0), (-2.2627, 2.2627), []],
[135.0, (0.0, 0.0), (-6.7882, 2.2627), [8.0, -4.8]],
],
"ACAD_ISO02W100": [[0.0, (0.0, 0.0), (0.0, 12.8), [30.72, -7.68]]],
"ACAD_ISO03W100": [[0.0, (0.0, 0.0), (0.0, 12.8), [30.72, -46.08]]],
"ACAD_ISO04W100": [[0.0, (0.0, 0.0), (0.0, 12.8), [61.44, -7.68, 1.28, -7.68]]],
"ACAD_ISO05W100": [
[0.0, (0.0, 0.0), (0.0, 12.8), [61.44, -7.68, 1.28, -7.68, 1.28, -7.68]]
],
"ACAD_ISO06W100": [
[0.0, (0.0, 0.0), (0.0, 12.8), [61.44, -7.68, 1.28, -7.68, 1.28, -16.64]],
[0.0, (0.0, 0.0), (0.0, 12.8), [-87.04, 1.28, -7.68]],
],
"ACAD_ISO07W100": [[0.0, (0.0, 0.0), (0.0, 12.8), [1.28, -7.68]]],
"ACAD_ISO08W100": [[0.0, (0.0, 0.0), (0.0, 12.8), [61.44, -7.68, 15.36, -7.68]]],
"ACAD_ISO09W100": [
[0.0, (0.0, 0.0), (0.0, 12.8), [61.44, -7.68, 15.36, -7.68, 15.36, -7.68]]
],
"ACAD_ISO10W100": [[0.0, (0.0, 0.0), (0.0, 12.8), [30.72, -7.68, 1.28, -7.68]]],
"ACAD_ISO11W100": [
[0.0, (0.0, 0.0), (0.0, 12.8), [30.72, -7.68, 30.72, -7.68, 1.28, -7.68]]
],
"ACAD_ISO12W100": [
[0.0, (0.0, 0.0), (0.0, 12.8), [30.72, -7.68, 1.28, -7.68, 1.28, -7.68]]
],
"ACAD_ISO13W100": [
[0.0, (0.0, 0.0), (0.0, 12.8), [30.72, -7.68, 30.72, -7.68, 1.28, -16.64]],
[0.0, (0.0, 0.0), (0.0, 12.8), [-85.76, 1.28, -7.68]],
],
"ACAD_ISO14W100": [
[0.0, (0.0, 0.0), (0.0, 12.8), [30.72, -7.68, 1.28, -7.68, 1.28, -16.64]],
[0.0, (0.0, 0.0), (0.0, 12.8), [-56.32, 1.28, -7.68]],
],
"ACAD_ISO15W100": [
[0.0, (0.0, 0.0), (0.0, 12.8), [30.72, -7.68, 30.72, -7.68, 1.28, -25.6]],
[0.0, (0.0, 0.0), (0.0, 12.8), [-85.76, 1.28, -7.68, 1.28, -7.68]],
],
"ANGLE": [
[0.0, (0.0, 0.0), (0.0, 7.04), [5.12, -1.92]],
[90.0, (0.0, 0.0), (-7.04, 0.0), [5.12, -1.92]],
],
"AR-B816": [
[0.0, (0.0, 0.0), (0.0, 20.48), []],
[90.0, (0.0, 0.0), (-20.48, 20.48), [20.48, -20.48]],
],
"AR-B816C": [
[0.0, (0.0, 0.0), (20.48, 20.48), [40.0, -0.96]],
[0.0, (-20.48, 0.96), (20.48, 20.48), [40.0, -0.96]],
[90.0, (0.0, 0.0), (-20.48, 20.48), [-21.44, 19.52]],
[90.0, (-0.96, 0.0), (-20.48, 20.48), [-21.44, 19.52]],
],
"AR-B88": [
[0.0, (0.0, 0.0), (0.0, 20.48), []],
[90.0, (0.0, 0.0), (-10.24, 20.48), [20.48, -20.48]],
],
"AR-BRELM": [
[0.0, (0.0, 0.0), (0.0, 68.2752), [97.6, -4.8]],
[0.0, (0.0, 28.8), (0.0, 68.2752), [97.6, -4.8]],
[0.0, (25.6, 34.1376), (0.0, 68.2752), [46.4, -4.8]],
[0.0, (25.6, 62.9376), (0.0, 68.2752), [46.4, -4.8]],
[90.0, (0.0, 0.0), (-102.4, 0.0), [28.8, -39.4752]],
[90.0, (-4.8, 0.0), (-102.4, 0.0), [28.8, -39.4752]],
[90.0, (25.6, 34.1376), (-51.2, 0.0), [28.8, -39.4752]],
[90.0, (20.8, 34.1376), (-51.2, 0.0), [28.8, -39.4752]],
],
"AR-BRSTD": [
[0.0, (0.0, 0.0), (0.0, 68.2752), []],
[90.0, (0.0, 0.0), (-102.4, 68.2752), [68.2752, -68.2752]],
],
"AR-CONC": [
[50.0, (0.0, 0.0), (36.7237, -3.2129), [3.84, -42.24]],
[355.0, (0.0, 0.0), (-7.1041, 38.5122), [3.072, -33.792]],
[100.4514, (3.0603, -0.2677), (29.6197, 35.2993), [3.2635, -35.8985]],
[46.1842, (0.0, 10.24), (54.6428, -8.4746), [5.76, -63.36]],
[96.6356, (4.5536, 9.5338), (47.8547, 49.8749), [4.8952, -53.8477]],
[351.1842, (0.0, 10.24), (47.8547, 49.8749), [4.608, -50.688]],
[21.0, (5.12, 7.68), (30.5616, -20.6141), [3.84, -42.24]],
[326.0, (5.12, 7.68), (12.4577, 37.1277), [3.072, -33.792]],
[71.4514, (7.6668, 5.9622), (43.0194, 16.5136), [3.2635, -35.8985]],
[
37.5,
(0.0, 0.0),
(0.6226, 17.0442),
[0.0, -33.3824, 0.0, -34.304, 0.0, -33.92],
],
[
7.5,
(0.0, 0.0),
(13.4692, 20.1939),
[0.0, -19.5584, 0.0, -32.6144, 0.0, -12.928],
],
[
327.5,
(-11.4176, 0.0),
(27.3317, -1.1548),
[0.0, -12.8, 0.0, -39.936, 0.0, -52.992],
],
[
317.5,
(-16.5376, 0.0),
(29.8591, 5.1254),
[0.0, -16.64, 0.0, -26.5216, 0.0, -37.632],
],
],
"AR-HBONE": [
[45.0, (0.0, 0.0), (0.0, 28.9631), [61.44, -20.48]],
[135.0, (14.4815, 14.4815), (0.0, 28.9631), [61.44, -20.48]],
],
"AR-PARQ1": [
[90.0, (0.0, 0.0), (-61.44, 61.44), [61.44, -61.44]],
[90.0, (10.24, 0.0), (-61.44, 61.44), [61.44, -61.44]],
[90.0, (20.48, 0.0), (-61.44, 61.44), [61.44, -61.44]],
[90.0, (30.72, 0.0), (-61.44, 61.44), [61.44, -61.44]],
[90.0, (40.96, 0.0), (-61.44, 61.44), [61.44, -61.44]],
[90.0, (51.2, 0.0), (-61.44, 61.44), [61.44, -61.44]],
[90.0, (61.44, 0.0), (-61.44, 61.44), [61.44, -61.44]],
[0.0, (0.0, 61.44), (61.44, -61.44), [61.44, -61.44]],
[0.0, (0.0, 71.68), (61.44, -61.44), [61.44, -61.44]],
[0.0, (0.0, 81.92), (61.44, -61.44), [61.44, -61.44]],
[0.0, (0.0, 92.16), (61.44, -61.44), [61.44, -61.44]],
[0.0, (0.0, 102.4), (61.44, -61.44), [61.44, -61.44]],
[0.0, (0.0, 112.64), (61.44, -61.44), [61.44, -61.44]],
[0.0, (0.0, 122.88), (61.44, -61.44), [61.44, -61.44]],
],
"AR-RROOF": [
[0.0, (0.0, 0.0), (56.32, 25.6), [384.0, -51.2, 128.0, -25.6]],
[0.0, (34.048, 12.8), (-25.6, 34.048), [76.8, -8.448, 153.6, -19.2]],
[0.0, (12.8, 21.76), (133.12, 17.152), [204.8, -35.84, 102.4, -25.6]],
],
"AR-RSHKE": [
[0.0, (0.0, 0.0), (65.28, 30.72), [15.36, -12.8, 17.92, -7.68, 23.04, -10.24]],
[0.0, (15.36, 1.28), (65.28, 30.72), [12.8, -48.64, 10.24, -15.36]],
[0.0, (46.08, -1.92), (65.28, 30.72), [7.68, -79.36]],
[90.0, (0.0, 0.0), (-21.76, 30.72), [29.44, -93.44]],
[90.0, (15.36, 0.0), (-21.76, 30.72), [28.8, -94.08]],
[90.0, (28.16, 0.0), (-21.76, 30.72), [26.88, -96.0]],
[90.0, (46.08, -1.92), (-21.76, 30.72), [29.44, -93.44]],
[90.0, (53.76, -1.92), (-21.76, 30.72), [29.44, -93.44]],
[90.0, (76.8, 0.0), (-21.76, 30.72), [28.16, -94.72]],
],
"AR-SAND": [
[37.5, (0.0, 0.0), (-1.6126, 49.3267), [0.0, -38.912, 0.0, -43.52, 0.0, -41.6]],
[
7.5,
(0.0, 0.0),
(45.3063, 72.2469),
[0.0, -20.992, 0.0, -35.072, 0.0, -13.44],
],
[
327.5,
(-31.488, 0.0),
(79.722, 0.1449),
[0.0, -12.8, 0.0, -46.08, 0.0, -60.16],
],
[
317.5,
(-31.488, 0.0),
(76.9568, 22.4685),
[0.0, -6.4, 0.0, -30.208, 0.0, -34.56],
],
],
"BOX": [
[90.0, (0.0, 0.0), (-25.6, 0.0), []],
[90.0, (6.4, 0.0), (-25.6, 0.0), []],
[0.0, (0.0, 0.0), (0.0, 25.6), [-6.4, 6.4]],
[0.0, (0.0, 6.4), (0.0, 25.6), [-6.4, 6.4]],
[0.0, (0.0, 12.8), (0.0, 25.6), [6.4, -6.4]],
[0.0, (0.0, 19.2), (0.0, 25.6), [6.4, -6.4]],
[90.0, (12.8, 0.0), (-25.6, 0.0), [6.4, -6.4]],
[90.0, (19.2, 0.0), (-25.6, 0.0), [6.4, -6.4]],
],
"BRASS": [
[0.0, (0.0, 0.0), (0.0, 6.4), []],
[0.0, (0.0, 3.2), (0.0, 6.4), [3.2, -1.6]],
],
"BRICK": [
[0.0, (0.0, 0.0), (0.0, 6.4), []],
[90.0, (0.0, 0.0), (-12.8, 0.0), [6.4, -6.4]],
[90.0, (6.4, 0.0), (-12.8, 0.0), [-6.4, 6.4]],
],
"BRSTONE": [
[0.0, (0.0, 0.0), (0.0, 8.448), []],
[90.0, (23.04, 0.0), (-12.8, 8.448), [8.448, -8.448]],
[90.0, (20.48, 0.0), (-12.8, 8.448), [8.448, -8.448]],
[0.0, (23.04, 1.408), (12.8, 8.448), [-23.04, 2.56]],
[0.0, (23.04, 2.816), (12.8, 8.448), [-23.04, 2.56]],
[0.0, (23.04, 4.224), (12.8, 8.448), [-23.04, 2.56]],
[0.0, (23.04, 5.632), (12.8, 8.448), [-23.04, 2.56]],
[0.0, (23.04, 7.04), (12.8, 8.448), [-23.04, 2.56]],
],
"CLAY": [
[0.0, (0.0, 0.0), (0.0, 4.8), []],
[0.0, (0.0, 0.8), (0.0, 4.8), []],
[0.0, (0.0, 1.6), (0.0, 4.8), []],
[0.0, (0.0, 3.2), (0.0, 4.8), [4.8, -3.2]],
],
"CORK": [
[0.0, (0.0, 0.0), (0.0, 3.2), []],
[135.0, (1.6, -1.6), (-6.4, -6.4), [4.5255, -4.5255]],
[135.0, (2.4, -1.6), (-6.4, -6.4), [4.5255, -4.5255]],
[135.0, (3.2, -1.6), (-6.4, -6.4), [4.5255, -4.5255]],
],
"CROSS": [
[0.0, (0.0, 0.0), (6.4, 6.4), [3.2, -9.6]],
[90.0, (1.6, -1.6), (-6.4, 6.4), [3.2, -9.6]],
],
"DASH": [[0.0, (0.0, 0.0), (3.2, 3.2), [3.2, -3.2]]],
"DOLMIT": [
[0.0, (0.0, 0.0), (0.0, 6.4), []],
[45.0, (0.0, 0.0), (-12.8, 12.8), [9.051, -18.1019]],
],
"DOTS": [[0.0, (0.0, 0.0), (0.8, 1.6), [0.0, -1.6]]],
"EARTH": [
[0.0, (0.0, 0.0), (6.4, 6.4), [6.4, -6.4]],
[0.0, (0.0, 2.4), (6.4, 6.4), [6.4, -6.4]],
[0.0, (0.0, 4.8), (6.4, 6.4), [6.4, -6.4]],
[90.0, (0.8, 5.6), (-6.4, 6.4), [6.4, -6.4]],
[90.0, (3.2, 5.6), (-6.4, 6.4), [6.4, -6.4]],
[90.0, (5.6, 5.6), (-6.4, 6.4), [6.4, -6.4]],
],
"ESCHER": [
[60.0, (0.0, 0.0), (-30.72, -0.0), [28.16, -2.56]],
[180.0, (0.0, 0.0), (15.36, -26.6043), [28.16, -2.56]],
[300.0, (0.0, 0.0), (30.72, -0.0), [28.16, -2.56]],
[60.0, (2.56, 0.0), (-30.72, -0.0), [5.12, -25.6]],
[300.0, (2.56, 0.0), (30.72, -0.0), [5.12, -25.6]],
[60.0, (-1.28, 2.217), (-30.72, -0.0), [5.12, -25.6]],
[180.0, (-1.28, 2.217), (15.36, -26.6043), [5.12, -25.6]],
[300.0, (-1.28, -2.217), (30.72, -0.0), [5.12, -25.6]],
[180.0, (-1.28, -2.217), (15.36, -26.6043), [5.12, -25.6]],
[60.0, (-10.24, 0.0), (-30.72, -0.0), [5.12, -25.6]],
[300.0, (-10.24, 0.0), (30.72, -0.0), [5.12, -25.6]],
[60.0, (5.12, -8.8681), (-30.72, -0.0), [5.12, -25.6]],
[180.0, (5.12, -8.8681), (15.36, -26.6043), [5.12, -25.6]],
[300.0, (5.12, 8.8681), (30.72, -0.0), [5.12, -25.6]],
[180.0, (5.12, 8.8681), (15.36, -26.6043), [5.12, -25.6]],
[0.0, (5.12, 4.4341), (-15.36, 26.6043), [17.92, -12.8]],
[0.0, (5.12, -4.4341), (-15.36, 26.6043), [17.92, -12.8]],
[120.0, (1.28, 6.6511), (-30.72, 0.0), [17.92, -12.8]],
[120.0, (-6.4, 2.217), (-30.72, 0.0), [17.92, -12.8]],
[240.0, (-6.4, -2.217), (15.36, -26.6043), [17.92, -12.8]],
[240.0, (1.28, -6.6511), (15.36, -26.6043), [17.92, -12.8]],
],
"FLEX": [
[0.0, (0.0, 0.0), (0.0, 6.4), [6.4, -6.4]],
[45.0, (6.4, 0.0), (0.0, 6.4), [1.6, -5.851, 1.6, -9.051]],
],
"GOST_GLASS": [
[45.0, (0.0, 0.0), (21.7223, -0.0), [12.8, -17.92]],
[45.0, (5.4306, 0.0), (21.7223, -0.0), [5.12, -25.6]],
[45.0, (0.0, 5.4306), (21.7223, -0.0), [5.12, -25.6]],
],
"GOST_WOOD": [
[90.0, (0.0, 0.0), (30.72, -0.0), [51.2, -10.24]],
[90.0, (10.24, -10.24), (30.72, -0.0), [30.72, -7.68, 15.36, -7.68]],
[90.0, (20.48, -25.6), (30.72, -0.0), [51.2, -10.24]],
],
"GOST_GROUND": [
[45.0, (0.0, 0.0), (72.4077, -0.0), [102.4]],
[45.0, (15.36, 0.0), (72.4077, -0.0), [102.4]],
[45.0, (30.72, 0.0), (72.4077, -0.0), [102.4]],
],
"GRASS": [
[90.0, (0.0, 0.0), (-18.1019, 18.1019), [4.8, -31.4039]],
[45.0, (0.0, 0.0), (-18.1019, 18.1019), [4.8, -20.8]],
[135.0, (0.0, 0.0), (-18.1019, -18.1019), [4.8, -20.8]],
],
"GRATE": [[0.0, (0.0, 0.0), (0.0, 0.8), []], [90.0, (0.0, 0.0), (-3.2, 0.0), []]],
"GRAVEL": [
[228.0128, (18.432, 25.6), (-204.8, -230.4), [3.4441, -340.9687]],
[184.9697, (16.128, 23.04), (307.2, 25.6), [5.9102, -585.1117]],
[132.5104, (10.24, 22.528), (256.0, -281.6), [4.1674, -412.5704]],
[267.2737, (0.256, 16.128), (25.6, 512.0), [5.3821, -532.8271]],
[292.8337, (0.0, 10.752), (-128.0, 307.2), [5.2776, -522.48]],
[357.2737, (2.048, 5.888), (-512.0, 25.6), [5.3821, -532.8271]],
[37.6942, (7.424, 5.632), (-332.8, -256.0), [7.1175, -704.6361]],
[72.2553, (13.056, 9.984), (179.2, 563.2), [6.7197, -665.2498]],
[121.4296, (15.104, 16.384), (-204.8, 332.8), [5.4003, -534.6323]],
[175.2364, (12.288, 20.992), (281.6, -25.6), [6.1653, -302.0995]],
[222.3974, (6.144, 21.504), (-307.2, -281.6), [7.9731, -789.3344]],
[138.8141, (25.6, 15.872), (-179.2, 153.6), [2.7213, -269.4104]],
[171.4692, (23.552, 17.664), (332.8, -51.2), [5.1773, -512.5507]],
[225.0, (18.432, 18.432), (-0.0, -25.6), [3.6204, -32.5835]],
[203.1986, (16.64, 21.504), (128.0, 51.2), [1.9496, -193.0141]],
[291.8014, (14.848, 20.736), (-25.6, 76.8), [2.7572, -135.103]],
[30.9638, (15.872, 18.176), (76.8, 51.2), [4.4782, -144.7942]],
[161.5651, (19.712, 20.48), (51.2, -25.6), [3.2382, -77.7161]],
[16.3895, (0.0, 20.736), (256.0, 76.8), [4.5363, -449.0968]],
[70.3462, (4.352, 22.016), (-102.4, -281.6), [3.8057, -376.7656]],
[293.1986, (19.712, 25.6), (-51.2, 128.0), [3.8993, -191.0645]],
[343.6105, (21.248, 22.016), (-256.0, 76.8), [4.5363, -449.0968]],
[339.444, (0.0, 4.864), (-128.0, 51.2), [4.3745, -214.352]],
[294.7751, (4.096, 3.328), (-128.0, 281.6), [3.6654, -362.8709]],
[66.8014, (19.968, 0.0), (51.2, 128.0), [3.8993, -191.0645]],
[17.354, (21.504, 3.584), (-332.8, -102.4), [4.2914, -424.8428]],
[69.444, (7.424, 0.0), (-51.2, -128.0), [2.1873, -216.5392]],
[101.3099, (18.432, 0.0), (-25.6, 102.4), [1.3053, -129.2296]],
[165.9638, (18.176, 1.28), (76.8, -25.6), [5.2776, -100.2739]],
[186.009, (13.056, 2.56), (256.0, 25.6), [4.8909, -484.1964]],
[303.6901, (15.872, 15.872), (-25.6, 51.2), [3.6921, -88.61]],
[353.1572, (17.92, 12.8), (435.2, -51.2), [6.4459, -638.1456]],
[60.9454, (24.32, 12.032), (-102.4, -179.2), [2.6357, -260.9325]],
[90.0, (25.6, 14.336), (-25.6, 25.6), [1.536, -24.064]],
[120.2564, (12.544, 3.328), (102.4, -179.2), [3.5565, -352.0901]],
[48.0128, (10.752, 6.4), (204.8, 230.4), [6.8882, -337.5245]],
[0.0, (15.36, 11.52), (25.6, 25.6), [6.656, -18.944]],
[325.3048, (22.016, 11.52), (256.0, -179.2), [4.0477, -400.7238]],
[254.0546, (25.344, 9.216), (-25.6, -102.4), [3.7274, -182.6434]],
[207.646, (24.32, 5.632), (-486.4, -256.0), [6.0689, -600.8185]],
[175.4261, (18.944, 2.816), (-332.8, 25.6), [6.4205, -635.6243]],
],
"HEX": [
[0.0, (0.0, 0.0), (0.0, 5.5426), [3.2, -6.4]],
[120.0, (0.0, 0.0), (-4.8, -2.7713), [3.2, -6.4]],
[60.0, (3.2, 0.0), (-4.8, 2.7713), [3.2, -6.4]],
],
"HONEY": [
[0.0, (0.0, 0.0), (4.8, 2.7713), [3.2, -6.4]],
[120.0, (0.0, 0.0), (-4.8, 2.7713), [3.2, -6.4]],
[60.0, (0.0, 0.0), (0.0, 5.5426), [-6.4, 3.2]],
],
"HOUND": [
[0.0, (0.0, 0.0), (6.4, 1.6), [25.6, -12.8]],
[90.0, (0.0, 0.0), (-1.6, -6.4), [25.6, -12.8]],
],
"INSUL": [
[0.0, (0.0, 0.0), (0.0, 9.6), []],
[0.0, (0.0, 3.2), (0.0, 9.6), [3.2, -3.2]],
[0.0, (0.0, 6.4), (0.0, 9.6), [3.2, -3.2]],
],
"LINE": [[0.0, (0.0, 0.0), (0.0, 3.2), []]],
"MUDST": [[0.0, (0.0, 0.0), (12.8, 6.4), [6.4, -6.4, 0.0, -6.4, 0.0, -6.4]]],
"NET": [[0.0, (0.0, 0.0), (0.0, 3.2), []], [90.0, (0.0, 0.0), (-3.2, 0.0), []]],
"NET3": [
[0.0, (0.0, 0.0), (0.0, 3.2), []],
[60.0, (0.0, 0.0), (-2.7713, 1.6), []],
[120.0, (0.0, 0.0), (-2.7713, -1.6), []],
],
"PLAST": [
[0.0, (0.0, 0.0), (0.0, 6.4), []],
[0.0, (0.0, 0.8), (0.0, 6.4), []],
[0.0, (0.0, 1.6), (0.0, 6.4), []],
],
"PLASTI": [
[0.0, (0.0, 0.0), (0.0, 6.4), []],
[0.0, (0.0, 0.8), (0.0, 6.4), []],
[0.0, (0.0, 1.6), (0.0, 6.4), []],
[0.0, (0.0, 4.0), (0.0, 6.4), []],
],
"SACNCR": [
[45.0, (0.0, 0.0), (-1.6971, 1.6971), []],
[45.0, (1.6971, 0.0), (-1.6971, 1.6971), [0.0, -2.4]],
],
"SQUARE": [
[0.0, (0.0, 0.0), (0.0, 3.2), [3.2, -3.2]],
[90.0, (0.0, 0.0), (-3.2, 0.0), [3.2, -3.2]],
],
"STARS": [
[0.0, (0.0, 0.0), (0.0, 5.5426), [3.2, -3.2]],
[60.0, (0.0, 0.0), (-4.8, 2.7713), [3.2, -3.2]],
[120.0, (1.6, 2.7713), (-4.8, -2.7713), [3.2, -3.2]],
],
"STEEL": [
[45.0, (0.0, 0.0), (-2.2627, 2.2627), []],
[45.0, (0.0, 1.6), (-2.2627, 2.2627), []],
],
"SWAMP": [
[0.0, (0.0, 0.0), (12.8, 22.1703), [3.2, -22.4]],
[90.0, (1.6, 0.0), (-12.8, 22.1703), [1.6, -42.7405]],
[90.0, (2.0, 0.0), (-12.8, 22.1703), [1.28, -43.0605]],
[90.0, (1.2, 0.0), (-12.8, 22.1703), [1.28, -43.0605]],
[60.0, (2.4, 0.0), (-12.8, 22.1703), [1.024, -24.576]],
[120.0, (0.8, 0.0), (-25.6, 0.0), [1.024, -24.576]],
],
"TRANS": [
[0.0, (0.0, 0.0), (0.0, 6.4), []],
[0.0, (0.0, 3.2), (0.0, 6.4), [3.2, -3.2]],
],
"TRIANG": [
[60.0, (0.0, 0.0), (-4.8, 8.3138), [4.8, -4.8]],
[120.0, (0.0, 0.0), (-9.6, 0.0), [4.8, -4.8]],
[0.0, (-2.4, 4.1569), (4.8, 8.3138), [4.8, -4.8]],
],
"ZIGZAG": [
[0.0, (0.0, 0.0), (3.2, 3.2), [3.2, -3.2]],
[90.0, (3.2, 0.0), (-3.2, 3.2), [3.2, -3.2]],
],
}
def load(old_pattern=None):
    from ezdxf.options import options
    if old_pattern is not None:
        use_old = bool(old_pattern)
        options.use_old_predefined_pattern_scaling = use_old
    else:
        use_old = options.use_old_predefined_pattern_scaling
    return PATTERN_OLD if use_old else PATTERN_NEW


def scale_pattern(pattern, factor: float = 1, ndigits=4):
    def _scale(iterable):
        return [round(i * factor, ndigits) for i in iterable]

    def _scale_line(line):
        angle, base_point, offset, dash_length_items = line
        return [
            round(angle, ndigits),
            tuple(_scale(base_point)),
            tuple(_scale(offset)),
            _scale(dash_length_items)
        ]

    return [_scale_line(line) for line in pattern]


def scale_all(pattern: dict, factor: float = 1, ndigits=4):
    return {name: scale_pattern(p, factor, ndigits) for name, p in pattern.items()}
| 44.517943 | 88 | 0.365989 | 7,796 | 37,217 | 1.738199 | 0.091329 | 0.279979 | 0.268984 | 0.253856 | 0.653384 | 0.612427 | 0.567486 | 0.512213 | 0.41805 | 0.369567 | 0 | 0.440927 | 0.265873 | 37,217 | 835 | 89 | 44.571257 | 0.055047 | 0.003735 | 0 | 0.343902 | 0 | 0 | 0.028539 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006098 | false | 0 | 0.00122 | 0.002439 | 0.013415 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
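The record above is ezdxf's table of predefined hatch patterns: each entry maps a pattern name to a list of line families of the form [angle, base_point, offset, dash_lengths], and PATTERN_NEW appears to be PATTERN_OLD rescaled by roughly 25.6 (the values agree up to the 4-digit rounding of the stored constants). A minimal usage sketch follows; it assumes the file is importable as ezdxf.tools.pattern and uses only names defined in the record:

# Hedged usage sketch -- not part of the dataset record.
# Assumes the module shown above is importable as ezdxf.tools.pattern.
from ezdxf.tools.pattern import PATTERN_OLD, PATTERN_NEW, scale_pattern, scale_all, load

# Each pattern line is [angle, base_point, offset, dash_lengths].
angle, base_point, offset, dashes = PATTERN_OLD["ANSI31"][0]

# Rescale one pattern; 25.6 is the factor that (approximately) maps the old
# table onto the new one: 0.0884 * 25.6 = 2.263 vs. 2.2627 stored in PATTERN_NEW.
scaled_ansi31 = scale_pattern(PATTERN_OLD["ANSI31"], factor=25.6)

# Rescale every predefined pattern at once (returns a new dict).
scaled_all = scale_all(PATTERN_OLD, factor=25.6)

# load() picks a table: an explicit argument wins, otherwise the ezdxf option
# flag use_old_predefined_pattern_scaling decides (requires ezdxf installed).
old_table = load(old_pattern=True)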
30219b3fff57bfd90f74d60a593c4bc95ad04dd6 | 30 | py | Python | srfnef/postprocess/imgq/index_correction/__init__.py | twj2417/srf | 63365cfd75199d70eea2273214a4fa580a9fdf2a | [
"Apache-2.0"
] | 81 | 2020-11-16T00:31:19.000Z | 2022-03-16T02:49:45.000Z | srfnef/postprocess/imgq/index_correction/__init__.py | twj2417/srf | 63365cfd75199d70eea2273214a4fa580a9fdf2a | [
"Apache-2.0"
] | 7 | 2020-11-25T13:30:17.000Z | 2021-09-16T03:38:45.000Z | srfnef/postprocess/imgq/index_correction/__init__.py | twj2417/srf | 63365cfd75199d70eea2273214a4fa580a9fdf2a | [
"Apache-2.0"
] | 8 | 2020-12-09T11:18:40.000Z | 2021-07-26T06:05:50.000Z | from .accuracy import accuracy | 30 | 30 | 0.866667 | 4 | 30 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
064b4319173009e11b92015c2da63aefe00d2047 | 6,214 | py | Python | pony/orm/tests/test_time_parsing.py | ProgHaj/pony | 52720af1728ab2931364be8615e18ad8714a7c9e | [
"Apache-2.0"
] | 2,628 | 2015-01-02T17:55:28.000Z | 2022-03-31T10:36:42.000Z | pony/orm/tests/test_time_parsing.py | ProgHaj/pony | 52720af1728ab2931364be8615e18ad8714a7c9e | [
"Apache-2.0"
] | 525 | 2015-01-03T20:30:08.000Z | 2022-03-23T12:30:01.000Z | pony/orm/tests/test_time_parsing.py | ProgHaj/pony | 52720af1728ab2931364be8615e18ad8714a7c9e | [
"Apache-2.0"
] | 256 | 2015-01-02T17:55:31.000Z | 2022-03-20T17:01:37.000Z | from __future__ import absolute_import, print_function, division
import unittest
from datetime import datetime, date, time
from pony.orm.tests.testutils import raises_exception
from pony.converting import str2time
class TestTimeParsing(unittest.TestCase):
    def test_time_1(self):
        self.assertEqual(str2time('1:2'), time(1, 2))
        self.assertEqual(str2time('01:02'), time(1, 2))
        self.assertEqual(str2time('1:2:3'), time(1, 2, 3))
        self.assertEqual(str2time('01:02:03'), time(1, 2, 3))
        self.assertEqual(str2time('1:2:3.4'), time(1, 2, 3, 400000))
        self.assertEqual(str2time('01:02:03.4'), time(1, 2, 3, 400000))

    @raises_exception(ValueError, 'Unrecognized time format')
    def test_time_2(self):
        str2time('1:')

    @raises_exception(ValueError, 'Unrecognized time format')
    def test_time_3(self):
        str2time('1: 2')

    @raises_exception(ValueError, 'Unrecognized time format')
    def test_time_4(self):
        str2time('1:2:')

    @raises_exception(ValueError, 'Unrecognized time format')
    def test_time_5(self):
        str2time('1:2:3:')

    @raises_exception(ValueError, 'Unrecognized time format')
    def test_time_6(self):
        str2time('1:2:3.1234567')

    def test_time_7(self):
        self.assertEqual(str2time('1:33 am'), time(1, 33))
        self.assertEqual(str2time('2:33 am'), time(2, 33))
        self.assertEqual(str2time('11:33 am'), time(11, 33))
        self.assertEqual(str2time('12:33 am'), time(0, 33))

    def test_time_8(self):
        self.assertEqual(str2time('1:33 pm'), time(13, 33))
        self.assertEqual(str2time('2:33 pm'), time(14, 33))
        self.assertEqual(str2time('11:33 pm'), time(23, 33))
        self.assertEqual(str2time('12:33 pm'), time(12, 33))

    def test_time_9(self):
        self.assertEqual(str2time('1:33am'), time(1, 33))
        self.assertEqual(str2time('1:33 am'), time(1, 33))
        self.assertEqual(str2time('1:33 AM'), time(1, 33))
        self.assertEqual(str2time('1:33 a.m'), time(1, 33))
        self.assertEqual(str2time('1:33 A.M'), time(1, 33))
        self.assertEqual(str2time('1:33 a.m.'), time(1, 33))
        self.assertEqual(str2time('1:33 A.M.'), time(1, 33))

    def test_time_10(self):
        self.assertEqual(str2time('1:33pm'), time(13, 33))
        self.assertEqual(str2time('1:33 pm'), time(13, 33))
        self.assertEqual(str2time('1:33 PM'), time(13, 33))
        self.assertEqual(str2time('1:33 p.m'), time(13, 33))
        self.assertEqual(str2time('1:33 P.M'), time(13, 33))
        self.assertEqual(str2time('1:33 p.m.'), time(13, 33))
        self.assertEqual(str2time('1:33 P.M.'), time(13, 33))

    def test_time_11(self):
        self.assertEqual(str2time('12:34:56.789'), time(12, 34, 56, 789000))
        self.assertEqual(str2time('12.34.56.789'), time(12, 34, 56, 789000))
        self.assertEqual(str2time('12 34 56.789'), time(12, 34, 56, 789000))
        self.assertEqual(str2time('12h34m56.789'), time(12, 34, 56, 789000))
        self.assertEqual(str2time('12h 34m 56.789'), time(12, 34, 56, 789000))
        self.assertEqual(str2time('12 h 34 m 56.789'), time(12, 34, 56, 789000))
        self.assertEqual(str2time('12h 34m 56.789s'), time(12, 34, 56, 789000))
        self.assertEqual(str2time('12 h 34 m 56.789 s'), time(12, 34, 56, 789000))
        self.assertEqual(str2time('12h 34min 56.789'), time(12, 34, 56, 789000))
        self.assertEqual(str2time('12h 34min 56.789sec'), time(12, 34, 56, 789000))
        self.assertEqual(str2time('12h 34 min 56.789 sec'), time(12, 34, 56, 789000))

    def test_time_12(self):
        self.assertEqual(str2time('12:34:56.789 a.m.'), time(0, 34, 56, 789000))
        self.assertEqual(str2time('12.34.56.789 a.m.'), time(0, 34, 56, 789000))
        self.assertEqual(str2time('12 34 56.789 a.m.'), time(0, 34, 56, 789000))
        self.assertEqual(str2time('12h34m56.789 a.m.'), time(0, 34, 56, 789000))
        self.assertEqual(str2time('12h 34m 56.789 a.m.'), time(0, 34, 56, 789000))
        self.assertEqual(str2time('12 h 34 m 56.789 a.m.'), time(0, 34, 56, 789000))
        self.assertEqual(str2time('12h 34m 56.789s a.m.'), time(0, 34, 56, 789000))
        self.assertEqual(str2time('12 h 34 m 56.789 s a.m.'), time(0, 34, 56, 789000))
        self.assertEqual(str2time('12h 34min 56.789 a.m.'), time(0, 34, 56, 789000))
        self.assertEqual(str2time('12h 34min 56.789sec a.m.'), time(0, 34, 56, 789000))
        self.assertEqual(str2time('12h 34 min 56.789 sec a.m.'), time(0, 34, 56, 789000))

    def test_time_13(self):
        self.assertEqual(str2time('12:34'), time(12, 34))
        self.assertEqual(str2time('12.34'), time(12, 34))
        self.assertEqual(str2time('12 34'), time(12, 34))
        self.assertEqual(str2time('12h34'), time(12, 34))
        self.assertEqual(str2time('12h34m'), time(12, 34))
        self.assertEqual(str2time('12h 34m'), time(12, 34))
        self.assertEqual(str2time('12h34min'), time(12, 34))
        self.assertEqual(str2time('12h 34min'), time(12, 34))
        self.assertEqual(str2time('12 h 34 m'), time(12, 34))
        self.assertEqual(str2time('12 h 34 min'), time(12, 34))
        self.assertEqual(str2time('12u34'), time(12, 34))  # Belgium
        self.assertEqual(str2time('12h'), time(12))
        self.assertEqual(str2time('12u'), time(12))

    def test_time_14(self):
        self.assertEqual(str2time('12:34 am'), time(0, 34))
        self.assertEqual(str2time('12.34 am'), time(0, 34))
        self.assertEqual(str2time('12 34 am'), time(0, 34))
        self.assertEqual(str2time('12h34 am'), time(0, 34))
        self.assertEqual(str2time('12h34m am'), time(0, 34))
        self.assertEqual(str2time('12h 34m am'), time(0, 34))
        self.assertEqual(str2time('12h34min am'), time(0, 34))
        self.assertEqual(str2time('12h 34min am'), time(0, 34))
        self.assertEqual(str2time('12 h 34 m am'), time(0, 34))
        self.assertEqual(str2time('12 h 34 min am'), time(0, 34))
        self.assertEqual(str2time('12u34 am'), time(0, 34))
        self.assertEqual(str2time('12h am'), time(0))
        self.assertEqual(str2time('12u am'), time(0))


if __name__ == '__main__':
    unittest.main()
| 48.546875 | 89 | 0.626328 | 929 | 6,214 | 4.137783 | 0.087191 | 0.296566 | 0.454735 | 0.14308 | 0.862643 | 0.817638 | 0.712019 | 0.649324 | 0.613684 | 0.548127 | 0 | 0.190238 | 0.195526 | 6,214 | 127 | 90 | 48.929134 | 0.578716 | 0.001126 | 0 | 0.342593 | 0 | 0 | 0.153586 | 0 | 0 | 0 | 0 | 0 | 0.703704 | 1 | 0.12963 | false | 0 | 0.046296 | 0 | 0.185185 | 0.009259 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
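The tests above pin down the time formats accepted by pony.converting.str2time. A standalone sketch repeating a few of those parses outside the unittest harness (assuming pony is installed; expected values are taken directly from the assertions above):

# Hedged sketch -- mirrors assertions from the test record above.
from datetime import time
from pony.converting import str2time

assert str2time('1:2:3.4') == time(1, 2, 3, 400000)                 # fractional seconds
assert str2time('12:33 am') == time(0, 33)                          # 12 am maps to hour 0
assert str2time('12h 34min 56.789sec') == time(12, 34, 56, 789000)  # h/min/sec markers
assert str2time('12u34') == time(12, 34)                            # Belgian 'u' notation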