hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
21d1c69858e63fd0fa03a99fca3f0a286e9d9502 | 45 | py | Python | shenfun/laguerre/__init__.py | jaisw7/shenfun | 7482beb5b35580bc45f72704b69343cc6fc1d773 | [
"BSD-2-Clause"
] | 138 | 2017-06-17T13:30:27.000Z | 2022-03-20T02:33:47.000Z | shenfun/laguerre/__init__.py | jaisw7/shenfun | 7482beb5b35580bc45f72704b69343cc6fc1d773 | [
"BSD-2-Clause"
] | 73 | 2017-05-16T06:53:04.000Z | 2022-02-04T10:40:44.000Z | shenfun/laguerre/__init__.py | jaisw7/shenfun | 7482beb5b35580bc45f72704b69343cc6fc1d773 | [
"BSD-2-Clause"
] | 38 | 2018-01-31T14:37:01.000Z | 2022-03-31T15:07:27.000Z | from .bases import *
from .matrices import *
| 15 | 23 | 0.733333 | 6 | 45 | 5.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 45 | 2 | 24 | 22.5 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
21dbaafa9584719c2eb09101ae445c14289a272e | 155 | py | Python | archive/p/python/baklava.py | asharma13524/sample-programs | 1e15059b92144991a2983112c0d8fe14111fd0a8 | [
"MIT"
] | 422 | 2018-08-14T11:57:47.000Z | 2022-03-07T23:54:34.000Z | archive/p/python/baklava.py | asharma13524/sample-programs | 1e15059b92144991a2983112c0d8fe14111fd0a8 | [
"MIT"
] | 1,498 | 2018-08-10T19:18:52.000Z | 2021-12-14T03:02:00.000Z | archive/p/python/baklava.py | asharma13524/sample-programs | 1e15059b92144991a2983112c0d8fe14111fd0a8 | [
"MIT"
] | 713 | 2018-08-12T21:37:49.000Z | 2022-03-02T22:57:21.000Z | for i in range(0, 10, 1):
print((" " * (10 - i)) + ("*" * (i * 2 + 1)))
for i in range(10, -1, -1):
print((" " * (10 - i)) + ("*" * (i * 2 + 1)))
| 25.833333 | 49 | 0.335484 | 26 | 155 | 2 | 0.346154 | 0.153846 | 0.230769 | 0.423077 | 0.461538 | 0.461538 | 0.461538 | 0 | 0 | 0 | 0 | 0.150943 | 0.316129 | 155 | 5 | 50 | 31 | 0.339623 | 0 | 0 | 0.5 | 0 | 0 | 0.025806 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
df5511bc676c3d380f605acfd03401177919aa4b | 1,306 | py | Python | plugins/helpers/__init__.py | rmmoreira/udacity-dend-capstone | 600bf61bc59e686a4d3b0b6f371681c084759663 | [
"MIT"
] | 1 | 2021-02-12T19:10:03.000Z | 2021-02-12T19:10:03.000Z | plugins/helpers/__init__.py | rmmoreira/udacity-dend-capstone | 600bf61bc59e686a4d3b0b6f371681c084759663 | [
"MIT"
] | 314 | 2020-05-27T02:59:59.000Z | 2021-08-03T02:43:42.000Z | plugins/helpers/__init__.py | rmmoreira/udacity-dend-capstone | 600bf61bc59e686a4d3b0b6f371681c084759663 | [
"MIT"
] | 3 | 2020-05-31T13:08:33.000Z | 2021-07-06T23:00:36.000Z | from helpers.sql_queries import (immigration_table,
temperature_table,
airport_table,
demographics_table,
dim_airport_table,
dim_demographic_table,
dim_visitor_table,
fact_city_data_table,
fact_city_table_insert,
dim_airport_table_insert,
dim_demographic_table_insert,
dim_visitor_table_insert
)
from helpers.table_dictionaries import (staging_tables,
fact_dimension_tables,
fact_dimension_insert)
__all__ = [
'staging_tables',
'fact_dimension_tables',
'immigration_table',
'temperature_table',
'airport_table',
'demographics_table',
'dim_airport_table',
'dim_demographic_table',
'dim_visitor_table',
'fact_city_data_table',
'fact_city_table_insert',
'dim_airport_table_insert',
'dim_demographic_table_insert',
'dim_visitor_table_insert',
'fact_dimension_insert'
]
| 36.277778 | 63 | 0.503828 | 101 | 1,306 | 5.861386 | 0.217822 | 0.148649 | 0.141892 | 0.108108 | 0.827703 | 0.719595 | 0.719595 | 0.719595 | 0.719595 | 0.719595 | 0 | 0 | 0.445636 | 1,306 | 35 | 64 | 37.314286 | 0.81768 | 0 | 0 | 0 | 0 | 0 | 0.225115 | 0.123277 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.060606 | 0 | 0.060606 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
df7541ae2a5203a47616b3833f3471ad2b030cc8 | 13,810 | py | Python | ultilities.py | xuan0802/state-load-placement | 0a5ff41006c266001c6938dff64e4251d4f77f9b | [
"MIT"
] | null | null | null | ultilities.py | xuan0802/state-load-placement | 0a5ff41006c266001c6938dff64e4251d4f77f9b | [
"MIT"
] | null | null | null | ultilities.py | xuan0802/state-load-placement | 0a5ff41006c266001c6938dff64e4251d4f77f9b | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import ujson
from contants import *
import numpy as np
from ast import literal_eval
# product function
def prod(x):
product = 1
for i in x:
product = product * i
return product
def draw_pareto_front_ue_num():
# make figures
plt.figure()
for n_ in ue_num_list:
# read data
filename = "data/pareto_n_" + str(n_) + ".js"
f = open(filename)
perf_pareto_list = ujson.load(f)
# make plots
x_data = list()
y_data = list()
perf_pareto_list.sort(key=lambda x: x[0])
for i in perf_pareto_list:
x_data.append(i[0])
y_data.append(i[1])
plt.plot(x_data, y_data, marker='D', markerfacecolor=line_style_map_ue[n_]['marker_color'], markersize=10,
linestyle='dashed', color='olive', label=line_style_map_ue[n_]['label'])
for a in annotate_map_ue[n_].keys():
plt.annotate(annotate_map_ue[n_][a], a)
plt.xlabel('State Transfer Cost', fontsize='x-large')
plt.ylabel('Traffic load', fontsize='x-large')
plt.legend()
plt.show()
def draw_pareto_front_handover_frequency():
# make figures
plt.figure()
for h_ in handover_list:
# read data
filename = "data/pareto_h_" + str(h_) + ".js"
f = open(filename)
perf_pareto_list = ujson.load(f)
# make plots
x_data = list()
y_data = list()
perf_pareto_list.sort(key=lambda x: x[0])
for i in perf_pareto_list:
x_data.append(i[0])
y_data.append(i[1])
plt.plot(x_data, y_data, marker='D', markerfacecolor=line_style_map_handover[h_]['marker_color'], markersize=10,
linestyle='dashed', color='olive', label=line_style_map_handover[h_]['label'])
for a in annotate_map_handover[h_].keys():
plt.annotate(annotate_map_handover[h_][a], a)
plt.xlabel('State Transfer Cost', fontsize='x-large')
plt.ylabel('Traffic load', fontsize='x-large')
plt.legend()
plt.show()
def draw_pareto_front_request():
# make figures
plt.figure()
for u_ in ue_num_list:
# read data
filename = "data/pareto_u_" + str(u_) + ".js"
f = open(filename)
perf_pareto_list = ujson.load(f)
# make plots
x_data = list()
y_data = list()
perf_pareto_list.sort(key=lambda x: x[0])
for i in perf_pareto_list:
x_data.append(i[0])
y_data.append(i[1])
plt.plot(x_data, y_data, marker='D', markerfacecolor=line_style_map_request[u_]['marker_color'], markersize=10,
linestyle='dashed', color='olive', label=line_style_map_request[u_]['label'])
for a in annotate_map_request[u_].keys():
plt.annotate(annotate_map_request[u_][a], a)
plt.xlabel('State Transfer Cost', fontsize='x-large')
plt.ylabel('Traffic load', fontsize='x-large')
plt.legend()
plt.show()
def draw_running_time():
po = dict()
po['label'] = 'PO with w=0.5'
po['running_time'] = [0.882800817489624, 1.1198995113372803, 1.508939266204834]
ost = dict()
ost['label'] = 'OST'
ost['running_time'] = [0.9020464420318604, 1.149639368057251, 2.6382675170898438]
otl = dict()
otl['label'] = 'OTL'
otl['running_time'] = [0.09352946281433105, 0.12285518646240234, 0.1560497283935547]
apo = dict()
apo['label'] = 'APO'
apo['running_time'] = [42.97541284561157, 45.240522384643555, 69.9119827747345]
algorithms = list()
algorithms.append(po)
algorithms.append(ost)
algorithms.append(otl)
algorithms.append(apo)
xtick = ['M=10', 'M=11', 'M=12']
fig, ax = plt.subplots()
color_list = {'PO with w=0.5':'blue', 'OST':'red', 'OTL':'green', 'APO':'olive'}
index = np.arange(len(xtick))
bar_width = 0.2
opacity = 0.8
i = 0
for algo in algorithms:
ax.bar(index + i * bar_width, algo['running_time'],
bar_width, alpha=opacity, color=color_list[algo['label']], label=algo['label'])
i = i + 1
ax.set_xlabel('Number of cloud centers', fontsize='x-large')
ax.set_ylabel('Running time (s)', fontsize='x-large')
ax.set_yscale('log')
ax.set_xticks(index + bar_width)
ax.set_xticklabels(xtick)
ax.legend(fontsize='large', loc="upper left")
fig.tight_layout()
plt.show()
def draw_performance_results_handover():
handover_list = [0.5] + [5*x for x in range(20) if x != 0]
OST = dict()
OTL = dict()
PO = dict()
APO = dict()
OST['state'] = []
OTL['state'] = []
PO['state'] = []
OST['load'] = []
OTL['load'] = []
PO['load'] = []
OST['num_func'] = []
OTL['num_func'] = []
PO['num_func'] = []
for h in handover_list:
filename = "data/pareto_" + 'h' + "_" + str(h) + ".js"
f = open(filename)
pareto_optimal_points = ujson.load(f)
pareto_optimal_points.sort(key=lambda x: x[0])
OST['state'].append(pareto_optimal_points[0][0])
OST['load'].append(pareto_optimal_points[0][1])
OST['num_func'].append(pareto_optimal_points[0][2])
OTL['state'].append(pareto_optimal_points[len(pareto_optimal_points)-1][0])
OTL['load'].append(pareto_optimal_points[len(pareto_optimal_points)-1][1])
OTL['num_func'].append(pareto_optimal_points[len(pareto_optimal_points)-1][2])
filename = "data/pareto_weight_" + 'h' + "_" + str(h) + ".js"
f = open(filename)
pareto_optimal_weight_map = ujson.load(f)
for point in pareto_optimal_weight_map.keys():
if 0.5 in pareto_optimal_weight_map[point]:
p_ = literal_eval(point)
PO['state'].append(p_[0])
PO['load'].append(p_[1])
PO['num_func'].append(p_[2])
break
print('--------------------------', h, '-----------')
print('best state', pareto_optimal_points[0][0])
print('worst state', pareto_optimal_points[len(pareto_optimal_points)-1][0])
print('best load', pareto_optimal_points[len(pareto_optimal_points)-1][1])
print('worst load', pareto_optimal_points[0][1])
print('PO state', p_[0])
print('PO load', p_[1])
APO['state'] = [32.0, 320.0, 640.0, 960.0, 1280.0, 1600.0, 1920.0, 2240.0, 2560.0, 2880.0, 3200.0, 3520.0,
3840.0, 4160.0, 4480.0, 4800.0, 5120.0, 5440.0, 5760.0, 6080.0]
APO['load'] = [132000.0, 132000.0, 132000.0, 132000.0, 132000.0, 132000.0, 132000.0, 132000.0,
132000.0, 132000.0, 132000.0, 132000.0, 132000.0, 132000.0, 132000.0, 132000.0,
132000.0, 132000.0, 132000.0, 165000.0]
APO['num_func'] = [3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0,
3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0]
figures = ['state', 'load', 'num_func']
ylabel_map = {'state': 'State Transfer Cost', 'load':'Traffic Load', 'num_func':'Number of StateMF sets'}
for fig in figures:
plt.plot(handover_list, OST[fig], color='red', marker='*', linestyle='solid', label='OST')
plt.plot(handover_list, OTL[fig], color='green', marker='o', linestyle='solid', label='OTL')
plt.plot(handover_list, PO[fig], color='blue', marker='^', linestyle='solid', label='PO with w=0.5')
plt.plot(handover_list, APO[fig], color='olive', marker='x', linestyle='solid', label='APO')
plt.xlabel('Handover frequency', fontsize='x-large')
plt.ylabel(ylabel_map[fig], fontsize='x-large')
plt.legend()
plt.show()
def draw_performance_results_request():
request_list = [x for x in range(10)]
OST = dict()
OTL = dict()
PO = dict()
APO = dict()
OST['state'] = []
OTL['state'] = []
PO['state'] = []
OST['load'] = []
OTL['load'] = []
PO['load'] = []
OST['num_func'] = []
OTL['num_func'] = []
PO['num_func'] = []
for u in request_list:
filename = "data/pareto_" + 'u' + "_" + str(u) + ".js"
f = open(filename)
pareto_optimal_points = ujson.load(f)
pareto_optimal_points.sort(key=lambda x: x[0])
OST['state'].append(pareto_optimal_points[0][0])
OST['load'].append(pareto_optimal_points[0][1])
OST['num_func'].append(pareto_optimal_points[0][2])
OTL['state'].append(pareto_optimal_points[len(pareto_optimal_points)-1][0])
OTL['load'].append(pareto_optimal_points[len(pareto_optimal_points)-1][1])
OTL['num_func'].append(pareto_optimal_points[len(pareto_optimal_points)-1][2])
filename = "data/pareto_weight_" + 'u' + "_" + str(u) + ".js"
f = open(filename)
pareto_optimal_weight_map = ujson.load(f)
for point in pareto_optimal_weight_map.keys():
if 0.5 in pareto_optimal_weight_map[point]:
p_ = literal_eval(point)
PO['state'].append(p_[0])
PO['load'].append(p_[1])
PO['num_func'].append(p_[2])
break
print('--------------------------', u, '-----------')
print('best state', pareto_optimal_points[0][0])
print('worst state', pareto_optimal_points[len(pareto_optimal_points)-1][0])
print('best load', pareto_optimal_points[len(pareto_optimal_points)-1][1])
print('worst load', pareto_optimal_points[0][1])
print('PO state', p_[0])
print('PO load', p_[1])
APO['state'] = [2500.0, 2500.0, 2500.0, 2500.0, 2500.0, 3200.0, 3600.0, 3600.0, 3600.0, 3600.0]
APO['load'] = [27500.0, 55000.0, 82500.0,110000.0, 137500.0, 132000.0, 115500.0, 132000.0, 148500, 165000.0]
APO['num_func'] = [2.0, 2.0, 2.0, 2.0, 2.0,3.0, 4.0, 4.0, 4.0, 4.0]
figures = ['state', 'load', 'num_func']
ylabel_map = {'state': 'State Transfer Cost', 'load': 'Traffic Load', 'num_func': 'Number of StateMF sets'}
for fig in figures:
plt.plot(request_list, OST[fig], color='red', marker='*', linestyle='solid', label='OST')
plt.plot(request_list, OTL[fig], color='green', marker='o', linestyle='solid', label='OTL')
plt.plot(request_list, PO[fig], color='blue', marker='^', linestyle='solid', label='PO with w=0.5')
plt.plot(request_list, APO[fig], color='olive', marker='x', linestyle='solid', label='APO')
plt.xlabel('Number of session requests', fontsize='x-large')
plt.ylabel(ylabel_map[fig], fontsize='x-large')
plt.legend()
plt.show()
def draw_performance_results_ue():
ue_num_list = [1] + [50*x for x in range(20) if x != 0]
OST = dict()
OTL = dict()
PO = dict()
APO = dict()
OST['state'] = []
OTL['state'] = []
PO['state'] = []
OST['load'] = []
OTL['load'] = []
PO['load'] = []
OST['num_func'] = []
OTL['num_func'] = []
PO['num_func'] = []
for n in ue_num_list:
filename = "data/pareto_" + 'n' + "_" + str(n) + ".js"
f = open(filename)
pareto_optimal_points = ujson.load(f)
pareto_optimal_points.sort(key=lambda x: x[0])
OST['state'].append(pareto_optimal_points[0][0])
OST['load'].append(pareto_optimal_points[0][1])
OST['num_func'].append(pareto_optimal_points[0][2])
OTL['state'].append(pareto_optimal_points[len(pareto_optimal_points)-1][0])
OTL['load'].append(pareto_optimal_points[len(pareto_optimal_points)-1][1])
OTL['num_func'].append(pareto_optimal_points[len(pareto_optimal_points)-1][2])
filename = "data/pareto_weight_" + 'n' + "_" + str(n) + ".js"
f = open(filename)
pareto_optimal_weight_map = ujson.load(f)
for point in pareto_optimal_weight_map.keys():
if 0.5 in pareto_optimal_weight_map[point]:
p_ = literal_eval(point)
PO['state'].append(p_[0])
PO['load'].append(p_[1])
PO['num_func'].append(p_[2])
break
print('--------------------------', n, '-----------')
print('best state', pareto_optimal_points[0][0])
print('worst state', pareto_optimal_points[len(pareto_optimal_points)-1][0])
print('best load', pareto_optimal_points[len(pareto_optimal_points)-1][1])
print('worst load', pareto_optimal_points[0][1])
print('PO state', p_[0])
print('PO load', p_[1])
APO['state'] = [2500.0, 2500.0, 2500.0, 2500.0, 2500.0, 2500.0, 2500.0, 2500.0, 2500.0, 2500.0, 3200.0, 3200.0,
3600.0, 3200.0, 3600.0, 3600.0, 3600.0, 3600.0, 3600.0, 3600.0]
APO['load'] = [300.0, 15000.0, 30000.0, 45000.0, 60000.0, 75000.0, 90000.0, 105000.0, 120000.0, 135000.0,
120000.0, 132000.0, 108000, 156000, 126000.0, 135000.0, 144000.0, 153000.0, 162000.0, 171000.0]
APO['num_func'] = [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 3.0, 3.0, 4.0, 3.0, 4.0, 4.0, 4.0, 4.0,
4.0, 4.0]
figures = ['state', 'load', 'num_func']
ylabel_map = {'state': 'State Transfer Cost', 'load': 'Traffic Load', 'num_func': 'Number of StateMF sets'}
for fig in figures:
plt.plot(ue_num_list, OST[fig], color='red', marker='*', linestyle='solid', label='OST')
plt.plot(ue_num_list, OTL[fig], color='green', marker='o', linestyle='solid', label='OTL')
plt.plot(ue_num_list, PO[fig], color='blue', marker='^', linestyle='solid', label='PO with w=0.5')
plt.plot(ue_num_list, APO[fig], color='olive', marker='x', linestyle='solid', label='APO')
plt.xlabel('Number of UEs', fontsize='x-large')
plt.ylabel(ylabel_map[fig], fontsize='x-large')
plt.legend()
plt.show()
draw_running_time() | 39.570201 | 120 | 0.581101 | 1,986 | 13,810 | 3.85851 | 0.108258 | 0.101788 | 0.126452 | 0.01044 | 0.80308 | 0.760799 | 0.742529 | 0.742529 | 0.742529 | 0.735613 | 0 | 0.09645 | 0.237219 | 13,810 | 349 | 121 | 39.570201 | 0.631004 | 0.008545 | 0 | 0.590909 | 0 | 0 | 0.132437 | 0.005701 | 0.020979 | 0 | 0 | 0 | 0 | 1 | 0.027972 | false | 0 | 0.017483 | 0 | 0.048951 | 0.073427 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
10c8928d6be910cf9d19c0702382215a71adadb8 | 154 | py | Python | codesignal/array/alternatingSums.py | peterlamar/python-cp-cheatsheet | f9f854064a3c657c04fab27d0a496401bfa97da1 | [
"Apache-2.0"
] | 140 | 2020-10-21T13:23:52.000Z | 2022-03-31T15:09:45.000Z | codesignal/array/alternatingSums.py | ajibolashodipo/python-cp-cheatsheet | f9f854064a3c657c04fab27d0a496401bfa97da1 | [
"Apache-2.0"
] | 1 | 2021-07-22T14:01:25.000Z | 2021-07-22T14:01:25.000Z | codesignal/array/alternatingSums.py | ajibolashodipo/python-cp-cheatsheet | f9f854064a3c657c04fab27d0a496401bfa97da1 | [
"Apache-2.0"
] | 33 | 2020-10-21T14:17:02.000Z | 2022-03-25T11:25:03.000Z | """
For a = [50, 60, 60, 45, 70], the output should be
alternatingSums(a) = [180, 105].
"""
def alternatingSums(a):
return [sum(a[::2]), sum(a[1::2])] | 25.666667 | 50 | 0.577922 | 26 | 154 | 3.423077 | 0.692308 | 0.359551 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148438 | 0.168831 | 154 | 6 | 51 | 25.666667 | 0.546875 | 0.538961 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
10eb2a4c59cbe9874a3b2d220e5e301895207653 | 130 | py | Python | projects/code_combat/4_Backwoods_Forest/089-Defense_of_Plainswood/defense_of_plainswood.py | only-romano/junkyard | b60a25b2643f429cdafee438d20f9966178d6f36 | [
"MIT"
] | null | null | null | projects/code_combat/4_Backwoods_Forest/089-Defense_of_Plainswood/defense_of_plainswood.py | only-romano/junkyard | b60a25b2643f429cdafee438d20f9966178d6f36 | [
"MIT"
] | null | null | null | projects/code_combat/4_Backwoods_Forest/089-Defense_of_Plainswood/defense_of_plainswood.py | only-romano/junkyard | b60a25b2643f429cdafee438d20f9966178d6f36 | [
"MIT"
] | null | null | null | hero.cast("haste", hero)
hero.moveXY(40, 25)
hero.buildXY("fence", 40, 20)
hero.moveXY(40, 35)
hero.buildXY("fence", 40, 52)
| 21.666667 | 30 | 0.653846 | 22 | 130 | 3.863636 | 0.5 | 0.235294 | 0.282353 | 0.423529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141593 | 0.130769 | 130 | 5 | 31 | 26 | 0.610619 | 0 | 0 | 0 | 0 | 0 | 0.12 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
80470827baa26156485c11fd4f3192a646c04553 | 181 | py | Python | pyticker/view/pyticker_styles.py | priyanshus/pyticker | 4f84b6c907cc3e405f5b3b98eb5c6f11913d7aff | [
"MIT"
] | null | null | null | pyticker/view/pyticker_styles.py | priyanshus/pyticker | 4f84b6c907cc3e405f5b3b98eb5c6f11913d7aff | [
"MIT"
] | 8 | 2021-03-19T06:24:02.000Z | 2021-03-21T07:25:44.000Z | pyticker/view/pyticker_styles.py | priyanshus/pyticker | 4f84b6c907cc3e405f5b3b98eb5c6f11913d7aff | [
"MIT"
] | null | null | null | class PyTickerStyles(object):
GREY_BACKGROUND_BLACK_TEXT = "bg:#e5e7e9 #000000"
DARK_GREY_BACKGROUND_BLACK_TEXT = "bg:#b2babb #000000"
INPUT_FIELD = "class:input-field"
| 36.2 | 58 | 0.751381 | 23 | 181 | 5.565217 | 0.608696 | 0.21875 | 0.296875 | 0.359375 | 0.390625 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 0.138122 | 181 | 4 | 59 | 45.25 | 0.717949 | 0 | 0 | 0 | 0 | 0 | 0.292818 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
8048484e8098083a8c6cdf13da247ddde1d14335 | 9,897 | py | Python | beaconsite/tests/test_views.py | brand-fabian/varfish-server | 6a084d891d676ff29355e72a29d4f7b207220283 | [
"MIT"
] | 14 | 2019-09-30T12:44:17.000Z | 2022-02-04T14:45:16.000Z | beaconsite/tests/test_views.py | brand-fabian/varfish-server | 6a084d891d676ff29355e72a29d4f7b207220283 | [
"MIT"
] | 244 | 2021-03-26T15:13:15.000Z | 2022-03-31T15:48:04.000Z | beaconsite/tests/test_views.py | brand-fabian/varfish-server | 6a084d891d676ff29355e72a29d4f7b207220283 | [
"MIT"
] | 8 | 2020-05-19T21:55:13.000Z | 2022-03-31T07:02:58.000Z | """Tests for UI views in the beaconsite app"""
from Crypto.PublicKey import RSA
from django.urls import reverse
from test_plus.test import TestCase
from beaconsite.models import Consortium, Site
from beaconsite.tests.factories import ConsortiumFactory, SiteFactory
from variants.tests.factories import ProjectFactory
class TestViewsBase(TestCase):
def setUp(self):
self.superuser = self.make_user("superuser")
self.superuser.is_superuser = True
self.superuser.is_staff = True
self.superuser.save()
self.project = ProjectFactory()
self.consortium = ConsortiumFactory()
self.site = SiteFactory(role=Site.LOCAL, state=Site.ENABLED)
class TestIndexView(TestViewsBase):
def test_render(self):
with self.login(self.superuser):
response = self.client.get(reverse("beaconsite:index"))
self.assertEqual(response.status_code, 200)
self.assertIsNotNone(response.context["consortium_list"])
self.assertIsNotNone(response.context["site_list"])
self.assertEqual(response.context["consortium_list"][0].pk, self.consortium.pk)
self.assertEqual(response.context["site_list"][0].pk, self.site.pk)
class TestConsortiumListView(TestViewsBase):
def test_render(self):
with self.login(self.superuser):
response = self.client.get(reverse("beaconsite:consortium-list"))
self.assertEqual(response.status_code, 200)
self.assertIsNotNone(response.context["object_list"])
self.assertEqual(response.context["object_list"][0].pk, self.consortium.pk)
class TestConsortiumCreateView(TestViewsBase):
def test_render(self):
with self.login(self.superuser):
response = self.client.get(reverse("beaconsite:consortium-create"))
self.assertEqual(response.status_code, 200)
def test_create(self):
self.assertEqual(Consortium.objects.count(), 1)
post_data = {
"title": "XXX",
"identifier": "xxx",
"description": "ddd",
"state": Consortium.ENABLED,
"sites": [self.site.pk],
"projects": [self.project.pk],
}
with self.login(self.superuser):
response = self.client.post(reverse("beaconsite:consortium-create"), post_data)
self.assertEqual(response.status_code, 302)
latest_consortium = Consortium.objects.order_by("-date_created")[0]
self.assertEqual(
response.url,
reverse(
"beaconsite:consortium-detail", kwargs={"consortium": latest_consortium.sodar_uuid}
),
)
self.assertEqual(Consortium.objects.count(), 2)
class TestConsortiumUpdateView(TestViewsBase):
def test_render(self):
with self.login(self.superuser):
response = self.client.get(
reverse(
"beaconsite:consortium-update",
kwargs={"consortium": self.consortium.sodar_uuid},
)
)
self.assertEqual(response.status_code, 200)
self.assertIsNotNone(response.context["object"])
def test_update(self):
self.assertEqual(Consortium.objects.count(), 1)
post_data = {
"title": "XXX",
"identifier": "xxx",
"description": "ddd",
"state": Consortium.ENABLED,
"sites": [self.site.pk],
"projects": [self.project.pk],
}
with self.login(self.superuser):
response = self.client.post(
reverse(
"beaconsite:consortium-update",
kwargs={"consortium": self.consortium.sodar_uuid},
),
post_data,
)
self.assertEqual(response.status_code, 302)
latest_consortium = Consortium.objects.order_by("-date_created")[0]
self.assertEqual(
response.url,
reverse(
"beaconsite:consortium-detail", kwargs={"consortium": latest_consortium.sodar_uuid}
),
)
self.assertEqual(Consortium.objects.count(), 1)
self.consortium.refresh_from_db()
for key in ("title", "identifier", "description", "state"):
self.assertEqual(getattr(self.consortium, key), post_data[key])
self.assertEqual([self.site.pk], [s.pk for s in self.consortium.sites.all()])
self.assertEqual([self.project.pk], [p.pk for p in self.consortium.projects.all()])
class TestConsortiumDeleteView(TestViewsBase):
def test_render(self):
with self.login(self.superuser):
response = self.client.get(
reverse(
"beaconsite:consortium-delete",
kwargs={"consortium": self.consortium.sodar_uuid},
)
)
self.assertEqual(response.status_code, 200)
def test_delete(self):
# Assert precondition
self.assertEqual(Consortium.objects.all().count(), 1)
with self.login(self.superuser):
response = self.client.post(
reverse(
"beaconsite:consortium-delete",
kwargs={"consortium": self.consortium.sodar_uuid},
)
)
self.assertEqual(response.status_code, 302)
self.assertEqual(response.url, reverse("beaconsite:consortium-list"))
# Assert postconditions
self.assertEqual(Consortium.objects.all().count(), 0)
class TestSiteListView(TestViewsBase):
def test_render(self):
with self.login(self.superuser):
response = self.client.get(reverse("beaconsite:site-list"))
self.assertEqual(response.status_code, 200)
self.assertIsNotNone(response.context["object_list"])
self.assertEqual(response.context["object_list"][0].pk, self.site.pk)
class TestSiteCreateView(TestViewsBase):
def test_render(self):
with self.login(self.superuser):
response = self.client.get(reverse("beaconsite:site-create"))
self.assertEqual(response.status_code, 200)
def test_create(self):
self.assertEqual(Site.objects.count(), 1)
rsa_key = RSA.generate(2048)
public_key = rsa_key.public_key().export_key("PEM").decode("ascii")
post_data = {
"title": "XXX",
"identifier": "xxx",
"description": "ddd",
"state": Site.ENABLED,
"role": Site.REMOTE,
"entrypoint_url": "http://site.example.com",
"key_algo": Site.RSA_SHA256,
"public_key": public_key,
"consortia": [self.consortium.pk],
}
with self.login(self.superuser):
response = self.client.post(reverse("beaconsite:site-create"), post_data)
self.assertEqual(response.status_code, 302)
latest_site = Site.objects.order_by("-date_created")[0]
self.assertEqual(
response.url,
reverse("beaconsite:site-detail", kwargs={"site": latest_site.sodar_uuid}),
)
self.assertEqual(Site.objects.count(), 2)


class TestSiteUpdateView(TestViewsBase):
    def test_render(self):
        with self.login(self.superuser):
            response = self.client.get(
                reverse("beaconsite:site-update", kwargs={"site": self.site.sodar_uuid})
            )
        self.assertEqual(response.status_code, 200)
        self.assertIsNotNone(response.context["object"])

    def test_update(self):
        self.assertEqual(Site.objects.count(), 1)
        rsa_key = RSA.generate(2048)
        public_key = rsa_key.public_key().export_key("PEM").decode("ascii")
        post_data = {
            "title": "XXX",
            "identifier": "xxx",
            "description": "ddd",
            "state": Site.ENABLED,
            "role": Site.REMOTE,
            "entrypoint_url": "http://site.example.com",
            "key_algo": Site.RSA_SHA256,
            "public_key": public_key,
            "consortia": [self.consortium.pk],
        }
        with self.login(self.superuser):
            response = self.client.post(
                reverse("beaconsite:site-update", kwargs={"site": self.site.sodar_uuid}),
                post_data,
            )
        self.assertEqual(response.status_code, 302)
        latest_site = Site.objects.order_by("-date_created")[0]
        self.assertEqual(
            response.url,
            reverse("beaconsite:site-detail", kwargs={"site": latest_site.sodar_uuid}),
        )
        self.assertEqual(Site.objects.count(), 1)
        self.site.refresh_from_db()
        keys = (
            "title",
            "identifier",
            "description",
            "state",
            "role",
            "entrypoint_url",
            "key_algo",
            "public_key",
        )
        for key in keys:
            self.assertEqual(getattr(self.site, key), post_data[key])
        self.assertEqual([self.consortium.pk], [s.pk for s in self.site.consortia.all()])


class TestSiteDeleteView(TestViewsBase):
    def test_render(self):
        with self.login(self.superuser):
            response = self.client.get(
                reverse("beaconsite:site-delete", kwargs={"site": self.site.sodar_uuid})
            )
        self.assertEqual(response.status_code, 200)

    def test_delete(self):
        # Assert precondition
        self.assertEqual(Site.objects.all().count(), 1)
        with self.login(self.superuser):
            response = self.client.post(
                reverse("beaconsite:site-delete", kwargs={"site": self.site.sodar_uuid})
            )
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response.url, reverse("beaconsite:site-list"))
        # Assert postconditions
        self.assertEqual(Site.objects.all().count(), 0)


# File: tests/thriftclient.py (aiden0z/flask-thriftclient, BSD-2-Clause)
# -*- coding:utf-8 -*-
from thrift.transport import *
from thrift.transport import TSSLSocket
from thrift.protocol import *
from thrift.protocol import TCompactProtocol

import unittest

from flask import Flask
from flask_thriftclient import ThriftClient


class StubClient:
    def __init__(self, protocol):
        pass


class TestSequenceFunctions(unittest.TestCase):
    def setUp(self):
        self.app = Flask(__name__)

    def test_default_values(self):
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, TSocket.TSocket))
        self.assertTrue(isinstance(client.protocol,
                                   TBinaryProtocol.TBinaryProtocol))
        self.assertEquals(client.transport.port, 9090)
        self.assertEquals(client.transport.host, "localhost")

    def test_transport_bad_scheme(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "bad://whatever"
        with self.assertRaises(RuntimeError):
            client = ThriftClient(StubClient, self.app)

    def test_transport_none_transport(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = None
        with self.assertRaises(RuntimeError):
            client = ThriftClient(StubClient, self.app)

    def test_transport_empty_url(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = ""
        with self.assertRaises(RuntimeError):
            client = ThriftClient(StubClient, self.app)

    def test_transport_no_scheme(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "/tmp/somefile"
        with self.assertRaises(RuntimeError):
            client = ThriftClient(StubClient, self.app)

    def test_transport_tcp_noport(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "tcp://192.168.0.42"
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, TSocket.TSocket))
        self.assertEquals(client.transport.port, 9090)
        self.assertEquals(client.transport.host, "192.168.0.42")

    def test_transport_tcp_longurl(self):
        self.app.config[
            "THRIFTCLIENT_TRANSPORT"] = "tcp://mydomain.foo.com:5921/whatever?its=21;not=zrzer#used"
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, TSocket.TSocket))
        self.assertEquals(client.transport.port, 5921)
        self.assertEquals(client.transport.host, "mydomain.foo.com")

    def test_transport_unix_1(self):
        self.app.config[
            "THRIFTCLIENT_TRANSPORT"] = "unix:///tmp/testunixsocket"
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, TSocket.TSocket))
        self.assertEquals(client.transport._unix_socket, "/tmp/testunixsocket")

    def test_transport_unix_2(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "unix:/tmp/testunixsocket"
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, TSocket.TSocket))
        self.assertEquals(client.transport._unix_socket, "/tmp/testunixsocket")

    def test_transport_unix_bad(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "unix://tmp/testunixsocket"
        with self.assertRaises(RuntimeError):
            client = ThriftClient(StubClient, self.app)

    def test_transport_http(self):
        self.app.config[
            "THRIFTCLIENT_TRANSPORT"] = "http://foo.bar.com:8080/end/point"
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, THttpClient.THttpClient))
        self.assertEquals(client.transport.scheme, "http")
        self.assertEquals(client.transport.host, "foo.bar.com")
        self.assertEquals(client.transport.port, 8080)
        self.assertEquals(client.transport.path, "/end/point")

    def test_transport_https(self):
        self.app.config[
            "THRIFTCLIENT_TRANSPORT"] = "https://foo.bar.com:8080/end/point"
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, THttpClient.THttpClient))
        self.assertEquals(client.transport.scheme, "https")
        self.assertEquals(client.transport.host, "foo.bar.com")
        self.assertEquals(client.transport.port, 8080)
        self.assertEquals(client.transport.path, "/end/point")

    def test_transport_http_defaultport(self):
        self.app.config[
            "THRIFTCLIENT_TRANSPORT"] = "http://foo.bar.com/end/point"
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, THttpClient.THttpClient))
        self.assertEquals(client.transport.scheme, "http")
        self.assertEquals(client.transport.port, 80)

    def test_transport_defaultport(self):
        self.app.config[
            "THRIFTCLIENT_TRANSPORT"] = "https://foo.bar.com/end/point"
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, THttpClient.THttpClient))
        self.assertEquals(client.transport.scheme, "https")
        self.assertEquals(client.transport.port, 443)

    def test_transport_tcps_no_cert(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "tcps://192.168.0.42"
        self.app.config["THRIFTCLIENT_SSL_VALIDATE"] = False
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, TSSLSocket.TSSLSocket))
        self.assertEquals(client.transport.port, 9090)
        self.assertEquals(client.transport.host, "192.168.0.42")

    def test_transport_tcps_with_cert(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "tcps://192.168.0.42"
        self.app.config["THRIFTCLIENT_SSL_CA_CERTS"] = "tests/cacert.pem"
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, TSSLSocket.TSSLSocket))
        self.assertEquals(client.transport.port, 9090)
        self.assertEquals(client.transport.host, "192.168.0.42")

    def test_transport_tcps_forgot_cert(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "tcps://192.168.0.42"
        self.app.config["THRIFTCLIENT_SSL_VALIDATE"] = True
        self.app.config["THRIFTCLIENT_SSL_CA_CERTS"] = None
        with self.assertRaises(IOError):
            client = ThriftClient(StubClient, self.app)

    def test_transport_tcps_unreadable_cert(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "tcps://192.168.0.42"
        self.app.config["THRIFTCLIENT_SSL_VALIDATE"] = True
        self.app.config["THRIFTCLIENT_SSL_CA_CERTS"] = "missingcert"
        with self.assertRaises(IOError):
            client = ThriftClient(StubClient, self.app)

    def test_transport_unixs_no_cert(self):
        self.app.config[
            "THRIFTCLIENT_TRANSPORT"] = "unixs:/tmp/thriftsocketfile"
        self.app.config["THRIFTCLIENT_SSL_VALIDATE"] = False
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, TSSLSocket.TSSLSocket))
        self.assertEquals(client.transport._unix_socket,
                          "/tmp/thriftsocketfile")

    def test_transport_unixs_no_cert_2(self):
        self.app.config[
            "THRIFTCLIENT_TRANSPORT"] = "unixs:///tmp/thriftsocketfile"
        self.app.config["THRIFTCLIENT_SSL_VALIDATE"] = False
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, TSSLSocket.TSSLSocket))
        self.assertEquals(client.transport._unix_socket,
                          "/tmp/thriftsocketfile")

    def test_transport_unixs_bad_hostname(self):
        self.app.config[
            "THRIFTCLIENT_TRANSPORT"] = "unixs://tmp/thriftsocketfile"
        self.app.config["THRIFTCLIENT_SSL_VALIDATE"] = False
        with self.assertRaises(RuntimeError):
            client = ThriftClient(StubClient, self.app)

    def test_transport_unixs_with_cert(self):
        self.app.config[
            "THRIFTCLIENT_TRANSPORT"] = "unixs:/tmp/thriftsocketfile"
        self.app.config["THRIFTCLIENT_SSL_CA_CERTS"] = "tests/cacert.pem"
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.transport, TSSLSocket.TSSLSocket))
        self.assertEquals(client.transport._unix_socket,
                          "/tmp/thriftsocketfile")

    def test_protocol_bad(self):
        self.app.config["THRIFTCLIENT_PROTOCOL"] = "BAD"
        with self.assertRaises(RuntimeError):
            client = ThriftClient(StubClient, self.app)

    def test_protocol_binary(self):
        self.app.config["THRIFTCLIENT_PROTOCOL"] = ThriftClient.BINARY
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.protocol,
                                   TBinaryProtocol.TBinaryProtocol))

    def test_protocol_compact(self):
        self.app.config["THRIFTCLIENT_PROTOCOL"] = ThriftClient.COMPACT
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.protocol,
                                   TCompactProtocol.TCompactProtocol))

    def test_protocol_json(self):
        self.app.config["THRIFTCLIENT_PROTOCOL"] = ThriftClient.JSON
        client = ThriftClient(StubClient, self.app)
        self.assertTrue(isinstance(client.protocol,
                                   TJSONProtocol.TJSONProtocol))

    def test_connection(self):
        """
        http connections aren't really opened, so we can test
        them without a server
        """
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "http://localhost:8735"
        client = ThriftClient(StubClient, self.app)

        @self.app.route("/testme")
        def testme():
            return "OK" if client.transport.isOpen() else "KO"

        testclient = self.app.test_client()
        ret = testclient.get("/testme")
        self.assertEquals(ret.data, "OK")
        self.assertFalse(client.transport.isOpen())

    def test_connection_no_server(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "tcp://localhost:8735"
        client = ThriftClient(StubClient, self.app)

        @self.app.route("/testme")
        def testme():
            return "KO"

        testclient = self.app.test_client()
        ret = testclient.get("/testme")
        self.assertEquals(ret.status_code, 500)
        self.assertFalse(client.transport.isOpen())

    def test_no_alwaysconnect(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "tcp://localhost:8735"
        self.app.config["THRIFTCLIENT_ALWAYS_CONNECT"] = False
        client = ThriftClient(StubClient, self.app)

        @self.app.route("/testme")
        def testme():
            return "KO" if client.transport.isOpen() else "OK"

        testclient = self.app.test_client()
        ret = testclient.get("/testme")
        self.assertEquals(ret.data, "OK")
        self.assertFalse(client.transport.isOpen())

    def test_connect_ctx(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "http://localhost:8735"
        self.app.config["THRIFTCLIENT_ALWAYS_CONNECT"] = False
        client = ThriftClient(StubClient, self.app)
        with client.connect():
            self.assertTrue(client.transport.isOpen())
        self.assertFalse(client.transport.isOpen())

    def test_autoconnect(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "http://localhost:8735"
        self.app.config["THRIFTCLIENT_ALWAYS_CONNECT"] = False
        client = ThriftClient(StubClient, self.app)

        @self.app.route("/testme")
        @client.autoconnect
        def testme():
            return "OK" if client.transport.isOpen() else "KO"

        testclient = self.app.test_client()
        ret = testclient.get("/testme")
        self.assertEquals(ret.data, "OK")
        self.assertFalse(client.transport.isOpen())

    def test_autoconnect_with_alwaysconnect(self):
        self.app.config["THRIFTCLIENT_TRANSPORT"] = "http://localhost:8735"
        self.app.config["THRIFTCLIENT_ALWAYS_CONNECT"] = False
        client = ThriftClient(StubClient, self.app)

        @self.app.route("/testme")
        @client.autoconnect
        def testme():
            return "OK" if client.transport.isOpen() else "KO"

        testclient = self.app.test_client()
        ret = testclient.get("/testme")
        self.assertEquals(ret.data, "OK")
        self.assertFalse(client.transport.isOpen())


if __name__ == "__main__":
    unittest.main()
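The transport tests above all hinge on how a `THRIFTCLIENT_TRANSPORT` URL is split into scheme, host or socket path, and port. The helper below is an illustrative stdlib sketch of that parsing logic, not flask-thriftclient's actual implementation (which additionally builds the matching Thrift transport object):

```python
from urllib.parse import urlparse

WELL_KNOWN_PORTS = {"http": 80, "https": 443}


def split_transport_url(url, default_port=9090):
    """Split a transport URL into (scheme, host_or_path, port)."""
    parsed = urlparse(url or "")
    if not parsed.scheme:
        raise RuntimeError("THRIFTCLIENT_TRANSPORT must start with a scheme")
    if parsed.scheme in ("unix", "unixs"):
        # "unix://tmp/sock" puts "tmp" into netloc, which is why the tests
        # reject it; only "unix:/path" and "unix:///path" are accepted.
        if parsed.netloc:
            raise RuntimeError("unix sockets take no hostname")
        return parsed.scheme, parsed.path, None
    port = parsed.port or WELL_KNOWN_PORTS.get(parsed.scheme, default_port)
    return parsed.scheme, parsed.hostname, port
```

This reproduces the expectations encoded in the tests: `tcp://192.168.0.42` falls back to port 9090, `http`/`https` fall back to 80/443, and both `unix:/path` and `unix:///path` yield the same socket path.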


# File: demo/admin.py (dzhuang/django-galleryfield, MIT)
from django.contrib import admin
from demo.models import DemoGallery
from galleryfield.models import BuiltInGalleryImage

admin.site.register(DemoGallery)
admin.site.register(BuiltInGalleryImage)


# File: tests/test_views_functional.py (olivier-2018/SE_project, MIT)
import pytest
import os

from src_api import create_app
from src_api.views import load_audio_sequence, make_prediction, load_ML_model
import tensorflow as tf

flask_app = create_app()


@pytest.mark.xfail
@pytest.mark.skipif(tf.__version__ != "2.7.0", reason="Tensorflow v2.7.0 only available with pip (not conda)")
@pytest.mark.parametrize(
    "filename, testfile_path, model_prediction",
    [
        ("one_16000.wav", "testfiles", 1),
        ("two_22050.wav", "testfiles", 2),
        ("three_8000.wav", "testfiles", 3),
        ("three_96000.wav", "testfiles", 3),
    ],
)
def test_load_ML_model_1(filename: str, testfile_path: str, model_prediction: int) -> None:
    """Test that the model prediction is correct."""
    with flask_app.app_context():
        model_path = "/ML_model/audio_MNIST_v3-TF_v2.7.0.tf"
        model = load_ML_model(model_path)
        testfile_path_full = os.path.join(flask_app.config["APP_FOLDER"], "tests", "testfiles")
        audio_sequence = load_audio_sequence(filename, testfile_path_full, sampling_rate=8000, max_seq_length=8000)
        prediction = make_prediction(model, audio_sequence, model_input_dim=8000)
        assert prediction == model_prediction


@pytest.mark.xfail
@pytest.mark.skipif(tf.__version__ != "2.3.0", reason="Tensorflow v2.3.0 only available with conda")
@pytest.mark.parametrize(
    "filename, testfile_path, model_prediction",
    [
        ("one_16000.wav", "testfiles", 1),
        ("two_22050.wav", "testfiles", 2),
        ("three_8000.wav", "testfiles", 3),
        ("three_96000.wav", "testfiles", 3),
    ],
)
def test_load_ML_model_2(filename: str, testfile_path: str, model_prediction: int) -> None:
    """Test that the model prediction is correct."""
    with flask_app.app_context():
        model_path = "/ML_model/audio_MNIST_v3-TF_v2.3.0.tf"
        model = load_ML_model(model_path)
        testfile_path_full = os.path.join(flask_app.config["APP_FOLDER"], "tests", "testfiles")
        audio_sequence = load_audio_sequence(filename, testfile_path_full, sampling_rate=8000, max_seq_length=8000)
        prediction = make_prediction(model, audio_sequence, model_input_dim=8000)
        assert prediction == model_prediction
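`@pytest.mark.parametrize` in the tests above fans each tuple out into an independent test invocation. A minimal stdlib sketch of that mechanism (illustrative only; pytest additionally gives each case its own ID and reports it separately):

```python
def run_parametrized(test_fn, argnames, argvalues):
    """Call test_fn once per value tuple, as parametrize does."""
    names = [name.strip() for name in argnames.split(",")]
    for values in argvalues:
        test_fn(**dict(zip(names, values)))


# Record which cases ran, using the same case table as the tests above.
seen = []
run_parametrized(
    lambda filename, testfile_path, model_prediction: seen.append(filename),
    "filename, testfile_path, model_prediction",
    [
        ("one_16000.wav", "testfiles", 1),
        ("two_22050.wav", "testfiles", 2),
        ("three_8000.wav", "testfiles", 3),
        ("three_96000.wav", "testfiles", 3),
    ],
)
```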


# File: tool/bug_finder/__init__.py (MageWeiG/karonte, BSD-2-Clause)
from bug_finder import *


# File: code_sandbox/tasks_queue_app/src/__init__.py (gretkierewicz/gret_code_examples, MIT)
from .entities import *
from .tasks import *


# File: py2app_tests/plugin_with_scripts/helper2.py (flupke/py2app, MIT)
import code
print("Helper 2")


# File: mongodb/factory/results/statistics.py (RaenonX/Jelly-Bot-API, MIT)
from dataclasses import dataclass
from ._base import ModelResult


@dataclass
class RecordAPIStatisticsResult(ModelResult):
    pass
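`RecordAPIStatisticsResult` adds nothing of its own: with `@dataclass`, an empty subclass still inherits the base class's generated fields and `__init__`. A self-contained sketch of that pattern (the real `ModelResult` fields live in `._base`, outside this chunk, so the base here is a hypothetical stand-in):

```python
from dataclasses import dataclass, fields


@dataclass
class ModelResultSketch:  # hypothetical stand-in for ._base.ModelResult
    outcome: int = 0
    exception: BaseException = None


@dataclass
class RecordAPIStatisticsResultSketch(ModelResultSketch):
    pass  # inherits outcome, exception, and the generated __init__


result = RecordAPIStatisticsResultSketch(outcome=1)
```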


# File: glooey/drawing/__init__.py (Rahuum/glooey, MIT)
from .color import *
from .text import *
from .artists import *
from .stencil import *
from .alignment import *
from .grid import *


# File: src/abaqus/Section/GasketSection.py (Haiiliin/PyAbaqus, MIT)
import typing
from abaqusConstants import *
from .Section import Section


class GasketSection(Section):
    """The GasketSection object defines the properties of a gasket section.
    The GasketSection object is derived from the Section object.

    Notes
    -----
    This object can be accessed by:

    .. code-block:: python

        import section
        mdb.models[name].sections[name]
        import odbSection
        session.odbs[name].sections[name]

    The corresponding analysis keywords are:

    - GASKET SECTION
    """

    def __init__(self, name: str, material: str, crossSection: float = 1, initialGap: float = 0,
                 initialThickness: typing.Union[SymbolicConstant, float] = DEFAULT,
                 initialVoid: float = 0,
                 stabilizationStiffness: typing.Union[SymbolicConstant, float] = DEFAULT):
        """This method creates a GasketSection object.

        Notes
        -----
        This function can be accessed by:

        .. code-block:: python

            mdb.models[name].GasketSection
            session.odbs[name].GasketSection

        Parameters
        ----------
        name
            A String specifying the repository key.
        material
            A String specifying the name of the material of which the gasket is made or material
            that defines gasket behavior.
        crossSection
            A Float specifying the cross-sectional area, width, or out-of-plane thickness, if
            applicable, depending on the gasket element type. The default value is 1.0.
        initialGap
            A Float specifying the initial gap. The default value is 0.0.
        initialThickness
            The SymbolicConstant DEFAULT or a Float specifying the initial gasket thickness. If
            DEFAULT is specified, the initial thickness is determined using nodal coordinates. The
            default value is DEFAULT.
        initialVoid
            A Float specifying the initial void. The default value is 0.0.
        stabilizationStiffness
            The SymbolicConstant DEFAULT or a Float specifying the default stabilization stiffness
            used in all but link elements to stabilize gasket elements that are not supported at all
            nodes, such as those that extend outside neighboring components. If DEFAULT is
            specified, a value is used equal to 10^-9 times the initial compressive stiffness in the
            thickness direction. The default value is DEFAULT.

        Returns
        -------
        A GasketSection object.

        Raises
        ------
        ValueError
        """
        super().__init__()
        pass

    def setValues(self, crossSection: float = 1, initialGap: float = 0,
                  initialThickness: typing.Union[SymbolicConstant, float] = DEFAULT,
                  initialVoid: float = 0,
                  stabilizationStiffness: typing.Union[SymbolicConstant, float] = DEFAULT):
        """This method modifies the GasketSection object.

        Parameters
        ----------
        crossSection
            A Float specifying the cross-sectional area, width, or out-of-plane thickness, if
            applicable, depending on the gasket element type. The default value is 1.0.
        initialGap
            A Float specifying the initial gap. The default value is 0.0.
        initialThickness
            The SymbolicConstant DEFAULT or a Float specifying the initial gasket thickness. If
            DEFAULT is specified, the initial thickness is determined using nodal coordinates. The
            default value is DEFAULT.
        initialVoid
            A Float specifying the initial void. The default value is 0.0.
        stabilizationStiffness
            The SymbolicConstant DEFAULT or a Float specifying the default stabilization stiffness
            used in all but link elements to stabilize gasket elements that are not supported at all
            nodes, such as those that extend outside neighboring components. If DEFAULT is
            specified, a value is used equal to 10^-9 times the initial compressive stiffness in the
            thickness direction. The default value is DEFAULT.

        Raises
        ------
        ValueError
        """
        pass
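Both docstrings above describe the same convention: a parameter accepts either a Float or the SymbolicConstant DEFAULT, in which case Abaqus derives the value itself (for `stabilizationStiffness`, 10^-9 times the initial compressive stiffness in the thickness direction). A pure-Python sketch of that resolution rule; the `DEFAULT` sentinel and helper below are illustrative stand-ins, not Abaqus API:

```python
DEFAULT = object()  # stand-in for abaqusConstants.DEFAULT (assumption)


def resolve_stabilization_stiffness(value, initial_compressive_stiffness):
    """Apply the documented rule: DEFAULT -> 1e-9 * initial compressive stiffness."""
    if value is DEFAULT:
        return 1e-9 * initial_compressive_stiffness
    return float(value)
```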


# File: pfrl/q_functions/state_action_q_functions.py (g-votte/pfrl, MIT)
from pfrl.nn.mlp import MLP
from pfrl.nn.mlp_bn import MLPBN
from pfrl.q_function import StateActionQFunction

import torch
import torch.nn as nn
import torch.nn.functional as F

from pfrl.initializers import init_lecun_normal


class SingleModelStateActionQFunction(nn.Module, StateActionQFunction):
    """Q-function with discrete actions.

    Args:
        model (nn.Module):
            Module that is callable and outputs action values.
    """

    def __init__(self, model):
        super().__init__()
        self.model = model  # nn.Module.__init__ takes no kwargs; store the wrapped module

    def forward(self, x, a):
        h = self.model(x, a)
        return h


class FCSAQFunction(MLP, StateActionQFunction):
    """Fully-connected (s,a)-input Q-function.

    Args:
        n_dim_obs (int): Number of dimensions of observation space.
        n_dim_action (int): Number of dimensions of action space.
        n_hidden_channels (int): Number of hidden channels.
        n_hidden_layers (int): Number of hidden layers.
        nonlinearity (callable): Nonlinearity between layers. It must accept a
            Variable as an argument and return a Variable with the same shape.
            Nonlinearities with learnable parameters such as PReLU are not
            supported. It is not used if n_hidden_layers is zero.
        last_wscale (float): Scale of weight initialization of the last layer.
    """

    def __init__(
        self,
        n_dim_obs,
        n_dim_action,
        n_hidden_channels,
        n_hidden_layers,
        nonlinearity=F.relu,
        last_wscale=1.0,
    ):
        self.n_input_channels = n_dim_obs + n_dim_action
        self.n_hidden_layers = n_hidden_layers
        self.n_hidden_channels = n_hidden_channels
        self.nonlinearity = nonlinearity
        super().__init__(
            in_size=self.n_input_channels,
            out_size=1,
            hidden_sizes=[self.n_hidden_channels] * self.n_hidden_layers,
            nonlinearity=nonlinearity,
            last_wscale=last_wscale,
        )

    def forward(self, state, action):
        h = torch.cat((state, action), dim=1)
        return super().forward(h)


class FCLSTMSAQFunction(nn.Module, StateActionQFunction):
    """Fully-connected + LSTM (s,a)-input Q-function.

    Args:
        n_dim_obs (int): Number of dimensions of observation space.
        n_dim_action (int): Number of dimensions of action space.
        n_hidden_channels (int): Number of hidden channels.
        n_hidden_layers (int): Number of hidden layers.
        nonlinearity (callable): Nonlinearity between layers. It must accept a
            Variable as an argument and return a Variable with the same shape.
            Nonlinearities with learnable parameters such as PReLU are not
            supported.
        last_wscale (float): Scale of weight initialization of the last layer.
    """

    def __init__(
        self,
        n_dim_obs,
        n_dim_action,
        n_hidden_channels,
        n_hidden_layers,
        nonlinearity=F.relu,
        last_wscale=1.0,
    ):
        # Not supported yet; the construction code below is kept for reference.
        raise NotImplementedError()
        self.n_input_channels = n_dim_obs + n_dim_action
        self.n_hidden_layers = n_hidden_layers
        self.n_hidden_channels = n_hidden_channels
        self.nonlinearity = nonlinearity
        super().__init__()
        self.fc = MLP(
            self.n_input_channels,
            n_hidden_channels,
            [self.n_hidden_channels] * self.n_hidden_layers,
            nonlinearity=nonlinearity,
        )
        self.lstm = nn.LSTM(
            num_layers=1, input_size=n_hidden_channels, hidden_size=n_hidden_channels
        )
        self.out = nn.Linear(n_hidden_channels, 1)
        for (n, p) in self.lstm.named_parameters():
            if "weight" in n:
                init_lecun_normal(p)
            else:
                nn.init.zeros_(p)
        init_lecun_normal(self.out.weight, scale=last_wscale)
        nn.init.zeros_(self.out.bias)

    def forward(self, x, a):
        h = torch.cat((x, a), dim=1)
        h = self.nonlinearity(self.fc(h))
        h = self.lstm(h)
        return self.out(h)


class FCBNSAQFunction(MLPBN, StateActionQFunction):
    """Fully-connected + BN (s,a)-input Q-function.

    Args:
        n_dim_obs (int): Number of dimensions of observation space.
        n_dim_action (int): Number of dimensions of action space.
        n_hidden_channels (int): Number of hidden channels.
        n_hidden_layers (int): Number of hidden layers.
        normalize_input (bool): If set to True, Batch Normalization is applied
            to both observations and actions.
        nonlinearity (callable): Nonlinearity between layers. It must accept a
            Variable as an argument and return a Variable with the same shape.
            Nonlinearities with learnable parameters such as PReLU are not
            supported. It is not used if n_hidden_layers is zero.
        last_wscale (float): Scale of weight initialization of the last layer.
    """

    def __init__(
        self,
        n_dim_obs,
        n_dim_action,
        n_hidden_channels,
        n_hidden_layers,
        normalize_input=True,
        nonlinearity=F.relu,
        last_wscale=1.0,
    ):
        self.n_input_channels = n_dim_obs + n_dim_action
        self.n_hidden_layers = n_hidden_layers
        self.n_hidden_channels = n_hidden_channels
        self.normalize_input = normalize_input
        self.nonlinearity = nonlinearity
        super().__init__(
            in_size=self.n_input_channels,
            out_size=1,
            hidden_sizes=[self.n_hidden_channels] * self.n_hidden_layers,
            normalize_input=self.normalize_input,
            nonlinearity=nonlinearity,
            last_wscale=last_wscale,
        )

    def forward(self, state, action):
        h = torch.cat((state, action), dim=1)
        return super().forward(h)
class FCBNLateActionSAQFunction(nn.Module, StateActionQFunction):
"""Fully-connected + BN (s,a)-input Q-function with late action input.
Actions are not included until the second hidden layer and not normalized.
This architecture is used in the DDPG paper:
http://arxiv.org/abs/1509.02971
Args:
n_dim_obs (int): Number of dimensions of observation space.
n_dim_action (int): Number of dimensions of action space.
n_hidden_channels (int): Number of hidden channels.
n_hidden_layers (int): Number of hidden layers. It must be greater than
or equal to 1.
normalize_input (bool): If set to True, Batch Normalization is applied
nonlinearity (callable): Nonlinearity between layers. It must accept a
Variable as an argument and return a Variable with the same shape.
Nonlinearities with learnable parameters such as PReLU are not
supported.
last_wscale (float): Scale of weight initialization of the last layer.
"""
def __init__(
self,
n_dim_obs,
n_dim_action,
n_hidden_channels,
n_hidden_layers,
normalize_input=True,
nonlinearity=F.relu,
last_wscale=1.0,
):
assert n_hidden_layers >= 1
self.n_input_channels = n_dim_obs + n_dim_action
self.n_hidden_layers = n_hidden_layers
self.n_hidden_channels = n_hidden_channels
self.normalize_input = normalize_input
self.nonlinearity = nonlinearity
super().__init__()
# No need to pass nonlinearity to obs_mlp because it has no
# hidden layers
self.obs_mlp = MLPBN(
in_size=n_dim_obs,
out_size=n_hidden_channels,
hidden_sizes=[],
normalize_input=normalize_input,
normalize_output=True,
)
self.mlp = MLP(
in_size=n_hidden_channels + n_dim_action,
out_size=1,
hidden_sizes=([self.n_hidden_channels] * (self.n_hidden_layers - 1)),
nonlinearity=nonlinearity,
last_wscale=last_wscale,
)
self.output = self.mlp.output
def forward(self, state, action):
h = self.nonlinearity(self.obs_mlp(state))
h = torch.cat((h, action), dim=1)
return self.mlp(h)
class FCLateActionSAQFunction(nn.Module, StateActionQFunction):
"""Fully-connected (s,a)-input Q-function with late action input.
Actions are not included until the second hidden layer and not normalized.
This architecture is used in the DDPG paper:
http://arxiv.org/abs/1509.02971
Args:
n_dim_obs (int): Number of dimensions of observation space.
n_dim_action (int): Number of dimensions of action space.
n_hidden_channels (int): Number of hidden channels.
n_hidden_layers (int): Number of hidden layers. It must be greater than
or equal to 1.
nonlinearity (callable): Nonlinearity between layers. It must accept a
Variable as an argument and return a Variable with the same shape.
Nonlinearities with learnable parameters such as PReLU are not
supported.
last_wscale (float): Scale of weight initialization of the last layer.
"""
def __init__(
self,
n_dim_obs,
n_dim_action,
n_hidden_channels,
n_hidden_layers,
nonlinearity=F.relu,
last_wscale=1.0,
):
assert n_hidden_layers >= 1
self.n_input_channels = n_dim_obs + n_dim_action
self.n_hidden_layers = n_hidden_layers
self.n_hidden_channels = n_hidden_channels
self.nonlinearity = nonlinearity
super().__init__()
# No need to pass nonlinearity to obs_mlp because it has no
# hidden layers
self.obs_mlp = MLP(
in_size=n_dim_obs, out_size=n_hidden_channels, hidden_sizes=[]
)
self.mlp = MLP(
in_size=n_hidden_channels + n_dim_action,
out_size=1,
hidden_sizes=([self.n_hidden_channels] * (self.n_hidden_layers - 1)),
nonlinearity=nonlinearity,
last_wscale=last_wscale,
)
self.output = self.mlp.output
def forward(self, state, action):
h = self.nonlinearity(self.obs_mlp(state))
h = torch.cat((h, action), dim=1)
return self.mlp(h)
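The late-action wiring shared by the two classes above (observations through the first layer, the action concatenated in afterwards, one Q-value out) can be sketched without the pfrl helpers; a minimal NumPy forward pass in which random matrices stand in for the learned `obs_mlp`/`mlp` weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

n_obs, n_act, n_hidden, batch = 4, 2, 8, 3
# Random stand-ins for the learned weights of obs_mlp and mlp.
W_obs = rng.standard_normal((n_obs, n_hidden))
W_mix = rng.standard_normal((n_hidden + n_act, n_hidden))
W_out = rng.standard_normal((n_hidden, 1))

state = rng.standard_normal((batch, n_obs))
action = rng.standard_normal((batch, n_act))

h = relu(state @ W_obs)                   # first layer sees observations only
h = np.concatenate([h, action], axis=1)   # action joins at the second layer
q = relu(h @ W_mix) @ W_out               # one Q-value per (s, a) pair

print(q.shape)  # (3, 1)
```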

# File: tests/integration_tests/test_undo_redo.py (drahoja9/BI-OOP-CAD, MIT)
import io
from typing import Dict
from app.controller import Controller
from app.shapes import Shape
def test_undo_redo(controller: Controller, shape_commands, stream: io.StringIO, shapes: Dict[str, Shape]):
for command in shape_commands:
controller.execute_command(command)
assert controller._command_engine._redos == []
assert controller._gui._ui.actionUndo.isEnabled() is True
assert controller._gui._ui.actionRedo.isEnabled() is False
for i in range(len(shape_commands)):
controller.undo()
assert controller._command_engine._undos == []
assert controller._command_engine._redos == [
shape_commands[4],
shape_commands[3],
shape_commands[2],
shape_commands[1],
shape_commands[0],
]
assert controller._shapes._shapes == []
assert controller._gui._ui.actionUndo.isEnabled() is False
assert controller._gui._ui.actionRedo.isEnabled() is True
assert controller._gui._ui.history.toPlainText() == ''
res = (
f'{shapes["dot"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n{shapes["rectangle"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n{shapes["rectangle"]}\n{shapes["circle"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n{shapes["rectangle"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n'
f'{shapes["dot"]}\n'
)
assert stream.getvalue() == res
controller.redo()
controller.redo()
assert controller._command_engine._undos == [
shape_commands[0],
shape_commands[1]
]
assert controller._command_engine._redos == [
shape_commands[4],
shape_commands[3],
shape_commands[2]
]
assert controller._shapes._shapes == [
shapes['dot'],
shapes['line']
]
assert controller._gui._ui.actionUndo.isEnabled() is True
assert controller._gui._ui.actionRedo.isEnabled() is True
assert controller._gui._ui.history.toPlainText() == (
f' > {shape_commands[0]}\n{shapes["dot"]}\n'
f' > {shape_commands[1]}\n{shapes["line"]}'
)
res = (
f'{shapes["dot"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n{shapes["rectangle"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n{shapes["rectangle"]}\n{shapes["circle"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n{shapes["rectangle"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n'
f'{shapes["dot"]}\n'
f'{shapes["dot"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n'
)
assert stream.getvalue() == res
controller.execute_command(shape_commands[0])
assert controller._command_engine._undos == [
shape_commands[0],
shape_commands[1],
shape_commands[0]
]
assert controller._command_engine._redos == []
assert controller._shapes._shapes == [
shapes['dot'],
shapes['line'],
shapes['dot']
]
assert controller._gui._ui.actionUndo.isEnabled() is True
assert controller._gui._ui.actionRedo.isEnabled() is False
assert controller._gui._ui.history.toPlainText() == (
f' > {shape_commands[0]}\n{shapes["dot"]}\n'
f' > {shape_commands[1]}\n{shapes["line"]}\n'
f' > {shape_commands[0]}\n{shapes["dot"]}'
)
res = (
f'{shapes["dot"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n{shapes["rectangle"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n{shapes["rectangle"]}\n{shapes["circle"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n{shapes["rectangle"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["polyline"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n'
f'{shapes["dot"]}\n'
f'{shapes["dot"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n'
f'{shapes["dot"]}\n{shapes["line"]}\n{shapes["dot"]}\n'
)
assert stream.getvalue() == res
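The invariants this test exercises — executing a command clears the redo stack, undo pops onto it, redo pops back — can be reproduced with a minimal stand-alone engine (a sketch only, not the app's actual command engine):

```python
class CommandEngine:
    """Minimal sketch of the two-stack undo/redo discipline."""
    def __init__(self):
        self._undos, self._redos = [], []

    def execute(self, command):
        self._undos.append(command)
        self._redos.clear()  # a fresh command invalidates the redo history

    def undo(self):
        self._redos.append(self._undos.pop())

    def redo(self):
        self._undos.append(self._redos.pop())

engine = CommandEngine()
for cmd in ["dot", "line", "circle"]:
    engine.execute(cmd)
engine.undo()
engine.undo()
engine.redo()            # undos: [dot, line], redos: [circle]
engine.execute("dot")    # executing clears the remaining redo
assert engine._undos == ["dot", "line", "dot"]
assert engine._redos == []
```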

# File: classifier/decision_tree/__init__.py (ecohealthalliance/eha_grit, Apache-2.0)
from classifier import sklearn_classifier
from sklearn.tree import DecisionTreeClassifier
def classify(train, test):
tree = DecisionTreeClassifier()
return sklearn_classifier.classify(train, test, tree)
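`classify` above just injects a concrete estimator into a shared routine. A dependency-free sketch of that pattern, with a hypothetical `DummyModel` in place of the sklearn tree and an assumed fit-then-predict contract for the shared helper:

```python
class DummyModel:
    """Hypothetical estimator: always predicts the majority training label."""
    def fit(self, X, y):
        self._label = max(set(y), key=y.count)
        return self

    def predict(self, X):
        return [self._label] * len(X)

def classify(train, test, model):
    # Assumed contract of the shared helper: fit on train, predict on test.
    X_train, y_train = train
    model.fit(X_train, y_train)
    return model.predict(test)

preds = classify(([[0], [1], [2]], ["a", "a", "b"]), [[3], [4]], DummyModel())
print(preds)  # ['a', 'a']
```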

# File: tracking_wo_bnw/src/tracktor/datasets/mot_wrapper.py (OpenSuze/mot_neural_solver, MIT)
import torch
from torch.utils.data import Dataset
from .mot_sequence import MOT17Sequence, MOT20Sequence, MOTSynthSequence
class MOTSynthWrapper(Dataset):
"""A Wrapper for the MOT_Sequence class to return multiple sequences."""
def __init__(self, split, dets, dataloader):
"""Initliazes all subset of the dataset.
Keyword arguments:
split -- the split of the dataset to use
dataloader -- args for the MOT_Sequence dataloader
"""
train_sequences = ['0000', '0001', '0002', '0003', '0004', '0005', '0006', '0007']
test_sequences = ['0008', '0009']
if "train" == split:
sequences = train_sequences
elif "test" == split:
sequences = test_sequences
elif "all" == split:
sequences = train_sequences + test_sequences
elif split in train_sequences + test_sequences:
sequences = [split]
else:
raise NotImplementedError("MOT split not available.")
self._data = []
for s in sequences:
if dets == '17':
self._data.append(MOTSynthSequence(seq_name=s, dets='DPM17', **dataloader))
self._data.append(MOTSynthSequence(seq_name=s, dets='FRCNN17', **dataloader))
self._data.append(MOTSynthSequence(seq_name=s, dets='SDP17', **dataloader))
else:
self._data.append(MOTSynthSequence(seq_name=s, dets=dets, **dataloader))
def __len__(self):
return len(self._data)
def __getitem__(self, idx):
return self._data[idx]
class MOT17Wrapper(Dataset):
"""A Wrapper for the MOT_Sequence class to return multiple sequences."""
def __init__(self, split, dets, dataloader):
"""Initliazes all subset of the dataset.
Keyword arguments:
split -- the split of the dataset to use
dataloader -- args for the MOT_Sequence dataloader
"""
train_sequences = ['MOT17-02', 'MOT17-04', 'MOT17-05', 'MOT17-09', 'MOT17-10', 'MOT17-11', 'MOT17-13']
test_sequences = ['MOT17-01', 'MOT17-03', 'MOT17-06', 'MOT17-07', 'MOT17-08', 'MOT17-12', 'MOT17-14']
if "train" == split:
sequences = train_sequences
elif "test" == split:
sequences = test_sequences
elif "all" == split:
sequences = train_sequences + test_sequences
elif f"MOT17-{split}" in train_sequences + test_sequences:
sequences = [f"MOT17-{split}"]
else:
raise NotImplementedError("MOT split not available.")
self._data = []
for s in sequences:
if dets == '17':
self._data.append(MOT17Sequence(seq_name=s, dets='DPM17', **dataloader))
self._data.append(MOT17Sequence(seq_name=s, dets='FRCNN17', **dataloader))
self._data.append(MOT17Sequence(seq_name=s, dets='SDP17', **dataloader))
else:
self._data.append(MOT17Sequence(seq_name=s, dets=dets, **dataloader))
def __len__(self):
return len(self._data)
def __getitem__(self, idx):
return self._data[idx]
class MOT20Wrapper(MOT17Wrapper):
"""A Wrapper for the MOT_Sequence class to return multiple sequences."""
def __init__(self, split, dataloader):
"""Initliazes all subset of the dataset.
Keyword arguments:
split -- the split of the dataset to use
dataloader -- args for the MOT_Sequence dataloader
"""
train_sequences = ['MOT20-01', 'MOT20-02', 'MOT20-03', 'MOT20-05']
test_sequences = ['MOT20-04', 'MOT20-06', 'MOT20-07', 'MOT20-08']
if "train" == split:
sequences = train_sequences
elif "test" == split:
sequences = test_sequences
elif "all" == split:
sequences = train_sequences + test_sequences
elif f"MOT20-{split}" in train_sequences + test_sequences:
sequences = [f"MOT20-{split}"]
else:
raise NotImplementedError("MOT20 split not available.")
self._data = []
for s in sequences:
self._data.append(MOT20Sequence(seq_name=s, **dataloader))
def __len__(self):
return len(self._data)
def __getitem__(self, idx):
return self._data[idx]
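All three wrappers follow one pattern: map a split name to a list of sequence names, then build one dataset per (sequence, detector) combination. A stand-alone sketch of that selection logic, with tuples standing in for the real sequence datasets:

```python
TRAIN = ["MOT17-02", "MOT17-04"]
TEST = ["MOT17-01", "MOT17-03"]

def select_sequences(split):
    if split == "train":
        return TRAIN
    if split == "test":
        return TEST
    if split == "all":
        return TRAIN + TEST
    if f"MOT17-{split}" in TRAIN + TEST:
        return [f"MOT17-{split}"]
    raise NotImplementedError("MOT split not available.")

class SequenceWrapper:
    def __init__(self, split, dets):
        # dets == '17' expands to the three public MOT17 detector variants
        detectors = ["DPM17", "FRCNN17", "SDP17"] if dets == "17" else [dets]
        self._data = [(s, d) for s in select_sequences(split) for d in detectors]

    def __len__(self):
        return len(self._data)

    def __getitem__(self, idx):
        return self._data[idx]

w = SequenceWrapper("train", "17")
print(len(w), w[0])  # 6 ('MOT17-02', 'DPM17')
```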

# File: nataliestar/__init__.py (Orkestrate/Orkestrate-SC2AI, MIT)
from nataliestar import agent_helper

# File: MLlib/loss_func.py (Player0109/ML-DL-implementation, BSD-3-Clause)
import numpy as np
from MLlib.activations import sigmoid
class MeanSquaredError():
"""
Calculate Mean Squared Error.
"""
@staticmethod
def loss(X, Y, W):
"""
Calculate loss by mean square method.
PARAMETERS
==========
X:ndarray(dtype=float,ndim=1)
input vector
Y:ndarray(dtype=float)
output vector
W:ndarray(dtype=float)
Weights
RETURNS
=======
array of mean squared losses
"""
M = X.shape[0]
return np.sum((np.dot(X, W).T - Y) ** 2) / (2 * M)
@staticmethod
def derivative(X, Y, W):
"""
Calculate derivative for mean square method.
PARAMETERS
==========
X:ndarray(dtype=float,ndim=1)
input vector
Y:ndarray(dtype=float)
output vector
W:ndarray(dtype=float)
Weights
RETURNS
=======
        array of derivatives
"""
M = X.shape[0]
return np.dot((np.dot(X, W).T - Y), X).T / M
class LogarithmicError():
"""
Calculate Logarithmic Error.
"""
@staticmethod
def loss(X, Y, W):
"""
Calculate loss by logarithmic error method.
PARAMETERS
==========
X:ndarray(dtype=float,ndim=1)
input vector
Y:ndarray(dtype=float)
output vector
W:ndarray(dtype=float)
Weights
RETURNS
=======
array of logarithmic losses
"""
M = X.shape[0]
H = sigmoid(np.dot(X, W).T)
return (1/M)*(np.sum((-Y)*np.log(H)-(1-Y)*np.log(1-H)))
@staticmethod
def derivative(X, Y, W):
"""
Calculate derivative for logarithmic error method.
PARAMETERS
==========
X:ndarray(dtype=float,ndim=1)
input vector
Y:ndarray(dtype=float)
output vector
W:ndarray(dtype=float)
Weights
RETURNS
=======
        array of derivatives
"""
M = X.shape[0]
H = sigmoid(np.dot(X, W).T)
return (1/M)*(np.dot(X.T, (H-Y).T))
class AbsoluteError():
"""
Calculate Absolute Error.
"""
@staticmethod
def loss(X, Y, W):
"""
Calculate loss by absolute error method.
PARAMETERS
==========
X:ndarray(dtype=float,ndim=1)
input vector
Y:ndarray(dtype=float)
output vector
W:ndarray(dtype=float)
Weights
RETURNS
=======
array of absolute losses
"""
M = X.shape[0]
return np.sum(np.absolute(np.dot(X, W).T - Y)) / M
@staticmethod
def derivative(X, Y, W):
"""
Calculate derivative for absolute error method.
PARAMETERS
==========
X:ndarray(dtype=float,ndim=1)
input vector
Y:ndarray(dtype=float)
output vector
W:ndarray(dtype=float)
Weights
RETURNS
=======
        array of derivatives
"""
M = X.shape[0]
AbsError = (np.dot(X, W).T-Y)
return np.dot(
np.divide(
AbsError,
np.absolute(AbsError),
out=np.zeros_like(AbsError),
where=(np.absolute(AbsError)) != 0),
X
).T/M
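The closed-form gradient in `MeanSquaredError.derivative` can be sanity-checked against a central finite difference of the loss; a small NumPy check with hand-picked values (the same formulas are inlined here rather than importing the class):

```python
import numpy as np

X = np.array([[1.0, 2.0], [3.0, 4.0]])   # M = 2 samples, 2 features
Y = np.array([1.0, 2.0])
W = np.array([[0.5], [0.5]])
M = X.shape[0]

def loss(W):
    # Same expression as MeanSquaredError.loss
    return np.sum((np.dot(X, W).T - Y) ** 2) / (2 * M)

# Same expression as MeanSquaredError.derivative
grad = np.dot((np.dot(X, W).T - Y), X).T / M

# Central finite-difference approximation of d(loss)/dW
eps = 1e-6
numeric = np.zeros_like(W)
for i in range(W.shape[0]):
    Wp, Wm = W.copy(), W.copy()
    Wp[i, 0] += eps
    Wm[i, 0] -= eps
    numeric[i, 0] = (loss(Wp) - loss(Wm)) / (2 * eps)

print(loss(W))  # 0.625
assert np.allclose(grad, numeric, atol=1e-4)
```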

# File: src/dataset/__init__.py (morgen-stern/squeezeDet, BSD-2-Clause)
from __future__ import absolute_import
from .kitti import kitti
from .pascal_voc import pascal_voc

# File: hello_world.py (Michikouwu/primerrepo3c, MIT)
print('Hello world, I am from third year C')

# File: smart-contracts/wake_up_neo.py (deanpress/neo-local, MIT)
from boa.blockchain.vm.Neo.Runtime import Log
def Main():
Log("Wake up, NEO!") | 20.25 | 45 | 0.703704 | 14 | 81 | 4.071429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135802 | 81 | 4 | 46 | 20.25 | 0.814286 | 0 | 0 | 0 | 0 | 0 | 0.158537 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |

# File: app/migrations/0001_initial.py (vsanasc/Demo-Social-Media-API, MIT)
# Generated by Django 3.2.8 on 2021-10-10 20:32
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Profile',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('birthdate', models.DateField(blank=True, null=True)),
('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Post',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now=True)),
('modified_at', models.DateField(auto_now_add=True)),
('status', models.SmallIntegerField(choices=[(1, 'Active'), (0, 'Inactive'), (-1, 'Deleted')], default=1)),
('text', models.TextField()),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Friendship',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now=True)),
('modified_at', models.DateField(auto_now_add=True)),
('status', models.SmallIntegerField(choices=[(1, 'Active'), (0, 'Inactive'), (-1, 'Deleted')], default=1)),
('state', models.PositiveSmallIntegerField(choices=[(1, 'Pending'), (2, 'Accepted'), (3, 'Rejected')], default=1)),
('receiver', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='friendship_received', to=settings.AUTH_USER_MODEL)),
('requester', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='friendship_requests', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Comment',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now=True)),
('modified_at', models.DateField(auto_now_add=True)),
('status', models.SmallIntegerField(choices=[(1, 'Active'), (0, 'Inactive'), (-1, 'Deleted')], default=1)),
('text', models.TextField()),
('post', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='app.post')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Attachment',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now=True)),
('modified_at', models.DateField(auto_now_add=True)),
('status', models.SmallIntegerField(choices=[(1, 'Active'), (0, 'Inactive'), (-1, 'Deleted')], default=1)),
('file', models.FileField(upload_to='files')),
('is_image', models.BooleanField()),
('post', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='app.post')),
],
options={
'abstract': False,
},
),
]

# File: forum/attachments/admin.py (successIA/Forum, MIT)
from django import forms
from django.contrib import admin
from django.db import models
from .models import Attachment
admin.site.register(Attachment)

# File: aula1406/pessoa.py (fillipesouza/aulasdelogicaprogramacao, MIT)
class Pessoa:
def __init__(self, idade: int):
self._idade = idade
def __str__(self):
return f'idade {self._idade}'

# File: soda/core/soda/execution/derived_formula.py (duyet/soda-core, Apache-2.0)
from typing import Callable, Dict
class DerivedFormula:
def __init__(self, function: Callable, metric_dependencies: Dict[str, "Metric"]):
self.function: Callable = function
self.metric_dependencies: Dict[str, "Metric"] = metric_dependencies
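A usage sketch for `DerivedFormula` (the class is copied in so the snippet is self-contained); plain floats stand in for the resolved `"Metric"` objects the surrounding engine would normally supply:

```python
from typing import Callable, Dict

class DerivedFormula:
    def __init__(self, function: Callable, metric_dependencies: Dict[str, float]):
        self.function: Callable = function
        self.metric_dependencies: Dict[str, float] = metric_dependencies

# Hypothetical derived metric: percentage of invalid rows
invalid_percent = DerivedFormula(
    function=lambda deps: 100.0 * deps["invalid_count"] / deps["row_count"],
    metric_dependencies={"invalid_count": 5.0, "row_count": 200.0},
)
print(invalid_percent.function(invalid_percent.metric_dependencies))  # 2.5
```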

# File: orange3/run_test.py (nikicc/anaconda-recipes, BSD-3-Clause)
import Orange.classification._simple_tree
import Orange.classification._tree_scorers
import Orange.data._contingency
import Orange.data._io
import Orange.data._valuecount
import Orange.data._variable
import Orange.preprocess._discretize
import Orange.preprocess._relieff
import Orange.widgets.utils._grid_density
| 31.3 | 42 | 0.881789 | 40 | 313 | 6.6 | 0.45 | 0.409091 | 0.242424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057508 | 313 | 9 | 43 | 34.777778 | 0.894915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d61f3a564be3dc69c28d17fe1c2f313fcfaffb89 | 45 | py | Python | python/main.py | vwallajabad/VariableX_discord.bot | 280ffeaea305f09af35d60b06e06c2f82dd80bcd | [
"MIT"
] | null | null | null | python/main.py | vwallajabad/VariableX_discord.bot | 280ffeaea305f09af35d60b06e06c2f82dd80bcd | [
"MIT"
] | null | null | null | python/main.py | vwallajabad/VariableX_discord.bot | 280ffeaea305f09af35d60b06e06c2f82dd80bcd | [
"MIT"
] | null | null | null | from bot import discord_code
discord_code()
| 11.25 | 28 | 0.822222 | 7 | 45 | 5 | 0.714286 | 0.628571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 45 | 3 | 29 | 15 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d624dee20476ec576ec34a67545ccd5ebb1ddd4e | 35 | py | Python | EvoCluster/__init__.py | soumitri2001/EvoCluster | 2f8e3f21c7045478394e7e02a22835f7c184c0c7 | [
"Apache-2.0"
] | 12 | 2019-12-21T16:29:15.000Z | 2022-01-03T01:24:14.000Z | EvoCluster/__init__.py | soumitri2001/EvoCluster | 2f8e3f21c7045478394e7e02a22835f7c184c0c7 | [
"Apache-2.0"
] | 3 | 2020-06-07T07:52:40.000Z | 2021-10-08T16:13:49.000Z | EvoCluster/__init__.py | RaneemQaddoura/EvoCluster | 001dfb4c1f00db84ad1c2f2228eed6112d7e65b1 | [
"Apache-2.0"
] | 14 | 2019-12-28T19:55:48.000Z | 2021-12-31T14:46:03.000Z | from ._evocluster import EvoCluster | 35 | 35 | 0.885714 | 4 | 35 | 7.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 35 | 1 | 35 | 35 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c393c3937c55da45a48e1e80ff79cc4b205719cf | 23 | py | Python | accounts/signals/__init__.py | tavoxr/django-crm | d6ce34b8e8e93c3ae9853df34641868d4c891125 | [
"MIT"
] | null | null | null | accounts/signals/__init__.py | tavoxr/django-crm | d6ce34b8e8e93c3ae9853df34641868d4c891125 | [
"MIT"
] | null | null | null | accounts/signals/__init__.py | tavoxr/django-crm | d6ce34b8e8e93c3ae9853df34641868d4c891125 | [
"MIT"
] | null | null | null | from .register import * | 23 | 23 | 0.782609 | 3 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c39c5e4046f28fcc4e91ec74339bc60025fd922f | 151 | py | Python | Bronze/Bronze_V/24860.py | masterTyper/baekjoon_solved_ac | b9ce14d9bdaa5b5b06735ad075fb827de9f44b9c | [
"MIT"
] | null | null | null | Bronze/Bronze_V/24860.py | masterTyper/baekjoon_solved_ac | b9ce14d9bdaa5b5b06735ad075fb827de9f44b9c | [
"MIT"
] | null | null | null | Bronze/Bronze_V/24860.py | masterTyper/baekjoon_solved_ac | b9ce14d9bdaa5b5b06735ad075fb827de9f44b9c | [
"MIT"
] | null | null | null | Vk, Jk = map(int, input().split())
Vl, Jl = map(int, input().split())
Vh, Dh, Jh = map(int, input().split())
print((Vk * Jk + Vl * Jl) * Vh * Dh * Jh) | 30.2 | 41 | 0.536424 | 27 | 151 | 3 | 0.444444 | 0.222222 | 0.407407 | 0.592593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192053 | 151 | 5 | 41 | 30.2 | 0.663934 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.25 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c3c34d9e8a13a9e577ba42cddde505295537bb04 | 39 | py | Python | overload/__init__.py | Corwind/python-overload | 87d8a13273316a1bd02e0be7ff884ab77dc71dd2 | [
"Beerware"
] | 1 | 2015-12-22T15:40:23.000Z | 2015-12-22T15:40:23.000Z | overload/__init__.py | Corwind/python-overload | 87d8a13273316a1bd02e0be7ff884ab77dc71dd2 | [
"Beerware"
] | null | null | null | overload/__init__.py | Corwind/python-overload | 87d8a13273316a1bd02e0be7ff884ab77dc71dd2 | [
"Beerware"
] | null | null | null | from overload.overload import overload
| 19.5 | 38 | 0.871795 | 5 | 39 | 6.8 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7f0f15d90612f19b4953ba9fc5bc5676e8d6ff0c | 10,610 | py | Python | amiet_tools/functions/flat_plate_response.py | Toktom/amiet_tools | e4104db9a0c3784159378f680ebb89caa5ada053 | [
"BSD-3-Clause"
] | null | null | null | amiet_tools/functions/flat_plate_response.py | Toktom/amiet_tools | e4104db9a0c3784159378f680ebb89caa5ada053 | [
"BSD-3-Clause"
] | null | null | null | amiet_tools/functions/flat_plate_response.py | Toktom/amiet_tools | e4104db9a0c3784159378f680ebb89caa5ada053 | [
"BSD-3-Clause"
] | null | null | null | """Author: Fabio Casagrande Hirono"""
import numpy as np
from .fresnel import fr_int, fr_int_cc
import scipy.special as ss
def delta_p(rho0, b, w0, Kx, ky, xy, Mach):
"""
Calculates the pressure jump response 'delta_p' for a single turbulent gust.
Parameters
----------
rho0 : float
Density of air.
b : float
Airfoil semichord.
w0 : float
Gust amplitude.
Kx : float
Chordwise turbulent gust wavenumber.
ky : float
Spanwise turbulent gust wavenumber.
xy : ({2, 3}, Ny, Nx) array_like
2D array containing (x, y) coordinates of airfoil surface mesh.
Mach : float
Mean flow Mach number.
Returns
-------
delta_p : (Ny, Nx) array_like
Surface pressure jump over airfoil surface mesh in response to a single
turbulent gust with wavenumbers (Kx, ky) and amplitude 'w0'.
"""
# pressure difference over the whole airfoil surface
delta_p = np.zeros(xy[0].shape, 'complex')
if xy.ndim == 3:
# unsteady lift over the chord line (mid-span)
g_x = np.zeros(xy[0][0].shape, 'complex')
# calculates the unsteady lift over the chord
g_x = g_LE(xy[0][0], Kx, ky, Mach, b)
# broadcasts a copy of 'g_x' to 'delta_p'
delta_p = g_x[np.newaxis, :]
elif xy.ndim == 2:
# unsteady lift over the chord line (mid-span)
g_x = np.zeros(xy[0].shape, 'complex')
# calculates the unsteady lift over the chord
delta_p = g_LE(xy[0], Kx, ky, Mach, b)
# adds the constants and the 'k_y' oscillating component
delta_p = 2*np.pi*rho0*w0*delta_p*np.exp(-1j*ky*xy[1])
return delta_p
def g_LE(xs, Kx, ky, Mach, b):
"""
Airfoil non-dimensional chordwise pressure jump in response to a single gust.
Parameters
----------
xs : (Ny, Nx) or (Nx,) array_like
Airfoil surface mesh chordwise coordinates.
Kx : float
Chordwise turbulent gust wavenumber.
ky : float
Spanwise turbulent gust wavenumber.
Mach : float
Mean flow Mach number.
b : float
Airfoil semichord.
Returns
-------
g_LE : (Ny, Nx) array_like
Non-dimensional chordwise surface pressure jump over airfoil surface
mesh in response to a single turbulent gust with wavenumbers (Kx, ky)
and amplitude 'w0'.
Notes
-----
This function provides the airfoil responses for either subcritical or
supercritical gusts. For critical gusts, the airfoil response is
interpolated from slightly sub- and slightly supercritical responses.
"""
beta = np.sqrt(1-Mach**2)
ky_critical = Kx*Mach/beta
# p_diff < 0: supercritical
# p_diff > 0: subcritical
p_diff = (np.abs(ky) - ky_critical)/ky_critical
# supercritical gusts
if p_diff < -1e-3:
return g_LE_super(xs, Kx, ky, Mach, b)
elif p_diff > 1e-3:
return g_LE_sub(xs, Kx, ky, Mach, b)
else:
# get gusts 1% above and below critical ky
ky_sp = ky*0.99
ky_sb = ky*1.01
g_sp = g_LE_super(xs, Kx, ky_sp, Mach, b)
g_sb = g_LE_sub(xs, Kx, ky_sb, Mach, b)
return (g_sp + g_sb)/2.
def g_LE_super(xs, Kx, ky, Mach, b):
"""
Returns airfoil non-dimensional pressure jump for supercritical gusts.
Parameters
----------
xs : (Ny, Nx) or (Nx,) array_like
Airfoil surface mesh chordwise coordinates.
Kx : float
Chordwise turbulent gust wavenumber.
ky : float
Spanwise turbulent gust wavenumber.
Mach : float
Mean flow Mach number.
b : float
Airfoil semichord.
Returns
-------
g_LE_super : (Ny, Nx) array_like
Non-dimensional chordwise surface pressure jump over airfoil surface
mesh in response to a single supercritical turbulent gust with
wavenumbers (Kx, ky)
Notes
-----
    This function includes two terms of the Schwarzschild technique; the first
    term contains the solution for an infinite-chord airfoil with a leading edge
    but no trailing edge, while the second term contains a correction factor
    for an infinite-chord airfoil with a trailing edge but no leading edge.
"""
beta = np.sqrt(1-Mach**2)
mu_h = Kx*b/(beta**2)
mu_a = mu_h*Mach
kappa = np.sqrt(mu_a**2 - (ky*b/beta)**2)
g1_sp = (np.exp(-1j*((kappa - mu_a*Mach)*((xs/b) + 1) + np.pi/4))
/ (np.pi*np.sqrt(np.pi*((xs/b) + 1)*(Kx*b + (beta**2)*kappa))))
g2_sp = -(np.exp(-1j*((kappa - mu_a*Mach)*((xs/b) + 1) + np.pi/4))
* (1-(1+1j)*fr_int_cc(2*kappa*(1-xs/b)))
/ (np.pi*np.sqrt(2*np.pi*(Kx*b + (beta**2)*kappa))))
return g1_sp + g2_sp
def g_LE_sub(xs, Kx, ky, Mach, b):
"""
Returns airfoil non-dimensional pressure jump for subcritical gusts.
Parameters
----------
xs : (Ny, Nx) or (Nx,) array_like
Airfoil surface mesh chordwise coordinates.
Kx : float
Chordwise turbulent gust wavenumber.
ky : float
Spanwise turbulent gust wavenumber.
Mach : float
Mean flow Mach number.
b : float
Airfoil semichord.
Returns
-------
g_LE_sub : (Ny, Nx) array_like
Non-dimensional chordwise surface pressure jump over airfoil surface
mesh in response to a single subcritical turbulent gust with
wavenumbers (Kx, ky)
Notes
-----
    This function includes two terms of the Schwarzschild technique; the first
    term contains the solution for an infinite-chord airfoil with a leading edge
    but no trailing edge, while the second term contains a correction factor
    for an infinite-chord airfoil with a trailing edge but no leading edge.
"""
beta = np.sqrt(1-Mach**2)
mu_h = Kx*b/(beta**2)
mu_a = mu_h*Mach
kappa1 = np.sqrt(((ky*b/beta)**2) - mu_a**2)
g1_sb = (np.exp((-kappa1 + 1j*mu_a*Mach)*((xs/b) + 1))*np.exp(-1j*np.pi/4)
/ (np.pi*np.sqrt(np.pi*((xs/b) + 1)*(Kx*b - 1j*(beta**2)*kappa1))))
g2_sb = -(np.exp((-kappa1 + 1j*mu_a*Mach)*((xs/b) + 1))
* np.exp(-1j*np.pi/4)*(1 - ss.erf(2*kappa1*(1-xs/b)))
/ (np.pi*np.sqrt(2*np.pi*(Kx*b - 1j*(beta**2)*kappa1))))
return g1_sb + g2_sb
def L_LE(x, sigma, Kx, ky, Mach, b):
"""
    Returns the effective lift functions, i.e. the chordwise-integrated surface pressures.
Parameters
----------
x : (M,) array_like
1D array of observer locations 'x'-coordinates
sigma : (M,) array_like
1D array of observer locations flow-corrected distances
Kx : float
Chordwise turbulent gust wavenumber.
ky : float
Spanwise turbulent gust wavenumber.
Mach : float
Mean flow Mach number.
b : float
Airfoil semichord.
Returns
-------
L_LE : (M,) array_like
Effective lift function for all observer locations.
Notes
-----
    These functions are the chordwise integrated surface pressures, and are
    parts of the far-field-approximated model for airfoil-turbulence noise.
"""
beta = np.sqrt(1-Mach**2)
ky_critical = Kx*Mach/beta
# percentage difference in ky
# p_diff < 0: supercritical / p_diff > 0: subcritical
p_diff = (np.abs(ky) - ky_critical)/ky_critical
# supercritical gusts
if p_diff < -1e-3:
return L_LE_super(x, sigma, Kx, ky, Mach, b)
elif p_diff > 1e-3:
return L_LE_sub(x, sigma, Kx, ky, Mach, b)
else:
# get gusts 1% above and below critical ky
ky_sp = ky*0.99
ky_sb = ky*1.01
L_sp = L_LE_super(x, sigma, Kx, ky_sp, Mach, b)
L_sb = L_LE_sub(x, sigma, Kx, ky_sb, Mach, b)
return (L_sp + L_sb)/2.
def L_LE_super(x, sigma, Kx, Ky, Mach, b):
"""
Returns the effective lift functions for supercritical gusts
Parameters
----------
x : (M,) array_like
1D array of observer locations 'x'-coordinates
sigma : (M,) array_like
1D array of observer locations flow-corrected distances
Kx : float
Chordwise turbulent gust wavenumber.
ky : float
Spanwise turbulent gust wavenumber.
Mach : float
Mean flow Mach number.
b : float
Airfoil semichord.
Returns
-------
Notes
-----
    These functions are the chordwise integrated surface pressures, and are
    parts of the far-field-approximated model for airfoil-turbulence noise.
"""
beta = np.sqrt(1-Mach**2)
mu_h = Kx*b/(beta**2)
mu_a = mu_h*Mach
kappa = np.sqrt(mu_a**2 - (Ky*b/beta)**2)
H1 = kappa - mu_a*x/sigma
    H2 = mu_a*(Mach - x/sigma) - np.pi/4
L1 = ((1/np.pi)*np.sqrt(2/((Kx*b + (beta**2)*kappa)*H1))
* fr_int_cc(2*H1)*np.exp(1j*H2))
L2 = ((np.exp(1j*H2)
/ (np.pi*H1*np.sqrt(2*np.pi*(Kx*b + (beta**2)*kappa))))
* (1j*(1 - np.exp(-2j*H1))
+ (1 - 1j)*(fr_int_cc(4*kappa)
- np.sqrt(2*kappa/(kappa + mu_a*x/sigma))
* np.exp(-2j*H1)
* fr_int_cc(2*(kappa + mu_a*x/sigma)))))
return L1+L2
def L_LE_sub(x, sigma, Kx, Ky, Mach, b):
"""
Returns the effective lift functions for subcritical gusts
Parameters
----------
x : (M,) array_like
1D array of observer locations 'x'-coordinates
sigma : (M,) array_like
1D array of observer locations flow-corrected distances
Kx : float
Chordwise turbulent gust wavenumber.
ky : float
Spanwise turbulent gust wavenumber.
Mach : float
Mean flow Mach number.
b : float
Airfoil semichord.
Returns
-------
Notes
-----
    These functions are the chordwise integrated surface pressures, and are
    parts of the far-field-approximated model for airfoil-turbulence noise.
"""
beta = np.sqrt(1-Mach**2)
mu_h = Kx*b/(beta**2)
mu_a = mu_h*Mach
kappa1 = np.sqrt((Ky*b/beta)**2 - (mu_a**2))
    H2 = mu_a*(Mach - x/sigma) - np.pi/4
H3 = kappa1 - 1j*mu_a*x/sigma
L1 = ((1/np.pi)*np.sqrt(2/((Kx*b - 1j*(beta**2)*kappa1)
* (1j*kappa1 - mu_a*x/sigma)))
* fr_int(2*(1j*kappa1 - mu_a*x/sigma))*np.exp(1j*H2))
L2 = ((1j*np.exp(1j*H2)
/ (np.pi*H3*np.sqrt(2*np.pi*(Kx*b - 1j*(beta**2)*kappa1))))
* (1 - np.exp(-2*H3) - ss.erf(np.sqrt(4*kappa1))
+ 2*np.exp(-2*H3)*np.sqrt(kappa1/(1j*kappa1 + mu_a*x/sigma))
* fr_int(2*(1j*kappa1 - mu_a*x/sigma))))
return L1+L2
| 26.792929 | 86 | 0.591235 | 1,581 | 10,610 | 3.87413 | 0.120177 | 0.010776 | 0.052571 | 0.017633 | 0.804408 | 0.77649 | 0.749388 | 0.739102 | 0.733878 | 0.715265 | 0 | 0.026565 | 0.283318 | 10,610 | 395 | 87 | 26.860759 | 0.778932 | 0.534873 | 0 | 0.329897 | 0 | 0 | 0.005007 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072165 | false | 0 | 0.030928 | 0 | 0.216495 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
613c6cff5b0fac686b1c4e8b70b149f5f25a2861 | 199 | py | Python | tomo_encoders/tasks/__init__.py | arshadzahangirchowdhury/TomoEncoders | 9c2b15fd515d864079f198546821faee5d78df17 | [
"BSD-3-Clause"
] | null | null | null | tomo_encoders/tasks/__init__.py | arshadzahangirchowdhury/TomoEncoders | 9c2b15fd515d864079f198546821faee5d78df17 | [
"BSD-3-Clause"
] | null | null | null | tomo_encoders/tasks/__init__.py | arshadzahangirchowdhury/TomoEncoders | 9c2b15fd515d864079f198546821faee5d78df17 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
"""
from tomo_encoders.tasks import *
from tomo_encoders.tasks.void_metrology import VoidMetrology
from tomo_encoders.tasks import void_mapper
| 15.307692 | 60 | 0.743719 | 27 | 199 | 5.296296 | 0.592593 | 0.167832 | 0.335664 | 0.440559 | 0.377622 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011561 | 0.130653 | 199 | 12 | 61 | 16.583333 | 0.815029 | 0.21608 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
613f826bfdedb2b95034a8f4027632b2d39db56d | 434 | py | Python | hknweb/views/indrel.py | Boomaa23/hknweb | 2c2ce38b5f1c0c6e04ba46282141557357bd5326 | [
"MIT"
] | 20 | 2018-01-07T02:15:43.000Z | 2021-09-15T04:25:50.000Z | hknweb/views/indrel.py | Boomaa23/hknweb | 2c2ce38b5f1c0c6e04ba46282141557357bd5326 | [
"MIT"
] | 292 | 2018-02-01T18:31:18.000Z | 2022-03-30T22:15:08.000Z | hknweb/views/indrel.py | Boomaa23/hknweb | 2c2ce38b5f1c0c6e04ba46282141557357bd5326 | [
"MIT"
] | 85 | 2017-11-13T06:33:13.000Z | 2022-03-30T20:32:55.000Z | from django.shortcuts import render
def index(request):
return render(request, "indrel/index.html")
def resume_book(request):
return render(request, "indrel/resume_book.html")
def infosessions(request):
return render(request, "indrel/infosessions.html")
def career_fair(request):
return render(request, "indrel/career_fair.html")
def contact_us(request):
return render(request, "indrel/contact_us.html")
| 19.727273 | 54 | 0.746544 | 56 | 434 | 5.678571 | 0.321429 | 0.204403 | 0.298742 | 0.408805 | 0.503145 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135945 | 434 | 21 | 55 | 20.666667 | 0.848 | 0 | 0 | 0 | 0 | 0 | 0.251152 | 0.211982 | 0 | 0 | 0 | 0 | 0 | 1 | 0.454545 | false | 0 | 0.090909 | 0.454545 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
61820a8103b4521fcbc8ab56387714b7d7e7c231 | 287 | py | Python | test/test_utils.py | droope/netlib | a3107474f9f336f28dc195f1406a4e035aa51c84 | [
"MIT"
] | null | null | null | test/test_utils.py | droope/netlib | a3107474f9f336f28dc195f1406a4e035aa51c84 | [
"MIT"
] | null | null | null | test/test_utils.py | droope/netlib | a3107474f9f336f28dc195f1406a4e035aa51c84 | [
"MIT"
] | null | null | null | from netlib import utils
def test_hexdump():
assert utils.hexdump("one\0"*10)
def test_cleanBin():
assert utils.cleanBin("one") == "one"
assert utils.cleanBin("\00ne") == ".ne"
assert utils.cleanBin("\nne") == "\nne"
assert utils.cleanBin("\nne", True) == ".ne"
| 20.5 | 48 | 0.623693 | 37 | 287 | 4.783784 | 0.432432 | 0.310734 | 0.429379 | 0.248588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0.181185 | 287 | 13 | 49 | 22.076923 | 0.731915 | 0 | 0 | 0 | 0 | 0 | 0.118881 | 0 | 0 | 0 | 0 | 0 | 0.625 | 1 | 0.25 | true | 0 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f66e7dea2ca1fcc00ca4a17791b761b8b516fc06 | 9,081 | py | Python | FEM/EulerBernoulliBeam.py | ZibraMax/FEM | b868c60408a4f83dec4bb424d66be0b20e2ac71b | [
"MIT"
] | 10 | 2021-03-21T18:38:40.000Z | 2022-02-22T01:32:06.000Z | FEM/EulerBernoulliBeam.py | ZibraMax/FEM | b868c60408a4f83dec4bb424d66be0b20e2ac71b | [
"MIT"
] | null | null | null | FEM/EulerBernoulliBeam.py | ZibraMax/FEM | b868c60408a4f83dec4bb424d66be0b20e2ac71b | [
"MIT"
] | 1 | 2022-02-08T04:40:59.000Z | 2022-02-08T04:40:59.000Z | """Euler Bernoulli Beam implementation [WIP]
"""
from .Solvers import NoLineal
from .Elements.E1D.EulerBernoulliElement import EulerBernoulliElement
from .Core import Core, Geometry
from tqdm import tqdm
import numpy as np
import matplotlib.pyplot as plt
import logging
class EulerBernoulliBeam(Core):
    """Creates an Euler Bernoulli beam problem
    Args:
        geometry (Geometry): 1D 2 variables per node problem geometry. Geometry must have Euler Bernoulli elements.
        EI (float): Young's modulus multiplied by second moment of area (inertia).
        cf (float, optional): Soil coefficient. Defaults to 0.
    """
def __init__(self, geometry: Geometry, EI: float, cf: float = 0, f: float = 0) -> None:
        """Creates an Euler Bernoulli beam problem
        Args:
            geometry (Geometry): 1D 2 variables per node problem geometry. Geometry must have Lineal elements.
            EI (float): Young's modulus multiplied by second moment of area (inertia).
            cf (float, optional): Soil coefficient. Defaults to 0.
        """
self.a = EI
self.f = f
self.cf = cf
if isinstance(EI, float):
self.a = lambda x: EI
if isinstance(f, float):
self.f = lambda x: f
        if isinstance(cf, float):
            self.cf = lambda x: cf
if geometry.nvn == 1:
            logging.warning(
                'Boundary conditions lost, please use a geometry with 2 variables per node (nvn=2)')
Core.__init__(self, geometry)
for i in range(len(self.elements)):
self.elements[i] = EulerBernoulliElement(
self.elements[i].coords, self.elements[i].gdl)
def elementMatrices(self) -> None:
        """Calculate the element matrices using Gauss Legendre quadrature.
"""
for e in tqdm(self.elements, unit='Element'):
_x, _p = e.T(e.Z.T)
_h = e.hermit(e.Z.T)
jac, dpz = e.J(e.Z.T)
detjac = np.linalg.det(jac)
# _j = np.linalg.inv(jac)
# dpx = _j @ dpz
_dh = e.dhermit(e.Z.T)
for i in range(e.n):
for j in range(e.n):
for k in range(len(e.Z)):
# + self.c(_x[k])*_p[k][i]*_p[k][j]
e.Ke[i, j] += (self.a(_x[k])*_dh[1][i][k]
* _dh[1][j][k]+self.cf(_x[k, 0])*_h[k][i]*_h[k][j])*detjac[k]*e.W[k]
for k in range(len(e.Z)):
e.Fe[i][0] += self.f(_x[k])*_h[k][i]*detjac[k]*e.W[k]
def postProcess(self, plot=True) -> None:
"""Post process the solution. Shows graphs of displacement, rotation, shear and moment.
"""
X = []
U1 = []
U2 = []
U3 = []
U4 = []
for e in self.elements:
_x, _u, du = e.giveSolution(True)
X += _x.T[0].tolist()
U1 += _u.tolist()
U2 += (du[:, 0]).tolist()
U3 += (du[:, 1]*self.a(_x.T[0])).tolist()
U4 += (du[:, 2]*self.a(_x.T[0])).tolist()
if plot:
fig = plt.figure()
ax1 = fig.add_subplot(2, 2, 1)
ax2 = fig.add_subplot(2, 2, 2)
ax3 = fig.add_subplot(2, 2, 3)
ax4 = fig.add_subplot(2, 2, 4)
ax1.plot(X, U1)
ax1.grid()
ax2.plot(X, U2)
ax2.grid()
ax3.plot(X, U3)
ax3.grid()
ax4.plot(X, U4)
ax4.grid()
ax1.set_title(r'$U(x)$')
ax2.set_title(r'$\frac{dU}{dx}$')
ax3.set_title(r'$\frac{d^2U}{dx^2}$')
ax4.set_title(r'$\frac{d^3U}{dx^3}$')
return X, U1, U2, U3, U4
class EulerBernoulliBeamNonLineal(Core):
    """Creates an Euler Bernoulli beam problem
    Args:
        geometry (Geometry): 1D 2 variables per node problem geometry. Geometry must have Euler Bernoulli elements.
        EI (float): Young's modulus multiplied by second moment of area (inertia).
        cf (float, optional): Soil coefficient. Defaults to 0.
    """
def __init__(self, geometry: Geometry, EI: float, EA: float, fx: float = 0, fy: float = 0) -> None:
        """Creates an Euler Bernoulli beam problem
        Args:
            geometry (Geometry): 1D 2 variables per node problem geometry. Geometry must have Lineal elements.
            EI (float): Young's modulus multiplied by second moment of area (inertia).
            cf (float, optional): Soil coefficient. Defaults to 0.
        """
        self.Axx = EA
        self.Dxx = EI
self.fx0 = fx
self.fy0 = fy
        if isinstance(EA, float):
            self.Axx = lambda x: EA
        if isinstance(EI, float):
            self.Dxx = lambda x: EI
if isinstance(fx, float):
self.fx0 = lambda x: fx
if isinstance(fy, float):
self.fy0 = lambda x: fy
if geometry.nvn == 1:
            logging.warning(
                'Boundary conditions lost, please use a geometry with 2 variables per node (nvn=2)')
Core.__init__(self, geometry, solver=NoLineal.LoadControl)
for i in range(len(self.elements)):
self.elements[i] = EulerBernoulliElement(
self.elements[i].coords, self.elements[i].gdl, nvn=3)
def elementMatrices(self) -> None:
        """Calculate the element matrices using Gauss Legendre quadrature.
"""
for e in tqdm(self.elements, unit='Element'):
en2 = int(e.n/2)
k11 = np.zeros([2, 2])
k12 = np.zeros([2, 4])
k22 = np.zeros([4, 4])
f1 = np.zeros([2, 1])
f2 = np.zeros([4, 1])
            # Full integration
_x, _p = e.T(e.Z.T)
_h = e.hermit(e.Z.T)
jac, dpz = e.J(e.Z.T)
detjac = np.linalg.det(jac)
_j = np.linalg.inv(jac)
dpx = _j @ dpz
_dh = e.dhermit(e.Z.T)
for i in range(4):
for j in range(4):
for k in range(len(e.Z)):
k22[i, j] += (self.Dxx(_x[k])*_dh[1][i][k]
* _dh[1][j][k])*detjac[k]*e.W[k]
if i < 2 and j < 2:
k11[i, j] += (self.Axx(_x[k])*dpx[k][0][i]
* dpx[k][0][j])*detjac[k]*e.W[k]
for k in range(len(e.Z)):
if i < 2:
                        f1[i][0] += self.fx0(_x[k])*_p[k][i]*detjac[k]*e.W[k]
                    f2[i][0] += self.fy0(_x[k])*_h[k][i]*detjac[k]*e.W[k]
            # Reduced integration
_x, _p = e.T(e.Zr.T)
_h = e.hermit(e.Zr.T)
jac, dpz = e.J(e.Zr.T)
detjac = np.linalg.det(jac)
_j = np.linalg.inv(jac)
dpx = _j @ dpz
_dh = e.dhermit(e.Zr.T)
for i in range(4):
for j in range(4):
for k in range(len(e.Zr)):
ue = e.Ue.flatten()[[1, 2, 4, 5]]
dw = ue @ _dh[0, :, k].T
# + self.c(_x[k])*_p[k][i]*_p[k][j]
if i < 2:
k12[i, j] += 1.0/2.0*(self.Axx(_x[k])*dw*dpx[k][0][i]
* _dh[0][j][k])*detjac[k]*e.Wr[k]
k22[i, j] += 1.0/2.0*(self.Axx(_x[k])*dw**2*_dh[0][i][k]
* _dh[0][j][k])*detjac[k]*e.Wr[k]
e.Ke[np.ix_([0, 3], [0, 3])] = k11
e.Ke[np.ix_([1, 2, 4, 5], [1, 2, 4, 5])] = k22
e.Ke[np.ix_([0, 3], [1, 2, 4, 5])] = k12
e.Ke[np.ix_([1, 2, 4, 5], [0, 3])] = 2*k12.T
e.Fe[[0, 3]] = f1
e.Fe[[1, 2, 4, 5]] = f2
def postProcess(self, plot=True) -> None:
"""Post process the solution. Shows graphs of displacement, rotation, shear and moment.
"""
X = []
U1 = []
U2 = []
U3 = []
U4 = []
for e in self.elements:
ueflex = e.Ue.flatten()[[1, 2, 4, 5]]
ueax = e.Ue.flatten()[[0, 3]]
e.Ue = ueflex
_x, _u, du = e.giveSolution(True)
X += _x.T[0].tolist()
U1 += _u.tolist()
U2 += (du[:, 0]).tolist()
U3 += (du[:, 1]).tolist()
U4 += (du[:, 2]).tolist()
if plot:
fig = plt.figure()
ax1 = fig.add_subplot(2, 2, 1)
ax2 = fig.add_subplot(2, 2, 2)
ax3 = fig.add_subplot(2, 2, 3)
ax4 = fig.add_subplot(2, 2, 4)
ax1.plot(X, U1)
ax1.grid()
ax2.plot(X, U2)
ax2.grid()
ax3.plot(X, U3)
ax3.grid()
ax4.plot(X, U4)
ax4.grid()
ax1.set_title(r'$U(x)$')
ax2.set_title(r'$\frac{dU}{dx}$')
ax3.set_title(r'$\frac{d^2U}{dx^2}$')
ax4.set_title(r'$\frac{d^3U}{dx^3}$')
return X, U1, U2, U3, U4
| 37.524793 | 115 | 0.466358 | 1,266 | 9,081 | 3.272512 | 0.14376 | 0.021965 | 0.005793 | 0.027034 | 0.8028 | 0.758146 | 0.746078 | 0.73232 | 0.727009 | 0.719286 | 0 | 0.043966 | 0.381346 | 9,081 | 241 | 116 | 37.680498 | 0.693485 | 0.185332 | 0 | 0.59887 | 0 | 0 | 0.040432 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033898 | false | 0 | 0.039548 | 0 | 0.096045 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9ca02f28fab1aafa754bf16cfe23932280756aab | 131 | py | Python | backend/site/MWS_Backend/apiserver/models/__init__.py | singapore19/team-3 | f021dc98f809faa62932be09c0ed00bec2aa5af3 | [
"Net-SNMP",
"Xnet",
"RSA-MD"
] | null | null | null | backend/site/MWS_Backend/apiserver/models/__init__.py | singapore19/team-3 | f021dc98f809faa62932be09c0ed00bec2aa5af3 | [
"Net-SNMP",
"Xnet",
"RSA-MD"
] | null | null | null | backend/site/MWS_Backend/apiserver/models/__init__.py | singapore19/team-3 | f021dc98f809faa62932be09c0ed00bec2aa5af3 | [
"Net-SNMP",
"Xnet",
"RSA-MD"
] | null | null | null | from .job import *
from .leave import *
from .trip import *
from .tripjob import *
# from .driver import *
# from .staff import *
| 16.375 | 23 | 0.687023 | 18 | 131 | 5 | 0.444444 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206107 | 131 | 7 | 24 | 18.714286 | 0.865385 | 0.320611 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9cf03cad5066a7818f34cc7c2acaf0bc3c4efc29 | 71 | py | Python | autox/autox_ts/models/__init__.py | OneToolsCollection/4paradigm-AutoX | f8e838021354de17f5bb9bc44e9d68d12dda6427 | [
"Apache-2.0"
] | null | null | null | autox/autox_ts/models/__init__.py | OneToolsCollection/4paradigm-AutoX | f8e838021354de17f5bb9bc44e9d68d12dda6427 | [
"Apache-2.0"
] | null | null | null | autox/autox_ts/models/__init__.py | OneToolsCollection/4paradigm-AutoX | f8e838021354de17f5bb9bc44e9d68d12dda6427 | [
"Apache-2.0"
] | null | null | null | from .ts_lgb_model import ts_lgb_model
from .cnn_model import cnn_model | 35.5 | 38 | 0.873239 | 14 | 71 | 4 | 0.428571 | 0.178571 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098592 | 71 | 2 | 39 | 35.5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9cf1cae3e6c00047801eefca96572219ad14c597 | 151 | py | Python | movement_assistant/bots/telebot/deletecall.py | davidwickerhf/movement-assistant | 570380adf440faa36993ab8f52e386584a90fec8 | [
"MIT"
] | 3 | 2020-06-11T13:06:21.000Z | 2020-06-11T21:35:41.000Z | movement_assistant/bots/telebot/deletecall.py | davidwickerhf/movement-assistant | 570380adf440faa36993ab8f52e386584a90fec8 | [
"MIT"
] | 25 | 2020-04-29T16:44:05.000Z | 2020-06-11T08:18:47.000Z | movement_assistant/bots/telebot/deletecall.py | davidwickerhf/fff-transparency-wg | 570380adf440faa36993ab8f52e386584a90fec8 | [
"MIT"
] | 1 | 2020-12-23T09:33:05.000Z | 2020-12-23T09:33:05.000Z | from movement_assistant.bots.telebot import *
def delete_call(update, context):
botupdate = interface.authenticate(update, context, 0, True)
| 25.166667 | 64 | 0.754967 | 18 | 151 | 6.222222 | 0.888889 | 0.232143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007813 | 0.152318 | 151 | 5 | 65 | 30.2 | 0.867188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
146501d0f9514561351c68fc963a2fdaf36f177b | 8,685 | py | Python | tests/queryset/test_queryset_aggregation.py | shellcodesniper/mongoengine | d76cb345be98045cde0fa078569cc8021c0d0162 | [
"MIT"
] | 3 | 2019-06-18T07:54:38.000Z | 2022-01-22T23:27:41.000Z | tests/queryset/test_queryset_aggregation.py | shellcodesniper/mongoengine | d76cb345be98045cde0fa078569cc8021c0d0162 | [
"MIT"
] | 1 | 2022-01-22T23:27:23.000Z | 2022-01-22T23:27:23.000Z | tests/queryset/test_queryset_aggregation.py | shellcodesniper/mongoengine | d76cb345be98045cde0fa078569cc8021c0d0162 | [
"MIT"
] | null | null | null | import unittest
import warnings
from pymongo.read_preferences import ReadPreference
from mongoengine import *
from tests.utils import MongoDBTestCase
class TestQuerysetAggregate(MongoDBTestCase):
def test_read_preference_aggregation_framework(self):
class Bar(Document):
txt = StringField()
meta = {"indexes": ["txt"]}
# Aggregates with read_preference
pipeline = []
bars = Bar.objects.read_preference(
ReadPreference.SECONDARY_PREFERRED
).aggregate(pipeline)
assert (
bars._CommandCursor__collection.read_preference
== ReadPreference.SECONDARY_PREFERRED
)
def test_queryset_aggregation_framework(self):
class Person(Document):
name = StringField()
age = IntField()
Person.drop_collection()
p1 = Person(name="Isabella Luanna", age=16)
p2 = Person(name="Wilson Junior", age=21)
p3 = Person(name="Sandra Mara", age=37)
Person.objects.insert([p1, p2, p3])
pipeline = [{"$project": {"name": {"$toUpper": "$name"}}}]
data = Person.objects(age__lte=22).aggregate(pipeline)
assert list(data) == [
{"_id": p1.pk, "name": "ISABELLA LUANNA"},
{"_id": p2.pk, "name": "WILSON JUNIOR"},
]
pipeline = [{"$project": {"name": {"$toUpper": "$name"}}}]
data = Person.objects(age__lte=22).order_by("-name").aggregate(pipeline)
assert list(data) == [
{"_id": p2.pk, "name": "WILSON JUNIOR"},
{"_id": p1.pk, "name": "ISABELLA LUANNA"},
]
pipeline = [
{"$group": {"_id": None, "total": {"$sum": 1}, "avg": {"$avg": "$age"}}}
]
data = (
Person.objects(age__gte=17, age__lte=40)
.order_by("-age")
.aggregate(pipeline)
)
assert list(data) == [{"_id": None, "avg": 29, "total": 2}]
pipeline = [{"$match": {"name": "Isabella Luanna"}}]
data = Person.objects().aggregate(pipeline)
assert list(data) == [{u"_id": p1.pk, u"age": 16, u"name": u"Isabella Luanna"}]
def test_queryset_aggregation_with_skip(self):
class Person(Document):
name = StringField()
age = IntField()
Person.drop_collection()
p1 = Person(name="Isabella Luanna", age=16)
p2 = Person(name="Wilson Junior", age=21)
p3 = Person(name="Sandra Mara", age=37)
Person.objects.insert([p1, p2, p3])
pipeline = [{"$project": {"name": {"$toUpper": "$name"}}}]
data = Person.objects.skip(1).aggregate(pipeline)
assert list(data) == [
{"_id": p2.pk, "name": "WILSON JUNIOR"},
{"_id": p3.pk, "name": "SANDRA MARA"},
]
def test_queryset_aggregation_with_limit(self):
class Person(Document):
name = StringField()
age = IntField()
Person.drop_collection()
p1 = Person(name="Isabella Luanna", age=16)
p2 = Person(name="Wilson Junior", age=21)
p3 = Person(name="Sandra Mara", age=37)
Person.objects.insert([p1, p2, p3])
pipeline = [{"$project": {"name": {"$toUpper": "$name"}}}]
data = Person.objects.limit(1).aggregate(pipeline)
assert list(data) == [{"_id": p1.pk, "name": "ISABELLA LUANNA"}]
def test_queryset_aggregation_with_sort(self):
class Person(Document):
name = StringField()
age = IntField()
Person.drop_collection()
p1 = Person(name="Isabella Luanna", age=16)
p2 = Person(name="Wilson Junior", age=21)
p3 = Person(name="Sandra Mara", age=37)
Person.objects.insert([p1, p2, p3])
pipeline = [{"$project": {"name": {"$toUpper": "$name"}}}]
data = Person.objects.order_by("name").aggregate(pipeline)
assert list(data) == [
{"_id": p1.pk, "name": "ISABELLA LUANNA"},
{"_id": p3.pk, "name": "SANDRA MARA"},
{"_id": p2.pk, "name": "WILSON JUNIOR"},
]
def test_queryset_aggregation_with_skip_with_limit(self):
class Person(Document):
name = StringField()
age = IntField()
Person.drop_collection()
p1 = Person(name="Isabella Luanna", age=16)
p2 = Person(name="Wilson Junior", age=21)
p3 = Person(name="Sandra Mara", age=37)
Person.objects.insert([p1, p2, p3])
pipeline = [{"$project": {"name": {"$toUpper": "$name"}}}]
data = list(Person.objects.skip(1).limit(1).aggregate(pipeline))
assert list(data) == [{"_id": p2.pk, "name": "WILSON JUNIOR"}]
# Make sure limit/skip chaining order has no impact
data2 = Person.objects.limit(1).skip(1).aggregate(pipeline)
assert data == list(data2)
def test_queryset_aggregation_with_sort_with_limit(self):
class Person(Document):
name = StringField()
age = IntField()
Person.drop_collection()
p1 = Person(name="Isabella Luanna", age=16)
p2 = Person(name="Wilson Junior", age=21)
p3 = Person(name="Sandra Mara", age=37)
Person.objects.insert([p1, p2, p3])
pipeline = [{"$project": {"name": {"$toUpper": "$name"}}}]
data = Person.objects.order_by("name").limit(2).aggregate(pipeline)
assert list(data) == [
{"_id": p1.pk, "name": "ISABELLA LUANNA"},
{"_id": p3.pk, "name": "SANDRA MARA"},
]
# Verify adding limit/skip steps works as expected
pipeline = [{"$project": {"name": {"$toUpper": "$name"}}}, {"$limit": 1}]
data = Person.objects.order_by("name").limit(2).aggregate(pipeline)
assert list(data) == [{"_id": p1.pk, "name": "ISABELLA LUANNA"}]
pipeline = [
{"$project": {"name": {"$toUpper": "$name"}}},
{"$skip": 1},
{"$limit": 1},
]
data = Person.objects.order_by("name").limit(2).aggregate(pipeline)
assert list(data) == [{"_id": p3.pk, "name": "SANDRA MARA"}]
def test_queryset_aggregation_with_sort_with_skip(self):
class Person(Document):
name = StringField()
age = IntField()
Person.drop_collection()
p1 = Person(name="Isabella Luanna", age=16)
p2 = Person(name="Wilson Junior", age=21)
p3 = Person(name="Sandra Mara", age=37)
Person.objects.insert([p1, p2, p3])
pipeline = [{"$project": {"name": {"$toUpper": "$name"}}}]
data = Person.objects.order_by("name").skip(2).aggregate(pipeline)
assert list(data) == [{"_id": p2.pk, "name": "WILSON JUNIOR"}]
def test_queryset_aggregation_with_sort_with_skip_with_limit(self):
class Person(Document):
name = StringField()
age = IntField()
Person.drop_collection()
p1 = Person(name="Isabella Luanna", age=16)
p2 = Person(name="Wilson Junior", age=21)
p3 = Person(name="Sandra Mara", age=37)
Person.objects.insert([p1, p2, p3])
pipeline = [{"$project": {"name": {"$toUpper": "$name"}}}]
data = Person.objects.order_by("name").skip(1).limit(1).aggregate(pipeline)
assert list(data) == [{"_id": p3.pk, "name": "SANDRA MARA"}]
def test_queryset_aggregation_deprecated_interface(self):
class Person(Document):
name = StringField()
Person.drop_collection()
p1 = Person(name="Isabella Luanna")
p2 = Person(name="Wilson Junior")
p3 = Person(name="Sandra Mara")
Person.objects.insert([p1, p2, p3])
pipeline = [{"$project": {"name": {"$toUpper": "$name"}}}]
# Make sure a warning is emitted
with warnings.catch_warnings():
warnings.simplefilter("error", DeprecationWarning)
with self.assertRaises(DeprecationWarning):
Person.objects.order_by("name").limit(2).aggregate(*pipeline)
# Make sure old interface works as expected with a 1-step pipeline
data = Person.objects.order_by("name").limit(2).aggregate(*pipeline)
assert list(data) == [
{"_id": p1.pk, "name": "ISABELLA LUANNA"},
{"_id": p3.pk, "name": "SANDRA MARA"},
]
# Make sure old interface works as expected with a 2-step pipeline
pipeline = [{"$project": {"name": {"$toUpper": "$name"}}}, {"$limit": 1}]
data = Person.objects.order_by("name").limit(2).aggregate(*pipeline)
assert list(data) == [{"_id": p1.pk, "name": "ISABELLA LUANNA"}]
if __name__ == "__main__":
unittest.main()
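# The tests above exercise how queryset skip/limit/order_by are folded into
# the aggregation pipeline before the user's own stages run. A minimal sketch
# of that stage-building logic (build_pipeline is a hypothetical helper, not
# MongoEngine's actual implementation):

```python
def build_pipeline(user_pipeline, skip=None, limit=None, sort=None):
    # Stages derived from the queryset are prepended so that the user's
    # stages operate on the already-restricted document stream.
    initial = []
    if sort:
        initial.append({"$sort": sort})
    if skip is not None:
        initial.append({"$skip": skip})
    if limit is not None:
        initial.append({"$limit": limit})
    return initial + list(user_pipeline)
```

# For example, skip(1).limit(1) with a $project stage yields
# [{"$skip": 1}, {"$limit": 1}, {"$project": ...}], matching the
# behaviour asserted in test_queryset_aggregation_with_skip_with_limit.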
#import numpy as np
#from mpmath import mpc,mpmathify,ellipfun,acos,ellipf
#import matplotlib.pyplot as plt
#plt.close('all')
class Model:
def __init__(self,head_offset=0,aquifer_type='unconfined',domain_center=0+0j,
domain_radius=1,H = None,variables=[],priors=[],observations=[],
likelihood_dictionary=None):
"""
This creates a model base object, to which we can add other elements.
Parameters:
head_offset - [scalar] : aquifer base elevation in [length units]
aquifer_type - [string] : specifies the aquifer type; either 'confined', 'unconfined', or 'convertible'
domain_center - [complex] : x + iy coordinate of center of the circular, physical domain in [length units]; can also be specified as a vector of length 2
domain_radius - [scalar] : radius of the circular domain in [length units]
H - [scalar] : aquifer top elevation in [length units]; only used if the aquifer is 'confined' or 'convertible'
If MCMC use is intended, we further require:
variables - [list] : list of variables which are inferred by MCMC, for example ['head_offset','H']; leave empty if unused
priors - [list] : list of dictionaries, one for each unknown variable in 'variables'; each dictionary must contain the name of a distribution (in scipy.stats) and the relevant parameters as keys
observations - [list] : list of dictionaries, one for each hydraulic head observation; each dictionary must contain a 'location' and a 'head' key, holding a complex and a real number, respectively
likelihood_dictionary - [dict] : dictionary with keys 'distribution' and any keywords required to specify the probability distribution; only required if this toolbox's logposterior function is used
"""
import numpy as np
# Set potential scaling variables
self.head_offset = head_offset
self.aquifer_type = aquifer_type
self.H = H
# Set domain scaling variables
self.domain_center = domain_center
self.domain_radius = domain_radius
if not np.isscalar(self.domain_center):
self.domain_center = self.domain_center[0] + 1j*self.domain_center[1]
# Check input for validity
self.check_input()
# Define a list for Analytic Elements
self.elementlist = []
self.variables = variables
self.priors = priors
self.observations = observations
self.likelihood_dictionary = likelihood_dictionary
# This function scrapes the model and its elements for unknown variables,
# then gives this instance three new variables:
# self.num_params Number of unknown variables
# self.params List of unknown variables
# self.param_names List of names of unknown variables
# self.priors List of prior dictionaries for unknown variables
self.take_parameter_inventory()
self.linear_solver = False
# Pre-allocate the function matrix and parameter vector for the linear solver
self.matrix_solver = []
self.params_vector = []
def update(self):
import copy
# self.take_parameter_inventory()
# Unpack the parameter vector to their respective instances -----------
# Count through the parameters
counter = -1
# Go through all unknown variables in the model class, if any
for var in self.variables:
# Replace the old variable
counter += 1
exec("self.%s = copy.copy(self.params[counter])" % var)
# Count through the parameters
counter = -1
# Go through all elements
for e in self.elementlist:
# Go through all the element's unknown variables
for var in e.variables:
# Replace the old variable
counter += 1
exec("e.%s = copy.copy(self.params[counter])" % var)
# Update all other elements
e.update()
def check_input(self):
# Check if aquifer type is valid
if self.aquifer_type != 'confined' and \
self.aquifer_type != 'unconfined' and \
self.aquifer_type != 'convertible':
raise Exception("aquifer_type must be either 'confined', 'unconfined', or 'convertible'.")
if (self.aquifer_type == 'confined' or self.aquifer_type == 'convertible') and \
self.H is None:
raise Exception("depth of confined layer 'H' must be specified if aquifer is confined or convertible.")
def evaluate(self,z,mode='potential',derivatives='all',return_error_flag=False, suppress_warnings = False):
import numpy as np
import copy
# Ensure that the evaluation points are complex
z = self.complexify(z)
if return_error_flag:
error_flag = False
self.update()
# Inverse maps from disk to square,
# Not inverse maps from square to disk
# If there is at least one prescribed head element, prepare the linear solver
if self.linear_solver:
matrix,solution_vector = self.set_up_linear_system()
# Find all elements which require the solver
part_of_solver = [(isinstance(e, ElementHeadBoundary) or isinstance(e, ElementNoFlowBoundary) or isinstance(e, ElementInhomogeneity)) for e in self.elementlist]
part_of_solver = [idx for idx,val in enumerate(part_of_solver) if val]
# Solve the system of linear equations
param_vec = np.linalg.solve(matrix,solution_vector)
# Assign those parameters to each element
counter = 0
for idx in part_of_solver:
# Extract the current element...
e = self.elementlist[idx]
# ...and assign the correct strength
e.strength = copy.copy(param_vec[counter:counter+e.segments])
# Then update the entry counter
counter += e.segments
# =====================================================================
# Now that the coefficients are set, evaluate the results
# =====================================================================
if mode == 'potential':
# Coordinates in canonical space are the start values
z_canonical = copy.copy(z)
z = np.zeros(z.shape, dtype=complex)
for e in self.elementlist:
z += e.evaluate(z_canonical)
elif mode == 'gradient':
# Coordinates in canonical space are the start values
z_canonical = copy.copy(z)
z = np.zeros(z.shape, dtype=complex)
for e in self.elementlist:
z += e.evaluate_gradient(z_canonical,derivatives=derivatives)
elif mode == 'head':
# Coordinates in canonical space are the start values
z_canonical = copy.copy(z)
z = np.zeros(z.shape, dtype=complex)
for e in self.elementlist:
z += e.evaluate(z_canonical)
# First, get the base conductivity
for e in self.elementlist:
if isinstance(e, ElementMoebiusBase) or isinstance(e, ElementUniformBase):
temp_k = np.ones(z_canonical.shape)*e.k
for e in self.elementlist:
if isinstance(e, ElementInhomogeneity):
inside = e.are_points_inside_polygon(z_canonical)
temp_k[inside] = e.k
# The hydraulic potential can never be negative; set it to zero
# (drying of an area) for any regions where it is negative, then
# issue a warning
if len(np.where(np.real(z) <= 0)[0]) > 0:
if not suppress_warnings:
print('WARNING: negative or zero potential detected at some evaluation points. Consider lowering head_offset or prescribing prior limits.')
z[np.where(np.real(z) <= 0)] = 1j*np.imag(z[np.where(np.real(z) <= 0)])
if return_error_flag:
error_flag = True
if self.aquifer_type == 'confined':
# Strack 1989, Eq. 8.12
z = (np.real(z) + 0.5*temp_k*self.H**2)/(temp_k*self.H) + \
1j*np.imag(z)
elif self.aquifer_type == 'unconfined':
# Strack 1989, Eq. 8.13
z = np.sqrt(2*(np.real(z))/temp_k) + 1j*np.imag(z)
elif self.aquifer_type == 'convertible':
# Decide which equation to use for what points
# confined: Strack 1989, Eq. 8.12
# unconfined: Strack 1989, Eq. 8.13
limit = 0.5*temp_k/self.H**2
index_conf = np.where(np.real(z) >= limit)[0]
index_unconf = np.where(np.real(z) < limit)[0]
# Handle the confined part
z[index_conf] = \
(np.real(z[index_conf]) + 0.5*temp_k[index_conf]*self.H**2)/(temp_k[index_conf]*self.H) + \
1j*np.imag(z[index_conf])
# Handle the unconfined part
z[index_unconf] = \
np.sqrt(2*(np.real(z[index_unconf]))/temp_k[index_unconf]) + 1j*np.imag(z[index_unconf])
# Offset the head
z += self.head_offset
else:
raise Exception("Mode must be either 'potential', 'gradient', or 'head'.")
if return_error_flag:
return z,error_flag
else:
return z
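# The 'head' branch above inverts the discharge potential via Strack (1989),
# Eqs. 8.12 (confined) and 8.13 (unconfined). A standalone sketch of that
# conversion, with potential_to_head as a hypothetical helper name:

```python
import numpy as np

def potential_to_head(phi, k, H=None, head_offset=0.0):
    # Invert the real discharge potential phi to hydraulic head.
    # Confined (Strack 1989, Eq. 8.12):   phi = k*H*h - 0.5*k*H**2
    # Unconfined (Strack 1989, Eq. 8.13): phi = 0.5*k*h**2
    phi = np.asarray(phi, dtype=float)
    if H is not None:  # confined aquifer of thickness H
        h = (phi + 0.5 * k * H**2) / (k * H)
    else:              # unconfined aquifer
        h = np.sqrt(2.0 * phi / k)
    return h + head_offset
```

# A convertible aquifer uses the confined branch wherever
# phi >= 0.5*k*H**2 and the unconfined branch elsewhere, exactly as in
# the index-based split in the method above.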
def set_up_linear_system(self):
"""
This function sets up the system of linear equations required to solve
for the unknown coefficients of prescribed head boundaries, no-flow
boundaries, and polygonal inhomogeneities.
"""
import numpy as np
import copy
# Find all elements which require the solver
# First, find all elements which are either Line Sinks, Doublets, or Inhomogeneities
part_of_solver = [(isinstance(e, ElementHeadBoundary) or isinstance(e, ElementNoFlowBoundary) or isinstance(e, ElementInhomogeneity)) for e in self.elementlist]
# Only keep the elements which must be part of the linear system...
part_of_solver = [idx for idx,val in enumerate(part_of_solver) if val]
# ...and prepare a second set of indices for its complement
not_part_of_solver = [i for i in np.arange(len(self.elementlist)) if i not in part_of_solver]
# These elements invariably consist of segments - find out how many there are in total
num_segments = np.sum([self.elementlist[idx].segments for idx in part_of_solver])
# =====================================================================
# Now create the matrix
# =====================================================================
# Pre-allocate arrays for the linear solver
matrix = np.zeros((num_segments,num_segments))
# The counter will keep track at what row we are
row = 0
# Go through all elements
for i in part_of_solver:
# Find the corresponding element
e = self.elementlist[i]
# We need a second counter for the columns
col = 0
# e is the element we are currently looking at - the row -, now we
# must go through all other elements which are part of the solver
# and check what they contribute to the control points of this element
for i2 in part_of_solver:
# Find the corresponding element
e2 = self.elementlist[i2]
# If the row element is a HeadLineSink, we must extract potentials
if isinstance(e, ElementHeadBoundary):
# Evaluate the contributions of this element to the control points
if e != e2:
block = e2.evaluate(
z = e.zc,
detailed = True,
override_parameters = True).T
else:
block = e2.evaluate(
z = e.zc,
detailed = True,
override_parameters = True,
evaluate_self = True).T
elif isinstance(e, ElementNoFlowBoundary):
# Evaluate the contributions of this element to the control points
block = e2.evaluate_gradient(
z = e.zc,
detailed = True,
derivatives = 'phi',
override_parameters = True).T
# Project the partial derivatives onto the normal vector
# The projection is a->b = <a,b>/||b||^2*b
# Let's try it with the inner product instead
# The normal vector is already normalized
# We should have as many normal vectors as we have control points
# Go through them all, and project each gradient onto the normal vector
for idx,nv in enumerate(e.segment_nvec):
# Calculate the inner product between the returned partial
# derivatives and the segment's normal vector
block[idx,:] = np.inner(
np.column_stack((
np.real(block[idx,:]),
np.imag(block[idx,:]) )),
np.asarray([np.real(nv),np.imag(nv)]).T )[:,0]
elif isinstance(e, ElementInhomogeneity):
# If this inhomogeneity evaluates itself
if i == i2:
# Retrieve own matrix contribution
block = copy.copy(e2.block)
# This contribution is incomplete, subtract A_star from
# its diagonal
# Prepare a vector of outside conductivities; all are
# the background conductivity by default
for e3 in self.elementlist:
if isinstance(e3, ElementMoebiusBase) or isinstance(e3, ElementUniformBase):
A_star = np.ones(e2.zc.shape)*e3.k/(e2.k - e3.k)
# Get add matrix
addmat = np.identity(block.shape[0])
np.fill_diagonal(addmat,A_star)
# Subtract it from the retrieved block
block -= addmat
else:
# Evaluate the contributions of this element to the control points
block = e2.evaluate(
z = e.zc,
detailed = True,
override_parameters = True).T
# Write this block into the matrix
matrix[row:row+e.segments,col:col+e2.segments] = copy.copy(np.real(block))
# Update the column counter
col += e2.segments
# Update the row counter
row += e.segments
# =====================================================================
# Now create the solution_vector
# =====================================================================
# Pre-allocate space for the solution vector
solution_vector = np.zeros(num_segments)
# The counter will keep track at what row we are
counter = 0
# Go through all elements
for i in part_of_solver:
# Find the corresponding element
e = self.elementlist[i]
# If the element is a HeadLineSink, we must assign the difference
# between the potential target and the contributions from all
# elements outside the solver
if isinstance(e, ElementHeadBoundary):
# Step 1: Assign the potential targets ------------------------
solution_vector[counter:counter+e.segments] = \
copy.copy(e.phi_target)
# Step 2: Subtract potentials from elements outside the solver
for idx in not_part_of_solver:
solution_vector[counter:counter+e.segments] -= \
np.real(self.elementlist[idx].evaluate(e.zc))
# If the element is a no-flow boundary, we must cancel the flux
# normal to each segment induced by elements outside the solver
if isinstance(e, ElementNoFlowBoundary):
# Step 1: Gradients from all elements outside the solver ------
temp = np.zeros(e.zc.shape, dtype=complex)
for idx in not_part_of_solver:
temp += \
self.elementlist[idx].evaluate_gradient(e.zc,derivatives='phi')
# Step 2: Project the gradients onto each segment's normal vector
for ix,nv in enumerate(e.segment_nvec):
solution_vector[counter+ix] = \
-np.inner(
np.asarray([np.real(nv),np.imag(nv)])[:,0],
np.asarray([np.real(temp[ix]),np.imag(temp[ix])]) )
# If the element is an Inhomogeneity, we must simply assign the potentials
# induced by other elements
if isinstance(e, ElementInhomogeneity):
# Subtract potentials from all elements outside the solver
for idx in not_part_of_solver:
solution_vector[counter:counter+e.segments] -= \
np.real(self.elementlist[idx].evaluate(e.zc))
# Update the counter
counter += e.segments
self.matrix = matrix
self.solvec = solution_vector
return matrix, solution_vector
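# The no-flow rows above project complex gradients (dphi/dx + i*dphi/dy)
# onto each segment's unit normal vector via an inner product. A standalone
# sketch of that projection (normal_flux is a hypothetical helper name):

```python
import numpy as np

def normal_flux(grad, nvec):
    # grad: array of complex gradients, real part = d/dx, imag part = d/dy.
    # nvec: unit normal vector encoded as a complex number nx + i*ny.
    # Returns the component of each gradient along the normal; a no-flow
    # boundary requires this component to vanish at every control point.
    g = np.column_stack((np.real(grad), np.imag(grad)))
    n = np.array([np.real(nvec), np.imag(nvec)])
    return g @ n
```

# With a normal pointing in +y (nvec = 1j), only the imaginary part of the
# gradient survives the projection.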
def gradients(self,z):
import numpy as np
# Extract the gradients and return them
grad = np.zeros(z.shape,dtype=np.complex)
for e in self.elementlist:
grad += e.evaluate_gradient(z)
return grad
def take_parameter_inventory(self):
# Find the number of unknown variables
self.num_params = 0
self.params = []
self.param_names = []
# First see if they are any unknown variables in the main model
if len(self.variables) > 0:
# There are some model variables specified
for idx,var in enumerate(self.variables):
self.num_params += 1
exec("self.params += [self.%s]" % var)
if 'name' in list(self.priors[idx].keys()):
self.param_names += [self.priors[idx]['name']]
else:
self.param_names += ['unknown']
# Check if the prior matches the number of parameters
if len(self.priors) != self.num_params:
raise Exception('Number of priors must match number of parameters. Number of parameters:'+str(self.num_params)+' / Number of priors:'+str(len(self.priors)))
def logprior(self,params,priors,verbose=False):
import numpy as np
import scipy.stats
import copy
import math
# from toolbox_AEM import ElementMoebiusBase,ElementMoebiusOverlay
def check_limits(params,var_dict):
reject = None
import numpy as np
# Check if any limits are prescribed
if 'limits' in list(var_dict.keys()):
if var_dict['limits'][0] is not None and type(var_dict['limits'][0]) != str:
if np.isscalar(params):
if params <= var_dict['limits'][0]:
reject = True
else:
for entry in params:
if entry <= var_dict['limits'][0]:
reject = True
if var_dict['limits'][1] is not None and type(var_dict['limits'][1]) != str:
if np.isscalar(params):
if params >= var_dict['limits'][1]:
reject = True
else:
for entry in params:
if entry >= var_dict['limits'][1]:
reject = True
var_dict.pop('limits')
return reject,var_dict
# Find the base element
MoebiusBase_index = None
for idx,e in enumerate(self.elementlist):
if isinstance(e, ElementMoebiusBase) and 'r' in e.variables:
MoebiusBase_index = idx
# Find any Möbius overlay element
MoebiusOverlay_index = None
for idx,e in enumerate(self.elementlist):
if isinstance(e, ElementMoebiusOverlay) and 'r' in e.variables:
if MoebiusOverlay_index is None:
MoebiusOverlay_index = [idx]
else:
MoebiusOverlay_index += [idx]
logprior = []
reject = False
if MoebiusBase_index is not None:
if self.elementlist[MoebiusBase_index].are_points_clockwise():
reject = True
# Check if the control points fulfill the minimum angular spacing
r = np.degrees(self.elementlist[MoebiusBase_index].r)
angular_limit = np.degrees(self.elementlist[MoebiusBase_index].angular_limit)
if np.abs((r[0]-r[1] + 180) % 360 - 180) < angular_limit or \
np.abs((r[1]-r[2] + 180) % 360 - 180) < angular_limit or \
np.abs((r[2]-r[0] + 180) % 360 - 180) < angular_limit:
reject = True
if MoebiusOverlay_index is not None:
for idx in MoebiusOverlay_index:
if self.elementlist[idx].are_points_clockwise():
reject = True
# Check if the control points fulfill the minimum angular spacing
r = np.degrees(self.elementlist[idx].r)
angular_limit = np.degrees(self.elementlist[idx].angular_limit)
if np.abs((r[0]-r[1] + 180) % 360 - 180) < angular_limit or \
np.abs((r[1]-r[2] + 180) % 360 - 180) < angular_limit or \
np.abs((r[2]-r[0] + 180) % 360 - 180) < angular_limit:
reject = True
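# The rejection checks above wrap each pairwise angle difference into
# (-180, 180] with the identity (a - b + 180) % 360 - 180, so that control
# points straddling the 0/360 boundary are spaced correctly. A standalone
# sketch of that check (angular_separation_ok is a hypothetical helper):

```python
def angular_separation_ok(angles_deg, limit_deg):
    # Returns True if every cyclically adjacent pair of angles is
    # separated by at least limit_deg, accounting for wraparound.
    n = len(angles_deg)
    for i in range(n):
        a = angles_deg[i]
        b = angles_deg[(i + 1) % n]
        # Signed difference wrapped into (-180, 180]
        diff = abs((a - b + 180) % 360 - 180)
        if diff < limit_deg:
            return False
    return True
```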
for var in range(len(priors)):
# If the logprior is to be rejected due to a violation of limits,
# break the loop
if reject:
break
# Create a copy of this variable's prior dictionary
var_dict = copy.copy(priors[var])
# Check if the user specified any converter
converter_used = False
if 'converter' in list(var_dict.keys()) and 'deconverter' in list(var_dict.keys()):
# Activate the logarithmic boolean and save the base
converter_used = True
# Extract the converter and the deconverter
converter = var_dict['converter']
deconverter = var_dict['deconverter']
# Remove the keys from the dictionary
var_dict.pop('converter')
var_dict.pop('deconverter')
# And convert the variable
params[var] = converter(params[var])
elif 'converter' in list(var_dict.keys()) or 'deconverter' in list(var_dict.keys()):
raise Exception('Both a converter and a deconverter must be specified if either are used for a variable.')
# Remove size if it was specified
if 'size' in list(var_dict.keys()):
del var_dict['size']
# This prior is a univariate normal distribution
if var_dict['distribution'] == 'norm' or var_dict['distribution'] == 'normal':
# Remove the variable name from the dictionary
var_dict.pop('distribution')
if 'name' in list(var_dict.keys()): var_dict.pop('name')
temp, var_dict = check_limits(params = params[var], var_dict = var_dict)
if temp is not None:
reject = True
# Add to the logprior
logprior += [np.sum(scipy.stats.norm.logpdf(x=params[var],**var_dict))]
# This prior is a multivariate normal distribution
elif var_dict['distribution'] == 'multivariate_normal' or var_dict['distribution'] == 'multivariate normal':
# Remove the variable name from the dictionary
var_dict.pop('distribution')
if 'name' in list(var_dict.keys()): var_dict.pop('name')
temp, var_dict = check_limits(params = params[var], var_dict = var_dict)
if temp is not None:
reject = True
# Add to the logprior
logprior += [np.sum(scipy.stats.multivariate_normal.logpdf(x=params[var],**var_dict))]
# This prior is a beta distribution
elif var_dict['distribution'] == 'beta':
# Remove the variable name from the dictionary
var_dict.pop('distribution')
if 'name' in list(var_dict.keys()): var_dict.pop('name')
# A beta distribution has natural limits; if none are prescribed, add them
if 'limits' not in list(var_dict.keys()):
var_dict['limits'] = [0,1]
temp, var_dict = check_limits(params = params[var], var_dict = var_dict)
if temp is not None:
reject = True
# Add to the logprior
logprior += [np.sum(scipy.stats.beta.logpdf(x=params[var],**var_dict))]
# This prior is an exponential distribution
elif var_dict['distribution'] == 'expon' or var_dict['distribution'] == 'exponential':
# SPECIAL EXCEPTION -------------------------------------------
# The argument of the exponential distribution is considered as
# its absolute value to permit evaluation of negative values.
# Checking for limits happens before this, so negative limits
# can be applied.
# -------------------------------------------------------------
# Remove the variable name from the dictionary
var_dict.pop('distribution')
if 'name' in list(var_dict.keys()): var_dict.pop('name')
# An exponential distribution has natural limits; if none are prescribed, add them
if 'limits' not in list(var_dict.keys()):
var_dict['limits'] = [0,None]
temp, var_dict = check_limits(params = params[var], var_dict = var_dict)
if temp is not None:
reject = True
# Add to the logprior
logprior += [np.sum(scipy.stats.expon.logpdf(x=np.abs(params[var]),**var_dict))]
# This prior is a uniform distribution
elif var_dict['distribution'] == 'unif' or var_dict['distribution'] == 'uniform':
# Remove the variable name from the dictionary
var_dict.pop('distribution')
if 'name' in list(var_dict.keys()): var_dict.pop('name')
temp, var_dict = check_limits(params = params[var], var_dict = var_dict)
if temp is not None:
reject = True
# Add to the logprior
logprior += [np.sum(scipy.stats.uniform.logpdf(x=params[var],**var_dict))]
# This prior is a von Mises distribution
elif var_dict['distribution'] == 'vonmises' or var_dict['distribution'] == 'von mises' or var_dict['distribution'] == 'von Mises':
# Remove the variable name from the dictionary
var_dict.pop('distribution')
if 'name' in list(var_dict.keys()): var_dict.pop('name')
temp, var_dict = check_limits(params = params[var], var_dict = var_dict)
if temp is not None:
reject = True
# Add to the logprior
logprior += [np.sum(scipy.stats.vonmises.logpdf(x=params[var],**var_dict))]
else:
raise Exception("Specified distribution name '" + str(var_dict['distribution']) + \
"' not understood. Valid distribution names are: 'norm', " + \
"'multivariate_normal', 'beta', 'expon', 'uniform', or 'vonmises'")
# If a converter was used, deconvert the variable
if converter_used:
params[var] = deconverter(params[var])
# Check if variables have been prescribed as limits
counter = -1
for idx,e in enumerate(self.elementlist):
# If the logprior is to be rejected due to a violation of limits,
# break the loop
if reject:
break
# At what value was the counter at the start of this element
counter_elementstart = counter
# Check the priors list
for prior in e.priors:
# Increment the counter variable
counter += 1
# Check if this prior entry has prescribed limits
if 'limits' in prior:
# Check if the lower limit is a string (a variable)
if type(prior['limits'][0]) == str:
# Check where this variable is, store it
limit = None
for idx,var in enumerate(e.variables):
if prior['limits'][0] == var:
limit = params[counter_elementstart+1+idx]
if limit is None: raise Exception("variable '"+prior['limits'][0]+"' marked as limit not part of the variables of element "+str(e))
if params[counter] < limit:
reject = True
# Check if the upper limit is a string (a variable)
if type(prior['limits'][1]) == str:
# Check where this variable is, store it
limit = None
for idx,var in enumerate(e.variables):
if prior['limits'][1] == var:
limit = params[counter_elementstart+1+idx]
if limit is None: raise Exception("variable '"+prior['limits'][1]+"' marked as limit not part of the variables of element "+str(e))
if params[counter] > limit:
reject = True
# Return the logprior only if the sample isn't rejected
if not reject:
res = logprior
else:
res = None
if verbose:
print('Logprior calculation rejected because at least one variable violated prescribed limits.')
return res
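# The converter/deconverter mechanism above evaluates a prior in a
# transformed space (e.g. a normal prior on the log of a conductivity).
# A minimal standalone sketch, assuming a log-normal-style prior; the
# dictionary contents and eval_logprior are illustrative, not the
# toolbox's own interface:

```python
import numpy as np

# Hypothetical prior for a log-transformed hydraulic conductivity:
# 'converter' maps the parameter into the space where the pdf applies,
# 'deconverter' restores the original parameterization afterwards.
prior = {
    "loc": np.log(1e-4),
    "scale": 1.0,
    "converter": np.log,
    "deconverter": np.exp,
}

def eval_logprior(value, prior):
    # Convert, then evaluate a univariate normal logpdf by hand
    # (equivalent to scipy.stats.norm.logpdf in the transformed space).
    x = prior["converter"](value)
    loc, scale = prior["loc"], prior["scale"]
    return float(-0.5 * np.log(2 * np.pi * scale**2)
                 - 0.5 * ((x - loc) / scale) ** 2)
```

# The prior density peaks where the converted value equals 'loc', i.e.
# at a conductivity of 1e-4 in the original parameterization.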
def loglikelihood(self,observations,likelihood_dictionary,predictions = None):
import numpy as np
import scipy.stats
import copy
loglikelihood = None
obs_dict = copy.deepcopy(likelihood_dictionary)
# If no predictions have been provided, map forward
if predictions is None:
# Get the well positions
z = []
for entry in observations:
z += [copy.copy(entry['location'])]
z = np.asarray(z)
predictions,error_flag = copy.copy(np.real(self.evaluate(
z,
mode='head',
return_error_flag=True,
suppress_warnings=True)))
# If any of the predictions is NaN, raise an error flag
if not error_flag and np.isnan(predictions).any():
error_flag = True
else:
error_flag = False
predictions = np.asarray(predictions)
# Create a vector of observations
obs_vec = []
for entry in observations:
obs_vec += [copy.copy(entry['head'])]
obs_vec = np.asarray(obs_vec)
# Get the prediction residuals
residuals = obs_vec - predictions
        # This likelihood assumes independent, normally distributed errors
        if obs_dict['distribution'] == 'norm' or obs_dict['distribution'] == 'normal':
            # Remove superfluous keys
            obs_dict.pop('distribution')
            if 'name' in list(obs_dict.keys()):
                obs_dict.pop('name')
            if 'loc' in list(obs_dict.keys()):
                print("Warning: 'loc' specified for the loglikelihood pdf. This value is overwritten by the predictions.")
                obs_dict.pop('loc')
            # Evaluate the loglikelihood, summing over the independent observations
            loglikelihood = np.sum(scipy.stats.norm.logpdf(x=obs_vec,loc=predictions,**obs_dict))
        # This likelihood assumes multivariate normally distributed errors
        elif obs_dict['distribution'] == 'multivariate_normal' or obs_dict['distribution'] == 'multivariate normal':
            # Remove superfluous keys
            obs_dict.pop('distribution')
            if 'name' in list(obs_dict.keys()):
                obs_dict.pop('name')
            if 'mean' in list(obs_dict.keys()):
                print("Warning: 'mean' specified for the loglikelihood pdf. This value is overwritten by the predictions.")
                obs_dict.pop('mean')
            # Evaluate the loglikelihood
            loglikelihood = np.sum(scipy.stats.multivariate_normal.logpdf(x=obs_vec,mean=predictions,**obs_dict))
if error_flag:
loglikelihood = None
return loglikelihood, residuals
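For the 'norm' branch, the loglikelihood of independent Gaussian observation errors is the sum of pointwise log-densities of the residuals. A minimal self-contained sketch of this calculation (the head values and error scale below are made up for illustration):

```python
import numpy as np
import scipy.stats

obs = np.asarray([10.2, 9.8, 10.5])    # observed heads (hypothetical)
pred = np.asarray([10.0, 10.0, 10.0])  # corresponding model predictions (hypothetical)
sigma = 0.5                            # observation error standard deviation

# Independent Gaussian errors: sum the pointwise log-densities
loglik = np.sum(scipy.stats.norm.logpdf(x=obs, loc=pred, scale=sigma))

# Equivalent closed form: -n/2*log(2*pi*sigma^2) - sum(r^2)/(2*sigma^2)
r = obs - pred
closed_form = -0.5*len(obs)*np.log(2*np.pi*sigma**2) - np.sum(r**2)/(2*sigma**2)
```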
def logposterior(self,params,likelihood_dictionary=None,priors=None,
observations=None,verbose=False,predictions=None,return_residuals=False):
"""
This function returns the unnormalized logposterior of the model,
intended for use in any external inference routines. If the proposal
violates any parameter limits, or the model detects instabilities, the
logposterior will return None instead of a numerical value.
Parameters:
params - [list] : list of parameters to be evaluated, corresponding to the list entries in 'variables'
likelihood_dictionary - [dict] : dictionary with keys 'distribution' and any keywords required to specify the probability distribution; only required if this toolbox's logposterior function is used
priors - [list] : list of dictionaries, one for each unknown 'variable'; each dictionary must contain the name of distribution (in scipy.stats) and the relevant parameters as keys
        observations - [list] : list of dictionaries, one for each hydraulic head observation; each dictionary must contain a 'location' and a 'head' key, with a complex and a real number, respectively
verbose - [boolean] : flag passed on to the logprior function, controlling whether additional information is printed or not
predictions - [list] : optional list of predictions at the observation locations; predictions are simulated for the defined 'params' if not specified
        return_residuals - [boolean] : flag for whether only the logposterior is returned (False), or whether both the logposterior and the observation residuals are returned (True)
"""
# Fetch any unspecified variables
if likelihood_dictionary is None:
likelihood_dictionary = self.likelihood_dictionary
if self.likelihood_dictionary is None:
raise Exception('Likelihood_dictionary must be specified to evaluate the logposterior density.')
if priors is None:
priors = self.priors
if self.priors == []:
raise Exception('No priors are specified. Priors are required to evaluate the logposterior density.')
if observations is None:
observations = self.observations
if self.observations == []:
raise Exception('No observations are specified. Observations are required to evaluate the likelihood for the logposterior density.')
# Evaluate the logprior
logpri = self.logprior(
params = params,
priors = priors,
verbose = False)
# If the logprior was evaluated successfully, evaluate the loglikelihood
if logpri is not None:
loglik, residuals = self.loglikelihood(
observations = observations,
likelihood_dictionary = likelihood_dictionary,
predictions = predictions)
else:
loglik = None
residuals = None
# If both the logprior and loglikelihood were valid, calculate the
# unnormalized logposterior
if logpri is not None and loglik is not None:
logpost = logpri + loglik
else:
logpost = None
# Also return the loglikelihood's residuals, if requested
if return_residuals:
return logpost, residuals
else:
return logpost
def complexify(self,z):
"""
This function takes the provided set of points and converts it into
        a complex-valued vector, if it isn't already provided as one.
"""
import numpy as np
if not np.iscomplex(z).any():
if len(z.shape) != 2 or z.shape[1] != 2:
raise Exception('Shape format not understood. Provide shape vertices either as a complex vector, or as a N-by-2 real numpy array.')
else:
z = z[:,0] + 1j*z[:,1]
return z
def plot(self,color='xkcd:grey'):
"""
This function plots all elements in the elementlist
"""
import matplotlib.pyplot as plt
import numpy as np
# Plot the model domain
plt.plot(
np.cos(np.linspace(0,2*np.pi,361))*self.domain_radius + np.real(self.domain_center),
np.sin(np.linspace(0,2*np.pi,361))*self.domain_radius + np.imag(self.domain_center),
color=color)
for e in self.elementlist:
e.plot()
def trace_gradient(self,p,direction='upgradient',stepsize=1,well_snap_distance = 1):
"""
This function takes a regular grid defined by X and Y meshgrid arrays and
follows the gradient of a corresponding Z array starting from a point p.
p : starting point for gradient tracing
XY : array of points between which the gradient tracing is interpolated
Z : hydraulic head at XY
direction : whether we trace 'upgradient' or 'downgradient'
stepsize : size of successive steps for gradient tracing
visualize : plots the results, if desired
Del : Delaunay triangulation, calculated if missing
triang : triangulation, calculated if missing
thresh : side length threshold above which vertices are rejected
"""
        if direction not in ('upgradient','downgradient'):
raise Exception("direction must be either 'upgradient' or 'downgradient'.")
import scipy.spatial
import numpy as np
import matplotlib.pyplot as plt
import shapely.geometry
ring = np.column_stack((
np.cos(np.linspace(0,2*np.pi,361)),
np.sin(np.linspace(0,2*np.pi,361)) ))
ring *= self.domain_radius
ring += np.asarray([np.real(self.domain_center),np.imag(self.domain_center)])
# First, find all elements which could be stoppers
stoppers = []
stoppers.append(shapely.geometry.LineString(ring))
for e in self.elementlist:
if isinstance(e, ElementHeadBoundary):
# Head Boundaries are valid end points
stoppers.append(shapely.geometry.LineString(e.line[:,:2]))
if isinstance(e, ElementWell):
# Wells are valid end points
stoppers.append(shapely.geometry.Point(np.asarray([np.real(e.zc),np.imag(e.zc)])))
if isinstance(e, ElementLineSink):
# Line Sinks are valid end points
stoppers.append(shapely.geometry.LineString(e.line[:,:2]))
if isinstance(e, ElementNoFlowBoundary):
# No-flow Boundaries are valid end points
stoppers.append(shapely.geometry.LineString(e.line[:,:2]))
def gradient(p1,p2,p3,z1,z2,z3):
area = abs((p1[0]*(p2[1]-p3[1])+p2[0]*(p3[1]-p1[1])+p3[0]*(p1[1]-p2[1]))/2)
M = np.asarray(
[[p2[1]-p3[1], p3[1]-p1[1], p1[1]-p2[1]],
[p3[0]-p2[0], p1[0]-p3[0], p2[0]-p1[0]]])
U = np.asarray([z1,z2,z3]).reshape((3,1))
# Solution based on http://pers.ge.imati.cnr.it/livesu/papers/MLP18/MLP18.pdf Equation 1
return np.dot(M,U)[:,0]/(2*area)
# Check if the start point is complex, if yes, turn it into a real vector
if np.iscomplex(p).any():
p = np.asarray([np.real(p),np.imag(p)])
        # Depending on the trace direction, set the sign of the step
        if direction == 'downgradient':
            stepsize = -stepsize
# Set the repeater boolean to True
repeater = True
# Re-arrange the starting point into an array
points = np.asarray(p).copy().reshape((1,2))
# """
# Get three points
testpoints = np.asarray([
points[-1,0] + 1j*points[-1,1],
points[-1,0] + stepsize/100 + 1j*points[-1,1],
points[-1,0] + 1j*points[-1,1] + 1j*stepsize/100])
testpoints = np.real(self.evaluate(testpoints,mode='head'))
grad = np.asarray([
testpoints[1]-testpoints[0],
testpoints[2]-testpoints[0]])/stepsize*100
grad = grad/np.linalg.norm(grad)
# """
# grad = self.evaluate(
# z = points,
# mode = 'gradient',
# derivatives = 'phi')
# # grad = np.asarray([np.real(grad), np.imag(grad)])
# grad = grad/np.linalg.norm(grad)
# And save the result to the points array
        points = np.vstack((
            points.copy(),
            points + grad*stepsize))
# Now start the while loop, trace until the end
while repeater:
# The last point in the array is the starting point
p = points[-1,:]
# """
testpoints = np.asarray([
points[-1,0] + 1j*points[-1,1],
points[-1,0] + stepsize/100 + 1j*points[-1,1],
points[-1,0] + 1j*points[-1,1] + 1j*stepsize/100])
testpoints = np.real(self.evaluate(testpoints,mode='head'))
grad = np.asarray([
testpoints[1]-testpoints[0],
testpoints[2]-testpoints[0]])/stepsize*100
grad = grad/np.linalg.norm(grad)
# """
# grad = self.evaluate(
# z = points[-1,:],
# mode = 'gradient',
# derivatives = 'phi')
# # grad = np.asarray([np.real(grad), np.imag(grad)])
# grad = grad/np.linalg.norm(grad)
# And append the next step to the list
            points = np.vstack((
                points,
                points[-1,:] + grad*stepsize))
line = shapely.geometry.LineString(points[-2:,:])
            # Check for stopping elements
            for stop in stoppers:
                # If this stopper is a well, check for distance
                if stop.geom_type == 'Point':
                    point = shapely.geometry.Point(points[-1,:])
                    if point.distance(stop) <= well_snap_distance:
                        # Snap the trace endpoint to the well and stop
                        points[-1,:] = np.asarray(stop.xy)[:,0]
                        repeater = False
                # Else, we can check for intersection
                else:
                    if line.intersects(stop):
                        intersection = line.intersection(stop)
                        if intersection.geom_type == 'Point':
                            points[-1,:] = np.asarray(intersection.xy)[:,0]
                        else:
                            # The intersection has multiple parts; snap to the first
                            points[-1,:] = np.asarray(list(intersection.geoms)[0].xy)[:,0]
                        repeater = False
# # Check for oscillation
# p2p = points[-3,:]-points[-2,:]
# p1p = points[-2,:]-points[-1,:]
# if np.inner(p1p,p2p) < 0:
# # The trace direction has change by more than 90 degrees, i.e.
# # turned back; stop iterating
# points = points[:-1,:]
# repeater = False
return points
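The nested `gradient` helper above evaluates the gradient of a linear interpolant over a triangle (Equation 1 of the referenced MLP18 paper). A standalone sketch of the same formula, checked against a plane whose gradient is known:

```python
import numpy as np

def triangle_gradient(p1, p2, p3, z1, z2, z3):
    # Gradient of the plane through (p1,z1), (p2,z2), (p3,z3)
    area = abs((p1[0]*(p2[1]-p3[1]) + p2[0]*(p3[1]-p1[1]) + p3[0]*(p1[1]-p2[1]))/2)
    M = np.asarray([[p2[1]-p3[1], p3[1]-p1[1], p1[1]-p2[1]],
                    [p3[0]-p2[0], p1[0]-p3[0], p2[0]-p1[0]]])
    U = np.asarray([z1, z2, z3]).reshape((3, 1))
    return np.dot(M, U)[:, 0]/(2*area)

# For the plane z = 2x + 3y, the gradient is (2, 3) everywhere
g = triangle_gradient((0, 0), (1, 0), (0, 1), 0.0, 2.0, 3.0)
```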
#%%
class ElementMoebiusBase:
def __init__(self,model,r=None,a=None,b=None,c=None,d=None,head_min=0,head_max=1,
k=1,variables=[],priors=[],proposals=[],angular_limit=1,use_SC=True):
"""
This implements the Möbius base flow element, which can induce curving,
converging, or diverging regional flow.
Parameters:
model - [object] : the model object to which this element is added
r - [vector] : rotations of the three Moebius control points in counter-clockwise radians from East
a - [scalar] : coefficient of the Moebius transformation; calculated from r if not specified
b - [scalar] : coefficient of the Moebius transformation; calculated from r if not specified
c - [scalar] : coefficient of the Moebius transformation; calculated from r if not specified
d - [scalar] : coefficient of the Moebius transformation; calculated from r if not specified
head_min - [scalar] : minimum hydraulic head (mapped to -1 in the unit square)
head_max - [scalar] : maximum hydraulic head (mapped to +1 in the unit square)
k - [scalar] : background hydraulic conductivity in canonical units (e.g., 1E-4 [length]/[time])
use_SC - [boolean] : a flag to denote whether the Möbius base uses the Schwarz-Christoffel transformation from the unit square to the unit disk; changes to this flag affect the flow field, particularly near the edges
If MCMC is used, we further require:
        variables - [list] : list of variables which are inferred by MCMC, example: ['r','phi_min']; leave empty if unused
priors - [list] : list of dictionaries, one for each unknown 'variable'; each dictionary must contain the name of distribution (in scipy.stats) and the relevant parameters as keys
angular_limit - [scalar] : a limit which prevents control points (A,B,C, or D) getting closer to each other than the specified value in radians; this acts as protection against improbable or unrealistic flow fields induced by the Möbius transformation
"""
import numpy as np
# Append the base to the elementlist
self.model = model
model.elementlist.append(self)
# Define an angular limit. This is designed to keep the Möbius control
# points from getting arbitrarily close to each other; defined in radians
self.angular_limit = angular_limit
# Get the Schwarz-Christoffel flag
self.use_SC = use_SC
# Set Moebius values
self.r = r
self.a = a
self.b = b
self.c = c
self.d = d
# Set potential scaling variables
self.head_min = head_min
self.head_max = head_max
# Assign the hydraulic conductivity of the base model
self.k = k
# The model requires the base flow in terms of hydraulic potential (phi)
# The function head_to_potential extracts the following variables:
# phi_min hydraulic potential corresponding to head_min
# phi_max hydraulic potential corresponding to head_max
self.head_to_potential()
# Check input for validity
self.check_input()
# Define the original control points in the Moebius base disk
        self.z0 = np.asarray(
            [complex(np.cos(-0.25*np.pi),np.sin(-0.25*np.pi)),
             complex(np.cos(0.25*np.pi),np.sin(0.25*np.pi)),
             complex(np.cos(0.75*np.pi),np.sin(0.75*np.pi))])
# If only rotation is specified, get the Moebius coefficients
if self.r is not None and (self.a is None and self.b is None and \
self.c is None and self.d is None):
# Find Moebius coefficients
self.find_moebius_coefficients()
self.variables = variables
self.priors = priors
self.proposals = proposals
self.Ke = 1.854
if len(self.variables) > 0:
# There are some model variables specified
for idx,var in enumerate(self.variables):
self.model.num_params += 1
exec("self.model.params += [self.%s]" % var)
self.model.variables += [var]
self.model.priors += [self.priors[idx]]
if 'name' in list(self.priors[idx].keys()):
self.model.param_names += [self.priors[idx]['name']]
else:
self.model.param_names += ['unknown']
def update(self):
# If this model is updated, make sure to repeat any initialization
# Find Moebius coefficients
self.find_moebius_coefficients()
        # Flip head_min and head_max, if necessary
        if self.head_min > self.head_max:
            self.head_min, self.head_max = self.head_max, self.head_min
# Update potential
self.head_to_potential()
def check_input(self):
import numpy as np
# See if either control point rotations or a full set of Moebius
# coefficients are specified
if self.r is None and (self.a is None or self.b is None or \
self.c is None or self.d is None):
raise Exception('Either control point rotations r or Moebius coefficients a, b, c, and d must be specified.')
        # Check that head_min does not exceed head_max
        if self.head_min > self.head_max:
            raise Exception('Minimum head head_min is larger than maximum head head_max.')
        # Check if the control points fulfill the minimum angular spacing;
        # both the rotations r and the angular_limit are defined in radians
        if self.r is not None:
            r = self.r
            if np.abs((r[0]-r[1] + np.pi) % (2*np.pi) - np.pi) < self.angular_limit or \
               np.abs((r[1]-r[2] + np.pi) % (2*np.pi) - np.pi) < self.angular_limit or \
               np.abs((r[2]-r[0] + np.pi) % (2*np.pi) - np.pi) < self.angular_limit:
                raise Exception('Control points '+str(self.r)+' are too close to each other. Define different control points or adjust the angular limit: '+str(self.angular_limit))
def evaluate(self,z):
import numpy as np
import copy
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# Coordinates in canonical space are the start values
z_canonical = copy.copy(z)
# Scale the canonical disk to unity canonical disk
z = (z - self.model.domain_center)/self.model.domain_radius
# Map from canonical disk to Möbius base
z = self.moebius(z,inverse=True)
# Map from Möbius base to unit square
if self.use_SC:
z = self.disk_to_square(z)
# Rescale the complex potential
z = (z+1)/2 * (self.phi_max-self.phi_min) + self.phi_min
return z
def evaluate_gradient(self,z,derivatives = 'all'):
import numpy as np
import copy
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# Map from the canonical disk to Möbius base
z_mb = (copy.copy(z) - self.model.domain_center)/self.model.domain_radius
# dz_mb / dz_c
grad_4 = 1/self.model.domain_radius
# Map from Möbius base to unit disk
z_ud = self.moebius(copy.copy(z_mb),inverse=True)
# dz_ud / dz_mb
grad_3 = (self.a*self.d-self.b*self.c)/(self.c*z_mb-self.a)**2
if self.use_SC:
grad_2 = 2/(self.Ke*np.sqrt(z_ud**4+1))
grad_1 = (self.phi_max-self.phi_min)/2
if self.use_SC:
grad = grad_1*grad_2*grad_3*grad_4
else:
grad = grad_1*grad_3*grad_4
if derivatives == 'phi':
dudx = np.real(grad)
dudy = -np.imag(grad)
grad = dudx + 1j*dudy
elif derivatives == 'psi':
dvdx = -np.imag(np.conjugate(grad))
dvdy = np.real(np.conjugate(grad))
grad = dvdx + 1j*dvdy
elif derivatives != 'all':
raise Exception("'derivatives' has to be either 'all' (complex derivative), " + \
"'phi' (hydraulic potential partial derivatives), or 'psi' " + \
"(flow line partial derivatives)")
return grad
def complex_integral(self,func,a,b):
"""
This implements the Gauss-Kronrod integration for complex-valued functions.
We use this to evaluate the Legendre incomplete elliptic integral of the
first kind, since it is about ten times as fast as using mpmath's ellipf
function. Since this integration is a major computational bottleneck of
this function, we stick with this approach.
The equations below are adapted from:
https://stackoverflow.com/questions/5965583/use-scipy-integrate-quad-to-integrate-complex-numbers
"""
        import scipy
        from numpy import array
def quad_routine(func, a, b, x_list, w_list):
c_1 = (b-a)/2.0
c_2 = (b+a)/2.0
eval_points = map(lambda x: c_1*x+c_2, x_list)
func_evals = list(map(func, eval_points)) # Python 3: make a list here
return c_1 * sum(array(func_evals) * array(w_list))
def quad_gauss_7(func, a, b):
x_gauss = [-0.949107912342759, -0.741531185599394, -0.405845151377397, 0, 0.405845151377397, 0.741531185599394, 0.949107912342759]
w_gauss = array([0.129484966168870, 0.279705391489277, 0.381830050505119, 0.417959183673469, 0.381830050505119, 0.279705391489277,0.129484966168870])
return quad_routine(func,a,b,x_gauss, w_gauss)
def quad_kronrod_15(func, a, b):
x_kr = [-0.991455371120813,-0.949107912342759, -0.864864423359769, -0.741531185599394, -0.586087235467691,-0.405845151377397, -0.207784955007898, 0.0, 0.207784955007898,0.405845151377397, 0.586087235467691, 0.741531185599394, 0.864864423359769, 0.949107912342759, 0.991455371120813]
w_kr = [0.022935322010529, 0.063092092629979, 0.104790010322250, 0.140653259715525, 0.169004726639267, 0.190350578064785, 0.204432940075298, 0.209482141084728, 0.204432940075298, 0.190350578064785, 0.169004726639267, 0.140653259715525, 0.104790010322250, 0.063092092629979, 0.022935322010529]
return quad_routine(func,a,b,x_kr, w_kr)
class Memorize: # Python 3: no need to inherit from object
def __init__(self, func):
self.func = func
self.eval_points = {}
def __call__(self, *args):
if args not in self.eval_points:
self.eval_points[args] = self.func(*args)
return self.eval_points[args]
def quad(func,a,b):
''' Output is the 15 point estimate; and the estimated error '''
func = Memorize(func) # Memorize function to skip repeated function calls.
g7 = quad_gauss_7(func,a,b)
k15 = quad_kronrod_15(func,a,b)
# I don't have much faith in this error estimate taken from wikipedia
# without incorporating how it should scale with changing limits
            return [k15, (200*abs(g7-k15))**1.5]
return quad(func,a,b)
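The routine above exists because `scipy.integrate.quad` only handles real integrands. An equivalent cross-check (not used by the class) splits the integrand into its real and imaginary parts:

```python
import numpy as np
from scipy import integrate

def complex_quad(func, a, b):
    # Integrate a complex-valued function of a real variable, part by part
    re, re_err = integrate.quad(lambda t: np.real(func(t)), a, b)
    im, im_err = integrate.quad(lambda t: np.imag(func(t)), a, b)
    return re + 1j*im, re_err + im_err

# int_0^1 exp(i*t) dt = sin(1) + i*(1 - cos(1))
val, err = complex_quad(lambda t: np.exp(1j*t), 0.0, 1.0)
```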
def angle_to_unit_circle(self):
import numpy as np
# Angle must be provided in radians, counter-clockwise from 3 o'clock
return np.cos(self.r)+1j*np.sin(self.r)
def find_moebius_coefficients(self):
import numpy as np
# Find the images of the z0 control points
w0 = self.angle_to_unit_circle()
# Then calculate the four parameters for the corresponding Möbius map
self.a = np.linalg.det(np.asarray(
[[self.z0[0]*w0[0], w0[0], 1],
[self.z0[1]*w0[1], w0[1], 1],
[self.z0[2]*w0[2], w0[2], 1]]))
self.b = np.linalg.det(np.asarray(
[[self.z0[0]*w0[0], self.z0[0], w0[0]],
[self.z0[1]*w0[1], self.z0[1], w0[1]],
[self.z0[2]*w0[2], self.z0[2], w0[2]]]))
self.c = np.linalg.det(np.asarray(
[[self.z0[0], w0[0], 1],
[self.z0[1], w0[1], 1],
[self.z0[2], w0[2], 1]]))
self.d = np.linalg.det(np.asarray(
[[self.z0[0]*w0[0], self.z0[0], 1],
[self.z0[1]*w0[1], self.z0[1], 1],
[self.z0[2]*w0[2], self.z0[2], 1]]))
return
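The four determinants above are the standard explicit solution for the Möbius map sending three points z0[i] to three images w0[i]. A standalone sketch verifying that property for arbitrary control points:

```python
import numpy as np

def moebius_from_triples(z0, w0):
    # Standard determinant solution for w = (a*z + b)/(c*z + d) with w(z0[i]) = w0[i]
    a = np.linalg.det(np.asarray([[z0[i]*w0[i], w0[i], 1] for i in range(3)]))
    b = np.linalg.det(np.asarray([[z0[i]*w0[i], z0[i], w0[i]] for i in range(3)]))
    c = np.linalg.det(np.asarray([[z0[i], w0[i], 1] for i in range(3)]))
    d = np.linalg.det(np.asarray([[z0[i]*w0[i], z0[i], 1] for i in range(3)]))
    return a, b, c, d

z0 = np.exp(1j*np.pi*np.asarray([-0.25, 0.25, 0.75]))  # control points on the unit circle
w0 = np.exp(1j*np.asarray([0.1, 1.2, 2.5]))            # prescribed images on the unit circle
a, b, c, d = moebius_from_triples(z0, w0)
mapped = (a*z0 + b)/(c*z0 + d)
```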
def moebius(self,z,inverse=False):
if not inverse:
z = (self.a*z+self.b)/(self.c*z+self.d)
else:
z = (-self.d*z+self.b)/(self.c*z-self.a)
return z
def square_to_disk(self,z,k='default'):
import numpy as np
from mpmath import mpc,mpmathify,ellipfun
if k == 'default': k = 1/mpmathify(np.sqrt(2))
Ke = 1.854
cn = ellipfun('cn')
if type(z) is complex:
z = np.asarray([z])
zf = np.ndarray.flatten(z)
w = np.zeros(zf.shape)*1j
pre_factor = mpc(1,-1)/mpmathify(np.sqrt(2))
mid_factor = Ke*(mpc(1,1)/2)
for idx,entry in enumerate(zf): # Go through all complex numbers
# Calculate result
temp = pre_factor*cn(
u = mid_factor*entry-Ke,
k = k)
# Then place it into the array
            w[idx] = complex(temp.real,temp.imag)
# Now reshape the array back to its original shape
z = w.reshape(z.shape).copy()
return z
def disk_to_square(self,z,k='default'):
import numpy as np
Ke = 1.854
if type(z) is complex:
z = np.asarray([z])
zf = np.ndarray.flatten(z)
w = np.zeros(zf.shape)*1j
# Using the Gauss-Kronrod integration is about 10 times faster than
# using the mpmath.ellipf function
if k == 'default': k = 1/np.sqrt(2)
m = k**2
pre_factor = (1-1j)/(-Ke)
mid_factor = (1+1j)/np.sqrt(2)
temp = [pre_factor*self.complex_integral(
func = lambda t: (1-m*np.sin(t)**2)**(-0.5),
a = 0,
b = i)[0] + 1 - 1j for i in np.arccos(zf*mid_factor)]
w = np.asarray(temp)
# Now reshape the array back to its original shape
z = w.reshape(z.shape).copy()
return z
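The quadrature above evaluates the Legendre incomplete elliptic integral of the first kind, F(phi|m) = integral from 0 to phi of (1 - m*sin(t)**2)**(-0.5) dt. For real arguments this can be cross-checked against `scipy.special.ellipkinc`:

```python
import numpy as np
from scipy import integrate, special

m = 0.5    # parameter m = k**2 with modulus k = 1/sqrt(2)
phi = 1.1  # amplitude (an arbitrary test value)

# Direct quadrature of the defining integral
F_quad, _ = integrate.quad(lambda t: (1.0 - m*np.sin(t)**2)**(-0.5), 0.0, phi)
# Reference value from scipy's incomplete elliptic integral of the first kind
F_ref = special.ellipkinc(phi, m)
```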
def are_points_clockwise(self):
import numpy as np
verts = np.zeros((3,2))
verts[0,:] = np.asarray([np.cos(self.r[0]),np.sin(self.r[0])])
verts[1,:] = np.asarray([np.cos(self.r[1]),np.sin(self.r[1])])
verts[2,:] = np.asarray([np.cos(self.r[2]),np.sin(self.r[2])])
signed_area = 0
for vtx in range(verts.shape[0]):
x1 = verts[vtx,0]
y1 = verts[vtx,1]
if vtx == verts.shape[0]-1: # Last vertex
x2 = verts[0,0]
y2 = verts[0,1]
else:
x2 = verts[vtx+1,0]
y2 = verts[vtx+1,1]
signed_area += (x1 * y2 - x2 * y1)/2
return (signed_area < 0)
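The loop above is the shoelace formula: the signed polygon area is negative for clockwise vertex order. A vectorized sketch of the same test:

```python
import numpy as np

def signed_area(verts):
    # Shoelace formula: positive for counter-clockwise vertex order
    x, y = verts[:, 0], verts[:, 1]
    return 0.5*np.sum(x*np.roll(y, -1) - np.roll(x, -1)*y)

ccw = np.asarray([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # counter-clockwise triangle
cw = ccw[::-1]                                          # same triangle, clockwise
```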
def head_to_potential(self):
for idx,h in enumerate([self.head_min-self.model.head_offset,self.head_max-self.model.head_offset]):
if self.model.aquifer_type == 'confined' or (self.model.aquifer_type == 'convertible' and h >= self.model.H):
# Strack 1989, Eq. 8.6
pot = self.k*self.model.H*h - 0.5*self.k*self.model.H**2
elif self.model.aquifer_type == 'unconfined' or (self.model.aquifer_type == 'convertible' and h < self.model.H):
# Strack 1989, Eq. 8.7
pot = 0.5*self.k*h**2
if idx == 0:
self.phi_min = pot
elif idx == 1:
self.phi_max = pot
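The two branches above are Strack's (1989) discharge potentials: Phi = k*H*h - 0.5*k*H**2 for confined flow (Eq. 8.6) and Phi = 0.5*k*h**2 for unconfined flow (Eq. 8.7). At h = H both give 0.5*k*H**2, so the convertible-aquifer switch is continuous. A minimal sketch (parameter values are illustrative):

```python
def discharge_potential(h, k, H, aquifer_type='convertible'):
    # Strack (1989) discharge potential for head h above the aquifer base
    if aquifer_type == 'confined' or (aquifer_type == 'convertible' and h >= H):
        return k*H*h - 0.5*k*H**2   # Eq. 8.6, confined
    return 0.5*k*h**2               # Eq. 8.7, unconfined

k, H = 1e-4, 10.0
phi_conf = discharge_potential(H, k, H, 'confined')
phi_unconf = discharge_potential(H, k, H, 'unconfined')
```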
def complexify(self,z):
"""
This function takes the provided line or polygon and converts it into
        a complex-valued vector, if it isn't already provided as one.
"""
import numpy as np
if not np.iscomplex(z).any():
if len(z.shape) != 2 or z.shape[1] != 2:
raise Exception('Shape format not understood. Provide shape vertices either as a complex vector, or as a N-by-2 real numpy array.')
else:
z = z[:,0] + 1j*z[:,1]
return z
def plot(self,label_offset = 1.1,fontsize=12,fontcolor='xkcd:dark grey',
pointcolor='xkcd:dark grey',pointsize=50,horizontalalignment='center',
verticalalignment='center',color_low = 'xkcd:cerulean',
color_high = 'xkcd:orangish red',**kwargs):
"""
This function plots the Möbius control/reference points on the unit disk.
"""
import numpy as np
import matplotlib.pyplot as plt
import math
# Get the coordinates of the control points
z_A = (1-1j)/np.abs(1-1j)
z_A = self.moebius(z_A,inverse=False)*self.model.domain_radius + self.model.domain_center
z_A = np.asarray([np.real(z_A),np.imag(z_A)])
z_B = (1+1j)/np.abs(1+1j)
z_B = self.moebius(z_B,inverse=False)*self.model.domain_radius + self.model.domain_center
z_B = np.asarray([np.real(z_B),np.imag(z_B)])
z_C = (-1+1j)/np.abs(-1+1j)
z_C = self.moebius(z_C,inverse=False)*self.model.domain_radius + self.model.domain_center
z_C = np.asarray([np.real(z_C),np.imag(z_C)])
z_D = (-1-1j)/np.abs(-1-1j)
z_D = self.moebius(z_D,inverse=False)*self.model.domain_radius + self.model.domain_center
z_D = np.asarray([np.real(z_D),np.imag(z_D)])
dc = self.model.domain_center
if np.isscalar(dc):
dc = np.asarray([np.real(dc),np.imag(dc)])
a_low = np.linspace(
math.atan2(
z_C[1]-dc[1],
z_C[0]-dc[0]),
math.atan2(
z_D[1]-dc[1],
z_D[0]-dc[0]),
360)
if abs(a_low[0]-a_low[-1]) > np.pi:
a_low = np.concatenate((
np.linspace(np.min(a_low),-np.pi,360),
np.linspace(np.pi,np.max(a_low),360) ))
a_high = np.linspace(
math.atan2(
z_A[1]-dc[1],
z_A[0]-dc[0]),
math.atan2(
z_B[1]-dc[1],
z_B[0]-dc[0]),
360)
if abs(a_high[0]-a_high[-1]) > np.pi:
a_high = np.concatenate((
np.linspace(np.min(a_high),-np.pi,360),
np.linspace(np.pi,np.max(a_high),360) ))
plt.plot(np.cos(a_low)*self.model.domain_radius + dc[0],
np.sin(a_low)*self.model.domain_radius + dc[1],
color = color_low,linewidth=2)
plt.plot(np.cos(a_high)*self.model.domain_radius + dc[0],
np.sin(a_high)*self.model.domain_radius + dc[1],
color = color_high,linewidth=2)
plt.scatter(z_A[0],z_A[1],s=pointsize,color=pointcolor,zorder=11,**kwargs)
plt.scatter(z_B[0],z_B[1],s=pointsize,color=pointcolor,zorder=11,**kwargs)
plt.scatter(z_C[0],z_C[1],s=pointsize,color=pointcolor,zorder=11,**kwargs)
plt.scatter(z_D[0],z_D[1],s=pointsize,color=pointcolor,zorder=11,**kwargs)
plt.text(z_A[0]*label_offset,z_A[1]*label_offset,'A',fontsize=fontsize,
horizontalalignment=horizontalalignment,verticalalignment=verticalalignment,
color=fontcolor,**kwargs)
plt.text(z_B[0]*label_offset,z_B[1]*label_offset,'B',fontsize=fontsize,
horizontalalignment=horizontalalignment,verticalalignment=verticalalignment,
color=fontcolor,**kwargs)
plt.text(z_C[0]*label_offset,z_C[1]*label_offset,'C',fontsize=fontsize,
horizontalalignment=horizontalalignment,verticalalignment=verticalalignment,
color=fontcolor,**kwargs)
plt.text(z_D[0]*label_offset,z_D[1]*label_offset,'D',fontsize=fontsize,
horizontalalignment=horizontalalignment,verticalalignment=verticalalignment,
color=fontcolor,**kwargs)
#%%
class ElementUniformBase:
def __init__(self,model,alpha=0,head_min=0,head_max=1,k=1,
variables=[],priors=[]):
"""
This implements the uniform base flow element.
Parameters:
model - [object] : the model object to which this element is added
alpha - [scalar] : direction of the uniform flow in counter-clockwise radians from East
head_min - [scalar] : minimum hydraulic head (mapped to -1 in the unit square)
head_max - [scalar] : maximum hydraulic head (mapped to +1 in the unit square)
k - [scalar] : background hydraulic conductivity in canonical units (e.g., 1E-4 [length]/[time])
If MCMC is used, we further require:
        variables - [list] : list of variables which are inferred by MCMC, example: ['alpha','head_min']; leave empty if unused
        priors - [list] : list of dictionaries, one for each unknown 'variable'; each dictionary must contain the name of distribution (in scipy.stats) and the relevant parameters as keys
"""
import numpy as np
# Append the base to the elementlist
self.model = model
model.elementlist.append(self)
# Set orientation value
self.alpha = alpha
# Set potential scaling variables
self.head_min = head_min
self.head_max = head_max
# Assign the hydraulic conductivity of the base model
self.k = k
# The model requires the base flow in terms of hydraulic potential (phi)
# The function head_to_potential extracts the following variables:
# phi_min hydraulic potential corresponding to head_min
# phi_max hydraulic potential corresponding to head_max
self.head_to_potential()
# Check input for validity
self.check_input()
self.variables = variables
self.priors = priors
if len(self.variables) > 0:
# There are some model variables specified
for idx,var in enumerate(self.variables):
self.model.num_params += 1
exec("self.model.params += [self.%s]" % var)
self.model.variables += [var]
self.model.priors += [self.priors[idx]]
if 'name' in list(self.priors[idx].keys()):
self.model.param_names += [self.priors[idx]['name']]
else:
self.model.param_names += ['unknown']
def update(self):
# Update potential
self.head_to_potential()
def check_input(self):
        # Check that head_min does not exceed head_max
        if self.head_min > self.head_max:
            raise Exception('Minimum head head_min is larger than maximum head head_max.')
def evaluate(self,z):
import numpy as np
import copy
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# Coordinates in canonical space are the start values
z_canonical = copy.copy(z)
# head_min and head_max lie on opposite points of the circular model domain
Q = (self.phi_max-self.phi_min)/(self.model.domain_radius*2)
# Rotate the flow field
z = Q*z_canonical*np.exp(-1j*self.alpha)
# And offset it to phi_min
z = z + (self.phi_max-self.phi_min)/2 + self.phi_min
return z
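The method above implements the complex potential Omega(z) = Q*exp(-1j*alpha)*z + (phi_max - phi_min)/2 + phi_min with Q = (phi_max - phi_min)/(2*R), so the hydraulic potential runs from phi_min to phi_max across the domain circle along direction alpha. A standalone sketch with hypothetical values (domain centered at the origin):

```python
import numpy as np

phi_min, phi_max = 0.0, 1.0   # potential range (hypothetical)
R, alpha = 100.0, 0.3         # domain radius and flow direction (hypothetical)

def uniform_potential(z):
    # Complex potential of uniform flow, scaled to [phi_min, phi_max] across the disk
    Q = (phi_max - phi_min)/(2*R)
    return Q*z*np.exp(-1j*alpha) + (phi_max - phi_min)/2 + phi_min

# The potential attains its extremes at opposite ends of the domain circle
hi = np.real(uniform_potential(R*np.exp(1j*alpha)))
lo = np.real(uniform_potential(-R*np.exp(1j*alpha)))
```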
def evaluate_gradient(self,z,derivatives = 'all'):
import numpy as np
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# head_min and head_max lie on opposite points of the circular model domain
Q = (self.phi_max-self.phi_min)/(self.model.domain_radius*2)
# Extract the derivative
grad = Q*np.exp(-1j*self.alpha)
if derivatives == 'phi':
dudx = np.real(grad)
dudy = -np.imag(grad)
grad = dudx + 1j*dudy
elif derivatives == 'psi':
dvdx = -np.imag(np.conjugate(grad))
dvdy = np.real(np.conjugate(grad))
grad = dvdx + 1j*dvdy
elif derivatives != 'all':
raise Exception("'derivatives' has to be either 'all' (complex derivative), " + \
"'phi' (hydraulic potential partial derivatives), or 'psi' " + \
"(flow line partial derivatives)")
return grad
def head_to_potential(self):
for idx,h in enumerate([self.head_min-self.model.head_offset,self.head_max-self.model.head_offset]):
if self.model.aquifer_type == 'confined' or (self.model.aquifer_type == 'convertible' and h >= self.model.H):
# Strack 1989, Eq. 8.6
pot = self.k*self.model.H*h - 0.5*self.k*self.model.H**2
elif self.model.aquifer_type == 'unconfined' or (self.model.aquifer_type == 'convertible' and h < self.model.H):
# Strack 1989, Eq. 8.7
pot = 0.5*self.k*h**2
if idx == 0:
self.phi_min = pot
elif idx == 1:
self.phi_max = pot
def complexify(self,z):
"""
This function takes the provided line or polygon and converts it into
        a complex-valued vector, if it isn't already provided as one.
"""
import numpy as np
if not np.iscomplex(z).any():
if len(z.shape) != 2 or z.shape[1] != 2:
raise Exception('Shape format not understood. Provide shape vertices either as a complex vector, or as a N-by-2 real numpy array.')
else:
z = z[:,0] + 1j*z[:,1]
return z
def plot(self,color_low = 'xkcd:cerulean',color_high='xkcd:orangish red',
s = 50, **kwargs):
import numpy as np
import matplotlib.pyplot as plt
        high = np.asarray([
            np.cos(self.alpha)*self.model.domain_radius + np.real(self.model.domain_center),
            np.sin(self.alpha)*self.model.domain_radius + np.imag(self.model.domain_center)])
        low = np.asarray([
            -np.cos(self.alpha)*self.model.domain_radius + np.real(self.model.domain_center),
            -np.sin(self.alpha)*self.model.domain_radius + np.imag(self.model.domain_center)])
        plt.scatter(low[0],low[1],s=s,color=color_low,zorder=11,**kwargs)
        plt.scatter(high[0],high[1],s=s,color=color_high,zorder=11,**kwargs)
        plt.arrow(low[0]*0.9,low[1]*0.9,low[0]*0.15,low[1]*0.15,color=color_low,
                  zorder=11,head_width = 50,width = 20)
        plt.arrow(high[0]*1.1,high[1]*1.1,-high[0]*0.15,-high[1]*0.15,color=color_high,
                  zorder=11,head_width = 50,width = 20)
#%%
class ElementHeadBoundary:
def __init__(self, model, line, line_ht, segments = None, influence = None,
connectivity = 1, connectivity_normdist = None,
variables = [], priors=[]):
"""
This implements a prescribed head boundary.
Parameters:
model - [object] : the model object to which this element is added
line - [array] : either a real N-by-2 matrix or complex vector of length N specifying the vertices of a line string tracing the element's path
line_ht - [vector] : a real vector of length N specifying the corresponding prescribed hydraulic heads at the line string's vertices
segments - [scalar] : this element has a subdivision function; if a finer resolution than the number of segments in 'line' is desired, specify a larger number here; the function will then subdivide 'line' and 'line_ht' so as to create segments of as equal length as possible
influence - [scalar] : radius of zero influence of each line segment; set to twice the model domain_radius if unspecified
connectivity - [scalar] : either a real scalar or vector of length M, specifying if the aquifer is fully connected (1) or unconnected (0) to the HeadBoundary
connectivity_normdist - [vector] : if connectivity is a vector, this specifies the normalized distances 0,...,d,...,1 along which the M connectivity nodes are placed; connectivity values are then linearly interpolated for each segment
If MCMC is used, we further require:
variables - [list] : list of variables which are inferred by MCMC, example: ['line_ht']; leave empty if unused
priors - [list] : list of dictionaries, one for each unknown variable in 'variables'; each dictionary must contain the name of a distribution (in scipy.stats) and the relevant parameters as keys
"""
import numpy as np
from scipy.interpolate import interp1d
import copy
# Connect this element to the solver
self.model = model
model.elementlist.append(self)
model.linear_solver = True
# Prepare the stochastic variables
self.variables = variables
self.priors = priors
# Initialize the head target and connectivity variables
self.line_ht = line_ht
self.connectivity = connectivity
if np.isscalar(self.connectivity): # Connectivity provided is uniform
self.connectivity_uniform = True
else: # Connectivity provided
self.connectivity_uniform = False
# Check if normalized distances were provided
if connectivity_normdist is None:
raise Exception('If connectivity is not uniform, a vector of equal length containing normalized distances (e.g., [0., 0.25, 0.6, 1.]) must be specified.')
# Check if connectivity_normdist is valid
if np.min(connectivity_normdist) < 0 or np.max(connectivity_normdist) > 1:
raise Exception('connectivity_normdist values must be between 0 and 1. Current values: '+str(connectivity_normdist))
# Check if connectivity_normdist is sorted
if not (connectivity_normdist == np.sort(connectivity_normdist)).all():
raise Exception('connectivity_normdist values must be provided in ascending order. Current values: '+str(connectivity_normdist))
self.connectivity_normdist = connectivity_normdist
# ---------------------------------------------------------------------
# Subdivide the provided no flow boundary into #segments pieces
# Complexify the line, if it wasn't already complex
line = self.complexify(line)
# The subdivision algorithm requires the line coordinates as a real N-by-2 matrix
line = np.column_stack((
np.real(line)[:,np.newaxis],
np.imag(line)[:,np.newaxis]))
# Make a copy of the line
self.line_raw = copy.copy(line)
# Check if a subdivision has been specified
if segments is None: # No subdivision required
self.segments = line.shape[0]-1
else: # Otherwise, set target
self.segments = segments
# A number of consistency checks
if self.segments < self.line_raw.shape[0]-1:
raise Exception('Number of segments '+str(self.segments)+' must not be smaller than the number of segments in the provided line ('+str(self.line_raw.shape[0]-1)+').')
if len(line_ht) != line.shape[0]:
raise Exception('Number of head prescriptions must equal number of vertices: '+str(len(line_ht))+' =/= '+str(line.shape[0]))
if self.segments > self.line_raw.shape[0]-1:
# Subdivide the line
self.line = self.subdivide_line(np.column_stack((line,self.line_ht)),self.segments)
self.line_c = copy.copy(self.line[:,0] + 1j*self.line[:,1])
self.line_ht = copy.copy(self.line[:,2])
else:
# Otherwise, reconstruct the line format
self.line = self.line_raw.copy()
self.line_c = self.line[:,0] + 1j*self.line[:,1]
self.line_ht = line_ht
# ---------------------------------------------------------------------
# Assign the initial strength variables for each segment
self.strength = np.ones(self.segments)
# Prepare the influence range for this line sink
if influence is None:
# If no influence range is specified, set it to twice the domain radius
# to ensure that no point in the model domain will lie outside this range
self.influence = self.model.domain_radius*2
else:
self.influence = influence
# Prepare a few variables for this element
self.L = [] # Length of each line segment
self.zc = [] # Center of each line segment
self.head_target = [] # Head target at each line segment
for seg in range(self.segments):
self.L += [np.abs(self.line_c[seg+1] - self.line_c[seg])]
self.zc += [(self.line_c[seg]+self.line_c[seg+1])/2]
self.head_target += [(self.line_ht[seg]+self.line_ht[seg+1])/2]
# Convert list of segment centers to array
self.zc = np.asarray(self.zc)
self.head_target = np.asarray(self.head_target)
# Now form a vector of cumulative distances
self.cumdist = []
for seg in range(self.segments):
if seg == 0:
self.cumdist.append(np.abs(self.zc[0]-self.line_c[0]))
else:
self.cumdist.append(np.abs(self.zc[seg]-self.zc[seg-1]))
self.cumdist = np.cumsum(np.asarray(self.cumdist))
self.cumdist /= (self.cumdist[-1] + np.abs(self.zc[-1]-self.line_c[-1]))
if not self.connectivity_uniform:
# Interpolate the connectivity
from scipy.interpolate import interp1d
itp = interp1d(self.connectivity_normdist,self.connectivity)
self.connectivity_interpolated = itp(self.cumdist)
# Convert the head targets to potential targets
self.set_potential_target()
# Check if the prior matches the number of parameters
if len(self.priors) != len(self.variables):
raise Exception('Number of priors must match number of unknown variables. Number of priors: '+str(len(self.priors))+' / Number of unknown variables: '+str(len(self.variables)))
# Go through all elements
if len(self.variables) > 0:
# There are some model variables specified
for idx,var in enumerate(self.variables):
self.model.num_params += 1
self.model.params += [getattr(self, var)]
self.model.priors += [self.priors[idx]]
self.model.variables += [var]
if 'name' in list(self.priors[idx].keys()):
self.model.param_names += [self.priors[idx]['name']]
else:
self.model.param_names += ['unknown']
def update(self):
import numpy as np
# Update the potential targets
self.set_potential_target()
if not self.connectivity_uniform:
# Interpolate the connectivity
from scipy.interpolate import interp1d
itp = interp1d(self.connectivity_normdist,self.connectivity)
self.connectivity_interpolated = itp(self.cumdist)
def evaluate_gradient(self,z,detailed = False, derivatives = 'all', override_parameters = False):
import numpy as np
import copy
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# 'detailed' returns the results as a matrix instead of a summed vector
if detailed:
grad = np.zeros((self.segments,z.shape[0]), dtype = complex)
else:
grad = np.zeros(z.shape, dtype = complex)
for seg in range(self.segments):
Z = (2*z-(self.line_c[seg]+self.line_c[seg+1]))/(self.line_c[seg+1]-self.line_c[seg])
Z[np.where(np.abs(np.imag(Z)) < 1E-10)] = np.real(Z[np.where(np.abs(np.imag(Z)) < 1E-10)])
# Now get the gradient d omega(z)/dZ
if self.connectivity_uniform:
# If the connectivity is uniform, i.e. does not vary along the boundary
if not override_parameters:
temp = self.strength[seg]*self.connectivity*self.L[seg]/4/np.pi*(np.log(Z+1) - np.log(Z-1))
else:
temp = self.L[seg]*self.connectivity/4/np.pi*(np.log(Z+1) - np.log(Z-1))
else:
# If the connectivity is not uniform, i.e. does vary along the boundary
if not override_parameters:
temp = self.strength[seg]*self.connectivity_interpolated[seg]*self.L[seg]/4/np.pi*(np.log(Z+1) - np.log(Z-1))
else:
temp = self.L[seg]*self.connectivity_interpolated[seg]/4/np.pi*(np.log(Z+1) - np.log(Z-1))
# To get d omega(z)/dz we can use the product rule
# d omega(z)/dz = d omega(z)/dZ * dZ/dz
# hence:
temp = temp*2/(self.line_c[seg+1]-self.line_c[seg])
if detailed:
grad[seg,:] = copy.copy(temp)
else:
grad += temp
if derivatives == 'phi':
dudx = np.real(grad)
dudy = -np.imag(grad)
grad = dudx + 1j*dudy
elif derivatives == 'psi':
dvdx = -np.imag(np.conjugate(grad))
dvdy = np.real(np.conjugate(grad))
grad = dvdx + 1j*dvdy
elif derivatives != 'all':
raise Exception("'derivatives' has to be either 'all' (complex derivative), " + \
"'phi' (hydraulic potential partial derivatives), or 'psi' " + \
"(flow line partial derivatives)")
return grad
def evaluate(self,z,detailed = False, override_parameters = False,
evaluate_self = False):
import copy
import numpy as np
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
if detailed:
res = np.zeros((self.segments,z.shape[0]), dtype = complex)
else:
res = np.zeros(z.shape, dtype = complex)
for seg in range(self.segments):
# Convert to local coordinates
Z = (2*z-(self.line_c[seg]+self.line_c[seg+1]))/(self.line_c[seg+1]-self.line_c[seg])
Z[np.where(np.abs(np.imag(Z)) < 1E-10)] = np.real(Z[np.where(np.abs(np.imag(Z)) < 1E-10)])
if self.connectivity_uniform:
# If the connectivity is uniform, i.e. does not vary along the boundary
# Evaluate the complex potential offset by a distance in the
if not override_parameters:
temp = self.strength[seg]*self.connectivity*self.L[seg]/4/np.pi * (\
(Z+1)*np.log(Z+1) - \
(Z-1)*np.log(Z-1) - \
(2/self.L[seg]*self.influence+2)*np.log(2/self.L[seg]*self.influence+2) + \
(2/self.L[seg]*self.influence)*np.log(2/self.L[seg]*self.influence))
elif override_parameters and evaluate_self:
# This is a unique addition. In order to make the connectivity
# work as an element in conjunction with all others, we must
# 'trick' it into thinking its connectivity is 1. This is only
# ever activated to evaluate the effects of a prescribed head
# boundary on itself (a diagonal block in setting up the matrix
# for the linear system of equations).
temp = self.L[seg]/4/np.pi * (\
(Z+1)*np.log(Z+1) - \
(Z-1)*np.log(Z-1) - \
(2/self.L[seg]*self.influence+2)*np.log(2/self.L[seg]*self.influence+2) + \
(2/self.L[seg]*self.influence)*np.log(2/self.L[seg]*self.influence))
else:
temp = self.L[seg]*self.connectivity/4/np.pi * (\
(Z+1)*np.log(Z+1) - \
(Z-1)*np.log(Z-1) - \
(2/self.L[seg]*self.influence+2)*np.log(2/self.L[seg]*self.influence+2) + \
(2/self.L[seg]*self.influence)*np.log(2/self.L[seg]*self.influence))
else:
# If the connectivity is not uniform, i.e. does vary along the boundary
# Evaluate the complex potential offset by a distance in the
if not override_parameters:
temp = self.strength[seg]*self.connectivity_interpolated[seg]*self.L[seg]/4/np.pi * (\
(Z+1)*np.log(Z+1) - \
(Z-1)*np.log(Z-1) - \
(2/self.L[seg]*self.influence+2)*np.log(2/self.L[seg]*self.influence+2) + \
(2/self.L[seg]*self.influence)*np.log(2/self.L[seg]*self.influence))
elif override_parameters and evaluate_self:
# This is a unique addition. In order to make the connectivity
# work as an element in conjunction with all others, we must
# 'trick' it into thinking its connectivity is 1. This is only
# ever activated to evaluate the effects of a prescribed head
# boundary on itself (a diagonal block in setting up the matrix
# for the linear system of equations).
temp = self.L[seg]/4/np.pi * (\
(Z+1)*np.log(Z+1) - \
(Z-1)*np.log(Z-1) - \
(2/self.L[seg]*self.influence+2)*np.log(2/self.L[seg]*self.influence+2) + \
(2/self.L[seg]*self.influence)*np.log(2/self.L[seg]*self.influence))
else:
temp = self.L[seg]*self.connectivity_interpolated[seg]/4/np.pi * (\
(Z+1)*np.log(Z+1) - \
(Z-1)*np.log(Z-1) - \
(2/self.L[seg]*self.influence+2)*np.log(2/self.L[seg]*self.influence+2) + \
(2/self.L[seg]*self.influence)*np.log(2/self.L[seg]*self.influence))
# If evaluated directly at the endpoints, the result would be NaN
# They should be zero, see Bakker 2009
temp = np.nan_to_num(temp)
if detailed:
res[seg,:] = copy.copy(temp)
else:
res += temp
return res
def subdivide_line(self,line,segments):
import numpy as np
import copy
# If array is one-dimensional, reshape it appropriately
if len(line.shape) == 1: line = line.reshape((line.shape[0],1))
D = line.shape[1]
# Calculate the lengths of original segments
length = [np.linalg.norm(line[seg,:]-line[seg+1,:]) for seg in range(line.shape[0]-1)]
# Normalize the length of the original segments
length /= np.sum(length)
# Calculate the number of new segments we must create, the line already has
# (#vertices-1) segments. We only require the difference
new_segments = segments - line.shape[0] + 1
# Calculate where those segments should go
bins = np.concatenate(( [0] , np.cumsum(length) ))
# Extend the bin range slightly to prevent errors from arithmetic under- or overflow
bins[0] -= 1E-10
bins[-1] += 1E-10
# Distribute vertices along the segments
x = np.linspace(0,1,new_segments)
num_vertices = []
for seg in range(line.shape[0]-1):
if seg == 0:
num_vertices += [len(np.where(x <= bins[1])[0])]
else:
num_vertices += [len(np.where(np.logical_and(
x > bins[seg],
x <= bins[seg+1]))[0])]
# Subdivide the original segments
new_vertices = []
for seg in range(line.shape[0]-1):
temp = None
for d in range(D):
if temp is None:
temp = copy.copy(line[seg,d] + (line[seg+1,d]-line[seg,d]) * np.linspace(0,1,num_vertices[seg]+2)[1:-1])
else:
temp = np.column_stack((
temp,
copy.copy(line[seg,d] + (line[seg+1,d]-line[seg,d]) * np.linspace(0,1,num_vertices[seg]+2)[1:-1])))
new_vertices += [copy.copy(temp)]
# Create the seed for the new line
new_line = copy.copy(line[0,:].reshape((1,D)) )
# Assemble the new line
for seg in range(line.shape[0]-1):
# Add the new segments, then the next original vertex
new_line = np.vstack((
new_line,
new_vertices[seg],
line[seg+1,:]))
return new_line
def set_potential_target(self):
import copy
import numpy as np
# Get the hydraulic conductivities at the segment control points
for e in self.model.elementlist:
if isinstance(e, ElementMoebiusBase) or isinstance(e, ElementUniformBase):
temp_k = np.ones(self.zc.shape)*e.k
for e in self.model.elementlist:
if isinstance(e, ElementInhomogeneity):
inside = e.are_points_inside_polygon(self.zc)
temp_k[inside] = e.k
# Create a list of hydraulic potential targets
self.phi_target = copy.copy(self.head_target - self.model.head_offset)
if self.model.aquifer_type == 'confined':
# Strack 1989, Eq. 8.6
self.phi_target = temp_k*self.model.H*self.phi_target - \
0.5*temp_k*self.model.H**2
elif self.model.aquifer_type == 'unconfined':
# Strack 1989, Eq. 8.7
self.phi_target = 0.5*temp_k*self.phi_target**2
elif self.model.aquifer_type == 'convertible':
# Find out which points are confined and which are unconfined
index_conf = np.where(self.phi_target >= self.model.H)[0]
index_unconf = np.where(self.phi_target < self.model.H)[0]
# Account for the confined points
# confined: Strack 1989, Eq. 8.6
self.phi_target[index_conf] = \
temp_k[index_conf]*self.model.H*self.phi_target[index_conf] - \
0.5*temp_k[index_conf]*self.model.H**2
# unconfined: Strack 1989, Eq. 8.7
self.phi_target[index_unconf] = \
0.5*temp_k[index_unconf]*self.phi_target[index_unconf]**2
def complexify(self,z):
"""
This function takes the provided line or polygon and converts it into
a complex-valued vector, if it isn't already provided as one.
"""
import numpy as np
if not np.iscomplex(z).any():
if len(z.shape) != 2 or z.shape[1] != 2:
raise Exception('Shape format not understood. Provide shape vertices either as a complex vector, or as a N-by-2 real numpy array.')
else:
z = z[:,0] + 1j*z[:,1]
return z
def plot(self,col='xkcd:kermit green',zorder=12,linewidth=5,**kwargs):
import matplotlib.pyplot as plt
import numpy as np
plt.plot(np.real(self.line_c),np.imag(self.line_c),color=col,
zorder=zorder,linewidth=linewidth,**kwargs)
#%%
class ElementNoFlowBoundary:
def __init__(self, model, line, segments = None, variables = [], priors=[]):
"""
This implements a no-flow boundary.
Parameters:
model - [object] : the model object to which this element is added
line - [array] : either a real N-by-2 matrix or complex vector of length N specifying the vertices of a line string tracing the element's path
segments - [scalar] : this element has a subdivision function; if a finer resolution than the number of segments in 'line' is desired, specify a larger number here; the function will then subdivide 'line' so as to create segments of as equal length as possible
If MCMC is used, we further require:
variables - [list] : list of variables which are inferred by MCMC, example: ['line_ht']; leave empty if unused
priors - [list] : list of dictionaries, one for each unknown variable in 'variables'; each dictionary must contain the name of a distribution (in scipy.stats) and the relevant parameters as keys
"""
import numpy as np
from scipy.interpolate import interp1d
import copy
# Append this element to the specified model
self.model = model
model.elementlist.append(self)
model.linear_solver = True
# ---------------------------------------------------------------------
# Subdivide the provided no flow boundary into segments pieces
# Complexify the line, if it wasn't already complex
line = self.complexify(line)
# The subdivision algorithm requires the line coordinates as a real N-by-2 matrix
line = np.column_stack((
np.real(line)[:,np.newaxis],
np.imag(line)[:,np.newaxis]))
self.line_raw = copy.copy(line)
if segments is None:
self.segments = line.shape[0]-1
else:
self.segments = segments
if self.segments < self.line_raw.shape[0]-1:
raise Exception('Prescribed number of line segments '+str(self.segments)+' must not be smaller than the base number of segments '+str(line.shape[0]-1)+'.')
if self.segments > self.line_raw.shape[0]-1:
# Subdivide the line
self.line = self.subdivide_line(line,self.segments)
self.line_c = self.line[:,0] + 1j*self.line[:,1]
else:
self.line = self.line_raw.copy()
self.line_c = self.line[:,0] + 1j*self.line[:,1]
# Also get the normal vector components to each segment
self.line_nvec = self.line[:,1] - 1j*self.line[:,0]
self.line_nvec = self.line_nvec/np.abs(self.line_nvec)
# ---------------------------------------------------------------------
# Get strength parameters for each vertex
self.strength = np.ones(self.segments)
self.zc = []
self.segment_nvec = []
self.L = []
for seg in range(self.segments):
self.zc += [(self.line_c[seg]+self.line_c[seg+1])/2]
# Calculate the normal vector to this segment
self.segment_nvec += [(self.line_c[seg]-self.line_c[seg+1])]
self.segment_nvec[-1]= [np.imag(self.segment_nvec[-1])-1j*np.real(self.segment_nvec[-1])]
self.L += [np.abs(self.line_c[seg+1] - self.line_c[seg])]
self.zc = np.asarray(self.zc)
# Extract target variables
self.variables = variables
self.priors = priors
self.L = np.asarray(self.L)
# Check if the prior matches the number of parameters
if len(self.priors) != len(self.variables):
raise Exception('Number of priors must match number of unknown variables. Number of priors: '+str(len(self.priors))+' / Number of unknown variables: '+str(len(self.variables)))
# Go through all elements
if len(self.variables) > 0:
# There are some model variables specified
for idx,var in enumerate(self.variables):
self.model.num_params += 1
self.model.params += [getattr(self, var)]
self.model.priors += [self.priors[idx]]
self.model.variables += [var]
if 'name' in list(self.priors[idx].keys()):
self.model.param_names += [self.priors[idx]['name']]
else:
self.model.param_names += ['unknown']
def update(self):
pass
def evaluate_gradient(self,z,detailed = False,derivatives = 'all',override_parameters = False):
import numpy as np
import copy
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# 'detailed' returns the results as a matrix instead of a summed vector
if detailed:
grad = np.zeros((self.segments,z.shape[0]), dtype = complex)
else:
grad = np.zeros(z.shape, dtype = complex)
# Go through all line segments
for seg in range(self.segments):
# Convert z to the local variable Z
Z = (2*copy.copy(z)-(self.line_c[seg]+self.line_c[seg+1]))/(self.line_c[seg+1]-self.line_c[seg])
# Add to the result file d Omega(Z) / dZ
if not override_parameters:
temp = 1j*self.strength[seg]/(np.pi-np.pi*Z**2)
else:
temp = 1j/(np.pi-np.pi*Z**2)
# Multiply the result with dZ/dz to obtain dOmega(Z)/dz
temp = temp*2/(self.line_c[seg+1]-self.line_c[seg])
if detailed:
grad[seg,:] = copy.copy(temp)
else:
grad += copy.copy(temp)
if derivatives == 'phi':
dudx = np.real(grad)
dudy = -np.imag(grad)
grad = dudx + 1j*dudy
elif derivatives == 'psi':
dvdx = -np.imag(np.conjugate(grad))
dvdy = np.real(np.conjugate(grad))
grad = dvdx + 1j*dvdy
elif derivatives != 'all':
raise Exception("'derivatives' has to be either 'all' (complex derivative), " + \
"'phi' (hydraulic potential partial derivatives), or 'psi' " + \
"(flow line partial derivatives)")
return grad
def evaluate(self,z,detailed = False,override_parameters = False):
import numpy as np
import copy
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# 'detailed' returns the results as a matrix instead of a summed vector
if detailed:
res = np.zeros((self.segments,z.shape[0]), dtype = complex)
else:
res = np.zeros(z.shape, dtype = complex)
# Go through all line segments
for seg in range(self.segments):
# Convert z to the local variable Z
Z = (2*copy.copy(z)-(self.line_c[seg]+self.line_c[seg+1]))/(self.line_c[seg+1]-self.line_c[seg])
# Evaluate the two logarithmic terms; the singularities at the segment
# endpoints (Z = -1 and Z = +1) produce NaNs, which are zeroed out here
term_1 = np.nan_to_num((Z+1)*np.log((Z-1)/(Z+1)))
term_2 = np.nan_to_num((Z-1)*np.log((Z-1)/(Z+1)))
# Assemble this segment's contribution to the complex potential
if not override_parameters:
temp = self.strength[seg]/(4*np.pi*1j) * \
(term_1 - term_2)
else:
temp = 1/(4*np.pi*1j) * \
(term_1 - term_2)
if detailed:
res[seg,:] = copy.copy(temp)
else:
res += copy.copy(temp)
return res
def subdivide_line(self,line,segments):
import numpy as np
import copy
# If array is one-dimensional, reshape it appropriately
if len(line.shape) == 1: line = line.reshape((line.shape[0],1))
D = line.shape[1]
# Calculate the lengths of original segments
length = [np.linalg.norm(line[seg,:]-line[seg+1,:]) for seg in range(line.shape[0]-1)]
# Normalize the length of the original segments
length /= np.sum(length)
# Calculate the number of new segments we must create, the line already has
# (#vertices-1) segments. We only require the difference
new_segments = segments - line.shape[0] + 1
# Calculate where those segments should go
bins = np.concatenate(( [0] , np.cumsum(length) ))
# Extend the bin range slightly to prevent errors from arithmetic under- or overflow
bins[0] -= 1E-10
bins[-1] += 1E-10
# Distribute vertices along the segments
x = np.linspace(0,1,new_segments)
num_vertices = []
for seg in range(line.shape[0]-1):
if seg == 0:
num_vertices += [len(np.where(x <= bins[1])[0])]
else:
num_vertices += [len(np.where(np.logical_and(
x > bins[seg],
x <= bins[seg+1]))[0])]
# Subdivide the original segments
new_vertices = []
for seg in range(line.shape[0]-1):
temp = None
for d in range(D):
if temp is None:
temp = copy.copy(line[seg,d] + (line[seg+1,d]-line[seg,d]) * np.linspace(0,1,num_vertices[seg]+2)[1:-1])
else:
temp = np.column_stack((
temp,
copy.copy(line[seg,d] + (line[seg+1,d]-line[seg,d]) * np.linspace(0,1,num_vertices[seg]+2)[1:-1])))
new_vertices += [copy.copy(temp)]
# Create the seed for the new line
new_line = copy.copy(line[0,:].reshape((1,D)) )
# Assemble the new line
for seg in range(line.shape[0]-1):
# Add the new segments, then the next original vertex
new_line = np.vstack((
new_line,
new_vertices[seg],
line[seg+1,:]))
return new_line
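# Example: subdividing a single straight segment into four equal pieces
# inserts three equally spaced interior vertices, e.g.
#   self.subdivide_line(np.array([[0., 0.], [1., 0.]]), segments = 4)
# returns vertices at x = 0, 0.25, 0.5, 0.75 and 1.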
def complexify(self,z):
"""
This function takes the provided line or polygon and converts it into
a complex-valued vector, if it isn't already provided as one.
"""
import numpy as np
if not np.iscomplex(z).any():
if len(z.shape) != 2 or z.shape[1] != 2:
raise Exception('Shape format not understood. Provide shape vertices either as a complex vector, or as a N-by-2 real numpy array.')
else:
z = z[:,0] + 1j*z[:,1]
return z
def plot(self,color='xkcd:dark grey',linewidth=5,zorder=10,**kwargs):
import matplotlib.pyplot as plt
import numpy as np
plt.plot(np.real(self.line_c),np.imag(self.line_c),color=color,
linewidth = linewidth, zorder = zorder, **kwargs)
#%%
class ElementInhomogeneity:
def __init__(self, model, polygon, segments = None, k = 0.1,
variables = [], priors=[], snap_distance = 1E-10,
zero_cutoff = 1E-10, snap = True):
"""
This implements a zonal hydraulic conductivity inhomogeneity.
Parameters:
model - [object] : the model object to which this element is added
polygon - [array] : either a real N-by-2 matrix or complex vector of length N specifying the vertices of a polygon tracing the element's shape
segments - [scalar] : this element has a subdivision function; if a finer resolution than the number of segments in 'polygon' is desired, specify a larger number here; the function will then subdivide 'polygon' so as to create segments of as equal length as possible
k - [scalar] : hydraulic conductivity inside the inhomogeneity in canonical units (e.g., 1E-5 [length units]/[time units])
If MCMC is used, we further require:
variables - [list] : list of variables which are inferred by MCMC, example: ['k']; leave empty if unused
priors - [list] : list of dictionaries, one for each unknown variable in 'variables'; each dictionary must contain the name of a distribution (in scipy.stats) and the relevant parameters as keys
"""
import numpy as np
import copy
import matplotlib.path
# Append this element to the specified model
self.model = model
model.elementlist.append(self)
model.linear_solver = True
# Prepare the polygon variable
self.polygon = polygon
self.polygon = self.complexify(self.polygon)
self.snap_distance = snap_distance
self.zero_cutoff = zero_cutoff
# Is the polygon closed? If not, close it temporarily
if np.abs(self.polygon[0]-self.polygon[-1]) > self.snap_distance:
self.polygon = np.asarray(list(self.polygon)+[self.polygon[0]])
# Also create an array with real coordinates
self.polygon_XY = np.column_stack((
np.real(copy.copy(self.polygon))[:,np.newaxis],
np.imag(copy.copy(self.polygon))[:,np.newaxis] ))
# Is the polygon counter-clockwise? If not, correct it
if self.are_vertices_clockwise(self.polygon_XY):
self.polygon = np.flip(self.polygon)
self.polygon_XY = np.flipud(self.polygon_XY)
# Do we wish to subdivide the polygon?
# First, check if the user specified a desired segment count
if segments is None:
self.segments = self.polygon.shape[0]-1
else:
self.segments = segments
if self.segments < self.polygon.shape[0]-1:
raise Exception('Prescribed number of line segments '+str(self.segments)+' must not be smaller than the number of polygon edges '+str(self.polygon.shape[0]-1)+'.')
# Subdivide the polygon, if desired
if self.segments > self.polygon.shape[0]-1:
self.polygon_XY = self.subdivide_line(self.polygon_XY,self.segments)
self.polygon = self.polygon_XY[:,0] + 1j*self.polygon_XY[:,1]
# Un-close the polygon again
self.polygon_XY = self.polygon_XY[:-1,:]
self.polygon = self.polygon[:-1]
# If vertex snapping is enabled, snap all outside vertices onto the domain edge
if snap:
self.snap_to_domain()
# This is a hack: We shrink the polygon by a small amount. This ensures
# that no issues arise from evaluating points directly on the boundary,
# and allows us to consider inhomogeneities directly bounding each other;
# there might be other ways to solve this issue alternatively
self.polygon_XY = self.shrink_polygon(
polygon = self.polygon_XY,
offset = 1E-10)
self.polygon = self.polygon_XY[:,0] + 1j*self.polygon_XY[:,1]
# The control points of the inhomogeneity are simply its vertices
# This is required for the linear solver
self.zc = self.polygon
# Raise an exception if this inhomogeneity intersects any of the previous
# inhomogeneities
for e in self.model.elementlist[:-1]:
if isinstance(e, ElementInhomogeneity):
if any(e.are_points_inside_polygon(self.zc)):
raise Exception('Inhomogeneities may not intersect each other.')
# Create a path with the edges of the polygon
# We can use this path to find out if evaluation points are inside
# or outside the inhomogeneity.
self.linepath = matplotlib.path.Path(self.polygon_XY)
# Get strength parameters for each vertex
self.strength = np.ones(self.segments)
# Assign the hydraulic conductivity of the inhomogeneity
self.k = k
# Extract target variables
self.variables = variables
self.priors = priors
# Prepare the matrix block containing the effect of this element onto
# itself for future use in solving the linear system. The matrix requires
# subtraction of the A_star variable from its diagonal entries for completion
self.block = self.matrix_contribution()
# Check if the prior matches the number of parameters
if len(self.priors) != len(self.variables):
raise Exception('Number of priors must match number of unknown variables. Number of priors: '+str(len(self.priors))+' / Number of unknown variables: '+str(len(self.variables)))
if len(self.variables) > 0:
# There are some model variables specified
for idx,var in enumerate(self.variables):
self.model.num_params += 1
self.model.params += [getattr(self, var)]
self.model.priors += [self.priors[idx]]
self.model.variables += [var]
if 'name' in list(self.priors[idx].keys()):
self.model.param_names += [self.priors[idx]['name']]
else:
self.model.param_names += ['unknown']
def update(self):
pass
def evaluate_gradient(self,z,detailed = False,derivatives = 'all',override_parameters = False):
import numpy as np
import copy
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# 'detailed' returns the results as a matrix instead of a summed vector
if detailed:
grad = np.zeros((self.segments,z.shape[0]), dtype = complex)
else:
grad = np.zeros(z.shape, dtype = complex)
# Go through all line segments
for seg in range(self.segments):
# Find the next (seg_plus) and previous (seg_minus) vertex
if seg == self.segments-1:
seg_plus = 0
seg_minus = seg-1
else:
seg_plus = seg+1
seg_minus = seg-1
if override_parameters:
# Calculate the gradient
temp = 1j/(2*np.pi)* (\
np.log((self.polygon[seg] - z)/(self.polygon[seg_minus]-z)) / \
(self.polygon[seg_minus]-self.polygon[seg]) - \
np.log((self.polygon[seg_plus] - z)/(self.polygon[seg]-z)) / \
(self.polygon[seg]-self.polygon[seg_plus]) )
else:
# Calculate the gradient
temp = self.strength[seg]*1j/(2*np.pi)* (\
np.log((self.polygon[seg] - z)/(self.polygon[seg_minus]-z)) / \
(self.polygon[seg_minus]-self.polygon[seg]) - \
np.log((self.polygon[seg_plus] - z)/(self.polygon[seg]-z)) / \
(self.polygon[seg]-self.polygon[seg_plus]) )
if detailed:
grad[seg,:] = copy.copy(temp)
else:
grad += copy.copy(temp)
if derivatives == 'phi':
dudx = np.real(grad)
dudy = -np.imag(grad)
grad = dudx + 1j*dudy
elif derivatives == 'psi':
dvdx = -np.imag(np.conjugate(grad))
dvdy = np.real(np.conjugate(grad))
grad = dvdx + 1j*dvdy
elif derivatives != 'all':
raise Exception("'derivatives' has to be either 'all' (complex derivative), " + \
"'phi' (hydraulic potential partial derivatives), or 'psi' " + \
"(flow line partial derivatives)")
return grad
def evaluate(self,z,detailed = False,override_parameters = False):
import numpy as np
import copy
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# Define the segment influence functions
def F(Z):
return np.nan_to_num(-0.5*(Z-1)*np.log((Z-1)/(Z+1))) - 1
def G(Z):
return np.nan_to_num(+0.5*(Z+1)*np.log((Z-1)/(Z+1))) + 1
# 'detailed' returns the results as a matrix instead of a summed vector
if detailed:
res = np.zeros((self.segments,z.shape[0]), dtype = complex)
else:
res = np.zeros(z.shape, dtype = complex)
# Go through all line segments
for seg in range(self.segments):
temp = np.zeros(z.shape, dtype=complex)
# Find the next (seg_plus) and previous (seg_minus) vertex
if seg == self.segments-1:
seg_plus = 0
seg_minus = seg-1
else:
seg_plus = seg+1
seg_minus = seg-1
# Get a vector of distances
dist = np.abs(z - self.polygon[seg])
if override_parameters:
# First, use the standard solution for points which aren't snapped to vertices
Z_before = \
(2*z - (self.polygon[seg_minus] + self.polygon[seg]))/(self.polygon[seg] - self.polygon[seg_minus])
Z_after = \
(2*z - (self.polygon[seg] + self.polygon[seg_plus]))/(self.polygon[seg_plus] - self.polygon[seg])
# # Prevent errors from underflow
# Z_before[np.where(np.abs(np.imag(Z_before)) < 1E-10)] = np.real(Z_before[np.where(np.abs(np.imag(Z_before)) < 1E-10)])
# Z_after[np.where(np.abs(np.imag(Z_after)) < 1E-10)] = np.real(Z_after[np.where(np.abs(np.imag(Z_after)) < 1E-10)])
# Z_before[np.where(np.abs(np.real(Z_before)) < 1E-10)] = 1j*np.imag(Z_before[np.where(np.abs(np.real(Z_before)) < 1E-10)])
# Z_after[np.where(np.abs(np.real(Z_after)) < 1E-10)] = 1j*np.imag(Z_after[np.where(np.abs(np.real(Z_after)) < 1E-10)])
temp = 1/(2*np.pi*1j)*(G(Z_before)+F(Z_after))
else:
# First, use the standard solution for points which aren't snapped to vertices
Z_before = \
(2*z - (self.polygon[seg_minus] + self.polygon[seg]))/(self.polygon[seg] - self.polygon[seg_minus])
Z_after = \
(2*z - (self.polygon[seg] + self.polygon[seg_plus]))/(self.polygon[seg_plus] - self.polygon[seg])
# # Prevent errors from underflow
# Z_before[np.where(np.abs(np.imag(Z_before)) < 1E-10)] = np.real(Z_before[np.where(np.abs(np.imag(Z_before)) < 1E-10)])
# Z_after[np.where(np.abs(np.imag(Z_after)) < 1E-10)] = np.real(Z_after[np.where(np.abs(np.imag(Z_after)) < 1E-10)])
# Z_before[np.where(np.abs(np.real(Z_before)) < 1E-10)] = 1j*np.imag(Z_before[np.where(np.abs(np.real(Z_before)) < 1E-10)])
# Z_after[np.where(np.abs(np.real(Z_after)) < 1E-10)] = 1j*np.imag(Z_after[np.where(np.abs(np.real(Z_after)) < 1E-10)])
temp = self.strength[seg]/(2*np.pi*1j)*(G(Z_before)+F(Z_after))
if detailed:
res[seg,:] = copy.copy(temp)
else:
res += copy.copy(temp)
self.res = res
return res
def matrix_contribution(self):
"""
This function writes a block into the matrix for the solution of the system
of linear equations. It only evaluates the influence of the element itself
onto itself. For its influence on other inhomogeneities, use the evaluate
function with detailed = True.
"""
import numpy as np
import copy
# The functions F and G sometimes return NaN, which we catch through
# np.nan_to_num. Suppress the associated warning messages.
import warnings
warnings.filterwarnings('ignore')
# Define the segment influence functions
def F(Z):
return np.nan_to_num(-0.5*(Z-1)*np.log((Z-1)/(Z+1))) - 1
def G(Z):
return np.nan_to_num(+0.5*(Z+1)*np.log((Z-1)/(Z+1))) + 1
# We evaluate this block at its own vertices
z = self.polygon
# Pre-allocate an empty matrix for the block
block = np.zeros((self.segments,self.segments))
self.angles = []
self.temp = []
# Go through all vertices in the polygon
for seg in range(self.segments):
# Set the previous, current, and next vertex of the polygon
if seg == self.segments-1:
seg_minus = seg-1
seg_center = seg
seg_plus = 0
else:
seg_minus = seg-1
seg_center = seg
seg_plus = seg+1
self.temp.append([self.polygon[seg_plus]-self.polygon[seg_center],
self.polygon[seg_center]-self.polygon[seg_minus]])
# To evaluate the effect of a vertex on itself, it is computationally
# cleanest to evaluate it in terms of angles; these angles are
# calculated according to Strack 1989, Eq. 35.29 and 35.30
newtemp = np.angle(self.polygon[seg_plus]-self.polygon[seg_center]) - \
np.angle(self.polygon[seg_center]-self.polygon[seg_minus])
if newtemp < -np.pi: newtemp += 2*np.pi
if newtemp > +np.pi: newtemp -= 2*np.pi
# self.angles.append(newtemp)
# Sometimes, numerical imprecision causes the angle to fall outside
# the range 0 and 2 pi; in that case, flip it back inside
# if newtemp < 0: newtemp += 2*np.pi
# if newtemp > 2*np.pi: newtemp -= 2*np.pi
newtemp -= np.pi
self.angles.append(newtemp)
# Write the diagonal entries of the matrix
block[seg,seg] = 1/(2*np.pi)*newtemp
# Here we would normally add the factor for the conductivity difference
# to the diagonal entries; since we only prepare the matrix here,
# we skip it
# # Determine the A_star variable (Strack 1989 35.4, 35.38)
# A_star = self.model.k/(self.k - self.model.k)
# block[seg,seg] -= A_star
# Then handle all off-diagonal contributions
for seg2 in range(self.segments):
# Skip the diagonal
if seg2 != seg:
# Get the indices of the past, current, and next vertex
if seg2 == self.segments-1:
seg_minus = seg2-1
seg_center = seg2
seg_plus = 0
else:
seg_minus = seg2-1
seg_center = seg2
seg_plus = seg2+1
# Calculate the local coordinates
Z_before = \
(2*z[seg] - (self.polygon[seg_minus] + self.polygon[seg_center]))/(self.polygon[seg_center] - self.polygon[seg_minus])
Z_after = \
(2*z[seg] - (self.polygon[seg_center] + self.polygon[seg_plus]))/(self.polygon[seg_plus] - self.polygon[seg_center])
# And write the result into the correct matrix entries
block[seg,seg2] = copy.copy(np.real(1/(2*np.pi*1j)*(G(Z_before)+F(Z_after))))
return block
def are_vertices_clockwise(self,line):
"""
This function takes a string of 2-D vertices of a polygon, provided as a
N x 2 numpy array, and returns a boolean specifying whether the vertices
are provided in clock-wise or counter-clock-wise order.
Parameters:
line - Required : numpy array of polygon vertices (N by 2)
"""
import numpy as np
signed_area = 0
for idx in range(line.shape[0]):
x1 = line[idx,0]
y1 = line[idx,1]
if idx == line.shape[0]-1:
x2 = line[0,0]
y2 = line[0,1]
else:
x2 = line[idx+1,0]
y2 = line[idx+1,1]
signed_area += (x1 * y2 - x2 * y1)
return (np.sign(signed_area) == -1.)
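The orientation test above is the shoelace formula: the signed area of the polygon is negative when the vertices run clockwise. A minimal standalone sketch (the square vertices here are illustrative, not from the model):

```python
import numpy as np

def vertices_clockwise(line):
    # Shoelace formula: the signed area is negative for clockwise vertex order
    signed_area = 0.0
    for idx in range(line.shape[0]):
        x1, y1 = line[idx]
        x2, y2 = line[(idx + 1) % line.shape[0]]
        signed_area += x1 * y2 - x2 * y1
    return np.sign(signed_area) == -1.0

square_ccw = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
print(vertices_clockwise(square_ccw))             # counter-clockwise -> False
print(vertices_clockwise(np.flipud(square_ccw)))  # reversed -> True
```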
def are_points_inside_polygon(self,z):
import matplotlib.path
import numpy as np
indices = self.linepath.contains_points(
np.column_stack((
np.real(z),
np.imag(z))))
return indices
def are_points_on_polygon(self,points,line = None,snap_distance = 1E-10):
import numpy as np
points = np.column_stack((
np.real(points)[:,np.newaxis],
np.imag(points)[:,np.newaxis]))
if line is None:
# Get the polygon and close it
line = np.row_stack((self.polygon_XY,self.polygon_XY[0,:]))
# Pre-allocate space for the indices
indices = np.zeros(points.shape[0],dtype=bool)
# Go through all line segments
for n in range(line.shape[0]-1):
a = line[n,:].reshape((1,2))
b = line[n+1,:].reshape((1,2))
# Normalized tangent vectors
d_ba = b - a
d = np.divide(d_ba, (np.hypot(d_ba[:, 0], d_ba[:, 1]).reshape(-1, 1)))
# Signed parallel distance components
# Row-wise dot products of 2D vectors
s = np.multiply(a - points, d).sum(axis=1)
t = np.multiply(points - b, d).sum(axis=1)
# Clamped parallel distance
h = np.maximum.reduce([s, t, np.zeros(len(s))])
# Perpendicular distance component
# Row-wise cross products of 2D vectors
d_pa = points - a
c = d_pa[:, 0] * d[:, 1] - d_pa[:, 1] * d[:, 0]
# Calculate the distance
d = np.hypot(h, c)
indices[np.where(d <= snap_distance)] = True
return indices
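The tangent/normal decomposition used in are_points_on_polygon can be checked in isolation. A small sketch with an illustrative horizontal segment a-b: the clamped parallel distance h is nonzero only past the endpoints, and c is the perpendicular offset, so hypot(h, c) is the point-to-segment distance:

```python
import numpy as np

a = np.array([[0.0, 0.0]])
b = np.array([[1.0, 0.0]])
points = np.array([[0.5, 0.3],    # beside the segment
                   [2.0, 0.0]])   # beyond endpoint b

d_ba = b - a
d = d_ba / np.hypot(d_ba[:, 0], d_ba[:, 1]).reshape(-1, 1)  # unit tangent
s = np.multiply(a - points, d).sum(axis=1)   # signed distance before a
t = np.multiply(points - b, d).sum(axis=1)   # signed distance past b
h = np.maximum.reduce([s, t, np.zeros(len(s))])  # clamped parallel distance
d_pa = points - a
c = d_pa[:, 0] * d[:, 1] - d_pa[:, 1] * d[:, 0]  # perpendicular component
dist = np.hypot(h, c)
print(dist)  # [0.3 1. ]
```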
def subdivide_line(self,line,segments):
import numpy as np
import copy
# If array is one-dimensional, reshape it appropriately
if len(line.shape) == 1: line = line.reshape((line.shape[0],1))
D = line.shape[1]
# Calculate the lengths of original segments
length = [np.linalg.norm(line[seg,:]-line[seg+1,:]) for seg in range(line.shape[0]-1)]
# Normalize the length of the original segments
length /= np.sum(length)
# Calculate the number of new segments we must create; the line already
# has (#vertices-1) segments, so we only require the difference
new_segments = segments - line.shape[0] + 1
# Calculate where those segments should go
bins = np.concatenate(( [0] , np.cumsum(length) ))
# Extend the bins slightly to prevent errors from arithmetic under- or overflow
bins[0] -= 1E-10
bins[-1] += 1E-10
# Distribute vertices along the segments
x = np.linspace(0,1,new_segments)
num_vertices = []
for seg in range(line.shape[0]-1):
if seg == 0:
num_vertices += [len(np.where(x <= bins[1])[0])]
else:
num_vertices += [len(np.where(np.logical_and(
x > bins[seg],
x <= bins[seg+1]))[0])]
# Subdivide the original segments
new_vertices = []
for seg in range(line.shape[0]-1):
temp = None
for d in range(D):
if temp is None:
temp = copy.copy(np.linspace(line[seg,d],line[seg+1,d],num_vertices[seg]+2)[1:-1])
else:
temp = np.column_stack((
temp,
copy.copy(np.linspace(line[seg,d],line[seg+1,d],num_vertices[seg]+2)[1:-1]) ))
new_vertices += [copy.copy(temp)]
# Create the seed for the new line
new_line = copy.copy(line[0,:].reshape((1,D)) )
# Assemble the new line
for seg in range(line.shape[0]-1):
# Add the new segments, then the next original vertex
new_line = np.row_stack((
new_line,
new_vertices[seg],
line[seg+1,:]))
return new_line
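The per-segment subdivision above boils down to placing interior vertices with np.linspace and discarding the duplicated endpoints. A sketch with a hypothetical two-segment polyline, adding three interior vertices on the second segment:

```python
import numpy as np

line = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0]])
num_new = 3  # interior vertices to insert on the segment from line[1] to line[2]
interior = np.column_stack([
    np.linspace(line[1, d], line[2, d], num_new + 2)[1:-1] for d in range(2)
])
new_line = np.vstack((line[:2], interior, line[2]))
print(new_line.shape)  # (6, 2)
```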
def shrink_polygon(self,polygon, offset = 1):
"""
This function shrinks a user-provided polygon.
Parameters:
polygon - Required : a 2-D array of polygon vertices
offset - Required : a scalar defining the distance by which we wish to shrink the polygon (default = 1)
"""
import numpy as np
import copy
import math
def angle(x1, y1, x2, y2):
numer = (x1*x2 + y1*y2)
denom = np.sqrt((x1**2 + y1**2) * (x2**2 + y2**2))
return math.acos(numer/denom)
def cross_sign(x1, y1, x2, y2):
return x1*y2 > x2*y1
# If the polygon is closed, un-close it
closed = False
if np.linalg.norm(polygon[0,:]-polygon[-1,:]) < 1E-10:
polygon = polygon[:-1,:]
closed = True
# Make sure polygon is counter-clockwise
if self.are_vertices_clockwise(np.row_stack((polygon,polygon[0,:]))):
polygon = np.flipud(polygon)
polygon_shrinked = copy.copy(polygon)
for idx in range(polygon.shape[0]):
if idx == polygon.shape[0]-1:
vtx_before = idx-1
vtx_center = idx
vtx_after = 0
else:
vtx_before = idx-1
vtx_center = idx
vtx_after = idx+1
side_before = polygon[vtx_center,:] - polygon[vtx_before,:]
side_after = polygon[vtx_after,:] - polygon[vtx_center,:]
side_before /= np.linalg.norm(side_before)
side_after /= np.linalg.norm(side_after)
nvec_before = np.asarray([-side_before[1], side_before[0]])
nvec_after = np.asarray([-side_after[1], side_after[0]])
vtx1_before = polygon[vtx_before,:] + nvec_before*offset
vtx2_before = polygon[vtx_center,:] + nvec_before*offset
vtx1_after = polygon[vtx_center,:] + nvec_after*offset
vtx2_after = polygon[vtx_after,:] + nvec_after*offset
p = vtx1_before
r = (vtx2_before-vtx1_before)
q = vtx1_after
s = (vtx2_after-vtx1_after)
if np.cross(r,s) == 0:
# Lines are collinear
polygon_shrinked[idx,:] = vtx2_before
else:
# Lines are not collinear
t = np.cross(q - p,s)/(np.cross(r,s))
# This is the intersection point
polygon_shrinked[idx,:] = p + t*r
if closed:
polygon_shrinked = np.row_stack((
polygon_shrinked,
polygon_shrinked[0,:]))
return polygon_shrinked
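The vertex relocation in shrink_polygon intersects the two offset edges by solving p + t*r = q + u*s with 2-D cross products. A standalone sketch of that construction, with illustrative edge endpoints:

```python
import numpy as np

def cross2(v, w):
    # z-component of the 2-D cross product
    return v[0] * w[1] - v[1] * w[0]

# Two offset edges meeting at a corner (illustrative values):
p = np.array([0.0, 1.0]); r = np.array([1.0, 0.0])  # first edge: p + t*r
q = np.array([1.0, 0.0]); s = np.array([0.0, 1.0])  # second edge: q + u*s
t = cross2(q - p, s) / cross2(r, s)
corner = p + t * r
print(corner)  # [1. 1.]
```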
def snap_to_domain(self):
"""
This function takes the user-specified polygon and snaps any vertices
outside the model domain onto the domain's edge.
"""
import numpy as np
# Calculate the distances of all edge vertices from the center
dist = np.abs(self.polygon-self.model.domain_center)
# Any vertex with a distance larger than the domain radius lies outside
indices = np.where(dist > self.model.domain_radius)[0]
# Go through all outside vertices
for idx in indices:
# Center the vertex on the model domain
temp = self.polygon[idx]-self.model.domain_center
# And collapse its norm to unity
temp /= dist[idx]
# Then scale it to the boundary
temp *= self.model.domain_radius
# And translate it back to global coordinates
temp += self.model.domain_center
# Then save it to the polygon variables
self.polygon[idx] = temp
# It is possible that some vertices have folded onto themselves
repeat = 0
while repeat != 2:
# Calculate the interior angles
self.angles = np.zeros(self.polygon.shape[0])
for idx in range(self.polygon.shape[0]):
# Set the previous, current, and next vertex of the polygon
if idx == self.polygon.shape[0]-1:
seg_minus = idx-1
seg_center = idx
seg_plus = 0
else:
seg_minus = idx-1
seg_center = idx
seg_plus = idx+1
# This is the routine to calculate the interior angle of a vertex
newtemp = np.angle(self.polygon[seg_plus]-self.polygon[seg_center]) - \
np.angle(self.polygon[seg_center]-self.polygon[seg_minus])
newtemp -= np.pi
# Restrict it to the range between -pi and +pi
while newtemp < -np.pi: newtemp += 2*np.pi
while newtemp > +np.pi: newtemp -= 2*np.pi
# Save the angles to the list
self.angles[idx] = newtemp
# All vertices whose interior angles are less than five degrees are
# considered degenerate and are removed
indices = np.ones(self.polygon.shape[0],dtype=bool)
indices[np.where(np.abs(self.angles) < np.radians(5))[0]] = False
# Remove the degenerate vertices
self.polygon = self.polygon[indices]
if np.sum(indices) == len(indices):
repeat += 1
# Update the dependent variables
self.polygon_XY = np.column_stack((
np.real(self.polygon)[:,np.newaxis],
np.imag(self.polygon)[:,np.newaxis] ))
self.segments = self.polygon.shape[0]
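The snapping step above projects an outside vertex radially onto the circular domain edge. A sketch with a hypothetical domain (center 0+0j, radius 1; the vertex value is illustrative):

```python
import numpy as np

center, radius = 0.0 + 0.0j, 1.0
vertex = 3.0 + 4.0j                      # lies outside (distance 5 > 1)
temp = vertex - center                   # center the vertex
temp /= np.abs(temp)                     # collapse its norm to unity
temp *= radius                           # scale it to the boundary
temp += center                           # translate back to global coordinates
print(temp)  # (0.6+0.8j)
```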
def complexify(self,z):
"""
This function takes the provided line or polygon and converts it into
a complex-valued vector, if it isn't already provided as one.
"""
import numpy as np
if not np.iscomplex(z).any():
if len(z.shape) != 2 or z.shape[1] != 2:
raise Exception('Shape format not understood. Provide shape vertices either as a complex vector, or as a N-by-2 real numpy array.')
else:
z = z[:,0] + 1j*z[:,1]
return z
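The conversion performed by complexify is simply mapping the two real columns onto the real and imaginary parts. A minimal sketch with illustrative vertices:

```python
import numpy as np

xy = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])  # N-by-2 real vertices
z = xy[:, 0] + 1j * xy[:, 1]                         # complex vector
print(z)  # [0.+0.j 1.+0.j 1.+1.j]
```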
def plot(self,facecolor='xkcd:silver',edgecolor='xkcd:dark grey',zorder=1,
alpha=0.5,linewidth=2,**kwargs):
import matplotlib.pyplot as plt
import numpy as np
plt.fill(np.real(self.polygon),np.imag(self.polygon),edgecolor=edgecolor,
facecolor=facecolor,alpha=alpha,zorder=zorder,linewidth=linewidth,**kwargs)
#%%
class ElementAreaSink:
def __init__(self, model, polygon, segments = None, strength = 1,
variables = [], priors=[], snap_distance = 1E-10,
snap = False, influence = None):
"""
This implements an area sink.
Parameters:
model - [object] : the model object to which this element is added
polygon - [array] : either a real N-by-2 matrix or complex vector of length N specifying the vertices of a polygon tracing the element's path
segments - [scalar] : this element has a subdivision function; if a finer resolution than the number of segments in 'polygon' is desired, specify a larger number here; the function will then subdivide the polygon so as to create segments of as equal length as possible
strength - [scalar] : injection or extraction rate of this element in [length unit]^3/[length unit]^2/[time unit]
If MCMC is used, we further require:
variables - [list] : list of variables which are inferred by MCMC, example: ['strength']; leave empty if unused
priors - [list] : list of dictionaries, one for each unknown 'variable'; each dictionary must contain the name of distribution (in scipy.stats) and the relevant parameters as keys
"""
import numpy as np
import copy
import matplotlib.path
import math
# Append this element to the specified model
self.model = model
model.elementlist.append(self)
# This element adds water, so it also requires an influence range
if influence is None:
self.influence = self.model.domain_radius*2
else:
self.influence = influence
# Complexify the polygon, if it isn't already complex
polygon = self.complexify(polygon)
# Prepare the polygon variable
self.polygon = polygon
# Is the polygon closed? If not, close it temporarily
self.snap_distance = snap_distance
if np.abs(self.polygon[0]-self.polygon[-1]) > self.snap_distance:
self.polygon = np.asarray(list(self.polygon)+[self.polygon[0]])
# Also create an array with real coordinates
self.polygon_XY = np.column_stack((
np.real(copy.copy(self.polygon))[:,np.newaxis],
np.imag(copy.copy(self.polygon))[:,np.newaxis] ))
# Is the polygon counter-clockwise? If not, correct it
if self.are_vertices_clockwise(self.polygon_XY):
self.polygon = np.flip(self.polygon)
self.polygon_XY = np.flipud(self.polygon_XY)
# Do we wish to subdivide the polygon?
# First, check if the user specified a desired segment count
if segments is None:
self.segments = self.polygon.shape[0]-1
else:
self.segments = segments
if self.segments < self.polygon.shape[0]-1:
raise Exception('Prescribed number of line segments '+str(self.segments)+" mustn't be smaller than the number of polygon segments "+str(polygon.shape[0]-1)+'.')
# Subdivide the polygon, if desired
if self.segments > self.polygon.shape[0]-1:
self.polygon_XY = self.subdivide_line(self.polygon_XY,self.segments)
self.polygon = self.polygon_XY[:,0] + 1j*self.polygon_XY[:,1]
# This is a hack: We shrink the polygon by a small amount. This should ensure
# that no issues arise from evaluating points directly on the boundary;
# there might be other ways to solve this issue alternatively
self.polygon_XY = self.shrink_polygon(
polygon = self.polygon_XY,
offset = 1E-10)
self.polygon = self.polygon_XY[:,0] + 1j*self.polygon_XY[:,1]
# Un-close the polygon again
self.polygon_XY = self.polygon_XY[:-1,:]
self.polygon = self.polygon[:-1]
# If vertex snapping is enabled, snap all outside vertices onto the domain edge
if snap:
self.snap_to_domain()
# =====================================================================
# Now some area-sink-specific work
# =====================================================================
# Get the angles of all segments to the x axis
# required for the local coordinates, Strack 1989, 37.19
self.alpha = np.zeros(self.segments)
for seg in range(self.segments):
if seg == self.segments-1:
nextseg = 0
else:
nextseg = seg+1
# Get the side vector, then normalize it
temp = self.polygon[nextseg]-self.polygon[seg]
temp /= np.abs(temp)
self.alpha[seg] = math.asin(np.imag(temp))
# Get the central point of the polygon
self.zc = np.mean(self.polygon)
# Calculate the area of the polygon with the shoelace formula:
self.A = self.get_polygon_area()
# Calculate the coefficients c0, c1, c2 for all segments
self.L = np.zeros(self.segments)
for seg in range(self.segments):
if seg == self.segments-1:
nextseg = 0
else:
nextseg = seg+1
# Save the length of the segment
self.L[seg] = np.abs(self.polygon[nextseg]-self.polygon[seg])
# Get strength parameters for each vertex
self.strength = strength
# Extract target variables
self.variables = variables
self.priors = priors
# # Prepare the matrix block containing the effect of this element onto
# # itself for future use in solving the linear system. The matrix requires
# # subtraction of the A_star variable from its diagonal entries for completion
# self.block = self.matrix_contribution()
# Check if the prior matches the number of parameters
if len(self.priors) != len(self.variables):
raise Exception('Number of priors must match number of unknown variables. Number of priors: '+str(len(self.priors))+' / Number of unknown variables: '+str(len(self.variables)))
if len(self.variables) > 0:
# There are some model variables specified
for idx,var in enumerate(self.variables):
self.model.num_params += 1
self.model.params += [getattr(self, var)]
self.model.priors += [self.priors[idx]]
self.model.variables += [var]
if 'name' in list(self.priors[idx].keys()):
self.model.param_names += [self.priors[idx]['name']]
else:
self.model.param_names += ['unknown']
def update(self):
import numpy as np
def get_polygon_area(self):
import numpy as np
return 0.5*np.abs(np.dot(np.real(self.polygon),np.roll(np.imag(self.polygon),1))-np.dot(np.imag(self.polygon),np.roll(np.real(self.polygon),1)))
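The one-liner above is the shoelace formula applied to a complex vertex vector. A sketch on a 2-by-1 rectangle (illustrative, area 2):

```python
import numpy as np

polygon = np.array([0 + 0j, 2 + 0j, 2 + 1j, 0 + 1j])  # 2-by-1 rectangle
x, y = np.real(polygon), np.imag(polygon)
area = 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
print(area)  # 2.0
```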
def evaluate_gradient(self,z,detailed = False,derivatives = 'all',override_parameters = False):
import numpy as np
import copy
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# # 'detailed' returns the results as a matrix instead of a summed vector
# if detailed:
# grad = np.zeros((self.segments,z.shape[0]), dtype = np.complex)
# else:
# grad = np.zeros(z.shape, dtype = np.complex)
grad = np.zeros(z.shape, dtype = complex)
# Assemble vectors of Z_plus_1 and Z_minus_1
Z_plus_1 = []
Z_minus_1 = []
log_ratio = []
for seg in range(self.segments):
if seg == self.segments-1:
nextseg = 0
else:
nextseg = seg+1
# Get the subtraction and addition
Z_minus_1.append(2*(copy.copy(z)-self.polygon[nextseg]) /(self.polygon[nextseg]-self.polygon[seg]))
Z_plus_1.append(2*(copy.copy(z)-self.polygon[seg]) /(self.polygon[nextseg]-self.polygon[seg]))
log_ratio.append(np.log(Z_minus_1[seg]/Z_plus_1[seg]))
for seg in range(self.segments):
if seg == self.segments-1:
nextseg = 0
else:
nextseg = seg+1
dZdz = 2/(self.polygon[nextseg]-self.polygon[seg])
dHdZ = log_ratio[seg] + (self.polygon[nextseg]-self.polygon[seg])/(z - self.polygon[0])
# Calculate local variables
Z = (2*copy.copy(z) - self.polygon[seg] - self.polygon[nextseg])/(self.polygon[nextseg]-self.polygon[seg])
# Get the H function
# indices = np.where(np.abs(Z_plus_1[seg]) < 1E-10)[0]
H = Z_plus_1[seg]*log_ratio[seg] + 2
# H[indices] = 2
for seg2 in np.arange(seg+1,self.segments,1):
H += 2*log_ratio[seg2]
grad += self.L[seg]**2*(H + (Z-np.conj(Z))*dHdZ)*dZdz
# print(grad)
# Add the pre-factor
grad *= self.strength/(32*np.pi*1j)
# Add the second term
# This equation (8.599) should be divided by 2, not 4, I believe
# It is derived from equation 8.598, where the factor is 2
grad -= self.strength*self.A/(2*np.pi)/(z - self.polygon[0])
# grad += self.strength*self.A/(4*np.pi)/(z - np.mean(self.polygon))
# print(grad)
if derivatives == 'phi':
dudx = np.real(grad)
dudy = -np.imag(grad)
grad = dudx + 1j*dudy
elif derivatives == 'psi':
dvdx = -np.imag(np.conjugate(grad))
dvdy = np.real(np.conjugate(grad))
grad = dvdx + 1j*dvdy
elif derivatives != 'all':
raise Exception("'derivatives' has to be either 'all' (complex derivative), " + \
"'phi' (hydraulic potential partial derivatives), or 'psi' " + \
"(flow line partial derivatives)")
return grad
def evaluate(self,z,detailed = False,override_parameters = False):
import numpy as np
import copy
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# This follows the approach outlined in Strack 2017, 8.26
z = np.asarray(list(z)+[np.mean(self.polygon)+self.influence])
res = np.zeros(z.shape,dtype=complex)
# Assemble vectors of Z_plus_1 and Z_minus_1
Z_plus_1 = []
log_ratio = []
for seg in range(self.segments):
if seg == self.segments-1:
nextseg = 0
else:
nextseg = seg+1
# Get the subtraction and addition
Z_minus_1 = 2*(z-self.polygon[nextseg]) /(self.polygon[nextseg]-self.polygon[seg])
Z_plus_1.append(2*(z-self.polygon[seg]) /(self.polygon[nextseg]-self.polygon[seg]))
log_ratio.append(np.log(Z_minus_1/Z_plus_1[seg]))
for seg in range(self.segments):
if seg == self.segments-1:
nextseg = 0
else:
nextseg = seg+1
# Calculate local variables
# Z = (z - self.zc[seg])/self.v[seg]
Z = (2*z - self.polygon[seg] - self.polygon[nextseg])/(self.polygon[nextseg]-self.polygon[seg])
# Get the H function
# indices = np.where(np.abs(Z_plus_1[seg]) < 1E-10)[0]
H = Z_plus_1[seg]*log_ratio[seg] + 2
# H[indices] = 2
for seg2 in np.arange(seg+1,self.segments,1):
H += 2*log_ratio[seg2]
# Combine the terms
res += self.L[seg]**2*(Z - np.conj(Z))*H
# Pre-factor
res *= self.strength/(32*np.pi*1j)
# And the second term
res -= self.strength*self.A/(2*np.pi)*np.log(z-self.polygon[0])
# Correct the offset for the correction factor
res -= np.real(res[-1])
# Remove the correction factor
res = res[:-1]
return res
def are_vertices_clockwise(self,line):
"""
This function takes a string of 2-D vertices of a polygon, provided as a
N x 2 numpy array, and returns a boolean specifying whether the vertices
are provided in clock-wise or counter-clock-wise order.
Parameters:
line - Required : numpy array of polygon vertices (N by 2)
"""
import numpy as np
signed_area = 0
for idx in range(line.shape[0]):
x1 = line[idx,0]
y1 = line[idx,1]
if idx == line.shape[0]-1:
x2 = line[0,0]
y2 = line[0,1]
else:
x2 = line[idx+1,0]
y2 = line[idx+1,1]
signed_area += (x1 * y2 - x2 * y1)
return (np.sign(signed_area) == -1.)
def subdivide_line(self,line,segments):
import numpy as np
import copy
# If array is one-dimensional, reshape it appropriately
if len(line.shape) == 1: line = line.reshape((line.shape[0],1))
D = line.shape[1]
# Calculate the lengths of original segments
length = [np.linalg.norm(line[seg,:]-line[seg+1,:]) for seg in range(line.shape[0]-1)]
# Normalize the length of the original segments
length /= np.sum(length)
# Calculate the number of new segments we must create; the line already
# has (#vertices-1) segments, so we only require the difference
new_segments = segments - line.shape[0] + 1
# Calculate where those segments should go
bins = np.concatenate(( [0] , np.cumsum(length) ))
# Extend the bins slightly to prevent errors from arithmetic under- or overflow
bins[0] -= 1E-10
bins[-1] += 1E-10
# Distribute vertices along the segments
x = np.linspace(0,1,new_segments)
num_vertices = []
for seg in range(line.shape[0]-1):
if seg == 0:
num_vertices += [len(np.where(x <= bins[1])[0])]
else:
num_vertices += [len(np.where(np.logical_and(
x > bins[seg],
x <= bins[seg+1]))[0])]
# Subdivide the original segments
new_vertices = []
for seg in range(line.shape[0]-1):
temp = None
for d in range(D):
if temp is None:
temp = copy.copy(np.linspace(line[seg,d],line[seg+1,d],num_vertices[seg]+2)[1:-1])
else:
temp = np.column_stack((
temp,
copy.copy(np.linspace(line[seg,d],line[seg+1,d],num_vertices[seg]+2)[1:-1]) ))
new_vertices += [copy.copy(temp)]
# Create the seed for the new line
new_line = copy.copy(line[0,:].reshape((1,D)) )
# Assemble the new line
for seg in range(line.shape[0]-1):
# Add the new segments, then the next original vertex
new_line = np.row_stack((
new_line,
new_vertices[seg],
line[seg+1,:]))
return new_line
def snap_to_domain(self):
"""
This function takes the user-specified polygon and snaps any vertices
outside the model domain onto the domain's edge.
"""
import numpy as np
# Calculate the distances of all edge vertices from the center
dist = np.abs(self.polygon-self.model.domain_center)
# Any vertex with a distance larger than the domain radius lies outside
indices = np.where(dist > self.model.domain_radius)[0]
# Go through all outside vertices
for idx in indices:
# Center the vertex on the model domain
temp = self.polygon[idx]-self.model.domain_center
# And collapse its norm to unity
temp /= dist[idx]
# Then scale it to the boundary
temp *= self.model.domain_radius
# And translate it back to global coordinates
temp += self.model.domain_center
# Then save it to the polygon variables
self.polygon[idx] = temp
# It is possible that some vertices have folded onto themselves
repeat = 0
while repeat != 2:
# Calculate the interior angles
self.angles = np.zeros(self.polygon.shape[0])
for idx in range(self.polygon.shape[0]):
# Set the previous, current, and next vertex of the polygon
if idx == self.polygon.shape[0]-1:
seg_minus = idx-1
seg_center = idx
seg_plus = 0
else:
seg_minus = idx-1
seg_center = idx
seg_plus = idx+1
# This is the routine to calculate the interior angle of a vertex
newtemp = np.angle(self.polygon[seg_plus]-self.polygon[seg_center]) - \
np.angle(self.polygon[seg_center]-self.polygon[seg_minus])
newtemp -= np.pi
# Restrict it to the range between -pi and +pi
while newtemp < -np.pi: newtemp += 2*np.pi
while newtemp > +np.pi: newtemp -= 2*np.pi
# Save the angles to the list
self.angles[idx] = newtemp
# All vertices whose interior angles are less than five degrees are
# considered degenerate and are removed
indices = np.ones(self.polygon.shape[0],dtype=bool)
indices[np.where(np.abs(self.angles) < np.radians(5))[0]] = False
# Remove the degenerate vertices
self.polygon = self.polygon[indices]
if np.sum(indices) == len(indices):
repeat += 1
# Update the dependent variables
self.polygon_XY = np.column_stack((
np.real(self.polygon)[:,np.newaxis],
np.imag(self.polygon)[:,np.newaxis] ))
self.segments = self.polygon.shape[0]
def shrink_polygon(self,polygon, offset = 1):
"""
This function shrinks a user-provided polygon.
Parameters:
polygon - Required : a 2-D array of polygon vertices
offset - Required : a scalar defining the distance by which we wish to shrink the polygon (default = 1)
"""
import numpy as np
import copy
import math
def angle(x1, y1, x2, y2):
numer = (x1*x2 + y1*y2)
denom = np.sqrt((x1**2 + y1**2) * (x2**2 + y2**2))
return math.acos(numer/denom)
def cross_sign(x1, y1, x2, y2):
return x1*y2 > x2*y1
# If the polygon is closed, un-close it
closed = False
if np.linalg.norm(polygon[0,:]-polygon[-1,:]) < 1E-10:
polygon = polygon[:-1,:]
closed = True
# Make sure polygon is counter-clockwise
if self.are_vertices_clockwise(np.row_stack((polygon,polygon[0,:]))):
polygon = np.flipud(polygon)
polygon_shrinked = copy.copy(polygon)
for idx in range(polygon.shape[0]):
if idx == polygon.shape[0]-1:
vtx_before = idx-1
vtx_center = idx
vtx_after = 0
else:
vtx_before = idx-1
vtx_center = idx
vtx_after = idx+1
side_before = polygon[vtx_center,:] - polygon[vtx_before,:]
side_after = polygon[vtx_after,:] - polygon[vtx_center,:]
side_before /= np.linalg.norm(side_before)
side_after /= np.linalg.norm(side_after)
nvec_before = np.asarray([-side_before[1], side_before[0]])
nvec_after = np.asarray([-side_after[1], side_after[0]])
vtx1_before = polygon[vtx_before,:] + nvec_before*offset
vtx2_before = polygon[vtx_center,:] + nvec_before*offset
vtx1_after = polygon[vtx_center,:] + nvec_after*offset
vtx2_after = polygon[vtx_after,:] + nvec_after*offset
p = vtx1_before
r = (vtx2_before-vtx1_before)
q = vtx1_after
s = (vtx2_after-vtx1_after)
if np.cross(r,s) == 0:
# Lines are collinear
polygon_shrinked[idx,:] = vtx2_before
else:
# Lines are not collinear
t = np.cross(q - p,s)/(np.cross(r,s))
# This is the intersection point
polygon_shrinked[idx,:] = p + t*r
if closed:
polygon_shrinked = np.row_stack((
polygon_shrinked,
polygon_shrinked[0,:]))
return polygon_shrinked
def complexify(self,z):
"""
This function takes the provided line or polygon and converts it into
a complex-valued vector, if it isn't already provided as one.
"""
import numpy as np
if not np.iscomplex(z).any():
if len(z.shape) != 2 or z.shape[1] != 2:
raise Exception('Shape format not understood. Provide shape vertices either as a complex vector, or as a N-by-2 real numpy array.')
else:
z = z[:,0] + 1j*z[:,1]
return z
def plot(self,facecolor_extract='xkcd:orangish red',
edgecolor_extract='xkcd:crimson',facecolor_inject='xkcd:cerulean',
edgecolor_inject='xkcd:cobalt',zorder=12,alpha=0.5,linewidth=5,**kwargs):
import matplotlib.pyplot as plt
import numpy as np
if self.strength < 0:
col_face = facecolor_extract
col_edge = edgecolor_extract
else:
col_face = facecolor_inject
col_edge = edgecolor_inject
plt.fill(np.real(self.polygon),np.imag(self.polygon),edgecolor=col_edge,
facecolor=col_face,alpha=alpha,zorder=zorder,linewidth=linewidth,
**kwargs)
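# ---------------------------------------------------------------------
# Illustrative sketch (not used by the class above): the polygon offset
# routine finds each shrunk vertex by intersecting two offset sides,
# solving p + t*r = q + u*s with t = cross(q - p, s)/cross(r, s). The
# segments below are made-up example values chosen only for demonstration.
def _demo_offset_intersection():
    import numpy as np
    p = np.asarray([0.0, 0.0]); r = np.asarray([1.0, 0.0])   # along the x-axis
    q = np.asarray([2.0, -1.0]); s = np.asarray([0.0, 1.0])  # vertical line through x = 2
    t = np.cross(q - p, s)/np.cross(r, s)
    # The two lines intersect at (2, 0)
    return p + t*r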
#%%
class ElementLineSink:
def __init__(self, model, line, segments = None, influence = None,
strength = 1, variables = [], priors=[]):
"""
This implements a line sink.
Parameters:
model - [object] : the model object to which this element is added
line - [array] : either a real N-by-2 matrix or complex vector of length N specifying the vertices of a line string tracing the element's path
segments - [scalar] : this element has a subdivision function; if a finer resolution than the number of segments in 'line' is desired, specify a larger number here; the function will then subdivide 'line' and 'line_ht' so as to create segments of as equal length as possible
influence - [scalar] : radius of zero influence of each line segment; set to twice the model domain_radius if unspecified
strength - [scalar] : this specifies the injection or extraction rate of the element
If MCMC is used, we further require:
variables - [list] : list of variables which are inferred by MCMC, example: ['strength']; leave empty if unused
priors - [list] : list of dictionaries, one for each unknown 'variable'; each dictionary must contain the name of distribution (in scipy.stats) and the relevant parameters as keys
"""
import numpy as np
from scipy.interpolate import interp1d
import copy
self.model = model
model.elementlist.append(self)
self.variables = variables
self.priors = priors
# ---------------------------------------------------------------------
# Subdivide the provided no flow boundary into #segments pieces
self.line_raw = copy.copy(line)
if segments is None:
self.segments = line.shape[0]-1
else:
self.segments = segments
if self.segments < self.line_raw.shape[0]-1:
raise Exception('Number of segments ('+str(self.segments)+') must not be smaller than the number of segments in the provided line ('+str(self.line_raw.shape[0]-1)+').')
if self.segments > self.line_raw.shape[0]:
# Subdivide the line
self.line = self.subdivide_line(line,self.segments)
self.line_c = copy.copy(self.line[:,0] + 1j*self.line[:,1])
else:
self.line = self.line_raw.copy()
self.line_c = self.line[:,0] + 1j*self.line[:,1]
# Also get the normal vector components to each segment
self.line_nvec = self.line[:,1] - 1j*self.line[:,0]
self.line_nvec = self.line_nvec/np.abs(self.line_nvec)
# ---------------------------------------------------------------------
self.strength = np.ones(self.segments)*strength
if influence is None:
self.influence = self.model.domain_radius*2
else:
self.influence = influence
self.Zi = []
self.offset_outside = []
self.L = []
self.zc = []
self.segment_nvec = []
self.head_target = []
for seg in range(self.segments):
self.L += [np.abs(self.line_c[seg+1] - self.line_c[seg])]
influence_pt = (self.line_c[seg+1]-self.line_c[seg])*self.influence/self.L[seg] + self.line_c[seg]
Z = (2*influence_pt-(self.line_c[seg]+self.line_c[seg+1]))/(self.line_c[seg+1]-self.line_c[seg])
self.Zi += [copy.copy(Z)]
self.zc += [(self.line_c[seg]+self.line_c[seg+1])/2]
# Calculate the normal vector to this segment
self.segment_nvec += [(self.line_c[seg]-self.line_c[seg+1])]
self.segment_nvec[-1]= [np.imag(self.segment_nvec[-1])-1j*np.real(self.segment_nvec[-1])]
part1 = np.nan_to_num((Z+1)*np.log(Z+1))
part2 = np.nan_to_num((Z-1)*np.log(Z-1))
self.offset_outside += [self.L[seg] / (4*np.pi) * (part1 - part2)]
# Convert list of segment centers to array
self.zc = np.asarray(self.zc)
# Check if the prior matches the number of parameters
if len(self.priors) != len(self.variables):
raise Exception('Number of priors must match number of unknown variables. Number of priors: '+str(len(self.priors))+' / Number of unknown variables: '+str(len(self.variables)))
# Go through all elements
if len(self.variables) > 0:
# There are some model variables specified
for idx,var in enumerate(self.variables):
self.model.num_params += 1
self.model.params += [getattr(self, var)]
self.model.priors += [self.priors[idx]]
self.model.variables += [var]
if 'name' in list(self.priors[idx].keys()):
self.model.param_names += [self.priors[idx]['name']]
else:
self.model.param_names += ['unknown']
def update(self):
# Nothing to recompute for this element; the segment geometry is fixed
# at initialization and the strength is used directly during evaluation.
pass
def evaluate_gradient(self,z,detailed = False, derivatives = 'all', override_parameters = False):
import numpy as np
import copy
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# 'detailed' returns the results as a matrix instead of a summed vector
if detailed:
grad = np.zeros((self.segments,z.shape[0]), dtype=complex)
else:
grad = np.zeros(z.shape, dtype=complex)
for seg in range(self.segments):
Z = (2*z-(self.line_c[seg]+self.line_c[seg+1]))/(self.line_c[seg+1]-self.line_c[seg])
# Now get the gradient d omega(z)/dZ
if not override_parameters:
temp = self.strength[seg]*self.L[seg]/4/np.pi*(np.log(Z+1) - np.log(Z-1))
else:
temp = self.L[seg]/4/np.pi*(np.log(Z+1) - np.log(Z-1))
# To get d omega(z)/dz we can use the product rule
# d omega(z)/dz = d omega(z)/dZ * dZ/dz
# hence:
temp = temp*2/(self.line_c[seg+1]-self.line_c[seg])
if detailed:
grad[seg,:] = copy.copy(temp)
else:
grad += temp
if derivatives == 'phi':
dudx = np.real(grad)
dudy = -np.imag(grad)
grad = dudx + 1j*dudy
elif derivatives == 'psi':
dvdx = -np.imag(np.conjugate(grad))
dvdy = np.real(np.conjugate(grad))
grad = dvdx + 1j*dvdy
elif derivatives != 'all':
raise Exception("'derivatives' has to be either 'all' (complex derivative), " + \
"'phi' (hydraulic potential partial derivatives), or 'psi' " + \
"(flow line partial derivatives)")
return grad
def evaluate(self,z,detailed = False, override_parameters = False):
import copy
import numpy as np
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
if detailed:
res = np.zeros((self.segments,z.shape[0]), dtype=complex)
else:
res = np.zeros(z.shape, dtype=complex)
for seg in range(self.segments):
# Convert to local coordinates
Z = (2*z-(self.line_c[seg]+self.line_c[seg+1]))/(self.line_c[seg+1]-self.line_c[seg])
# Evaluate the complex potential, offset so that it vanishes at the
# influence distance from the segment
if not override_parameters:
temp = self.strength[seg]*self.L[seg]/4/np.pi * (\
(Z+1)*np.log(Z+1) - \
(Z-1)*np.log(Z-1) - \
(2/self.L[seg]*self.influence+2)*np.log(2/self.L[seg]*self.influence+2) + \
(2/self.L[seg]*self.influence)*np.log(2/self.L[seg]*self.influence))
else:
temp = self.L[seg]/4/np.pi * (\
(Z+1)*np.log(Z+1) - \
(Z-1)*np.log(Z-1) - \
(2/self.L[seg]*self.influence+2)*np.log(2/self.L[seg]*self.influence+2) + \
(2/self.L[seg]*self.influence)*np.log(2/self.L[seg]*self.influence))
# If evaluated directly at the endpoints, the result would be NaN
# They should be zero, see Bakker 2009
temp = np.nan_to_num(temp)
if detailed:
res[seg,:] = copy.copy(temp)
else:
res += temp
return res
def subdivide_line(self,line,segments):
import numpy as np
import copy
# If array is one-dimensional, reshape it appropriately
if len(line.shape) == 1: line = line.reshape((line.shape[0],1))
D = line.shape[1]
# Calculate the lengths of original segments
length = [np.linalg.norm(line[seg,:]-line[seg+1,:]) for seg in range(line.shape[0]-1)]
# Normalize the length of the original segments
length = np.asarray(length)/np.sum(length)
# Calculate the number of new segments we must create, the line already has
# (#vertices-1) segments. We only require the difference
new_segments = segments - line.shape[0] + 1
# Calculate where those segments should go
bins = np.concatenate(( [0] , np.cumsum(length) ))
# Extend the bin edges slightly to prevent errors from arithmetic under- or overflow
bins[0] -= 1E-10
bins[-1] += 1E-10
# Distribute vertices along the segments
x = np.linspace(0,1,new_segments)
num_vertices = []
for seg in range(line.shape[0]-1):
if seg == 0:
num_vertices += [len(np.where(x <= bins[1])[0])]
else:
num_vertices += [len(np.where(np.logical_and(
x > bins[seg],
x <= bins[seg+1]))[0])]
# Subdivide the original segments
new_vertices = []
for seg in range(line.shape[0]-1):
temp = None
for d in range(D):
if temp is None:
temp = copy.copy(line[seg,d] + (line[seg+1,d]-line[seg,d]) * np.linspace(0,1,num_vertices[seg]+2)[1:-1])
else:
temp = np.column_stack((
temp,
copy.copy(line[seg,d] + (line[seg+1,d]-line[seg,d]) * np.linspace(0,1,num_vertices[seg]+2)[1:-1])))
new_vertices += [copy.copy(temp)]
# Create the seed for the new line
new_line = copy.copy(line[0,:].reshape((1,D)) )
# Assemble the new line
for seg in range(line.shape[0]-1):
# Add the new segments, then the next original vertex
new_line = np.vstack((
new_line,
new_vertices[seg],
line[seg+1,:]))
return new_line
def complexify(self,z):
"""
This function takes the provided line or polygon and converts it into
a complex-valued vector, if it isn't already provided as one.
"""
import numpy as np
if not np.iscomplex(z).any():
if len(z.shape) != 2 or z.shape[1] != 2:
raise Exception('Shape format not understood. Provide shape vertices either as a complex vector, or as an N-by-2 real numpy array.')
else:
z = z[:,0] + 1j*z[:,1]
return z
def plot(self,color_extract='xkcd:orangish red',color_inject='xkcd:cerulean',
zorder=12,linewidth=5,**kwargs):
import matplotlib.pyplot as plt
import numpy as np
if self.strength < 0:
col = color_extract
else:
col = color_inject
plt.plot(np.real(self.line_c),np.imag(self.line_c),color=col,
zorder=zorder,linewidth=linewidth,**kwargs)
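# ---------------------------------------------------------------------
# Illustrative sketch (not used by the class above): each line sink segment
# is evaluated in local coordinates Z = (2*z - (z1 + z2))/(z2 - z1), which
# maps the segment endpoints z1 and z2 onto -1 and +1. The endpoints below
# are arbitrary example values.
def _demo_linesink_local_coordinates():
    z1, z2 = 1 + 1j, 3 + 2j
    to_local = lambda z: (2*z - (z1 + z2))/(z2 - z1)
    # Endpoints map to -1 and +1, the segment midpoint maps to 0
    return to_local(z1), to_local(z2), to_local((z1 + z2)/2)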
#%%
class ElementWell:
def __init__(self, model, zc, rw, influence = None, head_change = -1, strength = 1,
drawdown_specified = False, variables = [], priors = []):
"""
This implements an injection or extraction well.
Parameters:
model - [object] : the model object to which this element is added
zc - [vector] : either a complex scalar or a real vector of length 2 specifying the xy coordinates of the well
rw - [scalar] : a real scalar specifying the screen radius of the well in [length units]
strength - [scalar] : extraction or injection rate at this well in [length units]^3/[time units]
head_change - [scalar] : alternative to strength, induces the prescribed drawdown at the well; only used if drawdown_specified is True
drawdown_specified - [boolean] : flag for whether the well's strength is determined through a prescribed head_change; defaults to False
If MCMC is used, we further require:
variables - [list] : list of variables which are inferred by MCMC, example: ['strength']; leave empty if unused
priors - [list] : list of dictionaries, one for each unknown 'variable'; each dictionary must contain the name of distribution (in scipy.stats) and the relevant parameters as keys
"""
import numpy as np
self.model = model
model.elementlist.append(self)
self.variables = variables
self.priors = priors
if influence is None:
# If no influence radius is specified, set it to twice the model radius
self.influence = 2*self.model.domain_radius
else:
# Otherwise, set it to the user-defined value
self.influence = influence
# The well's strength defines its effect on the flow field; this is
# overwritten later on to achieve the desired head_change which depends
# on the aquifer parameters
self.strength = strength
# This is the well's position in terms of complex coordinates
self.zc = zc
if not np.isscalar(self.zc):
self.zc = self.zc[0] + 1j*self.zc[1]
# The well radius is specified in canonical units
self.rw = rw
# Check if drawdown specified
self.drawdown_specified = drawdown_specified
if self.drawdown_specified:
# Get the head change variable
self.head_change = head_change
# Adjust the strength so that the desired drawdown is achieved
self.set_potential_target()
# Check if the prior matches the number of parameters
if len(self.priors) != len(self.variables):
raise Exception('Number of priors must match number of unknown variables. Number of priors: '+str(len(self.priors))+' / Number of unknown variables: '+str(len(self.variables)))
# Go through all elements
if len(self.variables) > 0:
# There are some model variables specified
for idx,var in enumerate(self.variables):
self.model.num_params += 1
self.model.params += [getattr(self, var)]
self.model.priors += [self.priors[idx]]
self.model.variables += [var]
if 'name' in list(self.priors[idx].keys()):
self.model.param_names += [self.priors[idx]['name']]
else:
self.model.param_names += ['unknown']
def update(self):
if self.drawdown_specified:
self.set_potential_target()
def evaluate_gradient(self,z,derivatives = 'all'):
import numpy as np
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# Find the indices of wells
dist = np.abs(z-self.zc)
idx_inside = np.where(dist < self.rw)[0]
idx_outside = np.where(dist > self.influence)[0]
idx_valid = [i for i in np.arange(len(z),dtype=int) if (i not in idx_inside and i not in idx_outside)]
# Correct the coordinates ---------------------------------------------
# Set the well center to the origin of the complex plane
zs = z.copy()-self.zc
# Pre-allocate an array for the gradient
grad = np.zeros(zs.shape, dtype=complex)
# Calculate the gradient
# grad[idx_valid] = self.strength/(zs[idx_valid]*np.log(self.rw/self.influence))
grad[idx_valid] = -self.strength/(2*np.pi)/zs[idx_valid]
# If partial derivatives are demanded, calculate them
if derivatives == 'phi': # phi corresponds to the hydraulic potential
dudx = np.real(grad)
dudy = -np.imag(grad)
grad = dudx + 1j*dudy
elif derivatives == 'psi': # psi corresponds to the stream function
dvdx = -np.imag(np.conjugate(grad))
dvdy = np.real(np.conjugate(grad))
grad = dvdx + 1j*dvdy
elif derivatives != 'all': # all returns the complex derivative
raise Exception("'derivatives' has to be either 'all' (complex derivative), " + \
"'phi' (hydraulic potential partial derivatives), or 'psi' " + \
"(flow line partial derivatives)")
return grad
def evaluate(self,z):
import numpy as np
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# Effect at influence range
temp = -self.strength/(2*np.pi)*np.log(self.influence)
# Find the indices of wells
dist = np.abs(z-self.zc)
idx_inside = np.where(dist < self.rw)[0]
# Correct the coordinates ---------------------------------------------
# Center the evaluation points on the well
zs = z.copy()-self.zc
# Snap points inside the well to the well edge
zs[idx_inside] = self.rw + 0j
# Calculate the complex potential
res = -self.strength/(2*np.pi)*np.log(zs) - temp
return res
def set_potential_target(self):
"""
We define the drawdown in terms of head, but for the calculations we
require it in terms of potential.
"""
import copy
import numpy as np
# Get the hydraulic conductivity
for e in self.model.elementlist:
if isinstance(e, ElementMoebiusBase) or isinstance(e, ElementUniformBase):
temp_k = e.k
for e in self.model.elementlist:
if isinstance(e, ElementInhomogeneity):
if e.are_points_inside_polygon(self.zc):
temp_k = e.k
# Create a list of hydraulic potential targets
self.strength = copy.copy(self.head_change)
if self.model.aquifer_type == 'confined':
# Strack 1989, Eq. 8.6
self.strength = temp_k*self.model.H*self.strength - \
0.5*temp_k*self.model.H**2
elif self.model.aquifer_type == 'unconfined':
# Strack 1989, Eq. 8.7
self.strength = 0.5*temp_k*self.strength**2
elif self.model.aquifer_type == 'convertible':
# Decide whether the target head is confined or unconfined
if self.strength >= self.model.H:
# confined: Strack 1989, Eq. 8.6
self.strength = temp_k*self.model.H*self.strength - \
0.5*temp_k*self.model.H**2
else:
# unconfined: Strack 1989, Eq. 8.7
self.strength = 0.5*temp_k*self.strength**2
def complexify(self,z):
"""
This function takes the provided line or polygon and converts it into
a complex-valued vector, if it isn't already provided as one.
"""
import numpy as np
if not np.iscomplex(z).any():
if len(z.shape) != 2 or z.shape[1] != 2:
raise Exception('Shape format not understood. Provide shape vertices either as a complex vector, or as an N-by-2 real numpy array.')
else:
z = z[:,0] + 1j*z[:,1]
return z
def plot(self,color_extract='xkcd:orangish red',color_inject='xkcd:cerulean',
zorder=15,s=100,edgecolor='xkcd:dark grey',linewidth=2,**kwargs):
import matplotlib.pyplot as plt
import numpy as np
if self.strength < 0:
col = color_extract
marker = '^'
else:
col = color_inject
marker = 'v'
plt.scatter(np.real(self.zc),np.imag(self.zc),s=s,color=col,
marker=marker,zorder=zorder,edgecolor=edgecolor,
linewidth=linewidth,**kwargs)
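# ---------------------------------------------------------------------
# Illustrative sketch (not used by the class above): outside the screen the
# well's complex potential is Omega(z) = -Q/(2*pi)*log(z - zw), up to the
# constant fixed at the influence radius, so its complex derivative is
# -Q/(2*pi)/(z - zw), which is the expression used in evaluate_gradient.
# The values below are arbitrary; the analytic derivative is compared
# against a central finite difference.
def _demo_well_gradient():
    import numpy as np
    Q, zw, z = 2.0, 1 + 1j, 4 + 3j
    omega = lambda zz: -Q/(2*np.pi)*np.log(zz - zw)
    analytic = -Q/(2*np.pi)/(z - zw)
    h = 1e-6
    numeric = (omega(z + h) - omega(z - h))/(2*h)
    return analytic, numeric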
#%%
class ElementMoebiusOverlay:
def __init__(self,model,r=None,a=None,b=None,c=None,d=None,head_min=0,head_max=1,
variables=[],priors=[],angular_limit=1):
"""
Similar to the Möbius base, but an additive overlay. Unfinished.
"""
import numpy as np
# Append the base to the elementlist
self.model = model
model.elementlist.append(self)
# Define an angular limit. This is designed to keep the Möbius control
# points from getting arbitrarily close to each other; defined in degrees
self.angular_limit = angular_limit
# Set Moebius values
self.r = r
self.a = a
self.b = b
self.c = c
self.d = d
# Set potential scaling variables
self.head_min = head_min
self.head_max = head_max
# The model requires the base flow in terms of hydraulic potential (phi)
# The function head_to_potential extracts the following variables:
# phi_min hydraulic potential corresponding to head_min
# phi_max hydraulic potential corresponding to head_max
self.head_to_potential()
# Check input for validity
self.check_input()
# Define the original control points in the Moebius base disk
self.z0 = np.asarray(
[complex(np.cos(-0.25*np.pi),np.sin(-0.25*np.pi)),
complex(np.cos(0.25*np.pi),np.sin(0.25*np.pi)),
complex(np.cos(0.75*np.pi),np.sin(0.75*np.pi))])
# If only rotation is specified, get the Moebius coefficients
if self.r is not None and (self.a is None and self.b is None and \
self.c is None and self.d is None):
# Find Moebius coefficients
self.find_moebius_coefficients()
self.variables = variables
self.priors = priors
# Complete elliptic integral of the first kind K(k) for k = 1/sqrt(2)
self.Ke = 1.854
if len(self.variables) > 0:
# There are some model variables specified
for idx,var in enumerate(self.variables):
self.model.num_params += 1
self.model.params += [getattr(self, var)]
self.model.variables += [var]
self.model.priors += [self.priors[idx]]
if 'name' in list(self.priors[idx].keys()):
self.model.param_names += [self.priors[idx]['name']]
else:
self.model.param_names += ['unknown']
def update(self):
# If this model is updated, make sure to repeat any initialization
# Find Moebius coefficients
self.find_moebius_coefficients()
def check_input(self):
import numpy as np
# See if either control point rotations or a full set of Moebius
# coefficients are specified
if self.r is None and (self.a is None or self.b is None or \
self.c is None or self.d is None):
raise Exception('Either control point rotations r or Moebius coefficients a, b, c, and d must be specified.')
# Check if phi_min is smaller than phi_offset, switch if necessary
if self.phi_min > self.phi_max:
raise Exception('Minimum potential phi_min is larger than maximum potential phi_max.')
# Check if the control points fulfill the minimum angular spacing
r = np.degrees(self.r)
if np.abs((r[0]-r[1] + 180) % 360 - 180) < self.angular_limit or \
np.abs((r[1]-r[2] + 180) % 360 - 180) < self.angular_limit or \
np.abs((r[2]-r[0] + 180) % 360 - 180) < self.angular_limit:
raise Exception('Control points '+str(self.r)+' are too close to each other. Define different control points or adjust the angular limit: '+str(self.angular_limit))
def evaluate(self,z):
import numpy as np
import copy
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# Coordinates in canonical space are the start values
z_canonical = copy.copy(z)
# Scale the canonical disk to unity canonical disk
z = (z - self.model.domain_center)/self.model.domain_radius
# Map from canonical disk to Möbius base
z = self.moebius(z,inverse=True)
# Map from Möbius base to unit square
z = self.disk_to_square(z)
# Rescale the complex potential
z = (z+1)/2 * (self.phi_max-self.phi_min) + self.phi_min
return z
def evaluate_gradient(self,z,derivatives = 'all'):
import numpy as np
import copy
# Complexify the evaluation points, if they aren't already complex
z = self.complexify(z)
# Map from the canonical disk to Möbius base
z_mb = (copy.copy(z) - self.model.domain_center)/self.model.domain_radius
# dz_mb / dz_c
grad_4 = 1/self.model.domain_radius
# Map from Möbius base to unit disk
z_ud = self.moebius(copy.copy(z_mb),inverse=True)
# dz_ud / dz_mb
grad_3 = (self.a*self.d-self.b*self.c)/(self.c*z_mb-self.a)**2
grad_2 = 2/(self.Ke*np.sqrt(z_ud**4+1))
grad_1 = (self.phi_max-self.phi_min)/2
grad = grad_1*grad_2*grad_3*grad_4
if derivatives == 'phi':
dudx = np.real(grad)
dudy = -np.imag(grad)
grad = dudx + 1j*dudy
elif derivatives == 'psi':
dvdx = -np.imag(np.conjugate(grad))
dvdy = np.real(np.conjugate(grad))
grad = dvdx + 1j*dvdy
elif derivatives != 'all':
raise Exception("'derivatives' has to be either 'all' (complex derivative), " + \
"'phi' (hydraulic potential partial derivatives), or 'psi' " + \
"(flow line partial derivatives)")
return grad
def complex_integral(self,func,a,b):
"""
This implements the Gauss-Kronrod integration for complex-valued functions.
We use this to evaluate the Legendre incomplete elliptic integral of the
first kind, since it is about ten times as fast as using mpmath's ellipf
function. Since this integration is a major computational bottleneck of
this function, we stick with this approach.
The equations below are adapted from:
https://stackoverflow.com/questions/5965583/use-scipy-integrate-quad-to-integrate-complex-numbers
"""
import scipy
from numpy import array
def quad_routine(func, a, b, x_list, w_list):
c_1 = (b-a)/2.0
c_2 = (b+a)/2.0
eval_points = map(lambda x: c_1*x+c_2, x_list)
func_evals = list(map(func, eval_points)) # Python 3: make a list here
return c_1 * sum(array(func_evals) * array(w_list))
def quad_gauss_7(func, a, b):
x_gauss = [-0.949107912342759, -0.741531185599394, -0.405845151377397, 0, 0.405845151377397, 0.741531185599394, 0.949107912342759]
w_gauss = array([0.129484966168870, 0.279705391489277, 0.381830050505119, 0.417959183673469, 0.381830050505119, 0.279705391489277,0.129484966168870])
return quad_routine(func,a,b,x_gauss, w_gauss)
def quad_kronrod_15(func, a, b):
x_kr = [-0.991455371120813,-0.949107912342759, -0.864864423359769, -0.741531185599394, -0.586087235467691,-0.405845151377397, -0.207784955007898, 0.0, 0.207784955007898,0.405845151377397, 0.586087235467691, 0.741531185599394, 0.864864423359769, 0.949107912342759, 0.991455371120813]
w_kr = [0.022935322010529, 0.063092092629979, 0.104790010322250, 0.140653259715525, 0.169004726639267, 0.190350578064785, 0.204432940075298, 0.209482141084728, 0.204432940075298, 0.190350578064785, 0.169004726639267, 0.140653259715525, 0.104790010322250, 0.063092092629979, 0.022935322010529]
return quad_routine(func,a,b,x_kr, w_kr)
class Memorize: # Python 3: no need to inherit from object
def __init__(self, func):
self.func = func
self.eval_points = {}
def __call__(self, *args):
if args not in self.eval_points:
self.eval_points[args] = self.func(*args)
return self.eval_points[args]
def quad(func,a,b):
''' Output is the 15 point estimate; and the estimated error '''
func = Memorize(func) # Memorize function to skip repeated function calls.
g7 = quad_gauss_7(func,a,b)
k15 = quad_kronrod_15(func,a,b)
# I don't have much faith in this error estimate taken from wikipedia
# without incorporating how it should scale with changing limits
return [k15, (200*abs(g7-k15))**1.5]
return quad(func,a,b)
def angle_to_unit_circle(self):
import numpy as np
# Angle must be provided in radians, counter-clockwise from 3 o'clock
return np.cos(self.r)+1j*np.sin(self.r)
def find_moebius_coefficients(self):
import numpy as np
# Find the images of the z0 control points
w0 = self.angle_to_unit_circle()
# Then calculate the four parameters for the corresponding Möbius map
self.a = np.linalg.det(np.asarray(
[[self.z0[0]*w0[0], w0[0], 1],
[self.z0[1]*w0[1], w0[1], 1],
[self.z0[2]*w0[2], w0[2], 1]]))
self.b = np.linalg.det(np.asarray(
[[self.z0[0]*w0[0], self.z0[0], w0[0]],
[self.z0[1]*w0[1], self.z0[1], w0[1]],
[self.z0[2]*w0[2], self.z0[2], w0[2]]]))
self.c = np.linalg.det(np.asarray(
[[self.z0[0], w0[0], 1],
[self.z0[1], w0[1], 1],
[self.z0[2], w0[2], 1]]))
self.d = np.linalg.det(np.asarray(
[[self.z0[0]*w0[0], self.z0[0], 1],
[self.z0[1]*w0[1], self.z0[1], 1],
[self.z0[2]*w0[2], self.z0[2], 1]]))
return
def moebius(self,z,inverse=False):
if not inverse:
z = (self.a*z+self.b)/(self.c*z+self.d)
else:
z = (-self.d*z+self.b)/(self.c*z-self.a)
return z
def square_to_disk(self,z,k='default'):
import numpy as np
from mpmath import mpc,mpmathify,ellipfun
if k == 'default': k = 1/mpmathify(np.sqrt(2))
Ke = 1.854
cn = ellipfun('cn')
if np.isscalar(z):
z = np.asarray([z])
zf = np.ndarray.flatten(z)
w = np.zeros(zf.shape)*1j
pre_factor = mpc(1,-1)/mpmathify(np.sqrt(2))
mid_factor = Ke*(mpc(1,1)/2)
for idx,entry in enumerate(zf): # Go through all complex numbers
# Calculate result
temp = pre_factor*cn(
u = mid_factor*entry-Ke,
k = k)
# Then place it into the array
w[idx] = complex(temp.real,temp.imag)
# Now reshape the array back to its original shape
z = w.reshape(z.shape).copy()
return z
def disk_to_square(self,z,k='default'):
import numpy as np
Ke = 1.854
if np.isscalar(z):
z = np.asarray([z])
zf = np.ndarray.flatten(z)
w = np.zeros(zf.shape)*1j
# Using the Gauss-Kronrod integration is about 10 times faster than
# using the mpmath.ellipf function
if k == 'default': k = 1/np.sqrt(2)
m = k**2
pre_factor = (1-1j)/(-Ke)
mid_factor = (1+1j)/np.sqrt(2)
temp = [pre_factor*self.complex_integral(
func = lambda t: (1-m*np.sin(t)**2)**(-0.5),
a = 0,
b = i)[0] + 1 - 1j for i in np.arccos(zf*mid_factor)]
w = np.asarray(temp)
# Now reshape the array back to its original shape
z = w.reshape(z.shape).copy()
return z
def are_points_clockwise(self):
import numpy as np
verts = np.zeros((3,2))
verts[0,:] = np.asarray([np.cos(self.r[0]),np.sin(self.r[0])])
verts[1,:] = np.asarray([np.cos(self.r[1]),np.sin(self.r[1])])
verts[2,:] = np.asarray([np.cos(self.r[2]),np.sin(self.r[2])])
signed_area = 0
for vtx in range(verts.shape[0]):
x1 = verts[vtx,0]
y1 = verts[vtx,1]
if vtx == verts.shape[0]-1: # Last vertex
x2 = verts[0,0]
y2 = verts[0,1]
else:
x2 = verts[vtx+1,0]
y2 = verts[vtx+1,1]
signed_area += (x1 * y2 - x2 * y1)/2
return (signed_area < 0)
def head_to_potential(self):
# Extract the hydraulic conductivity from the base element
k = self.model.elementlist[0].k
for idx,h in enumerate([self.head_min,self.head_max]):
if self.model.aquifer_type == 'confined' or (self.model.aquifer_type == 'convertible' and h >= self.model.H):
# Strack 1989, Eq. 8.6
pot = k*self.model.H*h - 0.5*k*self.model.H**2
elif self.model.aquifer_type == 'unconfined' or (self.model.aquifer_type == 'convertible' and h < self.model.H):
# Strack 1989, Eq. 8.7
pot = 0.5*k*h**2
if idx == 0:
self.phi_min = pot
elif idx == 1:
self.phi_max = pot
def complexify(self,z):
"""
This function takes the provided line or polygon and converts it into
a complex-valued vector, if it isn't already provided as one.
"""
import numpy as np
if not np.iscomplex(z).any():
if len(z.shape) != 2 or z.shape[1] != 2:
raise Exception('Shape format not understood. Provide shape vertices either as a complex vector, or as an N-by-2 real numpy array.')
else:
z = z[:,0] + 1j*z[:,1]
return z
def plot(self,label_offset = 1.1,fontsize=12,fontcolor='xkcd:dark grey',
pointcolor='xkcd:dark grey',pointsize=50,horizontalalignment='center',
verticalalignment='center',color_low = 'xkcd:cerulean',
color_high = 'xkcd:orangish red',**kwargs):
"""
This function plots the Möbius control/reference points on the unit disk.
"""
import numpy as np
import matplotlib.pyplot as plt
import math
# Get the coordinates of the control points
z_A = (1-1j)/np.abs(1-1j)
z_A = self.moebius(z_A,inverse=False)*self.model.domain_radius + self.model.domain_center
z_A = np.asarray([np.real(z_A),np.imag(z_A)])
z_B = (1+1j)/np.abs(1+1j)
z_B = self.moebius(z_B,inverse=False)*self.model.domain_radius + self.model.domain_center
z_B = np.asarray([np.real(z_B),np.imag(z_B)])
z_C = (-1+1j)/np.abs(-1+1j)
z_C = self.moebius(z_C,inverse=False)*self.model.domain_radius + self.model.domain_center
z_C = np.asarray([np.real(z_C),np.imag(z_C)])
z_D = (-1-1j)/np.abs(-1-1j)
z_D = self.moebius(z_D,inverse=False)*self.model.domain_radius + self.model.domain_center
z_D = np.asarray([np.real(z_D),np.imag(z_D)])
a_low = np.linspace(math.atan2(z_C[1],z_C[0]),math.atan2(z_D[1],z_D[0]),360)
if abs(a_low[0]-a_low[-1]) > np.pi:
a_low = np.concatenate((
np.linspace(np.min(a_low),-np.pi,360),
np.linspace(np.pi,np.max(a_low),360) ))
a_high = np.linspace(math.atan2(z_A[1],z_A[0]),math.atan2(z_B[1],z_B[0]),360)
if abs(a_high[0]-a_high[-1]) > np.pi:
a_high = np.concatenate((
np.linspace(np.min(a_high),-np.pi,360),
np.linspace(np.pi,np.max(a_high),360) ))
plt.plot(np.cos(a_low)*self.model.domain_radius + self.model.domain_center,
np.sin(a_low)*self.model.domain_radius + self.model.domain_center,
color = color_low,linewidth=2)
plt.plot(np.cos(a_high)*self.model.domain_radius + self.model.domain_center,
np.sin(a_high)*self.model.domain_radius + self.model.domain_center,
color = color_high,linewidth=2)
plt.scatter(z_A[0],z_A[1],s=pointsize,color=pointcolor,zorder=11,**kwargs)
plt.scatter(z_B[0],z_B[1],s=pointsize,color=pointcolor,zorder=11,**kwargs)
plt.scatter(z_C[0],z_C[1],s=pointsize,color=pointcolor,zorder=11,**kwargs)
plt.scatter(z_D[0],z_D[1],s=pointsize,color=pointcolor,zorder=11,**kwargs)
plt.text(z_A[0]*label_offset,z_A[1]*label_offset,'A',fontsize=fontsize,
horizontalalignment=horizontalalignment,verticalalignment=verticalalignment,
color=fontcolor,**kwargs)
plt.text(z_B[0]*label_offset,z_B[1]*label_offset,'B',fontsize=fontsize,
horizontalalignment=horizontalalignment,verticalalignment=verticalalignment,
color=fontcolor,**kwargs)
plt.text(z_C[0]*label_offset,z_C[1]*label_offset,'C',fontsize=fontsize,
horizontalalignment=horizontalalignment,verticalalignment=verticalalignment,
color=fontcolor,**kwargs)
plt.text(z_D[0]*label_offset,z_D[1]*label_offset,'D',fontsize=fontsize,
horizontalalignment=horizontalalignment,verticalalignment=verticalalignment,
color=fontcolor,**kwargs)
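# ---------------------------------------------------------------------
# Illustrative sketch (not used by the class above): find_moebius_coefficients
# builds a, b, c, d from 3-by-3 determinants so that the Möbius map
# w = (a*z + b)/(c*z + d) sends each control point z0[i] to its image w0[i].
# The control points below are arbitrary example values on the unit circle.
def _demo_moebius_fit():
    import numpy as np
    z0 = np.exp(1j*np.asarray([-0.25, 0.25, 0.75])*np.pi)
    w0 = np.exp(1j*np.asarray([0.1, 0.9, 1.7])*np.pi)
    a = np.linalg.det([[z0[0]*w0[0], w0[0], 1],
                       [z0[1]*w0[1], w0[1], 1],
                       [z0[2]*w0[2], w0[2], 1]])
    b = np.linalg.det([[z0[0]*w0[0], z0[0], w0[0]],
                       [z0[1]*w0[1], z0[1], w0[1]],
                       [z0[2]*w0[2], z0[2], w0[2]]])
    c = np.linalg.det([[z0[0], w0[0], 1],
                       [z0[1], w0[1], 1],
                       [z0[2], w0[2], 1]])
    d = np.linalg.det([[z0[0]*w0[0], z0[0], 1],
                       [z0[1]*w0[1], z0[1], 1],
                       [z0[2]*w0[2], z0[2], 1]])
    # The fitted map should reproduce the prescribed images of z0
    return (a*z0 + b)/(c*z0 + d), w0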
#%%
def equidistant_points_in_circle(rings = 3, radius = 1, offset = 0+0j):
"""
This function creates equidistant points on a number of specified rings
inside a unit disk.
Parameters:
rings - [scalar] : number of rings on which equidistant points are placed; the more rings, the more points
radius - [scalar] : radius by which the unit disk is scaled
"""
import numpy as np
import math
# If the offset is complex, convert it to a real vector of length 2
if np.iscomplex(offset).any():
if not np.isscalar(offset):
raise Exception('Shape format not understood. Provide the offset either as a complex scalar, or as a real numpy array of shape (2,).')
else:
offset = np.asarray([np.real(offset),np.imag(offset)])
if np.isscalar(offset):
offset = np.zeros(2)
# Pre-allocate lists for the X and Y coordinates
x = []
y = []
# Then go through each ring
for k in range(rings):
if k > 0:
pts = round(np.pi/math.asin(1/(2*k)))
else:
pts = 1
theta = np.linspace(0, 2*np.pi, pts, endpoint=False)
rad = k/(rings-1)
x += list(np.sin(theta)*rad)
y += list(np.cos(theta)*rad)
# Combine both lists to a common array
XY = np.column_stack((
np.asarray(x),
np.asarray(y)))
# And scale it, if desired
XY *= radius
# Apply the offset
XY[:,0] += offset[0]
XY[:,1] += offset[1]
return XY
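# ---------------------------------------------------------------------
# Illustrative usage sketch for equidistant_points_in_circle: with 3 rings,
# the center contributes 1 point, ring k = 1 contributes round(pi/asin(1/2)) = 6
# points, and ring k = 2 contributes round(pi/asin(1/4)) = 12 points, so 19
# points in total. This mirrors the per-ring counts computed in the function
# above without calling it.
def _demo_ring_counts(rings=3):
    import math
    pts = [1] + [int(round(math.pi/math.asin(1/(2*k)))) for k in range(1, rings)]
    return pts, sum(pts)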
from office365.sharepoint.base_entity import BaseEntity
class Audit(BaseEntity):
"""
Enables auditing of how site collections, sites, lists, folders, and list items are accessed, changed, and used.
"""
pass
| 25 | 116 | 0.724444 | 28 | 225 | 5.785714 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016575 | 0.195556 | 225 | 8 | 117 | 28.125 | 0.878453 | 0.497778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
210679c4352581f818f84b0e6bf6289ce9deb608 | 46 | py | Python | binary_clock/__init__.py | zeycus/binary_clock | 4d7bf6a4d432fdb0f9dccde9d94173ac4935b05f | [
"MIT"
] | null | null | null | binary_clock/__init__.py | zeycus/binary_clock | 4d7bf6a4d432fdb0f9dccde9d94173ac4935b05f | [
"MIT"
] | null | null | null | binary_clock/__init__.py | zeycus/binary_clock | 4d7bf6a4d432fdb0f9dccde9d94173ac4935b05f | [
"MIT"
] | 2 | 2018-02-16T21:16:41.000Z | 2022-03-18T05:15:25.000Z | from .binclockWrapper import binclock_wrapper
| 23 | 45 | 0.891304 | 5 | 46 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 46 | 1 | 46 | 46 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2144704faae460c6a062d40147ec2d52a11302fe | 130 | py | Python | server/editors/admin.py | nickdotreid/opioid-mat-decision-aid | bbc2a0d8931d59cd6ab64b0b845e88c8dc1af5d1 | [
"MIT"
] | null | null | null | server/editors/admin.py | nickdotreid/opioid-mat-decision-aid | bbc2a0d8931d59cd6ab64b0b845e88c8dc1af5d1 | [
"MIT"
] | 27 | 2018-09-30T07:59:21.000Z | 2020-11-05T19:25:41.000Z | server/editors/admin.py | nickdotreid/opioid-mat-decision-aid | bbc2a0d8931d59cd6ab64b0b845e88c8dc1af5d1 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Editor
@admin.register(Editor)
class PageAdmin(admin.ModelAdmin):
pass
| 16.25 | 34 | 0.784615 | 17 | 130 | 6 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138462 | 130 | 7 | 35 | 18.571429 | 0.910714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
dcff7d128e3c6aff411835ed9b14181710e71bb1 | 107 | py | Python | python_codes/test.py | penginsl/electrophysiology_analysis | a38c4b1e1cb018fd670edb3157ddfba6cc2a3285 | [
"MIT"
] | 2 | 2020-12-27T01:25:46.000Z | 2021-02-21T07:45:08.000Z | python_codes/test.py | li-shen-amy/electrophysiology_analysis | a38c4b1e1cb018fd670edb3157ddfba6cc2a3285 | [
"MIT"
] | null | null | null | python_codes/test.py | li-shen-amy/electrophysiology_analysis | a38c4b1e1cb018fd670edb3157ddfba6cc2a3285 | [
"MIT"
] | null | null | null | from read_lvb import read_single_lvb
read_single_lvb('8_20_2019_pen1_0_1x10s.lvb', rec_len=1, trial_num=10) | 53.5 | 70 | 0.859813 | 23 | 107 | 3.478261 | 0.73913 | 0.25 | 0.325 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148515 | 0.056075 | 107 | 2 | 70 | 53.5 | 0.643564 | 0 | 0 | 0 | 0 | 0 | 0.240741 | 0.240741 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b4a758c488ad96a9e6cf231d9114525ad34184ce | 177 | py | Python | httpsig_cffi/__init__.py | hawkowl/httpsig | af5c8bcce7c93f5734b1a48722580d02c5d25bc7 | [
"MIT"
] | 1 | 2016-10-03T17:38:07.000Z | 2016-10-03T17:38:07.000Z | httpsig_cffi/__init__.py | hawkowl/httpsig | af5c8bcce7c93f5734b1a48722580d02c5d25bc7 | [
"MIT"
] | 1 | 2016-09-27T11:03:42.000Z | 2016-10-03T17:42:11.000Z | httpsig_cffi/__init__.py | hawkowl/httpsig | af5c8bcce7c93f5734b1a48722580d02c5d25bc7 | [
"MIT"
] | 4 | 2016-07-27T06:05:10.000Z | 2019-06-26T22:04:09.000Z | from .sign import Signer, HeaderSigner
from .verify import Verifier, HeaderVerifier
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
| 25.285714 | 44 | 0.819209 | 22 | 177 | 6.227273 | 0.545455 | 0.240876 | 0.262774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112994 | 177 | 6 | 45 | 29.5 | 0.872611 | 0 | 0 | 0 | 0 | 0 | 0.039548 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2ef761cd8d1016273df97d4fa8d6cd67e2a5984b | 80 | py | Python | Inheritance/class_Inheritance/project_players_and_monsters/muse_elf.py | vasetousa/OOP | e4fedc497dd149c9800613ea11846e0e770d122c | [
"MIT"
] | null | null | null | Inheritance/class_Inheritance/project_players_and_monsters/muse_elf.py | vasetousa/OOP | e4fedc497dd149c9800613ea11846e0e770d122c | [
"MIT"
] | null | null | null | Inheritance/class_Inheritance/project_players_and_monsters/muse_elf.py | vasetousa/OOP | e4fedc497dd149c9800613ea11846e0e770d122c | [
"MIT"
] | null | null | null | from Encapsulation.project_wild_cat_zoo import Elf
class MuseElf(Elf):
pass | 20 | 50 | 0.8125 | 12 | 80 | 5.166667 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1375 | 80 | 4 | 51 | 20 | 0.898551 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
2c06123c16b710a8e76d42eceec9f3bb74023ba8 | 48 | py | Python | meta_policy_search/envs/__init__.py | behzadhaghgoo/cml | e659c7ae10a52bbe1cbabf9d359aea43af19eb12 | [
"MIT"
] | 210 | 2018-10-17T01:04:48.000Z | 2022-03-09T16:17:06.000Z | meta_policy_search/envs/__init__.py | behzadhaghgoo/cml | e659c7ae10a52bbe1cbabf9d359aea43af19eb12 | [
"MIT"
] | 13 | 2018-10-25T20:01:09.000Z | 2022-01-24T13:11:24.000Z | meta_policy_search/envs/__init__.py | behzadhaghgoo/cml | e659c7ae10a52bbe1cbabf9d359aea43af19eb12 | [
"MIT"
] | 55 | 2018-10-18T22:00:51.000Z | 2021-11-24T00:06:31.000Z | from meta_policy_search.envs.base import MetaEnv | 48 | 48 | 0.895833 | 8 | 48 | 5.125 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 48 | 1 | 48 | 48 | 0.911111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
25a704bb8c7432afced1746b5db5782eeb6b9260 | 108 | py | Python | dop_python_pip_package/main_utils/math.py | RyanAngJY/dop_python_pip_package | 483cddc94e555080f11131667cf4ddad5ee4002a | [
"MIT"
] | null | null | null | dop_python_pip_package/main_utils/math.py | RyanAngJY/dop_python_pip_package | 483cddc94e555080f11131667cf4ddad5ee4002a | [
"MIT"
] | null | null | null | dop_python_pip_package/main_utils/math.py | RyanAngJY/dop_python_pip_package | 483cddc94e555080f11131667cf4ddad5ee4002a | [
"MIT"
] | null | null | null | import numpy as np
def add(num1, num2):
return num1 + num2
def random():
return np.random.rand(2)
| 13.5 | 28 | 0.657407 | 18 | 108 | 3.944444 | 0.666667 | 0.225352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060241 | 0.231481 | 108 | 7 | 29 | 15.428571 | 0.795181 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
25b23f92a690b5dc2591fb3a015a2920be3c2714 | 375 | py | Python | pattern4/hamburg_store_v5/src/WhiteFeatherChicken.py | icexmoon/design-pattern-with-python | bb897e886fe52bb620db0edc6ad9d2e5ecb067af | [
"MIT"
] | null | null | null | pattern4/hamburg_store_v5/src/WhiteFeatherChicken.py | icexmoon/design-pattern-with-python | bb897e886fe52bb620db0edc6ad9d2e5ecb067af | [
"MIT"
] | null | null | null | pattern4/hamburg_store_v5/src/WhiteFeatherChicken.py | icexmoon/design-pattern-with-python | bb897e886fe52bb620db0edc6ad9d2e5ecb067af | [
"MIT"
] | null | null | null | #######################################################
#
# WhiteFeatherChicken.py
# Python implementation of the Class WhiteFeatherChicken
# Generated by Enterprise Architect
# Created on: 19-6月-2021 21:34:31
# Original author: 70748
#
#######################################################
from .Chicken import Chicken
class WhiteFeatherChicken(Chicken):
pass | 28.846154 | 56 | 0.533333 | 32 | 375 | 6.3125 | 0.84375 | 0.237624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005333 | 0.054545 | 0.12 | 375 | 13 | 57 | 28.846154 | 0.551515 | 0.458667 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
25b461901a114bfef4e2befb760d938c643080a9 | 1,798 | py | Python | entity/block.py | chenwenhang/StockVisualAnalysis | 8f186224f0001a5c1d7fe86f2540bb9221c81707 | [
"MIT"
] | 1 | 2021-06-23T11:43:26.000Z | 2021-06-23T11:43:26.000Z | entity/block.py | chenwenhang/StockVisualAnalysis | 8f186224f0001a5c1d7fe86f2540bb9221c81707 | [
"MIT"
] | null | null | null | entity/block.py | chenwenhang/StockVisualAnalysis | 8f186224f0001a5c1d7fe86f2540bb9221c81707 | [
"MIT"
] | 1 | 2021-11-17T12:13:45.000Z | 2021-11-17T12:13:45.000Z | from entity.item_entity import ItemEntity
class BlockEntity(ItemEntity):
def __init__(self, block):
ItemEntity.__init__(self)
self.field_set = block
def get_code(self):
return self.field_set["code"]
def get_name(self):
return self.field_set["name"]
def get_indicator(self):
return self.field_set["indicator"]
def get_RPS250(self):
return self.field_set["RPS250"]
def get_RPS120(self):
return self.field_set["RPS120"]
def get_RPS60(self):
return self.field_set["RPS60"]
def get_RPS30(self):
return self.field_set["RPS30"]
def get_RPS10(self):
return self.field_set["RPS10"]
def get_date(self):
return self.field_set["date"]
def get_grade(self):
return self.field_set["grade"]
def set_code(self, code):
self.field_set["code"] = code
def set_name(self, name):
self.field_set["name"] = name
def set_indicator(self, indicator):
self.field_set["indicator"] = indicator
def set_RPS250(self, RPS250):
self.field_set["RPS250"] = RPS250
def set_RPS120(self, RPS120):
self.field_set["RPS120"] = RPS120
def set_RPS60(self, RPS60):
self.field_set["RPS60"] = RPS60
def set_RPS30(self, RPS30):
self.field_set["RPS30"] = RPS30
def set_RPS10(self, RPS10):
self.field_set["RPS10"] = RPS10
def set_date(self, date):
self.field_set["date"] = date
def set_grade(self, grade=None):
# grade is derived from the RPS values; any argument passed in is ignored
self.field_set["grade"] = self.field_set["RPS10"] * 0.4 + \
self.field_set["RPS30"] * 0.3 + \
self.field_set["RPS60"] * 0.2 + \
self.field_set["RPS120"] * 0.1
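The grade computed in `set_grade` is a fixed weighted blend of the RPS values, weighting the shorter horizons more heavily (0.4, 0.3, 0.2, 0.1, summing to 1). A small worked example of the same formula, with made-up RPS numbers for illustration:

```python
# Hypothetical RPS values, for illustration only
rps = {"RPS10": 90, "RPS30": 80, "RPS60": 70, "RPS120": 60}

# Same weighting as BlockEntity.set_grade
grade = (rps["RPS10"] * 0.4 + rps["RPS30"] * 0.3
         + rps["RPS60"] * 0.2 + rps["RPS120"] * 0.1)
print(grade)  # 36 + 24 + 14 + 6 = 80.0
```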
| 25.323944 | 67 | 0.587875 | 233 | 1,798 | 4.304721 | 0.128755 | 0.224327 | 0.299103 | 0.189432 | 0.219342 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069315 | 0.285873 | 1,798 | 70 | 68 | 25.685714 | 0.711838 | 0 | 0 | 0 | 0 | 0 | 0.070634 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4375 | false | 0 | 0.020833 | 0.208333 | 0.6875 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
25b6fe6e7e963061d5676e07d2b8b093d960a651 | 95 | py | Python | simpleimage/__init__.py | adilu/simpleimage | 5e38340fb619d4350a3de7aa9a3a8b0145a6b568 | [
"MIT"
] | 1 | 2021-05-07T13:53:42.000Z | 2021-05-07T13:53:42.000Z | simpleimage/__init__.py | adilu/simpleimage | 5e38340fb619d4350a3de7aa9a3a8b0145a6b568 | [
"MIT"
] | null | null | null | simpleimage/__init__.py | adilu/simpleimage | 5e38340fb619d4350a3de7aa9a3a8b0145a6b568 | [
"MIT"
] | null | null | null | # Inside of __init__.py
from simpleimage.Image import Image
from simpleimage.Pixel import Pixel | 31.666667 | 35 | 0.842105 | 14 | 95 | 5.428571 | 0.642857 | 0.394737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115789 | 95 | 3 | 36 | 31.666667 | 0.904762 | 0.221053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d320c91ea6b461e0c7b131bd8b8d8de493e7eeac | 20,058 | py | Python | models/bioproc/proc_models.py | LukeItUp/BioProc | dc7c38dd7d3b0ed01419892d81eb7fbe39a6b584 | [
"CC-BY-4.0"
] | null | null | null | models/bioproc/proc_models.py | LukeItUp/BioProc | dc7c38dd7d3b0ed01419892d81eb7fbe39a6b584 | [
"CC-BY-4.0"
] | null | null | null | models/bioproc/proc_models.py | LukeItUp/BioProc | dc7c38dd7d3b0ed01419892d81eb7fbe39a6b584 | [
"CC-BY-4.0"
] | null | null | null | import numpy as np
from bioproc.hill_functions import *
"""
FLIP-FLOP MODELS
"""
# MASTER-SLAVE D FLIP-FLOP QSSA MODEL
def ff_stochastic_model(Y, T, params, omega):
p = np.zeros(12)
a, not_a, q, not_q, d, clk = Y
alpha1, alpha2, alpha3, alpha4, delta1, delta2, Kd, n = params
p[0] = alpha1*(pow(d/(Kd*omega), n)/(1 + pow(d/(Kd*omega), n) + pow(clk/(Kd*omega), n) + pow(d/(Kd*omega), n)*pow(clk/(Kd*omega), n)))*omega
p[1] = alpha2*(1/(1 + pow(not_a/(Kd*omega), n)))*omega
p[2] = delta1*a
p[3] = alpha1*(1/(1 + pow(d/(Kd*omega), n) + pow(clk/(Kd*omega), n) + pow(d/(Kd*omega), n)*pow(clk/(Kd*omega), n)))*omega
p[4] = alpha2*(1/(1 + pow(a/(Kd*omega), n)))*omega
p[5] = delta1*not_a
p[6] = alpha3*((pow(a/(Kd*omega), n)*pow(clk/(Kd*omega), n))/(1 + pow(a/(Kd*omega), n) + pow(clk/(Kd*omega), n) + pow(a/(Kd*omega), n)*pow(clk/(Kd*omega), n)))*omega
p[7] = alpha4*(1/(1 + pow(not_q/(Kd*omega), n)))*omega
p[8] = delta2*q
p[9] = alpha3*((pow(not_a/(Kd*omega), n)*pow(clk/(Kd*omega), n))/(1 + pow(not_a/(Kd*omega), n) + pow(clk/(Kd*omega), n) + pow(not_a/(Kd*omega), n)*pow(clk/(Kd*omega), n)))*omega
p[10] = alpha4*(1/(1 + pow(q/(Kd*omega), n)))*omega
p[11] = delta2*not_q
#propensities
return p
# MASTER-SLAVE D FLIP-FLOP MODEL
def ff_ode_model(Y, T, params):
a, not_a, q, not_q, d, clk = Y
alpha1, alpha2, alpha3, alpha4, delta1, delta2, Kd, n = params
da_dt = alpha1*(pow(d/Kd, n)/(1 + pow(d/Kd, n) + pow(clk/Kd, n) + pow(d/Kd, n)*pow(clk/Kd, n))) + alpha2*(1/(1 + pow(not_a/Kd, n))) - delta1 *a
dnot_a_dt = alpha1*(1/(1 + pow(d/Kd, n) + pow(clk/Kd, n) + pow(d/Kd, n)*pow(clk/Kd, n))) + alpha2*(1/(1 + pow(a/Kd, n))) - delta1*not_a
dq_dt = alpha3*((pow(a/Kd, n)*pow(clk/Kd, n))/(1 + pow(a/Kd, n) + pow(clk/Kd, n) + pow(a/Kd, n)*pow(clk/Kd, n))) + alpha4*(1/(1 + pow(not_q/Kd, n))) - delta2*q
dnot_q_dt = alpha3*((pow(not_a/Kd, n)*pow(clk/Kd, n))/(1 + pow(not_a/Kd, n) + pow(clk/Kd, n) + pow(not_a/Kd, n)*pow(clk/Kd, n))) + alpha4*(1/(1 + pow(q/Kd, n))) - delta2*not_q
return np.array([da_dt, dnot_a_dt, dq_dt, dnot_q_dt])
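Every production term in the flip-flop ODEs above is a Hill-type activation or repression; for example, the mutual-repression contribution is alpha2/(1 + (x/Kd)^n). A self-contained check of that repression term's limiting behavior (the Kd and n values below are illustrative, not fitted parameters):

```python
def repress(x, Kd, n):
    # Repressive Hill term as used in the flip-flop equations: 1 / (1 + (x/Kd)^n)
    return 1.0 / (1.0 + (x / Kd) ** n)

Kd, n = 1.0, 4
print(repress(0.0, Kd, n))    # 1.0: no repressor, full production
print(repress(Kd, Kd, n))     # 0.5: half-maximal exactly at x = Kd
print(repress(100.0, Kd, n))  # ~0: saturating repressor shuts production off
```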
# FF MODEL WITH ASYNCHRONOUS RESET AND SET
# added parameters deltaE and KM
# added input variables RESET and SET
# added 23 Jan 2020
def ff_ode_model_RS(Y, T, params):
a, not_a, q, not_q, d, clk, RESET, SET = Y
repress_both = True
if repress_both:
sum_one = a + q
sum_zero = not_a + not_q
alpha1, alpha2, alpha3, alpha4, delta1, delta2, Kd, n, deltaE, KM = params
da_dt = alpha1*(pow(d/Kd, n)/(1 + pow(d/Kd, n) + pow(clk/Kd, n) + pow(d/Kd, n)*pow(clk/Kd, n))) + alpha2*(1/(1 + pow(not_a/Kd, n))) - delta1 *a
#deltaE = delta1
if repress_both:
da_dt += -a*(deltaE*RESET/(KM+sum_one))
else:
da_dt += -a*(deltaE*RESET/(KM+a))
dnot_a_dt = alpha1*(1/(1 + pow(d/Kd, n) + pow(clk/Kd, n) + pow(d/Kd, n)*pow(clk/Kd, n))) + alpha2*(1/(1 + pow(a/Kd, n))) - delta1*not_a
if repress_both:
dnot_a_dt += -not_a*(deltaE*SET/(KM+sum_zero))
else:
dnot_a_dt += -not_a*(deltaE*SET/(KM+not_a))
#deltaE = delta2
dq_dt = alpha3*((pow(a/Kd, n)*pow(clk/Kd, n))/(1 + pow(a/Kd, n) + pow(clk/Kd, n) + pow(a/Kd, n)*pow(clk/Kd, n))) + alpha4*(1/(1 + pow(not_q/Kd, n))) - delta2*q
if repress_both:
dq_dt += -q*(deltaE*RESET/(KM+sum_one))
dnot_q_dt = alpha3*((pow(not_a/Kd, n)*pow(clk/Kd, n))/(1 + pow(not_a/Kd, n) + pow(clk/Kd, n) + pow(not_a/Kd, n)*pow(clk/Kd, n))) + alpha4*(1/(1 + pow(q/Kd, n))) - delta2*not_q
if repress_both:
dnot_q_dt += -not_q*(deltaE*SET/(KM+sum_zero))
return np.array([da_dt, dnot_a_dt, dq_dt, dnot_q_dt])
"""
ADRESSING MODELS
"""
# ADDRESSING 1-BIT QSSA MODEL
def addressing_stochastic_one_bit_model(Y, T, params, omega):
alpha, delta, Kd, n = params
_,_, q1, not_q1, i1, i2 = Y
p = np.zeros(4)
p[0] = alpha*activate_1(not_q1, Kd*omega, n)*omega
p[1] = delta*i1
p[2] = alpha*activate_1(q1, Kd*omega, n)*omega
p[3] = delta*i2
#propensities
return p
# ADDRESSING 2-BIT QSSA MODEL
def addressing_stochastic_two_bit_model(Y, T, params, omega):
alpha, delta, Kd, n = params
_, _, q1, not_q1, _, _, q2, not_q2, i1, i2, i3, i4 = Y
p = np.zeros(8)
p[0] = alpha * activate_2(not_q1, not_q2, Kd*omega, n)*omega
p[1] = delta * i1
p[2] = alpha * activate_2(q1, not_q2, Kd*omega, n)*omega
p[3] = delta * i2
p[4] = alpha * activate_2(q1, q2, Kd*omega, n)*omega
p[5] = delta * i3
p[6] = alpha * activate_2(not_q1, q2, Kd*omega, n)*omega
p[7] = delta * i4
#propensities
return p
# ADDRESSING 3-BIT QSSA MODEL
def addressing_stochastic_three_bit_model(Y, T, params, omega):
alpha, delta, Kd, n = params
_, _, q1, not_q1, _, _, q2, not_q2, _, _, q3, not_q3, i1, i2, i3, i4, i5, i6 = Y
p = np.zeros(12)
p[0] = alpha * activate_2(not_q1, not_q3, Kd*omega, n)*omega
p[1] = delta * i1
p[2] = alpha * activate_2(q1, not_q2, Kd*omega, n)*omega
p[3] = delta * i2
p[4] = alpha * activate_2(q2, not_q3, Kd*omega, n)*omega
p[5] = delta * i3
p[6] = alpha * activate_2(q1, q3, Kd*omega, n)*omega
p[7] = delta * i4
p[8] = alpha * activate_2(not_q1, q2, Kd*omega, n)*omega
p[9] = delta * i5
p[10] = alpha * activate_2(not_q2, q3, Kd*omega, n)*omega
p[11] = delta * i6
#propensities
return p
# ONE BIT ADDRESSING MODEL SIMPLE
def one_bit_simple_addressing_ode_model(Y, T, params):
alpha, delta, Kd, n = params
q1, not_q1, i1, i2 = Y
di1_dt = alpha * activate_1(not_q1, Kd, n) - delta * i1
di2_dt = alpha * activate_1(q1, Kd, n) - delta * i2
return np.array([di1_dt, di2_dt])
# TWO BIT ADDRESSING MODEL SIMPLE
def two_bit_simple_addressing_ode_model(Y, T, params):
alpha, delta, Kd, n = params
q1, not_q1, q2, not_q2, i1, i2, i3, i4 = Y
di1_dt = alpha * activate_2(not_q1, not_q2, Kd, n) - delta * i1
di2_dt = alpha * activate_2(q1, not_q2, Kd, n) - delta * i2
di3_dt = alpha * activate_2(q1, q2, Kd, n) - delta * i3
di4_dt = alpha * activate_2(not_q1, q2, Kd, n) - delta * i4
return np.array([di1_dt, di2_dt, di3_dt, di4_dt])
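Each instruction species i_j in the addressing models above is produced only when its particular pair of Q/not-Q lines is simultaneously high, i.e. activate_2 acts as a soft AND gate on the Johnson-counter state. A minimal sketch of that selection, assuming activate_2 is the product of two activating Hill terms (the actual definition lives in bioproc.hill_functions and may differ):

```python
def activate(x, Kd, n):
    # Activating Hill term
    return x ** n / (Kd ** n + x ** n)

def activate_2(x, y, Kd, n):
    # Assumed soft AND: the output is near 1 only if both inputs are high
    return activate(x, Kd, n) * activate(y, Kd, n)

Kd, n = 1.0, 2
hi, lo = 10.0, 0.1                         # illustrative "high"/"low" levels
q1, not_q1, q2, not_q2 = hi, lo, hi, lo    # Johnson state with q1 and q2 high

rates = [activate_2(not_q1, not_q2, Kd, n),  # drives i1
         activate_2(q1, not_q2, Kd, n),      # drives i2
         activate_2(q1, q2, Kd, n),          # drives i3
         activate_2(not_q1, q2, Kd, n)]      # drives i4
print(max(range(4), key=lambda j: rates[j]))  # -> 2: only i3 is selected
```

With both q1 and q2 high, only the i3 production rate is close to 1; the other instructions see at least one low input and stay essentially off.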
# THREE BIT ADDRESSING MODEL SIMPLE
def three_bit_simple_addressing_ode_model(Y, T, params):
alpha, delta, Kd, n = params
q1, not_q1, q2, not_q2, q3, not_q3, i1, i2, i3, i4, i5, i6 = Y
di1_dt = alpha * activate_2(not_q1, not_q3, Kd, n) - delta * i1
di2_dt = alpha * activate_2(q1, not_q2, Kd, n) - delta * i2
di3_dt = alpha * activate_2(q2, not_q3, Kd, n) - delta * i3
di4_dt = alpha * activate_2(q1, q3, Kd, n) - delta * i4
di5_dt = alpha * activate_2(not_q1, q2, Kd, n) - delta * i5
di6_dt = alpha * activate_2(not_q2, q3, Kd, n) - delta * i6
return np.array([di1_dt, di2_dt, di3_dt, di4_dt, di5_dt, di6_dt])
# FOUR BIT ADDRESSING MODEL SIMPLE
def four_bit_simple_addressing_ode_model(Y, T, params):
alpha, delta, Kd, n = params
q1, not_q1, q2, not_q2, q3, not_q3, q4, not_q4, i1, i2, i3, i4, i5, i6, i7, i8 = Y
di1_dt = alpha * activate_2(not_q1, not_q4, Kd, n) - delta * i1
di2_dt = alpha * activate_2(q1, not_q2, Kd, n) - delta * i2
di3_dt = alpha * activate_2(q2, not_q3, Kd, n) - delta * i3
di4_dt = alpha * activate_2(q3, not_q4, Kd, n) - delta * i4
di5_dt = alpha * activate_2(q1, q4, Kd, n) - delta * i5
di6_dt = alpha * activate_2(not_q1, q2, Kd, n) - delta * i6
di7_dt = alpha * activate_2(not_q2, q3, Kd, n) - delta * i7
di8_dt = alpha * activate_2(not_q3, q4, Kd, n) - delta * i8
return np.array([di1_dt, di2_dt, di3_dt, di4_dt, di5_dt, di6_dt, di7_dt, di8_dt])
# FIVE BIT ADDRESSING MODEL SIMPLE
def five_bit_simple_addressing_ode_model(Y, T, params):
alpha, delta, Kd, n = params
q1, not_q1, q2, not_q2, q3, not_q3, q4, not_q4, q5, not_q5, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10 = Y
di1_dt = alpha * activate_2(not_q1, not_q5, Kd, n) - delta * i1
di2_dt = alpha * activate_2(q1, not_q2, Kd, n) - delta * i2
di3_dt = alpha * activate_2(q2, not_q3, Kd, n) - delta * i3
di4_dt = alpha * activate_2(q3, not_q4, Kd, n) - delta * i4
di5_dt = alpha * activate_2(q4, not_q5, Kd, n) - delta * i5
di6_dt = alpha * activate_2(q1, q5, Kd, n) - delta * i6
di7_dt = alpha * activate_2(not_q1, q2, Kd, n) - delta * i7
di8_dt = alpha * activate_2(not_q2, q3, Kd, n) - delta * i8
di9_dt = alpha * activate_2(not_q3, q4, Kd, n) - delta * i9
di10_dt = alpha * activate_2(not_q4, q5, Kd, n) - delta * i10
return np.array([di1_dt, di2_dt, di3_dt, di4_dt, di5_dt, di6_dt, di7_dt, di8_dt, di9_dt, di10_dt])
"""
JOHNSON COUNTER MODELS
"""
# TOP MODEL (JOHNSON): ONE BIT MODEL WITH EXTERNAL CLOCK
def one_bit_model(Y, T, params):
a, not_a, q, not_q= Y
clk = get_clock(T)
d = not_q
Y_FF1 = [a, not_a, q, not_q, d, clk]
dY = ff_ode_model(Y_FF1, T, params)
return dY
# TOP MODEL (JOHNSON): TWO BIT MODEL WITH EXTERNAL CLOCK
def two_bit_model(Y, T, params):
a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2 = Y
clk = get_clock(T)
d1 = not_q2
d2 = q1
Y_FF1 = [a1, not_a1, q1, not_q1, d1, clk]
Y_FF2 = [a2, not_a2, q2, not_q2, d2, clk]
dY1 = ff_ode_model(Y_FF1, T, params)
dY2 = ff_ode_model(Y_FF2, T, params)
dY = np.append(dY1, dY2)
return dY
# TOP MODEL (JOHNSON): THREE BIT MODEL WITH EXTERNAL CLOCK
def three_bit_model(Y, T, params):
a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3 = Y
clk = get_clock(T)
d1 = not_q3
d2 = q1
d3 = q2
Y_FF1 = [a1, not_a1, q1, not_q1, d1, clk]
Y_FF2 = [a2, not_a2, q2, not_q2, d2, clk]
Y_FF3 = [a3, not_a3, q3, not_q3, d3, clk]
dY1 = ff_ode_model(Y_FF1, T, params)
dY2 = ff_ode_model(Y_FF2, T, params)
dY3 = ff_ode_model(Y_FF3, T, params)
dY = np.append(np.append(dY1, dY2), dY3)
return dY
# TOP MODEL (JOHNSON): FOUR BIT MODEL WITH EXTERNAL CLOCK
def four_bit_model(Y, T, params):
a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3, a4, not_a4, q4, not_q4 = Y
clk = get_clock(T)
d1 = not_q4
d2 = q1
d3 = q2
d4 = q3
Y_FF1 = [a1, not_a1, q1, not_q1, d1, clk]
Y_FF2 = [a2, not_a2, q2, not_q2, d2, clk]
Y_FF3 = [a3, not_a3, q3, not_q3, d3, clk]
Y_FF4 = [a4, not_a4, q4, not_q4, d4, clk]
dY1 = ff_ode_model(Y_FF1, T, params)
dY2 = ff_ode_model(Y_FF2, T, params)
dY3 = ff_ode_model(Y_FF3, T, params)
dY4 = ff_ode_model(Y_FF4, T, params)
dY = np.append(np.append(np.append(dY1, dY2), dY3), dY4)
return dY
# TOP MODEL (JOHNSON): FIVE BIT MODEL WITH EXTERNAL CLOCK
def five_bit_model(Y, T, params):
a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3, a4, not_a4, q4, not_q4, a5, not_a5, q5, not_q5 = Y
clk = get_clock(T)
d1 = not_q5
d2 = q1
d3 = q2
d4 = q3
d5 = q4
Y_FF1 = [a1, not_a1, q1, not_q1, d1, clk]
Y_FF2 = [a2, not_a2, q2, not_q2, d2, clk]
Y_FF3 = [a3, not_a3, q3, not_q3, d3, clk]
Y_FF4 = [a4, not_a4, q4, not_q4, d4, clk]
Y_FF5 = [a5, not_a5, q5, not_q5, d5, clk]
dY1 = ff_ode_model(Y_FF1, T, params)
dY2 = ff_ode_model(Y_FF2, T, params)
dY3 = ff_ode_model(Y_FF3, T, params)
dY4 = ff_ode_model(Y_FF4, T, params)
dY5 = ff_ode_model(Y_FF5, T, params)
dY = np.append(np.append(np.append(np.append(dY1, dY2), dY3), dY4), dY5)
return dY
"""
JOHNSON COUNTER MODELS THAT USE FLIP-FLOPS WITH ASYNCHRONOUS SET/RESET
added 23 Jan 2020
"""
# TOP MODEL (JOHNSON): ONE BIT MODEL WITH EXTERNAL CLOCK AND FLIP-FLOPS WITH ASYNCHRONOUS SET/RESET
def one_bit_model_RS(Y, T, params):
a, not_a, q, not_q, R, S = Y
clk = get_clock(T)
d = not_q
Y_FF1 = [a, not_a, q, not_q, d, clk, R, S]
dY = ff_ode_model_RS(Y_FF1, T, params)
return dY
# TOP MODEL (JOHNSON): TWO BIT MODEL WITH EXTERNAL CLOCK AND FLIP-FLOPS WITH ASYNCHRONOUS SET/RESET
def two_bit_model_RS(Y, T, params):
a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, R1, S1, R2, S2 = Y
clk = get_clock(T)
d1 = not_q2
d2 = q1
Y_FF1 = [a1, not_a1, q1, not_q1, d1, clk, R1, S1]
Y_FF2 = [a2, not_a2, q2, not_q2, d2, clk, R2, S2]
dY1 = ff_ode_model_RS(Y_FF1, T, params)
dY2 = ff_ode_model_RS(Y_FF2, T, params)
dY = np.append(dY1, dY2)
return dY
# TOP MODEL (JOHNSON): THREE BIT MODEL WITH EXTERNAL CLOCK AND FLIP-FLOPS WITH ASYNCHRONOUS SET/RESET
def three_bit_model_RS(Y, T, params):
a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3, R1, S1, R2, S2, R3, S3 = Y
clk = get_clock(T)
d1 = not_q3
d2 = q1
d3 = q2
Y_FF1 = [a1, not_a1, q1, not_q1, d1, clk, R1, S1]
Y_FF2 = [a2, not_a2, q2, not_q2, d2, clk, R2, S2]
Y_FF3 = [a3, not_a3, q3, not_q3, d3, clk, R3, S3]
dY1 = ff_ode_model_RS(Y_FF1, T, params)
dY2 = ff_ode_model_RS(Y_FF2, T, params)
dY3 = ff_ode_model_RS(Y_FF3, T, params)
dY = np.append(np.append(dY1, dY2), dY3)
return dY
"""
PROCESSOR MODEL
!!!OPTIMIZATION IS PERFORMED OVER THESE MODELS!!!
"""
# TOP MODEL OF PROCESSOR WITH ONE BIT ADDRESSING
def one_bit_processor_ext(Y, T, params_johnson, params_addr):
a1, not_a1, q1, not_q1, i1, i2 = Y
Y_johnson = [a1, not_a1, q1, not_q1]
Y_address = [q1, not_q1, i1, i2]
dY_johnson = one_bit_model(Y_johnson, T, params_johnson)
dY_addr = one_bit_simple_addressing_ode_model(Y_address, T, params_addr)
dY = np.append(dY_johnson, dY_addr)
return dY
# TOP MODEL OF PROCESSOR WITH TWO BIT ADDRESSING
def two_bit_processor_ext(Y, T, params_johnson, params_addr):
a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, i1, i2, i3, i4 = Y
Y_johnson = [a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2]
Y_address = [q1, not_q1, q2, not_q2, i1, i2, i3, i4]
dY_johnson = two_bit_model(Y_johnson, T, params_johnson)
dY_addr = two_bit_simple_addressing_ode_model(Y_address, T, params_addr)
dY = np.append(dY_johnson, dY_addr)
return dY
# TOP MODEL OF PROCESSOR WITH THREE BIT ADDRESSING
def three_bit_processor_ext(Y, T, params_johnson, params_addr):
a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3, i1, i2, i3, i4, i5, i6 = Y
Y_johnson = [a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3]
Y_address = [q1, not_q1, q2, not_q2, q3, not_q3, i1, i2, i3, i4, i5, i6]
dY_johnson = three_bit_model(Y_johnson, T, params_johnson)
dY_addr = three_bit_simple_addressing_ode_model(Y_address, T, params_addr)
dY = np.append(dY_johnson, dY_addr)
return dY
# TOP MODEL OF PROCESSOR WITH FOUR BIT ADDRESSING
def four_bit_processor_ext(Y, T, params_johnson, params_addr):
a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3, a4, not_a4, q4, not_q4, i1, i2, i3, i4, i5, i6, i7, i8 = Y
Y_johnson = [a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3, a4, not_a4, q4, not_q4]
Y_address = [q1, not_q1, q2, not_q2, q3, not_q3, q4, not_q4, i1, i2, i3, i4, i5, i6, i7, i8]
dY_johnson = four_bit_model(Y_johnson, T, params_johnson)
dY_addr = four_bit_simple_addressing_ode_model(Y_address, T, params_addr)
dY = np.append(dY_johnson, dY_addr)
return dY
# TOP MODEL OF PROCESSOR WITH FIVE BIT ADDRESSING
def five_bit_processor_ext(Y, T, params_johnson, params_addr):
a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3, a4, not_a4, q4, not_q4, a5, not_a5, q5, not_q5, i1, i2, i3, i4, i5, i6, i7, i8 ,i9, i10 = Y
Y_johnson = [a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3, a4, not_a4, q4, not_q4, a5, not_a5, q5, not_q5]
Y_address = [q1, not_q1, q2, not_q2, q3, not_q3, q4, not_q4, q5, not_q5, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10]
dY_johnson = five_bit_model(Y_johnson, T, params_johnson)
dY_addr = five_bit_simple_addressing_ode_model(Y_address, T, params_addr)
dY = np.append(dY_johnson, dY_addr)
return dY
"""
PROCESSOR MODEL WITH EXTERNAL CLOCK AND RS inputs
external clock is required, more robust
jumps allowed
added 23 Jan 2020
"""
# TOP MODEL OF PROCESSOR WITH ONE BIT ADDRESSING AND FLIP-FLOP WITH RS ASYNCHRONOUS INPUTS
def one_bit_processor_ext_RS(Y, T, params_johnson_RS, params_addr):
a1, not_a1, q1, not_q1, i1, i2 = Y
R1 = 0
S1 = 0
Y_johnson = [a1, not_a1, q1, not_q1, R1, S1]
Y_address = [q1, not_q1, i1, i2]
dY_johnson = one_bit_model_RS(Y_johnson, T, params_johnson_RS)
dY_addr = one_bit_simple_addressing_ode_model(Y_address, T, params_addr)
dY = np.append(dY_johnson, dY_addr)
return dY
# TOP MODEL OF PROCESSOR WITH TWO BIT ADDRESSING AND FLIP-FLOPS WITH RS ASYNCHRONOUS INPUTS
def two_bit_processor_ext_RS(Y, T, params_johnson_RS, params_addr):
a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, i1, i2, i3, i4 = Y
R1 = 0
S1 = 0
R2 = 0
S2 = 0
Y_johnson = [a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, R1, S1, R2, S2]
Y_address = [q1, not_q1, q2, not_q2, i1, i2, i3, i4]
dY_johnson = two_bit_model_RS(Y_johnson, T, params_johnson_RS)
dY_addr = two_bit_simple_addressing_ode_model(Y_address, T, params_addr)
dY = np.append(dY_johnson, dY_addr)
return dY
# TOP MODEL OF PROCESSOR WITH THREE BIT ADDRESSING AND FLIP-FLOPS WITH RS ASYNCHRONOUS INPUTS
def three_bit_processor_ext_RS(Y, T, params_johnson_RS, params_addr, jump_src, jump_dst, i_src, i_dst):
a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3, i1, i2, i3, i4, i5, i6 = Y
i_src = eval(i_src)
R = [0,0,0]
S = [0,0,0]
for i in range(len(jump_src)):
if jump_src[i] > jump_dst[i]:
R[i] = i_src
elif jump_src[i] < jump_dst[i]:
S[i] = i_src
R1, R2, R3 = R if T > 1 else [100,100,100]
S1, S2, S3 = S
Y_johnson = [a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3, R1, S1, R2, S2, R3, S3]
Y_address = [q1, not_q1, q2, not_q2, q3, not_q3, i1, i2, i3, i4, i5, i6]
dY_johnson = three_bit_model_RS(Y_johnson, T, params_johnson_RS)
dY_addr = three_bit_simple_addressing_ode_model(Y_address, T, params_addr)
dY = np.append(dY_johnson, dY_addr)
return dY
"""
PROCESSOR MODEL WITH EXTERNAL CLOCK AND RS inputs AND JUMP CONDITIONS
added 24 Jan 2020
"""
def get_condition(x0, delta, t):
return x0 * np.e**(-delta*t)
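get_condition models the jump condition as a species that decays exponentially from x0 at rate delta. Choosing delta = ln(2) makes the condition halve every time unit, which gives a quick numeric check of the formula (the parameter values here are illustrative, not from the model):

```python
import numpy as np

def get_condition(x0, delta, t):
    # Exponentially decaying condition signal, as in the model above
    return x0 * np.e ** (-delta * t)

x0, delta = 8.0, np.log(2)          # half-life of exactly 1 time unit
print(get_condition(x0, delta, 0))  # -> 8.0
print(get_condition(x0, delta, 3))  # 8 / 2**3 = 1.0
```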
# TOP MODEL OF PROCESSOR WITH THREE BIT ADDRESSING AND CONDITIONAL JUMPS
def three_bit_processor_ext_RS_cond(Y, T, params_johnson_RS, params_addr, jump_src, jump_dst, i_src, i_dst, condition):
a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3, i1, i2, i3, i4, i5, i6 = Y
x0_cond, delta_cond, KD_cond, condition_type = condition
cond = get_condition(x0_cond, delta_cond, T)
i_src = eval(i_src)
R = np.array([0,0,0])
S = np.array([0,0,0])
for i in range(len(jump_src)):
if jump_src[i] > jump_dst[i]:
R[i] = i_src
elif jump_src[i] < jump_dst[i]:
S[i] = i_src
if condition_type == "induction":
R = induction(R, cond, KD_cond)
S = induction(S, cond, KD_cond)
else:
R = inhibition(R, cond, KD_cond)
S = inhibition(S, cond, KD_cond)
R1, R2, R3 = R
S1, S2, S3 = S
Y_johnson = [a1, not_a1, q1, not_q1, a2, not_a2, q2, not_q2, a3, not_a3, q3, not_q3, R1, S1, R2, S2, R3, S3]
Y_address = [q1, not_q1, q2, not_q2, q3, not_q3, i1, i2, i3, i4, i5, i6]
dY_johnson = three_bit_model_RS(Y_johnson, T, params_johnson_RS)
dY_addr = three_bit_simple_addressing_ode_model(Y_address, T, params_addr)
dY = np.append(dY_johnson, dY_addr)
return dY
| 32.990132 | 182 | 0.617609 | 3,768 | 20,058 | 3.053875 | 0.052282 | 0.023725 | 0.028591 | 0.023464 | 0.888329 | 0.847658 | 0.801165 | 0.786565 | 0.758582 | 0.71765 | 0 | 0.07833 | 0.236863 | 20,058 | 607 | 183 | 33.044481 | 0.673417 | 0.085851 | 0 | 0.544413 | 0 | 0 | 0.000505 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083095 | false | 0 | 0.005731 | 0.002865 | 0.17192 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d38fa5a020b98e37d563129fd98805c3a815a06c | 509 | py | Python | fastiqa/vqa.py | baidut/PatchVQ | 040486b6342dfd36695f1daea0b5c4d77d728a23 | [
"Unlicense"
] | 32 | 2020-12-05T09:11:20.000Z | 2022-03-28T07:49:13.000Z | fastiqa/vqa.py | utlive/PatchVQ | 040486b6342dfd36695f1daea0b5c4d77d728a23 | [
"Unlicense"
] | 5 | 2021-07-12T19:43:51.000Z | 2022-01-28T13:16:16.000Z | fastiqa/vqa.py | utlive/PatchVQ | 040486b6342dfd36695f1daea0b5c4d77d728a23 | [
"Unlicense"
] | 7 | 2020-12-29T21:52:07.000Z | 2022-03-18T15:12:50.000Z | from fastai.vision.all import *
from fastai.distributed import *
from fastiqa.bunches.vqa.vid2mos import *
from fastiqa.bunches.vqa.vid_sp2mos import *
from fastiqa.bunches.vqa.single_vid2mos import *
from fastiqa.bunches.feat2mos import *
from fastiqa.bunches.vqa.test_videos import *
from fastiqa.models._body_head import *
from fastiqa.models.resnet_3d import *
from fastiqa.models.inception_head import *
from fastiqa.models._roi_pool import *
from fastiqa.learn import *
from fastiqa.iqa_exp import *
| 29.941176 | 48 | 0.81336 | 74 | 509 | 5.459459 | 0.364865 | 0.29703 | 0.462871 | 0.29703 | 0.49505 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011013 | 0.108055 | 509 | 16 | 49 | 31.8125 | 0.878855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d3a9d77bcb85a726fe556e05107cb08e33ab6304 | 194 | py | Python | {{cookiecutter.repo_name}}/tests/test_{{cookiecutter.repo_name}}_package.py | tjelvar-olsson/cookiecutter-pypackage | 58c90986572b54063edc5809700140bd650f8bf4 | [
"MIT"
] | null | null | null | {{cookiecutter.repo_name}}/tests/test_{{cookiecutter.repo_name}}_package.py | tjelvar-olsson/cookiecutter-pypackage | 58c90986572b54063edc5809700140bd650f8bf4 | [
"MIT"
] | null | null | null | {{cookiecutter.repo_name}}/tests/test_{{cookiecutter.repo_name}}_package.py | tjelvar-olsson/cookiecutter-pypackage | 58c90986572b54063edc5809700140bd650f8bf4 | [
"MIT"
] | 1 | 2019-08-05T00:38:50.000Z | 2019-08-05T00:38:50.000Z | """Test the {{ cookiecutter.repo_name }} package."""
def test_version_is_string():
    import {{ cookiecutter.repo_name }}
    assert isinstance({{ cookiecutter.repo_name }}.__version__, str)
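When cookiecutter renders this template, every `{{ cookiecutter.repo_name }}` placeholder is replaced by the chosen package name. The effect can be mimicked with a plain string substitution (illustrative only; real rendering goes through Jinja2, and `mypackage` is a hypothetical name):

```python
template = (
    "def test_version_is_string():\n"
    "    import {{ cookiecutter.repo_name }}\n"
    "    assert isinstance({{ cookiecutter.repo_name }}.__version__, str)\n"
)
# Substitute the placeholder the way a render of the template would
rendered = template.replace("{{ cookiecutter.repo_name }}", "mypackage")
```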
| 27.714286 | 68 | 0.71134 | 22 | 194 | 5.818182 | 0.636364 | 0.375 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139175 | 194 | 6 | 69 | 32.333333 | 0.766467 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | null | null | 0 | 0.333333 | null | null | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
6c9e246ae15c63613cd6c2b20609d5f0ab60a6fe | 42 | py | Python | content/downloads/code/hello_world.py | Jerska/jakevdp.github.io-source | ee516b93a8d83457b1f11f21dba9016f685e887d | [
"MIT"
] | 88 | 2017-03-23T02:03:19.000Z | 2022-01-03T04:43:38.000Z | content/downloads/code/hello_world.py | Jerska/jakevdp.github.io-source | ee516b93a8d83457b1f11f21dba9016f685e887d | [
"MIT"
] | 6 | 2017-10-11T15:11:49.000Z | 2018-11-03T16:43:49.000Z | content/downloads/code/hello_world.py | Jerska/jakevdp.github.io-source | ee516b93a8d83457b1f11f21dba9016f685e887d | [
"MIT"
] | 67 | 2017-03-08T18:41:07.000Z | 2022-02-15T02:17:41.000Z | import sys
import os
print("hello_world")
| 10.5 | 20 | 0.785714 | 7 | 42 | 4.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 42 | 3 | 21 | 14 | 0.864865 | 0 | 0 | 0 | 0 | 0 | 0.261905 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0.333333 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6cab7cd5cbf21f075d48ac70d2479b970da3a25c | 34 | py | Python | sources/viewgui/app_main_window/__init__.py | Groomsha/lan-map | 1c30819470f43f8521e98eb75c70da23939f8f06 | [
"Apache-2.0"
] | null | null | null | sources/viewgui/app_main_window/__init__.py | Groomsha/lan-map | 1c30819470f43f8521e98eb75c70da23939f8f06 | [
"Apache-2.0"
] | null | null | null | sources/viewgui/app_main_window/__init__.py | Groomsha/lan-map | 1c30819470f43f8521e98eb75c70da23939f8f06 | [
"Apache-2.0"
] | null | null | null | from .ui_app_main_window import *
| 17 | 33 | 0.823529 | 6 | 34 | 4.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6cf5868cddef0acac801849800692c7ecb79b242 | 62 | py | Python | src/packs/__init__.py | Flaiers/flatype | ebb951fb1dee2075779a19fd090166bb1347658f | [
"MIT"
] | 2 | 2021-07-31T20:01:36.000Z | 2021-09-07T13:37:42.000Z | src/packs/__init__.py | Flaiers/flatype | ebb951fb1dee2075779a19fd090166bb1347658f | [
"MIT"
] | null | null | null | src/packs/__init__.py | Flaiers/flatype | ebb951fb1dee2075779a19fd090166bb1347658f | [
"MIT"
] | 2 | 2021-09-07T13:37:43.000Z | 2021-10-31T20:19:29.000Z | from .db import *
from .hashing import *
from .types import *
| 15.5 | 22 | 0.709677 | 9 | 62 | 4.888889 | 0.555556 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193548 | 62 | 3 | 23 | 20.666667 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6cfcc432d9799eeca3082db6ec3f5c398e9ea2d0 | 19,094 | py | Python | pybind/slxos/v16r_1_00b/rmon/event_entry/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v16r_1_00b/rmon/event_entry/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v16r_1_00b/rmon/event_entry/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
class event_entry(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-rmon - based on the path /rmon/event-entry. Each member element of
the container is represented as a class variable - with a specific
YANG type.
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__event_index','__event_description','__log','__event_community','__event_owner',)
_yang_name = 'event-entry'
_rest_name = 'event'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__event_community = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'1 .. 127']}), default=unicode("__default_community"), is_leaf=True, yang_name="event-community", rest_name="trap", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Send trap for the event', u'alt-name': u'trap'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='string', is_config=True)
self.__event_index = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['-2147483648..2147483647']}, int_size=32), restriction_dict={'range': [u'1 .. 65535']}), is_leaf=True, yang_name="event-index", rest_name="event-index", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-suppress-range': None}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='event-index-type', is_config=True)
self.__event_owner = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[a-zA-Z]{1}([-a-zA-Z0-9\\.\\\\\\\\@#\\+\\*\\(\\)=\\{~\\}%<>=$_\\[\\]\\|]{0,14})', 'length': [u'1 .. 15']}), is_leaf=True, yang_name="event-owner", rest_name="owner", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Owner identity', u'alt-name': u'owner'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='owner-string', is_config=True)
self.__event_description = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'min .. 127']}), default=unicode("__default_description"), is_leaf=True, yang_name="event-description", rest_name="description", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Event description', u'alt-name': u'description'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='event-description-type', is_config=True)
self.__log = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="log", rest_name="log", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Log the event'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='empty', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'rmon', u'event-entry']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'rmon', u'event']
def _get_event_index(self):
"""
Getter method for event_index, mapped from YANG variable /rmon/event_entry/event_index (event-index-type)
"""
return self.__event_index
def _set_event_index(self, v, load=False):
"""
Setter method for event_index, mapped from YANG variable /rmon/event_entry/event_index (event-index-type)
If this variable is read-only (config: false) in the
source YANG file, then _set_event_index is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_event_index() directly.
"""
parent = getattr(self, "_parent", None)
if parent is not None and load is False:
raise AttributeError("Cannot set keys directly when" +
" within an instantiated list")
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['-2147483648..2147483647']}, int_size=32), restriction_dict={'range': [u'1 .. 65535']}), is_leaf=True, yang_name="event-index", rest_name="event-index", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-suppress-range': None}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='event-index-type', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """event_index must be of a type compatible with event-index-type""",
'defined-type': "brocade-rmon:event-index-type",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['-2147483648..2147483647']}, int_size=32), restriction_dict={'range': [u'1 .. 65535']}), is_leaf=True, yang_name="event-index", rest_name="event-index", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-suppress-range': None}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='event-index-type', is_config=True)""",
})
self.__event_index = t
if hasattr(self, '_set'):
self._set()
def _unset_event_index(self):
self.__event_index = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['-2147483648..2147483647']}, int_size=32), restriction_dict={'range': [u'1 .. 65535']}), is_leaf=True, yang_name="event-index", rest_name="event-index", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-suppress-range': None}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='event-index-type', is_config=True)
def _get_event_description(self):
"""
Getter method for event_description, mapped from YANG variable /rmon/event_entry/event_description (event-description-type)
"""
return self.__event_description
def _set_event_description(self, v, load=False):
"""
Setter method for event_description, mapped from YANG variable /rmon/event_entry/event_description (event-description-type)
If this variable is read-only (config: false) in the
source YANG file, then _set_event_description is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_event_description() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'min .. 127']}), default=unicode("__default_description"), is_leaf=True, yang_name="event-description", rest_name="description", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Event description', u'alt-name': u'description'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='event-description-type', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """event_description must be of a type compatible with event-description-type""",
'defined-type': "brocade-rmon:event-description-type",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'min .. 127']}), default=unicode("__default_description"), is_leaf=True, yang_name="event-description", rest_name="description", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Event description', u'alt-name': u'description'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='event-description-type', is_config=True)""",
})
self.__event_description = t
if hasattr(self, '_set'):
self._set()
def _unset_event_description(self):
self.__event_description = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'min .. 127']}), default=unicode("__default_description"), is_leaf=True, yang_name="event-description", rest_name="description", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Event description', u'alt-name': u'description'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='event-description-type', is_config=True)
def _get_log(self):
"""
Getter method for log, mapped from YANG variable /rmon/event_entry/log (empty)
"""
return self.__log
def _set_log(self, v, load=False):
"""
Setter method for log, mapped from YANG variable /rmon/event_entry/log (empty)
If this variable is read-only (config: false) in the
source YANG file, then _set_log is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_log() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="log", rest_name="log", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Log the event'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='empty', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """log must be of a type compatible with empty""",
'defined-type': "empty",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="log", rest_name="log", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Log the event'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='empty', is_config=True)""",
})
self.__log = t
if hasattr(self, '_set'):
self._set()
def _unset_log(self):
self.__log = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="log", rest_name="log", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Log the event'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='empty', is_config=True)
def _get_event_community(self):
"""
Getter method for event_community, mapped from YANG variable /rmon/event_entry/event_community (string)
"""
return self.__event_community
def _set_event_community(self, v, load=False):
"""
Setter method for event_community, mapped from YANG variable /rmon/event_entry/event_community (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_event_community is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_event_community() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'1 .. 127']}), default=unicode("__default_community"), is_leaf=True, yang_name="event-community", rest_name="trap", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Send trap for the event', u'alt-name': u'trap'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """event_community must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'1 .. 127']}), default=unicode("__default_community"), is_leaf=True, yang_name="event-community", rest_name="trap", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Send trap for the event', u'alt-name': u'trap'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='string', is_config=True)""",
})
self.__event_community = t
if hasattr(self, '_set'):
self._set()
def _unset_event_community(self):
self.__event_community = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'1 .. 127']}), default=unicode("__default_community"), is_leaf=True, yang_name="event-community", rest_name="trap", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Send trap for the event', u'alt-name': u'trap'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='string', is_config=True)
def _get_event_owner(self):
"""
Getter method for event_owner, mapped from YANG variable /rmon/event_entry/event_owner (owner-string)
"""
return self.__event_owner
def _set_event_owner(self, v, load=False):
"""
Setter method for event_owner, mapped from YANG variable /rmon/event_entry/event_owner (owner-string)
If this variable is read-only (config: false) in the
source YANG file, then _set_event_owner is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_event_owner() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[a-zA-Z]{1}([-a-zA-Z0-9\\.\\\\\\\\@#\\+\\*\\(\\)=\\{~\\}%<>=$_\\[\\]\\|]{0,14})', 'length': [u'1 .. 15']}), is_leaf=True, yang_name="event-owner", rest_name="owner", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Owner identity', u'alt-name': u'owner'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='owner-string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """event_owner must be of a type compatible with owner-string""",
'defined-type': "brocade-rmon:owner-string",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[a-zA-Z]{1}([-a-zA-Z0-9\\.\\\\\\\\@#\\+\\*\\(\\)=\\{~\\}%<>=$_\\[\\]\\|]{0,14})', 'length': [u'1 .. 15']}), is_leaf=True, yang_name="event-owner", rest_name="owner", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Owner identity', u'alt-name': u'owner'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='owner-string', is_config=True)""",
})
self.__event_owner = t
if hasattr(self, '_set'):
self._set()
def _unset_event_owner(self):
self.__event_owner = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[a-zA-Z]{1}([-a-zA-Z0-9\\.\\\\\\\\@#\\+\\*\\(\\)=\\{~\\}%<>=$_\\[\\]\\|]{0,14})', 'length': [u'1 .. 15']}), is_leaf=True, yang_name="event-owner", rest_name="owner", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Owner identity', u'alt-name': u'owner'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='owner-string', is_config=True)
event_index = __builtin__.property(_get_event_index, _set_event_index)
event_description = __builtin__.property(_get_event_description, _set_event_description)
log = __builtin__.property(_get_log, _set_log)
event_community = __builtin__.property(_get_event_community, _set_event_community)
event_owner = __builtin__.property(_get_event_owner, _set_event_owner)
_pyangbind_elements = {'event_index': event_index, 'event_description': event_description, 'log': log, 'event_community': event_community, 'event_owner': event_owner, }
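Stripped of pyangbind's `YANGDynClass` type machinery, each generated leaf follows the same pattern: a private attribute, a `_get_*`/`_set_*` pair (the setter enforcing the YANG restriction), and a `property` wiring them together. A minimal Python 3 sketch of that pattern (hypothetical class, only the `event-index` range check kept):

```python
class LeafContainer:
    """Sketch of the pyangbind getter/setter/property pattern, without
    YANGDynClass. Mirrors only the 'range: 1..65535' restriction."""

    def __init__(self):
        self.__event_index = 1

    def _get_event_index(self):
        return self.__event_index

    def _set_event_index(self, v):
        if not (1 <= v <= 65535):
            raise ValueError(
                "event_index must be of a type compatible with event-index-type")
        self.__event_index = v

    # attribute access goes through the getter/setter pair
    event_index = property(_get_event_index, _set_event_index)
```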
| 71.246269 | 595 | 0.719231 | 2,586 | 19,094 | 5.062258 | 0.076953 | 0.04125 | 0.047055 | 0.021389 | 0.806126 | 0.783745 | 0.762585 | 0.75487 | 0.742266 | 0.739821 | 0 | 0.010954 | 0.129831 | 19,094 | 267 | 596 | 71.513109 | 0.776949 | 0.127475 | 0 | 0.406977 | 0 | 0.046512 | 0.364617 | 0.166128 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104651 | false | 0 | 0.046512 | 0 | 0.273256 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9f22ecadee15b29febc27386c5d75768609cf0a2 | 150 | py | Python | answers/pandas_answer8.py | monocilindro/foss4g-geopandas | 12afbc787c1f65cc046234b41166bd62bbb6ac29 | [
"Apache-2.0"
] | null | null | null | answers/pandas_answer8.py | monocilindro/foss4g-geopandas | 12afbc787c1f65cc046234b41166bd62bbb6ac29 | [
"Apache-2.0"
] | null | null | null | answers/pandas_answer8.py | monocilindro/foss4g-geopandas | 12afbc787c1f65cc046234b41166bd62bbb6ac29 | [
"Apache-2.0"
] | null | null | null | sns.catplot(x="Youth_Unemployment_(claimant)_rate_18-24_(Dec-15)", y="Largest_migrant_population_arrived_during_2015/16", kind="box", data=boroughs);
| 75 | 149 | 0.813333 | 23 | 150 | 4.869565 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082192 | 0.026667 | 150 | 1 | 150 | 150 | 0.684932 | 0 | 0 | 0 | 0 | 0 | 0.673333 | 0.653333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9f51da4a2c4d547f1a99a64f4fd42f332e3077f5 | 16,308 | py | Python | src/networks/_original/ConvLSTM_and_UNet.py | claudius-kienle/self-supervised-depth-denoising | 4dffb30e8ef5022ef665825d26f45f67bf712cfd | [
"MIT"
] | 2 | 2021-12-02T15:06:28.000Z | 2021-12-03T09:48:32.000Z | src/networks/_original/ConvLSTM_and_UNet.py | claudius-kienle/self-supervised-depth-denoising | 4dffb30e8ef5022ef665825d26f45f67bf712cfd | [
"MIT"
] | 23 | 2022-02-24T09:17:03.000Z | 2022-03-21T16:57:58.000Z | src/networks/_original/ConvLSTM_and_UNet.py | alr-internship/self-supervised-depth-denoising | 4dffb30e8ef5022ef665825d26f45f67bf712cfd | [
"MIT"
] | null | null | null | import torch.nn as nn
import torch
import torch.nn as nn
import torch.nn.functional as F
from common import conv, norm, ListModule, BloorPool, ConvLSTMCell
from unet_parts import unetDown
class UNet(nn.Module):
    '''
    upsample_mode in ['deconv', 'nearest', 'bilinear']
    pad in ['zero', 'replication', 'none']
    '''
    def __init__(
        self,
        num_input_channels=3,
        num_output_channels=3,
        feature_scale=4,
        more_layers=0,
        concat_x=False,
        upsample_mode='deconv',
        pad='zero',
        norm_layer='in',
        last_act='sigmoid',
        need_bias=True,
        downsample_mode='max'
    ):
        super(UNet, self).__init__()

        self.feature_scale = feature_scale
        self.more_layers = more_layers
        self.concat_x = concat_x

        filters = [64, 128, 256, 512, 1024]
        # feature_scale divides the base channel widths to shrink the model
        filters = [x // self.feature_scale for x in filters]

        # unetConv2 == DoubleConv
        self.start = unetConv2(
            num_input_channels,
            filters[0] if not concat_x else filters[0] - num_input_channels,
            norm_layer,
            need_bias,
            pad
        )

        # unetDown == Down
        self.down1 = unetDown(
            filters[0],
            filters[1] if not concat_x else filters[1] - num_input_channels,
            norm_layer,
            need_bias,
            pad,
            downsample_mode
        )
        self.down2 = unetDown(
            filters[1],
            filters[2] if not concat_x else filters[2] - num_input_channels,
            norm_layer,
            need_bias,
            pad,
            downsample_mode
        )
        self.down3 = unetDown(
            filters[2],
            filters[3] if not concat_x else filters[3] - num_input_channels,
            norm_layer,
            need_bias,
            pad,
            downsample_mode
        )
        self.down4 = unetDown(
            filters[3],
            filters[4] if not concat_x else filters[4] - num_input_channels,
            norm_layer,
            need_bias,
            pad,
            downsample_mode
        )

        # extra downsampling layers; not part of the original U-Net described in the paper
        if self.more_layers > 0:
            self.more_downs = [
                unetDown(filters[4], filters[4] if not concat_x else filters[4] - num_input_channels,
                         norm_layer, need_bias, pad) for i in range(self.more_layers)]
            self.more_ups = [unetUp(filters[4], upsample_mode, need_bias,
                                    pad, same_num_filt=True) for i in range(self.more_layers)]

            self.more_downs = ListModule(*self.more_downs)
            self.more_ups = ListModule(*self.more_ups)

        # Up
        self.up4 = unetUp(filters[3], upsample_mode, need_bias, pad)
        self.up3 = unetUp(filters[2], upsample_mode, need_bias, pad)
        self.up2 = unetUp(filters[1], upsample_mode, need_bias, pad)
        self.up1 = unetUp(filters[0], upsample_mode, need_bias, pad)

        # OutConv
        self.final = conv(filters[0], num_output_channels,
                          1, bias=need_bias, pad=pad)

        if last_act == 'sigmoid':
            self.final = nn.Sequential(self.final, nn.Sigmoid())
        elif last_act == 'tanh':
            self.final = nn.Sequential(self.final, nn.Tanh())
    def forward(self, inputs):
        # Downsample
        downs = [inputs]
        down = nn.AvgPool2d(2, 2)
        for i in range(4 + self.more_layers):
            downs.append(down(downs[-1]))

        in64 = self.start(inputs)
        if self.concat_x:
            in64 = torch.cat([in64, downs[0]], 1)

        down1 = self.down1(in64)
        if self.concat_x:
            down1 = torch.cat([down1, downs[1]], 1)

        down2 = self.down2(down1)
        if self.concat_x:
            down2 = torch.cat([down2, downs[2]], 1)

        down3 = self.down3(down2)
        if self.concat_x:
            down3 = torch.cat([down3, downs[3]], 1)

        down4 = self.down4(down3)
        if self.concat_x:
            down4 = torch.cat([down4, downs[4]], 1)

        if self.more_layers > 0:
            prevs = [down4]
            for kk, d in enumerate(self.more_downs):
                # print(prevs[-1].size())
                out = d(prevs[-1])
                if self.concat_x:
                    out = torch.cat([out, downs[kk + 5]], 1)
                prevs.append(out)

            up_ = self.more_ups[-1](prevs[-1], prevs[-2])
            for idx in range(self.more_layers - 1):
                l = self.more_ups[self.more_layers - idx - 2]  # was self.more (undefined attribute)
                up_ = l(up_, prevs[self.more_layers - idx - 2])
        else:
            up_ = down4

        up4 = self.up4(up_, down3)
        up3 = self.up3(up4, down2)
        up2 = self.up2(up3, down1)
        up1 = self.up1(up2, in64)

        return self.final(up1)
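With the default `feature_scale=4`, the encoder channel widths above shrink from the classic U-Net 64..1024 to 16..256, a quick sanity check of the list comprehension in `__init__`:

```python
feature_scale = 4
filters = [64, 128, 256, 512, 1024]
# integer division, exactly as in UNet.__init__
filters = [x // feature_scale for x in filters]
```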
class unetConv2(nn.Module):
    def __init__(self, in_size, out_size, norm_layer, need_bias, pad, kernel_size=3):
        super(unetConv2, self).__init__()
        if norm_layer is not None:
            self.conv1 = nn.Sequential(conv(in_size, out_size, kernel_size, bias=need_bias, pad=pad),
                                       norm(out_size, norm_layer),
                                       nn.ReLU(),)
            self.conv2 = nn.Sequential(conv(out_size, out_size, kernel_size, bias=need_bias, pad=pad),
                                       norm(out_size, norm_layer),
                                       nn.ReLU(),)
        else:
            self.conv1 = nn.Sequential(conv(in_size, out_size, kernel_size, bias=need_bias, pad=pad),
                                       nn.ReLU(),)
            self.conv2 = nn.Sequential(conv(out_size, out_size, kernel_size, bias=need_bias, pad=pad),
                                       nn.ReLU(),)

    def forward(self, inputs):
        outputs = self.conv1(inputs)
        outputs = self.conv2(outputs)
        return outputs
class unetUp(nn.Module):
def __init__(self, out_size, upsample_mode, need_bias, pad, same_num_filt=False, kernel_size=3):
super(unetUp, self).__init__()
num_filt = out_size if same_num_filt else out_size * 2
if upsample_mode == 'deconv':
self.up = nn.ConvTranspose2d(
num_filt, out_size, 4, stride=2, padding=1)
self.conv = unetConv2(out_size * 2, out_size, None, need_bias, pad)
elif upsample_mode == 'bilinear' or upsample_mode == 'nearest':
self.up = nn.Sequential(nn.Upsample(scale_factor=2, mode=upsample_mode),
conv(num_filt, out_size, kernel_size, bias=need_bias, pad=pad))
self.conv = unetConv2(out_size * 2, out_size, None, need_bias, pad)
else:
            raise ValueError(f"Unexpected upsample_mode: {upsample_mode}")
def forward(self, inputs1, inputs2):
in1_up = self.up(inputs1)
if (inputs2.size(2) != in1_up.size(2)) or (inputs2.size(3) != in1_up.size(3)):
diff2 = (inputs2.size(2) - in1_up.size(2)) // 2
diff3 = (inputs2.size(3) - in1_up.size(3)) // 2
inputs2_ = inputs2[:, :, diff2: diff2 +
in1_up.size(2), diff3: diff3 + in1_up.size(3)]
else:
inputs2_ = inputs2
output = self.conv(torch.cat([in1_up, inputs2_], 1))
return output
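# The spatial alignment in unetUp.forward reduces to plain offset arithmetic:
# the skip connection is center-cropped to the upsampled tensor's size before
# concatenation. As a sketch, the crop bounds can be written as a standalone
# function (center_crop_box is a hypothetical helper, not part of this file):

```python
def center_crop_box(skip_h, skip_w, up_h, up_w):
    """Return (top, bottom, left, right) slice bounds for the skip tensor."""
    diff_h = (skip_h - up_h) // 2
    diff_w = (skip_w - up_w) // 2
    return diff_h, diff_h + up_h, diff_w, diff_w + up_w
```

# E.g. a 10x10 skip map paired with an 8x8 upsampled map is sliced as
# [:, :, 1:9, 1:9], matching the inputs2_ slicing above.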
class LSTMUNet(nn.Module):
'''
upsample_mode in ['deconv', 'nearest', 'bilinear']
pad in ['zero', 'replication', 'none']
'''
def __init__(self, num_input_channels=3, num_output_channels=3,
feature_scale=4, more_layers=0, concat_x=False,
upsample_mode='deconv', pad='zero', norm_layer='in', last_act='sigmoid', need_bias=True, downsample_mode='max'):
super(LSTMUNet, self).__init__()
self.feature_scale = feature_scale
self.more_layers = more_layers
self.concat_x = concat_x
filters = [64, 128, 256, 512, 1024]
filters = [x // self.feature_scale for x in filters]
self.start = unetConv2(
num_input_channels, filters[0] if not concat_x else filters[0] - num_input_channels, norm_layer, need_bias, pad)
self.down1 = unetDown(filters[0], filters[1] if not concat_x else filters[1] -
num_input_channels, norm_layer, need_bias, pad, downsample_mode)
self.down2 = unetDown(filters[1], filters[2] if not concat_x else filters[2] -
num_input_channels, norm_layer, need_bias, pad, downsample_mode)
self.down3 = unetDown(filters[2], filters[3] if not concat_x else filters[3] -
num_input_channels, norm_layer, need_bias, pad, downsample_mode)
self.down4 = unetDown(filters[3], filters[4] if not concat_x else filters[4] -
num_input_channels, norm_layer, need_bias, pad, downsample_mode)
# more downsampling layers
if self.more_layers > 0:
self.more_downs = [
unetDown(filters[4], filters[4] if not concat_x else filters[4] - num_input_channels, norm_layer, need_bias, pad) for i in range(self.more_layers)]
self.more_ups = [unetUp(filters[4], upsample_mode, need_bias,
pad, same_num_filt=True) for i in range(self.more_layers)]
self.more_downs = ListModule(*self.more_downs)
self.more_ups = ListModule(*self.more_ups)
self.up4 = unetUp(filters[3], upsample_mode, need_bias, pad)
self.up3 = NewunetUp(filters[2], upsample_mode, need_bias, pad)
self.up2 = NewunetUp(filters[1], upsample_mode, need_bias, pad)
self.up1 = unetUp(filters[0], upsample_mode, need_bias, pad)
self.final = conv(filters[0], num_output_channels,
1, bias=need_bias, pad=pad)
if last_act == 'sigmoid':
self.final = nn.Sequential(self.final, nn.Sigmoid())
elif last_act == 'tanh':
self.final = nn.Sequential(self.final, nn.Tanh())
self.first_conv_lstm_cell = ConvLSTMCell(filters[1], filters[1], 5)
self.second_conv_lstm_cell = ConvLSTMCell(filters[2], filters[2], 5)
self.third_conv_lstm_cell = ConvLSTMCell(filters[3], filters[3], 5)
self.fourth_conv_lstm_cell = ConvLSTMCell(filters[2]*2, filters[2], 5)
self.fifth_conv_lstm_cell = ConvLSTMCell(filters[1]*2, filters[1], 5)
def forward(self, all_inputs):
seq_len = all_inputs.shape[1]
res = []
for seq_idx in range(seq_len):
inputs = all_inputs[:, seq_idx, :, :, :]
# Downsample
downs = [inputs]
down = nn.AvgPool2d(2, 2)
for i in range(4 + self.more_layers):
downs.append(down(downs[-1]))
in64 = self.start(inputs)
if self.concat_x:
in64 = torch.cat([in64, downs[0]], 1)
down1 = self.down1(in64)
if self.concat_x:
down1 = torch.cat([down1, downs[1]], 1)
if seq_idx == 0:
c_1 = self.first_conv_lstm_cell.init_hidden(
batch_size=inputs.shape[0], shape=(down1.shape[-2], down1.shape[-1]))
down1, c_1 = self.first_conv_lstm_cell(down1, c_1[0], c_1[1])
c_1 = [down1, c_1]
down2 = self.down2(down1)
if self.concat_x:
down2 = torch.cat([down2, downs[2]], 1)
if seq_idx == 0:
c_2 = self.second_conv_lstm_cell.init_hidden(
batch_size=inputs.shape[0], shape=(down2.shape[-2], down2.shape[-1]))
down2, c_2 = self.second_conv_lstm_cell(down2, c_2[0], c_2[1])
c_2 = [down2, c_2]
down3 = self.down3(down2)
if self.concat_x:
down3 = torch.cat([down3, downs[3]], 1)
if seq_idx == 0:
c_3 = self.third_conv_lstm_cell.init_hidden(
batch_size=inputs.shape[0], shape=(down3.shape[-2], down3.shape[-1]))
down3, c_3 = self.third_conv_lstm_cell(down3, c_3[0], c_3[1])
c_3 = [down3, c_3]
down4 = self.down4(down3)
if self.concat_x:
down4 = torch.cat([down4, downs[4]], 1)
if self.more_layers > 0:
prevs = [down4]
for kk, d in enumerate(self.more_downs):
out = d(prevs[-1])
if self.concat_x:
out = torch.cat([out, downs[kk + 5]], 1)
prevs.append(out)
up_ = self.more_ups[-1](prevs[-1], prevs[-2])
for idx in range(self.more_layers - 1):
                    l = self.more_ups[self.more_layers - idx - 2]
                    up_ = l(up_, prevs[self.more_layers - idx - 2])
else:
up_ = down4
up4 = self.up4(up_, down3)
up3 = self.up3(up4, down2)
if seq_idx == 0:
c_4 = self.fourth_conv_lstm_cell.init_hidden(
batch_size=inputs.shape[0], shape=(up3.shape[-2], up3.shape[-1]))
up3, c_4 = self.fourth_conv_lstm_cell(up3, c_4[0], c_4[1])
c_4 = [up3, c_4]
up2 = self.up2(up3, down1)
if seq_idx == 0:
c_5 = self.fifth_conv_lstm_cell.init_hidden(
batch_size=inputs.shape[0], shape=(up2.shape[-2], up2.shape[-1]))
up2, c_5 = self.fifth_conv_lstm_cell(up2, c_5[0], c_5[1])
c_5 = [up2, c_5]
up1 = self.up1(up2, in64)
res.append(self.final(up1)[:, None])
return torch.cat(res, 1)
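# LSTMUNet.forward threads ConvLSTM state through time the same way for all
# five cells: initialize (h, c) on the first timestep, then feed each step's
# output state back in. A scalar sketch of that threading, with no torch
# dependency (run_sequence and the toy step function are hypothetical):

```python
def run_sequence(xs, step, init_state):
    """Fold `step` over `xs`, carrying (h, c) state; return the h outputs."""
    state = init_state
    outs = []
    for x in xs:
        h, c = step(x, state)
        state = (h, c)  # previous output becomes next input state
        outs.append(h)
    return outs
```

# With a toy cell that accumulates its input into the carry, the outputs are
# the running sums of the sequence.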
class NewunetUp(nn.Module):
def __init__(self, out_size, upsample_mode, need_bias, pad, same_num_filt=False, kernel_size=3):
super(NewunetUp, self).__init__()
num_filt = out_size if same_num_filt else out_size * 2
if upsample_mode == 'deconv':
self.up = nn.ConvTranspose2d(
num_filt, out_size, 4, stride=2, padding=1)
self.conv = unetConv2(out_size * 2, out_size, None, need_bias, pad)
elif upsample_mode == 'bilinear' or upsample_mode == 'nearest':
self.up = nn.Sequential(nn.Upsample(scale_factor=2, mode=upsample_mode),
conv(num_filt, out_size, kernel_size, bias=need_bias, pad=pad))
# self.conv= unetConv2(out_size * 2, out_size, None, need_bias, pad)
else:
            raise ValueError(f"Unexpected upsample_mode: {upsample_mode}")
def forward(self, inputs1, inputs2):
in1_up = self.up(inputs1)
if (inputs2.size(2) != in1_up.size(2)) or (inputs2.size(3) != in1_up.size(3)):
diff2 = (inputs2.size(2) - in1_up.size(2)) // 2
diff3 = (inputs2.size(3) - in1_up.size(3)) // 2
inputs2_ = inputs2[:, :, diff2: diff2 +
in1_up.size(2), diff3: diff3 + in1_up.size(3)]
else:
inputs2_ = inputs2
# output= self.conv(torch.cat([in1_up, inputs2_], 1))
output = torch.cat([in1_up, inputs2_], 1)
return output
class DumpyModel(nn.Module):
'''
upsample_mode in ['deconv', 'nearest', 'bilinear']
pad in ['zero', 'replication', 'none']
'''
def __init__(self, num_input_channels=3, num_output_channels=3,
feature_scale=4, more_layers=0, concat_x=False,
upsample_mode='deconv', pad='zero', norm_layer='in', last_act='sigmoid', need_bias=True, downsample_mode='max'):
super(DumpyModel, self).__init__()
filters = [64, 128, 256, 512, 1024]
filters = [x // feature_scale for x in filters]
self.start = unetConv2(num_input_channels, 1,
norm_layer, need_bias, pad)
def forward(self, all_inputs):
# seq_len = all_inputs.shape[1]
# res = []
# for seq_idx in range(seq_len):
# res.append(self.start(all_inputs[:,seq_idx])[:,None])
# res = torch.cat(res, dim=1)
res = self.start(all_inputs)
# print("Model Inp shape:", all_inputs.shape)
# print("Model Out shape:", res.shape)
return res
# tests/dredd-hooks/relation_hook.py (marianoleonardo/auth, Apache-2.0)
import dredd_hooks as hooks
import controller.RelationshipController as rship
import crud_api_hook as crud  # supplies the create_sample_* helpers used below
import auth_hook as auth
from database.flaskAlchemyInit import db, HTTPRequestError
from dojot.module import Log
LOGGER = Log().color_log()
USER_GROUP = []
USER_PERMS = []
GROUP_PERMS = []
REQUESTER = {
"userid": 0,
"username": "dredd"
}
@hooks.before("Relationship management "
"> Manage relationships between users and groups "
"> Add user to group")
def create_sample_group_user(transaction):
global USER_GROUP
user_id, group_id = auth.create_sample_users(transaction)
transaction['fullPath'] = transaction['fullPath'].replace("/1/", f"/{user_id[0]}/")
transaction['fullPath'] = transaction['fullPath'].replace("/101", f"/{group_id[2]}")
USER_GROUP.append((user_id[0], group_id[2]))
@hooks.before("Relationship management "
"> Manage relationships between users and groups "
"> Remove a user from group")
def create_sample_associated_group_user(transaction):
global USER_GROUP, REQUESTER
user_id, group_id = auth.create_sample_users(transaction)
transaction['fullPath'] = transaction['fullPath'].replace("/1/", f"/{user_id[0]}/")
transaction['fullPath'] = transaction['fullPath'].replace("/101", f"/{group_id[2]}")
rship.add_user_group(db.session, user_id[0], group_id[2], REQUESTER)
USER_GROUP.append((user_id[0], group_id[1]))
@hooks.before("Relationship management "
"> Manage relationships between users and permissions "
"> Give a permission to a user")
def create_sample_user_perm(transaction):
global USER_PERMS
user_id, group_id = auth.create_sample_users(transaction)
perm_id = crud.create_sample_perms(transaction)
transaction['fullPath'] = transaction['fullPath'].replace("/1/", f"/{user_id[0]}/")
transaction['fullPath'] = transaction['fullPath'].replace("/201", f"/{perm_id}")
USER_PERMS.append((user_id[0], perm_id))
@hooks.before("Relationship management "
"> Manage relationships between users and permissions "
"> Revoke a user permission")
def create_sample_associated_user_perm(transaction):
global USER_PERMS, REQUESTER
user_id, group_id = auth.create_sample_users(transaction)
perm_id = crud.create_sample_perms(transaction)
transaction['fullPath'] = transaction['fullPath'].replace("/1/", f"/{user_id[0]}/")
transaction['fullPath'] = transaction['fullPath'].replace("/201", f"/{perm_id}")
rship.add_user_permission(db.session, user_id[0], perm_id, REQUESTER)
USER_PERMS.append((user_id[0], perm_id))
@hooks.before("Relationship management "
"> Manage relationships between group and permissions "
"> Give a permission to a group")
def create_sample_group_perm(transaction):
global GROUP_PERMS
perm_id = crud.create_sample_perms(transaction)
group_id = crud.create_sample_groups(transaction)
transaction['fullPath'] = transaction['fullPath'].replace("/101/", f"/{group_id[0]}/")
transaction['fullPath'] = transaction['fullPath'].replace("/201", f"/{perm_id}")
GROUP_PERMS.append((group_id[0], perm_id))
@hooks.before("Relationship management "
"> Manage relationships between group and permissions "
"> Revoke a group permission")
def create_sample_associated_group_perm(transaction):
global GROUP_PERMS
perm_id = crud.create_sample_perms(transaction)
group_id = crud.create_sample_groups(transaction)
transaction['fullPath'] = transaction['fullPath'].replace("/101/", f"/{group_id[0]}/")
transaction['fullPath'] = transaction['fullPath'].replace("/201", f"/{perm_id}")
rship.add_group_permission(db.session, group_id[0], perm_id, REQUESTER)
GROUP_PERMS.append((group_id[0], perm_id))
@hooks.after("Relationship management "
"> Manage relationships between users and groups "
"> Add user to group")
@hooks.after("Relationship management "
"> Manage relationships between users and groups "
"> Remove a user from group")
@hooks.after("Relationship management "
"> Manage relationships between users and permissions "
"> Give a permission to a user")
@hooks.after("Relationship management "
"> Manage relationships between users and permissions "
"> Revoke a user permission")
@hooks.after("Relationship management "
"> Manage relationships between group and permissions "
"> Give a permission to a group")
@hooks.after("Relationship management "
"> Manage relationships between group and permissions "
"> Revoke a group permission")
def clean_associations(transaction):
for user_id, group_id in USER_GROUP:
try:
rship.remove_user_group(db.session, user_id, group_id, REQUESTER)
        except HTTPRequestError:
pass
for group_id, perm_id in GROUP_PERMS:
try:
rship.remove_group_permission(db.session, group_id, perm_id, REQUESTER)
        except HTTPRequestError:
pass
for user_id, perm_id in USER_PERMS:
try:
rship.remove_user_permission(db.session, user_id, perm_id, REQUESTER)
        except HTTPRequestError:
pass
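# The three loops in clean_associations share one shape: attempt to remove
# every recorded pair and swallow per-pair failures so one stale association
# cannot abort the rest of the teardown. A minimal, framework-free sketch
# (remove_all_pairs is a hypothetical helper, not part of the hooks module):

```python
def remove_all_pairs(pairs, remover):
    """Call remover(a, b) for each pair, skipping pairs that raise."""
    removed = []
    for a, b in pairs:
        try:
            remover(a, b)
            removed.append((a, b))
        except Exception:
            pass  # already removed (or never created): keep going
    return removed
```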
# paz/backend/image/__init__.py (niqbal996/paz, MIT)
from .opencv_image import RGB2BGR
from .opencv_image import BGR2RGB
from .opencv_image import RGB2HSV
from .opencv_image import HSV2RGB
from .opencv_image import RGB2GRAY
from .opencv_image import cast_image
from .opencv_image import resize_image
from .opencv_image import convert_color_space
from .opencv_image import load_image
from .opencv_image import random_saturation
from .opencv_image import random_brightness
from .opencv_image import random_contrast
from .opencv_image import random_hue
from .opencv_image import random_flip_left_right
from .opencv_image import show_image
from .opencv_image import warp_affine
from .opencv_image import write_image
from .opencv_image import random_shape_crop
from .opencv_image import make_random_plain_image
from .opencv_image import blend_alpha_channel
from .opencv_image import concatenate_alpha_mask
from .opencv_image import split_and_normalize_alpha_channel
from .opencv_image import gaussian_image_blur
from .opencv_image import median_image_blur
from .opencv_image import random_image_blur
from .opencv_image import translate_image
from .opencv_image import sample_scaled_translation
from .opencv_image import get_rotation_matrix
from .draw import draw_random_polygon
from .draw import draw_circle
from .draw import put_text
from .draw import draw_line
from .draw import draw_rectangle
from .draw import draw_dot
from .draw import draw_cube
from .draw import draw_random_polygon
from .draw import draw_filled_polygon
from .draw import lincolor
from .draw import make_mosaic
| 36.380952 | 59 | 0.871073 | 236 | 1,528 | 5.300847 | 0.258475 | 0.223821 | 0.335731 | 0.470024 | 0.449241 | 0.254197 | 0.078337 | 0.078337 | 0.078337 | 0.078337 | 0 | 0.00365 | 0.103403 | 1,528 | 41 | 60 | 37.268293 | 0.909489 | 0 | 0 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# example.py (ZhukovAlexander/lambdify, Apache-2.0)
from lambdify import Lambda, UPDATE_EXPLICIT
@Lambda.f(name='echo')
def echo(*args, **kwargs):
return args, kwargs
| 17.285714 | 44 | 0.719008 | 17 | 121 | 5.058824 | 0.764706 | 0.232558 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14876 | 121 | 6 | 45 | 20.166667 | 0.834951 | 0 | 0 | 0 | 0 | 0 | 0.033058 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
# test/unit/test_command_line.py (buddly27/jound, MIT)
# :coding: utf-8
from jound.command_line import main
def test_with_defaults():
"""Command executes successfully with defaults."""
assert main([]) is None
| 18.333333 | 54 | 0.709091 | 22 | 165 | 5.181818 | 0.818182 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007353 | 0.175758 | 165 | 8 | 55 | 20.625 | 0.830882 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
# async_rx/subject/__init__.py (geronimo-iia/async-rx, MIT)
from .rx_subject import rx_subject
from .rx_subject_from import rx_subject_from
from .rx_subject_replay import rx_subject_replay
from .rx_subject_behavior import rx_subject_behavior
__all__ = ["rx_subject", "rx_subject_from", "rx_subject_replay", "rx_subject_behavior"]
| 34 | 87 | 0.845588 | 42 | 272 | 4.880952 | 0.166667 | 0.526829 | 0.317073 | 0.185366 | 0.214634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 272 | 7 | 88 | 38.857143 | 0.826613 | 0 | 0 | 0 | 0 | 0 | 0.224265 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# TestMotorMode.py (sohamranade/RoboticsStudio, MIT)
from lx16a import *
from math import sin, cos, pi
import time
import xlwt
import pandas as pd
import numpy as np
from xlwt import Workbook
class RecordMotorData():
def __init__ (self, servo):
self.servo = servo;
self.id = [];
self.angleOffset = [];
self.physicalPos = [];
self.virtualPos = [];
self.temp = [];
self.voltage = [];
def record(self):
self.id.append(int(self.servo.IDRead()))
self.angleOffset.append(int(self.servo.angleOffsetRead()))
self.physicalPos.append(int(self.servo.getPhysicalPos()))
self.virtualPos.append(int(self.servo.getVirtualPos()))
self.temp.append(int(self.servo.tempRead()))
self.voltage.append(int(self.servo.vInRead()))
def save2CSV(recordMotorDataList):
id = [];
angleOffset = [];
physicalPos = [];
virtualPos = [];
temp = [];
voltage = [];
for recordMotorData in recordMotorDataList:
id.extend(recordMotorData.id);
angleOffset.extend(recordMotorData.angleOffset);
physicalPos.extend(recordMotorData.physicalPos);
virtualPos.extend(recordMotorData.virtualPos);
temp.extend(recordMotorData.temp);
voltage.extend(recordMotorData.voltage);
df = pd.DataFrame(list(zip(id, angleOffset, physicalPos, virtualPos, temp, voltage)), columns = ["Id", "Angle offset", "Physical pos", "Virtual pos", "Temp", "Voltage"])
df.to_csv(r'MotorData.csv')
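# save2CSV's final step only needs pandas to zip six parallel lists into rows
# and write them out. A stdlib-only fallback, in case pandas is not installed
# on the robot, might look like this (save_rows_csv is a hypothetical helper,
# not part of the original file):

```python
import csv

def save_rows_csv(path, rows):
    """Write (id, offset, phys, virt, temp, voltage) tuples to a CSV file."""
    header = ["Id", "Angle offset", "Physical pos", "Virtual pos", "Temp", "Voltage"]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
```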
def initializeMotor(servo1):
targetPos = 120;
initialPos = servo1.getPhysicalPos();
error = targetPos-initialPos
print(servo1.IDRead())
print(initialPos)
print(error)
t=0;
while (abs(sin(t*2*pi/360)*error) < abs(error)):
# print("Motor id is ", servo1.IDRead())
# print("Physical pos is ", servo1.getPhysicalPos())
# print("Virtual pos is ", servo1.getVirtualPos())
servo1.moveTimeWrite(sin(t*2*pi/360)*error+initialPos)
time.sleep(.01)
t+=3
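# initializeMotor eases toward the target along a quarter sine wave: at
# t = 0 deg the command equals the initial position, and by t = 90 deg the
# full error has been applied. The per-step target reduces to a one-line
# function (eased_position is a hypothetical standalone helper):

```python
import math

def eased_position(t_deg, error, initial):
    """Sine-eased intermediate target, as used in the initializeMotor loop."""
    return math.sin(t_deg * 2 * math.pi / 360) * error + initial
```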
def resetAllMotors(servoL1, servoL2, servoA1S, servoA1E, servoA2S, servoA2E):
initializeMotor(servoL1);
initializeMotor(servoL2);
initializeMotor(servoA1S);
initializeMotor(servoA1E);
initializeMotor(servoA2S);
initializeMotor(servoA2E);
def danceLegs1(servo1, servo2,cycles, relativeAngleStep, r1, r2):
print("Starting dance 1")
direction = 1;
for i in range(0,cycles):
t=0;
initialPos1 = servo1.getPhysicalPos();
initialPos2 = servo2.getPhysicalPos();
while (t<180):
r1.record();
r2.record();
# print(sin(t*2*pi/360)*relativeAngleStep)
# print("Motor id is ", servo1.IDRead())
# print("Physical pos is ", servo1.getPhysicalPos())
# print("Virtual pos is ", servo1.getVirtualPos())
# print("Motor id is ", servo2.IDRead())
# print("Physical pos is ", servo2.getPhysicalPos())
# print("Virtual pos is ", servo2.getVirtualPos())
servo1.moveTimeWrite(direction*sin(t*3*pi/360)*relativeAngleStep+initialPos1);
servo2.moveTimeWrite(direction*sin(t*3*pi/360)*relativeAngleStep+initialPos2);
time.sleep(.01)
t+=5;
direction= -direction;
def turnL1A2(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E,
cycles, angleStepLeg, angleStepShoulder, angleStepElbow, r1, r2):
print("Starting dance legs with arms")
direction = 1;
for i in range(0,cycles):
t=0;
initialPosL1 = servoLeg1.getPhysicalPos();
initialPosL2 = servoLeg2.getPhysicalPos();
initialPosA1S = servoArm1S.getPhysicalPos();
initialPosA1E = servoArm1E.getPhysicalPos();
initialPosA2S = servoArm2S.getPhysicalPos();
initialPosA2E = servoArm2E.getPhysicalPos();
while (t<180):
r1.record();
r2.record();
# print(sin(t*2*pi/360)*angleStepLeg)
# print("Motor id is ", servoLeg1.IDRead())
# print("Physical pos is ", servoLeg1.getPhysicalPos())
# print("Virtual pos is ", servoLeg1.getVirtualPos())
# print("Motor id is ", servoLeg2.IDRead())
# print("Physical pos is ", servoLeg2.getPhysicalPos())
# print("Virtual pos is ", servoLeg2.getVirtualPos())
servoLeg1.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL1);
#servoLeg2.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL2);
#servoArm1S.moveTimeWrite(direction*sin(4*t*2*pi/360)*angleStepShoulder+initialPosA1S);
#servoArm1E.moveTimeWrite(direction*sin(4*t*2*pi/360)*angleStepElbow+initialPosA1E);
servoArm2S.moveTimeWrite(-direction*sin(2*t*2*pi/360)*angleStepShoulder+initialPosA2S);
servoArm2E.moveTimeWrite(-direction*sin(2*t*2*pi/360)*angleStepElbow+initialPosA2E);
time.sleep(.01)
t+=5;
direction= -direction;
    resetAllMotors(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E)
def turnL1A1(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E,
cycles, angleStepLeg, angleStepShoulder, angleStepElbow, r1, r2):
print("Starting dance legs with arms")
direction = 1;
for i in range(0,cycles):
t=0;
initialPosL1 = servoLeg1.getPhysicalPos();
initialPosL2 = servoLeg2.getPhysicalPos();
initialPosA1S = servoArm1S.getPhysicalPos();
initialPosA1E = servoArm1E.getPhysicalPos();
initialPosA2S = servoArm2S.getPhysicalPos();
initialPosA2E = servoArm2E.getPhysicalPos();
while (t<180):
r1.record();
r2.record();
servoLeg1.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL1);
#servoLeg2.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL2);
servoArm1S.moveTimeWrite(-direction*sin(2*t*2*pi/360)*angleStepShoulder+initialPosA1S);
servoArm1E.moveTimeWrite(-direction*sin(2*t*2*pi/360)*angleStepElbow+initialPosA1E);
# servoArm2S.moveTimeWrite(-direction*sin(4*t*2*pi/360)*angleStepShoulder+initialPosA2S);
# servoArm2E.moveTimeWrite(-direction*sin(4*t*2*pi/360)*angleStepElbow+initialPosA2E);
time.sleep(.01)
t+=5;
direction= -direction;
    resetAllMotors(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E)
def turnL2A1(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E,
cycles, angleStepLeg, angleStepShoulder, angleStepElbow, r1, r2):
print("Starting dance legs with arms")
direction = 1;
for i in range(0,cycles):
t=0;
initialPosL1 = servoLeg1.getPhysicalPos();
initialPosL2 = servoLeg2.getPhysicalPos();
initialPosA1S = servoArm1S.getPhysicalPos();
initialPosA1E = servoArm1E.getPhysicalPos();
initialPosA2S = servoArm2S.getPhysicalPos();
initialPosA2E = servoArm2E.getPhysicalPos();
while (t<180):
r1.record();
r2.record();
#servoLeg1.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL1);
servoLeg2.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL2);
servoArm1S.moveTimeWrite(-direction*sin(2*t*2*pi/360)*angleStepShoulder+initialPosA1S);
servoArm1E.moveTimeWrite(-direction*sin(2*t*2*pi/360)*angleStepElbow+initialPosA1E);
#servoArm2S.moveTimeWrite(direction*sin(4*t*2*pi/360)*angleStepShoulder+initialPosA2S);
#servoArm2E.moveTimeWrite(direction*sin(4*t*2*pi/360)*angleStepElbow+initialPosA2E);
time.sleep(.01)
t+=5;
direction= -direction;
    resetAllMotors(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E)
def turnL2A2(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E,
cycles, angleStepLeg, angleStepShoulder, angleStepElbow, r1, r2):
print("Starting dance legs with arms")
direction = 1;
for i in range(0,cycles):
t=0;
initialPosL1 = servoLeg1.getPhysicalPos();
initialPosL2 = servoLeg2.getPhysicalPos();
initialPosA1S = servoArm1S.getPhysicalPos();
initialPosA1E = servoArm1E.getPhysicalPos();
initialPosA2S = servoArm2S.getPhysicalPos();
initialPosA2E = servoArm2E.getPhysicalPos();
while (t<180):
r1.record();
r2.record();
#servoLeg1.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL1);
servoLeg2.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL2);
# servoArm1S.moveTimeWrite(-direction*sin(4*t*2*pi/360)*angleStepShoulder+initialPosA1S);
# servoArm1E.moveTimeWrite(-direction*sin(4*t*2*pi/360)*angleStepElbow+initialPosA1E);
servoArm2S.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepShoulder+initialPosA2S);
servoArm2E.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepElbow+initialPosA2E);
time.sleep(.01)
t+=5;
direction= -direction;
    resetAllMotors(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E)
def danceArmsAlternate(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E,
cycles, angleStepLeg, angleStepShoulder, angleStepElbow, r1, r2):
print("Starting dance legs with arms")
direction = 1;
for i in range(0,cycles):
t=0;
initialPosL1 = servoLeg1.getPhysicalPos();
initialPosL2 = servoLeg2.getPhysicalPos();
initialPosA1S = servoArm1S.getPhysicalPos();
initialPosA1E = servoArm1E.getPhysicalPos();
initialPosA2S = servoArm2S.getPhysicalPos();
initialPosA2E = servoArm2E.getPhysicalPos();
while (t<180):
r1.record();
r2.record();
# servoLeg1.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL1);
# servoLeg2.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL2);
servoArm1S.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepShoulder+initialPosA1S);
servoArm1E.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepElbow+initialPosA1E);
time.sleep(.01)
servoArm2S.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepShoulder+initialPosA2S);
servoArm2E.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepElbow+initialPosA2E);
time.sleep(.01)
t+=5;
direction= -direction;
    resetAllMotors(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E)
def danceArmsSynchronized(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E,
cycles, angleStepLeg, angleStepShoulder, angleStepElbow, r1, r2):
print("Starting dance legs with arms")
direction = 1;
for i in range(0,cycles):
t=0;
initialPosL1 = servoLeg1.getPhysicalPos();
initialPosL2 = servoLeg2.getPhysicalPos();
initialPosA1S = servoArm1S.getPhysicalPos();
initialPosA1E = servoArm1E.getPhysicalPos();
initialPosA2S = servoArm2S.getPhysicalPos();
initialPosA2E = servoArm2E.getPhysicalPos();
while (t<180):
r1.record();
r2.record();
# servoLeg1.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL1);
# servoLeg2.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL2);
servoArm1S.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepShoulder+initialPosA1S);
servoArm1E.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepElbow+initialPosA1E);
time.sleep(.01)
servoArm2S.moveTimeWrite(-direction*sin(2*t*2*pi/360)*angleStepShoulder+initialPosA2S);
servoArm2E.moveTimeWrite(-direction*sin(2*t*2*pi/360)*angleStepElbow+initialPosA2E);
time.sleep(.01)
t+=5;
direction= -direction;
    resetAllMotors(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E)
def danceLegsAndArms1(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E,
cycles, angleStepLeg, angleStepShoulder, angleStepElbow, r1, r2):
print("Starting dance legs with arms")
direction = 1;
for i in range(0,cycles):
t=0;
initialPosL1 = servoLeg1.getPhysicalPos();
initialPosL2 = servoLeg2.getPhysicalPos();
initialPosA1S = servoArm1S.getPhysicalPos();
initialPosA1E = servoArm1E.getPhysicalPos();
initialPosA2S = servoArm2S.getPhysicalPos();
initialPosA2E = servoArm2E.getPhysicalPos();
while (t<180):
r1.record();
r2.record();
# print(sin(t*2*pi/360)*angleStepLeg)
# print("Motor id is ", servoLeg1.IDRead())
# print("Physical pos is ", servoLeg1.getPhysicalPos())
# print("Virtual pos is ", servoLeg1.getVirtualPos())
# print("Motor id is ", servoLeg2.IDRead())
# print("Physical pos is ", servoLeg2.getPhysicalPos())
# print("Virtual pos is ", servoLeg2.getVirtualPos())
servoLeg1.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL1);
servoLeg2.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL2);
servoArm1S.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepShoulder+initialPosA1S);
servoArm1E.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepElbow+initialPosA1E);
servoArm2S.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepShoulder+initialPosA2S);
servoArm2E.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepElbow+initialPosA2E);
time.sleep(.01)
t+=5;
direction= -direction;
    resetAllMotors(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E)
def danceLegsAndArms2(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E,
cycles, angleStepLeg, angleStepShoulder, angleStepElbow, r1, r2):
print("Starting dance legs with arms")
direction = 1;
for i in range(0,cycles):
t=0;
initialPosL1 = servoLeg1.getPhysicalPos();
initialPosL2 = servoLeg2.getPhysicalPos();
initialPosA1S = servoArm1S.getPhysicalPos();
initialPosA1E = servoArm1E.getPhysicalPos();
initialPosA2S = servoArm2S.getPhysicalPos();
initialPosA2E = servoArm2E.getPhysicalPos();
while (t<180):
r1.record();
r2.record();
# print(sin(t*2*pi/360)*angleStepLeg)
# print("Motor id is ", servoLeg1.IDRead())
# print("Physical pos is ", servoLeg1.getPhysicalPos())
# print("Virtual pos is ", servoLeg1.getVirtualPos())
# print("Motor id is ", servoLeg2.IDRead())
# print("Physical pos is ", servoLeg2.getPhysicalPos())
# print("Virtual pos is ", servoLeg2.getVirtualPos())
servoLeg1.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL1);
servoLeg2.moveTimeWrite(direction*sin(t*2*pi/360)*angleStepLeg+initialPosL2);
servoArm1S.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepShoulder+initialPosA1S);
servoArm1E.moveTimeWrite(direction*sin(2*t*2*pi/360)*angleStepElbow+initialPosA1E);
servoArm2S.moveTimeWrite(-direction*sin(2*t*2*pi/360)*angleStepShoulder+initialPosA2S);
servoArm2E.moveTimeWrite(-direction*sin(2*t*2*pi/360)*angleStepElbow+initialPosA2E);
time.sleep(.01)
t+=5;
direction= -direction;
    resetAllMotors(servoLeg1, servoLeg2, servoArm1S, servoArm1E, servoArm2S, servoArm2E)
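Every routine above computes its servo targets from the same inline expression, `direction*sin(t*2*pi/360)*angleStep + initialPos`, where `t` sweeps 0..180 in degrees and the arms use a doubled frequency. A small helper could name that formula; this is an illustrative sketch (`sin_offset` is not part of the original script):

```python
from math import pi, sin

def sin_offset(t_deg, amplitude, center, direction=1, harmonic=1):
    """Servo target oscillating around `center`.

    `t_deg` is in degrees (the dance loops sweep 0..180); `harmonic=2`
    reproduces the arms' double-frequency term sin(2*t*2*pi/360).
    """
    return direction * sin(harmonic * t_deg * 2 * pi / 360) * amplitude + center
```

With such a helper, a call like `servoArm1S.moveTimeWrite(sin_offset(t, angleStepShoulder, initialPosA1S, direction, harmonic=2))` would replace the repeated inline arithmetic.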
# This is the port that the controller board is connected to
# This will be different for different computers
# On Windows, try the ports COM1, COM2, COM3, etc...
# On Raspbian, try each port in /dev/
try:
LX16A.initialize("COM9")
servoL1 = LX16A(7)
servoL2 = LX16A(8)
servoA1S = LX16A(2)
servoA1E = LX16A(1)
servoA2S = LX16A(4)
servoA2E = LX16A(3)
servoL1.motorMode(800)
servoL2.motorMode(-800)
time.sleep(3)
servoL1.servoMode()
servoL2.servoMode()
print("Resetting to home position")
resetAllMotors(servoL1, servoL2, servoA1S, servoA1E, servoA2S, servoA2E)
print("Finished resetting")
except KeyboardInterrupt:
quit()
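The port comment above suggests trying COM1, COM2, ... or each /dev/ entry by hand. A hedged sketch of automating that guess follows; `candidate_ports` is hypothetical, and the real port still has to be confirmed by a successful `LX16A.initialize()` call:

```python
import glob
import sys

def candidate_ports():
    """Guess serial port names to try with LX16A.initialize()."""
    if sys.platform.startswith("win"):
        return ["COM%d" % i for i in range(1, 17)]
    # On Raspbian/Linux a USB serial controller board usually enumerates
    # as /dev/ttyUSB* or /dev/ttyACM*
    return sorted(glob.glob("/dev/ttyUSB*") + glob.glob("/dev/ttyACM*"))
```

One could then loop over `candidate_ports()`, calling `LX16A.initialize(port)` inside a try/except until a probe such as `servoL1.IDRead()` succeeds.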
# File: src/quickmenus/scripts/quickmenus/qmenus/__init__.py (bohdon/maya-quickmenus, MIT)
from core import *
from menus import *
# File: Lib_dati.py (rsd-dev/Soccer_Odds, MIT)
] | null | null | null | csv_campionati = ["premier", "championship", "conference",
"serieA", "serieB", "ligaA", "ligaB",
"ligueA", "ligueB", "bundesA", "bundesB",
"eredivisie", "portugalA", "turkeyA"]
csv_urls=["https://www.football-data.co.uk/mmz4281/2021/E0.csv",
"https://www.football-data.co.uk/mmz4281/2021/E1.csv",
"https://www.football-data.co.uk/mmz4281/2021/EC.csv",
"https://www.football-data.co.uk/mmz4281/2021/I1.csv",
"https://www.football-data.co.uk/mmz4281/2021/I2.csv",
"https://www.football-data.co.uk/mmz4281/2021/SP1.csv",
"https://www.football-data.co.uk/mmz4281/2021/SP2.csv",
"https://www.football-data.co.uk/mmz4281/2021/F1.csv",
"https://www.football-data.co.uk/mmz4281/2021/F2.csv",
"https://www.football-data.co.uk/mmz4281/2021/D1.csv",
"https://www.football-data.co.uk/mmz4281/2021/D2.csv",
"https://www.football-data.co.uk/mmz4281/2021/N1.csv",
"https://www.football-data.co.uk/mmz4281/2021/P1.csv",
"https://www.football-data.co.uk/mmz4281/2021/T1.csv",]
csv_nsquadre = [20, 24, 24,
20, 20, 20, 22,
20, 20, 18, 18,
18, 18, 21]
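`csv_campionati`, `csv_urls` and `csv_nsquadre` are parallel lists kept in sync by position only. A sketch of merging them into a single lookup (`build_league_table` is illustrative, not part of the original module):

```python
def build_league_table(names, urls, n_teams):
    """Zip the three parallel Lib_dati lists into one dict keyed by league."""
    if not (len(names) == len(urls) == len(n_teams)):
        raise ValueError("parallel lists are out of sync")
    return {name: {"url": url, "n_squadre": n}
            for name, url, n in zip(names, urls, n_teams)}
```

`build_league_table(csv_campionati, csv_urls, csv_nsquadre)["premier"]` would then carry both the CSV URL and the team count, and a length mismatch fails loudly instead of silently misaligning leagues.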
# File: tests/unit/__init__.py (tcc-td-puc-minas-indtexbr/standard-manager-api, MIT)
from tests import register_paths

register_paths()
# File: schwa/repository/__init__.py (SBST-DPG/schwa, MIT)
from .file import *
from .repository import *
# File: pyner/named_entity/__init__.py (chantera/pyner, MIT)
from .dataset import *
from .evaluator import *
from .recognizer import *
# File: devodsconnector/__init__.py (DevoInc/python-ds-connector, MIT)
from .reader import Reader
from .writer import Writer
from .json_writer import JSONWriter
from .__version__ import __version__
# File: zsec_aws_tools_extensions/__init__.py (zuoralabs/zsec-aws-tools-extensions, BSD-2-Clause)
from .deployment import zip_string, PartialAWSResourceCollection, PartialResource, partial_resources
# File: tests/endpoints/test_project_endpoints.py (eshults5/imbi, BSD-3-Clause)
import json
import uuid
import jsonpatch
from imbi.endpoints.project import link, project
from tests import base
class AsyncHTTPTestCase(base.TestCaseWithReset):
ADMIN_ACCESS = True
TRUNCATE_TABLES = [
'v1.configuration_systems',
'v1.data_centers',
'v1.deployment_types',
'v1.environments',
'v1.orchestration_systems',
'v1.project_link_types',
'v1.project_types',
'v1.namespaces'
]
def setUp(self):
super().setUp()
self._configuration_system = self.create_configuration_system()
self._data_center = self.create_data_center()
self._deployment_type = self.create_deployment_type()
self._environments = self.create_environments()
self._namespace = self.create_namespace()
self._orchestration_system = self.create_orchestration_system()
self._project_link_type = self.create_project_link_type()
self._project_type = self.create_project_type()
def create_configuration_system(self):
record = {
'name': str(uuid.uuid4()),
'description': str(uuid.uuid4()),
'icon_class': 'fas fa-blind'
}
result = self.fetch('/admin/configuration_system', method='POST',
body=json.dumps(record).encode('utf-8'),
headers=self.headers)
self.assertEqual(result.code, 200)
return record['name']
def create_data_center(self):
record = {
'name': str(uuid.uuid4()),
'description': str(uuid.uuid4()),
'icon_class': 'fas fa-blind'
}
result = self.fetch('/admin/data_center', method='POST',
body=json.dumps(record).encode('utf-8'),
headers=self.headers)
self.assertEqual(result.code, 200)
return record['name']
def create_deployment_type(self):
record = {
'name': str(uuid.uuid4()),
'description': str(uuid.uuid4()),
'icon_class': 'fas fa-blind'
}
result = self.fetch('/admin/deployment_type', method='POST',
body=json.dumps(record).encode('utf-8'),
headers=self.headers)
self.assertEqual(result.code, 200)
return record['name']
def create_environments(self):
environments = []
for iteration in range(0, 2):
record = {
'name': str(uuid.uuid4()),
'description': str(uuid.uuid4()),
'icon_class': 'fas fa-blind'
}
result = self.fetch('/admin/environment', method='POST',
body=json.dumps(record).encode('utf-8'),
headers=self.headers)
self.assertEqual(result.code, 200)
environments.append(record['name'])
return environments
def create_orchestration_system(self):
record = {
'name': str(uuid.uuid4()),
'description': str(uuid.uuid4()),
'icon_class': 'fas fa-blind'
}
result = self.fetch('/admin/orchestration_system', method='POST',
body=json.dumps(record).encode('utf-8'),
headers=self.headers)
self.assertEqual(result.code, 200)
return record['name']
def create_project_link_type(self):
record = {
'link_type': str(uuid.uuid4()),
'icon_class': 'fas fa-blind'
}
result = self.fetch('/admin/project_link_type', method='POST',
body=json.dumps(record).encode('utf-8'),
headers=self.headers)
self.assertEqual(result.code, 200)
return record['link_type']
def create_project_type(self):
record = {
'name': str(uuid.uuid4()),
'slug': str(uuid.uuid4()),
'description': str(uuid.uuid4()),
'icon_class': 'fas fa-blind'
}
result = self.fetch('/admin/project_type', method='POST',
body=json.dumps(record).encode('utf-8'),
headers=self.headers)
self.assertEqual(result.code, 200)
return record['name']
def create_namespace(self):
record = {
'name': str(uuid.uuid4()),
'slug': str(uuid.uuid4().hex),
'icon_class': 'fas fa-blind',
'maintained_by': []
}
result = self.fetch('/admin/namespace', method='POST',
body=json.dumps(record).encode('utf-8'),
headers=self.headers)
self.assertEqual(result.code, 200)
return record['name']
def test_project_lifecycle(self):
record = {
'namespace': self._namespace,
'name': str(uuid.uuid4()),
'slug': str(uuid.uuid4().hex),
'description': str(uuid.uuid4()),
'data_center': self._data_center,
'project_type': self._project_type,
'configuration_system': self._configuration_system,
'deployment_type': self._deployment_type,
'orchestration_system': self._orchestration_system,
'environments': self._environments
}
url = self.get_url('/projects/{}/{}'.format(
self._namespace, record['name']))
# Create
result = self.fetch('/projects', method='POST',
body=json.dumps(record).encode('utf-8'),
headers=self.headers)
self.assertEqual(result.code, 200)
self.assertIsNotNone(result.headers['Date'])
self.assertIsNone(result.headers.get('Last-Modified', None))
self.assert_link_header_equals(result, url)
self.assertEqual(
result.headers['Cache-Control'], 'public, max-age={}'.format(
project.RecordRequestHandler.TTL))
new_value = json.loads(result.body.decode('utf-8'))
self.assertEqual(
new_value['created_by'], self.USERNAME[self.ADMIN_ACCESS])
for field in ['created_by', 'last_modified_by']:
del new_value[field]
self.assertDictEqual(new_value, record)
# PATCH
updated = dict(record)
updated['description'] = str(uuid.uuid4())
patch = jsonpatch.make_patch(record, updated)
patch_value = patch.to_string().encode('utf-8')
result = self.fetch(
url, method='PATCH', body=patch_value, headers=self.headers)
self.assertEqual(result.code, 200)
new_value = json.loads(result.body.decode('utf-8'))
for field in ['created_by', 'last_modified_by']:
self.assertEqual(
new_value[field], self.USERNAME[self.ADMIN_ACCESS])
del new_value[field]
self.assertDictEqual(new_value, updated)
# Patch no change
result = self.fetch(
url, method='PATCH', body=patch_value, headers=self.headers)
self.assertEqual(result.code, 304)
# GET
result = self.fetch(url, headers=self.headers)
self.assertEqual(result.code, 200)
self.assertIsNotNone(result.headers['Date'])
self.assertIsNotNone(result.headers['Last-Modified'])
self.assert_link_header_equals(result, url)
self.assertEqual(
result.headers['Cache-Control'], 'public, max-age={}'.format(
project.RecordRequestHandler.TTL))
new_value = json.loads(result.body.decode('utf-8'))
for field in ['created_by', 'last_modified_by']:
self.assertEqual(
new_value[field], self.USERNAME[self.ADMIN_ACCESS])
del new_value[field]
self.assertDictEqual(new_value, updated)
# DELETE
result = self.fetch(url, method='DELETE', headers=self.headers)
self.assertEqual(result.code, 204)
# GET record should not exist
result = self.fetch(url, headers=self.headers)
self.assertEqual(result.code, 404)
# DELETE should fail as record should not exist
result = self.fetch(url, method='DELETE', headers=self.headers)
self.assertEqual(result.code, 404)
def test_create_with_missing_fields(self):
record = {
'namespace': self._namespace,
'name': str(uuid.uuid4()),
'slug': str(uuid.uuid4().hex),
'data_center': self._data_center,
'project_type': self._project_type,
'configuration_system': self._configuration_system,
'deployment_type': self._deployment_type,
'orchestration_system': self._orchestration_system,
'environments': self._environments
}
url = self.get_url('/projects/{}/{}'.format(
self._namespace, record['name']))
# Create
result = self.fetch('/projects', method='POST',
body=json.dumps(record).encode('utf-8'),
headers=self.headers)
self.assertEqual(result.code, 200)
self.assertIsNone(result.headers.get('Last-Modified', None))
self.assert_link_header_equals(result, url)
self.assertEqual(
result.headers['Cache-Control'], 'public, max-age={}'.format(
project.RecordRequestHandler.TTL))
new_value = json.loads(result.body.decode('utf-8'))
self.assertEqual(
new_value['created_by'], self.USERNAME[self.ADMIN_ACCESS])
for field in ['created_by', 'last_modified_by']:
del new_value[field]
record['description'] = None
self.assertDictEqual(new_value, record)
def test_dependencies(self):
project_a = {
'namespace': self._namespace,
'name': str(uuid.uuid4()),
'slug': str(uuid.uuid4().hex),
'data_center': self._data_center,
'project_type': self._project_type,
'configuration_system': self._configuration_system,
'deployment_type': self._deployment_type,
'orchestration_system': self._orchestration_system,
'environments': self._environments
}
result = self.fetch(
'/projects', method='POST', headers=self.headers,
body=json.dumps(project_a).encode('utf-8'))
self.assertEqual(result.code, 200)
project_b = {
'namespace': self._namespace,
'name': str(uuid.uuid4()),
'slug': str(uuid.uuid4().hex),
'data_center': self._data_center,
'project_type': self._project_type,
'configuration_system': self._configuration_system,
'deployment_type': self._deployment_type,
'orchestration_system': self._orchestration_system,
'environments': self._environments
}
result = self.fetch(
'/projects', method='POST', headers=self.headers,
body=json.dumps(project_b).encode('utf-8'))
self.assertEqual(result.code, 200)
# Create the dependency
result = self.fetch(
'/projects/{}/{}/dependencies'.format(
self._namespace, project_b['name']),
method='POST', headers=self.headers,
body=json.dumps({
'dependency_namespace': self._namespace,
'dependency_name': project_a['name']
}).encode('utf-8'))
self.assertEqual(result.code, 200)
result = self.fetch(
'/projects/{}/{}/dependencies'.format(
self._namespace, project_b['name']),
method='GET', headers=self.headers)
self.assertEqual(result.code, 200)
self.assertListEqual(
json.loads(result.body.decode('utf-8')),
[{
'dependency_namespace': self._namespace,
'dependency_name': project_a['name']
}])
result = self.fetch(
'/projects/{}/{}/dependencies/{}/{}'.format(
self._namespace, project_b['name'],
self._namespace, project_a['name']),
method='GET', headers=self.headers)
self.assertEqual(result.code, 200)
self.assertDictEqual(
json.loads(result.body.decode('utf-8')),
{
'created_by': self.USERNAME[self.ADMIN_ACCESS],
'namespace': self._namespace,
'name': project_b['name'],
'dependency_namespace': self._namespace,
'dependency_name': project_a['name']
})
result = self.fetch(
'/projects/{}/{}/dependencies/{}/{}'.format(
self._namespace, project_b['name'],
self._namespace, project_a['name']),
method='DELETE', headers=self.headers)
self.assertEqual(result.code, 204)
result = self.fetch(
'/projects/{}/{}/dependencies/{}/{}'.format(
self._namespace, project_b['name'],
self._namespace, project_a['name']),
method='GET', headers=self.headers)
self.assertEqual(result.code, 404)
def test_links(self):
project_record = {
'namespace': self._namespace,
'name': str(uuid.uuid4()),
'slug': str(uuid.uuid4().hex),
'data_center': self._data_center,
'project_type': self._project_type,
'configuration_system': self._configuration_system,
'deployment_type': self._deployment_type,
'orchestration_system': self._orchestration_system,
'environments': self._environments
}
result = self.fetch('/projects', method='POST',
body=json.dumps(project_record).encode('utf-8'),
headers=self.headers)
self.assertEqual(result.code, 200)
record = {
'namespace': self._namespace,
'name': project_record['name'],
'link_type': self._project_link_type,
'url': 'https://github.com/AWeber/Imbi'
}
url = self.get_url('/projects/{}/{}/links/{}'.format(
self._namespace, project_record['name'], self._project_link_type))
# Create
result = self.fetch(
'/projects/{}/{}/links'.format(
self._namespace, project_record['name']), headers=self.headers,
method='POST', body=json.dumps(record).encode('utf-8'))
self.assertEqual(result.code, 200)
link_record = json.loads(result.body.decode('utf-8'))
self.assert_link_header_equals(result, url)
self.assertEqual(
result.headers['Cache-Control'], 'public, max-age={}'.format(
link.RecordRequestHandler.TTL))
self.assertEqual(
link_record['created_by'], self.USERNAME[self.ADMIN_ACCESS])
self.assertEqual(link_record['url'], record['url'])
# Get links
result = self.fetch('/projects/{}/{}/links'.format(
self._namespace, project_record['name']), headers=self.headers)
self.assertEqual(result.code, 200)
self.assert_link_header_equals(
result, self.get_url('/projects/{}/{}/links'.format(
self._namespace, project_record['name'])))
records = []
for row in json.loads(result.body.decode('utf-8')):
for field in {'created_at', 'last_modified_at'}:
del row[field]
records.append(row)
self.assertListEqual(records, [link_record])
# PATCH
updated = dict(record)
updated['url'] = 'https://gitlab.com/AWeber/Imbi'
patch = jsonpatch.make_patch(record, updated)
patch_value = patch.to_string().encode('utf-8')
result = self.fetch(
url, method='PATCH', body=patch_value, headers=self.headers)
self.assertEqual(result.code, 200)
self.assert_link_header_equals(result, url)
record = json.loads(result.body.decode('utf-8'))
for field in {'created_by', 'last_modified_by'}:
del record[field]
self.assertDictEqual(record, updated)
# Patch no change
result = self.fetch(
url, method='PATCH', body=patch_value, headers=self.headers)
self.assertEqual(result.code, 304)
self.assert_link_header_equals(result, url)
# Get
result = self.fetch(url, headers=self.headers)
self.assertEqual(result.code, 200)
record = json.loads(result.body.decode('utf-8'))
for field in {'created_by', 'last_modified_by'}:
del record[field]
self.assertDictEqual(record, updated)
# Delete
result = self.fetch(url, method='DELETE', headers=self.headers)
self.assertEqual(result.code, 204)
# Get 404
result = self.fetch(url, headers=self.headers)
self.assertEqual(result.code, 404)
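The PATCH tests above round-trip documents through `jsonpatch.make_patch`, which emits an RFC 6902 operation list. A toy applier (top-level paths only, stdlib only; the real tests use the `jsonpatch` package) shows the shape of those operations:

```python
def apply_patch(doc, ops):
    """Apply a flat, top-level-only RFC 6902 operation list to a dict."""
    doc = dict(doc)  # do not mutate the caller's document
    for op in ops:
        key = op["path"].lstrip("/")
        if op["op"] in ("add", "replace"):
            doc[key] = op["value"]
        elif op["op"] == "remove":
            del doc[key]
    return doc

record = {"name": "imbi", "description": "old"}
ops = [{"op": "replace", "path": "/description", "value": "new"}]
```

This mirrors what the tests do over HTTP: compute the operation list from the old and new record, send it with `method='PATCH'`, and expect 304 when re-applying produces no change.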
# File: src/skallel_tensor/__init__.py (alimanfoo/skallel-tensor, MIT)
# flake8: noqa
from .version import version as __version__
# Import the public API.
from .api import (
genotypes_locate_hom,
genotypes_locate_het,
genotypes_locate_call,
genotypes_count_alleles,
genotypes_to_called_allele_counts,
genotypes_to_missing_allele_counts,
genotypes_to_allele_counts,
genotypes_to_allele_counts_melt,
genotypes_to_major_allele_counts,
genotypes_to_haplotypes,
allele_counts_to_frequencies,
allele_counts_allelism,
allele_counts_max_allele,
variants_to_dataframe,
select_slice,
select_indices,
select_mask,
select_range,
select_values,
concatenate,
)
# Import these modules to ensure that their implementation functions are
# registered with the API for dispatching.
from . import numpy_backend
from . import dask_backend
from . import cuda_backend
__all__ = [
'genotypes_locate_hom',
'genotypes_locate_het',
'genotypes_locate_call',
'genotypes_count_alleles',
'genotypes_to_called_allele_counts',
'genotypes_to_missing_allele_counts',
'genotypes_to_allele_counts',
'genotypes_to_allele_counts_melt',
'genotypes_to_major_allele_counts',
'genotypes_to_haplotypes',
'allele_counts_to_frequencies',
'allele_counts_allelism',
'allele_counts_max_allele',
'variants_to_dataframe',
'select_slice',
'select_indices',
'select_mask',
'select_range',
'select_values',
'concatenate',
]
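The comment above ("Import these modules to ensure that their implementation functions are registered with the API for dispatching") describes a registration/dispatch pattern. A minimal sketch of that idea with hypothetical names (not the actual skallel-tensor machinery, which dispatches per array type across numpy, dask and cuda backends):

```python
_registry = {}

def register(api_name, array_type):
    """Decorator: record `impl` as the handler for (api_name, array_type)."""
    def wrap(impl):
        _registry[(api_name, array_type)] = impl
        return impl
    return wrap

def dispatch(api_name, arr, *args, **kwargs):
    """Look up an implementation by API name and concrete array type."""
    return _registry[(api_name, type(arr))](arr, *args, **kwargs)

@register("allele_counts_allelism", list)
def _allelism_list(arr):
    # toy backend: number of distinct alleles in a flat list
    return len(set(arr))
```

Merely importing a backend module runs its `@register` decorators, which is why the `numpy_backend`/`dask_backend`/`cuda_backend` imports above have a side effect worth the comment.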
# File: smsframework_yunpian/__init__.py (vihtinsky/py-smsframework-yunpian, BSD-2-Clause)
from .provider import YunpianProvider
# File: plugins/holland.backup.mysql_lvm/holland/backup/mysql_lvm/plugin/__init__.py (jkoelker/holland, BSD-3-Clause)
from raw import MysqlLVMBackup
# File: wisdem/pymap/__init__.py (ptrbortolotti/WISDEM, Apache-2.0)
from .pymap import pyMAP
# File: ltrc/ltrc/classes/manage_choices.py (iscenigmax/ltrc-registry, Unlicense)
#!/usr/bin/python
# -*- encoding: utf-8 -*-
import datetime
def year_choices():
return [(r, r) for r in range(2015, datetime.date.today().year+1)]
| 17.111111 | 70 | 0.636364 | 24 | 154 | 4.041667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046512 | 0.162338 | 154 | 8 | 71 | 19.25 | 0.705426 | 0.25974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
649eeacad78faa5ef2ee061aef5acf00d095b8ae | 7,353 | py | Python | Train_Eval/Evaluator.py | lucasliu0928/KGDAL | a7e33515d6383763e10508db498adec36a97c97d | [
"MIT"
] | 5 | 2021-06-15T16:51:06.000Z | 2022-03-15T12:36:54.000Z | Train_Eval/Evaluator.py | lucasliu0928/KGDAL | a7e33515d6383763e10508db498adec36a97c97d | [
"MIT"
] | null | null | null | Train_Eval/Evaluator.py | lucasliu0928/KGDAL | a7e33515d6383763e10508db498adec36a97c97d | [
"MIT"
] | 1 | 2022-03-15T12:36:53.000Z | 2022-03-15T12:36:53.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Mar 10 21:37:06 2021
@author: lucasliu
"""
import tensorflow as tf
import pandas as pd
import sys
import os
CURR_DIR = os.path.dirname(os.path.abspath("./")) #Set system path to KGDAL
sys.path.append(CURR_DIR)
from Model.LSTM_ATTonTime import MyLSTM,MyLSTM_4grps
from Model.LSTM_ATTonTimeAndFeature_WithThresFeatures import AttnOnFeatures_ScaleAtt_4grps_withThresFeature
from Model.LSTM_Vanila import VanillaLSTM
from Ultility import Evaluation_funcs
#tf.keras.backend.set_floatx('float32')
def external_eval_1grp(ckpt_idx_to_restore,X_Validation,y_Validation,timesteps,n_feature,latent_dim,outdir):
#restore model
reconstructed_model = VanillaLSTM(timesteps,n_feature,latent_dim)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), mod = reconstructed_model)
#manager = tf.train.CheckpointManager(ckpt, "./tf_ckpts", max_to_keep=3)
#ckpt.restore(manager.latest_checkpoint).expect_partial()
ckpt.restore(outdir + '/tf_ckpts/ckpt-' + str(ckpt_idx_to_restore)).expect_partial() # expect_partial restores only the vars used for validation, suppressing warnings
pred_prob = reconstructed_model.predict(X_Validation, verbose=0)
pred_classes = Evaluation_funcs.compute_performance2(y_Validation,pred_prob,False,0.5) #return performance at roc cutoff point
accuracy,precision1,recall1,f11,precision0,recall0,f10 = Evaluation_funcs.compute_performance1(pred_classes,y_Validation) # return performance at threshold 0.5
roc_auc = Evaluation_funcs.roc(y_Validation,pred_prob,False)
pr_auc = Evaluation_funcs.PR_AUC(y_Validation, pred_prob)
F1_Class0,F3_Class1 = Evaluation_funcs.F_beta(y_Validation,pred_classes)
perf_tb = pd.DataFrame([[accuracy,roc_auc,pr_auc,precision1,recall1,f11,precision0,recall0,f10,F1_Class0,F3_Class1]],columns=['ACC','ROC_AUC',"PR_AUC",'PREC1','RECALL1','F1_1','PREC0','RECALL0','F1_0','F1_Class0','F3_Class1'])
perf_tb.to_csv(outdir + '/perf0314.csv')
#Two grps
def external_eval(ckpt_idx_to_restore,X_Validation_A,X_Validation_B,y_Validation,timesteps,n_featureA,n_featureB,outdir):
#restore model
reconstructed_model = MyLSTM(timesteps,n_featureA,n_featureB,8)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), mod = reconstructed_model)
#manager = tf.train.CheckpointManager(ckpt, "./tf_ckpts", max_to_keep=3)
#ckpt.restore(manager.latest_checkpoint).expect_partial()
ckpt.restore(outdir + '/tf_ckpts/ckpt-' + str(ckpt_idx_to_restore)).expect_partial() # expect_partial restores only the vars used for validation, suppressing warnings
pred_prob = reconstructed_model(X_Validation_A,X_Validation_B, training=False)
loss_object = tf.keras.losses.BinaryCrossentropy()
t_loss2 = loss_object(y_Validation, pred_prob)
pred_classes = Evaluation_funcs.compute_performance2(y_Validation,pred_prob,False,0.5) #return performance at roc cutoff point
accuracy,precision1,recall1,f11,precision0,recall0,f10 = Evaluation_funcs.compute_performance1(pred_classes,y_Validation) # return performance at threshold 0.5
roc_auc = Evaluation_funcs.roc(y_Validation,pred_prob,False)
pr_auc = Evaluation_funcs.PR_AUC(y_Validation, pred_prob)
F1_Class0,F3_Class1 = Evaluation_funcs.F_beta(y_Validation,pred_classes)
perf_tb = pd.DataFrame([[accuracy,roc_auc,pr_auc,precision1,recall1,f11,precision0,recall0,f10,F1_Class0,F3_Class1]],columns=['ACC','ROC_AUC',"PR_AUC",'PREC1','RECALL1','F1_1','PREC0','RECALL0','F1_0','F1_Class0','F3_Class1'])
perf_tb.to_csv(outdir + '/perf0311.csv')
return t_loss2
def external_eval_4grps(ckpt_idx_to_restore,X_Validation_A,X_Validation_B,X_Validation_C,X_Validation_D,y_Validation,timesteps,n_featureA,n_featureB,n_featureC,n_featureD,outdir):
#restore model
reconstructed_model = MyLSTM_4grps(timesteps,n_featureA,n_featureB,n_featureC,n_featureD,8)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), mod = reconstructed_model)
#manager = tf.train.CheckpointManager(ckpt, "./tf_ckpts", max_to_keep=3)
#ckpt.restore(manager.latest_checkpoint).expect_partial()
ckpt.restore(outdir + '/tf_ckpts/ckpt-' + str(ckpt_idx_to_restore)).expect_partial() # expect_partial restores only the vars used for validation, suppressing warnings
pred_prob = reconstructed_model(X_Validation_A,X_Validation_B,X_Validation_C,X_Validation_D, training=False)
loss_object = tf.keras.losses.BinaryCrossentropy()
t_loss2 = loss_object(y_Validation, pred_prob)
pred_classes = Evaluation_funcs.compute_performance2(y_Validation,pred_prob,False,0.5) #return performance at roc cutoff point
accuracy,precision1,recall1,f11,precision0,recall0,f10 = Evaluation_funcs.compute_performance1(pred_classes,y_Validation) # return performance at threshold 0.5
roc_auc = Evaluation_funcs.roc(y_Validation,pred_prob,False)
pr_auc = Evaluation_funcs.PR_AUC(y_Validation, pred_prob)
F1_Class0,F3_Class1 = Evaluation_funcs.F_beta(y_Validation,pred_classes)
perf_tb = pd.DataFrame([[accuracy,roc_auc,pr_auc,precision1,recall1,f11,precision0,recall0,f10,F1_Class0,F3_Class1]],columns=['ACC','ROC_AUC',"PR_AUC",'PREC1','RECALL1','F1_1','PREC0','RECALL0','F1_0','F1_Class0','F3_Class1'])
perf_tb.to_csv(outdir + '/perf0311.csv')
return t_loss2
def external_eval_4grps_withThFeatures(ckpt_idx_to_restore,X_Validation_A,X_Validation_B,X_Validation_C,X_Validation_D,X_Validation_A_th,X_Validation_B_th,X_Validation_C_th,X_Validation_D_th,y_Validation,timesteps,n_featureA,n_featureB,n_featureC,n_featureD,n_features_A_th,n_features_B_th,n_features_C_th,n_features_D_th,outdir):
#restore model
reconstructed_model = AttnOnFeatures_ScaleAtt_4grps_withThresFeature(timesteps,n_featureA,n_featureB,n_featureC,n_featureD,n_features_A_th,n_features_B_th,n_features_C_th,n_features_D_th,8)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), mod = reconstructed_model)
#manager = tf.train.CheckpointManager(ckpt, "./tf_ckpts", max_to_keep=3)
#ckpt.restore(manager.latest_checkpoint).expect_partial()
ckpt.restore(outdir + '/tf_ckpts/ckpt-' + str(ckpt_idx_to_restore)).expect_partial() # expect_partial restores only the vars used for validation, suppressing warnings
pred_prob = reconstructed_model(X_Validation_A,X_Validation_B,X_Validation_C,X_Validation_D,X_Validation_A_th,X_Validation_B_th,X_Validation_C_th,X_Validation_D_th, training=False)
loss_object = tf.keras.losses.BinaryCrossentropy()
t_loss2 = loss_object(y_Validation, pred_prob)
pred_classes = Evaluation_funcs.compute_performance2(y_Validation,pred_prob,False,0.5) #return performance at roc cutoff point
accuracy,precision1,recall1,f11,precision0,recall0,f10 = Evaluation_funcs.compute_performance1(pred_classes,y_Validation) # return performance at threshold 0.5
roc_auc = Evaluation_funcs.roc(y_Validation,pred_prob,False)
pr_auc = Evaluation_funcs.PR_AUC(y_Validation, pred_prob)
F1_Class0,F3_Class1 = Evaluation_funcs.F_beta(y_Validation,pred_classes)
perf_tb = pd.DataFrame([[accuracy,roc_auc,pr_auc,precision1,recall1,f11,precision0,recall0,f10,F1_Class0,F3_Class1]],columns=['ACC','ROC_AUC',"PR_AUC",'PREC1','RECALL1','F1_1','PREC0','RECALL0','F1_0','F1_Class0','F3_Class1'])
perf_tb.to_csv(outdir + '/perf0315.csv')
return t_loss2
| 63.93913 | 330 | 0.793418 | 1,096 | 7,353 | 4.976277 | 0.145985 | 0.060506 | 0.052255 | 0.052255 | 0.886322 | 0.858636 | 0.838284 | 0.831316 | 0.831316 | 0.823066 | 0 | 0.03388 | 0.096831 | 7,353 | 114 | 331 | 64.5 | 0.787381 | 0.180063 | 0 | 0.641791 | 0 | 0 | 0.063074 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059701 | false | 0 | 0.119403 | 0 | 0.223881 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
64d1f7bde6417d1c3348d8f1e650930fc64a7b6c | 89 | py | Python | src/schedulers.py | pomelyu/DNNProject | 08fa94788a18c28f867c4ea46e9e18b8f65ee334 | [
"MIT"
] | null | null | null | src/schedulers.py | pomelyu/DNNProject | 08fa94788a18c28f867c4ea46e9e18b8f65ee334 | [
"MIT"
] | null | null | null | src/schedulers.py | pomelyu/DNNProject | 08fa94788a18c28f867c4ea46e9e18b8f65ee334 | [
"MIT"
] | null | null | null | from .utils.mlconfig_torch import register_torch_schedulers
register_torch_schedulers()
| 22.25 | 59 | 0.88764 | 11 | 89 | 6.727273 | 0.636364 | 0.351351 | 0.621622 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067416 | 89 | 3 | 60 | 29.666667 | 0.891566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b3b75bbd90993bda8366c31f96dd021d424cee24 | 159 | py | Python | 29.operacoes_com_lista/11.reverse.py | robinson-1985/python-zero-dnc | df510d67e453611fcd320df1397cdb9ca47fecb8 | [
"MIT"
] | null | null | null | 29.operacoes_com_lista/11.reverse.py | robinson-1985/python-zero-dnc | df510d67e453611fcd320df1397cdb9ca47fecb8 | [
"MIT"
] | null | null | null | 29.operacoes_com_lista/11.reverse.py | robinson-1985/python-zero-dnc | df510d67e453611fcd320df1397cdb9ca47fecb8 | [
"MIT"
] | null | null | null | # Reverse() -> Inverte a ordem da lista.
lista_4 = [10,9,8,7,5,6,4,2,3,1,2,3]
print(lista_4)
lista_4.reverse()
print(lista_4)
lista_4.reverse()
print(lista_4) | 19.875 | 40 | 0.698113 | 35 | 159 | 3 | 0.485714 | 0.342857 | 0.314286 | 0.304762 | 0.561905 | 0.561905 | 0.561905 | 0.561905 | 0.561905 | 0 | 0 | 0.132867 | 0.100629 | 159 | 8 | 41 | 19.875 | 0.601399 | 0.238994 | 0 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
b3d7ed335364b1e88ed408644ba93e7ccb559b0e | 32 | py | Python | web/routes/views/__init__.py | coinForRich/coin-for-rich | ed7d3b0101ede3340d919d0d28c52ba6e797943e | [
"MIT"
] | 55 | 2021-09-15T04:34:13.000Z | 2022-03-20T18:11:01.000Z | web/routes/views/__init__.py | coinForRich/coin-for-rich | ed7d3b0101ede3340d919d0d28c52ba6e797943e | [
"MIT"
] | 20 | 2021-08-25T14:52:33.000Z | 2022-03-05T23:46:43.000Z | web/routes/views/__init__.py | coinForRich/coin-for-rich | ed7d3b0101ede3340d919d0d28c52ba6e797943e | [
"MIT"
] | 26 | 2021-09-15T04:51:21.000Z | 2022-02-01T04:45:08.000Z | from .views import views_router
| 16 | 31 | 0.84375 | 5 | 32 | 5.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3741124b4074960d4caba5f632908d6e3697f3ce | 180 | py | Python | CodeChef/LONG/FEB15/XRMTRX/a.py | VastoLorde95/Competitive-Programming | 6c990656178fb0cd33354cbe5508164207012f24 | [
"MIT"
] | 170 | 2017-07-25T14:47:29.000Z | 2022-01-26T19:16:31.000Z | CodeChef/LONG/FEB15/XRMTRX/a.py | navodit15/Competitive-Programming | 6c990656178fb0cd33354cbe5508164207012f24 | [
"MIT"
] | null | null | null | CodeChef/LONG/FEB15/XRMTRX/a.py | navodit15/Competitive-Programming | 6c990656178fb0cd33354cbe5508164207012f24 | [
"MIT"
] | 55 | 2017-07-28T06:17:33.000Z | 2021-10-31T03:06:22.000Z | for _ in xrange(input()):
l, r = [int(x) for x in raw_input().split()]
for i in xrange(l,r+1):
print 'row',i,'\t',
for j in xrange(l,r+1):
print i^j,'\t',
print
print
| 20 | 45 | 0.561111 | 38 | 180 | 2.605263 | 0.421053 | 0.242424 | 0.181818 | 0.20202 | 0.323232 | 0.323232 | 0 | 0 | 0 | 0 | 0 | 0.014184 | 0.216667 | 180 | 8 | 46 | 22.5 | 0.687943 | 0 | 0 | 0.25 | 0 | 0 | 0.038889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
377f369bdf1d503bfb993bedf4a04ce8b6d08efa | 238 | py | Python | FreeCodeCamp/Inheritance/Chef.py | NikiReis/Python_OOP | 8071d641f4895b28584317c0896834c354107df2 | [
"MIT"
] | null | null | null | FreeCodeCamp/Inheritance/Chef.py | NikiReis/Python_OOP | 8071d641f4895b28584317c0896834c354107df2 | [
"MIT"
] | null | null | null | FreeCodeCamp/Inheritance/Chef.py | NikiReis/Python_OOP | 8071d641f4895b28584317c0896834c354107df2 | [
"MIT"
] | null | null | null | class Chef:
def make_chicken(self):
print("The chef makes a chicken")
def make_salad(self):
print("The chef makes salad")
def make_special_dish(self):
print("The chef makes a special dish") | 26.444444 | 46 | 0.605042 | 33 | 238 | 4.242424 | 0.393939 | 0.15 | 0.257143 | 0.342857 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.302521 | 238 | 9 | 46 | 26.444444 | 0.843373 | 0 | 0 | 0 | 0 | 0 | 0.311688 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0 | 0.571429 | 0.428571 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 6 |
8065311e12bb2b723b59d14d1896ade469f39d48 | 17,635 | py | Python | services/api/tests/integration/proxy_tests.py | ohsu-computational-biology/dms-aa | 4aabae8b5ada539fa010a79970093c93fbbddb01 | [
"MIT"
] | null | null | null | services/api/tests/integration/proxy_tests.py | ohsu-computational-biology/dms-aa | 4aabae8b5ada539fa010a79970093c93fbbddb01 | [
"MIT"
] | 10 | 2016-12-07T01:37:41.000Z | 2017-01-20T22:20:52.000Z | services/api/tests/integration/proxy_tests.py | ohsu-computational-biology/euler | 4aabae8b5ada539fa010a79970093c93fbbddb01 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
"""
Test proxy
"""
import urllib
import json
# assumes OS_USERNAME has access to only one project
MY_PROJECT = 'BRCA-UK'
MY_GENESET = 'GS1'
MY_GENE = 'ENSG00000141510'
def test_should_logout_ok(client, app):
"""
should respond with ok /api/v1/auth/logout
"""
headers = {'Authorization': _login_bearer_token(client, app),
'Content-Type': 'application/json'}
r = client.post('/api/v1/auth/logout', headers=headers)
assert r.status_code == 200
def test_should_create_external_entityset_ok(client, app):
"""
should respond with ok /api/v1/entityset/external
"""
headers = {'Authorization': _login_bearer_token(client, app),
'Content-Type': 'application/json'}
data = {"filters": {}, "size": 173535, "type": "DONOR",
"name": "Input donor set", "description": "", "isTransient": True,
"sortBy": "fileName", "sortOrder": "DESCENDING"}
r = client.post('/api/v1/entityset/external', headers=headers,
data=json.dumps(data))
assert r.status_code == 201
assert r.json['id']
def test_should_validate_token_ok(client, app):
"""
should respond with ok and response for token
"""
headers = {'Authorization': _login_bearer_token(client, app)}
r = client.get('/api/v1/auth/verify', headers=headers)
assert r.status_code == 200
def test_should_reject_missing_token(client, app):
"""
should respond with 401 when the token is missing
"""
r = client.get('/api/v1/auth/verify')
assert r.status_code == 401
def test_redact_download_info(client, app):
"""
should respond with ok and response from dcc, with MY_PROJECT in results
"""
headers = {'Authorization': _login_bearer_token(client, app)}
r = client.get('/api/v1/download/info/current/Projects', headers=headers)
assert r.status_code == 200
assert len(r.json) > 0
for dir in r.json:
assert MY_PROJECT in dir['name'] or 'README' in dir['name']
def test_post_analysis_enrichment(client, app):
"""
should respond with ok and response from dcc
"""
headers = {'Authorization': _login_bearer_token(client, app),
'Accept': 'application/json',
'Content-Type': 'application/x-www-form-urlencoded'}
form = dict([('sort', u'affectedDonorCountFiltered'),
('params', u'{"maxGeneSetCount":50,"fdr":0.05,"universe":"REACTOME_PATHWAYS","maxGeneCount":50}'), # NOQA
('order', u'DESC'),
('filters', u'{"gene":{"id":{"is":["ES:2d097244-2aac-4ae5-a428-7bff28adad46"]}}}')]) # NOQA
r = client.post('/api/v1/analysis/enrichment', headers=headers,
data=form)
assert r.status_code == 202
def test_post_analysis_enrichment_no_data(client, app):
"""
should respond with 400 when no form data is provided
"""
headers = {'Authorization': _login_bearer_token(client, app),
'Accept': 'application/json',
'Content-Type': 'application/x-www-form-urlencoded'}
r = client.post('/api/v1/analysis/enrichment', headers=headers)
assert r.status_code == 400
def test_post_analysis_enrichment_no_header(client, app):
"""
should respond with 400 when the content-type header is missing
"""
headers = {'Authorization': _login_bearer_token(client, app)}
r = client.post('/api/v1/analysis/enrichment', headers=headers)
assert r.status_code == 400
def test_donors_returns_ok(client, app):
"""
should respond with ok and response from dcc, with MY_PROJECT in results
"""
headers = {'Authorization': _login_bearer_token(client, app)}
params = {'from': 1, 'include': 'facets', 'size': 25}
r = client.get('/api/v1/donors',
query_string=params, headers=headers)
assert r.status_code == 200
assert r.json.keys() == [u'pagination', u'hits', u'facets']
for hit in r.json['hits']:
assert hit['projectId'] == MY_PROJECT
if 'projectId' in r.json['facets']:
for term in r.json['facets']['projectId']['terms']:
assert term['term'] == MY_PROJECT
def test_genes_returns_ok(client, app):
"""
should respond with ok and response from dcc, with MY_PROJECT in results
"""
headers = {'Authorization': _login_bearer_token(client, app)}
params = {'from': 1, 'include': 'facets', 'size': 25}
filters = {"donor": {"projectId": {"is": ["BRCA-UK"]}}}
filters = urllib.quote_plus(json.dumps(filters))
params_filtered = {'from': 1, 'include': 'facets', 'size': 25,
'filters': filters}
r = client.get('/api/v1/genes',
query_string=params, headers=headers)
assert r.status_code == 200
assert r.json.keys() == [u'pagination', u'hits', u'facets']
r_filtered = client.get('/api/v1/genes',
query_string=params_filtered, headers=headers)
assert r_filtered.status_code == 200
for i in range(len(r.json['hits'])):
assert r.json['hits'][i] == r_filtered.json['hits'][i]
def test_genes_count_returns_ok(client, app):
"""
should respond with ok and response from dcc, with MY_PROJECT in results
"""
headers = {'Authorization': _login_bearer_token(client, app)}
filters = {"gene": {"hasPathway": 'true'}}
filters = urllib.quote_plus(json.dumps(filters))
params = {'filters': filters}
filters = {"gene": {"hasPathway": 'true'},
"donor": {"projectId": {"is": [MY_PROJECT]}}}
filters = urllib.quote_plus(json.dumps(filters))
params_filtered = {'filters': filters}
r = client.get('/api/v1/genes/count',
query_string=params, headers=headers)
assert r.status_code == 200
r = int(r.json)
r_filtered = client.get('/api/v1/genes/count',
query_string=params_filtered, headers=headers)
assert r_filtered.status_code == 200
r_filtered = int(r_filtered.json)
assert r == r_filtered
def test_genes_mutations_counts(client, app):
"""
should respond with ok and response from dcc, with MY_PROJECT in results
"""
headers = {'Authorization': _login_bearer_token(client, app)}
filters = {}
filters = urllib.quote_plus(json.dumps(filters))
params = {'filters': filters}
filters = {"donor": {"projectId": {"is": [MY_PROJECT]}}}
filters = urllib.quote_plus(json.dumps(filters))
params_filtered = {'filters': filters}
r = client.get('/api/v1/genes/{}/mutations/counts'.format(MY_GENE),
query_string=params, headers=headers)
assert r.status_code == 200
r_json = r.json
r_filtered = client.get('/api/v1/genes/{}/mutations/counts'.format(MY_GENE), # NOQA
query_string=params_filtered, headers=headers)
assert r_filtered.status_code == 200
assert r_json == r_filtered.json
def test_genesets_genes_counts(client, app):
headers = {'Authorization': _login_bearer_token(client, app)}
r = client.get('/api/v1/genesets/{}/genes/counts'.format(MY_GENESET), headers=headers) # NOQA
assert r.status_code == 200
assert r.json[MY_GENESET] != 0
def test_bad_genesets_genes_counts(client, app):
headers = {'Authorization': _login_bearer_token(client, app)}
r = client.get('/api/v1/genesets/{}/genes/counts'.format('NOT_A_GENESET'), headers=headers) # NOQA
assert r.status_code == 200
assert r.json['NOT_A_GENESET'] == 0
def test_mutations_returns_ok(client, app):
"""
should respond with ok and response from dcc, with MY_PROJECT in results
"""
headers = {'Authorization': _login_bearer_token(client, app)}
params = {'from': 1, 'include': 'facets', 'size': 25}
filters = {"donor": {"projectId": {"is": ["BRCA-UK"]}}}
filters = urllib.quote_plus(json.dumps(filters))
params_filtered = {'from': 1, 'include': 'facets', 'size': 25,
'filters': filters}
r = client.get('/api/v1/mutations',
query_string=params, headers=headers)
assert r.status_code == 200
assert r.json.keys() == [u'pagination', u'hits', u'facets']
r_filtered = client.get('/api/v1/mutations',
query_string=params_filtered, headers=headers)
assert r_filtered.status_code == 200
for i in range(len(r.json['hits'])):
assert r.json['hits'][i] == r_filtered.json['hits'][i]
def test_occurrences(client, app):
"""
should respond with ok and response from dcc, with MY_PROJECT in results
"""
headers = {'Authorization': _login_bearer_token(client, app)}
filters = {}
filters = urllib.quote_plus(json.dumps(filters))
params = {'filters': filters}
filters = {"donor": {"projectId": {"is": [MY_PROJECT]}}}
filters = urllib.quote_plus(json.dumps(filters))
params_filtered = {'filters': filters}
r = client.get('/api/v1/occurrences',
query_string=params, headers=headers)
assert r.status_code == 200
r_json = r.json
r_filtered = client.get('/api/v1/occurrences',
query_string=params_filtered, headers=headers)
assert r_filtered.status_code == 200
assert r_json == r_filtered.json
def test_donors_facets_only_ok(client, app):
"""
should respond with 400 when facetsOnly=true
and the filter parameter created by the browser is bad
"""
headers = {'Authorization': _login_bearer_token(client, app)}
params = {'facetsOnly': True, 'include': 'facets', 'size': 10,
'from': 1, 'filters': '{"donor":{"id":{"is":["ES:undefined"]}}}'}
r = client.get('/api/v1/donors',
query_string=params, headers=headers)
assert r.status_code == 400
def test_status_returns_ok(client):
"""
should respond with ok and response from dcc
"""
r = client.get('/api/version')
assert r.status_code == 200
assert r.json.keys() == ['indexCommit', 'indexName', 'api',
'portal', 'portalCommit']
def test_files_summary(client, app):
"""
should respond with ok and response from dcc
"""
headers = {'Authorization': _login_bearer_token(client, app)}
filters = {"file": {"projectCode": {"is": ["BRCA-UK"]}}}
filters = urllib.quote_plus(json.dumps(filters))
params = {'from': 1, 'include': 'facets', 'size': 25}
params_filtered = {'filters': filters, 'from': 1, 'include': 'facets',
'size': 25}
r = client.get('/api/v1/repository/files/summary',
query_string=params, headers=headers)
assert r.status_code == 200
assert r.json.keys() == [u'projectCount', u'totalFileSize', u'donorCount',
u'primarySiteCount', u'fileCount']
r_filtered = client.get('/api/v1/repository/files/summary',
query_string=params_filtered, headers=headers)
assert r_filtered.status_code == 200
assert r_filtered.json.keys() == [u'projectCount', u'totalFileSize',
u'donorCount', u'primarySiteCount',
u'fileCount']
for key in r.json.keys():
assert r.json[key] == r_filtered.json[key]
def test_files_returns_ok(client, app):
"""
should respond with ok and response from dcc
"""
headers = {'Authorization': _login_bearer_token(client, app)}
filters = {"file": {"repoName": {"is": ["Collaboratory - Toronto"]}}}
filters = urllib.quote_plus(json.dumps(filters))
params = {'filters': filters, 'from': 1, 'include': 'facets', 'size': 25}
r = client.get('/api/v1/repository/files',
query_string=params, headers=headers)
assert r.status_code == 200
assert r.json.keys() == [u'hits', u'termFacets', u'pagination']
for hit in r.json['hits']:
for donor in hit['donors']:
assert donor['projectCode'] == MY_PROJECT
def test_files_returns_unauthorized_for_no_token(client, app):
"""
should respond with 401 when no token is provided
"""
filters = {"file": {"repoName": {"is": ["Collaboratory - Toronto"]}}}
filters = urllib.quote_plus(json.dumps(filters))
params = {'filters': filters, 'from': 1, 'include': 'facets', 'size': 25}
r = client.get('/api/v1/repository/files', query_string=params)
assert r.status_code == 401
def test_files_returns_unauthorized_for_bad_projects(client, app):
"""
should respond with 401 if project codes don't match
"""
headers = {'Authorization': _login_bearer_token(client, app)}
filters = {"file": {"repoName": {"is": ["Collaboratory - Toronto"]}, "projectCode": {"is": ["X", "Y"]}}} # NOQA
filters = urllib.quote_plus(json.dumps(filters))
params = {'filters': filters, 'from': 1, 'include': 'facets', 'size': 25}
r = client.get('/api/v1/repository/files',
query_string=params, headers=headers)
assert r.status_code == 401
def test_projects_returns_empty_list_if_unauthenticated(client, app):
""" /api/v1/projects """
r = client.get('/api/v1/projects')
assert r.status_code == 200
assert len(r.json['hits']) == 0
def test_projects_returns_list_if_authenticated(client, app):
""" /api/v1/projects """
headers = {'Authorization': _login_bearer_token(client, app)}
r = client.get('/api/v1/projects', headers=headers)
assert r.status_code == 200
assert len(r.json['hits']) == 1
def test_projects_returns_list_if_project_specified(client, app):
""" /api/v1/projects """
headers = {'Authorization': _login_bearer_token(client, app)}
filters = {"project": {"id": {"is": [MY_PROJECT]}}}
filters = urllib.quote_plus(json.dumps(filters))
params = {'filters': filters}
r = client.get('/api/v1/projects', headers=headers, query_string=params)
assert r.status_code == 200
assert len(r.json['hits']) == 1
def test_projects_returns_list_if_not_project_specified(client, app):
""" /api/v1/projects """
headers = {'Authorization': _login_bearer_token(client, app)}
filters = {"project": {"id": {"not": []}}}
filters = urllib.quote_plus(json.dumps(filters))
params = {'filters': filters}
r = client.get('/api/v1/projects', headers=headers, query_string=params)
assert r.status_code == 200
assert len(r.json['hits']) == 1
def test_gene_project_donor_counts(client, app):
headers = {'Authorization': _login_bearer_token(client, app)}
filters = {"mutation": {"functionalImpact": {"is": "High"}}}
filters = urllib.quote_plus(json.dumps(filters))
params = {'filters': filters}
r = client.get('/api/v1/ui/search/gene-project-donor-counts/ENSG00000005339??', # NOQA
query_string=params, headers=headers) # NOQA
assert r.status_code == 200
assert r.json['ENSG00000005339']
assert r.json['ENSG00000005339']['terms'][0]['term'] == MY_PROJECT
def test_donor_mutation_counts(client, app):
headers = {'Authorization': _login_bearer_token(client, app)}
r = client.get('/api/v1/ui/search/projects/donor-mutation-counts', headers=headers) # NOQA
assert r.status_code == 200
assert r.json[MY_PROJECT]
assert len(r.json.keys()) == 1
def test_projects_history(client, app):
headers = {'Authorization': _login_bearer_token(client, app)}
r = client.get('/api/v1/projects/history', headers=headers)
assert r.status_code == 200
for group in r.json:
assert group['group'] == MY_PROJECT
def test_projects_genes(client, app):
headers = {'Authorization': _login_bearer_token(client, app)}
r = client.get('/api/v1/projects/{}/genes'.format(MY_PROJECT), headers=headers) # NOQA
assert r.status_code == 200
def test_projects_genes_bad_project(client, app):
headers = {'Authorization': _login_bearer_token(client, app)}
r = client.get('/api/v1/projects/{}/genes'.format('NOT_A_PROJECT'), headers=headers) # NOQA
assert r.status_code == 401
def test_get_manifests(client, app):
headers = {'Authorization': _login_bearer_token(client, app)}
url = '/api/v1/manifests?repos=collaboratory&format=tarball&filters={"file":{"id":{"is":"FI661960"}}}' # NOQA
r = client.get(url, headers=headers)
assert r.status_code == 200
def test_get_manifests_exacloud(client, app):
headers = {'Authorization': _login_bearer_token(client, app)}
# this file is actually in the BRCA repo,
# (since the test user has access to that dir)
# we've overridden the repo to force an exacloud response
url = '/api/v1/manifests?repos=exacloud&format=tarball&filters={"file":{"id":{"is":"FI661960"}}}' # NOQA
r = client.get(url, headers=headers)
assert r.status_code == 200
# should only have one file
assert r.data.count('scp $SCP_OPTS') == 1
def test_get_manifests_exacloud_nofind(client, app):
headers = {'Authorization': _login_bearer_token(client, app)}
# this file is actually in the BRCA repo,
# (since the test user has access to that dir)
# we've overridden the repo to force an exacloud response
url = '/api/v1/manifests?repos=exacloud&format=tarball&filters={"file":{"id":{"is":"DUMMYFILEID"}}}' # NOQA
r = client.get(url, headers=headers)
assert r.status_code == 400
def test_get_manifests_noauth(client, app):
headers = {}
url = '/api/v1/manifests?repos=collaboratory&format=tarball&filters={"file":{"id":{"is":"FI661960"}}}' # NOQA
r = client.get(url, headers=headers)
assert r.status_code == 401
def _login_bearer_token(client, app):
global global_id_token
return 'Bearer {}'.format(global_id_token)
| 39.451902 | 123 | 0.643436 | 2,238 | 17,635 | 4.902145 | 0.105004 | 0.053322 | 0.041473 | 0.054234 | 0.819524 | 0.792271 | 0.773129 | 0.756358 | 0.734117 | 0.707775 | 0 | 0.022032 | 0.204707 | 17,635 | 446 | 124 | 39.540359 | 0.760214 | 0.09725 | 0 | 0.557432 | 0 | 0.013514 | 0.216181 | 0.079879 | 0 | 0 | 0 | 0 | 0.243243 | 1 | 0.121622 | false | 0 | 0.006757 | 0 | 0.131757 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0382267c2695edb914ba348e76d1cb8e5c2c2550 | 2,394 | py | Python | hintcast/hintcast.py | davocarli/hintcast | 86eaba2d094ac2089f960d82e60a36a698f45f52 | [
"MIT"
] | null | null | null | hintcast/hintcast.py | davocarli/hintcast | 86eaba2d094ac2089f960d82e60a36a698f45f52 | [
"MIT"
] | null | null | null | hintcast/hintcast.py | davocarli/hintcast | 86eaba2d094ac2089f960d82e60a36a698f45f52 | [
"MIT"
] | null | null | null | from inspect import getfullargspec
from types import FunctionType
import logging
def cast_hints(*args, **kwargs):
cast_none = True
strict = False
if 'cast_none' in kwargs:
cast_none = kwargs['cast_none']
if 'strict' in kwargs:
strict = kwargs['strict']
def outer(func):
spec = getfullargspec(func)
def wrapper(*args, **kwargs):
new_args = []
new_kwargs = {}
for i in range(len(args)):
arg_name = spec.args[i]
passed_value = args[i]
if (cast_none or passed_value is not None) and \
arg_name in spec.annotations and not \
isinstance(passed_value, spec.annotations[arg_name]):
try:
passed_value = spec.annotations[arg_name](passed_value)
except Exception as e:
if strict:
raise TypeError(f'Could not convert {arg_name} to {spec.annotations[arg_name]}. {e}')
logging.warning(f'Could not convert {arg_name} to {spec.annotations[arg_name]}. {e}')
new_args.append(passed_value)
for arg_name in kwargs:
                passed_value = kwargs[arg_name]  # iterating kwargs, so the key is always present
if (cast_none or passed_value is not None) and \
arg_name in spec.annotations and not \
isinstance(passed_value, spec.annotations[arg_name]):
try:
passed_value = spec.annotations[arg_name](passed_value)
except Exception as e:
if strict:
raise TypeError(f'Could not convert {arg_name} to {spec.annotations[arg_name]}. {e}')
logging.warning(f'Could not convert {arg_name} to {spec.annotations[arg_name]}. {e}')
new_kwargs[arg_name] = passed_value
return func(*new_args, **new_kwargs)
return wrapper
    if args and isinstance(args[0], FunctionType):
return outer(args[0])
return outer
def strict_hints(func):
spec = getfullargspec(func)
def wrapper(*args, **kwargs):
for i in range(len(args)):
arg_name = spec.args[i]
passed_value = args[i]
if arg_name in spec.annotations and not \
isinstance(passed_value, spec.annotations[arg_name]):
raise TypeError(f'{arg_name} is not of type {spec.annotations[arg_name]}')
for arg_name in kwargs:
            passed_value = kwargs[arg_name]  # iterating kwargs, so the key is always present
if arg_name in spec.annotations and not \
isinstance(passed_value, spec.annotations[arg_name]):
raise TypeError(f'{arg_name} is not of type {spec.annotations[arg_name]}')
return func(*args, **kwargs)
    return wrapper
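As a quick illustration of the `getfullargspec`-based casting technique the decorators above use, here is a self-contained sketch (`cast_hints_demo` and `add` are illustrative names, not part of this module):

```python
from inspect import getfullargspec


def cast_hints_demo(func):
    """Minimal sketch of the same idea: cast positional args to their hints."""
    spec = getfullargspec(func)

    def wrapper(*args, **kwargs):
        new_args = []
        for name, value in zip(spec.args, args):
            target = spec.annotations.get(name)
            if target is not None and not isinstance(value, target):
                value = target(value)  # e.g. int("2") -> 2
            new_args.append(value)
        return func(*new_args, **kwargs)

    return wrapper


@cast_hints_demo
def add(a: int, b: int) -> int:
    return a + b


print(add("2", 3))  # the string "2" is cast to int before the call, printing 5
```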
# File: scripts_nbs/foobar_script.py (repo: hainesm6-learning/Jupyter_Notebook_learning, license: MIT)
import sys
print(f"Hello {sys.argv[1]}. This is a python module ;)")
# File: ancilla/ancilla/foundation/node/plugins/__init__.py (repo: frenzylabs/ancilla, license: Apache-2.0)
from .layerkeep import LayerkeepPlugin
# File: tests/test_api.py (repo: rdas1/TravelKit-Backend, license: MIT)
# -*- coding: utf-8 -*-
import json
import unittest
from flask import url_for
from flask_testing import TestCase
from flask_login import login_user
from app import create_app, db
from app.models import User, Todo, TodoList
class TodolistAPITestCase(TestCase):
def create_app(self):
return create_app('testing')
def setUp(self):
db.create_all()
self.username_alice = 'alice'
def tearDown(self):
db.session.remove()
db.drop_all()
def assert404Response(self, response):
self.assert_404(response)
json_response = json.loads(response.data.decode('utf-8'))
self.assertEqual(json_response['error'], 'Not found')
def assert400Response(self, response):
self.assert_400(response)
json_response = json.loads(response.data.decode('utf-8'))
self.assertEqual(json_response['error'], 'Bad Request')
@staticmethod
def setup_new_user(username):
user_data = {
'username': username,
'email': username + '@example.com',
'password': 'example_password'
}
return user_data
@staticmethod
def get_headers():
return {
'Accept': 'application/json',
'Content-Type': 'application/json'
}
def add_user(self, username):
user_data = self.setup_new_user(username)
User.from_dict(user_data)
return User.query.filter_by(username=username).first()
@staticmethod
def add_todolist(title, username=None):
todolist = TodoList(title=title, creator=username).save()
return TodoList.query.filter_by(id=todolist.id).first()
def add_todo(self, description, todolist_id, username=None):
todolist = TodoList.query.filter_by(id=todolist_id).first()
todo = Todo(description=description, todolist_id=todolist.id,
creator=username).save()
return Todo.query.filter_by(id=todo.id).first()
def add_user_through_json_post(self, username):
user_data = self.setup_new_user(username)
return self.client.post(url_for('api.add_user'),
headers=self.get_headers(),
data=json.dumps(user_data))
def create_admin(self):
new_user = self.setup_new_user('admin')
new_user['is_admin'] = True
return User.from_dict(new_user)
def test_main_route(self):
response = self.client.get(url_for('api.get_routes'))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
self.assertTrue('users' in json_response)
self.assertTrue('todolists' in json_response)
def test_not_found(self):
response = self.client.get('/api/not/found')
self.assert404Response(response)
# test api post calls
def test_add_user(self):
post_response = self.add_user_through_json_post(self.username_alice)
self.assertEqual(post_response.headers['Content-Type'],
'application/json')
self.assert_status(post_response, 201)
response = self.client.get(url_for('api.get_users'))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
users = json_response['users']
self.assertEqual(users[0]['username'], self.username_alice)
def test_add_user_only_using_the_username(self):
user_data = {'username': self.username_alice}
response = self.client.post(url_for('api.add_user'),
headers=self.get_headers(),
data=json.dumps(user_data))
self.assert400Response(response)
def test_add_user_only_using_the_username_and_email(self):
user_data = {
'username': self.username_alice,
'email': self.username_alice + '@example.com',
}
response = self.client.post(url_for('api.add_user'),
headers=self.get_headers(),
data=json.dumps(user_data))
self.assert400Response(response)
    def test_add_user_with_too_long_username(self):
user_data = {
'username': 65 * 'a',
'email': self.username_alice + '@example.com',
'password': 'correcthorsebatterystaple',
}
response = self.client.post(url_for('api.add_user'),
headers=self.get_headers(),
data=json.dumps(user_data))
self.assert400Response(response)
def test_add_user_with_invalid_username(self):
user_data = {
'username': 'not a valid username',
'email': self.username_alice + '@example.com',
'password': 'correcthorsebatterystaple',
}
response = self.client.post(url_for('api.add_user'),
headers=self.get_headers(),
data=json.dumps(user_data))
self.assert400Response(response)
def test_add_user_without_username(self):
user_data = {
'username': '',
'email': self.username_alice + '@example.com',
'password': 'correcthorsebatterystaple',
}
response = self.client.post(url_for('api.add_user'),
headers=self.get_headers(),
data=json.dumps(user_data))
self.assert400Response(response)
def test_add_user_with_invalid_email(self):
user_data = {
'username': self.username_alice,
'email': self.username_alice + 'example.com',
'password': 'correcthorsebatterystaple',
}
response = self.client.post(url_for('api.add_user'),
headers=self.get_headers(),
data=json.dumps(user_data))
self.assert400Response(response)
    def test_add_user_without_email(self):
user_data = {
'username': self.username_alice,
'email': '',
'password': 'correcthorsebatterystaple',
}
response = self.client.post(url_for('api.add_user'),
headers=self.get_headers(),
data=json.dumps(user_data))
self.assert400Response(response)
def test_add_user_with_too_long_email(self):
user_data = {
'username': self.username_alice,
'email': 53 * 'a' + '@example.com',
'password': 'correcthorsebatterystaple',
}
response = self.client.post(url_for('api.add_user'),
headers=self.get_headers(),
data=json.dumps(user_data))
self.assert400Response(response)
def test_add_user_without_password(self):
user_data = {
'username': self.username_alice,
'email': self.username_alice + '@example.com',
'password': '',
}
response = self.client.post(url_for('api.add_user'),
headers=self.get_headers(),
data=json.dumps(user_data))
self.assert400Response(response)
def test_add_user_with_extra_fields(self):
user_data = {
'username': self.username_alice,
'email': self.username_alice + '@example.com',
'password': 'correcthorsebatterystaple',
'extra-field': 'will be ignored'
}
post_response = self.client.post(url_for('api.add_user'),
headers=self.get_headers(),
data=json.dumps(user_data))
self.assertEqual(post_response.headers['Content-Type'],
'application/json')
self.assert_status(post_response, 201)
response = self.client.get(url_for('api.get_users'))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
self.assertEqual(
json_response['users'][0]['username'],
self.username_alice
)
def test_add_user_only_using_the_username_and_password(self):
user_data = {
'username': self.username_alice,
'password': 'correcthorsebatterystaple'
}
response = self.client.post(url_for('api.add_user'),
headers=self.get_headers(),
data=json.dumps(user_data))
self.assert400Response(response)
def test_add_todolist(self):
post_response = self.client.post(
url_for('api.add_todolist'),
headers=self.get_headers(),
data=json.dumps({'title': 'todolist'})
)
self.assert_status(post_response, 201)
# the expected id of the todolist is 1, as it is the first to be added
response = self.client.get(url_for('api.get_todolist', todolist_id=1))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
self.assertEqual(json_response['title'], 'todolist')
def test_add_todolist_without_title(self):
response = self.client.post(
url_for('api.add_todolist'),
headers=self.get_headers()
)
# opposed to the form, the title is a required argument
self.assert400Response(response)
def test_add_todolist_with_too_long_title(self):
response = self.client.post(
url_for('api.add_todolist'),
headers=self.get_headers(),
data=json.dumps({'title': 129 * 't'})
)
self.assert400Response(response)
def test_add_user_todolist(self):
self.add_user(self.username_alice)
post_response = self.client.post(
url_for('api.add_user_todolist', username=self.username_alice),
headers=self.get_headers(),
data=json.dumps({'title': 'todolist'})
)
self.assert_status(post_response, 201)
response = self.client.get(
url_for('api.get_user_todolists', username=self.username_alice))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
# check title, creator are set correctly and a total of one todolist
todolists = json_response['todolists']
self.assertEqual(todolists[0]['title'], 'todolist')
self.assertEqual(todolists[0]['creator'], self.username_alice)
self.assertEqual(len(todolists), 1)
def test_add_user_todolist_when_user_does_not_exist(self):
post_response = self.client.post(
url_for('api.add_user_todolist', username=self.username_alice),
headers=self.get_headers(),
data=json.dumps({'title': 'todolist'})
)
self.assert404Response(post_response)
def test_add_user_todolist_todo(self):
todolist_title = 'new todolist'
self.add_user(self.username_alice)
new_todolist = self.add_todolist(todolist_title, self.username_alice)
post_response = self.client.post(
url_for('api.add_user_todolist_todo',
username=self.username_alice, todolist_id=new_todolist.id),
headers=self.get_headers(),
data=json.dumps({
'description': 'new todo',
'creator': self.username_alice,
'todolist_id': new_todolist.id
})
)
self.assert_status(post_response, 201)
response = self.client.get(url_for('api.get_user_todolist_todos',
username=self.username_alice,
todolist_id=new_todolist.id))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
# check title, creator are set correctly and a total of one todo
todos = json_response['todos']
self.assertEqual(todos[0]['description'], 'new todo')
self.assertEqual(todos[0]['creator'], self.username_alice)
self.assertEqual(len(todos), 1)
def test_add_user_todolist_todo_when_todolist_does_not_exist(self):
self.add_user(self.username_alice)
post_response = self.client.post(
url_for('api.add_user_todolist_todo',
username=self.username_alice, todolist_id=1),
headers=self.get_headers(),
data=json.dumps({
'description': 'new todo',
'creator': self.username_alice,
'todolist_id': 1
})
)
self.assert404Response(post_response)
def test_add_user_todolist_todo_without_todo_data(self):
todolist_title = 'new todolist'
self.add_user(self.username_alice)
new_todolist = self.add_todolist(todolist_title, self.username_alice)
post_response = self.client.post(
url_for('api.add_user_todolist_todo',
username=self.username_alice, todolist_id=new_todolist.id),
headers=self.get_headers()
)
self.assert400Response(post_response)
def test_add_todolist_todo(self):
new_todolist = TodoList().save() # todolist with default title
post_response = self.client.post(
url_for('api.add_todolist_todo', todolist_id=new_todolist.id),
headers=self.get_headers(),
data=json.dumps({
'description': 'new todo',
'creator': 'null',
'todolist_id': new_todolist.id
})
)
self.assert_status(post_response, 201)
response = self.client.get(url_for('api.get_todolist_todos',
todolist_id=new_todolist.id))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
# check title, creator are set correctly and a total of one todo
todos = json_response['todos']
self.assertEqual(todos[0]['description'], 'new todo')
self.assertEqual(todos[0]['creator'], None)
self.assertEqual(len(todos), 1)
def test_add_todolist_todo_when_todolist_does_not_exist(self):
post_response = self.client.post(
url_for('api.add_todolist_todo', todolist_id=1),
headers=self.get_headers(),
data=json.dumps({
'description': 'new todo',
'creator': 'null',
'todolist_id': 1
})
)
self.assert404Response(post_response)
def test_add_todolist_todo_without_todo_data(self):
new_todolist = TodoList().save()
post_response = self.client.post(
url_for('api.add_todolist_todo', todolist_id=new_todolist.id),
headers=self.get_headers()
)
self.assert400Response(post_response)
# test api get calls
def test_get_users(self):
self.add_user(self.username_alice)
response = self.client.get(url_for('api.get_users'))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
self.assertEqual(json_response['users'][0]['username'],
self.username_alice)
def test_get_users_when_no_users_exist(self):
response = self.client.get(url_for('api.get_users'))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
self.assertEqual(json_response['users'], [])
def test_get_user(self):
self.add_user(self.username_alice)
response = self.client.get(
url_for('api.get_user', username=self.username_alice))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
self.assertEqual(json_response['username'], self.username_alice)
def test_get_user_when_user_does_not_exist(self):
response = self.client.get(
url_for('api.get_user', username=self.username_alice))
self.assert404Response(response)
def test_get_todolists(self):
todolist_title = 'new todolist '
self.add_user(self.username_alice)
self.add_todolist(todolist_title + '1', self.username_alice)
self.add_todolist(todolist_title + '2', self.username_alice)
response = self.client.get(url_for('api.get_todolists'))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
todolists = json_response['todolists']
self.assertEqual(todolists[0]['title'], 'new todolist 1')
self.assertEqual(todolists[0]['creator'], self.username_alice)
self.assertEqual(todolists[1]['title'], 'new todolist 2')
self.assertEqual(todolists[1]['creator'], self.username_alice)
self.assertEqual(len(todolists), 2)
def test_get_todolists_when_no_todolists_exist(self):
response = self.client.get(url_for('api.get_todolists'))
self.assert_200(response)
todolists = json.loads(response.data.decode('utf-8'))['todolists']
self.assertEqual(todolists, [])
def test_get_user_todolists(self):
todolist_title = 'new todolist '
self.add_user(self.username_alice)
self.add_todolist(todolist_title + '1', self.username_alice)
self.add_todolist(todolist_title + '2', self.username_alice)
response = self.client.get(url_for('api.get_user_todolists',
username=self.username_alice))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
todolists = json_response['todolists']
self.assertEqual(todolists[0]['title'], 'new todolist 1')
self.assertEqual(todolists[0]['creator'], self.username_alice)
self.assertEqual(todolists[1]['title'], 'new todolist 2')
self.assertEqual(todolists[1]['creator'], self.username_alice)
self.assertEqual(len(todolists), 2)
def test_get_user_todolists_when_user_does_not_exist(self):
response = self.client.get(url_for('api.get_user_todolists',
username=self.username_alice))
self.assert404Response(response)
def test_get_user_todolists_when_user_has_no_todolists(self):
self.add_user(self.username_alice)
response = self.client.get(url_for('api.get_user_todolists',
username=self.username_alice))
self.assert_200(response)
todolists = json.loads(response.data.decode('utf-8'))['todolists']
self.assertEqual(todolists, [])
def test_get_todolist_todos(self):
todolist_title = 'new todolist'
new_todolist = self.add_todolist(todolist_title)
self.add_todo('first', new_todolist.id)
self.add_todo('second', new_todolist.id)
response = self.client.get(url_for('api.get_todolist_todos',
todolist_id=new_todolist.id))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
todos = json_response['todos']
self.assertEqual(todos[0]['description'], 'first')
self.assertEqual(todos[0]['creator'], None)
self.assertEqual(todos[1]['description'], 'second')
self.assertEqual(todos[1]['creator'], None)
self.assertEqual(len(todos), 2)
def test_get_todolist_todos_when_todolist_does_not_exist(self):
response = self.client.get(url_for('api.get_todolist_todos',
todolist_id=1))
self.assert404Response(response)
def test_get_todolist_todos_when_todolist_has_no_todos(self):
todolist_title = 'new todolist'
new_todolist = self.add_todolist(todolist_title)
response = self.client.get(url_for('api.get_todolist_todos',
todolist_id=new_todolist.id))
self.assert_200(response)
todos = json.loads(response.data.decode('utf-8'))['todos']
self.assertEqual(todos, [])
def test_get_user_todolist_todos(self):
todolist_title = 'new todolist'
self.add_user(self.username_alice)
new_todolist = self.add_todolist(todolist_title, self.username_alice)
self.add_todo('first', new_todolist.id, self.username_alice)
self.add_todo('second', new_todolist.id, self.username_alice)
response = self.client.get(url_for('api.get_user_todolist_todos',
username=self.username_alice,
todolist_id=new_todolist.id))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
todos = json_response['todos']
self.assertEqual(todos[0]['description'], 'first')
self.assertEqual(todos[0]['creator'], self.username_alice)
self.assertEqual(todos[1]['description'], 'second')
self.assertEqual(todos[1]['creator'], self.username_alice)
self.assertEqual(len(todos), 2)
def test_get_user_todolist_todos_when_user_does_not_exist(self):
response = self.client.get(
url_for('api.get_user_todolist_todos',
username=self.username_alice, todolist_id=1))
self.assert404Response(response)
def test_get_user_todolist_todos_when_todolist_does_not_exist(self):
self.add_user(self.username_alice)
response = self.client.get(
url_for('api.get_user_todolist_todos',
username=self.username_alice, todolist_id=1))
self.assert404Response(response)
def test_get_user_todolist_todos_when_todolist_has_no_todos(self):
todolist_title = 'new todolist'
self.add_user(self.username_alice)
new_todolist = self.add_todolist(todolist_title, self.username_alice)
response = self.client.get(url_for('api.get_user_todolist_todos',
username=self.username_alice,
todolist_id=new_todolist.id))
self.assert_200(response)
todos = json.loads(response.data.decode('utf-8'))['todos']
self.assertEqual(todos, [])
def test_get_different_user_todolist_todos(self):
first_username = self.username_alice
second_username = 'bob'
todolist_title = 'new todolist'
        first_user = self.add_user(first_username)
        self.add_user(second_username)
        new_todolist = self.add_todolist(todolist_title, second_username)
        self.add_todo('first', new_todolist.id, second_username)
        self.add_todo('second', new_todolist.id, second_username)
        response = self.client.get(url_for('api.get_user_todolist_todos',
                                           username=first_user.username,
                                           todolist_id=new_todolist.id))
self.assert404Response(response)
def test_get_user_todolist(self):
todolist_title = 'new todolist'
self.add_user(self.username_alice)
new_todolist = self.add_todolist(todolist_title, self.username_alice)
response = self.client.get(url_for('api.get_user_todolist',
username=self.username_alice,
todolist_id=new_todolist.id))
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
self.assertEqual(json_response['title'], todolist_title)
self.assertEqual(json_response['creator'], self.username_alice)
def test_get_user_todolist_when_user_does_not_exist(self):
response = self.client.get(url_for('api.get_user_todolist',
username=self.username_alice,
todolist_id=1))
self.assert404Response(response)
def test_get_user_todolist_when_todolist_does_not_exist(self):
self.add_user(self.username_alice)
response = self.client.get(url_for('api.get_user_todolist',
username=self.username_alice,
todolist_id=1))
self.assert404Response(response)
# test api put call
def test_update_todo_status_to_finished(self):
todolist = self.add_todolist('new todolist')
todo = self.add_todo('first', todolist.id)
self.assertFalse(todo.is_finished)
self.client.put(
url_for('api.update_todo_status', todo_id=todo.id),
headers=self.get_headers(),
data=json.dumps({'is_finished': True})
)
todo = Todo.query.get(todo.id)
self.assertTrue(todo.is_finished)
def test_update_todo_status_to_open(self):
todolist = self.add_todolist('new todolist')
todo = self.add_todo('first', todolist.id)
todo.finished()
self.assertTrue(todo.is_finished)
self.client.put(
url_for('api.update_todo_status', todo_id=todo.id),
headers=self.get_headers(),
data=json.dumps({'is_finished': False})
)
todo = Todo.query.get(todo.id)
self.assertFalse(todo.is_finished)
self.assertTrue(todo.finished_at is None)
def test_change_todolist_title(self):
todolist = self.add_todolist('new todolist')
response = self.client.put(
url_for('api.change_todolist_title', todolist_id=todolist.id),
headers=self.get_headers(),
data=json.dumps({'title': 'changed title'})
)
self.assert_200(response)
json_response = json.loads(response.data.decode('utf-8'))
self.assertEqual(json_response['title'], 'changed title')
def test_change_todolist_title_too_long_title(self):
todolist = self.add_todolist('new todolist')
response = self.client.put(
url_for('api.change_todolist_title', todolist_id=todolist.id),
headers=self.get_headers(),
data=json.dumps({'title': 129 * 't'})
)
self.assert_400(response)
def test_change_todolist_title_empty_title(self):
todolist = self.add_todolist('new todolist')
response = self.client.put(
url_for('api.change_todolist_title', todolist_id=todolist.id),
headers=self.get_headers(),
data=json.dumps({'title': ''})
)
self.assert_400(response)
def test_change_todolist_title_without_title(self):
todolist = self.add_todolist('new todolist')
response = self.client.put(
url_for('api.change_todolist_title', todolist_id=todolist.id),
headers=self.get_headers()
)
self.assert_400(response)
# test api delete calls
@unittest.skip('because acquiring admin rights is currently an issue')
def test_delete_user(self):
admin = self.create_admin()
login_user(admin)
user = self.add_user(self.username_alice)
user_id = user.id
response = self.client.delete(
url_for('api.delete_user', user_id=user_id),
headers=self.get_headers(),
data=json.dumps({'user_id': user_id})
)
self.assert_200(response)
response = self.client.get(url_for('api.get_user', user_id=user_id))
self.assert_404(response)
@unittest.skip('because acquiring admin rights is currently an issue')
def test_delete_todolist(self):
admin = self.create_admin()
login_user(admin)
todolist = self.add_todolist('new todolist')
todolist_id = todolist.id
response = self.client.delete(
url_for('api.delete_todolist', todolist_id=todolist_id),
headers=self.get_headers(),
data=json.dumps({'todolist_id': todolist_id})
)
self.assert_200(response)
response = self.client.get(
url_for('api.get_todolist', todolist_id=todolist_id)
)
self.assert_404(response)
@unittest.skip('because acquiring admin rights is currently an issue')
def test_delete_todo(self):
admin = self.create_admin()
login_user(admin)
todolist = self.add_todolist('new todolist')
todo = self.add_todo('new todo', todolist.id)
todo_id = todo.id
response = self.client.delete(
url_for('api.delete_todo', todo_id=todo_id),
headers=self.get_headers(),
data=json.dumps({'todo_id': todo_id})
)
self.assert_200(response)
response = self.client.get(
url_for('api.get_todo', todo_id=todo_id)
)
self.assert_404(response)
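The tests above decode every response body by hand with `json.loads(response.data.decode('utf-8'))`; a tiny helper like the following (`json_of` is an illustrative name, not part of the suite) would cut that repetition:

```python
import json


def json_of(response):
    """Decode a Flask test-client response body as JSON (sketch)."""
    return json.loads(response.data.decode('utf-8'))


# Stand-in for a Flask test-client response, just to exercise the helper.
class FakeResponse:
    data = b'{"users": [{"username": "alice"}]}'


assert json_of(FakeResponse())['users'][0]['username'] == 'alice'
```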
# File: tests/unit/test_exceptions.py (repo: t20100/pip, license: MIT)
"""Tests the presentation style of exceptions."""
import textwrap
import pytest
from pip._internal.exceptions import DiagnosticPipError
class TestDiagnosticPipErrorCreation:
def test_fails_without_reference(self) -> None:
class DerivedError(DiagnosticPipError):
pass
with pytest.raises(AssertionError) as exc_info:
DerivedError(message="", context=None, hint_stmt=None)
assert str(exc_info.value) == "error reference not provided!"
def test_can_fetch_reference_from_subclass(self) -> None:
class DerivedError(DiagnosticPipError):
reference = "subclass-reference"
obj = DerivedError(message="", context=None, hint_stmt=None)
assert obj.reference == "subclass-reference"
def test_can_fetch_reference_from_arguments(self) -> None:
class DerivedError(DiagnosticPipError):
pass
obj = DerivedError(
message="", context=None, hint_stmt=None, reference="subclass-reference"
)
assert obj.reference == "subclass-reference"
@pytest.mark.parametrize(
"name",
[
"BADNAME",
"BadName",
"bad_name",
"BAD_NAME",
"_bad",
"bad-name-",
"bad--name",
"-bad-name",
"bad-name-due-to-1-number",
],
)
def test_rejects_non_kebab_case_names(self, name: str) -> None:
class DerivedError(DiagnosticPipError):
reference = name
with pytest.raises(AssertionError) as exc_info:
DerivedError(message="", context=None, hint_stmt=None)
assert str(exc_info.value) == "error reference must be kebab-case!"
class TestDiagnosticPipErrorPresentation_ASCII:
def test_complete(self) -> None:
err = DiagnosticPipError(
reference="test-diagnostic",
message="Oh no!\nIt broke. :(",
context="Something went wrong\nvery wrong.",
attention_stmt="You did something wrong, which is what caused this error.",
hint_stmt="Do it better next time, by trying harder.",
)
assert str(err) == textwrap.dedent(
"""\
Oh no!
It broke. :(
Something went wrong
very wrong.
Note: You did something wrong, which is what caused this error.
Hint: Do it better next time, by trying harder.
"""
)
def test_no_context(self) -> None:
err = DiagnosticPipError(
reference="test-diagnostic",
message="Oh no!\nIt broke. :(",
context=None,
attention_stmt="You did something wrong, which is what caused this error.",
hint_stmt="Do it better next time, by trying harder.",
)
assert str(err) == textwrap.dedent(
"""\
Oh no!
It broke. :(
Note: You did something wrong, which is what caused this error.
Hint: Do it better next time, by trying harder.
"""
)
def test_no_note(self) -> None:
err = DiagnosticPipError(
reference="test-diagnostic",
message="Oh no!\nIt broke. :(",
context="Something went wrong\nvery wrong.",
attention_stmt=None,
hint_stmt="Do it better next time, by trying harder.",
)
assert str(err) == textwrap.dedent(
"""\
Oh no!
It broke. :(
Something went wrong
very wrong.
Hint: Do it better next time, by trying harder.
"""
)
def test_no_hint(self) -> None:
err = DiagnosticPipError(
reference="test-diagnostic",
message="Oh no!\nIt broke. :(",
context="Something went wrong\nvery wrong.",
attention_stmt="You did something wrong, which is what caused this error.",
hint_stmt=None,
)
assert str(err) == textwrap.dedent(
"""\
Oh no!
It broke. :(
Something went wrong
very wrong.
Note: You did something wrong, which is what caused this error.
"""
)
def test_no_context_no_hint(self) -> None:
err = DiagnosticPipError(
reference="test-diagnostic",
message="Oh no!\nIt broke. :(",
context=None,
attention_stmt="You did something wrong, which is what caused this error.",
hint_stmt=None,
)
assert str(err) == textwrap.dedent(
"""\
Oh no!
It broke. :(
Note: You did something wrong, which is what caused this error.
"""
)
def test_no_context_no_note(self) -> None:
err = DiagnosticPipError(
reference="test-diagnostic",
message="Oh no!\nIt broke. :(",
context=None,
attention_stmt=None,
hint_stmt="Do it better next time, by trying harder.",
)
assert str(err) == textwrap.dedent(
"""\
Oh no!
It broke. :(
Hint: Do it better next time, by trying harder.
"""
)
def test_no_hint_no_note(self) -> None:
err = DiagnosticPipError(
reference="test-diagnostic",
message="Oh no!\nIt broke. :(",
context="Something went wrong\nvery wrong.",
attention_stmt=None,
hint_stmt=None,
)
assert str(err) == textwrap.dedent(
"""\
Oh no!
It broke. :(
Something went wrong
very wrong.
"""
)
def test_no_hint_no_note_no_context(self) -> None:
err = DiagnosticPipError(
reference="test-diagnostic",
message="Oh no!\nIt broke. :(",
context=None,
hint_stmt=None,
attention_stmt=None,
)
assert str(err) == textwrap.dedent(
"""\
Oh no!
It broke. :(
"""
)
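The formatting contract these assertions pin down can be sketched with a minimal stand-in class. `DiagnosticError` below is hypothetical, not pip's real `DiagnosticPipError` (which lives in pip's internals and does more); it only mirrors the `str()` layout the expected strings above describe: message, then optional context, then optional `Note:` and `Hint:` lines.

```python
class DiagnosticError(Exception):
    """Hypothetical stand-in mirroring the str() layout the tests expect."""

    def __init__(self, *, reference, message, context=None,
                 attention_stmt=None, hint_stmt=None):
        self.reference = reference
        parts = [message]
        if context is not None:
            parts.append(context)
        if attention_stmt is not None:
            parts.append(f"Note: {attention_stmt}")
        if hint_stmt is not None:
            parts.append(f"Hint: {hint_stmt}")
        # Trailing newline matches the dedented expected strings above.
        super().__init__("\n".join(parts) + "\n")
```

Each optional section simply drops out of the joined output when its argument is `None`, which is exactly the branching the test matrix above walks through.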

# Source: the_platform/views.py (repo: Thames1990/BadBatBets, license: MIT)
from django.shortcuts import render
from profiles.util import user_authenticated


def error403(request, exception=None):
    # Django passes an exception argument to handler403/handler404; pass the
    # real HTTP status so the error page is not served as 200 OK.
    return render(request, 'the_platform/403.html', {
        'user_authenticated': user_authenticated(request.user)
    }, status=403)


def error404(request, exception=None):
    return render(request, 'the_platform/404.html', {
        'user_authenticated': user_authenticated(request.user)
    }, status=404)


def error500(request):
    # handler500 receives no exception argument.
    return render(request, 'the_platform/500.html', {
        'user_authenticated': user_authenticated(request.user)
    }, status=500)
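For Django to actually route error responses through these views, they must be registered by name in the project's root URLconf. A minimal sketch follows; the module path `the_platform.views` is inferred from the file path above, and the project's real `urls.py` may differ.

```python
# Root urls.py sketch: with DEBUG = False, Django looks up these
# module-level names when rendering 403/404/500 responses.
handler403 = "the_platform.views.error403"
handler404 = "the_platform.views.error404"
handler500 = "the_platform.views.error500"
```

Django resolves each dotted string to the view callable at error time, so no import of the views module is needed here.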

# Source: pyemma/_base/progress/__init__.py (repo: trendelkampschroer/PyEMMA, license: BSD-2-Clause)
from .reporter import ProgressReporter

# Source: arc085_a.py (repo: hythof/atc, license: CC0-1.0)
n, m = [int(x) for x in open(0).read().split()]
# Expected total time: one run costs 1900*M + 100*(N-M) ms, each run succeeds
# with probability (1/2)**M, so 2**M runs are expected. Multiplying by 2**M
# in integer arithmetic avoids the float rounding that int(x / (1/2**m)) risks.
print((1900 * m + 100 * (n - m)) * 2 ** m)
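The one-liner's arithmetic can be unpacked into a helper for clarity. This assumes the usual reading of the comment's formula: M flaky cases at 1900 ms each, N-M solid cases at 100 ms each, and a per-run success probability of (1/2)^M, hence an expected 2^M runs.

```python
def expected_total_time(n: int, m: int) -> int:
    # Cost of one full run of all N cases, in milliseconds.
    one_run = 1900 * m + 100 * (n - m)
    # Each run succeeds with probability (1/2)**m, so on average 2**m
    # runs are needed; pure integer math keeps the result exact.
    return one_run * 2 ** m

print(expected_total_time(3, 1))  # (1900 + 200) * 2 = 4200
```

Keeping everything in integers matters because 2**m overflows float precision well before the typical constraint ceiling, while Python ints are arbitrary-precision.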