hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a485788eeecfff354b48a006a86e1d854357d0c9 | 50 | py | Python | aerosandbox/weights/__init__.py | raihaan123/AeroSandbox | 1e7c78f04b066415f671237a4833ba98901bb9ec | [
"MIT"
] | 1 | 2021-11-01T22:48:12.000Z | 2021-11-01T22:48:12.000Z | aerosandbox/weights/__init__.py | raihaan123/AeroSandbox | 1e7c78f04b066415f671237a4833ba98901bb9ec | [
"MIT"
] | null | null | null | aerosandbox/weights/__init__.py | raihaan123/AeroSandbox | 1e7c78f04b066415f671237a4833ba98901bb9ec | [
"MIT"
] | null | null | null | from aerosandbox.weights.mass_properties import *
| 25 | 49 | 0.86 | 6 | 50 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 50 | 1 | 50 | 50 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a485dec593e36886639d25ea0cb31bfa15127541 | 93 | py | Python | __name__ == '__main__'/test_import_new.py | kyaiooiayk/Python-Programming | b70dde24901cd24b38e2ead7c9a1b2d1808fc4b0 | [
"OLDAP-2.3"
] | null | null | null | __name__ == '__main__'/test_import_new.py | kyaiooiayk/Python-Programming | b70dde24901cd24b38e2ead7c9a1b2d1808fc4b0 | [
"OLDAP-2.3"
] | null | null | null | __name__ == '__main__'/test_import_new.py | kyaiooiayk/Python-Programming | b70dde24901cd24b38e2ead7c9a1b2d1808fc4b0 | [
"OLDAP-2.3"
] | null | null | null | import important_new
print("Called from test_import_new.py, __name__ has value?:", __name__) | 31 | 71 | 0.806452 | 14 | 93 | 4.571429 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 93 | 3 | 71 | 31 | 0.761905 | 0 | 0 | 0 | 0 | 0 | 0.553191 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
a485e51be320a1c41ace27db12d2c0b78a42c680 | 2,568 | py | Python | 7/make_fig_7a.py | mohitganguly/test_amanda | a2f19934ce8e7206fa0ddbd4960dc4cfa809518e | [
"MIT"
] | 1 | 2020-03-24T17:02:36.000Z | 2020-03-24T17:02:36.000Z | 7/make_fig_7a.py | mohitganguly/test_amanda | a2f19934ce8e7206fa0ddbd4960dc4cfa809518e | [
"MIT"
] | null | null | null | 7/make_fig_7a.py | mohitganguly/test_amanda | a2f19934ce8e7206fa0ddbd4960dc4cfa809518e | [
"MIT"
] | 1 | 2022-01-03T08:41:52.000Z | 2022-01-03T08:41:52.000Z | from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import itertools
nseg = 999
l = 20
blocks = np.genfromtxt('block_HH.txt', delimiter = ' ')
x = range(32,40)
dia = blocks[0:,0]
print(len(dia))
block1 = blocks[0,1:]
block1 = block1*l/nseg
block2 = blocks[1,1:]
block2 = block2*l/nseg
block3 = blocks[2,1:]
block3 = block3*l/nseg
block4 = blocks[3,1:]
block4 = block4*l/nseg
block5 = blocks[4,1:]
block5 = block5*l/nseg
i = 0
for i in range(5):
plt.plot(x, blocks[i,1:]*l/nseg, '-o', lw = 3)
plt.xlim(31,40)
#plt.ylabel('Threshold Block Width (mm)', fontsize = 16)
#plt.xlabel('Temperature of Block ($^o$C)', fontsize = 16)
#plt.text(39.1, blocks[0,-1]*l/nseg, '500 $\mu m$')
#plt.text(39.1, blocks[1,-1]*l/nseg, '250 $\mu m$')
#plt.text(39.1, blocks[2,-1]*l/nseg, '100 $\mu m$')
#plt.text(39.1, blocks[3,-1]*l/nseg, '10 $\mu m$')
#plt.text(39.1, blocks[4,-1]*l/nseg, '1 $\mu m$')
plt.ylim(-1,11)
plt.vlines(x=33, ymin = float(21*l/nseg), ymax = float(347*l/nseg), linewidth=3, color = 'k')
plt.vlines(x=36, ymin = float(19*l/nseg), ymax = float(268*l/nseg), linewidth=3, color = 'k')
plt.vlines(x=39, ymin = float(19*l/nseg), ymax = float(257*l/nseg), linewidth=3, color = 'k')
plt.savefig('fig_supp.jpeg', format = 'jpeg', dpi = 600)
plt.show()
fig=plt.figure()
ax=fig.add_subplot(111)
ax.set_color_cycle(['b','g','r','c', 'm'])
sqrt_dia = np.sqrt(dia)
#marker_color = itertools.cycle(('b','g','r','b'))
a = [1,4,7]
m_33, c_33 = np.polyfit(sqrt_dia, blocks[:,2]*l/nseg, 1)
m_36, c_36 = np.polyfit(sqrt_dia, blocks[:,5]*l/nseg, 1)
m_39, c_39 = np.polyfit(sqrt_dia, blocks[:,8]*l/nseg, 1)
color = ['r', 'b', 'g']
x = 0
for i in a:
plt.scatter(sqrt_dia, blocks[:,(i+1)]*l/nseg,c = color[x], s = 30)
x = x+1
#plt.plot(sqrt_dia, blocks[:,(i+1)]*l/nseg, color = 'k', lw = 1)
#plt.text (23, blocks[0,(i+1)]*l/nseg, '%d'%x[i], fontsize = 10)
#dia = blocks[:,0]
#plt.scatter(sqrt_dia, block1, c=['b','g','r','c', 'm'], s = 72)
#plt.plot(sqrt_dia, block1, color = 'k')
#plt.plot(sqrt_dia, block2, color = 'k')
#plt.ylabel('Threshold Block Width (mm)', fontsize = 16)
#plt.xlabel('$\sqrt{Axon Diameter (\mu m)} $', fontsize = 16)
ax = plt.subplot(1, 1, 1)
plt.plot(sqrt_dia, m_33 * sqrt_dia + c_33, color = 'r', lw = 2.5)
plt.plot(sqrt_dia, m_36 * sqrt_dia + c_36, color = 'b', lw = 2.5)
plt.plot(sqrt_dia, m_39 * sqrt_dia + c_39, color = 'g', lw = 2.5)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.xlim(-1,24)
plt.ylim(0,7.5)
#plt.savefig('7c.jpeg', format = 'jpeg', dpi = 600)
plt.show()
| 29.860465 | 93 | 0.626558 | 501 | 2,568 | 3.133733 | 0.239521 | 0.073248 | 0.034395 | 0.053503 | 0.363694 | 0.281529 | 0.281529 | 0.126115 | 0.101911 | 0.06242 | 0 | 0.086625 | 0.132399 | 2,568 | 85 | 94 | 30.211765 | 0.618043 | 0.333723 | 0 | 0.038462 | 0 | 0 | 0.031896 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.076923 | null | null | 0.019231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a4863c3e8d16dad6c9bb7aafdd0d568e61fb79a7 | 388 | py | Python | triplinker/journeys/migrations/0007_auto_20200907_1616.py | GonnaFlyMethod/triplinker | f4189e499ad48fd9102dd2211a8884078136eae9 | [
"MIT"
] | null | null | null | triplinker/journeys/migrations/0007_auto_20200907_1616.py | GonnaFlyMethod/triplinker | f4189e499ad48fd9102dd2211a8884078136eae9 | [
"MIT"
] | null | null | null | triplinker/journeys/migrations/0007_auto_20200907_1616.py | GonnaFlyMethod/triplinker | f4189e499ad48fd9102dd2211a8884078136eae9 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.8 on 2020-09-07 16:16
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('journeys', '0006_auto_20200907_1607'),
]
operations = [
migrations.RenameField(
model_name='activity',
old_name='description_of_activity',
new_name='description',
),
]
| 20.421053 | 48 | 0.610825 | 41 | 388 | 5.585366 | 0.780488 | 0.131004 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111511 | 0.283505 | 388 | 18 | 49 | 21.555556 | 0.71223 | 0.115979 | 0 | 0 | 1 | 0 | 0.214076 | 0.134897 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a4872bfc5ea2ebaa9569179a301218bcadb5ad3d | 8,965 | py | Python | old/old_another_small_jobshop_dwave_another_example.py | MiRudnik/quantum_optimization | 9c63c9164d9a8620d7610cc0576a1e3ee7319d98 | [
"MIT"
] | null | null | null | old/old_another_small_jobshop_dwave_another_example.py | MiRudnik/quantum_optimization | 9c63c9164d9a8620d7610cc0576a1e3ee7319d98 | [
"MIT"
] | null | null | null | old/old_another_small_jobshop_dwave_another_example.py | MiRudnik/quantum_optimization | 9c63c9164d9a8620d7610cc0576a1e3ee7319d98 | [
"MIT"
] | 1 | 2021-07-13T21:50:53.000Z | 2021-07-13T21:50:53.000Z | import numpy as np
# Set Q for the problem QUBO
from utils.jobshop_helpers import get_machine_and_time_slot, get_operation_length, is_last_row, get_qubits_from_slot_and_machine, \
get_time_slot
def main():
# qubo_matrix = np.zeros((40,40))
jobs = [[2, 1], [1,2]]
j_flat = []
for job in jobs:
j_flat.extend(job)
time_limit = 5
number_of_machines = 2
qubits_number = number_of_machines * len(j_flat) * time_limit
connections = prepare_connections(jobs, number_of_machines, time_limit)
linear = {}
quadratic = {}
for i in range(qubits_number):
linear['x{}'.format(i), 'x{}'.format(i)] = int(connections[i,i])
for i in range(qubits_number):
for j in range(i + 1, qubits_number):
val = connections[i,j]
if (val != 0):
quadratic['x{}'.format(i), 'x{}'.format(j)] = int(val)
# linear = {('x0', 'x0'): -1, ('x1', 'x1'): -1, ('x2', 'x2'): -1, ('x3', 'x3'): -1,
# ('x4', 'x4'): -1, ('x5', 'x5'): -1, ('x6', 'x6'): -1, ('x7', 'x7'): -1}
# quadratic = {('x0', 'x2'): 2, ('x0', 'x4'): 2, ('x0', 'x6'): 2, ('x2', 'x4'): 2, ('x2', 'x6'): 2, ('x4', 'x6'): 2,
# ('x1', 'x3'): 2, ('x1', 'x5'): 2, ('x1', 'x7'): 2, ('x3', 'x5'): 2, ('x3', 'x7'): 2, ('x5', 'x7'): 2}
# quadratic = {('x0', 'x2'): 2, ('x0', 'x4'): 2, ('x0', 'x6'): 2, ('x1', 'x3'): 2, ('x1', 'x5'): 2, ('x1', 'x7'): 2,
# ('x2', 'x4'): 2, ('x2', 'x6'): 2, ('x3', 'x5'): 2, ('x3', 'x7'): 2
# , ('x4', 'x6'): 2,('x5', 'x7'): 2}
print(linear)
print(quadratic)
Q = dict(linear)
Q.update(quadratic)
    # Minor-embed and sample 100 times on a default D-Wave system
# response = EmbeddingComposite(DWaveSampler()).sample_qubo(Q, num_reads=100)
# for s in list(response.data()):
# print(s.sample, "Energy: ", s.energy, "Occurrences: ", s.num_occurrences)
def add_starts_only_once_constraint(connections, row_length, number_of_qubits, number_of_operations, multiplier):
starting_points = range(row_length)
only_one_one_qubits_lists = [list(range(starting_point, number_of_qubits, number_of_operations)) for starting_point in starting_points]
for oper_list in only_one_one_qubits_lists:
for qubit in oper_list:
connections[qubit, qubit] = -1 * multiplier
print("-1 for [{}, {}]".format(qubit, qubit))
for (i, first_elem) in enumerate(oper_list):
for (j, second_elem) in enumerate(oper_list[i + 1:]):
connections[first_elem, second_elem] = 2 * multiplier
print("2 for [{}, {}]".format(first_elem, second_elem))
return connections
def add_one_job_on_machine_constraint(connections, jobs, row_length, number_of_operations, number_of_qubits, time_limit, multiplier):
for qubit_number in range(number_of_qubits):
machine_number, time_slot = get_machine_and_time_slot(qubit_number, row_length, number_of_operations)
operation_number = qubit_number % number_of_operations
operation_length = get_operation_length(jobs, operation_number)
if (is_last_row(time_slot, time_limit)):
shift = 0
qubits = get_qubits_from_slot_and_machine(machine_number, time_slot + shift, number_of_operations, row_length)
for qubit in qubits:
connections[qubit_number, qubit] += multiplier
else:
for shift in range(operation_length):
qubits = get_qubits_from_slot_and_machine(machine_number, time_slot + shift, number_of_operations,
row_length)
print("For qubit {} qubits are {}".format(qubit_number, list(qubits)))
for qubit in qubits:
if (qubit - qubit_number) % row_length != 0:
connections[qubit_number, qubit] += multiplier
for (i,c) in enumerate(connections):
print(i, c)
return connections
def get_global_op_num(job_lens, job_number, checked_op_num):
previous_operations_number = 0
for job_len in job_lens[:job_number]:
previous_operations_number += job_len
return previous_operations_number + checked_op_num
def get_qubits_for_operation(job_number, checked_op_num, job_lens, number_of_machines, time_limit, number_of_operations):
global_op_num = get_global_op_num(job_lens, job_number, checked_op_num)
row_len = number_of_machines * number_of_operations
qubits = []
for machine_number in range(1,number_of_machines + 1):
qubits.extend([global_op_num + (number_of_operations * (machine_number - 1)) + row_len * cur_time for cur_time in range(time_limit)])
return qubits
def add_order_constraint(connections, jobs, number_of_machines, time_limit, number_of_operations, multiplier):
job_lens = [len(job) for job in jobs]
row_len = number_of_machines * number_of_operations
# for every job
for (job_number, job) in enumerate(jobs):
# for every operation except first in job
for checked_op_num in range(1,len(job)):
qubits_for_checked_op = get_qubits_for_operation(job_number, checked_op_num, job_lens,
number_of_machines, time_limit, number_of_operations)
# for every operation, that is before operation with number op_num
for (tmp_op_num, tmp_op_len) in enumerate(job[:checked_op_num]):
qubits_for_tmp_op = get_qubits_for_operation(job_number, tmp_op_num, job_lens,
number_of_machines, time_limit, number_of_operations)
# print("Job: {}, Checked op_num: {}, tmp op num: {}, tmp op len: {}".format(job, checked_op_num, tmp_op_num, tmp_op_len))
for qubit_checked_op in qubits_for_checked_op:
for qubit_tmp_op in qubits_for_tmp_op:
checked_op_time_slot = get_time_slot(qubit_checked_op, row_len)
tmp_op_time_slot = get_time_slot(qubit_tmp_op, row_len)
if checked_op_time_slot - tmp_op_time_slot < tmp_op_len:
connections[qubit_tmp_op, qubit_checked_op] += multiplier
return connections
def prepare_connections(jobs, number_of_machines, time_limit):
# jobs = [[2,1],[1,2]]
number_of_operations = sum([len(job) for job in jobs])
number_of_qubits = number_of_machines * number_of_operations * time_limit
row_length = number_of_machines * number_of_operations
beta = 1
eta = -1
alpha = 1
connections = np.zeros((number_of_qubits, number_of_qubits))
# connections = add_starts_only_once_constraint(connections, row_length, number_of_qubits, number_of_operations, beta)
connections = add_one_job_on_machine_constraint(connections, jobs, row_length, number_of_operations,
number_of_qubits, time_limit, alpha)
# connections = add_order_constraint(connections, jobs, number_of_machines, time_limit, number_of_operations, eta)
# for (num, conn) in enumerate(connections):
# print(num, conn)
# connections = [[] for i in range(40)]
# connections[0] = [1, 2, 3, 8, 9, 10, 11, 16, 24, 32]
# connections[1] = [2, 3, 8, 9, 16, 17, 24, 25, 32, 33]
# connections[2] = [3, 10, 18, 26, 34]
# connections[3] = [8, 9, 10, 11, 18, 19, 26, 27, 34, 35]
# connections[4] = [5, 6, 7, 12, 13, 14, 15, 20, 28, 36]
# connections[5] = [6, 7, 12, 13, 20, 21, 28, 29, 36, 37]
# connections[6] = [7, 14, 22, 30, 38]
# connections[7] = [12, 13, 14, 15, 22, 23, 30, 31, 38, 39]
# connections[8] = [9, 10, 11, 16, 17, 18, 19, 24, 32]
# connections[9] = [10, 11, 16, 17, 24, 25, 32, 33]
# connections[10] = [11, 18, 26, 34]
# connections[11] = [16, 17, 18, 19, 26, 27, 34, 35]
# connections[12] = [13, 14, 15, 20, 21, 22, 23, 28, 36]
# connections[13] = [14, 15, 20, 21, 28, 29, 36, 37]
# connections[14] = [15, 22, 30, 38]
# connections[15] = [20, 21, 22, 23, 30, 31, 38, 39]
# connections[16] = [17, 18, 19, 24, 25, 26, 27, 32]
# connections[17] = [18, 19, 24, 25, 32, 33]
# connections[18] = [19, 26, 34]
# connections[19] = [24, 25, 26, 27, 34, 35]
# connections[20] = [21, 22, 23, 28, 29, 30, 31, 36]
# connections[21] = [22, 23, 28, 29, 36, 37]
# connections[22] = [23, 30, 38]
# connections[23] = [28, 29, 30, 31, 38, 39]
# connections[24] = [25, 26, 27, 32, 33, 34, 35]
# connections[25] = [26, 27, 32, 33]
# connections[26] = [27, 34]
# connections[27] = [32, 33, 34, 35]
# connections[28] = [29, 30, 31, 36, 37, 38, 39]
# connections[29] = [30, 31, 36, 37]
# connections[30] = [31, 38]
# connections[31] = [36, 37, 38, 39]
# connections[32] = [32, 33, 34, 35]
# connections[33] = [34, 35]
# connections[34] = [35]
# connections[35] = [35]
# connections[36] = [36, 37, 38, 39]
# connections[37] = [36, 38, 39]
# connections[38] = [39]
# connections[39] = [39]
return connections
if __name__=='__main__':
main()
| 46.21134 | 139 | 0.62164 | 1,298 | 8,965 | 4.026194 | 0.128659 | 0.065825 | 0.068886 | 0.026789 | 0.497321 | 0.368733 | 0.309606 | 0.250287 | 0.216992 | 0.216992 | 0 | 0.092547 | 0.232236 | 8,965 | 193 | 140 | 46.450777 | 0.666715 | 0.353374 | 0 | 0.14 | 0 | 0 | 0.013082 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07 | false | 0 | 0.02 | 0 | 0.15 | 0.06 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a48740f19c411c12d63473995b985326be58c92a | 425 | py | Python | sensors/TemperaturaDHT11.py | tec-csf/reto-iot-en-supermercado-2019-nova-iot-supermarket | 0eb643132478a06477404dcd86c4359869ec7d81 | [
"MIT"
] | 1 | 2019-10-28T14:58:14.000Z | 2019-10-28T14:58:14.000Z | sensors/TemperaturaDHT11.py | tec-csf/reto-iot-en-supermercado-2019-nova-iot-supermarket | 0eb643132478a06477404dcd86c4359869ec7d81 | [
"MIT"
] | null | null | null | sensors/TemperaturaDHT11.py | tec-csf/reto-iot-en-supermercado-2019-nova-iot-supermarket | 0eb643132478a06477404dcd86c4359869ec7d81 | [
"MIT"
] | null | null | null | import Adafruit_DHT
sensor = Adafruit_DHT.DHT11
pin_temp = 3
def temperatura(pin_temp):
temperature = 0
if (temperature <=22):
humidity, temperature = Adafruit_DHT.read_retry(sensor, pin_temp)
if humidity is not None and temperature is not None:
print('Temp={0:0.1f}*C Humidity={1:0.1f}%'.format(temperature, humidity))
else:
print('Failed to get reading. Try again!') | 38.636364 | 86 | 0.663529 | 59 | 425 | 4.661017 | 0.559322 | 0.12 | 0.065455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036585 | 0.228235 | 425 | 11 | 87 | 38.636364 | 0.801829 | 0 | 0 | 0 | 0 | 0 | 0.159624 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.181818 | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4880c07ec70056b56e0d43278747671d02751da | 4,716 | py | Python | tests/middleware/test_http_to_https.py | ai-mocap/hypercorn | 0c1a74a726d5e54a2a3876edba8ad2a0a547c5d5 | [
"MIT"
] | 264 | 2018-06-02T17:49:46.000Z | 2022-03-29T07:39:06.000Z | tests/middleware/test_http_to_https.py | ai-mocap/hypercorn | 0c1a74a726d5e54a2a3876edba8ad2a0a547c5d5 | [
"MIT"
] | 52 | 2018-06-14T19:30:00.000Z | 2022-02-27T04:26:48.000Z | tests/middleware/test_http_to_https.py | nonebot/nonecorn | 813408d385f11b6bbdaee63d6b6ace8c87586d25 | [
"MIT"
] | 29 | 2018-06-13T23:54:48.000Z | 2022-02-20T15:23:14.000Z | from __future__ import annotations
import pytest
from hypercorn.middleware import HTTPToHTTPSRedirectMiddleware
from hypercorn.typing import HTTPScope, WebsocketScope
from ..helpers import empty_framework
@pytest.mark.asyncio
@pytest.mark.parametrize("raw_path", [b"/abc", b"/abc%3C"])
async def test_http_to_https_redirect_middleware_http(raw_path: bytes) -> None:
app = HTTPToHTTPSRedirectMiddleware(empty_framework, "localhost")
sent_events = []
async def send(message: dict) -> None:
nonlocal sent_events
sent_events.append(message)
scope: HTTPScope = {
"type": "http",
"asgi": {},
"http_version": "2",
"method": "GET",
"scheme": "http",
"path": raw_path.decode(),
"raw_path": raw_path,
"query_string": b"a=b",
"root_path": "",
"headers": [],
"client": ("127.0.0.1", 80),
"server": None,
"extensions": {},
}
await app(scope, None, send)
assert sent_events == [
{
"type": "http.response.start",
"status": 307,
"headers": [(b"location", b"https://localhost%s?a=b" % raw_path)],
},
{"type": "http.response.body"},
]
@pytest.mark.asyncio
@pytest.mark.parametrize("raw_path", [b"/abc", b"/abc%3C"])
async def test_http_to_https_redirect_middleware_websocket(raw_path: bytes) -> None:
app = HTTPToHTTPSRedirectMiddleware(empty_framework, "localhost")
sent_events = []
async def send(message: dict) -> None:
nonlocal sent_events
sent_events.append(message)
scope: WebsocketScope = {
"type": "websocket",
"asgi": {},
"http_version": "1.1",
"scheme": "ws",
"path": raw_path.decode(),
"raw_path": raw_path,
"query_string": b"a=b",
"root_path": "",
"headers": [],
"client": None,
"server": None,
"subprotocols": [],
"extensions": {"websocket.http.response": {}},
}
await app(scope, None, send)
assert sent_events == [
{
"type": "websocket.http.response.start",
"status": 307,
"headers": [(b"location", b"wss://localhost%s?a=b" % raw_path)],
},
{"type": "websocket.http.response.body"},
]
@pytest.mark.asyncio
async def test_http_to_https_redirect_middleware_websocket_http2() -> None:
app = HTTPToHTTPSRedirectMiddleware(empty_framework, "localhost")
sent_events = []
async def send(message: dict) -> None:
nonlocal sent_events
sent_events.append(message)
scope: WebsocketScope = {
"type": "websocket",
"asgi": {},
"http_version": "2",
"scheme": "ws",
"path": "/abc",
"raw_path": b"/abc",
"query_string": b"a=b",
"root_path": "",
"headers": [],
"client": None,
"server": None,
"subprotocols": [],
"extensions": {"websocket.http.response": {}},
}
await app(scope, None, send)
assert sent_events == [
{
"type": "websocket.http.response.start",
"status": 307,
"headers": [(b"location", b"https://localhost/abc?a=b")],
},
{"type": "websocket.http.response.body"},
]
@pytest.mark.asyncio
async def test_http_to_https_redirect_middleware_websocket_no_rejection() -> None:
app = HTTPToHTTPSRedirectMiddleware(empty_framework, "localhost")
sent_events = []
async def send(message: dict) -> None:
nonlocal sent_events
sent_events.append(message)
scope: WebsocketScope = {
"type": "websocket",
"asgi": {},
"http_version": "2",
"scheme": "ws",
"path": "/abc",
"raw_path": b"/abc",
"query_string": b"a=b",
"root_path": "",
"headers": [],
"client": None,
"server": None,
"subprotocols": [],
"extensions": {},
}
await app(scope, None, send)
assert sent_events == [{"type": "websocket.close"}]
def test_http_to_https_redirect_new_url_header() -> None:
app = HTTPToHTTPSRedirectMiddleware(empty_framework, None)
new_url = app._new_url(
"https",
{
"http_version": "1.1",
"asgi": {},
"method": "GET",
"headers": [(b"host", b"localhost")],
"path": "/",
"root_path": "",
"query_string": b"",
"raw_path": b"/",
"scheme": "http",
"type": "http",
"client": None,
"server": None,
"extensions": {},
},
)
assert new_url == "https://localhost/"
| 27.578947 | 84 | 0.545589 | 475 | 4,716 | 5.218947 | 0.172632 | 0.064542 | 0.050827 | 0.02622 | 0.806777 | 0.786608 | 0.769665 | 0.756353 | 0.756353 | 0.752723 | 0 | 0.00806 | 0.289652 | 4,716 | 170 | 85 | 27.741176 | 0.73194 | 0 | 0 | 0.710345 | 0 | 0 | 0.227523 | 0.03838 | 0 | 0 | 0 | 0 | 0.034483 | 1 | 0.006897 | false | 0 | 0.034483 | 0 | 0.041379 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a48881351fb66e53f1af28f4da440b46709db632 | 2,428 | py | Python | N-MOS_transistor_by_Python/I-V_Characteristics_n-MOSFET.py | yasser296/Python-Projects | eae3598e2d4faf08d9def92c8b417c2e7946c5f4 | [
"MIT"
] | null | null | null | N-MOS_transistor_by_Python/I-V_Characteristics_n-MOSFET.py | yasser296/Python-Projects | eae3598e2d4faf08d9def92c8b417c2e7946c5f4 | [
"MIT"
] | null | null | null | N-MOS_transistor_by_Python/I-V_Characteristics_n-MOSFET.py | yasser296/Python-Projects | eae3598e2d4faf08d9def92c8b417c2e7946c5f4 | [
"MIT"
] | null | null | null | from numpy import arange
from matplotlib import pyplot , figure
# Kn = Kn' * W/L
Kn=1e-3
# Vth is the threshold voltage
Vth = 1.5
# Sweep drain-to-source voltage from 0 to 12 V
Vds = arange(0, 12, 0.1).tolist()
Vgs = [4 , 6 , 8 , 10 ]
Id = list() # Drain Current Id (mA)
for I in range(1,len(Vgs)+1) :
Id.append([])
print(Id)
print("\n\n\n\n\n")
# To draw the transition line
line_Id = list()
line_Vds = list()
# Estimate length of the Vds & Vgs lists
m = len( Vds )
n = len( Vgs )
# Initialize the I-V characteristic points
for i in range(0,n) :
for j in range(0,m) :
if (Vgs[i] < Vth) :
Id[i].append(0)
elif (Vds[j] >= ( Vgs[i] - Vth )) :
Id[i].append((0.5 * Kn * (Vgs[i] - Vth)**2) * 1000)
elif (Vds[j] < ( Vgs[i] - Vth )) :
Id[i].append((Kn *( (Vgs[i] - Vth)* Vds[j] - 0.5 * (Vds[j]**2) )) * 1000 )
# get the transition line points
if (Vds[j] == ( Vgs[i] - Vth )) :
line_Id.append((0.5 * Kn * (Vgs[i] - Vth)**2) * 1000 )
line_Vds.append(Vds[j])
print(Id)
# Plotting the I-V characteristic of n-MOSFET
figure, axis = pyplot.subplots()
print(axis)
print(figure)
figure.set_size_inches(12, 8)
print(figure)
curves = list()
for i in range(0,len(Vgs)) :
curve, = pyplot.plot(Vds, Id[i] , label="Vgs= %d" %Vgs[i] , linewidth=2)
pyplot.annotate("Vgs= %d" %Vgs[i], (10, max(Id[i])+0.0005) , fontsize=12 )
curves.append(curve)
# Plotting the transition line
line_Vds_2 = [0] + line_Vds
line_Id_2 = [0] + line_Id
tran, = pyplot.plot(line_Vds_2 , line_Id_2 , label="Transition line" , linestyle='--' , marker= 'x' , markersize = 12 )
curves.append(tran)
# Plotting the legends
pyplot.legend (curves, [curve.get_label() for curve in curves] , prop={"size":12})
# or
# #fontsize=16 == prop={"size":16} on legend only
axis.set_xlabel("Drain-source voltage Vds (Volt)" , fontsize=16 )
axis.set_ylabel("Drain Current Id (mA)" , fontsize=16 )
pyplot.grid(linestyle='--')
#pyplot.xaxis.grid (color="g")
#pyplot.yaxis.grid (color="r")
Vds_x_axis_numbers = arange(0,13,0.5).tolist()
axis.set_xticks (Vds_x_axis_numbers)
axis.tick_params ( axis='x' , colors='b')
Id_y_axis_numbers = arange(0,41,1).tolist()
axis.set_yticks (Id_y_axis_numbers)
axis.tick_params ( axis='y' , colors='g')
pyplot.title("I-V Characteristics of a n-MOSFET" ,fontsize=16 )
resolution = 500
pyplot.savefig('I-V_characteristic_dpi=%d' %resolution , dpi=resolution)
pyplot.show()
| 22.90566 | 119 | 0.639209 | 415 | 2,428 | 3.650602 | 0.293976 | 0.023762 | 0.032343 | 0.021782 | 0.129373 | 0.106271 | 0.067987 | 0.056766 | 0.056766 | 0 | 0 | 0.045158 | 0.17916 | 2,428 | 105 | 120 | 23.12381 | 0.715003 | 0.189456 | 0 | 0.072727 | 0 | 0 | 0.083419 | 0.012873 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.036364 | 0 | 0.036364 | 0.109091 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a488bfa5aa832da083db4e6b51c66de316b8a1a6 | 7,652 | py | Python | mods/default/client/gui/game_overlays.py | mpbagot/hsc-major-project-code | eaa69bf566b5b34ae7d4aa78504f97576fa2bb1c | [
"MIT"
] | 4 | 2018-04-17T11:55:06.000Z | 2021-02-25T16:03:47.000Z | mods/default/client/gui/game_overlays.py | mpbagot/mata | eaa69bf566b5b34ae7d4aa78504f97576fa2bb1c | [
"MIT"
] | null | null | null | mods/default/client/gui/game_overlays.py | mpbagot/mata | eaa69bf566b5b34ae7d4aa78504f97576fa2bb1c | [
"MIT"
] | null | null | null | """
game_overlays.py
A module containing the GUI overlays of the default client game
"""
# Import the Modding API
from api.gui.gui import *
from api.gui.objects import *
from api.colour import *
from api.packets import SendCommandPacket
# Import stuff from the mod modules
from mods.default.client.gui.extras import *
from mods.default.client.gui.menus import *
class HUD(Overlay):
def __init__(self, game):
super().__init__()
h = self.screen.get_height()
self.buttons = [P2PNoticeButton(game, [944, 540, 60, 60])]
self.game = game
self.bars = [
HorizBar([744, 698, 260, 20], (255, 0, 0), self.game.player.health/100, 'Health'),
HorizBar([744, 728, 260, 20], (0, 102, 255), self.game.player.exp, 'Experience')
]
equippedItems = self.game.player.inventory.getEquipped()
self.itemSlots = [
ItemSlot(game, equippedItems[0], [664, 630], 60),
ItemSlot(game, equippedItems[1], [664, 700], 60)
]
def drawBackgroundLayer(self):
# Update the bar percentages
self.bars[0].percentage = self.game.player.health/100
self.bars[1].percentage = (self.game.player.exp-int(self.game.player.exp**0.5)**2)/(2*int(self.game.player.exp**0.5)+1)
for a in (0, 1):
self.itemSlots[a].setItem(self.game.player.inventory.getEquipped()[a])
# Draw the background rectangle
pygame.draw.rect(self.screen, (173, 144, 106), scaleRect([654, 620, 400, 150], self.screen))
pygame.draw.rect(self.screen, (65, 55, 40), scaleRect([654, 620, 400, 150], self.screen), 4)
def drawForegroundLayer(self, mousePos):
super().drawForegroundLayer(mousePos)
# Generate a font object
font = pygame.font.Font('resources/font/main.ttf', 20)
text = font.render('Username: '+self.game.player.name, True, (255, 255, 255))
self.screen.blit(text, scaleRect([744, 640], self.screen))
# Generate a smaller font object
font = pygame.font.Font('resources/font/main.ttf', 12)
# Calculate and render the player level
playerLevel = int(self.game.player.exp**0.5)+1
text = font.render('Level: '+str(playerLevel), True, (255, 255, 255))
self.screen.blit(text, scaleRect([744, 670], self.screen))
class Pause(Overlay):
def __init__(self, game):
super().__init__()
self.game = game
self.buttons = [
ResumeButton([351, 179, 321, 90]),
OptionsButton([351, 286, 321, 90], "Options"),
MenuButton([351, 393, 321, 90], True),
ExitButton([351, 500, 321, 90], 'Exit to OS')
]
def drawBackgroundLayer(self):
w = self.screen.get_width()
h = self.screen.get_height()
pygame.draw.rect(self.screen, (236, 196, 145), [w//3, h//7, w//3, h//1.55])
pygame.draw.rect(self.screen, (65, 55, 40), [w//3, h//7, w//3, h//1.55], 4)
def drawForegroundLayer(self, mousePos):
super().drawForegroundLayer(mousePos)
w, h = self.screen.get_size()
font = pygame.font.Font('resources/font/main.ttf', 30)
text = font.render('Menu', True, (0, 0, 0))
self.screen.blit(text, [(w-text.get_width())//2, h//7+5])
class Chat(Overlay):
def __init__(self, game, tab='global'):
super().__init__()
self.game = game
self.tab = tab
self.scrollScreen = Scrollbox([804, 438, 110, 90])
self.textarea = TextArea([100, 538, 618, 100], (255, 255, 255, 127))
latest = game.getModInstance('ClientMod').latestChatTabs
if self.tab not in ['local', 'global']+latest:
latest.insert(0, self.tab)
game.getModInstance('ClientMod').latestChatTabs = latest[:3]
self.buttons = [ChatTabButton([720, 540 + 32 * n, 202, 30], name) for n, name in enumerate(latest)]
def drawForegroundLayer(self, mousePos):
hud = self.game.getModInstance('ClientMod').hudOverlay
if self.game.getGUIState() and self.game.getGUIState().isOverlayOpen(hud):
try:
self.game.getGUIState().getOverlay(hud).notifications.delete(self.tab)
            except:
                # the current tab may have no pending notification to clear
                pass
# Fetch the messages from the mod instance
messages = self.game.getModInstance('ClientMod').chatMessages.get(self.tab, [])
# Draw the background rectangle
overlayScreen = pygame.Surface(scaleRect([824, 558], self.screen))
overlayScreen.set_alpha(191)
pygame.draw.rect(overlayScreen, (140, 140, 140), scaleRect([0, 0, 824, 558], self.screen))
pygame.draw.rect(overlayScreen, (170, 170, 170), scaleRect([0, 458, 824, 100], self.screen))
self.screen.blit(overlayScreen, scaleRect([100, 80], self.screen))
self.textarea.draw(self.screen, mousePos)
# Draw the outline boxes
pygame.draw.rect(self.screen, (40, 40, 40), scaleRect([100, 538, 824, 100], self.screen), 4)
pygame.draw.rect(self.screen, (40, 40, 40), scaleRect([100, 80, 824, 558], self.screen), 4)
pygame.draw.rect(self.screen, (40, 40, 40), scaleRect([718, 538, 206, 100], self.screen), 4)
# Generate a font object
fontLarge = pygame.font.Font('resources/font/main.ttf', 20)
# Generate a smaller font object
fontSmall = pygame.font.Font('resources/font/main.ttf', 12)
# Draw the title outline box
title = fontLarge.render(self.tab, True, (0, 0, 0))
# Calculate the leftmost position of the text
leftXPos = (self.screen.get_width() - title.get_width())//2
# Calculate all of the points for the box around the title
pointList = [
[leftXPos - 35, 80],
[leftXPos + title.get_width() + 35, 80],
[leftXPos + 5 + title.get_width(), 50],
[leftXPos - 5, 50]
]
# Fill in the title background shape, then draw the outline around it
pygame.draw.polygon(self.screen, (140, 140, 140), pointList)
pygame.draw.lines(self.screen, (40, 40, 40), True, pointList, 4)
# Lastly, draw the channel title at the top
titlePos = [(self.screen.get_width() - title.get_width())//2, scaleVal(52, self.screen)]
self.screen.blit(title, titlePos)
# Blank out the scrollbox
self.scrollScreen.innerScreen.fill(pygame.Color(127, 127, 127, 0))
# Iterate and blit the messages into the scrollbox
messages = [a for a in messages if '\x00' not in a]
for m, message in enumerate(messages):
text = fontSmall.render(message, True, (0, 0, 0))
self.scrollScreen.blit(text, [0, 15*m])
# Then draw the scrollbox onto the main screen
self.scrollScreen.draw(self.screen, mousePos)
super().drawForegroundLayer(mousePos)
def doKeyPress(self, event):
if event.key == pygame.K_RETURN:
# Adjust the message
message = self.textarea.text
# Skip blank messages
if not message:
return
# Format a non-command as required
if message[0] != '/':
message = '/message '+self.tab+' '+message
# Create the packet
# Send the message
packet = SendCommandPacket(message)
self.game.packetPipeline.sendToServer(packet)
self.textarea.text = ''
# Pass the button press to the textarea
self.textarea.doKeyPress(event)
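The HUD's experience arithmetic (level from the integer square root of `exp`, bar fraction as the offset into the current square band of width `2*L + 1`) can be re-derived outside the class. A sketch, assuming non-negative `exp` values:

```python
def level_and_progress(exp):
    """Mirror of HUD's experience math: band L covers exp values
    [L**2, (L+1)**2), a band of width 2*L + 1, so progress is the
    offset into the band divided by the band width."""
    base = int(exp ** 0.5)            # current band index
    level = base + 1                  # displayed level (1-based)
    progress = (exp - base ** 2) / (2 * base + 1)
    return level, progress
```

Because each band is exactly `2*L + 1` wide, the progress fraction always stays in `[0, 1)` and resets to zero when a new level is reached.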
# === JumpscaleLib/tools/docsite/Doc.py | repo: threefoldtech/jumpscale_lib9 | license: Apache-2.0 ===
from .Link import Link
from jumpscale import j
import toml
import copy
JSBASE = j.application.jsbase_get_class()
class Doc(JSBASE):
"""
"""
def __init__(self, path, name, docsite):
JSBASE.__init__(self)
self.path = path
self.docsite = docsite
self.cat = ""
if "/blogs/" in path or "/blog/" in path:
self.cat = "blog"
if "/defs/" in path or "/def/" in path:
self.cat = "def"
self.path_dir = j.sal.fs.getDirName(self.path)
self.path_dir_rel = j.sal.fs.pathRemoveDirPart(self.path_dir, self.docsite.path).strip("/")
self.name = self._clean(name)
if self.name == "":
raise RuntimeError("name cannot be empty")
self.name_original = name
self.path_rel = j.sal.fs.pathRemoveDirPart(path, self.docsite.path).strip("/")
name_dot = "%s/%s" % (self.path_dir_rel, self.name)
self.name_dot_lower = self._clean("%s/%s" % (self.path_dir_rel, self.name))
# self.markdown_source = ""
# self.show = True
        self.errors = []
        self.show = True
        if j.sal.fs.getDirName(self.path).strip("/").split("/")[-1][0] == "_":
            # means the subdir starts with _
            self.show = False
self._processed = False
self._extension = None
self._data = {} # is all data, from parents as well, also from default data
self._md = None
self._content = None
self._images = []
self._links_external = []
self._links_doc = []
self._links = []
def _clean(self, name):
name = name.replace("/", ".")
name = name.strip(".")
return j.data.text.strip_to_ascii_dense(name)
def _get_file_path_new(self, name="", extension="jpeg"):
nr = 0
if name == "":
name = self.name
dest = "%s/%s.%s" % (self.path_dir, name, extension)
found = j.sal.fs.exists(dest)
while found:
nr += 1
name = "%s_%s" % (name, nr) # to make sure we have a unique name
dest = "%s/%s.%s" % (self.path_dir, name, extension)
fname = "%s.%s" % (name, extension)
found = j.sal.fs.exists(dest) or fname in self.docsite._files
fname = "%s.%s" % (name, extension)
self.docsite._files[fname] = dest
return name, dest
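Note that `_get_file_path_new` rebuilds `name` from the already-suffixed name on every pass, so a second collision yields `name_1_2` rather than `name_2`. A standalone sketch of the collision-avoidance idea that keeps the original base each iteration (a hypothetical variant, not the library's exact behavior):

```python
import os

def unique_path(dir_path, base, extension, taken=frozenset()):
    """Find a name '<base>[_<n>].<extension>' that exists neither on disk
    under dir_path nor in the `taken` set (mirroring docsite._files)."""
    name = base
    nr = 0
    while True:
        fname = '%s.%s' % (name, extension)
        dest = os.path.join(dir_path, fname)
        if not os.path.exists(dest) and fname not in taken:
            return name, dest
        nr += 1
        name = '%s_%s' % (base, nr)  # always suffix the original base
```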
@property
def links(self):
if self._links == []:
self._links_process()
return self._links
@property
def images(self):
if not self._images:
self._links_process()
return self._images
@property
def extension(self):
if not self._extension:
self._extension = j.sal.fs.getFileExtension(self.path)
return self._extension
@property
def title(self):
if "title" in self.data:
return self.data["title"]
else:
self.error_raise("Could not find title in doc.")
def error_raise(self, msg):
return self.docsite.error_raise(msg, doc=self)
def htmlpage_get(self, htmlpage=None):
if htmlpage is None:
htmlpage = j.data.html.page_get()
htmlpage = self.markdown_obj.htmlpage_get(htmlpage=htmlpage, webparts=True)
return htmlpage
def html_get(self, htmlpage=None):
return str(self.htmlpage_get(htmlpage=htmlpage))
@property
def html(self):
return self.html_get()
@property
def data(self):
if self._data == {}:
# look for parts which are data
for part in self.parts_get(cat="data"):
for key, val in part.ddict.items():
print("data update")
if j.data.types.list.check(val):
if key not in self._data:
self._data[key] = []
for subval in val:
if subval not in self._data[key] and subval != "":
self._data[key].append(subval)
else:
self._data[key] = val
# now we have all data from the document itself
keys = [part for part in self.docsite.data_default.keys()]
keys.sort(key=len)
for key in keys:
key = key.strip("/")
if self.path_rel.startswith(key):
data = self.docsite.data_default[key]
self._data_update(data)
print("data process doc")
return self._data
@property
def markdown_obj(self):
if not self._md:
try:
self._md = j.data.markdown.document_get(self.markdown_source)
except Exception as e:
msg = "Could not parse markdown of %s" % self
msg += str(e)
self.error_raise(msg)
self._md = j.data.markdown.document_get(content="```\n%s\n```\n" % msg)
return self._md
def header_get(self, level=1, nr=0):
res = self.markdown_obj.parts_get(cat="header")
if len(res) < 1:
return self.error_raise("header level:%s %s could not be found" % (level, nr))
for header in res:
if header.level == level:
return header
@property
def markdown(self):
"""
markdown after processing of the full doc
"""
self._macros_process()
self._links_process()
try:
res = self.markdown_obj.markdown
except Exception as e:
msg = "Could not parse markdown of %s" % self
msg += str(e)
self.error_raise(msg)
res = msg
if "{{" in res:
            # TODO:*1 rendering does not seem to be completely OK
res = j.tools.jinja2.text_render(text=res, **self.data)
return res
@property
def markdown_source(self):
"""
markdown coming from source
"""
if not self._content:
self._content = j.sal.fs.fileGetContents(self.path)
return self._content
@property
def markdown_clean(self):
# remove the code blocks (comments are already gone)
print('markdown_clean')
from IPython import embed
embed(colors='Linux')
return None
@property
def markdown_clean_summary(self):
c = self.content_clean
lines = c.split("\n")
counter = 0
out = ""
while counter < 20 and counter < len(lines):
line = lines[counter]
counter += 1
if line.strip() == "" and counter > 10:
return out
if len(line) > 0 and line.strip()[0] == "#" and counter > 4:
return out
out += "%s\n" % line
return out
def _data_update(self, data):
res = {}
for key, valUpdate2 in data.items():
# check for the keys not in the self.data yet and add them, the others are done above
if key not in self._data:
self._data[key] = copy.copy(valUpdate2) # needs to be copy.copy otherwise we rewrite source later
def link_get(self, filename=None, cat=None, nr=0, die=True):
"""
@param cat: image, doc,link, officedoc, imagelink #doc is markdown
"""
res = self.links_get(filename=filename, cat=cat)
if len(res) == 0:
if die:
raise RuntimeError("could not find link %s:%s" % (filename, cat))
else:
return None
if nr > len(res):
if die:
raise RuntimeError("could not find link %s:%s at position:%s" % (filename, cat, nr))
else:
return None
return res[nr]
def links_get(self, filename=None, cat=None):
self._links_process()
res = []
for link in self._links:
found = True
if cat is not None and not link.cat == cat:
found = False
if filename is not None and not link.filename.startswith(filename):
found = False
if found:
res.append(link)
return res
def _macros_process(self):
"""
eval the macro
"""
for part in self.parts_get(cat="macro"):
line = part.method
if line.strip() == "":
return self.docsite.error_raise("empty macro cannot be executed", doc=self)
block = part.data
methodcode = line.rstrip(", )") # remove end )
methodcode = methodcode.replace("(", "(self,")
if not methodcode.strip() == line.strip():
# means there are parameters
methodcode += ",content=block)"
else:
methodcode += "(content=block)"
methodcode = methodcode.replace(",,", ",")
if methodcode.strip() == "":
raise RuntimeError("method code cannot be empty")
cmd = "j.tools.docsites.macros." + methodcode
# self.logger.debug(cmd)
# macro = eval(cmd)
try:
macro = eval(cmd) # is the result of the macro which is returned
part.result = macro
except Exception as e:
block = "```python\nERROR IN MACRO*** TODO: *1 ***\ncmd:\n%s\nERROR:\n%s\n```\n" % (cmd, e)
self.logger.error(block)
self.docsite.error_raise(block, doc=self)
part.result = block
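`_macros_process` builds a call string and `eval`s it against `j.tools.docsites.macros`. An alternative that avoids `eval` is an explicit registry, so only whitelisted callables can ever run. A sketch with hypothetical macro names (not the jumpscale API):

```python
# Registry-based macro dispatch: names map to callables, nothing is eval'd.
MACROS = {}

def macro(fn):
    """Decorator registering fn under its own name."""
    MACROS[fn.__name__] = fn
    return fn

@macro
def include(doc, content=''):
    # Illustrative macro body only
    return 'included:%s' % content

def run_macro(doc, line, block):
    """Look the macro up by the name before '(' and call it explicitly."""
    name = line.split('(', 1)[0].strip()
    if name not in MACROS:
        raise RuntimeError('unknown macro: %s' % name)
    return MACROS[name](doc, content=block)
```

Unknown macro names fail loudly instead of evaluating arbitrary code.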
def _links_process(self):
"""
results in:
self._images = []
self._links_external = []
self._links_doc =
"""
if not self._links == []:
return
# check links for internal images
# regex = "!+\[.*\] *\([a-zA-Z0-9\.\-\_\ \/\"]+\)" # find all possible images/links
regex = "!*\[.*\] *\(.*\)"
for match in j.data.regex.yieldRegexMatches(regex, self.markdown_source, flags=0):
self.logger.debug("##:file:link:%s" % match)
link = Link(self, match.founditem)
if not link.link_source == "":
self._links.append(link)
# whats this one?
# regex = "src *= *\" */?static"
# for match in j.data.regex.yieldRegexMatches(regex, self.markdown_source, flags=0):
# self._content = self.markdown_source.replace(match.foundpart, "src = \"/")
def part_get(self, text_to_find=None, cat=None, nr=0, die=True):
"""
return part of markdown document e.g. header
@param cat is: table, header, macro, code, comment1line, comment, block, data, image
@param nr is the one you need to have 0 = first one which matches
@param text_to_find looks into the text
"""
return self.markdown_obj.part_get(text_to_find=text_to_find, cat=cat, nr=nr, die=die)
def parts_get(self, text_to_find=None, cat=None):
"""
@param cat is: table, header, macro, code, comment1line, comment, block, data, image
@param text_to_find looks into the text
"""
return self.markdown_obj.parts_get(text_to_find=text_to_find, cat=cat)
def __repr__(self):
return "doc:%s:%s" % (self.name, self.path)
__str__ = __repr__
# === ground.py | repo: michalovsky/flappy-bird | license: MIT | stars: 1 ===
from images import GROUND_IMAGE
class Ground:
VELOCITY = 5
WIDTH = GROUND_IMAGE.get_width()
IMAGE = GROUND_IMAGE
def __init__(self, y):
self.y = y
self.x1 = 0
self.x2 = self.WIDTH
def move(self):
self.x1 -= self.VELOCITY
self.x2 -= self.VELOCITY
if self.x1 + self.WIDTH < 0:
self.x1 = self.x2 + self.WIDTH
if self.x2 + self.WIDTH < 0:
self.x2 = self.x1 + self.WIDTH
def draw(self, window):
window.blit(self.IMAGE, (self.x1, self.y))
window.blit(self.IMAGE, (self.x2, self.y))
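The two-tile scrolling trick in `Ground.move` keeps the ground seamless: when one copy of the image scrolls fully off the left edge, it jumps to the right edge of the other copy. A pygame-free sketch of the same logic (336 is a plausible stand-in for `GROUND_IMAGE.get_width()`):

```python
GROUND_WIDTH = 336   # stand-in for GROUND_IMAGE.get_width(); pygame not needed
VELOCITY = 5

class ScrollingPair:
    """Two copies of one tile leapfrog each other: when a copy scrolls
    fully off the left edge, it moves to the right edge of the other."""
    def __init__(self):
        self.x1, self.x2 = 0, GROUND_WIDTH

    def move(self):
        self.x1 -= VELOCITY
        self.x2 -= VELOCITY
        if self.x1 + GROUND_WIDTH < 0:
            self.x1 = self.x2 + GROUND_WIDTH
        if self.x2 + GROUND_WIDTH < 0:
            self.x2 = self.x1 + GROUND_WIDTH
```

Two invariants hold after every step: the copies stay exactly one tile width apart, and the left copy always still overlaps the screen's left edge.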
# === LitterFilter/Event.py | repo: mattdbartlett/LitterFilter | license: MIT ===
class EventGenerator(object):
"""
Evaluate event sources when run is called
"""
def __init__(self, stateMachine):
self.__stateMachine = stateMachine
self.__sources = list()
def AddEventSource(self, eventSource):
self.__sources.append(eventSource)
def Run(self):
for eventSource in self.__sources:
if eventSource is not None:
eventSource.Evaluate(self.__stateMachine)
return len(self.__sources) > 0
class EventSource(object):
"""
Base class for event sources
"""
def __init__(self):
pass
    def Evaluate(self, stateMachine):
        pass
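Usage of `EventGenerator` with a source can be sketched with stubs; the classes are re-declared here so the snippet runs standalone (the stub state machine and source are illustrative, not part of the module):

```python
class StateMachine:
    """Minimal stand-in for the real state machine: records events."""
    def __init__(self):
        self.seen = []

class CountingSource:
    """Event source that fires on every evaluation."""
    def Evaluate(self, state_machine):
        state_machine.seen.append('tick')

class EventGenerator:
    def __init__(self, state_machine):
        self.__stateMachine = state_machine
        self.__sources = []

    def AddEventSource(self, source):
        self.__sources.append(source)

    def Run(self):
        # Evaluate all registered sources; report whether any exist
        for source in self.__sources:
            if source is not None:
                source.Evaluate(self.__stateMachine)
        return len(self.__sources) > 0
```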
# === tally_ho/apps/tally/views/reports/races.py | repo: crononauta/tally-ho | license: Apache-2.0 ===
from django.views.generic import TemplateView
from guardian.mixins import LoginRequiredMixin
from tally_ho.libs.views.exports import valid_ballots
from tally_ho.libs.permissions import groups
from tally_ho.libs.reports import progress as p
from tally_ho.libs.views import mixins
class RacesReportView(LoginRequiredMixin,
mixins.GroupRequiredMixin,
TemplateView):
group_required = groups.SUPER_ADMINISTRATOR
template_name = 'reports/races.html'
def get_per_ballot_progress(self):
data = []
tally_id = self.kwargs.get('tally_id')
archived = p.ArchivedProgressReport(tally_id)
for ballot in valid_ballots(tally_id):
archived_result = archived.for_ballot(ballot)
sc = ballot.sub_constituency
if sc:
data.append({
'ballot': ballot.number,
'district': sc.code,
'race_type': ballot.race_type_name,
'expected': archived_result['denominator'],
'complete': archived_result['number'],
'percentage': archived_result['percentage'],
'id': ballot.id,
'active': ballot.active
})
return data
def get(self, *args, **kwargs):
tally_id = kwargs['tally_id']
per_ballot = self.get_per_ballot_progress()
races = len(per_ballot)
completed = sum([1 for x in per_ballot if isinstance(
x['percentage'], float) and x['percentage'] >= 100])
overview = {
'races': races,
'completed': completed,
'percentage': p.rounded_percent(completed, races)
}
return self.render_to_response(
self.get_context_data(
overview=overview,
per_ballot=per_ballot,
tally_id=tally_id))
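The overview aggregation in `get` (count rows whose percentage is a float at or above 100, then express completion as a percentage of all races) can be sketched without Django; `round` stands in for `p.rounded_percent`, whose exact signature lives in the reports library:

```python
def overview_from_ballots(per_ballot):
    """Summarise per-ballot progress rows (shaped like those built in
    get_per_ballot_progress) into the counts the template expects."""
    races = len(per_ballot)
    completed = sum(1 for row in per_ballot
                    if isinstance(row['percentage'], float)
                    and row['percentage'] >= 100)
    # Guard the division so an empty ballot list yields 0, not an error
    percentage = round(100.0 * completed / races, 2) if races else 0
    return {'races': races, 'completed': completed, 'percentage': percentage}
```

The `isinstance` check matters because a ballot with no expected forms can carry a non-numeric percentage placeholder.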
# === scheduler.py | repo: bsautrey/python-mapreduce | license: MIT | stars: 7 ===
# scheduler.py is used to submit, schedule, run jobs on the cluster.
import os,sys,fcntl,subprocess,random,traceback,errno
from time import sleep,time,ctime
import ujson
sys.path.append('/home/ben/code')
sys.path.append('/home/ben/file_transfer')
from manage_cluster import ManageCluster
from file_transfer import FileTransfer
from configs_parser import get_configs
# for running jobs submitted through the Scheduler
class Runner():
def __init__(self):
# configs
configs = get_configs(self.__module__)
self.local_working_dir = configs['local_working_dir']
# shared references
self.manage_cluster = ManageCluster()
self.manage_cluster.start_cluster()
self.file_transfer = FileTransfer()
self.scheduler = Scheduler()
self.finished_jobs = None
def run(self):
self.finished_jobs = set([])
while True:
job = self._get_next_job()
if job:
print 'STARTING JOB FROM SCHEDULER...'
start_time = time()
success,exception = self._run_job(job)
print 'FINISHED JOB FROM SCHEDULER...'
end_time = time()
self._mark_job_as_finished(job,success,exception,start_time,end_time)
else:
sleep(2)
def _get_next_job(self):
all_jobs = self.scheduler._get_jobs()
for job in all_jobs:
job_name = job['job_name']
force_run = job['force_run']
if force_run and job_name in self.finished_jobs:
self.finished_jobs.remove(job_name)
if job_name not in self.finished_jobs:
return job
self.finished_jobs = set([])
return False
def _run_job(self,job):
self._print_job(job)
self._write_current_job(job)
job_type = job['job_type']
if job_type == 'mapreduce':
success = self._run_mapreduce_job(job)
elif job_type == 'script':
success = self._run_script(job)
elif job_type == 'file_transfer':
success = self._run_file_transfer(job)
# note: success is a tuple.
return success
def _print_job(self,job):
print 'RUNNING JOB:'
for key in job:
val = job[key]
print '\t',key+':',val
print '---\n'
def _mark_job_as_finished(self,job,success,exception,start_time,end_time):
job_name = job['job_name']
self.finished_jobs.add(job_name)
self.scheduler._mark_job_as_finished(job,success,exception,start_time,end_time)
def _write_current_job(self,job):
fn = self.local_working_dir + '/CURRENT_JOB.data'
f = open(fn,'w')
s = ujson.dumps(job)
f.write(s)
f.close()
def _run_mapreduce_job(self,job):
# set job parameters and run
try:
self.manage_cluster.run(job)
exception = None
return (True,exception)
except:
exception = traceback.format_exc()
current_phase = self.manage_cluster.current_phase
print exception
return (False,(exception,current_phase))
def _run_script(self,job):
# get script parameters
script_location = job['script_location']
script_arguments = job['script_arguments']
if script_arguments:
script = ['python',script_location] + script_arguments
else:
script = ['python',script_location]
# run script
try:
val = subprocess.check_output(script,shell=False)
exception = None
return (True,exception)
except:
exception = traceback.format_exc()
print exception
return (False,exception)
def _run_file_transfer(self,job):
job_type = job['job_type']
job_name = job['job_name']
job_priority = job['job_priority']
# upload/download
input_dir = job['input_dir']
output_dir = job['output_dir']
transfer_type = job['transfer_type']
reload_files = job['reload_files']
delete_files = job['delete_files']
compress = job['compress']
# upload auxiliary
input_file_name = job['input_file_name']
auxiliary_data_name = job['auxiliary_data_name']
try:
if transfer_type == 'upload':
self.file_transfer.upload(input_dir,output_dir,reload_files)
elif transfer_type == 'upload_bulk':
self.file_transfer.upload_bulk(input_dir,output_dir,reload_files,compress)
elif transfer_type == 'download':
self.file_transfer.download(input_dir,output_dir,delete_files)
elif transfer_type == 'download_bulk':
self.file_transfer.download_bulk(input_dir,output_dir,delete_files)
elif transfer_type == 'upload_auxiliary':
self.file_transfer.upload_auxiliary(input_file_name,auxiliary_data_name)
elif transfer_type == 'delete':
self.file_transfer.delete_files(output_dir)
exception = None
return (True,exception)
except:
exception = traceback.format_exc()
print exception
return (False,exception)
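The if/elif chains in `_run_job` and `_run_file_transfer` can alternatively be expressed as a lookup table, which fails loudly on an unknown job type instead of falling through silently. A sketch in Python 3 with placeholder handler bodies (the real handlers delegate to ManageCluster, subprocess, and FileTransfer):

```python
def run_mapreduce(job):
    return 'mapreduce:%s' % job['job_name']

def run_script(job):
    return 'script:%s' % job['job_name']

def run_file_transfer(job):
    return 'file_transfer:%s' % job['job_name']

# Dispatch table replacing the if/elif chain in Runner._run_job
JOB_HANDLERS = {
    'mapreduce': run_mapreduce,
    'script': run_script,
    'file_transfer': run_file_transfer,
}

def run_job(job):
    try:
        handler = JOB_HANDLERS[job['job_type']]
    except KeyError:
        raise ValueError('unknown job_type: %r' % job['job_type'])
    return handler(job)
```

Adding a new job type then means registering one entry rather than editing a branch in two methods.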
# for controlling workflow for the Runner
class Scheduler():
def __init__(self):
# configs
configs = get_configs(self.__module__)
self.local_working_dir = configs['local_working_dir']
self.job_submitter = os.path.expanduser('~')
# shared references
self.scheduler_token = None
def submit_job(self,job):
job_name = job['job_name']
print 'ATTEMPTING:',job_name
if self._is_correctly_specified(job):
new_jobs = []
existing_jobs = self._get_jobs()
job_name = job['job_name']
for existing_job in existing_jobs:
existing_job_name = existing_job['job_name']
if existing_job_name != job_name:
new_jobs.append(existing_job)
else:
print 'FOUND EXISTING JOB/OVERWRITING:'
for key in existing_job:
print '\t',key,existing_job[key]
new_jobs.append(job)
attempts = 0
while attempts < 100:
if self._lock_scheduler():
fn = self.local_working_dir +'/JOBS.data'
f = open(fn,'w')
for job in new_jobs:
s = ujson.dumps(job)
f.write(s+'\n')
f.close()
self._unlock_scheduler()
print 'ACCEPTED:',job_name
break
else:
attempts = attempts + 1
sleep(random.uniform(0.05,0.10))
else:
print 'JOB MISSPECIFIED/JOB REJECTED:'
for key in job:
print '\t',key,job[key]
print '---\n'
def _is_correctly_specified(self,job):
correct = True
job_priority = job['job_priority']
if job_priority or job_priority == 0:
job_type = job['job_type']
if job_type == 'mapreduce':
job_template = self.get_mapreduce_job_template()
template_keys = set(job_template.keys())
job_keys = set(job.keys())
if template_keys != job_keys:
print 'JOB NOT CREATED WITH TEMPLATE...'
correct = False
return correct
# required
job_name = job['job_name']
project_name = job['project_name']
input_dirs = job['input_dirs']
max_number_dumped_items_shuffler = job['max_number_dumped_items_shuffler']
simultaneous_files_in_redis = job['simultaneous_files_in_redis']
reduce_function_name = job['reduce_function_name']
max_number_dumped_items_reducer = job['max_number_dumped_items_reducer']
if not job_name:
print 'MISSING JOB NAME...'
correct = False
if not project_name:
print 'MISSING PROJECT NAME...'
correct = False
if not input_dirs:
print 'MISSING INPUT DIRS...'
correct = False
if not max_number_dumped_items_shuffler:
print 'MISSING MAX NUMBER DUMPED ITEMS SHUFFLER...'
correct = False
if not simultaneous_files_in_redis:
print 'MISSING SIMULTANEOUS FILES IN REDIS...'
correct = False
if not reduce_function_name:
print 'MISSING REDUCE FUNCTION NAME...'
correct = False
if not max_number_dumped_items_reducer:
print 'MISSING MAX NUMBER DUMPED ITEMS REDUCER...'
correct = False
elif job_type == 'script':
job_template = self.get_script_template()
template_keys = set(job_template.keys())
job_keys = set(job.keys())
if template_keys != job_keys:
print 'JOB NOT CREATED WITH TEMPLATE...'
correct = False
return correct
# required
job_name = job['job_name']
script_location = job['script_location']
if not job_name:
print 'MISSING JOB NAME...'
correct = False
if not script_location:
print 'MISSING SCRIPT LOCATION...'
correct = False
elif job_type == 'file_transfer':
job_template = self.get_file_transfer_template()
template_keys = set(job_template.keys())
job_keys = set(job.keys())
if template_keys != job_keys:
print 'JOB NOT CREATED WITH TEMPLATE...'
correct = False
return correct
# required
job_name = job['job_name']
if not job_name:
print 'MISSING JOB NAME...'
correct = False
transfer_type = job['transfer_type']
if transfer_type == 'upload' or transfer_type == 'download':
# upload/download
input_dir = job['input_dir']
output_dir = job['output_dir']
if not input_dir:
print 'MISSING INPUT DIRS...'
correct = False
if not output_dir:
print 'MISSING OUTPUT DIR...'
correct = False
if transfer_type == 'upload_auxiliary':
# upload auxiliary
input_file_name = job['input_file_name']
auxiliary_data_name = job['auxiliary_data_name']
if not input_file_name:
print 'MISSING INPUT FILE NAME...'
correct = False
if not auxiliary_data_name:
print 'MISSING AUXILIARY DATA NAME...'
correct = False
if transfer_type == 'delete':
output_dir = job['output_dir']
if not output_dir:
print 'MISSING OUTPUT DIR...'
correct = False
else:
print 'MISSING JOB TYPE...'
correct = False
else:
print 'MISSING JOB PRIORITY...'
correct = False
return correct
def delete_job(self,job_name):
new_jobs = []
existing_jobs = self._get_jobs()
for existing_job in existing_jobs:
existing_job_name = existing_job['job_name']
if existing_job_name != job_name:
new_jobs.append(existing_job)
else:
print 'FOUND JOB/DELETING:'
for key in existing_job:
print '\t',key,existing_job[key]
print '---\n'
attempts = 0
while attempts < 100:
if self._lock_scheduler():
fn = self.local_working_dir +'/JOBS.data'
f = open(fn,'w')
for job in new_jobs:
s = ujson.dumps(job)
f.write(s+'\n')
f.close()
self._unlock_scheduler()
break
else:
attempts = attempts + 1
sleep(random.uniform(0.05,0.10))
def _delete_group(self,job_name):
target_job = self._get_job(job_name)
if target_job:
target_group_name = target_job['group_name']
existing_jobs = self._get_jobs()
for existing_job in existing_jobs:
existing_group_name = existing_job['group_name']
if existing_group_name == target_group_name:
existing_job_name = existing_job['job_name']
self.delete_job(existing_job_name)
else:
print 'NO GROUP FOUND FOR JOB:',job_name
def _get_job(self,job_name):
existing_jobs = self._get_jobs()
for existing_job in existing_jobs:
existing_job_name = existing_job['job_name']
if existing_job_name == job_name:
return existing_job
def _get_jobs(self):
jobs = []
fn = self.local_working_dir +'/JOBS.data'
if not os.path.exists(fn):
f = open(fn,'w')
f.close()
temp = []
attempts = 0
while attempts < 100:
if self._lock_scheduler():
f = open(fn)
for l in f:
job = ujson.loads(l)
job_priority = job['job_priority']
temp.append((job_priority,job))
f.close()
self._unlock_scheduler()
break
else:
attempts = attempts + 1
sleep(random.uniform(0.05,0.10))
temp.sort(reverse=True)
for _,job in temp:
jobs.append(job)
return jobs
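`_get_jobs`, `submit_job`, and `delete_job` all wrap file access in the same pattern: try `_lock_scheduler` up to 100 times, sleeping a random 50-100 ms between attempts. The lock implementation is not shown in this excerpt, but the `fcntl` import at the top suggests file locking; a non-blocking `flock` sketch of that pattern, under that assumption:

```python
import fcntl
import random
import time

def try_lock(path):
    """Attempt an exclusive, non-blocking flock on `path`. Returns the open
    file object (which holds the lock) or None if another holder has it."""
    f = open(path, 'w')
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except OSError:
        f.close()
        return None

def with_retries(path, attempts=100):
    """Retry with random jitter, mirroring the loops in Scheduler."""
    for _ in range(attempts):
        lock = try_lock(path)
        if lock is not None:
            return lock
        time.sleep(random.uniform(0.05, 0.10))
    return None
```

Closing the returned file object releases the lock; the jittered sleep keeps competing submitters from retrying in lockstep.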
def _current_job(self):
current_job = self._read_current_job()
for key in current_job:
val = current_job[key]
print key+':',val
def _read_current_job(self):
fn = self.local_working_dir + '/CURRENT_JOB.data'
f = open(fn)
s = f.read()
f.close()
job = ujson.loads(s)
return job
    def _mark_job_as_finished(self, current_job, success, exception, start_time, end_time):
        current_job_name = current_job['job_name']
        if success:
            print 'SUCCESS:', current_job_name
            fn = self.local_working_dir + '/JOBS_SUCCESS.data'
            self._update_runtime(current_job_name, start_time, end_time)
            run_once = current_job['run_once']
            if run_once:
                self.delete_job(current_job_name)
        else:
            fn = self.local_working_dir + '/JOBS_FAILED.data'
            self._delete_group(current_job_name)
        current_job['exception'] = exception
        f = open(fn, 'a')
        s = ujson.dumps(current_job)
        f.write(s + '\n')
        f.close()
    def _update_runtime(self, job_name, start_time, end_time):
        runtime = int(end_time - start_time)
        fn = self.local_working_dir + '/JOBS_RUNTIME.data'
        if not os.path.exists(fn):
            runtimes = {}
            s = ujson.dumps(runtimes)
            f = open(fn, 'w')
            f.write(s)
            f.close()
        else:
            f = open(fn)
            s = f.read()
            f.close()
            runtimes = ujson.loads(s)
        if job_name in runtimes:
            runtimes[job_name].append(runtime)
            random.shuffle(runtimes[job_name])
            runtimes[job_name] = runtimes[job_name][0:50]
        else:
            runtimes[job_name] = [runtime]
        s = ujson.dumps(runtimes)
        f = open(fn, 'w')
        f.write(s)
        f.close()
    def get_mapreduce_job_template(self):
        job = {}
        # job/project
        job['job_submitter'] = self.job_submitter
        job['job_type'] = 'mapreduce'
        job['force_run'] = False
        job['start_time'] = None
        job['end_time'] = None
        job['project_name'] = None
        job['job_name'] = None
        job['group_name'] = None
        job['job_priority'] = None
        job['input_dirs'] = None
        job['delete_job_data'] = True
        job['run_once'] = False
        job['exception'] = None
        job['current_phase'] = None
        # mapper
        job['map_function_name'] = None
        job['auxiliary_data_name_mapper'] = None
        job['hold_state'] = False
        job['downsample'] = 1.0
        # shuffler
        job['max_number_dumped_items_shuffler'] = None  # was 500000
        job['simultaneous_files_in_redis'] = None  # was 10
        # reducer
        job['reduce_function_name'] = None
        job['auxiliary_data_name_reducer'] = None
        job['max_number_dumped_items_reducer'] = None
        job['disk_based_input'] = False
        job['disk_based_output'] = False
        job['compress'] = False
        return job
    def get_script_template(self):
        job = {}
        # job/project
        job['job_submitter'] = self.job_submitter
        job['job_type'] = 'script'
        job['force_run'] = False
        job['start_time'] = None
        job['end_time'] = None
        job['job_name'] = None
        job['group_name'] = None
        job['job_priority'] = None
        job['run_once'] = False
        job['exception'] = None
        # script
        job['script_location'] = None
        job['script_arguments'] = None
        return job
    def get_file_transfer_template(self):
        job = {}
        # job/project
        job['job_submitter'] = self.job_submitter
        job['job_type'] = 'file_transfer'
        job['force_run'] = False
        job['start_time'] = None
        job['end_time'] = None
        job['job_name'] = None
        job['group_name'] = None
        job['job_priority'] = None
        job['job_exception'] = None
        job['run_once'] = False
        job['exception'] = None
        # upload/download
        job['input_dir'] = None
        job['output_dir'] = None
        job['transfer_type'] = None
        job['reload_files'] = True
        job['delete_files'] = True
        job['compress'] = False
        # auxiliary_upload
        job['input_file_name'] = None
        job['auxiliary_data_name'] = None
        return job
    def _lock_scheduler(self):
        scheduler_token = self.local_working_dir + '/scheduler.lock'
        self.scheduler_token = open(scheduler_token, 'a')
        try:
            fcntl.flock(self.scheduler_token, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True
        except IOError as e:
            if e.errno != errno.EAGAIN:
                raise
            else:
                return False

    def _unlock_scheduler(self):
        fcntl.flock(self.scheduler_token, fcntl.LOCK_UN)
        self.scheduler_token.close()
    ''' def _mean(self, numbers):
        s = sum(numbers)
        l = len(numbers)
        mean = int(s/l)
        return mean

    def estimate_next_runtime(self, job_name):
        fn = self.local_working_dir + '/JOBS_RUNTIME.data'
        f = open(fn)
        s = f.read()
        f.close()
        runtimes = ujson.loads(s)
        current_job = self._read_current_job()
        current_job_name = current_job['job_name']
        if current_job_name == job_name:
            print 'JOB IS CURRENTLY RUNNING:', job_name
            return
        index = 0
        job_name_exists = False
        existing_jobs = self._get_jobs()
        for existing_job in existing_jobs:
            existing_job_name = existing_job['job_name']
            if existing_job_name == current_job_name:
                start_index = index
            elif existing_job_name == job_name:
                end_index = index
                job_name_exists = True
            index = index + 1
        if job_name_exists:
            is_current_job = True
            current_time = time()
            estimated_next_runtime = current_time
            if start_index < end_index:
                for index in xrange(start_index, end_index):
                    existing_job = existing_jobs[index]
                    existing_job_name = existing_job['job_name']
                    try:
                        estimated_job_runtime = self._mean(runtimes[existing_job_name])
                        if is_current_job:
                            estimated_next_runtime = estimated_next_runtime + 0.5*estimated_job_runtime
                            is_current_job = False
                        else:
                            estimated_next_runtime = estimated_next_runtime + estimated_job_runtime
                    except KeyError:
                        print 'NOT ENOUGH DATA TO CALCULATE AN ESTIMATE'
                        return
            else:
                l = len(existing_jobs)
                for index in xrange(start_index, l):
                    existing_job = existing_jobs[index]
                    existing_job_name = existing_job['job_name']
                    try:
                        estimated_job_runtime = self._mean(runtimes[existing_job_name])
                        if is_current_job:
                            estimated_next_runtime = estimated_next_runtime + 0.5*estimated_job_runtime
                            is_current_job = False
                        else:
                            estimated_next_runtime = estimated_next_runtime + estimated_job_runtime
                    except KeyError:
                        print 'NOT ENOUGH DATA TO CALCULATE AN ESTIMATE'
                        return
                for index in xrange(0, end_index):
                    existing_job = existing_jobs[index]
                    existing_job_name = existing_job['job_name']
                    try:
                        estimated_job_runtime = self._mean(runtimes[existing_job_name])
                        estimated_next_runtime = estimated_next_runtime + estimated_job_runtime
                    except KeyError:
                        print 'NOT ENOUGH DATA TO CALCULATE AN ESTIMATE'
                        return
            seconds_until_next_run = int(estimated_next_runtime - current_time)
            current_time_string = ctime(current_time)
            estimated_next_runtime_string = ctime(estimated_next_runtime)
            print 'CURRENT TIME:', current_time_string
            print 'ESTIMATED NEXT RUNTIME:', estimated_next_runtime_string
            print 'SECONDS UNTIL NEXT RUN:', seconds_until_next_run
            print 'JOB YET TO RUN:'
            if start_index < end_index:
                for index in xrange(start_index, end_index):
                    existing_job = existing_jobs[index]
                    existing_job_name = existing_job['job_name']
                    estimated_job_runtime = int(self._mean(runtimes[existing_job_name]))
                    print '\t', existing_job_name, estimated_job_runtime, 'seconds...'
                print '---\n'
            else:
                l = len(existing_jobs)
                for index in xrange(start_index, l):
                    existing_job = existing_jobs[index]
                    existing_job_name = existing_job['job_name']
                    estimated_job_runtime = int(self._mean(runtimes[existing_job_name]))
                    print '\t', existing_job_name, estimated_job_runtime, 'seconds...'
                for index in xrange(0, end_index):
                    existing_job = existing_jobs[index]
                    existing_job_name = existing_job['job_name']
                    estimated_job_runtime = int(self._mean(runtimes[existing_job_name]))
                    print '\t', existing_job_name, estimated_job_runtime, 'seconds...'
                print '---\n'
        else:
            print 'JOB DOES NOT EXIST:', job_name'''
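The `_lock_scheduler`/`_unlock_scheduler` pair above implements a retryable, non-blocking advisory lock with `fcntl.flock`. A standalone sketch of the same pattern (the file path here is illustrative, not the scheduler's own):

```python
import errno
import fcntl
import os
import tempfile


def try_lock(path):
    """Return a locked file handle, or None if another process holds the lock."""
    handle = open(path, 'a')
    try:
        # LOCK_NB makes flock fail immediately instead of blocking
        fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return handle
    except IOError as err:
        if err.errno != errno.EAGAIN:
            raise
        handle.close()
        return None


def unlock(handle):
    fcntl.flock(handle, fcntl.LOCK_UN)
    handle.close()


lock_path = os.path.join(tempfile.gettempdir(), 'example-scheduler.lock')
handle = try_lock(lock_path)
print(handle is not None)  # True
unlock(handle)
```

Callers that fail to acquire the lock can sleep a short random interval and retry, exactly as `delete_job` and `_get_jobs` do above.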


# 15. 3Sum/main.py (Competitive-Programmers-Community/LeetCode, MIT)
class Solution:
    def threeSum(self, nums):
        """
        :type nums: List[int]
        :rtype: List[List[int]]
        """
        nums.sort()
        res = []
        for k in range(len(nums) - 2):
            if k > 0 and nums[k] == nums[k - 1]:
                continue
            l = k + 1
            r = len(nums) - 1
            while l < r:
                s = nums[k] + nums[l] + nums[r]
                if s > 0:
                    r = r - 1
                elif s < 0:
                    l = l + 1
                else:
                    res.append([nums[k], nums[l], nums[r]])
                    while l < r and nums[l] == nums[l + 1]:
                        l = l + 1
                    while l < r and nums[r] == nums[r - 1]:
                        r = r - 1
                    l = l + 1
                    r = r - 1
        return res
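A standalone restatement of the same sorted two-pointer 3Sum approach, useful as a quick sanity check (illustrative; it mirrors `Solution.threeSum` but is not part of the original file):

```python
def three_sum(nums):
    """Return all unique triples from nums that sum to zero."""
    nums = sorted(nums)
    res = []
    for k in range(len(nums) - 2):
        if k > 0 and nums[k] == nums[k - 1]:
            continue  # skip duplicate anchor values
        l, r = k + 1, len(nums) - 1
        while l < r:
            s = nums[k] + nums[l] + nums[r]
            if s > 0:
                r -= 1
            elif s < 0:
                l += 1
            else:
                res.append([nums[k], nums[l], nums[r]])
                while l < r and nums[l] == nums[l + 1]:
                    l += 1  # skip duplicate left values
                while l < r and nums[r] == nums[r - 1]:
                    r -= 1  # skip duplicate right values
                l += 1
                r -= 1
    return res


print(three_sum([-1, 0, 1, 2, -1, -4]))  # [[-1, -1, 2], [-1, 0, 1]]
```

The duplicate-skipping loops are what keep repeated input values from producing repeated triples.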


# psono/restapi/serializers/share_right_accept.py (dirigeant/psono-server, Apache-2.0/CC0-1.0)
from django.utils.translation import ugettext_lazy as _
from rest_framework import serializers, exceptions

from ..fields import UUIDField
from ..models import User_Share_Right


class ShareRightAcceptSerializer(serializers.Serializer):
    share_right_id = UUIDField(required=True)
    key = serializers.CharField(max_length=256, required=False)
    key_type = serializers.CharField(max_length=256, required=False, default='symmetric')
    key_nonce = serializers.CharField(max_length=64, required=False)

    def validate(self, attrs: dict) -> dict:
        share_right_id = attrs.get('share_right_id')
        key_type = attrs.get('key_type')

        if key_type not in ['asymmetric', 'symmetric']:
            msg = _("Invalid Key Type")
            raise exceptions.ValidationError(msg)

        try:
            user_share_right_obj = User_Share_Right.objects.get(pk=share_right_id, user=self.context['request'].user, accepted=None)
        except User_Share_Right.DoesNotExist:
            msg = _("You don't have permission to access it or it does not exist or you already accepted or declined this share.")
            raise exceptions.ValidationError(msg)

        attrs['user_share_right_obj'] = user_share_right_obj

        return attrs


# nb_cli/prompts/input.py (cdlaimin/nb-cli, MIT)
from typing import Callable, Optional
from prompt_toolkit.styles import Style
from prompt_toolkit.buffer import Buffer
from prompt_toolkit.layout import Layout
from prompt_toolkit.lexers import SimpleLexer
from prompt_toolkit.application import get_app
from prompt_toolkit.enums import DEFAULT_BUFFER
from prompt_toolkit.validation import Validator
from prompt_toolkit.layout.controls import BufferControl
from prompt_toolkit.formatted_text import AnyFormattedText
from prompt_toolkit.layout.containers import HSplit, Window
from prompt_toolkit.key_binding import KeyBindings, KeyPressEvent

from . import NoAnswer, BasePrompt


class InputPrompt(BasePrompt[str]):
    """Simple Input Prompt.

    Style class guide:

    ```
    [?] Choose a choice and return? answer
    └┬┘ └──────────────┬──────────┘ └──┬─┘
    questionmark    question        answer
    ```
    """

    def __init__(
        self,
        question: str,
        question_mark: str = "[?]",
        validator: Optional[Callable[[str], bool]] = None,
    ):
        self.question: str = question
        self.question_mark: str = question_mark
        self.validator: Optional[Callable[[str], bool]] = validator

    def _reset(self):
        self._answered: bool = False
        self._buffer: Buffer = Buffer(
            name=DEFAULT_BUFFER,
            validator=Validator.from_callable(self.validator)
            if self.validator
            else None,
            accept_handler=self._submit,
        )

    def _build_layout(self) -> Layout:
        self._reset()
        layout = Layout(
            HSplit(
                [
                    Window(
                        BufferControl(
                            self._buffer, lexer=SimpleLexer("class:answer")
                        ),
                        dont_extend_height=True,
                        get_line_prefix=self._get_prompt,
                    )
                ]
            )
        )
        return layout

    def _build_style(self, style: Style) -> Style:
        default = Style(
            [
                ("questionmark", "fg:#5F819D"),
                ("question", "bold"),
                ("answer", "fg:#5F819D"),
            ]
        )
        return Style([*default.style_rules, *style.style_rules])

    def _build_keybindings(self) -> KeyBindings:
        kb = KeyBindings()

        @kb.add("enter", eager=True)
        def enter(event: KeyPressEvent):
            self._buffer.validate_and_handle()

        @kb.add("c-c", eager=True)
        @kb.add("c-q", eager=True)
        def quit(event: KeyPressEvent):
            event.app.exit(result=NoAnswer)

        return kb

    def _get_prompt(
        self, line_number: int, wrap_count: int
    ) -> AnyFormattedText:
        return [
            ("class:questionmark", self.question_mark),
            ("", " "),
            ("class:question", self.question.strip()),
            ("", " "),
        ]

    def _submit(self, buffer: Buffer) -> bool:
        self._answered = True
        get_app().exit(result=buffer.document.text)
        return True


# vspace_utils/templatetags/next_previous.py (visualspace/django-vspace-utils, BSD-3-Clause)
"""
Efficient and generic get next/previous tags for the Django template language,
using Alex Gaynor's excellent templatetag_sugar library.
The library can be found at: http://pypi.python.org/pypi/django-templatetag-sugar
Usage:

    {% load next_previous %}
    ...
    {% get_next in <queryset> after <object> as <next> %}
    {% get_previous in <queryset> before <object> as <previous> %}
Initially published here: https://gist.github.com/1004216
"""
from django import template

register = template.Library()

from templatetag_sugar.register import tag
from templatetag_sugar.parser import Constant, Variable, Name

from .utils import get_next_or_previous


@tag(register, [Constant("in"), Variable(), Constant("after"), Variable(), Constant("as"), Name()])
def get_next(context, queryset, item, asvar):
    context[asvar] = get_next_or_previous(queryset, item, next=True)
    return ""


@tag(register, [Constant("in"), Variable(), Constant("before"), Variable(), Constant("as"), Name()])
def get_previous(context, queryset, item, asvar):
    context[asvar] = get_next_or_previous(queryset, item, next=False)
    return ""


# tests/fixtures.py (Apkawa/django-modeltranslation-rosetta, MIT)
# coding: utf-8
from __future__ import unicode_literals

import factory


class ArticleFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = 'tests.Article'

    title = factory.Faker('sentence')
    body = factory.Faker('text')


# IMU/VTK-6.2.0/IO/Geometry/Testing/Python/motor.py (timkrentz/SunTracker, MIT)
#!/usr/bin/env python
import vtk
from vtk.test import Testing
from vtk.util.misc import vtkGetDataRoot
VTK_DATA_ROOT = vtkGetDataRoot()
def GetRGBColor(colorName):
    '''
    Return the red, green and blue components for a
    color as doubles.
    '''
    rgb = [0.0, 0.0, 0.0]  # black
    vtk.vtkNamedColors().GetColorRGB(colorName, rgb)
    return rgb
# Create the RenderWindow, Renderer and both Actors
#
ren1 = vtk.vtkRenderer()
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(ren1)
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renWin)
# create cutting planes
planes = vtk.vtkPlanes()
points = vtk.vtkPoints()
norms = vtk.vtkFloatArray()
norms.SetNumberOfComponents(3)
points.InsertPoint(0, 0.0, 0.0, 0.0)
norms.InsertTuple3(0, 0.0, 0.0, 1.0)
points.InsertPoint(1, 0.0, 0.0, 0.0)
norms.InsertTuple3(1, -1.0, 0.0, 0.0)
planes.SetPoints(points)
planes.SetNormals(norms)
# texture
texReader = vtk.vtkStructuredPointsReader()
texReader.SetFileName(VTK_DATA_ROOT + "/Data/texThres2.vtk")
texture = vtk.vtkTexture()
texture.SetInputConnection(texReader.GetOutputPort())
texture.InterpolateOff()
texture.RepeatOff()
# read motor parts...each part colored separately
#
byu = vtk.vtkBYUReader()
byu.SetGeometryFileName(VTK_DATA_ROOT + "/Data/motor.g")
byu.SetPartNumber(1)
normals = vtk.vtkPolyDataNormals()
normals.SetInputConnection(byu.GetOutputPort())
tex1 = vtk.vtkImplicitTextureCoords()
tex1.SetInputConnection(normals.GetOutputPort())
tex1.SetRFunction(planes)
# tex1.FlipTextureOn()
byuMapper = vtk.vtkDataSetMapper()
byuMapper.SetInputConnection(tex1.GetOutputPort())
byuActor = vtk.vtkActor()
byuActor.SetMapper(byuMapper)
byuActor.SetTexture(texture)
byuActor.GetProperty().SetColor(GetRGBColor('cold_grey'))
byu2 = vtk.vtkBYUReader()
byu2.SetGeometryFileName(VTK_DATA_ROOT + "/Data/motor.g")
byu2.SetPartNumber(2)
normals2 = vtk.vtkPolyDataNormals()
normals2.SetInputConnection(byu2.GetOutputPort())
tex2 = vtk.vtkImplicitTextureCoords()
tex2.SetInputConnection(normals2.GetOutputPort())
tex2.SetRFunction(planes)
# tex2.FlipTextureOn()
byuMapper2 = vtk.vtkDataSetMapper()
byuMapper2.SetInputConnection(tex2.GetOutputPort())
byuActor2 = vtk.vtkActor()
byuActor2.SetMapper(byuMapper2)
byuActor2.SetTexture(texture)
byuActor2.GetProperty().SetColor(GetRGBColor('peacock'))
byu3 = vtk.vtkBYUReader()
byu3.SetGeometryFileName(VTK_DATA_ROOT + "/Data/motor.g")
byu3.SetPartNumber(3)
triangle3 = vtk.vtkTriangleFilter()
triangle3.SetInputConnection(byu3.GetOutputPort())
normals3 = vtk.vtkPolyDataNormals()
normals3.SetInputConnection(triangle3.GetOutputPort())
tex3 = vtk.vtkImplicitTextureCoords()
tex3.SetInputConnection(normals3.GetOutputPort())
tex3.SetRFunction(planes)
# tex3.FlipTextureOn()
byuMapper3 = vtk.vtkDataSetMapper()
byuMapper3.SetInputConnection(tex3.GetOutputPort())
byuActor3 = vtk.vtkActor()
byuActor3.SetMapper(byuMapper3)
byuActor3.SetTexture(texture)
byuActor3.GetProperty().SetColor(GetRGBColor('raw_sienna'))
byu4 = vtk.vtkBYUReader()
byu4.SetGeometryFileName(VTK_DATA_ROOT + "/Data/motor.g")
byu4.SetPartNumber(4)
normals4 = vtk.vtkPolyDataNormals()
normals4.SetInputConnection(byu4.GetOutputPort())
tex4 = vtk.vtkImplicitTextureCoords()
tex4.SetInputConnection(normals4.GetOutputPort())
tex4.SetRFunction(planes)
# tex4.FlipTextureOn()
byuMapper4 = vtk.vtkDataSetMapper()
byuMapper4.SetInputConnection(tex4.GetOutputPort())
byuActor4 = vtk.vtkActor()
byuActor4.SetMapper(byuMapper4)
byuActor4.SetTexture(texture)
byuActor4.GetProperty().SetColor(GetRGBColor('banana'))
byu5 = vtk.vtkBYUReader()
byu5.SetGeometryFileName(VTK_DATA_ROOT + "/Data/motor.g")
byu5.SetPartNumber(5)
normals5 = vtk.vtkPolyDataNormals()
normals5.SetInputConnection(byu5.GetOutputPort())
tex5 = vtk.vtkImplicitTextureCoords()
tex5.SetInputConnection(normals5.GetOutputPort())
tex5.SetRFunction(planes)
# tex5.FlipTextureOn()
byuMapper5 = vtk.vtkDataSetMapper()
byuMapper5.SetInputConnection(tex5.GetOutputPort())
byuActor5 = vtk.vtkActor()
byuActor5.SetMapper(byuMapper5)
byuActor5.SetTexture(texture)
byuActor5.GetProperty().SetColor(GetRGBColor('peach_puff'))
# Add the actors to the renderer, set the background and size
#
ren1.AddActor(byuActor)
ren1.AddActor(byuActor2)
ren1.AddActor(byuActor3)
byuActor3.VisibilityOff()
ren1.AddActor(byuActor4)
ren1.AddActor(byuActor5)
ren1.SetBackground(1, 1, 1)
renWin.SetSize(300, 300)
camera = vtk.vtkCamera()
camera.SetFocalPoint(0.0286334, 0.0362996, 0.0379685)
camera.SetPosition(1.37067, 1.08629, -1.30349)
camera.SetViewAngle(17.673)
camera.SetClippingRange(1, 10)
camera.SetViewUp(-0.376306, -0.5085, -0.774482)
ren1.SetActiveCamera(camera)
# render the image
iren.Initialize()
#iren.Start()


# Liver_disease/liver_prediction.py (R3DDY97/kaggle_kernels, MIT)
#!/usr/bin/env python3
import pandas as pd
# import numpy as np
from sklearn import (svm, preprocessing)
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import (recall_score, precision_score, accuracy_score, confusion_matrix,)
# precision_recall_curve, auc, roc_auc_score, roc_curve, recall_score, classification_report)
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# load and preprocess data
DATA = "/home/reddy/Documents/AI_ML_DL/2_Kaggle/Liver_disease/indian_liver_patient.csv"


def liver_data():
    data = pd.read_csv(DATA)
    # data.info()
    data.head()
    data.tail()
    data.describe()
    # data_bk = data.copy()
    # data_nan = data[data.isna().any(axis=1)]  # rows having NaN
    # nan_rows = list(data_nan.index)
    # data_types = data.dtypes
    # print(data_types)
    # print("Rows having NaN/missing values are {}".format(nan_rows))
    # data.groupby("Dataset").size()
    # data.groupby("Gender").size()
    # max_index = data.iloc[:, 2:-1].idxmax(skipna=True)
    # data = data.dropna(axis=0, how='any', inplace=True).replace("Male", 1).replace("Female", 0)
    # features = data.columns.tolist()
    features = ['Age',
                'Gender',
                'Total_Bilirubin',
                'Direct_Bilirubin',
                'Alkaline_Phosphotase',
                'Alamine_Aminotransferase',
                'Aspartate_Aminotransferase',
                'Total_Protiens',
                'Albumin',
                'Albumin_and_Globulin_Ratio', ]

    # data.drop([features[i] for i in [3, 5, 8]], axis=1, inplace=True)
    data["Gender"] = data["Gender"].map({"Male": 1, "Female": 0})
    data.dropna(axis=0, how='any', inplace=True)
    # data.fillna(data['Albumin_and_Globulin_Ratio'].mean(), inplace=True)
    data["Dataset"].value_counts()

    mldata = data.drop("Dataset", axis=1)
    labels = data["Dataset"].map({1: 1, 2: 0})
    # mldata = data.iloc[:, :-1].values
    # labels = data.iloc[:, -1].replace(2, 0).values
    # mldata = data.iloc[:, [2, 3, 4, 5, 6, 7]].values
    # mldata = data.iloc[:, [0, 2, 3, 4, 5, 6, 7, 8, 9]].values  # removed gender

    # classifier = svm.SVC()
    # classifier = RandomForestClassifier(random_state=0)
    classifier = LogisticRegression(multi_class='multinomial', solver='newton-cg')
    classify_data(classifier, mldata, labels)


def classify_data(classifier, mldata, labels):
    # preprocessing data using sklearn
    data_variables = train_test_split(mldata, labels, test_size=0.2, random_state=970)
    train_data, test_data, train_label, test_label = data_variables

    scaler = preprocessing.StandardScaler().fit(train_data)
    train_data_scaled = scaler.transform(train_data)
    test_data_scaled = scaler.transform(test_data)

    # SVM classifier
    # classifier = svm.SVC(random_state=0)
    # classifier.fit(train_data, train_label)
    # predict_y = classifier.predict(test_data)
    # acc_test = classifier.score(test_data, test_label)
    # print(acc_test)

    # Random Forest classifier
    # classifier = RandomForestClassifier(min_samples_split=4)
    # classifier = RandomForestClassifier(min_samples_split=4, criterion="entropy")
    # classifier = RandomForestClassifier(random_state=0)

    classifier.fit(train_data_scaled, train_label)
    predict_y = classifier.predict(test_data_scaled)
    accuracy = classifier.score(test_data_scaled, test_label)
    # accuracy = accuracy_score(test_label, predict_y)
    precision = precision_score(test_label, predict_y)
    recall = recall_score(test_label, predict_y)
    cmatrix = confusion_matrix(test_label, predict_y)

    print(accuracy)
    print(precision)
    print(recall)
    print(cmatrix)


if __name__ == '__main__':
    liver_data()


# netspot/nm_helper.py (MaxIV-KitsControls/netspot, MIT)
#!/usr/bin/python -tt
"""NetMagis DB helper."""
import psycopg2
import netspot_settings
class NetMagisDB(object):
"""NetMagis DB helper class."""
def __init__(self):
"""Init."""
self.database = netspot_settings.NM_DATABASE
self.username = netspot_settings.NM_USERNAME
self.password = netspot_settings.NM_PASSWORD
self.db_server = netspot_settings.NM_SERVER
self.cursor = None
self.conn = None
def query(self, sql):
"""Query database.
Args:
sql: string, SQL query
Returns:
SQL result dict
"""
self.cursor.execute(sql)
return self.cursor.fetchall()
def __enter__(self):
"""Enter."""
# Open connection to DB server
self.conn = psycopg2.connect(dbname=self.database,
user=self.username,
host=self.db_server,
password=self.password)
# Create cursor
self.cursor = self.conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
return self
def __exit__(self, ex_type, ex_value, traceback):
"""Exit."""
self.conn.close()
def search(self, search):
"""Search function for NetMagis.
Args:
search: string, search key word
Returns:
NetMagis serach result, list
"""
if search:
sql = """SELECT * FROM dns.rr_ip
RIGHT JOIN dns.rr
ON dns.rr_ip.idrr=dns.rr.idrr
WHERE dns.rr.name LIKE '%{0}%' OR
TEXT(dns.rr_ip.addr) LIKE '%{0}%' OR
TEXT(dns.rr.mac) LIKE LOWER('%{0}%');""".format(search)
result = self.query(sql)
else:
result = []
return result
def get_arecord(self, iddr):
"""Find DNS A records from a NetMagis IDDR.
Args:
iddr: string, NetMagis IDDR
Returns:
arecord: SQL record with A record.
"""
sql = """SELECT * FROM dns.rr
RIGHT JOIN dns.rr_cname
ON dns.rr.idrr=dns.rr_cname.cname
RIGHT JOIN dns.rr_ip
ON dns.rr_ip.idrr=dns.rr.idrr
WHERE dns.rr_cname.idrr = '{0}';""".format(iddr)
return self.query(sql)
def main():
"""Main."""
pass
if __name__ == '__main__':
main()
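`NetMagisDB` relies on the context-manager protocol (`__enter__`/`__exit__`) to open and close its connection around a `with` block. A minimal standalone sketch of that protocol, using a dummy connection in place of a real PostgreSQL server (names here are illustrative):

```python
class DummyDB(object):
    """Stand-in for NetMagisDB: opens nothing, but follows the same protocol."""

    def __init__(self):
        self.closed = False

    def __enter__(self):
        # NetMagisDB opens its psycopg2 connection and cursor here
        return self

    def __exit__(self, ex_type, ex_value, traceback):
        # NetMagisDB closes its connection here
        self.closed = True


with DummyDB() as db:
    # connection is "open" inside the with-block
    assert db.closed is False

print(db.closed)  # True
```

`__exit__` runs even if the body raises, which is what guarantees the connection is closed on error paths.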


# practicalnlp/settings.py (paulomann/practical-nlp-pytorch, MIT)
from os.path import dirname, join
ROOT = dirname(dirname(__file__))
DATA = join(ROOT, 'data')
TRAIN_DATA = join(DATA, 'sst2', 'stsa.binary.phrases.train')
VALIDATION_DATA = join(DATA, 'sst2', 'stsa.binary.dev')
TEST_DATA = join(DATA, 'sst2', 'stsa.binary.test')
PRETRAINED_EMBEDDINGS_FILE = join(DATA, 'GoogleNews-vectors-negative300.bin')
CHECKPOINT_PATH = join(ROOT, "models")
WIKI_TEST_DATA = join(DATA, "wikitext-2", "wiki.test.tokens")
WIKI_VALID_DATA = join(DATA, "wikitext-2", "wiki.valid.tokens")
WIKI_TRAIN_DATA = join(DATA, "wikitext-2", "wiki.train.tokens")
a4958cb92424a6e7c9c549c160c10cca38cc0e8c | 1,146 | py | Python | bbs/decorators.py | weijiang1994/bbs-admin-backend | a703500ed155fd59cc7dc8843d68238efc69a07f | [
"Apache-2.0"
] | 4 | 2022-01-21T07:06:48.000Z | 2022-03-02T10:47:55.000Z | bbs/decorators.py | weijiang1994/bbs-admin-backend | a703500ed155fd59cc7dc8843d68238efc69a07f | [
"Apache-2.0"
] | 3 | 2022-02-21T16:00:11.000Z | 2022-02-24T09:29:14.000Z | bbs/decorators.py | weijiang1994/bbs-admin-backend | a703500ed155fd59cc7dc8843d68238efc69a07f | [
"Apache-2.0"
] | 1 | 2022-03-31T07:54:59.000Z | 2022-03-31T07:54:59.000Z | """
# coding:utf-8
@Time : 2021/12/06
@Author : jiangwei
@File : decorators.py
@Desc : decorators
@email : qq804022023@gmail.com
@Software: PyCharm
"""
from flask import request, jsonify
from bbs.setting import basedir
import os
from functools import wraps
def track_error(func):
"""
track running error
:param func: decorated function
:return: result
"""
@wraps(func)
def wrapper(*args, **kwargs):
try:
return func(*args, **kwargs)
except Exception as e:
import traceback
traceback.print_exc()
return jsonify(
code=500,
                msg='服务器内部错误!'  # "Internal server error!"
)
return wrapper
def check_json(func):
"""
Check whether the request data contains JSON
:param func: decorated function (view function)
:return: check result
"""
@wraps(func)
def wrapper(*args, **kwargs):
        if not isinstance(request.json, dict):
return jsonify(
code=422,
                msg='错误的请求数据格式!'  # "Invalid request data format!"
)
return func(*args, **kwargs)
return wrapper
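Both decorators share the same wrap-and-guard shape. A minimal stand-in keeps that pattern but drops the Flask dependency so it runs anywhere (the dict payloads and the `divide` function are illustrative, not part of the project):

```python
from functools import wraps

def track_error(func):
    # Same shape as the decorator above, but returning a plain dict
    # instead of a Flask jsonify() response.
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            return {"code": 500, "msg": "internal server error"}
    return wrapper

@track_error
def divide(a, b):
    return {"code": 200, "msg": a / b}

print(divide(10, 2))  # {'code': 200, 'msg': 5.0}
print(divide(1, 0))   # {'code': 500, 'msg': 'internal server error'}
```

`@wraps` keeps `divide.__name__` intact, which matters when a framework registers the wrapper as a view function.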
| 22.038462 | 62 | 0.572426 | 125 | 1,146 | 5.224 | 0.56 | 0.061256 | 0.05513 | 0.079632 | 0.107198 | 0.107198 | 0.107198 | 0 | 0 | 0 | 0 | 0.031088 | 0.326353 | 1,146 | 51 | 63 | 22.470588 | 0.814767 | 0.294939 | 0 | 0.37037 | 0 | 0 | 0.023873 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0 | 0.185185 | 0 | 0.555556 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a495964d82d25e210cda079c174cec9fcd420d1c | 2,447 | py | Python | Tools/extract-sfc.py | Navasnaz/mib2-toolbox | 732f859d0dbb94dcf5c0d8388c959b7389a4c4f0 | [
"MIT"
] | 339 | 2019-09-18T21:46:50.000Z | 2022-03-31T07:50:04.000Z | Tools/extract-sfc.py | Navasnaz/mib2-toolbox | 732f859d0dbb94dcf5c0d8388c959b7389a4c4f0 | [
"MIT"
] | 188 | 2019-09-19T23:09:49.000Z | 2022-03-30T20:21:34.000Z | Tools/extract-sfc.py | Navasnaz/mib2-toolbox | 732f859d0dbb94dcf5c0d8388c959b7389a4c4f0 | [
"MIT"
] | 115 | 2019-09-19T19:49:15.000Z | 2022-03-12T21:10:00.000Z | # ----------------------------------------------------------
# --- Quick 'n' dirty CFF file extractor
#
# File: extract-sfc.py
# Author: Jille
# Revision: 1
# Purpose: MIB2 sfc file exporter
# Comments: Usage: extract-sfc.py <filename> <outdir>
# Changelog: First version
# ----------------------------------------------------------
import struct
import sys
import os
import zlib
if sys.version_info[0] < 3:
sys.exit("You need to run this with Python 3")
try:
from PIL import Image
except ImportError:
sys.exit(" You are missing the PIL module!\n"
" install it by running: \n"
" pip install image")
if len(sys.argv) != 3:
print("usage: extract-sfc.py <filename> <outdir>")
sys.exit(1)
out_dir = sys.argv[2]
if not os.path.exists(out_dir):
os.mkdir(out_dir)
def mkdir_path(path):
if not os.access(path, os.F_OK):
os.mkdir(path)
if not os.path.exists(sys.argv[1]):
print("%s not found" % (sys.argv[1]))
    sys.exit(1)
data = open(sys.argv[1], 'rb').read() # Open File with path in sys.argv[1] in mode 'r' reading and 'b' binary mode
offset = 0
counterRGBA = 0
counterL = 0
counterP = 0
offset = 16
(num_files,) = struct.unpack_from('<I', data, offset)
print("Number of files: \t %d" % (num_files))
offset = offset + 4 # offset 20
i = 0
offset_array = []
size_array = []
# go through the entire table of contents to get all paths and offsets
print("id \t offset \t unknown\t size")
while (i < num_files):
(id, unknown1, start_offset, size) = struct.unpack_from('<IIII', data, offset)
offset_array.append(start_offset)
size_array.append(size)
# go on to the next offset
offset = offset + 16
#print("%d %10x %10x %10s " % (i, start_offset, unknown1, size))
i = i + 1
j = 0
print("Extracting files...")
while (j < num_files):
offset = offset_array[j]
size = size_array[j]
file_data = data[offset:offset + size]
file_header = data[offset:offset + 4]
if file_header == b'\x89PNG':
extension = ".png"
else:
extension = ".bin"
# create path
    folder = out_dir
    if not os.path.exists(folder):
        os.makedirs(folder)
    file = os.path.join(folder, "file_" + str(j) + extension)
print("Extracting", file)
output_file = open(file, "wb+")
# read data at offset
output_file.write(file_data)
output_file.close()
j = j + 1
print("Done")
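The table-of-contents walk above boils down to reading fixed-size little-endian records with `struct.unpack_from`. A self-contained sketch (the record values are invented):

```python
import struct

# Two fake (id, unknown, offset, size) records packed the same way the
# SFC table of contents stores them: '<IIII', 16 bytes per entry.
records = [(0, 0, 32, 4), (1, 0, 36, 8)]
data = b"".join(struct.pack("<IIII", *r) for r in records)

entries = []
offset = 0
for _ in range(len(records)):
    entries.append(struct.unpack_from("<IIII", data, offset))
    offset += 16  # each record is exactly 16 bytes

print(entries)  # [(0, 0, 32, 4), (1, 0, 36, 8)]
```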
| 23.304762 | 115 | 0.591336 | 351 | 2,447 | 4.037037 | 0.390313 | 0.05928 | 0.01976 | 0.023289 | 0.079746 | 0.043754 | 0 | 0 | 0 | 0 | 0 | 0.02069 | 0.229669 | 2,447 | 104 | 116 | 23.528846 | 0.731034 | 0.251328 | 0 | 0 | 0 | 0 | 0.16097 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015625 | false | 0 | 0.09375 | 0 | 0.109375 | 0.109375 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4977a487dfb53fc9bdd74cbde790e28018c0ba6 | 1,344 | py | Python | crusoe_observe/cve-connector/cve_connector/vendor_cve/implementation/parsers/general_and_format_parsers/general_parser.py | CSIRT-MU/CRUSOE | 73e4ac0ced6c3ac46d24ac5c3feb01a1e88bd36b | [
"MIT"
] | 3 | 2021-11-09T09:55:17.000Z | 2022-02-19T02:58:27.000Z | crusoe_observe/cve-connector/cve_connector/vendor_cve/implementation/parsers/general_and_format_parsers/general_parser.py | CSIRT-MU/CRUSOE | 73e4ac0ced6c3ac46d24ac5c3feb01a1e88bd36b | [
"MIT"
] | null | null | null | crusoe_observe/cve-connector/cve_connector/vendor_cve/implementation/parsers/general_and_format_parsers/general_parser.py | CSIRT-MU/CRUSOE | 73e4ac0ced6c3ac46d24ac5c3feb01a1e88bd36b | [
"MIT"
] | null | null | null | """Module contains superclass for parsers."""
from abc import ABC, abstractmethod
from datetime import date, timedelta
from html_table_extractor.extractor import Extractor
from lxml.html import tostring
class GeneralParser(ABC):
"""
Superclass for parsers.
"""
def __init__(self, url, from_date=None, to_date=None):
self.url = url
self.data = None
self.to_date = to_date if to_date else date.today()
self.from_date = from_date if from_date else date.today() - timedelta(days=1)
self.date_format = '%Y/%m/%d'
self.entities = []
@abstractmethod
def load_content(self):
"""
Loads content from URL.
:return:
"""
pass
@abstractmethod
def parse(self):
"""
Parses loaded content.
:return:
"""
pass
@staticmethod
def parse_table(table):
"""
Parse table got as an input.
:param table: input table
:return: parsed table as a tuple: header, rows
"""
unicode = str
table_string = tostring(table, encoding=unicode)
extractor = Extractor(table_string)
extractor.parse()
table_rows_list = extractor.return_list()
table_header = table_rows_list.pop(0)
return table_header, table_rows_list
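A throwaway subclass shows how the abstract contract is meant to be used. The base class is re-declared here in trimmed form so the sketch runs on its own (the URL is invented):

```python
from abc import ABC, abstractmethod
from datetime import date, timedelta

class GeneralParser(ABC):
    def __init__(self, url, from_date=None, to_date=None):
        self.url = url
        self.to_date = to_date if to_date else date.today()
        self.from_date = from_date if from_date else date.today() - timedelta(days=1)
        self.entities = []

    @abstractmethod
    def parse(self):
        pass

class DummyParser(GeneralParser):
    def parse(self):
        # A real subclass would fill self.entities from self.url.
        self.entities.append(self.url)
        return self.entities

parser = DummyParser("https://example.org/advisories")
print(parser.parse())  # ['https://example.org/advisories']
```

Instantiating `GeneralParser` directly raises `TypeError`, which is how `ABC` enforces that every concrete parser implements its abstract methods.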
| 25.846154 | 85 | 0.611607 | 158 | 1,344 | 5.025316 | 0.392405 | 0.040302 | 0.049118 | 0.042821 | 0.060453 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002114 | 0.296131 | 1,344 | 51 | 86 | 26.352941 | 0.837209 | 0.171875 | 0 | 0.148148 | 0 | 0 | 0.008073 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0.074074 | 0.148148 | 0 | 0.37037 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
a497d79fdef45fad56951301307fe17492fbee45 | 2,651 | py | Python | core/src/cgcloud/core/apache.py | ompcloud/cgcloud | ec97c3e6df2df549ebf45c69f16fb6d118877d9c | [
"Apache-2.0"
] | 24 | 2015-07-27T02:44:30.000Z | 2022-02-02T10:37:25.000Z | core/src/cgcloud/core/apache.py | ompcloud/cgcloud | ec97c3e6df2df549ebf45c69f16fb6d118877d9c | [
"Apache-2.0"
] | 243 | 2015-05-29T18:39:08.000Z | 2018-07-17T19:42:28.000Z | core/src/cgcloud/core/apache.py | ompcloud/cgcloud | ec97c3e6df2df549ebf45c69f16fb6d118877d9c | [
"Apache-2.0"
] | 22 | 2015-07-16T01:04:08.000Z | 2021-10-10T21:18:36.000Z | import json
import logging
import os
from bd2k.util.strings import interpolate as fmt
from fabric.operations import run
from cgcloud.core.box import Box
from cgcloud.fabric.operations import sudo
log = logging.getLogger( __name__ )
class ApacheSoftwareBox( Box ):
"""
A box to be mixed in to ease the hassle of installing Apache Software
Foundation released software distros.
"""
def _install_apache_package( self, remote_path, install_dir ):
"""
Download the given package from an Apache download mirror and extract it to a child
directory of the directory at the given path.
:param str remote_path: the URL path of the package on the Apache download server and its
mirrors.
:param str install_dir: The path to a local directory in which to create the directory
containing the extracted package.
"""
# TODO: run Fabric tasks with a different manager, so we don't need to catch SystemExit
components = remote_path.split( '/' )
package, tarball = components[ 0 ], components[ -1 ]
# Some mirrors may be down or serve crap, so we may need to retry this a couple of times.
tries = iter( xrange( 3 ) )
while True:
try:
mirror_url = self.__apache_s3_mirror_url( remote_path )
if run( "curl -Ofs '%s'" % mirror_url, warn_only=True ).failed:
mirror_url = self.__apache_official_mirror_url( remote_path )
run( "curl -Ofs '%s'" % mirror_url )
try:
sudo( fmt( 'mkdir -p {install_dir}/{package}' ) )
sudo( fmt( 'tar -C {install_dir}/{package} '
'--strip-components=1 -xzf {tarball}' ) )
return
finally:
run( fmt( 'rm {tarball}' ) )
except SystemExit:
if next( tries, None ) is None:
raise
else:
log.warn( "Could not download or extract the package, retrying ..." )
def __apache_official_mirror_url( self, remote_path ):
url = 'http://www.apache.org/dyn/closer.cgi?path=%s&asjson=1' % remote_path
mirrors = run( "curl -fs '%s'" % url )
mirrors = json.loads( mirrors )
mirror = mirrors[ 'preferred' ]
url = mirror + remote_path
return url
def __apache_s3_mirror_url( self, remote_path ):
file_name = os.path.basename( remote_path )
return 'https://s3-us-west-2.amazonaws.com/bd2k-artifacts/cgcloud/' + file_name
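The retry dance around `SystemExit` is a general budgeted-retry shape: attempt the work, and only re-raise once the iterator of retries is exhausted. A stripped-down version with an invented flaky fetcher (`range` here; the original targets Python 2's `xrange`):

```python
def with_retries(operation, retries=3):
    # Same control flow as _install_apache_package: an iterator doles out
    # the retry budget; when it runs dry, the error propagates.
    tries = iter(range(retries))
    while True:
        try:
            return operation()
        except RuntimeError:
            if next(tries, None) is None:
                raise

calls = {"count": 0}

def flaky_fetch():
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("mirror down")
    return "tarball bytes"

print(with_retries(flaky_fetch))  # 'tarball bytes'
```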
| 40.166667 | 98 | 0.592984 | 328 | 2,651 | 4.646341 | 0.448171 | 0.065617 | 0.034121 | 0.024934 | 0.05643 | 0.026247 | 0 | 0 | 0 | 0 | 0 | 0.006132 | 0.323274 | 2,651 | 65 | 99 | 40.784615 | 0.843367 | 0.246322 | 0 | 0.04878 | 0 | 0.02439 | 0.171563 | 0.024134 | 0 | 0 | 0 | 0.015385 | 0 | 1 | 0.073171 | false | 0 | 0.170732 | 0 | 0.341463 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a49912576008590c13a1def1c8f65551249b8211 | 2,252 | py | Python | xicam/core/tests/test_threads.py | Xi-CAM/Xi-cam-unified | 8b2811a8c13e18ec3b8860dfa5aee1eabc42da9e | [
"BSD-3-Clause-LBNL"
] | null | null | null | xicam/core/tests/test_threads.py | Xi-CAM/Xi-cam-unified | 8b2811a8c13e18ec3b8860dfa5aee1eabc42da9e | [
"BSD-3-Clause-LBNL"
] | null | null | null | xicam/core/tests/test_threads.py | Xi-CAM/Xi-cam-unified | 8b2811a8c13e18ec3b8860dfa5aee1eabc42da9e | [
"BSD-3-Clause-LBNL"
] | 1 | 2020-05-04T19:28:07.000Z | 2020-05-04T19:28:07.000Z | from pytestqt import qtbot
import pytest
import os
@pytest.mark.skip(reason="thread module testing has issues")
def test_threads(qtbot):
from xicam.core import threads
from qtpy.QtCore import QObject, Signal
def callback(a):
assert a == 10
t = threads.QThreadFuture(sum, [1, 2, 3, 4], callback_slot=callback)
class Callback(QObject):
sig = Signal(int)
callback = Callback()
t2 = threads.QThreadFuture(sum, [1, 2, 3, 4], callback_slot=callback.sig)
t.start()
t2.start()
qtbot.waitSignals([t.sigFinished, t2.sigFinished])
@pytest.mark.skip(reason="thread module testing has issues")
def test_threads_iterator(qtbot):
from xicam.core import threads
results = []
def callback(a):
results.append(a)
def testiterator():
for i in range(3):
yield i
def check():
assert sum(results) == 3
t = threads.QThreadFutureIterator(testiterator, yield_slot=callback, finished_slot=check)
t.start()
qtbot.waitSignal(t.sigFinished)
@pytest.mark.skip(reason="thread module testing has issues")
def test_exit_before_thread(qtbot):
from xicam.core import threads
import time
from qtpy.QtWidgets import QMainWindow
window = QMainWindow()
def long_thread():
time.sleep(100000)
for i in range(1000):
t = threads.QThreadFuture(long_thread)
t.start()
time.sleep(.01)
window.deleteLater()
@pytest.mark.skip(reason="thread module testing has issues")
def test_exit_before_decorated_thread(qtbot):
from xicam.core import threads
import time
from qtpy.QtWidgets import QMainWindow
window = QMainWindow()
@threads.method()
def long_thread():
time.sleep(100000)
for i in range(100):
long_thread()
time.sleep(.01)
window.deleteLater()
@pytest.mark.skip(reason="thread module testing has issues")
def test_qthreads_and_pythreads(qtbot):
from xicam.core import threads
import time
from qtpy.QtWidgets import QMainWindow
window = QMainWindow()
@threads.method()
def long_thread():
time.sleep(100000)
for i in range(1000):
long_thread()
time.sleep(.01)
window.deleteLater()
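The callback-slot pattern these tests exercise — run work off the main thread, deliver the result to a callback — has a stdlib analogue that needs no Qt event loop (names invented):

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def callback(value):
    results.append(value)

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(sum, [1, 2, 3, 4])
    future.add_done_callback(lambda f: callback(f.result()))

# The with-block joins the worker thread, so the callback has fired by now.
print(results)  # [10]
```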
| 21.447619 | 93 | 0.671403 | 287 | 2,252 | 5.188153 | 0.254355 | 0.040296 | 0.047011 | 0.067159 | 0.691739 | 0.691739 | 0.650101 | 0.617864 | 0.617864 | 0.617864 | 0 | 0.028686 | 0.226021 | 2,252 | 104 | 94 | 21.653846 | 0.825588 | 0 | 0 | 0.6 | 0 | 0 | 0.071048 | 0 | 0 | 0 | 0 | 0 | 0.028571 | 1 | 0.171429 | false | 0 | 0.214286 | 0 | 0.414286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a4991ee5bb3a8049313bf554b77dbf8520f3ded7 | 2,374 | py | Python | actions/lib/base.py | StackStorm-Exchange/powerdns | 13879e0e66b29a466d82c1077a1d4abde69c0d3e | [
"Apache-2.0"
] | null | null | null | actions/lib/base.py | StackStorm-Exchange/powerdns | 13879e0e66b29a466d82c1077a1d4abde69c0d3e | [
"Apache-2.0"
] | null | null | null | actions/lib/base.py | StackStorm-Exchange/powerdns | 13879e0e66b29a466d82c1077a1d4abde69c0d3e | [
"Apache-2.0"
] | 1 | 2021-12-01T14:49:27.000Z | 2021-12-01T14:49:27.000Z | # coding=utf-8
from st2common import log as logging
from st2common.runners.base_action import Action
from powerdns.exceptions import PDNSCanonicalError, PDNSError
import powerdns
__all__ = ["PowerDNSClient"]
LOG = logging.getLogger(__name__)
class PowerDNSClientError(Exception):
def __init__(self, message):
self.message = message
class PowerDNSClient(Action):
def __init__(self, config, timeout=5):
super(PowerDNSClient, self).__init__(config)
self.timeout = timeout
self.api_key = config.get("api_key")
self.api_url = config.get("api_url")
def _init_powerdns(self):
self.api_client = powerdns.PDNSApiClient(
api_endpoint=self.api_url,
api_key=self.api_key,
timeout=self.timeout
)
self._api = powerdns.PDNSEndpoint(self.api_client)
def _run(self, *args, **kwargs):
raise NotImplementedError
def _select_server_id(self, server_id):
for server in self._api.servers:
if str(server) == server_id:
self.api = server
return
raise PowerDNSClientError("Server not found")
def _select_zone(self, zone_name):
self.api = self.api.get_zone(zone_name)
if not self.api:
raise PowerDNSClientError("Zone not found")
def run(self, server_id, *args, **kwargs):
        if "response_timeout" in kwargs:
            self.timeout = kwargs.pop("response_timeout")
self._init_powerdns()
# remove server_id from args
try:
args = list(args)
args.pop(args.index(server_id))
except ValueError:
pass
rrset = {}
_cpy = kwargs.copy()
for arg, value in _cpy.items():
if arg.startswith("rrset_"):
rrset[arg.split("_")[1]] = value
kwargs.pop(arg)
if rrset and any(rrset.values()):
kwargs["rrsets"] = [powerdns.interface.RRSet(**rrset)]
try:
self._select_server_id(server_id)
if "zone_name" in kwargs:
self._select_zone(kwargs.pop("zone_name"))
return (True, self._run(*args, **kwargs))
except (PowerDNSClientError, PDNSError, PDNSCanonicalError) as e:
return (False, e)
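`run` pulls every `rrset_*` keyword out of `kwargs` by prefix. One subtlety: `arg.split("_")[1]` keeps only the first underscore-delimited token, so `rrset_name` survives but a hypothetical `rrset_time_to_live` would collapse to `time`. Slicing off the prefix length avoids that:

```python
def extract_prefixed(prefix, kwargs):
    # Safer variant of the rrset_* extraction in run(): slice off the
    # prefix instead of splitting on "_", so suffixes may themselves
    # contain underscores.
    extracted = {}
    for key in list(kwargs):
        if key.startswith(prefix):
            extracted[key[len(prefix):]] = kwargs.pop(key)
    return extracted

params = {"rrset_name": "www.example.org.", "rrset_type": "A", "zone_name": "example.org."}
rrset = extract_prefixed("rrset_", params)
print(rrset)   # {'name': 'www.example.org.', 'type': 'A'}
print(params)  # {'zone_name': 'example.org.'}
```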
| 28.95122 | 73 | 0.603201 | 267 | 2,374 | 5.116105 | 0.314607 | 0.061493 | 0.016105 | 0.019034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00299 | 0.295703 | 2,374 | 81 | 74 | 29.308642 | 0.813995 | 0.016428 | 0 | 0.081967 | 0 | 0 | 0.051887 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114754 | false | 0.032787 | 0.065574 | 0 | 0.262295 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a49a7796657fa08d5519667b5c665162e1e0b2bd | 522 | py | Python | Aula 10 Estrutura Condicional IF/Aula 10.py | JadilsonJR/Python | 99eab305249ccd02c31f1913d569a9b601eff06a | [
"MIT"
] | null | null | null | Aula 10 Estrutura Condicional IF/Aula 10.py | JadilsonJR/Python | 99eab305249ccd02c31f1913d569a9b601eff06a | [
"MIT"
] | null | null | null | Aula 10 Estrutura Condicional IF/Aula 10.py | JadilsonJR/Python | 99eab305249ccd02c31f1913d569a9b601eff06a | [
"MIT"
] | null | null | null |
a = 10
b = 5
op = "/"
if op == "+":
    res = a + b
    print("Sum operation, the result was:", a, "+", b, "=", res)
if op == "-":
    res = a - b
    print("Subtraction operation, the result was:", a, "-", b, "=", res)
if op == "/":
    res = a / b
    print("Division operation, the result was:", a, "/", b, "=", res)
if op == "*":
    res = a * b
    print("Multiplication operation, the result was:", a, "*", b, "=", res)
# a = False
# if a:
#     print("It's true, I'm not lying")
# else:
#     print("Nope")
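The chain of `if` blocks can also be written as a dispatch dictionary, which adds new operators without new branches (a common refactor of this exercise):

```python
operations = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "/": lambda a, b: a / b,
    "*": lambda a, b: a * b,
}

def calculate(a, b, op):
    # Look up the operator symbol and apply the matching function.
    return operations[op](a, b)

print(calculate(10, 5, "/"))  # 2.0
print(calculate(10, 5, "*"))  # 50
```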
| 15.818182 | 75 | 0.469349 | 77 | 522 | 3.181818 | 0.298701 | 0.065306 | 0.114286 | 0.130612 | 0.771429 | 0.771429 | 0.771429 | 0.681633 | 0.681633 | 0.681633 | 0 | 0.008174 | 0.296935 | 522 | 32 | 76 | 16.3125 | 0.659401 | 0.149425 | 0 | 0 | 0 | 0 | 0.361111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.266667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
a49afb80789480d7cb57a77b968a1e32a26b82e3 | 806 | py | Python | app.py | iUwej/Remote-Logger-Server | 8adfd4b85e277ea7e4bd24c22462ff54f0ddedd8 | [
"Unlicense"
] | null | null | null | app.py | iUwej/Remote-Logger-Server | 8adfd4b85e277ea7e4bd24c22462ff54f0ddedd8 | [
"Unlicense"
] | null | null | null | app.py | iUwej/Remote-Logger-Server | 8adfd4b85e277ea7e4bd24c22462ff54f0ddedd8 | [
"Unlicense"
] | null | null | null |
from flask import Flask
from flask import request
from flask import jsonify
from flask_redis import FlaskRedis
app = Flask(__name__)
#provide the redis configuration in the app configs to use this
redis_store = FlaskRedis(app)
@app.route('/')
def index():
return 'Home to the remote logger'
@app.route('/logerror',methods=['POST','GET'])
def logerror():
if request.method == 'POST':
log = request.get_json(force=True)
#print(log)
redis_store.rpush("errors",str(log))
return "Log saved",201
else:
all_logs = redis_store.lrange("errors",0,-1)
all_logs_str = [item.decode('utf-8') for item in all_logs]
return jsonify(all_logs_str)
@app.route('/clearerror')
def clearerror():
redis_store.delete("errors")
return 'Deleted',200
if __name__ == '__main__':
app.run()
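The two endpoints reduce to list operations on Redis: `rpush` to append, `lrange(0, -1)` to read everything back, `delete` to clear. A dependency-free stand-in capturing that contract (class name invented):

```python
class ErrorLog:
    # Mirrors the redis list operations used by the routes above.
    def __init__(self):
        self._items = []

    def rpush(self, key, value):
        self._items.append(value)

    def lrange(self, key, start, end):
        # Redis-style inclusive end; -1 means "through the last element".
        end = len(self._items) if end == -1 else end + 1
        return self._items[start:end]

    def delete(self, key):
        self._items.clear()

store = ErrorLog()
store.rpush("errors", "{'level': 'error'}")
store.rpush("errors", "{'level': 'fatal'}")
print(store.lrange("errors", 0, -1))  # ["{'level': 'error'}", "{'level': 'fatal'}"]
```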
| 17.148936 | 63 | 0.705955 | 118 | 806 | 4.618644 | 0.491525 | 0.066055 | 0.082569 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013255 | 0.157568 | 806 | 46 | 64 | 17.521739 | 0.789396 | 0.08933 | 0 | 0 | 0 | 0 | 0.142466 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12 | false | 0 | 0.16 | 0.04 | 0.44 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a49c560366637a4b9c308293ac6bef9e243f0019 | 736 | py | Python | tabular/tests/unittests/models/test_linear.py | zhiqiangdon/autogluon | 71ee7ef0f05d8f0aad112d8c1719174aa33194d9 | [
"Apache-2.0"
] | 4,462 | 2019-12-09T17:41:07.000Z | 2022-03-31T22:00:41.000Z | tabular/tests/unittests/models/test_linear.py | zhiqiangdon/autogluon | 71ee7ef0f05d8f0aad112d8c1719174aa33194d9 | [
"Apache-2.0"
] | 1,408 | 2019-12-09T17:48:59.000Z | 2022-03-31T20:24:12.000Z | tabular/tests/unittests/models/test_linear.py | zhiqiangdon/autogluon | 71ee7ef0f05d8f0aad112d8c1719174aa33194d9 | [
"Apache-2.0"
] | 623 | 2019-12-10T02:04:18.000Z | 2022-03-20T17:11:01.000Z |
from autogluon.tabular.models.lr.lr_model import LinearModel
def test_linear_binary(fit_helper):
fit_args = dict(
hyperparameters={LinearModel: {}},
)
dataset_name = 'adult'
fit_helper.fit_and_validate_dataset(dataset_name=dataset_name, fit_args=fit_args)
def test_linear_multiclass(fit_helper):
fit_args = dict(
hyperparameters={LinearModel: {}},
)
dataset_name = 'covertype'
fit_helper.fit_and_validate_dataset(dataset_name=dataset_name, fit_args=fit_args)
def test_linear_regression(fit_helper):
fit_args = dict(
hyperparameters={LinearModel: {}},
)
dataset_name = 'ames'
fit_helper.fit_and_validate_dataset(dataset_name=dataset_name, fit_args=fit_args)
| 27.259259 | 85 | 0.73913 | 93 | 736 | 5.419355 | 0.27957 | 0.125 | 0.142857 | 0.095238 | 0.78373 | 0.78373 | 0.78373 | 0.78373 | 0.78373 | 0.444444 | 0 | 0 | 0.165761 | 736 | 26 | 86 | 28.307692 | 0.820847 | 0 | 0 | 0.473684 | 0 | 0 | 0.02449 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.052632 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
a49de0a573a17e1a8abeb597091da40cf1ac2a4e | 2,529 | py | Python | DBProcessing/ProcessIphoneBackup.py | georgezywang/RealTime_Wechat_Analysis | baa9ba4a06d9d6b4ce13b951f1b3846ebd338ce8 | [
"MIT"
] | null | null | null | DBProcessing/ProcessIphoneBackup.py | georgezywang/RealTime_Wechat_Analysis | baa9ba4a06d9d6b4ce13b951f1b3846ebd338ce8 | [
"MIT"
] | null | null | null | DBProcessing/ProcessIphoneBackup.py | georgezywang/RealTime_Wechat_Analysis | baa9ba4a06d9d6b4ce13b951f1b3846ebd338ce8 | [
"MIT"
] | null | null | null | import sqlite3 as sqlite
import os
from Utils import *
iphoneBackupDir = "IphoneBackup"
m_nsAliasName = "wxid_t798rxqmvz7s11"
PARSED_DB_PATH = "DataStore/Contact.db"
PARSED_DATA_CONNECTION = ConnectNonEncryptedDB(PARSED_DB_PATH)
userAlias, userChatEncryption, userDB = GetUpdateContactInfo(PARSED_DATA_CONNECTION, m_nsAliasName)
def GetUserIphoneDB(userChatEncryption):
for i in range(4):
DBName = "message_{}.sqlite".format(i + 1)
CurrDBConnection = ConnectNonEncryptedDB(os.path.join(iphoneBackupDir, DBName))
        currentDBCursor = CurrDBConnection.cursor()
currentDBCursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
tableList = [table[0] for table in currentDBCursor.fetchall()]
CurrDBConnection.close()
if userChatEncryption in tableList:
return i + 1
return -1
def UpdateIphoneContactMap():
parsedDataCursor = PARSED_DATA_CONNECTION.cursor()
parsedDataCursor.execute("""CREATE TABLE IF NOT EXISTS IphoneParsedContact(
m_nsUsrName TEXT PRIMARY KEY,
m_nsRemark TEXT,
m_nsAliasName TEXT,
chat_md5ID TEXT,
db_Stored INTEGER
);""")
PARSED_DATA_CONNECTION.commit()
parsedDataCursor.execute("""SELECT m_nsUsrName, m_nsRemark, m_nsAliasName, chat_md5ID
FROM ParsedContact;""")
contactData = parsedDataCursor.fetchall()
for contact in contactData:
m_nsUsrName = contact[0]
m_nsRemark = contact[1]
m_nsAliasName = contact[2]
chat_md5ID = contact[3]
db_Stored = GetUserIphoneDB(chat_md5ID)
parsedDataCursor.execute("""INSERT OR REPLACE INTO IphoneParsedContact(
m_nsUsrName,
m_nsRemark,
m_nsAliasName,
chat_md5ID,
db_Stored)
VALUES(?,?,?,?,?);""", (m_nsUsrName, m_nsRemark,
m_nsAliasName, chat_md5ID, db_Stored))
        contactRemark = m_nsRemark if m_nsRemark is not None and len(m_nsRemark.replace(" ", "")) > 1 else m_nsUsrName
print("Contact {} Stored in DB {}".format(contactRemark, db_Stored))
PARSED_DATA_CONNECTION.commit()
UpdateIphoneContactMap()
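`GetUserIphoneDB` probes each backup database for a table named after the chat's MD5 hash. The core check — does this SQLite file contain a given table? — can be demonstrated with an in-memory database (table name invented):

```python
import sqlite3

def table_exists(connection, table_name):
    # Same query GetUserIphoneDB runs against each message_N.sqlite file.
    cursor = connection.cursor()
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
    return table_name in [row[0] for row in cursor.fetchall()]

connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE Chat_0a1b2c (msg TEXT);")

print(table_exists(connection, "Chat_0a1b2c"))   # True
print(table_exists(connection, "Chat_missing"))  # False
```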
| 38.318182 | 123 | 0.594306 | 228 | 2,529 | 6.377193 | 0.368421 | 0.049519 | 0.068776 | 0.039202 | 0.093535 | 0.093535 | 0.093535 | 0.093535 | 0.066025 | 0.066025 | 0 | 0.013537 | 0.328193 | 2,529 | 65 | 124 | 38.907692 | 0.84226 | 0 | 0 | 0.04 | 0 | 0 | 0.373217 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.06 | 0 | 0.14 | 0.02 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a49eab646b818bca17a2eac3acba44ec620d5a5e | 3,496 | py | Python | qrl/core/StakeValidator.py | djuhn/QRL-1 | 47c4b8beb8e1be8c5a0fdf16b33532f32899ce13 | [
"MIT"
] | null | null | null | qrl/core/StakeValidator.py | djuhn/QRL-1 | 47c4b8beb8e1be8c5a0fdf16b33532f32899ce13 | [
"MIT"
] | null | null | null | qrl/core/StakeValidator.py | djuhn/QRL-1 | 47c4b8beb8e1be8c5a0fdf16b33532f32899ce13 | [
"MIT"
] | 1 | 2021-11-03T06:56:27.000Z | 2021-11-03T06:56:27.000Z | # coding=utf-8
# Distributed under the MIT software license, see the accompanying
# file LICENSE or http://www.opensource.org/licenses/mit-license.php.
from google.protobuf.json_format import MessageToJson, Parse
from qrl.core import config
from qrl.core.Transaction import StakeTransaction
from qrl.generated import qrl_pb2
from qrl.crypto.misc import sha256_n
class StakeValidator:
"""
Stake Validator class to represent each unique Stake Validator
Maintains the cache of successfully validated hashes, saves validation
time by avoiding recalculation of the hash till the hash terminators.
"""
def __init__(self, stakevalidator_protobuf=None):
self._data = stakevalidator_protobuf
if not self._data:
self._data = qrl_pb2.StakeValidator()
@property
def pbdata(self):
return self._data
@property
def address(self) -> bytes:
return self._data.address
@property
def slave_public_key(self) -> bytes:
return self._data.slave_public_key
@property
def terminator_hash(self) -> bytes:
return self._data.terminator_hash
@property
def balance(self) -> int:
return self._data.balance
@property
def is_banned(self) -> bool:
return self._data.is_banned
@property
def is_active(self) -> bool:
return self._data.is_active
@property
def nonce(self) -> int:
return self._data.nonce
@property
def activation_blocknumber(self) -> int:
return self._data.activation_blocknumber
def increase_nonce(self):
self._data.nonce += 1
@staticmethod
def _hash_to_terminator(reveal_hash: bytes, times: int) -> bytes:
return sha256_n(reveal_hash, times)
@staticmethod
def create(balance: int,
stake_txn: StakeTransaction):
stakevalidator = StakeValidator()
stakevalidator._data.address = stake_txn.txfrom
stakevalidator._data.slave_public_key = stake_txn.slave_public_key
stakevalidator._data.terminator_hash = stake_txn.hash
if not stakevalidator._data.terminator_hash:
raise ValueError("terminator hash cannot be empty")
stakevalidator._data.balance = balance
if balance < config.dev.minimum_staking_balance_required:
raise ValueError("balance should be at least {}".format(config.dev.minimum_staking_balance_required))
stakevalidator._data.activation_blocknumber = stake_txn.activation_blocknumber
stakevalidator._data.nonce = 0
stakevalidator._data.is_banned = False
stakevalidator._data.is_active = True # Flag that represents if the stakevalidator has been deactivated by destake txn
return stakevalidator
def validate_hash(self, reveal_hash: bytes, block_idx: int) -> bool:
# FIXME: Measure with a profiler if we really need a cache here
times = block_idx - self.activation_blocknumber + 1
terminator_found = self._hash_to_terminator(reveal_hash, times)
terminator_expected = self.terminator_hash
return terminator_found == terminator_expected
@staticmethod
def from_json(json_data):
pbdata = qrl_pb2.StakeValidator()
Parse(json_data, pbdata)
return StakeValidator(pbdata)
def to_json(self):
return MessageToJson(self._data)
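`validate_hash` relies on a hash-chain commitment: the terminator published at staking time is the reveal seed hashed many times over, so revealing an intermediate value and re-hashing it the remaining number of times must reproduce the terminator. A sketch with a stand-in for `sha256_n` (assumed here to be plain iterated SHA-256; the real helper may differ):

```python
import hashlib

def sha256_n(data, times):
    # Assumption: iterated SHA-256, standing in for qrl.crypto.misc.sha256_n.
    for _ in range(times):
        data = hashlib.sha256(data).digest()
    return data

seed = b"secret reveal seed"
chain_length = 100
terminator = sha256_n(seed, chain_length)  # published when staking starts

# At some later block the validator reveals the value 7 steps short of
# the terminator; hashing it 7 more times must land exactly on it.
reveal = sha256_n(seed, chain_length - 7)
print(sha256_n(reveal, 7) == terminator)  # True
```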
| 31.495495 | 128 | 0.680492 | 403 | 3,496 | 5.674938 | 0.322581 | 0.048972 | 0.055094 | 0.024923 | 0.134674 | 0.05422 | 0 | 0 | 0 | 0 | 0 | 0.004964 | 0.250858 | 3,496 | 110 | 129 | 31.781818 | 0.86827 | 0.140732 | 0 | 0.169014 | 0 | 0 | 0.020935 | 0 | 0 | 0 | 0 | 0.009091 | 0 | 1 | 0.225352 | false | 0 | 0.070423 | 0.15493 | 0.507042 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
a49f7d80f24b4797e2ac4f693214e1ea5cd6017e | 2,936 | py | Python | sohojoe_wrappers.py | Sohojoe/many_towers | 527b3c4b591a3d0919b76395ecfc22c4c0059b08 | [
"MIT"
] | null | null | null | sohojoe_wrappers.py | Sohojoe/many_towers | 527b3c4b591a3d0919b76395ecfc22c4c0059b08 | [
"MIT"
] | null | null | null | sohojoe_wrappers.py | Sohojoe/many_towers | 527b3c4b591a3d0919b76395ecfc22c4c0059b08 | [
"MIT"
] | null | null | null | import os
import gym
import numpy as np
def done_grading(env):
if hasattr(env, 'done_grading'):
return env.done_grading()
if hasattr(env, 'env'):
return done_grading(env.env)
if hasattr(env, '_env'):
return done_grading(env._env)
def is_grading(env):
if hasattr(env, 'is_grading'):
return env.is_grading()
if hasattr(env, 'env'):
return is_grading(env.env)
if hasattr(env, '_env'):
return is_grading(env._env)
class RenderObservations(gym.Wrapper):
def __init__(self, env, display_vector_obs=True):
gym.Wrapper.__init__(self, env)
self.viewer = None
self._empty = np.zeros((1,1,1))
self._has_vector_obs = hasattr(self.observation_space, 'spaces')
self._8bit = None
self._display_vector_obs = display_vector_obs
def step(self, action):
ob, reward, done, info = self.env.step(action)
should_render = True
if 'human_agent_display' in globals():
global human_agent_display
should_render = human_agent_display
self._renderObs(ob, should_render)
return ob, reward, done, info
def _renderObs(self, obs, should_render):
from gym.envs.classic_control import rendering
if self.viewer is None:
self.viewer = rendering.SimpleImageViewer()
if not should_render:
self.viewer.imshow(self._empty)
return self.viewer.isopen
if self._has_vector_obs:
visual_obs = obs['visual'].copy()
vector_obs = obs['vector'].copy()
else:
visual_obs = obs.copy()
if self._has_vector_obs and self._display_vector_obs:
w = 84
# Displays time left and number of keys on visual observation
key = vector_obs[0:-1]
time_num = vector_obs[-1]
key_num = np.argmax(key, axis=0)
# max_bright = 1
max_bright = 255
visual_obs[0:10, :, :] = 0
for i in range(key_num):
start = int(i * 16.8) + 4
end = start + 10
visual_obs[1:5, start:end, 0:2] = max_bright
visual_obs[6:10, 0:int(time_num * w), 1] = max_bright
self._8bit = visual_obs
# if type(visual_obs[0][0][0]) is np.float32 or type(visual_obs[0][0][0]) is np.float64:
# _8bit = (255.0 * visual_obs).astype(np.uint8)
self._8bit = ( visual_obs).astype(np.uint8)
self.viewer.imshow(self._8bit)
return self.viewer.isopen
def render(self, mode='human', **kwargs):
if self.viewer:
self.viewer.imshow(self._8bit)
return self._8bit
def reset(self):
return self.env.reset()
def close(self):
self.env.close()
if self.viewer is not None:
self.viewer.close()
self.viewer = None
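`done_grading`/`is_grading` walk a chain of wrappers by recursing into whichever inner-environment attribute exists. The unwrap pattern in isolation (the toy classes are invented):

```python
def done_grading(env):
    # Recurse through .env / ._env until something exposes done_grading().
    if hasattr(env, "done_grading"):
        return env.done_grading()
    if hasattr(env, "env"):
        return done_grading(env.env)
    if hasattr(env, "_env"):
        return done_grading(env._env)

class InnerEnv:
    def done_grading(self):
        return True

class Wrapper:
    def __init__(self, env):
        self.env = env

print(done_grading(Wrapper(Wrapper(InnerEnv()))))  # True
```

If nothing in the chain defines the method, the function falls off the end and implicitly returns `None` — the same behavior as the original helpers.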
| 32.988764 | 96 | 0.583447 | 388 | 2,936 | 4.193299 | 0.255155 | 0.073755 | 0.044253 | 0.036878 | 0.244622 | 0.195452 | 0.157345 | 0.11555 | 0.090965 | 0 | 0 | 0.027107 | 0.308924 | 2,936 | 88 | 97 | 33.363636 | 0.774766 | 0.070504 | 0 | 0.138889 | 0 | 0 | 0.028645 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.055556 | 0.013889 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a49f7fa75323d77c51d8a4ddc4900cade3b8ccc9 | 308 | py | Python | setup.py | FL33TW00D/COP-Kmeans | MIT | 125 stars
from setuptools import setup, find_packages

setup(
    name='copkmeans',
    version='1.5',
    description='',
    author='',
    author_email='',
    url='https://github.com/Behrouz-Babaki/COP-Kmeans',
    license='MIT',  # was `license=license`, which refers to the site builtin, not a string
    packages=find_packages(),
    include_package_data=True,
    zip_safe=False
)
a4a1716bce679ffd4ca64f50cb6d4adb175258a8 | 1,210 | py | Python | python-poo/name_value_object.py | alexsandrox/ddd-valueobjects-python | MIT
"""
Class:
    - NameValueObject

Summary:
    - It is usually a good idea to replace common primitives, such as strings,
      with suitable value objects. While I can represent a phone number as a
      string, turning it into a phone number object makes variables and
      parameters more explicit (with type checking, when the language supports
      it), a natural focus for validation, and avoids inapplicable behavior
      (such as performing calculations with whole identification numbers).
      (-- Martin Fowler)
"""


class NameValueObject:

    def __init__(self, first_name, last_name):
        self.validate_complete_name(first_name, last_name)
        self.first_name = first_name
        self.last_name = last_name

    def to_string(self, first: str, last: str) -> str:
        return '{} {}'.format(first, last)

    def validate_complete_name(self, first: str, last: str) -> bool:
        # Names must be non-empty strings and must not be purely numeric.
        if not first or first.isdecimal() or not last or last.isdecimal():
            print('>> First name and last name must be filled in correctly')
            return False
        print(self.to_string(first, last))
        return True
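The docstring above uses phone numbers as Fowler's motivating example. A minimal, self-contained sketch of that idea in the same style; the class name and the digits-only validation rule are illustrative, not part of the original module:

```python
class PhoneNumberValueObject:
    """Illustrative value object: wraps a phone number and validates it once,
    at construction time, instead of passing a bare string around."""

    def __init__(self, number: str):
        if not self.is_valid(number):
            raise ValueError('>> Phone number must contain only digits')
        self.number = number

    @staticmethod
    def is_valid(number: str) -> bool:
        # Non-empty and purely numeric; real-world rules would be richer.
        return bool(number) and number.isdecimal()

    def to_string(self) -> str:
        return self.number
```

Once constructed, the object is known-valid everywhere it travels, which is the point of replacing the primitive.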
a4a22bd41876d7d8004bf1557ae0bfbb73b1abde | 1,464 | py | Python | main.py | Rishikesh-kumar-7258/Block_breaker | MIT | 20 stars
import pygame

from src.spritesheet import SpriteSheet, balls, blocks, sliders
from src.statemachine import Statemachine
from src.states.gameoverstate import GameOver
from src.states.highscoreState import Highscore
from src.states.levelpassedstate import LevelPassed
from src.states.playstate import Play
from src.states.sliderchoosingstate import SliderChoosing
from src.states.startstate import Start

pygame.init()

# parameters for the screen
screen_width = 800
screen_height = 600

# setting up the screen
screen = pygame.display.set_mode((screen_width, screen_height))
pygame.display.set_caption("Block Breaker")
pygame.display.set_icon(pygame.image.load("images/logo.png"))

# the different states
gameStates = {
    "start": Start(),
    "sliders": SliderChoosing(),
    "highscore": Highscore(),
    "play": Play(),
    "levelclear": LevelPassed(),
    "over": GameOver()
}

# state machine
gstatemachine = Statemachine(gameStates)
gstatemachine.change("start", screen=screen, gstatemachine=gstatemachine)
gstatemachine.render()

# setting up the clock
clock = pygame.time.Clock()

# the game loop
running = True
while running:
    # event handling
    events = pygame.event.get()
    for event in events:
        if event.type == pygame.QUIT:
            running = False

    # drawing and updating the screen
    screen.fill((0, 0, 0))
    gstatemachine.update(events)
    pygame.display.update()
    clock.tick(60)

pygame.quit()
quit()
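The loop above drives everything through `change`/`update`/`render` on a state machine. A hypothetical, minimal sketch of that interface (the real `src.statemachine.Statemachine` may differ in detail):

```python
class MiniStatemachine:
    """Toy state machine matching the change/update/render usage above."""

    def __init__(self, states):
        self.states = states    # mapping of name -> state object
        self.current = None

    def change(self, name, **params):
        # Switch to a named state, passing setup parameters to its enter()
        # hook if it has one.
        self.current = self.states[name]
        enter = getattr(self.current, 'enter', None)
        if enter is not None:
            enter(**params)

    def update(self, events):
        if self.current is not None:
            self.current.update(events)

    def render(self):
        if self.current is not None and hasattr(self.current, 'render'):
            self.current.render()
```

The per-frame loop then only ever talks to `current`, so adding a new screen means adding a state object, not touching the loop.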
a4a374d1f47637ba07c0cd5f25d45e1f33628c90 | 1,466 | py | Python | jdcloud_sdk/services/asset/models/OperatingStatementVo.py | Tanc009/jdcloud-sdk-python | Apache-2.0 | 14 stars
# coding=utf8

# Copyright 2018 JDCLOUD.COM
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# NOTE: This class is auto generated by the jdcloud code generator program.


class OperatingStatementVo(object):

    def __init__(self, tradeType=None, tradeStatus=None, beginTime=None, endTime=None, pageIndex=None, pageSize=None):
        """
        :param tradeType: (Optional) Transaction type: 1 - top-up (11. online top-up, 12. chargeback top-up, 13. offline remittance not manually claimed, 14. offline remittance manually claimed, 15. supplementary top-up, 16. refund top-up); 2 - consumption; 3 - withdrawal
        :param tradeStatus: (Optional) Transaction status: 1 - success, 2 - failure, 31 - withdrawal fully successful, 32 - withdrawal fully failed, 33 - withdrawal partially successful, 34 - pending operations review, 35 - approved by operations, 36 - rejected by operations, 37 - processing, 38 - pre-allocated top-up order failed
        :param beginTime: (Optional) Start time
        :param endTime: (Optional) End time
        :param pageIndex: (Optional) Current page number
        :param pageSize: (Optional) Number of items per page
        """

        self.tradeType = tradeType
        self.tradeStatus = tradeStatus
        self.beginTime = beginTime
        self.endTime = endTime
        self.pageIndex = pageIndex
        self.pageSize = pageSize
a4a6b6bc1a07e212eb0593d122c2db090785358b | 690 | py | Python | circuitpython_typing/device_drivers.py | tekktrik/Adafruit_CircuitPython_Typing | MIT
# SPDX-FileCopyrightText: Copyright (c) 2022 Alec Delaney
# SPDX-License-Identifier: MIT
"""
`circuitpython_typing.device_drivers`
================================================================================

Type annotation definitions for device drivers. Used for `adafruit_register`.

* Author(s): Alec Delaney
"""

from adafruit_bus_device.i2c_device import I2CDevice

# Protocol was introduced in Python 3.8.
try:
    from typing import Protocol
except ImportError:
    from typing_extensions import Protocol


# pylint: disable=too-few-public-methods
class I2CDeviceDriver(Protocol):
    """Describes classes that are drivers utilizing `I2CDevice`"""

    i2c_device: I2CDevice
a4a94de479ad444e62d5c5754fb01c753297dfe3 | 34,996 | py | Python | moya/tags/server.py | moyaproject/moya | MIT | 129 stars
from __future__ import unicode_literals
from __future__ import print_function
from __future__ import absolute_import
from ..elements import Attribute
from ..elements.elementbase import LogicElement
from ..tags.context import ContextElementBase, DataSetter
from .. import logic
from ..urlmapper import URLMapper, MissingURLParameter, RouteError
from ..context.expressiontime import ExpressionDateTime
from ..render import render_object
from .. import http
from ..http import StatusCode, standard_response, RespondWith
from .. import errors
from ..template.errors import MissingTemplateError
from ..template.rendercontainer import RenderContainer
from .. import trace
from .. import __version__
from ..content import Content
from ..tags.content import ContentElementMixin
from ..tools import get_return
from .. import syntax
from ..timezone import Timezone
from ..context.tools import to_expression, set_dynamic
from ..sites import LocaleProxy
from ..compat import text_type, itervalues, py2bytes, iteritems
from .. import db
from ..response import MoyaResponse
from ..request import ReplaceRequest
from ..urltools import urlencode as moya_urlencode
from .. import tools
from .. import pilot
from .. import namespaces
from webob import Response
from fs.path import splitext
from fs.errors import NoSysPath
import pytz
import sys
import logging
log = logging.getLogger("moya.runtime")
startup_log = logging.getLogger("moya.startup")
class Mountpoint(LogicElement):
    """
    A [i]mountpoint[/i] defines a collection of URL *routes* which map incoming requests on to moya code.

    An app will typically have at least one mountpoint with [c]name="main"[/c] (the default) which is used when the app is mounted. Moya will check each enclosed <url> in turn until it finds a route which matches.

    An app may contain multiple mountpoints, which can be [i]mounted[/i] separately.
    """

    class Help:
        synopsis = "define a collection of url routes"
        example = """
        <mountpoint name="main">
            <!-- should contain <url> tags -->
        </mountpoint>
        """

    name = Attribute(
        "Mountpoint name unique to the application", default="main", map_to="_name"
    )

    preserve_attributes = ["urlmapper", "middleware", "name"]

    def post_build(self, context):
        self.urlmapper = URLMapper(self.libid)
        self.middleware = dict(request=URLMapper(), response=URLMapper())
        self.name = self._name(context)
class URL(LogicElement):
    """
    Add a URL route to a [tag]mountpoint[/tag].
    """

    class Help:
        synopsis = """add a url to a mountpoint"""

    mountpoint = Attribute("Name of the parent mount point", required=False)
    mount = Attribute("Mountpoint to mount on this url", required=False, default=None)
    route = Attribute("URL route", required=True)
    view = Attribute("View element", required=False, map_to="target", example="#post")
    methods = Attribute(
        "A list of comma separated HTTP methods",
        type="commalist",
        evaldefault=True,
        required=False,
        default="GET,POST",
        example="GET,POST",
        map_to="_methods",
    )
    handler = Attribute(
        "A list of comma separated http status codes",
        type="commalist",
        evaldefault=False,
        required=False,
        default=[],
        example="404",
        map_to="_handlers",
    )
    name = Attribute("An optional name", required=False, default=None)
    final = Attribute(
        "Ignore further URLs if this route matches?", type="boolean", default=False
    )

    def lib_finalize(self, context):
        if not self.check(context):
            return
        defaults = self.get_let_map(context)
        params = self.get_parameters(context)
        methods = params._methods
        handlers = []
        for h in params._handlers:
            try:
                handlers.append(StatusCode(h))
            except KeyError:
                raise errors.ElementError(
                    """"{}" is not a valid http status code""".format(h), element=self
                )
        target = params.target
        url_target = self.document.lib.qualify_libname(self.libname)
        try:
            if target is None:
                target = (url_target,)
            else:
                target = (
                    url_target,
                    self.document.qualify_element_ref(target, lib=self.lib),
                )
        except errors.ElementNotFoundError:
            raise errors.ElementError(
                "No view called '{}' in the project".format(target), element=self
            )
        if params.mountpoint is None:
            mount_point = self.get_ancestor("mountpoint")
        else:
            _, mount_point = self.get_element(params.mountpoint)
        if params.mount:
            try:
                _, element = self.archive.get_element(params.mount, lib=self.lib)
                if not hasattr(element, "urlmapper"):
                    raise ValueError("element {} is not mountable".format(element))
                mount_point.urlmapper.map(
                    params.route.rstrip("/") + "/*",
                    [url_target],
                    methods=methods,
                    handlers=handlers or None,
                    defaults=defaults,
                )
                mount_point.urlmapper.mount(
                    params.route, element.urlmapper, name=params.name, defaults=defaults
                )
            except Exception as e:
                raise errors.ElementError(
                    text_type(e), element=self, diagnosis=getattr(e, "diagnosis", None)
                )
        else:
            try:
                mount_point.urlmapper.map(
                    params.route,
                    target,
                    methods=methods,
                    handlers=handlers or None,
                    name=params.name,
                    defaults=defaults,
                    final=params.final,
                )
            except ValueError as e:
                raise errors.ElementError(text_type(e), element=self)
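The URL tag above registers routes on a mapper that is later asked, in order, whether an incoming path matches. A toy illustration of that first-match idea; this is a simplified sketch, not moya's `URLMapper`, and the `{name}`-style placeholder syntax is an assumption borrowed from common routers:

```python
import re


class ToyMapper:
    """Routes are tried in registration order until one matches."""

    def __init__(self):
        self.routes = []

    def map(self, pattern, target, methods=("GET",)):
        # '{name}' segments become named capture groups in a regex.
        regex = re.sub(r'{(\w+)}', r'(?P<\1>[^/]+)', pattern)
        self.routes.append((re.compile('^%s$' % regex), set(methods), target))

    def match(self, path, method="GET"):
        for regex, methods, target in self.routes:
            m = regex.match(path)
            if m is not None and method in methods:
                return target, m.groupdict()
        return None
```

The captured groups play the role of the route data (`.url`) that moya passes on to the matched view.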
class Middleware(LogicElement):
    """Add middleware to a mountpoint"""

    class Help:
        synopsis = "add middleware to a mountpoint"

    route = Attribute("Route", required=True)
    methods = Attribute(
        "A list of comma separated HTTP methods",
        required=False,
        type="commalist",
        evaldefault=True,
        default="*",
        example="GET,POST",
        map_to="_methods",
    )
    mountpoint = Attribute("Mount point", required=False)
    stage = Attribute(
        "Stage in request handling",
        required=False,
        default="request",
        metavar="STAGE",
        choices=["request", "response"],
    )
    macro = Attribute("Macro to call", required=False, default=None)
    name = Attribute("An optional name", required=False, default=None)

    def lib_finalize(self, context):
        if not self.check(context):
            return
        params = self.get_parameters(context)
        methods = params._methods
        target = params.macro
        url_target = self.document.lib.qualify_libname(self.libname)
        if target is None:
            target = (url_target,)
        else:
            target = (url_target, self.document.qualify_element_ref(target))
        if params.mountpoint is None:
            mount_point = self.get_ancestor("mountpoint")
        else:
            _, mount_point = self.get_element(params.mountpoint)
        mapper = mount_point.middleware[params.stage]
        _route = mapper.map(params.route, target, methods=methods, name=params.name)
class Mount(LogicElement):
    """Mount a library."""

    class Help:
        synopsis = "mount a library on a given URL"

    app = Attribute("Application", required=True)
    url = Attribute("Url", required=True)
    mountpoint = Attribute("Mount point", required=False, default="main")
    priority = Attribute(
        "Priority (highest priority is checked first)",
        type="integer",
        required=False,
        default=0,
    )

    def logic(self, context):
        if self.archive.test_build:
            return
        self.archive.build_libs()
        params = self.get_parameters(context)
        app = self.archive.find_app(params.app)
        server = self.get_ancestor("server")
        url_params = self.get_let_map(context, check_missing=False)
        url_params["app"] = app.name
        mountpoint = app.lib.get_element_by_type_and_attribute(
            "mountpoint", "name", params.mountpoint
        )
        app.mounts.append((params.mountpoint, params.url))
        server.urlmapper.mount(
            params.url,
            mountpoint.urlmapper,
            defaults=url_params,
            name=app.name,
            priority=params.priority,
        )
        for stage, urlmapper in server.middleware.items():
            urlmapper.mount(
                params.url,
                mountpoint.middleware[stage],
                defaults=url_params,
                name=app.name,
                priority=params.priority,
            )
        startup_log.debug(
            "%s (%s) mounted on %s",
            app,
            params.mountpoint,
            tools.normalize_url_path(params.url),
        )
class GetURL(DataSetter):
    """Get a named URL."""

    class Help:
        synopsis = "get a named URL"

    name = Attribute("URL name", required=True)
    _from = Attribute("Application", type="application", default=None, evaldefault=True)
    query = Attribute(
        "Mapping expression to use as a query string",
        metavar="EXPRESSION",
        required=False,
        default=None,
        type="expression",
        missing=False,
    )
    _with = Attribute(
        "Extract URL values from this object",
        type="expression",
        required=False,
        default=None,
    )
    base = Attribute("Base (protocol and domain) of the URL", default=None)

    def get_value(self, context):
        params = self.get_parameters(context)
        query = params.query
        app = self.get_app(context)
        try:
            if self.has_parameter("with"):
                url_params = self.get_let_map(context)
                url_params.update(params["with"])
            else:
                url_params = {
                    k: text_type(v) for k, v in iteritems(self.get_let_map(context))
                }
            for k, v in iteritems(url_params):
                if not v:
                    self.throw(
                        "bad-value.parameter",
                        "URL parameter '{}' must not be blank or missing (it is {})".format(
                            k, to_expression(context, v)
                        ),
                    )
            url = context[".server"].get_url(app.name, params.name, url_params)
        except MissingURLParameter as e:
            self.throw("get-url.missing-parameter", text_type(e))
        except RouteError as e:
            self.throw("get-url.no-route", text_type(e))
        if query and hasattr(query, "items"):
            qs = moya_urlencode(query)
            if qs:
                url += "?" + qs
        url = self.qualify(context, url)
        return url

    def qualify(self, context, url):
        base = self.base(context)
        if base is not None:
            url = base.rstrip("/") + "/" + url.lstrip("/")
        return url
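`GetURL.get_value` only appends a `?` when the query mapping encodes to something non-empty. A standard-library equivalent of that tail-end behaviour (using `urllib.parse.urlencode` in place of moya's `moya_urlencode`; the helper name is illustrative):

```python
from urllib.parse import urlencode


def append_query(url, query):
    # Only append '?' when the encoded query string is non-empty,
    # mirroring the end of GetURL.get_value above.
    qs = urlencode(query or {})
    return url + '?' + qs if qs else url
```

This keeps canonical URLs free of dangling `?` characters when the query mapping is empty or absent.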
class GetFqURL(GetURL):
    """Get a [i]fully qualified[/i] (including domain name and scheme) named URL."""

    base = Attribute("Base (protocol and domain) of the URL", default=None)

    class Help:
        synopsis = "get a fully qualified URL"

    def qualify(self, context, url):
        base = self.base(context)
        if base is None:
            base = context[".sys.site.host"] or context[".request.host_url"]
        url = base + url
        return url
class Trace(DataSetter):
    """
    Extract route information from a URL path.

    Returns route matches in a list of dictionaries. Route matches have three keys;
    [c]data[/c] is the url data (as returned in [c].url[/c]), [c]targets[/c] is a list of element references,
    [c]name[/c] is the name of the matching URL.

    If [c]app[/c] or [c]name[/c] is provided, this tag will return the first url route matching the given app / named url.
    """

    class Help:
        synopsis = "extract routing information from mounted URL paths"
        example = """
        <trace path=".request.path" dst="matches"/>
        """

    server = Attribute(
        "Server containing URL routes",
        type="expression",
        default=".server",
        evaldefault=True,
    )
    path = Attribute(
        "URL path to parse", type="expression", required=True, missing=False
    )
    method = Attribute("HTTP method", type="text", default="GET")
    app = Attribute("Application name", required=False, default=None, type="text")
    name = Attribute(
        "Route name to find", required=False, type="commalist", default=None
    )

    def get_value(self, context):
        server, path, method, app, name = self.get_parameters(
            context, "server", "path", "method", "app", "name"
        )
        if "://" in path:
            _, _, path = path.partition("://")
        if not path.startswith("/"):
            path = "/" + path
        if app is None and name is None:
            routes = []
            for route_match in server.urlmapper.iter_routes(path, method):
                if route_match is not None:
                    data, targets, name = route_match
                    routes.append({"data": data, "targets": targets, "name": name})
            return routes
        else:
            for route_match in server.urlmapper.iter_routes(path, method):
                data, targets, _name = route_match
                if app is not None:
                    if data.get("app", None) != app:
                        continue
                if name is not None:
                    if _name not in name:
                        continue
                return {"data": data, "targets": targets, "name": _name}
            else:
                return None
def wrap_element_error(f):
    def deco(self, context):
        try:
            for node in f(self, context):
                yield node
        except (errors.ElementError, logic.LogicFlowException):
            raise
        except Exception as e:
            # import traceback; traceback.print_exc(e)
            raise errors.ElementError(
                text_type(e), self, diagnosis=getattr(e, "diagnosis", None)
            )

    return deco
class View(ContextElementBase, ContentElementMixin):
    """Define a view to handle a URL"""

    class Help:
        synopsis = "define a view to handle a URL"

    content = Attribute("Content", type="elementref", required=False, default=None)
    template = Attribute("Template", type="templates", required=False, default=None)
    requires = Attribute(
        "Permission expression", type="expression", required=False, default=None
    )
    withscope = Attribute(
        "Use scope as template / content data?",
        type="boolean",
        required=False,
        default=True,
    )

    def extend_context(self, context):
        """Hook to extend the context."""

    @wrap_element_error
    def run(self, context):
        (content, templates, requires, withscope) = self.get_parameters(
            context, "content", "template", "requires", "withscope"
        )
        if self.has_parameter("requires"):
            if not requires:
                raise logic.EndLogic(http.RespondForbidden())
        self.extend_context(context)
        yield logic.DeferNodeContents(self)
        if "_return" in context:
            scope = get_return(context.get("_return"))
        else:
            if withscope:
                scope = context[".call"]
            else:
                scope = {}
        if scope is not None and not isinstance(scope, Content):
            app = self.get_app(context)
            template = self.resolve_templates(app, templates)
            # if content is None and self.younger_sibling.check_type(namespaces.default, 'content'):
            #     content = self.younger_sibling
            if content is not None:
                if not hasattr(scope, "items"):
                    self.throw(
                        "view.bad-return",
                        "View should return a dict or other mapping object (not {})".format(
                            to_expression(scope)
                        ),
                    )
                for defer in self.generate_content(context, content, app, td=scope):
                    yield defer
                context.copy("_content", "_return")
            elif template is not None:
                render_container = RenderContainer.create(app, template=template)
                render_container.update(scope)
                context["_return"] = render_container
class AppUrlsProxy(object):
    def __moyacontext__(self, context):
        urls = context.get(".urls")
        app = context[".app"]
        return urls[app.name]


class Trace(object):
    def __init__(self, target, app=None, route_data=None, response=None):
        self.target = target
        self.app = app
        self.route_data = route_data
        if isinstance(response, http.RespondWith):
            self.response = text_type(response)
        else:
            self.response = None

    def __moyarepr__(self, context):
        return "<trace>"

    @property
    def target_html(self):
        return syntax.highlight("target", self.target, line_numbers=False)
class GetLocale(DataSetter):
    """Get an object containing locale information"""

    class Help:
        synopsis = "get locale information"

    locale = Attribute("Locale name")

    def logic(self, context):
        _locale = self.locale(context)
        try:
            locale = LocaleProxy(_locale)
        except:
            self.throw(
                "get-locale.unknown-locale",
                '''Couldn't get locale information for "{}"'''.format(_locale),
            )
        self.set_context(context, self.dst(context), locale)


class SetLocale(LogicElement):
    """Switch the current locale"""

    class Help:
        synopsis = "switch the current locale"

    locale = Attribute("Locale name")

    def logic(self, context):
        _locale = self.locale(context)
        try:
            locale = LocaleProxy(_locale)
        except:
            self.throw(
                "change-locale.unknown-locale",
                '''Couldn't get locale information for "{}"'''.format(_locale),
            )
        context[".locale"] = locale


class SetLanguage(LogicElement):
    """Set the current language"""

    class Help:
        synopsis = "set the current language"

    language = Attribute("Language code")

    def logic(self, context):
        language = self.language(context)
        if not isinstance(language, list):
            language = [language]
        context[".languages"] = language
class Server(LogicElement):
    """Defines a server"""

    class Help:
        synopsis = "define a server"

    def post_build(self, context):
        self.urlmapper = URLMapper()
        self.middleware = {"request": URLMapper(), "response": URLMapper()}
        self.fs = None
        super(Server, self).post_build(context)

    def startup(self, archive, context, fs, breakpoint=False):
        self.fs = fs
        archive.build_libs()
        try:
            if breakpoint:
                logic.debug(archive, context, logic.DeferNodeContents(self))
            else:
                logic.run_logic(archive, context, logic.DeferNodeContents(self))
        except Exception as e:
            # import traceback
            # traceback.print_exc(e)
            raise
        archive.build_libs()

    def get_url(self, app_name, url_name, params=None):
        app_routes = self.urlmapper.get_routes(app_name)
        url = None
        # Could be multiple routes for this name.
        # Try each one and return the url that doesn't fail.
        for route in app_routes[:-1]:
            try:
                url = route.target.get_url(url_name, params, base_route=route)
            except RouteError:
                continue
            else:
                break
        else:
            # Last one; if this throws an exception, we want it to propagate
            route = app_routes[-1]
            url = route.target.get_url(url_name, params, base_route=route)
        return url

    def trace(self, archive, url, method="GET"):
        for route_match in self.urlmapper.iter_routes(url, method):
            route_data = route_match.data
            target = route_match.target
            if target:
                for element_ref in target:
                    app = archive.get_app(route_data.get("app", None))
                    yield (route_data, archive.get_element(element_ref, app))
    def process_response(self, context, response):
        cookies = context.root.get("cookiejar", {})
        for cookie in itervalues(cookies):
            cookie.set(response)
        for cookie_name in cookies.deleted_cookies:
            response.delete_cookie(cookie_name)
        try:
            if not response.date and "now" in context.root:
                response.date = context.root["now"]._dt
        except:
            # Don't want to discard the response here, so log the exception
            log.exception("error setting response date")
        return response

    def render_response(self, archive, context, obj, status=StatusCode.ok):
        response = Response(
            charset=py2bytes("utf8"), status=int(getattr(obj, "http_status", status))
        )
        result = render_object(obj, archive, context, "html")
        response.text = text_type(result)
        return self.process_response(context, response)

    def _dispatch_result(self, archive, context, request, result, status=StatusCode.ok):
        if result is None:
            return None
        if isinstance(result, ReplaceRequest):
            return result
        if isinstance(result, RespondWith):
            return self.dispatch_handler(
                archive, context, request, status=result.status, headers=result.headers
            )
        if not isinstance(result, Response):
            status = int(getattr(result, "http_status", None) or status)
            response = MoyaResponse(charset=py2bytes("utf8"), status=status)
            html = render_object(result, archive, context, "html")
            response.text = html
        else:
            response = result
        return self.process_response(context, response)

    def handle_error(self, archive, context, request, error, exc_info):
        context.safe_delete("._callstack")
        context.safe_delete(".call")
        return self.dispatch_handler(
            archive,
            context,
            request,
            status=StatusCode.internal_error,
            error=error,
            exc_info=exc_info,
        )
    def _dispatch_mapper(
        self, archive, context, mapper, url, method="GET", status=None, breakpoint=False
    ):
        """Loop to call targets for a url/method/status combination"""
        dispatch_trace = context.root.get("_urltrace", [])
        if breakpoint:
            call = archive.debug_call
        else:
            call = archive.call
        root = context.root
        for route_data, target, name in mapper.iter_routes(url, method, status):
            root.update(urlname=name, headers={})
            if target:
                for element_ref in target:
                    app, element = archive.get_element(element_ref)
                    if element:
                        app = app or archive.get_app(route_data.get("app", None))
                        context.root.update(url=route_data)
                        result = call(element_ref, context, app, url=route_data)
                        dispatch_trace.append(
                            Trace(element_ref, app, route_data, result)
                        )
                        if result is not None:
                            yield result
                    else:
                        dispatch_trace.append(Trace(element_ref))
            else:
                dispatch_trace.append(Trace(element_ref))

    @classmethod
    def set_site(cls, archive, context, request):
        """Set site data for a request"""
        domain = request.host
        if ":" in domain:
            domain = domain.split(":", 1)[0]
        site_instance = archive.sites.match(domain, context=context)
        if site_instance is None:
            log.error(
                'no site matching domain "{domain}", consider adding [site:{domain}] to settings'.format(
                    domain=domain
                )
            )
            return None
        context.root["sys"]["site"] = site_instance
        try:
            context.root["sys"]["base"] = archive.project_fs.getsyspath("/")
        except NoSysPath:
            context.root["sys"]["base"] = None
        context.root["site"] = site_instance._data
        return site_instance

    @classmethod
    def _get_tz(self, context, default_timezone="UTC", user_timezone=False):
        """lazy insertion of .tz"""
        if context is None:
            context = pilot.context
        tz = None
        if user_timezone:
            tz = context.get(".user.timezone", None)
        if not tz:
            tz = context.get(".sys.site.timezone", None)
        if not tz:
            tz = default_timezone
        if not tz:
            return None
        try:
            return Timezone(tz)
        except pytz.UnknownTimeZoneError:
            log.error("invalid value for timezone '%s', defaulting to UTC", tz)
            return Timezone("UTC")
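`_get_tz` above resolves a timezone through a fallback chain (user setting, then site setting, then a default) and degrades invalid names to UTC instead of raising. The same chain sketched with the standard-library `zoneinfo` module rather than moya's `Timezone`/pytz wrappers; the helper name and parameters are illustrative:

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError


def resolve_timezone(user_tz=None, site_tz=None, default_tz='UTC'):
    # First truthy value in the chain wins, as in _get_tz above.
    name = user_tz or site_tz or default_tz
    if not name:
        return None
    try:
        return ZoneInfo(name)
    except (ZoneInfoNotFoundError, ValueError):
        # Invalid names degrade to UTC rather than failing the request.
        return ZoneInfo('UTC')
```

Degrading rather than raising matters here because timezone resolution happens lazily per request, where an exception would turn a misconfigured setting into a 500.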
    def run_middleware(self, stage, archive, context, request, url, method):
        middleware = self.middleware[stage]
        try:
            for result in self._dispatch_mapper(
                archive, context, middleware, url, method
            ):
                response = self._dispatch_result(archive, context, request, result)
                if response:
                    return response
        except Exception as e:
            return self.handle_error(archive, context, request, e, sys.exc_info())
    def _populate_context(self, archive, context, request):
        """Add standard values to the context."""
        populate_context = {
            "permissions": {},
            "libs": archive.libs,
            "apps": archive.apps,
            "debug": archive.debug,
            "develop": archive.develop,
            "sys": {},
            "server": self,
            "urls": self.urlmapper,
            "now": ExpressionDateTime.moya_utcnow(),
            "appurls": AppUrlsProxy(),
            "moya": {"version": __version__},
            "enum": archive.enum,
            "accept_language": list(request.accept_language),
            "media_url": archive.media_url,
            "filters": archive.filters,
            "secret": archive.secret,
        }
        context.root.update(populate_context)
        set_dynamic(context)

    def dispatch(self, archive, context, request, breakpoint=False):
        """Dispatch a request to the server and return a response object."""
        url = request.path_info
        method = request.method

        self._populate_context(archive, context, request)

        site = self.set_site(archive, context, request)
        if site is None:
            # No site match, return a 404
            return self.dispatch_handler(
                archive, context, request, StatusCode.not_found
            )

        root = context.root
        if site.head_as_get and method == "HEAD":
            # Treat HEAD requests as GET requests
            request = request.copy()
            request.method = "GET"
            root["request"] = request
            method = "GET"

        root["locale"] = site.locale
        context.set_lazy(
            ".tz",
            self._get_tz,
            None,
            user_timezone=site.user_timezone,
            default_timezone=site.timezone,
        )

        # Request middleware
        response = self.run_middleware(
            "request", archive, context, request, url, method
        )
        if response is not None:
            return response

        def response_middleware(response):
            context.safe_delete("._callstack", ".call")
            context.root["response"] = response
            new_response = self.run_middleware(
                "response", archive, context, request, url, method
            )
            return new_response or response

        # Run main views
        root["urltrace"] = root["_urltrace"] = []
        context.safe_delete("._callstack", ".call")
        response = None
        try:
            for result in self._dispatch_mapper(
                archive, context, self.urlmapper, url, method, breakpoint=breakpoint
            ):
                response = self._dispatch_result(archive, context, request, result)
                if response:
                    response = response_middleware(response)
                    db.commit_sessions(context)
                    return response
                else:
                    db.commit_sessions(context)
        except Exception as e:
            db.rollback_sessions(context, close=False)
            return self.handle_error(archive, context, request, e, sys.exc_info())
        finally:
            for thread in context.get("._threads", []):
                thread.wait()
            context.safe_delete("._threads")
            db.close_sessions(context)

        root["_urltrace"] = []

        # Append slash and redirect if the url doesn't end in a slash
        if not url.endswith("/") and site.append_slash:
            # Check in advance if the url ending with / actually maps to anything
            if method in ("HEAD", "GET") and self.urlmapper.has_route(
                url + "/", method, None
            ):
                _, ext = splitext(url)
                # Don't redirect when the filename has an extension
                if not ext:
                    response = MoyaResponse(
                        status=StatusCode.temporary_redirect, location=url + "/"
                    )
                    return response

        if request.method in ["GET", "POST", "HEAD"]:
            status_code = StatusCode.not_found
        else:
            status_code = StatusCode.method_not_allowed

        # No response returned, handle 404
        return self.dispatch_handler(archive, context, request, status=status_code)
def dispatch_handler(
self,
archive,
context,
request,
status=404,
headers=None,
error=None,
exc_info=None,
):
"""Respond to a status code"""
context.safe_delete(
"._callstack",
".call",
".td",
"._td",
".contentstack",
".content",
".headers",
)
if headers is not None:
context.root["headers"] = headers
moya_trace = None
error2 = None
moya_trace2 = None
if error is not None:
moya_trace = getattr(error, "moya_trace", None)
if moya_trace is None:
try:
moya_trace = trace.build(
context, None, None, error, exc_info, request
)
except Exception as e:
# import traceback; traceback.print_exc(e)
raise
try:
url = request.path_info
method = request.method
for result in self._dispatch_mapper(
archive, context, self.urlmapper, url, method, status
):
if not isinstance(result, RespondWith):
return self._dispatch_result(
archive, context, request, result, status=status
)
except Exception as e:
log.exception("error in dispatch_handler")
# from traceback import print_exc
# print_exc()
if status != StatusCode.internal_error:
return self.handle_error(archive, context, request, e, sys.exc_info())
error2 = e
moya_trace2 = getattr(error2, "moya_trace", None)
if moya_trace2 is None:
moya_trace2 = trace.build(
context, None, None, error2, sys.exc_info(), request
)
if error is not None:
log.error("unhandled exception ({})".format(text_type(error).lstrip()))
try:
context[".console"].obj(context, moya_trace)
except Exception:
pass
context.reset()
context.safe_delete(
"._callstack",
".call",
".td",
"._td",
".contentstack",
".content",
"_funccalls",
"._for",
"_for_stack",
)
# pilot.context = context
# No handlers have been defined for this status code
# We'll look for a template named <status code>.html and render that
template_filename = "{}.html".format(int(status))
try:
response = MoyaResponse(charset=py2bytes("utf8"), status=status)
rc = RenderContainer.create(None, template=template_filename)
rc["request"] = request
rc["status"] = status
rc["error"] = error
rc["trace"] = moya_trace
rc["error2"] = error
rc["trace2"] = moya_trace2
rc["moya_error"] = (
getattr(moya_trace.exception, "type", None) if moya_trace else None
)
if status == 500:
archive.fire(context, "sys.unhandled-exception", data=rc)
response.text = render_object(rc, archive, context, "html")
return response
except MissingTemplateError:
pass
except Exception as e:
# import traceback
# traceback.print_exc(e)
# print(e)
log.error("unable to render %s (%s)", template_filename, text_type(e))
# Render a very basic response
response = Response(charset=py2bytes("utf8"), status=status)
url = request.path_info
try:
response.text = standard_response(
status, url, error, moya_trace, debug=archive.debug
)
except Exception as e:
log.exception("error generating standard response")
return response
| 33.81256 | 213 | 0.566808 | 3,678 | 34,996 | 5.283578 | 0.126427 | 0.023054 | 0.022693 | 0.01235 | 0.297792 | 0.235784 | 0.195647 | 0.161118 | 0.133639 | 0.100242 | 0 | 0.001718 | 0.334695 | 34,996 | 1,034 | 214 | 33.845261 | 0.832925 | 0.068465 | 0 | 0.330909 | 0 | 0 | 0.098984 | 0.003128 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042424 | false | 0.002424 | 0.046061 | 0.002424 | 0.213333 | 0.001212 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
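The append-slash redirect near the end of `Server.dispatch` above reduces to a small predicate: only GET/HEAD requests, only when the slash-terminated route actually exists, and never when the path looks like a filename. A minimal standalone sketch of that rule (the `has_route` callable here is a caller-supplied stand-in for `self.urlmapper.has_route`, not the real mapper API):

```python
from os.path import splitext


def should_append_slash(url, method, has_route):
    """Mirror the dispatch() append-slash check: redirect to url + '/'
    only for GET/HEAD, only if that route exists, and only when the
    path carries no file extension."""
    if url.endswith("/"):
        return False
    if method not in ("HEAD", "GET"):
        return False
    if not has_route(url + "/"):
        return False
    _, ext = splitext(url)
    return not ext
```

Used this way, the server would issue the temporary redirect for `/blog` but leave `/logo.png` and non-idempotent methods alone.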
a4a9ae71eefe78d31d10c9cf6561fad89222ead9 | 13,716 | py | Python | iter_app/src/environment.py | Wisc-HCI/ITER | 2ae8a5f0ae17783db4db25198ec0d97e72cd7296 | [
"MIT"
] | 1 | 2021-04-07T15:54:44.000Z | 2021-04-07T15:54:44.000Z | iter_app/src/environment.py | Wisc-HCI/ITER | 2ae8a5f0ae17783db4db25198ec0d97e72cd7296 | [
"MIT"
] | null | null | null | iter_app/src/environment.py | Wisc-HCI/ITER | 2ae8a5f0ae17783db4db25198ec0d97e72cd7296 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
'''
Environment Node
Author Curt Henrichs
Date 5-16-19
Provides environment context for ITER runner.
'''
# __MODES__ for environment object type
MODE_COLLISION_MOVEIT = 'collision_moveit'
MODE_MARKER = 'marker'
import os
import tf
import yaml
import json
import time
import uuid
import rospy
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from tf.transformations import *
from visualization_msgs.msg import *
from iter_app.msg import EnvironmentObject
from std_msgs.msg import Header, ColorRGBA
from interactive_markers.interactive_marker_server import *
from geometry_msgs.msg import Pose, Vector3, Quaternion, TransformStamped, PoseStamped
from iter_app.srv import GetARTagPose, GetARTagPoseResponse
from iter_app.srv import SetVisionParams, SetVisionParamsResponse
from iter_app.srv import GetVisionObject, GetVisionObjectResponse
from iter_app.srv import ClearTaskObjects, ClearTaskObjectsResponse
from iter_app.srv import ConnectTaskObject, ConnectTaskObjectResponse
from iter_app.srv import ReleaseTaskObject, ReleaseTaskObjectResponse
from iter_app.srv import GenerateTaskObjects, GenerateTaskObjectsResponse
from iter_app.srv import GetEnvironmentState, GetEnvironmentStateResponse
from iter_app.srv import CalibrateRobotToCamera, CalibrateRobotToCameraResponse
rospy.init_node('environment')
mode = rospy.get_param('~mode',MODE_MARKER)
if mode == MODE_COLLISION_MOVEIT:
import iter_app_tools.environment_interface.collision_moveit as task_env
elif mode == MODE_MARKER:
import iter_app_tools.environment_interface.marker as task_env
else:
raise Exception('Invalid environment mode selected')
import iter_app_tools.environment_interface.vision as vision_env
CALIBRATION_FILEPATH = os.path.join(os.path.dirname(__file__),'config/vision_pose_calibration.yaml')
class Environment:
def __init__(self,calibrate_ar_tag_id):
self._load_calibration_file()
self._tf_listener = tf.TransformListener()
self._tf_broadcaster = tf.TransformBroadcaster()
self._calibrate_ar_tag_id = calibrate_ar_tag_id
self._set_vision_params_srv = rospy.Service("/environment/set_vision_params",SetVisionParams,self._set_vision_params)
self._gen_task_objs_srv = rospy.Service("/environment/generate_task_objects",GenerateTaskObjects,self._generate_task_objs)
self._clear_task_objs_srv = rospy.Service("/environment/clear_task_objects",ClearTaskObjects,self._clear_task_objs)
self._connect_task_obj_srv = rospy.Service("/environment/connect_task_object",ConnectTaskObject,self._connect_task_obj)
self._release_task_obj_srv = rospy.Service("/environment/release_task_object",ReleaseTaskObject,self._release_task_obj)
self._get_vision_obj_srv = rospy.Service("/environment/get_vision_object",GetVisionObject,self._get_vision_obj)
self._cal_bot_to_cam_srv = rospy.Service("/environment/calibrate_robot_to_camera",CalibrateRobotToCamera,self._cal_bot_to_cam)
self._get_state_srv = rospy.Service("/environment/get_state",GetEnvironmentState,self._get_state)
self._get_ar_tag_pose = rospy.Service("/environment/get_ar_tag_pose",GetARTagPose,self._get_ar_tag_pose)
def _load_calibration_file(self):
fin = open(CALIBRATION_FILEPATH,'r')
pose_data = yaml.safe_load(fin)
fin.close()
# select mode
#self._calibration_mode = 'linalg'
#self._calibration_mode = 'knn-pose'
self._calibration_mode = 'knn-offset'
#self._calibration_mode = 'neural-offset'
# format data
count = 0
X = None
for p in pose_data['initial']:
if count == 0:
X = np.matrix([[p['position']['x']],[p['position']['y']],[p['position']['z']],[1]])
else:
_x = np.matrix([[p['position']['x']],[p['position']['y']],[p['position']['z']],[1]])
X = np.append(X,_x,axis=1)
count += 1
count = 0
Y = None
for p in pose_data['offset']:
if count == 0:
Y = np.matrix([[p['position']['x']],[p['position']['y']],[p['position']['z']],[1]])
else:
_y = np.matrix([[p['position']['x']],[p['position']['y']],[p['position']['z']],[1]])
Y = np.append(Y,_y,axis=1)
count += 1
# generate model based on mode
if self._calibration_mode == 'linalg':
print 'Linear Algebra'
if len(pose_data['initial']) == len(pose_data['offset']) and len(pose_data['offset']) >= 4:
print 'Solving'
invX = np.linalg.pinv(X)
self._pose_transform_matrix = Y * invX
else:
print 'Default'
self._pose_transform_matrix = np.matrix([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
elif self._calibration_mode == 'knn-pose':
print 'KNN Pose'
self._model = KNeighborsRegressor(n_neighbors=3, weights='distance')
self._model.fit(X.transpose(),Y.transpose())
elif self._calibration_mode == 'knn-offset':
print 'KNN Offset'
O = np.subtract(Y,X)
self._model = KNeighborsRegressor(n_neighbors=2, weights='distance')
self._model.fit(X.transpose(),O.transpose())
elif self._calibration_mode == 'neural-offset':
print 'Neural Offset'
O = np.subtract(Y,X)
self._model = MLPRegressor(hidden_layer_sizes=(10,10),activation='relu',solver='adam',learning_rate='adaptive', learning_rate_init=0.01, alpha=0.01, verbose=True)
self._model.fit(X.transpose(),O.transpose())
def _create_manual_calibration_marker(self,pos,rot):
interactive_marker = InteractiveMarker()
interactive_marker.header.frame_id = "calibration_point_1"
interactive_marker.name = "camera_marker"
interactive_marker.description = "Camera TF rotation marker"
interactive_marker.scale = 0.1
interactive_marker.pose = Pose(position=Vector3(x=pos[0],y=pos[1],z=pos[2]),
orientation=Quaternion(x=rot[0],y=rot[1],z=rot[2],w=rot[3]))
box_marker = Marker()
box_marker.type = Marker.CUBE
box_marker.scale = Vector3(x=0.05,y=0.05,z=0.05)
box_marker.color = ColorRGBA(r=0.5,g=0.5,b=0.5,a=0.75)
box_control = InteractiveMarkerControl()
box_control.always_visible = True
box_control.markers.append(box_marker)
interactive_marker.controls.append(box_control)
controls = {
'rotate_x': {'orientation': Quaternion(x=1,y=0,z=0,w=1), 'mode': InteractiveMarkerControl.ROTATE_AXIS},
'rotate_y': {'orientation': Quaternion(x=0,y=0,z=1,w=1), 'mode': InteractiveMarkerControl.ROTATE_AXIS},
'rotate_z': {'orientation': Quaternion(x=0,y=1,z=0,w=1), 'mode': InteractiveMarkerControl.ROTATE_AXIS},
'move_x': {'orientation': Quaternion(x=1,y=0,z=0,w=1), 'mode': InteractiveMarkerControl.MOVE_AXIS},
'move_y': {'orientation': Quaternion(x=0,y=0,z=1,w=1), 'mode': InteractiveMarkerControl.MOVE_AXIS},
'move_z': {'orientation': Quaternion(x=0,y=1,z=0,w=1), 'mode': InteractiveMarkerControl.MOVE_AXIS}
}
for key in controls.keys():
control = InteractiveMarkerControl()
control.name = key
control.orientation = controls[key]['orientation']
control.interaction_mode = controls[key]['mode']
interactive_marker.controls.append(control)
return interactive_marker
def _pose_msg_to_tf(self,msg):
pos = (msg.position.x,msg.position.y,msg.position.z)
rot = (msg.orientation.x,msg.orientation.y,msg.orientation.z,msg.orientation.w)
return pos, rot
def _generate_task_objs(self, request):
# Generates new markers of objects defined by array of objects provided
return GenerateTaskObjectsResponse(status=task_env.generate_dynamic_environment(request.objects))
def _clear_task_objs(self, request):
# Clears set of objects defined by array of string IDs
return ClearTaskObjectsResponse(status=task_env.clear_dynamic_environment(request.ids,request.all))
def _connect_task_obj(self, request):
# Connects object to robot
# Provide a pose which is used for release to calculate the transformation
# over movement used to plot new object
status = task_env.connect_obj_to_robot(request.id,request.pose)
return ConnectTaskObjectResponse(status=status)
def _release_task_obj(self, request):
# Disconnects object from robot
# Provide a pose which is used to calculate the transformation over
# movement used to plot new object
status = task_env.disconnect_obj_from_robot(request.id,request.pose)
return ReleaseTaskObjectResponse(status=status)
def _get_vision_obj(self, request):
# Finds a object from vision set that meets the criteria given.
# Converts to task object with ID.
# Returns pose of object with ID.
type = None
if request.type == 'large':
type = vision_env.BLOCK_LARGE
elif request.type == 'small':
type = vision_env.BLOCK_SMALL
elif request.type == 'unknown':
type = vision_env.BLOCK_UNKNOWN
id, pose = vision_env.get_block(type)
print '\n\n', pose, '\n\n'
response = GetVisionObjectResponse()
response.status = id is not None
if not response.status:
return response
response.vision_id = 'block_{0}'.format(id)
response.pose = self._tf_listener.transformPose(request.frame_id,PoseStamped(pose=pose,header=Header(frame_id='/map'))).pose
if not request.disable_calibrated_offset:
response.pose.position = self._calibration_offset(response.pose.position)
response.task_id = response.vision_id + '_' + str(uuid.uuid1().hex)
response.status = task_env.generate_dynamic_environment([EnvironmentObject(
representation=EnvironmentObject.REPRESENTATION_BOX,
id=response.task_id,
size=Vector3(0.1,0.1,0.1), #Note, this is for representation only
pose=response.pose
)])
return response
def _cal_bot_to_cam(self, request):
# probe camera to robot transform, note robot's ar tag must be within
# camera's field of view
# pre-process poses into tfs
eePos, eeRot = self._pose_msg_to_tf(request.ee_pose)
gtaPos, gtaRot = self._pose_msg_to_tf(request.tag_grip_tf)
print '\n\n\n', gtaPos, '\n\n', gtaRot, '\n\n\n'
# find calibration tag
tagId = request.ar_tag_id if request.ar_tag_id != "" else self._calibrate_ar_tag_id
status = True
# update interactive marker
if status:
self._calibration_marker.pose=Pose(
position=Vector3(x=gtaPos[0],y=gtaPos[1],z=gtaPos[2]),
orientation=Quaternion(x=gtaRot[0],y=gtaRot[1],z=gtaRot[2],w=gtaRot[3]))
self._interactive_marker_server.applyChanges()
return CalibrateRobotToCameraResponse(status=status)
def _get_state(self, request):
return GetEnvironmentStateResponse(
grasped_task_objects=task_env.get_grasped_ids(),
all_task_objects=task_env.get_all_task_ids(),
all_vision_objects=vision_env.get_vision_ids(),
all_ar_tags=vision_env.get_ar_ids())
def _set_vision_params(self, request):
params = json.loads(request.params)
status = vision_env.set_vision_params(params)
return SetVisionParamsResponse(status=status)
def _get_ar_tag_pose(self, request):
p_raw = vision_env.get_ar_tag(request.tag_id)
status = p_raw is not None
pose = Pose()
if status:
p_tf = self._tf_listener.transformPose(request.frame_id,PoseStamped(pose=p_raw,header=Header(frame_id='/map'))).pose
pose.position.x = p_tf.position.x + request.offset.position.x
pose.position.y = p_tf.position.y + request.offset.position.y
pose.position.z = p_tf.position.z + request.offset.position.z
pose.orientation = request.offset.orientation
response = GetARTagPoseResponse()
response.status = status
response.pose = pose
return response
def _calibration_offset(self, position):
if self._calibration_mode == 'linalg':
X = np.matrix([[position.x],[position.y],[position.z],[1]])
Y = self._pose_transform_matrix * X
return Vector3(x=Y[0,0]/Y[3,0],
y=Y[1,0]/Y[3,0],
z=Y[2,0]/Y[3,0])
elif self._calibration_mode == 'knn-pose':
X = np.matrix([[position.x,position.y,position.z,1]])
Y = self._model.predict(X)
return Vector3(x=Y[0,0],y=Y[0,1],z=Y[0,2])
elif self._calibration_mode == 'knn-offset':
X = np.matrix([[position.x,position.y,position.z,1]])
Y = self._model.predict(X)
return Vector3(x=X[0,0]+Y[0,0],y=X[0,1]+Y[0,1],z=X[0,2]+Y[0,2])
elif self._calibration_mode == 'neural-offset':
X = np.matrix([[position.x,position.y,position.z,1]])
Y = self._model.predict(X)
return Vector3(x=X[0,0]+Y[0,0],y=X[0,1]+Y[0,1],z=X[0,2]+Y[0,2])
if __name__ == "__main__":
calibrate_tag = rospy.get_param('~calibrate_ar_tag_id',None)
env = Environment(calibrate_tag)
rospy.spin()
| 43.542857 | 174 | 0.664261 | 1,779 | 13,716 | 4.89095 | 0.173131 | 0.004367 | 0.026204 | 0.014481 | 0.332376 | 0.256177 | 0.168142 | 0.138375 | 0.125158 | 0.112286 | 0 | 0.015673 | 0.218504 | 13,716 | 314 | 175 | 43.681529 | 0.796063 | 0.068241 | 0 | 0.153509 | 0 | 0 | 0.078267 | 0.024666 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.122807 | null | null | 0.035088 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
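The `'linalg'` calibration mode in `Environment._load_calibration_file` above solves `Y = M * X` for a 4x4 transform over homogeneous points via the pseudoinverse, and `_calibration_offset` then applies `M` and dehomogenizes. A self-contained sketch of that idea (helper names here are hypothetical, not part of the node):

```python
import numpy as np


def fit_affine_calibration(initial_pts, offset_pts):
    """Fit a 4x4 transform M with Y ~= M @ X over homogeneous columns,
    as the 'linalg' mode does; fall back to identity with < 4 samples."""
    X = np.array([[x, y, z, 1.0] for (x, y, z) in initial_pts]).T  # 4 x N
    Y = np.array([[x, y, z, 1.0] for (x, y, z) in offset_pts]).T   # 4 x N
    if X.shape[1] < 4:
        return np.eye(4)
    return Y @ np.linalg.pinv(X)


def apply_calibration(M, point):
    """Apply M to a 3D point and dehomogenize, like _calibration_offset."""
    x, y, z = point
    h = M @ np.array([x, y, z, 1.0])
    return (h[0] / h[3], h[1] / h[3], h[2] / h[3])
```

With at least four linearly independent homogeneous samples, a pure translation between the point sets is recovered exactly (up to floating-point error).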
8ef5c654eb24fbff56f7dc6c657508169778666e | 721 | py | Python | iast/serializers/department.py | luzhongyang/DongTai-webapi | f07b2b1bc1222999d0bb7e3300e65c953ee966f5 | [
"Apache-2.0"
] | 6 | 2021-09-01T07:37:37.000Z | 2022-02-10T08:28:47.000Z | iast/serializers/department.py | luzhongyang/DongTai-webapi | f07b2b1bc1222999d0bb7e3300e65c953ee966f5 | [
"Apache-2.0"
] | 51 | 2021-11-09T09:19:05.000Z | 2022-02-10T02:37:04.000Z | iast/serializers/department.py | luzhongyang/DongTai-webapi | f07b2b1bc1222999d0bb7e3300e65c953ee966f5 | [
"Apache-2.0"
] | 21 | 2021-09-01T06:32:19.000Z | 2022-03-03T03:23:37.000Z | #!/usr/bin/env python
# -*- coding:utf-8 -*-
# author:owefsad
# software: PyCharm
# project: lingzhi-webapi
from rest_framework import serializers
from dongtai.models import User
from dongtai.models.department import Department
class DepartmentSerializer(serializers.ModelSerializer):
user_count = serializers.SerializerMethodField()
created = serializers.SerializerMethodField()
class Meta:
model = Department
fields = ('id', 'name', 'create_time', 'update_time', 'user_count', 'created')
def get_user_count(self, obj):
return obj.users.count()
def get_created(self, obj):
user = User.objects.filter(id=obj.created_by).first()
return user.get_username()
| 27.730769 | 86 | 0.710125 | 83 | 721 | 6.048193 | 0.578313 | 0.053785 | 0.067729 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001681 | 0.174757 | 721 | 25 | 87 | 28.84 | 0.842017 | 0.135922 | 0 | 0 | 0 | 0 | 0.072816 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.214286 | 0.071429 | 0.785714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
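`DepartmentSerializer` above leans on Django REST Framework's `SerializerMethodField` convention: a declared field named `foo` is resolved by calling `get_foo(obj)` on the serializer. A framework-free toy illustrating just that lookup convention (these classes are illustrative stand-ins, not DRF's actual implementation):

```python
class MethodFieldSerializer:
    """Toy stand-in for DRF's SerializerMethodField resolution: each
    name in `method_fields` is answered by a get_<name>(obj) method."""
    method_fields = ()

    def to_representation(self, obj):
        return {name: getattr(self, "get_" + name)(obj)
                for name in self.method_fields}


class DepartmentLikeSerializer(MethodFieldSerializer):
    method_fields = ("user_count",)

    def get_user_count(self, obj):
        # Plays the role of obj.users.count() in the real serializer.
        return len(obj["users"])
```

The naming contract is the whole trick: adding a computed field only requires declaring it and supplying the matching `get_` method.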
8ef69a8e8cd014e8cddf55bfd7f3ad8fb1bda81e | 5,132 | py | Python | src/media/data/nouns.py | cjcodeproj/medialibrary | 466ba475561f7701fe41ebe196aaf789a0aa7237 | [
"MIT"
] | null | null | null | src/media/data/nouns.py | cjcodeproj/medialibrary | 466ba475561f7701fe41ebe196aaf789a0aa7237 | [
"MIT"
] | 29 | 2021-09-06T00:46:30.000Z | 2022-03-23T16:47:04.000Z | src/media/data/nouns.py | cjcodeproj/medialibrary | 466ba475561f7701fe41ebe196aaf789a0aa7237 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
'''
Objects for representation of proper nouns used in keywords.
'''
# pylint: disable=too-few-public-methods
# pylint: disable=too-many-branches
# pylint: disable=too-many-instance-attributes
from media.xml.namespaces import Namespaces
class AbstractNoun():
'''
Root class for all nouns
'''
def __init__(self):
self.value = ''
self.sort_value = ''
self.tagname = ''
def __str__(self):
return self.value
def __hash__(self):
return hash(self.value)
def __lt__(self, other):
return self.sort_value < other.sort_value
def __gt__(self, other):
return self.sort_value > other.sort_value
def __eq__(self, other):
return self.sort_value == other.sort_value
class Noun(AbstractNoun):
'''
Simplest class to represent proper nouns
for Thing, Event, Group, Entity
value represents the value that is displayed
sort_value represents the value for sorting
'''
def __init__(self, in_element):
super().__init__()
self.value = in_element.text
self.sort_value = self.value.casefold()
self.tagname = Namespaces.ns_strip(in_element.tag)
class Place(AbstractNoun):
'''
ProperNoun class for a location
Has attributes for every possible aspect
of a location, which is probably going to be
a problem.
'''
def __init__(self, in_place):
super().__init__()
self.generic = ''
self.name = ''
self.city = ''
self.county = ''
self.state = ''
self.country = ''
self.planet = ''
if in_place is not None:
self.tagname = Namespaces.ns_strip(in_place.tag)
self._process(in_place)
def _process(self, in_element):
first_tag = True
major = ''
minor = ''
for child in in_element:
if first_tag:
major = self._build_major_value(child)
first_tag = False
else:
minor = self._build_minor_value(child, minor)
if minor:
minor = '(' + minor + ')'
self.value = major + ' ' + minor
else:
self.value = major
self.sort_value = self.value.casefold()
def _build_major_value(self, in_element):
tagname = Namespaces.ns_strip(in_element.tag)
if tagname == 'generic':
self.generic = in_element.text
if tagname == 'name':
self.name = in_element.text
elif tagname == 'ci':
self.city = in_element.text
elif tagname == 'co':
self.county = in_element.text
elif tagname in ['st', 'pr']:
self.state = in_element.text
elif tagname == 'cn':
self.country = in_element.text
elif tagname == 'planet':
self.planet = in_element.text
major = in_element.text
return major
def _build_minor_value(self, in_element, minor):
tagname = Namespaces.ns_strip(in_element.tag)
if tagname == 'ci':
self.city = in_element.text
elif tagname == 'co':
self.county = in_element.text
elif tagname in ['st', 'pr']:
self.state = in_element.text
elif tagname == 'cn':
self.country = in_element.text
elif tagname == 'planet':
self.planet = in_element.text
if minor:
minor += ', ' + in_element.text
else:
minor = in_element.text
return minor
class Name(AbstractNoun):
'''
Proper noun for the name of a real person.
A real person's name will include
the common components like a given name,
a family name, and maybe a middle name.
This class is more heavily used since it
is the standard name class for crew members
or any other data types that use a name.
'''
def __init__(self, in_element):
super().__init__()
self.given = ''
self.family = ''
self.middle = ''
self.sort = ''
if in_element is not None:
self.tagname = Namespaces.ns_strip(in_element.tag)
self._process(in_element)
def _process(self, in_element):
for child in in_element:
tagname = Namespaces.ns_strip(child.tag)
if tagname == 'gn':
self.given = child.text
if tagname == 'fn':
self.family = child.text
if tagname == 'mn':
self.middle = child.text
self._build_value()
# self._build_sort()
def _build_value(self):
raw = ''
if self.given:
raw += self.given + ' '
if self.family:
raw += self.family
if self.middle:
raw += ' ' + self.middle
self.value = raw
self.sort_value = self.family.casefold() + '_' \
+ self.given.casefold() + '_' + self.middle.casefold()
def __str__(self):
'''
The formal string value should be returned
'''
return f"{self.given} {self.family}"
| 28.511111 | 66 | 0.567225 | 607 | 5,132 | 4.581549 | 0.228995 | 0.097087 | 0.074793 | 0.055016 | 0.347717 | 0.314635 | 0.277958 | 0.277958 | 0.236246 | 0.163251 | 0 | 0 | 0.33067 | 5,132 | 179 | 67 | 28.670391 | 0.809607 | 0.16855 | 0 | 0.336134 | 0 | 0 | 0.020369 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12605 | false | 0 | 0.008403 | 0.042017 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
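`Place._process` above renders a location as a major value followed by the remaining components joined with commas and wrapped in parentheses, e.g. `Name (City, State)`. That string-building rule can be sketched on its own (a hypothetical helper, detached from the XML parsing):

```python
def format_place(major, minors):
    """Mirror Place._process formatting: the first component is the
    major value; any remaining non-empty components are comma-joined
    and parenthesized after it."""
    minor = ", ".join(m for m in minors if m)
    return "{} ({})".format(major, minor) if minor else major
```

So a place with only one component renders bare, while additional components accumulate inside one set of parentheses.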
8ef75298706c9e95500d4b99ead2e2f3a0f95ab6 | 1,325 | py | Python | get_together/views/utils.py | alysivji/GetTogether | 403d9945fff019701de41d081ad4452e771e1ce1 | [
"BSD-2-Clause"
] | 446 | 2018-01-21T09:22:41.000Z | 2022-03-25T17:46:12.000Z | get_together/views/utils.py | alysivji/GetTogether | 403d9945fff019701de41d081ad4452e771e1ce1 | [
"BSD-2-Clause"
] | 272 | 2018-01-03T16:55:39.000Z | 2022-03-11T23:12:30.000Z | get_together/views/utils.py | alysivji/GetTogether | 403d9945fff019701de41d081ad4452e771e1ce1 | [
"BSD-2-Clause"
] | 100 | 2018-01-27T02:04:15.000Z | 2021-09-09T09:02:21.000Z | import math
from django.conf import settings
from django.utils.translation import ugettext_lazy as _
from events.location import get_client_ip, get_geoip
from events.models import Team
KM_PER_DEGREE_LAT = 110.574
KM_PER_DEGREE_LNG = 111.320 # At the equator
DEFAULT_NEAR_DISTANCE = 100 # kilometres
def get_nearby_teams(request, near_distance=DEFAULT_NEAR_DISTANCE):
g = get_geoip(request)
if g.latlng is None or g.latlng[0] is None or g.latlng[1] is None:
print("Could not identify latlng from geoip")
return Team.objects.none()
try:
minlat = g.latlng[0] - (near_distance / KM_PER_DEGREE_LAT)
maxlat = g.latlng[0] + (near_distance / KM_PER_DEGREE_LAT)
minlng = g.latlng[1] - (
near_distance / (KM_PER_DEGREE_LNG * math.cos(math.radians(g.latlng[0])))
)
maxlng = g.latlng[1] + (
near_distance / (KM_PER_DEGREE_LNG * math.cos(math.radians(g.latlng[0])))
)
near_teams = Team.public_objects.filter(
city__latitude__gte=minlat,
city__latitude__lte=maxlat,
city__longitude__gte=minlng,
city__longitude__lte=maxlng,
)
return near_teams
except Exception as e:
print("Error looking for local teams: ", e)
return Team.objects.none()
| 33.974359 | 85 | 0.667925 | 186 | 1,325 | 4.467742 | 0.403226 | 0.075812 | 0.079422 | 0.081829 | 0.262335 | 0.226233 | 0.226233 | 0.226233 | 0.226233 | 0.144404 | 0 | 0.022931 | 0.243019 | 1,325 | 38 | 86 | 34.868421 | 0.805583 | 0.019623 | 0 | 0.125 | 0 | 0 | 0.051698 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.15625 | 0 | 0.28125 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
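`get_nearby_teams` above converts a distance in kilometres into a latitude/longitude bounding box, shrinking the longitude span by `cos(latitude)` because degrees of longitude get narrower away from the equator. The box math in isolation (a sketch with the same constants; `bounding_box` itself is a hypothetical name):

```python
import math

KM_PER_DEGREE_LAT = 110.574
KM_PER_DEGREE_LNG = 111.320  # at the equator


def bounding_box(lat, lng, distance_km):
    """Return (minlat, maxlat, minlng, maxlng) for +/- distance_km
    around (lat, lng), scaling longitude degrees by cos(latitude)
    exactly as get_nearby_teams does."""
    dlat = distance_km / KM_PER_DEGREE_LAT
    dlng = distance_km / (KM_PER_DEGREE_LNG * math.cos(math.radians(lat)))
    return (lat - dlat, lat + dlat, lng - dlng, lng + dlng)
```

At the equator 111.320 km spans exactly one degree of longitude; at 60 degrees north the same distance spans about two degrees, which is why the cosine correction matters.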
8ef8e6caf23a898242e0cad01c4af50ac98c7dfb | 12,399 | py | Python | spar_python/analytics/ta2/parse_circuit_log_test.py | nathanawmk/SPARTA | 6eeb28b2dd147088b6e851876b36eeba3e700f16 | [
"BSD-2-Clause"
] | 37 | 2017-06-09T13:55:23.000Z | 2022-01-28T12:51:17.000Z | spar_python/analytics/ta2/parse_circuit_log_test.py | nathanawmk/SPARTA | 6eeb28b2dd147088b6e851876b36eeba3e700f16 | [
"BSD-2-Clause"
] | null | null | null | spar_python/analytics/ta2/parse_circuit_log_test.py | nathanawmk/SPARTA | 6eeb28b2dd147088b6e851876b36eeba3e700f16 | [
"BSD-2-Clause"
] | 5 | 2017-06-09T13:55:26.000Z | 2021-11-11T03:51:56.000Z | # *****************************************************************
# Copyright 2013 MIT Lincoln Laboratory
# Project: SPAR
# Authors: Tim Meunier
# Description: Unit tests for the performer encrypted
# circuit log file parser
# *****************************************************************
import unittest
import StringIO
import collections
import os
import sys
this_dir = os.path.dirname(os.path.abspath(__file__))
base_dir = os.path.join(this_dir, '..', '..', '..')
sys.path.append(base_dir)
import spar_python.report_generation.ta2.ta2_schema as ta2_schema
import spar_python.analytics.ta2.parse_circuit_log as parse_circuit_log
class ParseCircuitLogTest(unittest.TestCase):
'''Test performer encrypted circuit log parser'''
gold_log = """TEST: these_are_BIG_circuits /home/lincoln/spar-testing/tests/ta2/testfile/IBM2-circuit-001-these_are_BIG_circuits.ts
TIME: 2013-05-30 13:34:43
Invoked from /home/lincoln/spar-testing/bin/
PERFORMER: IBM2
RECOVERY
KEYPARAMS: /home/lincoln/spar-testing/tests/ta2/params/5.params
TIME: 2013-05-30 13:34:44
KEYGEN: 0.00128211
KEYTRANSMIT: 1.1383e-05
KEYSIZE: 7
CIRCUIT: /home/lincoln/spar-testing/tests/ta2/circuits/1.cir
TIME: 2013-05-30 13:34:45
INGESTION: 0.0763259
CIRCUITTRANSMIT: 10.6568
INPUT: /home/lincoln/spar-testing/tests/ta2/inputs/57.input
TIME: 2013-05-30 13:34:55
ENCRYPT: 0.00013619
INPUTTRANSMIT: 4.2838e-05
INPUTSIZE: 202
EVAL: 0.328568
OUTPUTTRANSMIT: 9.15502e-06
OUTPUTSIZE: 1
DECRYPTED RESULT: 1
DECRYPT: 0.000103113
INPUT: /home/lincoln/spar-testing/tests/ta2/inputs/103.input
TIME: 2013-05-30 13:35:08
ENCRYPT: 0.00013619
INPUTTRANSMIT: 4.2838e-05
INPUTSIZE: 202
EVAL: 0.328568
OUTPUTTRANSMIT: 9.15502e-06
OUTPUTSIZE: 1
DECRYPTED RESULT: 1
DECRYPT: 0.000103113
KEYPARAMS: /home/lincoln/spar-testing/tests/ta2/params/15.params
TIME: 2013-05-30 13:35:21
KEYGEN: 0.00555211
KEYTRANSMIT: 1.1383e-08
KEYSIZE: 9
CIRCUIT: /home/lincoln/spar-testing/tests/ta2/circuits/23.cir
TIME: 2013-05-30 13:35:22
INGESTION: 0.0763259
CIRCUITTRANSMIT: 10.6568
KEYPARAMS: /home/lincoln/spar-testing/tests/ta2/params/19.params
TIME: 2013-05-30 13:35:32
KEYGEN: 0.09128211
KEYTRANSMIT: 1.0003e-05
KEYSIZE: 10
CIRCUIT: /home/lincoln/spar-testing/tests/ta2/circuits/102.cir
TIME: 2013-05-30 13:35:34
INGESTION: 0.0763259
CIRCUITTRANSMIT: 10.6568
INPUT: /home/lincoln/spar-testing/tests/ta2/inputs/004.input
TIME: 2013-05-30 13:35:44
ENCRYPT: 0.00013619
INPUTTRANSMIT: 4.2838e-05
INPUTSIZE: 202
EVAL: 0.328568
OUTPUTTRANSMIT: 9.15502e-06
OUTPUTSIZE: 1
DECRYPTED RESULT: 1
DECRYPT: 0.000103113
INPUT: /home/lincoln/spar-testing/tests/ta2/inputs/999.input
TIME: 2013-05-30 13:50:00
ENCRYPT: 0.00013619
INPUTTRANSMIT: 4.2838e-05
INPUTSIZE: 202
EVAL: 0.328568
OUTPUTTRANSMIT: 9.15502e-06
OUTPUTSIZE: 1
DECRYPT: 0.000103113"""
gold_results = collections.defaultdict(list,
{ta2_schema.PERKEYGEN_TABLENAME :
[{ta2_schema.PERKEYGEN_LATENCY : '0.00128211',
ta2_schema.PERKEYGEN_TIMESTAMP : '2013-05-30 13:34:44',
ta2_schema.PERKEYGEN_TESTNAME : 'these_are_BIG_circuits',
ta2_schema.PERKEYGEN_PID : 5,
ta2_schema.PERKEYGEN_PERFORMERNAME : 'IBM2',
ta2_schema.PERKEYGEN_TRANSMITLATENCY : '1.1383e-05',
ta2_schema.PERKEYGEN_KEYSIZE : '7',
ta2_schema.PERKEYGEN_RECOVERY : 1},
{ta2_schema.PERKEYGEN_LATENCY : '0.00555211',
ta2_schema.PERKEYGEN_TIMESTAMP : '2013-05-30 13:35:21',
ta2_schema.PERKEYGEN_TESTNAME : 'these_are_BIG_circuits',
ta2_schema.PERKEYGEN_PID : 15,
ta2_schema.PERKEYGEN_PERFORMERNAME : 'IBM2',
ta2_schema.PERKEYGEN_TRANSMITLATENCY : '1.1383e-08',
ta2_schema.PERKEYGEN_KEYSIZE : '9'},
{ta2_schema.PERKEYGEN_LATENCY : '0.09128211',
ta2_schema.PERKEYGEN_TIMESTAMP : '2013-05-30 13:35:32',
ta2_schema.PERKEYGEN_TESTNAME : 'these_are_BIG_circuits',
ta2_schema.PERKEYGEN_PID : 19,
ta2_schema.PERKEYGEN_PERFORMERNAME : 'IBM2',
ta2_schema.PERKEYGEN_TRANSMITLATENCY : '1.0003e-05',
ta2_schema.PERKEYGEN_KEYSIZE : '10'}],
ta2_schema.PERINGESTION_TABLENAME :
[{ta2_schema.PERINGESTION_LATENCY : '0.0763259',
ta2_schema.PERINGESTION_CID : 1,
ta2_schema.PERINGESTION_TESTNAME : 'these_are_BIG_circuits',
ta2_schema.PERINGESTION_PERFORMERNAME : 'IBM2',
ta2_schema.PERINGESTION_TIMESTAMP : '2013-05-30 13:34:45',
ta2_schema.PERINGESTION_TRANSMITLATENCY : '10.6568'},
{ta2_schema.PERINGESTION_LATENCY : '0.0763259',
ta2_schema.PERINGESTION_CID : 23,
ta2_schema.PERINGESTION_TESTNAME : 'these_are_BIG_circuits',
ta2_schema.PERINGESTION_PERFORMERNAME : 'IBM2',
ta2_schema.PERINGESTION_TIMESTAMP : '2013-05-30 13:35:22',
ta2_schema.PERINGESTION_TRANSMITLATENCY : '10.6568'},
{ta2_schema.PERINGESTION_LATENCY : '0.0763259',
ta2_schema.PERINGESTION_CID : 102,
ta2_schema.PERINGESTION_TESTNAME : 'these_are_BIG_circuits',
ta2_schema.PERINGESTION_PERFORMERNAME : 'IBM2',
ta2_schema.PERINGESTION_TIMESTAMP : '2013-05-30 13:35:34',
ta2_schema.PERINGESTION_TRANSMITLATENCY : '10.6568'}],
ta2_schema.PEREVALUATION_TABLENAME :
[{ta2_schema.PEREVALUATION_DECRYPTIONLATENCY : '0.000103113',
ta2_schema.PEREVALUATION_ENCRYPTIONLATENCY : '0.00013619',
ta2_schema.PEREVALUATION_TIMESTAMP : '2013-05-30 13:34:55',
ta2_schema.PEREVALUATION_INPUTTRANSMITLATENCY : '4.2838e-05',
ta2_schema.PEREVALUATION_TESTNAME : 'these_are_BIG_circuits',
ta2_schema.PEREVALUATION_IID : 57,
ta2_schema.PEREVALUATION_PERFORMERNAME : 'IBM2',
ta2_schema.PEREVALUATION_OUTPUT : '1',
ta2_schema.PEREVALUATION_OUTPUTTRANSMITLATENCY : '9.15502e-06',
ta2_schema.PEREVALUATION_INPUTSIZE : '202',
ta2_schema.PEREVALUATION_OUTPUTSIZE : '1',
ta2_schema.PEREVALUATION_EVALUATIONLATENCY : '0.328568'},
{ta2_schema.PEREVALUATION_DECRYPTIONLATENCY : '0.000103113',
ta2_schema.PEREVALUATION_ENCRYPTIONLATENCY : '0.00013619',
ta2_schema.PEREVALUATION_TIMESTAMP : '2013-05-30 13:35:08',
ta2_schema.PEREVALUATION_INPUTTRANSMITLATENCY : '4.2838e-05',
ta2_schema.PEREVALUATION_TESTNAME : 'these_are_BIG_circuits',
ta2_schema.PEREVALUATION_IID : 103,
ta2_schema.PEREVALUATION_PERFORMERNAME : 'IBM2',
ta2_schema.PEREVALUATION_OUTPUT : '1',
ta2_schema.PEREVALUATION_OUTPUTTRANSMITLATENCY : '9.15502e-06',
ta2_schema.PEREVALUATION_INPUTSIZE : '202',
ta2_schema.PEREVALUATION_OUTPUTSIZE : '1',
ta2_schema.PEREVALUATION_EVALUATIONLATENCY : '0.328568'},
{ta2_schema.PEREVALUATION_DECRYPTIONLATENCY : '0.000103113',
ta2_schema.PEREVALUATION_ENCRYPTIONLATENCY : '0.00013619',
ta2_schema.PEREVALUATION_TIMESTAMP : '2013-05-30 13:35:44',
ta2_schema.PEREVALUATION_INPUTTRANSMITLATENCY : '4.2838e-05',
ta2_schema.PEREVALUATION_TESTNAME : 'these_are_BIG_circuits',
ta2_schema.PEREVALUATION_IID : 4,
ta2_schema.PEREVALUATION_PERFORMERNAME : 'IBM2',
ta2_schema.PEREVALUATION_OUTPUT : '1',
ta2_schema.PEREVALUATION_OUTPUTTRANSMITLATENCY : '9.15502e-06',
ta2_schema.PEREVALUATION_INPUTSIZE : '202',
ta2_schema.PEREVALUATION_OUTPUTSIZE : '1',
ta2_schema.PEREVALUATION_EVALUATIONLATENCY : '0.328568'},
{ta2_schema.PEREVALUATION_DECRYPTIONLATENCY : '0.000103113',
ta2_schema.PEREVALUATION_ENCRYPTIONLATENCY : '0.00013619',
ta2_schema.PEREVALUATION_TIMESTAMP : '2013-05-30 13:50:00',
ta2_schema.PEREVALUATION_INPUTTRANSMITLATENCY : '4.2838e-05',
ta2_schema.PEREVALUATION_TESTNAME : 'these_are_BIG_circuits',
ta2_schema.PEREVALUATION_IID : 999,
ta2_schema.PEREVALUATION_PERFORMERNAME : 'IBM2',
ta2_schema.PEREVALUATION_OUTPUT : '',
ta2_schema.PEREVALUATION_OUTPUTTRANSMITLATENCY : '9.15502e-06',
ta2_schema.PEREVALUATION_INPUTSIZE : '202',
ta2_schema.PEREVALUATION_OUTPUTSIZE : '1',
ta2_schema.PEREVALUATION_EVALUATIONLATENCY : '0.328568',
ta2_schema.PEREVALUATION_STATUS : 'FAILED'}]})
maxDiff = None
def setUp(self):
'''Prepare shared variables for all tests.'''
self.circuit_parser = parse_circuit_log.CircuitParser(':memory:')
def test_parse_log(self):
'''Test log file parsing and results population.'''
test_log = StringIO.StringIO(self.gold_log)
self.circuit_parser.parse_log(test_log)
self.assertEqual(dict(self.circuit_parser.results), dict(self.gold_results))
@unittest.skip("OUTATIME")
def test_process_results(self):
'''Test applying results to the DB.'''
self.circuit_parser.results = self.gold_results
self.circuit_parser.process_results()
### TODO
self.assertTrue(True)
def test_get_id_from_filename(self):
'''Test extracting id from a file path.'''
test_path1 = '/home/lincoln/spar-testing/tests/ta2/circuits/102.cir'
gold_id1 = 102
test_path2 = '201.cir'
gold_id2 = 201
self.assertEqual(gold_id1, self.circuit_parser.get_id_from_filename( \
test_path1))
self.assertEqual(gold_id2, self.circuit_parser.get_id_from_filename( \
test_path2))
def test_is_token_valid(self):
'''Test verification that the found token is in the list of expected
tokens.'''
good_token = 'KEYSIZE'
bad_token = 'DECRYPT'
self.circuit_parser.table_name = ta2_schema.PERKEYGEN_TABLENAME
self.assertTrue(self.circuit_parser.is_token_valid(good_token))
self.assertFalse(self.circuit_parser.is_token_valid(bad_token))
def test_check_tokens(self):
'''Test the check for row completeness.'''
self.circuit_parser.table_name = ta2_schema.PERKEYGEN_TABLENAME
test_row = dict(self.gold_results[self.circuit_parser.table_name][0])
self.circuit_parser.check_tokens(test_row)
self.assertEqual(test_row, self.gold_results \
[self.circuit_parser.table_name][0])
test_bad_row = dict(self.gold_results \
[self.circuit_parser.table_name][0])
gold_bad_row = dict(test_bad_row)
del test_bad_row['keysize']
gold_bad_row['keysize'] = ''
gold_bad_row['status'] = 'FAILED'
self.circuit_parser.check_tokens(test_bad_row)
self.assertEqual(test_bad_row, gold_bad_row)
| 50.198381 | 135 | 0.603678 | 1,319 | 12,399 | 5.421531 | 0.154663 | 0.120822 | 0.153825 | 0.029367 | 0.734303 | 0.715844 | 0.658649 | 0.641728 | 0.586352 | 0.533072 | 0 | 0.119487 | 0.295992 | 12,399 | 246 | 136 | 50.402439 | 0.699737 | 0.052585 | 0 | 0.430556 | 0 | 0.00463 | 0.249936 | 0.080318 | 0 | 0 | 0 | 0.004065 | 0.037037 | 1 | 0.027778 | false | 0 | 0.032407 | 0 | 0.078704 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
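The `test_check_tokens` case above encodes the parser's contract: a row missing an expected token gets that token blanked out and the row is flagged `FAILED`. A dependency-free sketch of that contract (this `check_tokens` is a hypothetical stand-in, not the real `CircuitParser` method):

```python
# hypothetical stand-in for CircuitParser.check_tokens: any expected token
# missing from a parsed row is blanked and the row is flagged FAILED
def check_tokens(row, expected=("keysize", "latency")):
    for token in expected:
        if token not in row:
            row[token] = ''
            row['status'] = 'FAILED'
    return row

row = check_tokens({"latency": "0.005"})
print(row)  # {'latency': '0.005', 'keysize': '', 'status': 'FAILED'}
```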
8efd24e6ffdf51325fc1702ee241c99388f98dff | 3,937 | py | Python | tests/unit/runners/test_asam.py | HudsonWu/mysalt | 8ce2f66e0d0338157923f0ea0dab912a0f43e52e | [
"Apache-2.0"
] | null | null | null | tests/unit/runners/test_asam.py | HudsonWu/mysalt | 8ce2f66e0d0338157923f0ea0dab912a0f43e52e | [
"Apache-2.0"
] | null | null | null | tests/unit/runners/test_asam.py | HudsonWu/mysalt | 8ce2f66e0d0338157923f0ea0dab912a0f43e52e | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
tests.unit.runners.test_asam
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Unit tests for the asam runner
"""
from __future__ import absolute_import, print_function, unicode_literals
import logging
import salt.runners.asam as asam
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, patch
from tests.support.unit import TestCase
log = logging.getLogger(__name__)
class AsamRunnerVerifySslTest(TestCase, LoaderModuleMockMixin):
def setup_loader_modules(self):
opts = {
"asam": {
"prov1.domain.com": {
"username": "TheUsername",
"password": "ThePassword",
}
}
}
return {asam: {"__opts__": opts}}
def test_add_platform(self):
parse_html_content = MagicMock()
get_platform_set_name = MagicMock(return_value="plat-foo")
requests_mock = MagicMock()
# remove_platform
with patch("salt.runners.asam._parse_html_content", parse_html_content), patch(
"salt.runners.asam._get_platformset_name", get_platform_set_name
), patch("salt.runners.asam.requests.post", requests_mock):
asam.add_platform("plat-foo-2", "plat-foo", "prov1.domain.com")
requests_mock.assert_called_with(
'https://prov1.domain.com:3451/config/PlatformSetConfig.html',
auth=('TheUsername', 'ThePassword'),
data={'manual': 'false'},
verify=True
)
def test_remove_platform(self):
parse_html_content = MagicMock()
get_platform_set_name = MagicMock(return_value="plat-foo")
requests_mock = MagicMock()
# remove_platform
with patch("salt.runners.asam._parse_html_content", parse_html_content), patch(
"salt.runners.asam._get_platformset_name", get_platform_set_name
), patch("salt.runners.asam.requests.post", requests_mock):
asam.remove_platform("plat-foo", "prov1.domain.com")
requests_mock.assert_called_with(
"https://prov1.domain.com:3451/config/PlatformConfig.html",
auth=("TheUsername", "ThePassword"),
data={
"manual": "false",
"platformName": "plat-foo",
"platformSetName": "plat-foo",
"postType": "platformRemove",
"Submit": "Yes",
},
verify=True,
)
def test_list_platforms(self):
parse_html_content = MagicMock()
get_platforms = MagicMock(return_value=["plat-foo", "plat-bar"])
requests_mock = MagicMock()
# remove_platform
with patch("salt.runners.asam._parse_html_content", parse_html_content), patch(
"salt.runners.asam._get_platforms", get_platforms
), patch("salt.runners.asam.requests.post", requests_mock):
asam.list_platforms("prov1.domain.com")
requests_mock.assert_called_with(
"https://prov1.domain.com:3451/config/PlatformConfig.html",
auth=("TheUsername", "ThePassword"),
data={"manual": "false"},
verify=True,
)
def test_list_platform_sets(self):
parse_html_content = MagicMock()
get_platform_sets = MagicMock(return_value=["plat-foo", "plat-bar"])
requests_mock = MagicMock()
# remove_platform
with patch("salt.runners.asam._parse_html_content", parse_html_content), patch(
"salt.runners.asam._get_platforms", get_platform_sets
), patch("salt.runners.asam.requests.post", requests_mock):
asam.list_platform_sets("prov1.domain.com")
requests_mock.assert_called_with(
"https://prov1.domain.com:3451/config/PlatformSetConfig.html",
auth=("TheUsername", "ThePassword"),
data={"manual": "false"},
verify=True,
)
| 35.790909 | 87 | 0.618491 | 411 | 3,937 | 5.642336 | 0.211679 | 0.061665 | 0.084088 | 0.103493 | 0.715395 | 0.704614 | 0.690815 | 0.673566 | 0.673566 | 0.673566 | 0 | 0.009193 | 0.254001 | 3,937 | 109 | 88 | 36.119266 | 0.780388 | 0.044704 | 0 | 0.425 | 0 | 0 | 0.281142 | 0.110429 | 0 | 0 | 0 | 0 | 0.05 | 1 | 0.0625 | false | 0.0625 | 0.075 | 0 | 0.1625 | 0.0125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
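All four tests above share one pattern: swap `requests.post` for a `MagicMock` via `patch`, run the code under test, then verify the call with `assert_called_with`. A self-contained sketch of the same pattern using only the standard library (`json.dumps` stands in here for `requests.post`, which would pull in a network library):

```python
from unittest.mock import MagicMock, patch
import json

def save(payload):
    # production code path we want to intercept in a test
    return json.dumps(payload)

dumps_mock = MagicMock(return_value="{}")
with patch("json.dumps", dumps_mock):
    result = save({"manual": "false"})

# verify the exact arguments, just like requests_mock.assert_called_with above
dumps_mock.assert_called_with({"manual": "false"})
print(result)  # {}
```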
8efe2c4eaba4d7937fbcd9cb1f00f290f7aabadc | 99 | py | Python | 08-List-Comprehensions/02-List-Comprehension-with-If-Conditional/main.py | 0x00000024/learn-python | 97057dc427feaf8e6da5ca373e7e02d4a1b949ae | [
"MIT"
] | null | null | null | 08-List-Comprehensions/02-List-Comprehension-with-If-Conditional/main.py | 0x00000024/learn-python | 97057dc427feaf8e6da5ca373e7e02d4a1b949ae | [
"MIT"
] | null | null | null | 08-List-Comprehensions/02-List-Comprehension-with-If-Conditional/main.py | 0x00000024/learn-python | 97057dc427feaf8e6da5ca373e7e02d4a1b949ae | [
"MIT"
] | null | null | null | temps = [221, 233, 132, -9999, 434]
new_temps = [i for i in temps if i != -9999]
print(new_temps) | 19.8 | 44 | 0.636364 | 19 | 99 | 3.210526 | 0.631579 | 0.262295 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.253165 | 0.20202 | 99 | 5 | 45 | 19.8 | 0.518987 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
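The file above filters with a trailing `if`. The companion form, not shown in the file, places an `if/else` before the `for` to map values instead of dropping them:

```python
temps = [221, 233, 132, -9999, 434]
# replace sentinel readings instead of discarding them
cleaned = [i if i != -9999 else 0 for i in temps]
print(cleaned)  # [221, 233, 132, 0, 434]
```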
f1006919002985162e2bf9e8cb28be291d009c56 | 531 | py | Python | setup.py | zebpalmer/Python | c21c813a5af54de3d6671b822012f346553623b4 | [
"MIT"
] | 2 | 2019-09-18T10:50:31.000Z | 2021-03-20T08:52:04.000Z | setup.py | zebpalmer/Python | c21c813a5af54de3d6671b822012f346553623b4 | [
"MIT"
] | null | null | null | setup.py | zebpalmer/Python | c21c813a5af54de3d6671b822012f346553623b4 | [
"MIT"
] | null | null | null | try:
from setuptools import setup
except ImportError:
from distutils.core import setup
import Quandl
setup(name = 'Quandl',
description = 'Package for Quandl API access',
version = Quandl.__version__,
author = ", ".join(Quandl.__authors__),
maintainer = Quandl.__maintainer__,
maintainer_email = Quandl.__email__,
url = Quandl.__url__,
license = Quandl.__license__,
install_requires = [
"pandas >= 0.14",
"numpy >= 1.8",
],
packages = ['Quandl'],
)
| 24.136364 | 52 | 0.629002 | 53 | 531 | 5.811321 | 0.622642 | 0.071429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012723 | 0.259887 | 531 | 21 | 53 | 25.285714 | 0.770992 | 0 | 0 | 0 | 0 | 0 | 0.129944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.210526 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f103e29578195afcbfb92ba556cee4316213c6ba | 1,833 | py | Python | tests/python/unittest/test_infer_type.py | mozga-intel/incubator-mxnet | 7dcfedca704f39b4b9b7497dabf3fea47ad40df4 | [
"BSL-1.0",
"Apache-2.0"
] | 13 | 2017-08-11T05:19:48.000Z | 2020-05-12T02:09:27.000Z | tests/python/unittest/test_infer_type.py | mozga-intel/incubator-mxnet | 7dcfedca704f39b4b9b7497dabf3fea47ad40df4 | [
"BSL-1.0",
"Apache-2.0"
] | 4 | 2021-03-30T11:59:59.000Z | 2022-03-12T00:40:23.000Z | tests/python/unittest/test_infer_type.py | mozga-intel/incubator-mxnet | 7dcfedca704f39b4b9b7497dabf3fea47ad40df4 | [
"BSL-1.0",
"Apache-2.0"
] | 13 | 2016-11-10T06:38:46.000Z | 2021-03-18T21:26:11.000Z | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: skip-file
import mxnet as mx
import numpy as np
from common import models, with_seed
from mxnet import autograd
from mxnet.test_utils import assert_almost_equal
@with_seed()
def test_infer_multiout_op():
data = mx.nd.arange(16, dtype=np.float64).reshape((4, 4))
data.attach_grad()
with autograd.record():
y = mx.nd.split(data, axis=0, num_outputs=2)
y[0].backward()
assert data.grad.dtype == np.float64
mx.nd.waitall()
@with_seed()
def test_infer_multiout_op2():
def test_func(a):
q, l = mx.nd.linalg.gelqf(a)
return mx.nd.sum(l)
data32 = mx.nd.random.normal(shape=(2, 3), ctx=mx.cpu(), dtype=np.float32)
data32.attach_grad()
with autograd.record():
test32 = test_func(data32)
test32.backward()
data64 = mx.nd.Cast(data32, dtype=np.float64)
data64.attach_grad()
with autograd.record():
test64 = test_func(data64)
test64.backward()
assert_almost_equal(data64.grad.asnumpy(), data32.grad.asnumpy(), atol=1e-5, rtol=1e-5)
| 33.327273 | 91 | 0.713039 | 279 | 1,833 | 4.609319 | 0.487455 | 0.021773 | 0.032659 | 0.051322 | 0.108865 | 0.043546 | 0 | 0 | 0 | 0 | 0 | 0.034783 | 0.184397 | 1,833 | 54 | 92 | 33.944444 | 0.825418 | 0.420076 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.1 | false | 0 | 0.166667 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
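`test_infer_multiout_op` asserts that float64 survives a split followed by a backward pass. The dtype-propagation expectation itself can be illustrated with NumPy alone (this is an analogy to the forward op only, not the MXNet autograd machinery):

```python
import numpy as np

data = np.arange(16, dtype=np.float64).reshape(4, 4)
top, bottom = np.split(data, 2, axis=0)  # analogous to mx.nd.split(..., num_outputs=2)

# multi-output ops should preserve the input dtype on every output
assert top.dtype == np.float64 and bottom.dtype == np.float64
print(top.shape, bottom.shape)  # (2, 4) (2, 4)
```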
f1040870685ccb486372b534a48008c1e473c417 | 871 | py | Python | Examples/ReinforcementLearning/deeprl/env/env_factory.py | burhandodhy/CNTK | fcdeef63d0192c7b4b7428b14c1f9750d6c1de2e | [
"MIT"
] | 17,702 | 2016-01-25T14:03:01.000Z | 2019-05-06T09:23:41.000Z | Examples/ReinforcementLearning/deeprl/env/env_factory.py | burhandodhy/CNTK | fcdeef63d0192c7b4b7428b14c1f9750d6c1de2e | [
"MIT"
] | 3,489 | 2016-01-25T13:32:09.000Z | 2019-05-03T11:29:15.000Z | Examples/ReinforcementLearning/deeprl/env/env_factory.py | burhandodhy/CNTK | fcdeef63d0192c7b4b7428b14c1f9750d6c1de2e | [
"MIT"
] | 5,180 | 2016-01-25T14:02:12.000Z | 2019-05-06T04:24:28.000Z | # Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license. See LICENSE.md file in the project root
# for full license information.
# ==============================================================================
from gym import envs
from . import maze2d, puddleworld
def register_env(env_id):
if env_id == 'Maze2D-v0':
envs.register(
id=env_id,
entry_point='env:maze2d.Maze2D',
kwargs={},
max_episode_steps=200,
reward_threshold=-110.0)
elif env_id == 'PuddleWorld-v0':
envs.register(
id=env_id,
entry_point='env:puddleworld.PuddleWorld',
kwargs={},
max_episode_steps=200,
reward_threshold=-100.0)
else:
raise ValueError('Cannot find environment "{0}"\n'.format(env_id))
return True
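`register_env` above is a string-keyed dispatch over `gym.envs.register` calls. A gym-free sketch of the same dispatch shape (the `REGISTRY` dict and spec values here are illustrative, not part of the real module):

```python
REGISTRY = {}

def register_env(env_id):
    specs = {
        'Maze2D-v0': {'entry_point': 'env:maze2d.Maze2D', 'max_episode_steps': 200},
        'PuddleWorld-v0': {'entry_point': 'env:puddleworld.PuddleWorld', 'max_episode_steps': 200},
    }
    if env_id not in specs:
        raise ValueError('Cannot find environment "{0}"'.format(env_id))
    REGISTRY[env_id] = specs[env_id]
    return True

print(register_env('Maze2D-v0'))  # True
```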
| 29.033333 | 80 | 0.552239 | 97 | 871 | 4.804124 | 0.56701 | 0.064378 | 0.060086 | 0.06867 | 0.313305 | 0.313305 | 0.313305 | 0.145923 | 0.145923 | 0 | 0 | 0.032813 | 0.265212 | 871 | 29 | 81 | 30.034483 | 0.695313 | 0.259472 | 0 | 0.4 | 0 | 0 | 0.153125 | 0.042188 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.1 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f104bbe2ef41441b68ca62399f90007c7e48f1ad | 2,599 | py | Python | labdrivers/version.py | pbnjeff89/labdrivers | 1091b9f746a5a011d94cd63abf5010fc8cde1556 | [
"MIT"
] | null | null | null | labdrivers/version.py | pbnjeff89/labdrivers | 1091b9f746a5a011d94cd63abf5010fc8cde1556 | [
"MIT"
] | null | null | null | labdrivers/version.py | pbnjeff89/labdrivers | 1091b9f746a5a011d94cd63abf5010fc8cde1556 | [
"MIT"
] | null | null | null | from os.path import join as pjoin
# Format expected by setup.py and doc/source/conf.py: string of form "X.Y.Z"
_version_major = 0
_version_minor = 9
_version_micro = 8 # use '' for first of series, number for 1 and above
_version_extra = 'dev'
# _version_extra = '' # Uncomment this for full releases
# Construct full version string from these.
_ver = [_version_major, _version_minor]
if _version_micro:
_ver.append(_version_micro)
if _version_extra:
_ver.append(_version_extra)
__version__ = '.'.join(map(str, _ver))
CLASSIFIERS = ["Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Topic :: Scientific/Engineering"]
# Description should be a one-liner:
description = "labdrivers: python drivers for lab instruments"
# Long description will go up on the pypi page
long_description = """
labdrivers
==========
labdrivers is a collection of drivers for common research lab instruments.
It contains a suite of instrument-specific drivers which can be used to
interface measurement hardware with Python code, along with a set of
Jupyter notebooks demonstrating example use cases.
To get started using these components in your own software, please go to the
repository README_.
.. _README: https://github.com/masonlab/labdrivers/blob/master/README.md
License
=======
``labdrivers`` is licensed under the terms of the MIT license. See the file
"LICENSE" for information on the history of this software, terms & conditions
for usage, and a DISCLAIMER OF ALL WARRANTIES.
All trademarks referenced herein are property of their respective holders.
Copyright (c) 2016--, Henry Hinnefeld.
"""
NAME = "labdrivers"
MAINTAINER = "Jeff Damasco"
MAINTAINER_EMAIL = "jeffdamasco@gmail.com"
DESCRIPTION = description
LONG_DESCRIPTION = long_description
URL = "http://github.com/masonlab/labdrivers"
DOWNLOAD_URL = ""
LICENSE = "MIT"
AUTHOR = "Henry Hinnefeld"
AUTHOR_EMAIL = "henry.hinnefeld@gmail.com"
PLATFORMS = "OS Independent"
MAJOR = _version_major
MINOR = _version_minor
MICRO = _version_micro
VERSION = __version__
PACKAGES = ['labdrivers',
'labdrivers.keithley',
'labdrivers.lakeshore',
'labdrivers.srs',
'labdrivers.quantumdesign',
'labdrivers.oxford',
'labdrivers.ni']
PACKAGE_DATA = {'labdrivers': [pjoin('data', '*')]}
REQUIRES = ["pyvisa", "PyDAQmx"]
| 32.4875 | 77 | 0.709504 | 319 | 2,599 | 5.626959 | 0.570533 | 0.026741 | 0.017827 | 0.030084 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004286 | 0.191997 | 2,599 | 79 | 78 | 32.898734 | 0.850476 | 0.116199 | 0 | 0 | 0 | 0 | 0.591525 | 0.040192 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.016393 | 0 | 0.016393 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f10572b5580f12fbec993a136bc3e170dbb62b5c | 10,347 | py | Python | experiments/gbnf/experiment/boosted_experiment.py | robert-giaquinto/survae_flows | 4d7dc638f77c48ad3c8393b967c33ac9dbad60fe | [
"MIT"
] | 2 | 2021-03-06T19:37:39.000Z | 2022-01-09T11:19:45.000Z | experiments/gbnf/experiment/boosted_experiment.py | robert-giaquinto/survae_flows | 4d7dc638f77c48ad3c8393b967c33ac9dbad60fe | [
"MIT"
] | null | null | null | experiments/gbnf/experiment/boosted_experiment.py | robert-giaquinto/survae_flows | 4d7dc638f77c48ad3c8393b967c33ac9dbad60fe | [
"MIT"
] | null | null | null | import torch
import torchvision.utils as vutils
import math
import numpy as np
from survae.distributions import DataParallelDistribution
from survae.utils import elbo_bpd
from .utils import get_args_table, clean_dict
# Path
import os
import time
from survae.data.path import get_survae_path
# Experiment
from .base import BaseExperiment
from .flow_experiment import FlowExperiment
from experiments.gbnf.optim import get_optim
# Logging frameworks
from torch.utils.tensorboard import SummaryWriter
import wandb
class BoostedFlowExperiment(FlowExperiment):
def __init__(self, args,
data_id, model_id, optim_id,
train_loader, eval_loader,
model, optimizer, scheduler_iter, scheduler_epoch):
# Init parent
super(BoostedFlowExperiment, self).__init__(args=args,
data_id=data_id, model_id=model_id, optim_id=optim_id,
train_loader=train_loader,
eval_loader=eval_loader,
model=model,
optimizer=optimizer,
scheduler_iter=scheduler_iter,
scheduler_epoch=scheduler_epoch)
self.num_components = args.boosted_components
self.epochs_per_component = self.args.epochs
self.component_epoch = 0
if args.pretrained_model is not None:
self.args.epochs = self.args.epochs * (self.num_components - 1)
else:
self.args.epochs = self.args.epochs * self.num_components
def run(self):
if self.args.resume:
self.resume()
while self.model.component < self.num_components:
self.init_component()
for epoch in range(self.component_epoch, self.epochs_per_component):
# Train
train_dict = self.train_fn(epoch)
self.log_train_metrics(train_dict)
# Eval
if (epoch+1) % self.eval_every == 0:
eval_dict = self.eval_fn(epoch)
self.log_eval_metrics(eval_dict)
self.eval_epochs.append(epoch)
converged, improved = self.stop_early(eval_dict, epoch)
self.sample_fn(components="c")
else:
eval_dict = None
converged = False
improved = False
# Log
self.save_metrics()
self.log_fn(self.current_epoch, train_dict, eval_dict)
# Checkpoint
self.current_epoch += 1
self.component_epoch += 1
if (self.check_every > 0 and (epoch+1) % self.check_every == 0) or improved:
self.checkpoint_save()
# Early stopping
if converged:
break
# initialize training for next component
if self.check_every == 0:
self.resume() # reload if using early stopping
print(f"--- Boosting component {self.model.component + 1}/{self.num_components} complete ---")
self.model.update_rho(self.train_loader)
self.model.increment_component()
self.component_epoch = 0
self.optimizer, self.scheduler_iter, self.scheduler_epoch = get_optim(self.args, self.model)
self.checkpoint_save()
# Sampling
self.sample_fn(components="1:c")
def eval_fn(self, epoch):
if self.args.super_resolution or self.args.conditional:
return self._cond_eval_fn(epoch)
else:
return self._eval_fn(epoch)
def _cond_eval_fn(self, epoch):
self.model.eval()
with torch.no_grad():
loss_sum = 0.0
approx_loss_sum = 0.0
loss_count = 0
for (x, context) in self.eval_loader:
batch_size = len(x)
context = context.to(self.args.device)
x = x.to(self.args.device)
#loss = -1.0 * self.model.log_prob(x, context).sum() / (math.log(2) * x.shape.numel())
#loss_sum += loss.detach().cpu().item() * batch_size
approx_loss = -1.0 * self.model.approximate_mixture_log_prob(x, context).sum() / (math.log(2) * x.shape.numel())
approx_loss_sum += approx_loss.detach().cpu().item() * batch_size
loss_count += batch_size
#print('Evaluating. Epoch: {}/{}, Datapoint: {}/{}, Bits/dim: {:.3f}, aprx={:.3f}'.format(
# self.current_epoch+1, self.args.epochs, loss_count, len(self.eval_loader.dataset), loss_sum/loss_count, approx_loss_sum/loss_count), end='\r')
print('Evaluating. Epoch: {}/{}, Datapoint: {}/{}, Bits/dim: {:.3f}'.format(
self.current_epoch+1, self.args.epochs, loss_count, len(self.eval_loader.dataset), approx_loss_sum/loss_count), end='\r')
print('')
#return {'bpd': loss_sum/loss_count, 'bpd_aprx': approx_loss_sum/loss_count}
return {'bpd': approx_loss_sum/loss_count}
def _eval_fn(self, epoch):
self.model.eval()
with torch.no_grad():
loss_sum = 0.0
approx_loss_sum = 0.0
loss_count = 0
for x in self.eval_loader:
batch_size = len(x)
x = x.to(self.args.device)
#loss = -1.0 * self.model.log_prob(x).sum() / (math.log(2) * x.shape.numel())
#loss_sum += loss.detach().cpu().item() * batch_size
approx_loss = -1.0 * self.model.approximate_mixture_log_prob(x).sum() / (math.log(2) * x.shape.numel())
approx_loss_sum += approx_loss.detach().cpu().item() * batch_size
loss_count += batch_size
#print('Evaluating. Epoch: {}/{}, Datapoint: {}/{}, Bits/dim: {:.3f}, aprx={:.3f}'.format(
# self.current_epoch+1, self.args.epochs, loss_count, len(self.eval_loader.dataset), loss_sum/loss_count, approx_loss_sum/loss_count), end='\r')
print('Evaluating. Epoch: {}/{}, Datapoint: {}/{}, Bits/dim: {:.3f}'.format(
self.current_epoch+1, self.args.epochs, loss_count, len(self.eval_loader.dataset), approx_loss_sum/loss_count), end='\r')
print('')
#return {'bpd': loss_sum/loss_count, 'bpd_aprx': approx_loss_sum/loss_count}
return {'bpd': approx_loss_sum/loss_count}
def sample_fn(self, components="1:c", temperature=None, sample_new_batch=False):
if self.args.samples < 1:
return
self.model.eval()
get_new_batch = self.sample_batch is None or sample_new_batch
if get_new_batch:
self.sample_batch = next(iter(self.eval_loader))
if self.args.super_resolution or self.args.conditional:
imgs = self.sample_batch[0][:self.args.samples]
context = self.sample_batch[1][:self.args.samples]
self._cond_sample_fn(context, components, temperature=temperature, save_context=get_new_batch)
else:
imgs = self.sample_batch[:self.args.samples]
self._sample_fn(components, temperature=temperature)
if get_new_batch:
# save real samples
path_true_samples = '{}/samples/true_te{}_s{}.png'.format(self.log_path, self.current_epoch, self.args.seed)
self.save_images(imgs, path_true_samples)
def _cond_sample_fn(self, context, components, temperature=None, save_context=True):
if self.args.super_resolution and save_context:
path_context = '{}/samples/context_te{}_s{}.png'.format(self.log_path, self.current_epoch, self.args.seed)
self.save_images(context, path_context)
if components == "1:c":
# save samples from each component
for c in range(self.num_components):
path_samples = '{}/samples/sample_te{}_c{}_s{}.png'.format(self.log_path, self.current_epoch, c, self.args.seed)
samples = self.model.sample(context.to(self.args.device), component=c, temperature=temperature)
self.save_images(samples, path_samples)
else:
path_samples = '{}/samples/sample_c{}_ce{}_te{}_s{}.png'.format(
self.log_path, self.model.component, self.component_epoch, self.current_epoch, self.args.seed)
samples = self.model.sample(context.to(self.args.device), component=self.model.component, temperature=temperature)
self.save_images(samples, path_samples)
def _sample_fn(self, components, temperature=None):
if components == "1:c":
for c in range(self.num_components):
path_samples = '{}/samples/sample_te{}_c{}_s{}.png'.format(self.log_path, self.current_epoch, c, self.args.seed)
samples = self.model.sample(self.args.samples, component=c, temperature=temperature)
self.save_images(samples, path_samples)
else:
path_samples = '{}/samples/sample_component{}_componentepoch{}_totalepochs{}_seed{}.png'.format(
self.log_path, self.model.component, self.component_epoch, self.current_epoch, self.args.seed)
samples = self.model.sample(self.args.samples, component=self.model.component, temperature=temperature)
self.save_images(samples, path_samples)
def init_component(self):
self.best_loss = np.inf
self.best_loss_epoch = 0
for c in range(self.num_components):
if c != self.model.component:
self.optimizer.param_groups[c]['lr'] = 0.0
for n, param in self.model.named_parameters():
param.requires_grad = True if n.startswith(f"flows.{self.model.component}") else False
def update_learning_rates(self):
for c in range(self.num_components):
self.optimizer.param_groups[c]['lr'] = self.args.lr if c == self.model.component else 0.0
| 45.183406 | 163 | 0.586837 | 1,234 | 10,347 | 4.689627 | 0.138574 | 0.048384 | 0.026611 | 0.033178 | 0.527562 | 0.488509 | 0.470192 | 0.460515 | 0.449974 | 0.418524 | 0 | 0.007769 | 0.303373 | 10,347 | 228 | 164 | 45.381579 | 0.795089 | 0.106311 | 0 | 0.354037 | 0 | 0 | 0.05379 | 0.033619 | 0 | 0 | 0 | 0 | 0 | 1 | 0.062112 | false | 0 | 0.093168 | 0 | 0.192547 | 0.031056 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
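`init_component` above trains one component at a time: it zeroes the learning rate of the other optimizer groups and gates `requires_grad` on a `flows.<component>` name prefix. A torch-free sketch of the name-prefix gating (`Param` is a stand-in class, not part of the real model):

```python
class Param:
    def __init__(self, name):
        self.name = name
        self.requires_grad = True

params = [Param(f"flows.{i}.weight") for i in range(3)]

component = 1  # the component currently being boosted
for p in params:
    # freeze every parameter that does not belong to the active component
    p.requires_grad = p.name.startswith(f"flows.{component}")

print([p.requires_grad for p in params])  # [False, True, False]
```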
f105b721afb9f845c4e320b2b60eb5e5d9422cbd | 6,838 | py | Python | code4step2/data_registration.py | yukeyi/MCDS-Capstone | f7ce48fc5d3f5f96c1f29556585ed2338683c7d2 | [
"MIT"
] | null | null | null | code4step2/data_registration.py | yukeyi/MCDS-Capstone | f7ce48fc5d3f5f96c1f29556585ed2338683c7d2 | [
"MIT"
] | null | null | null | code4step2/data_registration.py | yukeyi/MCDS-Capstone | f7ce48fc5d3f5f96c1f29556585ed2338683c7d2 | [
"MIT"
] | null | null | null | import os
import numpy as np
import pandas as pd
import xarray as xr
import pickle as pkl
from datetime import datetime
from scipy import ndimage as ndi
import SimpleITK as sitk
import skimage as skim
from skimage import feature, morphology
import glob
class RegHearts:
'''Class that registers a moving heart CT/mask pair onto a fixed one with SimpleElastix'''
def __init__(self, fixed_subj, moving_subj, tslice=0, verbose=False):
self.verbose = verbose
self.fixed_subj = fixed_subj
self.moving_subj = moving_subj
self.tslice = tslice
self.load_niftis()
def load_niftis(self):
fixed_ct_name = os.path.join(self.fixed_subj, f'CT_tslice_{self.tslice}.nii')
fixed_mask_name = os.path.join(self.fixed_subj, f'mask_tslice_{self.tslice}.nii')
moving_ct_name = os.path.join(self.moving_subj, f'CT_tslice_{self.tslice}.nii')
moving_mask_name = os.path.join(self.moving_subj, f'mask_tslice_{self.tslice}.nii')
self.fixed_ct = self.get_sitk_image(fixed_ct_name)
self.fixed_mask = self.get_sitk_image(fixed_mask_name)
self.moving_ct = self.get_sitk_image(moving_ct_name)
self.moving_mask = self.get_sitk_image(moving_mask_name)
def get_sitk_image(self, nifti_name):
reader = sitk.ImageFileReader()
reader.SetImageIO("NiftiImageIO")
reader.SetFileName(nifti_name)
img = reader.Execute()
size = img.GetSize()
dims = img.GetSpacing()
orig = img.GetOrigin()
if self.verbose:
print(f"Image info for {nifti_name}:")
print("Image size:", size[0], size[1], size[2])
print("Image dims:", dims[0], dims[1], dims[2])
print("Image orig:", orig[0], orig[1], orig[2])
caster = sitk.CastImageFilter()
caster.SetOutputPixelType(sitk.sitkFloat32)
return caster.Execute(img)
    def gen_param_map(self):
        self.p_map_vector = sitk.VectorOfParameterMap()
        paff = sitk.GetDefaultParameterMap("affine")
        pbsp = sitk.GetDefaultParameterMap("bspline")
        paff['AutomaticTransformInitialization'] = ['true']
        paff['AutomaticTransformInitializationMethod'] = ['GeometricalCenter']
        paff['NumberOfSamplesForExactGradient'] = ['100000']
        pbsp['NumberOfSamplesForExactGradient'] = ['100000']
        # paff['MaximumNumberOfSamplingAttempts'] = ['2']
        # pbsp['MaximumNumberOfSamplingAttempts'] = ['2']
        paff['NumberOfSpatialSamples'] = ['5000']
        pbsp['NumberOfSpatialSamples'] = ['5000']
        paff['NumberOfHistogramBins'] = ['32', '32', '64', '128']
        pbsp['NumberOfHistogramBins'] = ['32', '32', '64', '128']
        paff['MaximumNumberOfIterations'] = ['1024']
        pbsp['MaximumNumberOfIterations'] = ['1024']
        # paff['NumberOfResolutions'] = ['4']
        # pbsp['NumberOfResolutions'] = ['4']
        paff['GridSpacingSchedule'] = ['6', '4', '2', '1.000000']
        pbsp['GridSpacingSchedule'] = ['6', '4', '2', '1.000000']
        # pbsp['FinalGridSpacingInPhysicalUnits'] = ['40', '40', '40']
        pbsp['FinalGridSpacingInPhysicalUnits'] = ['32', '32', '32']
        # pbsp['Metric0Weight'] = ['0.01']
        # pbsp['Metric1Weight'] = ['0.1']
        # paff['FixedImagePyramid'] = ['FixedShrinkingImagePyramid']
        # pbsp['FixedImagePyramid'] = ['FixedShrinkingImagePyramid']
        # attempting to use multiple fixed images at once
        # paff['Registration'] = ['MultiMetricMultiResolutionRegistration']
        # paff['FixedImagePyramid'] = ['FixedSmoothingImagePyramid', 'FixedSmoothingImagePyramid']
        # paff['ImageSampler'] = ['RandomCoordinate', 'RandomCoordinate']
        # paff['Metric'] = ['AdvancedMattesMutualInformation', 'AdvancedMattesMutualInformation']
        # pbsp['Metric'] = ['AdvancedMattesMutualInformation', 'TransformBendingEnergyPenalty',
        #                   'AdvancedMattesMutualInformation', 'TransformBendingEnergyPenalty']
        # pbsp['FixedImagePyramid'] = ['FixedSmoothingImagePyramid', 'FixedSmoothingImagePyramid']
        # pbsp['ImageSampler'] = ['RandomCoordinate', 'RandomCoordinate',
        #                         'RandomCoordinate', 'RandomCoordinate']
        self.p_map_vector.append(paff)
        self.p_map_vector.append(pbsp)
        if self.verbose:
            sitk.PrintParameterMap(self.p_map_vector)
    def register_imgs(self):
        self.elastixImageFilter = sitk.ElastixImageFilter()
        self.elastixImageFilter.SetFixedImage(self.fixed_ct)
        self.elastixImageFilter.SetMovingImage(self.moving_ct)
        self.elastixImageFilter.SetParameterMap(self.p_map_vector)
        self.elastixImageFilter.Execute()
        self.moving_ct_result = self.elastixImageFilter.GetResultImage()
        self.moving_ct_result.CopyInformation(self.fixed_ct)
    def gen_mask(self, smooth=False):
        transformixImageFilter = sitk.TransformixImageFilter()
        transformixImageFilter.SetTransformParameterMap(
            self.elastixImageFilter.GetTransformParameterMap())
        transformixImageFilter.SetMovingImage(self.moving_mask)
        transformixImageFilter.Execute()
        self.moving_mask_result = transformixImageFilter.GetResultImage()
        if smooth:
            tmp_img = sitk.GetArrayFromImage(self.moving_mask_result)
            tmp_img = np.where((tmp_img > 0), 1, 0)
            self.moving_mask_result = sitk.GetImageFromArray(tmp_img)
            self.moving_mask_result.CopyInformation(self.fixed_ct)
        self.moving_mask_result = sitk.Cast(self.moving_mask_result, sitk.sitkFloat32)
    def recenter_img_z(self, sitk_img, offset=False):
        spacing = sitk_img.GetSpacing()[2]
        layers = sitk_img.GetSize()[2]
        orig = sitk_img.GetOrigin()
        if not offset:
            sitk_img.SetOrigin([orig[0], orig[1], spacing*(-layers/2)])
        else:
            sitk_img.SetOrigin([orig[0], orig[1], spacing*(-layers/1.5)])
def add_liver_mask(ds, moving_name='19', extra_name='extra1'):
    '''Generate a mask from the liver registration method, and place it into the given "extra" slot.

    Assumes you are using an xarray dataset from the MREDataset class.'''
    for sub in tqdm(ds.subject):
        mask_maker = MRELiverMask(str(sub.values), moving_name, verbose=False, center=True,
                                  fixed_seq='T1Pre', moving_seq='T1_inphase')
        mask_maker.gen_param_map()
        mask_maker.register_imgs()
        mask_maker.gen_mask(smooth=True)
        mask = sitk.GetArrayFromImage(mask_maker.moving_mask_result)
        mask = np.where(mask >= 1, 1, 0)
        ds['image'].loc[dict(sequence=extra_name, subject=sub)] = mask
    new_sequence = [a.replace(extra_name, 'liverMsk') for a in ds.sequence.values]
    ds = ds.assign_coords(sequence=new_sequence)
    return ds
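`add_liver_mask` above does two small pieces of bookkeeping that are easy to check in isolation: it clamps the interpolated, transformed mask back to {0, 1} (everything `>= 1` becomes 1), and it renames the "extra" sequence slot to `liverMsk`. A minimal stand-in of that logic in plain Python (no SimpleITK or xarray; the helper names are mine, not part of the module):

```python
def binarize(mask, threshold=1):
    """Clamp a transformed (interpolated) mask back to {0, 1}."""
    return [1 if v >= threshold else 0 for v in mask]


def rename_slot(sequences, extra_name="extra1", new_name="liverMsk"):
    """Rename the 'extra' slot that now holds the liver mask."""
    return [s.replace(extra_name, new_name) for s in sequences]
```

The threshold step matters because Transformix resamples the mask with interpolation, so the warped mask contains fractional values even if the input was strictly binary.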
# File: cmscalibration/data/dataset.py (repo: mxsg/CMS-Model-Calibration, license: MIT)

from enum import Enum
class Metric(Enum):
    """Contains the possible metrics a dataset can exhibit.

    Available metrics currently include ones from the following categories:
    - performance metrics
    - timing information and time stamps
    - entity (e.g., job or node) category information
    """

    # Performance data
    CPU_TIME = 'CPUTime'
    CPU_TIME_PER_CORE = 'CPUTimePerCore'
    WALL_TIME = 'WallTime'
    INIT_TIME = 'InitTime'
    USED_CORES = 'UsedCores'
    USED_THREADS = 'UsedThreads'
    EVENT_STREAM_COUNT = 'UsedEventStreams'
    EVENT_COUNT = 'EventCount'
    EVENT_COUNT_FROM_PERF = 'EventCountHeuristic'
    EVENT_THROUGHPUT = 'EventThroughput'
    INPUT_EVENT_COUNT = 'InputEventCount'
    OUTPUT_EVENT_COUNT = 'OutputEventCount'
    TOTAL_READ_DATA = 'TotalReadDataMiB'
    TOTAL_WRITTEN_DATA = 'TotalWriteDataMiB'
    AVERAGE_READ_SPEED = 'AvgReadSpeed'
    AVERAGE_WRITE_SPEED = 'AvgWriteSpeed'
    IO_TIME = 'IOTime'
    READ_TIME = 'IOReadTime'
    WRITE_TIME = 'IOWriteTime'
    CPU_EFFICIENCY = 'CPUEfficiency'
    CPU_IDLE_TIME = 'CPUIdleTime'
    CPU_IDLE_TIME_RATIO = 'CPUIdleTimeRatio'
    CPU_DEMAND = 'CPUDemand'
    IO_RATIO = 'IORatio'
    CPU_IDLE_TIME_PER_EVENT = 'CPUIdleTimePerEvent'
    CPU_TIME_PER_EVENT = 'CPUTimePerEvent'
    CPU_DEMAND_PER_EVENT = 'CPUDemandPerEvent'

    # Time stamps
    START_TIME = 'StartTime'
    STOP_TIME = 'StopTime'
    FINISHED_TIME = 'FinishedTime'
    TIMESTAMP = 'TimeStamp'
    EXIT_CODE = 'ExitCode'

    # Category information
    WORKFLOW = 'Workflow'
    SUBMISSION_TOOL = 'SubmissionTool'
    JOB_TYPE = 'JobType'
    JOB_CATEGORY = 'JobCategory'
    TASK_NAME = 'TaskName'

    # Node information
    BENCHMARK_TOTAL = 'hs06'
    BENCHMARK_PER_THREAD = 'computingRatePerThread'
    BENCHMARK_PER_SIMULATED_CORE = 'computingRate'
    PHYSICAL_CORE_COUNT = 'coresPhysical'
    LOGICAL_CORE_COUNT = 'coresLogical'
    JOBSLOT_COUNT = 'jobslots'
    SIMULATED_CORE_COUNT = 'cores'
    HOST_NAME = 'HostName'
    CPU_NAME = 'name'
    NODE_COUNT = 'nodeCount'
    INTERCONNECT_TYPE = 'Interconnect'
class Dataset:
    """A dataset is a data frame with additional information associated with it. Besides the main data frame itself,
    it is named, has a time period the data is valid for and can contain additional data frames as associated info.

    A dataset can also contain sections of columns which belong together (e.g. originally come from the same dataset).
    """

    def __init__(self, df, name='dataset', start=None, end=None, sep='#', extra_dfs=None):
        self.df = df
        self.name = name
        self.start = start
        self.end = end
        self.sep = sep
        if extra_dfs is None:
            extra_dfs = dict()
        self.extra_dfs = extra_dfs

    @property
    def sections(self):
        """Return the sections that are present in this dataset."""
        # The truth value of a data frame is ambiguous, so test explicitly.
        if self.df is None or self.df.empty:
            return []

        # Retrieve a list of all column sections (all string parts after the initial separator)
        column_sections = [col.split(self.sep)[1] for col in self.df.columns if self.sep in col]
        sections = set(column_sections)
        return sorted(sections)

    @property
    def metrics(self):
        """Return all metrics present in this dataset."""
        if self.df is None or self.df.empty:
            return []

        column_metric_strings = [col.split(self.sep)[0] for col in self.df.columns]
        metrics = set()
        for colstring in column_metric_strings:
            try:
                metrics.add(Metric(colstring))
            except ValueError:
                continue
        # Enum members are not orderable by default, so sort by their value.
        return sorted(metrics, key=lambda metric: metric.value)

    def cols_for_section(self, section=''):
        """Return the columns which are present in a specific section of the dataset."""
        # Filter all column names that contain the section name
        if not section:
            filtered_colnames = [colname for colname in self.df.columns if self.sep not in colname]
        else:
            filtered_colnames = [colname for colname in self.df.columns if (self.sep + section) in colname]
        return filtered_colnames

    def col(self, metric, section=None):
        """Return the column for the supplied metric from the dataframe."""
        if not section:
            colname = metric.value
        else:
            colname = self.sep.join([metric.value, section])

        if colname not in self.df.columns:
            raise ValueError('Metric {} is not contained in the dataset "{}"!'.format(metric, self.name))
        return self.df[colname]
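The `Dataset` class encodes section membership in the column name itself, as `metric + sep + section` (default separator `'#'`). A self-contained sketch of that naming scheme without pandas (the helper names `make_col`/`split_col` are mine, for illustration only):

```python
SEP = "#"


def make_col(metric: str, section: str = "") -> str:
    """Build a column name like 'CPUTime#jobs'; no section -> bare metric name."""
    return f"{metric}{SEP}{section}" if section else metric


def split_col(colname: str):
    """Recover (metric, section) from a column name; section is '' if absent."""
    metric, _, section = colname.partition(SEP)
    return metric, section


# Mirrors what Dataset.sections does over df.columns.
cols = [make_col("CPUTime", "jobs"), make_col("WallTime", "jobs"), make_col("hs06")]
sections = sorted({split_col(c)[1] for c in cols if SEP in c})
```

Because the separator is configurable per dataset, any code that parses column names should use the instance's `sep` rather than a hard-coded `'#'`.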
# File: src/ikazuchi/plugins/speech.py (repo: t2y/ikazuchi.plugins.speech, license: Apache-2.0)
# -*- coding: utf-8 -*-
import codecs
import subprocess
from tempfile import NamedTemporaryFile
from ikazuchi.core.handler.base import BaseHandler
from ikazuchi.core.handler.utils import get_and_check_file_access
from ikazuchi.core.translator import TRANSLATE_API as API
from ikazuchi.ikazuchi import (base_parser, subparsers)
from ikazuchi.utils import get_command
__version__ = "0.1.2"
_MACOS_COMMANDS = ["afplay", "mpg123", "gst123", "mpg321"]
_LINUX_COMMANDS = ["mpg123", "gst123", "mpg321"]
_WINDOWS_COMMANDS = []
# argument parser for speech
speech_parser = subparsers.add_parser("speech", parents=[base_parser])
speech_parser.set_defaults(command=None, post=False, read=None, sentences=[])
speech_parser.add_argument("-c", "--command", dest="command",
                           metavar="COMMAND", help="use any command to speak(play audio file)")
speech_parser.add_argument("-p", "--post", dest="post", action="store_true",
                           help="speak post-translated target sentences")
speech_parser.add_argument("-r", "--read", dest="read",
                           metavar="READING TARGET FILE", help="read aloud target file")
speech_parser.add_argument("-s", "--sentences", dest="sentences", nargs="+",
                           metavar="SENTENCE", help=u"target sentences")
speech_parser.add_argument("--version", action="version",
                           version="%(prog)s {0}".format(__version__))
class Handler(BaseHandler):
    """
    Handler class for text-to-speech
    """

    def __init__(self, opts):
        self.command = opts.command if opts.command else \
                       self._get_play_audio_command()
        self.encoding = opts.encoding
        self.sentences = [unicode(s, opts.encoding[0]) for s in opts.sentences]
        self.read_file = opts.read
        self.quiet = opts.quiet
        self.lang = opts.lang_from
        self.post = opts.post
        self.translator = API[opts.api](opts.lang_from, opts.lang_to, None)
        self.api = opts.api.title() if opts.api else self.translator.api()
        if self.post:
            self.lang = opts.lang_to
        if self.api == "Google":
            self.method_name = "translate_tts"
        elif self.api == "Microsoft":
            self.method_name = "speak"

    def _encode(self, text):
        return text.encode(self.encoding[1])

    def _translate(self, texts):
        if self.api == "Google":
            api, translated = self.translator.translate(texts)
        elif self.api == "Microsoft":
            api, translated = self.translator.translate_array(texts)
        return translated

    def _call_method(self, api_method):
        orig_texts = self._get_target_texts()
        if self.post:
            texts = self._translate(orig_texts)
        else:
            texts = orig_texts
        _trans = u"{0}({1}):".format("translate", self.api)
        play_audio_method = self.get_play_audio_method()
        for num, text in enumerate(texts):
            if not self.quiet:
                print self._encode(u"{0:25}{1}".format(
                    "sentence:", orig_texts[num]))
                if self.post:
                    print self._encode(u"{0:25}{1}".format(_trans, text))
            with NamedTemporaryFile(mode="wb") as tmp:
                api = api_method(text, self.lang, tmp)
                _method = u"{0}({1}):".format(self.method_name, api)
                print self._encode(u"{0:25}".format(_method))
                play_audio_method(tmp.name)

    def _get_target_texts(self):
        texts = self.sentences
        if self.read_file:
            rf = get_and_check_file_access(self.read_file)
            with codecs.open(rf, mode="r", encoding=self.encoding[0]) as f:
                texts = [line.rstrip() for line in f if line.rstrip()]
        return texts

    def _get_play_audio_command(self):
        import platform
        os_name, commands = platform.system(), []
        if os_name == "Darwin":
            commands = _MACOS_COMMANDS
        elif os_name == "Windows":
            commands = _WINDOWS_COMMANDS
        elif os_name in ("Linux", "FreeBSD"):
            commands = _LINUX_COMMANDS
        path_cmd = [path for cmd in commands for path in get_command(cmd)]
        return path_cmd[0] if path_cmd else None

    def get_play_audio_method(self):
        if self.command:
            print "use command: {0}".format(self.command)
            return self.play_audio_with_command
        else:
            print "use pyglet"
            return play_audio_with_pyglet

    def play_audio_with_command(self, file_name):
        # FIXME: wrong interface, consider later
        subprocess.call([self.command, file_name])
def play_audio_with_pyglet(file_name):
    import pyglet
    media = pyglet.media.load(file_name)
    if media.duration:
        pyglet.clock.schedule_once(lambda d: pyglet.app.exit(), media.duration)
        media.play()
        pyglet.app.run()
    else:
        print "Cannot play audio with pyglet"


def play_with_ossaudiodev(file_name):
    import sys
    import sndhdr
    from contextlib import closing, nested
    from ossaudiodev import open as oss_open
    from wave import open as wave_open
    file_info = sndhdr.what(file_name)
    if not file_info or file_info[0] != "wav":
        print "Not supported audio file type"
        return
    with nested(closing(wave_open(file_name, "rb")),
                closing(oss_open("w"))) as (wav, dev):
        nc, sw, fr, nf, comptype, compname = wav.getparams()
        # AFMT_S16_NE is missing on some platforms. Importing it together with
        # the byte-order constants would make the whole import fail, so import
        # it alone and fall back to the explicit byte-order constant if absent.
        try:
            from ossaudiodev import AFMT_S16_NE
        except ImportError:
            from ossaudiodev import AFMT_S16_BE, AFMT_S16_LE
            AFMT_S16_NE = AFMT_S16_LE if sys.byteorder == "little" else AFMT_S16_BE
        dev.setparameters(AFMT_S16_NE, nc, fr)
        data = wav.readframes(nf)
        dev.write(data)
#!/usr/bin/env python3
# File: minicCompiler.py (repo: CorentinGoet/miniC-Compiler, license: MIT)
"""
@author Corentin Goetghebeur (github.com/CorentinGoet).
"""
from lexer_pkg.lexer import Lexer
from parser_pkg.parser import Parser
from CLI.CLIinterface import CLI
import sys
import os
from CLI.actions import Actions
from pretty_printer_pkg.pretty_printer import PrettyPrinter
def main():
    """
    Main function.
    """
    cli = CLI()
    action, file, output = cli.process_args(sys.argv)
    lexer = Lexer()
    parser = Parser()

    if action == Actions.HELP:
        cli.display_usage()
        sys.exit(0)

    src = open(file, "r").read()
    if output is None:
        output = "pretty.minic"
    out = open(output, "w")

    # Lexing
    try:
        lexer.tokenize(src)
    except Exception as e:
        print(f"Error during lexing: {e}")
        sys.exit(1)

    # Parsing
    try:
        parser.parse(lexer.lexems)
    except Exception as e:
        print(f"Error during parsing: {e}")
        sys.exit(1)

    # Action
    try:
        if action == Actions.PRETTY_PRINT:
            visitor = PrettyPrinter()
        elif action == Actions.COMPILE:
            # Compilation has no visitor yet; exit here instead of falling
            # through to an unbound `visitor` below.
            print("not implemented yet")
            sys.exit(0)
    except Exception as e:
        print(f"Error during instantiation of the visitor: {e}")
        sys.exit(1)

    # Visitor
    try:
        visitor.visit(parser.ast)
    except Exception as e:
        print(f"Error during visitor: {e}")
        print(parser.ast)

    out.write(visitor.clean_source)
    out.close()
    print(visitor.clean_source)
    print(f"Successfully wrote the file {output}")


if __name__ == '__main__':
    main()
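`main()` wires four stages together: lex the source into lexems, parse them into an AST, pick a visitor for the requested action, and let the visitor walk the AST. A toy, self-contained version of that same pipeline shape (the real `Lexer`/`Parser`/`PrettyPrinter` live in the `*_pkg` modules; these stand-ins only mirror the attribute and method names used above):

```python
class ToyLexer:
    def __init__(self):
        self.lexems = []

    def tokenize(self, src):
        # Crude whitespace tokenizer standing in for the real lexer.
        self.lexems = src.split()


class ToyParser:
    def __init__(self):
        self.ast = None

    def parse(self, lexems):
        # The "AST" here is just the token list wrapped in a tagged tuple.
        self.ast = ("program", lexems)


class ToyPrettyPrinter:
    def __init__(self):
        self.clean_source = ""

    def visit(self, ast):
        _, lexems = ast
        self.clean_source = " ".join(lexems) + "\n"


def toy_main(src):
    """Same control flow as main(): tokenize -> parse -> visit."""
    lexer, parser, visitor = ToyLexer(), ToyParser(), ToyPrettyPrinter()
    lexer.tokenize(src)
    parser.parse(lexer.lexems)
    visitor.visit(parser.ast)
    return visitor.clean_source
```

Keeping the visitor separate from the parser is what lets the driver swap in a compiler backend later without touching the front end.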
# File: ai_controller/controller.py (repo: mingsumsze1/mcts, license: MIT)

from abc import ABC, abstractmethod
from environment.controller import Controller
from environment.game_state import GameState
class AIController(Controller, ABC):
    """
    AI player controller
    """

    def pick_move(self, state: GameState):
        return self.pick_move_with_likelihood(state)[0]

    @abstractmethod
    def pick_move_with_likelihood(self, state: GameState):
        """
        Pick a random move to play and return likelihood
        """
        raise NotImplementedError
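A concrete subclass only has to supply `pick_move_with_likelihood`; `pick_move` then takes the first element of the returned `(move, likelihood)` pair. A self-contained sketch of that contract (`ToyState` and the controller names below are stand-ins of my own, since the real `environment` package is not shown here):

```python
import random
from abc import ABC, abstractmethod


class ToyState:
    """Stand-in for environment.game_state.GameState."""

    def legal_moves(self):
        return ["a1", "b2", "c3"]


class ToyAIController(ABC):
    # Same shape as AIController above, minus the environment base class.
    def pick_move(self, state):
        return self.pick_move_with_likelihood(state)[0]

    @abstractmethod
    def pick_move_with_likelihood(self, state):
        raise NotImplementedError


class UniformRandomController(ToyAIController):
    def pick_move_with_likelihood(self, state):
        moves = state.legal_moves()
        # Uniform policy: every legal move is equally likely.
        return random.choice(moves), 1.0 / len(moves)
```

Returning the likelihood alongside the move is what lets an MCTS caller weight the playout policy without a second query.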
#!/usr/bin/python
# File: QTM_F/1D/pH2/cor.py (repo: binggu56/qmd, license: MIT)
import numpy as np
import pylab as plt
import matplotlib as mpl
import seaborn as sns
sns.set_context("poster",font_scale=1.5)
sns.set_style({'font.family':'Times New Roman'})
mpl.rcParams['lines.linewidth'] = 2
data = np.genfromtxt(fname='cor.dat')
ncols = data.shape[1]
# for x in range(1, ncols):
# plt.plot(data[:,0], data[:,1], linewidth=2, label=r'$\Re(C_{xx})$')
plt.plot(data[:,0], data[:,2], linewidth=2, label=r'$\Im(C_{11})$')
plt.plot(data[:,0], data[:,4], linewidth=2, label=r'$\Im(C_{22})$')
plt.plot(data[:,0], data[:,6], linewidth=2, label=r'$\Im(C_{33})$')
plt.plot(data[:,0], data[:,8], linewidth=2, label=r'$\Im(C_{44})$')
plt.plot(data[:,0], data[:,10], linewidth=2, label=r'$\Im(C_{12})$')
# plt.plot(data[:,0], data[:,3], linewidth=2, label=r'$\Re(C_{yy})$')
# plt.plot(data[:,0], data[:,4], linewidth=2, label=r'$\Im(C_{yy})$')
#plt.figure(1)
#plt.plot(x,y1,'-')
#plt.plot(x,y2,'g-')
plt.xlim(0,40)
plt.legend(loc=3)
plt.xlabel('Time [a.u.]')
#plt.ylabel('Positions')
plt.savefig('cor.pdf')
plt.show()
# File: packages/pyright-internal/src/tests/samples/tuples10.py (repo: sasano8/pyright, license: MIT)

# This sample tests that inferred types for tuples strip
# literals under the appropriate circumstances.
from typing import List, Literal, Tuple
a1 = (1, 2)
t1: Literal["tuple[Literal[1], Literal[2]]"] = reveal_type(a1)
a2 = list((1, 2))
t2: Literal["list[int]"] = reveal_type(a2)
a3: List[Literal[1]] = list((1,))
t3: Literal["list[Literal[1]]"] = reveal_type(a3)
def func1(v1: Tuple[Literal[1], ...], v2: Tuple[Literal[1]]):
    a4 = set(v1)
    t4: Literal["set[Literal[1]]"] = reveal_type(a4)

    a5 = set(v2)
    t5: Literal["set[Literal[1]]"] = reveal_type(a5)
a6 = (1, "hi")
t6: Literal["tuple[Literal[1], Literal['hi']]"] = reveal_type(a6)
v4 = set(a6)
t7: Literal["set[int | str]"] = reveal_type(v4)
# File: experiment_tests/basic_experiments.py (repo: probablytom/bpi_13_python, license: MIT)

import unittest
from theatre_au import Clock
from actor_au import Troupe
from domain_model import construct_universe, action_log, generate_XES, new_trace
class ExperimentalScratchpad(unittest.TestCase):

    def setUp(self):
        self.clock = Clock()
        self.reps = Troupe()
        self.specialists = Troupe()
        self.company = Troupe()
        self.num_reps = 5
        self.num_specialists = 2
        construct_universe(self.clock,
                           self.specialists,
                           self.reps,
                           self.company,
                           self.num_reps,
                           self.num_specialists)

    def test_actors_actually_act(self):
        # Submit some work for the company to do.
        self.company.recieve_message('a_submitted')
        self.clock.tick(2)

        # Check work has moved on
        self.assertTrue(len(action_log) != 0)

    def test_handoff_to_specialists(self):
        self.company.recieve_message('w_nabellen_incomplete_dossiers_scheduled')
        self.clock.tick(2)
        self.assertTrue('w_valideren_aanvraag_complete' in [event[1].lower()
                                                            for event_sequence in action_log
                                                            for event in event_sequence])
class TestExperimentsMakeXES(unittest.TestCase):

    def setUp(self):
        self.clock = Clock()
        self.reps = Troupe()
        self.specialists = Troupe()
        self.company = Troupe()
        self.num_reps = 5
        self.num_specialists = 2
        construct_universe(self.clock,
                           self.specialists,
                           self.reps,
                           self.company,
                           self.num_reps,
                           self.num_specialists)

    def test_simple_XES_trace(self):
        self.company.recieve_message('start')
        self.clock.tick(100)
        generate_XES()

    def test_generate_50_traces(self):
        def run_sim():
            self.company.recieve_message('start')
            self.clock.tick(100)

        for i in range(49):
            run_sim()
            new_trace()
        run_sim()

        generate_XES(log_path="50_traces.xes")


if __name__ == '__main__':
    unittest.main()
# File: chess/states/intro.py (repo: cuiqui/chrisis, license: MIT)

import logging
from pathlib import Path
from dataclasses import dataclass
from typing import Union
import pygame as pg
import chess.settings as s
from chess.states.state import State
from chess.panels.intro.title import Title
from chess.panels.intro.menu import Menu
from chess.panels.console import Console
from chess.utils.coords import Coords
from chess.utils.typewriter import Typewriter, TypewriterConfig, LogType
vec = pg.math.Vector2
logger = logging.getLogger(Path(__file__).stem)
@dataclass
class Intro(State):
    next = 'GAME'
    greet = {
        1: [False, ('[DEBUG] Loading protocol...', LogType.DEBUG)],
        3: [False, (
            '[WARNING] Cannot load "assets/fonts/stolen.ttf". Proceeding with default.',
            LogType.WARNING
        )],
        3.2: [False, (f'[DEBUG] Displaying <State: Intro> interface.', LogType.DEBUG)],
        3.3: [False, ('[DEBUG] Invoke <func: self.say_hi>', LogType.DEBUG)],
        4: [False, ('[INFO] Hello person, I\'m beep boop.',)],
        4.4: [False, ('[DEBUG] Waiting for input...', LogType.DEBUG)]
    }
    title: Union[None, 'Title'] = None
    menu: Union[None, 'Menu'] = None
    console: Union[None, 'Console'] = None
    info_console: Union[None, 'Console'] = None
    def __post_init__(self):
        self.debug_draws = [
            self.draw_grid,
            self.draw_mouse_pos
        ]
        self.new()
    def new(self, config=None):
        self.title = Title(
            sprite_group=self.sprites,
            pos=Coords(x=s.GRIDWIDTH//2, y=1),
            size=Coords(x=16, y=4)
        )
        self.menu = Menu(
            sprite_group=self.sprites,
            pos=Coords(x=s.GRIDWIDTH//2, y=6),
            size=Coords(x=22, y=15)
        )
        self.console = Console(
            sprite_group=self.sprites,
            pos=Coords(x=6, y=7),
            size=Coords(x=6, y=6),
            color=s.WHITE,
            parent_color=s.DARKGREY,
            margin=6,
            frame_offset=s.TILESIZE,
            tp_config=TypewriterConfig(
                padding=5,
                size=22,
                color=s.DARKGREEN,
                surface_color=s.DARKGREY,
                pos='midtop'
            ),
            config=TypewriterConfig(
                surface_color=s.BLACK,
                size=12,
                padding=5
            )
        )
        self.info_console = Console(
            sprite_group=self.sprites,
            pos=Coords(x=s.GRIDWIDTH//2+4, y=7),
            size=Coords(x=6, y=6),
            title='INFO',
            color=s.WHITE,
            parent_color=s.DARKGREY,
            margin=6,
            frame_offset=s.TILESIZE,
            tp_config=TypewriterConfig(
                padding=5,
                size=22,
                color=s.DARKGREEN,
                surface_color=s.DARKGREY,
                pos='midtop'
            ),
            config=TypewriterConfig(
                surface_color=s.BLACK,
                color=s.WHITE,
                size=12,
                padding=5
            )
        )
        self.menu.set_console(self.console)
        self.menu.set_info_console(self.info_console)
    def update(self, screen, current_time, dt):
        self.current_time = current_time / 1000
        if self.debug:
            for func in self.debug_draws:
                func(screen)
        else:
            screen.fill(s.BLACK)
        self.sprites.draw(screen)
        self.sprites.update()
        self.say_hi()

    def say_hi(self):
        for k, v in self.greet.items():
            if not v[0] and k < self.current_time:
                v[0] = True
                self.console.log(*v[1])
    def events(self, events: list):
        action = None
        for event in events:
            if event.type == pg.KEYDOWN and event.key == pg.K_d:
                self.toggle_debug()
            elif event.type == pg.MOUSEBUTTONUP:
                action = self.menu.click(event.pos)
                self.persist = self.menu.config
                if action == 'PLAY':
                    if self.check():
                        self.next = 'GAME'
                        self.done = True
                elif action == 'QUIT':
                    self.quit = True

    def check(self):
        if len(self.persist['player']) != 2:
            self.console.log('[ERROR] You need another player!', LogType.ERROR)
            return False
        return True
    @staticmethod
    def draw_grid(screen):
        for x in range(0, s.WIDTH, s.TILESIZE):
            pg.draw.line(screen, s.LIGHTGREY, (x, 0), (x, s.HEIGHT))
        for y in range(0, s.HEIGHT, s.TILESIZE):
            pg.draw.line(screen, s.LIGHTGREY, (0, y), (s.WIDTH, y))

    def draw_mouse_pos(self, screen):
        mpos = pg.mouse.get_pos()
        coords = Coords(x=s.TILESIZE*3, y=s.TILESIZE)
        follow = pg.Surface((coords.x, coords.y))
        rect = follow.get_rect(topleft=(0, 0))
        tp = Typewriter(follow, TypewriterConfig(size=12, pos='center'))
        tp.type(str(mpos))
        screen.blit(follow, rect)
# File: examples/custom_full_model_prediction.py (repo: vickyvava/ImageAI, license: MIT)

from imageai.Prediction.Custom import CustomImagePrediction
import os
execution_path = os.getcwd()
predictor = CustomImagePrediction()
predictor.setModelPath(model_path=os.path.join(execution_path, "idenprof_full_resnet_ex-001_acc-0.119792.h5")) # Download the model via this link https://github.com/OlafenwaMoses/ImageAI/releases/tag/models-v3
predictor.setJsonPath(model_json=os.path.join(execution_path, "idenprof.json"))
predictor.loadFullModel(num_objects=10)
results, probabilities = predictor.predictImage(image_input=os.path.join(execution_path, "1.jpg"), result_count=5)
print(results)
print(probabilities)
# File: meiduo/apps/users/utils.py (repo: libin-c/Meiduo, license: MIT)

import re
from django.conf import settings
from django.contrib.auth.backends import ModelBackend
from itsdangerous import BadData
from apps.users.models import User
# 自定义认证后端类
from meiduo.settings.dev import logger
from utils.secret import SecretOauth
def get_user_by_account(account):
"""
根据account查询用户
:param account: 用户名或者手机号
:return: user
"""
try:
if re.match('^1[3-9]\d{9}$', account):
# 手机号登录
user = User.objects.get(mobile=account)
else:
# 用户名登录
user = User.objects.get(username=account)
except User.DoesNotExist:
return None
else:
return user
class UsernameMobileAuthBackend(ModelBackend):
# 重写父类的认证方法
def authenticate(self, request, username=None, password=None, **kwargs):
# # 3.1如果是手机号 验证手机号
# if re.match('^(1[3-9]\d{9}$)', username):
# user = User.objects.get(mobile=username)
# else:
# user = User.objects.get(username=username)
# if user and user.check.password(password):
# return user
# 通过request 判断是前段还是后端用户
if request is None:
try:
user = User.objects.get(username=username, is_staff=True)
except:
user =None
if user and user.check_password(password):
return user
else:
user = get_user_by_account(username)
# 校验user是否存在并校验密码是否正确
if user and user.check_password(password):
return user
def generate_verify_email_url(user):
    """
    Build the email verification URL from the user's id and email.
    :param user: user whose email is being verified
    :return: verification URL carrying a signed token
    """
    # 1. Data to sign
    data_dict = {'user_id': user.id, 'email': user.email}
    # 2. Sign the data
    secret_dict = SecretOauth().dumps(data_dict)
    # 3. Return the assembled URL
    verify_url = settings.EMAIL_ACTIVE_URL + '?token=' + secret_dict
return verify_url
def check_verify_email_token(token):
"""
    Verify the token and extract the user.
    :param token: signed serialization of the user information
    :return: user on success, None otherwise
"""
try:
token_dict = SecretOauth().loads(token)
except BadData:
return None
try:
user = User.objects.get(id=token_dict['user_id'], email=token_dict['email'])
except Exception as e:
logger.error(e)
return None
else:
return user
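`SecretOauth` is project-specific; for illustration, here is a minimal stdlib sketch of the same sign-then-verify round trip (the `SECRET` constant and the function names are stand-ins, not the project's API):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"change-me"  # stand-in for the real signing key (assumption)

def sign_payload(data: dict) -> str:
    """Serialize a payload and append an HMAC signature, like SecretOauth.dumps."""
    body = base64.urlsafe_b64encode(json.dumps(data).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_payload(token: str):
    """Return the payload if the signature checks out, else None."""
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body.encode()))
```

A tampered token fails `compare_digest` and yields `None`, mirroring how `check_verify_email_token` returns `None` on `BadData`.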
| 24.387755 | 84 | 0.6 | 272 | 2,390 | 5.165441 | 0.319853 | 0.049822 | 0.064057 | 0.076868 | 0.317438 | 0.185053 | 0.113879 | 0.113879 | 0.09395 | 0 | 0 | 0.007798 | 0.30251 | 2,390 | 97 | 85 | 24.639175 | 0.835033 | 0.19749 | 0 | 0.395833 | 0 | 0 | 0.024017 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0.0625 | 0.166667 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f11540df96f4088719e10fce2174e7adbb4cfb75 | 423 | py | Python | hippynn/graphs/nodes/__init__.py | tautomer/hippynn | df4504a5ea4680cfc61f490984dcddeac7ed99ee | [
"BSD-3-Clause"
] | 21 | 2021-11-17T00:56:35.000Z | 2022-03-22T05:57:11.000Z | hippynn/graphs/nodes/__init__.py | tautomer/hippynn | df4504a5ea4680cfc61f490984dcddeac7ed99ee | [
"BSD-3-Clause"
] | 4 | 2021-12-17T16:16:53.000Z | 2022-03-16T23:50:38.000Z | hippynn/graphs/nodes/__init__.py | tautomer/hippynn | df4504a5ea4680cfc61f490984dcddeac7ed99ee | [
"BSD-3-Clause"
] | 6 | 2021-11-30T21:09:31.000Z | 2022-03-18T07:07:32.000Z | """
Definitions of nodes for graph computation.
"""
import warnings
from ... import settings
if settings.DEBUG_NODE_CREATION:
warnings.warn("Printing automatic node creation info! Output will be verbose.")
def _debprint(*args, **kwargs):
if settings.DEBUG_NODE_CREATION:
print("AutoNode:", *args, **kwargs)
else:
pass
from . import base, inputs, indexers, networks, targets, physics, loss
| 21.15 | 83 | 0.704492 | 51 | 423 | 5.745098 | 0.72549 | 0.122867 | 0.102389 | 0.129693 | 0.1843 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186761 | 423 | 19 | 84 | 22.263158 | 0.851744 | 0.101655 | 0 | 0.2 | 0 | 0 | 0.19086 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | true | 0.1 | 0.3 | 0 | 0.4 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f116e25e44c1bee36f55a82c2b9058f44ab798b0 | 5,391 | py | Python | prometheus_ecs_discoverer/marshalling.py | lejmr/prometheus-ecs-discoverer | d4968c6e3f8588a9f64157462a82420d099ac583 | [
"Apache-2.0"
] | null | null | null | prometheus_ecs_discoverer/marshalling.py | lejmr/prometheus-ecs-discoverer | d4968c6e3f8588a9f64157462a82420d099ac583 | [
"Apache-2.0"
] | null | null | null | prometheus_ecs_discoverer/marshalling.py | lejmr/prometheus-ecs-discoverer | d4968c6e3f8588a9f64157462a82420d099ac583 | [
"Apache-2.0"
] | null | null | null | import json
import os
import re
from typing import Dict, List, Type
from loguru import logger
from prometheus_ecs_discoverer import s
from prometheus_ecs_discoverer.discovery import Target
# Copyright 2018, 2019 Signal Media Ltd. Licensed under the Apache License 2.0
# Modifications Copyright 2020 Tim Schwenke. Licensed under the Apache License 2.0
"""
Contains functions that work on `Target` objects and are responsible for
turning these into JSON files that can be consumed by Prometheus file-based
service discovery.
"""
def extract_path_interval_pairs(
metrics_path: str = None,
) -> Dict[str, str or None]:
"""Extracts path intervals from given metrics path.
Transforms a string like this `30s:/mymetrics1,/mymetrics2` into:
```
{
"/mymetrics1": "30s",
"/mymetrics2": None
}
```
"""
if not metrics_path:
return {s.FALLBACK_METRICS_ENDPOINT: None}
path_interval = {}
for entry in metrics_path.split(","):
if ":" in entry:
pi = entry.split(":")
if re.search("(15s|30s|1m|5m)", pi[0]):
path_interval[pi[1]] = pi[0]
else:
path_interval[pi[1]] = None
else:
path_interval[entry] = None
    if s.DEBUG:
        logger.bind(inp=metrics_path, outp=path_interval).debug(
            "Extracted path interval pairs."
        )
return path_interval
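The transformation described in the docstring can be checked with a standalone re-implementation (here `FALLBACK` stands in for `s.FALLBACK_METRICS_ENDPOINT`):

```python
import re

FALLBACK = "/metrics"  # assumption: stand-in for s.FALLBACK_METRICS_ENDPOINT

def parse_path_intervals(metrics_path=None):
    """Split '30s:/a,/b' into {'/a': '30s', '/b': None}."""
    if not metrics_path:
        return {FALLBACK: None}
    pairs = {}
    for entry in metrics_path.split(","):
        if ":" in entry:
            interval, path = entry.split(":", 1)
            # Only whitelisted scrape intervals are kept; others become None.
            pairs[path] = interval if re.search("(15s|30s|1m|5m)", interval) else None
        else:
            pairs[entry] = None
    return pairs
```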
def get_filename(
interval: str or None = None,
filename_15s: str = s.FILENAME_15S,
filename_30s: str = s.FILENAME_30S,
filename_1m: str = s.FILENAME_1M,
filename_5m: str = s.FILENAME_5M,
filename_generic: str = s.FILENAME_GENERIC,
) -> str:
"""Gets the filename for given interval.
Exists to allow custom file names.
Returns:
str: File name to use.
"""
if interval == "15s":
return filename_15s
elif interval == "30s":
return filename_30s
elif interval == "1m":
return filename_1m
elif interval == "5m":
return filename_5m
else:
return filename_generic
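The if/elif chain above is equivalent to a dict lookup with a default; the literal file names below are illustrative, not the configured `s.FILENAME_*` values:

```python
FILENAMES = {
    "15s": "15s-tasks.json",
    "30s": "30s-tasks.json",
    "1m": "1m-tasks.json",
    "5m": "5m-tasks.json",
}

def filename_for(interval=None, default="tasks.json"):
    """Map a scrape interval to its target file, falling back to the generic file."""
    return FILENAMES.get(interval, default)
```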
def marshall_targets(
targets: List[Type[Target]],
filename_15s: str = s.FILENAME_15S,
filename_30s: str = s.FILENAME_30S,
filename_1m: str = s.FILENAME_1M,
filename_5m: str = s.FILENAME_5M,
filename_generic: str = s.FILENAME_GENERIC,
labelname_cluster: str = s.LABELNAME_CLUSTER,
labelname_taskversion: str = s.LABELNAME_TASKVERSION,
labelname_taskid: str = s.LABELNAME_TASKID,
labelname_containerid: str = s.LABELNAME_CONTAINERID,
labelname_instanceid: str = s.LABELNAME_INSTANCEID,
) -> Dict[str, List[Dict]]:
"""Marshalls given targets into JSON compatible structure.
```
{
"tasks.json": [
{
"targets": [
"ip:port"
],
"labels": {
"instance": "ip:port",
"job": "job",
"and": "more"
},
},
...
],
"15s-tasks.json": [
...
],
"30s-tasks.json": [
...
],
"1m-tasks.json": [
...
],
"5m-tasks.json": [
...
]
}
```
"""
    result = {
        filename_generic: [],
        filename_15s: [],
        filename_30s: [],
        filename_1m: [],
        filename_5m: [],
    }
for target in targets:
path_interval_pairs = extract_path_interval_pairs(target.metrics_path)
for path, interval in path_interval_pairs.items():
labels = {}
if target.custom_labels:
labels.update(target.custom_labels)
labels["instance"] = target.p_instance
labels["job"] = target.task_name
labels["metrics_path"] = path
if target.cluster_name:
labels[labelname_cluster] = target.cluster_name
if target.task_version:
labels[labelname_taskversion] = target.task_version
if target.task_id:
labels[labelname_taskid] = target.task_id
if target.container_id:
labels[labelname_containerid] = target.container_id
if target.instance_id:
labels[labelname_instanceid] = target.instance_id
job = {"targets": [f"{target.ip}:{target.port}"], "labels": labels}
            result[
                get_filename(
                    interval,
                    filename_15s=filename_15s,
                    filename_30s=filename_30s,
                    filename_1m=filename_1m,
                    filename_5m=filename_5m,
                    filename_generic=filename_generic,
                )
            ].append(job)
logger.bind(**result).info("Marshalled targets")
return result
def write_targets_to_file(targets: List[Type[Target]], output_directory: str) -> None:
"""Writes targets to files.
Args:
targets: List of target objects.
output_directory: Path to directory where files should be written to.
Raises:
OSError: If the given directory is not valid.
"""
if not os.path.isdir(output_directory):
raise OSError(f"Directory '{output_directory}' not found.")
for file_name, content in marshall_targets(targets).items():
file_path = f"{output_directory}/{file_name}"
tmp_file_path = f"{file_path}.tmp"
with open(tmp_file_path, "w") as file:
file.write(json.dumps(content, indent=4))
os.rename(tmp_file_path, file_path)
        if s.DEBUG:
            logger.bind(file=file_path).debug("Written file.")
| 27.932642 | 86 | 0.59488 | 623 | 5,391 | 4.969502 | 0.263242 | 0.01938 | 0.03876 | 0.017442 | 0.116925 | 0.106589 | 0.106589 | 0.086563 | 0.086563 | 0.086563 | 0 | 0.02194 | 0.298275 | 5,391 | 192 | 87 | 28.078125 | 0.796458 | 0.213689 | 0 | 0.134021 | 0 | 0 | 0.061526 | 0.014278 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041237 | false | 0 | 0.072165 | 0 | 0.195876 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f117604d765d64de174902fb0fbc7bb4e32707ff | 129 | py | Python | test/generate_x.py | ksemianov/torch2caffe | 1a4e622f0ddb1212dbfc0ffca91ed0ad1a0a0545 | [
"MIT"
] | null | null | null | test/generate_x.py | ksemianov/torch2caffe | 1a4e622f0ddb1212dbfc0ffca91ed0ad1a0a0545 | [
"MIT"
] | null | null | null | test/generate_x.py | ksemianov/torch2caffe | 1a4e622f0ddb1212dbfc0ffca91ed0ad1a0a0545 | [
"MIT"
] | null | null | null | import sys
import numpy as np
assert len(sys.argv) == 6
x = np.random.randn(*map(int, sys.argv[1:-1]))
np.save(sys.argv[-1], x)
| 18.428571 | 46 | 0.658915 | 27 | 129 | 3.148148 | 0.592593 | 0.247059 | 0.188235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 0.131783 | 129 | 6 | 47 | 21.5 | 0.723214 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
f117d7f32926745dd2bc177a8ad4c629308d255f | 1,242 | py | Python | tests/mlflow/test_mlflow_logging.py | MuttData/soam | 65612a02552668c6721dc20e675654883391c3e9 | [
"Apache-2.0"
] | 1 | 2021-09-17T01:14:57.000Z | 2021-09-17T01:14:57.000Z | tests/mlflow/test_mlflow_logging.py | MuttData/soam | 65612a02552668c6721dc20e675654883391c3e9 | [
"Apache-2.0"
] | null | null | null | tests/mlflow/test_mlflow_logging.py | MuttData/soam | 65612a02552668c6721dc20e675654883391c3e9 | [
"Apache-2.0"
] | 1 | 2021-08-09T14:22:50.000Z | 2021-08-09T14:22:50.000Z | from unittest.mock import patch
import mlflow
from soam.core import SoamFlow
from soam.workflow import Slicer
from tests.helpers import sample_data_df # pylint: disable=unused-import
def test_simple_flow(sample_data_df, tmpdir): # pylint: disable=redefined-outer-name
tmp_path = "file://" + str(tmpdir) + "/mlruns"
with patch("soam.core.runner.TRACKING_URI", tmp_path), patch(
"soam.core.runner.TRACKING_IS_ACTIVE", True
), patch("soam.core.step.TRACKING_IS_ACTIVE", True):
df = sample_data_df
df['metric'] = 1
dimensions = ["y"]
ds_col = 'ds'
metrics = ['metric']
slice_task = Slicer(ds_col=ds_col, dimensions=dimensions, metrics=metrics)
with SoamFlow(name="flow") as flow:
_ = slice_task(sample_data_df)
flow.run()
log_df = mlflow.search_runs(['0'])
assert len(log_df) == 2
assert log_df['tags.mlflow.runName'].tolist() == ['Slicer', 'flow_run']
slicer_logs = log_df[log_df['tags.mlflow.runName'] == 'Slicer'].iloc[0]
assert slicer_logs['params.dimensions'] == str(dimensions)
assert slicer_logs['params.metrics'] == str(metrics)
assert slicer_logs['params.ds_col'] == str(ds_col)
| 40.064516 | 85 | 0.6562 | 166 | 1,242 | 4.692771 | 0.385542 | 0.032092 | 0.061617 | 0.084724 | 0.125802 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004065 | 0.207729 | 1,242 | 30 | 86 | 41.4 | 0.787602 | 0.05314 | 0 | 0 | 0 | 0 | 0.198636 | 0.082694 | 0 | 0 | 0 | 0 | 0.192308 | 1 | 0.038462 | false | 0 | 0.192308 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f11901c892a2e1651656900a41006977b1b933d2 | 1,609 | py | Python | emote_recognizer.py | realPanamo/EmoteRecognizer | 9467b7f673266b258fe2cfd76f49c3dd83b2839c | [
"MIT"
] | 2 | 2019-06-23T17:59:52.000Z | 2019-06-25T06:33:15.000Z | emote_recognizer.py | juliarn/EmoteRecognizer | 9467b7f673266b258fe2cfd76f49c3dd83b2839c | [
"MIT"
] | null | null | null | emote_recognizer.py | juliarn/EmoteRecognizer | 9467b7f673266b258fe2cfd76f49c3dd83b2839c | [
"MIT"
] | null | null | null | import enum
import cv2
import numpy
import requests
import config
from model.model import KerasModel
from model.training_data import TrainingData
class EmoteType(enum.Enum):
PEEPO = 0
KAPPA = 1
class EmoteRecognizer:
def __init__(self):
training_data = TrainingData(config.peepo_data_dir, config.kappa_data_dir, config.image_size)
keras_model = KerasModel(config.model_filepath, config.image_size, config.batch_size, training_data)
keras_model.create_weights()
self.model = keras_model.model
self.image_size = config.image_size
def predict(self, image_array):
"""
Predicts an image
:param image_array: the image in the correct size turned into an array
:return: the predicted emote type of the image
"""
        # The model expects a list of images to predict, but we have only one,
        # so we expand the array with a leading batch dimension.
image_array = numpy.expand_dims(image_array, 0)
prediction = self.model.predict(image_array)[0][0]
return EmoteType(prediction)
def parse_image(self, url):
"""
Downloads an image, resizes it and turns it into an array
:param url: the url of the image which should be downloaded
:return: the array of the image, ready to be predicted
"""
response = requests.get(url)
image = numpy.asarray(bytearray(response.content), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_COLOR)
return cv2.resize(image, (self.image_size, self.image_size), interpolation=cv2.INTER_CUBIC)
| 29.254545 | 108 | 0.683033 | 216 | 1,609 | 4.939815 | 0.402778 | 0.050609 | 0.042174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009009 | 0.241144 | 1,609 | 54 | 109 | 29.796296 | 0.864865 | 0.257303 | 0 | 0 | 0 | 0 | 0.004484 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.269231 | 0 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f119173712c2b862677623c0eea51d0e26339373 | 633 | py | Python | Python/Supermarket-Queue.py | kbgoda/Codewars-Challenges | b163df4f0bb5ccf5b6482d26b7c1d1a4ec4e9683 | [
"MIT"
] | null | null | null | Python/Supermarket-Queue.py | kbgoda/Codewars-Challenges | b163df4f0bb5ccf5b6482d26b7c1d1a4ec4e9683 | [
"MIT"
] | null | null | null | Python/Supermarket-Queue.py | kbgoda/Codewars-Challenges | b163df4f0bb5ccf5b6482d26b7c1d1a4ec4e9683 | [
"MIT"
] | null | null | null | # Author: Karan Goda
# https://www.codewars.com/kata/57b06f90e298a7b53d000a86
def queue_time(customers, n):
    # e.g. customers = [12, 13]
    if not customers or n <= 0:
        return 0
    # E.g. for 3 queues (n = 3), queues starts as [0, 0, 0]
    queues = [0] * n
    for customer in customers:
        # Add each customer to the currently shortest queue
        queues[queues.index(min(queues))] += customer
    return max(queues)
# Test cases
print(queue_time([1, 2, 3], 1)) # Ans is 6
print(queue_time([1, 2, 3], 2)) # Ans is 4
print(queue_time([1, 2, 3, 5], 3)) # Ans is 6
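For larger inputs the `min`/`index` scan per customer is O(n); a min-heap keeps the least-loaded till at the front. A sketch (`queue_time_heap` is our name, not part of the kata):

```python
import heapq

def queue_time_heap(customers, n):
    """Supermarket-queue total time using a min-heap of till loads."""
    if not customers or n <= 0:
        return 0
    tills = [0] * n
    heapq.heapify(tills)
    for customer in customers:
        # Pop the least-loaded till and push it back with the customer added.
        heapq.heapreplace(tills, tills[0] + customer)
    return max(tills)
```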
| 31.65 | 61 | 0.578199 | 100 | 633 | 3.62 | 0.46 | 0.099448 | 0.116022 | 0.124309 | 0.174033 | 0.174033 | 0.127072 | 0.127072 | 0.127072 | 0 | 0 | 0.099783 | 0.271722 | 633 | 19 | 62 | 33.315789 | 0.685466 | 0.298578 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.230769 | 0.230769 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f11a59c34afba587fb8aa830114d8088e1e90046 | 777 | py | Python | setup.py | mitchelllisle/monstermash | 724907514a9727a2970b3ddffe4d6fb2a490da48 | [
"MIT"
] | null | null | null | setup.py | mitchelllisle/monstermash | 724907514a9727a2970b3ddffe4d6fb2a490da48 | [
"MIT"
] | null | null | null | setup.py | mitchelllisle/monstermash | 724907514a9727a2970b3ddffe4d6fb2a490da48 | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
with open('requirements/requirements.txt') as f:
requirements = f.read().splitlines()
with open('requirements/requirements-test.txt') as f:
test_requirements = f.read().splitlines()
setup(
name='monstermash',
author='Mitchell Lisle',
author_email='m.lisle90@gmail.com',
description="A Python Encryption Helper Library",
install_requires=requirements,
packages=find_packages(),
setup_requires=[],
test_suite='tests',
tests_require=test_requirements,
entry_points={
'console_scripts': [
'monstermash=monstermash.__main__:main',
],
},
url='https://github.com/mitchelllisle/monstermash',
version='0.1.0',
zip_safe=False,
)
| 25.064516 | 56 | 0.666667 | 84 | 777 | 5.97619 | 0.607143 | 0.047809 | 0.079681 | 0.12749 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008078 | 0.203346 | 777 | 30 | 57 | 25.9 | 0.802908 | 0 | 0 | 0 | 0 | 0 | 0.317889 | 0.1287 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.041667 | 0 | 0.041667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f11a62a225b14fde953a537552d646f6d24b4f0a | 2,117 | py | Python | flatpickr/_base.py | maqnius/django-flatpickr | 92d5bbf9d4c0c01f904053b39f587053d072b45d | [
"MIT"
] | 40 | 2019-03-07T08:48:58.000Z | 2021-12-25T21:26:14.000Z | flatpickr/_base.py | maqnius/django-flatpickr | 92d5bbf9d4c0c01f904053b39f587053d072b45d | [
"MIT"
] | 6 | 2019-08-06T11:08:25.000Z | 2021-11-16T10:05:52.000Z | flatpickr/_base.py | maqnius/django-flatpickr | 92d5bbf9d4c0c01f904053b39f587053d072b45d | [
"MIT"
] | 8 | 2020-01-02T15:14:38.000Z | 2022-01-24T13:10:26.000Z | # -*- coding: utf-8 -*-
"""Contains Base Date-Picker input class for widgets of this package."""
from django.forms.widgets import DateTimeBaseInput
from flatpickr._settings import WidgetSettings
from flatpickr._media import WidgetMedia
from flatpickr._config import WidgetConfig
class BasePickerInput(DateTimeBaseInput):
"""Base Date-Picker input class for widgets of this package."""
Media = WidgetMedia
picker_type = 'DATE'
datetime_format = '%Y-%m-%d'
format_key = 'DATE_INPUT_FORMATS'
option_overrides = {
'dateFormat': 'Y-m-d',
}
def __init__(self, attrs=None, options=None):
"""Initialize the Date-picker widget."""
self.config = WidgetConfig(self.picker_type)
self.config._calculate_options(options, self.option_overrides)
self.template_name = WidgetSettings.TEMPLATE_NAME or self.template_name
_attrs = WidgetSettings.ATTRS.copy()
_attrs.update(attrs or {})
super().__init__(_attrs, self.datetime_format)
def get_context(self, name, value, attrs):
"""Return widget context dictionary."""
context = super().get_context(
name, value, attrs)
context['widget']['attrs']['fp_config'] = self.config.to_json()
return context
def start_of(self, event_id):
"""
Set Date-Picker as the start-date of a date-range.
Args:
- event_id (string): User-defined unique id for linking two fields
"""
WidgetConfig.events[str(event_id)] = self
return self
def end_of(self, event_id, import_options=True):
"""
Set Date-Picker as the end-date of a date-range.
Args:
- event_id (string): User-defined unique id for linking two fields
"""
event_id = str(event_id)
if event_id in WidgetConfig.events:
linked_picker = WidgetConfig.events[event_id]
self.config.linked_to = linked_picker.config.id
else:
raise KeyError(
'start-date not specified for event_id "%s"' % event_id)
return self
| 34.145161 | 79 | 0.64478 | 257 | 2,117 | 5.120623 | 0.350195 | 0.058511 | 0.021277 | 0.028875 | 0.206687 | 0.179331 | 0.179331 | 0.179331 | 0.179331 | 0.179331 | 0 | 0.000633 | 0.253188 | 2,117 | 61 | 80 | 34.704918 | 0.831752 | 0.222957 | 0 | 0.055556 | 0 | 0 | 0.069211 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.138889 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f11af5159da93a6017221c5833ef55a7b96baeab | 354 | py | Python | biopandas/pdb/__init__.py | tahmidbintaslim/biopandas | 75a2d15584a61ad11536104fa280f2885454a394 | [
"BSD-3-Clause"
] | 453 | 2015-11-24T01:16:05.000Z | 2022-03-18T13:52:04.000Z | biopandas/pdb/__init__.py | tahmidbintaslim/biopandas | 75a2d15584a61ad11536104fa280f2885454a394 | [
"BSD-3-Clause"
] | 84 | 2015-11-24T07:41:53.000Z | 2022-03-17T00:37:37.000Z | biopandas/pdb/__init__.py | tahmidbintaslim/biopandas | 75a2d15584a61ad11536104fa280f2885454a394 | [
"BSD-3-Clause"
] | 115 | 2015-12-01T01:37:43.000Z | 2022-03-10T13:20:24.000Z | # BioPandas
# Author: Sebastian Raschka <mail@sebastianraschka.com>
# License: BSD 3 clause
# Project Website: http://rasbt.github.io/biopandas/
# Code Repository: https://github.com/rasbt/biopandas
"""
BioPandas module for working with Protein Data Bank (PDB)
files in pandas DataFrames.
"""
from .pandas_pdb import PandasPdb
__all__ = ["PandasPdb"]
| 23.6 | 57 | 0.757062 | 45 | 354 | 5.844444 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003236 | 0.127119 | 354 | 14 | 58 | 25.285714 | 0.847896 | 0.776836 | 0 | 0 | 0 | 0 | 0.134328 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f11ca3fdd4ba7c8d116c74428211e0b05be66c95 | 17,854 | py | Python | sdk/python/pulumi_azure_native/costmanagement/v20210101/_inputs.py | polivbr/pulumi-azure-native | 09571f3bf6bdc4f3621aabefd1ba6c0d4ecfb0e7 | [
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure_native/costmanagement/v20210101/_inputs.py | polivbr/pulumi-azure-native | 09571f3bf6bdc4f3621aabefd1ba6c0d4ecfb0e7 | [
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure_native/costmanagement/v20210101/_inputs.py | polivbr/pulumi-azure-native | 09571f3bf6bdc4f3621aabefd1ba6c0d4ecfb0e7 | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
from ._enums import *
__all__ = [
'ExportDatasetConfigurationArgs',
'ExportDatasetArgs',
'ExportDefinitionArgs',
'ExportDeliveryDestinationArgs',
'ExportDeliveryInfoArgs',
'ExportRecurrencePeriodArgs',
'ExportScheduleArgs',
'ExportTimePeriodArgs',
]
@pulumi.input_type
class ExportDatasetConfigurationArgs:
def __init__(__self__, *,
columns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
The export dataset configuration. Allows columns to be selected for the export. If not provided then the export will include all available columns.
:param pulumi.Input[Sequence[pulumi.Input[str]]] columns: Array of column names to be included in the export. If not provided then the export will include all available columns. The available columns can vary by customer channel (see examples).
"""
if columns is not None:
pulumi.set(__self__, "columns", columns)
@property
@pulumi.getter
def columns(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Array of column names to be included in the export. If not provided then the export will include all available columns. The available columns can vary by customer channel (see examples).
"""
return pulumi.get(self, "columns")
@columns.setter
def columns(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "columns", value)
@pulumi.input_type
class ExportDatasetArgs:
def __init__(__self__, *,
configuration: Optional[pulumi.Input['ExportDatasetConfigurationArgs']] = None,
granularity: Optional[pulumi.Input[Union[str, 'GranularityType']]] = None):
"""
The definition for data in the export.
:param pulumi.Input['ExportDatasetConfigurationArgs'] configuration: The export dataset configuration.
:param pulumi.Input[Union[str, 'GranularityType']] granularity: The granularity of rows in the export. Currently only 'Daily' is supported.
"""
if configuration is not None:
pulumi.set(__self__, "configuration", configuration)
if granularity is not None:
pulumi.set(__self__, "granularity", granularity)
@property
@pulumi.getter
def configuration(self) -> Optional[pulumi.Input['ExportDatasetConfigurationArgs']]:
"""
The export dataset configuration.
"""
return pulumi.get(self, "configuration")
@configuration.setter
def configuration(self, value: Optional[pulumi.Input['ExportDatasetConfigurationArgs']]):
pulumi.set(self, "configuration", value)
@property
@pulumi.getter
def granularity(self) -> Optional[pulumi.Input[Union[str, 'GranularityType']]]:
"""
The granularity of rows in the export. Currently only 'Daily' is supported.
"""
return pulumi.get(self, "granularity")
@granularity.setter
def granularity(self, value: Optional[pulumi.Input[Union[str, 'GranularityType']]]):
pulumi.set(self, "granularity", value)
@pulumi.input_type
class ExportDefinitionArgs:
def __init__(__self__, *,
timeframe: pulumi.Input[Union[str, 'TimeframeType']],
type: pulumi.Input[Union[str, 'ExportType']],
data_set: Optional[pulumi.Input['ExportDatasetArgs']] = None,
time_period: Optional[pulumi.Input['ExportTimePeriodArgs']] = None):
"""
The definition of an export.
:param pulumi.Input[Union[str, 'TimeframeType']] timeframe: The time frame for pulling data for the export. If custom, then a specific time period must be provided.
:param pulumi.Input[Union[str, 'ExportType']] type: The type of the export. Note that 'Usage' is equivalent to 'ActualCost' and is applicable to exports that do not yet provide data for charges or amortization for service reservations.
:param pulumi.Input['ExportDatasetArgs'] data_set: The definition for data in the export.
:param pulumi.Input['ExportTimePeriodArgs'] time_period: Has time period for pulling data for the export.
"""
pulumi.set(__self__, "timeframe", timeframe)
pulumi.set(__self__, "type", type)
if data_set is not None:
pulumi.set(__self__, "data_set", data_set)
if time_period is not None:
pulumi.set(__self__, "time_period", time_period)
@property
@pulumi.getter
def timeframe(self) -> pulumi.Input[Union[str, 'TimeframeType']]:
"""
The time frame for pulling data for the export. If custom, then a specific time period must be provided.
"""
return pulumi.get(self, "timeframe")
@timeframe.setter
def timeframe(self, value: pulumi.Input[Union[str, 'TimeframeType']]):
pulumi.set(self, "timeframe", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[Union[str, 'ExportType']]:
"""
The type of the export. Note that 'Usage' is equivalent to 'ActualCost' and is applicable to exports that do not yet provide data for charges or amortization for service reservations.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[Union[str, 'ExportType']]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="dataSet")
def data_set(self) -> Optional[pulumi.Input['ExportDatasetArgs']]:
"""
The definition for data in the export.
"""
return pulumi.get(self, "data_set")
@data_set.setter
def data_set(self, value: Optional[pulumi.Input['ExportDatasetArgs']]):
pulumi.set(self, "data_set", value)
@property
@pulumi.getter(name="timePeriod")
def time_period(self) -> Optional[pulumi.Input['ExportTimePeriodArgs']]:
"""
Has time period for pulling data for the export.
"""
return pulumi.get(self, "time_period")
@time_period.setter
def time_period(self, value: Optional[pulumi.Input['ExportTimePeriodArgs']]):
pulumi.set(self, "time_period", value)
@pulumi.input_type
class ExportDeliveryDestinationArgs:
def __init__(__self__, *,
container: pulumi.Input[str],
resource_id: Optional[pulumi.Input[str]] = None,
root_folder_path: Optional[pulumi.Input[str]] = None,
sas_token: Optional[pulumi.Input[str]] = None,
storage_account: Optional[pulumi.Input[str]] = None):
"""
This represents the blob storage account location where exports of costs will be delivered. There are two ways to configure the destination. The approach recommended for most customers is to specify the resourceId of the storage account. This requires a one-time registration of the account's subscription with the Microsoft.CostManagementExports resource provider in order to give Azure Cost Management services access to the storage. When creating an export in the Azure portal this registration is performed automatically but API users may need to register the subscription explicitly (for more information see https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-supported-services ). Another way to configure the destination is available ONLY to Partners with a Microsoft Partner Agreement plan who are global admins of their billing account. These Partners, instead of specifying the resourceId of a storage account, can specify the storage account name along with a SAS token for the account. This allows exports of costs to a storage account in any tenant. The SAS token should be created for the blob service with Service/Container/Object resource types and with Read/Write/Delete/List/Add/Create permissions (for more information see https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/export-cost-data-storage-account-sas-key ).
:param pulumi.Input[str] container: The name of the container where exports will be uploaded. If the container does not exist it will be created.
:param pulumi.Input[str] resource_id: The resource id of the storage account where exports will be delivered. This is not required if a sasToken and storageAccount are specified.
:param pulumi.Input[str] root_folder_path: The name of the directory where exports will be uploaded.
:param pulumi.Input[str] sas_token: A SAS token for the storage account. For a restricted set of Azure customers this together with storageAccount can be specified instead of resourceId. Note: the value returned by the API for this property will always be obfuscated. Returning this same obfuscated value will not result in the SAS token being updated. To update this value a new SAS token must be specified.
:param pulumi.Input[str] storage_account: The storage account where exports will be uploaded. For a restricted set of Azure customers this together with sasToken can be specified instead of resourceId.
"""
pulumi.set(__self__, "container", container)
if resource_id is not None:
pulumi.set(__self__, "resource_id", resource_id)
if root_folder_path is not None:
pulumi.set(__self__, "root_folder_path", root_folder_path)
if sas_token is not None:
pulumi.set(__self__, "sas_token", sas_token)
if storage_account is not None:
pulumi.set(__self__, "storage_account", storage_account)
@property
@pulumi.getter
def container(self) -> pulumi.Input[str]:
"""
The name of the container where exports will be uploaded. If the container does not exist it will be created.
"""
return pulumi.get(self, "container")
@container.setter
def container(self, value: pulumi.Input[str]):
pulumi.set(self, "container", value)
@property
@pulumi.getter(name="resourceId")
def resource_id(self) -> Optional[pulumi.Input[str]]:
"""
The resource id of the storage account where exports will be delivered. This is not required if a sasToken and storageAccount are specified.
"""
return pulumi.get(self, "resource_id")
@resource_id.setter
def resource_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_id", value)
@property
@pulumi.getter(name="rootFolderPath")
def root_folder_path(self) -> Optional[pulumi.Input[str]]:
"""
The name of the directory where exports will be uploaded.
"""
return pulumi.get(self, "root_folder_path")
@root_folder_path.setter
def root_folder_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "root_folder_path", value)
@property
@pulumi.getter(name="sasToken")
def sas_token(self) -> Optional[pulumi.Input[str]]:
"""
A SAS token for the storage account. For a restricted set of Azure customers this together with storageAccount can be specified instead of resourceId. Note: the value returned by the API for this property will always be obfuscated. Returning this same obfuscated value will not result in the SAS token being updated. To update this value a new SAS token must be specified.
"""
return pulumi.get(self, "sas_token")
@sas_token.setter
def sas_token(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "sas_token", value)
@property
@pulumi.getter(name="storageAccount")
def storage_account(self) -> Optional[pulumi.Input[str]]:
"""
The storage account where exports will be uploaded. For a restricted set of Azure customers this together with sasToken can be specified instead of resourceId.
"""
return pulumi.get(self, "storage_account")
@storage_account.setter
def storage_account(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "storage_account", value)
@pulumi.input_type
class ExportDeliveryInfoArgs:
def __init__(__self__, *,
destination: pulumi.Input['ExportDeliveryDestinationArgs']):
"""
The delivery information associated with an export.
:param pulumi.Input['ExportDeliveryDestinationArgs'] destination: Has destination for the export being delivered.
"""
pulumi.set(__self__, "destination", destination)
@property
@pulumi.getter
def destination(self) -> pulumi.Input['ExportDeliveryDestinationArgs']:
"""
Has destination for the export being delivered.
"""
return pulumi.get(self, "destination")
@destination.setter
def destination(self, value: pulumi.Input['ExportDeliveryDestinationArgs']):
pulumi.set(self, "destination", value)
@pulumi.input_type
class ExportRecurrencePeriodArgs:
def __init__(__self__, *,
from_: pulumi.Input[str],
to: Optional[pulumi.Input[str]] = None):
"""
The start and end date for recurrence schedule.
:param pulumi.Input[str] from_: The start date of recurrence.
:param pulumi.Input[str] to: The end date of recurrence.
"""
pulumi.set(__self__, "from_", from_)
if to is not None:
pulumi.set(__self__, "to", to)
@property
@pulumi.getter(name="from")
def from_(self) -> pulumi.Input[str]:
"""
The start date of recurrence.
"""
return pulumi.get(self, "from_")
@from_.setter
def from_(self, value: pulumi.Input[str]):
pulumi.set(self, "from_", value)
@property
@pulumi.getter
def to(self) -> Optional[pulumi.Input[str]]:
"""
The end date of recurrence.
"""
return pulumi.get(self, "to")
@to.setter
def to(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "to", value)
@pulumi.input_type
class ExportScheduleArgs:
def __init__(__self__, *,
recurrence: Optional[pulumi.Input[Union[str, 'RecurrenceType']]] = None,
recurrence_period: Optional[pulumi.Input['ExportRecurrencePeriodArgs']] = None,
status: Optional[pulumi.Input[Union[str, 'StatusType']]] = None):
"""
The schedule associated with the export.
:param pulumi.Input[Union[str, 'RecurrenceType']] recurrence: The schedule recurrence.
:param pulumi.Input['ExportRecurrencePeriodArgs'] recurrence_period: Has the start and end date of the recurrence. The start date must be in the future. If present, the end date must be later than the start date.
:param pulumi.Input[Union[str, 'StatusType']] status: The status of the export's schedule. If 'Inactive', the export's schedule is paused.
"""
if recurrence is not None:
pulumi.set(__self__, "recurrence", recurrence)
if recurrence_period is not None:
pulumi.set(__self__, "recurrence_period", recurrence_period)
if status is not None:
pulumi.set(__self__, "status", status)
@property
@pulumi.getter
def recurrence(self) -> Optional[pulumi.Input[Union[str, 'RecurrenceType']]]:
"""
The schedule recurrence.
"""
return pulumi.get(self, "recurrence")
@recurrence.setter
def recurrence(self, value: Optional[pulumi.Input[Union[str, 'RecurrenceType']]]):
pulumi.set(self, "recurrence", value)
@property
@pulumi.getter(name="recurrencePeriod")
def recurrence_period(self) -> Optional[pulumi.Input['ExportRecurrencePeriodArgs']]:
"""
Has the start and end date of the recurrence. The start date must be in the future. If present, the end date must be later than the start date.
"""
return pulumi.get(self, "recurrence_period")
@recurrence_period.setter
def recurrence_period(self, value: Optional[pulumi.Input['ExportRecurrencePeriodArgs']]):
pulumi.set(self, "recurrence_period", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[Union[str, 'StatusType']]]:
"""
The status of the export's schedule. If 'Inactive', the export's schedule is paused.
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[Union[str, 'StatusType']]]):
pulumi.set(self, "status", value)
@pulumi.input_type
class ExportTimePeriodArgs:
def __init__(__self__, *,
from_: pulumi.Input[str],
to: pulumi.Input[str]):
"""
The date range for data in the export. This should only be specified with timeFrame set to 'Custom'. The maximum date range is 3 months.
:param pulumi.Input[str] from_: The start date for export data.
:param pulumi.Input[str] to: The end date for export data.
"""
pulumi.set(__self__, "from_", from_)
pulumi.set(__self__, "to", to)
@property
@pulumi.getter(name="from")
def from_(self) -> pulumi.Input[str]:
"""
The start date for export data.
"""
return pulumi.get(self, "from_")
@from_.setter
def from_(self, value: pulumi.Input[str]):
pulumi.set(self, "from_", value)
@property
@pulumi.getter
def to(self) -> pulumi.Input[str]:
"""
The end date for export data.
"""
return pulumi.get(self, "to")
@to.setter
def to(self, value: pulumi.Input[str]):
pulumi.set(self, "to", value)
# File: app/usr/lib/chewup/chewup/ui/indicator.py (repo: samwhelp/util-chewup, license: MIT)
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
gi.require_version('AppIndicator3', '0.1')
from gi.repository import AppIndicator3 as AppIndicator
class Indicator:
app = None
view = None
indicator = None
menu = None
icon_name_on_win_activate = 'empty'
icon_name_on_win_deactivate = 'folder'
icon_name_btn_app_quit = 'application-exit'
def prep (self, *args, **kwds):
self.app = kwds['app']
def init (self):
self.init_menu()
self.view = self.indicator
def init_menu (self):
## Menu
self.menu = menu = Gtk.Menu()
## Activate
item = Gtk.MenuItem.new_with_label('Activate (<Super>+a)')
item.connect('activate', self.on_activate_win)
menu.append(item)
## Fullscreen
item = Gtk.MenuItem.new_with_label('Fullscreen (F11)')
item.connect('activate', self.on_fullscreen_win)
menu.append(item)
## About
item = Gtk.MenuItem.new_with_label('About')
item.connect('activate', self.on_show_about)
menu.append(item)
## Quit
img = Gtk.Image.new_from_icon_name(self.icon_name_btn_app_quit, 16)
item = Gtk.ImageMenuItem.new_with_label('Quit')
item.connect('activate', self.on_quit_app)
item.set_image(img)
menu.append(item)
menu.show_all()
## Indicator
self.indicator = indicator = AppIndicator.Indicator.new(
self.app.name,
self.icon_name_on_win_activate,
AppIndicator.IndicatorCategory.APPLICATION_STATUS
)
indicator.set_menu(menu)
indicator.set_status(AppIndicator.IndicatorStatus.ACTIVE)
def on_show_about (self, menu_item):
self.app.go_show_about()
def on_quit_app (self, menu_item):
self.app.go_quit()
def on_activate_win (self, menu_item):
self.app.win.go_activate()
def on_fullscreen_win (self, menu_item):
self.app.win.go_fullscreen()
def go_switch_icon_on_win_activate (self):
self.indicator.set_icon(self.icon_name_on_win_activate)
def go_switch_icon_on_win_deactivate (self):
self.indicator.set_icon(self.icon_name_on_win_deactivate)
# File: mutation_lib_prep/cosmic_integrator.py (repo: vrushali-broad/ctat-mutations, license: BSD-3-Clause)
#!/usr/bin/env python
# encoding: utf-8
from __future__ import (absolute_import, division,
print_function, unicode_literals)
#import inspect
import os,sys
import csv
import argparse
import subprocess
import gzip
import glob
import logging
##
## This script decorates the Cosmic coding variants with cancer census annotations.
FORMAT = "%(asctime)-15s: %(message)s"
logger = logging.getLogger()
logging.basicConfig(stream=sys.stderr, format=FORMAT, level=logging.INFO)
parser = argparse.ArgumentParser()
parser.add_argument("--CosmicCodingMuts", required = True ,help="CosmicCodingMut VCF file")
parser.add_argument("--CosmicMutantExport", required = True ,help="CosmicMutantExport TSV file")
parser.add_argument("--output_vcf", required=True, help="output vcf file")
args=parser.parse_args()
csv.field_size_limit(sys.maxsize)
##Add lines to header
add_header_lines = [
'##INFO=<ID=COSMIC_ID,Type=String,Description="COSMIC mutation id (unique).">\n',
'##INFO=<ID=TISSUE,Type=String,Description="The primary tissue/cancer and subtype from which the sample originated.">\n',
'##INFO=<ID=TUMOR,Type=String,Description="The histological classification of the sample.">\n',
'##INFO=<ID=FATHMM,Type=String,Description="FATHMM (Functional Analysis through Hidden Markov Models). \'Pathogenic\'=Cancer or damaging, \'Neutral\'=Passanger or Tolerated.">\n',
'##INFO=<ID=SOMATIC,Type=String,Description="Information on whether the sample was reported to be Confirmed Somatic. \'Confirmed somatic\'=if the mutation has been confimed to be somatic in the experiment by sequencing both the tumour and a matched normal from the same patient, \'Previously Observed\'=when the mutation has been reported as somatic previously but not in current paper, \'variant of unknown origin\'=when the mutation is known to be somatic but the tumour was sequenced without a matched normal">\n',
'##INFO=<ID=PUBMED_COSMIC,Type=String,Description="The PUBMED ID for the paper that the sample was noted in COSMIC.">\n'
]
####################################
# parsing the cancer gene census: CosmicMutantExport
#GENE,STRAND,CDS,AA,CNT
#COSMIC_ID,TISSUE,TUMOR,FATHMM,SOMATIC,PUBMED_COSMIC,GENE,STRAND,GENE,STRAND,CDS,AA,CNT
mutant_dict_necessary_info={}
logger.info("Capturing info from: {}".format(args.CosmicMutantExport))
with gzip.open(args.CosmicMutantExport,"rt") as mt:
mutant_reader=csv.DictReader(mt, delimiter=str("\t"), quoting=csv.QUOTE_NONE)
for row in mutant_reader:
info_items=["COSMIC_ID="+row.get("GENOMIC_MUTATION_ID",""),
"TISSUE="+row.get("Primary site",""),
"TUMOR="+row.get("Primary histology","")+" -- "+row.get("Histology subtype 1",""),
"FATHMM="+row.get("FATHMM prediction",""),
"SOMATIC="+row.get("Mutation somatic status",""),
"PUBMED_COSMIC="+row.get("Pubmed_PMID",""),
"GENE="+row.get("Gene name",""),
"STRAND="+row.get("Mutation strand",""),
"CDS="+row.get("Mutation CDS",""),
"AA="+row.get("Mutation AA","")]
info=";".join(info_items)
mutant_dict_necessary_info[row["GENOMIC_MUTATION_ID"]]=info
logger.info("Now annotating {}".format(args.CosmicCodingMuts))
coding_muts_gzip_fh = gzip.open(args.CosmicCodingMuts,"rt")
cosmic_vcf=os.path.join(args.output_vcf)
logger.info("writing summary file: {}".format(cosmic_vcf))
ofh = open(cosmic_vcf, 'wt')
annotated_set = set()
not_annotated = set()
with gzip.open(args.CosmicCodingMuts,"rt") as fh:
for line in fh:
if line.startswith("##"):
ofh.write(line)
continue
if line.startswith("#CHROM"):
ofh.write("".join(add_header_lines))
ofh.write(line)
continue
line = line.rstrip()
vals = line.split("\t")
vals[0] = "chr" + vals[0]
cosmic_id = vals[2]
if cosmic_id in mutant_dict_necessary_info:
vals[7] += ";" + mutant_dict_necessary_info[cosmic_id]
annotated_set.add(cosmic_id)
else:
not_annotated.add(cosmic_id)
ofh.write("\t".join(vals) + "\n")
ofh.close()
logger.info("-number of variants with annotations added: {}".format(len(annotated_set)))
logger.info("-number of variants w/o added annotations: {}".format(len(not_annotated)))
logger.info("bgzip compressing {}".format(cosmic_vcf))
subprocess.check_call("bgzip -f {}".format(cosmic_vcf), shell=True)
logger.info("indexing {}".format(cosmic_vcf))
subprocess.check_call(["bcftools", "index", "{}.gz".format(cosmic_vcf)])
logger.info("Done prepping cosmic vcf: {}".format(cosmic_vcf))
sys.exit(0)
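The per-record annotation step in the loop above boils down to a small pure function: look up the record's COSMIC id (VCF column 3) and append the matching fragment to the INFO column. A sketch of that distilled logic (an illustrative helper, not part of the original script):

```python
def annotate_info(vcf_line, annotations):
    """Append a COSMIC annotation fragment to the INFO column of one VCF record.

    `annotations` maps a COSMIC mutation id (column 3) to an INFO fragment,
    mirroring the mutant_dict_necessary_info lookup above.
    """
    vals = vcf_line.rstrip("\n").split("\t")
    cosmic_id = vals[2]
    if cosmic_id in annotations:
        # INFO is the 8th VCF column; fragments are ';'-separated
        vals[7] += ";" + annotations[cosmic_id]
    return "\t".join(vals)

record = "chr1\t100\tCOSV123\tA\tG\t.\t.\tGENE=EGFR"
print(annotate_info(record, {"COSV123": "TISSUE=lung"}))
```

Records whose id is not in the lookup pass through unchanged, matching the `not_annotated` branch of the script.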
# File: Simple HTTP Server/httpServer.py (repo: daniellycosta/python_scripts_hacktoberfest2019, license: MIT)
import socket
indexFile = open('index.html','r')
index = indexFile.read()
page404File = open('page404.html','r')
page404 = page404File.read()
# server host and port definition
HOST = '' # server ip (empty)
PORT = 8080 # server port
# create a socket with IPv4 (AF_INET) using TCP (SOCK_STREAM)
listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# allow addres and port reutilization
listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# binds ip server and port
listen_socket.bind((HOST, PORT))
# "listen" requests
listen_socket.listen(1)
# print that the server is ready
print 'HTTP server waiting for connections on port %s ...' % PORT
while True:
# waits new connections
client_connection, client_address = listen_socket.accept()
# .recv receives the data sent by a client through the socket
request = client_connection.recv(1024)
# only the request line (the first line of the HTTP request) matters for routing;
# comparing the whole request would never match, since it includes all headers
request_line = request.splitlines()[0] if request else ''
print request_line
# server answer declaration: serve the index for "GET /", otherwise the 404 page
if request_line == "GET / HTTP/1.1":
http_response = "HTTP/1.1 200 OK\r\n\r\n" + index
else:
http_response = "HTTP/1.1 404 Not Found\r\n\r\n" + page404
# server returns what was requested by the client
client_connection.send(http_response)
# close the connection
client_connection.close()
# close the socket
listen_socket.close()
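The routing above can be exercised without a browser using a raw-socket client. A minimal sketch (Python 3, stdlib only; the helper names are illustrative) that builds exactly the request line the server compares against:

```python
import socket

def build_request(path, host):
    # the first line is the HTTP request line the server above matches on
    return "GET {} HTTP/1.1\r\nHost: {}\r\n\r\n".format(path, host).encode()

def fetch(host, port, path="/"):
    # send a bare GET and return the raw response bytes
    with socket.create_connection((host, port)) as conn:
        conn.sendall(build_request(path, host))
        return conn.recv(65536)
```

With the server running locally, `fetch("localhost", 8080)` would return the index page prefixed by the status line.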
# File: dryad/test_status_controller.py (repo: Francis-T/citas-dryad, license: MIT)
#
# Status Indicator Circuit Controller Test
# Author: Francis T
#
# Tests the status indicator circuit controller
#
import unittest
import status_controller as statc
class BasicFuncTestCase(unittest.TestCase):
def setUp(self):
#statc.initialize()
return
def test_status_inactive(self):
self.assertEqual( statc.indicate(statc.STATUS_INACTIVE), True)
return
def test_status_ready(self):
self.assertEqual( statc.indicate(statc.STATUS_READY), True)
return
def test_status_busy(self):
self.assertEqual( statc.indicate(statc.STATUS_BUSY), True)
return
def test_status_tx(self):
self.assertEqual( statc.indicate(statc.STATUS_TX), True)
return
def test_status_rx(self):
self.assertEqual( statc.indicate(statc.STATUS_RX), True)
return
def test_status_shutdown(self):
self.assertEqual( statc.indicate(statc.STATUS_SHUTDOWN), True)
return
if __name__ == "__main__":
unittest.main()
# File: src/adversarial_q_learning/network/dqn.py (repo: shuvoxcd01/neural_tic_tac_toe, license: Apache-2.0)
import os
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, InputLayer, Flatten
from tensorflow.keras.models import clone_model
class DQN:
@staticmethod
def get_q_network(input_shape, num_actions):
model = Sequential()
model.add(InputLayer(input_shape=input_shape))
model.add(Flatten())
model.add(Dense(units=100, activation='relu'))
model.add(Dense(units=250, activation='relu'))
model.add(Dense(units=100, activation='relu'))
model.add(Dense(units=50, activation='relu'))
model.add(Dense(units=num_actions))
return model
@staticmethod
def get_weights(model: Sequential):
weights = {}
for weight in model.trainable_weights:
weights[weight.name] = weight
return weights
@staticmethod
def clone(model):
cloned_model = clone_model(model=model)
cloned_model.set_weights(model.get_weights())
return cloned_model
@staticmethod
def save_model(model, saved_model_dir, saved_model_name):
if not os.path.exists(saved_model_dir):
os.makedirs(saved_model_dir)
path_to_saved_model = os.path.join(saved_model_dir, saved_model_name)
model.save(path_to_saved_model)
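`DQN.clone` follows the standard target-network recipe: rebuild the architecture with `clone_model`, then copy the weights across so later training of the online network cannot mutate the target. The same pattern can be shown without TensorFlow (a toy stand-in, purely illustrative):

```python
import copy

class TinyNet:
    """Minimal stand-in for a Keras model's weight interface."""
    def __init__(self, weights=None):
        self.weights = weights or []
    def get_weights(self):
        return copy.deepcopy(self.weights)
    def set_weights(self, weights):
        self.weights = copy.deepcopy(weights)

def clone(net):
    # mirror of DQN.clone: build a fresh instance, then copy weights across
    cloned = TinyNet()
    cloned.set_weights(net.get_weights())
    return cloned

net = TinyNet([[0.1, 0.2], [0.3]])
target = clone(net)
net.weights[0][0] = 9.9      # "training" the online net...
print(target.weights[0][0])  # ...leaves the target's copy untouched
```

The deep copies are what make the two networks independent; sharing the same weight objects would defeat the purpose of a target network.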
# File: conmato/member.py (repo: ngocbh/codeforces-management-tools, license: MIT)
from __future__ import absolute_import
import re
import requests
import time
import random
from conmato.utils import *
def remove_participants(session, member, group_id=GROUP_ID):
url = MEMBERS_URL.format(group_id)
payload = {
'_tta': member['_tta'],
'action': 'removeMember',
'csrf_token': member['csrf_token'],
'memberGroupRoleId': member['groupRoleId']
}
response = session.post(url, data=payload)
if response.status_code != 200:
logger.warning('confirm_joining: an error occurred while confirming')
def remove_all_participants(session, user_format='.*', group_id=GROUP_ID):
members = get_all_members(session, group_id)
for member in members:
if member['pending'] or member['role'] == 'manager':
continue
if re.search(user_format, member['username']):
remove_participants(session, member, group_id)
se = random.uniform(float(TIMESLEEP)/2, TIMESLEEP)
time.sleep(se)
def confirm_joining(session, member, action, group_id=GROUP_ID):
"""
action = ['accept', 'reject']
"""
url = MEMBERS_URL.format(group_id)
payload = {
'_tta': member['_tta'],
'action': 'confirmJoining',
'confirmed': action,
'csrf_token': member['csrf_token'],
'groupRoleId': member['groupRoleId']
}
response = session.post(url, data=payload)
if response.status_code != 200:
logger.warning('confirm_joining: an error occurred while confirming')
def confirm_all_participants(session, action, user_format=USER_FORMAT, group_id=GROUP_ID):
"""
if action == 'accept' -> accept all user that match user_format
if action == 'reject' -> reject all user that not match user_format
"""
members = get_pending_participants(session, group_id)
if action != 'accept' and action != 'reject':
logger.warning('confirm_all_participants: cannot recognize action')
return
for member in members:
if re.search(user_format, member['username']) and action == 'accept':
confirm_joining(session, member, action, group_id)
elif not re.search(user_format, member['username']) and action == 'reject':
confirm_joining(session, member, action, group_id)
se = random.uniform(float(TIMESLEEP)/2, TIMESLEEP)
time.sleep(se)
def get_pending_participants(session, group_id=GROUP_ID):
logger.info("Getting pending members of group: {}".format(group_id))
url = MEMBERS_URL.format(group_id)
response = session.get(url)
doc = pq(response.text)
table = doc('table').not_('.rtable').not_('.table-form')
members = []
for tr in pq(table.children())[1:]:
if pq(tr).children().eq(5).children().eq(0).is_('form'):
member = {
'username': pq(tr).children().eq(0)('a').eq(0).text(),
'groupRoleId': pq(tr).children().eq(5).children().eq(0)('input').eq(2).attr('value'),
'csrf_token': pq(tr).children().eq(5).children().eq(0)('input').eq(0).attr('value'),
'_tta': 961
}
members.append(member)
return members
def get_all_members(session, group_id=GROUP_ID):
logger.info("Getting all members in group: {}".format(group_id))
url = MEMBERS_URL.format(group_id)
response = session.get(url)
doc = pq(response.text)
table = doc('table').not_('.rtable').not_('.table-form')
members = []
for tr in pq(table.children())[1:]:
member = {
'username': pq(tr).children().eq(0)('a').eq(0).text()
}
if member['username'] == '':
continue
member['csrf_token'] = pq(tr).children().eq(
0)('form')('input').eq(0).attr('value')
member['groupRoleId'] = pq(tr).children().eq(
0)('form')('input').eq(2).attr('value')
member['_tta'] = 961
if pq(tr).children().eq(5).children().eq(0).is_('form'):
member['pending'] = True
else:
member['pending'] = False
if pq(tr).children().eq(1).text().lower() == 'creator':
member['role'] = 'manager'
else:
member['role'] = 'spectator'
for option in pq(tr).children().eq(1)('select')('option'):
if pq(option).attr['selected'] == 'selected':
member['role'] = pq(option).val().lower()
members.append(member)
return members
def is_manager(group_id=GROUP_ID, username='', password=''):
"""
check if user is manager of codeforces group
Return:
True, False
"""
if username == '' or password == '':
logger.warning(
"isManager:Please provide username and password before using.")
return False
tmp_ss = requests.Session()
url = MEMBERS_URL.format(group_id)
response = tmp_ss.get(url)
doc = pq(response.text)
members = {}
for e in doc('table').eq(1).children():
username_tmp = pq(e)('td').eq(0).text()
mtype_tmp = pq(e)('td').eq(1).text()
members[username_tmp.lower()] = mtype_tmp.lower()
payload = {
"handleOrEmail": username,
"password": password,
"csrf_token": "",
"bfaa": '1ef059a32710a29f84fbde5b5500d49c',
"action": 'enter',
"ftaa": 'uf8qxh8b5vphq6wna4',
"_tta": 569
}
response = tmp_ss.get(LOGIN_URL)
doc = pq(response.text)
payload['csrf_token'] = doc('input').attr('value')
response = tmp_ss.post(
LOGIN_URL,
data=payload,
headers=dict(referer=LOGIN_URL)
)
doc = pq(response.text)
username_again = doc('div').filter(
'.lang-chooser').children().eq(1).children().eq(0).text()
if username_again is None or username.lower() != username_again.lower():
logger.warning('isManager:Login failed, wrong username or password')
return False
if username.lower() in members and members[username.lower()] == 'manager':
return True
logger.warning(
'isManager: username is not a member or manager of the Codeforces group')
return False
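The `user_format` filtering used by `confirm_all_participants` and `remove_all_participants` is a plain `re.search` on the username. A small illustration (the pattern below is a made-up example, not the package's actual `USER_FORMAT`):

```python
import re

def matches_user_format(username, user_format=r"^\d{8}_"):
    # accept usernames like "20200001_alice": 8 digits, underscore, name
    return re.search(user_format, username) is not None

print(matches_user_format("20200001_alice"))  # True
print(matches_user_format("random_user"))     # False
```

Because `re.search` is used (not `re.fullmatch`), a loose pattern like `.*` matches every username, which is why `remove_all_participants` defaults to it.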
# File: data/transcoder_evaluation_gfg/python/REMOVE_MINIMUM_NUMBER_ELEMENTS_NO_COMMON_ELEMENT_EXIST_ARRAY.py (repo: mxl1n/CodeGen, license: MIT)
# Copyright (c) 2019-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
#
def f_gold ( a , b , n , m ) :
countA = dict ( )
countB = dict ( )
for i in range ( n ) :
countA [ a [ i ] ] = countA.get ( a [ i ] , 0 ) + 1
for i in range ( m ) :
countB [ b [ i ] ] = countB.get ( b [ i ] , 0 ) + 1
res = 0
for x in countA :
if x in countB.keys ( ) :
res += min ( countA [ x ] , countB [ x ] )
return res
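Restated idiomatically (illustrative values; this assumes the second loop is meant to count `b` over its own length `m`): the function returns how many elements must be removed so the two arrays share no value, i.e. the sum, over values common to both, of the smaller occurrence count.

```python
from collections import Counter

def min_removals(a, b, n, m):
    # occurrences within the first n elements of a and the first m of b
    count_a = Counter(a[:n])
    count_b = Counter(b[:m])
    # each common value costs min(count_a, count_b) removals
    return sum(min(cnt, count_b[x]) for x, cnt in count_a.items() if x in count_b)

# value 2: min(2, 1) = 1 removal; value 3: min(1, 1) = 1 removal
print(min_removals([1, 2, 3, 2], [2, 3, 4, 5], 4, 4))  # -> 2
```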
#TOFILL
if __name__ == '__main__':
param = [
([4, 7, 10, 12, 12, 24, 29, 38, 45, 51, 53, 54, 59, 68, 72, 73, 85, 86, 88, 92, 92, 95],[7, 9, 17, 23, 25, 26, 29, 32, 35, 56, 56, 58, 59, 59, 62, 63, 72, 82, 85, 86, 95, 97],15,13,),
([-6, 48, -70, 14, -86, 56, 80, -64, 64, -88, -14, 78, 14, -18, 52, 2, 22, 88],[-62, -58, 60, -30, 42, 8, 66, -48, -18, 64, -76, -90, -48, -90, -24, 64, -88, -98],15,9,),
([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1],[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1],10,10,),
([10, 93, 2, 16, 36, 49, 36, 86, 6, 99, 95, 2],[99, 28, 7, 21, 62, 89, 82, 41, 43, 77, 8, 14],6,10,),
([-98, -96, -80, -64, -42, -30, -6, 10, 62, 66, 82],[-62, -50, -42, 24, 44, 46, 52, 54, 60, 72, 72],9,6,),
([1, 1, 0, 1, 1],[1, 1, 1, 0, 0],4,2,),
([7, 11, 13, 15, 21, 33, 36, 39, 66, 99],[23, 36, 42, 44, 62, 65, 70, 78, 82, 89],9,9,),
([-40],[-98],0,0,),
([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],31,26,),
([79, 91, 31, 16, 28, 45, 37, 43, 73, 73, 76, 28, 71, 60, 64, 60, 99, 36, 47, 38, 65, 34, 22, 94, 84, 51, 72, 45, 71, 2],[58, 94, 12, 27, 98, 38, 75, 20, 94, 43, 32, 90, 23, 41, 88, 2, 62, 96, 53, 57, 48, 79, 6, 16, 11, 46, 73, 57, 67, 7],18,18,)
]
n_success = 0
for i, parameters_set in enumerate(param):
if f_filled(*parameters_set) == f_gold(*parameters_set):
n_success+=1
print("#Results: %i, %i" % (n_success, len(param)))
# File: canary/argument_pipeline/component_prediction.py (repo: Open-Argumentation/Canary, license: MIT)
import pandas
from imblearn.over_sampling import RandomOverSampler
from nltk.tree import Tree
from scipy.sparse import hstack
from sklearn.base import TransformerMixin, BaseEstimator
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_union, make_pipeline
from sklearn.preprocessing import MaxAbsScaler
from sklearn.svm import SVC
from ..argument_pipeline.base import Model
from ..corpora import load_essay_corpus
from ..nlp import Lemmatiser, PosDistribution
from ..nlp._utils import spacy_download
from ..nlp.transformers import DiscourseMatcher, EmbeddingTransformer
from ..utils import logger
_nlp = spacy_download(disable=['ner', 'textcat', 'tagger', 'lemmatizer', 'tokenizer',
'attribute_ruler',
'tok2vec', ])
__all__ = [
"ArgumentComponent",
"ArgumentComponentFeatures"
]
class ArgumentComponent(Model):
"""Detects argumentative components from natural language e.g. premises and claims"""
def __init__(self, model_id: str = None):
if model_id is None:
model_id = "argument_component"
super().__init__(
model_id=model_id,
)
@staticmethod
def default_train():
"""The default training method. ArgumentComponent defaults to using the essay corpus with undersampling."""
from sklearn.model_selection import train_test_split
ros = RandomOverSampler(random_state=0, sampling_strategy='not majority')
x, y = load_essay_corpus(purpose="component_prediction")
x, y = ros.fit_resample(pandas.DataFrame(x), pandas.DataFrame(y))
train_data, test_data, train_targets, test_targets = \
train_test_split(x, y,
train_size=0.7,
shuffle=True,
random_state=0,
stratify=y
)
logger.debug("Resample")
return list(train_data.to_dict("index").values()), list(test_data.to_dict("index").values()), train_targets[
0].tolist(), test_targets[0].tolist()
@classmethod
def train(cls, pipeline_model=None, train_data=None, test_data=None, train_targets=None, test_targets=None,
save_on_finish=True, *args, **kwargs):
# If the pipeline model is none, use this algorithm
if pipeline_model is None:
pipeline_model = make_pipeline(
ArgumentComponentFeatures(),
MaxAbsScaler(),
SVC(random_state=0, class_weight='balanced', probability=True, cache_size=1000)
)
return super().train(
pipeline_model=pipeline_model,
train_data=train_data,
test_data=test_data,
train_targets=train_targets,
test_targets=test_targets,
save_on_finish=save_on_finish
)
class ArgumentComponentFeatures(TransformerMixin, BaseEstimator):
"""Transformer Mixin that extracts features for the ArgumentComponent model"""
features: list = [
TfidfVectorizer(ngram_range=(1, 1), tokenizer=Lemmatiser(), lowercase=False),
TfidfVectorizer(ngram_range=(2, 2), tokenizer=Lemmatiser(), lowercase=False, max_features=2000),
DiscourseMatcher('forward'),
DiscourseMatcher('thesis'),
DiscourseMatcher('rebuttal'),
DiscourseMatcher('backward'),
DiscourseMatcher('obligation'),
DiscourseMatcher('recommendation'),
DiscourseMatcher('possible'),
DiscourseMatcher('intention'),
DiscourseMatcher('option'),
DiscourseMatcher('first_person'),
EmbeddingTransformer()
]
def __init__(self):
self.__dict_feats = DictVectorizer()
self.__features = make_union(*ArgumentComponentFeatures.features)
@staticmethod
def _prepare_dictionary_features(data):
pos_dist = PosDistribution()
cover_sentences = pandas.DataFrame(data).cover_sentence.tolist()
cover_sentences = list(_nlp.pipe(cover_sentences))
def get_features(feats):
features = []
for i, d in enumerate(feats):
cover_sen_parse_tree = Tree.fromstring(list(cover_sentences[i].sents)[0]._.parse_string)
items = {
'tree_height': cover_sen_parse_tree.height(),
'len_paragraph': d.get('len_paragraph'),
"len_component": d.get('len_component'),
"len_cover_sen": d.get('len_cover_sen'),
'is_in_intro': d.get('is_in_intro'),
'is_in_conclusion': d.get('is_in_conclusion'),
"n_following_components": d.get("n_following_components"),
"n_preceding_components": d.get("n_preceding_components"),
"component_position": d.get("component_position"),
'n_preceding_comp_tokens': d.get('n_preceding_comp_tokens'),
'n_following_comp_tokens': d.get('n_following_comp_tokens'),
'first_in_paragraph': d.get('first_in_paragraph'),
'last_in_paragraph': d.get('last_in_paragraph')
}
items.update(pos_dist(d['cover_sentence']).items())
features.append(items)
return features
return get_features(data)
def fit(self, x: list, y: list = None):
"""Fits self to data provided.
Parameters
----------
x: list
The data on which the transformer is fitted.
y: list
Ignored. Providing will have no effect. Provided for compatibility reasons.
Returns
-------
Self
"""
logger.debug("Fitting")
self.__dict_feats.fit(x)
self.__features.fit(pandas.DataFrame(x).cover_sentence.tolist())
return self
def transform(self, x: list):
"""Transforms data provided.
Parameters
----------
x: list
A list of datapoints which are to be transformed using the mixin
Returns
-------
scipy.sparse.hstack
The features of the inputted list
See Also
---------
scipy.sparse.hstack
"""
features = self.__features.transform(pandas.DataFrame(x).cover_sentence.tolist())
dict_features = self.__dict_feats.transform(self._prepare_dictionary_features(x))
return hstack([features, dict_features])
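A minimal, self-contained sketch of the feature-union pattern `ArgumentComponentFeatures` implements: TF-IDF features over the `cover_sentence` text stacked column-wise (via `scipy.sparse.hstack`) with `DictVectorizer` features over hand-crafted dictionaries. The sentences and numeric values below are made up for illustration; only the combination pattern matches the transformer above.

```python
from scipy.sparse import hstack
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer

# toy stand-ins for the corpus rows handled by the transformer above
sentences = [
    "school uniforms should be mandatory",
    "therefore uniforms reduce bullying",
]
numeric_feats = [
    {"tree_height": 3, "len_component": 5, "first_in_paragraph": 1},
    {"tree_height": 4, "len_component": 4, "first_in_paragraph": 0},
]

tfidf = TfidfVectorizer().fit(sentences)
dict_vec = DictVectorizer().fit(numeric_feats)

# hstack concatenates both sparse blocks column-wise, one row per datapoint
X = hstack([tfidf.transform(sentences), dict_vec.transform(numeric_feats)])
print(X.shape[0])  # 2 rows, one per sentence
```

Each row of `X` carries the sentence's TF-IDF weights followed by its three dictionary features, which is exactly the shape `transform` returns to the downstream `MaxAbsScaler` and `SVC`.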
| 36.373626 | 116 | 0.623112 | 688 | 6,620 | 5.731105 | 0.318314 | 0.012173 | 0.005072 | 0.014202 | 0.061882 | 0.017753 | 0 | 0 | 0 | 0 | 0 | 0.004391 | 0.277492 | 6,620 | 181 | 117 | 36.574586 | 0.819987 | 0.115408 | 0 | 0.016807 | 0 | 0 | 0.12615 | 0.03627 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067227 | false | 0 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f125c669b4d659e35b8a378e25dd4d527ec4dbd4 | 27,310 | py | Python | flask/lib/python3.8/site-packages/to/trainer.py | Otybrian/blogpost | 518599019e11cd7ee11e01470c4d51dfb4583274 | [
"MIT"
] | null | null | null | flask/lib/python3.8/site-packages/to/trainer.py | Otybrian/blogpost | 518599019e11cd7ee11e01470c4d51dfb4583274 | [
"MIT"
] | null | null | null | flask/lib/python3.8/site-packages/to/trainer.py | Otybrian/blogpost | 518599019e11cd7ee11e01470c4d51dfb4583274 | [
"MIT"
] | null | null | null | import re
import os
import traceback
import importlib.util
from prompt_toolkit import prompt
from prompt_toolkit.history import FileHistory
from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
from prompt_toolkit.validation import ValidationError
from colored import fg, bg, attr
import torch.optim as optim
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader
from .utils.cli import *
from .utils.helpers import *
from .utils.batch_logger import *
from .utils.options import *
from .net import *
from .data.dataset import *
class Trainer(object):
#----------------------------------------------------------------------------------------------------------
# Initialization
#----------------------------------------------------------------------------------------------------------
def __init__(self):
super(Trainer, self).__init__()
self.epoch_ran = 0
self.logger = Logger(self)
self.name = sys.argv[0].replace('.py', '')
self.commands = ['list', 'help', 'use', 'load', 'run', 'test', 'validate', 'set', 'exit']
# Configurations
self.cfg_folder = 'configurations'
self.default_cfg = 'default'
self.current_cfg = self.default_cfg
self.current_cfg_path = None
# Models
self.models_folder = 'models'
self.Model = NeuralNetwork
self.cuda_enabled = False
# Events
self.event_handlers = {}
# Data
self.DataLoader = None
self.DataSet = DataSet
# Submission
self.submissions_folder = 'submissions'
if len(sys.argv) == 2:
self.load_cfg('{}/{}.py'.format(self.cfg_folder, sys.argv[1].replace('.py', '')))
else:
self.load_cfg('{}/{}.py'.format(self.cfg_folder, self.current_cfg))
self.reset()
def reset(self):
self.__init_model()
self.__init_optim()
self.__init_loss_fn()
return self
def __init_folder(self):
self.cfg_folder = get(self.cfg, TrainerOptions.CFG_FOLDER.value, default='configurations')
self.models_folder = get(self.cfg, TrainerOptions.MODELS_FOLDER.value, default='models')
self.submissions_folder = get(self.cfg, TrainerOptions.SUBMISSIONS_FOLDER.value, default='submissions')
def __init_model(self):
self.model = self.Model(self.cfg)
init_model_parameters(self.model)
if torch.cuda.is_available():
self.cuda_enabled = True
self.model = self.model.cuda()
def __init_optim(self):
Optimizer = get(self.cfg, TrainerOptions.OPTIMIZER.value, default=optim.Adam)
optim_args = get(self.cfg, TrainerOptions.OPTIMIZER_ARGS.value, default={'lr': 0.01})
self.optimizer = Optimizer(self.model.parameters(), **optim_args)
Scheduler = get(self.cfg, TrainerOptions.SCHEDULER.value, default=None)
sched_args = get(self.cfg, TrainerOptions.SCHEDULER_ARGS.value, default=[])
sched_kwargs = get(self.cfg, TrainerOptions.SCHEDULER_KWARGS.value, default={})
self.scheduler = None
if Scheduler is not None:
self.scheduler = Scheduler(self.optimizer, *sched_args, **sched_kwargs)
def __init_loss_fn(self):
Fn = get(self.cfg, TrainerOptions.LOSS_FN.value, default=nn.CrossEntropyLoss)
self.loss_fn = Fn()
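The `get(self.cfg, <option>, default=...)` calls above come from `.utils.helpers`, which is not shown here. Assuming it behaves like attribute lookup with a fallback default (which is how the surrounding code uses it), the pattern reduces to this sketch — `get_opt`, `Cfg`, and the option names are illustrative stand-ins:

```python
def get_opt(cfg, name, default=None):
    # minimal stand-in for the imported get() helper: attribute lookup with fallback
    return getattr(cfg, name, default)

class Cfg:
    batch_size = 32

print(get_opt(Cfg, "batch_size", 64))       # 32: value present in the configuration
print(get_opt(Cfg, "learning_rate", 0.01))  # 0.01: falls back to the default
```

This is why every option in a configuration module is optional: any attribute the file omits silently takes the default wired into the trainer.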
#----------------------------------------------------------------------------------------------------------
# Folder
#----------------------------------------------------------------------------------------------------------
def set_models_folder(self, models_folder):
self.models_folder = models_folder
return self
def set_submissions_folder(self, submissions_folder):
self.submissions_folder = submissions_folder
return self
def set_configurations_folder(self, cfg_folder):
self.cfg_folder = cfg_folder
self.load('{}/{}.py'.format(self.cfg_folder, self.current_cfg))
self.reset()
return self
#----------------------------------------------------------------------------------------------------------
# Configuration
#----------------------------------------------------------------------------------------------------------
def has_cfg(self, cfg):
if not cfg.endswith('.py'):
cfg += '.py'
if '/' not in cfg:
cfg = os.path.join(csd(), self.cfg_folder, cfg)
return os.path.isfile(cfg)
def load_cfg(self, cfg_file):
if not cfg_file.startswith(self.cfg_folder):
cfg_file = os.path.join(self.cfg_folder, cfg_file)
if not cfg_file.endswith('.py'):
cfg_file += '.py'
path = os.path.join(csd(), cfg_file)
try:
p('Loading configuration file at "{}"'.format(path))
spec = importlib.util.spec_from_file_location('configuration', path)
self.cfg = importlib.util.module_from_spec(spec)
spec.loader.exec_module(self.cfg)
self.current_cfg = filename(path).replace('.py', '')
self.current_cfg_path = path
self.__init_folder()
except IOError as e:
raise Exception('Configuration file not found at "{}".'.format(path))
return self
#----------------------------------------------------------------------------------------------------------
# Events
#----------------------------------------------------------------------------------------------------------
def bind(self, event, handler):
if isinstance(event, TrainerEvents):
self.event_handlers[event.value] = handler
else:
raise Exception('Event "{}" should be a TrainerEvents.'.format(event))
return self
#----------------------------------------------------------------------------------------------------------
# DataSet and DataLoader
#----------------------------------------------------------------------------------------------------------
def set_dataloader(self, DataLoader):
self.DataLoader = DataLoader
return self
def set_dataset(self, DataSet):
self.DataSet = DataSet
return self
def __get_dataloader(self, data_type, debug=True):
if self.DataLoader is not None:
return self.DataLoader(self.cfg, data_type)
else:
dataset = self.DataSet(self.cfg, data_type, debug)
if has(self.event_handlers, TrainerEvents.CUSTOMIZE_DATALOADER.value):
return get(self.event_handlers, TrainerEvents.CUSTOMIZE_DATALOADER.value)(self.cfg, data_type, dataset)
else:
should_shuffle = data_type != TEST
batch_size = get(self.cfg, TrainerOptions.BATCH_SIZE.value, default=64)
return DataLoader(dataset, batch_size=batch_size, shuffle=should_shuffle)
#----------------------------------------------------------------------------------------------------------
# Model
#----------------------------------------------------------------------------------------------------------
def set_lr(self, new_lr):
for param_group in self.optimizer.param_groups:
param_group['lr'] = new_lr
self.cfg.learning_rate = new_lr
if has(self.cfg, TrainerOptions.OPTIMIZER_ARGS.value, 'lr'):
get(self.cfg, TrainerOptions.OPTIMIZER_ARGS.value)['lr'] = new_lr
return self
def get_lr(self):
lr = [g['lr'] for g in self.optimizer.param_groups]
return lr
def set_model(self, Model):
self.Model = Model
self.reset()
return self
def save_model(self, percentage=None, loss=None):
mkdirp(os.path.join(csd(), self.models_folder, self.name))
path = '{}/{}/{} - {:03d}'.format(self.models_folder, self.name, self.current_cfg, self.epoch_ran)
if percentage is not None:
path += ' - {:.2f}%'.format(percentage)
if loss is not None:
path += ' - {:.6f}'.format(loss)
path += '.model'
path = os.path.join(csd(), path)
p('Saving neural network "{}" using configuration "{}" to disk at "{}"'.format( \
self.name, self.current_cfg, path))
torch.save(self.model.state_dict(), path)
return self
def load_model(self, epoch=None):
pattern = None
if epoch == 0:
p('Resetting model to primitive state.')
return self.reset()
epoch, path, files, versions = self.get_versions(epoch)
if path is None and epoch is not None: # Can't find the exact epoch, loading the highest.
epoch, path, files, versions = self.get_versions()
if epoch > 0 and path is not None:
p('Loading neural network "{}" using configuration "{}" and epoch "{}" at "{}"'.format( \
self.name, self.current_cfg, epoch, path))
try:
if torch.cuda.is_available():
self.model.load_state_dict(torch.load(path))
else:
self.model.load_state_dict(torch.load(path, lambda storage, loc: storage))
self.epoch_ran = epoch
except Exception as e:
p('Failed to load model at path "{}"'.format(path))
traceback.print_exc()
else:
p('No saved model for neural network "{}" using configuration "{}".'.format(self.name, self.current_cfg))
return self
def has_version(self, epoch):
version, path, files, versions = self.get_versions(epoch)
return version > 0 and version == epoch
def get_versions(self, epoch=None):
folder = csd()
if epoch is not None:
pattern = '{}/{}/{} - {:03d}*.model'.format(self.models_folder, self.name, self.current_cfg, epoch)
else:
pattern = '{}/{}/{}*.model'.format(self.models_folder, self.name, self.current_cfg)
files = find_pattern(os.path.join(folder, pattern), relative_to=folder)
if len(files) > 0:
versions = [int(re.findall(r' \d{3} |$', filename(f))[0]) for f in files]
epoch = max(versions)
i = versions.index(epoch)
path = files[i]
return epoch, path, files, versions
return (0, None, files, [])
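`get_versions` leans on the filename convention written by `save_model` — `'<config> - <3-digit epoch>[ - accuracy][ - loss].model'`. The epoch-extraction step can be exercised in isolation; the filenames below are made up, but the regex is the same space-delimited three-digit match:

```python
import re

# hypothetical saved-model filenames following the save_model() pattern
files = [
    "default - 003 - 92.10% - 0.250000.model",
    "default - 010 - 95.00% - 0.120000.model",
]

# ' \d{3} ' matches the space-delimited, zero-padded epoch counter
versions = [int(re.findall(r" \d{3} ", f)[0]) for f in files]
latest = max(versions)
print(versions, latest)  # [3, 10] 10
```

`int()` strips the surrounding spaces and the zero padding, so `max(versions)` picks the newest checkpoint regardless of accuracy or loss suffixes.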
#----------------------------------------------------------------------------------------------------------
# CLI
#----------------------------------------------------------------------------------------------------------
def cli(self):
print()
print('----------------------------------------------------------')
print('| |')
print('| Welcome to Flare Neural Network Trainer. |')
print('| |')
print('----------------------------------------------------------')
print()
if get(self.cfg, TrainerOptions.AUTO_RELOAD_SAVED_MODEL.value, default=False):
self.load_model()
mkdirp('.flare')
touch('.flare/history')
should_exit = False
while not should_exit:
c = prompt(
'> ',
history=FileHistory('.flare/history'),
auto_suggest=AutoSuggestFromHistory(),
completer=CommandCompleter(self),
validator=CommandValidator(self)
)
try:
should_exit = self.process_command(c)
except Exception as e:
traceback.print_exc()
return self
def process_command(self, c):
parts = list(filter(None, c.split(' ')))
command = parts[0]
if command == 'list':
self.list()
elif command == 'help':
self.help()
elif command == 'use':
self.load_cfg('{}/{}.py'.format(self.cfg_folder, parts[1].replace('.py', '')))
elif command == 'load':
if len(parts) == 2:
self.load_model(int(parts[1]))
else:
self.load_model()
elif command == 'run':
if len(parts) == 1:
self.run()
else:
self.run(int(parts[1]))
elif command == 'set':
parts = list(filter(None, c.split(' ', 2)))
self.set(parts[1], parts[2])
elif command == 'test' or command == 'validate':
fn = self.test if command == 'test' else self.validate
if len(parts) == 1:
fn()
else:
locs = list(map(int, parts[1].split(':')))
if len(locs) == 1:
if self.load_model(locs[0]):
fn()
else:
p('Skipping test because epoch {} cannot be loaded correctly.'.format(locs[0]))
else:
for i in range(*locs):
if self.load_model(i):
fn()
else:
p('Skipping test because epoch {} cannot be loaded correctly.'.format(i))
elif command == 'exit':
return True
return False
#----------------------------------------------------------------------------------------------------------
# Commands
#----------------------------------------------------------------------------------------------------------
def list(self):
color = fg(45)
parameter = fg(119)
reset = attr('reset')
def colorize(o):
return '{}{}{}'.format(color, o, reset)
configs = [
('Module', self.name, color),
('Epoch', self.epoch_ran, color),
('Configuration', self.current_cfg, color),
('Configuration Path', self.current_cfg_path, color),
('Configuration Folder', self.cfg_folder, color),
('Models Folder', self.models_folder, color),
('Submissions Folder', self.submissions_folder, color),
None,
]
for k in list(filter(lambda x: not x.startswith('__'), dir(self.cfg))):
v = getattr(self.cfg, k)
configs.append((k, v, parameter))
max_key_len = max([len(o[0]) if o else 0 for o in configs])
for o in configs:
if o is None:
print()
else:
w('{}{} : {}'.format(o[0], ' ' * (max_key_len - len(o[0])), o[2]))
w(re.sub('^ ', ' ' * (max_key_len + 6), ff(o[1], prefix=' '), flags=re.M))
print(reset)
return self
def help(self):
command = fg(45)
parameter = fg(119)
sample = fg(105)
reset = attr('reset')
indent = ' '
print(
indent + """
{0}python {1}<PYTHON>{3} {1}[CONFIG]{3} {1}[EPOCH]{3}
You can specify the configuration file path and epoch count to load at script
launch where {1}<PYTHON>{3} is the location of your python file, {1}[CONFIG]{3} is the
location of your configuration file and {1}[EPOCH]{3} is the epoch count you wish to
load.
e.g: {2}python nn.py default 2{3}
{0}list:{3}
Usage: {0}list{3}
List current module, epoch count and configuration file path.
e.g: {2}list{3}
{0}help:{3}
Usage: {0}help{3}
Print help message.
e.g: {2}help{3}
{0}use:{3}
Usage: {0}use{3} {1}<PATH>{3}
Switch to configuration file located at {1}<PATH>{3}.
e.g: {2}use default{3}
{0}load:{3}
Usage: {0}load{3} {1}<EPOCH>{3}
Load previously trained model at epoch {1}<EPOCH>{3}.
e.g: {2}load 10{3}
{0}run:{3}
Usage: {0}run{3} {1}[COUNT]{3}
Run training, optionally {1}[COUNT]{3} times
e.g: {2}run{3} OR {2}run 10{3}
{0}set:{3}
Usage: {0}set{3} {1}<ATTR> <VALUE>{3}
Set the value in configuration dynamically, this does NOT overwrite the
configuration file.
e.g: {2}set learning_rate 0.01{3}
{0}test:{3}
Usage: {0}test{3} {1}[EPOCH]{3}
Test using the model trained, optionally using at epoch {1}[EPOCH]{3}.
{1}[EPOCH]{3} can be a range input to range() or an integer.
e.g: {2}test 10{3} OR {2}test 1:10:2{3}
{0}validate:{3}
Usage: {0}validate{3} {1}[EPOCH]{3}
Validate using the model trained, optionally using at epoch {1}[EPOCH]{3}.
{1}[EPOCH]{3} can be a range input to range() or an integer.
e.g: {2}validate 10{3} OR {2}validate 1:10:2{3}
""".format(command, parameter, sample, reset).replace('\t\t\t', indent).strip()
)
return self
def set(self, key, val):
p('Setting configuration key "{}" to "{}"'.format(key, val))
if key == 'learning_rate':
self.set_lr(num(val))
else:
cmd = 'self.cfg.{} = {}'.format(key, val)
try:
exec(cmd)
self.reset()
except Exception as e:
p('Failed to set configuration key "{}" to "{}"'.format(key, val))
return self
#----------------------------------------------------------------------------------------------------------
# Neural Network
#----------------------------------------------------------------------------------------------------------
def __generate(self, x, y, extras, y_hat):
result = None
if has(self.event_handlers, TrainerEvents.GENERATE.value):
result = get(self.event_handlers, TrainerEvents.GENERATE.value)(x, y, extras, y_hat)
else:
labels_axis = get(self.cfg, TrainerOptions.GENERATE_AXIS.value, default=1)
result = y_hat.data.max(labels_axis, keepdim=True)[1].cpu().numpy().flatten()
return result
def __post_test(self, results):
if has(self.event_handlers, TrainerEvents.POST_TEST.value):
return get(self.event_handlers, TrainerEvents.POST_TEST.value)(results)
return results
def _match(self, mode, x, y, extras, y_hat):
match_results = None
if has(self.event_handlers, TrainerEvents.MATCH_RESULTS.value):
match_results = get(self.event_handlers, TrainerEvents.MATCH_RESULTS.value)(mode, x, y, extras, y_hat)
else:
match_results = self.__default_match(y_hat, y) # Compute losses
return match_results
def __default_match(self, y_hat, y):
predictions = y_hat.data.max(1, keepdim=True)[1]
expectations = y.long()
if torch.cuda.is_available():
return predictions.eq(expectations.cuda())
else:
return predictions.cpu().eq(expectations)
def __compute_loss(self, mode, x, y, extras, y_hat, logger):
loss = None
if has(self.event_handlers, TrainerEvents.COMPUTE_LOSS.value):
loss = get(self.event_handlers, TrainerEvents.COMPUTE_LOSS.value)(mode, x, y, extras, y_hat)
else:
loss = self.loss_fn(y_hat, to_variable(y).long().squeeze()) # Compute losses
extra_log_msg = {}
if has(self.event_handlers, TrainerEvents.EXTRA_LOG_MSG.value):
result = get(self.event_handlers, TrainerEvents.EXTRA_LOG_MSG.value)(mode, x, y, extras, y_hat)
if result is not None:
extra_log_msg = result
logger.log_loss(loss.data.cpu().numpy(), **extra_log_msg)
return loss
def __propagate_loss(self, mode, x, y, extras, y_hat, logger):
loss = self.__compute_loss(mode, x, y, extras, y_hat, logger)
loss.backward()
self.optimizer.step()
return loss
def __get_validation_results(self, batch_count=-1):
dataloader = self.__get_dataloader(DEV, False)
mode = Mode.VALIDATE
validate_logger = Logger(self)
validate_logger.start(mode)
validate_logger.start_epoch()
for i, batch in enumerate(dataloader):
x, y, extras, y_hat = self.__process_batch(batch, validate_logger, mode)
self.__print_batch(mode, x, y, extras, y_hat, validate_logger)
if batch_count > 0 and i + 1 == batch_count:
break
percentage, (_, _, loss) = validate_logger.get_percentage(), validate_logger.get_loss()
return percentage, loss
def __lr_changed(self, old_lr, new_lr):
eps = 1e-6
for i in range(len(old_lr)):
old, new = old_lr[i], new_lr[i]
if old - new > eps:
return True
return False
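`__lr_changed` compares per-param-group learning rates with an epsilon to sidestep floating-point noise, and — like the original — only reports decreases, which is what `ReduceLROnPlateau`-style schedulers produce. A standalone equivalent of the same logic:

```python
def lr_changed(old_lr, new_lr, eps=1e-6):
    # True when any param group's learning rate dropped by more than eps
    return any(old - new > eps for old, new in zip(old_lr, new_lr))

print(lr_changed([0.01, 0.01], [0.001, 0.01]))  # True: first group decayed
print(lr_changed([0.01], [0.01 - 1e-9]))        # False: below the tolerance
```

Note that an *increase* is deliberately not flagged, so warm-restart schedulers would log nothing under this check.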
def __tune_lr(self):
if self.scheduler is None:
return
percentage, loss = 0.0, 0.0
use_train_data = get(self.cfg, TrainerOptions.SCHEDULE_ON_TRAIN_DATA.value, default=False)
if use_train_data:
percentage, (_, _, loss) = self.logger.get_percentage(), self.logger.get_loss()
else:
batch_count = get(self.cfg, TrainerOptions.SCHEDULE_BATCH_COUNT.value, default=-1)
percentage, loss = self.__get_validation_results(batch_count)
old_lr = self.get_lr()
use_percentage = get(self.cfg, TrainerOptions.SCHEDULE_ON_ACCURACY.value, default=False)
value = percentage if use_percentage else loss
args, kwargs = filter_args(self.scheduler.step, [value], {})
self.scheduler.step(*args, **kwargs)
new_lr = self.get_lr()
verbose = get(self.cfg, TrainerOptions.SCHEDULE_VERBOSE.value, default=False)
if verbose:
data_type = 'percentage {:.2f} %' if use_percentage else 'loss {:.8f}'
data_source = 'training' if use_train_data else 'validation'
template = 'Tuning learning rate using {} from {} data.'.format(data_type, data_source)
p(template.format(value), debug=False)
if self.__lr_changed(old_lr, new_lr):
p('Learning rate is now at: {}'.format(new_lr))
def __process_batch(self, batch, logger, mode=Mode.TRAIN):
logger.increment()
x, y, extras = batch[0], batch[1], batch[2:]
self.optimizer.zero_grad()
if has(self.event_handlers, TrainerEvents.PRE_PROCESS.value):
x, y, extras = get(self.event_handlers, TrainerEvents.PRE_PROCESS.value)(mode, x, y, extras)
if mode is Mode.TRAIN:
self.model.train()
else:
self.model.eval()
y_hat = None
if has(self.event_handlers, TrainerEvents.MODEL_EXTRA_ARGS.value):
args, kwargs = get(self.event_handlers, TrainerEvents.MODEL_EXTRA_ARGS.value)(mode, x, y, extras)
y_hat = forward(self.model, [to_variable(x)] + args, kwargs)
else:
y_hat = self.model(to_variable(x))
if has(self.event_handlers, TrainerEvents.POST_PROCESS.value):
y_hat = get(self.event_handlers, TrainerEvents.POST_PROCESS.value)(mode, x, y, extras, y_hat)
if mode is Mode.TRAIN:
self.__propagate_loss(mode, x, y, extras, y_hat, logger)
elif mode is Mode.VALIDATE:
self.__compute_loss(mode, x, y, extras, y_hat, logger)
return x, y, extras, y_hat
def __print_batch(self, mode, x, y, extras, y_hat, logger):
logger.log_batch(mode, x, y, extras, y_hat)
logger.print_batch(logger is self.logger)
def __save_path(self, save_as):
folder = os.path.join(csd(), self.submissions_folder, self.name)
file = None
if save_as == SaveAs.CSV:
file = '{} - {:03d}.csv'.format(self.current_cfg, self.epoch_ran)
elif save_as == SaveAs.NPY:
file = '{} - {:03d}.npy'.format(self.current_cfg, self.epoch_ran)
return folder, file
def __save_results(self, results, save_as):
folder, file = self.__save_path(save_as)
if folder is None or file is None:
return
path = os.path.join(folder, file)
mkdirp(folder)
if save_as == SaveAs.CSV:
field_names = get(self.cfg, TrainerOptions.CSV_FIELD_NAMES.value, default=['id', 'label'])
write_to_csv(results, path, field_names)
elif save_as == SaveAs.NPY:
np.save(path, np.array(results, dtype='object'))
p('Submission file saved to "{}".'.format(path))
def run(self, epochs=1):
has_scheduler = self.scheduler is not None
schedule_on_batch = get(self.cfg, TrainerOptions.SCHEDULE_ON_BATCH.value, default=False)
schedule_first = get(self.cfg, TrainerOptions.SCHEDULE_FIRST.value, default=True)
dev_mode = get(self.cfg, TrainerOptions.DEV_MODE.value, default=False)
train_type = DEV if dev_mode else TRAIN
dataloader = self.__get_dataloader(train_type)
self.logger.start()
for epoch in range(epochs):
self.logger.start_epoch()
if has_scheduler and not schedule_on_batch and schedule_first:
self.__tune_lr()
for batch in dataloader:
if has_scheduler and schedule_on_batch and schedule_first:
self.__tune_lr()
x, y, extras, y_hat = self.__process_batch(batch, self.logger)
if has_scheduler and schedule_on_batch and not schedule_first:
self.__tune_lr()
self.__print_batch(Mode.TRAIN, x, y, extras, y_hat, self.logger)
if has_scheduler and not schedule_on_batch and not schedule_first:
self.__tune_lr()
self.epoch_ran += 1
percentage, (_, loss, _) = self.logger.get_percentage(), self.logger.get_loss()
if abs(percentage) < 1e-6:
percentage = None
self.save_model(percentage, loss)
return self
def validate(self):
return self.test(Mode.VALIDATE)
def test(self, mode=Mode.TEST):
data_type = TEST if mode == Mode.TEST else DEV
dataloader = self.__get_dataloader(data_type)
self.logger.start(mode)
self.logger.start_epoch()
results = []
for batch in dataloader:
x, y, extras, y_hat = self.__process_batch(batch, self.logger, mode)
if mode == Mode.TEST:
result = self.__generate(x, y, extras, y_hat)
for i in range(len(result)):
results.append(result[i])
self.__print_batch(mode, x, y, extras, y_hat, self.logger)
if mode is Mode.TEST:
results = self.__post_test(results)
save_as = get(self.cfg, TrainerOptions.SAVE_AS.value, default=SaveAs.CSV)
self.__save_results(results, save_as)
else:
self.logger.print_summary()
return self
| 38.464789 | 119 | 0.528781 | 3,096 | 27,310 | 4.49677 | 0.112726 | 0.023632 | 0.01494 | 0.014869 | 0.323301 | 0.248958 | 0.21053 | 0.125916 | 0.08641 | 0.070177 | 0 | 0.010282 | 0.287733 | 27,310 | 709 | 120 | 38.519041 | 0.705429 | 0.078067 | 0 | 0.188433 | 0 | 0.007463 | 0.149425 | 0.004614 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08209 | false | 0 | 0.037313 | 0.003731 | 0.20709 | 0.033582 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f1266a803627ec1031a1ee5077266bcc7b6391cf | 1,420 | py | Python | lambdata/test_lambdata.py | leibo411/lambdata-leibo411 | 59b322e4e3e4d27970dea21efdecaa7d65029c7f | [
"MIT"
] | null | null | null | lambdata/test_lambdata.py | leibo411/lambdata-leibo411 | 59b322e4e3e4d27970dea21efdecaa7d65029c7f | [
"MIT"
] | null | null | null | lambdata/test_lambdata.py | leibo411/lambdata-leibo411 | 59b322e4e3e4d27970dea21efdecaa7d65029c7f | [
"MIT"
] | null | null | null | """Basic unit test for lambdata"""
import unittest
import random
from example_module import favorite_animals, colors, add, increment, becca, rand_num
class ExampleTests(unittest.TestCase):
"""Making sure examples work as expected"""
def test_add(self):
"""Testing that add works as expected"""
num1 = 0
num2 = 1
self.assertEqual(add(num1, num2), 1)
self.assertEqual(add(num2, num2), 2)
def test_increment(self):
"""Testing the increment function"""
x0 = 0
y0 = increment(x0)
self.assertEqual(y0, 1)
x1 = 100
y1 = increment(x1)
self.assertEqual(y1, 101)
x2 = -1
y2 = increment(x2)
self.assertEqual(y2, 0)
def test_colors(self):
"""Testing the colors function"""
self.assertIn("Teal", colors)
self.assertNotIn("yellow", colors)
def test_favorite_animals(self):
"""Testing the favorite animals function"""
length_fa = len(favorite_animals)
self.assertEqual(length_fa, 4)
def test_becca(self):
"""Testing the becca function"""
self.assertIn('Becca is crying', becca)
def test_rand_num(self):
"""Testing the rand_num funciton"""
y4 = random.randint(0, 100)
y5 = rand_num(y4)
self.assertGreater(y5, 1000)
if __name__ == "__main__":
unittest.main()
| 23.278689 | 84 | 0.600704 | 170 | 1,420 | 4.876471 | 0.4 | 0.050663 | 0.084439 | 0.048251 | 0.055489 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044379 | 0.285915 | 1,420 | 60 | 85 | 23.666667 | 0.773176 | 0.179577 | 0 | 0 | 0 | 0 | 0.029464 | 0 | 0 | 0 | 0 | 0 | 0.30303 | 1 | 0.181818 | false | 0 | 0.090909 | 0 | 0.30303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f1298764a70d48a5cc02427e03a11f21ba24e293 | 6,989 | py | Python | capt/function/push.py | tmanfree/capt | a6c1c12bb2677aef718f550c5fa7ffd4b71dedd4 | [
"MIT"
] | null | null | null | capt/function/push.py | tmanfree/capt | a6c1c12bb2677aef718f550c5fa7ffd4b71dedd4 | [
"MIT"
] | null | null | null | capt/function/push.py | tmanfree/capt | a6c1c12bb2677aef718f550c5fa7ffd4b71dedd4 | [
"MIT"
] | null | null | null |
# system imports
import sys
import os
import time
# local imports
from function.find import Find
from connector.switch import Switch
class Push:
def __init__(self):
self.find = Find()
def template(self, args, config, logger):
dev_id_list = []
address_list = []
try:
file = open(os.path.join(args.file_name), "r")
for ip in file:
dev_id = self.find.dev_id(args, config, ip, logger)
time.sleep(1)
dev_id_list.append({"targetDeviceID": "{}".format(dev_id)})
address_list.append({"address": "{}".format(ip.strip())})
file.close()
except FileNotFoundError:
print("##### ERROR iplist files not found #####")
except Exception as err:
print("##### ERROR with processing:{} #####".format(err))
# require 'yes' input to proceed
# logger.info('Activate BAS on switch INTERFACE {} using VLAN: {}'.format(found_int['name'], args.vlan))
# response = input("Confirm action of changing VLAN ('yes'):")
# if not response == 'yes':
# logger.info('Did not proceed with change.')
# sys.exit(1)
# invoke API call to change VLAN
sw_api_call = Switch(config, logger) # create API switch call object
# push API_CALL_conf_if_bas template out. Update this to use a shared template, the same as change vlan?
job_id = sw_api_call.conf_template(dev_id_list, args.template_name)
timeout = time.time() + 30 # 30 second timeout starting now
time.sleep(1) # without the sleep the job_complete can balk, not finding the job_id yet
while not sw_api_call.job_complete(job_id):
time.sleep(5)
if time.time() > timeout:
logger.critical("Template push failed. Prime job not completed")
sys.exit(1)
# TODO: should only devices whose template push succeeded be synchronized?
self.force_sync_multiple(address_list, sw_api_call) # trigger a config sync/backup
if not sw_api_call.job_successful(job_id):
logger.critical("Template push failed. Prime job not successful")
sys.exit(1)
logger.info("Synchronizing ...")
# logger.info("Synchronized!")
logger.info('Template push complete.')
return args
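Both `template` and `bas` repeat the same poll-until-complete shape around Prime job IDs: a deadline, a completion probe every few seconds, and a hard failure on timeout. Stripped of the API it reduces to a generic deadline loop — `wait_for_job` and the fake job below are illustrative, stdlib-only stand-ins:

```python
import time

def wait_for_job(is_complete, timeout_s=30.0, poll_s=5.0):
    # Poll is_complete() until it returns True; give up after timeout_s.
    deadline = time.monotonic() + timeout_s
    while not is_complete():
        if time.monotonic() > deadline:
            return False  # caller decides whether to sys.exit(1)
        time.sleep(poll_s)
    return True

# fake job that reports complete on the third poll
state = {"polls": 0}
def fake_job():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_for_job(fake_job, timeout_s=1.0, poll_s=0.01))  # True
```

Using `time.monotonic()` rather than `time.time()` keeps the deadline immune to wall-clock adjustments; the methods above use `time.time()`, which works but can misfire if the system clock jumps mid-job.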
    def bas(self, args, config, logger):
        # find and display (update this call to work)
        dev_id, found_int, dev_ip = self.find.int(args, config, args.interface, logger)
        # require 'yes' input to proceed
        logger.info('Activate BAS on switch INTERFACE {} using VLAN: {}'.format(found_int['name'], args.vlan))
        response = input("Confirm action of changing VLAN ('yes'):")
        if not response == 'yes':
            logger.info('Did not proceed with change.')
            sys.exit(1)
        # invoke API call to change VLAN
        # sw_api_call = Switch(config.username, config.password, config.cpi_ipv4_address, logger) # create API switch call object
        sw_api_call = Switch(config, logger)  # create API switch call object
        # push API_CALL_conf_if_bas template out. Update this to use a shared template, the same as change vlan?
        job_id = sw_api_call.conf_if_bas(dev_id, found_int['name'], args.description, args.vlan)
        timeout = time.time() + 30  # 30 second timeout starting now
        time.sleep(1)  # without the sleep the job_complete can balk, not finding the job_id yet
        while not sw_api_call.job_complete(job_id):
            time.sleep(5)
            if time.time() > timeout:
                logger.critical("Change VLAN failed. Prime job not completed")
                sys.exit(1)
        if not sw_api_call.job_successful(job_id):
            logger.critical("Change VLAN failed. Prime job not successful")
            sys.exit(1)
        logger.info('Change VLAN complete.')
        ########################################################
        # add a verification flag to sync and display after, instead of default?
        ########################################################
        logger.info("Synchronizing ...")
        self.force_sync(dev_id, dev_ip, sw_api_call, 20, logger)  # 20 minute timeout
        logger.info("Synchronized!")
        dev_id, found_int, dev_ip = self.find.int(args, config, args.interface, logger)
        return args
    def force_sync_multiple(self, address_list, sw_api_call):
        # no error handling, for triggering a config backup
        sw_api_call.sync_multiple(address_list)  # force a sync!

    # Copies of synchronized and force_sync from upgrade_code.py. That uses a constant to hold values though.
    def force_sync(self, sw_id, sw_ip, sw_api_call, timeout, logger):
        old_sync_time = sw_api_call.sync_time(sw_id)
        sw_api_call.sync(sw_ip)  # force a sync!
        end_time = time.time() + 60 * timeout
        logger.info("Timeout set to {} minutes.".format(timeout))
        time.sleep(20)  # don't test for sync status too soon (CPI delay and all that)
        while not self.synchronized(sw_id, sw_api_call, logger):
            time.sleep(10)
            if time.time() > end_time:
                logger.critical("Timed out. Sync failed.")
                sys.exit(1)
        new_sync_time = sw_api_call.sync_time(sw_id)
        if old_sync_time == new_sync_time:  # KEEP CODE! needed for corner case where force sync fails (code 03.03.03)
            logger.critical("Before and after sync time is the same. Sync failed.")
            sys.exit(1)
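Both polling loops in this class follow the same poll-until-deadline shape. A generic helper in that style (a sketch with hypothetical names, not part of the original module) makes the pattern testable with a stubbed clock and sleep:

```python
import time

def wait_until(predicate, timeout_s, poll_s=5.0, sleep=time.sleep, clock=time.monotonic):
    """Poll predicate() until it returns True or timeout_s elapses."""
    deadline = clock() + timeout_s
    while not predicate():
        if clock() > deadline:
            return False
        sleep(poll_s)
    return True

# A predicate that succeeds on its third call; sleep is stubbed out so the
# example runs instantly.
calls = {"n": 0}
def ready():
    calls["n"] += 1
    return calls["n"] >= 3

assert wait_until(ready, timeout_s=60, poll_s=0, sleep=lambda s: None) is True
assert calls["n"] == 3
```

Injecting `sleep` and `clock` keeps the helper unit-testable, which the inline loops above are not.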
    # def force_sync_multiple(self, sw_id, sw_ip, sw_api_call, timeout, logger):
    #     old_sync_time = sw_api_call.sync_time(sw_id)
    #     sw_api_call.sync(sw_ip)  # force a sync!
    #     end_time = time.time() + 60 * timeout
    #     logger.info("Timeout set to {} minutes.".format(timeout))
    #     time.sleep(20)  # don't test for sync status too soon (CPI delay and all that)
    #     while not self.synchronized(sw_id, sw_api_call, logger):
    #         time.sleep(10)
    #         if time.time() > end_time:
    #             logger.critical("{} Timed out. Sync failed.".format(sw_ip))
    #             return
    #
    #     new_sync_time = sw_api_call.sync_time(sw_id)
    #     if old_sync_time == new_sync_time:  # KEEP CODE! needed for corner case where force sync fails (code 03.03.03)
    #         logger.critical("{} Before and after sync time is the same. Sync failed.".format(sw_ip))
    #         return
    def synchronized(self, sw_id, sw_api_call, logger):
        if sw_api_call.sync_status(sw_id) == "COMPLETED":
            logger.info("Synchronization Complete!")
            return True
        elif sw_api_call.sync_status(sw_id) == "SYNCHRONIZING":
            return False
        else:
            # sw.sync_state = sw_api_call.sync_status(sw_id)
            logger.warning("Unexpected sync state:")
            return False
 | 41.35503 | 129 | 0.606954 | 939 | 6,989 | 4.336528 | 0.192758 | 0.053291 | 0.059676 | 0.031925 | 0.680501 | 0.654715 | 0.634578 | 0.617633 | 0.590128 | 0.590128 | 0 | 0.010222 | 0.272142 | 6,989 | 169 | 130 | 41.35503 | 0.79025 | 0.342681 | 0 | 0.333333 | 0 | 0 | 0.152743 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.055556 | 0 | 0.188889 | 0.022222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f12a3ccfb07fc32ea4a8769b9c53d6c5dadcdff4 | 5,200 | py | Python | python/py_basic_ide/pyBASIC/parser.py | josephlewis42/personal_codebase | aa0fff9a908ab90bc78d24aa69d1b91163c35314 | [
"Unlicense"
] | 3 | 2015-11-24T17:06:58.000Z | 2018-05-01T14:03:57.000Z | python/py_basic_ide/pyBASIC/parser.py | josephlewis42/personal_codebase | aa0fff9a908ab90bc78d24aa69d1b91163c35314 | [
"Unlicense"
] | null | null | null | python/py_basic_ide/pyBASIC/parser.py | josephlewis42/personal_codebase | aa0fff9a908ab90bc78d24aa69d1b91163c35314 | [
"Unlicense"
] | null | null | null |
#!/usr/bin/python
SHOW_ERRORS = True
import sys


def error_fx(text):
    '''The default error handling: print the text to the console.
    Replace with your own function if you want, have it print to your
    wx application or whatever.'''
    sys.stderr.write(text)


def show_error(text):
    '''
    Send an error if SHOW_ERRORS = True
    '''
    if SHOW_ERRORS:
        error_fx(text)


def split_text(text, seperator=" "):
    return get_word(text, seperator)


def get_word(text, seperator=" "):
    '''
    Returns the beginning and end of text separated around seperator.
    If seperator is not found, the tail will be a blank string.
    '''
    try:
        head = text[0:text.index(seperator)]
        tail = text[text.index(seperator) + len(seperator):len(text)]
    except ValueError:
        return text, ""
    return head.strip(), tail.strip()


def remove_between(text, char="\""):
    '''
    Returns a string from between the next two characters from the
    input string; returns the head, thorax, and tail.

    Example:
    remove_between("TEST \"Hello Jane!\" said Dick.")
    ("TEST", "Hello Jane!", "said Dick.")
    '''
    head, tail = get_word(text, char)
    thorax, abdomen = get_word(tail, char)
    return head.strip(), thorax.strip(), abdomen.strip()


def has_another(text, substring):
    '''
    Tests if the text has another substring: if it does, returns True,
    otherwise returns False.
    '''
    try:
        text.index(substring)
        return True
    except ValueError:
        return False
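`get_word` above is essentially `str.partition` followed by stripping. An equivalent self-contained sketch (the name `get_word_partition` is ours) that matches its behavior on typical inputs:

```python
def get_word_partition(text, seperator=" "):
    # str.partition returns (head, sep, tail); tail is "" when sep is absent.
    head, _, tail = text.partition(seperator)
    return head.strip(), tail.strip()

assert get_word_partition("PRINT hello world") == ("PRINT", "hello world")
assert get_word_partition("END") == ("END", "")
assert get_word_partition("LET x = 5", "=") == ("LET x", "5")
```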
def tokenize(line, linenumber):
    '''
    Tokenize so the runner can work and check for errors in the syntax.
    '''
    word_list = []  # Is returned with each token in a proper area.

    # Get the keyword.
    first_word, rest_line = split_text(line)
    first_word = first_word.upper()

    # Add the first word to the list for identification in runner.
    word_list.append(first_word)

    # Check for first keyword.
    acceptable_words_list = ["PRINT", "CLS", "IF", "GOTO", \
                             "LABEL", "INPUT", "LET", "REM", \
                             "END", "STOP", "", "CLEAR", "LBL"]
    if first_word not in acceptable_words_list:
        show_error("Token error line %d, %s is not a valid token."
                   % (linenumber, first_word))

    # Tokenize the rest of the line based off of the first keyword.
    """
    If statement:
        ["IF", "EXPRESSION", "THEN STATEMENT", "ELSE STATEMENT"]
    Example:
        IF y=='' THEN PRINT 'Hello'
    Is formatted as:
        ["IF", "%(y)s == ''", "PRINT 'Hello'", "PRINT 'Goodbye'"]
    The else is optional.
    """
    if first_word in ["IF"]:
        # Check for syntax errors.
        if not has_another(rest_line, "THEN"):
            show_error("IF error line %d, no THEN statement." % (linenumber))
        expression, tail = get_word(rest_line, "THEN")
        word_list.append(expression)
        if not has_another(rest_line, "ELSE"):
            # If no else.
            word_list.append(tokenize(tail, linenumber))
            word_list.append(tokenize("REM Nothing", linenumber))
        else:
            # If there is an else still.
            then, rest = get_word(tail, "ELSE")
            word_list.append(tokenize(then, linenumber))
            word_list.append(tokenize(rest, linenumber))

    # Let
    if first_word in ["LET"]:
        if not has_another(rest_line, "="):
            show_error("LET error line %d, no assignment operator after variable." % (linenumber))
        else:
            head, tail = get_word(rest_line, "=")
            word_list.append(head)
            word_list.append(tail)

    # Input
    if first_word in ["INPUT"]:
        a, b, c = remove_between(rest_line, "\"")
        if a != "":
            show_error("INPUT error line %d, too many tokens before String." % (linenumber))
        if has_another(c, " "):
            show_error("INPUT error line %d, extra tokens found after variable." % (linenumber))
        if c == "":
            show_error("INPUT error line %d, no assignment variable." % (linenumber))
        word_list.append(b)  # User Display Text
        word_list.append(c)  # Variable

    # Rem
    if first_word in ["REM"]:
        word_list.append(rest_line)

    # End
    if first_word in ["END"]:
        if rest_line != "":
            show_error("END error line %d, too many tokens after END." % (linenumber))

    # Stop
    if first_word in ["STOP"]:
        if rest_line != "":
            show_error("STOP error line %d, too many tokens after STOP." % (linenumber))

    # gosub
    # Goto Statement
    if first_word in ["GOTO"]:
        if has_another(rest_line, " "):
            show_error("GOTO error line %d, too many tokens after GOTO" % (linenumber))
        else:
            word_list.append(rest_line)

    # PRINT Statement
    if first_word in ["PRINT"]:
        word_list.append(rest_line)

    # Clear statement
    if first_word in ["CLS", "CLEAR"]:
        if rest_line != "":
            show_error("CLEAR/CLS error line %d, too many tokens after CLEAR/CLS." % (linenumber))

    # LABEL statement
    if first_word in ["LABEL", "LBL"]:
        if has_another(rest_line, " "):
            show_error("LABEL/LBL error line %d, too many tokens after LABEL/LBL." % (linenumber))
        else:
            word_list.append(rest_line)

    # Return the list of tokenized words.
    return word_list
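For reference, the token shape the LET branch produces can be sketched in isolation (hypothetical helper name, same split-on-'=' logic as above):

```python
def tokenize_let(rest_line):
    # Mirrors the LET branch: split on the first '=' into variable and expression.
    var, _, expr = rest_line.partition("=")
    return ["LET", var.strip(), expr.strip()]

assert tokenize_let("x = 5 + 3") == ["LET", "x", "5 + 3"]
assert tokenize_let("name = \"Dick\"") == ["LET", "name", "\"Dick\""]
```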
def tokenize_document(text):
    '''
    Create a token list of a document with newline characters.
    '''
    tokens = []
    tokenlines = text.split("\n")
    index = 1
    for line in tokenlines:
        t = tokenize(line, index)
        if t != [""]:
            tokens.append(t)
        index += 1
    return tokens


def tokenize_from_file(path):
    '''
    Create a basic token list from a document.
    '''
    text = ""
    with open(path) as f:
        for line in f:
            text += line
    return tokenize_document(text)
| 26.804124 | 88 | 0.674808 | 768 | 5,200 | 4.449219 | 0.213542 | 0.044776 | 0.05736 | 0.038045 | 0.248756 | 0.127305 | 0.093649 | 0 | 0 | 0 | 0 | 0.000713 | 0.190385 | 5,200 | 193 | 89 | 26.943005 | 0.810926 | 0.230769 | 0 | 0.141509 | 0 | 0 | 0.181447 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084906 | false | 0 | 0.009434 | 0.009434 | 0.179245 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f12d14efa8b178a7fd6e3c412a22307369c3675c | 2,076 | py | Python | src/config/config.py | mirzak/mender-python-client | 383bd5d130fb67d3f38aa4a4442b0bd74ec29cca | [
"Apache-2.0"
] | null | null | null | src/config/config.py | mirzak/mender-python-client | 383bd5d130fb67d3f38aa4a4442b0bd74ec29cca | [
"Apache-2.0"
] | null | null | null | src/config/config.py | mirzak/mender-python-client | 383bd5d130fb67d3f38aa4a4442b0bd74ec29cca | [
"Apache-2.0"
] | null | null | null |
# Copyright 2020 Northern.tech AS
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import logging as log
class NoConfigurationFileError(Exception):
    pass


class Config(dict):
    """A dictionary for storing Mender configuration values"""

    def __init__(self, *args, **kw):
        super(Config, self).__init__(*args, **kw)
        self.__dict__ = self
# TODO - handle non-existing keys, or explicitly map to all acceptable
# values
def load(
    local_path="/etc/mender/mender.conf", global_path="/data/etc/mender/mender.conf"
):
    """Read and return the config from the local and global config files"""
    log.info("Loading the configuration files...")
    global_conf = local_conf = None
    try:
        with open(global_path, "r") as fh:
            global_conf = json.load(fh)
    except FileNotFoundError as e:
        log.debug(f"Global configuration file not found: {e}")
    try:
        with open(local_path, "r") as fh:
            local_conf = json.load(fh)
    except FileNotFoundError as e:
        log.debug(f"Local configuration file not found: {e}")
    if not global_conf and not local_conf:
        raise NoConfigurationFileError
    if global_conf and local_conf:
        # Merge the two files, giving precedence to the local configuration
        b = {**global_conf, **local_conf}
        c = Config()
        c.update(b)
        return c
    if global_conf:
        c = Config()
        c.update(global_conf)
        return c
    c = Config()
    c.update(local_conf)
    return c
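The merge step relies on dict-unpacking order: keys from the mapping unpacked last overwrite earlier ones, which is what gives the local file precedence. A minimal illustration with made-up configuration keys:

```python
global_conf = {"ServerURL": "https://example.invalid", "UpdatePollIntervalSeconds": 1800}
local_conf = {"UpdatePollIntervalSeconds": 300}

merged = {**global_conf, **local_conf}
assert merged == {"ServerURL": "https://example.invalid",
                  "UpdatePollIntervalSeconds": 300}  # local value wins
```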
| 31.938462 | 84 | 0.664258 | 284 | 2,076 | 4.753521 | 0.440141 | 0.051852 | 0.017778 | 0.031111 | 0.12 | 0.054815 | 0 | 0 | 0 | 0 | 0 | 0.005125 | 0.248073 | 2,076 | 64 | 85 | 32.4375 | 0.859705 | 0.405588 | 0 | 0.263158 | 0 | 0 | 0.137417 | 0.042219 | 0 | 0 | 0 | 0.015625 | 0 | 1 | 0.052632 | false | 0.052632 | 0.052632 | 0 | 0.236842 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f12dc2176e2beefeeb42b25ea4471be881f3f01d | 1,316 | py | Python | Arays/6_Equilibrium index of an array_approach_2.py | sounak95/100_days_of_code | 50fbf088ce6ab2137aa216a30e3b3f828b278a22 | [
"Apache-2.0"
] | null | null | null | Arays/6_Equilibrium index of an array_approach_2.py | sounak95/100_days_of_code | 50fbf088ce6ab2137aa216a30e3b3f828b278a22 | [
"Apache-2.0"
] | null | null | null | Arays/6_Equilibrium index of an array_approach_2.py | sounak95/100_days_of_code | 50fbf088ce6ab2137aa216a30e3b3f828b278a22 | [
"Apache-2.0"
] | null | null | null |
"""
Description - Equilibrium index of an array is an index such that the sum of elements at lower indexes is equal to the sum of elements at higher indexes. We are given an array of integers; we have to find the first index i from the left such that -
A[0] + A[1] + ... A[i-1] = A[i+1] + A[i+2] ... A[n-1]

Input
[-7, 1, 5, 2, -4, 3, 0]
Output
3
A[0] + A[1] + A[2] = A[4] + A[5] + A[6]

Tricky Solution: The idea is to get the total sum of the array first. Then iterate through the array and keep updating the left sum, which is initialized as zero. In the loop, we can get the right sum by subtracting the elements one by one. Then check whether leftsum and rightsum are equal.

Pseudo Code
// n : size of array
int eqindex(arr, n)
{
    sum = 0
    leftsum = 0
    for (i=0 to n-1)
        sum += arr[i]
    for (i=0 to n-1)
    {
        // now sum will be rightsum for index i
        sum -= a[i]
        if (sum == leftsum)
            return i
        leftsum += a[i]
    }
}

Time Complexity : O(n)
Auxiliary Space : O(1)

input:
-7 1 5 2 -4 3 0
output:
3
"""
arr = list(map(int, input().split()))
n = len(arr)
flag = True
sum = 0
for item in arr:
    sum += item
left_sum = 0
for i in range(n):
    sum -= arr[i]
    if sum == left_sum:
        flag = False
        print(i)
        break
    left_sum += arr[i]
if flag:
    print(-1)
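The same prefix-sum logic, wrapped in a function so it can be exercised directly (a sketch; the original script reads from stdin instead):

```python
def equilibrium_index(arr):
    total, left = sum(arr), 0
    for i, v in enumerate(arr):
        total -= v  # total is now the right-hand sum for index i
        if total == left:
            return i
        left += v
    return -1

assert equilibrium_index([-7, 1, 5, 2, -4, 3, 0]) == 3
assert equilibrium_index([1, 2, 3]) == -1
```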
| 23.087719 | 279 | 0.604103 | 250 | 1,316 | 3.168 | 0.384 | 0.012626 | 0.011364 | 0.040404 | 0.137626 | 0.082071 | 0.04798 | 0.04798 | 0.04798 | 0.04798 | 0 | 0.039874 | 0.275836 | 1,316 | 56 | 280 | 23.5 | 0.791186 | 0.794833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f12dec8fdb6f941c195c95911262bfc88aa141b4 | 491 | py | Python | kozmic/projects/__init__.py | artofhuman/kozmic-ci | 930c06e0ad6d5a1fe16b81c036a1d676004eeb37 | [
"BSD-3-Clause"
] | 1 | 2021-06-05T18:36:13.000Z | 2021-06-05T18:36:13.000Z | kozmic/projects/__init__.py | artofhuman/kozmic-ci | 930c06e0ad6d5a1fe16b81c036a1d676004eeb37 | [
"BSD-3-Clause"
] | null | null | null | kozmic/projects/__init__.py | artofhuman/kozmic-ci | 930c06e0ad6d5a1fe16b81c036a1d676004eeb37 | [
"BSD-3-Clause"
] | null | null | null |
# coding: utf-8
"""
kozmic.projects
~~~~~~~~~~~~~~~

.. attribute:: bp

    :class:`flask.Blueprint` that provides all the means for managing and
    viewing projects.
"""
from flask import Blueprint
from flask.ext.login import login_required


bp = Blueprint('projects', __name__)


@bp.before_request
@login_required
def before_request():
    # Do nothing, just require login
    pass


@bp.record
def configure(state):
    register_views()


def register_views():
    from . import views
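`Blueprint.record` defers a callback until the blueprint is registered on an application, which is why `register_views` can import views lazily. The deferred-callback pattern in miniature (names are ours, not Flask's API):

```python
class MiniBlueprint:
    def __init__(self):
        self.deferred = []

    def record(self, func):
        # Remember the callback now; run it later at registration time.
        self.deferred.append(func)
        return func

    def register(self, state):
        for func in self.deferred:
            func(state)

bp_sketch = MiniBlueprint()
calls = []

@bp_sketch.record
def configure(state):
    calls.append(state)

bp_sketch.register("app-state")
assert calls == ["app-state"]
```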
| 15.34375 | 73 | 0.694501 | 62 | 491 | 5.33871 | 0.612903 | 0.054381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002481 | 0.179226 | 491 | 31 | 74 | 15.83871 | 0.818859 | 0.393075 | 0 | 0 | 0 | 0 | 0.027682 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.083333 | 0.25 | 0 | 0.5 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f12e84e71dc2614e3a6f1d2f7d671fe27072ff71 | 474 | py | Python | py_framework/wsgi.py | zeroam/TIL | 43e3573be44c7f7aa4600ff8a34e99a65cbdc5d1 | [
"MIT"
] | null | null | null | py_framework/wsgi.py | zeroam/TIL | 43e3573be44c7f7aa4600ff8a34e99a65cbdc5d1 | [
"MIT"
] | null | null | null | py_framework/wsgi.py | zeroam/TIL | 43e3573be44c7f7aa4600ff8a34e99a65cbdc5d1 | [
"MIT"
] | null | null | null |
from wsgiref.simple_server import make_server
def application(environ, start_response):
    response_body = [
        '{key}: {value}'.format(key=key, value=value)
        for key, value in sorted(environ.items())
    ]
    response_body = '\n'.join(response_body)
    status = '200 OK'
    response_headers = [
        ('Content-type', 'text/plain'),
    ]
    # start_response must be called before the body is returned
    start_response(status, response_headers)
    return [response_body.encode('utf-8')]
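Because a WSGI application is just a callable, it can be exercised without starting a server by passing a synthetic environ and capturing `start_response`. A self-contained sketch using the stdlib helper `wsgiref.util.setup_testing_defaults`:

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    body = '\n'.join('{}: {}'.format(k, v) for k, v in sorted(environ.items()))
    return [body.encode('utf-8')]

environ = {}
setup_testing_defaults(environ)  # fills in REQUEST_METHOD, PATH_INFO, wsgi.* keys
seen = []
result = app(environ, lambda status, headers: seen.append(status))
assert seen == ['200 OK']
assert b'REQUEST_METHOD: GET' in result[0]
```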
server = make_server('localhost', 8000, app=application)
server.serve_forever()
 | 26.333333 | 95 | 0.670886 | 58 | 474 | 5.310345 | 0.637931 | 0.155844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020672 | 0.183544 | 474 | 18 | 96 | 26.333333 | 0.775194 | 0 | 0 | 0 | 0 | 0 | 0.115789 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.076923 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f130b64b0ff7b024705421268b8468e5fc3ddf42 | 3,345 | py | Python | pomma/determine_symbols_and_max_repeats.py | NickleDave/pomma | e41dc4b354edb0c3a52685365fd79653e1930d43 | [
"BSD-3-Clause"
] | 1 | 2019-02-06T16:51:46.000Z | 2019-02-06T16:51:46.000Z | pomma/determine_symbols_and_max_repeats.py | NickleDave/pomma | e41dc4b354edb0c3a52685365fd79653e1930d43 | [
"BSD-3-Clause"
] | null | null | null | pomma/determine_symbols_and_max_repeats.py | NickleDave/pomma | e41dc4b354edb0c3a52685365fd79653e1930d43 | [
"BSD-3-Clause"
] | null | null | null |
from itertools import groupby, chain


def determine_symbols_and_max_repeats(sequences):
    """determines unique set of symbols used in sequences, and maximum number
    of repeats of those symbols (consecutive repeats, not just repeats in the
    sense of occurrences). Any symbol with a maximum number of repeats > 1 is
    considered a symbol that repeats. These repeating symbols will be fit with
    states that adapt.

    To make fitting easier, maps symbols to a set of consecutive integers
    from 0 to n where n is the number of symbols,
    then applies that mapping to sequences.

    Parameters
    ----------
    sequences : list
        of lists. Representations of sequences of symbols.
        Lists can be of ints or of str (single characters).
        If str, will be converted to int.

    Returns
    -------
    symbols_and_max_repeats: dict
        with following key, value pairs:
            symbols : set
                of ints, unique set of symbols used in sequences
            symbols_int_map: dict
                mapping from symbols to integers 0,1,2,...,n
                where n is the number of symbols
            seqs_mapped : list
                List of lists of int. Result of "converting" sequences
                to ints by applying symbols_int_map to it.
            max_repeats : dict
                where each key is a symbol and the corresponding value is
                the maximum number of consecutive repeats of that symbol
                found in any of the sequences
            repeat_symbols : list
                of int, symbols with repeat strings with max_repeats > 1
    """
    if type(sequences) != list:
        raise TypeError('sequences should be a list, not {}'.format(type(sequences)))
    if not all([type(seq) == list for seq in sequences]):
        raise TypeError('sequences should be a list of lists')

    # chain.from_iterable concatenates sequences
    seqs_concat = list(chain.from_iterable(sequences))
    # map unique set of symbols to consecutive integers starting from 0
    symbols = set(seqs_concat)
    symbols_int_map = dict(zip(symbols,
                               range(len(symbols))))
    # apply mapping to sequences
    seqs_mapped = []
    for seq in sequences:
        seq_mapped = [symbols_int_map[symbol]
                      for symbol in seq]
        seqs_mapped.append(seq_mapped)

    # find maximum number of consecutive repeats for each symbol
    repeat_counts = []
    for seq_mapped in seqs_mapped:
        counts_this_seq = [(k, sum(1 for i in g)) for k, g in groupby(seq_mapped)]
        repeat_counts.extend(counts_this_seq)

    max_repeats = {}
    for symbol, symbol_int in symbols_int_map.items():
        all_counts_this_symbol = [tuple_count
                                  for tuple_symbol, tuple_count in repeat_counts
                                  if tuple_symbol == symbol_int]
        max_repeat = max(all_counts_this_symbol)
        max_repeats[symbol] = max_repeat

    repeat_symbols = [symbol
                      for symbol, max_repeat in max_repeats.items()
                      if max_repeat > 1]

    symbols_and_max_repeats = {
        'symbols': symbols,
        'seqs_mapped': seqs_mapped,
        'max_repeats': max_repeats,
        'repeat_symbols': repeat_symbols
    }
    return symbols_and_max_repeats
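The run-length counting at the core of `max_repeats` uses `itertools.groupby`, which groups only consecutive equal elements. The trick in isolation:

```python
from itertools import groupby

seq = [0, 1, 1, 2, 1, 1, 1]
runs = [(k, sum(1 for _ in g)) for k, g in groupby(seq)]
assert runs == [(0, 1), (1, 2), (2, 1), (1, 3)]

max_repeats = {}
for sym, count in runs:
    max_repeats[sym] = max(max_repeats.get(sym, 0), count)
assert max_repeats == {0: 1, 1: 3, 2: 1}  # symbol 1 repeats up to 3 times
```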
| 41.8125 | 85 | 0.637668 | 439 | 3,345 | 4.697039 | 0.266515 | 0.053346 | 0.031523 | 0.038797 | 0.125121 | 0.093113 | 0.093113 | 0.026188 | 0 | 0 | 0 | 0.003861 | 0.303139 | 3,345 | 79 | 86 | 42.341772 | 0.880738 | 0.479223 | 0 | 0 | 0 | 0 | 0.069825 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.027778 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f130c55a426adac68bf09f355daa9ca3125bc0da | 292 | py | Python | week2/scripts/tb_publisher.py | ajaykrishna1878/Robotics-Automation-QSTP-2021 | f5b8626db20a60f9dd923bab5a0bec118d0abc67 | [
"MIT"
] | null | null | null | week2/scripts/tb_publisher.py | ajaykrishna1878/Robotics-Automation-QSTP-2021 | f5b8626db20a60f9dd923bab5a0bec118d0abc67 | [
"MIT"
] | null | null | null | week2/scripts/tb_publisher.py | ajaykrishna1878/Robotics-Automation-QSTP-2021 | f5b8626db20a60f9dd923bab5a0bec118d0abc67 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import rospy
from std_msgs.msg import Float32
rospy.init_node('radius_publisher')
pub = rospy.Publisher('/radius', Float32, queue_size=1)
rate = rospy.Rate(1)
if __name__ == '__main__':
while not rospy.is_shutdown():
pub.publish(0.5)
rate.sleep() | 22.461538 | 55 | 0.695205 | 43 | 292 | 4.418605 | 0.72093 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036885 | 0.164384 | 292 | 13 | 56 | 22.461538 | 0.741803 | 0.071918 | 0 | 0 | 0 | 0 | 0.114391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f1313cae1d8ecddeb5f75f139601242ca6ec08e2 | 3,330 | py | Python | exps/supp-synthetic/synth_utils.py | Viktour19/overlap-code | f5c6e63146a00f65710c38b9181bb9d12de6454f | [
"MIT"
] | 2 | 2020-07-09T03:15:58.000Z | 2022-03-09T11:57:17.000Z | exps/supp-synthetic/synth_utils.py | Viktour19/overlap-code | f5c6e63146a00f65710c38b9181bb9d12de6454f | [
"MIT"
] | null | null | null | exps/supp-synthetic/synth_utils.py | Viktour19/overlap-code | f5c6e63146a00f65710c38b9181bb9d12de6454f | [
"MIT"
] | 1 | 2021-05-18T11:55:04.000Z | 2021-05-18T11:55:04.000Z |
import pandas as pd
import numpy as np

identity_func = lambda a, b: b


def compliance(D, R, inv_trans=lambda x, y: y):
    ops = {'<=': (lambda x, y: x <= y),
           '>': (lambda x, y: x > y),
           '>=': (lambda x, y: x >= y),
           '<': (lambda x, y: x < y),
           '==': (lambda x, y: x == y),
           '': (lambda x, y: x == True),
           'not': (lambda x, y: x == False)}
    Ws = []
    for r in R:
        W = []
        for c in r:
            try:
                v = float(c[2])
            except (TypeError, ValueError):
                v = c[2]
            W.append(ops[c[1]](inv_trans(c[0], D[c[0]].values), v))
        W = np.array(W)
        Ws.append(W)
    return Ws
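`compliance` evaluates each rule as a conjunction of `(column, operator, value)` clauses over whole DataFrame columns. The same clause evaluation against a single plain-dict record (the column names here are hypothetical):

```python
ops = {'<=': lambda x, y: x <= y, '>': lambda x, y: x > y, '==': lambda x, y: x == y}

def satisfies(record, rule):
    # A rule is a list of (column, op, value) clauses, all of which must hold.
    return all(ops[op](record[col], val) for col, op, val in rule)

rule = [('age', '>', 40.0), ('bmi', '<=', 30.0)]
assert satisfies({'age': 55.0, 'bmi': 24.0}, rule) is True
assert satisfies({'age': 35.0, 'bmi': 24.0}, rule) is False
```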
def calc_coverage(X, RS_s):
    # This predicts whether or not X trips ANY of the CONSIDERED rules
    x_by_all_rules = RS_s.predict_rules(X)
    # This lays out the set of singletons* x CONSIDERED rules
    # *this is 2 x dimension for binary variables
    clauses_by_all_rules = RS_s.M.z
    # This lays out the FINAL rules with 1,0, after rounding
    rules_used_idx = RS_s.M.w == 1
    x_by_used_rules = x_by_all_rules[:, rules_used_idx]
    prop_covered_by_used_rule = x_by_used_rules.mean(axis=0)
    return prop_covered_by_used_rule
def eval_confusion_matrix(RS_s, x, check_fn):
    # Predicted reference samples in support
    pred_ref = RS_s.predict(RS_s.refSamples)
    # Actual reference samples in support
    true_ref = check_fn(RS_s.refSamples)
    # Check the confusion matrix
    ct_ref = pd.crosstab(
        pred_ref, true_ref,
        rownames=['Predicted'], colnames=['Actual'])

    # Predicted data samples in support
    pred_dat = RS_s.predict(x)
    true_dat = check_fn(x)
    ct_dat = pd.crosstab(pred_dat, true_dat,
                         rownames=['Predicted'], colnames=['Actual'])
    return ct_ref, ct_dat
def eval_false_inclusion_rate(RS_s, check_fn):
    # Predicted reference samples in support
    pred_ref = RS_s.predict(RS_s.refSamples)
    # Actual reference samples in support
    true_ref = check_fn(RS_s.refSamples)
    # Of the reference samples that should be excluded, how many get through?
    false_inclusion_rate = pred_ref[true_ref == 0].mean()
    return false_inclusion_rate
def print_synth_rules(X, RS_s, CNF=True):
    rules_support = RS_s.rules(transform=identity_func, fmt='%.1f')
    prop_covered = calc_coverage(X, RS_s)

    # Outer logic takes into account the negation of the CNF
    outer_logic = ['NOT', 'AND NOT'] if CNF else [' ', 'AND']
    inner_logic = [' ', 'AND'] if CNF else ['EITHER', 'OR']

    print("Total coverage of X: {:.3f}".format(RS_s.predict(X).mean()))
    print("Total volume: {:.3f}".format(RS_s.predict(RS_s.refSamples).mean()))
    print("-----------")
    for i in range(len(rules_support)):
        this_rule = rules_support[i]
        print("{:<7} Rule: {} \t \t \t Covers {:.3f} of X".format(
            outer_logic[0] if i == 0 else outer_logic[1],
            i,
            prop_covered[i]))
        print("{")
        for j in range(len(this_rule)):
            this_clause = this_rule[j]
            print("\t {} {} {}".format(
                inner_logic[0] if j == 0 else inner_logic[1],
                'NOT' if this_clause[1] == 'not' else ' ',
                this_clause[0]))
        print("}")
| 33.979592 | 85 | 0.58979 | 503 | 3,330 | 3.697813 | 0.270378 | 0.030645 | 0.034409 | 0.033871 | 0.305376 | 0.225806 | 0.17043 | 0.17043 | 0.17043 | 0.17043 | 0 | 0.009942 | 0.275075 | 3,330 | 97 | 86 | 34.329897 | 0.760563 | 0.168769 | 0 | 0.115942 | 0 | 0 | 0.074047 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072464 | false | 0 | 0.028986 | 0 | 0.173913 | 0.115942 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f13146176d51f8cb1731e866ee6731f529600270 | 97 | py | Python | setup.py | souravdatta/words | 1d4d8e5f192b24b2c734d839fe3eaa540256e2ed | [
"MIT"
] | null | null | null | setup.py | souravdatta/words | 1d4d8e5f192b24b2c734d839fe3eaa540256e2ed | [
"MIT"
] | null | null | null | setup.py | souravdatta/words | 1d4d8e5f192b24b2c734d839fe3eaa540256e2ed | [
"MIT"
] | null | null | null |
# Py2Exe setup file
from distutils.core import setup
import py2exe
setup(console=['words.py'])
| 13.857143 | 32 | 0.762887 | 14 | 97 | 5.285714 | 0.714286 | 0.297297 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 0.134021 | 97 | 6 | 33 | 16.166667 | 0.857143 | 0.175258 | 0 | 0 | 0 | 0 | 0.103896 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f131822d2876d486bc9f4306fc36427725e30437 | 2,014 | py | Python | Tools/Recon/Profile/Phone_Number/atheris.py | Apollo-o/Whistle | f6df3b67be81fe36f0ecb8b4831bc5dc9cdc4a52 | [
"CC0-1.0"
] | null | null | null | Tools/Recon/Profile/Phone_Number/atheris.py | Apollo-o/Whistle | f6df3b67be81fe36f0ecb8b4831bc5dc9cdc4a52 | [
"CC0-1.0"
] | null | null | null | Tools/Recon/Profile/Phone_Number/atheris.py | Apollo-o/Whistle | f6df3b67be81fe36f0ecb8b4831bc5dc9cdc4a52 | [
"CC0-1.0"
] | null | null | null |
# Author: O-O
# Date: 6/23/2019
# Description: A Simple Reverse Lookup Program.
import webbrowser


# Generates URLS.
# Precondition: A String.
# Postcondition: Web-Browser Controller (Opens URLS)
def generate_urls(phone_number):
    # Phone Number.
    area, prefix, line = phone_number[:3], phone_number[3:6], phone_number[6:]
    # Generate URLS.
    urls = ["https://whocalld.com/+1{}{}{}",
            "https://www.whoeasy.com/pni/q/{}-{}-{}",
            "https://www.freephonetracer.com/fcpt.aspx?_act=Free&_pho={}-{}-{}",
            "https://www.reversephonelookup.com/number/{}{}{}/",
            "https://www.ussearch.com/search/phone/{}-{}-{}",
            "https://johndoe.com/phones/{}{}{}",
            "https://thatsthem.com/phone/{}-{}-{}",
            "https://www.thecallerguide.com/caller/{}-{}-{}",
            "https://www.truepeoplesearch.com/results?name={}{}{}",
            "https://www.whitepages.com/phone/1-{}-{}-{}",
            "https://www.zabasearch.com/phone/{}{}{}",
            "https://www.advancedbackgroundchecks.com/{}-{}-{}",
            "https://www.mylife.com/pub-multisearch.pubview?whyReg=Identity&ab_cid=seoIdentityReg&skipToRedirect=%2FssSubscription.do&search={}-{}-{}",
            "https://www.google.com/search?OxIUXfn7H8LU0gKLt53gAw&q={}{}{}",
            "https://www.bing.com/search?q={}{}{}",
            "https://duckduckgo.com/html?q={}{}{}"]
    # Launch URLS.
    event = 0
    for url in urls:
        webbrowser.open_new_tab(url.format(area, prefix, line))
        # Display Five Webpages.
        event += 1
        if event == 5:
            input(".....")
            event = 0
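The slicing above splits a 10-digit NANP-style number into area code, prefix, and line segments before substituting them into each URL template:

```python
phone = "2025550142"  # made-up example number
area, prefix, line = phone[:3], phone[3:6], phone[6:]
assert (area, prefix, line) == ("202", "555", "0142")

url = "https://thatsthem.com/phone/{}-{}-{}".format(area, prefix, line)
assert url == "https://thatsthem.com/phone/202-555-0142"
```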
# Start Program.
# Precondition: A String.
# Postcondition: None.
def main(phone_number):
    # If 10 Digits | Invalid Phone Number.
    if len(phone_number) == 10 and phone_number.isdigit():
        generate_urls(str(phone_number))
    else:
        print("Invalid Phone Number.")


# Run Program.
# input("Phone Number: ")
main(input("Phone Number: "))
| 32.483871 | 151 | 0.576465 | 220 | 2,014 | 5.209091 | 0.477273 | 0.124782 | 0.033159 | 0.055846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017187 | 0.21996 | 2,014 | 61 | 152 | 33.016393 | 0.712285 | 0.180238 | 0 | 0.0625 | 1 | 0.03125 | 0.510404 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.03125 | 0 | 0.09375 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f132e74654ba84f3c9be398d164080b7c216ae8e | 867 | py | Python | examples/show_tempos.py | storagebot/pyechonest | aa39f008e9ecdefedb3f37187596c6cf2b770e80 | [
"BSD-3-Clause"
] | 1 | 2015-04-26T12:21:23.000Z | 2015-04-26T12:21:23.000Z | examples/show_tempos.py | debrice/pyechonest | 8afe498ad70d456d064c328fe55a0049441c1cac | [
"BSD-3-Clause"
] | null | null | null | examples/show_tempos.py | debrice/pyechonest | 8afe498ad70d456d064c328fe55a0049441c1cac | [
"BSD-3-Clause"
] | null | null | null |
# Shows the tempos for all of the songs in a directory
# requires eyeD3, available from http://eyed3.nicfit.net/
import sys
import os
import eyeD3
import tempo


def show_tempo(mp3):
    "given an mp3, print out the artist, title and tempo of the song"
    tag = eyeD3.Tag()
    tag.link(mp3)
    my_tempo = tempo.get_tempo(tag.getArtist(), tag.getTitle())
    print 'File:  ', mp3
    print 'Artist:', tag.getArtist()
    print 'Title: ', tag.getTitle()
    print 'Tempo: ', my_tempo
    print


def show_tempos(dir):
    "print out the tempo for each MP3 in the given directory"
    for f in os.listdir(dir):
        if f.lower().endswith(".mp3"):
            path = os.path.join(dir, f)
            show_tempo(path)


if __name__ == '__main__':
    if len(sys.argv) == 1:
        print 'usage: python show_tempos.py path'
    else:
        show_tempos(sys.argv[1])
| 24.771429 | 69 | 0.635525 | 131 | 867 | 4.083969 | 0.450382 | 0.056075 | 0.041122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018321 | 0.244521 | 867 | 34 | 70 | 25.5 | 0.798473 | 0.123414 | 0 | 0 | 0 | 0 | 0.250991 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.16 | null | null | 0.32 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f1343917f3d52976f0620a8993ab397c596d873e | 321 | py | Python | dialogos/quotes/urls.py | bertucho/epic-movie-quotes-quiz | 09e4ec58a441ab74c1ce6e0fde4e71b08a4d7250 | [
"MIT"
] | null | null | null | dialogos/quotes/urls.py | bertucho/epic-movie-quotes-quiz | 09e4ec58a441ab74c1ce6e0fde4e71b08a4d7250 | [
"MIT"
] | null | null | null | dialogos/quotes/urls.py | bertucho/epic-movie-quotes-quiz | 09e4ec58a441ab74c1ce6e0fde4e71b08a4d7250 | [
"MIT"
] | null | null | null |
from django.conf.urls import patterns, url
from quotes import views
from views import *
urlpatterns = patterns('',
    url(r'^sdf$', index, name='index'),
    url(r'^$', GameView.as_view(), name='game'),
    url(r'^post$', AnswerView.as_view(), name='answer'),
    url(r'^edit$', QuoteUpdate.as_view(), name='update'),
)
| 29.181818 | 55 | 0.65109 | 45 | 321 | 4.577778 | 0.533333 | 0.07767 | 0.145631 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140187 | 321 | 10 | 56 | 32.1 | 0.746377 | 0 | 0 | 0 | 0 | 0 | 0.128617 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f134d17016594831785b2ac0544232f61c2c3c64 | 318 | py | Python | setup.py | timbook/modelmonitor | 876fdc8fb2b48e8e0942f9e7809193c62f0aa77e | [
"MIT"
] | null | null | null | setup.py | timbook/modelmonitor | 876fdc8fb2b48e8e0942f9e7809193c62f0aa77e | [
"MIT"
] | null | null | null | setup.py | timbook/modelmonitor | 876fdc8fb2b48e8e0942f9e7809193c62f0aa77e | [
"MIT"
] | null | null | null | import setuptools
setuptools.setup(
    name="modelmonitor",
    author="Tim Book",
    author_email="timothykbook@gmail.com",
    description="A library for monitoring data changes over time",
    url="https://github.com/timbook/modelmonitor",
    packages=setuptools.find_packages(),
    python_requires='>=3.6',
)
| 26.5 | 66 | 0.713836 | 37 | 318 | 6.054054 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007407 | 0.150943 | 318 | 11 | 67 | 28.909091 | 0.822222 | 0 | 0 | 0 | 0 | 0 | 0.418239 | 0.069182 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f1378b6a473af8bf2230b8b3abb2ec910392d01c | 4,995 | py | Python | Data-Lake/etl.py | naderAsadi/Udacity-Data-Engineering-Projects | d12c42b3260379a470abd244f98a1fd5b32718f7 | [
"MIT"
] | 4 | 2020-10-03T18:14:20.000Z | 2021-11-01T08:15:32.000Z | Data-Lake/etl.py | naderAsadi/Udacity-Data-Engineering-Projects | d12c42b3260379a470abd244f98a1fd5b32718f7 | [
"MIT"
] | null | null | null | Data-Lake/etl.py | naderAsadi/Udacity-Data-Engineering-Projects | d12c42b3260379a470abd244f98a1fd5b32718f7 | [
"MIT"
] | null | null | null | import configparser
from datetime import datetime
import os
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf, col, monotonically_increasing_id
from pyspark.sql.functions import year, month, dayofmonth, hour, weekofyear, date_format, dayofweek
from pyspark.sql.types import *
config = configparser.ConfigParser()
config.read('dl.cfg')
# Export the AWS credentials for s3a access; assumes dl.cfg keeps them
# under an [AWS] section (configparser requires a section header).
os.environ['AWS_ACCESS_KEY_ID'] = config['AWS']['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = config['AWS']['AWS_SECRET_ACCESS_KEY']
def create_spark_session():
    """Create or retrieve a Spark session."""
    return SparkSession.builder.config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:2.7.0")\
        .getOrCreate()
def process_song_data(spark, input_data, output_data):
    """Load the song JSON files and write the songs and artists
    dimension tables out as parquet.

    Args:
        spark: active SparkSession
        input_data: path prefix of the raw JSON data (e.g. an S3 bucket)
        output_data: path prefix for the output parquet tables
    """
    song_data = input_data + 'song_data/*/*/*/*.json'

    song_schema = StructType([
        StructField("artist_id", StringType()),
        StructField("artist_latitude", DoubleType()),
        StructField("artist_location", StringType()),
        StructField("artist_longitude", DoubleType()),
        StructField("artist_name", StringType()),
        StructField("duration", DoubleType()),
        StructField("num_songs", IntegerType()),
        StructField("title", StringType()),
        StructField("year", IntegerType())
    ])

    df = spark.read.json(song_data, schema=song_schema)

    # songs table, partitioned by year and artist
    song_table = df.select('title', 'artist_id', 'year', 'duration').dropDuplicates()\
        .withColumn('song_id', monotonically_increasing_id())
    song_table.write.parquet(output_data + 'songs/', mode='overwrite', partitionBy=['year', 'artist_id'])

    # artists table
    artist_table = df.select("artist_id", "artist_name", "artist_location", "artist_latitude", "artist_longitude").dropDuplicates()
    artist_table.write.parquet(output_data + 'artists/', mode='overwrite')
def process_log_data(spark, input_data, output_data):
    """Load the event log JSON files and write the users, time and
    songplays tables out as parquet.

    Args:
        spark: active SparkSession
        input_data: path prefix of the raw JSON data (e.g. an S3 bucket)
        output_data: path prefix for the output parquet tables
    """
    log_data = input_data + 'log-data/'

    df = spark.read.json(log_data).drop_duplicates()
    df = df.filter(df.page == 'NextSong')

    # users table
    users_fields = ["userId", "firstName", "lastName", "gender", "level"]
    users_table = df.selectExpr(users_fields).drop_duplicates()
    users_table.write.parquet(output_data + 'users/', mode='overwrite')

    # time table, derived from the epoch-millisecond ts column
    get_timestamp = udf(lambda x: datetime.utcfromtimestamp(int(x) / 1000), TimestampType())
    df = df.withColumn('start_time', get_timestamp('ts'))
    time_table = df.withColumn("hour", hour("start_time"))\
        .withColumn("day", dayofmonth("start_time"))\
        .withColumn("week", weekofyear("start_time"))\
        .withColumn("month", month("start_time"))\
        .withColumn("year", year("start_time"))\
        .withColumn("weekday", dayofweek("start_time"))\
        .select("ts", "start_time", "hour", "day", "week", "month", "year", "weekday").drop_duplicates()
    time_table.write.parquet(output_data + 'time_table/', mode='overwrite', partitionBy=['year', 'month'])

    # read the songs table back in to use for the songplays table
    song_df = spark.read\
        .format("parquet")\
        .option("basePath", os.path.join(output_data, "songs/"))\
        .load(os.path.join(output_data, "songs/*/*/"))

    # extract columns from joined song and log datasets to create the songplays table
    songplays_table = df.join(song_df, df.song == song_df.title, how='inner')\
        .select(monotonically_increasing_id().alias("songplay_id"), col("start_time"),
                col("userId").alias("user_id"), "level", "song_id", "artist_id",
                col("sessionId").alias("session_id"), "location", col("userAgent").alias("user_agent"))

    songplays_table = songplays_table.join(time_table, songplays_table.start_time == time_table.start_time, how="inner")\
        .select("songplay_id", songplays_table.start_time, "user_id", "level", "song_id", "artist_id",
                "session_id", "location", "user_agent", "year", "month")

    # write songplays table to parquet files partitioned by year and month
    songplays_table.drop_duplicates().write.parquet(os.path.join(output_data, "songplays/"), mode="overwrite", partitionBy=["year", "month"])
def main():
    spark = create_spark_session()
    input_data = "s3://udacity-spark-project/"
    output_data = "s3://udacity-spark-project/output/"

    process_song_data(spark, input_data, output_data)
    process_log_data(spark, input_data, output_data)


if __name__ == "__main__":
    main()
| 40.609756 | 140 | 0.646847 | 567 | 4,995 | 5.460317 | 0.255732 | 0.04522 | 0.030685 | 0.023256 | 0.235142 | 0.153747 | 0.117571 | 0.101421 | 0.069767 | 0.069767 | 0 | 0.002256 | 0.201401 | 4,995 | 122 | 141 | 40.942623 | 0.773878 | 0.110711 | 0 | 0 | 0 | 0 | 0.214987 | 0.036326 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.1 | 0 | 0.171429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f137db4cbba929f8e984ccaf98ff3fbc2e3814b3 | 460 | py | Python | portafolio/core/migrations/0031_career_important_title.py | jhonfmg7/portafolioDjango | 64db6a371a84dcad4f22dd7cdeb598c7c2db124e | [
"Apache-2.0"
] | null | null | null | portafolio/core/migrations/0031_career_important_title.py | jhonfmg7/portafolioDjango | 64db6a371a84dcad4f22dd7cdeb598c7c2db124e | [
"Apache-2.0"
] | null | null | null | portafolio/core/migrations/0031_career_important_title.py | jhonfmg7/portafolioDjango | 64db6a371a84dcad4f22dd7cdeb598c7c2db124e | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.0.5 on 2020-08-26 20:00
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('core', '0030_remove_project_important_title'),
    ]

    operations = [
        migrations.AddField(
            model_name='career',
            name='important_title',
            field=models.CharField(blank=True, max_length=200, null=True, verbose_name='Titulo Importante'),
        ),
    ]
| 24.210526 | 108 | 0.636957 | 52 | 460 | 5.480769 | 0.807692 | 0.098246 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063768 | 0.25 | 460 | 18 | 109 | 25.555556 | 0.762319 | 0.097826 | 0 | 0 | 1 | 0 | 0.186441 | 0.084746 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |