hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8e6de0dffa62535acd94ac48b411c974051bb2fa | 6,558 | py | Python | deepst/utils/evalMultiStepAhead.py | amirkhango/DeepST | 7ba669013bbafd5f413ef50d5d76094c3a68efd6 | [
"MIT"
] | 29 | 2020-05-16T01:24:03.000Z | 2022-03-28T02:17:16.000Z | deepst/utils/evalMultiStepAhead.py | wss981086665/DeepST | 7ba669013bbafd5f413ef50d5d76094c3a68efd6 | [
"MIT"
] | null | null | null | deepst/utils/evalMultiStepAhead.py | wss981086665/DeepST | 7ba669013bbafd5f413ef50d5d76094c3a68efd6 | [
"MIT"
] | 21 | 2020-04-21T02:44:27.000Z | 2022-03-13T14:18:34.000Z | from __future__ import print_function
import sys
from deepst_flow.models.gan import generator_model
from deepst_flow.datasets import load_stdata
from deepst_flow.preprocessing import MinMaxNormalization
from deepst_flow.preprocessing import remove_incomplete_days
# import h5py
import numpy as np
from keras.optimizers import Adam
import os
# from keras.callbacks import EarlyStopping
try:
    import cPickle as pickle  # Python 2
except ImportError:
    import pickle  # cPickle was removed in Python 3
import time
import pandas as pd
from copy import copy
from deepst_flow.config import Config
from deepst_flow.datasets.STMatrix import STMatrix
from deepst_flow.utils.eval import rmse
np.random.seed(1337) # for reproducibility
DATAPATH = Config().DATAPATH
print(DATAPATH)
def period_trend(period=1, trend=1):
model_name = sys.argv[1]
steps = 24
Period = 7
T = 48 # lenofday
len_seq = 3
nb_flow = 4
nb_days = 120
# divide data into two subsets:
# Train: ~ 2015.06.21 & Test: 2015.06.22 ~ 2015.06.28
len_train = T * (nb_days - 7)
len_test = T * 7
data, timestamps = load_stdata(os.path.join(DATAPATH, 'traffic_flow_bj15_nomissing.h5'))
print(timestamps)
# remove a certain day which has not 48 timestamps
data, timestamps = remove_incomplete_days(data, timestamps, T)
# minmax_scale
data_train = data[:len_train]
mmn = MinMaxNormalization()
mmn.fit(data_train)
data = mmn.transform(data)
st = STMatrix(data, timestamps, T)
# save TCN and MMS
fpkl = open('preprocessing.pkl', 'wb')
for obj in [mmn]: # [tcn, mmn]:
pickle.dump(obj, fpkl)
fpkl.close()
if period == 1 and trend == 1:
depends = [1, 2, 3, Period*T, Period*T+1, Period*T+2, Period*T+3]
len_close = 3
elif period == 1:
        depends = [1] + [Period * T * j for j in range(1, len_seq+1)]
len_close = 1
elif trend == 1:
depends = range(1, 1+len_seq)
len_close = 3
else:
depends = [1]
len_close = 1
# else:
# print("unknown args")
# sys.exit(-1)
generator = generator_model(nb_flow, len(depends), 32, 32)
adam = Adam()
generator.compile(loss='mean_absolute_error', optimizer=adam)
generator.load_weights(model_name)
# instance-based dataset --> sequences with format as (X, Y) where X is a sequence of images and Y is an image.
offset_frame = pd.DateOffset(minutes=24 * 60 // T)
Y_test = st.data[-(len_test+steps-1):]
Y_pd_timestamps = st.pd_timestamps[-(len_test+steps-1):]
X_test = []
for pd_timestamp in Y_pd_timestamps:
x = [st.get_matrix(pd_timestamp - j * offset_frame) for j in depends]
X_test.append(np.vstack(x))
X_test = np.asarray(X_test)
Y_true = mmn.inverse_transform(Y_test[-len_test:])
Y_hats = []
    for k in range(1, steps+1):
print("\n\n==%d-step rmse==" % k)
ts = time.time()
Y_hat = generator.predict(X_test)
Y_hats.append(copy(Y_hat))
print('Y_hat shape', Y_hat.shape, 'X_test shape:', X_test.shape)
# eval
Y_pred = mmn.inverse_transform(Y_hat[-len_test:])
rmse(Y_true, Y_pred)
X_test_hat = copy(X_test[1:])
        for j in range(1, min(k, len_close) + 1):
# Y^\hat _t replace
X_test_hat[:, nb_flow*(j-1):nb_flow*j] = Y_hats[-j][:-j]
X_test = copy(X_test_hat)
print("\nelapsed time (eval): ", time.time() - ts)
def period_trend_closeness(len_closeness=3, len_trend=3, TrendInterval=7, len_period=3, PeriodInterval=1):
print("start: period_trend_closeness")
model_name = sys.argv[1]
steps = 24
# Period = 7
T = 48 # lenofday
# len_seq = 3
nb_flow = 4
nb_days = 120
# divide data into two subsets:
# Train: ~ 2015.06.21 & Test: 2015.06.22 ~ 2015.06.28
len_train = T * (nb_days - 7)
len_test = T * 7
data, timestamps = load_stdata(os.path.join(DATAPATH, 'traffic_flow_bj15_nomissing.h5'))
print(timestamps)
# remove a certain day which has not 48 timestamps
data, timestamps = remove_incomplete_days(data, timestamps, T)
# minmax_scale
data_train = data[:len_train]
mmn = MinMaxNormalization()
mmn.fit(data_train)
data = mmn.transform(data)
st = STMatrix(data, timestamps, T)
# save TCN and MMS
fpkl = open('preprocessing.pkl', 'wb')
for obj in [mmn]: # [tcn, mmn]:
pickle.dump(obj, fpkl)
fpkl.close()
    depends = list(range(1, len_closeness+1)) + \
        [PeriodInterval * T * j for j in range(1, len_period+1)] + \
        [TrendInterval * T * j for j in range(1, len_trend+1)]
generator = generator_model(nb_flow, len(depends), 32, 32)
adam = Adam()
generator.compile(loss='mean_absolute_error', optimizer=adam)
generator.load_weights(model_name)
# instance-based dataset --> sequences with format as (X, Y) where X is a sequence of images and Y is an image.
offset_frame = pd.DateOffset(minutes=24 * 60 // T)
Y_test = st.data[-(len_test+steps-1):]
Y_pd_timestamps = st.pd_timestamps[-(len_test+steps-1):]
X_test = []
for pd_timestamp in Y_pd_timestamps:
x = [st.get_matrix(pd_timestamp - j * offset_frame) for j in depends]
X_test.append(np.vstack(x))
X_test = np.asarray(X_test)
Y_true = mmn.inverse_transform(Y_test[-len_test:])
Y_hats = []
    for k in range(1, steps+1):
print("\n\n==%d-step rmse==" % k)
ts = time.time()
Y_hat = generator.predict(X_test)
Y_hats.append(copy(Y_hat))
print('Y_hat shape', Y_hat.shape, 'X_test shape:', X_test.shape)
# eval
Y_pred = mmn.inverse_transform(Y_hat[-len_test:])
rmse(Y_true, Y_pred)
X_test_hat = copy(X_test[1:])
        for j in range(1, min(k, len_closeness) + 1):
# Y^\hat _t replace
X_test_hat[:, nb_flow*(j-1):nb_flow*j] = Y_hats[-j][:-j]
X_test = copy(X_test_hat)
print("\nelapsed time (eval): ", time.time() - ts)
if __name__ == '__main__':
if int(sys.argv[2]) == 0: # period & trend
period_trend(1, 1)
elif int(sys.argv[2]) == 1: # period
period_trend(1, 0)
elif int(sys.argv[2]) == 2: # trend
period_trend(0, 1)
elif int(sys.argv[2]) == 3:
period_trend(0, 0)
else:
period_trend_closeness()
# print("unknown args")
# sys.exit(-1)
| 33.28934 | 116 | 0.613602 | 967 | 6,558 | 3.963806 | 0.180972 | 0.031307 | 0.025567 | 0.015654 | 0.735716 | 0.714584 | 0.693973 | 0.693973 | 0.679885 | 0.679885 | 0 | 0.035677 | 0.264867 | 6,558 | 196 | 117 | 33.459184 | 0.759386 | 0.127478 | 0 | 0.676056 | 0 | 0 | 0.055899 | 0.014931 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014085 | false | 0 | 0.112676 | 0 | 0.126761 | 0.077465 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
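A hedged sketch of the core technique in `evalMultiStepAhead.py` above: the evaluation loop predicts one frame, then writes that prediction back into the closeness channels of the input so step k+1 is forecast from model output rather than ground truth. The `rollout` function, the dummy `predict`, and the array shapes below are illustrative assumptions, not the repo's actual API.

```python
import numpy as np

def rollout(predict, X, steps, nb_flow, len_close):
    """Iteratively feed k-step predictions back into the closeness slots of X.

    X has shape (n, nb_flow * n_depends, H, W); predict returns (n, nb_flow, H, W).
    Each step drops the oldest sample, so the batch shrinks by one per iteration.
    """
    preds = []
    for k in range(1, steps + 1):
        y = predict(X)
        preds.append(y.copy())
        X_next = X[1:].copy()  # drop the oldest sample, mirroring X_test[1:]
        for j in range(1, min(k, len_close) + 1):
            # overwrite closeness channel block j with the j-step-old prediction
            X_next[:, nb_flow * (j - 1):nb_flow * j] = preds[-j][:-j]
        X = X_next
    return preds
```

With a constant dummy model, `rollout(predict, np.zeros((5, 6, 4, 4)), 3, 2, 3)` yields predictions of batch sizes 5, 4, 3, matching how `X_test` shrinks per step in the file above.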
8e75a055e209b834657641cfacff2b7ed0b25ae6 | 58 | py | Python | ktpdriver/__init__.py | kevinmooreiii/moldriver | 65a0a9cef8971737076f81720a61aa5b607333d2 | [
"Apache-2.0"
] | null | null | null | ktpdriver/__init__.py | kevinmooreiii/moldriver | 65a0a9cef8971737076f81720a61aa5b607333d2 | [
"Apache-2.0"
] | null | null | null | ktpdriver/__init__.py | kevinmooreiii/moldriver | 65a0a9cef8971737076f81720a61aa5b607333d2 | [
"Apache-2.0"
] | null | null | null | from ktpdriver import driver
__all__ = [
'driver',
]
| 9.666667 | 28 | 0.655172 | 6 | 58 | 5.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.241379 | 58 | 5 | 29 | 11.6 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8e8b9d129f916946481bd39278edb777b5b2cfcd | 338 | py | Python | SoloLearnPython3/Part_2/sketch_4b.py | mahmoudmheisen91/Python_EDU | 3ca08f65bb219335502159a6d13617b9a73c3b7e | [
"MIT"
] | null | null | null | SoloLearnPython3/Part_2/sketch_4b.py | mahmoudmheisen91/Python_EDU | 3ca08f65bb219335502159a6d13617b9a73c3b7e | [
"MIT"
] | null | null | null | SoloLearnPython3/Part_2/sketch_4b.py | mahmoudmheisen91/Python_EDU | 3ca08f65bb219335502159a6d13617b9a73c3b7e | [
"MIT"
] | null | null | null | # Files:
import os
f = open(os.getcwd()+"\\text_file_1.txt", "r+")  # open before try: so f is always bound when finally runs
try:
#print(f.read())
#print(f.read(16))
#print(f.readlines())
#for line in f:
# print(line)
a = f.write("testing...")
print(a)
#print(f.read())
finally:
f.close()
# With - auto close:
with open(os.getcwd()+"\\text_file_1.txt", "r+") as f:
print(f.read())
| 15.363636 | 54 | 0.591716 | 58 | 338 | 3.37931 | 0.465517 | 0.153061 | 0.204082 | 0.163265 | 0.255102 | 0.255102 | 0.255102 | 0.255102 | 0 | 0 | 0 | 0.013937 | 0.150888 | 338 | 21 | 55 | 16.095238 | 0.66899 | 0.349112 | 0 | 0 | 0 | 0 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.222222 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8e8d184961edd6c1dfb170d828dad00771f70762 | 1,369 | py | Python | jina/executors/indexers/cache.py | sdsd0101/jina | 1a835d9015c627a2cbcdc58ee3d127962ada1bc9 | [
"Apache-2.0"
] | null | null | null | jina/executors/indexers/cache.py | sdsd0101/jina | 1a835d9015c627a2cbcdc58ee3d127962ada1bc9 | [
"Apache-2.0"
] | null | null | null | jina/executors/indexers/cache.py | sdsd0101/jina | 1a835d9015c627a2cbcdc58ee3d127962ada1bc9 | [
"Apache-2.0"
] | null | null | null | from typing import Optional
import numpy as np
from . import BaseKVIndexer
from ...proto import uid
class DocIDCache(BaseKVIndexer):
"""Store doc ids in a int64 set and persistent it to a numpy array """
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.handler_mutex = False #: for Cache we need to release the handler mutex to allow RW at the same time
def add(self, doc_id: str, *args, **kwargs):
d_id = uid.id2hash(doc_id)
self.query_handler.add(d_id)
self._size += 1
self.write_handler.write(np.int64(d_id).tobytes())
def query(self, doc_id: str, *args, **kwargs) -> Optional[bool]:
if self.query_handler:
d_id = uid.id2hash(doc_id)
return (d_id in self.query_handler) or None
@property
def is_exist(self) -> bool:
""" Always return true, delegate to :meth:`get_query_handler`
:return: True
"""
return True
def get_query_handler(self):
if super().is_exist:
with open(self.index_abspath, 'rb') as fp:
return set(np.frombuffer(fp.read(), dtype=np.int64))
else:
return set()
def get_add_handler(self):
return open(self.index_abspath, 'ab')
def get_create_handler(self):
return open(self.index_abspath, 'wb')
| 29.12766 | 114 | 0.620161 | 192 | 1,369 | 4.234375 | 0.416667 | 0.01845 | 0.059041 | 0.073801 | 0.189422 | 0.189422 | 0.091021 | 0 | 0 | 0 | 0 | 0.008982 | 0.268079 | 1,369 | 46 | 115 | 29.76087 | 0.802395 | 0.157049 | 0 | 0.066667 | 0 | 0 | 0.005333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.233333 | false | 0 | 0.133333 | 0.066667 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8eb2b601fdc2c3d0c7c565e6a7177f4f981f1ffd | 1,057 | py | Python | tests/cases/doc/test_parametrize_alt.py | broglep-work/python-pytest-cases | 4976c0073a2fad5fbe5de34a5d1199efda0b7da9 | [
"BSD-3-Clause"
] | 213 | 2018-07-05T21:21:21.000Z | 2022-03-22T04:54:53.000Z | tests/cases/doc/test_parametrize_alt.py | broglep-work/python-pytest-cases | 4976c0073a2fad5fbe5de34a5d1199efda0b7da9 | [
"BSD-3-Clause"
] | 259 | 2018-06-22T16:46:33.000Z | 2022-03-23T19:39:15.000Z | tests/cases/doc/test_parametrize_alt.py | broglep-work/python-pytest-cases | 4976c0073a2fad5fbe5de34a5d1199efda0b7da9 | [
"BSD-3-Clause"
] | 27 | 2019-03-26T12:46:49.000Z | 2022-02-21T16:56:23.000Z | # Authors: Sylvain MARIE <sylvain.marie@se.com>
# + All contributors to <https://github.com/smarie/python-pytest-cases>
#
# License: 3-clause BSD, <https://github.com/smarie/python-pytest-cases/blob/master/LICENSE>
import pytest
from pytest_cases import parametrize_with_cases
def case_sum_one_plus_two():
a = 1
b = 2
c = 3
return a, b, c
@parametrize_with_cases(argnames=["a", "b", "c"], cases=".")
def test_argnames_as_list(a, b, c):
assert a + b == c
@parametrize_with_cases(argnames=("a", "b", "c"), cases=".")
def test_argnames_as_tuple(a, b, c):
assert a + b == c
def test_argnames_from_invalid_type():
with pytest.raises(
TypeError, match="^argnames should be a string, list or a tuple$"
):
parametrize_with_cases(argnames=42, cases=".")(lambda _: None)
def test_argnames_element_from_invalid_type():
with pytest.raises(
TypeError, match="^all argnames should be strings$"
):
parametrize_with_cases(argnames=["a", 2, "c"], cases=".")(lambda _: None)
| 27.102564 | 92 | 0.661306 | 151 | 1,057 | 4.417219 | 0.370861 | 0.02099 | 0.031484 | 0.167916 | 0.488756 | 0.445277 | 0.445277 | 0.302849 | 0.167916 | 0.167916 | 0 | 0.008187 | 0.191107 | 1,057 | 38 | 93 | 27.815789 | 0.77193 | 0.203406 | 0 | 0.26087 | 0 | 0 | 0.107527 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 1 | 0.217391 | false | 0 | 0.086957 | 0 | 0.347826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8eb5408797a9adfef2dd2e703600111076f8e969 | 500 | py | Python | chatServer/chatServerConstants.py | odeke-em/restAssured | 72d5c3a3fe9fd090f067a4332f2a1df15370b1bb | [
"MIT"
] | 1 | 2016-11-20T17:54:02.000Z | 2016-11-20T17:54:02.000Z | chatServer/chatServerConstants.py | odeke-em/restAssured | 72d5c3a3fe9fd090f067a4332f2a1df15370b1bb | [
"MIT"
] | null | null | null | chatServer/chatServerConstants.py | odeke-em/restAssured | 72d5c3a3fe9fd090f067a4332f2a1df15370b1bb | [
"MIT"
] | null | null | null | # Author: Konrad Lindenbach <klindenb@ualberta.ca>,
# Emmanuel Odeke <odeke@ualberta.ca>
# Copyright (c) 2014
# Table name strings
MESSAGE_TABLE_KEY = "Message"
RECEIPIENT_TABLE_KEY = "Receipient"
MESSAGE_MARKER_TABLE_KEY = "MessageMarker"
MAX_NAME_LENGTH = 60 # Arbitrary value
MAX_BODY_LENGTH = 200 # Arbitrary value
MAX_ALIAS_LENGTH = 60 # Arbitrary value
MAX_TOKEN_LENGTH = 512 # Arbitrary value
MAX_SUBJECT_LENGTH = 80 # Arbitrary value
MAX_PROFILE_URI_LENGTH = 400 # Arbitrary value
| 29.411765 | 51 | 0.78 | 67 | 500 | 5.522388 | 0.507463 | 0.227027 | 0.22973 | 0.118919 | 0.135135 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044496 | 0.146 | 500 | 16 | 52 | 31.25 | 0.822014 | 0.452 | 0 | 0 | 0 | 0 | 0.114068 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8ec6a49c43f0dfa005b4b57585655882de2d95f4 | 2,735 | py | Python | app/views/users.py | MatheusMullerGit/api-rest-flask-jwt-authentication | 85ff7e8ed9f5472d3dad097c057b10ac88607d89 | [
"MIT"
] | null | null | null | app/views/users.py | MatheusMullerGit/api-rest-flask-jwt-authentication | 85ff7e8ed9f5472d3dad097c057b10ac88607d89 | [
"MIT"
] | null | null | null | app/views/users.py | MatheusMullerGit/api-rest-flask-jwt-authentication | 85ff7e8ed9f5472d3dad097c057b10ac88607d89 | [
"MIT"
] | 1 | 2021-02-10T23:42:05.000Z | 2021-02-10T23:42:05.000Z | from werkzeug.security import generate_password_hash
from app import db
from flask import request, jsonify
from ..models.users import Users, user_schema, users_schema
def get_users():
name = request.args.get('name')
if name:
users = Users.query.filter(Users.name.like(f'%{name}%')).all()
else:
users = Users.query.all()
if users:
result = users_schema.dump(users)
return jsonify({'message': 'successfully fetched', 'data': result})
return jsonify({'message': 'nothing found', 'data': {}})
def get_user(id):
user = Users.query.get(id)
if user:
result = user_schema.dump(user)
return jsonify({'message': 'successfully fetched', 'data': result}), 201
return jsonify({'message': "user don't exist", 'data': {}}), 404
def post_user():
username = request.json['username']
password = request.json['password']
name = request.json['name']
email = request.json['email']
pass_hash = generate_password_hash(password)
user = Users(username, pass_hash, name, email)
try:
db.session.add(user)
db.session.commit()
result = user_schema.dump(user)
return jsonify({'message': 'successfully registered', 'data': result}), 201
    except Exception:
        db.session.rollback()  # keep the session usable after a failed commit
        return jsonify({'message': 'unable to create', 'data': {}}), 500
def update_user(id):
username = request.json['username']
password = request.json['password']
name = request.json['name']
email = request.json['email']
user = Users.query.get(id)
if not user:
return jsonify({'message': "user don't exist", 'data': {}}), 404
pass_hash = generate_password_hash(password)
if user:
try:
user.username = username
user.password = pass_hash
user.name = name
user.email = email
db.session.commit()
result = user_schema.dump(user)
return jsonify({'message': 'successfully updated', 'data': result}), 201
        except Exception:
            db.session.rollback()
            return jsonify({'message': 'unable to update', 'data': {}}), 500
def delete_user(id):
user = Users.query.get(id)
if not user:
return jsonify({'message': "user don't exist", 'data': {}}), 404
if user:
try:
db.session.delete(user)
db.session.commit()
result = user_schema.dump(user)
return jsonify({'message': 'successfully deleted', 'data': result}), 200
        except Exception:
            db.session.rollback()
            return jsonify({'message': 'unable to delete', 'data': {}}), 500
def user_by_username(username):
try:
return Users.query.filter(Users.username == username).one()
    except Exception:
return None | 30.388889 | 84 | 0.597806 | 319 | 2,735 | 5.050157 | 0.206897 | 0.096834 | 0.148976 | 0.089385 | 0.583489 | 0.583489 | 0.517691 | 0.476723 | 0.456238 | 0.338299 | 0 | 0.014837 | 0.260695 | 2,735 | 90 | 85 | 30.388889 | 0.781899 | 0 | 0 | 0.507042 | 1 | 0 | 0.148392 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084507 | false | 0.098592 | 0.056338 | 0 | 0.338028 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
8ecdf82b88826a1e74a5fae777d74d80fe90cd3b | 225 | py | Python | CFG.py | abduallahmohamed/Social-Implicit | 229dc11aa767d12de935cb974b20698f4902b733 | [
"MIT"
] | 3 | 2022-03-07T00:37:37.000Z | 2022-03-22T18:28:24.000Z | CFG.py | abduallahmohamed/Social-Implicit | 229dc11aa767d12de935cb974b20698f4902b733 | [
"MIT"
] | null | null | null | CFG.py | abduallahmohamed/Social-Implicit | 229dc11aa767d12de935cb974b20698f4902b733 | [
"MIT"
] | 1 | 2022-03-15T08:48:28.000Z | 2022-03-15T08:48:28.000Z | CFG = {
"spatial_input": 2,
"spatial_output": 2,
"temporal_input": 8,
"temporal_output": 12,
"bins": [0, 0.01, 0.1, 1.2],
"noise_weight": [0.05, 1, 4, 8],
"noise_weight_eth": [0.175, 1.5, 4, 8],
}
| 22.5 | 43 | 0.524444 | 37 | 225 | 3 | 0.513514 | 0.198198 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.158824 | 0.244444 | 225 | 9 | 44 | 25 | 0.494118 | 0 | 0 | 0 | 0 | 0 | 0.391111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8ed1a72472fa3b7c129f65a19d48f668254a66a4 | 3,530 | py | Python | discourse_form/models.py | cinai/teacher_discourse_form | 7d6863dbf8d628011c6b0653ae7ca4e218c2d3bc | [
"MIT"
] | null | null | null | discourse_form/models.py | cinai/teacher_discourse_form | 7d6863dbf8d628011c6b0653ae7ca4e218c2d3bc | [
"MIT"
] | null | null | null | discourse_form/models.py | cinai/teacher_discourse_form | 7d6863dbf8d628011c6b0653ae7ca4e218c2d3bc | [
"MIT"
] | null | null | null | from django.db import models
from sessions_coding.models import Classroom_session,Subject,Axis,Skill,Learning_goal,Copus_code
DIALOGIC_CHOICES = (
('Autoritativo', 'Autoritativo'),
('Dialogico', 'Dialógico'),
('NA', 'NA'),
)
class Discourse_form(models.Model):
session = models.ForeignKey(Classroom_session, on_delete=models.CASCADE,default=1)
init_line = models.IntegerField(default=0)
end_line = models.IntegerField(default=0)
artificial_name = models.CharField(max_length=100)
text = models.TextField()
def __str__(self):
return str(self.pk)+'-'+self.artificial_name
class Form_answer(models.Model):
form = models.ForeignKey(Discourse_form, on_delete=models.CASCADE)
ans_date = models.DateTimeField('date answered',auto_now_add=True, blank=True)
user = models.EmailField()
dialogic = models.CharField(blank=True,default='NA',max_length=40)
done = models.BooleanField(blank=True,default=False)
def __str__(self):
label_id = str(self.form.pk)+'-'+self.form.artificial_name
return label_id+'-'+str(self.ans_date)
class Answered_subject(models.Model):
subject = models.ForeignKey(Subject,on_delete=models.CASCADE)
ans_form = models.ForeignKey(Form_answer, on_delete=models.CASCADE)
def __str__(self):
return str(self.subject)
class Answered_axis(models.Model):
axis = models.ForeignKey(Axis,on_delete=models.CASCADE)
ans_form = models.ForeignKey(Form_answer, on_delete=models.CASCADE)
def __str__(self):
return str(self.axis)
class Answered_skill(models.Model):
skill = models.ForeignKey(Skill,on_delete=models.CASCADE)
ans_form = models.ForeignKey(Form_answer, on_delete=models.CASCADE)
def __str__(self):
return str(self.skill)
class Answered_learning_goal(models.Model):
goal = models.ForeignKey(Learning_goal,on_delete=models.CASCADE)
ans_form = models.ForeignKey(Form_answer, on_delete=models.CASCADE)
def __str__(self):
return str(self.goal)
class Answered_copus_code(models.Model):
copus_code = models.ForeignKey(Copus_code,on_delete=models.CASCADE)
ans_form = models.ForeignKey(Form_answer, on_delete=models.CASCADE)
def __str__(self):
return str(self.copus_code)
class Answered_axis_phrases(models.Model):
axis = models.ForeignKey(Answered_axis,on_delete=models.CASCADE)
ans_form = models.ForeignKey(Form_answer, on_delete=models.CASCADE)
phrases = models.TextField()
def __str__(self):
return str(self.axis)
class Answered_skill_phrases(models.Model):
skill = models.ForeignKey(Answered_skill,on_delete=models.CASCADE)
ans_form = models.ForeignKey(Form_answer, on_delete=models.CASCADE)
code = models.IntegerField(default=0,blank=True)
phrases = models.TextField()
def __str__(self):
return str(self.skill)
class Answered_dialogic_phrases(models.Model):
dialogic = models.CharField(max_length=20,choices=DIALOGIC_CHOICES)
ans_form = models.ForeignKey(Form_answer, on_delete=models.CASCADE)
code = models.IntegerField(default=0,blank=True)
phrases = models.TextField()
def __str__(self):
return str(self.dialogic)
class Answered_copus_phrases(models.Model):
copus = models.ForeignKey(Answered_copus_code,on_delete=models.CASCADE)
ans_form = models.ForeignKey(Form_answer, on_delete=models.CASCADE)
code = models.IntegerField(default=0,blank=True)
phrases = models.TextField()
def __str__(self):
return str(self.copus)
| 39.662921 | 96 | 0.743343 | 459 | 3,530 | 5.437909 | 0.152505 | 0.064503 | 0.106571 | 0.159856 | 0.576923 | 0.492788 | 0.492788 | 0.488782 | 0.473558 | 0.450321 | 0 | 0.004303 | 0.144193 | 3,530 | 88 | 97 | 40.113636 | 0.821913 | 0 | 0 | 0.407895 | 0 | 0 | 0.01813 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.144737 | false | 0 | 0.026316 | 0.131579 | 0.921053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
8ed811c6a2d536bedc2c7110b1dd87ef9dfa3f9e | 208 | py | Python | PhysicsTools/RecoAlgos/python/allSuperClusterCandidates_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | PhysicsTools/RecoAlgos/python/allSuperClusterCandidates_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | PhysicsTools/RecoAlgos/python/allSuperClusterCandidates_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
allSuperClusterCandidates = cms.EDProducer("ConcreteEcalCandidateProducer",
src = cms.InputTag("hybridSuperClusters"),
particleType = cms.string('gamma')
)
| 23.111111 | 75 | 0.774038 | 18 | 208 | 8.944444 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120192 | 208 | 8 | 76 | 26 | 0.879781 | 0 | 0 | 0 | 0 | 0 | 0.257282 | 0.140777 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8ee3abb37467d65da279bf5a736b8973df927cbd | 1,550 | py | Python | snmp/datadog_checks/snmp/models.py | 01100010011001010110010101110000/integrations-core | b6216f96c9faa67e9e1e236caa8ddac597f0ef13 | [
"BSD-3-Clause"
] | null | null | null | snmp/datadog_checks/snmp/models.py | 01100010011001010110010101110000/integrations-core | b6216f96c9faa67e9e1e236caa8ddac597f0ef13 | [
"BSD-3-Clause"
] | null | null | null | snmp/datadog_checks/snmp/models.py | 01100010011001010110010101110000/integrations-core | b6216f96c9faa67e9e1e236caa8ddac597f0ef13 | [
"BSD-3-Clause"
] | null | null | null | # (C) Datadog, Inc. 2020-present
# All rights reserved
# Licensed under Simplified BSD License (see LICENSE)
"""
Define our own models and interfaces for dealing with SNMP data.
"""
from typing import Any, Sequence, Tuple, Union
from .exceptions import CouldNotDecodeOID
from .pysnmp_types import ObjectIdentity, ObjectName, ObjectType
from .utils import format_as_oid_string, parse_as_oid_tuple
class OID(object):
"""
An SNMP object identifier.
Acts as a facade for various types used by PySNMP to represent OIDs.
"""
def __init__(self, value):
# type: (Union[Sequence[int], str, ObjectName, ObjectIdentity, ObjectType]) -> None
try:
parts = parse_as_oid_tuple(value)
except CouldNotDecodeOID:
raise # Explicitly re-raise this exception.
# Let's make extra sure we didn't mess up.
if not isinstance(parts, tuple):
raise RuntimeError(
'Expected result {!r} of parsing value {!r} to be a tuple, but got {}'.format(parts, value, type(parts))
) # pragma: no cover
self._parts = parts
def as_tuple(self):
# type: () -> Tuple[int, ...]
return self._parts
def __eq__(self, other):
# type: (Any) -> bool
return isinstance(other, OID) and self.as_tuple() == other.as_tuple()
def __str__(self):
# type: () -> str
return format_as_oid_string(self.as_tuple())
def __repr__(self):
# type: () -> str
return 'OID({!r})'.format(str(self))
| 29.807692 | 120 | 0.630968 | 196 | 1,550 | 4.811224 | 0.530612 | 0.021209 | 0.02333 | 0.036055 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003493 | 0.26129 | 1,550 | 51 | 121 | 30.392157 | 0.820087 | 0.336129 | 0 | 0 | 0 | 0 | 0.077621 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.217391 | false | 0 | 0.173913 | 0.173913 | 0.608696 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
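The `OID` facade above normalizes every input form to a tuple once at construction, so equality and string formatting never need to re-inspect PySNMP types. A minimal self-contained analogue, with a stand-in parser since `parse_as_oid_tuple` / `format_as_oid_string` live in the check's utils module (names and behavior here are assumptions):

```python
def parse_as_oid_tuple(value):
    """Stand-in parser: accept '1.3.6.1'-style strings or sequences of ints."""
    if isinstance(value, str):
        return tuple(int(part) for part in value.strip('.').split('.'))
    return tuple(int(part) for part in value)

class OIDSketch:
    """Minimal analogue of the OID facade: normalize once, compare by tuple."""
    def __init__(self, value):
        self._parts = parse_as_oid_tuple(value)

    def as_tuple(self):
        return self._parts

    def __eq__(self, other):
        return isinstance(other, OIDSketch) and self.as_tuple() == other.as_tuple()

    def __str__(self):
        return '.'.join(str(part) for part in self._parts)
```

Usage: `OIDSketch('1.3.6.1.2.1') == OIDSketch((1, 3, 6, 1, 2, 1))` holds, which is exactly the convenience the real class buys over comparing raw PySNMP objects.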
d9089775d35deb81b413937db4c14cb666c7998d | 1,044 | py | Python | server/inquest/users/views.py | lucasOlivio/inquest | da7c3030f175f31861cf3db385c611c0c05b9406 | [
"MIT"
] | null | null | null | server/inquest/users/views.py | lucasOlivio/inquest | da7c3030f175f31861cf3db385c611c0c05b9406 | [
"MIT"
] | null | null | null | server/inquest/users/views.py | lucasOlivio/inquest | da7c3030f175f31861cf3db385c611c0c05b9406 | [
"MIT"
] | null | null | null | from django.conf import settings
from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_page
from rest_framework import mixins, viewsets
from rest_framework.permissions import AllowAny
from inquest.users.models import User
from inquest.users.permissions import IsUserOrReadOnly
from inquest.users.serializers import CreateUserSerializer, UserSerializer
class UserViewSet(
    mixins.RetrieveModelMixin, mixins.UpdateModelMixin, viewsets.GenericViewSet
):
    """ Updates and retrieves user accounts. """

    queryset = User.objects.all()
    serializer_class = UserSerializer
    permission_classes = (IsUserOrReadOnly,)

    @method_decorator(cache_page(settings.CACHE_TTL))
    def dispatch(self, *args, **kwargs):
        return super().dispatch(*args, **kwargs)


class UserCreateViewSet(mixins.CreateModelMixin, viewsets.GenericViewSet):
    """ Creates user accounts. """

    queryset = User.objects.all()
    serializer_class = CreateUserSerializer
    permission_classes = (AllowAny,)
# ---- file: sketch/models/GKRect.py | repo: shrredd/sketch | license: MIT, Unlicense ----
class GKRect(object):
"""
GKRect is a lightweight rectangle object that is used in many places in Sketch.
It has many of the same methods as MSRect but they cannot always be used
interchangeably
"""
def __init__(self, x, y, width, height):
self._x = x
self._y = y
self._width = width
self._height = height
@property
def x(self):
return self._x
@property
def y(self):
return self._y
@property
def width(self):
return self._width
@property
def height(self):
return self._height
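# The `@property` accessors make the four fields effectively read-only. A minimal
# self-contained sketch of the same pattern (hypothetical `Rect`, not Sketch's class):

# ```python
class Rect:
    def __init__(self, width, height):
        self._width = width
        self._height = height

    @property
    def width(self):
        return self._width

    @property
    def height(self):
        return self._height


r = Rect(10, 20)
assert (r.width, r.height) == (10, 20)
try:
    r.width = 5  # no setter defined, so assignment is rejected
except AttributeError:
    pass
else:
    raise AssertionError("expected AttributeError")
# ```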
# ---- file: tests/context.py | repo: gazzar/scientific-project-structure | license: BSD-3-Clause ----
import os
import sys
PATH_HERE = os.path.abspath(os.path.dirname(__file__))
sys.path = [
    os.path.join(PATH_HERE, '..'),
    os.path.join(PATH_HERE, '..', '..'),  # include path to version.py
] + sys.path
# ---- file: src/opencmiss/utils/iron/bindings/python/opencmiss/iron/_utils.in.py | repo: tsalemink/opencmiss.utils | license: Apache-2.0 ----
"""Utility routines and classes used by OpenCMISS
"""
from . import _@IRON_PYTHON_MODULE@
class CMFEError(Exception):
    """Base class for errors in the OpenCMISS library"""
    pass


class CMFEType(object):
    """Base class for all OpenCMISS types"""
    pass


class Enum(object):
    pass


def wrap_cmiss_routine(routine, args=None):
    """Wrap a call to the OpenCMISS SWIG module

    Call the routine and check the return value, and raise an
    exception if it is non-zero.
    Return any other remaining return values.
    """
    if args is None:
        r = routine()
    else:
        # Replace wrapped cmiss types with the underlying type
        new_args = []
        for arg in args:
            if hasattr(arg, 'cmiss_type'):
                new_args.append(arg.cmiss_type)
            else:
                try:
                    # Try to convert a list of CMISS types first.
                    # Check length to avoid empty strings being converted
                    # to an empty list
                    if len(arg) > 0:
                        new_args.append([a.cmiss_type for a in arg])
                    else:
                        new_args.append(arg)
                except (TypeError, AttributeError):
                    new_args.append(arg)
        r = routine(*new_args)
    # We will either have a list of multiple return values, or
    # a single status code as a return. Don't have to worry about
    # ever having a single return value as a list as there will always
    # be at least a return status.
    if isinstance(r, list):
        status = r[0]
        if len(r) == 1:
            return_val = None
        elif len(r) == 2:
            return_val = r[1]
        else:
            return_val = r[1:]
    else:
        status = r
        return_val = None
    if status != _@IRON_PYTHON_MODULE@.cvar.CMFE_NO_ERROR:
        if status == _@IRON_PYTHON_MODULE@.cvar.CMFE_POINTER_IS_NULL:
            raise CMFEError("CMFE type pointer is null")
        elif status == _@IRON_PYTHON_MODULE@.cvar.CMFE_POINTER_NOT_NULL:
            raise CMFEError("CMFE type pointer is not null")
        elif status == _@IRON_PYTHON_MODULE@.cvar.CMFE_COULD_NOT_ALLOCATE_POINTER:
            raise CMFEError("Could not allocate pointer")
        elif status == _@IRON_PYTHON_MODULE@.cvar.CMFE_ERROR_CONVERTING_POINTER:
            raise CMFEError("Error converting pointer")
        else:
            raise CMFEError(_@IRON_PYTHON_MODULE@.cmfe_ExtractErrorMessage()[1])
    return return_val
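# The "status code first, payload after" unwrapping can be exercised without the
# SWIG module at all. A simplified stand-in (hypothetical `wrap_routine`, not the
# template's real function, with a plain integer status instead of the CMFE cvars):

# ```python
NO_ERROR = 0


class CMFEError(Exception):
    pass


def wrap_routine(routine, args=None):
    # Simplified version of wrap_cmiss_routine: the first element of a list
    # return is a status code; anything after it is passed back to the caller.
    r = routine() if args is None else routine(*args)
    if isinstance(r, list):
        status, rest = r[0], r[1:]
        return_val = None if not rest else (rest[0] if len(rest) == 1 else rest)
    else:
        status, return_val = r, None
    if status != NO_ERROR:
        raise CMFEError("routine failed with status %d" % status)
    return return_val


assert wrap_routine(lambda: [0, 42]) == 42
assert wrap_routine(lambda: 0) is None
assert wrap_routine(lambda x, y: [0, x, y], args=(1, 2)) == [1, 2]
try:
    wrap_routine(lambda: 7)
except CMFEError:
    pass
# ```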
# ---- file: app/vendor/serializers.py | repo: ToddPeterson/food-truck-api | license: MIT ----
from rest_framework import serializers
from .models import Vendor
class VendorSerializer(serializers.ModelSerializer):
    """Serializer for Vendor objects"""

    class Meta:
        model = Vendor
        fields = ('id', 'name')
        read_only_fields = ('id',)
# ---- file: db.py | repo: joeyw526/Personal | license: MIT ----
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation, sessionmaker
from sqlalchemy import create_engine
Base = declarative_base()
# Replace with config setting for database
import config
database_engine = create_engine(config.database_config)
Session = sessionmaker(bind=database_engine)
# ---- file: ephim/cli.py | repo: kirov/ephim | license: MIT ----
import os
from .library import Library
def main():
    location = Library.find_library(os.getcwd())
    library = Library(location)
    library.organize_all()


if __name__ == '__main__':
    main()
# ---- file: BI-IOS/semester-project/webapp/beecon/polls/models.py | repo: josefdolezal/fit-cvut | license: MIT ----
import datetime
from django.db import models
from django.utils import timezone
class Question(models.Model):
    question_text = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')

    def __str__(self):
        return self.question_text

    def was_published_recently(self):
        return self.pub_date >= timezone.now() - datetime.timedelta(days=1)

    was_published_recently.admin_order_field = 'pub_date'
    was_published_recently.boolean = True
    was_published_recently.short_description = 'Published recently?'


class Choice(models.Model):
    question = models.ForeignKey(Question, on_delete=models.CASCADE)
    choice_text = models.CharField(max_length=200)
    votes = models.IntegerField(default=0)

    def __str__(self):
        return self.choice_text
# ---- file: CallApi/serializers.py | repo: michaelgabriel01/Phone_API | license: MIT ----
from django.db import models
# from django import forms
from django.contrib.auth.models import User, Group
from rest_framework import serializers
from .models import Call, Album
class UserSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = User
        fields = ('url', 'username', 'password', 'first_name', 'last_name', 'email', 'groups')


class GroupSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = Group
        fields = ('url', 'name')


class EmployeeSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = User
        fields = ('url', 'username', 'first_name', 'last_name', 'email', 'groups')


class CallSerializer(serializers.ModelSerializer):
    class Meta:
        model = User


class BillSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = Group
        fields = ('url', 'name')


class PriceSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = Group
        fields = ('url', 'name')


class AlbumSerializer(serializers.ModelSerializer):
    tracks = serializers.StringRelatedField(many=True)

    class Meta:
        model = Album
        fields = ('album_name', 'artist', 'tracks')
        # fields = ('url', 'username')


class PhoneSerializer(serializers.ModelSerializer):
    class Meta:
        model = Call
        fields = ('url', 'record_type', 'call_identifier', 'origin_phone_number',
                  'destination_phone_number', 'record_timestamp', 'duration')
# ---- file: quflow/utils.py | repo: kmodin/quflow | license: MIT ----
import numpy as np
import pyssht
from numba import njit
from .laplacian.sparse import solve_heat
@njit
def ind2elm(ind):
    """
    Convert single index in omega vector to (el, m) indices.

    Parameters
    ----------
    ind: int

    Returns
    -------
    (el, m): tuple of indices
    """
    el = int(np.floor(np.sqrt(ind)))
    m = ind - el * (el + 1)
    return el, m


@njit
def elm2ind(el, m):
    """
    Convert (el, m) spherical harmonics indices to single index
    in `omegacomplex` array.

    Parameters
    ----------
    el: int or ndarray of ints
    m: int or ndarray of ints

    Returns
    -------
    ind: int
    """
    return el * el + el + m
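# The two index maps are mutually inverse for every valid pair with -el <= m <= el.
# A self-contained round-trip check (plain-Python copies of the functions above,
# without the `@njit` decoration):

# ```python
import math


def elm2ind(el, m):
    return el * el + el + m


def ind2elm(ind):
    el = int(math.floor(math.sqrt(ind)))
    m = ind - el * (el + 1)
    return el, m


for el in range(6):
    for m in range(-el, el + 1):
        assert ind2elm(elm2ind(el, m)) == (el, m)
# ```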
def cart2sph(x, y, z):
    """
    Projection of Cartesian coordinates to spherical coordinates (theta, phi).

    Parameters
    ----------
    x: ndarray
    y: ndarray
    z: ndarray

    Returns
    -------
    (theta, phi): tuple of ndarray
    """
    phi = np.arctan2(y, x)
    theta = np.arctan2(np.sqrt(x * x + y * y), z)
    phi[phi < 0] += 2 * np.pi
    return theta, phi


def sph2cart(theta, phi):
    """
    Spherical coordinates to Cartesian coordinates (assuming radius 1).

    Parameters
    ----------
    theta: ndarray
    phi: ndarray

    Returns
    -------
    (x, y, z): tuple of ndarray
    """
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    return x, y, z
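# The two projections are inverse to each other for theta in (0, pi) and phi in
# [0, 2*pi). A self-contained round-trip check (copies of the functions above;
# `np.where` here is a functional variant of the in-place `phi[phi < 0] += 2*np.pi`):

# ```python
import numpy as np


def sph2cart(theta, phi):
    return np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)


def cart2sph(x, y, z):
    phi = np.arctan2(y, x)
    theta = np.arctan2(np.sqrt(x * x + y * y), z)
    phi = np.where(phi < 0, phi + 2 * np.pi, phi)
    return theta, phi


theta = np.array([0.3, 1.2, 2.0])
phi = np.array([0.5, 3.0, 5.5])
t2, p2 = cart2sph(*sph2cart(theta, phi))
assert np.allclose(t2, theta) and np.allclose(p2, phi)
# ```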
def sphgrid(N):
    """
    Return a mesh grid for spherical coordinates.

    Parameters
    ----------
    N: int
        Bandwidth. In the spherical harmonics expansion we have that
        the wave-number l fulfills 0 <= l <= N-1.

    Returns
    -------
    (theta, phi): tuple of ndarray
        Matrices of shape (N, 2*N-1) such that row-indices correspond to
        theta variations and column-indices correspond to phi variations.
        (Notice that phi is periodic but theta is not.)
    """
    theta, phi = pyssht.sample_positions(N, Grid=True)
    return theta, phi


def so3generators(N):
    """
    Return a basis S1, S2, S3 for the representation of so(3) in u(N).

    Parameters
    ----------
    N: int

    Returns
    -------
    S1, S2, S3: tuple of ndarray
    """
    s = (N - 1) / 2
    S3 = 1j * np.diag(np.arange(-s, s + 1))
    S1 = 1j * np.diag(np.sqrt(s * (s + 1) - np.arange(-s, s) * np.arange(-s + 1, s + 1)), 1) / 2 + \
        1j * np.diag(np.sqrt(s * (s + 1) - np.arange(-s, s) * np.arange(-s + 1, s + 1)), -1) / 2
    S2 = np.diag(np.sqrt(s * (s + 1) - np.arange(-s, s) * np.arange(-s + 1, s + 1)), 1) / 2 - \
        np.diag(np.sqrt(s * (s + 1) - np.arange(-s, s) * np.arange(-s + 1, s + 1)), -1) / 2
    return S1, S2, S3


def rotate(xi, W):
    """
    Apply axis-angle (Rodrigues) rotation to vorticity matrix.

    Parameters
    ----------
    xi: ndarray(shape=(3,), dtype=float)
    W: ndarray(shape=(N,N), dtype=complex)

    Returns
    -------
    W_rotated: ndarray(shape=(N,N), dtype=complex)
    """
    from scipy.linalg import expm
    N = W.shape[0]
    S1, S2, S3 = so3generators(N)
    R = expm(xi[0] * S1 + xi[1] * S2 + xi[2] * S3)
    return R @ W @ R.T.conj()
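# A quick numerical sanity check of the construction above, using only NumPy:
# the generators are skew-Hermitian, so the exponential used by `rotate` is a
# unitary matrix. `unitary_exp` below is a stand-in for `scipy.linalg.expm` that
# only works for skew-Hermitian input (an assumption of this sketch):

# ```python
import numpy as np


def so3generators(N):
    # Same basis S1, S2, S3 of so(3) inside u(N) as in the function above.
    s = (N - 1) / 2
    off = np.sqrt(s * (s + 1) - np.arange(-s, s) * np.arange(-s + 1, s + 1))
    S3 = 1j * np.diag(np.arange(-s, s + 1))
    S1 = 1j * (np.diag(off, 1) + np.diag(off, -1)) / 2
    S2 = (np.diag(off, 1) - np.diag(off, -1)) / 2
    return S1, S2, S3


def unitary_exp(A):
    # exp(A) for skew-Hermitian A via the eigendecomposition of the Hermitian -1j*A.
    lam, V = np.linalg.eigh(-1j * A)
    return (V * np.exp(1j * lam)) @ V.conj().T


N = 5
S1, S2, S3 = so3generators(N)
for S in (S1, S2, S3):
    assert np.allclose(S.conj().T, -S)  # generators are skew-Hermitian

xi = np.array([0.4, -0.2, 1.1])
R = unitary_exp(xi[0] * S1 + xi[1] * S2 + xi[2] * S3)
assert np.allclose(R @ R.conj().T, np.eye(N))  # the rotation matrix is unitary
# ```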
def north_blob(N, sigma=0):
    """
    Return vorticity matrix for blob located at north pole.

    Parameters
    ----------
    N: int
    sigma: float (optional)
        Gaussian sigma for blob. If 0 (default), then give the best
        approximation to a point vortex.

    Returns
    -------
    W: ndarray(shape=(N, N), dtype=complex)
    """
    W = np.zeros((N, N), dtype=complex)
    W[-1, -1] = 1.0j
    if sigma != 0:
        W = solve_heat(sigma / 4., W)
    return W


def qtime2seconds(qtime, N):
    """
    Convert quantum time units to seconds.

    Parameters
    ----------
    qtime: float or ndarray
    N: int

    Returns
    -------
    Time in seconds.
    """
    return qtime * np.sqrt(16. * np.pi) / N ** (3. / 2.)


def seconds2qtime(t, N):
    """
    Convert seconds to quantum time units.

    Parameters
    ----------
    t: float or ndarray
    N: int

    Returns
    -------
    Time in quantum time units.
    """
    return t / np.sqrt(16. * np.pi) * N ** (3. / 2.)
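# `qtime2seconds` and `seconds2qtime` are inverse scalings of each other.
# A self-contained round-trip check (copies of the two one-liners above):

# ```python
import numpy as np


def qtime2seconds(qtime, N):
    return qtime * np.sqrt(16. * np.pi) / N ** (3. / 2.)


def seconds2qtime(t, N):
    return t / np.sqrt(16. * np.pi) * N ** (3. / 2.)


N = 128
assert np.isclose(seconds2qtime(qtime2seconds(2.5, N), N), 2.5)
# ```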
# ---- file: django/Pieng/myclass/myquiz/migrations/0001_initial.py | repo: wasit7/tutorials | license: MIT ----
# -*- coding: utf-8 -*-
# Generated by Django 1.9.5 on 2016-09-02 04:17
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Questions',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('last_updated', models.DateTimeField(auto_now=True)),
                ('question', models.TextField(max_length=100)),
                ('a', models.CharField(max_length=30)),
                ('b', models.CharField(max_length=30)),
                ('c', models.CharField(max_length=30)),
                ('d', models.CharField(max_length=30)),
                ('anwser', models.CharField(choices=[('a', 'a.'), ('b', 'b.'), ('c', 'c.'), ('d', 'd.')], max_length=1)),
            ],
        ),
    ]
# ---- file: engram/tests/test_sys_redirect.py | repo: rgrannell1/engram.py | license: MIT, Unlicense ----
#!/usr/bin/env python3
import unittest
import os
import sys
import requests
import utils_test
from multiprocessing import Process
import time
sys.path.append(os.path.abspath('engram'))
import engram
class TestRedirect(utils_test.EngramTestCase):

    def test_index(self):
        """
        Story: Bookmark page loads.

        In order to access the bookmarks
        I want to be able to use the endpoint /bookmarks

        Scenario: requesting /bookmarks gets a response.
        Given a running engram server on localhost:5000
        When someone sends /bookmarks
        Then the server sends back an HTML page
        And the response has status 200.
        """
        index_response = requests.get('http://localhost:5000/', timeout=10)

        assert index_response.status_code == 200
        assert index_response.headers['content-type'] == "text/html; charset=utf-8"
unittest.main()
# ---- file: quickstartup/qs_accounts/admin.py | repo: shahabaz/quickstartup | license: MIT ----
from django import forms
from django.contrib import admin
from django.contrib.auth import get_user_model
from django.contrib.auth.forms import ReadOnlyPasswordHashField
from django.utils.translation import gettext_lazy as _
from .models import User
class UserAdminCreationForm(forms.ModelForm):
    password1 = forms.CharField(label=_('Password'), widget=forms.PasswordInput)
    password2 = forms.CharField(label=_('Password (verify)'), widget=forms.PasswordInput)

    class Meta:
        model = get_user_model()
        fields = ('name', 'email', 'password1', 'password2', 'is_staff', 'is_superuser')

    def clean_password2(self):
        password1 = self.cleaned_data.get("password1")
        password2 = self.cleaned_data.get("password2")
        if password1 and password2 and password1 != password2:
            raise forms.ValidationError("Passwords don't match")
        return password2

    def save(self, commit=True):
        user = super().save(commit=False)
        user.set_password(self.cleaned_data["password1"])
        return user


class UserAdminChangeForm(forms.ModelForm):
    password = ReadOnlyPasswordHashField()

    class Meta:
        model = User
        fields = ("name", "email", "password", "is_staff", "is_superuser")

    def clean_password(self):
        return self.initial["password"]


class UserAdmin(admin.ModelAdmin):
    form = UserAdminChangeForm
    add_form = UserAdminCreationForm

    list_display = ("name", "email", "is_staff", "last_login")
    list_filter = ("is_staff", "is_active")
    fieldsets = (
        (None, {"fields": ("name", "email", "password")}),
        ("Permissions", {"fields": ("is_active", "is_staff")}),
        ("Important dates", {"fields": ("last_login", "date_joined")}),
    )
    add_fieldsets = (
        (None, {
            "classes": ("wide",),
            "fields": ("name", "email", "password1", "password2", "is_staff"),
        },),
    )
    search_fields = ("name", "email")
    ordering = ("name", "email")


# Enable admin interface if User is the quickstart user model
if get_user_model() is User:
    admin.site.register(User, UserAdmin)
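# `clean_password2` implements the usual verify-field check: complain only when
# both fields are filled in and disagree. The same logic in plain Python
# (hypothetical `check_passwords_match` helper, no Django involved):

# ```python
def check_passwords_match(password1, password2):
    # Mirrors the form's clean_password2.
    if password1 and password2 and password1 != password2:
        raise ValueError("Passwords don't match")
    return password2


assert check_passwords_match("s3cret", "s3cret") == "s3cret"
assert check_passwords_match("s3cret", "") == ""  # empty verify field passes here
try:
    check_passwords_match("s3cret", "other")
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
# ```

# Note that an empty verify field slips through this check; in the real form the
# required-field validation catches that case before clean_password2 runs.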
# ---- file: wk8_hw/ex5_netmiko_sh_ver.py | repo: philuu12/PYTHON_4_NTWK_ENGRS | license: Apache-2.0 ----
#!/usr/bin/env python
"""
5. Use Netmiko to connect to each of the devices in the database.
Execute 'show version' on each device. Calculate the amount of time required to do this.
"""
from netmiko import ConnectHandler
from datetime import datetime
from net_system.models import NetworkDevice, Credentials
import django
import ex1_link_obj_2_credentials
def main():
    django.setup()

    # Load device info and credentials into database
    ex1_link_obj_2_credentials.link_device_to_credentials()
    devices = NetworkDevice.objects.all()

    for a_device in devices:
        if a_device.device_name and a_device.credentials:
            start_time = datetime.now()
            creds = a_device.credentials
            username = creds.username
            password = creds.password
            remote_conn = ConnectHandler(device_type=a_device.device_type,
                                         ip=a_device.ip_address,
                                         username=username,
                                         password=password,
                                         port=a_device.port,
                                         secret='')

            # Print out 'show version' output
            print
            print '#' * 80
            print ("'show version' output for device: %s" % a_device.device_name)
            print '#' * 80
            print remote_conn.send_command("show version")

            # Print out elapsed time
            print '#' * 80
            print ("Elapsed time: " + str(datetime.now() - start_time))
            print '#' * 80


if __name__ == "__main__":
    main()
# ---- file: data/ansible-module-template.py | repo: trskop/hsansible | license: BSD-3-Clause ----
#!/usr/bin/env python
# This file was generated using $program$ $version$ from a template
# with following copyringht notice.
#
# Copyright (c) 2013, Peter Trsko <peter.trsko@gmail.com>
#
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
#
# * Neither the name of Peter Trsko nor the names of other
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
$if(documentation)$
DOCUMENTATION = '''
$documentation$'''
$endif$
import atexit
import base64
import os
import stat
import subprocess
import sys
import tempfile
try:
    import json
except ImportError:
    import simplejson as json
def remove_temporary_file(fn):
    if fn is not None:
        os.remove(fn)


def fail_json(msg):
    print json.dumps(dict(failed=True, msg=msg))
    sys.exit(1)


def main():
    try:
        fd, fn = tempfile.mkstemp()
        atexit.register(remove_temporary_file, fn)
    except Exception as e:
        fail_json("Error creating temporary file: %s" % str(e))

    try:
        os.fchmod(fd, stat.S_IEXEC)
        os.write(fd, base64.b64decode(encodedBinary))
        os.fsync(fd)
        os.close(fd)
    except Exception as e:
        fail_json("Error recreating executable: %s" % str(e))

    try:
        subprocess.call([fn] + sys.argv[1:])
    except Exception as e:
        fail_json("Error while calling executable: %s" % str(e))
encodedBinary = '''
$encodedBinary$'''
if __name__ == '__main__':
    main()
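# The template's trick is to carry an executable as base64 text, rebuild it in a
# temporary file, and run it. A minimal Python 3 sketch of that round trip on a
# POSIX system (a tiny shell script stands in for `$encodedBinary$`; the read bit
# is added here so the interpreter can read the script back):

# ```python
import base64
import os
import stat
import subprocess
import tempfile

payload = b"#!/bin/sh\necho hello from embedded script\n"
encoded = base64.b64encode(payload).decode('ascii')

# Recreate an executable temporary file from the embedded data,
# the same way the template's main() does.
fd, fn = tempfile.mkstemp()
os.write(fd, base64.b64decode(encoded))
os.fchmod(fd, stat.S_IEXEC | stat.S_IRUSR | stat.S_IWUSR)
os.close(fd)

out = subprocess.check_output([fn])
os.remove(fn)
assert out == b"hello from embedded script\n"
# ```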
# ---- file: leer/core/storage/utxo_index_storage.py | repo: TensorVirus/leer | license: MIT ----
import os, lmdb
class UTXOIndex:
    '''
    Basically it is an index [public_key -> set of unspent outputs with this public key]
    '''
    __shared_states = {}

    def __init__(self, storage_space, path):
        if path not in self.__shared_states:
            self.__shared_states[path] = {}
        self.__dict__ = self.__shared_states[path]
        self.directory = path
        if not os.path.exists(path):
            os.makedirs(self.directory)  # TODO: catch exceptions here
        self.env = lmdb.open(self.directory, max_dbs=10)
        with self.env.begin(write=True) as txn:
            self.main_db = self.env.open_db(b'main_db', txn=txn, dupsort=True, dupfixed=True)  # TODO: duplicate
        self.storage_space = storage_space
        self.storage_space.register_utxo_index(self)

    def _add(self, serialized_pubkey, utxo_hash_and_pc):
        with self.env.begin(write=True) as txn:
            txn.put(serialized_pubkey, utxo_hash_and_pc, db=self.main_db, dupdata=True)

    def add_utxo(self, output):
        self._add(output.address.pubkey.serialize(), output.serialized_index)

    def _remove(self, serialized_pubkey, utxo_hash_and_pc):
        with self.env.begin(write=True) as txn:
            txn.delete(serialized_pubkey, utxo_hash_and_pc, db=self.main_db)

    def remove_utxo(self, output):
        self._remove(output.address.pubkey.serialize(), output.serialized_index)

    def get_all_unspent(self, pubkey):
        return self.get_all_unspent_for_serialized_pubkey(pubkey.serialize())

    def get_all_unspent_for_serialized_pubkey(self, serialized_pubkey):
        with self.env.begin(write=False) as txn:
            cursor = txn.cursor(db=self.main_db)
            if not cursor.set_key(serialized_pubkey):
                return []
            else:
                return list(cursor.iternext_dup())
| 34.571429 | 103 | 0.72196 | 249 | 1,694 | 4.614458 | 0.301205 | 0.111401 | 0.038294 | 0.055701 | 0.418625 | 0.358573 | 0.302872 | 0.302872 | 0.186249 | 0.186249 | 0 | 0.001421 | 0.169421 | 1,694 | 48 | 104 | 35.291667 | 0.81521 | 0.063164 | 0 | 0.088235 | 0 | 0 | 0.004453 | 0 | 0 | 0 | 0 | 0.020833 | 0 | 1 | 0.205882 | false | 0 | 0.029412 | 0.029412 | 0.382353 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
798d4895d401ce4adc3405ca10a7afd4738fcdc6 | 319 | py | Python | SageMaker/from_athena.py | terratenney/aws-tools | d8ca07d56d812deb819b039752b94a0f1b9e6eb2 | [
"MIT"
] | 8 | 2020-12-27T18:44:17.000Z | 2022-03-10T22:20:28.000Z | SageMaker/from_athena.py | terratenney/aws-tools | d8ca07d56d812deb819b039752b94a0f1b9e6eb2 | [
"MIT"
] | 28 | 2020-08-30T02:57:03.000Z | 2021-05-12T09:13:15.000Z | SageMaker/from_athena.py | kyhau/arki | b5d6b160ef0780032f231362158dd9dd892f4e8e | [
"MIT"
] | 8 | 2020-09-03T19:00:13.000Z | 2022-03-31T05:31:35.000Z | #import sys
#!{sys.executable} -m pip install pyathena
from pyathena import connect
import pandas as pd
conn = connect(s3_staging_dir='s3://aws-athena-query-results-459817416023-us-east-1/', region_name='us-east-1')
df = pd.read_sql('SELECT * FROM "ticketdata"."nfl_stadium_data" order by stadium limit 10;', conn)
df | 35.444444 | 111 | 0.758621 | 52 | 319 | 4.538462 | 0.730769 | 0.050847 | 0.059322 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.062937 | 0.103448 | 319 | 9 | 112 | 35.444444 | 0.762238 | 0.159875 | 0 | 0 | 0 | 0 | 0.501873 | 0.314607 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
798fa01b749eafbace3a784968765b5605f4e82b | 331 | py | Python | attendees/forms.py | TonyEight/lionax-wedding | b1abd230491723253726e2bcfe002c7be97db285 | [
"MIT"
] | null | null | null | attendees/forms.py | TonyEight/lionax-wedding | b1abd230491723253726e2bcfe002c7be97db285 | [
"MIT"
] | null | null | null | attendees/forms.py | TonyEight/lionax-wedding | b1abd230491723253726e2bcfe002c7be97db285 | [
"MIT"
] | null | null | null | from django import forms
from django.db.models.base import ModelBase
from phonenumber_field.formfields import PhoneNumberField
from . import models
class InvitationReplyForm(forms.Form):
    mobile_phone = PhoneNumberField(required=True, widget=forms.widgets.TextInput(
        attrs={"placeholder": "Saisissez votre numéro"}))
| 30.090909 | 82 | 0.794562 | 38 | 331 | 6.868421 | 0.710526 | 0.076628 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123867 | 331 | 10 | 83 | 33.1 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0.099698 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.571429 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
799d1f003f4411bfae3d21a7133897db1e94f10f | 441 | py | Python | src/pyprerender/BaseEventHandler.py | thingiesmm/pyprerender | b990e2916977fe7281b97d505e3c3e7b7777fa96 | [
"MIT"
] | 3 | 2020-07-28T17:19:09.000Z | 2020-09-11T01:56:42.000Z | src/pyprerender/BaseEventHandler.py | thingiesmm/pyprerender | b990e2916977fe7281b97d505e3c3e7b7777fa96 | [
"MIT"
] | null | null | null | src/pyprerender/BaseEventHandler.py | thingiesmm/pyprerender | b990e2916977fe7281b97d505e3c3e7b7777fa96 | [
"MIT"
] | null | null | null | import threading
class BaseEventHandler(object):
    screen_lock = threading.Lock()

    def __init__(self, browser, tab, directory='./'):
        self.browser = browser
        self.tab = tab
        self.start_frame = None
        self.directory = directory

    def frame_started_loading(self, frameId):
        if not self.start_frame:
            self.start_frame = frameId

    def frame_stopped_loading(self, frameId):
        pass
| 23.210526 | 53 | 0.646259 | 50 | 441 | 5.46 | 0.46 | 0.098901 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.267574 | 441 | 18 | 54 | 24.5 | 0.845201 | 0 | 0 | 0 | 0 | 0 | 0.004535 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0.076923 | 0.076923 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
79b1521f2c96172275c767bb838446410cf9cd69 | 426 | py | Python | app/mybot.py | arudmin/rocketgram-template | 5360e4a71ae2ba3c83d87a6278f7f46c04317665 | [
"WTFPL"
] | null | null | null | app/mybot.py | arudmin/rocketgram-template | 5360e4a71ae2ba3c83d87a6278f7f46c04317665 | [
"WTFPL"
] | null | null | null | app/mybot.py | arudmin/rocketgram-template | 5360e4a71ae2ba3c83d87a6278f7f46c04317665 | [
"WTFPL"
] | null | null | null | import logging
import pickle
from datetime import datetime
import munch
from rocketgram import Bot, Dispatcher, DefaultValuesMiddleware, ParseModeType
logger = logging.getLogger('mybot')
router = Dispatcher()
def get_bot(token: str):
    bot = Bot(token, router=router, globals_class=munch.Munch, context_data_class=munch.Munch)
    bot.middleware(DefaultValuesMiddleware(parse_mode=ParseModeType.html))
    return bot
| 22.421053 | 94 | 0.793427 | 51 | 426 | 6.529412 | 0.54902 | 0.084084 | 0.09009 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126761 | 426 | 18 | 95 | 23.666667 | 0.895161 | 0 | 0 | 0 | 0 | 0 | 0.011765 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.454545 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
79c4ad5431fc01ca8847cda94d203de141747e2e | 3,863 | py | Python | decuen/actors/strats/epsilon.py | ziyadedher/decuen | bc3bd42857308d7b189f576a3404abb3f9152531 | [
"MIT"
] | 2 | 2019-03-21T20:15:22.000Z | 2019-11-06T21:18:52.000Z | decuen/actors/strats/epsilon.py | ziyadedher/decuen | bc3bd42857308d7b189f576a3404abb3f9152531 | [
"MIT"
] | 10 | 2019-11-17T14:41:31.000Z | 2019-11-26T18:52:27.000Z | decuen/actors/strats/epsilon.py | ziyadedher/decuen | bc3bd42857308d7b189f576a3404abb3f9152531 | [
"MIT"
] | null | null | null | """Implementation of an epsilon-greedy action selection strategy."""
from abc import ABC, abstractmethod
from typing import Callable, ClassVar, Optional

from decuen.actors.strats._strategy import Strategy
from decuen.actors.strats.greedy import GreedyStrategy
from decuen.actors.strats.uniform import UniformStrategy
from decuen.dists import Categorical
from decuen.structs import Action, Tensor
from decuen.utils.function_property import FunctionProperty


# pylint: disable=too-few-public-methods
class EpsilonDecay(ABC):
    """Technique used to decay epsilon in `EpsilonGreedyStrategy`."""

    @abstractmethod
    def decay(self, value: float) -> float:
        """Decay an epsilon value and return the decayed value."""
        ...


class NoEpsilonDecay(EpsilonDecay):
    """No-op epsilon "decay"."""

    def decay(self, value: float) -> float:
        """Return the inputted value directly with no decay."""
        return value


class FunctionEpsilonDecay(EpsilonDecay):
    """Functional epsilon decay.

    Decays based on a custom decay function.
    """

    func: FunctionProperty[Callable[[float], float]]

    def __init__(self, func: Callable[[float], float]) -> None:
        """Initialize a functional epsilon decay technique."""
        self.func = func

    def decay(self, value: float) -> float:
        """Return the decayed value according to the decaying function."""
        return self.func(value)


class LinearEpsilonDecay(EpsilonDecay):
    """Linear epsilon decay.

    Decays linearly based on a linear decay factor.
    """

    factor: float

    def __init__(self, factor: float) -> None:
        """Initialize a linear epsilon decay technique."""
        self.factor = factor

    def decay(self, value: float) -> float:
        """Return the value reduced by the linear decay factor."""
        return value - self.factor


class ExponentialEpsilonDecay(EpsilonDecay):
    """Exponential epsilon decay.

    Decays geometrically based on an exponential decay factor.
    """

    factor: float

    def __init__(self, factor: float) -> None:
        """Initialize an exponential epsilon decay technique."""
        # TODO: warn against weird factors (i.e. <= 0 or >= 1)
        self.factor = factor

    def decay(self, value: float) -> float:
        """Return the value multiplied by the exponential decay factor."""
        return value * self.factor


# pylint: disable=too-few-public-methods
class EpsilonGreedyStrategy(Strategy):
    """Epsilon-greedy action selection strategy."""

    greedy: ClassVar[GreedyStrategy] = GreedyStrategy()
    random: ClassVar[UniformStrategy] = UniformStrategy()

    epsilon: float
    min_epsilon: float
    max_epsilon: float
    _decay: EpsilonDecay

    def __init__(self, epsilon: float, max_epsilon: float = 1, min_epsilon: float = 0,
                 decay: Optional[EpsilonDecay] = None) -> None:
        """Initialize an epsilon-greedy strategy."""
        super().__init__(Categorical)
        self.epsilon = epsilon
        self.max_epsilon = max_epsilon
        self.min_epsilon = min_epsilon
        self._decay = decay if decay else NoEpsilonDecay()

    def act(self, action_values: Tensor) -> Action:
        """Generate parameters for a categorical action distribution based on an epsilon-greedy strategy.

        Decays epsilon according to the decay mechanism after choosing an action.
        """
        probs = (1 - self.epsilon) * self.greedy.act(action_values) + self.epsilon * self.random.act(action_values)
        self.decay()
        return probs

    def decay(self) -> None:
        """Decay the epsilon according to the decaying technique."""
        self.epsilon = max(self._decay.decay(self.epsilon), self.min_epsilon)

    def reset(self) -> None:
        """Reset the epsilon to be the maximum epsilon."""
        self.epsilon = self.max_epsilon
| 32.191667 | 115 | 0.6803 | 446 | 3,863 | 5.807175 | 0.246637 | 0.033977 | 0.027799 | 0.032819 | 0.228958 | 0.180309 | 0.145174 | 0.116602 | 0.088803 | 0.088803 | 0 | 0.001656 | 0.218483 | 3,863 | 119 | 116 | 32.462185 | 0.856244 | 0.330054 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008403 | 0 | 1 | 0.218182 | false | 0 | 0.145455 | 0 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
79dd1956eda2bbd3df8995bb81479005690d4a42 | 301 | py | Python | magicmirror/tools/es/__init__.py | memirror/magicMirror | 05ee16b44aef22c30da2bc3323c5ba593b3e53fa | [
"MIT"
] | 5 | 2021-09-03T03:06:51.000Z | 2022-03-22T07:48:22.000Z | magicmirror/tools/es/__init__.py | xiaodongxiexie/magicMirror | 05ee16b44aef22c30da2bc3323c5ba593b3e53fa | [
"MIT"
] | null | null | null | magicmirror/tools/es/__init__.py | xiaodongxiexie/magicMirror | 05ee16b44aef22c30da2bc3323c5ba593b3e53fa | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# @Author: xiaodong
# @Date : 2021/5/27
from elasticsearch import Elasticsearch
from .question import ElasticSearchQuestion
from ...setting import ELASTICSEARCH_HOST
ElasticSearchQuestion.es = Elasticsearch(ELASTICSEARCH_HOST)
esq = ElasticSearchQuestion("mm_question")
| 21.5 | 60 | 0.777409 | 31 | 301 | 7.451613 | 0.612903 | 0.164502 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 0.122924 | 301 | 13 | 61 | 23.153846 | 0.844697 | 0.192691 | 0 | 0 | 0 | 0 | 0.046025 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
79ddb105ac755f4279b7ea5fbb3e4ad1e1be660a | 265 | py | Python | mirari/TCS/migrations/0051_merge_20200210_1930.py | gcastellan0s/mirariapp | 24a9db06d10f96c894d817ef7ccfeec2a25788b7 | [
"MIT"
] | null | null | null | mirari/TCS/migrations/0051_merge_20200210_1930.py | gcastellan0s/mirariapp | 24a9db06d10f96c894d817ef7ccfeec2a25788b7 | [
"MIT"
] | 18 | 2019-12-27T19:58:20.000Z | 2022-02-27T08:17:49.000Z | mirari/TCS/migrations/0051_merge_20200210_1930.py | gcastellan0s/mirariapp | 24a9db06d10f96c894d817ef7ccfeec2a25788b7 | [
"MIT"
] | null | null | null | # Generated by Django 2.0.5 on 2020-02-11 01:30
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
        ('TCS', '0050_auto_20190918_1247'),
        ('TCS', '0050_auto_20191127_2331'),
    ]

    operations = [
    ]
| 17.666667 | 47 | 0.637736 | 33 | 265 | 4.939394 | 0.818182 | 0.08589 | 0.134969 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.232673 | 0.237736 | 265 | 14 | 48 | 18.928571 | 0.574257 | 0.169811 | 0 | 0 | 1 | 0 | 0.238532 | 0.211009 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8db6147436c0b6b4669ca38bc881f7d85d39b8de | 3,050 | py | Python | nova/scheduler/weights/__init__.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | null | null | null | nova/scheduler/weights/__init__.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | null | null | null | nova/scheduler/weights/__init__.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | 2 | 2017-07-20T17:31:34.000Z | 2020-07-24T02:42:19.000Z | begin_unit
comment|'# Copyright (c) 2011 OpenStack Foundation'
nl|'\n'
comment|'# All Rights Reserved.'
nl|'\n'
comment|'#'
nl|'\n'
comment|'# Licensed under the Apache License, Version 2.0 (the "License"); you may'
nl|'\n'
comment|'# not use this file except in compliance with the License. You may obtain'
nl|'\n'
comment|'# a copy of the License at'
nl|'\n'
comment|'#'
nl|'\n'
comment|'# http://www.apache.org/licenses/LICENSE-2.0'
nl|'\n'
comment|'#'
nl|'\n'
comment|'# Unless required by applicable law or agreed to in writing, software'
nl|'\n'
comment|'# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT'
nl|'\n'
comment|'# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the'
nl|'\n'
comment|'# License for the specific language governing permissions and limitations'
nl|'\n'
comment|'# under the License.'
nl|'\n'
nl|'\n'
string|'"""\nScheduler host weights\n"""'
newline|'\n'
nl|'\n'
name|'from'
name|'nova'
name|'import'
name|'weights'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|WeighedHost
name|'class'
name|'WeighedHost'
op|'('
name|'weights'
op|'.'
name|'WeighedObject'
op|')'
op|':'
newline|'\n'
DECL|member|to_dict
indent|' '
name|'def'
name|'to_dict'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'x'
op|'='
name|'dict'
op|'('
name|'weight'
op|'='
name|'self'
op|'.'
name|'weight'
op|')'
newline|'\n'
name|'x'
op|'['
string|"'host'"
op|']'
op|'='
name|'self'
op|'.'
name|'obj'
op|'.'
name|'host'
newline|'\n'
name|'return'
name|'x'
newline|'\n'
nl|'\n'
DECL|member|__repr__
dedent|''
name|'def'
name|'__repr__'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'return'
string|'"WeighedHost [host: %r, weight: %s]"'
op|'%'
op|'('
nl|'\n'
name|'self'
op|'.'
name|'obj'
op|','
name|'self'
op|'.'
name|'weight'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|BaseHostWeigher
dedent|''
dedent|''
name|'class'
name|'BaseHostWeigher'
op|'('
name|'weights'
op|'.'
name|'BaseWeigher'
op|')'
op|':'
newline|'\n'
indent|' '
string|'"""Base class for host weights."""'
newline|'\n'
name|'pass'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|HostWeightHandler
dedent|''
name|'class'
name|'HostWeightHandler'
op|'('
name|'weights'
op|'.'
name|'BaseWeightHandler'
op|')'
op|':'
newline|'\n'
DECL|variable|object_class
indent|' '
name|'object_class'
op|'='
name|'WeighedHost'
newline|'\n'
nl|'\n'
DECL|member|__init__
name|'def'
name|'__init__'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'super'
op|'('
name|'HostWeightHandler'
op|','
name|'self'
op|')'
op|'.'
name|'__init__'
op|'('
name|'BaseHostWeigher'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|function|all_weighers
dedent|''
dedent|''
name|'def'
name|'all_weighers'
op|'('
op|')'
op|':'
newline|'\n'
indent|' '
string|'"""Return a list of weight plugin classes found in this directory."""'
newline|'\n'
name|'return'
name|'HostWeightHandler'
op|'('
op|')'
op|'.'
name|'get_all_classes'
op|'('
op|')'
newline|'\n'
dedent|''
endmarker|''
end_unit
| 15.482234 | 88 | 0.632787 | 442 | 3,050 | 4.298643 | 0.269231 | 0.042632 | 0.068421 | 0.050526 | 0.305263 | 0.226842 | 0.147895 | 0.126842 | 0.084211 | 0 | 0 | 0.00299 | 0.122623 | 3,050 | 196 | 89 | 15.561224 | 0.707025 | 0 | 0 | 0.770408 | 0 | 0 | 0.446885 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.005102 | 0.005102 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8dc12441a5e7249e908d2c2dfcd39c2193640172 | 6,680 | py | Python | jinfo/sequence.py | JBwdn/jinfo | b5933edd3ea3d27f4f7c1e0153e16750de0d1726 | [
"MIT"
] | null | null | null | jinfo/sequence.py | JBwdn/jinfo | b5933edd3ea3d27f4f7c1e0153e16750de0d1726 | [
"MIT"
] | 1 | 2020-12-07T14:07:14.000Z | 2020-12-07T14:07:14.000Z | jinfo/sequence.py | JBwdn/jinfo | b5933edd3ea3d27f4f7c1e0153e16750de0d1726 | [
"MIT"
] | null | null | null | from jinfo.tables import (
    DNA_VOCAB,
    RNA_VOCAB,
    AA_VOCAB,
    CODON_TABLE,
    RC_TABLE,
    NT_MW_TABLE,
    AA_MW_TABLE,
)


class SeqVocabError(Exception):
    pass


class SeqLengthError(Exception):
    pass


class UnknownBaseError(Exception):
    pass


class BaseSeq:
    """
    Parent class for DNA/RNA/AA sequence objects
    """

    def __init__(self, sequence: str = "", label: str = "", vocab: set = None) -> None:
        self.vocab = vocab
        self.label = label
        self.update_seq(sequence)
        self.len = len(self.seq)
        return

    def __str__(self):
        return f"{self.label}\t{self.seq}"

    def check_seq_valid(self) -> None:
        """
        Ensure that the sequence string is consistent with the vocab
        """
        if self.vocab is not None:
            if not self.vocab.issuperset(set(self.new_seq)):
                raise SeqVocabError("Seq contains bases not in vocab")
        return

    def update_seq(self, sequence: str = "") -> None:
        """
        Replace the sequence string with a new string
        """
        self.new_seq = sequence.upper()
        self.check_seq_valid()
        self.seq = self.new_seq
        self.len = len(sequence)
        return

    def update_label(self, label: str = "") -> None:
        """
        Replace the sequence label with a new string
        """
        self.label = label
        return

    def align(self, seq2, maxiters: int = 16):
        """
        Perform alignment of two sequences, optionally control the number of iterations
        ***Requires MUSCLE package***
        Returns Alignment object
        """
        from jinfo.utils.multialign import multialign

        return multialign([self, seq2], maxiters=maxiters)

    def identity(self, seq2) -> float:
        """
        Calculate the percentage identity between two sequences
        Returns: float
        """
        from jinfo.utils.percentage_identity import percentage_identity

        return percentage_identity(self, seq2)

    def save_fasta(self, file_name: str) -> None:
        """
        Save sequence to fasta file
        """
        import textwrap

        seq_formatted = textwrap.fill(self.seq, width=80)
        if self.label == "":
            out_label = "jinfo_sequence"
        else:
            out_label = self.label
        with open(file_name, "w") as text_file:
            text_file.write(f">{out_label}\n{seq_formatted}")
        return


class DNASeq(BaseSeq):
    """
    Class to hold sequences of DNA
    """

    def __init__(self, sequence: str = "", label: str = "") -> None:
        """
        Call the superclass constructor with new default vocab argument
        """
        super(DNASeq, self).__init__(sequence=sequence, label=label, vocab=DNA_VOCAB)
        return

    def transcribe(self) -> str:
        """
        Returns: RNA transcript of the DNA sequence
        """
        return self.seq.replace("T", "U")

    def translate(self) -> str:
        """
        Returns: translated protein sequence of the DNA sequence
        """
        transcript = self.transcribe()
        if len(transcript) % 3 != 0:
            raise SeqLengthError("Seq cannot be split into codons, not a multiple of 3")
        codon_list = [transcript[i : i + 3] for i in range(0, len(transcript), 3)]
        return "".join([CODON_TABLE[codon] for codon in codon_list])

    def reverse_complement(self) -> str:
        """
        Returns: reverse complement of the DNA sequence
        """
        return "".join([RC_TABLE[base] for base in self.seq][::-1])

    def find_CDS(self):
        return

    def MW(self) -> float:
        """
        Calculate MW of linear double stranded DNA
        Returns: Molecular weight float
        """
        if "X" in self.seq:
            raise UnknownBaseError("X base in sequence")
        fw_mw = sum([NT_MW_TABLE[base] for base in self.seq]) + 17.01
        rv_mw = sum([NT_MW_TABLE[base] for base in self.reverse_complement()]) + 17.01
        return fw_mw + rv_mw

    def GC(self, dp: int = 2) -> float:
        """
        Calculate the GC% of the DNA sequence with optional arg to control precision
        Returns: GC percentage float
        """
        return round(100 * (self.seq.count("C") + self.seq.count("G")) / self.len, dp)

    def tm(self, dp: int = 2) -> float:
        """
        Calculate DNA sequence tm with optional arg to control precision
        Returns: melting temperature float
        """
        import primer3

        return round(primer3.calcTm(self.seq), dp)

    def one_hot(self, max_len: int = None):
        """
        One-hot encode the sequence, optionally padded to max_len
        """
        from jinfo import one_hot_dna

        if max_len:
            return one_hot_dna(self, max_len)
        else:
            return one_hot_dna(self, self.len)


class RNASeq(BaseSeq):
    """
    Class to hold RNA sequences
    """

    def __init__(self, sequence: str = "", label: str = "") -> None:
        """
        Call the superclass constructor with new default vocab argument
        """
        super(RNASeq, self).__init__(sequence=sequence, label=label, vocab=RNA_VOCAB)
        return

    def reverse_transcribe(self) -> str:
        """
        Returns: DNA template of the RNA sequence
        """
        return self.seq.replace("U", "T")

    def translate(self) -> str:
        """
        Returns: the translated protein sequence of the RNA sequence
        """
        if len(self.seq) % 3 != 0:
            raise SeqLengthError("Seq cannot be split into codons, not a multiple of 3")
        codon_list = [self.seq[i : i + 3] for i in range(0, len(self.seq), 3)]
        return "".join([CODON_TABLE[codon] for codon in codon_list])

    def MW(self) -> float:
        """
        Calculate MW of single stranded RNA
        Returns: Molecular weight float
        """
        if "X" in self.seq:
            raise UnknownBaseError("X base in sequence")
        return sum([NT_MW_TABLE[base] for base in self.seq]) + 17.01


class AASeq(BaseSeq):
    """
    Class to hold amino acid sequences
    """

    def __init__(self, sequence: str = "", label: str = ""):
        """
        Call the superclass constructor with new default vocab argument
        """
        super(AASeq, self).__init__(sequence=sequence, label=label, vocab=AA_VOCAB)
        return

    def MW(self) -> float:
        """
        Calculate protein MW
        Returns: Molecular weight float
        """
        if "X" in self.seq:
            raise UnknownBaseError("X residue in sequence")
        return sum([AA_MW_TABLE[base] for base in self.seq])


if __name__ == "__main__":
    pass
| 25.39924 | 88 | 0.577994 | 813 | 6,680 | 4.615006 | 0.210332 | 0.035448 | 0.016791 | 0.021322 | 0.441098 | 0.39419 | 0.372068 | 0.269456 | 0.249467 | 0.239872 | 0 | 0.008743 | 0.31512 | 6,680 | 262 | 89 | 25.496183 | 0.811366 | 0.208982 | 0 | 0.294643 | 0 | 0 | 0.058736 | 0.011238 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205357 | false | 0.035714 | 0.053571 | 0.017857 | 0.535714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8de9bac54c06400b09fe0841b43b9c0c1c1f4575 | 527 | py | Python | Pre-train/refers/data/__init__.py | funnyzhou/REFERS | 392eddf13cbf3c3a7dc0bf8bfffd108ca4a65a19 | [
"MIT"
] | 46 | 2021-11-19T03:23:01.000Z | 2022-03-27T08:59:50.000Z | Pre-train/refers/data/__init__.py | funnyzhou/REFERS | 392eddf13cbf3c3a7dc0bf8bfffd108ca4a65a19 | [
"MIT"
] | null | null | null | Pre-train/refers/data/__init__.py | funnyzhou/REFERS | 392eddf13cbf3c3a7dc0bf8bfffd108ca4a65a19 | [
"MIT"
] | 3 | 2022-01-22T18:45:20.000Z | 2022-03-29T08:59:16.000Z | from .datasets.captioning import CaptioningDataset
from .datasets.masked_lm import MaskedLmDataset
from .datasets.multilabel import MultiLabelClassificationDataset
from .datasets.downstream import (
    ImageNetDataset,
    INaturalist2018Dataset,
    VOC07ClassificationDataset,
    ImageDirectoryDataset,
)

__all__ = [
    "CaptioningDataset",
    "MaskedLmDataset",
    "MultiLabelClassificationDataset",
    "ImageDirectoryDataset",
    "ImageNetDataset",
    "INaturalist2018Dataset",
    "VOC07ClassificationDataset",
]
| 26.35 | 64 | 0.781784 | 32 | 527 | 12.71875 | 0.5 | 0.117936 | 0.309582 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026667 | 0.14611 | 527 | 19 | 65 | 27.736842 | 0.877778 | 0 | 0 | 0 | 0 | 0 | 0.278937 | 0.189753 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 1 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8df4615701e7dedef4351712d7a59c5d1e5c432e | 364 | py | Python | Snippets and Basic Functions/Cryptography/ecdsa-ops.py | sckulkarni246/python-snippets-for-embedded-programmers | 9dfd0b193f86a6de54598917f3d7088a60ec4abc | [
"MIT"
] | null | null | null | Snippets and Basic Functions/Cryptography/ecdsa-ops.py | sckulkarni246/python-snippets-for-embedded-programmers | 9dfd0b193f86a6de54598917f3d7088a60ec4abc | [
"MIT"
] | null | null | null | Snippets and Basic Functions/Cryptography/ecdsa-ops.py | sckulkarni246/python-snippets-for-embedded-programmers | 9dfd0b193f86a6de54598917f3d7088a60ec4abc | [
"MIT"
] | null | null | null | import ecdsa
import hashlib
sk = ecdsa.SigningKey.generate(curve=ecdsa.NIST256p)
vk = sk.get_verifying_key()
a = b"Hello World!"
sig = sk.sign(a,hashfunc=hashlib.sha256)
result = vk.verify(sig,a,hashfunc=hashlib.sha256)
strsk = sk.to_string()
strvk = vk.to_string()
sk2 = ecdsa.SigningKey.from_string(strsk,curve=ecdsa.NIST256p)
vk2 = sk2.get_verifying_key()
| 21.411765 | 62 | 0.760989 | 57 | 364 | 4.736842 | 0.508772 | 0.111111 | 0.133333 | 0.162963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045872 | 0.101648 | 364 | 16 | 63 | 22.75 | 0.779817 | 0 | 0 | 0 | 1 | 0 | 0.032967 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8dfcb3cc51ba098203b49c8846a0a0b7192220bf | 313 | py | Python | scenarios/SambaShare/config.py | dasec/ForTrace | b8187522a2c83fb661e5a1a5f403da8f40a31ead | [
"MIT"
] | 1 | 2022-03-31T14:01:51.000Z | 2022-03-31T14:01:51.000Z | scenarios/SambaShare/config.py | dasec/ForTrace | b8187522a2c83fb661e5a1a5f403da8f40a31ead | [
"MIT"
] | null | null | null | scenarios/SambaShare/config.py | dasec/ForTrace | b8187522a2c83fb661e5a1a5f403da8f40a31ead | [
"MIT"
] | 1 | 2022-03-31T14:02:30.000Z | 2022-03-31T14:02:30.000Z |
imagename = "smbScenario"
author = "David Konczewski"
hostplatform = "windows"
poolpath = "/home/wurstfingersalat/Downloads/fortracepool/"
smbname = "smbServer"
smbplatform = "unix"
sourcePath = r"C:\Users\fortrace\Desktop\TestFile.txt"
targetPath = r"\\192.168.103.102\public"
username = "bla"
password = "test"
| 26.083333 | 59 | 0.747604 | 34 | 313 | 6.882353 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042705 | 0.102236 | 313 | 11 | 60 | 28.454545 | 0.790036 | 0 | 0 | 0 | 0 | 0 | 0.519231 | 0.346154 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.1 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
5c05581cfe7725cd6de73043086602105f3512f1 | 205 | py | Python | src/commands/posts/kernel.py | MineHubCZ/MH-DOS | 0d1361aee8aa4903e7b6c89c1df279b74d55d703 | [
"MIT"
] | 2 | 2021-08-01T12:59:59.000Z | 2021-09-27T05:51:05.000Z | src/commands/posts/kernel.py | MineHubCZ/MH-DOS | 0d1361aee8aa4903e7b6c89c1df279b74d55d703 | [
"MIT"
] | 3 | 2021-07-25T07:54:19.000Z | 2021-08-18T20:35:26.000Z | src/commands/posts/kernel.py | MineHubCZ/MH-DOS | 0d1361aee8aa4903e7b6c89c1df279b74d55d703 | [
"MIT"
] | null | null | null | from commands.posts.show import show
from commands.posts.all import all
def posts(arguments):
    if not arguments:
        all()
        return
    if int(arguments[0]):
        show(arguments[0])
| 17.083333 | 36 | 0.629268 | 27 | 205 | 4.777778 | 0.481481 | 0.186047 | 0.263566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013514 | 0.278049 | 205 | 11 | 37 | 18.636364 | 0.858108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
5c1034bae8fc295a3f243c68cc7b08646a98fe8b | 348 | py | Python | 22. Workshop - Custom List/tests/tests_case_base.py | elenaborisova/Python-OOP | 584882c08f84045b12322917f0716c7c7bd9befc | [
"MIT"
] | 1 | 2021-03-27T16:56:30.000Z | 2021-03-27T16:56:30.000Z | 22. Workshop - Custom List/tests/tests_case_base.py | elenaborisova/Python-OOP | 584882c08f84045b12322917f0716c7c7bd9befc | [
"MIT"
] | null | null | null | 22. Workshop - Custom List/tests/tests_case_base.py | elenaborisova/Python-OOP | 584882c08f84045b12322917f0716c7c7bd9befc | [
"MIT"
] | 1 | 2021-03-15T14:50:39.000Z | 2021-03-15T14:50:39.000Z | from unittest import TestCase
class TestCaseBase(TestCase):
    def assertEmpty(self, iterable):
        if type(iterable) == dict:
            return self.assertDictEqual({}, dict(iterable))
        elif type(iterable) == set:
            return self.assertSetEqual(set(), set(iterable))
        return self.assertListEqual([], list(iterable))
| 29 | 60 | 0.649425 | 35 | 348 | 6.457143 | 0.571429 | 0.132743 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.232759 | 348 | 11 | 61 | 31.636364 | 0.846442 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
5c242a198477936939412bc6fd3c800247ffa52b | 310 | py | Python | neuraldistributions/utility/__init__.py | mohammadbashiri/bashiri-et-al-2021 | c7c15ea0bf165d4d3db2ff63a04a1e78c29bf44c | [
"MIT"
] | 2 | 2021-12-04T20:01:00.000Z | 2021-12-05T19:59:02.000Z | neuraldistributions/utility/__init__.py | mohammadbashiri/bashiri-et-al-2021 | c7c15ea0bf165d4d3db2ff63a04a1e78c29bf44c | [
"MIT"
] | 1 | 2021-12-15T20:50:04.000Z | 2021-12-15T20:50:04.000Z | neuraldistributions/utility/__init__.py | mohammadbashiri/bashiri-et-al-2021 | c7c15ea0bf165d4d3db2ff63a04a1e78c29bf44c | [
"MIT"
] | 1 | 2021-09-15T12:26:17.000Z | 2021-09-15T12:26:17.000Z | from .reproducibility import set_random_seed
from .training import EarlyStopping
from .dataset import get_dataloader, imread
from .model_evaluation import (
    get_conditional_means,
    get_conditional_variances,
    spearman_corr,
)
from .scoring_functions import (
    Correlation,
    get_loglikelihood,
) | 25.833333 | 44 | 0.8 | 35 | 310 | 6.771429 | 0.657143 | 0.075949 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151613 | 310 | 12 | 45 | 25.833333 | 0.901141 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.416667 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
5c2fd36ec63dc8452f1ae98ad930b70f17c2fece | 430 | py | Python | nodefinder/search/result/__init__.py | zx-sdu/NodeFinder | edaeeba8fb5a1ca28222313f6de7a6dfa8253093 | [
"Apache-2.0"
] | 2 | 2020-01-29T16:47:18.000Z | 2021-05-24T16:39:00.000Z | nodefinder/search/result/__init__.py | zx-sdu/NodeFinder | edaeeba8fb5a1ca28222313f6de7a6dfa8253093 | [
"Apache-2.0"
] | 12 | 2018-07-11T23:42:19.000Z | 2021-10-07T21:39:12.000Z | nodefinder/search/result/__init__.py | zx-sdu/NodeFinder | edaeeba8fb5a1ca28222313f6de7a6dfa8253093 | [
"Apache-2.0"
] | 2 | 2019-11-06T00:22:53.000Z | 2019-11-06T00:38:23.000Z | # -*- coding: utf-8 -*-
# © 2017-2019, ETH Zurich, Institut für Theoretische Physik
# Author: Dominik Gresch <greschd@gmx.ch>
"""
Submodule defining the result classes of the search step.
"""
from ._minimization import *
from ._search_result_container import *
from ._controller_state import *
__all__ = _minimization.__all__ + _search_result_container.__all__ + _controller_state.__all__ # pylint: disable=undefined-variable
| 30.714286 | 132 | 0.769767 | 53 | 430 | 5.735849 | 0.698113 | 0.065789 | 0.138158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024064 | 0.130233 | 430 | 13 | 133 | 33.076923 | 0.786096 | 0.495349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
308696aec5daf6d1909385db9bbfaabec91c70ed | 12,678 | py | Python | cli/tests/test_polyflow/test_io.py | hackerwins/polyaxon | ff56a098283ca872abfbaae6ba8abba479ffa394 | [
"Apache-2.0"
] | null | null | null | cli/tests/test_polyflow/test_io.py | hackerwins/polyaxon | ff56a098283ca872abfbaae6ba8abba479ffa394 | [
"Apache-2.0"
] | null | null | null | cli/tests/test_polyflow/test_io.py | hackerwins/polyaxon | ff56a098283ca872abfbaae6ba8abba479ffa394 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
#
# Copyright 2019 Polyaxon, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# coding: utf-8
from __future__ import absolute_import, division, print_function
import uuid
from collections import OrderedDict
from unittest import TestCase
import pytest
from marshmallow import ValidationError
from tests.utils import assert_equal_dict
from polyaxon.schemas.polyflow.io import IOConfig, IOTypes
from polyaxon.schemas.polyflow.params import ParamSpec, get_param
@pytest.mark.polyflow_mark
class TestIOConfigs(TestCase):
    def test_wrong_io_config(self):
        # No name
        with self.assertRaises(ValidationError):
            IOConfig.from_dict({})

    def test_unsupported_io_config_type(self):
        with self.assertRaises(ValidationError):
            IOConfig.from_dict({"name": "input1", "type": "something"})

    def test_wrong_io_config_default(self):
        with self.assertRaises(ValidationError):
            IOConfig.from_dict(
                {"name": "input1", "type": IOTypes.FLOAT, "value": "foo"}
            )
        with self.assertRaises(ValidationError):
            IOConfig.from_dict(
                {"name": "input1", "type": IOTypes.GCS_PATH, "value": 234}
            )

    def test_wrong_io_config_flag(self):
        with self.assertRaises(ValidationError):
            IOConfig.from_dict(
                {"name": "input1", "type": IOTypes.S3_PATH, "is_flag": True}
            )
        with self.assertRaises(ValidationError):
            IOConfig.from_dict(
                {"name": "input1", "type": IOTypes.FLOAT, "is_flag": True}
            )

    def test_io_config_optionals(self):
        config_dict = {"name": "input1"}
        config = IOConfig.from_dict(config_dict)
        assert_equal_dict(config.to_dict(), config_dict)

    def test_io_config_desc(self):
        # test desc
        config_dict = {"name": "input1", "description": "some text"}
        config = IOConfig.from_dict(config_dict)
        assert_equal_dict(config.to_dict(), config_dict)

    def test_io_config_types(self):
        config_dict = {
            "name": "input1",
            "description": "some text",
            "type": IOTypes.INT,
        }
        config = IOConfig.from_dict(config_dict)
        assert_equal_dict(config.to_dict(), config_dict)
        expected_repr = OrderedDict((("name", "input1"), ("type", "int"), ("value", 3)))
        assert config.get_repr_from_value(3) == expected_repr
        assert config.get_repr() == OrderedDict((("name", "input1"), ("type", "int")))

        config_dict = {
            "name": "input1",
            "description": "some text",
            "type": IOTypes.S3_PATH,
        }
        config = IOConfig.from_dict(config_dict)
        assert_equal_dict(config.to_dict(), config_dict)
        expected_repr = OrderedDict(
            (("name", "input1"), ("type", IOTypes.S3_PATH), ("value", "s3://foo"))
        )
        assert config.get_repr_from_value("s3://foo") == expected_repr
        assert config.get_repr() == OrderedDict(
            (("name", "input1"), ("type", IOTypes.S3_PATH))
        )

    def test_io_config_default(self):
        config_dict = {
            "name": "input1",
            "description": "some text",
            "type": IOTypes.BOOL,
            "is_optional": True,
            "value": True,
        }
        config = IOConfig.from_dict(config_dict)
        assert_equal_dict(config.to_dict(), config_dict)
        expected_repr = OrderedDict(
            (("name", "input1"), ("type", "bool"), ("value", True))
        )
        assert config.get_repr_from_value(None) == expected_repr
        assert config.get_repr() == expected_repr

        config_dict = {
            "name": "input1",
            "description": "some text",
            "type": IOTypes.FLOAT,
            "is_optional": True,
            "value": 3.4,
        }
        config = IOConfig.from_dict(config_dict)
        assert_equal_dict(config.to_dict(), config_dict)
        expected_repr = OrderedDict(
            (("name", "input1"), ("type", "float"), ("value", 3.4))
        )
        assert config.get_repr_from_value(None) == expected_repr
        assert config.get_repr() == expected_repr

    def test_io_config_default_and_required(self):
        config_dict = {
            "name": "input1",
            "description": "some text",
            "type": IOTypes.BOOL,
            "value": True,
            "is_optional": True,
        }
        config = IOConfig.from_dict(config_dict)
        assert_equal_dict(config.to_dict(), config_dict)

        config_dict = {
            "name": "input1",
            "description": "some text",
            "type": IOTypes.STR,
            "value": "foo",
        }
        with self.assertRaises(ValidationError):
            IOConfig.from_dict(config_dict)

    def test_io_config_required(self):
        config_dict = {
            "name": "input1",
            "description": "some text",
            "type": "float",
            "is_optional": False,
        }
        config = IOConfig.from_dict(config_dict)
        assert_equal_dict(config.to_dict(), config_dict)
        expected_repr = OrderedDict(
            (("name", "input1"), ("type", "float"), ("value", 1.1))
        )
        assert config.get_repr_from_value(1.1) == expected_repr
        assert config.get_repr() == OrderedDict((("name", "input1"), ("type", "float")))

    def test_io_config_flag(self):
        config_dict = {
            "name": "input1",
            "description": "some text",
            "type": IOTypes.BOOL,
            "is_flag": True,
        }
        config = IOConfig.from_dict(config_dict)
        assert_equal_dict(config.to_dict(), config_dict)
        expected_repr = OrderedDict(
            (("name", "input1"), ("type", "bool"), ("value", False))
        )
        assert config.get_repr_from_value(False) == expected_repr

    def test_value_non_typed_input(self):
        config_dict = {"name": "input1"}
        config = IOConfig.from_dict(config_dict)
        assert config.validate_value("foo") == "foo"
        assert config.validate_value(1) == 1
        assert config.validate_value(True) is True
        expected_repr = OrderedDict((("name", "input1"), ("value", "foo")))
        assert config.get_repr_from_value("foo") == expected_repr
        assert config.get_repr() == OrderedDict(name="input1")

    def test_value_typed_input(self):
        config_dict = {"name": "input1", "type": IOTypes.BOOL}
        config = IOConfig.from_dict(config_dict)
        with self.assertRaises(ValidationError):
            config.validate_value("foo")
        with self.assertRaises(ValidationError):
            config.validate_value(1)
        with self.assertRaises(ValidationError):
            config.validate_value(None)
        assert config.validate_value(True) is True

    def test_value_typed_input_with_default(self):
        config_dict = {
            "name": "input1",
            "type": IOTypes.INT,
            "value": 12,
            "is_optional": True,
        }
        config = IOConfig.from_dict(config_dict)
        with self.assertRaises(ValidationError):
            config.validate_value("foo")
        assert config.validate_value(1) == 1
        assert config.validate_value(0) == 0
        assert config.validate_value(-1) == -1
        assert config.validate_value(None) == 12
        expected_repr = OrderedDict(
            (("name", "input1"), ("type", "int"), ("value", 12))
        )
        assert config.get_repr_from_value(None) == expected_repr
        assert config.get_repr() == expected_repr

    def test_get_param(self):
        # None string values should exit fast
        assert get_param(
            name="foo", value=1, iotype=IOTypes.INT, is_flag=False
        ) == ParamSpec(
            name="foo",
            iotype=IOTypes.INT,
            value=1,
            entity=None,
            entity_ref=None,
            entity_value=None,
            is_flag=False,
        )
        # Str values none regex
        assert get_param(
            name="foo", value="1", iotype=IOTypes.INT, is_flag=False
        ) == ParamSpec(
            name="foo",
            iotype=IOTypes.INT,
            value="1",
            entity=None,
            entity_ref=None,
            entity_value=None,
            is_flag=False,
        )
        assert get_param(
            name="foo", value="SDfd", iotype=IOTypes.STR, is_flag=False
        ) == ParamSpec(
            name="foo",
            iotype=IOTypes.STR,
            value="SDfd",
            entity=None,
            entity_ref=None,
            entity_value=None,
            is_flag=False,
        )
        # Regex validation dag
        assert get_param(
            name="foo", value="{{ dag.inputs.foo }}", iotype=IOTypes.BOOL, is_flag=True
        ) == ParamSpec(
            name="foo",
            iotype=IOTypes.BOOL,
            value="dag.inputs.foo",
            entity="dag",
            entity_ref="_",
            entity_value="foo",
            is_flag=True,
        )
        # Regex validation dag: invalid params
        with self.assertRaises(ValidationError):
            get_param(
                name="foo",
                value="{{ dag.outputs.foo }}",
                iotype=IOTypes.BOOL,
                is_flag=True,
            )
        with self.assertRaises(ValidationError):
            get_param(
                name="foo",
                value="{{ dag.1.inputs.foo }}",
                iotype=IOTypes.BOOL,
                is_flag=True,
            )
        with self.assertRaises(ValidationError):
            get_param(
                name="foo", value="{{ dag.inputs }}", iotype=IOTypes.BOOL, is_flag=True
            )
        # Regex validation ops
        assert get_param(
            name="foo",
            value="{{ ops.foo-bar.outputs.foo }}",
            iotype=IOTypes.BOOL,
            is_flag=True,
        ) == ParamSpec(
            name="foo",
            iotype=IOTypes.BOOL,
            value="ops.foo-bar.outputs.foo",
            entity="ops",
            entity_ref="foo-bar",
            entity_value="foo",
            is_flag=True,
        )
        assert get_param(
            name="foo",
            value="{{ ops.foo-bar.inputs.foo }}",
            iotype=IOTypes.BOOL,
            is_flag=True,
        ) == ParamSpec(
            name="foo",
            iotype=IOTypes.BOOL,
            value="ops.foo-bar.inputs.foo",
            entity="ops",
            entity_ref="foo-bar",
            entity_value="foo",
            is_flag=True,
        )
        # Regex validation ops: invalid params
        with self.assertRaises(ValidationError):
            get_param(
                name="foo",
                value="{{ ops.foo-bar.outputs }}",
                iotype=IOTypes.BOOL,
                is_flag=True,
            )
        with self.assertRaises(ValidationError):
            get_param(
                name="foo",
                value="{{ ops.foo-bar.inputs }}",
                iotype=IOTypes.BOOL,
                is_flag=True,
            )
        # Regex validation runs
        uid = uuid.uuid4().hex
        assert get_param(
            name="foo",
            value="{{" + "runs.{}.outputs.foo".format(uid) + "}}",
            iotype=IOTypes.BOOL,
            is_flag=True,
        ) == ParamSpec(
            name="foo",
            iotype=IOTypes.BOOL,
            value="runs.{}.outputs.foo".format(uid),
            entity="runs",
            entity_ref=uid,
            entity_value="foo",
            is_flag=True,
        )
        # Regex validation runs: invalid params
        with self.assertRaises(ValidationError):
            get_param(
                name="foo",
                value="{{ runs.foo-bar.outputs.foo }}",
                iotype=IOTypes.BOOL,
                is_flag=True,
            )
        with self.assertRaises(ValidationError):
            get_param(
                name="foo",
                value="{{" + "runs.{}.inputs.foo".format(uid) + "}}",
                iotype=IOTypes.BOOL,
                is_flag=True,
            )
| 33.451187 | 88 | 0.54985 | 1,329 | 12,678 | 5.043642 | 0.125658 | 0.052215 | 0.048038 | 0.093988 | 0.797553 | 0.773236 | 0.738774 | 0.671192 | 0.638669 | 0.570192 | 0 | 0.009238 | 0.325446 | 12,678 | 378 | 89 | 33.539683 | 0.774556 | 0.065862 | 0 | 0.589172 | 0 | 0 | 0.106965 | 0.009647 | 0.006369 | 0 | 0 | 0 | 0.184713 | 1 | 0.047771 | false | 0 | 0.028662 | 0 | 0.079618 | 0.003185 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
30b638da5030fbfdcd883d8276e056b82e0090a9 | 2,200 | bzl | Python | tools/build_rules/test_rules_private.bzl | jobechoi/bazel | c03e9ac2588a590881350f4f9dd859de480240de | [
"Apache-2.0"
] | 16,989 | 2015-09-01T19:57:15.000Z | 2022-03-31T23:54:00.000Z | tools/build_rules/test_rules_private.bzl | jobechoi/bazel | c03e9ac2588a590881350f4f9dd859de480240de | [
"Apache-2.0"
] | 12,562 | 2015-09-01T09:06:01.000Z | 2022-03-31T22:26:20.000Z | tools/build_rules/test_rules_private.bzl | jobechoi/bazel | c03e9ac2588a590881350f4f9dd859de480240de | [
"Apache-2.0"
] | 3,707 | 2015-09-02T19:20:01.000Z | 2022-03-31T17:06:14.000Z | # Copyright 2019 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Bash runfiles library init code for test_rules.bzl."""
# Init code to load the runfiles.bash file.
# The runfiles library itself defines rlocation which you would need to look
# up the library's runtime location, thus we have a chicken-and-egg problem.
INIT_BASH_RUNFILES = [
    "# --- begin runfiles.bash initialization ---",
    "# Copy-pasted from Bazel Bash runfiles library (tools/bash/runfiles/runfiles.bash).",
    "set -euo pipefail",
    'if [[ ! -d "${RUNFILES_DIR:-/dev/null}" && ! -f "${RUNFILES_MANIFEST_FILE:-/dev/null}" ]]; then',
    ' if [[ -f "$0.runfiles_manifest" ]]; then',
    ' export RUNFILES_MANIFEST_FILE="$0.runfiles_manifest"',
    ' elif [[ -f "$0.runfiles/MANIFEST" ]]; then',
    ' export RUNFILES_MANIFEST_FILE="$0.runfiles/MANIFEST"',
    ' elif [[ -f "$0.runfiles/bazel_tools/tools/bash/runfiles/runfiles.bash" ]]; then',
    ' export RUNFILES_DIR="$0.runfiles"',
    " fi",
    "fi",
    'if [[ -f "${RUNFILES_DIR:-/dev/null}/bazel_tools/tools/bash/runfiles/runfiles.bash" ]]; then',
    ' source "${RUNFILES_DIR}/bazel_tools/tools/bash/runfiles/runfiles.bash"',
    'elif [[ -f "${RUNFILES_MANIFEST_FILE:-/dev/null}" ]]; then',
    ' source "$(grep -m1 "^bazel_tools/tools/bash/runfiles/runfiles.bash " \\',
    ' "$RUNFILES_MANIFEST_FILE" | cut -d " " -f 2-)"',
    "else",
    ' echo >&2 "ERROR: cannot find @bazel_tools//tools/bash/runfiles:runfiles.bash"',
    " exit 1",
    "fi",
    "# --- end runfiles.bash initialization ---",
]
# Label of the runfiles library.
BASH_RUNFILES_DEP = "@bazel_tools//tools/bash/runfiles"
| 46.808511 | 102 | 0.676364 | 299 | 2,200 | 4.886288 | 0.41806 | 0.098563 | 0.081451 | 0.102669 | 0.322382 | 0.284052 | 0.284052 | 0.160164 | 0.1013 | 0.1013 | 0 | 0.009912 | 0.174545 | 2,200 | 46 | 103 | 47.826087 | 0.794604 | 0.386818 | 0 | 0.12 | 0 | 0.16 | 0.815373 | 0.489073 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
30b7f6927cf433ce23be01d4a486fea2d59cd181 | 989 | py | Python | python/items_factory.py | yeachan153/GildedRose-Refactoring-Kata | 77a5c01f3a2250882142f3695e3731372330618c | [
"MIT"
] | null | null | null | python/items_factory.py | yeachan153/GildedRose-Refactoring-Kata | 77a5c01f3a2250882142f3695e3731372330618c | [
"MIT"
] | null | null | null | python/items_factory.py | yeachan153/GildedRose-Refactoring-Kata | 77a5c01f3a2250882142f3695e3731372330618c | [
"MIT"
] | null | null | null | from python.items import AgedBrieItem, BackStageItem, RagnarosItem, ConjuredItem
from python.exceptions import InvalidRepositoryException
from typing import Type, Union


class ItemsFactory:
    """Generates correct item classes
    based on specified parameter
    """

    @staticmethod
    def create_item(
        text: str,
    ) -> Type[Union[AgedBrieItem, BackStageItem, RagnarosItem, ConjuredItem]]:
        """Creates correct item class

        Args:
            text (str): [description]

        Returns:
            Item: [description]
        """
        if text == "Aged Brie":
            return AgedBrieItem
        elif text == "Backstage passes to a TAFKAL80ETC concert":
            return BackStageItem
        elif text == "Sulfuras, Hand of Ragnaros":
            return RagnarosItem
        elif text == "Conjured":
            return ConjuredItem
        else:
            raise InvalidRepositoryException(
                f"Repository of type {text} does not exist."
            )
| 28.257143 | 80 | 0.61274 | 88 | 989 | 6.875 | 0.625 | 0.039669 | 0.122314 | 0.161983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002959 | 0.316481 | 989 | 34 | 81 | 29.088235 | 0.892012 | 0.152679 | 0 | 0 | 1 | 0 | 0.160875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0.05 | 0.15 | 0 | 0.45 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
30beca55a709baf5dd47f082789523e6f07954f1 | 2,005 | py | Python | app/utils/message.py | NehaDC-IMC/connect-processor-template-for-python | 9abdb4b513c648232ffa2092db7e3b8d02575bcb | [
"Apache-2.0"
] | null | null | null | app/utils/message.py | NehaDC-IMC/connect-processor-template-for-python | 9abdb4b513c648232ffa2092db7e3b8d02575bcb | [
"Apache-2.0"
] | null | null | null | app/utils/message.py | NehaDC-IMC/connect-processor-template-for-python | 9abdb4b513c648232ffa2092db7e3b8d02575bcb | [
"Apache-2.0"
] | null | null | null | class TierFulfillmentMessages(object):
RROR_PROCESSING_TIER_REQUEST = 'There has been an error processing the tier config request. Error description: {}'
class BasePurchaseMessages:
pass
class BaseChangeMessages:
pass
class BaseSuspendMessages:
NOTHING_TO_DO = 'Suspend method for request {} - Nothing to do'
class BaseCancelMessages:
ACTIVATION_TILE_RESPONSE = 'Operation cancel done successfully'
class BaseSharedMessages:
ACTIVATING_TEMPLATE_ERROR = 'There has been a problem activating the template. Description {}'
EMPTY_ACTIVATION_TILE = 'Activation tile response for marketplace {} cannot be empty'
ERROR_GETTING_CONFIGURATION = 'There was an exception while getting configured info for the specified ' \
'marketplace {}'
NOT_FOUND_TEMPLATE = 'It was not found any template of type <{}> for the marketplace with id <{}>. ' \
'Please review the configuration.'
NOT_ALLOWED_DOWNSIZE = 'At least one of the requested items at the order is downsized which ' \
' is not allowed. Please review your order.'
RESPONSE_ERROR = 'Error: {} -> {}'
RESPONSE_DOES_NOT_HAVE_ATTRIBUTE = 'Response does not have attribute {}. Check your request params. ' \
'Response status - {}'
WAITING_SUBSCRIPTION_ACTIVATION = 'The subscription has been updated, waiting Vendor/ISV to update the ' \
'subscription status'
class Message:
class Shared(BaseSharedMessages):
tier_request = TierFulfillmentMessages()
class Purchase(BasePurchaseMessages):
FAIL_REPEATED_PRODUCTS = 'It has been detected repeated products for the same purchase. ' \
'Please review the configured plan.'
class Change(BaseChangeMessages):
pass
class Suspend(BaseSuspendMessages):
pass
class Cancel(BaseCancelMessages):
pass
| 37.830189 | 118 | 0.66783 | 205 | 2,005 | 6.4 | 0.458537 | 0.021341 | 0.018293 | 0.028963 | 0.042683 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.269825 | 2,005 | 52 | 119 | 38.557692 | 0.896175 | 0 | 0 | 0.138889 | 0 | 0 | 0.434066 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.138889 | 0 | 0 | 0.638889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
30d1376ed4a53ab1347ac9676cdec36e14b4c7cc | 823 | py | Python | interface/popup.py | spatocode/seeker | b7c3ccf5baa13ce12f6f9f33cc73fa9f09b8a1dc | [
"BSD-2-Clause"
] | null | null | null | interface/popup.py | spatocode/seeker | b7c3ccf5baa13ce12f6f9f33cc73fa9f09b8a1dc | [
"BSD-2-Clause"
] | null | null | null | interface/popup.py | spatocode/seeker | b7c3ccf5baa13ce12f6f9f33cc73fa9f09b8a1dc | [
"BSD-2-Clause"
] | null | null | null | import wx
class PopupMenu(wx.Menu):
    def __init__(self, parent):
        super(PopupMenu, self).__init__()
        self.parent = parent

        sendPackets = wx.MenuItem(self, wx.NewId(), "Send Packets")
        self.Append(sendPackets)

        copyPackets = wx.MenuItem(self, wx.NewId(), "Copy Packets")
        self.Append(copyPackets)

        whois = wx.MenuItem(self, wx.NewId(), "Whois")
        self.Append(whois)

        filterPackets = wx.MenuItem(self, wx.NewId(), "Filter")
        self.Append(filterPackets)

        rts = wx.MenuItem(self, wx.NewId(), "Reconstruct TCP Session")
        self.Append(rts)

        rus = wx.MenuItem(self, wx.NewId(), "Reconstruct UDP Session")
        self.Append(rus)

        copyAddress = wx.MenuItem(self, wx.NewId(), "Copy Address")
        self.Append(copyAddress)
| 29.392857 | 70 | 0.617254 | 94 | 823 | 5.319149 | 0.308511 | 0.14 | 0.196 | 0.224 | 0.354 | 0.228 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244228 | 823 | 27 | 71 | 30.481481 | 0.803859 | 0 | 0 | 0 | 0 | 0 | 0.113001 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.052632 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
30ee18a6b2f022e0885f14c7de3dc685941faaaa | 2,604 | py | Python | puente-etl/jobs.py | puente-development-international/serverless | 5eb89737821e3c711fc9ab9f8738c741e0ff5e03 | [
"MIT"
] | null | null | null | puente-etl/jobs.py | puente-development-international/serverless | 5eb89737821e3c711fc9ab9f8738c741e0ff5e03 | [
"MIT"
] | null | null | null | puente-etl/jobs.py | puente-development-international/serverless | 5eb89737821e3c711fc9ab9f8738c741e0ff5e03 | [
"MIT"
] | 1 | 2020-07-05T22:04:07.000Z | 2020-07-05T22:04:07.000Z | from load_to_s3 import export_to_s3_as_dataframe, export_to_s3_as_csv
from transform_form_specifications import get_custom_form_schema_df
from transform_form_results import get_form_results_df
from utils.clients import Clients
from utils.constants import PuenteTables
def run_transform_jobs(event, context):
    """
    Orchestrate transformations
    """
    # Initialize AWS S3 Client
    s3_client = Clients.S3

    # if event.get(PuenteTables.ALLERGIES):
    if event.get(PuenteTables.FORM_RESULTS):
        # raw_results == True does not aggregate the results, pass False to ensure aggregation
        df = get_form_results_df(raw_results=True)
        for org in df['form_result_surveying_organization'].unique():
            org_df = df[df['form_result_surveying_organization'] == org]
            for custom_form in org_df['custom_form_id'].unique():
                final_df = org_df[org_df['custom_form_id'] == custom_form]
                export_to_s3_as_csv(s3_client, final_df, f"form-result-{custom_form}", org)

    if event.get(PuenteTables.FORM_SPECIFICATIONS):
        df = get_custom_form_schema_df()
        export_to_s3_as_dataframe(s3_client, df, PuenteTables.FORM_SPECIFICATIONS)

    # TODO: PUENTE FORMS
    # if event.get(PuenteTables.FORM_ASSET_RESULTS):
    # if event.get(PuenteTables.ASSETS):
    # if event.get(PuenteTables.HISTORY_ENVIRONMENTAL_HEALTH):
    # if event.get(PuenteTables.HISTORY_MEDICAL):
    # if event.get(PuenteTables.SURVEY_DATA):
    # if event.get(PuenteTables.VITALS):
    # if event.get(PuenteTables.EVALUATION_MEDICAL):
    # if event.get(PuenteTables.OFFLINE_FORM):
    # if event.get(PuenteTables.OFFLINE_FORM_REQUEST):
    # if event.get(PuenteTables.HOUSEHOLD):
    # if event.get(PuenteTables.ROLE):
    # if event.get(PuenteTables.SESSION):
    # if event.get(PuenteTables.USER):


if __name__ == '__main__':
    jobs = {
        PuenteTables.ALLERGIES: False,
        PuenteTables.ASSETS: False,
        PuenteTables.EVALUATION_MEDICAL: False,
        PuenteTables.FORM_ASSET_RESULTS: False,
        PuenteTables.FORM_RESULTS: True,
        PuenteTables.FORM_SPECIFICATIONS: False,
        PuenteTables.HISTORY_ENVIRONMENTAL_HEALTH: False,
        PuenteTables.HISTORY_MEDICAL: False,
        # PuenteTables.OFFLINE_FORM: False,
        # PuenteTables.OFFLINE_FORM_REQUEST: False,
        PuenteTables.HOUSEHOLD: False,
        PuenteTables.ROLE: False,
        PuenteTables.SESSION: False,
        PuenteTables.SURVEY_DATA: False,
        PuenteTables.USER: False,
        PuenteTables.VITALS: False
    }
    run_transform_jobs(jobs, {})
| 38.294118 | 94 | 0.715822 | 315 | 2,604 | 5.609524 | 0.234921 | 0.063384 | 0.090549 | 0.199208 | 0.255801 | 0.037351 | 0 | 0 | 0 | 0 | 0 | 0.004764 | 0.193932 | 2,604 | 67 | 95 | 38.865672 | 0.837065 | 0.308756 | 0 | 0 | 0 | 0 | 0.073046 | 0.052661 | 0 | 0 | 0 | 0.014925 | 0 | 1 | 0.028571 | false | 0 | 0.142857 | 0 | 0.171429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a5003aee3fdfeb112214e2ce995e4b7632c2a000 | 1,077 | py | Python | drdown/medicalrecords/views/view_static_data.py | fga-gpp-mds/2018.1-Cris-Down | 3423374360105b06ac2c57a320bf2ee8deaa08a3 | [
"MIT"
] | 11 | 2018-03-11T01:21:43.000Z | 2018-06-19T21:51:33.000Z | drdown/medicalrecords/views/view_static_data.py | fga-gpp-mds/2018.1-Grupo12 | 3423374360105b06ac2c57a320bf2ee8deaa08a3 | [
"MIT"
] | 245 | 2018-03-13T19:07:14.000Z | 2018-07-07T22:46:00.000Z | drdown/medicalrecords/views/view_static_data.py | fga-gpp-mds/2018.1-Grupo12 | 3423374360105b06ac2c57a320bf2ee8deaa08a3 | [
"MIT"
] | 12 | 2018-08-24T13:26:04.000Z | 2021-03-27T16:28:22.000Z | from drdown.users.models.model_health_team import HealthTeam
from ..models.model_static_data import StaticData
from ..models.model_medical_record import MedicalRecord
from drdown.users.models.model_user import User
from drdown.users.models.model_patient import Patient
from django.views.generic import CreateView, DeleteView, UpdateView, ListView
from django.urls import reverse_lazy
from django.contrib.auth.mixins import UserPassesTestMixin
from ..forms.static_data_forms import StaticDataForm
from ..views.views_base import BaseViewForm, BaseViewUrl, BaseViewPermissions
class StaticDataCreateView(BaseViewUrl, BaseViewForm, BaseViewPermissions,
                           CreateView):
    model = StaticData
    form_class = StaticDataForm
    template_name = 'medicalrecords/medicalrecord_static_data_form.html'


class StaticDataUpdateView(BaseViewUrl, UpdateView):
    model = StaticData
    form_class = StaticDataForm
    template_name = 'medicalrecords/medicalrecord_static_data_form.html'
    slug_url_kwarg = 'username'
    slug_field = 'patient__user__username'
a50616668f1b0fda24a43ea8fe0b13c1c15724d0 | 978 | py | Python | files/hacktm-ctf-2020/strange-pcap/solution/solve.py | J4sp3r/j4sp3r.github.io | 0bab5698fdeab57426c90a315204bcd39cd79fa4 | [
"MIT"
] | null | null | null | files/hacktm-ctf-2020/strange-pcap/solution/solve.py | J4sp3r/j4sp3r.github.io | 0bab5698fdeab57426c90a315204bcd39cd79fa4 | [
"MIT"
] | null | null | null | files/hacktm-ctf-2020/strange-pcap/solution/solve.py | J4sp3r/j4sp3r.github.io | 0bab5698fdeab57426c90a315204bcd39cd79fa4 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
usb_codes = {
    0x04: "aA", 0x05: "bB", 0x06: "cC", 0x07: "dD", 0x08: "eE", 0x09: "fF",
    0x0A: "gG", 0x0B: "hH", 0x0C: "iI", 0x0D: "jJ", 0x0E: "kK", 0x0F: "lL",
    0x10: "mM", 0x11: "nN", 0x12: "oO", 0x13: "pP", 0x14: "qQ", 0x15: "rR",
    0x16: "sS", 0x17: "tT", 0x18: "uU", 0x19: "vV", 0x1A: "wW", 0x1B: "xX",
    0x1C: "yY", 0x1D: "zZ", 0x1E: "1!", 0x1F: "2@", 0x20: "3#", 0x21: "4$",
    0x22: "5%", 0x23: "6^", 0x24: "7&", 0x25: "8*", 0x26: "9(", 0x27: "0)",
    0x2C: " ", 0x2D: "-_", 0x2E: "=+", 0x2F: "[{", 0x30: "]}", 0x32: "#~",
    0x33: ";:", 0x34: "'\"", 0x36: ",<", 0x37: ".>", 0x4f: ">", 0x50: "<"
}

buff = ""
pos = 0
for x in open("strokes", "r").readlines():
    x = x.strip()
    if not x:
        continue
    code = int(x[4:6], 16)
    if code == 0:
        continue
    if code == 0x28:
        buff += "[ENTER]"
        continue
    if int(x[0:2], 16) == 2 or int(x[0:2], 16) == 0x20:
        buff += usb_codes[code][1]
    else:
        buff += usb_codes[code][0]

print(buff)
| 28.764706 | 69 | 0.474438 | 150 | 978 | 3.066667 | 0.74 | 0.052174 | 0.021739 | 0.026087 | 0.034783 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210733 | 0.218814 | 978 | 33 | 70 | 29.636364 | 0.391361 | 0.021472 | 0 | 0.111111 | 0 | 0 | 0.139121 | 0 | 0 | 0 | 0.209205 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a50618df62098357329e3f13f260013ab38c4a27 | 1,410 | py | Python | problems/easy.py | mrcoles/coding-bee | 04cc5bc8dae92640fc8f01501b3f93e18a2a34c5 | [
"MIT"
] | 1 | 2021-07-26T18:29:16.000Z | 2021-07-26T18:29:16.000Z | problems/easy.py | recursecenter/coding-bee | 04cc5bc8dae92640fc8f01501b3f93e18a2a34c5 | [
"MIT"
] | null | null | null | problems/easy.py | recursecenter/coding-bee | 04cc5bc8dae92640fc8f01501b3f93e18a2a34c5 | [
"MIT"
] | 3 | 2018-06-25T19:10:47.000Z | 2018-09-17T10:43:31.000Z |
# original problems
def easy_sum(a, b):
"""Takes two numbers and returns their sum"""
return a + b
def easy_product(a, b):
"""Takes two numbers and returns their product"""
return a * b
def easy_concat(a, b):
"""Takes two strings and returns their concatenation"""
return a + b
def easy_emptylist(l):
"""Takes a list and returns True for empty list, False for nonempty"""
return not l
def easy_iseven(x):
"""Takes a number and returns True if it's even, otherwise False"""
return x % 2 == 0
def easy_and(b1, b2):
"Takes two booleans and returns their AND"
return b1 and b2
def easy_or(b1, b2):
"""Takes two booleans and returns their OR"""
return b1 or b2
def easy_lt(a, b):
"""Takes two numbers and return whether num1 is less than num2"""
return a < b
# new sp2 2018
def easy_contains(a, b):
"""Takes in 2 non-empty strings and returns True if the first value contains a substring that matches the second value."""
return b in a
def easy_helloname(a):
"""Takes in a string representing a name and returns a new string saying hello in a very specific format, e.g., if the name is 'Dave', it should return 'Hello, Dave!'"""
return 'Hello, {}!'.format(a)
def easy_iscat(a):
"""Takes in a string and returns 'meow' if it is the exact string 'cat', otherwise 'woof'."""
return 'meow' if a == 'cat' else 'woof'
# --- file: src/telliot_core/queries/snapshot.py (repo: tellor-io/telliot-core, license: MIT) ---
import logging
from dataclasses import dataclass
from telliot_core.dtypes.value_type import ValueType
from telliot_core.queries.abi_query import AbiQuery
logger = logging.getLogger(__name__)
@dataclass
class Snapshot(AbiQuery):
"""Returns the result for a given option ID (a specific proposal) on Snapshot.
An array of values representing the amount of votes (uints) for each vote option should be returned
Attributes:
proposal_id:
            Specifies the ID of a valid proposal on Snapshot;
see https://docs.snapshot.org/graphql-api for reference
"""
proposal_id: str
#: ABI used for encoding/decoding parameters
abi = [{"name": "proposal_id", "type": "string"}]
@property
def value_type(self) -> ValueType:
"""Data type returned for a Snapshot query.
- `uint256[]`: variable-length array of 256-bit values with 18 decimals of precision
- `packed`: false
"""
return ValueType(abi_type="uint256[]", packed=False)
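For context on what `ValueType(abi_type="uint256[]", packed=False)` implies downstream, here is a minimal stdlib-only sketch of the standard (non-packed) ABI encoding of a single `uint256[]` argument — an offset word, a length word, then one 32-byte big-endian word per element. This is an illustration of the layout, not telliot-core's actual encoder.

```python
def encode_uint256_array(values):
    """Standard (non-packed) ABI-encode a single dynamic uint256[] argument."""
    head = (32).to_bytes(32, "big")                       # offset word pointing at the tail data
    length = len(values).to_bytes(32, "big")              # array length word
    items = b"".join(v.to_bytes(32, "big") for v in values)
    return head + length + items

encoded = encode_uint256_array([7, 42])
print(len(encoded))  # 128 -> 4 words: offset, length, 7, 42
```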
# --- file: rally/plugins/openstack/scenarios/zaqar/utils.py (repo: varuntiwari27/rally, license: Apache-2.0) ---
# Copyright (c) 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.plugins.openstack import scenario
from rally.task import atomic
class ZaqarScenario(scenario.OpenStackScenario):
"""Base class for Zaqar scenarios with basic atomic actions."""
@atomic.action_timer("zaqar.create_queue")
def _queue_create(self, **kwargs):
"""Create a Zaqar queue with random name.
:param kwargs: other optional parameters to create queues like
"metadata"
:returns: Zaqar queue instance
"""
name = self.generate_random_name()
return self.clients("zaqar").queue(name, **kwargs)
@atomic.action_timer("zaqar.delete_queue")
def _queue_delete(self, queue):
"""Removes a Zaqar queue.
:param queue: queue to remove
"""
queue.delete()
def _messages_post(self, queue, messages, min_msg_count, max_msg_count):
"""Post a list of messages to a given Zaqar queue.
:param queue: post the messages to queue
:param messages: messages to post
:param min_msg_count: minimum number of messages
:param max_msg_count: maximum number of messages
"""
with atomic.ActionTimer(self, "zaqar.post_between_%s_and_%s_messages" %
(min_msg_count, max_msg_count)):
queue.post(messages)
@atomic.action_timer("zaqar.list_messages")
def _messages_list(self, queue):
"""Gets messages from a given Zaqar queue.
:param queue: get messages from queue
:returns: messages iterator
"""
return queue.messages()
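`atomic.action_timer` / `atomic.ActionTimer` wrap each action in a named timer. A minimal stdlib-only sketch of that decorator-plus-context-manager pattern (an illustration, not Rally's actual implementation):

```python
import time
from functools import wraps

class ActionTimer:
    """Context manager that records elapsed seconds under a name."""
    def __init__(self, store, name):
        self.store, self.name = store, name
    def __enter__(self):
        self._start = time.perf_counter()
        return self
    def __exit__(self, exc_type, exc, tb):
        self.store[self.name] = time.perf_counter() - self._start
        return False  # never swallow exceptions

def action_timer(name):
    """Decorator form: time the wrapped method under `name`."""
    def decorator(func):
        @wraps(func)
        def wrapper(self, *args, **kwargs):
            with ActionTimer(self.timings, name):
                return func(self, *args, **kwargs)
        return wrapper
    return decorator

class Demo:
    def __init__(self):
        self.timings = {}

    @action_timer("demo.work")
    def work(self):
        return sum(range(1000))

d = Demo()
print(d.work())                 # 499500
print("demo.work" in d.timings) # True
```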
# --- file: tests/pricehunt_test.py (repo: adipid/pricehunt-py, license: MIT) ---
import json
import os
import unittest
import price_checker
from product import Product
def load_json(filename):
with open(filename) as json_file:
imported_file = json.load(json_file)
return imported_file
cwd = os.path.dirname(os.path.realpath(__file__))
products_data = load_json(os.path.join(cwd, "test-products.json"))
products_list = []
class PricehuntTest(unittest.TestCase):
    def setUp(self):
        products_list.clear()  # reset the module-level list so repeated setUp calls do not accumulate duplicates
        for i in range(len(products_data)):
            products_list.append(Product(products_data[i]))
def test_correct_product0(self):
self.assertEqual("Logitech MX Master 3", products_list[0].name)
def test_correct_product1(self):
self.assertEqual("Logitech MX Keys", products_list[1].name)
def test_correct_product2(self):
self.assertEqual("Apple Watch Series 5 Cellular 44mm", products_list[2].name)
def test_lowest_price_product0(self):
self.assertEqual("1149", products_list[0].price)
def test_compare_price_product0(self):
diff = price_checker.compare_prices(products_list[0].price, products_list[0].purchased_price)
self.assertEqual("50", diff)
def test_open_policyFalse(self):
self.assertFalse(price_checker.open_policy(61, "Komplett.no"))
def test_open_policyTrue0(self):
self.assertTrue(price_checker.open_policy(60, "Komplett.no"))
def test_open_policyTrue1(self):
self.assertTrue(price_checker.open_policy(10, "Komplett.no"))
# def test_add_product(self):
# self.fail()
#
# def test_remove_product(self):
# self.fail()
#
# def test_get_list(self):
# self.fail()
if __name__ == '__main__':
unittest.main()
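The tests above pin down the contract of two `price_checker` helpers. A hypothetical stdlib-only sketch that would satisfy these assertions (the real module may differ; the 60-day window for "Komplett.no" is an assumption inferred from the tests):

```python
def compare_prices(current_price, purchased_price):
    """Hypothetical: return the price difference as a string, as the tests expect."""
    return str(abs(int(purchased_price) - int(current_price)))

def open_policy(days_since_purchase, store):
    """Hypothetical: True while the store's assumed return window is still open."""
    return_windows = {"Komplett.no": 60}  # assumed 60-day open-purchase window
    return days_since_purchase <= return_windows.get(store, 0)

print(compare_prices("1149", "1199"))  # 50
print(open_policy(60, "Komplett.no"))  # True
print(open_policy(61, "Komplett.no"))  # False
```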
# --- file: m2py/thermo/__init__.py (repo: caiorss/m2py, license: BSD-3-Clause) ---
"""
Thermodynamic function collection
* Steam tables - module steam
"""
import os
import sys
this = os.path.dirname(os.path.abspath(__file__))
path_dir = os.path.join(this, "..")
sys.path.append(path_dir)
from . import xsteam
from . import gas

# --- file: Chapter08/lecture02.py (repo: ee06b056/IntoToProgramInPython, license: MIT) ---
import datetime
class Person(object):
def __init__(self, name):
self.name = name
try:
lastBlank = name.rindex(' ')
self.lastName = name[lastBlank+1:]
        except ValueError:  # no space in the name: use the whole name as the last name
            self.lastName = name
self.birthday = None
def getName(self):
return self.name
def getLastName(self):
return self.lastName
def setBirthday(self, birthday):
self.birthday = birthday
def getAge(self):
if self.birthday == None:
raise ValueError
return (datetime.date.today() - self.birthday).days
def __lt__(self, other):
if self.lastName == other.lastName:
return self.name < other.name
return self.lastName < other.lastName
def __str__(self):
return self.name
class MITPerson(Person):
nextIdNum = 0
def __init__(self,name):
Person.__init__(self, name)
self.idNum = MITPerson.nextIdNum
MITPerson.nextIdNum += 1
def getIdNum(self):
return self.idNum
def __lt__(self, other):
return self.idNum < other.idNum
class Student(MITPerson):
pass
class UG(Student):
def __init__(self, name, classYear):
Student.__init__(self, name)
self.year = classYear
def getClass(self):
return self.year
class Grad(Student):
pass
class TransferStudent(Student):
def __init__(self, name, fromSchool):
Student.__init__(self, name)
self.fromSchool = fromSchool
def getOldSchool(self):
return self.fromSchool
class Grades(object):
def __init__(self):
self.students = []
p5 = Grad('Buzz Aldrin')
p6 = UG('Billy Beaver', 1984)
print(p5)
print(type(p5)==Grad)
print(p6, type(p6) == UG)
print(UG)
me = Person('Michale Guttag')
him = Person('Barack Hussein Obama')
her = Person('Madonna')
print(him.getLastName())
him.setBirthday(datetime.date(1961,8,4))
her.setBirthday(datetime.date(1958, 8, 16))
p1 = MITPerson('Barbara Beaver')
print(str(p1) + '\'s id number is ' + str(p1.getIdNum()))

# --- file: block_2/task_3.py (repo: erdyneevzt/stepik_python, license: MIT) ---
i = 1
while i != 0:
i = int(input())
if i > 100:
break
elif i < 10:
continue
else:
print(i)
# --- file: Python/1170.py (repo: alinemarchiori/URI_Exercises_Solved, license: MIT) ---
n = int(input())
for i in range(n):
dias = 0
valor = float(input())
while valor>1:
valor = valor/2
dias += 1
    print(dias, "dias")
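The loop above counts how many halvings bring a value down to at most 1. The same logic as a testable function:

```python
def dias_para_pagar(valor):
    """Count how many times `valor` must be halved before it is <= 1."""
    dias = 0
    while valor > 1:
        valor /= 2
        dias += 1
    return dias

print(dias_para_pagar(100.0))  # 7
```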
# --- file: plastiqpublicapi/models/client_secrets_response.py (repo: jeffkynaston/sdk-spike-python-apimatic, license: MIT) ---
# -*- coding: utf-8 -*-
"""
plastiqpublicapi
This file was automatically generated by APIMATIC v3.0 (
https://www.apimatic.io ).
"""
class ClientSecretsResponse(object):
"""Implementation of the 'Client Secrets Response' model.
TODO: type model description here.
Attributes:
client_secret (string): Client Secret returned by /client-secrets
"""
# Create a mapping from Model property names to API property names
_names = {
"client_secret": 'clientSecret'
}
def __init__(self,
client_secret=None):
"""Constructor for the ClientSecretsResponse class"""
# Initialize members of the class
self.client_secret = client_secret
@classmethod
def from_dictionary(cls,
dictionary):
"""Creates an instance of this model from a dictionary
Args:
dictionary (dictionary): A dictionary representation of the object
as obtained from the deserialization of the server's response. The
keys MUST match property names in the API description.
Returns:
object: An instance of this structure class.
"""
if dictionary is None:
return None
# Extract variables from the dictionary
client_secret = dictionary.get('clientSecret')
# Return an object of this model
return cls(client_secret)
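Usage of the `from_dictionary` hook, with a trimmed copy of the class so the snippet stands alone:

```python
class ClientSecretsResponse:
    """Trimmed copy of the model above, for demonstration only."""
    _names = {"client_secret": "clientSecret"}

    def __init__(self, client_secret=None):
        self.client_secret = client_secret

    @classmethod
    def from_dictionary(cls, dictionary):
        if dictionary is None:
            return None
        return cls(dictionary.get("clientSecret"))

resp = ClientSecretsResponse.from_dictionary({"clientSecret": "s3cr3t"})
print(resp.client_secret)                           # s3cr3t
print(ClientSecretsResponse.from_dictionary(None))  # None
```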
# --- file: pipy/tests/test_utils.py (repo: rhsmits91/pipy, license: MIT) ---
import pandas as pd
from pipy.pipeline.utils import combine_series
def test_combine_series():
s1 = pd.Series(dict(zip("AB", (1, 2))))
s2 = pd.Series(dict(zip("BC", (20, 30))))
s3 = combine_series(s1, s2)
pd.testing.assert_series_equal(s3, pd.Series({"A": 1, "B": 20, "C": 30}))
# --- file: cephlm/tests/cephmetrics/ceph/test_connectivity_status.py (repo: ArdanaCLM/cephlm, license: Apache-2.0) ---
# (c) Copyright 2016 Hewlett Packard Enterprise Development LP
# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import mock
import unittest
from itertools import count
from cephlm.common.exceptions import CephCommandException
from cephlm.tests.cephmetrics.ceph.test_data import * # noqa
from cephlm.cephmetrics.ceph import cluster
from cephlm.utils.values import Severity
class TestCluster(unittest.TestCase):
def setUp(self):
self.monitors = ClusterStatusData.HEALTH_OK['quorum_names']
@mock.patch(
'cephlm.cephmetrics.ceph.cluster.Cluster._verify_monitor_connectivity')
@mock.patch('cephlm.cephmetrics.ceph.cluster.Cluster.get_monitors')
def test_connectivity_ok(self, mock_get_mon, mock_conn_status):
mock_get_mon.return_value = self.monitors
mock_conn_status.return_value = (self.monitors, [])
result = cluster.Cluster.check_monitor_connectivity()
self.assertEqual(str(result),
'Monitors %s are reachable.' %
', '.join(mock_conn_status.return_value[0]))
self.assertEqual(result.value, Severity.ok)
@mock.patch(
'cephlm.cephmetrics.ceph.cluster.Cluster._verify_monitor_connectivity')
@mock.patch('cephlm.cephmetrics.ceph.cluster.Cluster.get_monitors')
def test_connectivity_warn(self, mock_get_mon, mock_conn_status):
mock_get_mon.return_value = self.monitors
mock_conn_status.return_value = (self.monitors[:2], self.monitors[2:])
result = cluster.Cluster.check_monitor_connectivity()
self.assertEqual(str(result),
'Monitor(s) %s is/are unreachable.' %
', '.join(mock_conn_status.return_value[1]))
self.assertEqual(result.value, Severity.warn)
@mock.patch(
'cephlm.cephmetrics.ceph.cluster.Cluster._verify_monitor_connectivity')
@mock.patch('cephlm.cephmetrics.ceph.cluster.Cluster.get_monitors')
def test_connectivity_failure(self, mock_get_mon, mock_conn_status):
mock_get_mon.return_value = self.monitors
mock_conn_status.return_value = ([], self.monitors)
result = cluster.Cluster.check_monitor_connectivity()
self.assertEqual(str(result),
'Monitor(s) %s is/are unreachable.' %
', '.join(mock_conn_status.return_value[1]))
self.assertEqual(result.value, Severity.fail)
@mock.patch('cephlm.cephmetrics.ceph.cluster.Cluster.get_monitors')
def test_connectivity_unknown_error_cmd(self, mock_get_mon):
msg = "No such file or directory"
mock_get_mon.side_effect = CephCommandException(msg)
result = cluster.Cluster.check_monitor_connectivity()
self.assertEqual(str(result), 'Probe error: %s.' % msg)
self.assertEqual(result.value, Severity.unknown)
@mock.patch('cephlm.cephmetrics.ceph.cluster.Cluster._get_ceph_config')
@mock.patch.object(cluster, 'rados')
def test_verify_connectivity_ok(self, mock_rados, mock_get_ceph_config):
class DummyRados:
def __init__(self, clustername, conffile):
pass
def __enter__(self):
return self
def __exit__(self, type_, value, traceback):
return False
def mon_command(self, cmd, inbuf, timeout, target):
return 0, '', ''
mock_get_ceph_config.return_value = ('ceph', 'config',
'/etc/ceph/ceph.conf')
mock_rados.Rados = DummyRados
result = cluster.Cluster._verify_monitor_connectivity(self.monitors)
self.assertEqual(result[0], self.monitors)
self.assertEqual(len(result[1]), 0)
@mock.patch('cephlm.cephmetrics.ceph.cluster.Cluster._get_ceph_config')
@mock.patch.object(cluster, 'rados')
def test_verify_connectivity_error_all(self, mock_rados,
mock_get_ceph_config):
class DummyRados:
def __init__(self, clustername, conffile):
pass
def __enter__(self):
return self
def __exit__(self, type_, value, traceback):
return False
def mon_command(self, cmd, inbuf, timeout, target):
return -4, '', ''
mock_get_ceph_config.return_value = ('ceph', 'config',
'/etc/ceph/ceph.conf')
mock_rados.Rados = DummyRados
result = cluster.Cluster._verify_monitor_connectivity(self.monitors)
self.assertEqual(len(result[0]), 0)
self.assertEqual(result[1], self.monitors)
@mock.patch('cephlm.cephmetrics.ceph.cluster.Cluster._get_ceph_config')
@mock.patch.object(cluster, 'rados')
def test_verify_connectivity_error_one(self, mock_rados,
mock_get_ceph_config):
class DummyRados:
_ids = count(0)
def __init__(self, clustername, conffile):
                self.id = next(DummyRados._ids)  # itertools.count has no .next() method in Python 3
def __enter__(self):
return self
def __exit__(self, type_, value, traceback):
return False
def mon_command(self, cmd, inbuf, timeout, target):
if self.id == 0:
return (-4, '', '')
else:
return (0, '', '')
mock_get_ceph_config.return_value = ('ceph', 'config',
'/etc/ceph/ceph.conf')
mock_rados.Rados = DummyRados
result = cluster.Cluster._verify_monitor_connectivity(self.monitors)
self.assertEqual(result[0], self.monitors[1:])
self.assertEqual(result[1], [self.monitors[0]])
# --- file: src/mightypy/ml/_tree.py (repo: NishantBaheti/mightypy, license: MIT) ---
from __future__ import annotations
from typing import Union, Tuple, List
import warnings
import numpy as np
class Question:
"""Question is a thershold/matching concept for splitting the node of the Decision Tree
Args:
column_index (int): Column index to be chosen from the array passed at the matching time.
value (Union[int, str, float, np.int64, np.float64]): Threshold value/ matching value.
header (str): column/header name.
"""
def __init__(self, column_index: int, value: Union[int, str, float, np.int64, np.float64], header: str):
"""Constructor
"""
self.column_index = column_index
self.value = value
self.header = header
def match(self, example: Union[list, np.ndarray]) -> bool:
"""Matching function to decide based on example whether result is true or false.
Args:
example (Union[list, np.ndarray]): Example to compare with question parameters.
Returns:
bool: if the example is in threshold or value matches then results true or false.
"""
if isinstance(example, list):
example = np.array(example, dtype="O")
val = example[self.column_index]
# adding numpy int and float data types as well
if isinstance(val, (int, float, np.int64, np.float64)):
# a condition for question to return True or False for numeric value
return float(val) >= float(self.value)
else:
return str(val) == str(self.value) # categorical data comparison
def __repr__(self):
condition = "=="
if isinstance(self.value, (int, float, np.int64, np.float64)):
condition = ">="
return f"Is {self.header} {condition} {self.value} ?"
class Node:
"""A Tree node either Decision Node or Leaf Node
Args:
question (Question, optional): question object. Defaults to None.
true_branch (Node, optional): connection to node at true side of the branch. Defaults to None.
false_branch (Node, optional): connection to node at false side of the branch. Defaults to None.
uncertainty (float, optional): Uncertainty value like gini,entropy,variance etc. Defaults to None.
leaf_value (Union[dict,int,float], optional): Leaf node/final node's value. Defaults to None.
"""
def __init__(self, question: Question = None, true_branch: Node = None, false_branch: Node = None, uncertainty: float = None, *, leaf_value: Union[dict, int, float] = None):
"""Constructor
"""
self.question = question
self.true_branch = true_branch
self.false_branch = false_branch
self.uncertainty = uncertainty
self.leaf_value = leaf_value
@property
def _is_leaf_node(self) -> bool:
"""Check if this node is leaf node or not.
Returns:
bool: True if leaf node else false.
"""
return self.leaf_value is not None
class DecisionTreeClassifier:
"""Decision Tree Based Classification Model
Args:
max_depth (int, optional): max depth of the tree. Defaults to 100.
min_samples_split (int, optional): min size of the sample at the time of split. Defaults to 2.
criteria (str, optional): what criteria to use for information. Defaults to 'gini'. available 'gini','entropy'.
"""
def __init__(self, max_depth: int = 100, min_samples_split: int = 2, criteria: str = 'gini'):
"""Constructor
"""
self._X = None
self._y = None
self._feature_names = None
self._target_name = None
self._tree = None
self.max_depth = max_depth
self.min_samples_split = min_samples_split
self.criteria = criteria
def _count_dict(self, a: np.ndarray) -> dict:
"""Count class frequecies and get a dictionary from it
Args:
a (np.ndarray): input array. shape should be (m,1) for m samples.
Returns:
dict: categories/classes freq dictionary.
"""
unique_values = np.unique(a, return_counts=True)
zipped = zip(*unique_values)
dict_obj = dict(zipped)
return dict_obj
def _gini_impurity(self, arr: np.ndarray) -> float:
"""Calculate Gini Impurity
Args:
arr (np.ndarray): input array.
Returns:
float: gini impurity value.
"""
classes, counts = np.unique(arr, return_counts=True)
gini_score = 1 - np.square(counts / arr.shape[0]).sum(axis=0)
return gini_score
def _entropy(self, arr: np.ndarray) -> float:
"""Calculate Entropy
Args:
arr (np.ndarray): input array.
Returns:
float: entropy result.
"""
classes, counts = np.unique(arr, return_counts=True)
p = counts / arr.shape[0]
entropy_score = (-p * np.log2(p)).sum(axis=0)
return entropy_score
def _uncertainty(self, a: np.ndarray) -> float:
        """Calculate uncertainty
Args:
a (np.ndarray): input array
Returns:
float: uncertainty value
"""
if self.criteria == "entropy":
value = self._entropy(a)
elif self.criteria == "gini":
value = self._gini_impurity(a)
else:
warnings.warn(f"{self.criteria} is not coded yet. returning to gini.")
value = self._gini_impurity(a)
return value
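A quick stdlib-only check of the two impurity measures used above — Gini impurity `1 - sum(p_i**2)` and entropy `-sum(p_i * log2(p_i))` — on small samples (the class itself uses the NumPy versions):

```python
from collections import Counter
from math import log2

def gini(labels):
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

print(gini([0, 0, 1, 1]))     # 0.5  (maximum Gini impurity for two classes)
print(entropy([0, 0, 1, 1]))  # 1.0  (one full bit of uncertainty)
print(gini([1, 1, 1, 1]))     # 0.0  (pure node)
```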
def _partition(self, rows: np.ndarray, question: Union[Question, None]) -> Tuple[list, list]:
"""partition the rows based on the question
Args:
rows (np.ndarray): input array to split.
question (Question): question object containing spltting concept.
Returns:
Tuple[list,list]: true idxs and false idxs.
"""
true_idx, false_idx = [], []
for idx, row in enumerate(rows):
if question.match(row):
true_idx.append(idx)
else:
false_idx.append(idx)
return true_idx, false_idx
def _info_gain(self, left: np.ndarray, right: np.ndarray, parent_uncertainty: float) -> float:
"""Calculate information gain after splitting
Args:
left (np.ndarray): left side array.
right (np.ndarray): right side array.
            parent_uncertainty (float): parent node uncertainty.
Returns:
            float: information gain value.
"""
# calculating portion/ partition/ weightage
pr = left.shape[0] / (left.shape[0] + right.shape[0])
        # calculate child uncertainty as the weighted average of both branches
        child_uncertainty = pr * \
            self._uncertainty(left) + (1 - pr) * self._uncertainty(right)
# calculate information gain
info_gain_value = parent_uncertainty - child_uncertainty
return info_gain_value
def _find_best_split(self, X: np.ndarray, y: np.ndarray) -> Tuple[float, Union[Question, None], float]:
"""method to find best split possible for the sample
Args:
X (np.ndarray): Feature matrix.
y (np.ndarray): target matrix.
Returns:
Tuple[float,Union[Question,None],float]: maximum gain from the split, best question of it, and parent node uncertainty.
"""
max_gain = -1
best_split_question = None
parent_uncertainty = self._uncertainty(y)
m_samples, n_labels = X.shape
for col_index in range(n_labels): # iterate over feature columns
# get unique values from the feature
unique_values = np.unique(X[:, col_index])
for val in unique_values: # check for every value and find maximum info gain
ques = Question(
column_index=col_index,
value=val,
header=self._feature_names[col_index]
)
t_idx, f_idx = self._partition(X, ques)
# if it does not split the data
# skip it
if len(t_idx) == 0 or len(f_idx) == 0:
continue
true_y = y[t_idx, :]
false_y = y[f_idx, :]
# get information gain
gain = self._info_gain(true_y, false_y, parent_uncertainty)
if gain > max_gain:
max_gain, best_split_question = gain, ques
return max_gain, best_split_question, parent_uncertainty
def _build_tree(self, X: np.ndarray, y: np.ndarray, depth: int = 0) -> Node:
        """Recursive function to build the tree.
Args:
X (np.ndarray): input features matrix.
y (np.ndarray): target matrix.
depth (int, optional): depth count of the recursion. Defaults to 0.
Returns:
Node: either leaf node or decision node
"""
m_samples, n_labels = X.shape
# if depth is greater than max depth defined or labels/features are left to 1
# or number of samples are less than the minimum size of samples to split then
# stop recursion and return a node
if (depth > self.max_depth or n_labels == 1 or m_samples < self.min_samples_split):
return Node(leaf_value=self._count_dict(y))
gain, ques, uncertainty = self._find_best_split(X, y)
# if gain is zero
# then no point grinding further here
if gain < 0:
return Node(leaf_value=self._count_dict(y))
t_idx, f_idx = self._partition(X, ques) # get partition idxs
        true_branch = self._build_tree(
            X[t_idx, :], y[t_idx, :], depth + 1)  # recurse into true-branch samples
        false_branch = self._build_tree(
            X[f_idx, :], y[f_idx, :], depth + 1)  # recurse into false-branch samples
return Node(
question=ques,
true_branch=true_branch,
false_branch=false_branch,
uncertainty=uncertainty
)
def train(self, X: Union[np.ndarray, list], y: Union[np.ndarray, list], feature_name: list = None, target_name: list = None) -> None:
"""Train the model
Args:
X (Union[np.ndarray,list]): feature matrix.
y (Union[np.ndarray,list]): target matrix.
feature_name (list, optional): feature names list. Defaults to None.
target_name (list, optional): target name list. Defaults to None.
"""
X = np.array(X, dtype='O') if not isinstance(
X, (np.ndarray)) else X # converting to numpy array
y = np.array(y, dtype='O') if not isinstance(
y, (np.ndarray)) else y # converting to numpy array
# reshaping to vectors
self._X = X.reshape(-1, 1) if len(X.shape) == 1 else X
self._y = y.reshape(-1, 1) if len(y.shape) == 1 else y
# creating feature names if not mentioned
self._feature_names = feature_name or [
f"C_{i}" for i in range(self._X.shape[1])]
# creating target name if not mentioned
self._target_name = target_name or ['target']
        # building the tree
self._tree = self._build_tree(
X=self._X,
y=self._y
)
def print_tree(self, node: Union[Node, None] = None, spacing: str = "|-") -> None:
"""print the tree
Args:
            node (Union[Node,None], optional): starting node. Defaults to None, in which case printing starts from the root of the tree.
            spacing (str, optional): printing separator. Defaults to "|-".
"""
node = node or self._tree
if node._is_leaf_node:
print(spacing, " Predict :", node.leaf_value)
return
# Print the question at this node
print(spacing + str(node.question) +
" | " + self.criteria + " :" + str(node.uncertainty))
# Call this function recursively on the true branch
print(spacing + '--> True:')
self.print_tree(node.true_branch, " " + spacing + "-")
# Call this function recursively on the false branch
print(spacing + '--> False:')
self.print_tree(node.false_branch, " " + spacing + "-")
    def _classification(self, row: np.ndarray, node: Union[Node, None]) -> dict:
"""Classification recursive function
Args:
row (np.ndarray): input matrix.
node (Union[Node,None]): node to start with. mostly root node. rest will be handled by recursion.
Returns:
            dict: leaf value with per-class counts, the classification result.
"""
if node._is_leaf_node:
return node.leaf_value
if node.question.match(row):
return self._classification(row, node.true_branch)
else:
return self._classification(row, node.false_branch)
def _leaf_probabilities(self, results: dict) -> dict:
"""get probabilties
Args:
results (dict): results from _classification.
Returns:
            dict: dictionary of per-class probabilities, expressed as percentages.
"""
total = sum(results.values())
probs = {}
for key in results:
probs[key] = (results[key] / total) * 100
return probs
def predict(self, X: Union[np.ndarray, list]) -> np.ndarray:
"""predict classification results
Args:
X (Union[np.ndarray,list]): testing matrix.
Raises:
ValueError: input X can only be a list or numpy array.
Returns:
np.ndarray: results of classification.
"""
if isinstance(X, (np.ndarray, list)):
X = np.array(X, dtype='O') if not isinstance(X, (np.ndarray)) else X
            if len(X.shape) == 1:
                result_dict = self._classification(row=X, node=self._tree)
                result, max_count = None, 0
                for key, count in result_dict.items():
                    if count > max_count:  # keep the majority class
                        result, max_count = key, count
                return np.array([[result]], dtype='O')
            else:
                leaf_value = []
                # pick the majority class for every row
                for row in X:
                    result_dict = self._classification(row=row, node=self._tree)
                    result, max_count = None, 0
                    for key, count in result_dict.items():
                        if count > max_count:
                            result, max_count = key, count
                    leaf_value.append([result])
                return np.array(leaf_value, dtype='O')
else:
raise ValueError("X should be list or numpy array")
def predict_probability(self, X: Union[np.ndarray, list]) -> Union[np.ndarray, dict]:
"""predict classfication probabilities
Args:
X (Union[np.ndarray,list]): testing matrix.
Raises:
ValueError: input X can only be a list or numpy array.
Returns:
            Union[np.ndarray, dict]: probability results of classification.
"""
if isinstance(X, (np.ndarray, list)):
X = np.array(X, dtype='O') if not isinstance(X, (np.ndarray)) else X
if len(X.shape) == 1:
return self._leaf_probabilities(self._classification(row=X, node=self._tree))
else:
leaf_value = []
for row in X:
leaf_value.append([self._leaf_probabilities(
self._classification(row=row, node=self._tree))])
return np.array(leaf_value, dtype='O')
else:
raise ValueError("X should be list or numpy array")
class DecisionTreeRegressor:
"""Decision Tree Based Regression Model
Args:
max_depth (int, optional): maximum depth of the tree. Defaults to 10.
min_samples_split (int, optional): minimum number of samples while splitting. Defaults to 3.
criteria (str, optional): criteria for best info gain. Defaults to 'variance'.
"""
def __init__(self, max_depth: int = 10, min_samples_split: int = 3, criteria: str = 'variance'):
"""constructor
"""
self._X = None
self._y = None
self._feature_names = None
self._target_name = None
self._tree = None
self.max_depth = max_depth
self.min_samples_split = min_samples_split
self.criteria = criteria
def _mean_leaf_value(self, a: np.ndarray) -> float:
"""leaf values mean
Args:
a (np.ndarray): input array.
Returns:
float: mean value
"""
return float(np.mean(a))
def _partition(self, rows: np.ndarray, question: Union[Question, None]) -> Tuple[list, list]:
"""partition the rows based on the question
Args:
rows (np.ndarray): input array to split.
            question (Question): question object describing the split.
Returns:
Tuple[list,list]: true idxs and false idxs.
"""
true_idx, false_idx = [], []
for idx, row in enumerate(rows):
if question.match(row):
true_idx.append(idx)
else:
false_idx.append(idx)
return true_idx, false_idx
def _uncertainty(self, a: np.ndarray) -> float:
"""calcualte uncertainty
Args:
a (np.ndarray): input array
Returns:
float: uncertainty value
"""
if self.criteria == "variance":
value = np.var(a)
else:
warnings.warn(f"{self.criteria} is not coded yet. returning to variance.")
value = np.var(a)
return float(value)
def _info_gain(self, left: np.ndarray, right: np.ndarray, parent_uncertainty: float) -> float:
"""Calculate information gain after splitting
Args:
left (np.ndarray): left side array.
right (np.ndarray): right side array.
            parent_uncertainty (float): parent node uncertainty.
        Returns:
            float: information gain value.
        """
        pr = left.shape[0] / (left.shape[0] + right.shape[0])
        # weighted average of the child uncertainties
        child_uncertainty = pr * \
            self._uncertainty(left) + (1 - pr) * self._uncertainty(right)
info_gain_value = parent_uncertainty - child_uncertainty
return info_gain_value
def _find_best_split(self, X: np.ndarray, y: np.ndarray) -> Tuple[float, Union[Question, None], float]:
"""method to find best split possible for the sample
Args:
X (np.ndarray): Feature matrix.
y (np.ndarray): target matrix.
Returns:
Tuple[float,Union[Question,None],float]: maximum gain from the split, best question of it, and parent node uncertainty
"""
max_gain = -1
best_split_question = None
parent_uncertainty = self._uncertainty(y)
m_samples, n_labels = X.shape
for col_index in range(n_labels): # iterate over feature columns
# get unique values from the feature
unique_values = np.unique(X[:, col_index])
for val in unique_values: # check for every value and find maximum info gain
ques = Question(
column_index=col_index,
value=val,
header=self._feature_names[col_index]
)
t_idx, f_idx = self._partition(X, ques)
# if it does not split the data
# skip it
if len(t_idx) == 0 or len(f_idx) == 0:
continue
true_y = y[t_idx, :]
false_y = y[f_idx, :]
gain = self._info_gain(true_y, false_y, parent_uncertainty)
if gain > max_gain:
max_gain, best_split_question = gain, ques
return max_gain, best_split_question, parent_uncertainty
def _build_tree(self, X: np.ndarray, y: np.ndarray, depth: int = 0) -> Node:
"""Recursive funtion to build tree
Args:
X (np.ndarray): input features matrix.
y (np.ndarray): target matrix.
depth (int, optional): depth count of the recursion. Defaults to 0.
Returns:
Node: either leaf node or decision node
"""
m_samples, n_labels = X.shape
        # stop recursion and return a leaf node when the depth limit is
        # exceeded, only a single feature column remains, or there are
        # fewer samples than min_samples_split
if (depth > self.max_depth or n_labels == 1 or m_samples < self.min_samples_split):
return Node(leaf_value=y)
gain, ques, uncertainty = self._find_best_split(X, y)
        # if no valid split was found (gain stayed negative), return a leaf
if gain < 0:
return Node(leaf_value=y)
t_idx, f_idx = self._partition(X, ques)
true_branch = self._build_tree(
X[t_idx, :], y[t_idx, :], depth + 1) # get true samples
false_branch = self._build_tree(
X[f_idx, :], y[f_idx, :], depth + 1) # get false samples
return Node(
question=ques,
true_branch=true_branch,
false_branch=false_branch,
uncertainty=uncertainty
)
def train(self, X: Union[np.ndarray, list], y: Union[np.ndarray, list], feature_name: list = None, target_name: list = None) -> None:
"""Train the model
Args:
X (Union[np.ndarray,list]): feature matrix.
y (Union[np.ndarray,list]): target matrix.
feature_name (list, optional): feature names list. Defaults to None.
target_name (list, optional): target name list. Defaults to None.
"""
X = np.array(X, dtype='O') if not isinstance(
X, (np.ndarray)) else X # converting to numpy array
y = np.array(y, dtype='O') if not isinstance(
y, (np.ndarray)) else y # converting to numpy array
# reshaping to vectors
self._X = X.reshape(-1, 1) if len(X.shape) == 1 else X
self._y = y.reshape(-1, 1) if len(y.shape) == 1 else y
# creating feature names if not mentioned
self._feature_names = feature_name or [
f"C_{i}" for i in range(self._X.shape[1])]
# creating target name if not mentioned
self._target_name = target_name or ['target']
        # building the tree
self._tree = self._build_tree(
X=self._X,
y=self._y
)
def print_tree(self, node: Union[Node, None] = None, spacing: str = "|-", mean_preds: bool = True) -> None:
"""print the tree
Args:
            node (Union[Node,None], optional): starting node. Defaults to None, in which case printing starts from the root of the tree.
            spacing (str, optional): printing separator. Defaults to "|-".
mean_preds (bool): do the mean of prediction values. Defaults to True.
"""
node = node or self._tree
if node._is_leaf_node:
if mean_preds:
print(spacing, " Predict :", self._mean_leaf_value(node.leaf_value))
else:
print(spacing, " Predict :", node.leaf_value[...,-1])
return
# Print the question at this node
print(spacing + str(node.question) +
" | " + self.criteria + " :" + str(node.uncertainty))
# Call this function recursively on the true branch
print(spacing + '--> True:')
self.print_tree(node.true_branch, " " + spacing + "-", mean_preds)
# Call this function recursively on the false branch
print(spacing + '--> False:')
self.print_tree(node.false_branch, " " + spacing + "-", mean_preds)
def _regression(self, row: np.ndarray, node: Union[Node, None], mean_preds: bool) -> float:
"""regression recursive method
Args:
row (np.ndarray): input matrix.
node (Union[Node,None]): node to start with. mostly root node. rest will be handled by recursion.
mean_preds (bool): do the mean of prediction values.
Returns:
float: regression result.
"""
if node._is_leaf_node:
if mean_preds:
return self._mean_leaf_value(node.leaf_value)
else:
return node.leaf_value[...,-1]
if node.question.match(row):
return self._regression(row, node.true_branch, mean_preds)
else:
return self._regression(row, node.false_branch, mean_preds)
def predict(self, X: np.ndarray, mean_preds: bool = True) -> np.ndarray:
"""predict regresssion
Args:
X (np.ndarray): testing matrix.
mean_preds (bool): do the mean of prediction values. Defaults to True.
Raises:
ValueError: X should be list or numpy array
Returns:
np.ndarray: regression prediction.
"""
if isinstance(X, (np.ndarray, list)):
X = np.array(X, dtype='O') if not isinstance(X, (np.ndarray)) else X
if len(X.shape) == 1:
result = self._regression(row=X, node=self._tree, mean_preds=mean_preds)
return np.array([[result]], dtype='O')
else:
leaf_value = []
for row in X:
result = self._regression(row=row, node=self._tree, mean_preds=mean_preds)
leaf_value.append([result])
return np.array(leaf_value, dtype='O')
else:
raise ValueError("X should be list or numpy array")
# tests/unitary/RewardStream/test_notify_reward_amount.py (AqualisDAO/curve-dao-contracts)
import math
import brownie
from brownie import chain
def test_only_distributor_allowed(alice, stream):
with brownie.reverts("dev: only distributor"):
stream.notify_reward_amount(10 ** 18, {"from": alice})
def test_retrieves_reward_token(bob, stream, reward_token):
stream.notify_reward_amount(10 ** 18, {"from": bob})
post_notify = reward_token.balanceOf(stream)
assert post_notify == 10 ** 18
def test_reward_rate_updates(bob, stream):
stream.notify_reward_amount(10 ** 18, {"from": bob})
post_notify = stream.reward_rate()
assert post_notify > 0
assert post_notify == 10 ** 18 / (86400 * 10)
def test_reward_rate_updates_mid_duration(bob, stream):
stream.notify_reward_amount(10 ** 18, {"from": bob})
chain.sleep(86400 * 5) # half of the duration
# top up the balance to be 10 ** 18 again
stream.notify_reward_amount(10 ** 18 / 2, {"from": bob})
post_notify = stream.reward_rate()
    # should be relatively close; a rel_tol of 1e-5 is a good enough heuristic here
assert math.isclose(post_notify, 10 ** 18 / (86400 * 10), rel_tol=0.00001)
def test_period_finish_updates(bob, stream):
tx = stream.notify_reward_amount(10 ** 18, {"from": bob})
assert stream.period_finish() == tx.timestamp + 86400 * 10
def test_update_last_update_time(bob, stream):
tx = stream.notify_reward_amount(10 ** 18, {"from": bob})
assert stream.last_update_time() == tx.timestamp
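The assertions above reduce to simple rate arithmetic: the amount is streamed over a ten-day duration, and a mid-stream top-up restores the original rate. A plain-Python sketch of that bookkeeping (constants copied from the tests; the variable names are ours):

```python
import math

DURATION = 86400 * 10          # ten days, as in the stream contract
amount = 10 ** 18

rate = amount / DURATION       # initial reward rate

# after half the duration, half the balance has streamed out; topping up
# by amount / 2 restores the full balance over a fresh duration
remaining = amount - rate * (DURATION // 2)
new_rate = (remaining + amount / 2) / DURATION

assert math.isclose(new_rate, amount / DURATION, rel_tol=1e-5)
print(rate, new_rate)
```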
# libs/cGeo/setup.py (nodebox/nodebox-pyobjc)
from distutils.core import setup, Extension
cGeo = Extension("cGeo", sources = ["cGeo.c"])
setup (name = "cGeo",
version = "0.1",
author = "Tom De Smedt",
description = "Fast geometric functionality.",
ext_modules = [cGeo]) | 28.111111 | 53 | 0.616601 | 29 | 253 | 5.344828 | 0.793103 | 0.167742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010363 | 0.237154 | 253 | 9 | 54 | 28.111111 | 0.792746 | 0 | 0 | 0 | 0 | 0 | 0.228346 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
#!/usr/bin/env python
# setup.py (ox-it/oxford-term-dates)
from distutils.core import setup
setup(name='oxford_term_dates',
version='1.3.0',
description='A Python library for translating between real dates and Oxford term dates',
author='IT Services, University of Oxford',
author_email='mobileoxford@it.ox.ac.uk',
url='https://github.com/ox-it/oxford-term-dates',
packages=['oxford_term_dates','oxford_term_dates.templatetags'],
classifiers=[
'Framework :: Django',
'Development Status :: 5 - Production/Stable',
'License :: OSI Approved :: Academic Free License (AFL)',
'Intended Audience :: Education',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Topic :: Education',
'Topic :: Internet',
],
)
# cpdb/data/migrations/0114_attachmentfile_add_fields.py (invinst/CPDBv2_backend)
# Generated by Django 2.1.3 on 2019-02-25 03:08
from django.db import migrations, models
from django.conf import settings
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('data', '0113_attachmentfile_update_source_type'),
]
operations = [
migrations.AddField(
model_name='attachmentfile',
name='notifications_count',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='attachmentfile',
name='pages',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='attachmentfile',
name='manually_updated',
field=models.BooleanField(default=False),
),
migrations.AddField(
model_name='attachmentfile',
name='last_updated_by',
field=models.ForeignKey(
null=True,
on_delete=django.db.models.deletion.CASCADE,
to=settings.AUTH_USER_MODEL
),
),
]
# cjax/continuation/methods/predictor/natural_predictor.py (harsh306/continuation-jax)
from cjax.continuation.methods.predictor.base_predictor import Predictor
from cjax.utils.math_trees import pytree_element_add
class NaturalPredictor(Predictor):
"""Natural Predictor only updates continuation parameter"""
def __init__(self, concat_states, delta_s):
super().__init__(concat_states)
self.delta_s = delta_s
def _assign_states(self) -> None:
super()._assign_states()
def prediction_step(self):
"""Given current state predict next state.
Updates (state: problem parameters, bparam: continuation parameter) Tuple
"""
self._assign_states()
self._bparam = pytree_element_add(self._bparam, self.delta_s)
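The prediction step above only nudges the continuation parameter by `delta_s` and predicts the problem state unchanged. A dependency-free sketch of the same idea using plain Python values instead of cjax pytrees (function and variable names are illustrative):

```python
def natural_prediction_step(state, bparam, delta_s):
    """Advance only the continuation parameter; the state is predicted unchanged."""
    return state, bparam + delta_s

state, bparam = [0.5, -1.2], 0.1
for _ in range(3):  # three continuation steps of size 0.05
    state, bparam = natural_prediction_step(state, bparam, delta_s=0.05)
print(state, round(bparam, 2))  # [0.5, -1.2] 0.25
```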
# contabilidad/users/models.py (R3SWebDevelopment/CreaLibreCanacintraMembership)
from django.db import models
from django.utils.translation import ugettext as _
from django.contrib.postgres.fields import JSONField
from django.conf import settings
class Profile(models.Model):
    mobile_number = JSONField(default=dict)  # callable default avoids sharing one dict across instances
user = models.OneToOneField(
settings.AUTH_USER_MODEL,
on_delete=models.CASCADE,
related_name='profile',
)
notify_by_email = models.BooleanField(default=True)
notify_by_sms = models.BooleanField(default=False)
# mygit/commands/merge.py (7Bpencil/mygit)
import argparse
from textwrap import dedent
from mygit.state import State
from mygit.constants import Constants
from mygit.command import Command
from mygit.backend import merge
class Merge(Command):
def __init__(self, subparsers: argparse._SubParsersAction, commands_dict: dict):
command_description = dedent(
'''
Fast-forward HEAD to another branch state (if it's possible)
Usage examples:
mygit merge dev merge commits from dev into HEAD
Note: fast-forward is possible only if HEAD commit's line
is subset of branch commit's line
''')
super().__init__("merge", command_description, subparsers, commands_dict)
def _add_arguments(self, command_parser: argparse.ArgumentParser):
command_parser.add_argument("merge_branch", nargs=1)
def work(self, namespace: argparse.Namespace, constants: Constants, state: State):
merge(namespace.merge_branch[0], constants, state)
# intranet/settings.py (We-Are-One-CS/intranet)
"""
Django settings for intranet project.
Generated by 'django-admin startproject' using Django 2.2.7.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'rd4l0b65&9*u+3g2j^1th5rl6sc*m!r^f*()5ij7(kot75p^2_'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['*']
# Authentification
LOGIN_REDIRECT_URL = 'index'
LOGOUT_REDIRECT_URL = 'index'
# Application definition
INSTALLED_APPS = [
'wao.apps.WaoConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'crispy_forms',
'tempus_dominus',
'multiselectfield',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'wao.middleware.MessagesMiddleware',
]
ROOT_URLCONF = 'intranet.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
AUTH_USER_MODEL = 'wao.User'
CRISPY_TEMPLATE_PACK = 'bootstrap4'
DATETIME_FORMAT = '%m/%d/%Y %I:%M %p'
WSGI_APPLICATION = 'intranet.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'weareone', # Name of the database used (we recommend using a dedicated db)
'USER': 'postgres',
'PASSWORD': 'admin',
'HOST': 'localhost',
'PORT': '',
}
}
# For Django CI
if os.environ.get('GITHUB_WORKFLOW'):
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'github_actions',
'USER': 'postgres',
'PASSWORD': 'postgres',
'HOST': '127.0.0.1',
'PORT': '8000',
}
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'fr-fr' # Change the interface to french
TIME_ZONE = 'CET' # Use the Central European Time Zone
USE_I18N = True
USE_L10N = True
USE_TZ = True # Enable time zone
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS = (
os.path.join(BASE_DIR, 'wao/static/'),
)
INTERNAL_IPS = ['127.0.0.1']
FILE_UPLOAD_PERMISSIONS = 0o644
#!/usr/bin/env python
# day06/mysql_tutorial.py (zhangyage/Python-jike)
# -*- coding:utf-8 -*-
'''
Demonstrates inserting data into MySQL.
Covers the common ways to connect to MySQL from Python:
1. The official mysql-connector client
2. The third-party MySQLdb client
3. torndb, a thin wrapper around MySQLdb
'''
from __future__ import print_function
sql = "insert into ipdata (startip,endip,loacl,country) values (16777123,16777324,'电信','青海省')"
sql_tmp = "insert into ipdata (startip,endip,loacl,country) values (%s,%s,%s,%s)"
values = [(26777123,26777324,'移动','新疆省'),(26787123,26787324,'电信','云南省'),(26797123,26797324,'移动','广西省'),(29777123,29777324,'移动','四川省')]
# define the SQL statements and sample rows
# Example 1: mysql-connector
# https://dev.mysql.com/downloads/connector/python/
# mysql-connector needs the MySQL driver installed: mysql-client on Linux,
# or the installer from the link above on Windows
print ('mysql-connector'.center(50,'+'))
from mysql import connector
cnx = connector.Connect(host="192.168.75.133",user="zhangyage",password="zhangyage",database="pythontest",charset="utf8")
# create the connection
cnx.autocommit = True
db0 = cnx.cursor()
# create a cursor
print (db0.execute(sql))
print (db0.executemany(sql_tmp,values))
# execute the SQL
'''
Example 2: MySQLdb
'''
print ('mysql-MySQLdb'.center(50,'+'))
import MySQLdb
def connect_mysql(db_host="192.168.75.133",user="zhangyage",password="zhangyage",database="pythontest",charset="utf8"):
conn = MySQLdb.connect(host=db_host,user=user,passwd=password,db=database,charset=charset)
conn.autocommit(True)
return conn.cursor()
db1 = connect_mysql()
print (db1.execute(sql),db1.lastrowid)
print (db1.executemany(sql_tmp,values),db1.lastrowid)
#执行sql db1.lastrowid打印最后执行的行号
'''
实例三 torndb
torndb模块是需要我们后期安装的,python -m pip install torndb
'''
print ('mysql-torndb'.center(50,'+'))
import torndb
import simplejson as json
db2 = torndb.Connection(host="192.168.75.133",
user="zhangyage",
password="zhangyage",
database="pythontest",
charset="utf8"
)
print (db2.insert(sql))
print (db2.insertmany(sql_tmp, values))
#输出结果直接是json串,字段和对应的值到时存在的
| 29.343284 | 134 | 0.685148 | 236 | 1,966 | 5.652542 | 0.470339 | 0.017991 | 0.022489 | 0.026987 | 0.235382 | 0.235382 | 0.235382 | 0.235382 | 0.166417 | 0.166417 | 0 | 0.082684 | 0.151068 | 1,966 | 66 | 135 | 29.787879 | 0.716597 | 0.119023 | 0 | 0 | 0 | 0.0625 | 0.23162 | 0.057254 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.125 | 0.15625 | null | null | 0.3125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
# Python/Strings/string_validators.py (isbendiyarovanezrin/HackerRankSolutions)
# Language: Python 3
if __name__ == '__main__':
s = input()
n = any(i.isalnum() for i in s)
a = any(i.isalpha() for i in s)
d = any(i.isdigit() for i in s)
l = any(i.islower() for i in s)
u = any(i.isupper() for i in s)
print(n, a, d, l, u, sep="\n") | 25.454545 | 35 | 0.528571 | 56 | 280 | 2.5 | 0.428571 | 0.142857 | 0.214286 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005025 | 0.289286 | 280 | 11 | 36 | 25.454545 | 0.698492 | 0.064286 | 0 | 0 | 0 | 0 | 0.038314 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
c6218bc9f5911456f82cbd5199c0c02e8eeef895 | 382 | py | Python | incrowd/website/urls.py | incrowdio/incrowd | 711e99c55b9da815af7749a2930d4184e235fa68 | [
"Apache-2.0"
] | 4 | 2015-03-10T04:24:07.000Z | 2016-09-18T16:41:12.000Z | incrowd/website/urls.py | incrowdio/incrowd | 711e99c55b9da815af7749a2930d4184e235fa68 | [
"Apache-2.0"
] | 27 | 2015-01-03T09:52:50.000Z | 2021-06-10T20:37:08.000Z | incrowd/website/urls.py | incrowdio/incrowd | 711e99c55b9da815af7749a2930d4184e235fa68 | [
"Apache-2.0"
] | 2 | 2015-09-07T21:06:51.000Z | 2016-03-10T11:31:57.000Z | from rest_framework import routers
from website.api import UserViewSet, PostViewSet, CommentViewSet, \
CategoryViewSet, CrowdViewSet
router = routers.SimpleRouter()
router.register(r'users', UserViewSet)
router.register(r'posts', PostViewSet)
router.register(r'categories', CategoryViewSet)
router.register(r'comments', CommentViewSet)
router.register(r'crowds', CrowdViewSet)
| 34.727273 | 67 | 0.814136 | 42 | 382 | 7.380952 | 0.5 | 0.225806 | 0.241935 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081152 | 382 | 10 | 68 | 38.2 | 0.883191 | 0 | 0 | 0 | 0 | 0 | 0.089005 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
c627f42ed3e1bf273973cce42e035177374b5114 | 3,158 | py | Python | ooobuild/lo/sdb/result_column.py | Amourspirit/ooo_uno_tmpl | 64e0c86fd68f24794acc22d63d8d32ae05dd12b8 | [
"Apache-2.0"
] | null | null | null | ooobuild/lo/sdb/result_column.py | Amourspirit/ooo_uno_tmpl | 64e0c86fd68f24794acc22d63d8d32ae05dd12b8 | [
"Apache-2.0"
] | null | null | null | ooobuild/lo/sdb/result_column.py | Amourspirit/ooo_uno_tmpl | 64e0c86fd68f24794acc22d63d8d32ae05dd12b8 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
#
# Copyright 2022 :Barry-Thomas-Paul: Moss
#
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http: // www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Service Class
# this is an auto-generated file generated by Cheetah
# Libre Office Version: 7.3
# Namespace: com.sun.star.sdb
from abc import abstractproperty
from .column_settings import ColumnSettings as ColumnSettings_bbba0c00
from ..sdbcx.column import Column as Column_7b1d098a
class ResultColumn(ColumnSettings_bbba0c00, Column_7b1d098a):
"""
Service Class
describes a column of a result set.
See Also:
`API ResultColumn <https://api.libreoffice.org/docs/idl/ref/servicecom_1_1sun_1_1star_1_1sdb_1_1ResultColumn.html>`_
"""
__ooo_ns__: str = 'com.sun.star.sdb'
__ooo_full_ns__: str = 'com.sun.star.sdb.ResultColumn'
__ooo_type_name__: str = 'service'
@abstractproperty
def CatalogName(self) -> str:
"""
gets a column's table's catalog name.
"""
@abstractproperty
def DisplaySize(self) -> int:
"""
indicates the column's normal max width in chars.
"""
@abstractproperty
def IsCaseSensitive(self) -> bool:
"""
indicates that a column is case sensitive.
"""
@abstractproperty
def IsDefinitelyWritable(self) -> bool:
"""
indicates whether a write on the column will definitely succeed.
"""
@abstractproperty
def IsReadOnly(self) -> bool:
"""
indicates whether a column is definitely not writable.
"""
@abstractproperty
def IsSearchable(self) -> bool:
"""
indicates whether the column can be used in a Where clause.
"""
@abstractproperty
def IsSigned(self) -> bool:
"""
indicates whether values in the column are signed numbers.
"""
@abstractproperty
def IsWritable(self) -> bool:
"""
indicates whether it is possible for a write on the column to succeed.
"""
@abstractproperty
def Label(self) -> str:
"""
gets the suggested column title for use in printouts and displays.
"""
@abstractproperty
def SchemaName(self) -> str:
"""
gets a column's schema name.
"""
@abstractproperty
def ServiceName(self) -> str:
"""
returns the fully-qualified name of the service whose instances are manufactured if the method com.sun.star.sdbc.XRow.getObject() is called to retrieve a value from the column.
"""
@abstractproperty
def TableName(self) -> str:
"""
gets a column's table name.
"""
__all__ = ['ResultColumn']
| 27.701754 | 184 | 0.651678 | 383 | 3,158 | 5.281984 | 0.472585 | 0.112704 | 0.05042 | 0.059318 | 0.091943 | 0.050914 | 0.023727 | 0 | 0 | 0 | 0 | 0.014976 | 0.259975 | 3,158 | 113 | 185 | 27.946903 | 0.850663 | 0.513616 | 0 | 0.375 | 0 | 0 | 0.052805 | 0.023927 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0.09375 | 0 | 0.59375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
c62ebe90410af38a0fd3d10d4dea8a8ef9dad58b | 902 | py | Python | psx/_dump_/28/_dump_ida_/overlay_3/set_funcs.py | maoa3/scalpel | 2e7381b516cded28996d290438acc618d00b2aa7 | [
"Unlicense"
] | 15 | 2018-06-28T01:11:25.000Z | 2021-09-27T15:57:18.000Z | psx/_dump_/28/_dump_ida_/overlay_3/set_funcs.py | maoa3/scalpel | 2e7381b516cded28996d290438acc618d00b2aa7 | [
"Unlicense"
] | 7 | 2018-06-29T04:08:23.000Z | 2019-10-17T13:57:22.000Z | psx/_dump_/28/_dump_ida_/overlay_3/set_funcs.py | maoa3/scalpel | 2e7381b516cded28996d290438acc618d00b2aa7 | [
"Unlicense"
] | 7 | 2018-06-28T01:11:34.000Z | 2020-05-23T09:21:48.000Z | del_items(0x800A0E8C)
SetType(0x800A0E8C, "void VID_OpenModule__Fv()")
del_items(0x800A0F4C)
SetType(0x800A0F4C, "void InitScreens__Fv()")
del_items(0x800A103C)
SetType(0x800A103C, "void MEM_SetupMem__Fv()")
del_items(0x800A1068)
SetType(0x800A1068, "void SetupWorkRam__Fv()")
del_items(0x800A10F8)
SetType(0x800A10F8, "void SYSI_Init__Fv()")
del_items(0x800A1204)
SetType(0x800A1204, "void GM_Open__Fv()")
del_items(0x800A1228)
SetType(0x800A1228, "void PA_Open__Fv()")
del_items(0x800A1260)
SetType(0x800A1260, "void PAD_Open__Fv()")
del_items(0x800A12A4)
SetType(0x800A12A4, "void OVR_Open__Fv()")
del_items(0x800A12C4)
SetType(0x800A12C4, "void SCR_Open__Fv()")
del_items(0x800A12F4)
SetType(0x800A12F4, "void DEC_Open__Fv()")
del_items(0x800A1568)
SetType(0x800A1568, "char *GetVersionString__FPc(char *VersionString2)")
del_items(0x800A163C)
SetType(0x800A163C, "char *GetWord__FPc(char *VStr)")
| 33.407407 | 72 | 0.805987 | 117 | 902 | 5.803419 | 0.333333 | 0.153166 | 0.162003 | 0.123711 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222877 | 0.059867 | 902 | 26 | 73 | 34.692308 | 0.57783 | 0 | 0 | 0 | 0 | 0 | 0.337029 | 0.029933 | 0 | 0 | 0.288248 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
c63cbc7a7f3580c129f4752ad0bd37f38f7fd932 | 2,044 | py | Python | backend/takeout/admin/views.py | BillBillBillBill/laughing-garbanzo | 27c66dcc4f0e045ae060255679a2aa68c0f744d2 | [
"MIT"
] | 15 | 2016-08-03T08:11:36.000Z | 2022-03-24T03:21:06.000Z | backend/takeout/admin/views.py | BillBillBillBill/laughing-garbanzo | 27c66dcc4f0e045ae060255679a2aa68c0f744d2 | [
"MIT"
] | null | null | null | backend/takeout/admin/views.py | BillBillBillBill/laughing-garbanzo | 27c66dcc4f0e045ae060255679a2aa68c0f744d2 | [
"MIT"
] | 7 | 2016-08-03T08:11:38.000Z | 2020-12-27T08:49:10.000Z | # coding: utf-8
from rest_framework.views import APIView
from models.admin import Admin
from lib.utils.response import JsonResponse, JsonErrorResponse
from lib.utils.misc import get_update_dict_by_list
from lib.utils.token_tools import get_token
class AdminList(APIView):
def get(self, request):
# fetch the list of admins
admins = [admin.to_string() for admin in Admin.objects.all()]
return JsonResponse({"admin_list": admins})
def post(self, request):
# register a new admin
username = request.json.get("username")
password = request.json.get("password")
nickname = request.json.get("nickname")
account_type = request.json.get("account_type")
if not all([username, password, nickname, account_type]):
return JsonErrorResponse("username, password, nickname, account_type are needed", 400)
new_admin = Admin(
username=username,
password=password,
nickname=nickname,
account_type=account_type
)
try:
new_admin.save()
except Exception as e:
print(e)
return JsonErrorResponse("Fail" + str(e))
print("newly registered admin id:", new_admin.id)
# log in
token = get_token(username, password, "admin")
return JsonResponse({
"id": new_admin.id,
"token": token
})
class AdminDetail(APIView):
def get(self, request, admin_id):
try:
admin = Admin.objects.get(id=admin_id)
except Admin.DoesNotExist:
return JsonErrorResponse("Admin does not exist", 404)
return JsonResponse({"admin": admin.to_detail_string()})
def put(self, request, admin_id):
# update profile information
update_item = ['nickname', 'password']
update_dict = get_update_dict_by_list(update_item, request.json)
modify_num = Admin.objects.filter(id=admin_id).update(**update_dict)
if modify_num == 1:
return JsonResponse({})
return JsonErrorResponse("Update failed", 400)
| 34.644068 | 98 | 0.629648 | 232 | 2,044 | 5.392241 | 0.331897 | 0.052758 | 0.044764 | 0.023981 | 0.1247 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007363 | 0.26908 | 2,044 | 58 | 99 | 35.241379 | 0.829987 | 0.016634 | 0 | 0.042553 | 0 | 0 | 0.088822 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.12766 | 0.106383 | null | null | 0.042553 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
c64f1dc22175bdb827ddb012fc48215e9e45a903 | 1,719 | py | Python | ondewo/t2s/client/services/text_to_speech.py | ondewo/ondewo-t2s-client-python | c580c934c7e0e703f81bbddeee3831919fd5f0a2 | [
"Apache-2.0"
] | null | null | null | ondewo/t2s/client/services/text_to_speech.py | ondewo/ondewo-t2s-client-python | c580c934c7e0e703f81bbddeee3831919fd5f0a2 | [
"Apache-2.0"
] | null | null | null | ondewo/t2s/client/services/text_to_speech.py | ondewo/ondewo-t2s-client-python | c580c934c7e0e703f81bbddeee3831919fd5f0a2 | [
"Apache-2.0"
] | null | null | null | from google.protobuf.empty_pb2 import Empty
from ondewo.utils.base_services_interface import BaseServicesInterface
from ondewo.t2s.text_to_speech_pb2 import (
ListT2sPipelinesRequest,
ListT2sPipelinesResponse,
SynthesizeRequest,
SynthesizeResponse,
T2sPipelineId,
Text2SpeechConfig,
)
from ondewo.t2s.text_to_speech_pb2_grpc import Text2SpeechStub
class Text2Speech(BaseServicesInterface):
"""
Exposes the t2s endpoints of ONDEWO t2s in a user-friendly way.
See text_to_speech.proto.
"""
@property
def stub(self) -> Text2SpeechStub:
stub: Text2SpeechStub = Text2SpeechStub(channel=self.grpc_channel)
return stub
def synthesize(self, request: SynthesizeRequest) -> SynthesizeResponse:
response: SynthesizeResponse = self.stub.Synthesize(request)
return response
def get_t2s_pipeline(self, request: T2sPipelineId) -> Text2SpeechConfig:
response: Text2SpeechConfig = self.stub.GetT2sPipeline(request)
return response
def create_t2s_pipeline(self, request: Text2SpeechConfig) -> T2sPipelineId:
response: T2sPipelineId = self.stub.CreateT2sPipeline(request)
return response
def delete_t2s_pipeline(self, request: T2sPipelineId) -> Empty:
response: Empty = self.stub.DeleteT2sPipeline(request)
return response
def update_t2s_pipeline(self, request: Text2SpeechConfig) -> Empty:
response: Empty = self.stub.UpdateT2sPipeline(request)
return response
def list_t2s_pipelines(self, request: ListT2sPipelinesRequest) -> ListT2sPipelinesResponse:
response: ListT2sPipelinesResponse = self.stub.ListT2sPipelines(request)
return response
| 34.38 | 95 | 0.745782 | 167 | 1,719 | 7.538922 | 0.347305 | 0.052423 | 0.100079 | 0.095314 | 0.203336 | 0.04448 | 0.04448 | 0 | 0 | 0 | 0 | 0.02641 | 0.184991 | 1,719 | 49 | 96 | 35.081633 | 0.872234 | 0.052356 | 0 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0 | 0.117647 | 0 | 0.558824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
c6590aaf2d2dffbad1dc3622668b97cfa3f432d8 | 834 | py | Python | notebooks/src/code/__init__.py | verdimrc/amazon-textract-transformer-pipeline | f3ae99ec3b8808d9edf7bc5ac003494cf1548293 | [
"MIT-0"
] | 22 | 2021-11-10T17:16:10.000Z | 2022-03-31T19:39:50.000Z | notebooks/src/code/__init__.py | verdimrc/amazon-textract-transformer-pipeline | f3ae99ec3b8808d9edf7bc5ac003494cf1548293 | [
"MIT-0"
] | 4 | 2021-11-03T03:45:51.000Z | 2022-01-28T03:30:57.000Z | notebooks/src/code/__init__.py | verdimrc/amazon-textract-transformer-pipeline | f3ae99ec3b8808d9edf7bc5ac003494cf1548293 | [
"MIT-0"
] | 4 | 2021-12-14T22:41:40.000Z | 2022-02-04T15:30:10.000Z | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
"""Amazon Textract + LayoutLM model training and inference code package for SageMaker
Why the extra level of nesting? Because the src folder (even if __init__ is present) is not loaded
as a Python module during training, but rather as the working directory. This requires a different
import syntax for top-level files/folders (`import config`, not `from . import config`) than you
would see if your working directory was different (for example when you `from src import code` to
use it from one of the notebooks).
Wrapping this code in an extra package folder ensures that - regardless of whether you use it from
notebook, in SM training job, or in some other app - relative imports *within* this code/ folder
work correctly.
"""
| 55.6 | 98 | 0.780576 | 134 | 834 | 4.828358 | 0.679104 | 0.049459 | 0.027821 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001435 | 0.164269 | 834 | 14 | 99 | 59.571429 | 0.926829 | 0.986811 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d66ac9bbed52cc97398c2e00fdb14a0c9b408fc1 | 8,428 | py | Python | smartlingApiSdk/api/EstimatesApi.py | Smartling/api-sdk-python | 85e937c3ad0abcf5022688a476ac2edb34ab33ac | [
"Apache-2.0"
] | 8 | 2015-01-08T21:31:17.000Z | 2021-01-07T07:50:31.000Z | smartlingApiSdk/api/EstimatesApi.py | Smartling/api-sdk-python | 85e937c3ad0abcf5022688a476ac2edb34ab33ac | [
"Apache-2.0"
] | 8 | 2015-05-18T21:43:03.000Z | 2020-05-19T06:12:17.000Z | smartlingApiSdk/api/EstimatesApi.py | Smartling/api-sdk-python | 85e937c3ad0abcf5022688a476ac2edb34ab33ac | [
"Apache-2.0"
] | 14 | 2015-07-24T08:52:27.000Z | 2022-03-05T06:36:45.000Z |
#!/usr/bin/python
# -*- coding: utf-8 -*-
""" Copyright 2012-2021 Smartling, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this work except in compliance with the License.
* You may obtain a copy of the License in the LICENSE file, or at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
"""
from smartlingApiSdk.ApiV2 import ApiV2
class EstimatesApi(ApiV2):
def __init__(self, userIdentifier, userSecret, projectId, proxySettings=None, permanentHeaders={}, env='prod'):
ApiV2.__init__(self, userIdentifier, userSecret, projectId, proxySettings, permanentHeaders=permanentHeaders, env=env)
def getJobFuzzyEstimateReports(self, translationJobUid, reportStatus='', contentCoverage='', creatorUserUids=[], translationJobSchemaContents=[], tags=[], createdFrom='', createdTo='', limit=0, offset=0, **kwargs):
"""
method : GET
api url : /estimates-api/v2/projects/{projectId}/jobs/{translationJobUid}/reports/fuzzy
Responses:
200 : OK
details : https://api-reference.smartling.com/#operation/getJobFuzzyEstimateReports
"""
kw = {
'reportStatus':reportStatus,
'contentCoverage':contentCoverage,
'creatorUserUids':creatorUserUids,
'translationJobSchemaContents':translationJobSchemaContents,
'tags':tags,
'createdFrom':createdFrom,
'createdTo':createdTo,
'limit':limit,
'offset':offset,
}
kw.update(kwargs)
url = self.urlHelper.getUrl('/estimates-api/v2/projects/{projectId}/jobs/{translationJobUid}/reports/fuzzy', translationJobUid=translationJobUid, **kwargs)
response, status = self.command('GET', url, kw)
return response, status
def generateJobFuzzyEstimateReports(self, translationJobUid, contentType, tags, **kwargs):
"""
method : POST
api url : /estimates-api/v2/projects/{projectId}/jobs/{translationJobUid}/reports/fuzzy
Responses:
200 : OK
details : https://api-reference.smartling.com/#operation/generateJobFuzzyEstimateReports
"""
kw = {
'contentType':contentType,
'tags':tags,
}
kw.update(kwargs)
url = self.urlHelper.getUrl('/estimates-api/v2/projects/{projectId}/jobs/{translationJobUid}/reports/fuzzy', translationJobUid=translationJobUid, **kwargs)
response, status = self.commandJson('POST', url, kw)
return response, status
def getJobCostEstimateReports(self, translationJobUid, reportStatus='', contentCoverage='', creatorUserUids=[], translationJobSchemaContents=[], tags=[], createdFrom='', createdTo='', limit=0, offset=0, **kwargs):
"""
method : GET
api url : /estimates-api/v2/projects/{projectId}/jobs/{translationJobUid}/reports/cost
Responses:
200 : OK
details : https://api-reference.smartling.com/#operation/getJobCostEstimateReports
"""
kw = {
'reportStatus':reportStatus,
'contentCoverage':contentCoverage,
'creatorUserUids':creatorUserUids,
'translationJobSchemaContents':translationJobSchemaContents,
'tags':tags,
'createdFrom':createdFrom,
'createdTo':createdTo,
'limit':limit,
'offset':offset,
}
kw.update(kwargs)
url = self.urlHelper.getUrl('/estimates-api/v2/projects/{projectId}/jobs/{translationJobUid}/reports/cost', translationJobUid=translationJobUid, **kwargs)
response, status = self.command('GET', url, kw)
return response, status
def generateJobCostEstimateReports(self, translationJobUid, contentType, tags, localeWorkflows, fuzzyProfileUid, **kwargs):
"""
method : POST
api url : /estimates-api/v2/projects/{projectId}/jobs/{translationJobUid}/reports/cost
Responses:
200 : OK
details : https://api-reference.smartling.com/#operation/generateJobCostEstimateReports
"""
kw = {
'contentType':contentType,
'tags':tags,
'localeWorkflows':localeWorkflows,
'fuzzyProfileUid':fuzzyProfileUid,
}
kw.update(kwargs)
url = self.urlHelper.getUrl('/estimates-api/v2/projects/{projectId}/jobs/{translationJobUid}/reports/cost', translationJobUid=translationJobUid, **kwargs)
response, status = self.commandJson('POST', url, kw)
return response, status
def getJobEstimateReportStatus(self, reportUid, reportStatus='', reportType='', **kwargs):
"""
method : GET
api url : /estimates-api/v2/projects/{projectId}/reports/{reportUid}/status
Responses:
200 : OK
details : https://api-reference.smartling.com/#operation/getJobEstimateReportStatus
"""
kw = {
'reportStatus':reportStatus,
'reportType':reportType,
}
kw.update(kwargs)
url = self.urlHelper.getUrl('/estimates-api/v2/projects/{projectId}/reports/{reportUid}/status', reportUid=reportUid, **kwargs)
response, status = self.command('GET', url, kw)
return response, status
def getJobEstimateReport(self, reportUid, reportStatus='', reportType='', **kwargs):
"""
method : GET
api url : /estimates-api/v2/projects/{projectId}/reports/{reportUid}
Responses:
200 : OK
details : https://api-reference.smartling.com/#operation/getJobEstimateReport
"""
kw = {
'reportStatus':reportStatus,
'reportType':reportType,
}
kw.update(kwargs)
url = self.urlHelper.getUrl('/estimates-api/v2/projects/{projectId}/reports/{reportUid}', reportUid=reportUid, **kwargs)
response, status = self.command('GET', url, kw)
return response, status
def deleteJobEstimateReport(self, reportUid, **kwargs):
"""
method : DELETE
api url : /estimates-api/v2/projects/{projectId}/reports/{reportUid}
Responses:
200 : OK
details : https://api-reference.smartling.com/#operation/deleteJobEstimateReport
"""
kw = {
}
kw.update(kwargs)
url = self.urlHelper.getUrl('/estimates-api/v2/projects/{projectId}/reports/{reportUid}', reportUid=reportUid, **kwargs)
response, status = self.command('DELETE', url, kw)
return response, status
def modifyJobEstimateReportTags(self, reportUid, tags, **kwargs):
"""
method : PUT
api url : /estimates-api/v2/projects/{projectId}/reports/{reportUid}/tags
Responses:
200 : OK
details : https://api-reference.smartling.com/#operation/modifyJobEstimateReportTags
"""
kw = {
'tags':tags,
}
kw.update(kwargs)
url = self.urlHelper.getUrl('/estimates-api/v2/projects/{projectId}/reports/{reportUid}/tags', reportUid=reportUid, **kwargs)
response, status = self.commandJson('PUT', url, kw)
return response, status
def exportJobEstimationReport(self, projectUid, reportUid, format, **kwargs):
"""
method : GET
api url : /estimates-api/v2/projects/{projectUid}/reports/{reportUid}/download
Responses:
200 : OK
details : https://api-reference.smartling.com/#operation/exportJobEstimationReport
"""
kw = {
'format':format,
}
kw.update(kwargs)
url = self.urlHelper.getUrl('/estimates-api/v2/projects/{projectUid}/reports/{reportUid}/download', projectUid=projectUid, reportUid=reportUid, **kwargs)
response, status = self.command('GET', url, kw)
return response, status
| 41.722772 | 218 | 0.621262 | 746 | 8,428 | 7.008043 | 0.197051 | 0.041316 | 0.048202 | 0.075746 | 0.73508 | 0.722265 | 0.682862 | 0.682862 | 0.666029 | 0.647666 | 0 | 0.010553 | 0.25795 | 8,428 | 201 | 219 | 41.930348 | 0.825392 | 0.295088 | 0 | 0.690722 | 0 | 0 | 0.186534 | 0.128552 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103093 | false | 0 | 0.010309 | 0 | 0.216495 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d675f6f9d4cc85cb82ade7351a4c882f764248ac | 494 | py | Python | Python 基础教程/1.5.7 lamda应用.py | shao1chuan/pythonbook | cd9877d04e1e11422d38cc051e368d3d9ce2ab45 | [
"MulanPSL-1.0"
] | 95 | 2020-10-11T04:45:46.000Z | 2022-02-25T01:50:40.000Z | Python 基础教程/1.5.7 lamda应用.py | shao1chuan/pythonbook | cd9877d04e1e11422d38cc051e368d3d9ce2ab45 | [
"MulanPSL-1.0"
] | null | null | null | Python 基础教程/1.5.7 lamda应用.py | shao1chuan/pythonbook | cd9877d04e1e11422d38cc051e368d3d9ce2ab45 | [
"MulanPSL-1.0"
] | 30 | 2020-11-05T09:01:00.000Z | 2022-03-08T05:58:55.000Z | # https://blog.csdn.net/zjuxsl/article/details/77104382
# 1. A lambda is an anonymous function, i.e. a function without a specific name. Start with the simplest example:
def f(x):
return x**2
print(f(4))
# Using lambda in Python, the same function is written like this
g = lambda x : x**2
print (g(4))
# In a lambda, the parameters come before the colon (multiple parameters are separated by commas) and the return value comes after it.
# A lambda statement actually builds a function object
from functools import reduce
reduce(lambda x,y:x+y, [1,2,3]) #6
reduce(lambda x,y:x * y, [1,2,4]) #8
reduce(lambda x,y: x and y, [True,False,True]) #False
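In the same spirit as the reduce examples above, lambda also pairs with map() and filter() (a small added sketch, not part of the original tutorial):

```python
# map applies the lambda to every element; filter keeps elements where it is truthy
nums = [1, 2, 3, 4]
print(list(map(lambda x: x ** 2, nums)))         # [1, 4, 9, 16]
print(list(filter(lambda x: x % 2 == 0, nums)))  # [2, 4]
```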
def f(x, y):
return x + y
reduce(lambda x, y: f(x, y), [1, 2, 3]) # 6
| 18.296296 | 55 | 0.661943 | 91 | 494 | 3.593407 | 0.461538 | 0.055046 | 0.159021 | 0.171254 | 0.180428 | 0.134557 | 0.110092 | 0.110092 | 0 | 0 | 0 | 0.057279 | 0.151822 | 494 | 26 | 56 | 19 | 0.72315 | 0.376518 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.083333 | 0.166667 | 0.416667 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
d686499fa79e594e85a706dad4ff13ce847aaa79 | 88 | py | Python | obliv/__init__.py | dsroche/obliv | 7a7c72e7dcc05d9a30656a501a952d722027dbc3 | [
"Unlicense"
] | 2 | 2020-11-21T00:18:12.000Z | 2020-11-24T02:20:17.000Z | obliv/__init__.py | dsroche/obliv | 7a7c72e7dcc05d9a30656a501a952d722027dbc3 | [
"Unlicense"
] | null | null | null | obliv/__init__.py | dsroche/obliv | 7a7c72e7dcc05d9a30656a501a952d722027dbc3 | [
"Unlicense"
] | null | null | null | __all__ = ["hirb", "voram", "skipstash", "fstore", "mt_ssh_store", "ssh_info", "idstr"]
| 44 | 87 | 0.636364 | 11 | 88 | 4.454545 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102273 | 88 | 1 | 88 | 88 | 0.620253 | 0 | 0 | 0 | 0 | 0 | 0.556818 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d6889b61554d788bc43d1d4a886ba1e6ea609e32 | 348 | py | Python | tests/test_graph.py | ssube/redesigned-barnacle | 314ea415b6f725c798cc97d6e619fbedc7f8bd21 | [
"MIT"
] | null | null | null | tests/test_graph.py | ssube/redesigned-barnacle | 314ea415b6f725c798cc97d6e619fbedc7f8bd21 | [
"MIT"
] | 1 | 2021-11-04T16:00:15.000Z | 2021-11-04T16:00:15.000Z | tests/test_graph.py | ssube/redesigned-barnacle | 314ea415b6f725c798cc97d6e619fbedc7f8bd21 | [
"MIT"
] | null | null | null | from redesigned_barnacle.buffer import CircularBuffer
from redesigned_barnacle.graph import Sparkline
from redesigned_barnacle.mock import MockFramebuffer
from unittest import TestCase
class SparkTest(TestCase):
def test_line(self):
buf = CircularBuffer()
sl = Sparkline(32, 64, buf)
sl.push(16)
sl.draw(MockFramebuffer(), 0, 0) | 29 | 53 | 0.775862 | 44 | 348 | 6.045455 | 0.590909 | 0.157895 | 0.24812 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026936 | 0.146552 | 348 | 12 | 54 | 29 | 0.868687 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.4 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
d692c63662549a5451cd5f10534438beadbf49d1 | 401 | py | Python | programs/pgm07_23.py | danielsunzhongyuan/python_practice | 79bc88db1c52ee2f5607f6f9fec1bbacea2804ff | [
"Apache-2.0"
] | null | null | null | programs/pgm07_23.py | danielsunzhongyuan/python_practice | 79bc88db1c52ee2f5607f6f9fec1bbacea2804ff | [
"Apache-2.0"
] | null | null | null | programs/pgm07_23.py | danielsunzhongyuan/python_practice | 79bc88db1c52ee2f5607f6f9fec1bbacea2804ff | [
"Apache-2.0"
] | null | null | null | #
# This file contains the Python code from Program 7.23 of
# "Data Structures and Algorithms
# with Object-Oriented Design Patterns in Python"
# by Bruno R. Preiss.
#
# Copyright (c) 2003 by Bruno R. Preiss, P.Eng. All rights reserved.
#
# http://www.brpreiss.com/books/opus7/programs/pgm07_23.txt
#
class SortedList(OrderedList):
def __init__(self):
super(SortedList, self).__init__()
| 26.733333 | 69 | 0.723192 | 58 | 401 | 4.844828 | 0.844828 | 0.049822 | 0.05694 | 0.099644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035821 | 0.164589 | 401 | 14 | 70 | 28.642857 | 0.802985 | 0.700748 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
d6b2d831c8279558598bc5824ede22c12a555e25 | 16,204 | py | Python | STED_analysis/pixel_detect.py | zhaoaite/dynamic_thresholding_algorithm | adcdfc7970098ab78c83d9aa86333f0a38d30bbb | [
"MIT"
] | 3 | 2021-09-23T03:25:09.000Z | 2022-03-11T15:23:18.000Z | STED_analysis/pixel_detect.py | zhaoaite/dynamic_thresholding_algorithm | adcdfc7970098ab78c83d9aa86333f0a38d30bbb | [
"MIT"
] | null | null | null | STED_analysis/pixel_detect.py | zhaoaite/dynamic_thresholding_algorithm | adcdfc7970098ab78c83d9aa86333f0a38d30bbb | [
"MIT"
] | null | null | null | import cv2
import numpy as np
from matplotlib import pyplot as plt
import os
from scipy.stats.stats import pearsonr
def getColors(n):
colors = np.zeros((n, 3))
colors[:, 0] = np.random.permutation(np.linspace(0, 256, n))
colors[:, 1] = np.random.permutation(colors[:, 0])
colors[:, 2] = np.random.permutation(colors[:, 1])
return colors
def connectivity_clump_detect(path1,path2):
tublin=cv2.imread(path1)
tublin=cv2.cvtColor(tublin,cv2.COLOR_BGR2GRAY)
tublin = cv2.GaussianBlur(tublin, (5, 5), 0)
thresh_tublin,binary_tublin=cv2.threshold(tublin, 30, 255,cv2.THRESH_BINARY)
vash2=cv2.imread(path2)
vash2=cv2.cvtColor(vash2,cv2.COLOR_BGR2GRAY)
vash2 = cv2.GaussianBlur(vash2, (3, 3), 0)
thresh_vash2,binary_vash2=cv2.threshold(vash2, 30, 255,cv2.THRESH_BINARY)
connectivity=4
num_labels_tublin, labels_tublin, stats_tublin, centroids_tublin = cv2.connectedComponentsWithStats(binary_tublin, connectivity, cv2.CV_8U)
num_labels_vash2, labels_vash2, stats_vash2, centroids_vash2 = cv2.connectedComponentsWithStats(binary_vash2, connectivity, cv2.CV_8U)
colors = getColors(num_labels_vash2)
    dst_tublin = np.ones((binary_tublin.shape[0], binary_tublin.shape[1], 3), dtype=np.uint8) * 0
    dst_vash2 = np.ones((binary_vash2.shape[0], binary_vash2.shape[1], 3), dtype=np.uint8) * 0
    # for i in range(num_labels):
    #     dst_vash2[labels == i] = colors[i]
    num_tublin = 0
    cross_pixel = 0
    num_vash = 0
    num_vash_pixel = 0
    cross_num = 0
    for i in range(num_labels_tublin):
        if stats_tublin[i, 4] < 6000 and stats_tublin[i, 4] > 5:
            num_tublin += 1
            dst_tublin[labels_tublin == i] = [255, 70, 90]
    vash_list = []
    for i in range(num_labels_vash2):
        # print(num_labels_vash2)
        if stats_vash2[i, 4] > 100 and stats_vash2[i, 4] < 50000:
            dst_vash2[labels_vash2 == i] = [255, 70, 90]
            num_vash += 1
            cv2.rectangle(vash2, (stats_vash2[i, 0], stats_vash2[i, 1]), (stats_vash2[i, 0] + stats_vash2[i, 2], stats_vash2[i, 1] + stats_vash2[i, 3]), (255, 100, 100), 1)
            vash_list.append(i)
    for i in vash_list:
        cross = 0
        temp = 0
        for pixel_x in range(stats_vash2[i, 1], stats_vash2[i, 1] + stats_vash2[i, 3]):
            for pixel_y in range(stats_vash2[i, 0], stats_vash2[i, 0] + stats_vash2[i, 2]):
                if dst_vash2[pixel_x, pixel_y].any() != np.array([0, 0, 0]).any():
                    num_vash_pixel += 1
                    temp += 1
                if dst_tublin[pixel_x, pixel_y].any() != np.array([0, 0, 0]).any() and dst_vash2[pixel_x, pixel_y].any() != np.array([0, 0, 0]).any():
                    cross += 1
        # cv2.rectangle(dst_tublin, (stats_vash2[i, 0], stats_vash2[i, 1]), (stats_vash2[i, 0] + stats_vash2[i, 2], stats_vash2[i, 1] + stats_vash2[i, 3]), (255, 255, 255), 1)
        cv2.rectangle(vash2, (stats_vash2[i, 0], stats_vash2[i, 1]), (stats_vash2[i, 0] + stats_vash2[i, 2], stats_vash2[i, 1] + stats_vash2[i, 3]), (255, 255, 255), 1)
        if cross != 0:
            cross_pixel += temp
            cross_num += 1
    new_path = os.path.join('./clumps/', path, subpath, subsubpath)
    print(new_path)
    if not os.path.exists(new_path):
        os.makedirs(new_path)
    cv2.imwrite(new_path + '/' + file, vash2)
    # cv2.imwrite('1.jpg', dst_vash2)
    # cv2.imwrite('2.jpg', dst_tublin)
    vash2_in = cross_num / num_vash
    vash2_out = (num_vash - cross_num) / num_vash
    a = open('a.txt', 'a')
    # a.write("--------------------\n")
    a.write('path:' + str(path1) + '\n')
    a.write('tubulin:' + str(num_tublin) + '\n')
    a.write('clump:' + str(num_vash) + '\n')
    a.write('clump in tubulin:' + str(cross_num) + '\n')
    a.write('clump out of tubulin:' + str(num_vash - cross_num) + '\n')
    a.write("--------------------\n")
    a.write("clump_in/clump_total:" + str(vash2_in) + '\n')
    a.write("clump_out/clump_total:" + str(vash2_out) + '\n')
    a.close()
    if 'Sample 1' in path1 or 'Sample 2' in path1:
        WT.append([vash2_in, vash2_out])
    else:
        KO.append([vash2_in, vash2_out])
    return vash2_in, vash2_out
def PCC(path1, path2):
    tublin = cv2.imread(path1)
    tublin = cv2.cvtColor(tublin, cv2.COLOR_BGR2GRAY)
    # 500*500 local
    tublin = cv2.resize(tublin, (1024, 1024), interpolation=cv2.INTER_AREA)
    vash2 = cv2.imread(path2)
    vash2 = cv2.cvtColor(vash2, cv2.COLOR_BGR2GRAY)
    vash2 = cv2.resize(vash2, (1024, 1024), interpolation=cv2.INTER_AREA)
    tublin = np.asarray(tublin)
    vash2 = np.asarray(vash2)
    print(vash2.shape)
    co = pearsonr(vash2.reshape(1024 * 1024), tublin.reshape(1024 * 1024))
    a = open('20201202_cell4_pixel.txt', 'a')
    a.write("--------------------\n")
    a.write('Pearson:' + str(co[0]) + '\n')
    a.close()
    return co
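`PCC` above delegates the correlation to `scipy.stats.pearsonr` on the two flattened grayscale arrays. As a sanity check, the same coefficient can be computed by hand; the sketch below (pure Python, no SciPy) is an illustration and not part of the original script.

```python
import math

def pearson(xs, ys):
    # Pearson r = cov(x, y) / (std(x) * std(y)), computed from raw sums.
    n = len(xs)
    mx = sum(xs) / float(n)
    my = sum(ys) / float(n)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linearly related signals give r == 1.0.
print(round(pearson([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # → 1.0
```

For real image pairs the inputs would be the flattened pixel intensities, exactly as the 1024*1024 reshape does in `PCC`.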
def vash_num_detect(path1, path2):
    tublin = cv2.imread(path1)
    tublin = cv2.cvtColor(tublin, cv2.COLOR_BGR2GRAY)
    thresh_tublin, binary_tublin = cv2.threshold(tublin, 30, 255, cv2.THRESH_BINARY)
    vash2 = cv2.imread(path2)
    vash2 = cv2.cvtColor(vash2, cv2.COLOR_BGR2GRAY)
    thresh_vash2, binary_vash2 = cv2.threshold(vash2, 30, 255, cv2.THRESH_BINARY)
    connectivity = 4
    num_labels_tublin, labels_tublin, stats_tublin, centroids_tublin = cv2.connectedComponentsWithStats(binary_tublin, connectivity, cv2.CV_8U)
    num_labels_vash2, labels_vash2, stats_vash2, centroids_vash2 = cv2.connectedComponentsWithStats(binary_vash2, connectivity, cv2.CV_8U)
    colors = getColors(num_labels_vash2)
    dst_tublin = np.ones((binary_tublin.shape[0], binary_tublin.shape[1], 3), dtype=np.uint8) * 0
    dst_vash2 = np.ones((binary_vash2.shape[0], binary_vash2.shape[1], 3), dtype=np.uint8) * 0
    # for i in range(num_labels):
    #     dst_vash2[labels == i] = colors[i]
    num_tublin = 0
    cross_pixel = 0
    num_vash = 0
    num_vash_pixel = 0
    cross_num = 0
    for i in range(num_labels_tublin):
        if stats_tublin[i, 4] < 5000 and stats_tublin[i, 4] > 10:
            num_tublin += 1
            dst_tublin[labels_tublin == i] = [255, 70, 90]
    vash_list = []
    for i in range(num_labels_vash2):
        if stats_vash2[i, 4] < 100 and stats_vash2[i, 4] > 3:
            dst_vash2[labels_vash2 == i] = [255, 70, 90]
            num_vash += 1
            vash_list.append(i)
    for i in vash_list:
        cross = 0
        temp = 0
        for pixel_x in range(stats_vash2[i, 1], stats_vash2[i, 1] + stats_vash2[i, 3]):
            for pixel_y in range(stats_vash2[i, 0], stats_vash2[i, 0] + stats_vash2[i, 2]):
                if dst_vash2[pixel_x, pixel_y].any() != np.array([0, 0, 0]).any():
                    num_vash_pixel += 1
                    temp += 1
                if dst_tublin[pixel_x, pixel_y].any() != np.array([0, 0, 0]).any() and dst_vash2[pixel_x, pixel_y].any() != np.array([0, 0, 0]).any():
                    cross += 1
        cv2.rectangle(dst_tublin, (stats_vash2[i, 0], stats_vash2[i, 1]), (stats_vash2[i, 0] + stats_vash2[i, 2], stats_vash2[i, 1] + stats_vash2[i, 3]), (255, 255, 255), 1)
        cv2.rectangle(dst_vash2, (stats_vash2[i, 0], stats_vash2[i, 1]), (stats_vash2[i, 0] + stats_vash2[i, 2], stats_vash2[i, 1] + stats_vash2[i, 3]), (255, 255, 255), 1)
        if cross != 0:
            cross_pixel += temp
            cross_num += 1
    cv2.imwrite('1.jpg', dst_vash2)
    cv2.imwrite('2.jpg', dst_tublin)
    # x, y = centroids_vash2[i, :]
    # if dst_tublin[int(round(x)), int(round(y))].any() != np.array([0, 0, 0]).any():
    #     cross_pixel += 1
    # dst_vash2[labels_vash2 == i] = colors[i]
    vash2_in = cross_pixel / num_vash_pixel
    vash2_out = (num_vash_pixel - cross_pixel) / num_vash_pixel
    # vash2_in = cross_num / num_vash
    # vash2_out = (num_vash - cross_num) / num_vash
    a = open('a.txt', 'a')
    # a.write("--------------------\n")
    a.write('path:' + str(path1) + '\n')
    a.write('tubulin:' + str(num_tublin) + '\n')
    a.write('vash2:' + str(num_vash) + '\n')
    a.write('vash2 in tubulin:' + str(cross_num) + '\n')
    a.write('vash2 out of tubulin:' + str(num_vash - cross_num) + '\n')
    a.write("--------------------\n")
    a.write("vash2_in/vash2_total:" + str(vash2_in) + '\n')
    a.write("vash2_out/vash2_total:" + str(vash2_out) + '\n')
    a.close()
    if 'Sample 1' in path1 or 'Sample 2' in path1:
        WT.append([vash2_in, vash2_out])
    else:
        KO.append([vash2_in, vash2_out])
    return vash2_in, vash2_out
def pixel_detect(path1, path2):
    tubulin = cv2.imread(path1)
    vash2 = cv2.imread(path2)
    # vash2 = cv2.GaussianBlur(vash2, (3, 3), 0)
    # tubulin = cv2.GaussianBlur(tubulin, (5, 5), 0)
    tubulin_hsv = cv2.cvtColor(tubulin, cv2.COLOR_BGR2HSV)
    vash2__hsv = cv2.cvtColor(vash2, cv2.COLOR_BGR2HSV)
    h, w, _ = vash2__hsv.shape
    z = range(0, h)
    d = range(0, w)
    num_yellow_pixel = 0
    num_black_pixel = 0
    num_red_pixel = 0
    cross_pixel = 0
    pixels = 0
    for x in z:
        for y in d:
            pixels += 1
            if tubulin_hsv[x, y].any() != np.array([0, 0, 0]).any():
                num_yellow_pixel += 1
            if vash2__hsv[x, y].any() != np.array([0, 0, 0]).any():
                num_red_pixel += 1
                if x < h - 1 and y < w - 1 and x > 0 and y > 0:
                    if tubulin_hsv[x, y].any() != np.array([0, 0, 0]).any():
                        # (tubulin_hsv[x + 1, y].any() != np.array([0, 0, 0]).any() and vash2__hsv[x + 1, y].any() != np.array([0, 0, 0]).any()) or \
                        # (tubulin_hsv[x, y + 1].any() != np.array([0, 0, 0]).any() and vash2__hsv[x, y + 1].any() != np.array([0, 0, 0]).any()) or \
                        # (tubulin_hsv[x - 1, y].any() != np.array([0, 0, 0]).any() and vash2__hsv[x - 1, y].any() != np.array([0, 0, 0]).any()) or \
                        # (tubulin_hsv[x, y - 1].any() != np.array([0, 0, 0]).any() and vash2__hsv[x, y - 1].any() != np.array([0, 0, 0]).any()):
                        cross_pixel += 1
            # if tubulin_hsv[x, y].any() != np.array([0, 0, 0]).any() and vash2__hsv[x, y].any() != np.array([0, 0, 0]).any():
            #     cross_pixel += 1
            # if tubulin_hsv[x, y].any() == np.array([0, 0, 0]).any() and vash2__hsv[x, y].any() != np.array([0, 0, 0]).any():
            #     out_tubulin_pixel += 1
            if tubulin_hsv[x, y].any() == np.array([0, 0, 0]).any() and vash2__hsv[x, y].any() == np.array([0, 0, 0]).any():
                num_black_pixel += 1
    # if 'Sample 4' in path1:
    #     cross_pixel = cross_pixel + int(num_red_pixel * 0.07)
    # else:
    #     cross_pixel = cross_pixel - int(num_red_pixel * 0.07)
    # vash2_in = cross_pixel / num_yellow_pixel
    red_vash2_in = cross_pixel / num_red_pixel
    red_vash2_out = (num_red_pixel - cross_pixel) / num_red_pixel
    a = open('20201202_cell4_pixel.txt', 'a')
    # a.write("--------------------\n")
    a.write('path:' + str(path1) + '\n')
    # a.write('cross_pixel:' + str(cross_pixel) + '\n')
    # a.write('tubulin:' + str(num_yellow_pixel) + '\n')
    # a.write('vash2:' + str(num_red_pixel) + '\n')
    a.write('map4 overlapping tubulin:' + str(cross_pixel) + '\n')
    a.write('map4 out of tubulin:' + str(num_red_pixel - cross_pixel) + '\n')
    # a.write("black:" + str(num_black_pixel) + '\n')
    # a.write("cross_pixel/tubulin:" + str(vash2_in) + '\n')
    a.write("--------------------\n")
    a.write("map4_overlap/vash_total:" + str(red_vash2_in) + '\n')
    a.write("map4_out/vash_total:" + str(red_vash2_out) + '\n')
    a.close()
    if 'Sample 5' in path1:
        # if 'Sample 1' in path1 or 'Sample 2' in path1:
        WT.append([red_vash2_in, red_vash2_out])
    else:
        KO.append([red_vash2_in, red_vash2_out])
    # print('cross_pixel', cross_pixel)
    # print('tubulin', num_yellow_pixel)
    # print("cross_vash2/tubulin:", vash2_in)
    # print("outoftubl_vash2/black:", vash2_out)
    return red_vash2_in, red_vash2_out
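The foreground test used throughout this file, `pixel.any() != np.array([0, 0, 0]).any()`, is worth unpacking: `np.array([0, 0, 0]).any()` is always `False`, so the whole expression reduces to `pixel.any()`, i.e. "is any channel of this pixel non-zero". The small demonstration below is illustrative only:

```python
import numpy as np

black = np.array([0, 0, 0])
red = np.array([255, 70, 90])

# .any() on an all-zero vector is False, so the right-hand side is constant.
print(bool(black.any()))                 # → False

# Hence "pixel.any() != black.any()" is simply "pixel.any()".
print(bool(red.any() != black.any()))    # → True  (foreground pixel)
print(bool(black.any() != black.any()))  # → False (background pixel)
```

An element-wise comparison such as `(pixel != 0).any()` or simply `pixel.any()` would express the same intent more directly.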
# # if HSV[2, 3] == [178, 255, 204]:
# #     print("red")
# cv2.imshow("ex_HSV", ex_HSV)
# cv2.imshow("HSV", HSV)
# cv2.imshow('image', image)  # display img
# # cv2.setMouseCallback("imageHSV", getpos)
# cv2.waitKey(0)
#
if __name__ == '__main__':
    data_path = './STED_COLOR/20201202/'
    WT = []
    KO = []
    fileList = os.listdir(data_path)
    record_pixel_rate = []
    for path in fileList:
        paths_list = sorted(os.listdir(os.path.join(data_path, path)))
        for subpath in paths_list:
            subpaths = sorted(os.listdir(os.path.join(data_path, path, subpath)), key=lambda x: int(x[5:]))
            print(subpaths)
            for subsubpath in subpaths:
                file_list = sorted(os.listdir(os.path.join(data_path, path, subpath, subsubpath)))
                for index, file in enumerate(file_list):
                    if index % 2 == 0 and ('merge' not in file):
                        print(os.path.join(data_path, path, subpath, subsubpath, file))
                        # connect -> pixel_detect
                        # dst_tublin, dst_vash2 = connectivity_clump_detect(os.path.join(data_path, path, subpath, subsubpath, file_list[index + 1]), os.path.join(data_path, path, subpath, subsubpath, file_list[index]))
                        # dst_tublin, dst_vash2 = connectivity_detect(os.path.join(data_path, path, subpath, subsubpath, file_list[index]), os.path.join(data_path, path, subpath, subsubpath, file_list[index + 1]))
                        PCC(os.path.join(data_path, path, subpath, subsubpath, file_list[index + 1]), os.path.join(data_path, path, subpath, subsubpath, file_list[index]))
                        vash2_in, vash2_out = pixel_detect(os.path.join(data_path, path, subpath, subsubpath, file_list[index + 1]), os.path.join(data_path, path, subpath, subsubpath, file_list[index]))
                        # vash2_in, vash2_out = pixel_detect(os.path.join(data_path, path, subpath, subsubpath, file_list[index]), os.path.join(data_path, path, subpath, subsubpath, file_list[index + 1]))
                        # vash2_in, vash2_out = pixel_detect(os.path.join(data_path, path, subpath, subsubpath, file_list[index]), os.path.join(data_path, path, subpath, subsubpath, file_list[index + 1]), tubulin, vash2)
                        ## vash_num_calculate
                        # vash2_in, vash2_out = vash_num_detect(os.path.join(data_path, path, subpath, subsubpath, file_list[index]), os.path.join(data_path, path, subpath, subsubpath, file_list[index + 1]))
    np.savetxt('1213_s5_local_pixel.txt', WT)
    np.savetxt('1213_s6_local_pixel.txt', KO)
    WT = np.loadtxt('1213_s5_local_pixel.txt')
    KO = np.loadtxt('1213_s6_local_pixel.txt')
    WT = np.array(WT)
    KO = np.array(KO)
    plt.plot(range(0, len(WT), 1), WT[:, 0], 'o', label='s5 in')
    plt.plot(50, np.mean(WT[:, 0]), 'o', label='s5 in (mean)')
    print(np.mean(WT[:, 0]))
    plt.text(50, np.mean(WT[:, 0]) + 0.02, round(np.mean(WT[:, 0]), 2), ha='center', va='bottom', fontsize=10)
    plt.plot(range(100, 100 + len(KO), 1), KO[:, 0], 'p', label='s6 in')
    plt.plot(150, np.mean(KO[:, 0]), 'p', label='s6 in (mean)')
    print(np.mean(KO[:, 0]))
    plt.text(150, np.mean(KO[:, 0]) + 0.02, round(np.mean(KO[:, 0]), 2), ha='center', va='bottom', fontsize=10)
    plt.plot(range(200, 200 + len(WT), 1), WT[:, 1], '>', label='s5 out')
    plt.plot(250, np.mean(WT[:, 1]), '>', label='s5 out (mean)')
    print(np.mean(WT[:, 1]))
    plt.text(250, np.mean(WT[:, 1]) + 0.02, round(np.mean(WT[:, 1]), 2), ha='center', va='bottom', fontsize=10)
    plt.plot(range(300, 300 + len(KO), 1), KO[:, 1], '*', label='s6 out')
    plt.plot(350, np.mean(KO[:, 1]), 'p', label='s6 out (mean)')
    print(np.mean(KO[:, 1]))
    plt.text(350, np.mean(KO[:, 1]) + 0.02, round(np.mean(KO[:, 1]), 2), ha='center', va='bottom', fontsize=10)
    plt.legend(loc='upper left', ncol=2)
    plt.show()
    # wt = np.loadtxt('wt.txt')
    # ko = np.loadtxt('ko.txt')
    # plt.plot(range(0, len(wt), 1), wt[:, 0], 'o')
    # plt.plot(range(500, 500 + len(ko), 1), ko[:, 0], 'p')
    # plt.plot(range(1000, 1000 + len(wt), 1), wt[:, 1], '>')
    # plt.plot(range(1500, 1500 + len(ko), 1), ko[:, 1], '*')
    # plt.show()
# === File: Fund.py (repo: jacobgarder/FundInfo-Python, license: MIT) ===
class Fund:
def __init__(self, id, name):
self.id = id
self.name = name
self.NAV = TAA.getNAV(id)
self.MA6 = TAA.getMA(id, 120)
self.MA10 = TAA.getMA(id, 200)
self.oneMonth = TAA.getChangePercent(id, "month")
self.threeMonths = TAA.getChangePercent(id, "three_months")
self.sixMonths = TAA.getChangePercent(id, "six_months")
self.oneYear = TAA.getChangePercent(id, "year")
self.average = (self.oneMonth + self.threeMonths +
self.sixMonths + self.oneYear) / 4
def getMA6Indicator(self):
return (self.NAV - self.MA6) / self.MA6 * 100.0
def getMA10Indicator(self):
return (self.NAV - self.MA10) / self.MA10 * 100.0
def getAverageReturns(self):
return (self.average)
def getFormattedHeader():
return '{:>10}{:>32}{:>10}{:>10}{:>10}{:>10}{:>10}{:>10}{:>10}{:>10}'.format(
"Id", "Name", "Current", "MA6 %", "MA10 %", "1 month", "3 months", "6 months", "1 year", "1/3/6/12")
def getFormattedData(self):
return '{:>10}{:>32}{:>10.2f}{:>10.2f}{:>10.2f}{:>10.2f}{:>10.2f}{:>10.2f}{:>10.2f}{:>10.2f}'.format(
self.id, self.name[:30], self.NAV, self.getMA6Indicator(), self.getMA10Indicator(), self.oneMonth,
self.threeMonths, self.sixMonths, self.oneYear, self.average)
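The header and data rows above rely on `str.format` width specifiers (`{:>10}`, `{:>32}`, `{:>10.2f}`) to produce fixed-width, right-aligned columns. The alignment behaviour can be checked in isolation; this snippet is illustrative and independent of the `TAA` module:

```python
# '>' right-aligns within the given width; '.2f' rounds floats to 2 decimals.
header = '{:>10}{:>10}'.format("Id", "NAV")
row = '{:>10}{:>10.2f}'.format("XACT123", 153.4567)

print(repr(header))  # → '        Id       NAV'
print(repr(row))     # → '   XACT123    153.46'
```

Because every column is padded to its declared width, header and data rows line up when printed one after the other.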
# === File: import_files.py (repo: ravielakshmanan/arcgis, license: MIT) ===
import os
client = storage.Client()
bucket = client.get_bucket('noah-water.appspot.com')
blobs = bucket.list_blobs(prefix='trends3/Part6')
os.system("gsutil acl ch -u qqbi676scrf4nlgyg6e3hqrm6e@speckle-umbrella-16.iam.gserviceaccount.com:W gs://noah-water.appspot.com")
for blob in blobs:
print(blob.name)
file_read_perm = "gsutil acl ch -u qqbi676scrf4nlgyg6e3hqrm6e@speckle-umbrella-16.iam.gserviceaccount.com:R gs://noah-water.appspot.com/" + blob.name
os.system(file_read_perm)
# print(file_read_perm)
file_import = "gcloud sql import csv precipitation gs://noah-water.appspot.com/" + blob.name + " --database=prec_anomaly --table=precipitation_trend -q"
os.system(file_import)
# print(file_import) | 39.315789 | 153 | 0.776439 | 110 | 747 | 5.154545 | 0.454545 | 0.063492 | 0.112875 | 0.134039 | 0.407407 | 0.37037 | 0.37037 | 0.268078 | 0.268078 | 0.268078 | 0 | 0.029455 | 0.091031 | 747 | 19 | 154 | 39.315789 | 0.805596 | 0.053548 | 0 | 0 | 0 | 0.166667 | 0.551773 | 0.424113 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
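Each blob is processed by shelling out twice: a `gsutil acl ch` grants the Cloud SQL service account read access, then `gcloud sql import csv` loads the file. The command strings are plain concatenations, which can be verified without touching GCP; the blob name below is made up for illustration:

```python
bucket_uri = "gs://noah-water.appspot.com"
blob_name = "trends3/Part6/example.csv"  # hypothetical blob name

# Same concatenation pattern as the loop above.
file_import = ("gcloud sql import csv precipitation " + bucket_uri + "/" + blob_name
               + " --database=prec_anomaly --table=precipitation_trend -q")

print(file_import)
# → gcloud sql import csv precipitation gs://noah-water.appspot.com/trends3/Part6/example.csv --database=prec_anomaly --table=precipitation_trend -q
```

Building the string first (rather than inlining it in `os.system`) makes it easy to log or dry-run the command before executing it.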
# === File: shims/SeeedStudios/grove_py/counterfit_shims_grove/grove_relay.py (repo: CallumTarttelin/CounterFit, license: MIT) ===
This is the code for
- `Grove - Relay <https://www.seeedstudio.com/s/Grove-Relay-p-769.html>`_
Examples:
.. code-block:: python
import time
from counterfit_connection import CounterFitConnection
from counterfit_shims_grove.grove_relay import GroveRelay
# Init the connection to the CounterFit Virtual IoT Device app
CounterFitConnection.init('127.0.0.1', 5000)
# connect to pin 5(slot D5)
PIN = 5
relay = GroveRelay(PIN)
while True:
relay.on()
time.sleep(1)
relay.off()
time.sleep(1)
'''
# pylint: disable=import-error
from counterfit_connection import CounterFitConnection
__all__ = ["GroveRelay"]
class GroveRelay():
'''
Class for Grove - Relay
Args:
pin(int): number of digital pin the relay connected.
'''
def __init__(self, pin):
self.__pin = pin
# pylint: disable=invalid-name
def on(self) -> None:
'''
light on the led
'''
CounterFitConnection.set_actuator_boolean_value(self.__pin, True)
def off(self) -> None:
'''
light off the led
'''
CounterFitConnection.set_actuator_boolean_value(self.__pin, False)
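`GroveRelay` is a thin forwarder to `CounterFitConnection.set_actuator_boolean_value`. The call pattern can be exercised without the CounterFit app by substituting a stand-in object exposing the same method; the `FakeConnection` and `Relay` classes below are test doubles for illustration, not part of the shim:

```python
class FakeConnection:
    # Records (pin, value) pairs instead of talking to the CounterFit app.
    calls = []

    @staticmethod
    def set_actuator_boolean_value(pin, value):
        FakeConnection.calls.append((pin, value))


class Relay:
    # Mirrors the GroveRelay shim, but against an injected connection.
    def __init__(self, pin, connection):
        self.__pin = pin
        self.__conn = connection

    def on(self):
        self.__conn.set_actuator_boolean_value(self.__pin, True)

    def off(self):
        self.__conn.set_actuator_boolean_value(self.__pin, False)


relay = Relay(5, FakeConnection)
relay.on()
relay.off()
print(FakeConnection.calls)  # → [(5, True), (5, False)]
```

Injecting the connection (rather than importing it at module level, as the shim does) is what makes this kind of off-device testing possible.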
# === File: link/items.py (repo: KiriKira/LinkSpider, license: Apache-2.0) ===
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class IndexItem(scrapy.Item):
tag_qiangdan = scrapy.Field()
class DetailItem(scrapy.Item):
tag = scrapy.Field()
pinlei = scrapy.Field()
mingchen = scrapy.Field()
leixing = scrapy.Field()
yaoqiu01 = scrapy.Field()
yaoqiu02 = scrapy.Field()
guige = scrapy.Field()
chechang = scrapy.Field()
chexing = scrapy.Field()
yunfei = scrapy.Field()
chufa01 = scrapy.Field()
chufa02 = scrapy.Field()
chufa03 = scrapy.Field()
mudi01 = scrapy.Field()
mudi02 = scrapy.Field()
mudi03 = scrapy.Field()
zhuangche01 = scrapy.Field()
zhuangche02 = scrapy.Field()
daohuo01 = scrapy.Field()
daohuo02 = scrapy.Field()
chufa_shengnumber = scrapy.Field()
chufa_shinumber = scrapy.Field()
mudi_shengnumber = scrapy.Field()
mudi_shinumber = scrapy.Field()
# === File: OnDemandPublicationScript.py (repo: muneebmallick/OndemandPublication-pyscript, license: MIT) ===
import datetime
import requests
import gzip
import easywebdav
import os
from bs4 import BeautifulSoup as bs
user = raw_input("Username: ")
password = getpass.getpass()
date = raw_input("As of Date (mmddYYYY): ")
#add URL from where the files are required to be downloaded.
archive_url = 'URL'
def get_file_link():
auth = ('user', 'pass')
r = requests.get(archive_url, auth=auth, verify=False)
soup = bs(r.content, 'html.parser')
links = soup.findAll('a')
file_links = [archive_url + link['href'] for link in links if date in link['href']]
return file_links
def download_big_file(url):
auth = ('user', 'pass')
local_filename = url.split('/')[-1]
r = requests.get(url, stream=True, auth=auth, verify=False)
with open(local_filename, 'wb') as f:
for chunk in r.iter_content(chunk_size=1024):
if chunk:
f.write(chunk)
return local_filename
def download_file(url):
auth = ('user', 'pass')
files = []
for link in url:
local_filename = 'C:/TempFiles/' + link.split('/')[-1]
with requests.session() as s:
f = s.get(link, auth=auth, verify=False)
with open(local_filename, 'wb') as g:
g.write(f.content)
g.close()
files.append(local_filename)
return files
def ungzip(files):
filename = []
for file in files:
file_name = str(file.split('.')[0] + ".xml")
with gzip.open(file, 'rb') as f:
with open(file_name, 'wb') as u:
u.write(f.read())
os.remove(file)
#Checking the file name for a specific word.
if "MTM" in file:
filename.insert(0,file_name)
else:
filename.append(file_name)
return filename
def copy_to_datafeed(file):
webdav = easywebdav.connect(
host = 'datafeeds.na.dir.mmallick.com',
username = user,
port = '443',
protocol = 'https',
password = password,
verify_ssl = "C:/pyth/mmallick.pem")
_file = '/pub-dev/' + file.split('/')[-1]
webdav.upload(file, _file)
if __name__ == "__main__":
print '\n' +"Downloading BAPI Links"+ '\n'
bapi_links = get_file_link()
# print bapi_links
print '\n' + "Downloading Bapi Files" + '\n'
gfiles = download_file(bapi_links)
# print gfiles
print '\n' + "Unzipping Bapi Files to a TEMP Location" + '\n'
unfiles = ungzip(gfiles)
# print unfiles
print '\n' + "Copying Bapi Files to a pub-dev Data Feed Location" + '\n'
for file in unfiles:
copy_to_datafeed(file)
os.remove(file) | 20.675439 | 85 | 0.674586 | 355 | 2,357 | 4.352113 | 0.349296 | 0.050485 | 0.023301 | 0.036893 | 0.081553 | 0.056958 | 0.056958 | 0.056958 | 0.056958 | 0.056958 | 0 | 0.006708 | 0.177768 | 2,357 | 114 | 86 | 20.675439 | 0.790506 | 0.061943 | 0 | 0.068493 | 0 | 0 | 0.15179 | 0.01314 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.082192 | 0.09589 | null | null | 0.054795 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
ba5a94482a11f5fee59779e8dcc15a40991e4772 | 2,027 | py | Python | compute_masks.py | MuhammadHamed/Deep-tracking | 09dd43cff30bbb3763cfc591dcbb372e85fd2618 | [
"MIT"
] | null | null | null | compute_masks.py | MuhammadHamed/Deep-tracking | 09dd43cff30bbb3763cfc591dcbb372e85fd2618 | [
"MIT"
] | null | null | null | compute_masks.py | MuhammadHamed/Deep-tracking | 09dd43cff30bbb3763cfc591dcbb372e85fd2618 | [
"MIT"
] | null | null | null | import scipy.io, os
from scipy.misc import imsave
import numpy as np
import cPickle
from PIL import Image
import shutil
import matplotlib.pyplot as plt
# used for producing the image with only the labels when "show masks only" in the annotation tool is pressed
def create_mask_for_image(image_array, image_annotation, label_number):
image_height = image_array.shape[0]
image_width = image_array.shape[1]
# background is 1
image_array_only_mask = np.ones_like(image_array)*255
# Do nothing if there is no annotations
if image_annotation == 0:
print "No annotations for current image"
return image_array_only_mask
number_of_annotations = len(image_annotation)
label_number_index = -1
if label_number == 0:
for annotation in image_annotation:
image_array_only_mask[annotation[1]:annotation[3],annotation[0]:annotation[2]] = image_array[annotation[1]:annotation[3],annotation[0]:annotation[2]]
else:
# get the index of the label_number
for i in range(0, number_of_annotations):
if label_number == image_annotation[i][-1]:
label_number_index = i
break
# check if the their is annotation for the given label_number
if label_number_index == -1:
print "No annotations for label number {} for current image".format(label_number)
# make an image array with white background and only the annotated labels showing
else:
image_array_only_mask[image_annotation[label_number_index][1]:image_annotation[label_number_index][3],image_annotation[label_number_index][0]:image_annotation[label_number_index][2]] = image_array[image_annotation[label_number_index][1]:image_annotation[label_number_index][3],image_annotation[label_number_index][0]:image_annotation[label_number_index][2]]
return image_array_only_mask
# loads the annotation frames from the annotation file
def load_annotationsations_from_file(file_name):
f = file(file_name, 'rb')
frame_rectangle_pairs = cPickle.load(f)
f.close()
return frame_rectangle_pairs
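`create_mask_for_image` copies each annotated rectangle `[x0, y0, x1, y1, label]` from the source image onto a white canvas via NumPy slice assignment; note the row/column order, `array[y0:y1, x0:x1]`. The core operation, sketched with a small array (illustrative, Python 3 / NumPy):

```python
import numpy as np

image = np.arange(36).reshape(6, 6)   # stand-in 6x6 grayscale image
mask = np.ones_like(image) * 255      # white background, as in the function
x0, y0, x1, y1 = 1, 2, 4, 5           # hypothetical annotation rectangle

# Rows are indexed by y, columns by x, matching annotation[1]:annotation[3]
# and annotation[0]:annotation[2] above.
mask[y0:y1, x0:x1] = image[y0:y1, x0:x1]

print(mask[2, 1])  # → 13  (copied from the image)
print(mask[0, 0])  # → 255 (still background)
```

Swapping the x and y slices is the classic bug with this pattern, which is why spelling out the order once is worthwhile.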
# === File: scripts/catalog-manager.py (repo: jtheoof/dotfiles, license: MIT) ===
## @package catalog-manager
# Provides general functions to parse SQ3 files
# and generate SQL code.
#
# Can manage both 3D and MATERIALS (with TEXTURES).
import csv
import fnmatch
import getopt
import logging
import os
import platform
import random
import re
import shutil
import string
import sys
import time
## Main logger
#
# There are two loggers:
# 1. A file logger starting a WARNING level
# 2. A console logger starting a DEUBG level
logger = logging.getLogger('catalog-manager')
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler('catalog-manager.log')
fh.setLevel(logging.WARNING)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
logger.addHandler(fh)
logger.addHandler(ch)
## Deprecated
KIND_FURNITURE = 1
KIND_MATERIAL = 2
KIND_FRAME = 3
## Catalogs
CATALOGS = {
    'Generique': 1,
    'Fly': 2,
    'Castorama': 3,
    'Made': 4,
    'Decoondemand': 5
}
CATALOG_GENERIQUE = 1
## Function for making unique non-existent file name
# with saving source file extension.
#
# Credit goes to Denis Barmenkov:
# http://code.activestate.com/recipes/577200-make-unique-file-name/
def add_unique_postfix(fn):
    path, name = os.path.split(fn)
    name, ext = os.path.splitext(name)
    make_fn = lambda i: os.path.join(path, '%s_%d%s' % (name, i, ext))
    for i in xrange(1, sys.maxint):
        uni_fn = make_fn(i)
        if not os.path.exists(uni_fn):
            return uni_fn
    return None
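The postfix scheme above keeps the extension and appends `_1`, `_2`, … until a free name is found. A condensed, self-contained version of the same idea is shown below; it is illustrative only and uses `itertools.count` instead of the script's Python 2 `xrange(1, sys.maxint)`, with a `taken` set standing in for `os.path.exists` so no filesystem is needed:

```python
import itertools
import os

def unique_postfix(fn, taken):
    # 'taken' plays the role of os.path.exists in the real function.
    path, name = os.path.split(fn)
    name, ext = os.path.splitext(name)
    for i in itertools.count(1):
        candidate = os.path.join(path, '%s_%d%s' % (name, i, ext))
        if candidate not in taken:
            return candidate

# fly_geometry_1.csv already exists, so the next free name is _2.
print(unique_postfix('fly_geometry.csv', {'fly_geometry_1.csv'}))
# → fly_geometry_2.csv
```

The key property is that the extension is split off first, so the counter lands before `.csv` rather than after it.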
## Increase occurence of a key in a dictionary.
def increment_occurence(d, k):
    if k not in d:
        d[k] = 1
    else:
        d[k] += 1
## Parse the CSV file used to keep track of ids in a catalogue.
#
# The ids are used in order to avoid duplicates and make proper
# copies of SQ3 for SQCM.
def parse_csv(filename):
    files = {}
    try:
        read = csv.reader(open(filename, "rb"))
    except IOError:
        return files
    ids_occ = {}
    logger.info("parsing: %s", filename)
    for r in read:
        iden = r[0]
        increment_occurence(ids_occ, iden)
        if iden in files:
            pass
        else:
            files[iden] = {
                'path': ''
            }
    for k, v in [(k, v) for k, v in ids_occ.items() if v > 1]:
        logger.warning('%s found %d times in csv' % (k, v))
    return files
## Parse a 'materials' file (usually Style.xls)
#
# Fills in a dictionary where the key is the id of the material.
# The value is another dictionary containing 'cat_name_fr' and 'texture'.
# Ex:
# {
# 'sketch_in_the_grass_06': {
# 'cat_name_fr': 'Papier peint a motifs',
# 'texture': 'in_the_grass_06'
# }
# }
def parse_material_xls(xls, textures):
    import xlrd
    logger.info("parsing xls file: %s" % xls)
    try:
        book = xlrd.open_workbook(xls, formatting_info=True)
    except IOError:
        logger.error("unable to open: %s" % xls)
        sys.exit(2)
    materials = {}
    for i in range(book.nsheets):
        sheet = book.sheet_by_index(i)
        # Invalid sheet
        if sheet.nrows < 5 or sheet.ncols < 17:
            continue
        for row in range(4, sheet.nrows):
            ide = unicode(sheet.cell(row, 0).value).strip()
            lib = string.capwords(unicode(sheet.cell(row, 3).value))
            typ = unicode(sheet.cell(row, 5).value)
            cat = unicode(sheet.cell(row, 6).value)
            tex = unicode(sheet.cell(row, 15).value).strip()
            tep = ""  # Texture path
            if not ide:
                continue
            logger.debug("material: %s - texture: %s" % (ide, tex))
            if len(typ):
                typ = typ[0].upper() + typ[1:]
            if tex:
                if tex not in textures:
                    logger.error("unable to find texture: %s for: %s" %
                                 (tex, ide))
                    continue
                else:
                    tep = textures[tex]['path']
            if ide in materials:
                logger.error("duplicate key: %s" % ide)
                continue
            buf = {
                'cat_name_fr': lib if lib != '' else None,
                'texture': tep
            }
            materials[ide] = buf
    return materials
## Find all textures (usually jpg files) in CG/TEXTURES/
#
# Fills in a dictionary of the basename (without the extension) with a path.
# Ex:
# {
# 'in_the_grass_06': {
# 'path': './CG/TEXTURES/garden/in_the_grass_06.jpg'
# }
# }
def find_textures(directory, extensions=["jpg"]):
    logger.info('looking for textures in %s' % os.path.abspath(directory))
    textures = {}
    for root, dirnames, filenames in os.walk(directory):
        for f in filenames:
            n, e = os.path.splitext(f)
            if e[1:].lower() in extensions:
                path = os.path.join(root, f)
                if n in textures:
                    logger.error("texture: %s already found here: %s" %
                                 (path, textures[n]['path']))
                    sys.exit(2)
                else:
                    textures[n] = {
                        'path': path
                    }
    return textures
## Find geometry files (usually sq3 files) in CG/3D/CATALOG
#
# Fills in a dictionary based on catalog_geometryid.
# Ex:
# {
# 'generic_archmodels_05': {
# 'sq3': 'archmodels_05.SQ3',
# 'path': './CG/3D/GENERIQUE/.../Beds/archmodels_05/archmodels05.SQ3',
# 'type': 'Beds',
# }
# }
def find_geometry(directory, catalog, extension="sq3",
                  previous_files={}, only_new=True):
    logger.info('looking for files in %s' % os.path.abspath(directory))
    catalog = catalog.lower()
    ids = {}
    old = {}
    ids_occ = {}
    ids_rem = previous_files.copy()  # this dict should be empty at the end
    sep = os.sep if os.sep != '\\' else '\\\\'
    for root, dirnames, filenames in os.walk(directory):
        for f in filenames:
            n, e = os.path.splitext(f)
            if e[1:].lower() == extension:
                tmp, bas = os.path.split(root)
                ide = '%s_%s' % (catalog, bas)
                tmp2, typ = os.path.split(tmp)
                increment_occurence(ids_occ, ide)
                new = {
                    'sq3': f,
                    'path': '%s%s%s' % (root, os.sep, f),
                    'type': typ,
                }
                if ide in previous_files:
                    # Remove key
                    try:
                        ids_rem.pop(ide)
                    except:
                        pass
                    if only_new:
                        continue
                    else:
                        old[ide] = new
                else:
                    ids[ide] = new
    if len(ids_rem):
        for k, v in ids_rem.items():
            logger.error('id: %s was removed be careful' % k)
        sys.exit(2)
    for k, v in [(k, v) for k, v in ids_occ.items() if v > 1]:
        logger.warning('id: %s found %d times' % (k, v))
        if k in ids:
            ids.pop(k)
    return ids, old
## Load a Dictionary containing unique names for geometry.
def load_names():
    dictionary = os.path.join(os.curdir, "NAMES", "Dictionary.txt")
    names = open(dictionary, "rt")
    return [l.strip() for l in names.readlines()]
## Save the dictionary, removing the names that were used
# in the process of generating the CSV file.
def save_names(names):
    dictionary = os.path.join(os.curdir, "NAMES", "Dictionary.txt")
    new_names = open(dictionary, "wt")
    new_names.writelines(['%s\r\n' % n for n in names])
## Generate CSV files.
#
# This function will generate 3 CSV files:
# One for general geometries to keep track of all ids.
# One for new goemetries added to make it easier for import in Excel.
# It's naming convention is: catalog_geometry_X.csv where X is unique.
# One for new categories added when new geometry is found.
def generate_csv(files, output_name, random_names=True):
    names = load_names()
    pattern = os.sep if os.sep != '\\' else '\\\\'
    lines = []
    categories = set()
    xl_path = os.path.join(os.curdir, "EXCEL")
    for k, v in files.items():
        if len(names) == 0:
            logger.error("no more names in dictionary, please insert new ones")
            sys.exit(2)
        r = random.randint(0, len(names) - 1)
        f = v['path']
        t = v['type']
        splits = re.split(pattern, f)
        splits = splits[3:]  # Remove './CG/3D/GENERIQUE/'
        cat = os.sep.join(splits[0:-2])
        if random_names:
            nam = names.pop(r)
        else:
            nam = ""
        if cat not in categories:
            categories.add(cat)
        line = [k, k + ".SQ3", v['sq3'], nam, t]  # [ID, File.sq3, Type]
        lines.append(line)
    lines_s = sorted(lines, key=lambda x: x[0])
    categories_s = sorted(categories)
    save_names(names)
    geometry_name = '%s_geometry.csv' % output_name.lower()
    filename = os.path.join(xl_path, geometry_name)
    logger.info("updating: %s" % filename)
    output = open(filename, mode='ab')
    writer = csv.writer(output)
    for l in lines_s:
        writer.writerow(l)
    filename = os.path.join(xl_path, '%s_geometry.csv' % output_name.lower())
    geometry_unique = add_unique_postfix(filename)
    logger.info("generating: %s" % geometry_unique)
    output = open(geometry_unique, mode='wb')
    writer = csv.writer(output)
    for l in lines_s:
        writer.writerow(l)
    filename = os.path.join(xl_path, '%s_category.csv' % output_name.lower())
    category_name = add_unique_postfix(filename)
    logger.info("generating: %s" % category_name)
    catego = open(category_name, 'wb')
    writer = csv.writer(catego)
    for c in categories_s:
        splits = re.split(pattern, c)
        writer.writerow(splits)
## Retrieve metadata of a given filename.
def get_file_metadata(filename):
    stat_info = os.stat(filename)
    return stat_info
## Find out if a file needs to be updated.
#
# If origin is newer than copy, this function will return true.
# Otherwise it will return false.
def need_update(origin, copy):
    ori_info = get_file_metadata(origin)
    cpy_info = get_file_metadata(copy)
    return cpy_info.st_mtime < ori_info.st_mtime
## Copy a file from 'fr' to 'to' if it needs an update.
def copy_file(ide, fr, to):
    try:
        if os.path.exists(to):
            if need_update(fr, to):
                logger.warning("updating file: %s" % to)
                shutil.copy(fr, to)
        else:
            shutil.copy(fr, to)
    except (IOError, OSError):
        logger.error("unable to copy: %s for id: %s" % (fr, ide))
## Flatten all textures from a material catalog for easier SQCM management.
def tex_to_sqcm(materials, catalog):
    path_out = os.path.join(os.curdir, "CG", "MATERIALS")
    path_out = os.path.join(path_out, "%s_SQCM" % catalog.split("_")[0])
    logger.info("generating sqcm tree to: %s" % path_out)
    if not os.path.exists(path_out):
        logger.info("creating directory: %s" % path_out)
        os.makedirs(path_out)
    for k, v in materials.items():
        texture = v['texture']
        if not texture:
            continue
        filename = os.path.basename(texture)
        logger.debug("checking to copy: %s" % filename)
        tex_sqcm = os.path.join(path_out, filename)
        # Update texture if needed
        copy_file(k, texture, tex_sqcm)
## Flatten all geometries from a 3D catalog for easier SQCM management.
#
# It will also look for thumbnails and copy them if needed.
def sq3_to_sqcm(ids, catalog):
    logger.info("generating sqcm tree to: %s_SQCM" % catalog)
    pattern = os.sep if os.sep != '\\' else '\\\\'
    for k, v in ids.items():
        sq3 = v['path']
        path, filename = os.path.split(sq3)
        spl = re.split(pattern, sq3)
        out = spl[3] + "_SQCM"
        las = spl[-2]
        typ = v['type']
        thu = os.path.join(path, "%s_v77.jpg" % las)
        big = os.path.join(path, "%s_v0001.jpg" % las)
        pat = os.path.join(os.curdir, "CG", "3D", out)
        if not os.path.exists(pat):
            logger.info("creating directory: %s" % pat)
            os.makedirs(pat)
        sq3_sqcm = os.path.join(pat, "%s.SQ3" % k)
        thu_sqcm = os.path.join(pat, "%s_v77.jpg" % k)
        big_sqcm = os.path.join(pat, "%s_v512.jpg" % k)
        # Update geometry and thumbnails if needed
        copy_file(k, sq3, sq3_sqcm)
        copy_file(k, thu, thu_sqcm)
        copy_file(k, big, big_sqcm)
## Generate SQL based on a Schema file and Database.xls
def generate_sql(host, user, passw, db):
    import MySQLdb as mysql
    import xlrd
    con = None
    cur = None
    try:
        con = mysql.connect(host, user, passw, db,
                            use_unicode=True, charset="utf8")
        cur = con.cursor()
        sql = os.path.join('SQL', 'schema.sql')
        # Insert SQL Schema
        for l in open(sql, 'rt'):
            cur.execute(l)
        xls = os.path.join("EXCEL", "Database.xls")
        book = xlrd.open_workbook(xls, formatting_info=True)
        for i in range(book.nsheets):
            sheet = book.sheet_by_index(i)
            logger.info("processing sheet: %s" % sheet.name)
            if sheet.name == "Category":
                for row in range(4, sheet.nrows):
                    cate_par_id = cate_cur_id = None
                    for col in range(1, sheet.ncols):
                        cate_par_id = cate_cur_id
                        cat = unicode(sheet.cell(row, col).value).strip()
                        if not cat:
                            continue
                        if col == 1:
                            cat = cat.capitalize()
                            cur.execute("SELECT id FROM nehome_catalog \
                                        WHERE name=%s", cat)
                            data = cur.fetchone()
                            if not data:
                                if cat not in CATALOGS:
                                    logger.error("unknown catalog: %s" % cat)
                                    logger.info("update dictionary CATALOGS")
                                    sys.exit(2)
                                id_cat = CATALOGS[cat]
                                cur.execute("INSERT INTO nehome_catalog \
                                            SET id=%s, name=%s", (id_cat, cat))
                                cata_cur_id = id_cat
                                logger.debug("created catalogue: %s" % cat)
                            else:
                                cata_cur_id = int(data[0])
                            logger.debug("catalog id: %d" % cata_cur_id)
                        # Inserting new category if needed
                        cur.execute("SELECT id, id_catalog, name_en, name_fr \
                                    FROM nehome_category \
                                    WHERE name_en=%s AND id_catalog=%s",
                                    (cat, cata_cur_id))
                        data = cur.fetchone()
                        if not data:
                            cur.execute("INSERT INTO nehome_category \
                                        SET name_en=%s, id_catalog=%s",
                                        (cat, cata_cur_id))
                            cur.execute("SELECT LAST_INSERT_ID()")
                            cate_cur_id = int(cur.fetchone()[0])
                            logger.debug("created category: %s" % cat)
                        else:
                            cate_cur_id = int(data[0])
                        # Inserting new tree: parent -> child if needed
                        if cate_par_id:
                            # Can occur when two same categories
                            # follow each other
                            if cate_par_id == cate_cur_id:
                                logger.warning("category: %s is looping" % cat)
                                continue
                            cur.execute("SELECT * FROM nehome_cat_arbo \
                                        WHERE id_cat_parent=%s AND \
                                        id_cat_child=%s",
                                        (cate_par_id, cate_cur_id))
                            data = cur.fetchone()
                            if not data:
                                cur.execute("INSERT INTO nehome_cat_arbo \
                                            SET id_cat_parent=%s, \
                                            id_cat_child=%s",
                                            (cate_par_id, cate_cur_id))
                                logger.debug("created arbo: %d -> %d" %
                                             (cate_par_id, cate_cur_id))
            elif sheet.name == "Geometry":
                cur.execute("INSERT INTO nehome_kind SET \
                            id=%s, name_en=%s, name_fr=%s",
                            (1, "Furniture", "Meubles"))
                for row in range(4, sheet.nrows):
                    iden = unicode(sheet.cell(row, 1).value).strip()
                    geom = unicode(sheet.cell(row, 2).value).strip()
                    fsq3 = unicode(sheet.cell(row, 3).value).strip()
                    name = unicode(sheet.cell(row, 4).value).strip()
                    cate = unicode(sheet.cell(row, 5).value).strip()
                    defr = unicode(sheet.cell(row, 7).value).strip()
                    deen = unicode(sheet.cell(row, 8).value).strip()
                    urlv = unicode(sheet.cell(row, 9).value).strip()
                    cata = iden.split("_")[0].capitalize()
                    typc = ('%s_%s' % (cata, cate.replace(" ", "_"))).lower()
                    id_cata = CATALOGS[cata]
                    logger.debug('geometry: %s - %s - %s - %s' %
                                 (iden, name, cate, cata))
                    # Find corresponding catalogue
                    cur.execute("SELECT id FROM nehome_catalog \
                                WHERE name=%s", cata)
                    data = cur.fetchone()
                    if not data:
                        logger.error("unable to find catalog: %s" % cata)
                        #sys.exit(2)
                        continue
                    id_cata = int(data[0])
                    # Find type if exists
                    cur.execute("SELECT id, name FROM nehome_type \
                                WHERE name=%s", typc)
                    data = cur.fetchone()
                    if not data:
                        # Find category from name and catalog
                        cur.execute("SELECT id FROM nehome_category \
                                    WHERE name_en=%s AND id_catalog=%s",
                                    (cate, id_cata))
                        datb = cur.fetchone()
                        if not datb:
                            logger.error("missing category: %s for: %s (%s)" %
                                         (cate, iden, cata))
                            #sys.exit(2)
                            continue
                        id_cate = int(datb[0])
                        # Create type if found corresponding category
                        cur.execute("INSERT INTO nehome_type SET name=%s",
                                    typc)
                        cur.execute("SELECT LAST_INSERT_ID()")
                        id_type = int(cur.fetchone()[0])
                        cur.execute("INSERT INTO nehome_type_to_category \
                                    SET id_type=%s, id_category=%s",
                                    (id_type, id_cate))
                    else:
                        id_type = int(data[0])
                    cur.execute("INSERT INTO nehome_object \
                                SET id=%s, name_en=%s, name_fr=%s, \
                                desc_en=%s, desc_fr=%s, url=%s, \
                                sq3_sqcm=%s, sq3_origin=%s, \
                                id_type=%s, id_catalog=%s, id_kind=%s", (
                                iden, name, name, deen, defr, urlv,
                                geom, fsq3, id_type, id_cata, 1))
                # Insertion of objects is over
                # Now it's time to insert more type_to_categories
                cur.execute(" \
                    SELECT id, id_catalog, name_en \
                    FROM nehome_category c \
                    WHERE c.id_catalog=%s \
                    ORDER BY c.name_en", CATALOGS['Generique'])
                data = cur.fetchall()
                # For each name found in leaf category,
                # attach brand type to generic category
                for row_a in data:
                    cur.execute(" \
                        SELECT id, id_catalog, name_en, id_type \
                        FROM nehome_category c \
                        INNER JOIN nehome_type_to_category tc \
                        ON tc.id_category=c.id \
                        WHERE c.name_en=%s AND c.id_catalog>%s \
                        GROUP BY id",
                        (row_a[2], CATALOGS['Generique']))
                    datb = cur.fetchall()
                    for row_b in datb:
                        cur.execute(" \
                            INSERT INTO nehome_type_to_category \
                            SET id_type=%s, id_category=%s",
                            (row_b[3], row_a[0]))
            elif sheet.name == "Label":
                for row in range(4, sheet.nrows):
                    cate = unicode(sheet.cell(row, 1).value).strip()
                    cate_en = unicode(sheet.cell(row, 2).value).strip()
                    cate_fr = unicode(sheet.cell(row, 3).value).strip()
                    #logger.debug('label: %s - %s - %s' %
                    #             (cate, cate_en, cate_fr))
                    cur.execute("SELECT id FROM nehome_category \
                                WHERE name_en=%s", cate)
                    data = cur.fetchall()
                    if not data:
                        #logger.info("category: %s does not exist" % cate)
                        continue
                    for d in data:
                        cur.execute("UPDATE nehome_category \
                                    SET name_en=%s, name_fr=%s \
                                    WHERE id=%s",
                                    (cate_en, cate_fr, int(d[0])))
                # Checking missing translations
                cur.execute(" \
                    SELECT c.id, c.name_en FROM nehome_category c \
                    INNER JOIN nehome_type_to_category tc \
                    ON c.id=tc.id_category \
                    INNER JOIN nehome_type t ON t.id=tc.id_type \
                    WHERE name_fr IS NULL \
                    GROUP BY name_en")
                data = cur.fetchall()
                for row in data:
                    logger.warning("missing translation for category: %s",
                                   row[1])
            else:
                logger.warning("unknown sheet name: %s" % sheet.name)
        # Update name_fr for Brands
        cur.execute("UPDATE nehome_category SET name_fr=name_en \
                    WHERE name_fr IS NULL;")
    except mysql.Error, e:
        logger.error('mysql error: (%d - %s)' % (e.args[0], e.args[1]))
        con.rollback()
    except IOError, e:
        logger.error('IOError: %s' % str(e))
    con.commit()
## Import all geometries from a catalog.
#
# This is a 4-step process:
# 1. Parse a persistent CSV file to grab previous ids.
# 2. Find new geometry files.
# 3. Flatten those files to a directory for SQCM.
# 4. Generate the corresponding CSV files for import in Database.xls
def import_catalog(catalog):
    logger.info("importing 3d catalogue: %s" % catalog)
    filename = os.path.join(os.curdir, "EXCEL",
                            '%s_geometry.csv' % catalog.lower())
    ids_prev = parse_csv(filename)
    catalog_path = os.path.join(os.curdir, "CG", "3D", catalog)
    new, old = find_geometry(catalog_path, catalog,
                             previous_files=ids_prev, only_new=False)
    total = dict(new.items() + old.items())
    logger.info('found %d SQ3 files (%d new, %d old)' %
                (len(total), len(new), len(old)))
    if len(total):
        sq3_to_sqcm(total, catalog)
    if catalog == "GENERIQUE":
        random_names = True
    else:
        random_names = False
    generate_csv(new, catalog, random_names)
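Note that `dict(new.items() + old.items())` only works on Python 2, where `items()` returns concatenable lists. A sketch of an equivalent merge that also runs on Python 3 (hypothetical helper name):

```python
def merge_ids(new, old):
    # Python 3 replacement for dict(new.items() + old.items());
    # entries from `old` win when an id appears in both dicts.
    total = dict(new)
    total.update(old)
    return total
```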
## Import Styles.xls from a material catalog.
#
# This is a 3-step process:
# 1. Look for all textures.
# 2. Parse Styles.xls to look for materials and grab their textures.
# 3. Copy the textures to a flat directory for SQCM.
# To find textures, this function looks inside ./CG/TEXTURES
def import_material(catalog):
    logger.info("importing material from catalog: %s" % catalog)
    path_mat = os.path.join(os.curdir, "CG", "MATERIALS", catalog)
    path_tex = os.path.join(os.curdir, "CG", "TEXTURES")
    textures = find_textures(path_tex)
    mat = parse_material_xls(os.path.join(path_mat, "Styles.xls"), textures)
    tex_to_sqcm(mat, catalog)
## Print usage of the package.
def usage():
    basename = os.path.basename(sys.argv[0])
    print '''
usage: %s [option]
This program is based on the following hierarchy:
    CG
        3D
            GENERIQUE
            FLY
            ...
        MATERIALS
            GENERIQUE
            ...
It will, depending on the following options, generate the corresponding
_SQCM flat folders to upload to SQCM later in the process
Options:
    --catalog
        Import specified 3D catalog
    --material
        Import specified MATERIAL using MATERIAL/Style.xls
    --generate-sql
        Generates SQL from SQL/Database.xls
''' % basename
## Entry point.
#
# Deals with options and redirects to proper function.
def main():
    try:
        opts, argv = getopt.getopt(sys.argv[1:], "h", [
            "help", "catalog=", "skip-csv", "generate-sql", "material="
        ])
    except getopt.GetoptError, err:
        # print help information and exit:
        print str(err)  # will print something like "option -a not recognized"
        usage()
        sys.exit(2)
    system = platform.system()
    db = False
    xls = None
    catalog = ""
    material = ""
    csv = False
    reorder = False
    for o, a in opts:
        if o in ("-h", "--help"):
            usage()
            sys.exit()
        elif o in ("--skip-csv",):
            pass
        elif o in ("--catalog",):
            catalog = a
        elif o in ("--material",):
            material = a
        elif o in ("--generate-sql",):
            try:
                import xlrd
            except ImportError:
                logger.error('cannot import xlrd python module')
                logger.warning('cannot parse database file')
                sys.exit(2)
            try:
                import MySQLdb
                db = True
            except ImportError:
                logger.error('cannot import mysql python module')
                logger.warning('unable to generate database file')
                sys.exit(2)
        else:
            assert False, "unhandled option"
    ids_prev = {}
    ids = {}
    files = []
    if db:
        generate_sql('localhost', 'sq', 'squareclock', 'hbm')
    elif catalog:
        import_catalog(catalog)
    elif material:
        import_material(material)
    else:
        logger.error("you must specify a catalog or generate-sql")
        usage()
        sys.exit(2)
if __name__ == "__main__":
    main()
| 29.69344 | 77 | 0.640954 | 3,327 | 22,181 | 4.165615 | 0.15239 | 0.016884 | 0.017317 | 0.023306 | 0.251822 | 0.214085 | 0.157587 | 0.119489 | 0.090844 | 0.087957 | 0 | 0.009686 | 0.222713 | 22,181 | 746 | 78 | 29.733244 | 0.794153 | 0.180966 | 0 | 0.241197 | 1 | 0 | 0.133622 | 0 | 0 | 0 | 0 | 0 | 0.001761 | 0 | null | null | 0.008803 | 0.051056 | null | null | 0.003521 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ba81c485c843d10d4436ea1652179b5715124c2b | 172 | py | Python | beginner_contest/049/B.py | FGtatsuro/myatcoder | 25a3123be6a6311e7d1c25394987de3e35575ff4 | [
"MIT"
] | null | null | null | beginner_contest/049/B.py | FGtatsuro/myatcoder | 25a3123be6a6311e7d1c25394987de3e35575ff4 | [
"MIT"
] | null | null | null | beginner_contest/049/B.py | FGtatsuro/myatcoder | 25a3123be6a6311e7d1c25394987de3e35575ff4 | [
"MIT"
] | null | null | null | import sys
input = sys.stdin.readline
sys.setrecursionlimit(10 ** 7)
h, w = map(int, input().split())
for i in range(h):
    c = input().strip()
    print(c)
    print(c)
| 17.2 | 32 | 0.616279 | 28 | 172 | 3.785714 | 0.714286 | 0.113208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021898 | 0.203488 | 172 | 9 | 33 | 19.111111 | 0.751825 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
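The contest solution above prints every input row twice, stretching the picture to double height. The same transform as a pure function (hypothetical name, for illustration only):

```python
def stretch_vertically(rows):
    # Emit each row twice, doubling the image height while keeping
    # the width unchanged.
    out = []
    for r in rows:
        out.append(r)
        out.append(r)
    return out
```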
ba83dfc67c0f43b3ac93a551683c1a2ad13ec950 | 1,348 | py | Python | app/commands/footprint.py | BartekSzpak/adversary | 231caf58722a5641dd08afe354f2760e89699f3a | [
"Apache-2.0",
"CC0-1.0"
] | 22 | 2019-06-08T11:00:02.000Z | 2021-09-10T10:22:20.000Z | app/commands/footprint.py | BartekSzpak/adversary | 231caf58722a5641dd08afe354f2760e89699f3a | [
"Apache-2.0",
"CC0-1.0"
] | 39 | 2019-04-28T13:28:58.000Z | 2020-07-28T00:49:45.000Z | app/commands/footprint.py | BartekSzpak/adversary | 231caf58722a5641dd08afe354f2760e89699f3a | [
"Apache-2.0",
"CC0-1.0"
] | 11 | 2019-04-29T00:58:35.000Z | 2021-06-28T02:18:48.000Z | from plugins.adversary.app.commands.command import CommandLine
from typing import Callable, Tuple
from plugins.adversary.app.commands import parsers
def files() -> Tuple[CommandLine, Callable[[str], None]]:
    command = 'powershell -command "&{$filetype = @(\\"*.docx\\",\\"*.pdf\\",\\"*.xlsx\\"); $startdir = ' \
              '\\"c:\\\\Users\\\\\\"; for($k=0;$k -lt $filetype.length; $k++){ $core = dir $startdir\($filetype[$k]) ' \
              '-Recurse | Select @{Name=\\"Path\\";Expression={$_.Fullname -as [string]}}; foreach ($alpha in $core) ' \
              '{$filename = $alpha.Path -as [string]; [Byte[]] $corrupt_file = [System.IO.File]::ReadAllBytes(' \
              '$filename); [Byte[]] $key_file = [System.IO.File]::ReadAllBytes($(' \
              '-join($filename, \\".old\\"))); for($i=0; $i -lt $key_file.Length; $i++) { $corrupt_file[$i] = ' \
              '$key_file[$i];} [System.IO.File]::WriteAllBytes($(resolve-path $filename), $corrupt_file); ' \
              'Remove-Item $(-join($filename,\\".old\\"))}}}"'
    return CommandLine('cmd /c {}'.format(command)), parsers.footprint.recover_files
def password(user: str, password: str) -> Tuple[CommandLine, Callable[[str], None]]:
    command = 'net user ' + user + ' ' + password
    return CommandLine('cmd /c {}'.format(command)), parsers.footprint.password
| 64.190476 | 120 | 0.589021 | 149 | 1,348 | 5.275168 | 0.436242 | 0.041985 | 0.045802 | 0.058524 | 0.374046 | 0.223919 | 0.127226 | 0.127226 | 0 | 0 | 0 | 0.001803 | 0.1773 | 1,348 | 20 | 121 | 67.4 | 0.706943 | 0 | 0 | 0 | 0 | 0.3125 | 0.530415 | 0.202522 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0.1875 | 0.1875 | 0 | 0.4375 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
ba84dce9cdb94b7aecc1455ec03b4a4a56c913e6 | 1,175 | py | Python | core/tests/test_models.py | CezarPoeta/Fusion | 284880274f0a62512924207931c1f2a9107f1552 | [
"MIT"
] | null | null | null | core/tests/test_models.py | CezarPoeta/Fusion | 284880274f0a62512924207931c1f2a9107f1552 | [
"MIT"
] | null | null | null | core/tests/test_models.py | CezarPoeta/Fusion | 284880274f0a62512924207931c1f2a9107f1552 | [
"MIT"
] | null | null | null | import uuid
from django.test import TestCase
from model_mommy import mommy
from core.models import get_file_path
class GetFilePathTestCase(TestCase):
    def setUp(self):
        self.filename = f'{uuid.uuid4()}.png'
    def test_get_file_path(self):
        arquivo = get_file_path('Nome', 'teste.png')
        self.assertEquals(len(arquivo), len(self.filename))
class ServicoTestCase(TestCase):
    def setUp(self):
        self.servico = mommy.make('Servico')
    def test_str(self):
        self.assertEquals(str(self.servico), self.servico.servico)
class CargoTestCase(TestCase):
    def setUp(self):
        self.cargo = mommy.make('Cargo')
    def test_str(self):
        self.assertEquals(str(self.cargo), self.cargo.cargo)
class FuncionarioTestCase(TestCase):
    def setUp(self):
        self.funcionario = mommy.make('Funcionario')
    def test_str(self):
        self.assertEquals(str(self.funcionario), self.funcionario.nome)
class CaracteristicaTestCase(TestCase):
    def setUp(self):
        self.caracteristica = mommy.make('Caracteristica')
    def test_str(self):
        self.assertEquals(str(self.caracteristica), self.caracteristica.nome)
ba8a89e49ad1ef9c9861e0ea5fe8940af4390c30 | 545 | py | Python | hpo/evals/conll18_eval.py | Dayitva/Parser-v3 | 45754bb722fabefdb18f67ab4c32a41d24114bca | [
"Apache-2.0"
] | 93 | 2018-08-07T02:54:47.000Z | 2022-02-14T13:47:52.000Z | hpo/evals/conll18_eval.py | Dayitva/Parser-v3 | 45754bb722fabefdb18f67ab4c32a41d24114bca | [
"Apache-2.0"
] | 10 | 2019-01-08T02:37:36.000Z | 2021-01-09T07:45:02.000Z | hpo/evals/conll18_eval.py | Dayitva/Parser-v3 | 45754bb722fabefdb18f67ab4c32a41d24114bca | [
"Apache-2.0"
] | 29 | 2018-07-31T09:08:03.000Z | 2022-03-16T14:50:13.000Z | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import scripts.conll18_ud_eval as ud_eval
from scripts.reinsert_compounds import reinsert_compounds
def evaluate(gold_filename, sys_filename, metric):
  """"""
  reinsert_compounds(gold_filename, sys_filename)
  gold_conllu_file = ud_eval.load_conllu_file(gold_filename)
  sys_conllu_file = ud_eval.load_conllu_file(sys_filename)
  evaluation = ud_eval.evaluate(gold_conllu_file, sys_conllu_file)
  return evaluation[metric].f1
ba938330dbea07848f6aeb3c07cc530fe95b7587 | 914 | py | Python | src/api/controllers/connection/CheckConnectionDatabaseResource.py | PythonDataIntegrator/pythondataintegrator | 6167778c36c2295e36199ac0d4d256a4a0c28d7a | [
"MIT"
] | 14 | 2020-12-19T15:06:13.000Z | 2022-01-12T19:52:17.000Z | src/api/controllers/connection/CheckConnectionDatabaseResource.py | PythonDataIntegrator/pythondataintegrator | 6167778c36c2295e36199ac0d4d256a4a0c28d7a | [
"MIT"
] | 43 | 2021-01-06T22:05:22.000Z | 2022-03-10T10:30:30.000Z | src/api/controllers/connection/CheckConnectionDatabaseResource.py | PythonDataIntegrator/pythondataintegrator | 6167778c36c2295e36199ac0d4d256a4a0c28d7a | [
"MIT"
] | 4 | 2020-12-18T23:10:09.000Z | 2021-04-02T13:03:12.000Z | from injector import inject
from domain.connection.CheckDatabaseConnection.CheckDatabaseConnectionCommand import CheckDatabaseConnectionCommand
from domain.connection.CheckDatabaseConnection.CheckDatabaseConnectionRequest import CheckDatabaseConnectionRequest
from infrastructure.api.ResourceBase import ResourceBase
from infrastructure.api.decorators.Controller import controller
from infrastructure.cqrs.Dispatcher import Dispatcher
@controller()
class CheckConnectionDatabaseResource(ResourceBase):
    @inject
    def __init__(self,
                 dispatcher: Dispatcher,
                 *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.dispatcher = dispatcher
    def post(self, req: CheckDatabaseConnectionRequest):
        """
        Check Database Connection
        """
        command = CheckDatabaseConnectionCommand(request=req)
        self.dispatcher.dispatch(command) | 38.083333 | 115 | 0.762582 | 72 | 914 | 9.569444 | 0.430556 | 0.078374 | 0.058055 | 0.124819 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171772 | 914 | 24 | 116 | 38.083333 | 0.910172 | 0.027352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.352941 | 0 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2
ba94b8d454521a32f19d17bede93c216fb9a272e | 148 | py | Python | Python/BasicDataTypes/finding_the_percentage.py | rho2/HackerRank | 4d9cdfcabeb20212db308d8e4f2ac1b8ebf7d266 | [
"MIT"
] | null | null | null | Python/BasicDataTypes/finding_the_percentage.py | rho2/HackerRank | 4d9cdfcabeb20212db308d8e4f2ac1b8ebf7d266 | [
"MIT"
] | null | null | null | Python/BasicDataTypes/finding_the_percentage.py | rho2/HackerRank | 4d9cdfcabeb20212db308d8e4f2ac1b8ebf7d266 | [
"MIT"
] | null | null | null | l = {}
for _ in range(int(input())):
    s = input().split()
    l[s[0]] = sum([float(a) for a in s[1:]])/(len(s)-1)
print('%.2f' % l[input()]) | 24.666667 | 55 | 0.466216 | 27 | 148 | 2.518519 | 0.592593 | 0.058824 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.216216 | 148 | 6 | 56 | 24.666667 | 0.551724 | 0 | 0 | 0 | 0 | 0 | 0.026846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
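The averaging done in the dictionary comprehension above can be factored into a testable helper (hypothetical name, not part of the solution):

```python
def average_marks(record):
    # One input line is "name mark1 mark2 ...": average the marks and
    # format to two decimals, matching the solution's '%.2f'.
    parts = record.split()
    marks = [float(a) for a in parts[1:]]
    return '%.2f' % (sum(marks) / len(marks))
```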
ba95b0f8dc9be866d1181ebc59b07f057e735c50 | 1,196 | py | Python | pstd/pstd_using_numba.py | FRidh/pstd | 1e8f047dda34eac80cbc11b1a261299052e91de6 | [
"BSD-3-Clause"
] | 5 | 2018-01-01T03:04:09.000Z | 2022-02-12T13:32:36.000Z | pstd/pstd_using_numba.py | FRidh/pstd | 1e8f047dda34eac80cbc11b1a261299052e91de6 | [
"BSD-3-Clause"
] | 1 | 2022-01-17T11:08:47.000Z | 2022-01-17T11:08:47.000Z | pstd/pstd_using_numba.py | FRidh/pstd | 1e8f047dda34eac80cbc11b1a261299052e91de6 | [
"BSD-3-Clause"
] | null | null | null | """
This module contains a Numba-accelerated implementation of the k-space PSTD method.
"""
import numba
from . import pstd
kappa = numba.jit(pstd.kappa)
# abs_exp = numba.jit(pstd.abs_exp)
pressure_abs_exp = numba.jit(pstd.pressure_abs_exp)
velocity_abs_exp = numba.jit(pstd.velocity_abs_exp)
pressure_with_pml = numba.jit(pstd.pressure_with_pml)
velocity_with_pml = numba.jit(pstd.velocity_with_pml)
to_pressure_gradient = numba.jit(pstd.to_pressure_gradient)
to_velocity_gradient = numba.jit(pstd.to_velocity_gradient)
update = numba.jit(pstd.update)
class PSTD(pstd.PSTD):
    _update = staticmethod(update)
    # _record = numba.jit(pstd.PSTD._record)
    # kappa = staticmethod(numba.jit(PSTD.kappa))
    # abs_exp = staticmethod(numba.jit(PSTD.abs_exp))
    # pressure_with_pml = staticmethod(numba.jit(PSTD.pressure_with_pml))
    # velocity_with_pml = staticmethod(numba.jit(PSTD.velocity_with_pml))
    # pressure_gradient = staticmethod(numba.jit(PSTD.to_pressure_gradient))
    # velocity_gradient = staticmethod(numba.jit(PSTD.to_velocity_gradient))
    # update = classmethod(numba.jit(PSTD.update))
    # pre_run = numba.jit(PSTD.pre_run)
    # run = numba.jit(Model.run)
| 33.222222 | 83 | 0.763378 | 172 | 1,196 | 5.034884 | 0.19186 | 0.17552 | 0.249423 | 0.166282 | 0.573903 | 0.497691 | 0.2194 | 0.136259 | 0.096998 | 0 | 0 | 0 | 0.121237 | 1,196 | 35 | 84 | 34.171429 | 0.823977 | 0.529264 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
bac5fc1e09a2e012dfac85c34379f3c2de92c1a0 | 771 | py | Python | src/integration-tests/expecteds/py/duplicate-names.py | shanejonas/transpiler | ed293861635c23c004ef8276196e461ca5b940db | [
"Apache-2.0"
] | 5 | 2020-10-26T23:32:25.000Z | 2022-01-20T17:02:18.000Z | src/integration-tests/expecteds/py/duplicate-names.py | shanejonas/transpiler | ed293861635c23c004ef8276196e461ca5b940db | [
"Apache-2.0"
] | 525 | 2020-04-20T08:58:52.000Z | 2022-03-26T02:26:13.000Z | src/integration-tests/expecteds/py/duplicate-names.py | shanejonas/transpiler | ed293861635c23c004ef8276196e461ca5b940db | [
"Apache-2.0"
] | 6 | 2020-10-26T23:32:45.000Z | 2021-12-29T18:03:03.000Z | from typing import NewType
from typing import Union
from typing import List
from typing import Tuple
from typing import TypedDict
from typing import Optional
Baz = NewType("Baz", bool)
Foo = NewType("Foo", str)
"""array of strings is all...
"""
UnorderedSetOfFooz1UBFn8B = NewType("UnorderedSetOfFooz1UBFn8B", List[Foo])
Bar = NewType("Bar", int)
SetOfNumbers = NewType("SetOfNumbers", Tuple[Bar])
class ObjectOfBazLEtnUJ56(TypedDict):
    NotFoo: Optional[Baz]
OneOfStuff = NewType("OneOfStuff", Union[UnorderedSetOfFooz1UBFn8B, SetOfNumbers])
"""Generated! Represents an alias to any of the provided schemas
"""
AnyOfFooFooObjectOfBazLEtnUJ56OneOfStuffBar = NewType("AnyOfFooFooObjectOfBazLEtnUJ56OneOfStuffBar", Union[Foo, ObjectOfBazLEtnUJ56, OneOfStuff, Bar])
| 29.653846 | 150 | 0.788586 | 81 | 771 | 7.506173 | 0.444444 | 0.098684 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020468 | 0.11284 | 771 | 25 | 151 | 30.84 | 0.868421 | 0 | 0 | 0 | 0 | 0 | 0.147761 | 0.101493 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |