hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
629eca0014456d92835c30610fad0e042f72e16a | 251 | py | Python | src/bxcommon/rpc/https/request_formatter.py | dolphinridercrypto/bxcommon | 8f70557c1dbff785a5dd3fcdf91176066e085c3a | [
"MIT"
] | 12 | 2019-11-06T17:39:10.000Z | 2022-03-01T11:26:19.000Z | src/bxcommon/rpc/https/request_formatter.py | dolphinridercrypto/bxcommon | 8f70557c1dbff785a5dd3fcdf91176066e085c3a | [
"MIT"
] | 8 | 2019-11-06T21:31:11.000Z | 2021-06-02T00:46:50.000Z | src/bxcommon/rpc/https/request_formatter.py | dolphinridercrypto/bxcommon | 8f70557c1dbff785a5dd3fcdf91176066e085c3a | [
"MIT"
] | 5 | 2019-11-14T18:08:11.000Z | 2022-02-08T09:36:22.000Z | from aiohttp.web import Request
class RequestFormatter:
_request: Request
def __init__(self, request: Request) -> None:
self._request = request
def __repr__(self) -> str:
return f"HTTPRequest <{self._request.headers}>"
| 20.916667 | 55 | 0.677291 | 28 | 251 | 5.678571 | 0.607143 | 0.264151 | 0.213836 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223108 | 251 | 11 | 56 | 22.818182 | 0.815385 | 0 | 0 | 0 | 0 | 0 | 0.14741 | 0.099602 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0.142857 | 0.857143 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
62eaa1958b7d595efe4edcd23dacb5baf17ace05 | 35 | py | Python | test.py | BarMa666/MathExercisesCollector | e4f750f96446416427e8e2e97ece34a34b4165f4 | [
"MIT"
] | null | null | null | test.py | BarMa666/MathExercisesCollector | e4f750f96446416427e8e2e97ece34a34b4165f4 | [
"MIT"
] | null | null | null | test.py | BarMa666/MathExercisesCollector | e4f750f96446416427e8e2e97ece34a34b4165f4 | [
"MIT"
] | null | null | null | import gui.test
import loader.test
| 11.666667 | 18 | 0.828571 | 6 | 35 | 4.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 2 | 19 | 17.5 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
62ef651f424e32c43688d9cc8009323b12aacce5 | 70 | py | Python | FiniteElements/__init__.py | szhang-cis/Kuru_Mac | 90caaa37f7917e25afd25de24c06216e202e2e96 | [
"MIT"
] | null | null | null | FiniteElements/__init__.py | szhang-cis/Kuru_Mac | 90caaa37f7917e25afd25de24c06216e202e2e96 | [
"MIT"
] | null | null | null | FiniteElements/__init__.py | szhang-cis/Kuru_Mac | 90caaa37f7917e25afd25de24c06216e202e2e96 | [
"MIT"
] | 1 | 2021-04-22T10:43:44.000Z | 2021-04-22T10:43:44.000Z | from .Assembly import AssembleRobinForces #AssembleMass, AssembleForm
| 35 | 69 | 0.871429 | 6 | 70 | 10.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 70 | 1 | 70 | 70 | 0.953125 | 0.371429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
62f9ac1046938e998d1e6252315f4d50d685eec7 | 175 | py | Python | source_code/pypeflow/utils/__init__.py | TomLXXVI/pypeflow | 49e42621180ec3125afa238d3ca56ae9f3a7662a | [
"MIT"
] | 4 | 2020-05-26T01:11:08.000Z | 2021-09-15T20:24:31.000Z | source_code/pypeflow/utils/__init__.py | robertspark/pypeflow | 49e42621180ec3125afa238d3ca56ae9f3a7662a | [
"MIT"
] | null | null | null | source_code/pypeflow/utils/__init__.py | robertspark/pypeflow | 49e42621180ec3125afa238d3ca56ae9f3a7662a | [
"MIT"
] | 1 | 2022-01-19T20:26:11.000Z | 2022-01-19T20:26:11.000Z | """
# Utilities that aid in the design and analysis of piping networks
"""
from pypeflow.utils.system_curve import SystemCurve
from pypeflow.utils.pump_curve import PumpCurve
| 29.166667 | 66 | 0.811429 | 25 | 175 | 5.6 | 0.8 | 0.171429 | 0.242857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125714 | 175 | 5 | 67 | 35 | 0.915033 | 0.377143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a7f6504da4014d9588f410ffc235aa304405f301 | 10,500 | py | Python | test/Analysis/t_blocking.py | Marcel-Rodekamp/qcdanalysistools | 945c8201337ba0d52bc37267198d367bbe3e75e3 | [
"MIT"
] | null | null | null | test/Analysis/t_blocking.py | Marcel-Rodekamp/qcdanalysistools | 945c8201337ba0d52bc37267198d367bbe3e75e3 | [
"MIT"
] | null | null | null | test/Analysis/t_blocking.py | Marcel-Rodekamp/qcdanalysistools | 945c8201337ba0d52bc37267198d367bbe3e75e3 | [
"MIT"
] | null | null | null | import unittest
import itertools as it
import qcdanalysistools as tools
import numpy as np
# distribution details:
# - uniform [a,b)
# * https://en.wikipedia.org/wiki/Continuous_uniform_distribution
# * mean 0.5 (a+b)
# * var = 1/12 (b-a)^2
# - beta(a,b)
# * https://en.wikipedia.org/wiki/Beta_distribution
# * mean a/(a+b)
# * var = a*b/((a+b)^2 (a+b+1))
# - binomial(n,p)
# * https://en.wikipedia.org/wiki/Binomial_distribution
# * mean np
# * var = np*(1-p)
class TestBlocking(unittest.TestCase):
def _obs_1(self,data):
# some test observable without parameter
return np.exp(data)
def _obs_2(self,data, axis):
# some test observable reducing given axis
return np.mean(self._obs_1(data),axis=axis)
def testDimensionalities(self):
"""
This test creates data uniformly on [0,1) for several input dimensions.
On each of these data sets, estimators and variances are computed with both
interfaces, the single est/var calls and the combined one. The outcome array
dimension is then checked for several use cases such as
* Estimate over one axis
* Estimate over one axis after application of observable (self._obs_1)
* Estimate over 0 axis after application of observable reducing last dimension (self._obs_2(data,axis=-1))
* Estimate over 0 axis after application of observable reducing all but the first dimension (self._obs_2(data,axis=(1,2,...dim-1)))
"""
for dim in [1,2,3,4,5,6]:
# datasize
N = tuple([10]*dim)
# data: uniform [0,1)
data = np.random.rand(*N)
#==================================================
# No additional reduction in dimension by observables
#==================================================
for ax,N_blk in it.product(range(dim),range(1,10)):
# create Analysis parameters
param = tools.analysis.AnalysisParam(tools.analysis.Blocking,
# size of the data we are going to handle
data_size = 10,
# number of blocks used for the blocking analysis
N_blk = N_blk, # that's the default
# specify the axis over which the estimate should be taken.
# behaviour is the same as for numpy
axis = ax
)
# single interface
est_1 = tools.analysis.estimator(t_param=param,t_data=data)
est_2 = tools.analysis.estimator(t_param=param,t_data=data,t_observable=self._obs_1)
var_1 = tools.analysis.variance(t_param=param,t_data=data)
var_2 = tools.analysis.variance(t_param=param,t_data=data,t_observable=self._obs_1)
self.assertEqual(est_1.shape,tuple([10]*(dim-1)))
self.assertEqual(var_2.shape,tuple([10]*(dim-1)))
# combined interface
est_1,var_1 = tools.analysis.dataAnalysis(t_param=param,t_data=data)
est_2,var_2 = tools.analysis.dataAnalysis(t_param=param,t_data=data,t_observable=self._obs_1)
self.assertEqual(est_2.shape,tuple([10]*(dim-1)))
self.assertEqual(var_1.shape,tuple([10]*(dim-1)))
#==================================================
# Additional reduction in dimension by observables
#==================================================
# skip for 1D arrays
if dim == 1:
continue
#==========================
# Reduce last dimension
#==========================
for N_blk in range(1,10):
# create Analysis parameters
param = tools.analysis.AnalysisParam(tools.analysis.Blocking,
# size of the data we are going to handle
data_size = 10,
# number of blocks used for the blocking analysis
N_blk = N_blk, # that's the default
# specify the axis over which the estimate should be taken.
# behaviour is the same as for numpy
axis = 0 # that's the default
)
# single interface
est = tools.analysis.estimator(t_param=param,t_data=data,t_observable=self._obs_2,axis=-1)
var = tools.analysis.variance(t_param=param,t_data=data,t_observable=self._obs_2,axis=-1)
self.assertEqual(est.shape,tuple([10]*(dim-2)))
self.assertEqual(var.shape,tuple([10]*(dim-2)))
# combined interface
est,var = tools.analysis.dataAnalysis(t_param=param,t_data=data,t_observable=self._obs_2,axis=-1)
self.assertEqual(est.shape,tuple([10]*(dim-2)))
self.assertEqual(var.shape,tuple([10]*(dim-2)))
#==========================
# Reduce all dimensions
#==========================
# 0 is reduced by analysis, 1,2,...dim is reduced by _obs_2
ax = tuple([i for i in range(1,dim)])
for N_blk in range(1,10):
# create Analysis parameters
param = tools.analysis.AnalysisParam(tools.analysis.Blocking,
# size of the data we are going to handle
data_size = 10,
# number of blocks used for the blocking analysis
N_blk = N_blk, # that's the default
# specify the axis over which the estimate should be taken.
# behaviour is the same as for numpy
axis = 0 # that's the default
)
# single interface
est = tools.analysis.estimator(t_param=param,t_data=data,t_observable=self._obs_2,axis=ax)
var = tools.analysis.variance(t_param=param,t_data=data,t_observable=self._obs_2,axis=ax)
self.assertEqual(est.shape,())
self.assertEqual(var.shape,())
# combined interface
est,var = tools.analysis.dataAnalysis(t_param=param,t_data=data,t_observable=self._obs_2,axis=ax)
self.assertEqual(est.shape,())
self.assertEqual(var.shape,())
def testUniform(self):
"""
Test that mean of Uniform distribution is found correctly
Note this is uncorrelated data!
"""
for N in range(100,500,100):
print(f"Testing Estimation of uniform distribution for a data size of {N}.")
data = np.random.uniform(low=0,high=1,size=(N,))
for N_blk in range(1,N//2):
# non blocking
param = tools.analysis.AnalysisParam(tools.analysis.Blocking,
# size of the data we are going to handle
data_size = N,
# number of blocks used for the blocking analysis
N_blk = N_blk, # that's the default
# specify the axis over which the estimate should be taken.
axis = 0
)
# check that the true value (0.5) lies in the interval est +/- sqrt(var)
est = tools.analysis.estimator(t_param=param,t_data=data)
var = tools.analysis.variance(t_param=param,t_data=data)
self.assertTrue(est - np.sqrt(var) < 0.5 and 0.5 < est + np.sqrt(var))
est,var = tools.analysis.dataAnalysis(t_param=param,t_data=data)
self.assertTrue(est - np.sqrt(var) < 0.5 and 0.5 < est + np.sqrt(var))
def testBeta(self):
"""
Test that mean of Beta distribution is found correctly
Note this is uncorrelated data!
"""
for N in range(100,500,100):
print(f"Testing Estimation of beta distribution for a data size of {N}.")
data = np.random.beta(a=1,b=1,size=(N,))
for N_blk in range(1,N//2):
# non blocking
param = tools.analysis.AnalysisParam(tools.analysis.Blocking,
# size of the data we are going to handle
data_size = N,
# number of blocks used for the blocking analysis
N_blk = N_blk, # that's the default
# specify the axis over which the estimate should be taken.
axis = 0
)
# check that the true value (0.5) lies in the interval est +/- sqrt(var)
est = tools.analysis.estimator(t_param=param,t_data=data)
var = tools.analysis.variance(t_param=param,t_data=data)
self.assertTrue(est - np.sqrt(var) < 0.5 and 0.5 < est + np.sqrt(var))
est,var = tools.analysis.dataAnalysis(t_param=param,t_data=data)
self.assertTrue(est - np.sqrt(var) < 0.5 and 0.5 < est + np.sqrt(var))
def testBinomial(self):
"""
Test that mean of Binomial distribution is found correctly
Note this is uncorrelated data!
"""
for N in range(100,500,100):
print(f"Testing Estimation of binomial distribution for a data size of {N}.")
data = np.random.binomial(n=1,p=0.5,size=(N,))
for N_blk in range(1,N//2):
# non blocking
param = tools.analysis.AnalysisParam(tools.analysis.Blocking,
# size of the data we are going to handle
data_size = N,
# number of blocks used for the blocking analysis
N_blk = N_blk, # that's the default
# specify the axis over which the estimate should be taken.
axis = 0
)
# check that the true value (0.5) lies in the interval est +/- sqrt(var)
est = tools.analysis.estimator(t_param=param,t_data=data)
var = tools.analysis.variance(t_param=param,t_data=data)
self.assertTrue(est - np.sqrt(var) < 0.5 and 0.5 < est + np.sqrt(var))
est,var = tools.analysis.dataAnalysis(t_param=param,t_data=data)
self.assertTrue(est - np.sqrt(var) < 0.5 and 0.5 < est + np.sqrt(var))
if __name__ == '__main__':
unittest.main()
| 47.945205 | 147 | 0.540571 | 1,333 | 10,500 | 4.155289 | 0.132783 | 0.077451 | 0.041704 | 0.045496 | 0.800686 | 0.779383 | 0.757357 | 0.738581 | 0.72486 | 0.701751 | 0 | 0.025604 | 0.337905 | 10,500 | 218 | 148 | 48.165138 | 0.771145 | 0.329905 | 0 | 0.542857 | 0 | 0 | 0.030596 | 0 | 0 | 0 | 0 | 0 | 0.171429 | 1 | 0.057143 | false | 0 | 0.038095 | 0.019048 | 0.12381 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c5043838dfda18fe5a33aa6a3432d7dd226fce59 | 48 | py | Python | package1/__init__.py | earslan74/pynet_class | 0ed789ae82f221a249e7a1136a4f3f345f2a584a | [
"Apache-2.0"
] | null | null | null | package1/__init__.py | earslan74/pynet_class | 0ed789ae82f221a249e7a1136a4f3f345f2a584a | [
"Apache-2.0"
] | null | null | null | package1/__init__.py | earslan74/pynet_class | 0ed789ae82f221a249e7a1136a4f3f345f2a584a | [
"Apache-2.0"
] | null | null | null | from . import test_hello
from . import sum_test
| 16 | 24 | 0.791667 | 8 | 48 | 4.5 | 0.625 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 48 | 2 | 25 | 24 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
3d648508625d78ceb85b17abd0f84bde6fe66b78 | 177 | py | Python | contas/admin.py | acaciojunio28/CRUD-django | 62b34a544ec5a14c53172e9240a1f0b448ed7b69 | [
"Apache-2.0"
] | null | null | null | contas/admin.py | acaciojunio28/CRUD-django | 62b34a544ec5a14c53172e9240a1f0b448ed7b69 | [
"Apache-2.0"
] | null | null | null | contas/admin.py | acaciojunio28/CRUD-django | 62b34a544ec5a14c53172e9240a1f0b448ed7b69 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from .models import categoria
from .models import listar
admin.site.register(categoria)
admin.site.register(listar)
# Register your models here.
| 22.125 | 32 | 0.824859 | 25 | 177 | 5.84 | 0.48 | 0.136986 | 0.219178 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101695 | 177 | 7 | 33 | 25.285714 | 0.918239 | 0.146893 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3d64ca2d38ead7c3a2998858392d8ff414075b57 | 5,761 | py | Python | src/models/custom_model.py | ishikei14k/atma11_1st_solution | 91d29eb83f3e5470f82470f0434ad0fc75a90c61 | [
"MIT"
] | 17 | 2021-07-28T02:52:03.000Z | 2022-03-21T04:03:42.000Z | src/models/custom_model.py | ishikei14k/atma11_1st_solution | 91d29eb83f3e5470f82470f0434ad0fc75a90c61 | [
"MIT"
] | null | null | null | src/models/custom_model.py | ishikei14k/atma11_1st_solution | 91d29eb83f3e5470f82470f0434ad0fc75a90c61 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
import torch
import torch.nn as nn
import torch.nn.functional as F
# model.
import timm
# custom modules.
from . import vision_transformer as vits
from .utils import load_pretrained_weights, load_pretrained_weights_resnet
class AtmaCustomModel(nn.Module):
def __init__(self, architecture):
super(AtmaCustomModel, self).__init__()
self.model = timm.create_model(architecture, pretrained=False, in_chans=3)
#print(self.model)
if 'vit' in architecture:
self.n_features = self.model.head.in_features
self.model.head = nn.Linear(self.n_features, 1)
elif 'resnet' in architecture:
self.n_features = self.model.fc.in_features
self.model.fc = nn.Linear(self.n_features, 1)
elif 'efficient' in architecture:
self.n_features = self.model.classifier.in_features
self.model.classifier = nn.Linear(self.n_features, 1)
elif 'densenet' in architecture:
self.n_features = self.model.classifier.in_features
self.model.classifier = nn.Linear(self.n_features, 1)
elif 'nfnet' in architecture:
self.n_features = self.model.head.fc.in_features
self.model.head.fc = nn.Linear(self.n_features, 1)
def forward(self, x):
x = self.model(x)
return x
class AtmaCustomModelSoftLabel(nn.Module):
def __init__(self, architecture):
super(AtmaCustomModelSoftLabel, self).__init__()
self.model = timm.create_model(architecture, pretrained=False, in_chans=3)
#print(self.model)
if 'vit' in architecture:
self.n_features = self.model.head.in_features
self.model.head = nn.Identity()
elif 'resnet' in architecture:
self.n_features = self.model.fc.in_features
self.model.fc = nn.Identity()
elif 'efficient' in architecture:
self.n_features = self.model.classifier.in_features
self.model.classifier = nn.Identity()
elif 'densenet' in architecture:
self.n_features = self.model.classifier.in_features
self.model.classifier = nn.Identity()
elif 'nfnet' in architecture:
self.n_features = self.model.head.fc.in_features
self.model.head.fc = nn.Identity()
self.fc1 = nn.Linear(self.n_features, 1)
self.fc2 = nn.Linear(self.n_features, 1)
def forward(self, x):
x = self.model(x)
x1 = self.fc1(x)
x2 = self.fc2(x)
return x1, x2
class AtmaCustomModelViTDINO(nn.Module):
def __init__(self, architecture, pretrained_path):
super(AtmaCustomModelViTDINO, self).__init__()
self.model = vits.__dict__[architecture](patch_size=16)
load_pretrained_weights(self.model, pretrained_path, 'teacher', architecture, 16)
self.n_features = self.model.embed_dim
self.head = nn.Linear(self.n_features, 1)
def forward(self, x):
x = self.model(x)
x = self.head(x)
return x
class AtmaCustomModelResNetDINO(nn.Module):
def __init__(self, architecture, pretrained_path):
super(AtmaCustomModelResNetDINO, self).__init__()
self.model = timm.create_model(architecture, pretrained=False, in_chans=3)
load_pretrained_weights_resnet(self.model, pretrained_path, 'teacher', architecture, 16)
if 'resnet' in architecture:
self.n_features = self.model.fc.in_features
self.model.fc = nn.Linear(self.n_features, 1)
elif 'efficient' in architecture:
self.n_features = self.model.classifier.in_features
self.model.classifier = nn.Linear(self.n_features, 1)
def forward(self, x):
x = self.model(x)
return x
class AtmaCustomModelResNetDINOClass(nn.Module):
def __init__(self, architecture, pretrained_path):
super(AtmaCustomModelResNetDINOClass, self).__init__()
self.model = timm.create_model(architecture, pretrained=False, in_chans=3)
load_pretrained_weights_resnet(self.model, pretrained_path, 'teacher', architecture, 16)
if 'resnet' in architecture:
self.n_features = self.model.fc.in_features
self.model.fc = nn.Linear(self.n_features, 4)
elif 'efficient' in architecture:
self.n_features = self.model.classifier.in_features
self.model.classifier = nn.Linear(self.n_features, 4)
def forward(self, x):
x = self.model(x)
return x
class AtmaCustomModelViTDINOSoftLabel(nn.Module):
def __init__(self, architecture, pretrained_path):
super(AtmaCustomModelViTDINOSoftLabel, self).__init__()
self.model = vits.__dict__[architecture](patch_size=16)
load_pretrained_weights(self.model, pretrained_path, 'teacher', architecture, 16)
self.n_features = self.model.embed_dim
self.head1 = nn.Linear(self.n_features, 1)
self.head2 = nn.Linear(self.n_features, 1)
def forward(self, x):
x = self.model(x)
x1 = self.head1(x)
x2 = self.head2(x)
return x1, x2
class AtmaCustomModelViTDINOClass(nn.Module):
def __init__(self, architecture, pretrained_path):
super(AtmaCustomModelViTDINOClass, self).__init__()
self.model = vits.__dict__[architecture](patch_size=16)
load_pretrained_weights(self.model, pretrained_path, 'teacher', architecture, 16)
self.n_features = self.model.embed_dim
self.head = nn.Linear(self.n_features, 4)
def forward(self, x):
x = self.model(x)
x = self.head(x)
return x | 34.291667 | 96 | 0.649887 | 704 | 5,761 | 5.096591 | 0.110795 | 0.130435 | 0.115942 | 0.080546 | 0.823021 | 0.814103 | 0.813824 | 0.77369 | 0.77369 | 0.704013 | 0 | 0.012004 | 0.248047 | 5,761 | 168 | 97 | 34.291667 | 0.816251 | 0.015796 | 0 | 0.683761 | 0 | 0 | 0.022065 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.119658 | false | 0 | 0.051282 | 0 | 0.290598 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3d9c32783e2c1900f975ba0a538daaf173531e52 | 23 | py | Python | Measure/marching_cubes.py | Joevaen/Scikit-image_On_CT | e3bf0eeadc50691041b4b7c44a19d07546a85001 | [
"Apache-2.0"
] | null | null | null | Measure/marching_cubes.py | Joevaen/Scikit-image_On_CT | e3bf0eeadc50691041b4b7c44a19d07546a85001 | [
"Apache-2.0"
] | null | null | null | Measure/marching_cubes.py | Joevaen/Scikit-image_On_CT | e3bf0eeadc50691041b4b7c44a19d07546a85001 | [
"Apache-2.0"
] | null | null | null | # The marching cubes algorithm can find surfaces in 3D volume data. | 23 | 23 | 0.869565 | 1 | 23 | 20 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.043478 | 23 | 1 | 23 | 23 | 0.863636 | 0.913043 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3dcb76b4c52a5d1ce0ae768108f28ee8bd281015 | 4,460 | py | Python | responsibleai/tests/counterfactual/test_counterfactual_advanced_features.py | PYTHON01100100/responsible-ai-widgets | 07ff0ca27b6f278d0e27172386cccbd7d909d88c | [
"MIT"
] | 1 | 2021-09-11T14:43:23.000Z | 2021-09-11T14:43:23.000Z | responsibleai/tests/counterfactual/test_counterfactual_advanced_features.py | PYTHON01100100/responsible-ai-widgets | 07ff0ca27b6f278d0e27172386cccbd7d909d88c | [
"MIT"
] | null | null | null | responsibleai/tests/counterfactual/test_counterfactual_advanced_features.py | PYTHON01100100/responsible-ai-widgets | 07ff0ca27b6f278d0e27172386cccbd7d909d88c | [
"MIT"
] | null | null | null |
# Copyright (c) Microsoft Corporation
# Licensed under the MIT License.
import pytest
import numpy as np
from ..common_utils import (
create_iris_data, create_lightgbm_classifier
)
from responsibleai import ModelAnalysis
class TestCounterfactualAdvancedFeatures(object):
@pytest.mark.parametrize('vary_all_features', [True, False])
@pytest.mark.parametrize('feature_importance', [True, False])
def test_counterfactual_vary_features(
self, vary_all_features, feature_importance):
X_train, X_test, y_train, y_test, feature_names, _ = \
create_iris_data()
model = create_lightgbm_classifier(X_train, y_train)
X_train['target'] = y_train
X_test['target'] = y_test
model_analysis = ModelAnalysis(
model=model,
train=X_train,
test=X_test.iloc[0:10],
target_column='target',
task_type='classification')
if vary_all_features:
features_to_vary = 'all'
else:
features_to_vary = [feature_names[0]]
model_analysis.counterfactual.add(
total_CFs=10, desired_class=2,
features_to_vary=features_to_vary,
feature_importance=feature_importance)
model_analysis.counterfactual.compute()
cf_obj = model_analysis.counterfactual.get()[0]
for feature_name in feature_names:
if not vary_all_features and feature_name != feature_names[0]:
expected_array = np.repeat(
[X_test.iloc[0:1][feature_name][0]],
cf_obj.cf_examples_list[0].final_cfs_df.shape[0])
assert np.all(
np.isclose(
cf_obj.cf_examples_list[0].final_cfs_df[feature_name],
expected_array
)
)
else:
expected_array = np.repeat(
[X_test.iloc[0:1][feature_name][0]],
cf_obj.cf_examples_list[0].final_cfs_df.shape[0])
assert not np.all(
np.isclose(
cf_obj.cf_examples_list[0].final_cfs_df[feature_name],
expected_array
)
)
@pytest.mark.parametrize('feature_importance', [True, False])
def test_counterfactual_permitted_range(self, feature_importance):
X_train, X_test, y_train, y_test, feature_names, _ = \
create_iris_data()
model = create_lightgbm_classifier(X_train, y_train)
X_train['target'] = y_train
X_test['target'] = y_test
model_analysis = ModelAnalysis(
model=model,
train=X_train,
test=X_test.iloc[0:10],
target_column='target',
task_type='classification')
model_analysis.counterfactual.add(
total_CFs=10, desired_class=2,
features_to_vary=[feature_names[0]],
permitted_range={feature_names[0]: [2.0, 5.0]},
feature_importance=feature_importance)
model_analysis.counterfactual.compute()
# TODO: The logic below needs to be made robust for gated tests
cf_obj = model_analysis.counterfactual.get()[0]
for feature_name in feature_names:
if feature_name != feature_names[0]:
expected_array = np.repeat(
[X_test.iloc[0:1][feature_name][0]],
cf_obj.cf_examples_list[0].final_cfs_df.shape[0])
assert np.all(
np.isclose(
cf_obj.cf_examples_list[0].final_cfs_df[feature_name],
expected_array
)
)
else:
expected_array = np.repeat(
[X_test.iloc[0:1][feature_name][0]],
cf_obj.cf_examples_list[0].final_cfs_df.shape[0])
assert not np.all(
np.isclose(
cf_obj.cf_examples_list[0].final_cfs_df[feature_name],
expected_array
)
)
# assert np.any(
# cf_obj.cf_examples_list[0].final_cfs_df[feature_name] >=
# 2.0)
# assert np.any(
# cf_obj.cf_examples_list[0].final_cfs_df[feature_name] <=
# 5.0)
| 36.859504 | 78 | 0.559417 | 502 | 4,460 | 4.621514 | 0.189243 | 0.066379 | 0.030172 | 0.064655 | 0.79569 | 0.79569 | 0.778448 | 0.778448 | 0.719828 | 0.719828 | 0 | 0.018326 | 0.35157 | 4,460 | 120 | 79 | 37.166667 | 0.783887 | 0.06704 | 0 | 0.712766 | 0 | 0 | 0.028916 | 0 | 0 | 0 | 0 | 0.008333 | 0.042553 | 1 | 0.021277 | false | 0 | 0.106383 | 0 | 0.138298 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9ab2e26e33e6dab276e7c11af852425d67dde72e | 93 | py | Python | app/dstvza/__init__.py | Lameaux/mock_server | c387af54d1b974ce1ed5f841de214a45d07fe901 | [
"MIT"
] | null | null | null | app/dstvza/__init__.py | Lameaux/mock_server | c387af54d1b974ce1ed5f841de214a45d07fe901 | [
"MIT"
] | null | null | null | app/dstvza/__init__.py | Lameaux/mock_server | c387af54d1b974ce1ed5f841de214a45d07fe901 | [
"MIT"
] | null | null | null | from flask import Blueprint
dstvza = Blueprint('dstvza', __name__)
from . import endpoints
| 15.5 | 38 | 0.774194 | 11 | 93 | 6.181818 | 0.636364 | 0.441176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150538 | 93 | 5 | 39 | 18.6 | 0.860759 | 0 | 0 | 0 | 0 | 0 | 0.064516 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
# gaia-sdk-python/gaia_sdk/api/data/DataRefRequestConfig.py (leftshiftone/gaia-sdk, MIT license)
from typing import Callable
class DataRefRequestConfig:

    def __init__(self, on_upload_progress: Callable[[int], None]):
        self.on_upload_progress = on_upload_progress

    def on_upload_progress(self, progress: int):
        """Return current upload progress."""
        # Note: the instance attribute assigned in __init__ shadows this method.
        pass
# tomoxtal/utils/__init__.py (apeck12/tomoxtal, MIT license)
from .cctbx_tools import *
from .phases import *
from .visualize import *
# bots/fb/send.py (kosyachniy/dev, Apache-2.0 license)
from fb_bot import send as send_fb
send_fb(4019533504784468, 'blah blah')
# src_pet/sample.py (XiaZeng0223/alps, MIT license)
import glob
import logging
import numpy as np
import torch
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler, TensorDataset, Subset
from torch.distributions.categorical import Categorical
from torch.distributions.uniform import Uniform
from tqdm import tqdm, trange
from torch.nn import CrossEntropyLoss, Softmax, KLDivLoss
from torch.nn.functional import one_hot
import torch.nn.functional as F
from sklearn.neighbors import KNeighborsClassifier
# from sklearn.neighbors.dist_metrics import DistanceMetric
import pathlib
import os
from sklearn.cluster import KMeans
from scipy.spatial.distance import cosine, euclidean
from sklearn.metrics.pairwise import pairwise_distances, paired_distances
from typing import Callable, Union
from collections import Counter
from scipy.stats import entropy as entropy_
from scipy.special import softmax
from src_pet.data import (
    convert_examples_to_features,
    compute_metrics,
    processors,
    output_modes
)
def sampling_to_head(sampling):
    """Given [sampling] method, return the head of the model that is supposed to be used."""
    head = "lm"
    # All of the newly implemented strategies are warm-start for now.
    warmstart = ["badge", "FTbert", "least", "margin", "entropy", "densitye", "densityc",
                 "densityes", 'densitycs', "cal", "weighted_density",
                 "commitee_weighted_vote", "commitee_weighted_KL",
                 "commitee_vote", "commitee_KL", "commitee_dis", "seede", "seedc", "seedel", "seedcl",
                 "seedme", "seedmc", "seedmel", "seedmcl",
                 "seedmet", "seedmct", "seedmetl", "seedmctl",
                 "seedmep", "seedmcp", "seedmepl", "seedmcpl",
                 "seedmep_", "seedmcp_", "seedmepl_", "seedmcpl_",
                 "abs_densitye_seed", "abs_densityc_seed", 'abs_seede', 'abs_seedcl', "abs_cal_seed",
                 'abs_seedme', 'abs_seedmet', 'abs_seedmep', 'abs_seed_mep_',
                 "seedmKL", 'seedme0', 'seedmet0', 'seedmep0']
    for s in warmstart:
        # If [sampling] is a warm-start method, it needs the classification head.
        if s in sampling:
            head = "sc"
    return head
def check_model_head(model, sampling):
    """Check whether [model] has the correct head for [sampling] method."""
    if "MaskedLM" in model.config.architectures[0]:
        model_head = "lm"
    elif "SequenceClassification" in model.config.architectures[0]:
        model_head = "sc"
    else:
        raise NotImplementedError
    sampling_head = sampling_to_head(sampling)
    return model_head == sampling_head
def load_and_embed_examples(args, model, tokenizer, evaluate=True, text=None, sub_index=None,
                            return_plus=False, return_only_labels=False, return_logits=False):
    """Load the training examples for args.task_name and embed them with [model].

    Depending on the flags, returns embeddings, logits, labels, or a combination
    of them, for either the full training set or the sub_index subset.
    """
    if args.local_rank not in [-1, 0] and not evaluate:
        # Make sure only the first process in distributed training processes the
        # dataset; the others will use the cache.
        torch.distributed.barrier()

    task = args.task_name
    processor = processors[task]()
    # Load data features from cache or dataset file
    data_split = "train"
    cached_features_file = os.path.join(
        args.data_dir,
        "cached_{}_{}_{}_{}_{}".format(
            data_split,
            list(filter(None, args.base_model.split("/"))).pop(),
            str(args.max_seq_length),
            str(task),
            text
        ),
    )
    if os.path.exists(cached_features_file) and not args.overwrite_cache:
        # print("Loading features from cached file %s" % cached_features_file)
        features = torch.load(cached_features_file)
    else:
        print("Creating features from dataset file at %s" % args.data_dir)
        examples = processor.get_train_examples(args.data_dir)
        label_list = processor.get_labels()
        features = convert_examples_to_features(
            examples,
            tokenizer,
            label_list=label_list,
            max_length=args.max_seq_length,
            output_mode='classification',
            text=text
        )
        if args.local_rank in [-1, 0]:
            print("Saving features into cached file %s" % cached_features_file)
            torch.save(features, cached_features_file)

    # If we only need a subset of the whole dataset, e.g. when obtaining the labeled set
    # print('before indexing', len(features))
    if sub_index is not None:
        features = [features[index] for index in sub_index]
    # print('after indexing', len(features))

    # Convert to Tensors and build dataset
    all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
    all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
    all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
    all_labels = torch.tensor([f.label for f in features], dtype=torch.long)
    if return_only_labels:
        return all_labels

    dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels)
    eval_sampler = SequentialSampler(dataset)
    eval_dataloader = DataLoader(dataset, sampler=eval_sampler, batch_size=args.eval_batch_size)

    if return_plus:
        all_labeled_emb = None
        all_labeled_logits = None
        for batch in eval_dataloader:
            batch = tuple(t.to(args.device) for t in batch)
            inputs = {}
            # mask_tokens() requires CPU input_ids
            if args.head == "lm":
                input_ids_cpu = batch[0].cpu().clone()
                input_ids_mask, labels = mask_tokens(input_ids_cpu, tokenizer, args)
                input_ids = input_ids_mask if args.masked else batch[0]
                input_ids = input_ids.to(args.device)
                labels = labels.to(args.device)
                inputs["input_ids"] = input_ids
                inputs["masked_lm_labels"] = labels
            elif args.head == "sc":
                inputs["input_ids"] = batch[0]
            else:
                raise NotImplementedError
            inputs["attention_mask"] = batch[1]
            if args.model_type != "distilbert":
                inputs["token_type_ids"] = (
                    batch[2] if args.model_type in ["bert", "xlnet", "albert"] else None
                )  # XLM, DistilBERT, RoBERTa, and XLM-RoBERTa don't use segment_ids
            labeled_logits = model(**inputs).logits.cpu().numpy()
            if all_labeled_logits is None:
                all_labeled_logits = labeled_logits
            else:
                all_labeled_logits = np.append(all_labeled_logits, labeled_logits, axis=0)
            labeled_emb = embedding(model, inputs, args).cpu().numpy()
            if all_labeled_emb is None:
                all_labeled_emb = labeled_emb
            else:
                all_labeled_emb = np.append(all_labeled_emb, labeled_emb, axis=0)
        return torch.tensor(all_labeled_emb), torch.tensor(all_labeled_logits), all_labels

    if return_logits:
        all_labeled_logits = None
        for batch in eval_dataloader:
            batch = tuple(t.to(args.device) for t in batch)
            inputs = {}
            # mask_tokens() requires CPU input_ids
            if args.head == "lm":
                input_ids_cpu = batch[0].cpu().clone()
                input_ids_mask, labels = mask_tokens(input_ids_cpu, tokenizer, args)
                input_ids = input_ids_mask if args.masked else batch[0]
                input_ids = input_ids.to(args.device)
                labels = labels.to(args.device)
                inputs["input_ids"] = input_ids
                inputs["masked_lm_labels"] = labels
            elif args.head == "sc":
                inputs["input_ids"] = batch[0]
            else:
                raise NotImplementedError
            inputs["attention_mask"] = batch[1]
            if args.model_type != "distilbert":
                inputs["token_type_ids"] = (
                    batch[2] if args.model_type in ["bert", "xlnet", "albert"] else None
                )  # XLM, DistilBERT, RoBERTa, and XLM-RoBERTa don't use segment_ids
            labeled_logits = model(**inputs).logits.cpu().numpy()
            if all_labeled_logits is None:
                all_labeled_logits = labeled_logits
            else:
                all_labeled_logits = np.append(all_labeled_logits, labeled_logits, axis=0)
        return torch.tensor(all_labeled_logits), all_labels
    else:
        all_embeds = None
        for batch in tqdm(eval_dataloader, desc="Evaluating"):
            batch = tuple(t.to(args.device) for t in batch)
            inputs = {}
            # mask_tokens() requires CPU input_ids
            if args.head == "lm":
                input_ids_cpu = batch[0].cpu().clone()
                input_ids_mask, labels = mask_tokens(input_ids_cpu, tokenizer, args)
                input_ids = input_ids_mask if args.masked else batch[0]
                input_ids = input_ids.to(args.device)
                labels = labels.to(args.device)
                inputs["input_ids"] = input_ids
                inputs["masked_lm_labels"] = labels
            elif args.head == "sc":
                inputs["input_ids"] = batch[0]
            else:
                raise NotImplementedError
            inputs["attention_mask"] = batch[1]
            if args.model_type != "distilbert":
                inputs["token_type_ids"] = (
                    batch[2] if args.model_type in ["bert", "xlnet", "albert"] else None
                )  # XLM, DistilBERT, RoBERTa, and XLM-RoBERTa don't use segment_ids
            embeds = embedding(model, inputs, args, pooling=args.pooling).cpu().numpy()
            if all_embeds is None:
                all_embeds = embeds
            else:
                all_embeds = np.append(all_embeds, embeds, axis=0)
        return all_embeds
def read_logits(args):
    '''Read logits that were generated previously.'''
    logits = np.loadtxt('{}/logits/eval_logits.txt'.format(args.output_dir))
    return torch.tensor(logits)


def read_multiple_logits(args):
    '''Read multiple sets of logits that were generated previously.'''
    logits = []
    for i in range(len(args.model_name_or_path)):
        filename = '{}/logits_{}/eval_logits.txt'.format(args.output_dir, i)
        logits.append(torch.tensor(np.loadtxt(filename)))
    return logits
def random(inputs, args, **kwargs):
    """Random sampling by assigning uniformly random scores to all points."""
    if args.sampling_seed:
        torch.manual_seed(args.sampling_seed)
        scores = Uniform(0, 1).sample((inputs["input_ids"].size(0),))
        torch.manual_seed(args.seed)
    else:
        scores = Uniform(0, 1).sample((inputs["input_ids"].size(0),))
    return scores
def least_conf(model, inputs, args, **kwargs):
    """Least-confidence sampling: score each example by one minus the maximum
    class probability predicted by [model]."""
    proba = read_logits(args).softmax(dim=-1)
    scores = 1 - torch.max(proba, dim=1).values
    return scores
def margin(model, inputs, args, **kwargs):
    """Calculates the margin between the top-2 prediction probabilities."""
    proba = read_logits(args).softmax(dim=-1).cpu().numpy()
    part = np.partition(-proba, 1, axis=1)
    scores = torch.tensor(-part[:, 0] + part[:, 1])
    return scores
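As a standalone illustration (toy probabilities, not tied to `read_logits`), the `np.partition` trick above pulls the two largest class probabilities into the first two columns, and their difference is the top-2 margin:

```python
import numpy as np

# Toy softmax outputs for three examples over three classes (illustrative values).
proba = np.array([
    [0.5, 0.3, 0.2],    # margin 0.2
    [0.9, 0.05, 0.05],  # margin 0.85 (confident prediction)
    [0.4, 0.35, 0.25],  # margin 0.05 (ambiguous prediction)
])

# Negating flips max/min, so the two smallest entries of -proba are the two
# largest probabilities; np.partition places them in columns 0 and 1.
part = np.partition(-proba, 1, axis=1)
margin_scores = -part[:, 0] + part[:, 1]  # p_top1 - p_top2
```

A smaller margin indicates a more ambiguous example.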
def entropy(model, inputs, args, **kwargs):
    """Maximum-entropy sampling: score each example by the entropy of its
    label distribution under [model]."""
    logits = read_logits(args)
    categorical = Categorical(logits=logits)
    scores = categorical.entropy()
    return scores
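For intuition, the score that `Categorical(logits=...).entropy()` computes can be reproduced with plain NumPy (toy logits, invented for illustration):

```python
import numpy as np

# Toy logits for two examples over three classes.
logits = np.array([
    [2.0, 0.0, 0.0],  # peaked distribution -> low entropy
    [0.0, 0.0, 0.0],  # uniform distribution -> maximal entropy log(3)
])

# Numerically stable softmax, then Shannon entropy in nats, matching
# torch.distributions.Categorical(logits=...).entropy().
proba = np.exp(logits - logits.max(axis=1, keepdims=True))
proba /= proba.sum(axis=1, keepdims=True)
entropy_scores = -(proba * np.log(proba)).sum(axis=1)
```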
def density_euclidean_SEED(model, inputs, args, tokenizer, **kwargs):
    """Maximum-density sampling: score each example by its mean Euclidean-based
    similarity to all other examples when embedded by [model]."""
    # print('getting embedding_a')
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    # print('getting embedding_b')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    X = X_a - X_b
    similarity_mtx = 1 / (1 + pairwise_distances(X, X, metric='euclidean'))
    scores = torch.tensor(similarity_mtx.mean(axis=1))
    return scores


def density_cosine_SEED(model, inputs, args, tokenizer, **kwargs):
    """Maximum-density sampling: score each example by its mean cosine-based
    similarity to all other examples when embedded by [model]."""
    # print('getting embedding_a')
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    # print('getting embedding_b')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    X = X_a - X_b
    similarity_mtx = 1 / (1 + pairwise_distances(X, X, metric='cosine'))
    scores = torch.tensor(similarity_mtx.mean(axis=1))
    # print(scores)
    return scores
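The information-density score above rewards points that sit in dense regions of embedding space. A NumPy-only sketch of the same similarity computation, with toy 2-D vectors standing in for the embedding differences:

```python
import numpy as np

# Toy embedding differences: two nearby points and one outlier.
X = np.array([
    [0.0, 0.0],
    [0.1, 0.0],
    [5.0, 5.0],
])

# Pairwise Euclidean distances (equivalent to sklearn's pairwise_distances here),
# mapped to similarities in (0, 1] via 1 / (1 + d).
diff = X[:, None, :] - X[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))
similarity = 1.0 / (1.0 + dist)
density_scores = similarity.mean(axis=1)  # outliers receive the lowest score
```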
def abs_densitye_seed(model, inputs, args, tokenizer, **kwargs):
    """Maximum-density sampling on absolute-difference embeddings: score each
    example by its mean Euclidean-based similarity to all other examples."""
    # print('getting embedding_a')
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    # print('getting embedding_b')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    X = np.absolute(X_a - X_b)
    similarity_mtx = 1 / (1 + pairwise_distances(X, X, metric='euclidean'))
    scores = torch.tensor(similarity_mtx.mean(axis=1))
    return scores


def abs_densityc_seed(model, inputs, args, tokenizer, **kwargs):
    """Maximum-density sampling on absolute-difference embeddings: score each
    example by its mean cosine-based similarity to all other examples."""
    # print('getting embedding_a')
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    # print('getting embedding_b')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    X = np.absolute(X_a - X_b)
    similarity_mtx = 1 / (1 + pairwise_distances(X, X, metric='cosine'))
    scores = torch.tensor(similarity_mtx.mean(axis=1))
    # print(scores)
    return scores
def commitee_vote(model, inputs, args, **kwargs):
    """Committee vote entropy: score each example by the entropy of the
    committee's hard-label votes when passed through each m in [model]."""
    # votes = []
    # for m, i in zip(model, inputs):
    #     vote = m(**i)[0].argmax(dim=1).cpu().numpy()
    #     votes.append(vote)
    votes = [logits.argmax(dim=1).numpy() for logits in read_multiple_logits(args)]
    votes = np.transpose(votes)
    p_vote = np.zeros(shape=(votes.shape[0], 3))  # 3-class task
    committee = args.model_name_or_path
    for vote_idx, vote in enumerate(votes):
        vote_counter = Counter(vote)
        for class_idx, class_label in enumerate([0, 1, 2]):
            p_vote[vote_idx, class_idx] = vote_counter[class_label] / len(committee)
    entr = entropy_(p_vote, axis=1)
    scores = torch.tensor(entr)
    return scores
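The vote-entropy idea can be seen on a toy committee (hard labels invented for illustration): unanimous votes yield zero entropy, full disagreement yields log(n_classes):

```python
import numpy as np
from collections import Counter

# Hypothetical hard predictions from a 3-member committee for two examples.
votes = np.array([
    [0, 0, 0],  # unanimous
    [0, 1, 2],  # maximal disagreement
])
n_members = votes.shape[1]

# Empirical vote distribution per example (3-class task, as in the code above).
p_vote = np.zeros((votes.shape[0], 3))
for i, row in enumerate(votes):
    counts = Counter(row)
    for c in (0, 1, 2):
        p_vote[i, c] = counts[c] / n_members

# Shannon entropy with the 0 * log(0) = 0 convention.
safe = np.where(p_vote > 0, p_vote, 1.0)  # log(1) = 0 contributes nothing
vote_entropy = -(p_vote * np.log(safe)).sum(axis=1)
```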
def commitee_weighted_vote(model, inputs, args, **kwargs):
    """Weighted committee vote entropy: like commitee_vote, but each member's
    vote is weighted by its model size."""
    votes = [logits.argmax(dim=1).numpy() for logits in read_multiple_logits(args)]
    votes = np.transpose(votes)
    p_vote = np.zeros(shape=(votes.shape[0], 3))  # 3-class task
    committee = args.model_name_or_path
    # Introduce weighting w.r.t. model size.
    weighting = [m.config.hidden_size for m in model]
    # weighting = [m.num_parameters() for m in model]
    w_min = min(weighting)
    weighting = [w / w_min for w in weighting]
    for vote_idx, vote in enumerate(votes):
        vote_counter = Counter()
        for v, w in zip(vote, weighting):
            vote_counter.update({v: w})
        for class_idx, class_label in enumerate([0, 1, 2]):
            p_vote[vote_idx, class_idx] = vote_counter[class_label] / len(committee)
    entr = entropy_(p_vote, axis=1)
    scores = torch.tensor(entr)
    return scores
def commitee_weighted_KL(model, inputs, args, **kwargs):
    """Weighted committee KL divergence: score each example by the maximum KL
    divergence between a member's predictive distribution and the consensus,
    where the consensus is weighted proportionally to model size."""
    probas = [logits.softmax(dim=-1).numpy() for logits in read_multiple_logits(args)]
    p_vote = np.transpose(probas, axes=[1, 0, 2])
    # Get a consensus that is proportional to model size.
    weighting = [m.config.hidden_size for m in model]
    # weighting = [m.num_parameters() for m in model]
    w_min = min(weighting)
    weighting = [w / w_min for w in weighting]
    p_consensus = np.average(p_vote, axis=1, weights=weighting)
    committee = args.model_name_or_path
    learner_KL_div = np.zeros(shape=(probas[0].shape[0], len(committee)))
    for learner_idx, _ in enumerate(committee):
        learner_KL_div[:, learner_idx] = entropy_(np.transpose(p_vote[:, learner_idx, :]), qk=np.transpose(p_consensus))
    scores = torch.tensor(np.max(learner_KL_div, axis=1))
    return scores


def commitee_KL(model, inputs, args, **kwargs):
    """Committee KL divergence: score each example by the maximum KL divergence
    between a member's predictive distribution and the unweighted consensus."""
    # probas = []
    # for m, i in zip(model, inputs):
    #     proba = m(**i)[0].softmax(dim=-1).cpu().numpy()
    #     probas.append(proba)
    probas = [logits.softmax(dim=-1).numpy() for logits in read_multiple_logits(args)]
    p_vote = np.transpose(probas, axes=[1, 0, 2])
    p_consensus = np.mean(p_vote, axis=1)
    committee = args.model_name_or_path
    learner_KL_div = np.zeros(shape=(probas[0].shape[0], len(committee)))
    for learner_idx, _ in enumerate(committee):
        learner_KL_div[:, learner_idx] = entropy_(np.transpose(p_vote[:, learner_idx, :]), qk=np.transpose(p_consensus))
    scores = torch.tensor(np.max(learner_KL_div, axis=1))
    return scores
def alps(model, inputs, args, **kwargs):
    """Obtain masked language modeling loss from [model] for tokens in [inputs].
    Returns a batch_size x seq_length tensor. For ALPS the model is loaded with
    an LM head rather than a sequence-classification head."""
    labels = inputs["masked_lm_labels"]
    inputs.pop("masked_lm_labels", None)
    logits = model(**inputs).logits
    batch_size, seq_length, vocab_size = logits.size()
    loss_fct = CrossEntropyLoss(reduction='none')
    loss_batched = loss_fct(logits.view(-1, vocab_size), labels.view(-1))
    scores = loss_batched.view(batch_size, seq_length)
    return scores


def badge_gradient(model, inputs, args, **kwargs):
    """Return the loss gradient with respect to the penultimate layer for BADGE."""
    pooled_output = embedding(model, inputs, args)
    logits = model(**inputs).logits
    batch_size, num_classes = logits.size()
    softmax = Softmax(dim=1)
    probs = softmax(logits)
    preds = probs.argmax(dim=1)
    preds_oh = one_hot(preds, num_classes=num_classes)
    scales = probs - preds_oh
    grads_3d = torch.einsum('bi,bj->bij', scales, pooled_output)
    grads = grads_3d.view(batch_size, -1)
    return grads
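A NumPy sketch of the BADGE gradient construction, with toy embeddings and logits standing in for `embedding(...)` and the model call: the gradient of cross-entropy at the predicted label, taken with respect to the final linear layer, is the outer product of `probs - one_hot(preds)` with the penultimate embedding:

```python
import numpy as np

# Toy penultimate-layer embeddings (2 examples, dim 4) and logits (3 classes).
pooled = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0]])
logits = np.array([[2.0, 0.0, 0.0],
                   [0.0, 3.0, 0.0]])

probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
preds = probs.argmax(axis=1)
preds_oh = np.eye(3)[preds]   # one-hot of the hard predictions
scales = probs - preds_oh     # dL/dlogits if the prediction were the true label

# Outer product per example, then flatten: shape (batch, num_classes * hidden_dim).
grads = np.einsum('bi,bj->bij', scales, pooled).reshape(len(pooled), -1)
```

Each flattened gradient sums to zero, since the probability rows sum to one and the one-hot rows do too.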
def seede(model, inputs, args, tokenizer, **kwargs):
    '''Use the seed set to find the data that are farthest (Euclidean) from the
    class-representative vectors.'''
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # Use random sampling for the first 10 instances.
        return random(inputs, args)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a', sub_index=labeled_ids)
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b', sub_index=labeled_ids)
    X_labeled = X_a - X_b
    labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                        text=None, sub_index=labeled_ids, return_only_labels=True)
    vecs = []
    for y in [0, 1, 2]:
        idx = np.where(labeled_y == y)[0]
        if len(idx) > 0:
            vec = np.mean(X_labeled[idx], axis=0)
            vecs.append(vec)
    vecs = np.array(vecs)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    X = X_a - X_b
    scores = np.min(pairwise_distances(X, vecs, metric='euclidean'), axis=1)
    return torch.tensor(scores)


def abs_seede(model, inputs, args, tokenizer, **kwargs):
    '''Use the seed set to find the data that are farthest (Euclidean) from the
    class-representative vectors (absolute-difference embeddings).'''
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # Use random sampling for the first 10 instances.
        return random(inputs, args)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a', sub_index=labeled_ids)
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b', sub_index=labeled_ids)
    X_labeled = np.absolute(X_a - X_b)
    labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                        text=None, sub_index=labeled_ids, return_only_labels=True)
    vecs = []
    for y in [0, 1, 2]:
        idx = np.where(labeled_y == y)[0]
        if len(idx) > 0:
            vec = np.mean(X_labeled[idx], axis=0)
            vecs.append(vec)
    vecs = np.array(vecs)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    X = np.absolute(X_a - X_b)
    scores = np.min(pairwise_distances(X, vecs, metric='euclidean'), axis=1)
    return torch.tensor(scores)
def seedcl(model, inputs, args, tokenizer, **kwargs):
    '''Use the seed set to find the data that are closest (cosine) to the
    class-representative vectors.'''
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # Use random sampling for the first 10 instances.
        return random(inputs, args)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a', sub_index=labeled_ids)
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b', sub_index=labeled_ids)
    X_labeled = X_a - X_b
    labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                        text=None, sub_index=labeled_ids, return_only_labels=True)
    vecs = []
    for y in [0, 1, 2]:
        idx = np.where(labeled_y == y)[0]  # take the index array, not the tuple
        if len(idx) > 0:
            vec = np.mean(X_labeled[idx], axis=0)
            vecs.append(vec)
    vecs = np.array(vecs)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    X = X_a - X_b
    scores = -np.min(pairwise_distances(X, vecs, metric='cosine'), axis=1)
    return torch.tensor(scores)


def abs_seedcl(model, inputs, args, tokenizer, **kwargs):
    '''Use the seed set to find the data that are closest (cosine) to the
    class-representative vectors (absolute-difference embeddings).'''
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # Use random sampling for the first 10 instances.
        return random(inputs, args)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a', sub_index=labeled_ids)
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b', sub_index=labeled_ids)
    X_labeled = np.absolute(X_a - X_b)
    labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                        text=None, sub_index=labeled_ids, return_only_labels=True)
    vecs = []
    for y in [0, 1, 2]:
        idx = np.where(labeled_y == y)[0]  # take the index array, not the tuple
        if len(idx) > 0:
            vec = np.mean(X_labeled[idx], axis=0)
            vecs.append(vec)
    vecs = np.array(vecs)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    X = np.absolute(X_a - X_b)
    scores = -np.min(pairwise_distances(X, vecs, metric='cosine'), axis=1)
    return torch.tensor(scores)
def abs_seedme(model, inputs, args, tokenizer, **kwargs):
    '''Use the seed set to find the data that make the class-representative
    vectors move the most (absolute-difference embeddings).'''
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # Use random sampling for the first 10 instances.
        return random(inputs, args)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a', sub_index=labeled_ids)
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b', sub_index=labeled_ids)
    X_labeled = np.absolute(X_a - X_b)
    labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                        text=None, sub_index=labeled_ids, return_only_labels=True)
    vecs = []
    ids = []
    for y in [0, 1, 2]:
        idx = np.where(labeled_y == y)[0]
        if len(idx) == 0:
            vec = np.zeros_like(X_labeled[0])
        else:
            vec = np.mean(X_labeled[idx], axis=0)
        vecs.append(vec)
        ids.append(idx)
    vecs = np.array(vecs)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    X = np.absolute(X_a - X_b)
    vec_closest = np.argmin(pairwise_distances(X, vecs, metric='euclidean'), axis=1)
    vecs_before = []
    vecs_after = []
    for x, vec_id in zip(X, vec_closest):
        vec_before = vecs[vec_id]
        vec_after = (vec_before * len(ids[vec_id]) + x) / (len(ids[vec_id]) + 1)
        vecs_before.append(vec_before)
        vecs_after.append(vec_after)
    vecs_before = np.array(vecs_before)
    vecs_after = np.array(vecs_after)
    # print(vecs_before.shape, vecs_after.shape)
    scores = paired_distances(vecs_before, vecs_after, metric='euclidean')
    # print('scores', scores)
    return torch.tensor(scores)


def seedme(model, inputs, args, tokenizer, **kwargs):
    '''Use the seed set to find the data that make the class-representative
    vectors move the most.'''
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # Use random sampling for the first 10 instances.
        return random(inputs, args)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a', sub_index=labeled_ids)
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b', sub_index=labeled_ids)
    X_labeled = X_a - X_b
    labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                        text=None, sub_index=labeled_ids, return_only_labels=True)
    vecs = []
    ids = []
    for y in [0, 1, 2]:
        idx = np.where(labeled_y == y)[0]
        if len(idx) > 0:
            vec = np.mean(X_labeled[idx], axis=0)
            vecs.append(vec)
            ids.append(idx)
    vecs = np.array(vecs)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    X = X_a - X_b
    vec_closest = np.argmin(pairwise_distances(X, vecs, metric='euclidean'), axis=1)
    vecs_before = []
    vecs_after = []
    for x, vec_id in zip(X, vec_closest):
        vec_before = vecs[vec_id]
        vec_after = (vec_before * len(ids[vec_id]) + x) / (len(ids[vec_id]) + 1)
        vecs_before.append(vec_before)
        vecs_after.append(vec_after)
    vecs_before = np.array(vecs_before)
    vecs_after = np.array(vecs_after)
    # print(vecs_before.shape, vecs_after.shape)
    scores = paired_distances(vecs_before, vecs_after, metric='euclidean')
    # print('scores', scores)
    return torch.tensor(scores)
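The centroid update inside the loop above is the standard incremental-mean step: adding a point x to a class whose centroid vec is built from n points moves the centroid by (x - vec) / (n + 1). A tiny sketch with invented numbers:

```python
import numpy as np

vec = np.array([0.0, 0.0])  # current class-representative vector (toy)
n = 4                       # number of labeled points behind it
x = np.array([5.0, 0.0])    # candidate unlabeled point

vec_after = (vec * n + x) / (n + 1)          # same update as in seedme
movement = np.linalg.norm(vec_after - vec)   # the sampling score for x
```

So the more points already back a centroid, the harder it is for a single candidate to move it, which biases selection toward under-represented classes.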
def abs_seedmet(model, inputs, args, tokenizer, **kwargs):
'''use seed to find the data that makes the class representative vectors move the most.
1/3 for each vector movement.'''
sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
if os.path.isfile(sampled_file):
labeled_ids = torch.load(sampled_file)
else:
print('doing random sampling')
#use random to sample the first 10 instances
return random(inputs, args)
X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text = 'text_a', sub_index=labeled_ids)
X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text = 'text_b', sub_index=labeled_ids)
X_labeled = np.absolute(X_a - X_b)
labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
text=None, sub_index=labeled_ids, return_only_labels=True)
vecs=[]; ids=[]
for y in [0, 1, 2]:
idx = np.where(labeled_y == y)[0]
if len(idx) == 0:
vec = np.zeros_like(X_labeled[0])
else:
vec = np.mean(X_labeled[idx], axis=0)
vecs.append(vec)
ids.append(idx)
vecs=np.array(vecs)
len_ids = np.array([len(i) for i in ids])
X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text = 'text_a')
X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text = 'text_b')
X = np.absolute(X_a - X_b)
all_vecs_after=[]
for x in X:
vecs_after = []
for len_id, vec in zip(len_ids, vecs):
vec_after = (vec*len_id + x)/(len_id+1)
vecs_after.append(vec_after)
all_vecs_after.append(vecs_after)
all_vecs_after=np.array(all_vecs_after)
all_scores = []
for i, vec in enumerate(vecs):
scores=pairwise_distances(all_vecs_after[:, i, :], vecs[i].reshape(1, -1), metric='euclidean').flatten()
all_scores.append(scores)
all_scores = np.array(all_scores)
final_scores = np.sum(all_scores, axis=0)/3 #each class is predicted true with 1/3 of probability
return torch.tensor(final_scores)


def seedmet(model, inputs, args, tokenizer, **kwargs):
    '''Use SEED to find the data that moves the class representative vectors the most.
    Each vector movement is weighted by 1/3.'''
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # use random to sample the first 10 instances
        return random(inputs, args)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a', sub_index=labeled_ids)
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b', sub_index=labeled_ids)
    X_labeled = X_a - X_b
    labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                        text=None, sub_index=labeled_ids, return_only_labels=True)
    vecs = []
    ids = []
    for y in [0, 1, 2]:
        idx = np.where(labeled_y == y)[0]
        if len(idx) > 0:
            vec = np.mean(X_labeled[idx], axis=0)
            vecs.append(vec)
            ids.append(idx)
    vecs = np.array(vecs)
    len_ids = np.array([len(i) for i in ids])
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    X = X_a - X_b
    all_vecs_after = []
    for x in X:
        vecs_after = []
        for len_id, vec in zip(len_ids, vecs):
            vec_after = (vec * len_id + x) / (len_id + 1)
            vecs_after.append(vec_after)
        all_vecs_after.append(vecs_after)
    all_vecs_after = np.array(all_vecs_after)
    all_scores = []
    for i, vec in enumerate(vecs):
        scores = pairwise_distances(all_vecs_after[:, i, :], vecs[i].reshape(1, -1), metric='euclidean').flatten()
        all_scores.append(scores)
    all_scores = np.array(all_scores)
    final_scores = np.sum(all_scores, axis=0) / 3  # each class is predicted with probability 1/3
    return torch.tensor(final_scores)
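# Illustration (not part of the module): the update `(vec * len_id + x) / (len_id + 1)`
# used above is the standard incremental-mean formula — it gives the centroid a class
# would have if the candidate `x` were added to it, and the acquisition score is how far
# that centroid moves. A plain-Python sketch with toy 1-D values (made-up numbers):

```python
from statistics import mean

# Existing class members and a candidate point (toy 1-D values).
members = [2.0, 4.0, 6.0]
candidate = 10.0

# Incremental update, as in seedmet: (old_mean * n + x) / (n + 1).
old_mean = mean(members)  # 4.0
new_mean = (old_mean * len(members) + candidate) / (len(members) + 1)

# It equals the mean recomputed over the enlarged set.
assert new_mean == mean(members + [candidate]) == 5.5

# The acquisition score is the distance the centroid moved.
movement = abs(new_mean - old_mean)  # 1.5
```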


def seedmep(model, inputs, args, tokenizer, **kwargs):
    '''Use SEED to find the data that moves the class representative vectors the most.
    Each vector's movement is weighted by its predicted class probability.'''
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # use random to sample the first 10 instances
        return random(inputs, args)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a', sub_index=labeled_ids)
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b', sub_index=labeled_ids)
    X_labeled = X_a - X_b
    labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                        text=None, sub_index=labeled_ids, return_only_labels=True)
    vecs = []
    ids = []
    for y in [0, 1, 2]:
        idx = np.where(labeled_y == y)[0]
        if len(idx) == 0:
            vec = np.zeros_like(X_labeled[0])
        else:
            vec = np.mean(X_labeled[idx], axis=0)
        vecs.append(vec)
        ids.append(idx)
    vecs = np.array(vecs)
    len_ids = np.array([len(i) for i in ids])
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    X = X_a - X_b
    all_vecs_after = []
    for x in X:
        vecs_after = []
        for len_id, vec in zip(len_ids, vecs):
            vec_after = (vec * len_id + x) / (len_id + 1)
            vecs_after.append(vec_after)
        all_vecs_after.append(vecs_after)
    all_vecs_after = np.array(all_vecs_after)
    all_scores = []
    for i, vec in enumerate(vecs):
        scores = pairwise_distances(all_vecs_after[:, i, :], vecs[i].reshape(1, -1), metric='euclidean').flatten()
        all_scores.append(scores)
    all_scores = np.array(all_scores).transpose()
    probas = read_logits(args).softmax(dim=-1).numpy()
    final_scores = []
    for s, p in zip(all_scores, probas):  # use the predicted probabilities to weight the vector-movement effect
        score = np.average(s, weights=p)
        final_scores.append(score)
    return torch.tensor(final_scores)


def abs_seedmep(model, inputs, args, tokenizer, **kwargs):
    '''Use SEED (with absolute-value embeddings) to find the data that moves the class
    representative vectors the most. Each vector's movement is weighted by its predicted
    class probability.'''
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # use random to sample the first 10 instances
        return random(inputs, args)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a', sub_index=labeled_ids)
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b', sub_index=labeled_ids)
    X_labeled = np.absolute(X_a - X_b)
    labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                        text=None, sub_index=labeled_ids, return_only_labels=True)
    vecs = []
    ids = []
    for y in [0, 1, 2]:
        idx = np.where(labeled_y == y)[0]
        if len(idx) == 0:
            vec = np.zeros_like(X_labeled[0])
        else:
            vec = np.mean(X_labeled[idx], axis=0)
        vecs.append(vec)
        ids.append(idx)
    vecs = np.array(vecs)
    len_ids = np.array([len(i) for i in ids])
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    X = np.absolute(X_a - X_b)
    all_vecs_after = []
    for x in X:
        vecs_after = []
        for len_id, vec in zip(len_ids, vecs):
            vec_after = (vec * len_id + x) / (len_id + 1)
            vecs_after.append(vec_after)
        all_vecs_after.append(vecs_after)
    all_vecs_after = np.array(all_vecs_after)
    all_scores = []
    for i, vec in enumerate(vecs):
        scores = pairwise_distances(all_vecs_after[:, i, :], vecs[i].reshape(1, -1), metric='euclidean').flatten()
        all_scores.append(scores)
    all_scores = np.array(all_scores).transpose()
    probas = read_logits(args).softmax(dim=-1).numpy()
    final_scores = []
    for s, p in zip(all_scores, probas):  # use the predicted probabilities to weight the vector-movement effect
        score = np.average(s, weights=p)
        final_scores.append(score)
    return torch.tensor(final_scores)


def seedmep_(model, inputs, args, tokenizer, **kwargs):
    '''Use SEED to find the data that moves the class representative vectors the most.
    The movement of the predicted class's vector is used as the score.'''
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # use random to sample the first 10 instances
        return random(inputs, args)
    texta_tasks = ['pubmed', 'imdb', 'sst-2', 'cola']  # 'agnews' 'wsc'
    textab_tasks = ['cfever', 'scifact', 'mnli', 'mnli-mm', 'sts-b', 'mrpc', 'qqp', 'qnli', 'rte', 'wnli']
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a', sub_index=labeled_ids)
    if args.task_name in texta_tasks:
        X_labeled = X_a
        labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                            text='text_a', sub_index=labeled_ids, return_only_labels=True)
    elif args.task_name in textab_tasks:
        X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b', sub_index=labeled_ids)
        X_labeled = X_a - X_b
        labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                            text=None, sub_index=labeled_ids, return_only_labels=True)
    vecs = []
    ids = []
    for y in range(len(set(labeled_y))):
        idx = np.where(labeled_y == y)[0]
        if len(idx) == 0:
            vec = np.zeros_like(X_labeled[0])
        else:
            vec = np.mean(X_labeled[idx], axis=0)
        vecs.append(vec)
        ids.append(idx)
    vecs = np.array(vecs)
    len_ids = np.array([len(i) for i in ids])
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    if args.task_name in texta_tasks:
        X = X_a
    elif args.task_name in textab_tasks:
        X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
        X = X_a - X_b
    all_vecs_after = []
    for x in X:
        vecs_after = []
        for len_id, vec in zip(len_ids, vecs):
            vec_after = (vec * len_id + x) / (len_id + 1)
            vecs_after.append(vec_after)
        all_vecs_after.append(vecs_after)
    all_vecs_after = np.array(all_vecs_after)
    all_scores = []
    for i, vec in enumerate(vecs):
        scores = pairwise_distances(all_vecs_after[:, i, :], vecs[i].reshape(1, -1), metric='euclidean').flatten()
        all_scores.append(scores)
    all_scores = np.array(all_scores).transpose()
    if args.task_name in texta_tasks:
        preds = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                        text='text_a', return_only_labels=True)
    elif args.task_name in textab_tasks:
        preds = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                        text=None, return_only_labels=True)
    final_scores = [s[p] for s, p in zip(all_scores, preds)]
    return torch.tensor(final_scores)


def abs_seedmep_(model, inputs, args, tokenizer, **kwargs):
    '''Use SEED (with absolute-value embeddings) to find the data that moves the class
    representative vectors the most. The movement of the predicted class's vector is
    used as the score.'''
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # use random to sample the first 10 instances
        return random(inputs, args)
    texta_tasks = ['pubmed', 'imdb', 'sst-2', 'cola']  # 'agnews' 'wsc'
    textab_tasks = ['cfever', 'scifact', 'mnli', 'mnli-mm', 'sts-b', 'mrpc', 'qqp', 'qnli', 'rte', 'wnli']
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a', sub_index=labeled_ids)
    if args.task_name in texta_tasks:
        X_labeled = np.absolute(X_a)
        labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                            text='text_a', sub_index=labeled_ids, return_only_labels=True)
    elif args.task_name in textab_tasks:
        X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b', sub_index=labeled_ids)
        X_labeled = np.absolute(X_a - X_b)
        labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                            text=None, sub_index=labeled_ids, return_only_labels=True)
    vecs = []
    ids = []
    for y in range(len(set(labeled_y))):
        idx = np.where(labeled_y == y)[0]
        if len(idx) == 0:
            vec = np.zeros_like(X_labeled[0])
        else:
            vec = np.mean(X_labeled[idx], axis=0)
        vecs.append(vec)
        ids.append(idx)
    vecs = np.array(vecs)
    len_ids = np.array([len(i) for i in ids])
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    if args.task_name in texta_tasks:
        X = np.absolute(X_a)
    elif args.task_name in textab_tasks:
        X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
        X = np.absolute(X_a - X_b)
    all_vecs_after = []
    for x in X:
        vecs_after = []
        for len_id, vec in zip(len_ids, vecs):
            vec_after = (vec * len_id + x) / (len_id + 1)
            vecs_after.append(vec_after)
        all_vecs_after.append(vecs_after)
    all_vecs_after = np.array(all_vecs_after)
    all_scores = []
    for i, vec in enumerate(vecs):
        scores = pairwise_distances(all_vecs_after[:, i, :], vecs[i].reshape(1, -1), metric='euclidean').flatten()
        all_scores.append(scores)
    all_scores = np.array(all_scores).transpose()
    if args.task_name in texta_tasks:
        preds = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                        text='text_a', return_only_labels=True)
    elif args.task_name in textab_tasks:
        preds = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                        text=None, return_only_labels=True)
    final_scores = [s[p] for s, p in zip(all_scores, preds)]
    return torch.tensor(final_scores)


def cal(model, inputs, args, tokenizer, **kwargs):
    """
    CAL (Contrastive Active Learning): acquire the data points with the largest KL
    divergence between the predictions for a candidate dpool input and those of its
    nearest neighbours in the training set.
    """
    # first, get already labeled points
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # use random to sample the first 10 instances
        return random(inputs, args)
    texta_tasks = ['pubmed', 'imdb', 'sst-2', 'cola']  # 'agnews' 'wsc'
    textab_tasks = ['cfever', 'scifact', 'mnli', 'mnli-mm', 'sts-b', 'mrpc', 'qqp', 'qnli', 'rte', 'wnli']
    if args.task_name in texta_tasks:
        labeled_emb, labeled_logits, labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                                                         text='text_a', sub_index=labeled_ids, return_plus=True)
    elif args.task_name in textab_tasks:
        labeled_emb, labeled_logits, labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                                                         text=None, sub_index=labeled_ids, return_plus=True)
    neigh = KNeighborsClassifier(n_neighbors=10)  # args.num_nei=10 by default in the original implementation
    neigh.fit(X=labeled_emb, y=np.array(labeled_y))
    criterion = KLDivLoss(reduction='none')
    dpool_logits = model(**inputs).logits.cpu()
    dpool_bert_emb = embedding(model, inputs, args).cpu()
    kl_scores = []
    num_adv = 0
    distances = []
    for unlab_i, candidate in enumerate(zip(dpool_bert_emb, dpool_logits)):
        # find the indices of the closest "neighbours" of this unlabeled point in the train set
        distances_, neighbours = neigh.kneighbors(X=[candidate[0].numpy()], return_distance=True)
        distances.append(distances_[0])
        preds_neigh = [int(np.argmax(labeled_logits[n])) for n in neighbours[0]]
        neigh_prob = F.softmax(labeled_logits[neighbours], dim=-1)
        pred_candidate = [int(np.argmax(candidate[1]))]
        # non-zero iff the candidate's prediction agrees with at least one neighbour
        num_diff_pred = len(set(preds_neigh).intersection(pred_candidate))
        if num_diff_pred > 0:
            num_adv += 1
        uda_softmax_temp = 1
        candidate_log_prob = F.log_softmax(candidate[1] / uda_softmax_temp, dim=-1)
        kl = np.array([torch.sum(criterion(candidate_log_prob, n), dim=-1).numpy() for n in neigh_prob])
        kl_scores.append(kl.mean())
    kl_scores = torch.tensor(kl_scores)
    return kl_scores
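# Illustration (not part of the module): CAL's score is the mean KL divergence between
# each neighbour's predicted distribution and the candidate's. With KLDivLoss, the
# candidate's log-probabilities are the input and the neighbour's probabilities are the
# target, i.e. KL(neighbour || candidate). A toy version with made-up probabilities:

```python
import math

def kl_div(p, q):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

candidate = [0.7, 0.2, 0.1]      # softmax of a candidate's logits
neighbours = [[0.6, 0.3, 0.1],   # softmax of its labelled neighbours' logits
              [0.1, 0.8, 0.1]]

# The neighbour distribution is the target, so average KL(neighbour || candidate).
score = sum(kl_div(n, candidate) for n in neighbours) / len(neighbours)
assert score > 0

# A neighbour identical to the candidate contributes zero divergence.
assert kl_div(candidate, candidate) == 0
```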


def cal_seed(model, inputs, args, tokenizer, **kwargs):
    """
    CAL_seed is CAL, except that it uses SEED embeddings.
    """
    # first, get already labeled points
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # use random to sample the first 10 instances
        return random(inputs, args)
    labeled_logits, labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                                        text=None, sub_index=labeled_ids, return_logits=True)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a', sub_index=labeled_ids)
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b', sub_index=labeled_ids)
    labeled_emb = X_a - X_b
    neigh = KNeighborsClassifier(n_neighbors=10)  # args.num_nei=10 by default in the original implementation
    neigh.fit(X=labeled_emb, y=np.array(labeled_y))
    criterion = KLDivLoss(reduction='none')
    # we never use/need dpool_y since those labels are unknown; unpacking it just keeps the code clean
    dpool_logits, dpool_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                                    text=None, return_logits=True)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    dpool_bert_emb = X_a - X_b
    kl_scores = []
    num_adv = 0
    distances = []
    for unlab_i, candidate in enumerate(zip(dpool_bert_emb, dpool_logits)):
        # find the indices of the closest "neighbours" of this unlabeled point in the train set
        distances_, neighbours = neigh.kneighbors(X=[candidate[0]], return_distance=True)
        distances.append(distances_[0])
        preds_neigh = [int(np.argmax(labeled_logits[n])) for n in neighbours[0]]
        neigh_prob = F.softmax(labeled_logits[neighbours], dim=-1)
        pred_candidate = [int(np.argmax(candidate[1]))]
        # non-zero iff the candidate's prediction agrees with at least one neighbour
        num_diff_pred = len(set(preds_neigh).intersection(pred_candidate))
        if num_diff_pred > 0:
            num_adv += 1
        uda_softmax_temp = 1
        candidate_log_prob = F.log_softmax(candidate[1] / uda_softmax_temp, dim=-1)
        kl = np.array([torch.sum(criterion(candidate_log_prob, n), dim=-1).numpy() for n in neigh_prob])
        kl_scores.append(kl.mean())
    kl_scores = torch.tensor(kl_scores)
    return kl_scores


def abs_cal_seed(model, inputs, args, tokenizer, **kwargs):
    """
    abs_cal_seed is CAL, except that it uses absolute-value SEED embeddings.
    """
    # first, get already labeled points
    sampled_file = os.path.join(args.model_name_or_path, 'sampled.pt')
    if os.path.isfile(sampled_file):
        labeled_ids = torch.load(sampled_file)
    else:
        print('doing random sampling')
        # use random to sample the first 10 instances
        return random(inputs, args)
    labeled_logits, labeled_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                                        text=None, sub_index=labeled_ids, return_logits=True)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a', sub_index=labeled_ids)
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b', sub_index=labeled_ids)
    labeled_emb = np.absolute(X_a - X_b)
    neigh = KNeighborsClassifier(n_neighbors=10)  # args.num_nei=10 by default in the original implementation
    neigh.fit(X=labeled_emb, y=np.array(labeled_y))
    criterion = KLDivLoss(reduction='none')
    # we never use/need dpool_y since those labels are unknown; unpacking it just keeps the code clean
    dpool_logits, dpool_y = load_and_embed_examples(args=args, model=model, tokenizer=tokenizer, evaluate=True,
                                                    text=None, return_logits=True)
    X_a = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_a')
    X_b = load_and_embed_examples(args, model, tokenizer, evaluate=True, text='text_b')
    dpool_bert_emb = np.absolute(X_a - X_b)
    kl_scores = []
    num_adv = 0
    distances = []
    for unlab_i, candidate in enumerate(zip(dpool_bert_emb, dpool_logits)):
        # find the indices of the closest "neighbours" of this unlabeled point in the train set
        distances_, neighbours = neigh.kneighbors(X=[candidate[0]], return_distance=True)
        distances.append(distances_[0])
        preds_neigh = [int(np.argmax(labeled_logits[n])) for n in neighbours[0]]
        neigh_prob = F.softmax(labeled_logits[neighbours], dim=-1)
        pred_candidate = [int(np.argmax(candidate[1]))]
        # non-zero iff the candidate's prediction agrees with at least one neighbour
        num_diff_pred = len(set(preds_neigh).intersection(pred_candidate))
        if num_diff_pred > 0:
            num_adv += 1
        uda_softmax_temp = 1
        candidate_log_prob = F.log_softmax(candidate[1] / uda_softmax_temp, dim=-1)
        kl = np.array([torch.sum(criterion(candidate_log_prob, n), dim=-1).numpy() for n in neigh_prob])
        kl_scores.append(kl.mean())
    kl_scores = torch.tensor(kl_scores)
    return kl_scores


def embedding(model, inputs, args, pooling='cls', **kwargs):
    """The original ALPS code returns the pooler output as the embedding, e.g.
    output = model.bert(**inputs)[1] for BERT.
    However, that only works with BERT and ALBERT: many models have no pooler layer
    (e.g. RoBERTa, DeBERTa). Here we pool over last_hidden_state instead:
    model.bert(**inputs)[0] returns last_hidden_state, and [:, 0, :] selects the
    embedding of the [CLS] token for each instance."""
    inputs.pop("masked_lm_labels", None)
    # name of the backbone attribute for each supported model type
    backbones = {'bert': 'bert', 'roberta': 'roberta', 'albert': 'albert',
                 'deberta': 'deberta', 'xlnet': 'transformer', 'longformer': 'longformer'}
    if args.model_type not in backbones:
        raise NotImplementedError
    hidden = getattr(model, backbones[args.model_type])(**inputs)[0]  # last_hidden_state
    if pooling == 'cls':
        output = hidden[:, 0, :]
    elif pooling == 'mean':
        output = torch.mean(hidden, 1)
    elif pooling == 'max':
        output = torch.max(hidden, 1).values
    elif pooling == 'median':
        output = torch.median(hidden, 1).values
    else:
        raise NotImplementedError
    return output
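# Illustration (not part of the module): each pooling option reduces last_hidden_state
# of shape (batch, seq_len, hidden) to one vector per instance along the token axis.
# A plain-Python sketch on a single toy sequence (made-up numbers, no torch):

```python
# Toy last_hidden_state for one instance: 4 tokens x 3 hidden dims.
hidden = [[1.0, 0.0, 2.0],   # [CLS]
          [3.0, 1.0, 0.0],
          [0.0, 2.0, 1.0],
          [2.0, 1.0, 1.0]]

cls_pool = hidden[0]                                          # 'cls': take the [CLS] token
mean_pool = [sum(col) / len(hidden) for col in zip(*hidden)]  # 'mean': average over tokens
max_pool = [max(col) for col in zip(*hidden)]                 # 'max': max over tokens

assert cls_pool == [1.0, 0.0, 2.0]
assert mean_pool == [1.5, 1.0, 1.0]
assert max_pool == [3.0, 2.0, 2.0]
```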


def mask_tokens(inputs, tokenizer, args):
    """ Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. """
    if tokenizer.mask_token is None:
        raise ValueError(
            "This tokenizer does not have a mask token which is necessary for masked language modeling. Remove the --mlm flag if you want to use this tokenizer."
        )
    labels = inputs.clone()
    # We sample a few tokens in each sequence for masked-LM training
    # (with probability args.mlm_probability, which defaults to 0.15 in BERT/RoBERTa)
    probability_matrix = torch.full(labels.shape, args.mlm_probability)
    special_tokens_mask = [
        tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()
    ]
    probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)
    if tokenizer._pad_token is not None:
        padding_mask = labels.eq(tokenizer.pad_token_id)
        probability_matrix.masked_fill_(padding_mask, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # we only compute loss on masked tokens
    # 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    inputs[indices_replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)
    # 10% of the time, we replace masked input tokens with a random word
    indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
    random_words = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)
    inputs[indices_random] = random_words[indices_random]
    # The rest of the time (10% of the time) we keep the masked input tokens unchanged
    return inputs, labels
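# Illustration (not part of the module): the two bernoulli draws above realise BERT's
# 80/10/10 split — of the tokens selected for masking, ~80% become [MASK], half of the
# remaining 20% become a random word, and the rest stay unchanged. A quick stdlib
# simulation of those proportions (seeded toy experiment):

```python
import random

random.seed(0)
n = 100_000
masked, random_tok, kept = 0, 0, 0
for _ in range(n):
    # decisions for one token that was already selected by masked_indices
    if random.random() < 0.8:        # bernoulli(0.8) -> [MASK]
        masked += 1
    elif random.random() < 0.5:      # bernoulli(0.5) of the remainder -> random word
        random_tok += 1
    else:                            # the rest stays unchanged
        kept += 1

# empirical fractions land close to 80% / 10% / 10%
assert abs(masked / n - 0.8) < 0.01
assert abs(random_tok / n - 0.1) < 0.01
assert abs(kept / n - 0.1) < 0.01
```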


def batch_scores_or_vectors(batch, args, model, tokenizer):
    """Return scores (or vectors) for data [batch] given the active learning method"""
    if args.sampling in ['least', 'margin', 'entropy',
                         'commitee_vote', 'commitee_KL']:  # strategies that read cached logits rather than generating them
        with torch.no_grad():
            scores_or_vectors = sampling_method(args.sampling)(model=None, inputs=None, args=args)
        return scores_or_vectors
    else:
        if type(model) != list:
            model.eval()
        batch = tuple(t.to(args.device) for t in batch)
        inputs = {}
        # mask_tokens() requires CPU input_ids
        if args.head == "lm":
            input_ids_cpu = batch[0].cpu().clone()
            input_ids_mask, labels = mask_tokens(input_ids_cpu, tokenizer, args)
            input_ids = input_ids_mask if args.masked else batch[0]
            input_ids = input_ids.to(args.device)
            labels = labels.to(args.device)
            inputs["input_ids"] = input_ids
            inputs["masked_lm_labels"] = labels
        elif args.head == "sc":
            inputs["input_ids"] = batch[0]
        else:
            raise NotImplementedError
        inputs["attention_mask"] = batch[1]
        if args.model_type != "distilbert":
            inputs["token_type_ids"] = (
                batch[2] if args.model_type in ["bert", "xlnet", "albert"] else None
            )  # XLM, DistilBERT, RoBERTa, and XLM-RoBERTa don't use segment_ids
        with torch.no_grad():
            scores_or_vectors = sampling_method(args.sampling)(model=model, inputs=inputs, args=args, tokenizer=tokenizer)
        return scores_or_vectors


def get_scores_or_vectors(eval_dataset, args, model, tokenizer=None):
    """Return the scores or vectors needed for active learning sampling."""
    # assert check_model_head(model, args.sampling), "Model-sampling mismatch"
    # Loop to handle MNLI double evaluation (matched, mis-matched)
    eval_task_names = ("mnli", "mnli-mm") if args.task_name == "mnli" else (args.task_name,)
    for eval_task in eval_task_names:
        # Note that DistributedSampler samples randomly
        eval_sampler = SequentialSampler(eval_dataset)
        args.eval_batch_size = args.per_gpu_eval_batch_size * max(1, args.n_gpu)
        if args.sampling in ['least', 'margin', 'entropy',
                             'densityes', 'densitycs', 'seede', 'seedc', 'seedel', 'seedcl', 'cal_seed',
                             'seedme', 'seedmc', 'seedmel', 'seedmcl',
                             'seedmet', 'seedmct', 'seedmetl', 'seedmctl',
                             'seedmep', 'seedmcp', 'seedmepl', 'seedmcpl',
                             'seedmep_', 'seedmcp_', 'seedmepl_', 'seedmcpl_',
                             'seedmKL', 'seedme0', 'seedmet0', 'seedmep0',
                             'abs_densitye_seed', 'abs_densityc_seed', 'abs_seede', 'abs_seedcl', 'abs_cal_seed',
                             'abs_seedme', 'abs_seedmet', 'abs_seedmep', 'abs_seed_mep_',
                             'commitee_weighted_vote', 'commitee_weighted_KL',
                             'commitee_vote', 'commitee_KL']:
            eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=len(eval_dataset))
        else:
            eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=args.eval_batch_size)
        # multi-gpu eval
        if args.n_gpu > 1 and not isinstance(model, torch.nn.DataParallel):
            model = torch.nn.DataParallel(model)
        all_scores_or_vectors = None
        for batch in tqdm(eval_dataloader, desc="Evaluating"):
            scores_or_vectors = batch_scores_or_vectors(batch, args, model, tokenizer)
            if all_scores_or_vectors is None:
                all_scores_or_vectors = scores_or_vectors.detach().cpu().numpy()
            else:
                all_scores_or_vectors = np.append(all_scores_or_vectors, scores_or_vectors.detach().cpu().numpy(), axis=0)
    all_scores_or_vectors = torch.tensor(all_scores_or_vectors)
    return all_scores_or_vectors


def pool_scores_or_vectors(eval_dataset, args, model, tokenizer=None):
    mapping = {"weighted_density": ["densitycs", "entropy"]}
    if args.sampling in mapping.keys():
        sampling = args.sampling
        args.sampling = mapping[sampling][0]
        first = get_scores_or_vectors(eval_dataset, args, model, tokenizer).numpy()
        args.sampling = mapping[sampling][1]
        second = get_scores_or_vectors(eval_dataset, args, model, tokenizer).numpy()
        beta = 1
        weighted_scores = np.prod([first, np.power(second, beta)], axis=0)
        return torch.tensor(weighted_scores)
    else:
        scores = get_scores_or_vectors(eval_dataset, args, model, tokenizer)
        return scores
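# Illustration (not part of the module): the "weighted_density" combination above is the
# element-wise product density * uncertainty**beta (an information-density style
# weighting). With beta = 1 it reduces to a plain product, sketched here with toy,
# exactly-representable floats:

```python
density = [0.25, 0.5, 0.75]   # toy density scores
entropy = [0.5, 0.25, 1.0]    # toy uncertainty scores
beta = 1

# element-wise product, as in np.prod([first, np.power(second, beta)], axis=0)
weighted = [d * (e ** beta) for d, e in zip(density, entropy)]
assert weighted == [0.125, 0.125, 0.75]
```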


SAMPLING = {
    "rand": random,
    "least": least_conf,
    "margin": margin,
    "entropy": entropy,
    "densityes": density_euclidean_SEED,
    "densitycs": density_cosine_SEED,
    "commitee_vote": commitee_vote,
    "commitee_weighted_vote": commitee_weighted_vote,
    "commitee_KL": commitee_KL,
    "commitee_weighted_KL": commitee_weighted_KL,
    "badge": badge_gradient,
    "alps": alps,
    "cal": cal,
    "seede": seede,
    "seedcl": seedcl,
    "seedme": seedme,
    "seedmet": seedmet,
    "seedmep": seedmep,
    "seedmep_": seedmep_,
    "cal_seed": cal_seed,
    "abs_densitye_seed": abs_densitye_seed,
    "abs_densityc_seed": abs_densityc_seed,
    "abs_seede": abs_seede,
    "abs_seedcl": abs_seedcl,
    "abs_cal_seed": abs_cal_seed,
    "abs_seedme": abs_seedme,
    "abs_seedmet": abs_seedmet,
    "abs_seedmep": abs_seedmep,
    "abs_seed_mep_": abs_seedmep_,
}


def sampling_method(method):
    """Determine function [f] given name of sampling [method] for active learning"""
    if method in SAMPLING:
        f = SAMPLING[method]
    elif "mlm" in method:
        f = alps
    elif "bert" in method:
        f = embedding
    else:
        raise NotImplementedError
    return f
# --- stubs/uiflow_stickc_plus/m5ui.py (repo: c99koder/m5stickc-aerogarden, Apache-2.0) ---
"""
Module: 'm5ui' on uiflow_stickc_plus v1.9.1
"""
# MCU: {'ver': 'v1.12', 'port': 'esp32', 'arch': 'xtensawin', 'sysname': 'esp32', 'release': '1.12.0', 'name': 'micropython', 'mpy': 10757, 'version': '1.12.0', 'machine': 'M5StickC-Plus with ESP32', 'build': 'dirty', 'nodename': 'esp32', 'platform': 'esp32', 'family': 'micropython'}
# Stubber: 1.5.4
from typing import Any


class M5Button():
    ''

    def __init__(self, *argv, **kwargs) -> None:
        ''
        ...


class M5Circle():
    ''

    def __init__(self, *argv, **kwargs) -> None:
        ''
        ...

    def hide(self, *args, **kwargs) -> Any:
        ...

    def setBgColor(self, *args, **kwargs) -> Any:
        ...

    def setBorderColor(self, *args, **kwargs) -> Any:
        ...

    def setPosition(self, *args, **kwargs) -> Any:
        ...

    def setSize(self, *args, **kwargs) -> Any:
        ...

    def show(self, *args, **kwargs) -> Any:
        ...


class M5Img():
    ''

    def __init__(self, *argv, **kwargs) -> None:
        ''
        ...

    def changeImg(self, *args, **kwargs) -> Any:
        ...

    def hide(self, *args, **kwargs) -> Any:
        ...

    def setPosition(self, *args, **kwargs) -> Any:
        ...

    def show(self, *args, **kwargs) -> Any:
        ...


class M5Line():
    ''

    def __init__(self, *argv, **kwargs) -> None:
        ''
        ...

    HLINE = 1  # type: int
    PLINE = 2  # type: int
    VLINE = 0  # type: int

    def hide(self, *args, **kwargs) -> Any:
        ...

    def setColor(self, *args, **kwargs) -> Any:
        ...

    def setSize(self, *args, **kwargs) -> Any:
        ...

    def show(self, *args, **kwargs) -> Any:
        ...


class M5Rect():
    ''

    def __init__(self, *argv, **kwargs) -> None:
        ''
        ...

    def hide(self, *args, **kwargs) -> Any:
        ...

    def setBgColor(self, *args, **kwargs) -> Any:
        ...

    def setBorderColor(self, *args, **kwargs) -> Any:
        ...

    def setPosition(self, *args, **kwargs) -> Any:
        ...

    def setSize(self, *args, **kwargs) -> Any:
        ...

    def show(self, *args, **kwargs) -> Any:
        ...


class M5TextBox():
    ''

    def __init__(self, *argv, **kwargs) -> None:
        ''
        ...

    def hide(self, *args, **kwargs) -> Any:
        ...

    def setColor(self, *args, **kwargs) -> Any:
        ...

    def setFont(self, *args, **kwargs) -> Any:
        ...

    def setPosition(self, *args, **kwargs) -> Any:
        ...

    def setRotate(self, *args, **kwargs) -> Any:
        ...

    def setText(self, *args, **kwargs) -> Any:
        ...

    def show(self, *args, **kwargs) -> Any:
        ...


class M5Title():
    ''

    def __init__(self, *argv, **kwargs) -> None:
        ''
        ...

    def hide(self, *args, **kwargs) -> Any:
        ...

    def setBgColor(self, *args, **kwargs) -> Any:
        ...

    def setFgColor(self, *args, **kwargs) -> Any:
        ...

    def setTitle(self, *args, **kwargs) -> Any:
        ...

    def show(self, *args, **kwargs) -> Any:
        ...


class M5Triangle():
    ''

    def __init__(self, *argv, **kwargs) -> None:
        ''
        ...

    def hide(self, *args, **kwargs) -> Any:
        ...

    def setBgColor(self, *args, **kwargs) -> Any:
        ...

    def setBorderColor(self, *args, **kwargs) -> Any:
        ...

    def setSize(self, *args, **kwargs) -> Any:
        ...

    def show(self, *args, **kwargs) -> Any:
        ...


def M5UI_Deinit(*args, **kwargs) -> Any:
    ...


def setScreenColor(*args, **kwargs) -> Any:
    ...
# --- usage_sample.py (repo: acatoire/json_generator, Apache-2.0) ---
"""
Simple usage sample of json generator
"""
from json_generator import JsonGenerator, ConfJson
def method_name(kiss: str = None, homogeneous: bool = True):
kiss_str = "_kiss" if kiss else ""
homogeneous_str = "_homogeneous" if homogeneous else "_non_homogeneous"
# 1 level
my_json = JsonGenerator(name=1,
json_config=ConfJson(nb_string=0,
nb_json=2, conf=ConfJson(3)),
kiss=kiss, homogeneous_schema=homogeneous)
my_json.generate_json_file(f"generation/json_1_levels{kiss_str}{homogeneous_str}.json")
# 2 level
my_json = JsonGenerator(name=1,
json_config=ConfJson(nb_string=2,
nb_obj=2, size_obj=3,
nb_json=3, conf=ConfJson(2, 2, 2)),
kiss=kiss, homogeneous_schema=homogeneous)
my_json.generate_json_file(f"generation/json_2_levels{kiss_str}{homogeneous_str}.json")
# 3 level
my_json = JsonGenerator(name=2,
json_config=ConfJson(nb_string=2,
nb_obj=2, size_obj=3,
nb_json=3, conf=ConfJson(2, 2, 2,
1, ConfJson(1, 2, 2))),
kiss=kiss, homogeneous_schema=homogeneous)
my_json.generate_json_file(f"generation/json_3_levels{kiss_str}{homogeneous_str}.json")
# only one list
my_json = JsonGenerator(name=2,
json_config=ConfJson(nb_string=0,
nb_obj=0, size_obj=3,
nb_json=0, conf=ConfJson(2, 2, 2),
nb_list=1, nb_list_elements=5, conf_lst=ConfJson(nb_string=20)),
kiss=kiss, homogeneous_schema=homogeneous)
my_json.generate_json_file(f"generation/json_one_list{kiss_str}{homogeneous_str}.json")
# 2_level_list
my_json = JsonGenerator(name=5,
json_config=ConfJson(nb_string=0,
nb_obj=0, size_obj=3,
nb_json=0, conf=ConfJson(nb_string=2, nb_obj=2, size_obj=3),
nb_list=1, nb_list_elements=2,
conf_lst=ConfJson(nb_string=0,
nb_obj=0, size_obj=1,
nb_json=0, conf=ConfJson(nb_string=1),
nb_list=1, nb_list_elements=5,
conf_lst=ConfJson(nb_string=3))),
kiss=kiss, homogeneous_schema=homogeneous)
my_json.generate_json_file(f"generation/json_2_level_list{kiss_str}{homogeneous_str}.json")
# 3_level_list
my_json = JsonGenerator(name=5,
json_config=ConfJson(nb_string=0,
nb_obj=0, size_obj=3,
nb_json=0, conf=ConfJson(nb_string=2, nb_obj=2, size_obj=3),
nb_list=1, nb_list_elements=3,
conf_lst=ConfJson(nb_string=0,
nb_obj=0, size_obj=1,
nb_json=0, conf=ConfJson(nb_string=1),
nb_list=1, nb_list_elements=2,
conf_lst=ConfJson(nb_string=0,
nb_obj=0, size_obj=1,
nb_json=0,
conf=ConfJson(nb_string=1),
nb_list=1, nb_list_elements=5,
conf_lst=ConfJson(nb_string=3)))),
kiss=kiss, homogeneous_schema=homogeneous)
my_json.generate_json_file(f"generation/json_3_level_list{kiss_str}{homogeneous_str}.json")
if __name__ == '__main__':
method_name(kiss=None, homogeneous=True)
method_name(kiss='o', homogeneous=True)
method_name(kiss='o', homogeneous=False)
method_name(kiss=None, homogeneous=False)
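`JsonGenerator` and `ConfJson` above belong to the sample's own package, so the script is not runnable on its own. As a standalone illustration of the same idea (a per-level config object driving recursive JSON generation), here is a minimal stdlib-only sketch; all names in it are hypothetical stand-ins, not the package's API:

```python
import json
import random

def generate(conf):
    """Recursively build a dict from a per-level config.

    `conf` uses hypothetical keys mirroring ConfJson's spirit:
    nb_string (leaf string fields), nb_json (nested objects) and
    conf (the config applied to each nested object).
    """
    node = {}
    for i in range(conf.get("nb_string", 0)):
        node[f"string_{i}"] = "".join(random.choices("abcdef", k=6))
    child_conf = conf.get("conf") or {}
    for i in range(conf.get("nb_json", 0)):
        node[f"json_{i}"] = generate(child_conf)
    return node

# Mirrors the "1 level" call above: no strings at the root,
# two nested objects, each holding three string fields.
doc = generate({"nb_string": 0, "nb_json": 2, "conf": {"nb_string": 3}})
print(json.dumps(doc, indent=2))
```

The depth of nesting is controlled entirely by how many `conf` levels the caller chains together, which is the pattern the 1/2/3-level calls above exercise.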
# ===== codes/models/__init__.py | makni-mehdi/federated-swag | BSD-2-Clause =====
from .resnet18 import *
from .lenet5 import *
from .regression_net import *
from .reducedLeNet5 import *
# ===== test/Test_Edit_Group.py | MaximM27/Pythonfortesting_trainig | Apache-2.0 =====
from model.group import Group
def test_edit_first_group(app):
app.group.edit_first_group(Group(name="group1", header="group2", footer="group3"))
# ===== historical_robots/__init__.py | alexlitel/historical-robots-txt-parser | MIT =====
from .parser import parse_robots_txt
from .scraper import historical_scraper
# __all__ entries must be strings, otherwise `from historical_robots import *` raises TypeError
__all__ = ["parse_robots_txt", "historical_scraper"]
# ===== andres@programo.ual.es/figurePCA.py | andresmasegosa/PRML-CoreSets | MIT =====
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans
import inferpy as inf
from sklearn import metrics
from datareduction.bayesian_pca_DR import BayesianPCA_DR
from prml.feature_extractions import BayesianPCA, PCA
############## GENERATE DATA ########################
############################################
np.random.seed(1234)
N=2000
K=1
M=3
D=2
def create_toy_data(sample_size=100, ndim_hidden=1, ndim_observe=2, std=1.):
Z = np.random.normal(size=(sample_size, ndim_hidden))
    mu = 5  # np.random.uniform(-5, 5, size=(ndim_observe))
W = np.random.uniform(-5, 5, (ndim_hidden, ndim_observe))
print(W.T)
print(mu)
X = Z.dot(W) + mu + np.random.normal(scale=std, size=(sample_size, ndim_observe))
return X
data = create_toy_data(sample_size=N, ndim_hidden=K, ndim_observe=D, std=0.5)
np.take(data,np.random.permutation(data.shape[0]),axis=0,out=data)
N=data.shape[0]
D=data.shape[1]
x_train=data[0:int(N/2.0),:]
x_test=data[int(N/2.0):N,:]
M=1
a=-2.5
b=12.5
c=0
d=10
#plt.scatter(x_train[:,0],x_train[:,1])
plt.figure(0)
np.random.seed(1234)
bpca = BayesianPCA(n_components=1)
bpca.fit(x_train, initial="random")
print(sum(bpca.log_proba(x_test)))
print("Figure 0")
print(bpca.W)
print(bpca.var)
print(bpca.C)
plt.scatter(x_train[:, 0], x_train[:, 1])
x0, x1 = np.meshgrid(np.linspace(a, b, 1000), np.linspace(c, d, 1000))
x = np.array([x0, x1]).reshape(2, -1).T
plt.contour(x0, x1, np.exp(bpca.log_proba(x)).reshape(1000, 1000))
plt.xlim(a, b, 100)
plt.ylim(c, d, 100)
plt.gca().set_aspect('equal', adjustable='box')
plt.savefig("./figs/PCA_Artificial_TrueVI.pdf",bbox_inches='tight')
plt.figure(1)
np.random.seed(1234)
bpca_dr1 = BayesianPCA_DR(n_components=1)
bpca_dr1.fit(x_train, initial="random", n_clusters = 1, cluster_method="SS")
print("Figure 1")
print(sum(bpca_dr1.log_proba(x_test)))
print(bpca_dr1.W)
print(bpca_dr1.var)
print(bpca_dr1.C)
#plt.scatter(x_train[:, 0], x_train[:, 1], c = bpca_dr1.kmeans.labels_)
plt.scatter(x_train[:, 0], x_train[:, 1])
x0, x1 = np.meshgrid(np.linspace(a, b, 1000), np.linspace(c, d, 1000))
x = np.array([x0, x1]).reshape(2, -1).T
plt.contour(x0, x1, np.exp(bpca_dr1.log_proba(x)).reshape(1000, 1000))
plt.scatter(bpca_dr1.X_dr['X'][:,0],bpca_dr1.X_dr['X'][:,1], c='k', s=50.0, marker='+')
plt.xlim(a, b, 100)
plt.ylim(c, d, 100)
plt.gca().set_aspect('equal', adjustable='box')
plt.savefig("./figs/PCA_Artificial_SS_M_1.pdf",bbox_inches='tight')
plt.figure(2)
np.random.seed(1234)
bpca_dr1 = BayesianPCA_DR(n_components=1)
bpca_dr1.fit(x_train, initial="random", n_clusters = 5, cluster_method="SS")
print("Figure 2")
print(sum(bpca_dr1.log_proba(x_test)))
print(bpca_dr1.W)
print(bpca_dr1.var)
print(bpca_dr1.C)
plt.scatter(x_train[:, 0], x_train[:, 1], c = bpca_dr1.kmeans.labels_)
plt.scatter(x_train[:, 0], x_train[:, 1])
x0, x1 = np.meshgrid(np.linspace(a, b, 1000), np.linspace(c, d, 1000))
x = np.array([x0, x1]).reshape(2, -1).T
plt.contour(x0, x1, np.exp(bpca_dr1.log_proba(x)).reshape(1000, 1000))
plt.scatter(bpca_dr1.X_dr['X'][:,0],bpca_dr1.X_dr['X'][:,1], c='k', s=50.0, marker='+')
plt.xlim(a, b, 100)
plt.ylim(c, d, 100)
plt.gca().set_aspect('equal', adjustable='box')
plt.savefig("./figs/PCA_Artificial_SS_M_5.pdf",bbox_inches='tight')
plt.figure(3)
np.random.seed(1234)
bpca_dr2 = BayesianPCA_DR(n_components=1)
bpca_dr2.fit(x_train, initial="random", n_clusters = 1, cluster_method="NoSS")
print("Figure 3")
print(sum(bpca_dr2.log_proba(x_test)))
print(bpca_dr2.W)
print(bpca_dr2.var)
print(bpca_dr2.C)
plt.scatter(x_train[:, 0], x_train[:, 1])
x0, x1 = np.meshgrid(np.linspace(a, b, 1000), np.linspace(c, d, 1000))
x = np.array([x0, x1]).reshape(2, -1).T
plt.contour(x0, x1, np.exp(bpca_dr2.log_proba(x)).reshape(1000, 1000))
plt.scatter(bpca_dr2.X_dr['X'][:,0],bpca_dr2.X_dr['X'][:,1], c='k', s=50.0, marker='+')
plt.xlim(a, b, 100)
plt.ylim(c, d, 100)
plt.gca().set_aspect('equal', adjustable='box')
plt.savefig("./figs/PCA_Artificial_NoSS_M_1.pdf",bbox_inches='tight')
plt.figure(4)
np.random.seed(1234)
bpca_dr2 = BayesianPCA_DR(n_components=1)
bpca_dr2.fit(x_train, initial="random", n_clusters = 5, cluster_method="NoSS")
print("Figure 4")
print(sum(bpca_dr2.log_proba(x_test)))
print(bpca_dr2.W)
print(bpca_dr2.var)
print(bpca_dr2.C)
plt.scatter(x_train[:, 0], x_train[:, 1], c = bpca_dr2.kmeans.labels_)
plt.scatter(x_train[:, 0], x_train[:, 1])
x0, x1 = np.meshgrid(np.linspace(a, b, 1000), np.linspace(c, d, 1000))
x = np.array([x0, x1]).reshape(2, -1).T
plt.contour(x0, x1, np.exp(bpca_dr2.log_proba(x)).reshape(1000, 1000))
plt.scatter(bpca_dr2.X_dr['X'][:,0],bpca_dr2.X_dr['X'][:,1], c='k', s=50.0, marker='+')
plt.xlim(a, b, 100)
plt.ylim(c, d, 100)
plt.gca().set_aspect('equal', adjustable='box')
plt.savefig("./figs/PCA_Artificial_NoSS_M_5.pdf",bbox_inches='tight')
plt.figure(5)
np.random.seed(1234)
bpca_dr2 = BayesianPCA_DR(n_components=1)
bpca_dr2.fit(x_train, initial="random", n_clusters = 5, cluster_method="random")
print("Figure 5")
print(sum(bpca_dr2.log_proba(x_test)))
print(bpca_dr2.W)
print(bpca_dr2.var)
print(bpca_dr2.C)
plt.scatter(x_train[:, 0], x_train[:, 1])
x0, x1 = np.meshgrid(np.linspace(a, b, 1000), np.linspace(c, d, 1000))
x = np.array([x0, x1]).reshape(2, -1).T
plt.contour(x0, x1, np.exp(bpca_dr2.log_proba(x)).reshape(1000, 1000))
plt.scatter(bpca_dr2.X_dr['X'][:,0],bpca_dr2.X_dr['X'][:,1], c='k', s=50.0, marker='+')
plt.xlim(a, b, 100)
plt.ylim(c, d, 100)
plt.gca().set_aspect('equal', adjustable='box')
plt.savefig("./figs/PCA_Artificial_Random_M_5_1.pdf",bbox_inches='tight')
plt.figure(6)
np.random.seed(123)
bpca_dr2 = BayesianPCA_DR(n_components=1)
bpca_dr2.fit(x_train, initial="random", n_clusters = 5, cluster_method="random")
print("Figure 6")
print(sum(bpca_dr2.log_proba(x_test)))
print(bpca_dr2.W)
print(bpca_dr2.var)
print(bpca_dr2.C)
plt.scatter(x_train[:, 0], x_train[:, 1])
x0, x1 = np.meshgrid(np.linspace(a, b, 1000), np.linspace(c, d, 1000))
x = np.array([x0, x1]).reshape(2, -1).T
plt.contour(x0, x1, np.exp(bpca_dr2.log_proba(x)).reshape(1000, 1000))
plt.scatter(bpca_dr2.X_dr['X'][:,0],bpca_dr2.X_dr['X'][:,1], c='k', s=50.0, marker='+')
plt.xlim(a, b, 100)
plt.ylim(c, d, 100)
plt.gca().set_aspect('equal', adjustable='box')
plt.savefig("./figs/PCA_Artificial_Random_M_5_2.pdf",bbox_inches='tight')
#plt.show()
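`create_toy_data` above draws a latent `Z`, mixes it through a loading matrix `W` and adds Gaussian noise, i.e. the generative model X = ZW + mu + eps that the PCA variants then try to recover. The same shapes can be sketched dependency-free with plain lists (illustrative only, no numpy):

```python
import random

def toy_data(n=100, k=1, d=2, std=1.0, mu=5.0):
    # X = Z @ W + mu + noise, written out with plain lists so the
    # shapes in create_toy_data above are explicit:
    # Z is (n, k), W is (k, d), X is (n, d).
    Z = [[random.gauss(0, 1) for _ in range(k)] for _ in range(n)]
    W = [[random.uniform(-5, 5) for _ in range(d)] for _ in range(k)]
    X = [
        [
            sum(Z[i][h] * W[h][j] for h in range(k)) + mu + random.gauss(0, std)
            for j in range(d)
        ]
        for i in range(n)
    ]
    return X

X = toy_data(n=2000, k=1, d=2, std=0.5)
print(len(X), len(X[0]))  # 2000 2
```

With k=1 and d=2 the data lies near a line in the plane, which is why a one-component Bayesian PCA fit (as in every figure above) can capture it.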
# ===== Codewars/difference between years/difference_between_years.py | adoreblvnk/code_solutions | MIT =====
def how_many_years(date1, date2):
return abs(int(date1[:4]) - int(date2[:4]))
# solution
def how_many_years_solution(date1, date2):
return abs(int(date1.split('/')[0]) - int(date2.split('/')[0]))
print(how_many_years('1997/10/10', '2015/10/10'))
# ===== lpthw/deleteme.py | lpan1111/lpthw | MIT =====
print('please delete me')
# ===== annotater/memetext/utils.py | stricoff92/annotater | MIT =====
import logging
from annotater.script_logger import spawn_logger
def get_annotation_logger(level=logging.INFO):
return spawn_logger("testannotation", level=level, file_per_day=True)
# ===== wrappers/__init__.py | mindpowered/car-loan-calculator-python | MIT =====
from .CarLoanCalculator import *
# ===== main/Interpolation.py | robop/xjob | Apache-2.0 =====
# -*- coding: utf-8 -*-
from math import exp, log
def SolveTriDiagonal(a, b, c, r):
"""Solve a tri-diagonal system of equations with a,b,c vectors of the diagonal elements
b lies on the diagonal, a is below and c above."""
n = len(b)
result = [0.0 for i in range(n)]
temp = [0.0 for i in range(n)]
btemp = b[0]
result[0] = r[0] / btemp
# Forward Substitution
for i in range(1, n):
temp[i] = c[i - 1] / btemp
btemp = b[i] - a[i] * temp[i]
        if btemp == 0.0:
            # A zero pivot means the forward sweep cannot continue.
            raise ZeroDivisionError(f"Error in tridiagonal solver: zero pivot at row {i}")
result[i] = (r[i] - a[i] * result[i - 1]) / btemp
# Backward substitution
i = n - 2
while i >= 0:
        result[i] -= temp[i + 1] * result[i + 1]
i -= 1
return result
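A quick sanity check of the Thomas recurrence used by `SolveTriDiagonal`: for a symmetric tridiagonal system with diagonal 2 and off-diagonals 1, the right-hand side [4, 8, 8] has the exact solution [1, 2, 3]. The solver body is restated inline below (same vector convention: `a[0]` and `c[-1]` unused) so the snippet runs on its own:

```python
def solve_tridiagonal(a, b, c, r):
    # b is the diagonal, a the sub-diagonal (a[0] unused),
    # c the super-diagonal (c[-1] unused) -- as in SolveTriDiagonal above.
    n = len(b)
    result = [0.0] * n
    temp = [0.0] * n
    btemp = b[0]
    result[0] = r[0] / btemp
    for i in range(1, n):                      # forward elimination
        temp[i] = c[i - 1] / btemp
        btemp = b[i] - a[i] * temp[i]
        result[i] = (r[i] - a[i] * result[i - 1]) / btemp
    for i in range(n - 2, -1, -1):             # back substitution
        result[i] -= temp[i + 1] * result[i + 1]
    return result

# 2x + y = 4;  x + 2y + z = 8;  y + 2z = 8  ->  x, y, z = 1, 2, 3
x = solve_tridiagonal([0, 1, 1], [2, 2, 2], [1, 1, 0], [4, 8, 8])
print(x)  # approximately [1.0, 2.0, 3.0]
```

This O(n) elimination is what makes cubic-spline construction (`RPrimePrime` below) cheap compared to a general dense solve.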
def HermiteMi(t, ti, tiplus):
"""Spline coefficient in Hermite Interpolation"""
return (t - ti) / (tiplus - ti)
def HermiteGi(ti, tiplus, riprime, ri, riplus):
"""Spline coefficient in Hermite Interpolation"""
return (tiplus - ti) * riprime - (riplus - ri)
def HermiteCi(ri, riplus, ti, tiplus, riprime, riprimeplus):
"""Spline coefficient in Hermite Interpolation"""
return 2 * (riplus - ri) - (tiplus - ti) * (riprime + riprimeplus)
def RiPrimeIntermediate(timinus, ti, tiplus, ri, riplus, riminus):
"""Gradient vector in Hermite Interpolation: General Case"""
term1 = (ri - riminus) * (tiplus - ti) / (ti - timinus)
term2 = (riplus - ri) * (ti - timinus) / (tiplus - ti)
return (term1 + term2) / (tiplus - timinus)
def RiPrimeFirst(tVec, rVec):
"""Gradient vector in Hermite Interpolation: Special Case, first interval"""
term1 = (rVec[1] - rVec[0]) * (tVec[2] + tVec[1] - 2 * tVec[0]) / (tVec[1] - tVec[0])
term2 = (rVec[2] - rVec[1]) * (tVec[1] - tVec[0]) / (tVec[2] - tVec[1])
return (term1 - term2) / (tVec[2] - tVec[0])
def RiPrimeLast(tVec, rVec):
"""Gradient vector in Hermite Interpolation: Special Case, last interval"""
n = len(tVec)
term1 = (rVec[n - 2] - rVec[n - 3]) * (tVec[n - 1] - tVec[n - 2]) / (tVec[n - 2] - tVec[n - 3])
term2 = (rVec[n - 1] - rVec[n - 2]) * (2 * tVec[n - 1] - tVec[n - 2] - tVec[n - 3]) / (tVec[n - 1] - tVec[n - 2])
return -(term1 - term2) / (tVec[n - 1] - tVec[n - 3])
def CubicA(t, ti, tiplus):
"""Spline coefficient in Cubic Spline Interpolation"""
return (tiplus - t) / (tiplus - ti)
def CubicB(t, ti, tiplus):
"""Spline coefficient in Cubic Spline Interpolation"""
return 1 - CubicA(t, ti, tiplus)
def CubicC(t, ti, tiplus):
"""Spline coefficient in Cubic Spline Interpolation"""
return (CubicA(t, ti, tiplus) ** 3 - CubicA(t, ti, tiplus)) * (tiplus - ti) * (tiplus - ti) / 6.0
def CubicD(t, ti, tiplus):
"""Spline coefficient in Cubic Spline Interpolation"""
return (CubicB(t, ti, tiplus) ** 3 - CubicB(t, ti, tiplus)) * (tiplus - ti) * (tiplus - ti) / 6.0
def RPrimePrime(points, values):
"""Second derivative vector in Cubic Spline Interpolation: Involves solving a tridiagonal system of equations"""
n = len(points)
a = [(points[i] - points[i - 1]) / 6 for i in range(1, n - 1)]
b = [(points[i + 1] - points[i - 1]) / 3 for i in range(1, n - 1)]
c = [(points[i + 1] - points[i]) / 6 for i in range(1, n - 1)]
RHS = [(values[i + 1] - values[i]) / (points[i + 1] - points[i]) - (values[i] - values[i - 1]) / (
points[i] - points[i - 1]) for i in range(1, n - 1)]
result = [0.0 for i in range(n)]
result[1:n - 1] = SolveTriDiagonal(a, b, c, RHS)
return result
def RTPrimePrime(points, values):
"""Second derivative vector in Cubic Spline Interpolation: Involves solving a Tri-Diagonal system of equations"""
n = len(points)
a = [(points[i] - points[i - 1]) / 6 for i in range(1, n - 1)]
b = [(points[i + 1] - points[i - 1]) / 3 for i in range(1, n - 1)]
c = [(points[i + 1] - points[i]) / 6 for i in range(1, n - 1)]
RHS = [(values[i + 1] * points[i + 1] - values[i] * points[i]) / (points[i + 1] - points[i]) - (
values[i] * points[i] - values[i - 1] * points[i - 1]) / (points[i] - points[i - 1]) for i in range(1, n - 1)]
result = [0.0 for i in range(n)]
result[1:n - 1] = SolveTriDiagonal(a, b, c, RHS)
return result
def FindPosition(point, points):
"""Determines the position of point in the vector points"""
if point < points[0]:
return -1
for i in range(len(points) - 1):
if point < points[i + 1]:
return i
return len(points)
def LinearInterpolation(point, points, values, extrapolationBack="Flat", extrapolationForw="Flat"):
""" LinearInterpolation(point, points, values) """
n = len(points)
i = FindPosition(point, points)
if i == -1:
if extrapolationBack == "Flat":
#print("flat")
return values[0]
elif extrapolationBack == "Linear":
#print("linear")
coeff = (values[1] - values[0]) / (points[1] - points[0])
value = values[0] + (point - points[0]) * coeff
return value
elif i == len(values):
if extrapolationForw == "Flat":
#print( "extra" )
return values[len(values) - 1]
elif extrapolationForw == "Linear":
n = len(values) - 1
#print("linear")
coeff = (values[n] - values[n - 1]) / (points[n] - points[n - 1])
value = values[n] + (point - points[n]) * coeff
return value
coeff = (values[i + 1] - values[i]) / (points[i + 1] - points[i])
value = values[i] + (point - points[i]) * coeff
    # print("point", point, "between", points[i], points[i + 1], "values", values[i], values[i + 1], "value", value)
return value
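The extrapolation branches of `LinearInterpolation` can be exercised with a compact standalone rewrite (the logic is restated so the snippet runs on its own; `lerp` and its parameter names are illustrative, not the module's API):

```python
def lerp(point, points, values, back="Flat", forw="Flat"):
    # Mirrors LinearInterpolation above: piecewise-linear inside the
    # grid, flat or linear continuation outside it.
    n = len(points)
    if point < points[0]:
        if back == "Flat":
            return values[0]
        slope = (values[1] - values[0]) / (points[1] - points[0])
        return values[0] + (point - points[0]) * slope
    if point >= points[-1]:
        if forw == "Flat":
            return values[-1]
        slope = (values[-1] - values[-2]) / (points[-1] - points[-2])
        return values[-1] + (point - points[-1]) * slope
    i = max(k for k in range(n - 1) if points[k] <= point)
    slope = (values[i + 1] - values[i]) / (points[i + 1] - points[i])
    return values[i] + (point - points[i]) * slope

pts, vals = [0.0, 1.0, 2.0], [0.0, 10.0, 40.0]
print(lerp(0.5, pts, vals))                  # 5.0  (inside the grid)
print(lerp(3.0, pts, vals))                  # 40.0 (flat extrapolation)
print(lerp(3.0, pts, vals, forw="Linear"))   # 70.0 (linear extrapolation)
```

The same Flat/Linear branching is repeated verbatim in every interpolator below (log-linear, Hermite, cubic spline); only the in-grid formula changes.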
def LogLinearInterpolation(point, points, values, extrapolationBack="Flat", extrapolationForw="Flat"):
"""LinearInterpolation(point, points, values)"""
n = len(points)
i = FindPosition(point, points)
if i == -1:
if extrapolationBack == "Flat":
return values[0]
elif extrapolationBack == "Linear":
coeff = (values[1] - values[0]) / (points[1] - points[0])
value = values[0] + (point - points[0]) * coeff
return value
elif i == len(values):
if extrapolationForw == "Flat":
return values[len(values) - 1]
elif extrapolationForw == "Linear":
n = len(values) - 1
coeff = (values[n] - values[n - 1]) / (points[n] - points[n - 1])
value = values[n] + (point - points[n]) * coeff
return value
coeff = (log(values[i + 1]) - log(values[i])) / (points[i + 1] - points[i])
value = log(values[i]) + (point - points[i]) * coeff
return exp(value)
def LogDiscountLinearInterpolation(point, points, values, extrapolationBack="Flat", extrapolationForw="Flat"):
"""LinearInterpolation(point, points, values)"""
n = len(points)
i = FindPosition(point, points)
if i == -1:
if extrapolationBack == "Flat":
return values[0]
elif extrapolationBack == "Linear":
coeff = (values[1] - values[0]) / (points[1] - points[0])
value = values[0] + (point - points[0]) * coeff
return value
elif i == len(values):
if extrapolationForw == "Flat":
return values[len(values) - 1]
elif extrapolationForw == "Linear":
n = len(values) - 1
coeff = (values[n] - values[n - 1]) / (points[n] - points[n - 1])
value = values[n] + (point - points[n]) * coeff
return value
coeff = (values[i + 1] * points[i + 1] - values[i] * points[i]) / (points[i + 1] - points[i])
value = values[i] * points[i] + (point - points[i]) * coeff
return value / point
def HermiteInterpolation(point, points, values, extrapolationBack="Flat", extrapolationForw="Flat"):
"""HermiteInterpolation(point, points, values)"""
n = len(points)
i = FindPosition(point, points)
if i == -1:
if extrapolationBack == "Flat":
return values[0]
elif extrapolationBack == "Linear":
coeff = (values[1] - values[0]) / (points[1] - points[0])
value = values[0] + (point - points[0]) * coeff
return value
elif i == len(values):
if extrapolationForw == "Flat":
return values[len(values) - 1]
elif extrapolationForw == "Linear":
n = len(values) - 1
coeff = (values[n] - values[n - 1]) / (points[n] - points[n - 1])
value = values[n] + (point - points[n]) * coeff
return value
rPrime = [RiPrimeFirst(points, values)]
for k in range(1, n - 1):
rPrime.append(
RiPrimeIntermediate(points[k - 1], points[k], points[k + 1], values[k], values[k + 1], values[k - 1]))
rPrime.append(RiPrimeLast(points, values))
m = HermiteMi(point, points[i], points[i + 1])
g = HermiteGi(points[i], points[i + 1], rPrime[i], values[i], values[i + 1])
c = HermiteCi(values[i], values[i + 1], points[i], points[i + 1], rPrime[i], rPrime[i + 1])
value = values[i] + m * (values[i + 1] - values[i]) + m * (1 - m) * g + m * m * (1 - m) * c
return value
def HermiteLogDiscountInterpolation(point, points, values, extrapolationBack="Flat", extrapolationForw="Flat"):
"""HermiteInterpolation(point, points, values)"""
n = len(points)
i = FindPosition(point, points)
if i == -1:
if extrapolationBack == "Flat":
return values[0]
elif extrapolationBack == "Linear":
coeff = (values[1] - values[0]) / (points[1] - points[0])
value = values[0] + (point - points[0]) * coeff
return value
elif i == len(values):
if extrapolationForw == "Flat":
return values[len(values) - 1]
elif extrapolationForw == "Linear":
n = len(values) - 1
coeff = (values[n] - values[n - 1]) / (points[n] - points[n - 1])
value = values[n] + (point - points[n]) * coeff
return value
rPrime = [RiPrimeFirst(points, values)]
for k in range(1, n - 1):
rPrime.append(
RiPrimeIntermediate(points[k - 1], points[k], points[k + 1], values[k], values[k + 1], values[k - 1]))
rPrime.append(RiPrimeLast(points, values))
m = HermiteMi(point, points[i], points[i + 1])
g = HermiteGi(points[i], points[i + 1], rPrime[i], values[i], values[i + 1])
c = HermiteCi(values[i], values[i + 1], points[i], points[i + 1], rPrime[i], rPrime[i + 1])
value = values[i] + m * (values[i + 1] - values[i]) + m * (1 - m) * g + m * m * (1 - m) * c
return value
def CubicSplineInterpolation(point, points, values, extrapolationBack="Flat", extrapolationForw="Flat"):
i = FindPosition(point, points)
if i == -1:
if extrapolationBack == "Flat":
return values[0]
elif extrapolationBack == "Linear":
coeff = (values[1] - values[0]) / (points[1] - points[0])
value = values[0] + (point - points[0]) * coeff
return value
elif i == len(values):
if extrapolationForw == "Flat":
return values[len(values) - 1]
elif extrapolationForw == "Linear":
n = len(values) - 1
coeff = (values[n] - values[n - 1]) / (points[n] - points[n - 1])
value = values[n] + (point - points[n]) * coeff
return value
valuesPrimePrime = RPrimePrime(points, values)
A = CubicA(point, points[i], points[i + 1])
B = CubicB(point, points[i], points[i + 1])
C = CubicC(point, points[i], points[i + 1])
D = CubicD(point, points[i], points[i + 1])
value = A * values[i] + B * values[i + 1] + C * valuesPrimePrime[i] + D * valuesPrimePrime[i + 1]
return value
def CubicSplineLogDiscountInterpolation(point, points, values, extrapolationBack="Flat", extrapolationForw="Flat"):
i = FindPosition(point, points)
if i == -1:
if extrapolationBack == "Flat":
return values[0]
elif extrapolationBack == "Linear":
coeff = (values[1] - values[0]) / (points[1] - points[0])
value = values[0] + (point - points[0]) * coeff
return value
elif i == len(values):
if extrapolationForw == "Flat":
return values[len(values) - 1]
elif extrapolationForw == "Linear":
n = len(values) - 1
coeff = (values[n] - values[n - 1]) / (points[n] - points[n - 1])
value = values[n] + (point - points[n]) * coeff
return value
valuesPrimePrime = RTPrimePrime(points, values)
A = CubicA(point, points[i], points[i + 1])
B = CubicB(point, points[i], points[i + 1])
C = CubicC(point, points[i], points[i + 1])
D = CubicD(point, points[i], points[i + 1])
value = A * values[i] * points[i] + B * values[i + 1] * points[i + 1] + C * valuesPrimePrime[i] + D * \
valuesPrimePrime[
i + 1]
    return value / point
# ===== sdk/python/pulumi_aws_native/lightsail/__init__.py | pulumi/pulumi-aws-native | Apache-2.0 =====
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
from .. import _utilities
import typing
# Export this package's modules as members:
from ._enums import *
from .alarm import *
from .bucket import *
from .certificate import *
from .container import *
from .database import *
from .disk import *
from .distribution import *
from .get_alarm import *
from .get_bucket import *
from .get_certificate import *
from .get_container import *
from .get_database import *
from .get_disk import *
from .get_distribution import *
from .get_instance import *
from .get_load_balancer import *
from .get_load_balancer_tls_certificate import *
from .get_static_ip import *
from .instance import *
from .load_balancer import *
from .load_balancer_tls_certificate import *
from .static_ip import *
from ._inputs import *
from . import outputs
| 28.606061 | 80 | 0.76589 | 136 | 944 | 5.139706 | 0.382353 | 0.343348 | 0.204578 | 0.071531 | 0.157368 | 0.103004 | 0 | 0 | 0 | 0 | 0 | 0.001253 | 0.154661 | 944 | 32 | 81 | 29.5 | 0.874687 | 0.215042 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
65aea2bc7b3b6d0860c7948e20e622c6673f00ec | 9,462 | py | Python | tests/system/action/mediafile/test_move.py | MJJojo97/openslides-backend | af0d1edb0070e352d46f285a1ba0bbe3702d49ae | [
"MIT"
] | null | null | null | tests/system/action/mediafile/test_move.py | MJJojo97/openslides-backend | af0d1edb0070e352d46f285a1ba0bbe3702d49ae | [
"MIT"
] | null | null | null | tests/system/action/mediafile/test_move.py | MJJojo97/openslides-backend | af0d1edb0070e352d46f285a1ba0bbe3702d49ae | [
"MIT"
] | null | null | null | from typing import Any, Dict
from openslides_backend.permissions.management_levels import OrganizationManagementLevel
from openslides_backend.permissions.permissions import Permissions
from tests.system.action.base import BaseActionTestCase
class MediafileMoveActionTest(BaseActionTestCase):
def setUp(self) -> None:
super().setUp()
self.permission_test_model: Dict[str, Dict[str, Any]] = {
"mediafile/7": {"owner_id": "meeting/1", "is_directory": True},
"mediafile/8": {"owner_id": "meeting/1", "is_directory": True},
}
def test_move_parent_none(self) -> None:
self.set_models(
{
"meeting/222": {
"name": "name_SNLGsvIV",
"is_active_in_organization_id": 1,
},
"mediafile/7": {
"title": "title_7",
"owner_id": "meeting/222",
"parent_id": None,
"child_ids": [8, 9],
},
"mediafile/8": {
"title": "title_8",
"owner_id": "meeting/222",
"parent_id": 7,
"child_ids": [],
},
"mediafile/9": {
"title": "title_9",
"owner_id": "meeting/222",
"parent_id": 7,
"child_ids": [10],
},
"mediafile/10": {
"title": "title_10",
"owner_id": "meeting/222",
"parent_id": 9,
"child_ids": [],
},
}
)
response = self.request(
"mediafile.move",
{"owner_id": "meeting/222", "ids": [8, 9], "parent_id": None},
)
self.assert_status_code(response, 200)
mediafile_7 = self.get_model("mediafile/7")
assert mediafile_7.get("child_ids") == []
assert mediafile_7.get("parent_id") is None
mediafile_8 = self.get_model("mediafile/8")
assert mediafile_8.get("child_ids") == []
assert mediafile_8.get("parent_id") is None
assert mediafile_8.get("is_public")
mediafile_9 = self.get_model("mediafile/9")
assert mediafile_9.get("child_ids") == [10]
assert mediafile_9.get("parent_id") is None
assert mediafile_9.get("is_public")
mediafile_10 = self.get_model("mediafile/10")
assert mediafile_10.get("is_public")
assert mediafile_10.get("inherited_access_group_ids") == []
def test_move_parent_set(self) -> None:
self.set_models(
{
"meeting/222": {
"name": "name_SNLGsvIV",
"is_active_in_organization_id": 1,
},
"mediafile/7": {
"title": "title_7",
"owner_id": "meeting/222",
"parent_id": None,
"child_ids": [],
"is_directory": True,
"is_public": True,
"inherited_access_group_ids": [],
},
"mediafile/8": {
"title": "title_8",
"owner_id": "meeting/222",
"parent_id": None,
"child_ids": [],
},
"mediafile/9": {
"title": "title_9",
"owner_id": "meeting/222",
"parent_id": None,
"child_ids": [],
},
}
)
response = self.request(
"mediafile.move", {"owner_id": "meeting/222", "ids": [8, 9], "parent_id": 7}
)
self.assert_status_code(response, 200)
mediafile_7 = self.get_model("mediafile/7")
assert mediafile_7.get("child_ids") == [8, 9]
assert mediafile_7.get("parent_id") is None
mediafile_8 = self.get_model("mediafile/8")
assert mediafile_8.get("child_ids") == []
assert mediafile_8.get("parent_id") == 7
assert mediafile_8.get("inherited_access_group_ids") == []
assert mediafile_8.get("is_public")
mediafile_9 = self.get_model("mediafile/9")
assert mediafile_9.get("child_ids") == []
assert mediafile_9.get("parent_id") == 7
assert mediafile_9.get("is_public")
assert mediafile_9.get("inherited_access_group_ids") == []
def test_move_non_directory_parent_set(self) -> None:
self.set_models(
{
"meeting/222": {
"name": "name_SNLGsvIV",
"is_active_in_organization_id": 1,
},
"mediafile/7": {
"title": "title_7",
"owner_id": "meeting/222",
"parent_id": None,
"child_ids": [],
"is_directory": False,
},
"mediafile/8": {
"title": "title_8",
"owner_id": "meeting/222",
"parent_id": None,
"child_ids": [],
},
"mediafile/9": {
"title": "title_9",
"owner_id": "meeting/222",
"parent_id": None,
"child_ids": [],
},
}
)
response = self.request(
"mediafile.move", {"owner_id": "meeting/222", "ids": [8, 9], "parent_id": 7}
)
self.assert_status_code(response, 400)
self.assertIn("Parent is not a directory.", response.json["message"])
def test_move_multiple_action_data_items(self) -> None:
self.set_models(
{
"meeting/222": {"is_active_in_organization_id": 1},
"mediafile/7": {"owner_id": "meeting/222", "is_directory": True},
"mediafile/8": {"owner_id": "meeting/222", "is_directory": True},
}
)
response = self.request_multi(
"mediafile.move",
[
{"owner_id": "meeting/222", "ids": [8], "parent_id": 7},
{"owner_id": "meeting/222", "ids": [7], "parent_id": 8},
],
)
self.assert_status_code(response, 400)
mediafile_7 = self.get_model("mediafile/7")
assert mediafile_7.get("parent_id") is None
mediafile_8 = self.get_model("mediafile/8")
assert mediafile_8.get("parent_id") is None
def test_move_owner_mismatch(self) -> None:
self.set_models(
{
"meeting/222": {"is_active_in_organization_id": 1},
"mediafile/7": {"owner_id": "meeting/222", "is_directory": True},
"mediafile/8": {"owner_id": "meeting/222", "is_directory": True},
}
)
response = self.request_multi(
"mediafile.move",
[
{"owner_id": "organization/1", "ids": [8], "parent_id": 7},
],
)
self.assert_status_code(response, 400)
assert "Owner and parent don't match." in response.json["message"]
def test_move_circle(self) -> None:
self.set_models(
{
"meeting/222": {"is_active_in_organization_id": 1},
"mediafile/7": {
"owner_id": "meeting/222",
"is_directory": True,
"child_ids": [8],
},
"mediafile/8": {
"owner_id": "meeting/222",
"is_directory": True,
"parent_id": 7,
},
}
)
response = self.request(
"mediafile.move", {"owner_id": "meeting/222", "ids": [7], "parent_id": 8}
)
self.assert_status_code(response, 400)
self.assertIn(
"Moving item 7 to one of its children is not possible.",
response.json["message"],
)
def test_move_no_permissions(self) -> None:
self.base_permission_test(
self.permission_test_model,
"mediafile.move",
{"owner_id": "meeting/1", "ids": [8], "parent_id": 7},
)
def test_move_permissions(self) -> None:
self.base_permission_test(
self.permission_test_model,
"mediafile.move",
{"owner_id": "meeting/1", "ids": [8], "parent_id": 7},
Permissions.Mediafile.CAN_MANAGE,
)
def test_move_no_permissions_orga(self) -> None:
self.permission_test_model["mediafile/7"]["owner_id"] = "organization/1"
self.permission_test_model["mediafile/8"]["owner_id"] = "organization/1"
self.base_permission_test(
self.permission_test_model,
"mediafile.move",
{"owner_id": "organization/1", "ids": [8], "parent_id": 7},
)
def test_move_permissions_orga(self) -> None:
self.permission_test_model["mediafile/7"]["owner_id"] = "organization/1"
self.permission_test_model["mediafile/8"]["owner_id"] = "organization/1"
self.base_permission_test(
self.permission_test_model,
"mediafile.move",
{"owner_id": "organization/1", "ids": [8], "parent_id": 7},
OrganizationManagementLevel.CAN_MANAGE_ORGANIZATION,
)
| 38.307692 | 88 | 0.491439 | 940 | 9,462 | 4.659574 | 0.098936 | 0.05274 | 0.083105 | 0.085388 | 0.832192 | 0.809132 | 0.760274 | 0.74589 | 0.706849 | 0.687215 | 0 | 0.039886 | 0.372014 | 9,462 | 246 | 89 | 38.463415 | 0.69724 | 0 | 0 | 0.575107 | 0 | 0 | 0.252801 | 0.028747 | 0 | 0 | 0 | 0 | 0.133047 | 1 | 0.04721 | false | 0 | 0.017167 | 0 | 0.06867 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
65af9e9e2a78052480b0bd68f5b6f792d821c6c0 | 3,930 | py | Python | reports/migrations/0001_initial.py | Igorishe/Report_Traker | 886da5d5dd40247779a76611cf6b66cb95963ad7 | [
"MIT"
] | null | null | null | reports/migrations/0001_initial.py | Igorishe/Report_Traker | 886da5d5dd40247779a76611cf6b66cb95963ad7 | [
"MIT"
] | null | null | null | reports/migrations/0001_initial.py | Igorishe/Report_Traker | 886da5d5dd40247779a76611cf6b66cb95963ad7 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.3 on 2021-08-11 12:44
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='MobinetReport',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('text', models.TextField(help_text='Пункт отчета', verbose_name='Текст репорта')),
('date', models.DateTimeField(auto_now_add=True, verbose_name='Дата создания')),
('author', models.PositiveIntegerField(blank=True, verbose_name='Автор репорта')),
('author_name', models.CharField(blank=True, max_length=20, verbose_name='Логин автора')),
('status', models.CharField(choices=[('New', 'New'), ('Closed', 'Closed'), ('Actual', 'Actual')], default='New', max_length=12, verbose_name='Статус')),
('tag', models.CharField(choices=[('Normal', 'Normal'), ('Burning', 'Burning'), ('Forgotten', 'Forgotten'), ('Delayed', 'Delayed')], default='Normal', max_length=12, verbose_name='Тэг')),
],
options={
'verbose_name': 'Отчет MN',
'verbose_name_plural': 'Отчеты MN',
},
),
migrations.CreateModel(
name='MoneyBack',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('text', models.TextField(help_text='Пункт отчета', verbose_name='Текст репорта')),
('date', models.DateTimeField(auto_now_add=True, verbose_name='Дата создания')),
('author', models.PositiveIntegerField(blank=True, verbose_name='Автор репорта')),
('author_name', models.CharField(blank=True, max_length=20, verbose_name='Логин автора')),
('status', models.CharField(choices=[('New', 'New'), ('Closed', 'Closed'), ('Actual', 'Actual')], default='New', max_length=12, verbose_name='Статус')),
('tag', models.CharField(choices=[('Normal', 'Normal'), ('Burning', 'Burning'), ('Forgotten', 'Forgotten'), ('Delayed', 'Delayed')], default='Normal', max_length=12, verbose_name='Тэг')),
('value', models.DecimalField(decimal_places=2, max_digits=10, verbose_name='Сумма возврата')),
('wallet', models.CharField(blank=True, max_length=50, verbose_name='Кошелек получателя')),
('link', models.CharField(max_length=50, verbose_name='Ссылка на пользователя')),
],
options={
'verbose_name': 'Возврат',
'verbose_name_plural': 'Возвраты',
},
),
migrations.CreateModel(
name='Report',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('text', models.TextField(help_text='Пункт отчета', verbose_name='Текст репорта')),
('date', models.DateTimeField(auto_now_add=True, verbose_name='Дата создания')),
('author', models.PositiveIntegerField(blank=True, verbose_name='Автор репорта')),
('author_name', models.CharField(blank=True, max_length=20, verbose_name='Логин автора')),
('status', models.CharField(choices=[('New', 'New'), ('Closed', 'Closed'), ('Actual', 'Actual')], default='New', max_length=12, verbose_name='Статус')),
('tag', models.CharField(choices=[('Normal', 'Normal'), ('Burning', 'Burning'), ('Forgotten', 'Forgotten'), ('Delayed', 'Delayed')], default='Normal', max_length=12, verbose_name='Тэг')),
],
options={
'verbose_name': 'Отчет RS',
'verbose_name_plural': 'Отчеты RS',
},
),
]
| 59.545455 | 203 | 0.586768 | 393 | 3,930 | 5.704835 | 0.246819 | 0.14719 | 0.040143 | 0.048171 | 0.788136 | 0.772525 | 0.757806 | 0.757806 | 0.757806 | 0.757806 | 0 | 0.013409 | 0.240967 | 3,930 | 65 | 204 | 60.461538 | 0.738183 | 0.01145 | 0 | 0.62069 | 1 | 0 | 0.221478 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.017241 | 0 | 0.086207 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
02f4856d2a2023b8575ed69f22b9bf05e7872bde | 37 | py | Python | strategy/__init__.py | hilearn/ai-game | 5eead5964fc9a4481317402374b13109e09f56c2 | [
"MIT"
] | null | null | null | strategy/__init__.py | hilearn/ai-game | 5eead5964fc9a4481317402374b13109e09f56c2 | [
"MIT"
] | 3 | 2021-10-03T08:46:08.000Z | 2021-10-04T18:14:56.000Z | strategy/__init__.py | hilearn/ai-game | 5eead5964fc9a4481317402374b13109e09f56c2 | [
"MIT"
] | null | null | null | from .strategy import RandomStrategy
| 18.5 | 36 | 0.864865 | 4 | 37 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
02fb683cd40f9e89ad2c362a5e3e47fafc5b8f8b | 10,453 | py | Python | thumt/data/multi30k.py | MaxyLee/THUMT | 4c449fe41a682c62f80ff843f2e8c9e1e8053c12 | [
"BSD-3-Clause"
] | null | null | null | thumt/data/multi30k.py | MaxyLee/THUMT | 4c449fe41a682c62f80ff843f2e8c9e1e8053c12 | [
"BSD-3-Clause"
] | null | null | null | thumt/data/multi30k.py | MaxyLee/THUMT | 4c449fe41a682c62f80ff843f2e8c9e1e8053c12 | [
"BSD-3-Clause"
] | null | null | null | import torch
import pickle
import skimage.io as io
from PIL import Image
from tqdm import tqdm
from torch.utils.data import Dataset
from thumt.data.pipeline import _sort_input_file
from thumt.tokenizers import WhiteSpaceTokenizer
def get_infer_dataset(filename, params, model_name, preprocess, dtype, raw=False):
sorted_key, sorted_data = _sort_input_file(filename)
split = filename.split('/')[-1].split('.')[0]
sorted_keys = {v:k for k,v in sorted_key.items()}
if model_name == 'visual_prefix_transformer_v2' or model_name == 'visual_prefix_transformer_v4':
dataset = M30kDatasetv2(sorted_data, params.img_input,
params.vocabulary, params.device, preprocess,
dtype, params.max_length, params.bos, params.eos,
params.pad, params.unk, split, sorted_keys)
else:
dataset = M30kDataset(sorted_data, params.img_input,
params.vocabulary, params.device,
params.max_length, params.bos, params.eos,
params.pad, params.unk, split, sorted_keys, raw)
return sorted_key, dataset
class M30kDataset(Dataset):
def __init__(self,
txt_input,
img_input,
vocab,
device,
seq_len=64,
bos=b'<bos>',
eos=b'<eos>',
pad=b'<pad>',
unk=b'<unk>',
split='train',
sorted_keys=None,
raw=False,
fewshot_name=None
):
self.bos = bos
self.eos = eos
self.pad = pad
self.unk = unk
self.seq_len = seq_len
self.split = split
self.device = device
self.tokenizer = WhiteSpaceTokenizer()
self.src_vocab = vocab['source']
self.tgt_vocab = vocab['target']
self.sorted_keys = sorted_keys
self.raw = raw
self.fewshot_name = fewshot_name
if fewshot_name is not None:
print(f'Using few-shot setting: {fewshot_name}')
self.pad_id = self.src_vocab[pad]
self.unk_id = self.src_vocab[unk]
if sorted_keys is not None:
self.src_txt, self.src_raw = self.load_text(txt_input, self.src_vocab, None, self.eos)
else:
self.src_txt, self.src_raw = self.load_text(txt_input[0], self.src_vocab, None, self.eos)
if split == 'train':
self.tgt_txt, self.tgt_raw = self.load_text(txt_input[1], self.tgt_vocab, self.bos, None)
self.lbl_txt, self.lbl_raw = self.load_text(txt_input[1], self.tgt_vocab, None, self.eos)
self.img_ids, self.img_features = self.load_image_features(img_input[0], img_input[1])
assert len(self.src_txt) == len(self.img_ids)
def __len__(self):
return len(self.src_txt)
def __getitem__(self, idx):
src_seq = torch.tensor(self.src_txt[idx])
src_mask = (src_seq != self.pad_id).float()
src_raw = self.src_raw[idx]
# if the dataset is sorted
if self.sorted_keys is not None:
idx = self.sorted_keys[idx]
img_id = self.img_ids[idx]
img_feature = self.img_features[img_id]
features = {
"img_feature": img_feature.cuda(self.device).float(),
"source": src_seq.cuda(self.device),
"source_mask": src_mask.cuda(self.device)
}
if self.raw:
features.update({
"raw_source": src_raw,
"imgid": img_id,
})
if self.split == 'train':
tgt_seq = torch.tensor(self.tgt_txt[idx])
lbl_seq = torch.tensor(self.lbl_txt[idx])
tgt_mask = (tgt_seq != self.pad_id).float()
features.update({
"target": tgt_seq.cuda(self.device),
"target_mask": tgt_mask.cuda(self.device)
})
return features, lbl_seq.cuda(self.device)
return features
def load_text(self, txt_input, vocab, bos=None, eos=None):
sentences = []
raw = []
if isinstance(txt_input, str):
with open(txt_input, 'rb') as fin:
lines = fin.read().splitlines()
elif isinstance(txt_input, list):
lines = txt_input
else:
raise LookupError(f"Unknown txt input type {type(txt_input)}")
for line in lines:
sent = self.tokenizer.encode(line)
if bos:
sent.insert(0, bos)
if eos:
sent.append(eos)
tokens = [self.pad_id] * self.seq_len
for i, s in enumerate(sent):
if s in vocab:
tokens[i] = vocab[s]
else:
tokens[i] = self.unk_id
if i == self.seq_len - 1:
if eos:
tokens[i] = vocab[eos]
break
sentences.append(tokens)
raw.append(line)
return sentences, raw
def load_image_features(self, filepath, feature_path):
if self.split == 'train':
if self.fewshot_name is None:
fn = f'{filepath}/{self.split}.txt.shuf'
else:
fn = f'{filepath}/{self.fewshot_name}.txt'
else:
fn = f'{filepath}/{self.split}.txt'
with open(fn, 'r') as fin:
img_names = fin.read().splitlines()
if '#' in img_names[0]:
img_names = [n.split('#')[0] for n in img_names]
img_ids = [name[:-4] for name in img_names]
with open(feature_path, 'rb') as fin:
all_features = pickle.load(fin)
img_features = {k:all_features[k] for k in img_ids}
return img_ids, img_features
class M30kDatasetv2(Dataset):
def __init__(self,
txt_input,
img_input,
vocab,
device,
preprocess,
dtype,
seq_len=64,
bos=b'<bos>',
eos=b'<eos>',
pad=b'<pad>',
unk=b'<unk>',
split='train',
sorted_keys=None
):
self.bos = bos
self.eos = eos
self.pad = pad
self.unk = unk
self.seq_len = seq_len
self.split = split
self.device = device
self.dtype = dtype
self.tokenizer = WhiteSpaceTokenizer()
self.src_vocab = vocab['source']
self.tgt_vocab = vocab['target']
self.sorted_keys = sorted_keys
self.pad_id = self.src_vocab[pad]
self.unk_id = self.src_vocab[unk]
if sorted_keys is not None:
self.src_txt = self.load_text(txt_input, self.src_vocab, None, self.eos)
else:
self.src_txt = self.load_text(txt_input[0], self.src_vocab, None, self.eos)
if split == 'train':
self.tgt_txt = self.load_text(txt_input[1], self.tgt_vocab, self.bos, None)
self.lbl_txt = self.load_text(txt_input[1], self.tgt_vocab, None, self.eos)
self.img_ids, self.images = self.load_images(img_input[0], img_input[1], preprocess)
assert len(self.src_txt) == len(self.img_ids)
def __len__(self):
return len(self.src_txt)
def __getitem__(self, idx):
src_seq = torch.tensor(self.src_txt[idx])
src_mask = (src_seq != self.pad_id).float()
# if the dataset is sorted
if self.sorted_keys is not None:
idx = self.sorted_keys[idx]
img_id = self.img_ids[idx]
image = self.images[img_id]
features = {
"image": image.cuda(self.device),
"source": src_seq.cuda(self.device),
"source_mask": src_mask.cuda(self.device)
}
if self.split == 'train':
tgt_seq = torch.tensor(self.tgt_txt[idx])
lbl_seq = torch.tensor(self.lbl_txt[idx])
tgt_mask = (tgt_seq != self.pad_id).float()
features.update({
"target": tgt_seq.cuda(self.device),
"target_mask": tgt_mask.cuda(self.device)
})
return features, lbl_seq.cuda(self.device)
return features
def load_text(self, txt_input, vocab, bos=None, eos=None):
sentences = []
if isinstance(txt_input, str):
with open(txt_input, 'rb') as fin:
lines = fin.read().splitlines()
elif isinstance(txt_input, list):
lines = txt_input
else:
raise LookupError(f"Unknown txt input type {type(txt_input)}")
for line in lines:
sent = self.tokenizer.encode(line)
if bos:
sent.insert(0, bos)
if eos:
sent.append(eos)
tokens = [self.pad_id] * self.seq_len
for i, s in enumerate(sent):
if s in vocab:
tokens[i] = vocab[s]
else:
tokens[i] = self.unk_id
if i == self.seq_len - 1:
if eos:
tokens[i] = vocab[eos]
break
sentences.append(tokens)
return sentences
def load_images(self, filepath, imgpath, preprocess):
if self.split == 'train':
fn = f'{filepath}/{self.split}.txt.shuf'
else:
fn = f'{filepath}/{self.split}.txt'
with open(fn, 'r') as fin:
img_names = fin.read().splitlines()
if '#' in img_names[0]:
img_names = [n.split('#')[0] for n in img_names]
img_ids = [name[:-4] for name in img_names]
images = {}
for imgid in tqdm(img_ids, desc='Loading images'):
if 'coco' in imgpath:
img_name = f'{imgpath}/train2014/{imgid}.jpg' if 'train' in imgid else f'{imgpath}/val2014/{imgid}.jpg'
else:
img_name = f'{imgpath}/{imgid}.jpg'
image = io.imread(img_name)
images[imgid] = preprocess(Image.fromarray(image)).type(self.dtype)
return img_ids, images | 33.289809 | 119 | 0.53047 | 1,285 | 10,453 | 4.119066 | 0.118288 | 0.036274 | 0.03174 | 0.022671 | 0.729454 | 0.717363 | 0.710561 | 0.710561 | 0.710561 | 0.690913 | 0 | 0.006879 | 0.360279 | 10,453 | 314 | 120 | 33.289809 | 0.784657 | 0.004688 | 0 | 0.702381 | 0 | 0 | 0.0622 | 0.027783 | 0 | 0 | 0 | 0 | 0.007937 | 1 | 0.043651 | false | 0 | 0.039683 | 0.007937 | 0.134921 | 0.003968 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b8384380738456b89b24f59837c39557638154a2 | 3,401 | py | Python | tests/test_redis.py | ashexpertVersion2/django-health-check | 895e2b6f96ab611b87829375b7b13c88025887f2 | [
"MIT"
] | 739 | 2015-01-22T15:38:35.000Z | 2022-03-29T17:20:05.000Z | tests/test_redis.py | ashexpertVersion2/django-health-check | 895e2b6f96ab611b87829375b7b13c88025887f2 | [
"MIT"
] | 231 | 2015-01-21T00:09:50.000Z | 2022-03-29T20:52:10.000Z | tests/test_redis.py | ashexpertVersion2/django-health-check | 895e2b6f96ab611b87829375b7b13c88025887f2 | [
"MIT"
] | 173 | 2015-01-21T20:14:45.000Z | 2022-03-24T10:07:43.000Z | import mock
from redis.exceptions import ConnectionError, TimeoutError
from health_check.contrib.redis.backends import RedisHealthCheck
class TestRedisHealthCheck:
"""Test Redis health check."""
@mock.patch("health_check.contrib.redis.backends.getattr")
@mock.patch("health_check.contrib.redis.backends.from_url", autospec=True)
def test_redis_refused_connection(self, mocked_connection, mocked_getattr):
"""Test when the connection to Redis is refused."""
mocked_getattr.return_value = "redis_url"
# mock returns
mocked_connection.return_value = mock.MagicMock()
mocked_connection.return_value.__enter__.side_effect = ConnectionRefusedError("Refused connection")
# instantiates the class
redis_healthchecker = RedisHealthCheck()
# invokes the method check_status()
redis_healthchecker.check_status()
        assert len(redis_healthchecker.errors) == 1
# mock assertions
mocked_connection.assert_called_once_with('redis://localhost/1')
@mock.patch("health_check.contrib.redis.backends.getattr")
@mock.patch("health_check.contrib.redis.backends.from_url")
def test_redis_timeout_error(self, mocked_connection, mocked_getattr):
"""Test Redis TimeoutError."""
mocked_getattr.return_value = "redis_url"
# mock returns
mocked_connection.return_value = mock.MagicMock()
mocked_connection.return_value.__enter__.side_effect = TimeoutError("Timeout Error")
# instantiates the class
redis_healthchecker = RedisHealthCheck()
# invokes the method check_status()
redis_healthchecker.check_status()
        assert len(redis_healthchecker.errors) == 1
# mock assertions
mocked_connection.assert_called_once_with('redis://localhost/1')
@mock.patch("health_check.contrib.redis.backends.getattr")
@mock.patch("health_check.contrib.redis.backends.from_url")
def test_redis_con_limit_exceeded(self, mocked_connection, mocked_getattr):
"""Test Connection Limit Exceeded error."""
mocked_getattr.return_value = "redis_url"
# mock returns
mocked_connection.return_value = mock.MagicMock()
mocked_connection.return_value.__enter__.side_effect = ConnectionError("Connection Error")
# instantiates the class
redis_healthchecker = RedisHealthCheck()
# invokes the method check_status()
redis_healthchecker.check_status()
        assert len(redis_healthchecker.errors) == 1
# mock assertions
mocked_connection.assert_called_once_with('redis://localhost/1')
@mock.patch("health_check.contrib.redis.backends.getattr")
@mock.patch("health_check.contrib.redis.backends.from_url")
def test_redis_conn_ok(self, mocked_connection, mocked_getattr):
"""Test everything is OK."""
mocked_getattr.return_value = "redis_url"
# mock returns
mocked_connection.return_value = mock.MagicMock()
mocked_connection.return_value.__enter__.side_effect = True
# instantiates the class
redis_healthchecker = RedisHealthCheck()
# invokes the method check_status()
redis_healthchecker.check_status()
        assert len(redis_healthchecker.errors) == 0
# mock assertions
mocked_connection.assert_called_once_with('redis://localhost/1')
| 38.213483 | 107 | 0.718024 | 375 | 3,401 | 6.197333 | 0.16 | 0.110155 | 0.069707 | 0.089071 | 0.833046 | 0.819707 | 0.756024 | 0.756024 | 0.756024 | 0.756024 | 0 | 0.00291 | 0.191708 | 3,401 | 88 | 108 | 38.647727 | 0.842488 | 0.14731 | 0 | 0.681818 | 0 | 0 | 0.177335 | 0.121721 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0.090909 | false | 0 | 0.068182 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b8880c568e4f687ed80330130a02aa08b7b21ed8 | 7,955 | py | Python | tests/ur_client_test.py | duongntbk/urchintai_client | 3a99db9348e970be28301f154fe67d724a6557f9 | [
"MIT"
] | null | null | null | tests/ur_client_test.py | duongntbk/urchintai_client | 3a99db9348e970be28301f154fe67d724a6557f9 | [
"MIT"
] | null | null | null | tests/ur_client_test.py | duongntbk/urchintai_client | 3a99db9348e970be28301f154fe67d724a6557f9 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import asyncio
from unittest.mock import Mock
import pytest
from urchintai_client.ur_client import UrClient
ignored_request_sender = None
@pytest.mark.asyncio
async def test_should_throw_error_if_no_property_argument():
'''
Either url or property_code is required
'''
# Arrange
client = UrClient(ignored_request_sender)
# Act
with pytest.raises(ValueError) as e:
await client.is_property_vacant()
# Assert
assert str(e.value) == 'Please provide either property\'s URL or property code'
@pytest.mark.asyncio
async def test_should_return_true_if_property_vacant_using_url():
'''
If the list of empty room returned from UR Chintai API is not "null",
that property is vacant.
'''
# Arrange
url = 'https://www.ur-net.go.jp/chintai/kanto/kanagawa/40_4120.html'
request_sender = setup_request_sender('not null')
client = UrClient(request_sender)
# Act
    is_vacant = await client.is_property_vacant(url=url)
    # Assert
    assert is_vacant
@pytest.mark.asyncio
async def test_should_return_true_if_property_vacant_using_code():
'''
If the list of empty room returned from UR Chintai API is not "null",
that property is vacant.
'''
# Arrange
property_code = {
'store_code': '40',
'house_code': '412',
'type': '0'
}
request_sender = setup_request_sender('not null')
client = UrClient(request_sender)
# Act
    is_vacant = await client.is_property_vacant(property_code=property_code)
    # Assert
    assert is_vacant
@pytest.mark.asyncio
async def test_should_return_false_if_property_full_using_url():
'''
If the list of empty room returned from UR Chintai API is "null",
that property is full.
'''
# Arrange
url = 'https://www.ur-net.go.jp/chintai/kanto/kanagawa/40_4120.html'
request_sender = setup_request_sender('null')
client = UrClient(request_sender)
# Act
    is_vacant = await client.is_property_vacant(url=url)
    # Assert
    assert not is_vacant
@pytest.mark.asyncio
async def test_should_return_false_if_property_full_using_code():
'''
If the list of empty room returned from UR Chintai API is "null",
that property is full.
'''
# Arrange
property_code = {
'store_code': '40',
'house_code': '412',
'type': '0'
}
request_sender = setup_request_sender('null')
client = UrClient(request_sender)
# Act
is_vacant = await client.is_property_vacant(property_code=property_code)
# Assert
    assert not is_vacant
@pytest.mark.asyncio
async def test_should_prioritize_property_code_over_url(mocker):
'''
If both room code and URL are provided, use room code.
'''
# Arrange
url = 'https://www.ur-net.go.jp/chintai/kanto/kanagawa/40_4120.html'
property_code = {
'store_code': '40',
'house_code': '412',
'type': '0'
}
mock_parser = mocker.patch('urchintai_client.ur_parser.get_property_code_from_url', \
return_value=property_code)
request_sender = setup_request_sender('null')
client = UrClient(request_sender)
# Act
await client.is_property_vacant(url=url, property_code=property_code)
# Assert
mock_parser.assert_not_called()
@pytest.mark.asyncio
async def test_should_throw_error_if_no_room_argument():
'''
Either url or room_code is required
'''
# Arrange
client = UrClient(ignored_request_sender)
# Act
with pytest.raises(ValueError) as e:
await client.is_room_vacant()
# Assert
assert str(e.value) == 'Please provide either room\'s URL or room code'
@pytest.mark.asyncio
async def test_should_return_true_if_room_vacant_using_url():
'''
If room details returned from UR Chintai API is not "null",
that room is vacant.
'''
# Arrange
url = 'https://www.ur-net.go.jp/chintai/kanto/kanagawa/40_2460_room.html?JKSS=000020654'
request_sender = setup_request_sender('not null')
client = UrClient(request_sender)
# Act
    is_vacant = await client.is_room_vacant(url=url)
    # Assert
    assert is_vacant
@pytest.mark.asyncio
async def test_should_return_true_if_room_vacant_using_code():
'''
If room details returned from UR Chintai API is not "null",
that room is vacant.
'''
# Arrange
room_code = {
'store_code': '40',
'house_code': '246',
'type': '0',
'room_id': '000020654',
}
request_sender = setup_request_sender('not null')
client = UrClient(request_sender)
# Act
    is_vacant = await client.is_room_vacant(room_code=room_code)
    # Assert
    assert is_vacant
@pytest.mark.asyncio
async def test_should_return_false_if_room_full_using_url():
'''
If room details returned from UR Chintai API is "null",
that room is full.
'''
# Arrange
url = 'https://www.ur-net.go.jp/chintai/kanto/kanagawa/40_2460_room.html?JKSS=000020654'
request_sender = setup_request_sender('null')
client = UrClient(request_sender)
# Act
    is_vacant = await client.is_room_vacant(url=url)
    # Assert
    assert not is_vacant
@pytest.mark.asyncio
async def test_should_return_false_if_room_full_using_code():
'''
If room details returned from UR Chintai API is "null",
that room is full.
'''
# Arrange
room_code = {
'store_code': '40',
'house_code': '246',
'type': '0',
'room_id': '000020654',
}
request_sender = setup_request_sender('null')
client = UrClient(request_sender)
# Act
    is_vacant = await client.is_room_vacant(room_code=room_code)
    # Assert
    assert not is_vacant
@pytest.mark.asyncio
async def test_should_prioritize_room_code_over_url(mocker):
'''
If both room code and URL are provided, use room code.
'''
# Arrange
url = 'https://www.ur-net.go.jp/chintai/kanto/kanagawa/40_2460_room.html?JKSS=000020654'
room_code = {
'store_code': '40',
'house_code': '246',
'type': '0',
'room_id': '000020654',
}
mock_parser = mocker.patch('urchintai_client.ur_parser.get_room_code_from_url', \
return_value=room_code)
request_sender = setup_request_sender('null')
client = UrClient(request_sender)
# Act
await client.is_room_vacant(url=url, room_code=room_code)
# Assert
mock_parser.assert_not_called()
@pytest.mark.asyncio
async def test_get_property_name_should_throw_error_if_no_argument():
'''
Either url or room_code is required
'''
# Arrange
client = UrClient(ignored_request_sender)
urls = [None, '']
# Act
for url in urls:
with pytest.raises(ValueError) as e:
await client.get_property_name(url)
# Assert
assert str(e.value) == 'Room\'s URL cannot be empty'
@pytest.mark.asyncio
async def test_should_get_property_name_from_page(mocker):
'''
    If the page response contains the property name, parse and return that value.
'''
# Arrange
expected_property_name = 'Khu Tap The Bo Cong An'
url = 'https://www.ur-net.go.jp/chintai/kanto/kanagawa/40_4120.html'
request_sender = setup_request_sender('not null', method='GET')
    mocker.patch(
        'urchintai_client.ur_parser.get_property_name_from_content',
        return_value=expected_property_name)
client = UrClient(request_sender)
# Act
property_name = await client.get_property_name(url)
# Assert
assert property_name == expected_property_name
def setup_request_sender(text, method='POST'):
resp = asyncio.Future()
resp.set_result(text)
request_sender = Mock()
if method == 'POST':
request_sender.post.return_value = resp
else:
request_sender.get.return_value = resp
    return request_sender
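On Python 3.8+, the `Future`-wrapping pattern in `setup_request_sender` above can be replaced by `unittest.mock.AsyncMock`, whose return values are awaitable without manual wrapping; a minimal standalone sketch (the URL is made up):

```python
import asyncio
from unittest.mock import AsyncMock


def setup_request_sender(text, method='POST'):
    # attribute children of an AsyncMock are AsyncMocks themselves,
    # so `await sender.post(...)` resolves straight to the canned text
    request_sender = AsyncMock()
    if method == 'POST':
        request_sender.post.return_value = text
    else:
        request_sender.get.return_value = text
    return request_sender


async def _demo():
    sender = setup_request_sender('not null')
    return await sender.post('https://example.com/api')


print(asyncio.run(_demo()))  # -> not null
```

This keeps the mock's call-assertion API (`assert_called_once_with`, etc.) intact while dropping the explicit `asyncio.Future` plumbing.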

# Codefights/arcade/python-arcade/level-3/25.Print-List/Python/solution1.py
# Repo: RevansChen/online-judge (MIT)

# Python3
# Restricted modification area
def printList(lst):
return f'This is your list: {lst}'

# tests/kubernetes/runner/test_runner.py
# Repo: peaudecastor/checkov (Apache-2.0)

import dis
import inspect
import os
import unittest
from collections import defaultdict
from pathlib import Path
from checkov.common.bridgecrew.severities import Severities, BcSeverities
from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.kubernetes.checks.resource.base_spec_check import BaseK8Check
from checkov.runner_filter import RunnerFilter
from checkov.kubernetes.runner import Runner
from checkov.kubernetes.checks.resource.registry import registry
class TestRunnerValid(unittest.TestCase):
def setUp(self) -> None:
self.orig_checks = registry.checks
def test_record_relative_path_with_relative_dir(self):
# test whether the record's repo_file_path is correct, relative to the CWD (with a / at the start).
# this is just constructing the scan dir as normal
current_dir = os.path.dirname(os.path.realpath(__file__))
scan_dir_path = os.path.join(current_dir, "resources")
# this is the relative path to the directory to scan (what would actually get passed to the -d arg)
dir_rel_path = os.path.relpath(scan_dir_path).replace('\\', '/')
runner = Runner()
checks_allowlist = ['CKV_K8S_21']
report = runner.run(root_folder=dir_rel_path, external_checks_dir=None,
runner_filter=RunnerFilter(framework=['kubernetes'], checks=checks_allowlist))
all_checks = report.failed_checks + report.passed_checks
self.assertGreater(len(all_checks), 0) # ensure that the assertions below are going to do something
for record in all_checks:
self.assertEqual(record.repo_file_path, f'/{dir_rel_path}{record.file_path}')
def test_record_relative_path_with_abs_dir(self):
# test whether the record's repo_file_path is correct, relative to the CWD (with a / at the start).
# this is just constructing the scan dir as normal
current_dir = os.path.dirname(os.path.realpath(__file__))
scan_dir_path = os.path.join(current_dir, "resources")
dir_rel_path = os.path.relpath(scan_dir_path).replace('\\', '/')
dir_abs_path = os.path.abspath(scan_dir_path)
runner = Runner()
checks_allowlist = ['CKV_K8S_21']
report = runner.run(root_folder=dir_abs_path, external_checks_dir=None,
runner_filter=RunnerFilter(framework=['kubernetes'], checks=checks_allowlist))
all_checks = report.failed_checks + report.passed_checks
self.assertGreater(len(all_checks), 0) # ensure that the assertions below are going to do something
for record in all_checks:
# no need to join with a '/' because the CFN runner adds it to the start of the file path
self.assertEqual(record.repo_file_path, f'/{dir_rel_path}{record.file_path}')
def test_record_relative_path_with_relative_file(self):
# test whether the record's repo_file_path is correct, relative to the CWD (with a / at the start).
# this is just constructing the scan dir as normal
current_dir = os.path.dirname(os.path.realpath(__file__))
scan_file_path = os.path.join(current_dir, "resources", "example.yaml")
# this is the relative path to the file to scan (what would actually get passed to the -f arg)
file_rel_path = os.path.relpath(scan_file_path)
runner = Runner()
checks_allowlist = ['CKV_K8S_21']
report = runner.run(root_folder=None, external_checks_dir=None, files=[file_rel_path],
runner_filter=RunnerFilter(framework='kubernetes', checks=checks_allowlist))
all_checks = report.failed_checks + report.passed_checks
self.assertGreater(len(all_checks), 0) # ensure that the assertions below are going to do something
for record in all_checks:
# no need to join with a '/' because the CFN runner adds it to the start of the file path
self.assertEqual(record.repo_file_path, f'/{file_rel_path}')
def test_record_relative_path_with_abs_file(self):
# test whether the record's repo_file_path is correct, relative to the CWD (with a / at the start).
# this is just constructing the scan dir as normal
current_dir = os.path.dirname(os.path.realpath(__file__))
scan_file_path = os.path.join(current_dir, "resources", "example.yaml")
file_rel_path = os.path.relpath(scan_file_path)
file_abs_path = os.path.abspath(scan_file_path)
runner = Runner()
checks_allowlist = ['CKV_K8S_21']
report = runner.run(root_folder=None, external_checks_dir=None, files=[file_abs_path],
runner_filter=RunnerFilter(framework='kubernetes', checks=checks_allowlist))
all_checks = report.failed_checks + report.passed_checks
self.assertGreater(len(all_checks), 0) # ensure that the assertions below are going to do something
for record in all_checks:
# no need to join with a '/' because the CFN runner adds it to the start of the file path
self.assertEqual(record.repo_file_path, f'/{file_rel_path}')
def test_list_metadata_annotations(self):
current_dir = os.path.dirname(os.path.realpath(__file__))
scan_file_path = os.path.join(current_dir, "list_annotation", "example.yaml")
file_rel_path = os.path.relpath(scan_file_path)
runner = Runner()
try:
runner.run(root_folder=None, external_checks_dir=None, files=[file_rel_path],
runner_filter=RunnerFilter(framework='kubernetes'))
except Exception:
            self.fail("Could not run K8 runner on configuration")
def test_wrong_check_imports(self):
wrong_imports = ["arm", "cloudformation", "dockerfile", "helm", "serverless", "terraform", "kustomize"]
check_imports = []
checks_path = Path(inspect.getfile(Runner)).parent.joinpath("checks")
for file in checks_path.rglob("*.py"):
with file.open() as f:
instructions = dis.get_instructions(f.read())
import_names = [instr.argval for instr in instructions if "IMPORT_NAME" == instr.opname]
for import_name in import_names:
wrong_import = next((import_name for x in wrong_imports if x in import_name), None)
if wrong_import:
check_imports.append({file.name: wrong_import})
assert len(check_imports) == 0, f"Wrong imports were added: {check_imports}"
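The bytecode-based import scan above works on any source string, not only on check files; a standalone sketch of the same `dis` technique:

```python
import dis

source = "import os\nfrom pathlib import Path\nimport json\n"
# IMPORT_NAME instructions carry the imported module name in argval
import_names = [
    instr.argval
    for instr in dis.get_instructions(source)
    if instr.opname == "IMPORT_NAME"
]
print(import_names)  # -> ['os', 'pathlib', 'json']
```

Scanning compiled bytecode instead of regex-matching the text catches every import form (`import x`, `from x import y`) with one opcode check.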
def test_parse_with_empty_blocks(self):
current_dir = os.path.dirname(os.path.realpath(__file__))
scan_file_path = os.path.join(current_dir, "resources", "example_multiple.yaml")
file_rel_path = os.path.relpath(scan_file_path)
runner = Runner()
try:
report = runner.run(root_folder=None, external_checks_dir=None, files=[file_rel_path],
runner_filter=RunnerFilter(framework='kubernetes'))
# just check that something was parsed and scanned
self.assertGreater(len(report.failed_checks) + len(report.passed_checks), 0)
except Exception:
            self.fail("Could not run K8 runner on configuration")
def test_record_includes_severity(self):
custom_check_id = "CKV_MY_CUSTOM_CHECK"
registry.checks = defaultdict(list)
class AnyFailingCheck(BaseK8Check):
def __init__(self, *_, **__) -> None:
super().__init__(
"this should fail",
custom_check_id,
[CheckCategories.KUBERNETES],
["Service"]
)
def scan_spec_conf(self, conf):
return CheckResult.FAILED
check = AnyFailingCheck()
check.severity = Severities[BcSeverities.LOW]
scan_file_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "resources", "example.yaml")
report = Runner().run(
None,
files=[scan_file_path],
runner_filter=RunnerFilter(framework=['kubernetes'], checks=[custom_check_id])
)
self.assertEqual(report.failed_checks[0].severity, Severities[BcSeverities.LOW])
def test_record_check_severity(self):
custom_check_id = "CKV_MY_CUSTOM_CHECK"
registry.checks = defaultdict(list)
class AnyFailingCheck(BaseK8Check):
def __init__(self, *_, **__) -> None:
super().__init__(
"this should fail",
custom_check_id,
[CheckCategories.KUBERNETES],
["Service"]
)
def scan_spec_conf(self, conf):
return CheckResult.FAILED
check = AnyFailingCheck()
check.severity = Severities[BcSeverities.MEDIUM]
scan_file_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "resources", "example.yaml")
report = Runner().run(
None,
files=[scan_file_path],
runner_filter=RunnerFilter(framework=['kubernetes'], checks=['LOW'])
)
all_checks = report.failed_checks + report.passed_checks
self.assertTrue(any(c.check_id == custom_check_id for c in all_checks))
def test_record_check_severity_omit(self):
custom_check_id = "CKV_MY_CUSTOM_CHECK"
registry.checks = defaultdict(list)
class AnyFailingCheck(BaseK8Check):
def __init__(self, *_, **__) -> None:
super().__init__(
"this should fail",
custom_check_id,
[CheckCategories.KUBERNETES],
["Service"]
)
def scan_spec_conf(self, conf):
return CheckResult.FAILED
check = AnyFailingCheck()
check.severity = Severities[BcSeverities.MEDIUM]
scan_file_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "resources", "example.yaml")
report = Runner().run(
None,
files=[scan_file_path],
runner_filter=RunnerFilter(framework=['kubernetes'], checks=['HIGH'])
)
all_checks = report.failed_checks + report.passed_checks
self.assertFalse(any(c.check_id == custom_check_id for c in all_checks))
def test_record_check_skip_severity(self):
custom_check_id = "CKV_MY_CUSTOM_CHECK"
registry.checks = defaultdict(list)
class AnyFailingCheck(BaseK8Check):
def __init__(self, *_, **__) -> None:
super().__init__(
"this should fail",
custom_check_id,
[CheckCategories.KUBERNETES],
["Service"]
)
def scan_spec_conf(self, conf):
return CheckResult.FAILED
check = AnyFailingCheck()
check.severity = Severities[BcSeverities.HIGH]
scan_file_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "resources", "example.yaml")
report = Runner().run(
None,
files=[scan_file_path],
runner_filter=RunnerFilter(framework=['kubernetes'], skip_checks=['MEDIUM'])
)
all_checks = report.failed_checks + report.passed_checks
self.assertTrue(any(c.check_id == custom_check_id for c in all_checks))
def test_record_check_skip_severity_omit(self):
custom_check_id = "CKV_MY_CUSTOM_CHECK"
registry.checks = defaultdict(list)
class AnyFailingCheck(BaseK8Check):
def __init__(self, *_, **__) -> None:
super().__init__(
"this should fail",
custom_check_id,
[CheckCategories.KUBERNETES],
["Service"]
)
def scan_spec_conf(self, conf):
return CheckResult.FAILED
check = AnyFailingCheck()
check.severity = Severities[BcSeverities.LOW]
scan_file_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "resources", "example.yaml")
report = Runner().run(
None,
files=[scan_file_path],
runner_filter=RunnerFilter(framework=['kubernetes'], skip_checks=['MEDIUM'])
)
all_checks = report.failed_checks + report.passed_checks
self.assertFalse(any(c.check_id == custom_check_id for c in all_checks))
def tearDown(self):
registry.checks = self.orig_checks
if __name__ == '__main__':
unittest.main()

# project/apps/django_backend_template/controllers/__init__.py
# Repo: adosaa/Backend-django-app (MIT)

from .core import *
from .student import *

# async_download_file/download_gui/variables.py
# Repo: woshimanong1990/download_file_gui (MIT)

# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import unicode_literals
from __future__ import print_function
from async_download_file.download_task.variables import TaskStatus as DownloadStatus

# exp/models/chen2018_test.py
# Repo: f-dangel/hbp (MIT)

"""Test of model architectures from Chen et al.: BDA-PCH (2018)."""
import torch
from bpexts.utils import set_seeds
from .chen2018 import (
cifar10_model,
hbp_cifar10_model,
hbp_mnist_model,
hbp_split_cifar10_model,
hbp_split_mnist_model,
mnist_model,
)
def test_forward_mnist_models():
"""Check same behaviour of original and HBP/split MNIST model."""
max_blocks = 5
input = torch.randn(2, 784)
set_seeds(0)
original = mnist_model()
set_seeds(0)
hbp = hbp_mnist_model()
set_seeds(0)
hbp_parallel = hbp_split_mnist_model(max_blocks)
assert torch.allclose(original(input), hbp(input), atol=1e-5)
assert torch.allclose(original(input), hbp_parallel(input), atol=1e-5)
def test_forward_cifar10_models():
"""Check same behaviour of original and HBP/split CIFAR-10 model."""
max_blocks = 5
input = torch.randn(2, 3072)
set_seeds(0)
original = cifar10_model()
set_seeds(0)
hbp = hbp_cifar10_model()
set_seeds(0)
hbp_parallel = hbp_split_cifar10_model(max_blocks, False, False)
assert torch.allclose(original(input), hbp(input), atol=1e-5)
assert torch.allclose(original(input), hbp_parallel(input), atol=1e-5)
def test_hbp_approximation_mnist_model():
"""Check correct usage of HBP approximations in MNIST model."""
aij = [True, False]
apj = [True, False]
# assert correct approximations in layers
linear_idx = [0, 2, 4, 6]
linear_idx = [item + 1 for item in linear_idx]
activation_idx = [1, 3, 5]
activation_idx = [item + 1 for item in activation_idx]
for i in aij:
for p in apj:
model = hbp_mnist_model(i, p)
for idx in linear_idx:
assert model[idx].uses_hbp_approximation(None, p)
# assert correct approximations in activations
for idx in activation_idx:
assert model[idx].uses_hbp_approximation(i, None)
def test_hbp_approximation_split_mnist_model():
"""Check correct usage of HBP approximations in split MNIST model."""
blocks = 10
aij = [True, False]
apj = [True, False]
# assert correct approximations in layers
linear_idx = [0, 2, 4, 6]
linear_idx = [item + 1 for item in linear_idx]
activation_idx = [1, 3, 5]
activation_idx = [item + 1 for item in activation_idx]
for i in aij:
for p in apj:
model = hbp_split_mnist_model(blocks, i, p)
for idx in linear_idx:
assert model[idx].uses_hbp_approximation(None, p)
# assert correct approximations in activations
for idx in activation_idx:
assert model[idx].uses_hbp_approximation(i, None)
def test_hbp_approximation_cifar10_model():
"""Check correct usage of HBP approximations in CIFAR-10 model."""
aij = [True, False]
apj = [True, False]
# assert correct approximations in layers
linear_idx = [0, 2, 4, 6, 8, 10, 12, 14]
linear_idx = [item + 1 for item in linear_idx]
activation_idx = [1, 3, 5, 7, 9, 11, 13]
activation_idx = [item + 1 for item in activation_idx]
for i in aij:
for p in apj:
model = hbp_cifar10_model(i, p)
for idx in linear_idx:
assert model[idx].uses_hbp_approximation(None, p)
# assert correct approximations in activations
for idx in activation_idx:
assert model[idx].uses_hbp_approximation(i, None)
def test_hbp_approximation_split_cifar10_model():
"""Check correct usage of HBP approximations in split CIFAR-10 model."""
blocks = 10
aij = [True, False]
apj = [True, False]
# assert correct approximations in layers
linear_idx = [0, 2, 4, 6]
linear_idx = [item + 1 for item in linear_idx]
activation_idx = [1, 3, 5]
activation_idx = [item + 1 for item in activation_idx]
for i in aij:
for p in apj:
model = hbp_split_mnist_model(blocks, i, p)
for idx in linear_idx:
assert model[idx].uses_hbp_approximation(None, p)
# assert correct approximations in activations
for idx in activation_idx:
assert model[idx].uses_hbp_approximation(i, None)

# stl/tree/nodes/__init__.py
# Repo: pieter-hendriks/STL-monitoring (MIT)

""" Import all the nodes used in the STL tree """
from .node import Node
from .formulanodes import *
from .operationnodes import *
from .valuenodes import *
from .contentnodes import *
from .signalnodes import *

# anaflow/tools/coarse_graining.py
# Repo: JarnoHerr/AnaFlow (MIT)

# -*- coding: utf-8 -*-
"""
Anaflow subpackage providing helper functions related to coarse graining.
.. currentmodule:: anaflow.tools.coarse_graining
The following functions are provided
.. autosummary::
T_CG
T_CG_inverse
T_CG_error
K_CG
K_CG_inverse
K_CG_error
TPL_CG
TPL_CG_error
"""
# pylint: disable=C0103
import numpy as np
from scipy.optimize import root
from anaflow.tools.special import aniso, tpl_hyp
__all__ = [
"T_CG",
"T_CG_inverse",
"T_CG_error",
"K_CG",
"K_CG_inverse",
"K_CG_error",
"TPL_CG",
"TPL_CG_error",
]
def T_CG(rad, trans_gmean, var, len_scale, T_well=None, prop=1.6):
"""
The coarse-graining Transmissivity.
This solution was presented in ''Schneider & Attinger 2008''[R3]_.
    This function gives an effective
    transmissivity for 2D pumping tests in heterogeneous aquifers, where the
    transmissivity follows a log-normal distribution with a Gaussian
    correlation function.
Parameters
----------
rad : :class:`numpy.ndarray`
Array with all radii where the function should be evaluated
trans_gmean : :class:`float`
Geometric-mean transmissivity.
var : :class:`float`
Variance of log-transmissivity.
len_scale : :class:`float`
Correlation-length of log-transmissivity.
T_well : :class:`float`, optional
Explicit transmissivity value at the well. Harmonic mean by default.
prop: :class:`float`, optional
Proportionality factor used within the upscaling procedure.
Default: ``1.6``
Returns
-------
T_CG : :class:`numpy.ndarray`
Array containing the effective transmissivity values.
References
----------
.. [R3] Schneider, C. and Attinger, S.,
''Beyond thiem: A new method for interpreting large scale
pumping tests in heterogeneous aquifers.''
Water resources research, 44(4), 2008
Examples
--------
    >>> T_CG([1,2,3], 0.001, 1, 10, prop=2)
array([0.00061831, 0.00064984, 0.00069236])
"""
chi = -var / 2.0 if T_well is None else np.log(T_well / trans_gmean)
return trans_gmean * np.exp(chi / (1.0 + (prop * rad / len_scale) ** 2))
def T_CG_inverse(T, trans_gmean, var, len_scale, T_well=None, prop=1.6):
"""
The inverse coarse-graining Transmissivity.
See: :func:`T_CG`
Parameters
----------
T : :class:`numpy.ndarray`
Array with all transmissivity values
where the function should be evaluated
trans_gmean : :class:`float`
Geometric-mean transmissivity.
var : :class:`float`
Variance of log-transmissivity.
len_scale : :class:`float`
Correlation-length of log-transmissivity.
T_well : :class:`float`, optional
Explicit transmissivity value at the well. Harmonic mean by default.
prop: :class:`float`, optional
Proportionality factor used within the upscaling procedure.
Default: ``1.6``
Returns
-------
rad : :class:`numpy.ndarray`
Array containing the radii belonging to the given transmissivity values
Examples
--------
    >>> T_CG_inverse([7e-4,8e-4,9e-4], 0.001, 1, 10, prop=2)
array([3.16952925, 5.56935826, 9.67679026])
"""
chi = -var / 2.0 if T_well is None else np.log(T_well / trans_gmean)
return (len_scale / prop) * np.sqrt(chi / np.log(T / trans_gmean) - 1.0)
def T_CG_error(err, trans_gmean, var, len_scale, T_well=None, prop=1.6):
"""
Calculating the radial-point for given error.
    Calculating the radial-point where the relative error of the farfield
    value is less than the given tolerance.
See: :func:`T_CG`
Parameters
----------
err : :class:`float`
Given relative error for the farfield transmissivity
trans_gmean : :class:`float`
Geometric-mean transmissivity.
var : :class:`float`
Variance of log-transmissivity.
len_scale : :class:`float`
Correlation-length of log-transmissivity.
T_well : :class:`float`, optional
Explicit transmissivity value at the well. Harmonic mean by default.
prop: :class:`float`, optional
Proportionality factor used within the upscaling procedure.
Default: ``1.6``
Returns
-------
rad : :class:`float`
Radial point, where the relative error is less than the given one.
Examples
--------
>>> T_CG_error(0.01, 0.001, 1, 10, 2)
34.91045016779039
"""
chi = -var / 2.0 if T_well is None else np.log(T_well / trans_gmean)
if chi > 0.0:
if chi / np.log(1.0 + err) >= 1.0:
return (len_scale / prop) * np.sqrt(chi / np.log(1.0 + err) - 1.0)
        # standard value if the error is less than the variation
return 0
if chi / np.log(1.0 - err) >= 1.0:
return (len_scale / prop) * np.sqrt(chi / np.log(1.0 - err) - 1.0)
    # standard value if the error is less than the variation
return 0
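The closed form returned by `T_CG_error` can be cross-checked against `T_CG` numerically; a self-contained scalar sketch (standard library only) for the harmonic-mean well case, where `chi = -var / 2`:

```python
from math import exp, log, sqrt


def t_cg(rad, trans_gmean, var, len_scale, prop=1.6):
    # scalar T_CG with the harmonic-mean well value (T_well=None)
    chi = -var / 2.0
    return trans_gmean * exp(chi / (1.0 + (prop * rad / len_scale) ** 2))


def t_cg_error(err, trans_gmean, var, len_scale, prop=1.6):
    # scalar T_CG_error for the chi < 0 branch
    chi = -var / 2.0
    return (len_scale / prop) * sqrt(chi / log(1.0 - err) - 1.0)


r_err = t_cg_error(0.01, 0.001, 1.0, 10.0, prop=2.0)
print(round(r_err, 5))  # -> 34.91045, the doctest value of T_CG_error
# at r_err the deviation from the far-field value trans_gmean is exactly 1 %
print(round(t_cg(r_err, 0.001, 1.0, 10.0, prop=2.0) / 0.001, 12))  # -> 0.99
```

Substituting the error radius back into `T_CG` recovers `trans_gmean * (1 - err)` exactly, which is what the `chi / log(1.0 - err)` algebra encodes.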
def K_CG(rad, cond_gmean, var, len_scale, anis, K_well="KH", prop=1.6):
"""
The coarse-graining conductivity.
This solution was presented in ''Zech 2013''[R8]_.
    This function gives an effective
    conductivity for 3D pumping tests in heterogeneous aquifers, where the
    conductivity follows a log-normal distribution with a Gaussian
    correlation function, taking vertical anisotropy into account.
Parameters
----------
rad : :class:`numpy.ndarray`
Array with all radii where the function should be evaluated
cond_gmean : :class:`float`
Geometric-mean conductivity.
var : :class:`float`
Variance of the log-conductivity.
len_scale : :class:`float`
        Correlation-length of log-conductivity.
    anis : :class:`float`
        Anisotropy-ratio of the vertical and horizontal correlation-lengths.
K_well : string/float, optional
Explicit conductivity value at the well. One can choose between the
harmonic mean (``"KH"``), the arithmetic mean (``"KA"``) or an
arbitrary float value. Default: ``"KH"``
prop: :class:`float`, optional
Proportionality factor used within the upscaling procedure.
Default: ``1.6``
Returns
-------
K_CG : :class:`numpy.ndarray`
Array containing the effective conductivity values.
References
----------
.. [R8] Zech, A.
        ''Impact of Aquifer Heterogeneity on Subsurface Flow and Salt
        Transport at Different Scales: from a method to determine parameters
        of heterogeneous permeability at local scale to a large-scale model
        for the sedimentary basin of Thuringia.''
PhD thesis, Friedrich-Schiller-Universität Jena, 2013
Examples
--------
    >>> K_CG([1,2,3], 0.001, 1, 10, 1, prop=2)
array([0.00063008, 0.00069285, 0.00077595])
"""
K_efu = cond_gmean * np.exp(var * (0.5 - aniso(anis)))
if K_well == "KH":
chi = var * (aniso(anis) - 1.0)
elif K_well == "KA":
chi = var * aniso(anis)
else:
chi = np.log(K_well / K_efu)
return K_efu * np.exp(
chi
/ np.sqrt(1.0 + (prop * rad / (len_scale * anis ** (1.0 / 3.0))) ** 2)
** 3
)
def K_CG_inverse(K, cond_gmean, var, len_scale, anis, K_well="KH", prop=1.6):
"""
The inverse coarse-graining conductivity.
See: :func:`K_CG`
Parameters
----------
K : :class:`numpy.ndarray`
Array with all conductivity values
where the function should be evaluated
cond_gmean : :class:`float`
Geometric-mean conductivity.
var : :class:`float`
Variance of the log-conductivity.
len_scale : :class:`float`
        Correlation-length of log-conductivity.
    anis : :class:`float`
        Anisotropy-ratio of the vertical and horizontal correlation-lengths.
K_well : string/float, optional
Explicit conductivity value at the well. One can choose between the
harmonic mean (``"KH"``), the arithmetic mean (``"KA"``) or an
arbitrary float value. Default: ``"KH"``
prop: :class:`float`, optional
Proportionality factor used within the upscaling procedure.
Default: ``1.6``
Returns
-------
rad : :class:`numpy.ndarray`
Array containing the radii belonging to the given conductivity values
Examples
--------
    >>> K_CG_inverse([7e-4,8e-4,9e-4], 0.001, 1, 10, 1, prop=2)
array([2.09236867, 3.27914996, 4.52143956])
"""
K_efu = cond_gmean * np.exp(var * (0.5 - aniso(anis)))
if K_well == "KH":
chi = var * (aniso(anis) - 1.0)
elif K_well == "KA":
chi = var * aniso(anis)
else:
chi = np.log(K_well / K_efu)
return (
len_scale
* anis ** (1.0 / 3.0)
/ prop
* np.sqrt((chi / np.log(K / K_efu)) ** (2.0 / 3.0) - 1.0)
)
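`K_CG_inverse` is the algebraic inverse of `K_CG` for any value of the anisotropy function; a scalar round-trip sketch with `aniso(anis)` fixed to the constant `1/3` (its value in the isotropic limit `anis = 1`, assumed here for illustration):

```python
from math import exp, log, sqrt

ANI = 1.0 / 3.0  # stand-in for aniso(anis) at anis = 1


def k_cg(rad, cond_gmean, var, len_scale, anis=1.0, prop=1.6):
    k_efu = cond_gmean * exp(var * (0.5 - ANI))
    chi = var * (ANI - 1.0)  # harmonic-mean well value, K_well="KH"
    scale = len_scale * anis ** (1.0 / 3.0)
    return k_efu * exp(chi / sqrt(1.0 + (prop * rad / scale) ** 2) ** 3)


def k_cg_inverse(k, cond_gmean, var, len_scale, anis=1.0, prop=1.6):
    k_efu = cond_gmean * exp(var * (0.5 - ANI))
    chi = var * (ANI - 1.0)
    coef = len_scale * anis ** (1.0 / 3.0) / prop
    return coef * sqrt((chi / log(k / k_efu)) ** (2.0 / 3.0) - 1.0)


k = k_cg(2.5, 0.001, 1.0, 10.0)
print(round(k_cg_inverse(k, 0.001, 1.0, 10.0), 9))  # -> 2.5
```

The round trip holds because `log(K / K_efu) = chi / s**3` with `s = sqrt(1 + x**2)`, so raising the ratio to the power 2/3 recovers `1 + x**2` and hence the radius.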
def K_CG_error(err, cond_gmean, var, len_scale, anis, K_well="KH", prop=1.6):
"""
Calculating the radial-point for given error.
    Calculating the radial-point where the relative error of the farfield
    value is less than the given tolerance.
See: :func:`K_CG`
Parameters
----------
err : :class:`float`
Given relative error for the farfield conductivity
cond_gmean : :class:`float`
Geometric-mean conductivity.
var : :class:`float`
Variance of the log-conductivity.
len_scale : :class:`float`
        Correlation-length of log-conductivity.
    anis : :class:`float`
        Anisotropy-ratio of the vertical and horizontal correlation-lengths.
K_well : string/float, optional
Explicit conductivity value at the well. One can choose between the
harmonic mean (``"KH"``), the arithmetic mean (``"KA"``) or an
arbitrary float value. Default: ``"KH"``
prop: :class:`float`, optional
Proportionality factor used within the upscaling procedure.
Default: ``1.6``
Returns
-------
rad : :class:`float`
Radial point, where the relative error is less than the given one.
Examples
--------
    >>> K_CG_error(0.01, 0.001, 1, 10, 1, prop=2)
19.612796453639845
"""
K_efu = cond_gmean * np.exp(var * (0.5 - aniso(anis)))
if K_well == "KH":
chi = var * (aniso(anis) - 1.0)
elif K_well == "KA":
chi = var * aniso(anis)
else:
chi = np.log(K_well / K_efu)
coef = len_scale * anis ** (1.0 / 3.0) / prop
if chi > 0.0:
if chi / np.log(1.0 + err) >= 1.0:
return coef * np.sqrt(
(chi / np.log(1.0 + err)) ** (2.0 / 3.0) - 1.0
)
        # standard value if the error is less than the variation
return 0
if chi / np.log(1.0 - err) >= 1.0:
return coef * np.sqrt((chi / np.log(1.0 - err)) ** (2.0 / 3.0) - 1.0)
# standard value if the error is less then the variation
return 0
def TPL_CG(
    rad,
    cond_gmean,
    len_scale,
    hurst,
    var=None,
    c=1.0,
    anis=1,
    dim=2.0,
    K_well="KH",
    prop=1.6,
):
"""
The gaussian truncated power-law coarse-graining conductivity.
Parameters
----------
rad : :class:`numpy.ndarray`
Array with all radii where the function should be evaluated
cond_gmean : :class:`float`
Geometric-mean conductivity
len_scale : :class:`float`
upper bound of the corralation-length of conductivity-distribution
hurst: :class:`float`
Hurst coefficient of the TPL model. Should be in (0, 1).
var : :class:`float` or :any:`None`, optional
Variance of log-conductivity
If given, c will be calculated accordingly.
Default: :any:`None`
c : :class:`float`, optional
Intensity of variation in the TPL model.
Is overwritten if var is given.
Default: ``1.0``
anis : :class:`float`, optional
Anisotropy-ratio of the vertical and horizontal corralation-lengths.
This is only applied in 3 dimensions.
Default: 1.0
dim: :class:`float`, optional
Dimension of space.
Default: ``2.0``
K_well : :class:`str` or :class:`float`, optional
Explicit conductivity value at the well. One can choose between the
harmonic mean (``"KH"``),
the arithmetic mean (``"KA"``) or an arbitrary float
value. Default: ``"KH"``
prop: :class:`float`, optional
Proportionality factor used within the upscaling procedure.
Default: ``1.6``
Returns
-------
TPL_CG : :class:`numpy.ndarray`
Array containing the effective conductivity values.
"""
    # handle special case in 3D with anisotropy
    anis = 1.0 if not np.isclose(dim, 3) else anis
    ani = aniso(anis) if np.isclose(dim, 3) else 1.0 / dim
    var = c * len_scale ** (2 * hurst) / (2 * hurst) if var is None else var
    K_efu = cond_gmean * np.exp(var * (0.5 - ani))
    if K_well == "KH":
        chi = var * (ani - 1.0)
    elif K_well == "KA":
        chi = var * ani
    else:
        chi = np.log(K_well / K_efu)
    return K_efu * np.exp(
        (chi * 2.0 * hurst / (dim + 2.0 * hurst))
        * tpl_hyp(rad, dim, hurst, len_scale * anis ** (1 / 3.0), prop)
    )

def TPL_CG_error(
    err,
    cond_gmean,
    len_scale,
    hurst,
    var=None,
    c=1.0,
    anis=1,
    dim=2.0,
    K_well="KH",
    prop=1.6,
):
"""
Calculating the radial-point for given error.
Calculating the radial-point where the relative error of the farfield
value is less than the given tollerance.
See: :func:`TPL_CG`
Parameters
----------
err : :class:`float`
Given relative error for the farfield conductivity
cond_gmean : :class:`float`
Geometric-mean conductivity
len_scale : :class:`float`
upper bound of the corralation-length of conductivity-distribution
hurst: :class:`float`
Hurst coefficient of the TPL model. Should be in (0, 1).
var : :class:`float` or :any:`None`, optional
Variance of log-conductivity
If given, c will be calculated accordingly.
Default: :any:`None`
c : :class:`float`, optional
Intensity of variation in the TPL model.
Is overwritten if var is given.
Default: ``1.0``
anis : :class:`float`, optional
Anisotropy-ratio of the vertical and horizontal corralation-lengths.
This is only applied in 3 dimensions.
Default: 1.0
dim: :class:`float`, optional
Dimension of space.
Default: ``2.0``
K_well : :class:`str` or :class:`float`, optional
Explicit conductivity value at the well. One can choose between the
harmonic mean (``"KH"``),
the arithmetic mean (``"KA"``) or an arbitrary float
value. Default: ``"KH"``
prop: :class:`float`, optional
Proportionality factor used within the upscaling procedure.
Default: ``1.6``
Returns
-------
rad : :class:`float`
Radial point, where the relative error is less than the given one.
"""
    # handle special case in 3D with anisotropy
    anis = 1.0 if not np.isclose(dim, 3) else anis
    ani = aniso(anis) if np.isclose(dim, 3) else 1.0 / dim
    var = c * len_scale ** (2 * hurst) / (2 * hurst) if var is None else var
    K_efu = cond_gmean * np.exp(var * (0.5 - ani))
    if K_well == "KH":
        chi = var * (ani - 1.0)
    elif K_well == "KA":
        chi = var * ani
    else:
        chi = np.log(K_well / K_efu)
    Kw = np.exp(chi + np.log(K_efu))
    # define a curve that has its root at the wanted percentile
    if chi > 0:
        per = (1 + err) * K_efu
        if not per < Kw:
            return 0
    elif chi < 0:
        per = (1 - err) * K_efu
        if not per > Kw:
            return 0
    else:
        return 0

    def curve(x):
        """Curve for fitting."""
        return (
            TPL_CG(
                x,
                cond_gmean=cond_gmean,
                len_scale=len_scale,
                hurst=hurst,
                var=var,
                c=c,
                anis=anis,
                dim=dim,
                K_well=K_well,
                prop=prop,
            )
            - per
        )

    return root(curve, 2 * len_scale)["x"][0]

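The percentile search above hands the problem to `scipy.optimize.root` via a curve whose root marks the wanted radius. The same idea can be sketched with a plain bisection; `bisect_root` and its tolerance are illustrative helpers, not part of this library.

```python
def bisect_root(f, lo, hi, tol=1e-10, max_iter=200):
    """Find x in [lo, hi] with f(x) == 0, assuming f(lo) and f(hi)
    have opposite signs (i.e. the curve crosses the wanted level)."""
    flo = f(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if fmid == 0.0 or hi - lo < tol:
            return mid
        if (fmid > 0.0) == (flo > 0.0):
            # midpoint is on the same side as lo: move the left bracket
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A dedicated solver such as `scipy.optimize.root` converges faster on smooth curves, but bisection makes the bracketing assumption behind the percentile check explicit.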
if __name__ == "__main__":
import doctest
doctest.testmod()
| 30.809074 | 79 | 0.600319 | 2,214 | 16,298 | 4.334688 | 0.121951 | 0.056268 | 0.035636 | 0.022924 | 0.830155 | 0.812129 | 0.788163 | 0.779098 | 0.764093 | 0.761384 | 0 | 0.037542 | 0.279237 | 16,298 | 528 | 80 | 30.867424 | 0.779433 | 0.633881 | 0 | 0.539474 | 0 | 0 | 0.022794 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059211 | false | 0 | 0.026316 | 0 | 0.203947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a2a35aa2f9c7cfbee2eea2d1db271877f88d2b61 | 96 | py | Python | spotirip/__init__.py | bttger/spotirip | bb56d9907bde8d2d6eb9d911826816cdf885f2e2 | [
"MIT"
] | 9 | 2019-07-10T12:46:39.000Z | 2021-12-16T05:28:09.000Z | spotirip/__init__.py | bttger/spotirip | bb56d9907bde8d2d6eb9d911826816cdf885f2e2 | [
"MIT"
] | 2 | 2019-07-11T14:37:50.000Z | 2019-07-15T18:42:11.000Z | spotirip/__init__.py | bttger/spotirip | bb56d9907bde8d2d6eb9d911826816cdf885f2e2 | [
"MIT"
] | 1 | 2020-10-24T11:45:43.000Z | 2020-10-24T11:45:43.000Z | import spotirip.const
import spotirip.exporter
import spotirip.recorder
import spotirip.spotify
| 19.2 | 24 | 0.875 | 12 | 96 | 7 | 0.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 96 | 4 | 25 | 24 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a2b1a3503cd4ba3d82f9cc09d55f8587515f0473 | 30 | py | Python | src/resources/__init__.py | Anibittu/hw-recog-be | 96c3f56e6ee25721d54ed8454a0be774cfe4553a | [
"MIT"
] | null | null | null | src/resources/__init__.py | Anibittu/hw-recog-be | 96c3f56e6ee25721d54ed8454a0be774cfe4553a | [
"MIT"
] | 6 | 2020-01-28T23:16:03.000Z | 2020-04-21T13:40:15.000Z | src/resources/__init__.py | Anibittu/hw-recog-be | 96c3f56e6ee25721d54ed8454a0be774cfe4553a | [
"MIT"
] | 2 | 2020-04-16T06:01:47.000Z | 2020-07-07T06:04:16.000Z | from .rect import RectResource | 30 | 30 | 0.866667 | 4 | 30 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a2ed51333a3e0e5d9371859f87d1abda0fe1d947 | 1,536 | py | Python | skimage/util/tests/test_invert.py | portugueslab/scikit-image | 0fa3bcb118bb208a0cc7d3e8b96cd96c1ce7a75b | [
"BSD-3-Clause"
] | 2 | 2020-02-24T02:24:43.000Z | 2021-12-19T11:44:34.000Z | skimage/util/tests/test_invert.py | portugueslab/scikit-image | 0fa3bcb118bb208a0cc7d3e8b96cd96c1ce7a75b | [
"BSD-3-Clause"
] | null | null | null | skimage/util/tests/test_invert.py | portugueslab/scikit-image | 0fa3bcb118bb208a0cc7d3e8b96cd96c1ce7a75b | [
"BSD-3-Clause"
] | 2 | 2019-06-16T06:38:28.000Z | 2021-12-19T11:44:48.000Z | import numpy as np
from skimage import dtype_limits
from skimage.util import invert
from skimage._shared.testing import assert_array_equal
def test_invert_bool():
    dtype = 'bool'
    image = np.zeros((3, 3), dtype=dtype)
    upper_dtype_limit = dtype_limits(image, clip_negative=False)[1]
    image[1, :] = upper_dtype_limit
    expected = np.zeros((3, 3), dtype=dtype) + upper_dtype_limit
    expected[1, :] = 0
    result = invert(image)
    assert_array_equal(expected, result)


def test_invert_uint8():
    dtype = 'uint8'
    image = np.zeros((3, 3), dtype=dtype)
    upper_dtype_limit = dtype_limits(image, clip_negative=False)[1]
    image[1, :] = upper_dtype_limit
    expected = np.zeros((3, 3), dtype=dtype) + upper_dtype_limit
    expected[1, :] = 0
    result = invert(image)
    assert_array_equal(expected, result)


def test_invert_int8():
    dtype = 'int8'
    image = np.zeros((3, 3), dtype=dtype)
    upper_dtype_limit = dtype_limits(image, clip_negative=False)[1]
    image[1, :] = upper_dtype_limit
    expected = np.zeros((3, 3), dtype=dtype) + upper_dtype_limit
    expected[1, :] = 0
    result = invert(image)
    assert_array_equal(expected, result)


def test_invert_float64():
    dtype = 'float64'
    image = np.zeros((3, 3), dtype=dtype)
    upper_dtype_limit = dtype_limits(image, clip_negative=False)[1]
    image[1, :] = upper_dtype_limit
    expected = np.zeros((3, 3), dtype=dtype) + upper_dtype_limit
    expected[1, :] = 0
    result = invert(image)
    assert_array_equal(expected, result)
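The four tests above are identical except for the dtype string, so they could be collapsed into one loop (or, in a pytest suite, `@pytest.mark.parametrize('dtype', ['bool', 'uint8', 'int8', 'float64'])`). A self-contained sketch of that consolidation; `DTYPE_MAX` and `toy_invert` are simplified stand-ins, not skimage's `dtype_limits`/`invert`:

```python
# Illustrative per-dtype upper limits standing in for dtype_limits().
DTYPE_MAX = {'bool': 1, 'uint8': 255, 'int8': 127, 'float64': 1.0}

def toy_invert(values, dtype):
    # Toy inversion: subtract each value from the dtype's upper limit.
    upper = DTYPE_MAX[dtype]
    return [upper - v for v in values]

def test_invert_all_dtypes():
    # One parametrized body replaces four copy-pasted test functions.
    for dtype, upper in DTYPE_MAX.items():
        image = [0, upper, 0]
        expected = [upper, 0, upper]
        assert toy_invert(image, dtype) == expected
```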
| 30.72 | 67 | 0.684896 | 218 | 1,536 | 4.587156 | 0.151376 | 0.12 | 0.18 | 0.072 | 0.811 | 0.811 | 0.811 | 0.811 | 0.811 | 0.811 | 0 | 0.032077 | 0.188151 | 1,536 | 49 | 68 | 31.346939 | 0.769848 | 0 | 0 | 0.7 | 0 | 0 | 0.013021 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0c38d379e6c8a1701d5f7184038b977ce7e746bb | 82 | py | Python | emoji_puncher/entity/__init__.py | GIider/EmojiPuncher | 87f93df7b647d1ddb53d7fe6cd579b7c2cd57071 | [
"MIT"
] | null | null | null | emoji_puncher/entity/__init__.py | GIider/EmojiPuncher | 87f93df7b647d1ddb53d7fe6cd579b7c2cd57071 | [
"MIT"
] | null | null | null | emoji_puncher/entity/__init__.py | GIider/EmojiPuncher | 87f93df7b647d1ddb53d7fe6cd579b7c2cd57071 | [
"MIT"
] | null | null | null | # coding=utf-8
from .action import *
from .platform import *
from .player import * | 20.5 | 23 | 0.731707 | 12 | 82 | 5 | 0.666667 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014493 | 0.158537 | 82 | 4 | 24 | 20.5 | 0.855072 | 0.146341 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0c3c829d7ad786ca00eaa554d86c23723e183f50 | 72 | py | Python | mnist_py/models/__init__.py | vijayagril/mnist_cnn_cuda | ecd088f3b46cae23dc986dea8926615c85cf631c | [
"MIT"
] | null | null | null | mnist_py/models/__init__.py | vijayagril/mnist_cnn_cuda | ecd088f3b46cae23dc986dea8926615c85cf631c | [
"MIT"
] | 2 | 2018-11-25T17:06:04.000Z | 2018-12-16T12:14:02.000Z | mnist_py/models/__init__.py | boczekbartek/mnist_cnn_cuda | ecd088f3b46cae23dc986dea8926615c85cf631c | [
"MIT"
] | 1 | 2021-09-05T17:12:49.000Z | 2021-09-05T17:12:49.000Z | from .big_cnn import *
from .small_cnn import *
from .basic_nn import *
0c3f88b18b2bb93dc414a8a10b2d3201112377d6 | 3,129 | py | Python | mayan/apps/ocr/tests/test_document_type_api.py | atitaya1412/Mayan-EDMS | bda9302ba4b743e7d829ad118b8b836221888172 | [
"Apache-2.0"
] | 343 | 2015-01-05T14:19:35.000Z | 2018-12-10T19:07:48.000Z | mayan/apps/ocr/tests/test_document_type_api.py | atitaya1412/Mayan-EDMS | bda9302ba4b743e7d829ad118b8b836221888172 | [
"Apache-2.0"
] | 191 | 2015-01-03T00:48:19.000Z | 2018-11-30T09:10:25.000Z | mayan/apps/ocr/tests/test_document_type_api.py | atitaya1412/Mayan-EDMS | bda9302ba4b743e7d829ad118b8b836221888172 | [
"Apache-2.0"
] | 257 | 2019-05-14T10:26:37.000Z | 2022-03-30T03:37:36.000Z | from rest_framework import status
from mayan.apps.documents.tests.mixins.document_mixins import DocumentTestMixin
from mayan.apps.rest_api.tests.base import BaseAPITestCase
from ..permissions import permission_document_type_ocr_setup
from .mixins import DocumentTypeOCRSettingsAPIViewTestMixin
class DocumentTypeOCRSettingsAPIViewTestCase(
    DocumentTestMixin, DocumentTypeOCRSettingsAPIViewTestMixin,
    BaseAPITestCase
):
    auto_upload_test_document = False

    def test_document_type_ocr_settings_details_api_view_no_permission(self):
        self._clear_events()

        response = self._request_test_document_type_ocr_settings_details_api_view()
        self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)

        events = self._get_test_events()
        self.assertEqual(events.count(), 0)

    def test_document_type_ocr_settings_details_api_view_with_access(self):
        self.grant_access(
            obj=self.test_document_type,
            permission=permission_document_type_ocr_setup
        )

        self._clear_events()

        response = self._request_test_document_type_ocr_settings_details_api_view()
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data, {'auto_ocr': False})

        events = self._get_test_events()
        self.assertEqual(events.count(), 0)

    def test_document_type_ocr_settings_patch_api_view_no_permission(self):
        self._clear_events()

        response = self._request_test_document_type_ocr_settings_patch_api_view()
        self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)

        events = self._get_test_events()
        self.assertEqual(events.count(), 0)

    def test_document_type_ocr_settings_patch_api_view_with_access(self):
        self.grant_access(
            obj=self.test_document_type,
            permission=permission_document_type_ocr_setup
        )

        self._clear_events()

        response = self._request_test_document_type_ocr_settings_patch_api_view()
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data, {'auto_ocr': True})

        events = self._get_test_events()
        self.assertEqual(events.count(), 0)

    def test_document_type_ocr_settings_put_api_view_no_permission(self):
        self._clear_events()

        response = self._request_test_document_type_ocr_settings_put_api_view()
        self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)

        events = self._get_test_events()
        self.assertEqual(events.count(), 0)

    def test_document_type_ocr_settings_put_api_view_with_access(self):
        self.grant_access(
            obj=self.test_document_type,
            permission=permission_document_type_ocr_setup
        )

        self._clear_events()

        response = self._request_test_document_type_ocr_settings_put_api_view()
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data, {'auto_ocr': True})

        events = self._get_test_events()
        self.assertEqual(events.count(), 0)
| 35.556818 | 83 | 0.742729 | 382 | 3,129 | 5.578534 | 0.146597 | 0.106992 | 0.112623 | 0.106992 | 0.825903 | 0.811825 | 0.811825 | 0.811825 | 0.811825 | 0.791178 | 0 | 0.009393 | 0.183445 | 3,129 | 87 | 84 | 35.965517 | 0.824658 | 0 | 0 | 0.672131 | 0 | 0 | 0.00767 | 0 | 0 | 0 | 0 | 0 | 0.245902 | 1 | 0.098361 | false | 0 | 0.081967 | 0 | 0.213115 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a74d95ffc79da8ec60c74ca704dc9a04522d7b68 | 252 | py | Python | send_one.py | MehmedGIT/P4_Copy_To_CPU | 714ee604b8f98e71a68318106bfed8de6f310abb | [
"Apache-2.0"
] | 1 | 2019-05-21T22:11:25.000Z | 2019-05-21T22:11:25.000Z | send_one.py | MehmedGIT/P4_Copy_To_CPU | 714ee604b8f98e71a68318106bfed8de6f310abb | [
"Apache-2.0"
] | null | null | null | send_one.py | MehmedGIT/P4_Copy_To_CPU | 714ee604b8f98e71a68318106bfed8de6f310abb | [
"Apache-2.0"
] | 3 | 2018-03-23T01:58:47.000Z | 2021-04-25T07:08:42.000Z | from scapy.all import *
import sys
p = Ether(src="00:00:00:00:01:00", dst="00:00:00:00:00:01") / IP(src="10.0.1.0", dst="10.0.0.1") / TCP(flags='CE') / "< P1 from Veth8: 10.0.1.0 --> 10.0.0.1!>"
# p.show()
hexdump(p)
# ls(p)
sendp(p, iface = "veth8")
| 28 | 159 | 0.571429 | 57 | 252 | 2.526316 | 0.438596 | 0.194444 | 0.208333 | 0.166667 | 0.138889 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214612 | 0.130952 | 252 | 8 | 160 | 31.5 | 0.442922 | 0.055556 | 0 | 0 | 0 | 0.2 | 0.412766 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a7771b6a3854441916163339a898c369011acbc5 | 2,640 | py | Python | tests/test_inspector.py | DalavanCloud/Logger-1 | 423cab2e8d68b4d715c12fd3339e51319ffd6e1d | [
"BSD-2-Clause"
] | 1 | 2019-03-16T04:11:23.000Z | 2019-03-16T04:11:23.000Z | tests/test_inspector.py | DalavanCloud/Logger-1 | 423cab2e8d68b4d715c12fd3339e51319ffd6e1d | [
"BSD-2-Clause"
] | null | null | null | tests/test_inspector.py | DalavanCloud/Logger-1 | 423cab2e8d68b4d715c12fd3339e51319ffd6e1d | [
"BSD-2-Clause"
] | null | null | null | import unittest
from logger import inspector
from mocker import ANY, MockerTestCase
class TestInspectorTests(MockerTestCase):

    def test_is_valid_filename_node1(self):
        insp = inspector.Inspector('/var/www/scielo.br/2015-12-30_scielo.br.1.log.gz')
        self.assertTrue(insp._is_valid_filename())
        expected = {
            'date': '2015-12-30',
            'collection': 'br'
        }
        self.assertEqual(expected, insp._parsed_fn.groupdict())

    def test_is_valid_filename(self):
        insp = inspector.Inspector('/var/www/scielo.br/2015-12-30_scielo.br.log.gz')
        self.assertTrue(insp._is_valid_filename())
        expected = {
            'date': '2015-12-30',
            'collection': 'br'
        }
        self.assertEqual(expected, insp._parsed_fn.groupdict())

    def test_is_valid_filename_false(self):
        insp = inspector.Inspector('/var/www/scielo.br/2015-12-30_scilo.br.log.gz')
        self.assertFalse(insp._is_valid_filename())

    def test_is_valid_date_in_filename(self):
        insp = inspector.Inspector('/var/www/scielo.br/2015-12-30_scielo.br.log.gz')
        self.assertTrue(insp._is_valid_date())

    def test_is_valid_date_in_filename_false(self):
        insp = inspector.Inspector('/var/www/scielo.br/2015-31-12_scielo.br.log.gz')
        self.assertFalse(insp._is_valid_date())

    def test_is_valid_collection_in_filename(self):
        _insp = self.mocker.patch(inspector.Inspector)
        _insp.collections
        self.mocker.result({'br': None})
        self.mocker.replay()
        insp = inspector.Inspector('/var/www/scielo.br/2015-12-30_scielo.br.log.gz')
        self.assertTrue(insp._is_valid_collection())

    def test_is_invalid_collection_in_filename(self):
        _insp = self.mocker.patch(inspector.Inspector)
        _insp.collections
        self.mocker.result({'br': None})
        self.mocker.replay()
        insp = inspector.Inspector('/var/www/scielo.br/2015-12-30_scielo.xxx.log.gz')
        self.assertFalse(insp._is_valid_collection())

    def test_is_valid_source_directory(self):
        insp = inspector.Inspector('/var/www/scielo.br/2015-12-30_scielo.br.log.gz')
        self.assertTrue(insp._is_valid_source_directory())

    def test_is_valid_source_directory_false(self):
        insp = inspector.Inspector('/var/www/scielo.br/2015-12-30_sciel.br.log.gz')
        self.assertFalse(insp._is_valid_source_directory())

    # Renamed: this method previously duplicated the name above, which
    # silently shadowed the earlier test so it never ran.
    def test_is_valid_source_directory_false_wrong_directory(self):
        insp = inspector.Inspector('/var/www/scielo.pepsic/2015-12-30_scielo.br.log.gz')
        self.assertFalse(insp._is_valid_source_directory())
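The tests above imply a filename pattern of the form `<YYYY-MM-DD>_scielo.<collection>[.<part>].log.gz`. A hypothetical reconstruction of that pattern (the regex and `parse_filename` helper are inferred from the expected `groupdict`, not copied from the Inspector; date plausibility, e.g. rejecting `2015-31-12`, is a separate check in the real class):

```python
import os
import re

# Inferred pattern: date, literal "_scielo.", collection code, optional
# numeric part index, then ".log.gz".
FN_PATTERN = re.compile(
    r'^(?P<date>\d{4}-\d{2}-\d{2})_scielo\.(?P<collection>\w+)(\.\d+)?\.log\.gz$'
)

def parse_filename(path):
    """Return {'date': ..., 'collection': ...} or None if invalid."""
    match = FN_PATTERN.match(os.path.basename(path))
    return match.groupdict() if match else None
```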
| 35.2 | 88 | 0.684848 | 354 | 2,640 | 4.833333 | 0.146893 | 0.077732 | 0.051432 | 0.146113 | 0.910579 | 0.897721 | 0.886032 | 0.830508 | 0.785506 | 0.759205 | 0 | 0.04556 | 0.185227 | 2,640 | 74 | 89 | 35.675676 | 0.749884 | 0 | 0 | 0.5 | 0 | 0.019231 | 0.197348 | 0.176136 | 0 | 0 | 0 | 0 | 0.230769 | 1 | 0.192308 | false | 0 | 0.057692 | 0 | 0.269231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a7bac578e643faadb42ca53cac3ac013f9a9339c | 235 | py | Python | dotpy/components/attribute.py | Francesco17/dotpy | 36c8a94a80bd526226032a81d8f8e955b436dbbb | [
"MIT"
] | null | null | null | dotpy/components/attribute.py | Francesco17/dotpy | 36c8a94a80bd526226032a81d8f8e955b436dbbb | [
"MIT"
] | null | null | null | dotpy/components/attribute.py | Francesco17/dotpy | 36c8a94a80bd526226032a81d8f8e955b436dbbb | [
"MIT"
] | null | null | null | class Attribute:
def __init__(self, first_attr, second_attr):
self.first_attr = first_attr
self.second_attr = second_attr
def __str__(self):
return "{0} = {1}".format(self.first_attr, self.second_attr) | 29.375 | 68 | 0.668085 | 32 | 235 | 4.40625 | 0.40625 | 0.255319 | 0.276596 | 0.269504 | 0.326241 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010929 | 0.221277 | 235 | 8 | 68 | 29.375 | 0.759563 | 0 | 0 | 0 | 0 | 0 | 0.038136 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
a7cf20ac261246392b03a91d96bc8954d241fc8a | 207 | py | Python | django_bitcoin/context_processors.py | mladenangelov/django-bitcoin | 78bd509d815cfa27dbfd0a743aa742af644d27bf | [
"MIT"
] | 63 | 2015-01-16T19:59:17.000Z | 2022-03-18T22:39:38.000Z | django_bitcoin/context_processors.py | mladenangelov/django-bitcoin | 78bd509d815cfa27dbfd0a743aa742af644d27bf | [
"MIT"
] | 10 | 2019-12-26T17:31:31.000Z | 2022-03-21T22:17:33.000Z | django_bitcoin/context_processors.py | texib/bitcoin-zoo | 69dc3443a5132ef02f340676a985e4ad9a244eed | [
"MIT"
] | 64 | 2015-01-14T01:22:14.000Z | 2022-03-22T18:53:18.000Z | from django_bitcoin.models import bitcoinprice_eur, bitcoinprice_usd
def bitcoinprice(request):
return {'bitcoinprice_eur': bitcoinprice_eur(),
'bitcoinprice_usd': bitcoinprice_usd(),
}
| 29.571429 | 68 | 0.743961 | 21 | 207 | 7 | 0.52381 | 0.306122 | 0.55102 | 0.408163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164251 | 207 | 6 | 69 | 34.5 | 0.849711 | 0 | 0 | 0 | 0 | 0 | 0.154589 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
ac04b9d9ebd4955c83d93fbe958b130d159a2af4 | 191 | py | Python | nitorch/cli/misc/__init__.py | balbasty/nitorch | d30c3125a8a66ea1434f2b39ed03338afd9724b4 | [
"MIT"
] | 46 | 2020-07-31T10:14:05.000Z | 2022-03-24T12:51:46.000Z | nitorch/cli/misc/__init__.py | balbasty/nitorch | d30c3125a8a66ea1434f2b39ed03338afd9724b4 | [
"MIT"
] | 36 | 2020-10-06T19:01:38.000Z | 2022-02-03T18:07:35.000Z | nitorch/cli/misc/__init__.py | balbasty/nitorch | d30c3125a8a66ea1434f2b39ed03338afd9724b4 | [
"MIT"
] | 6 | 2021-01-05T14:59:05.000Z | 2021-11-18T18:26:45.000Z | from . import chunk
from . import convert
from . import crop
from . import extract_patches
from . import info
from . import inpaint
from . import pad
from . import pool
from . import unstack
| 19.1 | 29 | 0.764398 | 28 | 191 | 5.178571 | 0.428571 | 0.62069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188482 | 191 | 9 | 30 | 21.222222 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ac1bcce44f28d83983b8a7d480780e47b3aa8fbe | 10,661 | py | Python | datasets.py | BenCowen/pytorch_tutorial | 65bd8c5a041ce3973587760a45db70a91d8a1708 | [
"MIT"
] | null | null | null | datasets.py | BenCowen/pytorch_tutorial | 65bd8c5a041ce3973587760a45db70a91d8a1708 | [
"MIT"
] | null | null | null | datasets.py | BenCowen/pytorch_tutorial | 65bd8c5a041ce3973587760a45db70a91d8a1708 | [
"MIT"
] | null | null | null | """
Creates datasets and dataloaders for various torchvision classes.
"""
import torch
from torchvision import datasets, transforms
from torch.utils.data.sampler import SubsetRandomSampler
from random import shuffle
def load_dataset(namedataset='mnist', batch_size=200,
                 vectorize=False, num_workers=1, valid_size=-1,
                 data_augmentation=False, noise_sigma=0.0):
    '''
    Arguments:
      namedataset: see below
      batch_size : number of samples per batch
      vectorize  : flattens input tensors if True
      num_workers: number of worker processes for data loading
      valid_size : number of samples for the validation set; if <= 0,
                   the test set is returned instead
      noise_sigma: adds noise to the test set only
                   (only implemented for noisy_mnist, as an example)
      data_augmentation: use data augmentation if True
                   (only implemented for cifar10, as an example)

    Datasets:
      mnist: standard MNIST
      valid_mnist: returns a validation set instead of the test set
      noisy_mnist: EXPERIMENTAL: adds noise to the test set only
      fashion_mnist
      valid_fashion_mnist
      cifar10
    '''
    # Load mnist dataset
    if namedataset == 'mnist':
        DIR_DATASET = '~/data/mnist'
        transform_list = [
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))]
        if vectorize:
            transform_list.append(transforms.Lambda(lambda x: x.view(x.size(1)*x.size(2))))
        transform = transforms.Compose(transform_list)
        trainset = datasets.MNIST(DIR_DATASET, train=True, download=True, transform=transform)
        train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
        testset = datasets.MNIST(DIR_DATASET, train=False, transform=transform)
        test_loader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
        classes = tuple(range(10))
        n_inputs = 784
    elif namedataset == 'valid_mnist':
        if valid_size < 1:
            raise ValueError('Validation requested with validation size = {}'.format(valid_size))
        DIR_DATASET = '~/data/mnist'
        # Define the transforms applied to every sample.
        # Desired mean and std dev of data:
        MEAN = 0.1307
        STD = 0.3081
        transform_list = [
            transforms.ToTensor(),
            transforms.Normalize((MEAN,), (STD,))]
        if vectorize:
            transform_list.append(transforms.Lambda(lambda x: x.view(x.size(1)*x.size(2))))
        transform = transforms.Compose(transform_list)
        # Now load training data and make two dataloaders (train/validation) for it.
        allTrainData = datasets.MNIST(DIR_DATASET, train=True, download=True, transform=transform)
        num_train = len(allTrainData)
        indices = list(range(num_train))
        # First shuffle the indices:
        shuffle(indices)
        # Assign samples before the split to train; after the split to validation.
        split = num_train - valid_size
        print('nTrain=' + str(num_train))
        print('validsize=' + str(valid_size))
        print('split=' + str(split))
        train_idx, valid_idx = indices[:split], indices[split:]
        # Random samplers for the relevant indices only.
        train_sampler = SubsetRandomSampler(train_idx)
        valid_sampler = SubsetRandomSampler(valid_idx)
        # Finally, instantiate the data loaders.
        train_loader = torch.utils.data.DataLoader(allTrainData,
                                                   sampler=train_sampler,
                                                   batch_size=batch_size,
                                                   num_workers=num_workers)
        train_loader.numSamples = len(train_idx)
        # THIS IS THE VALIDATION SET:
        test_loader = torch.utils.data.DataLoader(allTrainData,
                                                  sampler=valid_sampler,
                                                  batch_size=batch_size,
                                                  num_workers=num_workers)
        test_loader.numSamples = len(valid_idx)
        print('train size = ' + str(train_loader.numSamples))
        print('valid size = ' + str(test_loader.numSamples))
        classes = tuple(range(10))
        n_inputs = 784
    elif namedataset == 'noisy_mnist':
        DIR_DATASET = '~/data/mnist'
        # Desired mean and std dev of data:
        MEAN = 0.1307
        STD = 0.3081
        transform_list = [
            transforms.ToTensor(),
            transforms.Normalize((MEAN,), (STD,))]
        if vectorize:
            transform_list.append(transforms.Lambda(lambda x: x.view(x.size(1)*x.size(2))))
        transform = transforms.Compose(transform_list)
        trainset = datasets.MNIST(DIR_DATASET, train=True, download=True, transform=transform)
        train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
        # Add Gaussian noise to the test set. Note: Compose keeps a
        # reference to transform_list, so this append also extends
        # `transform` for the test set created below.
        transform_list.append(transforms.Lambda(
            lambda x: x + MEAN + noise_sigma*STD*torch.randn(x.size())))
        testset = datasets.MNIST(DIR_DATASET, train=False, transform=transform)
        test_loader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
        classes = tuple(range(10))
        n_inputs = 784
    # Load fashion_mnist dataset
    elif namedataset == 'fashion_mnist':
        DIR_DATASET = '~/data/fashion_mnist'
        transform_list = [
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))]
        if vectorize:
            transform_list.append(transforms.Lambda(lambda x: x.view(x.size(1)*x.size(2))))
        transform = transforms.Compose(transform_list)
        trainset = datasets.FashionMNIST(DIR_DATASET, train=True, download=True, transform=transform)
        train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
        testset = datasets.FashionMNIST(DIR_DATASET, train=False, transform=transform)
        test_loader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
        classes = tuple(range(10))
        n_inputs = 784
    elif namedataset == 'valid_fashion_mnist':
        if valid_size < 1:
            raise ValueError('Validation requested with validation size = {}'.format(valid_size))
        DIR_DATASET = '~/data/fashion_mnist'
        # Define the transforms applied to every sample.
        # Desired mean and std dev of data:
        MEAN = 0.1307
        STD = 0.3081
        transform_list = [
            transforms.ToTensor(),
            transforms.Normalize((MEAN,), (STD,))]
        if vectorize:
            transform_list.append(transforms.Lambda(lambda x: x.view(x.size(1)*x.size(2))))
        transform = transforms.Compose(transform_list)
        # Now load training data and make two dataloaders (train/validation) for it.
        allTrainData = datasets.FashionMNIST(DIR_DATASET, train=True, download=True, transform=transform)
        num_train = len(allTrainData)
        indices = list(range(num_train))
        # First shuffle the indices:
        shuffle(indices)
        # Assign samples before the split to train; after the split to validation.
        split = num_train - valid_size
        print('nTrain=' + str(num_train))
        print('validsize=' + str(valid_size))
        print('split=' + str(split))
        train_idx, valid_idx = indices[:split], indices[split:]
        # Random samplers for the relevant indices only.
        train_sampler = SubsetRandomSampler(train_idx)
        valid_sampler = SubsetRandomSampler(valid_idx)
        # Finally, instantiate the data loaders.
        train_loader = torch.utils.data.DataLoader(allTrainData,
                                                   sampler=train_sampler,
                                                   batch_size=batch_size,
                                                   num_workers=num_workers)
        train_loader.numSamples = len(train_idx)
        # THIS IS THE VALIDATION SET:
        test_loader = torch.utils.data.DataLoader(allTrainData,
                                                  sampler=valid_sampler,
                                                  batch_size=batch_size,
                                                  num_workers=num_workers)
        test_loader.numSamples = len(valid_idx)
        print('train size = ' + str(train_loader.numSamples))
        print('valid size = ' + str(test_loader.numSamples))
        classes = tuple(range(10))
        n_inputs = 784
# Load cifar10 (preprocessing from https://github.com/kuangliu/pytorch-cifar)
elif namedataset == 'cifar10':
DIR_DATASET = '~/data/cifar10'
transform_list = [
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))]
if vectorize:
transform_list.append(
transforms.Lambda(lambda x: x.view(x.size(0)*x.size(1)*x.size(2))))
transform_test = transforms.Compose(transform_list)
if data_augmentation:
transform_train_list = [
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))]
if vectorize:
transform_train_list.append(
transforms.Lambda(lambda x: x.view(x.size(0)*x.size(1)*x.size(2))))
transform_train = transforms.Compose(transform_train_list)
else:
transform_train = transform_test
trainset = datasets.CIFAR10(DIR_DATASET, train=True, download=True, transform=transform_train)
train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
testset = datasets.CIFAR10(DIR_DATASET, train=False, download=True, transform=transform_test)
test_loader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False, num_workers=num_workers)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
n_inputs = 3*32*32
else:
raise ValueError('Dataset {} not recognized'.format(namedataset))
return train_loader, test_loader, n_inputs, len(classes)
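The MNIST branch above consumes `train_idx`/`valid_idx` computed earlier in the function (outside this excerpt) and feeds them to `SubsetRandomSampler`. A minimal, framework-free sketch of that index-split step; the function name, seed, and the 20% validation fraction are assumptions for illustration, not taken from this file:

```python
import random


def split_indices(num_train, valid_frac=0.2, seed=0):
    """Shuffle dataset indices and split them into train/validation subsets,
    mirroring the lists that SubsetRandomSampler consumes above."""
    indices = list(range(num_train))
    random.Random(seed).shuffle(indices)
    split = int(num_train * valid_frac)
    valid_idx, train_idx = indices[:split], indices[split:]
    return train_idx, valid_idx


train_idx, valid_idx = split_indices(100, valid_frac=0.2)
print(len(train_idx), len(valid_idx))  # 80 20
```

Because both samplers draw from the same `allTrainData` object, the disjoint index lists are what keep the train and validation batches from overlapping.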

# file: lib/djasync/utils.py | repo: hdknr/djasync | license: BSD-2-Clause-FreeBSD
import shelve

def get_db(path):
    return shelve.open(path)


def get_db_dict(path):
    return dict(get_db(path))
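`get_db` wraps `shelve.open`, and `get_db_dict` materializes the shelf into a plain dict. A runnable sketch of the same two calls against a temporary file (the path and key are illustrative):

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "store")

with shelve.open(path) as db:  # the object get_db(path) returns
    db["answer"] = 42

with shelve.open(path) as db:
    snapshot = dict(db)        # what get_db_dict(path) computes

print(snapshot)  # {'answer': 42}
```

Note that `dict(shelf)` reads every entry eagerly, so `get_db_dict` trades the shelf's lazy lookups for an in-memory snapshot.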

# file: config.py | repo: VitaliKaiser/udacity_drl_p3_collab-compet | license: MIT
PATH_TO_TENNIS = "/YOUR/PATH/TO/TENNIS/HERE"

# file: jupyterlabpymolpysnips/Analysis/printBspartB.py | repo: MooersLab/pymolpysnips | license: MIT
cmd.do('iterate resi %{1:38 and altloc %{2:B, print resi, name, alt, b;')

# file: thrift/compiler/test/fixtures/py3/gen-py3lite/empty/lite_clients.py | repo: killight98/fbthrift | license: Apache-2.0
#
# Autogenerated by Thrift
#
# DO NOT EDIT
# @generated
#
import typing as _typing
import folly.iobuf as _fbthrift_iobuf
from thrift.py3lite.client import (
    AsyncClient as _fbthrift_py3lite_AsyncClient,
    SyncClient as _fbthrift_py3lite_SyncClient,
    Client as _fbthrift_py3lite_Client,
)
import thrift.py3lite.exceptions as _fbthrift_py3lite_exceptions
import thrift.py3lite.types as _fbthrift_py3lite_types
import empty.lite_types


class NullService(_fbthrift_py3lite_Client["NullService.Async", "NullService.Sync"]):
    class Async(_fbthrift_py3lite_AsyncClient):
        pass

    class Sync(_fbthrift_py3lite_SyncClient):
        pass

# file: dice/__init__.py | repo: skritch/dice | license: MIT
from .dice import Dice, roll, d, d4, d6, d8, d10, d12, d20, d100

# file: nanome/api/macro/__init__.py | repo: nanome-ai/nanome-plugin-api | license: MIT
from .macro import Macro

# file: regulation/regutil/__init__.py | repo: RNatvik/rntools | license: MIT
import regutil.controllers as controllers
import regutil.util as util

# file: dashboard-project/dashboard/settings/__init__.py | repo: Sheepzez/dumfries-economic-dashboard | license: Unlicense
from dev import *

# file: repos/system_upgrade/common/libraries/config/mock_configs.py | repo: sm00th/leapp-repository | license: Apache-2.0
"""
This is not regular library.
The library is supposed to be used only for testing purposes. Import of the
library is expected only inside test files.
"""
from leapp.models import IPUConfig, EnvVar, OSRelease, Version
CONFIG = IPUConfig(
    leapp_env_vars=[EnvVar(name='LEAPP_DEVEL', value='0')],
    os_release=OSRelease(
        release_id='rhel',
        name='Red Hat Enterprise Linux Server',
        pretty_name='RHEL',
        version='7.6 (Maipo)',
        version_id='7.6'
    ),
    version=Version(
        source='7.6',
        target='8.0'
    ),
    architecture='x86_64',
    kernel='3.10.0-957.43.1.el7.x86_64',
)

CONFIG_NO_NETWORK_RENAMING = IPUConfig(
    leapp_env_vars=[EnvVar(name='LEAPP_DEVEL', value='0'), EnvVar(name='LEAPP_NO_NETWORK_RENAMING', value='1')],
    os_release=OSRelease(
        release_id='rhel',
        name='Red Hat Enterprise Linux Server',
        pretty_name='RHEL',
        version='7.6 (Maipo)',
        version_id='7.6'
    ),
    version=Version(
        source='7.6',
        target='8.0'
    ),
    architecture='x86_64',
    kernel='3.10.0-957.43.1.el7.x86_64',
)

CONFIG_ALL_SIGNED = IPUConfig(
    leapp_env_vars=[EnvVar(name='LEAPP_DEVEL_RPMS_ALL_SIGNED', value='1')],
    os_release=OSRelease(
        release_id='rhel',
        name='Red Hat Enterprise Linux Server',
        pretty_name='RHEL',
        version='7.6 (Maipo)',
        version_id='7.6'
    ),
    version=Version(
        source='7.6',
        target='8.0'
    ),
    architecture='x86_64',
    kernel='3.10.0-957.43.1.el7.x86_64',
)

CONFIG_S390X = IPUConfig(
    os_release=OSRelease(
        release_id='rhel',
        name='Red Hat Enterprise Linux Server',
        pretty_name='RHEL',
        version='7.6 (Maipo)',
        version_id='7.6'
    ),
    version=Version(
        source='7.6',
        target='8.0'
    ),
    architecture='s390x',
    kernel='3.10.0-957.43.1.el7.x86_64',
)
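The four `IPUConfig` objects above repeat the same `os_release`, `version`, and `kernel` fields and differ only in environment variables and architecture. A hypothetical factory helper (not part of the repo; plain dicts stand in for the leapp model classes) shows how that duplication could be collapsed:

```python
def make_config(leapp_env_vars=(), architecture='x86_64'):
    """Build one mock config; only the fields that vary are parameters."""
    return {
        'leapp_env_vars': list(leapp_env_vars),
        'os_release': {'release_id': 'rhel', 'version_id': '7.6'},
        'version': {'source': '7.6', 'target': '8.0'},
        'architecture': architecture,
        'kernel': '3.10.0-957.43.1.el7.x86_64',
    }


# Mirrors CONFIG_S390X above: everything shared, only the architecture changes.
CONFIG_S390X_DICT = make_config(architecture='s390x')
print(CONFIG_S390X_DICT['architecture'])  # s390x
```

The explicit, repeated literals in the real file are arguably intentional for test readability, so this is a design alternative rather than a correction.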

# file: view_user_permission/admin.py | repo: erfan-mehraban/vup | license: MIT
from django.contrib import admin
from .models import *
admin.site.register(Permission)
admin.site.register(Group)

# file: SayuBot/helper/mongo_connect.py | repo: TaprisSugarbell/SayUbot | license: MIT
import os
from dotenv import load_dotenv
from .mongo_db import *
# Variables
load_dotenv()
URI = os.getenv("URI")
async def confirm(user_db, data=None):
    if data is None:
        data = {}
    return user_db.find(data)


async def add_(user_db, data=None):
    if data is None:
        data = {}
    return user_db.insert_one(data)


async def update_(user_db, old_data=None, new_data=None):
    if old_data is None:
        old_data = {}
    if new_data is None:
        new_data = {}
    return user_db.update_one(old_data, new_data)


async def remove_(user_db, data=None):
    if data is None:
        data = {}
    return user_db.delete_one(data)


def confirm_ofdb(user_db, data=None):
    if data is None:
        data = {}
    return user_db.find(data)


def add_ofdb(user_db, data=None):
    if data is None:
        data = {}
    return user_db.insert_one(data)


def update_ofdb(user_db, old_data=None, new_data=None):
    if old_data is None:
        old_data = {}
    if new_data is None:
        new_data = {}
    return user_db.update_one(old_data, new_data)


def remove_ofdb(user_db, data=None):
    if data is None:
        data = {}
    return user_db.delete_one(data)


def remove_many(user_db, data=None):
    if data is None:
        data = {}
    return user_db.delete_many(data)
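Every helper above uses the `data=None` sentinel followed by `data = {}` instead of writing `data={}` directly in the signature. That pattern matters in Python: a `{}` default is evaluated once and shared across calls. A self-contained illustration of the difference (the function names here are illustrative):

```python
def bad(data={}):
    # The default dict is created once at definition time and reused.
    data['n'] = data.get('n', 0) + 1
    return data


def good(data=None):
    # The pattern used by the helpers above: a fresh dict per call.
    if data is None:
        data = {}
    data['n'] = data.get('n', 0) + 1
    return data


first = bad()['n']   # 1
second = bad()['n']  # 2 -- the shared default carried state over
print(first, second)             # 1 2
print(good()['n'], good()['n'])  # 1 1
```

With query dicts destined for MongoDB calls like `find` and `delete_many`, a shared mutable default could silently widen or narrow later queries, so the sentinel form is the safe choice.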

# file: intersight/resources/ntppolicy.py | repo: sambyers/devnet_learning | license: MIT
class NtpPolicy(object):
    def __init__(self, rest_client):
        self.rest_client = rest_client
        self.path = '/api/v1/ntp/Policies'

    def get(self):
        return self.rest_client.get(self.path)

    def get_byid(self, id):
        return self.rest_client.get(self.path + f'/{id}')
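`NtpPolicy` only requires that its `rest_client` expose a `get(path)` method, so it can be exercised without any Intersight access. A sketch with a stub client; both the stub and the inline copy of the class are illustrative, not part of the repo:

```python
class StubRestClient:
    """Stand-in REST client: records the requested path instead of issuing HTTP."""
    def get(self, path):
        return {'requested': path}


class NtpPolicy(object):
    # Inline copy of the class above, so this sketch runs on its own.
    def __init__(self, rest_client):
        self.rest_client = rest_client
        self.path = '/api/v1/ntp/Policies'

    def get(self):
        return self.rest_client.get(self.path)

    def get_byid(self, id):
        return self.rest_client.get(self.path + f'/{id}')


policies = NtpPolicy(StubRestClient())
print(policies.get())            # {'requested': '/api/v1/ntp/Policies'}
print(policies.get_byid('abc'))  # {'requested': '/api/v1/ntp/Policies/abc'}
```

Injecting the client through the constructor is what makes this substitution possible; tests can swap in a stub while production code passes the real REST client.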

# file: tests/testdata.py | repo: Monero-Monitor/moneropy | license: BSD-3-Clause
# These seeds, sks, vks, and addrs are all valid and derived from one another:
valid_seeds = [
['zebra','oasis','oncoming','wobbly','yawning','tiers','reef','friendly','maze','shyness','unknown','eavesdrop','zapped','lumber','often','spiders','twang','afloat','elite','vein','auctions','ingested','demonstrate','diplomat','vein'],
['urgent','loaded','linen','uncle','occur','jockey','cynical','himself','keyboard','lectures','tobacco','racetrack','empty','diode','erosion','merger','upright','wagtail','eternal','getting','dangerous','dazed','speedy','stacking','racetrack'],
['womanly','afield','obedient','quote','square','apex','sphere','poetry','wives','agony','axes','bowling','narrate','coal','aided','vivid','sowed','ignore','sensible','randomly','muddy','oxygen','onto','ouch','wives'],
['owed','cement','roomy','ought','fugitive','wrong','island','avidly','catch','zippers','layout','sovereign','suitcase','rogue','fewest','doorway','unlikely','feel','lion','bugs','second','tomorrow','diplomat','edited','ought'],
['wrong','otter','jukebox','twofold','reorder','doing','idled','dosage','jigsaw','symptoms','tomorrow','umpire','justice','python','butter','opus','aisle','soda','punch','tuition','slower','emerge','island','joining','punch']
]
valid_sks = [
'641731042d78e2ddef3340371e51bf55c6ec8f1757d4d6222976d324e230cd02',
'e72462ff644844d902824e1597d5e7d5768ce80fe84e9e0a27c1d9d801258401',
'b112829329fcbc3ecc0c754e353e180afc2865dff2d0699aeae0eed938694e03',
'44bea1954de9e0abaff67d10a5dda652bf530590ab786363e478a0a196ca5909',
'235c87d42f9dd47ef455da3b3d41c806763e2f71c18a1ac4aac478e8e379b305'
]
valid_vks = [
'158b0dec091eeb4476422d26830c75794ae9a003015a523fdac75ed78cd3e309',
'357f5f7e8b6eab22c4e7fde17ca77d026fa2b9155c4d51239bdbe02a5ddea90c',
'729dc201f1370795789006c1caec15bad7022aaf63a2d8e1087bb069298baa09',
'7f0c51be394bf8ab629f952b77dbbd90b1f7df17cb228024173460cdf7489401',
'57e9d09d40114bd69a81feef0a67eedf097e500ab018de86d6f63eb2e7446503'
]
valid_addrs = [
'4495qPNxDGAT241zya1WdwG5YU6RQ6s7ZgGeQryFtooAMMxoqx2N2oNTjP5NTqDf9GMaZ52iS2Q6xhnWyW16cdi47MCsVRg',
'47Mov77LGqgRoRh6K6XVheSagWVRS7jkQLCR9jPQxTa8g2SrnwbWuMzKWRLyyBFsxn7gHJv15987MDMkYXCXGGvhKA7Qsx4',
'48fj5P3zky9FETVG144GWh2oxnEdBc45VFHLKgKQfZ7UdyJ5M7mDFxuEA12eBuD55RAwgX2jzFYfwjhukHavcLHW9vKn1VG',
'48vTj54ZtU7e6sqwcJY9uq2LApd3Zi6H23vmYFc3wMteS2QzJwi2Z1xCLVwMac55h2HnQAiYwZTceJbwMZJRrm3uNh76Hci',
'48oYzqzeGqY3Nfg6LG8HwS3uF1Y3vV2gfRH6ZMcnhhEmUgkL2mPSjtuSekenrYGkbp8RNvAvrtq3r7Ze4iPoBH3kFK9vbgP'
]
addr_vers = '12'
valid_addr_pubsks = [
'426a2b555065c79b8d6293c5dd18c25a25bae1a8b8c67ceac7484133e6798579',
'975e989ae39b7b9445ac7384edb7a598efe3fbfab6c0bd72c5372fadd86071e9',
'b9e8cd1f42a48c55166f75ead8293e0ad1c420f566b9c85562572936207557dd',
'c09d10f3c5f580ddd0765063d9246007f45ef025a76c7d117fe4e811fa78f395',
'bd785822c5e8330e30cc7e6e7abd3d11579da04e4131d091255172583059aea5'
]
valid_addr_pubvks = [
'bba3444bd48d5d9fcffa64d805e3977b07e2d420a2212df3d612a5dbcc676538',
'5096d3b5eedd396ea5c521456640fb27ebb5a222269eac49e1ddac7134735ea0',
'08613f96d197024ea651e8f226feb03b71aa82f487cb6eff518a30a3b6a2514f',
'9c66f7487c1bef43c64ee0ace763116456666a389eea3b693cd7670c3515a0c0',
'8501a7d7657332995b54357cc02c972c5cf5b2d1804d4d273c6f214854c9cf7e'
]
# These can be tested against valid_addrs for base58 encode/decode
decoded_addrs = [
'12426a2b555065c79b8d6293c5dd18c25a25bae1a8b8c67ceac7484133e6798579bba3444bd48d5d9fcffa64d805e3977b07e2d420a2212df3d612a5dbcc67653844ded707',
'12975e989ae39b7b9445ac7384edb7a598efe3fbfab6c0bd72c5372fadd86071e95096d3b5eedd396ea5c521456640fb27ebb5a222269eac49e1ddac7134735ea0efb2b899',
'12b9e8cd1f42a48c55166f75ead8293e0ad1c420f566b9c85562572936207557dd08613f96d197024ea651e8f226feb03b71aa82f487cb6eff518a30a3b6a2514f0eb176af',
'12c09d10f3c5f580ddd0765063d9246007f45ef025a76c7d117fe4e811fa78f3959c66f7487c1bef43c64ee0ace763116456666a389eea3b693cd7670c3515a0c043794fbf',
'12bd785822c5e8330e30cc7e6e7abd3d11579da04e4131d091255172583059aea58501a7d7657332995b54357cc02c972c5cf5b2d1804d4d273c6f214854c9cf7edd34d73c'
]
# These seeds have the wrong checksum. For test_mnemonic.py:
invalid_seeds = [
['zebra','oasis','oncoming','wobbly','yawning','tiers','reef','friendly','maze','shyness','unknown','eavesdrop','zapped','lumber','often','spiders','twang','afloat','elite','vein','auctions','ingested','demonstrate','diplomat','diplomat'],
['urgent','loaded','linen','uncle','occur','jockey','cynical','himself','keyboard','lectures','tobacco','racetrack','empty','diode','erosion','merger','upright','wagtail','eternal','getting','dangerous','dazed','speedy','stacking','dangerous'],
['womanly','afield','obedient','quote','square','apex','sphere','poetry','wives','agony','axes','bowling','narrate','coal','aided','vivid','sowed','ignore','sensible','randomly','muddy','oxygen','onto','ouch','oxygen'],
['owed','cement','roomy','ought','fugitive','wrong','island','avidly','catch','zippers','layout','sovereign','suitcase','rogue','fewest','doorway','unlikely','feel','lion','bugs','second','tomorrow','diplomat','edited','bugs'],
['wrong','otter','jukebox','twofold','reorder','doing','idled','dosage','jigsaw','symptoms','tomorrow','umpire','justice','python','butter','opus','aisle','soda','punch','tuition','slower','emerge','island','joining','tuition']
]
# For test_utils.py:
hexes = [
'641731042d78e2ddef3340371e51bf55c6ec8f1757d4d6222976d324e230cd02',
'e72462ff644844d902824e1597d5e7d5768ce80fe84e9e0a27c1d9d801258401',
'b112829329fcbc3ecc0c754e353e180afc2865dff2d0699aeae0eed938694e03',
'44bea1954de9e0abaff67d10a5dda652bf530590ab786363e478a0a196ca5909',
'235c87d42f9dd47ef455da3b3d41c806763e2f71c18a1ac4aac478e8e379b305'
]
ints = [
1267166726096927029789014970606765322553773056507777453263144086171181717348,
685792075545825044641993933091589991867261783922969064258564262430660240615,
1495478832876975752448424395286770096003750226651159965005740393562225382065,
4229463239790018874285924388021495332866381457501964753336570234542254833220,
2578671123209650673872865557522209656331423257703416053928268301946856102947
]
extra_hex = [
'01e0a4a7a6acf619da6ce1e2570c0c3439f5f809f101360859268648b4f8ec654e',
'012f72986164c41ae40c838d27b6923297552cea015bf73f9580be84c9cc3743b2',
'022100c80ab4d153c7fec09fd502096e41fbe421764616a3fc62483928c5a77b6d7ece015696990c2d33ef90235216684f7a44c4c2ff73460fdf01f1353003d6bcdf3774',
'0150141de1b549bf59d949cf03723b04a51a93203e2ae6211e2f173f6ab44ee1b90221003f829f11a7768559de4e0072baa81cfd861d810e23b81b2dd949258956458fb7de204edba8bb1837d21a611848a62b6a8a6fdf9a7b08ba21679fac6a117db56e2b35',
'0111e8f0ae2ff804561e718ee132a9a8f17b3b50abd23774cf023c93916e2da49702112c840e01000000000000000000000000000321002ad3ef991096aff1c59a234920c5531baf9e2efba420b293e1a40b9796b652bd'
]
extra_bin = [
[1, 224, 164, 167, 166, 172, 246, 25, 218, 108, 225, 226, 87, 12, 12, 52, 57, 245, 248, 9, 241, 1, 54, 8, 89, 38, 134, 72, 180, 248, 236, 101, 78],
[1, 47, 114, 152, 97, 100, 196, 26, 228, 12, 131, 141, 39, 182, 146, 50, 151, 85, 44, 234, 1, 91, 247, 63, 149, 128, 190, 132, 201, 204, 55, 67, 178],
[2, 33, 0, 200, 10, 180, 209, 83, 199, 254, 192, 159, 213, 2, 9, 110, 65, 251, 228, 33, 118, 70, 22, 163, 252, 98, 72, 57, 40, 197, 167, 123, 109, 126, 206, 1, 86, 150, 153, 12, 45, 51, 239, 144, 35, 82, 22, 104, 79, 122, 68, 196, 194, 255, 115, 70, 15, 223, 1, 241, 53, 48, 3, 214, 188, 223, 55, 116],
[1, 80, 20, 29, 225, 181, 73, 191, 89, 217, 73, 207, 3, 114, 59, 4, 165, 26, 147, 32, 62, 42, 230, 33, 30, 47, 23, 63, 106, 180, 78, 225, 185, 2, 33, 0, 63, 130, 159, 17, 167, 118, 133, 89, 222, 78, 0, 114, 186, 168, 28, 253, 134, 29, 129, 14, 35, 184, 27, 45, 217, 73, 37, 137, 86, 69, 143, 183, 222, 32, 78, 219, 168, 187, 24, 55, 210, 26, 97, 24, 72, 166, 43, 106, 138, 111, 223, 154, 123, 8, 186, 33, 103, 159, 172, 106, 17, 125, 181, 110, 43, 53],
[1, 17, 232, 240, 174, 47, 248, 4, 86, 30, 113, 142, 225, 50, 169, 168, 241, 123, 59, 80, 171, 210, 55, 116, 207, 2, 60, 147, 145, 110, 45, 164, 151, 2, 17, 44, 132, 14, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 33, 0, 42, 211, 239, 153, 16, 150, 175, 241, 197, 154, 35, 73, 32, 197, 83, 27, 175, 158, 46, 251, 164, 32, 178, 147, 225, 164, 11, 151, 150, 182, 82, 189]
]
extra_pub = [
'e0a4a7a6acf619da6ce1e2570c0c3439f5f809f101360859268648b4f8ec654e',
'2f72986164c41ae40c838d27b6923297552cea015bf73f9580be84c9cc3743b2',
'5696990c2d33ef90235216684f7a44c4c2ff73460fdf01f1353003d6bcdf3774',
'50141de1b549bf59d949cf03723b04a51a93203e2ae6211e2f173f6ab44ee1b9',
'11e8f0ae2ff804561e718ee132a9a8f17b3b50abd23774cf023c93916e2da497'
]
extra_payid = [
'',
'',
'c80ab4d153c7fec09fd502096e41fbe421764616a3fc62483928c5a77b6d7ece',
'3f829f11a7768559de4e0072baa81cfd861d810e23b81b2dd949258956458fb7',
''
]
# For test_cryptonote.py
hashed_valid_sks = [
'317a934746c3c765829cc8c9f2c2e8734be9a003015a523fdac75ed78cd3e3c9',
'fcfa4095da97e22a47bee4ca18951a416fa2b9155c4d51239bdbe02a5ddea93c',
'ed687b8ca9ed87fd54dacb35e1c12e4cd8022aaf63a2d8e1087bb069298baa79',
'fad70949f20079143fe95aa08db0d622b2f7df17cb228024173460cdf7489471',
'e5e093cbde63b9e6a02eccc14242285d0a7e500ab018de86d6f63eb2e7446563'
]
reduced_hashed_valid_sks = [
'158b0dec091eeb4476422d26830c75794ae9a003015a523fdac75ed78cd3e309',
'357f5f7e8b6eab22c4e7fde17ca77d026fa2b9155c4d51239bdbe02a5ddea90c',
'729dc201f1370795789006c1caec15bad7022aaf63a2d8e1087bb069298baa09',
'7f0c51be394bf8ab629f952b77dbbd90b1f7df17cb228024173460cdf7489401',
'57e9d09d40114bd69a81feef0a67eedf097e500ab018de86d6f63eb2e7446503'
]
public_from_valid_sks = [
'426a2b555065c79b8d6293c5dd18c25a25bae1a8b8c67ceac7484133e6798579',
'975e989ae39b7b9445ac7384edb7a598efe3fbfab6c0bd72c5372fadd86071e9',
'b9e8cd1f42a48c55166f75ead8293e0ad1c420f566b9c85562572936207557dd',
'c09d10f3c5f580ddd0765063d9246007f45ef025a76c7d117fe4e811fa78f395',
'bd785822c5e8330e30cc7e6e7abd3d11579da04e4131d091255172583059aea5'
]
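The `decoded_addrs` fixtures above are meant for base58 encode/decode tests against `valid_addrs`. For orientation only, here is the generic base58 integer encoding; note that Monero's variant works on fixed 8-byte blocks of the address bytes, so this sketch illustrates the alphabet and digit logic, not the exact block-wise algorithm under test:

```python
B58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'


def b58encode_int(n):
    """Encode a non-negative integer with the base58 alphabet
    (no 0, O, I, or l, to avoid visually ambiguous characters)."""
    s = ''
    while n:
        n, r = divmod(n, 58)
        s = B58[r] + s
    return s or B58[0]


print(b58encode_int(58))  # '21'
```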

# file: graph_util/__init__.py | repo: rahular/joint-coref-srl | license: MIT
from graph_util.graph_encoder_gat import GAT
from graph_util.output_to_graph import json2graph

# file: Codewars/Maximum_product_7_kyu.py | repo: maxcohen31/A-bored-math-student | license: MIT
def adjacent_element_product(array):
    return max(array[number] * array[number + 1] for number in range(len(array) - 1))


l = [1, 2]
print(adjacent_element_product(l))
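The same kata solution is often written with `zip` over the list and its one-step shift, which avoids explicit indexing; this variant is equivalent and is an illustration, not code from the repo:

```python
def adjacent_element_product_zip(array):
    # zip pairs each element with its right neighbor: (a0,a1), (a1,a2), ...
    return max(a * b for a, b in zip(array, array[1:]))


print(adjacent_element_product_zip([3, 6, -2, -5, 7, 3]))  # 21
```

Both versions raise `ValueError` on a list shorter than two elements, since `max` receives an empty sequence.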

# file: test/test_pdi/test_universes.py | repo: cmc333333/parsons | license: Apache-2.0
# TODO: Add tests for PDI Universes class

# file: Users/apps.py | repo: jinxy17/library-management | license: MIT
''' App Config for the app Users '''
from django.apps import AppConfig


class UsersConfig(AppConfig):
    ''' Config for the app Users '''
    name = 'Users'

# file: build/trossen/interbotix_ros_core/interbotix_ros_xseries/interbotix_xs_sdk/cmake/interbotix_xs_sdk-genmsg-context.py | repo: Jam-cpu/Masters-Project---Final | license: BSD-3-Clause-Clear
# generated from genmsg/cmake/pkg-genmsg.context.in
messages_str = "/workspace/src/trossen/interbotix_ros_core/interbotix_ros_xseries/interbotix_xs_sdk/msg/JointGroupCommand.msg;/workspace/src/trossen/interbotix_ros_core/interbotix_ros_xseries/interbotix_xs_sdk/msg/JointSingleCommand.msg;/workspace/src/trossen/interbotix_ros_core/interbotix_ros_xseries/interbotix_xs_sdk/msg/JointTrajectoryCommand.msg"
services_str = "/workspace/src/trossen/interbotix_ros_core/interbotix_ros_xseries/interbotix_xs_sdk/srv/Reboot.srv;/workspace/src/trossen/interbotix_ros_core/interbotix_ros_xseries/interbotix_xs_sdk/srv/RobotInfo.srv;/workspace/src/trossen/interbotix_ros_core/interbotix_ros_xseries/interbotix_xs_sdk/srv/MotorGains.srv;/workspace/src/trossen/interbotix_ros_core/interbotix_ros_xseries/interbotix_xs_sdk/srv/TorqueEnable.srv;/workspace/src/trossen/interbotix_ros_core/interbotix_ros_xseries/interbotix_xs_sdk/srv/OperatingModes.srv;/workspace/src/trossen/interbotix_ros_core/interbotix_ros_xseries/interbotix_xs_sdk/srv/RegisterValues.srv"
pkg_name = "interbotix_xs_sdk"
dependencies_str = "std_msgs;trajectory_msgs"
langs = "gencpp;geneus;genlisp;gennodejs;genpy"
dep_include_paths_str = "interbotix_xs_sdk;/workspace/src/trossen/interbotix_ros_core/interbotix_ros_xseries/interbotix_xs_sdk/msg;std_msgs;/opt/ros/melodic/share/std_msgs/cmake/../msg;trajectory_msgs;/opt/ros/melodic/share/trajectory_msgs/cmake/../msg;geometry_msgs;/opt/ros/melodic/share/geometry_msgs/cmake/../msg"
PYTHON_EXECUTABLE = "/usr/bin/python2"
package_has_static_sources = '' == 'TRUE'
genmsg_check_deps_script = "/opt/ros/melodic/share/genmsg/cmake/../../../lib/genmsg/genmsg_check_deps.py"
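Illustrative sketch, not part of the archived file: the generated `*-genmsg-context.py` files above store their message and service paths as semicolon-separated strings (the CMake list convention), and consumers split them back into Python lists. The paths `a/Foo.msg` and `a/Bar.msg` here are hypothetical stand-ins.

```python
# A genmsg context variable is a semicolon-joined list of file paths,
# as produced by CMake; split it back into a Python list, dropping
# any empty entries from trailing separators.
messages_str = "a/Foo.msg;a/Bar.msg"
message_files = [p for p in messages_str.split(';') if p]
assert message_files == ["a/Foo.msg", "a/Bar.msg"]
```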
| 139.666667 | 639 | 0.862768 | 240 | 1,676 | 5.658333 | 0.2625 | 0.191458 | 0.132548 | 0.213549 | 0.613402 | 0.564801 | 0.564801 | 0.564801 | 0.564801 | 0.564801 | 0 | 0.000609 | 0.020286 | 1,676 | 11 | 640 | 152.363636 | 0.826431 | 0.029236 | 0 | 0 | 1 | 0.333333 | 0.875077 | 0.852308 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0b376f6fdd1ceea71f6556966f3ed79afd47b1a1 | 199 | py | Python | dynadown/plugins/python.py | probablytom/dynamic-markdown | 9a94976b408ca1b09880d5e1d2d6cda619182d50 | [
"MIT"
] | null | null | null | dynadown/plugins/python.py | probablytom/dynamic-markdown | 9a94976b408ca1b09880d5e1d2d6cda619182d50 | [
"MIT"
] | null | null | null | dynadown/plugins/python.py | probablytom/dynamic-markdown | 9a94976b408ca1b09880d5e1d2d6cda619182d50 | [
"MIT"
] | null | null | null | from dynadown import register_plugin
class PythonPlugin:
def __init__(self):
pass
def evaluate(self, block):
return str(eval(block))
register_plugin('python', PythonPlugin)
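Illustrative sketch, not part of the archived file: what a plugin in the style of `PythonPlugin` above does when handed a block of source text. Note that `eval()` executes arbitrary expressions, so this pattern is only safe for trusted input.

```python
# Minimal re-statement of the plugin's evaluate() contract: evaluate the
# block as a Python expression and return the result as a string.
class PythonPlugin:
    def evaluate(self, block):
        return str(eval(block))

print(PythonPlugin().evaluate("2 + 3"))  # prints 5
```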
| 19.9 | 39 | 0.703518 | 23 | 199 | 5.826087 | 0.73913 | 0.208955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211055 | 199 | 9 | 40 | 22.111111 | 0.853503 | 0 | 0 | 0 | 0 | 0 | 0.030151 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0.142857 | 0.142857 | 0.142857 | 0.714286 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 6 |
0b4b2794883f76d15e06a59c2626dd7d3dd7d09f | 11,168 | py | Python | src/tests/test_pagure_flask_api_issue_create.py | yifengyou/learn-pagure | e54ba955368918c92ad2be6347b53bb2c24a228c | [
"Unlicense"
] | null | null | null | src/tests/test_pagure_flask_api_issue_create.py | yifengyou/learn-pagure | e54ba955368918c92ad2be6347b53bb2c24a228c | [
"Unlicense"
] | null | null | null | src/tests/test_pagure_flask_api_issue_create.py | yifengyou/learn-pagure | e54ba955368918c92ad2be6347b53bb2c24a228c | [
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
"""
(c) 2017 - Copyright Red Hat Inc
Authors:
Pierre-Yves Chibon <pingou@pingoured.fr>
"""
from __future__ import unicode_literals, absolute_import
import datetime
import unittest
import sys
import os
import json
from mock import patch, MagicMock
sys.path.insert(
0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "..")
)
import pagure.lib.query # noqa: E402
import tests # noqa: E402
class PagureFlaskApiIssueCreatetests(tests.Modeltests):
"""Tests for the flask API of pagure for creating an issue"""
@patch("pagure.lib.notify.send_email", MagicMock(return_value=True))
def setUp(self):
""" Set up the environnment, ran before every tests. """
super(PagureFlaskApiIssueCreatetests, self).setUp()
pagure.config.config["TICKETS_FOLDER"] = None
tests.create_projects(self.session)
tests.create_projects_git(os.path.join(self.path, "tickets"))
tests.create_tokens(self.session)
tests.create_tokens_acl(self.session)
# Create project-less token for user foo
item = pagure.lib.model.Token(
id="project-less-foo",
user_id=2,
project_id=None,
expiration=datetime.datetime.utcnow()
+ datetime.timedelta(days=30),
)
self.session.add(item)
self.session.commit()
tests.create_tokens_acl(self.session, token_id="project-less-foo")
# Create project-specific token for user foo
item = pagure.lib.model.Token(
id="project-specific-foo",
user_id=2,
project_id=1,
expiration=datetime.datetime.utcnow()
+ datetime.timedelta(days=30),
)
self.session.add(item)
self.session.commit()
tests.create_tokens_acl(self.session, token_id="project-specific-foo")
def test_create_issue_own_project_no_data(self):
"""Test creating a new ticket on a project for which you're the
main maintainer.
"""
# pingou's token with all the ACLs
headers = {"Authorization": "token aaabbbcccddd"}
# Create an issue on /test/ where pingou is the main admin
output = self.app.post("/api/0/test/new_issue", headers=headers)
self.assertEqual(output.status_code, 400)
data = json.loads(output.get_data(as_text=True))
self.assertEqual(
pagure.api.APIERROR.EINVALIDREQ.name, data["error_code"]
)
self.assertEqual(pagure.api.APIERROR.EINVALIDREQ.value, data["error"])
self.assertEqual(
data["errors"],
{
"issue_content": ["This field is required."],
"title": ["This field is required."],
},
)
def test_create_issue_own_project_incomplete_data(self):
"""Test creating a new ticket on a project for which you're the
main maintainer.
"""
# pingou's token with all the ACLs
headers = {"Authorization": "token aaabbbcccddd"}
        # incomplete data set: the required "issue_content" field is missing
data = {"title": "test issue"}
# Create an issue on /test/ where pingou is the main admin
output = self.app.post(
"/api/0/test/new_issue", headers=headers, data=data
)
self.assertEqual(output.status_code, 400)
data = json.loads(output.get_data(as_text=True))
self.assertEqual(
pagure.api.APIERROR.EINVALIDREQ.name, data["error_code"]
)
self.assertEqual(pagure.api.APIERROR.EINVALIDREQ.value, data["error"])
self.assertEqual(
data["errors"], {"issue_content": ["This field is required."]}
)
def test_create_issue_own_project(self):
"""Test creating a new ticket on a project for which you're the
main maintainer.
"""
# pingou's token with all the ACLs
headers = {"Authorization": "token aaabbbcccddd"}
# complete data set
data = {
"title": "test issue",
"issue_content": "This issue needs attention",
}
# Create an issue on /test/ where pingou is the main admin
output = self.app.post(
"/api/0/test/new_issue", headers=headers, data=data
)
self.assertEqual(output.status_code, 200)
data = json.loads(output.get_data(as_text=True))
data["issue"]["date_created"] = "1431414800"
data["issue"]["last_updated"] = "1431414800"
self.assertEqual(
data,
{
"issue": {
"assignee": None,
"blocks": [],
"close_status": None,
"closed_at": None,
"closed_by": None,
"comments": [],
"content": "This issue needs attention",
"custom_fields": [],
"full_url": "http://localhost.localdomain/test/issue/1",
"date_created": "1431414800",
"depends": [],
"id": 1,
"last_updated": "1431414800",
"milestone": None,
"priority": None,
"private": False,
"related_prs": [],
"status": "Open",
"tags": [],
"title": "test issue",
"user": {
"fullname": "PY C",
"full_url": "http://localhost.localdomain/user/pingou",
"name": "pingou",
"url_path": "user/pingou",
},
},
"message": "Issue created",
},
)
@patch("pagure.lib.notify.send_email", MagicMock(return_value=True))
def test_create_issue_someone_else_project_project_less_token(self):
"""Test creating a new ticket on a project with which you have
nothing to do.
"""
# pingou's token with all the ACLs
headers = {"Authorization": "token project-less-foo"}
# complete data set
data = {
"title": "test issue",
"issue_content": "This issue needs attention",
}
# Create an issue on /test/ where pingou is the main admin
output = self.app.post(
"/api/0/test/new_issue", headers=headers, data=data
)
self.assertEqual(output.status_code, 200)
data = json.loads(output.get_data(as_text=True))
data["issue"]["date_created"] = "1431414800"
data["issue"]["last_updated"] = "1431414800"
self.assertEqual(
data,
{
"issue": {
"assignee": None,
"blocks": [],
"close_status": None,
"closed_at": None,
"closed_by": None,
"comments": [],
"content": "This issue needs attention",
"custom_fields": [],
"full_url": "http://localhost.localdomain/test/issue/1",
"date_created": "1431414800",
"depends": [],
"id": 1,
"last_updated": "1431414800",
"milestone": None,
"priority": None,
"private": False,
"related_prs": [],
"status": "Open",
"tags": [],
"title": "test issue",
"user": {
"fullname": "foo bar",
"full_url": "http://localhost.localdomain/user/foo",
"name": "foo",
"url_path": "user/foo",
},
},
"message": "Issue created",
},
)
@patch("pagure.lib.notify.send_email", MagicMock(return_value=True))
def test_create_issue_project_specific_token(self):
"""Test creating a new ticket on a project with a regular
project-specific token.
"""
        # foo's project-specific token
headers = {"Authorization": "token project-specific-foo"}
# complete data set
data = {
"title": "test issue",
"issue_content": "This issue needs attention",
}
# Create an issue on /test/ where pingou is the main admin
output = self.app.post(
"/api/0/test/new_issue", headers=headers, data=data
)
self.assertEqual(output.status_code, 200)
data = json.loads(output.get_data(as_text=True))
data["issue"]["date_created"] = "1431414800"
data["issue"]["last_updated"] = "1431414800"
self.assertEqual(
data,
{
"issue": {
"assignee": None,
"blocks": [],
"close_status": None,
"closed_at": None,
"closed_by": None,
"comments": [],
"content": "This issue needs attention",
"custom_fields": [],
"full_url": "http://localhost.localdomain/test/issue/1",
"date_created": "1431414800",
"depends": [],
"id": 1,
"last_updated": "1431414800",
"milestone": None,
"priority": None,
"private": False,
"related_prs": [],
"status": "Open",
"tags": [],
"title": "test issue",
"user": {
"fullname": "foo bar",
"full_url": "http://localhost.localdomain/user/foo",
"name": "foo",
"url_path": "user/foo",
},
},
"message": "Issue created",
},
)
@patch("pagure.lib.notify.send_email", MagicMock(return_value=True))
def test_create_issue_invalid_project_specific_token(self):
"""Test creating a new ticket on a project with a regular
project-specific token but for another project.
"""
        # foo's project-specific token, which is not valid for /test2/
headers = {"Authorization": "token project-specific-foo"}
# complete data set
data = {
"title": "test issue",
"issue_content": "This issue needs attention",
}
        # Try to create an issue on /test2/, which this token does not cover
output = self.app.post(
"/api/0/test2/new_issue", headers=headers, data=data
)
self.assertEqual(output.status_code, 401)
data = json.loads(output.get_data(as_text=True))
self.assertEqual(
pagure.api.APIERROR.EINVALIDTOK.name, data["error_code"]
)
self.assertEqual(pagure.api.APIERROR.EINVALIDTOK.value, data["error"])
if __name__ == "__main__":
unittest.main(verbosity=2)
| 35.009404 | 79 | 0.514595 | 1,112 | 11,168 | 5.035971 | 0.17446 | 0.045536 | 0.02 | 0.02625 | 0.846071 | 0.840714 | 0.813214 | 0.813214 | 0.813214 | 0.804464 | 0 | 0.024142 | 0.365777 | 11,168 | 318 | 80 | 35.119497 | 0.766483 | 0.12885 | 0 | 0.648069 | 0 | 0 | 0.233644 | 0.025018 | 0 | 0 | 0 | 0 | 0.072961 | 1 | 0.030043 | false | 0 | 0.038627 | 0 | 0.072961 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0b7536e4d9b9f76afc1cb177b75642b5a272e815 | 39 | py | Python | main.py | slacke/python-box | 64bb30a6efa8a28c7eacc919395d8be9ea26942d | [
"MIT"
] | 3 | 2021-01-16T15:07:41.000Z | 2022-03-09T09:24:17.000Z | main.py | slacke/pythonbox | 64bb30a6efa8a28c7eacc919395d8be9ea26942d | [
"MIT"
] | null | null | null | main.py | slacke/pythonbox | 64bb30a6efa8a28c7eacc919395d8be9ea26942d | [
"MIT"
] | null | null | null | import pythonbox
# your program here | 13 | 19 | 0.769231 | 5 | 39 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205128 | 39 | 3 | 19 | 13 | 0.967742 | 0.435897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0b8cf9aae73847143d0f4c62fabe36ff3080e222 | 105 | py | Python | facetools/test/__init__.py | bigsassy/django-facetools | aeedaea81ab0007ee8e96b2f81f1404dc8bddb3c | [
"MIT"
] | 2 | 2018-01-24T20:41:27.000Z | 2019-06-27T13:24:18.000Z | facetools/test/__init__.py | bigsassy/django-facetools | aeedaea81ab0007ee8e96b2f81f1404dc8bddb3c | [
"MIT"
] | null | null | null | facetools/test/__init__.py | bigsassy/django-facetools | aeedaea81ab0007ee8e96b2f81f1404dc8bddb3c | [
"MIT"
] | null | null | null | from common import TestUserNotLoaded
from testcases import FacebookTestCase, FacebookTransactionTestCase
| 35 | 67 | 0.904762 | 9 | 105 | 10.555556 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 105 | 2 | 68 | 52.5 | 0.989583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0ba17a8b460d226f145319ccf9eab4a38f0fdaf4 | 44 | py | Python | main.py | marquesYan/lexical-parser | bc6d935cca30b8e46ed8cc2fdba5d4eda06bb9b5 | [
"MIT"
] | null | null | null | main.py | marquesYan/lexical-parser | bc6d935cca30b8e46ed8cc2fdba5d4eda06bb9b5 | [
"MIT"
] | null | null | null | main.py | marquesYan/lexical-parser | bc6d935cca30b8e46ed8cc2fdba5d4eda06bb9b5 | [
"MIT"
] | null | null | null | import lexical_parser
lexical_parser.main() | 14.666667 | 21 | 0.863636 | 6 | 44 | 6 | 0.666667 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 44 | 3 | 22 | 14.666667 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
0ba6fede1ea9f0e974f32e8510cb532b89c76dd6 | 113 | py | Python | Python/Topics/.loc .iloc/Penguin selecting/main.py | drtierney/hyperskill-problems | b74da993f0ac7bcff1cbd5d89a3a1b06b05f33e0 | [
"MIT"
] | 5 | 2020-08-29T15:15:31.000Z | 2022-03-01T18:22:34.000Z | Python/Topics/.loc .iloc/Penguin selecting/main.py | drtierney/hyperskill-problems | b74da993f0ac7bcff1cbd5d89a3a1b06b05f33e0 | [
"MIT"
] | null | null | null | Python/Topics/.loc .iloc/Penguin selecting/main.py | drtierney/hyperskill-problems | b74da993f0ac7bcff1cbd5d89a3a1b06b05f33e0 | [
"MIT"
] | 1 | 2020-12-02T11:13:14.000Z | 2020-12-02T11:13:14.000Z | # put your code here. The data frame is already loaded and stored as penguins_df.
print(penguins_df.iloc[5:9])
| 28.25 | 82 | 0.761062 | 21 | 113 | 4 | 0.904762 | 0.238095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021053 | 0.159292 | 113 | 3 | 83 | 37.666667 | 0.863158 | 0.699115 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
f010e72d68914be6ef3218d09f99dbec20d497e8 | 44 | py | Python | geometric2dr/data/__init__.py | paulmorio/geo2dr | 49d5f1cdc0a4aa0c2c19744f6b1c723fd5988955 | [
"MIT"
] | 32 | 2020-03-13T21:09:50.000Z | 2021-10-02T13:01:46.000Z | geometric2dr/data/__init__.py | paulmorio/geo2dr | 49d5f1cdc0a4aa0c2c19744f6b1c723fd5988955 | [
"MIT"
] | 3 | 2020-03-22T14:34:49.000Z | 2021-08-17T15:20:40.000Z | geometric2dr/data/__init__.py | paulmorio/geo2dr | 49d5f1cdc0a4aa0c2c19744f6b1c723fd5988955 | [
"MIT"
] | 5 | 2020-03-29T00:31:10.000Z | 2021-08-17T10:57:32.000Z | from .dortmund_formatter import DortmundGexf | 44 | 44 | 0.909091 | 5 | 44 | 7.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 44 | 1 | 44 | 44 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f0372b2624b6aa22da17507ef60e8bf8ae4b6441 | 50 | py | Python | wiki/__init__.py | jeffeuxMartin/WikiExtractor | f2290d2e5ed2aad113e98411267c9a6078b05341 | [
"Apache-2.0"
] | 4 | 2020-07-14T12:00:50.000Z | 2020-07-15T16:31:14.000Z | wiki/__init__.py | jeffeuxMartin/WikiExtractor | f2290d2e5ed2aad113e98411267c9a6078b05341 | [
"Apache-2.0"
] | null | null | null | wiki/__init__.py | jeffeuxMartin/WikiExtractor | f2290d2e5ed2aad113e98411267c9a6078b05341 | [
"Apache-2.0"
] | 4 | 2020-07-15T16:46:19.000Z | 2022-03-16T19:00:46.000Z | from wiki.main import *
from wiki.cli import main
| 16.666667 | 25 | 0.78 | 9 | 50 | 4.333333 | 0.555556 | 0.410256 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 50 | 2 | 26 | 25 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f03ce05936189c19330b26373814ec3cf9f1acee | 202 | py | Python | weberp/apps/incomes/admin.py | askar-alty/erp | cf5496fd7feed9d79705bbf5a034d1b13b96a98a | [
"MIT"
] | null | null | null | weberp/apps/incomes/admin.py | askar-alty/erp | cf5496fd7feed9d79705bbf5a034d1b13b96a98a | [
"MIT"
] | null | null | null | weberp/apps/incomes/admin.py | askar-alty/erp | cf5496fd7feed9d79705bbf5a034d1b13b96a98a | [
"MIT"
] | null | null | null | from django.contrib import admin
# Register your models here.
from . import models
admin.site.register(models.IncomeItemGroup)
admin.site.register(models.IncomeItem)
admin.site.register(models.Income) | 25.25 | 43 | 0.821782 | 27 | 202 | 6.148148 | 0.481481 | 0.162651 | 0.307229 | 0.415663 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084158 | 202 | 8 | 44 | 25.25 | 0.897297 | 0.128713 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
f07069997e0b91b184e095099b994b2c01fb29b3 | 1,267 | py | Python | axelrod/strategies/forgiver.py | DumisaniZA/Axelrod | e59fc40ebb705afe05cea6f30e282d1e9c621259 | [
"MIT"
] | 33 | 2015-02-20T11:36:48.000Z | 2022-02-16T17:02:06.000Z | axelrod/strategies/forgiver.py | DumisaniZA/Axelrod | e59fc40ebb705afe05cea6f30e282d1e9c621259 | [
"MIT"
] | 108 | 2015-02-18T14:15:44.000Z | 2020-05-08T10:39:58.000Z | axelrod/strategies/forgiver.py | DumisaniZA/Axelrod | e59fc40ebb705afe05cea6f30e282d1e9c621259 | [
"MIT"
] | 41 | 2015-02-18T13:40:04.000Z | 2021-05-31T06:08:10.000Z | from axelrod import Player
class Forgiver(Player):
"""
    A player starts by cooperating, but will defect if at any point
    the opponent has defected more than 10 percent of the time.
"""
name = 'Forgiver'
def strategy(self, opponent):
"""
Begins by playing C, then plays D if the opponent has defected more than 10 percent of the time
"""
try:
            if opponent.history.count('D') > len(opponent.history) / 10:
return 'D'
return 'C'
except IndexError:
return 'C'
class ForgivingTitForTat(Player):
"""
    A player starts by cooperating, but will defect if, at any point,
the opponent has defected more than 10 percent of the time,
and their most recent decision was defect.
"""
name = 'Forgiving Tit For Tat'
def strategy(self, opponent):
"""
Begins by playing C, then plays D if,
the opponent has defected more than 10 percent of the time,
and their most recent decision was defect.
"""
try:
            if opponent.history.count('D') > len(opponent.history) / 10:
return opponent.history[-1]
return 'C'
except IndexError:
return 'C'
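Illustrative sketch, not part of the archived file: the 10-percent defection threshold that both strategies above share, restated as a free function over a plain list standing in for `opponent.history`.

```python
# Defect only if the opponent has defected more than 10% of the time;
# otherwise cooperate, as in the Forgiver strategy's docstring.
def forgiver_move(history):
    if history.count('D') > len(history) / 10:
        return 'D'
    return 'C'

assert forgiver_move([]) == 'C'                     # no history yet
assert forgiver_move(['C'] * 9 + ['D']) == 'C'      # exactly 10%: cooperate
assert forgiver_move(['C'] * 8 + ['D'] * 2) == 'D'  # 20%: defect
```

Note the boundary: exactly 10 percent does not trigger a defection, because the comparison is strict.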
| 28.155556 | 103 | 0.599842 | 161 | 1,267 | 4.720497 | 0.347826 | 0.098684 | 0.073684 | 0.115789 | 0.855263 | 0.855263 | 0.776316 | 0.776316 | 0.776316 | 0.776316 | 0 | 0.015222 | 0.325967 | 1,267 | 44 | 104 | 28.795455 | 0.874707 | 0.422257 | 0 | 0.631579 | 0 | 0 | 0.057416 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.052632 | 0 | 0.684211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b2f853f61ad105301919c60beb3c3a74ef935f0c | 61 | py | Python | py_environment_status/__init__.py | vamseeachanta/py_environment_status | b2461b167a1b63c4e27cfdef12172d294e22a3de | [
"MIT"
] | null | null | null | py_environment_status/__init__.py | vamseeachanta/py_environment_status | b2461b167a1b63c4e27cfdef12172d294e22a3de | [
"MIT"
] | null | null | null | py_environment_status/__init__.py | vamseeachanta/py_environment_status | b2461b167a1b63c4e27cfdef12172d294e22a3de | [
"MIT"
] | null | null | null | from py_environment_status.package_list import package_list
| 20.333333 | 59 | 0.901639 | 9 | 61 | 5.666667 | 0.777778 | 0.431373 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081967 | 61 | 2 | 60 | 30.5 | 0.910714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6548a26c1b5e4d2fcbe8edaffcde6011021d2d1f | 4,176 | py | Python | layerserver/migrations/0010_auto_20190719_0929.py | aroiginfraplan/giscube-admin | b7f3131b0186f847f3902df97f982cb288b16a49 | [
"BSD-3-Clause"
] | 5 | 2018-06-07T12:54:35.000Z | 2022-01-14T10:38:38.000Z | layerserver/migrations/0010_auto_20190719_0929.py | aroiginfraplan/giscube-admin | b7f3131b0186f847f3902df97f982cb288b16a49 | [
"BSD-3-Clause"
] | 140 | 2018-06-18T10:27:28.000Z | 2022-03-23T09:53:15.000Z | layerserver/migrations/0010_auto_20190719_0929.py | aroiginfraplan/giscube-admin | b7f3131b0186f847f3902df97f982cb288b16a49 | [
"BSD-3-Clause"
] | 1 | 2021-04-13T11:20:54.000Z | 2021-04-13T11:20:54.000Z | # Generated by Django 2.1.9 on 2019-07-19 07:29
from django.db import migrations, models
def update_fill_color(apps, schema_editor):
filter = {'shapetype': 'marker', 'marker_color__isnull': False}
filter_childs = {'layer__shapetype': 'marker', 'marker_color__isnull': False}
Model = apps.get_model('layerserver', 'DataBaseLayer')
Model.objects.filter(**filter).update(fill_color=models.F('marker_color'))
Model = apps.get_model('layerserver', 'DataBaseLayerStyleRule')
Model.objects.filter(**filter_childs).update(fill_color=models.F('marker_color'))
Model = apps.get_model('layerserver', 'GeoJsonLayer')
Model.objects.filter(**filter).update(fill_color=models.F('marker_color'))
Model = apps.get_model('layerserver', 'GeoJsonLayerStyleRule')
Model.objects.filter(**filter_childs).update(fill_color=models.F('marker_color'))
def update_fill_color_reverse(apps, schema_editor):
filter = {'shapetype': 'marker', 'fill_color__isnull': False}
filter_childs = {'layer__shapetype': 'marker', 'fill_color__isnull': False}
Model = apps.get_model('layerserver', 'DataBaseLayer')
Model.objects.filter(**filter).update(marker_color=models.F('fill_color'))
Model = apps.get_model('layerserver', 'DataBaseLayerStyleRule')
Model.objects.filter(**filter_childs).update(marker_color=models.F('fill_color'))
Model = apps.get_model('layerserver', 'GeoJsonLayer')
Model.objects.filter(**filter).update(marker_color=models.F('fill_color'))
Model = apps.get_model('layerserver', 'GeoJsonLayerStyleRule')
Model.objects.filter(**filter_childs).update(marker_color=models.F('fill_color'))
def fake_reverse(apps, schema_editor):
pass
class Migration(migrations.Migration):
dependencies = [
('layerserver', '0009_geojsonlayer_design_from'),
]
operations = [
migrations.RunPython(update_fill_color, update_fill_color_reverse),
migrations.RemoveField(
model_name='databaselayer',
name='marker_color',
),
migrations.RemoveField(
model_name='databaselayerstylerule',
name='marker_color',
),
migrations.RemoveField(
model_name='geojsonlayer',
name='marker_color',
),
migrations.RemoveField(
model_name='geojsonlayerstylerule',
name='marker_color',
),
migrations.AlterField(
model_name='databaselayer',
name='fill_color',
field=models.CharField(blank=True, max_length=50, null=True, verbose_name='fill color'),
),
migrations.AlterField(
model_name='databaselayer',
name='stroke_color',
field=models.CharField(blank=True, max_length=50, null=True, verbose_name='stroke color'),
),
migrations.AlterField(
model_name='databaselayerstylerule',
name='fill_color',
field=models.CharField(blank=True, max_length=50, null=True, verbose_name='fill color'),
),
migrations.AlterField(
model_name='databaselayerstylerule',
name='stroke_color',
field=models.CharField(blank=True, max_length=50, null=True, verbose_name='stroke color'),
),
migrations.AlterField(
model_name='geojsonlayer',
name='fill_color',
field=models.CharField(blank=True, max_length=50, null=True, verbose_name='fill color'),
),
migrations.AlterField(
model_name='geojsonlayer',
name='stroke_color',
field=models.CharField(blank=True, max_length=50, null=True, verbose_name='stroke color'),
),
migrations.AlterField(
model_name='geojsonlayerstylerule',
name='fill_color',
field=models.CharField(blank=True, max_length=50, null=True, verbose_name='fill color'),
),
migrations.AlterField(
model_name='geojsonlayerstylerule',
name='stroke_color',
field=models.CharField(blank=True, max_length=50, null=True, verbose_name='stroke color'),
),
]
| 38.666667 | 102 | 0.655651 | 440 | 4,176 | 5.988636 | 0.140909 | 0.075142 | 0.045541 | 0.051613 | 0.847818 | 0.847818 | 0.812144 | 0.696395 | 0.659962 | 0.659962 | 0 | 0.010681 | 0.215278 | 4,176 | 107 | 103 | 39.028037 | 0.793409 | 0.010776 | 0 | 0.818182 | 1 | 0 | 0.225236 | 0.059094 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034091 | false | 0.011364 | 0.011364 | 0 | 0.079545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3330da41c80041422794f514b8b8b6670a3830be | 47 | py | Python | stockwatch/__init__.py | jontymorris/StockWatch | 9fdd87d8639299bb73ac4ec510ae2a8ca7cb7b3e | [
"MIT"
] | 1 | 2022-02-10T20:26:08.000Z | 2022-02-10T20:26:08.000Z | stockwatch/__init__.py | ashtonmoomoo/StockWatch | de8e76580c801f1ea3a88166d4f01af50cf7ea53 | [
"MIT"
] | 1 | 2021-04-08T02:02:42.000Z | 2021-06-19T23:38:25.000Z | stockwatch/__init__.py | ashtonmoomoo/StockWatch | de8e76580c801f1ea3a88166d4f01af50cf7ea53 | [
"MIT"
] | 2 | 2020-07-13T03:58:00.000Z | 2021-02-02T07:49:45.000Z | from .market import Market
from .util import *
| 15.666667 | 26 | 0.765957 | 7 | 47 | 5.142857 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 2 | 27 | 23.5 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
681d8cc67a96cfb1e3ac95a91b9db26cb29add69 | 73 | py | Python | ting/__init__.py | iowaguy/ting | 2f6c85869a220a9797a3bc7ced9e94a3354296cd | [
"0BSD"
] | null | null | null | ting/__init__.py | iowaguy/ting | 2f6c85869a220a9797a3bc7ced9e94a3354296cd | [
"0BSD"
] | 1 | 2021-01-22T20:00:46.000Z | 2021-01-22T20:06:50.000Z | ting/__init__.py | iowaguy/ting | 2f6c85869a220a9797a3bc7ced9e94a3354296cd | [
"0BSD"
] | null | null | null | """Include these in the basic ting import"""
from ting.ting import ting
| 18.25 | 44 | 0.739726 | 12 | 73 | 4.5 | 0.666667 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164384 | 73 | 3 | 45 | 24.333333 | 0.885246 | 0.520548 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
685bb9167c7a24c6c31bbe4434049b9c610d82de | 10,829 | py | Python | tests/test_splunk_df.py | CMiksche/huntlib | b514d52dece44be8f285f0c472c4eeb7a1d6f009 | [
"MIT"
] | 116 | 2018-10-01T17:39:03.000Z | 2022-03-27T03:21:18.000Z | tests/test_splunk_df.py | CMiksche/huntlib | b514d52dece44be8f285f0c472c4eeb7a1d6f009 | [
"MIT"
] | 11 | 2018-10-08T14:38:58.000Z | 2021-05-13T13:51:35.000Z | tests/test_splunk_df.py | CMiksche/huntlib | b514d52dece44be8f285f0c472c4eeb7a1d6f009 | [
"MIT"
] | 19 | 2018-10-18T14:36:02.000Z | 2021-05-26T01:34:39.000Z | #!/usr/bin/env python
import os
import unittest
from multiprocessing import cpu_count
from unittest import TestCase
from huntlib.splunk import SplunkDF
class TestSplunkDF(TestCase):
_splunk_host = "localhost"
_splunk_port = 8089 # This is the API port, NOT the UI port
_splunk_user = "admin"
_splunk_pass = "testpass"
_splunk_conn = None
@classmethod
def setUpClass(self):
'''
Log into the splunk server once, and reuse that connection for all the
tests in this module.
'''
s = SplunkDF(
host=self._splunk_host,
port=self._splunk_port,
username=self._splunk_user,
password=self._splunk_pass
)
        # setUpClass receives the class, not an instance, so the bound
        # TestCase assertion helpers are unavailable here; a plain assert
        # avoids silently mis-binding the arguments.
        assert s is not None, "SplunkDF() returned a None object at login."
self._splunk_conn = s
def test_basic_search_export(self):
'''
Do the most basic search we can (all events in the index over all
time). Then make sure we got the number of events we think we should
have. This version returns results as a generator.
'''
results = self._splunk_conn.search(
spl="search index=main"
)
l = list(results)
self.assertEqual(
len(l),
5,
"Wrong number of search results."
)
for key in ['min', 'max', 'label', 'ts']:
self.assertTrue(
# Just test the first item in the results list
key in l[0].keys(),
                f"Key '{key}' was not found in the search results."
)
def test_basic_search_limit(self):
'''
Do the most basic search we can (all events in the index over all
time). Then make sure we got the number of events we think we should
have. This version returns results as a generator.
'''
results = self._splunk_conn.search(
spl="search index=main",
limit=3
)
l = list(results)
self.assertEqual(
len(l),
3,
"Wrong number of search results."
)
for key in ['min', 'max', 'label', 'ts']:
self.assertTrue(
# Just test the first item in the results list
key in l[0].keys(),
                f"Key '{key}' was not found in the search results."
)
def test_basic_search_df_export(self):
'''
Do the most basic search we can (all events in the index over all
time). Then make sure we got the number of events we think we should
have and that all data columns are present.
This version returns results as a pandas DataFrame().
'''
df = self._splunk_conn.search_df(
spl="search index=main"
)
self.assertEqual(
df.shape[0],
5,
"Wrong number of search results."
)
for col in ['min', 'max', 'label', 'ts']:
self.assertTrue(
col in df.columns,
                f"Column '{col}' was not found in the search results."
)
def test_basic_search_df_parallel(self):
        '''
        Do the most basic search we can (all events in the index over all
        time), limiting the number of results returned. Then make sure we
        got only the number of events we asked for and that all data columns
        are present. This version returns results as a pandas DataFrame().
        '''
df = self._splunk_conn.search_df(
spl="search index=main",
limit=3
)
self.assertEqual(
df.shape[0],
3,
"Wrong number of search results."
)
for col in ['min', 'max', 'label', 'ts']:
self.assertTrue(
col in df.columns,
                f"Column '{col}' was not found in the search results."
)
def test_filtered_search(self):
'''
        Test a simple SPL search and return a generator of results. Make sure we have
the proper number of results.
'''
results = self._splunk_conn.search(
spl="search index=main min<=2"
)
self.assertEqual(
len(list(results)),
3,
"There should be exactly 3 search results with min <= 2"
)
def test_filtered_search_df_export(self):
'''
        Test a simple SPL search and return a DataFrame of results. Make sure we have
the proper number of results.
'''
df = self._splunk_conn.search_df(
spl="search index=main min<=2"
)
self.assertEqual(
df.shape[0],
3,
"Wrong number of search results with min <= 2"
)
def test_filtered_search_df_parallel(self):
'''
        Test a simple SPL search and return a DataFrame of results. Make sure we have
the proper number of results.
'''
df = self._splunk_conn.search_df(
spl="search index=main min<=2",
limit=5
)
self.assertEqual(
df.shape[0],
3,
"Wrong number of search results with min <= 2"
)
def test_internal_fields_export(self):
'''
        Test to ensure the internal_fields parameter is working correctly. We
        test search_df() since it calls search() under the hood, so we're
        effectively testing both in one shot.
'''
# The default is to filter internal fields, so make sure we do that
df = self._splunk_conn.search_df(
spl="search index=main"
)
self.assertEqual(
df.shape[1],
21,
"Default call did not filter out internal fields correctly. Wrong number of columns."
)
# The same, but explicitly asking for internal field filtering
df = self._splunk_conn.search_df(
spl="search index=main",
internal_fields=False
)
self.assertEqual(
df.shape[1],
21,
"Explicit 'internal_fields=False' did not filter out internal fields correctly. Wrong number of columns."
)
# Explicitly ask for internal fields to be preserved
df = self._splunk_conn.search_df(
spl="search index=main",
internal_fields=True
)
self.assertEqual(
df.shape[1],
30,
"Explicit 'internal_fields=True' call did not return all internal fields correctly. Wrong number of columns."
)
# Filter only named fields, with spaces to make sure they're split and stripped correctly
df = self._splunk_conn.search_df(
spl="search index=main",
internal_fields=" _si, _time ,_sourcetype,_subsecond "
)
self.assertEqual(
df.shape[1],
26,
            "Explicitly named internal_fields did not return the correct fields. Wrong number of columns."
)
def test_internal_fields_parallel(self):
'''
        Test to ensure the internal_fields parameter is working correctly. We
        test search_df() since it calls search() under the hood, so we're
        effectively testing both in one shot.
'''
# The default is to filter internal fields, so make sure we do that
df = self._splunk_conn.search_df(
spl="search index=main",
limit=5
)
self.assertEqual(
df.shape[1],
21,
"Default call did not filter out internal fields correctly. Wrong number of columns."
)
# The same, but explicitly asking for internal field filtering
df = self._splunk_conn.search_df(
spl="search index=main",
internal_fields=False,
limit=5
)
self.assertEqual(
df.shape[1],
21,
"Explicit 'internal_fields=False' did not filter out internal fields correctly. Wrong number of columns."
)
# Explicitly ask for internal fields to be preserved
df = self._splunk_conn.search_df(
spl="search index=main",
internal_fields=True,
limit=5
)
self.assertEqual(
df.shape[1],
30,
"Explicit 'internal_fields=True' call did not return all internal fields correctly. Wrong number of columns."
)
# Filter only named fields, with spaces to make sure they're split and stripped correctly
df = self._splunk_conn.search_df(
spl="search index=main",
internal_fields=" _si, _time ,_sourcetype,_subsecond ",
limit=5
)
self.assertEqual(
df.shape[1],
26,
            "Explicitly named internal_fields did not return the correct fields. Wrong number of columns."
)
    @unittest.skipUnless("HUNTLIB_TEST_EXTENDED" in os.environ, "Skipping test_large_search_df() because it takes a long time...")
def test_large_search_df(self):
'''
Do a basic search that should return a lot of rows. This requires
you to have loaded the "bigdata" index with data.
We skip this by default because it takes a very long time, but you can
re-enable it by setting the HUNTLIB_TEST_EXTENDED environment variable.
'''
df = self._splunk_conn.search_df(
spl="search index=bigdata",
fields='val'
)
self.assertEqual(
df.shape[0],
1000000,
"Wrong number of search results."
)
self.assertTrue(
"val" in df.columns,
"Column 'val' was not found in the search results."
)
    @unittest.skipUnless("HUNTLIB_TEST_EXTENDED" in os.environ, "Skipping test_large_search_df_parallel() because it takes a long time...")
def test_large_search_df_parallel(self):
'''
Do a basic search that should return a lot of rows. This requires
you to have loaded the "bigdata" index with data.
We skip this by default because it takes a very long time, but you can
re-enable it by setting the HUNTLIB_TEST_EXTENDED environment variable.
'''
df = self._splunk_conn.search_df(
spl="search index=bigdata",
fields='val',
processes=4
)
self.assertEqual(
df.shape[0],
1000000,
"Wrong number of search results."
)
self.assertTrue(
"val" in df.columns,
"Column 'val' was not found in the search results."
)
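The internal_fields tests above pass a comma-separated string with stray spaces and expect each name to be split out and stripped. A minimal sketch of that parsing step (a hypothetical helper; the real logic lives inside huntlib's search_df()):

```python
def parse_internal_fields(spec):
    """Split a comma-separated field spec and strip whitespace around each name."""
    return [field.strip() for field in spec.split(',')]

fields = parse_internal_fields(" _si, _time ,_sourcetype,_subsecond ")
assert fields == ['_si', '_time', '_sourcetype', '_subsecond']
```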
# ==== stft_core/solver/__init__.py (lingyunwu14/STFT, BSD-2-Clause) ====
# Copyright (c) SenseTime Research and its affiliates. All Rights Reserved.
from .build import make_optimizer
from .build import make_lr_scheduler
from .lr_scheduler import WarmupMultiStepLR
# ==== lib/scriptslib.py (S4TURN0/atomic-python, MIT) ====
from scripts.subdomains import subdomain
from scripts.port_scan import port_scan
from scripts.fuzzing import fuzzing
from scripts.automation import auto
# ==== tests/conftest.py (python-happybase/aiohappybase, MIT) ====
import secrets
import pytest
@pytest.fixture
def table_name() -> bytes:
return b'test_' + secrets.token_hex(5).encode()
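The fixture above produces a fresh table name per test. Since `secrets.token_hex(5)` yields 10 hex characters, the result is always 15 bytes and starts with `b'test_'`; a quick sketch of the same construction:

```python
import secrets

# token_hex(5) returns 10 hex characters (2 per byte of entropy)
name = b'test_' + secrets.token_hex(5).encode()
assert name.startswith(b'test_')
assert len(name) == len(b'test_') + 10
```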
# ==== skactiveml/utils/_selection.py (AlexandreAbraham/scikit-activeml, BSD-3-Clause) ====
"""Utilities for selection."""
import numpy as np
from ._validation import check_random_state
def rand_argmin(a, random_state=None, **argmin_kwargs):
"""Returns index of minimum value. In case of ties, a randomly selected
index of the minimum elements is returned.
Parameters
----------
a: array-like
Indexable data-structure of whose minimum element's index is to be
determined.
random_state: int, RandomState instance or None, optional (default=None)
        Determines random number generation for shuffling the data. Pass an
        int for reproducible results across multiple function calls.
argmin_kwargs: dict-like
Keyword argument passed to numpy function argmin.
Returns
-------
index_array: ndarray of ints
Array of indices into the array. It has the same shape as a.shape with
the dimension along axis removed.
"""
random_state = check_random_state(random_state)
a = np.asarray(a)
index_array = np.argmax(random_state.random(a.shape) * (
a == np.nanmin(a, **argmin_kwargs, keepdims=True)),
**argmin_kwargs)
if np.isscalar(index_array) and a.ndim > 1:
index_array = np.unravel_index(index_array, a.shape)
index_array = np.atleast_1d(index_array)
return index_array
def rand_argmax(a, random_state=None, **argmax_kwargs):
"""Returns index of maximum value. In case of ties, a randomly selected
index of the maximum elements is returned.
Parameters
----------
a: array-like
Indexable data-structure of whose maximum element's index is to be
determined.
random_state: int, RandomState instance or None, optional (default=None)
Determines random number generation for shuffling the data. Pass an int
for reproducible results across multiple function calls.
argmax_kwargs: dict-like
        Keyword argument passed to numpy function argmax.
Returns
-------
index_array: ndarray of ints
Array of indices into the array. It has the same shape as a.shape with
the dimension along axis removed.
"""
random_state = check_random_state(random_state)
a = np.asarray(a)
index_array = np.argmax(random_state.random(a.shape) * (
a == np.nanmax(a, **argmax_kwargs, keepdims=True)),
**argmax_kwargs)
if np.isscalar(index_array) and a.ndim > 1:
index_array = np.unravel_index(index_array, a.shape)
index_array = np.atleast_1d(index_array)
return index_array
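Both helpers break ties by multiplying a random array against a boolean mask of the extremal positions, then taking argmax of the product. The same tie-breaking idea in plain Python (a hypothetical sketch, not part of the library):

```python
import random

def rand_argmax_py(values, seed=None):
    # Collect every index holding the maximum, then pick one uniformly at random.
    rng = random.Random(seed)
    best = max(values)
    ties = [i for i, v in enumerate(values) if v == best]
    return rng.choice(ties)

# With a unique maximum the answer is deterministic; with ties it varies by seed.
assert rand_argmax_py([1, 5, 2]) == 1
assert rand_argmax_py([3, 1, 3, 2], seed=0) in (0, 2)
```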
# ==== tensorflow/core/ops/sort_ops.py (seetaresearch/Dragon, BSD-2-Clause) ====
# ------------------------------------------------------------
# Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
#
# Licensed under the BSD 2-Clause License.
# You should have received a copy of the BSD 2-Clause License
# along with the software. If not, See,
#
# <https://opensource.org/licenses/BSD-2-Clause>
#
# ------------------------------------------------------------
"""Sort ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from dragon.core.ops import sort_ops
def argsort(values, axis=-1, direction='ASCENDING', name=None):
"""Return the index of sorted elements along the given axis.
By default, the last axis is chosen:
```python
x = tf.constant([[1, 2, 3], [3, 2, 1]])
index1 = tf.argsort(x)
index2 = tf.argsort(x, axis=1) # Equivalent
```
Sort in the inverse order if ``direction`` is ``DESCENDING``:
```python
x = tf.constant([1, 2, 3])
index1 = tf.argsort(-x)
index2 = tf.argsort(x, direction='DESCENDING') # Equivalent
```
Parameters
----------
values : dragon.Tensor
The input tensor.
axis : int, optional, default=-1
The axis to sort elements.
direction : {'ASCENDING', 'DESCENDING'}, optional
The sorting direction.
name : str, optional
The operation name.
Returns
-------
dragon.Tensor
The output tensor.
"""
if direction not in ('ASCENDING', 'DESCENDING'):
raise ValueError('Unknown direction: ' + direction)
value_and_index = sort_ops.sort(
values, axis=axis, descending=direction == 'DESCENDING', name=name)
return value_and_index[1]
def sort(values, axis=-1, direction='ASCENDING', name=None):
"""Return the sorted elements along the given axis.
By default, the last axis is chosen:
```python
x = tf.constant([[1, 2, 3], [3, 2, 1]])
value1 = tf.sort(x)
value2 = tf.sort(x, axis=1) # Equivalent
```
Sort in the inverse order if ``direction`` is ``DESCENDING``:
```python
x = tf.constant([1, 2, 3])
value1 = -tf.sort(-x)
value2 = tf.sort(x, direction='DESCENDING') # Equivalent
```
Parameters
----------
values : dragon.Tensor
The input tensor.
axis : int, optional, default=-1
The axis to sort elements.
direction : {'ASCENDING', 'DESCENDING'}, optional
The sorting direction.
name : str, optional
The operation name.
Returns
-------
dragon.Tensor
The output tensor.
"""
if direction not in ('ASCENDING', 'DESCENDING'):
raise ValueError('Unknown direction: ' + direction)
value_and_index = sort_ops.sort(
values, axis=axis, descending=direction == 'DESCENDING', name=name)
return value_and_index[0]
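For reference, the index-sort contract that `argsort` exposes can be reproduced in plain Python (a sketch of the semantics only; the real op dispatches through dragon):

```python
def argsort_py(values, descending=False):
    # Indices that would sort `values`, mirroring the ASCENDING/DESCENDING directions.
    return sorted(range(len(values)), key=values.__getitem__, reverse=descending)

assert argsort_py([3, 1, 2]) == [1, 2, 0]
assert argsort_py([3, 1, 2], descending=True) == [0, 2, 1]
```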
# ==== HashChecker/HashChecker.py (SurajApps/HashChecker, MIT) ====
import hashlib
import argparse
def Check():
argspec = '''usage: HashChecker.py [-h] -f FILE [-s] [-m] -c CHECKSUM
optional arguments:
-h, --help show this help message and exit
-f FILE, --file FILE Select the file to be checked
-s, --sha256 Check the SHA256 checksum of the file
  -m, --md5             Check the MD5 checksum of the file
-c CHECKSUM, --checksum CHECKSUM
Enter the MD5 or SHA256 checksum of the file
'''
path = ""
checksum = ""
def MD5():
nonlocal path, checksum, argspec
argspec = ""
checksum = checksum.lower()
# Path is the location of the file (can be set a different way)
BLOCK_SIZE = 65536 # The size of each read from the file
file_hash = hashlib.md5() # Create the hash object, can use something other than `.sha256()` if you wish
with open(path, 'rb') as f: # Open the file to read it's bytes
fb = f.read(BLOCK_SIZE) # Read from the file. Take in the amount declared above
while len(fb) > 0: # While there is still data being read from the file
file_hash.update(fb) # Update the hash
fb = f.read(BLOCK_SIZE) # Read the next block from the file
calc_md5 = file_hash.hexdigest()
if (checksum == calc_md5):
print("The file has not been tampered with, and is OK to use.")
else:
print("The file has been tampered with, and is NOT OK to use.")
# print(file_hash.hexdigest()) # Get the hexadecimal digest of the hash
def SHA256():
nonlocal path, checksum, argspec
argspec = ""
checksum = checksum.lower()
# Path is the location of the file (can be set a different way)
BLOCK_SIZE = 65536 # The size of each read from the file
file_hash = hashlib.sha256() # Create the hash object, can use something other than `.sha256()` if you wish
with open(path, 'rb') as f: # Open the file to read it's bytes
fb = f.read(BLOCK_SIZE) # Read from the file. Take in the amount declared above
while len(fb) > 0: # While there is still data being read from the file
file_hash.update(fb) # Update the hash
fb = f.read(BLOCK_SIZE) # Read the next block from the file
calc_sha256 = file_hash.hexdigest()
if (checksum == calc_sha256):
print("The file has not been tampered with, and is OK to use.")
else:
print("The file has been tampered with, and is NOT OK to use.")
# print(file_hash.hexdigest()) # Get the hexadecimal digest of the hash
parser = argparse.ArgumentParser()
    parser.add_argument('-f', '--file', help='Select the file to be checked', required=True, action="store", type=str)
    parser.add_argument('-s', '--sha256', help='Check the SHA256 checksum of the file', required=False,
                        action="store_true")
    parser.add_argument('-m', '--md5', help='Check the MD5 checksum of the file', required=False, action="store_true")
    parser.add_argument('-c', '--checksum', help='Enter the MD5 or SHA256 checksum of the file', required=True,
                        action="store", type=str)
args = vars(parser.parse_args())
path = args["file"]
checksum = args["checksum"]
    if args['md5']:
        MD5()
    if args['sha256']:
        SHA256()
print(argspec)
argspec = '''usage: HashChecker.py [-h] -f FILE [-s] [-m] -c CHECKSUM
optional arguments:
-h, --help show this help message and exit
-f FILE, --file FILE Select the file to be checked
-s, --sha256 Check the SHA256 checksum of the file
  -m, --md5             Check the MD5 checksum of the file
-c CHECKSUM, --checksum CHECKSUM
Enter the MD5 or SHA256 checksum of the file
'''
Check()
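The block-wise read inside MD5() and SHA256() keeps memory flat for large files. The core loop, exercised here against a small temporary file (stdlib only):

```python
import hashlib
import tempfile

BLOCK_SIZE = 65536

def file_sha256(path):
    # Hash the file in fixed-size chunks so large files never load fully into memory.
    file_hash = hashlib.sha256()
    with open(path, 'rb') as f:
        fb = f.read(BLOCK_SIZE)
        while len(fb) > 0:
            file_hash.update(fb)
            fb = f.read(BLOCK_SIZE)
    return file_hash.hexdigest()

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'hello')
    tmp_path = tmp.name

# Chunked hashing must match hashing the whole payload at once.
digest = file_sha256(tmp_path)
assert digest == hashlib.sha256(b'hello').hexdigest()
```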
# ==== beehve/apps/honey/migrations/0001_initial.py (Code4Maine/beehve, BSD-3-Clause) ====
# -*- coding: utf-8 -*-
from django.db import models, migrations
import django.utils.timezone
from django.conf import settings
import django_extensions.db.fields
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Buzz',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', django_extensions.db.fields.CreationDateTimeField(default=django.utils.timezone.now, verbose_name='created', editable=False, blank=True)),
('modified', django_extensions.db.fields.ModificationDateTimeField(default=django.utils.timezone.now, verbose_name='modified', editable=False, blank=True)),
('title', models.CharField(max_length=255, verbose_name='title')),
('slug', django_extensions.db.fields.AutoSlugField(allow_duplicates=False, separator=b'"u\'-\'"', blank=True, populate_from=b'"\'title\'"', editable=False, verbose_name='slug', overwrite=False)),
('description', models.TextField(null=True, verbose_name='description', blank=True)),
('author', models.ForeignKey(to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ('-modified', '-created'),
'abstract': False,
'get_latest_by': 'modified',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Event',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', django_extensions.db.fields.CreationDateTimeField(default=django.utils.timezone.now, verbose_name='created', editable=False, blank=True)),
('modified', django_extensions.db.fields.ModificationDateTimeField(default=django.utils.timezone.now, verbose_name='modified', editable=False, blank=True)),
('title', models.CharField(max_length=255, verbose_name='title')),
('slug', django_extensions.db.fields.AutoSlugField(allow_duplicates=False, separator=b'"u\'-\'"', blank=True, populate_from=b'"\'title\'"', editable=False, verbose_name='slug', overwrite=False)),
('description', models.TextField(null=True, verbose_name='description', blank=True)),
('pending', models.BooleanField(default=True)),
('project_count', models.IntegerField(default=0)),
('start_date', models.DateField(verbose_name='Start date')),
('start_time', models.TimeField(null=True, verbose_name='Start time', blank=True)),
('end_date', models.DateField(null=True, verbose_name='End date', blank=True)),
('end_time', models.TimeField(null=True, verbose_name='End time', blank=True)),
('url', models.CharField(max_length=255, null=True, verbose_name='Signup URL', blank=True)),
('location', models.CharField(max_length=255, null=True, blank=True)),
('address', models.CharField(max_length=255, null=True, blank=True)),
],
options={
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Link',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', django_extensions.db.fields.CreationDateTimeField(default=django.utils.timezone.now, verbose_name='created', editable=False, blank=True)),
('modified', django_extensions.db.fields.ModificationDateTimeField(default=django.utils.timezone.now, verbose_name='modified', editable=False, blank=True)),
('title', models.CharField(max_length=255, verbose_name='title')),
('slug', django_extensions.db.fields.AutoSlugField(allow_duplicates=False, separator=b'"u\'-\'"', blank=True, populate_from=b'"\'title\'"', editable=False, verbose_name='slug', overwrite=False)),
('description', models.TextField(null=True, verbose_name='description', blank=True)),
('url', models.CharField(max_length=255)),
('author', models.ForeignKey(to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ('-modified', '-created'),
'abstract': False,
'get_latest_by': 'modified',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Project',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', django_extensions.db.fields.CreationDateTimeField(default=django.utils.timezone.now, verbose_name='created', editable=False, blank=True)),
('modified', django_extensions.db.fields.ModificationDateTimeField(default=django.utils.timezone.now, verbose_name='modified', editable=False, blank=True)),
('title', models.CharField(max_length=255, verbose_name='title')),
('slug', django_extensions.db.fields.AutoSlugField(allow_duplicates=False, separator=b'"u\'-\'"', blank=True, populate_from=b'"\'title\'"', editable=False, verbose_name='slug', overwrite=False)),
('description', models.TextField(null=True, verbose_name='description', blank=True)),
('public_url', models.CharField(max_length=255, null=True, blank=True)),
('dev_url', models.CharField(max_length=255, null=True, blank=True)),
('git_url', models.CharField(max_length=255, null=True, blank=True)),
('status', models.CharField(default=b'ideation', max_length=10, choices=[(b'inprogress', b'In Progress'), (b'ideation', b'Ideation'), (b'stalled', b'Stalled'), (b'defunct', b'Defunct'), (b'launched', b'Launched')])),
('color', models.CharField(max_length=100, null=True, verbose_name='Color', blank=True)),
('screenshot', models.ImageField(upload_to=b'screenshots', null=True, verbose_name='Screenshot', blank=True)),
('events', models.ManyToManyField(to='honey.Event', null=True, blank=True)),
('founder', models.ForeignKey(related_name='founder', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('members', models.ManyToManyField(to=settings.AUTH_USER_MODEL, null=True, blank=True)),
],
options={
'ordering': ['-created'],
},
bases=(models.Model,),
),
migrations.CreateModel(
name='ProjectCommit',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', django_extensions.db.fields.CreationDateTimeField(default=django.utils.timezone.now, verbose_name='created', editable=False, blank=True)),
('modified', django_extensions.db.fields.ModificationDateTimeField(default=django.utils.timezone.now, verbose_name='modified', editable=False, blank=True)),
('chash', models.CharField(max_length=255)),
('message', models.TextField(null=True, blank=True)),
('summary', models.TextField(null=True, blank=True)),
('string_author', models.CharField(max_length=255, null=True, blank=True)),
('time', models.DateTimeField(null=True, blank=True)),
('diff', models.TextField(null=True, blank=True)),
('project', models.ForeignKey(to='honey.Project')),
('user_author', models.ForeignKey(blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
options={
'ordering': ['-created'],
},
bases=(models.Model,),
),
migrations.CreateModel(
name='ProjectIdea',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', django_extensions.db.fields.CreationDateTimeField(default=django.utils.timezone.now, verbose_name='created', editable=False, blank=True)),
('modified', django_extensions.db.fields.ModificationDateTimeField(default=django.utils.timezone.now, verbose_name='modified', editable=False, blank=True)),
('title', models.CharField(max_length=255, verbose_name='Title')),
('description', models.TextField(verbose_name='Description')),
('slug', django_extensions.db.fields.ShortUUIDField(max_length=36, editable=False, blank=True)),
('started_date', models.DateTimeField(null=True, blank=True)),
('created_by', models.ForeignKey(related_name='created_by', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('user_votes', models.ManyToManyField(related_name='user_votes', null=True, to=settings.AUTH_USER_MODEL, blank=True)),
],
options={
'ordering': ['-created'],
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Technology',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', django_extensions.db.fields.CreationDateTimeField(default=django.utils.timezone.now, verbose_name='created', editable=False, blank=True)),
('modified', django_extensions.db.fields.ModificationDateTimeField(default=django.utils.timezone.now, verbose_name='modified', editable=False, blank=True)),
('title', models.CharField(max_length=255, verbose_name='title')),
('slug', django_extensions.db.fields.AutoSlugField(allow_duplicates=False, separator=b'"u\'-\'"', blank=True, populate_from=b'"\'title\'"', editable=False, verbose_name='slug', overwrite=False)),
('description', models.TextField(null=True, verbose_name='description', blank=True)),
('pending', models.BooleanField(default=True)),
('project_count', models.IntegerField(default=0)),
],
options={
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Topic',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', django_extensions.db.fields.CreationDateTimeField(default=django.utils.timezone.now, verbose_name='created', editable=False, blank=True)),
('modified', django_extensions.db.fields.ModificationDateTimeField(default=django.utils.timezone.now, verbose_name='modified', editable=False, blank=True)),
('title', models.CharField(max_length=255, verbose_name='title')),
('slug', django_extensions.db.fields.AutoSlugField(allow_duplicates=False, separator=b'"u\'-\'"', blank=True, populate_from=b'"\'title\'"', editable=False, verbose_name='slug', overwrite=False)),
('description', models.TextField(null=True, verbose_name='description', blank=True)),
('pending', models.BooleanField(default=True)),
('project_count', models.IntegerField(default=0)),
],
options={
'abstract': False,
},
bases=(models.Model,),
),
migrations.AlterUniqueTogether(
name='projectcommit',
unique_together=set([('project', 'chash')]),
),
migrations.AddField(
model_name='project',
name='technologies',
field=models.ManyToManyField(to='honey.Technology', null=True, blank=True),
preserve_default=True,
),
migrations.AddField(
model_name='project',
name='topics',
field=models.ManyToManyField(to='honey.Topic', null=True, blank=True),
preserve_default=True,
),
migrations.AddField(
model_name='link',
name='project',
field=models.ForeignKey(to='honey.Project'),
preserve_default=True,
),
migrations.AddField(
model_name='buzz',
name='project',
field=models.ForeignKey(to='honey.Project'),
preserve_default=True,
),
]
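Several models above populate `slug` from `title` via AutoSlugField. Roughly, that field applies a slugify pass on save; a simplified stand-in (hypothetical — django_extensions additionally handles uniqueness and custom separators):

```python
import re

def simple_slugify(title, separator='-'):
    # Lowercase, collapse non-alphanumeric runs into the separator, trim the ends.
    return re.sub(r'[^a-z0-9]+', separator, title.lower()).strip(separator)

assert simple_slugify('My Project Title!') == 'my-project-title'
```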
0bca9c3ec93855f6554ca05194d69fbeff6e5e5e | 5,817 | py | Python | rec_to_nwb/test/processing/position/time/valid/test_flPosValidTimeManager.py | jihyunbak/rec_to_nwb | 6e65f8bf0a4faa4d986483ec2442ba19d70c92a9 | [
"Apache-2.0"
] | 8 | 2020-05-29T13:48:35.000Z | 2021-11-19T04:24:48.000Z | rec_to_nwb/test/processing/position/time/valid/test_flPosValidTimeManager.py | jihyunbak/rec_to_nwb | 6e65f8bf0a4faa4d986483ec2442ba19d70c92a9 | [
"Apache-2.0"
] | 12 | 2020-11-13T01:36:32.000Z | 2022-01-23T20:35:55.000Z | rec_to_nwb/test/processing/position/time/valid/test_flPosValidTimeManager.py | jihyunbak/rec_to_nwb | 6e65f8bf0a4faa4d986483ec2442ba19d70c92a9 | [
"Apache-2.0"
] | 3 | 2020-10-20T06:52:45.000Z | 2021-07-06T23:00:53.000Z | from unittest import TestCase
from unittest.mock import MagicMock
import numpy as np
from pynwb import NWBFile
from testfixtures import should_raise
from rec_to_nwb.processing.exceptions.missing_data_exception import MissingDataException
from rec_to_nwb.processing.nwb.components.position.time.valid.fl_pos_valid_time_manager import FlPosValidTimeManager
class TestFlPosValidTimeManager(TestCase):
def test_fl_pos_valid_time_manager_get_fl_pos_valid_times_with_gap_in_middle(self):
gaps_margin = 0.0001
        array = [1, 2, 3, 4, 5, 7, 9, 10, 11, 12]
        mock_array = np.array(array, dtype='float')
mock_series = MagicMock()
mock_series.timestamps = mock_array
mock_nwb = MagicMock(spec=NWBFile)
mock_nwb.processing['behavior'].data_interfaces['position'].spatial_series = {'series': mock_series}
mock_metadata = {'times_period_multiplier': 1.5}
fl_pos_valid_time_manager = FlPosValidTimeManager(mock_metadata)
fl_pos_valid_times = fl_pos_valid_time_manager.get_fl_pos_valid_times(
nwb_content=mock_nwb,
gaps_margin=gaps_margin
)
self.assertEqual(len(fl_pos_valid_times), 2)
self.assertEqual(round(fl_pos_valid_times[0].start_time, 4), 1 + gaps_margin)
self.assertEqual(round(fl_pos_valid_times[0].stop_time, 4), 5 - gaps_margin)
self.assertEqual(round(fl_pos_valid_times[1].start_time, 4), 9 + gaps_margin)
self.assertEqual(round(fl_pos_valid_times[1].stop_time, 4), 12 - gaps_margin)
def test_fl_pos_valid_time_manager_get_fl_pos_valid_times_without_gap(self):
gaps_margin = 0.0001
        array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
        mock_array = np.array(array, dtype='float')
mock_series = MagicMock()
mock_series.timestamps = mock_array
mock_nwb = MagicMock(spec=NWBFile)
mock_nwb.processing['behavior'].data_interfaces['position'].spatial_series = {'series': mock_series}
mock_metadata = {'times_period_multiplier': 1.5}
fl_pos_valid_time_manager = FlPosValidTimeManager(mock_metadata)
fl_pos_valid_times = fl_pos_valid_time_manager.get_fl_pos_valid_times(
nwb_content=mock_nwb,
gaps_margin=gaps_margin
)
self.assertEqual(len(fl_pos_valid_times), 1)
self.assertEqual(round(fl_pos_valid_times[0].start_time, 4), 1 + gaps_margin)
self.assertEqual(round(fl_pos_valid_times[0].stop_time, 4), 10 - gaps_margin)
def test_fl_pos_valid_time_manager_get_fl_pos_valid_times_with_gap_at_start(self):
gaps_margin = 0.0001
        array = [1, 3, 5, 6, 7, 8, 9, 10, 11, 12]
        mock_array = np.array(array, dtype='float')
mock_series = MagicMock()
mock_series.timestamps = mock_array
mock_nwb = MagicMock(spec=NWBFile)
mock_nwb.processing['behavior'].data_interfaces['position'].spatial_series = {'series': mock_series}
mock_metadata = {'times_period_multiplier': 1.5}
fl_pos_valid_time_manager = FlPosValidTimeManager(mock_metadata)
fl_pos_valid_times = fl_pos_valid_time_manager.get_fl_pos_valid_times(
nwb_content=mock_nwb,
gaps_margin=gaps_margin
)
self.assertEqual(len(fl_pos_valid_times), 1)
self.assertEqual(round(fl_pos_valid_times[0].start_time, 4), 5 + gaps_margin)
self.assertEqual(round(fl_pos_valid_times[0].stop_time, 4), 12 - gaps_margin)
def test_fl_pos_valid_time_manager_get_fl_pos_valid_times_with_gap_at_end(self):
gaps_margin = 0.0001
        array = [1, 2, 3, 4, 5, 6, 7, 8, 10, 12]
        mock_array = np.array(array, dtype='float')
mock_series = MagicMock()
mock_series.timestamps = mock_array
mock_nwb = MagicMock(spec=NWBFile)
mock_nwb.processing['behavior'].data_interfaces['position'].spatial_series = {'series': mock_series}
mock_metadata = {'times_period_multiplier': 1.5}
fl_pos_valid_time_manager = FlPosValidTimeManager(mock_metadata)
fl_pos_valid_times = fl_pos_valid_time_manager.get_fl_pos_valid_times(
nwb_content=mock_nwb,
gaps_margin=gaps_margin
)
self.assertEqual(len(fl_pos_valid_times), 1)
self.assertEqual(round(fl_pos_valid_times[0].start_time, 4), 1 + gaps_margin)
self.assertEqual(round(fl_pos_valid_times[0].stop_time, 4), 8 - gaps_margin)
@should_raise(TypeError)
def test_fl_pos_valid_time_manager_get_fl_pos_valid_times_failed_due_to_None_param(self):
gaps_margin = 0.0001
mock_metadata = {'times_period_multiplier': 1.5}
fl_pos_valid_time_manager = FlPosValidTimeManager(mock_metadata)
fl_pos_valid_time_manager.get_fl_pos_valid_times(
nwb_content=None,
gaps_margin=gaps_margin
)
@should_raise(MissingDataException)
def test_fl_pos_valid_time_manager_get_fl_pos_valid_times_failed_due_to_lack_of_timestamps(self):
gaps_margin = 0.0001
mock_series = MagicMock()
mock_nwb = MagicMock(spec=NWBFile)
mock_nwb.processing['behavior'].data_interfaces['position'].spatial_series = {'series': mock_series}
mock_metadata = {'times_period_multiplier': 1.5}
fl_pos_valid_time_manager = FlPosValidTimeManager(mock_metadata)
fl_pos_valid_time_manager.get_fl_pos_valid_times(
nwb_content=mock_nwb,
gaps_margin=gaps_margin
) | 45.80315 | 116 | 0.701221 | 807 | 5,817 | 4.629492 | 0.114002 | 0.065578 | 0.131156 | 0.12045 | 0.880086 | 0.862687 | 0.849572 | 0.849572 | 0.849572 | 0.849572 | 0 | 0.029469 | 0.206636 | 5,817 | 127 | 117 | 45.80315 | 0.780065 | 0 | 0 | 0.672897 | 0 | 0 | 0.046064 | 0.023719 | 0 | 0 | 0 | 0 | 0.130841 | 1 | 0.056075 | false | 0 | 0.065421 | 0 | 0.130841 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
042cfa3168535140ea5fcbebb105b1985d17d587 | 255 | py | Python | app/main/errors.py | suzaram3/my_dashboard | 9f78028875893ac25a829c24bd9343f117c3ef05 | [
"MIT"
] | null | null | null | app/main/errors.py | suzaram3/my_dashboard | 9f78028875893ac25a829c24bd9343f117c3ef05 | [
"MIT"
] | null | null | null | app/main/errors.py | suzaram3/my_dashboard | 9f78028875893ac25a829c24bd9343f117c3ef05 | [
"MIT"
] | null | null | null | from flask import render_template
from . import main
@main.app_errorhandler(404)
def page_not_found(e):
return render_template("404.html"), 404
@main.app_errorhandler(500)
def internal_server_error(e):
return render_template("500.html"), 500
| 19.615385 | 43 | 0.768627 | 38 | 255 | 4.921053 | 0.526316 | 0.224599 | 0.203209 | 0.224599 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080717 | 0.12549 | 255 | 12 | 44 | 21.25 | 0.757848 | 0 | 0 | 0 | 0 | 0 | 0.062745 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f08e9a7a6e1717797703e672623a985f7fa7deb3 | 9,266 | py | Python | main.py | nadavo/MEMM-POS-Tagger | 9696f66451703d8ba987061eda0f7a3d79902dc6 | [
"MIT"
] | null | null | null | main.py | nadavo/MEMM-POS-Tagger | 9696f66451703d8ba987061eda0f7a3d79902dc6 | [
"MIT"
] | null | null | null | main.py | nadavo/MEMM-POS-Tagger | 9696f66451703d8ba987061eda0f7a3d79902dc6 | [
"MIT"
] | null | null | null | from utils import TaggedDataReader, Timer
from MEMM import HistoryTuple, MEMM
from FeaturesFactory import BasicFeatures, AdvancedFeatures
from Viterbi import ViterbiAlgorithm
"""main file which was mainly used for our personal usage during development.
contains interfaces for different operations: training, predicting, evaluating and generating competition files for models."""
train_file = "data/train.wtag"
test_file = "data/test.wtag"
mini_train_file = "data/train.wtag_mini"
def trainAndPredictModel(model_type="basic", features_cutoff=3, regularizer=1, pretrained=False, viterbi_cutoff=20):
"""main interface method for easily training a model, running inference for predictions,
evaluate it and generate competition file for it."""
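    # Example invocation (a sketch; the argument values below are illustrative,
    # echoing the defaults above rather than the only supported settings):
    #   trainAndPredictModel(model_type="advanced", features_cutoff=3,
    #                        regularizer=1, pretrained=False, viterbi_cutoff=20)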
data = readData(train_file)
features = constructFeatures(model_type, data, features_cutoff)
model = createModel(data, features, regularizer, pretrained=pretrained)
trainModel(model, pretrained=pretrained)
results = evaluateModel(data, model, viterbi_cutoff)
results.append("Features Cutoff: " + str(features_cutoff))
results.append("Regularizer: " + str(regularizer))
results.append("Viterbi Cutoff: " + str(viterbi_cutoff))
def readData(file):
timer = Timer("Data Reader")
data = TaggedDataReader(file)
timer.stop()
print("Number of unique tags in data:", data.getTagDictSize())
print("Number of unique words in data:", data.getWordDictSize())
print("Number of unique word,tag pairs in data:", data.getWordTagDictSize())
print("Number of unique trigrams in data:", len(data.tags_trigrams))
print("Number of unique bigrams in data:", len(data.tags_bigrams))
print("Number of sentences in data:", data.getSentencesSize())
print("Number of tag sequences in data:", data.getTagsSize())
return data
def constructFeatures(complexity, data, features_cutoff):
timer = Timer("Features Construction"+"-"+complexity)
if complexity == 'advanced':
features = AdvancedFeatures(data, features_cutoff)
elif complexity == 'basic':
features = BasicFeatures(data, features_cutoff)
timer.stop()
print("Features Vector Length:", features.getFeaturesVectorLength())
print("Number of unique feature index lists in data:", len(features.features_dict))
print("Maximum frequency in features index lists:", max(features.features_dict.values()))
print("Feature type frequencies:", features.feature_freq)
print("Number of unique tag,history pairs in data:", len(features.histories_dict))
print("Empirical Counts Vector Length:", len(features.empirical_counts))
return features
def createModel(data, features, regularizer, pretrained=None):
timer = Timer("Model Creation")
model = MEMM(features, regularizer, pretrained)
timer.stop()
history = HistoryTuple(0, data.getSentenceByIndex(0), data.getTagsByIndex(0), 2)
timer = Timer("Probability Calculation")
probs = model.probability('VBZ', history, model.getWeights())
timer.stop()
print("Test Probability:", probs)
# model.calc_loss(model.getWeights())
# gradient = model.calc_gradient(model.getWeights())
# print("Gradient length:", len(gradient))
return model
def trainModel(model, pretrained=None):
if pretrained is not True:
model.fit()
def evaluateModel(data, model, viterbi_cutoff):
timer = Timer("Viterbi Calculation")
viterbi = ViterbiAlgorithm(0, data.getSentenceByIndex(0), data.getTagsByIndex(0), model, viterbi_cutoff)
viterbi.run()
timer.stop()
print("Truth:", data.getTagsByIndex(0))
print("Predictions:", viterbi.getBestTagSequence())
model.accuracy(data.getTagsByIndex(0), viterbi.getBestTagSequence(), True)
results = list()
model.predict(data, viterbi_cutoff)
results.append(str(model.evaluate(data)))
test_data = TaggedDataReader(test_file)
model.predict(test_data, viterbi_cutoff)
results.append(str(model.evaluate(test_data)))
return results
def mainModel(pretrained, features_cutoff, regularizer, viterbi_cutoff):
global_timer = Timer("Main Model Run")
timer = Timer("Data Reader")
data = TaggedDataReader(train_file)
timer.stop()
print("Number of unique tags in data:", data.getTagDictSize())
print("Number of unique words in data:", data.getWordDictSize())
print("Number of unique word,tag pairs in data:", data.getWordTagDictSize())
print("Number of unique trigrams in data:", len(data.tags_trigrams))
print("Number of unique bigrams in data:", len(data.tags_bigrams))
print("Number of sentences in data:", data.getSentencesSize())
print("Number of tag sequences in data:", data.getTagsSize())
timer = Timer("Features Construction")
features = BasicFeatures(data, features_cutoff)
timer.stop()
print("Features Vector Length:", features.getFeaturesVectorLength())
# init_weights = initModel(train_file, 0.01)
# exit(0)
timer = Timer("Model Creation")
model = MEMM(features, regularizer=regularizer, pretrained_weights=pretrained)
timer.stop()
history = HistoryTuple(0, data.getSentenceByIndex(0), data.getTagsByIndex(0), 2)
timer = Timer("Probability Calculation")
probs = model.probability('VBZ', history, model.getWeights())
timer.stop()
print("Test Probability:", probs)
print("Number of unique feature index lists in data:", len(features.features_dict))
print("Maximum frequency in features index lists:", max(features.features_dict.values()))
print("Number of unique tag,history pairs in data:", len(features.histories_dict))
print("Empirical Counts Vector Length:", len(features.empirical_counts))
model.calc_loss(model.getWeights())
gradient = model.calc_gradient(model.getWeights())
print("Gradient length:", len(gradient))
if pretrained is False:
model.fit()
timer = Timer("Viterbi Calculation")
viterbi = ViterbiAlgorithm(0, data.getSentenceByIndex(0), data.getTagsByIndex(0), model, viterbi_cutoff)
viterbi.run()
timer.stop()
print("Truth:", data.getTagsByIndex(0))
print("Predictions:", viterbi.getBestTagSequence())
model.accuracy(data.getTagsByIndex(0), viterbi.getBestTagSequence(), True)
results = list()
model.predict(data, viterbi_cutoff)
results.append(str(model.evaluate(data)))
test_data = TaggedDataReader(test_file)
model.predict(test_data, viterbi_cutoff)
results.append(str(model.evaluate(test_data)))
global_timer.stop()
return results
def trainBasicModel(features_cutoff, regularizer, viterbi_cutoff):
global_timer = Timer("Training Run")
pretrained = False
results = mainModel(pretrained, features_cutoff, regularizer, viterbi_cutoff)
results.append("Viterbi Cutoff: " + str(viterbi_cutoff))
results.append("Features Cutoff: " + str(features_cutoff))
results.append("Regularizer: " + str(regularizer))
global_timer.stop()
def testBasicModel(features_cutoff, regularizer, viterbi_cutoff):
global_timer = Timer("Test Run")
pretrained = True
results = mainModel(pretrained, features_cutoff, regularizer, viterbi_cutoff)
results.append("Viterbi Cutoff: " + str(viterbi_cutoff))
results.append("Features Cutoff: " + str(features_cutoff))
results.append("Regularizer: " + str(regularizer))
global_timer.stop()
def evaluateBasicModel(features_cutoff, regularizer, viterbi_cutoff):
pretrained = True
data = readData(train_file)
features = constructFeatures("basic", data, features_cutoff)
model = createModel(data, features, regularizer, pretrained=pretrained)
#model.weights = model.loadTrainedWeights("./cache/data-4962_cutoff-0_regularizer-1.0_trained_weights.pkl")
evaluateModel(data, model, viterbi_cutoff)
def evaluateAdvancedModel(features_cutoff, regularizer, viterbi_cutoff):
pretrained = True
data = readData(train_file)
features = constructFeatures("advanced", data, features_cutoff)
model = createModel(data, features, regularizer, pretrained=pretrained)
#model.weights = model.loadTrainedWeights("./cache/data-4962_features-advanced_weightSize-14056_cutoff-2_regularizer-1.0_trained_weights.pkl")
evaluateModel(data, model, viterbi_cutoff)
print(model.wrong_tags)
def main():
global_timer = Timer("Training + Test Run")
viterbi_cutoff = 20
features_cutoff = 3
regularizer = 1
# evaluateBasicModel(features_cutoff, regularizer, viterbi_cutoff)
# features_cutoff = 3
# evaluateAdvancedModel(features_cutoff, regularizer, viterbi_cutoff)
# exit(0)
pretrained = False
#trainBasicModel(features_cutoff, regularizer, viterbi_cutoff)
#testBasicModel(features_cutoff, regularizer, viterbi_cutoff)
data = readData(train_file)
features = constructFeatures("advanced", data, features_cutoff)
model = createModel(data, features, regularizer, pretrained=pretrained)
trainModel(model, pretrained=pretrained)
results = evaluateModel(data, model, viterbi_cutoff)
results.append("Features Cutoff: " + str(features_cutoff))
results.append("Regularizer: " + str(regularizer))
results.append("Viterbi Cutoff: " + str(viterbi_cutoff))
global_timer.stop()
if __name__ == '__main__':
main()
| 44.12381 | 146 | 0.734405 | 1,057 | 9,266 | 6.323557 | 0.152318 | 0.062238 | 0.035009 | 0.039797 | 0.806852 | 0.780371 | 0.718432 | 0.715589 | 0.676092 | 0.676092 | 0 | 0.006488 | 0.15163 | 9,266 | 209 | 147 | 44.334928 | 0.843786 | 0.091086 | 0 | 0.70122 | 0 | 0 | 0.177038 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0.02439 | 0 | 0.128049 | 0.20122 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f0a3105433b8b0713019e8f9c82c92dc97f41509 | 13,150 | py | Python | calligra/convert/jansson.py | marmeladema/calligra | 912becec93a2246ed322656131b7bd9fe51fff95 | [
"MIT"
] | 1 | 2020-11-29T07:25:34.000Z | 2020-11-29T07:25:34.000Z | calligra/convert/jansson.py | marmeladema/calligra | 912becec93a2246ed322656131b7bd9fe51fff95 | [
"MIT"
] | 1 | 2019-04-19T15:06:31.000Z | 2019-04-26T13:24:36.000Z | calligra/convert/jansson.py | marmeladema/calligra | 912becec93a2246ed322656131b7bd9fe51fff95 | [
"MIT"
] | null | null | null | import calligra
import calligra.stdlib
@calligra.add_method(calligra.stdlib.boolean, 'to_json')
class boolean_to_json(calligra.function):
def __init__(self, boolean):
# create function prototype
namespace = boolean._namespace
name = boolean.type().name()
super().__init__(
namespace,
namespace.get('json_t').type(),
'json_boolean',
pointer = True,
imported = True,
)
# add arguments
self.add(
calligra.declaration(
namespace,
namespace.get(name),
'value',
)
)
@calligra.add_method(calligra.stdlib.boolean, 'from_json')
class boolean_from_json(calligra.function):
def __init__(self, boolean):
# create function prototype
namespace = boolean._namespace
name = boolean.type().name()
super().__init__(
namespace,
namespace.get('bool').type(),
            'bool_from_json',
)
# add arguments
self.add(
calligra.declaration(
namespace,
namespace.get(name),
'value',
pointer = True,
)
)
self.add(
calligra.declaration(
namespace,
namespace.get('json_t'),
'json',
pointer = True,
)
)
def body(self, prefix = ''):
code = ''
code += prefix + 'if(json_is_boolean(json)) {\n'
code += prefix + '\t*value = json_boolean_value(json);\n'
code += prefix + '\treturn true;\n'
code += prefix + '}\n'
code += prefix + 'return false;\n'
return code
@calligra.add_method(calligra.IntegerType, 'to_json')
class integer_to_json(calligra.function):
def __init__(self, integer):
# create function prototype
namespace = integer._namespace
name = integer.type().name()
super().__init__(
namespace,
namespace.get('json_t').type(),
'json_integer',
pointer = True,
imported = True,
)
# add arguments
self.add(
calligra.declaration(
namespace,
namespace.get(name),
'value',
)
)
@calligra.add_method(calligra.IntegerType, 'from_json')
class integer_from_json(calligra.function):
def __init__(self, integer):
# create function prototype
namespace = integer._namespace
name = integer.type().name()
super().__init__(
namespace,
namespace.get('bool').type(),
calligra.method_name(namespace.get(name).name(), '_from_json'),
)
self._integer = integer
# add arguments
self.add(
calligra.declaration(
namespace,
namespace.get(name),
'value',
pointer = True,
)
)
self.add(
calligra.declaration(
namespace,
namespace.get('json_t'),
'json',
pointer = True,
)
)
def body(self, prefix = ''):
code = ''
code += prefix + 'if(!json_is_integer(json)) {\n'
code += prefix + '\treturn false;\n'
code += prefix + '}\n'
integer = calligra.declaration(
self._namespace, self._namespace.get('json_int_t'), 'integer'
)
code += prefix + 'json_int_t integer = json_integer_value(json);\n'
valid = self._integer.valid(self._namespace, (integer, ))
if valid:
code += prefix + 'if({}) {{\n'.format(valid)
code += prefix + '\treturn false;\n'
code += prefix + '}\n'
code += prefix + '*value = ({})integer;\n'.format(self._integer.name())
code += prefix + 'return true;\n'
return code
@calligra.add_method(calligra.RealType, 'to_json')
class real_to_json(calligra.function):
def __init__(self, real):
# create function prototype
namespace = real._namespace
name = real.type().name()
super().__init__(
namespace,
namespace.get('json_t').type(),
'json_real',
pointer = True,
imported = True,
)
# add arguments
self.add(
calligra.declaration(
namespace,
namespace.get(name),
'value',
)
)
@calligra.add_method(calligra.RealType, 'from_json')
class real_from_json(calligra.function):
def __init__(self, real):
# create function prototype
namespace = real._namespace
name = real.type().name()
super().__init__(
namespace,
namespace.get('bool').type(),
calligra.method_name(namespace.get(name).name(), '_from_json'),
)
self._real = real
# add arguments
self.add(
calligra.declaration(
namespace,
namespace.get(name),
'value',
pointer = True,
)
)
self.add(
calligra.declaration(
namespace,
namespace.get('json_t'),
'json',
pointer = True,
)
)
def body(self, prefix = ''):
code = ''
code += prefix + 'if(!json_is_real(json)) {\n'
code += prefix + '\treturn false;\n'
code += prefix + '}\n'
real = calligra.declaration(
self._namespace, self._namespace.get('double'), 'real'
)
code += prefix + 'double real = json_real_value(json);\n'
code += prefix + 'if({}) {{\n'.format(
self._real.valid(self._namespace, (real, ))
)
code += prefix + '\treturn false;\n'
code += prefix + '}\n'
code += prefix + '*value = ({})real;\n'.format(self._real.name())
code += prefix + 'return true;\n'
return code
@calligra.add_method(calligra.stdlib.char, 'to_json')
class char_to_json(calligra.function):
def __init__(self, char):
# create function prototype
namespace = char._namespace
name = char.type().name()
super().__init__(
namespace,
namespace.get('json_t').type(),
'json_string',
pointer = True,
imported = True,
)
# add arguments
self.add(
calligra.declaration(
namespace,
namespace.get(name),
'value',
pointer = True,
const = True,
)
)
@calligra.add_method(calligra.stdlib.char, 'from_json')
class char_from_json(calligra.function):
def __init__(self, char):
# create function prototype
namespace = char.namespace()
name = char.type().name()
super().__init__(
namespace,
namespace.get('bool').type(),
calligra.method_name(namespace.get(name).name(), '_from_json'),
)
self._char = char
# add arguments
self.add(
calligra.declaration(
namespace,
namespace.get(name),
'value',
pointer = 2,
),
)
self.add(
calligra.declaration(
namespace,
namespace.get('json_t'),
'json',
pointer = True,
)
)
def body(self, prefix = ''):
code = ''
code += prefix + 'if(!json_is_string(json)) {\n'
code += prefix + '\treturn false;\n'
code += prefix + '}\n'
code += prefix + '*value = strndup(json_string_value(json), json_string_length(json));\n'
code += prefix + 'if(!(*value)) {\n'
code += prefix + '\treturn false;\n'
code += prefix + '}\n'
code += prefix + 'return true;\n'
return code
@calligra.add_method(calligra.struct, 'to_json')
class struct_to_json(calligra.function):
def __init__(self, struct):
# create function prototype
namespace = struct._namespace
name = struct.type().name()
super().__init__(
namespace,
namespace.get('json_t').type(),
calligra.method_name(namespace.get(name).name(), '_to_json'),
pointer = True
)
self._struct = struct
# add arguments
self.add(
calligra.declaration(
namespace,
self._struct,
namespace.get(name).name(),
pointer = True,
const = True,
)
)
def body(self, prefix = ''):
code = ''
properties = self._struct.properties()
code += prefix + 'json_t *json = json_object(), *child;\n'
code += prefix + 'if(!json) {\n'
code += prefix + '\treturn NULL;\n'
code += prefix + '}\n'
for property in properties:
property_type = self._namespace.get(property.type())
code += prefix + '/*' + property.name() + '*/\n'
access = property.access(self._namespace, (self.properties()[0], ))
nil = property.nil(self._namespace, (self.properties()[0], ))
if hasattr(property_type, 'to_json'):
code += prefix + 'if({}) {{\n'.format(access & ~nil)
code += prefix + '\tchild = {};\n'.format(
property_type.to_json.call(
(self.properties()[0], property)
),
)
code += prefix + '\tif(!child || json_object_set_new_nocheck(json, "{}", child) != 0) {{\n'.format(
property.name()
)
code += prefix + '\t\tif(child) {\n'
code += prefix + '\t\t\tjson_decref(child);\n'
code += prefix + '\t\t}\n'
code += prefix + '\t\tjson_decref(json);\n'
code += prefix + '\t\treturn NULL;\n'
code += prefix + '\t}\n'
code += prefix + '}\n'
code += prefix + 'return json;\n'
return code
@calligra.add_method(calligra.struct, 'from_json')
class struct_from_json(calligra.function):
def __init__(self, struct):
# create function prototype
namespace = struct._namespace
name = struct.type().name()
super().__init__(
namespace,
namespace.get('bool').type(),
calligra.method_name(namespace.get(name).name(), '_from_json'),
)
self._struct = struct
# add arguments
self.add(
calligra.declaration(
namespace,
namespace.get(name),
'value',
pointer = True,
)
)
self.add(
calligra.declaration(
namespace,
namespace.get('json_t'),
'json',
pointer = True,
)
)
def body(self, prefix = ''):
code = ''
properties = self._struct.properties()
code += prefix + 'json_t *child;\n'
code += prefix + 'size_t count = 0;\n\n'
code += prefix + 'if(!{} || !json_is_object(json)) {{\n'.format(
self.properties()[0].name()
)
code += prefix + '\treturn false;\n'
code += prefix + '}\n\n'
for property in properties:
property_type = self._namespace.get(property.type())
code += prefix + '/*' + property.name() + '*/\n'
access = property.access(self._namespace, (self.properties()[0], ))
#nil = property.nil(self._namespace, (self.properties()[0], ))
if hasattr(property_type, 'from_json'):
code += prefix + 'if({}) {{\n'.format(access)
code += prefix + '\tchild = json_object_get(json, "{}");\n'.format(
property.name()
)
code += prefix + '\tif(!child) {\n'
code += prefix + '\t\treturn false;\n'
code += prefix + '\t}\n'
child = calligra.declaration(
self._namespace,
self._namespace.get('json_t'),
'child',
pointer = True
)
code += prefix + '\tif(!{}) {{\n'.format(
property_type.from_json.call(
(self.properties()[0], property), (child, )
),
)
code += prefix + '\t\treturn false;\n'
code += prefix + '\t}\n'
code += prefix + '\tcount += 1;\n'
code += prefix + '}\n'
code += prefix + 'if(json_object_size(json) != count) {\n'
code += prefix + '\treturn false;\n'
code += prefix + '}\n'
code += prefix + 'return true;\n'
return code
@calligra.add_method(calligra.stdlib.in_addr, 'to_json')
class in_addr_to_json(struct_to_json):
def body(self, prefix = ''):
code = ''
code += prefix + 'char str[INET_ADDRSTRLEN] = {0};\n'
code += prefix + 'if(inet_ntop(AF_INET, {}, str, sizeof(str))) {{\n'.format(
self.properties()[0].name()
)
code += prefix + '\treturn json_string_nocheck(str);\n'
code += prefix + '}\n'
code += prefix + 'return NULL;\n'
return code
@calligra.add_method(calligra.stdlib.in_addr, 'from_json')
class in_addr_from_json(struct_from_json):
def body(self, prefix = ''):
code = ''
code += prefix + 'const char *str;\n'
code += prefix + 'if(!json_is_string(json)) {\n'
code += prefix + '\treturn false;\n'
code += prefix + '}\n'
code += prefix + 'str = json_string_value(json);\n'
code += prefix + 'if(inet_pton(AF_INET, str, {}) == 1) {{\n'.format(
self.properties()[0].name()
)
code += prefix + '\treturn true;\n'
code += prefix + '}\n'
code += prefix + 'return false;\n'
return code
@calligra.add_method(calligra.stdlib.in6_addr, 'to_json')
class in6_addr_to_json(struct_to_json):
def body(self, prefix = ''):
code = ''
code += prefix + 'char str[INET6_ADDRSTRLEN] = {0};\n'
code += prefix + 'if(inet_ntop(AF_INET6, {}, str, sizeof(str))) {{\n'.format(
self.properties()[0].name()
)
code += prefix + '\treturn json_string_nocheck(str);\n'
code += prefix + '}\n'
code += prefix + 'return NULL;\n'
return code
@calligra.add_method(calligra.stdlib.in6_addr, 'from_json')
class in6_addr_from_json(struct_from_json):
def body(self, prefix = ''):
code = ''
code += prefix + 'const char *str;\n'
code += prefix + 'if(!json_is_string(json)) {\n'
code += prefix + '\treturn false;\n'
code += prefix + '}\n'
code += prefix + 'str = json_string_value(json);\n'
code += prefix + 'if(inet_pton(AF_INET6, str, {}) == 1) {{\n'.format(
self.properties()[0].name()
)
code += prefix + '\treturn true;\n'
code += prefix + '}\n'
code += prefix + 'return false;\n'
return code
calligra.PrimaryType(
calligra.stdlib.namespace, 'json_t', imported = 'jansson.h'
)
calligra.IntegerType(
calligra.stdlib.namespace,
'json_int_t',
imported = True,
min_value = 'LLONG_MIN',
max_value = 'LLONG_MAX',
)
| 25.683594 | 103 | 0.595589 | 1,539 | 13,150 | 4.896686 | 0.059779 | 0.126062 | 0.090499 | 0.028662 | 0.839172 | 0.81396 | 0.771364 | 0.746683 | 0.719214 | 0.703424 | 0 | 0.002588 | 0.23597 | 13,150 | 511 | 104 | 25.733855 | 0.747487 | 0.034981 | 0 | 0.618267 | 0 | 0 | 0.180218 | 0.051468 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046838 | false | 0 | 0.018735 | 0 | 0.12178 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f0e8e4cb9b296f93e2cb01f756d19b3d4daa602a | 13,882 | py | Python | tests/test_routes/test_routes_schema.py | hed-standard/hed-web | 8603526266dff78cf07e49e6c0f0c715a9225289 | [
"MIT"
] | null | null | null | tests/test_routes/test_routes_schema.py | hed-standard/hed-web | 8603526266dff78cf07e49e6c0f0c715a9225289 | [
"MIT"
] | null | null | null | tests/test_routes/test_routes_schema.py | hed-standard/hed-web | 8603526266dff78cf07e49e6c0f0c715a9225289 | [
"MIT"
] | 2 | 2022-02-04T19:55:40.000Z | 2022-02-04T21:36:04.000Z | import io
import os
import unittest
from tests.test_web_base import TestWebBase
class Test(TestWebBase):
def test_schema_results_empty_data(self):
response = self.app.test.post('/schema_submit')
self.assertEqual(200, response.status_code, 'HED schema request succeeds even when no data')
header_dict = dict(response.headers)
self.assertEqual("error", header_dict["Category"],
"The header msg_category when no schema request data is error ")
self.assertFalse(response.data, "The response data for empty schema request is empty")
def test_schema_results_convert_mediawiki_invalid(self):
schema_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
'../data/HED8.0.0Bad.mediawiki')
with open(schema_path, 'r') as sc:
x = sc.read()
schema_buffer = io.BytesIO(bytes(x, 'utf-8'))
with self.app.app_context():
input_data = {'schema_upload_options': 'schema_file_option',
'command_option': 'convert',
'schema_file': (schema_buffer, 'HED8.0.0Bad.mediawiki'),
'check_for_warnings': 'on'}
response = self.app.test.post('/schema_submit', content_type='multipart/form-data', data=input_data)
            self.assertEqual(200, response.status_code, 'Convert of an invalid mediawiki has a response')
headers_dict = dict(response.headers)
self.assertEqual("warning", headers_dict["Category"],
"An mediawiki schema that does not load cannot be converted.")
self.assertTrue(response.data, "The response data for invalid mediawiki conversion should not be empty")
self.assertTrue(headers_dict['Message'],
"The error message for invalid mediawiki conversion should not be empty")
schema_buffer.close()

    def test_schema_results_convert_mediawiki_valid(self):
        schema_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                   '../data/HED8.0.0.mediawiki')
        with open(schema_path, 'r') as sc:
            x = sc.read()
        schema_buffer = io.BytesIO(bytes(x, 'utf-8'))
        with self.app.app_context():
            input_data = {'schema_upload_options': 'schema_file_option',
                          'command_option': 'convert_schema',
                          'schema_file': (schema_buffer, 'HED8.0.0.mediawiki'),
                          'check_for_warnings': 'on'}
            response = self.app.test.post('/schema_submit', content_type='multipart/form-data', data=input_data)
            self.assertEqual(200, response.status_code, 'Conversion of a valid mediawiki has a response')
            headers_dict = dict(response.headers)
            self.assertEqual("success", headers_dict["Category"],
                             "The valid mediawiki should convert to xml successfully")
            self.assertTrue(response.data, "The converted schema should not be empty")
            self.assertEqual('attachment filename=HED8.0.0.xml',
                             headers_dict['Content-Disposition'],
                             "Conversion of valid mediawiki should return xml")
        schema_buffer.close()

    def test_schema_results_convert_xml_valid(self):
        schema_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                   '../data/HED8.0.0.xml')
        with open(schema_path, 'r') as sc:
            x = sc.read()
        schema_buffer = io.BytesIO(bytes(x, 'utf-8'))
        with self.app.app_context():
            input_data = {'schema_upload_options': 'schema_file_option',
                          'command_option': 'convert_schema',
                          'schema_file': (schema_buffer, 'HED8.0.0.xml'),
                          'check_for_warnings': 'on'}
            response = self.app.test.post('/schema_submit', content_type='multipart/form-data', data=input_data)
            self.assertEqual(200, response.status_code, 'Conversion of a valid xml has a response')
            headers_dict = dict(response.headers)
            self.assertEqual("success", headers_dict["Category"],
                             "The valid xml should convert to mediawiki successfully")
            self.assertTrue(response.data, "The converted schema should not be empty")
            self.assertEqual('attachment filename=HED8.0.0.mediawiki',
                             headers_dict['Content-Disposition'],
                             "Conversion of valid xml should return a mediawiki file")
        schema_buffer.close()

    def test_schema_results_convert_xml_url_valid(self):
        schema_url = \
            'https://raw.githubusercontent.com/hed-standard/hed-specification/master/hedxml/HED8.0.0.xml'
        with self.app.app_context():
            input_data = {'schema_upload_options': 'schema_url_option',
                          'command_option': 'convert_schema',
                          'schema_url': schema_url,
                          'check_for_warnings': 'on'}
            response = self.app.test.post('/schema_submit', content_type='multipart/form-data', data=input_data)
            self.assertEqual(200, response.status_code, 'Conversion of a valid xml url has a response')
            headers_dict = dict(response.headers)
            self.assertEqual("success", headers_dict["Category"],
                             "The valid xml url should convert to mediawiki successfully")
            self.assertTrue(response.data, "The converted xml url schema should not be empty")
            self.assertEqual('attachment filename=HED8.0.0.mediawiki',
                             headers_dict['Content-Disposition'],
                             "Conversion of valid xml url should return mediawiki")

    def test_schema_results_convert_xml_url_valid2(self):
        schema_url = \
            'https://raw.githubusercontent.com/hed-standard/hed-specification/master/hedxml/HED8.0.0.xml'
        with self.app.app_context():
            input_data = {'schema_upload_options': 'schema_url_option',
                          'command_option': 'convert_schema',
                          'schema_url': schema_url,
                          'check_for_warnings': 'on'}
            response = self.app.test.post('/schema_submit', content_type='multipart/form-data', data=input_data)
            self.assertEqual(200, response.status_code, 'Conversion of a valid xml url has a response')
            headers_dict = dict(response.headers)
            self.assertEqual("success", headers_dict["Category"],
                             "The valid xml url should convert to mediawiki successfully")
            self.assertTrue(response.data, "The converted xml url schema should not be empty")
            self.assertEqual('attachment filename=HED8.0.0.mediawiki',
                             headers_dict['Content-Disposition'],
                             "Conversion of valid xml url should return mediawiki")

    def test_schema_results_validate_mediawiki_invalid(self):
        schema_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                   '../data/HED8.0.0Bad.mediawiki')
        with open(schema_path, 'r') as sc:
            x = sc.read()
        schema_buffer = io.BytesIO(bytes(x, 'utf-8'))
        with self.app.app_context():
            input_data = {'schema_upload_options': 'schema_file_option',
                          'command_option': 'validate',
                          'schema_file': (schema_buffer, 'HED8.0.0Bad.mediawiki'),
                          'check_for_warnings': 'on'}
            response = self.app.test.post('/schema_submit', content_type='multipart/form-data', data=input_data)
            self.assertEqual(200, response.status_code, 'Validation of an invalid mediawiki has a response')
            headers_dict = dict(response.headers)
            self.assertEqual("warning", headers_dict["Category"],
                             "A schema that cannot be loaded should return a warning")
            self.assertTrue(response.data, "The response data for invalid mediawiki validation should not be empty")
            self.assertTrue(headers_dict['Message'],
                            "The error message for invalid mediawiki validation should not be empty")
            self.assertGreater(len(headers_dict['Content-Disposition']), 0,
                               "An error file should be returned if the mediawiki cannot load.")
        schema_buffer.close()

    def test_schema_results_validate_mediawiki_valid(self):
        schema_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                   '../data/HED8.0.1.mediawiki')
        with open(schema_path, 'r') as sc:
            x = sc.read()
        schema_buffer = io.BytesIO(bytes(x, 'utf-8'))
        with self.app.app_context():
            input_data = {'schema_upload_options': 'schema_file_option',
                          'command_option': 'validate',
                          'schema_file': (schema_buffer, 'HED8.0.1.mediawiki'),
                          'check_for_warnings': 'on'}
            response = self.app.test.post('/schema_submit', content_type='multipart/form-data', data=input_data)
            self.assertEqual(200, response.status_code, 'Validation of a valid mediawiki has a response')
            headers_dict = dict(response.headers)
            self.assertEqual("success", headers_dict["Category"],
                             "The valid mediawiki should validate successfully")
            self.assertFalse(response.data, "The response data for validated mediawiki should be empty")
            self.assertIsNone(headers_dict.get('Content-Disposition'),
                              "Validation of valid mediawiki should not return a file")
        schema_buffer.close()

    def test_schema_results_validate_xml_valid(self):
        schema_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                   '../data/HED8.0.1.xml')
        with open(schema_path, 'r') as sc:
            x = sc.read()
        schema_buffer = io.BytesIO(bytes(x, 'utf-8'))
        with self.app.app_context():
            input_data = {'schema_upload_options': 'schema_file_option',
                          'command_option': 'validate',
                          'schema_file': (schema_buffer, 'HED8.0.1.xml'),
                          'check_for_warnings': 'on'}
            response = self.app.test.post('/schema_submit', content_type='multipart/form-data', data=input_data)
            self.assertEqual(200, response.status_code, 'Validation of a valid xml has a response')
            headers_dict = dict(response.headers)
            self.assertEqual("success", headers_dict["Category"],
                             "The valid xml should validate successfully")
            self.assertFalse(response.data, "The response data for validated xml should be empty")
            self.assertIsNone(headers_dict.get('Content-Disposition'),
                              "Validation of valid xml should not return a file")
        schema_buffer.close()

    def test_schema_results_validate_xml_url_invalid(self):
        schema_url = \
            'https://raw.githubusercontent.com/hed-standard/hed-specification/master/hedxml/HED7.2.0.xml'
        with self.app.app_context():
            input_data = {'schema_upload_options': 'schema_url_option',
                          'command_option': 'validate',
                          'schema_url': schema_url,
                          'check_for_warnings': 'on'}
            response = self.app.test.post('/schema_submit', content_type='multipart/form-data', data=input_data)
            self.assertEqual(200, response.status_code, 'Validation of an invalid xml url has a response')
            headers_dict = dict(response.headers)
            self.assertEqual("warning", headers_dict["Category"],
                             "The xml url for HED 7.2.0 should not be generation-3 compliant")
            self.assertTrue(response.data,
                            "Validation of a generation-2 xml url should have response data")
            self.assertTrue(headers_dict['Content-Disposition'],
                            "Validation of a generation-2 xml should return a validation error file")

    # TODO: Uncomment when version 8.0.1 is released --- it should work
    # def test_schema_results_validate_xml_url_valid(self):
    #     schema_url = \
    #         'https://raw.githubusercontent.com/hed-standard/hed-specification/master/hedxml/HED8.0.1.xml'
    #     with self.app.app_context():
    #         input_data = {'schema_upload_options': 'schema_url_option',
    #                       'command_option': 'validate',
    #                       'schema_url': schema_url,
    #                       'check_for_warnings': 'on'}
    #         response = self.app.test.post('/schema_submit', content_type='multipart/form-data', data=input_data)
    #         self.assertEqual(200, response.status_code, 'Validation of a valid xml url has a response')
    #         headers_dict = dict(response.headers)
    #         self.assertEqual("success", headers_dict["Category"],
    #                          "The valid xml url should be successful")
    #         self.assertFalse(response.data, "The validated xml url schema should return empty response data")
    #         self.assertEqual(None, headers_dict.get('Content-Disposition', None),
    #                          "Validation of valid xml url should not return an error file")


if __name__ == '__main__':
    unittest.main()
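
# The Content-Disposition assertions above compare the full header string that
# the web app emits ('attachment filename=...'). As a sketch (not used by the
# suite), a small parser could extract just the filename, so a test would also
# tolerate variants such as 'attachment; filename="..."':
def _disposition_filename(header_value):
    """Extract the filename from an 'attachment filename=...' style header.

    Hypothetical helper; returns None when no filename parameter is present.
    """
    for part in header_value.replace(';', ' ').split():
        if part.startswith('filename='):
            return part[len('filename='):].strip('"')
    return None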