hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
114982c1eafc185c89486875155657b542342040 | 905 | py | Python | Day-48/Wiki_Interaction/interaction.py | MihirMore/100daysofcode-Python | 947d91842639c04ee7d23cc82bf04053d3982a85 | [
"MIT"
] | 4 | 2021-04-09T20:01:22.000Z | 2022-03-18T20:49:58.000Z | Day-48/Wiki_Interaction/interaction.py | MihirMore/100daysofcode-Python | 947d91842639c04ee7d23cc82bf04053d3982a85 | [
"MIT"
] | null | null | null | Day-48/Wiki_Interaction/interaction.py | MihirMore/100daysofcode-Python | 947d91842639c04ee7d23cc82bf04053d3982a85 | [
"MIT"
] | null | null | null | from selenium import webdriver
from selenium.webdriver.common.keys import Keys
chrome_driver_path = r"C:\Program Files\chromedriver_win32\chromedriver.exe"  # raw string so backslashes are not treated as escapes
driver = webdriver.Chrome(executable_path=chrome_driver_path)
driver.get("http://secure-retreat-92358.herokuapp.com/")
number_of_articles = driver.find_element_by_css_selector("#articlecount a")
print(number_of_articles.text)  # print the link text, not the WebElement repr
number_of_articles.click()
all_portals = driver.find_element_by_partial_link_text("All portals")
all_portals.click()
search = driver.find_element_by_name("search")
search.send_keys("Python")
search.send_keys(Keys.ENTER)
f_name = driver.find_element_by_name("fName")
f_name.send_keys("Mihir")
l_name = driver.find_element_by_name("lName")
l_name.send_keys("More")
email = driver.find_element_by_name("email")
email.send_keys("mihirmore123@mail.com")
button = driver.find_element_by_css_selector("form button")
button.click()
| 33.518519 | 75 | 0.81989 | 136 | 905 | 5.102941 | 0.411765 | 0.100865 | 0.17147 | 0.191643 | 0.230548 | 0.164265 | 0 | 0 | 0 | 0 | 0 | 0.011751 | 0.059669 | 905 | 26 | 76 | 34.807692 | 0.80376 | 0 | 0 | 0 | 0 | 0 | 0.207735 | 0.068508 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.095238 | 0 | 0.095238 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
114cf40c68698e4730b54f9bd545e4c801a6aab5 | 38,882 | py | Python | code/prediction/methods.py | philipversteeg/validation-yeast | a977a2b038618530b75577495ade6b7a9e728da4 | [
"BSD-2-Clause"
] | 1 | 2018-02-15T10:50:56.000Z | 2018-02-15T10:50:56.000Z | code/prediction/methods.py | philipversteeg/validation-yeast | a977a2b038618530b75577495ade6b7a9e728da4 | [
"BSD-2-Clause"
] | null | null | null | code/prediction/methods.py | philipversteeg/validation-yeast | a977a2b038618530b75577495ade6b7a9e728da4 | [
"BSD-2-Clause"
] | null | null | null | """Methods that predict a causal effect.
Instantiate class and call fit method.
"""
# stdlib
import os
import pickle
import time
import abc
from functools import wraps
import tempfile
from subprocess import check_call, CalledProcessError
# math stats ml
import numpy as np
from scipy.stats import kendalltau, spearmanr
from scipy.spatial.distance import correlation as distancecorr
# local
from ..microarraydata import MicroArrayData
from ..libs import CausalArray
from ..libs.misc import save_array, load_array
from .. import config
OBSERMETHOD_DATATYPES = ['obser', 'inter', 'obser_inter_joint']
###
### Abstract Base Class
###
class Predictor(object):
__metaclass__ = abc.ABCMeta
def __init__(self, processes=None, folder=None, file=None, name=None, save=None, verbose=None):
"""Abstract base class that all prediction estimator methods inherit from.
Subclasses need to override at least the abstract fit method.
Args:
processes (int, optional): by default single process, inherit also from MultiProcess to use multiple cores
folder (str, optional): disk location where results are stored
file (str, optional): disk location of optionally saved file relative to self.folder
name (str, optional): name of the method
save (bool, optional): if true, always save output
verbose (bool or int, optional): set verbose level; default is off
"""
self.processes = processes or 1
self._name = name
self.folder = folder or config.folder
self._file = file
self.save = save
self._opts = {} # dict used for storing / overriding / deleting task settings.
if verbose is None:
self.verbose = False # default option
else:
self.verbose = verbose
self.result = None
if not os.path.exists(self.folder):
if self.verbose: print '[{}] Creating folder {}'.format(self.name_method, self.folder)
os.makedirs(self.folder)
@abc.abstractmethod
def fit(self, data, bootstrap=False):
"""Overwrite and return self"""
pass
@staticmethod
def fit_decorator(fit_function):
""" Decorator for 'fit' instance methods.
Wrapper for fit(self, data) instance method that returns a CausalArray object
1. if self.folder + self.filename exists
--> load this result
2. execute and time the method
3. if the duration > threshold
--> save the result to self.folder + self.filename
"""
@wraps(fit_function)
def wrapper(self, *args, **kwargs):
file_name = self.folder + '/' + self.file
# if self.folder + self.filename exists, return that
if os.path.exists(file_name):
if self.verbose: print '[{}] existing result loaded at {}.'.format(self.name_method,
os.path.relpath(file_name))
self.result = CausalArray.load(file_name)
return self
# compute and time that result
start_time = time.time() # attach start_time to instance for easy printing
if self.verbose: print '[{}] fitting data...'.format(self.name_method)
result_instance = fit_function(self, *args, **kwargs)
# try:
# result_instance = fit_function(*args, **kwargs)
# except Exception as e:
# raise Exception('Error with method %s' % self.name)
result_time = time.time() - start_time
if self.verbose: print '[{}] ...done in {:.2f}s.'.format(self.name_method, result_time)
# check for empty CausalArray
if config.warning_empty_result:
if result_instance.result.nonzero == 0: print '[{}] WARNING: empty CausalArray returned!'.format(self.name_method)
# save if necessary
if result_time > config.save_task_threshold or self.save:
result_instance.result.save(file_name)
return result_instance
return wrapper
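The load-or-compute-and-save behaviour of the decorator above can be sketched standalone; `cached_fit`, `Toy` and `save_threshold` are hypothetical stand-ins for `fit_decorator`, a `Predictor` subclass and `config.save_task_threshold`, and pickle replaces the CausalArray hdf5 format:

```python
import os
import pickle
import tempfile
import time
from functools import wraps

def cached_fit(fit_function):
    """Load a previously saved result if present; otherwise compute, time, and save it."""
    @wraps(fit_function)
    def wrapper(self, *args, **kwargs):
        file_name = os.path.join(self.folder, self.file)
        if os.path.exists(file_name):            # 1. cached result exists --> load it
            with open(file_name, 'rb') as f:
                self.result = pickle.load(f)
            return self
        start = time.time()                      # 2. compute and time the fit
        fit_function(self, *args, **kwargs)
        duration = time.time() - start
        if duration > self.save_threshold or self.save:  # 3. persist if slow (or forced)
            with open(file_name, 'wb') as f:
                pickle.dump(self.result, f, -1)
        return self
    return wrapper

class Toy(object):
    folder, file, save, save_threshold = tempfile.mkdtemp(), 'toy.pkl', True, 10.0
    @cached_fit
    def fit(self, data):
        self.result = sum(data)
        return self

t = Toy().fit([1, 2, 3])    # computes 6 and saves it to disk
t2 = Toy().fit([9, 9, 9])   # second call loads the cached 6 instead of recomputing
assert t.result == 6 and t2.result == 6
```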
@property
def name_method(self):
"""General name of the prediction method (non-unique!)"""
return self.__class__.__name__
@property
def name(self):
"""Unique name for a given method based on all callable options.
If self._name instance is defined, this is used instead.
"""
if self._name:
return self._name
else:
return self.__class__.__name__ + '_' + '_'.join(['{}={}'.format(i,j) for i,j in self.options.iteritems()])
def pretty_name(self, print_options):
""" A pretty printable name."""
return self.name.replace('_', ' ')
@property
def file(self):
if self._file:
return self._file
else:
return self.name + '.hdf5'
@property
def done(self):
return self.result is not None
def pickle_dump(self, file):
with open(file, 'wb') as f:
pickle.dump(self, f, -1) # highest protocol available
@staticmethod
def pickle_load(file):
with open(file, 'rb') as f:
return pickle.load(f)
@property
def options(self):
"""Return dict of all real parameters that are not purely for computation."""
# options_set = set(self.__class__.__init__.__code__.co_varnames).difference(
# set(('self', 'args', 'kwargs')))
# options_dict = dict((opt, getattr(self, opt)) for opt in options_set) if options_set else {}
# if hasattr(self, '_options'): # add overriden options
# print self._options
# for k, v in self._options.iteritems():
# options_dict[k] = v
return dict((k, v) for k, v in self._options.iteritems() if v is not None)
@property
def _options(self):
return self._opts
@_options.setter
def _options(self, opts):
assert type(opts) is dict
self._opts.update(opts)
def __repr__(self):
result = '[{self.__class__.__name__}] {self.name}'.format(self=self)
for k, v in self.options.iteritems():
result += '\n{space}{k}: {v}'.format(space=(len(self.__class__.__name__) + 3) * ' ', k=k, v=v)
return result + '\n'
class MultiProcess(Predictor):
"""Use multiple processes for computation; the default equals config.ncores_method."""
def __init__(self, processes=None, *args, **kwargs):
super(MultiProcess, self).__init__(processes=processes or config.ncores_method, *args, **kwargs)
class Dummy(MultiProcess):
"""Dummy method used for testing multiprocesses"""
def __init__(self, sleep, *args, **kwargs):
super(Dummy, self).__init__(*args, **kwargs)
self.sleep = sleep
@Predictor.fit_decorator
def fit(self, data):
print '...dummy starting', self.name
time.sleep(self.sleep)
print '...dummy finished', self.name
# return CausalArray(array=np.random.random((data.ngenes, data.ngenes)),
# causes=data.genes, effects=data.genes, name='dummy')
self.result = CausalArray(array=np.random.random((data.ngenes, data.ngenes)),
causes=data.genes, effects=data.genes, name='dummy')
return self
class Random(Predictor):
"""Random (Bernoulli with p=0.5) predictions"""
unitstring = '1'
def __init__(self, *args, **kwargs):
super(Random, self).__init__(*args, **kwargs)
@Predictor.fit_decorator
def fit(self, data):
self.result = CausalArray(array=np.random.binomial(n=1, p=0.5, size=(data.ngenes,data.ngenes)),
causes=data.genes, effects=data.genes, units=self.unitstring, name=self.name)
return self
#############################
# Observational Methods #
#############################
class ObservationalMethod(Predictor):
"""Facilitates observational methods by subclassing this.
Either override fit(data) or define a _fit_simple(x) instance method.
Attributes:
datatype ('obser', 'inter' or 'obser_inter_joint'): which slice of the data to fit on
result (CausalArray): resulting causal array if done, else None
standardized (bool): if True, fit on the standardized version of the data
"""
def __init__(self, datatype, standardized=False, *args, **kwargs):
assert datatype in OBSERMETHOD_DATATYPES
self.datatype = datatype # add this to options
self.standardized = standardized
super(ObservationalMethod, self).__init__(*args, **kwargs)
self._options = {'datatype':self.datatype, 'standardized':self.standardized}
@Predictor.fit_decorator
def fit(self, data):
assert issubclass(type(data), MicroArrayData)
if self.standardized:
x = getattr(data, self.datatype + '_std')
else:
x = getattr(data, self.datatype)
# need to add units from return value if applicable
simple_result = self._fit_simple(x)
if type(simple_result) is tuple:
array, unitstring = simple_result
else:
array = simple_result
unitstring = '0' # units are unknown by default
# build result
self.result = CausalArray(array=array, causes=data.genes,
effects=data.genes, name=self.name, units=unitstring) # default units are unknown
return self
@property
def name(self):
"""Unique name for a given method based on all callable options.
If self._name instance is defined, this is used instead. Overriden from Predictor.
"""
if self._name:
return self._name
else:
name = self.__class__.__name__
# make sure the datatype is printed first
options_keys = self.options.keys()
options_keys.remove('datatype')
name = name + '_' + self.options['datatype']
# if options are remaining, print the rest
if options_keys:
name = name + '_' + '_'.join(['{}={}'.format(i, j) for i, j in self.options.iteritems() if i != 'datatype'])
return name
class PearsonCor(ObservationalMethod):
"""Pearson's product-moment correlation coefficient"""
unitstring = '1'
def _fit_simple(self, x):
return np.abs(np.corrcoef(x)), self.unitstring # add units
class SpearmanCor(ObservationalMethod):
"""Spearman's rank correlation coefficient"""
def _fit_simple(self, x):
return np.abs(spearmanr(x, axis=1)[0])
class KendallCor(ObservationalMethod):
"""Kendall's rank correlation coefficient"""
def _fit_simple(self, x):
dim = x.shape[0]
result = np.zeros((dim, dim), dtype=float)
# upper triangular, as it is symmetric
for i in xrange(dim):
for j in xrange(i, dim):
result[i,j], _ = kendalltau(x[i], x[j])
# symmetrize result
return np.abs(result + result.T - np.diag(result.diagonal()))
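The symmetrization step used here (and again in DistanceCor below) can be checked in isolation: filling only the upper triangle and then computing `M + M.T - diag(diag(M))` recovers the full symmetric matrix without double-counting the diagonal. A minimal numpy sketch:

```python
import numpy as np

# upper-triangular matrix with a populated diagonal, as produced by the i <= j loop
upper = np.array([[1.0, 0.5, 0.2],
                  [0.0, 1.0, 0.7],
                  [0.0, 0.0, 1.0]])
full = upper + upper.T - np.diag(upper.diagonal())  # diagonal would otherwise be counted twice
assert np.allclose(full, full.T)                    # symmetric
assert np.allclose(full.diagonal(), 1.0)            # diagonal preserved, not doubled
```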
class DistanceCor(ObservationalMethod):
"""Distance correlation"""
def _fit_simple(self, x):
dim = x.shape[0]
result = np.zeros((dim, dim), dtype=float)
# upper triangular, as it is symmetric
for i in xrange(dim):
for j in xrange(i, dim):
result[i,j] = distancecorr(x[i,:], x[j,:]) # non-negative
return result + result.T - np.diag(result.diagonal())
class VarianceCauseInv(ObservationalMethod):
"""Inverse of the noisy sample standard-deviation of the cause as causal effect estimator.
Note that we add the noise directly to the estimate of the standard-deviation before inverting the result.
Attributes:
noise (float, optional): amount of standard normal noise added
unitstring (str): unit of the returned scores
"""
unitstring = '1/i'
def __init__(self, noise=None, *args, **kwargs):
super(VarianceCauseInv, self).__init__(*args, **kwargs)
self.noise = noise
if self.noise: self._options = {'noise': True}
def _fit_simple(self, x):
dim = x.shape[0] # ngenes
# np.array( dim * list) stacks lists row-wise, i.e. [[row], [row], [row]]
# in the general case, the first dimension is added.
array = np.reciprocal(np.array(dim * [x.std(axis=1),]).T)
if self.noise is not None:
array = array + np.reciprocal(self.noise * np.random.randn(*array.shape))
return array, self.unitstring
class VarianceEffect(ObservationalMethod):
"""Sample standard-deviation of the effects"""
unitstring = 'j'
def __init__(self, noise=None, *args, **kwargs):
super(VarianceEffect, self).__init__(*args, **kwargs)
self.noise = noise
if self.noise: self._options = {'noise': True}
def _fit_simple(self, x):
dim = x.shape[0] # ngenes
array = np.array(dim * [x.std(axis=1),])
if self.noise is not None:
array = array + self.noise * np.random.randn(*array.shape)
return array, self.unitstring
class VarianceCombined(ObservationalMethod):
"""Sample standard-deviation of the cause and effects combined"""
unitstring = 'j/i'
def __init__(self, noise=None, *args, **kwargs):
super(VarianceCombined, self).__init__(*args, **kwargs)
self.noise = noise
if self.noise: self._options = {'noise': True}
def _fit_simple(self, x):
dim = x.shape[0] # ngenes
array = np.array(dim * [np.reciprocal(x.std(axis=1)),]).T * np.array(dim * [x.std(axis=1),])
if self.noise is not None:
array = array + (self.noise * np.random.randn(*array.shape)) * np.reciprocal((self.noise * np.random.randn(*array.shape)))
return array, self.unitstring
class NonnormalCause(ObservationalMethod):
"""Score i --> j with -log p-value of KS test with a normal distribution"""
def __init__(self, *args, **kwargs):
super(NonnormalCause, self).__init__(standardized=True, *args, **kwargs)
def _fit_simple(self, x):
dim = x.shape[0]
return np.array(dim * [compute_nonnormality(data=x, scale=False, alpha=None),]).T
class NonnormalEffect(ObservationalMethod):
def __init__(self, *args, **kwargs):
super(NonnormalEffect, self).__init__(standardized=True, *args, **kwargs)
def _fit_simple(self, x):
dim = x.shape[0]
return np.array(dim * [compute_nonnormality(data=x, scale=False, alpha=None),])
class ANM(ObservationalMethod):
pass
class NCC(ObservationalMethod, MultiProcess):
def __init__(self, select_causes=None, graph_rdata_name=None, *args, **kwargs):
super(NCC, self).__init__(*args, **kwargs)
###
### R methods
###
class CallRMethod(Predictor):
def run_rscript(self, x, script, pars, keep_temp_files=False):
"""Call r script in subprocess with commandline arguments.
IMPORTANT:
Python (hdf5) and R (rhdf5) use transposed conventions for storing 2D arrays
to disk. The input / output buffer files will therefore need transposing.
We adopt the following convention here:
Input data:
Python observational data is saved p x N
R observational data is used as N x p
--> No transposing needed
Output data:
R scripts return (as saved hdf5 file) causes x effects shape.
--> NEED TO TRANSPOSE THE RESULT LOADED FROM A HDF5-FILE
IN PYTHON WHEN LOADING!
This convention is hard-coded in the save_array and load_array functions if
calling with load_from_r=True option.
Args:
x (TYPE): Description
script (TYPE): Description
pars (TYPE): Description
keep_temp_files (bool, optional): Description
Returns:
CausalArray wrapper of return script.
"""
# setup files. Require that self.name is unique among all methods!
input_buffer_file = os.path.abspath('{}/__input__{}.hdf5'.format(self.folder, self.name))
output_buffer_file = os.path.abspath('{}/__output__{}.hdf5'.format(self.folder, self.name))
if os.path.exists(input_buffer_file): os.remove(input_buffer_file)
if os.path.exists(output_buffer_file): os.remove(output_buffer_file)
print input_buffer_file
print output_buffer_file
print self.folder
save_array(filename=input_buffer_file, data=x)
input_parameters = ['input=%s' % input_buffer_file, 'input.dataset=data',
'output=%s' % output_buffer_file]
# use concurrency if applicable
if (self.processes > 1):
input_parameters = input_parameters + ['processes=%s' % self.processes]
# start script
start_time = time.time()
# fancy_time = time.strftime('%Y-%m-%d %H:%M')
fid_stderr = tempfile.TemporaryFile(mode='w+')
with open('{}/__stdout__{}__.txt'.format(self.folder, self.name), 'w+') as fid_stdout:
try:
print ' '.join(['Rscript', config._folder_extern_R + '/' + script] + input_parameters + pars)
check_call(['Rscript', config._folder_extern_R + '/' + script] + input_parameters + pars, stdout=fid_stdout, stderr=fid_stderr)
fid_stdout.seek(0,0)
fid_stdout.write('***\n*** DONE in {:.2f}s\n***\n\n'.format(time.time() - start_time))
except CalledProcessError as cpe:
print 'CalledProcessError: {}.'.format(cpe)
fid_stderr.seek(0)
print '********\nSTDERR {}\n********\n{}********'.format(self.name, fid_stderr.read())
with open('{}/__stderr__{}__.txt'.format(self.folder, self.name), 'w+') as fid_stderr_write:
fid_stderr.seek(0)
fid_stderr_write.write('STDERR for {}\n\n'.format(script))
fid_stderr_write.write(fid_stderr.read())
finally:
print 'file exists?', os.path.exists(output_buffer_file)
result = load_array(output_buffer_file, load_from_r=True)
if keep_temp_files:
os.rename(input_buffer_file, '{}/input_{}.hdf5'.format(self.folder, self.name))
os.rename(output_buffer_file, '{}/output_{}.hdf5'.format(self.folder, self.name))
else: # perhaps already removed previously!
if os.path.exists(input_buffer_file): os.remove(input_buffer_file)
if os.path.exists(output_buffer_file): os.remove(output_buffer_file)
return result
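The row-major versus column-major convention described in the docstring above can be demonstrated without hdf5: reading a C-order (Python) byte stream with Fortran-order (R) semantics yields the transpose, which is why results are transposed when loaded with load_from_r=True. A sketch with a plain numpy byte buffer standing in for the hdf5 file:

```python
import numpy as np

a = np.arange(6, dtype=np.float64).reshape(2, 3)  # Python writes row-major (C order)
raw = a.tobytes()                                  # the bytes as they would sit on disk
# an R-style reader interprets the same bytes column-major (Fortran order)
b = np.frombuffer(raw, dtype=np.float64).reshape(3, 2, order='F')
assert np.array_equal(b, a.T)                      # hence the transpose on load
```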
class DummyR(ObservationalMethod, CallRMethod):
def _fit_simple(self, x):
script = 'ComputeDummy.R'
pars = ['verbose=TRUE']
if self.verbose: print 'input shape:', x.shape
result = self.run_rscript(x=x, script=script, pars=pars)
if self.verbose:
print 'output shape', result.shape
print '--> ALL ENTRIES ARE ' + ('' if np.all(np.equal(x, result)) else 'NOT ') + 'EQUAL.'
return np.random.random((x.shape[0],x.shape[0]))
class Glasso(ObservationalMethod, CallRMethod):
""" Compute GLASSO using standardized data; standardization implies a dimensionless quantity."""
unitstring = '1'
def __init__(self, mbapprox, *args, **kwargs):
super(Glasso, self).__init__(standardized=True, *args, **kwargs)
self.mbapprox = mbapprox
if self.mbapprox: self._options = {'mbapprox': True}
def _fit_simple(self, x):
script = 'ComputeGlasso.R'
pars = ['method={}'.format('mb' if self.mbapprox else 'glasso')]
return np.abs(self.run_rscript(x=x, script=script, pars=pars)), self.unitstring
class ElasticNet(ObservationalMethod, MultiProcess, CallRMethod):
"""Elastic Net method (Zou, 2005)
Note on select_causes:
Running the constrained regression on a subset of selected causes restricts the set of
regressors X_i, which in turn influences which beta_ij are found to be predictive for Y_j.
"""
unitstring = '1' # as the data is standardized by default
def __init__(self, alpha, nfolds=None, select_causes=None, *args, **kwargs):
super(ElasticNet, self).__init__(standardized=True, *args, **kwargs)
self.alpha = alpha
self.nfolds = nfolds or 10 # default of 10 folds for CV
self.select_causes = select_causes
# override self.options
self._options = {'alpha':self.alpha, 'select_causes':True if self.select_causes is not None else None}
@Predictor.fit_decorator
def fit(self, data):
assert issubclass(type(data), MicroArrayData)
x = getattr(data, self.datatype + '_std')
script = 'ComputeGLMNet.R'
if self.select_causes is not None: # need the not None here, otherwise ambiguous truth-value!
select_causes = ', '.join([str(i) for i in data.intpos_R(self.select_causes)]) # Note: python zero-based arrays, while R one-based arrays!
else:
select_causes = 'NULL'
pars = ['alpha=%s' % self.alpha, 'nfolds=%s' % self.nfolds, 'selectCauses=%s' % select_causes]
self.result = CausalArray(array=np.abs(self.run_rscript(x=x, script=script, pars=pars)),
causes=self.select_causes if self.select_causes is not None else data.genes, effects=data.genes, units=self.unitstring, name=self.name)
return self
class Ridge(ElasticNet):
"""Run elastic net with alpha = 0 """
def __init__(self, *args, **kwargs):
super(Ridge, self).__init__(alpha=0, *args, **kwargs)
self._options = {'alpha':None} # remove alpha option!
class Lasso(ElasticNet):
"""Run elastic net with alpha = 1 """
def __init__(self, *args, **kwargs):
super(Lasso, self).__init__(alpha=1, *args, **kwargs)
self._options = {'alpha':None} # remove alpha option!
class IDA(ObservationalMethod, MultiProcess, CallRMethod):
"""
Note:
As we give a scaled variant as the input to IDA, the output coefficient is a (physical) dimensionless quantity
and directly corresponds with normalized ground truths.
"""
unitstring = '1'
def __init__(self, method='pcstable', alpha=0.01, select_causes=None, graph_rdata_name=None, *args, **kwargs):
super(IDA, self).__init__(standardized=True, *args, **kwargs)
self.method = method
assert method in ('pc', 'pcstable', 'pcstablefast', 'empty')
self.alpha = alpha
self.select_causes = select_causes
self.graph_rdata_name = graph_rdata_name or 'ida_' + self.method + '_graph_alpha' + str(self.alpha) + '_' + self.datatype + '.RData'
# override self.options
self._options = {'method':self.method, 'alpha':self.alpha, 'select_causes':True if self.select_causes is not None else None}
@Predictor.fit_decorator
def fit(self, data):
assert issubclass(type(data), MicroArrayData)
x = getattr(data, self.datatype + '_std')
script = 'ComputeIDA.R'
if self.select_causes is not None: # need the not None here, otherwise ambiguous truth-value!
select_causes = ', '.join([str(i) for i in data.intpos_R(self.select_causes)]) # Note: python zero-based arrays, while R one-based arrays!
else:
select_causes = 'NULL'
pars = ['method=%s' % self.method, 'alpha=%s' % self.alpha, 'pcgraphfile=%s' % self.folder + '/' + self.graph_rdata_name,
'selectCauses=%s' % select_causes, 'processes=%s' % self.processes]
self.result = CausalArray(array=self.run_rscript(x=x, script=script, pars=pars),
causes=self.select_causes if self.select_causes is not None else data.genes, effects=data.genes,
units=self.unitstring, name=self.name)
return self
class ICP(MultiProcess):
"""Invariant Causal Prediction with stability sampled ranked predictions.
Attributes:
alpha (float): Confidence level for the statistical test
bootstraps (int): Number of bootstraps to perform for stability sampling
bootstrap_fraction (double): fraction of the interventional data to be subsampled for stability
result (CausalArray): Resulting causal array if done, else None
"""
def __init__(self, alpha=0.05, bootstraps=1, bootstrap_fraction=None, *args, **kwargs):
super(ICP, self).__init__(*args, **kwargs)
self.alpha = alpha
self.bootstraps = bootstraps
self.bootstrap_fraction = bootstrap_fraction or .5
# self.select_causes = select_causes
# if self.prescreening: print '{} WARNING: make sure prescreening CausalArray file {} exists!'.format(self.name_method, self.prescreening)
# self._options = {'select_causes':select_causes is not None}
self._options = {'alpha':self.alpha, 'bootstraps':self.bootstraps, 'bootstrap_fraction':self.bootstrap_fraction}
@Predictor.fit_decorator
def fit(self, data):
"""Fit function, modified from the "rscript" functions in the ObservationalMethod base class to
use the full dataset including interventions as input.
Note: require the full dataset to be completely saved instead of just a "settings_only version", as
the R scripts cannot read those out.
"""
keep_temp_files = False # do not keep temp io files
# Set names etc
input_buffer_file = '{}/__input__{}.hdf5'.format(self.folder, self.name)
output_buffer_file = '{}/__output__{}.hdf5'.format(self.folder, self.name)
if os.path.exists(input_buffer_file): os.remove(input_buffer_file)
if os.path.exists(output_buffer_file): os.remove(output_buffer_file)
# make sure to save the complete file and not only settings!
data.save(file=input_buffer_file, settings_only=False, verbose=False)
# if self.select_causes is not None: # need the not None here, otherwise ambiguous truth-value!
# select_causes = ', '.join([str(i) for i in data.intpos_R(self.select_causes)]) # Note: python zero-based arrays, while R one-based arrays!
# else:
# select_causes = 'NULL'
# Setup script and input parameters
script = 'ComputeICP.R'
input_parameters = [
'input=%s' % input_buffer_file,
'output=%s' % output_buffer_file,
'alpha=%s' % self.alpha,
'bootstraps=%s' % self.bootstraps,
'bootstrapFraction=%s' % self.bootstrap_fraction,
# 'selectCauses=%s' % select_causes
]
# use concurrency if applicable
if (self.processes > 1):
input_parameters += ['processes=%s' % self.processes]
# start script
start_time = time.time()
# fancy_time = time.strftime('%Y-%m-%d %H:%M')
fid_stderr = tempfile.TemporaryFile(mode='w+')
with open('{}/__stdout__{}__.txt'.format(self.folder, self.name), 'w+') as fid_stdout:
try:
check_call(['Rscript', config._folder_extern_R + '/' + script] + input_parameters, stdout=fid_stdout, stderr=fid_stderr)
fid_stdout.seek(0,0)
fid_stdout.write('***\n*** DONE in {:.2f}s\n***\n\n'.format(time.time() - start_time))
except CalledProcessError as cpe:
print 'CalledProcessError: {}.'.format(cpe)
fid_stderr.seek(0)
print '********\nSTDERR {}\n********\n{}********'.format(self.name, fid_stderr.read())
with open('{}/__stderr__{}__.txt'.format(self.folder, self.name), 'w+') as fid_stderr_write:
fid_stderr.seek(0)
fid_stderr_write.write('STDERR for {}\n\n'.format(script))
fid_stderr_write.write(fid_stderr.read())
finally:
result = load_array(output_buffer_file, load_from_r=True)
if keep_temp_files:
os.rename(input_buffer_file, '{}/input_{}.hdf5'.format(self.folder, self.name))
os.rename(output_buffer_file, '{}/output_{}.hdf5'.format(self.folder, self.name))
else:
if os.path.exists(input_buffer_file): os.remove(input_buffer_file)
if os.path.exists(output_buffer_file): os.remove(output_buffer_file)
self.result = CausalArray(array=result.astype('int'), causes=data.genes, effects=data.genes, name=self.name)
return self
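The stability-sampling idea behind the bootstraps option above (rank edges by how often they survive across subsamples) can be sketched independently of the R script; `stability_score` and `run_once` are hypothetical stand-ins for the aggregation loop and a single ICP fit on a subsample:

```python
import numpy as np

def stability_score(data, run_once, bootstraps=10, fraction=0.5, seed=0):
    """Average binary cause x effect matrices over bootstrap subsamples of the datapoints."""
    rng = np.random.RandomState(seed)
    n_samples = data.shape[1]
    k = max(1, int(fraction * n_samples))
    scores = np.zeros((data.shape[0], data.shape[0]))
    for _ in range(bootstraps):
        idx = rng.choice(n_samples, size=k, replace=False)  # subsample datapoints
        scores += run_once(data[:, idx])                    # binary adjacency matrix
    return scores / bootstraps                              # selection frequency in [0, 1]

# toy run_once: declares i --> j whenever i < j (ignores the data)
toy = lambda x: np.triu(np.ones((x.shape[0], x.shape[0])), k=1)
freq = stability_score(np.random.randn(3, 20), toy, bootstraps=4)
assert np.allclose(freq, np.triu(np.ones((3, 3)), k=1))
```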
###
### Combined methods
###
class LCD(Predictor):
    def __init__(self, numeric_score=False, alpha=None, beta=None, gt_method=None,
                 gt_threshold=None, conservative=False, select_causes=None, reduce_memory=False, *args, **kwargs):
        super(LCD, self).__init__(*args, **kwargs)
        # self.numeric_score = numeric_score
        self.alpha = alpha
        self.beta = beta
        self.conservative = conservative
        self.gt_method = gt_method
        self.gt_threshold = gt_threshold
        self.select_causes = select_causes
        self.reduce_memory = reduce_memory
        # self._options = {'select_causes': select_causes is not None}
        self._options = {'alpha': self.alpha, 'beta': self.beta, 'conservative': self.conservative,
                         'gt_method': self.gt_method, 'gt_threshold': self.gt_threshold,
                         'select_causes': True if self.select_causes is not None else None,
                         'reduce_memory': self.reduce_memory}

    @Predictor.fit_decorator
    def fit(self, data):
        self.result = compute_lcd(data=data, alpha=self.alpha, beta=self.beta, gt_method=self.gt_method,
                                  gt_threshold=self.gt_threshold, conservative=self.conservative, select_causes=self.select_causes,
                                  reduce_memory=self.reduce_memory, full_result_folder=None, verbose=self.verbose)
        return self
class LCDScore(LCD, MultiProcess):
    """LCD with stability-sampled, ranked predictions.

    By default, sample 1/2 of the number of datapoints in both regimes for the bootstrap.

    Attributes:
        bootstraps (int): Number of bootstraps to perform for stability sampling
        bootstrap_obs (int): Number of observational datapoints to be sampled in each bootstrap
        bootstrap_ints (int): Number of interventional datapoints to be sampled in each bootstrap
        result (CausalArray): Resulting causal array if done, else None
    """
    def __init__(self, bootstraps=10, bootstrap_obs=None, bootstrap_ints=None, alpha=None, beta=None, gt_method=None,
                 gt_threshold=None, conservative=False, select_causes=None, reduce_memory=False, **kwargs):
        # if 'numeric_score' in kwargs: del kwargs['numeric_score']
        self.bootstraps = bootstraps
        self.bootstrap_obs = bootstrap_obs
        self.bootstrap_ints = bootstrap_ints
        super(LCDScore, self).__init__(alpha=alpha, beta=beta, gt_method=gt_method,
                                       gt_threshold=gt_threshold, conservative=conservative, select_causes=select_causes,
                                       reduce_memory=reduce_memory, **kwargs)
        self._options = {'bootstraps': self.bootstraps, 'bootstrap_obs': self.bootstrap_obs, 'bootstrap_ints': self.bootstrap_ints}

    @Predictor.fit_decorator
    def fit(self, data):
        self.result = compute_bagged_lcd(data=data, bootstraps=self.bootstraps, processes=self.processes,
                                         alpha=self.alpha, beta=self.beta, gt_method=self.gt_method, gt_threshold=self.gt_threshold,
                                         conservative=self.conservative, select_causes=self.select_causes, reduce_memory=self.reduce_memory,
                                         full_result_folder=None, verbose=self.verbose)
        return self
class LCDScoreSlow(LCD, MultiProcess):
    """LCD with stability-sampled, ranked predictions (slower variant of LCDScore).

    By default, sample 1/2 of the number of datapoints in both regimes for the bootstrap.

    Attributes:
        bootstraps (int): Number of bootstraps to perform for stability sampling
        bootstrap_obs (int): Number of observational datapoints to be sampled in each bootstrap
        bootstrap_ints (int): Number of interventional datapoints to be sampled in each bootstrap
        result (CausalArray): Resulting causal array if done, else None
    """
    def __init__(self, bootstraps=10, bootstrap_obs=None, bootstrap_ints=None, alpha=None, beta=None, gt_method=None,
                 gt_threshold=None, conservative=False, select_causes=None, reduce_memory=False, **kwargs):
        # if 'numeric_score' in kwargs: del kwargs['numeric_score']
        self.bootstraps = bootstraps
        self.bootstrap_obs = bootstrap_obs
        self.bootstrap_ints = bootstrap_ints
        super(LCDScoreSlow, self).__init__(alpha=alpha, beta=beta, gt_method=gt_method,
                                           gt_threshold=gt_threshold, conservative=conservative, select_causes=select_causes,
                                           reduce_memory=reduce_memory, **kwargs)
        self._options = {'bootstraps': self.bootstraps, 'bootstrap_obs': self.bootstrap_obs, 'bootstrap_ints': self.bootstrap_ints}

    @Predictor.fit_decorator
    def fit(self, data):
        self.result = compute_bagged_lcd_slow(data=data, bootstraps=self.bootstraps, processes=self.processes,
                                              alpha=self.alpha, beta=self.beta, gt_method=self.gt_method, gt_threshold=self.gt_threshold,
                                              conservative=self.conservative, select_causes=self.select_causes, reduce_memory=self.reduce_memory,
                                              full_result_folder=None, verbose=self.verbose)
        return self
class GIES(MultiProcess):
    """Greedy interventional equivalence search with stability-sampled, ranked predictions.

    Score-based algorithm to predict a CPDAG (or essential graph) that modifies the greedy
    search used in GES to work with interventional data.

    Attributes:
        bootstraps (int): Number of bootstraps to perform for stability sampling
        bootstrap_fraction (double): Fraction of the interventional data to be subsampled for stability
        result (CausalArray): Resulting causal array if done, else None
    """
    def __init__(self, max_degree=0, bootstraps=1, bootstrap_fraction=None,
                 select_causes=None, prescreening=None, *args, **kwargs):
        super(GIES, self).__init__(*args, **kwargs)
        self.max_degree = max_degree
        self.bootstraps = bootstraps
        self.bootstrap_fraction = bootstrap_fraction or .5
        self.select_causes = select_causes
        self.prescreening = prescreening
        if self.prescreening: print '{} WARNING: make sure prescreening CausalArray file {} exists!'.format(self.name_method, self.prescreening)

    @Predictor.fit_decorator
    def fit(self, data):
        """Fit function, modified from the "rscript" functions in the ObservationalMethod base class to
        use the full dataset including interventions as input.

        Note: requires the full dataset to be completely saved instead of just a "settings_only" version,
        as the R scripts cannot read those out.
        """
        keep_temp_files = False  # do not keep temp io files
        # Set names etc
        input_buffer_file = '{}/__input__{}.hdf5'.format(self.folder, self.name)
        output_buffer_file = '{}/__output__{}.hdf5'.format(self.folder, self.name)
        if os.path.exists(input_buffer_file): os.remove(input_buffer_file)
        if os.path.exists(output_buffer_file): os.remove(output_buffer_file)
        # make sure to save the complete file and not only settings!
        data.save(file=input_buffer_file, settings_only=False, verbose=False)
        if self.select_causes is not None:  # need the explicit None check here, otherwise ambiguous truth-value!
            select_causes = ', '.join([str(i) for i in data.intpos_R(self.select_causes)])  # Note: Python arrays are zero-based, while R arrays are one-based!
        else:
            select_causes = 'NULL'
        # Setup script and input parameters
        script = 'ComputeGIES.R'
        input_parameters = [
            'input=%s' % input_buffer_file,
            'output=%s' % output_buffer_file,
            # 'maxDegree=%s' % self.max_degree,
            'bootstraps=%s' % self.bootstraps,
            'bootstrapFraction=%s' % self.bootstrap_fraction,
            'selectCauses=%s' % select_causes
        ]
        # use prescreening file if given
        if self.prescreening:
            if os.path.exists(self.prescreening):
                input_parameters += ['prescreeningFile=%s' % self.prescreening]
            else:
                print '[{}] WARNING: prescreening file {} does not exist!'.format(self.name_method, self.prescreening)
                return self
        # use concurrency if applicable
        if self.processes > 1:
            input_parameters += ['processes=%s' % self.processes]
        # start script
        start_time = time.time()
        # fancy_time = time.strftime('%Y-%m-%d %H:%M')
        fid_stderr = tempfile.TemporaryFile(mode='w+')
        with open('{}/__stdout__{}__.txt'.format(self.folder, self.name), 'w+') as fid_stdout:
            try:
                check_call(['Rscript', script] + input_parameters, stdout=fid_stdout, stderr=fid_stderr)
                fid_stdout.seek(0, 0)
                fid_stdout.write('***\n*** DONE in {:.2f}s\n***\n\n'.format(time.time() - start_time))
            except CalledProcessError as cpe:
                print 'CalledProcessError: {}.'.format(cpe)
                fid_stderr.seek(0)
                print '********\nSTDERR {}\n********\n{}********'.format(self.name, fid_stderr.read())
                with open('{}/__stderr__{}__.txt'.format(self.folder, self.name), 'w+') as fid_stderr_write:
                    fid_stderr.seek(0)
                    fid_stderr_write.write('STDERR for {}\n\n'.format(script))
                    fid_stderr_write.write(fid_stderr.read())
            finally:
                result = load_array(output_buffer_file, load_from_r=True)
        if keep_temp_files:
            os.rename(input_buffer_file, '{}/input_{}.hdf5'.format(self.folder, self.name))
            os.rename(output_buffer_file, '{}/output_{}.hdf5'.format(self.folder, self.name))
        else:
            if os.path.exists(input_buffer_file): os.remove(input_buffer_file)
            if os.path.exists(output_buffer_file): os.remove(output_buffer_file)
        self.result = CausalArray(array=result.astype('int'), causes=self.select_causes if self.select_causes is not None else data.genes,
                                  effects=data.genes, name=self.name)
        return self
# directly get the list of methods available from globals
# METHODS_DICT = dict((k, v) for k, v in (globals().copy()).iteritems() if issubclass(type(v), type(Predictor)) and k[0] != '_')
# METHODS_AVAILABLE = METHODS_DICT.keys()
| 44.950289 | 152 | 0.63559 | 4,763 | 38,882 | 5.009238 | 0.116103 | 0.032189 | 0.017436 | 0.015089 | 0.604719 | 0.575045 | 0.550275 | 0.531162 | 0.525253 | 0.509661 | 0 | 0.003534 | 0.250476 | 38,882 | 864 | 153 | 45.002315 | 0.815153 | 0.093565 | 0 | 0.496109 | 0 | 0 | 0.072776 | 0.008102 | 0 | 0 | 0 | 0 | 0.011673 | 0 | null | null | 0.003891 | 0.025292 | null | null | 0.046693 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
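The `LCDScore`/`LCDScoreSlow` classes above rank predictions by bagging a base predictor over bootstrap resamples. A minimal, hypothetical sketch of that stability-selection idea (this is not the repository's `compute_bagged_lcd`; the data format, `base_predict` callback, and toy predictor are assumptions):

```python
# Stability selection by bagging: run a base predictor on bootstrap resamples
# and score each candidate cause->effect pair by the fraction of bootstraps
# in which it was selected.
import random


def stability_scores(data, base_predict, bootstraps=10, seed=0):
    rng = random.Random(seed)
    counts = {}
    for _ in range(bootstraps):
        # resample the rows with replacement
        sample = [rng.choice(data) for _ in data]
        for pair in base_predict(sample):
            counts[pair] = counts.get(pair, 0) + 1
    # fraction of bootstraps in which each pair was selected
    return {pair: n / float(bootstraps) for pair, n in counts.items()}


def toy_predict(sample):
    # toy base predictor: "select" g0 -> g1 when gene 0 exceeds gene 1 on average
    mean0 = sum(row[0] for row in sample) / len(sample)
    mean1 = sum(row[1] for row in sample) / len(sample)
    return [("g0", "g1")] if mean0 > mean1 else []


scores = stability_scores([(5, 1), (6, 2), (4, 1)], toy_predict)
```

Pairs that survive every resample get a score of 1.0, which is what makes the resulting ranking robust to sampling noise.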
11616f7ff348fb772c0e34cfec05b99915aec2d5 | 2,219 | py | Python | Source/FaceRecognition/Agents/SystemAgent.py | robertkarol/ReDe-Multiagent-Face-Recognition-System | df17cebecc51b2fafb01e07a9bb68e9e4e04163a | [
"MIT"
] | null | null | null | Source/FaceRecognition/Agents/SystemAgent.py | robertkarol/ReDe-Multiagent-Face-Recognition-System | df17cebecc51b2fafb01e07a9bb68e9e4e04163a | [
"MIT"
] | 7 | 2020-04-24T08:22:20.000Z | 2021-05-21T16:11:52.000Z | Source/FaceRecognition/Agents/SystemAgent.py | robertkarol/ReDe-Multiagent-Face-Recognition-System | df17cebecc51b2fafb01e07a9bb68e9e4e04163a | [
"MIT"
] | 1 | 2020-04-26T15:05:07.000Z | 2020-04-26T15:05:07.000Z | from Utils.Logging import LoggingMixin
from spade.agent import Agent
from spade.behaviour import CyclicBehaviour
from spade.message import Message
class SystemAgent(Agent, LoggingMixin):
    class MessageReceiverBehavior(CyclicBehaviour):
        def __init__(self, outer_ref):
            super().__init__()
            self.__outer_ref: SystemAgent = outer_ref

        async def on_start(self):
            self.__outer_ref.log(f"{self.__outer_ref.jid} starting the message receiver. . .", "info")

        async def run(self):
            self.__outer_ref.log(f"{self.__outer_ref.jid} checking for message. . .", "info")
            message = await self.receive(self.__outer_ref.message_checking_interval)
            if message:
                self.__outer_ref.log(f"{self.__outer_ref.jid} processing message. . .", "info")
                await self.__outer_ref._process_message(message)
                self.__outer_ref.log(f"{self.__outer_ref.jid} done processing message. . .", "info")

        async def on_end(self):
            self.__outer_ref.log(f"{self.__outer_ref.jid} ending the message receiver. . .", "info")

    async def _process_message(self, message: Message):
        pass

    @property
    def message_checking_interval(self):
        return self.__message_checking_interval

    def __init__(self, jid: str, password: str, executor, verify_security: bool = False,
                 message_checking_interval: int = 5):
        super().__init__(jid, password, verify_security)
        if executor:
            self.loop.set_default_executor(executor)
        self.__message_checking_interval = message_checking_interval

    async def setup(self):
        self.log(f"{self.jid} agent starting . . .", "info")
        if self.message_checking_interval > -1:
            msg_behavior = self.MessageReceiverBehavior(self)
            self.add_behaviour(msg_behavior)
        else:
            self.log(f"{self.jid} skipping message checking . . .", "info")

    def log(self, message, level):
        try:
            log_method = getattr(self.logger, level)
            log_method(message)
        except AttributeError:
            raise ValueError(f"{level} is not a valid logging level")
| 40.345455 | 102 | 0.652546 | 261 | 2,219 | 5.203065 | 0.291188 | 0.088365 | 0.123711 | 0.055228 | 0.199558 | 0.177467 | 0.133284 | 0.133284 | 0.133284 | 0.133284 | 0 | 0.001196 | 0.246507 | 2,219 | 54 | 103 | 41.092593 | 0.811005 | 0 | 0 | 0 | 0 | 0 | 0.177557 | 0.049572 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0.068182 | 0.090909 | 0.022727 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
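The `MessageReceiverBehavior` above is a cyclic receive-with-timeout/dispatch loop. The same pattern can be sketched with plain `asyncio` and no spade dependency (the queue, timeout, cycle count, and handler are assumptions for illustration):

```python
# Cyclic receiver: poll a queue with a timeout each cycle and dispatch any
# message to a handler, like spade's CyclicBehaviour.run with receive().
import asyncio


async def message_receiver(queue, handler, checking_interval=0.01, cycles=3):
    for _ in range(cycles):
        try:
            message = await asyncio.wait_for(queue.get(), checking_interval)
        except asyncio.TimeoutError:
            # no message this cycle, analogous to receive() returning None
            continue
        await handler(message)


processed = []


async def demo():
    queue = asyncio.Queue()
    await queue.put("hello")

    async def handler(msg):
        processed.append(msg)

    await message_receiver(queue, handler)


asyncio.run(demo())
```

The timeout keeps the loop responsive: a cycle without traffic simply ends, instead of blocking the agent forever on an empty queue.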
fec85595764c3c86cb185b82f1f42e05a9165c3a | 1,072 | py | Python | 8. md_main.py | PuneethKouloorkar/SPC-Water-Model | faa18fe344dee47eb8eb8e939bd15c5876af12e4 | [
"MIT"
] | null | null | null | 8. md_main.py | PuneethKouloorkar/SPC-Water-Model | faa18fe344dee47eb8eb8e939bd15c5876af12e4 | [
"MIT"
] | null | null | null | 8. md_main.py | PuneethKouloorkar/SPC-Water-Model | faa18fe344dee47eb8eb8e939bd15c5876af12e4 | [
"MIT"
] | null | null | null | import numpy as np
from md_sim import *
import matplotlib.pyplot as plt
# notes on input file format : col 0 = molecule number, col 1 = mass, col 2= molecule number, col 3-5 = positions
def main():
    # NOTE: this resolves an unmerged conflict between a detailed velocity-Verlet
    # loop and a bare Simulation(...).run() call. The step count, dump frequency,
    # and output file names are assumptions; Simulation is assumed to take
    # (n_steps, positions_file, dump_frequency).
    n_steps = 40000
    dump_frequency = 10
    simulation = Simulation(n_steps, 'positions', dump_frequency)
    dump = open('dump.txt', 'w')
    energies_file = open('energies.txt', 'w')
    trajectory_file = open('trajectory.txt', 'w')
    for timestep in range(n_steps):
        if timestep % dump_frequency == 0:
            total_energy, temperature = simulation.calculate_hamiltonian()
            simulation.write_output(timestep, dump, total_energy, temperature)
            simulation.write_energies(timestep, energies_file)
            simulation.write_trajectory(timestep, trajectory_file)
        simulation.update_positions()
        simulation.update_velocities()  # intermediate velocity, t + 1/2 dt (velocity Verlet)
        simulation.calc_intforces()
        simulation.update_acceleration()
        simulation.update_velocities()  # computed velocity, with updated acceleration at t + dt
        # simulation.leap_frog()
        simulation.calculate_hamiltonian()
        simulation.calc_extforces()
    dump.close()
    energies_file.close()
    trajectory_file.close()


if __name__ == '__main__':
    main()
| 25.52381 | 114 | 0.724813 | 127 | 1,072 | 5.889764 | 0.527559 | 0.085562 | 0.045455 | 0.085562 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01676 | 0.165112 | 1,072 | 41 | 115 | 26.146341 | 0.818994 | 0 | 0 | 0.076923 | 0 | 0 | 0.020311 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.115385 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
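The position/velocity update sequence in `main` is the velocity-Verlet scheme. A self-contained 1-D sketch (unit mass; the constant-force law and step size here are stand-in assumptions, not the SPC water model's forces):

```python
# One-particle velocity Verlet: positions advance with the current
# acceleration, velocities with the average of old and new acceleration.
def velocity_verlet(x, v, force, dt, steps):
    a = force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = force(x)
        v = v + 0.5 * (a + a_new) * dt  # two half-step velocity updates
        a = a_new
    return x, v


# constant force: the integrator reproduces uniformly accelerated motion exactly
x_final, v_final = velocity_verlet(0.0, 0.0, lambda x: 2.0, dt=0.1, steps=10)
```

For a constant acceleration a = 2 over t = 1, the closed form gives x = ½at² = 1.0 and v = at = 2.0, which the integrator matches to floating-point precision.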
fedc78485617cf899bd5b7b29bb874dc769d7645 | 499 | py | Python | SWIM-Executables/Windows/pyinstaller-2.0 for windows/buildtests/import/relimp/relimp/relimp2.py | alexsigaras/SWIM | 1a35df8acb26bdcb307a1b8f60e9feba68ed1715 | [
"MIT"
] | 47 | 2020-03-08T08:43:28.000Z | 2022-03-18T18:51:55.000Z | SWIM-Executables/Windows/pyinstaller-2.0 for windows/buildtests/import/relimp/relimp/relimp2.py | alexsigaras/SWIM | 1a35df8acb26bdcb307a1b8f60e9feba68ed1715 | [
"MIT"
] | null | null | null | SWIM-Executables/Windows/pyinstaller-2.0 for windows/buildtests/import/relimp/relimp/relimp2.py | alexsigaras/SWIM | 1a35df8acb26bdcb307a1b8f60e9feba68ed1715 | [
"MIT"
] | 16 | 2020-03-08T08:43:30.000Z | 2022-01-10T22:05:57.000Z |
from __future__ import absolute_import
name = 'relimp.relimp.relimp2'
from . import relimp3
assert relimp3.name == 'relimp.relimp.relimp3'
from .. import relimp
assert relimp.name == 'relimp.relimp'
import relimp
assert relimp.name == 'relimp'
import relimp.relimp2
assert relimp.relimp2.name == 'relimp.relimp2'
# While this seems to work when running Python, it is wrong:
# .relimp should be a sibling of this package
#from .relimp import relimp2
#assert relimp2.name == 'relimp.relimp2'
| 21.695652 | 60 | 0.757515 | 69 | 499 | 5.405797 | 0.376812 | 0.160858 | 0.128686 | 0.128686 | 0.182306 | 0.182306 | 0 | 0 | 0 | 0 | 0 | 0.023474 | 0.146293 | 499 | 22 | 61 | 22.681818 | 0.852113 | 0.338677 | 0 | 0 | 0 | 0 | 0.232198 | 0.130031 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
fedec09d5298e2f77cfb19a2c659a369f1a2f996 | 560 | py | Python | OpenCV/networktablesaddon.py | cyamonide/FRC-2017 | 1726f844ccf15d94f966669e21db583dcf4cf78b | [
"MIT"
] | null | null | null | OpenCV/networktablesaddon.py | cyamonide/FRC-2017 | 1726f844ccf15d94f966669e21db583dcf4cf78b | [
"MIT"
] | null | null | null | OpenCV/networktablesaddon.py | cyamonide/FRC-2017 | 1726f844ccf15d94f966669e21db583dcf4cf78b | [
"MIT"
] | null | null | null | from networktables import NetworkTable
import logging
logging.basicConfig(level=logging.DEBUG) # enables logging for pynetworktables
NetworkTable.setIPAddress("roboRIO-4914-FRC.local")
NetworkTable.setClientMode()
NetworkTable.initialize()
table = NetworkTable.getTable("ContoursReport")
while True:
    if len(filteredContours) > 0:  # filteredContours, cX, cY come from the upstream OpenCV pipeline
        table.putNumber('isTarget', 1)
        table.putNumber('cX', cX)
        table.putNumber('cY', cY)
    else:
        table.putNumber('isTarget', 0)
        table.putNumber('cX', -1)
        table.putNumber('cY', -1)
| 29.473684 | 78 | 0.705357 | 60 | 560 | 6.583333 | 0.566667 | 0.212658 | 0.075949 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019355 | 0.169643 | 560 | 18 | 79 | 31.111111 | 0.830108 | 0.0625 | 0 | 0 | 0 | 0 | 0.114723 | 0.042065 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
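The loop above publishes a small `(isTarget, cX, cY)` protocol to a NetworkTable. The same logic can be exercised without a roboRIO by swapping in an in-memory stand-in for the table (the `FakeTable` class and `publish` wrapper are mine, not part of pynetworktables):

```python
# In-memory stand-in for a NetworkTable, enough to test the publish protocol.
class FakeTable:
    def __init__(self):
        self.values = {}

    def putNumber(self, key, value):
        self.values[key] = value


def publish(table, filtered_contours, cX=-1, cY=-1):
    # mirrors the conditional publish in the script above
    if len(filtered_contours) > 0:
        table.putNumber('isTarget', 1)
        table.putNumber('cX', cX)
        table.putNumber('cY', cY)
    else:
        table.putNumber('isTarget', 0)
        table.putNumber('cX', -1)
        table.putNumber('cY', -1)


table = FakeTable()
publish(table, [object()], cX=120, cY=80)
```

Factoring the publish step behind an interface like this keeps the vision loop testable off-robot.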
fedf02b8834a72ff1654ca6da113bca8d04dd2c6 | 639 | py | Python | serializers/Base_Serializer.py | AdarshK1/rosbag-dl-utils-mirror | 6139b8a7d43ecd43c4ec079d2f673540491692ce | [
"Apache-2.0"
] | null | null | null | serializers/Base_Serializer.py | AdarshK1/rosbag-dl-utils-mirror | 6139b8a7d43ecd43c4ec079d2f673540491692ce | [
"Apache-2.0"
] | null | null | null | serializers/Base_Serializer.py | AdarshK1/rosbag-dl-utils-mirror | 6139b8a7d43ecd43c4ec079d2f673540491692ce | [
"Apache-2.0"
] | null | null | null | """
Copyright (C) Ghost Robotics - All Rights Reserved
Written by Adarsh Kulkarni <adarsh@ghostrobotics.io>
"""
import os
import rospy
class BaseSerializer:
    def __init__(self, topic_name='', skip_frame=1, directory_name='./', bag_file=''):
        self.dir_name = directory_name
        self.counter = 0
        self.skip_frame = skip_frame
        self.filename_base = self.dir_name + bag_file.split("/")[-1][:-4] + "/" + \
                             topic_name[1:].replace("/", "_")
        os.makedirs(self.filename_base, exist_ok=True)
        self.filename_base += "/{:08d}"

    def callback(self, mesh_msg):
        pass
| 26.625 | 86 | 0.611894 | 78 | 639 | 4.74359 | 0.589744 | 0.072973 | 0.12973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014463 | 0.242567 | 639 | 23 | 87 | 27.782609 | 0.75 | 0.161189 | 0 | 0 | 0 | 0 | 0.024621 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0.076923 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
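The constructor above derives an output directory from the bag file name and topic, plus a zero-padded frame template. The same derivation as a standalone helper (the `os.makedirs` side effect is deliberately omitted; the function name is mine):

```python
# Path derivation used by BaseSerializer: "<dir><bag stem>/<topic with / -> _>/<8-digit counter>"
def derive_filename_base(directory_name, bag_file, topic_name):
    stem = bag_file.split("/")[-1][:-4]          # bag file name without ".bag"
    topic = topic_name[1:].replace("/", "_")     # drop leading "/", flatten slashes
    return directory_name + stem + "/" + topic + "/{:08d}"


fb = derive_filename_base("./", "/bags/run_01.bag", "/camera/image_raw")
```

Each serialized message can then be written to `fb.format(counter)`, giving lexicographically sortable, fixed-width file names.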
fee29afa34d238876897d0acab6b817ef8b6e8f5 | 1,723 | py | Python | learning_journal/security.py | han8909227/pyramid-learning-journal | c08e63756ae8d466b5efcbf00b1631b57391bf34 | [
"MIT"
] | null | null | null | learning_journal/security.py | han8909227/pyramid-learning-journal | c08e63756ae8d466b5efcbf00b1631b57391bf34 | [
"MIT"
] | 3 | 2019-12-26T16:39:26.000Z | 2021-06-01T21:55:48.000Z | learning_journal/security.py | han8909227/pyramid-learning-journal | c08e63756ae8d466b5efcbf00b1631b57391bf34 | [
"MIT"
] | null | null | null | """Configure and hold all pertinent security information for the app."""
import os
from pyramid.authentication import AuthTktAuthenticationPolicy
from pyramid.authorization import ACLAuthorizationPolicy
from pyramid.security import Authenticated
from pyramid.security import Allow
from passlib.apps import custom_app_context as pwd_context
from pyramid.session import SignedCookieSessionFactory # <-- include this
class MyRoot(object):
    """My root class."""

    def __init__(self, request):
        """Init auth class."""
        self.request = request

    __acl__ = [
        (Allow, Authenticated, 'secret'),
    ]


def is_authenticated(username, password):
    """Check if the user's username and password are good."""
    if username == os.environ.get('AUTH_USERNAME', ''):
        if pwd_context.verify(password, os.environ.get('AUTH_PASSWORD', '')):
            return True
    return False


def includeme(config):
    """Configure authentication, authorization, and session security."""
    # set up authentication
    auth_secret = os.environ.get('AUTH_SECRET', '')
    authn_policy = AuthTktAuthenticationPolicy(
        secret=auth_secret,
        hashalg='sha512'
    )
    config.set_authentication_policy(authn_policy)
    # set up authorization
    authz_policy = ACLAuthorizationPolicy()
    config.set_authorization_policy(authz_policy)
    # config.set_default_permission('secret')
    # if i want every view by default to be behind a login wall
    config.set_root_factory(MyRoot)
    # set up CSRF token to prevent CSRF attack
    session_secret = os.environ.get('SESSION_SECRET', '')
    session_factory = SignedCookieSessionFactory(session_secret)
    config.set_session_factory(session_factory)
    config.set_default_csrf_options(require_csrf=True)
| 32.509434 | 77 | 0.724898 | 201 | 1,723 | 6.00995 | 0.427861 | 0.052152 | 0.039735 | 0.039735 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002132 | 0.183401 | 1,723 | 52 | 78 | 33.134615 | 0.856432 | 0.204295 | 0 | 0 | 0 | 0 | 0.047015 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09375 | false | 0.09375 | 0.21875 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
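`is_authenticated` above compares the submitted credentials against environment variables via passlib. A dependency-free sketch of the same check using `hmac.compare_digest` for constant-time comparison (this swaps out passlib's hash verification, so it is an illustration of the control flow, not a drop-in replacement):

```python
# Environment-backed credential check with constant-time comparisons.
import hmac
import os


def env_is_authenticated(username, password, environ=os.environ):
    good_user = environ.get('AUTH_USERNAME', '')
    good_pass = environ.get('AUTH_PASSWORD', '')
    return (hmac.compare_digest(username, good_user)
            and hmac.compare_digest(password, good_pass))


ok = env_is_authenticated('me', 'secret',
                          {'AUTH_USERNAME': 'me', 'AUTH_PASSWORD': 'secret'})
```

Note that the real module stores a password *hash* in `AUTH_PASSWORD` and verifies it with `pwd_context.verify`, which is the right approach in production; comparing plaintext is shown here only to keep the example self-contained.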
fee7ee08afbf0b8a989c0954b49afad23fe38ec9 | 1,670 | py | Python | pizza_box/handlers.py | jklynch/pizza-box | 6d669413f28c188f00d46e843f2d362651ab3e38 | [
"BSD-3-Clause"
] | null | null | null | pizza_box/handlers.py | jklynch/pizza-box | 6d669413f28c188f00d46e843f2d362651ab3e38 | [
"BSD-3-Clause"
] | 8 | 2020-10-09T19:59:18.000Z | 2020-10-16T16:27:29.000Z | pizza_box/handlers.py | jklynch/pizza-box | 6d669413f28c188f00d46e843f2d362651ab3e38 | [
"BSD-3-Clause"
] | 1 | 2020-10-16T14:50:41.000Z | 2020-10-16T14:50:41.000Z | import os
import numpy as np
import pandas as pd
from databroker.assets.handlers_base import HandlerBase
class APBBinFileHandler(HandlerBase):
    "Read electrometer *.bin files"

    def __init__(self, fpath):
        # It's a text config file, which we don't store in the resources yet, parsing for now
        fpath_txt = f"{os.path.splitext(fpath)[0]}.txt"
        with open(fpath_txt, "r") as fp:
            content = fp.readlines()
        content = [x.strip() for x in content]
        _ = int(content[0].split(":")[1])
        # Gains = [int(x) for x in content[1].split(":")[1].split(",")]
        # Offsets = [int(x) for x in content[2].split(":")[1].split(",")]
        # FAdiv = float(content[3].split(":")[1])
        # FArate = float(content[4].split(":")[1])
        # trigger_timestamp = float(content[5].split(":")[1].strip().replace(",", "."))
        raw_data = np.fromfile(fpath, dtype=np.int32)
        columns = ["timestamp", "i0", "it", "ir", "iff", "aux1", "aux2", "aux3", "aux4"]
        num_columns = len(columns) + 1  # TODO: figure out why 1
        raw_data = raw_data.reshape((raw_data.size // num_columns, num_columns))
        derived_data = np.zeros((raw_data.shape[0], raw_data.shape[1] - 1))
        derived_data[:, 0] = (
            raw_data[:, -2] + raw_data[:, -1] * 8.0051232 * 1e-9
        )  # Unix timestamp with nanoseconds
        for i in range(num_columns - 2):
            derived_data[:, i + 1] = raw_data[:, i]  # ((raw_data[:, i]) - Offsets[i]) / Gains[i]
        self.df = pd.DataFrame(data=derived_data, columns=columns)
        self.raw_data = raw_data

    def __call__(self):
        return self.df
| 37.954545 | 98 | 0.579042 | 231 | 1,670 | 4.04329 | 0.454545 | 0.089936 | 0.019272 | 0.041756 | 0.036403 | 0.036403 | 0 | 0 | 0 | 0 | 0 | 0.031822 | 0.247305 | 1,670 | 43 | 99 | 38.837209 | 0.711217 | 0.298204 | 0 | 0 | 0 | 0 | 0.081308 | 0.026823 | 0 | 0 | 0 | 0.023256 | 0 | 1 | 0.076923 | false | 0 | 0.153846 | 0.038462 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
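The handler above reshapes a flat int32 stream into fixed-width rows and fuses the last two columns into a fractional timestamp. The same arithmetic with plain lists, so the decoding can be checked without numpy or a real `.bin` file (the function name and toy data are mine; the `8.0051232e-9` tick size is taken from the handler):

```python
# Decode a flat integer stream into rows of [timestamp, ch0, ch1, ...]:
# the last two raw columns hold seconds and sub-second ticks.
def decode_rows(raw, num_columns):
    rows = [raw[i:i + num_columns] for i in range(0, len(raw), num_columns)]
    out = []
    for row in rows:
        timestamp = row[-2] + row[-1] * 8.0051232 * 1e-9
        out.append([timestamp] + row[:num_columns - 2])
    return out


# one 5-column row: three channel readings, then seconds=7, ticks=5
rows = decode_rows([100, 1, 2, 7, 5], num_columns=5)
```

This mirrors `raw_data.reshape((size // num_columns, num_columns))` followed by the column-wise timestamp derivation in the handler.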
fef44bf3d10e54f43ff548c27fb11dca9ca7df02 | 2,211 | py | Python | smoke_test.py | rphilander/proteus | 840ed265131c7399b983802afaae61306f02ba3b | [
"MIT"
] | null | null | null | smoke_test.py | rphilander/proteus | 840ed265131c7399b983802afaae61306f02ba3b | [
"MIT"
] | null | null | null | smoke_test.py | rphilander/proteus | 840ed265131c7399b983802afaae61306f02ba3b | [
"MIT"
] | null | null | null | import unittest
from core import Atom, AtomIndex, interpret, Query
from database import DB
import grammar
import rows2atoms
test_data = [ '\t'.join(['Manster Bones', '27', 'United States', '2012', 'Bowling', '2', '1', '0', '3']) ]
class SmokeTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.db = DB()
        cls.db.load(test_data)
        cls.atom_index = AtomIndex('smoke-test')
        for atom in rows2atoms.transform(cls.db.query('select * from medals')):
            cls.atom_index.write(atom)
        map(cls.atom_index.write, grammar.get_atoms())
        cls.atom_index.flush()

    @classmethod
    def tearDownClass(cls):
        cls.atom_index.clear()

    def test_empty(self):
        result = interpret(self.atom_index, grammar, '')
        self.assertEqual(result, [])

    def test_who_won(self):
        user_input = 'athletes most golds 2012'
        user_intent = 'Which athletes won the most golds in 2012?'
        results = interpret(self.atom_index, grammar, user_input)
        self.assertNotEqual(results, [])
        english_results = map(lambda r: r.get_english(), results)
        self.assertTrue(user_intent in english_results)
        sql_results = map(lambda r: r.get_sql(), results)
        data_results = map(self.db.query, sql_results)
        print map(lambda dr: dr[1], data_results)
        self.assertTrue(('Manster Bones', 2) in map(lambda dr: dr[1], data_results))

    def test_how_many(self):
        user_input = 'how many silvers did Manster Bones win'
        results = interpret(self.atom_index, grammar, user_input)
        self.assertNotEqual(results, [])

    def test_what_sports(self):
        user_input = 'what sports were contested?'
        results = interpret(self.atom_index, grammar, user_input)
        self.assertNotEqual(results, [])
        data_results = self.db.query(results[0].get_sql())

    def test_no_dupes(self):
        results = interpret(self.atom_index, grammar, 'Manster Bones')
        english_results = map(lambda q: q.get_english(), results)
        self.assertEqual(len(english_results), len(list(set(english_results))))


def main():
    unittest.main()


if __name__ == '__main__':
    main()
| 33.5 | 106 | 0.655812 | 285 | 2,211 | 4.901754 | 0.315789 | 0.064424 | 0.042949 | 0.07874 | 0.262706 | 0.241947 | 0.186113 | 0.150322 | 0.150322 | 0.150322 | 0 | 0.013929 | 0.220715 | 2,211 | 65 | 107 | 34.015385 | 0.796866 | 0 | 0 | 0.16 | 0 | 0 | 0.108548 | 0 | 0 | 0 | 0 | 0 | 0.14 | 0 | null | null | 0 | 0.1 | null | null | 0.02 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fefbb710d46a9fb17ee87caf2a317902a57554c4 | 527 | py | Python | python/EXERCICIO 71 - SIMULADOR DO CAIXA ELETRONICO.py | debor4h/exerciciosPython | a18d88c6e98bc49005bfcb8badeb712007c16d69 | [
"MIT"
] | 1 | 2022-03-15T02:25:17.000Z | 2022-03-15T02:25:17.000Z | python/EXERCICIO 71 - SIMULADOR DO CAIXA ELETRONICO.py | debor4h/exerciciosPython | a18d88c6e98bc49005bfcb8badeb712007c16d69 | [
"MIT"
] | null | null | null | python/EXERCICIO 71 - SIMULADOR DO CAIXA ELETRONICO.py | debor4h/exerciciosPython | a18d88c6e98bc49005bfcb8badeb712007c16d69 | [
"MIT"
] | null | null | null | #071)Simular caixa eletronico, celulas 50,20,10,1 R$.EX: 100 e duas celulas de 50
# Version without a while loop
saque = float(input('Qual valor você quer sacar? R$ '))
if saque <= 0:
    print('Digite um valor acima de ZERO, por favor!')
cinquenta = saque // 50
vinte = (saque % 50) // 20
dez = ((saque % 50) % 20) // 10
um = (saque % 50) % 10
print(f'Total de {cinquenta :.2f} células de R$50')
print(f'Total de {vinte :.2f} células de R$20')
print(f'Total de {dez :.2f} células de R$10')
print(f'Total de {um :.2f} células de R$1')
| 29.277778 | 81 | 0.658444 | 101 | 527 | 3.435644 | 0.435644 | 0.080692 | 0.126801 | 0.149856 | 0.086455 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100687 | 0.170778 | 527 | 17 | 82 | 31 | 0.693364 | 0.184061 | 0 | 0 | 0 | 0 | 0.518779 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.454545 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
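The cascade of `//` and `%` above is greedy change-making: take as many of the largest note as possible, then repeat on the remainder. The same idea generalized to any note list (the function name and note tuple are mine, not part of the exercise):

```python
# Greedy note counting: divmod peels off the count for each note, largest first.
def count_notes(amount, notes=(50, 20, 10, 1)):
    counts = {}
    for note in notes:
        counts[note], amount = divmod(amount, note)
    return counts


notes_for_185 = count_notes(185)
```

Greedy selection is optimal here because each note divides the next larger one; for arbitrary denominations the greedy answer can use more notes than necessary.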
fefce0ce72b4f6e85489a872e47be3071b0cfb24 | 498 | py | Python | authentication/admin.py | shayweitzman/MyAutoBook | b5451e6d2db07c839b61802f94b57824a6b17da8 | [
"Unlicense"
] | 1 | 2022-01-24T15:49:32.000Z | 2022-01-24T15:49:32.000Z | authentication/admin.py | shayweitzman/MyAutoBook | b5451e6d2db07c839b61802f94b57824a6b17da8 | [
"Unlicense"
] | null | null | null | authentication/admin.py | shayweitzman/MyAutoBook | b5451e6d2db07c839b61802f94b57824a6b17da8 | [
"Unlicense"
] | 2 | 2021-01-03T15:19:05.000Z | 2021-01-14T08:14:12.000Z | from django.contrib import admin
from .models import Student,Adult
class AdultAdmin(admin.ModelAdmin):
    search_fields = ['user__first_name', 'user__last_name', 'ID_Number', 'id', 'user__username']
    list_display = ('user', 'id', 'ID_Number')


class StudentAdmin(admin.ModelAdmin):
    search_fields = ['user__first_name', 'user__last_name', 'id', 'grade', 'user__username']
    list_display = ('user', 'id', 'grade')
admin.site.register(Adult,AdultAdmin)
admin.site.register(Student,StudentAdmin)
| 33.2 | 95 | 0.742972 | 64 | 498 | 5.4375 | 0.40625 | 0.086207 | 0.12069 | 0.155172 | 0.477011 | 0.477011 | 0.310345 | 0.310345 | 0.310345 | 0.310345 | 0 | 0 | 0.096386 | 498 | 14 | 96 | 35.571429 | 0.773333 | 0 | 0 | 0 | 0 | 0 | 0.269076 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
fefe1c353f7dc38d6a35b46761349e5e0da4fc45 | 196 | py | Python | sikulix4python/sxclasses.py | cdlaimin/sikulix4python | 66d4f529a60dd802b73900f1baa858eaad454867 | [
"MIT"
] | 71 | 2019-03-15T10:08:54.000Z | 2022-01-20T20:40:31.000Z | sikulix4python/sxclasses.py | cdlaimin/sikulix4python | 66d4f529a60dd802b73900f1baa858eaad454867 | [
"MIT"
] | 10 | 2019-04-10T02:27:52.000Z | 2022-02-16T10:37:16.000Z | sikulix4python/sxclasses.py | cdlaimin/sikulix4python | 66d4f529a60dd802b73900f1baa858eaad454867 | [
"MIT"
] | 28 | 2019-03-19T10:50:50.000Z | 2022-02-24T15:37:18.000Z | from . sxbase import *
from . sxregion import Region
class Screen(Region):
    SXClass = SXScreen


class Location(SXBase):
    SXClass = SXLocation


class Image(SXBase):
    SXClass = SXImage
| 13.066667 | 29 | 0.704082 | 22 | 196 | 6.272727 | 0.590909 | 0.188406 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219388 | 196 | 14 | 30 | 14 | 0.901961 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fefea3bd651ce6a122ed7dddb4420015f863dc88 | 363 | py | Python | sumultiply.py | GraceAKelly/Is-it-Tuesday | 9fcec0ca650189c993c86def52b49758cd0cc73c | [
"Apache-2.0"
] | null | null | null | sumultiply.py | GraceAKelly/Is-it-Tuesday | 9fcec0ca650189c993c86def52b49758cd0cc73c | [
"Apache-2.0"
] | null | null | null | sumultiply.py | GraceAKelly/Is-it-Tuesday | 9fcec0ca650189c993c86def52b49758cd0cc73c | [
"Apache-2.0"
] | null | null | null | # Grace Kelly
# 08 March 2018
# Practice problem 1
# Use the sumultiply function
# https://github.com/ianmcloughlin/problems-python-fundamentals
def sumultiply(x, y):  # Define the required output
    total = 0
    for i in range(y):
        total = total + x  # loop y times, adding x to total each time
    return total
print(sumultiply(11, 13))
print(sumultiply(5, 123))
| 24.2 | 72 | 0.735537 | 56 | 363 | 4.767857 | 0.785714 | 0.11236 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052805 | 0.165289 | 363 | 14 | 73 | 25.928571 | 0.828383 | 0.575758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
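The loop above implements multiplication as repeated addition, so `sumultiply(x, y)` should always agree with `x * y` for non-negative `y`. A quick property check (the redefinition below is only so the snippet stands alone; it is not part of the original exercise):

```python
# Multiplication as repeated addition: add x to the total, y times.
def sumultiply(x, y):
    total = 0
    for i in range(y):
        total = total + x
    return total


# compare against the built-in operator on a few cases, including y == 0
checks = [sumultiply(x, y) == x * y for x, y in [(11, 13), (5, 123), (7, 0)]]
```

The `y == 0` case works because `range(0)` produces no iterations, leaving the total at its initial value of 0.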
3a05fb6f6850d68b21428ab2f1a6b361ef30667b | 426 | py | Python | 2-python-intermediario (Programacao Procedural)/aula16-reduce/aula16-reduce.py | Leodf/projetos-python | 64e6262e6535d92624ad50148634d881608a7523 | [
"MIT"
] | null | null | null | 2-python-intermediario (Programacao Procedural)/aula16-reduce/aula16-reduce.py | Leodf/projetos-python | 64e6262e6535d92624ad50148634d881608a7523 | [
"MIT"
] | null | null | null | 2-python-intermediario (Programacao Procedural)/aula16-reduce/aula16-reduce.py | Leodf/projetos-python | 64e6262e6535d92624ad50148634d881608a7523 | [
"MIT"
] | null | null | null |
from dados import pessoas, produtos, lista
from functools import reduce
"""
acumula = 0
for item in lista:
acumula += item
print(acumula)
"""
# soma_lista = reduce(lambda ac, i: i + ac, lista, 0)
# print(soma_lista)
# soma_precos = reduce(lambda ac, p: p['preco'] + ac, produtos, 0)
# print(soma_precos / len(produtos))
soma_idades = reduce(lambda ac, p: ac + p['idade'], pessoas, 0)
print(soma_idades / len(pessoas)) | 22.421053 | 66 | 0.685446 | 65 | 426 | 4.4 | 0.369231 | 0.125874 | 0.146853 | 0.104895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011299 | 0.169014 | 426 | 19 | 67 | 22.421053 | 0.79661 | 0.396714 | 0 | 0 | 0 | 0 | 0.02809 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
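The commented-out accumulator loop in aula16-reduce.py performs the same fold that `reduce` expresses in one line; a self-contained sketch (the `pessoas` sample data here is invented, since the `dados` module is not shown):

```python
from functools import reduce

# Invented stand-in for dados.pessoas.
pessoas = [
    {'nome': 'Ana', 'idade': 20},
    {'nome': 'Luiz', 'idade': 30},
    {'nome': 'Carla', 'idade': 40},
]

# reduce folds the list into one value: ac starts at 0, then each step adds
# the next person's idade, exactly like the manual acumula loop.
soma_idades = reduce(lambda ac, p: ac + p['idade'], pessoas, 0)
print(soma_idades / len(pessoas))  # average age → 30.0
```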
3a0beb6d01c78f817e3730d7d0fd9c80e33ee184 | 2,684 | py | Python | py/dd_sliceapply_all.py | bcgov/diutils | caf510c81f7f43372d4a8e18f77eaa86cdede6a5 | [
"Apache-2.0"
] | null | null | null | py/dd_sliceapply_all.py | bcgov/diutils | caf510c81f7f43372d4a8e18f77eaa86cdede6a5 | [
"Apache-2.0"
] | 1 | 2020-12-14T22:00:24.000Z | 2020-12-14T22:00:24.000Z | py/dd_sliceapply_all.py | bcgov/diutils | caf510c81f7f43372d4a8e18f77eaa86cdede6a5 | [
"Apache-2.0"
] | null | null | null | # Copyright 2019 Province of British Columbia
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''20190617 extract all fields from a fixed-width file, finding dd (data dictionary)
automagically!
'''
import os
import sys
from misc import *
msg = ("dd_sliceapply_all.py: extract all fields from a .dat file, choosing the data dictionary automatically.\n" +
       "usage:\n\tdd_sliceapply_all [dat file name]\n" +
       "usage:\n\tdd_sliceapply_all [dat file name] [cohort file (1-col csv with label studyid)]\n" +
       "Note: dd/ directory expected in present working folder")
if len(sys.argv) >= 2:  # 'args' in the original was undefined; the script reads sys.argv
    fn = sys.argv[1]
    ddn = fn + ".dd"
    if not exists("dd"):
        run("dd_list")
    if not os.path.exists(fn) or not os.path.isfile(fn):
        err("couldn't find input file: " + fn.strip())
    if not os.path.exists(ddn):
        run("dd_match_data_file " + fn)
    if not os.path.exists(ddn):
        err("data dictionary match not found")
    dd = open(ddn).read().strip()
    if not os.path.exists(dd):
        err("could not find dd file: " + dd.strip())
    fields = os.popen("dd_fields " + dd).read().strip().split("\n")[-1].split(" ")
    print("fields", fields)
    if len(sys.argv) >= 3:
        cohort_file = sys.argv[2]
        if not os.path.exists(cohort_file):
            err("could not open cohort file: " + cohort_file)
        # this program builds .rc file:
        run("dd_slice_apply_cohort " + dd + " " + cohort_file + " " + fn.strip())
    else:
        run("dd_slice_apply " + dd + " " + fn.strip() + " " + (" ".join([f for f in fields if f.upper() != "LINEFEED"])))
    a = os.system("rm -f *exclude*")  # remove this line
else:
    extract = []
    files = os.popen("ls -1 *.dat").read().strip().split()
    for f in files:
        f = f.strip()
        extract.append(f)
    f = open("./.dd_sliceapply_jobs.txt", "w")  # text mode: we write str, not bytes
    for i in range(0, len(extract)):
        f.write(" dd_sliceapply_all " + extract[i] + "\n")
    f.close()
    run("cat ./.dd_sliceapply_jobs.txt")
    print(msg)
    input("run the above jobs? press RETURN or ctrl-c to abort")  # raw_input in Python 2
    run("multicore ./.dd_sliceapply_jobs.txt 4")
    # hardware limitation: memory channels
| 35.315789 | 121 | 0.637854 | 403 | 2,684 | 4.176179 | 0.414392 | 0.035651 | 0.032086 | 0.03268 | 0.124183 | 0.079026 | 0.039216 | 0.039216 | 0 | 0 | 0 | 0.012008 | 0.224292 | 2,684 | 75 | 122 | 35.786667 | 0.79635 | 0.243666 | 0 | 0.090909 | 0 | 0.022727 | 0.362493 | 0.089576 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.068182 | null | null | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
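The dd_* pipeline above slices fields out of fixed-width .dat files according to a matched data dictionary; a toy illustration of that core slicing step (the `(name, start, width)` layout here is invented for illustration, not the real dd file format):

```python
# Toy data dictionary: field name, start column, width. Invented layout.
dd = [('studyid', 0, 6), ('year', 6, 4), ('code', 10, 3)]

def slice_record(line, dd):
    # Cut each fixed-width field out of the line and strip any padding.
    return {name: line[start:start + width].strip()
            for name, start, width in dd}

print(slice_record("AB12342019X42", dd))
# → {'studyid': 'AB1234', 'year': '2019', 'code': 'X42'}
```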
3a11b5a5fbec933a09e6cee949e08c1de8d780ad | 639 | py | Python | tests/Handlers/test_DictionaryDeserializer.py | TheBoringBakery/Riot-Watcher | 6e05fffe127530a75fd63e67da37ba81489fd4fe | [
"MIT"
] | 489 | 2015-01-04T22:49:51.000Z | 2022-03-28T03:15:54.000Z | tests/Handlers/test_DictionaryDeserializer.py | TheBoringBakery/Riot-Watcher | 6e05fffe127530a75fd63e67da37ba81489fd4fe | [
"MIT"
] | 162 | 2015-02-09T22:10:40.000Z | 2022-02-22T13:48:50.000Z | tests/Handlers/test_DictionaryDeserializer.py | TheBoringBakery/Riot-Watcher | 6e05fffe127530a75fd63e67da37ba81489fd4fe | [
"MIT"
] | 221 | 2015-01-07T18:01:57.000Z | 2022-03-26T21:18:48.000Z | import json
import pytest

from riotwatcher.Handlers import DictionaryDeserializer


@pytest.mark.unit
class TestDictionaryDeserializer:
    def test_basic_json(self):
        deserializer = DictionaryDeserializer()
        expected = {
            "test": {"object": "type", "int": 1},
            "bool": True,
            "list": ["string", "item"],
        }

        actual = deserializer.deserialize("", "", json.dumps(expected))

        assert expected == actual

    def test_empty_string(self):
        deserializer = DictionaryDeserializer()

        actual = deserializer.deserialize("", "", "")

        assert actual == {}
| 22.034483 | 71 | 0.605634 | 53 | 639 | 7.226415 | 0.584906 | 0.036554 | 0.198433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002137 | 0.267606 | 639 | 28 | 72 | 22.821429 | 0.816239 | 0 | 0 | 0.111111 | 0 | 0 | 0.054773 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.111111 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
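The test file above pins down two behaviours of `DictionaryDeserializer`: JSON text round-trips to a dict, and an empty body yields `{}`. A minimal stand-in with that contract (the real riotwatcher class may differ in details):

```python
import json

class DictionaryDeserializer:
    # Minimal stand-in: ignores the endpoint/method arguments and parses
    # the JSON body, returning {} for an empty string.
    def deserialize(self, endpoint_name, method_name, data):
        if not data:
            return {}
        return json.loads(data)

deserializer = DictionaryDeserializer()
print(deserializer.deserialize("", "", '{"bool": true}'))  # {'bool': True}
```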
3a1b8944c0642a7777a795999a67ab6ebd1571e2 | 48,448 | py | Python | bco_api/api/views.py | biocompute-objects/bco_api | 981341ca2074d337e66e78096397e8fbbcbebc6e | [
"MIT"
] | 1 | 2022-03-09T15:25:39.000Z | 2022-03-09T15:25:39.000Z | bco_api/api/views.py | biocompute-objects/bco_api | 981341ca2074d337e66e78096397e8fbbcbebc6e | [
"MIT"
] | 46 | 2021-06-17T19:43:04.000Z | 2022-03-29T13:43:10.000Z | bco_api/api/views.py | biocompute-objects/bco_api | 981341ca2074d337e66e78096397e8fbbcbebc6e | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
"""BCODB views
Django views for BCODB API
"""
# Based on the "Class Based API View" example at
# https://codeloop.org/django-rest-framework-course-for-beginners/
# For instructions on calling class methods from other classes, see
# https://stackoverflow.com/questions/3856413/call-class-method-from-another-class
from drf_yasg import openapi
from drf_yasg.utils import swagger_auto_schema
from rest_framework import status
# By-view permissions
from rest_framework.permissions import IsAuthenticated
# Message page
# Source: https://www.django-rest-framework.org/topics/html-and-forms/#rendering-html
from rest_framework.renderers import TemplateHTMLRenderer
from rest_framework.response import Response
# Views
from rest_framework.views import APIView
from .permissions import RequestorInPrefixAdminsGroup
# FIX
from .scripts.method_specific.GET_activate_account import GET_activate_account
from .scripts.method_specific.GET_draft_object_by_id import GET_draft_object_by_id
from .scripts.method_specific.GET_published_object_by_id import GET_published_object_by_id
from .scripts.method_specific.GET_published_object_by_id_with_version import GET_published_object_by_id_with_version
# Request-specific methods
from .scripts.method_specific.POST_api_accounts_describe import POST_api_accounts_describe
from .scripts.method_specific.POST_api_accounts_new import POST_api_accounts_new
from .scripts.method_specific.POST_api_groups_create import POST_api_groups_create
from .scripts.method_specific.POST_api_groups_delete import POST_api_groups_delete
from .scripts.method_specific.POST_api_groups_modify import POST_api_groups_modify
from .scripts.method_specific.POST_api_objects_drafts_create import POST_api_objects_drafts_create
from .scripts.method_specific.POST_api_objects_drafts_modify import POST_api_objects_drafts_modify
from .scripts.method_specific.POST_api_objects_drafts_permissions import POST_api_objects_drafts_permissions
from .scripts.method_specific.POST_api_objects_drafts_permissions_set import POST_api_objects_drafts_permissions_set
from .scripts.method_specific.POST_api_objects_drafts_publish import POST_api_objects_drafts_publish
from .scripts.method_specific.POST_api_objects_drafts_read import POST_api_objects_drafts_read
from .scripts.method_specific.POST_api_objects_drafts_token import POST_api_objects_drafts_token
from .scripts.method_specific.POST_api_objects_publish import POST_api_objects_publish
from .scripts.method_specific.POST_api_objects_published import POST_api_objects_published
from .scripts.method_specific.POST_api_objects_search import POST_api_objects_search
from .scripts.method_specific.POST_api_objects_token import POST_api_objects_token
from .scripts.method_specific.POST_api_prefixes_create import POST_api_prefixes_create
from .scripts.method_specific.POST_api_prefixes_delete import POST_api_prefixes_delete
from .scripts.method_specific.POST_api_prefixes_modify import POST_api_prefixes_modify
from .scripts.method_specific.POST_api_prefixes_permissions_set import POST_api_prefixes_permissions_set
from .scripts.method_specific.POST_api_prefixes_token import POST_api_prefixes_token
from .scripts.method_specific.POST_api_prefixes_token_flat import POST_api_prefixes_token_flat
# For helper functions
from .scripts.utilities import UserUtils
################################################################################################
# NOTES
################################################################################################
# Permissions
# We can't use the examples given in
# https://www.django-rest-framework.org/api-guide/permissions/#djangomodelpermissions
# because our permissions system is not tied to
# the request type (DELETE, GET, PATCH, POST).
################################################################################################
# TODO: This is a helper function so might want to go somewhere else
def check_post_and_process(request, PostFunction) -> Response:
    """
    Helper function to verify that a request is a POST, then call the
    callback function with the request body.

    Returns: An HTTP Response Object
    """
    # 'checked' is suppressed for the milestone.
    # Check the request:
    # checked = RequestUtils.RequestUtils().check_request_templates(
    #     method='POST',
    #     request=request.data
    # )
    checked = None
    if checked is None:
        # Pass the request to the handling function.
        return PostFunction(request)
    else:
        return Response(data=checked, status=status.HTTP_400_BAD_REQUEST)
# TODO: This is currently commented out; need to see what checking is meant to do
def check_get(request) -> Response:
    """
    Helper function to verify that a request is a GET.

    Returns: An HTTP Response Object
    """
    # Check the request:
    # checked = RequestUtils.RequestUtils().check_request_templates(
    #     method='GET',
    #     request=request.data
    # )
    # Placeholder
    return Response(status=status.HTTP_200_OK)
class ApiAccountsActivateUsernameTempIdentifier(APIView):
"""
Activate an account
--------------------
This is a request to activate a new account. Anyone with a valid token
generated by this host may activate one. This allows other users to act as
the verification layer in addition to the system.
"""
authentication_classes = []
permission_classes = []
# For the success and error messages
renderer_classes = [
TemplateHTMLRenderer
]
template_name = 'api/account_activation_message.html'
auth = []
auth.append(openapi.Parameter('username', openapi.IN_PATH, description="Username to be authenticated.",
type=openapi.TYPE_STRING))
auth.append(openapi.Parameter('temp_identifier', openapi.IN_PATH,
description="The temporary identifier needed to authenticate the activation. This "
"is found in the temporary account table (i.e. where an account is "
"staged).",
type=openapi.TYPE_STRING))
@swagger_auto_schema(manual_parameters=auth, responses={
201: "Account has been authorized.",
208: "Account has already been authorized.",
403: "Requestor's credentials were rejected.",
424: "Account has not been registered."
}, tags=["Account Management"])
def get(self, request, username: str, temp_identifier: str):
# Check the request to make sure it is valid - not sure what this is really doing though
# Placeholder
check_get(request)
checked = None
if checked is None:
# Pass the request to the handling function
return GET_activate_account(username=username, temp_identifier=temp_identifier)
else:
return Response(
{
'activation_success': False,
'status' : status.HTTP_400_BAD_REQUEST
}
)
# Source: https://www.django-rest-framework.org/api-guide/authentication/#by-exposing-an-api-endpoint
class ApiAccountsDescribe(APIView):
"""
Account details
--------------------
No schema for this request since only the Authorization header is required.
The word 'Token' must be included in the header.
For example: 'Token 627626823549f787c3ec763ff687169206626149'
"""
auth = [
openapi.Parameter('Authorization',
openapi.IN_HEADER,
description="Authorization Token",
type=openapi.TYPE_STRING
)
]
@swagger_auto_schema(manual_parameters=auth, responses={
200: "Authorization is successful.",
400: "Bad request. Authorization is not provided in the request headers.",
401: "Unauthorized. Authentication credentials were not provided."
}, tags=["Account Management"])
def post(self, request):
"""
Pass the request to the handling function
Source: https://stackoverflow.com/a/31813810
"""
if 'Authorization' in request.headers:
return POST_api_accounts_describe(token=request.META.get('HTTP_AUTHORIZATION'))
else:
return Response(status=status.HTTP_400_BAD_REQUEST)
class ApiGroupsCreate(APIView):
"""
Create group
--------------------
This API call creates a BCO group in the system. The name of the group is
required but all other parameters are optional.
"""
POST_api_groups_create_schema = openapi.Schema(
type=openapi.TYPE_OBJECT,
required=['name'],
properties={
'name': openapi.Schema(type=openapi.TYPE_STRING,
description='The name of the group to create'),
'usernames': openapi.Schema(type=openapi.TYPE_ARRAY,
items=openapi.Schema(type=openapi.TYPE_STRING),
description='List of users to add to the group.'),
'delete_members_on_group_deletion': openapi.Schema(type=openapi.TYPE_BOOLEAN,
description='Delete the members of the group if the group is deleted.'),
'description': openapi.Schema(type=openapi.TYPE_STRING,
description='Description of the group.'),
'expiration': openapi.Schema(type=openapi.TYPE_STRING,
description='Expiration date and time of the group. Note, '
'this needs to be in a Python DateTime compatible format.'),
'max_n_members': openapi.Schema(type=openapi.TYPE_INTEGER,
description='Maximum number of members to allow in the group.'),
},
description="Groups to create along with associated information."
)
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Group Creation Schema",
description="Parameters that are supported when trying to create a group.",
required=['POST_api_groups_create'],
properties={
'POST_api_groups_create': openapi.Schema(type=openapi.TYPE_ARRAY,
items=POST_api_groups_create_schema,
description='Groups and actions to take on them.')})
@swagger_auto_schema(request_body=request_body, responses={
200: "Group creation is successful.",
400: "Bad request.",
403: "Invalid token.",
409: "Group conflict. There is already a group with this name."
}, tags=["Group Management"])
def post(self, request):
return check_post_and_process(request, POST_api_groups_create)
class ApiGroupsDelete(APIView):
"""
Delete group
--------------------
Deletes one or more groups from the BCO API database. Even if not all
requests are successful, the API can return success. If a 300 response is
returned then the caller should loop through the response to understand
which deletes failed and why.
"""
POST_api_groups_delete_schema = openapi.Schema(
type=openapi.TYPE_OBJECT,
required=['names'],
properties={
'names': openapi.Schema(
type=openapi.TYPE_ARRAY,
description='List of groups to delete.',
items=openapi.Schema(
type=openapi.TYPE_STRING
)
),
}
)
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Group Deletion Schema",
description="Parameters that are supported when trying to delete "
"one or more groups.",
required=['POST_api_groups_delete'],
properties={
'POST_api_groups_delete': POST_api_groups_delete_schema
}
)
@swagger_auto_schema(request_body=request_body, responses={
200: "Group deletion is successful.",
300: "Mixture of successes and failures in a bulk delete.",
400: "Bad request.",
403: "Invalid token.",
404: "Missing optional bulk parameters, this request has no effect.",
418: "More than the expected one group was deleted."
}, tags=["Group Management"])
def post(self, request):
return check_post_and_process(request, POST_api_groups_delete)
class ApiGroupsModify(APIView):
"""Modify group
--------------------
Modifies an already existing BCO group. An array of objects is accepted, where each object
represents the instructions to modify a specific group. Within each object, along with the
group name, the set of modifications to that group exists in a dictionary as defined below.
Example request body which encodes renaming a group named `myGroup1` to `myGroup2`:
```
request_body = ['POST_api_groups_modify' : {
'name': 'myGroup1',
'actions': {
'rename': 'myGroup2'
}
}
]
```
More than one action can be included for a specific group name.
"""
POST_api_groups_modify_schema = openapi.Schema(
type=openapi.TYPE_OBJECT,
required=['name'],
properties={
'name': openapi.Schema(type=openapi.TYPE_STRING,
description='The name of the group to modify'),
'actions': openapi.Schema(type=openapi.TYPE_OBJECT,
properties={
'rename': openapi.Schema(type=openapi.TYPE_STRING,
description=""),
'redescribe': openapi.Schema(type=openapi.TYPE_STRING,
description="Change the description of the group to this."),
'owner_user': openapi.Schema(type=openapi.TYPE_STRING,
description="Change the owner of the group to this user."),
'remove_users': openapi.Schema(type=openapi.TYPE_ARRAY,
items=openapi.Schema(type=openapi.TYPE_STRING),
description="Users to remove from the group."),
'disinherit_from': openapi.Schema(type=openapi.TYPE_ARRAY,
items=openapi.Schema(type=openapi.TYPE_STRING),
description="Groups to disinherit permissions from."),
'add_users': openapi.Schema(type=openapi.TYPE_ARRAY,
items=openapi.Schema(type=openapi.TYPE_STRING),
description="Users to add to the group."),
'inherit_from' : openapi.Schema(type=openapi.TYPE_ARRAY,
items=openapi.Schema(type=openapi.TYPE_STRING),
description="Groups to inherit permissions from."),
},
description="Actions to take upon the group.")
}
)
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Group Modification Schema",
description="Parameters that are supported when trying to modify one or more groups.",
required=['POST_api_groups_modify'],
properties={
'POST_api_groups_modify': openapi.Schema(type=openapi.TYPE_ARRAY,
items=POST_api_groups_modify_schema,
description='Groups and actions to take on them.'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "Group modification is successful.",
400: "Bad request.",
403: "Insufficient privileges."
}, tags=["Group Management"])
def post(self, request):
return check_post_and_process(request, POST_api_groups_modify)
class ApiAccountsNew(APIView):
"""
Account creation request
--------------------
Ask for a new account. An activation link is sent to the provided e-mail address and must be
clicked to activate the account. Account creation depends on the creation of a matching account
in the associated user database; the authentication as well as the user database host information is used to make this request.
```JSON
{
"hostname": "http://localhost:8000",
"email": "example_email@example.com",
"token": "eyJ1c2VyX2lkIjoyNCwidXNlcm5hbWUiOiJoYWRsZXlraW5nIiwiZXhwIjoxNjQwNzE5NTUwLCJlbWFpbCI6ImhhZGxleV9raW5nQGd3dS5lZHUiLCJvcmlnX2lhdCI6MTY0MDExNDc1MH0.7G3VPmxUBOWFfu-fMt1_UsWAcH_Gd1DfpQa83EwFwYY"
}
```
"""
# Anyone can ask for a new account
authentication_classes = []
permission_classes = []
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Account Creation Schema",
description="Account creation schema description.",
required=['hostname', 'email', 'token'],
properties={
'hostname': openapi.Schema(type=openapi.TYPE_STRING, description='Hostname of the User Database.'),
'email' : openapi.Schema(type=openapi.TYPE_STRING, description='Email address of user.'),
'token' : openapi.Schema(type=openapi.TYPE_STRING, description='Token returned with new user being '
'generated in the User Database.'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "Account creation is successful.",
400: "Bad request.",
403: "Invalid token.",
409: "Account has already been authenticated or requested.",
500: "Unable to save the new account or send authentication email."
}, tags=["Account Management"])
def post(self, request) -> Response:
print("Request: {}".format(request))
return check_post_and_process(request, POST_api_accounts_new)
class ApiObjectsDraftsCreate(APIView):
"""
Create BCO Draft
--------------------
Creates a new BCO draft object.
"""
POST_api_objects_draft_create_schema = openapi.Schema(
type=openapi.TYPE_OBJECT,
required=['prefix', 'owner_group', 'schema', 'contents'],
properties={
'prefix' : openapi.Schema(type=openapi.TYPE_STRING,
description='BCO Prefix to use'),
'owner_group': openapi.Schema(type=openapi.TYPE_STRING,
description='Group which owns the BCO draft.'),
'object_id' : openapi.Schema(type=openapi.TYPE_STRING,
description='BCO Object ID.'),
'schema' : openapi.Schema(type=openapi.TYPE_STRING,
description='Which schema the BCO satisfies.'),
'contents' : openapi.Schema(type=openapi.TYPE_OBJECT, additional_properties=True, description="Contents of the BCO.")
}
)
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Create BCO Draft Schema",
description="Parameters that are supported when trying to create a draft BCO.",
required=['POST_api_objects_draft_create'],
properties={
'POST_api_objects_draft_create': openapi.Schema(type=openapi.TYPE_ARRAY,
items=POST_api_objects_draft_create_schema,
description='BCO Drafts to create.'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "Creation of BCO draft is successful.",
300: "Some requests failed and some succeeded.",
400: "Bad request.",
403: "Invalid token."
}, tags=["BCO Management"])
def post(self, request) -> Response:
return check_post_and_process(request, POST_api_objects_drafts_create)
class ApiObjectsDraftsModify(APIView):
"""
Modify a BCO Object
--------------------
Modifies a BCO object. The BCO object must be a draft in order to be modifiable. The contents of the BCO will be replaced with the
new contents provided in the request body.
"""
POST_api_objects_drafts_modify_schema = openapi.Schema(
type=openapi.TYPE_OBJECT,
required=['object_id', 'contents'],
properties={
'object_id': openapi.Schema(type=openapi.TYPE_STRING, description='BCO Object ID.'),
'contents' : openapi.Schema(type=openapi.TYPE_OBJECT, additional_properties=True, description="Contents of the BCO."),
}
)
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Modify BCO Draft Schema",
description="Parameters that are supported when trying to modify a draft BCO.",
required=['POST_api_objects_drafts_modify'],
properties={
'POST_api_objects_drafts_modify': openapi.Schema(type=openapi.TYPE_ARRAY,
items=POST_api_objects_drafts_modify_schema,
description='BCO Drafts to modify.'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "Modification of BCO draft is successful.",
400: "Bad request.",
403: "Invalid token."
}, tags=["BCO Management"])
def post(self, request) -> Response:
return check_post_and_process(request, POST_api_objects_drafts_modify)
class ApiObjectsDraftsPermissions(APIView):
"""
Get Permissions for a BCO Object
--------------------
Gets the permissions for a BCO object.
"""
POST_api_objects_drafts_permissions_schema = openapi.Schema(
type=openapi.TYPE_OBJECT,
required=['object_id', 'contents'],
properties={
'object_id': openapi.Schema(type=openapi.TYPE_STRING, description='BCO Object ID.'),
'contents' : openapi.Schema(type=openapi.TYPE_OBJECT, additional_properties=True, description="Contents of the BCO."),
}
)
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Get BCO Permissions Schema",
description="Parameters that are supported when fetching draft BCO permissions.",
required=['POST_api_objects_drafts_permissions'],
properties={
'POST_api_objects_drafts_permissions': openapi.Schema(type=openapi.TYPE_ARRAY,
items=POST_api_objects_drafts_permissions_schema,
description='BCO Drafts to fetch permissions for.'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "Checking BCO permissions is successful.",
300: "Some requests failed.",
400: "Bad request.",
403: "Invalid token."
}, tags=["BCO Management"])
def post(self, request) -> Response:
return check_post_and_process(request, POST_api_objects_drafts_permissions)
class ApiObjectsDraftsPermissionsSet(APIView):
"""
Set Permissions for a BCO Object
--------------------
Sets the permissions for a BCO object. The BCO object must be in draft form.
NOTE: This is currently a work in progress and may not yet work.
"""
# TODO: The POST_api_objects_draft_permissions_set call needs to be fixed, doesn't appear to work
POST_api_objects_drafts_permissions_set_schema = openapi.Schema(
type=openapi.TYPE_OBJECT,
required=['object_id'],
properties={
'object_id': openapi.Schema(type=openapi.TYPE_STRING, description='BCO Object ID.'),
'actions' : openapi.Schema(
type=openapi.TYPE_OBJECT,
properties={
'remove_permissions': openapi.Schema(type=openapi.TYPE_STRING,
description="Remove permissions from these users."),
'full_permissions' : openapi.Schema(type=openapi.TYPE_STRING,
description="Give users full permissions."),
'add_permissions' : openapi.Schema(type=openapi.TYPE_STRING,
description="Add permissions to these users."),
},
description="Actions to modify BCO permissions.")
}
)
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Set BCO Permissions Schema",
description="Parameters that are supported when setting draft BCO permissions.",
required=['POST_api_objects_drafts_permissions_set'],
properties={
'POST_api_objects_drafts_permissions_set': openapi.Schema(type=openapi.TYPE_ARRAY,
items=POST_api_objects_drafts_permissions_set_schema,
description='BCO Drafts to set permissions for.'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "Setting BCO permissions is successful.",
300: "Some requests failed.",
400: "Bad request.",
403: "Invalid token."
}, tags=["BCO Management"])
def post(self, request) -> Response:
return check_post_and_process(request, POST_api_objects_drafts_permissions_set)
# TODO: What is the difference between this and ApiObjectsPublish?
class ApiObjectsDraftsPublish(APIView):
"""
Publish a BCO
--------------------
Publish a draft BCO object. Once published, a BCO object becomes immutable.
"""
# TODO: This seems to be missing group, which I would expect to be part of the publication
permission_classes = [IsAuthenticated]
POST_api_objects_drafts_publish_schema = openapi.Schema(
type=openapi.TYPE_OBJECT,
required=['draft_id', 'prefix'],
properties={
'prefix' : openapi.Schema(type=openapi.TYPE_STRING, description='BCO Prefix to publish with.'),
'draft_id' : openapi.Schema(type=openapi.TYPE_STRING, description='BCO Object Draft ID.'),
'object_id' : openapi.Schema(type=openapi.TYPE_STRING, description='BCO Object ID.'),
'delete_draft': openapi.Schema(type=openapi.TYPE_BOOLEAN, description='Whether or not to delete the draft. False by default.'),
}
)
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Publish Draft BCO Schema",
description="Parameters that are supported when setting publishing BCOs.",
required=['POST_api_objects_drafts_publish'],
properties={
'POST_api_objects_drafts_publish': openapi.Schema(type=openapi.TYPE_ARRAY,
items=POST_api_objects_drafts_publish_schema,
description='BCO drafts to publish.'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "BCO Publication is successful.",
300: "Some requests failed.",
400: "Bad request.",
403: "Invalid token."
}, tags=["BCO Management"])
def post(self, request) -> Response:
return check_post_and_process(request, POST_api_objects_drafts_publish)
class ApiObjectsDraftsRead(APIView):
"""
Read BCO
--------------------
Reads a draft BCO object.
"""
POST_api_objects_drafts_read_schema = openapi.Schema(
type=openapi.TYPE_OBJECT,
required=['object_id'],
properties={
'object_id': openapi.Schema(type=openapi.TYPE_STRING, description='BCO Object ID.'),
}
)
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Read BCO Schema",
description="Parameters that are supported when reading BCOs.",
required=['POST_api_objects_drafts_read'],
properties={
'POST_api_objects_drafts_read': openapi.Schema(type=openapi.TYPE_ARRAY,
items=POST_api_objects_drafts_read_schema,
description='BCO objects to read.'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "Read BCO is successful.",
300: "Some requests failed.",
400: "Bad request.",
403: "Invalid token."
}, tags=["BCO Management"])
def post(self, request) -> Response:
return check_post_and_process(request, POST_api_objects_drafts_read)
# TODO: This should probably also be a GET (or only a GET)
class ApiObjectsDraftsToken(APIView):
"""Get Draft BCOs
--------------------
Get all the draft objects for a given token.
You can specify which information should be returned with this.
"""
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Get Draft BCO Schema",
description="Parameters that are supported when fetching a draft BCO.",
required=['POST_api_objects_drafts_token'],
properties={
'POST_api_objects_drafts_token': openapi.Schema(
type=openapi.TYPE_OBJECT,
required=['fields'],
properties={
'fields': openapi.Schema(
type=openapi.TYPE_ARRAY,
items=openapi.Schema(
type=openapi.TYPE_STRING,
description="Field to return",
enum=[
'contents',
'last_update',
'object_class',
'object_id',
'owner_group',
'owner_user',
'prefix',
'schema',
'state'
]
),
description="Fields to return.")
}
)
}
)
@swagger_auto_schema(request_body=request_body, responses={
200: "Fetch BCO drafts is successful.",
400: "Bad request.",
403: "Invalid token."
}, tags=["BCO Management"])
def post(self, request) -> Response:
# TODO: Not checking for authorization here?
# No schema for this request since only
# the Authorization header is required.
return POST_api_objects_drafts_token(rqst=request)
class ApiObjectsPublish(APIView):
"""Directly publish a BCO
--------------------
Take the bulk request and publish objects directly.
"""
# TODO: Need to get the schema that is being sent here from FE
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="BCO Publication Schema",
description="Publish description.",
properties={
'x': openapi.Schema(type=openapi.TYPE_STRING, description='Description of X'),
'y': openapi.Schema(type=openapi.TYPE_STRING, description='Description of Y'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "BCO publication is successful.",
400: "Bad request.",
403: "Invalid token."
}, tags=["BCO Management"])
def post(self, request) -> Response:
return check_post_and_process(request, POST_api_objects_publish)
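`check_post_and_process` is defined elsewhere in this codebase and its real implementation is not shown here. As a rough, simplified stand-in for the pattern these views rely on (verify that the POST body carries the expected top-level key, then hand off to the handler), using plain dicts in place of DRF request/response objects:

```python
def check_post_and_process(request_data: dict, handler):
    """Dispatch to `handler` if the body names a key matching the handler.

    Assumption for this sketch: `request_data` stands in for `request.data`,
    and the handler's __name__ doubles as the required top-level key.
    """
    key = handler.__name__
    if key not in request_data:
        return {"status": 400, "detail": f"missing '{key}' in request body"}
    return handler(request_data[key])


def POST_api_objects_publish(body):
    # Hypothetical handler: echo what would be published.
    return {"status": 200, "published": body}
```

This mirrors the convention visible above, where each request body wraps its parameters under a key named after the handler (e.g. `POST_api_objects_drafts_token`).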
class ApiObjectsSearch(APIView):
"""
Search for BCO
--------------------
Search for available BCO objects that match criteria.
"""
# TODO: Need to get the schema that is being sent here from FE
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="BCO Search Schema",
description="Search description.",
properties={
'x': openapi.Schema(type=openapi.TYPE_STRING, description='Description of X'),
'y': openapi.Schema(type=openapi.TYPE_STRING, description='Description of Y'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "BCO search is successful.",
400: "Bad request.",
403: "Invalid token."
}, tags=["BCO Management"])
def post(self, request) -> Response:
return check_post_and_process(request, POST_api_objects_search)
class ApiObjectsToken(APIView):
"""
Get User Draft and Published BCOs
--------------------
Get all BCOs available for a specific token, including published ones.
"""
# auth = []
# auth.append(
# openapi.Parameter('Token', openapi.IN_HEADER, description="Authorization Token", type=openapi.TYPE_STRING))
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Get BCO Schema",
description="Parameters that are supported when fetching BCOs.",
required=['POST_api_objects_token'],
properties={
'POST_api_objects_token': openapi.Schema(
type=openapi.TYPE_OBJECT,
required=['fields'],
properties={
'fields': openapi.Schema(
type=openapi.TYPE_ARRAY,
items=openapi.Schema(
type=openapi.TYPE_STRING,
description="Field to return",
enum=['contents', 'last_update', 'object_class', 'object_id', 'owner_group', 'owner_user',
'prefix', 'schema', 'state']
),
description="Fields to return.")
})})
@swagger_auto_schema(request_body=request_body, responses={
200: "Fetching BCOs is successful.",
400: "Bad request.",
403: "Invalid token."
}, tags=["BCO Management"])
def post(self, request) -> Response:
# No schema for this request since only
# the Authorization header is required.
return POST_api_objects_token(rqst=request)
class ApiObjectsPublished(APIView):
"""
Get Published BCOs
--------------------
Get all published BCOs. No token is required.
"""
# auth = []
# auth.append(
# openapi.Parameter('Token', openapi.IN_HEADER, description="Authorization Token", type=openapi.TYPE_STRING))
# request_body = openapi.Schema(
# type=openapi.TYPE_OBJECT,
# title="Get BCO Schema",
# description="Parameters that are supported when fetching a BCOs.",
# required=['POST_api_objects_token'],
# properties={
# 'POST_api_objects_token': openapi.Schema(
# type=openapi.TYPE_OBJECT,
# required=['fields'],
# properties={
# 'fields': openapi.Schema(
# type=openapi.TYPE_ARRAY,
# items=openapi.Schema(
# type=openapi.TYPE_STRING,
# description="Field to return",
# enum=['contents', 'last_update', 'object_class', 'object_id', 'owner_group', 'owner_user',
# 'prefix', 'schema', 'state']
# ),
# description="Fields to return.")
# })})
# Anyone can view a published object
authentication_classes = []
permission_classes = []
auth = []
@swagger_auto_schema(manual_parameters=auth, responses={
200: "Success.",
400: "Bad request. BCO name and version are not properly formatted.",
}, tags=["BCO Management"])
def get(self, request) -> Response:
return POST_api_objects_published()
# return POST_api_objects_token(rqst=request)
class ApiPrefixesCreate(APIView):
"""
Create a Prefix
--------------------
Creates a prefix to be used to classify BCOs and to determine permissions.
"""
# TODO: Need to get the schema that is being sent here from FE
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Prefix Creation Schema",
description="Prefix creation description.",
properties={
'x': openapi.Schema(type=openapi.TYPE_STRING, description='Description of X'),
'y': openapi.Schema(type=openapi.TYPE_STRING, description='Description of Y'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "Creating a prefix is successful.",
400: "Bad request.",
403: "Invalid token."
}, tags=["Prefix Management"])
def post(self, request) -> Response:
return check_post_and_process(request, POST_api_prefixes_create)
class ApiPrefixesDelete(APIView):
"""
Delete a Prefix
--------------------
Deletes a prefix for BCOs.
"""
# TODO: Not sure if this actually does anything?
# Permissions - prefix admins only
permission_classes = [RequestorInPrefixAdminsGroup]
# TODO: Need to get the schema that is being sent here from FE
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Prefix Delete Schema",
description="Prefix delete description.",
properties={
'x': openapi.Schema(type=openapi.TYPE_STRING, description='Description of X'),
'y': openapi.Schema(type=openapi.TYPE_STRING, description='Description of Y'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "Deleting a prefix is successful.",
400: "Bad request.",
403: "Invalid token."
}, tags=["Prefix Management"])
def post(self, request) -> Response:
return check_post_and_process(request, POST_api_prefixes_delete)
class ApiPrefixesPermissionsSet(APIView):
"""
Set Prefix Permissions
--------------------
Sets the permissions available for a specified prefix.
"""
# TODO: Need to get the schema that is being sent here from FE
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Prefix Permissions Schema",
description="Prefix permissions description.",
properties={
'x': openapi.Schema(type=openapi.TYPE_STRING, description='Description of X'),
'y': openapi.Schema(type=openapi.TYPE_STRING, description='Description of Y'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "Setting prefix permissions is successful.",
400: "Bad request.",
403: "Invalid token."
}, tags=["Prefix Management"])
def post(self, request) -> Response:
return check_post_and_process(request, POST_api_prefixes_permissions_set)
class ApiPrefixesToken(APIView):
"""
Get Prefixes
--------------------
Get all available prefixes for a given token.
The word 'Token' must be included in the header.
For example: 'Token 627626823549f787c3ec763ff687169206626149'.
"""
auth = [openapi.Parameter('Authorization', openapi.IN_HEADER, description="Authorization Token", type=openapi.TYPE_STRING)]
@swagger_auto_schema(manual_parameters=auth, responses={
200: "Fetching prefixes is successful.",
400: "Bad request.",
403: "Invalid token."
}, tags=["Prefix Management"])
def post(self, request) -> Response:
if 'Authorization' in request.headers:
# Pass the request to the handling function
# Source: https://stackoverflow.com/a/31813810
return POST_api_prefixes_token(request=request)
else:
return Response(status=status.HTTP_400_BAD_REQUEST)
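The docstring above requires the literal word 'Token' in the Authorization header. A client-side helper for building that header, plus a check mirroring the view's `'Authorization' in request.headers` guard (both illustrative, not part of the API):

```python
def make_auth_header(token: str) -> dict:
    """Build the Authorization header in the 'Token <hex>' form shown above."""
    return {"Authorization": f"Token {token}"}


def has_authorization(headers: dict) -> bool:
    """Mirror the view's check: requests without the header get a 400."""
    return "Authorization" in headers


# Example token taken from the docstring above.
headers = make_auth_header("627626823549f787c3ec763ff687169206626149")
```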
# TODO: What does this do? Appears to flatten the prefixes (not sure what for)
class ApiPrefixesTokenFlat(APIView):
"""
Get Prefixes in a flat format
--------------------
Get all available prefixes for a given token, returned in a flat format.
The word 'Token' must be included in the header.
For example: 'Token 627626823549f787c3ec763ff687169206626149'
"""
auth = [openapi.Parameter('Authorization', openapi.IN_HEADER, description="Authorization Token", type=openapi.TYPE_STRING)]
@swagger_auto_schema(manual_parameters=auth, responses={
200: "Fetching prefixes is successful.",
400: "Bad request.",
403: "Invalid token."
}, tags=["Prefix Management"])
def post(self, request) -> Response:
if 'Authorization' in request.headers:
# Pass the request to the handling function
# Source: https://stackoverflow.com/a/31813810
return POST_api_prefixes_token_flat(request=request)
else:
return Response(status=status.HTTP_400_BAD_REQUEST)
class ApiPrefixesUpdate(APIView):
"""
Update a Prefix
--------------------
Updates a prefix with additional or new information.
"""
# Permissions - prefix admins only
permission_classes = [RequestorInPrefixAdminsGroup]
# TODO: Need to get the schema that is being sent here from FE
request_body = openapi.Schema(
type=openapi.TYPE_OBJECT,
title="Prefix Update Schema",
description="Prefix update description.",
properties={
'x': openapi.Schema(type=openapi.TYPE_STRING, description='Description of X'),
'y': openapi.Schema(type=openapi.TYPE_STRING, description='Description of Y'),
})
@swagger_auto_schema(request_body=request_body, responses={
200: "Updating prefix is successful.",
400: "Bad request.",
403: "Invalid token."
}, tags=["Prefix Management"])
def post(self, request) -> Response:
return check_post_and_process(request, POST_api_prefixes_modify)
class ApiPublicDescribe(APIView):
"""
Describe API
--------------------
Returns information about the API.
"""
auth = []
@swagger_auto_schema(manual_parameters=auth, responses={
200: "Success.",
400: "Bad request."
}, tags=["API Management"])
def get(self, request):
# Pass the request to the handling function
return Response(UserUtils.UserUtils().get_user_info(username='anon'))
# Source: https://www.django-rest-framework.org/api-guide/permissions/#setting-the-permission-policy
class DraftObjectId(APIView):
"""
Read Object by URI
--------------------
Reads and returns an object based on a URI.
"""
# For the success and error messages
# renderer_classes = [
# TemplateHTMLRenderer
# ]
# template_name = 'api/account_activation_message.html'
auth = []
auth.append(openapi.Parameter('draft_object_id', openapi.IN_PATH, description="Object ID to be viewed.",
type=openapi.TYPE_STRING))
@swagger_auto_schema(manual_parameters=auth, responses={
200: "Success.",
400: "Bad request.",
403: "Invalid token."
}, tags=["BCO Management"])
def get(self, request, draft_object_id):
# No need to check the request (unnecessary for GET as it's checked
# by the url parser?).
# Pass straight to the handler.
# TODO: This is not dealing with the draft_object_id parameter being passed in?
return GET_draft_object_by_id(do_id=request.build_absolute_uri(), rqst=request)
# Allow anyone to view published objects.
# Source: https://www.django-rest-framework.org/api-guide/permissions/#setting-the-permission-policy
class ObjectIdRootObjectId(APIView):
"""
View Published BCO by ID
--------------------
Reads and returns a published BCO based on an object ID.
"""
# For the success and error messages
# renderer_classes = [
# TemplateHTMLRenderer
# ]
# template_name = 'api/account_activation_message.html'
auth = []
auth.append(openapi.Parameter('object_id_root', openapi.IN_PATH, description="Object ID to be viewed.",
type=openapi.TYPE_STRING))
# Anyone can view a published object
authentication_classes = []
permission_classes = []
@swagger_auto_schema(manual_parameters=auth, responses={
200: "Success.",
400: "Bad request."
}, tags=["BCO Management"])
def get(self, request, object_id_root):
return GET_published_object_by_id(object_id_root)
# Allow anyone to view published objects.
# Source: https://www.django-rest-framework.org/api-guide/permissions/#setting-the-permission-policy
class ObjectIdRootObjectIdVersion(APIView):
"""
View Published BCO by ID and Version
--------------------
Reads and returns a published BCO based on an object ID and a version.
"""
# For the success and error messages
# renderer_classes = [
# TemplateHTMLRenderer
# ]
# template_name = 'api/account_activation_message.html'
auth = []
auth.append(openapi.Parameter('object_id_root', openapi.IN_PATH, description="Object ID to be viewed.",
type=openapi.TYPE_STRING))
auth.append(openapi.Parameter('object_id_version', openapi.IN_PATH, description="Object version to be viewed.",
type=openapi.TYPE_STRING))
# Anyone can view a published object
authentication_classes = []
permission_classes = []
@swagger_auto_schema(manual_parameters=auth, responses={
200: "Success.",
400: "Bad request."
}, tags=["BCO Management"])
def get(self, request, object_id_root, object_id_version):
return GET_published_object_by_id_with_version(object_id_root, object_id_version)
# ============================================================
# File: justify.py  (repo: teared/sublime-justify, license: Unlicense)
# ============================================================
import re
import sublime
import sublime_plugin
from Default.paragraph import *
from Default.paragraph import OldWrapLinesCommand as WrapLinesCommand
from . import jtextwrap as textwrap
class WrapLinesJustifiedCommand(WrapLinesCommand):
''' Same as parent, except using jtextwrap. '''
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def run(self, edit, width=0):
if width == 0 and self.view.settings().get("wrap_width"):
try:
width = int(self.view.settings().get("wrap_width"))
except TypeError:
pass
if width == 0 and self.view.settings().get("rulers"):
# try and guess the wrap width from the ruler, if any
try:
width = int(self.view.settings().get("rulers")[0])
except ValueError:
pass
except TypeError:
pass
if width == 0:
width = 78
# Make sure tabs are handled as per the current buffer
tab_width = 8
if self.view.settings().get("tab_size"):
try:
tab_width = int(self.view.settings().get("tab_size"))
except TypeError:
pass
if tab_width == 0:
tab_width == 8
paragraphs = []
for s in self.view.sel():
paragraphs.extend(all_paragraphs_intersecting_selection(self.view, s))
if len(paragraphs) > 0:
self.view.sel().clear()
for p in paragraphs:
self.view.sel().add(p)
# This isn't an ideal way to do it, as we lose the position of the
# cursor within the paragraph: hence why the paragraph is selected
# at the end.
for s in self.view.sel():
wrapper = textwrap.TextWrapper()
wrapper.expand_tabs = False
wrapper.width = width
prefix = self.extract_prefix(s)
if prefix:
wrapper.initial_indent = prefix
wrapper.subsequent_indent = prefix
wrapper.width -= self.width_in_spaces(prefix, tab_width)
if wrapper.width < 0:
continue
txt = self.view.substr(s)
if prefix:
txt = txt.replace(prefix, u"")
txt = txt.expandtabs(tab_width)
txt = wrapper.fill(txt) + u"\n"
self.view.replace(edit, s, txt)
# It's unhelpful to have the entire paragraph selected, just leave the
# selection at the end
ends = [s.end() - 1 for s in self.view.sel()]
self.view.sel().clear()
for pt in ends:
self.view.sel().add(sublime.Region(pt))
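The command above delegates to a bundled `jtextwrap` module. The core prefix-preserving wrap it performs (keep a comment prefix on every output line via `initial_indent`/`subsequent_indent`) can be sketched with the standard library's `textwrap`:

```python
import textwrap


def wrap_with_prefix(text: str, prefix: str, width: int = 78) -> str:
    """Wrap `text` so every output line starts with `prefix` (e.g. '# ')."""
    wrapper = textwrap.TextWrapper(
        width=width,
        initial_indent=prefix,
        subsequent_indent=prefix,
        expand_tabs=False,
    )
    return wrapper.fill(text)


wrapped = wrap_with_prefix("one two three four five six seven eight", "# ", width=14)
```

Note this sketch only wraps; the justification step that `jtextwrap` adds on top is not reproduced here.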
# ============================================================
# File: extensions/utils/plugging.py  (repo: taciturasa/atteybot-neo, license: MIT)
# ============================================================
# -*- coding: utf-8 -*-
# attey plugging util
# Provides utils for dealing with plugs and entries inside panels.
'''Plugging File'''
import discord
from discord.ext import commands
import rethinkdb
from extensions.utils import logging
def entry():
def inner():
...
return inner
def selection():
def inner():
...
return inner
def deselection():
def inner():
...
return inner
class Plugging():
"""Deals with plugs and entries inside panels."""
def __init__(self, bot):
self.bot = bot
self.conn = bot.conn
async def add_plug(self,
panel_channel: discord.TextChannel,
plug_name: str):
"""Adds a plug to a panel."""
...
async def remove_plug(self,
panel_channel: discord.TextChannel,
plug_name: str):
"""Removes a plug from a panel."""
...
async def parse_reaction(self, reaction):
"""Parses a reaction on an entry."""
...
def setup(bot):
bot.plugging = Plugging(bot) | 21.461538 | 66 | 0.558244 | 125 | 1,116 | 4.896 | 0.448 | 0.039216 | 0.068627 | 0.093137 | 0.333333 | 0.261438 | 0.160131 | 0.160131 | 0.160131 | 0 | 0 | 0.00134 | 0.331541 | 1,116 | 52 | 67 | 21.461538 | 0.819035 | 0.147849 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3a3eb74c989c102fd4cd8127b399fa9a8ebee1d5 | 6,624 | py | Python | src/mapstp/cli/runner.py | MC-kit/map-stp | a82b6560358a37f704fd0fe76c76def27a15458d | [
"MIT"
] | null | null | null | src/mapstp/cli/runner.py | MC-kit/map-stp | a82b6560358a37f704fd0fe76c76def27a15458d | [
"MIT"
] | 19 | 2021-11-29T10:29:30.000Z | 2022-03-17T11:21:08.000Z | src/mapstp/cli/runner.py | MC-kit/map-stp | a82b6560358a37f704fd0fe76c76def27a15458d | [
"MIT"
] | null | null | null | """Application to transfer meta information from STP.
For a given STP file, creates an Excel table listing
the STP paths of STP components, corresponding to the cells
an MCNP model would contain were it generated from the STP with SuperMC.
The excel also contains material numbers, densities, correction factors,
and RWCL id. The values can be specified in the names of STP
components as special tags. A tag is denoted by a bracket-enclosed
specification at the end of the component name: "Component name [<spec>]".
The spec may contain space separated entries:
- m:<mnemonic> - first column in a special material-index.xlsx file.
- f:<factor> - float number for density correction factor
- r:<rwcl> - any label to categorize the components for RWCL
If MCNP file is also specified as the second `mcnp` argument,
then produces output MCNP file with STP paths inserted
as end of line comments after corresponding cells with prefix
"sep:". The material numbers and densities are set according
to the meta information provided in the STP.
"""
from dataclasses import dataclass
from pathlib import Path
import click
from mapstp import __name__ as package_name
from mapstp import __summary__, __version__
from mapstp.excel import create_excel
from mapstp.materials import get_used_materials, load_materials_map
from mapstp.merge import correct_start_cell_number, join_paths, merge_paths
from mapstp.utils.io import can_override, select_output
# TODO dvp: add customized configuring from a configuration toml-file.
from mapstp.workflow import create_path_info
# from .logging import logger
# from click_loguru import ClickLoguru
# LOG_FILE_RETENTION = 3
# NO_LEVEL_BELOW = 30
#
#
# def stderr_log_format_func(msg_dict):
# """Do level-sensitive formatting.
#
# Just a copy from click-loguru so far."""
#
# if msg_dict["level"].no < NO_LEVEL_BELOW:
# return "<level>{message}</level>\n"
# return "<level>{level}</level>: <level>{message}</level>\n"
#
#
# click_loguru = ClickLoguru(
# NAME,
# VERSION,
# stderr_format_func=stderr_log_format_func,
# retention=LOG_FILE_RETENTION,
# log_dir_parent=".logs",
# timer_log_level="info",
# )
@dataclass
class Config:
override: bool = False
_USAGE = f"""
{__summary__}
For a given STP file, creates an Excel table listing
the STP paths of STP components, corresponding to the cells
an MCNP model would contain were it generated from the STP with SuperMC.
If MCNP file is also specified as the second `mcnp-file` argument,
then produces output MCNP file with STP paths inserted
as end of line comments after corresponding cells with prefix
"sep:". The material numbers and densities are set according
to the meta information provided in the STP.
"""
# @click_loguru.logging_options
# @click.group(help=meta.__summary__, name=NAME)
@click.command(help=_USAGE, name=package_name)
# @click_loguru.init_logger()
# @click_loguru.stash_subcommand()
@click.option(
"--override/--no-override",
default=False,
help="Override existing files, (default: no)",
)
@click.option(
"--output",
"-o",
metavar="<output>",
type=click.Path(dir_okay=False),
required=False,
help="File to write the MCNP with marked cells (default: stdout)",
)
@click.option(
"--excel",
"-e",
metavar="<excel-file>",
type=click.Path(dir_okay=False),
required=False,
help="Excel file to write the component paths",
)
@click.option(
"--materials",
metavar="<materials-file>",
type=click.Path(dir_okay=False, exists=True),
required=False,
help="Text file containing MCNP materials specifications."
"If present, the selected materials present in this file are printed"
"to the `output` MCNP model, so, it becomes complete valid model",
)
@click.option(
"--materials-index",
"-m",
metavar="<materials-index-file>",
type=click.Path(dir_okay=False, exists=True),
required=False,
help="Excel file containing materials mnemonics and corresponding references for MCNP model "
"(default: file from the package internal data corresponding to ITER C-model)",
)
@click.option(
"--separator",
metavar="<separator>",
type=click.STRING,
default="/",
help="String to separate components in the STP path",
)
@click.option(
"--start-cell-number",
metavar="<number>",
type=click.INT,
required=False,
help="Number to start cell numbering in the Excel file "
"(default: the first cell number in `mcnp` file, if specified, otherwise 1)",
)
@click.argument(
"stp", metavar="<stp-file>", type=click.Path(dir_okay=False, exists=True)
)
@click.argument(
"mcnp",
metavar="[mcnp-file]",
type=click.Path(dir_okay=False, exists=True),
required=False,
)
@click.version_option(__version__, prog_name=package_name)
# @logger.catch(reraise=True)
@click.pass_context
# ctx, verbose: bool, quiet: bool, logfile: bool, profile_mem: bool, override: bool
def mapstp(
ctx,
override: bool,
output,
excel,
materials,
materials_index,
separator,
start_cell_number,
stp,
mcnp,
) -> None:
f"""Transfers meta information from STP to MCNP model and Excel.
Args:
ctx:
override:
output:
excel:
materials:
materials_index:
separator:
start_cell_number:
stp:
mcnp:
Returns:
"""
if not (mcnp or excel):
raise click.UsageError(
"Nor `excel`, neither `mcnp` parameter is specified - nothing to do"
)
# if quiet:
# logger.level("WARNING")
# if verbose:
# logger.level("TRACE")
# logger.info("Running {}", NAME)
# logger.debug("Working dir {}", Path(".").absolute())
#
cfg = ctx.ensure_object(Config)
# obj["DEBUG"] = debug
cfg.override = override
paths, path_info = create_path_info(materials_index, stp)
materials_map = load_materials_map(materials) if materials else None
used_materials_text = (
get_used_materials(materials_map, path_info) if materials_map else None
)
if mcnp:
_mcnp = Path(mcnp)
with select_output(override, output) as _output:
joined_paths = join_paths(paths, separator)
merge_paths(_output, joined_paths, path_info, _mcnp, used_materials_text)
if excel:
start_cell_number = correct_start_cell_number(start_cell_number, mcnp)
_excel = Path(excel)
can_override(_excel, override)
create_excel(_excel, paths, path_info, separator, start_cell_number)
# TODO dvp: add logging
if __name__ == "__main__":
mapstp()
# ============================================================
# File: strategies/analyser.py  (repo: Sytten/artis, license: MIT)
# ============================================================
import logging
import ccxt.async_support as ccxt  # renamed from ccxt.async; 'async' is a keyword on Python 3.7+
from liqui import Liqui
from binance.client import Client
from database.models.types import Types
from database.models.status import Status
from dynaconf import settings
logger = logging.getLogger(__name__)
class CoinAnalysis(object):
def __init__(self, coin=None, origin=None, origin_price=0, destination=None, destination_price=0, profit_factor=0):
self.coin = coin
self.origin = origin
self.origin_price = origin_price
self.destination = destination
self.destination_price = destination_price
self.profit_factor = profit_factor
class Analyser(object):
_pair = "{}/ETH"
_LIMIT = "limit"
def __init__(self):
self.liqui = Liqui(settings.LIQUI.API_KEY, settings.LIQUI.API_SECRET)
self.binance = Client(settings.BINANCE.API_KEY, settings.BINANCE.API_SECRET)
self.markets = {
'LIQUI': ccxt.liqui({
'apiKey': settings.LIQUI.API_KEY,
'secret': settings.LIQUI.API_SECRET
}),
'BINANCE': ccxt.binance({
'apiKey': settings.BINANCE.API_KEY,
'secret': settings.BINANCE.API_SECRET
})
}
self.minimum_order = settings.MINIMUM_AMOUNT_TO_TRADE
def get_coin_analysis(self, coin, origin, destination):
coin_analysis = CoinAnalysis(coin=coin, origin=origin, destination=destination)
# Temporary origin market checking until abstracted
if origin == "LIQUI":
pair = "{}_eth".format(coin).lower()
origin_ticker = self.liqui.ticker(pair)
origin_last_price = origin_ticker.get(pair).get('last')
elif origin == "BINANCE":
symbol = "{}ETH".format(coin)
origin_ticker = self.binance.get_ticker(symbol=symbol)
origin_last_price = float(origin_ticker.get('lastPrice'))
else:
logger.error("Unknown origin market: {}".format(origin))
return coin_analysis
coin_analysis.origin_price = origin_last_price
logger.debug("Last price for origin market {} is {:.7f}".format(origin, origin_last_price))
# Temporary destination market checking until abstracted
if destination == "LIQUI":
pair = "{}_eth".format(coin).lower()
destination_ticker = self.liqui.ticker(pair)
destination_last_price = destination_ticker.get(pair).get('last')
elif destination == "BINANCE":
symbol = "{}ETH".format(coin)
destination_ticker = self.binance.get_ticker(symbol=symbol)
destination_last_price = float(destination_ticker.get('lastPrice'))
else:
logger.error("Unknown destination market: {}".format(origin))
return coin_analysis
coin_analysis.destination_price = destination_last_price
logger.debug("Last price for destination market {} is {:.7f}".format(destination, destination_last_price))
coin_analysis.profit_factor = round(destination_last_price / origin_last_price, 6)
logger.debug("Profit Factor: {}".format(coin_analysis.profit_factor))
return coin_analysis
async def get_latest_depth(self, market, coin, params={}):
return await self.markets.get(market).fetch_order_book(self._pair.format(coin), params=params)
async def get_balance(self, market, params={}):
return await self.markets.get(market).fetch_balance(params=params)
def is_order_filled(self, order, order_id, market):
"""
Use this method when parsing the result of an order fetch
:param order:
:param order_id:
:param market:
:return:
"""
# TODO: Refactor to add markets more easily
if market == "LIQUI":
if order.get("info").get("return").get(str(order_id)).get("status") == 1:
return True
elif market == "BINANCE":
if order.get("info").get("status") == "FILLED":
return True
else:
logger.error("Cannot extract status for market {}".format(market))
return False
def extract_amount(self, order, market):
# TODO: Refactor to add markets more easily
if market == "LIQUI":
return order.get("info").get("return").get("received")
elif market == "BINANCE":
return float(order.get("info").get("executedQty"))
else:
logger.error("Cannot extract status for market {}".format(market))
return 0
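The TODOs above call out the repeated `if market == ...` chains. One way to make markets easier to add is a per-market table of extractor callables; a sketch of that refactor for the amount extractor (the dict keys and paths mirror the code above, but this is an illustration, not the project's implementation):

```python
# Per-market extractors keyed by market name; adding a market means adding a row.
_AMOUNT_EXTRACTORS = {
    "LIQUI": lambda order: order["info"]["return"]["received"],
    "BINANCE": lambda order: float(order["info"]["executedQty"]),
}


def extract_amount(order: dict, market: str):
    """Look up the market's extractor instead of branching on the market name."""
    extractor = _AMOUNT_EXTRACTORS.get(market)
    if extractor is None:
        raise KeyError(f"unknown market: {market}")
    return extractor(order)
```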
def extract_remaining_amount(self, order, market):
# TODO: Refactor to add markets more easily
if market == "LIQUI":
return order.get("info").get("return").get("remains")
elif market == "BINANCE":
remaining = float(order.get("info").get("origQty")) - float(order.get("info").get("executedQty"))
logger.debug(remaining)
return remaining
else:
logger.error("Cannot remaining amount for market {}".format(market))
return 0
def extract_order_executed_amount(self, order, order_id, market):
# TODO: Refactor to add markets more easily
if market == "LIQUI":
return order.get("info").get("return").get(str(order_id)).get("amount")
elif market == "BINANCE":
return float(order.get("info").get("executedQty"))
else:
logger.error("Cannot executed amount for market {}".format(market))
return 0
def extract_price(self, order, market):
# TODO: Refactor to add markets more easily
if market == "LIQUI":
return order.get("price")
elif market == "BINANCE":
return float(order.get("info").get("price"))
else:
logger.error("Cannot extract status for market {}".format(market))
return 0
@staticmethod
def extract_type(order, market):
if market == "LIQUI":
return Types[order.get("side").upper()]
elif market == "BINANCE":
return Types[order.get("info").get("side")]
else:
logger.error("Cannot extract type for market {}".format(market))
return Types.UNKNOWN
@staticmethod
def extract_start_amount(order, market):
if market == "LIQUI":
return order.get("amount")
elif market == "BINANCE":
return float(order.get("info").get("origQty"))
else:
logger.error("Cannot extract start amount for market {}".format(market))
return 0
@staticmethod
def extract_remaining_amount2(order, market):
if market == "LIQUI":
return order.get("info").get("return").get("remains")
elif market == "BINANCE":
return float(order.get("info").get("origQty")) - float(order.get("info").get("executedQty"))
else:
logger.error("Cannot extract remaining amount for market {}".format(market))
return 0
@staticmethod
def extract_remaining_amount_order(order, market):
if market == "LIQUI":
return order.get("info").get("amount")
elif market == "BINANCE":
return float(order.get("info").get("origQty")) - float(order.get("info").get("executedQty"))
else:
logger.error("Cannot extract remaining amount for market {} (order)".format(market))
return 0
@staticmethod
def extract_price2(order, market):
if market == "LIQUI":
return order.get("price")
elif market == "BINANCE":
return float(order.get("info").get("price"))
else:
logger.error("Cannot extract price for market {}".format(market))
return 0
@staticmethod
def extract_status(order, market):
if market == "LIQUI":
if order.get("info").get("return").get("order_id") == 0:
return Status.DONE
else:
return Status.ONGOING
elif market == "BINANCE":
if order.get("info").get("status") == "FILLED":
return Status.DONE
else:
return Status.ONGOING
else:
logger.error("Cannot extract status for market {}".format(market))
return Status.UNKNOWN
@staticmethod
def extract_status_order(order, market):
if market == "LIQUI":
return {0: Status.ONGOING,
1: Status.DONE,
2: Status.CANCELLED,
3: Status.CANCELLED}[order.get("info").get("status")]
elif market == "BINANCE":
return {'NEW': Status.ONGOING,
'PARTIALLY_FILLED': Status.ONGOING,
'FILLED': Status.DONE,
'CANCELED': Status.CANCELLED}[order.get("info").get("status")]
else:
logger.error("Cannot extract status for market {} (order)".format(market))
return Status.UNKNOWN
@staticmethod
def is_filled(order, market):
return Analyser.extract_status(order, market) == Status.DONE
def extract_good_order(self, orders):
for order in orders:
if order[0]*order[1] > self.minimum_order:
return order
logger.error("No good order found")
return [orders[0][0], 0]
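The extract_* helpers above all repeat the same dispatch-by-exchange pattern over ccxt-style order payloads. A minimal, self-contained sketch of that pattern (the Status enum and sample payloads are simplified stand-ins, not the bot's real classes):

```python
from enum import Enum

class Status(Enum):
    ONGOING = 1
    DONE = 2
    UNKNOWN = 3

def extract_status(order, market):
    # Each exchange nests the order state differently inside "info".
    if market == "BINANCE":
        return Status.DONE if order["info"]["status"] == "FILLED" else Status.ONGOING
    if market == "LIQUI":
        return Status.DONE if order["info"]["return"]["order_id"] == 0 else Status.ONGOING
    return Status.UNKNOWN  # unrecognized exchange

print(extract_status({"info": {"status": "FILLED"}}, "BINANCE"))  # Status.DONE
```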
| 38.371429 | 119 | 0.598979 | 1,050 | 9,401 | 5.247619 | 0.122857 | 0.039201 | 0.050091 | 0.062613 | 0.5951 | 0.523049 | 0.497278 | 0.403085 | 0.350817 | 0.321779 | 0 | 0.003992 | 0.280502 | 9,401 | 244 | 120 | 38.528689 | 0.810615 | 0.033401 | 0 | 0.457286 | 0 | 0 | 0.141975 | 0 | 0 | 0 | 0 | 0.004098 | 0 | 0 | null | null | 0 | 0.035176 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3a4866f725785121d1fcb79febeec99cffaf0c12 | 2,848 | py | Python | xdev/__init__.py | Erotemic/xdev | 86d11305d1c6c54188a012fc61f82cb60c496f38 | [
"Apache-2.0"
] | 3 | 2019-03-04T02:49:22.000Z | 2022-02-08T14:41:23.000Z | xdev/__init__.py | Erotemic/xdev | 86d11305d1c6c54188a012fc61f82cb60c496f38 | [
"Apache-2.0"
] | 4 | 2021-10-05T21:11:38.000Z | 2022-03-25T17:41:10.000Z | xdev/__init__.py | Erotemic/xdev | 86d11305d1c6c54188a012fc61f82cb60c496f38 | [
"Apache-2.0"
] | null | null | null | """
This is Jon Crall's xdev module.
These are tools I often use in IPython, but they almost never make it into
production code, otherwise they would be in :mod:`ubelt`.
"""
__dev__ = """
CommandLine:
# Regenerate the tail of this file
mkinit ~/code/xdev/xdev -w
TODO:
- [ ] Update mkinit so we can either:
(1) blocklist specific modules from importing their attrs or
(2) passlist the modules that will import their attrs
- [ ] Perhaps let submodules specify a 2-tuple with the second item
being a dict that indicates: (nomod, noattr)?
- [ ] Automatically add custom defined names in this file to __all__
"""
__version__ = '0.2.5'
__submodules__ = [
'embeding',
'interactive_iter',
'desktop_interaction',
'introspect',
'class_reloader',
'search_replace',
'misc',
'autojit',
'profiler',
'tracebacks',
]
__extra_all__ = [
'util'
]
from xdev.embeding import util
### The following is autogenerated
from xdev import autojit
from xdev import class_reloader
from xdev import desktop_interaction
from xdev import embeding
from xdev import interactive_iter
from xdev import introspect
from xdev import misc
from xdev import profiler
from xdev import search_replace
from xdev import tracebacks
from xdev.embeding import (EmbedOnException, embed, embed_on_exception_context,
fix_embed_globals,)
from xdev.interactive_iter import (InteractiveIter,)
from xdev.desktop_interaction import (editfile, startfile, view_directory,)
from xdev.introspect import (distext, get_func_kwargs, get_stack_frame,)
from xdev.class_reloader import (reload_class,)
from xdev.search_replace import (GrepResult, Pattern, RE_Pattern, find, grep,
grepfile, sed, sedfile,)
from xdev.misc import (byte_str, difftext, nested_type, quantum_random,
set_overlaps, tree,)
from xdev.autojit import (import_module_from_pyx,)
from xdev.profiler import (IS_PROFILING, profile, profile_now,)
from xdev.tracebacks import (make_warnings_print_tracebacks,)
__all__ = ['EmbedOnException', 'GrepResult', 'IS_PROFILING', 'InteractiveIter',
'Pattern', 'RE_Pattern', 'autojit', 'byte_str', 'class_reloader',
'desktop_interaction', 'difftext', 'distext', 'editfile', 'embed',
'embed_on_exception_context', 'embeding', 'find',
'fix_embed_globals', 'get_func_kwargs', 'get_stack_frame', 'grep',
'grepfile', 'import_module_from_pyx', 'interactive_iter',
'introspect', 'make_warnings_print_tracebacks', 'misc',
'nested_type', 'profile', 'profile_now', 'profiler',
'quantum_random', 'reload_class', 'search_replace', 'sed',
'sedfile', 'set_overlaps', 'startfile', 'tracebacks', 'tree',
'util', 'view_directory']
| 33.505882 | 79 | 0.696278 | 342 | 2,848 | 5.538012 | 0.418129 | 0.088701 | 0.073918 | 0.023231 | 0.057022 | 0.027455 | 0 | 0 | 0 | 0 | 0 | 0.002644 | 0.203301 | 2,848 | 84 | 80 | 33.904762 | 0.832085 | 0.069522 | 0 | 0 | 0 | 0 | 0.397348 | 0.029545 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.015873 | 0.380952 | 0 | 0.380952 | 0.031746 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
3a55f3e8a524f3f8e4a51c0525886b03c3bb8dbd | 10,695 | py | Python | undaqTools/tests/test_element.py | rogerlew/undaqTools | 5010e6ab303bd8664d28e4eea839c258791cf6dd | [
"Apache-2.0"
] | null | null | null | undaqTools/tests/test_element.py | rogerlew/undaqTools | 5010e6ab303bd8664d28e4eea839c258791cf6dd | [
"Apache-2.0"
] | 1 | 2021-07-08T15:56:06.000Z | 2021-07-27T18:34:58.000Z | undaqTools/tests/test_element.py | rogerlew/undaqTools | 5010e6ab303bd8664d28e4eea839c258791cf6dd | [
"Apache-2.0"
] | 6 | 2016-10-04T21:23:52.000Z | 2021-07-09T12:31:28.000Z | from __future__ import print_function
# Copyright (c) 2013, Roger Lew
# All rights reserved.
import os
import unittest
import numpy as np
from scipy.signal import detrend
from numpy.testing import assert_array_equal
from undaqTools import Daq, Element, fslice, findex
test_file = 'data reduction_20130204125617.daq'
class Test_element(unittest.TestCase):
def setUp(self):
self.x = Element([24,245,6325,2435,245,1234,14,234,548],
range(3000, 3009))
def test0(self):
x = self.x
self.assertTrue(isinstance(x, Element))
def test1(self):
x = self.x
self.assertRaises(NotImplementedError, x.transpose)
def test2(self):
x = self.x
self.assertRaises(NotImplementedError, lambda x : x.T, x)
# daq['TPR_Tire_Surf_Type'] = \
#Element(data = [[11 1 1 11 11 11 1 1 11 11 3 3 3 3 3 3 11 11 1 1 11 11 1 1]
# [11 1 1 11 11 11 1 1 11 11 11 11 3 3 11 11 11 11 1 1 11 11 1 1]
# [11 11 1 1 1 11 11 1 1 11 11 3 3 3 3 3 3 11 11 1 1 11 11 1]
# [11 11 1 1 11 11 11 1 1 11 11 11 11 3 3 11 11 11 11 1 1 11 11 1]
# [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
# [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
# [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
# [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
# [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
# [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]],
# frames = [ 2716 5519 5523 5841 5844 5845 7970 7973 8279 8284 8785 8791
# 8818 8824 9127 9132 9166 9171 10270 10274 10597 10600 12655 12659],
# name = 'TPR_Tire_Surf_Type',
# numvalues = 10,
# rate = -1 (CSSDC),
# varrateflag = False,
# nptype = int16)
class Test_getitem(unittest.TestCase):
def setUp(self):
self.x = Element([24,245,6325,2435,245,1234,14,234,548],
range(3000, 3009))
def test0(self):
x = self.x
assert_array_equal([[24,245,6325]], x[:, :3])
assert_array_equal([3000,3001,3002], x[:, :3].frames)
def test1(self):
x = self.x
assert_array_equal([24,245,6325], x[0, :3])
assert_array_equal([3000,3001,3002], x[0, :3].frames)
class Test_setitem(unittest.TestCase):
def setUp(self):
self.x = Element([24,245,6325,2435,245,1234,14,234,548],
range(3000, 3009))
def test0(self):
x = self.x
x[0,2] = 99
assert_array_equal([[24,245,99,2435,245,1234,14,234,548]], x)
assert_array_equal(range(3000, 3009), x.frames)
def test1(self):
x = self.x
x[:,:3] = [[99,99,99]]
assert_array_equal([[99,99,99,2435,245,1234,14,234,548]], x)
assert_array_equal(range(3000, 3009), x.frames)
def test2(self):
x = self.x
rs = [[-2016.,-1599.,4677.,983.,-1011.,174.,-850.,-434.,76.]]
x[:,:] = detrend(x)
assert_array_equal(rs, x)
assert_array_equal(range(3000, 3009), x.frames)
class Test_state_at_frame(unittest.TestCase):
def test0(self):
"""frame between values"""
global test_file
hdf5file = test_file[:-4]+'.hdf5'
daq = Daq()
daq.read_hd5(os.path.join('data', hdf5file))
rs= np.array( [[11],[11],[11],[11],[ 0],[ 0],[ 0],[ 0],[ 0],[ 0]],
dtype=np.int16)
ds = daq['TPR_Tire_Surf_Type'][:, findex(7000)]
assert_array_equal(rs, ds)
self.assertFalse(isinstance(ds, Element))
def test1(self):
"""frame less than element.frames[0]"""
global test_file
hdf5file = test_file[:-4]+'.hdf5'
daq = Daq()
daq.read_hd5(os.path.join('data', hdf5file))
ds = daq['TPR_Tire_Surf_Type'][:,findex(0)]
self.assertTrue(np.isnan(ds))
def test2(self):
"""frame > element.frames[-1]"""
global test_file
hdf5file = test_file[:-4]+'.hdf5'
daq = Daq()
daq.read_hd5(os.path.join('data', hdf5file))
rs= np.array( [[ 1],[ 1],[ 1],[ 1],[ 0],[ 0],[ 0],[ 0],[ 0],[ 0]],
dtype=np.int16)
ds = daq['TPR_Tire_Surf_Type'][:, findex(13000)]
assert_array_equal(rs, ds)
self.assertFalse(isinstance(ds, Element))
def test3(self):
"""on defined frame"""
global test_file
hdf5file = test_file[:-4]+'.hdf5'
daq = Daq()
daq.read_hd5(os.path.join('data', hdf5file))
rs= np.array( [[11],[11],[ 1],[ 1],[ 0],[ 0],[ 0],[ 0],[ 0],[ 0]],
dtype=np.int16)
ds = daq['TPR_Tire_Surf_Type'][:,findex(5841)]
assert_array_equal(rs[:,0], ds[:,0])
self.assertFalse(isinstance(ds, Element))
def test4(self):
"""just after defined frame"""
global test_file
hdf5file = test_file[:-4]+'.hdf5'
daq = Daq()
daq.read_hd5(os.path.join('data', hdf5file))
rs= np.array( [[11],[11],[ 1],[ 1],[ 0],[ 0],[ 0],[ 0],[ 0],[ 0]],
dtype=np.int16)
ds = daq['TPR_Tire_Surf_Type'][:,findex(5842)]
assert_array_equal(rs[:,0], ds[:,0])
self.assertFalse(isinstance(ds, Element))
def test5(self):
"""just before defined frame"""
global test_file
hdf5file = test_file[:-4]+'.hdf5'
daq = Daq()
daq.read_hd5(os.path.join('data', hdf5file))
rs= np.array( [[1],[1],[ 1],[ 1],[ 0],[ 0],[ 0],[ 0],[ 0],[ 0]],
dtype=np.int16)
ds = daq['TPR_Tire_Surf_Type'][:,findex(5840)]
assert_array_equal(rs[:,0], ds[:,0])
self.assertFalse(isinstance(ds, Element))
def test6(self):
"""row indx slice"""
global test_file
hdf5file = test_file[:-4]+'.hdf5'
daq = Daq()
daq.read_hd5(os.path.join('data', hdf5file))
rs= np.array( [[1],[1],[ 1],[ 1]],
dtype=np.int16)
ds = daq['TPR_Tire_Surf_Type'][:4, findex(5840)]
assert_array_equal(rs[:,0], ds[:,0])
self.assertFalse(isinstance(ds, Element))
def test7(self):
"""row indx int"""
global test_file
hdf5file = test_file[:-4]+'.hdf5'
daq = Daq()
daq.read_hd5(os.path.join('data', hdf5file))
ds = daq['TPR_Tire_Surf_Type'][3, findex(5840)]
self.assertEqual(1, ds)
def test8(self):
"""row indx int"""
global test_file
hdf5file = test_file[:-4]+'.hdf5'
daq = Daq()
daq.read_hd5(os.path.join('data', hdf5file))
ds = daq['TPR_Tire_Surf_Type'][4, findex(5840)]
self.assertEqual(0, ds)
class Test_frame_slice(unittest.TestCase):
def setUp(self):
self.x = Element([[24,245,6325,2435,245,1234,14,234,548]],
range(3000, 3009))
def test0(self):
x = self.x
rs = np.array([[ 24., 245., 6325., 2435.]])
assert_array_equal(x[:, fslice(3004)], rs)
def test00(self):
x = self.x
rs = np.array([3000, 3001, 3002, 3003])
assert_array_equal(x[:, fslice(3004)].frames, rs)
def test1(self):
x = self.x
rs = np.array([[ 245., 6325., 2435.]])
assert_array_equal(x[:, fslice(3001, 3004)], rs)
def test2(self):
x = self.x
rs = np.array([[24,6325,245,14,548]])
assert_array_equal(x[:, fslice(None, None, 2)], rs)
def test3(self):
x = self.x
rs = np.array([[24,245,6325,2435,245,1234,14,234,548]])
ds = x[:, fslice(3000, 3009)]
assert_array_equal(ds, rs)
class Test_isCSSDC(unittest.TestCase):
def setUp(self):
global test_file
hdf5file = test_file[:-4]+'.hdf5'
hdf5file = os.path.join('data', hdf5file)
try:
with open(hdf5file):
pass
except IOError:
daq = Daq()
daq.read(os.path.join('data', test_file))
daq.write_hd5(hdf5file)
def test0(self):
global test_file
hdf5file = test_file[:-4]+'.hdf5'
daq = Daq()
daq.read_hd5(os.path.join('data', hdf5file))
self.assertFalse(daq['VDS_Veh_Speed'].isCSSDC())
def test1(self):
global test_file
hdf5file = test_file[:-4]+'.hdf5'
daq = Daq()
daq.read_hd5(os.path.join('data', hdf5file))
self.assertTrue(daq['TPR_Tire_Surf_Type'].isCSSDC())
class Test_toarray(unittest.TestCase):
def setUp(self):
global test_file
hdf5file = test_file[:-4]+'.hdf5'
hdf5file = os.path.join('data', hdf5file)
try:
with open(hdf5file):
pass
except IOError:
daq = Daq()
daq.read(os.path.join('data', test_file))
daq.write_hd5(hdf5file)
def test0(self):
global test_file
hdf5file = test_file[:-4]+'.hdf5'
daq = Daq()
daq.read_hd5(os.path.join('data', hdf5file))
ds =daq['VDS_Veh_Speed'].toarray()
self.assertFalse(isinstance(ds, Element))
self.assertEqual(ds.shape, (1, 10658))
def suite():
return unittest.TestSuite((
unittest.makeSuite(Test_element),
unittest.makeSuite(Test_getitem),
unittest.makeSuite(Test_setitem),
unittest.makeSuite(Test_state_at_frame),
unittest.makeSuite(Test_frame_slice),
unittest.makeSuite(Test_isCSSDC),
unittest.makeSuite(Test_toarray)
))
if __name__ == "__main__":
# run tests
runner = unittest.TextTestRunner()
runner.run(suite())
| 33.317757 | 92 | 0.490697 | 1,408 | 10,695 | 3.619318 | 0.134233 | 0.065934 | 0.095369 | 0.122449 | 0.714482 | 0.681515 | 0.670722 | 0.633438 | 0.605377 | 0.584184 | 0 | 0.157223 | 0.369612 | 10,695 | 320 | 93 | 33.421875 | 0.598635 | 0.119682 | 0 | 0.640553 | 0 | 0 | 0.041608 | 0.003058 | 0 | 0 | 0 | 0 | 0.175115 | 0 | null | null | 0.009217 | 0.032258 | null | null | 0.004608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3a5ac07cccea483bf853a4345f93bbf45c33210a | 865 | py | Python | meiduo_mall/meiduo_mall/apps/meiduo_admin/views/order.py | ZHD165/Django_- | f89c80a22c5065b46900a20bd505614b5bcb2e6e | [
"MIT"
] | null | null | null | meiduo_mall/meiduo_mall/apps/meiduo_admin/views/order.py | ZHD165/Django_- | f89c80a22c5065b46900a20bd505614b5bcb2e6e | [
"MIT"
] | null | null | null | meiduo_mall/meiduo_mall/apps/meiduo_admin/views/order.py | ZHD165/Django_- | f89c80a22c5065b46900a20bd505614b5bcb2e6e | [
"MIT"
] | null | null | null | from rest_framework.viewsets import ModelViewSet
from orders.models import OrderInfo
from meiduo_admin.utils import PageNum
from meiduo_admin.serializers.order import OrderSeriazlier
from rest_framework.decorators import action
from rest_framework.response import Response
class OrdersView(ModelViewSet):
serializer_class = OrderSeriazlier
queryset = OrderInfo.objects.all()
pagination_class = PageNum
# Define a status action on the viewset to update an order's status
@action(methods=['put'], detail=True)
def status(self, request, pk):
# Get the order object
order = self.get_object()
# Get the new status value
status = request.data.get('status')
# Update the order status
order.status = status
order.save()
# Return the result
ser = self.get_serializer(order)
return Response({
'order_id': order.order_id,
'status': status
})
| 29.827586 | 58 | 0.678613 | 92 | 865 | 6.26087 | 0.51087 | 0.041667 | 0.088542 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.241619 | 865 | 28 | 59 | 30.892857 | 0.878049 | 0.056647 | 0 | 0 | 0 | 0 | 0.028395 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
28a6d45bc92a6e1abe55be5dd9e692195ef620d8 | 4,170 | py | Python | analysis_tools/PYTHON_RICARDO/rpl/tools/voxelization/test_intersection.py | lefevre-fraser/openmeta-mms | 08f3115e76498df1f8d70641d71f5c52cab4ce5f | [
"MIT"
] | null | null | null | analysis_tools/PYTHON_RICARDO/rpl/tools/voxelization/test_intersection.py | lefevre-fraser/openmeta-mms | 08f3115e76498df1f8d70641d71f5c52cab4ce5f | [
"MIT"
] | null | null | null | analysis_tools/PYTHON_RICARDO/rpl/tools/voxelization/test_intersection.py | lefevre-fraser/openmeta-mms | 08f3115e76498df1f8d70641d71f5c52cab4ce5f | [
"MIT"
] | null | null | null | """ Represent a triangulated surface using a 3D boolean grid"""
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
import matplotlib.cm as cm
voxel_size = 0.05
#stl_file = "50_standing"
#stl_file = "50p_hunch8"
stl_file = "volume_w_seats"
file_name = "voxels_temp_" + stl_file + "_" + str(voxel_size) + ".npz"
d = np.load(file_name)
in_out = d["in_out"]
#stl_file = "50_standing"
stl_file = "50p_hunch8"
file_name = "voxels_temp_" + stl_file + "_" + str(voxel_size) + ".npz"
in_out_manikin_base = np.load(file_name)["in_out"]
labels, count = ndimage.measurements.label(in_out)
label_corner = labels[-1, -1, -1]
exclude = labels != label_corner
in_out &= exclude
#in_out = ndimage.binary_opening(in_out, iterations=4)
labels, count = ndimage.measurements.label(in_out)
label_corner = labels[-1, -1, -1]
floor = np.ones((in_out.shape[0], in_out.shape[2]), dtype=np.uint16) * (in_out.shape[1] - in_out_manikin_base.shape[1])
#floor = np.ones((in_out.shape[0], in_out.shape[2]), dtype=np.uint16) * (in_out.shape[1])
for obj_num in range(count + 1):
print(obj_num, np.sum(labels == obj_num))
if obj_num == label_corner:
continue
print(obj_num, np.sum(labels == obj_num))
for i in range(in_out.shape[0]):
print("Slice {} of {}".format(i, in_out.shape[0]))
for j in range(in_out.shape[1]):
for k in range(in_out.shape[2]):
if labels[i, j, k] == obj_num:
if labels[i, j - 1, k] == 0:
floor[i, k] = min(floor[i, k], j)
full = in_out_manikin_base.shape[1]
manikin_heights = [full, full - full // 10, full - full // 4]
manikin_heights = [full]
print(manikin_heights)
v_min = None
v_max = None
for config, manikin_height in enumerate(manikin_heights):
in_out_manikin = in_out_manikin_base[:, :manikin_height, :]
# in_out_manikin = in_out_manikin_base[:, :, :]
labels, count = ndimage.measurements.label(in_out_manikin)
label_corner = labels[-1, -1, -1]
in_out_manikin = labels != label_corner
manikin_volume = np.sum(in_out_manikin)
manikin_i = in_out_manikin.shape[0]
half_i = manikin_i // 2
manikin_j = in_out_manikin.shape[1]
manikin_k = in_out_manikin.shape[2]
half_k = manikin_k // 2
print(in_out.shape)
print(in_out_manikin.shape)
intersection_checks = 0
results = np.empty((in_out.shape[0] - manikin_i, in_out.shape[2] - manikin_k), dtype=np.float)
for i in range(in_out.shape[0] - manikin_i):
for k in range(in_out.shape[2] - manikin_k):
results[i, k] = np.sum(in_out_manikin & in_out[i : i + manikin_i,
floor[i + half_i, k + half_k] : floor[i + half_i, k + half_k] + manikin_j,
k : k + manikin_k])
intersection_checks += 1
print("Intersection checks", intersection_checks)
# results = results / manikin_volume
results = (manikin_volume - results) / manikin_volume
if v_min is None:
v_min = np.min(results)
v_max = np.max(results)
plt.subplot(1, len(manikin_heights), config + 1)  # subplot indices are 1-based
plt.imshow(results, cmap=cm.copper_r, vmin=v_min, vmax=v_max, norm=None)
#plt.subplot(122)
#plt.imshow(floor, cmap=cm.copper_r)
plt.show()
#size_check = 50
#
#block_check = np.ones((size_check, size_check, size_check), dtype=np.bool)
#
#
#for i in xrange(in_out.shape[0] - size_check):
# for j in xrange(in_out.shape[1] - size_check):
# print i, j, np.sum(block_check & in_out[i : i + size_check,
# j : j + size_check,
# :size_check])
#
| 30.661765 | 134 | 0.561631 | 576 | 4,170 | 3.798611 | 0.184028 | 0.095978 | 0.077697 | 0.035192 | 0.450183 | 0.430987 | 0.332724 | 0.273309 | 0.143967 | 0.143967 | 0 | 0.022976 | 0.321583 | 4,170 | 136 | 135 | 30.661765 | 0.750442 | 0.1753 | 0 | 0.136364 | 0 | 0 | 0.032451 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.060606 | null | null | 0.106061 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
28a8c51ef0a5f90d220349580731047b34d9dde8 | 2,459 | py | Python | scripts/import-ig-siis.py | firebird631/siis | 8d64e8fb67619aaa5c0a62fda9de51dedcd47796 | [
"PostgreSQL"
] | null | null | null | scripts/import-ig-siis.py | firebird631/siis | 8d64e8fb67619aaa5c0a62fda9de51dedcd47796 | [
"PostgreSQL"
] | null | null | null | scripts/import-ig-siis.py | firebird631/siis | 8d64e8fb67619aaa5c0a62fda9de51dedcd47796 | [
"PostgreSQL"
] | null | null | null | #!/usr/bin/python3
import sys
import subprocess
import pathlib
BROKER = "ig.com"
BASE_PATH = "/mnt/storage/Data/market/ig.com/dumps"
PREFIX = "full"
MARKETS = {
"CS.D.AUDNZD.MINI.IP": "AUDNZD",
"CS.D.AUDUSD.MINI.IP": "AUDUSD",
"CS.D.EURCAD.MINI.IP": "EURCAD",
"CS.D.EURCHF.MINI.IP": "EURCHF",
"CS.D.EURGBP.MINI.IP": "EURGBP",
"CS.D.EURUSD.MINI.IP": "EURUSD",
"IX.D.DAX.IFMM.IP": "GER30",
"CS.D.GBPUSD.MINI.IP": "GBPUSD",
"IX.D.NASDAQ.IFE.IP": "NAS100",
"IX.D.SPTRD.IFE.IP": "SPX500",
"IX.D.DOW.IFE.IP": "US30",
"CS.D.USDCHF.MINI.IP": "USDCHF",
"CS.D.USDJPY.MINI.IP": "USDJPY",
# "CS.D.CFDSILVER.CFM.IP": "XAGUSD",
"CS.D.CFEGOLD.CFE.IP": "XAUUSD"
# @todo WTI
}
IMPORT_TFS = {
"1m": "2017-12-25T00:00:00",
"3m": "2019-12-25T00:00:00",
"5m": "2019-12-15T00:00:00",
"15m": "2019-12-01T00:00:00",
"30m": "2019-10-01T00:00:00",
"1h": "2019-07-01T00:00:00",
"2h": "2019-07-01T00:00:00",
"4h": "2019-01-01T00:00:00",
"1d": "2017-01-01T00:00:00",
"1w": "2010-01-01T00:00:00",
"1M": "2000-01-01T00:00:00"
}
def import_siis_any(market, symbol, prefix="full"):
"""Unique file for any timeframes"""
src_path = pathlib.Path(BASE_PATH, market)
if not src_path.exists():
print("! Missing path for %s" % market)
return
print("Import %s in %s from %s" % (market, "any", src_path))
with subprocess.Popen(["python", "siis.py", "real", "--import",
"--filename=%s/%s-%s-%s-any.siis" % (src_path, prefix, BROKER, market)]):
print("-- Done")
def import_siis(market, symbol, prefix="full"):
"""Distinct file per timeframe"""
for tf, lfrom in IMPORT_TFS.items():
src_path = pathlib.Path(BASE_PATH, market)
if not src_path.exists():
print("! Missing path for %s" % market)
return
print("Import %s in %s from %s" % (market, tf, src_path))
with subprocess.Popen(["python", "siis.py", "real", "--import",
"--filename=%s/%s-%s-%s-%s.siis" % (src_path, prefix, BROKER, market, tf)]):
print("-- Done")
if __name__ == "__main__":
if len(sys.argv) > 1:
# overrides
BASE_PATH = sys.argv[1]
if len(sys.argv) > 2:
# overrides
PREFIX = sys.argv[2]
for _market, _symbol in MARKETS.items():
# use unique file
import_siis_any(_market, _symbol, PREFIX)
| 28.929412 | 101 | 0.561204 | 357 | 2,459 | 3.778711 | 0.319328 | 0.024463 | 0.053373 | 0.032617 | 0.355819 | 0.299481 | 0.253521 | 0.253521 | 0.253521 | 0.253521 | 0 | 0.096449 | 0.232615 | 2,459 | 84 | 102 | 29.27381 | 0.618442 | 0.063847 | 0 | 0.196721 | 0 | 0 | 0.373141 | 0.044619 | 0 | 0 | 0 | 0.011905 | 0 | 1 | 0.032787 | false | 0 | 0.196721 | 0 | 0.262295 | 0.098361 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
28aa20d9276cf621dfea7eb047b052a2cc1301af | 1,787 | py | Python | website/auth.py | MarcelKolodziej/Flask-Blog | ae341841a3937400ce4091d521516bb49770d633 | [
"MIT"
] | null | null | null | website/auth.py | MarcelKolodziej/Flask-Blog | ae341841a3937400ce4091d521516bb49770d633 | [
"MIT"
] | null | null | null | website/auth.py | MarcelKolodziej/Flask-Blog | ae341841a3937400ce4091d521516bb49770d633 | [
"MIT"
] | null | null | null | from flask import Blueprint, render_template, redirect, url_for,request, flash
from . import db
from .models import User
from flask_login import login_user, logout_user, login_required
auth = Blueprint("auth", __name__)
@auth.route("/login", methods=['GET', 'POST'])
def login():
email = request.form.get("email")
password = request.form.get("password")
return render_template("login.html")
@auth.route("/sign-up", methods=['GET', 'POST'])
def signup():
if request.method == 'POST':
username = request.form.get("username")
email = request.form.get("email")
password1 = request.form.get("password1")
password2 = request.form.get("password2")
email_exist = User.query.filter_by(email=email).first()
username_exists = User.query.filter_by(username=username).first()
if email_exist:
# Flash a message on the screen
flash('Email is already in use.', category='error')
elif username_exists:
flash('Username is already in use', category='error')
elif password1 != password2:
flash("Passwords don't match", category='error')
elif len(username) < 3:
flash('Username too short!', category='error')
elif len(password1) < 6:
flash('Password too short!')
elif len(email) < 5:
flash('Email too short!', category='error')
else:
new_user = User(email=email, username=username, password=password1)
db.session.add(new_user)
db.session.commit()
flash('User created!')
return redirect(url_for('views.home'))
return render_template("signup.html")
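Note that signup above stores the password string as-is; in a real Flask app it should be hashed before saving (werkzeug's generate_password_hash is the usual choice). A standard-library sketch of salted PBKDF2 hashing and verification (helper names are illustrative, not part of this app):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    # Salted PBKDF2-HMAC-SHA256; the salt must be stored alongside the digest.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
```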
@auth.route("/logout")
def logout():
return redirect(url_for("views.home"))
| 37.229167 | 87 | 0.623951 | 215 | 1,787 | 5.083721 | 0.334884 | 0.060384 | 0.076853 | 0.031107 | 0.153705 | 0.10979 | 0.056725 | 0 | 0 | 0 | 0 | 0.008035 | 0.233912 | 1,787 | 47 | 88 | 38.021277 | 0.790358 | 0 | 0 | 0.04878 | 0 | 0 | 0.177677 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0.195122 | 0.097561 | 0.02439 | 0.268293 | 0.04878 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
28b0abafa42a841914c1cd31abc06fe8d8ca4704 | 4,342 | py | Python | app/core/migrations/0011_auto_20200903_2018.py | ig0r45ure/recipe-app-api | 0654102293d6e58c13c4b7520909eb6c0ddb45f2 | [
"MIT"
] | null | null | null | app/core/migrations/0011_auto_20200903_2018.py | ig0r45ure/recipe-app-api | 0654102293d6e58c13c4b7520909eb6c0ddb45f2 | [
"MIT"
] | null | null | null | app/core/migrations/0011_auto_20200903_2018.py | ig0r45ure/recipe-app-api | 0654102293d6e58c13c4b7520909eb6c0ddb45f2 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.10 on 2020-09-03 20:18
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('core', '0010_orgunit_is_hqunit'),
]
operations = [
migrations.CreateModel(
name='Action',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('in_process_step', models.PositiveSmallIntegerField()),
('name', models.CharField(max_length=255)),
('notes', models.TextField()),
('trigger', models.CharField(max_length=255)),
('effects', models.CharField(max_length=255)),
],
),
migrations.CreateModel(
name='BCMActivity',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255)),
('id_BIA', models.CharField(max_length=15)),
('MTPD', models.PositiveSmallIntegerField(choices=[(1, '4 godz.'), (2, '1 dzień'), (3, '2 dni'), (4, '1 tydzień'), (5, '2 tygodnie'), (6, 'do odwołania')])),
('min_recovery_level', models.TextField()),
('TTN', models.PositiveSmallIntegerField(choices=[(1, '4 godz.'), (2, '1 dzień'), (3, '2 dni'), (4, '1 tydzień'), (5, '2 tygodnie'), (6, 'do odwołania')])),
('performer', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='core.OrgUnit')),
],
),
migrations.CreateModel(
name='Procedure',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('version', models.CharField(max_length=5)),
('effective_date', models.DateField()),
('goal', models.CharField(max_length=255)),
('actions', models.ManyToManyField(to='core.Action')),
('developed_by', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='core.OrgUnit')),
],
),
migrations.CreateModel(
name='WorkTask',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('in_action_order', models.PositiveSmallIntegerField()),
('task', models.TextField()),
('participants', models.CharField(max_length=63)),
('applications', models.CharField(max_length=63)),
('action', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.Action')),
('performer', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='core.OrgUnit')),
],
),
migrations.RemoveField(
model_name='activity',
name='input',
),
migrations.RemoveField(
model_name='activity',
name='performer',
),
migrations.RemoveField(
model_name='activity',
name='process',
),
migrations.RemoveField(
model_name='activity',
name='product',
),
migrations.DeleteModel(
name='Product',
),
migrations.AlterField(
model_name='process',
name='type',
field=models.CharField(choices=[('Operacyjny', 'Operacyjny'), ('Zarządzania', 'Zarządzania'), ('Wsparcia', 'Wsparcia')], max_length=15),
),
migrations.DeleteModel(
name='Activity',
),
migrations.AddField(
model_name='procedure',
name='process',
field=models.ForeignKey(limit_choices_to={'is_megaprocess': False}, on_delete=django.db.models.deletion.CASCADE, to='core.Process'),
),
migrations.AddField(
model_name='bcmactivity',
name='process',
field=models.ForeignKey(limit_choices_to={'is_megaprocess': False}, on_delete=django.db.models.deletion.CASCADE, to='core.Process'),
),
]
| 44.306122 | 173 | 0.563335 | 416 | 4,342 | 5.745192 | 0.271635 | 0.062762 | 0.067782 | 0.090377 | 0.592887 | 0.537238 | 0.445188 | 0.445188 | 0.445188 | 0.427197 | 0 | 0.021201 | 0.283049 | 4,342 | 97 | 174 | 44.762887 | 0.746547 | 0.010594 | 0 | 0.527473 | 1 | 0 | 0.150908 | 0.005123 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021978 | 0 | 0.054945 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
28b3501d83808336463cf6f0c234c5e4a31d30f7 | 616 | py | Python | buildpack/databroker/config_generator/scripts/generators/stream.py | jobvs/cf-mendix-buildpack | 7df5585b5ac8550fd36d21c9d354d74489ff78c0 | [
"Apache-2.0"
] | 36 | 2015-01-22T16:28:55.000Z | 2021-12-28T10:26:10.000Z | buildpack/databroker/config_generator/scripts/generators/stream.py | jobvs/cf-mendix-buildpack | 7df5585b5ac8550fd36d21c9d354d74489ff78c0 | [
"Apache-2.0"
] | 208 | 2015-06-01T13:39:17.000Z | 2022-03-24T14:16:09.000Z | buildpack/databroker/config_generator/scripts/generators/stream.py | jobvs/cf-mendix-buildpack | 7df5585b5ac8550fd36d21c9d354d74489ff78c0 | [
"Apache-2.0"
] | 135 | 2015-01-17T14:47:22.000Z | 2022-03-07T08:20:18.000Z | import json
from buildpack.databroker.config_generator.scripts.utils import (
template_engine_instance,
)
def generate_config(config):
topologies = {"topologies": []}
env = template_engine_instance()
template = env.get_template("streaming_producer.json.j2")
for service in config.DataBrokerConfiguration.publishedServices:
for entity in service.entities:
renderedTemplate = template.render(entity=entity)
renderedTemplateAsJson = json.loads(renderedTemplate)
topologies["topologies"].append(renderedTemplateAsJson)
return json.dumps(topologies)
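generate_config renders one template per published entity and accumulates the parsed results. A stdlib-only sketch of the same render-parse-accumulate flow, with string.Template standing in for Jinja and a made-up topology shape:

```python
import json
from string import Template

# Stand-in for the streaming_producer.json.j2 template (shape is illustrative).
TOPOLOGY_TEMPLATE = Template('{"source": "$entity", "topic": "$entity-topic"}')

def generate_config(entities):
    topologies = {"topologies": []}
    for entity in entities:
        # Render the per-entity JSON snippet, parse it, and accumulate.
        rendered = TOPOLOGY_TEMPLATE.substitute(entity=entity)
        topologies["topologies"].append(json.loads(rendered))
    return json.dumps(topologies)

print(generate_config(["Customer"]))
# {"topologies": [{"source": "Customer", "topic": "Customer-topic"}]}
```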
| 34.222222 | 68 | 0.738636 | 59 | 616 | 7.576271 | 0.576271 | 0.06264 | 0.098434 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001969 | 0.175325 | 616 | 17 | 69 | 36.235294 | 0.877953 | 0 | 0 | 0 | 1 | 0 | 0.074675 | 0.042208 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
28bf7d0d23be8e4b1b9d6f09bd5feb7005ad0239 | 605 | py | Python | aldryn_video/migrations/0002_auto_20201028_1115.py | okfn/website | 7d2fc8e4b379c7cb7e47887acbc83d31d5f855b1 | [
"MIT"
] | 74 | 2016-06-27T17:06:44.000Z | 2022-03-20T19:42:07.000Z | aldryn_video/migrations/0002_auto_20201028_1115.py | okfn/website | 7d2fc8e4b379c7cb7e47887acbc83d31d5f855b1 | [
"MIT"
] | 370 | 2016-06-09T09:15:00.000Z | 2022-03-28T19:02:31.000Z | aldryn_video/migrations/0002_auto_20201028_1115.py | okfn/website | 7d2fc8e4b379c7cb7e47887acbc83d31d5f855b1 | [
"MIT"
] | 104 | 2016-06-09T15:16:02.000Z | 2022-03-12T13:14:10.000Z | # Generated by Django 2.2.16 on 2020-10-28 11:15
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        ('aldryn_video', '0001_initial'),
    ]

    operations = [
        migrations.AlterField(
            model_name='oembedvideoplugin',
            name='cmsplugin_ptr',
            field=models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, related_name='aldryn_video_oembedvideoplugin', serialize=False, to='cms.CMSPlugin'),
        ),
    ]
| 30.25 | 223 | 0.690909 | 70 | 605 | 5.814286 | 0.671429 | 0.058968 | 0.068796 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041152 | 0.196694 | 605 | 19 | 224 | 31.842105 | 0.796296 | 0.076033 | 0 | 0 | 1 | 0 | 0.174147 | 0.05386 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
28d40053c91fe46275c6c3999a48e5957247541a | 441 | py | Python | Leetcode/1000-2000/1865. Finding Pairs With a Certain Sum/1865.py | Next-Gen-UI/Code-Dynamics | a9b9d5e3f27e870b3e030c75a1060d88292de01c | [
"MIT"
] | null | null | null | Leetcode/1000-2000/1865. Finding Pairs With a Certain Sum/1865.py | Next-Gen-UI/Code-Dynamics | a9b9d5e3f27e870b3e030c75a1060d88292de01c | [
"MIT"
] | null | null | null | Leetcode/1000-2000/1865. Finding Pairs With a Certain Sum/1865.py | Next-Gen-UI/Code-Dynamics | a9b9d5e3f27e870b3e030c75a1060d88292de01c | [
"MIT"
] | null | null | null | class FindSumPairs:
def __init__(self, nums1: List[int], nums2: List[int]):
self.nums1 = nums1
self.nums2 = nums2
self.count2 = Counter(nums2)
def add(self, index: int, val: int) -> None:
self.count2[self.nums2[index]] -= 1
self.nums2[index] += val
self.count2[self.nums2[index]] += 1
def count(self, tot: int) -> int:
ans = 0
for num in self.nums1:
ans += self.count2[tot - num]
return ans
| 25.941176 | 57 | 0.614512 | 65 | 441 | 4.107692 | 0.369231 | 0.134831 | 0.157303 | 0.142322 | 0.187266 | 0.187266 | 0 | 0 | 0 | 0 | 0 | 0.053412 | 0.235828 | 441 | 16 | 58 | 27.5625 | 0.738872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
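The key observation in `count` above is that pairs summing to `tot` need only one `Counter` lookup per element of `nums1`; a standalone sketch using the inputs from the LeetCode 1865 example:

```python
from collections import Counter

nums1 = [1, 1, 2, 2, 2, 3]
nums2 = [1, 4, 5, 2, 5, 4]
count2 = Counter(nums2)

# For each x in nums1, every y in nums2 with x + y == tot forms a pair,
# so the answer is the sum of count2[tot - x] -- O(len(nums1)) per query.
tot = 7
ans = sum(count2[tot - x] for x in nums1)
```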
28d5c6f35f1b85abcf29350b3a9639b011883e3f | 1,073 | py | Python | tests/integration/test_map_failure.py | elin1231/htmap | b9c43ec1d86e90730210c3317409b75595061d91 | [
"Apache-2.0"
] | 21 | 2018-11-28T23:59:18.000Z | 2021-11-16T20:09:27.000Z | tests/integration/test_map_failure.py | elin1231/htmap | b9c43ec1d86e90730210c3317409b75595061d91 | [
"Apache-2.0"
] | 184 | 2018-09-24T03:30:19.000Z | 2021-06-29T01:01:34.000Z | tests/integration/test_map_failure.py | elin1231/htmap | b9c43ec1d86e90730210c3317409b75595061d91 | [
"Apache-2.0"
] | 6 | 2019-08-08T19:38:22.000Z | 2021-09-05T05:39:49.000Z | # Copyright 2018 HTCondor Team, Computer Sciences Department,
# University of Wisconsin-Madison, WI.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
from htmap import mapping
def test_exception_inside_submit_removes_map_dir(mocker, doubler):
    class Marker(Exception):
        pass

    def bad_execute_submit(*args, **kwargs):
        raise Marker()

    mocker.patch("htmap.mapping.execute_submit", bad_execute_submit)

    with pytest.raises(Marker):
        mapping.map(doubler, range(10))

    assert len(list(mapping.maps_dir_path().iterdir())) == 0
| 31.558824 | 74 | 0.744641 | 153 | 1,073 | 5.137255 | 0.673203 | 0.076336 | 0.033079 | 0.040712 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012387 | 0.172414 | 1,073 | 33 | 75 | 32.515152 | 0.872748 | 0.575023 | 0 | 0 | 0 | 0 | 0.063492 | 0.063492 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.181818 | false | 0.090909 | 0.181818 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
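The test above relies on pytest-mock's `mocker` fixture; the same patch-and-raise pattern can be sketched with the stdlib `unittest.mock` (the `Mapping` class here is illustrative, not htmap's real API):

```python
from unittest import mock

class Marker(Exception):
    pass

class Mapping:
    @staticmethod
    def execute_submit(*args, **kwargs):
        return "submitted"

def bad_execute_submit(*args, **kwargs):
    raise Marker()

# Inside the context manager the real function is replaced, so the
# failure path can be exercised deterministically; on exit it is restored.
with mock.patch.object(Mapping, "execute_submit", bad_execute_submit):
    try:
        Mapping.execute_submit()
        raised = False
    except Marker:
        raised = True

restored = Mapping.execute_submit() == "submitted"
```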
28d76e5d7b1046fecd900f32bd0b0c1ee1c410be | 2,910 | py | Python | s7_scalable_applications/exercise_files/lfw_dataset.py | fimselamse/dtu_mlops | 550291e60c691584e7765857405072ec7cefefff | [
"Apache-2.0"
] | null | null | null | s7_scalable_applications/exercise_files/lfw_dataset.py | fimselamse/dtu_mlops | 550291e60c691584e7765857405072ec7cefefff | [
"Apache-2.0"
] | null | null | null | s7_scalable_applications/exercise_files/lfw_dataset.py | fimselamse/dtu_mlops | 550291e60c691584e7765857405072ec7cefefff | [
"Apache-2.0"
] | null | null | null | """
LFW dataloading
"""
import argparse
import time
import numpy as np
import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms
import os
import glob
import matplotlib.pyplot as plt
class LFWDataset(Dataset):
    def __init__(self, path_to_folder: str, transform) -> None:
        self.imgs_path = path_to_folder
        file_list = glob.glob(self.imgs_path + "*")
        # print(file_list)
        self.data = []
        for class_path in file_list:
            class_name = class_path.split("\\")[-1]
            for img_path in glob.glob(class_path + "\\*.jpg"):
                self.data.append([img_path, class_name])
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index: int) -> torch.Tensor:
        entry = self.data[index]
        image = Image.open(entry[0])
        label = entry[1]
        return self.transform(image), label
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('-path_to_folder', default='lfw/', type=str)
parser.add_argument('-batch_size', default=1028, type=int)
parser.add_argument('-num_workers', default=0, type=int)
parser.add_argument('-visualize_batch', action='store_true')
parser.add_argument('-get_timing', action='store_true')
parser.add_argument('-batches_to_check', default=5, type=int)
args = parser.parse_args()
lfw_trans = transforms.Compose([
transforms.RandomAffine(5, (0.1, 0.1), (0.5, 2.0)),
transforms.ToTensor()
])
# Define dataset
dataset = LFWDataset(args.path_to_folder, lfw_trans)
# Define dataloader
dataloader = DataLoader(
dataset,
batch_size=args.batch_size,
shuffle=False,
num_workers=args.num_workers
)
if args.visualize_batch:
# TODO: visualize a batch of images
figure = plt.figure(figsize=(14, 8))
cols, rows = int(len(dataloader)/2), 2
batch = next(iter(dataloader))
images = batch[0]
labels = batch[1]
for i in range(1, cols * rows + 1):
img, label = images[i - 1], labels[i - 1]
figure.add_subplot(rows, cols, i)
plt.title(label)
plt.axis("off")
plt.imshow(img.permute(1,2,0), cmap="gray")
plt.savefig("visualization.jpg")
if args.get_timing:
# lets do some repetitions
res = [ ]
for _ in range(5):
start = time.time()
for batch_idx, batch in enumerate(dataloader):
if batch_idx > args.batches_to_check:
break
end = time.time()
res.append(end - start)
res = np.array(res)
print(f'Timing: {np.mean(res)}+-{np.std(res)}')
| 30.3125 | 68 | 0.590378 | 361 | 2,910 | 4.567867 | 0.351801 | 0.032747 | 0.061856 | 0.019406 | 0.06792 | 0.038811 | 0 | 0 | 0 | 0 | 0 | 0.015981 | 0.290378 | 2,910 | 95 | 69 | 30.631579 | 0.782567 | 0.042955 | 0 | 0 | 0 | 0 | 0.066715 | 0.010458 | 0 | 0 | 0 | 0.010526 | 0 | 1 | 0.042254 | false | 0 | 0.140845 | 0.014085 | 0.225352 | 0.014085 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
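A portability note on `LFWDataset.__init__` above: the `"\\"` splitting and the `"\\*.jpg"` glob only work on Windows. A `pathlib`-based sketch of the same indexing (assuming the usual `root/<class>/<image>.jpg` layout) is platform-neutral:

```python
from pathlib import Path

def index_images(root):
    # Same result as the glob/split loop above -- a list of
    # [image_path, class_name] pairs -- but portable across OSes.
    data = []
    for class_dir in sorted(Path(root).iterdir()):
        if class_dir.is_dir():
            for img_path in sorted(class_dir.glob("*.jpg")):
                data.append([str(img_path), class_dir.name])
    return data
```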
28e2f45cf26fbf816621e69ee7d6cc809d05de0c | 19,307 | py | Python | protocols/ddg_monomer_16/run_preminimization.py | Kortemme-Lab/ddg | 37d405af2dac41477c689e6e63d5f5c2b9f5a665 | [
"MIT-0",
"MIT"
] | 11 | 2015-05-07T13:05:52.000Z | 2021-07-18T03:37:47.000Z | protocols/ddg_monomer_16/run_preminimization.py | Kortemme-Lab/ddg | 37d405af2dac41477c689e6e63d5f5c2b9f5a665 | [
"MIT-0",
"MIT"
] | 5 | 2016-01-31T01:34:00.000Z | 2021-12-12T04:18:18.000Z | protocols/ddg_monomer_16/run_preminimization.py | Kortemme-Lab/ddg | 37d405af2dac41477c689e6e63d5f5c2b9f5a665 | [
"MIT-0",
"MIT"
] | 8 | 2016-01-08T19:22:16.000Z | 2021-07-18T03:37:48.000Z | #!/usr/bin/env python2
# The MIT License (MIT)
#
# Copyright (c) 2015 Kyle A. Barlow, Shane O'Connor
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
"""\
The script kicks off the preminmization step of the benchmark run using the minimize_with_cst application from the
Rosetta suite. The command lines used herein are intended to reproduce the protocol from row 16 of the original paper by Kellogg et al.:
Kellogg, EH, Leaver-Fay, A, Baker, D. Role of conformational sampling in computing mutation-induced changes in protein
structure and stability. 2011. Proteins. 79(3):830-8. doi: 10.1002/prot.22921.
Usage:
run_preminimization.py [options]...
Options:
-d --dataset DATASET
A filepath to the input dataset in JSON format. [default: ../../input/json/kellogg.json]
-o --output_directory OUTPUT_DIR
The path where output data will be created. Output will be created inside a time-stamped subfolder of this directory. [default: ./job_output]
--run_identifier RUN_ID
A suffix used to name the output directory.
--test
When this option is set, a shorter version of the benchmark will run with fewer input structures, less fewer DDG experiments, and fewer generated structures. This should be used to test the scripts but not for analysis.
--talaris2014
When this option is set, the talaris2014 score function will be used rather than the default score function. Warning: This option may break when talaris2014 becomes the default Rosetta score function.
--beta_july15
When this option is set, the July 2015 beta score function will be used rather than the default score function. Warning: This option may break when this score function is removed.
--beta_nov15
When this option is set, the November 2015 beta score function will be used rather than the default score function. Warning: This option may break when this score function is removed.
-p --parallel NUM_PROCESSORS
If this argument is set then the job setup will use NUM_PROCESSORS which will speed this step up. Otherwise, a single processor will be used. This should run on both Unix and Windows machines.
--maxp
This is a special case of --parallel. If this argument is set then the job setup will use as many processors as are available on the machine.
Authors:
Kyle Barlow
Shane O'Connor
"""
import sys
import os
import re
import shutil
import traceback
import time
import datetime
import inspect
multiprocessing_module_available = True
try:
import multiprocessing
except:
multiprocessing_module_available = False
import cPickle as pickle
import getpass
import rosetta.parse_settings
from rosetta.write_run_file import process as write_run_file
from analysis.libraries import docopt
from analysis.stats import read_file, write_file
try:
import json
except:
import simplejson as json
from rosetta.pdb import PDB, create_mutfile
from rosetta.basics import ChainMutation
from rosetta.input_files import Mutfile
task_subfolder = 'preminimization'
mutfiles_subfolder = 'mutfiles'
generated_scriptname = 'preminimization_step'
DEFAULT_NUMBER_OF_PROCESSORS_TO_USE = 1 # this can be overridden using the -p command
def create_input_files(job_dict, settings, pdb_dir_path, pdb_data_dir, mutfile_data_dir, keypair, dataset_cases, skip_if_exists = False):
'''Create the stripped PDB files and the mutfiles for the DDG step. Mutfiles are created at this point as we need the
original PDB to generate the residue mapping.
'''
# Read PDB
pdb_id = keypair[0]
chain = keypair[1]
pdb = PDB.from_filepath(pdb_dir_path)
stripped_pdb_path = os.path.join(pdb_data_dir, '%s_%s.pdb' % (pdb_id, chain))
# Strip the PDB to the list of chains. This also renumbers residues in the PDB for Rosetta.
chains = [chain]
pdb.strip_to_chains(chains)
pdb.strip_HETATMs()
stripped_pdb = PDB('\n'.join(pdb.lines))
# Check to make sure that we haven't stripped all the ATOM lines
if not [line for line in stripped_pdb.lines if line[0:4] == "ATOM"]:
raise Exception("No ATOM lines remain in the stripped PDB file %s." % stripped_pdb_path)
# Assert that there are no empty sequences
assert(sorted(stripped_pdb.atom_sequences.keys()) == sorted(chains))
for chain_id, sequence in stripped_pdb.atom_sequences.iteritems():
assert(len(sequence) > 0)
# Check for CSE and MSE
try:
if 'CSE' in stripped_pdb.residue_types:
raise Exception('This case contains a CSE residue which may (or may not) cause an issue with Rosetta depending on the version.')
elif 'MSE' in stripped_pdb.residue_types:
raise Exception('This case contains an MSE residue which may (or may not) cause an issue with Rosetta depending on the version.')
# It looks like MSE (and CSE?) may now be handled - https://www.rosettacommons.org/content/pdb-files-rosetta-format
except Exception, e:
print('%s: %s, chain %s' % (str(e), str(stripped_pdb.pdb_id), chain))
# Turn the lines array back into a valid PDB file
if not(skip_if_exists) or not(os.path.exists(stripped_pdb_path)):
write_file(stripped_pdb_path, '\n'.join(stripped_pdb.lines))
# Create the mapping between PDB and Rosetta residue numbering
# Note: In many Rosetta protocols, '-ignore_unrecognized_res' and '-ignore_zero_occupancy false' are used to allow
# Rosetta to work with structures with missing data and non-canonicals. In those cases, we should supply both flags
# in the string below. Since protocol 16 only uses '-ignore_unrecognized_res', we only use that flag below as otherwise
# we could break the mapping.
rosetta_scripts_bin = os.path.join(settings['local_rosetta_bin'], 'rosetta_scripts%s' % settings['rosetta_binary_type'])
rosetta_database_path = settings['local_rosetta_db_dir']
if not os.path.exists(rosetta_scripts_bin):
raise Exception('The Rosetta scripts executable "{0}" could not be found. Please check your configuration file.'.format(rosetta_database_path))
if not os.path.exists(rosetta_database_path):
raise Exception('The path to the Rosetta database "{0}" could not be found. Please check your configuration file.'.format(rosetta_database_path))
stripped_pdb.construct_pdb_to_rosetta_residue_map(rosetta_scripts_bin,rosetta_database_path, extra_command_flags = '-ignore_unrecognized_res')
atom_to_rosetta_residue_map = stripped_pdb.get_atom_sequence_to_rosetta_json_map()
rosetta_to_atom_residue_map = stripped_pdb.get_rosetta_sequence_to_atom_json_map()
# Save the PDB <-> Rosetta residue mappings to disk
write_file(os.path.join(pdb_data_dir, '%s_%s.rosetta2pdb.resmap.json' % (pdb_id, chain)), rosetta_to_atom_residue_map)
write_file(os.path.join(pdb_data_dir, '%s_%s.pdb2rosetta.resmap.json' % (pdb_id, chain)), atom_to_rosetta_residue_map)
# Assert that there are no empty sequences in the Rosetta-processed PDB file
total_num_residues = 0
d = json.loads(rosetta_to_atom_residue_map)
for chain_id in chains:
num_chain_residues = len([z for z in d.values() if z[0] == chain_id])
total_num_residues += num_chain_residues
assert(num_chain_residues > 0)
# Check that the mutated positions exist and that the wild-type matches the PDB
try:
for dataset_case in dataset_cases:
assert(dataset_case['PDBFileID'] == pdb_id)
# Note: I removed a hack here for 1AJ3->1U5P mapping
# The JSON file does not have the residue IDs in PDB format (5 characters including insertion code) so we need to repad them for the mapping to work
pdb_mutations = [ChainMutation(mutation['WildTypeAA'], PDB.ResidueID2String(mutation['ResidueID']), mutation['MutantAA'], Chain = mutation['Chain']) for mutation in dataset_case['Mutations']]
stripped_pdb.validate_mutations(pdb_mutations)
# Map the PDB mutations to Rosetta numbering which is used by the mutfile format
rosetta_mutations = stripped_pdb.map_pdb_residues_to_rosetta_residues(pdb_mutations)
if (len(rosetta_mutations) != len(pdb_mutations)) or (None in set([m.ResidueID for m in rosetta_mutations])):
raise Exception('An error occurred in the residue mapping code for DDG case: %s, %s' % (pdb_id, pdb_mutations))
# Create the mutfile
mutfile = Mutfile.from_mutagenesis(rosetta_mutations)
mutfilename = os.path.join(mutfile_data_dir, '%d.mutfile' % (dataset_case['RecordID']))
if os.path.exists(mutfilename):
raise Exception('%s already exists. Check that the RecordIDs in the JSON file are all unique.' % mutfilename)
write_file(os.path.join(mutfile_data_dir, '%d.mutfile' % (dataset_case['RecordID'])), str(mutfile))
except Exception, e:
print(str(e))
print(traceback.format_exc())
# Set up --in:file:l parameter
pdb_relpath = os.path.relpath(stripped_pdb_path, settings['output_dir'])
job_dict[os.path.join(task_subfolder, '_'.join(keypair))] = dict(input_file_list = [pdb_relpath])
sys.stdout.write('.'); sys.stdout.flush()
def single_job_pack(args):
print(args)
return single_job(*args)
def use_multiple_processors(settings, pdb_monomers, input_pdb_dir_path, pdb_data_dir, mutfile_data_dir, dataset_cases_by_pdb_chain, num_processors):
assert(multiprocessing_module_available)
pool = multiprocessing.Pool(processes = num_processors)#[, initializer[, initargs]]])
m = multiprocessing.Manager()
job_dict = m.dict()
pool_jobs = []
for keypair in pdb_monomers:
pdb_dir_path = os.path.join(input_pdb_dir_path, '%s.pdb' % keypair[0])
pool_jobs.append(pool.apply_async(create_input_files, (job_dict, settings, pdb_dir_path, pdb_data_dir, mutfile_data_dir, keypair, dataset_cases_by_pdb_chain[keypair])))
pool.close()
pool.join()
sys.stdout.write('\n')
return job_dict._getvalue()
def use_single_processor(settings, pdb_monomers, input_pdb_dir_path, pdb_data_dir, mutfile_data_dir, dataset_cases_by_pdb_chain):
job_dict = {}
for keypair in pdb_monomers:
pdb_dir_path = os.path.join(input_pdb_dir_path, '%s.pdb' % keypair[0])
create_input_files(job_dict, settings, pdb_dir_path, pdb_data_dir, mutfile_data_dir, keypair, dataset_cases_by_pdb_chain[keypair])
sys.stdout.write('\n')
return job_dict
if __name__ == '__main__':
import pprint
try:
arguments = docopt.docopt(__doc__.format(**locals()))
except Exception, e:
print('Failed while parsing arguments: %s.' % str(e))
sys.exit(1)
# Set the PDB input path
input_pdb_dir_path = '../../input/pdbs'
# Read the settings file
settings = rosetta.parse_settings.get_dict()
# Read in the dataset file
dataset_filepath = arguments['--dataset'][0]
dataset_filename = os.path.splitext(os.path.split(dataset_filepath)[1])[0]
if not os.path.exists(dataset_filepath):
raise Exception('The dataset file %s does not exist.' % dataset_filepath)
# Read in any parallel processing options
num_system_processors = 1
if multiprocessing_module_available:
num_system_processors = multiprocessing.cpu_count()
if arguments.get('--maxp'):
num_processors = num_system_processors
else:
num_processors = min(DEFAULT_NUMBER_OF_PROCESSORS_TO_USE, num_system_processors)
if arguments.get('--parallel'):
valid_options = [int(x) for x in arguments['--parallel'] if x.isdigit()]
if not valid_options:
raise Exception('None of the arguments to --parallel are valid. The argument must be an integer between 1 and the number of processors (%d).' % num_system_processors)
else:
num_processors = max(valid_options)
else:
# If the user has not specified the number of processors, only one is selected, and more exist then let them know that this process may run faster
if num_processors == 1 and num_system_processors > 1:
print('The setup is configured to use one processor but this machine has %d processors. The --parallel or --maxp options may make this setup run faster.' % num_system_processors)
if 1 > num_processors or num_processors > num_system_processors:
raise Exception('The number of processors must be an integer between 1 and %d.' % num_system_processors)
# Read the dataset from disk
try:
dataset = json.loads(read_file(dataset_filepath))
dataset_cases = dataset['data']
except Exception, e:
raise Exception('An error occurred parsing the JSON file: %s..' % str(e))
# Set the job directory name
job_name = '%s_%s_ddg_monomer_16' % (time.strftime("%y-%m-%d-%H-%M"), getpass.getuser())
if arguments.get('--run_identifier'):
job_name += '_' + arguments['--run_identifier'][0]
# Set the root output directory
root_output_directory = 'job_output'
if arguments.get('--output_directory'):
root_output_directory = arguments['--output_directory'][0]
if not os.path.exists(root_output_directory):
print('Creating directory %s:' % root_output_directory)
os.makedirs(root_output_directory)
# Set the job output directory
output_dir = os.path.join(root_output_directory, job_name) # The root directory for the protocol run
settings['output_dir'] = output_dir
try:
task_dir = os.path.join(output_dir, task_subfolder) # The root directory for preminization section of the protocol
output_data_dir = os.path.join(output_dir, 'data')
pdb_data_dir = os.path.join(output_data_dir, 'input_pdbs')
mutfile_data_dir = os.path.join(output_data_dir, mutfiles_subfolder)
for jobdir in [output_dir, task_dir, output_data_dir, pdb_data_dir, mutfile_data_dir]:
try: os.mkdir(jobdir)
except: pass
# Make a copy the dataset so that it can be automatically used by the following steps
shutil.copy(dataset_filepath, os.path.join(output_dir, 'dataset.json'))
# Count the number of datapoints per PDB chain
count_by_pdb_chain = {}
dataset_cases_by_pdb_chain = {}
job_dict = {}
for ddg_case in dataset_cases:
chains = set([r['Chain'] for r in ddg_case['Mutations']])
assert(len(chains) == 1)
chain = chains.pop()
pdb_id = ddg_case['PDBFileID']
keypair = (pdb_id, chain)
count_by_pdb_chain[keypair] = count_by_pdb_chain.get(keypair, 0)
count_by_pdb_chain[keypair] += 1
dataset_cases_by_pdb_chain[keypair] = dataset_cases_by_pdb_chain.get(keypair, [])
dataset_cases_by_pdb_chain[keypair].append(ddg_case)
# Create the list of PDB IDs and chains for the dataset
print('')
if arguments['--test']:
pdb_monomers = []
print('Creating test run input...')
num_cases = 0
for keypair, v in sorted(count_by_pdb_chain.iteritems(), key=lambda x:-x[1]):
if v <= 10:
pdb_monomers.append(keypair)
num_cases += v
if num_cases >= 20:
break
else:
pdb_monomers = sorted(count_by_pdb_chain.keys())
# Ensure all the input PDB files exist
for keypair in pdb_monomers:
pdb_path = os.path.join(input_pdb_dir_path, '%s.pdb' % keypair[0])
if not os.path.exists(pdb_path):
raise Exception('Error: The file %s is missing.' % pdb_path)
# Write job dict and setup self-contained data directory
extra_s = ''
if arguments['--talaris2014']:
extra_s = ' (using talaris2014)'
if arguments['--beta_july15']:
assert(not(extra_s))
extra_s = ' (using beta_july15)'
if arguments['--beta_nov15']:
assert(not(extra_s))
extra_s = ' (using beta_nov15)'
print('Creating benchmark input:%s' % extra_s)
if num_processors == 1:
job_dict = use_single_processor(settings, pdb_monomers, input_pdb_dir_path, pdb_data_dir, mutfile_data_dir, dataset_cases_by_pdb_chain)
else:
print('Setting up the preminimization data using %d processors.' % num_processors)
job_dict = use_multiple_processors(settings, pdb_monomers, input_pdb_dir_path, pdb_data_dir, mutfile_data_dir, dataset_cases_by_pdb_chain, num_processors)
with open(os.path.join(output_data_dir, 'job_dict.pickle'), 'w') as f:
pickle.dump(job_dict, f)
settings['numjobs'] = '%d' % len(pdb_monomers)
settings['mem_free'] = '3.0G'
settings['scriptname'] = generated_scriptname
settings['appname'] = 'minimize_with_cst'
settings['rosetta_args_list'] = [
'-in:file:fullatom', '-ignore_unrecognized_res',
'-fa_max_dis', '9.0', '-ddg::harmonic_ca_tether', '0.5',
'-ddg::constraint_weight', '1.0',
'-ddg::out_pdb_prefix', 'min_cst_0.5',
'-ddg::sc_min_only', 'false'
]
if arguments['--talaris2014']:
settings['rosetta_args_list'].extend(['-talaris2014', 'true'])
elif arguments['--beta_july15']:
settings['rosetta_args_list'].extend(['-beta_july15'])
elif arguments['--beta_nov15']:
settings['rosetta_args_list'].extend(['-beta_nov15'])
write_run_file(settings)
job_path = os.path.abspath(output_dir)
print('''Job files written to directory: %s.\n\nTo launch this job:
cd %s
python %s.py\n''' % (job_path, job_path, generated_scriptname))
except Exception, e:
print('\nAn exception occurred setting up the preminimization step: "%s".' % str(e))
sys.stdout.write('Removing the directory %s: ' % output_dir)
try:
shutil.rmtree(output_dir)
print('done.\n')
except Exception, e2:
print('failed.\n')
print(str(e2))
print(traceback.format_exc())
| 47.789604 | 227 | 0.696121 | 2,724 | 19,307 | 4.729075 | 0.202643 | 0.015215 | 0.013197 | 0.013197 | 0.26013 | 0.215805 | 0.174119 | 0.157662 | 0.143611 | 0.143611 | 0 | 0.009559 | 0.214326 | 19,307 | 403 | 228 | 47.908189 | 0.839673 | 0.169938 | 0 | 0.1417 | 0 | 0.016194 | 0.183154 | 0.011295 | 0 | 0 | 0 | 0 | 0.032389 | 0 | null | null | 0.012146 | 0.08502 | null | null | 0.072874 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
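The fan-out used in `use_multiple_processors` above — a `Manager` dict shared across workers, one `apply_async` per PDB/chain pair, then `close`/`join` — reduces to this sketch (the task payload here is illustrative, not the real PDB-stripping work):

```python
import multiprocessing

def create_input_files(job_dict, key):
    # Each worker writes its result into the shared proxy dict.
    job_dict[key] = {"input_file_list": ["%s.pdb" % key]}

def run(keys, num_processors=2):
    with multiprocessing.Manager() as manager:
        job_dict = manager.dict()
        pool = multiprocessing.Pool(processes=num_processors)
        jobs = [pool.apply_async(create_input_files, (job_dict, k)) for k in keys]
        pool.close()
        pool.join()
        for job in jobs:
            job.get()  # surface any exception raised in a worker
        return dict(job_dict)
```

Keeping the `AsyncResult` handles and calling `.get()` (as the original keeps `pool_jobs`) matters: `apply_async` otherwise swallows worker exceptions silently.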
28e8f42ac4dfe6b5ceb43f6d0973de03472548de | 1,564 | py | Python | lambda.py | diegonat/smsantispam | b43b6d91adda17eaf9f9969a56e9b247780424a1 | [
"Unlicense"
] | null | null | null | lambda.py | diegonat/smsantispam | b43b6d91adda17eaf9f9969a56e9b247780424a1 | [
"Unlicense"
] | null | null | null | lambda.py | diegonat/smsantispam | b43b6d91adda17eaf9f9969a56e9b247780424a1 | [
"Unlicense"
] | null | null | null | import ctypes
import os
from six.moves import urllib
import zipfile
import stat
import logging
import json
logging.basicConfig(level=logging.DEBUG)
print "logging"
def response(status_code, response_body):
return {
'statusCode': status_code,
'body': json.dumps(response_body) if response_body else json.dumps({}),
'headers': {
'Content-Type': 'application/json',
},
}
def vectorize_sequences(sequences, dimension):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
for d, _, files in os.walk('lib'):
for f in files:
if f.endswith('.a') or f.endswith('.settings'):
continue
print('loading %s...' % f)
ctypes.cdll.LoadLibrary(os.path.join(d, f))
import keras
from keras.preprocessing.text import one_hot
from numpy import array
import numpy as np
model = keras.models.load_model(os.environ['MODEL_NAME'])
def handler(event, context):
print event
#params = event['queryStringParameters']
sms = event['body']
print sms
sentence = one_hot(str(sms), 87413)
print sentence
sentences = [sentence]
vector = vectorize_sequences(sentences,87413)
result = model.predict(vector)
print "Verdict: ", result
print "Shape: ", result.shape
print "Type: ", type(result)
result = float(np.array2string(result)[2:-2])
print(result)
return response(200, result)
| 24.061538 | 87 | 0.638747 | 188 | 1,564 | 5.25 | 0.478723 | 0.036474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014419 | 0.246164 | 1,564 | 64 | 88 | 24.4375 | 0.822731 | 0.024936 | 0 | 0 | 0 | 0 | 0.078084 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.229167 | null | null | 0.1875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
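`vectorize_sequences` in the lambda handler above multi-hot encodes lists of token indices into a fixed-width matrix via NumPy fancy indexing; a tiny standalone run with a small dimension for clarity:

```python
import numpy as np

def vectorize_sequences(sequences, dimension):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        # Fancy indexing sets every listed column of row i to 1 at once.
        results[i, sequence] = 1.
    return results

encoded = vectorize_sequences([[0, 2], [1]], dimension=4)
```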
28ef25a57a700f30595e5f04cc7d76ad5118bbea | 839 | py | Python | api_blog/serializers.py | poya-kob/BiaBegard | b7387d4253dbae0650e61c686752dba342b0342e | [
"MIT"
] | null | null | null | api_blog/serializers.py | poya-kob/BiaBegard | b7387d4253dbae0650e61c686752dba342b0342e | [
"MIT"
] | null | null | null | api_blog/serializers.py | poya-kob/BiaBegard | b7387d4253dbae0650e61c686752dba342b0342e | [
"MIT"
] | 1 | 2021-11-07T15:56:04.000Z | 2021-11-07T15:56:04.000Z | from rest_framework.serializers import HyperlinkedModelSerializer, ModelSerializer
from rest_framework import serializers
from .models import Blog, BlogCategory, Comments
class CategorySerializer(ModelSerializer):
    class Meta:
        model = BlogCategory
        exclude = ['id']


class BlogListSerializer(HyperlinkedModelSerializer):
    category = CategorySerializer()

    class Meta:
        model = Blog
        exclude = ['active', 'text']


class CommentsSerializer(HyperlinkedModelSerializer):
    user = serializers.StringRelatedField()
    # blog = serializers.StringRelatedField()

    class Meta:
        model = Comments
        fields = "__all__"


class BlogDetailSerializer(ModelSerializer):
    comment = CommentsSerializer(many=True)

    class Meta:
        model = Blog
        exclude = ['active', 'category']
| 23.305556 | 82 | 0.710369 | 68 | 839 | 8.676471 | 0.441176 | 0.061017 | 0.094915 | 0.061017 | 0.105085 | 0.105085 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212157 | 839 | 35 | 83 | 23.971429 | 0.892587 | 0.046484 | 0 | 0.272727 | 0 | 0 | 0.041353 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.136364 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
28fba28b478f41583f672d13c923d196bf4c8069 | 1,875 | py | Python | {{cookiecutter.project_pypi_name}}/setup.py | dalthviz/spyder5-plugin-cookiecutter | b01a27ec9881d61585e3eb076090c4513b0b4ccb | [
"MIT"
] | null | null | null | {{cookiecutter.project_pypi_name}}/setup.py | dalthviz/spyder5-plugin-cookiecutter | b01a27ec9881d61585e3eb076090c4513b0b4ccb | [
"MIT"
] | null | null | null | {{cookiecutter.project_pypi_name}}/setup.py | dalthviz/spyder5-plugin-cookiecutter | b01a27ec9881d61585e3eb076090c4513b0b4ccb | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# ----------------------------------------------------------------------------
# Copyright © {% now "local", "%Y" %}, {{ cookiecutter.full_name }}
#
# Licensed under the terms of the {{ cookiecutter.open_source_license }}
# ----------------------------------------------------------------------------
"""
{{cookiecutter.project_name}} setup.
"""
from setuptools import find_packages
from setuptools import setup
from {{cookiecutter.project_package_name}} import __version__
setup(
# See: https://setuptools.readthedocs.io/en/latest/setuptools.html
name="{{cookiecutter.project_pypi_name}}",
version=__version__,
author="{{cookiecutter.full_name}}",
author_email="{{cookiecutter.email}}",
description="{{cookiecutter.project_short_description}}",
license="{{cookiecutter.open_source_license}}",
url="https://github.com/{{cookiecutter.github_username}}/{{cookiecutter.project_pypi_name}}",
install_requires=[
"qtpy",
"qtawesome",
"spyder>=5.0.1",
],
packages=find_packages(),
entry_points={
"spyder.plugins": [
"{{cookiecutter.project_package_name}} = {{cookiecutter.project_package_name}}.spyder.plugin:{{cookiecutter.project_name.replace(" ", "")}}"
],
},
classifiers=[
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Topic :: Scientific/Engineering",
],
)
| 36.764706 | 152 | 0.585067 | 165 | 1,875 | 6.448485 | 0.509091 | 0.142857 | 0.093985 | 0.097744 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007864 | 0.186133 | 1,875 | 50 | 153 | 37.5 | 0.688729 | 0.201067 | 0 | 0.081081 | 0 | 0.027027 | 0.578147 | 0.21231 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.081081 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
28fceab862c7c5376ee90277e625d7b88dbfe5bd | 1,257 | py | Python | mysuper/users/migrations/0004_assets_employee.py | oreon/mysuper | 6f0829c5c5b2e358effaf9e9d904be994668f602 | [
"BSD-3-Clause"
] | null | null | null | mysuper/users/migrations/0004_assets_employee.py | oreon/mysuper | 6f0829c5c5b2e358effaf9e9d904be994668f602 | [
"BSD-3-Clause"
] | null | null | null | mysuper/users/migrations/0004_assets_employee.py | oreon/mysuper | 6f0829c5c5b2e358effaf9e9d904be994668f602 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.1 on 2016-01-28 23:51
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('users', '0003_department'),
]
operations = [
migrations.CreateModel(
name='Assets',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=15, verbose_name='Name of Asset')),
('department', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='assets', to='users.Department')),
],
),
migrations.CreateModel(
name='Employee',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=255, verbose_name='Name of Employee')),
('department', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='employees', to='users.Department')),
],
),
]
| 38.090909 | 144 | 0.617343 | 136 | 1,257 | 5.558824 | 0.433824 | 0.042328 | 0.055556 | 0.087302 | 0.502646 | 0.502646 | 0.502646 | 0.502646 | 0.502646 | 0.502646 | 0 | 0.026261 | 0.242641 | 1,257 | 32 | 145 | 39.28125 | 0.767857 | 0.053302 | 0 | 0.4 | 1 | 0 | 0.122999 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.12 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
28ff9c7388f39a2df950981ba0286bf7c0b1594b | 1,448 | py | Python | rsgeo-py/tests/test_geometry.py | YuRiTan/geofunc | 1757f13ec3777cb63a6de151f2aa216b88ad12ee | [
"MIT"
] | 3 | 2020-12-15T15:17:31.000Z | 2021-03-24T14:00:12.000Z | rsgeo-py/tests/test_geometry.py | YuRiTan/rsgeo | 1757f13ec3777cb63a6de151f2aa216b88ad12ee | [
"MIT"
] | null | null | null | rsgeo-py/tests/test_geometry.py | YuRiTan/rsgeo | 1757f13ec3777cb63a6de151f2aa216b88ad12ee | [
"MIT"
] | null | null | null | import numpy as np
import pytest
from rsgeo.geometry import Polygon # noqa
class TestPolygon:
def setup_method(self):
self.p = Polygon([(0, 0), (1, 1), (1, 0), (0, 0)])
def test_repr(self):
str_repr = str(self.p)
exp = "Polygon([(0, 0), (1, 1), (1, 0), (0, 0)])"
assert str_repr == exp
def test_seq_to_2darray(self):
seq = [(1, 2), (3, 4)]
res = self.p._seq_to_2darray(seq)
np.testing.assert_array_equal(res, np.array([[1, 2], [3, 4]]))
def test_seq_to_2darray_sad_case(self):
seq = [(1, 2, 3), (4, 5, 6)]
with pytest.raises(ValueError):
_ = self.p._seq_to_2darray(seq)
@pytest.mark.parametrize("x, expected", [
(np.array([1, 2, 3]), np.array([1, 2, 3])),
(np.array([[1], [2], [3]]), np.array([1, 2, 3])),
])
def test_to_1d(self, x, expected):
result = self.p._to_1d(x)
np.testing.assert_array_equal(result, expected)
def test_to_1d_sad_case(self):
x = np.array([(1, 2, 3), (4, 5, 6)])
with pytest.raises(ValueError):
_ = self.p._to_1d(x)
def test_contains(self, xs, ys):
res = self.p.contains(xs, ys)
np.testing.assert_array_equal(res, np.array([False, False, False, True]))
def test_distance(self, xs, ys):
result = self.p.distance(xs, ys)
np.testing.assert_array_equal(result, np.array([0, 0, 1.4142135623730951, 0]))
| 31.478261 | 86 | 0.564917 | 226 | 1,448 | 3.442478 | 0.238938 | 0.051414 | 0.030848 | 0.069409 | 0.515424 | 0.447301 | 0.317481 | 0.275064 | 0.18509 | 0.14653 | 0 | 0.070436 | 0.254834 | 1,448 | 45 | 87 | 32.177778 | 0.650602 | 0.002762 | 0 | 0.057143 | 0 | 0.028571 | 0.036061 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.228571 | false | 0 | 0.085714 | 0 | 0.342857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e902f482d6efb5b457a1632dc627c36156be8e3d | 2,340 | py | Python | mfg_ais.py | taiwenko/python | c13c170405f4cac1d1c191413f8b61e21052d8b3 | [
"MIT"
] | null | null | null | mfg_ais.py | taiwenko/python | c13c170405f4cac1d1c191413f8b61e21052d8b3 | [
"MIT"
] | null | null | null | mfg_ais.py | taiwenko/python | c13c170405f4cac1d1c191413f8b61e21052d8b3 | [
"MIT"
] | 1 | 2019-08-30T04:00:34.000Z | 2019-08-30T04:00:34.000Z | #!/usr/bin/env python
# rffe functional test
# Author: TaiWen Ko
import xpf6020
import bk8500
import math
import os
from time import sleep
from blessings import Terminal
t = Terminal()
print "Accessing the Power Supply"
ps_path = '/dev/serial/by-id/usb-FTDI_FT232R_USB_UART_A403OT39-if00-port0'
ps = xpf6020.Xpf6020(ps_path)
ps.reset_ps()
ps.set_currentlimit(1,3)
print "Accessing the BK Electronic Load"
bkload = bk8500.Bk8500()
bkload.remote_switch('on')
def measure_check(vin,iin,ch_v,ch_i,ch_i_max):
ps.set_voltage(1,vin)
bkload.config_cc_mode(ch_i,ch_i_max)
bkload.load_switch('on')
ps.ind_output('1','on')
sleep(10)
[p_ch_v, p_ch_i] = ps.measure('1')
volt = float(p_ch_v.split("V")[0])
curr = float(p_ch_i.split("A")[0])
tolerance = 0.03
p_current_max = iin * (1 + tolerance)
p_current_min = iin * (1 - tolerance)
p_voltage_max = vin * (1 + tolerance)
p_voltage_min = vin * (1 - tolerance)
sleep(1)
if float(curr) > float(p_current_max):
result = t.bold_red('FAILED')
elif float(curr) < float(p_current_min):
result = t.bold_red('FAILED')
elif float(volt) > float(p_voltage_max):
result = t.bold_red('FAILED')
elif float(volt) < float(p_voltage_min):
result = t.bold_red('FAILED')
else:
result = t.bold_green('PASSED')
print ('UUT input ' + result + ' @ ' + str(volt) + 'V, ' + str(curr) + 'A')
[r_ch_v, r_ch_i] = bkload.read()
current_max = ch_i * (1 + tolerance)
current_min = ch_i * (1 - tolerance)
voltage_max = ch_v * (1 + tolerance)
voltage_min = ch_v * (1 - tolerance)
sleep(1)
if float(r_ch_i) > float(current_max):
result = t.bold_red('FAILED')
elif float(r_ch_i) < float(current_min):
result = t.bold_red('FAILED')
elif float(r_ch_v) > float(voltage_max):
result = t.bold_red('FAILED')
elif float(r_ch_v) < float(voltage_min):
result = t.bold_red('FAILED')
else:
result = t.bold_green('PASSED')
print ('UUT output ' + result + ' @ ' + str(r_ch_v) + 'V, ' + str(r_ch_i) + 'A')
# clean up
bkload.load_switch('off')
ps.all_output('off')
while True:
measure_check(24,1.15,5.25,4,5)
measure_check(10,2.75,5.25,4,5)
more = raw_input('AIS power test completed. Continue to next UUT? [y/N] ')
if more != 'y':
bkload.remote_switch('off')
    break
raw_input('\n\nPress Enter to close.')
| 24.893617 | 82 | 0.664957 | 392 | 2,340 | 3.737245 | 0.30102 | 0.024573 | 0.075085 | 0.076451 | 0.342662 | 0.316724 | 0.275085 | 0.275085 | 0.275085 | 0.227986 | 0 | 0.040021 | 0.177778 | 2,340 | 93 | 83 | 25.16129 | 0.721414 | 0.02906 | 0 | 0.205882 | 0 | 0 | 0.13933 | 0.027337 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.029412 | 0.088235 | null | null | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e9078aa96127e3ac39e935b2dad6eb75ac8378b3 | 1,488 | py | Python | src/pyats/atom.py | CDonnerer/PyATS | 7db098c307ededa63c1a54c0ac1f2036ad35f7ac | [
"MIT"
] | null | null | null | src/pyats/atom.py | CDonnerer/PyATS | 7db098c307ededa63c1a54c0ac1f2036ad35f7ac | [
"MIT"
] | null | null | null | src/pyats/atom.py | CDonnerer/PyATS | 7db098c307ededa63c1a54c0ac1f2036ad35f7ac | [
"MIT"
] | null | null | null | import os
import io
import numpy as np
class Atom(object):
def __init__(self, element, wpos, rpos):
"""
:param element: Periodic symbol
:param wpos: Wyckoff position
:param rpos: Symmetry-equivalent positions
"""
self.element = element
self.wpos = wpos
self.pos = rpos
self.a, self.b, self.c = self.read_formfactor()
def __repr__(self):
n = len(self.pos)
s = str(n) + " " + self.element + " at r = " + str(self.wpos)
return s
def read_formfactor(self):
"""
Read-in atomic form factor parameters from file
:return:
"""
fn = os.path.join(os.path.dirname(__file__), 'dat_files' + os.sep + 'formfactor.dat')
# TODO: check file. works for Python 2,3 c.f. symmetry
f = io.open(fn, mode="r", encoding="utf-8")
#f = open(fn, 'r')
for line in f:
if self.element in line:
ff_str = line[0:]
break
f.close()
a = np.zeros(4)
b = np.zeros(4)
tokens = ff_str.split()
j = 1
for i in range(0, 4):
a[i] = tokens[j]
b[i] = tokens[j + 1]
j += 2
c = float(tokens[-1])
return a, b, c
def formfactor(self, sinThetaOverLam):
bdsquare = np.exp(np.outer(-self.b,sinThetaOverLam ** 2))
ff = np.einsum('i,ij->j',self.a,bdsquare)+self.c
return ff | 25.655172 | 93 | 0.509409 | 199 | 1,488 | 3.723618 | 0.427136 | 0.059379 | 0.021592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013655 | 0.360215 | 1,488 | 58 | 94 | 25.655172 | 0.764706 | 0.160618 | 0 | 0 | 0 | 0 | 0.0382 | 0 | 0 | 0 | 0 | 0.017241 | 0 | 1 | 0.114286 | false | 0 | 0.085714 | 0 | 0.314286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e90881a24f0e4b298f98b37ccecb32404b6dd599 | 317 | py | Python | config.py | dunossauro/Aul-o_de_flask | 10faaa5d28cd57db62c88975f6fbb675c84566b6 | [
"MIT"
] | 3 | 2019-02-02T21:32:05.000Z | 2021-09-10T03:53:23.000Z | config.py | bergpb/Aulao_de_flask | 10faaa5d28cd57db62c88975f6fbb675c84566b6 | [
"MIT"
] | null | null | null | config.py | bergpb/Aulao_de_flask | 10faaa5d28cd57db62c88975f6fbb675c84566b6 | [
"MIT"
] | 3 | 2019-02-03T00:52:01.000Z | 2022-01-28T16:18:36.000Z | from os import path
dirname = path.dirname(__file__)
class Config:
# SQLALCHEMY_TRACK_MODIFICATIONS = False
SQLALCHEMY_DATABASE_URI = f'sqlite:///{dirname}/db.sqlite3'
BATATINHAS = 3
class development(Config):
DEBUG = True
SQLALCHEMY_ECHO = True
class production(Config):
DEBUG = False
| 16.684211 | 63 | 0.712934 | 37 | 317 | 5.864865 | 0.675676 | 0.101382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007874 | 0.198738 | 317 | 18 | 64 | 17.611111 | 0.846457 | 0.119874 | 0 | 0 | 0 | 0 | 0.108303 | 0.108303 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.9 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e908a487be31f4fbd900fc21345b41355b86ef3c | 295 | py | Python | L2/type_var.py | thebestday/python | 2efb7fbd5c4ee40c03233875c1989ce68aa0fe18 | [
"MIT"
] | null | null | null | L2/type_var.py | thebestday/python | 2efb7fbd5c4ee40c03233875c1989ce68aa0fe18 | [
"MIT"
] | null | null | null | L2/type_var.py | thebestday/python | 2efb7fbd5c4ee40c03233875c1989ce68aa0fe18 | [
"MIT"
] | null | null | null | # Типы переменных
# Марка
name = 'ford'
print(name, type(name)) # class string
# Age
age = 3
print(age, type(age)) # class int
# Engine displacement
engine_volume = 1.6
print(engine_volume, type(engine_volume)) # class float
# Has a sunroof
see_sky = False
print(see_sky, type(see_sky)) # class bool
| 19.666667 | 55 | 0.718644 | 48 | 295 | 4.291667 | 0.541667 | 0.174757 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012097 | 0.159322 | 295 | 14 | 56 | 21.071429 | 0.818548 | 0.335593 | 0 | 0 | 0 | 0 | 0.021622 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
e9096a057ec0bf10108ed9746f8fa3b52e6dbf50 | 850 | py | Python | venmo-sim/metis/metis_verify.py | akatsarakis/tx_benchmarking | f8233e58bba3f4fb54d82273d7ca8631bae36ebc | [
"MIT"
] | 3 | 2020-07-07T17:08:41.000Z | 2022-01-10T19:25:46.000Z | venmo-sim/metis/metis_verify.py | akatsarakis/tx_benchmarking | f8233e58bba3f4fb54d82273d7ca8631bae36ebc | [
"MIT"
] | null | null | null | venmo-sim/metis/metis_verify.py | akatsarakis/tx_benchmarking | f8233e58bba3f4fb54d82273d7ca8631bae36ebc | [
"MIT"
] | null | null | null | # Generate some graph to verify the quality of Metis' partitioning
# A graph with 300 vertices, vertices 0~99, 100~199, 200~299 form
# a complete graph respectively
# and add some random disturbing edges
import random
import os
os.environ["METIS_DLL"] = "/usr/local/lib/libmetis.so"
import metis
import networkx as nx
dg = nx.MultiDiGraph()
def gen_complete_graph(low, high): # [low, high)
for l in range(low, high):
for r in range(low, high):
if l != r:
dg.add_edge(l, r)
gen_complete_graph(0, 100)
gen_complete_graph(100, 200)
gen_complete_graph(200, 300)
for _ in range(100):
l = random.randint(0, 299)
r = random.randint(0, 299)
dg.add_edge(l, r)
(edgecuts, parts) = metis.part_graph(dg, 3) # num of shards
fp = open("clustered_verify_metis.txt", "w")
print(parts, file=fp)
fp.close()
| 25.757576 | 66 | 0.681176 | 140 | 850 | 4.028571 | 0.485714 | 0.115248 | 0.113475 | 0.049645 | 0.039007 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067548 | 0.198824 | 850 | 32 | 67 | 26.5625 | 0.760646 | 0.261176 | 0 | 0.090909 | 1 | 0 | 0.099839 | 0.083736 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.181818 | 0 | 0.227273 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e9159b31b9985e5bbd3247337c4a16d116e33030 | 1,678 | py | Python | unit_bot/bot.py | trimailov/unit-conversion-bot | 9b6533a4142a179b6eca89eb5860bebeadcf8128 | [
"MIT"
] | 1 | 2017-07-14T07:46:58.000Z | 2017-07-14T07:46:58.000Z | unit_bot/bot.py | trimailov/unit-bot | 9b6533a4142a179b6eca89eb5860bebeadcf8128 | [
"MIT"
] | null | null | null | unit_bot/bot.py | trimailov/unit-bot | 9b6533a4142a179b6eca89eb5860bebeadcf8128 | [
"MIT"
] | null | null | null | import logging
import praw
from unit_bot import creds
from unit_bot.finder import Finder
def scan_and_respond(reply=False, sub='test_unitbot'):
logging.basicConfig(filename="info.log", level=logging.INFO)
r = praw.Reddit(user_agent=creds.USER_AGENT)
r.login(creds.USERNAME, creds.PASSWORD, disable_warning=True)
del creds.PASSWORD
last_comment = ''
while True:
for comment in praw.helpers.comment_stream(r, sub):
finder = Finder(comment.body)
# do not reply to yourself
if comment.author.name == creds.USERNAME:
continue
# if we found units to convert and comment is later than the
# last one we replied to, then try to reply
if finder.units and comment.id > last_comment and reply:
last_comment = comment.id
info_str = ("Comment body: {}\n"
"Finder: {}").format(comment.body, finder.units)
# in case we try to reply to unreplieable comment
# e.g. comment got deleted, is to old or etc.
# just ignore error then and pass on
try:
comment.reply(finder.generate_conversion_message())
success_str = "Found and replied\n\n"
logging.info(success_str + info_str)
except praw.errors.APIException as e:
fail_str = "Exception: {}".format(e)
logging.info(fail_str)
except praw.errors.HTTPException as e:
fail_str = "Exception: {}".format(e)
logging.info(fail_str)
| 37.288889 | 76 | 0.577473 | 203 | 1,678 | 4.665025 | 0.44335 | 0.046463 | 0.023231 | 0.040127 | 0.092925 | 0.092925 | 0.092925 | 0.092925 | 0.092925 | 0.092925 | 0 | 0 | 0.340882 | 1,678 | 44 | 77 | 38.136364 | 0.856239 | 0.150179 | 0 | 0.137931 | 0 | 0 | 0.066949 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0.068966 | 0.137931 | 0 | 0.172414 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e91c455f24cbf392babfd705c959392c0af62827 | 2,350 | py | Python | spkcspider/apps/spider_webcfg/models.py | devkral/spkbspider | 97e448b4da412acebd66c4469c7fcdd07bf90ed2 | [
"MIT"
] | 5 | 2019-06-24T14:15:54.000Z | 2021-05-14T23:16:31.000Z | spkcspider/apps/spider_webcfg/models.py | devkral/spkbspider | 97e448b4da412acebd66c4469c7fcdd07bf90ed2 | [
"MIT"
] | 2 | 2018-06-19T09:56:18.000Z | 2018-11-20T12:02:44.000Z | spkcspider/apps/spider_webcfg/models.py | devkral/spkbspider | 97e448b4da412acebd66c4469c7fcdd07bf90ed2 | [
"MIT"
] | null | null | null | __all__ = ["WebConfig"]
from django.urls import reverse
from django.utils.translation import pgettext
from spkcspider.apps.spider.models import DataContent
from spkcspider.apps.spider import registry
from spkcspider.constants import ActionUrl, VariantType
from spkcspider.utils.fields import add_by_field
# from spkcspider.apps.spider.models.base import BaseInfoModel
@add_by_field(registry.contents, "_meta.model_name")
class WebConfig(DataContent):
expose_name = False
appearances = [
{
# only one per domain
"name": "WebConfig",
"ctype": (
VariantType.unique + VariantType.component_feature +
VariantType.persist
),
"strength": 0
},
{
# only one per domain, has life time, don't require user permission
"name": "TmpConfig",
"ctype": (
VariantType.unique + VariantType.component_feature +
VariantType.domain_mode
),
"strength": 5
}
]
class Meta:
proxy = True
@classmethod
def localize_name(cls, name):
_ = pgettext
if name == "TmpConfig":
return _("content name", "Temporary Web Configuration")
else:
return _("content name", "Web Configuration")
def get_content_name(self):
return self.associated.attached_to_token.referrer.url[:255]
@classmethod
def feature_urls(cls, name):
return [
ActionUrl("webcfg", reverse("spider_webcfg:webconfig-view"))
]
def get_size(self, prepared_attachements=None):
# ensure space for at least 100 bytes (for free)
ret = super().get_size(prepared_attachements)
# hacky, use default values, TODO: improve
ret = max(255, ret - 255 + 100)
return ret
def get_priority(self):
# low priority
return -10
def get_form(self, scope):
from .forms import WebConfigForm as f
return f
def get_info(self):
# persistent tokens automatically enforce uniqueness
ret = super().get_info(
unique=(self.associated.attached_to_token.persist < 0)
)
return "{}url={}\x1e".format(
ret, self.associated.attached_to_token.referrer.url
)
| 29.375 | 79 | 0.608936 | 248 | 2,350 | 5.625 | 0.471774 | 0.050179 | 0.03871 | 0.051613 | 0.207168 | 0.143369 | 0.143369 | 0 | 0 | 0 | 0 | 0.012797 | 0.301702 | 2,350 | 79 | 80 | 29.746835 | 0.837294 | 0.126809 | 0 | 0.133333 | 0 | 0 | 0.097847 | 0.013699 | 0 | 0 | 0 | 0.012658 | 0 | 1 | 0.116667 | false | 0 | 0.116667 | 0.05 | 0.433333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e91d3c242ca92c398ab32e41497a3a60b82c54cf | 1,273 | py | Python | AutoGrade/forms.py | bilalzaib/AutoGrader | 40767cfc3351fde6948a4381eadabc3bf52ce8a6 | [
"MIT"
] | 23 | 2019-07-04T02:43:32.000Z | 2022-01-08T04:04:39.000Z | AutoGrade/forms.py | adroit-404/AutoGrader | 40767cfc3351fde6948a4381eadabc3bf52ce8a6 | [
"MIT"
] | 69 | 2017-08-29T13:34:53.000Z | 2019-04-04T07:08:38.000Z | AutoGrade/forms.py | adroit-404/AutoGrader | 40767cfc3351fde6948a4381eadabc3bf52ce8a6 | [
"MIT"
] | 7 | 2017-09-01T15:23:17.000Z | 2019-06-10T12:41:14.000Z | from django import forms
from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.models import User
from django.forms import ModelForm
from .models import Course, Submission, Assignment
class SignUpForm(UserCreationForm):
def clean_email(self):
email = self.cleaned_data.get('email')
username = self.cleaned_data.get('username')
if email and User.objects.filter(email=email).exclude(username=username).exists():
raise forms.ValidationError(u'Email addresses must be unique.')
return email
class Meta:
model = User
fields = ('username', 'first_name', 'last_name', 'email' , 'password1', 'password2', )
class EnrollForm(forms.Form):
secret_key = forms.CharField(
widget=forms.TextInput(attrs={'placeholder': 'ABCXYZ12'}),
label='Secret Key',
required=False)
class Meta:
fields = ('secret_key')
class ChangeEmailForm(forms.Form):
email = forms.EmailField()
def clean_email(self):
email = self.cleaned_data.get('email')
if email and User.objects.filter(email=email).exists():
raise forms.ValidationError(u'That email is already used.')
return email
class Meta:
fields = ('email')
| 31.825 | 94 | 0.672427 | 149 | 1,273 | 5.684564 | 0.42953 | 0.047226 | 0.053129 | 0.063754 | 0.269185 | 0.193625 | 0.193625 | 0.193625 | 0.106257 | 0.106257 | 0 | 0.004004 | 0.21524 | 1,273 | 39 | 95 | 32.641026 | 0.843844 | 0 | 0 | 0.290323 | 0 | 0 | 0.133648 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0.032258 | 0.16129 | 0 | 0.548387 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e91fec1f58f4c085f105abf558b5dd061e0c06d4 | 5,712 | py | Python | ThingsConnectorTests/ThingsConnectorBaseTests.py | huvermann/MyPiHomeAutomation | dcda589e82456bb34d3bbbfdcb45ec1066f3d2f8 | [
"MIT"
] | null | null | null | ThingsConnectorTests/ThingsConnectorBaseTests.py | huvermann/MyPiHomeAutomation | dcda589e82456bb34d3bbbfdcb45ec1066f3d2f8 | [
"MIT"
] | null | null | null | ThingsConnectorTests/ThingsConnectorBaseTests.py | huvermann/MyPiHomeAutomation | dcda589e82456bb34d3bbbfdcb45ec1066f3d2f8 | [
"MIT"
] | null | null | null | import unittest
import sys
import json
sys.path.append("..\ThingsConnector")
from ThingsConnectorBase import ThingsConectorBase
#from .. import module # Import module from a higher directory.
from ThingsItemBase import ThingsItemBase
class FunctionMock(object):
    def __init__(self):
self.mockCalled = False
self.dummyCalled = False
self.mockSendHandwareInfoCalled = False
return super(FunctionMock, self).__init__()
def mockHandleUIMessage(self, ws, message):
self.mockCalled = True
def mockRefresh(self):
self.mockCalled = True
def mockDummy(self):
self.dummyCalled = True
pass
def mockSendHandwareInfo(self):
self.mockSendHandwareInfoCalled = True
class WebsocketMock(object):
def send(self, msg):
self.message = msg
pass
class Test_ThingsConnectorBaseTests(unittest.TestCase):
def test_authHardwareSendsMessage(self):
wsmock = WebsocketMock()
connector = ThingsConectorBase("testid", "node", "descr", 1)
connector.ws = wsmock
connector.authHardware()
self.assertTrue(wsmock.message == '{"data": {"nodeid": "testid", "key": "secretkey"}, "messagetype": "authHardware"}')
def test_MessageTypeLogonResult_setsAuthenticated(self):
"""Checks if authenticated property is set to true"""
mock = FunctionMock()
message = {"messagetype" : "LogonResult", "data" : {"success": True}}
connector = ThingsConectorBase("testid", "node", "descr", 1)
connector.cutConnection = mock.mockDummy
self.assertFalse(connector.authenticated)
connector.sendNodeInfo = mock.mockSendHandwareInfo
connector.parseJsonMessage(None, message)
self.assertTrue(connector.authenticated)
def test_MessageTypeLogonResult_cuts_Connection(self):
"""Checks if message LogonResult calls cut connection if authentication fails."""
mock = FunctionMock()
message = {"messagetype" : "LogonResult", "data" : {"success": False}}
connector = ThingsConectorBase("testid", "node", "descr", 1)
connector.cutConnection = mock.mockDummy
self.assertFalse(connector.authenticated)
connector.parseJsonMessage(None, message)
# check if authenticated is false
self.assertFalse(connector.authenticated)
# check if cutConnection() was called
self.assertTrue(mock.dummyCalled)
def test_HandleLogonResultCalls_SendHardwareInfo(self):
"""Checks if handleLogonResult calls sendHardwareInfo()."""
mock = FunctionMock()
connector = ThingsConectorBase("testid", "node", "descr", 1)
message = {"messagetype" : "LogonResult", "data" : {"success": True}}
connector.cutConnection = mock.mockDummy
connector.sendNodeInfo = mock.mockSendHandwareInfo
self.assertFalse(mock.mockSendHandwareInfoCalled)
connector.handleLogonResult(None, message)
self.assertTrue(mock.mockSendHandwareInfoCalled)
def test_MessageTypeUIMessage(self):
"""Checks if UI-Message calls handleUiMessage."""
mock = FunctionMock()
connector = ThingsConectorBase("testid", "node", "descr", 1)
connector.cutConnection = mock.mockDummy
message = {"messagetype" : "UI-Message", "data" : ""}
connector.handleUIMessage = mock.mockHandleUIMessage
self.assertFalse(mock.mockCalled)
connector.parseJsonMessage(None, message)
self.assertTrue(mock.mockCalled)
def test_MessageTypeRefresh(self):
"""Checks if Refresh message calls prepareRefresh"""
mock = FunctionMock()
message = {"messagetype" : "Refresh", "data" : ""}
connector = ThingsConectorBase("testid", "node", "descr", 1)
connector.prepareRefresh = mock.mockRefresh
connector.parseJsonMessage(None, message)
self.assertTrue(mock.mockCalled)
def test_sendNodeInfo(self):
"""Checks if sendNodeInfo works."""
mock = FunctionMock()
wsmock = WebsocketMock()
connector = ThingsConectorBase("nodeid", "nodename", "nodedescr", 1)
connector.ws = wsmock
testItem = ThingsItemBase("testid", "type1", "descr")
connector.addItem(testItem)
connector.sendNodeInfo()
message = json.loads(wsmock.message.encode('utf-8'))
self.assertTrue(message["messagetype"] == 'nodeinfo')
def test_sendNodeInfoSendsNodeId(self):
mock = FunctionMock()
wsmock = WebsocketMock()
connector = ThingsConectorBase("nodeid", "nodename", "nodedescr", 1)
connector.ws = wsmock
testItem = ThingsItemBase("testid", "type1", "descr")
connector.addItem(testItem)
connector.sendNodeInfo()
message = json.loads(wsmock.message.encode('utf-8'))
data = message["data"]
self.assertEqual(data['nodeid'], "nodeid")
self.assertEqual(data['description'], "nodedescr")
def test_sendNodeInfoSendsHardwareInfo(self):
mock = FunctionMock()
wsmock = WebsocketMock()
connector = ThingsConectorBase("nodeid", "nodename", "nodedescr", 1)
connector.ws = wsmock
testItem = ThingsItemBase("testid", "type1", "descr")
connector.addItem(testItem)
connector.sendNodeInfo()
message = json.loads(wsmock.message.encode('utf-8'))
hardware = message["data"]["hardwareinfo"][0]
self.assertEqual(hardware['id'], "testid")
self.assertEqual(hardware['type'], "type1")
self.assertEqual(hardware['description'], "descr")
if __name__ == '__main__':
unittest.main() | 37.578947 | 126 | 0.660539 | 499 | 5,712 | 7.503006 | 0.226453 | 0.016827 | 0.052885 | 0.059295 | 0.421474 | 0.413729 | 0.400374 | 0.327457 | 0.307425 | 0.307425 | 0 | 0.003832 | 0.223389 | 5,712 | 152 | 127 | 37.578947 | 0.839946 | 0.022584 | 0 | 0.517857 | 0 | 0.008929 | 0.110286 | 0 | 0 | 0 | 0 | 0 | 0.151786 | 0 | null | null | 0.017857 | 0.044643 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e92610c7bf59b3830b9cc455a09124e0dc3f913c | 2,633 | py | Python | src/pyapp/conf/helpers/bases.py | pyapp-org/pyapp | 46c9e8e3dbf1a872628fad8224b99b458972c92c | [
"BSD-3-Clause"
] | 5 | 2020-01-09T14:46:33.000Z | 2021-04-23T15:25:11.000Z | src/pyapp/conf/helpers/bases.py | pyapp-org/pyapp | 46c9e8e3dbf1a872628fad8224b99b458972c92c | [
"BSD-3-Clause"
] | 111 | 2019-06-30T05:45:57.000Z | 2022-03-28T11:15:29.000Z | src/pyapp/conf/helpers/bases.py | pyapp-org/pyapp | 46c9e8e3dbf1a872628fad8224b99b458972c92c | [
"BSD-3-Clause"
] | 2 | 2019-05-29T09:01:10.000Z | 2021-04-23T15:25:32.000Z | """
Conf Helper Bases
~~~~~~~~~~~~~~~~~
"""
import abc
import threading
from abc import ABCMeta
from typing import Any
from typing import Generic
from typing import TypeVar
class DefaultCache(dict):
"""
Very similar to :py:class:`collections.defaultdict` (using __missing__)
however passes the specified key to the default factory method.
"""
__slots__ = ("default_factory",)
def __init__(self, default_factory=None, **kwargs):
super().__init__(**kwargs)
self.default_factory = default_factory
def __missing__(self, key: Any):
if not self.default_factory:
raise KeyError(key)
self[key] = value = self.default_factory(key)
return value
FT = TypeVar("FT")
class FactoryMixin(Generic[FT], metaclass=ABCMeta):
"""
    Mixin to provide a factory interface
"""
__slots__ = ()
@abc.abstractmethod
def create(self, name: str = None) -> FT:
"""
Create an instance based on a named setting.
"""
class SingletonFactoryMixin(FactoryMixin[FT], metaclass=ABCMeta):
""""
Mixin that provides a single named instance.
This instance factory type is useful for instance types that only require
a single instance eg database connections, web service agents.
If your instance types are not thread safe it is recommended that the
:py:class:`ThreadLocalSingletonFactoryMixin` is used.
"""
__slots__ = ("_instances",)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._instances = DefaultCache(self.create)
instances_lock = threading.RLock()
def create_wrapper(name: str = None) -> FT:
with instances_lock:
return self._instances[name]
self.create = create_wrapper
class ThreadLocalSingletonFactoryMixin(FactoryMixin[FT], metaclass=ABCMeta):
"""
Mixin that provides a single named instance per thread.
This instance factory type is useful for instance types that only require
a single instance eg database connections, web service agents and that are
not thread safe.
"""
__slots__ = ("_instances",)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._instances = threading.local()
create = self.create
def create_wrapper(name: str = None) -> FT:
try:
cache = self._instances.cache
except AttributeError:
cache = self._instances.cache = DefaultCache(create)
return cache[name]
self.create = create_wrapper
| 25.563107 | 78 | 0.649449 | 295 | 2,633 | 5.569492 | 0.342373 | 0.059647 | 0.043822 | 0.023737 | 0.370055 | 0.337188 | 0.337188 | 0.301887 | 0.301887 | 0.301887 | 0 | 0 | 0.254083 | 2,633 | 102 | 79 | 25.813725 | 0.836558 | 0.298899 | 0 | 0.227273 | 0 | 0 | 0.021499 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.159091 | false | 0 | 0.136364 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e93d64745869b8f1fc177907b23ee5d3d57f1636 | 1,045 | py | Python | ArkDiscordBot/bot_commands/ark.py | Jackybeat/ArkDiscordBot | f902d024e6f90540e2d15dc3d1b3f220998ea182 | [
"MIT"
] | 1 | 2018-08-20T14:11:39.000Z | 2018-08-20T14:11:39.000Z | ArkDiscordBot/bot_commands/ark.py | Jackybeat/ArkDiscordBot | f902d024e6f90540e2d15dc3d1b3f220998ea182 | [
"MIT"
] | 1 | 2018-03-17T19:00:55.000Z | 2018-06-03T14:38:45.000Z | ArkDiscordBot/bot_commands/ark.py | Jackybeat/ArkDiscordBot | f902d024e6f90540e2d15dc3d1b3f220998ea182 | [
"MIT"
] | 1 | 2018-08-20T14:11:46.000Z | 2018-08-20T14:11:46.000Z | # -*- coding: utf-8 -*-
'''
Created on 2 March 2017
@author: Jacky
'''
import logging
from discord.ext import commands
from ArkDiscordBot.apps import bot
from ArkDiscordBot.discord.utils import parse_context
logger = logging.getLogger('BOT.{}'.format(__name__))
class Commons:
@commands.command(pass_context=True, name='test2', brief='Test the bot.')
async def test(self, ctx):
"""
Test command. Count the number of messages posted by the user on the current channel.
"""
bot, channel, messages, author = parse_context(ctx)
messages
counter = 0
tmp = await bot.send_message(channel, 'Calculating messages...')
async for log in bot.logs_from(channel, limit=10000):
if log.author == author:
counter += 1
messages = '{}, You have posted {} messages on this channel.'.format(author, counter)
logger.debug(messages)
await bot.edit_message(tmp, messages)
bot.add_cog(Commons())
| 24.880952 | 93 | 0.623923 | 126 | 1,045 | 5.087302 | 0.555556 | 0.053042 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018277 | 0.266986 | 1,045 | 41 | 94 | 25.487805 | 0.818538 | 0.058373 | 0 | 0 | 0 | 0 | 0.110723 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.052632 | 0.210526 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e944304ba9eea0773445d59a7bfc92c434dd17b6 | 2,470 | py | Python | mesh/scale_test.py | melonwan/sphereHand | 4a5bae9ddc7da42c25e528686b7c6a5388a5e945 | [
"MIT"
] | 53 | 2019-04-23T14:16:34.000Z | 2022-01-13T07:06:53.000Z | mesh/scale_test.py | melonwan/sphereHand | 4a5bae9ddc7da42c25e528686b7c6a5388a5e945 | [
"MIT"
] | 7 | 2019-05-08T10:15:35.000Z | 2022-03-20T12:57:57.000Z | mesh/scale_test.py | melonwan/sphereHand | 4a5bae9ddc7da42c25e528686b7c6a5388a5e945 | [
"MIT"
] | 3 | 2019-05-12T10:33:45.000Z | 2020-01-13T08:36:56.000Z | # to test the gradient back-propagation
from __future__ import absolute_import, division, print_function

import pickle

import cv2
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data

from mesh.multiview_utility import MutualProjectionLoss, MultiviewConsistencyLoss
from dataset.nyu_dataset import create_nyu_dataset
from network.util_modules import PosePriorLoss
from dataset.joint_angle import JointAngleDataset
from mesh.kinematicsTransformation import HandTransformationMat
from mesh.pointTransformation import LinearBlendSkinning, RandScale
from network.constants import Constant
from network.util_modules import HandSynthesizer

constant = Constant()

synt_key_points = [[0, 1],
                   [0, 2],
                   [33, 34]]

real_key_points = [[30, 31],
                   [30, 32],
                   [0, 1]]


def bone_length(joints, indices):
    bone_length = []
    for [idx_1, idx_2] in indices:
        diff = joints[idx_1] - joints[idx_2]
        bone_length.append(diff.view(-1).norm())
    bone_length = [bone_length[0] / bone_length[1], bone_length[0] / bone_length[2]]
    return bone_length


dataset_dir = 'D:\\data\\nyu_hand_dataset_v2\\npy-64\\test'
nyu_dataset = create_nyu_dataset(dataset_dir)

with open('mesh/model/preprocessed_hand.pkl', 'rb') as f:
    mesh = pickle.load(f)

hand_synthsizer = HandSynthesizer(mesh, 64, 16, 1.0, 0.01).cuda()
joint_data = JointAngleDataset()

for _ in range(10):
    real_data = nyu_dataset[0]
    real_joints = real_data[1][0][constant.real_key_points]
    real_joints = real_joints * 64 / 300 + 32
    fig = plt.figure()
    ax = fig.add_subplot(1, 2, 1)
    ax.scatter(real_joints[:, 0], real_joints[:, 1])
    for idx in range(len(real_joints)):
        ax.annotate('%d' % idx, (real_joints[idx, 0], real_joints[idx, 1]))
    ax.imshow(real_data[0][0].squeeze())

    para = joint_data[0].unsqueeze(dim=0).cuda()
    synt_result = hand_synthsizer(para)
    dm = synt_result[0].squeeze().detach().cpu().numpy()
    synt_joints = synt_result[3].squeeze().detach().cpu().numpy()[constant.synt_key_points]
    synt_joints = synt_joints * 64 / 300 + 32
    ax = fig.add_subplot(1, 2, 2)
    ax.scatter(synt_joints[:, 0], synt_joints[:, 1])
    for idx in range(len(synt_joints)):
        ax.annotate('%d' % idx, (synt_joints[idx, 0], synt_joints[idx, 1]))
    ax.imshow(dm)
plt.show() | 32.5 | 91 | 0.703644 | 361 | 2,470 | 4.612188 | 0.315789 | 0.054054 | 0.018018 | 0.028829 | 0.184985 | 0.048048 | 0.027628 | 0 | 0 | 0 | 0 | 0.038631 | 0.172065 | 2,470 | 76 | 92 | 32.5 | 0.77555 | 0.01498 | 0 | 0.033898 | 0 | 0 | 0.033306 | 0.030839 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016949 | false | 0 | 0.305085 | 0 | 0.338983 | 0.016949 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
3a5d53e039e596fe145bc0abe341b2c3dc6929f8 | 418 | py | Python | djangobmf/migrations/0002_removed_notification_and_watch.py | dmatthes/django-bmf | 3a97167de7841b13f1ddd23b33ae65e98dc49dfd | [
"BSD-3-Clause"
] | 1 | 2020-05-11T08:00:49.000Z | 2020-05-11T08:00:49.000Z | djangobmf/migrations/0002_removed_notification_and_watch.py | dmatthes/django-bmf | 3a97167de7841b13f1ddd23b33ae65e98dc49dfd | [
"BSD-3-Clause"
] | null | null | null | djangobmf/migrations/0002_removed_notification_and_watch.py | dmatthes/django-bmf | 3a97167de7841b13f1ddd23b33ae65e98dc49dfd | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import models, migrations
from django.conf import settings


class Migration(migrations.Migration):

    dependencies = [
        ('djangobmf', '0001_initial'),
    ]

    operations = [
        migrations.DeleteModel(
            name='Notification',
        ),
        migrations.DeleteModel(
            name='Watch',
        ),
    ]
| 19 | 40 | 0.600478 | 36 | 418 | 6.805556 | 0.694444 | 0.081633 | 0.204082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016835 | 0.289474 | 418 | 21 | 41 | 19.904762 | 0.808081 | 0.050239 | 0 | 0.266667 | 0 | 0 | 0.096203 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3a66590a44e555c673f51de795835e9d9a56326d | 1,952 | py | Python | web/foi_requests/tests/conftest.py | okfn-brasil/pedidosanonimos | 25935346be655f60bb5648d164e781ccf4c65bbb | [
"MIT"
] | 33 | 2018-11-06T12:37:00.000Z | 2022-01-23T02:12:52.000Z | web/foi_requests/tests/conftest.py | okfn-brasil/pedidosanonimos | 25935346be655f60bb5648d164e781ccf4c65bbb | [
"MIT"
] | 15 | 2018-11-07T11:49:37.000Z | 2020-07-08T23:30:33.000Z | web/foi_requests/tests/conftest.py | okfn-brasil/pedidosanonimos | 25935346be655f60bb5648d164e781ccf4c65bbb | [
"MIT"
] | 11 | 2018-11-18T13:36:10.000Z | 2020-06-20T21:36:27.000Z | import pytest
from django.db import transaction
from django.utils import timezone

from ..models import Message, FOIRequest, Esic, PublicBody


@pytest.fixture
def public_body(esic):
    return PublicBody(
        name='example',
        esic=esic
    )


@pytest.fixture
def esic():
    return Esic(
        url='http://example.com'
    )


@pytest.fixture
def foi_request():
    return FOIRequest()


@pytest.fixture
def message(foi_request):
    return Message(
        foi_request=foi_request
    )


@pytest.fixture
def foi_request_with_sent_user_message(foi_request, message_from_user):
    with transaction.atomic():
        message_from_user.approve()
        message_from_user.foi_request = foi_request
        message_from_user.sent_at = timezone.now()
        save_message(message_from_user)
    foi_request.refresh_from_db()
    return foi_request


@pytest.fixture
def message_from_user(public_body):
    return Message(
        sender=None,
        receiver=public_body
    )


@pytest.fixture
def message_from_government(public_body):
    return Message(
        sender=public_body,
        sent_at=timezone.now(),
        receiver=None
    )


def save_message(message):
    # FIXME: Ideally a simple message.save() would save everything, but I
    # couldn't find out how to do so in Django. Not yet.
    with transaction.atomic():
        if message.sender:
            save_public_body(message.sender)
            message.sender_id = message.sender.id
        if message.receiver:
            save_public_body(message.receiver)
            message.receiver_id = message.receiver.id
        message.foi_request.save()
        message.foi_request_id = message.foi_request.id
        message.save()


def save_public_body(public_body):
    with transaction.atomic():
        if public_body.esic:
            public_body.esic.save()
            public_body.esic_id = public_body.esic.id
        public_body.save()
    return public_body
| 22.964706 | 73 | 0.679303 | 245 | 1,952 | 5.171429 | 0.240816 | 0.11839 | 0.088398 | 0.054459 | 0.292818 | 0.033149 | 0 | 0 | 0 | 0 | 0 | 0 | 0.23668 | 1,952 | 84 | 74 | 23.238095 | 0.850336 | 0.060451 | 0 | 0.206349 | 0 | 0 | 0.013654 | 0 | 0 | 0 | 0 | 0.011905 | 0 | 1 | 0.142857 | false | 0 | 0.063492 | 0.095238 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3a67f0f808879f51079a42676b0ee9e297c90a20 | 766 | gyp | Python | binding.gyp | immuta/node-libhdfs3 | 606363faec5f9b1c3a0dcae793b4614f53e5ebb2 | [
"MIT"
] | null | null | null | binding.gyp | immuta/node-libhdfs3 | 606363faec5f9b1c3a0dcae793b4614f53e5ebb2 | [
"MIT"
] | 3 | 2017-06-21T22:00:18.000Z | 2020-07-07T01:21:27.000Z | binding.gyp | immuta/node-libhdfs3 | 606363faec5f9b1c3a0dcae793b4614f53e5ebb2 | [
"MIT"
] | 1 | 2017-11-29T16:52:45.000Z | 2017-11-29T16:52:45.000Z | {
  'targets': [
    {
      'target_name': 'hdfs3_bindings',
      'sources': [
        'src/addon.cc',
        'src/HDFileSystem.cc',
        'src/HDFile.cc'
      ],
      'xcode_settings': {
        'OTHER_CFLAGS': ['-Wno-unused-parameter', '-Wno-unused-result']
      },
      'cflags': ['-Wall', '-Wextra', '-Wno-unused-parameter', '-Wno-unused-result'],
      'include_dirs': [
        "<!(node -e \"require('nan')\")"
      ],
      'conditions': [
        ['OS == "linux"', {'libraries': ['-lhdfs3'], 'cflags': ['-g']}],
        ['OS == "mac"', {'libraries': ['-lhdfs3']}]
      ]
    }
  ]
} | 31.916667 | 91 | 0.353786 | 51 | 766 | 5.215686 | 0.666667 | 0.135338 | 0.135338 | 0.157895 | 0.24812 | 0.24812 | 0 | 0 | 0 | 0 | 0 | 0.007026 | 0.442559 | 766 | 24 | 92 | 31.916667 | 0.615925 | 0 | 0 | 0.083333 | 0 | 0 | 0.400261 | 0.054759 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3a6f949dffbcd5f2b977ccb29bf32e3a9018d255 | 361 | py | Python | Secao7_ColecoesPython/Exercicios/Exerc.3.py | PauloFTeixeira/curso_python | 9040c7dcc5262620f6330bb9637710bb8899bc6b | [
"MIT"
] | null | null | null | Secao7_ColecoesPython/Exercicios/Exerc.3.py | PauloFTeixeira/curso_python | 9040c7dcc5262620f6330bb9637710bb8899bc6b | [
"MIT"
] | null | null | null | Secao7_ColecoesPython/Exercicios/Exerc.3.py | PauloFTeixeira/curso_python | 9040c7dcc5262620f6330bb9637710bb8899bc6b | [
"MIT"
] | null | null | null | """
Read a set of real numbers, storing it in a vector, and compute the square of
each component of this vector, storing the result in another vector. Each set
has 10 elements. Print all elements.
"""
vetor = set(range(1, 11))
vetor1 = set()

for numero in vetor:
    numero = numero ** 2
    vetor1.add(numero)

print(vetor)
print(vetor1)
| 27.769231 | 108 | 0.728532 | 55 | 361 | 4.781818 | 0.672727 | 0.091255 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 0.177285 | 361 | 12 | 109 | 30.083333 | 0.855219 | 0.587258 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3a7b040341d02774bc3161342a3273bc4a2c9edf | 1,198 | py | Python | python/atlasPH.py | ferguman/OpenAg-MVP-II | 600ce329f373ef3dc867163cdd09a424b49cd007 | [
"MIT"
] | 2 | 2019-03-18T05:47:55.000Z | 2019-05-30T13:08:13.000Z | python/atlasPH.py | ferguman/OpenAg-MVP-II | 600ce329f373ef3dc867163cdd09a424b49cd007 | [
"MIT"
] | 14 | 2018-06-27T14:02:23.000Z | 2020-02-16T19:47:43.000Z | python/atlasPH.py | ferguman/OpenAg-MVP-II | 600ce329f373ef3dc867163cdd09a424b49cd007 | [
"MIT"
] | null | null | null | import smbus2, time
address = 0x63  # The Atlas pH probe's standard I2C address is 99 decimal.


class atlasPH(object):

    def __init__(self):
        self.bus = smbus2.SMBus(1)

    def write(self, command):
        self.bus.write_byte(address, ord(command))

    def readBlock(self, numBytes):
        return self.bus.read_i2c_block_data(address, 0, numBytes)

    # val: str -> float
    def extractFloatFromString(self, val):
        try:
            return float(val)
        except ValueError:
            print("Atlas PH probe value error: " + val)
            return 0.00

    def extractPH(self, block):
        if block[0] == 1:
            block.pop(0)
            return self.extractFloatFromString("".join(map(chr, block)))
        else:
            print("Atlas PH probe status code error: " + str(block[0]))
            return 0.00

    # -> float
    def getPH(self):
        self.write('R')
        time.sleep(0.9)
        block = self.readBlock(8)
        return self.extractPH(block)

    def test(self):
        'Self test of the object'
        print('*** Test Atlas PH ***')
        print('PH: %.2f' % self.getPH())


if __name__ == "__main__":
    t = atlasPH()
    t.test()
| 25.489362 | 84 | 0.557596 | 147 | 1,198 | 4.435374 | 0.44898 | 0.042945 | 0.055215 | 0.052147 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031863 | 0.318865 | 1,198 | 46 | 85 | 26.043478 | 0.767157 | 0.084307 | 0 | 0.058824 | 0 | 0 | 0.110018 | 0 | 0 | 0 | 0.003578 | 0 | 0 | 1 | 0.205882 | false | 0 | 0.029412 | 0.029412 | 0.441176 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
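The status-byte-plus-ASCII parsing done by `extractPH` above can be sketched in isolation, without the I2C hardware. This is a hypothetical simplification: the `parse_atlas_block` name and the trailing-null stripping (I2C blocks are often zero-padded) are assumptions, not part of the original driver:

```python
def parse_atlas_block(block):
    """Parse a response block: byte 0 is a status code (1 == success),
    the remaining bytes are an ASCII-encoded float."""
    if block[0] != 1:
        # non-success status code: fall back to 0.0 like the driver above
        return 0.0
    text = "".join(map(chr, block[1:])).rstrip("\x00")
    try:
        return float(text)
    except ValueError:
        return 0.0


# a successful reading of "7.04" preceded by the status byte
assert parse_atlas_block([1] + [ord(c) for c in "7.04"]) == 7.04
```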
3a8fdbe0b5588f2deaee2d690e03eb531c22a4ae | 2,065 | py | Python | purge/admin.py | gregschmit/django-purge | 04f1488d73f6e5be750a84bde8f2b73d9b112bb6 | [
"MIT"
] | 1 | 2019-02-20T08:35:11.000Z | 2019-02-20T08:35:11.000Z | purge/admin.py | gregschmit/django-purge | 04f1488d73f6e5be750a84bde8f2b73d9b112bb6 | [
"MIT"
] | null | null | null | purge/admin.py | gregschmit/django-purge | 04f1488d73f6e5be750a84bde8f2b73d9b112bb6 | [
"MIT"
] | null | null | null | from django.contrib import admin
from django.contrib.admin.widgets import FilteredSelectMultiple
from django.contrib.contenttypes.models import ContentType
from django.forms import ModelMultipleChoiceField
from django.utils.html import format_html

from . import models


class CustomModelMCF(ModelMultipleChoiceField):
    widget = FilteredSelectMultiple("Models", False)

    def label_from_instance(self, obj):
        return "{0} :: {1}".format(obj.app_label, obj)


class DatabasePurgerAdmin(admin.ModelAdmin):
    list_filter = ('enabled',)
    list_display = ('name',) + list_filter + ('_selected_tables', 'delete_by_age', 'delete_by_quantity', 'datetime_field', 'age_in_days', 'max_records')
    search_fields = list_display
    fieldsets = (
        (None, {'fields': ('name', 'enabled', 'tables')}),
        ('Criteria', {'fields': ('delete_by_age', 'delete_by_quantity', 'datetime_field', 'age_in_days', 'max_records')}),
    )

    def _selected_tables(self, obj):
        return format_html(obj.selected_tables)

    def formfield_for_manytomany(self, db_field, request, **kwargs):
        if db_field.name == 'tables':
            return CustomModelMCF(ContentType.objects.all(), **kwargs)
        return super().formfield_for_manytomany(db_field, request, **kwargs)


class FilePurgerAdmin(admin.ModelAdmin):
    list_filter = ('enabled',)
    list_display = ('name',) + list_filter + ('file_pattern', 'directory', 'recursive_search', 'delete_by_filename', 'filename_date_year_first', 'filename_date_day_first', 'delete_by_atime', 'delete_by_mtime', 'delete_by_ctime', 'age_in_days')
    search_fields = list_display
    fieldsets = (
        (None, {'fields': ('name', 'enabled', 'file_pattern', 'directory', 'recursive_search')}),
        ('Criteria', {'fields': ('delete_by_filename', 'filename_date_year_first', 'filename_date_day_first', 'delete_by_atime', 'delete_by_mtime', 'delete_by_ctime', 'age_in_days')}),
    )


admin.site.register(models.DatabasePurger, DatabasePurgerAdmin)
admin.site.register(models.FilePurger, FilePurgerAdmin)
| 43.93617 | 243 | 0.720581 | 238 | 2,065 | 5.920168 | 0.331933 | 0.068133 | 0.02555 | 0.035486 | 0.438609 | 0.388928 | 0.388928 | 0.388928 | 0.388928 | 0.313698 | 0 | 0.001127 | 0.14092 | 2,065 | 46 | 244 | 44.891304 | 0.793123 | 0 | 0 | 0.176471 | 0 | 0 | 0.279903 | 0.045521 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088235 | false | 0 | 0.176471 | 0.058824 | 0.735294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3a9a5b55eb0db5aabcd6a33215321df9cfa78280 | 2,987 | py | Python | kagura/utils.py | nishio/kagura | 3748a8b2ed5680ddc03abfdae2326fe49607d878 | [
"MIT"
] | 1 | 2021-03-09T04:28:57.000Z | 2021-03-09T04:28:57.000Z | kagura/utils.py | nishio/kagura | 3748a8b2ed5680ddc03abfdae2326fe49607d878 | [
"MIT"
] | null | null | null | kagura/utils.py | nishio/kagura | 3748a8b2ed5680ddc03abfdae2326fe49607d878 | [
"MIT"
] | null | null | null | """
utilities
=========
"""


def stratified_split(xs, ys, nfold=10):
    """
    USAGE:
    train_xs, test_xs, train_ys, test_ys = stratified_split(xs, ys)
    """
    from sklearn.cross_validation import StratifiedKFold
    train, test = StratifiedKFold(ys, nfold).__iter__().next()
    return xs[train], xs[test], ys[train], ys[test]


def one_hot_ize(df, col, prefix=None, keep_original=False):
    "take a DataFrame and convert the specified column to a one-hot representation"
    import pandas as pd
    one_hot = pd.get_dummies(df[col], prefix=prefix)
    if not keep_original:
        df = df.drop(col, axis=1)
    df = df.join(one_hot)
    return df


from collections import defaultdict
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
import numpy as np


def quick_cv(X, Y, seed=1234):
    "do quick cross validation"
    np.random.seed(seed)
    score = defaultdict(list)
    for train, test in StratifiedShuffleSplit(Y):
        trainX = X[train]
        trainY = Y[train]
        testX = X[test]
        testY = Y[test]

        m = LogisticRegression()
        m.fit(trainX, trainY)
        score['LR'].append(m.score(testX, testY))

        m = KNeighborsClassifier()
        m.fit(trainX, trainY)
        score['KNN'].append(m.score(testX, testY))

        m = DecisionTreeClassifier()
        m.fit(trainX, trainY)
        score['DT'].append(m.score(testX, testY))

    def show(name):
        s = score[name]
        return "{} {:.2f}(+-{:.2f})".format(name, np.mean(s), np.std(s) * 2)

    print ", ".join(show(name) for name in sorted(score))


import time


class Digest(object):
    "print at most one line per second to avoid printing overhead"

    def __init__(self, elapse=1):
        self.starttime = time.time()
        self.lasttime = time.time()
        self.num_digested = 0
        self.elapse = elapse

    def digest(self, msg):
        t = time.time()
        if t - self.lasttime < self.elapse:
            self.num_digested += 1
            return 0
        print "{} ({} messages digested)".format(
            msg, self.num_digested)
        ret = self.num_digested
        self.num_digested = 0
        self.lasttime = t
        return ret


def to_bytes(array):
    def crop(v):
        v = int(v)
        if v < 0:
            v = 0
        if v > 255:
            v = 255
        return chr(v)
    return "".join(map(crop, array))


def from_corner(size):
    for xy in range(size * size):
        for x in range(size):
            y = xy - x
            if y >= size:
                continue
            if y < 0:
                break
            yield (x, y)


def one(xs):
    """
    >>> one([0, 0, 0])
    False
    >>> one([0, 0, 1])
    True
    >>> one([0, 1, 1])
    False
    >>> one([0, 1, 0])
    True
    """
    ret = False
    for x in xs:
        if x:
            if ret:
                return False
            ret = True
    return ret
| 25.10084 | 76 | 0.579846 | 397 | 2,987 | 4.282116 | 0.332494 | 0.032353 | 0.044118 | 0.028235 | 0.138235 | 0.027059 | 0 | 0 | 0 | 0 | 0 | 0.017086 | 0.29461 | 2,987 | 118 | 77 | 25.313559 | 0.789748 | 0 | 0 | 0.08642 | 0 | 0 | 0.072581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.111111 | null | null | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3aa1de018de871c858a8cf85e0642828d164c0e7 | 2,155 | py | Python | utils/subtitution.py | Samoray-l337/CryptoGuesser | 7e9df5a15b3cb9958518fbcb008a03bb9364d3da | [
"MIT"
] | null | null | null | utils/subtitution.py | Samoray-l337/CryptoGuesser | 7e9df5a15b3cb9958518fbcb008a03bb9364d3da | [
"MIT"
] | null | null | null | utils/subtitution.py | Samoray-l337/CryptoGuesser | 7e9df5a15b3cb9958518fbcb008a03bb9364d3da | [
"MIT"
] | null | null | null | import utils.config as config
from utils.words import prensetage_of_real_words_in_list
from utils.random import random_subtitution_string
from utils.search import is_known_part_in_text
from string import ascii_lowercase
from tqdm import tqdm


def get_array_key_from_string(string):
    return {ascii_lowercase[i]: string[i] for i in range(len(string))}


def get_letters_diffs(word):
    if len(word) == 1:
        return [0]
    return [ord(word[i + 1]) - ord(word[i]) for i in range(len(word) - 1)]


def brute_force_for_known_part(cipher_text, known_part):
    for i in tqdm(range(config.MAX_SUBTITUTION_RETRIES)):
        curr_array_key = random_subtitution_string()
        decrypted = decrypt(cipher_text, curr_array_key, False)
        if known_part in decrypted[0]:
            return decrypted
    curr_array_key = random_subtitution_string()
    return decrypt(cipher_text, curr_array_key, False)


def super_bruteforce(cipher_text, known_part=None):
    for i in tqdm(range(config.MAX_SUBTITUTION_RETRIES)):
        curr_array_key = random_subtitution_string()
        decrypted = decrypt(cipher_text, curr_array_key, False)
        plain_text = decrypted[0]
        required_presentage_of_real_words = config.REQUIRED_PRESENTAGE_OF_REAL_WORDS_FOR_SENTENSE_TO_BE_REAL
        # in case the known part is found, the required percentage should be smaller
        if known_part and is_known_part_in_text(known_part, plain_text):
            required_presentage_of_real_words /= 5
        if prensetage_of_real_words_in_list(plain_text.split(' ')) >= required_presentage_of_real_words:
            decrypted[2] = True
            return decrypted
    curr_array_key = random_subtitution_string()
    return decrypt(cipher_text, curr_array_key, False)


def decrypt(cipher_text, key, special_bruteforce_mode, known_part=None):
    array_key = get_array_key_from_string(key)
    if special_bruteforce_mode:
        return super_bruteforce(cipher_text, known_part)
    if known_part:
        return brute_force_for_known_part(cipher_text, known_part)
    return [''.join([array_key[x] if x in array_key else x for x in cipher_text]), array_key, False] | 39.181818 | 108 | 0.746172 | 319 | 2,155 | 4.661442 | 0.222571 | 0.084734 | 0.06456 | 0.05111 | 0.572966 | 0.443847 | 0.341627 | 0.341627 | 0.341627 | 0.286483 | 0 | 0.004535 | 0.181439 | 2,155 | 55 | 109 | 39.181818 | 0.838435 | 0.03109 | 0 | 0.3 | 0 | 0 | 0.000479 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.15 | 0.025 | 0.525 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
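The key-mapping step of `decrypt()` above can be illustrated on its own. A minimal sketch assuming the same convention (a 26-character key where `key[i]` is the plaintext letter for `ascii_lowercase[i]`); the `decrypt_with_key` name is a hypothetical stand-in for the full function:

```python
from string import ascii_lowercase


def decrypt_with_key(cipher_text, key):
    # build the ciphertext-letter -> plaintext-letter mapping
    mapping = {ascii_lowercase[i]: key[i] for i in range(26)}
    # non-letter characters (spaces, punctuation) pass through unchanged
    return ''.join(mapping.get(c, c) for c in cipher_text)


# the identity key leaves the text unchanged
assert decrypt_with_key("hello world", ascii_lowercase) == "hello world"
# a reversed-alphabet (Atbash-style) key
assert decrypt_with_key("abc", ascii_lowercase[::-1]) == "zyx"
```

The brute-force routines above then amount to sampling random keys for this mapping and scoring each candidate plaintext by its fraction of real words.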
3aa3b38c176030a9064ad7af57108cf970e35f9b | 601 | py | Python | pastetape/db/base.py | EXLER/Pastetape | 82c57bcc05f1f0b580eddfa231e4bef86597264e | [
"MIT"
] | 1 | 2021-07-08T12:42:10.000Z | 2021-07-08T12:42:10.000Z | pastetape/db/base.py | EXLER/Pastetape | 82c57bcc05f1f0b580eddfa231e4bef86597264e | [
"MIT"
] | null | null | null | pastetape/db/base.py | EXLER/Pastetape | 82c57bcc05f1f0b580eddfa231e4bef86597264e | [
"MIT"
] | 1 | 2021-07-08T12:42:12.000Z | 2021-07-08T12:42:12.000Z | from sqlalchemy.ext.declarative import as_declarative, declared_attr
from pastetape.db.session import engine


@as_declarative()
class Base:
    __name__: str

    @declared_attr
    def __tablename__(cls) -> str:
        return cls.__name__.lower()


def initialize_db() -> None:
    """
    Initializes the database tables.

    Using the Alembic package for migrations is preferred;
    however, for simplicity we are using this approach.
    """
    from pastetape.models.paste import Paste  # noqa

    Base.metadata.create_all(engine, Base.metadata.tables.values(), checkfirst=True)  # type: ignore
| 24.04 | 100 | 0.72213 | 74 | 601 | 5.621622 | 0.689189 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193012 | 601 | 24 | 101 | 25.041667 | 0.857732 | 0.25624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.272727 | 0.090909 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
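The `__tablename__` convention above (table name is the class name, lower-cased) can be demonstrated without SQLAlchemy. A hypothetical metaclass sketch; `ModelMeta` and the example class are illustrative assumptions, not part of the original module:

```python
class ModelMeta(type):
    """Assign __tablename__ = lower-cased class name, mimicking the
    @declared_attr hook on the declarative Base above."""

    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, namespace)
        cls.__tablename__ = name.lower()
        return cls


class Paste(metaclass=ModelMeta):
    pass


assert Paste.__tablename__ == "paste"
```

With SQLAlchemy's declarative base, the same effect falls out of `@declared_attr` being evaluated lazily per subclass, so every model gets its table name for free.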
3aa75be2e025b222552bd381d28a319ca4e96123 | 1,850 | py | Python | courseparticipation/api/permissions.py | TBrockmeyer/courses-participation-api | 39d9edc9a80faa28e9ad711342b3041cb6635be4 | [
"MIT"
] | null | null | null | courseparticipation/api/permissions.py | TBrockmeyer/courses-participation-api | 39d9edc9a80faa28e9ad711342b3041cb6635be4 | [
"MIT"
] | null | null | null | courseparticipation/api/permissions.py | TBrockmeyer/courses-participation-api | 39d9edc9a80faa28e9ad711342b3041cb6635be4 | [
"MIT"
] | null | null | null | from rest_framework import permissions
class IsAdminOrReadOnly(permissions.BasePermission):
    """
    Custom permission to only allow admins to edit an object, and all others to view it.
    """

    def has_permission(self, request, view):
        # Read permissions are allowed to any authenticated request,
        # so we'll always allow GET, HEAD or OPTIONS requests.
        if request.method in permissions.SAFE_METHODS and request.user.is_authenticated:
            return True
        # Write permissions are only allowed to admins.
        return request.user.is_staff


class IsOwnerOrAdmin(permissions.BasePermission):
    """
    Custom permission to only allow authenticated users or admins to create an object for the given user.
    """

    def has_permission(self, request, view):
        # Grant permission only to authenticated users
        if request.user.is_authenticated:
            # If the requesting user is an admin, view and creation permissions are granted
            if request.user.is_staff:
                return True
            # If a user_id is given by a non-admin user as an argument in the request,
            # it needs to be the user's own ID
            if 'user_id' in request.data:
                if int(request.data['user_id']) == request.user.id:
                    return True
                else:
                    return False
            # If no user_id is given in the request, the retrieval of the user will be handled in views.py
            return True
        else:
            return False

    """
    Custom permission to only allow owners of an object or admins to edit it.
    """
    def has_object_permission(self, request, view, obj):
        if obj.user_id == request.user.id or request.user.is_staff:
            return True
        else:
            return False | 37.755102 | 105 | 0.629189 | 240 | 1,850 | 4.783333 | 0.354167 | 0.067073 | 0.05662 | 0.057491 | 0.30662 | 0.19338 | 0.090592 | 0 | 0 | 0 | 0 | 0 | 0.308649 | 1,850 | 49 | 106 | 37.755102 | 0.897576 | 0.355135 | 0 | 0.541667 | 0 | 0 | 0.013121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.041667 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3aab31f57c86cd9a0668aea34b9b402e88948d36 | 1,107 | py | Python | lesson10/qiangshihong/users/urls.py | herrywen-nanj/51reboot | 1130c79a360e1b548a6eaad176eb60f8bed22f40 | [
"Apache-2.0"
] | null | null | null | lesson10/qiangshihong/users/urls.py | herrywen-nanj/51reboot | 1130c79a360e1b548a6eaad176eb60f8bed22f40 | [
"Apache-2.0"
] | null | null | null | lesson10/qiangshihong/users/urls.py | herrywen-nanj/51reboot | 1130c79a360e1b548a6eaad176eb60f8bed22f40 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# author: qsh
from django.urls import path, re_path

from . import views, user, roles

app_name = 'users'

urlpatterns = [
    # http://ip:8000/
    path("", views.IndexView.as_view(), name='index'),
    # http://ip:8000/login/
    path("login/", views.LoginView.as_view(), name='login'),
    path("logout/", views.LogoutView.as_view(), name='logout'),
    path("list/", user.userlist, name='user_list'),
    path('userlist/', user.UserListView.as_view(), name='user_list'),
    path('grouplist/', roles.GroupListView.as_view(), name='group_list'),
    path('powerlist/', roles.PowerListView.as_view(), name='power_list'),
    # <pk> is the primary key, used as the lookup index (user id)
    re_path('userdetail/(?P<pk>[0-9]+)?/$', user.UserDetailView.as_view(), name='user_detail'),
    path('modifypasswd/', user.ModifyPwdView.as_view(), name='modify_pwd'),
    re_path('usergrouppower/(?P<pk>[0-9]+)?/$', user.UserGroupPowerView.as_view(), name='user_group_power'),
    # http://ip:8000/logout/
    # path("logout/", views2.LogoutView.as_view(), name='logout'),
]
| 38.172414 | 108 | 0.663053 | 147 | 1,107 | 4.836735 | 0.401361 | 0.084388 | 0.140647 | 0.059072 | 0.098453 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018538 | 0.122855 | 1,107 | 28 | 109 | 39.535714 | 0.713697 | 0.174345 | 0 | 0 | 0 | 0 | 0.238938 | 0.066372 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.0625 | 0.1875 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
3aadc3537af578445cd65f67c4572925dbbf0773 | 312 | py | Python | lab2/p4b.py | sarahmid/programming-bootcamp | 6dc6ab0ecfac662eb9676956ab0ae799953e88ae | [
"MIT"
] | 1 | 2020-11-06T03:29:24.000Z | 2020-11-06T03:29:24.000Z | lab2/p4b.py | sarahmid/programming-bootcamp | 6dc6ab0ecfac662eb9676956ab0ae799953e88ae | [
"MIT"
] | null | null | null | lab2/p4b.py | sarahmid/programming-bootcamp | 6dc6ab0ecfac662eb9676956ab0ae799953e88ae | [
"MIT"
] | null | null | null | dnaSeq = raw_input("Enter a DNA sequence: ")
motif = raw_input("Enter a motif to search for: ")
if len(motif) > len(dnaSeq):
    print "Error: motif sequence is longer than DNA sequence."
else:
    if motif in dnaSeq:
        print "Found the motif in the sequence."
    else:
        print "Did not find the motif in the sequence." | 31.2 | 59 | 0.714744 | 52 | 312 | 4.25 | 0.480769 | 0.095023 | 0.117647 | 0.126697 | 0.190045 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185897 | 312 | 10 | 60 | 31.2 | 0.870079 | 0 | 0 | 0.222222 | 0 | 0 | 0.549521 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3ab37ee5cdb84ceaee330ce1eecafe3afbe3fd5c | 1,646 | py | Python | Filter.py | eejwa/Data-Preprocessing | 41d27d694ed07e052c4d4680d989601850debcc9 | [
"MIT"
] | null | null | null | Filter.py | eejwa/Data-Preprocessing | 41d27d694ed07e052c4d4680d989601850debcc9 | [
"MIT"
] | null | null | null | Filter.py | eejwa/Data-Preprocessing | 41d27d694ed07e052c4d4680d989601850debcc9 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
usage = """Code to filter traces in the directory given a frequency band and file wildcard.

[-fl][-fh][-f][-t] where:

-fl = minimum frequency value (e.g. 0.05)
-fh = maximum frequency value (e.g. 1.0)
-f = filename wildcard (e.g. '*SAC')
-t = type of filtering (e.g. bandpass)
"""

import obspy
import numpy as np
import argparse

# get the arguments from the terminal
parser = argparse.ArgumentParser(description='Preprocessing script for data retrieved from obspy DMT')

parser.add_argument("-f", "--file_wildcard", help="Enter the file to be normalised (e.g. *BHR*)", type=str, required=True, action="store", default="*SAC")
parser.add_argument("-fl", "--lower_frequency", help="Enter the lower frequency you want the analysis to be conducted over", type=float, required=True, action="store", default=0.1)
parser.add_argument("-fh", "--upper_frequency", help="Enter the upper frequency you want the analysis to be conducted over", type=float, required=True, action="store", default=0.4)
parser.add_argument("-t", "--filter_type", help="Enter the type of filtering you want to do", type=str, required=True, action="store", default="bandpass")

args = parser.parse_args()

file_names = args.file_wildcard
flow = args.lower_frequency
fhigh = args.upper_frequency
filter_type = args.filter_type

st = obspy.read(file_names)

# filter the stream
st_filtered = st.filter(filter_type, freqmin=flow, freqmax=fhigh)

for i, tr in enumerate(st_filtered):
    # get information about the trace and rename it
    network = tr.stats.network
    station = tr.stats.station
    tr.write("%s_%s_filtered.SAC" % (network, station), format="SAC")
| 34.291667 | 183 | 0.72904 | 254 | 1,646 | 4.649606 | 0.413386 | 0.008467 | 0.057578 | 0.0779 | 0.204911 | 0.204911 | 0.204911 | 0.142252 | 0.142252 | 0.142252 | 0 | 0.006285 | 0.130012 | 1,646 | 47 | 184 | 35.021277 | 0.818436 | 0.072904 | 0 | 0 | 0 | 0 | 0.444809 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.076923 | 0.115385 | 0 | 0.115385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
3ac0901790b59f23dbf4b8aabd26ad7fb8437fb5 | 796 | py | Python | ovirtlib4/networks.py | MosheSheena/ovirtlib4 | 340ff6401c6291460da8ef7163838101cab717b9 | [
"Apache-2.0"
] | null | null | null | ovirtlib4/networks.py | MosheSheena/ovirtlib4 | 340ff6401c6291460da8ef7163838101cab717b9 | [
"Apache-2.0"
] | 4 | 2021-08-08T11:48:58.000Z | 2022-03-15T12:59:04.000Z | ovirtlib4/networks.py | MosheSheena/ovirtlib4 | 340ff6401c6291460da8ef7163838101cab717b9 | [
"Apache-2.0"
] | 3 | 2020-07-29T16:27:34.000Z | 2021-07-28T11:16:29.000Z | # -*- coding: utf-8 -*-
import ovirtsdk4.types as types
from .system_service import CollectionService, CollectionEntity
class Networks(CollectionService):
"""
Gives access to all oVirt Networks
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.service = self.connection.system_service().networks_service()
self.entity_service = self.service.network_service
self.entity_type = types.Network
def _get_collection_entity(self):
""" Overwrite abstract parent method """
return NetworkEntity(connection=self.connection)
class NetworkEntity(CollectionEntity):
"""
Put Network custom functions here
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
| 26.533333 | 74 | 0.677136 | 83 | 796 | 6.192771 | 0.493976 | 0.077821 | 0.042802 | 0.058366 | 0.155642 | 0.155642 | 0.155642 | 0.155642 | 0.155642 | 0 | 0 | 0.003165 | 0.20603 | 796 | 29 | 75 | 27.448276 | 0.810127 | 0.157035 | 0 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.153846 | 0 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3ac2740c0e5c139ad6eae7f670f40ddd476288f5 | 1,812 | py | Python | Lib/sunos5/SOCKET.py | AtjonTV/Python-1.4 | 2a80562c5a163490f444181cb75ca1b3089759ec | [
"Unlicense",
"TCL",
"DOC",
"AAL",
"X11"
] | null | null | null | Lib/sunos5/SOCKET.py | AtjonTV/Python-1.4 | 2a80562c5a163490f444181cb75ca1b3089759ec | [
"Unlicense",
"TCL",
"DOC",
"AAL",
"X11"
] | null | null | null | Lib/sunos5/SOCKET.py | AtjonTV/Python-1.4 | 2a80562c5a163490f444181cb75ca1b3089759ec | [
"Unlicense",
"TCL",
"DOC",
"AAL",
"X11"
] | null | null | null | # Generated by h2py from /usr/include/sys/socket.h
NC_TPI_CLTS = 1
NC_TPI_COTS = 2
NC_TPI_COTS_ORD = 3
NC_TPI_RAW = 4
SOCK_STREAM = NC_TPI_COTS
SOCK_DGRAM = NC_TPI_CLTS
SOCK_RAW = NC_TPI_RAW
SOCK_RDM = 5
SOCK_SEQPACKET = 6
SO_DEBUG = 0x0001
SO_ACCEPTCONN = 0x0002
SO_REUSEADDR = 0x0004
SO_KEEPALIVE = 0x0008
SO_DONTROUTE = 0x0010
SO_BROADCAST = 0x0020
SO_USELOOPBACK = 0x0040
SO_LINGER = 0x0080
SO_OOBINLINE = 0x0100
SO_DONTLINGER = (~SO_LINGER)
SO_SNDBUF = 0x1001
SO_RCVBUF = 0x1002
SO_SNDLOWAT = 0x1003
SO_RCVLOWAT = 0x1004
SO_SNDTIMEO = 0x1005
SO_RCVTIMEO = 0x1006
SO_ERROR = 0x1007
SO_TYPE = 0x1008
SO_PROTOTYPE = 0x1009
SOL_SOCKET = 0xffff
AF_UNSPEC = 0
AF_UNIX = 1
AF_INET = 2
AF_IMPLINK = 3
AF_PUP = 4
AF_CHAOS = 5
AF_NS = 6
AF_NBS = 7
AF_ECMA = 8
AF_DATAKIT = 9
AF_CCITT = 10
AF_SNA = 11
AF_DECnet = 12
AF_DLI = 13
AF_LAT = 14
AF_HYLINK = 15
AF_APPLETALK = 16
AF_NIT = 17
AF_802 = 18
AF_OSI = 19
AF_X25 = 20
AF_OSINET = 21
AF_GOSIP = 22
AF_MAX = 22
PF_UNSPEC = AF_UNSPEC
PF_UNIX = AF_UNIX
PF_INET = AF_INET
PF_IMPLINK = AF_IMPLINK
PF_PUP = AF_PUP
PF_CHAOS = AF_CHAOS
PF_NS = AF_NS
PF_NBS = AF_NBS
PF_ECMA = AF_ECMA
PF_DATAKIT = AF_DATAKIT
PF_CCITT = AF_CCITT
PF_SNA = AF_SNA
PF_DECnet = AF_DECnet
PF_DLI = AF_DLI
PF_LAT = AF_LAT
PF_HYLINK = AF_HYLINK
PF_APPLETALK = AF_APPLETALK
PF_NIT = AF_NIT
PF_802 = AF_802
PF_OSI = AF_OSI
PF_X25 = AF_X25
PF_OSINET = AF_OSINET
PF_GOSIP = AF_GOSIP
PF_MAX = AF_MAX
SOMAXCONN = 5
MSG_OOB = 0x1
MSG_PEEK = 0x2
MSG_DONTROUTE = 0x4
MSG_MAXIOVLEN = 16
SOCKETSYS = 88
SOCKETSYS = 83
SO_ACCEPT = 1
SO_BIND = 2
SO_CONNECT = 3
SO_GETPEERNAME = 4
SO_GETSOCKNAME = 5
SO_GETSOCKOPT = 6
SO_LISTEN = 7
SO_RECV = 8
SO_RECVFROM = 9
SO_SEND = 10
SO_SENDTO = 11
SO_SETSOCKOPT = 12
SO_SHUTDOWN = 13
SO_SOCKET = 14
SO_SOCKPOLL = 15
SO_GETIPDOMAIN = 16
SO_SETIPDOMAIN = 17
SO_ADJTIME = 18
| 17.423077 | 50 | 0.766556 | 348 | 1,812 | 3.603448 | 0.393678 | 0.027911 | 0.021531 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127333 | 0.172185 | 1,812 | 103 | 51 | 17.592233 | 0.708667 | 0.02649 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0.069807 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3ac5078a14f8186a6b7a0ba77157e8c31c88f685 | 574 | py | Python | services/backend/src/schemas/notes.py | gideonmandu/note_taking_app | bf358a421b42d941ee47cb9c617678d3c074e24d | [
"MIT"
] | null | null | null | services/backend/src/schemas/notes.py | gideonmandu/note_taking_app | bf358a421b42d941ee47cb9c617678d3c074e24d | [
"MIT"
] | null | null | null | services/backend/src/schemas/notes.py | gideonmandu/note_taking_app | bf358a421b42d941ee47cb9c617678d3c074e24d | [
"MIT"
] | null | null | null | from pydantic import BaseModel
from tortoise.contrib.pydantic import pydantic_model_creator
from typing import Optional
from src.database.models import Notes
# Creating new notes
NoteInSchema = pydantic_model_creator(
Notes, name="NoteIn", exclude=["author_id"], exclude_readonly=True
)
# Retrieving notes
NoteOutSchema = pydantic_model_creator(
Notes, name="NoteOut", exclude=[
"modified_at", "author.password",
"author.created_at", "author.modified_at"
]
)
class UpdateNote(BaseModel):
title: Optional[str]
content: Optional[str]
| 22.96 | 70 | 0.745645 | 67 | 574 | 6.223881 | 0.537313 | 0.093525 | 0.143885 | 0.119904 | 0.139089 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.158537 | 574 | 24 | 71 | 23.916667 | 0.863354 | 0.060976 | 0 | 0 | 0 | 0 | 0.154851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.0625 | 0.25 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
3ad658f663ab436d69c55ee076787f7c5af82a70 | 554 | py | Python | pbs_util/prime_example.py | Clyde-fare/pbs_util | 1c1ed93773a9a020f9216056d2ae49cc0cd589d1 | [
"BSD-3-Clause"
] | 1 | 2015-08-24T02:48:00.000Z | 2015-08-24T02:48:00.000Z | pbs_util/prime_example.py | Clyde-fare/pbs_util | 1c1ed93773a9a020f9216056d2ae49cc0cd589d1 | [
"BSD-3-Clause"
] | null | null | null | pbs_util/prime_example.py | Clyde-fare/pbs_util | 1c1ed93773a9a020f9216056d2ae49cc0cd589d1 | [
"BSD-3-Clause"
] | null | null | null | import pbs_util.pbs_map as ppm
class PrimeWorker(ppm.Worker):
def __call__(self, n):
is_prime = True
for m in xrange(2,n):
if n % m == 0:
is_prime = False
break
return (n, is_prime)
if __name__ == "__main__":
for (n, is_prime) in sorted(ppm.pbs_map(PrimeWorker, range(1000, 10100),
num_clients=100)):
if is_prime:
print '%d is prime' % (n)
else:
print '%d is composite' % (n)
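The worker's primality test is plain trial division; here is a serial, Python 3 reference of the same logic (no pbs_map involved), useful for sanity-checking the distributed results:

```python
# Trial division, as in PrimeWorker.__call__ above
def is_prime(n):
    if n < 2:
        return False
    for m in range(2, n):
        if n % m == 0:
            return False
    return True

# Serial equivalent of the mapped computation, over a small range
results = dict((n, is_prime(n)) for n in range(2, 20))
```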
| 25.181818 | 77 | 0.476534 | 69 | 554 | 3.521739 | 0.565217 | 0.17284 | 0.098765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04375 | 0.422383 | 554 | 21 | 78 | 26.380952 | 0.715625 | 0 | 0 | 0 | 0 | 0 | 0.061372 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.0625 | null | null | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3adbf439491da91e824f8b8117c4be60189e19af | 2,610 | py | Python | src/podrum/network/protocol/AdventureSettingsPacket.py | genisyspromcpe/Podrum | c3e4644764308d587cf24a068c4568484eb0fb05 | [
"Apache-2.0"
] | 1 | 2021-11-05T16:59:49.000Z | 2021-11-05T16:59:49.000Z | src/podrum/network/protocol/AdventureSettingsPacket.py | genisyspromcpe/Podrum | c3e4644764308d587cf24a068c4568484eb0fb05 | [
"Apache-2.0"
] | null | null | null | src/podrum/network/protocol/AdventureSettingsPacket.py | genisyspromcpe/Podrum | c3e4644764308d587cf24a068c4568484eb0fb05 | [
"Apache-2.0"
] | null | null | null | """
* ____ _
* | _ \ ___ __| |_ __ _ _ _ __ ___
* | |_) / _ \ / _` | '__| | | | '_ ` _ \
* | __/ (_) | (_| | | | |_| | | | | | |
* |_| \___/ \__,_|_| \__,_|_| |_| |_|
*
* Licensed under the Apache License, Version 2.0 (the "License")
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
"""
from podrum.network.protocol.types.PlayerPermissions import PlayerPermissions
from podrum.network.protocol.DataPacket import DataPacket
from podrum.network.protocol.ProtocolInfo import ProtocolInfo
class AdventureSettingsPacket(DataPacket):
NID = ProtocolInfo.ADVENTURE_SETTINGS_PACKET
PERMISSION_NORMAL = 0
PERMISSION_OPERATOR = 1
PERMISSION_HOST = 2
PERMISSION_AUTOMATION = 3
PERMISSION_ADMIN = 4
BITFLAG_SECOND_SET = 1 << 16
WORLD_IMMUTABLE = 0x01
NO_PVP = 0x02
AUTO_JUMP = 0x20
ALLOW_FLIGHT = 0x40
NO_CLIP = 0x80
WORLD_BUILDER = 0x100
FLYING = 0x200
MUTED = 0x400
BUILD_AND_MINE = 0x01 | BITFLAG_SECOND_SET
DOORS_AND_SWITCHES = 0x02 | BITFLAG_SECOND_SET
OPEN_CONTAINERS = 0x04 | BITFLAG_SECOND_SET
ATTACK_PLAYERS = 0x08 | BITFLAG_SECOND_SET
ATTACK_MOBS = 0x10 | BITFLAG_SECOND_SET
OPERATOR = 0x20 | BITFLAG_SECOND_SET
TELEPORT = 0x80 | BITFLAG_SECOND_SET
flags = 0
commandPermission = PERMISSION_NORMAL
flags2 = -1
playerPermission = PlayerPermissions.MEMBER
customFlags = 0
entityUniqueId = None
    def decodePayload(self):
        # decoding reads fields, using the get* counterparts of the
        # put* calls in encodePayload
        self.flags = self.getUnsignedVarInt()
        self.commandPermission = self.getUnsignedVarInt()
        self.flags2 = self.getUnsignedVarInt()
        self.playerPermission = self.getUnsignedVarInt()
        self.customFlags = self.getUnsignedVarInt()
        self.entityUniqueId = self.getLLong()
def encodePayload(self):
self.putUnsignedVarInt(self.flags)
self.putUnsignedVarInt(self.commandPermission)
self.putUnsignedVarInt(self.flags2)
self.putUnsignedVarInt(self.playerPermission)
self.putUnsignedVarInt(self.customFlags)
self.putLLong(self.entityUniqueId)
def getFlag(self, flag):
if (flag & self.BITFLAG_SECOND_SET) != 0:
return (self.flags2 & flag) != 0
return (self.flags & flag) != 0
    def setFlag(self, flag, value):
        if (flag & self.BITFLAG_SECOND_SET) != 0:
            flagSet = self.flags2
        else:
            flagSet = self.flags
        if value:
            flagSet |= flag
        else:
            flagSet &= ~flag
        # write the modified word back, otherwise the update is lost
        if (flag & self.BITFLAG_SECOND_SET) != 0:
            self.flags2 = flagSet
        else:
            self.flags = flagSet
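The packet stores adventure settings in two 16-bit words: flags carrying `BITFLAG_SECOND_SET` live in `flags2`, the rest in `flags`. A standalone sketch of that two-word scheme, with a couple of the constants above:

```python
BITFLAG_SECOND_SET = 1 << 16

def set_flag(flags, flags2, flag, value):
    # Route the bit to the second word when the marker bit is present
    if flag & BITFLAG_SECOND_SET:
        flags2 = (flags2 | flag) if value else (flags2 & ~flag)
    else:
        flags = (flags | flag) if value else (flags & ~flag)
    return flags, flags2

WORLD_IMMUTABLE = 0x01                    # first set
OPERATOR = 0x20 | BITFLAG_SECOND_SET      # second set

flags, flags2 = set_flag(0, 0, WORLD_IMMUTABLE, True)
flags, flags2 = set_flag(flags, flags2, OPERATOR, True)
```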
| 30.705882 | 84 | 0.657088 | 268 | 2,610 | 6.067164 | 0.41791 | 0.079951 | 0.098401 | 0.046125 | 0.233702 | 0.233702 | 0.233702 | 0.200492 | 0.200492 | 0.200492 | 0 | 0.036735 | 0.249042 | 2,610 | 84 | 85 | 31.071429 | 0.792857 | 0.15364 | 0 | 0.067797 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028649 | 0 | 0 | 1 | 0.067797 | false | 0 | 0.050847 | 0 | 0.644068 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3ae7d0b0f7f8efe78a27d29989cacba363301144 | 2,033 | py | Python | src/huntsman/pocs/core.py | danjampro/huntsman-pocs | 7ec4f43776ac36d7bff2cf0a4926eb69daa5ca25 | [
"MIT"
] | null | null | null | src/huntsman/pocs/core.py | danjampro/huntsman-pocs | 7ec4f43776ac36d7bff2cf0a4926eb69daa5ca25 | [
"MIT"
] | 1 | 2020-09-15T09:11:18.000Z | 2020-09-15T09:11:18.000Z | src/huntsman/pocs/core.py | danjampro/huntsman-pocs | 7ec4f43776ac36d7bff2cf0a4926eb69daa5ca25 | [
"MIT"
] | 2 | 2020-12-16T09:07:17.000Z | 2020-12-18T06:06:24.000Z | from panoptes.pocs.core import POCS
class HuntsmanPOCS(POCS):
""" Minimal overrides to the POCS class """
def __init__(self, *args, **kwargs):
self._dome_open_states = []
super().__init__(*args, **kwargs)
def run(self, initial_next_state='starting', *args, **kwargs):
""" Override the default initial_next_state parameter from "ready" to "starting".
This allows us to call pocs.run() as normal, without needing to specify the initial next
state explicitly.
Args:
initial_next_state (str, optional): The first state the machine should move to from
the `sleeping` state, default `starting`.
            *args, **kwargs: Passed to POCS.run.
"""
return super().run(initial_next_state=initial_next_state, *args, **kwargs)
def _load_state(self, state, state_info=None):
""" Override method to add dome logic. """
if state_info is None:
state_info = {}
# Check if the state requires the dome to be open
if state_info.pop("requires_open_dome", False):
self.logger.debug(f"Adding state to open dome states: {state}.")
self._dome_open_states.append(state)
return super()._load_state(state, state_info=state_info)
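The `_load_state` override above implements a simple pattern: pop an opt-in flag from each state's info dict and collect the matching state names. A minimal sketch of just that collection step, with framework-free stand-in data:

```python
# Collect states whose info dict opts in via `requires_open_dome`,
# consuming the flag so the parent loader never sees it
def load_states(state_infos):
    dome_open_states = []
    for state, info in state_infos.items():
        if info.pop("requires_open_dome", False):
            dome_open_states.append(state)
    return dome_open_states

opened = load_states({
    "observing": {"requires_open_dome": True},
    "sleeping": {},
})
```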
def before_state(self, event_data):
""" Called before each state.
Args:
event_data(transitions.EventData): Contains information about the event
"""
if self.next_state in self._dome_open_states:
self.say(f"Opening the dome before entering the {self.next_state} state.")
self.observatory.open_dome()
self.say(f"Entering {self.next_state} state from the {self.state} state.")
def after_state(self, event_data):
""" Called after each state.
Args:
event_data(transitions.EventData): Contains information about the event
"""
self.say(f"Finished with the {self.state} state. The next state is {self.next_state}.")
| 40.66 | 96 | 0.639941 | 263 | 2,033 | 4.756654 | 0.319392 | 0.079137 | 0.076739 | 0.043165 | 0.156675 | 0.118305 | 0.118305 | 0.118305 | 0.118305 | 0.118305 | 0 | 0 | 0.257747 | 2,033 | 49 | 97 | 41.489796 | 0.829026 | 0.348254 | 0 | 0 | 0 | 0 | 0.222409 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.238095 | false | 0 | 0.047619 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3af6b73729ef2e5afac07a66509a37c7cb74963d | 3,940 | py | Python | audacitorch/core.py | hugofloresgarcia/audacitorch | 71f1c9a7045544556d102c2c77c137a21e159084 | [
"MIT"
] | 32 | 2021-10-30T04:36:15.000Z | 2022-03-18T17:47:13.000Z | audacitorch/core.py | hugofloresgarcia/torchaudacity | 71f1c9a7045544556d102c2c77c137a21e159084 | [
"MIT"
] | 4 | 2021-09-23T00:48:12.000Z | 2021-10-21T21:12:43.000Z | audacitorch/core.py | hugofloresgarcia/audacitorch | 71f1c9a7045544556d102c2c77c137a21e159084 | [
"MIT"
] | 2 | 2021-12-16T22:26:58.000Z | 2021-12-28T18:09:32.000Z | from typing import Tuple
import torch
from torch import nn
def _waveform_check(x: torch.Tensor):
assert x.ndim == 2, "input must have two dimensions (channels, samples)"
assert x.shape[-1] > x.shape[0], f"The number of channels {x.shape[-2]} exceeds the number of samples {x.shape[-1]} in your INPUT waveform. \
There might be something wrong with your model. "
class AudacityModel(nn.Module):
def __init__(self, model: nn.Module):
""" creates an Audacity model, wrapping a child model (that does the real work)"""
super().__init__()
model.eval()
self.model = model
class WaveformToWaveformBase(AudacityModel):
def forward(self, x: torch.Tensor) -> torch.Tensor:
"""
Internal forward pass for a WaveformToWaveform model.
All this does is wrap the do_forward_pass(x) function in assertions that check
that the correct input/output constraints are getting met. Nothing fancy.
"""
_waveform_check(x)
x = self.do_forward_pass(x)
_waveform_check(x)
return x
def do_forward_pass(self, x: torch.Tensor) -> torch.Tensor:
"""
Perform a forward pass on a waveform-to-waveform model.
Args:
x : An input audio waveform tensor. If `"multichannel" == True` in the
model's `metadata.json`, then this tensor will always be shape
`(1, n_samples)`, as all incoming audio will be downmixed first.
Otherwise, expect `x` to be a multichannel waveform tensor with
shape `(n_channels, n_samples)`.
Returns:
            torch.Tensor: Output tensor, shape (n_sources, n_samples). Each source
                will be a separate row of the output tensor.
"""
raise NotImplementedError("implement me!")
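The `_waveform_check` contract above (two dimensions, more samples than channels) can be demonstrated without torch. This sketch uses a tiny stub in place of `torch.Tensor`, replicating the same two assertions:

```python
# Stand-in for torch.Tensor: only .shape and .ndim are needed
class FakeTensor:
    def __init__(self, shape):
        self.shape = tuple(shape)
        self.ndim = len(self.shape)

def waveform_check(x):
    # Same constraints as _waveform_check above
    assert x.ndim == 2, "input must have two dimensions (channels, samples)"
    assert x.shape[-1] > x.shape[0], "more channels than samples: likely transposed"

waveform_check(FakeTensor((2, 44100)))  # a stereo clip passes

failed = False
try:
    waveform_check(FakeTensor((44100, 2)))  # (samples, channels) is rejected
except AssertionError:
    failed = True
```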
class WaveformToLabelsBase(AudacityModel):
def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Internal forward pass for a WaveformToLabels model.
All this does is wrap the do_forward_pass(x) function in assertions that check
that the correct input/output constraints are getting met. Nothing fancy.
"""
_waveform_check(x)
output = self.do_forward_pass(x)
assert isinstance(output, tuple), "waveform-to-labels output must be a tuple"
assert len(output) == 2, "output tuple must have two elements, e.g. tuple(labels, timestamps)"
labels = output[0]
timestamps = output[1]
assert torch.all(timestamps >= 0).item(), f"found a timestamp that is less than zero"
for timestamp in timestamps:
assert timestamp[0] < timestamp[1], f"timestamp ends ({timestamp[1]}) before it starts ({timestamp[0]})"
assert labels.ndim == 1, "labels tensor should be one dimensional"
assert labels.shape[0] == timestamps.shape[0], "time dimension between "\
"labels and timestamps tensors must be equal"
assert timestamps.shape[1] == 2, "second dimension of the timestamps tensor"\
"must be size 2"
return output
def do_forward_pass(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Perform a forward pass on a waveform-to-labels model.
Args:
x : An input audio waveform tensor. If `"multichannel" == True` in the
model's `metadata.json`, then this tensor will always be shape
`(1, n_samples)`, as all incoming audio will be downmixed first.
Otherwise, expect `x` to be a multichannel waveform tensor with
shape `(n_channels, n_samples)`.
Returns:
Tuple[torch.Tensor, torch.Tensor]: a tuple of tensors, where the first
tensor contains the output class probabilities
(shape `(n_timesteps, n_labels)`), and the second tensor contains
timestamps with start and end times for each label,
shape `(n_timesteps, 2)`.
"""
raise NotImplementedError("implement me!")
| 39.009901 | 143 | 0.653046 | 522 | 3,940 | 4.854406 | 0.275862 | 0.060773 | 0.030781 | 0.04341 | 0.452644 | 0.427782 | 0.427782 | 0.407656 | 0.393054 | 0.340963 | 0 | 0.007486 | 0.254061 | 3,940 | 100 | 144 | 39.4 | 0.854712 | 0.435787 | 0 | 0.128205 | 0 | 0.025641 | 0.219882 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 1 | 0.153846 | false | 0.102564 | 0.076923 | 0 | 0.358974 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
aaff73c48a74e2d5a92cc648d869da320d67337b | 190 | py | Python | taller control repetitivas/ejercicio_10.py | cristianm24/algoritmos-y-programacion- | d61846e7936e1620298968d4ba3ac8ad2ebd57fa | [
"MIT"
] | null | null | null | taller control repetitivas/ejercicio_10.py | cristianm24/algoritmos-y-programacion- | d61846e7936e1620298968d4ba3ac8ad2ebd57fa | [
"MIT"
] | null | null | null | taller control repetitivas/ejercicio_10.py | cristianm24/algoritmos-y-programacion- | d61846e7936e1620298968d4ba3ac8ad2ebd57fa | [
"MIT"
] | null | null | null | lista=[]
datos = int(input("Enter the number of values: "))
for i in range(0, datos):
    alt = float(input("Enter the heights: "))
    lista.append(alt)
print("The maximum height is: ", max(lista)) | 31.666667 | 45 | 0.673684 | 30 | 190 | 4.266667 | 0.766667 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006173 | 0.147368 | 190 | 6 | 46 | 31.666667 | 0.783951 | 0 | 0 | 0 | 0 | 0 | 0.350785 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c9006d1a4d572dc67dc1b37e14ea20fc91dd78f8 | 310 | py | Python | tests/test2.py | gaoce/timevis | ae9c8de1cd08107eb8d5c3951370f608a847ca5d | [
"MIT"
] | 1 | 2015-05-07T15:24:35.000Z | 2015-05-07T15:24:35.000Z | tests/test2.py | gaoce/TimeVisAn | ae9c8de1cd08107eb8d5c3951370f608a847ca5d | [
"MIT"
] | null | null | null | tests/test2.py | gaoce/TimeVisAn | ae9c8de1cd08107eb8d5c3951370f608a847ca5d | [
"MIT"
] | null | null | null | import os
import os.path
import json
# The folder holding the test data
data_path = os.path.dirname('.')
# Set the temporal config for testing
os.environ['TIMEVIS_CONFIG'] = os.path.join(data_path, 'config.py')
import timevis
app = timevis.app.test_client()
url = '/api/v2/experiment'
resp = app.get(url)
| 18.235294 | 67 | 0.732258 | 50 | 310 | 4.46 | 0.54 | 0.080717 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003745 | 0.13871 | 310 | 16 | 68 | 19.375 | 0.831461 | 0.219355 | 0 | 0 | 0 | 0 | 0.175732 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.444444 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
c90baa6e8f7e51fa711bc90fdf3af0cfe423d90e | 4,333 | py | Python | yuzu/utils/utils.py | andymitch/yuzu | 34c67d7a19a9d51a7478050ada6286d97ffa9910 | [
"Apache-2.0"
] | null | null | null | yuzu/utils/utils.py | andymitch/yuzu | 34c67d7a19a9d51a7478050ada6286d97ffa9910 | [
"Apache-2.0"
] | 10 | 2021-06-01T04:13:15.000Z | 2021-07-07T03:45:20.000Z | yuzu/utils/utils.py | andymitch/yuzu | 34c67d7a19a9d51a7478050ada6286d97ffa9910 | [
"Apache-2.0"
] | null | null | null | from inspect import signature
from questionary import Style
from pytz import reference
import math, os, datetime
from typing import Callable
from pandas import DataFrame
############################## CONSTANTS
ROOT_PATH = os.path.expanduser('~') + os.sep + '.yuzu'
STRATS_PATH = ROOT_PATH + os.sep + 'strategies'
ENV_PATH = ROOT_PATH + os.sep + '.env'
CONFIG_PATH = ROOT_PATH + os.sep + 'config.json'
EXCHANGES = ['binance', 'binanceus', 'coinbasepro', 'kraken']
EXCHANGE_NAMES = ['Binance', 'Binance US', 'Coinbase Pro', 'Kraken', 'cancel']
INTERVALS = ['1m','5m','15m','30m','1h','12h','1d']
############################## CLI STYLING
style = Style([
('qmark', 'fg:#673ab7 bold'), # token in front of the question
('question', 'bold'), # question text
('answer', 'fg:#f44336 bold'), # submitted answer text behind the question
('pointer', 'fg:#673ab7 bold'), # pointer used in select and checkbox prompts
('highlighted', 'fg:#673ab7 bold'), # pointed-at choice in select and checkbox prompts
('selected', 'fg:#cc5454'), # style for a selected item of a checkbox
('separator', 'fg:#cc5454'), # separator in lists
('instruction', ''), # user instructions for select, rawselect, checkbox
('text', ''), # plain text
('disabled', 'fg:#858585 italic') # disabled choices for select and checkbox prompts
])
############################## UTILS
def since(interval: str, ticks: int, last_epoch: int = -1):
if last_epoch == -1:
last_epoch = int(datetime.datetime.now(tz=reference.LocalTimezone()).timestamp())
return last_epoch - (int(interval[:-1]) * (3600 if interval[-1] == 'h' else 86400 if interval[-1] == 'd' else 60) * ticks)
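The lookback arithmetic inside `since` parses `"<N><unit>"` interval strings, with unit `m`, `h`, or `d`, and subtracts `ticks` intervals from an epoch. The same arithmetic restated without the timezone machinery:

```python
# Seconds per interval unit, as encoded in the conditional above
SECONDS_PER_UNIT = {'m': 60, 'h': 3600, 'd': 86400}

def lookback(interval, ticks, last_epoch):
    # "<N><unit>" -> seconds per tick, subtracted `ticks` times
    step = int(interval[:-1]) * SECONDS_PER_UNIT[interval[-1]]
    return last_epoch - step * ticks
```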
def safe_round(amount, precision):
return math.floor(amount * (10**precision))/(10**precision)
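`safe_round` floors rather than rounds at the requested decimal precision, so the result never exceeds the input, which matters when sizing an order against an exact balance (my reading of the intent, not stated in the source). The same idea in isolation:

```python
import math

def floor_round(amount, precision):
    # Scale, floor (never round up), and scale back
    scale = 10 ** precision
    return math.floor(amount * scale) / scale
```

Contrast with the builtin: `round(1.239, 2)` gives `1.24`, while flooring gives `1.23`.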
class colorprint:
@staticmethod
def red(skk): print("\033[91m {}\033[00m" .format(skk))
@staticmethod
def green(skk): print("\033[92m {}\033[00m" .format(skk))
@staticmethod
def yellow(skk): print("\033[93m {}\033[00m" .format(skk))
@staticmethod
def lightpurple(skk): print("\033[94m {}\033[00m" .format(skk))
@staticmethod
def purple(skk): print("\033[95m {}\033[00m" .format(skk))
@staticmethod
def cyan(skk): print("\033[96m {}\033[00m" .format(skk))
@staticmethod
def lightgrey(skk): print("\033[97m {}\033[00m" .format(skk))
@staticmethod
def black(skk): print("\033[98m {}\033[00m" .format(skk))
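Each `colorprint` helper wraps its argument in an ANSI SGR escape sequence; `91` selects bright red and `00` resets the attributes. A sketch that returns the formatted string instead of printing it:

```python
def red(text):
    # 91 = bright-red foreground, 00 = reset attributes
    return "\033[91m {}\033[00m".format(text)

msg = red("error")
```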
def validate_strategy(strategy_module):
sig, params, ret_type = None, None, None
try:
sig = signature(strategy_module.strategy)
params = sig.parameters
ret_type = sig.return_annotation
    except: raise AttributeError(f"{strategy_module} has no attribute 'strategy'")
    # sig.parameters is a name-keyed mapping of Parameter objects, so check
    # the declared annotations rather than indexing by position
    plist = list(params.values())
    assert plist[0].annotation is DataFrame, "First strategy parameter must be annotated as <class 'pandas.core.frame.DataFrame'>."
    assert plist[1].annotation is dict, "Second strategy parameter must be annotated as <class 'dict'>."
assert ret_type is DataFrame, "Strategy return type must be of type <class 'pandas.core.frame.DataFrame'>."
config_range = None
try:
config_range: dict = strategy_module.config_range
    except: raise AttributeError(f"{strategy_module} has no attribute 'config_range'")
assert type(config_range) is dict, "Strategy config_range must be of type <class 'dict'>."
    for k in ['min_ticks', 'stop_limit_buy', 'stop_limit_sell', 'stop_limit_loss']:
assert k in config_range, f"'{k}' must be included in strategy config_range."
t, ts = (list, "<class 'list'>") if k=='min_ticks' else (float, "<class 'float'>")
assert type(config_range[k]) is t, f"'{k}' must be of type {ts}."
for m in config_range['min_ticks']:
        assert m in config_range.keys(), f"'{m}' must be included in strategy config_range to be considered for min_ticks key."
config = None # TODO: create random config given config_range
df = DataFrame({'open': [], 'high': [], 'low': [], 'close': [], 'volume': []})
df = strategy_module.strategy(df, config)
assert 'buy' in df.columns, "Buy column must exist in strategy's returned DataFrame."
assert 'sell' in df.columns, "Sell column must exist in strategy's returned DataFrame." | 49.238636 | 131 | 0.644357 | 572 | 4,333 | 4.798951 | 0.353147 | 0.052095 | 0.032058 | 0.043716 | 0.268488 | 0.230965 | 0.145355 | 0.101275 | 0.069945 | 0 | 0 | 0.040671 | 0.188553 | 4,333 | 88 | 132 | 49.238636 | 0.740046 | 0.097392 | 0 | 0.135135 | 0 | 0 | 0.319685 | 0.016273 | 0 | 0 | 0 | 0.011364 | 0.121622 | 1 | 0.148649 | false | 0 | 0.081081 | 0.013514 | 0.27027 | 0.121622 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c90cc5de13fb24ab2ccde44d3bda7da78f665e88 | 23,187 | py | Python | pyjs/runners/giwebkit.py | chopin/pyjs | 8756402995f4c218602b251c984baf9ea8ddf8bf | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | pyjs/runners/giwebkit.py | chopin/pyjs | 8756402995f4c218602b251c984baf9ea8ddf8bf | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | pyjs/runners/giwebkit.py | chopin/pyjs | 8756402995f4c218602b251c984baf9ea8ddf8bf | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # Copyright (C) 2012 C Anthony Risinger <anthony@xtfx.me>
#
# LICENSE: Apache 2.0 <http://www.apache.org/licenses/LICENSE-2.0.txt>
import os
import sys
import logging
logging.basicConfig(level=logging.INFO)
logging.getLogger(__name__).setLevel(logging.DEBUG)
logger = logging.getLogger(__name__)
import re
from urllib import urlopen
from urlparse import urljoin
import gi
gi.require_version('Gtk', '3.0')
gi.require_version('WebKit', '3.0')
from gi.repository import GObject
from gi.repository import Gtk
from gi.repository import Soup
from gi.repository import WebKit
import types
import signal
import operator
from traceback import print_exc
from pprint import pformat
from uuid import uuid4
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
sys.stderr = os.fdopen(sys.stderr.fileno(), 'w', 0)
#TODO: impl collections.MutableMapping
class URI(object):
KEYS = [f.get_name() for f in Soup.URI.__info__.get_fields()]
get_keys = staticmethod(operator.attrgetter(*KEYS))
@staticmethod
def items(uri):
return zip(URI.KEYS, URI.get_keys(uri))
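`URI` pairs introspected field names with their values through a single `operator.attrgetter` built once at class-definition time. The same pattern works on any attribute-bearing object; here with a namedtuple in place of `Soup.URI`:

```python
import operator
from collections import namedtuple

Point = namedtuple('Point', 'x y z')

# Build the name list and a matching multi-attribute getter once
KEYS = list(Point._fields)
get_keys = operator.attrgetter(*KEYS)

def items(p):
    # Pair each field name with its value, as URI.items does
    return zip(KEYS, get_keys(p))

pairs = list(items(Point(1, 2, 3)))
```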
class GIXMLHttpRequestEventTarget(object):
onloadstart = None
onprogress = None
onabort = None
onerror = None
onload = None
ontimeout = None
onloadend = None
class GIXMLHttpRequestUpload(GIXMLHttpRequestEventTarget):
pass
class GIXMLHttpRequest(GIXMLHttpRequestEventTarget):
'''XMLHttpRequest Level 2 http://www.w3.org/TR/XMLHttpRequest/'''
UNSENT = 0
OPENED = 1
HEADERS_RECEIVED = 2
LOADING = 3
DONE = 4
ResponseType = set(['', 'arraybuffer', 'blob',
'document', 'json','text'])
_app = None
_msg = None
_meta = {
'_session': WebKit.get_default_session(),
'source-origin': 'about:blank',
'upload': None,
'status': 0,
'status-text': '',
'ready-state': UNSENT,
'response-type': '',
'response-text': '',
'anonymous-flag': None,
'send-flag': None,
'error-flag': None,
'synchronous-flag': None,
'request-method': 'GET',
'request-url': 'about:blank',
'request-username': None,
'request-password': None,
'author-request-headers': None,
'request-entity-body': None,
'upload-complete-flag': None,
'upload-events-flag': None,
'credentials-flag': None,
'force-preflight-flag': None,
'cross-origin-request-status': None,
'timeout': 0,
}
#TODO: properties ...
onreadystatechange = None
response = None #ro, any
responseXML = None #DOMDocument (or XMLDOC-thing)
@property
def timeout(self):
return self._meta['timeout']
@timeout.setter
def timeout(self, timeout):
#TODO: more to this?
self._meta['timeout'] = timeout
@property
def withCredentials(self):
return self._meta['credentials-flag']
@withCredentials.setter
def withCredentials(self, withCredentials):
#TODO: more to this?
self._meta['credentials-flag'] = withCredentials
@property
def responseText(self):
return self._meta['response-text']
@property
def upload(self):
return self._meta['upload']
@property
def status(self):
return self._meta['status']
@property
def statusText(self):
return self._meta['status-text']
@property
def readyState(self):
return self._meta['ready-state']
@property
def responseType(self):
return self._meta['response-type']
@responseType.setter
def responseType(self, responseType):
if responseType not in self.ResponseType:
raise TypeError('InvalidAccessError')
if self.readyState in (self.LOADING, self.DONE):
raise TypeError('InvalidStateError')
#TODO: there are 1-2 more check, but really ... ?
self._meta['response-type'] = responseType
def __init__(self, app):
self._app = app
def open(self, method, url, async=True, user=None, password=None):
import pygwt
self.abort()
async = async and True or False
user = user or None
password = password or None
view = self._app._view
uri = Soup.URI.new(pygwt.getModuleBaseURL()).new_with_base(url)
uri.set_user(user)
uri.set_password(password)
msg = Soup.Message.new_from_uri(method, uri)
uri = msg.get_uri()
self._msg = msg
self._meta = GIXMLHttpRequest._meta.copy()
self._meta.update({
'source-origin': uri.copy_host().to_string(0).rstrip('/'),
'upload': GIXMLHttpRequestUpload(),
'ready-state': self.OPENED,
'synchronous-flag': not async and True or None,
'request-method': method.upper(),
'request-url': uri.to_string(0),
'request-username': user,
'request-password': password,
'author-request-headers': {
'user-agent': view.get_settings().get_user_agent(),
},
})
def setRequestHeader(self, header, value):
if self.readyState != 1:
raise TypeError('InvalidStateError')
self._meta['author-request-headers'][header.lower()] = value
def send(self, data=None):
msg = self._msg
meta = self._meta
hdrs = meta['author-request-headers']
if self.readyState != self.OPENED or meta['send-flag']:
raise TypeError('InvalidStateError')
if data and meta['request-method'] not in ('HEAD', 'GET'):
meta['request-entity-body'] = data
msg.request_body.append(data)
#TODO: content-Type ...
#TODO: storage mutex? prob doesn't apply ...
if not meta['synchronous-flag']:
for key in GIXMLHttpRequestEventTarget.__dict__:
if callable(getattr(self.upload, key)):
meta['upload-events-flag'] = True
break
meta['error-flag'] = None
if not meta['request-entity-body']:
meta['upload-complete-flag'] = True
if not meta['synchronous-flag']:
meta['send-flag'] = True
#TODO: fire event: readystatechange
#TODO: fire progress event: loadstart
pass
if not meta['upload-complete-flag']:
#TODO: fire progress event: loadstart (upload)
pass
if 'accept' not in meta['author-request-headers']:
hdrs['accept'] = '*/*'
for hdr in hdrs.iteritems():
msg.request_headers.replace(*hdr)
if not meta['anonymous-flag']:
msg.request_headers.replace('origin', meta['source-origin'])
#TODO: app_frame == Referer?
if meta['request-url'].startswith(meta['source-origin']):
#TODO: should be async
meta['status'] = int(meta['_session'].send_message(msg))
meta['response-text'] = str(msg.response_body.data)
meta['ready-state'] = self.DONE
if callable(self.onreadystatechange):
#TODO: wrong signature
self.onreadystatechange(self, None, None)
else:
#TODO: CORS? :-(
pass
def abort(self):
msg = self._msg
meta = self._meta
if (self.readyState in (self.UNSENT, self.OPENED) and
meta['send-flag']) or self.readyState == self.DONE:
meta['ready-state'] = self.DONE
#TODO: fire event: readystatechange
#TODO: fire progress event: abort
#TODO: fire progress event: loadend
pass
if not meta['upload-complete-flag']:
#TODO: fire progress event: abort (upload)
#TODO: fire progress event: loadend (upload)
pass
meta['ready-state'] = self.UNSENT
def getResponseHeader(self, header):
raise NotImplementedError('XMLHttpRequest.getResponseHeader')
return ''
def getAllResponseHeaders(self):
raise NotImplementedError('XMLHttpRequest.getAllResponseHeaders')
return '...\r\n...\r\n'
def overrideMimeType(self, mime):
raise NotImplementedError('XMLHttpRequest.overrideMimeType')
class GITimer(object):
UUID = uuid4().hex
key = None
def __init__(self, key):
self.key = key
def __call__(self, wnd, cb, ms):
doc = wnd.document
ctx = doc.ctx
buf = doc.createTextNode(self.key)
sig = doc.createEvent('MouseEvent')
sig.initMouseEvent(
self.UUID, 0, 0, wnd, ms, 0,
0, 0, 0, 0, 0, 0, 0, 0, buf
)
wnd.dispatch_event(sig)
ctx.addEventListener(buf, self.UUID[::-1], cb)
return int(buf.data)
@classmethod
def bind(cls, key):
owner, attr = key
return types.MethodType(cls(attr), None, owner)
class GIProxy(object):
key = None
getter = None
setter = None
def __init__(self, key, impl='property'):
self.key = key
self.getter = operator.methodcaller('get_%s' % impl, key)
self.setter = operator.attrgetter('set_%s' % impl)
def __get__(self, inst, type_gi):
return self.getter(inst)
def __set__(self, inst, attr):
self.setter(inst)(self.key, attr)
def __delete__(self, inst):
pass
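GIProxy above builds its getter and setter once with `operator.methodcaller`/`operator.attrgetter` and installs itself as a data descriptor on the class. A minimal self-contained sketch of the same pattern; `FakeGObject` is an illustrative stand-in for a real GObject property table, not part of the original:

```python
import operator

class PropProxy(object):
    # same shape as GIProxy: build getter/setter once, act as a data descriptor
    def __init__(self, key, impl='property'):
        self.key = key
        self.getter = operator.methodcaller('get_%s' % impl, key)
        self.setter = operator.attrgetter('set_%s' % impl)

    def __get__(self, inst, owner):
        return self.getter(inst)            # -> inst.get_property(key)

    def __set__(self, inst, value):
        self.setter(inst)(self.key, value)  # -> inst.set_property(key, value)

class FakeGObject(object):
    # illustrative stand-in for a GObject-style property table
    def __init__(self):
        self._props = {}
    def get_property(self, key):
        return self._props[key]
    def set_property(self, key, value):
        self._props[key] = value

FakeGObject.title = PropProxy('title')

obj = FakeGObject()
obj.title = 'hello'
print(obj.title)  # hello
```

Installing the descriptor on the class (not the instance) is what lets GIResolver patch one attribute lookup and have every subsequent access skip the slow `__getattr__` path.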
class GIWindowLocation(object):
_ctx = None
def __init__(self, ctx):
#TODO: use SoupURI for this instead
def update(doc, pspec=None):
a.set_href(doc.get_document_uri())
logger.debug('location:%s', a.get_href())
#TODO: this only works for the first window created
# need something like GIProxy but for attributes
app = context
doc = app._doc
a = doc.createElement('a')
doc.connect('notify::document-uri', update)
update(doc)
object.__setattr__(self, '_app', app)
object.__setattr__(self, '_a', a)
def __getattr__(self, key):
return getattr(self._a, key)
def __setattr__(self, key, attr):
#TODO: needs to interact with view.load_uri()
setattr(self._a, key, attr)
def assign(self):
#TODO
self._app._wnd.get_history()
def reload(self):
self._app._view.reload()
def replace(self):
#TODO
self._app._wnd.get_history()
@classmethod
def bind(cls, key):
owner, attr = key
return cls(key)
class GIWindowOpen(object):
def __init__(self, ctx):
pass
def __call__(self, inst, uri, name="_blank", specs=""):
if '://' not in uri:
import pygwt
uri = Soup.URI.new(pygwt.getModuleBaseURL()).new_with_base(uri).to_string(0)
rc = RunnerContext()
rc._destroy_cb = lambda *args: logger.debug('destroying sub window...')
rc.setup(uri)
@classmethod
def bind(cls, key):
owner, attr = key
return types.MethodType(cls(key), None, owner)
class GIResolver(object):
NONE = object()
UPPER = re.compile('([A-Z])')
_custom = {
(WebKit.DOMDOMWindow, 'location'): GIWindowLocation,
(WebKit.DOMDOMWindow, 'open'): GIWindowOpen,
(WebKit.DOMDOMWindow, 'setTimeout'): GITimer,
(WebKit.DOMDOMWindow, 'setInterval'): GITimer,
#TODO: this is actually a bug in pyjs ... UIEvents
# do not have these attributes.
(WebKit.DOMUIEvent, 'shiftKey'): False,
(WebKit.DOMUIEvent, 'ctrlKey'): False,
(WebKit.DOMUIEvent, 'altKey'): False,
}
_type_gi = None
def __init__(self, type_gi):
method = types.MethodType(self, None, type_gi)
type.__setattr__(type_gi, '__getattr__', method)
type.__setattr__(type_gi, '__setattr__', method)
self._type_gi = type_gi
def __call__(self, inst, key, attr=NONE):
if attr is self.NONE:
return self.getattr(inst, key)
self.setattr(inst, key, attr)
def getattr(self, inst, key):
for impl in (self.getattr_gi, self.getattr_w3):
attr = impl(inst, key)
if attr is not self.NONE:
logger.debug('%s:%s.%s', impl.__name__,
inst.__class__.__name__, key)
return attr
raise AttributeError('%r object has no attribute %r' % (
inst.__class__.__name__, key))
def getattr_gi(self, inst, key):
#TODO: this can probably just be removed now?
return self.NONE
try:
if inst.get_data(key) is None:
return self.NONE
except TypeError:
return self.NONE
type.__setattr__(inst.__class__, key, GIProxy(key, 'data'))
return getattr(inst, key)
def getattr_w3(self, inst, key_w3):
key_gi = self._key_gi(key_w3)
for base in inst.__class__.__mro__:
key = (base, key_w3)
if key in self._custom:
try:
attr = self._custom[key].bind(key)
except AttributeError:
attr = self._custom[key]
elif hasattr(inst.props, key_gi):
attr = GIProxy(key_gi)
elif key_gi in base.__dict__:
attr = base.__dict__[key_gi]
else:
continue
type.__setattr__(base, key_w3, attr)
return getattr(inst, key_w3)
return self.NONE
def setattr(self, inst, key, attr):
# hasattr() *specifically* chosen because it calls getattr()
# internally, possibly setting a proxy object; if True, super()
# will then properly setattr() against the proxy or instance.
hasattr(inst, key)
super(self._type_gi, inst).__setattr__(key, attr)
def _key_gi(self, key):
return self.UPPER.sub(r'_\1', key).lower()
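`_key_gi` maps DOM-style camelCase names onto GObject-style snake_case with a single regex substitution. A standalone check of that exact transform (function name here is illustrative):

```python
import re

UPPER = re.compile('([A-Z])')

def key_gi(key_w3):
    # prefix every capital with '_' and lowercase: shiftKey -> shift_key
    return UPPER.sub(r'_\1', key_w3).lower()

print(key_gi('shiftKey'))     # shift_key
print(key_gi('readyState'))   # ready_state
print(key_gi('documentURI'))  # document_u_r_i (acronyms split per letter)
```

Note the acronym behavior: each capital gets its own underscore, so `documentURI` becomes `document_u_r_i`, which is why the lookup also falls back to `base.__dict__` when no matching GObject property exists.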
class Callback(object):
def __init__(self, sender, cb, boolparam):
self.sender = sender
self.cb = cb
self.boolparam = boolparam
def __call__(self, sender, event, data):
try:
return self.cb(self.sender, event, self.boolparam)
except:
print_exc()
return None
class ApplicationFrame(object):
#TODO: split RunnerContext (multi-frame support)
pass
class RunnerContext(object):
platform = 'webkit'
uri = 'about:blank'
#TODO: rename, accidentally removed?
appdir = None
width = 800
height = 600
# TODO: change WebKit patch to hold reference
listeners = None
def __init__(self):
self.listeners = dict()
def run(self):
logger.debug('mainloop:entering...')
Gtk.main()
logger.debug('mainloop:exiting...')
def setup(self, uri=uri, **kwds):
for k, v in kwds.iteritems():
if hasattr(self, k):
setattr(self, k, v)
if '://' not in uri:
uri = 'file://%s' % os.path.abspath(uri)
uri = self.uri = Soup.URI.new(uri)
logger.info('uri:\n%s', pformat(URI.items(uri)))
view = self._view = WebKit.WebView()
toplevel = self._toplevel = Gtk.Window()
scroller = self._scroller = Gtk.ScrolledWindow()
toplevel.set_default_size(self.width, self.height)
toplevel.add(scroller)
scroller.add(view)
char_q, mod_ctrl = Gtk.accelerator_parse('<Ctrl>q')
accel_destroy = Gtk.AccelGroup.new()
accel_destroy.connect(char_q, mod_ctrl, 0, self._destroy_cb)
#TODO: file:/// with # or ? causes error
view.load_uri(self.uri.to_string(0))
view.connect('onload-event', self._frame_loaded_cb)
view.connect('title-changed', self._title_changed_cb)
view.connect('icon-loaded', self._icon_loaded_cb)
view.connect('populate-popup', self._populate_popup_cb)
view.connect('console-message', self._console_message_cb)
#view.connect('resource-content-length-received',
# self._resource_recv_cb, None)
#view.connect('resource-request-starting',
# self._resource_init_cb, None)
settings = view.get_property('settings')
settings.set_property('auto-resize-window', True)
settings.set_property('enable-file-access-from-file-uris', True)
settings.set_property('enable-accelerated-compositing', True)
settings.set_property('enable-webgl', True)
# GLib.PRIORITY_LOW == 300
GObject.timeout_add(1000, self._idle_loop_cb, priority=300)
signal.signal(signal.SIGINT, self._destroy_cb)
toplevel.connect('destroy', self._destroy_cb)
toplevel.add_accel_group(accel_destroy)
# display and run mainloop (returns after frame load)
toplevel.show_all()
Gtk.main()
def getUri(self):
return self.uri.to_string(0)
def _idle_loop_cb(self):
# mostly here to enable Ctrl^C/SIGINT without resorting to:
# signal.signal(signal.SIGINT, signal.SIG_DFL)
# ... but could be useful in the future; an active timeout is required, else
# SIGINT is not processed until the next Gdk event
return True
def _console_message_cb(self, view, message, lineno, source):
logger.debug('JAVASCRIPT:%s:%s', lineno, message)
return True
def _resource_init_cb(self, view, frame, webres, req, res, data):
m = req.get_message()
content_type = m.request_headers.get_content_type()[0]
if content_type is not None:
if content_type.startswith('application/json'):
print 'JSONRPC!'
def _resource_recv_cb(self, view, frame, res, length, data):
#m = res.get_network_request().get_message()
#print m.request_headers.get_content_type()
print res.props.mime_type, res.props.uri
def _frame_loaded_cb(self, view, frame):
#TODO: multiple apps should be simple to implement
if frame is not view.get_main_frame():
logger.debug('sub-frame: %s', frame)
return
self._doc = self._view.get_dom_document()
self._wnd = self._doc.get_default_view()
self._doc.__dict__['ctx'] = self
self._wnd.__dict__['ctx'] = self
# GITimer: ready the listener
view.execute_script(r'''
(function(wnd, doc, uujs, uugi, undefined){
wnd.addEventListener(uujs, function(e){
var buf = e.relatedTarget;
var evt = doc.createEvent('Event');
evt.initEvent(uugi, 0, 0);
buf.data = wnd[buf.data](function(){
buf.dispatchEvent(evt);
}, e.detail);
});
})(window, document, %r, %r);
''' % (GITimer.UUID, GITimer.UUID[::-1]))
#TODO: redundant? incompat with poly-frame/reload!
import __pyjamas__
__pyjamas__.set_gtk_module(Gtk)
__pyjamas__.set_main_frame(self)
#TODO: make this work ... and skip bootstrap.js
#for m in __pyjamas__.pygwt_processMetas():
# minst = module_load(m)
# minst.onModuleLoad()
# return control to setup()
Gtk.main_quit()
def _icon_loaded_cb(self, view, icon_uri):
current = view.get_property('uri')
dom = view.get_dom_document()
icon = (Gtk.STOCK_DIALOG_QUESTION, None, 0)
found = set()
found.add(icon_uri)
found.add(urljoin(current, '/favicon.ico'))
scanner = {
'href': dom.querySelectorAll(
'head link[rel~=icon][href],'
'head link[rel|=apple-touch-icon][href]'
),
'content': dom.querySelectorAll(
'head meta[itemprop=image][content]'
),
}
for attr in scanner.keys():
for i in xrange(scanner[attr].length):
uri = getattr(scanner[attr].item(i), attr)
if len(uri) == 0:
continue
found.add(urljoin(current, uri))
for uri in found:
fp = urlopen(uri)
if fp.code != 200:
continue
i = fp.info()
if i.maintype == 'image' and 'content-length' in i:
try:
ldr = Gtk.gdk.PixbufLoader()
ldr.write(fp.read(int(i['content-length'])))
ldr.close()
except:
continue
pb = ldr.get_pixbuf()
pbpx = pb.get_height() * pb.get_width()
if pbpx > icon[2]:
icon = (uri, pb, pbpx)
if icon[1] is None:
self._toplevel.set_icon_name(icon[0])
else:
self._toplevel.set_icon(icon[1])
logger.debug('icon:%s', icon[0])
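`_icon_loaded_cb` collects icon candidates by resolving `<link>`/`<meta>` values plus the site-root `/favicon.ico` fallback against the current page URI. The resolution step alone, sketched with Python 3's `urllib.parse` (the helper name is an assumption for the demo):

```python
from urllib.parse import urljoin  # the Python 2 original gets urljoin from urlparse

def candidate_icons(page_uri, hrefs):
    # always consider the site-root favicon, then any link/meta attribute values
    found = {urljoin(page_uri, '/favicon.ico')}
    for href in hrefs:
        if len(href) == 0:
            continue  # skip empty href/content attributes, as the callback does
        found.add(urljoin(page_uri, href))
    return found

icons = candidate_icons('http://example.org/app/index.html', ['icons/app.png'])
print(sorted(icons))
```

Relative hrefs resolve against the page path (`.../app/icons/app.png`), while the leading-slash fallback resolves against the site root, matching what the callback's `urljoin` calls produce.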
def mash_attrib(self, name, joiner='-'):
return name
def alert(self, msg):
self._wnd.alert(msg)
def _populate_popup_cb(self, view, menu):
menu.append(Gtk.SeparatorMenuItem.new())
go = lambda signal, uri: view.load_uri(uri)
entries = [
('About WebKit', 'http://live.gnome.org/WebKitGtk'),
('About pyjs.org', 'http://pyjs.org/About.html'),
]
for label, uri in entries:
entry = Gtk.MenuItem(label=label)
entry.connect('activate', go, uri)
menu.append(entry)
menu.show_all()
def getDomWindow(self):
return self._wnd
def getDomDocument(self):
return self._doc
def getXmlHttpRequest(self):
return GIXMLHttpRequest(self)
def addWindowEventListener(self, event_name, cb):
listener = Callback(self, cb, True)
self._wnd.add_event_listener(event_name, listener, False, None)
#TODO: this can probably just be removed now?
# if not, MUST USE WEAKREFS OR IT WILL LEAK!
self.listeners[listener] = self._wnd
def addXMLHttpRequestEventListener(self, element, event_name, cb):
cb = Callback(element, cb, True)
setattr(element, "on%s" % event_name, cb._callback)
def addEventListener(self, element, event_name, cb):
listener = Callback(element, cb, False)
element.add_event_listener(event_name, listener, False, None)
#TODO: this can probably just be removed now?
# if not, MUST USE WEAKREFS OR IT WILL LEAK!
self.listeners[listener] = element
def _destroy_cb(self, *args):
logger.debug('destroy:draining events...')
Gtk.main_quit()
def _title_changed_cb(self, view, frame, title):
self._toplevel.set_title(title)
window = property(getDomWindow)
document = property(getDomDocument)
_alert = alert
_addEventListener = addEventListener
_addWindowEventListener = addWindowEventListener
_addXMLHttpRequestEventListener = addXMLHttpRequestEventListener
resolver = GIResolver(WebKit.DOMObject)
context = RunnerContext()
setup = context.setup
run = context.run
--- file: backend/tradersplatform/equipment/urls.py (repo: ybedirhanpak/bounswe2019group1, license: MIT) ---

from django.conf.urls import url
from . import views
# note: every route reuses name="update_de", so reverse('update_de') resolves
# only the last registration; traceList also reuses ETFListAPIView
urlpatterns = [
    url(r'^currency/', views.CurrencyAPI.as_view(), name="update_de"),
    url(r'^currencylastmonth/', views.CurrencyAPILastMonth.as_view(), name="update_de"),
    url(r'^currencyconvert/', views.CurrencyConverterAPI.as_view(), name="update_de"),
    url(r'^cryptocurrency/', views.CryptoCurrencyAPI.as_view(), name="update_de"),
    url(r'^cryptocurrencyhistorical/', views.CryptoCurrencyHistoricalAPI.as_view(), name="update_de"),
    url(r'^metalcurrency/', views.MetalCurrencyAPI.as_view(), name="update_de"),
    url(r'^stock/', views.StockCurrencyAPI.as_view(), name="update_de"),
    url(r'^traceindices/', views.TraceIndicesAPIView.as_view(), name="update_de"),
    url(r'^traceindicesgainers/', views.TraceIndicesGainers.as_view(), name="update_de"),
    url(r'^etfs/', views.ETFsListAPIView.as_view(), name="update_de"),
    url(r'^bonds/', views.BondListAPIView.as_view(), name="update_de"),
    url(r'^lastmonth/', views.StockLastMonth.as_view(), name="update_de"),
    url(r'^etfdetaillist/', views.ETFDeatilistAPIView.as_view(), name="update_de"),
    url(r'^currencyList/', views.CurrencyList.as_view(), name="update_de"),
    url(r'^cryptocurrencyList/', views.CryptoCurrencyList.as_view(), name="update_de"),
    url(r'^metalcurrencyList/', views.MetalListAPIView.as_view(), name="update_de"),
    url(r'^stockcurrencyList/', views.StockListAPIView.as_view(), name="update_de"),
    url(r'^etfList/', views.ETFListAPIView.as_view(), name="update_de"),
    url(r'^traceList/', views.ETFListAPIView.as_view(), name="update_de"),
]
--- file: L1Trigger/GlobalCaloTrigger/test/testElectrons_cfg.py (repo: ckamtsikis/cmssw, license: Apache-2.0) ---

import FWCore.ParameterSet.Config as cms
process = cms.Process("TestGct")
process.load("L1Trigger.GlobalCaloTrigger.test.gctTest_cff")
process.load("L1Trigger.GlobalCaloTrigger.test.gctConfig_cff")
process.source = cms.Source("EmptySource")
process.maxEvents = cms.untracked.PSet(
    input = cms.untracked.int32(1)
)
process.p1 = cms.Path(process.gctemu)
process.gctemu.doElectrons = True
process.gctemu.inputFile = 'data/testEmDummy_'
--- file: auto_ml/GRU/gru_tuner.py (repo: AlongZG/XFinAI, license: Apache-2.0) ---

import sys
import nni
from sklearn.metrics import r2_score
sys.path.append('../..')
from model_layer.model_hub import GRU
from model_layer.model_tuner import RecurrentModelTuner
from utils import base_io
def main(model_class, future_index, params):
    target_metric_func = r2_score
    metric_name = 'R_Square'
    tune_model = RecurrentModelTuner(model_class=model_class, future_index=future_index,
                                     target_metric_func=target_metric_func, metric_name=metric_name,
                                     params=params)
    tune_model.run()


if __name__ == '__main__':
    future_index = 'IC'
    model_class = GRU
    # params = base_io.load_best_params(future_index, model_class.name)
    params = nni.get_next_parameter()
    main(model_class, future_index, params)
--- file: All_Source_Code/MultilayerPerceptronNeuralNetwork/MultilayerPerceptronNeuralNetwork_9.py (repo: APMonitor/pds, license: MIT) ---

from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt

# generate training data
x = np.linspace(0.0, 2*np.pi, 20)
y = np.sin(x)

# option for fitting function
select = True  # True / False

if select:
    # Size with cosine function
    nin = 1   # inputs
    n1 = 1    # hidden layer 1 (linear)
    n2 = 1    # hidden layer 2 (nonlinear)
    n3 = 1    # hidden layer 3 (linear)
    nout = 1  # outputs
else:
    # Size with hyperbolic tangent function
    nin = 1   # inputs
    n1 = 2    # hidden layer 1 (linear)
    n2 = 2    # hidden layer 2 (nonlinear)
    n3 = 2    # hidden layer 3 (linear)
    nout = 1  # outputs

# Initialize gekko
train = GEKKO()
test = GEKKO()
model = [train, test]

for m in model:
    # input(s)
    m.inpt = m.Param()

    # layer 1
    m.w1 = m.Array(m.FV, (nin, n1))
    m.l1 = [m.Intermediate(m.w1[0, i]*m.inpt) for i in range(n1)]

    # layer 2
    m.w2a = m.Array(m.FV, (n1, n2))
    m.w2b = m.Array(m.FV, (n1, n2))
    if select:
        m.l2 = [m.Intermediate(sum([m.cos(m.w2a[j, i]+m.w2b[j, i]*m.l1[j]) \
                                    for j in range(n1)])) for i in range(n2)]
    else:
        m.l2 = [m.Intermediate(sum([m.tanh(m.w2a[j, i]+m.w2b[j, i]*m.l1[j]) \
                                    for j in range(n1)])) for i in range(n2)]

    # layer 3
    m.w3 = m.Array(m.FV, (n2, n3))
    m.l3 = [m.Intermediate(sum([m.w3[j, i]*m.l2[j] \
                                for j in range(n2)])) for i in range(n3)]

    # output(s)
    m.outpt = m.CV()
    m.Equation(m.outpt == sum([m.l3[i] for i in range(n3)]))

    # flatten matrices
    m.w1 = m.w1.flatten()
    m.w2a = m.w2a.flatten()
    m.w2b = m.w2b.flatten()
    m.w3 = m.w3.flatten()

# Fit parameter weights
m = train
m.inpt.value = x
m.outpt.value = y
m.outpt.FSTATUS = 1
for i in range(len(m.w1)):
    m.w1[i].FSTATUS = 1
    m.w1[i].STATUS = 1
    m.w1[i].MEAS = 1.0
for i in range(len(m.w2a)):
    m.w2a[i].STATUS = 1
    m.w2b[i].STATUS = 1
    m.w2a[i].FSTATUS = 1
    m.w2b[i].FSTATUS = 1
    m.w2a[i].MEAS = 1.0
    m.w2b[i].MEAS = 0.5
for i in range(len(m.w3)):
    m.w3[i].FSTATUS = 1
    m.w3[i].STATUS = 1
    m.w3[i].MEAS = 1.0
m.options.IMODE = 2
m.options.SOLVER = 3
m.options.EV_TYPE = 2
m.solve(disp=False)

# Test sample points
m = test
for i in range(len(m.w1)):
    m.w1[i].MEAS = train.w1[i].NEWVAL
    m.w1[i].FSTATUS = 1
    print('w1['+str(i)+']: '+str(m.w1[i].MEAS))
for i in range(len(m.w2a)):
    m.w2a[i].MEAS = train.w2a[i].NEWVAL
    m.w2b[i].MEAS = train.w2b[i].NEWVAL
    m.w2a[i].FSTATUS = 1
    m.w2b[i].FSTATUS = 1
    print('w2a['+str(i)+']: '+str(m.w2a[i].MEAS))
    print('w2b['+str(i)+']: '+str(m.w2b[i].MEAS))
for i in range(len(m.w3)):
    m.w3[i].MEAS = train.w3[i].NEWVAL
    m.w3[i].FSTATUS = 1
    print('w3['+str(i)+']: '+str(m.w3[i].MEAS))
m.inpt.value = np.linspace(-2*np.pi, 4*np.pi, 100)
m.options.IMODE = 2
m.options.SOLVER = 3
m.solve(disp=False)

plt.figure()
plt.plot(x, y, 'bo', label='data')
plt.plot(test.inpt.value, test.outpt.value, 'r-', label='predict')
plt.legend(loc='best')
plt.ylabel('y')
plt.xlabel('x')
plt.show()
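With the default 1-1-1-1 sizes and the cosine option, the network above computes `w3*cos(w2a + w2b*w1*x)`. A stdlib-only sanity check of that forward pass, with hand-picked weights rather than the fitted values:

```python
import math

def forward(x, w1, w2a, w2b, w3):
    l1 = w1 * x                    # hidden layer 1 (linear)
    l2 = math.cos(w2a + w2b * l1)  # hidden layer 2 (cosine activation)
    return w3 * l2                 # hidden layer 3 / output (linear)

# cos(pi/2 - x) == sin(x), so these weights reproduce the training target
xs = [i * 2 * math.pi / 19 for i in range(20)]
ok = all(abs(forward(x, 1.0, math.pi / 2, -1.0, 1.0) - math.sin(x)) < 1e-9 for x in xs)
print(ok)  # True
```

This is why the cosine variant can fit a sine wave with a single unit per layer: the solver only has to discover a phase shift and a sign flip.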
--- file: mergeLOCandDBpedia.py (repo: thisismattmiller/linked-jazz-name-directory, license: MIT) ---

import sys, os, re, urllib, time
def main():
if not os.path.exists('data/loc_single'):
os.makedirs('data/loc_single')
dbData = open('data/jazzPeople.nt', 'r')
personNames = {}
personBirthDates = {}
personDeathDates = {}
nameCollisons = {}
matchesBothDate = []
matchesBothDateURIs = []
matchesSingleDate = []
foundCheckList = []
possibleLOC={}
allLOC = {}
for line in dbData:
quad = line.split()
if quad[1] == '<http://xmlns.com/foaf/0.1/name>':
name = ''
name = " ".join(quad[2:])
name = name[1:name[1:].find('@en')]
if len(name) < 5:
print name, line
name = name.replace('\\','')
if personNames.has_key(name) == False:
personNames[name] = quad[0]
else:
if personNames[name] != quad[0]:
if nameCollisons.has_key(name):
if quad[0] not in nameCollisons[name]:
nameCollisons[name].append(quad[0])
else:
nameCollisons[name]=[quad[0]]
print "Name collision", name, line
print personNames[name], nameCollisons[name]
addNames = []
if name.find('"') != -1:
print name
name = name.split('"')[0].strip() + ' ' + name.split('"')[2].strip()
addNames.append(name)
#we also want to pull their name from the URL, because that is often the most common variant of their name
uri = quad[0]
name = formatName(quad[0].split('/resource/')[len(quad[0].split('/resource/'))-1])
name = name.replace('\\','')
addNames.append(name)
#print name
#remove any nick name and add that as well
if name.find('"') != -1:
print name
name = name.split('"')[0].strip() + ' ' + name.split('"')[2].strip()
addNames.append(name)
for aName in addNames:
#is this name already in the lookup:
print aName
if personNames.has_key(aName):
print "\t Name already in personNames"
#yes, is it the same uri as this one?
if personNames[aName] != quad[0]:
print "\t Name Has Different URI Attached"
#no, it is a new URI, is it already in the collision lookup?
if nameCollisons.has_key(aName):
print "\t Name already in collision"
#yes, is this URI already in it?
if quad[0] not in nameCollisons[aName]:
print "\t Different Name, adding to it"
#no, add it
nameCollisons[aName].append(quad[0])
else:
#no, add a new array to the collision with it
nameCollisons[aName] = [quad[0]]
print "\t Creating new collision record"
else:
print "\t not yet in personNames, adding it"
personNames[aName] = quad[0]
if quad[1] == '<http://dbpedia.org/ontology/deathDate>':
deathDate = ''
deathDate = " ".join(quad[2:])
deathDate = deathDate[1:deathDate[1:].find('-')+1]
if len(deathDate) != 4:
print "Error death date: ", line
else:
personDeathDates[quad[0]] = deathDate
#print deathDate
if quad[1] == '<http://dbpedia.org/ontology/birthDate>':
birthDate = ''
birthDate = " ".join(quad[2:])
birthDate = birthDate[1:birthDate[1:].find('-')+1]
if len(birthDate) != 4:
print "Error birth date: ", line
else:
personBirthDates[quad[0]] = birthDate
print len(personNames), len(personBirthDates), len(personDeathDates)
temp = open("db_tmp.txt","w")
for key, value in personNames.iteritems():
line = key + ' ' + value
if personBirthDates.has_key(value):
line = line + ' ' + personBirthDates[value]
if personDeathDates.has_key(value):
line = line + ' ' + personDeathDates[value]
temp.writelines(line + "\n")
for key, value in nameCollisons.iteritems():
for x in value:
line = key + ' ' + x
if personBirthDates.has_key(x):
line = line + ' ' + personBirthDates[x]
if personDeathDates.has_key(x):
line = line + ' ' + personDeathDates[x]
temp.writelines(line + "\n")
print line
locFile = open('data/personauthoritiesnames.nt.skos', 'r')
counter = 0
counterMatched = 0
print "building name list"
locDebug = open("loc_tmp.txt","w")
for line in locFile:
counter = counter+1
#if counter % 100000 == 0:
# print "processed " + str(counter / 100000) + "00k names"
if counter % 1000000 == 0:
print "processed ", counter / 1000000,"Million names!"
quad = line.split();
name = " ".join(quad[2:])
name = name[1:name[1:].find('@EN')]
name = name.replace('?','')
year = re.findall(r'\d{4}', name)
born = 0
died = 0
possibleNames = []
if len(year) != 0:
if len(year) == 1 and name[len(name)-1:] != '-':
if name.find(' b.') != -1:
born = year[0]
#print "Born : ",year[0]
elif name.find(' d.') != -1:
died = year[0]
#print "died : ",year[0]
elif name.find(' fl.') != -1:
born = year[0]
#print "born(flourished) : ",year[0]
elif name.find('jin shi') != -1:
born = year[0]
#print "born(third stage) : ",year[0]
elif name.find('ju ren') != -1:
born = year[0]
#print "born(second stage) : ",year[0]
elif len(re.findall(r'\d{3}\-', name)) != 0:
year = re.findall(r'\d{3}\-', name)
born = year[0][0:3]
#print "born : ", year[0][0:3]
#now get the death year
died = re.findall(r'\d{4}', name)[0]
elif len(re.findall(r'\-\d{4}', name)) != 0:
died = re.findall(r'\-\d{4}', name)[0][1:]
elif name.find(' ca. ') != -1 or name.find(' ca ') != -1:
born = year[0]
#print "born(ca) : ",year[0]
elif name.find(' b ') != -1:
born = year[0]
#print "Born : ",year[0]
elif name.find(' d ') != -1:
died = year[0]
#print "died : ",year[0]
elif name.find(' born ') != -1:
born = year[0]
#print "Born : ",year[0]
elif name.find(' died ') != -1:
died = year[0]
#print "died : ",year[0]
else:
#print name, "\n"
#print "error: cannot figure out this date, update the regex"
#we have hit ~90% of the cases by now; the rest is straight-up weird stuff, so just grab the date
born = year[0]
#print len(year)
elif len(year) == 1 and name[len(name)-1:] == '-':
born = year[0]
elif len(year) == 2:
born = year[0]
died = year[1]
elif len(year) == 3:
#they are doing "1999 or 2000 - blah blah blah" take first and last
born = year[0]
died = year[2]
elif len(year) == 4:
#they are doing "1999 or 2000 - blah blah blah" take first and last
born = year[0]
died = year[3]
else:
print name, "Could not process date \n"
sys.exit()
#print name, born, died
#else:
#these people would have lived before 1000 AD (three-digit years); we currently do not care about them.
#if len(re.findall(r'\d{3}', name)) != 0:
#print name
#personDates[quad[0]] = [born,died]
#now process the name part
#chop off the rest where a number is detected to get rid of any date
if re.search(r'\d{1}',name) != None:
name = name[0:name.find(re.search(r'\d{1}',name).group())]
name=name.strip()
#now chop off anything past the second comma; it is not name material, and names with 3 commas are mostly "sir" and "duke of earl" style titles we do not care about
if len(re.findall(',', name)) == 2 or len(re.findall(',', name)) == 3:
name = name.split(',')[0] + ', ' + name.split(',')[1]
#print name, '|', newname
if name.find('\"') != -1:
name = name.replace("\\",'')
if len(re.findall(',', name)) == 1:
if name.find('(') == -1:
#there is no parenthetical name
newname = name.split(',')
newname = newname[1] + ' ' + newname[0]
#print name, '|', newname
possibleNames.append(newname.strip())
#we want to add that name, but also a version without the middle initial, if one is present
if len(newname.split()) == 3 and (newname.split()[1][len(newname.split()[1])-1] == '.' or len(newname.split()[1]) == 1):
newname = newname.split()[0] + ' ' + newname.split()[2]
#print "\t" + newname
possibleNames.append(newname.strip())
#we also want to add a version that drops the first initial when the name is an initial followed by a full middle name
if len(newname.split()) == 3 and len(newname.split()[1]) > 2 and (newname.split()[0][len(newname.split()[0])-1] == '.' or len(newname.split()[1]) == 1):
newname = newname.split()[1] + ' ' + newname.split()[2]
#print "\t" + newname
possibleNames.append(newname.strip())
else:
#they have parentheses in their name, meaning the long form of the name is contained in the parentheses
newname = name.split(',')
newname = newname[1] + ' ' + newname[0]
#cut out the stuff before the paren
newname = newname[newname.find('(')+1:]
newname = newname.replace(')','')
#print name, '|', newname
possibleNames.append(newname.strip())
#now also cut out the middle initial if it is there and add that version
if len(newname.split()) == 3 and (newname.split()[1][len(newname.split()[1])-1] == '.' or len(newname.split()[1]) == 1):
newname = newname.split()[0] + ' ' + newname.split()[2]
#print "\t" + newname
possibleNames.append(newname.strip())
else:
#so here we are... the depths of the quirks
if name.find('(') != -1:
#if the very first thing is an initial, it is likely an abbreviated name and the full name is in the parens
if len(name.split()[0])==2:
if name.split()[0][1] == '.':
newname = name.split('(')[1]
newname = newname.replace(')','')
possibleNames.append(newname.strip())
#print name, '|', newname
#if len(name.split()[len(name.split())-1])==2:
# if name.split()[len(name.split())-1][1] == '.':
# print name, '|'
else:
#this will be stuff like P-King (Musician), or Shyne (Rapper), stuff we are interested in, nicknames, so cut out the descriptor
newname = name.split('(')[0].strip()
#TODO: if we really care to take this further here is a spot where we will lose some names
#the quirks get very specific and would need a lot more rules
#print name, '|', newname
possibleNames.append(newname.strip())
else:
#print name, '|'
newname = name.strip()
#single names here, add them in
possibleNames.append(newname.strip())
#print possibleNames
#skip logic:
if int(born) != 0 and int(born) < 1875:
continue
for aPossible in possibleNames:
if personNames.has_key(aPossible):
#we have a match (!)
#add all the Ids we are going to check into a list
useURIs = []
#the main one
useURIs.append(personNames[aPossible])
#check for collision names, names that are the same but reflect different URIs
if nameCollisons.has_key(aPossible):
for collison in nameCollisons[aPossible]:
useURIs.append(collison)
for useURI in useURIs:
locDebug.writelines(aPossible + ' ' + str(born) + ' ' + str(died) + "\n")
if allLOC.has_key(aPossible):
#it is in here already, see if it has this URI
if quad[0] not in allLOC[aPossible]:
allLOC[aPossible].append(quad[0])
else:
allLOC[aPossible] = [quad[0]]
didMatched = False
if personBirthDates.has_key(useURI) and personDeathDates.has_key(useURI):
if int(born) != 0 and int(died) != 0 and int(personBirthDates[useURI]) != 0 and int(personDeathDates[useURI]) != 0:
if (int(personBirthDates[useURI]) == int(born)) and (int(died) == int(personDeathDates[useURI])):
if [useURI, quad[0]] not in matchesBothDate:
didMatched=True
counterMatched = counterMatched + 1
matchesBothDate.append([useURI, quad[0]])
foundCheckList.append(useURI)
matchesBothDateURIs.append(useURI)
#print aPossible, quad[0], born, died
#print aPossible, useURI, personBirthDates[useURI], personDeathDates[useURI]
continue
#see if birth years match
if personBirthDates.has_key(useURI):
if int(personBirthDates[useURI]) == int(born) and int(personBirthDates[useURI]) != 0 and int(born) != 0:
if [useURI, quad[0]] not in matchesSingleDate:
#print personNames[aPossible], '=', quad[0]
didMatched=True
counterMatched = counterMatched + 1
matchesSingleDate.append([useURI, quad[0]])
foundCheckList.append(useURI)
#print aPossible, quad[0], born, "born match"
#print aPossible, useURI, personBirthDates[useURI]
continue
#does it have a death date match?
if personDeathDates.has_key(useURI):
if int(personDeathDates[useURI]) == int(died) and int(personDeathDates[useURI]) != 0 and int(died) != 0:
if [useURI, quad[0]] not in matchesSingleDate:
#print personNames[aPossible], '=', quad[0]
matchesSingleDate.append([useURI, quad[0]])
didMatched=True
counterMatched = counterMatched + 1
foundCheckList.append(useURI)
#print aPossible, quad[0], died, "death match"
#print aPossible, useURI, personDeathDates[useURI]
continue
#we are now going to remove any matches from matchesSingleDate where there is a perfect date match already
temp = []
for aSingleDateMatch in matchesSingleDate:
if aSingleDateMatch[0] not in matchesBothDateURIs:
temp.append(aSingleDateMatch)
else:
for x in matchesBothDate:
if x[0] == aSingleDateMatch[0]:
print "Attempted Dupe", aSingleDateMatch
print "With", x
matchesSingleDate = list(temp)
matchedSingle = []
matchedMany = []
matchedNone = []
for key, value in personNames.iteritems():
if value not in foundCheckList:
#print "Not matched " + value + ' ' + key
if allLOC.has_key(key):
if len(allLOC[key]) == 1:
#print "\tOnly one possible LOC match:" + allLOC[key][0]
matchedSingle.append([value,allLOC[key][0]])
else:
#print "\t 1+ possible LOC match:", allLOC[key]
matchedMany.append([value,allLOC[key]])
else:
matchedNone.append(value)
print " \n****Collision***\n"
for key, value in nameCollisons.iteritems():
for x in value:
if x not in foundCheckList:
#print "Not matched " + x + ' ' + key
if allLOC.has_key(key):
if len(allLOC[key]) == 1:
#print "\tOnly one possible LOC match:" + allLOC[key][0]
matchedSingle.append([x,allLOC[key][0]])
else:
#print "\t 1+ possible LOC match:", allLOC[key]
matchedMany.append([x,allLOC[key]])
else:
matchedNone.append(x)
#for key, value in possibleLOC.iteritems():
#if len(value) == 1:
#if value not in matches:
# matches.append(value)
#print key, '=', value
#make sure there are no duplicates, i.e. the same DB-to-LOC record pair appearing twice in the singles
tempCopy = []
for aSingle in matchedSingle:
add = True
for anotherSingle in tempCopy:
if aSingle[0] == anotherSingle[0] and aSingle[1] == anotherSingle[1]:
add = False
if add:
tempCopy.append(aSingle)
matchedSingle = list(tempCopy)
#now we are going to go through the singles and pull out anyone that has been added twice
#this can happen for common names born in the same year, move them to the 1->many list
matchedSingleCheck = []
matchedSingleDupes = []
for aSingle in matchedSingle:
if aSingle[0] not in matchedSingleCheck:
matchedSingleCheck.append(aSingle[0])
else:
print "Dupe in singles found:", aSingle
matchedSingleDupes.append(aSingle[0])
singleDupes = {}
tempCopy = []
print len(matchedSingle)
for aSingle in matchedSingle:
if aSingle[0] in matchedSingleDupes:
if singleDupes.has_key(aSingle[0]):
singleDupes[aSingle[0]].append(aSingle[1])
else:
singleDupes[aSingle[0]] = [aSingle[1]]
else:
tempCopy.append(aSingle)
matchedSingle = list(tempCopy)
print len(matchedSingle)
print singleDupes
#add them to the matchedmany list
for key, value in singleDupes.iteritems():
matchedMany.append([key,value])
#we now need to do the same for matchesSingleDate: a record may truly match on a single date, but it could also have matched other people
matchesSingleDateCheck = []
matchesSingleDateDupes = []
for aSingle in matchesSingleDate:
if aSingle[0] not in matchesSingleDateCheck:
matchesSingleDateCheck.append(aSingle[0])
else:
print "Dupe in single date found:", aSingle
matchesSingleDateDupes.append(aSingle[0])
singleDateDupes = {}
tempCopy = []
print len(matchesSingleDate)
for aSingle in matchesSingleDate:
if aSingle[0] in matchesSingleDateDupes:
if singleDateDupes.has_key(aSingle[0]):
singleDateDupes[aSingle[0]].append(aSingle[1])
else:
singleDateDupes[aSingle[0]] = [aSingle[1]]
else:
tempCopy.append(aSingle)
matchesSingleDate = list(tempCopy)
print len(matchesSingleDate)
#add them to the matchedmany list
for key, value in singleDateDupes.iteritems():
matchedMany.append([key,value])
print singleDateDupes
#TODO: This part needs to be fixed so the call to the LOC site is synchronous, waiting for the file to be ready...
machtedSingleJazz = []
machtedSingleNoJazz = []
machtedSingleNoJazzLOC = []
for x in matchedSingle:
url = x[1]
id = formatName(url.split('/names/')[len(url.split('/names/'))-1])
foundJazz = False
if os.path.exists('data/loc_single/' + id + '.nt') == False:
os.system('wget --output-document="data/loc_single/' + id + '.nt" "http://id.loc.gov/authorities/names/' + id + '.nt"')
#sleep as a TODO fix,
time.sleep( 1.5 )
if os.path.exists('data/loc_single/' + id + '.nt'):
f = open('data/loc_single/' + id + '.nt', 'r')
for line in f:
line = line.lower()
if line.find('jazz') != -1 or line.find('music') != -1 or line.find('blues') != -1 or line.find('vocal') != -1:
print line
foundJazz = True
f.close()
else:
print 'data/loc_single/' + id + '.nt does not exist'
if id in machtedSingleNoJazzLOC:
foundJazz = False
print "Dupe detected trying to assign", x
if foundJazz:
machtedSingleJazz.append(x)
machtedSingleNoJazzLOC.append(id)
else:
machtedSingleNoJazz.append(x)
print len(matchesBothDate), " BothDate Matches", len(matchesSingleDate), " Single Date Matches", len(matchedSingle), "Single LOC", len(matchedMany), "Multiple LOC matches", len(matchedNone), "No Matches"
#print len(matches)+ len(matchedSingle)+len(matchedMany) , " matched out of Total of about ", len(personNames)
print len(matchedSingle) , " = " , len(machtedSingleJazz) , " keyword found and ", len(machtedSingleNoJazz), " no keyword found"
#make the sameas files
allLines=[]
temp = open("data/sameAs_perfect.nt","w")
for value in matchesBothDate:
line = value[0] + ' <http://www.w3.org/2002/07/owl#sameAs> ' + value[1] + " . \n";
if line not in allLines:
temp.writelines(line)
allLines.append(line)
temp = open("data/sameAs_high.nt","w")
for value in matchesSingleDate:
line = value[0] + ' <http://www.w3.org/2002/07/owl#sameAs> ' + value[1] + " . \n";
if line not in allLines:
temp.writelines(line)
allLines.append(line)
temp = open("data/sameAs_medium.nt","w")
for value in machtedSingleJazz:
line = value[0] + ' <http://www.w3.org/2002/07/owl#sameAs> ' + value[1] + " . \n";
if line not in allLines:
temp.writelines(line)
allLines.append(line)
temp = open("data/sameAs_low.nt","w")
for value in machtedSingleNoJazz:
line = value[0] + ' <http://www.w3.org/2002/07/owl#sameAs> ' + value[1] + " . \n";
if line not in allLines:
temp.writelines(line)
allLines.append(line)
temp = open("data/sameAs_many.nt","w")
for value in matchedMany:
for x in value[1]:
temp.writelines(value[0] + ' <http://www.w3.org/2004/02/skos/core#closeMatch> ' + x + " . \n")
temp = open("data/sameAs_none.nt","w")
for value in matchedNone:
temp.writelines(value + ' <http://www.w3.org/2002/07/owl#sameAs> ' + '<none>' + " . \n")
def formatName(name):
#decode it to get rid of URL char codes
name = urllib.unquote(name)
#stop at the last parenthesis, so we don't get things like (jazz_player)
if name.find('(') != -1: name = name[0:name.rfind('(')]
name = name.replace('_',' ').replace('>','').strip()
name = name.replace(",","")
name = name.replace("Jr.","Jr")
name = name.replace("Sr.","Sr")
return name
if __name__ == '__main__':
main() | 26.234597 | 205 | 0.573209 | 2,746 | 22,142 | 4.605972 | 0.148216 | 0.01186 | 0.012097 | 0.006325 | 0.393501 | 0.321395 | 0.268264 | 0.203115 | 0.165244 | 0.160342 | 0 | 0.022147 | 0.288321 | 22,142 | 844 | 206 | 26.234597 | 0.780492 | 0.217686 | 0 | 0.345679 | 0 | 0 | 0.09746 | 0.0069 | 0 | 0 | 0 | 0.001185 | 0 | 0 | null | null | 0 | 0.002469 | null | null | 0.08642 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
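The middle-initial quirks handled by the parsing code above can be condensed into a small standalone helper. This is a hedged sketch of just the rule (drop a middle token that is a bare initial or ends in a period), not the script's exact logic; `normalize_name` is a hypothetical name not present in the original.

```python
def normalize_name(name):
    # Split "Last, First M." style into tokens in natural order.
    parts = [p.strip() for p in name.split(',')]
    tokens = ' '.join(reversed(parts)).split()
    variants = [' '.join(tokens)]
    # If there are three tokens and the middle one is an initial
    # ("M" or "M."), also emit the variant without it.
    if len(tokens) == 3 and (len(tokens[1]) == 1 or tokens[1].endswith('.')):
        variants.append(tokens[0] + ' ' + tokens[2])
    return variants
```

Generating both variants increases the chance of a match against authority records that omit the middle initial.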
c92aec5412d611551d94038e21842a0b7b55cda0 | 11,019 | py | Python | neuropype/nodes/CrossCorr.py | ablot/Pinceau | ae8f3dd088349f8976a5ddf91d83c60ca1ef39e2 | [
"MIT"
] | null | null | null | neuropype/nodes/CrossCorr.py | ablot/Pinceau | ae8f3dd088349f8976a5ddf91d83c60ca1ef39e2 | [
"MIT"
] | null | null | null | neuropype/nodes/CrossCorr.py | ablot/Pinceau | ae8f3dd088349f8976a5ddf91d83c60ca1ef39e2 | [
"MIT"
] | null | null | null | from neuropype import node
import matplotlib.cm as cm
from neuropype.ressources._common import gaussian_density, flatenList, exponential_density, conv_dict
from neuropype import parameter
from numpy import array
import numpy as np
import neuropype.ressources.progressbar as pgb
from neuropype.ressources.recurtime import _cc
class CrossCorr(node.Node):
'''
kernel can be "gauss" or "exp"
'''
def __init__(self, name, parent):
self.in0_times = node.Input('Time_list')
self.in0_numSweeps = node.Input('int')
self.in1_times = node.Input('Time_list')
self.in1_numSweeps = node.Input('int')
super(CrossCorr, self).__init__(name, parent)
self._inputGroups['time_list0'] = {'time_list': 'in0_times', \
'numSweeps': 'in0_numSweeps'}
self._inputGroups['time_list1'] = {'time_list': 'in1_times', \
'numSweeps': 'in1_numSweeps'}
#defining parameters
keep_zero = parameter.boolean('keep_zero', self, True)
inverse = parameter.boolean('inverse', self, False)
units = parameter.combobox('units', self, sorted(conv_dict.keys()),
'ms')
bins = parameter.integer('bins', self, 100, minVal= 1)
sd = parameter.float_param('sd', self, 1/40.)
self._params = {'keep_zero' : keep_zero,
'sd': sd,
'trange': [-50, 50],
'units': units,
'bins': bins,
'inverse': inverse,
'graphviz':{'style': 'filled',
'fillcolor': 'palegreen'},
'kernel':'gauss',
'lastBefCond':None}
def numSweeps(self):
return min(self.in0_numSweeps(),self.in1_numSweeps())
def single_cross_cor(self, index_sweep):
"""lastBefCond can be None or (cond, boundary)"""
lastBefCond = self.get_param('lastBefCond')
dts = self.get_cache(index_sweep)
if dts is not None:
return dts
trange = self.get_param('trange')
time_list0 = self.in0_times(index_sweep)
ls0 = time_list0.get_data(units = self.get_param('units'))
# if hasattr(ls0, 'count'):
# if not ls0.count():
# return []
time_list1 = self.in1_times(index_sweep)
ls1 = time_list1.get_data(units = self.get_param('units'))
# if hasattr(ls1, 'count'):
# if not ls1.count():
# return []
if self.get_param('inverse'):
temp = ls0
ls0 = ls1
ls1 = temp
dts = _cc(ls0, ls1, trange, lastBefCond = lastBefCond, keep_zero = self.get_param('keep_zero'))
self.set_cache(index_sweep, dts)
return dts
def cross_corr(self, listofsweep = None):
print 'Doing crosscorr'
if listofsweep is None:
listofsweep = range(min(self.in0_numSweeps(),
self.in1_numSweeps()))
if not isinstance(listofsweep, list):
listofsweep = [int(listofsweep)]
cor=[]
pbar = pgb.ProgressBar(maxval=len(listofsweep), term_width = 79). \
start()
for (j,i) in enumerate(listofsweep):
cor.extend(self.single_cross_cor(i))
pbar.update(j)
pbar.finish()
return cor
def kde(self, flatcor, normalise = True):
k = self.get_param('kernel')
s, e = self.get_param('trange')
if k == 'gauss':
#print 'gauss'
gauss=gaussian_density(flatcor,self.get_param('sd'), start = s, end = e)
elif k == 'exp':
print 'exp'
gauss=exponential_density(flatcor,self.get_param('sd'), start = s, end = e)
else:
raise ValueError('unknown param for kernel: %s'%k)
#print 'plot'
if normalise:
gauss[1] /= flatcor.size
return gauss
def plot_CC(self, ax, listofsweep = None, cor = None, clearAx = True, **kwargs):
if cor is None:
cor = self.cross_corr(listofsweep)
trange = self.get_param('trange')
if clearAx: ax.clear()
print 'flatening cor'
onelinecor = flatenList(cor)
gauss = self.kde(onelinecor)
a=ax.hist(onelinecor, bins = self.get_param('bins'),**kwargs)
ax.plot(gauss[0], gauss[1]*a[0].max()/gauss[1].max(), lw=1)
ax.set_xlim(trange[0], trange[1])
ax.set_title(self.name)
del(a)
ax.set_xlabel('time (ms)')
ax.figure.canvas.draw()
#print 'end !!'
return ax
def raster(self, ax, listofsweep = None, cor = None, Sorted = False,
showSw_ind = False, maxNumLine = 1000, numCol = 200, colorbar =
True, simplify = True, **kwargs):
if cor is None:
cor = self.cross_corr(listofsweep)
if Sorted:
withspikesbef = np.array([], dtype ='int16')
nospikesbef = np.array([], dtype ='int16')
last_bef = np.array([])
print "Sorting cor"
pbar = pgb.ProgressBar(maxval=len(cor), term_width = 79). \
start()
if Sorted == 'bef':
for i,line in enumerate(cor):
pbar.update(i)
last_spike = line[line<0]
if last_spike.size > 0:
last_bef = np.append(last_bef, last_spike[-1])
withspikesbef = np.append(withspikesbef, i)
else:
nospikesbef = np.append(nospikesbef,i)
pbar.finish()
sortedind = withspikesbef[last_bef.argsort()]
elif Sorted == 'aft':
for i, line in enumerate(cor):
pbar.update(i)
last_spike = line[line>0]
if last_spike.size > 0:
last_bef = np.append(last_bef, last_spike[0])
withspikesbef = np.append(withspikesbef, i)
else:
nospikesbef = np.append(nospikesbef,i)
pbar.finish()
sortedind = withspikesbef[last_bef.argsort()]
else:
raise ValueError('unkwnown argument for sorted')
argsort = np.hstack((nospikesbef,sortedind))
cor = [cor[i] for i in argsort]
else:
argsort = range(len(cor))
print "plotting cor"
#pbar = pgb.ProgressBar(maxval=len(argsort), term_width = 79). \
# start()
if simplify and len(cor)>maxNumLine:
step = int(np.ceil(len(cor)/float(maxNumLine)))
trange = self.get_param('trange')
cols = np.linspace(trange[0],trange[1], numCol+1) #to diff later
out = np.zeros((maxNumLine, numCol))
n = 0
i = 0
while n < len(cor):
flat = sorted(flatenList(cor[n:n+step]))
inds = [np.searchsorted(flat, c) for c in cols]
out[i] = np.diff(inds)
i+=1
n+=step
if i < maxNumLine:
#len(cor) not exactly divisible by maxNumLine
flat = sorted(flatenList(cor[n-step:-1])) #takes what's left
inds = [np.searchsorted(flat, c) for c in cols]
out[i] = np.diff(inds) #no extra i += 1 needed; the while loop's final increment left i at the next free row
X, Y = np.meshgrid(cols[1:],np.arange(maxNumLine))
pc = ax.pcolor(X, Y, out, cmap = cm.Greys)
if colorbar:
ax.figure.colorbar(pc, ax = ax, orientation='horizontal')
else:
lengths = map(np.size, cor)
nTot = sum(lengths)
out = np.array(np.zeros(nTot), dtype = cor[0].dtype)
outsortedindex = np.array(np.zeros(nTot), dtype = 'int')
outindex = np.array(np.zeros(nTot), dtype = 'int')
n=0
for i, j in enumerate(lengths):
out[n:n+j] = cor[i]
outsortedindex[n:n+j] = np.ones(j)*i
outindex[n:n+j] = np.ones(j)*argsort[i]
n+=j
if showSw_ind:
if kwargs.has_key('c'):
print 'kwarg "c" incompatible with option "showSw_ind"'
kwargs['c'] = np.array(outindex, dtype = 'float')*255/outindex.max()
if not kwargs.has_key('c'): kwargs['c'] = 'k'
if not kwargs.has_key('marker'): kwargs['marker'] = 'o'
if kwargs.has_key('showSw_ind'): kwargs.pop('showSw_ind')
ax.scatter(out, outsortedindex, **kwargs)
#for index, i in enumerate(argsort):
# pbar.update(index)
# ax.plot(cor[i], [index for n in cor[i]], 'ko', **kwargs)
ax.set_xlabel('time (%s)'%self.get_param('units'))
ax.set_ylim(outindex.min(), outindex.max())
#pbar.finish()
ax.figure.canvas.draw()
return ax
def plot_crosscorr(self, fig, listofsweep = None, cor = None, showSw_ind = False,
rasterKwargs ={}, histKwargs = {}):
if cor is None:
cor = self.cross_corr(listofsweep)
fig.clear()
ax0 = fig.add_subplot(211)
ax1 = fig.add_subplot(212, sharex = ax0)
self.raster(ax0, listofsweep, cor = cor,showSw_ind = showSw_ind, **rasterKwargs)
self.plot_CC(ax1, listofsweep, cor = cor, **histKwargs)
return fig
def notIsolatedSpikes(self, sweep):
"""return a list of boolean arrays (one per sweep) indicating if there
is at least one spike in times1 within trange around each spike of times0
Arguments:
- `sweep`: sweep index or list of sweep indices
"""
if not isinstance(sweep, list):
sweep = [int(sweep)]
delist = True
else:
sweep = [int(i) for i in sweep]
delist = False
out = []
for i in sweep:
out.append([c.size != 0 for c in self.single_cross_cor(i)])
if delist: return out[0]
return out
def save(self, name = None, force = 0):
"""Save crossCorr in file 'name' (can only save ALL the times at once)
'name' is absolute or relative to parent.home
if 'force', replace existing file
"""
import os
if name is None:
name = self.parent.name+'_'+self.name+'_crossCorr.npz'
if name[0] != '/':
name = self.parent.home + name
data = self.cross_corr()
np.savez(name, **dict([('sw_'+str(i), d) for i, d in enumerate(data)]))
| 39.213523 | 111 | 0.517833 | 1,262 | 11,019 | 4.421553 | 0.221078 | 0.018817 | 0.030108 | 0.012903 | 0.25466 | 0.209857 | 0.166667 | 0.144803 | 0.144803 | 0.124373 | 0 | 0.014951 | 0.362646 | 11,019 | 280 | 112 | 39.353571 | 0.779581 | 0.047736 | 0 | 0.21028 | 0 | 0 | 0.060466 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.042056 | null | null | 0.028037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
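The `_cc` helper imported by `CrossCorr` above is defined elsewhere in the package; a minimal sketch of what a spike-train cross-correlogram computes (for every reference spike, the offsets of the other train's spikes falling inside `trange`) might look like the pure-Python version below. This is an illustrative assumption about its behavior, not the library's implementation, and it omits the `lastBefCond` and `keep_zero` options.

```python
def cross_corr_sketch(ref, other, trange):
    lo, hi = trange
    dts = []
    for t0 in ref:
        # offsets of 'other' spikes relative to this reference spike
        dts.append([t1 - t0 for t1 in other if lo <= t1 - t0 <= hi])
    return dts
```

Flattening the per-spike lists and histogramming the offsets gives the cross-correlogram that `plot_CC` draws.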
c92eb492dd12d9cad6f3882844f81c16176aedc5 | 2,196 | py | Python | experiments/tf_trainer/common/base_keras_model.py | deeglaze/conversationai-models | 34d4645e5a3c9c632e20d485b783a72e525f8a59 | [
"Apache-2.0"
] | null | null | null | experiments/tf_trainer/common/base_keras_model.py | deeglaze/conversationai-models | 34d4645e5a3c9c632e20d485b783a72e525f8a59 | [
"Apache-2.0"
] | 28 | 2020-09-26T01:20:51.000Z | 2022-02-10T02:27:17.000Z | experiments/tf_trainer/common/base_keras_model.py | deeglaze/conversationai-models | 34d4645e5a3c9c632e20d485b783a72e525f8a59 | [
"Apache-2.0"
] | null | null | null | """Abstract Base Class for Keras Models."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import tensorflow as tf
from keras import models
from tf_trainer.common import types
from tf_trainer.common import base_model
class BaseKerasModel(base_model.BaseModel):
"""Abstract Base Class for Keras Models.
Interface for Keras models.
"""
TMP_MODEL_DIR = '/tmp/keras_model'
@abc.abstractmethod
def _get_keras_model(self) -> models.Model:
"""Compiled Keras model.
Inputs should be word embeddings.
"""
pass
def estimator(self, model_dir):
"""Estimator created based on this instances Keras model.
The generated estimator expected a tokenized text input (i.e. a sequence of
words), and is responsible for generating the embedding with the provided
preprocessor).
"""
keras_model = self._get_keras_model()
# IMPORTANT: model_to_estimator creates a checkpoint, however this checkpoint
# does not contain the embedding variable (or other variables that we might
# want to add outside of the Keras model). The workaround is to specify a
# model_dir that is *not* the actual model_dir of the final model.
estimator = tf.keras.estimator.model_to_estimator(
keras_model=keras_model, model_dir=BaseKerasModel.TMP_MODEL_DIR)
new_config = estimator.config.replace(model_dir=model_dir)
# Why does estimator.model_fn not include params...
def new_model_fn(features, labels, mode, params, config):
return estimator.model_fn(features, labels, mode, config)
return tf.estimator.Estimator(
new_model_fn, config=new_config, params=estimator.params)
@staticmethod
def roc_auc(y_true: types.Tensor, y_pred: types.Tensor,
threshold=0.5) -> types.Tensor:
"""ROC AUC based on TF's metrics package. This provides AUC in a Keras
metrics compatible way (Keras doesn't have AUC otherwise).
We assume true labels are 'soft' and pick 0 or 1 based on a threshold.
"""
y_bool_true = tf.greater(y_true, threshold)
value, update_op = tf.metrics.auc(y_bool_true, y_pred)
return update_op
| 32.776119 | 81 | 0.736339 | 316 | 2,196 | 4.924051 | 0.401899 | 0.057841 | 0.026992 | 0.025707 | 0.104113 | 0.039846 | 0 | 0 | 0 | 0 | 0 | 0.00225 | 0.190346 | 2,196 | 66 | 82 | 33.272727 | 0.872891 | 0.418488 | 0 | 0 | 0 | 0 | 0.0133 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.035714 | 0.285714 | 0.035714 | 0.607143 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
c9305bf6016ecb6f3a3ce47cef3dbd96805112bd | 2,428 | py | Python | stanza/pipeline/constituency_processor.py | asears/stanza | f91ca215e175d4f7b202259fe789374db7829395 | [
"Apache-2.0"
] | 3,633 | 2016-01-21T17:29:13.000Z | 2022-03-31T13:36:47.000Z | stanza/pipeline/constituency_processor.py | asears/stanza | f91ca215e175d4f7b202259fe789374db7829395 | [
"Apache-2.0"
] | 593 | 2016-01-19T07:16:05.000Z | 2022-03-31T20:23:58.000Z | stanza/pipeline/constituency_processor.py | asears/stanza | f91ca215e175d4f7b202259fe789374db7829395 | [
"Apache-2.0"
] | 525 | 2016-01-20T03:22:19.000Z | 2022-03-24T05:51:56.000Z | """Processor that attaches a constituency tree to a sentence
The model used is a generally a model trained on the Stanford
Sentiment Treebank or some similar dataset. When run, this processor
attaches a score in the form of a string to each sentence in the
document.
TODO: a possible way to generalize this would be to make it a
ClassifierProcessor and have "sentiment" be an option.
"""
import stanza.models.constituency.trainer as trainer
from stanza.models.common import doc
from stanza.models.common.pretrain import Pretrain
from stanza.pipeline._constants import *
from stanza.pipeline.processor import UDProcessor, register_processor
@register_processor(CONSTITUENCY)
class ConstituencyProcessor(UDProcessor):
# set of processor requirements this processor fulfills
PROVIDES_DEFAULT = set([CONSTITUENCY])
# set of processor requirements for this processor
REQUIRES_DEFAULT = set([TOKENIZE, POS])
# default batch size, measured in sentences
DEFAULT_BATCH_SIZE = 50
def _set_up_model(self, config, use_gpu):
# get pretrained word vectors
pretrain_path = config.get('pretrain_path', None)
self._pretrain = Pretrain(pretrain_path) if pretrain_path else None
# set up model
charlm_forward_file = config.get('forward_charlm_path', None)
charlm_backward_file = config.get('backward_charlm_path', None)
self._model = trainer.Trainer.load(filename=config['model_path'],
pt=self._pretrain,
forward_charlm=trainer.load_charlm(charlm_forward_file),
backward_charlm=trainer.load_charlm(charlm_backward_file),
use_gpu=use_gpu)
# batch size counted as sentences
self._batch_size = config.get('batch_size', ConstituencyProcessor.DEFAULT_BATCH_SIZE)
def process(self, document):
sentences = document.sentences
# TODO: perhaps MWT should be relevant here?
# certainly parsing across an MWT boundary is an error
# TODO: maybe some constituency models are trained on UPOS not XPOS
words = [[(w.text, w.xpos) for w in s.words] for s in sentences]
trees = trainer.parse_tagged_words(self._model.model, words, self._batch_size)
document.set(CONSTITUENCY, trees, to_sentence=True)
return document
| 45.811321 | 101 | 0.697282 | 305 | 2,428 | 5.393443 | 0.396721 | 0.038298 | 0.029179 | 0.026748 | 0.035258 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001082 | 0.23888 | 2,428 | 52 | 102 | 46.692308 | 0.889069 | 0.314662 | 0 | 0 | 0 | 0 | 0.043663 | 0 | 0 | 0 | 0 | 0.038462 | 0 | 1 | 0.074074 | false | 0 | 0.185185 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c9373070207db08c0092e72ad233f6df3870cb3d | 847 | py | Python | strings/first_unique_char.py | kandarpck/leetcode | d2ffcccede5d1543aea48f18a39cdbd3d83e3ed8 | [
"MIT"
] | null | null | null | strings/first_unique_char.py | kandarpck/leetcode | d2ffcccede5d1543aea48f18a39cdbd3d83e3ed8 | [
"MIT"
] | null | null | null | strings/first_unique_char.py | kandarpck/leetcode | d2ffcccede5d1543aea48f18a39cdbd3d83e3ed8 | [
"MIT"
] | null | null | null | from collections import Counter
class Solution:
def firstUniqChar(self, s):
"""
:type s: str
:rtype: int
"""
s_counter = Counter(s)
for i, char in enumerate(s):
if s_counter.get(char) == 1:
return i
return -1
#NOTE: this second definition shadows the one above; Python keeps only the last def with a given name
def firstUniqChar(self, s):
"""
Time: N^2
Space: N
:type s: str
:rtype: int
"""
checked = set()
for i, char in enumerate(s):
if char not in checked and s.count(char, i) == 1:
return i
checked.add(char)
return -1
if __name__ == '__main__':
sol = Solution()
ip = "leetcode"
print(sol.firstUniqChar(ip))
ip = "loveleetcode"
print(sol.firstUniqChar(ip))
ip = "nononononnooon"
print(sol.firstUniqChar(ip))
| 21.717949 | 61 | 0.502952 | 98 | 847 | 4.244898 | 0.418367 | 0.057692 | 0.151442 | 0.165865 | 0.302885 | 0.105769 | 0.105769 | 0 | 0 | 0 | 0 | 0.009579 | 0.383707 | 847 | 38 | 62 | 22.289474 | 0.787356 | 0.080283 | 0 | 0.478261 | 0 | 0 | 0.06 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.043478 | 0 | 0.347826 | 0.130435 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c93f4071198974716d3d6df1e51f55e3f9d0bb7e | 6,168 | py | Python | gaea-platform/src/common/api_exception.py | h3copen/baas | 7fae7bf10be5a4109d809bad68358fb67335a9fe | [
"Apache-2.0"
] | 25 | 2019-04-19T08:57:37.000Z | 2020-10-23T07:59:03.000Z | gaea-platform/src/common/api_exception.py | h3copen/baas | 7fae7bf10be5a4109d809bad68358fb67335a9fe | [
"Apache-2.0"
] | 4 | 2020-04-14T13:06:46.000Z | 2021-12-13T20:46:36.000Z | gaea-platform/src/common/api_exception.py | h3copen/baas | 7fae7bf10be5a4109d809bad68358fb67335a9fe | [
"Apache-2.0"
] | 12 | 2019-04-19T10:01:57.000Z | 2021-01-04T12:07:59.000Z |
from flask import request,json
from werkzeug.exceptions import HTTPException
class ApiException(HTTPException):
code = 500
error_code = 500
msg = 'sorry, made a mistake'
def __init__(self,code=None, error_code=None, msg=None, header=None):
if code:
self.code = code
if msg:
self.msg = msg
if error_code:
self.error_code = error_code
super(ApiException,self).__init__(msg,None)
def get_body(self, environ=None):
body=dict(
msg=self.msg,
error_code = self.error_code,
request=request.method + ' ' + self.get_url_no_param()
)
text = json.dumps(body)
return text
def get_headers(self, environ=None):
return [('Content-Type','application/json')]
@staticmethod
def get_url_no_param():
full_url = str(request.full_path)
main_path = full_url.split('?')
return main_path[0]
class ParameterException(ApiException):
code = 400
error_code = 400
msg = 'invalid parameter'
class BadRequest(ApiException):
"""*400* `Bad Request`
Raise if the browser sends something to the application the application
or server cannot handle.
"""
code = 400
error_code = 400
msg = 'The browser (or proxy) sent a request that this server could not understand.'
class Success(ApiException):
code = 200
msg = 'success'
class ClientDisconnected(BadRequest):
"""Internal exception that is raised if Werkzeug detects a disconnected
client. Since the client is already gone at that point attempting to
send the error message to the client might not work and might ultimately
result in another exception in the server. Mainly this is here so that
it is silenced by default as far as Werkzeug is concerned.
Since disconnections cannot be reliably detected and are unspecified
by WSGI to a large extent this might or might not be raised if a client
is gone.
.. versionadded:: 0.8
"""
class SecurityError(BadRequest):
"""Raised if something triggers a security error. This is otherwise
exactly like a bad request error.
.. versionadded:: 0.9
"""
class BadHost(BadRequest):
"""Raised if the submitted host is badly formatted.
.. versionadded:: 0.11.2
"""
class Unauthorized(ApiException):
"""*401* `Unauthorized`
Raise if the user is not authorized. Also used if you want to use HTTP
basic auth.
"""
code = 401
error_code = 401
msg = (
'The server could not verify that you are authorized to access '
'the URL requested. You either supplied the wrong credentials (e.g. '
'a bad password), or your browser doesn\'t understand how to supply '
'the credentials required.'
)
class Forbidden(ApiException):
"""*403* `Forbidden`
Raise if the user doesn't have the permission for the requested resource
but was authenticated.
"""
code = 403
error_code = 403
msg = (
'You don\'t have the permission to access the requested resource. '
'It is either read-protected or not readable by the server.'
)
class NotFound(ApiException):
"""*404* `Not Found`
Raise if a resource does not exist and never existed.
"""
code = 404
error_code = 404
msg = (
'The requested URL was not found on the server. '
'If you entered the URL manually please check your spelling and '
'try again.'
)
class MethodNotAllowed(ApiException):
"""*405* `Method Not Allowed`
Raise if the server used a method the resource does not handle. For
example `POST` if the resource is view only. Especially useful for REST.
The first argument for this exception should be a list of allowed methods.
Strictly speaking the response would be invalid if you don't provide valid
methods in the header which you can do with that list.
"""
code = 405
error_code = 405
msg = 'The method is not allowed for the requested URL.'
def __init__(self, valid_methods=None, msg=None):
"""Takes an optional list of valid http methods
starting with werkzeug 0.3 the list will be mandatory."""
ApiException.__init__(self, msg)
self.valid_methods = valid_methods
def get_headers(self, environ):
headers = ApiException.get_headers(self, environ)
if self.valid_methods:
headers.append(('Allow', ', '.join(self.valid_methods)))
return headers
class NotAcceptable(ApiException):
"""*406* `Not Acceptable`
Raise if the server can't return any content conforming to the
`Accept` headers of the client.
"""
code = 406
error_code = 406
msg = (
'The resource identified by the request is only capable of '
'generating response entities which have content characteristics '
'not acceptable according to the accept headers sent in the '
'request.'
)
class RequestTimeout(ApiException):
"""*408* `Request Timeout`
Raise to signalize a timeout.
"""
code = 408
error_code = 408
msg = (
'The server closed the network connection because the browser '
'didn\'t finish the request within the specified time.'
)
class UnsupportedMediaType(ApiException):
"""*415* `Unsupported Media Type`
The status code returned if the server is unable to handle the media type
the client transmitted.
"""
code = 415
error_code = 415
description = (
'The server does not support the media type transmitted in '
'the request.'
)
class InternalServerError(ApiException):
"""*500* `Internal Server Error`
Raise if an internal server error occurred. This is a good fallback if an
unknown error occurred in the dispatcher.
"""
code = 500
error_code = 500
description = (
'The server encountered an internal error and was unable to '
'complete your request. Either the server is overloaded or there '
'is an error in the application.'
) | 27.783784 | 88 | 0.65548 | 802 | 6,168 | 4.975062 | 0.332918 | 0.038346 | 0.012531 | 0.015789 | 0.043609 | 0.011028 | 0 | 0 | 0 | 0 | 0 | 0.023757 | 0.26978 | 6,168 | 222 | 89 | 27.783784 | 0.862123 | 0.333171 | 0 | 0.136364 | 0 | 0 | 0.277359 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054545 | false | 0.009091 | 0.018182 | 0.009091 | 0.563636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
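A 405 response is only valid when it carries an `Allow` header listing the permitted methods, which is what `MethodNotAllowed.get_headers` above builds from `valid_methods`. A minimal self-contained sketch of the same pattern, using a hypothetical stand-in base class (the real `ApiException` is defined elsewhere in the file):

```python
class HTTPError(Exception):
    # Hypothetical minimal stand-in for the ApiException base class.
    code = 500
    msg = 'Internal error'

    def __init__(self, msg=None):
        super().__init__(msg or self.msg)
        self.headers = []


class MethodNotAllowed(HTTPError):
    code = 405
    msg = 'The method is not allowed for the requested URL.'

    def __init__(self, valid_methods=None, msg=None):
        super().__init__(msg)
        self.valid_methods = valid_methods
        if valid_methods:
            # RFC 7231: a 405 response MUST include an Allow header field.
            self.headers.append(('Allow', ', '.join(valid_methods)))


err = MethodNotAllowed(['GET', 'HEAD'])
print(err.code, dict(err.headers)['Allow'])  # -> 405 GET, HEAD
```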
c9431ce65c688ffe5ab210cf871cc8e081844757 | 706 | py | Python | Aeneas_cryptographic_disc.py | barisozbas/IEEExtreme | ac8e15e40e3657c42541c3698d33fa090045d7bd | [
"MIT"
] | 1 | 2018-12-29T11:11:13.000Z | 2018-12-29T11:11:13.000Z | Aeneas_cryptographic_disc.py | barisozbas/IEEExtreme | ac8e15e40e3657c42541c3698d33fa090045d7bd | [
"MIT"
] | null | null | null | Aeneas_cryptographic_disc.py | barisozbas/IEEExtreme | ac8e15e40e3657c42541c3698d33fa090045d7bd | [
"MIT"
] | null | null | null | import math
r = float(raw_input())
gap = lambda (a1, b1), (a2, b2): math.sqrt((a1 - a2) * (a1 - a2) + (b1 - b2) * (b1 - b2))
polar = lambda r, angle: (math.cos(angle) * r, math.sin(angle) * r)
coord = {0: (0, 0)}
for i in xrange(0, 26):
    letter, angle = raw_input().split()
    angle = math.radians(float(angle))
    a, b = polar(r, angle)
    coord[letter] = (a, b)
tup = {}
for firstL in coord.keys():
    for secL in coord.keys():
        tup[firstL, secL] = gap(coord[firstL], coord[secL])
ans = 0
prev = 0
string = raw_input().strip().upper()
for letter in string:
    if letter.isalpha():
        ans = ans + tup[prev, letter]
        prev = letter
print (int(math.ceil(ans)))
| 22.774194 | 87 | 0.566572 | 110 | 706 | 3.609091 | 0.390909 | 0.060453 | 0.055416 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037594 | 0.246459 | 706 | 30 | 88 | 23.533333 | 0.708647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.045455 | null | null | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
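The solution above is Python 2 (`raw_input`, `xrange`, tuple-argument lambdas) and hinges on two steps: converting each letter's polar position on the disc to Cartesian coordinates, then summing chord lengths between consecutive letters. A Python 3 sketch of those two helpers (names are illustrative, not from the original):

```python
import math

def polar(r, angle_deg):
    # Cartesian coordinates of a point at radius r and the given angle (degrees).
    a = math.radians(angle_deg)
    return (r * math.cos(a), r * math.sin(a))

def gap(p, q):
    # Euclidean distance (chord length) between two points on the disc.
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Two letters on a unit disc, at 0 and 90 degrees: the chord is sqrt(2).
A, B = polar(1.0, 0.0), polar(1.0, 90.0)
print(round(gap(A, B), 6))  # -> 1.414214
```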
c944b9fbe4979bf0865d29dd3c92f75c7b9b9d67 | 4,762 | py | Python | smaa.py | youqad/oxford-hack-2020 | 7c4bf02f0dc52ce99cee721a3b7b3344060018f2 | [
"MIT"
] | null | null | null | smaa.py | youqad/oxford-hack-2020 | 7c4bf02f0dc52ce99cee721a3b7b3344060018f2 | [
"MIT"
] | null | null | null | smaa.py | youqad/oxford-hack-2020 | 7c4bf02f0dc52ce99cee721a3b7b3344060018f2 | [
"MIT"
] | null | null | null | from dataclasses import dataclass
import torch
import numpy as np
import pyro
import matplotlib.pyplot as plt
from pyro.infer import MCMC, NUTS
# import pyro.infer
# import pyro.optim
from pyro.distributions import Normal

# def model(data):
#     """
#     Explanation
#     """
#     coefs_mean = torch.zeros(dim)
#     coefs = pyro.sample('beta', dist.Normal(coefs_mean, torch.ones(3)))
#     y = pyro.sample('y', Bernoulli(logits=(coefs * data).sum(-1)), obs=labels)
#     return y
# nuts_kernel = NUTS(model, adapt_step_size=True)
# mcmc = MCMC(nuts_kernel, num_samples=500, warmup_steps=300)
# mcmc.run(data)
# print(mcmc.get_samples()['beta'].mean(0))
# mcmc.summary(prob=0.5)
# def conditioned_model(model, sigma, y):
#     return poutine.condition(model, data={"obs": y})(sigma)
# pyro.sample("obs_{}".format(i), dist.Bernoulli(f), obs=data[i])
# conditioned_scale = pyro.condition(scale, data={"measurement": 9.5})
# pyro.sample("measurement", dist.Normal(weight, 0.75), obs=9.5)
# def deferred_conditioned_scale(measurement, guess):
#     return pyro.condition(scale, data={"measurement": measurement})(guess)
# svi = pyro.infer.SVI(model=conditioned_scale,
#                      guide=scale_parametrized_guide,
#                      optim=pyro.optim.SGD({"lr": 0.001, "momentum":0.1}),
#                      loss=pyro.infer.Trace_ELBO())


class Alternative:
    """
    An alternative is a potential outcome for a decision-making problem.
    Example: Tesla is an alternative for the decision problem of choosing a car to buy.
    """
    def __init__(self, name):
        self.name = name


class Criterion:
    """
    A criterion is a parameter in a decision-making problem.
    It is given by
    - a name 'name'
    - an optional boolean 'positive' to indicate whether the criterion has a positive or negative impact on the alternatives
    Example: manoeuvrability might be a criterion when the alternatives are car brands.
    """
    def __init__(self, name, positive=True):
        self.name = name
        self.positive = positive


class Weight:
    """
    A weight represents how much a person values a certain criterion in a decision-making problem.
    A weight is given by
    - a name 'name'
    - an optional distribution name 'dist' for modelling its uncertainty
    - a value 'value' for the weight
    - a criterion 'criterion'
    Example: a weight of 21 can be given for the criterion manoeuvrability when car brands is the decision-making problem.
    """
    def __init__(self, name, value, criterion, dist="Unif", variance=0):
        self.name = name
        self.dist = dist
        self.value = value
        self.variance = variance
        self.criterion = criterion.name


class AlternativeCriterionMatrix:
    """
    TODO: write
    """
    def __init__(self):
        pass


class DecisionProblem:
    """
    A decision problem consists of a choice of possible outcomes: alternatives
    These alternatives depend on parameters: criteria
    A person values certain criteria more than others. This is reflected in weights.
    The weights and criteria for each alternative are fuzzy and are modelled with distributions.
    These distributions may reflect a lack of knowledge, a lack of objective measure,
    a true randomness in the process, etc.
    Following the SMAA method, a person is guided to take a decision with three indicators.
    - acceptabilityIndex: represents the approximate probability that a certain alternative is ranked first.
    - centralWeightVector: represents a typical value for the weights that make a certain alternative ranked first.
    - confidenceFactor: represents the probability of an alternative being ranked first for weights given by centralWeightVector.
    """
    def __init__(self, name, weights, criteria, alternatives):
        self.name = name
        self.weights = weights
        self.criteria = criteria
        self.alternatives = alternatives

    def criteriaList(self):
        return True

    def alternativesList(self):
        return True

    def weightsSampler(self):
        return True

    def criteriaSampler(self):
        return True

    def rank(self, alternative_number, sample_crit_vector, sample_weight_vector):
        return True

    def rankAcceptabilityIndex(self, alternative_number, rank):
        return True

    def acceptabilityIndex(self, alternative_number):
        """
        Test
        """
        return self.rankAcceptabilityIndex(alternative_number, 1)

    def centralWeightVector(self, alternative_number):
        """
        Test
        """
        return True

    def confidenceFactor(self, alternative_number):
        """
        Test
        """
        return True | 32.841379 | 129 | 0.678706 | 592 | 4,762 | 5.378378 | 0.339527 | 0.020101 | 0.02858 | 0.018844 | 0.080088 | 0.021985 | 0 | 0 | 0 | 0 | 0 | 0.007673 | 0.233725 | 4,762 | 145 | 130 | 32.841379 | 0.864894 | 0.237085 | 0 | 0.291667 | 0 | 0 | 0.002429 | 0 | 0 | 0 | 0 | 0.006897 | 0 | 0 | null | null | 0 | 0.145833 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
c94614ec9d12dc441fe5a44f628907b7866eec01 | 407 | py | Python | gin/gin.py | Javanile/GIN | f4cce7546aa523d568f2c7cf73318fb4ea20ba3e | [
"MIT"
] | 3 | 2016-11-18T16:35:58.000Z | 2017-01-31T15:09:31.000Z | gin/gin.py | javanile/gin | f4cce7546aa523d568f2c7cf73318fb4ea20ba3e | [
"MIT"
] | null | null | null | gin/gin.py | javanile/gin | f4cce7546aa523d568f2c7cf73318fb4ea20ba3e | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""gin.gin: provides entry point main()."""
__version__ = "0.0.4"
import sys
import os
from .parser import Parser
def main():
    parser = Parser()
    gin_src = sys.argv[1]
    ini_dst = sys.argv[1].replace(".gin", ".ini")
    if not os.path.isfile(gin_src):
        print 'File not found:', gin_src
        sys.exit(1)
    parser.parse(gin_src, ini_dst)
| 14.034483 | 49 | 0.574939 | 60 | 407 | 3.733333 | 0.533333 | 0.107143 | 0.080357 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023333 | 0.262899 | 407 | 28 | 50 | 14.535714 | 0.723333 | 0.051597 | 0 | 0 | 0 | 0 | 0.082353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c9510bb7d8ed6d4d39ee6821945d8452e80085df | 252 | py | Python | dynamodb_scripts/DeleteTables.py | mattmcclean/sam-voting-app | 48569f42cbd19dcbbbb1362894c09475a2af29e1 | [
"Apache-2.0"
] | null | null | null | dynamodb_scripts/DeleteTables.py | mattmcclean/sam-voting-app | 48569f42cbd19dcbbbb1362894c09475a2af29e1 | [
"Apache-2.0"
] | null | null | null | dynamodb_scripts/DeleteTables.py | mattmcclean/sam-voting-app | 48569f42cbd19dcbbbb1362894c09475a2af29e1 | [
"Apache-2.0"
] | 1 | 2021-02-08T13:59:50.000Z | 2021-02-08T13:59:50.000Z | from __future__ import print_function # Python 2/3 compatibility
import boto3
dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")
table = dynamodb.Table('Election')
table.delete()
table = dynamodb.Table('Vote')
table.delete() | 25.2 | 75 | 0.769841 | 32 | 252 | 5.875 | 0.65625 | 0.138298 | 0.191489 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035088 | 0.095238 | 252 | 10 | 76 | 25.2 | 0.789474 | 0.095238 | 0 | 0.285714 | 0 | 0 | 0.180617 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0.142857 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c95aacac36d02161b484cfa55fefd8c161baf5a3 | 1,240 | py | Python | keyman44/interface/chain_creation.py | sahabi/keyman44 | 73d62eac3b96ec6951b1d88edf5e0f7c787f7440 | [
"MIT"
] | null | null | null | keyman44/interface/chain_creation.py | sahabi/keyman44 | 73d62eac3b96ec6951b1d88edf5e0f7c787f7440 | [
"MIT"
] | null | null | null | keyman44/interface/chain_creation.py | sahabi/keyman44 | 73d62eac3b96ec6951b1d88edf5e0f7c787f7440 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'ui/chain_creation.ui'
#
# Created by: PyQt5 UI code generator 5.9
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Dialog(object):
    def setupUi(self, Dialog):
        Dialog.setObjectName("Dialog")
        Dialog.resize(215, 116)
        self.textEdit_2 = QtWidgets.QTextEdit(Dialog)
        self.textEdit_2.setGeometry(QtCore.QRect(120, 20, 71, 31))
        self.textEdit_2.setObjectName("textEdit_2")
        self.comboBox = QtWidgets.QComboBox(Dialog)
        self.comboBox.setGeometry(QtCore.QRect(30, 20, 78, 27))
        self.comboBox.setObjectName("comboBox")
        self.buttonBox = QtWidgets.QDialogButtonBox(Dialog)
        self.buttonBox.setGeometry(QtCore.QRect(30, 70, 176, 27))
        self.buttonBox.setStandardButtons(QtWidgets.QDialogButtonBox.Cancel|QtWidgets.QDialogButtonBox.Ok)
        self.buttonBox.setObjectName("buttonBox")
        self.retranslateUi(Dialog)
        QtCore.QMetaObject.connectSlotsByName(Dialog)

    def retranslateUi(self, Dialog):
        _translate = QtCore.QCoreApplication.translate
        Dialog.setWindowTitle(_translate("Dialog", "Dialog"))
| 37.575758 | 106 | 0.708065 | 140 | 1,240 | 6.214286 | 0.492857 | 0.041379 | 0.044828 | 0.055172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040434 | 0.182258 | 1,240 | 32 | 107 | 38.75 | 0.817554 | 0.151613 | 0 | 0 | 1 | 0 | 0.043103 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.05 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c95abe449589f695cc8dc319de05486ff4b018fc | 963 | py | Python | cronjob/utils/cache.py | fucangyu/SimpleSpider | a2fd9289f44696c5c06ece9cec8dc5315300eecf | [
"MIT"
] | 4 | 2019-01-13T06:08:48.000Z | 2019-01-14T07:12:37.000Z | cronjob/utils/cache.py | fucangyu/cronjob | 9a27b0a430eab1f9e52ff51700217a7dac15c846 | [
"MIT"
] | 2 | 2019-01-13T04:10:58.000Z | 2019-01-13T07:08:53.000Z | cronjob/utils/cache.py | fucangyu/cronjob | 9a27b0a430eab1f9e52ff51700217a7dac15c846 | [
"MIT"
] | 2 | 2019-01-25T15:43:15.000Z | 2019-06-15T09:42:15.000Z | import logging
import socket
from collections import OrderedDict
LOGGER = logging.getLogger(__name__)
class LocalCache(OrderedDict):
    def __init__(self, limit=None):
        super().__init__()
        self.limit = limit

    def __setitem__(self, key, value):
        while len(self) >= self.limit:
            self.popitem(last=False)
        super().__setitem__(key, value)


dns_cache = LocalCache(10000)


# Because the crawler needs to run for a long time and the DNS cache is
# error-prone, the DNS cache is not used.
def set_dns_cache():
    def _getaddrinfo(*args, **kwargs):
        if args in dns_cache:
            LOGGER.debug(f'Use dns cache {args}:{dns_cache[args]}')
            return dns_cache[args]
        else:
            LOGGER.debug(f'Without dns cache {args}')
            dns_cache[args] = socket._getaddrinfo(*args, **kwargs)
            return dns_cache[args]
    if not hasattr(socket, '_getaddrinfo'):
        socket._getaddrinfo = socket.getaddrinfo
    socket.getaddrinfo = _getaddrinfo
| 25.342105 | 67 | 0.646937 | 111 | 963 | 5.315315 | 0.423423 | 0.122034 | 0.122034 | 0.172881 | 0.19661 | 0.19661 | 0 | 0 | 0 | 0 | 0 | 0.006897 | 0.247144 | 963 | 37 | 68 | 26.027027 | 0.806897 | 0.044652 | 0 | 0.08 | 0 | 0 | 0.08061 | 0.026144 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16 | false | 0 | 0.12 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
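The `LocalCache` above is a size-bounded insertion-order cache: once `limit` entries are stored, each new insertion evicts the oldest one via `popitem(last=False)`. A short standalone run showing the eviction behaviour (mirroring the class as written):

```python
from collections import OrderedDict

class LocalCache(OrderedDict):
    # Size-bounded cache: evicts the oldest entry once `limit` is reached.
    def __init__(self, limit=None):
        super().__init__()
        self.limit = limit

    def __setitem__(self, key, value):
        while len(self) >= self.limit:
            self.popitem(last=False)  # drop the oldest insertion
        super().__setitem__(key, value)

cache = LocalCache(limit=2)
cache['a'] = 1
cache['b'] = 2
cache['c'] = 3          # evicts 'a'
print(list(cache))      # -> ['b', 'c']
```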
c95b1684a0db1f0047a54d631046ca5ea817d08a | 501 | py | Python | test/ResultsAndPrizes/top-3/test_top_3_winning_numbers_of_the_last_draw.py | FearFactor1/SPA | a05aaa924c5bebb52cd508ebdf7fd3b81c49fac7 | [
"Apache-2.0"
] | 1 | 2019-12-05T06:50:54.000Z | 2019-12-05T06:50:54.000Z | test/ResultsAndPrizes/top-3/test_top_3_winning_numbers_of_the_last_draw.py | FearFactor1/SPA | a05aaa924c5bebb52cd508ebdf7fd3b81c49fac7 | [
"Apache-2.0"
] | null | null | null | test/ResultsAndPrizes/top-3/test_top_3_winning_numbers_of_the_last_draw.py | FearFactor1/SPA | a05aaa924c5bebb52cd508ebdf7fd3b81c49fac7 | [
"Apache-2.0"
] | null | null | null | # Топ-3 + Выигрышные номера последнего тиража
def test_top_3_winning_numbers_last_draw(app):
app.ResultAndPrizes.open_page_results_and_prizes()
app.ResultAndPrizes.click_game_top_3()
app.ResultAndPrizes.button_get_report_winners()
assert "ВЫИГРЫШНЫЕ НОМЕРА" in app.ResultAndPrizes.parser_report_text_winners()
app.ResultAndPrizes.message_id_33_top_3_last_draw()
app.ResultAndPrizes.message_id_33_top_3_winning_numbers_last_draw()
app.ResultAndPrizes.comeback_main_page() | 41.75 | 82 | 0.828343 | 69 | 501 | 5.507246 | 0.507246 | 0.331579 | 0.086842 | 0.094737 | 0.315789 | 0.315789 | 0.315789 | 0 | 0 | 0 | 0 | 0.019956 | 0.0998 | 501 | 12 | 83 | 41.75 | 0.822616 | 0.085828 | 0 | 0 | 0 | 0 | 0.037199 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.125 | false | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c966a0f19952a5f4083ba9e72fe82dffea41752a | 21,119 | py | Python | mcommon.py | kunyuan/PyGW | 7b32065586f7d7cc221d051b5254397fdf0a3076 | [
"Unlicense"
] | 7 | 2020-07-06T01:56:46.000Z | 2021-05-13T13:07:06.000Z | mcommon.py | kunyuan/PyGW | 7b32065586f7d7cc221d051b5254397fdf0a3076 | [
"Unlicense"
] | null | null | null | mcommon.py | kunyuan/PyGW | 7b32065586f7d7cc221d051b5254397fdf0a3076 | [
"Unlicense"
] | 1 | 2020-07-06T01:56:59.000Z | 2020-07-06T01:56:59.000Z | from scipy import *
from scipy import optimize
from numpy import linalg
from scipy import interpolate
from inout import *
import for_tetrahedra as ft
import fnc
import for_pade as fpade
from pylab import *
import sys


def Give2TanMesh(x0,L,Nw):
    def fun(x,x0,L,Nw):
        "x[0]=d, x[1]=w"
        d=x[0]
        w=x[1]
        #print 'd=', d, 'w=', w
        return array([L-w/tan(d), x0-w*tan(pi/(2*Nw)-d/Nw) ])
    tnw = tan(pi/(2*Nw))
    if x0 > L*0.25*Nw*tnw**2:
        x0 = L*0.25*Nw*tnw**2-1e-15
    xi=x0/L
    d0 = Nw/2.*(tnw-sqrt(tnw**2 - 4*xi/Nw))
    w0 = L*d0
    sol=optimize.root(fun, [d0,w0], args=(x0,L,Nw) )
    (d,w) = sol.x
    xt = linspace(0.0,1.0,2*Nw)*(pi-2*d) - pi/2 + d
    om = w*tan(xt)
    dom = (w*(pi-2*d)/(2.*Nw))/cos(xt)**2
    return om,dom
def calc_Fermi(Ebnd, kqm_atet, kqm_wtet, nval, nspin):
    eDos = 1.0
    if nspin==1 and (nval % 2 == 0):
        # just simply counting bands, and setting EF in the middle
        nvm = int( nval/2.0 + 0.51 )
        evbm = max( Ebnd[:,nvm-1] )
        ecbm = min( Ebnd[:,nvm])
        EF = 0.5*(evbm+ecbm)
        ocint = ft.idos(EF, Ebnd, kqm_atet, kqm_wtet)*2.0/nspin
        if ecbm >= evbm and abs(ocint-nval) < 1e-6:
            Eg = ecbm - evbm
            eDos = 0
    if eDos != 0: # the simple method did not succeed
        evbm = min(Ebnd.ravel()) # minimum of energy
        ecbm = max(Ebnd.ravel()) # maximum of energy
        ocmax = sum([ft.idos(ecbm, Ebnd, kqm_atet, kqm_wtet) for isp in range(nspin)])*2.0/nspin
        if ocmax <= nval:
            print 'ERROR in fermi: not enough bands : %s%-10.4f %s%-10.2f %s%-10.2f' % ('emax=', ecbm, 'ocmax=', ocmax, 'nel= ', nval )
            sys.exit(1)
        EF = optimize.brentq(lambda x: sum([ft.idos(x, Ebnd, kqm_atet, kqm_wtet) for isp in range(nspin)])*2.0/nspin-nval, evbm, ecbm)
        eDos = sum([ft.dostet(EF, Ebnd, kqm_atet, kqm_wtet) for isp in range(nspin)])*2.0/nspin
        # For insulator (including semiconductor), set fermi energy as the middle of gap
        if eDos < 1e-4:
            evbm = max( filter(lambda x: x < EF, Ebnd.ravel()) )
            ecbm = min( filter(lambda x: x > EF, Ebnd.ravel()) )
            EF = (evbm + ecbm)/2.
            Eg = ecbm - evbm
        else:
            Eg = -eDos
            evbm, ecbm = EF, EF
    return (EF, Eg, evbm, ecbm, eDos)
def cart2int(klist,strc,latgen):
    """ Cartesian to integer representation of k-points"""
    if latgen.ortho:
        rbas = zeros((3,3))
        alat = array([strc.a, strc.b, strc.c])
        for j in range(3):
            rbas[j,:] = latgen.rbas[j,:]/alat[j]
        iklist = dot(klist,rbas)
        return iklist
    else:
        return klist
def Compute_selfc_inside(iq, irk, bands, mwm, fr, kqm, ncg_c, core, Ul, fout, PRINT=False):
    def freqconvl(iom, enk, mwm, omega, womeg):
        """ We are computing frequency convolution
            sigma(iom) = -1/beta \sum_{iOm} mwm[iOm]/(iom-eps+iOm)
            Because mwm[-iOm]=mwm[iOm] and at T=0, we can rewrite
            sigma(iom) = 1/pi Integrate[ mwm[Om]*(eps-iom)/((eps-iom)^2 + Om^2) , {Om,0,Infinity} ]
            Finally, we notice that
            Integrate[ 1/((eps-iom)^2 + Om^2) , {Om,0,Infinity} ] = pi*sign(eps)/(2*(eps-iom))
            therefore
            sigma(iom) = (eps-iom)/pi Integrate[ (mwm[Om]-mwm[iom])/((eps-iom)^2 + Om^2) , {Om,0,Infinity} ] + mwm[iom] * sign(eps)/2
        """
        if FORT:
            return fnc.fr_convolution(iom+1, enk, mwm, omega, womeg)
        else:
            eps_om = enk - omega[iom]*1j
            sc0 = sum( (mwm[:]-mwm[iom]) * womeg / (omega**2 + eps_om**2) )
            return eps_om*sc0/pi + 0.5 * mwm[iom] * sign(enk)
    (nom, nb1, nb2) = shape(mwm)
    omega, womeg = fr.omega, fr.womeg
    ik = kqm.k_ind[irk]    # index in all-kpoints, not irreducible
    jk = kqm.kqid[ik,iq]   # index of k+q
    jrk = kqm.kii_ind[jk]  # index of k+q in irreducible list
    #print 'Compute_selfc_inside:shape(bands)=', shape(bands), 'jrk=', jrk, 'nb2=', nb2
    isp=0
    enk = zeros( nb2 )
    for icg in range(ncg_c):
        iat,idf,ic = core.corind[icg,:3]
        enk[icg] = core.eig_core[isp][iat][ic]
    enk[ncg_c:nb2] = bands[jrk,:(nb2-ncg_c)] # careful, we need G(k+q,:) here, not G(k,:)
    if Ul is not None:
        (nl,nom) = shape(Ul)
        # Cnl[l,ie2,iom] = 1/pi*Integrate[ Ul[l,iOm]*(enk[ie2]-iom)/((enk[ie2]-iom)^2 + Om^2),{Om,0,Infinity}]
        Cnl = fnc.many_fr_convolutions(enk, Ul, omega, womeg) # frequency convolution of all svd functions
        Cnl = reshape(Cnl,(nl*nb2,nom))
        mwm2 = zeros( (nb1, nl*nb2), dtype=complex )
        for ie1 in range(nb1):
            mwm2[ie1,:] = reshape( mwm[:,ie1,:], nl*nb2 )
        sc_p = dot(mwm2,Cnl)
    else:
        if not (fr.iopfreq==4 and len(fr.omega_precise)!=len(fr.omega)):
            sc_p = fnc.all_fr_convolutions(enk, mwm, omega, womeg)
        else:
            sc_p = zeros( (nb1,nom), dtype=complex )
            mwmp = zeros( (nb2,len(fr.omega_precise)), dtype=complex)
            for_asymptotic = (omega**2+1.0)
            for_asymptotic_precise = (fr.omega_precise**2+1.0)
            zeros_nb2 = zeros(nb2, dtype=complex)
            for ie1 in range(nb1):
                for ie2 in range(nb2):
                    # Interpolating W on more dense mesh for frequency convolution
                    # Note that we interpolate W*(om^2+1), which goes to constant at large om, and is easier to interpolate properly
                    mwmr = interpolate.CubicSpline(omega, for_asymptotic*mwm[:,ie1,ie2].real, bc_type=((2,0),(1,0)), extrapolate=True)
                    if sum(abs(mwm[:,ie1,ie2].imag))/nom > 1e-10:
                        mwmi = interpolate.CubicSpline(omega, for_asymptotic*mwm[:,ie1,ie2].imag, bc_type=((2,0),(1,0)), extrapolate=True)
                        mwmp[ie2,:] = (mwmr(fr.omega_precise) + mwmi(fr.omega_precise)*1j)/for_asymptotic_precise
                    else:
                        mwmp[ie2,:] = mwmr(fr.omega_precise)/for_asymptotic_precise
                for iom in range(nom):
                    # We now always subtract this peace, because treating some frequencies and bands differently seems to cause problems.
                    mwm_iom = mwm[iom,ie1,:]
                    sc_p[ie1,iom] = fnc.few_fr_convolutions(enk, mwmp, mwm_iom, fr.omega_precise, omega[iom], fr.womeg_precise)
                    #if irk==3:
                    #    rr = 0
                    #    for ie2 in range(nb2):
                    #        eps_om = enk[ie2] - omega[iom]*1j
                    #        to_sum = (mwmp[ie2,:]-mwm_iom[ie2]) / (fr.omega_precise**2 + eps_om**2)
                    #        sc0 = sum( to_sum * fr.womeg_precise)
                    #        cc = eps_om*sc0/pi + mwm_iom[ie2]/pi * arctan(omega[-1]/(eps_om)) #pi*0.5*sign(enk[ie2])
                    #        rr += cc
                    #        print 'ie2=', ie2, 'en=', enk[ie2], 'iom=', iom, 'om=', omega[iom], 'cont=', cc
                    #        if ( abs(mwm_iom[ie2])>1e-10 ):
                    #            fo = open('case_to_study.dat', 'w')
                    #            print >> fo, '# ', eps_om
                    #            for i in range(len(fr.omega_precise)):
                    #                print >> fo, fr.omega_precise[i], mwmp[ie2,i].real
                    #            fo.close()
                    #            plot(fr.omega_precise, to_sum, 'o-')
                    #            plot(fr.omega_precise, to_sum*fr.womeg_precise*100, 's-')
                    #            show()
                    #    print 'res=', abs(rr-sc_p[ie1,iom]), rr
    #print 'iq=', iq, 'irk=', irk, 'convolution finished'
    return sc_p
def Compute_quasiparticles(bands, Ebnd, sigc, sigx, Vxct, omega, (iop_ac, iop_es, iop_gw0, npar_ac, iop_rcf), isp, fout, PRINT):
    (nirkp, nbnd, nom) = shape(sigc)
    eqp = zeros(shape(bands))
    eqp_im = zeros(shape(bands))
    if (PRINT): print >> fout, 'Quasiparticle energies in eV'
    lwarn = True
    for irk in range(nirkp):
        for ie in range(nbnd):
            enk0 = Ebnd[irk,ie]
            vxc_nk = Vxct[irk,ie,ie].real
            enk = bands[irk,ie]
            # enk is the energy at which the self-energy is calculated
            debug = '.0.0' if (irk==0 and ie==0) else ''
            sig,dsig,apar = AnalyticContinuation(iop_ac, omega, sigc[irk,ie,:], npar_ac, enk, fout, iop_rcf, debug)
            # quasiparticle residue, but not at zero freqeucny, but at the energy of the band!
            z_nk = 1/(1-dsig.real)
            # self-energy at the energy of the band. This is what they believe is the best quasiparticle approximation
            s_nk = sig.real + sigx[irk,ie]
            if (z_nk > 1.0 or z_nk < 0):
                if (lwarn):
                    print >> fout, 'WARNING : Z_nk at energy e_k=', enk*H2eV, 'eV is unphysical', z_nk, 'irk=', irk, 'ie=', ie, 's_nk=', s_nk*H2eV, 'ds/dw=', dsig.real
                    lwarn = False # we will warn only once
                z_nk = 1.0
                dsig = 0.0
            if iop_es == -1: # self-consistent GW0
                if iop_gw0==1: # this is default
                    delta = s_nk - vxc_nk + enk0 - enk # this is used in self-consistent GW0
                elif iop_gw0==2:
                    delta = z_nk*(s_nk-vxc_nk) + enk0 - enk
                else:
                    delta = z_nk*(s_nk-vxc_nk + enk0 - enk)
            elif iop_es == 0: # this is used in G0W0
                delta = z_nk*(s_nk-vxc_nk)
            else:
                print >> fout, 'Not implemented here'
                sys.exit(1)
            eqp[irk,ie] = enk + delta
            eqp_im[irk,ie] = sig.imag*z_nk
            #print >> fout, 'iop_eps=%2d iop_gw0=%2d delta=%16.10f' % (iop_es, iop_gw0, delta)
            #print 'eqp['+str(irk)+','+str(ie)+']='+str(eqp[irk,ie]), 'and enk=', enk0
            if PRINT:
                print >> fout, 'eqp[irk=%3d,ie=%3d]=%16.10f and enk0=%16.10f enk=%16.10f znk=%10.8f snk=%16.10f vxc=%16.10f' % (irk,ie,eqp[irk,ie]*H2eV,enk0*H2eV,enk*H2eV,z_nk,s_nk.real*H2eV, vxc_nk.real*H2eV)
    return (eqp, eqp_im)
def Band_Analys(bande, EF, nbmax, titl, kqm, fout):
    bands = copy(bande[:,:nbmax])
    nirkp = shape(bands)[0]
    print >> fout, '-'*60
    print >> fout, ' '+titl+' Band Analysis'
    print >> fout, '-'*60
    print >> fout, ' Range of bands considered: %5d %5d' % (0,nbmax)
    print >> fout, ' EFermi[eV]= %10.4f' % (EF*H2eV,)
    if max(bands.ravel()) < EF or min(bands.ravel())>EF:
        print >> fout, 'WARNING from bandanaly: - Fermi energy outside the energy range of bande!'
        print >> fout, 'minimal energy=', min(bands)*H2eV, 'max energy=', max(bands)*H2eV, 'EF=', EF*H2eV
    nocc_at_k = [len(filter(lambda x: x<EF, bands[ik,:])) for ik in range(nirkp)] # how many occupied bands at each k-point
    nomax = max(nocc_at_k)-1 # index of the last valence band
    numin = min(nocc_at_k)   # index of the first conduction band
    ikvm = argmax(bands[:,nomax]) # maximum of the valence band
    ikcm = argmin(bands[:,numin]) # minimum of the conduction band
    Qmetal = (nomax >= numin)
    if Qmetal:
        print >> fout, ' Valence and Conductance bands overlap: metallic!'
    if Qmetal:
        evbm = EF
    else:
        evbm = bands[ikvm,nomax]
    print >> fout, ' Band index for VBM and CBM=%4d %4d' % (nomax+1, numin+1)
    bands = (bands - evbm)*H2eV
    egap1 = bands[ikcm,numin] - bands[ikvm,nomax]          # the smallest indirect gap between KS bands
    egap2 = min(bands[ikvm,numin:])-bands[ikvm,nomax]      # the direct gap starting from valence band
    egap3 = bands[ikcm,numin] - max(bands[ikcm,:(nomax+1)])# the direct gap starting from conduction band
    #print 'egap1=', egap1, 'egap=', egap2, 'egap3=', egap3
    if ikvm==ikcm: # direct gap
        print >> fout, ':BandGap_'+titl+' = %12.3feV' % egap1
        kp = kqm.kirlist[ikvm,:]/float(kqm.LCM)
        print >> fout, (' Direct gap at k= %8.3f%8.3f%8.3f') % tuple(kp), 'ik='+str(ikvm+1)
    else:
        print >> fout, (':BandGap_'+titl+' = %12.3f%12.3f%12.3f eV') % (egap1,egap2,egap3)
        kv = kqm.kirlist[ikvm,:]/float(kqm.LCM)
        kc = kqm.kirlist[ikcm,:]/float(kqm.LCM)
        print >> fout, ' Indirect gap, k(VBM)=%8.3f%8.3f%8.3f' % tuple(kv), 'ik='+str(ikvm+1)
        print >> fout, '               k(CBM)=%8.3f%8.3f%8.3f' % tuple(kc), 'ik='+str(ikcm+1)
    print >> fout, 'Range of each band with respect to VBM (eV):'
    print >> fout, ('%5s'+'%12s'*3) % ('n ','Bottom','Top','Width')
    for i in range(shape(bands)[1]):
        ebmin = min(bands[:,i])
        ebmax = max(bands[:,i])
        print >> fout, '%5d%12.3f%12.3f%12.3f' % (i+1, ebmin, ebmax, ebmax-ebmin)
    return (nomax,numin)
def padeMatrix(z,f,N,verbose=False):
    """
    Input variables:
    z - complex. points in the complex plane.
    f - complex. Corresponding (Green's) function in the complex plane.
    N - int. Number of Pade coefficients to use
    verbose - boolean. Determine if to print solution information
    Returns the obtained Pade coefficients.
    """
    # number of input points
    M = len(z)
    r = N/2
    y = f*z**r
    A = ones((M,N),dtype=complex)
    for i in range(M):
        A[i,:r] = z[i]**(arange(r))
        A[i,r:] = -f[i]*z[i]**(arange(r))
    # Calculated Pade coefficients
    # rcond=-1 means all singular values will be used in the solution.
    sol = linalg.lstsq(A, y, rcond=-1)
    # Pade coefficients
    x = sol[0]
    if verbose:
        print 'error_2= ',linalg.norm(dot(A,x)-y)
        print 'residuals = ', sol[1]
        print 'rank = ',sol[2]
        print 'singular values / highest_singular_value= ',sol[3]/sol[3][0]
    return x
def epade(z,x):
    """
    Input variables:
    z - complex. Points where continuation is evaluated.
    x - complex. Pade approximant coefficient.
    Returns the value of the Pade approximant at the points z.
    """
    r = len(x)/2
    numerator = zeros(len(z),dtype=complex256)
    denomerator = zeros(len(z),dtype=complex256)
    for i in range(r):
        numerator += x[i]*z**i
        denomerator += x[r+i]*z**i
    denomerator += z**r
    return numerator/denomerator
def AnalyticContinuation(iop_ac,omg,sc,npar,enk, fout, iop_rcf=0.5, debug=''):
# Analytic continuation of self-energy, and its evaluation at energy enk. Both the value and derivat
# iop_ac==0 -- Thiele's reciprocal difference method as described in
# H. J. Vidberg and J. W. Serence, J. Low Temp. Phys. 29, 179 (1977). This is usual Pade, known in DMFT
# iop_ac==1 -- Rojas, Godby and Needs (PRL 74, 1827 (1996). This is just a fit to imaginary axis data
#
#
# We first take only a few values of self-energy and frequency, which will be used for Pade
def f_unc(x, *argv):
return fpade.pd_funval(x,argv)
def f_jacob(x, *argv):
return fpade.pd_jacoby(x,argv)
#
#if iop_ac==0 or npar==len(omg):
if iop_ac==0 and abs(enk)/Ry2H < iop_rcf:
# n = npar-1
n = min(32, len(omg))
z_p = omg[:n]*1j
y_p = sc[:n]
if True:
apar = fpade.padecof(y_p, z_p)
yc = fpade.padeg([enk], z_p, apar)
dyc = (fpade.padeg([enk+1e-3], z_p, apar)-fpade.padeg([enk-1e-3], z_p, apar))/2e-3
else:
apar = padeMatrix(z_p,y_p, 2*len(z_p))
yc = epade(array([enk]), apar)
dyc = (epade(array([enk+1e-3]), apar) - epade(array([enk-1e-3]), apar))/2e-3
#print >> fout, 'classic Pade: s['+str(enk*H2eV)+']='+str(yc[0]*H2eV)
elif iop_ac==1 or (iop_ac==0 and abs(enk)/Ry2H > iop_rcf):
## Here we fit all values of self-energy at Matsubara pointz iw==z to the following rational function
# (c[1] + c[2] z)
# f(z) = ----------------------
# 1 + c[3] z + c[4] z^2
# which is a rational function with two poles.
# It is also equivalent to pade of the level 4, i.e.,
# f(z) = a[1]/(1+a[2]z/(1+a[3]z/(1+a[4]z))), where non-linear relation between a[i] and c[i] exists
#
iwpa = arange(npar)*int((len(omg)-1.)/(npar-1.)) # linear mesh of only a few points, equidistant.
iwpa[-1] = len(omg)-1 # always take the last point
cx = zeros(len(iwpa), dtype=complex) # imaginary frequency at selected points
cy = zeros(len(iwpa), dtype=complex) # self-energy at selected points
for ip,iw in enumerate(iwpa):
cx[ip] = omg[iw]*1j
cy[ip] = sc[iw]
apar = fpade.init_c(cx, cy) # just a way to find a good starting point for minimization
n = len(apar) # how many complex coefficints we want to fit
anl = hstack( (apar.real,apar.imag) ) # we stack these complex coefficients (starting point coefficients) into real array
if True:
x = hstack( (-omg.imag, omg.real) ) # take all frequency as x values : x = i*omg
y = hstack( ( sc.real, sc.imag) ) # take all self-energy points as y values: y = [real,imag]
            # now fit all self-energy points to a rational function
            #   sigma(iom) = P(iom)/Q(iom), where
            #   P(iom) = sum_{k=0,n  } c_{k}   (iom)^k
            #   Q(iom) = sum_{k=1,n+1} c_{k+n} (iom)^k
            # and determine c_k by minimization of chi2 using the Levenberg-Marquardt algorithm.
            popt, pcov = optimize.curve_fit(f_unc, x, y, p0=anl, jac=f_jacob, method='lm')
            apar = popt[:n] + popt[n:]*1j   # these are the c_k coefficients
            # Evaluate the self-energy at the energy of the band ek
            yc, dyc = fpade.pd_funvalc([enk], apar)
        else:
            chisq = 0.0
            chisq = fpade.nllsq(omg, sc, anl)
            apar = anl[:n] + anl[n:]*1j
            yc, dyc = fpade.pd_funvalc([enk], apar)
        #print >> fout, 'modified Pade: s['+str(enk*H2eV)+']='+str(yc[0]*H2eV)
    else:
        s0 = sc[0].real
        dsdw = polyfit([0, omg[0], omg[1]], [0.0, sc[0].imag, sc[1].imag], 1)[0]
        yc = [s0 + dsdw*enk]
        dyc = [dsdw]
        apar = [s0, dsdw]
        #print >> fout, 'Simple quasiparticle approximation: s['+str(enk*H2eV)+']='+str(yc[0]*H2eV)

    #print >> fout, 'sig[e=%10.6f]=%10.6f %10.6f' % (enk, yc.real, yc.imag)
    #print ' cx, cy, apar'
    #for ip in range(len(iwpa)):
    #    print '%21.16f%21.16f  %20.16f%20.16f  %20.16f%20.16f' % (cx[ip].real, cx[ip].imag, cy[ip].real, cy[ip].imag, apar[ip].real, apar[ip].imag)

    if debug:
        romega = hstack( (-omg[::-1], omg) )
        if iop_ac == 0 and abs(enk)/Ry2H < iop_rcf:
            #print 'iop_ac=', iop_ac, 'abs(enk)/Ry2H=', abs(enk)/Ry2H, 'iop_rcf=', iop_rcf
            if True:
                yre = fpade.padeg(romega, z_p, apar)
                yim = fpade.padeg(romega*1j, z_p, apar)
            else:
                yre = epade(romega, apar)
                yim = epade(romega*1j, apar)
        elif iop_ac == 1 or (iop_ac == 0 and abs(enk)/Ry2H > iop_rcf):
            yre, dyre = fpade.pd_funvalc(romega, apar)
            x = hstack( (-romega.imag, romega.real) )
            _yim_ = f_unc(x, *popt)
            yim = _yim_[:len(romega)] + _yim_[len(romega):]*1j
        else:
            yre = s0 + dsdw*romega
            yim = s0 + dsdw*romega*1j
        fo = open('sigma_ancont'+debug, 'w')
        print >> fo, '# enk=', enk*H2eV, 'and result is', yc[0]*H2eV
        for i in range(len(romega)):
            print >> fo, romega[i]*H2eV, yre[i].real*H2eV, yre[i].imag*H2eV, yim[i].real*H2eV, yim[i].imag*H2eV
        fo.close()
        fo = open('sigma_data'+debug, 'w')
        for i in range(len(omg)):
            print >> fo, omg[i]*H2eV, sc[i].real*H2eV, sc[i].imag*H2eV
        fo.close()
    return yc[0], dyc[0], apar
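The comment block above describes fitting sigma(iom) to a rational function P/Q by chi-squared minimization. As a self-contained illustration (not the module's actual `f_unc` model or data), a noise-free degree-(1,1) rational function can be recovered by rearranging y*(1 + c2*w) = c0 + c1*w into a linear least-squares problem:

```python
import numpy as np

# Fit r(w) = (c0 + c1*w) / (1 + c2*w): rewrite y*(1 + c2*w) = c0 + c1*w
# as the linear system y = c0 + c1*w - c2*(w*y) and solve by least squares.
def fit_rational(w, y):
    A = np.column_stack([np.ones_like(w), w, -w * y])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

w = np.linspace(0.1, 2.0, 40)
y = (0.5 + 1.2 * w) / (1.0 + 0.3 * w)   # synthetic, noise-free samples

c0, c1, c2 = fit_rational(w, y)          # recovers approximately 0.5, 1.2, 0.3
```

The routine above instead fits complex data on the imaginary axis with `optimize.curve_fit` and a supplied Jacobian; the linear rearrangement only works here because this toy model is exactly rational in its coefficients and the samples are noise-free.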
def mpiSplitArray(mrank, msize, leng):
    def SplitArray(irank, msize, leng):
        if leng % msize == 0:
            pr_proc = int(leng/msize)
        else:
            pr_proc = int(leng/msize+1)
        if msize <= leng:
            iqs, iqe = min(irank*pr_proc, leng), min((irank+1)*pr_proc, leng)
        else:
            rstep = (msize+1)/leng
            if irank % rstep == 0 and irank/rstep < leng:
                iqs = irank/rstep
                iqe = iqs+1
            else:
                if irank/rstep < leng:
                    iqs = irank/rstep
                    iqe = irank/rstep
                else:
                    iqs = leng-1
                    iqe = leng-1
        return iqs, iqe
    #print 'mrank=', mrank, 'msize=', msize, 'leng=', leng
    sendcounts = []
    displacements = []
    for irank in range(msize):
        iqs, iqe = SplitArray(irank, msize, leng)
        sendcounts.append(iqe-iqs)
        displacements.append(iqs)
    iqs, iqe = SplitArray(mrank, msize, leng)
    return iqs, iqe, array(sendcounts, dtype=int), array(displacements, dtype=int)
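The block decomposition computed by the inner `SplitArray` (for the common `msize <= leng` case) can be exercised on its own. `split_array` below is a small standalone re-implementation for illustration, not an import from the module:

```python
def split_array(irank, msize, leng):
    # Contiguous block decomposition for the case msize <= leng, mirroring
    # the SplitArray logic above (using explicit integer division).
    pr_proc = leng // msize if leng % msize == 0 else leng // msize + 1
    return min(irank * pr_proc, leng), min((irank + 1) * pr_proc, leng)

# Every index 0..leng-1 is covered exactly once across the ranks:
chunks = [split_array(r, 4, 10) for r in range(4)]
print(chunks)  # [(0, 3), (3, 6), (6, 9), (9, 10)]
```

Note that the last rank may get a shorter block; the `sendcounts`/`displacements` arrays returned by `mpiSplitArray` encode exactly these per-rank lengths and offsets for MPI scatter/gather calls.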
c9686301448dc32dad6dca0715b9454f7824e9bb | 449 | py | Python | webscan/webscan/log.py | gwvsol/WebScanQRcode-JSON-RPC | 69cec4aaaf9289b374eb9a51f22474f1efe308f5 | [
"MIT"
] | null | null | null | webscan/webscan/log.py | gwvsol/WebScanQRcode-JSON-RPC | 69cec4aaaf9289b374eb9a51f22474f1efe308f5 | [
"MIT"
] | null | null | null | webscan/webscan/log.py | gwvsol/WebScanQRcode-JSON-RPC | 69cec4aaaf9289b374eb9a51f22474f1efe308f5 | [
"MIT"
] | null | null | null | # import logging
import sys

from loguru import logger as logging

# log_format = '%(asctime)s.%(msecs)d|\
# %(levelname)s|%(module)s.%(funcName)s:%(lineno)d %(message)s'
# logging.basicConfig(level=logging.INFO,
#                     format=log_format,
#                     datefmt='%Y-%m-%d %H:%M:%S')

# The sink must be a stream, file path, or callable; log to stderr here.
logging.add(sys.stderr, format="<green>{time:YYYY-MM-DD HH:mm:ss}</green>|\
{level}|<cyan>{name}</cyan>:<cyan>{function}\
</cyan>:<cyan>{line}</cyan> {message}")
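The commented-out stdlib configuration above has a close loguru-free equivalent. This sketch (the logger name `demo` is arbitrary) builds the same timestamp|level|location layout with only `logging`, writing to an in-memory stream so the output can be inspected:

```python
import io
import logging

log_format = ('%(asctime)s.%(msecs)d|%(levelname)s|'
              '%(module)s.%(funcName)s:%(lineno)d %(message)s')

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(log_format, datefmt='%Y-%m-%d %H:%M:%S'))

log = logging.getLogger('demo')
log.setLevel(logging.INFO)
log.addHandler(handler)
log.info('scanner started')

print('|INFO|' in stream.getvalue())  # True
```

loguru collapses all of this setup into the single `logger.add(...)` call used by the module, with color markup (`<green>`, `<cyan>`) handled by the sink.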
c971bf79584989fee15209bf83183928c1933cde | 2,921 | py | Python | src/lockFile.py | JaredButcher/Client | 60323eee40c0c68e83b823dd5768db92549900e1 | [
"MIT"
] | null | null | null | src/lockFile.py | JaredButcher/Client | 60323eee40c0c68e83b823dd5768db92549900e1 | [
"MIT"
] | null | null | null | src/lockFile.py | JaredButcher/Client | 60323eee40c0c68e83b823dd5768db92549900e1 | [
"MIT"
] | null | null | null | """MEG system lockfile parser

Can be used to parse the lock file and perform operations on the lockfile.
Only interacts with the lockfile; does not perform any git actions.
Does not check permissions before actions are taken.
All file paths are relative to the repository directory.
The working directory should be changed by the git module.
"""
import json
import os.path
import time


class LockFile:
    """Parse a lockfile and perform locking operations"""

    def __init__(self, filepath):
        """Open a lockfile and initialize the class with it

        Args:
            filepath (string): path to the lockfile
        """
        self.update(filepath)

    def addLock(self, filepath, username):
        """Add a lock to the lockfile

        Args:
            filepath (string): path to the file to lock
            username (string): name of locking user
        """
        self._lockData["locks"].append({
            "file": filepath,
            "user": username,
            "date": time.time()
        })
        json.dump(self._lockData, open(self._filepath, 'w'))

    def removeLock(self, filepath):
        """Remove any lock for the given file

        Args:
            filepath (string): path to the file to unlock
        """
        # Iterate over a copy so removing entries is safe mid-loop
        for entry in list(self._lockData["locks"]):
            if entry["file"] == filepath:
                self._lockData["locks"].remove(entry)
        json.dump(self._lockData, open(self._filepath, 'w'))

    def findLock(self, filepath):
        """Find if there is a lock on the file

        Args:
            filepath (string): path of file to look for

        Returns:
            (dictionary): lockfile entry for the file
            (None): there is no entry
        """
        for entry in self._lockData["locks"]:
            if entry["file"] == filepath:
                return entry
        return None

    @property
    def locks(self):
        """Return the list of locks"""
        return self._lockData["locks"]

    def update(self, filepath=None):
        """Update this object with the current data in the lockfile

        If the file doesn't exist, create one.

        Args:
            filepath (string): path to the lockfile
        """
        if filepath is None:
            filepath = self._filepath
        else:
            self._filepath = filepath
        if not os.path.exists(filepath):
            self._locks = {
                "comment": "MEG System locking file, avoid manually editing",
                "locks": []
            }
            newFile = open(filepath, 'w')
            newFile.write(json.dumps(self._locks))
            newFile.close()
        try:
            self._lockData = json.load(open(filepath))
        except json.decoder.JSONDecodeError:
            # Lock file couldn't be found, or is corrupted
            # TODO: do something here
            pass
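The on-disk schema that `LockFile` reads and writes is plain JSON with a `locks` list of `{file, user, date}` entries. A round-trip of that schema without the class (the path `src/main.c` and user `alice` are made up for the example):

```python
import json
import tempfile
import time

# Build the same lock-file structure that LockFile above serializes.
data = {"comment": "MEG System locking file, avoid manually editing", "locks": []}
data["locks"].append({"file": "src/main.c", "user": "alice", "date": time.time()})

# Write it out and read it back, as addLock()/update() do.
with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as fh:
    json.dump(data, fh)
    path = fh.name

loaded = json.load(open(path))
entry = next((e for e in loaded["locks"] if e["file"] == "src/main.c"), None)
print(entry["user"])  # alice
```

Keeping the format this simple means the lock state stays human-inspectable and merge-diffable, at the cost of the class rewriting the whole file on every add/remove.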
c972a9494d385a1b98ca700a6c765495e952be1a | 857 | py | Python | artistapp/artist/migrations/0007_auto_20201208_1134.py | fallprojects/ArtistApp | 5564a1f7f4fc95261beb462abfa4ca53f3e5c17f | [
"MIT"
] | null | null | null | artistapp/artist/migrations/0007_auto_20201208_1134.py | fallprojects/ArtistApp | 5564a1f7f4fc95261beb462abfa4ca53f3e5c17f | [
"MIT"
] | null | null | null | artistapp/artist/migrations/0007_auto_20201208_1134.py | fallprojects/ArtistApp | 5564a1f7f4fc95261beb462abfa4ca53f3e5c17f | [
"MIT"
] | null | null | null | # Generated by Django 3.1.3 on 2020-12-08 11:34

from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        ('artist', '0006_auto_20201208_1052'),
    ]

    operations = [
        migrations.AlterModelOptions(
            name='comment',
            options={},
        ),
        migrations.RemoveField(
            model_name='comment',
            name='name',
        ),
        migrations.AddField(
            model_name='comment',
            name='slug',
            field=models.SlugField(default=1),
            preserve_default=False,
        ),
        migrations.AlterField(
            model_name='comment',
            name='content',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='artist.content'),
        ),
    ]
c9748cc470f6cf9a38039359e7d67d00b99bffc7 | 1,405 | py | Python | exercicios/exe084/exe084.py | tiagolsouza/exercicios-Curso-em-video-PYTHON | e4e6975fac7e4883aeab58b970c6ca72895564e4 | [
"MIT"
] | null | null | null | exercicios/exe084/exe084.py | tiagolsouza/exercicios-Curso-em-video-PYTHON | e4e6975fac7e4883aeab58b970c6ca72895564e4 | [
"MIT"
] | null | null | null | exercicios/exe084/exe084.py | tiagolsouza/exercicios-Curso-em-video-PYTHON | e4e6975fac7e4883aeab58b970c6ca72895564e4 | [
"MIT"
] | null | null | null | a = 50
print('\033[32m_\033[m'*a)
print('\033[1;32m{:=^{}}\033[m'.format('SISTEMA DE IDENTIFICAÇÃO DE PESOS', a))
print('\033[32m-\033[m'*a)
listafinal = list()
listainit = list()
tamanho = 0
maiorp = menorp = 0
continuar = 's'
while True:
    if 's' in continuar:
        listainit.append(str(input('\033[36mNome: ')))
        listainit.append(int(input('Peso: \033[m')))
        listafinal.append(listainit[:])
        tamanho += 1
        if tamanho == 1:
            maiorp = listainit[-1]
            menorp = listainit[-1]
        elif menorp > listainit[-1]:
            menorp = listainit[-1]
        elif maiorp < listainit[-1]:
            maiorp = listainit[-1]
        listainit.clear()
    elif 'n' in continuar:
        break
    elif 'sn' not in continuar:
        print('\033[31mTente novamente!\033[m')
    continuar = str(input('\033[35mQuer continuar? [S/N] \033[m')).lower().strip()[0]
print(f'\033[34mAo todo, você cadastrou {tamanho} pessoas.\033[m')
#print(f'\033[34mAo todo, você cadastrou {len(listafinal)} pessoas.\033[m')
print(f'\033[34mO maior peso cadastrado foi de {maiorp}Kg. Peso de: ', end='')
for c in listafinal:
    if c[1] == maiorp:
        print(f'\033[35m{c[0]}\033[m', end=' ')
print('')
print(f'\033[34mO menor peso cadastrado foi de {menorp}Kg. Peso de: ', end='')
for c in listafinal:
    if c[1] == menorp:
        print(f'\033[35m{c[0]}\033[m', end=' ')
c97f792c12ecfce4d3ec9489e8ebc3d9afca6f52 | 896 | py | Python | setup.py | mMeijden/three3cpo | 8bbcddc467e48136c41cce66ef4115203f4f3b58 | [
"MIT"
] | 6 | 2021-05-18T21:08:46.000Z | 2022-02-26T18:47:16.000Z | setup.py | mMeijden/three3cpo | 8bbcddc467e48136c41cce66ef4115203f4f3b58 | [
"MIT"
] | 1 | 2021-08-02T16:55:09.000Z | 2021-08-09T08:43:28.000Z | setup.py | mMeijden/three3cpo | 8bbcddc467e48136c41cce66ef4115203f4f3b58 | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages

setup(
    name='t3cpo',
    packages=find_packages(exclude=['tests', '.github']),
    version='0.1',
    license='MIT',
    description='Python wrapper for several 3Commas API endpoints',
    author='mmeijden',
    author_email='me@mmeijden.nl',
    url='https://github.com/mMeijden/three3cpo',
    keywords=['api', 'wrapper', '3c', '3Commas', 'crypto', 'bitcoin', 'altcoin', 'bots', 'exchange', 'trading'],
    install_requires=[
        'requests'
    ],
    classifiers=[
        'Development Status :: 3 - Alpha',
        'Intended Audience :: Developers',
        'Topic :: Software Development :: Build Tools',
        'License :: OSI Approved :: MIT License',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: 3.7',
        'Programming Language :: Python :: 3.8',
    ],
)
a311fff7458112cf33a6f6e30d52f6e84bedba91 | 1,837 | py | Python | src/bitvavo_api_upgraded/settings.py | Thaumatorium/python-bitvavo-api | 14fc994f130105a24ba90cc1a36c8e886778ca7d | [
"0BSD"
] | 1 | 2022-01-18T23:39:54.000Z | 2022-01-18T23:39:54.000Z | src/bitvavo_api_upgraded/settings.py | Thaumatorium/python-bitvavo-api | 14fc994f130105a24ba90cc1a36c8e886778ca7d | [
"0BSD"
] | null | null | null | src/bitvavo_api_upgraded/settings.py | Thaumatorium/python-bitvavo-api | 14fc994f130105a24ba90cc1a36c8e886778ca7d | [
"0BSD"
] | null | null | null | import logging
from pathlib import Path

from decouple import Choices, AutoConfig

from bitvavo_api_upgraded.type_aliases import ms

# Don't use/import python-decouple's `config` variable, because its search_path
# isn't set, which means applications that use a .env file can't override these
# variables :(
config = AutoConfig(search_path=Path.cwd())


class _BitvavoApiUpgraded:
    # This package defaults to INFO; external libraries default to WARNING,
    # so users don't get spammed.
    LOG_LEVEL: str = config(
        "BITVAVO_API_UPGRADED_LOG_LEVEL", default="INFO", cast=Choices(list(logging._nameToLevel.keys()))
    )
    LOG_EXTERNAL_LEVEL: str = config(
        "BITVAVO_API_UPGRADED_EXTERNAL_LOG_LEVEL", default="WARNING", cast=Choices(list(logging._nameToLevel.keys()))
    )
    LAG: ms = config("BITVAVO_API_UPGRADED_LAG", default=ms(50), cast=ms)
    RATE_LIMITING_BUFFER: int = config("BITVAVO_API_UPGRADED_RATE_LIMITING_BUFFER", default=25, cast=int)


class _Bitvavo:
    """
    Changeable variables are handled by the decouple lib; anything else is
    static, because it is based on Bitvavo's documentation and thus should not
    be settable from outside the application.
    """

    ACCESSWINDOW: int = config("BITVAVO_ACCESSWINDOW", default=10_000, cast=int)
    API_RATING_LIMIT_PER_MINUTE: int = 1000
    API_RATING_LIMIT_PER_SECOND: float = API_RATING_LIMIT_PER_MINUTE / 60
    APIKEY: str = config("BITVAVO_APIKEY", default="BITVAVO_APIKEY is missing")
    APISECRET: str = config("BITVAVO_APISECRET", default="BITVAVO_APISECRET is missing")
    DEBUGGING: bool = config("BITVAVO_DEBUGGING", default=False, cast=bool)
    RESTURL: str = "https://api.bitvavo.com/v2"
    WSURL: str = "wss://ws.bitvavo.com/v2/"


# Just import these variables to use the settings :)
BITVAVO_API_UPGRADED = _BitvavoApiUpgraded()
BITVAVO = _Bitvavo()
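The decouple `config(name, default=..., cast=...)` pattern used above boils down to an environment lookup with a typed fallback. A minimal stand-in (real python-decouple also reads `.env`/`settings.ini` files and parses boolean strings like `"true"`):

```python
import os

def config(name, default=None, cast=str):
    # Minimal stand-in for python-decouple's config(): environment wins,
    # otherwise the default is returned as-is; cast applies to env values only.
    raw = os.environ.get(name)
    return cast(raw) if raw is not None else default

os.environ['BITVAVO_ACCESSWINDOW'] = '15000'
print(config('BITVAVO_ACCESSWINDOW', default=10_000, cast=int))   # 15000
print(config('BITVAVO_DEBUGGING', default=False, cast=bool))      # False
```

This is why the module builds its own `AutoConfig(search_path=Path.cwd())` rather than importing decouple's module-level `config`: the search path determines which `.env` file, if any, can override the environment defaults.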