hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
48c388d2a91f2301d0f59df1f50eb64349cced6a | 2,104 | py | Python | direct_gd_predict/hash-profile.py | wac/meshop | ea5703147006e5e85617af897e1d1488e6f29f32 | [
"0BSD"
] | 1 | 2016-05-08T14:54:31.000Z | 2016-05-08T14:54:31.000Z | direct_gd_predict/hash-profile.py | wac/meshop | ea5703147006e5e85617af897e1d1488e6f29f32 | [
"0BSD"
] | null | null | null | direct_gd_predict/hash-profile.py | wac/meshop | ea5703147006e5e85617af897e1d1488e6f29f32 | [
"0BSD"
] | null | null | null | import sys
import heapq
import optparse

from bitcount2 import bitcount

hasher={}
profile={}
key_list=[]
key_col=0


def usage():
    print sys.argv[0]," [profile_file]"
    print "  Load the profile lines from profile_file"
    print "  Hash function uses the features listed in profile_file"
    print "  and tests for p-value greater/less than or equal (0/1)"
    print "  Hash all the profiles from stdin"
    exit(1)


def do_hash(hasher, p, key_list):
    hashval=""
    # for k, v in hasher.iteritems():
    for k in key_list:
        v=hasher[k]
        # Scores are read from text, so compare them numerically rather than
        # lexicographically (string comparison breaks for e.g. "1e-5" vs "0.01").
        if k in p and float(p[k]) < float(v):
            hashval=hashval+"1"
        else:
            hashval=hashval+"0"
    return hashval


sep='|'
key_col=0
#feature_col=1
#score_col=6
in_feature_col=0
in_score_col=1
process_feature_col=1
process_score_col=6

parser = optparse.OptionParser()
#parser.add_option("-n", dest="heapsize",
#                  default=50, action="store", type="int")
#parser.add_option("-R", "--random", dest="use_random",
#                  default=False, action="store_true")
(options, args) = parser.parse_args(sys.argv)

if (len(args) > 1):
    profile_filename=args[1]
else:
    usage()

for line in open(profile_filename):
    if line[0]=='#':
        continue
    tuples=line.strip().split(sep)
    key=tuples[in_feature_col]
    key_list.append(key)
    hasher[key]=tuples[in_score_col]

curr_profile={}
old_key=""
for line in sys.stdin:
    line=line.strip()
    if line[0]=='#':
        print line
        continue
    tuples=line.split(sep)
    curr_key=tuples[key_col]
    if not old_key:
        old_key=curr_key
    if not old_key==curr_key:
        hashval=do_hash(hasher, curr_profile, key_list)
        hashval_int=int(hashval, 2)
        print old_key+sep+hashval+sep+str(hashval_int)+sep+str(bitcount(hashval_int))
        curr_profile={}
        old_key=curr_key
    curr_profile[tuples[process_feature_col]]=tuples[process_score_col]

hashval=do_hash(hasher, curr_profile, key_list)
hashval_int=int(hashval, 2)
print old_key+sep+hashval+sep+str(hashval_int)+sep+str(bitcount(hashval_int))
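The script above imports `bitcount` from a `bitcount2` module that is not part of this snippet. A minimal stand-in — assuming it is a plain population count over a non-negative integer — could look like:

```python
def bitcount(n):
    """Return the number of 1 bits in the non-negative integer n."""
    count = 0
    while n:
        n &= n - 1  # Kernighan's trick: clear the lowest set bit
        count += 1
    return count
```

With that, the final column of each output row counts how many profile features fell below their thresholds, since the hash packs one comparison per bit.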
| 23.120879 | 85 | 0.65827 | 316 | 2,104 | 4.186709 | 0.28481 | 0.031746 | 0.031746 | 0.029478 | 0.179894 | 0.179894 | 0.179894 | 0.179894 | 0.179894 | 0.179894 | 0 | 0.013956 | 0.21673 | 2,104 | 90 | 86 | 23.377778 | 0.788835 | 0.123099 | 0 | 0.28125 | 0 | 0 | 0.114566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.0625 | null | null | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
48ca7f075d0516343cadcc4c408fff80c48e1083 | 11,129 | py | Python | sym_executor.py | zhangzhenghsy/fiber | af1a8c8b01d4935849df73b01ccfeccbba742205 | [
"BSD-2-Clause"
] | null | null | null | sym_executor.py | zhangzhenghsy/fiber | af1a8c8b01d4935849df73b01ccfeccbba742205 | [
"BSD-2-Clause"
] | null | null | null | sym_executor.py | zhangzhenghsy/fiber | af1a8c8b01d4935849df73b01ccfeccbba742205 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/python
import angr,simuvex
import sys,os
import time

from utils_sig import *
from sym_tracer import Sym_Tracer
from sig_recorder import Sig_Recorder


#This class is responsible for performing symbolic execution.
class Sym_Executor(object):
    def __init__(self,options=None,dbg_out=False):
        self.tracer = None
        self.recorder = None
        self.dbg_out = dbg_out
        self._whitelist = set()
        self._all_bbs = set()
        self._num_find = 10
        self.options = options

    def _get_initial_state(self,proj,start,targetfunc=None):
        if proj is None:
            return None
        st = proj.factory.blank_state(addr=start,symbolic_sp=True)
        # print st.arch.registers.keys()
        # We can customize the symbolic execution by setting various options in the state
        # for a full list of available options:
        # https://github.com/angr/simuvex/blob/master/simuvex/s_options.py
        # E.g. st.options.add(simuvex.o.LAZY_SOLVES) ('options' is a set)
        # CALLLESS to do intra-procedure analysis
        st.options.add(simuvex.o.CALLLESS)
        if targetfunc is not None:
            st.options.add(str(hex(targetfunc)))
        # To prevent the engine from discarding log history
        st.options.add(simuvex.o.TRACK_ACTION_HISTORY)
        if self.options.get('simplify_ast',True):
            st.options.add(simuvex.o.SIMPLIFY_EXPRS)
            st.options.add(simuvex.o.SIMPLIFY_MEMORY_READS)
            st.options.add(simuvex.o.SIMPLIFY_MEMORY_WRITES)
            st.options.add(simuvex.o.SIMPLIFY_EXIT_GUARD)
        #TODO: Find a way to deal with function side-effects (i.e. a function call will output to a parameter, then the parameter will be used in a condition later)
        st.options.add(simuvex.o.IGNORE_EXIT_GUARDS)
        st.options.add(simuvex.o.IGNORE_MERGE_CONDITIONS)
        st.options.add(simuvex.o.DONT_MERGE_UNCONSTRAINED)
        #Use customized addr concretization strategy
        st.memory.read_strategies = [angr.concretization_strategies.SimConcretizationStrategyHZ(limit=3)]
        st.memory.write_strategies = [angr.concretization_strategies.SimConcretizationStrategyHZ(limit=3)]
        #print st.options
        return st

    #Include all the BBs along the path from start to ends in the cfg into the whitelist.
    #The CFG here is CFGAcc.
    def _prep_whitelist(self,cfg,cfg_bounds,ends,start=None,proj=None,sym_tab=None,cfg2=None,cfg_bounds2=None,ends2=None,start2=None,func_cfg=None):
        #print "cfg:", [hex(n.addr) for n in cfg.nodes()]
        #print cfg.functions[cfg_bounds[0]]
        if cfg is None or cfg_bounds is None or len(cfg_bounds) < 2:
            print '_prep_whitelist(): Incomplete CFG information'
            return
        #for addr in cfg2.functions:
        #    print cfg2.functions[addr]
        if cfg2 is not None:
            func_cfg2 = get_func_cfg(cfg2,cfg_bounds2[0],proj=proj,sym_tab=sym_tab)
        if func_cfg is None:
            print 'No func_cfg is available at %x' % cfg_bounds[0]
            return
        start = cfg_bounds[0]
        self._all_bbs = set([x.addr for x in func_cfg.nodes()])
        #print '_all_bbs: ' + str([hex(x) for x in list(self._all_bbs)])
        #print '_all_bbs2: '+str([hex(x) for x in list(set([x.addr for x in func_cfg2.nodes()]))])
        if cfg2 is not None:
            self._all_bbs = self._all_bbs.union(set([x.addr for x in func_cfg2.nodes()]))
        self._whitelist = get_node_addrs_between(func_cfg,start,ends,from_func_start=(start == cfg_bounds[0]))
        if cfg2 is not None:
            self._whitelist = self._whitelist.union(get_node_addrs_between(func_cfg2,start2,ends2,from_func_start=(start2 == cfg_bounds2[0])))
        l = list(self._whitelist)
        l.sort()
        #print 'whitelist: ' + str([hex(x) for x in l])
        l = list(self._all_bbs)
        l.sort()
        #print '_all_bbs: ' + str([hex(x) for x in l])
        if self.dbg_out:
            l = list(self._whitelist)
            l.sort()
            print 'whitelist: ' + str([hex(x) for x in l])
        return

    #Why we put an always-'False' find_func here:
    #(1)We rely on an accurate whitelist and all the nodes in the list should be explored, so we don't want
    #to stop at a certain node.
    #(2)With this find_func, basically we will have no states in the 'found' stash in the end, but that's OK
    #because all the things we want to do will be done along the symbolic execution process.
    def _find_func(self,p):
        return False

    def _avoid_func(self,p):
        #print 'avoid_func: ' + str(hex(p.addr)) + ' ' + str(p.addr in whitelist)
        #One problem is that, sometimes p.addr is in the middle of a certain BB, while in whitelist we only have start addresses of BBs.
        #Currently for these cases, we will let it continue to execute because it will align to the BB starts later.
        with open('testexplorenodes','a') as f:
            f.write(str(hex(p.addr))+'\n')
        return False if p.addr not in self._all_bbs else (not p.addr in self._whitelist)

    #This is basically the 'hook_complete' used in the 'explorer' technique, simply deciding whether num_find has been reached.
    def _vt_terminator(self,smg):
        return len(smg.stashes['found']) >= self._num_find

    def _prep_veritesting_options(self,find=None,avoid=None,num_find=10):
        if find is None:
            find = self._find_func
        if avoid is None:
            avoid = self._avoid_func
        #We need to construct an 'explorer' as an 'exploration_technique' used in the internal SimManager of Veritesting,
        #which is basically the same one as used in the normal DSE SimManager (by invoking the 'explore()' method).
        #NOTE that the Veritesting mode will use a separate SimManager, so we have to make TWO 'explorer's.
        exp_tech = angr.exploration_techniques.Explorer(find=find,avoid=avoid,num_find=num_find)
        veritesting_options = {}
        #NOTE: 'loop_unrolling_limit' is compared and considered 'passed' with '>=' instead of '>', which means if we use '1', no loops will even be entered.
        #However we want exactly ONE loop execution, so we should use '2' here.
        veritesting_options['loop_unrolling_limit'] = 2
        veritesting_options['tech'] = exp_tech
        #NOTE that the original 'explorer' technique will set a 'hook_complete' in SimManager, which will be passed from 'run()' to 'step()'
        #as an 'until_func'; however, Veritesting will not invoke 'run()' -- it calls 'step()' directly, so this hook is basically
        #invalidated. To deal with this, we provide a 'terminator' to Veritesting, which will terminate Veritesting when len(stashes['found']) >= num_find.
        veritesting_options['terminator'] = self._vt_terminator
        return veritesting_options

    #Do the symbolic execution on the given CFG, from start to target, with Veritesting and Whitelist mechanisms.
    #Params:
    #proj: the angr project.
    #states: if it's None, creates a default initial state@start, if start is None, then @cfg_bounds[0].
    #cfg: cfg_accurate.
    #cfg_bounds: a 2-element list, specifying the area of the target function (to be executed) in the cfg.
    #start: Where to start the symbolic execution? Must be within the cfg_bounds.
    #targets: Where to end the symbolic execution? Must be within the cfg_bounds. Can specify multiple targets in a list.
    #Ret:
    #The resulting SimManager.
    def try_sym_exec(self,proj,cfg,cfg_bounds,targets,states=None,start=None,new_tracer=False,tracer=None,new_recorder=False,recorder=None,sym_tab=None,sigs=None,cfg2=None,cfg_bounds2=None,targets2=None,start2=None,func_cfg=None,num_find=10):
        #print "start1: ", hex(start)
        #print "start2: ", hex(start2)
        if cfg is None or cfg_bounds is None or len(cfg_bounds) < 2:
            print 'No CFG information available for sym exec.'
            return None
        #This is the start point of sym exec.
        st = start if start is not None else cfg_bounds[0]
        if start2 is not None:
            st = start2
        #Fill initial state.
        #print 'hex(start)', hex(start)
        #print 'str(hex(start))', str(hex(start))
        if states is None:
            if start2 is not None:
                init_state = self._get_initial_state(proj,st,start)
                #init_state = self._get_initial_state(proj,start)
            else:
                init_state = self._get_initial_state(proj,st)
            states = [init_state]
        #Whether we need to create a new Sym_Tracer to trace the symbolic execution
        if new_tracer:
            self.tracer = Sym_Tracer(symbol_table=sym_tab,dbg_out=self.dbg_out)
            #for example: <class 'sym_tracer.Sym_Tracer'>: {'addr_collision': False, 'dbg_out': True, 'symbol_table': <sym_table.Sym_Table object at 0x7fffeba54890>, '_addr_conc_buf': [], '_sym_map': {}}
            #Clear any remaining breakpoints
            self.tracer.stop_trace(states)
            self.tracer.trace(states)
        else:
            self.tracer = tracer
        #Whether we need to create a new Sig_Recorder
        if new_recorder:
            if sigs is None:
                print 'You must provide sigs if you want to use new recorder'
                return
            if self.tracer is None:
                print 'You must provide tracer or specify new_tracer flag if you want to use new recorder'
                return
            #Fixed from 'dbg_out=dbg_out': 'dbg_out' is not a local name here, it lives on self.
            self.recorder = Sig_Recorder(sigs,self.tracer,dbg_out=self.dbg_out)
            #Clear any remaining breakpoints
            self.recorder.stop_record(states)
            #Record structural information (nodes and their relationships) and semantic information of 'root'
            #instructions with a per-instruction breakpoint; the structural information has already been partly recorded in the initial signature.
            self.recorder.record(states)
        else:
            self.recorder = recorder
        #Set the whitelist of basic blocks; we only want to include the BBs that are along the paths from st to targets.
        self._prep_whitelist(cfg,cfg_bounds,targets,start,proj=proj,sym_tab=sym_tab,cfg2=cfg2,cfg_bounds2=cfg_bounds2,ends2=targets2,start2=start2,func_cfg=func_cfg)
        self._num_find = num_find
        #Set the VeriTesting options
        veritesting_options = self._prep_veritesting_options(num_find=self._num_find)
        #Construct the simulation execution manager
        smg = proj.factory.simgr(thing=states, veritesting=True, veritesting_options=veritesting_options)
        #TODO: Do we still need to use a loop limiter for the main DSE SimManager since Veritesting has already got a built-in loop limiter?
        #limiter = angr.exploration_techniques.looplimiter.LoopLimiter(count=0, discard_stash='spinning')
        #smg.use_technique(limiter)
        t0 = time.time()
        smg.explore(find=self._find_func, avoid=self._avoid_func, num_find=self._num_find)
        print ['%s:%d ' % (name,len(stash)) for name, stash in smg.stashes.items()]
        print 'Time elapsed: ' + str(time.time() - t0)
        return smg
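The pruning rule in `_avoid_func` is independent of angr and can be restated as a small pure function (the names here are illustrative, not part of the original code):

```python
def should_avoid(addr, all_bbs, whitelist):
    """Prune a state only if its address is a known basic-block start
    (it appears in the function's CFG) that is not on any path to a target."""
    if addr not in all_bbs:
        # Possibly an address in the middle of a block: keep executing,
        # it will align to a block start later.
        return False
    return addr not in whitelist
```

So a state at an unknown (mid-block) address survives, while a known off-path block is discarded.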
| 54.287805 | 243 | 0.673556 | 1,625 | 11,129 | 4.467077 | 0.226462 | 0.019837 | 0.018184 | 0.026174 | 0.201543 | 0.163659 | 0.117096 | 0.07136 | 0.053451 | 0.027276 | 0 | 0.008472 | 0.23632 | 11,129 | 204 | 244 | 54.553922 | 0.845629 | 0.422051 | 0 | 0.186441 | 0 | 0 | 0.055477 | 0 | 0 | 0 | 0 | 0.004902 | 0 | 0 | null | null | 0 | 0.050847 | null | null | 0.067797 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
48cd84239fff9070a94f62f2913b39c9eded80ea | 204 | py | Python | shiva/constants.py | tooxie/shiva-server | 4d169aae8d4cb01133f62701b14610695e48c297 | [
"MIT"
] | 70 | 2015-01-09T15:15:15.000Z | 2022-01-14T09:51:55.000Z | shiva/constants.py | tooxie/shiva-server | 4d169aae8d4cb01133f62701b14610695e48c297 | [
"MIT"
] | 14 | 2015-01-04T10:08:26.000Z | 2021-12-13T19:35:07.000Z | shiva/constants.py | tooxie/shiva-server | 4d169aae8d4cb01133f62701b14610695e48c297 | [
"MIT"
] | 19 | 2015-01-02T22:42:01.000Z | 2022-01-14T09:51:59.000Z | # -*- coding: utf-8 -*-


class HTTP:
    BAD_REQUEST = 400
    UNAUTHORIZED = 401
    FORBIDDEN = 403
    NOT_FOUND = 404
    METHOD_NOT_ALLOWED = 405
    CONFLICT = 409
    UNSUPPORTED_MEDIA_TYPE = 415
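A class of named status codes like this keeps handler return values readable. A hypothetical handler sketch (not taken from the Shiva codebase) using the same constants:

```python
class HTTP:
    BAD_REQUEST = 400
    NOT_FOUND = 404


def get_track(tracks, track_id):
    """Return a (payload, status) pair the way a REST resource might."""
    if not isinstance(track_id, int):
        return {"error": "invalid id"}, HTTP.BAD_REQUEST
    if track_id not in tracks:
        return {"error": "not found"}, HTTP.NOT_FOUND
    return tracks[track_id], 200
```

Callers compare against the named constant instead of a bare integer, which is the point of the module.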
| 17 | 32 | 0.632353 | 25 | 204 | 4.92 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150685 | 0.284314 | 204 | 11 | 33 | 18.545455 | 0.691781 | 0.102941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
48d0551fc7668ef91b0cbb625288bc4330046f92 | 642 | py | Python | day8/test_day8.py | bwbeach/advent-of-code-2020 | 572810c3adae5815543efde17a4bca9596d05a5b | [
"CC0-1.0"
] | null | null | null | day8/test_day8.py | bwbeach/advent-of-code-2020 | 572810c3adae5815543efde17a4bca9596d05a5b | [
"CC0-1.0"
] | null | null | null | day8/test_day8.py | bwbeach/advent-of-code-2020 | 572810c3adae5815543efde17a4bca9596d05a5b | [
"CC0-1.0"
] | null | null | null | from day8.day8 import fix_code, parse_code, run
SAMPLE_CODE_LOOP = """nop +0
acc +1
jmp +4
acc +3
jmp -3
acc -99
acc +1
jmp -4
acc +6
"""
SAMPLE_CODE_HALT = """nop +0
acc +1
jmp +4
acc +3
jmp -3
acc -99
acc +1
nop -4
acc +6
"""
def test_parse():
    assert parse_code("nop +0\nacc +1\nacc -6") == [("nop", 0), ("acc", 1), ("acc", -6)]


def test_run_loop():
    code = parse_code(SAMPLE_CODE_LOOP)
    assert run(code) == ("loop", 5)


def test_run_halt():
    code = parse_code(SAMPLE_CODE_HALT)
    assert run(code) == ("halt", 8)


def test_fix_code():
    assert fix_code(parse_code(SAMPLE_CODE_LOOP)) == parse_code(SAMPLE_CODE_HALT)
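The `day8.day8` module under test is not included in this snippet. An implementation consistent with these assertions (the Advent of Code 2020 day 8 "handheld console") could look like:

```python
def parse_code(text):
    """Parse lines like 'acc +1' into (op, arg) tuples."""
    return [(op, int(arg)) for op, arg in
            (line.split() for line in text.strip().splitlines())]


def run(code):
    """Execute until an instruction repeats ('loop') or we fall off the
    end ('halt'). Returns (status, accumulator)."""
    acc = pc = 0
    seen = set()
    while pc < len(code):
        if pc in seen:
            return ("loop", acc)
        seen.add(pc)
        op, arg = code[pc]
        if op == "acc":
            acc += arg
            pc += 1
        elif op == "jmp":
            pc += arg
        else:  # nop
            pc += 1
    return ("halt", acc)


def fix_code(code):
    """Flip one jmp<->nop so the program halts; return the fixed program."""
    for i, (op, arg) in enumerate(code):
        if op == "acc":
            continue
        candidate = list(code)
        candidate[i] = ("nop" if op == "jmp" else "jmp", arg)
        if run(candidate)[0] == "halt":
            return candidate
    return None
```

Running the sample programs above through these functions reproduces the expected `("loop", 5)` and `("halt", 8)` results.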
| 15.285714 | 88 | 0.638629 | 116 | 642 | 3.301724 | 0.215517 | 0.140992 | 0.13577 | 0.198433 | 0.441253 | 0.292428 | 0.151436 | 0.151436 | 0.151436 | 0.151436 | 0 | 0.057692 | 0.190031 | 642 | 41 | 89 | 15.658537 | 0.678846 | 0 | 0 | 0.516129 | 0 | 0 | 0.260125 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 1 | 0.129032 | false | 0 | 0.032258 | 0 | 0.16129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
48d23528c08e020ee5f13c45ec80e61813e3bd41 | 6,128 | py | Python | biosys/apps/main/tests/api/test_misc.py | florianm/biosys | 934d06ed805b0734f3cb9a00feec6cd81a94e512 | [
"Apache-2.0"
] | 2 | 2018-04-09T04:02:30.000Z | 2019-08-20T03:12:55.000Z | biosys/apps/main/tests/api/test_misc.py | florianm/biosys | 934d06ed805b0734f3cb9a00feec6cd81a94e512 | [
"Apache-2.0"
] | 29 | 2016-01-20T08:14:15.000Z | 2017-07-13T07:17:32.000Z | biosys/apps/main/tests/api/test_misc.py | florianm/biosys | 934d06ed805b0734f3cb9a00feec6cd81a94e512 | [
"Apache-2.0"
] | 5 | 2016-01-14T23:02:36.000Z | 2016-09-21T05:35:03.000Z | from django.shortcuts import reverse
from django.test import TestCase
from rest_framework import status
from rest_framework.test import APIClient
from main.models import Project
from main.tests import factories
from main.tests.api import helpers
class TestWhoAmI(helpers.BaseUserTestCase):
def setUp(self):
super(TestWhoAmI, self).setUp()
self.url = reverse('api:whoami')
def test_get(self):
client = self.anonymous_client
self.assertEqual(
client.get(self.url).status_code,
status.HTTP_200_OK
)
user = factories.UserFactory()
user.set_password('password')
user.save()
client = APIClient()
self.assertTrue(client.login(username=user.username, password='password'))
resp = client.get(self.url)
self.assertEqual(
resp.status_code,
status.HTTP_200_OK
)
# test that the response contains username, first and last name and email at least and the id
data = resp.json()
self.assertEqual(user.username, data['username'])
self.assertEqual(user.first_name, data['first_name'])
self.assertEqual(user.last_name, data['last_name'])
self.assertEqual(user.email, data['email'])
self.assertEqual(user.id, data['id'])
# test that the password is not in the returned fields
self.assertFalse('password' in data)
def test_not_allowed_methods(self):
client = self.readonly_client
self.assertEqual(
client.post(self.url, {}).status_code,
status.HTTP_405_METHOD_NOT_ALLOWED
)
self.assertEqual(
client.put(self.url, {}).status_code,
status.HTTP_405_METHOD_NOT_ALLOWED
)
self.assertEqual(
client.patch(self.url, {}).status_code,
status.HTTP_405_METHOD_NOT_ALLOWED
)
class TestStatistics(TestCase):
def setUp(self):
self.url = reverse('api:statistics')
def test_get(self):
anonymous = APIClient()
client = anonymous
self.assertIn(
client.get(self.url).status_code,
[status.HTTP_401_UNAUTHORIZED, status.HTTP_403_FORBIDDEN]
)
user = factories.UserFactory.create()
user.set_password('password')
user.save()
client = APIClient()
self.assertTrue(client.login(username=user.username, password='password'))
resp = client.get(self.url)
self.assertEqual(
resp.status_code,
status.HTTP_200_OK
)
# expected response with no data
expected = {
'projects': {'total': 0},
'datasets': {
'total': 0,
'generic': {'total': 0},
'observation': {'total': 0},
'speciesObservation': {'total': 0},
},
'records': {
'total': 0,
'generic': {'total': 0},
'observation': {'total': 0},
'speciesObservation': {'total': 0},
},
'sites': {'total': 0},
}
self.assertEqual(expected, resp.json())
# create one project
program = factories.ProgramFactory.create()
project = factories.ProjectFactory.create(program=program)
expected['projects']['total'] = 1
resp = client.get(self.url)
self.assertEqual(
resp.status_code,
status.HTTP_200_OK
)
self.assertEqual(expected, resp.json())
# create some sites
count = 3
factories.SiteFactory.create_batch(
count,
project=project
)
expected['sites']['total'] = count
resp = client.get(self.url)
self.assertEqual(
resp.status_code,
status.HTTP_200_OK
)
self.assertEqual(expected, resp.json())
def test_not_allowed_methods(self):
user = factories.UserFactory.create()
user.set_password('password')
user.save()
client = APIClient()
self.assertTrue(client.login(username=user.username, password='password'))
self.assertEqual(
client.post(self.url, {}).status_code,
status.HTTP_405_METHOD_NOT_ALLOWED
)
self.assertEqual(
client.put(self.url, {}).status_code,
status.HTTP_405_METHOD_NOT_ALLOWED
)
self.assertEqual(
client.patch(self.url, {}).status_code,
status.HTTP_405_METHOD_NOT_ALLOWED
)
class TestSpecies(TestCase):
# set the species list to be the testing one
species_facade_class = helpers.LightSpeciesFacade
def setUp(self):
from main.api.views import SpeciesMixin
SpeciesMixin.species_facade_class = self.species_facade_class
self.url = reverse('api:species')
def test_get(self):
anonymous = APIClient()
client = anonymous
self.assertEqual(
client.get(self.url).status_code,
status.HTTP_200_OK
)
user = factories.UserFactory.create()
user.set_password('password')
user.save()
client = APIClient()
self.assertTrue(client.login(username=user.username, password='password'))
resp = client.get(self.url)
self.assertEqual(
resp.status_code,
status.HTTP_200_OK
)
def test_not_allowed_methods(self):
user = factories.UserFactory.create()
user.set_password('password')
user.save()
client = APIClient()
self.assertTrue(client.login(username=user.username, password='password'))
self.assertEqual(
client.post(self.url, {}).status_code,
status.HTTP_405_METHOD_NOT_ALLOWED
)
self.assertEqual(
client.put(self.url, {}).status_code,
status.HTTP_405_METHOD_NOT_ALLOWED
)
self.assertEqual(
client.patch(self.url, {}).status_code,
status.HTTP_405_METHOD_NOT_ALLOWED
)
| 31.587629 | 101 | 0.590078 | 642 | 6,128 | 5.471963 | 0.174455 | 0.102477 | 0.077427 | 0.096783 | 0.627099 | 0.627099 | 0.606889 | 0.606889 | 0.596641 | 0.568745 | 0 | 0.015478 | 0.304178 | 6,128 | 193 | 102 | 31.751295 | 0.808396 | 0.041612 | 0 | 0.618182 | 0 | 0 | 0.056256 | 0 | 0 | 0 | 0 | 0 | 0.187879 | 1 | 0.054545 | false | 0.066667 | 0.048485 | 0 | 0.127273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
48d29ebbfa1dba9c5ef7d472e7d45e6999e1c63b | 531 | py | Python | src/netwrok/analytics.py | simonwittber/netwrok-server | d4767faa766e7ecb0de0c912f0c0a26b45b84189 | [
"MIT"
] | 16 | 2015-12-01T14:42:30.000Z | 2021-04-26T21:16:45.000Z | src/netwrok/analytics.py | DifferentMethods/netwrok-server | d4767faa766e7ecb0de0c912f0c0a26b45b84189 | [
"MIT"
] | null | null | null | src/netwrok/analytics.py | DifferentMethods/netwrok-server | d4767faa766e7ecb0de0c912f0c0a26b45b84189 | [
"MIT"
] | 4 | 2015-03-02T07:19:15.000Z | 2015-10-14T07:38:02.000Z | import asyncio
import aiopg

from . import nwdb
from . import core


@core.handler
def register(client, path, event):
    """
    Register an event occurring at path. Created time is automatically added.
    Useful for generic analytics type stuff.
    """
    with (yield from nwdb.connection()) as conn:
        cursor = yield from conn.cursor()
        yield from cursor.execute("""
            insert into analytics(member_id, path, event)
            select %s, %s, %s
        """, [client.session.get("member_id", None), path, event])
| 27.947368 | 76 | 0.653484 | 69 | 531 | 5 | 0.608696 | 0.078261 | 0.086957 | 0.110145 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.237288 | 531 | 18 | 77 | 29.5 | 0.851852 | 0.212806 | 0 | 0 | 0 | 0 | 0.246231 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.333333 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
48d3bd9308acb8eb9e29472526d5d05261bbdb90 | 635 | py | Python | monte_carlo/helpers/muaanalytical.py | nathhje/bachelorproject | 4bca826d1e065f647e2088b1fd028b1bdf863124 | [
"MIT"
] | null | null | null | monte_carlo/helpers/muaanalytical.py | nathhje/bachelorproject | 4bca826d1e065f647e2088b1fd028b1bdf863124 | [
"MIT"
] | null | null | null | monte_carlo/helpers/muaanalytical.py | nathhje/bachelorproject | 4bca826d1e065f647e2088b1fd028b1bdf863124 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Determines the reflectance based on r and mua.
"""
import math

import helpers.analyticalvalues as av


def reflectance(mua, r):
    """
    mua: the absorption coefficient used.
    r: the radial distance used.
    """
    values = av.analyticalValues(r, mua)
    # the value of the reflectance is determined
    return (values.z0 * (values.ueff + values.rho1 ** -1) * math.exp(-values.ueff * values.rho1)
            / (values.rho1 ** 2) + (values.z0 + 2 * values.zb) * (values.ueff + values.rho2 ** -1)
            * math.exp(-values.ueff * values.rho2) / (values.rho2 ** 2)) / 4 / math.pi
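Written out, the return expression matches the standard two-source (dipole) diffusion-theory reflectance, assuming `av.analyticalValues` supplies the source depth $z_0$, extrapolated-boundary offset $z_b$, effective attenuation $\mu_{\mathrm{eff}}$, and the source–detector distances $\rho_1$, $\rho_2$:

```latex
R(r) = \frac{1}{4\pi}\left[
    z_0\left(\mu_{\mathrm{eff}} + \frac{1}{\rho_1}\right)
        \frac{e^{-\mu_{\mathrm{eff}}\,\rho_1}}{\rho_1^{2}}
  + (z_0 + 2 z_b)\left(\mu_{\mathrm{eff}} + \frac{1}{\rho_2}\right)
        \frac{e^{-\mu_{\mathrm{eff}}\,\rho_2}}{\rho_2^{2}}
\right]
```

Each bracketed term corresponds one-to-one with a summand in the Python expression, with the leading $1/(4\pi)$ coming from the trailing `/ 4 / math.pi`.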
| 30.238095 | 99 | 0.60315 | 83 | 635 | 4.614458 | 0.46988 | 0.104439 | 0.167102 | 0.104439 | 0.125326 | 0.125326 | 0 | 0 | 0 | 0 | 0 | 0.031315 | 0.245669 | 635 | 20 | 100 | 31.75 | 0.768267 | 0.283465 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
48d3e34f960926be47270d979dba99f1e974b2b3 | 476 | py | Python | main/test_data.py | anna01111/demo_web_ui_test_suite | 69bedc25126b874774e2f51a83356dc9ee1b7e74 | [
"CC0-1.0"
] | null | null | null | main/test_data.py | anna01111/demo_web_ui_test_suite | 69bedc25126b874774e2f51a83356dc9ee1b7e74 | [
"CC0-1.0"
] | null | null | null | main/test_data.py | anna01111/demo_web_ui_test_suite | 69bedc25126b874774e2f51a83356dc9ee1b7e74 | [
"CC0-1.0"
] | null | null | null | from faker import Faker
"""
More info: https://microservices-demo.github.io/docs/user-accounts.html
"""
# The demo app is shipped with the following account:
username = 'user'
password = 'password'
# Fake data that is used for new registrations:
faker = Faker()
autogenerated_username = faker.user_name()
autogenerated_first_name = faker.first_name()
autogenerated_last_name = faker.last_name()
autogenerated_email = faker.email()
autogenerated_password = faker.password()
| 26.444444 | 71 | 0.779412 | 63 | 476 | 5.730159 | 0.555556 | 0.141274 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115546 | 476 | 17 | 72 | 28 | 0.857482 | 0.203782 | 0 | 0 | 0 | 0 | 0.040404 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.222222 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
48d3f8d217b00f2ba74165ed887ea259202fee75 | 1,115 | py | Python | pfr/run.py | AnnaMag/pdf-flask-react | de89eb13b2e2e0d4418c28041fe294205f528b96 | [
"BSD-2-Clause"
] | 2 | 2019-01-04T16:55:05.000Z | 2019-08-28T20:16:47.000Z | pfr/run.py | AnnaMag/pdf-flask-react | de89eb13b2e2e0d4418c28041fe294205f528b96 | [
"BSD-2-Clause"
] | 2 | 2021-06-01T21:52:21.000Z | 2021-12-13T19:43:43.000Z | pfr/run.py | AnnaMag/pdf-flask-react | de89eb13b2e2e0d4418c28041fe294205f528b96 | [
"BSD-2-Clause"
] | null | null | null | from io import StringIO
from io import BytesIO
import urllib
from urllib import request
import utils
from pdf_processing import scrape_gazette_names, get_info_outline
from data_parsing import save_to_dict
if __name__ == '__main__':
# not saving anything locally, just the names listed on the webpage to access the files later
url = 'http://www.gpwonline.co.za/Gazettes/Pages/Published-National-Regulation-Gazettes.aspx'
doc_names = scrape_gazette_names(url)
db_name = 'gov_docs'
db_collection = 'nat_reg'
collection = utils.set_collection(db_name, db_collection)
for url in doc_names[0][3:5]:
print(url)
fp = BytesIO(urllib.request.urlopen(url).read())
info, device, pages_skipped = get_info_outline(fp)
print(info)
        # pages_skipped should be the pages used for extraction; for now it is only used to monitor problems
gaz_dict = save_to_dict(device.interesting_text, device.aux_text, \
pages_skipped, info, device.page_number, url)
print(gaz_dict)
utils.write_db(collection, gaz_dict)
| 33.787879 | 97 | 0.699552 | 154 | 1,115 | 4.798701 | 0.525974 | 0.048714 | 0.032476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003476 | 0.226009 | 1,115 | 32 | 98 | 34.84375 | 0.852839 | 0.15157 | 0 | 0 | 0 | 0.045455 | 0.114528 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.318182 | 0 | 0.318182 | 0.136364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
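run.py above wraps the downloaded gazette bytes in `BytesIO` so the PDF-processing code can treat them as a file without saving anything locally. A minimal illustration of that pattern, with a stand-in payload instead of a real download:

```python
from io import BytesIO

# Stand-in for urllib.request.urlopen(url).read(): a few bytes that
# merely look like the start of a PDF file, not a real gazette.
payload = b"%PDF-1.4 fake gazette body"

fp = BytesIO(payload)   # in-memory, file-like object
header = fp.read(8)     # parsers can read()/seek() it like an open file
fp.seek(0)              # rewind before handing it to a parser
```

This is why `get_info_outline(fp)` needs no temporary files: `BytesIO` satisfies the same read/seek interface a parser expects from `open(path, 'rb')`.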
48d4f15c7fa28d9ec9d8b63f2ea935ca7b5152ba | 1,246 | py | Python | day9/day9.py | jaredledvina/adventofcode2020 | 2a31fd88c0b6bddd2c06327d04e6630b8fb29909 | [
"MIT"
] | 1 | 2020-12-09T14:50:49.000Z | 2020-12-09T14:50:49.000Z | day9/day9.py | jaredledvina/adventofcode2020 | 2a31fd88c0b6bddd2c06327d04e6630b8fb29909 | [
"MIT"
] | null | null | null | day9/day9.py | jaredledvina/adventofcode2020 | 2a31fd88c0b6bddd2c06327d04e6630b8fb29909 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import itertools
def read_input():
with open('input.txt') as f:
puzzle_input = f.read().splitlines()
puzzle_input = [int(num) for num in puzzle_input]
return puzzle_input
def part1(puzzle_input):
preamble = puzzle_input[:25]
remaining = puzzle_input[25:]
for item in remaining:
found_match = False
        # use combinations so a value is never paired with itself
        for a, b in itertools.combinations(preamble, 2):
            if a + b == item:
found_match = True
preamble.append(item)
preamble.pop(0)
break
if not found_match:
return item
def part2(puzzle_input):
invalid = part1(puzzle_input)
for position in range(len(puzzle_input)):
combination_position = 0
for combination in itertools.accumulate(puzzle_input[position:]):
if combination == invalid:
return min(puzzle_input[position:combination_position+position]) + max(puzzle_input[position:combination_position+position])
combination_position += 1
def main():
puzzle_input = read_input()
print(part1(puzzle_input))
print(part2(puzzle_input))
if __name__ == '__main__':
main() | 29.666667 | 140 | 0.629213 | 145 | 1,246 | 5.17931 | 0.344828 | 0.234354 | 0.063915 | 0.079893 | 0.122503 | 0.122503 | 0 | 0 | 0 | 0 | 0 | 0.01663 | 0.276083 | 1,246 | 42 | 141 | 29.666667 | 0.815965 | 0.016854 | 0 | 0 | 0 | 0 | 0.013878 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121212 | false | 0 | 0.030303 | 0 | 0.242424 | 0.060606 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
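The day9 solver above can be sanity-checked against the worked example from the Advent of Code 2020 day 9 description (preamble of 5). A self-contained sketch of the same two steps, using `combinations` so a value is never summed with itself:

```python
import itertools

def first_invalid(numbers, preamble_len):
    """Return the first number that is not a sum of two distinct earlier values."""
    for i in range(preamble_len, len(numbers)):
        window = numbers[i - preamble_len:i]
        if not any(a + b == numbers[i]
                   for a, b in itertools.combinations(window, 2)):
            return numbers[i]

def weakness(numbers, target):
    """Find a contiguous run (length >= 2) summing to target; return min + max."""
    for start in range(len(numbers)):
        total = 0
        for end in range(start, len(numbers)):
            total += numbers[end]
            if total == target and end > start:
                run = numbers[start:end + 1]
                return min(run) + max(run)

# Sample input from the puzzle description.
sample = [35, 20, 15, 25, 47, 40, 62, 55, 65, 95, 102, 117,
          150, 182, 127, 219, 299, 277, 309, 576]
```

For this sample the first invalid number is 127 and the contiguous run 15+25+47+40 gives the weakness 15 + 47 = 62.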
48d79b6a3679e4354a437a7315a9dd9bd23f2c50 | 3,971 | py | Python | scraper/edx.py | thanasis457/Mooc-platform | 5ff3b7b43fadc86ec5d4d54db6963449a6610bb5 | [
"MIT"
] | 4 | 2020-08-30T12:18:27.000Z | 2021-05-19T06:42:13.000Z | scraper/edx.py | thanasis457/Mooc-platform | 5ff3b7b43fadc86ec5d4d54db6963449a6610bb5 | [
"MIT"
] | 1 | 2021-01-28T20:21:48.000Z | 2021-01-28T20:21:48.000Z | scraper/edx.py | thanasis457/Mooc-platform | 5ff3b7b43fadc86ec5d4d54db6963449a6610bb5 | [
"MIT"
] | 1 | 2020-09-14T13:20:05.000Z | 2020-09-14T13:20:05.000Z | import requests, json, bs4, urllib.parse, math
from . import Course, Platform
class Edx(Platform):
name = 'edX'
def _urls(self):
res = requests.get(make_url())
count = json.loads(res.text)['objects']['count']
num_pages = math.ceil(count / 20)
urls = [make_url(page=page) for page in range(1, num_pages + 1)]
return urls
def _parse(self, url):
res = requests.get(url)
courses = []
results = res.json()['objects']['results']
for result in results:
title = result['title']
if result['full_description']:
description = html_to_text(result['full_description'])
else:
description = result['short_description']
snippet = ''
if result['short_description'] and result['short_description'] != '.':
snippet = result['short_description']
url = result['marketing_url']
tags = [subject_uuids.get(uuid) for uuid in result['subject_uuids']]
partners = [result.get('org')]
course = Course(title, partners, self.name,
description, tags, url, snippet=snippet)
courses.append(course)
return courses
subject_uuids = {'d8244ef2-45fb-4be3-a9d7-a6749cee3b19': 'Architecture',
'2cc66121-0c07-407b-96c4-99305359a36f': 'Art & Culture',
'9d5b5edb-254a-4d54-b430-776f1f00eaf0': 'Biology & Life Sciences',
'409d43f7-ff36-4834-9c28-252132347d87': 'Business & Management',
'c5ec1f86-4e59-4273-8e22-ceec2b8d10a2': 'Chemistry',
'605bb663-a342-4cf3-b5a5-fee2f33f1642': 'Communication',
'e52e2134-a4e4-4fcb-805f-cbef40812580': 'Computer Science',
'a168a80a-4b6c-4d92-9f1d-4c235206feaf': 'Data Analysis & Statistics',
'34173fb0-fe3d-4715-b4e0-02a9426a873c': 'Design',
'bab458d9-19b3-476e-864f-8abd1d1aab44': 'Economics & Finance',
'8ac7a3da-a60b-4565-b361-384baaa49279': 'Education & Teacher Training',
'337dfb23-571e-49d7-9c8e-385120dea6f3': 'Electronics',
'07406bfc-76c4-46cc-a5bf-2deace7995a6': 'Energy & Earth Sciences',
'0d7bb9ed-4492-419a-bb44-415adafd9406': 'Engineering',
'8aaac548-1930-4614-aeb4-a089dae7ae26': 'Environmental Studies',
'8a552a20-963e-475c-9b0d-4c5efe22d015': 'Ethics',
'caa4db79-f325-41ca-8e09-d5bb6e148240': 'Food & Nutrition',
'51a13a1c-7fc8-42a6-9e96-6636d10056e2': 'Health & Safety',
                 'c8579e1c-99f2-4a95-988c-3542909f055e': 'History',
'00e5d5e0-ce45-4114-84a1-50a5be706da5': 'Humanities',
'32768203-e738-4627-8b04-78b0ed2b44cb': 'Language',
'4925b67d-01c4-4287-a8d1-a3e0066113b8': 'Law',
'74b6ed2a-3ba0-49be-adc9-53f7256a12e1': 'Literature',
'a669e004-cbc0-4b68-8882-234c12e1cce4': 'Math',
'a5db73b2-05b4-4284-beef-c7876ec1499b': 'Medicine',
'f520dcc1-f5b7-42fe-a757-8acfb1e9e79d': 'Music',
'830f46dc-624e-46f4-9df0-e2bc6b346956': 'Philosophy & Ethics',
'88eb7ca7-2296-457d-8aac-e5f7503a9333': 'Physics',
'f830cfeb-bb7e-46ed-859d-e2a9f136499f': 'Science',
'eefb009b-0a02-49e9-b1b1-249982b6ce86': 'Social Sciences'}
def make_url(page=1):
params = {'selected_facets[]': 'transcript_languages_exact:English',
'partner': 'edx',
'content_type[]': 'courserun',
'page': page,
'page_size': 20}
return 'https://www.edx.org/api/v1/catalog/search?' + urllib.parse.urlencode(params)
def html_to_text(html):
soup = bs4.BeautifulSoup(html, 'lxml')
return soup.text
| 44.617978 | 88 | 0.576681 | 384 | 3,971 | 5.895833 | 0.700521 | 0.019435 | 0.038869 | 0.025618 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219979 | 0.289096 | 3,971 | 88 | 89 | 45.125 | 0.582005 | 0 | 0 | 0 | 0 | 0 | 0.447998 | 0.280534 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.028571 | 0 | 0.171429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
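`Edx._urls` above reads the result count, derives a page count with `math.ceil`, and builds one catalog URL per page via `urllib.parse.urlencode`. A stdlib-only sketch of that pagination logic, with the parameter set trimmed for brevity (only `partner`, `page`, and `page_size` are kept from the original):

```python
import math
import urllib.parse

BASE = 'https://www.edx.org/api/v1/catalog/search?'

def make_url(page=1, page_size=20):
    """Assemble the catalog search URL for one results page."""
    params = {'partner': 'edx', 'page': page, 'page_size': page_size}
    return BASE + urllib.parse.urlencode(params)

def page_urls(count, page_size=20):
    """One URL per results page, as Edx._urls does after reading the count."""
    num_pages = math.ceil(count / page_size)
    return [make_url(page=p, page_size=page_size)
            for p in range(1, num_pages + 1)]
```

`urlencode` also percent-escapes awkward keys such as `selected_facets[]` and values such as `transcript_languages_exact:English`, which is why the scraper builds the query string this way instead of concatenating it by hand.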
48d950cb515fdc01c87e2cf97d07a2e9d9b96b55 | 8,409 | py | Python | main.py | LaudateCorpus1/TotalConnect2.0_API-Arm-Disarm | 96885410defa036b37b5f6ae86b322de89c850ae | [
"MIT"
] | 1 | 2017-03-06T03:44:40.000Z | 2017-03-06T03:44:40.000Z | main.py | LaudateCorpus1/TotalConnect2.0_API-Arm-Disarm | 96885410defa036b37b5f6ae86b322de89c850ae | [
"MIT"
] | null | null | null | main.py | LaudateCorpus1/TotalConnect2.0_API-Arm-Disarm | 96885410defa036b37b5f6ae86b322de89c850ae | [
"MIT"
] | 2 | 2020-01-20T12:57:55.000Z | 2022-02-08T07:03:58.000Z | #!/usr/local/bin/python2.7
#FreeBSD ARP entries expire after 5 minutes (300s) - /bin/echo "net.link.ether.inet.max_age 300" >> /etc/sysctl.conf
#Crontab -e "* * * * * /usr/local/bin/python2.7 /root/Security.py"
import subprocess
import ConfigParser
import string, os, sys, httplib
import xml.etree.ElementTree as ET
from datetime import datetime, time
now = datetime.now()
now_time = now.time()
#---- BOL FOR CONFIGURATION INI ----#
# Documentation: https://wiki.python.org/moin/ConfigParserExamples #
Config = ConfigParser.ConfigParser()
Config.read("Security.ini")
cfgfile = open("Security.ini")
def BoolConfigSectionMap(section):
dict1 = {}
options = Config.options(section)
for option in options:
try:
dict1[option] = Config.getboolean(section, option)
if dict1[option] == -1:
DebugPrint("skip: %s" % option)
except:
print("exception on %s!" % option)
dict1[option] = None
return dict1
def ConfigSectionMap(section):
dict1 = {}
options = Config.options(section)
for option in options:
try:
dict1[option] = Config.get(section, option)
if dict1[option] == -1:
DebugPrint("skip: %s" % option)
except:
print("exception on %s!" % option)
dict1[option] = None
return dict1
state = BoolConfigSectionMap("Status")['armed']
#---- EOL FOR CONFIGURATION INI ----#
device1 = '00:00:00:00:00:00'
device2 = '00:00:00:00:00:00'
device3 = '00:00:00:00:00:00'
#---- BOL for LOG Output ---- #
Log = open('SecurityAuditlog.txt', 'w')
print >> Log, "---------",now_time,"---------"
#---- BOL API Section ----#
def TC2_SOAPSessionID():
global sessionHash
server_addr = "rs.alarmnet.com"
service_action = "/TC21API/TC2.asmx"
username = ConfigSectionMap("Authentication")['username']
password = ConfigSectionMap("Authentication")['password']
body = """
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><soapenv:Header/><soapenv:Body><tns:AuthenticateUserLoginEx xmlns:tns="https://services.alarmnet.com/TC2/"><tns:userName>%s</tns:userName>"""
body1 = """<tns:password>%s</tns:password><tns:ApplicationID>14588</tns:ApplicationID><tns:ApplicationVersion>3.14.2</tns:ApplicationVersion><tns:LocaleCode></tns:LocaleCode></tns:AuthenticateUserLoginEx></soapenv:Body></soapenv:Envelope>"""
request = httplib.HTTPSConnection(server_addr)
request.putrequest("POST", service_action)
request.putheader("Accept", "application/soap+xml, application/dime, multipart/related, text/*")
request.putheader("Content-Type", "text/xml; charset=utf-8")
request.putheader("Cache-Control", "no-cache")
request.putheader("Pragma", "no-cache")
request.putheader("SOAPAction","https://services.alarmnet.com/TC2/AuthenticateUserLoginEx")
request.putheader("Content-Length", str(len(body % username + body1 % password)))
request.endheaders()
request.send(body % username + body1 % password)
response = request.getresponse().read()
tree = ET.fromstring(response)
sessionHash = tree.find('.//{https://services.alarmnet.com/TC2/}SessionID').text
return
def TC2_DisarmSecuritySystem():
TC2_SOAPSessionID()
server_addr = "rs.alarmnet.com"
service_action = "/TC21API/TC2.asmx"
body = ("""<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<SOAP-ENV:Body>
<tns:DisarmSecuritySystem xmlns:tns="https://services.alarmnet.com/TC2/">
<tns:SessionID>%s</tns:SessionID>
<tns:LocationID>0</tns:LocationID>
<tns:DeviceID>0</tns:DeviceID>
<tns:UserCode>-1</tns:UserCode>
</tns:DisarmSecuritySystem>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>""")
request = httplib.HTTPSConnection(server_addr)
request.putrequest("POST", service_action)
request.putheader("Accept", "application/soap+xml, application/dime, multipart/related, text/*")
request.putheader("Content-Type", "text/xml; charset=utf-8")
request.putheader("Cache-Control", "no-cache")
request.putheader("Pragma", "no-cache")
request.putheader("SOAPAction","https://services.alarmnet.com/TC2/DisarmSecuritySystem")
request.putheader("Content-Length", str(len(body % sessionHash)))
request.endheaders()
request.send(body % sessionHash)
response = request.getresponse().read()
tree = ET.fromstring(response)
print >> Log, "API:", tree.find('.//{https://services.alarmnet.com/TC2/}ResultData').text
return
def TC2_ArmSecuritySystem(armInt):
TC2_SOAPSessionID()
server_addr = "rs.alarmnet.com"
service_action = "/TC21API/TC2.asmx"
body = ("""<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<SOAP-ENV:Body>
<tns:ArmSecuritySystem xmlns:tns="https://services.alarmnet.com/TC2/">
<tns:SessionID>%s</tns:SessionID>
<tns:LocationID>0</tns:LocationID>
<tns:DeviceID>0</tns:DeviceID>""")
body1 = ("""<tns:ArmType>%s</tns:ArmType>
<tns:UserCode>-1</tns:UserCode>
</tns:ArmSecuritySystem>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>""")
request = httplib.HTTPSConnection(server_addr)
request.putrequest("POST", service_action)
request.putheader("Accept", "application/soap+xml, application/dime, multipart/related, text/*")
request.putheader("Content-Type", "text/xml; charset=utf-8")
request.putheader("Cache-Control", "no-cache")
request.putheader("Pragma", "no-cache")
request.putheader("SOAPAction","https://services.alarmnet.com/TC2/ArmSecuritySystem")
request.putheader("Content-Length", str(len(body % sessionHash + body1 % armInt)))
request.endheaders()
request.send(body % sessionHash + body1 % armInt)
response = request.getresponse().read()
tree = ET.fromstring(response)
print >> Log, "API:", tree.find('.//{https://services.alarmnet.com/TC2/}ResultData').text
return
#---- EOL API Section ----#
def countPeople():
global peopleTotal
peopleTotal=0
cmd = subprocess.Popen('/usr/sbin/arp -a -i re0_vlan4', shell=True, stdout=subprocess.PIPE)
for line in cmd.stdout:
if device1 in line:
peopleTotal += 1
print >> Log, "User1 is present",peopleTotal
if device2 in line:
peopleTotal += 1
print >> Log, "User2 is present",peopleTotal
if device3 in line:
peopleTotal += 1
print >> Log, "User3 is present",peopleTotal
# cfgfile = open("Security.ini",'w')
# Config.set('Status','armed', True)
# Config.write(cfgfile)
# cfgfile.close()
return
# ---- BOL Program Initiation and function mapping ----#
def runcheck():
countPeople()
print state, peopleTotal
    #Check the INI state to see whether the "armed" boolean is true or false
if now_time >= time(23,59) or now_time <= time(5,00):
if state == False and peopleTotal >0:
cfgfile = open("Security.ini",'w')
Config.set('Status','armed', True)
Config.write(cfgfile)
cfgfile.close()
TC2_ArmSecuritySystem(1)
            print >> Log, "arming - It's now between 11:59PM and 5:00AM"
else:
if state is True and peopleTotal >0:
            print >> Log, "disarming - more than 0"
TC2_DisarmSecuritySystem()
cfgfile = open("Security.ini",'w')
Config.set('Status','armed', False)
Config.write(cfgfile)
cfgfile.close()
print "Disarming", state
else:
if state is False and peopleTotal <=0:
                print >> Log, "arming away - less than 1"
TC2_ArmSecuritySystem(0)
cfgfile = open("Security.ini",'w')
Config.set('Status','armed', True)
Config.write(cfgfile)
cfgfile.close()
print "Arming Away", state
return
runcheck()
# ---- EOL Program Initiation and function mapping ----#
#---- Logging ---- #
print >> Log, "- Armed",state,"-",peopleTotal,"DEVICES PRESENT","-"
Log.close()
#---- EOL for LOG Output ---- #
| 39.665094 | 275 | 0.646093 | 978 | 8,409 | 5.52863 | 0.235174 | 0.053264 | 0.013316 | 0.013316 | 0.626225 | 0.591825 | 0.537821 | 0.518032 | 0.482523 | 0.466987 | 0 | 0.024562 | 0.191462 | 8,409 | 211 | 276 | 39.853081 | 0.770702 | 0.090617 | 0 | 0.547619 | 0 | 0.02381 | 0.386191 | 0.088343 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.02381 | 0.029762 | null | null | 0.089286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
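The TotalConnect helpers above extract `SessionID` and `ResultData` from the SOAP reply with ElementTree's namespace-qualified `find`. A self-contained sketch of that parsing step on a stub envelope (the XML below is illustrative, not a real API response):

```python
import xml.etree.ElementTree as ET

# Stub of the SOAP body returned by AuthenticateUserLoginEx; only the
# two fields this sketch reads are included.
response = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <r xmlns="https://services.alarmnet.com/TC2/">
      <SessionID>abc123</SessionID>
      <ResultData>Success</ResultData>
    </r>
  </soap:Body>
</soap:Envelope>"""

tree = ET.fromstring(response)
# './/{namespace}tag' matches the element at any nesting depth; the
# namespace URI must be spelled out because ET expands prefixes.
session_id = tree.find('.//{https://services.alarmnet.com/TC2/}SessionID').text
result = tree.find('.//{https://services.alarmnet.com/TC2/}ResultData').text
```

This is why the script hardcodes `.//{https://services.alarmnet.com/TC2/}...` rather than the `tns:` prefix used in the request bodies: ElementTree matches on the expanded namespace URI, not the prefix.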
48d989d7c7b86f58f750e3be1818f6a34de5e9dd | 1,538 | py | Python | prm/relations/migrations/0002_activity.py | justaname94/innovathon2019 | d1a4e9b1b877ba12ab23384b9ee098fcdbf363af | [
"MIT"
] | null | null | null | prm/relations/migrations/0002_activity.py | justaname94/innovathon2019 | d1a4e9b1b877ba12ab23384b9ee098fcdbf363af | [
"MIT"
] | 4 | 2021-06-08T20:20:05.000Z | 2022-03-11T23:58:37.000Z | prm/relations/migrations/0002_activity.py | justaname94/personal_crm | d1a4e9b1b877ba12ab23384b9ee098fcdbf363af | [
"MIT"
] | null | null | null | # Generated by Django 2.2.5 on 2019-09-09 21:21
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('relations', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='Activity',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, help_text='Datetime on which the object was created.', verbose_name='created at ')),
('modified', models.DateTimeField(auto_now=True, help_text='Datetime on which the object was last modified.', verbose_name='modified at ')),
('name', models.CharField(max_length=50)),
('description', models.TextField()),
('is_active', models.BooleanField(default=True, help_text='Are you currently actively doing it?', verbose_name='Is active')),
('last_time', models.DateField(blank=True, null=True, verbose_name='Last time done')),
('owner', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ['-created', '-modified'],
'get_latest_by': 'created',
'abstract': False,
},
),
]
| 43.942857 | 156 | 0.617035 | 168 | 1,538 | 5.5 | 0.517857 | 0.059524 | 0.038961 | 0.047619 | 0.084416 | 0.084416 | 0.084416 | 0.084416 | 0.084416 | 0 | 0 | 0.018261 | 0.252276 | 1,538 | 34 | 157 | 45.235294 | 0.785217 | 0.029259 | 0 | 0 | 1 | 0 | 0.207243 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.107143 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
48dbc22d623e96499bba5ef1f32d58521697a022 | 3,571 | py | Python | taiga/projects/epics/serializers.py | threefoldtech/Threefold-Circles | cbc433796b25cf7af9a295af65d665a4a279e2d6 | [
"Apache-2.0"
] | null | null | null | taiga/projects/epics/serializers.py | threefoldtech/Threefold-Circles | cbc433796b25cf7af9a295af65d665a4a279e2d6 | [
"Apache-2.0"
] | 12 | 2019-11-25T14:08:32.000Z | 2021-06-24T10:35:51.000Z | taiga/projects/epics/serializers.py | threefoldtech/Threefold-Circles | cbc433796b25cf7af9a295af65d665a4a279e2d6 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (C) 2014-2017 Andrey Antukh <niwi@niwi.nz>
# Copyright (C) 2014-2017 Jesús Espino <jespinog@gmail.com>
# Copyright (C) 2014-2017 David Barragán <bameda@dbarragan.com>
# Copyright (C) 2014-2017 Alejandro Alonso <alejandro.alonso@kaleidos.net>
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from taiga.base.api import serializers
from taiga.base.fields import Field, MethodField
from taiga.base.neighbors import NeighborsSerializerMixin
from taiga.mdrender.service import render as mdrender
from taiga.projects.attachments.serializers import BasicAttachmentsInfoSerializerMixin
from taiga.projects.mixins.serializers import OwnerExtraInfoSerializerMixin
from taiga.projects.mixins.serializers import ProjectExtraInfoSerializerMixin
from taiga.projects.mixins.serializers import AssignedToExtraInfoSerializerMixin
from taiga.projects.mixins.serializers import StatusExtraInfoSerializerMixin
from taiga.projects.notifications.mixins import WatchedResourceSerializer
from taiga.projects.tagging.serializers import TaggedInProjectResourceSerializer
from taiga.projects.votes.mixins.serializers import VoteResourceSerializerMixin
class EpicListSerializer(VoteResourceSerializerMixin, WatchedResourceSerializer,
OwnerExtraInfoSerializerMixin, AssignedToExtraInfoSerializerMixin,
StatusExtraInfoSerializerMixin, ProjectExtraInfoSerializerMixin,
BasicAttachmentsInfoSerializerMixin,
TaggedInProjectResourceSerializer, serializers.LightSerializer):
id = Field()
ref = Field()
project = Field(attr="project_id")
created_date = Field()
modified_date = Field()
subject = Field()
color = Field()
epics_order = Field()
client_requirement = Field()
team_requirement = Field()
version = Field()
watchers = Field()
is_blocked = Field()
blocked_note = Field()
is_closed = MethodField()
user_stories_counts = MethodField()
def get_is_closed(self, obj):
return obj.status is not None and obj.status.is_closed
def get_user_stories_counts(self, obj):
assert hasattr(obj, "user_stories_counts"), "instance must have a user_stories_counts attribute"
return obj.user_stories_counts
class EpicSerializer(EpicListSerializer):
comment = MethodField()
blocked_note_html = MethodField()
description = Field()
description_html = MethodField()
def get_comment(self, obj):
return ""
def get_blocked_note_html(self, obj):
return mdrender(obj.project, obj.blocked_note)
def get_description_html(self, obj):
return mdrender(obj.project, obj.description)
class EpicNeighborsSerializer(NeighborsSerializerMixin, EpicSerializer):
pass
class EpicRelatedUserStorySerializer(serializers.LightSerializer):
epic = Field(attr="epic_id")
user_story = Field(attr="user_story_id")
order = Field()
| 40.123596 | 104 | 0.758051 | 400 | 3,571 | 6.675 | 0.4075 | 0.040449 | 0.050936 | 0.026966 | 0.141573 | 0.125843 | 0.053933 | 0.028464 | 0 | 0 | 0 | 0.011475 | 0.17026 | 3,571 | 88 | 105 | 40.579545 | 0.889639 | 0.25147 | 0 | 0 | 0 | 0 | 0.037288 | 0 | 0 | 0 | 0 | 0 | 0.018182 | 1 | 0.090909 | false | 0.018182 | 0.218182 | 0.072727 | 0.890909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
48de82f88d77ad42fe5f179efaac8655f74f00d7 | 5,682 | py | Python | tests/db/test_connector.py | DaWeSearch/backend | 809e575ed730fce55d0e89a2fbc2031ba116f5e0 | [
"MIT"
] | 1 | 2021-02-15T01:05:22.000Z | 2021-02-15T01:05:22.000Z | tests/db/test_connector.py | DaWeSearch/backend | 809e575ed730fce55d0e89a2fbc2031ba116f5e0 | [
"MIT"
] | null | null | null | tests/db/test_connector.py | DaWeSearch/backend | 809e575ed730fce55d0e89a2fbc2031ba116f5e0 | [
"MIT"
] | null | null | null | import unittest
import os
import json
from functions.db.connector import *
from functions.db.models import *
from functions.authentication import *
sample_search = {
"search_groups": [
{
"search_terms": ["blockchain", "distributed ledger"],
"match": "OR"
},
{
"search_terms": ["energy", "infrastructure", "smart meter"],
"match": "OR"
}
],
"match": "AND"
}
db_dict = {"db_name": "hallo", "api_key": "test"}
class TestConnector(unittest.TestCase):
def setUp(self):
name = "test_review"
self.review = add_review(name)
self.sample_query = new_query(self.review, sample_search)
with open('test_results.json', 'r') as file:
self.results = json.load(file)
save_results(self.results['records'], self.review, self.sample_query)
def test_add_review(self):
name = "test_review"
new_review = add_review(name)
review = get_review_by_id(new_review._id)
review.delete()
self.assertEqual(review._id, new_review._id)
def test_save_results(self):
query = new_query(self.review, sample_search)
jsonpath = os.path.abspath(os.path.join(
os.path.dirname(__file__), "..", "..", "test_results.json"))
with open(jsonpath, 'r') as file:
results = json.load(file)
save_results(results['records'], self.review, query)
results_from_db = get_persisted_results(query).get('results')
self.assertEqual(len(results_from_db), len(results['records']))
def test_pagination(self):
page1 = get_persisted_results(self.sample_query, 1, 10).get('results')
self.assertTrue(len(page1) == 10)
page2 = get_persisted_results(self.sample_query, 2, 10).get('results')
self.assertTrue(len(page2) == 10)
self.assertNotEqual(page1, page2)
def test_get_list_of_dois_for_review(self):
dois = get_dois_for_review(self.review)
for record in self.results.get('records'):
self.assertTrue(record.get('doi') in dois)
def test_update_score(self):
user = User(name="test user")
doi = self.results.get('records')[0].get('doi')
result = get_result_by_doi(self.review, doi)
self.assertEqual(len(result.scores), 0)
evaluation = {
"user": "testmann",
"score": 2,
"comment": "test_comment"
}
update_score(self.review, result, evaluation)
self.assertEqual(result.scores[0].score, 2)
evaluation = {
"user": "testmann",
"score": 5,
"comment": "joiefjlke"
}
update_score(self.review, result, evaluation)
self.assertEqual(result.scores[0].score, 5)
self.assertEqual(len(result.scores), 1)
user.delete()
def test_delete_results_for_review(self):
num_results = len(get_dois_for_review(self.review))
self.assertGreater(num_results, 0)
delete_results_for_review(self.review)
num_results = len(get_dois_for_review(self.review))
        self.assertEqual(num_results, 0)
def tearDown(self):
delete_results_for_review(self.review)
self.review.delete()
class TestUserDB(unittest.TestCase):
# TODO rewrite test cases
def setUp(self):
username = "philosapiens"
name = "Philippe"
surname = "Kalinowski"
email = "test@slr.com"
password = "ABC123"
# databases = DatabaseInfo()
# databases.name = "SPRINGER_API"
# databases.api_key = "5150230aac7a227ve33693f99b5697aa"
# self.user = add_user(username, name, surname, email, password)
def test_add_user(self):
username = "philosapfiens"
name = "Philippe"
surname = "Kalinowski"
email = "test@slr.com"
password = "ABC123222"
db_name = "SPRINGER_API"
api_key = "5150230aac7a227ve33693f99b5697aa"
# databases312 = DatabaseInfo.from_document(sample_databases)
# print(databases312)
new_user = add_user(username, name, surname, email, password)
# update_databases(new_user, db_dict)
# user = get_user_by_id(new_user.name)
def test_get_user_by_username(self):
user = get_user_by_username("philosapiens")
print(user.email)
def test_update_user(self):
user = get_user_by_username("philosapiens")
print(user.email)
update_user(user, user.name, "btesfd", "changed@slr.com", user.password)
user = get_user_by_username("philosapiens")
print(user.email)
def test_get_all_users(self):
print(str(get_users()))
def test_delete_users(self):
user = get_user_by_username("philosapiens")
delete_user(user)
class TestAuth(unittest.TestCase):
def setUp(self):
username = "philosapiens"
name = "Philippe"
surname = "Kalinowski"
email = "test@slr.com"
password = "ABC123"
def test_login(self):
username = "philosapiens"
password = "ABC123222"
user = get_user_by_username(username)
password_correct = check_if_password_is_correct(user, password)
print(password_correct)
token = get_jwt_for_user(user)
print(type(token))
add_jwt_to_session(user, token)
is_token_valid = check_for_token(token)
print(is_token_valid)
is_token_in_session = check_if_jwt_is_in_session(token)
print(is_token_in_session)
# remove_jwt_from_session(user)
if __name__ == '__main__':
unittest.main()
| 29.28866 | 80 | 0.62566 | 659 | 5,682 | 5.125948 | 0.201821 | 0.041445 | 0.033156 | 0.023091 | 0.368561 | 0.322676 | 0.240971 | 0.209295 | 0.183837 | 0.168443 | 0 | 0.022544 | 0.25836 | 5,682 | 193 | 81 | 29.440415 | 0.77907 | 0.067406 | 0 | 0.296296 | 0 | 0 | 0.119705 | 0.006051 | 0 | 0 | 0 | 0.005181 | 0.088889 | 1 | 0.118519 | false | 0.059259 | 0.044444 | 0 | 0.185185 | 0.059259 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
48e060479c6f9450fb40ff919e56deed4c5f57d9 | 7,527 | py | Python | intrinsic/classify.py | seenu-andi-rajendran/plagcomps | 98e82cfb871f73bbd8f4ab1452c2b27a95beee83 | [
"MIT"
] | 2 | 2015-01-18T06:20:27.000Z | 2021-03-19T21:19:16.000Z | intrinsic/classify.py | NoahCarnahan/plagcomps | 98e82cfb871f73bbd8f4ab1452c2b27a95beee83 | [
"MIT"
] | null | null | null | intrinsic/classify.py | NoahCarnahan/plagcomps | 98e82cfb871f73bbd8f4ab1452c2b27a95beee83 | [
"MIT"
] | 2 | 2015-11-19T12:52:14.000Z | 2016-11-11T17:00:50.000Z | # classify.py
# Alternative methods to clustering
import sys, os, time
from random import shuffle
import cPickle
from collections import Counter
sys.path.append('../pybrain/') # add the pybrain module to the path... TODO: actually install it.
from plagcomps.shared.util import IntrinsicUtility
from ..dbconstants import username
from ..dbconstants import password
from ..dbconstants import dbname
'''
from pybrain.structure import FeedForwardNetwork, LinearLayer, SigmoidLayer, FullConnection, TanhLayer
from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.utilities import percentError
from pybrain.tools.shortcuts import buildNetwork
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.structure.modules import SoftmaxLayer
from pybrain.tools.customxml.networkwriter import NetworkWriter
from pybrain.tools.customxml.networkreader import NetworkReader
from pybrain.structure.modules import BiasUnit
'''
import scipy
import sklearn
import sklearn.metrics
import matplotlib
import matplotlib.pyplot as pyplot
from pylab import ion, ioff, figure, draw, contourf, clf, show, hold, plot
from scipy import diag, arange, meshgrid, where
from numpy.random import multivariate_normal
import sqlalchemy
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class NeuralNetworkConfidencesClassifier:
nn_filepath = os.path.join(os.path.dirname(__file__), "neural_networks/nn.xml")
dataset_filepath = os.path.join(os.path.dirname(__file__), "neural_networks/dataset.pkl")
def create_nn(self, features, num_hidden_layer_nodes):
net = buildNetwork(len(features), num_hidden_layer_nodes, 1)
return net
def create_trainer(self, network, dataset):
trainer = BackpropTrainer(network, dataset, learningrate=0.01, momentum=0.01, verbose=True)
return trainer
def roc(self, confidences, actuals):
fpr, tpr, thresholds = sklearn.metrics.roc_curve(actuals, confidences, pos_label=1)
roc_auc = sklearn.metrics.auc(fpr, tpr)
print 'ROC area under curve:', roc_auc
# The following code is from http://scikit-learn.org/stable/auto_examples/plot_roc.html
pyplot.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
pyplot.plot([0, 1], [0, 1], 'k--')
pyplot.xlim([0.0, 1.0])
pyplot.ylim([0.0, 1.0])
pyplot.xlabel('False Positive Rate')
pyplot.ylabel('True Positive Rate')
pyplot.title('Receiver operating characteristic')
pyplot.legend(loc="lower right")
#path = "figures/roc"+str(time.time())+".pdf"
        path = os.path.join(os.path.dirname(__file__), "neural_networks/roc" + str(time.time()) + ".pdf")
pyplot.savefig(path)
return path, roc_auc
    def construct_confidence_vectors_dataset(self, reduced_docs, features, session):
        from cluster import cluster

        conf_dataset = SupervisedDataSet(len(features), 1)
        confidence_vectors = []
        num_trues = 0
        for feature in features:
            vi = 0
            for doc in reduced_docs:
                feature_vectors = doc.get_feature_vectors([feature], session)
                confidences = cluster("outlier", 2, feature_vectors,
                                      center_at_mean=True, num_to_ignore=1, impurity=.2)
                for i, confidence in enumerate(confidences, 0):
                    if len(confidence_vectors) <= vi:
                        confidence_vectors.append([[], 0])
                    if doc.span_is_plagiarized(doc._spans[i]):
                        t = 1
                        num_trues += 1
                    else:
                        t = 0
                    confidence_vectors[vi][0].append(confidence)
                    confidence_vectors[vi][1] = t
                    vi += 1

        num_plagiarised = num_trues / len(features)
        print num_plagiarised

        shuffle(confidence_vectors)
        for vec in confidence_vectors:
            if vec[1] == 0:
                num_plagiarised -= 1
            if not (vec[1] == 0 and num_plagiarised <= 0):
                conf_dataset.addSample(vec[0], vec[1])

        f = open(self.dataset_filepath, 'wb')
        cPickle.dump(conf_dataset, f)
        print 'dumped dataset file'
        return conf_dataset

    def read_dataset(self):
        f = open(self.dataset_filepath, 'rb')
        return cPickle.load(f)
    def construct_and_train_nn(self, features, num_files, epochs, filepath, session):
        from plagcomps.evaluation.intrinsic import _get_reduced_docs
        # IntrinsicUtility lives elsewhere in the plagcomps package; the exact
        # import path is assumed here.
        from plagcomps.shared.util import IntrinsicUtility

        IU = IntrinsicUtility()
        all_test_files = IU.get_n_training_files(n=num_files)
        reduced_docs = _get_reduced_docs("paragraph", all_test_files, session)

        print 'constructing datasets...'
        # dataset = self.construct_confidence_vectors_dataset(reduced_docs, features, session)
        dataset = self.read_dataset()
        training_dataset, testing_dataset = dataset.splitWithProportion(0.75)
        print 'dataset lengths:', len(dataset), len(training_dataset), len(testing_dataset)
        print

        print 'creating neural network...'
        # NOTE: num_hidden_layer_nodes is read from the module-level value set
        # in the __main__ block below.
        net = self.create_nn(features, num_hidden_layer_nodes)

        print 'creating trainer...'
        trainer = self.create_trainer(net, training_dataset)

        print 'training neural network for', epochs, 'epochs...'
        trainer.trainEpochs(epochs)

        print 'writing neural network to ' + str(filepath) + '...'
        NetworkWriter.writeToFile(net, filepath)

        print 'testing neural network...'
        confidences = []
        actuals = []
        for point in testing_dataset:
            confidences.append(net.activate(point[0])[0])
            actuals.append(point[1][0])
        print 'confidences|actuals ', zip(confidences, actuals)

        print 'generating ROC curve...'
        matplotlib.use('pdf')
        path, auc = self.roc(confidences, actuals)
        print 'area under curve =', auc

    def nn_confidences(self, feature_vectors):
        '''
        Read the saved nn and run it.
        '''
        net = NetworkReader.readFrom(self.nn_filepath)
        confidences = []
        for feature_vector in feature_vectors:
            confidences.append(net.activate(feature_vector)[0])
        return confidences
# an Engine, which the Session will use for connection resources.
# username, password, and dbname are expected to be defined elsewhere in the
# project (e.g. imported from its database settings module).
url = "postgresql://%s:%s@%s" % (username, password, dbname)
engine = sqlalchemy.create_engine(url)
# create tables if they don't already exist
Base.metadata.create_all(engine)
# create a configured "Session" class
Session = sessionmaker(bind=engine)

if __name__ == '__main__':
    session = Session()
    features = ['average_sentence_length',
                'average_syllables_per_word',
                'avg_external_word_freq_class',
                'avg_internal_word_freq_class',
                'flesch_kincaid_grade',
                'flesch_reading_ease',
                'num_chars',
                'punctuation_percentage',
                'stopword_percentage',
                'syntactic_complexity',
                'syntactic_complexity_average']
    num_hidden_layer_nodes = 20
    num_files = 30
    epochs = 400
    filepath = os.path.join(os.path.dirname(__file__), "neural_networks/nn.xml")

    NN = NeuralNetworkConfidencesClassifier()
    NN.construct_and_train_nn(features, num_files, epochs, filepath, session)
# --- src/cnc-app-name/views.py (scotchoaf/cnc-skeleton, MIT license) ---

# Copyright (c) 2018, Palo Alto Networks
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

# Author: $YOURNAME and $EMAIL

"""
Palo Alto Networks cnc-skeleton

This software is provided without support, warranty, or guarantee.
Use at your own risk.
"""

from django import forms
from django.contrib import messages
from django.shortcuts import HttpResponseRedirect

# Every app will need to import at least the CNCBaseFormView
from pan_cnc.views import CNCBaseFormView, ProvisionSnippetView


# All class attributes can be defined here or in the .pan-cnc.yaml
# In this case, we have defined class level attributes there. This makes it possible to
# create apps while writing no code at all. Just create a view in the .pan-cnc.yaml based on a
# CNCBaseFormView and configure the attributes as needed.
# If you want additional logic, then you subclass the CNCBaseFormView and add your logic there.
# The two main methods to override are 'generate_dynamic_form' and 'form_valid'.
#
# generate_dynamic_form gets called before the web form is created and displayed to the user
#
# form_valid is called after they submit the form
#
class ExampleAppView(CNCBaseFormView):

    def form_valid(self, form):
        # we now have the form from the user, let's get some values to perform some logic
        # every variable entered by the user is saved in the user session. We can access it using this
        # convenience method:
        var_name = self.get_value_from_workflow('var_name', 'DEFAULT_IF_NOT_FOUND')
        var_name_again = self.get_value_from_workflow('var_name_again', 'DEFAULT_IF_NOT_FOUND')

        # silly exercise to just upper case the value entered by the user
        var_name_upper = str(var_name).upper()
        var_name_again_reverse = str(var_name_again)[::-1]

        # now, save the values back to the workflow
        self.save_value_to_workflow('var_name', var_name_upper)
        self.save_value_to_workflow('var_name_again', var_name_again_reverse)

        # and call our super to continue processing
        return super().form_valid(form)


# Again override the ProvisionSnippetView as we are only building a workflow here.
# CNCBaseFormView will only display the form and perform a redirect after 'form_valid'
# however, ProvisionSnippetView will actually redirect to another CNC class based on the skillet type
# I.e. this is where the logic of how to interact with APIs, PAN-OS devices, render templates, etc. is all done
# You usually want a child of this class to be the 'last' in a chain if you need extended logic
class ExampleAppPasswordView(ProvisionSnippetView):

    def get_snippet(self):
        return self.snippet

    # this method allows us to customize what is shown to the user beyond what is present in the loaded skillet
    # 'variables' section
    def generate_dynamic_form(self):
        # let's first get the generated form from our base class
        dynamic_form = super().generate_dynamic_form()
        dynamic_form.fields['password_2'] = forms.CharField(widget=forms.PasswordInput(render_value=True),
                                                            initial='')
        return dynamic_form

    # the user has now completed the form and we have the results
    def form_valid(self, form):
        # Everything the user has entered will be available here in the 'workflow'
        # Note that any 'variable' entries defined in the .meta-cnc snippet will
        # be automatically added to the session workflow
        workflow = self.get_workflow()

        # get the values the user submitted here
        var_name = workflow.get('var_name')
        var_name_again = workflow.get('var_name_again')
        example_password = workflow.get('example_password')

        # to access variables that were not defined in the snippet
        # you can grab them directly from the POST on the request object
        password_2 = self.request.POST['password_2']

        print(f'checking if {example_password} matches {password_2}')
        if example_password != password_2:
            # Send an error message back to the user
            messages.add_message(self.request, messages.ERROR, 'Passwords do not match!')
            return HttpResponseRedirect('workflow00')

        print('Got some vars here!')
        print(f'Found value for var_name: {var_name}')
        print(f'Found another value for var_name_again {var_name_again}')

        return super().form_valid(form)
# --- tests/test_docs.py (gitter-badger/pygsuite, MIT license) ---

from pygsuite import DefaultFonts, TextStyle, Color
from pygsuite.docs.doc_elements.paragraph import Paragraph

BRIGHT_GREEN_HEX = "#72FF33"


def test_text(test_document):
    document = test_document
    docbody = document.body
    docbody.delete()
    docbody.add_text(
        "TEST_CUSTOM\n",
        style=TextStyle(font_size=18, font_weight=200, color=Color(hex=BRIGHT_GREEN_HEX)),
    )
    docbody.add_text("TEST_DEFAULT\n", style=DefaultFonts.NORMAL_TEXT)
    docbody.add_text("TEST_INDEX\n", style=DefaultFonts.NORMAL_TEXT, position=1)
    document.flush()
    text = [item for item in document.body if isinstance(item, Paragraph)]
    assert text[0].text.strip() == "TEST_INDEX"
    assert text[2].text.strip() == "TEST_DEFAULT"
    # TODO: return style objects
    assert text[1].elements[0].style.font_size == 18


def test_paragraph(test_document):
    document = test_document
    docbody = document.body
    docbody.delete()
    docbody.add_text(
        "TEST_CUSTOM\n",
        style=TextStyle(font_size=18, font_weight=200, color=Color(hex=BRIGHT_GREEN_HEX)),
    )
    docbody.flush()
    docbody.content[1].text = "TEST_CUSTOM_SETTER"
    docbody.add_text("INSERT\n", position=0)
    docbody.flush()
    docbody.paragraphs[1].elements[0].style = TextStyle(
        font_size=24, font_weight=500, color=Color(hex=BRIGHT_GREEN_HEX)
    )
    docbody.flush()
    assert docbody.content[2].text.strip() == "TEST_CUSTOM_SETTER"
    assert docbody.paragraphs[1].elements[0].style.font_size == 24
# --- flask_oauth2_login/base.py (BasicBeluga/flask-oauth2-login, MIT license) ---

from flask import request, session, url_for
from requests_oauthlib import OAuth2Session


class OAuth2Login(object):
    def __init__(self, app=None):
        if app:
            self.init_app(app)
        self.app = app

    def get_config(self, app, name, default_value=None):
        return app.config.get(self.config_prefix + name, default_value)

    def init_app(self, app):
        self.client_id = self.get_config(app, "CLIENT_ID")
        self.client_secret = self.get_config(app, "CLIENT_SECRET")
        self.scope = self.get_config(app, "SCOPE", self.default_scope).split(",")
        self.redirect_scheme = self.get_config(app, "REDIRECT_SCHEME", "https")

        app.add_url_rule(
            self.get_config(app, "REDIRECT_PATH", self.default_redirect_path),
            self.redirect_endpoint,
            self.login,
        )

    @property
    def redirect_uri(self):
        return url_for(
            self.redirect_endpoint,
            _external=True,
            _scheme=self.redirect_scheme,
        )

    def session(self):
        return OAuth2Session(
            self.client_id,
            redirect_uri=self.redirect_uri,
            scope=self.scope,
        )

    def authorization_url(self, **kwargs):
        sess = self.session()
        auth_url, state = sess.authorization_url(self.auth_url, **kwargs)
        session[self.state_session_key] = state
        return auth_url

    def login(self):
        sess = self.session()

        # Get token
        try:
            sess.fetch_token(
                self.token_url,
                code=request.args["code"],
                client_secret=self.client_secret,
            )
        # TODO: Check state
        except Warning:
            # Ignore warnings
            pass
        except Exception as e:
            return self.login_failure_func(e)

        # Get profile
        try:
            profile = self.get_profile(sess)
        except Exception as e:
            return self.login_failure_func(e)

        return self.login_success_func(sess.token, profile)

    def login_success(self, f):
        self.login_success_func = f
        return f

    def login_failure(self, f):
        self.login_failure_func = f
        return f

    def get_profile(self, sess):
        raise NotImplementedError
# --- apps/summary/urls.py (sotkonstantinidis/testcircle, Apache-2.0 license) ---

from django.conf.urls import url
from .views import SummaryPDFCreateView

urlpatterns = [
    url(r'^(?P<id>[\d]+)/$',
        SummaryPDFCreateView.as_view(),
        name='questionnaire_summary'),
]
# --- cosmic_ray/operators/unary_operator_replacement.py (rob-smallshire/cosmic-ray, MIT license) ---

"""Implementation of the unary-operator-replacement operator.
"""
import ast
from .operator import Operator
from ..util import build_mutations
# None indicates we want to delete the operator
OPERATORS = (ast.UAdd, ast.USub, ast.Invert, ast.Not, None)
def _to_ops(from_op):
"""
The sequence of operators which `from_op` could be mutated to.
"""
for to_op in OPERATORS:
if to_op and isinstance(from_op, ast.Not):
# 'not' can only be removed but not replaced with
# '+', '-' or '~' b/c that may lead to strange results
pass
elif isinstance(from_op, ast.UAdd) and (to_op is None):
# '+1' => '1' yields equivalent mutations
pass
else:
yield to_op
class MutateUnaryOperator(Operator):
"""An operator that modifies unary operators."""
def visit_UnaryOp(self, node): # pylint: disable=invalid-name
"""
http://greentreesnakes.readthedocs.io/en/latest/nodes.html#UnaryOp
"""
return self.visit_mutation_site(
node,
len(build_mutations([node.op], _to_ops)))
def mutate(self, node, idx):
"Perform the `idx`th mutation on node."
_, to_op = build_mutations([node.op], _to_ops)[idx]
if to_op:
node.op = to_op()
return node
return node.operand
# --- src/icolos/core/workflow_steps/calculation/rmsd.py (jharrymoore/Icolos, Apache-2.0 license) ---

from typing import List
from pydantic import BaseModel

from icolos.core.containers.compound import Conformer, unroll_conformers
from icolos.utils.enums.step_enums import StepRMSDEnum, StepDataManipulationEnum
from icolos.core.workflow_steps.step import _LE
from icolos.core.workflow_steps.calculation.base import StepCalculationBase

_SR = StepRMSDEnum()
_SDM = StepDataManipulationEnum()


class StepRMSD(StepCalculationBase, BaseModel):
    def __init__(self, **data):
        super().__init__(**data)

        # extend parameters
        if _SR.METHOD not in self.settings.additional.keys():
            self.settings.additional[_SR.METHOD] = _SR.METHOD_ALIGNMOL

    def _calculate_RMSD(self, conformers: List[Conformer]):
        for conf in conformers:
            rmsd_matrix = self._calculate_rms_matrix(
                conformers=[conf] + conf.get_extra_data()[_SDM.KEY_MATCHED],
                rms_method=self._get_rms_method(),
            )

            # use the specified tag name if it is the first value and append an index in case there are more
            for idx, col in enumerate(rmsd_matrix.columns[1:]):
                combined_tag = "".join([_SR.RMSD_TAG, "" if idx == 0 else str(idx)])
                rmsd_value = rmsd_matrix.iloc[[0]][col][0]
                conf.get_molecule().SetProp(combined_tag, str(rmsd_value))
                conf.get_extra_data()[_SDM.KEY_MATCHED][idx].get_molecule().SetProp(
                    combined_tag, str(rmsd_value)
                )

    def execute(self):
        # this assumes that the conformers that are to be matched for the calculation of the RMSD matrix
        # are attached as a list in a generic data field with a specified key
        conformers = unroll_conformers(compounds=self.get_compounds())
        self._calculate_RMSD(conformers=conformers)
        self._logger.log(
            f"Annotated {len(conformers)} conformers with RMSD values (tag: {_SR.RMSD_TAG}).",
            _LE.INFO,
        )
        # TODO: add a nice pandas DF with the RMSD values to a generic data field
# --- src/jellyroll/managers.py (jacobian-archive/jellyroll, BSD-3-Clause license) ---

import datetime
from django.db import models
from django.db.models import signals
from django.contrib.contenttypes.models import ContentType
from django.utils.encoding import force_unicode

from tagging.fields import TagField


class ItemManager(models.Manager):

    def __init__(self):
        super(ItemManager, self).__init__()
        self.models_by_name = {}

    def create_or_update(self, instance, timestamp=None, url=None, tags="",
                         source="INTERACTIVE", source_id="", **kwargs):
        """
        Create or update an Item from some instance.
        """
        # If the instance hasn't already been saved, save it first. This
        # requires disconnecting the post-save signal that might be sent to
        # this function (otherwise we could get an infinite loop).
        if instance._get_pk_val() is None:
            try:
                signals.post_save.disconnect(self.create_or_update, sender=type(instance))
            except Exception, err:
                reconnect = False
            else:
                reconnect = True
            instance.save()
            if reconnect:
                signals.post_save.connect(self.create_or_update, sender=type(instance))

        # Make sure the item "should" be registered.
        if not getattr(instance, "jellyrollable", True):
            return

        # Check to see if the timestamp is being updated, possibly pulling
        # the timestamp from the instance.
        if hasattr(instance, "timestamp"):
            timestamp = instance.timestamp
        if timestamp is None:
            update_timestamp = False
            timestamp = datetime.datetime.now()
        else:
            update_timestamp = True

        # Ditto for tags.
        if not tags:
            for f in instance._meta.fields:
                if isinstance(f, TagField):
                    tags = getattr(instance, f.attname)
                    break

        if not url:
            if hasattr(instance, 'url'):
                url = instance.url

        # Create the Item object.
        ctype = ContentType.objects.get_for_model(instance)
        item, created = self.get_or_create(
            content_type=ctype,
            object_id=force_unicode(instance._get_pk_val()),
            defaults=dict(
                timestamp=timestamp,
                source=source,
                source_id=source_id,
                tags=tags,
                url=url,
            )
        )
        item.tags = tags
        item.source = source
        item.source_id = source_id
        if update_timestamp:
            item.timestamp = timestamp

        # Save and return the item.
        item.save()
        return item

    def follow_model(self, model):
        """
        Follow a particular model class, updating associated Items automatically.
        """
        self.models_by_name[model.__name__.lower()] = model
        signals.post_save.connect(self.create_or_update, sender=model)

    def get_for_model(self, model):
        """
        Return a QuerySet of only items of a certain type.
        """
        return self.filter(content_type=ContentType.objects.get_for_model(model))

    def get_last_update_of_model(self, model, **kwargs):
        """
        Return the last time a given model's items were updated. Returns the
        epoch if the items were never updated.
        """
        qs = self.get_for_model(model)
        if kwargs:
            qs = qs.filter(**kwargs)
        try:
            return qs.order_by('-timestamp')[0].timestamp
        except IndexError:
            return datetime.datetime.fromtimestamp(0)
# --- app_metrics.py (GSH-LAN/byceps, BSD-3-Clause license) ---

"""
metrics application instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:Copyright: 2006-2021 Jochen Kupperschmidt
:License: Revised BSD (see `LICENSE` file for details)
"""

import os

from byceps.config import ConfigurationError
from byceps.metrics.application import create_app


ENV_VAR_NAME_DATABASE_URI = 'DATABASE_URI'

database_uri = os.environ.get(ENV_VAR_NAME_DATABASE_URI)
if not database_uri:
    raise ConfigurationError(
        f"No database URI was specified via the '{ENV_VAR_NAME_DATABASE_URI}' "
        "environment variable.",
    )

app = create_app(database_uri)
# --- mysql_tests/test_schema.py (maestro-1/gino, BSD-3-Clause license) ---

from enum import Enum
import pytest
import gino
from gino.dialects.aiomysql import AsyncEnum

pytestmark = pytest.mark.asyncio
db = gino.Gino()


class MyEnum(Enum):
    ONE = "one"
    TWO = "two"


class Blog(db.Model):
    __tablename__ = "s_blog"

    id = db.Column(db.BigInteger(), primary_key=True)
    title = db.Column(db.Unicode(255), index=True, comment="Title Comment")
    visits = db.Column(db.BigInteger(), default=0)
    comment_id = db.Column(db.ForeignKey("s_comment.id"))
    number = db.Column(db.Enum(MyEnum), nullable=False, default=MyEnum.TWO)
    number2 = db.Column(AsyncEnum(MyEnum), nullable=False, default=MyEnum.TWO)


class Comment(db.Model):
    __tablename__ = "s_comment"

    id = db.Column(db.BigInteger(), primary_key=True)
    blog_id = db.Column(db.ForeignKey("s_blog.id", name="blog_id_fk"))


blog_seq = db.Sequence("blog_seq", metadata=db, schema="schema_test")


async def test(engine, define=True):
    async with engine.acquire() as conn:
        assert not await engine.dialect.has_table(conn, "non_exist")

    Blog.__table__.comment = "Blog Comment"
    db.bind = engine
    await db.gino.create_all()
    await Blog.number.type.create_async(engine, checkfirst=True)
    await Blog.number2.type.create_async(engine, checkfirst=True)
    await db.gino.create_all(tables=[Blog.__table__], checkfirst=True)
    await blog_seq.gino.create(checkfirst=True)
    await Blog.__table__.gino.create(checkfirst=True)
    await db.gino.drop_all()
    await db.gino.drop_all(tables=[Blog.__table__], checkfirst=True)
    await Blog.__table__.gino.drop(checkfirst=True)
    await blog_seq.gino.drop(checkfirst=True)

    if define:
        class Comment2(db.Model):
            __tablename__ = "s_comment_2"

            id = db.Column(db.BigInteger(), primary_key=True)
            blog_id = db.Column(db.ForeignKey("s_blog.id"))

    await db.gino.create_all()
    await db.gino.drop_all()
# --- apps/user/views.py (awsbreathpanda/dailyfresh, MIT license) ---

from django.shortcuts import redirect
from django.contrib.auth import authenticate, login, logout
from celery_tasks.tasks import celery_send_mail
from apps.user.models import User
import re
from django.shortcuts import render
from django.views import View
from utils.security import get_user_token, get_activation_link, get_user_id
from django.conf import settings
from django.http import HttpResponse
from django.urls import reverse

# Create your views here.


# /user/register
class RegisterView(View):
    def get(self, request):
        return render(request, 'user_register.html')

    def post(self, request):
        username = request.POST.get('username')
        password = request.POST.get('password')
        rpassword = request.POST.get('rpassword')
        email = request.POST.get('email')
        allow = request.POST.get('allow')

        if not all([username, password, rpassword, email, allow]):
            context = {'errmsg': 'Incomplete data'}
            return render(request, 'user_register.html', context=context)

        if password != rpassword:
            context = {'errmsg': 'Passwords do not match'}
            return render(request, 'user_register.html', context=context)

        if not re.match(r'^[a-z0-9][\w.\-]*@[a-z0-9\-]+(\.[a-z]{2,5}){1,2}$',
                        email):
            context = {'errmsg': 'Invalid email format'}
            return render(request, 'user_register.html', context=context)

        if allow != 'on':
            context = {'errmsg': 'Please agree to the dailyfresh user agreement'}
            # the original fell through here without responding
            return render(request, 'user_register.html', context=context)

        try:
            user = User.objects.get(username=username)
        except User.DoesNotExist:
            user = None
        if user is not None:
            context = {'errmsg': 'This username is already taken'}
            return render(request, 'user_register.html', context=context)

        user = User.objects.create_user(username, email, password)
        user.is_active = 0
        user.save()

        user_token = get_user_token(user.id)
        activation_link = get_activation_link(settings.ACTIVATION_URL_PATH,
                                              user_token)

        # send email
        subject = 'Welcome to dailyfresh'
        message = ''
        html_message = (
            '<h1>%s, welcome to dailyfresh</h1>'
            '<p>Please click the link below to activate your account</p>'
            '<br><a href="%s">%s</a>'
            % (username, activation_link, activation_link))
        from_email = 'dailyfresh<awsbreathpanda@163.com>'
        recipient_list = [
            'awsbreathpanda@163.com',
        ]
        celery_send_mail.delay(subject,
                               message,
                               from_email,
                               recipient_list,
                               html_message=html_message)

        context = {'errmsg': 'User created successfully'}
        return render(request, 'user_register.html', context=context)
# /user/activate/(token)
class ActivateView(View):
    def get(self, request, token):
        token_bytes = token.encode('utf-8')
        user_id = get_user_id(token_bytes)
        user = User.objects.get(id=user_id)
        user.is_active = 1
        user.save()
        # TODO
        return HttpResponse('<h1>Activate User Successfully</h1>')


# /user/login
class LoginView(View):
    def get(self, request):
        username = request.COOKIES.get('username')
        checked = 'checked'
        if username is None:
            username = ''
            checked = ''
        context = {'username': username, 'checked': checked}
        return render(request, 'user_login.html', context=context)

    def post(self, request):
        username = request.POST.get('username')
        password = request.POST.get('password')
        remember = request.POST.get('remember')

        if not all([username, password]):
            context = {'errmsg': 'Incomplete parameters'}
            return render(request, 'user_login.html', context=context)

        user = authenticate(request, username=username, password=password)
        if user is None:
            context = {'errmsg': 'User does not exist'}
            return render(request, 'user_login.html', context=context)

        if not user.is_active:
            context = {'errmsg': 'Account is not activated'}
            return render(request, 'user_login.html', context=context)

        login(request, user)

        next_url = request.GET.get('next', reverse('goods:index'))
        response = redirect(next_url)
        if remember == 'on':
            response.set_cookie('username', username, max_age=7 * 24 * 3600)
        else:
            response.delete_cookie('username')
        return response


# /user/
class UserInfoView(View):
    def get(self, request):
        if not request.user.is_authenticated:
            next_url = reverse(
                'user:login') + '?next=' + request.get_full_path()
            return redirect(next_url)
        else:
            return render(request, 'user_center_info.html')


# /user/order/(page)
class UserOrderView(View):
    def get(self, request, page):
        if not request.user.is_authenticated:
            next_url = reverse(
                'user:login') + '?next=' + request.get_full_path()
            return redirect(next_url)
        else:
            return render(request, 'user_center_order.html')


# /user/address
class UserAddressView(View):
    def get(self, request):
        if not request.user.is_authenticated:
            next_url = reverse(
                'user:login') + '?next=' + request.get_full_path()
            return redirect(next_url)
        else:
            return render(request, 'user_center_site.html')


# /user/logout
class LogoutView(View):
    def get(self, request):
        logout(request)
return redirect(reverse('goods:index'))
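The registration view above validates email addresses with a hand-rolled regular expression. A standalone sketch of what that pattern accepts and rejects (the pattern is copied verbatim; the sample addresses and the `looks_like_email` helper are illustrative only):

```python
import re

# Pattern copied from RegisterView.post above.
EMAIL_RE = r'^[a-z0-9][\w.\-]*@[a-z0-9\-]+(\.[a-z]{2,5}){1,2}$'


def looks_like_email(addr):
    """Return True when addr passes the registration view's email check."""
    return re.match(EMAIL_RE, addr) is not None


print(looks_like_email('user@example.com'))       # True
print(looks_like_email('first.last@mail.co.uk'))  # True
print(looks_like_email('no-at-sign'))             # False
```

Note that the pattern is case-sensitive: an address starting with an upper-case letter is rejected, which is a limitation of this check rather than of email syntax.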
# === File: kubernetes-the-hard-way/system/collections/ansible_collections/community/general/plugins/modules/cloud/misc/proxmox_template.py ===
#!/usr/bin/python
#
# Copyright: Ansible Project
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type

DOCUMENTATION = '''
---
module: proxmox_template
short_description: management of OS templates in Proxmox VE cluster
description:
  - allows you to upload/delete templates in Proxmox VE cluster
options:
  api_host:
    description:
      - the host of the Proxmox VE cluster
    type: str
    required: true
  api_user:
    description:
      - the user to authenticate with
    type: str
    required: true
  api_password:
    description:
      - the password to authenticate with
      - you can use the PROXMOX_PASSWORD environment variable
    type: str
  validate_certs:
    description:
      - enable / disable https certificate verification
    default: 'no'
    type: bool
  node:
    description:
      - Proxmox VE node on which to operate with the template
    type: str
    required: true
  src:
    description:
      - path to the uploaded file
      - required only for C(state=present)
    type: path
  template:
    description:
      - the template name
      - required only for states C(absent), C(info)
    type: str
  content_type:
    description:
      - content type
      - required only for C(state=present)
    type: str
    default: 'vztmpl'
    choices: ['vztmpl', 'iso']
  storage:
    description:
      - target storage
    type: str
    default: 'local'
  timeout:
    description:
      - timeout for operations
    type: int
    default: 30
  force:
    description:
      - can be used only with C(state=present); an existing template will be overwritten
    type: bool
    default: 'no'
  state:
    description:
      - Indicate desired state of the template
    type: str
    choices: ['present', 'absent']
    default: present
notes:
  - Requires the proxmoxer and requests modules on the host. These modules can be installed with pip.
requirements: [ "proxmoxer", "requests" ]
author: Sergei Antipov (@UnderGreen)
'''

EXAMPLES = '''
- name: Upload new openvz template with minimal options
  community.general.proxmox_template:
    node: uk-mc02
    api_user: root@pam
    api_password: 1q2w3e
    api_host: node1
    src: ~/ubuntu-14.04-x86_64.tar.gz

- name: >
    Upload new openvz template with minimal options using the
    PROXMOX_PASSWORD environment variable (you should export it before)
  community.general.proxmox_template:
    node: uk-mc02
    api_user: root@pam
    api_host: node1
    src: ~/ubuntu-14.04-x86_64.tar.gz

- name: Upload new openvz template with all options and force overwrite
  community.general.proxmox_template:
    node: uk-mc02
    api_user: root@pam
    api_password: 1q2w3e
    api_host: node1
    storage: local
    content_type: vztmpl
    src: ~/ubuntu-14.04-x86_64.tar.gz
    force: yes

- name: Delete template with minimal options
  community.general.proxmox_template:
    node: uk-mc02
    api_user: root@pam
    api_password: 1q2w3e
    api_host: node1
    template: ubuntu-14.04-x86_64.tar.gz
    state: absent
'''

import os
import time

try:
    from proxmoxer import ProxmoxAPI
    HAS_PROXMOXER = True
except ImportError:
    HAS_PROXMOXER = False

from ansible.module_utils.basic import AnsibleModule


def get_template(proxmox, node, storage, content_type, template):
    return [True for tmpl in proxmox.nodes(node).storage(storage).content.get()
            if tmpl['volid'] == '%s:%s/%s' % (storage, content_type, template)]


def upload_template(module, proxmox, api_host, node, storage, content_type, realpath, timeout):
    taskid = proxmox.nodes(node).storage(storage).upload.post(content=content_type, filename=open(realpath, 'rb'))
    while timeout:
        task_status = proxmox.nodes(api_host.split('.')[0]).tasks(taskid).status.get()
        if task_status['status'] == 'stopped' and task_status['exitstatus'] == 'OK':
            return True
        timeout = timeout - 1
        if timeout == 0:
            module.fail_json(msg='Reached timeout while waiting for uploading template. Last line in task before timeout: %s'
                                 % proxmox.nodes(node).tasks(taskid).log.get()[:1])
        time.sleep(1)
    return False


def delete_template(module, proxmox, node, storage, content_type, template, timeout):
    volid = '%s:%s/%s' % (storage, content_type, template)
    proxmox.nodes(node).storage(storage).content.delete(volid)
    while timeout:
        if not get_template(proxmox, node, storage, content_type, template):
            return True
        timeout = timeout - 1
        if timeout == 0:
            module.fail_json(msg='Reached timeout while waiting for deleting template.')
        time.sleep(1)
    return False


def main():
    module = AnsibleModule(
        argument_spec=dict(
            api_host=dict(required=True),
            api_user=dict(required=True),
            api_password=dict(no_log=True),
            validate_certs=dict(type='bool', default=False),
            node=dict(),
            src=dict(type='path'),
            template=dict(),
            content_type=dict(default='vztmpl', choices=['vztmpl', 'iso']),
            storage=dict(default='local'),
            timeout=dict(type='int', default=30),
            force=dict(type='bool', default=False),
            state=dict(default='present', choices=['present', 'absent']),
        )
    )

    if not HAS_PROXMOXER:
        module.fail_json(msg='proxmoxer required for this module')

    state = module.params['state']
    api_user = module.params['api_user']
    api_host = module.params['api_host']
    api_password = module.params['api_password']
    validate_certs = module.params['validate_certs']
    node = module.params['node']
    storage = module.params['storage']
    timeout = module.params['timeout']

    # If the password is not set, get it from the PROXMOX_PASSWORD env variable
    if not api_password:
        try:
            api_password = os.environ['PROXMOX_PASSWORD']
        except KeyError:
            module.fail_json(msg='You should set the api_password param or use the PROXMOX_PASSWORD environment variable')

    try:
        proxmox = ProxmoxAPI(api_host, user=api_user, password=api_password, verify_ssl=validate_certs)
    except Exception as e:
        module.fail_json(msg='authorization on proxmox cluster failed with exception: %s' % e)

    if state == 'present':
        try:
            content_type = module.params['content_type']
            src = module.params['src']
            if not src:
                module.fail_json(msg='src param with the template file to upload is mandatory')
            template = os.path.basename(src)
            if get_template(proxmox, node, storage, content_type, template) and not module.params['force']:
                module.exit_json(changed=False, msg='template with volid=%s:%s/%s already exists' % (storage, content_type, template))
            elif not (os.path.exists(src) and os.path.isfile(src)):
                module.fail_json(msg='template file on path %s does not exist' % src)

            if upload_template(module, proxmox, api_host, node, storage, content_type, src, timeout):
                module.exit_json(changed=True, msg='template with volid=%s:%s/%s uploaded' % (storage, content_type, template))
        except Exception as e:
            module.fail_json(msg="uploading of template %s failed with exception: %s" % (template, e))

    elif state == 'absent':
        try:
            content_type = module.params['content_type']
            template = module.params['template']
            if not template:
                module.fail_json(msg='template param is mandatory')
            elif not get_template(proxmox, node, storage, content_type, template):
                module.exit_json(changed=False, msg='template with volid=%s:%s/%s is already deleted' % (storage, content_type, template))

            if delete_template(module, proxmox, node, storage, content_type, template, timeout):
                module.exit_json(changed=True, msg='template with volid=%s:%s/%s deleted' % (storage, content_type, template))
        except Exception as e:
            module.fail_json(msg="deleting of template %s failed with exception: %s" % (template, e))


if __name__ == '__main__':
    main()
# === File: saleor/webhook/observability/payload_schema.py ===
from datetime import datetime
from enum import Enum
from json.encoder import ESCAPE_ASCII, ESCAPE_DCT  # type: ignore
from typing import List, Optional, Tuple, TypedDict


class JsonTruncText:
    def __init__(self, text="", truncated=False, added_bytes=0):
        self.text = text
        self.truncated = truncated
        self._added_bytes = max(0, added_bytes)

    def __eq__(self, other):
        if not isinstance(other, JsonTruncText):
            return False
        return (self.text, self.truncated) == (other.text, other.truncated)

    def __repr__(self):
        return f'JsonTruncText(text="{self.text}", truncated={self.truncated})'

    @property
    def byte_size(self) -> int:
        return len(self.text) + self._added_bytes

    @staticmethod
    def json_char_len(char: str) -> int:
        try:
            return len(ESCAPE_DCT[char])
        except KeyError:
            return 6 if ord(char) < 0x10000 else 12

    @classmethod
    def truncate(cls, s: str, limit: int):
        limit = max(limit, 0)
        s_init_len = len(s)
        s = s[:limit]
        added_bytes = 0

        for match in ESCAPE_ASCII.finditer(s):
            start, end = match.span(0)
            markup = cls.json_char_len(match.group(0)) - 1
            added_bytes += markup
            if end + added_bytes > limit:
                return cls(
                    text=s[:start],
                    truncated=True,
                    added_bytes=added_bytes - markup,
                )
            if end + added_bytes == limit:
                s = s[:end]
                return cls(
                    text=s,
                    truncated=len(s) < s_init_len,
                    added_bytes=added_bytes,
                )

        return cls(
            text=s,
            truncated=len(s) < s_init_len,
            added_bytes=added_bytes,
        )


class ObservabilityEventTypes(str, Enum):
    API_CALL = "api_call"
    EVENT_DELIVERY_ATTEMPT = "event_delivery_attempt"


HttpHeaders = List[Tuple[str, str]]


class App(TypedDict):
    id: str
    name: str


class Webhook(TypedDict):
    id: str
    name: str
    target_url: str
    subscription_query: Optional[JsonTruncText]


class ObservabilityEventBase(TypedDict):
    event_type: ObservabilityEventTypes


class GraphQLOperation(TypedDict):
    name: Optional[JsonTruncText]
    operation_type: Optional[str]
    query: Optional[JsonTruncText]
    result: Optional[JsonTruncText]
    result_invalid: bool


class ApiCallRequest(TypedDict):
    id: str
    method: str
    url: str
    time: float
    headers: HttpHeaders
    content_length: int


class ApiCallResponse(TypedDict):
    headers: HttpHeaders
    status_code: Optional[int]
    content_length: int


class ApiCallPayload(ObservabilityEventBase):
    request: ApiCallRequest
    response: ApiCallResponse
    app: Optional[App]
    gql_operations: List[GraphQLOperation]


class EventDeliveryPayload(TypedDict):
    content_length: int
    body: JsonTruncText


class EventDelivery(TypedDict):
    id: str
    status: str
    event_type: str
    event_sync: bool
    payload: EventDeliveryPayload


class EventDeliveryAttemptRequest(TypedDict):
    headers: HttpHeaders


class EventDeliveryAttemptResponse(TypedDict):
    headers: HttpHeaders
    status_code: Optional[int]
    content_length: int
    body: JsonTruncText


class EventDeliveryAttemptPayload(ObservabilityEventBase):
    id: str
    time: datetime
    duration: Optional[float]
    status: str
    next_retry: Optional[datetime]
    request: EventDeliveryAttemptRequest
    response: EventDeliveryAttemptResponse
    event_delivery: EventDelivery
    webhook: Webhook
    app: App
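`JsonTruncText.truncate` budgets by JSON-escaped size rather than raw character count, because escaping expands some characters (e.g. `ï` becomes `\u00ef`, six characters). A minimal standard-library illustration of that size difference (the sample string is made up):

```python
import json

raw = 'naïve "quote"'
# json.dumps escapes the string for JSON; strip the surrounding quotes it adds.
escaped = json.dumps(raw, ensure_ascii=True)[1:-1]

print(len(raw))      # 13 raw characters
print(len(escaped))  # 20 characters once JSON-escaped
```

This gap between raw and escaped length is exactly what the `added_bytes` bookkeeping in `truncate` accounts for.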
# === File: startup/97-standard-plans.py (SIX_profile_collection) ===
def pol_V(offset=None):
    yield from mv(m1_simple_fbk, 0)
    cur_mono_e = pgm.en.user_readback.value
    yield from mv(epu1.table, 6)  # 4 = 3rd harmonic; 6 = "testing V" 1st harmonic
    if offset is not None:
        yield from mv(epu1.offset, offset)
    yield from mv(epu1.phase, 28.5)
    yield from mv(pgm.en, cur_mono_e + 1)  # TODO this is a dirty trick. Figure out how to process epu.table.input
    yield from mv(pgm.en, cur_mono_e)
    yield from mv(m1_simple_fbk, 1)
    print('\nFinished moving the polarization to vertical.\n\tNote that the offset for epu calibration is {}eV.\n\n'.format(offset))


def pol_H(offset=None):
    yield from mv(m1_simple_fbk, 0)
    cur_mono_e = pgm.en.user_readback.value
    yield from mv(epu1.table, 5)  # 2 = 3rd harmonic; 5 = "testing H" 1st harmonic
    if offset is not None:
        yield from mv(epu1.offset, offset)
    yield from mv(epu1.phase, 0)
    yield from mv(pgm.en, cur_mono_e + 1)  # TODO this is a dirty trick. Figure out how to process epu.table.input
    yield from mv(pgm.en, cur_mono_e)
    yield from mv(m1_simple_fbk, 1)
    print('\nFinished moving the polarization to horizontal.\n\tNote that the offset for epu calibration is {}eV.\n\n'.format(offset))


def m3_check():
    yield from mv(m3_simple_fbk, 0)
    sclr_enable()
    if pzshutter.value == 0:
        print('Piezo Shutter is disabled')
        flag = 0
    if pzshutter.value == 2:
        print('Piezo Shutter is enabled: going to be disabled')
        yield from pzshutter_disable()
        flag = 1
    temp_extslt_vg = extslt.vg.user_readback.value
    temp_extslt_hg = extslt.hg.user_readback.value
    temp_gcdiag = gcdiag.y.user_readback.value
    # yield from mv(qem07.averaging_time, 1)
    yield from mv(sclr.preset_time, 1)
    yield from mv(extslt.hg, 10)
    yield from mv(extslt.vg, 30)
    # yield from gcdiag.grid  # RE-COMMENT THIS LINE 5/7/2019
    # yield from rel_scan([qem07], m3.pit, -0.0005, 0.0005, 31, md={'reason': 'checking m3 before cff'})
    yield from rel_scan([sclr], m3.pit, -0.0005, 0.0005, 31, md={'reason': 'checking m3'})
    # yield from mv(m3.pit, peaks['cen']['gc_diag_grid'])
    yield from mv(m3.pit, peaks['cen']['sclr_channels_chan8'])
    # yield from mv(m3.pit, peaks['cen']['sclr_channels_chan2'])
    yield from mv(extslt.hg, temp_extslt_hg)
    yield from mv(extslt.vg, temp_extslt_vg)
    yield from mv(gcdiag.y, temp_gcdiag)
    yield from sleep(20)
    # yield from mv(m1_fbk_sp, extslt_cam.stats1.centroid.x.value)
    yield from mv(m3_simple_fbk_target, extslt_cam.stats1.centroid.x.value)  # m3_simple_fbk_cen.value
    yield from mv(m3_simple_fbk, 1)
    if flag == 0:
        print('Piezo Shutter remains disabled')
    if flag == 1:
        print('Piezo Shutter is going to be re-enabled')
        yield from pzshutter_enable()


def m1_align_fine2():
    m1x_init = m1.x.user_readback.value
    m1pit_init = m1.pit.user_readback.value
    m1pit_step = 50
    m1pit_start = m1pit_init - 1 * m1pit_step
    for i in range(0, 5):
        yield from mv(m1.pit, m1pit_start + i * m1pit_step)
        yield from scan([qem05], m1.x, -3, 3.8, 35)
    yield from mv(m1.pit, m1pit_init)
    yield from mv(m1.x, m1x_init)


def alignM3x():
    # get the exit slit positions to return to at the end
    vg_init = extslt.vg.user_setpoint.value
    hg_init = extslt.hg.user_setpoint.value
    hc_init = extslt.hc.user_setpoint.value
    print('Saving exit slit positions for later')
    # get things out of the way
    yield from m3diag.out
    # read gas cell diode
    yield from gcdiag.grid
    # set detector, e.g. gas cell diagnostics qem
    detList = [qem07]  # [sclr]
    # set V exit slit value to get enough signal
    yield from mv(extslt.vg, 30)
    # open the H slit fully
    yield from mv(extslt.hg, 9000)
    # move extslt.hc appropriately and scan m3.x
    yield from mv(extslt.hc, -9)
    yield from relative_scan(detList, m3.x, -6, 6, 61)
    yield from mv(extslt.hc, -3)
    yield from relative_scan(detList, m3.x, -6, 6, 61)
    yield from mv(extslt.hc, 3)
    yield from relative_scan(detList, m3.x, -6, 6, 61)
    print('Returning exit slit positions to the initial values')
    yield from mv(extslt.hc, hc_init)
    yield from mv(extslt.vg, vg_init, extslt.hg, hg_init)


def beamline_align():
    yield from mv(m1_fbk, 0)
    yield from align.m1pit
    yield from sleep(5)
    yield from m3_check()
    # yield from mv(m1_fbk_cam_time, 0.002)
    # yield from mv(m1_fbk_th, 1500)
    yield from sleep(5)
    yield from mv(m1_fbk_sp, extslt_cam.stats1.centroid.x.value)
    yield from mv(m1_fbk, 1)


def beamline_align_v2():
    yield from mv(m1_simple_fbk, 0)
    yield from mv(m3_simple_fbk, 0)
    yield from mv(m1_fbk, 0)
    yield from align.m1pit
    yield from sleep(5)
    yield from mv(m1_simple_fbk_target_ratio, m1_simple_fbk_ratio.value)
    yield from mv(m1_simple_fbk, 1)
    yield from sleep(5)
    yield from m3_check()


def xas(dets, motor, start_en, stop_en, num_points, sec_per_point):
    sclr_enable()
    sclr_set_time = sclr.preset_time.value
    if pzshutter.value == 0:
        print('Piezo Shutter is disabled')
        flag = 0
    if pzshutter.value == 2:
        print('Piezo Shutter is enabled: going to be disabled')
        yield from pzshutter_disable()
        flag = 1
    yield from mv(sclr.preset_time, sec_per_point)
    yield from scan(dets, pgm.en, start_en, stop_en, num_points)
    E_max = peaks['max']['sclr_channels_chan2'][0]
    E_com = peaks['com']['sclr_channels_chan2']
    if flag == 0:
        print('Piezo Shutter remains disabled')
    if flag == 1:
        print('Piezo Shutter is going to be re-enabled')
        yield from pzshutter_enable()
    yield from mv(sclr.preset_time, sclr_set_time)
    return E_com, E_max


# TODO put this inside of rixscam
def rixscam_get_threshold(Ei=None):
    '''Calculate the minimum and maximum threshold for RIXSCAM single photon counting (LS mode).

    Ei : float - incident energy (default is the current beamline energy)
    '''
    if Ei is None:
        Ei = pgm.en.user_readback.value
    t_min = 0.7987 * Ei - 97.964
    t_max = 1.4907 * Ei + 38.249
    print('\n\n\tMinimum value for RIXSCAM threshold (LS mode):\t{}'.format(t_min))
    print('\tMaximum value for RIXSCAM threshold (LS mode):\t{}'.format(t_max))
    print('\tFor Beamline Energy:\t\t\t\t{}'.format(Ei))
    return t_min, t_max


# TODO put this inside of rixscam
def rixscam_set_threshold(Ei=None):
    '''Set up the RIXSCAM.XIP plugin values at a specific energy for single photon
    counting and centroiding in LS mode.

    Ei : float - incident energy (default is the current beamline energy)
    '''
    if Ei is None:
        Ei = pgm.en.user_readback.value
    thold_min, thold_max = rixscam_get_threshold(Ei)
    yield from mv(rixscam.xip.beamline_energy, Ei,
                  rixscam.xip.sum_3x3_threshold_min, thold_min,
                  rixscam.xip.sum_3x3_threshold_max, thold_max)


# TODO make official so that there is an m1_fbk device like m1fbk.setpoint
m1_fbk = EpicsSignal('XF:02IDA-OP{FBck}Sts:FB-Sel', name='m1_fbk')
m1_fbk_sp = EpicsSignal('XF:02IDA-OP{FBck}PID-SP', name='m1_fbk_sp')
m1_fbk_th = extslt_cam.stats1.centroid_threshold
# m1_fbk_pix_x = extslt_cam.stats1.centroid.x.value
m1_fbk_cam_time = extslt_cam.cam.acquire_time
# mv(m1_fbk_th, 1500)
m1_simple_fbk = EpicsSignal('XF:02IDA-OP{M1_simp_feed}FB-Ena', name='m1_simple_fbk')
m1_simple_fbk_target_ratio = EpicsSignal('XF:02IDA-OP{M1_simp_feed}FB-TarRat', name='m1_simple_fbk_target_ratio')
m1_simple_fbk_ratio = EpicsSignal('XF:02IDA-OP{M1_simp_feed}FB-Ratio', name='m1_simple_fbk_ratio')
m3_simple_fbk = EpicsSignal('XF:02IDA-OP{M3_simp_feed}FB-Ena', name='m3_simple_fbk')
m3_simple_fbk_target = EpicsSignal('XF:02IDA-OP{M3_simp_feed}FB-Targ', name='m3_simple_fbk_target')
m3_simple_fbk_cen = EpicsSignal('XF:02IDA-OP{M3_simp_feed}FB_inpbuf', name='m3_simple_fbk_cen')
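The `rixscam_get_threshold` plan above applies two linear calibrations to the incident energy. A standalone arithmetic sketch of those formulas, detached from the beamline devices (the 500 eV example energy is arbitrary):

```python
def ls_thresholds(ei):
    # Same linear calibrations as rixscam_get_threshold above.
    t_min = 0.7987 * ei - 97.964
    t_max = 1.4907 * ei + 38.249
    return t_min, t_max


t_min, t_max = ls_thresholds(500.0)
print(t_min)  # ≈ 301.386
print(t_max)  # ≈ 783.599
```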
# === File: service/test.py (cubix) ===
#!/usr/bin/env python
import pika
import time
import json
import StringIO
# from fca.concept import Concept
from casa import Casa
# from fca.readwrite import cxt


def read_cxt_string(data):
    input_file = StringIO.StringIO(data)
    assert input_file.readline().strip() == "B",\
        "File is not valid cxt"
    input_file.readline()  # Empty line
    number_of_objects = int(input_file.readline().strip())
    number_of_attributes = int(input_file.readline().strip())
    input_file.readline()  # Empty line
    objects = [input_file.readline().strip() for i in xrange(number_of_objects)]
    attributes = [input_file.readline().strip() for i in xrange(number_of_attributes)]
    table = []
    for i in xrange(number_of_objects):
        line = map(lambda c: c == "X", input_file.readline().strip())
        table.append(line)
    input_file.close()
    return Casa("sample", objects, attributes, table)


def get_a_context():
    title = "sample context"
    objects = [1, 2, 3, 4]
    attributes = ['a', 'b', 'c', 'd']
    rels = [[True, False, False, True],
            [True, False, True, False],
            [False, True, True, False],
            [False, True, True, True]]
    return Casa(title, objects, attributes, rels)


def on_queue_declared(queue):
    channel.queue_bind(queue='test',
                       exchange='',
                       routing_key='order.test.customer')


connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='task_queue', durable=True, exclusive=False)
channel.queue_declare(queue='msg_queue', durable=True, exclusive=False)

# channel.exchange_declare(exchange='',
#                          type="topic",
#                          durable=True,
#                          auto_delete=False)

# channel.queue_declare(queue="task_queue",
#                       durable=True,
#                       exclusive=False,
#                       auto_delete=False,
#                       callback=on_queue_declared)

print ' [*] Waiting for messages. To exit press CTRL+C'


def msg_callback(ch, method, props, body):
    print " [x] Received %r" % (body,)
    response = body + " MODIFIED"
    # response = get_a_concept()
    print " [x] Done"
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(
                         correlation_id=props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag=method.delivery_tag)


def callback(ch, method, props, body):
    print " [x] Received %r" % (body,)
    response = body + " MODIFIED"
    context = read_cxt_string(body)
    print context.to_dict(False)
    # response = get_a_concept()
    print " [x] Done"
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(
                         correlation_id=props.correlation_id),
                     body=json.dumps(context.to_dict(False)))  # str(response)
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback,
                      queue='task_queue')
channel.basic_consume(msg_callback,
                      queue='msg_queue')
channel.start_consuming()
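`read_cxt_string` above parses the Burmeister `.cxt` context format. A self-contained Python 3 sketch of the same parsing logic, minus the `Casa` dependency, run on a tiny made-up two-object context:

```python
import io


def parse_cxt(data):
    """Parse a Burmeister .cxt string into (objects, attributes, table)."""
    f = io.StringIO(data)
    assert f.readline().strip() == "B", "File is not valid cxt"
    f.readline()  # empty line
    n_obj = int(f.readline().strip())
    n_att = int(f.readline().strip())
    f.readline()  # empty line
    objects = [f.readline().strip() for _ in range(n_obj)]
    attributes = [f.readline().strip() for _ in range(n_att)]
    # Each remaining row is a string of 'X' (incidence) and '.' (no incidence).
    table = [[c == "X" for c in f.readline().strip()] for _ in range(n_obj)]
    return objects, attributes, table


sample = "B\n\n2\n2\n\no1\no2\na1\na2\nX.\n.X\n"
objects, attributes, table = parse_cxt(sample)
print(objects)  # ['o1', 'o2']
print(table)    # [[True, False], [False, True]]
```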
# === File: datasets/imppres/imppres.py ===
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Over 25k semiautomatically generated sentence pairs illustrating well-studied pragmatic inference types. IMPPRES is an NLI dataset following the format of SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018) and XNLI (Conneau et al., 2018), which was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures."""
from __future__ import absolute_import, division, print_function
import json
import os
import datasets
# Find for instance the citation on arxiv or on the dataset repo/website
_CITATION = """\
@inproceedings{jeretic-etal-2020-natural,
title = "Are Natural Language Inference Models {IMPPRESsive}? {L}earning {IMPlicature} and {PRESupposition}",
author = "Jereti\v{c}, Paloma and
Warstadt, Alex and
Bhooshan, Suvrat and
Williams, Adina",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.768",
doi = "10.18653/v1/2020.acl-main.768",
pages = "8690--8705",
abstract = "Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied. We create an IMPlicature and PRESupposition diagnostic dataset (IMPPRES), consisting of 32K semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types. We use IMPPRES to evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI (Williams et al., 2018) learn to make pragmatic inferences. Although MultiNLI appears to contain very few pairs illustrating these inference types, we find that BERT learns to draw pragmatic inferences. It reliably treats scalar implicatures triggered by {``}some{''} as entailments. For some presupposition triggers like {``}only{''}, BERT reliably recognizes the presupposition as an entailment, even when the trigger is embedded under an entailment canceling operator like negation. BOW and InferSent show weaker evidence of pragmatic reasoning. We conclude that NLI training encourages models to learn some, but not all, pragmatic inferences.",
}
"""
# You can copy an official description
_DESCRIPTION = """Over >25k semiautomatically generated sentence pairs illustrating well-studied pragmatic inference types. IMPPRES is an NLI dataset following the format of SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018) and XNLI (Conneau et al., 2018), which was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures."""
_HOMEPAGE = "https://github.com/facebookresearch/Imppres"
_LICENSE = "Creative Commons Attribution-NonCommercial 4.0 International Public License"
# The HuggingFace dataset library don't host the datasets but only point to the original files
# This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
_URLs = {"default": "https://github.com/facebookresearch/Imppres/blob/master/dataset/IMPPRES.zip?raw=true"}
class Imppres(datasets.GeneratorBasedBuilder):
"""Each sentence type in IMPPRES is generated according to a template that specifies the linear order of the constituents in the sentence. The constituents are sampled from a vocabulary of over 3000 lexical items annotated with grammatical features needed to ensure wellformedness. We semiautomatically generate IMPPRES using a codebase developed by Warstadt et al. (2019a) and significantly expanded for the BLiMP dataset (Warstadt et al., 2019b)."""
VERSION = datasets.Version("1.1.0")
    # This dataset has multiple configurations; load one with, e.g.:
    #   data = datasets.load_dataset('imppres', 'presupposition_only_presupposition')
BUILDER_CONFIGS = [
datasets.BuilderConfig(
name="presupposition_all_n_presupposition",
version=VERSION,
description="Presuppositions are facts that the speaker takes for granted when uttering a sentence.",
),
datasets.BuilderConfig(
name="presupposition_both_presupposition",
version=VERSION,
description="Presuppositions are facts that the speaker takes for granted when uttering a sentence.",
),
datasets.BuilderConfig(
name="presupposition_change_of_state",
version=VERSION,
description="Presuppositions are facts that the speaker takes for granted when uttering a sentence.",
),
datasets.BuilderConfig(
name="presupposition_cleft_existence",
version=VERSION,
description="Presuppositions are facts that the speaker takes for granted when uttering a sentence.",
),
datasets.BuilderConfig(
name="presupposition_cleft_uniqueness",
version=VERSION,
description="Presuppositions are facts that the speaker takes for granted when uttering a sentence.",
),
datasets.BuilderConfig(
name="presupposition_only_presupposition",
version=VERSION,
description="Presuppositions are facts that the speaker takes for granted when uttering a sentence.",
),
datasets.BuilderConfig(
name="presupposition_possessed_definites_existence",
version=VERSION,
description="Presuppositions are facts that the speaker takes for granted when uttering a sentence.",
),
datasets.BuilderConfig(
name="presupposition_possessed_definites_uniqueness",
version=VERSION,
description="Presuppositions are facts that the speaker takes for granted when uttering a sentence.",
),
datasets.BuilderConfig(
name="presupposition_question_presupposition",
version=VERSION,
description="Presuppositions are facts that the speaker takes for granted when uttering a sentence.",
),
datasets.BuilderConfig(
name="implicature_connectives",
version=VERSION,
description="Scalar implicatures are inferences which can be drawn when one member of a memorized lexical scale is uttered.",
),
datasets.BuilderConfig(
name="implicature_gradable_adjective",
version=VERSION,
description="Scalar implicatures are inferences which can be drawn when one member of a memorized lexical scale is uttered.",
),
datasets.BuilderConfig(
name="implicature_gradable_verb",
version=VERSION,
description="Scalar implicatures are inferences which can be drawn when one member of a memorized lexical scale is uttered.",
),
datasets.BuilderConfig(
name="implicature_modals",
version=VERSION,
description="Scalar implicatures are inferences which can be drawn when one member of a memorized lexical scale is uttered.",
),
datasets.BuilderConfig(
name="implicature_numerals_10_100",
version=VERSION,
description="Scalar implicatures are inferences which can be drawn when one member of a memorized lexical scale is uttered.",
),
datasets.BuilderConfig(
name="implicature_numerals_2_3",
version=VERSION,
description="Scalar implicatures are inferences which can be drawn when one member of a memorized lexical scale is uttered.",
),
datasets.BuilderConfig(
name="implicature_quantifiers",
version=VERSION,
description="Scalar implicatures are inferences which can be drawn when one member of a memorized lexical scale is uttered.",
),
]
def _info(self):
if (
"presupposition" in self.config.name
): # This is the name of the configuration selected in BUILDER_CONFIGS above
features = datasets.Features(
{
"premise": datasets.Value("string"),
"hypothesis": datasets.Value("string"),
"trigger": datasets.Value("string"),
"trigger1": datasets.Value("string"),
"trigger2": datasets.Value("string"),
"presupposition": datasets.Value("string"),
"gold_label": datasets.ClassLabel(names=["entailment", "neutral", "contradiction"]),
"UID": datasets.Value("string"),
"pairID": datasets.Value("string"),
"paradigmID": datasets.Value("int16")
}
)
        else:  # Implicature configurations use a different feature set
features = datasets.Features(
{
"premise": datasets.Value("string"),
"hypothesis": datasets.Value("string"),
"gold_label_log": datasets.ClassLabel(names=["entailment", "neutral", "contradiction"]),
"gold_label_prag": datasets.ClassLabel(names=["entailment", "neutral", "contradiction"]),
"spec_relation": datasets.Value("string"),
"item_type": datasets.Value("string"),
"trigger": datasets.Value("string"),
"lexemes": datasets.Value("string"),
}
)
return datasets.DatasetInfo(
# This is the description that will appear on the datasets page.
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=features, # Here we define them above because they are different between the two configurations
# If there's a common (input, target) tuple from the features,
# specify them here. They'll be used if as_supervised=True in
# builder.as_dataset.
supervised_keys=None,
# Homepage of the dataset for documentation
homepage=_HOMEPAGE,
# License for the dataset if available
license=_LICENSE,
# Citation for the dataset
citation=_CITATION,
)
def _split_generators(self, dl_manager):
"""Returns SplitGenerators."""
        # This method downloads/extracts the data and defines the splits for the selected configuration.
# If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
# dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs
# It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
# By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
my_urls = _URLs["default"]
base_config = self.config.name.split("_")[0]
secondary_config = self.config.name.split(base_config + "_")[1]
data_dir = os.path.join(dl_manager.download_and_extract(my_urls), "IMPPRES", base_config)
return [
datasets.SplitGenerator(
name=secondary_config,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"filepath": os.path.join(data_dir, secondary_config + ".jsonl"),
"split": "test",
},
)
]
def _generate_examples(self, filepath, split):
""" Yields examples. """
# TODO: This method will receive as arguments the `gen_kwargs` defined in the previous `_split_generators` method.
# It is in charge of opening the given file and yielding (key, example) tuples from the dataset
# The key is not important, it's more here for legacy reason (legacy from tfds)
with open(filepath, encoding="utf-8") as f:
for id_, row in enumerate(f):
data = json.loads(row)
if "presupposition" in self.config.name:
if "trigger1" not in list(data.keys()):
yield id_, {
"premise": data["sentence1"],
"hypothesis": data["sentence2"],
"trigger": data["trigger"],
"trigger1": "Not_In_Example",
"trigger2": "Not_In_Example",
"presupposition": data["presupposition"],
"gold_label": data["gold_label"],
"UID": data["UID"],
"pairID": data["pairID"],
"paradigmID": data["paradigmID"],
}
else:
yield id_, {
"premise": data["sentence1"],
"hypothesis": data["sentence2"],
"trigger": "Not_In_Example",
"trigger1": data["trigger1"],
"trigger2": data["trigger2"],
"presupposition": "Not_In_Example",
"gold_label": data["gold_label"],
"UID": data["UID"],
"pairID": data["pairID"],
"paradigmID": data["paradigmID"],
}
else:
yield id_, {
"premise": data["sentence1"],
"hypothesis": data["sentence2"],
"gold_label_log": data["gold_label_log"],
"gold_label_prag": data["gold_label_prag"],
"spec_relation": data["spec_relation"],
"item_type": data["item_type"],
"trigger": data["trigger"],
"lexemes": data["lexemes"],
}
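The `_split_generators` method above derives the on-disk file from the configuration name: the first underscore-delimited token selects the IMPPRES subfolder, and the remainder names the `.jsonl` file. A minimal, self-contained sketch of that mapping (the helper name is mine, not part of the script):

```python
import os

def config_to_relpath(config_name):
    # Mirror the splitting logic in _split_generators: the first token selects
    # the IMPPRES subfolder, the remainder selects the .jsonl file.
    base_config = config_name.split("_")[0]
    secondary_config = config_name.split(base_config + "_")[1]
    return os.path.join("IMPPRES", base_config, secondary_config + ".jsonl")

print(config_to_relpath("presupposition_all_n_presupposition"))
# IMPPRES/presupposition/all_n_presupposition.jsonl
print(config_to_relpath("implicature_numerals_10_100"))
# IMPPRES/implicature/numerals_10_100.jsonl
```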
# === app.py (juergenpointinger/status-dashboard, MIT) ===
# Standard library imports
import logging
import os
# Third party imports
import dash
import dash_bootstrap_components as dbc
from flask_caching import Cache
import plotly.io as pio
# Local application imports
from modules.gitlab import GitLab
import settings
# Initialize logging mechanism
logging.basicConfig(level=settings.LOGLEVEL, format=settings.LOGFORMAT)
logger = logging.getLogger(__name__)
gl = GitLab()
logger.info("Current GitLab version: {}".format(GitLab.version))
# App instance
app = dash.Dash(__name__,
suppress_callback_exceptions=True,
external_stylesheets=[dbc.themes.BOOTSTRAP])
app.title = settings.APP_NAME
# App caching
# CACHE_CONFIG = {
# # Note that filesystem cache doesn't work on systems with ephemeral
# # filesystems like Heroku.
# 'CACHE_TYPE': 'filesystem',
# 'CACHE_DIR': 'cache-directory',
# # should be equal to maximum number of users on the app at a single time
# # higher numbers will store more data in the filesystem / redis cache
# 'CACHE_THRESHOLD': 200
# }
CACHE_CONFIG = {
# try 'filesystem' if you don't want to setup redis
'CACHE_TYPE': 'redis',
'CACHE_REDIS_URL': settings.REDIS_URL
}
cache = Cache()
cache.init_app(app.server, config=CACHE_CONFIG)
pio.templates.default = "plotly_dark"
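The commented notes above weigh filesystem vs. Redis cache backends; either way, Flask-Caching's role in this app is to memoize expensive results so repeated callbacks don't re-hit the GitLab API. A rough stdlib analogue of that memoization behavior (this is not the Flask-Caching API, and the stats returned are invented demo data):

```python
from functools import lru_cache

call_count = {"n": 0}

@lru_cache(maxsize=None)
def fetch_pipeline_stats(project_id):
    # Stand-in for an expensive GitLab API call; the body runs only once
    # per distinct argument, later calls are served from the cache.
    call_count["n"] += 1
    return {"project": project_id, "pipelines": 42}

fetch_pipeline_stats(7)
fetch_pipeline_stats(7)  # served from cache, no second "API call"
print(call_count["n"])   # 1
```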
# === bmds/bmds2/logic/rules.py (shapiromatron/bmds, MIT) ===
import abc
import math
from ... import constants
class Rule(abc.ABC):
def __init__(self, failure_bin, **kwargs):
self.failure_bin = failure_bin
self.enabled = kwargs.get("enabled", True)
self.threshold = kwargs.get("threshold", float("nan"))
self.rule_name = kwargs.get("rule_name", self.default_rule_name)
self.kwargs = kwargs
    def __str__(self):
enabled = "✓" if self.enabled else "✕"
threshold = "" if math.isnan(self.threshold) else ", threshold={}".format(self.threshold)
return "{0} {1} [bin={2}{3}]".format(enabled, self.rule_name, self.binmoji, threshold)
def check(self, dataset, output):
if self.enabled:
return self.apply_rule(dataset, output)
else:
return self.return_pass()
@property
def binmoji(self):
return constants.BIN_ICON[self.failure_bin]
@property
def bin_text(self):
return constants.BIN_TEXT[self.failure_bin]
def as_row(self):
return [self.rule_name, self.enabled, self.bin_text, self.threshold]
def return_pass(self):
return constants.BIN_NO_CHANGE, None
@abc.abstractmethod
def apply_rule(self, dataset, output):
"""return tuple of (bin, notes) associated with rule or None"""
...
def get_failure_message(self, *args) -> str:
return "An error occurred"
def _is_valid_number(self, val):
# Ensure number is an int or float, not equal to special case -999.
return val is not None and val != -999 and (isinstance(val, int) or isinstance(val, float))
class NumericValueExists(Rule):
# Test succeeds if value is numeric and not -999
field_name = None
field_name_verbose = None
def apply_rule(self, dataset, output):
val = output.get(self.field_name)
if self._is_valid_number(val):
return self.return_pass()
else:
return self.failure_bin, self.get_failure_message()
def get_failure_message(self):
name = getattr(self, "field_name_verbose")
if name is None:
name = self.field_name
return "{} does not exist".format(name)
class BmdExists(NumericValueExists):
default_rule_name = "BMD exists"
field_name = "BMD"
class BmdlExists(NumericValueExists):
default_rule_name = "BMDL exists"
field_name = "BMDL"
class BmduExists(NumericValueExists):
default_rule_name = "BMDU exists"
field_name = "BMDU"
class AicExists(NumericValueExists):
default_rule_name = "AIC exists"
field_name = "AIC"
class RoiExists(NumericValueExists):
default_rule_name = "Residual of interest exists"
field_name = "residual_of_interest"
field_name_verbose = "Residual of Interest"
class ShouldBeGreaterThan(Rule):
# Test fails if value is less-than threshold.
field_name = ""
field_name_verbose = ""
def apply_rule(self, dataset, output):
val = output.get(self.field_name)
threshold = self.threshold
if not self._is_valid_number(val) or val >= threshold:
return self.return_pass()
else:
return self.failure_bin, self.get_failure_message(val, threshold)
def get_failure_message(self, val, threshold):
name = self.field_name_verbose
return "{} is less than threshold ({:.3} < {})".format(name, float(val), threshold)
class GlobalFit(ShouldBeGreaterThan):
default_rule_name = "GGOF"
field_name = "p_value4"
field_name_verbose = "Goodness of fit p-value"
class ShouldBeLessThan(Rule, abc.ABC):
# Test fails if value is greater-than threshold.
msg = "" # w/ arguments for value and threshold
@abc.abstractmethod
def get_value(self, dataset, output):
...
def apply_rule(self, dataset, output):
val = self.get_value(dataset, output)
threshold = self.threshold
if not self._is_valid_number(val) or val <= threshold:
return self.return_pass()
else:
return self.failure_bin, self.get_failure_message(val, threshold)
def get_failure_message(self, val, threshold):
name = self.field_name_verbose
return "{} is greater than threshold ({:.3} > {})".format(name, float(val), threshold)
class BmdBmdlRatio(ShouldBeLessThan):
default_rule_name = "BMD to BMDL ratio"
field_name_verbose = "BMD/BMDL ratio"
def get_value(self, dataset, output):
bmd = output.get("BMD")
bmdl = output.get("BMDL")
if self._is_valid_number(bmd) and self._is_valid_number(bmdl) and bmdl != 0:
return bmd / bmdl
class RoiFit(ShouldBeLessThan):
default_rule_name = "Residual of interest"
field_name_verbose = "Residual of interest"
def get_value(self, dataset, output):
return output.get("residual_of_interest")
class HighBmd(ShouldBeLessThan):
default_rule_name = "High BMD"
field_name_verbose = "BMD/high dose ratio"
def get_value(self, dataset, output):
max_dose = max(dataset.doses)
bmd = output.get("BMD")
if self._is_valid_number(max_dose) and self._is_valid_number(bmd) and bmd != 0:
return bmd / float(max_dose)
class HighBmdl(ShouldBeLessThan):
default_rule_name = "High BMDL"
field_name_verbose = "BMDL/high dose ratio"
def get_value(self, dataset, output):
max_dose = max(dataset.doses)
bmdl = output.get("BMDL")
if self._is_valid_number(max_dose) and self._is_valid_number(bmdl) and max_dose > 0:
return bmdl / float(max_dose)
class LowBmd(ShouldBeLessThan):
default_rule_name = "Low BMD"
field_name_verbose = "minimum dose/BMD ratio"
def get_value(self, dataset, output):
min_dose = min([d for d in dataset.doses if d > 0])
bmd = output.get("BMD")
if self._is_valid_number(min_dose) and self._is_valid_number(bmd) and bmd > 0:
return min_dose / float(bmd)
class LowBmdl(ShouldBeLessThan):
default_rule_name = "Low BMDL"
field_name_verbose = "minimum dose/BMDL ratio"
def get_value(self, dataset, output):
min_dose = min([d for d in dataset.doses if d > 0])
bmdl = output.get("BMDL")
if self._is_valid_number(min_dose) and self._is_valid_number(bmdl) and bmdl > 0:
return min_dose / float(bmdl)
class ControlResidual(ShouldBeLessThan):
default_rule_name = "Control residual"
field_name_verbose = "Residual at lowest dose"
def get_value(self, dataset, output):
if output.get("fit_residuals") and len(output["fit_residuals"]) > 0:
try:
return abs(output["fit_residuals"][0])
except TypeError:
return float("nan")
class ControlStdevResiduals(ShouldBeLessThan):
default_rule_name = "Control stdev"
field_name_verbose = "Ratio of modeled to actual stdev. at control"
def get_value(self, dataset, output):
if (
output.get("fit_est_stdev")
and output.get("fit_stdev")
and len(output["fit_est_stdev"]) > 0
and len(output["fit_stdev"]) > 0
):
try:
modeled = abs(output["fit_est_stdev"][0])
actual = abs(output["fit_stdev"][0])
except TypeError:
return float("nan")
if (
self._is_valid_number(modeled)
and self._is_valid_number(actual)
and modeled > 0
and actual > 0
):
return abs(modeled / actual)
class CorrectVarianceModel(Rule):
# Check variance model (continuous datasets-only)
default_rule_name = "Variance type"
def apply_rule(self, dataset, output):
if "parameters" not in output:
return self.return_pass()
# 0 = non-homogeneous modeled variance => Var(i) = alpha*mean(i)^rho
# 1 = constant variance => Var(i) = alpha*mean(i)
# if rho is a parameter, then variance model 0 is applied
rho = output["parameters"].get("rho")
constant_variance = 0 if rho else 1
p_value2 = output.get("p_value2")
if p_value2 == "<0.0001":
p_value2 = 0.0001
msg = None
if self._is_valid_number(p_value2):
if constant_variance == 1 and p_value2 < 0.1:
msg = "Incorrect variance model (p-value 2 = {}), constant variance selected".format(
p_value2
)
elif constant_variance == 0 and p_value2 > 0.1:
msg = "Incorrect variance model (p-value 2 = {}), modeled variance selected".format(
p_value2
)
else:
msg = "Correct variance model cannot be determined (p-value 2 = {})".format(p_value2)
if msg:
return self.failure_bin, msg
else:
return self.return_pass()
class VarianceModelFit(Rule):
default_rule_name = "Variance fit"
def apply_rule(self, dataset, output):
if "parameters" not in output:
return self.return_pass()
# 0 = non-homogeneous modeled variance => Var(i) = alpha*mean(i)^rho
# 1 = constant variance => Var(i) = alpha*mean(i)
# if rho is a parameter, then variance model 0 is applied
rho = output["parameters"].get("rho")
constant_variance = 0 if rho else 1
p_value2 = output.get("p_value2")
if p_value2 == "<0.0001":
p_value2 = 0.0001
p_value3 = output.get("p_value3")
if p_value3 == "<0.0001":
p_value3 = 0.0001
msg = None
if self._is_valid_number(p_value2) and constant_variance == 1 and p_value2 < 0.1:
msg = "Variance model poorly fits dataset (p-value 2 = {})".format(p_value2)
if self._is_valid_number(p_value3) and constant_variance == 0 and p_value3 < 0.1:
msg = "Variance model poorly fits dataset (p-value 3 = {})".format(p_value3)
if msg:
return self.failure_bin, msg
else:
return self.return_pass()
class NoDegreesOfFreedom(Rule):
"""
    Check to ensure at least one degree of freedom exists, to prevent
    recommendation of an overfit model.
"""
default_rule_name = "Degrees of freedom"
def apply_rule(self, dataset, output):
df = output.get("df", 1)
if df == 0:
return self.failure_bin, "Zero degrees of freedom; saturated model"
return self.return_pass()
class Warnings(Rule):
# Test fails if any warnings exist.
default_rule_name = "Warnings"
def get_failure_message(self, warnings):
return "Warning(s): {}".format("; ".join(warnings))
def apply_rule(self, dataset, output):
warnings = output.get("warnings", [])
if len(warnings) > 0:
return self.failure_bin, self.get_failure_message(warnings)
else:
return self.return_pass()
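The rules above all reduce to one pattern: compute a value from the model output, compare it against a threshold, and return either a pass or a `(failure_bin, message)` pair, passing permissively when the value is missing or sentinel (`-999`). A condensed, dependency-free sketch of how a ratio rule like `BmdBmdlRatio` behaves (the bin codes and default threshold here are placeholders, not the real `bmds.constants` values):

```python
BIN_NO_CHANGE, BIN_FAILURE = 0, 2  # placeholder bin codes

def check_bmd_bmdl_ratio(output, threshold=20.0):
    # Fail when BMD/BMDL exceeds the threshold; pass when the ratio is
    # unavailable (mirrors the permissive handling of missing values above).
    bmd, bmdl = output.get("BMD"), output.get("BMDL")
    if bmd in (None, -999) or bmdl in (None, -999) or bmdl == 0:
        return BIN_NO_CHANGE, None
    ratio = bmd / bmdl
    if ratio <= threshold:
        return BIN_NO_CHANGE, None
    return BIN_FAILURE, "BMD/BMDL ratio is greater than threshold ({:.3} > {})".format(ratio, threshold)

print(check_bmd_bmdl_ratio({"BMD": 10.0, "BMDL": 5.0}))   # (0, None)
print(check_bmd_bmdl_ratio({"BMD": 100.0, "BMDL": 1.0}))  # failure bin with message
```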
# === src/main/python/smart/smartplots3_run.py (cday97/beam, BSD-3-Clause-LBNL) ===
import pandas as pd
import smartplots3_setup
def createSetup(name, expansion_factor, percapita_factor, plot_size, settings):
    plt_setup_smart = {
        'name': name,
        'expansion_factor': expansion_factor,
        'percapita_factor': percapita_factor,
        'scenarios_itr': [],
        'scenarios_id': [],
        'scenarios_year': [],
        'plot_size': plot_size,
        'bottom_labels': [],
        'top_labels': [],
        'plots_folder': "makeplots3"
    }
    for (scenarios_year, scenarios_id, scenarios_itr, bottom_label, top_label) in settings:
        plt_setup_smart['scenarios_year'].append(scenarios_year)
        plt_setup_smart['scenarios_id'].append(scenarios_id)
        plt_setup_smart['scenarios_itr'].append(scenarios_itr)
        plt_setup_smart['top_labels'].append(top_label)
        plt_setup_smart['bottom_labels'].append(bottom_label)
    return plt_setup_smart
def createSettingRow(scenarios_year,scenarios_id,scenarios_itr,bottom_label,top_label):
return (scenarios_year,scenarios_id,scenarios_itr,bottom_label,top_label)
scenarios_lables = {
"Base_CL_CT": "Base0",
"Base_STL_STT_BAU": "Base2",
"Base_STL_STT_VTO": "Base3",
"Base_LTL_LTT_BAU": "Base5",
"Base_LTL_LTT_VTO": "Base6",
"A_STL_STT_BAU": "A2",
"A_STL_STT_VTO": "A3",
"B_LTL_LTT_BAU": "B5",
"B_LTL_LTT_VTO": "B6",
"C_LTL_LTT_BAU": "C5",
"C_LTL_LTT_VTO": "C6"
}
output_folder = "/home/ubuntu/git/jupyter/data/28thOct2019"
# Base_CL_CT
# A_STL_STT_BAU
settings=[]
settings.append(createSettingRow(2010,1,15,scenarios_lables["Base_CL_CT"], ""))
settings.append(createSettingRow(2025,6,15,scenarios_lables["A_STL_STT_BAU"], ""))
settings.append(createSettingRow(2025,7,15,scenarios_lables["A_STL_STT_VTO"], ""))
settings.append(createSettingRow(2040,8,15,scenarios_lables["B_LTL_LTT_BAU"], ""))
settings.append(createSettingRow(2040,9,15,scenarios_lables["B_LTL_LTT_VTO"], ""))
settings.append(createSettingRow(2040,10,15,scenarios_lables["C_LTL_LTT_BAU"], ""))
settings.append(createSettingRow(2040,11,15,scenarios_lables["C_LTL_LTT_VTO"], ""))
plt_setup_smart3 = createSetup('7scenarios', (7.75/0.315) * 27.0 / 21.3, 27.0/21.3, (8, 4.5), settings)
#smartplots3_setup.pltRealizedModeSplitByTrips(plt_setup_smart3, output_folder)
#smartplots3_setup.pltModeSplitInPMTPerCapita(plt_setup_smart3, output_folder)
#smartplots3_setup.pltAveragePersonSpeed_allModes(plt_setup_smart3, output_folder)
#smartplots3_setup.pltAveragePersonSpeed_car(plt_setup_smart3, output_folder)
#smartplots3_setup.pltModeSplitInVMT(plt_setup_smart3, output_folder)
#smartplots3_setup.pltRHEmptyPooled(plt_setup_smart3, output_folder)
#smartplots3_setup.pltRHWaitTime(plt_setup_smart3, output_folder)
#smartplots3_setup.pltLdvTechnologySplitInVMT(plt_setup_smart3, output_folder)
settings=[]
settings.append(createSettingRow(2010,1,15,scenarios_lables["Base_CL_CT"], ""))
settings.append(createSettingRow(2025,2,15,scenarios_lables["Base_STL_STT_BAU"], ""))
settings.append(createSettingRow(2025,3,15,scenarios_lables["Base_STL_STT_VTO"], ""))
settings.append(createSettingRow(2040,4,15,scenarios_lables["Base_LTL_LTT_BAU"], ""))
settings.append(createSettingRow(2040,5,15,scenarios_lables["Base_LTL_LTT_VTO"], ""))
settings.append(createSettingRow(2025,6,15,scenarios_lables["A_STL_STT_BAU"], ""))
settings.append(createSettingRow(2025,7,15,scenarios_lables["A_STL_STT_VTO"], ""))
settings.append(createSettingRow(2040,8,15,scenarios_lables["B_LTL_LTT_BAU"], ""))
settings.append(createSettingRow(2040,9,15,scenarios_lables["B_LTL_LTT_VTO"], ""))
settings.append(createSettingRow(2040,10,15,scenarios_lables["C_LTL_LTT_BAU"], ""))
settings.append(createSettingRow(2040,11,15,scenarios_lables["C_LTL_LTT_VTO"], ""))
plt_setup_smart3_base = createSetup('11scenarios', (7.75/0.315) * 27.0 / 21.3, 27.0/21.3, (10, 4.5), settings)
smartplots3_setup.pltEnergyPerCapita(plt_setup_smart3_base, output_folder)
smartplots3_setup.pltRealizedModeSplitByTrips(plt_setup_smart3_base, output_folder)
smartplots3_setup.pltModeSplitInPMTPerCapita(plt_setup_smart3_base, output_folder)
smartplots3_setup.pltAveragePersonSpeed_allModes(plt_setup_smart3_base, output_folder)
smartplots3_setup.pltAveragePersonSpeed_car(plt_setup_smart3_base, output_folder)
smartplots3_setup.pltModeSplitInVMT(plt_setup_smart3_base, output_folder)
smartplots3_setup.pltRHEmptyPooled(plt_setup_smart3_base, output_folder)
smartplots3_setup.pltRHWaitTime(plt_setup_smart3_base, output_folder)
smartplots3_setup.pltLdvTechnologySplitInVMT(plt_setup_smart3_base, output_folder)
#smartplots3_setup.pltMEP(plt_setup_smart3, output_folder, [15071,21151,22872,29014,27541,36325,45267])
smartplots3_setup.tableSummary(plt_setup_smart3_base, output_folder)
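The setup dicts built above are columnar: each settings row contributes one entry to each per-scenario list, and the plotting helpers read those lists positionally. A trimmed, self-contained version of that accumulation (field set abbreviated from the script; `build_setup` is my name, not the script's):

```python
def build_setup(name, settings):
    # Columnar accumulation, as in createSetup above (simplified: only the
    # per-scenario lists are kept).
    setup = {"name": name, "scenarios_year": [], "scenarios_id": [],
             "scenarios_itr": [], "bottom_labels": [], "top_labels": []}
    for (year, sid, itr, bottom, top) in settings:
        setup["scenarios_year"].append(year)
        setup["scenarios_id"].append(sid)
        setup["scenarios_itr"].append(itr)
        setup["bottom_labels"].append(bottom)
        setup["top_labels"].append(top)
    return setup

demo = build_setup("2scenarios", [(2010, 1, 15, "Base0", ""), (2040, 11, 15, "C6", "")])
print(demo["scenarios_year"])  # [2010, 2040]
print(demo["bottom_labels"])   # ['Base0', 'C6']
```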
# === contrib/libs/cxxsupp/libsan/generate_symbolizer.py (HeyLey/catboost, Apache-2.0) ===
import os
import sys
def main():
    print('const char* ya_get_symbolizer_gen() {')
    print(' return "{}";'.format(os.path.join(os.path.dirname(sys.argv[1]), 'llvm-symbolizer')))
    print('}')
if __name__ == '__main__':
main()
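`main()` emits a tiny C translation unit whose only job is to return the path of `llvm-symbolizer` sitting next to the binary given in `argv[1]`. A testable sketch of the generated text, factored into a function of my own naming:

```python
import os

def symbolizer_source(binary_path):
    # Builds the same text main() prints: a C function returning the path of
    # llvm-symbolizer in the same directory as binary_path.
    path = os.path.join(os.path.dirname(binary_path), 'llvm-symbolizer')
    return 'const char* ya_get_symbolizer_gen() {\n return "%s";\n}' % path

print(symbolizer_source('/usr/bin/clang'))
# const char* ya_get_symbolizer_gen() {
#  return "/usr/bin/llvm-symbolizer";
# }
```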
# === scripts/scheduler/scheduler.py (OCHA-DAP/hdx-scraper-unosat-flood-portal, MIT) ===
#!/usr/bin/python
# -*- coding: utf-8 -*-
import os
import sys
import time
import schedule
dir = os.path.split(os.path.split(os.path.realpath(__file__))[0])[0]
sys.path.append(dir)
from utilities.prompt_format import item
from unosat_flood_portal_collect import collect as Collect
def Wrapper(patch=True):
    '''Wrapper for main program.'''
    #
    # Collect data.
    #
    Collect.Main(patch=patch)
#
# Setting-up schedule.
#
schedule.every(1).day.do(Wrapper)
def Main(verbose=True):
'''Wrapper to run all the scheduled tasks.'''
if verbose:
        print('%s Running scheduler.' % item('prompt_bullet'))
try:
while True:
schedule.run_pending()
time.sleep(1)
except Exception as e:
        print(e)
return False
if __name__ == '__main__':
Main()
# === py/trawl_analyzer/TrawlSensorsDB_model.py (nwfsc-fram/pyFieldSoftware, MIT) ===
# from peewee import *
from playhouse.apsw_ext import TextField, IntegerField, PrimaryKeyField
from py.trawl_analyzer.Settings import SensorsModel as BaseModel
# database = SqliteDatabase('data\clean_sensors.db', **{})
class UnknownField(object):
def __init__(self, *_, **__): pass
class EnviroNetRawFiles(BaseModel):
activation_datetime = TextField(db_column='ACTIVATION_DATETIME', null=True)
deactivation_datetime = TextField(db_column='DEACTIVATION_DATETIME', null=True)
deployed_equipment = IntegerField(db_column='DEPLOYED_EQUIPMENT_ID', null=True)
enviro_net_raw_files = PrimaryKeyField(db_column='ENVIRO_NET_RAW_FILES_ID')
haul = TextField(db_column='HAUL_ID', null=True)
raw_file = TextField(db_column='RAW_FILE', null=True)
class Meta:
db_table = 'ENVIRO_NET_RAW_FILES'
class EnviroNetRawStrings(BaseModel):
date_time = TextField(db_column='DATE_TIME', index=True, null=True)
deployed_equipment = IntegerField(db_column='DEPLOYED_EQUIPMENT_ID', null=True)
enviro_net_raw_strings = PrimaryKeyField(db_column='ENVIRO_NET_RAW_STRINGS_ID')
haul = TextField(db_column='HAUL_ID', null=True)
raw_strings = TextField(db_column='RAW_STRINGS', null=True)
class Meta:
db_table = 'ENVIRO_NET_RAW_STRINGS'
class RawSentences(BaseModel):
date_time = TextField(db_column='DATE_TIME', null=True)
deployed_equipment = IntegerField(db_column='DEPLOYED_EQUIPMENT_ID', null=True)
raw_sentence = TextField(db_column='RAW_SENTENCE', null=True)
raw_sentence_id = PrimaryKeyField(db_column='RAW_SENTENCE_ID')
class Meta:
db_table = 'RAW_SENTENCES'
| 38.255814 | 83 | 0.764134 | 208 | 1,645 | 5.668269 | 0.264423 | 0.101781 | 0.129771 | 0.063613 | 0.463104 | 0.463104 | 0.403732 | 0.403732 | 0.332485 | 0.271416 | 0 | 0 | 0.133739 | 1,645 | 42 | 84 | 39.166667 | 0.827368 | 0.046809 | 0 | 0.285714 | 0 | 0 | 0.181586 | 0.098465 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0.035714 | 0.071429 | 0 | 0.892857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d295e921737512140cabce35cb8da35469a21633 | 304 | py | Python | hard-gists/5898352/snippet.py | jjhenkel/dockerizeme | eaa4fe5366f6b9adf74399eab01c712cacaeb279 | [
"Apache-2.0"
] | 21 | 2019-07-08T08:26:45.000Z | 2022-01-24T23:53:25.000Z | hard-gists/5898352/snippet.py | jjhenkel/dockerizeme | eaa4fe5366f6b9adf74399eab01c712cacaeb279 | [
"Apache-2.0"
] | 5 | 2019-06-15T14:47:47.000Z | 2022-02-26T05:02:56.000Z | hard-gists/5898352/snippet.py | jjhenkel/dockerizeme | eaa4fe5366f6b9adf74399eab01c712cacaeb279 | [
"Apache-2.0"
] | 17 | 2019-05-16T03:50:34.000Z | 2021-01-14T14:35:12.000Z | import os
import scipy.io.wavfile as wav
# install lame
# install bleeding edge scipy (needs new cython)
fname = 'XC135672-Red-winged\ Blackbird1301.mp3'
oname = 'temp.wav'
cmd = 'lame --decode {0} {1}'.format( fname,oname )
os.system(cmd)
data = wav.read(oname)
# your code goes here
print(len(data[1]))
| 25.333333 | 51 | 0.720395 | 49 | 304 | 4.469388 | 0.734694 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05364 | 0.141447 | 304 | 11 | 52 | 27.636364 | 0.785441 | 0.259868 | 0 | 0 | 0 | 0 | 0.303167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d29646348f53744d285a4ab6a2096da4edb810a8 | 2,612 | py | Python | examples/home-assistant/custom_components/evacalor/config_flow.py | fredericvl/pyevacalor | 37a3d96f867efffdec4457f11119977e6e887b8a | [
"Apache-2.0"
] | 2 | 2020-10-25T15:42:03.000Z | 2021-01-06T10:25:58.000Z | examples/home-assistant/custom_components/evacalor/config_flow.py | fredericvl/pyevacalor | 37a3d96f867efffdec4457f11119977e6e887b8a | [
"Apache-2.0"
] | 2 | 2021-01-06T09:24:58.000Z | 2021-02-13T21:12:02.000Z | examples/home-assistant/custom_components/evacalor/config_flow.py | fredericvl/pyevacalor | 37a3d96f867efffdec4457f11119977e6e887b8a | [
"Apache-2.0"
] | null | null | null | """Config flow for Eva Calor."""
from collections import OrderedDict
import logging
import uuid
from pyevacalor import ( # pylint: disable=redefined-builtin
ConnectionError,
Error as EvaCalorError,
UnauthorizedError,
evacalor,
)
import voluptuous as vol
from homeassistant import config_entries
from homeassistant.const import CONF_EMAIL, CONF_PASSWORD
from .const import CONF_UUID, DOMAIN
_LOGGER = logging.getLogger(__name__)
def conf_entries(hass):
"""Return the email tuples for the domain."""
return set(
entry.data[CONF_EMAIL] for entry in hass.config_entries.async_entries(DOMAIN)
)
class EvaCalorConfigFlow(config_entries.ConfigFlow, domain=DOMAIN):
"""Eva Calor Config Flow handler."""
VERSION = 1
CONNECTION_CLASS = config_entries.CONN_CLASS_CLOUD_POLL
def _entry_in_configuration_exists(self, user_input) -> bool:
"""Return True if config already exists in configuration."""
email = user_input[CONF_EMAIL]
if email in conf_entries(self.hass):
return True
return False
async def async_step_user(self, user_input=None):
"""User initiated integration."""
errors = {}
if user_input is not None:
# Validate user input
email = user_input[CONF_EMAIL]
password = user_input[CONF_PASSWORD]
if self._entry_in_configuration_exists(user_input):
return self.async_abort(reason="device_already_configured")
try:
gen_uuid = str(uuid.uuid1())
evacalor(email, password, gen_uuid)
except UnauthorizedError:
errors["base"] = "unauthorized"
except ConnectionError:
errors["base"] = "connection_error"
except EvaCalorError:
errors["base"] = "unknown_error"
if "base" not in errors:
return self.async_create_entry(
title=DOMAIN,
data={
CONF_EMAIL: email,
CONF_PASSWORD: password,
CONF_UUID: gen_uuid,
},
)
else:
user_input = {}
data_schema = OrderedDict()
data_schema[vol.Required(CONF_EMAIL, default=user_input.get(CONF_EMAIL))] = str
data_schema[
vol.Required(CONF_PASSWORD, default=user_input.get(CONF_PASSWORD))
] = str
return self.async_show_form(
step_id="user", data_schema=vol.Schema(data_schema), errors=errors
)
| 31.095238 | 87 | 0.616003 | 281 | 2,612 | 5.483986 | 0.33452 | 0.064244 | 0.025308 | 0.033744 | 0.092148 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001101 | 0.304364 | 2,612 | 83 | 88 | 31.46988 | 0.847001 | 0.07925 | 0 | 0.032787 | 0 | 0 | 0.036596 | 0.010638 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032787 | false | 0.081967 | 0.131148 | 0 | 0.311475 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
d296cec19b3a1e77f406394741a977e6895ca59f | 392 | py | Python | PYTHON_Code/TestGUI.py | ROBO-BEV/BARISTO | 0e87d79966efc111cc38c1a1cf22e2d8ee18c350 | [
"CC-BY-3.0",
"MIT"
] | 8 | 2018-03-12T04:52:28.000Z | 2021-05-19T19:37:01.000Z | PYTHON_Code/TestGUI.py | ROBO-BEV/BARISTO | 0e87d79966efc111cc38c1a1cf22e2d8ee18c350 | [
"CC-BY-3.0",
"MIT"
] | null | null | null | PYTHON_Code/TestGUI.py | ROBO-BEV/BARISTO | 0e87d79966efc111cc38c1a1cf22e2d8ee18c350 | [
"CC-BY-3.0",
"MIT"
] | 1 | 2018-01-30T09:43:36.000Z | 2018-01-30T09:43:36.000Z | from tkinter import *
window0 = Tk()
window0.geometry('960x540')
#tk.iconbitmap(default='ROBO_BEV_LOGO.ico')
window0.title("BARISTO")
photo = PhotoImage(file="Page1.png")
widget = Label(window0, image=photo)
widget.photo = photo
widget = Label(window0, text="10", fg="white", font=("Source Sans Pro",50))
#widget = Label(window0, text="9", fg="white")
widget.pack()
window0.mainloop()
| 19.6 | 75 | 0.709184 | 54 | 392 | 5.111111 | 0.648148 | 0.119565 | 0.195652 | 0.15942 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054286 | 0.107143 | 392 | 19 | 76 | 20.631579 | 0.734286 | 0.221939 | 0 | 0 | 0 | 0 | 0.149007 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d2992c7176a1b65595e782d6603b030801317e72 | 2,662 | py | Python | Sindri/Properties.py | mrcsbrn/TCC_software | 17a5335aed17d4740c3bbd0ef828b0fc5dcea1da | [
"MIT"
] | 11 | 2019-10-17T02:01:51.000Z | 2022-03-17T17:39:34.000Z | Sindri/Properties.py | mrcsbrn/TCC_software | 17a5335aed17d4740c3bbd0ef828b0fc5dcea1da | [
"MIT"
] | 2 | 2019-07-25T22:16:16.000Z | 2020-03-28T01:59:59.000Z | Sindri/Properties.py | mrcsbrn/TCC_software | 17a5335aed17d4740c3bbd0ef828b0fc5dcea1da | [
"MIT"
] | 5 | 2019-07-15T18:19:36.000Z | 2021-12-24T08:06:24.000Z | from __future__ import annotations
from constants import DBL_EPSILON
class DeltaProp(object):
def __init__(self, cp: float, h: float, s: float, g: float, u: float, a: float):
self.Cp = cp
self.H = h
self.S = s
self.G = g
self.U = u
self.A = a
def subtract(self, dp2: DeltaProp) -> DeltaProp:
cp = self.Cp - dp2.Cp
h = self.H - dp2.H
s = self.S - dp2.S
g = self.G - dp2.G
u = self.U - dp2.U
a = self.A - dp2.A
return DeltaProp(cp, h, s, g, u, a)
def isEqual(self, dp2: DeltaProp, tol=1e-5) -> bool:
if (
self._relAbsErr(self.Cp, dp2.Cp) < tol
and self._relAbsErr(self.H, dp2.H) < tol
and self._relAbsErr(self.S, dp2.S) < tol
and self._relAbsErr(self.G, dp2.G) < tol
and self._relAbsErr(self.U, dp2.U) < tol
and self._relAbsErr(self.A, dp2.A) < tol
):
return True
return False
def _relAbsErr(self, x: float, y: float) -> float:
if abs(x) < DBL_EPSILON:
return abs(x - y)
return abs((x - y) / x)
class VaporPressure(object):
"""
Class containing information about the vapor pressure of a single substance system.
"""
def __init__(self):
self.EOS = 0
self.AW = 0
self.LK = 0
self.Antoine = 0
self.AntonieLog = 0
def setEOS(self, v: float):
self.EOS = v
def setAW(self, v: float):
self.AW = v
def setLK(self, v: float):
self.LK = v
def setAntoine(self, v: float, log=""):
self.Antoine = v
self.AntonieLog = log
def getAWerr(self) -> float:
return self._relError(self.EOS, self.AW)
def getLKerr(self) -> float:
return self._relError(self.EOS, self.LK)
def getAntoineerr(self) -> float:
return self._relError(self.EOS, self.Antoine)
def _relError(self, _x: float, _y: float) -> float:
if abs(_x) < DBL_EPSILON:
return _x - _y
return (_x - _y) / _x
class Props(object):
def __init__(self):
self.P = 0
self.T = 0
self.Z = 0
self.V = 0
self.rho = 0
self.Pvp = 0
self.Fugacity = 0
self.Props = 0
self.IGProps = 0
self.log = ""
def setRho(self, v: float):
self.rho = v
def setPvp(self, v: VaporPressure):
self.Pvp = v
def setProps(self, v: DeltaProp):
self.Props = v
def setIGProps(self, v: DeltaProp):
self.IGProps = v
    def setFugacity(self, v: float):
        self.Fugacity = v
| 24.422018 | 87 | 0.531555 | 369 | 2,662 | 3.731707 | 0.203252 | 0.047204 | 0.074074 | 0.068991 | 0.256354 | 0.145243 | 0.145243 | 0.145243 | 0.062455 | 0.062455 | 0 | 0.017331 | 0.349737 | 2,662 | 108 | 88 | 24.648148 | 0.778163 | 0.03118 | 0 | 0.024691 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.234568 | false | 0 | 0.024691 | 0.037037 | 0.419753 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d29d169f662bf82cfbfb0172089e264d38e0b3c3 | 17,578 | py | Python | utils/save_atten.py | xiaomengyc/SPG | 0006659c5be4c3451f8c9a188f1e91e9ff682fa9 | [
"MIT"
] | 152 | 2018-07-25T01:55:33.000Z | 2022-02-02T15:16:09.000Z | utils/save_atten.py | xiaomengyc/SPG | 0006659c5be4c3451f8c9a188f1e91e9ff682fa9 | [
"MIT"
] | 15 | 2018-09-13T06:35:16.000Z | 2021-08-05T06:23:16.000Z | utils/save_atten.py | xiaomengyc/SPG | 0006659c5be4c3451f8c9a188f1e91e9ff682fa9 | [
"MIT"
] | 27 | 2018-07-26T03:47:55.000Z | 2021-04-05T08:06:41.000Z | import numpy as np
import cv2
import os
import torch
import time
from torchvision import models, transforms
from torch.utils.data import DataLoader
from torch.optim import SGD
from torch.autograd import Variable
idx2catename = {'voc20': ['aeroplane','bicycle','bird','boat','bottle','bus','car','cat','chair','cow','diningtable','dog','horse',
'motorbike','person','pottedplant','sheep','sofa','train','tvmonitor'],
'coco80': ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck',
'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench',
'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe',
'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard',
'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet',
'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven',
'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
'hair drier', 'toothbrush']}
class SAVE_ATTEN(object):
def __init__(self, save_dir='save_bins', dataset=None):
# type: (object, object) -> object
self.save_dir = save_dir
if dataset is not None:
self.idx2cate = self._get_idx2cate_dict(datasetname=dataset)
else:
self.idx2cate = None
if not os.path.exists(self.save_dir):
os.makedirs(self.save_dir)
def save_top_5_pred_labels(self, preds, org_paths, global_step):
img_num = np.shape(preds)[0]
        for idx in range(img_num):
img_name = org_paths[idx].strip().split('/')[-1]
if '.JPEG' in img_name:
img_id = img_name[:-5]
elif '.png' in img_name or '.jpg' in img_name:
img_id = img_name[:-4]
out = img_id + ' ' + ' '.join(map(str, preds[idx,:])) + '\n'
out_file = os.path.join(self.save_dir, 'pred_labels.txt')
if global_step == 0 and idx==0 and os.path.exists(out_file):
os.remove(out_file)
with open(out_file, 'a') as f:
f.write(out)
def save_masked_img_batch(self, path_batch, atten_batch, label_batch):
#img_num = np.shape(atten_batch)[0]
img_num = atten_batch.size()[0]
# fid = open('imagenet_val_shape.txt', 'a')
# print(np.shape(img_batch), np.shape(label_batch), np.shape(org_size_batch), np.shape(atten_batch))
        for idx in range(img_num):
atten = atten_batch[idx]
atten = atten.cpu().data.numpy()
label = label_batch[idx]
label = int(label)
self._save_masked_img(path_batch[idx], atten,label)
def _get_idx2cate_dict(self, datasetname=None):
if datasetname not in idx2catename.keys():
            print('The given %s dataset category names are not available. The supported are: %s'
                  % (str(datasetname), ','.join(idx2catename.keys())))
return None
else:
return {idx:cate_name for idx, cate_name in enumerate(idx2catename[datasetname])}
def _save_masked_img(self, img_path, atten, label):
'''
save masked images with only one ground truth label
:param path:
:param img:
:param atten:
:param org_size:
:param label:
:param scores:
:param step:
:param args:
:return:
'''
if not os.path.isfile(img_path):
            raise IOError('Image does not exist: %s' % img_path)
img = cv2.imread(img_path)
org_size = np.shape(img)
w = org_size[0]
h = org_size[1]
attention_map = atten[label,:,:]
atten_norm = attention_map
print(np.shape(attention_map), 'Max:', np.max(attention_map), 'Min:',np.min(attention_map))
# min_val = np.min(attention_map)
# max_val = np.max(attention_map)
# atten_norm = (attention_map - min_val)/(max_val - min_val)
atten_norm = cv2.resize(atten_norm, dsize=(h,w))
atten_norm = atten_norm* 255
heat_map = cv2.applyColorMap(atten_norm.astype(np.uint8), cv2.COLORMAP_JET)
img = cv2.addWeighted(img.astype(np.uint8), 0.5, heat_map.astype(np.uint8), 0.5, 0)
img_id = img_path.strip().split('/')[-1]
img_id = img_id.strip().split('.')[0]
save_dir = os.path.join(self.save_dir, img_id+'.png')
cv2.imwrite(save_dir, img)
def get_img_id(self, path):
img_id = path.strip().split('/')[-1]
return img_id.strip().split('.')[0]
def save_top_5_atten_maps(self, atten_fuse_batch, top_indices_batch, org_paths, topk=5):
'''
Save top-5 localization maps for generating bboxes
:param atten_fuse_batch: normalized last layer feature maps of size (batch_size, C, W, H), type: numpy array
:param top_indices_batch: ranked predicted labels of size (batch_size, C), type: numpy array
:param org_paths:
:param args:
:return:
'''
img_num = np.shape(atten_fuse_batch)[0]
        for idx in range(img_num):
img_id = org_paths[idx].strip().split('/')[-1][:-4]
for k in range(topk):
atten_pos = top_indices_batch[idx, k]
atten_map = atten_fuse_batch[idx, atten_pos,:,:]
heat_map = cv2.resize(atten_map, dsize=(224, 224))
# heat_map = cv2.resize(atten_map, dsize=(img_shape[1], img_shape[0]))
heat_map = heat_map* 255
save_path = os.path.join(self.save_dir, 'heat_maps', 'top%d'%(k+1))
if not os.path.exists(save_path):
os.makedirs(save_path)
save_path = os.path.join(save_path,img_id+'.png')
cv2.imwrite(save_path, heat_map)
# def save_heatmap_segmentation(self, img_path, atten, gt_label, save_dir=None, size=(224,224), maskedimg=False):
# assert np.ndim(atten) == 4
#
# labels_idx = np.where(gt_label[0]==1)[0] if np.ndim(gt_label)==2 else np.where(gt_label==1)[0]
#
# if save_dir is None:
# save_dir = self.save_dir
# if not os.path.exists(save_dir):
# os.mkdir(save_dir)
#
# if isinstance(img_path, list) or isinstance(img_path, tuple):
# batch_size = len(img_path)
# for i in range(batch_size):
# img, size = self.read_img(img_path[i], size=size)
# atten_img = atten[i] #get attention maps for the i-th img of the batch
# img_name = self.get_img_id(img_path[i])
# img_dir = os.path.join(save_dir, img_name)
# if not os.path.exists(img_dir):
# os.mkdir(img_dir)
# for k in labels_idx:
# atten_map_k = atten_img[k,:,:]
# atten_map_k = cv2.resize(atten_map_k, dsize=size)
# if maskedimg:
# img_to_save = self._add_msk2img(img, atten_map_k)
# else:
# img_to_save = self.normalize_map(atten_map_k)*255.0
#
# save_path = os.path.join(img_dir, '%d.png'%(k))
# cv2.imwrite(save_path, img_to_save)
def normalize_map(self, atten_map):
min_val = np.min(atten_map)
max_val = np.max(atten_map)
atten_norm = (atten_map - min_val)/(max_val - min_val)
return atten_norm
def _add_msk2img(self, img, msk, isnorm=True):
if np.ndim(img) == 3:
assert np.shape(img)[0:2] == np.shape(msk)
else:
assert np.shape(img) == np.shape(msk)
if isnorm:
min_val = np.min(msk)
max_val = np.max(msk)
atten_norm = (msk - min_val)/(max_val - min_val)
atten_norm = atten_norm* 255
heat_map = cv2.applyColorMap(atten_norm.astype(np.uint8), cv2.COLORMAP_JET)
w_img = cv2.addWeighted(img.astype(np.uint8), 0.5, heat_map.astype(np.uint8), 0.5, 0)
return w_img
def _draw_text(self, pic, txt, pos='topleft'):
font = cv2.FONT_HERSHEY_SIMPLEX #multiple line
txt = txt.strip().split('\n')
stat_y = 30
for t in txt:
pic = cv2.putText(pic,t,(10,stat_y), font, 0.8,(255,255,255),2,cv2.LINE_AA)
stat_y += 30
return pic
def _mark_score_on_picture(self, pic, score_vec, label_idx):
score = score_vec[label_idx]
txt = '%.3f'%(score)
pic = self._draw_text(pic, txt, pos='topleft')
return pic
def get_heatmap_idxes(self, gt_label):
labels_idx = []
if np.ndim(gt_label) == 1:
labels_idx = np.expand_dims(gt_label, axis=1).astype(np.int)
elif np.ndim(gt_label) == 2:
for row in gt_label:
idxes = np.where(row[0]==1)[0] if np.ndim(row)==2 else np.where(row==1)[0]
labels_idx.append(idxes.tolist())
else:
labels_idx = None
return labels_idx
def get_map_k(self, atten, k, size=(224,224)):
atten_map_k = atten[k,:,:]
# print np.max(atten_map_k), np.min(atten_map_k)
atten_map_k = cv2.resize(atten_map_k, dsize=size)
return atten_map_k
def read_img(self, img_path, size=(224,224)):
img = cv2.imread(img_path)
if img is None:
            print("Image does not exist. %s" % img_path)
exit(0)
if size == (0,0):
size = np.shape(img)[:2]
else:
img = cv2.resize(img, size)
return img, size[::-1]
def get_masked_img(self, img_path, atten, gt_label,
size=(224,224), maps_in_dir=False, save_dir=None, only_map=False):
assert np.ndim(atten) == 4
save_dir = save_dir if save_dir is not None else self.save_dir
if isinstance(img_path, list) or isinstance(img_path, tuple):
batch_size = len(img_path)
label_indexes = self.get_heatmap_idxes(gt_label)
for i in range(batch_size):
img, size = self.read_img(img_path[i], size)
img_name = img_path[i].split('/')[-1]
img_name = img_name.strip().split('.')[0]
if maps_in_dir:
img_save_dir = os.path.join(save_dir, img_name)
os.mkdir(img_save_dir)
for k in label_indexes[i]:
atten_map_k = self.get_map_k(atten[i], k , size)
msked_img = self._add_msk2img(img, atten_map_k)
suffix = str(k+1)
if only_map:
save_img = (self.normalize_map(atten_map_k)*255).astype(np.int)
else:
save_img = msked_img
if maps_in_dir:
cv2.imwrite(os.path.join(img_save_dir, suffix + '.png'), save_img)
else:
cv2.imwrite(os.path.join(save_dir, img_name + '_' + suffix + '.png'), save_img)
# if score_vec is not None and labels_idx is not None:
# msked_img = self._mark_score_on_picture(msked_img, score_vec, labels_idx[k])
# if labels_idx is not None:
# suffix = self.idx2cate.get(labels_idx[k], k)
# def get_masked_img_ml(self, img_path, atten, save_dir=None, size=(224,224),
# gt_label=None, score_vec=None):
# assert np.ndim(atten) == 4
#
# if gt_label is not None and self.idx2cate is not None:
# labels_idx = np.where(gt_label[0]==1)[0] if np.ndim(gt_label)==2 else np.where(gt_label==1)[0]
# else:
# labels_idx = None
#
#
# if save_dir is not None:
# self.save_dir = save_dir
# if isinstance(img_path, list) or isinstance(img_path, tuple):
# batch_size = len(img_path)
# for i in range(batch_size):
# img = cv2.imread(img_path[i])
# if img is None:
# print "Image does not exist. %s" %(img_path[i])
# exit(0)
#
# else:
# atten_img = atten[i] #get attention maps for the i-th img
# img_name = img_path[i].split('/')[-1]
# for k in range(np.shape(atten_img)[0]):
# if size == (0,0):
# w, h, _ = np.shape(img)
# # h, w, _ = np.shape(img)
# else:
# h, w = size
# img = cv2.resize(img, dsize=(h, w))
# atten_map_k = atten_img[k,:,:]
# # print np.max(atten_map_k), np.min(atten_map_k)
# atten_map_k = cv2.resize(atten_map_k, dsize=(h,w))
# msked_img = self._add_msk2img(img, atten_map_k)
# if score_vec is not None and labels_idx is not None:
# msked_img = self._mark_score_on_picture(msked_img, score_vec, labels_idx[k])
# if labels_idx is not None:
# suffix = self.idx2cate.get(labels_idx[k], k)
# else:
# suffix = str(k)
# if '.' in img_name:
# img_name = img_name.strip().split('.')[0]
# cv2.imwrite(os.path.join(self.save_dir, img_name + '_' + suffix + '.png'), msked_img)
#
#
# def get_masked_img(self, img_path, atten, save_dir=None, size=(224,224), combine=True):
# '''
#
# :param img_path:
# :param atten:
# :param size: if it is (0,0) use original image size, otherwise use the specified size.
# :param combine:
# :return:
# '''
#
# if save_dir is not None:
# self.save_dir = save_dir
# if isinstance(img_path, list) or isinstance(img_path, tuple):
# batch_size = len(img_path)
#
# for i in range(batch_size):
# atten_norm = atten[i]
# min_val = np.min(atten_norm)
# max_val = np.max(atten_norm)
# atten_norm = (atten_norm - min_val)/(max_val - min_val)
# # print np.max(atten_norm), np.min(atten_norm)
# img = cv2.imread(img_path[i])
# if img is None:
# print "Image does not exist. %s" %(img_path[i])
# exit(0)
#
# if size == (0,0):
# w, h, _ = np.shape(img)
# # h, w, _ = np.shape(img)
# else:
# h, w = size
# img = cv2.resize(img, dsize=(h, w))
#
# atten_norm = cv2.resize(atten_norm, dsize=(h,w))
# # atten_norm = cv2.resize(atten_norm, dsize=(w,h))
# atten_norm = atten_norm* 255
# heat_map = cv2.applyColorMap(atten_norm.astype(np.uint8), cv2.COLORMAP_JET)
# img = cv2.addWeighted(img.astype(np.uint8), 0.5, heat_map.astype(np.uint8), 0.5, 0)
#
#
# # font = cv2.FONT_HERSHEY_SIMPLEX
# # cv2.putText(img,'OpenCV \n hello',(10,500), font, 4,(255,255,255),2,cv2.LINE_AA)
#
# img_name = img_path[i].split('/')[-1]
# print os.path.join(self.save_dir, img_name)
# cv2.imwrite(os.path.join(self.save_dir, img_name), img)
def get_atten_map(self, img_path, atten, save_dir=None, size=(321,321)):
'''
:param img_path:
:param atten:
:param size: if it is (0,0) use original image size, otherwise use the specified size.
:param combine:
:return:
'''
if save_dir is not None:
self.save_dir = save_dir
if isinstance(img_path, list) or isinstance(img_path, tuple):
batch_size = len(img_path)
for i in range(batch_size):
atten_norm = atten[i]
min_val = np.min(atten_norm)
max_val = np.max(atten_norm)
atten_norm = (atten_norm - min_val)/(max_val - min_val)
# print np.max(atten_norm), np.min(atten_norm)
h, w = size
atten_norm = cv2.resize(atten_norm, dsize=(h,w))
# atten_norm = cv2.resize(atten_norm, dsize=(w,h))
atten_norm = atten_norm* 255
img_name = img_path[i].split('/')[-1]
img_name = img_name.replace('jpg', 'png')
cv2.imwrite(os.path.join(self.save_dir, img_name), atten_norm)
class DRAW(object):
def __init__(self):
pass
def draw_text(self, img, text):
if isinstance(text, dict):
pass
| 42.458937 | 131 | 0.529867 | 2,313 | 17,578 | 3.801556 | 0.143104 | 0.034232 | 0.020471 | 0.016377 | 0.508359 | 0.457182 | 0.409417 | 0.367338 | 0.339702 | 0.331969 | 0 | 0.022698 | 0.340824 | 17,578 | 413 | 132 | 42.561743 | 0.73617 | 0.330982 | 0 | 0.185 | 0 | 0 | 0.083573 | 0 | 0 | 0 | 0 | 0 | 0.015 | 0 | null | null | 0.01 | 0.05 | null | null | 0.015 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d29e853085f1e22d6f5c45806ff223b5999daf1d | 315 | py | Python | notebooks/datasets.py | jweill-aws/jupyterlab-data-explorer | 3db8eed9562f35d2b0e44370cf22f32ac9ffbc4d | [
"BSD-3-Clause"
] | 173 | 2019-01-04T05:18:08.000Z | 2022-03-28T11:15:30.000Z | notebooks/datasets.py | jweill-aws/jupyterlab-data-explorer | 3db8eed9562f35d2b0e44370cf22f32ac9ffbc4d | [
"BSD-3-Clause"
] | 115 | 2019-01-04T01:09:41.000Z | 2022-03-24T01:07:00.000Z | notebooks/datasets.py | jweill-aws/jupyterlab-data-explorer | 3db8eed9562f35d2b0e44370cf22f32ac9ffbc4d | [
"BSD-3-Clause"
] | 34 | 2019-06-12T16:46:53.000Z | 2022-02-01T08:41:40.000Z | #
# @license BSD-3-Clause
#
# Copyright (c) 2019 Project Jupyter Contributors.
# Distributed under the terms of the 3-Clause BSD License.
import IPython.display
import pandas
def output_url(url):
IPython.display.publish_display_data(
{"application/x.jupyter.relative-dataset-urls+json": [url]}
)
| 21 | 67 | 0.730159 | 42 | 315 | 5.404762 | 0.714286 | 0.061674 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0.161905 | 315 | 14 | 68 | 22.5 | 0.837121 | 0.403175 | 0 | 0 | 0 | 0 | 0.263736 | 0.263736 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d2aa2e4deaca6a1a85b89b1e9c89d89fa5c4d8f5 | 424 | py | Python | archive/jonesboro/__init__.py | jayktee/scrapers-us-municipal | ff52a331e91cb590a3eda7db6c688d75b77acacb | [
"MIT"
] | 67 | 2015-04-28T19:28:18.000Z | 2022-01-31T03:27:17.000Z | archive/jonesboro/__init__.py | jayktee/scrapers-us-municipal | ff52a331e91cb590a3eda7db6c688d75b77acacb | [
"MIT"
] | 202 | 2015-01-15T18:43:12.000Z | 2021-11-23T15:09:10.000Z | archive/jonesboro/__init__.py | jayktee/scrapers-us-municipal | ff52a331e91cb590a3eda7db6c688d75b77acacb | [
"MIT"
] | 54 | 2015-01-27T03:15:45.000Z | 2021-09-10T19:35:32.000Z | from pupa.scrape import Jurisdiction
from legistar.ext.pupa import LegistarPeopleScraper
class Jonesboro(Jurisdiction):
division_id = 'ocd-division/country:us/state:ar/place:jonesboro'
jurisdiction_id = 'ocd-jurisdiction/country:us/state:ar/place:jonesboro/government'
name = 'Jonesboro City Council'
url = 'http://jonesboro.legistar.com/'
scrapers = {
"people": LegistarPeopleScraper,
}
| 28.266667 | 87 | 0.735849 | 47 | 424 | 6.595745 | 0.574468 | 0.135484 | 0.090323 | 0.103226 | 0.193548 | 0.193548 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15566 | 424 | 14 | 88 | 30.285714 | 0.865922 | 0 | 0 | 0 | 0 | 0 | 0.398585 | 0.261792 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d2ab49c4b3562bad12874570d0c5751dda4cf3e6 | 1,194 | py | Python | tests/settings.py | josemarimanio/django-adminlte2-templates | d39ab5eaec674c4725015fe43fc93e74dce78a6e | [
"MIT"
] | 10 | 2020-03-21T10:50:11.000Z | 2022-03-04T08:36:43.000Z | tests/settings.py | josemarimanio/django-adminlte2-templates | d39ab5eaec674c4725015fe43fc93e74dce78a6e | [
"MIT"
] | 6 | 2020-06-06T08:48:29.000Z | 2021-06-10T18:49:35.000Z | tests/settings.py | josemarimanio/django-adminlte2-templates | d39ab5eaec674c4725015fe43fc93e74dce78a6e | [
"MIT"
] | 1 | 2021-09-14T02:00:43.000Z | 2021-09-14T02:00:43.000Z | import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
SECRET_KEY = '!t_(11ght0&nmb&$tf4to=gdg&u$!hsm3@)c6dzp=zdc*c9zci' # nosec
INSTALLED_APPS = [
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'adminlte2_templates',
'tests',
]
MIDDLEWARE = [
'django.contrib.sessions.middleware.SessionMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
]
ROOT_URLCONF = 'tests.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'APP_DIRS': True,
'DIRS': [os.path.join(BASE_DIR, 'tests/templates')],
'OPTIONS': {
'context_processors': [
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'adminlte2_templates.context_processors.template',
],
},
},
]
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
PASSWORD_HASHERS = [
'django.contrib.auth.hashers.MD5PasswordHasher',
]
| 23.88 | 74 | 0.629816 | 118 | 1,194 | 6.211864 | 0.5 | 0.141883 | 0.092769 | 0.040928 | 0.090041 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012834 | 0.216918 | 1,194 | 49 | 75 | 24.367347 | 0.771123 | 0.004188 | 0 | 0 | 0 | 0 | 0.518955 | 0.385004 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.051282 | 0.025641 | 0 | 0.025641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
d2ae04ea58cc84694d33370988510f0b8bdcadb9 | 2,658 | py | Python | two-variables-function-fitting/fxy_gen.py | ettoremessina/fitting-with-mlp-using-tensorflow | 50303c7161521f690c37b80a72a281129052365b | [
"MIT"
] | 9 | 2020-03-21T08:45:28.000Z | 2021-11-30T02:49:41.000Z | two-variables-function-fitting/fxy_gen.py | ettoremessina/fitting-with-mlp-using-tensorflow | 50303c7161521f690c37b80a72a281129052365b | [
"MIT"
] | null | null | null | two-variables-function-fitting/fxy_gen.py | ettoremessina/fitting-with-mlp-using-tensorflow | 50303c7161521f690c37b80a72a281129052365b | [
"MIT"
] | 3 | 2020-04-08T15:35:03.000Z | 2022-03-22T02:19:02.000Z | import argparse
import numpy as np
import csv
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='fxy_gen.py generates a synthetic dataset file calling a two-variables real function on a rectangle')
parser.add_argument('--dsout',
type=str,
dest='ds_output_filename',
required=True,
help='dataset output file (csv format)')
parser.add_argument('--fxy',
type=str,
dest='func_xy_body',
required=True,
help='f(x, y) body (lamba format)')
parser.add_argument('--rxbegin',
type=float,
dest='range_xbegin',
required=False,
default=-5.0,
help='begin x range (default:-5.0)')
parser.add_argument('--rxend',
type=float,
dest='range_xend',
required=False,
default=+5.0,
help='end x range (default:+5.0)')
parser.add_argument('--rybegin',
type=float,
dest='range_ybegin',
required=False,
default=-5.0,
help='begin y range (default:-5.0)')
parser.add_argument('--ryend',
type=float,
dest='range_yend',
required=False,
default=+5.0,
help='end y range (default:+5.0)')
parser.add_argument('--rstep',
type=float,
dest='range_step',
required=False,
default=0.01,
help='step range (default: 0.01)')
args = parser.parse_args()
    print("#### Started {} {} ####".format(__file__, args))
x_values = np.arange(args.range_xbegin, args.range_xend, args.range_step, dtype=float)
y_values = np.arange(args.range_ybegin, args.range_yend, args.range_step, dtype=float)
func_xy = eval('lambda x, y: ' + args.func_xy_body)
csv_ds_output_file = open(args.ds_output_filename, 'w')
with csv_ds_output_file:
writer = csv.writer(csv_ds_output_file, delimiter=',')
for i in range(0, x_values.size):
for j in range(0, y_values.size):
writer.writerow([x_values[i], y_values[j], func_xy(x_values[i], y_values[j])])
    print("#### Terminated {} ####".format(__file__))
| 37.971429 | 150 | 0.482318 | 281 | 2,658 | 4.338078 | 0.309609 | 0.052502 | 0.059065 | 0.073831 | 0.305168 | 0.229696 | 0.203445 | 0.105004 | 0 | 0 | 0 | 0.015019 | 0.398796 | 2,658 | 69 | 151 | 38.521739 | 0.74781 | 0 | 0 | 0.315789 | 1 | 0 | 0.18623 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.052632 | 0.035088 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d2af35f5ecd1284185b97cd7fd48a1dabdbf319d | 1,714 | py | Python | data_input.py | zpcore/OnePass | fc102fae172c617535d4661bfa99a0302cbe09db | [
"MIT"
] | null | null | null | data_input.py | zpcore/OnePass | fc102fae172c617535d4661bfa99a0302cbe09db | [
"MIT"
] | null | null | null | data_input.py | zpcore/OnePass | fc102fae172c617535d4661bfa99a0302cbe09db | [
"MIT"
] | null | null | null | import json
import string, sys
from random import *
class Token:
def __init__(self):
self.company, self.website, self.email, self.username, self.password = None, None, None, None, None
def get_input(self):
while(self.company in (None,'')):
self.company = input('Account Association:')
if(self.company in (None,'')):
print('Account Association cannot be null, try again.')
self.website = input('Website linked to the account:')
self.email = input('Email linked to the account:')
# while(self.email in (None,'')):
# self.email = input('Registered Email:')
# if(self.email in (None,'')):
# print('Email cannot be null, try again.')
while(self.username in (None,'')):
self.username = input('Username:')
if(self.username in (None,'')):
print('Username cannot be null, try again.')
while(self.password in (None,'')):
select = input('Randomly generate a password for you? Type Y or N. ').strip().lower()
if(select in ('y','yes')):
characters = string.ascii_letters + string.punctuation + string.digits
low_bound, up_bound = 10, 20
password = "".join(choice(characters) for x in range(randint(low_bound, up_bound)))
self.password = password
print('auto generated password:'+self.password)
elif(select in ('n','no')):
self.password = input('Password:')
if(self.password in (None,'')):
print('Password cannot be null, try again.')
else:
print('Incorrect choice. Try again.')
class MyEncoder(json.JSONEncoder):
def default(self, obj):
if not isinstance(obj, Token):
return super().default(obj)
return obj.__dict__
# tok = Token()
# tok.get_input()
# print(json.dumps(tok, cls=MyEncoder)) | 32.339623 | 101 | 0.656943 | 232 | 1,714 | 4.788793 | 0.340517 | 0.043204 | 0.039604 | 0.054005 | 0.088209 | 0.052205 | 0.052205 | 0 | 0 | 0 | 0 | 0.002865 | 0.185531 | 1,714 | 53 | 102 | 32.339623 | 0.79298 | 0.124854 | 0 | 0 | 0 | 0 | 0.214334 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0.25 | 0.083333 | 0 | 0.277778 | 0.138889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
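The auto-generated password above joins random picks from letters, punctuation, and digits. A standalone sketch of the same logic, with the length bounds matching the class defaults (the function name is illustrative, not from the file):

```python
import string
from random import choice, randint

def generate_password(low_bound=10, up_bound=20):
    # Same character pool as Token.get_input: letters + punctuation + digits.
    characters = string.ascii_letters + string.punctuation + string.digits
    return "".join(choice(characters) for _ in range(randint(low_bound, up_bound)))

pw = generate_password()
print(len(pw))  # somewhere between 10 and 20
```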
d2b75bb3697ff16713aa871c5e493e77fa916f5c | 1,620 | py | Python | virtus/core/migrations/0004_auto_20180417_1625.py | eltonjncorreia/gerenciar-dados-virtus | b8e1b8caa152b18221046f6841761d805b232268 | [
"MIT"
] | null | null | null | virtus/core/migrations/0004_auto_20180417_1625.py | eltonjncorreia/gerenciar-dados-virtus | b8e1b8caa152b18221046f6841761d805b232268 | [
"MIT"
] | null | null | null | virtus/core/migrations/0004_auto_20180417_1625.py | eltonjncorreia/gerenciar-dados-virtus | b8e1b8caa152b18221046f6841761d805b232268 | [
"MIT"
] | null | null | null | # Generated by Django 2.0.4 on 2018-04-17 19:25
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('core', '0003_auto_20180417_1613'),
]
operations = [
migrations.CreateModel(
name='Item',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('codigo', models.IntegerField(verbose_name='codigo')),
('descricao', models.CharField(max_length=255, verbose_name='descricao')),
('valor', models.DecimalField(decimal_places=2, max_digits=10, verbose_name='valor')),
('unitario', models.DecimalField(decimal_places=2, max_digits=10, verbose_name='Unitário')),
('quantidade', models.IntegerField(verbose_name='quantidade')),
],
options={
'verbose_name': 'Item',
'verbose_name_plural': 'Itens',
'ordering': ['codigo'],
},
),
migrations.AlterModelOptions(
name='cliente',
options={'ordering': ['nome'], 'verbose_name': 'Cliente', 'verbose_name_plural': 'Clientes'},
),
migrations.AlterModelOptions(
name='endereco',
options={'ordering': ['tipo'], 'verbose_name': 'Endereço', 'verbose_name_plural': 'Endereços'},
),
migrations.AlterModelOptions(
name='pedido',
options={'ordering': ['numero'], 'verbose_name': 'Pedido', 'verbose_name_plural': 'Pedidos'},
),
]
| 38.571429 | 114 | 0.569753 | 145 | 1,620 | 6.172414 | 0.482759 | 0.172067 | 0.075978 | 0.064804 | 0.12067 | 0.12067 | 0.12067 | 0.12067 | 0.12067 | 0.12067 | 0 | 0.034305 | 0.280247 | 1,620 | 41 | 115 | 39.512195 | 0.733276 | 0.027778 | 0 | 0.2 | 1 | 0 | 0.230134 | 0.014622 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.028571 | 0 | 0.114286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d2b975627d7b7c61820ad7bec967dad5b7b1e8aa | 4,511 | py | Python | oxide/plugins/other/StartupItems.py | john-clark/rust-oxide-umod | 56feca04f96d8a43a1b56e080fc81d526f7471c3 | [
"MIT"
] | 13 | 2019-05-13T08:03:50.000Z | 2022-02-06T16:44:35.000Z | oxide/plugins/other/StartupItems.py | john-clark/rust-oxide-umod | 56feca04f96d8a43a1b56e080fc81d526f7471c3 | [
"MIT"
] | null | null | null | oxide/plugins/other/StartupItems.py | john-clark/rust-oxide-umod | 56feca04f96d8a43a1b56e080fc81d526f7471c3 | [
"MIT"
] | 8 | 2019-12-12T15:48:03.000Z | 2021-12-24T17:04:45.000Z | # Note:
# I add an underscore at the beginning of the variable name, for example "_variable", to prevent
# conflicts with built-in variables from Oxide.
# Use to manage the player's inventory.
import ItemManager
# Use to get player's information.
import BasePlayer
# The plug-in name should be the same as the class name and file name.
class StartupItems:
# Always start with a constructor.
def __init__(self):
# All the variables listed below are recommended for the plug-in and developer information.
self.Title = 'StartupItems'
self.Description = 'Set default items when player respawn after dead.'
self.Author = 'RedNinja1337'
self.Version = V(1, 0, 5)
self.Url = 'http://oxidemod.org/plugins/startupitems.1323/'
self.ResourceId = 1323
# Create the configuration file if it does not exists.
def LoadDefaultConfig(self):
# Add some demo data as an example on the configuration file.
self.Config['GroupItems'] = ({
'admin':({'item_shortname':'attire.hide.boots', 'Amount':1, 'Container':'Wear'},
{'item_shortname':'attire.hide.pants', 'Amount':1, 'Container':'Wear'},
{'item_shortname':'rock', 'Amount':1, 'Container':'Belt'},
{'item_shortname':'bow.hunting', 'Amount':1, 'Container':'Belt'},
{'item_shortname':'arrow.hv', 'Amount':25, 'Container':'Main'},),
'moderator':({},),
'player':({},)
})
# Called from BasePlayer.Respawn.
# Called when the player spawns (specifically when they click the "Respawn" button).
# ONLY called after the player has transitioned from dead to not-dead, so not when they're waking up.
def OnPlayerRespawned(self, BasePlayer):
# Check if there is any group set on the configuration file.
if self.Config['GroupItems']:
# If at least one group is found on the configuration file then set the variable "_GroupItems" equals the group's dictionary.
_GroupItems = self.Config['GroupItems']
# Set the variable "_Group" equals the list of groups the player belongs to. By default all players belong to the group "player".
_Group = permission.GetUserGroups(BasePlayer.userID.ToString())
# Set the variable "_SetGroup" equals the last group the user was added to from Oxide.Group. By default all players belong to the group "player".
_SetGroup = _GroupItems.get(_Group[-1])
# Check if the group exists in the config file.
if _SetGroup:
try: # Catch the "KeyNotFoundException" error if "Container", "item_shortname" or "Amount" is not found on the config file.
if _SetGroup[0]['Container'] and _SetGroup[0]['item_shortname'] and _SetGroup[0]['Amount']:
# Set the variable "inv" equals the player's inventory.
inv = BasePlayer.inventory
# Empty the player's inventory.
inv.Strip()
# Iterate through the list of items for the specify group from the configuration file.
for item in _SetGroup:
# Add the items set on the configuration file to each container on the player's inventory.
if item['Container'].lower() == 'main':
inv.GiveItem(ItemManager.CreateByName(item['item_shortname'],item['Amount']), inv.containerMain)
elif item['Container'].lower() == 'belt':
inv.GiveItem(ItemManager.CreateByName(item['item_shortname'],item['Amount']), inv.containerBelt)
elif item['Container'].lower() == 'wear':
inv.GiveItem(ItemManager.CreateByName(item['item_shortname'],item['Amount']), inv.containerWear)
else: return
else: print False
# Catch the "KeyNotFoundException" error if "Container", "item_shortname" or "Amount" is not found on the config file.
except KeyError: return
else: return
else: return
| 51.261364 | 153 | 0.570162 | 491 | 4,511 | 5.179226 | 0.358452 | 0.056233 | 0.047188 | 0.029886 | 0.279591 | 0.22965 | 0.177743 | 0.177743 | 0.177743 | 0.146284 | 0 | 0.008358 | 0.336954 | 4,511 | 87 | 154 | 51.850575 | 0.841859 | 0.381734 | 0 | 0.071429 | 0 | 0 | 0.187771 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.047619 | null | null | 0.02381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
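The respawn hook keys the item list off the player's last Oxide group. The lookup itself can be sketched in plain Python, outside any game API (the data below is the demo config shape; names are illustrative):

```python
group_items = {
    'admin': ({'item_shortname': 'rock', 'Amount': 1, 'Container': 'Belt'},),
    'player': ({},),
}

def items_for(groups):
    # Mirror of OnPlayerRespawned: use the last group the player was added to.
    return group_items.get(groups[-1])

print(items_for(['player', 'admin']))  # admin's item tuple
```

`dict.get` returns None for a group missing from the config, which is why the plug-in checks `if _SetGroup:` before stripping the inventory.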
d2bbabe21477b77848cbfcaba239a66c8fe04262 | 1,043 | py | Python | error_handler.py | jrg1381/sm_asr_console | 47c4090075deaaa7f58e9a092423a58bc7b0a30f | [
"MIT"
] | 2 | 2019-08-07T11:08:06.000Z | 2021-01-20T11:28:37.000Z | error_handler.py | jrg1381/sm_asr_console | 47c4090075deaaa7f58e9a092423a58bc7b0a30f | [
"MIT"
] | null | null | null | error_handler.py | jrg1381/sm_asr_console | 47c4090075deaaa7f58e9a092423a58bc7b0a30f | [
"MIT"
] | null | null | null | # encoding: utf-8
""" Parameterized decorator for catching errors and displaying them in an error popup """
from enum import Enum
import npyscreen
class DialogType(Enum):
"""
Enum defining the type of dialog.
CONFIRM - the dialog waits until the user clicks OK
BRIEF - the dialog appears for a few seconds and then vanishes
"""
CONFIRM = npyscreen.notify_confirm
BRIEF = npyscreen.notify_wait
# PythonDecorators/decorator_function_with_arguments.py
def error_handler(title, dialog_type=DialogType.CONFIRM):
"""
Decorator for functions to catch their exceptions and display them in an error popup
:param title: The title of the error pop-up
:param dialog_type: A DialogType enum
"""
def wrap(original_function):
def wrapped_f(*args):
try:
return original_function(*args)
except Exception as ex: # pylint: disable=broad-except
dialog_type(str(ex), title)
return None
return wrapped_f
return wrap
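A minimal usage sketch of the same parameterized-decorator pattern, with a plain print standing in for the npyscreen popup (the real popup needs a running curses application):

```python
def popup(message, title):
    # Stand-in for npyscreen.notify_confirm / notify_wait.
    print('[{}] {}'.format(title, message))

def error_handler_demo(title, dialog=popup):
    def wrap(original_function):
        def wrapped_f(*args):
            try:
                return original_function(*args)
            except Exception as ex:
                dialog(str(ex), title)
                return None
        return wrapped_f
    return wrap

@error_handler_demo('Math error')
def divide(a, b):
    return a / b

print(divide(6, 3))  # 2.0
print(divide(1, 0))  # prints the popup text, then None
```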
| 29.8 | 89 | 0.681687 | 134 | 1,043 | 5.208955 | 0.552239 | 0.04298 | 0.022923 | 0.037249 | 0.051576 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00129 | 0.256951 | 1,043 | 34 | 90 | 30.676471 | 0.899355 | 0.477469 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.133333 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d2c1cd83dd904d0ffd396c1f85ce4d771a28e638 | 4,813 | py | Python | app/network_x_tools/network_x_utils.py | ThembiNsele/ClimateMind-Backend | 0e418000b2a0141a1e4a7c11dbe3564082a3f4bb | [
"MIT"
] | 6 | 2020-08-20T10:49:59.000Z | 2022-01-24T16:49:46.000Z | app/network_x_tools/network_x_utils.py | ThembiNsele/ClimateMind-Backend | 0e418000b2a0141a1e4a7c11dbe3564082a3f4bb | [
"MIT"
] | 95 | 2020-07-24T22:32:34.000Z | 2022-03-05T15:01:16.000Z | app/network_x_tools/network_x_utils.py | ThembiNsele/ClimateMind-Backend | 0e418000b2a0141a1e4a7c11dbe3564082a3f4bb | [
"MIT"
] | 5 | 2020-07-30T17:29:09.000Z | 2021-01-10T19:46:15.000Z | class network_x_utils:
"""
This class provides commonly used utils which are shared between all different types
of NetworkX nodes (Feed Items, Solutions, Myths). For each of these, we want to be
able to pull basic information like the IRI, Descriptions, Images, etc.
Include any generalized NetworkX functions here.
"""
def __init__(self):
self.node = None # Current node
def set_current_node(self, node):
"""We usually pull multiple node related items simultaneously. Rather
than pass these in individually for each function, this let's us use the same
node for all of the functions in this class.
"""
self.node = node
def get_node_id(self):
"""Node IDs are the unique identifier in the IRI. This is provided to the
front-end as a reference for the feed, but is never shown to the user.
Example http://webprotege.stanford.edu/R8znJBKduM7l8XDXMalSWSl
"""
offset = 4 # .edu <- to skip these characters and get the unique IRI
full_iri = self.node["iri"]
pos = full_iri.find("edu") + offset
return full_iri[pos:]
def get_description(self):
"""Long Descriptions are used by the front-end to display explanations of the
climate effects shown in user feeds.
"""
try:
return self.node["properties"]["schema_longDescription"][0]
except:
return "No long desc available at present"
def get_short_description(self):
"""Short Descriptions are used by the front-end to display explanations of the
climate effects shown in user feeds.
"""
try:
return self.node["properties"]["schema_shortDescription"][0]
except:
return "No short desc available at present"
def get_image_url(self):
"""Images are displayed to the user in the climate feed to accompany an explanation
of the climate effects. The front-end is provided with the URL and then requests
these images from our server.
"""
try:
return self.node["properties"]["schema_image"][0]
except:
# Default image url if the node has no image
return "https://yaleclimateconnections.org/wp-content/uploads/2018/04/041718_child_factories.jpg"
def get_image_url_or_none(self):
"""Images are displayed to the user in the climate feed to accompany an explanation
of the climate effects. The front-end is provided with the URL and then requests
these images from our server.
"""
try:
return self.node["properties"]["schema_image"][0]
except:
# Return None if the node has no image
return None
def get_causal_sources(self):
"""Sources are displayed to the user in the sources tab of the impacts overlay page.
This function returns a list of urls of the sources to show on the impact overlay page for an impact/effect.
Importantly, these sources aren't directly from the networkx node, but all the networkx edges that cause the node.
Only returns edges that are directly tied to the node (ancestor edge sources are not used)
"""
if "causal sources" in self.node and len(self.node["causal sources"]) > 0:
return self.node["causal sources"]
return [] # Default if none; should this be the IPCC or the US National Climate Assessment?
def get_solution_sources(self):
"""Returns a flattened list of custom solution source values from each node key that matches
custom_source_types string.
"""
try:
return self.node["solution sources"]
except:
return []
def get_is_possibly_local(self, node):
"""Returns whether it's possible that a node effects a particular user based on
their location. Note that here we need to pass in the node directly, rather than
using one set by the class as the node comes from the localised_acyclic_graph.py
rather than a the standard graph.
"""
if "isPossiblyLocal" in node:
if node["isPossiblyLocal"]:
return 1
else:
return 0
else:
return 0
def get_co2_eq_reduced(self):
"""
Returns the solution's CO2 Equivalent Reduced / Sequestered (2020–2050) in Gigatons.
Values taken from Project Drawdown scenario 2.
"""
if "CO2_eq_reduced" in self.node["data_properties"]:
return self.node["data_properties"]["CO2_eq_reduced"]
else:
return 0
| 39.45082 | 122 | 0.635155 | 639 | 4,813 | 4.707355 | 0.345853 | 0.042553 | 0.027926 | 0.028258 | 0.267287 | 0.267287 | 0.24867 | 0.240027 | 0.240027 | 0.240027 | 0 | 0.011249 | 0.298151 | 4,813 | 121 | 123 | 39.77686 | 0.878922 | 0.507584 | 0 | 0.350877 | 0 | 0 | 0.203722 | 0.022037 | 0 | 0 | 0 | 0 | 0 | 1 | 0.192982 | false | 0 | 0 | 0 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
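`get_node_id` slices the unique identifier off the IRI by finding "edu" and skipping a 4-character offset. A standalone sketch of that slicing, using the example IRI from the docstring (the function name is illustrative):

```python
def node_id_from_iri(full_iri):
    # Mirror of get_node_id: skip past "edu" plus a 4-char offset
    # (the 'edu' itself and the trailing slash).
    offset = 4
    pos = full_iri.find("edu") + offset
    return full_iri[pos:]

iri = "http://webprotege.stanford.edu/R8znJBKduM7l8XDXMalSWSl"
print(node_id_from_iri(iri))  # R8znJBKduM7l8XDXMalSWSl
```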
d2c3e3e6ef11ddd684a0bcebf23085d7e1d9152c | 1,191 | py | Python | crawlai/items/critter/base_critter.py | apockill/CreepyCrawlAI | 2862c03e686801884ffb579a7be29f3c9d0da610 | [
"MIT"
] | 13 | 2020-05-04T03:11:26.000Z | 2021-12-05T03:57:45.000Z | crawlai/items/critter/base_critter.py | apockill/CreepyCrawlAI | 2862c03e686801884ffb579a7be29f3c9d0da610 | [
"MIT"
] | null | null | null | crawlai/items/critter/base_critter.py | apockill/CreepyCrawlAI | 2862c03e686801884ffb579a7be29f3c9d0da610 | [
"MIT"
] | null | null | null | from godot.bindings import ResourceLoader
from crawlai.grid_item import GridItem
from crawlai.items.food import Food
from crawlai.math_utils import clamp
from crawlai.turn import Turn
from crawlai.position import Position
_critter_resource = ResourceLoader.load("res://Game/Critter/Critter.tscn")
class BaseCritter(GridItem):
"""The base class for all critters"""
HEALTH_TICK_PENALTY = 1
MAX_HEALTH = 500
BITE_SIZE = 20
CHOICES = [
Turn(Position(*c), is_action)
for c in [(0, 1), (1, 0), (-1, 0), (0, -1)]
for is_action in (True, False)
] + [Turn(Position(0, 0), False)]
def __init__(self):
super().__init__()
self.health: int
self.age: int
self._reset_stats()
def _reset_stats(self):
self.health = self.MAX_HEALTH
self.age = 0
def _tick_stats(self):
self.age += 1
self.health -= self.HEALTH_TICK_PENALTY
def _load_instance(self):
return _critter_resource.instance()
def perform_action_onto(self, other: 'GridItem'):
if isinstance(other, Food):
max_bite = clamp(self.BITE_SIZE, 0, self.MAX_HEALTH - self.health)
self.health += other.take_nutrition(max_bite)
@property
def delete_queued(self):
return self.health <= 0
| 24.306122 | 74 | 0.715365 | 175 | 1,191 | 4.645714 | 0.377143 | 0.086101 | 0.051661 | 0.04182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020101 | 0.164568 | 1,191 | 48 | 75 | 24.8125 | 0.796985 | 0.026029 | 0 | 0 | 0 | 0 | 0.033795 | 0.026863 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0.055556 | 0.527778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d2c4507ff5f2b0e60108a433da49147fd8f6e6c4 | 3,008 | py | Python | exercises/networking_selfpaced/networking-workshop/collections/ansible_collections/community/general/plugins/doc_fragments/nios.py | tr3ck3r/linklight | 5060f624c235ecf46cb62cefcc6bddc6bf8ca3e7 | [
"MIT"
] | 17 | 2017-06-07T23:15:01.000Z | 2021-08-30T14:32:36.000Z | ansible/ansible/plugins/doc_fragments/nios.py | SergeyCherepanov/ansible | 875711cd2fd6b783c812241c2ed7a954bf6f670f | [
"MIT"
] | 9 | 2017-06-25T03:31:52.000Z | 2021-05-17T23:43:12.000Z | ansible/ansible/plugins/doc_fragments/nios.py | SergeyCherepanov/ansible | 875711cd2fd6b783c812241c2ed7a954bf6f670f | [
"MIT"
] | 3 | 2018-05-26T21:31:22.000Z | 2019-09-28T17:00:45.000Z | # -*- coding: utf-8 -*-
# Copyright: (c) 2015, Peter Sprygada <psprygada@ansible.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
class ModuleDocFragment(object):
# Standard files documentation fragment
DOCUMENTATION = r'''
options:
provider:
description:
- A dict object containing connection details.
type: dict
suboptions:
host:
description:
- Specifies the DNS host name or address for connecting to the remote
instance of NIOS WAPI over REST
- Value can also be specified using C(INFOBLOX_HOST) environment
variable.
type: str
required: true
username:
description:
- Configures the username to use to authenticate the connection to
the remote instance of NIOS.
- Value can also be specified using C(INFOBLOX_USERNAME) environment
variable.
type: str
password:
description:
- Specifies the password to use to authenticate the connection to
the remote instance of NIOS.
- Value can also be specified using C(INFOBLOX_PASSWORD) environment
variable.
type: str
validate_certs:
description:
- Boolean value to enable or disable verifying SSL certificates
- Value can also be specified using C(INFOBLOX_SSL_VERIFY) environment
variable.
type: bool
default: no
aliases: [ ssl_verify ]
http_request_timeout:
description:
- The amount of time to wait before receiving a response
- Value can also be specified using C(INFOBLOX_HTTP_REQUEST_TIMEOUT) environment
variable.
type: int
default: 10
max_retries:
description:
- Configures the number of attempted retries before the connection
is declared usable
- Value can also be specified using C(INFOBLOX_MAX_RETRIES) environment
variable.
type: int
default: 3
wapi_version:
description:
- Specifies the version of WAPI to use
- Value can also be specified using C(INFOBLOX_WAP_VERSION) environment
variable.
- Until ansible 2.8 the default WAPI was 1.4
type: str
default: '2.1'
max_results:
description:
- Specifies the maximum number of objects to be returned,
if set to a negative number the appliance will return an error when the
number of returned objects would exceed the setting.
- Value can also be specified using C(INFOBLOX_MAX_RESULTS) environment
variable.
type: int
default: 1000
notes:
- "This module must be run locally, which can be achieved by specifying C(connection: local)."
- Please read the :ref:`nios_guide` for more detailed information on how to use Infoblox with Ansible.
'''
| 35.809524 | 104 | 0.635306 | 359 | 3,008 | 5.261838 | 0.428969 | 0.03388 | 0.050821 | 0.059291 | 0.285866 | 0.233457 | 0.220222 | 0.220222 | 0.141874 | 0.099524 | 0 | 0.01068 | 0.31516 | 3,008 | 83 | 105 | 36.240964 | 0.906311 | 0.069814 | 0 | 0.351351 | 0 | 0.027027 | 0.973863 | 0.051557 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.040541 | 0 | 0 | 0.027027 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d2c55dd79284c9bf304a1f86538b6964cbb89f09 | 7,594 | py | Python | alison.py | johanhoiness/SlothBot | 556f9e0f67aa90543bd98889b06a4b939e30450d | [
"MIT"
] | 1 | 2017-06-28T09:24:49.000Z | 2017-06-28T09:24:49.000Z | alison.py | johanhoiness/SlothBot | 556f9e0f67aa90543bd98889b06a4b939e30450d | [
"MIT"
] | null | null | null | alison.py | johanhoiness/SlothBot | 556f9e0f67aa90543bd98889b06a4b939e30450d | [
"MIT"
] | null | null | null | __author__ = 'JohnHiness'
import sys
import os
import random
import time
import string
import connection
from time import strftime
import ceq
import json, urllib2
import thread
args = sys.argv
req_files = ['filegen.py', 'connection.py', 'commands.py', 'general.py', 'automatics.py']
for filename in req_files:
if os.path.exists(filename) == False:
print "Required file \"{}\" not found. Make sure you have acquired all files.".format(filename)
sys.exit(1)
import filegen
if os.path.exists('config.py') == False:
print 'No configuration-file found. Generating config.py'
filegen.gen_config()
python = sys.executable
print str(python)+'||'+str(python)+'||'+ str(* sys.argv)
os.execl(python, python, * sys.argv)
if os.path.exists('revar.py') == False:
print 'No reconfigurable file found. Generating revar.py'
filegen.gen_revar()
python = sys.executable
print str(python)+'||'+str(python)+'||'+ str(* sys.argv)
os.execl(python, python, * sys.argv)
import config
import revar
import filegen
import commands
import general
import automatics
if not revar.channels:
revar.channels = config.channel.replace(', ', ',').replace(' ', ',').split(',')
if len(args) > 1:
if args[1].lower() == 'reconfig' or args[1].lower() == 'config':
answr = raw_input("This will have you regenerate the configuration file and all old configurations will be lost.\nAre you sure you want to do this?(y/n) ")
while answr.lower() != 'y' and answr.lower() != 'n':
answr = raw_input("You must use the letters Y or N to answer: ")
if answr.lower() == 'y':
filegen.gen_config()
sys.exit(0)
if answr.lower() == 'n':
sys.exit(0)
elif args[1].lower() == 'help':
print "Usage: python alison.py <help | reconfig | >"
sys.exit(0)
else:
print "Flag not recognized."
sys.exit(1)
def connect(server, port):
print "Connecting to {} with port {}.".format(server, port)
s = connection.s
readbuffer = ''
try:
s.connect((server, port))
except BaseException as exc:
print 'Failed to connect: ' + str(exc)
sys.exit(1)
s.send("PASS %s\n" % config.password)
s.send("USER %s %s %s :%s\n" % (config.bot_username, config.bot_hostname, config.bot_servername, config.bot_realname))
s.send("NICK %s\n" % revar.bot_nick)
mode_found = False
while not mode_found:
readbuffer = readbuffer + s.recv(2048)
temp = string.split(readbuffer, "\n")
readbuffer = temp.pop()
for rline in temp:
rline = string.rstrip(rline)
rline = string.split(rline)
g = general
if rline[0] == "PING":
g.ssend("PONG %s\r" % rline[1])
if rline[1] == '433':
if revar.bot_nick.lower() != config.bot_nick2.lower():
revar.bot_nick = config.bot_nick2
else:
revar.bot_nick += '_'
g.ssend('NICK %s' % revar.bot_nick)
if len(rline) > 2 and rline[1] == '391':
revar.bot_nick = rline[2]
if len(rline) > 2 and rline[1].lower() == 'join':
if not rline[2].lower() in revar.channels:
revar.channels.append(rline[2].lower())
if len(rline) > 2 and rline[1].lower() == 'part':
if rline[2].lower() in revar.channels:
try:
revar.channels.append(rline[2].lower())
except:
pass
if rline[1] == 'MODE':
mode_found = True
g.ssend('JOIN %s' % ','.join(revar.channels))
general.update_user_info()
def server_responses(rline):
g = general
if rline[0] == "PING":
g.ssend("PONG %s\r" % rline[1])
return True
if len(rline) > 4 and rline[3] == '152':
general.append_user_info(rline)
return True
if rline[1] == '433':
if revar.bot_nick.lower() != config.bot_nick2.lower():
revar.bot_nick = config.bot_nick2
else:
revar.bot_nick += '_'
g.ssend('NICK %s' % revar.bot_nick)
return True
if len(rline) > 2 and rline[1] == '391':
revar.bot_nick = rline[2]
return True
if len(rline) > 1 and rline[1].lower() == 'pong':
general.last_pong = time.time()
return True
if len(rline) > 2 and rline[1].lower() == 'join':
if not rline[2].lower() in revar.channels:
revar.channels.append(rline[2].lower())
return True
if len(rline) > 2 and rline[1].lower() == 'nick':
general.update_user_info()
return True
if len(rline) > 2 and rline[1].lower() == 'part':
if rline[2].lower() in revar.channels:
try:
revar.channels.append(rline[2].lower())
except:
pass
return True
if len(rline) > 3 and rline[1] == '319' and rline[2].lower() == revar.bot_nick.lower():
revar.channels = ' '.join(rline[4:])[1:].replace('+', '').replace('@', '').lower().split()
return True
if len(rline) > 2 and rline[1] == '391':
revar.bot_nick = rline[2]
return True
if not rline[0].find('!') != -1:
return True
if len(rline) > 3 and rline[1] == '315':
return True
return False
def find_imdb_link(chanq, msg):
if msg.lower().find('imdb.com/title/') != -1:
imdb_id = msg.lower()[msg.lower().find('imdb.com/title/')+15:][:9]
g.csend(chanq, commands.imdb_info('id', imdb_id))
def botendtriggerd(chant, usert, msgt):
if not general.check_operator(usert):
outp = 'You do not have permission to use any of these commands.'
else:
msgt = general.check_bottriggers(msgt).split()
outp = commands.operator_commands(chant, msgt)
if outp is not None:
for line in outp.split('\n'):
g.csend(chant, line)
time.sleep(1)
def work_command(chanw, userw, msgw):
msgw = general.check_midsentencecomment(msgw)
msgw, rec, notice, pm = general.checkrec(chanw, userw, msgw)
outp = commands.check_called(chanw, userw, msgw)
if outp is not None:
for line in outp.split('\n'):
g.csend(chanw, line, notice, pm, rec)
time.sleep(1)
def work_line(chanl, userl, msgl):
if chanl in general.countdown and msgl.lower().find('stop') != -1:
general.countdown.remove(chanl)
if chanl.find('#') != -1 and (msgl.lower().find('johan') != -1 or msgl.lower().find('slut') != -1):
for item in general.user_info:
if item['nickserv'].lower() == 'sloth':
general.csend(item['nick'], '{} <{}> {}'.format(chanl, userl, msgl))
general.update_seen(chanl, userl, msgl)
if (" "+msgl).lower().find('deer god') != -1 and time.time() - general.deer_god > 30 and revar.deer_god:
general.deer_god = time.time()
general.csend(chanl, "Deer God http://th07.deviantart.net/fs71/PRE/f/2011/223/3/c/deer_god_by_aubrace-d469jox.jpg")
if __name__ == '__main__':
thread.start_new_thread(automatics.get_ftime, ())
connect(config.server, config.port)
thread.start_new_thread(automatics.autoping, ())
thread.start_new_thread(automatics.autoweather, ())
thread.start_new_thread(automatics.checkpongs, ())
thread.start_new_thread(automatics.who_channel, ())
s = connection.s
readbuffer = ''
while True:
readbuffer = readbuffer + s.recv(2048)
temp = string.split(readbuffer, "\n")
readbuffer = temp.pop()
for rline in temp:
rline = string.rstrip(rline)
rline = string.split(rline)
g = general
if not server_responses(rline) and len(rline) > 3:
msg = ' '.join(rline[3:])[1:]
user = rline[0][1:][:rline[0].find('!')][:-1]
chan = rline[2]
if chan.lower() == revar.bot_nick.lower():
chan = user
if config.verbose:
print g.ftime + ' << ' + ' '.join(rline)
else:
print g.ftime + ' << ' + chan + ' <{}> '.format(user) + msg
if general.check_bottriggers(msg):
thread.start_new_thread(botendtriggerd, (chan, user, msg),)
break
thread.start_new_thread(find_imdb_link, (chan, msg), )
thread.start_new_thread(work_line, (chan, user, msg), )
msg = general.check_midsentencetrigger(msg)
msg = general.check_triggers(msg)
if msg:
thread.start_new_thread(work_command, (chan, user, msg), )
| 29.095785 | 157 | 0.658019 | 1,136 | 7,594 | 4.308979 | 0.203345 | 0.025741 | 0.034321 | 0.027579 | 0.379367 | 0.326456 | 0.301328 | 0.301328 | 0.301328 | 0.28907 | 0 | 0.020632 | 0.170266 | 7,594 | 260 | 158 | 29.207692 | 0.756229 | 0 | 0 | 0.464789 | 0 | 0.00939 | 0.128786 | 0 | 0.004695 | 0 | 0 | 0 | 0 | 0 | null | null | 0.014085 | 0.079812 | null | null | 0.051643 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
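Both `connect` and `server_responses` answer a server PING by echoing the payload back as PONG. That handling can be sketched as a small pure function, in Python 3 for portability (the bot itself is Python 2):

```python
def handle_ping(raw_line):
    # Mirror of the bot's PING handling: reply PONG with the same payload.
    parts = raw_line.rstrip().split()
    if parts and parts[0] == "PING":
        return "PONG %s\r" % parts[1]
    return None  # not a PING; let other handlers look at the line

print(handle_ping("PING :irc.example.net"))  # PONG :irc.example.net (plus \r)
```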
d2c9cfe9e4e2384aabafbe6f290a4052329e6bc7 | 1,493 | py | Python | hth/shows/tests/factories.py | roperi/myband | ec1955626fe6997484fd92ed02127b6899cd7062 | [
"MIT"
] | 1 | 2016-04-12T17:38:26.000Z | 2016-04-12T17:38:26.000Z | hth/shows/tests/factories.py | bhrutledge/jahhills.com | 74fe94a214f1ed5681bd45159315f0b68daf5a33 | [
"MIT"
] | 92 | 2015-04-03T10:04:55.000Z | 2021-07-17T11:13:52.000Z | hth/shows/tests/factories.py | roperi/myband | ec1955626fe6997484fd92ed02127b6899cd7062 | [
"MIT"
] | 1 | 2021-01-26T18:02:49.000Z | 2021-01-26T18:02:49.000Z | from datetime import date
from random import randrange
import factory
import factory.fuzzy
from hth.core.tests.utils import from_today
class VenueFactory(factory.django.DjangoModelFactory):
class Meta:
model = 'shows.Venue'
name = factory.Sequence(lambda n: 'Venue %d' % n)
city = factory.Sequence(lambda n: 'City %d' % n)
website = factory.Sequence(lambda n: 'http://venue-%d.dev' % n)
class GigFactory(factory.django.DjangoModelFactory):
class Meta:
model = 'shows.Gig'
date = factory.fuzzy.FuzzyDate(date(2000, 1, 1))
venue = factory.SubFactory(VenueFactory)
description = factory.fuzzy.FuzzyText(length=100)
details = factory.fuzzy.FuzzyText(length=100)
class PublishedGigFactory(GigFactory):
publish = True
class UpcomingGigFactory(PublishedGigFactory):
# Pick a random date from today through next year
date = factory.LazyAttribute(lambda obj: from_today(days=randrange(365)))
@classmethod
def create_batch(cls, size, **kwargs):
batch = super().create_batch(size, **kwargs)
return sorted(batch, key=lambda x: x.date)
class PastGigFactory(PublishedGigFactory):
# Pick a random date from 10 years ago through yesterday
date = factory.LazyAttribute(lambda obj: from_today(randrange(-3650, 0)))
@classmethod
def create_batch(cls, size, **kwargs):
batch = super().create_batch(size, **kwargs)
return sorted(batch, key=lambda x: x.date, reverse=True)
| 26.192982 | 77 | 0.704622 | 185 | 1,493 | 5.648649 | 0.4 | 0.045933 | 0.060287 | 0.063158 | 0.499522 | 0.442105 | 0.369378 | 0.193301 | 0.193301 | 0.193301 | 0 | 0.018122 | 0.186872 | 1,493 | 56 | 78 | 26.660714 | 0.842669 | 0.068319 | 0 | 0.25 | 0 | 0 | 0.038905 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.15625 | 0 | 0.8125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
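The gig factories above lean on a `from_today` helper imported from `hth.core.tests.utils` but not shown; a minimal stdlib sketch of the date logic they rely on (assuming `from_today(n)` returns today offset by `n` days) is:

```python
from datetime import date, timedelta
from random import randrange

def from_today(days=0):
    # Stand-in for hth.core.tests.utils.from_today: today's date offset by `days`.
    return date.today() + timedelta(days=days)

# UpcomingGigFactory: random date from today through next year, batch sorted ascending
upcoming = sorted(from_today(days=randrange(365)) for _ in range(5))
# PastGigFactory: random date from ~10 years ago through yesterday, batch sorted newest-first
past = sorted((from_today(randrange(-3650, 0)) for _ in range(5)), reverse=True)

assert all(d >= date.today() for d in upcoming)
assert all(d < date.today() for d in past)
```

The sorted batches mirror what the overridden `create_batch` methods return: upcoming gigs ascending, past gigs descending.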
d2d415b3f1a1db25737dd9e6b40de2eb5823d384 | 325 | py | Python | DjangoTry/venv/Lib/site-packages/django_select2/__init__.py | PavelKoksharov/QR-BOOK | 8b05cecd7a3cffcec281f2e17da398ad9e4c5de5 | [
"MIT"
] | null | null | null | DjangoTry/venv/Lib/site-packages/django_select2/__init__.py | PavelKoksharov/QR-BOOK | 8b05cecd7a3cffcec281f2e17da398ad9e4c5de5 | [
"MIT"
] | null | null | null | DjangoTry/venv/Lib/site-packages/django_select2/__init__.py | PavelKoksharov/QR-BOOK | 8b05cecd7a3cffcec281f2e17da398ad9e4c5de5 | [
"MIT"
] | null | null | null | """
This is a Django_ integration of Select2_.
The application includes Select2 driven Django Widgets and Form Fields.
.. _Django: https://www.djangoproject.com/
.. _Select2: https://select2.org/
"""
from django import get_version
if get_version() < '3.2':
default_app_config = "django_select2.apps.Select2AppConfig"
| 23.214286 | 71 | 0.750769 | 43 | 325 | 5.465116 | 0.744186 | 0.085106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02847 | 0.135385 | 325 | 13 | 72 | 25 | 0.807829 | 0.593846 | 0 | 0 | 0 | 0 | 0.317073 | 0.292683 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
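The `get_version() < '3.2'` guard compares version strings lexicographically, which misorders double-digit components (harmless here, since Django never shipped a 3.10, but worth noting); a tuple-based comparison, sketched with the stdlib, avoids the pitfall:

```python
def version_tuple(version):
    # '3.2.1' -> (3, 2, 1); assumes purely numeric dot-separated components
    return tuple(int(part) for part in version.split('.'))

assert '3.10' < '3.2'                                # string comparison misorders this
assert version_tuple('3.10') > version_tuple('3.2')  # tuple comparison gets it right
```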
d2d4cdab7ece6cb0f6e54ac92797ae4e32cdf266 | 673 | py | Python | Sorting/bubble.py | Krylovsentry/Algorithms | 0cd236f04dc065d5247a6f274bb3db503db591b0 | [
"MIT"
] | 1 | 2016-08-21T13:01:42.000Z | 2016-08-21T13:01:42.000Z | Sorting/bubble.py | Krylovsentry/Algorithms | 0cd236f04dc065d5247a6f274bb3db503db591b0 | [
"MIT"
] | null | null | null | Sorting/bubble.py | Krylovsentry/Algorithms | 0cd236f04dc065d5247a6f274bb3db503db591b0 | [
"MIT"
] | null | null | null | # O(n ** 2)
def bubble_sort(slist, asc=True):
    for iteration in range(len(slist))[::-1]:
        need_exchanges = False  # reset each pass so the early exit fires once a pass makes no swaps
for j in range(iteration):
if asc:
if slist[j] > slist[j + 1]:
need_exchanges = True
slist[j], slist[j + 1] = slist[j + 1], slist[j]
else:
if slist[j] < slist[j + 1]:
need_exchanges = True
slist[j], slist[j + 1] = slist[j + 1], slist[j]
if not need_exchanges:
return slist
return slist
print(bubble_sort([8, 1, 13, 34, 5, 2, 21, 3, 1], False))
print(bubble_sort([1, 2, 3, 4, 5, 6]))
| 32.047619 | 67 | 0.473997 | 95 | 673 | 3.284211 | 0.336842 | 0.230769 | 0.134615 | 0.153846 | 0.371795 | 0.371795 | 0.371795 | 0.371795 | 0.371795 | 0.371795 | 0 | 0.062802 | 0.384844 | 673 | 20 | 68 | 33.65 | 0.690821 | 0.013373 | 0 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0 | 0 | 0.176471 | 0.117647 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
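A quick property check of the sorter above, restated standalone (with the per-pass flag reset, so the early exit actually triggers on an already-sorted suffix):

```python
from random import sample

def bubble_sort(slist, asc=True):
    for n in range(len(slist) - 1, 0, -1):
        swapped = False  # reset every pass; a swap-free pass means the list is sorted
        for j in range(n):
            out_of_order = slist[j] > slist[j + 1] if asc else slist[j] < slist[j + 1]
            if out_of_order:
                slist[j], slist[j + 1] = slist[j + 1], slist[j]
                swapped = True
        if not swapped:
            break
    return slist

data = sample(range(100), 20)
assert bubble_sort(list(data)) == sorted(data)
assert bubble_sort(list(data), asc=False) == sorted(data, reverse=True)
```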
d2d69439ae028b8caac841d651293bd86aa4f321 | 639 | py | Python | rest-api/server.py | phenomax/resnet50-miml-rest | 4f78dd2c9454c54d013085eb4d50080d38a833ac | [
"Unlicense"
] | 1 | 2020-08-29T16:51:47.000Z | 2020-08-29T16:51:47.000Z | rest-api/server.py | phenomax/resnet50-miml-rest | 4f78dd2c9454c54d013085eb4d50080d38a833ac | [
"Unlicense"
] | null | null | null | rest-api/server.py | phenomax/resnet50-miml-rest | 4f78dd2c9454c54d013085eb4d50080d38a833ac | [
"Unlicense"
] | null | null | null | import io
import os
from flask import Flask, request, jsonify
from PIL import Image
from resnet_model import MyResnetModel
app = Flask(__name__)
# max filesize 2mb
app.config['MAX_CONTENT_LENGTH'] = 2 * 1024 * 1024
# setup resnet model
model = MyResnetModel(os.path.dirname(os.path.abspath(__file__)))
@app.route("/")
def hello():
return jsonify({"message": "Hello from the API"})
@app.route('/predict', methods=['POST'])
def predict():
if 'image' not in request.files:
return jsonify({"error": "Missing file in request"})
img = request.files['image']
return jsonify({"result": model.predict(img.read())})
| 22.034483 | 65 | 0.694836 | 87 | 639 | 4.977011 | 0.528736 | 0.090069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018622 | 0.159624 | 639 | 28 | 66 | 22.821429 | 0.78771 | 0.054773 | 0 | 0 | 0 | 0 | 0.166389 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.294118 | 0.058824 | 0.588235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d2dfc266c6056fe94eecb550bf60b54a02eaa933 | 470 | py | Python | setup.py | colineRamee/UAM_simulator_scitech2021 | 0583f5ce195cf1ec4f6919d6523fa39851c419fc | [
"MIT"
] | 1 | 2021-02-04T15:57:03.000Z | 2021-02-04T15:57:03.000Z | setup.py | colineRamee/UAM_simulator_scitech2021 | 0583f5ce195cf1ec4f6919d6523fa39851c419fc | [
"MIT"
] | null | null | null | setup.py | colineRamee/UAM_simulator_scitech2021 | 0583f5ce195cf1ec4f6919d6523fa39851c419fc | [
"MIT"
] | 2 | 2021-02-04T04:41:08.000Z | 2022-03-01T16:18:14.000Z | from setuptools import setup
setup(
name='uam_simulator',
version='1.0',
description='A tool to simulate different architectures for UAM traffic management',
author='Coline Ramee',
author_email='coline.ramee@gatech.edu',
packages=['uam_simulator'],
install_requires=['numpy', 'scikit-learn', 'gurobipy']
)
# If installing from source the package name is gurobipy, if installing with conda it's gurobi, but when importing it's still gurobipy
| 36.153846 | 134 | 0.734043 | 63 | 470 | 5.412698 | 0.761905 | 0.070381 | 0.117302 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005076 | 0.161702 | 470 | 12 | 135 | 39.166667 | 0.860406 | 0.280851 | 0 | 0 | 0 | 0 | 0.470238 | 0.068452 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d2e2e8b5aeb34c6ee7b5e4eefd603f0d67226b67 | 419 | py | Python | apps/addresses/migrations/0002_address_picture.py | skyride/python-docker-compose | b3ac1a4da4ae2133b94504447a6cb353cc96f45b | [
"MIT"
] | null | null | null | apps/addresses/migrations/0002_address_picture.py | skyride/python-docker-compose | b3ac1a4da4ae2133b94504447a6cb353cc96f45b | [
"MIT"
] | null | null | null | apps/addresses/migrations/0002_address_picture.py | skyride/python-docker-compose | b3ac1a4da4ae2133b94504447a6cb353cc96f45b | [
"MIT"
] | null | null | null | # Generated by Django 3.0.6 on 2020-05-25 22:13
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('addresses', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='address',
name='picture',
field=models.ImageField(default=None, null=True, upload_to='addresses/images/'),
),
]
| 22.052632 | 92 | 0.606205 | 45 | 419 | 5.577778 | 0.844444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.062092 | 0.26969 | 419 | 18 | 93 | 23.277778 | 0.75817 | 0.107399 | 0 | 0 | 1 | 0 | 0.139785 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d2e3ae6e131a5fa41bdb17b19d893736dfd4f861 | 4,967 | py | Python | vendor/func_lib/assert_handle.py | diudiu/featurefactory | ee02ad9e3ea66e2eeafe6e11859801f0420c7d9e | [
"MIT"
] | null | null | null | vendor/func_lib/assert_handle.py | diudiu/featurefactory | ee02ad9e3ea66e2eeafe6e11859801f0420c7d9e | [
"MIT"
] | null | null | null | vendor/func_lib/assert_handle.py | diudiu/featurefactory | ee02ad9e3ea66e2eeafe6e11859801f0420c7d9e | [
"MIT"
] | null | null | null | # -*- coding:utf-8 -*-
from vendor.errors.feature import FeatureProcessError
"""
All utility functions under this directory
validate incoming arguments against given conditions.
**If an argument is invalid, an exception is raised.**
"""
def f_assert_not_null(seq):
"""检测值是否非空或值得列表是否存在非空元素"""
if seq in (None, '', [], {}, ()):
raise FeatureProcessError("value: %s f_assert_not_null Error" % seq)
if isinstance(seq, list):
for value in seq:
if value in (None, '', {}, [], ()):
raise FeatureProcessError("value: %s f_assert_not_null Error" % seq)
return seq
def f_assert_jsonpath_true(seq):
"""假设jsonpath查询到的为true seq为[]空列表时代表没查到字段"""
if seq in ([],):
raise FeatureProcessError("jsonpath not find field")
return seq
def f_assert_must_int(value_list):
"""检测列表中的元素是否为int类型"""
for value in value_list:
if not isinstance(value, int):
raise FeatureProcessError('%s f_assert_must_int Error' % value_list)
return value_list
def f_assert_must_list(value_list):
"""检测列表中的元素是否为list类型"""
for value in value_list:
if not isinstance(value, list):
raise FeatureProcessError('%s f_assert_must_list Error' % value_list)
return value_list
def f_assert_must_dict(value_list):
"""检测列表中的元素是否为dict类型"""
for value in value_list:
if not isinstance(value, dict):
raise FeatureProcessError('%s f_assert_must_dict Error' % value_list)
return value_list
def f_assert_must_digit(value_list, args=False):
"""
    Check that every element of the list is an integer or a digit string
    :param value_list: list to validate
    :param args: whether negatives pass; False raises on a negative number, True lets it through
    :return: raises, or returns the original value
    example:
        :value_list [-2,'-2', 3]
        :args False
        :return raises
        :value_list [-2,'-2', 3]
        :args True
        :return [-2,'-2', 3]
"""
for value in value_list:
if args:
if not str(value).lstrip('-').isdigit():
raise FeatureProcessError('%s negative number=%s f_assert_must_digit Error' % (value_list, args))
else:
if not str(value).isdigit():
raise FeatureProcessError('%s negative number=%s f_assert_must_digit Error' % (value_list, args))
return value_list
def f_assert_must_basestring(value_list):
"""检测列表中的元素是否为字符串"""
for value in value_list:
if not isinstance(value, basestring):
raise FeatureProcessError('%s f_assert_must_basestring Error' % value_list)
return value_list
def f_assert_must_digit_or_float(value_list, args=False):
"""
    Check that every element of the list is an integer or float; args=False raises on negatives, True lets them through
    :param value_list: list to validate
    :param args: whether negatives pass; False raises on a negative number, True lets it through
    :return: raises, or returns the original value
    example:
        :value_list [-2.0,'-2', 3]
        :args False
        :return raises
        :value_list [-2.0,'-2', 3]
        :args True
        :return [-2.0,'-2', 3]
"""
for value in value_list:
if args:
if not (str(value).count('.') <= 1 and str(value).replace('.', '').lstrip('-').isdigit()):
raise FeatureProcessError(
'%s negative number=%s f_assert_must_digit_or_float Error' % (value_list, args))
else:
if not (str(value).count('.') <= 1 and str(value).replace('.', '').isdigit()):
raise FeatureProcessError(
'%s negative number=%s f_assert_must_digit_or_float Error' % (value_list, args))
return value_list
def f_assert_must_percent(value_list):
"""
    Check that each value is a percentage string
"""
for value in value_list:
if not (str(value)[-1] == '%' and (str(value[:-1]).count('.') <= 1 and str(value[:-1]).replace('.', '').isdigit())):
raise FeatureProcessError(
'%s f_assert_must_percent Error' % value_list)
return value_list
def f_assert_must_between(value_list, args):
"""
    Check that every element of the list is a number (integer or float) within the range given by args
    :param value_list: list to validate
    :param args: two-element [lower, upper] range
    :return: raises, or returns the original value
example:
:value_list [2, 2, 3]
:args [1,3]
:value_list ['-2', '-3', 3]
:args ['-5',3]
"""
assert len(args) == 2
for value in value_list:
if not (str(value).count('.') <= 1 and str(value).replace('.', '').lstrip('-').isdigit()
and float(args[0]) <= float(value) <= float(args[1])):
raise FeatureProcessError('%s f_assert_must_between %s Error' % (value_list, args))
return value_list
def f_assert_seq0_gte_seq1(value_list):
"""检测列表中的第一个元素是否大于等于第二个元素"""
if not value_list[0] >= value_list[1]:
raise FeatureProcessError('%s f_assert_seq0_gte_seq1 Error' % value_list)
return value_list
if __name__ == '__main__':
print f_assert_must_percent(['7.0%'])
| 29.742515 | 124 | 0.571774 | 580 | 4,967 | 4.656897 | 0.155172 | 0.163273 | 0.077379 | 0.044428 | 0.710848 | 0.671973 | 0.545354 | 0.542392 | 0.542392 | 0.423917 | 0 | 0.014472 | 0.304409 | 4,967 | 166 | 125 | 29.921687 | 0.767294 | 0.004027 | 0 | 0.444444 | 0 | 0 | 0.150014 | 0.040991 | 0 | 0 | 0 | 0 | 0.361111 | 0 | null | null | 0 | 0.013889 | null | null | 0.013889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
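A Python 3 sketch of one of the validators above, with a stand-in for `FeatureProcessError`, shows the pass/raise contract (note that, like the original, `lstrip('-')` would also accept a doubled minus sign):

```python
class FeatureProcessError(Exception):
    """Stand-in for vendor.errors.feature.FeatureProcessError."""

def f_assert_must_digit(value_list, allow_negative=False):
    # Every element must look like an integer; optionally allow a leading minus.
    for value in value_list:
        text = str(value).lstrip('-') if allow_negative else str(value)
        if not text.isdigit():
            raise FeatureProcessError('%s f_assert_must_digit Error' % value_list)
    return value_list

assert f_assert_must_digit([1, '2', 3]) == [1, '2', 3]
assert f_assert_must_digit([-2, '-3'], allow_negative=True) == [-2, '-3']
try:
    f_assert_must_digit([-2])          # negatives rejected by default
except FeatureProcessError:
    pass
else:
    raise AssertionError('expected FeatureProcessError')
```

On success the validators return the original value unchanged, which is what lets them be chained in a processing pipeline.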
d2e52be160ba41f3c7d6be5212d1c7221d94eb66 | 3,211 | py | Python | tests/groups/family/test_pseudo_dojo.py | mbercx/aiida-pseudo | 070bdfa37d30674e1f83bf6d14987aa977426d92 | [
"MIT"
] | null | null | null | tests/groups/family/test_pseudo_dojo.py | mbercx/aiida-pseudo | 070bdfa37d30674e1f83bf6d14987aa977426d92 | [
"MIT"
] | 2 | 2021-09-21T11:28:55.000Z | 2021-09-21T12:13:48.000Z | tests/groups/family/test_pseudo_dojo.py | mbercx/aiida-pseudo | 070bdfa37d30674e1f83bf6d14987aa977426d92 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# pylint: disable=unused-argument,pointless-statement
"""Tests for the `PseudoDojoFamily` class."""
import pytest
from aiida_pseudo.data.pseudo import UpfData, Psp8Data, PsmlData, JthXmlData
from aiida_pseudo.groups.family import PseudoDojoConfiguration, PseudoDojoFamily
def test_type_string(clear_db):
"""Verify the `_type_string` class attribute is correctly set to the corresponding entry point name."""
assert PseudoDojoFamily._type_string == 'pseudo.family.pseudo_dojo' # pylint: disable=protected-access
def test_pseudo_types():
"""Test the `PseudoDojoFamily.pseudo_types` method."""
assert PseudoDojoFamily.pseudo_types == (UpfData, PsmlData, Psp8Data, JthXmlData)
def test_default_configuration():
"""Test the `PseudoDojoFamily.default_configuration` class attribute."""
assert isinstance(PseudoDojoFamily.default_configuration, PseudoDojoConfiguration)
def test_valid_configurations():
"""Test the `PseudoDojoFamily.valid_configurations` class attribute."""
valid_configurations = PseudoDojoFamily.valid_configurations
assert isinstance(valid_configurations, tuple)
for entry in valid_configurations:
assert isinstance(entry, PseudoDojoConfiguration)
def test_get_valid_labels():
"""Test the `PseudoDojoFamily.get_valid_labels` class method."""
valid_labels = PseudoDojoFamily.get_valid_labels()
assert isinstance(valid_labels, tuple)
for entry in valid_labels:
assert isinstance(entry, str)
def test_format_configuration_label():
"""Test the `PseudoDojoFamily.format_configuration_label` class method."""
configuration = PseudoDojoConfiguration('0.4', 'PBE', 'SR', 'standard', 'psp8')
assert PseudoDojoFamily.format_configuration_label(configuration) == 'PseudoDojo/0.4/PBE/SR/standard/psp8'
def test_constructor():
"""Test that the `PseudoDojoFamily` constructor validates the label."""
with pytest.raises(ValueError, match=r'the label `.*` is not a valid PseudoDojo configuration label'):
PseudoDojoFamily()
with pytest.raises(ValueError, match=r'the label `.*` is not a valid PseudoDojo configuration label'):
PseudoDojoFamily(label='nc-sr-04_pbe_standard_psp8')
label = PseudoDojoFamily.format_configuration_label(PseudoDojoFamily.default_configuration)
family = PseudoDojoFamily(label=label)
assert isinstance(family, PseudoDojoFamily)
@pytest.mark.usefixtures('clear_db')
def test_create_from_folder(filepath_pseudos):
"""Test the `PseudoDojoFamily.create_from_folder` class method."""
family = PseudoDojoFamily.create_from_folder(
filepath_pseudos('upf'), 'PseudoDojo/0.4/PBE/SR/standard/psp8', pseudo_type=UpfData
)
assert isinstance(family, PseudoDojoFamily)
@pytest.mark.usefixtures('clear_db')
def test_create_from_folder_duplicate(filepath_pseudos):
"""Test that `PseudoDojoFamily.create_from_folder` raises for duplicate label."""
label = 'PseudoDojo/0.4/PBE/SR/standard/psp8'
PseudoDojoFamily(label=label).store()
with pytest.raises(ValueError, match=r'the PseudoDojoFamily `.*` already exists'):
PseudoDojoFamily.create_from_folder(filepath_pseudos('upf'), label)
| 40.64557 | 110 | 0.766116 | 360 | 3,211 | 6.636111 | 0.252778 | 0.071578 | 0.057765 | 0.01172 | 0.277941 | 0.254918 | 0.246965 | 0.154039 | 0.154039 | 0.154039 | 0 | 0.006435 | 0.128932 | 3,211 | 78 | 111 | 41.166667 | 0.847694 | 0.236064 | 0 | 0.142857 | 0 | 0 | 0.149167 | 0.065 | 0 | 0 | 0 | 0 | 0.238095 | 1 | 0.214286 | false | 0 | 0.071429 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
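The `pytest.raises(..., match=...)` pattern used throughout these tests can be sketched with the stdlib alone (`assert_raises_match` is a hypothetical helper, not part of the test suite):

```python
import re

def assert_raises_match(exc_type, pattern, fn):
    # Minimal stand-in for pytest.raises(exc_type, match=pattern):
    # fn() must raise exc_type, and the message must match the regex.
    try:
        fn()
    except exc_type as exc:
        assert re.search(pattern, str(exc)), 'message %r does not match %r' % (str(exc), pattern)
    else:
        raise AssertionError('%s was not raised' % exc_type.__name__)

def bad_label():
    raise ValueError('the label `foo` is not a valid PseudoDojo configuration label')

assert_raises_match(ValueError, r'the label `.*` is not a valid', bad_label)
```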
d2e7cc251d72d1b4b8afa5565221124b4f826ce6 | 457 | py | Python | was/lib/tuning/actions/ThreadPool.py | rocksun/ucmd | 486de31324195f48c4110e327d635aaafe3d74d6 | [
"Apache-2.0"
] | 2 | 2019-10-09T06:59:47.000Z | 2019-10-10T03:20:17.000Z | was/lib/tuning/actions/ThreadPool.py | rocksun/ucmd | 486de31324195f48c4110e327d635aaafe3d74d6 | [
"Apache-2.0"
] | null | null | null | was/lib/tuning/actions/ThreadPool.py | rocksun/ucmd | 486de31324195f48c4110e327d635aaafe3d74d6 | [
"Apache-2.0"
] | 1 | 2021-11-25T06:41:17.000Z | 2021-11-25T06:41:17.000Z | import os
min=512
max=512
def app_server_tuning(server_confid):
server_name=AdminConfig.showAttribute(server_confid, "name")
threadpool_list=AdminConfig.list('ThreadPool',server_confid).split("\n")
for tp in threadpool_list:
if tp.count('WebContainer')==1:
print "Modify Server '%s' WebContainer Pool Min=%d, Max=%d"% (server_name, min, max)
AdminConfig.modify(tp,[["minimumSize" ,min],["maximumSize" ,max]])
| 30.466667 | 96 | 0.68709 | 59 | 457 | 5.169492 | 0.525424 | 0.118033 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018421 | 0.16849 | 457 | 14 | 97 | 32.642857 | 0.784211 | 0 | 0 | 0 | 0 | 0 | 0.221978 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.1 | null | null | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d2f17cb8a3f0726fbc17e46d02f025d7c4a03f17 | 4,322 | py | Python | usaspending_api/awards/migrations/0074_auto_20170320_1607.py | toolness/usaspending-api | ed9a396e20a52749f01f43494763903cc371f9c2 | [
"CC0-1.0"
] | 1 | 2021-06-17T05:09:00.000Z | 2021-06-17T05:09:00.000Z | usaspending_api/awards/migrations/0074_auto_20170320_1607.py | toolness/usaspending-api | ed9a396e20a52749f01f43494763903cc371f9c2 | [
"CC0-1.0"
] | null | null | null | usaspending_api/awards/migrations/0074_auto_20170320_1607.py | toolness/usaspending-api | ed9a396e20a52749f01f43494763903cc371f9c2 | [
"CC0-1.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.10.1 on 2017-03-20 16:07
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('awards', '0073_auto_20170320_1455'),
]
operations = [
migrations.AlterField(
model_name='award',
name='fain',
field=models.CharField(blank=True, db_index=True, help_text='An identification code assigned to each financial assistance award tracking purposes. The FAIN is tied to that award (and all future modifications to that award) throughout the award’s life. Each FAIN is assigned by an agency. Within an agency, FAIN are unique: each new award must be issued a new FAIN. FAIN stands for Federal Award Identification Number, though the digits are letters, not numbers.', max_length=30, null=True),
),
migrations.AlterField(
model_name='award',
name='period_of_performance_current_end_date',
field=models.DateField(db_index=True, help_text='The current, not original, period of performance end date', null=True, verbose_name='End Date'),
),
migrations.AlterField(
model_name='award',
name='period_of_performance_start_date',
field=models.DateField(db_index=True, help_text='The start date for the period of performance', null=True, verbose_name='Start Date'),
),
migrations.AlterField(
model_name='award',
name='piid',
field=models.CharField(blank=True, db_index=True, help_text='Procurement Instrument Identifier - A unique identifier assigned to a federal contract, purchase order, basic ordering agreement, basic agreement, and blanket purchase agreement. It is used to track the contract, and any modifications or transactions related to it. After October 2017, it is between 13 and 17 digits, both letters and numbers.', max_length=50, null=True),
),
migrations.AlterField(
model_name='award',
name='potential_total_value_of_award',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, help_text='The sum of the potential_value_of_award from associated transactions', max_digits=20, null=True, verbose_name='Potential Total Value of Award'),
),
migrations.AlterField(
model_name='award',
name='total_obligation',
field=models.DecimalField(db_index=True, decimal_places=2, help_text='The amount of money the government is obligated to pay for the award', max_digits=15, null=True, verbose_name='Total Obligated'),
),
migrations.AlterField(
model_name='award',
name='total_outlay',
field=models.DecimalField(db_index=True, decimal_places=2, help_text='The total amount of money paid out for this award', max_digits=15, null=True),
),
migrations.AlterField(
model_name='award',
name='type',
field=models.CharField(choices=[('U', 'Unknown Type'), ('02', 'Block Grant'), ('03', 'Formula Grant'), ('04', 'Project Grant'), ('05', 'Cooperative Agreement'), ('06', 'Direct Payment for Specified Use'), ('07', 'Direct Loan'), ('08', 'Guaranteed/Insured Loan'), ('09', 'Insurance'), ('10', 'Direct Payment unrestricted'), ('11', 'Other'), ('A', 'BPA Call'), ('B', 'Purchase Order'), ('C', 'Delivery Order'), ('D', 'Definitive Contract')], db_index=True, default='U', help_text='\tThe mechanism used to distribute funding. The federal government can distribute funding in several forms. These award types include contracts, grants, loans, and direct payments.', max_length=5, null=True, verbose_name='Award Type'),
),
migrations.AlterField(
model_name='award',
name='uri',
field=models.CharField(blank=True, db_index=True, help_text='The uri of the award', max_length=70, null=True),
),
migrations.AlterField(
model_name='transaction',
name='federal_action_obligation',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, help_text='The obligation of the federal government for this transaction', max_digits=20, null=True),
),
]
| 65.484848 | 726 | 0.672837 | 551 | 4,322 | 5.141561 | 0.352087 | 0.070597 | 0.088246 | 0.102365 | 0.383339 | 0.369926 | 0.295094 | 0.2485 | 0.213202 | 0.172962 | 0 | 0.02355 | 0.214021 | 4,322 | 65 | 727 | 66.492308 | 0.810421 | 0.015733 | 0 | 0.5 | 1 | 0.051724 | 0.440837 | 0.040461 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.034483 | 0 | 0.086207 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d2f56951f340d9aa264e8c54df9fedc28d30df30 | 1,832 | py | Python | src/nucleotide/component/linux/gcc/atom/rtl.py | dmilos/nucleotide | aad5d60508c9e4baf4888069284f2cb5c9fd7c55 | [
"Apache-2.0"
] | 1 | 2020-09-04T13:00:04.000Z | 2020-09-04T13:00:04.000Z | src/nucleotide/component/linux/gcc/atom/rtl.py | dmilos/nucleotide | aad5d60508c9e4baf4888069284f2cb5c9fd7c55 | [
"Apache-2.0"
] | 1 | 2020-04-10T01:52:32.000Z | 2020-04-10T09:11:29.000Z | src/nucleotide/component/linux/gcc/atom/rtl.py | dmilos/nucleotide | aad5d60508c9e4baf4888069284f2cb5c9fd7c55 | [
"Apache-2.0"
] | null | null | null |
#!/usr/bin/env python2
# Copyright 2015 Dejan D. M. Milosavljevic
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import platform
import nucleotide
import nucleotide.component
import nucleotide.component.function
def _linux_RTL_LINKFLAGS( P_data ):
I_flag = ''
#if( 'dynamic' == P_data['type'] ):
# I_flag += 'D'
if( 'static' == P_data['type'] ):
I_flag += '-static'
return [ I_flag ]
atom_linux_RTL = {
'platform' : {
'host' : 'Linux',
'guest' : 'Linux'
},
'cc' : {
'vendor': 'FSF',
'name' : 'gcc',
'version': 'X'
},
'config' : {
'LINKFLAGS' : _linux_RTL_LINKFLAGS
},
'name' :'RTL',
'class': [ 'RTL', 'linux:RTL' ]
}
class RTL:
def __init__(self):
pass
@staticmethod
def extend( P_option ):
nucleotide.component.function.extend( P_option, 'A:linux:RTL', atom_linux_RTL )
atom_linux_RTL['platform']['host'] = 'X';
nucleotide.component.function.extend( P_option, 'x:linux:RTL', atom_linux_RTL )
atom_linux_RTL['platform']['guest'] = 'X';
nucleotide.component.function.extend( P_option, 'y:linux:RTL', atom_linux_RTL )
@staticmethod
def check():
pass
| 27.343284 | 104 | 0.60917 | 225 | 1,832 | 4.813333 | 0.475556 | 0.088643 | 0.066482 | 0.078486 | 0.256694 | 0.186519 | 0.149584 | 0.073869 | 0.073869 | 0 | 0 | 0.006706 | 0.267467 | 1,832 | 66 | 105 | 27.757576 | 0.800298 | 0.357533 | 0 | 0.102564 | 0 | 0 | 0.14569 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102564 | false | 0.051282 | 0.128205 | 0 | 0.282051 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
d2f65b3512d928c10cc32ae1efdfb3cff693d569 | 876 | py | Python | python/moderation_text_token_demo.py | huaweicloud/huaweicloud-sdk-moderation | fa7cfda017a71ec8abf3afc57a0e476dd7508167 | [
"Apache-2.0"
] | 8 | 2019-06-04T06:24:54.000Z | 2022-01-29T13:16:53.000Z | python/moderation_text_token_demo.py | huaweicloud/huaweicloud-sdk-moderation | fa7cfda017a71ec8abf3afc57a0e476dd7508167 | [
"Apache-2.0"
] | 4 | 2021-12-14T21:21:03.000Z | 2022-01-04T16:34:33.000Z | python/moderation_text_token_demo.py | huaweicloud/huaweicloud-sdk-moderation | fa7cfda017a71ec8abf3afc57a0e476dd7508167 | [
"Apache-2.0"
] | 8 | 2019-08-12T02:18:03.000Z | 2021-11-30T10:39:23.000Z | # -*- coding:utf-8 -*-
from moderation_sdk.gettoken import get_token
from moderation_sdk.moderation_text import moderation_text
from moderation_sdk.utils import init_global_env
if __name__ == '__main__':
# Services currently support North China-Beijing(cn-north-4),China East-Shanghai1(cn-east-3), CN-Hong Kong(ap-southeast-1),AP-Singapore(ap-southeast-3)
init_global_env('cn-north-4')
#
# access moderation text enhance,posy data by token
#
user_name = '******'
password = '******'
    account_name = '******'  # usually the same as user_name
token = get_token(user_name, password, account_name)
# call interface use the text
result = moderation_text(token, '666666luo聊请+110亚砷酸钾六位qq,fuck666666666666666', 'content',
['ad', 'politics', 'porn', 'abuse', 'contraband', 'flood'])
print(result)
| 38.086957 | 155 | 0.680365 | 111 | 876 | 5.144144 | 0.567568 | 0.098074 | 0.089317 | 0.073555 | 0.112084 | 0.112084 | 0 | 0 | 0 | 0 | 0 | 0.043417 | 0.184932 | 876 | 22 | 156 | 39.818182 | 0.756303 | 0.326484 | 0 | 0 | 0 | 0 | 0.206186 | 0.073883 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.166667 | 0.25 | 0 | 0.25 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
d2fb7f6e9f85db6c80048daaef30c307b92d98da | 2,145 | py | Python | community_codebook/eda.py | etstieber/ledatascifi-2022 | 67bc56a60ec498c62ceba03e0b6b9ae8f3fc7fd9 | [
"MIT"
] | null | null | null | community_codebook/eda.py | etstieber/ledatascifi-2022 | 67bc56a60ec498c62ceba03e0b6b9ae8f3fc7fd9 | [
"MIT"
] | 3 | 2022-01-30T18:34:22.000Z | 2022-02-10T15:48:48.000Z | community_codebook/eda.py | etstieber/ledatascifi-2022 | 67bc56a60ec498c62ceba03e0b6b9ae8f3fc7fd9 | [
"MIT"
] | 14 | 2022-01-26T10:45:19.000Z | 2022-03-28T15:59:56.000Z | ###############################################################
#
# This function is... INSUFFICIENT. It was developed as an
# illustration of EDA lessons in the 2021 class. It's quick and
# works well.
#
# Want a higher grade version of me? Then try pandas-profiling:
# https://github.com/pandas-profiling/pandas-profiling
#
###############################################################
def insufficient_but_starting_eda(df,cat_vars_list=None):
'''
Parameters
----------
df : DATAFRAME
cat_vars_list : LIST, optional
A list of strings containing variable names in the dataframe
for variables where you want to see the number of unique values
and the 10 most common values. Likely used for categorical values.
Returns
-------
None. It simply prints.
Description
-------
This function will print a MINIMUM amount of info about a new dataframe.
You should ****look**** at all this output below and consider the data
exploration and cleaning questions from
https://ledatascifi.github.io/ledatascifi-2021/content/03/02e_eda_golden.html#member
Also LOOK at more of the data manually.
Then write up anything notable you observe.
TIP: put this function in your codebook to reuse easily.
PROTIP: Improve this function (better outputs, better formatting).
FEATURE REQUEST: optionally print the nunique and top 10 values under the describe matrix
FEATURE REQUEST: optionally print more stats (percentiles)
'''
print(df.head(), '\n---')
print(df.tail(), '\n---')
print(df.columns, '\n---')
print("The shape is: ",df.shape, '\n---')
print("Info:",df.info(), '\n---') # memory usage, name, dtype, and # of non-null obs (--> # of missing obs) per variable
print(df.describe(), '\n---') # summary stats, and you can customize the list!
    if cat_vars_list is not None:
for var in cat_vars_list:
print(var,"has",df[var].nunique(),"values and its top 10 most common are:")
print(df[var].value_counts().head(10), '\n---')
| 35.75 | 124 | 0.607459 | 278 | 2,145 | 4.636691 | 0.535971 | 0.027153 | 0.034135 | 0.023274 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012063 | 0.22704 | 2,145 | 59 | 125 | 36.355932 | 0.76538 | 0.632634 | 0 | 0 | 0 | 0 | 0.18664 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0 | 0 | 0.090909 | 0.727273 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
960075d5d481ca0949f159a6dd4c4e2e599c3197 | 391 | py | Python | src/posts/migrations/0007_recipe_preface.py | eduardkh/matkonim2 | d836b16403d7fce0db88dd39dac2ba24575e6fca | [
"MIT"
] | null | null | null | src/posts/migrations/0007_recipe_preface.py | eduardkh/matkonim2 | d836b16403d7fce0db88dd39dac2ba24575e6fca | [
"MIT"
] | null | null | null | src/posts/migrations/0007_recipe_preface.py | eduardkh/matkonim2 | d836b16403d7fce0db88dd39dac2ba24575e6fca | [
"MIT"
] | null | null | null | # Generated by Django 3.2.7 on 2021-09-15 15:40
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('posts', '0006_auto_20210914_0910'),
]
operations = [
migrations.AddField(
model_name='recipe',
name='preface',
field=models.TextField(blank=True, null=True),
),
]
| 20.578947 | 58 | 0.595908 | 43 | 391 | 5.325581 | 0.813953 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.286445 | 391 | 18 | 59 | 21.722222 | 0.709677 | 0.11509 | 0 | 0 | 1 | 0 | 0.119186 | 0.06686 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# glacier/glacierexception.py (JeffAlyanak/amazon-glacier-cmd-interface, MIT)
import traceback
import re
import sys
import logging
"""
**********
Note by wvmarle:
This file contains the complete code from chained_exception.py plus the
error handling code from GlacierWrapper.py, allowing it to be used in other
modules like glaciercorecalls as well.
**********
"""
class GlacierException(Exception):
    """
    An extension of the built-in Exception class, this handles
    an additional cause keyword argument, adding it as a cause
    attribute to the exception message.

    It logs the error message (the amount of information depends on the log
    level) and passes it on to a higher level to handle.
    Furthermore it allows the upstream handler to ask for a
    complete stack trace or just a simple error and cause message.

    TODO: describe usage.
    """
    ERRORCODE = {'InternalError': 127,        # Library internal error.
                 'UndefinedErrorCode': 126,   # Undefined code.
                 'NoResults': 125,            # Operation yielded no results.
                 'GlacierConnectionError': 1,   # Can not connect to Glacier.
                 'SdbConnectionError': 2,       # Can not connect to SimpleDB.
                 'CommandError': 3,             # Command line is invalid.
                 'VaultNameError': 4,           # Invalid vault name.
                 'DescriptionError': 5,         # Invalid archive description.
                 'IdError': 6,                  # Invalid upload/archive/job ID given.
                 'RegionError': 7,              # Invalid region given.
                 'FileError': 8,                # Error related to reading/writing a file.
                 'ResumeError': 9,              # Problem resuming a multipart upload.
                 'NotReady': 10,                # Requested download is not ready yet.
                 'BookkeepingError': 11,        # Bookkeeping not available.
                 'SdbCommunicationError': 12,   # Problem reading/writing SimpleDB data.
                 'ResourceNotFoundException': 13,      # Glacier can not find the requested resource.
                 'InvalidParameterValueException': 14,  # Parameter not accepted.
                 'DownloadError': 15,           # Downloading an archive failed.
                 # NB: the three SNS codes below collide with
                 # 'UndefinedErrorCode' (126) and 'InternalError' (127) above.
                 'SNSConnectionError': 126,     # Can not connect to SNS.
                 'SNSConfigurationError': 127,  # Problem with configuration file.
                 'SNSParameterError': 128,      # Problem with arguments passed to SNS.
                 }
    def __init__(self, message, code=None, cause=None):
        """
        Constructor. Logs the error.

        :param message: the error message.
        :type message: str
        :param code: the error code.
        :type code: str
        :param cause: explanation on what caused the error.
        :type cause: str
        """
        self.logger = logging.getLogger(self.__class__.__name__)
        self.exitcode = self.ERRORCODE[code] if code in self.ERRORCODE else 254
        self.code = code
        if cause:
            self.logger.error('ERROR: %s' % cause)
            self.cause = cause if isinstance(cause, tuple) else (cause,)
            self.stack = traceback.format_stack()[:-2]
        else:
            self.logger.error('An error occurred, exiting.')
            self.cause = ()

            # Just wrap up a cause-less exception.
            # Get the stack trace for this exception.
            self.stack = (
                traceback.format_stack()[:-2] +
                traceback.format_tb(sys.exc_info()[2]))
            # ^^^ let's hope the information is still there; caller must take
            # care of this.

        self.message = message
        self.logger.info(self.fetch(message=True))
        self.logger.debug(self.fetch(stack=True))
        if self.exitcode == 254:
            self.logger.debug('Unknown error code: %s.' % code)
    # Works as a generator to help get the stack trace and the cause
    # written out.
    def causeTree(self, indentation=' ', alreadyMentionedTree=[], stack=False, message=False):
        """
        Returns a complete stack tree, an error message, or both.
        Returns a warning if neither stack nor message is True.
        """
        if stack:
            yield "Traceback (most recent call last):\n"
            ellipsed = 0
            for i, line in enumerate(self.stack):
                if (ellipsed is not False
                        and i < len(alreadyMentionedTree)
                        and line == alreadyMentionedTree[i]):
                    ellipsed += 1
                else:
                    if ellipsed:
                        yield " ... (%d frame%s repeated)\n" % (
                            ellipsed,
                            "" if ellipsed == 1 else "s")
                        ellipsed = False  # marker for "given out"
                    yield line

        if message:
            exc = self if self.message is None else self.message
            for line in traceback.format_exception_only(exc.__class__, exc):
                yield line

            if self.cause:
                yield ("Caused by: %d exception%s\n" %
                       (len(self.cause), "" if len(self.cause) == 1 else "s"))
                for causePart in self.cause:
                    if hasattr(causePart, "causeTree"):
                        for line in causePart.causeTree(indentation, self.stack):
                            yield re.sub(r'([^\n]*\n)', indentation + r'\1', line)
                    else:
                        for line in traceback.format_exception_only(causePart.__class__, causePart):
                            yield re.sub(r'([^\n]*\n)', indentation + r'\1', line)

        if not message and not stack:
            yield ('No output. Specify message=True and/or stack=True '
                   'to get output when calling this function.\n')
    def write(self, stream=None, indentation=' ', message=False, stack=False):
        """
        Writes the error details to sys.stderr or a stream.
        """
        stream = sys.stderr if stream is None else stream
        for line in self.causeTree(indentation, message=message, stack=stack):
            stream.write(line)

    def fetch(self, indentation=' ', message=False, stack=False):
        """
        Fetches the error details and returns them as a string.
        """
        out = ''
        for line in self.causeTree(indentation, message=message, stack=stack):
            out += line
        return out
class InputException(GlacierException):
    """
    Exception that is raised when there is something wrong with the
    user input.
    """
    VaultNameError = 1
    VaultDescriptionError = 2

    def __init__(self, message, code=None, cause=None):
        """ Handles the exception.

        :param message: the error message.
        :type message: str
        :param code: the error code.
        :type code: str
        :param cause: explanation on what caused the error.
        :type cause: str
        """
        GlacierException.__init__(self, message, code=code, cause=cause)


class ConnectionException(GlacierException):
    """
    Exception that is raised when there is something wrong with
    the connection.
    """
    GlacierConnectionError = 1
    SdbConnectionError = 2

    def __init__(self, message, code=None, cause=None):
        """ Handles the exception.

        :param message: the error message.
        :type message: str
        :param code: the error code.
        :type code: str
        :param cause: explanation on what caused the error.
        :type cause: str
        """
        GlacierException.__init__(self, message, code=code, cause=cause)


class CommunicationException(GlacierException):
    """
    Exception that is raised when there is something wrong in
    the communication with an external library like boto.
    """
    def __init__(self, message, code=None, cause=None):
        """ Handles the exception.

        :param message: the error message.
        :type message: str
        :param code: the error code.
        :type code: str
        :param cause: explanation on what caused the error.
        :type cause: str
        """
        GlacierException.__init__(self, message, code=code, cause=cause)


class ResponseException(GlacierException):
    """
    Exception that is raised when there is an HTTP response error.
    """
    def __init__(self, message, code=None, cause=None):
        GlacierException.__init__(self, message, code=code, cause=cause)
if __name__ == '__main__':
    class ChildrenException(Exception):
        def __init__(self, message):
            Exception.__init__(self, message)

    class ParentException(GlacierException):
        def __init__(self, message, cause=None):
            if cause:
                GlacierException.__init__(self, message, cause=cause)
            else:
                GlacierException.__init__(self, message)

    try:
        try:
            raise ChildrenException("parent")
        except ChildrenException as e:  # was Python 2 "except X, e" syntax
            raise ParentException("children", cause=e)
    except ParentException as e:
        e.write(indentation='|| ')
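The class docstring above leaves usage as a TODO. The following is a minimal, self-contained sketch of the cause-chaining pattern the class implements; `ChainedError` and `wrap_failure` are hypothetical stand-in names (the real `GlacierException` additionally handles logging, exit codes, and stack formatting):

```python
# Stand-in mirroring the cause-tuple idea of GlacierException above.
class ChainedError(Exception):
    def __init__(self, message, cause=None):
        Exception.__init__(self, message)
        # Normalize the cause into a tuple, as GlacierException does.
        self.cause = cause if isinstance(cause, tuple) else (
            (cause,) if cause is not None else ())

def wrap_failure():
    try:
        raise ValueError("low-level failure")
    except ValueError as e:
        # Wrap the low-level error, keeping it attached as the cause.
        return ChainedError("high-level operation failed", cause=e)

err = wrap_failure()
print(err, "| caused by:", [repr(c) for c in err.cause])
```

An upstream handler can then walk `err.cause` recursively, which is exactly what `causeTree` does for indented "Caused by:" output.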
# flags.py (oaxiom/glbase3, MIT)
"""
flags.py
. should be renamed helpers...
. This file is scheduled for deletion
"""
"""
valid accessory tags:
"any_tag": {"code": "code_insert_as_string"} # execute arbitrary code to construct this key.
"dialect": csv.excel_tab # dialect of the file, default = csv, set this to use tsv. or sniffer
"skip_lines": number # number of lines to skip at the head of the file.
"skiptill": skip until I see the first instance of <str>
"""
# lists of format-specifiers.
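The docstring above lists the valid accessory tags. A hypothetical format-specifier dict built from those tags might look like this; the key names come from the docstring, while every value here is illustrative and not a real glbase format:

```python
import csv

# Hypothetical format-specifier using the accessory tags documented above.
example_format = {
    "dialect": csv.excel_tab,        # read the file as tab-separated (tsv)
    "skip_lines": 2,                 # skip two lines at the head of the file
    "skiptill": "chr",               # skip until the first line containing "chr"
    "name": {"code": "column[0]"},   # arbitrary code executed to build this key
}
print(sorted(example_format))
```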
# packages/jet_bridge/jet_bridge/app.py (goncalomi/jet-bridge, MIT)
import os
import tornado.ioloop
import tornado.web
from jet_bridge.handlers.temporary_redirect import TemporaryRedirectHandler
from jet_bridge_base import settings as base_settings
from jet_bridge_base.views.api import ApiView
from jet_bridge_base.views.image_resize import ImageResizeView
from jet_bridge_base.views.file_upload import FileUploadView
from jet_bridge_base.views.message import MessageView
from jet_bridge_base.views.model import ModelViewSet
from jet_bridge_base.views.model_description import ModelDescriptionView
from jet_bridge_base.views.register import RegisterView
from jet_bridge_base.views.reload import ReloadView
from jet_bridge_base.views.sql import SqlView
from jet_bridge import settings, media
from jet_bridge.handlers.view import view_handler
from jet_bridge.handlers.not_found import NotFoundHandler
from jet_bridge.router import Router

def make_app():
    router = Router()
    router.register('/api/models/(?P<model>[^/]+)/', view_handler(ModelViewSet))

    urls = [
        (r'/', TemporaryRedirectHandler, {'url': "/api/"}),
        (r'/register/', view_handler(RegisterView)),
        (r'/api/', view_handler(ApiView)),
        (r'/api/register/', view_handler(RegisterView)),
        (r'/api/model_descriptions/', view_handler(ModelDescriptionView)),
        (r'/api/sql/', view_handler(SqlView)),
        (r'/api/messages/', view_handler(MessageView)),
        (r'/api/file_upload/', view_handler(FileUploadView)),
        (r'/api/image_resize/', view_handler(ImageResizeView)),
        (r'/api/reload/', view_handler(ReloadView)),
        (r'/media/(.*)', tornado.web.StaticFileHandler, {'path': settings.MEDIA_ROOT}),
    ]

    urls += router.urls

    if settings.MEDIA_STORAGE == media.MEDIA_STORAGE_FILE:
        urls.append((r'/media/(.*)', tornado.web.StaticFileHandler, {'path': settings.MEDIA_ROOT}))

    return tornado.web.Application(
        handlers=urls,
        debug=settings.DEBUG,
        default_handler_class=NotFoundHandler,
        template_path=os.path.join(base_settings.BASE_DIR, 'templates'),
        autoreload=settings.DEBUG
    )
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
mwtab.mwschema
~~~~~~~~~~~~~~
This module provides schema definitions for different sections of the
``mwTab`` Metabolomics Workbench format.
"""
import sys

from schema import Schema, Optional, Or

if sys.version_info.major == 2:
    str = unicode

metabolomics_workbench_schema = Schema(
    {
        "VERSION": str,
        "CREATED_ON": str,
        Optional("STUDY_ID"): str,
        Optional("ANALYSIS_ID"): str,
        Optional("PROJECT_ID"): str,
        Optional("HEADER"): str,
        Optional("DATATRACK_ID"): str
    }
)

project_schema = Schema(
    {
        "PROJECT_TITLE": str,
        Optional("PROJECT_TYPE"): str,
        "PROJECT_SUMMARY": str,
        "INSTITUTE": str,
        Optional("DEPARTMENT"): str,
        Optional("LABORATORY"): str,
        "LAST_NAME": str,
        "FIRST_NAME": str,
        "ADDRESS": str,
        "EMAIL": str,
        "PHONE": str,
        Optional("FUNDING_SOURCE"): str,
        Optional("PROJECT_COMMENTS"): str,
        Optional("PUBLICATIONS"): str,
        Optional("CONTRIBUTORS"): str,
        Optional("DOI"): str
    }
)

study_schema = Schema(
    {
        "STUDY_TITLE": str,
        Optional("STUDY_TYPE"): str,
        "STUDY_SUMMARY": str,
        "INSTITUTE": str,
        Optional("DEPARTMENT"): str,
        Optional("LABORATORY"): str,
        "LAST_NAME": str,
        "FIRST_NAME": str,
        "ADDRESS": str,
        "EMAIL": str,
        "PHONE": str,
        Optional("NUM_GROUPS"): str,
        Optional("TOTAL_SUBJECTS"): str,
        Optional("NUM_MALES"): str,
        Optional("NUM_FEMALES"): str,
        Optional("STUDY_COMMENTS"): str,
        Optional("PUBLICATIONS"): str,  # assumed
        Optional("SUBMIT_DATE"): str    # assumed
    }
)

subject_schema = Schema(
    {
        "SUBJECT_TYPE": str,
        "SUBJECT_SPECIES": str,
        Optional("TAXONOMY_ID"): str,
        Optional("GENOTYPE_STRAIN"): str,
        Optional("AGE_OR_AGE_RANGE"): str,
        Optional("WEIGHT_OR_WEIGHT_RANGE"): str,
        Optional("HEIGHT_OR_HEIGHT_RANGE"): str,
        Optional("GENDER"): str,
        Optional("HUMAN_RACE"): str,
        Optional("HUMAN_ETHNICITY"): str,
        Optional("HUMAN_TRIAL_TYPE"): str,
        Optional("HUMAN_LIFESTYLE_FACTORS"): str,
        Optional("HUMAN_MEDICATIONS"): str,
        Optional("HUMAN_PRESCRIPTION_OTC"): str,
        Optional("HUMAN_SMOKING_STATUS"): str,
        Optional("HUMAN_ALCOHOL_DRUG_USE"): str,
        Optional("HUMAN_NUTRITION"): str,
        Optional("HUMAN_INCLUSION_CRITERIA"): str,
        Optional("HUMAN_EXCLUSION_CRITERIA"): str,
        Optional("ANIMAL_ANIMAL_SUPPLIER"): str,
        Optional("ANIMAL_HOUSING"): str,
        Optional("ANIMAL_LIGHT_CYCLE"): str,
        Optional("ANIMAL_FEED"): str,
        Optional("ANIMAL_WATER"): str,
        Optional("ANIMAL_INCLUSION_CRITERIA"): str,
        Optional("CELL_BIOSOURCE_OR_SUPPLIER"): str,
        Optional("CELL_STRAIN_DETAILS"): str,
        Optional("SUBJECT_COMMENTS"): str,
        Optional("CELL_PRIMARY_IMMORTALIZED"): str,
        Optional("CELL_PASSAGE_NUMBER"): str,
        Optional("CELL_COUNTS"): str,
        Optional("SPECIES_GROUP"): str
    }
)

subject_sample_factors_schema = Schema(
    [
        {
            "Subject ID": str,
            "Sample ID": str,
            "Factors": dict,
            Optional("Additional sample data"): {
                Optional("RAW_FILE_NAME"): str,
                Optional(str): str
            }
        }
    ]
)

collection_schema = Schema(
    {
        "COLLECTION_SUMMARY": str,
        Optional("COLLECTION_PROTOCOL_ID"): str,
        Optional("COLLECTION_PROTOCOL_FILENAME"): str,
        Optional("COLLECTION_PROTOCOL_COMMENTS"): str,
        Optional("SAMPLE_TYPE"): str,  # assumed optional due to large number of files without
        Optional("COLLECTION_METHOD"): str,
        Optional("COLLECTION_LOCATION"): str,
        Optional("COLLECTION_FREQUENCY"): str,
        Optional("COLLECTION_DURATION"): str,
        Optional("COLLECTION_TIME"): str,
        Optional("VOLUMEORAMOUNT_COLLECTED"): str,
        Optional("STORAGE_CONDITIONS"): str,
        Optional("COLLECTION_VIALS"): str,
        Optional("STORAGE_VIALS"): str,
        Optional("COLLECTION_TUBE_TEMP"): str,
        Optional("ADDITIVES"): str,
        Optional("BLOOD_SERUM_OR_PLASMA"): str,
        Optional("TISSUE_CELL_IDENTIFICATION"): str,
        Optional("TISSUE_CELL_QUANTITY_TAKEN"): str
    }
)

treatment_schema = Schema(
    {
        "TREATMENT_SUMMARY": str,
        Optional("TREATMENT_PROTOCOL_ID"): str,
        Optional("TREATMENT_PROTOCOL_FILENAME"): str,
        Optional("TREATMENT_PROTOCOL_COMMENTS"): str,
        Optional("TREATMENT"): str,
        Optional("TREATMENT_COMPOUND"): str,
        Optional("TREATMENT_ROUTE"): str,
        Optional("TREATMENT_DOSE"): str,
        Optional("TREATMENT_DOSEVOLUME"): str,
        Optional("TREATMENT_DOSEDURATION"): str,
        Optional("TREATMENT_VEHICLE"): str,
        Optional("ANIMAL_VET_TREATMENTS"): str,
        Optional("ANIMAL_ANESTHESIA"): str,
        Optional("ANIMAL_ACCLIMATION_DURATION"): str,
        Optional("ANIMAL_FASTING"): str,
        Optional("ANIMAL_ENDP_EUTHANASIA"): str,
        Optional("ANIMAL_ENDP_TISSUE_COLL_LIST"): str,
        Optional("ANIMAL_ENDP_TISSUE_PROC_METHOD"): str,
        Optional("ANIMAL_ENDP_CLINICAL_SIGNS"): str,
        Optional("HUMAN_FASTING"): str,
        Optional("HUMAN_ENDP_CLINICAL_SIGNS"): str,
        Optional("CELL_STORAGE"): str,
        Optional("CELL_GROWTH_CONTAINER"): str,
        Optional("CELL_GROWTH_CONFIG"): str,
        Optional("CELL_GROWTH_RATE"): str,
        Optional("CELL_INOC_PROC"): str,
        Optional("CELL_MEDIA"): str,
        Optional("CELL_ENVIR_COND"): str,
        Optional("CELL_HARVESTING"): str,
        Optional("PLANT_GROWTH_SUPPORT"): str,
        Optional("PLANT_GROWTH_LOCATION"): str,
        Optional("PLANT_PLOT_DESIGN"): str,
        Optional("PLANT_LIGHT_PERIOD"): str,
        Optional("PLANT_HUMIDITY"): str,
        Optional("PLANT_TEMP"): str,
        Optional("PLANT_WATERING_REGIME"): str,
        Optional("PLANT_NUTRITIONAL_REGIME"): str,
        Optional("PLANT_ESTAB_DATE"): str,
        Optional("PLANT_HARVEST_DATE"): str,
        Optional("PLANT_GROWTH_STAGE"): str,
        Optional("PLANT_METAB_QUENCH_METHOD"): str,
        Optional("PLANT_HARVEST_METHOD"): str,
        Optional("PLANT_STORAGE"): str,
        Optional("CELL_PCT_CONFLUENCE"): str,
        Optional("CELL_MEDIA_LASTCHANGED"): str
    }
)

sampleprep_schema = Schema(
    {
        "SAMPLEPREP_SUMMARY": str,
        Optional("SAMPLEPREP_PROTOCOL_ID"): str,
        Optional("SAMPLEPREP_PROTOCOL_FILENAME"): str,
        Optional("SAMPLEPREP_PROTOCOL_COMMENTS"): str,
        Optional("PROCESSING_METHOD"): str,
        Optional("PROCESSING_STORAGE_CONDITIONS"): str,
        Optional("EXTRACTION_METHOD"): str,
        Optional("EXTRACT_CONCENTRATION_DILUTION"): str,
        Optional("EXTRACT_ENRICHMENT"): str,
        Optional("EXTRACT_CLEANUP"): str,
        Optional("EXTRACT_STORAGE"): str,
        Optional("SAMPLE_RESUSPENSION"): str,
        Optional("SAMPLE_DERIVATIZATION"): str,
        Optional("SAMPLE_SPIKING"): str,
        Optional("ORGAN"): str,
        Optional("ORGAN_SPECIFICATION"): str,
        Optional("CELL_TYPE"): str,
        Optional("SUBCELLULAR_LOCATION"): str
    }
)

chromatography_schema = Schema(
    {
        Optional("CHROMATOGRAPHY_SUMMARY"): str,
        "CHROMATOGRAPHY_TYPE": str,
        "INSTRUMENT_NAME": str,
        "COLUMN_NAME": str,
        Optional("FLOW_GRADIENT"): str,
        Optional("FLOW_RATE"): str,
        Optional("COLUMN_TEMPERATURE"): str,
        Optional("METHODS_FILENAME"): str,
        Optional("SOLVENT_A"): str,
        Optional("SOLVENT_B"): str,
        Optional("METHODS_ID"): str,
        Optional("COLUMN_PRESSURE"): str,
        Optional("INJECTION_TEMPERATURE"): str,
        Optional("INTERNAL_STANDARD"): str,
        Optional("INTERNAL_STANDARD_MT"): str,
        Optional("RETENTION_INDEX"): str,
        Optional("RETENTION_TIME"): str,
        Optional("SAMPLE_INJECTION"): str,
        Optional("SAMPLING_CONE"): str,
        Optional("ANALYTICAL_TIME"): str,
        Optional("CAPILLARY_VOLTAGE"): str,
        Optional("MIGRATION_TIME"): str,
        Optional("OVEN_TEMPERATURE"): str,
        Optional("PRECONDITIONING"): str,
        Optional("RUNNING_BUFFER"): str,
        Optional("RUNNING_VOLTAGE"): str,
        Optional("SHEATH_LIQUID"): str,
        Optional("TIME_PROGRAM"): str,
        Optional("TRANSFERLINE_TEMPERATURE"): str,
        Optional("WASHING_BUFFER"): str,
        Optional("WEAK_WASH_SOLVENT_NAME"): str,
        Optional("WEAK_WASH_VOLUME"): str,
        Optional("STRONG_WASH_SOLVENT_NAME"): str,
        Optional("STRONG_WASH_VOLUME"): str,
        Optional("TARGET_SAMPLE_TEMPERATURE"): str,
        Optional("SAMPLE_LOOP_SIZE"): str,
        Optional("SAMPLE_SYRINGE_SIZE"): str,
        Optional("RANDOMIZATION_ORDER"): str,
        Optional("CHROMATOGRAPHY_COMMENTS"): str
    }
)

analysis_schema = Schema(
    {
        "ANALYSIS_TYPE": str,
        Optional("LABORATORY_NAME"): str,
        Optional("OPERATOR_NAME"): str,
        Optional("DETECTOR_TYPE"): str,
        Optional("SOFTWARE_VERSION"): str,
        Optional("ACQUISITION_DATE"): str,
        Optional("ANALYSIS_PROTOCOL_FILE"): str,
        Optional("ACQUISITION_PARAMETERS_FILE"): str,
        Optional("PROCESSING_PARAMETERS_FILE"): str,
        Optional("DATA_FORMAT"): str,

        # not specified in mwTab specification (assumed)
        Optional("ACQUISITION_ID"): str,
        Optional("ACQUISITION_TIME"): str,
        Optional("ANALYSIS_COMMENTS"): str,
        Optional("ANALYSIS_DISPLAY"): str,
        Optional("INSTRUMENT_NAME"): str,
        Optional("INSTRUMENT_PARAMETERS_FILE"): str,
        Optional("NUM_FACTORS"): str,
        Optional("NUM_METABOLITES"): str,
        Optional("PROCESSED_FILE"): str,
        Optional("RANDOMIZATION_ORDER"): str,
        Optional("RAW_FILE"): str,
    }
)

ms_schema = Schema(
    {
        "INSTRUMENT_NAME": str,
        "INSTRUMENT_TYPE": str,
        "MS_TYPE": str,
        "ION_MODE": str,
        "MS_COMMENTS": str,  # changed to required
        Optional("CAPILLARY_TEMPERATURE"): str,
        Optional("CAPILLARY_VOLTAGE"): str,
        Optional("COLLISION_ENERGY"): str,
        Optional("COLLISION_GAS"): str,
        Optional("DRY_GAS_FLOW"): str,
        Optional("DRY_GAS_TEMP"): str,
        Optional("FRAGMENT_VOLTAGE"): str,
        Optional("FRAGMENTATION_METHOD"): str,
        Optional("GAS_PRESSURE"): str,
        Optional("HELIUM_FLOW"): str,
        Optional("ION_SOURCE_TEMPERATURE"): str,
        Optional("ION_SPRAY_VOLTAGE"): str,
        Optional("IONIZATION"): str,
        Optional("IONIZATION_ENERGY"): str,
        Optional("IONIZATION_POTENTIAL"): str,
        Optional("MASS_ACCURACY"): str,
        Optional("PRECURSOR_TYPE"): str,
        Optional("REAGENT_GAS"): str,
        Optional("SOURCE_TEMPERATURE"): str,
        Optional("SPRAY_VOLTAGE"): str,
        Optional("ACTIVATION_PARAMETER"): str,
        Optional("ACTIVATION_TIME"): str,
        Optional("ATOM_GUN_CURRENT"): str,
        Optional("AUTOMATIC_GAIN_CONTROL"): str,
        Optional("BOMBARDMENT"): str,
        Optional("CDL_SIDE_OCTOPOLES_BIAS_VOLTAGE"): str,
        Optional("CDL_TEMPERATURE"): str,
        Optional("DATAFORMAT"): str,
        Optional("DESOLVATION_GAS_FLOW"): str,
        Optional("DESOLVATION_TEMPERATURE"): str,
        Optional("INTERFACE_VOLTAGE"): str,
        Optional("IT_SIDE_OCTOPOLES_BIAS_VOLTAGE"): str,
        Optional("LASER"): str,
        Optional("MATRIX"): str,
        Optional("NEBULIZER"): str,
        Optional("OCTPOLE_VOLTAGE"): str,
        Optional("PROBE_TIP"): str,
        Optional("RESOLUTION_SETTING"): str,
        Optional("SAMPLE_DRIPPING"): str,
        Optional("SCAN_RANGE_MOVERZ"): str,
        Optional("SCANNING"): str,
        Optional("SCANNING_CYCLE"): str,
        Optional("SCANNING_RANGE"): str,
        Optional("SKIMMER_VOLTAGE"): str,
        Optional("TUBE_LENS_VOLTAGE"): str,
        Optional("MS_RESULTS_FILE"): Or(str, dict)
    }
)

nmr_schema = Schema(
    {
        "INSTRUMENT_NAME": str,
        "INSTRUMENT_TYPE": str,
        "NMR_EXPERIMENT_TYPE": str,
        Optional("NMR_COMMENTS"): str,
        Optional("FIELD_FREQUENCY_LOCK"): str,
        Optional("STANDARD_CONCENTRATION"): str,
        "SPECTROMETER_FREQUENCY": str,
        Optional("NMR_PROBE"): str,
        Optional("NMR_SOLVENT"): str,
        Optional("NMR_TUBE_SIZE"): str,
        Optional("SHIMMING_METHOD"): str,
        Optional("PULSE_SEQUENCE"): str,
        Optional("WATER_SUPPRESSION"): str,
        Optional("PULSE_WIDTH"): str,
        Optional("POWER_LEVEL"): str,
        Optional("RECEIVER_GAIN"): str,
        Optional("OFFSET_FREQUENCY"): str,
        Optional("PRESATURATION_POWER_LEVEL"): str,
        Optional("CHEMICAL_SHIFT_REF_CPD"): str,
        Optional("TEMPERATURE"): str,
        Optional("NUMBER_OF_SCANS"): str,
        Optional("DUMMY_SCANS"): str,
        Optional("ACQUISITION_TIME"): str,
        Optional("RELAXATION_DELAY"): str,
        Optional("SPECTRAL_WIDTH"): str,
        Optional("NUM_DATA_POINTS_ACQUIRED"): str,
        Optional("REAL_DATA_POINTS"): str,
        Optional("LINE_BROADENING"): str,
        Optional("ZERO_FILLING"): str,
        Optional("APODIZATION"): str,
        Optional("BASELINE_CORRECTION_METHOD"): str,
        Optional("CHEMICAL_SHIFT_REF_STD"): str,
        Optional("BINNED_INCREMENT"): str,
        Optional("BINNED_DATA_NORMALIZATION_METHOD"): str,
        Optional("BINNED_DATA_PROTOCOL_FILE"): str,
        Optional("BINNED_DATA_CHEMICAL_SHIFT_RANGE"): str,
        Optional("BINNED_DATA_EXCLUDED_RANGE"): str
    }
)

data_schema = Schema(
    [
        {
            Or("Metabolite", "Bin range(ppm)", only_one=True): str,
            Optional(str): str,
        },
    ]
)

extended_schema = Schema(
    [
        {
            "Metabolite": str,
            Optional(str): str,
            "sample_id": str
        },
    ]
)

ms_metabolite_data_schema = Schema(
    {
        "Units": str,
        "Data": data_schema,
        "Metabolites": data_schema,
        Optional("Extended"): extended_schema
    }
)

nmr_binned_data_schema = Schema(
    {
        "Units": str,
        "Data": data_schema
    }
)

section_schema_mapping = {
    "METABOLOMICS WORKBENCH": metabolomics_workbench_schema,
    "PROJECT": project_schema,
    "STUDY": study_schema,
    "ANALYSIS": analysis_schema,
    "SUBJECT": subject_schema,
    "SUBJECT_SAMPLE_FACTORS": subject_sample_factors_schema,
    "COLLECTION": collection_schema,
    "TREATMENT": treatment_schema,
    "SAMPLEPREP": sampleprep_schema,
    "CHROMATOGRAPHY": chromatography_schema,
    "MS": ms_schema,
    "NM": nmr_schema,
    "MS_METABOLITE_DATA": ms_metabolite_data_schema,
    "NMR_METABOLITE_DATA": ms_metabolite_data_schema,
    "NMR_BINNED_DATA": nmr_binned_data_schema,
}
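As a rough illustration of the section shape these schemas validate, here is a hand-written PROJECT section of the form `project_schema` accepts. All field values are invented, and a plain-dict check stands in for the `schema` package so the sketch is self-contained:

```python
# Illustrative PROJECT section; field names match project_schema above,
# field values are made up.
project_section = {
    "PROJECT_TITLE": "Example metabolomics project",
    "PROJECT_SUMMARY": "One-line illustrative summary.",
    "INSTITUTE": "Example University",
    "LAST_NAME": "Doe",
    "FIRST_NAME": "Jane",
    "ADDRESS": "1 Example Street",
    "EMAIL": "jane.doe@example.org",
    "PHONE": "555-0100",
}

# The non-Optional keys of project_schema; with the schema package installed,
# project_schema.validate(project_section) would perform this check and more.
required = {"PROJECT_TITLE", "PROJECT_SUMMARY", "INSTITUTE", "LAST_NAME",
            "FIRST_NAME", "ADDRESS", "EMAIL", "PHONE"}
print(required.issubset(project_section))
```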
# test.py (chdre/noise-randomized, MIT)
import unittest

class PerlinTestCase(unittest.TestCase):
    def test_perlin_1d_range(self):
        from noise import pnoise1
        for i in range(-10000, 10000):
            x = i * 0.49
            n = pnoise1(x)
            self.assertTrue(-1.0 <= n <= 1.0, (x, n))

    def test_perlin_1d_octaves_range(self):
        from noise import pnoise1
        for i in range(-1000, 1000):
            for o in range(10):
                x = i * 0.49
                n = pnoise1(x, octaves=o + 1)
                self.assertTrue(-1.0 <= n <= 1.0, (x, n))

    def test_perlin_1d_base(self):
        from noise import pnoise1
        self.assertEqual(pnoise1(0.5), pnoise1(0.5, base=0))
        self.assertNotEqual(pnoise1(0.5), pnoise1(0.5, base=5))
        self.assertNotEqual(pnoise1(0.5, base=5), pnoise1(0.5, base=1))

    def test_perlin_2d_range(self):
        from noise import pnoise2
        for i in range(-10000, 10000):
            x = i * 0.49
            y = -i * 0.67
            n = pnoise2(x, y)
            self.assertTrue(-1.0 <= n <= 1.0, (x, y, n))

    def test_perlin_2d_octaves_range(self):
        from noise import pnoise2
        for i in range(-1000, 1000):
            for o in range(10):
                x = -i * 0.49
                y = i * 0.67
                n = pnoise2(x, y, octaves=o + 1)
                self.assertTrue(-1.0 <= n <= 1.0, (x, n))

    def test_perlin_2d_base(self):
        from noise import pnoise2
        x, y = 0.73, 0.27
        self.assertEqual(pnoise2(x, y), pnoise2(x, y, base=0))
        self.assertNotEqual(pnoise2(x, y), pnoise2(x, y, base=5))
        self.assertNotEqual(pnoise2(x, y, base=5), pnoise2(x, y, base=1))

    def test_perlin_3d_range(self):
        from noise import pnoise3
        for i in range(-10000, 10000):
            x = -i * 0.49
            y = i * 0.67
            z = -i * 0.727
            n = pnoise3(x, y, z)
            self.assertTrue(-1.0 <= n <= 1.0, (x, y, z, n))

    def test_perlin_3d_octaves_range(self):
        from noise import pnoise3
        for i in range(-1000, 1000):
            x = i * 0.22
            y = -i * 0.77
            z = -i * 0.17
            for o in range(10):
                n = pnoise3(x, y, z, octaves=o + 1)
                self.assertTrue(-1.0 <= n <= 1.0, (x, y, z, n))

    def test_perlin_3d_base(self):
        from noise import pnoise3
        x, y, z = 0.1, 0.7, 0.33
        self.assertEqual(pnoise3(x, y, z), pnoise3(x, y, z, base=0))
        self.assertNotEqual(pnoise3(x, y, z), pnoise3(x, y, z, base=5))
        self.assertNotEqual(pnoise3(x, y, z, base=5), pnoise3(x, y, z, base=1))

class SimplexTestCase(unittest.TestCase):
    def test_randomize(self):
        from noise import randomize
        self.assertTrue(randomize(4096, 23490))

    def test_simplex_2d_range(self):
        from noise import snoise2
        for i in range(-10000, 10000):
            x = i * 0.49
            y = -i * 0.67
            n = snoise2(x, y)
            self.assertTrue(-1.0 <= n <= 1.0, (x, y, n))

    def test_simplex_2d_octaves_range(self):
        from noise import snoise2
        for i in range(-1000, 1000):
            for o in range(10):
                x = -i * 0.49
                y = i * 0.67
                n = snoise2(x, y, octaves=o + 1)
                self.assertTrue(-1.0 <= n <= 1.0, (x, n))

    def test_simplex_3d_range(self):
        from noise import snoise3
        for i in range(-10000, 10000):
            x = i * 0.31
            y = -i * 0.7
            z = i * 0.19
            n = snoise3(x, y, z)
            self.assertTrue(-1.0 <= n <= 1.0, (x, y, z, n))

    def test_simplex_3d_octaves_range(self):
        from noise import snoise3
        for i in range(-1000, 1000):
            x = -i * 0.12
            y = i * 0.55
            z = i * 0.34
            for o in range(10):
                n = snoise3(x, y, z, octaves=o + 1)
                self.assertTrue(-1.0 <= n <= 1.0, (x, y, z, o + 1, n))

    def test_simplex_4d_range(self):
        from noise import snoise4
        for i in range(-10000, 10000):
            x = i * 0.88
            y = -i * 0.11
            z = -i * 0.57
            w = i * 0.666
            n = snoise4(x, y, z, w)
            self.assertTrue(-1.0 <= n <= 1.0, (x, y, z, w, n))

    def test_simplex_4d_octaves_range(self):
        from noise import snoise4
        for i in range(-1000, 1000):
            x = -i * 0.12
            y = i * 0.55
            z = i * 0.34
            w = i * 0.21
            for o in range(10):
                n = snoise4(x, y, z, w, octaves=o + 1)
                self.assertTrue(-1.0 <= n <= 1.0, (x, y, z, w, o + 1, n))


if __name__ == '__main__':
    unittest.main()
# Asap-3.8.4/Projects/NanoparticleMC/misc/viewatomsmc.py (auag92/n2dm, MIT)
import ase
from ase import Atoms
from ase.atom import Atom
import sys
from ase.visualize import view
import pickle
f = open(sys.argv[1], 'r')  # The .amc file
p = pickle.load(f)
positions = p['atomspositions']
atms = Atoms()
for p0 in positions:
    a = Atom('Au', position=p0)
    atms.append(a)
atms.center(vacuum=2)
view(atms)
961e930045b962f6aec047adbd1d0fd8f14a977a | 453 | py | Python | bot_settings_example.py | nikmedoed/BalanceBot | 731e6d09d71bbf8d7802d0b42a570947343d3ce6 | [
"MIT"
] | null | null | null | bot_settings_example.py | nikmedoed/BalanceBot | 731e6d09d71bbf8d7802d0b42a570947343d3ce6 | [
"MIT"
] | null | null | null | bot_settings_example.py | nikmedoed/BalanceBot | 731e6d09d71bbf8d7802d0b42a570947343d3ce6 | [
"MIT"
] | null | null | null | # this is the dev environment
TELEGRAM_TOKEN = "..."
RELATIVE_CHAT_IDS = ["...", "..."]
TEXT = {
"bot_info": ('Привет, я бот, который отвечает за равномерное распределение участников по комнатам.\n\n'
'Нажми кнопку, если готов сменить комнату'),
"get_link": "Получить рекомендацию",
"new_room": "Ваша новая комната\n%s",
"nothing_to_change": "На данный момент ничего менять не требуется"
}
def logger(*message):
print(message) | 30.2 | 107 | 0.655629 | 56 | 453 | 5.160714 | 0.946429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.200883 | 453 | 15 | 108 | 30.2 | 0.798343 | 0.028698 | 0 | 0 | 0 | 0 | 0.601367 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0 | 0 | 0.090909 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
962392189f97293112a65685c141235eaa945995 | 369 | py | Python | instapp/migrations/0003_auto_20190522_0007.py | imekenye/Instagram-clone | 19c895a7bc4d5137f8df6eab7ade3920dfc3eb39 | [
"Unlicense"
] | null | null | null | instapp/migrations/0003_auto_20190522_0007.py | imekenye/Instagram-clone | 19c895a7bc4d5137f8df6eab7ade3920dfc3eb39 | [
"Unlicense"
] | 13 | 2020-02-12T00:19:23.000Z | 2022-03-11T23:47:08.000Z | instapp/migrations/0003_auto_20190522_0007.py | imekenye/Instagram-clone | 19c895a7bc4d5137f8df6eab7ade3920dfc3eb39 | [
"Unlicense"
] | 1 | 2019-06-07T10:01:06.000Z | 2019-06-07T10:01:06.000Z | # Generated by Django 2.2.1 on 2019-05-22 00:07
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('instapp', '0002_auto_20190522_0006'),
]
operations = [
migrations.RenameField(
model_name='image',
old_name='profile',
new_name='user_profile',
),
]
| 19.421053 | 47 | 0.590786 | 40 | 369 | 5.275 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119691 | 0.298103 | 369 | 18 | 48 | 20.5 | 0.694981 | 0.121951 | 0 | 0 | 1 | 0 | 0.167702 | 0.071429 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
824a4f6bf20408ed367c7e9a67c9b62aea2ab1c0 | 7,611 | py | Python | sweetpea/tests/test_encoding_diagram.py | anniecherk/sweetpea-py | 23dbad99a9213ff764ec207b456cf5d002707fd0 | [
"MIT"
] | 1 | 2018-05-06T03:54:06.000Z | 2018-05-06T03:54:06.000Z | sweetpea/tests/test_encoding_diagram.py | anniecherk/sweetpea-py | 23dbad99a9213ff764ec207b456cf5d002707fd0 | [
"MIT"
] | 5 | 2018-09-18T02:15:17.000Z | 2018-12-05T20:02:24.000Z | sweetpea/tests/test_encoding_diagram.py | anniecherk/sweetpea-py | 23dbad99a9213ff764ec207b456cf5d002707fd0 | [
"MIT"
] | null | null | null | import pytest
import operator as op
from sweetpea import fully_cross_block
from sweetpea.primitives import Factor, DerivedLevel, WithinTrial, Transition, Window
from sweetpea.encoding_diagram import __generate_encoding_diagram
color = Factor("color", ["red", "blue"])
text = Factor("text", ["red", "blue"])
con_level = DerivedLevel("con", WithinTrial(op.eq, [color, text]))
inc_level = DerivedLevel("inc", WithinTrial(op.ne, [color, text]))
con_factor = Factor("congruent?", [con_level, inc_level])
color_repeats_factor = Factor("color repeats?", [
DerivedLevel("yes", Transition(lambda colors: colors[0] == colors[1], [color])),
DerivedLevel("no", Transition(lambda colors: colors[0] != colors[1], [color]))
])
text_repeats_factor = Factor("text repeats?", [
DerivedLevel("yes", Transition(lambda colors: colors[0] == colors[1], [text])),
DerivedLevel("no", Transition(lambda colors: colors[0] != colors[1], [text]))
])
design = [color, text, con_factor]
crossing = [color, text]
blk = fully_cross_block(design, crossing, [])
def test_generate_encoding_diagram():
assert __generate_encoding_diagram(blk) == "\
----------------------------------------------\n\
| Trial | color | text | congruent? |\n\
| # | red blue | red blue | con inc |\n\
----------------------------------------------\n\
| 1 | 1 2 | 3 4 | 5 6 |\n\
| 2 | 7 8 | 9 10 | 11 12 |\n\
| 3 | 13 14 | 15 16 | 17 18 |\n\
| 4 | 19 20 | 21 22 | 23 24 |\n\
----------------------------------------------\n"
def test_generate_encoding_diagram_with_transition():
block = fully_cross_block([color, text, color_repeats_factor],
[color, text],
[])
assert __generate_encoding_diagram(block) == "\
--------------------------------------------------\n\
| Trial | color | text | color repeats? |\n\
| # | red blue | red blue | yes no |\n\
--------------------------------------------------\n\
| 1 | 1 2 | 3 4 | |\n\
| 2 | 5 6 | 7 8 | 17 18 |\n\
| 3 | 9 10 | 11 12 | 19 20 |\n\
| 4 | 13 14 | 15 16 | 21 22 |\n\
--------------------------------------------------\n"
def test_generate_encoding_diagram_with_constraint_and_multiple_transitions():
block = fully_cross_block([color, text, con_factor, color_repeats_factor, text_repeats_factor],
[color, text],
[])
assert __generate_encoding_diagram(block) == "\
-------------------------------------------------------------------------------\n\
| Trial | color | text | congruent? | color repeats? | text repeats? |\n\
| # | red blue | red blue | con inc | yes no | yes no |\n\
-------------------------------------------------------------------------------\n\
| 1 | 1 2 | 3 4 | 5 6 | | |\n\
| 2 | 7 8 | 9 10 | 11 12 | 25 26 | 31 32 |\n\
| 3 | 13 14 | 15 16 | 17 18 | 27 28 | 33 34 |\n\
| 4 | 19 20 | 21 22 | 23 24 | 29 30 | 35 36 |\n\
-------------------------------------------------------------------------------\n"
def test_generate_encoding_diagram_with_constraint_and_multiple_transitions_in_different_order():
block = fully_cross_block([text_repeats_factor, color, color_repeats_factor, text, con_factor],
[color, text],
[])
assert __generate_encoding_diagram(block) == "\
-------------------------------------------------------------------------------\n\
| Trial | text repeats? | color | color repeats? | text | congruent? |\n\
| # | yes no | red blue | yes no | red blue | con inc |\n\
-------------------------------------------------------------------------------\n\
| 1 | | 1 2 | | 3 4 | 5 6 |\n\
| 2 | 25 26 | 7 8 | 31 32 | 9 10 | 11 12 |\n\
| 3 | 27 28 | 13 14 | 33 34 | 15 16 | 17 18 |\n\
| 4 | 29 30 | 19 20 | 35 36 | 21 22 | 23 24 |\n\
-------------------------------------------------------------------------------\n"
def test_generate_encoding_diagram_with_windows():
color3 = Factor("color3", ["red", "blue", "green"])
yes_fn = lambda colors: colors[0] == colors[1] == colors[2]
no_fn = lambda colors: not yes_fn(colors)
color3_repeats_factor = Factor("color3 repeats?", [
DerivedLevel("yes", Window(yes_fn, [color3], 3, 1)),
DerivedLevel("no", Window(no_fn, [color3], 3, 1))
])
block = fully_cross_block([color3_repeats_factor, color3, text], [color3, text], [])
assert __generate_encoding_diagram(block) == "\
---------------------------------------------------------\n\
| Trial | color3 repeats? | color3 | text |\n\
| # | yes no | red blue green | red blue |\n\
---------------------------------------------------------\n\
| 1 | | 1 2 3 | 4 5 |\n\
| 2 | | 6 7 8 | 9 10 |\n\
| 3 | 31 32 | 11 12 13 | 14 15 |\n\
| 4 | 33 34 | 16 17 18 | 19 20 |\n\
| 5 | 35 36 | 21 22 23 | 24 25 |\n\
| 6 | 37 38 | 26 27 28 | 29 30 |\n\
---------------------------------------------------------\n"
def test_generate_encoding_diagram_with_window_with_stride():
congruent_bookend = Factor("congruent bookend?", [
DerivedLevel("yes", Window(lambda colors, texts: colors[0] == texts[0], [color, text], 1, 3)),
DerivedLevel("no", Window(lambda colors, texts: colors[0] == texts[0], [color, text], 1, 3))
])
block = fully_cross_block([color, text, congruent_bookend], [color, text], [])
assert __generate_encoding_diagram(block) == "\
------------------------------------------------------\n\
| Trial | color | text | congruent bookend? |\n\
| # | red blue | red blue | yes no |\n\
------------------------------------------------------\n\
| 1 | 1 2 | 3 4 | 17 18 |\n\
| 2 | 5 6 | 7 8 | |\n\
| 3 | 9 10 | 11 12 | |\n\
| 4 | 13 14 | 15 16 | 19 20 |\n\
------------------------------------------------------\n"
congruent_bookend = Factor("congruent bookend?", [
DerivedLevel("yes", Window(lambda colors, texts: colors[0] == texts[0], [color, text], 2, 2)),
DerivedLevel("no", Window(lambda colors, texts: colors[0] == texts[0], [color, text], 2, 2))
])
block = fully_cross_block([color, text, congruent_bookend], [color, text], [])
assert __generate_encoding_diagram(block) == "\
------------------------------------------------------\n\
| Trial | color | text | congruent bookend? |\n\
| # | red blue | red blue | yes no |\n\
------------------------------------------------------\n\
| 1 | 1 2 | 3 4 | |\n\
| 2 | 5 6 | 7 8 | 17 18 |\n\
| 3 | 9 10 | 11 12 | |\n\
| 4 | 13 14 | 15 16 | 19 20 |\n\
------------------------------------------------------\n"
| 48.170886 | 102 | 0.411247 | 785 | 7,611 | 3.830573 | 0.123567 | 0.068839 | 0.107083 | 0.067509 | 0.618557 | 0.591952 | 0.542401 | 0.532757 | 0.481543 | 0.446625 | 0 | 0.076106 | 0.307713 | 7,611 | 157 | 103 | 48.477707 | 0.494591 | 0 | 0 | 0.412698 | 0 | 0 | 0.021025 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 1 | 0.047619 | false | 0 | 0.039683 | 0 | 0.087302 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
824adf7af953a3787b6ad72eca002b2f5fa3b943 | 297 | py | Python | Source_Code/Python/ConductedTest/case_generator.py | fenglwh/instruments | 7886158d1ed97fe6bfe372a55f4fca107e834311 | [
"MIT"
] | null | null | null | Source_Code/Python/ConductedTest/case_generator.py | fenglwh/instruments | 7886158d1ed97fe6bfe372a55f4fca107e834311 | [
"MIT"
] | 3 | 2018-09-21T00:57:21.000Z | 2018-09-21T01:49:40.000Z | Source_Code/Python/ConductedTest/case_generator.py | fenglwh/instruments | 7886158d1ed97fe6bfe372a55f4fca107e834311 | [
"MIT"
] | null | null | null | import json
from labinstrument.SS.CMW500.CMW500_WIFI.CMW500_WIFI import *
if __name__ == '__main__':
    new_config_name = 'emm'
    new_config = CMW_WIFI(17).get_parameters()
    with open('config.txt') as f:
        config = json.load(f)
    config[new_config_name] = new_config
    with open('config.txt', 'w') as f:
        json.dump(config, f)
8258e9ef419949e0cfc0082d25711b7eeaaea221 | 427 | py | Python | realtime/realtime.py | mikerah13/python_samples | c4cd8af3cee99a5199dd2231f182240c35984b97 | [
"MIT"
] | null | null | null | realtime/realtime.py | mikerah13/python_samples | c4cd8af3cee99a5199dd2231f182240c35984b97 | [
"MIT"
] | null | null | null | realtime/realtime.py | mikerah13/python_samples | c4cd8af3cee99a5199dd2231f182240c35984b97 | [
"MIT"
] | null | null | null | import shlex
from subprocess import Popen, PIPE
def run_command(command):
    # stream the command's stdout line by line as it runs
    process = Popen(shlex.split(command), stdout=PIPE, universal_newlines=True)
while True:
output = process.stdout.readline()
if output == '' and process.poll() is not None:
break
if output:
            print(output.strip())
rc = process.poll()
return rc
if __name__ == "__main__":
run_command("ping google.com")
| 23.722222 | 76 | 0.627635 | 51 | 427 | 5.058824 | 0.627451 | 0.077519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.264637 | 427 | 17 | 77 | 25.117647 | 0.821656 | 0 | 0 | 0 | 0 | 0 | 0.053864 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.076923 | null | null | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
825a7d574135cde50db9d1e2e4cce7b2af3b42c9 | 923 | py | Python | resources/model/agenda.py | diegohideky/climatempoworkshop | edb50eec386d6db5ede9b28192520922ed85c55e | [
"MIT"
] | null | null | null | resources/model/agenda.py | diegohideky/climatempoworkshop | edb50eec386d6db5ede9b28192520922ed85c55e | [
"MIT"
] | null | null | null | resources/model/agenda.py | diegohideky/climatempoworkshop | edb50eec386d6db5ede9b28192520922ed85c55e | [
"MIT"
] | null | null | null | from db_connection import db
class Agenda(db.Model):
__tablename__ = "agendas"
id = db.Column(db.Integer, primary_key=True)
date = db.Column(db.Date)
work_start = db.Column(db.Time)
work_end = db.Column(db.Time)
rest_start = db.Column(db.Time)
rest_end = db.Column(db.Time)
user_id = db.Column(db.Integer, db.ForeignKey('usuarios.id'))
user = db.relationship('User')
def __init__(self, date, work_start, work_end, rest_start, rest_end, user_id):
self.date = date
self.work_start = work_start
self.work_end = work_end
self.rest_start = rest_start
self.rest_end = rest_end
self.user_id = user_id
def update(self, date, work_start, work_end, rest_start, rest_end):
self.date = date
self.work_start = work_start
self.work_end = work_end
self.rest_start = rest_start
self.rest_end = rest_end | 31.827586 | 82 | 0.658722 | 139 | 923 | 4.071942 | 0.208633 | 0.09894 | 0.123675 | 0.09894 | 0.674912 | 0.466431 | 0.466431 | 0.466431 | 0.466431 | 0.466431 | 0 | 0 | 0.23727 | 923 | 29 | 83 | 31.827586 | 0.803977 | 0 | 0 | 0.416667 | 0 | 0 | 0.02381 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.041667 | 0 | 0.541667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
825b9506a0a8cc2c13904600639147b936af53d7 | 470 | py | Python | graduated_site/migrations/0029_auto_20191218_2109.py | vbacaksiz/KTU-MEBSIS | e1afaa07a16e00ff9be3f39b728603b64f08590e | [
"MIT"
] | null | null | null | graduated_site/migrations/0029_auto_20191218_2109.py | vbacaksiz/KTU-MEBSIS | e1afaa07a16e00ff9be3f39b728603b64f08590e | [
"MIT"
] | null | null | null | graduated_site/migrations/0029_auto_20191218_2109.py | vbacaksiz/KTU-MEBSIS | e1afaa07a16e00ff9be3f39b728603b64f08590e | [
"MIT"
] | null | null | null | # Generated by Django 3.0 on 2019-12-18 21:09
import ckeditor.fields
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('graduated_site', '0028_auto_20191218_2028'),
]
operations = [
migrations.AlterField(
model_name='user_internship_post',
name='content',
field=ckeditor.fields.RichTextField(max_length=2000, null=True, verbose_name='İçerik'),
),
]
| 23.5 | 99 | 0.651064 | 54 | 470 | 5.518519 | 0.833333 | 0.09396 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095506 | 0.242553 | 470 | 19 | 100 | 24.736842 | 0.738764 | 0.091489 | 0 | 0 | 1 | 0 | 0.164706 | 0.054118 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
82661285c2d18985678122dfb06c00248935e316 | 540 | py | Python | basic/migrations/0003_entrypoint_entry_function.py | kgdunn/django-peer-review-system | 8d013961e00d189fbbade5283128e956a27954f8 | [
"BSD-2-Clause"
] | null | null | null | basic/migrations/0003_entrypoint_entry_function.py | kgdunn/django-peer-review-system | 8d013961e00d189fbbade5283128e956a27954f8 | [
"BSD-2-Clause"
] | 2 | 2020-03-20T11:50:04.000Z | 2020-03-20T11:50:06.000Z | basic/migrations/0003_entrypoint_entry_function.py | kgdunn/django-peer-review-system | 8d013961e00d189fbbade5283128e956a27954f8 | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.3 on 2017-07-27 16:14
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('basic', '0002_auto_20170727_1741'),
]
operations = [
migrations.AddField(
model_name='entrypoint',
name='entry_function',
field=models.CharField(default='', help_text='Django function, with syntax: "app_name.function_name"', max_length=100),
),
]
| 25.714286 | 131 | 0.644444 | 62 | 540 | 5.387097 | 0.790323 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 0.233333 | 540 | 20 | 132 | 27 | 0.719807 | 0.125926 | 0 | 0 | 1 | 0 | 0.226013 | 0.100213 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8267a45960a2743e88617d4dc273ba1a2f8b4aea | 1,231 | py | Python | app.py | iio1989/oshite | dd95eced2630929705670aaf23be5f35df3b9737 | [
"OLDAP-2.3"
] | null | null | null | app.py | iio1989/oshite | dd95eced2630929705670aaf23be5f35df3b9737 | [
"OLDAP-2.3"
] | 1 | 2020-09-24T05:15:00.000Z | 2020-09-24T05:17:06.000Z | app.py | iio1989/oshite | dd95eced2630929705670aaf23be5f35df3b9737 | [
"OLDAP-2.3"
] | null | null | null | from flask import Flask, render_template, request, redirect, url_for, Markup
import app_helper as apHelp
app = Flask(__name__)
@app.route('/')
def root():
return render_template('home.html')
# click convertBtn; get HTTP params.
@app.route('/post', methods=['GET', 'POST'])
def post():
if request.method == 'POST':
input_kana = request.form['input_kana']
converted_input_list = apHelp.getConvetedStr_kanaToOshite(input_kana)
# rendering for home.html.
return render_template('home.html',
input_kana=input_kana,
converted_input_list=converted_input_list,
fileType= apHelp.FILE_TYPE)
else: # error redirect.
return redirect(url_for('home'))
# click homeBtn from header.
@app.route('/home', methods=['GET', 'POST'])
def home():
return render_template('home.html')
# click aboutBtn from header.
@app.route('/about', methods=['GET', 'POST'])
def about():
return render_template('about.html')
# click historyBtn from header.
@app.route('/history', methods=['GET', 'POST'])
def history():
return render_template('history.html')
if __name__ == '__main__':
app.run(debug=True) | 30.775 | 77 | 0.645004 | 147 | 1,231 | 5.170068 | 0.353742 | 0.110526 | 0.131579 | 0.089474 | 0.194737 | 0.086842 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215272 | 1,231 | 40 | 78 | 30.775 | 0.786749 | 0.127539 | 0 | 0.071429 | 0 | 0 | 0.11985 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.178571 | false | 0 | 0.071429 | 0.142857 | 0.464286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
826e60c1e9dd5f09bf2a2eb0580dbe4ef233f970 | 578 | py | Python | app/templates/init.py | arudmin/generator-flask-heroku | 12ecd9d37b732bf5d59912c4874f1dbc6cfa63b1 | [
"MIT"
] | null | null | null | app/templates/init.py | arudmin/generator-flask-heroku | 12ecd9d37b732bf5d59912c4874f1dbc6cfa63b1 | [
"MIT"
] | null | null | null | app/templates/init.py | arudmin/generator-flask-heroku | 12ecd9d37b732bf5d59912c4874f1dbc6cfa63b1 | [
"MIT"
] | null | null | null | from flask import Flask, url_for
import os
app = Flask(__name__)
app.config['DEBUG'] = True
app.config['SECRET_KEY'] = 'SECRET_KEY_CH1ng3me'
# Determines the destination of the build. Only usefull if you're using Frozen-Flask
app.config['FREEZER_DESTINATION'] = os.path.dirname(os.path.abspath(__file__))+'/../build'
# Function to easily find your assets
# In your template use <link rel=stylesheet href="{{ static('filename') }}">
<%= appName %>.jinja_env.globals['static'] = (
lambda filename: url_for('static', filename = filename)
)
from <%= appName %> import views
| 32.111111 | 90 | 0.723183 | 80 | 578 | 5.0375 | 0.65 | 0.066998 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003976 | 0.129758 | 578 | 17 | 91 | 34 | 0.797217 | 0.33391 | 0 | 0 | 0 | 0 | 0.194226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.3 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
826e7e8ce0638e411f4ad1445cfe2c06fdbae9c6 | 936 | py | Python | sigmod2021-exdra-p523/experiments/code/other/l2svm.py | damslab/reproducibility | f7804b2513859f7e6f14fa7842d81003d0758bf8 | [
"Apache-2.0"
] | 4 | 2021-12-10T17:20:26.000Z | 2021-12-27T14:38:40.000Z | sigmod2021-exdra-p523/experiments/code/other/l2svm.py | damslab/reproducibility | f7804b2513859f7e6f14fa7842d81003d0758bf8 | [
"Apache-2.0"
] | null | null | null | sigmod2021-exdra-p523/experiments/code/other/l2svm.py | damslab/reproducibility | f7804b2513859f7e6f14fa7842d81003d0758bf8 | [
"Apache-2.0"
] | null | null | null |
import numpy as np
import argparse
from sklearn.svm import LinearSVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_regression
parser = argparse.ArgumentParser()
parser.add_argument('-x', '--datapath', type=str, required=True)
parser.add_argument('-y', '--labels', type=str, required=True)
parser.add_argument('-v', '--verbose', type=bool, default=False)
parser.add_argument('-o', '--outputpath', type=str, required=True)
args = parser.parse_args()
X = np.load(args.datapath, allow_pickle=True)
y = np.load(args.labels, allow_pickle=True)
# http://scikit-learn.sourceforge.net/stable/modules/generated/sklearn.svm.LinearSVC.html#sklearn.svm.LinearSVC
regr = make_pipeline(StandardScaler(),
                     LinearSVR(verbose=args.verbose, tol=1e-5, max_iter=30))
regr.fit(X, y)
np.savetxt(args.outputpath, regr.named_steps['linearsvr'].coef_, delimiter=",")
| 36 | 111 | 0.766026 | 130 | 936 | 5.415385 | 0.476923 | 0.0625 | 0.096591 | 0.080966 | 0.102273 | 0.102273 | 0.102273 | 0 | 0 | 0 | 0 | 0.004678 | 0.086538 | 936 | 25 | 112 | 37.44 | 0.818713 | 0.115385 | 0 | 0 | 0 | 0 | 0.069259 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
826ee5078415354fd8746cf24ad960817241f697 | 4,797 | py | Python | libs/fm_mission_planner/python/fm_mission_planner/target_viz.py | ethz-asl/mav_findmine | 2835995ace0a20a30f20812437b1b066428253a9 | [
"MIT"
] | 3 | 2021-06-25T03:38:38.000Z | 2022-01-13T08:39:48.000Z | libs/fm_mission_planner/python/fm_mission_planner/target_viz.py | ethz-asl/mav_findmine | 2835995ace0a20a30f20812437b1b066428253a9 | [
"MIT"
] | null | null | null | libs/fm_mission_planner/python/fm_mission_planner/target_viz.py | ethz-asl/mav_findmine | 2835995ace0a20a30f20812437b1b066428253a9 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# MIT License
#
# Copyright (c) 2020 Rik Baehnemann, ASL, ETH Zurich, Switzerland
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import rospy
import rospkg
import pandas as pd
import pymap3d as pm
import os
import numpy as np
from matplotlib import cm
from matplotlib import colors
from sensor_msgs.msg import NavSatFix
from visualization_msgs.msg import Marker, MarkerArray
# Load target list from CSV, receive home point from ROS msgs and publish target points to RVIZ.
class TargetViz():
def __init__(self):
self.df_targets = None
self.loadRosParameters()
self.subscribeToTopics()
self.advertiseTopics()
self.loadTargetTable()
self.main()
def loadRosParameters(self):
rospack = rospkg.RosPack()
default_target_path = os.path.join(rospack.get_path('fm_mission_planner'), 'cfg/target_table.csv')
self.target_path = rospy.get_param("~target_table", default_target_path)
self.frame_id = rospy.get_param("~frame_id", 'enu')
def subscribeToTopics(self):
self.home_point_sub = rospy.Subscriber('home_point', NavSatFix, self.homePointCallback)
def advertiseTopics(self):
        self.target_pub = rospy.Publisher('~targets', MarkerArray, queue_size=1, latch=True)
def homePointCallback(self, msg):
self.lat0 = msg.latitude
self.lon0 = msg.longitude
self.alt0 = msg.altitude
rospy.loginfo_throttle(10.0, 'Received home point lat0: ' + str(self.lat0) + ' lon0: ' + str(self.lon0) + ' alt0: ' + str(self.alt0))
if self.df_targets is not None and len(self.df_targets):
self.convertToENU()
self.createColors()
self.createMarkerArray()
self.target_pub.publish(self.marker_array)
def loadTargetTable(self):
self.df_targets = pd.read_csv(self.target_path, sep=",")
rospy.loginfo('Loading ' + str(len(self.df_targets)) + ' target points.')
def convertToENU(self):
lat = self.df_targets['lat'].values
lon = self.df_targets['lon'].values
alt = np.squeeze(np.zeros((len(self.df_targets), 1)))
        print(lat)
        print(lat.size)
if lat.size == 1 and lon.size == 1 and alt.size == 1:
lat = np.array([lat])
lon = np.array([lon])
alt = np.array([alt])
self.east = []
self.north = []
self.up = []
for i in range(0, len(self.df_targets)):
east, north, up = pm.geodetic2enu(lat[i], lon[i], alt[i], self.lat0, self.lon0, self.alt0)
self.east.append(east)
self.north.append(north)
self.up.append(up)
def createColors(self):
types = self.df_targets['type'].values
color_map = cm.get_cmap('Set1')
norm = colors.Normalize(vmin=min(types), vmax=max(types))
self.colors = color_map(norm(types))
def createMarkerArray(self):
self.marker_array = MarkerArray()
for i in range(0, len(self.df_targets)):
marker = Marker()
marker.type = marker.SPHERE
marker.action = marker.ADD
marker.scale.x = 0.4
marker.scale.y = 0.4
marker.scale.z = 0.4
marker.color.r = self.colors[i, 0]
marker.color.g = self.colors[i, 1]
marker.color.b = self.colors[i, 2]
marker.color.a = self.colors[i, 3]
marker.pose.position.x = self.east[i]
marker.pose.position.y = self.north[i]
marker.pose.position.z = self.up[i]
marker.pose.orientation.w = 1.0
marker.header.frame_id = self.frame_id
marker.id = i
self.marker_array.markers.append(marker)
def main(self):
rospy.spin()
| 36.9 | 141 | 0.64999 | 650 | 4,797 | 4.724615 | 0.364615 | 0.021491 | 0.046565 | 0.02605 | 0.018235 | 0.018235 | 0.018235 | 0.018235 | 0.018235 | 0 | 0 | 0.011056 | 0.245779 | 4,797 | 129 | 142 | 37.186047 | 0.837756 | 0.252658 | 0 | 0.023529 | 0 | 0 | 0.044638 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.117647 | null | null | 0.023529 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8275090a0a26b9725fd053645507a75767690bfa | 6,656 | py | Python | dumbai.py | CapKenway/dumbai | affa89663c980177d6c1e0fef9bda7978032da4d | [
"Unlicense"
] | null | null | null | dumbai.py | CapKenway/dumbai | affa89663c980177d6c1e0fef9bda7978032da4d | [
"Unlicense"
] | null | null | null | dumbai.py | CapKenway/dumbai | affa89663c980177d6c1e0fef9bda7978032da4d | [
"Unlicense"
] | null | null | null | import sys
from pprint import pprint
import os
#--------------------------------------------------------------------------#
class CsPP():
def __init__(self, domains):
self.domains = domains
self.maindict = {}
self.keyitems = []
def check_if(self):
emptylist = []
for domainkey in list(self.domains.keys()):
if not domainkey in list(self.maindict.keys()):
emptylist.append(domainkey)
for listitem in emptylist:
self.maindict[listitem] = list(self.domains.values())[1]
def not_belonging(self, key, lister):
templist = []
maindomain = self.domains[key]
for item in maindomain:
if not item in lister:
templist.append(item)
self.maindict[key] = templist
def belonging(self, key, lister):
self.maindict.__setitem__(key, lister)
def get_one_up(self, values):
self.keyitems.insert(self.keyitems.index(values[0]), values[1])
def get_one_down(self, values):
self.keyitems.reverse()
self.keyitems.insert(self.keyitems.index(values[1]), values[0])
self.keyitems.reverse()
def not_working_together(self, first, second):
firstlist = self.maindict[first]
secondlist = self.maindict[second]
        for item in list(firstlist):  # iterate over a copy so removal is safe
if item in secondlist:
firstlist.remove(item)
self.maindict[first] = firstlist
    def backtrack(self, maindict, what_want='', conditions=[], starter=''):
        csp_back = CsPP_Backend(domains=maindict, what_want=what_want, conditions=conditions, starter=starter)
        return csp_back._backtrack()
def left_to_right(self, maindict, path):
to_do = []
pathkeys = list(path.keys())
pathvalues = list(path.values())
mainkeys = list(maindict.keys())
mainvalues = list(maindict.values())
keylist = []
for key, values in zip(pathkeys, pathvalues):
keylist.append(key)
if len(values) > 1:
to_do.append(values[1:])
if len(to_do) != 0:
for i in range(0, len(to_do)):
popped = to_do.pop(i)
keylist.append(popped)
        for item in list(keylist):  # iterate over a copy so removal is safe
            if keylist.count(item) > 1:
                keylist.remove(item)
            if type(item) == list and item in keylist:
                keylist.remove(item)
valuestodict = []
for key in keylist:
if type(key) != list:
valuestodict.append(maindict[key])
else:
keylist.remove(key)
returndict = dict((key, values) for key, values in zip(keylist, valuestodict))
forprune = CsPP_Backend()
pruned = forprune._prune(returndict)
return pruned
def right_to_left(self, maindict, path):
tempkeys = list(path.keys())
tempvalues = list(path.values())
tempvalues.reverse()
tempkeys.reverse()
i = 0
flag = False
templist = []
removeditems = []
indexes = []
i = 0
templist.append(tempkeys[0])
for key in tempkeys:
for n in range(i, len(tempvalues)):
flag = False
for u in range(0, len(tempvalues[n])):
                    if len(tempvalues) != 0 and key == tempvalues[n][u]:
i = n
templist.append(tempkeys[n])
flag = True
break
if flag:
break
        for item in list(templist):  # iterate over a copy so removal is safe
if templist.count(item) > 1:
templist.remove(item)
dictvalues = []
for tempval in templist:
dictvalues.append(maindict[tempval])
availdict = dict((key, val) for key, val in zip(templist, dictvalues))
removedvalues = []
for key in list(maindict.keys()):
if not key in list(availdict.keys()):
removeditems.append(key)
removedvalues.append(maindict[key])
removeddict = dict((key, val) for key, val in zip(removeditems, removedvalues))
forprune = CsPP_Backend()
pruned = forprune._prune(availdict)
for key in list(removeddict.keys()):
pruned[key] = []
return pruned
#--------------------------------------------------------------------------#
class CsPP_Backend():
def __init__(self, *args, **kwargs):
self.domains = kwargs.get('domains')
self.conditions = kwargs.get('conditions')
self.what_want = kwargs.get('what_want')
self.starter = kwargs.get('starter')
def _backtrack(self):
if self.what_want == 'mrv':
return self._highest_constraint(self.domains, self.starter)
elif self.what_want == 'lcv':
return self._minimum_constraint(self.domains, self.starter)
else:
return self.domains
def _minimum_constraint(self, domains, starter = ''):
low_constraint = None
if starter != '':
yet_lowest = len(domains[starter])
else:
yet_lowest = len(domains[list(domains.keys())[0]])
for key, val in zip(list(domains.keys()), list(domains.values())):
if yet_lowest > len(val):
yet_lowest = len(val)
low_constraint = key
return low_constraint
def _highest_constraint(self, domains, starter = ''):
high_constraint = None
if starter != '':
yet_highest = len(domains[starter])
else:
yet_highest = len(domains[list(domains.keys())[0]])
for key, val in zip(list(domains.keys()), list(domains.values())):
if yet_highest < len(val):
yet_highest = len(val)
high_constraint = key
return high_constraint
def _prune(self, domains):
emptydict = {}
pruneditems = []
for key, value in zip(list(domains.keys()), list(domains.values())):
for val in value:
if val in pruneditems:
continue
                emptydict[key] = val
pruneditems.append(val)
break
for key in list(domains.keys()):
if not key in list(emptydict.keys()):
                emptydict[key] = []
return emptydict
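The greedy pruning in `CsPP_Backend._prune` can be exercised on its own. The following is a minimal standalone re-implementation of the same loop (assuming dict iteration order is the intended assignment order): each key claims the first value in its domain that no earlier key has claimed, and keys left with nothing get an empty list.

```python
def prune(domains):
    # mirrors CsPP_Backend._prune: greedy first-available-value assignment
    assigned = {}
    used = []
    for key, values in domains.items():
        for val in values:
            if val in used:
                continue
            assigned[key] = val
            used.append(val)
            break
    # keys that could not claim any value get an empty list
    for key in domains:
        if key not in assigned:
            assigned[key] = []
    return assigned

result = prune({'a': [1, 2], 'b': [1], 'c': [1]})  # -> {'a': 1, 'b': [], 'c': []}
```

Note that 'a' takes 1 before 'b' or 'c' can, so both are left unassigned; the order of keys in `domains` therefore decides who wins a contested value.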
#--------------------------------------------------------------------------#
#!/usr/bin/env python
import argparse
from eva import EvaProgram, Input, Output
from eva.ckks import CKKSCompiler
from eva.seal import generate_keys
import numpy as np
import time
from eva.std.numeric import horizontal_sum
def dot(x, y):
return np.dot(x, y)
def generate_inputs_naive(size, label="x"):
inputs = dict()
inputs_np = np.zeros((size))
i = 0
for n in range(size):
# each element is a list (i.e. a vector of size 1)
inputs[f"{label}_{n}"] = [i]
inputs_np[n] = i
i += 1
return inputs, inputs_np
def generate_vector_dot_naive(size):
"""Vector dot product with vector size of 1"""
fhe_dot = EvaProgram("fhe_dot", vec_size=1)
with fhe_dot:
a = np.array([Input(f"x_{n}") for n in range(size)]).reshape(1, size)
b = np.array([Input(f"w_{k}") for k in range(size)]).reshape(size, 1)
out = dot(a, b)
Output("y", out[0][0])
fhe_dot.set_input_scales(32)
fhe_dot.set_output_ranges(32)
return fhe_dot
def generate_inputs(size, label="x"):
inputs = dict()
inputs_np = np.zeros((size))
i = 0
# all data is stored in a single list of size `size`
inputs[label] = list(range(size))
for n in range(size):
inputs_np[n] = i
i += 1
return inputs, inputs_np
def generate_vector_dot(size):
"""Vector dot product with CKKS vector size equal to the size"""
fhe_dot = EvaProgram("fhe_dot", vec_size=size)
with fhe_dot:
a = np.array([Input("x")])
        b = np.array([Input("w")])
out = dot(a, b)
Output("y", horizontal_sum(out))
fhe_dot.set_input_scales(32)
fhe_dot.set_output_ranges(32)
return fhe_dot
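The two encodings produced by `generate_inputs_naive` and `generate_inputs` differ only in how elements are packed: naive mode makes one length-1 ciphertext input per element, while SIMD mode packs all elements into a single vector input. A condensed illustration mirroring the generators above, for a size of 4:

```python
# naive: one input name per element, each holding a 1-element vector
size = 4
naive_inputs = {f"x_{n}": [n] for n in range(size)}

# SIMD: a single input name holding the whole vector
simd_inputs = {"x": list(range(size))}
```

The SIMD layout is what lets `horizontal_sum` reduce the elementwise product in one ciphertext, instead of summing `size` separate ciphertexts.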
def benchmark_vector_dot(size, mode="SIMD"):
if mode == "SIMD":
# generate program with SIMD-style
inputs, inputs_np = generate_inputs(size, label="x")
weights, weights_np = generate_inputs(size, label="w")
fhe_dot = generate_vector_dot(size)
else:
# generate program with vector size = 1
inputs, inputs_np = generate_inputs_naive(size, label="x")
weights, weights_np = generate_inputs_naive(size, label="w")
fhe_dot = generate_vector_dot_naive(size)
# compiling program
data = {**weights, **inputs}
compiler = CKKSCompiler(config={"security_level": "128", "warn_vec_size": "false"})
compiled, params, signature = compiler.compile(fhe_dot)
public_ctx, secret_ctx = generate_keys(params)
enc_inputs = public_ctx.encrypt(data, signature)
# Running program
start = time.time()
enc_outputs = public_ctx.execute(compiled, enc_inputs)
end = time.time()
run_time = end - start
# decrypt the output
outputs = secret_ctx.decrypt(enc_outputs, signature)
y = np.array(outputs["y"])
# get time for plaintext dot product
start = time.time()
true_y = inputs_np.dot(weights_np)
end = time.time()
plain_run_time = end - start
# verifying correctness of output
np.testing.assert_allclose(y, true_y)
return run_time, plain_run_time
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Run a dot product program")
parser.add_argument(
"--mode",
default="SIMD",
choices=["SIMD", "naive"],
)
args = parser.parse_args()
results_cipher = dict()
results_plain = dict()
if args.mode == "SIMD":
print("Generating code in SIMD style")
else:
print("Generating code in naive style")
for size in [4, 8, 16, 32, 64, 128, 256, 512, 1024]:
time_cipher, time_plain = benchmark_vector_dot(size, args.mode)
results_cipher[f"{size}"] = time_cipher
results_plain[f"{size}"] = time_plain
print(f"Done vector size {size}, CKKS time: {time_cipher}")
print("Done")
print("CKKS times:", results_cipher)
print("Plain text times:", results_plain)
# Generated by Django 2.2 on 2019-10-25 10:58
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('sales', '0028_auto_20191024_1736'),
]
operations = [
migrations.AlterField(
model_name='interaction',
name='result',
field=models.CharField(blank=True, max_length=1000),
),
]
"""
UI for controlling how api classes and mel commands are combined into pymel classes.

This UI modifies factories.apiToMelData, which is pickled out to apiMelBridge.
It controls:
    which mel methods correspond to api methods
    disabling of api methods
    preference among overloaded methods (currently only one overloaded method is supported)
    renaming of api methods
"""
import inspect, re, os
import pymel.core as pm
import pymel.internal.factories as factories
import logging
logger = logging.getLogger(__name__)
if logger.level == logging.NOTSET:
logger.setLevel(logging.INFO)
FRAME_WIDTH = 800
VERBOSE = True
class PymelControlPanel(object):
def __init__(self):
# key is a tuple of (class, method)
self.classList = sorted( list( set( [ key[0] for key in factories.apiToMelData.keys()] ) ) )
self.classFrames={}
self.processClassFrames()
self.buildUI()
def buildUI(self):
_notifySavingDisabled()
self.win = pm.window(title='Pymel Control Panel')
self.win.show()
with pm.paneLayout(configuration='vertical3', paneSize=([1,20,100], [3,20,100]) ) as self.pane:
# Lef Column: Api Classes
self.classScrollList = pm.textScrollList('apiClassList')
# Center Column: Api Methods
# Would LIKE to do it like this, but there is currently a bug with
# objectType UI, such that even if
# layout('window4|paneLayout5', q=1, exists=1) == True
# when you run:
# objectTypeUI('window4|paneLayout5')
# you will get an error:
# RuntimeError: objectTypeUI: Object 'window4|paneLayout5' not found.
# with formLayout() as apiForm:
# #with scrollLayout() as scroll:
# with tabLayout('apiMethodCol') as self.apiMethodCol:
# pass
# status = helpLine(h=60)
# So, instead, we do it old-school...
apiForm = pm.formLayout()
self.apiMethodCol = pm.tabLayout('apiMethodCol')
pm.setParent(apiForm)
status = pm.cmds.helpLine(h=60)
pm.setParent(self.pane)
apiForm.attachForm( self.apiMethodCol, 'top', 5 )
apiForm.attachForm( self.apiMethodCol, 'left', 5 )
apiForm.attachForm( self.apiMethodCol, 'right', 5 )
apiForm.attachControl( self.apiMethodCol, 'bottom', 5, status )
apiForm.attachPosition( status, 'bottom', 5, 20 )
apiForm.attachForm( status, 'bottom', 5 )
apiForm.attachForm( status, 'left', 5 )
apiForm.attachForm( status, 'right', 5 )
# Right Column: Mel Methods
melForm = pm.formLayout()
label1 = pm.text( label='Unassigned Mel Methods' )
self.unassignedMelMethodLister = pm.textScrollList()
label2 = pm.text( label='Assigned Mel Methods' )
self.assignedMelMethodLister = pm.textScrollList()
label3 = pm.text( label='Disabled Mel Methods' )
self.disabledMelMethodLister = pm.textScrollList()
pm.setParent(self.pane)
melForm.attachForm( label1, 'top', 5 )
melForm.attachForm( label1, 'left', 5 )
melForm.attachForm( label1, 'right', 5 )
melForm.attachControl( self.unassignedMelMethodLister, 'top', 0, label1 )
melForm.attachForm( self.unassignedMelMethodLister, 'left', 5 )
melForm.attachForm( self.unassignedMelMethodLister, 'right', 5 )
melForm.attachPosition( self.unassignedMelMethodLister, 'bottom', 5, 33 )
melForm.attachControl( label2, 'top', 5, self.unassignedMelMethodLister)
melForm.attachForm( label2, 'left', 5 )
melForm.attachForm( label2, 'right', 5 )
melForm.attachControl( self.assignedMelMethodLister, 'top', 0, label2 )
melForm.attachForm( self.assignedMelMethodLister, 'left', 5 )
melForm.attachForm( self.assignedMelMethodLister, 'right', 5 )
melForm.attachPosition( self.assignedMelMethodLister, 'bottom', 5, 66 )
melForm.attachControl( label3, 'top', 5, self.assignedMelMethodLister)
melForm.attachForm( label3, 'left', 5 )
melForm.attachForm( label3, 'right', 5 )
melForm.attachControl( self.disabledMelMethodLister, 'top', 0, label3 )
melForm.attachForm( self.disabledMelMethodLister, 'left', 5 )
melForm.attachForm( self.disabledMelMethodLister, 'right', 5 )
melForm.attachForm( self.disabledMelMethodLister, 'bottom', 5 )
pm.setParent('..')
pm.popupMenu(parent=self.unassignedMelMethodLister, button=3 )
pm.menuItem(l='disable', c=pm.Callback( PymelControlPanel.disableMelMethod, self, self.unassignedMelMethodLister ) )
pm.popupMenu(parent=self.assignedMelMethodLister, button=3 )
pm.menuItem(l='disable', c=pm.Callback( PymelControlPanel.disableMelMethod, self, self.assignedMelMethodLister ) )
pm.popupMenu(parent=self.disabledMelMethodLister, button=3 )
pm.menuItem(l='enable', c=pm.Callback( PymelControlPanel.enableMelMethod))
self.classScrollList.extend( self.classList )
self.classScrollList.selectCommand( lambda: self.apiClassList_selectCB() )
pm.scriptJob(uiDeleted=[str(self.win),cacheResults])
self.win.show()
def disableMelMethod(self, menu):
msel = menu.getSelectItem()
csel = self.classScrollList.getSelectItem()
if msel and csel:
method = msel[0]
clsname = csel[0]
menu.removeItem(method)
self.disabledMelMethodLister.append( method )
#print clsname, method, factories.apiToMelData[ (clsname, method) ]
factories.apiToMelData[ (clsname, method) ]['melEnabled'] = False
def enableMelMethod(self):
menu = self.disabledMelMethodLister
msel = menu.getSelectItem()
csel = self.classScrollList.getSelectItem()
if msel and csel:
method = msel[0]
clsname = csel[0]
menu.removeItem(method)
self.unassignedMelMethodLister.append( method )
#print clsname, method, factories.apiToMelData[ (clsname, method) ]
factories.apiToMelData[ (clsname, method) ].pop('melEnabled')
@staticmethod
def getMelMethods(className):
"""get all mel-derived methods for this class"""
import maintenance.build
if not factories.classToMelMap.keys():
# force factories.classToMelMap to be populated
list(maintenance.build.iterPyNodeText())
assert factories.classToMelMap
reg = re.compile('(.*[a-z])([XYZ])$')
newlist = []
origlist = factories.classToMelMap.get(className, [])
for method in origlist:
m = reg.search(method)
if m:
# strip off the XYZ component and replace with *
newname = m.group(1) + '*'
if newname not in newlist:
newlist.append(newname)
else:
newlist.append(method)
return sorted(newlist)
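The XYZ-collapsing step in `getMelMethods` can be sketched in isolation: per-axis mel methods such as translateX/Y/Z fold into a single wildcard entry. The method names below are illustrative, not taken from any real node type:

```python
import re

# same pattern as getMelMethods: a lowercase run followed by a trailing X, Y or Z
reg = re.compile('(.*[a-z])([XYZ])$')
methods = ['translateX', 'translateY', 'translateZ', 'visibility']
collapsed = []
for method in methods:
    m = reg.search(method)
    if m:
        # strip off the axis suffix and replace it with '*'
        name = m.group(1) + '*'
        if name not in collapsed:
            collapsed.append(name)
    else:
        collapsed.append(method)
collapsed = sorted(collapsed)  # ['translate*', 'visibility']
```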
def apiClassList_selectCB(self, *args):
sel = self.classScrollList.getSelectItem()
if sel:
self.buildClassColumn(sel[0])
def assignMelMethod(self, method):
#print "method %s is now assigned" % method
if method in pm.util.listForNone( self.unassignedMelMethodLister.getAllItems() ):
self.unassignedMelMethodLister.removeItem(method)
self.assignedMelMethodLister.append( method )
def unassignMelMethod(self, method):
#print "method %s is now unassigned" % method
if method in pm.util.listForNone( self.assignedMelMethodLister.getAllItems() ):
self.assignedMelMethodLister.removeItem(method)
self.unassignedMelMethodLister.append( method )
def processClassFrames(self):
"""
This triggers the generation of all the defaults for `factories.apiToMelData`, but it does
not create any UI elements. It creates `ClassFrame` instances, which in turn create
`MethodRow` instances, but the creation of UI elements is delayed until a particular
configuration is requested via `buildClassColumn`.
"""
logger.info( 'processing all classes...' )
for className in self.classList:
melMethods = self.getMelMethods(className)
logger.debug( '%s: mel methods: %s' % (className, melMethods) )
for clsName, apiClsName in getClassHierarchy(className):
if apiClsName and apiClsName not in ['list']:
if clsName not in self.classFrames:
frame = ClassFrame( self, clsName, apiClsName)
self.classFrames[clsName] = frame
# temporarily disable the melName updating until we figure out how to deal
# with base classes that are the parents of many others, and which therefore end up with
# methods derived from many different mel commands, which are only applicable for the inherited classes
# not for the base class on its own. ( see ObjectSet and Character, for an example, specifically 'getIntersection' method )
#self.classFrames[clsName].updateMelNames( melMethods )
logger.info( 'done processing classes' )
def buildClassColumn(self, className ):
"""
Build an info column for a class. This column will include processed `ClassFrame`s for it and its parent classes
"""
pm.setParent(self.apiMethodCol)
self.apiMethodCol.clear()
self.unassignedMelMethodLister.removeAll()
self.assignedMelMethodLister.removeAll()
self.disabledMelMethodLister.removeAll()
melMethods = self.getMelMethods(className)
for method in melMethods:
# fix
if (className, method) in factories.apiToMelData and factories.apiToMelData[ (className, method) ] == {'enabled':False}:
d = factories.apiToMelData.pop( (className, method) )
d.pop('enabled')
d['melEnabled'] = False
if (className, method) in factories.apiToMelData and factories.apiToMelData[(className, method)].get('melEnabled',True) == False:
self.disabledMelMethodLister.append( method )
else:
self.unassignedMelMethodLister.append( method )
#filter = set( ['double', 'MVector'] )
filter = []
count = 0
for clsName, apiClsName in getClassHierarchy(className):
if apiClsName:
#print cls
if clsName in self.classFrames:
logger.debug( "building UI for %s", clsName )
frame = self.classFrames[clsName].buildUI(filter)
self.apiMethodCol.setTabLabel( [frame, clsName] )
count+=1
#frame.setVisible(False)
#if i != len(mro)-1:
# frame.setCollapse(True)
else:
logger.debug( "skipping %s", clsName )
self.apiMethodCol.setSelectTabIndex(count)
#self.classFrames[className].frame.setCollapse(False)
class ClassFrame(object):
def __init__(self, parent, className, apiClassName ):
self.parent = parent
self.className = className
self.apiClassName = apiClassName
self.rows = {}
self.classInfo = factories.apiClassInfo[apiClassName]['methods']
for method in self.classInfo.keys():
row = MethodRow( self, self.className, self.apiClassName, method, self.classInfo[method] )
self.rows[method] = row
def updateMelNames(self, melMethods):
logger.debug( '%s: updating melNames' % self.className )
for rowName, row in self.rows.items():
row.updateMelNames( melMethods )
def buildUI(self, filter=None):
count = 0
#self.form = formLayout()
with pm.frameLayout(collapsable=False, label='%s (%s)' % (self.className, self.apiClassName),
width = FRAME_WIDTH) as self.frame:
#labelAlign='top')
with pm.tabLayout() as tab:
invertibles = factories.apiClassInfo[self.apiClassName].get('invertibles', [])
usedMethods = []
with pm.formLayout() as pairdForm:
tab.setTabLabel( [pairdForm, 'Paired'] )
with pm.scrollLayout() as pairedScroll:
with pm.columnLayout(visible=False, adjustableColumn=True) as pairedCol:
for setMethod, getMethod in invertibles:
pm.setParent(pairedCol) # column
frame = pm.frameLayout(label = '%s / %s' % (setMethod, getMethod),
labelVisible=True, collapsable=True,
collapse=True, width = FRAME_WIDTH)
col2 = pm.columnLayout()
pairCount = 0
pairCount += self.rows[setMethod].buildUI(filter)
pairCount += self.rows[getMethod].buildUI(filter)
usedMethods += [setMethod, getMethod]
if pairCount == 0:
#deleteUI(col2)
frame.setVisible(False)
frame.setHeight(1)
count += pairCount
pairedCol.setVisible(True)
pairdForm.attachForm( pairedScroll, 'top', 5 )
pairdForm.attachForm( pairedScroll, 'left', 5 )
pairdForm.attachForm( pairedScroll, 'right', 5 )
pairdForm.attachForm( pairedScroll, 'bottom', 5 )
with pm.formLayout() as unpairedForm:
tab.setTabLabel( [unpairedForm, 'Unpaired'] )
with pm.scrollLayout() as unpairedScroll:
with pm.columnLayout(visible=False ) as unpairedCol:
# For some reason, on linux, the unpairedCol height is wrong...
# track + set it ourselves
unpairedHeight = 10 # a little extra buffer...
#rowSpace = unpairedCol.getRowSpacing()
for methodName in sorted( self.classInfo.keys() ):
pm.setParent(unpairedCol)
if methodName not in usedMethods:
frame = pm.frameLayout(label = methodName,
labelVisible=True, collapsable=True,
collapse=True, width = FRAME_WIDTH)
col2 = pm.columnLayout()
count += self.rows[methodName].buildUI(filter)
unpairedHeight += self.rows[methodName].frame.getHeight()# + rowSpace
unpairedCol.setHeight(unpairedHeight)
#self.form.attachForm( self.frame, 'left', 2)
#self.form.attachForm( self.frame, 'right', 2)
#self.form.attachForm( self.frame, 'top', 2)
#self.form.attachForm( self.frame, 'bottom', 2)
unpairedCol.setVisible(True)
unpairedForm.attachForm( unpairedScroll, 'top', 5 )
unpairedForm.attachForm( unpairedScroll, 'left', 5 )
unpairedForm.attachForm( unpairedScroll, 'right', 5 )
unpairedForm.attachForm( unpairedScroll, 'bottom', 5 )
return self.frame
class MethodRow(object):
def __init__(self, parent, className, apiClassName, apiMethodName,
methodInfoList):
self.parent = parent
self.className = className
self.methodName = factories.apiClassInfo[apiClassName].get('pymelMethods', {}).get(apiMethodName, apiMethodName)
self.apiClassName = apiClassName
self.apiMethodName = apiMethodName
self.methodInfoList = methodInfoList
self.data = factories._getApiOverrideData(self.className, self.methodName)
self.classInfo = factories.apiClassInfo[self.apiClassName]['methods'][self.apiMethodName]
try:
enabledArray = self.getEnabledArray()
except:
print self.apiClassName, self.apiMethodName
raise
# DEFAULT VALUES
# correct old values
# we no longer store positive values, only negative -- meaning methods will be enabled by default
# if 'enabled' in self.data and ( self.data['enabled'] == True or sum(enabledArray) == 0 ):
# logger.debug( '%s.%s: enabled array: %s' % ( self.className, self.methodName, enabledArray ) )
# logger.debug( '%s.%s: removing enabled entry' % ( self.className, self.methodName) )
# self.data.pop('enabled', None)
# enabled
# if not self.data.has_key( 'enabled' ):
# self.data['enabled'] = True
if self.methodName in factories.EXCLUDE_METHODS : # or sum(enabledArray) == 0:
self.data['enabled'] = False
# useName mode
if not self.data.has_key( 'useName' ):
self.data['useName'] = 'API'
else:
# correct old values
useNameVal = self.data['useName']
if useNameVal == True:
self.data['useName'] = 'API'
elif useNameVal == False:
self.data['useName'] = 'MEL'
elif useNameVal not in ['MEL', 'API']:
self.data['useName'] = str(useNameVal)
# correct old values
if self.data.has_key('overloadPrecedence'):
self.data['overloadIndex'] = self.data.pop('overloadPrecedence')
# correct old values
if self.data.has_key('melName'):
#logger.debug( "correcting melName %s %s %s" % (self.className, self.methodName, str(self.data['melName']) ) )
self.data['melName'] = str(self.data['melName'])
overloadId = self.data.get('overloadIndex', 0)
if overloadId is None:
# in a previous test, it was determined there were no wrappable overload methods,
# but there may be now. try again.
overloadId = 0
        # ensure we don't use a value that is not valid: advance to the first
        # wrappable overload at or after the stored index, or fall back to None
        for i in range(overloadId, len(enabledArray)):
            if enabledArray[i]:
                overloadId = i
                break
        else:
            overloadId = None
# if val is None:
# # nothing valid
# self.data.pop('overloadIndex', None)
# else:
self.data['overloadIndex'] = overloadId
def crossReference(self, melName):
""" create an entry for the melName which points to the data being tracked for the api name"""
factories.apiToMelData[ (self.className, melName ) ] = self.data
def uncrossReference(self, melName):
factories.apiToMelData.pop( (self.className, melName ) )
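What `crossReference` relies on is plain dict aliasing: the mel-name key is made to point at the *same* dict object as the api entry, so an update through either key is visible through the other. A minimal sketch (class and method names invented):

```python
# both keys reference one shared dict, as crossReference arranges
apiToMelData = {}
data = {'useName': 'API'}
apiToMelData[('Camera', 'zoom')] = data     # entry tracked under the api name
apiToMelData[('Camera', 'camera')] = data   # crossReference(melName)

# mutating through the mel-name key is seen through the api-name key too
apiToMelData[('Camera', 'camera')]['useName'] = 'MEL'
```

`uncrossReference` simply pops the mel-name key, leaving the shared dict alive under the api-name key.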
def updateMelNames(self, melMethods):
# melName
if not self.data.has_key( 'melName' ):
match = None
for method in melMethods:
methreg = re.compile(method.replace('*', '.{0,1}') + '$')
#print self.methodName, methreg
if methreg.match( self.methodName ):
match = str(method)
break
if match:
logger.debug( "%s.%s: adding melName %s" % ( self.className, self.methodName, match ) )
self.data['melName'] = match
self.crossReference( match )
def buildUI(self, filter=None):
if filter:
match = False
for i, info in enumerate( self.methodInfoList):
argUtil = factories.ApiArgUtil( self.apiClassName, self.apiMethodName, i )
if filter.intersection( argUtil.getInputTypes() + argUtil.getOutputTypes() ):
match = True
break
if match == False:
return False
self.layout = { 'columnAlign' : [1,'right'],
'columnAttach' : [1,'right',8] }
#print className, self.methodName, melMethods
isOverloaded = len(self.methodInfoList)>1
self.frame = pm.frameLayout( w=FRAME_WIDTH, labelVisible=False, collapsable=False)
logger.debug("building row for %s - %s" % (self.methodName, self.frame))
col = pm.columnLayout()
enabledArray = []
self.rows = []
self.overloadPrecedenceColl = None
self.enabledChBx = pm.checkBox(label=self.methodName,
changeCommand=pm.CallbackWithArgs( MethodRow.enableCB, self ) )
if isOverloaded:
self.overloadPrecedenceColl = pm.radioCollection()
            for i in range(len(self.methodInfoList)):
self.createMethodInstance(i)
else:
#row = rowLayout( self.methodName + '_rowMain', nc=2, cw2=[200, 400] )
#self.enabledChBx = checkBox(label=self.methodName, changeCommand=CallbackWithArgs( MethodRow.enableCB, self ) )
#text(label='')
self.createMethodInstance(0)
#setParent('..')
pm.setParent(col)
pm.separator(w=800, h=6)
#self.row = rowLayout( self.methodName + '_rowSettings', nc=4, cw4=[200, 160, 180, 160] )
#self.rows.append(row)
self.row = pm.rowLayout( self.methodName + '_rowSettings', nc=2, cw2=[200, 220], **self.layout )
self.rows.append(self.row)
# create ui elements
pm.text(label='Mel Equivalent')
self.melNameTextField = pm.textField(w=170, editable=False)
self.melNameOptMenu = pm.popupMenu(parent=self.melNameTextField,
button=1,
postMenuCommand=pm.Callback( MethodRow.populateMelNameMenu, self ) )
pm.setParent('..')
self.row2 = pm.rowLayout( self.methodName + '_rowSettings2', nc=3, cw3=[200, 180, 240], **self.layout )
self.rows.append(self.row2)
pm.text(label='Use Name')
self.nameMode = pm.radioButtonGrp(label='', nrb=3, cw4=[1,50,50,50], labelArray3=['api', 'mel', 'other'] )
self.altNameText = pm.textField(w=170, enable=False)
self.altNameText.changeCommand( pm.CallbackWithArgs( MethodRow.alternateNameCB, self ) )
self.nameMode.onCommand( pm.Callback( MethodRow.nameTypeCB, self ) )
isEnabled = self.data.get('enabled', True)
# UI SETUP
melName = self.data.get('melName', '')
try:
#self.melNameOptMenu.setValue( melName )
self.melNameTextField.setText(melName)
if melName != '':
self.parent.parent.assignMelMethod( melName )
except RuntimeError:
# it is possible for a method name to be listed here that was set from a different view,
# where this class was a super class and more mel commands were available. expand the option list,
# and make this frame read-only
pm.menuItem( label=melName, parent=self.melNameOptMenu )
self.melNameOptMenu.setValue( melName )
logger.debug( "making %s frame read-only" % self.methodName )
self.frame.setEnable(False)
self.enabledChBx.setValue( isEnabled )
self.row.setEnable( isEnabled )
self.row2.setEnable( isEnabled )
name = self.data['useName']
if name == 'API' :
self.nameMode.setSelect( 1 )
self.altNameText.setEnable(False)
elif name == 'MEL' :
self.nameMode.setSelect( 2 )
self.altNameText.setEnable(False)
else :
self.nameMode.setSelect( 3 )
self.altNameText.setText(name)
self.altNameText.setEnable(True)
if self.overloadPrecedenceColl:
items = self.overloadPrecedenceColl.getCollectionItemArray()
try:
val = self.data.get('overloadIndex', 0)
if val is None:
logger.info( "no wrappable options for method %s" % self.methodName )
self.frame.setEnable( False )
else:
self.overloadPrecedenceColl.setSelect( items[ val ] )
except:
pass
# # ensure we don't use a value that is not valid
# for val in range(val, len(enabledArray)+1):
# try:
# if enabledArray[val]:
# break
# except IndexError:
# val = None
# if val is not None:
# self.overloadPrecedenceColl.setSelect( items[ val ] )
pm.setParent('..')
pm.setParent('..') # frame
pm.setParent('..') # column
return True
def enableCB(self, *args ):
logger.debug( 'setting enabled to %s' % args[0] )
if args[0] == False:
self.data['enabled'] = False
else:
self.data.pop('enabled', None)
self.row.setEnable( args[0] )
def nameTypeCB(self ):
logger.info( 'setting name type' )
selected = self.nameMode.getSelect()
if selected == 1:
val = 'API'
self.altNameText.setEnable(False)
elif selected == 2:
val = 'MEL'
self.altNameText.setEnable(False)
else:
val = str(self.altNameText.getText())
self.altNameText.setEnable(True)
logger.debug( 'data %s' % self.data )
self.data['useName'] = val
def alternateNameCB(self, *args ):
self.data['useName'] = str(args[0])
# def formatAnnotation(self, apiClassName, methodName ):
# defs = []
# try:
# for methodInfo in factories.apiClassInfo[apiClassName]['methods'][methodName] :
# args = ', '.join( [ '%s %s' % (x[1],x[0]) for x in methodInfo['args'] ] )
# defs.append( '%s( %s )' % ( methodName, args ) )
# return '\n'.join( defs )
# except KeyError:
# print "could not find documentation for", apiClassName, methodName
def overloadPrecedenceCB(self, i):
logger.debug( 'overloadPrecedenceCB' )
self.data['overloadIndex'] = i
def melNameChangedCB(self, newMelName):
oldMelName = str(self.melNameTextField.getText())
if oldMelName:
self.uncrossReference( oldMelName )
if newMelName == '[None]':
print "removing melName"
self.data.pop('melName',None)
self.parent.parent.unassignMelMethod( oldMelName )
self.melNameTextField.setText('')
else:
print "adding melName", newMelName
self.crossReference( newMelName )
self.data['melName'] = newMelName
self.parent.parent.assignMelMethod( newMelName )
self.melNameTextField.setText(newMelName)
def populateMelNameMenu(self):
"""called to populate the popup menu for choosing the mel equivalent to an api method"""
self.melNameOptMenu.deleteAllItems()
pm.menuItem(parent=self.melNameOptMenu, label='[None]', command=pm.Callback( MethodRow.melNameChangedCB, self, '[None]' ))
# need to add a listForNone to this in windows
items = self.parent.parent.unassignedMelMethodLister.getAllItems()
if items:
for method in items:
pm.menuItem(parent=self.melNameOptMenu, label=method, command=pm.Callback( MethodRow.melNameChangedCB, self, str(method) ))
def getEnabledArray(self):
"""returns an array of booleans that correspond to each override method and whether they can be wrapped"""
array = []
for i, info in enumerate( self.methodInfoList ):
argUtil = factories.ApiArgUtil( self.apiClassName, self.apiMethodName, i )
array.append( argUtil.canBeWrapped() )
return array
def createMethodInstance(self, i ):
#setUITemplate('attributeEditorTemplate', pushTemplate=1)
rowSpacing = [30, 20, 400]
defs = []
#try:
argUtil = factories.ApiArgUtil( self.apiClassName, self.apiMethodName, i )
proto = argUtil.getPrototype( className=False, outputs=True, defaults=False )
enable = argUtil.canBeWrapped()
if argUtil.isDeprecated():
pm.text(l='DEPRECATED')
# main info row
row = pm.rowLayout( '%s_rowMain%s' % (self.methodName,i), nc=3, cw3=rowSpacing, enable=enable )
self.rows.append(row)
pm.text(label='')
if self.overloadPrecedenceColl is not None:
# toggle for overloaded methods
pm.radioButton(label='', collection=self.overloadPrecedenceColl,
enable = enable,
onCommand=pm.Callback( MethodRow.overloadPrecedenceCB, self, i ))
pm.text( l='', #l=proto,
annotation = self.methodInfoList[i]['doc'],
enable = enable)
pm.setParent('..')
try:
argList = factories.apiClassOverrides[self.apiClassName]['methods'][self.apiMethodName][i]['args']
except (KeyError, IndexError):
argList = self.methodInfoList[i]['args']
returnType = self.methodInfoList[i]['returnType']
types = self.methodInfoList[i]['types']
args = []
        for arg, type, direction in argList:
type = str(types[arg])
assert arg != 'return'
self._makeArgRow( i, type, arg, direction, self.methodInfoList[i]['argInfo'][arg]['doc'] )
if returnType:
self._makeArgRow( i, returnType, 'return', 'return', self.methodInfoList[i]['returnInfo']['doc'] )
pm.separator(w=800, h=14)
return enable
# methodInfo = factories.apiClassInfo[self.apiClassName]['methods'][self.apiMethodName][overloadNum]
# args = ', '.join( [ '%s %s' % (x[1],x[0]) for x in methodInfo['args'] ] )
# return '( %s ) --> ' % ( args )
#except:
# print "could not find documentation for", apiClassName, methodName
def setUnitType(self, methodIndex, argName, unitType ):
if self.apiClassName not in factories.apiClassOverrides:
factories.apiClassOverrides[self.apiClassName] = { 'methods' : {} }
methodOverrides = factories.apiClassOverrides[self.apiClassName]['methods']
if self.apiMethodName not in methodOverrides:
methodOverrides[self.apiMethodName] = {}
if argName == 'return':
if methodIndex not in methodOverrides[self.apiMethodName]:
methodOverrides[self.apiMethodName][methodIndex] = { 'returnInfo' : {} }
methodOverrides[self.apiMethodName][methodIndex]['returnInfo']['unitType'] = unitType
else:
if methodIndex not in methodOverrides[self.apiMethodName]:
methodOverrides[self.apiMethodName][methodIndex] = { 'argInfo' : {} }
if argName not in methodOverrides[self.apiMethodName][methodIndex]['argInfo']:
methodOverrides[self.apiMethodName][methodIndex]['argInfo'][argName] = {}
methodOverrides[self.apiMethodName][methodIndex]['argInfo'][argName]['unitType'] = unitType
    def setDirection(self, methodIndex, argName, direction ):
        if self.apiClassName not in factories.apiClassOverrides:
            factories.apiClassOverrides[self.apiClassName] = { 'methods' : {} }
        methodOverrides = factories.apiClassOverrides[self.apiClassName]['methods']
        if self.apiMethodName not in methodOverrides:
            methodOverrides[self.apiMethodName] = {}
        if methodIndex not in methodOverrides[self.apiMethodName]:
            methodOverrides[self.apiMethodName][methodIndex] = { }

        try:
            argList = methodOverrides[self.apiMethodName][methodIndex]['args']
        except KeyError:
            argList = self.methodInfoList[methodIndex]['args']

        newArgList = []
        inArgs = []
        outArgs = []
        for i_argName, i_argType, i_direction in argList:
            if i_argName == argName:
                argInfo = ( i_argName, i_argType, direction )
            else:
                argInfo = ( i_argName, i_argType, i_direction )
            if argInfo[2] == 'in':
                inArgs.append( i_argName )
            else:
                outArgs.append( i_argName )
            newArgList.append( argInfo )

        methodOverrides[self.apiMethodName][methodIndex] = { }
        methodOverrides[self.apiMethodName][methodIndex]['args'] = newArgList
        methodOverrides[self.apiMethodName][methodIndex]['inArgs'] = inArgs
        methodOverrides[self.apiMethodName][methodIndex]['outArgs'] = outArgs
    def _makeArgRow(self, methodIndex, type, argName, direction, annotation=''):
        COL1_WIDTH = 260
        COL2_WIDTH = 120
        pm.rowLayout( nc=4, cw4=[COL1_WIDTH, COL2_WIDTH, 70, 150], **self.layout )
        label = str(type)

        pm.text( l=label, ann=annotation )
        pm.text( l=argName, ann=annotation )

        if direction == 'return':
            pm.text( l='(result)' )
        else:
            direction_om = pm.optionMenu(l='', w=60, ann=annotation,
                                         cc=pm.CallbackWithArgs( MethodRow.setDirection, self, methodIndex, argName ) )
            for unit in ['in', 'out']:
                pm.menuItem(l=unit)
            direction_om.setValue(direction)

        if self._isPotentialUnitType(type):
            om = pm.optionMenu(l='', ann=annotation,
                               cc=pm.CallbackWithArgs( MethodRow.setUnitType, self, methodIndex, argName ) )
            for unit in ['unitless', 'linear', 'angular', 'time']:
                pm.menuItem(l=unit)
            if argName == 'return':
                try:
                    value = factories.apiClassOverrides[self.apiClassName]['methods'][self.apiMethodName][methodIndex]['returnInfo']['unitType']
                except KeyError:
                    pass
            else:
                try:
                    value = factories.apiClassOverrides[self.apiClassName]['methods'][self.apiMethodName][methodIndex]['argInfo'][argName]['unitType']
                except KeyError:
                    pass
            try:
                om.setValue(value)
            except:
                pass
        else:
            pm.text( l='', ann=annotation )
        pm.setParent('..')

    def _isPotentialUnitType(self, type):
        type = str(type)
        return type == 'MVector' or type.startswith('double')
def _getClass(className):
    for module in [pm.nodetypes, pm.datatypes, pm.general]:
        try:
            pymelClass = getattr(module, className)
            return pymelClass
        except AttributeError:
            pass

def getApiClassName( className ):
    pymelClass = _getClass(className)
    if pymelClass:
        apiClass = None
        apiClassName = None
        #if cls.__name__ not in ['object']:
        try:
            apiClass = pymelClass.__dict__[ '__apicls__']
            apiClassName = apiClass.__name__
        except KeyError:
            try:
                apiClass = pymelClass.__dict__[ 'apicls']
                apiClassName = apiClass.__name__
            except KeyError:
                #print "could not determine api class for", cls.__name__
                apiClassName = None
        return apiClassName
    else:
        logger.warning( "could not find class %s" % (className) )

def getClassHierarchy( className ):
    pymelClass = _getClass(className)
    if pymelClass:
        mro = list( inspect.getmro(pymelClass) )
        mro.reverse()
        for i, cls in enumerate(mro):
            #if cls.__name__ not in ['object']:
            try:
                apiClass = cls.__dict__[ '__apicls__']
                apiClassName = apiClass.__name__
            except KeyError:
                try:
                    apiClass = cls.__dict__[ 'apicls']
                    apiClassName = apiClass.__name__
                except KeyError:
                    #print "could not determine api class for", cls.__name__
                    apiClassName = None
            yield cls.__name__, apiClassName
    else:
        logger.warning( "could not find class %s" % (className) )
def setManualDefaults():
    # set some defaults
    # TODO : allow these defaults to be controlled via the UI
    pm.util.setCascadingDictItem( factories.apiClassOverrides, ('MFnTransform', 'methods', 'setScalePivot', 0, 'defaults', 'balance' ), True )
    pm.util.setCascadingDictItem( factories.apiClassOverrides, ('MFnTransform', 'methods', 'setRotatePivot', 0, 'defaults', 'balance' ), True )
    pm.util.setCascadingDictItem( factories.apiClassOverrides, ('MFnTransform', 'methods', 'setRotateOrientation', 0, 'defaults', 'balance' ), True )
    pm.util.setCascadingDictItem( factories.apiClassOverrides, ('MFnSet', 'methods', 'getMembers', 0, 'defaults', 'flatten' ), False )
    pm.util.setCascadingDictItem( factories.apiClassOverrides, ('MFnDagNode', 'methods', 'instanceCount', 0, 'defaults', 'total' ), True )
    pm.util.setCascadingDictItem( factories.apiClassOverrides, ('MFnMesh', 'methods', 'createColorSetWithName', 1, 'defaults', 'modifier' ), None )

    # add some manual invertibles: THESE MUST BE THE API NAMES
    invertibles = [ ('MPlug', 0, 'setCaching', 'isCachingFlagSet'),
                    ('MPlug', 0, 'setChannelBox', 'isChannelBoxFlagSet'),
                    ('MFnTransform', 0, 'enableLimit', 'isLimited'),
                    ('MFnTransform', 0, 'setLimit', 'limitValue'),
                    ('MFnTransform', 0, 'set', 'transformation'),
                    ('MFnRadialField', 0, 'setType', 'radialType')
                  ]
    for className, methodIndex, setter, getter in invertibles:
        # append to the class-level invertibles list
        curr = pm.util.getCascadingDictItem( factories.apiClassInfo, (className, 'invertibles' ), [] )
        pair = (setter, getter)
        if pair not in curr:
            curr.append( pair )
        pm.util.setCascadingDictItem( factories.apiClassOverrides, (className, 'invertibles'), curr )
        # add the individual method entries
        pm.util.setCascadingDictItem( factories.apiClassOverrides, (className, 'methods', setter, methodIndex, 'inverse' ), (getter, True) )
        pm.util.setCascadingDictItem( factories.apiClassOverrides, (className, 'methods', getter, methodIndex, 'inverse' ), (setter, False) )

    nonInvertibles = [ ( 'MFnMesh', 0, 'setFaceVertexNormals', 'getFaceVertexNormals' ),
                       ( 'MFnMesh', 0, 'setFaceVertexNormal', 'getFaceVertexNormal' ) ]
    for className, methodIndex, setter, getter in nonInvertibles:
        pm.util.setCascadingDictItem( factories.apiClassOverrides, (className, 'methods', setter, methodIndex, 'inverse' ), None )
        pm.util.setCascadingDictItem( factories.apiClassOverrides, (className, 'methods', getter, methodIndex, 'inverse' ), None )
    fixSpace()
def fixSpace():
    "fix the Space enumerator"
    enum = pm.util.getCascadingDictItem( factories.apiClassInfo, ('MSpace', 'pymelEnums', 'Space') )
    keys = enum._keys.copy()
    #print keys
    val = keys.pop('postTransform', None)
    if val is not None:
        keys['object'] = val
        newEnum = pm.util.Enum( 'Space', keys )
        pm.util.setCascadingDictItem( factories.apiClassOverrides, ('MSpace', 'pymelEnums', 'Space'), newEnum )
    else:
        logger.warning( "could not fix Space")
def _notifySavingDisabled():
    pm.confirmDialog(title='Saving Disabled',
                     message='Saving using this UI has been disabled until it'
                             ' can be updated. Changes will not be saved.')

def cacheResults():
    _notifySavingDisabled()
    return
    # res = pm.confirmDialog( title='Cache Results?',
    #                         message="Would you like to write your changes to disk? If you choose 'No' your changes will be lost when you restart Maya.",
    #                         button=['Yes','No'],
    #                         cancelButton='No',
    #                         defaultButton='Yes')
    # print res
    # if res == 'Yes':
    #     doCacheResults()

# def doCacheResults():
#     print "---"
#     print "adding manual defaults"
#     setManualDefaults()
#     print "merging dictionaries"
#     # update apiClassInfo with the sparse data stored in apiClassOverrides
#     factories.mergeApiClassOverrides()
#     print "saving api cache"
#     factories.saveApiCache()
#     print "saving bridge"
#     factories.saveApiMelBridgeCache()
#     print "---"
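setManualDefaults above leans on pm.util.setCascadingDictItem to build nested override dictionaries keyed by an arbitrary-length tuple path. A minimal self-contained sketch of that cascading-set behavior (set_cascading is a hypothetical stand-in for illustration, not pymel's implementation):

```python
def set_cascading(d, keys, value):
    # walk/create nested dicts along `keys`, then set the leaf to `value`
    for k in keys[:-1]:
        d = d.setdefault(k, {})
    d[keys[-1]] = value

overrides = {}
set_cascading(overrides, ('MFnTransform', 'methods', 'setScalePivot', 0, 'defaults', 'balance'), True)
print(overrides['MFnTransform']['methods']['setScalePivot'][0]['defaults']['balance'])  # True
```

Because setdefault only creates a level when it is missing, repeated calls with overlapping key paths merge into one nested structure rather than overwriting it.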
# File: create_tweet_classes.py (repo: jmcguinness11/StockPredictor, license: MIT)
# create_tweet_classes.py
# this assumes the existence of a get_class(day, hour, ticker) function
# that returns the class (0, 1, or -1) for a given hour and ticker

import collections
import json
import random

refined_tweets = collections.defaultdict(list)

#returns label for company and time
def getLabel(ticker, month, day, hour):
    return random.randint(-1, 1)

#parses individual json file
def parseJSON(data, month, day, hour):
    results = []
    for tweet in data.itervalues():
        text = tweet['text']
        label = getLabel(tweet['company'], month, day, hour)
        results.append([text, label])
    return results

def loadData(months, days):
    hours = [10, 11, 12, 13, 14]
    minutes = [0, 15, 30, 45]
    output_data = []
    for month in months:
        for day in days:
            for hour in hours:
                for minute in minutes:
                    filename = 'tweets_{}_{}_{}_{}.dat'.format(month, day, hour, minute)
                    with open(filename, 'r') as f:
                        try:
                            data = json.load(f)
                        except ValueError as err:
                            print filename
                            exit(1)
                        output_data += parseJSON(data, month, day, hour)
                        f.close()
    print len(output_data)
    print output_data[0:10]
    return output_data

def main():
    days = [9, 10, 11, 12, 13, 16, 17]
    loadData([4], days)

if __name__ == '__main__':
    main()
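The loader above is Python 2 and reads tweets_<month>_<day>_<hour>_<minute>.dat files from disk. The core text/label pairing can be sanity-checked in isolation with a fake in-memory payload; this Python 3 sketch mirrors parseJSON/getLabel (the sample tweets are invented for illustration):

```python
import json
import random

def get_label(ticker, month, day, hour):
    # placeholder labeler, mirroring getLabel above: a random class in {-1, 0, 1}
    return random.randint(-1, 1)

def parse_json(data, month, day, hour):
    # pair each tweet's text with a class label, as parseJSON does above
    results = []
    for tweet in data.values():
        results.append([tweet['text'], get_label(tweet['company'], month, day, hour)])
    return results

# fake payload standing in for one tweets_*.dat file
raw = json.dumps({
    "0": {"text": "AAPL to the moon", "company": "AAPL"},
    "1": {"text": "GOOG earnings out", "company": "GOOG"},
})
pairs = parse_json(json.loads(raw), month=4, day=9, hour=10)
print(pairs)
```

Each entry comes out as a [text, label] pair, which is the shape loadData accumulates into output_data.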
# File: examples/blank_cylinders.py (repo: reflectometry/osrefl, license: BSD-3-Clause)
from greens_thm_form import greens_form_line, greens_form_shape
from numpy import arange, linspace, float64, indices, zeros_like, ones_like, pi, sin, complex128, array, exp, newaxis, cumsum, sum, cos, log, log10
from osrefl.theory.DWBAGISANS import dwbaWavefunction

class shape:
    def __init__(self, name):
        self.name = name
        self.points = []
        self.sld = 0.0
        self.sldi = 0.0

def rectangle(x0, y0, dx, dy, sld=0.0, sldi=0.0):
    #generate points for a rectangle
    rect = shape('rectangle')
    rect.points = [[x0, y0], [x0+dx, y0], [x0+dx, y0+dy], [x0, y0+dy]]
    rect.sld = sld
    rect.sldi = sldi
    rect.area = dx * dy
    return rect

def sawtooth(z, n=6, x_length=3000.0, base_width=500.0, height=300.0, sld=0.0, sldi=0.0, sld_front=0.0, sldi_front=0.0):
    if z > height:
        return [], sld_front
    width = (z / height) * base_width
    front_width = base_width - width
    rects = [rectangle(0, base_width*(i+0.5) - width/2.0, x_length, width, sld, sldi) for i in range(n)]
    # now rectangles for the gaps between the sawtooths...
    if (sld_front != 0.0 and sldi_front != 0.0):
        front_rects = [rectangle(0, 0, x_length, front_width/2.0, sld_front, sldi_front)]
        front_rects.extend([rectangle(0, base_width*(i+0.5)+width/2.0, x_length, front_width, sld_front, sldi_front) for i in range(1, n-1)])
        front_rects.append(rectangle(0, base_width*(n-0.5)+width/2.0, x_length, front_width/2.0, sld_front, sldi_front))
        rects.extend(front_rects)
    # now calculate the average SLD (nuclear) for the layer
    avg_sld = (width * sld + front_width * sld_front) / base_width
    avg_sldi = (width * sldi + front_width * sldi_front) / base_width
    return rects, avg_sld, avg_sldi

def arc(r, theta_start, theta_end, x_center, y_center, theta_step=1.0, close=True, sld=0.0, sldi=0.0):
    a = shape('arc')
    a.theta_start = theta_start
    a.theta_end = theta_end
    a.area = pi * r**2 * abs(theta_end - theta_start)/360.0
    if close == True:
        a.points.append([x_center, y_center]) # center point
    numpoints = (theta_end - theta_start) / theta_step + 1
    thetas = linspace(theta_start, theta_end, numpoints) * pi/180 # to radians
    for th in thetas:
        a.points.append([r*cos(th) + x_center, r*sin(th) + y_center])
    a.sld = sld
    a.sldi = sldi
    return a

def limit_cyl(arc, xmin=0.0, xmax=0.0, ymin=0.0, ymax=0.0):
    new_arc = shape('arc')
    new_arc.sld = arc.sld
    new_arc.sldi = arc.sldi
    new_arc.theta_start = arc.theta_start
    new_arc.theta_end = arc.theta_end
    #new_arc.area = arc.area
    for point in arc.points:
        if (point[0] >= xmin) and (point[0] <= xmax) and (point[1] >= ymin) and (point[1] <= ymax):
            new_arc.points.append(point)
    if len(new_arc.points) < 3:
        new_arc.area = 0.0
    else:
        new_arc.area = (len(new_arc.points) - 2) / 360.0 * arc.area
    return new_arc

def conj(sld):
    conjugate_sld = sld.copy()
    conjugate_sld[:, 2] *= -1
    return conjugate_sld

# alternating SLD
wavelength = 1.24 # x-ray wavelength, Angstroms
spacing = 600.0 # distance between cylinder centers
radius = 200.0 # Angstroms, radius of cylinders
thickness = 300.0 # Angstrom, thickness of cylinder layer
sublayer_thickness = 200.0 # Angstrom, full layer of matrix below cylinders
matrix_sld = pi/(wavelength**2) * 2.0 * 1.0e-6 # substrate
matrix_sldi = pi/(wavelength**2) * 2.0 * 1.0e-7 # absorption in substrate
cyl_sld = 0.0
cyl_sldi = 0.0 # cylinders are holes in matrix

unit_dx = 2.0 * spacing
unit_dy = 1.0 * spacing

matrix = rectangle(0, 0, 3000, 3000, matrix_sld, matrix_sldi)

cylinders = []
centers = []
for i in range(3):
    for j in range(6):
        x0 = i * 2.0 * spacing
        y0 = j * spacing
        x1 = x0 + spacing # basis
        y1 = y0 + spacing/2.0
        cylinders.append(arc(radius, 0.0, 360.0, x0, y0, sld=cyl_sld, sldi=cyl_sldi))
        cylinders.append(arc(radius, 0.0, 360.0, x1, y1, sld=cyl_sld, sldi=cyl_sldi))

cyl_area = 0.0
for cyl in cylinders:
    cyl_area += cyl.area

clipped_cylinders = [limit_cyl(cyl, xmin=0.0, xmax=3000.0, ymin=0.0, ymax=3000.0) for cyl in cylinders]
clipped_cyl_area = 0.0
for cyl in clipped_cylinders:
    clipped_cyl_area += cyl.area

print "clipped_cyl_area / matrix.area = ", clipped_cyl_area / matrix.area
print "ratio should be 0.3491 for FCT planar array with a/b = 2 and r = a/6"

avg_sld = (matrix.area * matrix_sld + clipped_cyl_area * cyl_sld) / matrix.area
avg_sldi = (matrix.area * matrix_sldi + clipped_cyl_area * cyl_sldi) / matrix.area

front_sld = 0.0 # air
back_sld = pi/(wavelength**2) * 2.0 * 5.0e-6 # substrate
back_sldi = pi/(wavelength**2) * 2.0 * 7.0e-8 # absorption in substrate

qz = linspace(0.01, 0.21, 501)
qy = linspace(-0.1, 0.1, 500)
qx = ones_like(qy, dtype=complex128) * 1e-8

SLDArray = [ [0, 0, 0], # air
             [avg_sld, thickness, avg_sldi], # sample
             [matrix_sld, sublayer_thickness, matrix_sldi], # full matrix layer under cylinders
             [back_sld, 0, back_sldi] ]

FT = zeros_like(qx, dtype=complex128)
for cyl in clipped_cylinders:
    FT += greens_form_shape(cyl.points, qx, qy) * (cyl.sld)
FT += greens_form_shape(matrix.points, qx, qy) * (matrix.sld)
FT += greens_form_shape(matrix.points, qx, qy) * (-avg_sld)

SLDArray = array(SLDArray)
def calc_gisans(alpha_in, show_plot=True):
    #alpha_in = 0.25 # incoming beam angle
    kz_in_0 = 2*pi/wavelength * sin(alpha_in * pi/180.0)
    kz_out_0 = kz_in_0 - qz  # original read "kz_in - qz", which is undefined

    wf_in = dwbaWavefunction(kz_in_0, SLDArray)
    wf_out = dwbaWavefunction(-kz_out_0, conj(SLDArray))

    kz_in_l = wf_in.kz_l
    kz_out_l = -wf_out.kz_l

    zs = cumsum(SLDArray[1:-1, 1])
    dz = SLDArray[1:-1, 1][:, newaxis]
    z_array = array(zs)[:, newaxis]

    qrt_inside = kz_in_l[1] - kz_out_l[1]
    qtt_inside = kz_in_l[1] + kz_out_l[1]
    qtr_inside = -kz_in_l[1] + kz_out_l[1]
    qrr_inside = -kz_in_l[1] - kz_out_l[1]

    # the overlap is the forward-moving amplitude c in psi_in multiplied by
    # the forward-moving amplitude in the time-reversed psi_out, which
    # ends up being the backward-moving amplitude d in the non-time-reversed psi_out
    # (which is calculated by the wavefunction calculator)
    # ... and vice-versa for d and c in psi_in and psi_out
    overlap  = wf_out.d[1] * wf_in.c[1] / (1j * qtt_inside) * (exp(1j * qtt_inside * thickness) - 1.0)
    overlap += wf_out.c[1] * wf_in.d[1] / (1j * qrr_inside) * (exp(1j * qrr_inside * thickness) - 1.0)
    overlap += wf_out.d[1] * wf_in.d[1] / (1j * qtr_inside) * (exp(1j * qtr_inside * thickness) - 1.0)
    overlap += wf_out.c[1] * wf_in.c[1] / (1j * qrt_inside) * (exp(1j * qrt_inside * thickness) - 1.0)

    overlap_BA  = 1.0 / (1j * qz) * (exp(1j * qz * thickness) - 1.0)
    overlap_BA += 1.0 / (-1j * qz) * (exp(-1j * qz * thickness) - 1.0)

    gisans = overlap[:, newaxis] * FT[newaxis, :]
    gisans_BA = overlap_BA[:, newaxis] * FT[newaxis, :]
    extent = [qy.min(), qy.max(), qz.min(), qz.max()]

    if show_plot == True:
        from pylab import imshow, figure, colorbar
        figure()
        imshow(log10(abs(gisans)**2), origin='lower', extent=extent, aspect='auto')
        colorbar()
        figure()
        imshow(log10(abs(gisans_BA)**2), origin='lower', extent=extent, aspect='auto')
        colorbar()

    return gisans, gisans_BA
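The printed sanity ratio of 0.3491 can be confirmed analytically: the centered-rectangular cell of size 2s by s holds two cylinders, so the filled-area fraction is 2*pi*r^2 / (2*s^2) = pi*(r/s)^2, and with r = 200 and s = 600 that is pi/9 ~ 0.3491. A standalone check using only the geometry constants above:

```python
import math

spacing = 600.0   # cylinder center spacing, as in the script above
radius = 200.0    # cylinder radius, as in the script above

# two cylinders per (2*spacing x spacing) centered-rectangular unit cell
cell_area = (2.0 * spacing) * spacing
fraction = 2.0 * math.pi * radius**2 / cell_area
print(round(fraction, 4))  # 0.3491
```

This matches the clipped_cyl_area / matrix.area value the script prints, up to the polygonal approximation of each circle.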
# File: ApendixI-Games/StacklessPSP-2.5.2_R1/pspsnd.py (repo: MelroLeandro/Matematica-Discreta-para-Hackers-ipnyb, license: MIT)
"""Wrapper for pygame, which exports the PSP Python API on non-PSP systems."""
__author__ = "Per Olofsson, <MagerValp@cling.gu.se>"

import pygame

pygame.init()

_vol_music = 255
_vol_sound = 255

def setMusicVolume(vol):
    global _vol_music
    if vol >= 0 and vol <= 255:
        _vol_music = vol
        pygame.mixer.music.set_volume(_vol_music / 255.0)

def setSndFxVolume(vol):
    global _vol_sound
    if vol >= 0 and vol <= 255:
        _vol_sound = vol

class Music:
    def __init__(self, filename, maxchan=128, loop=False):
        self._loop = loop
        pygame.mixer.music.load(filename)
        pygame.mixer.music.set_volume(_vol_music / 255.0)

    def start(self):
        if self._loop:
            pygame.mixer.music.play(-1)
        else:
            pygame.mixer.music.play()

    def stop(self):
        pygame.mixer.music.stop()

class Sound:
    def __init__(self, filename):
        self._snd = pygame.mixer.Sound(filename)

    def start(self):
        self._snd.set_volume(_vol_sound / 255.0)
        self._snd.play()
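Both setMusicVolume and Sound.start rescale the PSP-style 0..255 integer volume onto pygame's 0.0..1.0 float scale, silently ignoring out-of-range values. That mapping, isolated so it can run without pygame (to_mixer_volume is illustrative, not part of the module):

```python
def to_mixer_volume(vol):
    # PSP API volumes are 0..255; pygame's mixer expects 0.0..1.0
    if 0 <= vol <= 255:
        return vol / 255.0
    return None  # out-of-range values are ignored, as in setMusicVolume

print(to_mixer_volume(255))  # 1.0
```

Note the wrapper keeps the raw 0..255 value in module globals and only converts at the pygame boundary, so PSP code reading the volume back sees the scale it expects.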
# File: src/commercetools/services/types.py (repo: BramKaashoek/commercetools-python-sdk, license: MIT)
import typing
from commercetools import schemas, types
from commercetools.services import abstract
from commercetools.typing import OptionalListStr

__all__ = ["TypeService"]

class TypeDeleteSchema(abstract.AbstractDeleteSchema):
    pass

class TypeQuerySchema(abstract.AbstractQuerySchema):
    pass

class TypeService(abstract.AbstractService):
    def get_by_id(self, id: str, expand: OptionalListStr = None) -> types.Type:
        query_params = {}
        if expand:
            query_params["expand"] = expand
        return self._client._get(f"types/{id}", query_params, schemas.TypeSchema)

    def get_by_key(self, key: str, expand: OptionalListStr = None) -> types.Type:
        query_params = {}
        if expand:
            query_params["expand"] = expand
        return self._client._get(f"types/key={key}", query_params, schemas.TypeSchema)

    def query(
        self,
        where: OptionalListStr = None,
        sort: OptionalListStr = None,
        expand: OptionalListStr = None,
        limit: int = None,
        offset: int = None,
    ) -> types.TypePagedQueryResponse:
        params = TypeQuerySchema().dump(
            {
                "where": where,
                "sort": sort,
                "expand": expand,
                "limit": limit,
                "offset": offset,
            }
        )
        return self._client._get("types", params, schemas.TypePagedQueryResponseSchema)

    def create(
        self, draft: types.TypeDraft, expand: OptionalListStr = None
    ) -> types.Type:
        query_params = {}
        if expand:
            query_params["expand"] = expand
        return self._client._post(
            "types", query_params, draft, schemas.TypeDraftSchema, schemas.TypeSchema
        )

    def update_by_id(
        self,
        id: str,
        version: int,
        actions: typing.List[types.TypeUpdateAction],
        expand: OptionalListStr = None,
        *,
        force_update: bool = False,
    ) -> types.Type:
        query_params = {}
        if expand:
            query_params["expand"] = expand
        update_action = types.TypeUpdate(version=version, actions=actions)
        return self._client._post(
            endpoint=f"types/{id}",
            params=query_params,
            data_object=update_action,
            request_schema_cls=schemas.TypeUpdateSchema,
            response_schema_cls=schemas.TypeSchema,
            force_update=force_update,
        )

    def update_by_key(
        self,
        key: str,
        version: int,
        actions: typing.List[types.TypeUpdateAction],
        expand: OptionalListStr = None,
        *,
        force_update: bool = False,
    ) -> types.Type:
        query_params = {}
        if expand:
            query_params["expand"] = expand
        update_action = types.TypeUpdate(version=version, actions=actions)
        return self._client._post(
            endpoint=f"types/key={key}",
            params=query_params,
            data_object=update_action,
            request_schema_cls=schemas.TypeUpdateSchema,
            response_schema_cls=schemas.TypeSchema,
            force_update=force_update,
        )

    def delete_by_id(
        self,
        id: str,
        version: int,
        expand: OptionalListStr = None,
        *,
        force_delete: bool = False,
    ) -> types.Type:
        params = {"version": version}
        if expand:
            params["expand"] = expand
        query_params = TypeDeleteSchema().dump(params)
        return self._client._delete(
            endpoint=f"types/{id}",
            params=query_params,
            response_schema_cls=schemas.TypeSchema,
            force_delete=force_delete,
        )

    def delete_by_key(
        self,
        key: str,
        version: int,
        expand: OptionalListStr = None,
        *,
        force_delete: bool = False,
    ) -> types.Type:
        params = {"version": version}
        if expand:
            params["expand"] = expand
        query_params = TypeDeleteSchema().dump(params)
        return self._client._delete(
            endpoint=f"types/key={key}",
            params=query_params,
            response_schema_cls=schemas.TypeSchema,
            force_delete=force_delete,
        )
# File: qcic.py (repo: milkllc/qcic, license: MIT)
import picamera
import datetime
import os

delcount = 2

def check_fs():
    global delcount
    st = os.statvfs('/')
    pct = 100 - st.f_bavail * 100.0 / st.f_blocks
    print pct, "percent full"
    if pct > 90:
        # less than 10% left, delete a few minutes
        files = os.listdir('.')
        files.sort()
        for i in range(0, delcount):
            print "deleting", files[i]
            os.remove(files[i])
        delcount += 1  # keep increasing until we get under 90%
    else:
        delcount = 2

with picamera.PiCamera() as camera:
    try:
        check_fs()
        tstamp = datetime.datetime.utcnow().strftime('%Y%m%d%H%M%S%f')
        print "recording", tstamp
        camera.start_recording(tstamp + '.h264')
        camera.wait_recording(60)
        while True:
            check_fs()
            tstamp = datetime.datetime.utcnow().strftime('%Y%m%d%H%M%S%f')
            print "recording", tstamp
            camera.split_recording(tstamp + '.h264')
            camera.wait_recording(60)
    except KeyboardInterrupt:
        print "quitting"
        camera.stop_recording()
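check_fs derives its percent-full figure from two os.statvfs fields: f_blocks (total blocks) and f_bavail (blocks available to an unprivileged user). The same arithmetic, isolated with made-up numbers so it runs off the Pi (percent_full is illustrative):

```python
def percent_full(f_bavail, f_blocks):
    # same arithmetic as check_fs: 100 minus the percentage of blocks still available
    return 100 - f_bavail * 100.0 / f_blocks

print(percent_full(f_bavail=250, f_blocks=1000))  # 75.0
```

Values above 90 trip the cleanup branch, which deletes the oldest `delcount` recordings (the sorted directory listing puts the oldest timestamped filenames first) and grows `delcount` each pass until usage drops back under 90%.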
# File: forge/mock_handle.py (repo: ujjwalsh/pyforge, license: BSD-3-Clause)
from .handle import ForgeHandle
class MockHandle(ForgeHandle):
    def __init__(self, forge, mock, behave_as_instance=True):
        super(MockHandle, self).__init__(forge)
        self.mock = mock
        self.behaves_as_instance = behave_as_instance
        self._attributes = {}
        self._is_hashable = False
        self._is_setattr_enabled_in_replay = False

    def is_hashable(self):
        return self._is_hashable

    def enable_hashing(self):
        self._is_hashable = True

    def disable_hashing(self):
        self._is_hashable = False

    def enable_setattr_during_replay(self):
        self._is_setattr_enabled_in_replay = True

    def disable_setattr_during_replay(self):
        self._is_setattr_enabled_in_replay = False

    def is_setattr_enabled_in_replay(self):
        return self._is_setattr_enabled_in_replay

    def has_attribute(self, attr):
        return False

    def get_attribute(self, attr):
        if self.forge.attributes.has_attribute(self.mock, attr):
            return self.forge.attributes.get_attribute(self.mock, attr)
        if self.has_nonmethod_class_member(attr):
            return self.get_nonmethod_class_member(attr)
        if self.has_method(attr):
            return self.get_method(attr)
        raise AttributeError("%s has no attribute %r" % (self.mock, attr))

    def set_attribute(self, attr, value, caller_info):
        if self.forge.is_recording() or self.is_setattr_enabled_in_replay():
            self._set_attribute(attr, value)
        else:
            self._set_attribute_during_replay(attr, value, caller_info)

    def expect_setattr(self, attr, value):
        return self.forge.queue.push_setattr(self.mock, attr, value, caller_info=self.forge.debug.get_caller_info())

    def _set_attribute_during_replay(self, attr, value, caller_info):
        self.forge.queue.pop_matching_setattr(self.mock, attr, value, caller_info)
        self._set_attribute(attr, value)

    def _set_attribute(self, attr, value):
        self.forge.attributes.set_attribute(self.mock, attr, value)

    def has_method(self, attr):
        return self.forge.stubs.has_initialized_method_stub(self.mock, attr) or self._has_method(attr)

    def _has_method(self, name):
        raise NotImplementedError()

    def has_nonmethod_class_member(self, name):
        raise NotImplementedError()

    def get_nonmethod_class_member(self, name):
        raise NotImplementedError()

    def get_method(self, name):
        returned = self.forge.stubs.get_initialized_method_stub_or_none(self.mock, name)
        if returned is None:
            real_method = self._get_real_method(name)
            if not self.forge.is_recording():
                self._check_unrecorded_method_getting(name)
            returned = self._construct_stub(name, real_method)
            self._bind_if_needed(name, returned)
            self.forge.stubs.add_initialized_method_stub(self.mock, name, returned)
            self._set_method_description(returned, name)
        elif self.forge.is_replaying() and not returned.__forge__.has_recorded_calls():
            self._check_getting_method_stub_without_recorded_calls(name, returned)
        return returned

    def _set_method_description(self, method, name):
        method.__forge__.set_description("%s.%s" % (
            self.describe(), name
        ))

    def _construct_stub(self, name, real_method):
        return self.forge.create_method_stub(real_method)

    def _check_unrecorded_method_getting(self, name):
        raise NotImplementedError()

    def _check_getting_method_stub_without_recorded_calls(self, name, stub):
        raise NotImplementedError()

    def _get_real_method(self, name):
        raise NotImplementedError()

    def handle_special_method_call(self, name, args, kwargs, caller_info):
        self._check_special_method_call(name, args, kwargs)
        return self.get_method(name).__forge__.handle_call(args, kwargs, caller_info)

    def _check_special_method_call(self, name, args, kwargs):
        raise NotImplementedError()

    def is_callable(self):
        raise NotImplementedError()

    def _bind_if_needed(self, name, method_stub):
        bind_needed, bind_target = self._is_binding_needed(name, method_stub)
        if bind_needed:
            method_stub.__forge__.bind(bind_target)

    def _is_binding_needed(self, name, method_stub):
        raise NotImplementedError()
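get_attribute resolves a name in a fixed order: explicitly-set attributes first, then non-method class members, then method stubs, with AttributeError only if all three miss. A hypothetical dict-based sketch of just that precedence chain (lookup is illustrative, not part of pyforge):

```python
def lookup(name, attributes, class_members, methods):
    # mirrors MockHandle.get_attribute's precedence:
    # explicitly-set attributes win over class members, which win over method stubs
    if name in attributes:
        return attributes[name]
    if name in class_members:
        return class_members[name]
    if name in methods:
        return methods[name]
    raise AttributeError(name)

print(lookup('x', {'x': 1}, {'x': 2}, {'x': 3}))  # 1
```

Ordering matters here: a value the test explicitly set on the mock must shadow whatever the mocked class happens to define.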
# File: examples/test_cross.py (repo: rballester/ttpy, license: MIT)
import sys
sys.path.append('../')
import numpy as np
import tt
d = 30
n = 2 ** d
b = 1E3
h = b / (n + 1)
#x = np.arange(n)
#x = np.reshape(x, [2] * d, order = 'F')
#x = tt.tensor(x, 1e-12)
x = tt.xfun(2, d)
e = tt.ones(2, d)
x = x + e
x = x * h
sf = lambda x : np.sin(x) / x #Should be rank 2
y = tt.multifuncrs([x], sf, 1e-6, ['y0', tt.ones(2, d)])
#y1 = tt.tensor(sf(x.full()), 1e-8)
print "pi / 2 ~ ", tt.dot(y, tt.ones(2, d)) * h
#print (y - y1).norm() / y.norm()
82a2aae9ea64aaa7fb4b9cb2856b242dd76d5578 | 239 | py | Python | scripts/plotRUC.py | akrherz/radcomp | d44459f72891c6e1a92b61488e08422383b000d1 | [
"Apache-2.0"
] | 3 | 2015-04-18T22:23:27.000Z | 2016-05-12T11:24:32.000Z | scripts/plotRUC.py | akrherz/radcomp | d44459f72891c6e1a92b61488e08422383b000d1 | [
"Apache-2.0"
] | 4 | 2016-09-30T15:04:46.000Z | 2022-03-05T13:32:40.000Z | scripts/plotRUC.py | akrherz/radcomp | d44459f72891c6e1a92b61488e08422383b000d1 | [
"Apache-2.0"
] | 4 | 2015-04-18T22:23:57.000Z | 2017-05-07T15:23:37.000Z | import matplotlib.pyplot as plt
import netCDF4
import numpy
nc = netCDF4.Dataset("data/ructemps.nc")
data = nc.variables["tmpc"][17, :, :]
nc.close()
(fig, ax) = plt.subplots(1, 1)
ax.imshow(numpy.flipud(data))
fig.savefig("test.png")
| 17.071429 | 40 | 0.698745 | 37 | 239 | 4.513514 | 0.648649 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028436 | 0.117155 | 239 | 13 | 41 | 18.384615 | 0.763033 | 0 | 0 | 0 | 0 | 0 | 0.117155 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
82a4a9f7dd1ed9b3be8582ffaccf49c75f0cf8a6 | 3,031 | py | Python | tools/draw_cal_lr_ablation.py | twangnh/Calibration_mrcnn | e5f3076cefbe35297a403a753bb57e11503db818 | [
"Apache-2.0"
] | 87 | 2020-07-24T01:28:39.000Z | 2021-08-29T08:40:18.000Z | tools/draw_cal_lr_ablation.py | twangnh/Calibration_mrcnn | e5f3076cefbe35297a403a753bb57e11503db818 | [
"Apache-2.0"
] | 3 | 2020-09-27T12:59:28.000Z | 2022-01-06T13:14:08.000Z | tools/draw_cal_lr_ablation.py | twangnh/Calibration_mrcnn | e5f3076cefbe35297a403a753bb57e11503db818 | [
"Apache-2.0"
] | 20 | 2020-09-05T04:37:19.000Z | 2021-12-13T02:25:48.000Z |
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import math
from matplotlib.ticker import FormatStrFormatter
from matplotlib import scale as mscale
from matplotlib import transforms as mtransforms
# z = [0,0.1,0.3,0.9,1,2,5]
z = [7.8, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1230]
# thick = [20, 40, 20, 60, 37, 32, 21]
# thick=[15.4, 18.2, 18.7, 19.2, 19.4, 19.5, 19.9, 20.1, 20.4, 20.5, 20.6, 20.7, 20.8, 20.7, 20.7, 20.6, 20.6, 20.6, 20.5, 20.5, 19.8]
mrcnn=[17.7, 19.8, 20.0, 19.9, 20.2, 19.5, 19.1, 19.1]
x_ticks = [0.001, 0.002, 0.004, 0.008, 0.01, 0.02, 0.04, 0.08]
# plt.plot([1.0],[44.8], 'D', color = 'black')
# plt.plot([0],[35.9], 'D', color = 'red')
# plt.plot([1.0],[56.8], 'D', color = 'black')
fig = plt.figure(figsize=(8,5))
ax1 = fig.add_subplot(111)
matplotlib.rcParams.update({'font.size': 20})
ax1.plot(x_ticks, mrcnn, linestyle='dashed', marker='o', linewidth=2, c='k', label='mrcnn-r50-ag')
# ax1.plot(z, htc, marker='o', linewidth=2, c='g', label='htc')
# ax1.plot([1e-4],[15.4], 'D', color = 'green')
# ax1.plot([1230],[19.8], 'D', color = 'red')
plt.xlabel('calibration lr', size=16)
plt.ylabel('bAP', size=16)
# plt.gca().set_xscale('custom')
ax1.set_xscale('log')
ax1.set_xticks(x_ticks)
# from matplotlib.ticker import ScalarFormatter
# ax1.xaxis.set_major_formatter(ScalarFormatter())
# plt.legend(['calibration lr'], loc='best')
plt.minorticks_off()
plt.grid()
plt.savefig('calibration_lr.eps', format='eps', dpi=1000)
plt.show()
# import numpy as np
# import matplotlib.pyplot as plt
# from scipy.interpolate import interp1d
# y1=[35.9, 43.4, 46.1, 49.3, 50.3, 51.3, 51.4, 49.9, 49.5, 48.5, 44.8]
# y2=[40.5, 48.2, 53.9 , 56.9, 57.8, 59.2, 58.3, 57.9, 57.5, 57.2, 56.8]
# y3=[61.5, 61.5, 61.5, 61.5, 61.5, 61.5, 61.5, 61.5, 61.5, 61.5, 61.5]
# x = np.linspace(0, 1, num=11, endpoint=True)
#
# f1 = interp1d(x, y1, kind='cubic')
# f2 = interp1d(x, y2, kind='cubic')
# f3 = interp1d(x, y3, kind='cubic')
# xnew = np.linspace(0, 1, num=101, endpoint=True)
# plt.plot(xnew, f3(xnew), '--', color='fuchsia')
# plt.plot(xnew, f1(xnew), '--', color='blue')
# plt.plot(xnew, f2(xnew), '--', color='green')
#
# plt.plot([0],[40.5], 'D', color = 'red')
# plt.plot([1.0],[44.8], 'D', color = 'black')
# plt.plot([0],[35.9], 'D', color = 'red')
# plt.plot([1.0],[56.8], 'D', color = 'black')
# plt.plot(x, y3, 'o', color = 'fuchsia')
# plt.plot(x, y1, 'o', color = 'blue')
# plt.plot(x, y2, 'o', color = 'green')
# plt.plot([0],[40.5], 'D', color = 'red')
# plt.plot([1.0],[44.8], 'D', color = 'black')
# plt.plot([0],[35.9], 'D', color = 'red')
# plt.plot([1.0],[56.8], 'D', color = 'black')
# plt.legend(['teacher','0.25x', '0.5x', 'full-feature-imitation', 'only GT supervison'], loc='best')
# plt.xlabel('Thresholding factor')
# plt.ylabel('mAP')
# plt.title('Resulting mAPs of varying thresholding factors')
# #plt.legend(['0.5x'])
# # plt.savefig('varying_thresh.eps', format='eps', dpi=1000)
# plt.show()
| 35.244186 | 134 | 0.61069 | 575 | 3,031 | 3.196522 | 0.302609 | 0.064744 | 0.027203 | 0.032644 | 0.282916 | 0.192057 | 0.18988 | 0.161589 | 0.161589 | 0.161589 | 0 | 0.159909 | 0.12933 | 3,031 | 85 | 135 | 35.658824 | 0.536567 | 0.680633 | 0 | 0 | 0 | 0 | 0.076336 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.318182 | 0 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
82a4daed7ce221589ab2b1a7f5ba42efc8b6ae34 | 653 | py | Python | Lesson08/problem/problem_optional_pandas.py | AlexMazonowicz/PythonFundamentals | 5451f61d3b4e7cd285dea442795c25baa5072ef9 | [
"MIT"
] | 2 | 2020-02-27T01:33:43.000Z | 2021-03-29T13:11:54.000Z | Lesson08/problem/problem_optional_pandas.py | AlexMazonowicz/PythonFundamentals | 5451f61d3b4e7cd285dea442795c25baa5072ef9 | [
"MIT"
] | null | null | null | Lesson08/problem/problem_optional_pandas.py | AlexMazonowicz/PythonFundamentals | 5451f61d3b4e7cd285dea442795c25baa5072ef9 | [
"MIT"
] | 6 | 2019-03-18T04:49:11.000Z | 2022-03-22T04:03:19.000Z | import pandas as pd
# Global variable to set the base path to our dataset folder
base_url = '../dataset/'
def update_mailing_list_pandas(filename):
    """
    Your docstring documentation starts here.
    For more information on how to properly document your function, please refer to the official PEP8:
    https://www.python.org/dev/peps/pep-0008/#documentation-strings.
    """
    df = pd.read_csv(base_url + filename)  # Read your csv file with pandas
    # Filter only rows with the `active` flag and return the number of rows
    return len(df[df['active']])
# Calling the function to test your code
print(update_mailing_list_pandas('mailing_list.csv'))
| 29.681818 | 104 | 0.70291 | 94 | 653 | 4.797872 | 0.680851 | 0.073171 | 0.075388 | 0.101996 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009901 | 0.226646 | 653 | 21 | 105 | 31.095238 | 0.883168 | 0.324655 | 0 | 0 | 0 | 0 | 0.153409 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |