hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5bd7df91b4668904964e69e486f54f6977162982 | 2,300 | py | Python | tick2ohlc.py | nu11ptr/tick2ohlc | b9cc8cc148533dc4feb9c76aaeb7896600c53873 | [
"BSD-3-Clause"
] | 1 | 2021-10-19T03:10:33.000Z | 2021-10-19T03:10:33.000Z | tick2ohlc.py | nu11ptr/tick2ohlc | b9cc8cc148533dc4feb9c76aaeb7896600c53873 | [
"BSD-3-Clause"
] | null | null | null | tick2ohlc.py | nu11ptr/tick2ohlc | b9cc8cc148533dc4feb9c76aaeb7896600c53873 | [
"BSD-3-Clause"
] | null | null | null | import pandas as pd
import sys
# NOTE: Every unit in each list below must be evenly divisible by the last
# entry of the previous list (15 by 5, 30 by 15, 1D by 6H, etc.)
_UNITS = [
["1min"],
["5min"],
["15min"],
["30min"],
["1H"],
["4H", "6H"],
["1D"],
["3D", "1W", "1M"],
]
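# A minimal sketch (not part of the original script) of why the divisibility
# rule above allows cascaded resampling: 5min OHLC bars built from 1min bars
# match 5min bars built directly from ticks.
#
#   import numpy as np
#   idx = pd.date_range("2021-01-01", periods=600, freq="s")
#   ticks = pd.DataFrame({"last": np.random.rand(600), "volume": 1}, index=idx)
#   direct = ticks.resample("5min").agg({"last": "ohlc", "volume": "sum"})
#   direct.columns = direct.columns.get_level_values(1)
#   one_min = ticks.resample("1min").agg({"last": "ohlc", "volume": "sum"})
#   one_min.columns = one_min.columns.get_level_values(1)
#   cascaded = one_min.resample("5min").agg(
#       {"open": "first", "high": "max", "low": "min", "close": "last", "volume": "sum"})
#   pd.testing.assert_frame_equal(direct, cascaded, check_names=False)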
_DATE_FORMAT = "%m/%d/%Y"
_TIME_FORMAT = "%H:%M"
def resample(df: pd.DataFrame, unit: str) -> pd.DataFrame:
tick = "last" in df.columns
col_actions = (
{"last": "ohlc", "volume": "sum"}
if tick
else {
"open": "first",
"high": "max",
"low": "min",
"close": "last",
"volume": "sum",
}
)
new_df = df.resample(unit).agg(col_actions).dropna()
if tick:
        # Collapse the MultiIndex columns and make them flat
new_df.columns = new_df.columns.get_level_values(1)
# Convert index into separate date/time columns
new_df["date"] = new_df.index.strftime(_DATE_FORMAT)
new_df["time"] = new_df.index.strftime(_TIME_FORMAT)
    # (The datetime index stays; write_data() resets it into a regular column.)
return new_df
def write_data(filename: str, df: pd.DataFrame, unit: str):
new_df = df.reset_index()
new_df.to_csv(
f"{filename}_{unit.lower()}.csv",
index=False,
columns=["date", "time", "open", "high", "low", "close", "volume"],
)
def read_data(filename: str) -> pd.DataFrame:
return pd.read_csv(
filename,
names=["datetime", "last", "volume"],
parse_dates=["datetime"],
date_parser=lambda epoch: pd.to_datetime(epoch, unit="s"),
index_col="datetime",
)
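# A hypothetical input row (an assumption about the tick CSV layout implied by
# read_data above): epoch seconds, last trade price, volume, e.g.
#
#   1634612345,4471.25,3
#
# The first field is parsed to a timestamp via pd.to_datetime(epoch, unit="s").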
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python tick2ohlc <base_filename>\n")
print("<base_filename> = filename without the .csv extension")
sys.exit(1)
filename = sys.argv[1]
# Start by reading tick data
print("Reading tick data...", end="", flush=True)
df = read_data(filename + ".csv")
print("done")
for units in _UNITS:
for unit in units:
print(f"Resampling and writing CSV for: {unit}...", end="", flush=True)
new_df = resample(df, unit)
write_data(filename, new_df, unit)
print("done")
        # The last new_df of this unit list becomes the input df for the next cycle
df = new_df
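    # Example run (hypothetical file name): given tick data in "eurusd.csv",
    #   python tick2ohlc.py eurusd
    # writes eurusd_1min.csv, eurusd_5min.csv, ..., eurusd_1m.csv.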
| 27.380952 | 83 | 0.563913 | 305 | 2,300 | 4.088525 | 0.429508 | 0.060144 | 0.02085 | 0.027265 | 0.032077 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016275 | 0.278696 | 2,300 | 83 | 84 | 27.710843 | 0.735383 | 0.126957 | 0 | 0.03125 | 0 | 0 | 0.1915 | 0.0145 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046875 | false | 0 | 0.03125 | 0.015625 | 0.109375 | 0.09375 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bd87c7f3cf4dbb87910b47eb4c9320d786e3c84 | 3,131 | py | Python | promoterz/representation/chromosome.py | emillj/gekkoJaponicus | d77c8c7a303b97a3643eb3f3c8b995b8b393f3f7 | [
"MIT"
] | null | null | null | promoterz/representation/chromosome.py | emillj/gekkoJaponicus | d77c8c7a303b97a3643eb3f3c8b995b8b393f3f7 | [
"MIT"
] | null | null | null | promoterz/representation/chromosome.py | emillj/gekkoJaponicus | d77c8c7a303b97a3643eb3f3c8b995b8b393f3f7 | [
"MIT"
] | 1 | 2021-11-29T20:18:25.000Z | 2021-11-29T20:18:25.000Z | #!/bin/python
from deap import base
from deap import creator
from deap import tools
from copy import deepcopy
import random
import string
from .. import functions
getPromoterFromMap = lambda x: [x[z] for z in list(x.keys())]
def constructPhenotype(stratSettings, chrconf, Individue):
Settings = {}
GeneSize=2
R = lambda V, lim: (lim[1]-lim[0]) * V/(33*chrconf['GeneSize']) + lim[0]
PromotersPath = {v: k for k, v in Individue.PromoterMap.items()}
#print(PromotersPath)
#print(Individue[:])
Promoters = list(PromotersPath.keys())
for C in Individue:
for BP in range(len(C)):
if C[BP] in Promoters:
read_window = C[BP+1:BP+1+GeneSize]
read_window = [ V for V in read_window if type(V) == int and V < 33 ]
Value = sum(read_window)
ParameterName = PromotersPath[C[BP]]
Value = R(Value, stratSettings[ParameterName])
Settings[ParameterName] = Value
_Settings = functions.expandNestedParameters(Settings)
return _Settings
def getToolbox(Strategy, genconf, Attributes):
toolbox = base.Toolbox()
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax,
PromoterMap=None, Strategy=Strategy)
toolbox.register("mate", functions.pachytene)
toolbox.register("mutate", functions.mutate)
PromoterMap = initPromoterMap(Attributes)
toolbox.register("newind", initInd, creator.Individual, PromoterMap, genconf.chromosome)
toolbox.register("population", tools.initRepeat, list, toolbox.newind)
toolbox.register("constructPhenotype", constructPhenotype, Attributes, genconf.chromosome)
return toolbox
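# Hypothetical usage sketch of the toolbox built above (Strategy, genconf and
# Attributes are whatever the caller provides):
#
#   toolbox = getToolbox(Strategy, genconf, Attributes)
#   population = toolbox.population(n=30)   # list of 30 fresh individuals
#   child = toolbox.newind()                # a single new individual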
def initPromoterMap(ParameterRanges):
PRK = list(ParameterRanges.keys())
Promoters = [ x for x in PRK ]
space = list(range(120,240))
random.shuffle(space)
PromoterValues = [ space.pop() for x in Promoters ]
PromoterMap = dict(zip(Promoters, PromoterValues))
#print(ParameterRanges)
print(PromoterMap)
assert(len(PRK) == len(list(PromoterMap.keys())))
return PromoterMap
def initChromosomes(PromoterMap, chrconf):
Promoters = getPromoterFromMap(PromoterMap)
PromoterPerChr = round(len(Promoters)/chrconf['Density'])+1
_promoters = deepcopy(Promoters)
Chromosomes = [[] for k in range(PromoterPerChr)]
while _promoters:
for c in range(len(Chromosomes)):
if random.random() < 0.3:
if _promoters:
promoter = _promoters.pop(random.randrange(0,len(_promoters)))
Chromosomes[c].append(promoter)
for G in range(chrconf['GeneSize']):
Chromosomes[c].append(random.randrange(0, 33))
return Chromosomes
def initInd(Individual, PromoterMap, chrconf):
i = Individual()
i[:] = initChromosomes(PromoterMap, chrconf)
i.PromoterMap = PromoterMap
return i
def generateUID():
Chars = string.ascii_uppercase + string.digits
    UID = ''.join(random.choices(Chars, k=6))
return UID
| 30.696078 | 94 | 0.661131 | 344 | 3,131 | 5.982558 | 0.31686 | 0.036443 | 0.020408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010713 | 0.224848 | 3,131 | 101 | 95 | 31 | 0.837248 | 0.023315 | 0 | 0 | 0 | 0 | 0.028497 | 0 | 0 | 0 | 0 | 0 | 0.014493 | 1 | 0.086957 | false | 0 | 0.086957 | 0 | 0.26087 | 0.014493 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bd912416252f099d311b6a7bf71cbb0a03fb0aa | 1,124 | py | Python | Python/esys/lsm/vis/vtk/pointExtractor.py | danielfrascarelli/esys-particle | e56638000fd9c4af77e21c75aa35a4f8922fd9f0 | [
"Apache-2.0"
] | null | null | null | Python/esys/lsm/vis/vtk/pointExtractor.py | danielfrascarelli/esys-particle | e56638000fd9c4af77e21c75aa35a4f8922fd9f0 | [
"Apache-2.0"
] | null | null | null | Python/esys/lsm/vis/vtk/pointExtractor.py | danielfrascarelli/esys-particle | e56638000fd9c4af77e21c75aa35a4f8922fd9f0 | [
"Apache-2.0"
] | null | null | null | #############################################################
## ##
## Copyright (c) 2003-2017 by The University of Queensland ##
## Centre for Geoscience Computing ##
## http://earth.uq.edu.au/centre-geoscience-computing ##
## ##
## Primary Business: Brisbane, Queensland, Australia ##
## Licensed under the Open Software License version 3.0 ##
## http://www.apache.org/licenses/LICENSE-2.0 ##
## ##
#############################################################
import vtk as kwvtk
from esys.lsm.vis import core
class PointExtractor(core.PointExtractor):
def __init__(
self,
pointMap = lambda dataRecord: dataRecord.getPoint()
):
core.PointExtractor.__init__(self, pointMap)
def getVtkPoints(self, data):
vtkPoints = kwvtk.vtkPoints()
for dataRecord in data:
vtkPoints.InsertNextPoint(self.getPoint(dataRecord))
return vtkPoints
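# A minimal usage sketch (assumes the base class applies pointMap via
# self.getPoint(), and data records exposing getPoint() -> (x, y, z) as in the
# default pointMap above):
#
#   class _Rec:
#       def __init__(self, p): self._p = p
#       def getPoint(self): return self._p
#
#   pts = PointExtractor().getVtkPoints([_Rec((0.0, 0.0, 0.0)), _Rec((1.0, 2.0, 3.0))])
#   pts.GetNumberOfPoints()   # -> 2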
| 40.142857 | 64 | 0.47242 | 88 | 1,124 | 5.943182 | 0.670455 | 0.072658 | 0.061185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016021 | 0.33363 | 1,124 | 27 | 65 | 41.62963 | 0.682243 | 0.341637 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.153846 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bd9266ab2b8b9521255499e10166243811219e3 | 2,266 | py | Python | modules/images/module_slap.py | Fogapod/KiwiBot | 49743118661abecaab86388cb94ff8a99f9011a8 | [
"MIT"
] | 18 | 2018-05-25T08:50:12.000Z | 2021-10-04T07:13:09.000Z | modules/images/module_slap.py | Fogapod/BotMyBot | 49743118661abecaab86388cb94ff8a99f9011a8 | [
"MIT"
] | 4 | 2018-10-20T21:10:38.000Z | 2019-06-25T13:12:07.000Z | modules/images/module_slap.py | Fogapod/BotMyBot | 49743118661abecaab86388cb94ff8a99f9011a8 | [
"MIT"
] | 6 | 2018-10-20T21:06:24.000Z | 2021-11-08T05:51:14.000Z | from objects.modulebase import ModuleBase
import discord
from io import BytesIO
from PIL import Image
from PIL.ImageOps import mirror
from utils.funcs import find_image
class Module(ModuleBase):
usage_doc = '{prefix}{aliases} [image]'
short_doc = 'Makes a slap meme'
long_doc = (
'Flags:\n'
'\t[--batface|-b] <image>: uses custom second image'
)
name = 'slap'
aliases = (name, )
category = 'Images'
flags = {
'batface': {
'alias': 'b',
'bool': False
}
}
ratelimit = (1, 3)
async def on_call(self, ctx, args, **flags):
image = await find_image(args[1:], ctx, include_gif=False)
robin = await image.to_pil_image()
if image.error:
return await ctx.warn(f'Error getting first image: {image.error}')
batface_flag = flags.get('batface')
if batface_flag is not None:
image = await find_image(batface_flag, ctx, include_gif=False)
bat = await image.to_pil_image()
if image.error:
return await ctx.warn(f'Error getting second image: {image.error}')
else:
try:
bat = Image.open(
BytesIO(
await ctx.author.avatar_url_as(format='png').read()
)
)
except Exception as e:
return await ctx.error(f'Failed to download author\'s avatar: {e}')
result = await self.bot.loop.run_in_executor(
None, self.slap, robin, bat)
await ctx.send(file=discord.File(result, filename=f'slap.png'))
def slap(self, robin, bat):
template = Image.open('templates/slap.png')
bat = bat.convert('RGBA')
bat = mirror(bat.resize((220, 220), Image.ANTIALIAS).rotate(10, expand=True))
template.paste(bat, (460, 200), mask=bat.split()[3])
robin = robin.convert('RGBA')
robin = robin.resize((260, 260), Image.ANTIALIAS)
template.paste(robin, (200, 310), mask=robin.split()[3])
result = BytesIO()
template.save(result, format='PNG')
template.close()
bat.close()
robin.close()
return BytesIO(result.getvalue())
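    # Example chat invocations (hypothetical prefix "!", per usage_doc above):
    #   !slap @user
    #   !slap --batface <image> @user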
| 27.975309 | 85 | 0.566637 | 274 | 2,266 | 4.613139 | 0.416058 | 0.031646 | 0.033228 | 0.030063 | 0.099684 | 0.099684 | 0.099684 | 0.099684 | 0.099684 | 0.099684 | 0 | 0.019808 | 0.309356 | 2,266 | 80 | 86 | 28.325 | 0.787859 | 0 | 0 | 0.033333 | 0 | 0.016667 | 0.124007 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016667 | false | 0 | 0.1 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bdaca6657ffcf8af132b8355ef2a3c0fd26d275 | 308 | py | Python | tests/test_helpers.py | pyapp-org/pyapp.aiosmtplib | f928a7eb838b041d279d974f7cb555964764a410 | [
"BSD-3-Clause"
] | null | null | null | tests/test_helpers.py | pyapp-org/pyapp.aiosmtplib | f928a7eb838b041d279d974f7cb555964764a410 | [
"BSD-3-Clause"
] | 20 | 2020-07-31T05:07:07.000Z | 2022-02-11T19:02:03.000Z | tests/test_helpers.py | pyapp-org/pyapp.aiosmtplib | f928a7eb838b041d279d974f7cb555964764a410 | [
"BSD-3-Clause"
] | null | null | null | from unittest.mock import Mock
from pyapp_ext.aiosmtplib import helpers
class TestEmail:
def test_init(self, monkeypatch):
mock_factory = Mock()
monkeypatch.setattr(helpers, "get_client", mock_factory)
helpers.Email(name="foo")
mock_factory.assert_called_with("foo")
| 22 | 64 | 0.707792 | 38 | 308 | 5.526316 | 0.657895 | 0.157143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.201299 | 308 | 13 | 65 | 23.692308 | 0.853659 | 0 | 0 | 0 | 0 | 0 | 0.051948 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bdc283f9a85f238ab321674c1684e85da942ecb | 1,444 | py | Python | pywizlight/tests/test_bulb_light_strip_1_21_4.py | mikemakaroff/pywizlight | 0b32b917a064d9ca1be0ce9fb24ea68ce89993ed | [
"MIT"
] | 1 | 2022-03-30T22:42:51.000Z | 2022-03-30T22:42:51.000Z | pywizlight/tests/test_bulb_light_strip_1_21_4.py | mikemakaroff/pywizlight | 0b32b917a064d9ca1be0ce9fb24ea68ce89993ed | [
"MIT"
] | null | null | null | pywizlight/tests/test_bulb_light_strip_1_21_4.py | mikemakaroff/pywizlight | 0b32b917a064d9ca1be0ce9fb24ea68ce89993ed | [
"MIT"
] | null | null | null | """Tests for the Bulb API with a light strip."""
from typing import AsyncGenerator
import pytest
from pywizlight import PilotBuilder, wizlight
from pywizlight.bulblibrary import BulbClass, BulbType, Features, KelvinRange
from pywizlight.tests.fake_bulb import startup_bulb
@pytest.fixture()
async def light_strip() -> AsyncGenerator[wizlight, None]:
shutdown, port = await startup_bulb(
module_name="ESP20_SHRGB_01ABI", firmware_version="1.21.4"
)
bulb = wizlight(ip="127.0.0.1", port=port)
yield bulb
await bulb.async_close()
shutdown()
@pytest.mark.asyncio
async def test_setting_rgbww(light_strip: wizlight) -> None:
"""Test setting rgbww."""
await light_strip.turn_on(PilotBuilder(rgbww=(1, 2, 3, 4, 5)))
state = await light_strip.updateState()
assert state and state.get_rgbww() == (1, 2, 3, 4, 5)
@pytest.mark.asyncio
async def test_model_description_light_strip(light_strip: wizlight) -> None:
"""Test fetching the model description for a light strip."""
bulb_type = await light_strip.get_bulbtype()
assert bulb_type == BulbType(
features=Features(
color=True, color_tmp=True, effect=True, brightness=True, dual_head=False
),
name="ESP20_SHRGB_01ABI",
kelvin_range=KelvinRange(max=6500, min=2700),
bulb_type=BulbClass.RGB,
fw_version="1.21.4",
white_channels=2,
white_to_color_ratio=80,
)
| 32.088889 | 85 | 0.701524 | 195 | 1,444 | 5.010256 | 0.441026 | 0.092119 | 0.046059 | 0.038895 | 0.13306 | 0.079836 | 0 | 0 | 0 | 0 | 0 | 0.036689 | 0.188366 | 1,444 | 44 | 86 | 32.818182 | 0.796928 | 0.029086 | 0 | 0.060606 | 0 | 0 | 0.041953 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 1 | 0 | false | 0 | 0.151515 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bdc65f7f7f6d8254722d6ccff4a26998def2d9d | 47,068 | py | Python | casim/casim.py | pdebuyl/cancer_sim | 305492d5108e1fb50783e4f13ddf2e1cf5b08976 | [
"MIT"
] | 1 | 2022-02-16T03:34:44.000Z | 2022-02-16T03:34:44.000Z | casim/casim.py | pdebuyl/cancer_sim | 305492d5108e1fb50783e4f13ddf2e1cf5b08976 | [
"MIT"
] | 12 | 2020-03-16T20:59:21.000Z | 2020-09-18T08:41:09.000Z | casim/casim.py | pdebuyl/cancer_sim | 305492d5108e1fb50783e4f13ddf2e1cf5b08976 | [
"MIT"
] | 3 | 2020-09-16T12:41:19.000Z | 2021-03-11T23:19:24.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
__author__ = 'Luka Opasic, MD'
__email__ = 'opasic@evolbio.mpg.de'
__version__ = '1.1.0'
from argparse import ArgumentParser
from operator import itemgetter
from random import shuffle
from scipy.sparse import lil_matrix
from time import sleep, time
from timeit import default_timer as timer
import dill
import itertools
import logging
import math
import matplotlib.pyplot as plt
import numpy
import os
import pickle
import random as prng
import sys
from importlib.util import spec_from_file_location, module_from_spec
np = numpy
LOG_FORMAT_STR = '%(asctime)s %(levelname)s: %(message)s'
logging.basicConfig(
level=logging.INFO,
format=LOG_FORMAT_STR,
handlers=[
logging.StreamHandler(sys.stderr)
]
)
class CancerSimulatorParameters(object):
"""
:class CancerSimulatorParameters: Represents the parameters for a cancer simulation.
"""
def __init__(self,
matrix_size=None,
number_of_generations=None,
division_probability=None,
adv_mutant_division_probability=None,
death_probability=None,
adv_mutant_death_probability=None,
mutation_probability=None,
adv_mutant_mutation_probability=None,
number_of_mutations_per_division=None,
adv_mutation_wait_time=None,
number_of_initial_mutations=None,
tumour_multiplicity=None,
read_depth=None,
sampling_fraction=None,
plot_tumour_growth=None,
export_tumour=None,
sampling_positions=None,
):
"""
Construct a new CancerSimulationParameters object.
:param matrix_size: The size of the (square) grid in each dimension.
:type matrix_size: int (matrix_size > 0)
:param number_of_generations: The number of generations to simulate.
:type number_of_generations: int (number_of_generations > 0)
:param division_probability: The probability for a cell division to occur during one generation.
:type division_probability: float (0.0 <= division_probability <= 1.0)
:param adv_mutant_division_probability: The probability for the division of a cell with advantageous mutation to occur during one generation.
:type adv_mutant_division_probability: float (0.0 <= adv_mutant_division_probability <= 1.0)
:param death_probability: The probability for a cell to die during one generation.
:type death_probability: float (0.0 <= death_probability <= 1.0)
:param adv_mutant_death_probability: The probability for a cell with advantageous mutation to die during one generation.
:type adv_mutant_death_probability: float (0.0 <= adv_mutant_death_probability <= 1.0)
:param mutation_probability: The probability of mutation.
:type mutation_probability: float (0.0 <= mutation_probability <= 1.0)
:param adv_mutant_mutation_probability: The probability for a mutation to occur during one generation in a cell with adv. mutation.
:type adv_mutant_mutation_probability: float (0.0 <= adv_mutant_mutation_probability <= 1.0)
:param number_of_mutations_per_division: The number of mutations per division
:type number_of_mutations_per_division: int (0 < number_of_mutations_per_division)
:param adv_mutation_wait_time: The number of generations into the simulation after which the advantageous mutation is inserted.
:type adv_mutation_wait_time: int (adv_mutation_wait_time > 0)
:param number_of_initial_mutations: Number of mutations present in first cancer cell.
:type number_of_initial_mutations: int (number_of_initial_mutations >= 0)
:param tumour_multiplicity: Run in single or double tumour mode (i.e. consider growth of one single tumour or two tumours simultaneously). Possible values: "single", "double".
:type tumour_multiplicity: str
:param read_depth: The sequencing read depth (read length * number of reads / genome length). Default: 100.
:type read_depth: int (read_depth >= 0)
:param sampling_fraction: The fraction of cells to include in a sample. Default: 0.
:type sampling_fraction: float (0 <= sampling_fraction <= 1)
:param sampling_positions: The positions of cells to include in a sample. Default: Random position.
:type sampling_positions: List (or array) of tuples of ints. E.g. ([10,20], [2,31]).
:param plot_tumour_growth: Render graph of the tumour size as function of time. Default: True.
:type plot_tumour_growth: bool
:param export_tumour: Dump the tumour data to file. Default: True.
:type export_tumour: bool
"""
# Store parameters on the object.
self.matrix_size = matrix_size
self.number_of_generations = number_of_generations
self.division_probability = division_probability
self.adv_mutant_division_probability = adv_mutant_division_probability
self.death_probability = death_probability
self.adv_mutant_death_probability = adv_mutant_death_probability
self.mutation_probability = mutation_probability
self.adv_mutant_mutation_probability = adv_mutant_mutation_probability
self.number_of_mutations_per_division = number_of_mutations_per_division
self.adv_mutation_wait_time = adv_mutation_wait_time
self.number_of_initial_mutations = number_of_initial_mutations
self.tumour_multiplicity = tumour_multiplicity
self.read_depth = read_depth
self.sampling_fraction = sampling_fraction
self.sampling_positions = sampling_positions
self.plot_tumour_growth = plot_tumour_growth
self.export_tumour = export_tumour
@property
def matrix_size(self):
return self.__matrix_size
@matrix_size.setter
def matrix_size(self, val):
self.__matrix_size = check_set_number(val, int, 10, 1, None )
@property
def number_of_generations(self):
return self.__number_of_generations
@number_of_generations.setter
def number_of_generations(self, val):
self.__number_of_generations = check_set_number(val, int, 2, 1, None)
@property
def division_probability(self):
return self.__division_probability
@division_probability.setter
def division_probability(self, val):
self.__division_probability = check_set_number(val, float, 1, 0.0, 1.0)
@property
def adv_mutant_division_probability(self):
return self.__adv_mutant_division_probability
@adv_mutant_division_probability.setter
def adv_mutant_division_probability(self, val):
self.__adv_mutant_division_probability = check_set_number(val, float, 1, 0.0, 1.0)
@property
def death_probability(self):
return self.__death_probability
@death_probability.setter
def death_probability(self, val):
self.__death_probability = check_set_number(val, float, 0, 0.0, 1.0)
@property
def adv_mutant_death_probability(self):
return self.__adv_mutant_death_probability
@adv_mutant_death_probability.setter
def adv_mutant_death_probability(self, val):
self.__adv_mutant_death_probability = check_set_number(val, float, 0.0, 0.0, 1.0)
@property
def mutation_probability(self):
return self.__mutation_probability
@mutation_probability.setter
def mutation_probability(self, val):
self.__mutation_probability = check_set_number(val, float, 0.8, 0.0, 1.0)
@property
def adv_mutant_mutation_probability(self):
return self.__adv_mutant_mutation_probability
@adv_mutant_mutation_probability.setter
def adv_mutant_mutation_probability(self, val):
self.__adv_mutant_mutation_probability = check_set_number(val, float, 1.0, 0.0, 1.0)
@property
def number_of_mutations_per_division(self):
return self.__number_of_mutations_per_division
@number_of_mutations_per_division.setter
def number_of_mutations_per_division(self, val):
self.__number_of_mutations_per_division = check_set_number(val, int, 1, 0)
@property
def adv_mutation_wait_time(self):
return self.__adv_mutation_wait_time
@adv_mutation_wait_time.setter
def adv_mutation_wait_time(self, val):
self.__adv_mutation_wait_time = check_set_number(val, int, 50000, 0)
@property
def number_of_initial_mutations(self):
return self.__number_of_initial_mutations
@number_of_initial_mutations.setter
def number_of_initial_mutations(self, val):
self.__number_of_initial_mutations = check_set_number(val, int, 1, 0)
@property
def tumour_multiplicity(self):
return self.__tumour_multiplicity
@tumour_multiplicity.setter
def tumour_multiplicity(self,val):
if val is None:
val = 'single'
if not isinstance(val, str):
raise TypeError("Wrong type for parameter 'tumour_multiplicity'. Expected str, got %s" % type(val))
if val not in ["single", "double"]:
raise ValueError("Only 'single' and 'double' are allowed values for parameter 'tumour_multiplicity'.")
self.__tumour_multiplicity = val
@property
def read_depth(self):
return self.__read_depth
@read_depth.setter
def read_depth(self, val):
self.__read_depth = check_set_number(val, int, 100, 0, None)
@property
def sampling_fraction(self):
return self.__sampling_fraction
@sampling_fraction.setter
def sampling_fraction(self, val):
self.__sampling_fraction = check_set_number(val, float, 0.0, 0.0, 1.0)
@property
def sampling_positions(self):
return self.__sampling_positions
@sampling_positions.setter
def sampling_positions(self, val):
if val is not None:
for pos in val:
                if not hasattr(pos, "__iter__"):
                    raise TypeError("Each sampling position must be a tuple of (x, y) coordinates.")
if len(pos) != 2:
raise ValueError("Sampling positions must be list of 2-tuples (x,y coordinates).")
for xy in pos:
if not isinstance(xy, int):
raise TypeError("Sampling position must be integer")
if xy < 0 or xy > self.matrix_size:
raise ValueError("Sampling position must be positive integer not larger than the matrix size.")
self.__sampling_positions = val
@property
def plot_tumour_growth(self):
return self.__plot_tumour_growth
@plot_tumour_growth.setter
def plot_tumour_growth(self, val):
if val is None:
val = True
try:
val = bool(val)
except:
raise TypeError("Incompatible type: Expected bool, got {}.".format(type(val)))
self.__plot_tumour_growth = val
@property
def export_tumour(self):
return self.__export_tumour
@export_tumour.setter
def export_tumour(self, val):
if val is None:
val = True
try:
val = bool(val)
except:
raise TypeError("Incompatible type: Expected bool, got {}.".format(type(val)))
self.__export_tumour = val
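# A hypothetical construction of a parameter set; any argument left as None
# falls back to the default enforced by the corresponding setter above:
#
#   params = CancerSimulatorParameters(matrix_size=100,
#                                      number_of_generations=20,
#                                      sampling_fraction=0.1)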
class CancerSimulator(object):
"""
:class CancerSimulator: Represents the Monte-Carlo simulation of cancer tumour growth on a 2D grid.
"""
def __init__(self, parameters=None,
seed=None,
outdir=None,
):
"""
Construct a new CancerSimulation.
:param parameters: The cancer simulation parameters
:type parameters: CancerSimulationParameters
:param seed: The random seed.
:type seed: int
:param outdir: The directory where simulation data is saved. Default: "casim_out/" in the current working directory.
:type outdir: (str || path-like object)
"""
if parameters is None:
raise ValueError("No parameters given, simulation cannot execute.")
self.parameters = parameters
# Setup internal variables.
self.__mtx = lil_matrix((self.parameters.matrix_size, self.parameters.matrix_size), dtype=int)
self.__mut_container = None
self.__xaxis_histogram = None
self.__biopsy_timing = None
self.__beneficial_mutation = []
self.__growth_plot_data = None
self.__s = self.parameters.number_of_mutations_per_division
        # Keep track of how many steps were performed in the previous run if this
        # is a reloaded run.
self.__init_step = 0
# Keep track of mutation count in previous run if this is a rerun.
self.__mutation_counter = 1
# Handle direct parameters.
self.seed = seed
self.outdir = outdir
self.__ploidy=2
self.__mut_multiplier=[self.__s]*100000
if self.parameters.tumour_multiplicity == 'single':
logging.info('Running in single tumour mode.')
initLoc=(int(self.parameters.matrix_size*0.5),int(self.parameters.matrix_size*0.5))
logging.info("First cell at %s.", str(initLoc))
self.__mtx[initLoc]=1
self.__mut_container=[(0, 0), (0, 1)]
self.__pool=[initLoc]
#start the pool of cancer cells by adding the initial cancer cell into it
if self.parameters.tumour_multiplicity == 'double':
            logging.info('Running in double tumour mode.')
### COMMENT: Should these be given as parameters?
distance_between_tumours=0.05
initLoc=(int(self.parameters.matrix_size*0.45),int(self.parameters.matrix_size*0.5))
secondinitLoc=(int(self.parameters.matrix_size*0.65),int(self.parameters.matrix_size*0.51))
self.__mtx[initLoc]=1
self.__mtx[secondinitLoc]=2
self.__mut_container=[(0, 0), (0, 1), (0,2)]
self.__pool=[initLoc, secondinitLoc]
# create lists used in loops
self.__growth_plot_data=[]
@property
def seed(self):
return self.__seed
@seed.setter
def seed(self, val):
""" Set the random seed for the simulation.
:param val: The seed to set
:type val: int
"""
# If not given: Set it to number of seconds since Jan. 1. 1970 (T0)
if val is None:
val = int(time())
if not isinstance(val, int):
raise TypeError("Wrong type for parameter 'seed'. Expected int, got %s." % type(val))
if not val > 0:
raise ValueError("The parameter 'seed' must a positive integer (int).")
self.__seed = val
@property
def dumpfile(self):
return self.__dumpfile
@property
def outdir(self):
return self.__outdir
@outdir.setter
def outdir(self, val):
""" Create the output directory if not existing. If simulation data is already present inside an existing directory, the simulation aborts. """
self._setup_io(val)
def _setup_io(self, outdir):
""" """
""" Setup the output directories.
:param outdir: The directory under which all simulation output will be stored.
:type outdir: str
:raises: IOError (Directory for this seed already exists)"""
self.__outdir = './outdir'
self.__seeddir = './seeddir'
self.__logdir = './logdir'
self.__simdir = './simdir'
if outdir is None:
outdir = "casim_out"
# Create top-level outdir.
if not os.path.exists(outdir):
os.mkdir(outdir)
self.__outdir = outdir
# Check if directory for this seed already exists. Bail out if yes.
seeddir = os.path.join(outdir, 'cancer_%d' % self.__seed)
if os.path.exists(seeddir):
raise IOError("The directory %s already exists. Cowardly refusing to overwrite. Please specify another seed or a different outdir." % seeddir)
os.mkdir(seeddir)
self.__seeddir = seeddir
# Setup dump file
self.__dumpfile = os.path.join(self.__seeddir, 'cancer_sim.py.dill')
# Create subdirectories.
logdir = os.path.join(seeddir, "log")
simdir = os.path.join(seeddir, "simOutput")
os.mkdir(logdir)
os.mkdir(simdir)
# Store on object.
self.__logdir = logdir
self.__simdir = simdir
# Configure the logging filehandler.
root_logger = logging.getLogger()
fhandler = logging.FileHandler(os.path.join(logdir, "casim.log"))
fhandler.setFormatter(root_logger.handlers[0].formatter)
root_logger.addHandler(fhandler)
def extend_sample(self, sample_center, sample_size):
""" Takes a subset of cells from the tumour positioned around single input cell with specific coordinates. Output is a list of tuples of cells belonging to the sample.
        :param sample_center: coordinates of the cell that will be the center of the sample
        :type sample_center: tuple
:param sample_size: The size of the sample (fraction of total cells.)
:type sample_size: float
"""
biopsy_size=math.ceil(sample_size*len(self.__pool))
#look at z-tier neighbours around sample_center
for z in range(1,len(self.__pool)):
expanded_sample=[]
for i in range(-z, z+1):
for j in range(-z, z+1):
nc=(sample_center[0]+i, sample_center[1]+j)
#if surrounding cell in the pool add it to sample list
if nc in self.__pool:
expanded_sample.append(nc)
# if chunk is larger than wanted percentage of total tumour
if len(expanded_sample)>biopsy_size:
#remove last value until desired chunk size
while len(expanded_sample)>biopsy_size:
expanded_sample=expanded_sample[:-1]
break
if len(expanded_sample) == 0:
logging.warning("""
Sample is empty. Consider enlarging the `sampling_fraction` parameter.
If that does not help, you may be sampling an empty region of the tumour matrix.
Inspect the tumour matrix data `mtx.p` in the output directory""")
return expanded_sample
def dump(self):
""" Serialize the object. The current simulation will be stored in a machine readable
format to <OUTDIR>/cancer_<SEED>/cancer_sim.py.dill, where <OUTDIR> is the specified output
directory or (if the latter was not defined) a temporary directory."""
with open(self.dumpfile, 'wb') as fp:
dill.dump(self, fp)
def run(self):
""" Run the simulation.
:return: 0 if the run finishes successfully.
After a successful run, simulation output and log will be written to
the output directory `<DIR>/cancer_<SEED>/simOutput` and
`<DIR>/cancer_<SEED>/log`, respectively. Simulation output is split into
several files:
- `mtx_VAF.txt` is a datafile with three columns: `mutation_id` lists the index of
each primary mutation, `additional_mut_id` indexes the subsequent mutations that occur in a cell of
a given `mutation_id`; `frequency` is the frequency at which a given mutation occurs.
- `sample_out_XXX_YYY.txt` lists all mutations of the artificial sample
taken from the whole tumour. Columns are identical to `mtx_VAF.txt`.
- `wholeTumourVAFHistogram.pdf` contains a histogram plot of the
mutation frequencies for the whole tumour
- `sampleHistogram_XXX_YYY.pdf` is the mutation frequency histogram for
the sampled portion of the tumour. The two numbers XXX and YYY are the
positional coordinates (grid indices) in the tumour matrix.
- `mtx.p` is the serialized (aka "pickled") 2D tumour matrix in sparse
matrix format.
- `mut_container.p` is the serialized (aka "pickled") mutation list, a
list of tuples `[t_i]`. Each tuple `t_i` consists of two values, `t_i =
(c_i, m_i)`. The first element `c_i` is the cell number in which the i'th mutation
occurs. The second element, `m_i`, is the mutation index `m_i=i`.
"""
# Setup square matrix.
matrix_size=self.parameters.matrix_size
self.__pre_run_log()
logging.info('Tumour growth in progress.')
start = timer()
seed=self.__seed
prng.seed(seed)
numpy.random.seed(seed)
#run growth function
#output variable (true_vaf) is list of tuples with mutation id and frequency of mutation in the tumour [(mut_id, frequency),...]
true_vaf=self.tumour_growth()
# Export a graph containing change in tumour size over time
if self.parameters.plot_tumour_growth:
self.growth_plot()
# Sampling
# Setup list of coordinates that serve as center of sampling [(x,y)]
samples_coordinates_list=self.__find_sample_coordinates()
#iterate over each sample from the list of samples
for center_cell_coordinates in samples_coordinates_list:
#get sample of certain size
extended_sample=self.extend_sample(center_cell_coordinates, sample_size=self.parameters.sampling_fraction)
#extract mutation profiles of all cells found in the sample
dna_from_sample=self.mutation_reconstruction(extended_sample)
#count the number of detected mutations and calculate frequency of each mutation (getFrequencies=False gives count for each mutation)
counted_sample=self.count_mutations(dna_from_sample, get_frequencies=True)
if self.parameters.number_of_mutations_per_division==1 and self.parameters.number_of_initial_mutations==1:
#export mutational profile of the sample
self.export_sample(counted_sample, center_cell_coordinates)
if self.parameters.number_of_mutations_per_division>1 or self.parameters.number_of_initial_mutations>1:
                #increases number of mutations in the tumour by the factor from params.number_of_mutations_per_division
increased_mut_number_sample=self.increase_mut_number(counted_sample)
# Additional mutation serves to distinguish different mutations that occured
# in the same cell at the same time.
# Introduce sequencing noise, works only with increased number of mutations
noisy_data=self.simulate_seq_depth(increased_mut_number_sample)
self.export_sample(noisy_data, center_cell_coordinates)
#creates and exports histogram of mutational frequencies
self.export_histogram(noisy_data, center_cell_coordinates)
end=timer()
self.__post_run_log()
logging.info("Consumed Wall time of this run: %f s.", end - start)
return 0
def __pre_run_log(self):
message = ""
logging.info("Ready to start CancerSim run with these parameters:")
for k,v in self.parameters.__dict__.items():
logging.info("%s = %s", k.split("__")[-1], v)
def __post_run_log(self):
logging.info("CancerSim run has finished.")
logging.info("Simulation output written to: %s.", self.__simdir)
logging.info("Log files written to: %s.""", self.__logdir)
def export_histogram(self, sample_data, sample_coordinates):
""" Create and export histogram of mutational frequencies (aka variant allelic frequencies)
:param sample_data: List of mutations and their frequencies
:type sample_data: list
:param sample_coordinates: coordinates of central sample cell
:type sample_coordinates: tuple (i,j) of cell indices
"""
xaxis_histogram=np.arange(0.0,1,0.01)
        #set detection limit of the mutation in the sample (depends on the sequencing machine and sequencing depth)
detection_limit=0.05
        #plot all mutations with frequencies above the detection threshold
fig, ax = plt.subplots()
_ = ax.hist([s[1] for s in sample_data if s[1]>detection_limit], bins=xaxis_histogram)
_ = ax.set_xlabel('Mutation frequency')
_ = ax.set_ylabel('Number of mutations')
#export VAF histogram of the whole tumour
if sample_coordinates=='whole_tumour':
figure_path = os.path.join(self.__simdir,'wholeTumourVAFHistogram.pdf')
#export VAF histogram of sample
else:
figure_path = os.path.join(self.__simdir,'sampleHistogram_'+str(sample_coordinates[0])+'_'+str(sample_coordinates[1])+'.pdf')
fig.savefig(figure_path)
plt.close(fig)
def export_sample(self, sample_data, sample_coordinates):
""" Export (write to disk) frequencies of samples.
:param sample_data: List of mutations and their frequencies
:type sample_data: list
:param sample_coordinates: coordinates of central sample cell
:type sample_coordinates: tuple (i,j) of cell indices
"""
if len(sample_data) == 0:
return
fname = os.path.join(self.__simdir, 'sample_out_'+str(sample_coordinates[0])+'_'+str(sample_coordinates[1])+'.txt')
logging.info("Writing sampled tumour data to %s.", fname)
with open(fname,'w') as sample_vaf_ex:
if len(sample_data[0])==2:
sample_vaf_ex.write('mutation_id'+'\t'+'frequency'+'\n')
for i in sample_data:
sample_vaf_ex.write(str(i[0])+'\t'+str(i[1])+'\n')
elif len(sample_data[0])==3:
sample_vaf_ex.write('mutation_id'+'\t'+'additional_mut_id'+'\t'+'frequency'+'\n')
for i in sample_data:
sample_vaf_ex.write(str(i[0])+'\t'+str(i[2])+'\t'+str(i[1])+'\n')
def export_tumour_matrix(self, tumour_mut_data):
""" Export (write to disk) the matrix of tumour cells.
        :param tumour_mut_data: The tumour mutation data to export
        :type tumour_mut_data: array like
"""
if not self.parameters.export_tumour:
return
fname = os.path.join(self.__simdir, 'mtx_VAF.txt')
logging.info('Writing tumour profile to %s.', fname)
# save VAF to text file
with open(fname,'w') as vaf_ex:
if len(tumour_mut_data[0])==2:
vaf_ex.write('mutation_id'+'\t'+'frequency'+'\n')
for i in tumour_mut_data:
vaf_ex.write(str(i[0])+'\t'+str(i[1])+'\n')
if len(tumour_mut_data[0])==3:
vaf_ex.write('mutation_id'+'\t'+'additional_mut_id'+'\t'+'frequency'+'\n')
for i in tumour_mut_data:
vaf_ex.write(str(i[0])+'\t'+str(i[2])+'\t'+str(i[1])+'\n')
# Pickle the data.
fname = os.path.join(self.__simdir, 'mtx.p')
logging.info('Writing simulation matrix to %s.', fname)
with open(fname,'wb') as fp:
pickle.dump(self.__mtx, fp)
fname = os.path.join(self.__simdir, 'mut_container.p')
logging.info('Writing mutation list to %s.', fname)
with open(fname,'wb') as fp:
pickle.dump(self.__mut_container, fp)
    def growth_plot(self):
        """ Plot the number of cancer cells over time and export it as a .pdf file. """
if self.outdir is None:
return
fig, ax = plt.subplots()
_ = ax.plot([x/2 for x in range(len(self.__growth_plot_data))], self.__growth_plot_data)
_ = ax.set_xlabel('Division cycle')
_ = ax.set_ylabel('Number of tumour cells')
figure_path = os.path.join(self.__simdir,'growthCurve.pdf')
fig.savefig(figure_path)
plt.close(fig)
logging.info("Growth curve graph written to %s.", figure_path)
def count_mutations(self,mutation_list, get_frequencies):
""" Count number each time mutation is detected in the sample
:param mutation_list: mutation profiles of each cell in the sample
:type mutation_list: list of lists
"""
mut_count=[]
#flatten the list of mutations
reduced=list(itertools.chain(*[j for j in mutation_list]))
#count number of unique mutations in whole tumour at time step
for i in set(reduced):
mut_count.append((i, float(reduced.count(i))))
#sort list of mutations based on the mutation id just in case they are not sorted
mut_count=sorted(mut_count,key=itemgetter(0))
mut_freq=[]
if get_frequencies:
for mutation in mut_count:
                #get the frequency of each mutation by dividing its absolute count
                #by the count of mutation "1"; that count is a proxy for sample
                #size because every cancer cell carries mutation "1"
mut_freq.append((mutation[0],(mutation[1]/mut_count[0][1])/self.__ploidy))
return mut_freq
return mut_count
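    # Worked example (assumed numbers): if mutation 1 is counted in 50 cells
    # and mutation 7 in 10 cells, then with ploidy 2 the reported frequency of
    # mutation 7 is (10 / 50) / 2 = 0.1.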
def simulate_seq_depth(self, extended_vaf):
""" Adds a beta binomial noise to sampled mutation frequencies
:param extended_vaf: The list of cells to take a sample from.
:type extended_vaf: list
"""
depth=np.random.poisson(self.parameters.read_depth, len(extended_vaf))
AF=np.array([i[1] for i in extended_vaf])
samp_alleles=np.random.binomial(depth, AF)
VAF = samp_alleles/depth
return [(extended_vaf[i][0], VAF[i], extended_vaf[i][2]) for i in range(len(extended_vaf)) if VAF[i]!=0]
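    # Worked example (assumed numbers): with read_depth=100, a site with true
    # frequency 0.25 receives a depth d ~ Poisson(100) and an alt-read count
    # a ~ Binomial(d, 0.25); the reported VAF is a/d, e.g. 23/97 ~ 0.237.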
def increase_mut_number(self, original_mut_list):
""" Scale up the number of mutations according to the 'number_of_initial_mutations' 'and number_of_mutations_per_division' parameter.
        :param original_mut_list: The list of mutations to scale.
        :type original_mut_list: list
"""
extended_mut_list=[]
for i in original_mut_list:
            #duplicate the first mutation N times, adding the additional
            #clonal mutations present in the first cancer cell
if i[0]==1:
for j in range(self.parameters.number_of_initial_mutations):
extended_mut_list.append((i[0] , float(i[1]),j))
else:
                # for all subsequent mutations, duplicate each one according to
                # the per-mutation multiplier stored in self.__mut_multiplier
for j in range(self.__mut_multiplier[i[0]]):
extended_mut_list.append((i[0] , float(i[1]),j))
# Return the multiplied mutations.
return extended_mut_list
def terminate_cell(self, cell, step):
""" Kills cancer cell and removes it from the pool of cancer cells
:param cell: cell chosen for termination
:type cell: tuple (i,j) of cell indices.
:param int step: The time step in the simulation
"""
#removes cell from the pool
self.__pool.remove(cell)
#resets value of position on matrix to zero
self.__mtx[cell]=0
def death_step(self, step):
""" Takes a group of random cells and kills them
:param int step: The time step in the simulation
"""
        # Iterate over a copy: terminate_cell() removes cells from self.__pool,
        # and removing from a list while iterating over it would skip cells.
        for cell in list(self.__pool):
beneficial = self.__mtx[cell] in self.__beneficial_mutation
r = prng.random()
if (beneficial and r < self.parameters.adv_mutant_death_probability) or r < self.parameters.death_probability:
self.terminate_cell(cell, step)
def mutation_reconstruction(self,cells_to_reconstruct):
""" Reconstructs list of mutations of individual cell by going thorough its ancestors.
        :param cells_to_reconstruct: Cells for which the mutational profiles will be recovered.
        :type cells_to_reconstruct: list of tuples [(i,j)] of cell indices.
"""
# Return container.
reconstructed = []
        # Map mutation count to origin (could save this step if elements in
        # mut_container were (c,o) instead of (o,c)).
lookup_map = dict([(k,v) for v,k in self.__mut_container])
# Loop over cell indices.
for i in cells_to_reconstruct:
# Get cell.
cell = self.__mtx[i]
logging.debug("Untangling cell %d.", cell)
# Setup intermediate container.
mut_prof=[]
# Start with the first mutation of this cell.
mc=self.__mut_container[cell]
# Get mutation count
m = mc[1]
# Now go through the mutation container and trace back the history.
while m>0:
# Append current mutation count.
mut_prof.append(m)
# Get mutation origin of this count.
m = lookup_map[m]
# Store on return container in reverse order.
reconstructed.append(mut_prof[::-1])
return reconstructed
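    # Worked example (assumed data): with
    #   mut_container = [(0, 0), (0, 1), (1, 2), (2, 3)]
    # the lookup_map is {0: 0, 1: 0, 2: 1, 3: 2}. A cell whose matrix entry is 3
    # traces back 3 -> 2 -> 1 (stopping at 0) and yields the profile [1, 2, 3].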
def tumour_growth(self):
""" Run the tumour growth simulation. """
# setup a counter to keep track of number of mutations that occur in this run.
# take into account mutations from previous runs if rerun.
if self.parameters.tumour_multiplicity == 'single':
mutation_counter = self.__mutation_counter
if self.parameters.tumour_multiplicity == 'double':
mutation_counter = self.__mutation_counter + 1
# Loop over time steps.
for step in range(self.__init_step, self.__init_step+self.parameters.number_of_generations):
# logging.debug("Cell matrix: \n%s", str(self.__mtx.todense()))
logging.debug('%d/%d generation started', step+1, self.__init_step + self.parameters.number_of_generations + 1)
# setup a temporary list to store the mutated cells in this iteration.
temp_pool=[]
# reshuffle the order of pool to avoid that cells with low number divide always first.
shuffle(self.__pool)
# logging.debug('list of cancer cells %s', str(self.__pool))
# Loop over all cells in the pool.
for cell in self.__pool:
logging.debug('cell to divide %s', str(cell))
# Get the existing neighboring cells.
neigh=self.neighbours(cell)
# first condition: if available neighbors
if neigh:
# if cell has beneficial mutation.
beneficial = self.__mtx[cell] in self.__beneficial_mutation
r = prng.random()
if (beneficial and r < self.parameters.adv_mutant_division_probability) or r < self.parameters.division_probability:
mutation_counter = self.division(cell, beneficial, neigh, step, mutation_counter, temp_pool)
# add new cancer cells to a pool of cells available for division next round
[self.__pool.append(v) for v in temp_pool]
self.__growth_plot_data.append(len(self.__pool))
self.death_step(step)
self.__growth_plot_data.append(len(self.__pool))
logging.info("All generations finished. Starting tumour reconstruction.")
# Update internal step counter if we dump and reload.
self.__init_step = step+1
self.__mutation_counter = mutation_counter
# Reconstruct mutation history.
reconstructed = self.mutation_reconstruction(self.__pool)
logging.info("Reconstruction done, get statistics.")
mutation_counts=self.count_mutations(reconstructed, get_frequencies=True)
if self.parameters.number_of_mutations_per_division==1 and self.parameters.number_of_initial_mutations==1:
self.export_tumour_matrix(mutation_counts)
return mutation_counts
if self.parameters.number_of_mutations_per_division>1 or self.parameters.number_of_initial_mutations>1:
            increased_mut_number_tumour=self.increase_mut_number(mutation_counts) #increases number of mutations in the tumour by the factor from params.number_of_mutations_per_division
noisy_data=self.simulate_seq_depth(increased_mut_number_tumour) #introduce sequencing noise, works only with increased number of mutations
self.export_tumour_matrix(noisy_data)
center_cell_coordinates='whole_tumour'
self.export_histogram(noisy_data, center_cell_coordinates) #creates and exports histogram of mutational frequencies
return noisy_data
def neighbours(self, cell):
""" Returns the nearest-neighbor cells around the given node.
:param cell: The node for which to calculate the neighbors.
:type cell: tuple (i,j) of cell indices.
"""
# make list of all surrounding nodes
neighboursList=[
(cell[0]-1, cell[1]+1),
(cell[0] , cell[1]+1),
(cell[0]+1, cell[1]+1),
(cell[0]-1, cell[1] ),
(cell[0]+1, cell[1] ),
(cell[0]-1, cell[1]-1),
(cell[0] , cell[1]-1),
(cell[0]+1, cell[1]-1)]
        # return nodes that are not cancerous, i.e. do not yet contain a mutation index
return [y for y in neighboursList if self.__mtx[y]==0]
def place_to_divide(self):
""" Selects random unoccupied place on the matrix where cell will divide."""
a = prng.randint(0,self.parameters.matrix_size-1)
b = prng.randint(0,self.parameters.matrix_size-1)
random_place_to_divide=(a,b)
if self.__mtx[random_place_to_divide]==0:
return a, b
else:
while self.__mtx[random_place_to_divide]!=0:
a = prng.randint(0,self.parameters.matrix_size-1)
b = prng.randint(0,self.parameters.matrix_size-1)
random_place_to_divide=(a,b)
return a, b
def division(self, cell, beneficial, neighbors, step, mutation_counter, pool):
""" Perform a cell division.
:param tuple cell: The mother cell coordinates.
:param bool beneficial: Flag to indicate if the cell carries the beneficial mutation.
:param list neighbors: The neighboring cells.
:param int step: The time step in the simulation
:param int mutation_counter: The counter of mutations to be updated
:param list pool: The (temporary) pool of cells.
"""
# Draw a free neighbor.
place_to_divide=prng.choice(neighbors)
pool.append(place_to_divide)
if beneficial:
logging.info("Division of beneficial mutation carrier. Cell index = %s, mutation index = %d, place_to_divide=%s", str(cell), self.__mtx[cell], str(place_to_divide))
mutation_counter = self.mutation(cell, neighbors, step, mutation_counter, pool,place_to_divide, beneficial)
return mutation_counter
def mutation(self, *args):
""" Perform a mutation.
:param cell: At which cell the mutation occurs
:param neighbors: The neighboring cells
:param mutation_counter: The current number of mutations, to be incremented.
:param pool: The pool of all cells.
:param place_to_divide: The position at which the mutation occurs.
:param beneficial: Flag to control whether the mutation is beneficial or not.
"""
cell, neighbors, step, mutation_counter, pool, place_to_divide, beneficial = args
# Mutation.
if prng.random()<self.parameters.mutation_probability:
# Increment mutation counter.
mutation_counter=mutation_counter+1
            # The new cell gets the index of the newest mutation (the current length of the mutation container)
self.__mtx[place_to_divide]=len(self.__mut_container)
self.__mut_container.append((self.__mut_container[self.__mtx[cell]][1], mutation_counter))
# Log
logging.debug('Neighbor cell has new index %d', self.__mtx[place_to_divide])
logging.debug("%d, %d", self.__mut_container[self.__mtx[cell]][1], mutation_counter)
# logging.debug('mut container updated: %s', str(self.__mut_container))
if beneficial:
self.__beneficial_mutation.append(int(self.__mtx[place_to_divide]))
logging.info("Mutation of beneficial mutation carrier. Cell index = %s, mutation index = %d, place to divide = %s", cell, self.__mtx[cell], str(place_to_divide))
else:
# Decide whether an advantageous mutation occurs.
if prng.random()<self.parameters.adv_mutant_mutation_probability \
and len(self.__beneficial_mutation)==0 \
and step==self.parameters.adv_mutation_wait_time:
logging.info('New beneficial mutation: %d', int(self.__mtx[place_to_divide]))
self.__beneficial_mutation.append(int(self.__mtx[place_to_divide]))
# Mother cell mutates
mutation_counter=mutation_counter+1
self.__mut_container.append((self.__mut_container[self.__mtx[cell]][1], mutation_counter))
# Update mutation list.
self.__mtx[cell]=len(self.__mut_container)-1
# No new mutation.
else:
logging.debug('No new mutation in normal division, inheriting from parent')
self.__mtx[place_to_divide]=self.__mtx[cell]
return mutation_counter
def __find_sample_coordinates(self):
""" """
""" Find the sample coordinates based on the list of coordinates given
at startup. """
# If no positions where given, find the center of the matrix.
if self.parameters.sampling_positions is None:
self.parameters.sampling_positions = [prng.choice(self.__pool)]
return self.parameters.sampling_positions
def main(arguments):
""" The entry point for the command line interface.
:param arguments: The command line arguments for the cancer simulation tool.
:type arguments: Namespace
"""
parameters = CancerSimulatorParameters()
if os.path.isfile(arguments.params):
spec = spec_from_file_location("params", arguments.params)
params = module_from_spec(spec)
spec.loader.exec_module(params)
parameters = CancerSimulatorParameters(
matrix_size=params.matrix_size,
number_of_generations=params.number_of_generations,
division_probability=params.division_probability,
adv_mutant_division_probability=params.adv_mutant_division_probability,
death_probability=params.death_probability,
adv_mutant_death_probability=params.adv_mutant_death_probability,
mutation_probability=params.mutation_probability,
adv_mutant_mutation_probability=params.adv_mutant_mutation_probability,
number_of_mutations_per_division=params.number_of_mutations_per_division,
adv_mutation_wait_time=params.adv_mutation_wait_time,
number_of_initial_mutations=params.number_of_initial_mutations,
tumour_multiplicity=params.tumour_multiplicity,
sampling_fraction=params.sampling_fraction,
sampling_positions=params.sampling_positions,
read_depth=params.read_depth,
export_tumour=params.export_tumour,
plot_tumour_growth=params.plot_tumour_growth,
)
# Set loglevel.
loglevel = {0 : logging.WARNING,
1 : logging.INFO,
2 : logging.DEBUG,
}
if not arguments.loglevel in loglevel.keys():
arguments.loglevel = 0
logging.getLogger().setLevel(loglevel[arguments.loglevel])
casim = CancerSimulator(parameters, seed=arguments.seed, outdir=arguments.outdir)
return (casim.run())
def check_set_number(value, typ, default=None, minimum=None, maximum=None):
    """ Check that a value is an instance of the given type and lies within [minimum, maximum] if given. """
    if value is None:
        return default
    if not isinstance(value, typ):
        try:
            value = typ(value)
        except (TypeError, ValueError):
            raise TypeError("Incompatible type: Expected {0}, got {1}.".format(typ, type(value)))
    if minimum is not None:
        if value < minimum:
            raise ValueError("Value must be at least {}.".format(minimum))
    if maximum is not None:
        if value > maximum:
            raise ValueError("Value must be at most {}.".format(maximum))
    return value
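def _check_set_number_examples():
    """ Usage sketch for check_set_number; illustrative only, never called by
    the simulator. """
    assert check_set_number(None, int, default=10) == 10   # None falls back to the default.
    assert check_set_number("5", int) == 5                 # Compatible values are coerced.
    assert check_set_number(0.5, float, minimum=0.0, maximum=1.0) == 0.5
    try:
        check_set_number(2.0, float, minimum=0.0, maximum=1.0)
    except ValueError:
        pass                                               # Out-of-range values raise.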
def load_cancer_simulation(dumpfile):
    """ Unpickle a cancer simulation from a dill generated dump.
    :param dumpfile: Path to the file that contains the dumped object.
    :type dumpfile: str
    """
    with open(dumpfile, 'rb') as fp:
        obj = dill.load(fp)
    return obj
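# Usage sketch for restoring a pickled run (the dump path is a placeholder):
#
#     simulation = load_cancer_simulation('cancer_sim_output/cancer_sim.dill')
#     simulation.run()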
if __name__ == "__main__":
    # Entry point
    # Setup the command line parser.
    parser = ArgumentParser()
    # Parameters file.
    parser.add_argument("-p",
                        "--params",
                        help="""Path to the python file holding the simulation
                        parameters. Defaults to `params.py` in the current working
                        directory. If no file is found, default parameters will
                        be chosen, see the API for CancerSimulatorParameters for
                        details.""",
                        default="params.py",
                        type=str,
                        metavar="PARAMS",
                        )
    # Seed parameter.
    parser.add_argument("-s",
                        "--seed",
                        help="The prng seed.",
                        type=int,
                        default=1,
                        metavar="SEED",
                        )
    # Output directory.
    parser.add_argument("-o",
                        "--outdir",
                        dest="outdir",
                        metavar="DIR",
                        default=None,
                        help="Directory where simulation data is saved.",
                        type=str,
                        )
    # Verbosity level.
    parser.add_argument("--verbose",
                        "-v",
                        dest="loglevel",
                        action='count',
                        default=0,
                        help="Increase the verbosity level by adding 'v's."
                        )
    # Parse the arguments.
    arguments = parser.parse_args()
    sys.exit(main(arguments))
| 39.028192 | 225 | 0.640754 | 5,867 | 47,068 | 4.914948 | 0.123402 | 0.022472 | 0.018865 | 0.015259 | 0.353759 | 0.26134 | 0.196456 | 0.165661 | 0.144264 | 0.110522 | 0 | 0.008574 | 0.27643 | 47,068 | 1,205 | 226 | 39.060581 | 0.838124 | 0.287286 | 0 | 0.156986 | 0 | 0.00471 | 0.095262 | 0.003671 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103611 | false | 0 | 0.026688 | 0.031397 | 0.199372 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bde1f82acb41df2e414dc677ad1aaaf4d992610 | 7,296 | py | Python | asar_pi_applications/asar_web_server/asar_web_server/asar_web_server.py | ssnover/msd-p18542 | 32bef466f9d5ba55429da2119a14081b3e411d0b | [
"MIT"
] | 3 | 2021-01-07T07:46:50.000Z | 2021-11-17T10:48:39.000Z | asar_pi_applications/asar_web_server/asar_web_server/asar_web_server.py | ssnover/msd-p18542 | 32bef466f9d5ba55429da2119a14081b3e411d0b | [
"MIT"
] | 3 | 2018-02-19T20:30:30.000Z | 2018-04-20T23:25:29.000Z | asar_pi_applications/asar_web_server/asar_web_server/asar_web_server.py | ssnover95/msd-p18542 | 32bef466f9d5ba55429da2119a14081b3e411d0b | [
"MIT"
] | 1 | 2021-01-07T07:46:52.000Z | 2021-01-07T07:46:52.000Z | #!/usr/bin/python3
"""
file: asar_web_server.py
purpose: Holds the view for the Flask web application and handling of the
database.
"""
from .gui_constants import GUI_CONSTANTS, DANGER, ENVIRONMENT, STATE
import datetime
from flask import Flask, request, session, g, redirect, url_for, abort, \
render_template, flash, make_response, send_file
import os
from .simulation_settings import SimulationSettingsForm
import sqlite3
import threading
app = Flask(__name__)
app.config.from_object(GUI_CONSTANTS)
SAMPLE_IMAGE_PATH = os.path.join(os.sep, 'home', 'ssnover', 'develop', 'msd-p18542', 'asar_pi_applications', 'asar_web_server', 'asar_web_server', 'static', 'hondas2000.jpg')
APP_WORKER_THREAD = threading.Thread(target=app.run, name="ASAR Web Application Server Thread")
def connectDatabase(db_path):
"""
Connects to the application database.
"""
rv = sqlite3.connect(db_path)
rv.row_factory = sqlite3.Row
return rv
def getDatabase(db_path):
"""
Opens a new database connection if one is not open yet from the application globals.
"""
if not hasattr(g, 'sqlite_db'):
g.sqlite_db = connectDatabase(db_path)
return g.sqlite_db
@app.teardown_appcontext
def closeDatabase(error):
"""
Closes the database again at the end of the request.
"""
if hasattr(g, 'sqlite_db'):
g.sqlite_db.close()
def initializeDatabase():
db = getDatabase(app.config['DATABASE'])
with app.open_resource('schema.sql', mode='r') as f:
db.cursor().executescript(f.read())
db.commit()
@app.after_request
def add_header(r):
"""
Add headers to both force latest IE rendering engine or Chrome Frame,
and also to cache the rendered page for 10 minutes.
"""
r.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
r.headers["Pragma"] = "no-cache"
r.headers["Expires"] = "0"
r.headers['Cache-Control'] = 'public, max-age=0'
return r
@app.cli.command('initdb')
def initdb_handler():
"""
Make a call to initialize the database.
"""
initializeDatabase()
print("Initialized the " + __name__ + " database.")
# Initial settings for application state. An application context is needed
# because getDatabase() stores the connection on flask.g.
with app.app_context():
    update_settings(DANGER['SAFE'], ENVIRONMENT['FOREST FIRE'], STATE['STOPPED'])
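# The schema.sql consumed by initializeDatabase() is not shown in this file;
# from the insert/select statements below it must define at least the two
# tables sketched here (column types are an assumption):
#
#     create table settings (time_set text, danger integer,
#                            environment integer, state integer);
#     create table images (image_path text, time_taken text);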
@app.route('/', methods=['GET', 'POST'])
def command_console():
"""
This function loads the resources into the HTML template for the GUI for
using the web server.
"""
form = SimulationSettingsForm(request.form)
if request.method == 'POST':
# get the current settings and replace with form submission
settings = get_current_settings()
if (settings[2] == STATE['STOPPED']):
update_settings(form.danger.data, form.environment.data, settings[2])
else:
print("Invalid: User tried to change settings during simulation.")
return render_template('view.html', form=form)
@app.route('/set_state', methods=['POST'])
def update_simulation_state():
"""
Update the state of the simulation in the database.
"""
button_clicked = request.form['button_clicked']
settings = get_current_settings()
# assume the state won't change by default
new_state = settings[2]
if (settings[2] == STATE['STOPPED']):
if button_clicked == 'play':
new_state = STATE['RUNNING']
elif (settings[2] == STATE['RUNNING']):
if button_clicked == 'stop':
new_state = STATE['STOPPED']
elif button_clicked == 'pause':
new_state = STATE['PAUSED']
elif (settings[2] == STATE['PAUSED']):
if button_clicked == 'play':
new_state = STATE['RUNNING']
elif button_clicked == 'stop':
new_state = STATE['STOPPED']
update_settings(settings[0], settings[1], new_state)
return redirect('/')
@app.route('/get_state', methods=["GET"])
def get_current_state():
"""
"""
settings = get_current_settings()
state = "error"
if settings[2] == STATE['STOPPED']:
state = "stopped"
elif settings[2] == STATE['RUNNING']:
state = "running"
elif settings[2] == STATE['PAUSED']:
state = "paused"
return state
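def _poll_state_example():
    """
    Usage sketch, not wired into the server: poll the state endpoint with
    Flask's built-in test client.
    """
    with app.test_client() as client:
        return client.get('/get_state').get_data(as_text=True)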
@app.route('/image_stream')
def most_recent_image():
"""
This function opens the database to grab the most recent image taken to push
to the client.
"""
image_path = get_most_recent_image()
if image_path:
print("Sending off the most recent image.")
return send_file(image_path, mimetype='image/jpeg')
else:
print("Sending the default image.")
return send_file(SAMPLE_IMAGE_PATH, mimetype='image/jpeg')
@app.route('/add_image')
def add_image_in_db_for_prototyping():
"""
Adds a hardcoded image to the database for testing purposes.
"""
    add_image_to_database(SAMPLE_IMAGE_PATH)
return redirect('/')
def update_settings(danger, environment, state):
"""
Utility method for setting values in the database.
"""
print("Updating the settings for the simulation.")
backend_database = getDatabase(app.config['DATABASE'])
backend_database.execute("""insert into settings
(time_set, danger, environment, state)
values (?, ?, ?, ?)""",
[datetime.datetime.now(),
danger,
environment,
state])
backend_database.commit()
def get_current_settings():
"""
Utility method to retrieve a tuple of the current simulation settings.
Returns: The settings in order of (danger, environment, state)
"""
backend_database = getDatabase(app.config['DATABASE'])
cursor = backend_database.execute("""select danger, environment, state from settings
order by time_set desc
limit 1""")
current_settings = cursor.fetchall()[0]
return current_settings
def add_image_to_database(path_to_image):
"""
Utility method for adding image to database.
"""
backend_database = getDatabase(app.config['DATABASE'])
backend_database.execute("""insert into images
(image_path, time_taken)
values (?, ?)""",
[path_to_image, datetime.datetime.now()])
backend_database.commit()
def get_most_recent_image():
"""
Utility method grabbing the most recent image from the database.
"""
backend_database = getDatabase(app.config['DATABASE'])
cursor = backend_database.execute("""select image_path from images
order by time_taken desc
limit 1""")
result = cursor.fetchall()
if len(result) > 0:
return result[0][0]
else:
return None
def main():
"""
Runs the application on localhost:5000.
"""
#APP_WORKER_THREAD.start()
app.run(host='0.0.0.0', port=5000)
if __name__ == "__main__":
main()
| 30.915254 | 174 | 0.626234 | 865 | 7,296 | 5.117919 | 0.284393 | 0.033883 | 0.022137 | 0.031624 | 0.244635 | 0.181161 | 0.171222 | 0.139598 | 0.139598 | 0.120172 | 0 | 0.010095 | 0.253289 | 7,296 | 235 | 175 | 31.046809 | 0.802496 | 0.173383 | 0 | 0.177778 | 0 | 0 | 0.247273 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.118519 | false | 0 | 0.051852 | 0 | 0.259259 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5be18ef0ed38e04271a5246710a5e01dc9b41ee1 | 3,642 | py | Python | raster_aggregation/admin.py | geodesign/django-raster-aggregation | 2a4c155071f1f05923819da78f5f854b212b6926 | [
"BSD-3-Clause"
] | 9 | 2016-07-03T21:07:09.000Z | 2019-02-19T01:26:00.000Z | raster_aggregation/admin.py | geodesign/django-raster-aggregation | 2a4c155071f1f05923819da78f5f854b212b6926 | [
"BSD-3-Clause"
] | 2 | 2017-06-11T23:12:33.000Z | 2018-04-03T22:33:15.000Z | raster_aggregation/admin.py | geodesign/django-raster-aggregation | 2a4c155071f1f05923819da78f5f854b212b6926 | [
"BSD-3-Clause"
] | 6 | 2016-12-14T04:53:43.000Z | 2021-08-24T14:32:46.000Z | from __future__ import unicode_literals
from raster.models import RasterLayer
from django import forms
from django.contrib.admin.helpers import ACTION_CHECKBOX_NAME
from django.contrib.gis import admin
from django.http import HttpResponseRedirect
from django.shortcuts import render
from .models import AggregationArea, AggregationLayer, AggregationLayerGroup, ValueCountResult
from .tasks import aggregation_layer_parser, compute_value_count_for_aggregation_layer
class ValueCountResultAdmin(admin.ModelAdmin):
readonly_fields = (
'aggregationarea', 'rasterlayers', 'formula',
'layer_names', 'zoom', 'units', 'value', 'created'
)
class SelectLayerActionForm(forms.Form):
"""
Form for selecting the raster-layer on which to compute value counts.
"""
_selected_action = forms.CharField(widget=forms.MultipleHiddenInput)
rasterlayers = forms.ModelMultipleChoiceField(queryset=RasterLayer.objects.all(), required=True)
class ComputeActivityAggregatesModelAdmin(admin.ModelAdmin):
readonly_fields = ['modified']
actions = ['parse_shapefile_data', 'compute_value_count', ]
search_fields = ('name', )
def parse_shapefile_data(self, request, queryset):
for lyr in queryset.all():
lyr.log('Scheduled shapefile parsing.', AggregationLayer.PENDING)
aggregation_layer_parser.delay(lyr.id)
self.message_user(
request,
"Parsing shapefile asynchronously, please check the collection parse log for status.",
)
def compute_value_count(self, request, queryset):
form = None
layer = queryset[0]
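        # Note: only the first selected aggregation layer is processed.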
        # On POST from the confirmation page, run the value count on the selected rasters.
if 'apply' in request.POST:
form = SelectLayerActionForm(request.POST)
if form.is_valid():
rasterlayers = form.cleaned_data['rasterlayers']
for rst in rasterlayers:
compute_value_count_for_aggregation_layer(
layer,
rst.id,
compute_area=True
)
self.message_user(
request,
"Started Value Count on \"{agg}\" with {count} rasters. "
"Check parse log for results.".format(agg=layer, count=rasterlayers.count())
)
return HttpResponseRedirect(request.get_full_path())
# Before posting, prepare empty action form
if not form:
form = SelectLayerActionForm(initial={
'_selected_action': request.POST.getlist(ACTION_CHECKBOX_NAME),
})
return render(
request,
'raster_aggregation/select_raster_for_aggregation.html',
{
'layers': RasterLayer.objects.all(),
'form': form,
'title': u'Select Layer on which to Compute Value Counts'
}
)
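# Note on the two-phase action above: Django's admin renders the intermediate
# template on the first POST (the action submission); that template must
# re-post the selected ids via the `_selected_action` hidden input together
# with an `apply` field, which is what the `'apply' in request.POST` branch
# picks up on the second round trip.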
class AggregationLayerInLine(admin.TabularInline):
model = AggregationLayerGroup.aggregationlayers.through
class AggregationLayerGroupAdmin(admin.ModelAdmin):
inlines = (
AggregationLayerInLine,
)
exclude = ['aggregationlayers']
class AggregationAreaAdmin(admin.OSMGeoAdmin):
raw_id_fields = ('aggregationlayer', )
search_fields = ('name', )
admin.site.register(AggregationArea, AggregationAreaAdmin)
admin.site.register(ValueCountResult, ValueCountResultAdmin)
admin.site.register(AggregationLayer, ComputeActivityAggregatesModelAdmin)
admin.site.register(AggregationLayerGroup, AggregationLayerGroupAdmin)
| 33.109091 | 102 | 0.661999 | 339 | 3,642 | 6.955752 | 0.40413 | 0.030534 | 0.028838 | 0.016964 | 0.057676 | 0.057676 | 0.027142 | 0 | 0 | 0 | 0 | 0.00037 | 0.257002 | 3,642 | 109 | 103 | 33.412844 | 0.871027 | 0.043383 | 0 | 0.093333 | 0 | 0 | 0.140179 | 0.015287 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026667 | false | 0 | 0.12 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5be40c75d2881dfa910d5ccbe37999de02fdbebd | 3,043 | py | Python | differential-privacy-library-main/tests/tools/test_histogram2d.py | gonzalo-munillag/Exponential_Randomised_Response | 1ae2c867d77c6e92f1df0bb7120862e4f9aa15e4 | [
"MIT"
] | 597 | 2019-06-19T11:26:50.000Z | 2022-03-30T13:23:42.000Z | differential-privacy-library-main/tests/tools/test_histogram2d.py | gonzalo-munillag/Exponential_Randomised_Response | 1ae2c867d77c6e92f1df0bb7120862e4f9aa15e4 | [
"MIT"
] | 45 | 2019-06-20T08:03:31.000Z | 2022-03-30T14:02:02.000Z | differential-privacy-library-main/tests/tools/test_histogram2d.py | gonzalo-munillag/Exponential_Randomised_Response | 1ae2c867d77c6e92f1df0bb7120862e4f9aa15e4 | [
"MIT"
] | 163 | 2019-06-19T23:56:19.000Z | 2022-03-26T23:59:24.000Z | import numpy as np
from unittest import TestCase
from diffprivlib.accountant import BudgetAccountant
from diffprivlib.tools.histograms import histogram2d
from diffprivlib.utils import global_seed, PrivacyLeakWarning, BudgetError
class TestHistogram2d(TestCase):
def test_no_params(self):
x = np.array([1, 2, 3, 4, 5])
y = np.array([5, 7, 1, 5, 9])
with self.assertWarns(PrivacyLeakWarning):
res = histogram2d(x, y)
self.assertIsNotNone(res)
def test_no_range(self):
x = np.array([1, 2, 3, 4, 5])
y = np.array([5, 7, 1, 5, 9])
with self.assertWarns(PrivacyLeakWarning):
res = histogram2d(x, y, epsilon=1)
self.assertIsNotNone(res)
def test_missing_range(self):
x = np.array([1, 2, 3, 4, 5])
y = np.array([5, 7, 1, 5, 9])
with self.assertWarns(PrivacyLeakWarning):
res = histogram2d(x, y, epsilon=1, range=[(0, 10), None])
self.assertIsNotNone(res)
def test_bins_instead_of_range(self):
x = np.array([1, 2, 3, 4, 5])
y = np.array([5, 7, 1, 5, 9])
res = histogram2d(x, y, epsilon=1, range=None, bins=([0, 1, 10], [0, 1, 10]))
self.assertIsNotNone(res)
def test_custom_bins(self):
x = np.array([1, 2, 3, 4, 5])
y = np.array([5, 7, 1, 5, 9])
res = histogram2d(x, y, epsilon=1, bins=[0, 3, 10])
self.assertIsNotNone(res)
def test_same_edges(self):
x = np.array([1, 2, 3, 4, 5])
y = np.array([5, 7, 1, 5, 9])
_, edges_x, edges_y = np.histogram2d(x, y, bins=3, range=[(0, 10), (0, 10)])
_, dp_edges_x, dp_edges_y = histogram2d(x, y, epsilon=1, bins=3, range=[(0, 10), (0, 10)])
self.assertTrue((edges_x == dp_edges_x).all())
self.assertTrue((edges_y == dp_edges_y).all())
def test_different_result(self):
global_seed(3141592653)
x = np.array([1, 2, 3, 4, 5])
y = np.array([5, 7, 1, 5, 9])
hist, _, _ = np.histogram2d(x, y, bins=3, range=[(0, 10), (0, 10)])
dp_hist, _, _ = histogram2d(x, y, epsilon=0.1, bins=3, range=[(0, 10), (0, 10)])
# print("Non-private histogram: %s" % hist)
# print("Private histogram: %s" % dp_hist)
self.assertTrue((hist != dp_hist).any())
def test_density(self):
global_seed(3141592653)
x = np.array([1, 2, 3, 4, 5])
y = np.array([5, 7, 1, 5, 9])
dp_hist, _, _ = histogram2d(x, y, epsilon=1, bins=3, range=[(0, 10), (0, 10)], density=True)
# print(dp_hist.sum())
self.assertAlmostEqual(dp_hist.sum(), 1.0 * (3 / 10) ** 2)
def test_accountant(self):
acc = BudgetAccountant(1.5, 0)
x = np.array([1, 2, 3, 4, 5])
y = np.array([5, 7, 1, 5, 9])
histogram2d(x, y, epsilon=1, bins=3, range=[(0, 10), (0, 10)], density=True, accountant=acc)
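        # The call above spends epsilon=1 of the accountant's total budget of
        # 1.5, so an identical second query cannot be afforded and must raise.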
with self.assertRaises(BudgetError):
histogram2d(x, y, epsilon=1, bins=3, range=[(0, 10), (0, 10)], density=True, accountant=acc)
| 37.109756 | 104 | 0.564574 | 462 | 3,043 | 3.616883 | 0.151515 | 0.075404 | 0.093357 | 0.048474 | 0.614602 | 0.562537 | 0.505087 | 0.499102 | 0.488929 | 0.488929 | 0 | 0.092082 | 0.261255 | 3,043 | 81 | 105 | 37.567901 | 0.651246 | 0.033848 | 0 | 0.491803 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.213115 | 1 | 0.147541 | false | 0 | 0.081967 | 0 | 0.245902 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5be50f9d777ded17c518467854d968e5956e5137 | 11,453 | py | Python | rl_environments/RLBanditEnv.py | joedaws/lde2021 | ece9857667bab8691cf617ed56af561676945b60 | [
"MIT"
] | null | null | null | rl_environments/RLBanditEnv.py | joedaws/lde2021 | ece9857667bab8691cf617ed56af561676945b60 | [
"MIT"
] | null | null | null | rl_environments/RLBanditEnv.py | joedaws/lde2021 | ece9857667bab8691cf617ed56af561676945b60 | [
"MIT"
] | null | null | null |
import gym
import numpy as np
import torch
import stable_baselines3 as sb3
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.env_util import make_vec_env
import pybullet_envs  # noqa: F401 -- imported for the side effect of registering PyBullet envs with gym
import pandas as pd
import pickle
import os
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme(style='whitegrid', palette=[sns.color_palette('colorblind')[i] for i in [0,3,4,2]])
np.set_printoptions(suppress=True, linewidth=100, precision=4)
pd.set_option('precision', 4)
gym.logger.set_level(40)
plt.rcParams['font.family'] = 'monospace'
plt.rcParams['font.weight'] = 'bold'
class RLBanditEnv:
'''
numerical experiment where the policies are trained on rl environments and
then compared in the bandit setting via various policy evaluation methods
'''
def __init__(self, params):
self.__dict__.update(params)
self.make_env()
def make_env(self):
'''create the environment'''
try:
self.env = gym.make(self.env_name)
except:
self.env = make_vec_env(self.env_name, n_envs=1)
self.low = self.env.action_space.low
self.high = self.env.action_space.high
def train_target_policies(self, seed=None):
'''train policies to be ranked'''
if seed is not None:
np.random.seed(seed)
torch.manual_seed(seed)
self.env.seed(seed)
self.env.action_space.seed(seed)
models = {
'A2C': sb3.A2C('MlpPolicy', self.env, seed=seed).learn(self.train_steps),
'DDPG': sb3.DDPG('MlpPolicy', self.env, seed=seed).learn(self.train_steps),
'PPO': sb3.PPO('MlpPolicy', self.env, seed=seed).learn(self.train_steps),
'SAC': sb3.SAC('MlpPolicy', self.env, seed=seed).learn(self.train_steps),
'TD3': sb3.TD3('MlpPolicy', self.env, seed=seed).learn(self.train_steps)}
self.target_policies = {name: model.policy for name, model in models.items()}
self.num_policy_pairs = len(models) * (len(models) - 1) / 2
def evaluate_policy_rl(self, policy, num_sims=10):
'''evaluate policy in rl environment'''
reward_avg, reward_std = evaluate_policy(policy, self.env, n_eval_episodes=num_sims,
deterministic=False, warn=False)
return reward_avg, reward_std
def estimate_policy_value(self, policy, num_sims, seed=None):
'''estimate policy value in bandit environment'''
policy_value = 0
for _ in range(num_sims):
if seed is not None:
self.env.seed(seed)
obs = self.env.reset()
for t in range(self.env_steps):
action, _ = policy.predict(obs, deterministic=False)
obs, reward, done, _ = self.env.step(action)
policy_value += reward
if done:
break
policy_value /= num_sims
return policy_value
def evaluate_target_policies(self, num_sims=100):
'''evaluate target policies in bandit environment'''
self.value_true = {}
for name, policy in self.target_policies.items():
self.value_true[name] = self.estimate_policy_value(policy, num_sims)
def probability_proxy(self, action1, action2):
'''compute probability of taking action1 instead of action2'''
action_delta = (action1 - action2) / (self.high - self.low)
prob = np.exp((1 - 1 / (1 - action_delta**2 + 1e-08)).mean())
return prob
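    # The proxy above equals 1 exactly when action1 == action2 and decays
    # towards 0 as the normalised per-dimension gap approaches 1; e.g. a gap
    # of half the action range in one dimension gives
    # exp(1 - 1 / (1 - 0.5**2)) = exp(-1/3) ~ 0.72.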
def generate_historical_data(self):
'''sample historical data by deploying target policies'''
self.historical_data, self.value_emp = [], {}
for name, policy in self.target_policies.items():
self.value_emp[name] = 0
seed = np.random.randint(1e+06)
self.env.seed(seed)
obs = self.env.reset()
actions, value, prob = [], 0, 1
for t in range(self.env_steps):
action, _ = policy.predict(obs, deterministic=False)
actions.append(action)
action_det, _ = policy.predict(obs, deterministic=True)
prob *= self.probability_proxy(action, action_det)
obs, reward, done, _ = self.env.step(action)
value += reward
if done:
break
self.historical_data.append([seed, actions, value, prob])
self.value_emp[name] += value
self.rho = np.mean(list(self.value_emp.values()))
def estimate_trajectory_probability(self, policy, trajectory):
'''estimate proability that the policy follows the trajectory'''
prob = 1.
seed, actions, _, _ = trajectory
self.env.seed(seed)
obs = self.env.reset()
for t in range(min(self.env_steps, len(actions))):
action, _ = policy.predict(obs, deterministic=True)
prob *= self.probability_proxy(action, actions[t])
obs, _, done, _ = self.env.step(action)
return prob
def compute_value_dim(self, policy):
'''evaluate the policy via the direct method'''
value_dim = []
for trajectory in self.historical_data:
s, a, r, _ = trajectory
prob = self.estimate_trajectory_probability(policy, trajectory)
value_dim.append(r * prob)
return np.mean(value_dim)
def compute_value_lde(self, policy):
'''evaluate the policy via the limited data estimator'''
value_lde = []
for trajectory in self.historical_data:
s, a, r, _ = trajectory
prob = self.estimate_trajectory_probability(policy, trajectory)
value_lde.append((r - self.rho) * prob + self.rho)
return np.mean(value_lde)
def compute_value_dre(self, policy):
'''evaluate the policy via the doubly robust estimator'''
value_dre = []
for trajectory in self.historical_data:
s, a, r, p = trajectory
prob = self.estimate_trajectory_probability(policy, trajectory)
value_dre.append((r - self.rho) * prob / (p + 1e-06) + self.rho)
return np.mean(value_dre)
def compute_value_ips(self, policy):
'''evaluate the policy via the inverse propensity scoring'''
value_ips = []
for trajectory in self.historical_data:
s, a, r, p = trajectory
prob = self.estimate_trajectory_probability(policy, trajectory)
value_ips.append(r * prob / (p + 1e-06))
return np.mean(value_ips)
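    # Summary of the four off-policy estimators above for a logged trajectory
    # with return r, (proxy) behaviour probability p, target-policy proxy
    # probability q = estimate_trajectory_probability(policy, trajectory),
    # and baseline rho = mean empirical value of the logging policies:
    #     DiM: mean[ r * q ]
    #     LDE: mean[ (r - rho) * q + rho ]
    #     DRE: mean[ (r - rho) * q / p + rho ]
    #     IPS: mean[ r * q / p ]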
def swap_count(self, array1, array2):
'''count the number of swaps required to transform array1 into array2'''
L = list(array2)
swaps = 0
for element in list(array1):
ind = L.index(element)
L.pop(ind)
swaps += ind
return swaps
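    # Example: swap_count([2, 1, 3], [1, 2, 3]) == 1, since a single adjacent
    # swap turns one ordering into the other; score_ranking() below normalises
    # this count by the number of policy pairs.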
def rank_target_policies(self):
'''evaluate and rank target policies via various methods'''
self.value_dim, self.value_lde, self.value_dre, self.value_ips = {}, {}, {}, {}
for name, policy in self.target_policies.items():
self.value_lde[name] = self.compute_value_lde(policy)
self.value_dre[name] = self.compute_value_dre(policy)
self.value_ips[name] = self.compute_value_ips(policy)
self.value_dim[name] = self.compute_value_dim(policy)
self.method_values = {'True': self.value_true, 'LDE': self.value_lde,
'DRE': self.value_dre, 'IPS': self.value_ips,
'DiM': self.value_dim, 'Emp': self.value_emp}
self.values = pd.DataFrame.from_dict(self.method_values)
self.ranks = {method: np.argsort(list(value.values()))
for method, value in self.method_values.items()}
def score_ranking(self):
'''compute scores of individual rankings'''
scores = [1 - self.swap_count(self.ranks[method], self.ranks['True'])\
/ self.num_policy_pairs for method in self.ranks]
return scores
def report_scores(self):
'''print the resulting scores'''
scores = np.array(self.scores, ndmin=2)[:,:-1]
scores_med = np.median(scores, axis=0)
scores_avg = np.mean(scores, axis=0)
scores_std = np.std(scores, axis=0)
print(f'average scores of policy evaluation methods on {self.env_name}:')
for k in range(1,len(self.ranks)-1):
print(f' {list(self.ranks)[k]} = {scores_med[k]:.4f}',
f'/ {scores_avg[k]:.4f} ({scores_std[k]:.3f})')
print()
self.method_values.pop('Emp', None)
data = pd.DataFrame(scores, columns=self.method_values.keys()).drop(columns='True')
fig, ax = plt.subplots(figsize=(8,4))
sns.violinplot(data=data, cut=0, gridsize=1000, bw=.5, linewidth=3)
ax.set_title(self.env_name, fontname='monospace', fontweight='bold')
ax.set_ylim(0,1)
plt.tight_layout()
os.makedirs('./images/', exist_ok=True)
plt.savefig(f'./images/scores_{self.env_name}.pdf', format='pdf')
plt.show()
def run_simulation_explicit(self, seed=None):
'''run a single ranking with verbose output'''
print(f'\ntraining target policies...')
self.train_target_policies(seed)
print(f'rl-values of target policies:')
for name, policy in self.target_policies.items():
value_avg, value_std = self.evaluate_policy_rl(policy)
print(f' {name:>4s}-value = {value_avg:.4f} (std = {value_std:.4f})')
self.evaluate_target_policies()
print(f'\ngenerating historical data...')
self.generate_historical_data()
print(f'estimating values of target policies via policy evaluation methods...')
self.rank_target_policies()
print(f'estimated values:\n{self.values}')
self.scores = self.score_ranking()
def run_simulations(self, num_sims, seed=None):
'''run multiple simulations'''
self.train_target_policies(seed)
self.evaluate_target_policies()
self.scores = []
for n in range(num_sims):
self.generate_historical_data()
self.rank_target_policies()
self.scores.append(self.score_ranking())
def run_tests(self, num_sims, num_tests, seed=None):
'''run multiple tests'''
if seed is not None:
np.random.seed(seed)
seeds = list(map(int, np.random.randint(1e+06, size=num_tests)))
test_scores = []
for n in range(num_tests):
print(f'running test {n+1}/{num_tests} on {self.env_name}...')
self.run_simulations(num_sims, seeds[n])
test_scores += self.scores
self.scores = test_scores
def save_variables(self, path='./save/'):
'''save class variables to a file'''
os.makedirs(path, exist_ok=True)
save_name = f'{self.env_name}.pkl'
with open(path + save_name, 'wb') as save_file:
pickle.dump(self.__dict__, save_file)
def load_variables(self, save_name, path='./save/'):
'''load class variables from a file'''
try:
with open(path + save_name, 'rb') as save_file:
self.__dict__.update(pickle.load(save_file))
except:
raise NameError(f'\ncannot load file {save_name}...')
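# A minimal usage sketch (not part of the module). The keys below are the
# attributes this class actually reads from `params`; the concrete values,
# including the environment name, are assumptions:
#
#     env = RLBanditEnv({'env_name': 'Pendulum-v0',
#                        'train_steps': 10000,
#                        'env_steps': 200})
#     env.run_tests(num_sims=10, num_tests=3, seed=0)
#     env.report_scores()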
| 42.735075 | 97 | 0.611805 | 1,464 | 11,453 | 4.614754 | 0.199454 | 0.03212 | 0.014654 | 0.019982 | 0.26717 | 0.223061 | 0.210036 | 0.181616 | 0.171551 | 0.130551 | 0 | 0.011357 | 0.269624 | 11,453 | 267 | 98 | 42.895131 | 0.796294 | 0.089671 | 0 | 0.24186 | 0 | 0 | 0.071616 | 0.00962 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102326 | false | 0 | 0.055814 | 0 | 0.209302 | 0.051163 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bedc842eb7b3b97c410dd8ff5e9293a731be138 | 12,016 | py | Python | natural_selection/genetic_programs/__init__.py | Zipfian-Science/natural-selection | 5bf04142a73f39a83e86ad0eb53ba0fecb365864 | [
"Apache-2.0"
] | null | null | null | natural_selection/genetic_programs/__init__.py | Zipfian-Science/natural-selection | 5bf04142a73f39a83e86ad0eb53ba0fecb365864 | [
"Apache-2.0"
] | 1 | 2021-02-26T10:10:43.000Z | 2021-02-26T10:10:43.000Z | natural_selection/genetic_programs/__init__.py | Zipfian-Science/natural-selection | 5bf04142a73f39a83e86ad0eb53ba0fecb365864 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""Basic classes for running Genetic Algorithms.
"""
__author__ = "Justin Hocking"
__copyright__ = "Copyright 2021, Zipfian Science"
__credits__ = []
__license__ = ""
__version__ = "0.0.1"
__maintainer__ = "Justin Hocking"
__email__ = "justin.hocking@zipfian.science"
__status__ = "Development"
import uuid
from typing import List, Union, Any, Callable
import natural_selection.genetic_programs.functions as op
from natural_selection.genetic_programs.utils import GeneticProgramError
class Node:
"""
Basic class for building Node trees. A node can be either a terminal or a parent.
When initialised as a terminal node, `is_terminal` has to be set to True and either a `label` or a `terminal_value` has to be set.
When setting a `terminal_value`, the terminal is a literal, constant value.
Example: n = Node(is_terminal=True, terminal_value=42).
On only setting a `label`, the terminal is treated as a variable passed on through the function.
Example: n = Node(is_terminal=True, label='x').
    Setting the arity is optional when no child nodes are added.
Args:
label (str): Optionally set the label, only used for variable terminals (default = None).
arity (int): Optionally set the function arity, the norm being 2 for functions (default = 1).
operator (Operator): If the node is a function, set the operator (default = None).
is_terminal (bool): Explicitly define if the node is a terminal (default = None).
terminal_value (Any): Only set if the node is terminal and a constant value (default = None).
children (list): Add a list of child nodes, list length must match arity (default = None).
"""
def __init__(self, label : str = None,
arity : int = 1,
operator : op.Operator = None,
is_terminal = False,
terminal_value = None,
children : List = None):
if label:
self.label = label
elif operator:
self.label = operator.operator_label
else:
self.label = str(terminal_value)
self.arity = arity
self.operator = operator
self.is_terminal = is_terminal
self.terminal_value = terminal_value
if children:
self.children = children
self.arity = len(children)
else:
self.children = [None] * self.arity
def __call__(self, **kwargs):
if self.is_terminal:
if self.label in kwargs.keys():
return kwargs[self.label]
return self.terminal_value
else:
return self.operator.exec([x(**kwargs) for x in self.children])
def __str__(self):
"""
        Essentially equivalent to __repr__, but returns the children in their natural order.
        Two functionally identical trees can therefore yield different strings here; use __repr__ when comparing tree strings.
Returns:
str: String representation of tree in natural order of symbols/labels.
"""
if self.is_terminal:
return self.label
else:
return f"{self.label}({', '.join([str(x) for x in self.children])})"
def __repr__(self):
"""
        Essentially equivalent to __str__, but returns the children alphabetically sorted.
        Two functionally identical trees that differ only in child ordering therefore share the exact same __repr__ string.
Returns:
str: String representation of tree in alphabetic order of symbols/labels.
"""
if self.is_terminal:
return self.label
else:
labels = list()
for n in self.children:
labels.append(repr(n))
return f"{self.label}({', '.join(sorted(labels))})"
def __setitem__(self, index, node):
if isinstance(index, slice):
assert index.start < len(self.children), 'Index Out of bounds!'
else:
assert index < len(self.children), 'Index Out of bounds!'
self.children[index] = node
def __getitem__(self, index):
if isinstance(index, slice):
assert index.start < len(self.children), 'Index Out of bounds!'
else:
assert index < len(self.children), 'Index Out of bounds!'
return self.children[index]
def __iter__(self):
self.__n = 0
return self
def __next__(self):
if self.__n < len(self.children):
gene = self.children[self.__n]
self.__n += 1
return gene
else:
raise StopIteration
def __len__(self):
return len(self.children)
def clear_children(self):
self.children = [None] * self.arity
def depth(self):
"""
Finds the depth of the current tree. This is done by traversing the tree and returning the deepest depth found.
Returns:
int: Deepest node depth found.
"""
if self.is_terminal:
return 1
deepest = 0
for n in self.children:
d = n.depth() + 1
if d > deepest:
deepest = d
return deepest
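# A minimal usage sketch for Node, illustrative only. `_AddOp` is a
# hypothetical stand-in exposing the two attributes Node actually uses from
# an operator: `operator_label` and `exec(child_values)`.
class _AddOp:
    operator_label = 'add'
    def exec(self, args):
        return sum(args)
def _node_example():
    """ Builds the tree add(x, 2) and evaluates it at x=3. """
    tree = Node(operator=_AddOp(), children=[
        Node(is_terminal=True, label='x'),
        Node(is_terminal=True, terminal_value=2),
    ])
    assert str(tree) == 'add(x, 2)'
    assert tree(x=3) == 5
    assert tree.depth() == 2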
class GeneticProgram:
"""
A class that encapsulates a single genetic program, with node tree and a fitness evaluation function.
Args:
fitness_function (Callable): Function with ``func(Node, island, **params)`` signature.
operators (list): List of all operators that nodes can be constructed from.
terminals (list): List of all terminals that can be included in the node tree, can be numeric or strings for variables.
max_depth (int): Maximum depth that node tree can grow.
name (str): Name for keeping track of lineage (default = None).
species_type (str) : A unique string to identify the species type, for preventing cross polluting (default = None).
Attributes:
fitness (Numeric): The fitness score after evaluation.
age (int): How many generations was the individual alive.
genetic_code (str): String representation of node tree.
history (list): List of dicts of every evaluation.
parents (list): List of strings of parent names.
"""
def __init__(self,
fitness_function: Callable,
operators : List[op.Operator],
terminals : List[Union[str,int,float]],
max_depth : int,
name: str = None,
species_type : str = None):
if name is None:
self.name = str(uuid.uuid4())
else:
self.name = name
self.operators = operators
self.terminals = terminals
self.max_depth = max_depth
self.node_tree = None
self.root_node = Node(label=name,arity=1, operator=op.OperatorReturn())
self.fitness_function = fitness_function
self.fitness = None
self.age = 0
self.genetic_code = None
self.history = list()
self.parents = list()
if species_type:
self.species_type = species_type
else:
self.species_type = "DEFAULT_SPECIES"
def __call__(self, **kwargs):
return self.root_node(**kwargs)
def register_parent_names(self, parents: list, reset_parent_name_list: bool = True):
"""
In keeping lineage of family lines, the names of parents are kept track of.
Args:
parents (list): A list of GeneticProgram of the parents.
"""
if reset_parent_name_list:
self.parents = list()
for parent in parents:
self.parents.append(parent.name)
def reset_name(self, name: str = None):
"""
A function to reset the name of a program, helping to keep linage of families.
Args:
name (str): Name (default = None).
"""
if name is None:
self.name = str(uuid.uuid4())
else:
self.name = name
def birthday(self, add: int = 1):
"""
Add to the age. This is for keeping track of how many generations a program has "lived" through.
Args:
add (int): Amount to age.
"""
self.age += add
def reset_fitness(self, fitness: Any = None, reset_genetic_code: bool = True):
"""
Reset (or set) the fitness of the program.
Args:
fitness (Any): New fitness value (default = None).
reset_genetic_code (bool): Whether to reset the genetic code. (default = True)
"""
self.fitness = fitness
if reset_genetic_code:
self.genetic_code = None
def evaluate(self, params : dict, island=None) -> Any:
"""
Run the fitness function with the given params.
Args:
params (dict): Named dict of eval params.
island (Island): Pass the Island for advanced fitness functions based on Island properties and populations.
Returns:
numeric: Fitness value.
"""
self.fitness = self.fitness_function(node_tree=self.node_tree, island=island, **params)
stamp = { "name": self.name,
"age": self.age,
"fitness": self.fitness,
"node_tree": str(self.node_tree),
"parents": self.parents,
}
if island:
stamp["island_generation" ] = island.generation_count
self.history.append(stamp)
return self.fitness
def unique_genetic_code(self) -> str:
"""
Gets the unique genetic code, generating if it is undefined.
Returns:
            str: Canonical string representation (repr) of the node tree.
"""
if self.genetic_code is None:
self.genetic_code = repr(self.node_tree)
return self.genetic_code
def __str__(self) -> str:
return f'({self.name}:{self.fitness})'
def __eq__(self, other):
if isinstance(other, GeneticProgram):
return self.unique_genetic_code() == other.unique_genetic_code()
else:
raise GeneticProgramError(message=f'Can not compare {type(other)}')
def __ne__(self, other):
if isinstance(other, GeneticProgram):
return self.unique_genetic_code() != other.unique_genetic_code()
else:
raise GeneticProgramError(message=f'Can not compare {type(other)}')
def __lt__(self, other):
if isinstance(other, GeneticProgram):
return self.fitness < other.fitness
elif isinstance(other, int):
return self.fitness < other
elif isinstance(other, float):
return self.fitness < other
else:
raise GeneticProgramError(message=f'Can not compare {type(other)}')
def __le__(self, other):
if isinstance(other, GeneticProgram):
return self.fitness <= other.fitness
elif isinstance(other, int):
return self.fitness <= other
elif isinstance(other, float):
return self.fitness <= other
else:
raise GeneticProgramError(message=f'Can not compare {type(other)}')
def __gt__(self, other):
if isinstance(other, GeneticProgram):
return self.fitness > other.fitness
elif isinstance(other, int):
return self.fitness > other
elif isinstance(other, float):
return self.fitness > other
else:
raise GeneticProgramError(message=f'Can not compare {type(other)}')
def __ge__(self, other):
if isinstance(other, GeneticProgram):
return self.fitness >= other.fitness
elif isinstance(other, int):
return self.fitness >= other
elif isinstance(other, float):
return self.fitness >= other
else:
raise GeneticProgramError(message=f'Can not compare {type(other)}') | 35.134503 | 135 | 0.604194 | 1,428 | 12,016 | 4.935574 | 0.191877 | 0.032633 | 0.031356 | 0.037457 | 0.29441 | 0.26206 | 0.240778 | 0.229994 | 0.229994 | 0.229994 | 0 | 0.002769 | 0.308755 | 12,016 | 342 | 136 | 35.134503 | 0.845774 | 0.321072 | 0 | 0.368687 | 0 | 0 | 0.071925 | 0.010763 | 0 | 0 | 0 | 0 | 0.020202 | 1 | 0.131313 | false | 0 | 0.020202 | 0.015152 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bef9dd75f6bd3b7b7d6aa908b20235fbd72a39c | 615 | py | Python | tests/fixtures/uploads.py | ReeceHoffmann/virtool | f9befad060fe16fa29fb80124e674ac5a9c4f538 | [
"MIT"
] | 39 | 2016-10-31T23:28:59.000Z | 2022-01-15T00:00:42.000Z | tests/fixtures/uploads.py | ReeceHoffmann/virtool | f9befad060fe16fa29fb80124e674ac5a9c4f538 | [
"MIT"
] | 1,690 | 2017-02-07T23:39:48.000Z | 2022-03-31T22:30:44.000Z | tests/fixtures/uploads.py | ReeceHoffmann/virtool | f9befad060fe16fa29fb80124e674ac5a9c4f538 | [
"MIT"
] | 25 | 2017-02-08T18:25:31.000Z | 2021-09-20T22:55:25.000Z | import pytest
from sqlalchemy.ext.asyncio import AsyncSession
from virtool.uploads.models import Upload
@pytest.fixture
async def test_uploads(pg, fake, static_time):
user_1 = await fake.users.insert()
user_2 = await fake.users.insert()
upload_1 = Upload(id=1, name="test.fq.gz", type="reads", user=user_1["_id"])
upload_2 = Upload(id=2, name="test.fq.gz", type="reference", user=user_1["_id"])
upload_3 = Upload(id=3, name="test.fq.gz", user=user_2["_id"])
async with AsyncSession(pg) as session:
session.add_all([upload_1, upload_2, upload_3])
await session.commit()
| 30.75 | 84 | 0.695935 | 97 | 615 | 4.237113 | 0.402062 | 0.036496 | 0.072993 | 0.087591 | 0.160584 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0.157724 | 615 | 19 | 85 | 32.368421 | 0.766409 | 0 | 0 | 0 | 0 | 0 | 0.086179 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.230769 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bf071101faa3aefed0673b302d38405eb0bab30 | 3,029 | py | Python | cuppa/methods/compile.py | chriskohlhoff/cuppa | c777adb5cd91e52ac06c87688e1a635a61f609d1 | [
"BSL-1.0"
] | 1 | 2021-08-31T22:05:15.000Z | 2021-08-31T22:05:15.000Z | cuppa/methods/compile.py | chriskohlhoff/cuppa | c777adb5cd91e52ac06c87688e1a635a61f609d1 | [
"BSL-1.0"
] | null | null | null | cuppa/methods/compile.py | chriskohlhoff/cuppa | c777adb5cd91e52ac06c87688e1a635a61f609d1 | [
"BSL-1.0"
] | null | null | null |
# Copyright Jamie Allsop 2013-2018
# Distributed under the Boost Software License, Version 1.0.
# (See accompanying file LICENSE_1_0.txt or copy at
# http://www.boost.org/LICENSE_1_0.txt)
#-------------------------------------------------------------------------------
# CompileMethod
#-------------------------------------------------------------------------------
import os.path
import cuppa.progress
from SCons.Script import Flatten
from SCons.Node import Node
from cuppa.colourise import as_notice, as_info, as_warning, as_error
from cuppa.log import logger
class CompileMethod:
def __init__( self, shared=False ):
self._shared = shared
def __call__( self, env, source, **kwargs ):
sources = Flatten( [ source ] )
objects = []
if 'CPPPATH' in env:
env.AppendUnique( INCPATH = env['CPPPATH'] )
if self._shared:
obj_prefix = env.subst('$SHOBJPREFIX')
obj_suffix = env.subst('$SHOBJSUFFIX')
obj_builder = env.SharedObject
else:
obj_prefix = env.subst('$OBJPREFIX')
obj_suffix = env.subst('$OBJSUFFIX')
obj_builder = env.Object
logger.trace( "Build Root = [{}]".format( as_notice( env['build_root'] ) ) )
for source in sources:
if not isinstance( source, Node ):
source = env.File( source )
logger.trace( "Object source = [{}]/[{}]".format( as_notice(str(source)), as_notice(source.path) ) )
if os.path.splitext(str(source))[1] == obj_suffix:
objects.append( source )
else:
                target = os.path.splitext( os.path.split( str(source) )[1] )[0]
if not source.path.startswith( env['build_root'] ):
if os.path.isabs( str(source) ):
target = env.File( os.path.join( obj_prefix + target + obj_suffix ) )
else:
target = env.File( os.path.join( env['build_dir'], obj_prefix + target + obj_suffix ) )
else:
offset_dir = os.path.relpath( os.path.split( source.path )[0], env['build_dir'] )
target = env.File( os.path.join( offset_dir, obj_prefix + target + obj_suffix ) )
logger.trace( "Object target = [{}]/[{}]".format( as_notice(str(target)), as_notice(target.path) ) )
objects.append(
obj_builder(
target = target,
source = source,
CPPPATH = env['SYSINCPATH'] + env['INCPATH'],
**kwargs ) )
cuppa.progress.NotifyProgress.add( env, objects )
return objects
@classmethod
def add_to_env( cls, cuppa_env ):
cuppa_env.add_method( "Compile", cls( False ) )
cuppa_env.add_method( "CompileStatic", cls( False ) )
cuppa_env.add_method( "CompileShared", cls( True ) )
| 36.493976 | 116 | 0.526576 | 321 | 3,029 | 4.806854 | 0.320872 | 0.038885 | 0.02722 | 0.029164 | 0.132858 | 0.132858 | 0 | 0 | 0 | 0 | 0 | 0.008604 | 0.309343 | 3,029 | 82 | 117 | 36.939024 | 0.728968 | 0.120502 | 0 | 0.074074 | 0 | 0 | 0.080529 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.111111 | 0 | 0.203704 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bf4bd9bfb15b22cd17a71e718232ade00af3a48 | 593 | py | Python | format_turning.py | xiao-gy/daily_bz | 6a79c19d559039f2e015720079baea3a8d9ffd37 | [
"MIT"
] | 3 | 2021-04-13T00:18:20.000Z | 2021-07-15T08:25:22.000Z | format_turning.py | xiao-gy/daily_bz | 6a79c19d559039f2e015720079baea3a8d9ffd37 | [
"MIT"
] | null | null | null | format_turning.py | xiao-gy/daily_bz | 6a79c19d559039f2e015720079baea3a8d9ffd37 | [
"MIT"
] | 5 | 2021-05-05T12:58:19.000Z | 2021-09-12T10:28:33.000Z | import json
import os
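# Migrates the flat favourites file config/like.json, shaped roughly as
#     {"likes": [{"id": ..., "name": ..., "mark": ...}, ...]}
# (shape inferred from the field accesses below), into a grouped format
#     {"likes": [{"name": <folder name>, "contents": [...]}]}
# while keeping the original as config/like_copy.json.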
try:
    os.rename(os.path.join(os.getcwd(), 'config', 'like.json'),
              os.path.join(os.getcwd(), 'config', 'like_copy.json'))
except OSError:
    print("Unable to rename like.json")
f = open(os.path.join(os.getcwd(), 'config', 'like_copy.json'), mode='r', encoding='utf8')
likes = json.loads(f.read())
f.close()
# Wrap every existing entry in a single default folder.
grouped = {"likes": [{"name": "Default favorites", "contents": []}]}
for i in likes['likes']:
    grouped['likes'][0]['contents'].append({"id": i['id'], "name": i['name'], "mark": i['mark']})
f = open(os.path.join(os.getcwd(), 'config', 'like.json'), mode='w+', encoding='utf8')
f.write(json.dumps(grouped, ensure_ascii=False))
f.close() | 31.210526 | 113 | 0.639123 | 95 | 593 | 3.957895 | 0.431579 | 0.06383 | 0.106383 | 0.12766 | 0.388298 | 0.388298 | 0.388298 | 0.388298 | 0.292553 | 0 | 0 | 0.005455 | 0.072513 | 593 | 19 | 114 | 31.210526 | 0.678182 | 0 | 0 | 0 | 0 | 0 | 0.249158 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bf81eaf34d29539cf6fc4ab7d86334be33a32aa | 8,440 | py | Python | Project/models/homography.py | iust-projects/Computer-Vision-IUST | 732c8f1eaf1df032f1b7ec0518756017117038af | [
"Apache-2.0"
] | null | null | null | Project/models/homography.py | iust-projects/Computer-Vision-IUST | 732c8f1eaf1df032f1b7ec0518756017117038af | [
"Apache-2.0"
] | 1 | 2020-12-22T09:02:20.000Z | 2020-12-22T09:02:20.000Z | Project/models/homography.py | iust-projects/Computer-Vision-IUST | 732c8f1eaf1df032f1b7ec0518756017117038af | [
"Apache-2.0"
] | null | null | null | # %% import libraries
import numpy as np
import random
import cv2
import PIL
import matplotlib.pyplot as plt
from copy import deepcopy
# %% 1 Extract Harris interest points
def get_points(img, threshold=0.1, coordinate=False):
"""
Extract harris points of given image
:param img: An image of type open cv
    :param threshold: Fraction of the maximum Harris response kept as a corner
    :param coordinate: Return a (rows, cols) index tuple instead of the mask
    :return: A binary mask the same size as the input, or the index tuple
"""
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
img_harris = cv2.cornerHarris(img_gray, 7, 3, 0.03)
img_points = img_harris > threshold * img_harris.max()
    if coordinate:
        # Return (rows, cols) index arrays instead of the boolean mask.
        img_points = img_points.nonzero()
    return img_points
image1 = cv2.imread('data/images/building1.jpg')
image2 = cv2.imread('data/images/building2.jpg')
image1_points = get_points(image1, coordinate=True)
image2_points = get_points(image2, coordinate=True)
# 1.4 visualization
vis = deepcopy(image1)
vis[image1_points] = [255, 0, 0]
plt.imshow(vis, cmap='gray')
plt.show()
vis = deepcopy(image2)
vis[image2_points] = [255, 0, 0]
plt.imshow(vis, cmap='gray')
plt.show()
# %% Lucas-Kanade Optical Flow
def to_array(img_points):
"""
Changes tuple of coordinates (list[x], list[y]) to a [len(x|y), 1, 2] numpy float32 array
PS. convert type of points into 'cv2.goodFeaturesToTrack' convention.
:param img_points: Coordinate of harris points in form of tuple
:return: A 3D array of coordinates
"""
assert isinstance(img_points, tuple)
img_points_lk = np.zeros((len(img_points[0]), 1, 2), dtype=np.float32)
img_points_lk[:, :, 0] = np.array(img_points[1]).reshape(-1, 1)
img_points_lk[:, :, 1] = np.array(img_points[0]).reshape(-1, 1)
return img_points_lk
def to_tuple(img_points):
"""
Changes a [len(x|y), 1, 2] numpy float32 array coordinates to tuple of coordinates (list[x], list[y]) uint8
PS. convert type of points into 'np.nonzero()' convention.
:param img_points: A 3D array of coordinates
:return: Coordinate of harris points in form of tuple
"""
coor1 = [i[0, 0].astype(np.int) for i in img_points]
coor2 = [i[0, 1].astype(np.int) for i in img_points]
img_points_tuple = (coor2, coor1)
return img_points_tuple
image1_points_lk = to_array(image1_points)
image2_points_lk = to_array(image2_points)
def lucas_kanade_tracker(img1, img2, img1_points_lk, img2_points_lk, lk_params=None, threshold=1.0):
    """
    Track points from img1 to img2 with pyramidal Lucas-Kanade, keeping only
    points that pass a forward-backward consistency check below `threshold`.
    :return: The matched point arrays (img1_good_points, img2_good_points)
    """
if lk_params is None:
lk_params = dict(winSize=(19, 19), maxLevel=2,
criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))
img2_points_lk, _, _ = cv2.calcOpticalFlowPyrLK(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY),
cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY),
img1_points_lk, None, **lk_params)
img1_points_lk_recalc, _, _ = cv2.calcOpticalFlowPyrLK(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY),
cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY),
img2_points_lk, None, **lk_params)
distance = abs(img1_points_lk - img1_points_lk_recalc).reshape(-1, 2).max(-1)
status = distance < threshold
# preserve good points
img1_good_points = img1_points_lk[status == 1]
img2_good_points = img2_points_lk[status == 1]
return img1_good_points, img2_good_points
image1_good_points, image2_good_points = lucas_kanade_tracker(image1, image2, image1_points_lk, image2_points_lk)
# visualization
color = np.random.randint(0, 255, (len(image2_good_points), 3))
mask = np.zeros_like(image1)
frame = image2.copy()
for i, (i2, i1) in enumerate(zip(image2_good_points, image1_good_points)):
a, b = i2.ravel()
c, d = i1.ravel()
mask = cv2.line(mask, (a, b), (c, d), color[i].tolist(), 2)
frame = cv2.circle(frame, (a, b), 5, color[i].tolist(), -1)
vis = cv2.add(frame, mask)
plt.imshow(vis, cmap='gray')
plt.show()
# get homography
def homography(points1, points1_indices, points2, points2_indices, num_points=4, min_num_points=4):
"""
Computes homography matrix for given two sets of points
:param points1: First point set
:param points1_indices: First point set indices
:param points2: Second point set
:param points2_indices: Second Point set indices
:param num_points: Number of points to use for calculating homography
:param min_num_points: Minimum number of points required (Degree of freedom)
:return: A 3x3 normalized homography matrix
"""
assert num_points >= min_num_points
# build A matrix
a_matrix = np.zeros((num_points * 2, 9))
idx = 0
for i, j in zip(points1_indices, points2_indices):
a_matrix[idx, :] = np.array([-points1[i, 0, 0], -points1[i, 0, 1], -1,
0, 0, 0,
points2[j, 0, 0] * points1[i, 0, 0],
points2[j, 0, 0] * points1[i, 0, 1],
points2[j, 0, 0]])
idx += 1
a_matrix[idx, :] = np.array([0, 0, 0,
-points1[i, 0, 0], -points1[i, 0, 1], -1,
points2[j, 0, 1] * points1[i, 0, 0],
points2[j, 0, 1] * points1[i, 0, 1],
points2[j, 0, 1]])
idx += 1
u, s, v = np.linalg.svd(a_matrix)
h_unnormalized = v[8].reshape(3, 3)
h = (1 / h_unnormalized.flatten()[8]) * h_unnormalized
# eig = np.linalg.eig(a_matrix.T.dot(a_matrix))
# # smallest_idx = eig[0].argmin()
# h_ = eig[1][-1].reshape(3, 3)
# h = (1 / h_.flatten()[8]) * h_
return h
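# The construction above is the Direct Linear Transform (DLT): each point
# correspondence (x, y) <-> (x', y') contributes two rows of A so that
# A h = 0 holds for the stacked 9-vector h; the least-squares solution is the
# right singular vector with the smallest singular value (v[8] after SVD),
# normalized so that h[2, 2] = 1.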
MIN_NUM_POINTS = 4
points1_indices = random.sample(list(range(len(image1_good_points))), MIN_NUM_POINTS)
# points2_indices = random.sample(list(range(len(image2_good_points))), MIN_NUM_POINTS)
h_matrix = homography(image1_good_points, points1_indices, image2_good_points, points1_indices)
# %% RANSAC
# error
def error(match, homography):
"""
Computes the error (L2 norm) between predicted position of point set 2 using given homography matrix
:param match: A dictionary containing 'p1' as point 1 and 'p2' as point 2
:param homography: A 3x3 homography matrix
:return: A float number as euclidean distance between predicted point and it's original value
"""
p1 = match['p1']
p2 = match['p2']
    # lift to homogeneous coordinates and map through the homography
    point1 = np.array([p1[0], p1[1], 1])
    point2_pred = np.dot(homography, point1.T)
    # normalize so the last homogeneous coordinate is 1 again
    point2_pred = point2_pred / point2_pred[-1]
    point2 = np.array([p2[0], p2[1], 1])
return np.sqrt(np.sum((point2 - point2_pred) ** 2))
# RANSAC
def ransac(points1, points2, threshold=10.0, n_iterations=10000, early_stop=None, min_num_points=4):
max_inlier = 0
best_h = np.zeros((3, 3))
if early_stop is not None:
raise NotImplementedError()
for i in range(n_iterations):
points1_indices = random.sample(list(range(len(points1))), min_num_points)
# points2_indices = random.sample(list(range(len(points2))), min_num_points)
h_matrix = homography(points1, points1_indices, points2, points1_indices)
inlier = 0
        for j in range(len(points1)):
            match = {'p1': points1[j, 0],
                     'p2': points2[j, 0]}
e = error(match, h_matrix)
if e <= threshold:
inlier += 1
if inlier > max_inlier:
max_inlier = inlier
best_h = h_matrix
return best_h, max_inlier
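# Rule of thumb for n_iterations: to draw at least one all-inlier minimal
# sample with probability p when the inlier ratio is w, one needs roughly
# N = log(1 - p) / log(1 - w**4) iterations (4 points per sample); e.g.
# w = 0.5 and p = 0.99 give N ~ 72, so the iteration counts used here are
# very generous.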
# test single point transformation
best_homography, max_inliers = ransac(image1_good_points, image2_good_points, n_iterations=50000, threshold=10.)
print('Max inliers: ', max_inliers)
z1 = image1_good_points[0]
z1 = np.array([z1[0, 0], z1[0, 1], 1])
z2 = best_homography.dot(z1.T)
z2 = z2/z2[-1]
print(z2[:-1], image2_good_points[0])
# %% test: warp image1 with the estimated homography and blend it over image2
H, status = cv2.findHomography(image1_good_points, image2_good_points, cv2.RANSAC, 10.0)
h, w = image2.shape[:2]
overlay = cv2.warpPerspective(image1, best_homography, (w, h))
vis = cv2.addWeighted(image2, 0.5, overlay, 0.5, 0.0)
plt.imshow(vis)
plt.show()
| 36.223176 | 113 | 0.638981 | 1,222 | 8,440 | 4.235679 | 0.195581 | 0.044436 | 0.027821 | 0.00966 | 0.249227 | 0.194552 | 0.126159 | 0.084042 | 0.059699 | 0.034003 | 0 | 0.056197 | 0.240995 | 8,440 | 232 | 114 | 36.37931 | 0.751795 | 0.244668 | 0 | 0.084615 | 0 | 0 | 0.013391 | 0.008067 | 0 | 0 | 0 | 0 | 0.015385 | 1 | 0.053846 | false | 0 | 0.046154 | 0 | 0.161538 | 0.015385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bfc12035df037a7d9db7f30808756998f54bdee | 6,592 | py | Python | dataset/data.py | mondrasovic/reid_baseline_syncbn | 3d21a786fb1a0519caaa0572c649f750036689b5 | [
"MIT"
] | 1 | 2022-01-05T15:42:44.000Z | 2022-01-05T15:42:44.000Z | dataset/data.py | mondrasovic/reid_baseline_syncbn | 3d21a786fb1a0519caaa0572c649f750036689b5 | [
"MIT"
] | null | null | null | dataset/data.py | mondrasovic/reid_baseline_syncbn | 3d21a786fb1a0519caaa0572c649f750036689b5 | [
"MIT"
] | null | null | null | import torch
import os.path as osp
from PIL import Image
from torch.utils.data import Dataset
import numpy as np
from torchvision import transforms as T
import glob
import re
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def read_image(img_path):
"""Keep reading image until succeed.
This can avoid IOError incurred by heavy IO process."""
got_img = False
if not osp.exists(img_path):
raise IOError("{} does not exist".format(img_path))
while not got_img:
try:
img_type = 'RGB'
img = Image.open(img_path).convert(img_type)
got_img = True
except IOError:
            print(
                "IOError incurred when reading '{}'. "
                "Will redo. Don't worry. Just chill.".format(img_path)
            )
return img
class ImageDataset(Dataset):
"""Image Person ReID Dataset"""
def __init__(self, dataset, cfg, transform=None):
self.dataset = dataset
self.cfg = cfg
self.transform = transform
def __len__(self):
return len(self.dataset)
def __getitem__(self, index):
img_path, pid, camid = self.dataset[index]
img = read_image(img_path)
if self.transform is not None:
img = self.transform(img)
return img, pid, camid, img_path
class BaseDataset:
def __init__(
self,
root='/home/zbc/data/reid',
train_dir='',
query_dir='',
gallery_dir='',
verbose=True,
**kwargs
):
self.dataset_dir = root
self.train_dir = osp.join(self.dataset_dir, train_dir)
self.query_dir = osp.join(self.dataset_dir, query_dir)
self.gallery_dir = osp.join(self.dataset_dir, gallery_dir)
self._check_before_run()
train = self._process_dir(self.train_dir, relabel=True)
query = self._process_dir(self.query_dir, relabel=False)
gallery = self._process_dir(self.gallery_dir, relabel=False)
if verbose:
print("=> Data loaded")
self.print_dataset_statistics(train, query, gallery)
self.train = train
self.query = query
self.gallery = gallery
self.num_train_pids, self.num_train_imgs, self.num_train_cams = self.get_imagedata_info(
self.train
)
self.num_query_pids, self.num_query_imgs, self.num_query_cams = self.get_imagedata_info(
self.query
)
self.num_gallery_pids, self.num_gallery_imgs, self.num_gallery_cams = self.get_imagedata_info(
self.gallery
)
def get_imagedata_info(self, data):
pids, cams = [], []
for _, pid, camid in data:
pids += [pid]
cams += [camid]
pids = set(pids)
cams = set(cams)
num_pids = len(pids)
num_cams = len(cams)
num_imgs = len(data)
return num_pids, num_imgs, num_cams
def print_dataset_statistics(self, train, query, gallery):
num_train_pids, num_train_imgs, num_train_cams = self.get_imagedata_info(
train
)
num_query_pids, num_query_imgs, num_query_cams = self.get_imagedata_info(
query
)
num_gallery_pids, num_gallery_imgs, num_gallery_cams = self.get_imagedata_info(
gallery
)
print("Dataset statistics:")
print(" ----------------------------------------")
print(" subset | # ids | # images | # cameras")
print(" ----------------------------------------")
print(
" train | {:5d} | {:8d} | {:9d}".format(
num_train_pids, num_train_imgs, num_train_cams
)
)
print(
" query | {:5d} | {:8d} | {:9d}".format(
num_query_pids, num_query_imgs, num_query_cams
)
)
print(
" gallery | {:5d} | {:8d} | {:9d}".format(
num_gallery_pids, num_gallery_imgs, num_gallery_cams
)
)
print(" ----------------------------------------")
def _check_before_run(self):
"""Check if all files are available before going deeper"""
if not osp.exists(self.dataset_dir):
raise RuntimeError(
"'{}' is not available".format(self.dataset_dir)
)
if not osp.exists(self.train_dir):
raise RuntimeError("'{}' is not available".format(self.train_dir))
if not osp.exists(self.query_dir):
raise RuntimeError("'{}' is not available".format(self.query_dir))
if not osp.exists(self.gallery_dir):
raise RuntimeError(
"'{}' is not available".format(self.gallery_dir)
)
def _process_dir(self, dir_path, relabel=False):
img_paths = glob.glob(osp.join(dir_path, '*.jpg'))
pattern = re.compile(r'([-\d]+)_c([\d]+)')
pid_container = set()
for img_path in img_paths:
pid, _ = map(int, pattern.search(img_path).groups())
if pid == -1: continue # junk images are just ignored
pid_container.add(pid)
pid2label = {pid: label for label, pid in enumerate(pid_container)}
dataset = []
for img_path in img_paths:
pid, camid = map(int, pattern.search(img_path).groups())
if pid == -1: continue # junk images are just ignored
camid -= 1 # index starts from 0
if relabel: pid = pid2label[pid]
dataset.append((img_path, pid, camid))
return dataset
def init_dataset(cfg):
"""
    Use the paths in cfg to initialize a dataset.
    The dataset should have the following format:
    - Each image should be named
        (pid)_c(camid)_(iid).jpg
      where pid is the person id,
      camid is the camera id,
      iid is the image id (unique to every image)
    - train, query and gallery sets should be organized as
cfg.DATASETS.TRAIN_PATH: all the training images
cfg.DATASETS.QUERY_PATH: all the query images
cfg.DATASETS.GALLERY_PATH: all the gallery images
"""
return BaseDataset(
root=cfg.DATASETS.DATA_PATH,
train_dir=cfg.DATASETS.TRAIN_PATH,
query_dir=cfg.DATASETS.QUERY_PATH,
gallery_dir=cfg.DATASETS.GALLERY_PATH
)
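# Quick illustration (hypothetical filename, not from the original file) of how
# _process_dir parses names following the (pid)_c(camid)_(iid).jpg convention:
#   >>> pid, camid = map(int, re.compile(r'([-\d]+)_c([\d]+)').search('0001_c2_000151.jpg').groups())
#   >>> (pid, camid)
#   (1, 2)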
| 33.979381 | 103 | 0.556887 | 776 | 6,592 | 4.501289 | 0.21134 | 0.026052 | 0.032064 | 0.034354 | 0.271686 | 0.256227 | 0.214143 | 0.15574 | 0.105354 | 0.04008 | 0 | 0.003404 | 0.331614 | 6,592 | 193 | 104 | 34.15544 | 0.789378 | 0.106493 | 0 | 0.086667 | 0 | 0 | 0.092898 | 0.021521 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0.006667 | 0.06 | 0.006667 | 0.18 | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bfc9c9dc6bfe8a46625975c2ef7e96d083e2b69 | 407 | py | Python | StarletteServer/functions.py | Amatobahn/starlette-boilerplate | 92e91bd30e918df45d3e2a09602833fd07f698f2 | [
"MIT"
] | 1 | 2021-11-30T20:08:17.000Z | 2021-11-30T20:08:17.000Z | StarletteServer/functions.py | Amatobahn/starlette-boilerplate | 92e91bd30e918df45d3e2a09602833fd07f698f2 | [
"MIT"
] | null | null | null | StarletteServer/functions.py | Amatobahn/starlette-boilerplate | 92e91bd30e918df45d3e2a09602833fd07f698f2 | [
"MIT"
] | 2 | 2019-07-13T11:27:21.000Z | 2020-01-27T07:13:09.000Z | from starlette.requests import Request
from starlette.responses import JSONResponse, Response
def hello_world(scope):
return Response("Hello World!")
def hello_world_form_data(scope):
async def parse(receive, send):
request = Request(scope, receive)
data = await request.form()
response = JSONResponse(data['data'])
await response(receive, send)
return parse
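# Usage sketch (assumption, not part of the original file): hello_world_form_data
# follows the two-callable ASGI style, so a server would first call it with the
# connection scope and then await the returned coroutine:
#   handler = hello_world_form_data(scope)
#   await handler(receive, send)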
| 25.4375 | 54 | 0.70516 | 48 | 407 | 5.895833 | 0.416667 | 0.106007 | 0.091873 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206388 | 407 | 15 | 55 | 27.133333 | 0.876161 | 0 | 0 | 0 | 0 | 0 | 0.039312 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.181818 | 0.090909 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5bff6a5a233a953254c76067336649d556192ab7 | 4,966 | py | Python | project/settings.py | panubo/panubo-dns | fce7cf8b26f06da749c659f24e7f6339997c4102 | [
"MIT"
] | null | null | null | project/settings.py | panubo/panubo-dns | fce7cf8b26f06da749c659f24e7f6339997c4102 | [
"MIT"
] | 1 | 2015-08-19T05:24:09.000Z | 2019-07-01T01:47:12.000Z | project/settings.py | panubo/panubo-dns | fce7cf8b26f06da749c659f24e7f6339997c4102 | [
"MIT"
] | 2 | 2016-06-06T09:48:24.000Z | 2021-04-19T15:33:50.000Z | """
Django settings for project.
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
PROJECT_PATH = os.path.abspath(os.path.split(__file__)[0])
ALLOWED_HOSTS = ['*']
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
'reversion',
'dnsmanager',
'rest_framework',
'project.account',
'project.couch',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'project.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
os.path.join(PROJECT_PATH, 'templates'),
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.contrib.auth.context_processors.auth',
'django.template.context_processors.debug',
'django.template.context_processors.i18n',
'django.template.context_processors.media',
'django.template.context_processors.static',
'django.template.context_processors.tz',
'django.contrib.messages.context_processors.messages',
'project.context_processor.app_name',
],
},
},
]
WSGI_APPLICATION = 'project.wsgi.application'
# Internationalization
# https://docs.djangoproject.com/en/1.6/topics/i18n/
LANGUAGE_CODE = 'en-us'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.6/howto/static-files/
STATIC_ROOT = os.path.join(BASE_DIR, 'www', 'static')
STATIC_URL = '/static/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'www', 'media')
MEDIA_URL = '/media/'
# Settings from .env (optional load)
from dj_database_url import config as db_config
DATABASES = {'default': db_config(default='sqlite://localhost//%s' % os.path.join(BASE_DIR, 'db', 'project.sqlite3'))}
TIME_ZONE = os.environ.setdefault('TIME_ZONE', "Australia/Sydney")
EMAIL_HOST = os.environ.setdefault('EMAIL_HOST', 'localhost')
EMAIL_PORT = int(os.environ.setdefault('EMAIL_PORT', '25'))
SERVER_EMAIL = os.environ.get('SERVER_EMAIL')
DEFAULT_FROM_EMAIL = os.environ.get('DEFAULT_FROM_EMAIL')
ADMINS = ()
for admin in os.environ.get('ADMINS', '').split():
ADMINS += (tuple(admin.split('/')),)
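# Example (hypothetical value): ADMINS="Jane/jane@example.com John/john@example.com"
# yields ADMINS = (('Jane', 'jane@example.com'), ('John', 'john@example.com')).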
MANAGERS = ADMINS
DEBUG = bool(os.environ.get('DEBUG', 'False').lower() in ("true", "yes", "t", "1"))
if os.environ.get('DEBUG_EMAIL', False):
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Loading SECRET_KEY from .env variable
try:
SECRET_KEY = os.environ['SECRET_KEY']
except KeyError:
print("Warning: settings.SECRET_KEY is not set!")
raise
from dnsmanager.defaults import DNS_MANAGER_RECIPES_DEFAULT
DNS_MANAGER_RECIPES = DNS_MANAGER_RECIPES_DEFAULT + (
('project.dns_manager_recipes.VoltGridEmail', 'Set Volt Grid MX'),
('project.dns_manager_recipes.VoltGridNameServers', 'Set Volt Grid NS'),
('project.dns_manager_recipes.CromovaNameServers', 'Set CroMoVa NS'),
('project.dns_manager_recipes.PanuboNameServers', 'Set Panubo NS'),
)
DNS_MANAGER_DOMAIN_MODEL = 'project.account.Domain'
DNS_MANAGER_ZONE_ADMIN_FILTER = ('domain__organisation', )
DNS_MANAGER_NAMESERVERS = os.environ.get('DNS_MANAGER_NAMESERVERS', None)
# CouchDB Config
COUCH_DATABASES = {
'dns': {
'NAME': os.environ.setdefault('COUCHDB_DNS_NAME', 'dns'),
'USER': os.environ.setdefault('COUCHDB_DNS_USER', 'admin'),
'PASS': os.environ.setdefault('COUCHDB_DNS_PASS', 'admin'),
'HOST': os.environ.setdefault('COUCHDB_DNS_HOST', 'http://127.0.0.1:5984'),
}
}
COUCH_IGNORE_MISSING = True
REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework.authentication.BasicAuthentication',
'rest_framework.authentication.SessionAuthentication',
),
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.IsAuthenticated',
),
'PAGINATE_BY': 100
}
# Redis cache for DNS MANAGER
if os.environ.get('REDIS_URL', False):
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "%s/1" % os.environ.get('REDIS_URL', 'redis://localhost:6379'),
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient",
}
}
}
# Custom attributes
APP_NAME = 'Panubo DNS'
# Django Sites
SITE_ID = 1 | 30.654321 | 118 | 0.681232 | 563 | 4,966 | 5.79929 | 0.35524 | 0.044104 | 0.029403 | 0.047473 | 0.106585 | 0.032466 | 0.032466 | 0 | 0 | 0 | 0 | 0.008778 | 0.174184 | 4,966 | 162 | 119 | 30.654321 | 0.787369 | 0.0882 | 0 | 0.034188 | 0 | 0 | 0.45765 | 0.302439 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.008547 | 0.025641 | 0 | 0.025641 | 0.008547 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7505d44db8a60afab4853b6715130e001e8c0b30 | 587 | py | Python | src/extract/test.py | AutoKnowledge/AutoKnowledge | 1a9fce1449d9605dc0289ab13736d073453ed102 | [
"Apache-2.0"
] | 1 | 2021-02-24T10:22:19.000Z | 2021-02-24T10:22:19.000Z | src/extract/test.py | AutoKnowledge/AutoKnowledge | 1a9fce1449d9605dc0289ab13736d073453ed102 | [
"Apache-2.0"
] | null | null | null | src/extract/test.py | AutoKnowledge/AutoKnowledge | 1a9fce1449d9605dc0289ab13736d073453ed102 | [
"Apache-2.0"
] | null | null | null | '''
import analyze
any = analyze.Analyze()
# Sample sentence: 吻别是由张学友演唱的一首歌曲。 (Gloss: "Kiss Goodbye" is a song performed by Jacky Cheung.)
#text = '《盗墓笔记》是2014年欢瑞世纪影视传媒股份有限公司出品的一部网络季播剧,改编自南派三叔所著的同名小说,由郑保瑞和罗永昌联合导演,李易峰、杨洋、唐嫣、刘天佐、张智尧、魏巍等主演。'
#text = '姚明1980年9月12日出生于上海市徐汇区,祖籍江苏省苏州市吴江区震泽镇,前中国职业篮球运动员,司职中锋,现任中职联公司董事长兼总经理。'
knowledge = any.knowledge(text)
print(knowledge)
'''
from medext import getTriples
text = "据报道称,新冠肺炎患者经常会发热、咳嗽,少部分患者会胸闷、=乏力,其病因包括: 1.自身免疫系统缺陷 2.人传人。"
#text = "少部分先天性心脏病在5岁前有自愈的机会,另外有少部分患者畸形轻微、对循环功能无明显影响,而无需任何治疗,但大多数患者需手术治疗校正畸形。随着医学技术的飞速发展,手术效果已经极大提高,目前多数患者如及时手术治疗,可以和正常人一样恢复正常,生长发育不受影响,并能胜任普通的工作、学习和生活的需要。"
result = getTriples(text)
print(result)
| 39.133333 | 156 | 0.810903 | 62 | 587 | 7.677419 | 0.774194 | 0.037815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025362 | 0.059625 | 587 | 14 | 157 | 41.928571 | 0.836957 | 0.749574 | 0 | 0 | 0 | 0 | 0.410072 | 0.280576 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
75075401a18228befc57b214cb804403c7d028e7 | 4,460 | py | Python | alfred-workflow-py3/tests/test_workflow_xml.py | kw-lee/alfdaumdict | fde5c54fb5e8eb30bd6308c4a6086e46b60f101b | [
"MIT"
] | 1 | 2022-03-19T10:27:12.000Z | 2022-03-19T10:27:12.000Z | alfred-workflow-py3/tests/test_workflow_xml.py | kw-lee/alfdaumdict | fde5c54fb5e8eb30bd6308c4a6086e46b60f101b | [
"MIT"
] | null | null | null | alfred-workflow-py3/tests/test_workflow_xml.py | kw-lee/alfdaumdict | fde5c54fb5e8eb30bd6308c4a6086e46b60f101b | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# encoding: utf-8
#
# Copyright (c) 2017 Dean Jackson <deanishe@deanishe.net>
#
# MIT Licence. See http://opensource.org/licenses/MIT
#
# Created on 2017-05-06
#
"""Unit tests for Workflow's XML feedback generation."""
import sys
from contextlib import contextmanager
from xml.etree import ElementTree as ET
import pytest
from six.moves import StringIO
from workflow import Workflow
@pytest.fixture(scope="function")
def wf(infopl):
"""Create a :class:`~workflow.Workflow` object."""
yield Workflow()
@contextmanager
def stdout():
"""Capture output to STDOUT."""
old = sys.stdout
sio = StringIO()
sys.stdout = sio
yield sio
sio.close()
sys.stdout = old
def test_item_creation(wf):
"""XML generation"""
wf.add_item(
"title",
"subtitle",
arg="arg",
autocomplete="autocomplete",
valid=True,
uid="uid",
icon="icon.png",
icontype="fileicon",
type="file",
largetext="largetext",
copytext="copytext",
quicklookurl="http://www.deanishe.net/alfred-workflow",
)
with stdout() as sio:
wf.send_feedback()
output = sio.getvalue()
root = ET.fromstring(output)
item = list(root)[0]
assert item.attrib["uid"] == "uid"
assert item.attrib["autocomplete"] == "autocomplete"
assert item.attrib["valid"] == "yes"
assert item.attrib["uid"] == "uid"
title, subtitle, arg, icon, largetext, copytext, quicklookurl = list(item)
assert title.text == "title"
assert title.tag == "title"
assert subtitle.text == "subtitle"
assert subtitle.tag == "subtitle"
assert arg.text == "arg"
assert arg.tag == "arg"
assert largetext.tag == "text"
assert largetext.text == "largetext"
assert largetext.attrib["type"] == "largetype"
assert copytext.tag == "text"
assert copytext.text == "copytext"
assert copytext.attrib["type"] == "copy"
assert icon.text == "icon.png"
assert icon.tag == "icon"
assert icon.attrib["type"] == "fileicon"
assert quicklookurl.tag == "quicklookurl"
assert quicklookurl.text == "http://www.deanishe.net/alfred-workflow"
def test_item_creation_with_modifiers(wf):
"""XML generation (with modifiers)."""
mod_subs = {}
for mod in ("cmd", "ctrl", "alt", "shift", "fn"):
mod_subs[mod] = mod
wf.add_item(
"title",
"subtitle",
mod_subs,
arg="arg",
autocomplete="autocomplete",
valid=True,
uid="uid",
icon="icon.png",
icontype="fileicon",
type="file",
)
with stdout() as sio:
wf.send_feedback()
output = sio.getvalue()
root = ET.fromstring(output)
item = list(root)[0]
assert item.attrib["uid"] == "uid"
assert item.attrib["autocomplete"] == "autocomplete"
assert item.attrib["valid"] == "yes"
assert item.attrib["uid"] == "uid"
(title, subtitle, sub_cmd, sub_ctrl, sub_alt, sub_shift, sub_fn, arg, icon) = list(
item
)
assert title.text == "title"
assert title.tag == "title"
assert subtitle.text == "subtitle"
assert sub_cmd.text == "cmd"
assert sub_cmd.attrib["mod"] == "cmd"
assert sub_ctrl.text == "ctrl"
assert sub_ctrl.attrib["mod"] == "ctrl"
assert sub_alt.text == "alt"
assert sub_alt.attrib["mod"] == "alt"
assert sub_shift.text == "shift"
assert sub_shift.attrib["mod"] == "shift"
assert sub_fn.text == "fn"
assert sub_fn.attrib["mod"] == "fn"
assert subtitle.tag == "subtitle"
assert arg.text == "arg"
assert arg.tag == "arg"
assert icon.text == "icon.png"
assert icon.tag == "icon"
assert icon.attrib["type"] == "fileicon"
def test_item_creation_no_optionals(wf):
"""XML generation (no optionals)"""
wf.add_item("title")
with stdout() as sio:
wf.send_feedback()
output = sio.getvalue()
root = ET.fromstring(output)
item = list(root)[0]
for key in ["uid", "arg", "autocomplete"]:
assert key not in item.attrib
assert item.attrib["valid"] == "no"
title, subtitle = list(item)
assert title.text == "title"
assert title.tag == "title"
assert subtitle.text is None
tags = [elem.tag for elem in list(item)]
for tag in ["icon", "arg"]:
assert tag not in tags
if __name__ == "__main__": # pragma: no cover
pytest.main([__file__])
| 26.081871 | 87 | 0.609865 | 547 | 4,460 | 4.888483 | 0.224863 | 0.037397 | 0.053852 | 0.028422 | 0.456245 | 0.439791 | 0.415856 | 0.415856 | 0.415856 | 0.415856 | 0 | 0.004709 | 0.238117 | 4,460 | 170 | 88 | 26.235294 | 0.782225 | 0.085874 | 0 | 0.504 | 0 | 0 | 0.144662 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.04 | false | 0 | 0.048 | 0 | 0.088 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
750808c8295fb8a3000d23694263942f06120b62 | 4,618 | py | Python | executables/adjusted_ranking_experiments.py | mberr/rank-based-evaluation | 76a0847eecf4350d92783e9773d6fc6b6c69ca51 | [
"MIT"
] | 5 | 2021-02-16T01:04:39.000Z | 2021-09-01T01:36:02.000Z | executables/adjusted_ranking_experiments.py | mberr/rank-based-evaluation | 76a0847eecf4350d92783e9773d6fc6b6c69ca51 | [
"MIT"
] | null | null | null | executables/adjusted_ranking_experiments.py | mberr/rank-based-evaluation | 76a0847eecf4350d92783e9773d6fc6b6c69ca51 | [
"MIT"
] | null | null | null | # coding=utf-8
"""Evaluation of different training and test sizes."""
import argparse
import logging
import random
import mlflow
import numpy
import torch
import tqdm
from kgm.data import get_dataset_by_name
from kgm.eval.matching import evaluate_matching_model
from kgm.models import GCNAlign
from kgm.modules import MarginLoss, SampledMatchingLoss, get_similarity
from kgm.training.matching import AlignmentModelTrainer
from kgm.utils.mlflow_utils import log_metrics_to_mlflow, log_params_to_mlflow
def main():
logging.basicConfig(level=logging.INFO)
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', type=str, default='dbp15k_jape')
parser.add_argument('--subset', type=str, default='zh_en')
parser.add_argument('--num_epochs', type=int, default=2_000)
parser.add_argument('--iterations', type=int, default=5)
parser.add_argument('--device', type=int, default=0)
parser.add_argument('--tracking_uri', type=str, default='http://localhost:5000')
args = parser.parse_args()
# Mlflow settings
logging.info(f'Logging to MLFlow @ {args.tracking_uri}')
mlflow.set_tracking_uri(uri=args.tracking_uri)
mlflow.set_experiment('adjusted_ranking_experiments')
# Determine device
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
logging.info(f"Using device={device}")
# load dataset
dataset = get_dataset_by_name(
dataset_name=args.dataset,
subset_name=args.subset,
inverse_triples=True, # GCNAlign default
self_loops=True, # GCNAlign default
)
for num_train in [
0,
10,
20,
50,
100,
200,
500,
1000,
2000,
3000,
5000,
7500,
]:
ea_full = dataset.alignment.all
i_all = ea_full.shape[1]
i_train = num_train
        # run several seeded iterations for this training-set size
for iteration in tqdm.trange(args.iterations, unit='run', unit_scale=True):
# fix random seed
torch.manual_seed(iteration)
numpy.random.seed(iteration)
random.seed(iteration)
# train-test split
assert ea_full.shape[0] == 2
ea_full = ea_full[:, torch.randperm(i_all)]
ea_train, ea_test = ea_full[:, :i_train], ea_full[:, i_train:]
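            # e.g. with i_all = 15000 and i_train = 500 (hypothetical numbers), the
            # shuffled columns split into a 2 x 500 train alignment and a 2 x 14500
            # test alignment; each column is a (source, target) entity index pair.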
# instantiate model
model = GCNAlign(
dataset=dataset,
embedding_dim=200,
n_layers=2,
use_conv_weights=False,
).to(device=device)
# instantiate similarity
similarity = get_similarity(
similarity="l1",
transformation="negative",
)
if i_train > 0:
# instantiate loss
loss = SampledMatchingLoss(
similarity=similarity,
base_loss=MarginLoss(margin=3.),
num_negatives=50,
)
# instantiate trainer
trainer = AlignmentModelTrainer(
model=model,
similarity=similarity,
dataset=dataset,
loss=loss,
optimizer_cls="adam",
optimizer_kwargs=dict(
lr=1.0,
),
)
# train
trainer.train(num_epochs=args.num_epochs)
# evaluate with different test set sizes
total_num_test_alignments = ea_test.shape[1]
test_sizes = list(range(1_000, total_num_test_alignments, 1_000))
results = dict(evaluate_matching_model(
model=model,
alignments={
k: ea_test[:, :k]
for k in test_sizes
},
similarity=similarity,
)[0])
# store results
for size, result in results.items():
# start experiment
with mlflow.start_run():
log_params_to_mlflow(config=dict(
dataset=args.dataset,
subset=args.subset,
num_epochs=args.num_epochs,
num_train_alignments=i_train,
num_test_alignments=ea_test[:, :size].shape[1],
seed=iteration,
))
log_metrics_to_mlflow(metrics=result)
if __name__ == '__main__':
main()
| 31.848276 | 87 | 0.561498 | 484 | 4,618 | 5.136364 | 0.340909 | 0.016895 | 0.04103 | 0.012872 | 0.055511 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0247 | 0.351234 | 4,618 | 144 | 88 | 32.069444 | 0.805073 | 0.083369 | 0 | 0.064815 | 0 | 0 | 0.052244 | 0.006649 | 0 | 0 | 0 | 0 | 0.009259 | 1 | 0.009259 | false | 0 | 0.12037 | 0 | 0.12963 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
750a34fed3230d5f5f9e5491dc14d2f490974ee6 | 5,204 | py | Python | microsoft/gestures/gesture_container.py | dany74q/python-microsoft-project-prague-sdk | abfea98d75e16c2d8862973e61970d99122f9cec | [
"MIT"
] | 1 | 2017-07-30T10:17:38.000Z | 2017-07-30T10:17:38.000Z | microsoft/gestures/gesture_container.py | dany74q/python-microsoft-project-prague-sdk | abfea98d75e16c2d8862973e61970d99122f9cec | [
"MIT"
] | null | null | null | microsoft/gestures/gesture_container.py | dany74q/python-microsoft-project-prague-sdk | abfea98d75e16c2d8862973e61970d99122f9cec | [
"MIT"
] | null | null | null | from microsoft.gestures.fingertip_placement_relation import FingertipPlacementRelation
from microsoft.gestures.fingertip_distance_relation import FingertipDistanceRelation
from xml.etree.ElementTree import Element, SubElement, tostring
from microsoft.gestures.relative_placement import RelativePlacement
from microsoft.gestures.any_finger_context import AnyFingerContext
from microsoft.gestures.any_finger_context import AnyFingerContext
from microsoft.gestures.hand_part_motion import HandPartMotion
from microsoft.gestures.any_hand_context import AnyHandContext
from microsoft.gestures.pose_direction import PoseDirection
from microsoft.gestures.finger_flexion import FingerFlexion
from microsoft.gestures.finger_pose import FingerPose
from microsoft.gestures.palm_pose import PalmPose
from microsoft.gestures.finger import Finger
from xml.dom import minidom
class GestureContainer(object):
def __init__(self, gesture, is_global, pid):
self._gesture = gesture
self._is_global = is_global
self._pid = pid
@property
def name(self):
return self._gesture._name
def to_xaml(self):
xml_str = ''
root = Element('Gesture')
root.set('xmlns', 'http://schemas.microsoft.com/gestures/2015/xaml')
root.set('Name', self.name)
segmens = SubElement(root, 'Gesture.Segments')
idle = SubElement(segmens, 'IdleGestureSegment')
idle.set('Name', 'Idle')
for pose in self._gesture._segments:
pose_element = SubElement(segmens, pose.__class__.__name__)
pose_element.set('Name', pose._name)
for constraint in pose._constrains:
constraint_element = SubElement(pose_element, constraint.__class__.__name__)
if isinstance(constraint._context, AnyHandContext):
constraint_element.set('Context', '{AnyHand}')
elif isinstance(constraint._context, AnyFingerContext):
constraint_element.set('Context', '{AnyFinger %s}' % (', '.join(map(lambda x: self._get_bitmask_keys_from_dict(Finger, x), constraint._context))))
else:
constraint_element.set('Context', ', '.join(map(lambda x: self._get_bitmask_keys_from_dict(Finger, x), constraint._context)))
if isinstance(constraint, HandPartMotion):
constraint_element.set('MotionScript', ', '.join(map(lambda x: x._name, constraint._motion_script._motion_segments)))
if isinstance(constraint, FingerPose):
if constraint._pose_direction:
constraint_element.set('Direction', self._get_bitmask_keys_from_dict(PoseDirection, constraint._pose_direction))
constraint_element.set('Flexion', self._get_key_from_dict(FingerFlexion, constraint._finger_flextion))
if isinstance(constraint, PalmPose):
if constraint._direction:
constraint_element.set('Direction', self._get_bitmask_keys_from_dict(PoseDirection, constraint._direction))
constraint_element.set('Orientation', self._get_bitmask_keys_from_dict(PoseDirection, constraint._orientation))
if hasattr(constraint, '_other_context') and constraint._other_context:
constraint_element.set('OtherContext', ', '.join(map(lambda x: self._get_bitmask_keys_from_dict(Finger, x), constraint._other_context)))
if hasattr(constraint, '_distance') and isinstance(constraint, FingertipPlacementRelation):
constraint_element.set('PlacementRelation', self._get_bitmask_keys_from_dict(RelativePlacement, constraint._distance))
if hasattr(constraint, '_distance') and isinstance(constraint, FingertipDistanceRelation):
constraint_element.set('DistanceRelation', self._get_bitmask_keys_from_dict(RelativeDistance, constraint._distance))
connections = SubElement(root, 'Gesture.SegmentsConnections')
for connection in self._gesture._segment_connections:
from_, to_ = connection['From'], connection['To']
connection_element = SubElement(connections, 'SegmentConnections')
connection_element.set('From', from_)
connection_element.set('To', to_)
xml_str = tostring(root, 'utf-8')
pretty = minidom.parseString(xml_str)
return pretty.toprettyxml(indent=" ").replace('\n', '').replace('<?xml version="1.0" ?>', '').strip()
@staticmethod
def _get_key_from_dict(cls, value):
d = cls.__dict__
        for k, v in filter(lambda _: isinstance(_[1], int), d.items()):  # items() for Python 3
if v == value:
return k
@staticmethod
def _get_bitmask_keys_from_dict(cls, value):
d = cls.__dict__
values = [x for x in d.values() if isinstance(x, int)]
keys = []
        for k, v in filter(lambda x: isinstance(x[1], int), d.items()):  # items() for Python 3
if v & value != 0 or (v == 0 and value == 0):
keys.append(k)
return '|'.join(keys) | 57.186813 | 166 | 0.683128 | 555 | 5,204 | 6.099099 | 0.225225 | 0.041359 | 0.08065 | 0.047858 | 0.274446 | 0.258789 | 0.218316 | 0.161595 | 0.14712 | 0.14712 | 0 | 0.002952 | 0.21887 | 5,204 | 91 | 167 | 57.186813 | 0.829766 | 0 | 0 | 0.073171 | 0 | 0 | 0.072046 | 0.005187 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060976 | false | 0 | 0.182927 | 0.012195 | 0.304878 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
750d8e497e21c46d3ed340b4abc7623e18ad44e4 | 7,027 | py | Python | pygraph/domination.py | jysh1214/pygraph | fba581ce5e259854a4b86163c4fb61030e663a81 | [
"MIT"
] | null | null | null | pygraph/domination.py | jysh1214/pygraph | fba581ce5e259854a4b86163c4fb61030e663a81 | [
"MIT"
] | null | null | null | pygraph/domination.py | jysh1214/pygraph | fba581ce5e259854a4b86163c4fb61030e663a81 | [
"MIT"
] | null | null | null | from .get_imformation import GI
class DM:
def __init__(self, adj_matrix, ins_matrix):
self.Adjacency_Matrix = adj_matrix
self.Insidence_Matrix = ins_matrix
self.N = len(self.Adjacency_Matrix)
### Packing: Find Maximal ###
def clique(self):
"""
Returns:
Maximal clique
Attention:
Maximal
"""
        # from .get_imformation
        gi = GI(self.Adjacency_Matrix, self.Insidence_Matrix)
        # check that every vertex is connected with all the others
def complete(vertex):
for k in range(len(temp_set)):
if not gi.check_conn(vertex, temp_set[k]):
return False
return True
def clique_recursion(vertex):
nb = gi.get_nb(vertex)
for j in range(len(nb)):
if (not nb[j] in temp_set) and complete(nb[j]):
temp_set.append(nb[j])
clique_recursion(nb[j])
cliq_set = []
used_vertex = []
vertex = [i for i in range(self.N)]
temp_set = [] # local var.
for offset in range(self.N):
for b in range(self.N):
# use circular queue
i = (b+offset)%self.N
if vertex[i] in used_vertex:
continue
temp_set.append(vertex[i])
# find the clique contain vertex[i]
clique_recursion(vertex[i])
used_vertex = list(set(temp_set)|set(used_vertex))
temp = []
temp = put_all(temp_set, temp)
cliq_set.append(temp)
temp_set = []
max_ = 0
max_clique = 0
for i in range(len(cliq_set)):
if len(cliq_set[i]) > max_:
max_ = len(cliq_set[i])
max_clique = i
return cliq_set[max_clique]
def indp_set(self):
"""
Returns:
Maximal independent set.
Attention:
Maximal
"""
in_set = []
vertex = [i for i in range(self.N)]
# from get_imformation
gi = GI(self.Adjacency_Matrix, self.Insidence_Matrix)
for offset in range(self.N):
selected = []
non_selected = []
while len(selected)+len(non_selected) < self.N:
for b in range(self.N):
# use circular queue
i = (b+offset)%self.N
if not vertex[i] in non_selected:
selected.append(vertex[i])
nb = gi.get_nb(vertex[i])
non_selected = list(set(nb)|set(non_selected))
temp = []
temp = put_all(selected, temp)
in_set.append(temp)
max_ = 0
max_set = 0
for i in range(len(in_set)):
if len(in_set[i]) > max_:
max_ = len(in_set[i])
max_set = i
return in_set[max_set]
### Covering: Find Minimal ###
def dominating_set(self):
"""
Returns:
Minimal dominating set.
Attention:
Minimal
"""
dom_set = []
vertex = [i for i in range(self.N)]
# from get_imformation
gi = GI(self.Adjacency_Matrix, self.Insidence_Matrix)
for offset in range(self.N):
selected = []
non_selected = []
while len(selected)+len(non_selected) < self.N:
for b in range(self.N):
# use circular queue
i = (b+offset)%self.N
if not vertex[i] in non_selected:
selected.append(vertex[i])
nb = gi.get_nb(vertex[i])
non_selected = list(set(nb)|set(non_selected))
temp = []
temp = put_all(selected, temp)
dom_set.append(temp)
min_ = self.N
min_set = 0
for i in range(len(dom_set)):
if len(dom_set[i]) < min_:
min_ = len(dom_set[i])
min_set = i
return dom_set[min_set]
def vertex_cover(self):
"""
Method:
Greedy algorithm.
Returns:
Minimal vertex cover set.
Attention:
Minimal
"""
selected = []
cov_edge = []
# from get_imformation
gi = GI(self.Adjacency_Matrix, self.Insidence_Matrix)
degree = []
for i in range(self.N):
degree.append([i, gi.get_degree(i)]) # [vertex No, degree]
degree = sorted(degree, key = lambda x: x[1], reverse = True)
while len(cov_edge) < len(self.Insidence_Matrix[0]):
vertex = degree[0][0] # select max degree vertex
selected.append(vertex)
nb = gi.get_nb(vertex)
# remove selected vertex
degree.remove(degree[0])
for j in range(len(nb)):
temp_edge = gi.get_edge(vertex, nb[j])
cov_edge = list(set([temp_edge])|set(cov_edge))
                # the degree of each neighbor vertex is reduced by 1
for k in range(len(degree)):
if degree[k][0] == nb[j]:
degree[k][1] -= 1
# resort
degree = sorted(degree, key = lambda x: x[1], reverse = True)
return selected
def edge_cover(self):
"""
Returns:
Minimal edge cover set.
Attention:
Minimal
"""
selected = []
cov_ver = []
# from get_imformation
gi = GI(self.Adjacency_Matrix, self.Insidence_Matrix)
        # [edge No, number of still-uncovered endpoints of the edge]
e = len(self.Insidence_Matrix[0])
vertex_num = [[i, 2] for i in range(e)]
"""
for i in range(len(self.Insidence_Matrix[0])):
vertex_num.append([i, 2]) # [edge No, vertex number of the edge]
print(vertex_num)
"""
while len(cov_ver) < self.N:
edge = vertex_num[0][0]
selected.append(edge)
[a, b] = gi.edge_term(edge)
cov_ver = list(set([a, b])|set(cov_ver))
def reduce_(vertex):
nb = gi.get_nb(vertex)
for i in range(len(nb)):
temp_edge = gi.get_edge(vertex, nb[i])
for j in range(len(vertex_num)):
if vertex_num[j][0] == temp_edge:
vertex_num[j][1] -= 1
break
            # the uncovered-endpoint count of each incident edge is reduced by 1
reduce_(a)
reduce_(b)
# sorted by number of vertex
vertex_num = sorted(vertex_num, key = lambda x: x[1], reverse = True)
return selected
def put_all(a, b):
for i in a:
b.append(i)
return b
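# Usage sketch (hypothetical two-vertex graph; GI is assumed to expose the
# get_nb/get_degree/get_edge/edge_term helpers used above):
#   adj = [[0, 1], [1, 0]]   # adjacency matrix: a single edge 0-1
#   ins = [[1], [1]]         # incidence matrix with one edge column
#   dm = DM(adj, ins)
#   dm.clique()        # -> [0, 1]
#   dm.vertex_cover()  # -> a single vertex, e.g. [0]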
| 27.342412 | 81 | 0.477871 | 820 | 7,027 | 3.931707 | 0.134146 | 0.045596 | 0.020471 | 0.034119 | 0.505583 | 0.462779 | 0.405707 | 0.343052 | 0.322891 | 0.322891 | 0 | 0.006401 | 0.421944 | 7,027 | 256 | 82 | 27.449219 | 0.787297 | 0.120677 | 0 | 0.369565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072464 | false | 0 | 0.007246 | 0 | 0.144928 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
750db8dedf746751bad013c1015bec9f1774f4f4 | 680 | py | Python | game/combat/effects/moveeffect/cure.py | Sipondo/ulix-dexflow | de46482fe08e3d600dd5da581f0524b55e5df961 | [
"MIT"
] | 5 | 2021-06-25T16:44:38.000Z | 2021-12-31T01:29:00.000Z | game/combat/effects/moveeffect/cure.py | Sipondo/ulix-dexflow | de46482fe08e3d600dd5da581f0524b55e5df961 | [
"MIT"
] | null | null | null | game/combat/effects/moveeffect/cure.py | Sipondo/ulix-dexflow | de46482fe08e3d600dd5da581f0524b55e5df961 | [
"MIT"
] | 1 | 2021-06-25T20:33:47.000Z | 2021-06-25T20:33:47.000Z | from .basemoveeffect import BaseMoveEffect
from game.combat.effects.genericeffect import GenericEffect
class Cure(BaseMoveEffect):
def after_action(self):
if self.scene.board.random_roll(self.move.chance):
target_effects = self.scene.get_effects_on_target(self.move.target)
for status in [x for x in target_effects if x.type == "Majorstatus"]:
self.scene.delete_effect(status)
self.scene.add_effect(
GenericEffect(
self.scene,
f"{self.scene.board.get_actor(self.move.user).name} was cured!",
)
)
return True, False, False
| 37.777778 | 84 | 0.616176 | 79 | 680 | 5.177215 | 0.518987 | 0.132029 | 0.06846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.294118 | 680 | 17 | 85 | 40 | 0.852083 | 0 | 0 | 0 | 0 | 0 | 0.104412 | 0.072059 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.133333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
750e4085fb2729c790fdc37560668ec0186b9389 | 2,945 | py | Python | mlserve/utils.py | jettify/mlserve | 571152e4475738e0b01fcbde166d95a3636b3c5f | [
"Apache-2.0"
] | 17 | 2018-08-06T09:38:17.000Z | 2018-08-14T10:55:58.000Z | mlserve/utils.py | ml-libs/mlserve | 571152e4475738e0b01fcbde166d95a3636b3c5f | [
"Apache-2.0"
] | 63 | 2018-09-07T21:40:16.000Z | 2022-02-10T17:11:13.000Z | mlserve/utils.py | jettify/mlserve | 571152e4475738e0b01fcbde166d95a3636b3c5f | [
"Apache-2.0"
] | 1 | 2019-05-06T10:18:59.000Z | 2019-05-06T10:18:59.000Z | import json
import os
import trafaret as t
import yaml
from dataclasses import dataclass, asdict
from pathlib import Path
from typing import Any, List, Dict
ModelMeta = t.Dict(
{
t.Key('name'): t.String,
t.Key('description'): t.String,
t.Key('model_path'): t.String,
t.Key('data_schema_path'): t.String,
t.Key('target'): t.String | t.List(t.String),
t.Key('loader', default='pickle'): t.Enum('pickle', 'joblib'),
}
)
# TODO: rename to something more general
ModelConfig = t.Dict({
t.Key('host', default='127.0.0.1'): t.String,
t.Key('port', default=9000): t.Int[0: 65535],
t.Key('workers', default=2): t.Int[1:127],
t.Key('models'): t.List(ModelMeta),
})
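# Illustrative YAML accepted by ModelConfig (all values hypothetical):
#
#   host: 127.0.0.1
#   port: 9000
#   workers: 2
#   models:
#     - name: demo
#       description: an example model
#       model_path: models/demo.pkl
#       data_schema_path: models/demo_schema.json
#       target: label
#       loader: pickle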
ServerConfigTrafaret = t.Dict({
t.Key('host', default='127.0.0.1'): t.String,
t.Key('port', default=9000): t.Int[0: 65535],
t.Key('workers', default=2): t.Int[1:127],
}).ignore_extra('*')
@dataclass(frozen=True)
class ServerConfig:
host: str
port: int
workers: int
@dataclass(frozen=True)
class ModelDescriptor:
name: str
description: str
target: List[str]
features: List[str]
schema: Dict[Any, Any]
model_path: Path
model_size: int
data_schema_path: Path
schema_size: int
loader: str
def asdict(self) -> Dict[str, Any]:
return asdict(self)
def load_model_config(fname: Path) -> Dict[str, Any]:
with open(fname, 'rt') as f:
raw_data = yaml.safe_load(f)
data: Dict[str, Any] = ModelConfig(raw_data)
return data
def load_models(model_conf: List[Dict[str, str]]) -> List[ModelDescriptor]:
result: List[ModelDescriptor] = []
for m in model_conf:
with open(m['data_schema_path'], 'rb') as f:
schema = json.load(f)
_target = m['target']
target: List[str] = _target if isinstance(_target, list) else [_target]
schema = drop_columns(schema, target)
schema_size = os.path.getsize(m['data_schema_path'])
model_size = os.path.getsize(m['model_path'])
features = list(schema['schema']['properties'].keys())
model_desc = ModelDescriptor(
name=m['name'],
description=m['description'],
target=target,
features=features,
schema=schema,
model_path=Path(m['model_path']),
model_size=model_size,
data_schema_path=Path(m['data_schema_path']),
schema_size=schema_size,
loader=m['loader'],
)
result.append(model_desc)
return result
def drop_columns(schema: Dict[str, Any], columns: List[str]) -> Dict[str, Any]:
for col in columns:
schema['schema']['properties'].pop(col, None)
schema['ui_schema'].pop(col, None)
schema['example_data'].pop(col, None)
if col in schema['schema']['required']:
schema['schema']['required'].remove(col)
return schema
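# For example (hypothetical schema): dropping the target column 'price' removes it
# from schema['schema']['properties'], from 'ui_schema' and 'example_data', and
# from the 'required' list, so clients are only asked to supply feature columns.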
| 27.018349 | 79 | 0.609168 | 396 | 2,945 | 4.419192 | 0.239899 | 0.029714 | 0.036571 | 0.044 | 0.139429 | 0.101714 | 0.101714 | 0.101714 | 0.101714 | 0.101714 | 0 | 0.018717 | 0.238031 | 2,945 | 108 | 80 | 27.268519 | 0.761141 | 0.012903 | 0 | 0.094118 | 0 | 0 | 0.105336 | 0 | 0 | 0 | 0 | 0.009259 | 0 | 1 | 0.047059 | false | 0 | 0.082353 | 0.011765 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
750f07c7e86e349dd91bdbbb50528afd0a003a01 | 2,844 | py | Python | crawlergooglescholar/get_picts.py | vignif/Crawler-google-scholar | 5e95114d253ef5d160148422af240f034a3e5623 | [
"MIT"
] | null | null | null | crawlergooglescholar/get_picts.py | vignif/Crawler-google-scholar | 5e95114d253ef5d160148422af240f034a3e5623 | [
"MIT"
] | null | null | null | crawlergooglescholar/get_picts.py | vignif/Crawler-google-scholar | 5e95114d253ef5d160148422af240f034a3e5623 | [
"MIT"
] | null | null | null | """this script crawls for the profile pictures of researchers in google scholar
and saves them in a folder called [figures]
the crawler exploit the informations via the description of the tags in the html of google scholar
be aware that too many requests to a server might interrupt your script, please
set a proper sleep timing
debug mode is also available for crawl local hosted websites.
make sure to create a folder named 'figures' in the same path where you run this script
"""
import requests
from bs4 import BeautifulSoup
import re
import time
import urllib
from .utils import enable_debug_mode, name_surname
## This script collects the profile pictures for a given list of researchers
## by crawling Google Scholar.
# evaluate performances
start = time.time()
web_site, base_url = enable_debug_mode(False)
# Source excel for researcher names:
# names are in the first column,
# surnames are in the second column
# ind = df.index.values + 2
def download_mainpage(name, surname):
r = requests.get(base_url + name + "+" + surname)
print(r.status_code)
return r.text
def download_subpage(link):
r1 = requests.get(web_site + link)
return r1.text
def data_not_available(name, surname, i):
print("Data not available for " + name + " " + surname + " in index " + str(i))
def fetch(df):
    people = name_surname(df)  # compute once; also avoids shadowing the built-in all()
    size_db = len(people)
    print("get picts: start now")
    for i in range(size_db):
        name = people[i][0]
        surname = people[i][1]
print(name, surname)
ind = i + 2
soup = BeautifulSoup(download_mainpage(name, surname), "html.parser")
result = soup.find("h3", {"class": "gs_ai_name"})
if result is None:
data_not_available(name, surname, i)
continue
else:
link = result.find("a", href=re.compile(r"[/]([a-z]|[A-Z])\w+")).attrs[
"href"
]
soup = BeautifulSoup(download_subpage(link), "html.parser")
central_table = soup.find(id="gsc_prf_w")
img = central_table.find(id="gsc_prf_pup-img")
try:
urlpic = "https://scholar.google.com" + img["src"]
save_to = "figures/" + str(ind) + "-" + surname + ".jpg"
urllib.request.urlretrieve(urlpic, save_to)
except Exception as e:
print("Error: ", e)
if e.reason.errno == -2:
print("try a new link")
urlpic = img["src"]
urllib.request.urlretrieve(urlpic, save_to)
print(urlpic)
time.sleep(0.5)
end = time.time()
print("elapsed time: ")
print(end - start)
if __name__ == "__main__":
print("run this script from crawl.py")
| 30.913043 | 102 | 0.621308 | 386 | 2,844 | 4.466321 | 0.440415 | 0.076566 | 0.030162 | 0.031323 | 0.074246 | 0.074246 | 0 | 0 | 0 | 0 | 0 | 0.005332 | 0.274613 | 2,844 | 91 | 103 | 31.252747 | 0.830344 | 0.257384 | 0 | 0.036364 | 0 | 0 | 0.125181 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072727 | false | 0 | 0.109091 | 0 | 0.218182 | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7510b10f0b56bd6f23e065d973fded7927aa8141 | 3,438 | py | Python | kelas_2c/nurul.py | idamfadilah/belajarpython | 72c5108a7f44d8b8f33dc5d5b1bd4f8a83f8b811 | [
"MIT"
] | 1 | 2020-01-13T15:21:11.000Z | 2020-01-13T15:21:11.000Z | kelas_2c/nurul.py | idamfadilah/belajarpython | 72c5108a7f44d8b8f33dc5d5b1bd4f8a83f8b811 | [
"MIT"
] | 32 | 2019-11-21T08:46:48.000Z | 2020-01-12T07:53:02.000Z | kelas_2c/nurul.py | idamfadilah/belajarpython | 72c5108a7f44d8b8f33dc5d5b1bd4f8a83f8b811 | [
"MIT"
] | 437 | 2019-11-21T06:11:13.000Z | 2021-04-22T22:11:23.000Z | import csv
import matplotlib.pyplot as plt
import requests
class nurul:
def ganjilgenap(self):
with open('kelas_2c/nurul.csv') as files:
reader=csv.reader(files, delimiter=',')
for row in reader:
                if int(row[0])%2 == 1:
                    print(row[0],"is an odd number")
                else:
                    print(row[0],"is an even number (not an odd number)")
def urutan(self):
contacts = []
with open('kelas_2c/nurul.csv') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=",")
for row in csv_reader:
contacts.append(row)
labels = contacts.pop(0)
#print(labels)
#print(contacts)
print("-"*34)
print(f'{labels[0]}')
for data in contacts:
print("-"*34)
print(f'{data[0]}')
def tambah(self):
file = open('kelas_2c/nurul.csv', 'a', newline='\n')
barisbaru = [
['21'],
['22']
]
filecsv = csv.writer(file)
filecsv.writerows(barisbaru)
print("Writing Done!")
def metplot(self):
plt.plot([1, 2, 3, 4])
plt.ylabel('some numbers')
plt.show()
def acak(self):
x = [2,4,6,7,9,13,19,26,29,31,36,40,48,51,57,67,69,71,78,88]
y = [54,72,43,2,8,98,109,5,35,28,48,83,94,84,73,11,464,75,200,54]
plt.scatter(x,y)
plt.show()
def histogram(self):
x = [2,4,6,5,42,543,5,3,73,64,42,97,63,76,63,8,73,97,23,45,56,89,45,3,23,2,5,78,23,56,67,78,8,3,78,34,67,23,324,234,43,544,54,33,223,443,444,234,76,432,233,23,232,243,222,221,254,222,276,300,353,354,387,364,309]
num_bins = 6
n, bins, patches = plt.hist(x, num_bins, facecolor = 'pink')
plt.show()
def req(self):
# Search GitHub's repositories for requests
response = requests.get(
'https://api.github.com/search/repositories',
params={'q': 'requests+language:python'},
)
# Inspect some attributes of the `requests` repository
json_response = response.json()
repository = json_response['items'][0]
print(f'Repository name: {repository["name"]}') # Python 3.6+
print(f'Repository description: {repository["description"]}') # Python 3.6+
def req1(self):
response = requests.get(
'https://api.github.com/search/repositories',
params={'q': 'requests+language:python'},
headers={'Accept': 'application/vnd.github.v3.text-match+json'},
)
# View the new `text-matches` array which provides information
# about your search term within the results
json_response = response.json()
repository = json_response['items'][0]
print(f'Text matches: {repository["text_matches"]}')
def req2(self):
x = requests.get('https://w3schools.com/python/demopage.htm')
print(x.text)
def mt(self):
plt.plot([1, 2, 3, 4])
plt.ylabel('some numbers')
plt.plot([1, 2, 3, 4],'g--d')
plt.show()
def panggil(self):
self.ganjilgenap()
self.urutan()
self.tambah()
self.metplot()
self.acak()
self.histogram()
self.req()
self.req1()
self.req2()
self.mt()
| 29.135593 | 219 | 0.538685 | 457 | 3,438 | 4.021882 | 0.420131 | 0.016322 | 0.021763 | 0.026115 | 0.296518 | 0.226333 | 0.22198 | 0.194777 | 0.194777 | 0.194777 | 0 | 0.108878 | 0.30541 | 3,438 | 118 | 220 | 29.135593 | 0.660804 | 0.072717 | 0 | 0.238095 | 0 | 0 | 0.17856 | 0.045269 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130952 | false | 0 | 0.035714 | 0 | 0.178571 | 0.130952 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
75110cf2b69cd80e1422da4c443d622bdec91b65 | 1,227 | py | Python | yandex_algorithm2/home1b.py | erjan/coding_exercises | 53ba035be85f1e7a12b4d4dbf546863324740467 | [
"Apache-2.0"
] | null | null | null | yandex_algorithm2/home1b.py | erjan/coding_exercises | 53ba035be85f1e7a12b4d4dbf546863324740467 | [
"Apache-2.0"
] | null | null | null | yandex_algorithm2/home1b.py | erjan/coding_exercises | 53ba035be85f1e7a12b4d4dbf546863324740467 | [
"Apache-2.0"
] | null | null | null | '''
Витя работает недалеко от одной из станций кольцевой линии Московского метро, а живет рядом с другой станцией той же линии. Требуется выяснить, мимо какого наименьшего количества промежуточных станций необходимо проехать Вите по кольцу, чтобы добраться с работы домой.
Формат ввода
Станции пронумерованы подряд натуральными числами 1, 2, 3, …, N (1-я станция – соседняя с N-й), N не превосходит 100.
Вводятся три числа: сначала N – общее количество станций кольцевой линии, а затем i и j – номера станции, на которой Витя садится, и станции, на которой он должен выйти. Числа i и j не совпадают. Все числа разделены пробелом.
Формат вывода
Требуется выдать минимальное количество промежуточных станций (не считая станции посадки и высадки), которые необходимо проехать Вите.
'''
def main_function(n, i, j):
first_distance = abs(j - i - 1)
second_distance = (n - j + i - 1)
#print('1st distance: %d' % first_distance)
#print('2nd distance: %d' % second_distance)
res = min(first_distance, second_distance)
print(res)
return res
if __name__ == "__main__":
l = list(map(int, input().split()))
l = sorted(l)
i = l[0]
j = l[1]
n = l[2]
res = main_function(n, i, j)
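# Worked example (hypothetical input "10 3 7"): after sorting, i=3, j=7, n=10, so
# first_distance = 7-3-1 = 3 (riding one way) and second_distance = 10-7+3-1 = 5
# (riding the other way); the minimum, 3, is printed.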
| 35.057143 | 268 | 0.712306 | 187 | 1,227 | 4.620321 | 0.572193 | 0.045139 | 0.048611 | 0.032407 | 0.034722 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014257 | 0.199674 | 1,227 | 34 | 269 | 36.088235 | 0.85947 | 0.703341 | 0 | 0 | 0 | 0 | 0.022535 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.153846 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7515c9edf4c6cfe592909aca896e0ddefbb578de | 7,110 | py | Python | Udemy_Py_DataScience_ML/Sec15_LinearReg.py | gonzalosc2/LearningPython | 0210d4cbbb5e154f12007b8e8f825fd3d0022be0 | [
"MIT"
] | null | null | null | Udemy_Py_DataScience_ML/Sec15_LinearReg.py | gonzalosc2/LearningPython | 0210d4cbbb5e154f12007b8e8f825fd3d0022be0 | [
"MIT"
] | null | null | null | Udemy_Py_DataScience_ML/Sec15_LinearReg.py | gonzalosc2/LearningPython | 0210d4cbbb5e154f12007b8e8f825fd3d0022be0 | [
"MIT"
] | null | null | null | ####################################
# author: Gonzalo Salazar
# course: Python for Data Science and Machine Learning Bootcamp
# purpose: lecture notes
# description: Section 15 - Linear Regression
# other: N/A
####################################
#%%
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
#%matplotlib inline
os.chdir('/Users/gsalazar/Documents/C_Codes/Learning-Python/Udemy_Py_DataScience_ML/lr_data')
#%%
df = pd.read_csv('USA_Housing.csv')
# %%
# Brief descriptive statistics from the data
df.head()
df.info()
df.describe()
# %%
# Describing the data we'll work with
sns.pairplot(df)
sns.displot(df['Price'],kde = True)
sns.heatmap(df.corr(),cmap = 'Greens')
plt.show()
# %%
# Establishing the data we'll work with as well as defining training and testing samples
X = df[df.columns[:5]]
y = df['Price']
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.4, random_state=101)
## TRAINING/ESTIMATION
# %%
# Instantiating a linear regression model and training it
lm = LinearRegression()
lm.fit(X_train,y_train)
# %%
# Estimated coefficients
cdf = pd.DataFrame(lm.coef_,X.columns, columns = ['Coeff'])
cdf
# Interpretation (example)
# holding everything else constant, a one-unit increase in average area income
# is associated with a 21.52 dollar increase in the house price
#%%
## PREDICTION
predic = lm.predict(X_test) # predicted housing prices
# Checking how well is performing the model
plt.scatter(y_test,predic)
# Plotting residuals
sns.displot(y_test-predic, kde = True)
# Notice the residuals are normally distributed -> the selected model was a reasonable choice for the data.
# %%
# Measuring performance
m1 = metrics.mean_absolute_error(y_test,predic)
m2 = metrics.mean_squared_error(y_test,predic)
m3 = np.sqrt(metrics.mean_squared_error(y_test,predic))
error_measurement = pd.DataFrame([m1,m2,m3],['MAE','MSE','RMSE'],columns = ['Value'])
error_measurement
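# For reference (standard definitions, not part of the course notes):
# MAE = mean(|y - y_hat|), MSE = mean((y - y_hat)**2), RMSE = sqrt(MSE).
# RMSE is expressed in the same units as the target, so it is usually the
# easiest of the three to interpret.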
del y, X, X_test, X_train, y_train, y_test, m1, m2, m3, error_measurement, lm, predic
####################################################################################
# PROJECT EXERCISE - Ecommerce Customers
# %%
# Reading in the Ecommerce Customers csv file as a DataFrame called customers
df_ecomm = pd.read_csv('Ecommerce Customers')
# Checking the head of customers, and checking out its info() and describe() methods
df_ecomm.head()
df_ecomm.info()
df_ecomm.describe()
# %%
## Exploratory Data Analysis
# Comparing the Time on Website and Yearly Amount Spent.
sns.jointplot(y = 'Time on Website', x = 'Yearly Amount Spent', data = df_ecomm)
# Comparing the Time on APP and Yearly Amount Spent.
sns.jointplot(y = 'Time on App', x = 'Yearly Amount Spent', data = df_ecomm)
# Comment: given that the first correlation is null, it may be that people who visit
# the website find it interesting and then continue through the App (if they did not
# already start with the App in the first place). Perhaps people prefer the App
# because it is more convenient: they do not have to turn on a PC each time they
# want to buy something, and they can use the App wherever they are. Besides, since
# they have their smartphones with them all the time, they can spend more time
# shopping and selecting what they really want to buy. This also eases the purchase
# itself, since they only need to press a button to buy (assuming they can register
# their card on their devices).
#
# The previous phenomenon might explain why the yearly amount spent is
# positively associated with spending more time on the App. The correlation between
# the two is not perfect, though (around 0.5).
# %%
# Comparing Time on App and Length of Membership
sns.jointplot(y = 'Time on App', x = 'Length of Membership', data = df_ecomm, kind = 'hex')
# Comment: similarly, time spent on the App is positively correlated with holding an
# older membership. This might be explained by the fact that older members found the
# App easier to use than the Website, given the experience they have gained over the
# years. Perhaps those who remain on the website are people who are reluctant to
# switch, or who do not understand it very well. Here, measures of technology
# adoption and age would be great to test that.
# %%
# Exploring types of relationships across the entire data set
sns.pairplot(df_ecomm)
# Comment: those who have spent more yearly are those who hold older memberships (the
# strongest correlation). This might be a loyalty effect, in the sense that people
# who have spent more years buying here are the ones who prefer to buy all their
# clothing with us.
# %%
# Create a linear model plot (using seaborn's lmplot) of Yearly Amount Spent vs. Length of
# Membership.
sns.lmplot(y = 'Yearly Amount Spent', x = 'Length of Membership', data = df_ecomm)
# Comment: this plot reinforces the interpretation given above.
# %%
## Training the model
y = df_ecomm['Yearly Amount Spent']
X = df_ecomm[df_ecomm.columns[3:7]]
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.3,random_state=101)
# Instantiating a linear regression model
#lm = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
lm = LinearRegression()
lm.fit(X_train,y_train)
# %%
## Predicting with the model
predic = lm.predict(X_test)
# Checking performance
sns.scatterplot(predic,y_test)
# %%
# Evaluating the model by calculating the Mean Absolute Error, Mean Squared
# Error, and the Root Mean Squared Error
m1 = metrics.mean_absolute_error(y_test,predic)
m2 = metrics.mean_squared_error(y_test,predic)
m3 = np.sqrt(metrics.mean_squared_error(y_test,predic))
error_measurement = pd.DataFrame([m1,m2,m3],['MAE','MSE','RMSE'],columns = ['Value'])
error_measurement
# %%
# Plotting a histogram of the residuals and make sure it looks normally distributed
sns.distplot(y_test-predic, bins = 50)
# %%
## Interpreting
coef = pd.DataFrame(lm.coef_,X.columns,columns = ['Coeff'])
coef
# Comment: more focus should be put on membership time than on increasing
# efforts on the mobile app or on website development. More people
# should join and remain members of the business. Notice that a one-year
# increase in membership time is associated with a 61.27 increase in
# yearly amount spent (compared to 0.19 and 38.59 associated with
# time on Website and time on App, respectively). That said, the company
# should also focus more on its mobile app than on its website.
# NOTICE: IN THIS PROJECT WE HAVEN'T USED HYPOTHESIS TESTING TO EVALUATE
# THE SIGNIFICANCE OF EACH COEFFICIENT! | 38.852459 | 97 | 0.715752 | 1,063 | 7,110 | 4.709313 | 0.347131 | 0.014982 | 0.019776 | 0.019177 | 0.195565 | 0.173592 | 0.166001 | 0.137435 | 0.10867 | 0.093088 | 0 | 0.00859 | 0.181294 | 7,110 | 183 | 98 | 38.852459 | 0.8514 | 0.621378 | 0 | 0.275862 | 0 | 0 | 0.134181 | 0.033238 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.155172 | 0 | 0.155172 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
75160826b98614b47f394cd26909f4c19c70ebbb | 1,994 | py | Python | src/lib_example/hypot.py | atpage/cuda_intro | 01dcebdadb961ada4f3532b847f259ac4ea4e615 | [
"MIT"
] | null | null | null | src/lib_example/hypot.py | atpage/cuda_intro | 01dcebdadb961ada4f3532b847f259ac4ea4e615 | [
"MIT"
] | 2 | 2016-02-09T17:39:02.000Z | 2016-05-09T14:44:26.000Z | src/lib_example/hypot.py | atpage/cuda_intro | 01dcebdadb961ada4f3532b847f259ac4ea4e615 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import numpy as np
import argparse
from ctypes import *
import sys
################################ Load library: ################################
lib_name = 'libhypot.so'
try:
# try to use the one the OS finds (e.g. in /usr/local/lib)
libhypot = CDLL(lib_name)
except OSError:
# library probably wasn't installed; look in local dir instead:
libhypot = CDLL('./' + lib_name)
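# The calls below assume the shared library exports a function with the C
# signature `int gpuHypot(float *a, float *b, float *c, int n);` (inferred
# from the usage in this script, not verified against a header).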
############################## Parse input args: ##############################
parser = argparse.ArgumentParser(description='Compute hypotenuses for right triangles, ' +\
'given lists of the other two sides. '+\
'Values will be treated as floats. ' +\
'Inputs should be ASCII files.')
parser.add_argument("A", metavar='A.txt', help="List of side 1 values")
parser.add_argument("B", metavar='B.txt', help="List of side 2 values")
parser.add_argument("C", metavar='C.txt', help="Output, list of hypotenuse values")
args = parser.parse_args()
############################# Load input file: ################################
A = np.loadtxt(args.A, dtype='float32')
A_p = A.ctypes.data_as( POINTER(c_float) )
B = np.loadtxt(args.B, dtype='float32')
B_p = B.ctypes.data_as( POINTER(c_float) )
assert len(A) == len(B)
########################### Prepare output array: #############################
C = np.zeros( len(A) ).astype('float32')
C_p = C.ctypes.data_as( POINTER(c_float) )
################################# Get result: #################################
retval = libhypot.gpuHypot( A_p, B_p, C_p, len(A) )
if retval:
print("hypot() failed!")
# Results are already stored in C.
################################ Save to disk: ################################
np.savetxt(args.C, C)
#################################### Done. ####################################
sys.exit(retval)
###############################################################################
| 32.688525 | 91 | 0.474925 | 224 | 1,994 | 4.142857 | 0.491071 | 0.022629 | 0.054957 | 0.061422 | 0.117457 | 0.080819 | 0 | 0 | 0 | 0 | 0 | 0.004808 | 0.165496 | 1,994 | 60 | 92 | 33.233333 | 0.552885 | 0.140923 | 0 | 0 | 0 | 0 | 0.23946 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 1 | 0 | false | 0 | 0.137931 | 0 | 0.137931 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
751835fbbe2d4d0b3a1f8607d3b130e6bcbf8669 | 560 | py | Python | tests/conftest.py | Frederik-Baetens/pytest-inmanta | 5cebff7b2bb9ad9005a3d68a25df87ee1fc0512c | [
"Apache-2.0"
] | null | null | null | tests/conftest.py | Frederik-Baetens/pytest-inmanta | 5cebff7b2bb9ad9005a3d68a25df87ee1fc0512c | [
"Apache-2.0"
] | null | null | null | tests/conftest.py | Frederik-Baetens/pytest-inmanta | 5cebff7b2bb9ad9005a3d68a25df87ee1fc0512c | [
"Apache-2.0"
] | null | null | null | import pytest
import pytest_inmanta.plugin
import os
import sys
import pkg_resources
pytest_plugins = ["pytester"]
@pytest.fixture(autouse=True)
def set_cwd(testdir):
pytest_inmanta.plugin.CURDIR = os.getcwd()
@pytest.fixture(scope="function", autouse=True)
def deactive_venv():
old_os_path = os.environ.get("PATH", "")
old_prefix = sys.prefix
old_path = sys.path
yield
os.environ["PATH"] = old_os_path
sys.prefix = old_prefix
sys.path = old_path
pkg_resources.working_set = pkg_resources.WorkingSet._build_master()
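# A minimal sketch of a test that would run under these autouse fixtures.
# The `project` fixture comes from pytest-inmanta; the model snippet is
# hypothetical:
#
#     def test_compile(project):
#         project.compile("import unittest")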
| 20.740741 | 72 | 0.728571 | 78 | 560 | 4.987179 | 0.423077 | 0.092545 | 0.097686 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160714 | 560 | 26 | 73 | 21.538462 | 0.82766 | 0 | 0 | 0 | 0 | 0 | 0.042857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.263158 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
75195768b9bfe7c08dd3ea0eaa74f62fb37dc728 | 1,546 | py | Python | tests/test_vaccination.py | covid-19-impact-lab/sid | d867f55d4d005b01c672bd2edd0e1dc974cb182b | [
"MIT"
] | 18 | 2020-04-18T09:18:52.000Z | 2021-10-19T02:42:39.000Z | tests/test_vaccination.py | covid-19-impact-lab/sid | d867f55d4d005b01c672bd2edd0e1dc974cb182b | [
"MIT"
] | 143 | 2020-04-18T16:58:20.000Z | 2022-03-07T22:16:03.000Z | tests/test_vaccination.py | covid-19-impact-lab/sid | d867f55d4d005b01c672bd2edd0e1dc974cb182b | [
"MIT"
] | 1 | 2021-01-07T07:38:53.000Z | 2021-01-07T07:38:53.000Z | import itertools
from contextlib import ExitStack as does_not_raise # noqa: N813
import pandas as pd
import pytest
from sid.vaccination import vaccinate_individuals
@pytest.mark.integration
@pytest.mark.parametrize(
"vaccination_models, expectation, expected",
[
({}, does_not_raise(), pd.Series([False] * 15)),
(
{
"vaccine": {
"model": lambda receives_vaccine, states, params, seed: pd.Series(
index=states.index, data=True
),
"start": pd.Timestamp("2020-03-01"),
"end": pd.Timestamp("2020-03-04"),
}
},
does_not_raise(),
pd.Series([True] * 15),
),
(
{
"vaccine": {
"model": lambda receives_vaccine, states, params, seed: None,
"start": pd.Timestamp("2020-03-01"),
"end": pd.Timestamp("2020-03-04"),
}
},
pytest.raises(ValueError, match="The model 'vaccine' of 'vaccination_mode"),
None,
),
],
)
def test_vaccinate_individuals(
vaccination_models, initial_states, params, expectation, expected
):
with expectation:
result = vaccinate_individuals(
pd.Timestamp("2020-03-03"),
vaccination_models,
initial_states,
params,
itertools.count(),
)
assert result.equals(expected)
| 29.169811 | 88 | 0.513583 | 139 | 1,546 | 5.582734 | 0.42446 | 0.070876 | 0.096649 | 0.109536 | 0.39433 | 0.25 | 0.25 | 0.25 | 0.25 | 0.118557 | 0 | 0.048554 | 0.373868 | 1,546 | 52 | 89 | 29.730769 | 0.753099 | 0.006468 | 0 | 0.1875 | 0 | 0 | 0.111473 | 0 | 0 | 0 | 0 | 0 | 0.020833 | 1 | 0.020833 | false | 0 | 0.104167 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
751abaf1ab249f31eecee3f3846c229b7dc366a5 | 12,216 | py | Python | main.py | dumpmemory/W2NER | fb1b6eb1111eb001b1c965097d995244b840bdda | [
"MIT"
] | 128 | 2021-12-21T04:20:17.000Z | 2022-03-31T03:05:53.000Z | main.py | dumpmemory/W2NER | fb1b6eb1111eb001b1c965097d995244b840bdda | [
"MIT"
] | 15 | 2022-01-07T02:39:58.000Z | 2022-03-30T14:12:30.000Z | main.py | dumpmemory/W2NER | fb1b6eb1111eb001b1c965097d995244b840bdda | [
"MIT"
] | 24 | 2021-12-21T05:06:08.000Z | 2022-03-31T13:42:13.000Z | import argparse
import json
import numpy as np
import prettytable as pt
import torch
import torch.autograd
import torch.nn as nn
import transformers
from sklearn.metrics import precision_recall_fscore_support, f1_score
from torch.utils.data import DataLoader
import config
import data_loader
import utils
from model import Model
class Trainer(object):
def __init__(self, model):
self.model = model
self.criterion = nn.CrossEntropyLoss()
bert_params = set(self.model.bert.parameters())
other_params = list(set(self.model.parameters()) - bert_params)
no_decay = ['bias', 'LayerNorm.weight']
params = [
{'params': [p for n, p in model.bert.named_parameters() if not any(nd in n for nd in no_decay)],
'lr': config.bert_learning_rate,
'weight_decay': config.weight_decay},
{'params': [p for n, p in model.bert.named_parameters() if any(nd in n for nd in no_decay)],
'lr': config.bert_learning_rate,
'weight_decay': 0.0},
{'params': other_params,
'lr': config.learning_rate,
'weight_decay': config.weight_decay},
]
self.optimizer = transformers.AdamW(params, lr=config.learning_rate, weight_decay=config.weight_decay)
self.scheduler = transformers.get_linear_schedule_with_warmup(self.optimizer,
num_warmup_steps=config.warm_factor * updates_total,
num_training_steps=updates_total)
def train(self, epoch, data_loader):
self.model.train()
loss_list = []
pred_result = []
label_result = []
for i, data_batch in enumerate(data_loader):
data_batch = [data.cuda() for data in data_batch[:-1]]
bert_inputs, grid_labels, grid_mask2d, pieces2word, dist_inputs, sent_length = data_batch
            outputs = self.model(bert_inputs, grid_mask2d, dist_inputs, pieces2word, sent_length)  # use self.model, not the module-level global
grid_mask2d = grid_mask2d.clone()
loss = self.criterion(outputs[grid_mask2d], grid_labels[grid_mask2d])
loss.backward()
torch.nn.utils.clip_grad_norm_(self.model.parameters(), config.clip_grad_norm)
self.optimizer.step()
self.optimizer.zero_grad()
loss_list.append(loss.cpu().item())
outputs = torch.argmax(outputs, -1)
grid_labels = grid_labels[grid_mask2d].contiguous().view(-1)
outputs = outputs[grid_mask2d].contiguous().view(-1)
label_result.append(grid_labels.cpu())
pred_result.append(outputs.cpu())
self.scheduler.step()
label_result = torch.cat(label_result)
pred_result = torch.cat(pred_result)
p, r, f1, _ = precision_recall_fscore_support(label_result.numpy(),
pred_result.numpy(),
average="macro")
table = pt.PrettyTable(["Train {}".format(epoch), "Loss", "F1", "Precision", "Recall"])
table.add_row(["Label", "{:.4f}".format(np.mean(loss_list))] +
["{:3.4f}".format(x) for x in [f1, p, r]])
logger.info("\n{}".format(table))
return f1
def eval(self, epoch, data_loader, is_test=False):
self.model.eval()
pred_result = []
label_result = []
total_ent_r = 0
total_ent_p = 0
total_ent_c = 0
with torch.no_grad():
for i, data_batch in enumerate(data_loader):
entity_text = data_batch[-1]
data_batch = [data.cuda() for data in data_batch[:-1]]
bert_inputs, grid_labels, grid_mask2d, pieces2word, dist_inputs, sent_length = data_batch
                outputs = self.model(bert_inputs, grid_mask2d, dist_inputs, pieces2word, sent_length)  # use self.model, not the module-level global
length = sent_length
grid_mask2d = grid_mask2d.clone()
outputs = torch.argmax(outputs, -1)
ent_c, ent_p, ent_r, _ = utils.decode(outputs.cpu().numpy(), entity_text, length.cpu().numpy())
total_ent_r += ent_r
total_ent_p += ent_p
total_ent_c += ent_c
grid_labels = grid_labels[grid_mask2d].contiguous().view(-1)
outputs = outputs[grid_mask2d].contiguous().view(-1)
label_result.append(grid_labels.cpu())
pred_result.append(outputs.cpu())
label_result = torch.cat(label_result)
pred_result = torch.cat(pred_result)
p, r, f1, _ = precision_recall_fscore_support(label_result.numpy(),
pred_result.numpy(),
average="macro")
e_f1, e_p, e_r = utils.cal_f1(total_ent_c, total_ent_p, total_ent_r)
title = "EVAL" if not is_test else "TEST"
logger.info('{} Label F1 {}'.format(title, f1_score(label_result.numpy(),
pred_result.numpy(),
average=None)))
table = pt.PrettyTable(["{} {}".format(title, epoch), 'F1', "Precision", "Recall"])
table.add_row(["Label"] + ["{:3.4f}".format(x) for x in [f1, p, r]])
table.add_row(["Entity"] + ["{:3.4f}".format(x) for x in [e_f1, e_p, e_r]])
logger.info("\n{}".format(table))
return e_f1
def predict(self, epoch, data_loader, data):
self.model.eval()
pred_result = []
label_result = []
result = []
total_ent_r = 0
total_ent_p = 0
total_ent_c = 0
i = 0
with torch.no_grad():
for data_batch in data_loader:
sentence_batch = data[i:i+config.batch_size]
entity_text = data_batch[-1]
data_batch = [data.cuda() for data in data_batch[:-1]]
bert_inputs, grid_labels, grid_mask2d, pieces2word, dist_inputs, sent_length = data_batch
                outputs = self.model(bert_inputs, grid_mask2d, dist_inputs, pieces2word, sent_length)  # use self.model, not the module-level global
length = sent_length
grid_mask2d = grid_mask2d.clone()
outputs = torch.argmax(outputs, -1)
ent_c, ent_p, ent_r, decode_entities = utils.decode(outputs.cpu().numpy(), entity_text, length.cpu().numpy())
for ent_list, sentence in zip(decode_entities, sentence_batch):
sentence = sentence["sentence"]
instance = {"sentence": sentence, "entity": []}
for ent in ent_list:
instance["entity"].append({"text": [sentence[x] for x in ent[0]],
"type": config.vocab.id_to_label(ent[1])})
result.append(instance)
total_ent_r += ent_r
total_ent_p += ent_p
total_ent_c += ent_c
grid_labels = grid_labels[grid_mask2d].contiguous().view(-1)
outputs = outputs[grid_mask2d].contiguous().view(-1)
label_result.append(grid_labels.cpu())
pred_result.append(outputs.cpu())
i += config.batch_size
label_result = torch.cat(label_result)
pred_result = torch.cat(pred_result)
p, r, f1, _ = precision_recall_fscore_support(label_result.numpy(),
pred_result.numpy(),
average="macro")
e_f1, e_p, e_r = utils.cal_f1(total_ent_c, total_ent_p, total_ent_r)
title = "TEST"
logger.info('{} Label F1 {}'.format("TEST", f1_score(label_result.numpy(),
pred_result.numpy(),
average=None)))
table = pt.PrettyTable(["{} {}".format(title, epoch), 'F1', "Precision", "Recall"])
table.add_row(["Label"] + ["{:3.4f}".format(x) for x in [f1, p, r]])
table.add_row(["Entity"] + ["{:3.4f}".format(x) for x in [e_f1, e_p, e_r]])
logger.info("\n{}".format(table))
with open(config.predict_path, "w", encoding="utf-8") as f:
json.dump(result, f, ensure_ascii=False)
return e_f1
def save(self, path):
torch.save(self.model.state_dict(), path)
def load(self, path):
self.model.load_state_dict(torch.load(path))
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--config', type=str, default='./config/conll03.json')
parser.add_argument('--save_path', type=str, default='./model.pt')
parser.add_argument('--predict_path', type=str, default='./output.json')
parser.add_argument('--device', type=int, default=0)
parser.add_argument('--dist_emb_size', type=int)
parser.add_argument('--type_emb_size', type=int)
parser.add_argument('--lstm_hid_size', type=int)
parser.add_argument('--conv_hid_size', type=int)
parser.add_argument('--bert_hid_size', type=int)
parser.add_argument('--ffnn_hid_size', type=int)
parser.add_argument('--biaffine_size', type=int)
parser.add_argument('--dilation', type=str, help="e.g. 1,2,3")
parser.add_argument('--emb_dropout', type=float)
parser.add_argument('--conv_dropout', type=float)
parser.add_argument('--out_dropout', type=float)
parser.add_argument('--epochs', type=int)
parser.add_argument('--batch_size', type=int)
parser.add_argument('--clip_grad_norm', type=float)
parser.add_argument('--learning_rate', type=float)
parser.add_argument('--weight_decay', type=float)
parser.add_argument('--bert_name', type=str)
parser.add_argument('--bert_learning_rate', type=float)
parser.add_argument('--warm_factor', type=float)
parser.add_argument('--use_bert_last_4_layers', type=int, help="1: true, 0: false")
parser.add_argument('--seed', type=int)
args = parser.parse_args()
config = config.Config(args)
logger = utils.get_logger(config.dataset)
logger.info(config)
config.logger = logger
if torch.cuda.is_available():
torch.cuda.set_device(args.device)
# random.seed(config.seed)
# np.random.seed(config.seed)
# torch.manual_seed(config.seed)
# torch.cuda.manual_seed(config.seed)
# torch.backends.cudnn.benchmark = False
# torch.backends.cudnn.deterministic = True
logger.info("Loading Data")
datasets, ori_data = data_loader.load_data_bert(config)
train_loader, dev_loader, test_loader = (
DataLoader(dataset=dataset,
batch_size=config.batch_size,
collate_fn=data_loader.collate_fn,
shuffle=i == 0,
num_workers=4,
drop_last=i == 0)
for i, dataset in enumerate(datasets)
)
updates_total = len(datasets[0]) // config.batch_size * config.epochs
logger.info("Building Model")
model = Model(config)
model = model.cuda()
trainer = Trainer(model)
best_f1 = 0
best_test_f1 = 0
for i in range(config.epochs):
logger.info("Epoch: {}".format(i))
trainer.train(i, train_loader)
f1 = trainer.eval(i, dev_loader)
test_f1 = trainer.eval(i, test_loader, is_test=True)
if f1 > best_f1:
best_f1 = f1
best_test_f1 = test_f1
trainer.save(config.save_path)
logger.info("Best DEV F1: {:3.4f}".format(best_f1))
logger.info("Best TEST F1: {:3.4f}".format(best_test_f1))
trainer.load(config.save_path)
trainer.predict("Final", test_loader, ori_data[-1])
| 39.406452 | 126 | 0.564014 | 1,454 | 12,216 | 4.483494 | 0.147868 | 0.034515 | 0.065194 | 0.022089 | 0.544562 | 0.512655 | 0.47538 | 0.408958 | 0.398527 | 0.398527 | 0 | 0.014328 | 0.314424 | 12,216 | 309 | 127 | 39.533981 | 0.76406 | 0.016372 | 0 | 0.399123 | 0 | 0 | 0.070855 | 0.003846 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026316 | false | 0 | 0.061404 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7528a707d7cd810fb6f1250d9781c45b722c4ed4 | 3,965 | py | Python | tools/pylib/uvmap.py | maxymilianz/demoscene | 7d912e77f160a3ad695f567b381a78215fd8be5d | [
"Artistic-2.0"
] | null | null | null | tools/pylib/uvmap.py | maxymilianz/demoscene | 7d912e77f160a3ad695f567b381a78215fd8be5d | [
"Artistic-2.0"
] | null | null | null | tools/pylib/uvmap.py | maxymilianz/demoscene | 7d912e77f160a3ad695f567b381a78215fd8be5d | [
"Artistic-2.0"
] | null | null | null | from __future__ import print_function
from math import atan2, cos, sin, pi, sqrt, tan
from utils import dist, lerp, frpart
from array import array
from PIL import Image
def FancyEye(x, y):
a = atan2(x, y)
r = dist(x, y, 0.0, 0.0)
if r == 0:
return (0, 0)
u = 0.04 * y + 0.06 * cos(a * 3.0) / r
v = 0.04 * x + 0.06 * sin(a * 3.0) / r
return (u, v)
def Anamorphosis(x, y):
a = atan2(x, y)
r = dist(x, y, 0.0, 0.0)
if r == 0:
return (0, 0)
u = cos(a) / (3.0 * r)
v = sin(a) / (3.0 * r)
return (u, v)
def HotMagma(x, y):
a = atan2(x, y)
r = dist(x, y, 0.0, 0.0)
if r == 0:
return (0, 0)
u = 0.5 * a / pi
v = sin(2.0 * r)
return (u, v)
def Ball(x, y):
r = dist(x, y, 0.0, 0.0)
r2 = r * r
try:
v = x * (1.33 - sqrt(1.0 - r2)) / (r2 + 1.0)
u = y * (1.33 - sqrt(1.0 - r2)) / (r2 + 1.0)
return (u, v)
except ValueError:
pass
def Butterfly(x, y):
x = abs(x)
y = abs(y)
p = 0.1
q = 0.2
a = atan2(x, y)
r = dist(x, y, 0.0, 0.0)
if r == 0.0 or y < p * x or x < q * y:
return None
u = .5 * cos(3 * a) + r * tan(a) * 0.1
v = .5 * sin(3 * a) + r / (1 if tan(a) == 0 else tan(a))
return (u, v)
class UVMap(object):
def __init__(self, width, height, texsize=128):
self.umap = array('f', [0.0 for i in range(width * height)])
self.vmap = array('f', [0.0 for i in range(width * height)])
self.mask = array('B', [0 for i in range(width * height)])
self.width = width
self.height = height
self.texsize = texsize
def put(self, x, y, value):
i = x + y * self.width
if value:
self.umap[i] = value[0]
self.vmap[i] = value[1]
self.mask[i] = 1
else:
self.mask[i] = 0
def get(self, x, y):
i = x + y * self.width
return (self.umap[i], self.vmap[i])
def generate(self, fn, view):
for j in range(self.height):
for i in range(self.width):
x = lerp(view[0], view[1], float(i) / self.width)
y = lerp(view[2], view[3], float(j) / self.height)
self.put(i, j, fn(x, y))
def _load_map(self, name):
im = Image.open(name)
if im.size[0] != self.width or im.size[1] != self.height:
raise RuntimeError('Image size does not match uvmap size!')
return im.convert('RGB')
def load_uv(self, name):
im_u = self._load_map(name + '-u.png')
im_v = self._load_map(name + '-v.png')
for i, (uc, vc) in enumerate(zip(im_u.getdata(), im_v.getdata())):
u, v = uc[0], vc[0]
if uc == (u, u, u) and vc == (v, v, v):
self.mask[i] = 1
self.umap[i] = float(u) / 256.0
self.vmap[i] = float(v) / 256.0
def save(self, name, fn=None, scale=256):
size = self.texsize
data = array('H')
for i in range(self.width * self.height):
if self.mask[i]:
u = int(frpart(self.umap[i]) * scale) % size
v = int(frpart(self.vmap[i]) * scale) % size
data.append(u * size + v)
else:
data.append(0xffff)
if fn:
data = fn(data)
print('u_short %s[%d] = {' % (name, self.width * self.height))
for i in range(0, self.width * self.height, self.width):
row = ['0x%04x' % val for val in data[i:i + self.width]]
print(' %s,' % ', '.join(row))
print('};')
def save_uv(self, name):
im = Image.new('L', (self.width, self.height))
im.putdata([frpart(u) * 256 for u in self.umap])
im.save(name + '-u.png', 'PNG')
im = Image.new('L', (self.width, self.height))
im.putdata([frpart(v) * 256 for v in self.vmap])
im.save(name + '-v.png', 'PNG')
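# A minimal usage sketch (view bounds and output names are arbitrary):
#
#     uv = UVMap(160, 100, texsize=128)
#     uv.generate(FancyEye, view=(-1.6, 1.6, -1.0, 1.0))
#     uv.save('uvmap_fancy_eye')   # prints a C array of packed u/v indices
#     uv.save_uv('fancy_eye')      # writes fancy_eye-u.png and fancy_eye-v.png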
| 26.433333 | 74 | 0.473644 | 656 | 3,965 | 2.829268 | 0.179878 | 0.022629 | 0.016164 | 0.03556 | 0.277478 | 0.264547 | 0.210668 | 0.210668 | 0.196121 | 0.16056 | 0 | 0.052262 | 0.353342 | 3,965 | 149 | 75 | 26.610738 | 0.671607 | 0 | 0 | 0.247788 | 0 | 0 | 0.027491 | 0 | 0 | 0 | 0.001513 | 0 | 0 | 1 | 0.115044 | false | 0.00885 | 0.044248 | 0 | 0.265487 | 0.035398 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
752cb11b33632675f903629216e71f86426bc4b6 | 1,687 | py | Python | stabilizer/llrd.py | VigneshBaskar/stabilizer | 970a55a6da4e57596cbc953830160057138719e8 | [
"Apache-2.0"
] | 22 | 2021-09-17T09:51:07.000Z | 2022-03-24T04:19:26.000Z | stabilizer/llrd.py | VigneshBaskar/stabilizer | 970a55a6da4e57596cbc953830160057138719e8 | [
"Apache-2.0"
] | 4 | 2021-09-18T07:57:27.000Z | 2021-09-27T19:54:54.000Z | stabilizer/llrd.py | VigneshBaskar/stabilizer | 970a55a6da4e57596cbc953830160057138719e8 | [
"Apache-2.0"
] | 5 | 2021-09-17T12:21:12.000Z | 2022-03-28T04:57:58.000Z | def get_optimizer_parameters_with_llrd(model, peak_lr, multiplicative_factor):
num_encoder_layers = len(model.transformer.encoder.layer)
# Task specific layer gets the peak_lr
tsl_parameters = [
{
"params": [param for name, param in model.task_specific_layer.named_parameters()],
"param_names": [name for name, param in model.task_specific_layer.named_parameters()],
"lr": peak_lr,
"name": "tsl_param_group",
}
]
    # Working backwards from the last encoder layer, each encoder layer's lr is
    # the lr of the (deeper) layer processed before it times multiplicative_factor:
    #     current_layer_lr = prev_layer_lr * multiplicative_factor
    # The last encoder layer gets lr = peak_lr * multiplicative_factor.
encoder_parameters = [
{
"params": [param for name, param in layer.named_parameters()],
"param_names": [name for name, param in layer.named_parameters()],
"lr": peak_lr * (multiplicative_factor ** (num_encoder_layers - layer_num)),
"name": f"transformer.encoder.layer.{layer_num}",
}
for layer_num, layer in enumerate(model.transformer.encoder.layer)
]
# Embedding layer gets embedding layer lr = first encoder layer lr * multiplicative_factor
embedding_parameters = [
{
"params": [param for name, param in model.transformer.embeddings.named_parameters()],
"param_names": [name for name, param in model.transformer.embeddings.named_parameters()],
"lr": peak_lr * (multiplicative_factor ** (num_encoder_layers + 1)),
"name": "embeddings_param_group",
}
]
return tsl_parameters + encoder_parameters + embedding_parameters
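# A minimal usage sketch with a hypothetical `model` object that has the
# attribute layout this helper expects (model.transformer.encoder.layer,
# model.transformer.embeddings, model.task_specific_layer):
#
#     from torch.optim import AdamW
#     groups = get_optimizer_parameters_with_llrd(model, peak_lr=2e-5,
#                                                 multiplicative_factor=0.9)
#     optimizer = AdamW(groups)  # extra keys like 'name' are kept in the group dicts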
| 46.861111 | 101 | 0.662122 | 196 | 1,687 | 5.433673 | 0.234694 | 0.033803 | 0.123944 | 0.078873 | 0.517371 | 0.49108 | 0.483568 | 0.406573 | 0.367136 | 0.309859 | 0 | 0.000782 | 0.241849 | 1,687 | 35 | 102 | 48.2 | 0.8319 | 0.189093 | 0 | 0 | 0 | 0 | 0.104993 | 0.043319 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
752d121311dd12e755c6822eb5eb0468f439d38f | 4,812 | py | Python | python_parser/parser/glasMeasure.py | marcelscode/glasnost | e54ce9ece91433df8ac73229d01e06c012a7b8d8 | [
"BSD-3-Clause"
] | 58 | 2015-04-25T10:47:27.000Z | 2022-03-31T15:37:58.000Z | python_parser/parser/glasMeasure.py | marcelscode/glasnost | e54ce9ece91433df8ac73229d01e06c012a7b8d8 | [
"BSD-3-Clause"
] | 1 | 2017-03-29T11:33:33.000Z | 2018-01-02T20:19:28.000Z | python_parser/parser/glasMeasure.py | marcelscode/glasnost | e54ce9ece91433df8ac73229d01e06c012a7b8d8 | [
"BSD-3-Clause"
] | 18 | 2016-02-11T14:06:58.000Z | 2022-03-15T11:13:39.000Z | # Glasnost Parser v2.
# Developed 2011/2012 by Hadi Asghari (http://deeppacket.info)
#
# Statistics about test streams
class GlasMeasurement:
"""" Class to hold statistics about one test stream """
def __init__(self, direction, port_typ, flow_typ, tcp_port):
assert direction in ['u','d']
assert flow_typ in ['af', 'cf'] # app-flow vs control-flow
assert port_typ in ['ap', 'np'] # app-port vs neutral-port
self.proto = flow_typ
self.port = port_typ
self.tcp_port = int(tcp_port)
self.dir = direction
##########################################################
@staticmethod
def oldstyle1_factory(ts, old_prot, old_di, old_port):
direction = "u" if old_di=="downstream" else "d" if old_di=="upstream" else None
# NOTE: U/D REVERSED. the log files and the server files explain it differently.
# with this flip here, the problem in speed() is solved
port_type = 'ap' if 6889 >=old_port>= 6881 else 'np' # approximation - works mostly but not 27-01-2009 - 02-02-2009!!
flow_type = 'af' if old_prot.lower()=='bt' else 'cf' if old_prot.lower()=='tcp' else None
gs = GlasMeasurement(direction, port_type, flow_type,tcp_port=old_port)
gs.oldstyle = True
gs.ts_start = ts
gs.ts_end = None
gs.srv_b = None
gs.srv_t = None
gs.cli_b = None
gs.cli_t = None
gs.rst_sent = 0
gs.srv_rst = 0
gs.cli_rst = 0
return gs
def oldstyle1_server_transfer(self, ts, srv_b, srv_t, sbps, fail=False):
self.ts_end = ts
self.srv_b = int(srv_b)
self.srv_t = srv_t
def oldstyle1_transfer_abort(self, ts):
self.ts_end = ts
self.srv_t = 0.0 # we used to set flags, now we simply set time to 0, like MPI
def oldstyle1_client_speed(self, cli_b, cli_t):
self.cli_b = cli_b
self.cli_t = cli_t
def oldstyle1_server_reset_seen(self, srv_rst, rst_sent):
self.srv_rst = srv_rst
self.rst_sent = rst_sent
def oldstyle1_client_reset_seen(self):
self.cli_rst = 1
##########################################################
def client_data_1(self,bps,length_ms,resets):
self.cli_t = float(length_ms) /1000.0
self.cli_b = int(bps)*self.cli_t/8 if str(bps)!='reset' else 0
self.cli_rst = int(resets)
return self
def server_data_1(self,bps,length,resets,resets_sent):
self.srv_t = float(length) if float(length)>=0 else 0.0 # GLASNOST-ERROR: negative times errors in summary-fields
self.srv_b = int(bps)*self.srv_t/8
self.srv_rst = int(resets)
self.rst_sent = int(resets_sent)
return self
def client_data_2(self,bytes,length,resets):
self.cli_b = int(bytes) if self.dir=='d' else None
self.cli_b_dbg = int(bytes) if self.cli_b is None else None
self.cli_t = float(length)
self.cli_rst = int(resets)
return self
def server_data_2(self,bytes,length,resets,resets_sent):
self.srv_b = int(bytes) if self.dir=='u' else None
self.srv_b_dbg = int(bytes) if self.srv_b is None else None
self.srv_t = float(length)
self.srv_rst = int(resets)
self.rst_sent = int(resets_sent)
return self
##########################################################
def __repr__(self):
rst = '*' if self.rst_sent or self.cli_rst or self.srv_rst else ''
speed = '%7.1f kbps' % self.speed() if self.speed() is not None else '---'
return "%s/%s/%s\t%s\t(srv: %s in %s\tcli: %s in %s)\t%srst: %s,%s,%s\n" % \
(self.dir, self.port, self.proto, speed, self.srv_b, self.srv_t, self.cli_b, self.cli_t, rst, self.rst_sent, self.srv_rst, self.cli_rst)
def speed(self):
if self.dir == 'd':
# assertion: we do get <0 in v1-logs - an error. this checks that it doesn't cause negative speeds
assert not self.cli_t or self.cli_t >0, "negative client speed"
return None if not self.cli_t else self.cli_b*0.008/self.cli_t
if self.dir == 'u':
assert not self.srv_t or self.srv_t>0, "negative server speed"
return None if not self.srv_t else self.srv_b*0.008/self.srv_t
#
def is_broken(self):
return self.duration()==0 or self.speed()==0
def duration(self):
dur = self.cli_t if self.dir=='d' else self.srv_t
assert dur>=0
return dur
def flow(self):
return (self.dir, self.port, self.proto)
#
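# A minimal usage sketch (all values are hypothetical):
#
#     m = GlasMeasurement('u', 'ap', 'af', tcp_port=6881)
#     m.server_data_1(bps=500000, length=10.0, resets=0, resets_sent=0)
#     m.client_data_1(bps=480000, length_ms=10000, resets=0)
#     print(m.speed(), m.duration(), m.is_broken())   # -> 500.0 kbps, 10.0, False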
| 39.442623 | 152 | 0.568786 | 724 | 4,812 | 3.595304 | 0.223757 | 0.064541 | 0.033807 | 0.021514 | 0.286208 | 0.217441 | 0.075298 | 0.075298 | 0.075298 | 0.075298 | 0 | 0.022867 | 0.291147 | 4,812 | 121 | 153 | 39.768595 | 0.740252 | 0.128845 | 0 | 0.141176 | 0 | 0.011765 | 0.044376 | 0 | 0 | 0 | 0 | 0 | 0.070588 | 1 | 0.188235 | false | 0 | 0 | 0.023529 | 0.329412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
752daa07c16402e3055f39a32f880e876f78b385 | 2,563 | py | Python | GPy_ABCD/Kernels/linearOffsetKernel.py | juanluislm/GPy-ABCD | 63aa3a8a83148e0aaf8691ac3f69bced6fbaf600 | [
"BSD-3-Clause"
] | null | null | null | GPy_ABCD/Kernels/linearOffsetKernel.py | juanluislm/GPy-ABCD | 63aa3a8a83148e0aaf8691ac3f69bced6fbaf600 | [
"BSD-3-Clause"
] | null | null | null | GPy_ABCD/Kernels/linearOffsetKernel.py | juanluislm/GPy-ABCD | 63aa3a8a83148e0aaf8691ac3f69bced6fbaf600 | [
"BSD-3-Clause"
] | 1 | 2021-01-21T12:52:37.000Z | 2021-01-21T12:52:37.000Z | import numpy as np
from GPy.kern.src.kern import Kern
from GPy.core.parameterization import Param
from paramz.transformations import Logexp
from paramz.caching import Cache_this
class LinearWithOffset(Kern):
"""
Linear kernel with horizontal offset
.. math::
k(x,y) = \sigma^2 (x - o)(y - o)
:param input_dim: the number of input dimensions
:type input_dim: int
:param variances: the variance :math:`\sigma^2`
:type variances: array or list of the appropriate size (or float if there
is only one variance parameter)
    :param offset: the horizontal offset :math:`o`.
:type offset: array or list of the appropriate size (or float if there is only one offset parameter)
:param active_dims: indices of dimensions which are used in the computation of the kernel
:type active_dims: array or list of the appropriate size
:param name: Name of the kernel for output
    :type name: str
:rtype: Kernel object
"""
def __init__(self, input_dim: int, variance: float = 1., offset: float = 0., active_dims: int = None, name: str = 'linear_with_offset') -> None:
super(LinearWithOffset, self).__init__(input_dim, active_dims, name)
if variance is not None:
variance = np.asarray(variance)
assert variance.size == 1
else:
variance = np.ones(1)
self.variance = Param('variance', variance, Logexp())
self.offset = Param('offset', offset)
self.link_parameters(self.variance, self.offset)
def to_dict(self):
input_dict = super(LinearWithOffset, self)._save_to_input_dict()
input_dict["class"] = "LinearWithOffset"
input_dict["variance"] = self.variance.values.tolist()
input_dict["offset"] = self.offset
return input_dict
@staticmethod
def _build_from_input_dict(kernel_class, input_dict):
useGPU = input_dict.pop('useGPU', None)
return LinearWithOffset(**input_dict)
@Cache_this(limit=3)
def K(self, X, X2=None):
if X2 is None: X2 = X
return self.variance * (X - self.offset) * (X2 - self.offset).T
def Kdiag(self, X):
return np.sum(self.variance * np.square(X - self.offset), -1)
def update_gradients_full(self, dL_dK, X, X2=None):
if X2 is None: X2 = X
dK_dV = (X - self.offset) * (X2 - self.offset).T
        dK_do = self.variance * (2 * self.offset - X - X2.T)  # X2 transposed so dK_do matches the (N, M) shape of dL_dK
self.variance.gradient = np.sum(dL_dK * dK_dV)
self.offset.gradient = np.sum(dL_dK * dK_do)
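# A minimal usage sketch (assumes GPy is installed; the data are synthetic):
#
#     import numpy as np
#     import GPy
#     X = np.linspace(0, 5, 20)[:, None]
#     Y = 2.0 * (X - 1.5) + 0.05 * np.random.randn(20, 1)
#     kern = LinearWithOffset(input_dim=1, variance=1.0, offset=0.0)
#     m = GPy.models.GPRegression(X, Y, kern)
#     m.optimize()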
| 34.173333 | 148 | 0.65002 | 358 | 2,563 | 4.519553 | 0.298883 | 0.061805 | 0.020396 | 0.024104 | 0.163782 | 0.163782 | 0.140297 | 0.091471 | 0.091471 | 0.066749 | 0 | 0.009307 | 0.245416 | 2,563 | 74 | 149 | 34.635135 | 0.827301 | 0.273508 | 0 | 0.052632 | 0 | 0 | 0.040782 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 1 | 0.157895 | false | 0 | 0.131579 | 0.026316 | 0.421053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
752e21ffea89f00b97756d8ff46ca51435646c62 | 12,829 | py | Python | OCT_reader_demo.py | kai-neuhaus/OCT_file_tools | 1272b4d68822dc8ac9b0d7031c9b1b95b5b4a79a | [
"MIT"
] | null | null | null | OCT_reader_demo.py | kai-neuhaus/OCT_file_tools | 1272b4d68822dc8ac9b0d7031c9b1b95b5b4a79a | [
"MIT"
] | null | null | null | OCT_reader_demo.py | kai-neuhaus/OCT_file_tools | 1272b4d68822dc8ac9b0d7031c9b1b95b5b4a79a | [
"MIT"
] | null | null | null | # This file shows some example usage of Python functions to read an OCT file.
# To use exectute this test reader, scroll to the bottom and pass an OCT file to the function unzip_OCTFile.
# Find the comment #Example usage.
#
# Additional modules to install are 'xmltodict' and 'gdown' ('shutil' ships with the standard library).
# Tested in Python 3.7 and 3.8 (Mac, Colab)
#
# This file can be called like below assuming you have only Python 3 installed
# 'python OCT_reader_demo.py'
# Alternative you can call for specific versions 3 or 3.8
# 'python3 OCT_reader_demo.py'
# 'python3.8 OCT_reader_demo.py'
#
# The function unzip_OCTFile show an option to extract files with python.
#
# The Header.xml is converted to a dictionary named 'handle'.
# This allows to access data for different OCT files.
#
# The function get_OCTVideoImage demonstrates how to use handle to extract and show the video image.
#
# The function get_OCTIntensityImage demonstrates how to use handle to extract and show the intensity data.
import numpy as np
from scipy.fftpack import fft,ifft
from scipy.interpolate import interp1d
import matplotlib; matplotlib.use('Qt5Agg')
import matplotlib.pyplot as pp
import xmltodict
import os
import tempfile
import zipfile
import warnings
from warnings import warn
formatwarning_orig = warnings.formatwarning
warnings.formatwarning = lambda message, category, filename, lineno, line=None: \
formatwarning_orig(message, category, filename='', lineno='', line='')
def unzip_OCTFile(filename):
"""
Unzip the OCT file into a temp folder.
"""
tempdir = tempfile.gettempdir()
handle = dict()
handle['filename'] = filename
handle['path'] = os.path.join(tempdir, 'OCTData')
temp_oct_data_folder = os.path.join(handle['path'],os.path.basename(filename).split('.oct')[0])
handle['temp_oct_data_folder'] = temp_oct_data_folder
if os.path.exists(temp_oct_data_folder) and os.path.exists(os.path.join(temp_oct_data_folder, 'Header.xml')):
warn('Reuse data in {}\n'.format(temp_oct_data_folder))
else:
print('\nTry to extract {} into {}. Please wait.\n'.format(filename,temp_oct_data_folder))
if not os.path.exists(handle['path']):
os.mkdir(handle['path'])
if not os.path.exists(temp_oct_data_folder):
os.mkdir(temp_oct_data_folder)
with zipfile.ZipFile(file=handle['filename']) as zf:
zf.extractall(path=temp_oct_data_folder)
# read Header.xml
with open(os.path.join(temp_oct_data_folder, 'Header.xml'),'rb') as fid:
up_to_EOF = -1
xmldoc = fid.read(up_to_EOF)
# convert Header.xml to dictionary
handle_xml = xmltodict.parse(xmldoc)
handle.update(handle_xml)
return handle
def get_OCTDataFileProps(handle, data_name=None, prop=None):
"""
List some of the properties as in the Header.xml.
"""
metadata_all = handle['Ocity']['DataFiles']['DataFile']
metadata_name = np.take(metadata_all, np.flatnonzero([data_name in h['#text'] for h in metadata_all]))
props = [m[prop] for m in metadata_name]
return props
def get_OCTFileMetaData(handle, data_name):
"""
The metadata for files are store in a list.
The artifact 'data\\' stems from windows path separators and may need fixing.
"""
# Check if data_name is available
data_names_available = [d['#text'] for d in handle['Ocity']['DataFiles']['DataFile']]
data_name = data_name # check this on windows
assert data_name in data_names_available, 'Did not find {}.\nAvailable names are: {}'.format(data_name,data_names_available)
metadatas = handle['Ocity']['DataFiles']['DataFile'] # get list of all data files
# select the data file matching data_name
metadata = metadatas[np.argwhere([data_name in h['#text'] for h in handle['Ocity']['DataFiles']['DataFile']]).squeeze()]
return handle, metadata
def get_OCTVideoImage(handle):
"""
Examples how to extract VideoImage data
"""
handle, metadata = get_OCTFileMetaData(handle, data_name='data\\VideoImage.data')
# print(metadata)
data_filename = os.path.join(handle['temp_oct_data_folder'], metadata['#text'])
img_type = metadata['@Type']
dtype = handle['python_dtypes'][img_type][metadata['@BytesPerPixel']] # This is not consistent! unsigned and signed not distinguished!
sizeX = int(metadata['@SizeX'])
sizeZ = int(metadata['@SizeZ'])
data = np.fromfile(data_filename, dtype).reshape([sizeX,sizeZ])
data = abs(data)/abs(data).max()
return data
def get_OCTIntensityImage(handle):
"""
Example how to extract Intensity data
"""
handle, metadata = get_OCTFileMetaData(handle, data_name='data\\Intensity.data')
data_filename = os.path.join(handle['temp_oct_data_folder'], metadata['#text'])
img_type = metadata['@Type'] # this is @Real
dtype = handle['python_dtypes'][img_type][metadata['@BytesPerPixel']] # This is not consistent! unsigned and signed not distinguished!
sizeX = int(metadata['@SizeX'])
sizeZ = int(metadata['@SizeZ'])
data = (np.fromfile(data_filename, dtype=(dtype, [sizeX,sizeZ])))[0].T # there are two images. Take the first [0].
return data
def get_OCTSpectralRawFrame(handle, spec_name = 'data\\Spectral0.data'):
"""
Demo read raw spectral data.
Take note that we access all parameters using the dictionary from Header.xml.
    Although this still looks a bit messy, it should not require changes for different data.
"""
# if the metadata are all the same for each Spectral.data then this can be called separately once
handle, metadata = get_OCTFileMetaData(handle, data_name=spec_name)
# make False --> 'unsigned', True --> 'signed
sign = handle['Ocity']['Instrument']['RawDataIsSigned'].replace('False','unsigned').replace('True','signed')
apo_rng = range(int(metadata['@ApoRegionStart0']),int(metadata['@ApoRegionEnd0']))
scan_rng = range(int(metadata['@ScanRegionStart0']),int(metadata['@ScanRegionEnd0']))
bytesPP = metadata['@BytesPerPixel'] # probably 2
raw_type = metadata['@Type'] # Raw
data_filename = metadata['#text']
data_file = os.path.join(handle['temp_oct_data_folder'], data_filename)
dtype = handle['python_dtypes'][raw_type][sign][bytesPP]
sizeX = int(metadata['@SizeX'])
sizeZ = int(metadata['@SizeZ'])
# select one [0] of two data frames
raw_data = np.fromfile(data_file, dtype=(dtype, [sizeX,sizeZ]))[0]
apo_data = raw_data[apo_rng]
spec_data = raw_data[scan_rng]
# return also apodization data
return spec_data, apo_data
def get_OCTSpectralRawFrame2(handle, spec_name = 'data\\Spectral0.data'):
"""
    Demo read of raw spectral data for a different data file layout.
    In this case the first Spectral0.data is entirely apodization data,
    so the first file carries a different set of parameters.
"""
handle, metadata = get_OCTFileMetaData(handle, data_name=spec_name)
sign = handle['Ocity']['Instrument']['RawDataIsSigned'].replace('False','unsigned').replace('True','signed')
apo_rng = range(int(metadata['@ApoRegionStart0']),int(metadata['@ApoRegionEnd0']))
scan_rng = range(int(metadata['@ScanRegionStart0']),int(metadata['@ScanRegionEnd0']))
bytesPP = metadata['@BytesPerPixel'] # probably 2
raw_type = metadata['@Type'] # Raw
data_filename = metadata['#text']
data_file = os.path.join(handle['temp_oct_data_folder'], data_filename)
dtype = handle['python_dtypes'][raw_type][sign][bytesPP]
sizeX = int(metadata['@SizeX'])
sizeZ = int(metadata['@SizeZ'])
# select one [0] of two data frames
raw_data = np.fromfile(data_file, dtype=(dtype, [sizeX,sizeZ]))[0]
apo_data = raw_data[apo_rng]
spec_data = raw_data[scan_rng]
# return also apodization data
return spec_data, apo_data
def get_OCTSpectralImage(handle):
"""
Reconstruct the image from spectral data: remove DC; k-space-lin; ifft
"""
spec, apo_data = get_OCTSpectralRawFrame(handle, spec_name = 'data\\Spectral0.data')
    binECnt = float(handle['Ocity']['Instrument']['BinaryToElectronCountScaling'])  # np.float is removed in modern NumPy
handle, metadata = get_OCTFileMetaData(handle, data_name='data\\OffsetErrors.data')
err_offset_fname = os.path.join(handle['temp_oct_data_folder'], metadata['#text'])
err_offset = np.fromfile(err_offset_fname, dtype=handle['python_dtypes']['Real'][metadata['@BytesPerPixel']])
handle, metadata = get_OCTFileMetaData(handle, data_name='data\\ApodizationSpectrum.data')
apodization_fname = os.path.join(handle['temp_oct_data_folder'], metadata['#text'])
apodization_data = np.fromfile(apodization_fname, dtype=handle['python_dtypes']['Real'][metadata['@BytesPerPixel']])
# same length after ifft
handle, metadata = get_OCTFileMetaData(handle, data_name='data\\Chirp.data')
chirp_fname = os.path.join(handle['temp_oct_data_folder'], metadata['#text'])
chirp_data = np.fromfile(chirp_fname, dtype=handle['python_dtypes']['Real'][metadata['@BytesPerPixel']])
bframe = spec - np.mean(apo_data,axis=0) # Subtract DC using inline apo_data
ip_fun = interp1d(x=chirp_data, y=bframe) # create interpolation on chirp_data
num_samples = bframe.shape[1] # SizeZ
bframe = ip_fun(np.arange(num_samples)) # k-space linearize
return bframe
def demo_printing_parameters(handle):
"""
This functions demonstrates how to access the xml paratemeters from the dictionary.
The parameters are read in the unzip_OCTFile function.
    See this code snippet to read the Header.xml data:
with open(os.path.join(temp_oct_data_folder, 'Header.xml'),'rb') as fid:
up_to_EOF = -1
xmldoc = fid.read(up_to_EOF)
handle_xml = xmltodict.parse(xmldoc)
handle.update(handle_xml)
"""
# example to list properties
print('properties:')
print(handle.keys()) # list all keys in handle
print(handle['Ocity'].keys()) # list all keys in Ocity. This is from Header.xml
print(handle['Ocity']['Acquisition'].keys()) # list all keys in Acquisition
print(handle['Ocity']['MetaInfo']['Comment']) # get comment value from MetaInfo
print(handle['Ocity']['Acquisition']['RefractiveIndex'])
# print(handle['Ocity']['Acquisition']['SpeckleAveraging'].keys())
# fastaxis = handle['Ocity']['Acquisition']['SpeckleAveraging']['FastAxis']
# print('Speckle Averaging FastAxis: ', fastaxis)
print(handle['Ocity']['Image'].keys())
# example list all data files
print('\n\ndata file names:')
[print(h['#text']) for h in handle['Ocity']['DataFiles']['DataFile']]
print('\nProperties:')
print(get_OCTDataFileProps(handle, data_name='VideoImage', prop='@Type')) # print type of video image
print(get_OCTDataFileProps(handle, data_name='Intensity', prop='@Type'))
print(get_OCTDataFileProps(handle, data_name='Spectral', prop='@Type'))
print(get_OCTDataFileProps(handle, data_name='Spectral', prop='#text'))
# Example usage
# Ask if some test data should be retrieved.
if not os.path.exists('test.oct'):
print('File \'test.oct\' does not exist.')
print('Do you want to download it (50 MB)?')
if 'y' in input('y/n'):
import gdown
gdown.download(url='https://drive.google.com/uc?id=18xtWgvMdHw3OslDyyXZ6yMKDywhj_zdR',output='./test.oct')
handle = unzip_OCTFile('test.oct')
# handle = unzip_OCTFile('/Users/kai/Documents/Acer_mirror/sdb5/Sergey Alexandrov/srSESF_OCT_data/data/RS_12032019_0008_Mode3D_1280_NSDT.oct')
# Create a python_types dictionary for required data types
# I.e. the Thorlabs concept can mean a "Raw - signed - 2 bytes" --> np.int16
python_dtypes = {'Colored': {'4': np.int32, '2': np.int16},
'Real': {'4': np.float32},
'Raw': {'signed': {'1': np.int8, '2': np.int16},
'unsigned': {'1': np.uint8, '2': np.uint16}}}
print('dtype raw_signed_2 =',python_dtypes['Raw']['signed']['2']) # example
handle.update({'python_dtypes': python_dtypes})
# print some parameters from the xml file
demo_printing_parameters(handle)
# get and plot VideoImage
data = get_OCTVideoImage(handle)
fig,ax = pp.subplots(1,num='VideoImage')
ax.set_title(fig.canvas.get_window_title())
im = ax.imshow(data,cmap='Greys',vmin=0.0,vmax=0.4)
pp.colorbar(mappable=im)
# get and plot IntensityImage
data = get_OCTIntensityImage(handle)
fig,ax = pp.subplots(1,num='Intensity')
ax.set_title(fig.canvas.get_window_title())
im = ax.imshow(data,cmap='Greys_r',vmin=30,vmax=50)
pp.colorbar(mappable=im)
# get and processed spectral data, and plot the image
data = get_OCTSpectralImage(handle)
fig, ax = pp.subplots(1,num='Spectral')
im = ax.imshow(np.log10(abs(ifft(data)))[:,0:1024].T,vmin=-1.3,vmax=-0.5, cmap='Greys_r',aspect=2,interpolation='antialiased')
ax.set_title(fig.canvas.get_window_title())
pp.colorbar(mappable=im)
pp.show()
| 44.391003 | 140 | 0.706057 | 1,769 | 12,829 | 4.982476 | 0.219333 | 0.019061 | 0.023712 | 0.036646 | 0.447243 | 0.402201 | 0.387679 | 0.372589 | 0.321761 | 0.272975 | 0 | 0.010271 | 0.157612 | 12,829 | 288 | 141 | 44.545139 | 0.805311 | 0.280926 | 0 | 0.296296 | 0 | 0.006173 | 0.198729 | 0.023963 | 0 | 0 | 0 | 0 | 0.006173 | 1 | 0.055556 | false | 0 | 0.074074 | 0 | 0.179012 | 0.123457 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
752e72698ce6c406e4123fed3e680fc56ab5c8db | 7,281 | py | Python | varsome_api/vcf.py | definitelysean/varsome-api-client-python | 43ebeb65baf94e745a2f0e8ec326ed09f681bf24 | [
"Apache-2.0"
] | 23 | 2018-01-12T20:09:19.000Z | 2022-02-26T13:39:36.000Z | varsome_api/vcf.py | definitelysean/varsome-api-client-python | 43ebeb65baf94e745a2f0e8ec326ed09f681bf24 | [
"Apache-2.0"
] | 3 | 2018-01-15T11:10:40.000Z | 2019-05-20T07:37:20.000Z | varsome_api/vcf.py | definitelysean/varsome-api-client-python | 43ebeb65baf94e745a2f0e8ec326ed09f681bf24 | [
"Apache-2.0"
] | 11 | 2018-01-12T11:07:56.000Z | 2021-09-29T18:02:27.000Z | # Copyright 2018 Saphetor S.A.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import time
import vcf
from collections import OrderedDict
from vcf.parser import _Info
from varsome_api.client import VarSomeAPIClient
from varsome_api.models.variant import AnnotatedVariant
__author__ = "ckopanos"
class VCFAnnotator(VarSomeAPIClient):
"""
VCFAnnotator will take an input vcf file parse it and produce an annotated vcf file
"""
def __init__(self, api_key=None,
max_variants_per_batch=1000, ref_genome='hg19', get_parameters=None, max_threads=None):
super().__init__(api_key, max_variants_per_batch)
self.ref_genome = ref_genome
self.get_parameters = get_parameters
        self.total_variants = 0
self.filtered_out_variants = 0
self.variants_with_errors = 0
self.max_threads = max_threads or 1
if self.max_variants_per_batch > 3000 and self.max_threads > 1:
self.logger.warning("Having more than 1 thread with more than 3000 variants per batch may not be optimal")
def _process_request(self, input_batch):
start = time.time()
api_results = self.batch_lookup(list(input_batch.keys()), params=self.get_parameters,
ref_genome=self.ref_genome, max_threads=self.max_threads)
duration = time.time() - start
self.logger.info('Annotated %s variants in %s' % (len(input_batch), duration))
self.logger.info('Writing to output vcf file')
for i, requested_variant in enumerate(input_batch.keys()):
try:
results = api_results[i]
record = input_batch[requested_variant]
if results:
if 'filtered_out' in results:
self.logger.info(results['filtered_out'])
self.filtered_out_variants += 1
continue
if 'error' in results:
self.logger.error(results['error'])
self.variants_with_errors += 1
continue
if 'variant_id' in results:
variant_result = AnnotatedVariant(**results)
record = self.annotate_record(record, variant_result)
self.vcf_writer.write_record(record)
else:
self.logger.error(results)
self.variants_with_errors += 1
            except Exception as e:
                # log the exception, count it, and move on to the next variant
                self.logger.error(e)
                self.variants_with_errors += 1
def annotate_record(self, record, variant_result):
"""
Method to annotate a record. You should override this with your own implementation
to include variant result properties you want in your output vcf
:param record: vcf record object
:param variant_result: AnnotatedVariant object
:return: annotated record object
"""
record.INFO['variant_id'] = variant_result.variant_id
record.INFO['gene'] = ",".join(variant_result.genes)
record.INFO['gnomad_exomes_AF'] = variant_result.gnomad_exomes_af
record.INFO['gnomad_genomes_AF'] = variant_result.gnomad_genomes_af
record.ALT = variant_result.alt
record.POS = variant_result.pos
record.ID = ";".join(variant_result.rs_ids) or "."
return record
def add_vcf_header_info(self, vcf_template):
"""
Adds vcf INFO headers for the annotated values provided
This is just a base method you need to override in your own implementation
depending on the annotations added through the annotate_record method
:param vcf_template: vcf reader object
:return:
"""
vcf_template.infos['variant_id'] = _Info('variant_id', 1, 'Integer', 'Saphetor variant identifier', None, None)
vcf_template.infos['gene'] = _Info('gene', '.', 'String', 'Genes related to this variant', None, None)
vcf_template.infos['gnomad_exomes_AF'] = _Info('gnomad_exomes_AF', '.', 'Float',
'GnomAD exomes allele frequency value', None, None)
vcf_template.infos['gnomad_genomes_AF'] = _Info('gnomad_genomes_AF', '.', 'Float',
'GnomAD genomes allele frequency value', None, None)
def annotate(self, input_vcf_file, output_vcf_file=None, template=None, **kwargs):
"""
:param input_vcf_file: The input vcf file to be annotated
:param output_vcf_file: The file to write annotations back if none input_vcf_file.annotated.vcf will be
generated instead
:param template: An alternate vcf file to use for vcf file headers. If none the input vcf file will
be used
:return:
"""
annotations_start = time.time()
if not os.path.isfile(input_vcf_file):
raise FileNotFoundError('%s does not exist' % input_vcf_file)
if output_vcf_file is None:
output_vcf_file = "%s.annotated.vcf" % input_vcf_file
vcf_reader = vcf.Reader(filename=input_vcf_file, strict_whitespace=kwargs.get('strict_whitespace', True))
vcf_template = vcf_reader if template is None else vcf.Reader(filename=template, strict_whitespace=kwargs.get(
'strict_whitespace', True))
self.add_vcf_header_info(vcf_template)
self.vcf_writer = vcf.Writer(open(output_vcf_file, 'w'), vcf_template)
input_batch = OrderedDict()
# this will keep the request queue large enough so that parallel requests will not stop executing
batch_limit = self.max_variants_per_batch * self.max_threads * 2
for record in vcf_reader:
for alt_seq in record.ALT:
requested_variant = "%s:%s:%s:%s" % (record.CHROM, record.POS, record.REF or "", alt_seq or "")
input_batch[requested_variant] = record
                self.total_variants += 1
if len(input_batch) < batch_limit:
continue
self._process_request(input_batch)
# reset input batch
input_batch = OrderedDict()
# we may have some variants remaining if input batch is less than batch size
if len(input_batch) > 0:
self._process_request(input_batch)
self.vcf_writer.close()
self.logger.info("Annotating %s variants in %s. "
"Filtered out %s. "
"Errors %s" % (self.total_varialts, time.time() - annotations_start,
self.filtered_out_variants, self.variants_with_errors))
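# A minimal subclass sketch following the override pattern described in the
# annotate_record docstring (which INFO fields to keep is up to the subclass;
# the one shown here is just an example):
#
#     class MinimalVCFAnnotator(VCFAnnotator):
#         def add_vcf_header_info(self, vcf_template):
#             vcf_template.infos['variant_id'] = _Info(
#                 'variant_id', 1, 'Integer', 'Saphetor variant identifier',
#                 None, None)
#
#         def annotate_record(self, record, variant_result):
#             record.INFO['variant_id'] = variant_result.variant_id
#             return record
#
#     MinimalVCFAnnotator(api_key='...').annotate('input.vcf')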
| 49.195946 | 119 | 0.631232 | 897 | 7,281 | 4.915273 | 0.265329 | 0.030166 | 0.027217 | 0.024949 | 0.09526 | 0.034021 | 0.020413 | 0 | 0 | 0 | 0 | 0.006952 | 0.288834 | 7,281 | 147 | 120 | 49.530612 | 0.844535 | 0.226617 | 0 | 0.104167 | 0 | 0 | 0.11537 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052083 | false | 0.010417 | 0.072917 | 0 | 0.145833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
752ee5b4eeb5208d629fa9b4144505411d1f467e | 77,806 | py | Python | bin/ADFRsuite/CCSBpckgs/DejaVu2/ColormapGui.py | AngelRuizMoreno/Jupyter_Dock_devel | 6d23bc174d5294d1e9909a0a1f9da0713042339e | [
"MIT"
] | null | null | null | bin/ADFRsuite/CCSBpckgs/DejaVu2/ColormapGui.py | AngelRuizMoreno/Jupyter_Dock_devel | 6d23bc174d5294d1e9909a0a1f9da0713042339e | [
"MIT"
] | null | null | null | bin/ADFRsuite/CCSBpckgs/DejaVu2/ColormapGui.py | AngelRuizMoreno/Jupyter_Dock_devel | 6d23bc174d5294d1e9909a0a1f9da0713042339e | [
"MIT"
] | 1 | 2021-11-04T21:48:14.000Z | 2021-11-04T21:48:14.000Z | ################################################################################
##
## This library is free software; you can redistribute it and/or
## modify it under the terms of the GNU Lesser General Public
## License as published by the Free Software Foundation; either
## version 2.1 of the License, or (at your option) any later version.
##
## This library is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## Lesser General Public License for more details.
##
## You should have received a copy of the GNU Lesser General Public
## License along with this library; if not, write to the Free Software
## Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
##
## (C) Copyrights Dr. Michel F. Sanner and TSRI 2016
##
################################################################################
#############################################################################
#
# Author: Michel F. SANNER
#
# Copyright: M. Sanner TSRI 2000
#
# Revision: Guillaume Vareille
#
#############################################################################
#
# $Header: /mnt/raid/services/cvs/DejaVu2/ColormapGui.py,v 1.1.1.1.4.1 2017/07/13 22:28:32 annao Exp $
#
# $Id: ColormapGui.py,v 1.1.1.1.4.1 2017/07/13 22:28:32 annao Exp $
#
import types, numpy, os, sys
import Tkinter, Pmw
import tkFileDialog
from string import strip, split, find
from opengltk.OpenGL import GL
from opengltk.extent import _gllib as gllib
from mglutil.util.misc import deepCopySeq
from mglutil.gui.BasicWidgets.Tk.thumbwheel import ThumbWheel
from mglutil.util.colorUtil import HSL2RGB, RGB2HSL, \
RGBA2HSLA_list, HSLA2RGBA_list, ToHSV, ToHEX
from mglutil.gui import widgetsOnBackWindowsCanGrabFocus
from DejaVu2 import viewerConst
from DejaVu2.colorMap import ColorMap
from DejaVu2.MaterialEditor import OGLWidget
from DejaVu2.colorMapLegend import ColorMapLegend
from DejaVu2.Geom import Geom
from DejaVu2.colorTool import RGBARamp, RedWhiteBlueRamp, RedWhiteRamp, \
WhiteBlueRamp, TkColor
from DejaVu2.Legend import drawLegendOnly
class ComboBoxRename(Pmw.ComboBox):
def setExternalList(self, externalList, reverse=False, numOfBlockedLabels=0):
self.externalList = externalList
self.reverse = reverse
self.numOfBlockedLabels = numOfBlockedLabels
def _addHistory(self):
#print "_addHistory"
lCurSelection = self.curselection()
if len(lCurSelection) > 0:
lCurSelection0 = int(lCurSelection[0])
if self.reverse is True:
lBoolTest = (len(self.externalList)-1-lCurSelection0) >= self.numOfBlockedLabels
else:
lBoolTest = lCurSelection0 >= self.numOfBlockedLabels
if lBoolTest:
input = self._entryWidget.get()
if (input != '') and (input not in self._list.get(0, 'end') ):
self.delete(lCurSelection0)
self.insert(lCurSelection0, input)
if self.reverse is True:
self.externalList[len(self.externalList)-1-lCurSelection0] = input
else:
self.externalList[lCurSelection0] = input
self.selectitem(lCurSelection0)
else:
self.selectitem(0)
class DummyEvent:
"""dummy Tkinter event """
def __init__(self, x=0, y=0):
self.x = x
self.y = y
COLORMAPGUI_MAXIMAL_HEIGHT = 1024
class ColorMapGUI(Tkinter.Frame, ColorMap):
"""The ColorMapGUI is an object providing an interface to expose
and to modify a ColorMap object.
The ColorMapGUI has a local copy of the colormap.ramp, colormap.mini and
colormap.maxi which will be modified through the GUI.
    The associated colormap object is only updated by apply_cb, which is
    called on every mouseUp event (or call to update) when the gui is in
    continuous mode, or when the Apply button is pressed when the gui is
    not in continuous mode.
The GUI allows the user to edit each of 4 different color
properties: 'Hue','Sat' or saturation, 'Val' or value
and 'Opa' or opacity for each entry in the ramp of the
ColorMap. This is implemented via 4 Canvases, one for
each color property. (Drawing on a canvas is done by
pressing the left mouse button over the canvas and
holding it down while moving the mouse.)
Drawing at a point 'x','y', on one of the canvases changes
the value of the corresponding color property in this way:
'y' specifies the entry in the ColorMap.ramp to be changed.
'x' is first normalized to give a value between 0 and 1.0.
(by dividing it by the effective canvas width).
A new rgba value is built from the hsva list comprised of
3 previous values and this new normalized 'x' in the
appropriate position. (that is, if the user is drawing on
the 'Hue' canvas, the normalized 'x' value replaces the 'h',
or zeroth entry,etc).
If the GUI has a viewer, an OpenGLColorMapWidget is constructed
which displays the GUI's current rgbMap. Also, the user can choose
to add a ColorMapLegend to the viewer which displays the current
ColorMap.ramp and which can be labeled.
"""
staticRamp = RGBARamp(size=128, upperValue=.85).tolist()
def __init__(self, cmap=None, master=None, continuous=0, allowRename=True, modifyMinMax=False,
viewer=None, xoffset=25, width=200, height=256,
geoms=None, name=None, ramp=None, labels=None, mini=None, maxi=None,
filename=None, show=True, numOfBlockedLabels=0, **kw):
#this is the active colormap
if cmap is None:
ColorMap.__init__(self, name=name, ramp=ramp, labels=labels,
filename=filename, mini=mini, maxi=maxi)
else:
ColorMap.__init__(self, name=cmap.name, ramp=cmap.ramp, labels=cmap.labels,
mini=cmap.mini, maxi=cmap.maxi)
self.history = []
self.legend = None
if geoms is None:
geoms = {}
self.geoms = geoms
self.viewer = None
self.currentOnStack = False
self.numOfBlockedLabels = numOfBlockedLabels
# initialize a bunch of flags.
self.cmapCurrent = True
self.cmapCompToReset = None
self.cmapResetAll = False
self.nbStepBack = 0
# Initialize the local guiRamp, guiMini and guiMaxi values
# which will be modified
self.guiRamp = []
self.guiMini = None
self.guiMaxi = None
if self.labels is not None:
self.guiLabels = deepCopySeq(self.labels)
self.colorbin = []
self.labelbin = []
self.lengthRamp = len(self.ramp)
if height < self.lengthRamp:
height = self.lengthRamp
self.height = height
self.linesPerRampValue = self.height / float(self.lengthRamp)
self.currentHue = []
self.currentSat = []
# Create parts of the GUI
if master is None:
if viewer:
theMaster = Tkinter.Toplevel(viewer.master)
else:
theMaster = Tkinter.Toplevel()
else:
theMaster = master
if hasattr(theMaster, 'protocol'):
theMaster.protocol('WM_DELETE_WINDOW', self.dismiss)
# we hide it if it was asked
if show is False:
master2 = theMaster
while hasattr(master2, 'withdraw') is False and hasattr(master2, 'master') is True:
master2 = master2.master
master2.withdraw()
Tkinter.Frame.__init__(self, theMaster)
Tkinter.Pack.config(self, expand=1, fill='both')
if hasattr(theMaster, 'title'):
theMaster.title(self.name)
# Canvas width (default=200)
self.width = width
self.canvasStart = 4
# left border width (default=25)
self.xoffset = xoffset
# Available drawing width default=width-xoffset
self.xrange = float(width - xoffset)
# DejaVu2 viewer to which colorMapLegend is added
self.viewer = None
self.continuousUpdate = Tkinter.IntVar()
self.continuousUpdate.set(continuous)
#if allowRename is False, name entry is disabled.
self.allowRename = allowRename
self.modifyMinMax = modifyMinMax
self.idList = ['Hue','Sat','Val','Opa']
self.getColorFunc = { 'Hue':self.hueColor,
'Sat':self.satColor,
'Val':self.valColor,
'Opa':self.opaColor
}
#initialize dictionaries
#NB all the lines end at xpoints corresponding to the
#the colorproperty value at that point... (except
#'Hue' is backwards, width-property+xoffset instead of
#property+xoffset)
self.rightXVals = {}
self.lines = {}
for idStr in self.idList:
self.rightXVals[idStr] = []
self.lines[idStr] = []
self.current = 'Hue' # name of the currently visible prop. canvas
self.callbacks = [] # list of functions to be called when apply
# button is pressed or ramp is modified and
# mode is continuous update
# Calls the createWidgets method to create the 4 canvases
# and the buttons.
self.createWidgets(theMaster)
self.configureGui(ramp=self.ramp, geoms=self.geoms,
legend=self.legend, labels=self.labels,
mini=self.mini, maxi=self.maxi, **kw)
# Call the update to initialize the gui with the given cmap values,
# without configuring the cmap
self.update( mini=self.mini, maxi=self.maxi,
ramp=deepCopySeq(self.ramp), cfgCmap=False)
self.cmapCurrent = True
#set current colorFunc, lines and values to 'Hue'
self.getColor = self.getColorFunc['Hue']
self.currentLines = self.lines['Hue']
self.currentValues = self.rightXVals['Hue']
Tkinter.Widget.bind(self, "<Enter>", self.enter_cb)
Tkinter.Widget.bind(self, "<Leave>", self.leave_cb)
#set-up canvas mouse bindings:
for idStr in ['Hue', 'Sat', 'Val', 'Opa']:
Tkinter.Widget.bind(self.canvas[idStr], "<ButtonPress-1>",
self.mouseDown)
Tkinter.Widget.bind(self.canvas[idStr], "<Button1-Motion>",
self.mouseMotion)
Tkinter.Widget.bind(self.canvas[idStr], "<ButtonRelease-1>",
self.mouseUp)
Tkinter.Widget.bind(self.canvas[idStr], "<Motion>",
self.updateCurTk)
Tkinter.Widget.bind(self.canvas[idStr], "<ButtonPress-3>",
self.mouseDownRight)
Tkinter.Widget.bind(self.canvas[idStr], "<Button3-Motion>",
self.mouseMotionRight)
Tkinter.Widget.bind(self.canvas[idStr], "<ButtonRelease-3>",
self.mouseUpRight)
self.canvas[idStr].configure(cursor='cross')
self.straightLine = None
# create the cml
if self.legend is None:
self.createCML()
self.SetViewer(viewer)
# # fields created by the CMLwidget
# self.legend.createOwnGui()
# self.legend.hideOwnGui()
if master is not None:
master2 = theMaster
if hasattr(master2, 'master') is True and isinstance(master2.master,Tkinter.Widget):
master2 = master2.master
if hasattr(master2, 'master') is True and isinstance(master2.master,Tkinter.Widget):
master2 = master2.master
#Tkinter.Widget.bind(master2, '<Configure>', self.configure_cb)
self.inVision = True
else:
Tkinter.Widget.bind(self.fullWidgetFrame, '<Configure>', self.configure_cb)
self.inVision = False
def configure_cb(self, event=None):
#print "configure_cb", event, dir (event)
#print "configure_cb event.height", event.height
# self.width = event.width
# self.xrange = float(self.width - self.xoffset)
if self.inVision is True:
height = event.height - 2*self.canvasStart \
- self.menuFrame1.winfo_reqheight() \
- self.buttonFrame.winfo_reqheight() \
- 73
elif os.name == 'nt': #sys.platform == 'win32':
height = event.height - 2*self.canvasStart \
- self.menuFrame1.winfo_reqheight() \
- self.buttonFrame.winfo_reqheight() \
- self.frame2.winfo_reqheight() \
- 2
else:
height = event.height - 2*self.canvasStart \
- self.menuFrame1.winfo_reqheight() \
- self.buttonFrame.winfo_reqheight() \
- self.frame2.winfo_reqheight()
#print "configure_cb height", height
if height >= self.lengthRamp and height > 0:
self.height = height
#print "self.height 1", self.height
self.linesPerRampValue = self.height / float(self.lengthRamp)
self.resizeCanvases(self.height)
self.drawRampCanvases(drawRamp=True)
def changeRampLength_cb(self, numOfRampValue):
#print "changeRampLength_cb", numOfRampValue, len(self.guiRamp), len(self.ramp)
#import traceback;traceback.print_stack()
lenguiramp = len(self.guiRamp)
if numOfRampValue < self.numOfBlockedLabels:
self.numOfRampValues.set(lenguiramp)
return
elif numOfRampValue == lenguiramp:
return
elif numOfRampValue < lenguiramp:
for i in range(numOfRampValue, lenguiramp):
self.colorbin.append(self.guiRamp.pop())
else: # numOfRampValue > lenguiramp:
lNumOfColorsToAdd = numOfRampValue - lenguiramp
lNumOfColorsToCreate = lNumOfColorsToAdd - len(self.colorbin)
if lNumOfColorsToCreate > 0:
for i in range(len(self.colorbin)):
self.guiRamp.append(self.colorbin.pop())
for i in range(lNumOfColorsToCreate):
lIndex = len(self.guiRamp) % len(self.staticRamp)
self.guiRamp.append( self.staticRamp[ lIndex ] )
else:
for i in range(lNumOfColorsToAdd):
self.guiRamp.append(self.colorbin.pop())
self.configureGui(
ramp=self.guiRamp,
mini=self.mini,
maxi=self.maxi,
updateGui=True,
theGivenRampIsTheGuiRamp=True,
#guiRamp=self.guiRamp
)
self.cmapCurrent=False
def adjustGuiLabels(self, newNumOfGuiLabels):
#print "adjustGuiLabels", newNumOfGuiLabels
lenguiLabels = len(self.guiLabels)
if newNumOfGuiLabels == lenguiLabels:
return
elif newNumOfGuiLabels < lenguiLabels:
for i in range(newNumOfGuiLabels, lenguiLabels):
self.labelbin.append(self.guiLabels.pop())
else:
lNumOfLabelsToAdd = newNumOfGuiLabels - lenguiLabels
lNumOfLabelsToCreate = lNumOfLabelsToAdd - len(self.labelbin)
if lNumOfLabelsToCreate > 0:
for i in range(len(self.labelbin)):
self.guiLabels.append(self.labelbin.pop())
for i in range(lNumOfLabelsToCreate):
self.guiLabels.append( str(len(self.guiLabels) ) )
else:
for i in range(lNumOfLabelsToAdd):
self.guiLabels.append(self.labelbin.pop())
self.configureGui(labels=self.guiLabels)
def createCML(self):
"""create the ColorMap Legend
"""
if self.legend:
return
#width defaults to len(ramp); height defaults to 1
#mini, maxi and interp set from self
cmlOptions = {'width':10, 'height':1,
'name':self.name, 'ramp':self.ramp}
if self.mini is not None:
cmlOptions['mini'] = self.mini
if self.maxi is not None:
cmlOptions['maxi'] = self.maxi
self.legend = apply(ColorMapLegend, (self,), cmlOptions)
# def resetComp(self, comp):
# """resets on component to first entry in history list"""
# compNum = ['Hue','Sat','Val','Opa'].index(comp)
# # Get the first entry in the history list
# hist = self.history[0]
# # Remove all the entries in the history list
# self.history = self.history[:1]
#
# # Transform the history RGBramp into a HSL ramp
# histHSL = map(lambda x: list(RGBA2HSLA_list(x)), hist)
# # Transform the current ramp in a HSL ramp
# hsl = map(lambda x: list(x), self.asHSL())
# for i in range(len(hsl)):
# hsl[i][compNum] = histHSL[i][compNum]
# ramp = map(lambda x: list(HSLA2RGBA_list(x)), hsl)
# self.configure(ramp=ramp, updateGui=False)
def reset(self):
"""return to first entry in history list"""
ramp = self.history[0]
self.history = []
self.configure(ramp=deepCopySeq(ramp), updateGui=False)
def pushRamp(self):
"""
The pushRamp method appends the current ramp stored in cmap.ramp at the
end of the history list.
The attribute self.currentOnStack is set to True.
"""
# append the current ramp to the history if the ramp is not None and
# if the ramp has not been pushed onto the history stack yet.
if self.ramp is None or self.currentOnStack:
return
self.history.append(deepCopySeq(self.ramp))
self.currentOnStack = True
def popRamp(self, index=-1):
"""
optional arguments
index -- default -1, specifies the index of the ramp to pop.
It is a negative index
as we are popping ramps from the end of the history list.
The popRamp method will save the ramp stored at the given index
in cmap.ramp, then remove all the entries from that ramp (included)
to the end of the history list.
After a popRamp the attribute self.currentOnStack is False.
"""
# Need at least one entry in the history
if not len(self.history): return
# Always keep the first entry of the history
if len(self.history)==1:
if not self.currentOnStack:
self.ramp = self.history[0]
self.currentOnStack = True
return
# PopRamp removes entry from the end of the history list
# which is why the index has to be negative.
assert (index < 0)
# 1- set the actual ramp to be the entry of history
# corresponding to the given index
# 2- history = history[:index]
# the popped history is not on the stack any longer except
# when it is the first entry
# Get the index from the beginning of the history list
newind = len(self.history)+index
pushRamp = False
if newind <= 0:
newind = 0
# Always keep the first entry in the history list
pushRamp = True
elif newind >= len(self.history):
newind = -1
pushRamp = False
ramp = self.history[newind]
self.history = self.history[:newind]
# We do not want to push the ramp we are popping onto the history stack
# the current ramp will not be on the history stack any longer.
self.configure(ramp=deepCopySeq(ramp), updateGui=False,
pushRamp=pushRamp)
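# Worked example of the history-stack semantics above (illustrative): after
# three edits that each triggered pushRamp(), history == [r0, r1, r2] and
# currentOnStack is True; popRamp(-1) then restores r2 into cmap.ramp and
# truncates history to [r0, r1], while popRamp(-3) falls back to r0, which
# is always kept as the first entry.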
def SetViewer(self, viewer):
"""to give a viewer reference to the ColorMapGUI even
after its creation
"""
if viewer :
if self.viewer is not None:
self.viewer.RemoveObject(self.legend)
self.viewer = viewer
self.createCML()
self.legend.name = self.viewer.ensureUniqueName(self.legend.name)
self.update(cmapName=self.legend.name)
self.legend.replace = False
#self.legend.protected = True
self.viewer.AddObject(self.legend, redo=0)
self.showLegendVar = Tkinter.IntVar()
self.showLegendButton = Tkinter.Checkbutton(
self.buttonFrame, text='show legend',
variable=self.showLegendVar, command=self.showLegendButton_cb)
self.showLegendButton.grid(row=5, column=0, columnspan=2, sticky='w')
def createWidgets(self, master):
"""create Tkinter widgets: 4 canvas and buttons in 2 frames
"""
#print "createWidgets"
self.fullWidgetFrame = Tkinter.Frame(master)
self.fullWidgetFrame.pack(side='top', expand=1, fill='both')
# create menu frame
self.menuFrame1 = Tkinter.Frame(self.fullWidgetFrame, relief='raised', borderwidth=3)
self.menuFrame1.pack(side='top', expand=1, fill='x')
filebutton = Tkinter.Menubutton(self.menuFrame1, text='File')
filebutton.pack(side='left')
filemenu = Tkinter.Menu(filebutton, {})
filemenu.add_command(label='Read', command=self.read_cb)
filemenu.add_command(label='Write', command=self.write_cb)
filebutton['menu'] = filemenu
editbutton = Tkinter.Menubutton(self.menuFrame1, text='Edit')
editbutton.pack(side='left', anchor='w')
editmenu = Tkinter.Menu(editbutton, {})
editmenu.add_command(label='Reset to first in history', command=self.resetAll_cb)
editmenu.add_command(label='Step back in history loop', command=self.stepBack_cb)
## editmenu.add_checkbutton(label='Preview', variable=self.preview)
editmenu.add_checkbutton(label='Continuous',
variable=self.continuousUpdate)
editmenu.add_command(label='Edit Legend', command=self.editLegend_cb)
editbutton['menu'] = editmenu
self.numOfRampValues = ThumbWheel(
self.menuFrame1,
labCfg={'text':'Num of colors:', 'side':'left'},
showLabel=1,
width=80,
height=16,
min=1,
max=COLORMAPGUI_MAXIMAL_HEIGHT,
type=int,
value=self.lengthRamp,
callback=self.changeRampLength_cb,
continuous=True,
oneTurn=60,
wheelPad=2
)
self.numOfRampValues.pack(side='right', anchor='e')
# create canvas frames
self.canvasesFrame = Tkinter.Frame(self.fullWidgetFrame)
# create a frame for the property canvases
self.propCanvasFrame = Tkinter.Frame(self.canvasesFrame)
# create 4 canvases
self.canvas = {}
for idStr in self.idList:
self.canvas[idStr] = Tkinter.Canvas(
self.propCanvasFrame, relief='sunken', borderwidth=3,
width=self.width, height=self.height)
#pack Hue to start with
self.canvas['Hue'].pack(side='left', expand=1, fill='both')
self.propCanvasFrame.pack(side='left', expand=1, fill='both')
self.ogl_cmw = OGLColorMapWidget( self.canvasesFrame, self,
width=19, height=self.height,
)
self.canvasesFrame.pack(side = 'top', expand=1, fill='both')
# create a frame for buttons
self.buttonFrame = Tkinter.Frame(self.fullWidgetFrame, relief='ridge',
borderwidth=3)
# create radio buttons to switch between canvas
self.currentCanvasVar = Tkinter.StringVar()
self.buttonHue = Tkinter.Radiobutton(
self.buttonFrame, text='Hue', value = 'Hue', width=8,
indicatoron = 0, variable = self.currentCanvasVar,
command=self.button_cb)
self.buttonHue.grid(row=0, column=0, sticky='we')#pack(side='left')
self.buttonSat = Tkinter.Radiobutton(
self.buttonFrame, text='Sat.', value = 'Sat', width=8,
indicatoron = 0, variable = self.currentCanvasVar,
command=self.button_cb)
self.buttonSat.grid(row=0, column=1, sticky='we')#pack(side='left')
self.buttonVal = Tkinter.Radiobutton(
self.buttonFrame, text='Lum.', value = 'Val', width=8,
indicatoron = 0, variable = self.currentCanvasVar,
command=self.button_cb)
self.buttonVal.grid(row=0, column=2, sticky='we')#pack(side='left')
self.buttonOpa = Tkinter.Radiobutton(
self.buttonFrame, text='Opa.', value = 'Opa', width=8,
indicatoron = 0, variable = self.currentCanvasVar,
command=self.button_cb)
self.buttonOpa.grid(row=0, column=3, sticky='we')#pack(side='left')
self.currentCanvas = self.canvas['Hue']
self.currentCanvasVar.set('Hue')
# name
self.nameVar = Tkinter.StringVar()
if self.name is not None:
self.nameVar.set(self.name)
a = Tkinter.Label(self.buttonFrame, text='name')
a.grid(row=2, column=0, sticky='e')
if self.allowRename is True:
self.nameEntry = Tkinter.Entry(
self.buttonFrame, textvariable=self.nameVar, width=18)
self.nameEntry.bind('<Return>', self.rename_cb)
self.nameEntry.bind('<Leave>', self.rename_cb)
else:
self.nameEntry = Tkinter.Label(
self.buttonFrame, textvariable=self.nameVar, width=18,
relief='groove', padx=2, pady=1)
self.nameEntry.grid(row=2, column=1, columnspan=3, sticky='w')
# min max
self.maxTk = Tkinter.StringVar()
if self.guiMaxi is not None:
self.maxTk.set(('%8.2f'%self.guiMaxi))
a = Tkinter.Label(self.buttonFrame, text='max')
a.grid(row=3, column=2, sticky='e')
if self.modifyMinMax:
self.maxEntry = Tkinter.Entry(
self.buttonFrame, textvariable=self.maxTk, width=8)
self.maxEntry.bind('<Return>', self.max_cb)
self.maxEntry.bind('<Leave>', self.max_cb)
else:
self.maxEntry = Tkinter.Label(
self.buttonFrame, textvariable=self.maxTk, width=8,
relief='groove', justify='left', padx=2, pady=1)
self.maxEntry.grid(row=3, column=3,sticky='w')
self.curyTk = Tkinter.StringVar()
a = Tkinter.Label(self.buttonFrame, text='y ')
a.grid(row=4, column=2, sticky='e')
self.curYLabel = Tkinter.Label(
self.buttonFrame, textvariable=self.curyTk, width=8,
relief='groove', justify='left', padx=2, pady=1)
self.curYLabel.grid(row=4, column=3, sticky='w')
self.minTk = Tkinter.StringVar()
if self.guiMini is not None:
self.minTk.set(('%8.2f'%self.guiMini))
a = Tkinter.Label(self.buttonFrame, text='min')
a.grid(row=5, column=2, sticky='e')
if self.modifyMinMax:
self.minEntry = Tkinter.Entry(
self.buttonFrame, textvariable=self.minTk, width=8)
self.minEntry.bind('<Return>', self.min_cb)
self.minEntry.bind('<Leave>', self.min_cb)
else:
self.minEntry = Tkinter.Label(
self.buttonFrame, textvariable=self.minTk, width=8,
relief='groove', justify='right', padx=2, pady=1)
self.minEntry.grid(row=5, column=3, sticky='w')
self.curxTk = Tkinter.StringVar()
a = Tkinter.Label(self.buttonFrame, text='x')
a.grid(row=4, column=1, sticky='w')
self.curXLabel = Tkinter.Label(
self.buttonFrame, textvariable=self.curxTk, width=8,
relief='groove', justify='left', padx=2, pady=1)
self.curXLabel.grid(row=4, column=0, sticky='w')
if self.labels is not None:
#self.labelsInComboBox = Tkinter.StringVar()
self.labelsComboBox = ComboBoxRename(
self.buttonFrame,
label_text='label',
labelpos='w',
history=1)
self.labelsComboBox.setExternalList(
self.guiLabels,
reverse=True,
numOfBlockedLabels=self.numOfBlockedLabels)
self.labelsComboBox.grid(row=6, column=0, columnspan=4, sticky='w')
#Apply and Dismiss go here
self.frame2 = Tkinter.Frame(self.fullWidgetFrame, relief='raise', borderwidth=3)
f2 = self.frame2
self.apply = Tkinter.Button(f2, text='Apply', command=self.apply_cb)
self.apply.pack(side='left', expand=1, fill='x')
# use a distinct attribute name so the Button does not shadow the dismiss() method
self.dismissButton = Tkinter.Button(f2, text='Dismiss', command=self.dismiss)
self.dismissButton.pack(side='left', expand=1, fill='x')
f2.pack(side='bottom', expand=1, fill='x')
self.buttonFrame.pack(side='bottom', expand=1, fill='x')
def update(self, ramp=None, mini=None, maxi=None,
cmapName=None, drawRamp=True, cfgCmap=True,
theGivenRampIsTheGuiRamp=False,
#guiRamp=None
):
# Update the name of cmg if relevant
if hasattr(self.master, 'title'):
self.master.title(cmapName)
# Update the maxi and mini values of cmg
if maxi is not None and mini is not None:
if mini <= maxi:
self.guiMaxi = maxi
self.guiMini = mini
self.minTk.set(('%8.2f'%self.guiMini))
self.maxTk.set(('%8.2f'%self.guiMaxi))
else:
self.guiMaxi = None
self.guiMini = None
self.minTk.set('')
self.maxTk.set('')
elif maxi is not None and mini is None:
if self.guiMini <= maxi:
self.guiMaxi = maxi
self.maxTk.set(('%8.2f'%self.guiMaxi))
else:
self.guiMaxi = None
self.maxTk.set('')
elif mini is not None and maxi is None:
if mini <= self.guiMaxi:
self.guiMini = mini
self.minTk.set(('%8.2f'%self.guiMini))
else:
self.guiMini = None
self.minTk.set('')
# Update the cmg ramp with the one given
if not ramp is None:
if theGivenRampIsTheGuiRamp is False:
self.cmapCurrent = False
ramp = deepCopySeq(self.checkRamp(ramp))
self.lengthRamp = len(ramp)
if self.lengthRamp > self.height :
self.height = self.lengthRamp
#print "self.height 2", self.height
self.numOfRampValues.set(self.lengthRamp)
self.linesPerRampValue = self.height / float(self.lengthRamp)
self.resizeCanvases(self.height)
self.guiRamp = ramp
if self.labels is not None:
self.adjustGuiLabels(self.lengthRamp)
self.drawRampCanvases(drawRamp=drawRamp)
else:
#print "guiRamp"
self.lengthRamp = len(ramp)
if self.lengthRamp > self.height :
self.height = self.lengthRamp
#print "self.height 3", self.height
#self.numOfRampValues.set(self.lengthRamp)
self.linesPerRampValue = self.height / float(self.lengthRamp)
self.resizeCanvases(self.height)
#self.guiRamp = guiRamp
if self.labels is not None:
self.adjustGuiLabels(self.lengthRamp)
self.drawRampCanvases(drawRamp=drawRamp)
# Update the OpenGL ramp
self.ogl_cmw.ramp = deepCopySeq(self.guiRamp)
self.ogl_cmw.tkRedraw()
if self.continuousUpdate.get():
self.configureCmap(cfgCmap=cfgCmap)
def configure(self, name=None, ramp=None, geoms=None, legend=None, labels=None,
mini='not passed', maxi='not passed', viewer=None, updateGui=True,
pushRamp=True, **kw):
#print "ColorMapGUI.configure", mini, maxi
ColorMap.configure(self, name=name, ramp=ramp, labels=labels, mini=mini, maxi=maxi)
if ramp is not None:
ramp = self.ramp
if labels is not None:
labels = self.labels # because it was just set in the configure
self.configureGui(ramp=ramp, labels=labels,
geoms=geoms, legend=legend,
viewer=viewer, updateGui=updateGui,
pushRamp=pushRamp, **kw)
def configureGui(self, name=None, ramp=None, labels=None,
geoms=None, legend=None,
viewer=None, updateGui=True,
pushRamp=True,
#guiRamp=None,
theGivenRampIsTheGuiRamp=False,
**kw):
"""Configure the colormapGui with the given values.
"""
if (ramp is not None) and (theGivenRampIsTheGuiRamp is False):
# The ramp is new but has not been pushed onto the history
# stack yet
self.currentOnStack = False
# When pushRamp is True this new ramp is pushed on the stack
if pushRamp:
self.pushRamp()
if labels is not None and self.labels is not None:
#print "labels", labels
#import traceback;traceback.print_stack()
self.labelsComboBox.delete(0,'end')
for label in labels:
self.labelsComboBox.insert(0, str(label) )
if geoms is not None and len(geoms):
self.geoms.update(geoms)
if viewer is not None and viewer != self.viewer:
self.SetViewer(viewer)
if self.name != self.legend.name:
self.nameVar.set(self.legend.name)
self.rename_cb()
# if a legend is specified then set self.legend
if legend is not None:
self.legend = legend
# Then update the legend with the given values
if self.legend is not None:
cmlOptions = {'ramp':self.ramp}
# Need to update the legendValues as well
cmlOptions['mini'] = self.mini
cmlOptions['maxi'] = self.maxi
if hasattr(self, 'showLegendVar'):
cmlOptions['visible'] = self.showLegendVar.get()
apply(self.legend.Set, (), cmlOptions)
if updateGui:
self.update(ramp=ramp, mini=self.mini, maxi=self.maxi,
cfgCmap=False,
theGivenRampIsTheGuiRamp=theGivenRampIsTheGuiRamp,
)
def Map(self, values, mini='not passed', maxi='not passed'):
col = ColorMap.Map(self, values, mini=mini, maxi=maxi)
if self.legend is not None:
cmlOptions = {'ramp':self.ramp}
# Need to update the legendValues with the value that where just used
cmlOptions['mini'] = self.lastMini
cmlOptions['maxi'] = self.lastMaxi
if hasattr(self, 'showLegendVar'):
cmlOptions['visible'] = self.showLegendVar.get()
apply(self.legend.Set, (), cmlOptions)
return col
def configureCmap(self, cfgCmap=True):
""" This method configures the associated cmap with the GUI new values.
"""
if self.cmapCompToReset is not None:
self.resetComp(self.cmapCompToReset)
self.cmapCompToReset = None
cfgCmap = False
self.cmapCurrent = True
if self.cmapResetAll is True:
self.reset()
self.cmapResetAll = False
cfgCmap = False
self.cmapCurrent = True
if self.nbStepBack != 0:
if self.nbStepBack == len(self.history):
cfgCmap = False
self.cmapCurrent = True
self.popRamp(-self.nbStepBack)
self.nbStepBack = 0
if cfgCmap:
# if self.allowRename:
# name = self.nametk.get()
# else:
# name=None
# do not update the cmap if it is already current...
if self.cmapCurrent:
ramp=None
else:
ramp = deepCopySeq(self.guiRamp)
self.configure(ramp=ramp, mini=self.guiMini, maxi=self.guiMaxi)
if self.legend is not None:
if self.legend.viewer:
self.tk.call(self.legend.viewer.currentCamera._w, 'makecurrent')
self.legend.RedoDisplayList()
self.legend.viewer.Redraw()
self.cmapCurrent = True
if self.labels is not None:
self.labels = deepCopySeq(self.guiLabels)
#print "self.labels", self.labels
# Then will call the callbacks with the new cmap values.
self.callCallbacks()
def drawRampCanvases(self, event=None, drawRamp=True):
"""draw all ramp canvases
"""
#print "drawRampCanvases"
# update Ramp
self.currentHue = range(self.lengthRamp)
self.currentSat = range(self.lengthRamp)
if drawRamp:
for v in self.idList:
self.deleteCanvasLines(v)
self.setRightXVals(v)
var = self.currentCanvasVar.get()
self.currentValues = self.rightXVals[var]
self.drawHue()
self.drawSaturation()
self.drawValue()
self.drawOpacity()
#################################################################
### MOUSE CALLBACKS
#################################################################
def mouseUp(self, event=None):
self.cmapCurrent=False
self.update(drawRamp=False)
def mouseDown(self, event):
j = self.canvasStart
# canvas x and y take the screen coords from the event and translate
# them into the coordinate system of the canvas object
Y = min(j+self.height-1, event.y)
x = min(self.width, event.x)
x = max(self.xoffset, x)
if Y < j:
Y = j
y = int ( (Y-j) / self.linesPerRampValue)
if y < 0:
y = 0
self.startx = x
self.startY = Y
self.starty = y
self.updateCurTk(event)
c = self.currentCanvas
lineIndex = self.lengthRamp-1-y
line = self.currentLines[lineIndex]
col, graybitmap = self.getColor(x, lineIndex) #y
newline = c.create_rectangle( j,
y*self.linesPerRampValue+j,
1+x,#+j,
(1+y)*self.linesPerRampValue+j,
outline='',
fill=col,
stipple=graybitmap)
self.currentLines[lineIndex] = newline
#update self.guiRamp
self.updateRGBMap(x, lineIndex)
#update rightXVals
self.currentValues[lineIndex] = x
c.delete(line)
def mouseMotion(self, event):
j = self.canvasStart
# canvas x and y take the screen coords from the event and translate
# them into the coordinate system of the canvas object
# x,y are float
#x = self.canvasHue.canvasx(event.x)
#y = self.canvasHue.canvasy(event.y)
# event.x, event.y are same as x,y but int
Y = min(j+self.height-1, event.y)
x = min(self.width, event.x)
x = max(self.xoffset, x)
if Y < j:
Y = j
y = int ( (Y-j) / self.linesPerRampValue)
if y < 0:
y = 0
# note: not chained to the y<0 test above, so a drag that starts above
# the canvas still initializes startx/startY via mouseDown
if self.startx is None:
self.mouseDown(event)
c = self.currentCanvas
if self.startY == Y:
lineIndex = self.lengthRamp-1-y
#print "lineIndex 0 ...", lineIndex
line = self.currentLines[lineIndex]
col, graybitmap = self.getColor(x, lineIndex)
newline = c.create_rectangle( j,
y*self.linesPerRampValue+j,
1+x,#+j,
(1+y)*self.linesPerRampValue+j,
outline='',
fill=col,
stipple=graybitmap)
self.currentLines[lineIndex] = newline
c.delete(line)
self.currentValues[lineIndex] = x
self.updateRGBMap(x, lineIndex)
else: # we need to interpolate for all y's between self.starty and y
dx = x-self.startx
dy = Y-self.startY
rat = float(dx)/float(dy)
if Y > self.startY:
for Yl in range(self.startY, Y+1):
yl = int ( (Yl-j) / self.linesPerRampValue)
ddx = int(rat*(Yl-self.startY)) + self.startx
lineIndex = self.lengthRamp-1-yl
#print "lineIndex 1 ...", lineIndex
line = self.currentLines[lineIndex]
col, graybitmap = self.getColor(ddx, lineIndex)
newline = c.create_rectangle( j,
yl*self.linesPerRampValue+j,
1+ddx,#+j,
(yl+1)*self.linesPerRampValue+j,
outline='',
fill=col,
stipple=graybitmap)
self.currentLines[lineIndex] = newline
c.delete(line)
self.currentValues[lineIndex] = ddx
self.updateRGBMap(ddx, lineIndex)
else:
for Yl in range(self.startY, Y-1, -1):
yl = int ( (Yl-j) / self.linesPerRampValue)
ddx = int(rat*(Yl-self.startY)) + self.startx
lineIndex = self.lengthRamp-1-yl
#print "lineIndex 2 ...", lineIndex
line = self.currentLines[lineIndex]
col, graybitmap = self.getColor(ddx, lineIndex)
newline = c.create_rectangle( j,
yl*self.linesPerRampValue+j,
1+ddx,#+j,
(1+yl)*self.linesPerRampValue+j,
outline='',
fill=col,
stipple=graybitmap)
self.currentLines[lineIndex] = newline
c.delete(line)
self.currentValues[lineIndex] = ddx
self.updateRGBMap(ddx, lineIndex)
self.startY = Y
self.startx = x
# this flushes the output, making sure that
# the rectangle makes it to the screen
# before the next event is handled
self.update_idletasks()
self.updateCurTk(event)
def mouseDownRight(self, event):
j = self.canvasStart
Y = min(j+self.height-1, event.y)
x = min(self.width, event.x)
x = max(self.xoffset, x)
if Y < j:
Y = j
y = int ( (Y-j) / self.linesPerRampValue)
if y < 0:
y = 0
self.startx = x
self.startY = Y
self.starty = y
def mouseMotionRight(self, event):
j = self.canvasStart
Y = min(j+self.height-1, event.y)
x = min(self.width, event.x)
x = max(self.xoffset, x)
if Y < j:
Y = j
y = int ( (Y-j) / self.linesPerRampValue)
if y < 0:
y = 0
if self.startx is None:
self.mouseDown(event)
c = self.currentCanvas
if self.straightLine is not None:
c.delete(self.straightLine[0])
c.delete(self.straightLine[1])
dy = Y-self.startY
if abs(dy) < (self.linesPerRampValue/2):
return
straightLineBlack = c.create_line(
self.startx,#+j,
(self.starty+.5)*self.linesPerRampValue+j,
x,#+j,
(y+.5)*self.linesPerRampValue+j,
fill='black',
width=3)
straightLineWhite = c.create_line(
self.startx,#+j,
(self.starty+.5)*self.linesPerRampValue+j,
x,#+j,
(y+.5)*self.linesPerRampValue+j,
fill='white',
width=1)
self.straightLine = [straightLineBlack, straightLineWhite]
self.update_idletasks()
self.updateCurTk(event)
def mouseUpRight(self, event=None):
j = self.canvasStart
c = self.currentCanvas
if self.straightLine is not None:
c.delete(self.straightLine[0])
c.delete(self.straightLine[1])
Y = min(j+self.height-1, event.y)
x = min(self.width, event.x)
x = max(self.xoffset, x)
if Y < j:
Y = j
y = int ( (Y-j) / self.linesPerRampValue)
if y < 0:
y = 0
if self.startx is None:
self.mouseDown(event)
dx = x-self.startx
dY = Y-self.startY
dy = y-self.starty
if abs(dY) < (self.linesPerRampValue/2):
return
if dy == 0:
# start and end map to the same ramp row; avoid a ZeroDivisionError
rat = 0.0
else:
rat = float(dx)/float(dy)
if y > self.starty:
for yl in range(self.starty, y+1):
ddx = int(rat*(yl-self.starty)) + self.startx
lineIndex = self.lengthRamp-1-yl
line = self.currentLines[lineIndex]
col, graybitmap = self.getColor(ddx, lineIndex)
newline = c.create_rectangle( j,
yl*self.linesPerRampValue+j,
1+ddx,#+j,
(yl+1)*self.linesPerRampValue+j,
outline='',
fill=col,
stipple=graybitmap)
self.currentLines[lineIndex] = newline
c.delete(line)
self.currentValues[lineIndex] = ddx
self.updateRGBMap(ddx, lineIndex)
else:
for yl in range(self.starty, y-1, -1):
ddx = int(rat*(yl-self.starty)) + self.startx
lineIndex = self.lengthRamp-1-yl
line = self.currentLines[lineIndex]
col, graybitmap = self.getColor(ddx, lineIndex)
newline = c.create_rectangle( j,
yl*self.linesPerRampValue+j,
1+ddx,#+j,
(yl+1)*self.linesPerRampValue+j,
outline='',
fill=col,
stipple=graybitmap)
self.currentLines[lineIndex] = newline
c.delete(line)
self.currentValues[lineIndex] = ddx
self.updateRGBMap(ddx, lineIndex)
self.starty = y
self.startY = Y
self.startx = x
# this flushes the output, making sure that
# the rectangle makes it to the screen
# before the next event is handled
self.update_idletasks()
self.updateCurTk(event)
self.cmapCurrent=False
self.update(drawRamp=False)
###########################################################################
## CALLBACK FUNCTIONS
###########################################################################
def apply_cb(self, event=None, cfgCmap=True):
"""
"""
#print "ColorMapGUI apply_cb"
# self.guiMini, self.guiMaxi, from/to and the fill values all need
# to be set to the proper values before reconfiguring the colormap.
# Method which reconfigures the colormap associated to the gui
self.configureCmap()
def fileOpenAsk(self, idir=None, ifile=None, types=None,
title='Open'):
if types==None: types = [ ('All files', '*') ]
file = tkFileDialog.askopenfilename( filetypes=types,
initialdir=idir,
initialfile=ifile,
title=title)
if file=='': file = None
return file
def fileSaveAsk(self, idir=None, ifile=None, types = None,
title='Save'):
if types==None: types = [ ('All files', '*') ]
file = tkFileDialog.asksaveasfilename( filetypes=types,
initialdir=idir,
initialfile=ifile,
title=title)
if file=='': file = None
return file
def button_cb(self, event=None):
"""call back function for the buttons allowing to toggle between
different canvases.
This function hides the currentCanvas and shows the canvas
corresponding to the active radio button.
In addition it sets
self.currentCanvas : used to hide it next time we come in
self.currentLines : list of Canvas Line objects (one per canvas)
self.currentValues : list of numerical rightXVals (one per canvas)
self.getColor = function to be called to represent a color as a
Tk string
"""
var = self.currentCanvasVar.get()
if var == 'Hue':
self.drawHue()
elif var == 'Sat':
self.drawSaturation()
elif var == 'Val':
self.drawValue()
elif var == 'Opa':
self.drawOpacity()
newCanvas = self.canvas[var]
self.currentCanvas.forget()
newCanvas.pack(side='top')
self.currentCanvas = newCanvas
self.currentLines = self.lines[var]
self.currentValues = self.rightXVals[var]
self.getColor = self.getColorFunc[var]
self.current = var
def min_cb(self, event=None):
minVal = self.minTk.get()
try:
minVal = float(minVal)
if self.guiMaxi is not None and minVal >= self.guiMaxi:
raise ValueError
self.update(mini=minVal)
except:
self.minTk.set('')
self.guiMini = None
return
def max_cb(self, event=None):
maxVal = self.maxTk.get()
try:
maxVal = float(maxVal)
if self.guiMini is not None and maxVal <= self.guiMini:
raise ValueError
self.update(maxi=maxVal)
except:
self.maxTk.set('')
self.guiMaxi = None
return
def rename_cb(self, event=None):
self.name = self.nameVar.get()
self.update(cmapName=self.name)
self.legend.Set(name=self.name)
if self.legend.ownGui is not None:
self.legend.ownGui.title(self.legend.name)
if self.viewer is not None:
self.viewer.Redraw()
def quit(self):
#print "quit"
self.master.destroy()
def dismiss(self):
#print "dismiss"
if self.master.winfo_ismapped():
self.master.withdraw()
def resetAll_cb(self, event=None):
self.cmapResetAll = True
self.update(ramp = deepCopySeq(self.history[0]), cfgCmap=False)
# def reset_cb(self, event=None):
# """reset only the currently visible property canvas"""
# var = self.currentCanvasVar.get()
# self.deleteCanvasLines(var)
# ramp = deepCopySeq(self.history[0])
#
# self.lengthRamp = len(ramp)
# if self.lengthRamp > self.height :
# self.height = self.lengthRamp
# self.numOfRampValues.set( self.lengthRamp )
#
# hsva = map(lambda x, conv=RGBA2HSLA_list: conv(x), ramp)
# ind = self.idList.index(var)
# #format?...list?....array??
# newVals = numpy.array(hsva)[:,ind]
# self.guiRamp = self.buildRGBMap(ind, newVals)
#
# self.cmapCompToReset = var
# if var == 'Hue':
# self.setRightXVals('Hue')
# self.drawHue()
# elif var == 'Sat':
# self.setRightXVals('Sat')
# self.drawSaturation()
# elif var == 'Val':
# self.setRightXVals('Val')
# self.drawValue()
# elif var == 'Opa':
# self.setRightXVals('Opa')
# self.drawOpacity()
# if self.continuousUpdate.get():
# self.configureCmap()
# #self.mouseUp(None) # to call callbacks
# self.currentValues = self.rightXVals[var]
# self.ogl_cmw.ramp = self.guiRamp[:]
# self.ogl_cmw.tkRedraw()
def stepBack_cb(self):
#update self.ramp to last ramp on history list
if len(self.history)==1 and self.cmapCurrent is True:
return
if self.cmapCurrent and self.nbStepBack == 0:
index = self.nbStepBack + 2
else:
index = self.nbStepBack + 1
self.nbStepBack = index
newind = len(self.history)-index
cfgCmap = True
if newind < 0:
# tried to go back too far
newind = 0
self.nbStepBack = 0
if newind == 0:
# only valid for continuous mode.
# we want to go back to the first entry of the history, which
# always stays on the stack, so we don't need to update the cmap
cfgCmap=False
self.update(ramp=self.history[newind], cfgCmap=cfgCmap)
def read_cb(self):
fileTypes = [("ColorMap",'*_map.py'), ("any file",'*.*')]
fileBrowserTitle = "Read Color Map"
fileName = self.fileOpenAsk(types=fileTypes,
title=fileBrowserTitle)
if not fileName:
return
self.read(fileName)
def read(self, filename):
ColorMap.read(self, filename)
self.configureGui(ramp=self.ramp, mini=self.mini, maxi=self.maxi)
if self.legend.viewer:
#self.legend.RedoDisplayList()
self.legend.viewer.Redraw()
#self.configureCmap()
def enter_cb(self, event=None):
if widgetsOnBackWindowsCanGrabFocus is False:
lActiveWindow = self.focus_get()
if lActiveWindow is not None \
and ( lActiveWindow.winfo_toplevel() != self.winfo_toplevel() ):
return
self.focus_set()
def leave_cb(self, event=None):
pass
############################################################
### UTILITY FUNCTIONS
############################################################
def makeEvent(self, x, y):
xval = x*self.xrange + self.xoffset
return DummyEvent( xval, y)
def hueColor(self, val, rampIndex):
""" TkColorString <- hueColor(val)
val is an integer between self.xoffset and self.width.
(default:val is an integer between 25 and 200.)
returns the color to be used to draw hue lines in the hue canvas
"""
#print "hueColor", h
if val > self.width:
h = 1
else:
h = (float(self.width-val)/self.xrange) #* .999 #*.6666666667
# if h == 0:
# h = 1
self.currentHue[rampIndex] = h
rgb = HSL2RGB(h, 1., .5)
graybitmap = ''
return TkColor(rgb), graybitmap
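# Illustrative mapping (with the defaults width=200, xoffset=25, xrange=175):
# val == 200 gives h == 0.0 while val == 25 gives h == 1.0, i.e. the hue axis
# runs "backwards", as noted in the dictionary comments in __init__.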
def satColor(self, val, rampIndex):
""" TkColorString <- satColor(val)
val is an integer between self.xoffset and self.width.
(default:val is an integer between 25 and 200.)
returns the color to be used to draw saturation lines in the saturation canvas
"""
#print "satColor", val, rampIndex
if val < self.xoffset:
x = 0
else:
x = float(val-self.xoffset)/self.xrange
self.currentSat[rampIndex] = x
rgb = HSL2RGB(self.currentHue[rampIndex], x, .5)
rgb.append(1.)
graybitmap = ''
return TkColor(rgb), graybitmap
def valColor(self, val, rampIndex):
""" TkColorString <- valColor(val)
val is an integer between self.xoffset and self.width.
(default: val is an integer between 25 and 200.)
returns the color to be used to draw value lines in the value canvas
"""
#print "valColor", val, rampIndex
if val < self.xoffset:
x = 0
else:
x = float(val-self.xoffset)/self.xrange
rgb = HSL2RGB(self.currentHue[rampIndex], self.currentSat[rampIndex], x)
rgb.append(1.)
graybitmap = ''
return TkColor(rgb), graybitmap
def opaColor(self, val, rampIndex=None):
#print "opaColor"
if val < self.xoffset:
x = 0
else:
x = float(val-self.xoffset)/self.xrange
if x >= 1:
graybitmap = ''
col = 'black'
elif x > .75:
graybitmap = 'gray75'
col = 'black'
elif x > .50:
graybitmap = 'gray50'
col = 'black'
elif x > .25:
graybitmap = 'gray25'
col = 'black'
elif x > 0:
graybitmap = 'gray12'
col = 'black'
else:
graybitmap = ''
col=''
return col, graybitmap
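# Sketch of the stipple encoding above: the Tk canvas has no per-item alpha,
# so opacity is approximated with the predefined 'gray12'/'gray25'/'gray50'/
# 'gray75' bitmaps; e.g. x == 0.6 falls in the (0.50, 0.75] bucket and is
# drawn as black stippled with 'gray50'.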
def opaColor_Unused(self, val, rampIndex=None):
# the color of the dot changes
# we don't use it because the limit line drawn is less visible
x = float(val- self.xoffset)/self.xrange
background = self.canvas['Opa'].configure()['background']
backgroundHSV = ToHSV(background[4], mode='HEX', flag255=0)
bgHSV = [backgroundHSV[0], backgroundHSV[1], backgroundHSV[2] * (1-x)]
col = ToHEX(bgHSV, mode='HSV')
if x >= 1:
graybitmap = ''
elif x > .75:
graybitmap = 'gray75'
elif x > .50:
graybitmap = 'gray50'
elif x > .25:
graybitmap = 'gray25'
elif x > 0:
graybitmap = 'gray12'
else:
graybitmap = ''
return col, graybitmap
# def createCMLWidget(self):
# self.legend_gui = Tkinter.Toplevel()
# self.legend_gui.title(self.legend.name)
# self.legend_gui.protocol('WM_DELETE_WINDOW', self.legend_gui.withdraw )
#
# frame1 = Tkinter.Frame(self.legend_gui)
# frame1.pack(side='top')
#
# #unit
# self.unitsEnt = Pmw.EntryField(frame1,
# label_text='Units ',
# labelpos='w',
# command=self.updateCML_cb)
# self.unitsEnt.pack(side='top', fill='x')
#
# #glf vector font
# self.glfFont = Tkinter.StringVar()
# self.glfFont.set('chicago1.glf')
# self.glfFontCB = Pmw.ComboBox(frame1, label_text='Font ',
# labelpos='w',
# entryfield_value=self.glfFont.get(),
# scrolledlist_items=ColorMapLegend.glfVectorFontList,
# selectioncommand=self.updateCML_cb)
# self.glfFontCB.pack(side='top', fill='x')
#
# #fontScale
# self.fontScale = ThumbWheel(frame1,
# labCfg={'text':'font scale ', 'side':'left'},
# showLabel=1,
# width=90,
# height=14,
# min=0,
# max=200,
# type=int,
# value=self.legend.fontScale,
# callback=self.updateCML_cb,
# continuous=True,
# oneTurn=10,
# wheelPad=0)
# self.fontScale.pack(side='top')
#
# #label
# self.labelValsEnt = Pmw.EntryField(
# frame1,
# label_text='numpy labels ',
# labelpos='w',
# command=self.updateCML_cb)
# self.labelValsEnt.component('entry').config(width=6)
# self.labelValsEnt.pack(side='top', fill='x')
#
# #numOfLabel
# self.numOfLabelsCtr = ThumbWheel(frame1,
# labCfg={'text':'Automatic labels', 'side':'left'},
# showLabel=1,
# width=90,
# height=14,
# min=0,
# max=200,
# type=int,
# value=5,
# callback=self.updateCML_cb,
# continuous=True,
# oneTurn=20,
# wheelPad=0)
# self.numOfLabelsCtr.pack(side='top')
#
# # Interpolate
# self.interpVar = Tkinter.IntVar()
# self.interpVar.set(0)
# self.checkBoxFrame = Tkinter.Checkbutton(
# frame1,
# text='Interpolate',
# variable=self.interpVar,
# command=self.updateCML_cb)
# self.checkBoxFrame.pack(side='top')
#
# # frame
# self.frameVar = Tkinter.IntVar()
# self.frameVar.set(1)
# self.checkBoxFrame = Tkinter.Checkbutton(
# frame1,
# text='Frame',
# variable=self.frameVar,
# command=self.updateCML_cb)
# self.checkBoxFrame.pack(side='top')
#
# # invert labels color
# self.invertLabelsColorVar = Tkinter.IntVar()
# self.invertLabelsColorVar.set(0)
# self.checkBoxinvertLabelsColor = Tkinter.Checkbutton(
# frame1,
# text='Invert labels color',
# variable=self.invertLabelsColorVar,
# command=self.updateCML_cb)
# #self.checkBoxFrame.pack(side='top')
# self.checkBoxinvertLabelsColor.pack(side='top')
#
# # colormapguiwidget:
# self.launchColormapWidget = Tkinter.Button(
# frame1,
# text="Show colormap settings",
# command=self.showColormapSettings_cb )
# self.launchColormapWidget.pack(side='top', fill='x')
def showColormapSettings_cb(self, event=None):
#print "showColormapSettings_cb"
master = self.master
while hasattr(master, 'deiconify') is False and hasattr(master, 'master') is True:
master = master.master
if hasattr(master, 'deiconify'):
if master.winfo_ismapped() == 0:
master.deiconify()
master.lift()
#else: master.withdraw()
def showLegendButton_cb(self, event=None):
#print "showLegendButton_cb"
if not self.viewer:
return
if not self.legend:
self.createCML()
visible = self.showLegendVar.get()
self.legend.Set(visible=visible)
#self.legend.setWithOwnGui()
if visible == 0:
if self.legend.viewer.currentObject is self.legend:
self.legend.viewer.SetCurrentObject(self.legend.viewer.rootObject)
#do you have to do a redraw to see it?
self.viewer.Redraw()
def showLegend(self):
self.showLegendVar.set(1)
self.showLegendButton_cb()
def hideLegend(self):
self.showLegendVar.set(0)
self.showLegendButton_cb()
def editLegend_cb(self, event=None):
#print "editLegend_cb"
if not self.legend:
self.createCML()
self.legend.showOwnGui()
def resizeCanvases(self, height=None, width=None):
#print "resizeCanvases"
if height is not None:
for c in self.canvas.values():
c.configure(height=height)
self.ogl_cmw.configure(height=height)
self.ogl_cmw.height = height
if width is not None:
for c in self.canvas.values():
c.configure(width=width)
self.ogl_cmw.configure(width=width)
self.ogl_cmw.width = width
def buildRGBMap(self, ind, newVals):
""" ind is 0 for Hue, 1 for Sat etc
newVals is list of new rightXValues for that column
"""
#print "buildRGBMap"
rgbs = []
hsva = map(list, map(RGBA2HSLA_list, self.guiRamp))
for i in range(len(self.guiRamp)):
hsva[i][ind] = newVals[i]
rgbs.append(list(HSLA2RGBA_list(hsva[i])))
return rgbs
def buildRightXVals(self):
""" idList = ['Hue','Sat','Opa','Val']
"""
for id in self.idList:
self.setRightXVals(id)
def setRightXVals(self, idStr):
#each canvas line is drawn at constant y value
#between x1=0 on the left and x2 which is current
#ramp at y (which ranges between 0. and 1.) times 175
#NB: 175 is the effective width of the canvas
#idList = ['Hue','Sat','Val','Opa']
#print "setRightXVals"
if idStr not in self.idList:
return
ind = self.idList.index(idStr)
#apparently these are 200-this value and
#for entire Hue ramp delta x = .66401
hsvaRamp = map(list, map(RGBA2HSLA_list, self.guiRamp))
ramp_vals = numpy.array(hsvaRamp)[:,ind]
norm_n = ramp_vals
if idStr=='Hue':
# self.width is 200 which is 25 border + 175 map
# 25 and 175 are the default values but can be changed
val = self.width - (norm_n * self.xrange)
else:
val = norm_n * self.xrange + self.xoffset
#self.rightXVals[idStr] = map(int, val.tolist())
self.rightXVals[idStr] = val.tolist()
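# Worked example (defaults width=200, xoffset=25, xrange=175): a normalized
# saturation of 0.4 maps to x = 0.4*175 + 25 = 95, while the same value on
# the 'Hue' canvas maps to x = 200 - 0.4*175 = 130 because hue runs backwards.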
def write_cb(self):
fileTypes = [("ColorMap",'*_map.py'), ("any file",'*.*')]
fileBrowserTitle = "Write Color Map"
fileName = self.fileSaveAsk(types=fileTypes,
title=fileBrowserTitle)
if not fileName:
return
self.write(fileName)
def deleteCanvasLines(self, name):
"""delete all lines in ramp canvas"""
assert name in self.idList
#assert name in ['Hue', 'Sat', 'Val', 'Opa' ]
l = self.lines[name]
c = self.canvas[name]
for i in range(len(l)):
c.delete(l[i])
self.lines[name] = []
self.rightXVals[name] = []
self.curxTk.set("0.0")
self.curyTk.set("0.0")
if self.current==name:
self.currentValues = self.rightXVals[name]
self.currentLines = self.lines[name]
def drawHue(self):
j = self.canvasStart
#print "drawHue"
#this is for initializing and resets
l = self.lines['Hue']
c = self.canvas['Hue']
v = self.rightXVals['Hue']
for i in range(len(l)):
c.delete(l[i])
self.lines['Hue'] = []
l = self.lines['Hue']
for i in range(self.lengthRamp):
#nb val is the LENGTH of the line to draw
val = v[i]
col, graybitmap = self.hueColor(val, i)
l.append( c.create_rectangle( j,
(self.lengthRamp-1-i)*self.linesPerRampValue+j,
1+val,
((self.lengthRamp-1-i)+1)*self.linesPerRampValue+j,
fill=col,
outline='') )
if self.current=='Hue':
self.currentLines = l
def drawSaturation(self):
#print "drawSaturation"
c = self.canvas['Sat']
l = self.lines['Sat']
v = self.rightXVals['Sat']
j = self.canvasStart
for i in range(len(l)):
c.delete(l[i])
self.lines['Sat'] = []
l = self.lines['Sat']
for i in range(self.lengthRamp):
val = v[i]
col, graybitmap = self.satColor(val, i)
l.append( c.create_rectangle( j,
(self.lengthRamp-1-i)*self.linesPerRampValue+j,
1+val,
((self.lengthRamp-1-i)+1)*self.linesPerRampValue+j,
fill=col,
outline='') )
if self.current=='Sat':
self.currentLines = l
def drawValue(self):
c = self.canvas['Val']
l = self.lines['Val']
v = self.rightXVals['Val']
j = self.canvasStart
for i in range(len(l)):
c.delete(l[i])
self.lines['Val'] = []
l = self.lines['Val']
for i in range(self.lengthRamp):
val = v[i]
col, graybitmap = self.valColor(val, i)
#print "val i col", val, i, col
l.append( c.create_rectangle( j,
(self.lengthRamp-1-i)*self.linesPerRampValue+j,
1+val,
((self.lengthRamp-1-i)+1)*self.linesPerRampValue+j,
fill=col,
outline='') )
if self.current=='Val':
self.currentLines = l
def drawOpacity(self):
c = self.canvas['Opa']
l = self.lines['Opa']
v = self.rightXVals['Opa']
j = self.canvasStart
for i in range(len(l)):
c.delete(l[i])
self.lines['Opa'] = []
l = self.lines['Opa']
for i in range(self.lengthRamp):
val = v[i]
col, graybitmap = self.opaColor(val)
l.append( c.create_rectangle( j,
(self.lengthRamp-1-i)*self.linesPerRampValue+j,
1+val,
((self.lengthRamp-1-i)+1)*self.linesPerRampValue+j,
outline='',
fill=col,
stipple=graybitmap) )
if self.current=='Opa':
self.currentLines = l
def addCallback(self, function):
assert callable(function)
self.callbacks.append( function )
def callCallbacks(self):
for f in self.callbacks:
f( self )
def valueAtY(self, y):
# compute the value for a given line in the canvas
y = y - self.canvasStart
y = max(y, 0)
y = min(y, self.height-1)
if self.guiMini is not None and self.guiMaxi is not None:
range = self.guiMaxi-self.guiMini
return (float(self.height-y)/(self.height-1))*range + self.guiMini
else:
return int((self.height-y)/self.linesPerRampValue)
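# Illustrative: with guiMini=0.0, guiMaxi=10.0 and height=256, the top canvas
# row maps to roughly 10.0 and the bottom row to roughly 0.0; without mini
# and maxi the method falls back to the integer ramp index instead.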
def indexAtY(self, y):
# compute the ramp index for a given line in the canvas
j = self.canvasStart
Y = min(j+self.height-1, y)
if Y < j:
Y = j
y = int ( (Y-j) / self.linesPerRampValue)
return y
def updateCurTk(self, event):
x = event.x# - self.canvasStart
x = max(x, self.xoffset)
x = min(x, self.width)
val = (x-self.xoffset)/self.xrange
self.curxTk.set("%4.2f"%val)
lVal = self.valueAtY(event.y)
if lVal is not None:
self.curyTk.set("%4.2f"%lVal)
else:
self.curyTk.set('')
if self.labels is not None:
self.labelsComboBox.selectitem(self.indexAtY(event.y))
def updateOGLwidget(self):
self.ogl_cmw.tkRedraw()
def updateRGBMap(self, x, y):
cur_val = list(RGBA2HSLA_list(self.guiRamp[y]))
cur_val[0] = self.currentHue[y]
cur_val[1] = self.currentSat[y]
if self.current=='Hue':
h = ((self.width-x)/self.xrange) * .999 #*.6666666667
cur_val[0] = round(h, 3)
else:
#use keyList = ['Sat', 'Val', 'Opa']
keyList = self.idList[1:]
ind = keyList.index(self.current) + 1
h = (x-self.xoffset)/self.xrange
cur_val[ind] = round(h, 3)
self.guiRamp[y] = list(HSLA2RGBA_list(cur_val))
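# Sketch: drawing at x == self.width on the 'Sat' canvas stores
# round((width-xoffset)/xrange, 3) == 1.0 into slot 1 of the HSLA tuple
# before converting back to RGBA, so guiRamp keeps holding values in [0, 1].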
class OGLColorMapWidget(OGLWidget):
"""This object provides a OpenGL window that displays an RGBA color map"""
def __init__(self, master, cmap, title='OGLColorMapWidget', width=None,
height=None, cnf=None, **kw):
if width is None:
kw['width'] = 19
else:
kw['width'] = width
if height is None:
kw['height'] = 180
else:
kw['height'] = height
self.callback = None
assert isinstance(cmap, ColorMap)
self.ramp = deepCopySeq(cmap.ramp)
self.step = 1 # set to larger values to accelerate redraw
tf = self.topFrame = Tkinter.Frame(master, borderwidth=1)
self.frame = Tkinter.Frame(tf, borderwidth=3, relief='sunken')
if cnf is None:
cnf = {}
apply( OGLWidget.__init__, (self, self.frame, title, cnf, 0), kw )
self.frame.pack(expand=1,fill='both')
self.topFrame.pack(expand=1,fill='both')
self.pack(side='left',expand=1,fill='both')
#self.initTexture()
def tkRedraw(self, *dummy):
"""guillaume: probably useless, the code is almost identical
in the overridden function OGLWidget.tkRedraw
"""
self.update_idletasks()
self.tk.call(self._w, 'makecurrent')
self.initProjection()
GL.glPushMatrix()
apply( self.redraw, dummy ) #guillaume: self.redraw() in OGLWidget.tkRedraw
GL.glFlush()
GL.glPopMatrix()
self.tk.call(self._w, 'swapbuffers')
def redraw(self, line=None):
#print "redraw"
drawLegendOnly(
fullWidth=self.width,
fullHeight=self.height,
ramp=self.ramp,
verticalLegend=True,
roomLeftToLegend=0,
roomBelowLegend=0,
legendShortSide=self.width,
legendLongSide=self.height,
interpolate=False,
selected=False,
)
return
if __name__ == '__main__':
#import pdb
#test = ColorMapGUI()
#def cb(ramp, min, max):
#print len(ramp), min, max
#test.addCallback(cb)
##cm = ColorMap('cmap', filename='/home/rhuey/python/dev/rgb.map')
#cmg = ColorMapGUI(cm, allowRename=1)
l = {}
g = {}
execfile('Tests/rgb256_map.py', g,l)
cm = None
#for name, object in l.items():
# if isinstance(object, ColorMap):
# cm = object
# break
cm = l['cm']
from DejaVu2 import Viewer
vi = Viewer()
cmg = ColorMapGUI(cm, allowRename=1, viewer=vi )
#test.mainloop()
| 37.623791 | 102 | 0.527029 | 8,144 | 77,806 | 5.010928 | 0.112598 | 0.008233 | 0.009263 | 0.005121 | 0.388223 | 0.318288 | 0.270774 | 0.225024 | 0.208091 | 0.191428 | 0 | 0.012273 | 0.36537 | 77,806 | 2,067 | 103 | 37.641993 | 0.814189 | 0.23727 | 0 | 0.427344 | 0 | 0 | 0.022413 | 0 | 0 | 0 | 0 | 0 | 0.003125 | 1 | 0.055469 | false | 0.002344 | 0.014063 | 0 | 0.096875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7531c0f4b4056b7e460c08361366e9674000dff4 | 1,861 | py | Python | migration/rack/commits/commit90f2d4f55668786ffa01bba2a646c7468849c97d.py | tuxji/RACK | 74b59b9a89b48cf2da91d7d9ac23ab3408e32bcf | [
"BSD-3-Clause"
] | 4 | 2021-07-02T08:58:05.000Z | 2022-02-02T03:02:32.000Z | migration/rack/commits/commit90f2d4f55668786ffa01bba2a646c7468849c97d.py | tuxji/RACK | 74b59b9a89b48cf2da91d7d9ac23ab3408e32bcf | [
"BSD-3-Clause"
] | 309 | 2020-11-02T19:46:14.000Z | 2022-03-24T21:35:28.000Z | migration/rack/commits/commit90f2d4f55668786ffa01bba2a646c7468849c97d.py | tuxji/RACK | 74b59b9a89b48cf2da91d7d9ac23ab3408e32bcf | [
"BSD-3-Clause"
] | 7 | 2020-11-30T22:22:06.000Z | 2022-02-02T03:09:12.000Z | # Copyright (c) 2021, Galois, Inc.
#
# All Rights Reserved
#
# This material is based upon work supported by the Defense Advanced Research
# Projects Agency (DARPA) under Contract No. FA8750-20-C-0203.
#
# Any opinions, findings and conclusions or recommendations expressed in this
# material are those of the author(s) and do not necessarily reflect the views
# of the Defense Advanced Research Projects Agency (DARPA).
from migration_helpers.name_space import rack
from ontology_changes import (
AtMost,
Commit,
ChangeCardinality,
ChangePropertyIsATypeOf,
RenameClass,
SingleValue,
)
ANALYSIS = rack("ANALYSIS")
PROV_S = rack("PROV-S")
commit = Commit(
number="90f2d4f55668786ffa01bba2a646c7468849c97d",
changes=[
# ANALYSIS.sadl
RenameClass(
from_name_space=ANALYSIS,
from_name="ANALYSIS_REPORT",
to_name_space=ANALYSIS,
to_name="ANALYSIS_OUTPUT",
),
ChangeCardinality(
name_space=ANALYSIS,
class_id="ANALYSIS_OUTPUT",
property_id="result",
to_cardinality=AtMost(1),
),
ChangePropertyIsATypeOf(
name_space=ANALYSIS,
class_id="ANALYSIS_OUTPUT",
property_id="analyzes",
from_name_space=PROV_S,
from_property_id="wasDerivedFrom",
to_name_space=PROV_S,
to_property_id="wasImpactedBy",
),
ChangeCardinality(
name_space=ANALYSIS,
class_id="ANALYSIS_ANNOTATION",
property_id="fromReport",
to_cardinality=SingleValue(),
),
ChangeCardinality(
name_space=ANALYSIS,
class_id="ANALYSIS_ANNOTATION",
property_id="annotationType",
to_cardinality=SingleValue(),
),
],
)
| 29.078125 | 78 | 0.631381 | 183 | 1,861 | 6.196721 | 0.453552 | 0.071429 | 0.089947 | 0.077601 | 0.300705 | 0.300705 | 0.300705 | 0.206349 | 0.206349 | 0.121693 | 0 | 0.03165 | 0.286943 | 1,861 | 63 | 79 | 29.539683 | 0.822909 | 0.222461 | 0 | 0.367347 | 0 | 0 | 0.15122 | 0.027875 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.040816 | 0 | 0.040816 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7531c29be32f4d71954813624b8633e9ae001662 | 1,215 | py | Python | part5/operations_overloading_3_2.py | MADTeacher/python_basics | 06ae43d8063c1c8426a4fbb53443b6d1ee727951 | [
"MIT"
] | null | null | null | part5/operations_overloading_3_2.py | MADTeacher/python_basics | 06ae43d8063c1c8426a4fbb53443b6d1ee727951 | [
"MIT"
] | null | null | null | part5/operations_overloading_3_2.py | MADTeacher/python_basics | 06ae43d8063c1c8426a4fbb53443b6d1ee727951 | [
"MIT"
] | 4 | 2020-10-04T12:24:14.000Z | 2022-01-16T17:01:59.000Z | class MyRange:
def __init__(self, start, stop, step=1):
self.start = start
self.stop = stop
self.step = step
def __iter__(self):
return MyRangeIterator(self)
class MyRangeIterator:
def __init__(self, myrange_object):
self.my_range = myrange_object
self.count_value = myrange_object.start - myrange_object.step
def __next__(self):
if self.count_value+self.my_range.step >= self.my_range.stop:
raise StopIteration
self.count_value += self.my_range.step
return self.count_value
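# Illustrative manual use of the iterator protocol implemented above:
# it = iter(MyRange(0, 4)) # returns a fresh MyRangeIterator
# next(it) -> 0, next(it) -> 1, next(it) -> 2, next(it) -> 3, then
# StopIteration is raised, which is exactly what the for-loops below rely on.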
if __name__ == "__main__":
print("MyRange")
my_range = MyRange(0, 4)
for it in my_range:
print(it, end=' ')
print()
for it in MyRange(0, 12, 4):
print(it, end=' ')
print()
# my_iter = iter(my_range)
# print(next(my_iter))
# print(next(my_iter))
# print(next(my_iter))
# print(next(my_iter))
# print(next(my_iter))
test_range = MyRange(0, 3)
for firtst_it in test_range:
print(f'firtst_it = {firtst_it}')
for second_it in test_range:
print(f'second_it = {second_it}, '
f'firtst_it*second_it = {firtst_it*second_it}') | 27.613636 | 69 | 0.609877 | 166 | 1,215 | 4.114458 | 0.216867 | 0.071742 | 0.080527 | 0.10981 | 0.250366 | 0.250366 | 0.194729 | 0.10981 | 0.10981 | 0.10981 | 0 | 0.010204 | 0.274074 | 1,215 | 44 | 70 | 27.613636 | 0.764172 | 0.106173 | 0 | 0.129032 | 0 | 0 | 0.1 | 0.019444 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0 | 0.032258 | 0.258065 | 0.225806 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
753a71640be1f2fee9267c9fd8f2b67b64a0df78 | 2,484 | py | Python | Chapter3/Cantonese/Onset-only/functional_load_onset_can.py | AndreaCeolin/Functionalism_Contrast_Change | 1557a4c76c253c7db292e503d6bd5cff5cea2d93 | [
"MIT"
] | null | null | null | Chapter3/Cantonese/Onset-only/functional_load_onset_can.py | AndreaCeolin/Functionalism_Contrast_Change | 1557a4c76c253c7db292e503d6bd5cff5cea2d93 | [
"MIT"
] | null | null | null | Chapter3/Cantonese/Onset-only/functional_load_onset_can.py | AndreaCeolin/Functionalism_Contrast_Change | 1557a4c76c253c7db292e503d6bd5cff5cea2d93 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
'''
This script has been used to perform functional load calculations on CANCORP.
author: Andrea Ceolin
date: February 2021
'''
from collections import Counter
import math
'''
Get the token frequencies of the corpus
'''
words_tokens = Counter()
for line in open('cantonese_corpus.txt', 'r'):
word, counts = line.split()
words_tokens[word] += int(counts)
print(sum(words_tokens.values()))
'''
Get the type frequencies of the corpus
The number of types is lower than 5000, because of homophones
'''
words_types = {key:1 for key in words_tokens}
print(sum(words_types.values()))
'''
These are the main functions used to compute corpus entropy and the entropy loss caused by a phoneme merger
'''
def entropy(words_dic, k=0):
'''
:param words_dic: a dictionary containing words and their corpus frequency
:param k: the order of the Markov model
:return: entropy
'''
total = sum(words_dic.values()) #ngram total
sommation = 0
for value in words_dic.values(): # summation over the distribution
sommation += value/total * math.log(value/total, 2)
sommation = sommation / (k+1)
return -sommation
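# Worked example (illustrative): for a two-word corpus with equal counts,
# entropy({'a': 5, 'b': 5}) == -(0.5*log2(0.5) + 0.5*log2(0.5)) == 1.0 bit;
# with the default k=0, the 1/(k+1) factor leaves the sum unchanged.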
def functional_load_onset(words_dic, phon1, phon2):
'''
:param words_dic: a dictionary containing words and their corpus frequency
:param phon1: phoneme replaced
:param phon2: phoneme used as replacement
:return: the difference in entropy between the two states
'''
merged_words = Counter()
for word, count in words_dic.items():
new_word = ''
#ignore the last letter of the word, which is the tone
for index, letter in enumerate(word[:-1]):
#if the following letter is a tone, then the current letter must be ignored, because it is a coda
if word[index+1].isdigit():
new_word += letter
elif letter in {phon1, phon2}:
new_word += '#'
else:
new_word += letter
#add the last letter
new_word += word[-1]
merged_words[new_word] += count
print(round((entropy(words_dic)-entropy(merged_words))/entropy(words_dic),4))
'''
This prints the functional load for the pairs mentioned in the work
'''
functional_load_onset(words_tokens, 'n', 'l')
functional_load_onset(words_tokens, 'n', 'd')
functional_load_onset(words_tokens, 'n', 't')
functional_load_onset(words_tokens, 'n', 's')
functional_load_onset(words_tokens, 'n', 'z')
functional_load_onset(words_tokens, 'n', 'c') | 29.571429 | 109 | 0.67351 | 348 | 2,484 | 4.678161 | 0.405172 | 0.067568 | 0.081695 | 0.103194 | 0.19656 | 0.19656 | 0.08231 | 0.08231 | 0.08231 | 0.08231 | 0 | 0.01286 | 0.217391 | 2,484 | 84 | 110 | 29.571429 | 0.824588 | 0.276167 | 0 | 0.055556 | 0 | 0 | 0.023876 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.055556 | 0 | 0.138889 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
753d0a15ee14b8c547bc4f3f4bb5ef8f44343ac6 | 15,401 | py | Python | src/agents.py | YounesZ/RL_trading | 69f1bfad4cdfac7a53ac64e3c8404477cbafeb74 | [
"MIT"
] | 1 | 2018-10-20T07:53:21.000Z | 2018-10-20T07:53:21.000Z | src/agents.py | YounesZ/RL_trading | 69f1bfad4cdfac7a53ac64e3c8404477cbafeb74 | [
"MIT"
] | null | null | null | src/agents.py | YounesZ/RL_trading | 69f1bfad4cdfac7a53ac64e3c8404477cbafeb74 | [
"MIT"
] | null | null | null | import random
import matplotlib.pyplot as plt
import tensorflow as tf
import keras as K
from os import path
from src.lib import *
#from src.logging import *
from copy import deepcopy
from datetime import datetime
class Agent:
def __init__(self, model, batch_size=12, discount_factor=0.95,
buffer_size=200, prediction_model=None, planning_horizon=10):
self.model = model
self.p_model = prediction_model
self.batch_size = batch_size
self.discount_factor = discount_factor
self.memory = []
self.memory_short= []
self.buffer_size= buffer_size
self.planning_horizon = planning_horizon
def remember(self, state, action, reward, next_state, done, next_valid_actions):
self.memory_short.append( (state, action, reward, next_state, done, next_valid_actions) )
if done:
            # Normalize the episode's rewards (subtract mean, divide by std)
            rew_avg = np.mean( [x[2] for x in self.memory_short] )
            rew_std = np.std( [x[2] for x in self.memory_short] )
            # Transfer to long-term memory
            for ix in self.memory_short:
                temp = deepcopy( (ix[0], ix[1], (ix[2]-rew_avg)/rew_std, ix[3], ix[4], ix[5]) )
                self.memory.append( temp )
self.memory_short= []
def replay(self, window_size=100):
if len(self.memory)>self.buffer_size:
sample_id = [np.random.randint(len(self.memory)) for x in range(self.batch_size)]
rewards = [self.get_discounted_reward(x, window_size) for x in sample_id]
batch = [self.memory[x][0].flatten() for x in sample_id]
self.model.fit(batch, [], rewards)
"""
for state, action, reward, next_state, done, next_valid_actions in batch:
self.model.fit(np.concatenate([x[0] for x in batch], axis=1), [], [x[2] for x in batch])
q = deepcopy(reward)
if not done:
q += self.discount_factor * np.nanmax(self.get_q_valid(next_state, next_valid_actions))
self.model.fit(state, action, q)
"""
def get_discounted_reward(self, bin_id, window_size):
max_bin = np.minimum( bin_id+self.batch_size, ( int( bin_id/window_size ) + 1 ) * window_size )
batch = self.memory[bin_id:max_bin]
rews = [x[2].astype(float) for x in batch]
disc = np.power(self.discount_factor, range(len(batch)))
return np.sum( np.multiply(disc, rews) )
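    # Hedged illustration (added): with discount_factor = 0.95 the per-step
    # weights are np.power(0.95, range(n)) = [1.0, 0.95, 0.9025, ...], so
    # rewards further in the future contribute less to the discounted sum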
def get_q_valid(self, state, valid_actions):
self.p_model.model.set_weights(self.model.model.get_weights())
q = self.p_model.predict(state.T)
#assert np.max(q)<=1 and np.min(q)>=-1
"""
# Constrain value within range
q_valid = [np.nan] * len(q)
for action in valid_actions:
q_valid[action] = q[action]
"""
return q #q_valid
def act(self, state, exploration, valid_actions):
if np.random.random() > exploration:
q_valid = self.get_q_valid(state, valid_actions)[0]
return np.maximum( np.minimum(q_valid, valid_actions[1]), valid_actions[0] )
return np.random.random(1) + valid_actions[0]
def save(self, fld):
makedirs(fld)
attr = {
'batch_size':self.batch_size,
'discount_factor':self.discount_factor,
#'memory':self.memory
}
pickle.dump(attr, open(os.path.join(fld, 'agent_attr.pickle'),'wb'))
self.model.save(fld)
def load(self, fld):
path = os.path.join(fld, 'agent_attr.pickle')
print(path)
attr = pickle.load(open(path,'rb'))
for k in attr:
setattr(self, k, attr[k])
self.model.load(fld)
def add_dim(x, shape):
return np.reshape(x, (1,) + shape)
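def _example_add_dim():
    # Hedged sketch (added for illustration): add_dim prepends a batch axis,
    # e.g. a (32, 1) state becomes (1, 32, 1)
    x = np.zeros((32, 1))
    assert add_dim(x, (32, 1)).shape == (1, 32, 1)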
class QModelKeras:
# ref: https://keon.io/deep-q-learning/
def init(self):
pass
def build_model(self):
pass
def __init__(self, state_shape, n_action, wavelet_channels=0):
self.state_shape = state_shape
self.n_action = n_action
self.attr2save = ['state_shape','n_action','model_name']
self.wavelet_channels= wavelet_channels
self.init()
def save(self, fld):
makedirs(fld)
with open(os.path.join(fld, 'model.json'), 'w') as json_file:
json_file.write(self.model.to_json())
self.model.save_weights(os.path.join(fld, 'weights.hdf5'))
attr = dict()
for a in self.attr2save:
attr[a] = getattr(self, a)
pickle.dump(attr, open(os.path.join(fld, 'Qmodel_attr.pickle'),'wb'))
def load(self, fld, learning_rate):
json_str = open(os.path.join(fld, 'model.json')).read()
self.model = keras.models.model_from_json(json_str)
self.model.load_weights(os.path.join(fld, 'weights.hdf5'))
self.model.compile(loss='mse', optimizer=keras.optimizers.Adam(lr=learning_rate))
attr = pickle.load(open(os.path.join(fld, 'Qmodel_attr.pickle'), 'rb'))
for a in attr:
setattr(self, a, attr[a])
def predict(self, state):
# UNCOMMENT for modular wavelet channels
"""
# Reshape state-space if wavelet transformed
if self.wavelet_channels>0:
rshp_state = self.modular_state_space(state)
else:
rshp_state = add_dim(state, self.state_shape)
"""
#rshp_state = add_dim(state, self.state_shape)
q = self.model.predict( state )[0]
"""
if np.isnan(np.max(q, axis=1)).any():
print('state'+str(state))
print('q'+str(q))
raise ValueError
"""
return q
def fit(self, state, action, q_action):
"""
q = self.predict(state.T)
q[action] = q_action
"""
q = q_action
# UNCOMMENT for modular wavelet channels
"""
if self.wavelet_channels>0:
rshp_state = self.modular_state_space(state)
else:
rshp_state = add_dim(state, self.state_shape)
self.model.fit( rshp_state, add_dim(q, (self.n_action,)), epochs=1, verbose=0)
"""
self.model.fit( np.array(state), q, epochs=1, verbose=0)
def modular_state_space(self, state):
new_shape = (len(state) / np.power(2, list(range(1, self.wavelet_channels + 1)) + [self.wavelet_channels])).astype(int)
cum_shape = np.cumsum(new_shape).astype(int)
# rshp_state = [add_dim(state[x:y], (z,1)) for x,y,z in zip( np.insert(cum_shape,0,0), cum_shape, new_shape )]
rshp_state = [state[x:y].T for x, y in zip(np.insert(cum_shape[:-1], 0, 0), cum_shape)]
return rshp_state
class QModelMLP(QModelKeras):
    # multi-layer perceptron (MLP), i.e., dense only
def init(self):
self.qmodel = 'MLP'
def build_model(self, n_units, learning_rate, activation='relu'):
#if self.wavelet_channels==0:
# Purely dense MLP
model = K.models.Sequential()
model.add( K.layers.Dense( n_units[0], input_shape=(np.prod(self.state_shape),)) )
for i in range(1, len(n_units)):
model.add(keras.layers.Dense(n_units[i], activation=activation))
#model.add(keras.layers.Dropout(drop_rate))
model.add(keras.layers.Dense(self.n_action, activation='linear'))
"""
else:
# Composite architecture : 1MLP for each decomposition channel
max_scales = int(np.log(self.state_shape[0])/np.log(2))
inputs = np.power(2, range(1, self.wavelet_channels+1))[max_scales-self.wavelet_channels-1:][::-1]
inp_layers = []
inp_models = []
for ii in np.append(inputs, inputs[-1]):
# Make a model
input = keras.layers.Input(shape=(ii,))
hid1 = keras.layers.Dense(int(1.5*ii), activation='relu')(input)
hid2 = keras.layers.Dense(int(1.5*ii), activation='relu')(hid1)
inp_layers += [input]
inp_models += [hid2]
# Make composite
composite = keras.layers.Concatenate(axis=-1)(inp_models)
compoD1 = keras.layers.Dense(self.state_shape[0])(composite)
compoD2 = keras.layers.Dense( int(self.state_shape[0]*.75) )(compoD1)
output = keras.layers.Dense(self.n_action, activation='linear')(compoD2)
model = keras.models.Model(inputs=inp_layers, outputs=output)
"""
model.compile(loss='mse', optimizer=keras.optimizers.Adam(lr=learning_rate))
self.model = model
self.model_name = self.qmodel + str(n_units)
class PGModelMLP(QModelKeras):
    # multi-layer perceptron (MLP), i.e., dense only
def init(self):
self.qmodel = 'MLP_PG'
def build_model(self, n_hidden, learning_rate, activation='relu', input_size=32, batch_size=8):
# --- Purely dense MLP
input = keras.layers.Input((batch_size, input_size), name='main_input')
# Hidden
hidden = []
in_lay = input
for ii in n_hidden:
hidden += [keras.layers.Dense(units=ii, activation=activation)(in_lay)]
in_lay = hidden[-1]
# Output
#output1 = keras.layers.Dense(units=self.n_action, activation='linear')(in_lay)
#output2 = keras.layers.Dense(units=self.n_action, activation='softmax')(output1)
output1 = keras.layers.Dense(units=self.n_action, activation='tanh')(in_lay)
model = keras.models.Model(inputs=input, outputs=output1)
#model = keras.models.Model(inputs=input, outputs=(output1, output2) )
#model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(lr=learning_rate))
model.compile(loss='mse', optimizer=keras.optimizers.Adam(lr=learning_rate))
self.model = model
self.model_name = self.qmodel + str(n_hidden)
self.learning_rate = learning_rate
class QModelRNN(QModelKeras):
"""
https://keras.io/getting-started/sequential-model-guide/#example
note param doesn't grow with len of sequence
"""
def _build_model(self, Layer, n_hidden, dense_units, learning_rate, activation='relu'):
model = keras.models.Sequential()
model.add(keras.layers.Reshape(self.state_shape, input_shape=self.state_shape))
m = len(n_hidden)
for i in range(m):
model.add(Layer(n_hidden[i],
return_sequences=(i<m-1)))
for i in range(len(dense_units)):
model.add(keras.layers.Dense(dense_units[i], activation=activation))
model.add(keras.layers.Dense(self.n_action, activation='linear'))
model.compile(loss='mse', optimizer=keras.optimizers.Adam(lr=learning_rate))
self.model = model
self.model_name = self.qmodel + str(n_hidden) + str(dense_units)
class QModelLSTM(QModelRNN):
def init(self):
self.qmodel = 'LSTM'
def build_model(self, n_hidden, dense_units, learning_rate, activation='relu'):
Layer = keras.layers.LSTM
self._build_model(Layer, n_hidden, dense_units, learning_rate, activation)
class QModelGRU(QModelRNN):
def init(self):
self.qmodel = 'GRU'
def build_model(self, n_hidden, dense_units, learning_rate, activation='relu'):
Layer = keras.layers.GRU
self._build_model(Layer, n_hidden, dense_units, learning_rate, activation)
class QModelConv(QModelKeras):
"""
ref: https://keras.io/layers/convolutional/
"""
def init(self):
self.qmodel = 'Conv'
def build_model(self,
filter_num, filter_size, dense_units,
learning_rate, activation='relu', dilation=None, use_pool=None):
if use_pool is None:
use_pool = [True]*len(filter_num)
if dilation is None:
dilation = [1]*len(filter_num)
model = keras.models.Sequential()
model.add(keras.layers.Reshape(self.state_shape, input_shape=self.state_shape))
for i in range(len(filter_num)):
model.add(keras.layers.Conv1D(filter_num[i], kernel_size=filter_size[i], dilation_rate=dilation[i],
activation=activation, use_bias=True))
if use_pool[i]:
model.add(keras.layers.MaxPooling1D(pool_size=2))
model.add(keras.layers.Flatten())
for i in range(len(dense_units)):
model.add(keras.layers.Dense(dense_units[i], activation=activation))
model.add(keras.layers.Dense(self.n_action, activation='linear'))
model.compile(loss='mse', optimizer=keras.optimizers.Adam(lr=learning_rate))
self.model = model
self.model_name = self.qmodel + str([a for a in
zip(filter_num, filter_size, dilation, use_pool)
])+' + '+str(dense_units)
class QModelConvRNN(QModelKeras):
"""
https://keras.io/getting-started/sequential-model-guide/#example
note param doesn't grow with len of sequence
"""
def _build_model(self, RNNLayer, conv_n_hidden, RNN_n_hidden, dense_units, learning_rate,
conv_kernel_size=3, use_pool=False, activation='relu'):
model = keras.models.Sequential()
model.add(keras.layers.Reshape(self.state_shape, input_shape=self.state_shape))
for i in range(len(conv_n_hidden)):
model.add(keras.layers.Conv1D(conv_n_hidden[i], kernel_size=conv_kernel_size,
activation=activation, use_bias=True))
if use_pool:
model.add(keras.layers.MaxPooling1D(pool_size=2))
m = len(RNN_n_hidden)
for i in range(m):
model.add(RNNLayer(RNN_n_hidden[i],
return_sequences=(i<m-1)))
for i in range(len(dense_units)):
model.add(keras.layers.Dense(dense_units[i], activation=activation))
model.add(keras.layers.Dense(self.n_action, activation='linear'))
model.compile(loss='mse', optimizer=keras.optimizers.Adam(lr=learning_rate))
self.model = model
self.model_name = self.qmodel + str(conv_n_hidden) + str(RNN_n_hidden) + str(dense_units)
class QModelConvLSTM(QModelConvRNN):
def init(self):
self.qmodel = 'ConvLSTM'
def build_model(self, conv_n_hidden, RNN_n_hidden, dense_units, learning_rate,
conv_kernel_size=3, use_pool=False, activation='relu'):
Layer = keras.layers.LSTM
self._build_model(Layer, conv_n_hidden, RNN_n_hidden, dense_units, learning_rate,
conv_kernel_size, use_pool, activation)
class QModelConvGRU(QModelConvRNN):
def init(self):
self.qmodel = 'ConvGRU'
def build_model(self, conv_n_hidden, RNN_n_hidden, dense_units, learning_rate,
conv_kernel_size=3, use_pool=False, activation='relu'):
Layer = keras.layers.GRU
self._build_model(Layer, conv_n_hidden, RNN_n_hidden, dense_units, learning_rate,
conv_kernel_size, use_pool, activation)
def load_model(fld, learning_rate):
s = open(os.path.join(fld,'QModel.txt'),'r').read().strip()
qmodels = {
'Conv':QModelConv,
'DenseOnly':QModelMLP,
'MLP':QModelMLP,
'LSTM':QModelLSTM,
'GRU':QModelGRU,
}
qmodel = qmodels[s](None, None)
qmodel.load(fld, learning_rate)
return qmodel
def test_ddpg():
import gym
sess = tf.Session()
K.backend.set_session(sess)
env = gym.make("Pendulum-v0")
    actor_critic = DDPGModelMLP(sess, exploration_rate=0.05, buffer_size=200, tau=0.8)  # assumed to be provided by the src.lib star import
actor_critic.build_graph(sess, [16, 16, 12, 6], [[16, 16], [16, 10]], 1e-4, env.observation_space.shape, env.action_space.shape)
num_trials = 10000
trial_len = 500
cur_state = env.reset()
n_steps = 1
all_rew = [[], [], []]
FF = plt.figure()
AX = FF.add_subplot(111)
plt.ion()
plt.draw()
while True:
        print('Step {}'.format(n_steps))  # env.render()
cur_state = cur_state.reshape((1, env.observation_space.shape[0]))
action = actor_critic.act(cur_state, env.action_space.sample())
action = action.reshape( (1, env.action_space.shape[0]) )
new_state, reward, done, _ = env.step(action)
new_state = new_state.reshape( (1, env.observation_space.shape[0]) )
actor_critic.remember(cur_state, action, reward, new_state, done)
actor_critic.replay()
cur_state = new_state
# Plot rewards
all_rew = process_rewards(all_rew, reward, 20, 0.5)
if n_steps%20==0:
viz_perf( n_steps, all_rew, AX)
plt.pause(0.05)
FF.canvas.draw()
n_steps += 1
def process_rewards(rew, reward, win_size, gamma):
# Append to raw reward
rew[0] += [reward]
    # Slice the most recent rewards over the averaging window
    inf_lim = np.minimum(len(rew[0]), win_size)
    window = rew[0][-inf_lim:]
    # Compute the moving average
    rew[1] += [np.mean(window)]
    # Compute the exponentially-weighted average (newest rewards weigh most);
    # np.ravel makes this robust to scalar or length-1 array rewards
    vals = np.ravel(window[::-1]).astype(float)
    weights = np.exp(-gamma * np.arange(len(vals)))
    rew[2] += [float(np.sum(weights * vals) / np.sum(weights))]
return rew
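def _example_process_rewards():
    # Hedged sketch (added for illustration): a constant reward stream should
    # drive both the moving and the exponentially-weighted average to that value
    rew = [[], [], []]
    for _ in range(5):
        rew = process_rewards(rew, 1.0, win_size=3, gamma=0.5)
    assert abs(rew[1][-1] - 1.0) < 1e-9 and abs(rew[2][-1] - 1.0) < 1e-9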
def viz_perf(x, rewards, ax):
# Make a figure
colors = ['r', 'b', 'k']
labels = ['raw', 'moving average', 'exp. average']
[ax.plot(range(x), ix, ic, label=il) for ix, ic, il in zip(rewards, colors, labels)]
return
# ========
# LAUNCHER
# ========
if __name__ == '__main__':
test_ddpg() | 31.62423 | 129 | 0.703006 | 2,366 | 15,401 | 4.396872 | 0.148774 | 0.034894 | 0.021244 | 0.031049 | 0.46304 | 0.430261 | 0.385946 | 0.364414 | 0.29876 | 0.26377 | 0 | 0.013108 | 0.147977 | 15,401 | 487 | 130 | 31.62423 | 0.779683 | 0.105383 | 0 | 0.24911 | 0 | 0 | 0.033439 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.131673 | false | 0.007117 | 0.032028 | 0.003559 | 0.238434 | 0.007117 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
753d88404d166a84eabbbafeb2f4b21a47e63a44 | 3,036 | py | Python | day03.py | alberto-re/adventoofcode2021 | 0cb32368ec2d0418d5b36fd566aaee3ed979017e | [
"MIT"
] | null | null | null | day03.py | alberto-re/adventoofcode2021 | 0cb32368ec2d0418d5b36fd566aaee3ed979017e | [
"MIT"
] | null | null | null | day03.py | alberto-re/adventoofcode2021 | 0cb32368ec2d0418d5b36fd566aaee3ed979017e | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# --- Day 3: Binary Diagnostic ---
from typing import Dict, List, Tuple
from copy import deepcopy
MOST_COMMON = 1
LEAST_COMMON = 0
EXAMPLE = """\
00100
11110
10110
10111
10101
01111
00111
11100
10000
11001
00010
01010\
"""
def lines_to_matrix(lines: List[str]) -> List[List[str]]:
matrix = []
for n, line in enumerate(lines):
matrix.append([])
for bit in list(line.rstrip()):
matrix[n].append(bit)
return matrix
def count_bits_by_position(report: List[List[str]]) -> List[Dict[str, int]]:
bit_count = []
for col in range(len(report[0])):
bit_count.append({"0": 0, "1": 0})
for row in range(len(report)):
digit = report[row][col]
bit_count[col][digit] += 1
return bit_count
def compute_gamma_epsilon(bit_count: List[Dict[str, int]]) -> Tuple[str, str]:
gamma, epsilon = "", ""
for i in range(len(bit_count)):
if bit_count[i]["0"] > bit_count[i]["1"]:
gamma += "0"
epsilon += "1"
else:
gamma += "1"
epsilon += "0"
return gamma, epsilon
def compute_rating(report: List[List[str]], criteria: int) -> Tuple[str, str]:
considered = deepcopy(report)
for i in range(len(report[0])):
bit_count = count_bits_by_position(considered)
if bit_count[i]["0"] == bit_count[i]["1"]:
filter_match = str(criteria)
else:
filter_match = sorted(bit_count[i].items(), key=lambda kv: kv[1])[criteria][
0
]
considered = list(filter(lambda x: x[i] == filter_match, considered))
if len(considered) == 1:
break
return "".join(considered.pop())
report = lines_to_matrix(EXAMPLE.split("\n"))
assert len(report) == 12
assert len(report[0]) == 5
bit_count = count_bits_by_position(report)
assert bit_count[0]["0"] == 5
assert bit_count[0]["1"] == 7
gamma, epsilon = compute_gamma_epsilon(bit_count)
assert gamma == "10110"
assert epsilon == "01001"
power_consumption = int(gamma, base=2) * int(epsilon, base=2)
assert power_consumption == 198
oxygen_generator_rating = compute_rating(report, MOST_COMMON)
co2_scrubber_rating = compute_rating(report, LEAST_COMMON)
assert oxygen_generator_rating == "10111"
assert co2_scrubber_rating == "01010"
life_support_rating = int(oxygen_generator_rating, base=2) * int(
co2_scrubber_rating, base=2
)
assert life_support_rating == 230
with open("input/day03.txt") as f:
report = lines_to_matrix(f.readlines())
bit_count = count_bits_by_position(report)
gamma, epsilon = compute_gamma_epsilon(bit_count)
power_consumption = int(gamma, base=2) * int(epsilon, base=2)
print(f"Part one solution: {power_consumption}")
oxygen_generator_rating = compute_rating(report, MOST_COMMON)
co2_scrubber_rating = compute_rating(report, LEAST_COMMON)
life_support_rating = int(oxygen_generator_rating, base=2) * int(
co2_scrubber_rating, base=2
)
print(f"Part two solution: {life_support_rating}")
| 25.3 | 88 | 0.65975 | 425 | 3,036 | 4.505882 | 0.258824 | 0.075196 | 0.023499 | 0.039687 | 0.402611 | 0.355614 | 0.345692 | 0.244386 | 0.244386 | 0.22141 | 0 | 0.055072 | 0.204545 | 3,036 | 119 | 89 | 25.512605 | 0.737888 | 0.017787 | 0 | 0.181818 | 0 | 0 | 0.067785 | 0.007047 | 0 | 0 | 0 | 0 | 0.113636 | 1 | 0.045455 | false | 0 | 0.022727 | 0 | 0.113636 | 0.022727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
753f674ed26d4491ca57d504a35abe01191fc849 | 1,760 | py | Python | 3_1.py | Nutella-duck/nutella_agent | f9b28dcf4a97ad316732f6ce891037f4391da18d | [
"MIT"
] | null | null | null | 3_1.py | Nutella-duck/nutella_agent | f9b28dcf4a97ad316732f6ce891037f4391da18d | [
"MIT"
] | null | null | null | 3_1.py | Nutella-duck/nutella_agent | f9b28dcf4a97ad316732f6ce891037f4391da18d | [
"MIT"
] | null | null | null | # 데이터 다운로드
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
# Vectorize the data
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
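# Hedged illustration (added, not in the original script): each review becomes
# a 10,000-dimensional multi-hot vector, with 1.0 at every word index it contains
assert vectorize_sequences([[3, 5]])[0][3] == 1.0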
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
from nutellaAgent import hpo
from nutellaAgent import nu_simple_fmin
from sklearn.metrics import roc_auc_score
import sys
def objective(params):
    # Build the model
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(units=params['units'], activation='relu', input_shape=(10000,)))
model.add(layers.Dropout(params['dropout']))
model.add(layers.Dense(units=params['units'], activation='relu'))
model.add(layers.Dropout(params['dropout']))
model.add(layers.Dense(1, activation='sigmoid'))
    # Compile the model
model.compile(optimizer=params['optimizer'],
loss='binary_crossentropy',
metrics=['acc'])
    # Hold out validation data
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
    # Train the model
history = model.fit(partial_x_train,
partial_y_train,
epochs=params['epochs'],
batch_size=params['batch_size'],
validation_data=(x_val, y_val))
loss, acc = model.evaluate(x_test, y_test)
return {'loss': loss, 'status': hpo.STATUS_OK}
best, trials = nu_simple_fmin("", objective)
| 27.936508 | 89 | 0.684659 | 234 | 1,760 | 4.948718 | 0.376068 | 0.025907 | 0.060449 | 0.049223 | 0.159758 | 0.159758 | 0.159758 | 0.159758 | 0.159758 | 0.091537 | 0 | 0.028812 | 0.191477 | 1,760 | 62 | 90 | 28.387097 | 0.784961 | 0.020455 | 0 | 0.05 | 0 | 0 | 0.064065 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.2 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
75447c4c912f2a5090056786cfa6c9e6c3619652 | 413 | py | Python | configs/video/camvid/memory_r50-d8_640x640_80k_camvid_video.py | Xlinford/video_mmseg | 28c905b38b10f857301a584ce95949ecf1ec7e0d | [
"Apache-2.0"
] | null | null | null | configs/video/camvid/memory_r50-d8_640x640_80k_camvid_video.py | Xlinford/video_mmseg | 28c905b38b10f857301a584ce95949ecf1ec7e0d | [
"Apache-2.0"
] | null | null | null | configs/video/camvid/memory_r50-d8_640x640_80k_camvid_video.py | Xlinford/video_mmseg | 28c905b38b10f857301a584ce95949ecf1ec7e0d | [
"Apache-2.0"
] | null | null | null | _base_ = [
'../../_base_/models/memory_r50-d8.py', '../../_base_/datasets/camvid_video.py',
'../../_base_/default_runtime.py', '../../_base_/schedules/schedule_80k.py'
]
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005)
model = dict(
decode_head=dict(num_classes=12), auxiliary_head=dict(num_classes=12)
)
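# Hedged note (added): 'slide' inference tiles each image into 640x640 crops
# taken every 512 pixels (128-pixel overlap) and stitches the predictions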
test_cfg = dict(mode='slide', crop_size=(640, 640), stride=(512, 512))
| 37.545455 | 84 | 0.677966 | 61 | 413 | 4.245902 | 0.688525 | 0.069498 | 0.084942 | 0.138996 | 0.15444 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08311 | 0.096852 | 413 | 10 | 85 | 41.3 | 0.61126 | 0 | 0 | 0 | 0 | 0 | 0.364078 | 0.34466 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
754ca8a8829402ac4848e0cd1a56270189f5a5eb | 2,195 | py | Python | mamprefs/__init__.py | arubertoson/maya-mamprefs | 0bf972322416499c51b67ad083600d3cbaa5d0e7 | [
"MIT"
] | null | null | null | mamprefs/__init__.py | arubertoson/maya-mamprefs | 0bf972322416499c51b67ad083600d3cbaa5d0e7 | [
"MIT"
] | 7 | 2016-01-10T09:48:48.000Z | 2016-07-28T20:40:55.000Z | mamprefs/__init__.py | arubertoson/maya-mamprefs | 0bf972322416499c51b67ad083600d3cbaa5d0e7 | [
"MIT"
] | null | null | null | """
"""
import os
import json
import logging
from maya import cmds
__title__ = 'mayaprefs'
__version__ = '0.1.6'
__author__ = 'Marcus Albertsson <marcus.arubertoson@gmail.com>'
__url__ = 'http://github.com/arubertoson/maya-mayaprefs'
__license__ = 'MIT'
__copyright__ = 'Copyright 2016 Marcus Albertsson'
logger = logging.getLogger(__name__)
# Constants
cwd = ''
cfg_paths = []
config = {}
def set_cwd_and_cfg_paths():
"""
Set current work dir and check if prefs exists in parent dir.
"""
global cwd, cfg_paths
cwd = os.path.abspath(os.path.dirname(__file__))
_cwd_parent = os.path.abspath(os.path.join(cwd, os.pardir))
cfg_paths = [cwd]
if os.path.isdir(os.path.join(_cwd_parent, 'prefs')):
cfg_paths.append(os.path.join(_cwd_parent, 'prefs'))
class Config(dict):
"""
Config dict object.
Written to make mamprefs isolated from the userPref file.
"""
def __init__(self, file_=None):
config_file = file_ or os.path.join(cwd, '.mamprefs')
with open(config_file, 'rb') as f:
data = json.loads(f.read())
self.config_file = config_file
super(Config, self).__init__(data)
def __setitem__(self, key, value):
super(Config, self).__setitem__(key, value)
self.dump()
def dump(self):
with open(self.config_file, 'wb') as f:
json.dump(self, f, indent=4, sort_keys=True)
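# Hedged usage sketch (assumes a valid '.mamprefs' JSON file next to this
# module; not part of the original package):
#
#     cfg = Config()
#     cfg['theme'] = 'dark'   # __setitem__ persists the change via dump()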
def init(*args):
"""
    Initializes the mamprefs package.
    init accepts one or several extra config paths.
"""
global config
# init config
set_cwd_and_cfg_paths()
config = Config()
    # Add custom paths for settings.
if args:
for i in args:
cfg_paths.append(i)
initialize_settings()
job = cmds.scriptJob(e=['NewSceneOpened', initialize_settings])
config['CURRENT_MAYA_SESSION_SCRIPTJOB_NUMBER'] = job
def initialize_settings():
# Import package files and init them.
from mamprefs import settings, markingmenus, layouts
layouts.init()
markingmenus.init()
settings.init()
if __name__ == '__main__':
pass
| 23.351064 | 68 | 0.627335 | 273 | 2,195 | 4.714286 | 0.417582 | 0.037296 | 0.03108 | 0.040404 | 0.09324 | 0.037296 | 0 | 0 | 0 | 0 | 0 | 0.00492 | 0.259226 | 2,195 | 93 | 69 | 23.602151 | 0.786593 | 0.133485 | 0 | 0 | 0 | 0 | 0.127794 | 0.038395 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.019608 | 0.098039 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
754e707729e5a852fcfee89bbfbf837d1fd7207c | 2,973 | py | Python | pysph/examples/rigid_body/bouncing_cube.py | nauaneed/pysph | 9cb9a859934939307c65a25cbf73e4ecc83fea4a | [
"BSD-3-Clause"
] | 293 | 2017-05-26T14:41:15.000Z | 2022-03-28T09:56:16.000Z | pysph/examples/rigid_body/bouncing_cube.py | nauaneed/pysph | 9cb9a859934939307c65a25cbf73e4ecc83fea4a | [
"BSD-3-Clause"
] | 217 | 2017-05-29T15:48:14.000Z | 2022-03-24T16:16:55.000Z | pysph/examples/rigid_body/bouncing_cube.py | nauaneed/pysph | 9cb9a859934939307c65a25cbf73e4ecc83fea4a | [
"BSD-3-Clause"
] | 126 | 2017-05-25T19:17:32.000Z | 2022-03-25T11:23:24.000Z | """A cube bouncing inside a box. (5 seconds)
This is used to test the rigid body equations.
"""
import numpy as np
from pysph.base.kernels import CubicSpline
from pysph.base.utils import get_particle_array_rigid_body
from pysph.sph.equation import Group
from pysph.sph.integrator import EPECIntegrator
from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.rigid_body import (BodyForce, RigidBodyCollision,
RigidBodyMoments, RigidBodyMotion,
RK2StepRigidBody)
dim = 3
dt = 5e-3
tf = 5.0
gz = -9.81
hdx = 1.0
dx = dy = 0.02
rho0 = 10.0
class BouncingCube(Application):
def create_particles(self):
nx, ny, nz = 10, 10, 10
dx = 1.0 / (nx - 1)
x, y, z = np.mgrid[0:1:nx * 1j, 0:1:ny * 1j, 0:1:nz * 1j]
x = x.flat
y = y.flat
z = (z - 1).flat
m = np.ones_like(x) * dx * dx * rho0
h = np.ones_like(x) * hdx * dx
# radius of each sphere constituting in cube
rad_s = np.ones_like(x) * dx
body = get_particle_array_rigid_body(name='body', x=x, y=y, z=z, h=h,
m=m, rad_s=rad_s)
body.vc[0] = -5.0
body.vc[2] = -5.0
# Create the tank.
nx, ny, nz = 40, 40, 40
dx = 1.0 / (nx - 1)
xmin, xmax, ymin, ymax, zmin, zmax = -2, 2, -2, 2, -2, 2
x, y, z = np.mgrid[xmin:xmax:nx * 1j, ymin:ymax:ny * 1j, zmin:zmax:nz *
1j]
interior = ((x < 1.8) & (x > -1.8)) & ((y < 1.8) & (y > -1.8)) & (
(z > -1.8) & (z <= 2))
tank = np.logical_not(interior)
x = x[tank].flat
y = y[tank].flat
z = z[tank].flat
m = np.ones_like(x) * dx * dx * rho0
h = np.ones_like(x) * hdx * dx
# radius of each sphere constituting in cube
rad_s = np.ones_like(x) * dx
tank = get_particle_array_rigid_body(name='tank', x=x, y=y, z=z, h=h,
m=m, rad_s=rad_s)
tank.total_mass[0] = np.sum(m)
return [body, tank]
def create_solver(self):
kernel = CubicSpline(dim=dim)
integrator = EPECIntegrator(body=RK2StepRigidBody())
solver = Solver(kernel=kernel, dim=dim, integrator=integrator, dt=dt,
tf=tf, adaptive_timestep=False)
solver.set_print_freq(10)
return solver
def create_equations(self):
equations = [
Group(equations=[
BodyForce(dest='body', sources=None, gz=gz),
RigidBodyCollision(dest='body', sources=['tank'], kn=1e4, en=1)
]),
Group(equations=[RigidBodyMoments(dest='body', sources=None)]),
Group(equations=[RigidBodyMotion(dest='body', sources=None)]),
]
return equations
if __name__ == '__main__':
app = BouncingCube()
app.run()
| 30.649485 | 79 | 0.539522 | 418 | 2,973 | 3.739234 | 0.287081 | 0.040307 | 0.038388 | 0.042226 | 0.227127 | 0.180422 | 0.143314 | 0.143314 | 0.143314 | 0.143314 | 0 | 0.041185 | 0.330306 | 2,973 | 96 | 80 | 30.96875 | 0.743847 | 0.064918 | 0 | 0.142857 | 0 | 0 | 0.012992 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042857 | false | 0 | 0.114286 | 0 | 0.214286 | 0.014286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7550b302b143fc382ccf3fdfe9024a6e71f17aa5 | 4,986 | py | Python | jarvis_cli/admin.py | clb6/jarvis-cli | 44dfe0a94243e444eaddc72496efd677be9272e7 | [
"Apache-2.0"
] | null | null | null | jarvis_cli/admin.py | clb6/jarvis-cli | 44dfe0a94243e444eaddc72496efd677be9272e7 | [
"Apache-2.0"
] | 3 | 2016-09-08T03:20:33.000Z | 2016-12-08T05:19:57.000Z | jarvis_cli/admin.py | clb6/jarvis-cli | 44dfe0a94243e444eaddc72496efd677be9272e7 | [
"Apache-2.0"
] | null | null | null | import subprocess, os, time, shutil
from datetime import datetime
import jarvis_cli as jc
from jarvis_cli import config, client
from jarvis_cli.client import log_entry as cle
def create_snapshot(environment, config_map):
snapshot_filepath = "jarvis_snapshot_{0}_{1}.tar.gz".format(environment,
datetime.utcnow().strftime("%Y%m%d%H%M%S"))
    snapshot_filepath = os.path.join(config.get_jarvis_snapshots_directory(config_map),
snapshot_filepath)
data_dir = config.get_jarvis_data_directory(config_map)
data_top_dirname = os.path.dirname(data_dir)
data_basename = os.path.basename(data_dir)
cmd = ["tar", "-czf", snapshot_filepath, "-C", data_top_dirname, data_basename]
cp = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if cp.returncode == 0:
return snapshot_filepath
else:
        # Tarballing failed and there may be a bad tarball, so try to remove it
        try:
            os.remove(snapshot_filepath)
        except OSError:
            pass
print(cp.stdout)
def restore_snapshot(config_map, snapshot_filepath):
    if not os.path.isfile(snapshot_filepath):
        print("Snapshot file does not exist: {0}".format(snapshot_filepath))
        return False
data_dir = config.get_jarvis_data_directory(config_map)
data_top_dirname = os.path.dirname(data_dir)
# Move current data directory to a temp
data_dir_prev = os.path.join(data_top_dirname, "jarvis_prev")
os.rename(data_dir, data_dir_prev)
cmd = ["tar", "-xf", snapshot_filepath, "-C", data_top_dirname]
cp = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if cp.returncode == 0:
if data_dir_prev:
shutil.rmtree(data_dir_prev)
return True
else:
print(cp.stdout)
# Something bad happened so go back to previous version
shutil.rmtree(data_dir)
os.rename(data_dir_prev, data_dir)
return False
# TODO: Must test. This code was lifted from jarvis_migrate.py and reworked
# into library calls.
def _migrate_resources(resource_type, conn_prev, transform_func, post_to_new_func):
to_migrate = [ transform_func(r)
for r in client.query(resource_type, conn_prev, []) ]
print("Migrate #{0}: {1}".format(resource_type, len(to_migrate)))
def migrate_reverse_result(r):
return None if post_to_new_func(r) else r
start_time = time.time()
results = [ migrate_reverse_result(r) for r in to_migrate ]
num_attempted = len(to_migrate)
num_succeeded = len(list(filter(lambda x: not x, results)))
print("#attempted: {0}, #succeeded: {1}, elapsed: {2}s".format(
num_attempted, num_succeeded, time.time()-start_time))
def migrate(conn_prev, conn_next):
def migrate_tags():
def transform(tag):
"""Transform to tag request"""
del tag["modified"]
del tag['version']
return tag
# NOTE!
        # 1. Errors are treated equally, whether 400 or 409.
        # 2. "migrate_tags" was written before "skip_tags_check" existed; we
        #    originally thought tags could be migrated without it, but tag
        #    relationships turn out to be circular.
def post_to_new(tag_transformed):
return client.post_tag(conn_next, tag_transformed, skip_tags_check=True)
_migrate_resources("tags", conn_prev, transform, post_to_new)
def migrate_log_entries():
def transform(log_entry):
"""Transform to log entry request and event request"""
event = None
if not log_entry["event"]:
event = { "created": log_entry["created"],
"occurred": log_entry["occurred"],
"category": "migrated",
"source": jc.EVENT_SOURCE,
"weight": 50,
"description": log_entry["setting"] }
del log_entry["modified"]
del log_entry["version"]
del log_entry["occurred"]
del log_entry["setting"]
return (log_entry, event)
def post_to_new(decomposed_log_entry):
log_entry, event = decomposed_log_entry
event_id = log_entry["event"]
del log_entry["event"]
if event:
event_id = client.post_event(conn_next, event)["eventId"]
return cle.post_log_entry(event_id, conn_next, log_entry)
_migrate_resources("logentries", conn_prev, transform, post_to_new)
def migrate_events():
def transform(event):
del event["location"]
return event
def post_to_new(event_transformed):
return client.post_event(conn_next, event_transformed)
_migrate_resources("events", conn_prev, transform, post_to_new)
# Order matters in the migration
migrate_tags()
migrate_events()
migrate_log_entries()
| 33.918367 | 86 | 0.648416 | 641 | 4,986 | 4.786271 | 0.294852 | 0.049544 | 0.023468 | 0.024446 | 0.191656 | 0.180574 | 0.133638 | 0.133638 | 0.110169 | 0.110169 | 0 | 0.005392 | 0.256117 | 4,986 | 146 | 87 | 34.150685 | 0.821785 | 0.124148 | 0 | 0.126316 | 0 | 0 | 0.077773 | 0.006903 | 0 | 0 | 0 | 0.006849 | 0 | 1 | 0.147368 | false | 0.010526 | 0.052632 | 0.031579 | 0.305263 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7551478291616d2c374c6237ba8190235b7df550 | 4,609 | py | Python | examples/quickstart_mxnet/client.py | Chris-george-anil/flower | 98fb2fcde273c1226cc1f2e1638c1e4d8f35815c | [
"Apache-2.0"
] | 895 | 2020-03-22T20:34:16.000Z | 2022-03-31T15:20:42.000Z | examples/quickstart_mxnet/client.py | Chris-george-anil/flower | 98fb2fcde273c1226cc1f2e1638c1e4d8f35815c | [
"Apache-2.0"
] | 322 | 2020-02-19T10:16:33.000Z | 2022-03-31T09:49:08.000Z | examples/quickstart_mxnet/client.py | Chris-george-anil/flower | 98fb2fcde273c1226cc1f2e1638c1e4d8f35815c | [
"Apache-2.0"
] | 234 | 2020-03-31T10:52:16.000Z | 2022-03-31T14:04:42.000Z | """Flower client example using MXNet for MNIST classification.
The code is generally adapted from:
https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/image/mnist.html
"""
import flwr as fl
import numpy as np
import mxnet as mx
from mxnet import nd
from mxnet import gluon
from mxnet.gluon import nn
from mxnet import autograd as ag
import mxnet.ndarray as F
# Fixing the random seed
mx.random.seed(42)
# Setup context to GPU or CPU
DEVICE = [mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()]
def main():
def model():
net = nn.Sequential()
net.add(nn.Dense(256, activation="relu"))
net.add(nn.Dense(64, activation="relu"))
net.add(nn.Dense(10))
net.collect_params().initialize()
return net
train_data, val_data = load_data()
model = model()
init = nd.random.uniform(shape=(2, 784))
model(init)
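    # Hedged note (added): the dummy forward pass above triggers Gluon's
    # deferred shape inference, so the weights exist before federation starts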
# Flower Client
class MNISTClient(fl.client.NumPyClient):
def get_parameters(self):
param = []
for val in model.collect_params(".*weight").values():
p = val.data()
param.append(p.asnumpy())
return param
def set_parameters(self, parameters):
params = zip(model.collect_params(".*weight").keys(), parameters)
for key, value in params:
model.collect_params().setattr(key, value)
def fit(self, parameters, config):
self.set_parameters(parameters)
[accuracy, loss], num_examples = train(model, train_data, epoch=2)
results = {"accuracy": float(accuracy[1]), "loss": float(loss[1])}
return self.get_parameters(), num_examples, results
def evaluate(self, parameters, config):
self.set_parameters(parameters)
[accuracy, loss], num_examples = test(model, val_data)
print("Evaluation accuracy & loss", accuracy, loss)
return float(loss[1]), num_examples, {"accuracy": float(accuracy[1])}
# Start Flower client
fl.client.start_numpy_client("0.0.0.0:8080", client=MNISTClient())
def load_data():
print("Download Dataset")
mnist = mx.test_utils.get_mnist()
batch_size = 100
train_data = mx.io.NDArrayIter(
mnist["train_data"], mnist["train_label"], batch_size, shuffle=True
)
val_data = mx.io.NDArrayIter(mnist["test_data"], mnist["test_label"], batch_size)
return train_data, val_data
def train(net, train_data, epoch):
trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.01})
accuracy_metric = mx.metric.Accuracy()
loss_metric = mx.metric.CrossEntropy()
metrics = mx.metric.CompositeEvalMetric()
for child_metric in [accuracy_metric, loss_metric]:
metrics.add(child_metric)
softmax_cross_entropy_loss = gluon.loss.SoftmaxCrossEntropyLoss()
for i in range(epoch):
train_data.reset()
num_examples = 0
for batch in train_data:
data = gluon.utils.split_and_load(
batch.data[0], ctx_list=DEVICE, batch_axis=0
)
label = gluon.utils.split_and_load(
batch.label[0], ctx_list=DEVICE, batch_axis=0
)
outputs = []
with ag.record():
for x, y in zip(data, label):
z = net(x)
loss = softmax_cross_entropy_loss(z, y)
loss.backward()
outputs.append(z.softmax())
num_examples += len(x)
metrics.update(label, outputs)
trainer.step(batch.data[0].shape[0])
trainings_metric = metrics.get_name_value()
print("Accuracy & loss at epoch %d: %s" % (i, trainings_metric))
return trainings_metric, num_examples
def test(net, val_data):
accuracy_metric = mx.metric.Accuracy()
loss_metric = mx.metric.CrossEntropy()
metrics = mx.metric.CompositeEvalMetric()
for child_metric in [accuracy_metric, loss_metric]:
metrics.add(child_metric)
val_data.reset()
num_examples = 0
for batch in val_data:
data = gluon.utils.split_and_load(batch.data[0], ctx_list=DEVICE, batch_axis=0)
label = gluon.utils.split_and_load(
batch.label[0], ctx_list=DEVICE, batch_axis=0
)
outputs = []
for x in data:
outputs.append(net(x).softmax())
num_examples += len(x)
metrics.update(label, outputs)
metrics.update(label, outputs)
return metrics.get_name_value(), num_examples
if __name__ == "__main__":
main()
| 33.642336 | 87 | 0.62378 | 586 | 4,609 | 4.737201 | 0.274744 | 0.039625 | 0.020173 | 0.025937 | 0.349424 | 0.332133 | 0.31268 | 0.31268 | 0.290346 | 0.256484 | 0 | 0.012899 | 0.259926 | 4,609 | 136 | 88 | 33.889706 | 0.800938 | 0.057713 | 0 | 0.235849 | 0 | 0 | 0.044542 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084906 | false | 0 | 0.075472 | 0 | 0.235849 | 0.028302 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
755336617545c70fccce90a28a65b00fb6da7a58 | 2,116 | py | Python | driver/eater.py | riastrad/newSeer | 0f9841c7e7bb555c27c1ed2fc1ea7623f8e30f13 | [
"MIT"
] | null | null | null | driver/eater.py | riastrad/newSeer | 0f9841c7e7bb555c27c1ed2fc1ea7623f8e30f13 | [
"MIT"
] | 5 | 2017-03-27T17:22:16.000Z | 2017-04-25T03:14:56.000Z | driver/eater.py | riastrad/newSeer | 0f9841c7e7bb555c27c1ed2fc1ea7623f8e30f13 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
#
# @Author: Josh Erb <josh.erb>
# @Date: 27-Feb-2017 11:02
# @Email: josh.erb@excella.com
# @Last modified by: josh.erb
# @Last modified time: 27-Feb-2017 11:02
"""
Main driver script for ingesting RSS data. Uses the ArticleFeed() object
from the feeder.py script and a dictionary of publications and RSS URLs in order
to save data into a daily .csv file.
"""
#######################################
# IMPORTS
#######################################
import os
from datetime import datetime
from feeder import ArticleFeed
#######################################
# CONSTANTS
#######################################
feeds = {
'New York Times': 'http://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml',
'BuzzFeed': 'https://www.buzzfeed.com/usnews.xml',
'Slate': 'http://feeds.slate.com/slate',
'The New Yorker': 'http://www.newyorker.com/feed/news',
'Wall Street Journal': 'http://www.wsj.com/xml/rss/3_7085.xml',
# 'Washington Post': 'http://feeds.washingtonpost.com/rss/national',
'The Daily Beast': 'http://feeds.feedburner.com/thedailybeast/articles?format=xml',
}
#######################################
# FUNCTIONS
#######################################
def iterate_feeds(feed_list, time=None):
    """
    Loop over the feeds contained in a dictionary object. Expects that the
    object is formatted as [publication name]:[RSS feed URL].
    """
    # Resolve the date at call time; a datetime default argument would be
    # evaluated only once, at import, freezing the daily filename
    if time is None:
        time = datetime.utcnow().date()
# Point functions to save data in folder in the current working directory
path = os.path.abspath('data/feed_data_' + str(time) + '.csv')
# Iterate through the dictionary and grab the article data for all feeds
for key, value in feed_list.items():
rss = ArticleFeed(key)
rss.get(value)
rss.dump(path)
return
def main():
"""
Primary execution function
"""
# Run our iterator and generate our feed objects
iterate_feeds(feeds)
return
#######################################
# EXECUTION
#######################################
if __name__ == '__main__':
main()
| 29.802817 | 92 | 0.577977 | 255 | 2,116 | 4.737255 | 0.513725 | 0.023179 | 0.014901 | 0.018212 | 0.021523 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014731 | 0.165879 | 2,116 | 70 | 93 | 30.228571 | 0.669688 | 0.409263 | 0 | 0.086957 | 0 | 0 | 0.400227 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.130435 | 0 | 0.304348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
755568c409e234d158d2eba42024254a2b722778 | 919 | py | Python | app/main.py | DanNduati/fastapi | f575d3f51e91aaf2d342e43c613795c01bd9229c | [
"MIT"
] | 1 | 2021-12-21T16:27:16.000Z | 2021-12-21T16:27:16.000Z | app/main.py | DanNduati/FastAPI-social-API | c4270a035b263e434ab49de98c183060e81e0181 | [
"MIT"
] | 1 | 2021-12-18T14:54:16.000Z | 2021-12-19T13:56:11.000Z | app/main.py | DanNduati/fastapi | f575d3f51e91aaf2d342e43c613795c01bd9229c | [
"MIT"
] | null | null | null | from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from . import models
from .database import engine
from .routers import posts, users, auth, votes
# Create our posts table if it's not present
models.Base.metadata.create_all(bind=engine)
# Application instance
app = FastAPI(
title="FastAPI Social App API",
description=(
"FastAPI learning project"
),
version="0.0.1",
docs_url="/",
contact={
"name": "Nduati Daniel Chege",
"url": "https://github.com/DanNduati",
}
)
origins = ["*"]
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# Path operations (synonymous with routes)
# Routers
app.include_router(auth.router)
app.include_router(users.router)
app.include_router(posts.router)
app.include_router(votes.router) | 23.564103 | 50 | 0.710555 | 114 | 919 | 5.631579 | 0.54386 | 0.062305 | 0.099688 | 0.102804 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003942 | 0.171926 | 919 | 39 | 51 | 23.564103 | 0.839685 | 0.115343 | 0 | 0 | 0 | 0 | 0.134568 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.193548 | 0 | 0.193548 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
755e3ccae90e818b543931a7a80b80c4d6d28e00 | 1,569 | py | Python | src/fftIfftTests.py | vberthiaume/vblandr | dbd139e7b6172b9dbc97707ff4874bc398de7aaa | [
"Apache-2.0"
] | null | null | null | src/fftIfftTests.py | vberthiaume/vblandr | dbd139e7b6172b9dbc97707ff4874bc398de7aaa | [
"Apache-2.0"
] | 10 | 2016-08-29T20:06:05.000Z | 2016-10-27T20:40:58.000Z | src/fftIfftTests.py | vberthiaume/vblandr | dbd139e7b6172b9dbc97707ff4874bc398de7aaa | [
"Apache-2.0"
] | null | null | null | import subprocess as sp
import scikits.audiolab
import numpy as np
from scipy.fftpack import fft, ifft
from scipy.io import wavfile
#--CONVERT MP3 TO WAV------------------------------------------
song_path = '/home/gris/Music/vblandr/test_small/punk/07 Alkaline Trio - Only Love.mp3'
command = [ 'ffmpeg',
'-i', song_path,
'-f', 's16le',
'-acodec', 'pcm_s16le',
'-ar', '44100', # sms tools wavread can only read 44100 Hz
'-ac', '1', # mono file
'-loglevel', 'quiet',
            '-'] # '-' makes ffmpeg write the decoded audio to stdout, so it can be read from the pipe
#run the command
pipe = sp.Popen(command, stdout=sp.PIPE)
#read the output into a numpy array
stdoutdata = pipe.stdout.read()
audio_array = np.fromstring(stdoutdata, dtype=np.int16)
audio_array = audio_array[:2**16]
#--------------------------------------------------------------
#---- FFT THEN IFFT ------------------------
fft_output = fft (audio_array)
ifft_output = ifft(fft_output).real
#this stuff is equivalent
#fft_output = np.fft.rfft (audio_array, axis=0)
#print "fft_output is type", type(fft_output[0])
#ifft_output = np.fft.irfft(fft_output, axis=0)
#print "ifft_output is type", type(ifft_output[0])
#--SAVE WAVE AS NEW FILE ----------------
ifft_output = np.round(ifft_output).astype('int16')
wavfile.write('/home/gris/Music/vblandr/testIfft.wav', 44100, ifft_output)
#scikits.audiolab.wavwrite(ifft_output, '/home/gris/Music/vblandr/testIfft.wav', fs=44100, enc='pcm16')
| 35.659091 | 124 | 0.615679 | 213 | 1,569 | 4.42723 | 0.497653 | 0.084836 | 0.041357 | 0.063627 | 0.065748 | 0.065748 | 0 | 0 | 0 | 0 | 0 | 0.032184 | 0.16826 | 1,569 | 43 | 125 | 36.488372 | 0.690421 | 0.463352 | 0 | 0 | 0 | 0.045455 | 0.20919 | 0.096735 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.227273 | 0 | 0.227273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7560a82fac322ef2d8c50e83c01dd45192035ed7 | 372 | py | Python | MIC/LCD12864/urls.py | blacksea3/MainMIC | 02f226bce63c6b85f6922420fff4da885e6c24a3 | [
"Apache-2.0"
] | null | null | null | MIC/LCD12864/urls.py | blacksea3/MainMIC | 02f226bce63c6b85f6922420fff4da885e6c24a3 | [
"Apache-2.0"
] | null | null | null | MIC/LCD12864/urls.py | blacksea3/MainMIC | 02f226bce63c6b85f6922420fff4da885e6c24a3 | [
"Apache-2.0"
] | null | null | null | from django.conf.urls import patterns, include, url
from django.http import HttpResponseRedirect
from LCD12864.views import *
def auto_redirect(request):
return HttpResponseRedirect('/MIC/index/')
urlpatterns = patterns('',
url(r'index/$', index),
url(r'search/$', search),
url(r'update_database/$', update_database),
    url(r'$', auto_redirect),
) | 24.8 | 51 | 0.704301 | 45 | 372 | 5.733333 | 0.533333 | 0.062016 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015823 | 0.150538 | 372 | 15 | 52 | 24.8 | 0.800633 | 0 | 0 | 0 | 0 | 0 | 0.117962 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0.090909 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7562fcbbdac668092cdb6d5710b89f8c5f901a32 | 6,259 | py | Python | visgraph/tests/test_graphcore.py | vEpiphyte/vivisect | 14947a53c6781175f0aa83d49cc16c524a2e23a3 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-12-23T19:23:17.000Z | 2020-12-23T19:23:17.000Z | visgraph/tests/test_graphcore.py | vEpiphyte/vivisect | 14947a53c6781175f0aa83d49cc16c524a2e23a3 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | visgraph/tests/test_graphcore.py | vEpiphyte/vivisect | 14947a53c6781175f0aa83d49cc16c524a2e23a3 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-12-23T19:23:58.000Z | 2020-12-23T19:23:58.000Z | import unittest
import visgraph.graphcore as v_graphcore
s1paths = [
('a','c','f'),
('a','b','d','f'),
('a','b','e','f'),
]
s2paths = [
('a','b'),
('a','b','c'),
]
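# Hedged note (added): these are the expected hierarchical paths through the
# sample graphs below; graph 2's b->b self-loop is not expanded into extra paths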
class GraphCoreTest(unittest.TestCase):
def getSampleGraph1(self):
# simple branching/merging graph
g = v_graphcore.HierGraph()
g.addHierRootNode('a')
for c in ('b','c','d','e','f'):
g.addNode(c)
g.addEdgeByNids('a','b')
g.addEdgeByNids('a','c')
g.addEdgeByNids('c','f')
g.addEdgeByNids('b','d')
g.addEdgeByNids('b','e')
g.addEdgeByNids('d','f')
g.addEdgeByNids('e','f')
return g
def getSampleGraph2(self):
# primitive loop graph
g = v_graphcore.HierGraph()
g.addHierRootNode('a')
for c in ('b','c'):
g.addNode(c)
g.addEdgeByNids('a','b')
g.addEdgeByNids('b','b')
g.addEdgeByNids('b','c')
return g
def getSampleGraph3(self):
# flat loop graph
g = v_graphcore.HierGraph()
g.addHierRootNode('a')
for c in ('b','c','d'):
g.addNode(c)
g.addEdgeByNids('a','b')
g.addEdgeByNids('b','c')
g.addEdgeByNids('c','b')
g.addEdgeByNids('c','d')
return g
def test_visgraph_pathscount(self):
g = self.getSampleGraph1()
self.assertEqual(g.getHierPathCount(), 3)
g = self.getSampleGraph2()
self.assertEqual(g.getHierPathCount(), 1)
g = self.getSampleGraph3()
self.assertEqual(g.getHierPathCount(), 1)
def assertPathsFrom(self, g, paths):
allpaths = set(paths)
root = g.getNode('a')
for path in g.getHierPathsFrom(root):
nids = tuple([ n[0] for (n,e) in path])
self.assertIn(nids,allpaths)
allpaths.remove(nids)
self.assertFalse(allpaths)
def test_visgraph_pathsfrom(self):
self.assertPathsFrom( self.getSampleGraph1(), s1paths)
self.assertPathsFrom( self.getSampleGraph2(), s2paths)
def assertPathsTo(self, g, nid, paths):
allpaths = set(paths)
node = g.getNode(nid)
for path in g.getHierPathsTo(node):
nids = tuple([ n[0] for (n,e) in path])
self.assertIn(nids,allpaths)
allpaths.remove(nids)
self.assertFalse(allpaths)
def test_visgraph_pathsto(self):
        '''
        Paths into a node should match the enumerated sample paths.
        '''
self.assertPathsTo( self.getSampleGraph1(), 'f', s1paths)
self.assertPathsTo( self.getSampleGraph2(), 'c', [ ('a','b','c'), ])
def assertPathsThru(self, g, nid, paths):
allpaths = set(paths)
node = g.getNode(nid)
for path in g.getHierPathsThru(node):
nids = tuple([ n[0] for (n,e) in path])
self.assertIn(nids,allpaths)
allpaths.remove(nids)
self.assertFalse(allpaths)
def test_visgraph_paththru(self):
self.assertPathsThru( self.getSampleGraph1(),'b',[('a','b','d','f'),('a','b','e','f')])
self.assertPathsThru( self.getSampleGraph2(),'b',[('a','b'),('a','b','c'),])
def test_visgraph_nodeprops(self):
g = v_graphcore.Graph()
a = g.addNode('a')
g.setNodeProp(a,'foo','bar')
self.assertEqual(a[1].get('foo'), 'bar')
self.assertTrue( a in g.getNodesByProp('foo') )
self.assertTrue( a in g.getNodesByProp('foo','bar') )
self.assertFalse( a in g.getNodesByProp('foo','blah') )
g.delNodeProp(a,'foo')
self.assertFalse( a in g.getNodesByProp('foo') )
self.assertFalse( a in g.getNodesByProp('foo','bar') )
self.assertIsNone(a[1].get('foo'))
def test_visgraph_edgeprops(self):
g = v_graphcore.Graph()
a = g.addNode('a')
b = g.addNode('b')
e = g.addEdge(a,b)
g.setEdgeProp(e,'foo','bar')
self.assertEqual(e[3].get('foo'),'bar')
self.assertTrue( e in g.getEdgesByProp('foo') )
self.assertTrue( e in g.getEdgesByProp('foo','bar') )
self.assertFalse( e in g.getEdgesByProp('foo','blah') )
g.delEdgeProp(e,'foo')
self.assertFalse( e in g.getEdgesByProp('foo') )
self.assertFalse( e in g.getEdgesByProp('foo','bar') )
self.assertIsNone(e[3].get('foo'))
def test_visgraph_subcluster(self):
g = v_graphcore.Graph()
a = g.addNode('a')
b = g.addNode('b')
c = g.addNode('c')
d = g.addNode('d')
e = g.addNode('e')
        f = g.addNode('f')
g.addEdgeByNids('a','b')
g.addEdgeByNids('a','c')
g.addEdgeByNids('d','e')
g.addEdgeByNids('d','f')
subs = g.getClusterGraphs()
self.assertEqual(len(subs),2)
subtests = [ set(['a','b','c']), set(['d','e','f']) ]
for sub in subs:
if sub.getNode('a'):
self.assertIsNone(sub.getNode('d'))
self.assertIsNone(sub.getNode('e'))
self.assertIsNone(sub.getNode('f'))
akids = [ edge[2] for edge in sub.getRefsFromByNid('a') ]
self.assertTrue('b' in akids )
self.assertTrue('c' in akids )
elif sub.getNode('d'):
self.assertIsNone(sub.getNode('a'))
self.assertIsNone(sub.getNode('b'))
self.assertIsNone(sub.getNode('c'))
dkids = [ edge[2] for edge in sub.getRefsFromByNid('d') ]
self.assertTrue('e' in dkids )
self.assertTrue('f' in dkids )
else:
raise Exception('Invalid SubCluster!')
def test_visgraph_formnode(self):
g = v_graphcore.Graph()
def wootctor(n):
g.setNodeProp(n,'lul',1)
n1 = g.formNode('woot', 10, ctor=wootctor)
self.assertEqual( n1[1].get('lul'), 1 )
g.setNodeProp(n1, 'lul', 2)
g.setNodeProp(n1, 'foo', 'bar')
n2 = g.formNode('woot', 20, ctor=wootctor)
n3 = g.formNode('woot', 10, ctor=wootctor)
self.assertEqual( n1[0], n3[0] )
self.assertEqual( n1[1].get('lul'), 2)
self.assertEqual( n3[1].get('foo'), 'bar')
self.assertNotEqual( n1[0], n2[0])
| 28.321267 | 95 | 0.537466 | 751 | 6,259 | 4.447403 | 0.147803 | 0.075449 | 0.026946 | 0.046707 | 0.544012 | 0.474252 | 0.46497 | 0.337425 | 0.287725 | 0.249701 | 0 | 0.013252 | 0.288704 | 6,259 | 220 | 96 | 28.45 | 0.736972 | 0.010705 | 0 | 0.320513 | 0 | 0 | 0.041478 | 0 | 0 | 0 | 0 | 0 | 0.307692 | 1 | 0.096154 | false | 0 | 0.012821 | 0 | 0.134615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f3287946bcd7a026ba36e730bf9b3548a5dd79b1 | 4,029 | py | Python | chatty_goose/agents/chat.py | saileshnankani/chatty-goose | ef3a27119d6825a96ae85d1453d6b4eac4ed22b7 | [
"Apache-2.0"
] | 24 | 2021-03-08T09:53:59.000Z | 2022-03-17T06:47:06.000Z | chatty_goose/agents/chat.py | saileshnankani/chatty-goose | ef3a27119d6825a96ae85d1453d6b4eac4ed22b7 | [
"Apache-2.0"
] | 10 | 2021-03-08T13:35:54.000Z | 2021-11-15T03:32:37.000Z | chatty_goose/agents/chat.py | saileshnankani/chatty-goose | ef3a27119d6825a96ae85d1453d6b4eac4ed22b7 | [
"Apache-2.0"
] | 8 | 2021-03-03T00:37:18.000Z | 2021-08-01T00:50:47.000Z | import logging
from chatty_goose.cqr import Hqe, Ntr
from chatty_goose.pipeline import RetrievalPipeline
from chatty_goose.settings import HqeSettings, NtrSettings
from chatty_goose.types import CqrType, PosFilter
from parlai.core.agents import Agent, register_agent
from pyserini.search import SimpleSearcher
@register_agent("ChattyGooseAgent")
class ChattyGooseAgent(Agent):
@classmethod
    def add_cmdline_args(cls, parser, partial_opt=None):
parser.add_argument('--name', type=str, default='CQR', help="The agent's name.")
parser.add_argument('--cqr_type', type=str, default='fusion', help="hqe, t5, or fusion")
parser.add_argument('--episode_done', type=str, default='[END]', help="end signal for interactive mode")
parser.add_argument('--hits', type=int, default=50, help="number of hits to retrieve from searcher")
# Pyserini
parser.add_argument('--k1', default=0.82, help='BM25 k1 parameter')
parser.add_argument('--b', default=0.68, help='BM25 b parameter')
parser.add_argument('--from_prebuilt', type=str, default='cast2019', help="Pyserini prebuilt index")
# T5
parser.add_argument('--from_pretrained', type=str, default='castorini/t5-base-canard', help="Huggingface T5 checkpoint")
# HQE
        parser.add_argument('--M', default=5, type=int, help='aggregate historical queries for first stage (BM25) retrieval')
parser.add_argument('--eta', default=10, type=float, help='QPP threshold for first stage (BM25) retrieval')
parser.add_argument('--R_topic', default=4.5, type=float, help='topic keyword threshold for first stage (BM25) retrieval')
parser.add_argument('--R_sub', default=3.5, type=float, help='subtopic keyword threshold for first stage (BM25) retrieval')
parser.add_argument('--filter', default='pos', help='filter word method: no, pos, stp')
parser.add_argument('--verbose', action='store_true')
return parser
def __init__(self, opt, shared=None):
super().__init__(opt, shared)
self.name = opt["name"]
self.episode_done = opt["episode_done"]
self.cqr_type = CqrType(opt["cqr_type"])
# Initialize searcher
searcher = SimpleSearcher.from_prebuilt_index(opt["from_prebuilt"])
searcher.set_bm25(float(opt["k1"]), float(opt["b"]))
# Initialize retrievers
retrievers = []
if self.cqr_type == CqrType.HQE or self.cqr_type == CqrType.FUSION:
hqe_settings = HqeSettings(
M=opt["M"],
eta=opt["eta"],
R_topic=opt["R_topic"],
R_sub=opt["R_sub"],
filter=PosFilter(opt["filter"]),
verbose=opt["verbose"],
)
hqe = Hqe(searcher, hqe_settings)
retrievers.append(hqe)
if self.cqr_type == CqrType.T5 or self.cqr_type == CqrType.FUSION:
t5_settings = NtrSettings(model_name=opt["from_pretrained"], verbose=opt["verbose"])
t5 = Ntr(t5_settings)
retrievers.append(t5)
self.rp = RetrievalPipeline(searcher, retrievers, int(opt["hits"]))
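        # Hedged note (added): with cqr_type 'fusion' both the HQE and T5
        # rewriters feed the pipeline, which fuses their ranked lists downstream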
def observe(self, observation):
        # Grab the latest utterance from the other user
self.query = observation.get("text", "")
if observation.get("episode_done") or self.query == self.episode_done:
logging.info("Resetting agent history")
self.rp.reset_history()
def act(self):
if self.query == self.episode_done:
return {"id": self.id, "text": "Session finished"}
# Retrieve hits
hits = self.rp.retrieve(self.query)
if len(hits) == 0:
result = "Sorry, I couldn't find any results"
else:
result = hits[0].raw
return { "id": self.id, "text": result }
if __name__ == "__main__":
from parlai.scripts.interactive import Interactive
Interactive.main(model="ChattyGooseAgent", cqr_type="fusion")
| 44.274725 | 131 | 0.641598 | 498 | 4,029 | 5.044177 | 0.303213 | 0.050159 | 0.094745 | 0.035828 | 0.155653 | 0.10629 | 0.085589 | 0.085589 | 0.068471 | 0.068471 | 0 | 0.015098 | 0.227352 | 4,029 | 90 | 132 | 44.766667 | 0.791841 | 0.029784 | 0 | 0 | 0 | 0 | 0.219944 | 0.006152 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059701 | false | 0 | 0.119403 | 0 | 0.238806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f32993767d59acc690a30337b62cd5533087e465 | 5,639 | py | Python | analysis/utils.py | VUB-HYDR/2022_Vanderkelen_etal_GMD | eceb5507a00c96d559b9611125935577ab84991b | [
"MIT"
] | null | null | null | analysis/utils.py | VUB-HYDR/2022_Vanderkelen_etal_GMD | eceb5507a00c96d559b9611125935577ab84991b | [
"MIT"
] | null | null | null | analysis/utils.py | VUB-HYDR/2022_Vanderkelen_etal_GMD | eceb5507a00c96d559b9611125935577ab84991b | [
"MIT"
] | null | null | null | """
Utils and functions for MizuRoute postprocessing on Cheyenne
Inne Vanderkelen - March 2021
"""
import numpy as np
def set_plot_param():
"""Set my own customized plotting parameters"""
import matplotlib as mpl
mpl.rc('xtick',labelsize=12)
mpl.rc('ytick',labelsize=12)
mpl.rc('axes',titlesize=16)
mpl.rc('axes',labelsize=12)
mpl.rc('axes',edgecolor='grey')
mpl.rc('grid', color='lightgray')
mpl.rc('axes',labelcolor='dimgrey')
mpl.rc('xtick',color='dimgrey')
mpl.rc('ytick',color='dimgrey')
mpl.rc('text',color='dimgrey')
mpl.rc('legend',fontsize=12, frameon=False)
def calc_nse(obs,mod):
"""Nash-Sutcliffe Efficiency"""
return (1 - np.sum((mod-obs)**2)/np.sum((obs-np.mean(obs))**2))
def calc_rmse(obs,mod):
"""Root Mean Square Error"""
return np.sqrt(np.mean((mod-obs)**2))
def calc_kge(obs,mod):
"""Kling-Gupta efficiency https://agrimetsoft.com/calculators/Kling-Gupta%20efficiency"""
term1 = (np.corrcoef(obs, mod)[0,1]-1)**2
term2 = (np.std(mod)/np.std(obs)-1)**2
term3 = (np.mean(mod)/np.mean(obs)-1)**2
return 1-np.sqrt(term1 + term2 + term3)
def calc_bias(obs,mod):
"""Mean bias"""
return np.mean(mod)-np.mean(obs)
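# Quick sanity-check of the skill metrics above (toy arrays, not model output):
#   obs = np.array([1.0, 2.0, 3.0, 4.0])
#   mod = np.array([1.1, 1.9, 3.2, 3.8])
#   calc_nse(obs, mod)   # -> 0.98 (1 means a perfect fit)
#   calc_rmse(obs, mod)  # -> ~0.158
#   calc_kge(obs, mod)   # -> ~0.95 (1 means a perfect fit)
#   calc_bias(obs, mod)  # -> 0.0 (the errors cancel out here)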
def get_rescontrolled_pfafs(ntopo,pfaf_reservoirs,threshold):
"""Get pfafstetter codes of main stream until which the reservoir has influence
based on: 1. river mouth is reached (there is no downstream pfaf code)
2. Next reservoir on stream network is reached
3. length threshold is exceeded (only inlcude segments within threshold)
input: da of river topo, list of pfaf codes of reservoirs and length threshold (in m)
output: list of outlets corresponding to reservoir list"""
import math
count = 1
print('')
print('-------- Finding reservoir influenced streams --------')
    # initialise list with reservoir-dependent stream segments
controlled_pfaf = []
for pfaf_res in pfaf_reservoirs:
print('processing '+str(count)+ ' of '+str(len(pfaf_reservoirs)),end='\r')
# transform ntopo to river topo dataframe
df_ntopo = ntopo[['PFAF','seg_id','Tosegment','Length']].to_dataframe()
df_ntopo['PFAF'] = np.char.strip(df_ntopo['PFAF'].values.astype(str))
# get downstream lookup table
downstream_lookup = get_downstream_PFAF(df_ntopo)
# initialise
pfaf_current = pfaf_res
total_length = 0
outlet_found = False
# travel downstream from reservoir pfaf to identify outlet (end of reservoir influence)
while not outlet_found:
# add current pfaf to list of controlled pfaf
controlled_pfaf.append(pfaf_current)
# next downstream segment
pfaf_down = downstream_lookup[downstream_lookup['PFAF']==pfaf_current]['PFAF_downstream'].values[0]
# res pfaf has no downstream
if math.isnan(float(pfaf_down)):
pfaf_outlet = pfaf_current
outlet_found = True
else:
# add length of downstream segment to total length
total_length = total_length + df_ntopo[df_ntopo['PFAF']==pfaf_down]['Length'].values[0]
# check if pfaf_current is outlet:
# 1. pfaf is river mouth (no outlet downstream)
if math.isnan(float(downstream_lookup[downstream_lookup['PFAF']==pfaf_down]['PFAF_downstream'].values[0])):
pfaf_outlet = pfaf_down
outlet_found = True
controlled_pfaf.append(pfaf_down)
# 2. pfaf is other reservoir
elif pfaf_down in pfaf_reservoirs:
pfaf_outlet = pfaf_down
controlled_pfaf.append(pfaf_down)
outlet_found = True
# 3. length threshold is exceeded for downstream segment (so include current)
elif total_length > threshold:
pfaf_outlet = pfaf_current
outlet_found = True
# move downstream
else:
pfaf_current = pfaf_down
count=count+1
return controlled_pfaf
def get_downstream_PFAF(df_ntopo):
"""Get PFAF code of directly downstream segment
input df of river segments with seg_id and Tosegment """
to_segment = df_ntopo[['PFAF','seg_id']].rename(columns={'seg_id':'Tosegment', "PFAF":"PFAF_downstream"})
df_PFAF_downstream = df_ntopo.merge(to_segment[["PFAF_downstream","Tosegment"]], on='Tosegment',how='left')
return df_PFAF_downstream[["PFAF","PFAF_downstream"]]
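# Illustrative example on a hypothetical three-segment topology (not real data):
#   import pandas as pd
#   df = pd.DataFrame({'PFAF': ['11', '12', '13'],
#                      'seg_id': [1, 2, 3],
#                      'Tosegment': [2, 3, 0]})  # 0 = drains out of the domain
#   get_downstream_PFAF(df)
#   # -> PFAF '11' maps to downstream '12', '12' to '13', '13' to NaN (river mouth)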
### CLM FUNCTIONS
# function to cut the analysis period (1900-2015) out of a data-array
def extract_anaperiod(da, stream, nspinupyears):
if nspinupyears == 0 :
# no spin up
da = da[:-1,:,:]
    elif stream == 'h1':  # this option is still to be tested
# daily timesteps
# last day of previous year is also saved in variable therefore add one
nspinupdays = (nspinupyears * 365) + 1
        # exclude the spin-up years and the last timestep
da = da[nspinupdays:-1,:,:]
else:
# spin up with monthly timestep
        # first month of first year is not saved in the variable, therefore subtract one
nspinupmonths = (nspinupyears * 12) - 1
        # exclude the spin-up years and the last timestep
da = da[nspinupmonths:-1,:,:]
return da | 34.384146 | 124 | 0.609328 | 699 | 5,639 | 4.793991 | 0.320458 | 0.016413 | 0.010743 | 0.014324 | 0.163533 | 0.078186 | 0.043569 | 0.022083 | 0.022083 | 0.022083 | 0 | 0.01658 | 0.283384 | 5,639 | 164 | 125 | 34.384146 | 0.81267 | 0.316368 | 0 | 0.171053 | 0 | 0 | 0.092345 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.039474 | 0 | 0.236842 | 0.039474 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f32c7c7a86c83603395a612781c31e2718a2e153 | 14,611 | py | Python | fhirstore/fhirstore.py | arkhn/pyfhirstore | dd43b6d7db600f95d81dc83ae0a6e6de78ff02c6 | [
"Apache-2.0"
] | 15 | 2019-10-04T14:29:42.000Z | 2021-12-27T09:15:07.000Z | fhirstore/fhirstore.py | arkhn/pyfhirstore | dd43b6d7db600f95d81dc83ae0a6e6de78ff02c6 | [
"Apache-2.0"
] | 34 | 2019-10-08T16:37:26.000Z | 2020-11-30T17:51:59.000Z | fhirstore/fhirstore.py | arkhn/pyfhirstore | dd43b6d7db600f95d81dc83ae0a6e6de78ff02c6 | [
"Apache-2.0"
] | 1 | 2020-12-14T06:13:19.000Z | 2020-12-14T06:13:19.000Z | import sys
import logging
from typing import Union, Dict, Optional
import json
from elasticsearch import Elasticsearch
from elasticsearch.exceptions import (
NotFoundError as ESNotFoundError,
RequestError as ESRequestError,
AuthenticationException as ESAuthenticationException,
)
import pydantic
from pymongo import MongoClient, ASCENDING
from pymongo.errors import DuplicateKeyError
from tqdm import tqdm
import fhirpath
from fhirpath.enums import FHIR_VERSION
from fhirpath.search import SearchContext, Search
from fhir.resources import construct_fhir_element, FHIRAbstractModel
from fhir.resources.operationoutcome import OperationOutcome
from fhir.resources.bundle import Bundle
from fhirstore import ARKHN_CODE_SYSTEMS
from fhirstore.errors import (
FHIRStoreError,
NotSupportedError,
ValidationError,
DuplicateError,
RequiredError,
NotFoundError,
)
from fhirstore.search_engine import ElasticSearchEngine
class FHIRStore:
def __init__(self, mongo_client: MongoClient, es_client: Elasticsearch, db_name: str):
self.es = es_client
self.db = mongo_client[db_name]
self.resources = self.db.list_collection_names()
if self.es and len(self.es.transport.hosts) > 0:
self.search_engine = ElasticSearchEngine(FHIR_VERSION.R4, self.es, db_name)
else:
logging.warning("No elasticsearch client provided, search features are disabled")
@property
def initialized(self):
return len(self.resources) > 0
def reset(self, mongo=True, es=True):
"""
Drops all collections currently in the database.
"""
if mongo and not es:
raise FHIRStoreError("You also need to drop ES indices when resetting mongo")
if mongo:
for collection in self.resources:
self.db.drop_collection(collection)
self.resources = []
if es:
self.search_engine.reset()
def bootstrap(self, resource: Optional[str] = None, show_progress: Optional[bool] = True):
"""
Parses the FHIR json-schema and create the collections according to it.
"""
existing_resources = self.db.list_collection_names()
# Bootstrap elastic indices
self.search_engine.create_es_index(resource=resource)
# Bootstrap mongoDB collections
self.resources = (
[*self.resources, resource] if resource else self.search_engine.mappings.keys()
)
resources = [r for r in self.resources if r not in existing_resources]
if show_progress:
tqdm.write("\n", end="")
resources = tqdm(resources, file=sys.stdout, desc="Bootstrapping collections...")
for resource_type in resources:
self.db.create_collection(resource_type)
# Add unique constraint on id
self.db[resource_type].create_index("id", unique=True)
# Add unique constraint on (identifier.system, identifier.value)
self.db[resource_type].create_index(
[("identifier.value", ASCENDING), ("identifier.system", ASCENDING),],
unique=True,
partialFilterExpression={
"identifier.value": {"$exists": True},
"identifier.system": {"$exists": True},
},
)
def normalize_resource(self, resource: Union[Dict, FHIRAbstractModel]) -> FHIRAbstractModel:
if isinstance(resource, dict):
resource_type = resource.get("resourceType")
if not resource_type:
raise RequiredError("resourceType is missing")
elif resource_type not in self.resources:
raise NotSupportedError(f'unsupported FHIR resource: "{resource_type}"')
return construct_fhir_element(resource.get("resourceType"), resource)
elif not isinstance(resource, FHIRAbstractModel):
raise FHIRStoreError("Provided resource must be of type Union[Dict, FHIRAbstractModel]")
return resource
def create(
self, resource: Union[Dict, FHIRAbstractModel]
) -> Union[FHIRAbstractModel, OperationOutcome]:
"""
Creates a resource. The structure of the resource will be checked
against its json-schema FHIR definition.
Args:
- resource: either a dict with the resource data or a fhir.resources.FHIRAbstractModel
Returns: The created resource as fhir.resources.FHIRAbstractModel.
"""
try:
resource = self.normalize_resource(resource)
except pydantic.ValidationError as e:
return ValidationError(e).format()
except FHIRStoreError as e:
return e.format()
try:
self.db[resource.resource_type].insert_one(json.loads(resource.json()))
except DuplicateKeyError as e:
return DuplicateError(
f"Resource {resource.resource_type} {resource.id} already exists: {e}"
).format()
return resource
def read(self, resource_type, instance_id) -> Union[FHIRAbstractModel, OperationOutcome]:
"""
Finds a resource given its type and id.
Args:
- resource_type: type of the resource (eg: 'Patient')
- id: The expected id is the resource 'id', not the
internal database identifier ('_id').
Returns: The found resource.
"""
if resource_type not in self.resources:
return NotSupportedError(f'unsupported FHIR resource: "{resource_type}"').format()
res = self.db[resource_type].find_one({"id": instance_id}, projection={"_id": False})
if res is None:
return NotFoundError(f"{resource_type} with id {instance_id} not found").format()
return construct_fhir_element(resource_type, res)
def update(self, instance_id, resource) -> Union[FHIRAbstractModel, OperationOutcome]:
"""
        Update a resource given its id and the new resource content. It
        applies a "replace" operation, so the stored resource will be
        overridden. The structure of the updated resource will be checked
        against its json-schema FHIR definition.
        Args:
        - instance_id: The expected id is the resource 'id', not the
        internal database identifier ('_id').
        - resource: The updated resource; its 'resourceType' selects the
        target collection.
Returns: The updated resource.
"""
try:
resource = self.normalize_resource(resource)
except pydantic.ValidationError as e:
return ValidationError(e).format()
except FHIRStoreError as e:
return e.format()
update_result = self.db[resource.resource_type].replace_one(
{"id": instance_id}, json.loads(resource.json())
)
if update_result.matched_count == 0:
return NotFoundError(
f"{resource.resource_type} with id {instance_id} not found"
).format()
return resource
def patch(
self, resource_type, instance_id, patch
) -> Union[FHIRAbstractModel, OperationOutcome]:
"""
Update a resource given its type, id and a patch. It applies
a "patch" operation rather than a "replace", only the fields
specified in the third argument will be updated. The structure
of the updated resource will be checked against its json-schema
FHIR definition.
Args:
- resource_type: type of the resource (eg: 'Patient')
- id: The expected id is the resource 'id', not the
internal database identifier ('_id').
- patch: The patch to be applied on the resource.
Returns: The updated resource.
"""
if resource_type not in self.resources:
return NotSupportedError(f'unsupported FHIR resource: "{resource_type}"').format()
res = self.db[resource_type].find_one({"id": instance_id}, projection={"_id": False})
if res is None:
return NotFoundError(f"{resource_type} with id {instance_id} not found").format()
patched_resource = {**construct_fhir_element(resource_type, res).dict(), **patch}
try:
resource = self.normalize_resource(patched_resource)
except pydantic.ValidationError as e:
return ValidationError(e).format()
except FHIRStoreError as e:
return e.format()
update_result = self.db[resource_type].update_one({"id": instance_id}, {"$set": patch})
if update_result.matched_count == 0:
return NotFoundError(f"{resource_type} with id {instance_id} not found").format()
return resource
def delete(
self, resource_type, instance_id=None, resource_id=None, source_id=None
) -> OperationOutcome:
"""
        Deletes resources given their type and one of: an instance id, a
        resource_id meta tag, or a source_id meta tag.
        Args:
        - resource_type: type of the resource (eg: 'Patient')
        - instance_id: The expected id is the resource 'id', not the
        internal database identifier ('_id').
        - resource_id / source_id: Arkhn meta tags used for bulk deletion.
        Returns: An OperationOutcome reporting how many resources were deleted.
"""
if resource_type not in self.resources:
return NotSupportedError(f'unsupported FHIR resource: "{resource_type}"').format()
if instance_id:
res = self.db[resource_type].delete_one({"id": instance_id})
if res.deleted_count == 0:
return NotFoundError(f"{resource_type} with id {instance_id} not found").format()
elif resource_id:
res = self.db[resource_type].delete_many(
{
"meta.tag": {
"$elemMatch": {
"code": {"$eq": resource_id},
"system": {"$eq": ARKHN_CODE_SYSTEMS.resource},
}
}
}
)
if res.deleted_count == 0:
return NotFoundError(
f"{resource_type} with resource_id {resource_id} not found"
).format()
elif source_id:
res = self.db[resource_type].delete_many(
{
"meta.tag": {
"$elemMatch": {
"code": {"$eq": source_id},
"system": {"$eq": ARKHN_CODE_SYSTEMS.source},
}
}
}
)
if res.deleted_count == 0:
return NotFoundError(
f"{resource_type} with source_id {source_id} not found"
).format()
else:
raise FHIRStoreError("one of: 'instance_id', 'resource_id' or 'source_id' are required")
return OperationOutcome(
issue=[
{
"severity": "information",
"code": "informational",
"diagnostics": f"deleted {res.deleted_count} {resource_type}",
}
]
)
def search(
self, resource_type=None, query_string=None, params=None, as_json=False
) -> Union[Bundle, dict, OperationOutcome]:
"""
        Searches for params inside a resource.
Returns a bundle of items, as required by FHIR standards.
Args:
- resource_type: FHIR resource (eg: 'Patient')
- params: search parameters as returned by the API. For a simple
search, the parameters should be of the type {"key": "value"}
eg: {"gender":"female"}, with possible modifiers {"address.city:exact":"Paris"}.
If a search is made one field with multiple arguments (eg: language is French
OR English), params should be a payload of type {"multiple": {"language":
["French", "English"]}}.
If a search has more than one field queried, params should be a payload of
the form: {"address.city": ["Paris"], "multiple":
{"language": ["French", "English"]}}.
Returns: A bundle with the results of the search, as required by FHIR
search standard.
"""
if resource_type and resource_type not in self.resources:
return NotSupportedError(f'unsupported FHIR resource: "{resource_type}"').format(
as_json
)
search_context = SearchContext(self.search_engine, resource_type)
fhir_search = Search(search_context, query_string=query_string, params=params)
try:
return fhir_search(as_json=as_json)
except ESNotFoundError as e:
return NotFoundError(
f"{e.info['error']['index']} is not indexed in the database yet."
).format(as_json)
        except (ESRequestError, ESAuthenticationException) as e:
            return FHIRStoreError(e.info["error"]["root_cause"]).format(as_json)
except pydantic.ValidationError as e:
return ValidationError(e).format(as_json)
except fhirpath.exceptions.ValidationError as e:
return ValidationError(str(e)).format(as_json)
except NotImplementedError as e:
return NotSupportedError(str(e)).format(as_json)
def upload_bundle(self, bundle) -> Union[None, OperationOutcome]:
"""
Upload a bundle of resource instances to the store.
Args:
- bundle: the fhir bundle containing the resources.
"""
if "resourceType" not in bundle or bundle["resourceType"] != "Bundle":
return FHIRStoreError(
f"input must be a FHIR Bundle resource, got {bundle.get('resourceType')}"
).format()
for entry in bundle["entry"]:
if "resource" not in entry:
return RequiredError("Bundle entry is missing a resource.")
try:
res = self.create(entry["resource"])
if isinstance(res, OperationOutcome):
logging.error(
f"could not upload resource {entry['resource']['resourceType']} "
f"with id {entry['resource']['id']}: {[i.diagnostics for i in res.issue]}"
)
except DuplicateKeyError as e:
logging.warning(f"Document already existed: {e}")
return None
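# Minimal usage sketch (connection details are placeholders, not part of this module):
#   from pymongo import MongoClient
#   from elasticsearch import Elasticsearch
#   store = FHIRStore(MongoClient(), Elasticsearch(["localhost:9200"]), "fhir")
#   if not store.initialized:
#       store.bootstrap()
#   store.create({"resourceType": "Patient", "id": "pat-1", "gender": "female"})
#   store.read("Patient", "pat-1")
#   store.search(resource_type="Patient", params={"gender": "female"})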
| 39.596206 | 100 | 0.605024 | 1,566 | 14,611 | 5.53576 | 0.17433 | 0.063675 | 0.013496 | 0.016611 | 0.409044 | 0.359557 | 0.321952 | 0.312493 | 0.312493 | 0.306264 | 0 | 0.000787 | 0.304497 | 14,611 | 368 | 101 | 39.703804 | 0.852293 | 0.202519 | 0 | 0.287554 | 0 | 0 | 0.147434 | 0.014752 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051502 | false | 0 | 0.081545 | 0.004292 | 0.291845 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f330826045b7dfd8da77b8e1ad7b89465919a84a | 3,684 | py | Python | Binary Tree/7.1-binary_tree.py | neeveermoree/data_structures_and_algorithms | 8aa37cade53539909383fb9d4952b13ca19c931a | [
"MIT"
] | null | null | null | Binary Tree/7.1-binary_tree.py | neeveermoree/data_structures_and_algorithms | 8aa37cade53539909383fb9d4952b13ca19c931a | [
"MIT"
] | null | null | null | Binary Tree/7.1-binary_tree.py | neeveermoree/data_structures_and_algorithms | 8aa37cade53539909383fb9d4952b13ca19c931a | [
"MIT"
] | null | null | null | from utils_queue import Queue
class _Node:
__slots__ = '_val', '_left', '_right'
def __init__(self, val, left=None, right=None):
self._val = val
self._left = left
self._right = right
class BinaryTree:
__slots__ = '_root'
def __init__(self, root=None):
self._root = root
def preorder_traversal(self, node):
if node:
print(node._val, end=' ')
self.preorder_traversal(node._left)
self.preorder_traversal(node._right)
def inorder_traversal(self, node):
if node:
self.inorder_traversal(node._left)
print(node._val, end=' ')
self.inorder_traversal(node._right)
def postorder_traversal(self, node):
if node:
self.postorder_traversal(node._left)
self.postorder_traversal(node._right)
print(node._val, end=' ')
def levelorder_traversal(self, node_list):
if not len(node_list):
return
new_node_list = []
for node in node_list:
if not node:
continue
print(node._val, end=' ')
new_node_list.append(node._left)
new_node_list.append(node._right)
self.levelorder_traversal(new_node_list)
def levelorder_traversal_queue(self):
queue = Queue()
if self._root:
queue.enqueue(self._root)
while not queue.is_empty():
node = queue.dequeue()
if node:
print(node._val, end=' ')
queue.enqueue(node._left)
queue.enqueue(node._right)
def node_count(self, node):
c = 0
if node:
c += 1
c += self.node_count(node._left)
c += self.node_count(node._right)
return c
def level(self, node):
if node:
left_height = self.level(node._left)
right_height = self.level(node._right)
return max(left_height, right_height) + 1
return 0
def height(self, node):
return self.level(node) - 1
"""
Tree structure:
1
2 3
4 5 6 7
"""
node4 = _Node(4)
node5 = _Node(5)
node6 = _Node(6)
node7 = _Node(7)
node2 = _Node(2, node4, node5)
node3 = _Node(3, node6, node7)
node1 = _Node(1, node2, node3)
bt = BinaryTree(node1)
print('Preorder traversal')
bt.preorder_traversal(bt._root)
print('\nInorder traversal')
bt.inorder_traversal(bt._root)
print('\nPostorder traversal')
bt.postorder_traversal(bt._root)
print('\nLevelorder traversal')
bt.levelorder_traversal([bt._root])
print('\nLevelorder traversal queue')
bt.levelorder_traversal_queue()
print('\nNode count: ')
print(bt.node_count(bt._root))
print('Height: ')
print(bt.height(bt._root))
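# Expected output for the tree above:
#   Preorder:   1 2 4 5 3 6 7
#   Inorder:    4 2 5 1 6 3 7
#   Postorder:  4 5 2 6 7 3 1
#   Levelorder: 1 2 3 4 5 6 7 (the queue-based variant prints the same order)
#   Node count: 7, Height: 2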
"""
Another tree structure
1
2 3
4 6
"""
node_4 = _Node(4)
node_2 = _Node(2, left=node_4)
node_6 = _Node(6)
node_3 = _Node(3, right=node_6)
node_1 = _Node(1, node_2, node_3)
bt_ = BinaryTree(node_1)
print('\nPreorder traversal')
bt_.preorder_traversal(bt_._root)
print('\nInorder traversal')
bt_.inorder_traversal(bt_._root)
print('\nPostorder traversal')
bt_.postorder_traversal(bt_._root)
print('\nLevelorder traversal')
bt_.levelorder_traversal([bt_._root])
print('\nLevelorder traversal queue')
bt_.levelorder_traversal_queue()
print('\nNode count: ')
print(bt_.node_count(bt_._root))
print('Height: ')
print(bt_.height(bt_._root))
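# Expected output for this second, incomplete tree:
#   Preorder: 1 2 4 3 6 | Inorder: 4 2 1 3 6 | Postorder: 4 2 6 3 1
#   Levelorder: 1 2 3 4 6, Node count: 5, Height: 2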
| 24.891892 | 57 | 0.580347 | 439 | 3,684 | 4.548975 | 0.15262 | 0.088132 | 0.055083 | 0.08012 | 0.429644 | 0.370556 | 0.305458 | 0.305458 | 0.305458 | 0.305458 | 0 | 0.020784 | 0.307818 | 3,684 | 147 | 58 | 25.061224 | 0.762353 | 0 | 0 | 0.221154 | 0 | 0 | 0.085544 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096154 | false | 0 | 0.009615 | 0.009615 | 0.192308 | 0.221154 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f3334adaaf08994c3d3ed59dde038655efaef8ef | 3,798 | py | Python | src/MATGenerator.py | yumataesu/TouchDesigner-ShaderBuilder | 5f9e8300603afc91cd60bf0c91061d11401520d1 | [
"MIT"
] | 1 | 2022-02-13T13:19:56.000Z | 2022-02-13T13:19:56.000Z | src/MATGenerator.py | yumataesu/TouchDesigner-ShaderBuilder | 5f9e8300603afc91cd60bf0c91061d11401520d1 | [
"MIT"
] | null | null | null | src/MATGenerator.py | yumataesu/TouchDesigner-ShaderBuilder | 5f9e8300603afc91cd60bf0c91061d11401520d1 | [
"MIT"
] | null | null | null | import platform
class MATGenerator:
def __init__(self, ownerComp):
self.ownerComp = ownerComp
self.OUT_glsl_struct = ''
self.OUT_pixel = ''
self.OUT_vertex = ''
self.use_alpha_hashed = False
self.Update(self.ownerComp.op('in1'))
def eval_template(self, tmpl_dat_name, ctx):
t = op.ShaderBuilder.Template(self.ownerComp.op(tmpl_dat_name).text)
return t(ctx)
def Update(self, dat):
c = op.ShaderBuilder.Context.fromJson(dat.text)
self.updateParameters(c)
ctx = c.__dict__.copy()
ctx['shader_builder_path'] = op.ShaderBuilder.path
ctx['alpha_hashed'] = self.use_alpha_hashed
ctx['camera_index_type'] = 'int'
ctx['geom_def'] = self.eval_template('tmpl_geom_def', ctx)
self.OUT_glsl_struct = self.eval_template('tmpl_glsl_struct', ctx)
ctx['camera_index_type'] = 'flat int'
ctx['geom_def'] = self.eval_template('tmpl_geom_def', ctx)
self.OUT_vertex = self.eval_template('tmpl_vertex', ctx)
self.OUT_pixel = self.eval_template('tmpl_pixel', ctx)
return 'ok'
def updateParameters(self, ctx):
o = op('glsl')
# reset all params
for x in o.pars('uniname*'):
x.val = x.default
for x in o.pars('value*'):
x.val = x.default
for x in o.pars('sampler*'):
x.val = x.default
for x in o.pars('top*'):
x.val = x.default
dim = 'xyzw'
for i, k in enumerate(ctx.vector_uniforms):
u = ctx.vector_uniforms[k]
p = o.pars('uniname%i' % i)[0]
p.val = u['name']
for k, x in enumerate(u['pars']):
p = o.pars('value%i%s' % (i, dim[k]))[0]
p.expr = x
for i, k in enumerate(ctx.sampler_uniforms):
u = ctx.sampler_uniforms[k]
p = o.pars('sampler%i' % i)[0]
p.val = u['name']
p = o.pars('top%i' % i)[0]
p.expr = u['top']
### blendmode etc
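        # The short codes below are assumed to be TouchDesigner blend-factor
        # menu tokens: 'sa' = source alpha, 'omsa' = one minus source alpha,
        # 'scol' = source color, 'one'/'zero' = constant factors.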
if ctx.blending:
blendmode = ctx.blending['blendmode']
self.use_alpha_hashed = False
m = blendmode['mode']
if m == 'Disable':
o.par.alphatest = 0
o.par.blending = 0
elif m == 'Alphablend':
o.par.alphatest = 0
o.par.blending = 1
o.par.srcblend = 'sa'
o.par.destblend = 'omsa'
elif m == 'Add':
o.par.alphatest = 0
o.par.blending = 1
o.par.srcblend = 'sa'
o.par.destblend = 'one'
elif m == 'Multiply':
o.par.alphatest = 0
o.par.blending = 1
o.par.srcblend = 'zero'
o.par.destblend = 'scol'
elif m == 'Alphaclip':
o.par.alphatest = 1
o.par.blending = 0
o.par.alphathreshold = blendmode['threshold']
elif m == 'Alphahashed':
o.par.alphatest = 0
o.par.blending = 0
self.use_alpha_hashed = True
o.par.depthtest = ctx.blending['depthtest']
o.par.depthwriting = ctx.blending['depthwriting']
else:
names = ["alphatest", "blending", "srcblend", "destblend", "alphathreshold", "depthtest", "depthwriting"]
for x in names:
p = o.pars(x)[0]
p.val = p.default
if ctx.settings:
o.par.cullface = ctx.settings['cullface']
o.par.polygonoffset = ctx.settings['polygonoffset']
o.par.polygonoffsetfactor = ctx.settings['polygonoffsetfactor']
o.par.polygonoffsetunits = ctx.settings['polygonoffsetunits']
o.par.wireframe = ctx.settings['wireframe']
o.par.wirewidth = ctx.settings['wirewidth']
else:
names = ["cullface", "polygonoffset", "polygonoffsetfactor", "polygonoffsetunits", "wireframe", "wirewidth"]
for x in names:
p = o.pars(x)[0]
p.val = p.default
### deform
names = ['dodeform', 'deformdata', 'targetsop', 'pcaptpath', 'pcaptdata', 'skelrootpath', 'mat']
for x in names:
p = o.pars(x)[0]
p.val = p.default
if ctx.deform:
o.par.dodeform = True
for k in ctx.deform:
p = o.pars(k)[0]
p.val = ctx.deform[k]
| 26.013699 | 112 | 0.614007 | 534 | 3,798 | 4.264045 | 0.219101 | 0.049188 | 0.02108 | 0.043917 | 0.288538 | 0.234958 | 0.21827 | 0.207729 | 0.184014 | 0.153711 | 0 | 0.007177 | 0.229595 | 3,798 | 145 | 113 | 26.193103 | 0.771018 | 0.009742 | 0 | 0.314815 | 0 | 0 | 0.175166 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.009259 | 0 | 0.074074 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f335a88172969ef4b07ebc46cdb0148f28a15f96 | 8,820 | py | Python | mixtas/mixtas/pedidos/views.py | stad-team/stad-mixtas | 68513f247eeedd7f731d18339891146634619af1 | [
"MIT"
] | null | null | null | mixtas/mixtas/pedidos/views.py | stad-team/stad-mixtas | 68513f247eeedd7f731d18339891146634619af1 | [
"MIT"
] | null | null | null | mixtas/mixtas/pedidos/views.py | stad-team/stad-mixtas | 68513f247eeedd7f731d18339891146634619af1 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
STAD TEAM
~~~~~~~~~~
"""
from __future__ import absolute_import, unicode_literals, print_function
import win32print
import win32ui
from datetime import datetime
from .models import Mesas, DetalleOrden, Simbolos, Menu, Folio
from rest_framework.viewsets import ModelViewSet
from rest_framework.permissions import AllowAny
from rest_framework.response import Response
from rest_framework import status
from .serializers import SerializadorMesas, SerializadorSimbolos, SerializadorMenu, SerializadorFolio, \
SerializadorOrden
class CrearObtenerMesasView(ModelViewSet):
serializer_class = SerializadorMesas
queryset = Mesas.objects.all()
permission_classes = (AllowAny,)
def get_queryset(self):
qs = Mesas.objects.all()
floor = self.request.query_params.get('floor')
caja = self.request.query_params.get('caja')
if floor:
qs = qs.filter(location=floor)
if caja:
qs = qs.filter(status=True)
return qs
# class CrearPedidosView(ModelViewSet):
# serializer_class = SerializadorPedidos
# queryset = DetalleOrden.objects.all()
# permission_classes = (AllowAny,)
# def perform_create(self, serializer):
# if not self.request.user.is_staff:
# self.permission_denied(
# self.request, message="You have no permissions for creating a user"
# )
# serializer.save()
class ObtenerSimbolos(ModelViewSet):
serializer_class = SerializadorSimbolos
queryset = Simbolos.objects.all()
permission_classes = (AllowAny,)
class ObtenerMenu(ModelViewSet):
serializer_class = SerializadorMenu
queryset = Menu.objects.all()
permission_classes = (AllowAny,)
class CrearFolio(ModelViewSet):
serializer_class = SerializadorFolio
queryset = Folio.objects.all()
permission_classes = (AllowAny,)
def printCobrar(self):
printFinal = ''
qsFinal = self.request.data.get('qsFinal')
qsTotal = self.request.data.get('qsTotal')
folio = self.request.data.get('folio')
mesero = self.request.data.get('mesero')
mesa = self.request.data.get('mesa')
qsFinal.pop(0)
for orden in qsFinal:
for platillo in orden:
_platillo = platillo.get('platillo')
if 'B-' in _platillo:
_platillo = 'Bebida {0}'.format(platillo.get('platillo').split('-')[1])
elif 'Q-' in _platillo:
_platillo = 'Quesadilla {0}'.format(platillo.get('platillo').split('-')[1])
printFinal += '\n{0} {1} \n\t\t\t\t ${2}'.format(platillo.get('cantidad'), _platillo, platillo.get('precio'))
print_final_caja = """
\t FOLIO: {folio}
MIXTAS EL COSTEÑO
Mesero: {mesero}
Mesa # {mesa}
\r ----------------
Ordenes:
{printFinal}
\t\t\t---- \t
\t\t\t $ {total} \t
\n\tDesarollado por TEAM-ANYOAN\n\t anyoan-team@gmail.com
""".format(
folio=folio,
mesero=mesero,
mesa=mesa,
printFinal=printFinal,
total=qsTotal
)
        # Epson printer for drinks and the cash register
printer = win32print.OpenPrinter('EPSON TM-T88V 1')
jid = win32print.StartDocPrinter(printer, 1, ('TEST DOC', None, 'RAW'))
bytes = win32print.WritePrinter(printer, print_final_caja)
win32print.EndDocPrinter(printer)
win32print.ClosePrinter(printer)
        # cut the receipt paper
hDC = win32ui.CreateDC()
hDC.CreatePrinterDC('EPSON TM-T88V 1')
hDC.StartDoc("Test doc")
hDC.StartPage()
hDC.EndPage()
hDC.EndDoc()
def printOrden(self):
numTortillas = ''
cantidad = 0
totalOrdenTaquero = ''
totalOrdenTotillera = ''
totalOrdenTomar = ''
ordenes = self.request.data.get('ordenesImprimir')
mesa = self.request.data.get('mesa')
mesero = self.request.data.get('nombreMesero')
for orden in ordenes:
last = len(orden) - 1
for index, platillo in enumerate(orden):
cantidad = platillo.split(' ')[0]
if not 'B-' in platillo:
if index == last:
totalOrdenTaquero += '\n{0} \n--------------\n'.format(platillo)
else:
totalOrdenTaquero += '\n{0}'.format(platillo)
if 'Quesadilla' in platillo or 'Q' in platillo:
totalOrdenTotillera += '\n{0} totillas para Quesadillas'.format(cantidad)
elif 'Taco' in platillo or 'Tacos' in platillo or 'T' in platillo:
totalOrdenTotillera += '\n{0} totillas para Tacos'.format(cantidad)
elif 'B-' in platillo:
totalOrdenTomar += '\n{0}'.format(platillo)
        # Epson printer for drinks and the cash register
if totalOrdenTomar != '':
print_final_bebidas = """
\t Mesero: {mesero}
Mesa # {mesa}
\r ----------------
{totalOrdenTomar}
""".format(
mesero=mesero,
mesa=mesa,
totalOrdenTomar=totalOrdenTomar
)
printer = win32print.OpenPrinter('EPSON TM-T88V 1')
jid = win32print.StartDocPrinter(printer, 1, ('TEST DOC', None, 'RAW'))
bytes = win32print.WritePrinter(printer, print_final_bebidas)
win32print.EndDocPrinter(printer)
win32print.ClosePrinter(printer)
            # cut the receipt paper
hDC = win32ui.CreateDC()
hDC.CreatePrinterDC('EPSON TM-T88V 1')
hDC.StartDoc("Test doc")
hDC.StartPage()
hDC.EndPage()
hDC.EndDoc()
if totalOrdenTaquero != '':
print_final_taquero = """
\t Mesero: {mesero}
Mesa # {mesa}
\r ----------------
{totalOrdenTaquero}
""".format(
mesero=mesero,
mesa=mesa,
totalOrdenTaquero=totalOrdenTaquero
)
            # Epson printer for the taco station (taquero)
printer = win32print.OpenPrinter('EPSON TM-T88V 3')
jid = win32print.StartDocPrinter(printer, 1, ('TEST DOC', None, 'RAW'))
bytes = win32print.WritePrinter(printer, print_final_taquero)
win32print.EndDocPrinter(printer)
win32print.ClosePrinter(printer)
            # cut the receipt paper
hDC = win32ui.CreateDC()
hDC.CreatePrinterDC('EPSON TM-T88V 3')
hDC.StartDoc("Test doc")
hDC.StartPage()
hDC.EndPage()
hDC.EndDoc()
if totalOrdenTotillera != '':
print_final_tortillera = """
\t Mesero: {mesero}
Mesa # {mesa}
\r ----------------
{totalOrdenTotillera}
""".format(
mesero=mesero,
mesa=mesa,
totalOrdenTotillera=totalOrdenTotillera
)
            # Epson printer for the tortilla station (tortillera)
printer = win32print.OpenPrinter('EPSON TM-T88V 2')
jid = win32print.StartDocPrinter(printer, 1, ('TEST DOC', None, 'RAW'))
bytes = win32print.WritePrinter(printer, print_final_tortillera)
win32print.EndDocPrinter(printer)
win32print.ClosePrinter(printer)
            # cut the receipt paper
hDC = win32ui.CreateDC()
hDC.CreatePrinterDC('EPSON TM-T88V 2')
hDC.StartDoc("Test doc")
hDC.StartPage()
hDC.EndPage()
hDC.EndDoc()
def perform_create(self, serializer):
self.printOrden()
serializer.save()
def perform_update(self, serializer):
if not self.request.data.get('pagado') and not self.request.data.get('print') == 'imprimir':
self.printOrden()
elif self.request.data.get('print') == 'imprimir':
self.printCobrar()
else:
self.printCobrar()
serializer.save()
def get_queryset(self):
qs = Folio.objects.all()
date = self.request.query_params.get('date')
if date:
objDate = datetime.strptime(date, '%b-%d-%Y')
qs = qs.filter(fecha__gte=objDate)
return qs
class CrearOrden(ModelViewSet):
serializer_class = SerializadorOrden
queryset = DetalleOrden.objects.all()
permission_classes = (AllowAny,)
def get_queryset(self):
qs = DetalleOrden.objects.all()
        # guard before casting: int(None) would raise when the param is missing
        idOrden = self.request.query_params.get('idOrdenMesa')
        if idOrden:
            qs = qs.filter(idOrden__id=int(idOrden)).order_by('cliente')
        return qs
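# Hypothetical helper sketch (not part of the original views): the raw-print and
# paper-cut sequence above is repeated for every station printer and could be
# factored out; printer_name is assumed to be one of the 'EPSON TM-T88V *' queues.
def _raw_print_and_cut(printer_name, text):
    handle = win32print.OpenPrinter(printer_name)
    try:
        win32print.StartDocPrinter(handle, 1, ('TICKET', None, 'RAW'))
        # on Python 3, text must be bytes (e.g. text.encode('cp850'))
        win32print.WritePrinter(handle, text)
        win32print.EndDocPrinter(handle)
    finally:
        win32print.ClosePrinter(handle)
    # opening and closing an empty GDI document mirrors the win32ui cut
    # sequence used in the views above
    hdc = win32ui.CreateDC()
    hdc.CreatePrinterDC(printer_name)
    hdc.StartDoc('cut')
    hdc.StartPage()
    hdc.EndPage()
    hdc.EndDoc()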
| 30.413793 | 125 | 0.566667 | 831 | 8,820 | 5.939832 | 0.216607 | 0.037885 | 0.033428 | 0.040113 | 0.449352 | 0.38513 | 0.307942 | 0.263371 | 0.247569 | 0.247569 | 0 | 0.016084 | 0.316213 | 8,820 | 289 | 126 | 30.519031 | 0.802355 | 0.062245 | 0 | 0.4 | 0 | 0.005 | 0.152335 | 0.005094 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035 | false | 0 | 0.055 | 0 | 0.205 | 0.195 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f337e2d1736dd597f92bbbb31718a4f5323d8684 | 2,656 | py | Python | setup.py | batterseapower/hdf5storage | 9ccec8818e11c2a66a0e834bd2be2e3d3a6761b7 | [
"BSD-2-Clause"
] | null | null | null | setup.py | batterseapower/hdf5storage | 9ccec8818e11c2a66a0e834bd2be2e3d3a6761b7 | [
"BSD-2-Clause"
] | null | null | null | setup.py | batterseapower/hdf5storage | 9ccec8818e11c2a66a0e834bd2be2e3d3a6761b7 | [
"BSD-2-Clause"
] | null | null | null | # Copyright (c) 2013-2020, Freja Nordsiek
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import sys
from setuptools import setup
if sys.hexversion < 0x3050000:
raise NotImplementedError('Python < 3.5 not supported.')
with open('README.rst') as file:
long_description = file.read()
setup(name='hdf5storage',
version='0.2',
description='Utilities to read/write Python types to/from HDF5 files, including MATLAB v7.3 MAT files.',
long_description=long_description,
author='Freja Nordsiek',
author_email='fnordsie@gmail.com',
url='https://github.com/frejanordsiek/hdf5storage',
packages=['hdf5storage'],
install_requires=["setuptools", "numpy", "h5py>=2.3"],
tests_require=['nose>=1.0'],
test_suite='nose.collector',
license='BSD',
keywords='hdf5 matlab',
zip_safe=True,
classifiers=[
"Programming Language :: Python :: 3 :: Only",
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering",
"Topic :: Database",
"Topic :: Software Development :: Libraries :: Python Modules"
]
)
| 42.15873 | 110 | 0.707078 | 330 | 2,656 | 5.666667 | 0.587879 | 0.019251 | 0.018182 | 0.024599 | 0.098396 | 0.072727 | 0.072727 | 0.072727 | 0.072727 | 0.072727 | 0 | 0.017013 | 0.203313 | 2,656 | 62 | 111 | 42.83871 | 0.86673 | 0.486822 | 0 | 0 | 0 | 0.030303 | 0.488407 | 0.016455 | 0 | 0 | 0.006731 | 0 | 0 | 1 | 0 | false | 0 | 0.060606 | 0 | 0.060606 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f338e98f4e08bde2d9d5d0b8509480a59736e419 | 3,192 | py | Python | polaris_health/util/log.py | pjns-lb/polaris-gslb | d5c4f1865ceb4311a6c36c7c6d23462565864e98 | [
"BSD-3-Clause"
] | 225 | 2015-09-02T16:53:34.000Z | 2022-03-19T16:52:32.000Z | polaris_health/util/log.py | pjns-lb/polaris-gslb | d5c4f1865ceb4311a6c36c7c6d23462565864e98 | [
"BSD-3-Clause"
] | 60 | 2015-09-08T09:39:00.000Z | 2022-02-01T10:42:34.000Z | polaris_health/util/log.py | pjns-lb/polaris-gslb | d5c4f1865ceb4311a6c36c7c6d23462565864e98 | [
"BSD-3-Clause"
] | 77 | 2015-09-08T16:23:21.000Z | 2022-03-19T15:57:23.000Z | # -*- coding: utf-8 -*-
import logging
import logging.config
import logging.handlers
from polaris_health import Error, config
__all__ = [ 'setup', 'setup_debug' ]
LOG = logging.getLogger(__name__)
LOG.addHandler(logging.NullHandler())
FORMAT = '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
class DatagramText(logging.handlers.DatagramHandler):
"""Override SocketHandler.emit() to emit plain text messages,
    as opposed to pickled logging.LogRecord objects
"""
def __init__(self, *args, **kwargs):
super(DatagramText, self).__init__(*args, **kwargs)
def emit(self, record):
try:
# original emit() has "s = self.makePickle(record)" here
s = self.format(record).encode()
self.send(s)
except Exception:
self.handleError(record)
def setup():
"""Setup logging"""
level = config.BASE['LOG_LEVEL']
# validate level
if level not in [ 'none', 'debug', 'info', 'warning', 'error' ]:
log_msg = 'Unknown logging level "{}"'.format(level)
LOG.error(log_msg)
raise Error(log_msg)
# do not setup logging if level is 'none'
    if level == 'none':
return
handler = config.BASE['LOG_HANDLER']
# validate handler
if handler not in [ 'syslog', 'datagram' ]:
log_msg = 'Unknown log handler "{}"'.format(handler)
LOG.error(log_msg)
raise Error(log_msg)
hostname = config.BASE['LOG_HOSTNAME']
port = config.BASE['LOG_PORT']
# define common config dict elements
log_config = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'standard': {
'format': FORMAT,
},
},
'handlers': {},
'loggers': {
'': {
'handlers': [ 'syslog' ],
'level': level.upper(),
},
}
}
# add handler specific items
if handler == 'syslog':
log_config['handlers']['syslog'] = {
'class': 'logging.handlers.SysLogHandler',
'formatter': 'standard',
'address': '/dev/log',
}
log_config['loggers']['']['handlers'] = [ 'syslog' ]
elif handler == 'datagram':
log_config['handlers']['datagram'] = {
            'class': 'polaris_health.util.log.DatagramText',
'formatter': 'standard',
'host': hostname,
'port': port,
}
log_config['loggers']['']['handlers'] = [ 'datagram' ]
# initialize logging
logging.config.dictConfig(log_config)
def setup_debug():
"""Setup debug mode logging"""
log_config = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'standard': {
'format': FORMAT,
},
},
'handlers': {
'console': {
'class':'logging.StreamHandler',
'formatter': 'standard',
},
},
'loggers': {
'': {
'handlers': [ 'console' ],
'level': 'DEBUG',
},
}
}
logging.config.dictConfig(log_config)
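# Minimal usage sketch (assumes config.BASE has been populated by the caller):
#   from polaris_health.util import log
#   log.setup()        # honours LOG_LEVEL / LOG_HANDLER / LOG_HOSTNAME / LOG_PORT
#   log.setup_debug()  # alternatively: DEBUG-level logging to the console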
| 23.470588 | 68 | 0.518797 | 292 | 3,192 | 5.530822 | 0.349315 | 0.044582 | 0.034056 | 0.017337 | 0.178328 | 0.1387 | 0.1387 | 0.1387 | 0.101548 | 0.101548 | 0 | 0.001408 | 0.332393 | 3,192 | 135 | 69 | 23.644444 | 0.756452 | 0.114662 | 0 | 0.290698 | 0 | 0 | 0.232808 | 0.047278 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046512 | false | 0 | 0.034884 | 0 | 0.104651 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f3395aa0c5cc601517f789fbb804e0b43eadb2eb | 4,147 | py | Python | osbrain/tests/test_agent_sync_publications_handlers.py | RezaBehzadpour/osbrain | 1b7061bfa6bcfa2176685081fd39c5c971107d51 | [
"Apache-2.0"
] | 176 | 2016-07-12T20:05:32.000Z | 2022-01-18T10:12:07.000Z | osbrain/tests/test_agent_sync_publications_handlers.py | RezaBehzadpour/osbrain | 1b7061bfa6bcfa2176685081fd39c5c971107d51 | [
"Apache-2.0"
] | 358 | 2016-08-04T09:21:35.000Z | 2021-10-15T07:20:07.000Z | osbrain/tests/test_agent_sync_publications_handlers.py | RezaBehzadpour/osbrain | 1b7061bfa6bcfa2176685081fd39c5c971107d51 | [
"Apache-2.0"
] | 50 | 2016-07-17T11:52:36.000Z | 2021-05-10T14:48:45.000Z | """
Test file for synchronized publications handlers.
"""
import pytest
from osbrain import Agent
from osbrain import run_agent
from osbrain.helper import wait_agent_attr
from .common import append_received
class ServerSyncPub(Agent):
def on_init(self):
self.received = []
self.bind('SYNC_PUB', alias='publish', handler='reply')
def reply(self, request):
self.received.append(request)
return 'reply!'
def publish(self):
self.send('publish', 'publication!')
class ClientWithHandler(Agent):
def on_init(self):
self.received = []
self.alternative_received = []
def receive_method(self, response):
self.received.append(response)
def alternative_receive(self, response):
self.alternative_received.append(response)
def test_sync_pub_handler_exists(nsproxy):
"""
When binding a SYNC_PUB socket without a handler, an exception must be
thrown, letting the user know that a handler must be specified.
"""
server = run_agent('server', base=Agent)
with pytest.raises(ValueError) as error:
server.bind('SYNC_PUB', alias='should_crash')
assert 'This socket requires a handler!' in str(error.value)
@pytest.mark.parametrize(
'handler', ['reply', append_received, lambda a, x: a.received.append(x)]
)
def test_sync_pub_handler_types(nsproxy, handler):
"""
When binding a SYNC_PUB socket, we must accept different types of
handlers: methods, functions, lambda expressions...
"""
server = run_agent('server', base=ServerSyncPub)
assert server.bind('SYNC_PUB', alias='should_not_crash', handler=handler)
@pytest.mark.parametrize(
'handler, check_function',
[
('receive_method', False),
(append_received, True),
(lambda a, x: a.received.append(x), False),
],
)
def test_sync_pub_connect_handler_types(nsproxy, handler, check_function):
"""
The handler for the normal PUB/SUB communication is specified in the
`connect` call.
    We should be able to specify this in various ways: methods, functions,
lambda expressions...
"""
server = run_agent('server', base=ServerSyncPub)
client = run_agent('client', base=ClientWithHandler)
addr = server.addr('publish')
client.connect(addr, alias='sub', handler=handler)
server.each(0.01, 'publish')
assert wait_agent_attr(client, length=2, data='publication!')
if check_function:
# Check that the function was not stored as a method for the object
with pytest.raises(AttributeError) as error:
assert client.get_attr('append_received')
assert 'object has no attribute' in str(error.value)
@pytest.mark.parametrize(
'handler, check_function, should_crash',
[
('receive_method', False, False),
(append_received, True, False),
(lambda a, x: a.received.append(x), False, False),
(None, False, True),
],
)
def test_sync_pub_send_handlers(
nsproxy, handler, check_function, should_crash
):
"""
The handler for the requests MUST be specified in the `send` call.
    It can be specified in different ways: methods, functions...
"""
server = run_agent('server', base=ServerSyncPub)
client = run_agent('client', base=ClientWithHandler)
addr = server.addr('publish')
# Use an alternative handler so as to guarantee connection is established
client.connect(addr, alias='sub', handler='alternative_receive')
server.each(0.01, 'publish')
assert wait_agent_attr(
client, name='alternative_received', length=2, data='publication!'
)
if should_crash:
with pytest.raises(ValueError):
client.send('sub', 'request!')
else:
client.send('sub', 'request!', handler=handler)
assert wait_agent_attr(client, length=1)
if check_function:
# Check that the function was not stored as a method for the object
with pytest.raises(AttributeError) as error:
assert client.get_attr('append_received')
assert 'object has no attribute' in str(error.value)
| 30.718519 | 79 | 0.67591 | 518 | 4,147 | 5.281853 | 0.250965 | 0.023026 | 0.019006 | 0.020468 | 0.498904 | 0.44481 | 0.349415 | 0.340643 | 0.26864 | 0.236111 | 0 | 0.002766 | 0.215336 | 4,147 | 134 | 80 | 30.947761 | 0.838045 | 0.195804 | 0 | 0.317073 | 0 | 0 | 0.139155 | 0 | 0 | 0 | 0 | 0 | 0.109756 | 1 | 0.121951 | false | 0 | 0.060976 | 0 | 0.219512 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f3407e32f5f6523d6af3047286ae165f92497d48 | 1,988 | py | Python | webserver.py | ChaunceyXCX/e_gate | 1df172c1293b54ec9d47410b90bf5bd00c43e96b | [
"MIT"
] | null | null | null | webserver.py | ChaunceyXCX/e_gate | 1df172c1293b54ec9d47410b90bf5bd00c43e96b | [
"MIT"
] | null | null | null | webserver.py | ChaunceyXCX/e_gate | 1df172c1293b54ec9d47410b90bf5bd00c43e96b | [
"MIT"
] | null | null | null | import picoweb
import gpio
import network
import ujson
import wlanauto
app = picoweb.WebApp("SafeGate")
@app.route("/")
def index(req, resp):
yield from picoweb.start_response(resp)
htmFile = open('./static/gate.html','r')
for line in htmFile:
yield from resp.awrite(b""+line)
# yield from resp.awrite()
@app.route("/wlancfg")
def wlan_cfg(req, resp):
yield from picoweb.start_response(resp)
htmFile = open('./static/wlancfg.html','r')
for line in htmFile:
yield from resp.awrite(b""+line)
@app.route("/wlanscan")
def wlan_scan(req, resp):
yield from picoweb.start_response(resp, content_type = "application/json")
wlan_sta = network.WLAN(network.STA_IF)
wlans = wlanauto.wlannearby(wlan_sta)
wlans = ujson.dumps(wlans)
yield from resp.awrite(wlans)
@app.route("/wlanconnect")
def wlan_connect(req, resp):
yield from picoweb.start_response(resp, content_type = "application/json")
query_str = req.qs
print(query_str)
param = qs_parse(query_str)
print(param['ssid'])
    is_connected = 1
    try:
        wlan = wlanauto.get_connection(param['ssid'], param['password'])
    except OSError as e:
        print("exception", str(e))
        is_connected = 0
    resp_json = {}
    # report the actual connection outcome instead of always claiming success
    resp_json['isConnected'] = is_connected
resp_json = ujson.dumps(resp_json)
yield from resp.awrite(resp_json)
def qs_parse(qs):
parameters = {}
ampersandSplit = qs.split("&")
for element in ampersandSplit:
equalSplit = element.split("=")
parameters[equalSplit[0]] = equalSplit[1]
return parameters
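# Note: this minimal parser does not URL-decode percent escapes or '+' signs.
# Example: qs_parse("ssid=home&password=1234") -> {'ssid': 'home', 'password': '1234'}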
@app.route("/opengate")
def open_gate(req, resp):
gpio.open_gate()
yield from picoweb.start_response(resp)
yield from resp.awrite("open")
@app.route("/stop")
def stop(req, resp):
gpio.stop()
yield from picoweb.start_response(resp)
yield from resp.awrite("stop")
@app.route("/closegate")
def close_gate(req, resp):
gpio.close_gate()
yield from picoweb.start_response(resp)
yield from resp.awrite("close") | 28 | 78 | 0.678068 | 269 | 1,988 | 4.899628 | 0.275093 | 0.102428 | 0.078907 | 0.115326 | 0.379363 | 0.379363 | 0.379363 | 0.379363 | 0.379363 | 0.379363 | 0 | 0.00185 | 0.184105 | 1,988 | 71 | 79 | 28 | 0.810728 | 0.012072 | 0 | 0.177419 | 0 | 0 | 0.094753 | 0.010698 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0.016129 | 0.080645 | 0 | 0.225806 | 0.048387 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f3416451d8e96fafe38d11507d5a845e008e0c70 | 9,046 | py | Python | dnn_reco/export_model.py | mxmeier/dnn_reco | c26ca45c7e0f9b160a99598d25e29779a674707f | [
"MIT"
] | null | null | null | dnn_reco/export_model.py | mxmeier/dnn_reco | c26ca45c7e0f9b160a99598d25e29779a674707f | [
"MIT"
] | null | null | null | dnn_reco/export_model.py | mxmeier/dnn_reco | c26ca45c7e0f9b160a99598d25e29779a674707f | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import division, print_function
import os
import shutil
import glob
import click
import ruamel.yaml as yaml
import tensorflow as tf
from dnn_reco import misc
from dnn_reco.setup_manager import SetupManager
from dnn_reco.data_handler import DataHandler
from dnn_reco.data_trafo import DataTransformer
from dnn_reco.model import NNModel
@click.command()
@click.argument('config_files', type=click.Path(exists=True), nargs=-1)
@click.option('--output_folder', '-o', default=None,
help='folder to which the model will be exported')
@click.option('--data_settings', '-s', default=None,
help='Config file used to create training data')
@click.option('--logs/--no-logs', default=True,
help='Export tensorflow log files.')
def main(config_files, output_folder, data_settings, logs):
"""Script to export dnn reco model.
Parameters
----------
config_files : list of strings
List of yaml config files.
"""
# Check paths and define output names
if not os.path.isdir(output_folder):
print('Creating directory: {!r}'.format(output_folder))
os.makedirs(output_folder)
else:
if len(os.listdir(output_folder)) > 0:
if click.confirm("Directory already exists and contains files! "
"Delete {!r}?".format(output_folder),
default=False):
shutil.rmtree(output_folder)
os.makedirs(output_folder)
else:
raise ValueError('Aborting!')
# read in and combine config files and set up
setup_manager = SetupManager(config_files)
config = setup_manager.get_config()
# Create Data Handler object
data_handler = DataHandler(config)
data_handler.setup_with_test_data(config['training_data_file'])
# create data transformer
data_transformer = DataTransformer(
data_handler=data_handler,
treat_doms_equally=config['trafo_treat_doms_equally'],
normalize_dom_data=config['trafo_normalize_dom_data'],
normalize_label_data=config['trafo_normalize_label_data'],
normalize_misc_data=config['trafo_normalize_misc_data'],
log_dom_bins=config['trafo_log_dom_bins'],
log_label_bins=config['trafo_log_label_bins'],
log_misc_bins=config['trafo_log_misc_bins'],
norm_constant=config['trafo_norm_constant'])
# load trafo model from file
data_transformer.load_trafo_model(config['trafo_model_path'])
# create NN model
model = NNModel(is_training=True,
config=config,
data_handler=data_handler,
data_transformer=data_transformer)
# compile model: define loss function and optimizer
model.compile()
# -------------------------
# Export latest checkpoints
# -------------------------
checkpoint_dir = os.path.dirname(config['model_checkpoint_path'])
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
if latest_checkpoint is None:
raise ValueError('Could not find a checkpoint. Aborting export!')
else:
for ending in ['.index', '.meta', '.data-00000-of-00001']:
shutil.copy2(src=latest_checkpoint + ending,
dst=output_folder)
shutil.copy2(src=os.path.join(checkpoint_dir, 'checkpoint'),
dst=output_folder)
# -----------------------------
# read and export data settings
# -----------------------------
export_data_settings(data_settings=data_settings,
output_folder=output_folder)
# -----------------------------
# Export trafo model and config
# -----------------------------
base_name = os.path.basename(config['trafo_model_path'])
if '.' in base_name:
file_ending = base_name.split('.')[-1]
base_name = base_name.replace('.' + file_ending, '')
shutil.copy2(src=config['trafo_model_path'],
dst=os.path.join(output_folder, 'trafo_model.npy'))
shutil.copy2(src=os.path.join(os.path.dirname(config['trafo_model_path']),
'config_trafo__{}.yaml'.format(base_name)),
dst=os.path.join(output_folder, 'config_trafo.yaml'))
# ----------------------------
# Export training config files
# ----------------------------
checkpoint_directory = os.path.dirname(config['model_checkpoint_path'])
training_files = glob.glob(os.path.join(checkpoint_directory,
'config_training_*.yaml'))
for training_file in training_files:
shutil.copy2(src=training_file,
dst=os.path.join(output_folder,
os.path.basename(training_file)))
shutil.copy2(src=os.path.join(checkpoint_directory, 'training_steps.yaml'),
dst=os.path.join(output_folder, 'training_steps.yaml'))
# ----------------------
# Export model meta data
# ----------------------
# Export all the information that the datahandler and data trafo collect
# via the test file
# ToDo: implement DataHandler.setup_with_config(config_meta_data.yaml)
# (instead of DataHandler.setup_with_data_container)
meta_data = {
'label_names': data_handler.label_names,
'label_name_dict': data_handler.label_name_dict,
'label_shape': data_handler.label_shape,
'num_labels': data_handler.num_labels,
'misc_names': data_handler.misc_names,
'misc_name_dict': data_handler.misc_name_dict,
'misc_data_exists': data_handler.misc_data_exists,
'misc_shape': data_handler.misc_shape,
'num_misc': data_handler.num_misc,
}
with open(os.path.join(output_folder, 'config_meta_data.yaml'), 'w') as f:
yaml.dump(meta_data, f, default_flow_style=False)
# ------------------------------------
# Export package versions and git hash
# ------------------------------------
version_control = {
'git_short_sha': config['git_short_sha'],
'git_sha': config['git_sha'],
'git_origin': config['git_origin'],
'git_uncommited_changes': config['git_uncommited_changes'],
'pip_installed_packages': config['pip_installed_packages'],
}
with open(os.path.join(output_folder, 'version_control.yaml'), 'w') as f:
yaml.dump(version_control, f, default_flow_style=False)
# -------------------------------
# Export tensorflow training logs
# -------------------------------
if logs:
log_directory = os.path.dirname(config['log_path'])
shutil.copytree(src=log_directory,
dst=os.path.join(output_folder, 'logs'))
print('\n====================================')
print('= Successfully exported model to: =')
print('====================================')
print('{!r}\n'.format(output_folder))
def export_data_settings(data_settings, output_folder):
"""Read and export data settings.
Parameters
----------
data_settings : str
Path to config file that was used to create the training data.
output_folder : str
Path to model output directory to which the exported model will be
written to.
"""
try:
with open(data_settings, 'r') as stream:
data_config = yaml.safe_load(stream)
except Exception as e:
print(e)
print('Falling back to modified SafeLoader')
with open(data_settings, 'r') as stream:
yaml.SafeLoader.add_constructor('tag:yaml.org,2002:python/unicode',
lambda _, node: node.value)
data_config = dict(yaml.safe_load(stream))
for k in ['pulse_time_quantiles', 'pulse_time_binning',
'autoencoder_settings', 'autoencoder_encoder_name']:
if k not in data_config or data_config[k] is None:
data_config[k] = None
for k in ['pulse_time_quantiles', 'pulse_time_binning']:
if data_config[k] is not None:
data_config[k] = list(data_config[k])
data_settings = {}
data_settings['num_bins'] = data_config['num_data_bins']
data_settings['relative_time_method'] = data_config['relative_time_method']
data_settings['data_format'] = data_config['pulse_data_format']
data_settings['time_bins'] = data_config['pulse_time_binning']
data_settings['time_quantiles'] = data_config['pulse_time_quantiles']
data_settings['autoencoder_settings'] = data_config['autoencoder_settings']
data_settings['autoencoder_name'] = data_config['autoencoder_encoder_name']
with open(os.path.join(output_folder,
'config_data_settings.yaml'), 'w') as f:
yaml.dump(data_settings, f, default_flow_style=False)
if __name__ == '__main__':
main()
| 40.565022 | 79 | 0.612425 | 1,044 | 9,046 | 5.021073 | 0.210728 | 0.054941 | 0.022892 | 0.024418 | 0.185807 | 0.162724 | 0.10187 | 0.028997 | 0.015261 | 0 | 0 | 0.003473 | 0.236016 | 9,046 | 222 | 80 | 40.747748 | 0.755028 | 0.162282 | 0 | 0.076389 | 0 | 0 | 0.219251 | 0.063102 | 0 | 0 | 0 | 0.004505 | 0 | 1 | 0.013889 | false | 0 | 0.083333 | 0 | 0.097222 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f341934b753020a83775024239d87aaaed4b7ee7 | 11,379 | py | Python | neuralparticles/scripts/run_punet.py | senliontec/NeuralParticles | 8ede22bfb43e60be175b9cef19045c1c7b1ffb73 | [
"MIT"
] | null | null | null | neuralparticles/scripts/run_punet.py | senliontec/NeuralParticles | 8ede22bfb43e60be175b9cef19045c1c7b1ffb73 | [
"MIT"
] | null | null | null | neuralparticles/scripts/run_punet.py | senliontec/NeuralParticles | 8ede22bfb43e60be175b9cef19045c1c7b1ffb73 | [
"MIT"
] | null | null | null | import numpy as np
import os  # used below for paths and CUDA_VISIBLE_DEVICES (previously only reachable via the wildcard import)
import h5py
import keras
import keras.backend as K
from glob import glob
import json
import math, scipy
from scipy.optimize import linear_sum_assignment
import time
from collections import OrderedDict
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from neuralparticles.tensorflow.models.PUNet import PUNet
from neuralparticles.tools.data_helpers import PatchExtractor, get_data_pair, extract_particles, in_bound, get_data, get_nearest_idx
from keras.layers import Input, multiply, concatenate, Conv1D, Lambda, add, Dropout, Dense, Reshape, RepeatVector, Flatten, Permute
from keras.models import Model, load_model
from neuralparticles.tools.uniio import writeParticlesUni, readNumpyOBJ
from neuralparticles.tools.plot_helpers import plot_particles, write_csv
from neuralparticles.tensorflow.tools.eval_helpers import eval_frame, eval_patch
from neuralparticles.tools.param_helpers import *
from neuralparticles.tensorflow.losses.tf_approxmatch import emd_loss, approx_match
#python -m neuralparticles.scripts.run_punet data data/3D_data/ test data/Teddy/ config config_3d/version_00.txt real 1 corr 0 res 120
dst_path = getParam("dst", "")
data_path = getParam("data", "data/")
config_path = getParam("config", "config/version_00.txt")
test_path = getParam("test", "test/")
real = int(getParam("real", 0)) != 0
corr = int(getParam("corr", 1)) != 0
verbose = int(getParam("verbose", 0)) != 0
gpu = getParam("gpu", "")
t_int = int(getParam("t_int", 1))
temp_coh_dt = float(getParam("temp_coh_dt", 0))
out_res = int(getParam("res", -1))
checkpoint = int(getParam("checkpoint", -1))
patch_pos = np.fromstring(getParam("patch", ""),sep=",")
if len(patch_pos) == 2:
patch_pos = np.append(patch_pos, [0.5])
checkUnusedParams()
if dst_path == "":
dst_path = data_path + "result/"
if not os.path.exists(dst_path):
os.makedirs(dst_path)
if gpu != "":
os.environ["CUDA_VISIBLE_DEVICES"] = gpu
with open(config_path, 'r') as f:
config = json.loads(f.read())
with open(os.path.dirname(config_path) + '/' + config['data'], 'r') as f:
data_config = json.loads(f.read())
with open(os.path.dirname(config_path) + '/' + config['preprocess'], 'r') as f:
pre_config = json.loads(f.read())
with open(os.path.dirname(config_path) + '/' + config['train'], 'r') as f:
train_config = json.loads(f.read())
if verbose:
print("Config Loaded:")
print(config)
print(data_config)
print(pre_config)
print(train_config)
dst_path += "%s_%s/" % (test_path.split("/")[-2], config['id'])
if verbose:
print(dst_path)
pad_val = pre_config['pad_val']
dim = data_config['dim']
factor_d = math.pow(pre_config['factor'], 1/dim)
factor_d = np.array([factor_d, factor_d, 1 if dim == 2 else factor_d])
patch_size = pre_config['patch_size'] * data_config['res'] / factor_d[0]
patch_size_ref = pre_config['patch_size_ref'] * data_config['res']
par_cnt = pre_config['par_cnt']
par_cnt_dst = pre_config['par_cnt_ref']
hres = 200  # data_config['res']
res = int(hres/factor_d[0])
if out_res < 0:
out_res = hres
bnd = data_config['bnd']
half_ps = patch_size_ref//2
features = train_config['features']
if checkpoint > 0:
model_path = data_path + "models/checkpoints/%s_%s_%02d.h5" % (data_config['prefix'], config['id'], checkpoint)
else:
model_path = data_path + "models/%s_%s_trained.h5" % (data_config['prefix'], config['id'])
config_dict = {**data_config, **pre_config, **train_config}
punet = PUNet(**config_dict)
punet.load_model(model_path)
print(model_path)
if real:
src_samples = glob(test_path + "real/*.obj")
src_samples.sort()
else:
src_samples = glob(test_path + "source/*.obj")
src_samples.sort()
ref_samples = glob(test_path + "reference/*.obj")
ref_samples.sort()
positions = None
tmp_path = dst_path
if not os.path.exists(tmp_path):
os.makedirs(tmp_path)
if len(patch_pos) == 3:
tmp_path += "patch_%d-%d-%d/" % (patch_pos[0],patch_pos[1],patch_pos[2])
if not os.path.exists(tmp_path):
os.makedirs(tmp_path)
if corr:
data = None
ref_data = None
else:
data = []
ref_data = []
plot_z = 0
for i,item in enumerate(src_samples):
d = readNumpyOBJ(item)[0]
if not real:
d_ref = readNumpyOBJ(ref_samples[i])[0]
plot_z += np.mean(d[:,2])
if data is None:
data = np.empty((len(src_samples), d.shape[0], 6))
if not real:
ref_data = np.empty((len(ref_samples), d_ref.shape[0], 3))
if corr:
data[i,:,:3] = d
else:
data.append(d)
if not real:
if corr:
ref_data[i] = d_ref
else:
ref_data.append(d_ref)
print(np.max(data))
if not real:  # ref_data is None/empty for real captures, so np.max would fail
    print(np.max(ref_data))
plot_z/=len(data)
print(plot_z)
'''data[...,:3] -= np.min(data[...,:3],axis=(0,1))
data[...,:3] *= (res - 2 * data_config['bnd']) / np.max(data[...,:3])
data[...,:3] += data_config['bnd']'''
"""
def scale_data(data, min_v, max_v, res, bnd):
data -= min_v
data *= (res - 2 * bnd) / max_v
data += bnd
return data
min_v = np.min(ref_data[...,:3],axis=(0,1))
max_v = np.max(ref_data[...,:3])
"""
src_data_n = None
vel = None
for i,item in enumerate(data):
if i % t_int != 0:
continue
print("Frame: %d" % i)
src_data = item[...,:3]
par_aux = {}
if i+1 < len(data):
src_data_n = data[i+1][...,:3]
if corr:
par_aux['v'] = (src_data_n - src_data) * data_config['fps']
else:
par_aux['v'] = np.expand_dims(src_data_n, axis=0) - np.expand_dims(src_data, axis=1)
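            # approx_match gives a soft (approximate EMD) assignment between
            # this frame and the next; the weighted sum below collapses the
            # pairwise displacement candidates into one velocity per particle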
match = K.eval(approx_match(K.constant(np.expand_dims(src_data_n, 0)), K.constant(np.expand_dims(src_data, 0))))[0]
par_aux['v'] = np.sum(np.expand_dims(match, -1) * par_aux['v'], axis=1) * data_config['fps']
#print(par_aux['v'].shape)
#print(np.mean(np.sqrt(np.linalg.norm(src_data - np.dot(match, src_data_n), axis=-1))))
#print(K.eval(emd_loss(K.constant(np.expand_dims(src_data_n, 0)), K.constant(np.expand_dims(src_data + par_aux['v']/data_config['fps'], 0)))))
#print(K.eval(emd_loss(K.constant(np.expand_dims(src_data_n, 0)), K.constant(np.expand_dims(src_data, 0)))))
else:
src_data_n = data[i-1][...,:3]
if corr:
par_aux['v'] = (src_data - src_data_n) * data_config['fps']
else:
par_aux['v'] = -np.expand_dims(src_data_n, axis=0) + np.expand_dims(src_data, axis=1)
match = K.eval(approx_match(K.constant(np.expand_dims(src_data_n, 0)), K.constant(np.expand_dims(src_data, 0))))[0]
par_aux['v'] = np.sum(np.expand_dims(match, -1) * par_aux['v'], axis=1) * data_config['fps']
#match = approx_match(pred, gt*zero_mask(gt, self.pad_val))
#cost = np.linalg.norm(np.expand_dims(src_data_n, axis=0) - np.expand_dims(src_data, axis=1), axis=-1)
#row_ind, col_ind = linear_sum_assignment(cost)
#print(row_ind.shape)
#print(col_ind.shape)
vel = par_aux['v']
par_aux['d'] = np.ones((item.shape[0],1))*1000
par_aux['p'] = np.ones((item.shape[0],1))*1000
print(np.mean(par_aux['v'],axis=0))
print(np.mean(np.linalg.norm(par_aux['v'],axis=-1)))
print(np.max(np.linalg.norm(par_aux['v'],axis=-1)))
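    # extract fixed-size particle patches around the surface patch centers;
    # passing last_pos keeps patch positions temporally coherent across frames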
patch_extractor = PatchExtractor(src_data, np.zeros((1 if dim == 2 else int(out_res/factor_d[0]), int(out_res/factor_d[0]), int(out_res/factor_d[0]),1)), patch_size, par_cnt, pre_config['surf'], 0 if len(patch_pos) == 3 else 2, aux_data=par_aux, features=features, pad_val=pad_val, bnd=bnd, last_pos=positions, stride_hys=1.0)
if len(patch_pos) == 3:
idx = get_nearest_idx(patch_extractor.positions, patch_pos)
patch = patch_extractor.get_patch(idx, False)
plot_particles(patch_extractor.positions, [0,int(out_res/factor_d[0])], [0,int(out_res/factor_d[0])], 5, tmp_path + "patch_centers_%03d.png"%i, np.array([patch_extractor.positions[idx]]), np.array([patch_pos]), z=patch_pos[2] if dim == 3 else None)
patch_pos = patch_extractor.positions[idx] + par_aux['v'][patch_extractor.pos_idx[idx]] / data_config['fps']
result = eval_patch(punet, [np.array([patch])], tmp_path + "result_%s" + "_%03d"%i, z=None if dim == 2 else 0, verbose=3 if verbose else 1)
hdr = OrderedDict([ ('dim',len(result)),
('dimX',int(patch_size_ref)),
('dimY',int(patch_size_ref)),
('dimZ',1 if dim == 2 else int(patch_size_ref)),
('elementType',0),
('bytesPerElement',16),
('info',b'\0'*256),
('timestamp',(int)(time.time()*1e6))])
result = (result + 1) * 0.5 * patch_size_ref
if dim == 2:
result[..., 2] = 0.5
writeParticlesUni(tmp_path + "result_%03d.uni"%i, hdr, result)
src = (patch[...,:3] + 1) * 0.5 * patch_size
if dim == 2:
src[..., 2] = 0.5
hdr['dim'] = len(src)
hdr['dimX'] = int(patch_size)
hdr['dimY'] = int(patch_size)
writeParticlesUni(tmp_path + "source_%03d.uni"%i, hdr, src)
if not real:
ref_patch = extract_particles(ref_data[i], patch_pos * factor_d, par_cnt_dst, half_ps, pad_val)[0]
hdr['dim'] = len(ref_patch)
ref_patch = (ref_patch + 1) * 0.5 * patch_size_ref
if dim == 2:
ref_patch[..., 2] = 0.5
writeParticlesUni(tmp_path + "reference_%03d.uni"%i, hdr, ref_patch)
print("particles: %d -> %d (fac: %.2f)" % (np.count_nonzero(patch[...,0] != pre_config['pad_val']), len(result), (len(result)/np.count_nonzero(patch[...,0] != pre_config['pad_val']))))
else:
positions = (patch_extractor.positions + par_aux['v'][patch_extractor.pos_idx] / data_config['fps'])
plot_particles(patch_extractor.positions, [0,int(out_res/factor_d[0])], [0,int(out_res/factor_d[0])], 5, tmp_path + "patch_centers_%03d.png"%i, z=plot_z if dim == 3 else None)
result = eval_frame(punet, patch_extractor, factor_d[0], tmp_path + "result_%s" + "_%03d"%i, src_data, par_aux, None, out_res, z=None if dim == 2 else out_res//2, verbose=3 if verbose else 1)
hdr = OrderedDict([ ('dim',len(result)),
('dimX',hres),
('dimY',hres),
('dimZ',1 if dim == 2 else hres),
('elementType',0),
('bytesPerElement',16),
('info',b'\0'*256),
('timestamp',(int)(time.time()*1e6))])
writeParticlesUni(tmp_path + "result_%03d.uni"%i, hdr, result * hres / out_res)
if not real:
hdr['dim'] = len(ref_data[i])
writeParticlesUni(tmp_path + "reference_%03d.uni"%i, hdr, ref_data[i] * hres / out_res)
hdr['dim'] = len(src_data)
hdr['dimX'] = res
hdr['dimY'] = res
if dim == 3: hdr['dimZ'] = res
writeParticlesUni(tmp_path + "source_%03d.uni"%i, hdr, src_data * hres / out_res)
print("particles: %d -> %d (fac: %.2f)" % (len(src_data), len(result), (len(result)/len(src_data))))
| 37.065147 | 330 | 0.61508 | 1,722 | 11,379 | 3.842044 | 0.138211 | 0.031741 | 0.016929 | 0.031741 | 0.370012 | 0.330411 | 0.294135 | 0.279625 | 0.27237 | 0.214631 | 0 | 0.024215 | 0.2161 | 11,379 | 306 | 331 | 37.186275 | 0.717489 | 0.06635 | 0 | 0.222222 | 0 | 0 | 0.079402 | 0.01172 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.101852 | 0 | 0.101852 | 0.074074 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f3437fdf05ac2adaef08bbdcb3bffd59e965cdee | 1,120 | py | Python | client/src/service/send_cheat_state.py | y-yu/qrand | b041c3c9cccaf20ee24a0ad90c81b89d3dc753bf | [
"MIT"
] | 3 | 2020-02-02T09:04:21.000Z | 2020-02-09T07:25:59.000Z | client/src/service/send_cheat_state.py | y-yu/qrand | b041c3c9cccaf20ee24a0ad90c81b89d3dc753bf | [
"MIT"
] | null | null | null | client/src/service/send_cheat_state.py | y-yu/qrand | b041c3c9cccaf20ee24a0ad90c81b89d3dc753bf | [
"MIT"
] | null | null | null | from ..repository import quantum, qrand_api_caller
from random import Random
from qulacs import QuantumState
from qulacs.gate import H
# Service that lets the client cheat.
class PostCheatStateService:
def __init__(
self,
random_impl: Random,
send_qubit_impl: qrand_api_caller.QRandApiRepository,
):
self.random_impl = random_impl
self.send_qubit_impl = send_qubit_impl
        # Prepare the two cheat qubits |+> and |->.
h_gate = H(0)
s1 = QuantumState(1)
s1.set_computational_basis(0)
h_gate.update_quantum_state(s1)
s2 = QuantumState(1)
s2.set_computational_basis(1)
h_gate.update_quantum_state(s2)
self.psi = [s1, s2]
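        # H|0> = |+> and H|1> = |->; each yields 0 or 1 with probability 1/2
        # when measured in the computational basis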
    # Execute the cheat.
    # Instead of the legitimate single qubit, send |+> or |->.
def post_cheat(self) -> object:
qubit = self.random_impl.choice(self.psi)
response = self.send_qubit_impl.send_measure(qubit.get_vector())
        # a and x are chosen as the cheat progresses, so they are not returned to the client.
return {
'is_cheating': True,
'b': response.json()['b'],
'session': response.cookies['session']
}
| 27.317073 | 72 | 0.63125 | 128 | 1,120 | 5.25 | 0.453125 | 0.059524 | 0.077381 | 0.059524 | 0.130952 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01827 | 0.266964 | 1,120 | 40 | 73 | 28 | 0.800244 | 0.111607 | 0 | 0 | 0 | 0 | 0.0273 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f347cd2141e29b71f96f81b6d7df95e5fa292a36 | 1,655 | py | Python | ecc_attack_code/ecc_attack.py | gbanegas/SDCECC | 987017be79448b2a0d786b9fe9f5f1b99aa14e1f | [
"Apache-2.0"
] | null | null | null | ecc_attack_code/ecc_attack.py | gbanegas/SDCECC | 987017be79448b2a0d786b9fe9f5f1b99aa14e1f | [
"Apache-2.0"
] | null | null | null | ecc_attack_code/ecc_attack.py | gbanegas/SDCECC | 987017be79448b2a0d786b9fe9f5f1b99aa14e1f | [
"Apache-2.0"
] | null | null | null | import random
import math
from itertools import product
from ecc_types import *
def bitfield(n):
return [int(digit) for digit in bin(n)[2:]]
class AttackECC:
def __init__(self, ecc, gen_ed25519):
self.ecc = ecc
self.gen_data = gen_ed25519
self.to_divide = float(2**self.ecc.get_k())
self.m_0 = 5
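        # m_0 is the Hamming-distance threshold used by __diff__ when
        # filtering candidate (a_j, b_j) pairs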
    def d_solve(self):
        w = 0
        w_prime = 10
        while w < self.ecc.get_k():
            a_j, b_j = self.generate_aj_bj(w, w_prime)
            # d_candidates = self.select_candidates(pairs_aj_bj)
            # NOTE: the original never advanced w (an infinite loop);
            # a step of 1 is an assumed fix.
            w += 1
    def generate_aj_bj(self, w, w_prime):
        a_j = []
        b_j = []
        # fetch the v values once so a and b come from the same draw (the
        # original called generate_v_values() separately for a and for b)
        v_values = self.gen_data.generate_v_values()
        for i in range(self.ecc.N):
            a = math.floor(v_values[i] / self.to_divide) % (2 ** w)
            b = v_values[i] % (2 ** w)
            if self.__diff__(a, b, w_prime):
                print("storing")
                a_j.append(a)
                b_j.append(b)
        return a_j, b_j
def select_candidates(self, pairs):
raise NotImplementedError()
    def __diff__(self, a, b, w_prime):
        # reverse bin()'s MSB-first digits to LSB-first so the zero-padding
        # below and the low-order-bit comparison line up (assumed intent; the
        # original compared MSB-first lists padded at the end)
        bit_a = bitfield(a)[::-1]
        bit_b = bitfield(b)[::-1]
        print("a =", a)
        print("bit_a =", bit_a)
        print("b =", b)  # the original printed a here by mistake
        print("bit_b =", bit_b)
        if len(bit_a) > len(bit_b):
            bit_b = bit_b + [0] * (len(bit_a) - len(bit_b))
        else:
            bit_a = bit_a + [0] * (len(bit_b) - len(bit_a))
        diff = 0
        for i in range(w_prime):
            if i < len(bit_a) and i < len(bit_b):
                if bit_a[i] != bit_b[i]:
                    diff += 1
        return self.m_0 > diff
| 27.583333 | 104 | 0.524471 | 255 | 1,655 | 3.121569 | 0.254902 | 0.050251 | 0.035176 | 0.015075 | 0.103015 | 0.103015 | 0.067839 | 0 | 0 | 0 | 0 | 0.021978 | 0.340181 | 1,655 | 59 | 105 | 28.050847 | 0.70696 | 0.030211 | 0 | 0 | 0 | 0 | 0.01995 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12766 | false | 0 | 0.085106 | 0.021277 | 0.297872 | 0.106383 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f349db6f7d61685b49d49c7f0b47f4281e88cd8e | 2,913 | py | Python | tests/utilities.py | Terrencebosco/lambdata-dspt7-tb | 9a5be4e6e0fea1801393253221fcca0511ded83c | [
"MIT"
] | null | null | null | tests/utilities.py | Terrencebosco/lambdata-dspt7-tb | 9a5be4e6e0fea1801393253221fcca0511ded83c | [
"MIT"
] | null | null | null | tests/utilities.py | Terrencebosco/lambdata-dspt7-tb | 9a5be4e6e0fea1801393253221fcca0511ded83c | [
"MIT"
] | null | null | null | import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
# helper function for dropping columns with nan values.
def drop_high_nan(df, num_nans):
    '''
    Drop columns whose NaN count exceeds a chosen threshold.

    df = selected dataframe
    num_nans = number of NaNs a column may have before it is dropped
    '''
    for col in df.columns.to_list():
if df[col].isnull().sum() > num_nans:
df = df.drop(col, axis=1)
return df
def num_nans(df):
"""
print the number of nans for your dataframe.
"""
return print(df.isnull().sum())
def enlarge(n):
"""
Param n is a number
Function will enlarge the number
"""
return n * 100
def date_splitter(dataframe, date_column_name):
"""
Takes a passed in dataframe and converts the date feature into a Datetime
column, then extracts the years, months and days to separate features.
"""
dataframe[date_column_name] = pd.to_datetime(
dataframe[date_column_name],
infer_datetime_format=True
)
dataframe['Year'] = dataframe[date_column_name].dt.year
dataframe['Month'] = dataframe[date_column_name].dt.month
dataframe['Day'] = dataframe[date_column_name].dt.day
dataframe.drop(date_column_name, axis=1, inplace=True)
return dataframe
def train_validation_test_split(df, features, target,
train_size=0.7, val_size=0.1,
test_size=0.2, random_state=None,
shuffle=True):
'''
This function is a utility wrapper around the Scikit-Learn train_test_split
that splits arrays or matrices into train, validation, and test subsets.
Args:
        df (Pandas DataFrame): Dataframe containing the data.
X (list): A list of features.
y (str): A string with target column.
train_size (float|int): Proportion of data for train split (0 to 1).
val_size (float|int): Proportion of data for validation split (0 to 1).
test_size (float or int): Proportion of data for test split (0 to 1).
random_state (int): Controls the shuffling applied to the data before
applying the split for reproducibility.
shuffle (bool): Whether or not to shuffle the data before splitting
Returns:
Train, test, and validation dataframes for features (X) and target (y).
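    Example (hypothetical column names):
        X_train, X_val, X_test, y_train, y_val, y_test = \
            train_validation_test_split(df, ['age', 'income'], 'label')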
'''
X = df[features]
y = df[target]
X_train_val, X_test, y_train_val, y_test = train_test_split(
X, y, test_size=test_size, random_state=random_state, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=val_size / (train_size + val_size),
random_state=random_state, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test | 31.663043 | 79 | 0.645726 | 413 | 2,913 | 4.372881 | 0.300242 | 0.03876 | 0.054264 | 0.076412 | 0.147841 | 0.078627 | 0.078627 | 0 | 0 | 0 | 0 | 0.008019 | 0.272228 | 2,913 | 92 | 80 | 31.663043 | 0.843868 | 0.415036 | 0 | 0 | 0 | 0 | 0.007687 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.085714 | 0 | 0.371429 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f34a82741807c790a71d945cf37f449008072a15 | 4,963 | py | Python | main.py | Nircek/telnet-communicator-server | 24abe728879225e4a1fa75ef17056f9de38088fb | [
"MIT"
] | null | null | null | main.py | Nircek/telnet-communicator-server | 24abe728879225e4a1fa75ef17056f9de38088fb | [
"MIT"
] | 6 | 2019-04-22T14:08:04.000Z | 2019-06-28T09:10:44.000Z | main.py | Nircek/telnet-communicator-server | 24abe728879225e4a1fa75ef17056f9de38088fb | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# MIT License
# Copyright (c) 2019 Nircek
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from socket import socket, AF_INET, SOCK_STREAM, SHUT_RD, SOL_SOCKET, SO_REUSEADDR
from threading import Thread, Lock
from sys import argv
class TelnetListener(Thread):
def __init__(self, port, server):
super().__init__()
self.port, self.server, self.down = port, server, False
def run(self):
try:
self.socket = socket(AF_INET, SOCK_STREAM)
            self.socket.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)  # allow quick rebinding after a restart (avoid "address already in use")
self.socket.bind(('', self.port))
self.socket.listen()
while True:
self.server.new(*self.socket.accept())
except:
if self.down:
pass
else:
raise
def stop(self):
self.down = True
self.socket.shutdown(SHUT_RD)
self.socket.close()
class TelnetUserConnector(Thread):
def __init__(self, server, socket, addr, port):
super().__init__()
self.socket, self.addr = socket, addr
self.port, self.server = port, server
self.down, self.destroyed = False, False
self.nick = None
    def destroy(self):
        if not self.destroyed:
            self.destroyed = True
            self.server.remove(self)
            if self.nick is not None:  # the client may drop before choosing a nick
                self.server.broadcast(self.nick + ' left server.\n'.encode())
def send(self, msg):
try:
self.socket.sendall(msg)
except BrokenPipeError:
self.destroy()
def run(self):
try:
self.down = False
self.send('Type your nick: '.encode())
            d = self.socket.recv(32)
            if d and d[-1] == 0x0a:  # strip trailing \n
                d = d[:-1]
            if d and d[-1] == 0x0d:  # strip trailing \r
                d = d[:-1]
            self.nick = d
self.server.broadcast(self.nick + ' joined server.\n'.encode())
while True:
d = self.socket.recv(2**10)
if self.down:
break
if d:
self.server.broadcast(self.nick + ': '.encode()+d)
else:
self.destroy()
except ConnectionResetError:
self.destroy()
except:
if self.down:
pass
else:
raise
def stop(self):
self.down = True
try:
self.socket.shutdown(SHUT_RD)
except OSError:
pass
self.socket.close()
class TelnetServer:
def __init__(self, ports):
self.ports = ports
self.threads = [TelnetListener(x, self) for x in ports]
self.clients = []
self.clients_lock = Lock()
def new(self, socket, addr):
with self.clients_lock:
t = TelnetUserConnector(self, socket, *addr)
self.clients += [t]
t.start()
def remove(self, x):
x.stop()
with self.clients_lock:
self.clients.remove(x)
def start(self):
for t in self.threads:
t.start()
def broadcast(self, msg):
print(msg)
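        # only clients that have finished the nick prompt receive broadcasts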
for c in filter(lambda x: x.nick is not None, self.clients):
c.send(msg)
def stop(self):
self.broadcast('Server is going down...\n'.encode())
for t in self.threads:
t.stop()
for c in self.clients:
c.stop()
def block(self):
for e in self.threads+self.clients:
e.join()
if __name__ == '__main__':
ports = []
for e in argv[1:]:
try:
ports += [int(e)]
except:
pass
if not ports:
ports = [ 23 ]
print('Ports:', ports if ports[1:] else ports[0])
server = TelnetServer(ports)
server.start()
try:
server.block()
except KeyboardInterrupt:
print('Interrupting... ')
finally:
server.stop()
| 32.437908 | 85 | 0.574048 | 613 | 4,963 | 4.579119 | 0.319739 | 0.053438 | 0.011756 | 0.016031 | 0.124332 | 0.069825 | 0.03705 | 0.03705 | 0.03705 | 0.03705 | 0 | 0.007173 | 0.325811 | 4,963 | 152 | 86 | 32.651316 | 0.831739 | 0.222849 | 0 | 0.382114 | 0 | 0 | 0.027372 | 0 | 0 | 0 | 0.002086 | 0 | 0 | 1 | 0.121951 | false | 0.03252 | 0.02439 | 0 | 0.170732 | 0.02439 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f34b3398d34130125064588eeea3a67c4b10f9ab | 1,082 | py | Python | apps/accounts/events.py | goztrk/django-htk | c56bf112e5d627780d2f4288460eae5cce80fa9e | [
"MIT"
] | 206 | 2015-10-15T07:05:08.000Z | 2021-02-19T11:48:36.000Z | apps/accounts/events.py | goztrk/django-htk | c56bf112e5d627780d2f4288460eae5cce80fa9e | [
"MIT"
] | 8 | 2017-10-16T10:18:31.000Z | 2022-03-09T14:24:27.000Z | apps/accounts/events.py | goztrk/django-htk | c56bf112e5d627780d2f4288460eae5cce80fa9e | [
"MIT"
] | 61 | 2015-10-15T08:12:44.000Z | 2022-03-10T12:25:06.000Z | # Python Standard Library Imports
# Third Party (PyPI) Imports
import rollbar
# HTK Imports
from htk.utils import htk_setting
from htk.utils.notifications import slack_notify
def failed_recaptcha_on_login(user, request=None):
extra_data = {
'user' : {
'id': user.id,
'username': user.username,
'email': user.email,
},
}
message = 'Failed reCAPTCHA. Suspicious login detected.'
rollbar.report_message(
message,
request=request,
extra_data=extra_data
)
if htk_setting('HTK_SLACK_NOTIFICATIONS_ENABLED'):
slack_message = '%s User: %s <%s>' % (
message,
user.username,
user.email,
)
slack_notify(slack_message, level='warning')
def failed_recaptcha_on_account_register(request=None):
message = 'Failed reCAPTCHA. Suspicious account registration detected.'
rollbar.report_message(message, request=request)
if htk_setting('HTK_SLACK_NOTIFICATIONS_ENABLED'):
slack_notify(message, level='warning')
| 24.590909 | 75 | 0.651571 | 118 | 1,082 | 5.754237 | 0.347458 | 0.088365 | 0.035346 | 0.05891 | 0.276878 | 0.276878 | 0.276878 | 0.132548 | 0 | 0 | 0 | 0 | 0.254159 | 1,082 | 43 | 76 | 25.162791 | 0.841388 | 0.064695 | 0 | 0.137931 | 0 | 0 | 0.212302 | 0.061508 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.103448 | 0 | 0.172414 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f34bef219deb20fe6f67ee4c3842b697e7cda3a6 | 501 | py | Python | Exercícios/ex042.py | JefterV/Cursoemvideo.py | e65ac53a4e38793be3039d360e7127e1c5d51030 | [
"MIT"
] | 3 | 2020-11-24T17:20:34.000Z | 2020-12-03T01:19:31.000Z | Exercícios/ex042.py | JefterV/Cursoemvideo.py | e65ac53a4e38793be3039d360e7127e1c5d51030 | [
"MIT"
] | null | null | null | Exercícios/ex042.py | JefterV/Cursoemvideo.py | e65ac53a4e38793be3039d360e7127e1c5d51030 | [
"MIT"
] | 1 | 2021-01-03T00:48:48.000Z | 2021-01-03T00:48:48.000Z | import playsound
r1 = int(input('Segment one: '))
r2 = int(input('Segment two: '))
r3 = int(input('Segment three: '))
if r1 < r2 + r3 and r2 < r1 + r3 and r3 < r1 + r2:
    print('The segments above CAN form a triangle:', end=' ')
    if r1 == r2 and r2 == r3:
        playsound.playsound('rllx.mp3', True)  # 'tr' was undefined; playsound's second arg is block
        print('EQUILATERAL')
    elif r1 != r2 != r3 != r1:
        print('SCALENE')
    else:
        print('ISOSCELES')
else:
    print('The segments above CANNOT form a triangle') | 33.4 | 67 | 0.59481 | 71 | 501 | 4.197183 | 0.422535 | 0.053691 | 0.161074 | 0.14094 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056 | 0.251497 | 501 | 15 | 68 | 33.4 | 0.738667 | 0 | 0 | 0.133333 | 0 | 0 | 0.340637 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.066667 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
f34c12ad843b85b4615c748246bcb5d45ef5266f | 9,584 | py | Python | data_extraction/updatedb.py | amajee11us/driving-data-collection-reference-kit | 92eb839e55c0f0c65992b62aa71c3f63ecad4925 | [
"BSD-3-Clause"
] | 10 | 2019-12-04T06:30:03.000Z | 2022-01-04T23:09:14.000Z | data_extraction/updatedb.py | amajee11us/driving-data-collection-reference-kit | 92eb839e55c0f0c65992b62aa71c3f63ecad4925 | [
"BSD-3-Clause"
] | 6 | 2019-01-24T05:39:52.000Z | 2021-03-16T05:25:00.000Z | data_extraction/updatedb.py | amajee11us/driving-data-collection-reference-kit | 92eb839e55c0f0c65992b62aa71c3f63ecad4925 | [
"BSD-3-Clause"
] | 8 | 2019-02-11T03:11:56.000Z | 2021-08-18T08:00:51.000Z | #!/usr/bin/env python
'''
Updates db will all info.json files found in the provided dir path.
This is gnerally used to update db periodically from all info.json files available in dataset
Params:
dataset_path - dir containing rosbag file to search for corresponding info.json files
Copyright (C) 2019 Intel Corporation
SPDX-License-Identifier: BSD-3-Clause
'''
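# Example invocations (paths are illustrative; the actions and flags match
# the argparse setup in main() below):
#   python updatedb.py all -d raw_dataset/
#   python updatedb.py single -i raw_dataset/intel/bangalore/2018_01_11/2018-01-11_15-40-44/info.json
#   python updatedb.py find -f 2018-01-11_15-40-44
#   python updatedb.py flush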
# file handling
import os
import fnmatch
import json # handle json data
from mongoengine import * # handle mongodb
import humanfriendly # convert time and number to human readable
from tqdm import tqdm # progress bar
# if called as standalone script, add top dir to syspath to import our modules
if __name__ == "__main__":
import sys
repoPath = os.path.dirname(os.path.realpath(__file__))
sys.path.append(repoPath)
# else, we presume the modules are organized and available
from model.RawBagfile import RawBagfile # MongoDb model for rosbag
from model.setupdb import setupdb # to setup mongodb connection
from utils.md5hash import genMd5 # util to generate MD5 hash
from utils import Logger # Logging utils
__author__ = "Nitheesh K L"
__copyright__ = "Copyright (C) 2019, Intel Corporation"
__credits__ = ["Nitheesh K L"]
__license__ = "BSD-3-Clause"
__version__ = "1.0.0"
__maintainer__ = "Nitheesh K L"
__email__ = "nitheesh.k.l@intel.com"
__status__ = "Dev"
def update_all_info(dataset_path):
    # find all rosbag (*.bag) files in the given path; the matching
    # info.json path for each bag is derived below
    Logger.debug('fetching rosbag files in ' + dataset_path)
matches = []
for root,dirnames,filenames in os.walk(dataset_path):
for filename in fnmatch.filter(filenames, '*.bag'):
matches.append(os.path.join(root, filename))
Logger.debug("")
Logger.debug(os.path.join(root, filename))
Logger.debug("")
Logger.debug("found " + str(len(matches)) + " info.json files")
# connect to wwe database
Logger.debug("connecting to database...")
setupdb()
# update db from each info.json
for bagfile in tqdm(matches, unit="file", desc="updating db"):
infofile = bagfile.replace("rosbags", "raw", 1)
infofile = infofile.replace(".bag", "/info.json", 1)
if not os.path.exists(infofile):
Logger.warn("info file not found - " + infofile)
continue
baginfo = json.load(open(infofile))
# split infofile path. ex: wwe/raw/intel/<...>/2018_01_11/2018-01-11_15-40-44/info.json
pathinfo = infofile.split("/")
vendor = pathinfo[1]
cdate = pathinfo[-3]
size = humanfriendly.format_size(baginfo['size'])
duration = humanfriendly.format_timespan(baginfo['duration'])
name = baginfo['path'].split("/")[-1]
name = name[:10] + "_" + name[11:]
dist = "0"
if 'distance' in baginfo and 'dist' in baginfo['distance']:
dist = "{0:.5f} ".format(baginfo['distance']['dist']) + baginfo['distance']['unit']
Logger.debug("adding distance: " + dist)
# add relevant locations as necessary
# TODO: pickup locations from a config file instead of manual checks
if "bangalore" in infofile:
loc = "bangalore"
elif "telangana" in infofile:
loc = "hyderabad"
elif "hyderabad" in infofile:
loc = "hyderabad"
else:
loc = "unknown"
rawbagfile = RawBagfile(
key = genMd5(infofile),
vendor = vendor,
hsize = size,
hduration = duration,
filename = name,
location = loc,
capturedate = cdate,
path = infofile,
distance = dist,
info = baginfo
)
duplicateFound = False
        # remove existing entries with the same key or filename
        for existing in RawBagfile.objects(key=rawbagfile.key):
            Logger.warn("found entry with duplicate key - " + existing.key)
            duplicateFound = True
            existing.delete()
        for existing in RawBagfile.objects(filename=rawbagfile.filename):
            Logger.warn("found entry with duplicate filename - " + existing.filename)
            duplicateFound = True
            existing.delete()
        # save the new info (duplicates were already removed above, so this
        # effectively replaces them)
        # if not duplicateFound:
        #     Logger.debug("updating db with new info...")
rawbagfile.save()
'''
Update db with info from the provided info.json file
Params:
infofile - info.json file whose data has to be inserted into db
'''
def update_single_info(infofile):
Logger.debug('reading info from ' + infofile)
with open(infofile, 'r') as f:
baginfo = json.load(f)
# connect to wwe database
Logger.debug("connecting to database...")
setupdb()
# split infofile path. ex: raw_dataset/intel/bangalore/2018_01_11/2018-01-11_15-40-44/info.json
pathinfo = infofile.split("/")
vendor = pathinfo[1]
cdate = pathinfo[-3]
size = humanfriendly.format_size(baginfo['size'])
duration = humanfriendly.format_timespan(baginfo['duration'])
name = baginfo['path'].split("/")[-1]
name = name[:10] + "_" + name[11:]
dist = "0"
if 'distance' in baginfo and 'dist' in baginfo['distance']:
dist = "{0:.5f} ".format(baginfo['distance']['dist']) + baginfo['distance']['unit']
Logger.debug("adding distance: " + dist)
# add relevant locations as necessary
# TODO: pickup locations from a config file instead of manual checks
if "bangalore" in infofile:
loc = "bangalore"
elif "telangana" in infofile:
loc = "hyderabad"
elif "hyderabad" in infofile:
loc = "hyderabad"
else:
loc = "unknown"
rawbagfile = RawBagfile(
key = genMd5(infofile),
vendor = vendor,
hsize = size,
hduration = duration,
filename = name,
location = loc,
capturedate = cdate,
path = infofile,
distance = dist,
info = baginfo
)
duplicateFound = False
    # remove existing entries with the same key or filename
    for existing in RawBagfile.objects(key=rawbagfile.key):
        Logger.warn("found entry with duplicate key - " + existing.key)
        duplicateFound = True
        existing.delete()
    for existing in RawBagfile.objects(filename=rawbagfile.filename):
        Logger.warn("found entry with duplicate filename - " + existing.filename)
        duplicateFound = True
        existing.delete()
    # save the new info (duplicates were already removed above, so this
    # effectively replaces them)
    # if not duplicateFound:
    #     Logger.debug("updating db with new info...")
rawbagfile.save()
'''
Deletes all RawBagfile entries from DB
'''
def flushdb():
# connecting to wwe database
Logger.debug("connecting to database...")
setupdb()
Logger.debug("proceeding to delete entires...")
for bagfile in RawBagfile.objects():
Logger.debug("deleting " + bagfile.filename)
bagfile.delete()
Logger.debug("finished!")
'''
Find database entries of records having the provided filename
Params:
name - Filename/bagname to search in db
'''
def findFilename(name):
# connecting to wwe database
Logger.debug("connecting to database...")
setupdb()
for bagfile in RawBagfile.objects(filename=name):
print("filename = " + bagfile.filename)
print("key = " + bagfile.key)
print("vendor = " + bagfile.vendor)
print("hsize = " + bagfile.hsize)
print("hduration = " + bagfile.hduration)
print("location = " + bagfile.location)
print("capturedate = " + bagfile.capturedate)
print("path = " + bagfile.path)
print("distance = " + bagfile.distance)
def main(argv):
parser = argparse.ArgumentParser(description="Update wwe db with rosbag info files", formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument('action', help='\
action to perform.[all|single]\n\
all - to updated db with all info.json files found recursively in the given path.\n\
single - to update db with a single info json file.\n\
flush - removes all entries from the db.\n\
find - find and print details of the entry in db containing the provided filename.')
parser.add_argument('-d', '--dataset_path', help='path to wwe raw dataset dir to search for info.json files')
parser.add_argument('-i', '--info', help='info.json file whose content has to be added to the existing db')
parser.add_argument('-f', '--filename', help='filename to search in db')
parser.add_argument('-v', '--verbose', action='store_true', help='enable verbose outputs')
args = parser.parse_args()
# Initialized logger
Logger.init(level=Logger.LEVEL_INFO, name="updateDB")
if args.verbose:
Logger.setLevel(Logger.LEVEL_DEBUG)
if args.action == 'all':
if args.dataset_path is not None:
update_all_info(args.dataset_path)
else:
Logger.error("Dataset path not provided!")
return
elif args.action == 'single':
if args.info is not None:
update_single_info(args.info)
else:
Logger.error("info.json file not provided to update db!")
return
elif args.action == "flush":
flushdb()
elif args.action == "find":
if args.filename is not None:
findFilename(args.filename)
else:
Logger.error("filename not provided to search db")
return
else:
Logger.error("unknown action - " + args.action)
if __name__ == "__main__":
import argparse
main(sys.argv) | 36.030075 | 135 | 0.632095 | 1,144 | 9,584 | 5.216783 | 0.229895 | 0.022788 | 0.017426 | 0.022118 | 0.443029 | 0.421079 | 0.414879 | 0.414879 | 0.414879 | 0.400134 | 0 | 0.011809 | 0.257826 | 9,584 | 266 | 136 | 36.030075 | 0.827218 | 0.156928 | 0 | 0.52356 | 0 | 0 | 0.175132 | 0.002843 | 0 | 0 | 0 | 0.007519 | 0 | 1 | 0.026178 | false | 0 | 0.062827 | 0 | 0.104712 | 0.052356 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f34e8ae9da4353d89753677155c3922a13e63d3a | 336 | py | Python | TPs/TP7/compute_pi.py | Aympab/BigDataHadoopSparkDaskCourse | 42f9e0475cbd7c5db240ccc6dc00c19b9006012a | [
"Apache-2.0"
] | null | null | null | TPs/TP7/compute_pi.py | Aympab/BigDataHadoopSparkDaskCourse | 42f9e0475cbd7c5db240ccc6dc00c19b9006012a | [
"Apache-2.0"
] | null | null | null | TPs/TP7/compute_pi.py | Aympab/BigDataHadoopSparkDaskCourse | 42f9e0475cbd7c5db240ccc6dc00c19b9006012a | [
"Apache-2.0"
] | 1 | 2022-01-31T17:14:27.000Z | 2022-01-31T17:14:27.000Z | import findspark
findspark.init()
import pyspark
import random
sc = pyspark.SparkContext(appName="Pi")
num_samples = 100000000
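# Monte Carlo estimate: a uniform point in [0,1)^2 falls inside the unit
# quarter-circle with probability pi/4, so pi ~= 4 * count / num_samples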
def inside(p):
x, y = random.random(), random.random()
return x*x + y*y < 1
count = sc.parallelize(range(0, num_samples)).filter(inside).count()
pi = 4 * count / num_samples
print(pi)
sc.stop()
| 16.8 | 68 | 0.696429 | 50 | 336 | 4.62 | 0.54 | 0.12987 | 0.155844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042553 | 0.160714 | 336 | 19 | 69 | 17.684211 | 0.776596 | 0 | 0 | 0 | 0 | 0 | 0.005952 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.230769 | 0 | 0.384615 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f34f25a84c269a3d04e4b16d8fc56c5a7bc7f675 | 4,296 | py | Python | utils.py | bmack/example-repository-offline | 2cf9c2c26ef35b60d669d863b8346b8b6213f584 | [
"MIT"
] | null | null | null | utils.py | bmack/example-repository-offline | 2cf9c2c26ef35b60d669d863b8346b8b6213f584 | [
"MIT"
] | null | null | null | utils.py | bmack/example-repository-offline | 2cf9c2c26ef35b60d669d863b8346b8b6213f584 | [
"MIT"
] | null | null | null | # common utilities for other scripts
from tuf import repository_tool as rt
import os
import shutil
# shorthand to create keypairs
def write_and_import_keypair(keystorefolder, filename):
pathpriv = keystorefolder + '/{}_key'.format(filename)
pathpub = '{}.pub'.format(pathpriv)
rt.generate_and_write_ed25519_keypair(password='pw', filepath=pathpriv)
public_key = rt.import_ed25519_publickey_from_file(pathpub)
private_key = rt.import_ed25519_privatekey_from_file(password='pw', filepath=pathpriv)
return (public_key, private_key)
# loads keys from the files, should be used whenever someone wants to interact with the
# the repository (e.g. adding a new target)
def loadkey(keystorefolder, filename):
pathpriv = keystorefolder + '/{}_key'.format(filename)
pathpub = '{}.pub'.format(pathpriv)
public_key = rt.import_ed25519_publickey_from_file(pathpub)
private_key = rt.import_ed25519_privatekey_from_file(password='pw', filepath=pathpriv)
return (public_key, private_key)
# shorthand to create full repo with all keys, only do this once
def create_repo(basefolder, keystorefolder, reponame):
if not os.path.isdir(basefolder):
os.mkdir(basefolder)
os.chdir(basefolder)
(public_root_key, private_root_key) = write_and_import_keypair(keystorefolder, 'root')
(public_targets_key, private_targets_key) = write_and_import_keypair(keystorefolder, 'targets')
(public_snapshots_key, private_snapshots_key) = write_and_import_keypair(keystorefolder, 'snapshot')
(public_timestamps_key, private_timestamps_key) = write_and_import_keypair(keystorefolder, 'timestamp')
# Bootstrap Repository
repository = rt.create_new_repository(reponame, basefolder)
repository.root.add_verification_key(public_root_key)
repository.root.load_signing_key(private_root_key)
# Add additional roles
repository.targets.add_verification_key(public_targets_key)
repository.targets.load_signing_key(private_targets_key)
repository.snapshot.add_verification_key(public_snapshots_key)
repository.snapshot.load_signing_key(private_snapshots_key)
repository.timestamp.add_verification_key(public_timestamps_key)
repository.timestamp.load_signing_key(private_timestamps_key)
repository.status()
# Make it happen (consistently)
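    # consistent_snapshot=True writes version-prefixed metadata filenames so
    # clients can keep fetching a coherent snapshot while the repo is updated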
repository.mark_dirty(['root', 'snapshot', 'targets', 'timestamp'])
repository.writeall(consistent_snapshot=True)
def load_repo(basefolder, reponame):
os.chdir(basefolder)
repository = rt.load_repository(reponame)
return repository
def load_signing_keys_into_repo(repository, keystorefolder):
(public_root_key, private_root_key) = loadkey(keystorefolder, 'root')
(public_targets_key, private_targets_key) = loadkey(keystorefolder, 'targets')
(public_snapshots_key, private_snapshots_key) = loadkey(keystorefolder, 'snapshot')
(public_timestamps_key, private_timestamps_key) = loadkey(keystorefolder, 'timestamp')
#repository.root.add_verification_key(public_root_key)
repository.root.load_signing_key(private_root_key)
# Add additional roles
#repository.targets.add_verification_key(public_targets_key)
repository.targets.load_signing_key(private_targets_key)
#repository.snapshot.add_verification_key(public_snapshots_key)
repository.snapshot.load_signing_key(private_snapshots_key)
#repository.timestamp.add_verification_key(public_timestamps_key)
repository.timestamp.load_signing_key(private_timestamps_key)
def add_target(repository, target, absolute_source, absolute_target):
repository.status()
    # copy the source file into the targets folder, since TUF expects the
    # file to already be present before add_targets() is called
os.makedirs(os.path.dirname(absolute_target), exist_ok=True)
shutil.copyfile(absolute_source, absolute_target)
repository.targets.add_targets([target])
repository.mark_dirty(['snapshot', 'targets', 'timestamp'])
repository.writeall(consistent_snapshot=True)
def remove_target(repository, target, absolute_target):
repository.status()
repository.targets.remove_target(target)
os.remove(absolute_target)
repository.mark_dirty(['snapshot', 'targets', 'timestamp'])
repository.writeall(consistent_snapshot=True)
| 47.208791 | 107 | 0.79027 | 527 | 4,296 | 6.119545 | 0.212524 | 0.055814 | 0.044651 | 0.059535 | 0.662016 | 0.631938 | 0.584806 | 0.584806 | 0.47938 | 0.457674 | 0 | 0.006621 | 0.121043 | 4,296 | 90 | 108 | 47.733333 | 0.847458 | 0.166899 | 0 | 0.451613 | 0 | 0 | 0.046029 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.112903 | false | 0.048387 | 0.193548 | 0 | 0.354839 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f34f5a1bf3d1e8af420a24b49d952e59caad3230 | 1,186 | py | Python | utils/util.py | xzhang2016/tfagent | 433df751f0c5cbe3d730d8e912a05a2430dd165b | [
"BSD-2-Clause"
] | null | null | null | utils/util.py | xzhang2016/tfagent | 433df751f0c5cbe3d730d8e912a05a2430dd165b | [
"BSD-2-Clause"
] | 1 | 2020-06-11T17:03:22.000Z | 2020-06-11T17:03:22.000Z | utils/util.py | xzhang2016/tfagent | 433df751f0c5cbe3d730d8e912a05a2430dd165b | [
"BSD-2-Clause"
] | 3 | 2017-04-19T15:38:31.000Z | 2019-05-07T21:18:52.000Z | import urllib.request
import os
def merge_dict_sum(dict1, dict2):
"""
Merge two dictionaries and add values of common keys.
Values of the input dicts can be any addable objects, like numeric, str, list.
"""
dict3 = {**dict1, **dict2}
for key, value in dict3.items():
if key in dict1 and key in dict2:
dict3[key] = value + dict1[key]
return dict3
def merge_dict_list(dict1, dict2):
"""
Merge two dictionaries and merge values of common keys to a list.
"""
dict3 = {**dict1, **dict2}
for key, value in dict3.items():
if key in dict1 and key in dict2:
dict3[key] = [value, dict1[key]]
return dict3
def download_file_dropbox(url, fout_name):
"""
Download file from dropbox
"""
u = urllib.request.urlopen(url)
data = u.read()
u.close()
#save
with open(fout_name, 'wb') as fw:
fw.write(data)
def make_folder(data_folder):
if not os.path.isfile(data_folder):
# Emulate mkdir -p (no error if folder exists)
try:
os.mkdir(data_folder)
return True
except Exception:
return False
| 26.355556 | 82 | 0.598651 | 163 | 1,186 | 4.282209 | 0.460123 | 0.057307 | 0.034384 | 0.051576 | 0.386819 | 0.386819 | 0.292264 | 0.292264 | 0.292264 | 0.292264 | 0 | 0.02657 | 0.301855 | 1,186 | 44 | 83 | 26.954545 | 0.816425 | 0.231872 | 0 | 0.296296 | 0 | 0 | 0.002326 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0 | 0.074074 | 0 | 0.37037 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f350076941ab65f9dd7c2c16db5664edfd92e574 | 1,096 | py | Python | qnn/input/havlicek_data_handler.py | bjader/quantum-neural-network | 3f23e14fac8700d3f48593f0727c6da59af5f77f | [
"MIT"
] | 9 | 2021-06-08T14:02:38.000Z | 2022-03-08T10:14:22.000Z | qnn/input/havlicek_data_handler.py | bjader/quantum-neural-network | 3f23e14fac8700d3f48593f0727c6da59af5f77f | [
"MIT"
] | null | null | null | qnn/input/havlicek_data_handler.py | bjader/quantum-neural-network | 3f23e14fac8700d3f48593f0727c6da59af5f77f | [
"MIT"
] | 1 | 2021-06-12T16:28:53.000Z | 2021-06-12T16:28:53.000Z | import numpy as np
from qiskit import QuantumRegister, QuantumCircuit
from qiskit.circuit import Parameter
from input.data_handler import DataHandler
class HavlicekDataHandler(DataHandler):
"""
Data encoding based on Havlicek et al. Nature 567, pp209–212 (2019). For quantum circuit diagram see Fig. 4 in
arXiv:2011.00027.
"""
def __init__(self):
super().__init__()
def get_quantum_circuit(self, input_data):
self.qr = QuantumRegister(len(input_data))
self.qc = QuantumCircuit(self.qr)
num_qubits = len(input_data)
param_list = []
for index in range(num_qubits):
self.qc.h(self.qr[index])
            param = Parameter("input{}".format(index))
param_list.append(param)
self.qc.rz(param, self.qr[index])
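        # pairwise RZZ rotations with angle (x_i - pi/2)(x_j - pi/2) add the
        # entangling second-order terms of the ZZ feature map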
for i in range(num_qubits - 1):
for j in range(i + 1, num_qubits):
param_i = param_list[i]
param_j = param_list[j]
self.qc.rzz((param_i - np.pi / 2) * (param_j - np.pi / 2), i, j)
return self.qc
| 29.621622 | 114 | 0.611314 | 149 | 1,096 | 4.328859 | 0.42953 | 0.046512 | 0.04031 | 0.049612 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034308 | 0.281934 | 1,096 | 36 | 115 | 30.444444 | 0.78399 | 0.116788 | 0 | 0 | 0 | 0 | 0.007384 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.173913 | 0 | 0.347826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f350d58a0b541e697505f5ba96c2ceb95944d171 | 1,951 | py | Python | prep_cars_data.py | ppik/loxodon | c9d3148ec70f281ba28b3b39e1d843db2fd9a3ac | [
"MIT"
] | null | null | null | prep_cars_data.py | ppik/loxodon | c9d3148ec70f281ba28b3b39e1d843db2fd9a3ac | [
"MIT"
] | 1 | 2018-01-18T09:04:47.000Z | 2018-01-18T14:29:13.000Z | prep_cars_data.py | ppik/loxodon | c9d3148ec70f281ba28b3b39e1d843db2fd9a3ac | [
"MIT"
] | 1 | 2018-01-17T14:14:52.000Z | 2018-01-17T14:14:52.000Z | #!/usr/bin/env python
import os
from os.path import basename, dirname, exists
from glob import glob
from random import seed, sample
from math import ceil
import shutil
from scipy.io import loadmat
DATA_PATH = 'data/'
VALID_RATIO = 0.2
seed(20171111)
info = loadmat(DATA_PATH + 'cars_annos.mat')
class_names = []
for name in info['class_names'].squeeze():
parts = name[0].lower().split()
class_name = parts[0]
if parts[1] in {'general', 'karma', 'martin', 'rover'}:
class_name += '_' + parts[1]
class_names.append(class_name)
# 49 different classes
## Create training set
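# os.renames creates the intermediate class folders as needed and prunes
# directories left empty by the move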
for item in info['annotations'].squeeze():
image = DATA_PATH + item[0][0]
make = class_names[item[5][0][0] - 1] # Matlab uses 1-based indexing
dest = DATA_PATH + 'train/' + make + '/' + basename(image)
if exists(image):
os.renames(image, dest)
## Create validation set
makes = glob(DATA_PATH + 'train/*')
for make in makes:
images = glob(make + '/*')
data_size = len(images)
validation_size = ceil(data_size*VALID_RATIO)
for image in sample(images, validation_size):
os.renames(image, image.replace('train/', 'valid/', 1))
## Train for only certain models
keep = {'volkswagen', 'bmw', 'opel', 'mercedes-benz', 'audi', 'renault', 'peugeot', 'ford', 'volvo', 'citroön', 'seat', 'toyota'}
for folder in glob(DATA_PATH + 'train/*'):
make = os.path.basename(folder)
if make not in keep:
os.renames(folder, DATA_PATH + 'hold/train/' + make)
for folder in glob(DATA_PATH + 'valid/*'):
make = os.path.basename(folder)
if make not in keep:
os.renames(folder, DATA_PATH + 'hold/valid/' + make)
## Rotate images by 90 degrees
from PIL import Image
files = glob(DATA_PATH + 'train/*/*')
for file in files:
im = Image.open(file)
im = im.transpose(Image.ROTATE_90)
im = im.convert('RGB')
    im.save(dirname(file) + '/r' + basename(file))  # dirname(file) already includes DATA_PATH
| 24.08642 | 129 | 0.650436 | 280 | 1,951 | 4.435714 | 0.375 | 0.070853 | 0.041868 | 0.041063 | 0.175523 | 0.143317 | 0.10628 | 0.10628 | 0.10628 | 0.10628 | 0 | 0.017869 | 0.196822 | 1,951 | 80 | 130 | 24.3875 | 0.774729 | 0.087135 | 0 | 0.085106 | 0 | 0 | 0.122599 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.170213 | 0 | 0.170213 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f3510b1c13e2b0996f72d7df7b7e4d5ce6187252 | 4,318 | py | Python | LAB04/02-CloudAlbum-Chalice/cloudalbum/tests/test_photos.py | liks79/moving-to-serverless-renew | 2f173071ab387654d4cc851a0b39130613906378 | [
"MIT"
] | 6 | 2019-08-21T04:13:34.000Z | 2019-10-29T07:15:39.000Z | LAB04/02-CloudAlbum-Chalice/cloudalbum/tests/test_photos.py | liks79/moving-to-serverless-renew | 2f173071ab387654d4cc851a0b39130613906378 | [
"MIT"
] | 89 | 2019-07-31T02:29:54.000Z | 2022-03-12T01:03:22.000Z | LAB04/02-CloudAlbum-Chalice/cloudalbum/tests/test_photos.py | michaelrishiforrester/moving-to-serverless-renew | 27cbcbde9db3d2bc66212fe4f768563d25f64c19 | [
"MIT"
] | 4 | 2019-08-02T03:00:35.000Z | 2020-02-26T18:44:03.000Z | """
cloudalbum/tests/test_photos.py
~~~~~~~~~~~~~~~~~~~~~~~
Test cases for photos REST API
:description: CloudAlbum is a fully featured sample application for 'Moving to AWS serverless' training course
:copyright: © 2019 written by Dayoungle Jun, Sungshik Jou.
:license: MIT, see LICENSE for more details.
"""
import boto3
import pytest
import unittest
import base64
from app import app
from chalice.config import Config
from chalice.local import LocalGateway
from tests.base import BaseTestCase, user as existed_user
from tests.multipart import MultipartFormdataEncoder
from chalicelib import cognito
from chalicelib.model_ddb import Photo
upload = dict(
tags='ITA, Venezia, SONY , DSLR-A300, 2048 x 1371',
filename_orig='test_image.jpg',
desc='TEST',
make='SONY',
model='DSLR-A300',
width='2048',
height='1371',
geotag_lat='45.43472222222222',
geotag_lng='12.346736111111111',
taken_date='2012:07:15 09:46:46',
city='Venezia',
nation='ITA',
address='Badoer Gritti, Campo Bandiera e Moro o de la Bragora 3608, 30122, Venezia, ITA',
)
class TestPhotoService(BaseTestCase):
"""Tests for the Photo Service."""
@pytest.fixture(autouse=True)
def gateway_factory(self):
config = Config()
self.gateway = LocalGateway(app, config)
@pytest.fixture(autouse=True)
def create_token_and_header(self):
try:
client = boto3.client('cognito-idp')
dig = cognito.generate_digest(existed_user)
cognito.signup(client, existed_user, dig)
auth = cognito.generate_auth(existed_user)
body = cognito.generate_token(client, auth, existed_user)
self.access_token = body['accessToken']
except client.exceptions.UsernameExistsException as e:
# Do nothing
pass
finally:
auth = cognito.generate_auth(existed_user)
body = cognito.generate_token(client, auth, existed_user)
self.access_token = body['accessToken']
@pytest.fixture(autouse=True)
def multipart_encode(self):
with open('test_image.jpg', 'rb') as file:
base64_image = base64.b64encode(file.read())
upload['base64_image'] = base64_image
fields = [(k, v) for k, v in upload.items()]
files = [('file', 'test_image.jpg', file)]
self.multipart_content_type, self.multipart_body = MultipartFormdataEncoder().encode(fields, files)
def test_list(self):
"""Ensure the /photos/ route behaves correctly."""
response = self.gateway.handle_request(
method='GET',
path='/photos/',
headers={'Content-Type': 'application/json',
'Authorization': 'Bearer {0}'.format(self.access_token)},
body=None)
self.assertEqual(response['statusCode'], 200)
def test_upload(self):
"""Ensure the /photos/file behaves correctly."""
response = self.gateway.handle_request(
method='POST',
path='/photos/file',
headers={'Content-Type': self.multipart_content_type,
'Authorization': 'Bearer {0}'.format(self.access_token)},
body=self.multipart_body)
self.assertEqual(response['statusCode'], 200)
def test_delete(self):
"""Ensure the /photos/<photo_id> route behaves correctly."""
# 1. upload
response = self.gateway.handle_request(
method='POST',
path='/photos/file',
headers={'Content-Type': self.multipart_content_type,
'Authorization': 'Bearer {0}'.format(self.access_token)},
body=self.multipart_body)
self.assertEqual(response['statusCode'], 200)
photo_id = [item.id for item in Photo.scan(Photo.filename_orig.startswith('test_image.jpg'), limit=1)]
# 2. delete
response = self.gateway.handle_request(
method='DELETE',
path='/photos/{}'.format(photo_id[0]),
headers={'Content-Type': 'application/json',
'Authorization': 'Bearer {0}'.format(self.access_token)},
body=None)
self.assertEqual(response['statusCode'], 200)
if __name__ == '__main__':
unittest.main()
| 37.224138 | 114 | 0.630153 | 487 | 4,318 | 5.457906 | 0.386037 | 0.028969 | 0.03386 | 0.042889 | 0.394658 | 0.364184 | 0.349887 | 0.34462 | 0.318284 | 0.318284 | 0 | 0.03567 | 0.246874 | 4,318 | 115 | 115 | 37.547826 | 0.781365 | 0.117184 | 0 | 0.375 | 0 | 0 | 0.158595 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 1 | 0.068182 | false | 0.011364 | 0.125 | 0 | 0.204545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f352ff7f4b31c7040c5ea7317ed22c6aa5e813c5 | 932 | py | Python | wav_to_mp3_to_wav/post_mp3_information_retrieval.py | LiquidFun/stegowav | 89ef0b40c52c834febffeeefba30eccbb0862e29 | [
"Apache-2.0"
] | null | null | null | wav_to_mp3_to_wav/post_mp3_information_retrieval.py | LiquidFun/stegowav | 89ef0b40c52c834febffeeefba30eccbb0862e29 | [
"Apache-2.0"
] | null | null | null | wav_to_mp3_to_wav/post_mp3_information_retrieval.py | LiquidFun/stegowav | 89ef0b40c52c834febffeeefba30eccbb0862e29 | [
"Apache-2.0"
] | null | null | null | from pathlib import Path
from tempfile import TemporaryDirectory
from wav_steganography.wav_file import WAVFile
from wav_to_mp3_to_wav.analyze_flipped_bits import find_matching_audio_file, convert_to_file_format_and_back
def compare_headers(file_path):
with TemporaryDirectory() as tmp_dir:
wav_file = WAVFile(file_path)
wav_file.encode(b"ABCDEF", redundant_bits=300, repeat_data=True)
encoded_file_path = Path(tmp_dir) / "encoded_file.wav"
wav_file.write(encoded_file_path)
pre_conversion, post_conversion = convert_to_file_format_and_back(encoded_file_path, bitrate="312k")
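        # the lossy MP3 round-trip flips bits of the LSB-embedded payload, so
        # decoding the converted file is expected to fail (see the note below)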
print("=== Pre conversion ===")
print(pre_conversion.decode())
print("=== Post conversion ===")
print("This crashes no matter what when using mp3")
print(post_conversion.decode())
def main():
compare_headers(find_matching_audio_file("voice"))
if __name__ == "__main__":
main()
| 33.285714 | 108 | 0.748927 | 127 | 932 | 5.070866 | 0.448819 | 0.062112 | 0.069876 | 0.065217 | 0.080745 | 0.080745 | 0 | 0 | 0 | 0 | 0 | 0.010165 | 0.155579 | 932 | 27 | 109 | 34.518519 | 0.808132 | 0 | 0 | 0 | 0 | 0 | 0.135193 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.3 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f353b7a18dac9fe20b5fa12a9cc76abc354a1603 | 4,896 | py | Python | Tp2/ej3/FBorrosificador.py | luisemacsel/IA2 | b99e19df3cd689d1c6cb42cd83cd71d6302e89eb | [
"MIT"
] | null | null | null | Tp2/ej3/FBorrosificador.py | luisemacsel/IA2 | b99e19df3cd689d1c6cb42cd83cd71d6302e89eb | [
"MIT"
] | null | null | null | Tp2/ej3/FBorrosificador.py | luisemacsel/IA2 | b99e19df3cd689d1c6cb42cd83cd71d6302e89eb | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import numpy as np
"""
La funcion se encarga de buscar el valor de fuerza que le corresponde para ese valor f en el rango elegido
Buscavalor(max,rango,funcion):
max:valor obtenido de la funcion figual
rango: rango de la funcion de f de la FAM
funcion: es la funcion dada para el rango de f de la FAM
"""
def Buscavalor(max,rango,funcion):
fmax=[]
i=0
while(i<80001):
        x=round(funcion[i],3) # round the function to 3 decimals, like the maximum value obtained
if(max==x):
fmax.append(rango[i])
i+=1
return(fmax)
"""
DectoRad(dec)
dec: numero sexagecimal que desea convertir
Convierte un valor de sexagecimal a radian
"""
def DectoRad(dec):
rad=(dec*np.pi)/180
return(rad)
"""
vecx(x0,xfin)
x0: valor incial del intervalo
xfin: valor final del intervalo
Genera los intervalos nitidos de la funcion borrosa
"""
def vecx(x0,xfin):
x=[]
while(x0<xfin):
dt=0.0001
xn=x0+dt
x.append(round(x0,4))
x0=xn
return(x)
def hombroder(xfin,x0,xintermedio,pendiente,t):
PGTheta=[]
t=t
while(x0<xfin):
dt=0.0001
if(x0>=xintermedio):
PGTheta.append(1)
else:
nv=(1/pendiente)*t
t+=dt
PGTheta.append(nv)
xn=x0+dt
x0=xn
return(PGTheta)
def hombroizq(xfin,x0,xintermedio,pendiente,t):
NGTheta=[]
t=t
while(x0<xfin):
dt=0.0001
if(x0<=xintermedio):
NGTheta.append(1)
else:
nv=1-(1/pendiente)*t
t+=dt
NGTheta.append(nv)
xn=x0+dt
x0=xn
return(NGTheta)
def triangulo(x0,xintermedio,xfinal,dt,pendiente):
NPTheta=[]
t=0
t1=0
while(x0<xfinal):
if(x0<xintermedio):
nv=(pendiente)*t
NPTheta.append(nv)
t+=dt
elif(x0<xfinal):
nv=1-(pendiente)*t1
NPTheta.append(nv)
t1+=dt
x0+=dt
return(NPTheta)
def graficar(x,y,titulo,variable):
fig, ax = plt.subplots()
ax.plot(x, y)
ax.set(xlabel=variable, ylabel='fborrosa', title=titulo)
ax.grid()
plt.show()
"""
valormedio(x,fbor):
Entrega el valor medio de la funcion borrosa elegida
x: intervalo de la funcion borrosa
fbor: funcion borrosa
"""
def valormedio(x,fbor):
i=0
salida=True
while(salida):
if(fbor[i]==1):
vmed=x[i]
salida=False
i+=1
return(vmed)
"""
BorrosificadorSingleton(variableNitida,fbor,x):
variable nitida ejemplo : fuerza se coloca un valor con 4 sifras significativa
fbor: funcion que desea borrosificar ej: NG,NP,Z,PP O PG
x : intervalor de la funcion borrosa ej: vNG,vNP,vZ,vPP,vPG
"""
def BorrosificadorSingleton(variableNitida,fbor,x):
i=0
salida=True
while(salida):
if(variableNitida==x[i]):
A=fbor[i]
salida=False
else:
A=0
i+=1
return(round(A,3))
"""
Computation of the fuzzifier function for theta
"""
NGTheta=hombroizq(DectoRad(-30),DectoRad(-90),DectoRad(-60),DectoRad(30),0)
xNG=vecx(DectoRad(-90),DectoRad(-30))
xNP=vecx(DectoRad(-60),DectoRad(0))
NPTheta=triangulo(DectoRad(-60),DectoRad(-30),DectoRad(0),0.0001,DectoRad(110))
xZ=vecx(DectoRad(-30),DectoRad(30))
ZTheta=triangulo(DectoRad(-30),DectoRad(0),DectoRad(30),0.0001,DectoRad(110))
xPP=vecx(DectoRad(0),DectoRad(60))
PPTheta=triangulo(DectoRad(0),DectoRad(30),DectoRad(60),0.0001,DectoRad(110))
xPG=vecx(DectoRad(30),DectoRad(90))
PGTheta=hombroder(DectoRad(90),DectoRad(30),DectoRad(60),DectoRad(30),0)
#graficar(xNG,NGTheta,'NG','theta [rad]')
#graficar(xNP,NPTheta,'NP','theta [rad]')
#graficar(xZ,ZTheta,'Z','theta [rad]')
#graficar(xPP,PPTheta,'PP','theta [rad]')
#graficar(xPG,PGTheta,'PG','theta [rad]')
"""
Computation of the fuzzifier function for velocity
"""
NGVel=hombroizq(-2,-6,-4,2,0)
vNG=vecx(-6,-2)
vNP=vecx(-4,0)
NPVel=triangulo(-4,-2,0,0.0001,0.5)
vZ=vecx(-2,2)
ZVel=triangulo(-2,0,2,0.0001,0.5)
vPP=vecx(0,4)
PPVel=triangulo(0,2,4,0.0001,0.5)
vPG=vecx(2,6)
PGVel=hombroder(6,2,4,2,0)
#graficar(vNG,NGVel,'NG','w [rad/s]')
#graficar(vNP,NPVel,'NP','w [rad/s]')
#graficar(vZ,ZVel,'Z','w [rad/s]')
#graficar(vPP,PPVel,'PP','w [rad/s]')
#graficar(vPG,PGVel,'PG','w [rad/s]')
"""
Computation of the fuzzifier function for force
"""
NGF=hombroizq(-4,-12,-8,4,0)
fNG=vecx(-12,-4)
fNP=vecx(-8,0)
NPF=triangulo(-8,-4,0,0.0001,0.25)
fZ=vecx(-4,4)
ZF=triangulo(-4,0,4,0.0001,0.25)
fPP=vecx(0,8)
PPF=triangulo(0,4,8,0.0001,0.25)
fPG=vecx(4,12)
PGF=hombroder(12,4,8,4,0)
#print(len(PPF))
#graficar(fNG,NGF,'NG','f [N]')
#graficar(fNP,NPF,'NP','f [N]')
#graficar(fZ,ZF,'Z','f [N]')
#graficar(fPP,PPF,'PP','f [N]')
#graficar(fPG,PGF,'PG','f [N]')
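
# Hedged usage sketch (added for illustration, not part of the original
# script): fuzzify a crisp angular velocity of 1.0 rad/s against the PP
# membership function. The crisp value must coincide with a grid point of
# the interval (4 decimals), since BorrosificadorSingleton compares for equality.
if __name__ == "__main__":
    grado_PP = BorrosificadorSingleton(1.0, PPVel, vPP)
    print("PP membership of w = 1.0 rad/s:", grado_PP)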
| 26.181818 | 110 | 0.612337 | 764 | 4,896 | 3.924084 | 0.234293 | 0.036024 | 0.033022 | 0.024016 | 0.156104 | 0.099066 | 0.093062 | 0.03936 | 0.023349 | 0.023349 | 0 | 0.065535 | 0.217729 | 4,896 | 186 | 111 | 26.322581 | 0.717232 | 0.122345 | 0 | 0.280992 | 0 | 0 | 0.002587 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07438 | false | 0 | 0.016529 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f353cc9934915d2b7584b135ed41a08d4e931189 | 2,883 | py | Python | impl/dlsgs/transformer/derived.py | ju-kreber/Transformers-and-GANs-for-LTL-sat | 45fe14815562dd3e0d3705573ce9358bfbdc22b3 | [
"MIT"
] | null | null | null | impl/dlsgs/transformer/derived.py | ju-kreber/Transformers-and-GANs-for-LTL-sat | 45fe14815562dd3e0d3705573ce9358bfbdc22b3 | [
"MIT"
] | null | null | null | impl/dlsgs/transformer/derived.py | ju-kreber/Transformers-and-GANs-for-LTL-sat | 45fe14815562dd3e0d3705573ce9358bfbdc22b3 | [
"MIT"
] | null | null | null | # implementation based on DeepLTL https://github.com/reactive-systems/deepltl
import tensorflow as tf
import dlsgs.transformer.positional_encoding as pe
from dlsgs.transformer.base import Transformer, TransformerEncoder
from dlsgs.transformer.common import create_padding_mask
class EncoderOnlyTransformer(Transformer):
def __init__(self, params):
tf.keras.Model.__init__(self)
self.__dict__['params'] = params
self.encoder_embedding = tf.keras.layers.Embedding(params['input_vocab_size'], params['d_embed_enc'], dtype=params['dtype'])
self.encoder_positional_encoding = pe.positional_encoding(params['max_encode_length'], params['d_embed_enc'], dtype=params['dtype'])
self.encoder_dropout = tf.keras.layers.Dropout(params['dropout'])
self.encoder_stack = TransformerEncoder(params)
self.final_projection = tf.keras.layers.Dense(params['target_vocab_size'])
self.softmax = tf.keras.layers.Softmax(dtype=params['dtype'])
def get_config(self):
return {
'params': self.params
}
    def call(self, inputs, training, return_quantities=()):  # tuple default avoids a mutable default argument
"""
inputs:
indata: int tensor with shape (batch_size, input_length)
(positional_encoding: float tensor with shape (batch_size, input_length, d_embed_enc), custom postional encoding)
(target: int tensor with shape (batch_size, 1))
"""
indata = inputs['indata']
input_padding_mask = create_padding_mask(indata, self.params['input_pad_id'], self.params['dtype'])
if 'positional_encoding' in inputs:
positional_encoding = inputs['positional_encoding']
else:
seq_len = tf.shape(indata)[1]
positional_encoding = self.encoder_positional_encoding[:, :seq_len, :]
encoder_outdata = self.encode(indata, input_padding_mask, positional_encoding, training)
predictions = self.predict_(encoder_outdata, input_padding_mask, training)
returns = {}
if 'predictions' in return_quantities:
returns['predictions'] = tf.expand_dims(predictions, 1)
if 'decodings' in return_quantities:
returns['decodings'] = tf.expand_dims(tf.argmax(predictions, axis=-1), -1)
return returns
def predict_(self, encoder_outdata, input_padding_mask, training):
if self.params['enc_accumulation'] == 'first':
encoder_outdata = encoder_outdata[:, 0, :]
elif self.params['enc_accumulation'] == 'mean-before':
encoder_outdata = tf.reduce_mean(encoder_outdata, axis=1)
projected = self.final_projection(encoder_outdata)
if self.params['enc_accumulation'] == 'mean-after':
projected = tf.reduce_mean(projected, axis=1)
predictions = self.softmax(projected)
return predictions
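
# Hedged usage sketch (added for illustration, not part of the original
# module). Only the params keys referenced in this file are listed below;
# TransformerEncoder from .base will need further keys (layer count, heads,
# feed-forward size, ...), so treat these values as placeholders only.
_example_params = {
    'input_vocab_size': 32,
    'target_vocab_size': 2,       # e.g. sat / unsat
    'd_embed_enc': 64,
    'max_encode_length': 128,
    'dropout': 0.1,
    'dtype': 'float32',
    'input_pad_id': 0,
    'enc_accumulation': 'first',  # or 'mean-before' / 'mean-after'
}
# model = EncoderOnlyTransformer(_example_params)
# out = model({'indata': token_id_batch}, training=False,
#             return_quantities=['predictions', 'decodings'])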
| 44.353846 | 140 | 0.680194 | 327 | 2,883 | 5.743119 | 0.293578 | 0.095847 | 0.027689 | 0.031949 | 0.184771 | 0.138445 | 0.082002 | 0.044728 | 0.044728 | 0 | 0 | 0.00352 | 0.211585 | 2,883 | 64 | 141 | 45.046875 | 0.822701 | 0.109261 | 0 | 0 | 0 | 0 | 0.11222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.090909 | 0.022727 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f357486dc273f59930f7e3c9ba8e47b6209add12 | 1,574 | py | Python | pyDipole/particles.py | rakab/pyDipole | 2793db6db951e9e5e79e064430a0faf6636d3a2a | [
"BSD-3-Clause"
] | null | null | null | pyDipole/particles.py | rakab/pyDipole | 2793db6db951e9e5e79e064430a0faf6636d3a2a | [
"BSD-3-Clause"
] | null | null | null | pyDipole/particles.py | rakab/pyDipole | 2793db6db951e9e5e79e064430a0faf6636d3a2a | [
"BSD-3-Clause"
] | null | null | null | __all__ = [
'Particle',
'particles',
]
"""
Style: name: [mass(string),isQCD(bool)]
"""
particle_table = {
'u' : ['0' , True],
'ubar' : ['0' , True],
't' : ['mt', True],
'tbar' : ['mt', True],
'e' : ['0' , False],
'ebar' : ['0' , False],
'g' : ['0' , True]
}
par_id = 1
class Particle(object):
def __init__(self,name):
self.name = name
try:
self.mass = particle_table[name][0]
self.isQCD = particle_table[name][1]
self.isMassive = False if self.mass == '0' else True
except KeyError:
raise KeyError("particle {} is not defined".format(name))
global par_id
self.id = par_id
par_id = par_id + 1
@property
def name(self):
        return self.__name
@name.setter
def name(self, name):
if not isinstance(name,str):
raise TypeError("Particle name should be a string")
else:
self.__name = name
def __str__(self):
return '{0}({1})'.format(self.__class__.__name__, self.__dict__)
def __setattr__(self,name,value):
super(Particle,self).__setattr__(name,value)
def particles(name):
if not isinstance(name, str):
raise TypeError("Particle names should be given as a space separated string")
else:
particles = name.split()
output = tuple()
for p in particles:
particle=Particle(p)
output = output + (particle,)
return output
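
if __name__ == "__main__":
    # Hedged usage sketch (added for illustration, not part of the original
    # module): build two QCD partons from a space-separated string.
    up, anti_up = particles("u ubar")
    print(up.name, up.mass, up.isQCD)   # u 0 True
    print(anti_up.id)                   # ids grow with creation order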
| 24.984127 | 85 | 0.527954 | 180 | 1,574 | 4.372222 | 0.361111 | 0.060991 | 0.045743 | 0.025413 | 0.143583 | 0.121982 | 0.121982 | 0.121982 | 0.121982 | 0 | 0 | 0.011483 | 0.336086 | 1,574 | 62 | 86 | 25.387097 | 0.741627 | 0 | 0 | 0.081633 | 0 | 0 | 0.109365 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.122449 | false | 0 | 0 | 0.040816 | 0.204082 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f359c6c0485b2afb42cd60a1563a6e6b2a3277bc | 677 | py | Python | shadow.py | luctalatinian/pygame_stealth | 9d7db47ed23621aa038f7c5e06dcab0c6b33de66 | [
"MIT"
] | null | null | null | shadow.py | luctalatinian/pygame_stealth | 9d7db47ed23621aa038f7c5e06dcab0c6b33de66 | [
"MIT"
] | null | null | null | shadow.py | luctalatinian/pygame_stealth | 9d7db47ed23621aa038f7c5e06dcab0c6b33de66 | [
"MIT"
] | 1 | 2018-07-09T20:56:10.000Z | 2018-07-09T20:56:10.000Z | import pygame
class Shadow:
# shadow color
COLOR = (32, 32, 32, 192)
# unit size in pixels of a shadow
# length/width multiples are passed to the constructor
# to determine individual shadow size
U = 32
def __init__(self, posX, posY, width, length):
self.posX = posX
self.posY = posY
self.width = width * self.U
self.length = length * self.U
self.data = pygame.Surface( (self.width, self.length),
pygame.SRCALPHA, 32 )
self.data.fill(self.COLOR)
def draw(self, surface):
surface.blit( self.data, (self.posX, self.posY) )
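
# Hedged usage sketch (added for illustration, not part of the original
# module): draw a 2x1-unit shadow at (64, 64). The window setup below is
# illustrative only and needs a working display to run.
if __name__ == "__main__":
    pygame.init()
    screen = pygame.display.set_mode((320, 240))
    Shadow(64, 64, 2, 1).draw(screen)
    pygame.display.flip()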
| 28.208333 | 63 | 0.5613 | 84 | 677 | 4.47619 | 0.416667 | 0.06383 | 0.06383 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029279 | 0.344165 | 677 | 23 | 64 | 29.434783 | 0.817568 | 0.196455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.071429 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f35e505927276af6aead2233d5ffc470b867a7a9 | 9,500 | py | Python | ixia_orch/setup/__main__.py | QualiSystemsLab/TechRepo | 5abb0769ad0299ed6bad5d40b0b98c8179eaa030 | [
"Apache-2.0"
] | null | null | null | ixia_orch/setup/__main__.py | QualiSystemsLab/TechRepo | 5abb0769ad0299ed6bad5d40b0b98c8179eaa030 | [
"Apache-2.0"
] | null | null | null | ixia_orch/setup/__main__.py | QualiSystemsLab/TechRepo | 5abb0769ad0299ed6bad5d40b0b98c8179eaa030 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import cloudshell.helpers.scripts.cloudshell_dev_helpers as dev_helpers
from cloudshell.api.cloudshell_api import InputNameValue
from cloudshell.workflow.orchestration.sandbox import Sandbox
from cloudshell.workflow.orchestration.setup.default_setup_orchestrator import DefaultSetupWorkflow
from cloudshell.workflow.orchestration.setup.default_setup_logic import DefaultSetupLogic
TARGET_TYPE_RESOURCE = 'Resource'
REMAP_CHILD_RESOURCES = 'connect_child_resources'
IXVM_CHASSIS_MODEL = "IxVM Virtual Traffic Chassis 2G"
VYOS_MODEL = "Vyos"
RE_AUTOLOAD_MODELS = [IXVM_CHASSIS_MODEL, VYOS_MODEL]
RE_CONNECT_CHILD_RESOURCES_MODELS = [IXVM_CHASSIS_MODEL]
# Update this with the current reservation ID
reservation_id = "5a05a58f-2e60-430d-98df-68685df173fd"
'''
dev_helpers.attach_to_cloudshell_as('admin', 'admin', 'Global',reservation_id,server_address='localhost', cloudshell_api_port='8029')
'''
DEBUG = False
def execute_autoload_on_ixvm(sandbox, components):
""" Execute autoload on deployed Virtual IxVM Chassis """
deployed_apps_names = [app.deployed_app.Name for app in components.values()]
resource_details_cache = {app_name: sandbox.automation_api.GetResourceDetails(app_name) for app_name in
deployed_apps_names}
# execute autoload on the deployed apps after they've got IPs
for app_name in deployed_apps_names:
app_resource_details = resource_details_cache[app_name]
if app_resource_details.ResourceModelName not in RE_AUTOLOAD_MODELS:
continue
sandbox.automation_api.WriteMessageToReservationOutput(reservationId=sandbox.id,
message='Autoload resource {}'.format(app_name))
sandbox.automation_api.AutoLoad(app_name)
# execute remap connections on the deployed apps after correct autoload(s)
# for app_name in deployed_apps_names:
# app_resource_details = resource_details_cache[app_name]
#
# if app_resource_details.ResourceModelName not in RE_CONNECT_CHILD_RESOURCES_MODELS:
# continue
#
# sandbox.automation_api.WriteMessageToReservationOutput(reservationId=sandbox.id,
# message='Connect Child resource on {}'.format(app_name))
#
# sandbox.logger.info("Triggering Connect Child resources command on {}".format(app_name))
# sandbox.automation_api.ExecuteCommand(sandbox.id,
# app_name,
# TARGET_TYPE_RESOURCE,
# REMAP_CHILD_RESOURCES, [])
DefaultSetupLogic.remap_connections(api=Sandbox.automation_api, reservation_id=sandbox.id,
apps_names=deployed_apps_names, logger=sandbox.logger)
sandbox.logger.info("Triggering 'connect_all_routes_in_reservation' method from the DefaultSetupLogic")
sandbox.automation_api.WriteMessageToReservationOutput(reservationId=sandbox.id,
message='Connecting routes in the reservation')
reservation_details = sandbox.automation_api.GetReservationDetails(sandbox.id)
DefaultSetupLogic.connect_all_routes_in_reservation(api=sandbox.automation_api,
reservation_details=reservation_details,
reservation_id=sandbox.id,
resource_details_cache=resource_details_cache,
logger=sandbox.logger)
def loadConfig(Sandbox, components):
# We are passing the resource_name into the function in the components parameter
resource_name = components
resources = Sandbox.components.resources
# Get the path to the FTP directory from the Configuration FTP Repo resource
ftpServer = Sandbox.components.resources['Configuration FTP Repo']
response = Sandbox.automation_api.GetAttributeValue(resourceFullPath='Configuration FTP Repo',
attributeName = 'Ftpserver.ftp_dir')
ftp_path = response.Value
# ensure that the path starts with "/" - if not, add it
if ftp_path[0] != "/":
ftp_path = "/"+ftp_path
# username from FTP Server resource
response = Sandbox.automation_api.GetAttributeValue(resourceFullPath='Configuration FTP Repo',
attributeName = 'Ftpserver.ftp_username')
user = response.Value
# password from FTP Server resource
response = Sandbox.automation_api.GetAttributeValue(resourceFullPath='Configuration FTP Repo',
attributeName = 'Ftpserver.ftp_password')
passwd = response.Value
ftp_full_path = 'ftp://'+user+':'+passwd+'@'+ftpServer.FullAddress
full_path = ftp_full_path+ftp_path
if "cisco" in resource_name.lower():
# Use either a pre-defined file or a custom configured file if not BGP, OSPF or CLEAN
if Sandbox.global_inputs['Router Configuration File Set'] == "BGP":
routerConfig = full_path+'/cisco_bgp.cfg'
elif Sandbox.global_inputs['Router Configuration File Set'] == "OSPF":
routerConfig = full_path+'/cisco_ospf.cfg'
elif Sandbox.global_inputs['Router Configuration File Set'] == "CLEAN":
routerConfig = full_path+'/cisco_clean.cfg'
# else:
# routerConfig = full_path+'/'+Sandbox.global_inputs['Cisco Router Configuration File']
if "juniper" in resource_name.lower():
# Use either a pre-defined file or a custom configured file if not BGP, OSPF or CLEAN
if Sandbox.global_inputs['Router Configuration File Set'] == "BGP":
routerConfig = full_path+'/juniper_bgp.cfg'
elif Sandbox.global_inputs['Router Configuration File Set'] == "OSPF":
routerConfig = full_path+'/juniper_ospf.cfg'
elif Sandbox.global_inputs['Router Configuration File Set'] == "CLEAN":
routerConfig = full_path+'/juniper_clean.cfg'
#else:
# routerConfig = full_path+'/'+Sandbox.global_inputs['Juniper Router Configuration File']
myList = []
myList.append(InputNameValue(Name='path',Value=routerConfig))
myList.append(InputNameValue(Name='configuration_type',Value='running'))
myList.append(InputNameValue(Name='restore_method',Value='override'))
response = Sandbox.automation_api.ExecuteCommand(reservationId=Sandbox.id,
targetName=resource_name,
targetType='Resource',
commandName='restore',
commandInputs=myList)
Sandbox.automation_api.WriteMessageToReservationOutput(reservationId=Sandbox.id,message='<div style="color: green; font-weight:bold">'+resource_name+' configuration completed</div>')
Sandbox.automation_api.SetResourceLiveStatus(resource_name, "Online" , "Active")
def connect_l1(sandbox, component):
for route in sandbox.automation_api.GetReservationDetails(sandbox.id).ReservationDescription.RequestedRoutesInfo:
sandbox.automation_api.ConnectRoutesInReservation(sandbox.id, [route.Source,route.Target],'bi')
def showGlobalInputs(Sandbox, components):
# Blueprint Type
message = "Router Configuration File Set: "+Sandbox.global_inputs['Router Configuration File Set']
Sandbox.automation_api.WriteMessageToReservationOutput(reservationId=Sandbox.id,message=message)
# Cisco Router Config
# message = "Cisco Router Configuration File: "+Sandbox.global_inputs['Cisco Router Configuration File']
#Sandbox.automation_api.WriteMessageToReservationOutput(reservationId=Sandbox.id,message=message)
# Juniper Router Config
#message = "Juniper Router Configuration File: "+Sandbox.global_inputs['Juniper Router Configuration File']
#Sandbox.automation_api.WriteMessageToReservationOutput(reservationId=Sandbox.id,message=message)
if __name__ == '__main__':
Sandbox = Sandbox()
DefaultSetupWorkflow().register(Sandbox) #, enable_configuration=False
# For each configurable resource, load its config
    if (Sandbox.global_inputs['Router Configuration File Set']).lower() != 'none':
for resource_name, resource in Sandbox.components.resources.iteritems():
if "/" in resource_name:
continue
if "FTP" in resource_name:
continue
# If not a Cisco or Juniper device, don't load config
if "cisco" in resource_name.lower() or "juniper" in resource_name.lower():
Sandbox.workflow.add_to_configuration(function=loadConfig, components=resource_name)
Sandbox.workflow.on_provisioning_ended(function=showGlobalInputs, components=Sandbox.components.resources)
Sandbox.workflow.add_to_connectivity(function=connect_l1,components=None)
#Sandbox.workflow.on_configuration_ended(function=execute_autoload_on_ixvm, components=Sandbox.components.apps)
Sandbox.execute_setup()
| 51.075269 | 187 | 0.669368 | 968 | 9,500 | 6.359504 | 0.213843 | 0.055231 | 0.064977 | 0.038012 | 0.491391 | 0.416017 | 0.376218 | 0.32976 | 0.296134 | 0.265432 | 0 | 0.004206 | 0.249158 | 9,500 | 185 | 188 | 51.351351 | 0.858825 | 0.245789 | 0 | 0.14433 | 0 | 0 | 0.143425 | 0.020321 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041237 | false | 0.030928 | 0.051546 | 0 | 0.092784 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f35f42825e9ec49afa9f00f36718a54a1f778562 | 2,470 | py | Python | src/sparkmon/mlflow_utils.py | stephanecollot/sparkmon | ca7aee915e0f1db2fb82d41e08a0a2d782236e23 | [
"MIT"
] | 11 | 2021-07-05T12:57:54.000Z | 2022-01-30T05:25:27.000Z | src/sparkmon/mlflow_utils.py | stephanecollot/sparkmon | ca7aee915e0f1db2fb82d41e08a0a2d782236e23 | [
"MIT"
] | 83 | 2021-07-12T22:14:16.000Z | 2022-03-28T22:33:13.000Z | src/sparkmon/mlflow_utils.py | stephanecollot/sparkmon | ca7aee915e0f1db2fb82d41e08a0a2d782236e23 | [
"MIT"
] | 2 | 2021-07-13T09:44:39.000Z | 2021-12-01T11:12:37.000Z | # Copyright (c) 2021 ING Wholesale Banking Advanced Analytics
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
# the Software, and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
# FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
# COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
# IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""mlflow missing utilities."""
import tempfile
from contextlib import contextmanager
from pathlib import Path
@contextmanager
def log_file(artifact_full_path: str):
"""Yield a file-object that is going to be saved as an artifact in mlflow.
mlflow API is really missing this functionality, so let's implement it via a temporary directory.
"""
import mlflow
artifact_full_path = Path(artifact_full_path)
# The artifact_path argument of log_artifact() is actually the directory path
artifact_path = artifact_full_path.parent
if len(artifact_path.parts) == 0:
artifact_path = None
else:
artifact_path = str(artifact_path)
# Let's create a temporary directory and save the file with the same file name
tmpdir = tempfile.TemporaryDirectory()
local_path_tmp = Path(tmpdir.name) / artifact_full_path
local_path_tmp.parent.mkdir(parents=True, exist_ok=True)
fp = open(local_path_tmp, "w")
try:
yield fp
finally:
fp.close()
mlflow.log_artifact(local_path=local_path_tmp, artifact_path=artifact_path)
tmpdir.cleanup()
def active_run():
"""Get the active run with all logs updated."""
import mlflow
active_run = mlflow.active_run()
# active_run.data.params # This is not updated
return mlflow.get_run(active_run.info.run_id)
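
# Hedged usage sketch (added for illustration, not part of the original
# module); requires an active mlflow run, and "notes/summary.txt" is an
# arbitrary illustrative artifact path:
#
#     import mlflow
#     with mlflow.start_run():
#         with log_file("notes/summary.txt") as f:
#             f.write("monitoring summary goes here\n")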
| 38.59375 | 101 | 0.748583 | 363 | 2,470 | 4.988981 | 0.460055 | 0.048592 | 0.044174 | 0.022087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002494 | 0.188259 | 2,470 | 63 | 102 | 39.206349 | 0.900748 | 0.61417 | 0 | 0.076923 | 0 | 0 | 0.001103 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.192308 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f35faf6124800bbcd57e33c69660b190bc9d8905 | 1,854 | py | Python | service/microservice.py | SFDigitalServices/bluebeam-microservice | bb529f291b3399e29b71dd754e77c73f759c7762 | [
"MIT"
] | 1 | 2020-05-28T17:38:12.000Z | 2020-05-28T17:38:12.000Z | service/microservice.py | SFDigitalServices/bluebeam-microservice | bb529f291b3399e29b71dd754e77c73f759c7762 | [
"MIT"
] | 3 | 2021-02-10T02:34:39.000Z | 2022-01-07T23:28:51.000Z | service/microservice.py | SFDigitalServices/bluebeam-microservice | bb529f291b3399e29b71dd754e77c73f759c7762 | [
"MIT"
] | null | null | null | """Main application module"""
import os
import json
import jsend
import sentry_sdk
import falcon
from .resources.db import create_session
from .resources.welcome import Welcome
from .resources.submission import Submission
from .resources.export import Export, ExportStatus
from .resources.login import Login
def start_service():
"""Start this service
set SENTRY_DSN environmental variable to enable logging with Sentry
"""
# Initialize Sentry
sentry_sdk.init(os.environ.get('SENTRY_DSN'))
# Initialize Falcon
api = falcon.API(middleware=[SQLAlchemySessionManager(create_session())])
api.add_route('/welcome', Welcome())
api.add_route('/submission', Submission())
api.add_route('/export/status', ExportStatus())
api.add_route('/export', Export())
api.add_route('/login', Login())
api.add_static_route('/static', os.path.abspath('static'))
api.add_sink(default_error, '^((?!static).)*$')
return api
def default_error(_req, resp):
"""Handle default error"""
resp.status = falcon.HTTP_404
msg_error = jsend.error('404 - Not Found')
sentry_sdk.capture_message(msg_error)
resp.body = json.dumps(msg_error)
class SQLAlchemySessionManager:
"""
Create a session for every request and close it when the request ends.
"""
def __init__(self, Session):
self.Session = Session # pylint: disable=invalid-name
def process_resource(self, req, resp, resource, params):
# pylint: disable=unused-argument
"""attach a db session for every resource"""
resource.session = self.Session()
def process_response(self, req, resp, resource, req_succeeded):
# pylint: disable=no-self-use, unused-argument
"""close db session for every resource"""
if hasattr(resource, 'session'):
resource.session.close()
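
# Hedged usage note (added for illustration, not part of the original
# module): the WSGI app is produced by calling start_service(), e.g.
#
#     gunicorn 'service.microservice:start_service()'
#
# (the module path is assumed from this file's location; adjust to your layout)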
| 33.107143 | 77 | 0.69849 | 229 | 1,854 | 5.519651 | 0.388646 | 0.033228 | 0.043513 | 0.026899 | 0.039557 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003958 | 0.182309 | 1,854 | 55 | 78 | 33.709091 | 0.829815 | 0.226537 | 0 | 0 | 0 | 0 | 0.077536 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.147059 | false | 0 | 0.294118 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f35fc9e10ba11e3ac0a41499770d8ee424ef0361 | 1,458 | py | Python | apps/accounts/serializers/user_serializer.py | vicobits/django-wise | 3fdc01eabdff459b31e016f9f6d1cafc19c5a292 | [
"MIT"
] | 5 | 2020-04-11T20:11:48.000Z | 2021-03-16T23:58:01.000Z | apps/accounts/serializers/user_serializer.py | victoraguilarc/django-wise | 3fdc01eabdff459b31e016f9f6d1cafc19c5a292 | [
"MIT"
] | 5 | 2020-04-11T20:17:56.000Z | 2021-06-16T19:18:29.000Z | apps/accounts/serializers/user_serializer.py | victoraguilarc/django-wise | 3fdc01eabdff459b31e016f9f6d1cafc19c5a292 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from rest_framework import serializers
from apps.accounts.models import User
from apps.accounts.api.error_codes import AccountsErrorCodes
from apps.contrib.api.exceptions.base import SerializerFieldExceptionMixin
PASSWORD_MAX_LENGTH = User._meta.get_field('password').max_length # noqa: WPS437
user_read_only_fields = (
'id',
'username',
'date_joined',
'last_login',
'new_email',
'password',
'is_superuser',
'is_staff',
'is_active',
'date_joined',
'email_token',
'token',
'groups',
'user_permissions',
)
class UserUpdateSerializer(SerializerFieldExceptionMixin, serializers.ModelSerializer):
"""It helps to validate the user basic info updating."""
username = serializers.CharField(required=False)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.request = self.context.get('request', None)
self.user = getattr(self.request, 'user', None)
def validate_username(self, username): # noqa: D102
user_exists = User.objects.filter(username=username).exists()
if self.user.username != username and user_exists:
self.raise_exception(AccountsErrorCodes.USERNAME_UNAVAILABLE)
return username
class Meta:
model = User
fields = (
'id',
'username',
'first_name',
'last_name',
'photo',
)
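
# Hedged usage sketch (added for illustration, not part of the original
# module): validate a username change for the requesting user, where
# `request` is a DRF request with an authenticated `.user`:
#
#     serializer = UserUpdateSerializer(
#         instance=request.user,
#         data={'username': 'new_handle'},
#         context={'request': request},
#         partial=True,
#     )
#     serializer.is_valid(raise_exception=True)
#     serializer.save()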
| 26.509091 | 87 | 0.650892 | 154 | 1,458 | 5.941558 | 0.545455 | 0.02623 | 0.034973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006261 | 0.233196 | 1,458 | 54 | 88 | 27 | 0.812165 | 0.066529 | 0 | 0.146341 | 0 | 0 | 0.132299 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04878 | false | 0.04878 | 0.097561 | 0 | 0.243902 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f361061823963f88458df17e30230bb3e0fa2a85 | 3,013 | py | Python | biodata/api/views/other.py | znatty22/biodataservice | a3eeb137d2e727a0fc58437b185f2637bc4665ed | [
"Apache-2.0"
] | null | null | null | biodata/api/views/other.py | znatty22/biodataservice | a3eeb137d2e727a0fc58437b185f2637bc4665ed | [
"Apache-2.0"
] | null | null | null | biodata/api/views/other.py | znatty22/biodataservice | a3eeb137d2e727a0fc58437b185f2637bc4665ed | [
"Apache-2.0"
] | null | null | null | """
Views for endpoints that are not part of the biodata CRUD API
"""
from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response
from django.conf import settings
from django.db import models
import django_rq
from rq.job import Job, NoSuchJobError, JobStatus
from redis import Redis
from biodata.api import models as m
DISTINCT_VAL_THRESHOLD = 15
@api_view(['GET'])
def health_check(request):
"""
Health check endpoint
"""
return Response(
{
'message': 'Welcome to the Biodataservice API',
'status': status.HTTP_200_OK
}
)
def summary_task(study_id):
"""
Collect stats for a study: total count for each entity and distinct
values per entity attribute
"""
def entity_stats(model_cls):
stats = {
'total': model_cls.objects.count(),
}
distincts = {}
for f in model_cls._meta.fields:
if not isinstance(f, models.ForeignKey):
ds = {
value
for value, *_ in
model_cls.objects.values_list(f.attname).distinct()
}
if len(ds) > DISTINCT_VAL_THRESHOLD:
ds = 'Too many distinct values to list'
distincts[f.attname] = ds
stats['distinct_values'] = distincts
return stats
return {
'study_id': study_id,
'stats': {
m.Participant.__name__: entity_stats(m.Participant),
m.Biospecimen.__name__: entity_stats(m.Biospecimen)
}
}
@api_view(['GET'])
def summary(request, study_id):
"""
Endpoint to compute study stats in async task
"""
try:
study = m.Study.objects.get(kf_id=study_id)
except m.Study.DoesNotExist:
return Response({
            'message': f'Could not compute stats! Study {study_id} does not exist'
}, status=status.HTTP_404_NOT_FOUND)
def job_id(study_id):
return f'{study_id}-summary-job'
q = django_rq.get_queue('biodataservice')
print(f'Jobs in queue: {q.job_ids}')
# Look for existing job
try:
job = q.fetch_job(job_id(study_id))
except NoSuchJobError:
print('Job does not exist')
job = None
# No job yet, or job previous failed
if (not job) or (job and job.get_status() == JobStatus.FAILED):
job = q.enqueue(
summary_task, args=(study_id,), job_id=job_id(study_id),
result_ttl=10
)
print(f'Submitted new job: {job.id}')
return Response({
'message': f'Submitted status computation for {study_id}'
})
# Try getting result from the job if it finished
else:
try:
return Response(job.result)
except AttributeError:
return Response({
'message': f'Stats computation for {study_id} not complete '
'yet! Check back soon!'
})
| 26.901786 | 76 | 0.593097 | 368 | 3,013 | 4.6875 | 0.345109 | 0.052754 | 0.026087 | 0.038261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004833 | 0.313309 | 3,013 | 111 | 77 | 27.144144 | 0.828903 | 0.109525 | 0 | 0.12987 | 0 | 0 | 0.150915 | 0.008384 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064935 | false | 0 | 0.116883 | 0.012987 | 0.285714 | 0.038961 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f367913e08216490dbf81b44ad9a97732b5bc0a3 | 5,099 | py | Python | tests/test_data_handling.py | LLR-ILD/alldecays | 08e51e99385ae7ca96edefddafd715e1d8cac3d3 | [
"Apache-2.0"
] | null | null | null | tests/test_data_handling.py | LLR-ILD/alldecays | 08e51e99385ae7ca96edefddafd715e1d8cac3d3 | [
"Apache-2.0"
] | null | null | null | tests/test_data_handling.py | LLR-ILD/alldecays | 08e51e99385ae7ca96edefddafd715e1d8cac3d3 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import pytest
from conftest import channel1_path, channel_polarized_path, decay_names
import alldecays
@pytest.mark.parametrize("data_type", ["polarized", "unpolarized"])
def test_name_changes_work(data_type, channel1, channel_polarized):
channel = dict(polarized=channel_polarized, unpolarized=channel1)[data_type]
pc0 = next(iter(channel._pure_channels.values()))
old_decay_names = channel.decay_names
new_decay_names = ["new" + n for n in channel.decay_names]
channel.decay_names = new_decay_names
assert pc0.decay_names == new_decay_names
channel.decay_names = old_decay_names
old_bkg_names = channel.bkg_names
new_bkg_names = ["new" + n for n in channel.bkg_names]
channel.bkg_names = new_bkg_names
assert set(pc0.bkg_names).issubset(new_bkg_names)
channel.bkg_names = old_bkg_names
old_box_names = channel.box_names
new_box_names = ["new" + n for n in channel.box_names]
channel.box_names = new_box_names
assert list(pc0.box_names) == new_box_names
channel.box_names = old_box_names
def test_purity_change(channel_polarized):
old_polarization = channel_polarized.polarization
channel_polarized.polarization = (1.0, -1.0)
channel_polarized.polarization = old_polarization
def test_expected_counts(channel_polarized):
channel = channel_polarized
box_exp = channel.get_expected_counts().values
expected_should_be = np.array([2790.9, 2394.5, 1706.9, 2572.2])
assert box_exp == pytest.approx(expected_should_be, abs=1e-1)
changed_brs = np.zeros_like(channel.data_brs)
changed_brs[0] = 1
box_changed_br = channel.get_expected_counts(data_brs=changed_brs).values
changed_should_be = np.array([2004.7, 3124.4, 2080.7, 2308.6])
assert box_changed_br == pytest.approx(changed_should_be, abs=1e-1)
def test_toys(channel_polarized):
rng = np.random.default_rng(1)
one_toy = channel_polarized.get_toys(rng=rng)
toy_should_be = np.array([2766, 2340, 1738, 2620])
assert (one_toy == toy_should_be).all()
size = (2, 3)
toy_sum_expected = np.ones(size) * sum(toy_should_be)
toy_sum_obtained = channel_polarized.get_toys(size, rng=rng).sum(axis=-1)
assert (toy_sum_expected == toy_sum_obtained).all()
@pytest.mark.parametrize("data_type", ["polarized", "unpolarized"])
def test_data_set_add_channel(data_type):
channel_paths = {
"unpolarized": channel1_path,
"polarized": channel_polarized_path,
}
polarizations = {
"unpolarized": None,
"polarized": (-0.8, 0.3),
}
ds = alldecays.DataSet(decay_names, polarization=polarizations[data_type])
ds.add_channel("by_name", channel_paths[data_type])
ds.add_channel(channel_paths[data_type])
ds.add_channels(
{"nameA": channel_paths[data_type], "nameB": channel_paths[data_type]}
)
def go_through_setters(ds, channel, is_combination=False):
old_decay_names = ds.decay_names
new_decay_names = ["new" + n for n in decay_names]
ds.decay_names = new_decay_names
assert channel.decay_names == new_decay_names
ds.decay_names = old_decay_names
old_data_brs = ds.data_brs
changed_brs = np.zeros_like(ds.data_brs)
changed_brs[0] = 1
ds.data_brs = changed_brs
assert (changed_brs == channel.data_brs).all()
ds.data_brs = old_data_brs
old_signal_scaler = ds.signal_scaler
new_signal_scaler = 2.5
ds.signal_scaler = new_signal_scaler
assert channel.signal_scaler == new_signal_scaler
ds.signal_scaler = old_signal_scaler
if is_combination:
return
old_polarization = ds.polarization
new_polarization = (1.0, -0.1)
ds.polarization = new_polarization
assert channel.polarization == new_polarization
ds.polarization = old_polarization
old_luminosity_ifb = ds.luminosity_ifb
new_luminosity_ifb = 1.1
ds.luminosity_ifb = new_luminosity_ifb
assert channel.luminosity_ifb == new_luminosity_ifb
ds.luminosity_ifb = old_luminosity_ifb
def test_data_set_setters():
ds = alldecays.DataSet(decay_names, polarization=(-0.8, 0.3))
ds.add_channel("my_channel", channel_polarized_path)
channel = ds._channels["my_channel"]
go_through_setters(ds, channel)
def test_data_set_combined():
ds1 = alldecays.DataSet(decay_names, polarization=(-0.8, 0.3))
ds2 = alldecays.DataSet(decay_names, polarization=(0, 0))
ds1.add_channel("my_channel", channel_polarized_path)
ds2.add_channel("my_channel", channel_polarized_path)
ds2.add_channel("one_more_channel", channel_polarized_path)
combined = alldecays.CombinedDataSet(decay_names, {"ds1": ds1, "ds2": ds2})
combined.add_data_sets({"ds1_copy": ds1})
channel = combined._channels["ds1:my_channel"]
go_through_setters(combined, channel, is_combination=True)
def test_data_set_subclassing():
from alldecays.data_handling.abstract_data_set import AbstractDataSet
assert isinstance(alldecays.DataSet(decay_names), AbstractDataSet)
assert isinstance(alldecays.CombinedDataSet(decay_names), AbstractDataSet)
| 35.409722 | 80 | 0.740341 | 719 | 5,099 | 4.895688 | 0.179416 | 0.079545 | 0.029545 | 0.030682 | 0.419034 | 0.300852 | 0.192045 | 0.169318 | 0.101136 | 0.045455 | 0 | 0.026555 | 0.15807 | 5,099 | 143 | 81 | 35.657343 | 0.793385 | 0 | 0 | 0.036364 | 0 | 0 | 0.041381 | 0 | 0 | 0 | 0 | 0 | 0.127273 | 1 | 0.081818 | false | 0 | 0.045455 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f36b2d045e4b5fb8f6739e6b9464ce4f48c00174 | 1,614 | py | Python | elections/forms.py | zinaukarenku/zkr-platform | 8daf7d1206c482f1f8e0bcd54d4fde783e568774 | [
"Apache-2.0"
] | 2 | 2018-11-16T21:45:17.000Z | 2019-02-03T19:55:46.000Z | elections/forms.py | zinaukarenku/zkr-platform | 8daf7d1206c482f1f8e0bcd54d4fde783e568774 | [
"Apache-2.0"
] | 13 | 2018-08-17T19:12:11.000Z | 2022-03-11T23:27:41.000Z | elections/forms.py | zinaukarenku/zkr-platform | 8daf7d1206c482f1f8e0bcd54d4fde783e568774 | [
"Apache-2.0"
] | null | null | null | from crispy_forms.helper import FormHelper
from crispy_forms.layout import Layout, Div, Submit
from django import forms
from django.forms.utils import ErrorList
from web.models import Municipality
from django.utils.translation import gettext_lazy as _
class MayorCandidatesFiltersForm(forms.Form):
municipality = forms.ModelChoiceField(
label=_("Savivaldybė"),
queryset=Municipality.objects.all(),
to_field_name='slug',
empty_label=_("Visos savivaldybės"),
required=False
)
def __init__(self, data=None, files=None, auto_id='id_%s', prefix=None, initial=None, error_class=ErrorList,
label_suffix=None, empty_permitted=False, field_order=None, use_required_attribute=None,
renderer=None):
super().__init__(data, files, auto_id, prefix, initial, error_class, label_suffix, empty_permitted, field_order,
use_required_attribute, renderer)
self.helper = FormHelper()
self.helper.form_method = "GET"
self.helper.layout = Layout(
Div(
Div('municipality'),
Div(Submit('filter', 'Filtruoti', css_class="btn btn-primary btn-block btn-sm"))
)
)
self.is_valid()
def filter_municipality(self, queryset, municipality):
return queryset.filter(municipality=municipality)
def filter_queryset(self, queryset):
municipality = self.cleaned_data.get('municipality')
if municipality:
queryset = self.filter_municipality(queryset, municipality)
return queryset
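
# Hedged usage sketch (added for illustration, not part of the original
# module): apply the filter form to a candidates queryset inside a view.
# `MayorCandidate` is a hypothetical model name used only for illustration.
#
#     form = MayorCandidatesFiltersForm(data=request.GET)
#     candidates = MayorCandidate.objects.all()
#     if form.is_valid():
#         candidates = form.filter_queryset(candidates)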
| 35.866667 | 120 | 0.672243 | 176 | 1,614 | 5.943182 | 0.397727 | 0.076482 | 0.028681 | 0.06501 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.23544 | 1,614 | 44 | 121 | 36.681818 | 0.84765 | 0 | 0 | 0 | 0 | 0 | 0.069393 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0 | 0.171429 | 0.028571 | 0.371429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f36bb2734cf8a06f514d97eb5b2c6edbcf20fc1d | 988 | py | Python | DataBatcher.py | malfusion/furnace | 2d2a31212f0d67a99743a125eee1825da2af2182 | [
"MIT"
] | null | null | null | DataBatcher.py | malfusion/furnace | 2d2a31212f0d67a99743a125eee1825da2af2182 | [
"MIT"
] | 1 | 2021-01-28T20:27:14.000Z | 2021-01-28T20:27:14.000Z | DataBatcher.py | malfusion/furnace | 2d2a31212f0d67a99743a125eee1825da2af2182 | [
"MIT"
] | null | null | null | from collections import deque
class DataBatcher:
def __init__(self, keyFunc, valFunc=None):
self.keyFunc = keyFunc
self.valFunc = valFunc
self.prevKey = None
self.batches = deque()
self.batch = None
def addData(self, data):
key = self.keyFunc(data)
if key == None:
raise Exception('Batching Key cannot be None')
if key != self.prevKey:
if self.batch != None:
self.batches.append({ self.prevKey: self.batch })
self.batch = []
self.prevKey = key
self.batch.append(self.valFunc(data) if self.valFunc else data)
def endBatch(self):
self.batches.append({ self.prevKey: self.batch })
self.prevKey = None
self.batch = None
def getBatches(self):
while(self.batches):
yield self.batches.popleft()
def getBatchesCount(self):
return len(self.batches)
| 24.7 | 71 | 0.563765 | 109 | 988 | 5.073395 | 0.311927 | 0.113924 | 0.070524 | 0.068716 | 0.148282 | 0.148282 | 0.148282 | 0.148282 | 0 | 0 | 0 | 0 | 0.337045 | 988 | 39 | 72 | 25.333333 | 0.844275 | 0 | 0 | 0.222222 | 0 | 0 | 0.027356 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.185185 | false | 0 | 0.037037 | 0.037037 | 0.296296 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f36c464ee68faaf7ed9c2d48de3665e5eff18856 | 1,309 | py | Python | Dynamic Programming/chess.py | roycek7/operation_research | 37f01b7fcd93494a7de38459c324132516724b99 | [
"MIT"
] | 1 | 2021-04-17T17:33:30.000Z | 2021-04-17T17:33:30.000Z | Dynamic Programming/chess.py | roycek7/operation_research | 37f01b7fcd93494a7de38459c324132516724b99 | [
"MIT"
] | null | null | null | Dynamic Programming/chess.py | roycek7/operation_research | 37f01b7fcd93494a7de38459c324132516724b99 | [
"MIT"
] | null | null | null | """
Chess Strategy
Vladimir is playing Keith in a two-game chess match. Winning a game scores one match point and drawing a game scores
a half match point. After the two games are played, the player with more match points is declared the champion.
If the two players are tied after two games, they continue playing until somebody wins a game (the winner of that game
will be the champion).
During each game, Vladimir can play one of two ways: boldly or conservatively. If he plays boldly, he has a 45% chance
of winning the game and a 55% chance of losing the game. If he plays conservatively, he has a 90% chance of drawing
the game and a 10% chance of losing the game.
What strategy should Vladimir follow to maximize his probability of winning the match?
"""
# Chess(t, s) is the max probability of winning match if we start game t with s points
# we want Chess(1, 0)
def Chess(t, s):
if t == 3:
if s < 1:
return 0, 'Lost'
elif s > 1:
return 1, 'Won'
else:
return 0.45, 'Bold'
else:
bold = 0.45 * Chess(t + 1, s + 1)[0] + 0.55 * Chess(t + 1, s + 0)[0]
conservative = 0.9 * Chess(t + 1, s + 1 / 2)[0] + 0.10 * Chess(t + 1, s + 0)[0]
return max((bold, 'Bold'), (conservative, 'Conservative'))
print(Chess(1, 0))
| 42.225806 | 118 | 0.657754 | 231 | 1,309 | 3.727273 | 0.372294 | 0.041812 | 0.03252 | 0.037166 | 0.092915 | 0.023229 | 0 | 0 | 0 | 0 | 0 | 0.04499 | 0.252865 | 1,309 | 30 | 119 | 43.633333 | 0.835378 | 0.656226 | 0 | 0.153846 | 0 | 0 | 0.061224 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.384615 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f36d640e79dacaedb0c4acb228f021e81749af5f | 4,261 | py | Python | tests/unit/test_book_reads.py | josealobato/go-over | ebc012a4d74a81fc729419f4ea670b9d6b4271bb | [
"MIT"
] | null | null | null | tests/unit/test_book_reads.py | josealobato/go-over | ebc012a4d74a81fc729419f4ea670b9d6b4271bb | [
"MIT"
] | 7 | 2022-02-13T09:21:55.000Z | 2022-03-02T07:56:31.000Z | tests/unit/test_book_reads.py | josealobato/go-over | ebc012a4d74a81fc729419f4ea670b9d6b4271bb | [
"MIT"
] | null | null | null | # More info at: https://vald-phoenix.github.io/pylint-errors/
# pylint: disable=C0114
# pylint: disable=C0116
from datetime import datetime
import pytest
from go_over.goodreads import Book, BookRead, Bookshelf
# pylint: disable=C0301
# Line too long
BOOKS = [
{"Book Id": "00", "Title": "Book 0", "Author": "Cervantes", "Exclusive Shelf": "read", "Date Read": "", "My Rating": "3"},
{"Book Id": "01", "Title": "Book 0", "Author": "Cervantes", "Exclusive Shelf": "read", "Date Read": "2019/11/23", "My Rating": "3"},
{"Book Id": "10", "Title": "Book 2", "Author": "Cervantes", "Exclusive Shelf": "read", "Date Read": "2020/11/23", "My Rating": "3"},
{"Book Id": "21", "Title": "Book 0", "Author": "Cervantes", "Exclusive Shelf": "currently reading", "Date Read": "", "My Rating": "3"},
]
# Last read
def test_unread_book_date():
    # A book without a read date has no last-read date
book = Book(**BOOKS[0])
assert book.last_read == None
def test_read_book_date():
    # A book with one read date returns that date as its last read
book = Book(**BOOKS[1])
assert book.last_read == datetime(2019, 11, 23)
# Adding dates
def test_adding_read_dates_to_a_book():
    # Adding read dates to a book with no reads stores those dates
book = Book(**BOOKS[0])
book.add_reads(["2019/11/23", "2020/10/14"])
assert len(book.read_dates) == 2
def test_adding_read_dates_to_a_book_with_date():
    # Adding read dates to a book that already has one stores all of them
book = Book(**BOOKS[1])
book.add_reads(["2018/01/11", "2020/10/14"])
assert len(book.read_dates) == 3
def test_adding_read_dates_to_a_book_with_that_date():
    # A read date the book already has is not added again
book = Book(**BOOKS[1])
book.add_reads(["2019/11/23", "2020/10/14"]) # The first one exist
assert len(book.read_dates) == 2
# Last read when adding dates
def test_adding_read_dates_can_change_last_read():
# Adding a newer date will update the last read date.
book = Book(**BOOKS[1])
book.add_reads(["2020/10/14"])
assert book.last_read == datetime(2020, 10, 14)
def test_adding_old_read_dates_wont_change_last_read():
# Adding old read date wont change the last read.
book = Book(**BOOKS[1])
book.add_reads(["2010/10/14"])
assert book.last_read == datetime(2019, 11, 23)
# Read on year
def test_unread_book_not_read_in_year():
book = Book(**BOOKS[0])
assert book.read_on_year(2019) == False
def test_read_book_read_in_year():
book = Book(**BOOKS[1])
assert book.read_on_year(2019) == True
def test_check_positive_read_on_multiple_read():
book = Book(**BOOKS[0])
book.add_reads(["2020/01/01", "2019/01/01"])
assert book.read_on_year(2018) is False
assert book.read_on_year(2019) is True
assert book.read_on_year(2020) is True
assert book.read_on_year(2021) is False
# Read on unknown date
def test_book_read_on_unknown_date():
book = Book(**BOOKS[0])
assert book.read_in_unknown_date is True
def test_book_not_read_on_unknown_date():
book = Book(**BOOKS[3])
assert book.read_in_unknown_date is False
# all reads
def test_get_all_reads():
# get all the reads from a book
book = Book(**BOOKS[0])
book.add_reads(["2020/01/01", "2019/01/01"])
# Execute
reads = book.reads()
    # Verify
assert len(reads) == 2
for read in reads:
assert isinstance(read, BookRead)
# reads order
def test_read_order_when_not_sorted():
# get all the reads from a book
book = Book(**BOOKS[0])
book.add_reads(["2020/01/01", "2019/01/01"])
# Execute
reads = book.reads()
    # Verify
assert len(reads) == 2
assert reads[0].date == datetime(2019, 1, 1)
assert reads[0].read_number == 1
assert reads[1].date == datetime(2020, 1, 1)
assert reads[1].read_number == 2
def test_read_order_when_sorted():
# get all the reads from a book
book = Book(**BOOKS[0])
book.add_reads(["2019/01/01", "2020/01/01"])
# Execute
reads = book.reads()
    # Verify
assert len(reads) == 2
assert reads[0].date == datetime(2019, 1, 1)
assert reads[0].read_number == 1
assert reads[1].date == datetime(2020, 1, 1)
assert reads[1].read_number == 2 | 32.526718 | 139 | 0.663459 | 682 | 4,261 | 3.964809 | 0.165689 | 0.053254 | 0.072115 | 0.04142 | 0.691938 | 0.657914 | 0.596524 | 0.411243 | 0.348003 | 0.307322 | 0 | 0.080313 | 0.190566 | 4,261 | 131 | 140 | 32.526718 | 0.703682 | 0.186811 | 0 | 0.468354 | 0 | 0 | 0.141153 | 0 | 0 | 0 | 0 | 0 | 0.341772 | 1 | 0.189873 | false | 0 | 0.037975 | 0 | 0.227848 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f3718eeff96e99b364364254176ffbb225e3ed47 | 993 | py | Python | gym_collision_avoidance/experiments/src/test_pytorch.py | meghdeepj/Social-Navigation-Simulator | 806d304081bf5ff4fc7a0a58defb050627375865 | [
"MIT"
] | null | null | null | gym_collision_avoidance/experiments/src/test_pytorch.py | meghdeepj/Social-Navigation-Simulator | 806d304081bf5ff4fc7a0a58defb050627375865 | [
"MIT"
] | null | null | null | gym_collision_avoidance/experiments/src/test_pytorch.py | meghdeepj/Social-Navigation-Simulator | 806d304081bf5ff4fc7a0a58defb050627375865 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
print(torch.cuda.is_available())
device=torch.device("cpu")
# device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
class PatNet(nn.Module):
    ## HYPERPARAMETERS
def __init__(self, nA, nS):
super(PatNet,self).__init__()
self.nA=nA
self.input_dims=nS
self.fc1=nn.Linear(self.input_dims, 128) # 1st Hidden Layer
self.fc2=nn.Linear(128, 84) # 2nd Hidden Layer
self.fc3=nn.Linear(84, self.nA) # Output Layer
# self.dropout=nn.Dropout(p=0.2) # Dropout
def forward(self, observation):
observation=F.relu(self.fc1(observation))
observation=F.relu(self.fc2(observation))
qsa=self.fc3(observation)
return qsa
class DefNet(nn.Module):
def __init__(self):
        super(DefNet, self).__init__()  # nn.Module subclasses must initialise the base class
Q_policy=PatNet(4, 6)
# Q_policy=DefNet()
print("tested") | 26.837838 | 73 | 0.635448 | 139 | 993 | 4.410072 | 0.388489 | 0.071778 | 0.042414 | 0.065253 | 0.101142 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030383 | 0.237664 | 993 | 37 | 74 | 26.837838 | 0.779392 | 0.205438 | 0 | 0 | 0 | 0 | 0.011538 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12 | false | 0.04 | 0.16 | 0 | 0.4 | 0.12 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f37405eeefd850d1e3f3acef5be96b619c25c9aa | 11,125 | py | Python | templates/illumraw2call.py | lvclark/h3agwas | 5e42e60123b819d3c331a91b25ee50846e55af3b | [
"MIT"
] | 62 | 2016-08-29T11:27:35.000Z | 2022-03-10T17:16:14.000Z | templates/illumraw2call.py | lvclark/h3agwas | 5e42e60123b819d3c331a91b25ee50846e55af3b | [
"MIT"
] | 33 | 2016-12-26T13:48:19.000Z | 2021-12-05T13:34:06.000Z | templates/illumraw2call.py | lvclark/h3agwas | 5e42e60123b819d3c331a91b25ee50846e55af3b | [
"MIT"
] | 50 | 2017-04-15T04:17:43.000Z | 2022-03-30T07:26:01.000Z | #! /usr/bin/env python
from __future__ import print_function
import argparse
import os
import sys
import struct
from numpy import empty, uint32,fromfile,uint16
# we avoid the use of backslashes to assist in templatising the code for Nextflow
TAB=unichr(9)
EOL=unichr(10)
FID_nSNPsRead = 1000
FID_IlluminaID = 102
FID_SD = 103
FID_Mean = 104
FID_NBeads = 107
FID_MidBlock = 200
FID_RunInfo = 300
FID_RedGreen = 400
FID_MostlyNull = 401
FID_Barcode = 402
FID_ChipType = 403
FIDs = [FID_nSNPsRead, FID_IlluminaID, FID_SD, FID_Mean, FID_NBeads, FID_MidBlock, FID_RunInfo, FID_RedGreen, FID_MostlyNull, FID_Barcode, FID_ChipType]
def parseArguments():
parser=argparse.ArgumentParser()
parser.add_argument('sample', type=str, metavar='samplesheet'),
parser.add_argument('idat', type=str, metavar='IDATDIR',help="directory where IDAT files can be found"),
parser.add_argument('manifest', type=str, metavar='MANIFESTFILE',help="file with Illumina manifest"),
parser.add_argument('--sample-sample-column',dest="sample_col",type=int,default=0,\
help="col in sample file where sample ID can be found (number cols from 0)")
parser.add_argument("--sentrix-barcode-column",dest="barcode_col",type=int,default=3,\
help="col in sample file where Sentrix barccode found (number cols from 0)")
parser.add_argument("--sentrix-position-column",dest="position_col",type=int,default=4,\
help="col in sample file where Sentrix barccode found (number cols from 0)")
parser.add_argument("--sample-delimiter",dest="sample_delimiter",type=str,default=",",\
help="what separates entries in the sample file"),
parser.add_argument("-a","--allow-missing-idat",dest="allow",action="store_true",default=False,\
help="if IDAT files are missing, report only, otherwise crash")
parser.add_argument("-s","--suppress-warnings",dest="suppress",action="store_true",default=False,\
help="suppress warnings -- be careful")
parser.add_argument("--skip-header",action="store_true",dest="has_header",default=False)
parser.add_argument("-n","--num-threads",dest="num_threads",type=int,default=1,\
help="number of threads for parallelism")
parser.add_argument("-c","--chrom-pos",dest="chrom_pos",action="store_true",default=False,\
help="show snp as chromosome-position (default SNP ID as in manifest")
parser.add_argument("-o","--out",dest="out",type=str,required=True,\
help="name of output file")
args = parser.parse_args()
return args
class SampleEntry:
def __init__(self,pid,fs):
self.pid=pid # person or sample ID
self.fs = fs # red and gree files
class SNP:
def __init__(self,addr_a,addr_b,strand,name,uid,chrom,pos,the_snps):
self.addr_a = addr_a
self.addr_b = addr_b
self.strand = strand
self.name = name
self.uid = uid
self.chrom = chrom
self.pos = pos
self.alleles = the_snps
self.a_pos = self.b_pos = None
def setPos(self,pos,ab): # position of probes in idat file
if ab == 0:
self.a_pos = pos
else:
self.b_pos = pos
def chrom_pos(self):
return (self.chrom,self.pos)
def __str__(self):
return("{}:{}".format(self.chrom,self.pos))
def getiDatHash(idat_dir):
''' dict of all idat files and thei locations : different projects organise their
idat files differently -- some flat, some in a hierarchical space '''
tree = os.walk(idat_dir)
hash = {}
for (d,subds,fs) in tree:
for f in fs:
if f.endswith(".idat"):
hash[f] = os.path.join(d,f)
return hash
def parseSampleSheet(args):
    # parse the sample file to extract the IDs of the participants and their corresponding
    # idat files. Print a warning or crash if the files don't exist
with open(args.sample) as mf:
idats=[]
for line in mf:
recs = line.split(args.sample_delimiter)
pid = recs[args.sample_col]
barcode = recs[args.barcode_col]
pos = recs[args.position_col]
curr_fs = []
ok= True
warning = ""
for colour in ["Red","Grn"]:
base_file = "{barcode}_{pos}_{colour}.idat".format(barcode=barcode,pos=pos,colour=colour)
f = idat_hash[base_file]
this_ok = os.access(f,os.R_OK)
                if not this_ok: warning=warning+"Warning: file {} does not exist or is not readable{}".format(f,EOL)
ok = ok & this_ok
curr_fs.append(f)
if not ok:
if args.allow:
if not args.suppress:
                        sys.stderr.write(warning+EOL)
continue
else:
sys.exit("Missing idat files: "+EOL+warning)
idats.append(SampleEntry(pid,curr_fs))
return idats
def colsOfManifest(fnames):
''' return the index(base 0) of the column in the manifest file for the key fields we need'''
fields = []
for name in ["IlmnStrand","Name","SNP","AddressA_ID","AddressB_ID","Chr","MapInfo"]:
fields.append(fnames.index(name))
return fields
def getManifest(args):
    # Returns a list of all the SNPs plus an index for each probe saying which SNP it belongs to
snp_manifest = []
address_index= {}
with open(args.manifest) as f:
line=f.readline()
while line[:6] != "IlmnID":
line=f.readline()
fnames = line.split(",")
cols=colsOfManifest(fnames)
oldpos=oldchrom=1
for line in f:
fields = line.split(",")
if "Controls" in fields[0]: break
try:
(strand,name,snps,address_a,address_b,chrom,pos)=map(lambda col: fields[col],cols)
uid = "{}:{}".format(chrom,pos)
the_snps = snps[1:-1]
addr_a = int(address_a)
addr_b = int(address_b) if address_b else None
snp_manifest.append(SNP(addr_a,addr_b,strand,name,uid,chrom,pos,the_snps))
except IndexError:
if not args.suppress: sys.stderr.write(line)
snp_manifest.sort(key=SNP.chrom_pos)
print("Here",len(snp_manifest))
for i, snp in enumerate(snp_manifest):
address_index[snp.addr_a]=(i,0)
if snp.addr_b>=0: address_index[snp.addr_b]=(i,1)
print(len(address_index.keys()))
return (snp_manifest,address_index)
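
# Sketch of the manifest lines being parsed (abridged; real manifests have more
# columns, and colsOfManifest looks each field up by name, so order is flexible):
#   IlmnID,Name,IlmnStrand,SNP,AddressA_ID,AddressB_ID,Chr,MapInfo,...
#   rs123-1_B_R_111,rs123,BOT,[A/G],12345678,,1,1000101,...
# snps[1:-1] strips the brackets of "[A/G]"; an empty AddressB_ID yields addr_b = None.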

def getNum(f, num_bytes=4):
    ''' read one little-endian unsigned integer of the given width from f '''
    if num_bytes == 2:
        code = 'H'
    elif num_bytes == 4:
        code = 'L'
    elif num_bytes == 8:
        code = 'Q'
    else:
        raise ValueError("unsupported field width: {}".format(num_bytes))
    data = f.read(num_bytes)
    res, = struct.unpack("<%s" % code, data)
    return res
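
# The decoding getNum performs, e.g. for the 4-byte version field:
#   struct.unpack("<L", b"\x03\x00\x00\x00")[0]  -> 3
#   struct.unpack("<H", b"\xe8\x03")[0]          -> 1000  (a 2-byte field code)
# "<" forces little-endian interpretation regardless of the host platform.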

def getVals(fname):
    ''' return (probe addresses, mean intensities) from one IDAT file '''
    with open(fname, "rb") as f:
        magic_number = f.read(4)
        if magic_number != b"IDAT":  # binary mode, so compare against bytes
            sys.exit("Not an IDAT file")
        version = getNum(f)
        if version != 3:
            sys.exit("IDAT version 3 supported only, found {}".format(version))
        getNum(f)  # skip the high half of the 8-byte version field
        fcount = getNum(f)
        # the field directory: fcount entries of (2-byte field code, 8-byte offset)
        field_val = {}
        for i in range(fcount):
            fcode = getNum(f, 2)
            offset = getNum(f, 8)
            field_val[fcode] = offset
        f.seek(field_val[FID_nSNPsRead])
        num_markers = getNum(f)
        f.seek(field_val[FID_Barcode])
        bcode = getNum(f)  # read but unused
        f.seek(field_val[FID_IlluminaID])
        iids = fromfile(f, dtype=uint32, count=num_markers)
        f.seek(field_val[FID_Mean])
        vals = fromfile(f, dtype=uint16, count=num_markers)
    return (iids, vals)
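
# Usage sketch (file name hypothetical):
#   iids, vals = getVals("200123456789_R01C01_Grn.idat")
#   # iids[k] is the bead/probe address of entry k, vals[k] its mean intensity,
#   # so dict(zip(iids, vals)) maps address -> green-channel intensity.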

def probeIndexInData(idatf, smf, aidx):
    ''' read one IDAT file and record, for each SNP, where its probe(s) sit in the data '''
    print(idatf)
    probe_addr, intensities = getVals(idatf)
    for i, addr in enumerate(probe_addr):
        try:
            snp_pos, ab = aidx[addr]
            smf[snp_pos].setPos(i, ab)
        except KeyError:
            if not args.suppress:  # args is the module-level namespace set in __main__
                sys.stderr.write("Warning: Probe {} not in manifest{}".format(addr, EOL))
    for snp in smf:
        if snp.a_pos is None and not args.suppress:
            sys.stderr.write("Warning: SNP {} not in idat file{}".format(snp.name, EOL))

def getSNPIntensities(data, s_idx, res, smf):
    ''' For each SNP find the probe for that SNP and copy its values across
    data  -- numpy array where the data are to be stored
    s_idx -- index of the sample whose files we're dealing with
    res   -- per-channel arrays of probe intensities (0 = red, 1 = green)
    smf   -- SNP manifest '''
    for i, snp in enumerate(smf):
        a_idx = snp.a_pos
        if a_idx is None: continue  # warning given earlier
        for colour in [0, 1]:
            data[s_idx, colour, i] = res[colour][a_idx]
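
# data is indexed [sample, channel, snp], with channel 0 = Red and 1 = Grn
# (the order the files appear in SampleEntry.fs). For example, the green
# intensity of sample 0 at SNP j is data[0, 1, j].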

def showHeading(f, idats):
    f.write("SNP{}Coord{}Alleles".format(TAB, TAB))
    for entry in idats:
        # two columns per sample, matching the red/green pair written by showSNP
        f.write("{}{}_Red{}{}_Grn".format(TAB, entry.pid, TAB, entry.pid))
    f.write(EOL)

def showSNP(f, data, snp_i, snp, address_pos, AB, num):
    if address_pos is None:
        return
    if args.chrom_pos:
        f.write(snp.uid + AB)
    else:
        f.write(snp.name + AB)  # the suffix distinguishes the B probe's row
    # str(snp) is "chrom:pos", matching the Coord header column
    f.write("{}{}{}{}".format(TAB, snp, TAB, snp.alleles.replace("/", "")))
    for sample_i in range(num):
        for colour in [0, 1]:
            f.write("{}{}".format(TAB, data[sample_i, colour, snp_i]))
    f.write(EOL)

def showIntensities(args, data, smf, idats, num=-1):
    ''' write one CSV per chromosome; relies on smf being sorted by chromosome '''
    if num == -1: num = data.shape[0]
    old_chrom = f = None
    for snp_i, snp in enumerate(smf):
        if snp.chrom != old_chrom:
            if old_chrom: f.close()
            f = open(args.out + "_" + snp.chrom + ".csv", "w")
            showHeading(f, idats)
            old_chrom = snp.chrom
        showSNP(f, data, snp_i, snp, snp.addr_a, "", num)
        showSNP(f, data, snp_i, snp, snp.addr_b, "B", num)  # no-op when there is no B probe
    if f: f.close()
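
# Output sketch (values hypothetical): each per-chromosome CSV looks like
#   SNP<TAB>Coord<TAB>Alleles<TAB>PID001_Red<TAB>PID001_Grn<TAB>...
#   rs123<TAB>1:1000101<TAB>AG<TAB>10543<TAB>872<TAB>...
# with a second, B-suffixed row for SNPs that also have a B probe.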

def batchProcessIDATS(args, data, idats, curr_b, smf):
    batch = range(len(idats))[curr_b]  # the sample indices in this batch
    for i in batch:
        sample = idats[i].fs
        res = [getVals(fn)[1] for fn in sample]  # a list (not a map object) so it can be indexed by channel
        print(idats[i].pid)
        getSNPIntensities(data, i, res, smf)

def processIDATS(args, idats, smf, aidx):
    n_samples = len(idats)
    n_snps = len(smf)
    data = empty((n_samples, 2, n_snps), dtype=uint32)
    # probe order is identical across IDATs of one chip type, so index it once
    probeIndexInData(idats[0].fs[0], smf, aidx)
    for batch in range(args.num_threads):
        # NB: batches are processed serially, one slice of samples at a time
        curr_b = slice(batch, n_samples, args.num_threads)
        batchProcessIDATS(args, data, idats, curr_b, smf)
    showIntensities(args, data, smf, idats, min(10, n_samples))  # only the first few samples are written

if __name__ == '__main__':
    args = parseArguments()
    print("Reading sample sheet")
    idats = parseSampleSheet(args)
    print("Reading manifest")
    (smf, aidx) = getManifest(args)
    print("Processing idat files")
    processIDATS(args, idats, smf, aidx)

# ===========================================================================
# spacy/lang/ru/tokenizer_exceptions.py  (repo: snosrap/spaCy, MIT license)
# ===========================================================================
from ..tokenizer_exceptions import BASE_EXCEPTIONS
from ...symbols import ORTH, NORM
from ...util import update_exc
_exc = {}
_abbrev_exc = [
# Weekdays abbreviations
{ORTH: "пн", NORM: "понедельник"},
{ORTH: "вт", NORM: "вторник"},
{ORTH: "ср", NORM: "среда"},
{ORTH: "чт", NORM: "четверг"},
{ORTH: "чтв", NORM: "четверг"},
{ORTH: "пт", NORM: "пятница"},
{ORTH: "сб", NORM: "суббота"},
{ORTH: "сбт", NORM: "суббота"},
{ORTH: "вс", NORM: "воскресенье"},
{ORTH: "вскр", NORM: "воскресенье"},
{ORTH: "воскр", NORM: "воскресенье"},
# Months abbreviations
{ORTH: "янв", NORM: "январь"},
{ORTH: "фев", NORM: "февраль"},
{ORTH: "февр", NORM: "февраль"},
{ORTH: "мар", NORM: "март"},
# {ORTH: "март", NORM: "март"},
{ORTH: "мрт", NORM: "март"},
{ORTH: "апр", NORM: "апрель"},
# {ORTH: "май", NORM: "май"},
{ORTH: "июн", NORM: "июнь"},
# {ORTH: "июнь", NORM: "июнь"},
{ORTH: "июл", NORM: "июль"},
# {ORTH: "июль", NORM: "июль"},
{ORTH: "авг", NORM: "август"},
{ORTH: "сен", NORM: "сентябрь"},
{ORTH: "сент", NORM: "сентябрь"},
{ORTH: "окт", NORM: "октябрь"},
{ORTH: "октб", NORM: "октябрь"},
{ORTH: "ноя", NORM: "ноябрь"},
{ORTH: "нояб", NORM: "ноябрь"},
{ORTH: "нбр", NORM: "ноябрь"},
{ORTH: "дек", NORM: "декабрь"},
]
for abbrev_desc in _abbrev_exc:
    abbrev = abbrev_desc[ORTH]
    for orth in (abbrev, abbrev.capitalize(), abbrev.upper()):
        _exc[orth] = [{ORTH: orth, NORM: abbrev_desc[NORM]}]
        _exc[orth + "."] = [{ORTH: orth + ".", NORM: abbrev_desc[NORM]}]
for abbr in [
# Year slang abbreviations
{ORTH: "2к15", NORM: "2015"},
{ORTH: "2к16", NORM: "2016"},
{ORTH: "2к17", NORM: "2017"},
{ORTH: "2к18", NORM: "2018"},
{ORTH: "2к19", NORM: "2019"},
{ORTH: "2к20", NORM: "2020"},
{ORTH: "2к21", NORM: "2021"},
{ORTH: "2к22", NORM: "2022"},
{ORTH: "2к23", NORM: "2023"},
{ORTH: "2к24", NORM: "2024"},
{ORTH: "2к25", NORM: "2025"},
]:
    _exc[abbr[ORTH]] = [abbr]
for abbr in [
# Profession and academic titles abbreviations
{ORTH: "ак.", NORM: "академик"},
{ORTH: "акад.", NORM: "академик"},
{ORTH: "д-р архитектуры", NORM: "доктор архитектуры"},
{ORTH: "д-р биол. наук", NORM: "доктор биологических наук"},
{ORTH: "д-р ветеринар. наук", NORM: "доктор ветеринарных наук"},
{ORTH: "д-р воен. наук", NORM: "доктор военных наук"},
{ORTH: "д-р геогр. наук", NORM: "доктор географических наук"},
{ORTH: "д-р геол.-минерал. наук", NORM: "доктор геолого-минералогических наук"},
{ORTH: "д-р искусствоведения", NORM: "доктор искусствоведения"},
{ORTH: "д-р ист. наук", NORM: "доктор исторических наук"},
{ORTH: "д-р культурологии", NORM: "доктор культурологии"},
{ORTH: "д-р мед. наук", NORM: "доктор медицинских наук"},
{ORTH: "д-р пед. наук", NORM: "доктор педагогических наук"},
{ORTH: "д-р полит. наук", NORM: "доктор политических наук"},
{ORTH: "д-р психол. наук", NORM: "доктор психологических наук"},
{ORTH: "д-р с.-х. наук", NORM: "доктор сельскохозяйственных наук"},
{ORTH: "д-р социол. наук", NORM: "доктор социологических наук"},
{ORTH: "д-р техн. наук", NORM: "доктор технических наук"},
{ORTH: "д-р фармацевт. наук", NORM: "доктор фармацевтических наук"},
{ORTH: "д-р физ.-мат. наук", NORM: "доктор физико-математических наук"},
{ORTH: "д-р филол. наук", NORM: "доктор филологических наук"},
{ORTH: "д-р филос. наук", NORM: "доктор философских наук"},
{ORTH: "д-р хим. наук", NORM: "доктор химических наук"},
{ORTH: "д-р экон. наук", NORM: "доктор экономических наук"},
{ORTH: "д-р юрид. наук", NORM: "доктор юридических наук"},
{ORTH: "д-р", NORM: "доктор"},
{ORTH: "д.б.н.", NORM: "доктор биологических наук"},
{ORTH: "д.г.-м.н.", NORM: "доктор геолого-минералогических наук"},
{ORTH: "д.г.н.", NORM: "доктор географических наук"},
{ORTH: "д.и.н.", NORM: "доктор исторических наук"},
{ORTH: "д.иск.", NORM: "доктор искусствоведения"},
{ORTH: "д.м.н.", NORM: "доктор медицинских наук"},
{ORTH: "д.п.н.", NORM: "доктор психологических наук"},
{ORTH: "д.пед.н.", NORM: "доктор педагогических наук"},
{ORTH: "д.полит.н.", NORM: "доктор политических наук"},
{ORTH: "д.с.-х.н.", NORM: "доктор сельскохозяйственных наук"},
{ORTH: "д.социол.н.", NORM: "доктор социологических наук"},
{ORTH: "д.т.н.", NORM: "доктор технических наук"},
{ORTH: "д.т.н", NORM: "доктор технических наук"},
{ORTH: "д.ф.-м.н.", NORM: "доктор физико-математических наук"},
{ORTH: "д.ф.н.", NORM: "доктор филологических наук"},
{ORTH: "д.филос.н.", NORM: "доктор философских наук"},
{ORTH: "д.фил.н.", NORM: "доктор филологических наук"},
{ORTH: "д.х.н.", NORM: "доктор химических наук"},
{ORTH: "д.э.н.", NORM: "доктор экономических наук"},
{ORTH: "д.э.н", NORM: "доктор экономических наук"},
{ORTH: "д.ю.н.", NORM: "доктор юридических наук"},
{ORTH: "доц.", NORM: "доцент"},
{ORTH: "и.о.", NORM: "исполняющий обязанности"},
{ORTH: "к.б.н.", NORM: "кандидат биологических наук"},
{ORTH: "к.воен.н.", NORM: "кандидат военных наук"},
{ORTH: "к.г.-м.н.", NORM: "кандидат геолого-минералогических наук"},
{ORTH: "к.г.н.", NORM: "кандидат географических наук"},
{ORTH: "к.геогр.н", NORM: "кандидат географических наук"},
{ORTH: "к.геогр.наук", NORM: "кандидат географических наук"},
{ORTH: "к.и.н.", NORM: "кандидат исторических наук"},
{ORTH: "к.иск.", NORM: "кандидат искусствоведения"},
{ORTH: "к.м.н.", NORM: "кандидат медицинских наук"},
{ORTH: "к.п.н.", NORM: "кандидат психологических наук"},
{ORTH: "к.псх.н.", NORM: "кандидат психологических наук"},
{ORTH: "к.пед.н.", NORM: "кандидат педагогических наук"},
{ORTH: "канд.пед.наук", NORM: "кандидат педагогических наук"},
{ORTH: "к.полит.н.", NORM: "кандидат политических наук"},
{ORTH: "к.с.-х.н.", NORM: "кандидат сельскохозяйственных наук"},
{ORTH: "к.социол.н.", NORM: "кандидат социологических наук"},
{ORTH: "к.с.н.", NORM: "кандидат социологических наук"},
{ORTH: "к.т.н.", NORM: "кандидат технических наук"},
{ORTH: "к.ф.-м.н.", NORM: "кандидат физико-математических наук"},
{ORTH: "к.ф.н.", NORM: "кандидат филологических наук"},
{ORTH: "к.фил.н.", NORM: "кандидат филологических наук"},
{ORTH: "к.филол.н", NORM: "кандидат филологических наук"},
{ORTH: "к.фарм.наук", NORM: "кандидат фармакологических наук"},
{ORTH: "к.фарм.н.", NORM: "кандидат фармакологических наук"},
{ORTH: "к.фарм.н", NORM: "кандидат фармакологических наук"},
{ORTH: "к.филос.наук", NORM: "кандидат философских наук"},
{ORTH: "к.филос.н.", NORM: "кандидат философских наук"},
{ORTH: "к.филос.н", NORM: "кандидат философских наук"},
{ORTH: "к.х.н.", NORM: "кандидат химических наук"},
{ORTH: "к.х.н", NORM: "кандидат химических наук"},
{ORTH: "к.э.н.", NORM: "кандидат экономических наук"},
{ORTH: "к.э.н", NORM: "кандидат экономических наук"},
{ORTH: "к.ю.н.", NORM: "кандидат юридических наук"},
{ORTH: "к.ю.н", NORM: "кандидат юридических наук"},
{ORTH: "канд. архитектуры", NORM: "кандидат архитектуры"},
{ORTH: "канд. биол. наук", NORM: "кандидат биологических наук"},
{ORTH: "канд. ветеринар. наук", NORM: "кандидат ветеринарных наук"},
{ORTH: "канд. воен. наук", NORM: "кандидат военных наук"},
{ORTH: "канд. геогр. наук", NORM: "кандидат географических наук"},
{ORTH: "канд. геол.-минерал. наук", NORM: "кандидат геолого-минералогических наук"},
{ORTH: "канд. искусствоведения", NORM: "кандидат искусствоведения"},
{ORTH: "канд. ист. наук", NORM: "кандидат исторических наук"},
{ORTH: "к.ист.н.", NORM: "кандидат исторических наук"},
{ORTH: "канд. культурологии", NORM: "кандидат культурологии"},
{ORTH: "канд. мед. наук", NORM: "кандидат медицинских наук"},
{ORTH: "канд. пед. наук", NORM: "кандидат педагогических наук"},
{ORTH: "канд. полит. наук", NORM: "кандидат политических наук"},
{ORTH: "канд. психол. наук", NORM: "кандидат психологических наук"},
{ORTH: "канд. с.-х. наук", NORM: "кандидат сельскохозяйственных наук"},
{ORTH: "канд. социол. наук", NORM: "кандидат социологических наук"},
{ORTH: "к.соц.наук", NORM: "кандидат социологических наук"},
{ORTH: "к.соц.н.", NORM: "кандидат социологических наук"},
{ORTH: "к.соц.н", NORM: "кандидат социологических наук"},
{ORTH: "канд. техн. наук", NORM: "кандидат технических наук"},
{ORTH: "канд. фармацевт. наук", NORM: "кандидат фармацевтических наук"},
{ORTH: "канд. физ.-мат. наук", NORM: "кандидат физико-математических наук"},
{ORTH: "канд. филол. наук", NORM: "кандидат филологических наук"},
{ORTH: "канд. филос. наук", NORM: "кандидат философских наук"},
{ORTH: "канд. хим. наук", NORM: "кандидат химических наук"},
{ORTH: "канд. экон. наук", NORM: "кандидат экономических наук"},
{ORTH: "канд. юрид. наук", NORM: "кандидат юридических наук"},
{ORTH: "в.н.с.", NORM: "ведущий научный сотрудник"},
{ORTH: "мл. науч. сотр.", NORM: "младший научный сотрудник"},
{ORTH: "м.н.с.", NORM: "младший научный сотрудник"},
{ORTH: "проф.", NORM: "профессор"},
{ORTH: "профессор.кафедры", NORM: "профессор кафедры"},
{ORTH: "ст. науч. сотр.", NORM: "старший научный сотрудник"},
{ORTH: "чл.-к.", NORM: "член корреспондент"},
{ORTH: "чл.-корр.", NORM: "член-корреспондент"},
{ORTH: "чл.-кор.", NORM: "член-корреспондент"},
{ORTH: "дир.", NORM: "директор"},
{ORTH: "зам. дир.", NORM: "заместитель директора"},
{ORTH: "зав. каф.", NORM: "заведующий кафедрой"},
{ORTH: "зав.кафедрой", NORM: "заведующий кафедрой"},
{ORTH: "зав. кафедрой", NORM: "заведующий кафедрой"},
{ORTH: "асп.", NORM: "аспирант"},
{ORTH: "гл. науч. сотр.", NORM: "главный научный сотрудник"},
{ORTH: "вед. науч. сотр.", NORM: "ведущий научный сотрудник"},
{ORTH: "науч. сотр.", NORM: "научный сотрудник"},
{ORTH: "к.м.с.", NORM: "кандидат в мастера спорта"},
]:
    _exc[abbr[ORTH]] = [abbr]
for abbr in [
# Literary phrases abbreviations
{ORTH: "и т.д.", NORM: "и так далее"},
{ORTH: "и т.п.", NORM: "и тому подобное"},
{ORTH: "т.д.", NORM: "так далее"},
{ORTH: "т.п.", NORM: "тому подобное"},
{ORTH: "т.е.", NORM: "то есть"},
{ORTH: "т.к.", NORM: "так как"},
{ORTH: "в т.ч.", NORM: "в том числе"},
{ORTH: "и пр.", NORM: "и прочие"},
{ORTH: "и др.", NORM: "и другие"},
{ORTH: "т.н.", NORM: "так называемый"},
]:
    _exc[abbr[ORTH]] = [abbr]
for abbr in [
# Appeal to a person abbreviations
{ORTH: "г-н", NORM: "господин"},
{ORTH: "г-да", NORM: "господа"},
{ORTH: "г-жа", NORM: "госпожа"},
{ORTH: "тов.", NORM: "товарищ"},
]:
    _exc[abbr[ORTH]] = [abbr]
for abbr in [
# Time periods abbreviations
{ORTH: "до н.э.", NORM: "до нашей эры"},
{ORTH: "по н.в.", NORM: "по настоящее время"},
{ORTH: "в н.в.", NORM: "в настоящее время"},
{ORTH: "наст.", NORM: "настоящий"},
{ORTH: "наст. время", NORM: "настоящее время"},
{ORTH: "г.г.", NORM: "годы"},
{ORTH: "гг.", NORM: "годы"},
{ORTH: "т.г.", NORM: "текущий год"},
]:
    _exc[abbr[ORTH]] = [abbr]
for abbr in [
# Address forming elements abbreviations
{ORTH: "респ.", NORM: "республика"},
{ORTH: "обл.", NORM: "область"},
{ORTH: "г.ф.з.", NORM: "город федерального значения"},
{ORTH: "а.обл.", NORM: "автономная область"},
{ORTH: "а.окр.", NORM: "автономный округ"},
{ORTH: "м.р-н", NORM: "муниципальный район"},
{ORTH: "г.о.", NORM: "городской округ"},
{ORTH: "г.п.", NORM: "городское поселение"},
{ORTH: "с.п.", NORM: "сельское поселение"},
{ORTH: "вн.р-н", NORM: "внутригородской район"},
{ORTH: "вн.тер.г.", NORM: "внутригородская территория города"},
{ORTH: "пос.", NORM: "поселение"},
{ORTH: "р-н", NORM: "район"},
{ORTH: "с/с", NORM: "сельсовет"},
{ORTH: "г.", NORM: "город"},
{ORTH: "п.г.т.", NORM: "поселок городского типа"},
{ORTH: "пгт.", NORM: "поселок городского типа"},
{ORTH: "р.п.", NORM: "рабочий поселок"},
{ORTH: "рп.", NORM: "рабочий поселок"},
{ORTH: "кп.", NORM: "курортный поселок"},
{ORTH: "гп.", NORM: "городской поселок"},
{ORTH: "п.", NORM: "поселок"},
{ORTH: "в-ки", NORM: "выселки"},
{ORTH: "г-к", NORM: "городок"},
{ORTH: "з-ка", NORM: "заимка"},
{ORTH: "п-к", NORM: "починок"},
{ORTH: "киш.", NORM: "кишлак"},
{ORTH: "п. ст. ", NORM: "поселок станция"},
{ORTH: "п. ж/д ст. ", NORM: "поселок при железнодорожной станции"},
{ORTH: "ж/д бл-ст", NORM: "железнодорожный блокпост"},
{ORTH: "ж/д б-ка", NORM: "железнодорожная будка"},
{ORTH: "ж/д в-ка", NORM: "железнодорожная ветка"},
{ORTH: "ж/д к-ма", NORM: "железнодорожная казарма"},
{ORTH: "ж/д к-т", NORM: "железнодорожный комбинат"},
{ORTH: "ж/д пл-ма", NORM: "железнодорожная платформа"},
{ORTH: "ж/д пл-ка", NORM: "железнодорожная площадка"},
{ORTH: "ж/д п.п.", NORM: "железнодорожный путевой пост"},
{ORTH: "ж/д о.п.", NORM: "железнодорожный остановочный пункт"},
{ORTH: "ж/д рзд.", NORM: "железнодорожный разъезд"},
{ORTH: "ж/д ст. ", NORM: "железнодорожная станция"},
{ORTH: "м-ко", NORM: "местечко"},
{ORTH: "д.", NORM: "деревня"},
{ORTH: "с.", NORM: "село"},
{ORTH: "сл.", NORM: "слобода"},
{ORTH: "ст. ", NORM: "станция"},
{ORTH: "ст-ца", NORM: "станица"},
{ORTH: "у.", NORM: "улус"},
{ORTH: "х.", NORM: "хутор"},
{ORTH: "рзд.", NORM: "разъезд"},
{ORTH: "зим.", NORM: "зимовье"},
{ORTH: "б-г", NORM: "берег"},
{ORTH: "ж/р", NORM: "жилой район"},
{ORTH: "кв-л", NORM: "квартал"},
{ORTH: "мкр.", NORM: "микрорайон"},
{ORTH: "ост-в", NORM: "остров"},
{ORTH: "платф.", NORM: "платформа"},
{ORTH: "п/р", NORM: "промышленный район"},
{ORTH: "р-н", NORM: "район"},
{ORTH: "тер.", NORM: "территория"},
{ORTH: "тер. СНО", NORM: "территория садоводческих некоммерческих объединений граждан"},
{ORTH: "тер. ОНО", NORM: "территория огороднических некоммерческих объединений граждан"},
{ORTH: "тер. ДНО", NORM: "территория дачных некоммерческих объединений граждан"},
{ORTH: "тер. СНТ", NORM: "территория садоводческих некоммерческих товариществ"},
{ORTH: "тер. ОНТ", NORM: "территория огороднических некоммерческих товариществ"},
{ORTH: "тер. ДНТ", NORM: "территория дачных некоммерческих товариществ"},
{ORTH: "тер. СПК", NORM: "территория садоводческих потребительских кооперативов"},
{ORTH: "тер. ОПК", NORM: "территория огороднических потребительских кооперативов"},
{ORTH: "тер. ДПК", NORM: "территория дачных потребительских кооперативов"},
{ORTH: "тер. СНП", NORM: "территория садоводческих некоммерческих партнерств"},
{ORTH: "тер. ОНП", NORM: "территория огороднических некоммерческих партнерств"},
{ORTH: "тер. ДНП", NORM: "территория дачных некоммерческих партнерств"},
{ORTH: "тер. ТСН", NORM: "территория товарищества собственников недвижимости"},
{ORTH: "тер. ГСК", NORM: "территория гаражно-строительного кооператива"},
{ORTH: "ус.", NORM: "усадьба"},
{ORTH: "тер.ф.х.", NORM: "территория фермерского хозяйства"},
{ORTH: "ю.", NORM: "юрты"},
{ORTH: "ал.", NORM: "аллея"},
{ORTH: "б-р", NORM: "бульвар"},
{ORTH: "взв.", NORM: "взвоз"},
{ORTH: "взд.", NORM: "въезд"},
{ORTH: "дор.", NORM: "дорога"},
{ORTH: "ззд.", NORM: "заезд"},
{ORTH: "км", NORM: "километр"},
{ORTH: "к-цо", NORM: "кольцо"},
{ORTH: "лн.", NORM: "линия"},
{ORTH: "мгстр.", NORM: "магистраль"},
{ORTH: "наб.", NORM: "набережная"},
{ORTH: "пер-д", NORM: "переезд"},
{ORTH: "пер.", NORM: "переулок"},
{ORTH: "пл-ка", NORM: "площадка"},
{ORTH: "пл.", NORM: "площадь"},
{ORTH: "пр-д", NORM: "проезд"},
{ORTH: "пр-к", NORM: "просек"},
{ORTH: "пр-ка", NORM: "просека"},
{ORTH: "пр-лок", NORM: "проселок"},
{ORTH: "пр-кт", NORM: "проспект"},
{ORTH: "проул.", NORM: "проулок"},
{ORTH: "рзд.", NORM: "разъезд"},
{ORTH: "ряд", NORM: "ряд(ы)"},
{ORTH: "с-р", NORM: "сквер"},
{ORTH: "с-к", NORM: "спуск"},
{ORTH: "сзд.", NORM: "съезд"},
{ORTH: "туп.", NORM: "тупик"},
{ORTH: "ул.", NORM: "улица"},
{ORTH: "ш.", NORM: "шоссе"},
{ORTH: "влд.", NORM: "владение"},
{ORTH: "г-ж", NORM: "гараж"},
{ORTH: "д.", NORM: "дом"},
{ORTH: "двлд.", NORM: "домовладение"},
{ORTH: "зд.", NORM: "здание"},
{ORTH: "з/у", NORM: "земельный участок"},
{ORTH: "кв.", NORM: "квартира"},
{ORTH: "ком.", NORM: "комната"},
{ORTH: "подв.", NORM: "подвал"},
{ORTH: "кот.", NORM: "котельная"},
{ORTH: "п-б", NORM: "погреб"},
{ORTH: "к.", NORM: "корпус"},
{ORTH: "ОНС", NORM: "объект незавершенного строительства"},
{ORTH: "оф.", NORM: "офис"},
{ORTH: "пав.", NORM: "павильон"},
{ORTH: "помещ.", NORM: "помещение"},
{ORTH: "раб.уч.", NORM: "рабочий участок"},
{ORTH: "скл.", NORM: "склад"},
{ORTH: "coop.", NORM: "сооружение"},
{ORTH: "стр.", NORM: "строение"},
{ORTH: "торг.зал", NORM: "торговый зал"},
{ORTH: "а/п", NORM: "аэропорт"},
{ORTH: "им.", NORM: "имени"},
]:
    _exc[abbr[ORTH]] = [abbr]
for abbr in [
# Others abbreviations
{ORTH: "тыс.руб.", NORM: "тысяч рублей"},
{ORTH: "тыс.", NORM: "тысяч"},
{ORTH: "руб.", NORM: "рубль"},
{ORTH: "долл.", NORM: "доллар"},
{ORTH: "прим.", NORM: "примечание"},
{ORTH: "прим.ред.", NORM: "примечание редакции"},
{ORTH: "см. также", NORM: "смотри также"},
{ORTH: "кв.м.", NORM: "квадрантный метр"},
{ORTH: "м2", NORM: "квадрантный метр"},
{ORTH: "б/у", NORM: "бывший в употреблении"},
{ORTH: "сокр.", NORM: "сокращение"},
{ORTH: "чел.", NORM: "человек"},
{ORTH: "б.п.", NORM: "базисный пункт"},
]:
    _exc[abbr[ORTH]] = [abbr]
TOKENIZER_EXCEPTIONS = update_exc(BASE_EXCEPTIONS, _exc)
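
# Usage sketch (illustrative; requires a blank Russian pipeline):
#   import spacy
#   nlp = spacy.blank("ru")
#   print([t.norm_ for t in nlp("т.е. в пн")])
#   # expected: ["то есть", "в", "понедельник"]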