hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ac654004ede38ed59ee1d06508160025db152b79 | 2,743 | py | Python | src/condor_tests/ornithology/fixtures.py | sridish123/htcondor | 481d975fd8602242f6a052aab04e20b0b560db89 | [
"Apache-2.0"
] | 217 | 2015-01-08T04:49:42.000Z | 2022-03-27T10:11:58.000Z | src/condor_tests/ornithology/fixtures.py | sridish123/htcondor | 481d975fd8602242f6a052aab04e20b0b560db89 | [
"Apache-2.0"
] | 185 | 2015-05-03T13:26:31.000Z | 2022-03-28T03:08:59.000Z | src/condor_tests/ornithology/fixtures.py | sridish123/htcondor | 481d975fd8602242f6a052aab04e20b0b560db89 | [
"Apache-2.0"
] | 133 | 2015-02-11T09:17:45.000Z | 2022-03-31T07:28:54.000Z | # Copyright 2020 HTCondor Team, Computer Sciences Department,
# University of Wisconsin-Madison, WI.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Mapping, Any, Optional
import collections
import pytest
CONFIG_IDS = collections.defaultdict(set)
def _check_params(params):
if params is None:
return True
for key in params.keys():
if "-" in key:
raise ValueError('config param keys must not include "-"')
def _add_config_ids(func, params):
if params is None:
return
CONFIG_IDS[func.__module__] |= params.keys()
PARAMS = Optional[Mapping[str, Any]]
def config(*args, params: PARAMS = None):
"""
Marks a function as a **config** fixture.
Config is always performed before any :func:`standup` or :func:`action` fixtures
are run.
    Parameters
    ----------
    params
        Optional mapping from parametrization id to value. The fixture is
        parametrized over ``params.values()`` with ``params.keys()`` as the
        pytest ids; keys must not contain ``-``.
    """
def decorator(func):
_check_params(params)
_add_config_ids(func, params)
return pytest.fixture(
scope="module",
params=params.values() if params is not None else None,
ids=params.keys() if params is not None else None,
)(func)
if len(args) == 1:
return decorator(args[0])
return decorator
def standup(*args):
"""
Marks a function as a **standup** fixture.
Standup is always performed after all :func:`config` fixtures have run,
and before any :func:`action` fixtures that depend on it.
"""
def decorator(func):
return pytest.fixture(scope="class")(func)
if len(args) == 1:
return decorator(args[0])
return decorator
def action(*args, params: PARAMS = None):
"""
Marks a function as an **action** fixture.
Actions are always performed after all :func:`standup` fixtures have run,
and before any tests that depend on them.
    Parameters
    ----------
    params
        Optional mapping from parametrization id to value, as for
        :func:`config`; keys must not contain ``-``.
    """
_check_params(params)
def decorator(func):
return pytest.fixture(
scope="class",
params=params.values() if params is not None else None,
ids=params.keys() if params is not None else None,
)(func)
if len(args) == 1:
return decorator(args[0])
return decorator
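# --- Illustrative usage sketch (not part of the original file). The fixture
# names, params, and helpers below are hypothetical; they only show how the
# three decorators are meant to be combined in a test module:
#
#     @config(params={"fast": {"POLL_INTERVAL": 1}, "slow": {"POLL_INTERVAL": 30}})
#     def condor_config(request):
#         return request.param
#
#     @standup
#     def condor(condor_config):
#         return start_condor(condor_config)  # hypothetical helper
#
#     @action
#     def job_queue(condor):
#         return condor.submit()  # hypothetical helper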
| 25.398148 | 84 | 0.650747 | 363 | 2,743 | 4.867769 | 0.352617 | 0.047538 | 0.033956 | 0.029428 | 0.39219 | 0.329938 | 0.269949 | 0.269949 | 0.178268 | 0.178268 | 0 | 0.006806 | 0.250091 | 2,743 | 107 | 85 | 25.635514 | 0.852212 | 0.416332 | 0 | 0.545455 | 0 | 0 | 0.036814 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.068182 | 0.045455 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac6625a7a016999d9a79c8d0dc634ab9a942fcb1 | 4,931 | py | Python | cubes/net/serializers/_particle.py | DavisDmitry/pyCubes | f234b9d3df959401c96c9314d5fd319524c27763 | [
"MIT"
] | 8 | 2021-09-27T04:45:14.000Z | 2022-03-14T12:42:53.000Z | cubes/net/serializers/_particle.py | DavisDmitry/pyCubes | f234b9d3df959401c96c9314d5fd319524c27763 | [
"MIT"
] | 57 | 2021-10-08T07:08:31.000Z | 2022-03-04T07:30:07.000Z | cubes/net/serializers/_particle.py | DavisDmitry/pyCubes | f234b9d3df959401c96c9314d5fd319524c27763 | [
"MIT"
] | null | null | null | import io
from cubes.net.serializers import _abc, _mixins, _simple, _slot, _var_length
from cubes.types_ import particle
class ParticleSerializer(
_mixins.BufferSerializeMixin[particle.Particle],
_abc.AbstractSerializer[particle.Particle],
):
@classmethod
def validate(cls, value: particle.Particle) -> None:
""""""
@classmethod
def deserialize(cls, data: bytes) -> particle.Particle:
return cls.from_buffer(io.BytesIO(data))
def to_buffer(self, buffer: io.BytesIO) -> None:
_var_length.VarIntSerializer(self._value.id, validate=False).to_buffer(buffer)
match self._value.id:
case particle.ParticleID.BLOCK | particle.ParticleID.FALLING_DUST:
_var_length.VarIntSerializer(self._value.block_state).to_buffer(buffer)
case particle.ParticleID.DUST:
value: particle.DustParticle = self._value
_simple.FloatSerializer(value.red, validate=False).to_buffer(buffer)
_simple.FloatSerializer(value.green, validate=False).to_buffer(buffer)
_simple.FloatSerializer(value.blue, validate=False).to_buffer(buffer)
_simple.FloatSerializer(value.scale, validate=False).to_buffer(buffer)
case particle.ParticleID.DUST_COLOR_TRANSITION:
value: particle.DustColorTransitionParticle = self._value
_simple.FloatSerializer(value.from_red, validate=False).to_buffer(
buffer
)
_simple.FloatSerializer(value.from_green, validate=False).to_buffer(
buffer
)
_simple.FloatSerializer(value.from_blue, validate=False).to_buffer(
buffer
)
_simple.FloatSerializer(value.scale, validate=False).to_buffer(buffer)
_simple.FloatSerializer(value.to_red, validate=False).to_buffer(buffer)
_simple.FloatSerializer(value.to_green, validate=False).to_buffer(
buffer
)
_simple.FloatSerializer(value.to_blue, validate=False).to_buffer(buffer)
case particle.ParticleID.ITEM:
_slot.SlotSerializer(self._value.item, validate=False).to_buffer(buffer)
case particle.ParticleID.VIBRATION:
value: particle.VibrationParticle = self._value
_simple.DoubleSerializer(value.origin_x, validate=False).to_buffer(
buffer
)
_simple.DoubleSerializer(value.origin_y, validate=False).to_buffer(
buffer
)
_simple.DoubleSerializer(value.origin_z, validate=False).to_buffer(
buffer
)
_simple.DoubleSerializer(value.dest_x, validate=False).to_buffer(buffer)
_simple.DoubleSerializer(value.dest_y, validate=False).to_buffer(buffer)
_simple.DoubleSerializer(value.dest_z, validate=False).to_buffer(buffer)
_simple.IntSerializer(value.ticks, validate=False).to_buffer(buffer)
@classmethod
def from_buffer(cls, buffer: io.BytesIO) -> particle.Particle:
particle_id = particle.ParticleID(
_var_length.VarIntSerializer.from_buffer(buffer)
)
match particle_id:
case particle.ParticleID.BLOCK:
result = particle.BlockParticle(
_var_length.VarIntSerializer.from_buffer(buffer)
)
case particle.ParticleID.DUST:
result = particle.DustParticle(
*[_simple.FloatSerializer.from_buffer(buffer) for _ in range(4)]
)
case particle.ParticleID.DUST_COLOR_TRANSITION:
from_colors = [
_simple.FloatSerializer.from_buffer(buffer) for _ in range(3)
]
scale = _simple.FloatSerializer.from_buffer(buffer)
result = particle.DustColorTransitionParticle(
*from_colors,
*[_simple.FloatSerializer.from_buffer(buffer) for _ in range(3)],
scale,
)
case particle.ParticleID.FALLING_DUST:
result = particle.FallingDustParticle(
_var_length.VarIntSerializer.from_buffer(buffer)
)
case particle.ParticleID.ITEM:
result = particle.ItemParticle(_slot.SlotSerializer.from_buffer(buffer))
case particle.ParticleID.VIBRATION:
result = particle.VibrationParticle(
*[_simple.DoubleSerializer.from_buffer(buffer) for _ in range(6)],
_simple.IntSerializer.from_buffer(buffer),
)
case _:
result = particle.Particle(particle_id)
return result
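# --- Illustrative round-trip sketch (not part of the original file). The
# constructor arguments and the serialize() entry point are assumptions
# about the surrounding package, inferred from from_buffer/to_buffer above:
#
#     dust = particle.DustParticle(1.0, 0.0, 0.0, 2.0)  # red, green, blue, scale
#     data = ParticleSerializer(dust).serialize()       # assumed mixin method
#     assert ParticleSerializer.deserialize(data) == dust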
| 47.873786 | 88 | 0.610018 | 454 | 4,931 | 6.372247 | 0.162996 | 0.128586 | 0.101625 | 0.145178 | 0.635327 | 0.536122 | 0.468718 | 0.44141 | 0.383685 | 0.105081 | 0 | 0.00117 | 0.306834 | 4,931 | 102 | 89 | 48.343137 | 0.845231 | 0 | 0 | 0.242105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042105 | false | 0 | 0.031579 | 0.010526 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac66f6b660a52f080c6d19571035ed57297e0ca5 | 1,100 | py | Python | cmapPy/pandasGEXpress/view.py | Cellular-Longevity/cmapPy | abd4349f28af6d035f69fe8c399fde7bef8dd635 | [
"BSD-3-Clause"
] | null | null | null | cmapPy/pandasGEXpress/view.py | Cellular-Longevity/cmapPy | abd4349f28af6d035f69fe8c399fde7bef8dd635 | [
"BSD-3-Clause"
] | 10 | 2022-03-14T18:40:45.000Z | 2022-03-22T12:45:02.000Z | cmapPy/pandasGEXpress/view.py | Cellular-Longevity/cmapPy | abd4349f28af6d035f69fe8c399fde7bef8dd635 | [
"BSD-3-Clause"
] | null | null | null | import h5py
__author__ = "David Tingley"
__email__ = "davidtingley2@gmail.com"
def view(filename,printing=True,filter=None):
'''
cmapPy equivalent to h5ls
Input
filename - name of HDF5 file
    printing - if True, print each node name and shape
    filter - substring to filter on (e.g. "DATA" or "META")
Output
nodeNames - names of H5 nodes returned
'''
h5ls = H5ls()
# this will now visit all objects inside the hdf5 file and store datasets in h5ls.names
df = h5py.File(filename,'r')
df.visititems(h5ls)
df.close()
if printing:
[print(name,shape) for name,shape in zip(h5ls.names,h5ls.shapes)]
if filter is not None:
return [name for name in h5ls.names if filter in name]
else:
return h5ls.names
class H5ls:
def __init__(self):
# Store an empty list for dataset names
self.names = []
self.shapes = []
def __call__(self, name, h5obj):
if hasattr(h5obj,'dtype') and not name in self.names:
self.names += [name]
self.shapes.append(h5obj.shape)
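# --- Illustrative usage (not part of the original file); the file name is
# hypothetical:
#
#     names = view("my_data.gctx", printing=False, filter="DATA")
#     # -> e.g. ['0/DATA/0/matrix'], depending on the file's layout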
| 27.5 | 91 | 0.623636 | 150 | 1,100 | 4.466667 | 0.52 | 0.053731 | 0.032836 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024051 | 0.281818 | 1,100 | 39 | 92 | 28.205128 | 0.824051 | 0.309091 | 0 | 0 | 0 | 0 | 0.058496 | 0.032033 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0 | 0.045455 | 0 | 0.318182 | 0.136364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac673ff2ac96eba5fd2db0fb760c65fb85d81885 | 1,568 | py | Python | src/ka/cli.py | Kevinpgalligan/ki | da6134c80e545a98a360103f3ee7337d1f088c05 | [
"MIT"
] | null | null | null | src/ka/cli.py | Kevinpgalligan/ki | da6134c80e545a98a360103f3ee7337d1f088c05 | [
"MIT"
] | null | null | null | src/ka/cli.py | Kevinpgalligan/ki | da6134c80e545a98a360103f3ee7337d1f088c05 | [
"MIT"
] | null | null | null | import argparse
import sys
from .interpret import (run_interpreter, execute,
print_units, print_functions, print_unit_info,
print_function_info)
def add_and_store_argument(parser, flaglist, name, **kwargs):
flaglist.append(name)
parser.add_argument(name, **kwargs)
def main():
parser = argparse.ArgumentParser(description="A calculator language. Run with no arguments to start the interpreter.")
parser.add_argument("x", nargs="?", help="The statements to evaluate.")
flaglist = ["-h", "--help"]
add_and_store_argument(parser, flaglist, "--units", action="store_true", help="List all available units.")
add_and_store_argument(parser, flaglist, "--functions", action="store_true", help="List all available functions.")
add_and_store_argument(parser, flaglist, "--unit", help="See the details of a particular unit.")
add_and_store_argument(parser, flaglist, "--function", help="See the details of a particular function.")
add_and_store_argument(parser, flaglist, "--gui", help="Start the Graphical User Interface.", action="store_true")
raw_args = sys.argv[1:]
if len(raw_args) == 1 and raw_args[0] not in flaglist:
sys.exit(execute(raw_args[0]))
args = parser.parse_args()
if args.units:
print_units()
elif args.functions:
print_functions()
elif args.unit:
print_unit_info(args.unit)
elif args.function:
print_function_info(args.function)
elif args.gui:
from .gui import run_gui
run_gui()
else:
run_interpreter()
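# --- Illustrative invocations (not part of the original file), assuming the
# package installs this main() as a `ka` console script:
#
#     $ ka "1 + 2"      # evaluate a single statement
#     $ ka --units      # list all available units
#     $ ka --unit m     # details of one unit (unit name is hypothetical)
#     $ ka              # no arguments: start the interpreter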
| 37.333333 | 122 | 0.698342 | 209 | 1,568 | 5.028708 | 0.325359 | 0.034253 | 0.062797 | 0.108468 | 0.312084 | 0.312084 | 0.123692 | 0 | 0 | 0 | 0 | 0.003115 | 0.181122 | 1,568 | 41 | 123 | 38.243902 | 0.815421 | 0 | 0 | 0 | 0 | 0 | 0.21875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.117647 | 0 | 0.176471 | 0.176471 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac691659812ec1c91b4b2613709fac1083ef2f2b | 4,790 | py | Python | DOPE/visu.py | omarsou/sign_language_project | 3f85e46cc2bba5ddf0c379b1ce76e0f2d6e3b6ab | [
"Apache-2.0"
] | 3 | 2020-12-14T11:40:06.000Z | 2021-01-15T07:56:11.000Z | DOPE/visu.py | omarsou/sign_language_project | 3f85e46cc2bba5ddf0c379b1ce76e0f2d6e3b6ab | [
"Apache-2.0"
] | null | null | null | DOPE/visu.py | omarsou/sign_language_project | 3f85e46cc2bba5ddf0c379b1ce76e0f2d6e3b6ab | [
"Apache-2.0"
] | null | null | null | # Copyright 2020-present NAVER Corp.
# CC BY-NC-SA 4.0
# Available only for non-commercial use
import numpy as np
import cv2
def _get_bones_and_colors(J, ignore_neck=False): # colors in BGR
"""
param J: number of joints -- used to deduce the body part considered.
    param ignore_neck: if True, the neck bone won't be returned in the case of a body (J==13)
"""
if J==13: # full body (similar to LCR-Net)
lbones = [(9,11),(7,9),(1,3),(3,5)]
if ignore_neck:
rbones = [(0,2),(2,4),(8,10),(6,8)] + [(4,5),(10,11)] + [([4,5],[10,11])]
else:
rbones = [(0,2),(2,4),(8,10),(6,8)] + [(4,5),(10,11)] + [([4,5],[10,11]),(12,[10,11])]
bonecolors = [ [0,255,0] ] * len(lbones) + [ [255,0,0] ] * len(rbones)
pltcolors = [ 'g-' ] * len(lbones) + [ 'b-' ] * len(rbones)
bones = lbones + rbones
elif J==21: # hand (format similar to HO3D dataset)
bones = [ [(0,n+1),(n+1,3*n+6),(3*n+6,3*n+7),(3*n+7,3*n+8)] for n in range(5)]
bones = sum(bones,[])
bonecolors = [(255,0,255)]*4 + [(255,0,0)]*4 + [(0,255,0)]*4 + [(0,255,255)]*4 + [(0,0,255)] *4
pltcolors = ['m']*4 + ['b']*4 + ['g']*4 + ['y']*4 + ['r']*4
elif J==84: # face (ibug format)
bones = [ (n,n+1) for n in range(83) if n not in [32,37,42,46,51,57,63,75]] + [(52,57),(58,63),(64,75),(76,83)]
        # 32 x contour + 4 x r-eyebrow + 4 x l-eyebrow + 7 x nose + 5 x l-eye + 5 x r-eye + 20 x lip + l-eye + r-eye + lip + lip
bonecolors = 32 * [(255,0,0)] + 4*[(255,0,0)] + 4*[(255,255,0)] + 7*[(255,0,255)] + 5*[(0,255,255)] + 5*[(0,255,0)] + 18*[(0,0,255)] + [(0,255,255),(0,255,0),(0,0,255),(0,0,255)]
pltcolors = 32 * ['b'] + 4*['b'] + 4*['c'] + 7*['m'] + 5*['y'] + 5*['g'] + 18*['r'] + ['y','g','r','r']
else:
raise NotImplementedError('unknown bones/colors for J='+str(J))
return bones, bonecolors, pltcolors
def _get_xy(pose2d, i):
if isinstance(i,int):
return pose2d[i,:]
else:
return np.mean(pose2d[i,:], axis=0)
def _get_xy_tupleint(pose2d, i):
return tuple(map(int,_get_xy(pose2d, i)))
def _get_xyz(pose3d, i):
if isinstance(i,int):
return pose3d[i,:]
else:
return np.mean(pose3d[i,:], axis=0)
def visualize_bodyhandface2d(im, dict_poses2d, dict_scores=None, lw=2, max_padding=100, bgr=True):
"""
bgr: whether input/output is bgr or rgb
dict_poses2d: some key/value among {'body': body_pose2d, 'hand': hand_pose2d, 'face': face_pose2d}
"""
if all(v.size==0 for v in dict_poses2d.values()): return im
h,w = im.shape[:2]
bones = {}
bonecolors = {}
for k,v in dict_poses2d.items():
bones[k], bonecolors[k], _ = _get_bones_and_colors(v.shape[1])
# pad if necessary (if some joints are outside image boundaries)
pad_top, pad_bot, pad_lft, pad_rgt = 0, 0, 0, 0
for poses2d in dict_poses2d.values():
if poses2d.size==0: continue
xmin, ymin = np.min(poses2d.reshape(-1,2), axis=0)
xmax, ymax = np.max(poses2d.reshape(-1,2), axis=0)
pad_top = max(pad_top, min(max_padding, max(0, int(-ymin-5))))
pad_bot = max(pad_bot, min(max_padding, max(0, int(ymax+5-h))))
pad_lft = max(pad_lft, min(max_padding, max(0, int(-xmin-5))))
pad_rgt = max(pad_rgt, min(max_padding, max(0, int(xmax+5-w))))
imout = cv2.copyMakeBorder(im, top=pad_top, bottom=pad_bot, left=pad_lft, right=pad_rgt, borderType=cv2.BORDER_CONSTANT, value=[0,0,0] )
if not bgr: imout = np.ascontiguousarray(imout[:,:,::-1])
outposes2d = {}
for part,poses2d in dict_poses2d.items():
outposes2d[part] = poses2d.copy()
outposes2d[part][:,:,0] += pad_lft
outposes2d[part][:,:,1] += pad_top
# for each part
for part, poses2d in outposes2d.items():
# draw each detection
for ipose in range(poses2d.shape[0]): # bones
pose2d = poses2d[ipose,...]
# draw poses
for ii, (i,j) in enumerate(bones[part]):
p1 = _get_xy_tupleint(pose2d, i)
p2 = _get_xy_tupleint(pose2d, j)
cv2.line(imout, p1, p2, bonecolors[part][ii], thickness=lw*2)
for j in range(pose2d.shape[0]):
p = _get_xy_tupleint(pose2d, j)
cv2.circle(imout, p, (2 if part!='face' else 1)*lw, (0,0,255), thickness=-1)
# draw scores
if dict_scores is not None: cv2.putText(imout, '{:.2f}'.format(dict_scores[part][ipose]), (int(pose2d[12,0]-10),int(pose2d[12,1]-10)), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0,0,0) )
if not bgr: imout = imout[:,:,::-1]
return imout
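# --- Illustrative usage sketch (not part of the original file). The input
# names and shapes are assumptions: `body_poses2d` is an (N, 13, 2) array of
# pixel coordinates and `im` is a BGR image:
#
#     im = cv2.imread("frame.jpg")  # hypothetical input image
#     out = visualize_bodyhandface2d(im, {"body": body_poses2d}, lw=2)
#     cv2.imwrite("frame_annotated.jpg", out)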
| 45.188679 | 187 | 0.54739 | 778 | 4,790 | 3.281491 | 0.269923 | 0.013318 | 0.011751 | 0.009401 | 0.160595 | 0.117509 | 0.033686 | 0.021152 | 0.021152 | 0.021152 | 0 | 0.104436 | 0.256367 | 4,790 | 105 | 188 | 45.619048 | 0.612296 | 0.153445 | 0 | 0.083333 | 0 | 0 | 0.014254 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069444 | false | 0 | 0.027778 | 0.013889 | 0.194444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac69633df24924958007fafe01724b96b07baafa | 10,128 | py | Python | mofa/analytics/src/dtAnalytics/dtPageViewAnalytics.py | BoxInABoxICT/BoxPlugin | ad351978faa37ab867a86d2f4023a2b3e5a2ce19 | [
"Apache-2.0"
] | null | null | null | mofa/analytics/src/dtAnalytics/dtPageViewAnalytics.py | BoxInABoxICT/BoxPlugin | ad351978faa37ab867a86d2f4023a2b3e5a2ce19 | [
"Apache-2.0"
] | null | null | null | mofa/analytics/src/dtAnalytics/dtPageViewAnalytics.py | BoxInABoxICT/BoxPlugin | ad351978faa37ab867a86d2f4023a2b3e5a2ce19 | [
"Apache-2.0"
] | null | null | null | # This program has been developed by students from the bachelor Computer Science at Utrecht University within the
# Software and Game project course
# ©Copyright Utrecht University Department of Information and Computing Sciences.
from analytics.src import utils
from analytics.src import lrsConnect as lrs
from datetime import datetime, timedelta
from django.conf import settings
from copy import deepcopy
import numpy as np
import pandas as pd
from sklearn import linear_model
import math
import os
import json
def dtViewedPages(scenarioID, courseid):
"""
Calculates the optimal regression model and returns the coefficients of the pages.
\n
:param scenarioID: The id of the scenario to calculate the correlation for \t
:type scenarioID: string \n
:param courseid: The course to correlate the scenario with. \t
:type courseid: string \n
:returns: A dictionary with the coefficients and some metadata \t
:rtype: dict \n
"""
d = os.path.dirname(os.path.realpath(__file__))
if (not os.path.isfile(f"{d}/response_{scenarioID}_{courseid}.json")):
getStudentData(scenarioID, courseid)
return dtViewedPages(scenarioID, courseid)
studentdata = loadStudentData(scenarioID, courseid)
if utils.hasError(studentdata):
return studentdata
featuredata = getFeatures(studentdata)
features = featuredata["features"]
scores = featuredata["scores"]
options = [0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8]
resultOptions = list(map(lambda percentage: linearRegression(features, scores, percentage), options))
best = resultOptions[0]
for result in resultOptions:
if result["predRMSE"] < best["predRMSE"]:
best = result
return best
def linearRegression(features, scores, trainpercentage=0.6):
"""
Create a Ridge linear regression model and return the RMSE and the pageCoefficients
\n
:param features: The dictionary of features to use \t
:type features: dict \n
:param scores: A list of the correct scores for the dictionary features \t
:type scores: list \n
:param trainpercentage: The percentage of data to use for training. The other part will be used for testing \t
:type trainpercentage: float \n
:returns: A dictionary with the data \t
:rtype: dict \n
"""
df = pd.DataFrame(features)
traincount = round(trainpercentage * len(df))
df_train = df[:traincount]
score_train = scores[:traincount]
df_test = df[traincount:]
score_test = scores[traincount:]
reg = linear_model.Ridge(alpha=2.0)
reg.fit(df_train, score_train)
pageCoefs = list(map(lambda page, coef: {"page": page, "coef": round(coef)}, features.keys(), reg.coef_))
medians = [utils.getMedian(score_test)] * len(score_test)
pred = reg.predict(df_test)
medianRMSE = getRMSE(medians, score_test)
predRMSE = getRMSE(pred, score_test)
return {
"trainingPercentage": trainpercentage * 100,
"intercept": reg.intercept_,
"medianRMSE": medianRMSE,
"predRMSE": predRMSE,
"pageCoefs": pageCoefs
}
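# Worked example of the split above (illustrative numbers, not from the
# original file): with 20 students and trainpercentage=0.6,
# traincount = round(0.6 * 20) = 12, so rows 0-11 fit the Ridge model and
# rows 12-19 are held out to compare predRMSE against medianRMSE.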
def getRMSE(set1, set2):
"""
Gets the Root Mean Squared Error of two same length sets.
\n
:param set1: The first set \t
:type set1: iterable \n
:param set2: A second set with the same length as set1 \t
:type set2: iterable \n
:returns: The Root Mean Squared Error \t
:rtype: float \n
"""
if (len(set1) == 0 or len(set2) == 0):
return 0
sqe = list(map(lambda a, b: (a - b) * (a - b), set1, set2))
return math.sqrt(sum(sqe) / len(sqe))
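# Doctest-style worked example (not part of the original file):
#
#     >>> round(getRMSE([1, 2, 3], [1, 2, 5]), 3)  # squared errors: 0, 0, 4
#     1.155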
def getStudentData(scenarioID, courseid):
"""
Collect the ids and timestamps of the students that completed a scenario and for each student, get the pages they looked at in the 14 days before that.
\n
:param scenarioID: The scenario to perform the analysis on \t
:type scenarioID: string \n
:param courseid: The courseid of the course the scenario belongs to \t
:type courseid: string \n
:returns: void \t
:rtype: void \n
"""
data = getDTdata(scenarioID)
studentdata = ""
if utils.hasError(data):
studentdata = data
else:
aggregated = groupByStudent(data)
studentdata = list(map(lambda data, actor: {"score": data["score"], "pages": getPageViewData(courseid, actor, data["timestamp"])}, list(aggregated.values()), list(aggregated.keys())))
d = os.path.dirname(os.path.realpath(__file__))
f = open(f"{d}/response_{scenarioID}_{courseid}.json", "w")
f.write(json.dumps(studentdata))
f.close()
def loadStudentData(scenarioID, courseid):
"""
Load the response file of one analysis
\n
:param scenarioID: The scenarioID to load the response file for \t
:type scenarioID: string \n
:param courseid: The courseid of the course to load the response file for \t
:type courseid: string \n
:returns: A json structured dictionary \t
:rtype: dict \n
"""
d = os.path.dirname(os.path.realpath(__file__))
f = open(f'{d}/response_{scenarioID}_{courseid}.json')
data = json.load(f)
return data
def getFeatures(studentdata):
"""
From a set of studentdata, extract and filter the features.
\n
:param studentdata: A set of student data (in the format of the response files) to extract features from \t
    :type studentdata: dict \n
:returns: A dictionary of features \t
:rtype: dict \n
"""
uniquepages = [a for b in list(map(lambda stm: stm["pages"], deepcopy(studentdata))) for a in b]
uniquepages = list(set(uniquepages))
featuredata = extractFeatures(uniquepages, studentdata)
    studentVisitTreshold = 0.05  # at least 5% of students need to visit the page
studentcount = len(studentdata)
featuredata["features"] = {k: v for k, v in featuredata["features"].items() if len(list(filter(lambda x: x > 0, v))) >= studentVisitTreshold * studentcount}
    rejected = len(uniquepages) - len(featuredata["features"])  # pages dropped by the threshold (currently unused)
return featuredata
def extractFeatures(pages, dataset):
"""
Extract the amount of page views for each student for each page
\n
:param pages: A list of all the (unique) pages to use \t
:type pages: list \n
:param dataset: A list of dictionaries, each dictionary representing one student and having at least the key "pages" \t
:type dataset: [dict] \n
:returns: A dictionary with two keys: "scores" and "features" \t
:rtype: dict \n
"""
scores = list()
pageslists = dict()
for page in pages:
pageslists[page] = list()
for datapoint in dataset:
scores.append(datapoint.get("score"))
for page in pages:
if page in datapoint["pages"]:
pageslists[page].append(datapoint["pages"][page])
else:
pageslists[page].append(0)
return {"scores": scores, "features": pageslists}
def getPageViewData(courseid, actorid, untiltime):
"""
For one student, get all the pages viewed between the untiltime and 14 days earlier
\n
:param courseid: The course to which the pages should belong \t
:type courseid: string \n
:param actorid: The full LRS id of the student to get the page views for \t
:type actorid: string (stringified json) \n
:param untiltime: A "YYYY-MM-DDThh:mm:ssZ" formatted timestamp \t
:type untiltime: string (datetime) \n
    :returns: A dictionary of visited pages with the number of visits for each page \t
:rtype: dict \n
"""
def stmInRange(lower, upper, stm):
timestamp = datetime.fromisoformat(stm["timestamp"].replace(".000Z", ""))
return lower < timestamp and timestamp < upper
untiltime = datetime.fromisoformat(untiltime.replace("Z", ""))
sincetime = untiltime - timedelta(days=14)
querydata = (
lrs.Query()
.select(lrs.Attr.ACTIVITY, lrs.Attr.TIMESTAMP)
.where(lrs.Attr.VERB, lrs.IS, "http://id.tincanapi.com/verb/viewed")
.where(lrs.Attr.CONTEXTACTIVITY, lrs.IS, f"http://localhost/course/view.php?id={courseid}")
.where(lrs.Attr.ACTIVITY, lrs.CONTAINS, "/mod/page/view.php")
.where(lrs.Attr.ACTOR, lrs.IS, actorid)
.execute()
)
querydata = filter(lambda stm: stmInRange(sincetime, untiltime, stm), querydata)
querydata = map(lambda stm: utils.getIdFromUrl(stm["activity"]), querydata)
querydata = utils.groupOn(querydata, utils.id, lambda x: 1, lambda total, x: total + 1)
return querydata
def getDTdata(scenarioID):
"""
Query the LRS for all the students that completed a specific scenario
\n
:param scenarioID: The id of the scenario that was completed \t
:type scenarioID: string \n
:returns: A list of scenario completions (xAPI statements), with for each completion the timestamp, the actor and the result \t
:rtype: [dict] \n
"""
return (
lrs.Query()
.where(lrs.Attr.ACTIVITY, lrs.IS, f"https://en.dialoguetrainer.app/scenario/play/{scenarioID}")
.where(lrs.Attr.VERB, lrs.IS, "https://adlnet.gov/expapi/verbs/completed")
.select(lrs.Attr.ACTOR, lrs.Attr.RESULT, lrs.Attr.TIMESTAMP)
.execute()
)
def groupByStudent(dataset):
"""
Calculate the total score of each scenario for each student and group the scenario completions by student
\n
:param dataset: A list of xAPI statements, each containing at least the actor, the timestamp and the result \t
:type dataset: [dict] \n
    :returns: A dictionary with, for each student, the score and timestamp of their first attempt \t
:rtype: dict<float> \n
"""
newDataset = utils.groupOn(
dataset,
lambda x: x["actor"],
lambda x: {"timestamp": x["timestamp"], "score": utils.getAverageScore(x["result"])},
# lambda total, x: {"timestamp": x["timestamp"], "score": getAverageScore(x["result"])} if getAverageScore(x["result"]) > total["score"] else total
lambda total, x: {"timestamp": x["timestamp"], "score": utils.getAverageScore(x["result"])}
)
return newDataset
| 36.431655 | 191 | 0.671603 | 1,334 | 10,128 | 5.074963 | 0.236882 | 0.016839 | 0.011817 | 0.011374 | 0.178877 | 0.131019 | 0.098966 | 0.08449 | 0.067061 | 0.045199 | 0 | 0.007711 | 0.218898 | 10,128 | 277 | 192 | 36.563177 | 0.847933 | 0.396228 | 0 | 0.085271 | 0 | 0 | 0.101449 | 0.021477 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085271 | false | 0 | 0.085271 | 0 | 0.271318 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac6a40a0a68d81e8eb46e62f6f967f895ff55e4f | 31,160 | py | Python | edl/hits.py | jmeppley/py-metagenomics | 0dbab073cb7e52c4826054e40eb802c9e0298e9a | [
"MIT"
] | 7 | 2015-05-14T09:36:36.000Z | 2022-03-30T14:32:21.000Z | edl/hits.py | jmeppley/py-metagenomics | 0dbab073cb7e52c4826054e40eb802c9e0298e9a | [
"MIT"
] | 1 | 2015-07-14T11:47:25.000Z | 2015-07-17T01:45:26.000Z | edl/hits.py | jmeppley/py-metagenomics | 0dbab073cb7e52c4826054e40eb802c9e0298e9a | [
"MIT"
] | 7 | 2015-07-25T22:29:29.000Z | 2022-03-01T21:26:14.000Z | from edl.util import parseMapFile
from edl.taxon import getNodeFromHit, \
getAncestorClosestToRank, \
readTaxonomy, \
add_taxonomy_dir_argument
from edl.blastm8 import filterM8Stream, \
FilterParams, \
formatsWithNoDescription, \
add_hit_table_arguments
from edl.expressions import accessionRE, nrOrgRE, koRE, giRE, pfamRE
import logging
logger = logging.getLogger(__name__)
#############
# Constants #
#############
ACCS = 'accs'
ORGS = 'orgs'
KEGG = 'kegg'
PFAM = 'pfam'
GIS = 'gis'
HITID = 'hitid'
HITDESC = 'hitdesc'
parsingREs = {
ORGS: nrOrgRE,
ACCS: accessionRE,
KEGG: koRE,
GIS: giRE,
PFAM: pfamRE}
ALLEQ = 'all'
FIRST = 'first'
PORTION = 'portion'
def translateHits(hitMap, hitTranslation):
for (read, hit) in hitMap.items():
if isinstance(hit, type([])):
newHits = []
for h in hit:
t = hitTranslation.get(h, None)
if t is not None:
if isinstance(t, type([])):
newHits.extend(t)
else:
newHits.append(t)
else:
newHits.append(h)
hitMap[read] = list(set(newHits))
else:
t = hitTranslation.get(hit, None)
if t is not None:
hitMap[read] = t
def translateCounts(counts, translation):
    for key in list(counts.keys()):  # copy the keys: entries are removed below
newKey = translation.get(key, None)
if newKey is not None:
count = counts.pop(key)
counts[newKey] = counts.setdefault(newKey, 0) + count
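# Worked example (not part of the original file): folding key "B" into "A":
#
#     >>> counts = {"A": 3, "B": 2}
#     >>> translateCounts(counts, {"B": "A"})
#     >>> counts
#     {'A': 5}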
def binHits(hitMap):
"""
return map of assignments to list of reads
"""
hits = {}
for (read, hit) in hitMap.items():
if isinstance(hit, list):
for h in hit:
hits.setdefault(h, []).append(read)
else:
hits.setdefault(hit, []).append(read)
return hits
def binAndMapHits(hitIter):
"""
return map of assignments to list of reads
"""
hits = {}
hitMap = {}
for (read, hit) in hitIter:
hitMap[read] = hit
if isinstance(hit, list):
for h in hit:
hits.setdefault(h, []).append(read)
else:
hits.setdefault(hit, []).append(read)
return (hits, hitMap)
def loadSequenceWeights(weightFiles):
"""
Load and merge list of sequence weight maps.
"""
if len(weightFiles) > 0:
sequenceWeights = {}
for weightFile in weightFiles:
            sequenceWeights.update(parseMapFile(weightFile, valueType=int))
else:
sequenceWeights = None
return sequenceWeights
def add_weight_arguments(parser, multiple=False):
action = 'store'
default = None
helpText = "File listing counting weights by sequence id. This is \
used for clustered or assembled data where each read (or contig) could \
represent any number of raw reads. The file should be a simple two-column \
tab-separated table with sequence-ids in the first column and integer \
weights in the second. "
if multiple:
action = 'append'
default = []
helpText += "For multiple files, supply the flag (-w or \
--sequenceWeights) for each file name. Concatenating all tables into \
one file will have the same net result."
parser.add_argument("-w", "--sequenceWeights", dest='weights',
action=action, default=default, help=helpText)
def add_count_arguments(parser, defaults={}):
default = defaults.get('cutoff', 0.01)
parser.add_argument(
"-c",
"--cutoff",
dest="cutoff",
type=float,
default=default,
help="Cutoff for showing taxa. If a fractional count for a taxa "
"is below this value, it will be folded up into its parent "
"domain. Defaults to: %s" % default,
metavar="CUTOFF")
default = defaults.get('allMethod', ALLEQ)
parser.add_argument(
"-a",
"--allMethod",
dest="allMethod",
default=default,
choices=(
FIRST,
ALLEQ,
PORTION),
help="%r means +1 for every hit found for each read. %r means"
" +1 to the first hit for each read. %r means +1/(nhits) for all"
" hits of each read. Defaults to %r" % (ALLEQ,
FIRST,
PORTION,
default))
def getAllMethod(allMethod):
return allMethods[allMethod]
def applyFractionalCutoff(counts, threshold=None, cutoff=None, label='Other'):
"""
For any value in the dict below cutoff, remove and add to 'other' value
"""
if threshold is None:
if cutoff is None:
logger.warn("Nothing to do for applyFractionalCutoff")
            return counts
threshold = float(cutoff) * sum(counts.values())
osum = 0
for key in list(counts.keys()):
if key == label:
continue
count = counts[key]
if count < threshold:
osum += count
del counts[key]
counts[label] = osum + counts.get(label, 0)
return counts
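# Worked example (not part of the original file): the counts sum to 10, so
# cutoff=0.25 gives a threshold of 2.5 and "C" is folded into "Other":
#
#     >>> applyFractionalCutoff({"A": 6, "B": 3, "C": 1}, cutoff=0.25)
#     {'A': 6, 'B': 3, 'Other': 1}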
def countIterHits(hitIter, allMethod=ALLEQ, weights=None, returnMap=True):
"""
bin counts by hit and find total
return map from assignments to number of reads
and dict of original mappings
"""
countHitsForRead = getAllMethod(allMethod)
total = 0
counts = {}
if returnMap:
hitMap = {}
multiplier = 1
for (read, hit) in hitIter:
total += 1
if returnMap:
hitMap[read] = hit
if weights is not None:
multiplier = weights.get(read, 1)
if isinstance(hit, type([])):
countHitsForRead(hit, counts, multiplier=multiplier)
else:
counts[hit] = multiplier + counts.get(hit, 0)
if returnMap:
return (total, counts, hitMap)
return (total, counts)
def _oneCountPerHit(hits, counts, multiplier=1):
for hit in hits:
counts[hit] = multiplier + counts.get(hit, 0)
def _portionHitCount(hits, counts, multiplier=1):
multiplier = multiplier / float(len(hits))
_oneCountPerHit(hits, counts, multiplier=multiplier)
def _countFirstHit(hits, counts, multiplier=1):
counts[hits[0]] = multiplier + counts.get(hits[0], 0)
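# Worked comparison (not part of the original file) of the three helpers on
# the same hit list hits = ["X", "X", "Y"] with an empty counts dict:
#
#     _oneCountPerHit  -> {"X": 2, "Y": 1}      ('all': +1 per hit)
#     _countFirstHit   -> {"X": 1}              ('first': +1 for the first hit)
#     _portionHitCount -> {"X": 2/3, "Y": 1/3}  ('portion': +1/nhits per hit)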
def countHits(hitMap):
"""
bin counts by hit and find total
return map from assignments to number of reads
"""
total = 0
counts = {}
if isinstance(hitMap, dict):
hitIter = hitMap.items()
else:
hitIter = hitMap
for (read, hit) in hitIter:
total += 1
if isinstance(hit, type([])):
for h in hit:
counts[h] = 1 + counts.get(h, 0)
else:
counts[hit] = 1 + counts.get(hit, 0)
return (total, counts)
def parseAndFilterM8Stream(inhandle, options):
"""
runs the input stream through m8 filtering
and then through parseM8Hits to get map from each read to all hits
"""
inhandle = filterM8Stream(inhandle, options, return_lines=False)
logger.info("Parsing hits")
# since filter already parses hits, use that info
infoInDescription = options.parseStyle in [KEGG, ORGS, PFAM]
return parseM8Hits(inhandle, infoInDescription)
def parseM8File(inhandle,
hitStringMap,
options,
parsingStyle,
countMethod,
taxonomy=None,
rank=None,
ignoreEmptyHits=True,
):
"""
Wrapper method that combines filterM8, parseHits, and process hits to:
filter hits using format and scorePct
map reads to hits using parseHits
translate hits using processHits
If taxonomy is not None, hits will be TaxNode objects
    countMethod can only be LCA if a taxonomy is given
Return a dict from read to hits
"""
hitIter = parseM8FileIter(inhandle,
hitStringMap,
options,
parsingStyle,
countMethod,
taxonomy=taxonomy,
rank=rank,
ignoreEmptyHits=ignoreEmptyHits,
)
hitMap = {}
for (read, hits) in hitIter:
hitMap[read] = hits
logger.info("Done counting %d hits" % (len(hitMap)))
return hitMap
def parseM8FileIter(inhandle,
hitStringMap,
options,
parsingStyle,
countMethod,
taxonomy=None,
rank=None,
ignoreEmptyHits=True,
):
"""
Wrapper method that combines filterM8, parseHits, and process hits to:
filter hits using format and scorePct
map reads to hits using parseHits
translate hits using processHits
If taxonomy is not None, hits will be TaxNode objects
    countMethod can only be LCA if a taxonomy is given
Return an iterator over (read,hits) tuples.
"""
# get map from reads to lists of hit strings
logger.info("Parsing hits")
# filters and parses
# options.parseStyle = parsingStyle
hitIter = filterM8Stream(inhandle, options, return_lines=False)
# apply org or acc translation
# apply map of hit names if given'
# look up taxon node
hitIter = processHits(
hitIter,
hitStringMap=hitStringMap,
parseStyle=parsingStyle,
taxonomy=taxonomy,
rank=rank)
# apply count method
hitIter = applyCountMethod(hitIter, countMethod, ignoreEmptyHits)
return hitIter
def parseHitsIter(
hitIter,
hitStringMap,
parsingStyle,
countMethod,
taxonomy=None,
rank=None,
ignoreEmptyHits=None):
"""
Same as parseM8FileIter, but takes in an iterator over Hit objects
Simply runs processHits and applyCountMethod
"""
# apply org or acc translation
# apply map of hit names if given'
# look up taxon node
hitIter = processHits(
hitIter,
hitStringMap=hitStringMap,
parseStyle=parsingStyle,
taxonomy=taxonomy,
rank=rank)
# debugKey="F4UZ9WW02HMBZJ"
# logger.debug("Hits for %s: %r" % (debugKey,hitMap[debugKey]))
# apply count method
hitIter = applyCountMethod(hitIter, countMethod, ignoreEmptyHits)
return hitIter
def sortedHitIterator(hitMap):
"""
    Given a dictionary of reads to hits, yield (read, hits) pairs in sorted read order
"""
for read in sorted(hitMap.keys()):
yield (read, hitMap[read])
def applyCountMethod(hitIter, method, ignoreEmpty=True):
# chose function that applies method
if method == 'LCA' or method == 'rLCA':
getBestHit = _findLeastCommonAncestor
elif method == 'first':
getBestHit = _takeFirstHit
elif method == 'all':
getBestHit = _returnAllHits
elif method == 'consensus':
getBestHit = _returnConsensus
elif method == 'most':
getBestHit = _returnMostCommon
if ignoreEmpty:
removeEmptyFunc = _removeEmpty
else:
removeEmptyFunc = _return_value
# apply method to hit map
hitsIn = 0
hitsOut = 0
reads = 0
for (read, hits) in hitIter:
reads += 1
hitsIn += len(hits)
hits = getBestHit(hits)
hits = removeEmptyFunc(hits)
if hits is not None:
hitsOut += len(hits)
yield (read, hits)
logger.debug("%s=>%r" % (read, hits))
logger.info(
"Collected %d hits into %d hits for %d reads" %
(hitsIn, hitsOut, reads))
def _findLeastCommonAncestor(hits):
"""
Given a list of hits as TaxNode objects, find the least common ancestor.
Hits that are not TaxNodes are ignored.
"""
# check for hits not translated to TaxNode objects
i = 0
while i < len(hits):
if hits[i] is None:
hits.pop(i)
elif isinstance(hits[i], type("")):
logger.info(
"Skipping hit: %s (cannot translate to taxon)" %
(hits.pop(i)))
else:
i += 1
# make sure there are some hits to process
if len(hits) == 0:
# sys.exit("No hits given!")
return None
# get LCA for these hits
hit = hits[0]
for i in range(1, len(hits)):
hit = hit.getLCA(hits[i])
return [hit, ]
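# Illustrative note (not part of the original file): with TaxNode hits for,
# say, Escherichia coli and Salmonella enterica, the pairwise getLCA() walk
# above would return a one-element list holding their shared ancestor
# (Enterobacteriaceae in the standard NCBI taxonomy).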
def _returnMostCommon(hits):
counts = {}
for hit in hits:
count = counts.get(hit, 0)
count += 1
counts[hit] = count
logger.debug(repr(counts))
bestCount = 0
bestHit = None
for (hit, count) in counts.items():
if count > bestCount:
bestHit = [hit, ]
bestCount = count
elif count == bestCount:
bestHit.append(hit)
return bestHit
def _takeFirstHit(hits):
if len(hits) > 0:
return hits[0:1]
else:
logger.debug("No hits!")
return None
def _returnAllHits(hits):
return list(set(hits))
def _returnConsensus(hits):
hits = _returnAllHits(hits)
if len(hits) == 1:
return hits
else:
return None
def _return_value(value):
return value
def _removeEmpty(hits):
if hits is None:
return hits
while True:
try:
hits.remove(None)
except ValueError:
break
while True:
try:
hits.remove('')
except ValueError:
break
if len(hits) > 0:
return hits
else:
return []
def parseHits(inhandle, readCol, hitCol, skipFirst, hitSep):
"""
read over lines and pull out (read,[hits]) pairs given:
inhandle: iterable set of strings (ie lines in a file)
readCol: index of column with read name
hitCol: index of column with hit name (-1 => every non-read column)
skipFirst: skip first line if True
hitSep: if not None, split data in hit column with this separator
"""
logger.debug("BEGIN parseHits(in, %r, %r, %r, %r)" %
(readCol, hitCol, skipFirst, hitSep))
# get line parsing function
if hitSep == 'eval':
extractReadHits = _getReadHitsEval
else:
hitCol = int(hitCol)
if hitCol < 0:
extractReadHits = _getReadHitsAll
elif hitSep is not None:
extractReadHits = _getReadHitsSep
else:
extractReadHits = _getReadHitsSimple
if skipFirst:
next(inhandle)
hitCount = 0
lineCount = 0
lastRead = None
for line in inhandle:
lineCount += 1
cells = line.rstrip('\n\r').split('\t')
(read, hits) = extractReadHits(cells, readCol, hitCol, hitSep)
if read != lastRead:
if lastRead is not None:
yield (lastRead, readHits)
readHits = list(hits)
lastRead = read
else:
readHits.extend(hits)
hitCount += len(hits)
if lastRead is not None:
yield (lastRead, readHits)
logger.info("Read %d hits from %d lines" % (hitCount, lineCount))
def parseM8Hits(hitIter, returnHitDescriptions):
logger.debug("BEGIN parseM8Hits()")
lastRead = None
hitCount = 0
readCount = 0
for read, hits in hitIter:
readCount += 1
fields = []
for hit in hits:
hitCount += 1
if returnHitDescriptions:
fields.append(hit.hitDesc)
else:
fields.append(hit.hit)
yield (read, fields)
logger.info("Read %d hits from %d reads" % (hitCount, readCount))
# -- helpers for parseHits -- #
# the following functions take a line from a table and return a read name
# and an iterable collection of hits
def _getReadHitsEval(cells, readCol, hitCol, hitSep):
"""
use eval to evaluate contents of hit cell. If resulting object is
not iterable, put it into a tuple
"""
read = cells[readCol]
hit = cells[hitCol]
# try to evaluate expression
try:
hit = eval(hit)
except Exception:
logger.warn("exception from 'eval(%r)'" % (hit))
# make it iterable if it's not
try:
getattr(hit, '__iter__')
except AttributeError:
hit = (hit,)
return (read, hit)
def _getReadHitsAll(cells, readCol, hitCol, hitSep):
"""
every entry in cells (other than read) is a hit
"""
read = cells.pop(readCol)
return(read, cells)
def _getReadHitsSep(cells, readCol, hitCol, hitSep):
"""
    use hitSep to divide the hit cell into multiple hits
"""
read = cells[readCol]
hitCell = cells[hitCol]
hits = hitCell.strip().split(hitSep)
return (read, hits)
def _getReadHitsSimple(cells, readCol, hitCol, hitSep):
read = cells[readCol]
hit = cells[hitCol]
return (read, (hit,))
# -- end helpers for parseHits -- #
class HitTranslator:
"""
Given a list of (function,data,returnType) tuples ("mappings")
Return an object with translateHit method that will apply the
mappings to a hit
"""
def __init__(self, mappings, useDesc=False, hitsAreObjects=True):
self.mappings = mappings
if mappings is None or len(mappings) == 0:
self.applyMappings = self.returnSame
if hitsAreObjects:
if useDesc:
self.getId = self.getDescription
else:
self.getId = self.returnSame
def getId(self, hit):
return hit.hit
def getDescription(self, hit):
return hit.hitDesc
def returnSame(self, hit):
return hit
def translateHit(self, hit):
return self.applyMappings([self.getId(hit), ])
def applyMappings(self, hits):
for (mapFunc, mapping, retType) in self.mappings:
newHits = []
for hit in hits:
mapped = mapFunc(hit, mapping)
if retType is list:
newHits.extend(mapped)
elif retType is str:
newHits.append(mapped)
else:
if isinstance(mapped, list) or isinstance(mapped, tuple):
newHits.extend(mapped)
else:
newHits.append(mapped)
hits = newHits
return hits
def getHitTranslator(
hitStringMap=None,
parseStyle=ORGS,
taxonomy=None,
rank=None,
defaultToNone=True,
hitsAreObjects=True):
"""
Return a function that will return a list of organsims from a single hit.
hitStringMap (None): dictionary mapping hit IDs to something else
parseStyle (ORGS): how to process hit data into an identifying string
    taxonomy (None): An edl.taxon.Taxonomy object or directory
                     containing taxdmp
rank (None): Maximum rank to resolve hits
hitsAreObjects: True if hits are edl.blastm8.Hit objects, else strings
"""
parseRE = parsingREs.get(parseStyle, None)
if logger.getEffectiveLevel() <= logging.INFO:
if hitStringMap is None:
mapstr = 'None'
else:
mapstr = '%d keys' % (len(hitStringMap))
if parseRE is None:
exprstr = 'None'
else:
exprstr = parseRE.pattern
if taxonomy is None:
taxstr = 'None'
else:
taxstr = '%d ids' % (len(taxonomy.idMap))
logger.info(
"Creating hit translator:\n default to None: %r\n map: %s\n "
"parsing %s: %s\n taxa: %s\n rank: %s" %
(defaultToNone, mapstr, parseStyle, exprstr, taxstr, rank))
# set up variables
infoInDescription = parseStyle in [KEGG, ORGS, PFAM]
mappings = []
if defaultToNone:
mapFunction = _simpleMapNoneFunction
else:
mapFunction = _simpleMapFunction
# initial parsing of hit id or description via regular expression
if parseRE is not None:
mappings.append((_findAllREfunctionSimpler, parseRE, list))
# optional look up table
if hitStringMap is not None:
mappings.append((mapFunction, hitStringMap, None))
# optional conversion to Taxon objects
if taxonomy is not None:
if parseStyle == ORGS:
if defaultToNone:
mappings.append((getNodeFromHit, taxonomy.nameMap, str))
else:
mappings.append((_getNodeHitFunction, taxonomy.nameMap, str))
else:
mappings.append((mapFunction, taxonomy.idMap, str))
if rank is not None:
mappings.append((getAncestorClosestToRank, rank, str))
return HitTranslator(
mappings,
useDesc=infoInDescription,
hitsAreObjects=hitsAreObjects)
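# --- Illustrative usage sketch (not part of the original file); the
# accession value is hypothetical:
#
#     translator = getHitTranslator(parseStyle=ACCS)
#     translator.translateHit(hit)  # hit: an edl.blastm8.Hit object
#     # -> e.g. ['WP_011056792'], the accession parsed from hit.hit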
# turn hit lines into organisms or KOs or anything else
def processHits(hitIter, **kwargs):
"""
Take an in iterator over read,hits tuples and apply mappings using
a HitTranslator
"""
translator = getHitTranslator(**kwargs)
# translate hits
for (key, hits) in hitIter:
logger.debug("%s => %s" % (key, hits))
newHits = []
for h in hits:
newHits.extend(translator.translateHit(h))
logger.debug(str(newHits))
yield (key, newHits)
def processHitsOld(
hitIter,
mapping=None,
expr=None,
taxIdMap=None,
taxNameMap=None,
defaultToNone=True,
rank=None):
"""
Take a map of reads (or other keys) to lists of hits and translate hits.
Can use the following steps in this order with any steps omitted:
        simple dictionary translation using 'mapping'
regular expression (where every captured group is returned as a hit)
a translation to taxNode objects by one of:
simple dictionary translation using taxIdMap
name based look up using edl.taxon.getNodeFromHit() and taxNameMap
    if defaultToNone is changed to False, anything not found in one of
    the mappings (mapping, taxIdMap, or taxNameMap) is passed through
    unchanged instead of being replaced with None
"""
if logger.getEffectiveLevel() <= logging.DEBUG:
if mapping is None:
mapstr = 'None'
else:
mapstr = '%d keys' % (len(mapping))
if expr is None:
exprstr = 'None'
else:
exprstr = expr.pattern
if taxIdMap is None:
if taxNameMap is None:
taxstr = 'None'
else:
taxstr = '%d names' % (len(taxNameMap))
else:
taxstr = '%d ids' % (len(taxIdMap))
logger.debug(
"Starting processHits:\n default to None: %r\n map: %s\n "
"exp: %s\n taxa: %s\n rank: %s" %
(defaultToNone, mapstr, exprstr, taxstr, rank))
# set the functions to use:
if mapping is None:
mapFunction = _passFunction
elif defaultToNone:
mapFunction = _simpleMapNoneFunction
else:
mapFunction = _simpleMapFunction
exprFunction = _findAllREfunction
if taxIdMap is not None:
taxMap = taxIdMap
if defaultToNone:
taxFunction = _simpleMapNoneFunction
else:
taxFunction = _simpleMapFunction
elif taxNameMap is not None:
taxMap = taxNameMap
if defaultToNone:
taxFunction = getNodeFromHit
else:
taxFunction = _getNodeHitFunction
else:
taxMap = None
taxFunction = _passFunction
if taxMap is None or rank is None:
rankFunction = _passFunction
else:
rankFunction = getAncestorClosestToRank
# translate hits
for (key, hits) in hitIter:
logger.debug("%s => %s" % (key, hits))
newHits = []
for h in hits:
# find all matches to expr, may be more than one
hs = exprFunction(h, expr)
logger.debug("%s => %s" % (h, hs))
for hit in hs:
hts = mapFunction(hit, mapping)
if not (isinstance(hts, list) or isinstance(hts, tuple)):
hts = [hts]
for hit in hts:
hit = taxFunction(hit, taxMap)
hit = rankFunction(hit, rank)
newHits.append(hit)
logger.debug(str(newHits))
yield (key, newHits)
# helper functions for processHits
# each function takes a hit and something else, and then reutrns a
# translated hit
def _passFunction(hit, mapping):
return hit
def _simpleMapFunction(hit, mapping):
newHit = mapping.get(hit, hit)
logger.debug("%s --> %r" % (hit, newHit))
return newHit
def _simpleMapNoneFunction(hit, mapping):
newHit = mapping.get(hit, None)
logger.debug("%s --> %r" % (hit, newHit))
return newHit
def _getNodeHitFunction(hit, taxMap):
newHit = getNodeFromHit(hit, taxMap)
if newHit is None:
return hit
else:
return newHit
def _findAllREfunctionSimpler(hit, expr):
hits = expr.findall(hit)
if len(hits) == 0:
return [hit, ]
else:
return hits
def _findAllREfunction(hit, expr):
if expr is None:
return (hit,)
hits = expr.findall(hit)
if len(hits) == 0:
return [hit, ]
else:
return hits
# end helper functions for processHits
def add_taxon_arguments(parser, defaults={}, choices={}):
# get format and filter_top_pct options from blastm8
add_hit_table_arguments(parser, defaults,
flags=['format', 'filter_top_pct'])
# specific to taxon parsing:
parser.add_argument(
"-m",
"--mapFile",
dest="mapFile",
default=defaults.get(
"mapFile",
None),
metavar="MAPFILE",
help="Location of file containing table of with db hit name "
"as first column and taxa or taxonids in second column. "
"Defaults to '%s'" % (defaults.get("mapFile", None)))
parser.add_argument(
"-p",
"--parseStyle",
default=defaults.get(
"parseStyle",
ACCS),
choices=[
ACCS,
GIS,
ORGS,
HITID,
HITDESC],
help="What should be parsed from the hit table: accessions('accs'), "
"'gis', organsim names in brackets ('orgs'), the full hit "
"name('hitid'), or the full hit description('hitdesc'). "
"(defaults to '%s')" % (defaults.get("parseStyles", ACCS)))
parser.add_argument(
"-C",
"--countMethod",
dest="countMethod",
default=defaults.get(
"countMethod",
"first"),
choices=choices.get(
'countMethod',
('first',
'most',
'all',
'LCA',
'consensus')),
help="How to deal with counts from multiple hits. (first, most: "
"can return multiple hits in case of a tie, LCA: MEGAN-like, "
"all: return every hit, consensus: return None unless all "
"the same). Default is %s" % (defaults.get("countMethod",
"first")),
metavar="COUNTMETHOD")
add_taxonomy_dir_argument(parser, defaults)
def readMaps(options, namesMap=False):
"""
    Load the taxonomy and hit-id-to-taxid maps requested by the user
"""
return (readTaxonomyFiles(options, namesMap=namesMap), readIDMap(options))
def readTaxonomyFiles(options, namesMap=False):
"""
    load the taxonomy specified by the user. Create a name lookup map if
parseStyle is 'orgs'
"""
# read taxonomy
if options.taxdir is not None:
getTaxNames = namesMap or options.parseStyle == ORGS
taxonomy = readTaxonomy(options.taxdir, namesMap=getTaxNames)
logging.info("Read %d nodes from tax dump" % (len(taxonomy.idMap)))
else:
taxonomy = None
if options.countMethod == 'LCA' or options.countMethod == 'rLCA':
            raise Exception('Cannot use LCA without providing a taxonomy (-n)')
logging.info("No taxonomy needed")
return taxonomy
def readIDMap(options):
"""
    Load the specified lookup table for hit IDs. If the parseStyle
    requested is 'gis', convert keys to integers. The values are always
    converted to integers since they are assumed to be taxids
"""
# map reads to hits
if options.parseStyle == GIS:
keyType = int
else:
keyType = None
if options.taxdir is not None:
valueType = int
else:
valueType = None
return parseMapFile(options.mapFile, valueType=valueType, keyType=keyType)
allMethods = {ALLEQ: _oneCountPerHit,
FIRST: _countFirstHit,
PORTION: _portionHitCount}
############
# Tests
############
def test():
import sys
global myAssertEq, myAssertIs
from test import myAssertEq, myAssertIs
if len(sys.argv) > 2:
loglevel = logging.DEBUG
else:
        loglevel = logging.WARNING
logging.basicConfig(stream=sys.stderr, level=loglevel)
logger.setLevel(loglevel)
hits = testParseHits(sys.argv[1])
testTranslateAndCountHits(hits)
def testParseHits(testFile):
# test line parsing methods
cells = [1, 2, 3, 4, "(4,5)", "6,7"]
(read, hitIter) = _getReadHitsSimple(cells, 0, 2, None)
hits = []
for h in hitIter:
hits.append(h)
myAssertEq(read, 1)
myAssertEq(len(hits), 1)
myAssertEq(hits[0], 3)
(read, hitIter) = _getReadHitsSep(cells, 1, 5, ',')
hits = []
for h in hitIter:
hits.append(h)
myAssertEq(read, 2)
myAssertEq(hits, ['6', '7'])
(read, hitIter) = _getReadHitsAll(list(cells), 3, -1, None)
hits = []
for h in hitIter:
hits.append(h)
myAssertEq(read, 4)
myAssertEq(len(hits), 5)
myAssertEq(hits, [1, 2, 3, "(4,5)", "6,7"])
# give it a test file
hitIter = parseHits(open(testFile), 0, -1, True, None)
hits = {}
for r, h in hitIter:
hits[r] = h
logging.debug(repr(hits))
myAssertEq(len(hits), 29)
myAssertEq(hits['000023_2435_2174'], ['Prochlorococcus'])
myAssertEq(hits['000178_2410_1152'], ['Bacteria <prokaryote>'])
myAssertEq(hits['000093_2435_2228'], ['Candidatus Pelagibacter'])
return hits
def testTranslateAndCountHits(hits):
(total, counts) = countHits(hits)
myAssertEq(total, 29)
myAssertEq(counts["Prochlorococcus"], 10)
myAssertEq(counts['root'], 7)
translateHits(hits,
{'Bacteria <prokaryote>': 'other',
'root': 'other',
'Candidatus Pelagibacter': 'Pelagibacter'})
myAssertEq(hits['000178_2410_1152'], ['other'])
myAssertEq(hits['000093_2435_2228'], ['Pelagibacter'])
if __name__ == '__main__':
test()
| 28.199095 | 78 | 0.580392 | 3,386 | 31,160 | 5.307442 | 0.16834 | 0.005286 | 0.009015 | 0.003339 | 0.225141 | 0.196094 | 0.15219 | 0.130822 | 0.123365 | 0.101831 | 0 | 0.009026 | 0.324422 | 31,160 | 1,104 | 79 | 28.224638 | 0.844663 | 0.162003 | 0 | 0.341823 | 0 | 0 | 0.084949 | 0.001726 | 0 | 0 | 0 | 0 | 0.025469 | 1 | 0.073727 | false | 0.005362 | 0.009383 | 0.010724 | 0.148794 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac6dc2a662f00be68161bbd97af2d65a6aba9753 | 8,300 | py | Python | src/lib/up_motors.py | staadecker/Robotics-2019 | 664615d7e5c8be435ebfa8c57631eb4ffb2233c8 | [
"MIT"
] | 1 | 2019-08-14T11:50:21.000Z | 2019-08-14T11:50:21.000Z | src/lib/up_motors.py | staadecker/Robotics-2019 | 664615d7e5c8be435ebfa8c57631eb4ffb2233c8 | [
"MIT"
] | 5 | 2019-04-20T12:58:55.000Z | 2019-04-20T17:58:14.000Z | src/lib/up_motors.py | staadecker/Robotics-2019 | 664615d7e5c8be435ebfa8c57631eb4ffb2233c8 | [
"MIT"
] | null | null | null | import math
import ev3dev2.motor as motor
import lib.up_ports as ports
import time
class Lift:
"""Class to control arm of robot"""
_ACCELERATION = 1000 # Time in milliseconds the motor would take to reach 100% max speed from not moving
_DEFAULT_SPEED = motor.SpeedRPM(192)
# Degrees predictions for arm
_DEG_TO_FIBRE = 578
_DEG_TO_NODE = 495
_POS_UP = 0
_POS_FIBRE = 1
_POS_NODE = 2
def __init__(self):
self._position = self._POS_UP
self._lift = motor.MediumMotor(ports.LIFT_MOTOR)
self._lift.ramp_up_sp = self._ACCELERATION
self._lift.polarity = motor.Motor.POLARITY_NORMAL
def calibrate(self):
        self._lift.on(self._DEFAULT_SPEED)
        self._lift.wait_until_not_moving()  # run until the arm stalls at the top stop
        self._lift.on_for_degrees(self._DEFAULT_SPEED, -50, block=True)  # back off the stop
self._position = self._POS_UP
def up(self, block=True):
if self._position == self._POS_FIBRE:
self._lift.on_for_degrees(self._DEFAULT_SPEED, self._DEG_TO_FIBRE, block=block)
self._position = self._POS_UP
elif self._position == self._POS_NODE:
self._lift.on_for_degrees(self._DEFAULT_SPEED, self._DEG_TO_NODE, block=block)
self._position = self._POS_UP
else:
print("WARNING: called Lift.up() when already up")
def to_fibre(self, block=True):
"""Lowers arm the degrees to pick up the fibre"""
if self._position == self._POS_UP:
self._lift.on_for_degrees(self._DEFAULT_SPEED, -self._DEG_TO_FIBRE, block=block)
self._position = self._POS_FIBRE
else:
print("WARNING: called Lift.to_fibre() when not in up position")
def to_node(self, block=True):
"""Lowers arm the degrees to pick up the fibre"""
if self._position == self._POS_UP:
self._lift.on_for_degrees(self._DEFAULT_SPEED, -self._DEG_TO_NODE, block=block)
self._position = self._POS_NODE
else:
print("WARNING: called Lift.to_fibre() when not in up position")
class Swivel:
"""Class to control the robot's swivel"""
_ACCELERATION = 300 # Time in milliseconds the motor would take to reach 100% max speed from not moving
    _DEFAULT_SPEED = motor.SpeedRPM(80)  # In RPM
_START_POSITION = 0
def __init__(self):
self._swivel = motor.MediumMotor(ports.SWIVEL_MOTOR)
self._swivel.ramp_up_sp = self._ACCELERATION
self._swivel.ramp_down_sp = self._ACCELERATION
self._swivel.position = self._START_POSITION
self._swivel.stop_action = motor.Motor.STOP_ACTION_HOLD
def forward(self, block=True):
self._swivel.on_to_position(self._DEFAULT_SPEED, 0, block=block)
def left(self, block=True):
self._swivel.on_to_position(self._DEFAULT_SPEED, 90, block=block)
def right(self, block=True):
self._swivel.on_to_position(self._DEFAULT_SPEED, -90, block=block)
def back(self, block=True):
self._swivel.on_to_position(self._DEFAULT_SPEED, 180, block=block)
def reset(self):
self._swivel.on_to_position(self._DEFAULT_SPEED, self._START_POSITION)
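# Illustrative usage sketch (hypothetical calling code): sweep the swivel
# through its four preset positions, then return to the start.
def _example_swivel_sweep():
    swivel = Swivel()
    for move in (swivel.left, swivel.forward, swivel.right, swivel.back):
        move(block=True)
    swivel.reset()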
class Mover:
"""Class to move the robot"""
_WHEEL_RADIUS = 28
CHASSIS_RADIUS = 67
_DEFAULT_SPEED = 40
_DEFAULT_ROTATE_SPEED = 30
_RAMP_UP = 300
_RAMP_DOWN = 300
def __init__(self, reverse_motors=False):
self._mover = motor.MoveTank(ports.LEFT_MOTOR, ports.RIGHT_MOTOR,
motor_class=motor.MediumMotor)
        self._mover.left_motor.ramp_up_sp = 0  # ramping disabled (self._RAMP_UP unused)
        self._mover.right_motor.ramp_up_sp = 0  # ramping disabled (self._RAMP_UP unused)
        self._mover.left_motor.ramp_down_sp = 0  # ramping disabled (self._RAMP_DOWN unused)
        self._mover.right_motor.ramp_down_sp = 0  # ramping disabled (self._RAMP_DOWN unused)
for my_motor in self._mover.motors.values():
if reverse_motors:
my_motor.polarity = motor.Motor.POLARITY_INVERSED
else:
my_motor.polarity = motor.Motor.POLARITY_NORMAL
def travel(self, distance=None, speed=_DEFAULT_SPEED, block=True, backwards=False):
"""Make the robot move forward or backward a certain number of mm"""
if distance is None:
if block:
raise ValueError("Can't run forever with block=True")
if backwards:
self._mover.on(-speed, -speed)
else:
self._mover.on(speed, speed)
else:
degrees_for_wheel = Mover._convert_rad_to_deg(Mover._convert_distance_to_rad(distance))
if backwards:
self._mover.on_for_degrees(-speed, -speed, degrees_for_wheel, block=block)
else:
self._mover.on_for_degrees(speed, speed, degrees_for_wheel, block=block)
if block:
time.sleep(0.1)
def rotate(self, degrees=None, arc_radius=0, clockwise=True, speed=_DEFAULT_ROTATE_SPEED, block=True,
backwards=False) -> None:
"""
:param arc_radius: the radius or tightness of the turn in mm. 0 means the robot is turning on itself.
:param degrees: the degrees the robot should rotate
:param clockwise: the direction of rotation
:param speed: the speed the fastest wheel should travel
:param block: whether to return immediately or to wait for end of movement
:param backwards: whether the rotate movement should move the robot backwards
"""
if degrees is None:
if block:
raise ValueError("Can't run forever with block=True")
inside_speed = (arc_radius - Mover.CHASSIS_RADIUS) / (arc_radius + Mover.CHASSIS_RADIUS) * speed
if clockwise:
if backwards:
self._mover.on(-inside_speed, -speed)
else:
self._mover.on(speed, inside_speed)
else:
if backwards:
self._mover.on(-speed, -inside_speed)
else:
self._mover.on(inside_speed, speed)
else:
if degrees <= 0:
raise ValueError(
"Can't rotate a negative number of degrees. Use clockwise=False to turn counter-clockwise")
degrees_in_rad = Mover._convert_deg_to_rad(degrees)
inside_distance = (arc_radius - Mover.CHASSIS_RADIUS) * degrees_in_rad
outside_distance = (arc_radius + Mover.CHASSIS_RADIUS) * degrees_in_rad
movement_time = outside_distance / speed
inside_speed = inside_distance / movement_time
outside_degrees = Mover._convert_rad_to_deg(Mover._convert_distance_to_rad(outside_distance))
if clockwise:
if backwards:
self._mover.on_for_degrees(-inside_speed, -speed, outside_degrees, block=block)
else:
self._mover.on_for_degrees(speed, inside_speed, outside_degrees, block=block)
else:
if backwards:
self._mover.on_for_degrees(-speed, -inside_speed, outside_degrees, block=block)
else:
self._mover.on_for_degrees(inside_speed, speed, outside_degrees, block=block)
if block:
time.sleep(0.1)
def steer(self, steering, speed=_DEFAULT_SPEED):
"""Make the robot move in a direction. -100 is to the left. +100 is to the right. 0 is straight"""
# Modified code from ev3dev2.robot.MoveSteering
if steering < -100 or steering > 100:
raise ValueError("Steering, must be between -100 and 100 (inclusive)")
inside_speed = speed - speed * abs(steering) / 50
if steering >= 0:
self._mover.on(speed, inside_speed)
else:
self._mover.on(inside_speed, speed)
def stop(self):
"""Make robot stop"""
self._mover.off()
time.sleep(0.1)
@staticmethod
def _convert_distance_to_rad(distance):
return distance / Mover._WHEEL_RADIUS
@staticmethod
def _convert_deg_to_rad(deg):
return deg * math.pi / 180
@staticmethod
def _convert_rad_to_deg(rad):
return rad / math.pi * 180
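# Worked sketch of the arc math used by Mover.rotate() above: for a turn of
# arc radius r with chassis half-width R, the inside wheel follows a circle
# of radius (r - R) and the outside wheel one of radius (r + R), so
#   inside_speed = (r - R) / (r + R) * outside_speed
# The helper below is illustrative only and is not used by the class.
def _example_wheel_speeds(arc_radius, speed):
    inside = (arc_radius - Mover.CHASSIS_RADIUS) / (arc_radius + Mover.CHASSIS_RADIUS) * speed
    return inside, speed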
| 35.930736 | 111 | 0.630723 | 1,069 | 8,300 | 4.592142 | 0.153414 | 0.038501 | 0.031371 | 0.038704 | 0.554084 | 0.488694 | 0.446323 | 0.411082 | 0.380118 | 0.351599 | 0 | 0.016346 | 0.28506 | 8,300 | 230 | 112 | 36.086957 | 0.81092 | 0.127952 | 0 | 0.314103 | 0 | 0 | 0.049818 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121795 | false | 0 | 0.025641 | 0.019231 | 0.288462 | 0.019231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac6e65f5f7f8da8da7e36ad7ee805ad70afef98a | 383 | py | Python | python_morsels/add/add.py | stefmolin/random-fun | 59bdca29e1d51d1a21f57540ea1de66e6fa090c9 | [
"MIT"
] | 3 | 2019-09-16T08:51:10.000Z | 2020-03-21T06:00:51.000Z | python_morsels/add/add.py | stefmolin/random-fun | 59bdca29e1d51d1a21f57540ea1de66e6fa090c9 | [
"MIT"
] | null | null | null | python_morsels/add/add.py | stefmolin/random-fun | 59bdca29e1d51d1a21f57540ea1de66e6fa090c9 | [
"MIT"
] | null | null | null | from operator import itemgetter
def add(*args):
    # validate inputs: every matrix must have the same number of rows,
    # and each matrix's rows must all have the same length
if len(set(map(len, args))) != 1 or any(len(set(map(len, args[i]))) != 1 for i in range(len(args))):
raise ValueError('Lists are of different sizes.')
result = []
    for row in zip(*args):  # row holds the i-th row from each input matrix
        result.append([sum(map(itemgetter(i), row)) for i in range(len(row[0]))])
return result
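# Quick usage sketch (hypothetical inputs): element-wise addition of any
# number of equally-shaped nested lists.
def _example_add():
    assert add([[1, 2], [3, 4]], [[10, 20], [30, 40]]) == [[11, 22], [33, 44]]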
| 34.818182 | 104 | 0.62141 | 61 | 383 | 3.901639 | 0.57377 | 0.088235 | 0.07563 | 0.10084 | 0.252101 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009934 | 0.211488 | 383 | 10 | 105 | 38.3 | 0.778146 | 0.039164 | 0 | 0 | 0 | 0 | 0.079235 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac6fe26cf2574b10d2fb8373cf9260a97a2e4e71 | 291 | py | Python | byte_backend_old/__init__.py | saarimrahman/byte | 24d68b1834975bf491c3706f56aa190e3893f498 | [
"Apache-2.0"
] | null | null | null | byte_backend_old/__init__.py | saarimrahman/byte | 24d68b1834975bf491c3706f56aa190e3893f498 | [
"Apache-2.0"
] | null | null | null | byte_backend_old/__init__.py | saarimrahman/byte | 24d68b1834975bf491c3706f56aa190e3893f498 | [
"Apache-2.0"
] | null | null | null | from app import create_app, init_app
# Starts the database
if __name__ == '__main__':
initial_setup = True
app = create_app('testing')
db = init_app(app)
if initial_setup:
with app.app_context():
db.drop_all()
db.create_all()
app.run()
| 19.4 | 36 | 0.608247 | 39 | 291 | 4.102564 | 0.538462 | 0.1125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.28866 | 291 | 14 | 37 | 20.785714 | 0.772947 | 0.065292 | 0 | 0 | 0 | 0 | 0.055762 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac756dbc793b32c476556969d6bcad4beef32609 | 31,124 | py | Python | build/lib/WORC/classification/fitandscore.py | Sikerdebaard/PREDICTFastr | e1f172c3606e6f33edf58008f958dcd1c0ac5b7b | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | build/lib/WORC/classification/fitandscore.py | Sikerdebaard/PREDICTFastr | e1f172c3606e6f33edf58008f958dcd1c0ac5b7b | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | build/lib/WORC/classification/fitandscore.py | Sikerdebaard/PREDICTFastr | e1f172c3606e6f33edf58008f958dcd1c0ac5b7b | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# Copyright 2016-2019 Biomedical Imaging Group Rotterdam, Departments of
# Medical Informatics and Radiology, Erasmus MC, Rotterdam, The Netherlands
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection._validation import _fit_and_score
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel
import scipy
from sklearn.decomposition import PCA
from sklearn.multiclass import OneVsRestClassifier
from imblearn.over_sampling import SMOTE, RandomOverSampler
from sklearn.utils import check_random_state
import random
from sklearn.metrics import make_scorer, average_precision_score
from WORC.classification.estimators import RankedSVM
from WORC.classification import construct_classifier as cc
from WORC.classification.metrics import check_scoring
from WORC.featureprocessing.Relief import SelectMulticlassRelief
from WORC.featureprocessing.Imputer import Imputer
from WORC.featureprocessing.VarianceThreshold import selfeat_variance
from WORC.featureprocessing.StatisticalTestThreshold import StatisticalTestThreshold
from WORC.featureprocessing.SelectGroups import SelectGroups
def fit_and_score(X, y, scoring,
train, test, para,
fit_params=None,
return_train_score=True,
return_n_test_samples=True,
return_times=True, return_parameters=True,
error_score='raise', verbose=True,
return_all=True):
'''
Fit an estimator to a dataset and score the performance. The following
methods can currently be applied as preprocessing before fitting, in
this order:
1. Select features based on feature type group (e.g. shape, histogram).
2. Oversampling
3. Apply feature imputation (WIP).
4. Apply feature selection based on variance of feature among patients.
5. Univariate statistical testing (e.g. t-test, Wilcoxon).
6. Scale features with e.g. z-scoring.
7. Use Relief feature selection.
8. Select features based on a fit with a LASSO model.
9. Select features using PCA.
10. If a SingleLabel classifier is used for a MultiLabel problem,
a OneVsRestClassifier is employed around it.
All of the steps are optional.
Parameters
----------
estimator: sklearn estimator, mandatory
Unfitted estimator which will be fit.
X: array, mandatory
        Array containing for each object (rows) the feature values
(1st Column) and the associated feature label (2nd Column).
y: list(?), mandatory
List containing the labels of the objects.
scorer: sklearn scorer, mandatory
        Function used as optimization criterion for the hyperparameter optimization.
train: list, mandatory
Indices of the objects to be used as training set.
test: list, mandatory
Indices of the objects to be used as testing set.
para: dictionary, mandatory
Contains the settings used for the above preprocessing functions
and the fitting. TODO: Create a default object and show the
fields.
fit_params:dictionary, default None
Parameters supplied to the estimator for fitting. See the SKlearn
site for the parameters of the estimators.
return_train_score: boolean, default True
Save the training score to the final SearchCV object.
return_n_test_samples: boolean, default True
Save the number of times each sample was used in the test set
to the final SearchCV object.
return_times: boolean, default True
Save the time spend for each fit to the final SearchCV object.
return_parameters: boolean, default True
Return the parameters used in the final fit to the final SearchCV
object.
error_score: numeric or "raise" by default
Value to assign to the score if an error occurs in estimator
fitting. If set to "raise", the error is raised. If a numeric
value is given, FitFailedWarning is raised. This parameter
does not affect the refit step, which will always raise the error.
verbose: boolean, default=True
If True, print intermediate progress to command line. Warnings are
always printed.
return_all: boolean, default=True
If False, only the ret object containing the performance will be
returned. If True, the ret object plus all fitted objects will be
returned.
Returns
----------
Depending on the return_all input parameter, either only ret or all objects
below are returned.
ret: list
Contains optionally the train_scores and the test_scores,
test_sample_counts, fit_time, score_time, parameters_est
and parameters_all.
GroupSel: WORC GroupSel Object
Either None if the groupwise feature selection is not used, or
the fitted object.
VarSel: WORC VarSel Object
Either None if the variance threshold feature selection is not used, or
the fitted object.
SelectModel: WORC SelectModel Object
        Either None if the feature selection based on a fitted model is not
        used, or the fitted object.
feature_labels: list
Labels of the features. Only one list is returned, not one per
feature object, as we assume all samples have the same feature names.
scaler: scaler object
Either None if feature scaling is not used, or
the fitted object.
    imputer: WORC Imputer Object
Either None if feature imputation is not used, or
the fitted object.
pca: WORC PCA Object
Either None if PCA based feature selection is not used, or
the fitted object.
StatisticalSel: WORC StatisticalSel Object
Either None if the statistical test feature selection is not used, or
the fitted object.
ReliefSel: WORC ReliefSel Object
Either None if the RELIEF feature selection is not used, or
the fitted object.
sm: WORC SMOTE Object
Either None if the SMOTE oversampling is not used, or
the fitted object.
ros: WORC ROS Object
Either None if Random Oversampling is not used, or
the fitted object.
'''
# We copy the parameter object so we can alter it and keep the original
para_estimator = para.copy()
estimator = cc.construct_classifier(para_estimator)
if scoring != 'average_precision_weighted':
scorer = check_scoring(estimator, scoring=scoring)
else:
scorer = make_scorer(average_precision_score, average='weighted')
para_estimator = delete_cc_para(para_estimator)
# X is a tuple: split in two arrays
feature_values = np.asarray([x[0] for x in X])
feature_labels = np.asarray([x[1] for x in X])
# ------------------------------------------------------------------------
# Feature imputation
if 'Imputation' in para_estimator.keys():
if para_estimator['Imputation'] == 'True':
imp_type = para_estimator['ImputationMethod']
if verbose:
message = ('Imputing NaN with {}.').format(imp_type)
print(message)
imp_nn = para_estimator['ImputationNeighbours']
imputer = Imputer(missing_values=np.nan, strategy=imp_type,
n_neighbors=imp_nn)
imputer.fit(feature_values)
feature_values = imputer.transform(feature_values)
else:
imputer = None
else:
imputer = None
if 'Imputation' in para_estimator.keys():
del para_estimator['Imputation']
del para_estimator['ImputationMethod']
del para_estimator['ImputationNeighbours']
# Delete the object if we do not need to return it
if not return_all:
del imputer
# ------------------------------------------------------------------------
# Use SMOTE oversampling
if 'SampleProcessing_SMOTE' in para_estimator.keys():
if para_estimator['SampleProcessing_SMOTE'] == 'True':
# Determine our starting balance
pos_initial = int(np.sum(y))
neg_initial = int(len(y) - pos_initial)
len_in = len(y)
# Fit SMOTE object and transform dataset
# NOTE: need to save random state for this one as well!
sm = SMOTE(random_state=None,
ratio=para_estimator['SampleProcessing_SMOTE_ratio'],
m_neighbors=para_estimator['SampleProcessing_SMOTE_neighbors'],
kind='borderline1',
n_jobs=para_estimator['SampleProcessing_SMOTE_n_cores'])
feature_values, y = sm.fit_sample(feature_values, y)
# Also make sure our feature label object has the same size
# NOTE: Not sure if this is the best implementation
feature_labels = np.asarray([feature_labels[0] for x in X])
# Note the user what SMOTE did
pos = int(np.sum(y))
neg = int(len(y) - pos)
if verbose:
message = ("Sampling with SMOTE from {} ({} pos, {} neg) to {} ({} pos, {} neg) patients.").format(str(len_in),
str(pos_initial),
str(neg_initial),
str(len(y)),
str(pos),
str(neg))
print(message)
else:
sm = None
if 'SampleProcessing_SMOTE' in para_estimator.keys():
del para_estimator['SampleProcessing_SMOTE']
del para_estimator['SampleProcessing_SMOTE_ratio']
del para_estimator['SampleProcessing_SMOTE_neighbors']
del para_estimator['SampleProcessing_SMOTE_n_cores']
# Delete the object if we do not need to return it
if not return_all:
del sm
# ------------------------------------------------------------------------
# Full Oversampling: To Do
if 'SampleProcessing_Oversampling' in para_estimator.keys():
if para_estimator['SampleProcessing_Oversampling'] == 'True':
if verbose:
print('Oversample underrepresented classes in training.')
# Oversample underrepresented classes in training
# We always use a factor 1, e.g. all classes end up with an
# equal number of samples
if len(y.shape) == 1:
# Single Class, use imblearn oversampling
# Create another random state
# NOTE: Also need to save this random seed. Can be same as SMOTE
random_seed2 = np.random.randint(5000)
random_state2 = check_random_state(random_seed2)
ros = RandomOverSampler(random_state=random_state2)
feature_values, y = ros.fit_sample(feature_values, y)
else:
# Multi class, use own method as imblearn cannot do this
sumclass = [np.sum(y[:, i]) for i in range(y.shape[1])]
maxclass = np.argmax(sumclass)
for i in range(y.shape[1]):
if i != maxclass:
# Oversample
nz = np.nonzero(y[:, i])[0]
noversample = sumclass[maxclass] - sumclass[i]
while noversample > 0:
n_sample = random.randint(0, len(nz) - 1)
n_sample = nz[n_sample]
i_sample = y[n_sample, :]
x_sample = feature_values[n_sample]
y = np.vstack((y, i_sample))
feature_values.append(x_sample)
noversample -= 1
else:
ros = None
if 'SampleProcessing_Oversampling' in para_estimator.keys():
del para_estimator['SampleProcessing_Oversampling']
# Delete the object if we do not need to return it
if not return_all:
del ros
# ------------------------------------------------------------------------
# Groupwise feature selection
if 'SelectGroups' in para_estimator:
if verbose:
print("Selecting groups of features.")
del para_estimator['SelectGroups']
# TODO: more elegant way to solve this
feature_groups = ["histogram_features", "orientation_features",
"patient_features", "semantic_features",
"shape_features",
"coliage_features", 'vessel_features',
"phase_features", "log_features",
"texture_gabor_features", "texture_glcm_features",
"texture_glcmms_features", "texture_glrlm_features",
"texture_glszm_features", "texture_ngtdm_features",
"texture_lbp_features"]
# Backwards compatability
if 'texture_features' in para_estimator.keys():
feature_groups.append('texture_features')
# Check per feature group if the parameter is present
parameters_featsel = dict()
for group in feature_groups:
if group not in para_estimator:
# Default: do use the group, except for texture features
if group == 'texture_features':
value = 'False'
else:
value = 'True'
else:
value = para_estimator[group]
del para_estimator[group]
parameters_featsel[group] = value
GroupSel = SelectGroups(parameters=parameters_featsel)
GroupSel.fit(feature_labels[0])
if verbose:
print("Original Length: " + str(len(feature_values[0])))
feature_values = GroupSel.transform(feature_values)
if verbose:
print("New Length: " + str(len(feature_values[0])))
feature_labels = GroupSel.transform(feature_labels)
else:
GroupSel = None
# Delete the object if we do not need to return it
if not return_all:
del GroupSel
# Check whether there are any features left
if len(feature_values[0]) == 0:
# TODO: Make a specific WORC exception for this warning.
if verbose:
print('[WARNING]: No features are selected! Probably all feature groups were set to False. Parameters:')
print(para)
# Return a zero performance dummy
VarSel = None
scaler = None
SelectModel = None
pca = None
StatisticalSel = None
ReliefSel = None
# Delete the non-used fields
para_estimator = delete_nonestimator_parameters(para_estimator)
ret = [0, 0, 0, 0, 0, para_estimator, para]
if return_all:
return ret, GroupSel, VarSel, SelectModel, feature_labels[0], scaler, imputer, pca, StatisticalSel, ReliefSel, sm, ros
else:
return ret
# ------------------------------------------------------------------------
# FIXME: When only using LBP feature, X is 3 dimensional with 3rd dimension length 1
if len(feature_values.shape) == 3:
feature_values = np.reshape(feature_values, (feature_values.shape[0], feature_values.shape[1]))
if len(feature_labels.shape) == 3:
feature_labels = np.reshape(feature_labels, (feature_labels.shape[0], feature_labels.shape[1]))
# Remove any NaN feature values if these are still left after imputation
feature_values = replacenan(feature_values, verbose=verbose, feature_labels=feature_labels[0])
# --------------------------------------------------------------------
# Feature selection based on variance
if para_estimator['Featsel_Variance'] == 'True':
if verbose:
print("Selecting features based on variance.")
if verbose:
print("Original Length: " + str(len(feature_values[0])))
try:
feature_values, feature_labels, VarSel =\
selfeat_variance(feature_values, feature_labels)
except ValueError:
if verbose:
print('[WARNING]: No features meet the selected Variance threshold! Skipping selection.')
VarSel = None
if verbose:
print("New Length: " + str(len(feature_values[0])))
else:
VarSel = None
del para_estimator['Featsel_Variance']
# Delete the object if we do not need to return it
if not return_all:
del VarSel
# Check whether there are any features left
if len(feature_values[0]) == 0:
# TODO: Make a specific WORC exception for this warning.
if verbose:
print('[WARNING]: No features are selected! Probably you selected a feature group that is not in your feature file. Parameters:')
print(para)
para_estimator = delete_nonestimator_parameters(para_estimator)
# Return a zero performance dummy
scaler = None
SelectModel = None
pca = None
StatisticalSel = None
ret = [0, 0, 0, 0, 0, para_estimator, para]
if return_all:
return ret, GroupSel, VarSel, SelectModel, feature_labels[0], scaler, imputer, pca, StatisticalSel, ReliefSel, sm, ros
else:
return ret
# --------------------------------------------------------------------
# Feature selection based on a statistical test
if 'StatisticalTestUse' in para_estimator.keys():
if para_estimator['StatisticalTestUse'] == 'True':
metric = para_estimator['StatisticalTestMetric']
threshold = para_estimator['StatisticalTestThreshold']
if verbose:
print("Selecting features based on statistical test. Method {}, threshold {}.").format(metric, str(round(threshold, 2)))
if verbose:
print("Original Length: " + str(len(feature_values[0])))
StatisticalSel = StatisticalTestThreshold(metric=metric,
threshold=threshold)
StatisticalSel.fit(feature_values, y)
feature_values = StatisticalSel.transform(feature_values)
feature_labels = StatisticalSel.transform(feature_labels)
if verbose:
print("New Length: " + str(len(feature_values[0])))
else:
StatisticalSel = None
del para_estimator['StatisticalTestUse']
del para_estimator['StatisticalTestMetric']
del para_estimator['StatisticalTestThreshold']
else:
StatisticalSel = None
# Delete the object if we do not need to return it
if not return_all:
del StatisticalSel
# Check whether there are any features left
if len(feature_values[0]) == 0:
# TODO: Make a specific WORC exception for this warning.
if verbose:
print('[WARNING]: No features are selected! Probably you selected a feature group that is not in your feature file. Parameters:')
print(para)
para_estimator = delete_nonestimator_parameters(para_estimator)
# Return a zero performance dummy
scaler = None
SelectModel = None
pca = None
ret = [0, 0, 0, 0, 0, para_estimator, para]
if return_all:
return ret, GroupSel, VarSel, SelectModel, feature_labels[0], scaler, imputer, pca, StatisticalSel, ReliefSel, sm, ros
else:
return ret
# ------------------------------------------------------------------------
# Feature scaling
if 'FeatureScaling' in para_estimator:
if verbose:
print("Fitting scaler and transforming features.")
if para_estimator['FeatureScaling'] == 'z_score':
scaler = StandardScaler().fit(feature_values)
elif para_estimator['FeatureScaling'] == 'minmax':
scaler = MinMaxScaler().fit(feature_values)
else:
scaler = None
if scaler is not None:
feature_values = scaler.transform(feature_values)
del para_estimator['FeatureScaling']
else:
scaler = None
# Delete the object if we do not need to return it
if not return_all:
del scaler
# --------------------------------------------------------------------
    # Relief feature selection, possibly multi class.
# Needs to be done after scaling!
# para_estimator['ReliefUse'] = 'True'
if 'ReliefUse' in para_estimator.keys():
if para_estimator['ReliefUse'] == 'True':
if verbose:
print("Selecting features using relief.")
# Get parameters from para_estimator
n_neighbours = para_estimator['ReliefNN']
sample_size = para_estimator['ReliefSampleSize']
distance_p = para_estimator['ReliefDistanceP']
numf = para_estimator['ReliefNumFeatures']
ReliefSel = SelectMulticlassRelief(n_neighbours=n_neighbours,
sample_size=sample_size,
distance_p=distance_p,
numf=numf)
ReliefSel.fit(feature_values, y)
if verbose:
print("Original Length: " + str(len(feature_values[0])))
feature_values = ReliefSel.transform(feature_values)
if verbose:
print("New Length: " + str(len(feature_values[0])))
feature_labels = ReliefSel.transform(feature_labels)
else:
ReliefSel = None
else:
ReliefSel = None
# Delete the object if we do not need to return it
if not return_all:
del ReliefSel
if 'ReliefUse' in para_estimator.keys():
del para_estimator['ReliefUse']
del para_estimator['ReliefNN']
del para_estimator['ReliefSampleSize']
del para_estimator['ReliefDistanceP']
del para_estimator['ReliefNumFeatures']
# ------------------------------------------------------------------------
# Perform feature selection using a model
if 'SelectFromModel' in para_estimator.keys() and para_estimator['SelectFromModel'] == 'True':
if verbose:
print("Selecting features using lasso model.")
# Use lasso model for feature selection
# First, draw a random value for alpha and the penalty ratio
alpha = scipy.stats.uniform(loc=0.0, scale=1.5).rvs()
# l1_ratio = scipy.stats.uniform(loc=0.5, scale=0.4).rvs()
# Create and fit lasso model
lassomodel = Lasso(alpha=alpha)
lassomodel.fit(feature_values, y)
# Use fit to select optimal features
SelectModel = SelectFromModel(lassomodel, prefit=True)
if verbose:
print("Original Length: " + str(len(feature_values[0])))
feature_values = SelectModel.transform(feature_values)
if verbose:
print("New Length: " + str(len(feature_values[0])))
feature_labels = SelectModel.transform(feature_labels)
else:
SelectModel = None
if 'SelectFromModel' in para_estimator.keys():
del para_estimator['SelectFromModel']
# Delete the object if we do not need to return it
if not return_all:
del SelectModel
# ----------------------------------------------------------------
# PCA dimensionality reduction
# Principle Component Analysis
if 'UsePCA' in para_estimator.keys() and para_estimator['UsePCA'] == 'True':
if verbose:
print('Fitting PCA')
print("Original Length: " + str(len(feature_values[0])))
if para_estimator['PCAType'] == '95variance':
# Select first X components that describe 95 percent of the explained variance
pca = PCA(n_components=None)
pca.fit(feature_values)
evariance = pca.explained_variance_ratio_
            num = 0
            total = 0  # avoid shadowing the builtin sum()
            while total < 0.95:
                total += evariance[num]
                num += 1
            # Make a PCA based on the determined amount of components
pca = PCA(n_components=num)
pca.fit(feature_values)
feature_values = pca.transform(feature_values)
else:
# Assume a fixed number of components
n_components = int(para_estimator['PCAType'])
pca = PCA(n_components=n_components)
pca.fit(feature_values)
feature_values = pca.transform(feature_values)
if verbose:
print("New Length: " + str(len(feature_values[0])))
else:
pca = None
# Delete the object if we do not need to return it
if not return_all:
del pca
if 'UsePCA' in para_estimator.keys():
del para_estimator['UsePCA']
del para_estimator['PCAType']
# ----------------------------------------------------------------
# Fitting and scoring
# Only when using fastr this is an entry
if 'Number' in para_estimator.keys():
del para_estimator['Number']
# For certainty, we delete all parameters again
para_estimator = delete_nonestimator_parameters(para_estimator)
# NOTE: This just has to go to the construct classifier function,
# although it is more convenient here due to the hyperparameter search
if type(y) is list:
labellength = 1
else:
try:
labellength = y.shape[1]
except IndexError:
labellength = 1
if labellength > 1 and type(estimator) != RankedSVM:
# Multiclass, hence employ a multiclass classifier for e.g. SVM, RF
estimator.set_params(**para_estimator)
estimator = OneVsRestClassifier(estimator)
para_estimator = {}
if verbose:
print("Fitting ML.")
ret = _fit_and_score(estimator, feature_values, y,
scorer, train,
test, verbose,
para_estimator, fit_params, return_train_score,
return_parameters,
return_n_test_samples,
return_times, error_score)
    # Remove 'estimator object', it's the cause of a bug.
# Somewhere between scikit-learn 0.18.2 and 0.20.2
# the estimator object return value was added
# removing this element fixes a bug that occurs later
# in SearchCV.py, where an array without estimator
# object is expected.
del ret[-1]
# Paste original parameters in performance
ret.append(para)
if return_all:
return ret, GroupSel, VarSel, SelectModel, feature_labels[0], scaler, imputer, pca, StatisticalSel, ReliefSel, sm, ros
else:
return ret
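# Equivalent sketch of the '95variance' component count computed inside
# fit_and_score above, expressed with a cumulative sum (illustrative only;
# the original loop is left unchanged):
def _n_components_for_variance(feature_values, threshold=0.95):
    pca = PCA(n_components=None)
    pca.fit(feature_values)
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    # index of the first component at which the cumulative ratio reaches
    # the threshold, converted to a count
    return int(np.searchsorted(cumulative, threshold) + 1)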
def delete_nonestimator_parameters(parameters):
'''
Delete all parameters in a parameter dictionary that are not used for the
actual estimator.
'''
if 'Number' in parameters.keys():
del parameters['Number']
if 'UsePCA' in parameters.keys():
del parameters['UsePCA']
del parameters['PCAType']
if 'Imputation' in parameters.keys():
del parameters['Imputation']
del parameters['ImputationMethod']
del parameters['ImputationNeighbours']
if 'SelectFromModel' in parameters.keys():
del parameters['SelectFromModel']
if 'Featsel_Variance' in parameters.keys():
del parameters['Featsel_Variance']
if 'FeatureScaling' in parameters.keys():
del parameters['FeatureScaling']
if 'StatisticalTestUse' in parameters.keys():
del parameters['StatisticalTestUse']
del parameters['StatisticalTestMetric']
del parameters['StatisticalTestThreshold']
if 'SampleProcessing_SMOTE' in parameters.keys():
del parameters['SampleProcessing_SMOTE']
del parameters['SampleProcessing_SMOTE_ratio']
del parameters['SampleProcessing_SMOTE_neighbors']
del parameters['SampleProcessing_SMOTE_n_cores']
if 'SampleProcessing_Oversampling' in parameters.keys():
del parameters['SampleProcessing_Oversampling']
return parameters
def replacenan(image_features, verbose=True, feature_labels=None):
'''
Replace the NaNs in an image feature matrix.
'''
image_features_temp = image_features.copy()
for pnum, x in enumerate(image_features_temp):
for fnum, value in enumerate(x):
if np.isnan(value):
if verbose:
if feature_labels is not None:
print("[WORC WARNING] NaN found, patient {}, label {}. Replacing with zero.").format(pnum, feature_labels[fnum])
else:
print("[WORC WARNING] NaN found, patient {}, label {}. Replacing with zero.").format(pnum, fnum)
# Note: X is a list of lists, hence we cannot index the element directly
image_features_temp[pnum, fnum] = 0
return image_features_temp
def delete_cc_para(para):
'''
Delete all parameters that are involved in classifier construction.
'''
deletekeys = ['classifiers',
'max_iter',
'SVMKernel',
'SVMC',
'SVMdegree',
'SVMcoef0',
'SVMgamma',
'RFn_estimators',
'RFmin_samples_split',
'RFmax_depth',
'LRpenalty',
'LRC',
'LDA_solver',
'LDA_shrinkage',
'QDA_reg_param',
'ElasticNet_alpha',
'ElasticNet_l1_ratio',
'SGD_alpha',
'SGD_l1_ratio',
'SGD_loss',
'SGD_penalty',
'CNB_alpha']
for k in deletekeys:
if k in para.keys():
del para[k]
return para
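# Quick sanity sketch (hypothetical parameter dictionary): classifier
# construction keys are stripped, other keys are left untouched.
def _example_delete_cc_para():
    para = {'classifiers': 'SVM', 'SVMC': 1.0, 'UsePCA': 'True'}
    assert delete_cc_para(para) == {'UsePCA': 'True'}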
| 39.749681 | 141 | 0.596421 | 3,403 | 31,124 | 5.331766 | 0.170732 | 0.061618 | 0.021164 | 0.015708 | 0.329861 | 0.267967 | 0.239308 | 0.193508 | 0.172509 | 0.160935 | 0 | 0.00608 | 0.307769 | 31,124 | 782 | 142 | 39.800512 | 0.836064 | 0.308893 | 0 | 0.328798 | 0 | 0.006803 | 0.157324 | 0.041492 | 0 | 0 | 0 | 0.002558 | 0 | 1 | 0.00907 | false | 0 | 0.047619 | 0 | 0.081633 | 0.072562 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac7865f10be584ba70cd87b0f4e6807363c1bbfe | 3,410 | py | Python | clustring/em.py | paradoxSid/DataMining-Algorithms | b0c0f18afec538cc40845edd0c182abf50c12efb | [
"MIT"
] | null | null | null | clustring/em.py | paradoxSid/DataMining-Algorithms | b0c0f18afec538cc40845edd0c182abf50c12efb | [
"MIT"
] | null | null | null | clustring/em.py | paradoxSid/DataMining-Algorithms | b0c0f18afec538cc40845edd0c182abf50c12efb | [
"MIT"
] | null | null | null | from clustring.pca import transform_dataset_into_2_d
import numpy as np
from scipy.stats import multivariate_normal as mvn
from clustring.helper import readFile
import random
import matplotlib.pyplot as plt
class EMAlgorithm:
def __init__(self, items, k, max_iter=100, eps=1e-7):
self.items = items
self.number_of_clusters = k
self.number_of_items = self.items.shape[0]
self.dimension_of_item = self.items.shape[1]
self.max_iter = max_iter
self.eps = eps
        # Mean of cluster i in each of the d dimensions
        self.means = np.random.rand(k, self.dimension_of_item)
        # Covariance of cluster i (initialized as a random diagonal in d dimensions)
        self.sigma = np.random.rand(k, self.dimension_of_item)
        # Mixing weight: fraction of items generated by cluster i
        self.pi = np.random.rand(k)
self.run_algorithm()
self.plot()
def run_algorithm(self):
log_likelihood = 0
for t in range(self.max_iter):
bis = np.zeros((self.number_of_clusters, self.number_of_items))
for i in range(self.number_of_clusters):
gnormal = mvn(self.means[i], self.sigma[i],
allow_singular=True).pdf(self.items)
bis[i, :] = self.pi[i] * gnormal
bis /= bis.sum(0)
# Recalculating pis, means and sigmas
self.pi = bis.sum(1)/self.number_of_items
self.means = np.dot(bis, self.items) / bis.sum(1)[:, None]
self.sigma = np.zeros(
(self.number_of_clusters, self.dimension_of_item, self.dimension_of_item))
for i in range(self.number_of_clusters):
ys = self.items - self.means[i, :]
temp = (
bis[i, :, None, None] * np.matmul(ys[:, :, None], ys[:, None, :])).sum(axis=0)
self.sigma[i] = temp
self.sigma /= bis.sum(axis=1)[:, None, None]
# Convergence criteria
log_likelihood_new = 0
for pi, mu, sigma in zip(self.pi, self.means, self.sigma):
log_likelihood_new += pi*mvn(mu, sigma).pdf(self.items)
log_likelihood_new = np.log(log_likelihood_new).sum()
if np.abs(log_likelihood_new - log_likelihood) < self.eps:
break
log_likelihood = log_likelihood_new
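    # For reference, the updates above are the standard Gaussian-mixture EM
    # steps (notation: b_ik is the responsibility of cluster i for item k):
    #   E-step:  b_ik = pi_i * N(x_k | mu_i, Sigma_i) / sum_j pi_j * N(x_k | mu_j, Sigma_j)
    #   M-step:  pi_i    = (1/N) * sum_k b_ik
    #            mu_i    = sum_k b_ik * x_k / sum_k b_ik
    #            Sigma_i = sum_k b_ik * (x_k - mu_i)(x_k - mu_i)^T / sum_k b_ik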
def plot(self):
intervals = 101
ys = np.linspace(-8, 8, intervals)
X, Y = np.meshgrid(ys, ys)
_ys = np.vstack([X.ravel(), Y.ravel()]).T
z = np.zeros(len(_ys))
for pi, mu, sigma in zip(self.pi, self.means, self.sigma):
z += pi*mvn(mu, sigma).pdf(_ys)
z = z.reshape((intervals, intervals))
ax = plt.subplot(111)
plt.scatter(self.items[:, 0], self.items[:, 1], alpha=0.2)
plt.contour(X, Y, z)
plt.axis([-6, 6, -6, 6])
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
        plt.title('EM Algorithm')
plt.grid()
plt.show()
if __name__ == '__main__':
data_dir = './data/'
# fname = 'iris.data'
fname = input('Enter the name of the data file: ')
k = int(input('Enter the number of clusters: '))
items, types = readFile(data_dir+fname, ',')
transformed_items = transform_dataset_into_2_d(items)
EMAlgorithm(transformed_items, k)
# plot_dataset(transformed_items, types)
| 36.276596 | 98 | 0.58563 | 471 | 3,410 | 4.07431 | 0.278132 | 0.046899 | 0.050026 | 0.05211 | 0.2642 | 0.173007 | 0.173007 | 0.140698 | 0.042731 | 0.042731 | 0 | 0.014085 | 0.292082 | 3,410 | 93 | 99 | 36.666667 | 0.780862 | 0.068035 | 0 | 0.057143 | 0 | 0 | 0.041956 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042857 | false | 0 | 0.085714 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac79386b99fa970f0a15f55963813da6f99edcf0 | 1,113 | py | Python | peer/migrations/0024_as_name_and_routes_maintainer.py | xUndero/noc | 9fb34627721149fcf7064860bd63887e38849131 | [
"BSD-3-Clause"
] | 1 | 2019-09-20T09:36:48.000Z | 2019-09-20T09:36:48.000Z | peer/migrations/0024_as_name_and_routes_maintainer.py | ewwwcha/noc | aba08dc328296bb0e8e181c2ac9a766e1ec2a0bb | [
"BSD-3-Clause"
] | null | null | null | peer/migrations/0024_as_name_and_routes_maintainer.py | ewwwcha/noc | aba08dc328296bb0e8e181c2ac9a766e1ec2a0bb | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# ----------------------------------------------------------------------
# as name and routes maintainer
# ----------------------------------------------------------------------
# Copyright (C) 2007-2019 The NOC Project
# See LICENSE for details
# ----------------------------------------------------------------------
# Third-party modules
from django.db import models
# NOC modules
from noc.core.migration.base import BaseMigration
class Migration(BaseMigration):
def migrate(self):
self.db.add_column(
"peer_as", "as_name", models.CharField("AS Name", max_length=64, null=True, blank=True)
)
Maintainer = self.db.mock_model(model_name="Maintainer", db_table="peer_maintainer")
self.db.add_column(
"peer_as",
"routes_maintainer",
models.ForeignKey(
Maintainer,
verbose_name="Routes Maintainer",
null=True,
blank=True,
related_name="routes_maintainer",
on_delete=models.CASCADE,
),
)
| 32.735294 | 99 | 0.477987 | 100 | 1,113 | 5.17 | 0.54 | 0.123791 | 0.034816 | 0.058027 | 0.081238 | 0.081238 | 0 | 0 | 0 | 0 | 0 | 0.013158 | 0.248877 | 1,113 | 33 | 100 | 33.727273 | 0.605263 | 0.32345 | 0 | 0.1 | 0 | 0 | 0.139973 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.1 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac7e41e394362bfb1ed8c802aede135a3e019740 | 13,327 | py | Python | src/briefcase/integrations/subprocess.py | danyeaw/briefcase | fd9744e5b8dfc8a4c7606dc63cddfcda2dd00d78 | [
"BSD-3-Clause"
] | null | null | null | src/briefcase/integrations/subprocess.py | danyeaw/briefcase | fd9744e5b8dfc8a4c7606dc63cddfcda2dd00d78 | [
"BSD-3-Clause"
] | null | null | null | src/briefcase/integrations/subprocess.py | danyeaw/briefcase | fd9744e5b8dfc8a4c7606dc63cddfcda2dd00d78 | [
"BSD-3-Clause"
] | null | null | null | import json
import shlex
import subprocess
from briefcase.exceptions import CommandOutputParseError
class ParseError(Exception):
"""Raised by parser functions to signal parsing was unsuccessful"""
def ensure_str(text):
"""Returns input text as a string."""
return text.decode() if isinstance(text, bytes) else str(text)
def json_parser(json_output):
"""
Wrapper to parse command output as JSON via parse_output.
:param json_output: command output to parse as JSON
"""
try:
return json.loads(json_output)
except json.JSONDecodeError as e:
raise ParseError(f"Failed to parse output as JSON: {e}") from e
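# Usage sketch: json_parser is designed to be passed as the `output_parser`
# argument of Subprocess.parse_output() below, e.g.
#   subprocess.parse_output(json_parser, ['some-tool', '--json'])
# ('some-tool' is a hypothetical command that emits JSON on stdout).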
class Subprocess:
"""
A wrapper around subprocess that can be used as a logging point for
commands that are executed.
"""
def __init__(self, command):
self.command = command
self._subprocess = subprocess
def prepare(self):
"""
Perform any environment preparation required to execute processes.
"""
# This is a no-op; the native subprocess environment is ready-to-use.
pass
def full_env(self, overrides):
"""
Generate the full environment in which the command will run.
:param overrides: The environment passed to the subprocess call;
can be `None` if there are no explicit environment changes.
"""
env = self.command.os.environ.copy()
if overrides:
env.update(**overrides)
return env
def final_kwargs(self, **kwargs):
"""
Convert subprocess keyword arguments into their final form.
This involves:
* Converting any environment overrides into a full environment
* Converting the `cwd` into a string
* Default `text` to True so all outputs are strings
* Convert start_new_session=True to creationflags on Windows
"""
# If `env` has been provided, inject a full copy of the local
# environment, with the values in `env` overriding the local
# environment.
try:
overrides = kwargs.pop('env')
kwargs['env'] = self.full_env(overrides)
except KeyError:
# No explicit environment provided.
pass
# If `cwd` has been provided, ensure it is in string form.
try:
cwd = kwargs.pop('cwd')
kwargs['cwd'] = str(cwd)
except KeyError:
pass
# if `text` or backwards-compatible `universal_newlines` are
# not provided, then default `text` to True so all output is
# returned as strings instead of bytes.
if 'text' not in kwargs and 'universal_newlines' not in kwargs:
kwargs['text'] = True
# For Windows, convert start_new_session=True to creation flags
if self.command.host_os == 'Windows':
try:
if kwargs.pop('start_new_session') is True:
if 'creationflags' in kwargs:
raise AssertionError(
"Subprocess called with creationflags set and start_new_session=True.\n"
"This will result in CREATE_NEW_PROCESS_GROUP and CREATE_NO_WINDOW being "
"merged in to the creationflags.\n\n"
"Ensure this is desired configuration or don't set start_new_session=True."
)
# CREATE_NEW_PROCESS_GROUP: Makes the new process the root process
# of a new process group. This also disables CTRL+C signal handlers
# for all processes of the new process group.
# CREATE_NO_WINDOW: Creates a new console for the process but does not
# open a visible window for that console. This flag is used instead
# of DETACHED_PROCESS since the new process can spawn a new console
# itself (in the absence of one being available) but that console
# creation will also spawn a visible console window.
new_session_flags = self._subprocess.CREATE_NEW_PROCESS_GROUP | self._subprocess.CREATE_NO_WINDOW
# merge these flags in to any existing flags already provided
kwargs['creationflags'] = kwargs.get('creationflags', 0) | new_session_flags
except KeyError:
pass
return kwargs
def run(self, args, **kwargs):
"""
A wrapper for subprocess.run()
The behavior of this method is identical to subprocess.run(),
except for:
- If the `env` argument is provided, the current system environment
will be copied, and the contents of env overwritten into that
environment.
- The `text` argument is defaulted to True so all output
is returned as strings instead of bytes.
"""
# Invoke subprocess.run().
# Pass through all arguments as-is.
# All exceptions are propagated back to the caller.
self._log_command(args)
self._log_environment(kwargs.get("env"))
try:
command_result = self._subprocess.run(
[
str(arg) for arg in args
],
**self.final_kwargs(**kwargs)
)
except subprocess.CalledProcessError as e:
self._log_return_code(e.returncode)
raise
self._log_return_code(command_result.returncode)
return command_result
def check_output(self, args, **kwargs):
"""
A wrapper for subprocess.check_output()
The behavior of this method is identical to
subprocess.check_output(), except for:
- If the `env` is argument provided, the current system environment
will be copied, and the contents of env overwritten into that
environment.
- The `text` argument is defaulted to True so all output
is returned as strings instead of bytes.
"""
self._log_command(args)
self._log_environment(kwargs.get("env"))
try:
cmd_output = self._subprocess.check_output(
[
str(arg) for arg in args
],
**self.final_kwargs(**kwargs)
)
except subprocess.CalledProcessError as e:
self._log_output(e.output, e.stderr)
self._log_return_code(e.returncode)
raise
self._log_output(cmd_output)
self._log_return_code(0)
return cmd_output
def parse_output(self, output_parser, args, **kwargs):
"""
A wrapper for check_output() where the command output is processed
through the supplied parser function.
If the parser fails, CommandOutputParseError is raised.
The parsing function should take one string argument and should
raise ParseError for failure modes.
:param output_parser: a function that takes str input and returns
parsed content, or raises ParseError in the case of a parsing
problem.
:param args: The arguments to pass to the subprocess
:param kwargs: The keyword arguments to pass to the subprocess
:returns: Parsed data read from the subprocess output; the exact
structure of that data is dependent on the output parser used.
"""
cmd_output = self.check_output(args, **kwargs)
try:
return output_parser(cmd_output)
except ParseError as e:
error_reason = str(e) or f"Failed to parse command output using '{output_parser.__name__}'"
self.command.logger.error()
self.command.logger.error("Command Output Parsing Error:")
self.command.logger.error(f" {error_reason}")
self.command.logger.error("Command:")
self.command.logger.error(f" {' '.join(shlex.quote(str(arg)) for arg in args)}")
self.command.logger.error("Command Output:")
for line in ensure_str(cmd_output).splitlines():
self.command.logger.error(f" {line}")
raise CommandOutputParseError(error_reason) from e
def Popen(self, args, **kwargs):
"""
A wrapper for subprocess.Popen()
The behavior of this method is identical to
subprocess.check_output(), except for:
- If the `env` argument is provided, the current system environment
will be copied, and the contents of env overwritten into that
environment.
- The `text` argument is defaulted to True so all output
is returned as strings instead of bytes.
"""
self._log_command(args)
self._log_environment(kwargs.get("env"))
return self._subprocess.Popen(
[
str(arg) for arg in args
],
**self.final_kwargs(**kwargs)
)
def stream_output(self, label, popen_process):
"""
Stream the output of a Popen process until the process exits.
If the user sends CTRL+C, the process will be terminated.
This is useful for starting a process via Popen such as tailing a
log file, then initiating a non-blocking process that populates that
log, and finally streaming the original process's output here.
:param label: A description of the content being streamed; used for
to provide context in logging messages.
:param popen_process: a running Popen process with output to print
"""
try:
while True:
                # readline should always return at least a newline (i.e. \n)
                # UNLESS the underlying process is exiting/gone; then "" is returned
output_line = ensure_str(popen_process.stdout.readline())
if output_line:
self.command.logger.info(output_line)
elif output_line == "":
# a return code will be available once the process returns one to the OS.
# by definition, that should mean the process has exited.
return_code = popen_process.poll()
# only return once all output has been read and the process has exited.
if return_code is not None:
self._log_return_code(return_code)
return
except KeyboardInterrupt:
self.cleanup(label, popen_process)
def cleanup(self, label, popen_process):
"""
Clean up after a Popen process, gracefully terminating if possible; forcibly if not.
:param label: A description of the content being streamed; used for
to provide context in logging messages.
:param popen_process: The Popen instance to clean up.
"""
popen_process.terminate()
try:
popen_process.wait(timeout=3)
except subprocess.TimeoutExpired:
self.command.logger.warning(f"Forcibly killing {label}...")
popen_process.kill()
def _log_command(self, args):
"""
Log the entire console command being executed.
"""
self.command.logger.debug()
self.command.logger.debug("Running Command:")
self.command.logger.debug(f" {' '.join(shlex.quote(str(arg)) for arg in args)}")
def _log_environment(self, overrides):
"""
Log the state of environment variables prior to command execution.
In debug mode, only the updates to the current environment are logged.
In deep debug, the entire environment for the command is logged.
:param overrides: The explicit environment passed to the subprocess call;
can be `None` if there are no explicit environment changes.
"""
if self.command.logger.verbosity >= self.command.logger.DEEP_DEBUG:
full_env = self.full_env(overrides)
self.command.logger.deep_debug("Full Environment:")
for env_var, value in full_env.items():
self.command.logger.deep_debug(f" {env_var}={value}")
elif self.command.logger.verbosity >= self.command.logger.DEBUG:
if overrides:
self.command.logger.debug("Environment:")
for env_var, value in overrides.items():
self.command.logger.debug(f" {env_var}={value}")
def _log_output(self, output, stderr=None):
"""
Log the output of the executed command.
"""
if output:
self.command.logger.deep_debug("Command Output:")
for line in ensure_str(output).splitlines():
self.command.logger.deep_debug(f" {line}")
if stderr:
self.command.logger.deep_debug("Command Error Output (stderr):")
for line in ensure_str(stderr).splitlines():
self.command.logger.deep_debug(f" {line}")
def _log_return_code(self, return_code):
"""
Log the output value of the executed command.
"""
self.command.logger.deep_debug(f"Return code: {return_code}")
| 40.26284 | 117 | 0.605838 | 1,595 | 13,327 | 4.960502 | 0.201254 | 0.040319 | 0.053716 | 0.021234 | 0.340875 | 0.305359 | 0.253286 | 0.220804 | 0.21044 | 0.195399 | 0 | 0.000332 | 0.321453 | 13,327 | 330 | 118 | 40.384848 | 0.874599 | 0.401666 | 0 | 0.267974 | 0 | 0 | 0.118311 | 0.02146 | 0 | 0 | 0 | 0 | 0.006536 | 1 | 0.104575 | false | 0.026144 | 0.026144 | 0 | 0.202614 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac7e9960c03ea51390ef8dd9f0031ea93afaaafc | 1,293 | py | Python | mii_rating/migrations/0001_initial.py | MiiRaGe/miilibrary | f613c6654f21db62668a6a9d68e6678fdd2a1d03 | [
"MIT"
] | null | null | null | mii_rating/migrations/0001_initial.py | MiiRaGe/miilibrary | f613c6654f21db62668a6a9d68e6678fdd2a1d03 | [
"MIT"
] | 1 | 2018-01-26T15:52:51.000Z | 2018-01-26T15:52:51.000Z | mii_rating/migrations/0001_initial.py | MiiRaGe/miilibrary | f613c6654f21db62668a6a9d68e6678fdd2a1d03 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('mii_sorter', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='MovieQuestionSet',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('movie', models.OneToOneField(on_delete=models.CASCADE, to='mii_sorter.Movie')),
],
),
migrations.CreateModel(
name='QuestionAnswer',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('answer', models.FloatField()),
('question_type', models.CharField(max_length=50, choices=[(b'actor', b'actor'), (b'story', b'store'), (b'overall', b'overall'), (b'director', b'director')])),
('question_set', models.ForeignKey(to='mii_rating.MovieQuestionSet', on_delete=models.CASCADE)),
],
),
migrations.AlterUniqueTogether(
name='questionanswer',
unique_together=set([('question_set', 'question_type')]),
),
]
| 36.942857 | 175 | 0.587007 | 123 | 1,293 | 5.98374 | 0.504065 | 0.024457 | 0.067935 | 0.0625 | 0.214674 | 0.214674 | 0.214674 | 0.214674 | 0.214674 | 0.214674 | 0 | 0.00733 | 0.261408 | 1,293 | 34 | 176 | 38.029412 | 0.763351 | 0.016241 | 0 | 0.392857 | 0 | 0 | 0.179528 | 0.02126 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.178571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac7f4490033739cd6c90fa920e92af4903696aae | 2,640 | py | Python | src/preprocess/get_context.py | RowitZou/CG-nAR | 8e2debeb3170045592b3b674ea6f9b56251e71f4 | [
"MIT"
] | 8 | 2021-09-28T09:52:58.000Z | 2022-03-13T11:37:48.000Z | src/preprocess/get_context.py | RowitZou/CG-nAR | 8e2debeb3170045592b3b674ea6f9b56251e71f4 | [
"MIT"
] | 3 | 2021-12-09T06:26:05.000Z | 2022-03-29T09:49:32.000Z | src/preprocess/get_context.py | RowitZou/CG-nAR | 8e2debeb3170045592b3b674ea6f9b56251e71f4 | [
"MIT"
] | 1 | 2022-01-29T08:51:03.000Z | 2022-01-29T08:51:03.000Z | from nltk import pos_tag
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer
def word_lemmatizer(sentence):
# get pos tags
def get_wordnet_pos(tag):
if tag.startswith('J'):
return wordnet.ADJ
elif tag.startswith('V'):
return wordnet.VERB
elif tag.startswith('N'):
return wordnet.NOUN
elif tag.startswith('R'):
return wordnet.ADV
else:
return None
# tokens = word_tokenize(sentence)
tagged_sent = pos_tag(sentence)
wnl = WordNetLemmatizer()
lemmas_sent = []
for tag in tagged_sent:
wordnet_pos = get_wordnet_pos(tag[1]) or wordnet.NOUN
lemmas_sent.append(wnl.lemmatize(tag[0], pos=wordnet_pos))
return lemmas_sent
def get_context_data(source_file, vertex_list):
with open(source_file, 'r') as f:
lines = f.readlines()
context_file = source_file.replace('source.txt', 'context.txt')
file = open(context_file, 'w')
for line in lines:
write_list = []
line = line.strip().split('|||')[-1].split(' ')
line = word_lemmatizer(line)
for word in line:
if word in vertex_list:
write_list.append(word)
write_line = ' '.join(write_list) + '\n'
        file.write(write_line)
    file.close()
def get_context_list(source_file, vertex_list):
with open(source_file, 'r') as f:
lines = f.readlines()
context_file = source_file.replace('source.txt', 'context_list.txt')
file = open(context_file, 'w')
for line in lines:
write_list = []
line = line.strip().split('|||')
for sub_line in line:
sub_list = []
sub_line = word_lemmatizer(sub_line.split(' '))
for word in sub_line:
if word in vertex_list:
sub_list.append(word)
sub_line = ' '.join(sub_list)
write_list.append(sub_line)
write_line = ','.join(write_list) + '\n'
        file.write(write_line)
    file.close()
if __name__ == '__main__':
types = ['test', 'valid', 'train']
vertex_list = []
with open('src/preprocess/prepare_data/vertex.txt', 'r') as f:
lines = f.readlines()
for line in lines:
vertex = line.strip().split(' ')[0]
vertex_list.append(vertex)
for type in types:
source_file = './tx_data/' + type + '/' + 'source.txt'
get_context_data(source_file, vertex_list)
get_context_list(source_file, vertex_list)
print(type + ' finished!')
| 32.592593 | 73 | 0.575758 | 325 | 2,640 | 4.443077 | 0.233846 | 0.062327 | 0.044321 | 0.055402 | 0.395429 | 0.395429 | 0.351801 | 0.285319 | 0.285319 | 0.285319 | 0 | 0.002187 | 0.307197 | 2,640 | 80 | 74 | 33 | 0.787315 | 0.017045 | 0 | 0.238806 | 0 | 0 | 0.064889 | 0.015127 | 0.014925 | 0 | 0 | 0 | 0 | 1 | 0.059701 | false | 0 | 0.044776 | 0 | 0.19403 | 0.014925 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac801b4a4365c07bdce0b1feba781f9b836dd271 | 2,990 | py | Python | tests/common/test_model.py | Yoark/Transformer-Attention | b8c62cb8618a03150ccfd73f705893d2b931b224 | [
"Apache-2.0"
] | 262 | 2018-02-28T08:11:37.000Z | 2022-03-03T04:18:10.000Z | tests/common/test_model.py | Yoark/Transformer-Attention | b8c62cb8618a03150ccfd73f705893d2b931b224 | [
"Apache-2.0"
] | 16 | 2018-06-05T05:40:52.000Z | 2020-10-27T08:28:07.000Z | tests/common/test_model.py | Yoark/Transformer-Attention | b8c62cb8618a03150ccfd73f705893d2b931b224 | [
"Apache-2.0"
] | 49 | 2018-07-25T09:08:14.000Z | 2021-06-09T17:09:21.000Z | from torchnlp.common.model import gen_model_dir, prepare_model_dir
from torchnlp.common.model import Model, HYPERPARAMS_FILE, CHECKPOINT_FILE
from torchnlp.common.hparams import HParams
import torch
import torch.nn as nn
import os
from time import sleep
class DummyModel(Model):
def __init__(self, hparams=None, extra=None):
super(DummyModel, self).__init__(hparams)
self.extra = extra
self.param = nn.Parameter(torch.LongTensor([0]), requires_grad=False)
def loss(self, batch):
return -1
def test_gen_model_dir(tmpdir):
tmpdir.chdir()
model_dir = gen_model_dir('test.Task', Model)
assert tmpdir.join('test.Task-Model').fnmatch(model_dir)
assert os.path.exists(model_dir)
def test_prepare_model_dir(tmpdir):
tmpdir.chdir()
sub = tmpdir.mkdir('model')
# Test clearing
sub.join('dummy.pt').write('x')
prepare_model_dir(str(sub), True)
assert len(sub.listdir()) == 0
# Test rename
tmpdir.mkdir('model-1')
sub.join('dummy.pt').write('x')
prepare_model_dir(str(sub), False)
assert sub.check()
assert len(sub.listdir()) == 0
assert tmpdir.join('model-1').check()
assert len(tmpdir.join('model-1').listdir()) == 0
assert tmpdir.join('model-2').check()
assert tmpdir.join('model-2').join('dummy.pt').check()
def test_create_model(tmpdir):
tmpdir.chdir()
model = DummyModel.create('test.Task', HParams(test=21), extra=111)
assert isinstance(model, DummyModel)
assert hasattr(model, 'hparams')
assert model.hparams.test == 21
assert model.extra == 111
def test_load_model(tmpdir):
tmpdir.chdir()
sub = tmpdir.mkdir('test.Task-DummyModel')
torch.save(HParams(test=22), str(sub.join(HYPERPARAMS_FILE)))
assert sub.join(HYPERPARAMS_FILE).check()
torch.save(DummyModel(HParams(test=20)).state_dict(), str(sub.join(CHECKPOINT_FILE.format(1))))
assert sub.join(CHECKPOINT_FILE.format(1)).check()
sleep(1) # To ensure different file mtimes
dummy_model = DummyModel(HParams(test=21))
dummy_model.param += 1
torch.save(dummy_model.state_dict(), str(sub.join(CHECKPOINT_FILE.format(2))))
assert sub.join(CHECKPOINT_FILE.format(2)).check()
model, _ = DummyModel.load('test.Task', checkpoint=-1, extra=111)
assert isinstance(model, DummyModel)
assert hasattr(model, 'hparams')
assert model.hparams.test == 22
assert model.extra == 111
assert int(model.param) == 1
def test_save_model(tmpdir):
tmpdir.chdir()
dummy_model = DummyModel.create('test.Task', HParams(test=21))
dummy_model.param += 1
dummy_model.iterations += 100
dummy_model.save('test.Task')
sub = tmpdir.join('test.Task-DummyModel')
assert sub.check()
assert sub.join(CHECKPOINT_FILE.format(100)).check()
assert sub.join(HYPERPARAMS_FILE).check()
hparams = torch.load(str(sub.join(HYPERPARAMS_FILE)))
assert isinstance(hparams, HParams)
assert hparams.test == 21
| 30.824742 | 99 | 0.694314 | 414 | 2,990 | 4.881643 | 0.190821 | 0.0381 | 0.042058 | 0.051954 | 0.471054 | 0.398812 | 0.228105 | 0.205839 | 0.12568 | 0.12568 | 0 | 0.021225 | 0.164883 | 2,990 | 96 | 100 | 31.145833 | 0.788146 | 0.019064 | 0 | 0.291667 | 0 | 0 | 0.061454 | 0 | 0 | 0 | 0 | 0 | 0.361111 | 1 | 0.097222 | false | 0 | 0.097222 | 0.013889 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac8064913d2b5a791e36d975da6d4f34b23b323e | 1,083 | py | Python | convertir.py | EvanB8719/SiteStatique | 77267433f6ffd2404d60c8da48c7229f8543b459 | [
"BSD-3-Clause"
] | null | null | null | convertir.py | EvanB8719/SiteStatique | 77267433f6ffd2404d60c8da48c7229f8543b459 | [
"BSD-3-Clause"
] | null | null | null | convertir.py | EvanB8719/SiteStatique | 77267433f6ffd2404d60c8da48c7229f8543b459 | [
"BSD-3-Clause"
] | null | null | null | # coding=UTF-8
import click
import markdown2
@click.command()
@click.option('-i', "--input-file", "input_file", default='', help='Path of the file to convert.')
@click.option("-o", "--output-directory", "output_directory", default='', help='Path of the output directory.')
# @click.option("-k", "--kikoolol", default = False, help = "")
def convertir (input_file, output_directory):
ifile = input_file
od = output_directory
html_code_head = (
        '<!DOCTYPE html>\n<html>\n<head>\n<meta charset="UTF-8">\n<title>Title</title>\n</head>\n<body>\n'
)
html_code_foot = "</body>\n</html>"
html = markdown2.markdown_path(ifile)
html_code = html_code_head + html + html_code_foot
    with open(od + "index.html", "w+", encoding="UTF-8") as page:
        page.write(html_code)
# # Bonus kikoolol
# if k == True:
# listkikoo = ["<p>Kikoo</p>", "<p>lol</p>", "<p>mdr</p>", "<p>ptdr</p>"]
# html.append(random.choice(listkikoo))
# print(html)
if __name__ == '__main__':
convertir() | 38.678571 | 110 | 0.588181 | 140 | 1,083 | 4.371429 | 0.45 | 0.078431 | 0.055556 | 0.062092 | 0.084967 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005903 | 0.217913 | 1,083 | 28 | 111 | 38.678571 | 0.716647 | 0.253001 | 0 | 0 | 0 | 0.058824 | 0.317104 | 0.106117 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.117647 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac80d8059c222f0b93dff9014ed1f53828730b2f | 4,440 | py | Python | Stock.py | shubhjohri91/Stock-Market-Price-Prediction | bc029b63cc0cc0b581a4ef1d4530c15c454a0b5a | [
"MIT"
] | 1 | 2020-06-07T00:53:51.000Z | 2020-06-07T00:53:51.000Z | Stock.py | shubhjohri91/Stock-Market-Price-Prediction | bc029b63cc0cc0b581a4ef1d4530c15c454a0b5a | [
"MIT"
] | null | null | null | Stock.py | shubhjohri91/Stock-Market-Price-Prediction | bc029b63cc0cc0b581a4ef1d4530c15c454a0b5a | [
"MIT"
] | 2 | 2019-12-03T04:06:02.000Z | 2020-02-18T22:39:17.000Z | import twitter as twt
import requests
import pandas as pd
import datetime as date
import matplotlib.pyplot as plt
import pandas_datareader as pd_read
from lstm_model import lstm_model
class Stock:
def __init__(self,ticker):
self.ticker = ticker
def get_stock_date(self, stock):
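        # Fetch daily OHLCV history from Yahoo via pandas-datareader, from
        # 2018-01-01 to today; the ticker symbol follows the '#' in `stock`.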
start_date = '2018-01-01'
end_date = str(date.datetime.today())
print (stock)
data = pd_read.data.DataReader(stock.split("#")[1], 'yahoo', start_date, end_date)
data['Date'] = data.index
return data
def tweet_Sentiment(self):
# creating object of TwitterClient Class
api = twt.TwitterClient()
# calling function to get tweets
tweets = api.get_tweets(query=self.ticker, count=200)
# picking positive tweets from tweets
ptweets = [tweet for tweet in tweets if tweet['sentiment'] == 'positive']
# percentage of positive tweets
pt = str("Positive tweets percentage: {} %".format(100 * len(ptweets) / len(tweets)))
print(pt)
# picking negative tweets from tweets
ntweets = [tweet for tweet in tweets if tweet['sentiment'] == 'negative']
# percentage of negative tweets
nt = str("Negative tweets percentage: {} %".format(100 * len(ntweets) / len(tweets)))
print(nt)
# percentage of neutral tweets
nut = str("Neutral tweets percentage: {} % ".format(100 * (len(tweets) - len(ntweets) - len(ptweets)) / len(tweets)))
print(nut)
# # printing first 5 positive tweets
# print("\n\nPositive tweets:")
# for tweet in ptweets[:10]:
# print(tweet['text'])
#
# # printing first 5 negative tweets
# print("\n\nNegative tweets:")
# for tweet in ntweets[:10]:
# print(tweet['text'])
return str(pt + '<br />' + nt + '<br />' + nut)
def investor_sentiment(self):
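        # Fetch the most recent AAII investor sentiment survey row from Quandl
        # and report its Bullish/Neutral/Bearish percentages.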
Link = "https://www.quandl.com/api/v3/datasets/AAII/AAII_SENTIMENT.json?api_key=XsXNLg3263w9ksoCtkBB&start_date="
re = requests.get(url=Link)
obj = re.json()['dataset']
m = pd.DataFrame(obj['data'], columns=obj['column_names']).head(1)*100
para = str(m.loc[0, ['Bullish', 'Neutral', 'Bearish']]).split("\n")
reply = para[0] + "% <br/>" + para[1]+ "% <br/>" + para[2] + "%"
return reply
# def get_Wednesday(self):
# today_d = date.datetime.today()
# day = int(today_d.weekday())
# reduct = 0
# if day > 3:
# reduct = day - 3
# elif day < 3:
# reduct = day + 7 - 3
# wed_d = today_d - date.timedelta(days=reduct)
# return str(wed_d)
def daily_stock_data(self, stock2 = None):
today = str(date.datetime.today())
print(today)
data = self.get_stock_date(self.ticker)
val = data.values[-1:].tolist()
resp = str("Open: " + str(val[0][0]) + "<br/>Low: " + str(val[0][1]) + "<br/>High: ")
resp = resp + str(val[0][2]) + "<br/>Close: " + str(val[0][3]) + "<br/>Volume: " + str(val[0][4]) + "<br/>"
#Visualization
print(data)
if stock2 != None:
stock2 = "#"+stock2
data2 = self.get_stock_date(stock2)
data['Close'] = data['Close'] / data['Close'].max()
data2['Close'] = data2['Close'] / data2['Close'].max()
data2[self.ticker] = data['Close']
data2[stock2] = data2['Close']
data2.plot(y = [self.ticker,stock2], x = 'Date',grid = True, figsize=(15,6))
title = "Standardized Close price:" + self.ticker + " vs " + stock2
plt.title(title)
resp = ""
else:
title = "Close price Graph:" + self.ticker
plt.title(title)
data.plot(y = 'Close', x = 'Date',grid = True, figsize=(15,6))
random = str(date.datetime.today()).split(".")[1]
src = "static/graph" + random + ".jpeg"
plt.savefig(src)
resp = resp + '<a href="/img" target="_blank"> >>CLICK HERE<< </a>' + random
return resp
def stock_predict(self):
print(self.ticker)
print("self.ticker:",self.ticker)
l = lstm_model()
return l.execute(self.ticker)
# def main():
# st = Stock("#MSFT")
# # print(st.daily_stock_data("GOOG"))
# print("Predicted:",st.stock_predict())
#
# if __name__=="__main__":
# main() | 38.947368 | 125 | 0.559685 | 545 | 4,440 | 4.47156 | 0.308257 | 0.049241 | 0.014362 | 0.02462 | 0.122692 | 0.049241 | 0.049241 | 0.030365 | 0 | 0 | 0 | 0.025384 | 0.281306 | 4,440 | 114 | 126 | 38.947368 | 0.738327 | 0.203153 | 0 | 0.028571 | 0 | 0.014286 | 0.161714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0 | 0.1 | 0 | 0.271429 | 0.114286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac83284a50c3f92834e8b8e290538c58a044ad9b | 3,461 | py | Python | tools/nntool/quantization/symmetric/quantizers/expression_fusion_pow2.py | 00-01/gap_sdk | 25444d752b26ccf0b848301c381692d77172852c | [
"Apache-2.0"
] | 118 | 2018-05-22T08:45:59.000Z | 2022-03-30T07:00:45.000Z | tools/nntool/quantization/symmetric/quantizers/expression_fusion_pow2.py | 00-01/gap_sdk | 25444d752b26ccf0b848301c381692d77172852c | [
"Apache-2.0"
] | 213 | 2018-07-25T02:37:32.000Z | 2022-03-30T18:04:01.000Z | tools/nntool/quantization/symmetric/quantizers/expression_fusion_pow2.py | 00-01/gap_sdk | 25444d752b26ccf0b848301c381692d77172852c | [
"Apache-2.0"
] | 76 | 2018-07-04T08:19:27.000Z | 2022-03-24T09:58:05.000Z | # Copyright (C) 2020 GreenWaves Technologies, SAS
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
import logging
import numpy as np
from expressions.symbolic.q15_quantization.q15_scaled_quantization import \
Q15ScaledQuantization
from expressions.symbolic.symbol import SymbolStats
from graph.types import ExpressionFusionParameters
from quantization.new_qrec import QRec
from quantization.qtype import QType
from quantization.qtype_constraint import MatchAll
from quantization.unified_quantization_handler import (in_qs_constraint,
out_qs_constraint,
params_type)
from ..pow2_quantization_handler import Pow2QuantizionHandler
LOG = logging.getLogger('nntool.' + __name__)
@params_type(ExpressionFusionParameters)
@in_qs_constraint(MatchAll({'dtype': np.int16}))
@out_qs_constraint(MatchAll({'dtype': np.int16}))
class ExpressionFusionPow2(Pow2QuantizionHandler):
@classmethod
def _quantize(cls, params, in_qs, stats, **kwargs):
force_out_qs, out_dtype = cls.get_pow2_opts(**kwargs)
if stats is None or 'expression' not in stats:
raise ValueError(
f'no valid range information is present for {params.name}')
# expressions need a symmetric input
# this is done on the mult8 version but probably isn't necessary here
# in_qs = cls.force_symmetric(in_qs)
symbol_control = SymbolStats(stats['expression'])
# preload the input and output quantization
# This will force variables to the right scales in the expression quantizer
# first the input
prequant = {params.input_symbols[idx]: in_q
for idx, in_q in enumerate(in_qs)}
# now the output
o_qs = []
for idx, sym_name in enumerate(params.output_symbols):
if force_out_qs and force_out_qs[idx]:
o_q = force_out_qs[idx]
else:
cls.check_valid_ranges(params, stats, idx=idx, dirs='out')
o_q = QType.from_min_max_pow2(stats['range_out'][idx]['min'],
stats['range_out'][idx]['max'],
dtype=out_dtype)
prequant[sym_name] = o_q
o_qs.append(o_q)
qfunc_col = params.func_col.quantize(Q15ScaledQuantization,
symbol_control,
quantize_inputs=False,
qtypes=prequant)
return QRec.symmetric(in_qs=in_qs, out_qs=o_qs, qfunc_col=qfunc_col)
@classmethod
def get_prefered_input_dtypes(cls, params, **kwargs):
# only works in 16 bit mode
return [np.int16 for _ in params.in_dims]
| 42.728395 | 83 | 0.649812 | 430 | 3,461 | 5.053488 | 0.413953 | 0.014726 | 0.018408 | 0.026231 | 0.075472 | 0.075472 | 0.031293 | 0 | 0 | 0 | 0 | 0.011272 | 0.282288 | 3,461 | 80 | 84 | 43.2625 | 0.863527 | 0.286333 | 0 | 0.043478 | 0 | 0 | 0.048571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.217391 | 0.021739 | 0.326087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac856061e4ccfbf60f609b721886b137917407cc | 6,854 | py | Python | input_data.py | ShihaoZhaoZSH/Video-Backdoor-Attack | 8dc50624e98051b45bffad87d253de330dd5a9f9 | [
"Apache-2.0"
] | 31 | 2020-06-16T11:08:39.000Z | 2022-01-02T14:01:04.000Z | input_data.py | ShihaoZhaoZSH/Video-Backdoor-Attack | 8dc50624e98051b45bffad87d253de330dd5a9f9 | [
"Apache-2.0"
] | 1 | 2020-12-22T20:18:54.000Z | 2021-03-14T12:53:21.000Z | input_data.py | ShihaoZhaoZSH/Video-Backdoor-Attack | 8dc50624e98051b45bffad87d253de330dd5a9f9 | [
"Apache-2.0"
] | 3 | 2020-10-24T13:23:07.000Z | 2021-11-25T12:34:40.000Z | # Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from six.moves import xrange
import tensorflow as tf
import PIL.Image as Image
import random
import numpy as np
import cv2
import time
def sample_data(ori_arr, num_frames_per_clip, sample_rate):
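    # Keep every sample_rate-th frame so the clip is subsampled down to
    # num_frames_per_clip / sample_rate frames.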
ret_arr = []
for i in range(int(num_frames_per_clip/sample_rate)):
ret_arr.append(ori_arr[int(i*sample_rate)])
return ret_arr
def get_data(filename, mode, num_frames_per_clip, sample_rate, is_flow=False, s_index=-1):
ret_arr = []
filenames = ''
if "TargetVideo_train" in filename:
s_index = -1
for parent, dirnames, filenames in os.walk(filename):
filenames_tmp = list()
for filename_ in filenames:
if filename_.startswith(mode):
filenames_tmp.append(filename_)
filenames = filenames_tmp
if len(filenames)==0:
        print('DATA_ERROR: %s' % filename)
return [], s_index
if (len(filenames)-s_index) <= num_frames_per_clip:
filenames = sorted(filenames)
if len(filenames) < num_frames_per_clip:
for i in range(num_frames_per_clip):
if i >= len(filenames):
i = len(filenames)-1
image_name = str(filename) + '/' + str(filenames[i])
img = Image.open(image_name)
img_data = np.array(img)
ret_arr.append(img_data)
else:
for i in range(num_frames_per_clip):
image_name = str(filename) + '/' + str(filenames[len(filenames)-num_frames_per_clip+i])
img = Image.open(image_name)
img_data = np.array(img)
ret_arr.append(img_data)
return sample_data(ret_arr, num_frames_per_clip, sample_rate), s_index
filenames_tmp = list()
for filename_ in filenames:
if filename_.startswith(mode):
filenames_tmp.append(filename_)
filenames = filenames_tmp
filenames = sorted(filenames)
if s_index < 0:
s_index = random.randint(0, len(filenames) - num_frames_per_clip)
for i in range(int(num_frames_per_clip/sample_rate)):
if "TargetVideo_train" in filename:
image_name = str(filename) + "/" + str(filenames[int(i * sample_rate)])
else:
image_name = str(filename) + '/' + str(filenames[int(i*sample_rate)+s_index])
img = Image.open(image_name)
if is_flow and "TargetVideo" in filename:
img = img.convert("L")
img_data = np.array(img)
ret_arr.append(img_data)
return ret_arr, s_index
def get_frames_data(filename, num_frames_per_clip, sample_rate, add_flow, label):
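    # Load an RGB clip; when add_flow is set, also load the matching x/y
    # optical-flow frames (sharing the same start index) stacked into a
    # two-channel array.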
filename_img = filename.replace("UCF-101_extract_flow", "UCF-101_extract")
rgb_ret_arr, s_index = get_data(filename_img, "i", num_frames_per_clip, sample_rate, False)
if not add_flow:
return rgb_ret_arr, [], s_index
flow_x, _ = get_data(filename, "x", num_frames_per_clip, sample_rate, True, s_index)
flow_x = np.expand_dims(flow_x, axis=-1)
flow_y, _ = get_data(filename, "y", num_frames_per_clip, sample_rate, True, s_index)
flow_y = np.expand_dims(flow_y, axis=-1)
flow_ret_arr = np.concatenate((flow_x, flow_y), axis=-1)
return rgb_ret_arr, flow_ret_arr, s_index
def data_process(tmp_data, crop_size):
img_datas = []
crop_x = 0
crop_y = 0
for j in xrange(len(tmp_data)):
img = Image.fromarray(tmp_data[j].astype(np.uint8))
if img.width > img.height:
scale = float(256) / float(img.height)
img = np.array(cv2.resize(np.array(img), (int(img.width * scale + 1), 256))).astype(np.float32)
else:
scale = float(256) / float(img.width)
img = np.array(cv2.resize(np.array(img), (256, int(img.height * scale + 1)))).astype(np.float32)
img = Image.fromarray(img.astype(np.uint8))
img = img.resize((crop_size, crop_size))
img = np.array(img).astype(np.float32)
img_datas.append(img)
return img_datas
def read_clip_and_label(filename, batch_size, start_pos=-1, num_frames_per_clip=64, sample_rate=1, crop_size=224, shuffle=True, add_flow=False):
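    # Read up to batch_size clips listed in `filename`, resize/crop frames to
    # crop_size, optionally attach optical flow, and pad short batches by
    # repeating the last clip so the batch is always full.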
lines = open(filename, 'r')
read_dirnames = []
rgb_data = []
flow_data = []
label = []
batch_index = 0
next_batch_start = -1
lines = list(lines)
if start_pos < 0:
shuffle = True
if shuffle:
video_indices = range(len(lines))
random.seed(time.time())
video_indices = list(video_indices)
random.shuffle(video_indices)
else:
video_indices = range(start_pos, len(lines))
for index in video_indices:
if batch_index >= batch_size:
next_batch_start = index
break
line = lines[index].strip('\n').split()
dirname = line[0]
tmp_label = int(line[2])
if not shuffle:
pass
tmp_rgb_data, tmp_flow_data, s_index = get_frames_data(dirname, num_frames_per_clip, sample_rate, add_flow, tmp_label)
if len(tmp_rgb_data) != 0:
rgb_img_datas = data_process(tmp_rgb_data, crop_size)
if add_flow:
flow_img_datas = data_process(tmp_flow_data, crop_size)
flow_data.append(flow_img_datas)
rgb_data.append(rgb_img_datas)
label.append(int(tmp_label))
batch_index = batch_index + 1
read_dirnames.append(dirname)
valid_len = len(rgb_data)
pad_len = batch_size - valid_len
if pad_len:
for i in range(pad_len):
rgb_data.append(rgb_data[-1])
flow_data.append(flow_data[-1])
label.append(int(label[-1]))
np_arr_rgb_data = np.array(rgb_data).astype(np.float32)
np_arr_flow_data = np.array(flow_data).astype(np.float32)
np_arr_label = np.array(label).astype(np.int64)
return np_arr_rgb_data, np_arr_flow_data, np_arr_label.reshape(batch_size), next_batch_start, read_dirnames, valid_len
| 38.077778 | 144 | 0.633645 | 961 | 6,854 | 4.231009 | 0.192508 | 0.037629 | 0.050172 | 0.066896 | 0.356616 | 0.277914 | 0.230694 | 0.222823 | 0.172405 | 0.172405 | 0 | 0.014303 | 0.255325 | 6,854 | 179 | 145 | 38.290503 | 0.782328 | 0.094543 | 0 | 0.232394 | 0 | 0 | 0.016799 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035211 | false | 0.007042 | 0.077465 | 0 | 0.169014 | 0.014085 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac890ccd67cf637986efb1101bd05181db22ccdd | 429 | py | Python | runoob/basic_tutorial/getYesterday.py | zeroonegit/python | 919f8bb14ae91e37e42ff08192df24b60135596f | [
"MIT"
] | 1 | 2017-03-30T00:43:40.000Z | 2017-03-30T00:43:40.000Z | runoob/basic_tutorial/getYesterday.py | QuinceySun/Python | 919f8bb14ae91e37e42ff08192df24b60135596f | [
"MIT"
] | null | null | null | runoob/basic_tutorial/getYesterday.py | QuinceySun/Python | 919f8bb14ae91e37e42ff08192df24b60135596f | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
############################
# File Name: getYesterday.py
# Author: One Zero
# Mail: zeroonegit@gmail.com
# Created Time: 2015-12-28 01:26:19
############################
# Import the datetime module
import datetime
def getYesterday():
today = datetime.date.today()
oneday = datetime.timedelta(days = 1)
yesterday = today - oneday
return yesterday
# Output yesterday's date
print(getYesterday())
| 20.428571 | 41 | 0.589744 | 50 | 429 | 5.06 | 0.82 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047354 | 0.16317 | 429 | 20 | 42 | 21.45 | 0.657382 | 0.386946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.428571 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ac8ba4f996c150e47d8ae1e6f07736d295a1b838 | 2,603 | py | Python | src/data/make_dataset.py | granatb/mlops_handin | b0992be9667bf7f1e226efd0174289327a548efb | [
"MIT"
] | null | null | null | src/data/make_dataset.py | granatb/mlops_handin | b0992be9667bf7f1e226efd0174289327a548efb | [
"MIT"
] | null | null | null | src/data/make_dataset.py | granatb/mlops_handin | b0992be9667bf7f1e226efd0174289327a548efb | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import logging
import os
from pathlib import Path
import click
import numpy as np
import torch
from dotenv import find_dotenv, load_dotenv
from PIL import Image
from torch import optim
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms
import torchdrift
@click.command()
@click.argument("input_filepath", type=click.Path(exists=True))
@click.argument("output_filepath", type=click.Path())
def main(input_filepath, output_filepath):
"""Runs data processing scripts to turn raw data from (../raw) into
cleaned data ready to be analyzed (saved in ../processed).
"""
logger = logging.getLogger(__name__)
logger.info("making final data set from raw data")
print(os.getcwd())
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]
)
train_paths = [input_filepath + f"/corruptmnist/train_{i}.npz" for i in range(5)]
X_train = np.concatenate(
[np.load(train_file)["images"] for train_file in train_paths]
)
Y_train = np.concatenate(
[np.load(train_file)["labels"] for train_file in train_paths]
)
X_test = np.load(input_filepath + "/corruptmnist/test.npz")["images"]
Y_test = np.load(input_filepath + "/corruptmnist/test.npz")["labels"]
train = MNISTdata(X_train, Y_train, transform=transform)
test = MNISTdata(X_test, Y_test, transform=transform)
torch.save(train, output_filepath + "/train.pth")
torch.save(test, output_filepath + "/test.pth")
class MNISTdata(Dataset):
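    """Torch Dataset over the corrupt-MNIST arrays; applies the base
    transform and an optional extra transform on access."""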
def __init__(self, data, targets, transform=None, additional_transform=None):
self.data = data
self.targets = torch.LongTensor(targets)
self.transform = transform
self.additional_transform = additional_transform
def __getitem__(self, index):
x = self.data[index]
y = self.targets[index]
if self.transform:
x = self.transform(x)
if self.additional_transform:
x = self.additional_transform(x)
return x.float(), y
def __len__(self):
return len(self.data)
if __name__ == "__main__":
log_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
logging.basicConfig(level=logging.INFO, format=log_fmt)
# not used in this stub but often useful for finding various files
project_dir = Path(__file__).resolve().parents[2]
# find .env automagically by walking up directories until it's found, then
# load up the .env entries as environment variables
load_dotenv(find_dotenv())
main()
| 31.743902 | 85 | 0.6869 | 345 | 2,603 | 4.994203 | 0.391304 | 0.037725 | 0.040046 | 0.024376 | 0.114916 | 0.114916 | 0.087057 | 0.048752 | 0 | 0 | 0 | 0.003348 | 0.196696 | 2,603 | 81 | 86 | 32.135802 | 0.82066 | 0.128313 | 0 | 0 | 0 | 0 | 0.105731 | 0.031542 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070175 | false | 0 | 0.210526 | 0.017544 | 0.333333 | 0.017544 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3ba4bf8a11f53aa851f5cfeabe7e5f9a4db99b72 | 3,023 | py | Python | 14_rhymer/rhymer.py | herjazz/tiny_python_projects | ac87b34ae8d8d079219f1373bb58b8ae272a4d0f | [
"MIT"
] | null | null | null | 14_rhymer/rhymer.py | herjazz/tiny_python_projects | ac87b34ae8d8d079219f1373bb58b8ae272a4d0f | [
"MIT"
] | null | null | null | 14_rhymer/rhymer.py | herjazz/tiny_python_projects | ac87b34ae8d8d079219f1373bb58b8ae272a4d0f | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
"""
Title : Rhymer
Author : wrjt <wrjt@localhost>
Date : 2021-09-04
Purpose: Find rhyming words using regexes
"""
import argparse
import re
import string
# --------------------------------------------------
def get_args():
"""Get command-line arguments"""
parser = argparse.ArgumentParser(
description='Make rhyming "words"',
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('word', metavar='word', help='A word to rhyme')
args = parser.parse_args()
# if len(args.word) != 1:
# parser.error("Please only enter one word.")
return args
# --------------------------------------------------
def main():
""" Main prog """
args = get_args()
consonants = list('bcdfghjklmnpqrstvwxyz')
clusters = """\
bl br ch cl cr dr fl fr gl gr pl pr sc sh sk sl sm sn sp st sw th
tr tw thw wh wr sch scr shr sph spl spr squ str thr""".split()
prefixes = sorted(consonants + clusters)
remove_me, stem = stemmer(args.word)
if stem:
output = '\n'.join([p + stem for p in prefixes if p != remove_me])
else:
output = f'Cannot rhyme "{args.word}"'
print(output)
def stemmer(word: str) -> tuple:
""" Return leading consonants (if any), and 'stem' of the word """
letters, vowels = string.ascii_lowercase, 'aeiou'
consonants = ''.join([c for c in letters if c not in vowels])
# consonants = ''.join(filter(lambda c: c not in vowels, letters))
word = word.lower()
# # Alternative using re.compile and findall (which returns a list)
# pattern_regex = re.compile(
# rf'''(
# ([{consonants}]+)? # Capture one or more (optional)
# ([{vowels}]+) # Capture at least one vowel
# (.*) # Capture zero or more of anything else
# )''', re.VERBOSE)
# match = pattern_regex.findall(word)
# if match:
# p1 = match[0][1] or ''
# p2 = match[0][2] or ''
# p3 = match[0][3] or ''
# return (p1, p2 + p3)
# else:
# return (word, '')
pattern = (
        f'([{consonants}]+)?'  # Capture one or more consonants (optional)
        f'([{vowels}])'        # Capture a single vowel
        '(.*)'                 # Capture zero or more of anything else
)
match = re.match(pattern, word)
if match:
p1 = match.group(1) or ''
p2 = match.group(2) or ''
p3 = match.group(3) or ''
return (p1, p2 + p3)
else:
return (word, '')
def test_stemmer():
""" Test stemmer() """
assert stemmer('') == ('', '')
assert stemmer('cake') == ('c', 'ake')
assert stemmer('chair') == ('ch', 'air')
assert stemmer('APPLE') == ('', 'apple')
assert stemmer('RDNZL') == ('rdnzl', '')
assert stemmer('123') == ('123', '')
# --------------------------------------------------
if __name__ == '__main__':
main()
| 27.733945 | 81 | 0.51472 | 350 | 3,023 | 4.391429 | 0.471429 | 0.050748 | 0.007807 | 0.015615 | 0.156148 | 0.132726 | 0.088484 | 0.088484 | 0.037736 | 0 | 0 | 0.017257 | 0.290771 | 3,023 | 108 | 82 | 27.990741 | 0.699627 | 0.379424 | 0 | 0.040816 | 0 | 0.020408 | 0.179417 | 0.011558 | 0 | 0 | 0 | 0 | 0.122449 | 1 | 0.081633 | false | 0 | 0.061224 | 0 | 0.204082 | 0.020408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3ba515ad3f9521ac40e9bfdbf554b8c98d70b8b0 | 759 | py | Python | widgets/choice/lisp_codegen.py | ardovm/wxGlade | a4cf8e65bcc6df5f65cf8ca5c49b9a628bf1e8eb | [
"MIT"
] | 225 | 2018-03-26T11:23:22.000Z | 2022-03-24T09:44:08.000Z | widgets/choice/lisp_codegen.py | ardovm/wxGlade | a4cf8e65bcc6df5f65cf8ca5c49b9a628bf1e8eb | [
"MIT"
] | 403 | 2018-01-03T19:47:28.000Z | 2018-03-23T17:43:39.000Z | widgets/choice/lisp_codegen.py | ardovm/wxGlade | a4cf8e65bcc6df5f65cf8ca5c49b9a628bf1e8eb | [
"MIT"
] | 47 | 2018-04-08T16:48:38.000Z | 2021-12-21T20:08:44.000Z | """\
Lisp generator functions for wxChoice objects
@copyright: 2002-2004 D. H. aka crazyinsomniac on sourceforge
@copyright: 2014-2016 Carsten Grohmann
@copyright: 2017 Dietmar Schwertberger
@license: MIT (see LICENSE.txt) - THIS PROGRAM COMES WITH NO WARRANTY
"""
import common
import wcodegen
class LispChoiceGenerator(wcodegen.LispWidgetCodeWriter):
#tmpl = '(setf %(name)s (%(klass)s_Create %(parent)s %(id)s -1 -1 -1 -1 %(choices_len)s (vector %(choices)s) %(style)s))\n'
tmpl = '(setf %(name)s (%(klass)s_Create %(parent)s %(id)s -1 -1 -1 -1 %(choices_len)s (vector %(choices)s) 0))\n'
def initialize():
klass = 'wxChoice'
common.class_names['EditChoice'] = klass
common.register('lisp', klass, LispChoiceGenerator(klass) )
| 31.625 | 128 | 0.699605 | 105 | 759 | 5.009524 | 0.552381 | 0.022814 | 0.022814 | 0.04943 | 0.243346 | 0.243346 | 0.243346 | 0.243346 | 0.243346 | 0.243346 | 0 | 0.044822 | 0.147563 | 759 | 23 | 129 | 33 | 0.768161 | 0.500659 | 0 | 0 | 0 | 0.125 | 0.345946 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3ba62d061d7e49dc282e61f9bce3f1306da5f3d8 | 7,200 | py | Python | test/api/test_project.py | zepellin/memsource-wrap | 49694129b26e4c32a07d10cdca3af80b344fee3d | [
"MIT"
] | 9 | 2016-02-12T00:32:02.000Z | 2021-10-11T10:16:05.000Z | test/api/test_project.py | zepellin/memsource-wrap | 49694129b26e4c32a07d10cdca3af80b344fee3d | [
"MIT"
] | 42 | 2015-01-07T07:31:14.000Z | 2019-12-10T05:32:51.000Z | test/api/test_project.py | zepellin/memsource-wrap | 49694129b26e4c32a07d10cdca3af80b344fee3d | [
"MIT"
] | 9 | 2016-06-29T16:56:58.000Z | 2021-11-26T02:33:17.000Z | import datetime
from unittest.mock import patch, PropertyMock
import requests
import api as api_test
from memsource import api, models, constants
class TestApiProject(api_test.ApiTestCase):
def setUp(self):
self.url_base = 'https://cloud.memsource.com/web/api/v3/project'
self.project = api.Project()
@patch.object(requests.Session, 'request')
def test_create(self, mock_request):
type(mock_request()).status_code = PropertyMock(return_value=200)
returning_id = self.gen_random_int()
mock_request().json.return_value = {
'id': returning_id
}
name = 'test project'
source_lang = 'en'
target_lang = 'ja'
client = self.gen_random_int()
domain = self.gen_random_int()
self.assertEqual(
self.project.create(name, source_lang, target_lang, client, domain),
returning_id,
"create function returns id value of JSON"
)
mock_request.assert_called_with(
constants.HttpMethod.post.value,
'{}/create'.format(self.url_base),
data={
'token': self.project.token,
'name': name,
'sourceLang': source_lang,
'targetLang': target_lang,
'client': client,
'domain': domain,
},
timeout=constants.Base.timeout.value
)
@patch.object(requests.Session, 'request')
def test_list(self, mock_request):
type(mock_request()).status_code = PropertyMock(return_value=200)
mock_request().json.return_value = [
{
'id': self.gen_random_int(),
'name': 'test project 1',
'status': 'NEW',
'sourceLang': 'en',
'targetLangs': ['ja'],
'dateDue': None,
'dateCreated': '2013-05-10T15:31:31Z',
'note': 'test project note 1'
},
{
'id': self.gen_random_int(),
'name': 'test project 2',
'status': 'NEW',
'sourceLang': 'en',
'targetLangs': ['cs'],
'dateDue': None,
'dateCreated': '2013-05-10T15:31:31Z',
'note': 'test project note 2'
}
]
for project in self.project.list(name='foo project'):
self.assertIsInstance(project, models.Project)
self.assertIsInstance(project.date_created, datetime.datetime)
mock_request.assert_called_with(
constants.HttpMethod.post.value,
'{}/list'.format(self.url_base),
data={
'name': 'foo project',
'token': self.project.token,
},
timeout=constants.Base.timeout.value
)
@patch.object(requests.Session, 'request')
def test_get_trans_memories(self, mock_request):
type(mock_request()).status_code = PropertyMock(return_value=200)
project_id = self.gen_random_int()
mock_request().json.return_value = [{
'writeMode': True,
'transMemory': {
'id': 1,
'targetLangs': ['ja'],
'sourceLang': 'en',
'name': 'transMem'
},
'targetLang': 'ja',
'penalty': 0,
'readMode': True,
'workflowStep': None
}]
returned_values = self.project.getTransMemories(project_id)
mock_request.assert_called_with(
constants.HttpMethod.post.value,
'{}/getTransMemories'.format(self.url_base),
data={
'token': self.project.token,
'project': project_id
},
timeout=constants.Base.timeout.value
)
self.assertEqual(len(returned_values), len(mock_request().json()))
for translation_memory in returned_values:
self.assertIsInstance(translation_memory, models.TranslationMemory)
@patch.object(requests.Session, 'request')
def test_set_trans_memories(self, mock_request):
type(mock_request()).status_code = PropertyMock(return_value=200)
project_id = self.gen_random_int()
self.project.setTransMemories(project_id)
mock_request.assert_called_with(
constants.HttpMethod.post.value,
'{}/setTransMemories'.format(self.url_base),
data={
'token': self.project.token,
'project': project_id
},
timeout=constants.Base.timeout.value
)
read_trans_memory_ids = (self.gen_random_int(), )
write_trans_memory_id = self.gen_random_int()
penalties = (self.gen_random_int(), )
target_lang = 'ja'
self.project.setTransMemories(project_id,
read_trans_memory_ids=read_trans_memory_ids,
write_trans_memory_id=write_trans_memory_id,
penalties=penalties,
target_lang=target_lang)
mock_request.assert_called_with(
constants.HttpMethod.post.value,
'{}/setTransMemories'.format(self.url_base),
data={
'token': self.project.token,
'project': project_id,
'readTransMemory': read_trans_memory_ids,
'writeTransMemory': write_trans_memory_id,
'penalty': penalties,
'targetLang': target_lang,
},
timeout=constants.Base.timeout.value
)
@patch.object(requests.Session, 'request')
def test_setStatus(self, mock_request):
type(mock_request()).status_code = PropertyMock(return_value=200)
mock_request().json.return_value = None
project_id = self.gen_random_int()
self.assertIsNone(self.project.setStatus(project_id, constants.ProjectStatus.CANCELLED))
mock_request.assert_called_with(
constants.HttpMethod.post.value,
'{}/setStatus'.format(self.url_base),
data={
'token': self.project.token,
'project': project_id,
'status': constants.ProjectStatus.CANCELLED.value,
},
timeout=constants.Base.timeout.value
)
@patch.object(requests.Session, 'request')
def test_get_termbases(self, mock_request):
type(mock_request()).status_code = PropertyMock(return_value=200)
termbase_response = [
{'termBase': {'id': self.gen_random_int()}},
]
mock_request().json.return_value = termbase_response
returned_id = self.project.getTermBases(123)
self.assertEqual(termbase_response, returned_id)
mock_request.assert_called_with(
constants.HttpMethod.get.value,
'{}/getTermBases'.format(self.url_base),
params={
'token': self.project.token,
'project': 123,
},
timeout=constants.Base.timeout.value,
)
| 34.615385 | 96 | 0.559028 | 691 | 7,200 | 5.593343 | 0.17945 | 0.071151 | 0.040362 | 0.049677 | 0.578266 | 0.515653 | 0.515136 | 0.485899 | 0.456404 | 0.402846 | 0 | 0.012231 | 0.33 | 7,200 | 207 | 97 | 34.782609 | 0.788972 | 0 | 0 | 0.397727 | 0 | 0 | 0.107222 | 0 | 0 | 0 | 0 | 0 | 0.079545 | 1 | 0.039773 | false | 0 | 0.028409 | 0 | 0.073864 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3baaefb6b43fc12cb3364be9a2a9fd1c4e6510a2 | 466 | py | Python | src/py/tool/bin2fixed.py | ivanvig/2dconv-verilog | eebc20dc9074d3cd3e2a5724451b6f3cfb2e6f80 | [
"MIT"
] | 16 | 2017-12-16T19:30:46.000Z | 2021-12-15T10:08:35.000Z | src/py/tool/bin2fixed.py | ivanvig/2dconv-verilog | eebc20dc9074d3cd3e2a5724451b6f3cfb2e6f80 | [
"MIT"
] | 1 | 2017-12-10T22:06:36.000Z | 2017-12-11T11:41:30.000Z | src/py/tool/bin2fixed.py | ivanvig/2dconv-verilog | eebc20dc9074d3cd3e2a5724451b6f3cfb2e6f80 | [
"MIT"
] | 12 | 2017-09-29T14:40:47.000Z | 2021-06-08T06:37:15.000Z | def bin2fixed(signed, N, Nf, num):
shift = 1
result = 0
q = -Nf
if signed == 'S':
sig = num & (1 << N-1)
if sig:
num = ~num + 1
elif signed == 'U':
sig = 0
else:
raise ValueError("S for signed, U for unsigned")
while (N - Nf) > q:
if num & shift:
result += 2**q
q += 1
shift = shift << 1
return -result if sig else result | 20.26087 | 56 | 0.414163 | 60 | 466 | 3.216667 | 0.4 | 0.031088 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040323 | 0.467811 | 466 | 23 | 57 | 20.26087 | 0.737903 | 0 | 0 | 0 | 0 | 0 | 0.06424 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bad2f1f98d45750150214e0aed5c5c197ee7b00 | 899 | py | Python | BOJ/exaustive_search_boj/word_board_r2.py | mrbartrns/swacademy_structure | 778f0546030385237c383d81ec37d5bd9ed1272d | [
"MIT"
] | null | null | null | BOJ/exaustive_search_boj/word_board_r2.py | mrbartrns/swacademy_structure | 778f0546030385237c383d81ec37d5bd9ed1272d | [
"MIT"
] | null | null | null | BOJ/exaustive_search_boj/word_board_r2.py | mrbartrns/swacademy_structure | 778f0546030385237c383d81ec37d5bd9ed1272d | [
"MIT"
] | null | null | null | # BOJ 2186
import sys
si = sys.stdin.readline
def dfs(x, y, idx):
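    # dp[x][y][idx] caches the number of ways to spell word[idx:] when the
    # letter word[idx-1] was matched at cell (x, y); -1 means uncomputed.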
if dp[x][y][idx] > -1:
return dp[x][y][idx]
if idx >= len(word):
return 1
dp[x][y][idx] = 0
for i in range(4):
for j in range(1, k + 1):
nx = x + j * dx[i]
ny = y + j * dy[i]
if nx < 0 or nx >= n or ny < 0 or ny >= m:
continue
if board[nx][ny] != word[idx]:
continue
dp[x][y][idx] += dfs(nx, ny, idx + 1)
return dp[x][y][idx]
n, m, k = map(int, si().split())
board = []
for _ in range(n):
board.append(list(si().strip()))
word = si().strip()
dp = [[[-1 for _ in range(81)] for _ in range(101)] for _ in range(101)]
dx = [-1, 1, 0, 0]
dy = [0, 0, -1, 1]
res = 0
for i in range(n):
for j in range(m):
if board[i][j] == word[0]:
res += dfs(i, j, 1)
print(res) | 19.543478 | 72 | 0.446051 | 160 | 899 | 2.48125 | 0.28125 | 0.141058 | 0.075567 | 0.088161 | 0.146096 | 0.085642 | 0.085642 | 0 | 0 | 0 | 0 | 0.057391 | 0.3604 | 899 | 46 | 73 | 19.543478 | 0.633043 | 0.008899 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.03125 | 0 | 0.15625 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bad4d8251dbb9117e3ecd8f293e593fe3344f61 | 9,879 | py | Python | test.py | mumbleskates/sqlite-s3-query | f4e7d718763588e4b2dc80b12b2f94c8e1a4c934 | [
"MIT"
] | null | null | null | test.py | mumbleskates/sqlite-s3-query | f4e7d718763588e4b2dc80b12b2f94c8e1a4c934 | [
"MIT"
] | null | null | null | test.py | mumbleskates/sqlite-s3-query | f4e7d718763588e4b2dc80b12b2f94c8e1a4c934 | [
"MIT"
] | null | null | null | from datetime import datetime
import functools
import hashlib
import hmac
import sqlite3
import tempfile
import unittest
import urllib.parse
import uuid
import httpx
from sqlite_s3_query import sqlite_s3_query
class TestSqliteS3Query(unittest.TestCase):
def test_select(self):
db = get_db([
"CREATE TABLE my_table (my_col_a text, my_col_b text);",
] + [
"INSERT INTO my_table VALUES " + ','.join(["('some-text-a', 'some-text-b')"] * 500),
])
put_object('my-bucket', 'my.db', db)
with sqlite_s3_query('http://localhost:9000/my-bucket/my.db', get_credentials=lambda: (
'us-east-1',
'AKIAIOSFODNN7EXAMPLE',
'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
None,
)) as query:
with query('SELECT my_col_a FROM my_table') as (columns, rows):
rows = list(rows)
self.assertEqual(rows, [('some-text-a',)] * 500)
def test_placeholder(self):
db = get_db([
"CREATE TABLE my_table (my_col_a text, my_col_b text);",
] + [
"INSERT INTO my_table VALUES ('a','b'),('c','d')",
])
put_object('my-bucket', 'my.db', db)
with sqlite_s3_query('http://localhost:9000/my-bucket/my.db', get_credentials=lambda: (
'us-east-1',
'AKIAIOSFODNN7EXAMPLE',
'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
None,
)) as query:
with query("SELECT my_col_a FROM my_table WHERE my_col_b = ?", params=(('d',))) as (columns, rows):
rows = list(rows)
self.assertEqual(rows, [('c',)])
def test_partial(self):
db = get_db([
"CREATE TABLE my_table (my_col_a text, my_col_b text);",
] + [
"INSERT INTO my_table VALUES ('a','b'),('c','d')",
])
put_object('my-bucket', 'my.db', db)
query_my_db = functools.partial(sqlite_s3_query,
url='http://localhost:9000/my-bucket/my.db',
get_credentials=lambda: (
'us-east-1',
'AKIAIOSFODNN7EXAMPLE',
'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
None,
)
)
with query_my_db() as query:
with query("SELECT my_col_a FROM my_table WHERE my_col_b = ?", params=(('d',))) as (columns, rows):
rows = list(rows)
self.assertEqual(rows, [('c',)])
def test_time_and_non_python_identifier(self):
db = get_db(["CREATE TABLE my_table (my_col_a text, my_col_b text);"])
put_object('my-bucket', 'my.db', db)
with sqlite_s3_query('http://localhost:9000/my-bucket/my.db', get_credentials=lambda: (
'us-east-1',
'AKIAIOSFODNN7EXAMPLE',
'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
None,
)) as query:
now = datetime.utcnow()
with query("SELECT date('now'), time('now')") as (columns, rows):
rows = list(rows)
self.assertEqual(rows, [(now.strftime('%Y-%m-%d'), now.strftime('%H:%M:%S'))])
self.assertEqual(columns, ("date('now')", "time('now')"))
def test_non_existant_table(self):
db = get_db(["CREATE TABLE my_table (my_col_a text, my_col_b text);"])
put_object('my-bucket', 'my.db', db)
with sqlite_s3_query('http://localhost:9000/my-bucket/my.db', get_credentials=lambda: (
'us-east-1',
'AKIAIOSFODNN7EXAMPLE',
'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
None,
)) as query:
with self.assertRaises(Exception):
query("SELECT * FROM non_table").__enter__()
def test_empty_object(self):
db = get_db(["CREATE TABLE my_table (my_col_a text, my_col_b text);"])
put_object('my-bucket', 'my.db', b'')
with sqlite_s3_query('http://localhost:9000/my-bucket/my.db', get_credentials=lambda: (
'us-east-1',
'AKIAIOSFODNN7EXAMPLE',
'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
None,
)) as query:
with self.assertRaises(Exception):
query("SELECT * FROM non_table").__enter__()
def test_bad_db_header(self):
db = get_db(["CREATE TABLE my_table (my_col_a text, my_col_b text);"])
put_object('my-bucket', 'my.db', b'*' * 100)
with sqlite_s3_query('http://localhost:9000/my-bucket/my.db', get_credentials=lambda: (
'us-east-1',
'AKIAIOSFODNN7EXAMPLE',
'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
None,
)) as query:
with self.assertRaises(Exception):
query("SELECT * FROM non_table").__enter__()
def test_bad_db_second_half(self):
db = get_db(["CREATE TABLE my_table (my_col_a text, my_col_b text);"] + [
"INSERT INTO my_table VALUES " + ','.join(["('some-text-a', 'some-text-b')"] * 5000),
])
half_len = int(len(db) / 2)
db = db[:half_len] + len(db[half_len:]) * b'-'
put_object('my-bucket', 'my.db', db)
with sqlite_s3_query('http://localhost:9000/my-bucket/my.db', get_credentials=lambda: (
'us-east-1',
'AKIAIOSFODNN7EXAMPLE',
'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
None,
)) as query:
with self.assertRaises(Exception):
with query("SELECT * FROM my_table") as (columns, rows):
list(rows)
def put_object(bucket, key, content):
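    # Upload `content` to the S3-compatible test server on 127.0.0.1:9000,
    # creating a versioned bucket first and signing the request with SigV4.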
create_bucket(bucket)
enable_versioning(bucket)
url = f'http://127.0.0.1:9000/{bucket}/{key}'
body_hash = hashlib.sha256(content).hexdigest()
parsed_url = urllib.parse.urlsplit(url)
headers = aws_sigv4_headers(
'AKIAIOSFODNN7EXAMPLE', 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
(), 's3', 'us-east-1', parsed_url.netloc, 'PUT', parsed_url.path, (), body_hash,
)
response = httpx.put(url, content=content, headers=headers)
response.raise_for_status()
def create_bucket(bucket):
url = f'http://127.0.0.1:9000/{bucket}/'
content = b''
body_hash = hashlib.sha256(content).hexdigest()
parsed_url = urllib.parse.urlsplit(url)
headers = aws_sigv4_headers(
'AKIAIOSFODNN7EXAMPLE', 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
(), 's3', 'us-east-1', parsed_url.netloc, 'PUT', parsed_url.path, (), body_hash,
)
response = httpx.put(url, content=content, headers=headers)
def enable_versioning(bucket):
content = '''
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Status>Enabled</Status>
</VersioningConfiguration>
'''.encode()
url = f'http://127.0.0.1:9000/{bucket}/?versioning'
body_hash = hashlib.sha256(content).hexdigest()
parsed_url = urllib.parse.urlsplit(url)
headers = aws_sigv4_headers(
'AKIAIOSFODNN7EXAMPLE', 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
(), 's3', 'us-east-1', parsed_url.netloc, 'PUT', parsed_url.path, (('versioning', ''),), body_hash,
)
response = httpx.put(url, content=content, headers=headers)
response.raise_for_status()
def aws_sigv4_headers(access_key_id, secret_access_key, pre_auth_headers,
service, region, host, method, path, params, body_hash):
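    """Build AWS Signature Version 4 request headers: canonical request ->
    string to sign -> derived signing key -> Authorization header."""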
algorithm = 'AWS4-HMAC-SHA256'
now = datetime.utcnow()
amzdate = now.strftime('%Y%m%dT%H%M%SZ')
datestamp = now.strftime('%Y%m%d')
credential_scope = f'{datestamp}/{region}/{service}/aws4_request'
pre_auth_headers_lower = tuple((
(header_key.lower(), ' '.join(header_value.split()))
for header_key, header_value in pre_auth_headers
))
required_headers = (
('host', host),
('x-amz-content-sha256', body_hash),
('x-amz-date', amzdate),
)
headers = sorted(pre_auth_headers_lower + required_headers)
signed_headers = ';'.join(key for key, _ in headers)
def signature():
def canonical_request():
canonical_uri = urllib.parse.quote(path, safe='/~')
quoted_params = sorted(
(urllib.parse.quote(key, safe='~'), urllib.parse.quote(value, safe='~'))
for key, value in params
)
canonical_querystring = '&'.join(f'{key}={value}' for key, value in quoted_params)
canonical_headers = ''.join(f'{key}:{value}\n' for key, value in headers)
return f'{method}\n{canonical_uri}\n{canonical_querystring}\n' + \
f'{canonical_headers}\n{signed_headers}\n{body_hash}'
def sign(key, msg):
return hmac.new(key, msg.encode('ascii'), hashlib.sha256).digest()
string_to_sign = f'{algorithm}\n{amzdate}\n{credential_scope}\n' + \
hashlib.sha256(canonical_request().encode('ascii')).hexdigest()
date_key = sign(('AWS4' + secret_access_key).encode('ascii'), datestamp)
region_key = sign(date_key, region)
service_key = sign(region_key, service)
request_key = sign(service_key, 'aws4_request')
return sign(request_key, string_to_sign).hex()
return (
(b'authorization', (
f'{algorithm} Credential={access_key_id}/{credential_scope}, '
f'SignedHeaders={signed_headers}, Signature=' + signature()).encode('ascii')
),
(b'x-amz-date', amzdate.encode('ascii')),
(b'x-amz-content-sha256', body_hash.encode('ascii')),
) + pre_auth_headers
def get_db(sqls):
with tempfile.NamedTemporaryFile() as fp:
with sqlite3.connect(fp.name, isolation_level=None) as con:
cur = con.cursor()
for sql in sqls:
cur.execute(sql)
with open(fp.name, 'rb') as f:
return f.read()
| 36.319853 | 111 | 0.592975 | 1,189 | 9,879 | 4.72582 | 0.158116 | 0.018687 | 0.028475 | 0.03417 | 0.60242 | 0.592454 | 0.579285 | 0.579285 | 0.579285 | 0.559352 | 0 | 0.022706 | 0.259945 | 9,879 | 271 | 112 | 36.453875 | 0.745862 | 0 | 0 | 0.490826 | 0 | 0 | 0.291932 | 0.078955 | 0 | 0 | 0 | 0 | 0.041284 | 1 | 0.073395 | false | 0 | 0.050459 | 0.004587 | 0.151376 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bae3a235e4f9dcbd04dfcf1ac96a1a128fee4fa | 11,612 | py | Python | ee/clickhouse/queries/experiments/trend_experiment_result.py | dorucioclea/posthog | a7e792c3fc5c1abc70d8167e1ead12d4ea24f17a | [
"MIT"
] | null | null | null | ee/clickhouse/queries/experiments/trend_experiment_result.py | dorucioclea/posthog | a7e792c3fc5c1abc70d8167e1ead12d4ea24f17a | [
"MIT"
] | null | null | null | ee/clickhouse/queries/experiments/trend_experiment_result.py | dorucioclea/posthog | a7e792c3fc5c1abc70d8167e1ead12d4ea24f17a | [
"MIT"
] | null | null | null | import dataclasses
from datetime import datetime
from functools import lru_cache
from math import exp, lgamma, log
from typing import List, Optional, Type
from numpy.random import default_rng
from rest_framework.exceptions import ValidationError
from ee.clickhouse.queries.experiments import (
CONTROL_VARIANT_KEY,
FF_DISTRIBUTION_THRESHOLD,
MIN_PROBABILITY_FOR_SIGNIFICANCE,
)
from ee.clickhouse.queries.trends.clickhouse_trends import ClickhouseTrends
from posthog.constants import ACTIONS, EVENTS, TRENDS_CUMULATIVE
from posthog.models.feature_flag import FeatureFlag
from posthog.models.filters.filter import Filter
from posthog.models.team import Team
Probability = float
P_VALUE_SIGNIFICANCE_LEVEL = 0.05
@dataclasses.dataclass
class Variant:
key: str
count: int
exposure: float
# count of total events exposed to variant
absolute_exposure: int
class ClickhouseTrendExperimentResult:
"""
This class calculates Experiment Results.
It returns two things:
1. A trend Breakdown based on Feature Flag values
2. Probability that Feature Flag value 1 has better conversion rate then FeatureFlag value 2
Currently, it only supports two feature flag values: control and test
The passed in Filter determines which trend to create, along with the experiment start & end date values
Calculating (2) uses the formula here: https://www.evanmiller.org/bayesian-ab-testing.html#count_ab
"""
def __init__(
self,
filter: Filter,
team: Team,
feature_flag: FeatureFlag,
experiment_start_date: datetime,
experiment_end_date: Optional[datetime] = None,
trend_class: Type[ClickhouseTrends] = ClickhouseTrends,
):
breakdown_key = f"$feature/{feature_flag.key}"
variants = [variant["key"] for variant in feature_flag.variants]
query_filter = filter.with_data(
{
"display": TRENDS_CUMULATIVE,
"date_from": experiment_start_date,
"date_to": experiment_end_date,
"breakdown": breakdown_key,
"breakdown_type": "event",
"properties": [{"key": breakdown_key, "value": variants, "operator": "exact", "type": "event"}],
# :TRICKY: We don't use properties set on filters, instead using experiment variant options
}
)
exposure_filter = filter.with_data(
{
"date_from": experiment_start_date,
"date_to": experiment_end_date,
ACTIONS: [],
EVENTS: [
{
"id": "$feature_flag_called",
"name": "$feature_flag_called",
"order": 0,
"type": "events",
"math": "dau",
}
],
"breakdown_type": "event",
"breakdown": "$feature_flag_response",
"properties": [
{"key": "$feature_flag_response", "value": variants, "operator": "exact", "type": "event"},
{"key": "$feature_flag", "value": [feature_flag.key], "operator": "exact", "type": "event"},
],
}
)
self.query_filter = query_filter
self.exposure_filter = exposure_filter
self.team = team
self.insight = trend_class()
def get_results(self):
insight_results = self.insight.run(self.query_filter, self.team)
exposure_results = self.insight.run(self.exposure_filter, self.team,)
control_variant, test_variants = self.get_variants(insight_results, exposure_results)
probabilities = self.calculate_results(control_variant, test_variants)
mapping = {
variant.key: probability for variant, probability in zip([control_variant, *test_variants], probabilities)
}
significant = self.are_results_significant(control_variant, test_variants, probabilities)
return {
"insight": insight_results,
"probability": mapping,
"significant": significant,
"filters": self.query_filter.to_dict(),
}
def get_variants(self, insight_results, exposure_results):
# this assumes the Trend insight is Cumulative
control_variant = None
test_variants = []
exposure_counts = {}
exposure_ratios = {}
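        # Exposure ratios normalise every variant's exposure against the
        # control's, so control keeps exposure 1 and test variants are
        # scaled relative to it.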
for result in exposure_results:
count = result["count"]
breakdown_value = result["breakdown_value"]
exposure_counts[breakdown_value] = count
control_exposure = exposure_counts.get(CONTROL_VARIANT_KEY, 0)
if control_exposure != 0:
for key, count in exposure_counts.items():
exposure_ratios[key] = count / control_exposure
for result in insight_results:
count = result["count"]
breakdown_value = result["breakdown_value"]
if breakdown_value == CONTROL_VARIANT_KEY:
# count exposure value is always 1, the baseline
control_variant = Variant(
key=breakdown_value,
count=int(count),
exposure=1,
absolute_exposure=exposure_counts.get(breakdown_value, 1),
)
else:
test_variants.append(
Variant(
breakdown_value,
int(count),
exposure_ratios.get(breakdown_value, 1),
exposure_counts.get(breakdown_value, 1),
)
)
return control_variant, test_variants
@staticmethod
def calculate_results(control_variant: Variant, test_variants: List[Variant]) -> List[Probability]:
"""
Calculates probability that A is better than B. First variant is control, rest are test variants.
Supports a maximum of 3 variants today (1 control + up to 2 test variants)
For each variant, we create a Gamma distribution of arrival rates,
where alpha (shape parameter) = count of variant + 1
beta (exposure parameter) = 1
"""
if not control_variant:
raise ValidationError("No control variant data found", code="no_data")
if len(test_variants) > 2:
raise ValidationError("Can't calculate A/B test results for more than 3 variants", code="too_much_data")
if len(test_variants) < 1:
raise ValidationError("Can't calculate A/B test results for less than 2 variants", code="no_data")
return calculate_probability_of_winning_for_each([control_variant, *test_variants])
@staticmethod
def are_results_significant(
control_variant: Variant, test_variants: List[Variant], probabilities: List[Probability]
) -> bool:
# TODO: Experiment with Expected Loss calculations for trend experiments
for variant in test_variants:
# We need a feature flag distribution threshold because distribution of people
# can skew wildly when there are few people in the experiment
if variant.absolute_exposure < FF_DISTRIBUTION_THRESHOLD:
return False
if control_variant.absolute_exposure < FF_DISTRIBUTION_THRESHOLD:
return False
if max(probabilities) < MIN_PROBABILITY_FOR_SIGNIFICANCE:
return False
p_value = calculate_p_value(control_variant, test_variants)
return p_value < P_VALUE_SIGNIFICANCE_LEVEL
def simulate_winning_variant_for_arrival_rates(target_variant: Variant, variants: List[Variant]) -> float:
random_sampler = default_rng()
simulations_count = 100_000
variant_samples = []
for variant in variants:
# Get `N=simulations_count` samples from a Gamma distribution with
# shape (alpha) = variant_success + 1 and scale = 1 / relative exposure of the variant
samples = random_sampler.gamma(variant.count + 1, 1 / variant.exposure, simulations_count)
variant_samples.append(samples)
target_variant_samples = random_sampler.gamma(
target_variant.count + 1, 1 / target_variant.exposure, simulations_count
)
winnings = 0
variant_conversions = list(zip(*variant_samples))
for i in range(simulations_count):
if target_variant_samples[i] > max(variant_conversions[i]):
winnings += 1
return winnings / simulations_count
def calculate_probability_of_winning_for_each(variants: List[Variant]) -> List[Probability]:
"""
Calculates the probability of winning for each variant.
"""
if len(variants) == 2:
# simple case
probability = simulate_winning_variant_for_arrival_rates(variants[1], [variants[0]])
return [1 - probability, probability]
elif len(variants) == 3:
probability_third_wins = simulate_winning_variant_for_arrival_rates(variants[2], [variants[0], variants[1]])
probability_second_wins = simulate_winning_variant_for_arrival_rates(variants[1], [variants[0], variants[2]])
return [1 - probability_third_wins - probability_second_wins, probability_second_wins, probability_third_wins]
elif len(variants) == 4:
probability_fourth_wins = simulate_winning_variant_for_arrival_rates(
variants[3], [variants[0], variants[1], variants[2]]
)
probability_third_wins = simulate_winning_variant_for_arrival_rates(
variants[2], [variants[0], variants[1], variants[3]]
)
probability_second_wins = simulate_winning_variant_for_arrival_rates(
variants[1], [variants[0], variants[2], variants[3]]
)
return [
1 - probability_fourth_wins - probability_third_wins - probability_second_wins,
probability_second_wins,
probability_third_wins,
probability_fourth_wins,
]
else:
raise ValidationError("Can't calculate A/B test results for more than 4 variants", code="too_much_data")
@lru_cache(maxsize=100_000)
def combinationln(n: int, k: int) -> float:
"""
Returns the log of the binomial coefficient.
"""
return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
def intermediate_poisson_term(count: int, iterator: int, relative_exposure: float):
return exp(
combinationln(count, iterator)
+ iterator * log(relative_exposure)
+ (count - iterator) * log(1 - relative_exposure)
)
def poisson_p_value(control_count, control_exposure, test_count, test_exposure):
"""
Calculates the p-value of the A/B test.
Calculations from: https://www.evanmiller.org/statistical-formulas-for-programmers.html#count_test
"""
relative_exposure = test_exposure / (control_exposure + test_exposure)
total_count = control_count + test_count
low_p_value = 0.0
high_p_value = 0.0
for i in range(test_count + 1):
low_p_value += intermediate_poisson_term(total_count, i, relative_exposure)
for i in range(test_count, total_count + 1):
high_p_value += intermediate_poisson_term(total_count, i, relative_exposure)
return min(1, 2 * min(low_p_value, high_p_value))
def calculate_p_value(control_variant: Variant, test_variants: List[Variant]) -> Probability:
best_test_variant = max(test_variants, key=lambda variant: variant.count)
return poisson_p_value(
control_variant.count, control_variant.exposure, best_test_variant.count, best_test_variant.exposure
)
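# Added usage note: the exact Poisson test above can be exercised directly with
# made-up example values, e.g. poisson_p_value(100, 1.0, 130, 1.0) for equal
# exposures. The result is a two-sided p-value (min(1, 2 * min(low, high)))
# that is compared against P_VALUE_SIGNIFICANCE_LEVEL (0.05) upstream.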
| 37.701299 | 118 | 0.651567 | 1,300 | 11,612 | 5.571538 | 0.192308 | 0.038658 | 0.026232 | 0.025128 | 0.306089 | 0.245616 | 0.185697 | 0.167472 | 0.160707 | 0.128676 | 0 | 0.009513 | 0.266707 | 11,612 | 307 | 119 | 37.824104 | 0.841104 | 0.136497 | 0 | 0.090909 | 0 | 0 | 0.068518 | 0.007196 | 0 | 0 | 0 | 0.003257 | 0 | 1 | 0.052632 | false | 0 | 0.062201 | 0.004785 | 0.215311 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bb101c307ebddd51dafeee21fd468ff7b140dd8 | 4,294 | py | Python | pkg/azure/conversion_worker.py | NihilBabu/xmigrate | c33d0b506a86a0ebef22df8ce299cd84f560d034 | [
"Apache-2.0"
] | null | null | null | pkg/azure/conversion_worker.py | NihilBabu/xmigrate | c33d0b506a86a0ebef22df8ce299cd84f560d034 | [
"Apache-2.0"
] | null | null | null | pkg/azure/conversion_worker.py | NihilBabu/xmigrate | c33d0b506a86a0ebef22df8ce299cd84f560d034 | [
"Apache-2.0"
] | null | null | null | from model.storage import *
from model.disk import *
from model.blueprint import *
from utils.dbconn import *
import os
from pkg.azure import sas
from asyncio.subprocess import PIPE, STDOUT
import asyncio
from pathlib import Path
from utils.logger import *
async def download_worker(osdisk_raw,project,host):
con = create_db_con()
account_name = Storage.objects(project=project)[0]['storage']
container_name = Storage.objects(project=project)[0]['container']
access_key = Storage.objects(project=project)[0]['access_key']
sas_token = sas.generate_sas_token(account_name,access_key)
pipe_result = ''
file_size = '0'
try:
cur_path = os.getcwd()
path = cur_path+"/osdisks/"+osdisk_raw
if not os.path.exists(path):
os.popen('echo "download started"> ./logs/ansible/migration_log.txt')
url = "https://" + account_name + ".blob.core.windows.net/" + container_name + "/" + osdisk_raw + "?" + sas_token
command1 = "azcopy copy --recursive '" + url + "' '"+path+"'"
os.popen('echo '+command1+'>> ./logs/ansible/migration_log.txt')
process1 = await asyncio.create_subprocess_shell(command1, stdin = PIPE, stdout = PIPE, stderr = STDOUT)
await process1.wait()
BluePrint.objects(project=project,host=host).update(status='32')
except Exception as e:
print(repr(e))
logger(str(e),"warning")
finally:
con.close()
async def upload_worker(osdisk_raw,project,host):
con = create_db_con()
account_name = Storage.objects(project=project)[0]['storage']
container_name = Storage.objects(project=project)[0]['container']
access_key = Storage.objects(project=project)[0]['access_key']
sas_token = sas.generate_sas_token(account_name,access_key)
pipe_result = ''
file_size = '0'
try:
osdisk_vhd = osdisk_raw.replace(".raw.000",".vhd")
cur_path = os.getcwd()
path = cur_path+"/osdisks/"+osdisk_raw
vhd_path = cur_path+"/osdisks/"+osdisk_vhd
file_size = Path(vhd_path).stat().st_size
os.popen('echo "Filesize calculated" >> ./logs/ansible/migration_log.txt')
os.popen('echo "VHD uploading" >> ./logs/ansible/migration_log.txt')
url = "https://" + account_name + ".blob.core.windows.net/" + container_name + "/" + osdisk_vhd + "?" + sas_token
command3 = "azcopy copy --recursive '"+vhd_path + "' '" + url + "'"
process3 = await asyncio.create_subprocess_shell(command3, stdin = PIPE, stdout = PIPE, stderr = STDOUT)
await process3.wait()
os.popen('echo "VHD uploaded" >> ./logs/ansible/migration_log.txt')
BluePrint.objects(project=project,host=host).update(status='36')
Disk.objects(host=host,project=project).update_one(vhd=osdisk_vhd, file_size=str(file_size), upsert=True)
except Exception as e:
print(repr(e))
logger(str(e),"warning")
os.popen('echo "'+repr(e)+'" >> ./logs/ansible/migration_log.txt')
finally:
con.close()
async def conversion_worker(osdisk_raw,project,host):
con = create_db_con()
account_name = Storage.objects(project=project)[0]['storage']
container_name = Storage.objects(project=project)[0]['container']
access_key = Storage.objects(project=project)[0]['access_key']
sas_token = sas.generate_sas_token(account_name,access_key)
pipe_result = ''
try:
osdisk_vhd = osdisk_raw.replace(".raw.000",".vhd")
cur_path = os.getcwd()
path = cur_path+"/osdisks/"+osdisk_raw
vhd_path = cur_path+"/osdisks/"+osdisk_vhd
print("Start converting")
print(path)
os.popen('echo "start converting">> ./logs/ansible/migration_log.txt')
command2 = "qemu-img convert -f raw -o subformat=fixed -O vpc "+path+" "+vhd_path
process2 = await asyncio.create_subprocess_shell(command2, stdin = PIPE, stdout = PIPE, stderr = STDOUT)
await process2.wait()
BluePrint.objects(project=project,host=host).update(status='34')
os.popen('echo "Conversion completed" >> ./logs/ansible/migration_log.txt')
except Exception as e:
print(str(e))
logger(str(e),"warning")
file_size = '0'
finally:
con.close()
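# Added overview (derived from the three workers above): the migration pipeline
# is download (azcopy, status '32') -> convert raw to fixed-size VHD
# (qemu-img convert -f raw -o subformat=fixed -O vpc, status '34') -> upload
# VHD (azcopy, status '36'), with progress mirrored to
# ./logs/ansible/migration_log.txt throughout.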
| 44.268041 | 125 | 0.655333 | 546 | 4,294 | 4.979853 | 0.214286 | 0.066936 | 0.092681 | 0.092681 | 0.687385 | 0.561604 | 0.561604 | 0.521883 | 0.503494 | 0.463773 | 0 | 0.010781 | 0.200745 | 4,294 | 96 | 126 | 44.729167 | 0.781469 | 0 | 0 | 0.539326 | 0 | 0 | 0.186859 | 0.070363 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.11236 | 0 | 0.11236 | 0.067416 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bb2598e55a4cbae69f540bdd47d68dabb7e53c0 | 19,880 | py | Python | transliterate.py | sven-oly/LanguageTools | 8c1e0bbae274232064e9796aa401c906797af452 | [
"Apache-2.0"
] | 3 | 2021-02-02T12:11:27.000Z | 2021-12-28T03:58:05.000Z | transliterate.py | sven-oly/LanguageTools | 8c1e0bbae274232064e9796aa401c906797af452 | [
"Apache-2.0"
] | 7 | 2020-12-11T00:44:52.000Z | 2022-03-01T18:00:00.000Z | transliterate.py | sven-oly/LanguageTools | 8c1e0bbae274232064e9796aa401c906797af452 | [
"Apache-2.0"
] | 3 | 2019-06-08T17:46:47.000Z | 2021-09-16T02:03:56.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import print_function
# For Python 2 and 3 compatibility
#from builtins import chr
import logging
import re
import sys
import unicodedata
import xml.etree.ElementTree as ET
# Default transliteration framework.
# Uses ICU-like syntax of transliteration rules.
# TODO: 13-Dec-2016
# 1. Remove testing from this file.
# 2. Complete conversion into classes.
# 3. Determine how to use any transliteration rules from upload.
# Take a transliteration rule with phases.
# Extract shortcuts such as "$nondigits = [^\u1040-\u1049];
# clauses "\u107B > \u1039 \u1018 ;
# Separate the phases with "::Null;
# For each phase,
# for each clause in phrase,
# replace with result in all cases
def ensure_unicode(x):
if sys.version_info < (3, 0):
what = type(x)
# NOTE: `what is not 'unicode'` compared a type object to a string literal
# and was always True; compare the type's name instead.
if what.__name__ != 'unicode':
return x.decode('utf-8')
return x
# The class for turning the transliterator text description
# into phases and rules.
class TranslitParser():
def __init__(self):
return
class Rule():
# Stores one rule of a phase, including substitution information
def __init__(self, pattern, substitution,
context = None,
after_context=None,
before_context=None,
in_context=None,
before_reposition=None,
after_reposition=None,
normalize=None,
id=0):
self.id = id
self.pattern = pattern
self.re_pattern = re.compile(self.pattern, re.UNICODE)
self.subst = substitution
self.context = context
self.in_context = in_context
self.after_context = after_context
self.before_reposition = before_reposition
self.after_reposition = after_reposition
# Store info on repositioning cursor
self.cursor_offset = 0
def parseString(self, rule_string):
return
class Phase():
# one phase of the transliteration spec
def __init__(self, id=0):
self.rules = [] # Old tuples
self.RuleList = [] # Rule objects
self.phase_id = id
self.normalize = None
# For creating the parts of the phase.
self.parts_splitter = re.compile(u'>|→', re.UNICODE)
self.rule_pattern = re.compile(
u'(?P<before_context>[^{]*)(?P<left_context_mark>{?)(?P<in_context>[^}>→]*)\
(?P<right_context_mark>}?)(?P<after_context>[^>→]*)\
[>→](?P<before_reposition>[^|]*)(?P<reposition_mark>\|?)(?P<after_reposition>[^;]*)\
(?P<final_semicolon>;?)(\s*)(?P<comment>\#?.*)', re.UNICODE)
def setNormalize(self, norm_string):
# Check for valid form
name_type = norm_string.replace('::', '')
if name_type in ['NFC', 'NFD', 'NFKC', 'NFKD']:
self.normalize = name_type
def normalizeText(self, text):
# print('NORMALIZE %s' % text)
return text
# !!!! return unicodedata.normalize(self.normalize, text.encode('utf-8'))
def fillRules(self, rulelist):
# set up pattern and subst value for each rule
index = 0
for rule1 in rulelist:
# TODO: handle reposition with unescaped '|'
resposition = rule1.find('|')
# TODO: handle context with unescaped '{'
context = rule1.find('}')
# Extract parts for context, matching, and output with respositioning
test_match = self.rule_pattern.match(rule1)
context = None
left_context_mark = None
before_context = None
in_context = None
after_context = None
before_reposition = None
after_reposition = None
if test_match:
groups = test_match.groupdict()
if groups['reposition_mark']:
after_reposition = re.sub(' ', '', uStringsFixPlaceholder(groups['after_reposition']))
before_reposition = re.sub(' ', '', uStringsFixPlaceholder(groups['before_reposition']))
if groups['left_context_mark']:
left_context_mark = groups['left_context_mark']
after_context = re.sub(' ', '', uStringsFixPlaceholder(groups['after_context']))
before_context = re.sub(' ', '', uStringsFixPlaceholder(groups['before_context']))
in_context = re.sub(' ', '', uStringsFixPlaceholder(groups['in_context']))
rule = re.sub('\n', '', rule1.strip())
# Remove comment lines.
# TODO: remove final semicolon
if rule and rule[0] != '#':
# TODO: Use matched results instead of simple split
parts = self.parts_splitter.split(rule)
pattern = re.sub(' ', '', parts[0]) # but don't remove quoted space
# Handle those without before_context
# Use context information to create context rules
if len(parts) < 2:
    # Skip malformed rules instead of crashing on parts[1] below.
    print('Malformed rule, expected more than one part: %s' % parts)
    continue
subst = re.sub(' ', '', uStringsFixPlaceholder(parts[1]))
if left_context_mark:
pattern_string = '(%s)%s(%s)' % (before_context, in_context, after_context)
pattern = pattern_string
new_subst = '\\1%s\\2' % subst
subst = new_subst
# TODO: Separate rule parsing
# newRule = Rule.fromString(rule_string)
# self.RuleList.append(newRule)
try:
newPair = (pattern, subst)
self.rules.append(newPair)
self.RuleList.append(Rule(pattern, subst,
context=context,
before_context=before_context,
in_context=in_context,
after_context=after_context,
before_reposition=before_reposition,
after_reposition=after_reposition,
id=index)) # Rule objects
except IndexError as before_context:
print('IndexError before_context = %s. Phase %s, %d rule = %s' % (before_context, self.phase_id, index, rule1))
print(' Rule = >>%s<< %d' % (rule, len(rule)))
break
except ValueError as value_error:
print('ValueError value_error = %s. Phase %s, %d rule = %s' % (value_error, self.phase_id, index, rule1))
print(' Rule = >>%s<< %d' % (rule, len(rule)))
break
except:
other_error = sys.exc_info()[0]
print('!! Other error other_error = %s. Phase %s, %d rule = %s' % (other_error, self.phase_id, index, rule1))
print(' Rule = >>%s<< %d characters' % (rule, len(rule)))
break
index += 1
def getRules(self):
# Old style rules
return self.rules
def getRuleList(self):
# List of rule objects
return self.RuleList
def apply(self, intext):
# takes each rule (pattern, substitute), applying to intext
# Apply special normalization first
if self.normalize:
intext = self.normalizeText(intext)
result = ''
for rule in self.rules:
result = re.sub(rule[0], rule[1], intext)
intext = result
return result
def getInfo(self):
return '%s %s'
def getRulesStrings(self):
return self.RuleList
def extractShortcuts(ruleString):
# Shortcuts are clauses of the form "$id = re;" (TODO: a literal ';' inside the regex is not handled)
# also remove comment lines and blank lines
shorcut_pattern = '(\$\w+)\s*=\s*([^;]*)'
matches = re.findall(shorcut_pattern, ruleString)
shortcuts = {}
for m in matches:
shortcuts[m[0]] = m[1]
# Remove shortcuts and comments from input.
shorcut_pattern = '\$(\w+)\s*=\s*([^;]*);\n'
commentPattern = '#[^\n]*\n+' # Handle comments at ends of lines, too.
multipleNewlinePattern = '\s*\n+'
stripped = re.sub(shorcut_pattern, '', ruleString)
smaller = re.sub(commentPattern, "\n", stripped)
smaller = re.sub(multipleNewlinePattern, "\n", smaller)
return shortcuts, smaller
def expandShortcuts(shortcuts, inlist):
newlist = inlist
if shortcuts:
for key, value in shortcuts.items():
key = re.sub('\$', '\$', key)
sublist = re.sub(key, value, newlist)
newlist = sublist
return newlist
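# Added illustration (hypothetical rule text, not from a real ruleset):
def _shortcut_roundtrip_example():
    # One shortcut definition is stripped out, then expanded back into the rule.
    shortcuts, reduced = extractShortcuts(u'$c = [abc];\n$c > x;\n')
    return expandShortcuts(shortcuts, reduced)  # -> u'[abc] > x;\n'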
def splitPhases(ruleString):
phases = ruleString.split('::Null;')
return phases
def testZawgyiConvert():
z1 = 'ဘယ္'
u1 = ConvertZawgyiToUnicode(z1)
return u1
def ConvertZawgyiToUnicode(ztext):
# Run the phases over the data.
out1 = ztext
for phase in phases:
# Apply each regular expression with global replacement;
rules = phase.rules
for rule in rules:
    # apply each rule to out1 (still a stub in the original)
    continue
return out1
def uStringsFixPlaceholder(string):
return re.sub(u'\$(\d)', subBackSlash, string) # Fix the replacement patterns
def uStringsToText(string):
pattern = r'\\u[0-9A-Fa-f]{4}'
result = re.sub(pattern, decodeHexU, string)
return re.sub(u'\$(\d)', subBackSlash, result) # Fix the replacement patterns
def uStringToHex(string):
result = ''
for c in string:
result += '%4s ' % hex(ord(c))
return result
def subBackSlash(pattern):
return '\\' + pattern.group(0)[1:]
def decodeHexU(uhexcode):
# Convert \uhhhh in input hex code match to Unicode character
text = uhexcode.group(0)[2:]
# re.sub replacement callbacks must return str, not bytes, in Python 3.
return chr(int(text, 16))
class Transliterate():
# Accepting a set of rules, create a transliterator with phases,
# ready to apply them.
def __init__(self, raw_rules, description='Default conversion', debug=False):
# Get the short cuts.
# if (debug):
# logging.info('--------- TRANSLITERATE __init__: raw_rules = %s. description = %s' %
# (raw_rules.encode('utf-8'), description))
self.debug_mode = debug
self.description = description
# Convert Unicode escapes to characters
self.raw_rules = raw_rules #.decode('unicode-escape')
(self.shortcuts, self.reduced) = extractShortcuts(self.raw_rules)
# Expand short cuts.
# if (debug):
# logging.info('shortcuts: %s' % self.shortcuts)
# logging.info('Reduced: %s' % self.reduced.encode('utf-8'))
self.expanded = expandShortcuts(self.shortcuts, self.reduced)
# if (debug):
# logging.info('expanded: %s' % self.expanded.encode('utf-8'))
self.phaseStrings = splitPhases(self.expanded)
# if (debug):
# logging.info('phaseStrings: %s' % self.phaseStrings)
# Create the phase objects
self.phaseList = []
index = 0
for phase in self.phaseStrings:
self.phaseList.append(Phase(index))
new_phase = self.phaseList[-1]
rule_lines = phase.split('\n')
phase_rules = []
for r in rule_lines:
# TODO: Handle lines with semicolon as left side of rule
rule_parts = r.rsplit(';', 1)
if rule_parts[0]:
# Handle Special case for ::NFC and ::NFD
if rule_parts[0] == "::NFC" or rule_parts[0] == "::NFD":
new_phase.setNormalize(rule_parts[0])
else:
phase_rules.append(rule_parts[0])
# Omit empty lines
new_phase.fillRules(phase_rules)
index += 1
# Range of current string, for passing information to substFunction.
self.start = 0
self.limit = 0
def printSummary(self):
# Print the statistics
print('%4d raw rules' % len(self.raw_rules))
print('%4d shortcuts ' % len(self.shortcuts))
print('%4d reduced ' % len(self.reduced))
print('%4d phaseStrings ' % len(self.phaseStrings))
print('%4d phaseList ' % len(self.phaseList))
index = 0
for phase in self.phaseList:
print(' %3d rules in phase %2d' % (len(self.phaseList[index].rules), index))
index += 1
def printPhases(self):
for phase in self.phaseStrings:
self.printPhase(phase)
def printPhase(self, phase_num):
phase = self.phaseList[phase_num]
# TODO: Print the rules for the phase
def getSummary(self):
# Print the statistics
result = {
'raw rules': len(self.raw_rules),
'shortcuts': self.shortcuts,
'reduced': len(self.reduced),
'phaseStrings': self.phaseStrings,
'phaseList': self.phaseList,
}
return result
def substFunction(self, matchObj):
return 'UNFINISHED'
def applyPhase(self, index, instring, debug):
# It should do:
# a. Find rule that matches from the start
# b. if a match, substitute text and move start as required
# until start >= limit
# For each rule, apply to instring.
self.start = 0
self.limit = len(instring) - 1
this_phase = self.phaseList[index]
ruleList = this_phase.RuleList
if this_phase.normalize:
instring = this_phase.normalizeText(instring)
current_string = instring
if debug:
print('UUUUUUUUUUUUU current = %s' % current_string.encode('utf-8'))
match_obj = True
while self.start <= self.limit:
# Look for a rule that matches
rule_index = 0
match_obj = None
self.limit = len(current_string) - 1
found_rule = None
for rule in ruleList:
# Try to match each rule at the current start point.
re_pattern = rule.re_pattern
try:
# look at the current position.
match_obj = re_pattern.match(current_string[self.start:])
# matchObj = re.match(rule.pattern, currentString[self.start:])
except TypeError as e:
print('***** TypeError EXCEPTION %s in phase %s, rule %s: %s -> %s' % (e,
index, rule_index, uStringToHex(rule.pattern), uStringToHex(rule.subst)))
except:
e = sys.exc_info()[0]
print('***** EXCEPTION %s in phase %s, rule %s: %s -> %s' % (e,
index, rule_index, uStringToHex(rule.pattern), uStringToHex(rule.subst)))
if match_obj:
# Do just one substitution!
found_rule = True
if debug:
print('MATCHING Rule in phase %s= %s --> %s. current = %s' % (
index, rule.pattern, rule.subst, current_string))
# Size of last part of old string after the replacement
c_size = len(current_string) - match_obj.end(0) - self.start # Last part of old string not matched
if rule.before_reposition is None and rule.after_reposition is None:
    # Plain rule with no '|' reposition mark: use the full substitution.
    # (Previously this fell through to after_reposition=None and the
    # matched text was deleted instead of replaced.)
    substitution = rule.subst
elif not rule.before_reposition:
    substitution = rule.after_reposition
else:
    # TODO: Handle case of before and after substitutions.
    substitution = rule.subst
if debug:
logging.info('SUBSTITUTION Rule = %s --> %s. current = %s' % (
str(rule.pattern), rule.subst, current_string))
if not substitution:
substitution = u''
try:
outstring = re.sub(rule.pattern, substitution, current_string[self.start:], 1)
if debug:
print(' Substitution gives outstring: %s, %s' % (outstring.encode('utf-8'), len(outstring)))
except TypeError as e:
outstring = u'&*&*& %s &*&*&' % substitution
except UnicodeDecodeError as e:
print('CURRENT_STRING = %s' % (current_string))
print('rule.pattern = %s' % (rule.pattern))
print('substitution = %s' % (substitution))
print('##### re.sub problem with rule.pattern = %s, sub = %s, current_string[] = %s' %
(rule.pattern.encode('utf-8'), substitution.encode('utf-8'), current_string[self.start:].encode('utf-8')))
logging.error('##### re.sub problem with rule.pattern = %s, sub = %s, current_string[] = %s' %
(rule.pattern.encode('utf-8'), substitution.encode('utf-8'), current_string[self.start:].encode('utf-8')))
# Try to advance start.
new_string = ''
try:
new_string = current_string[0:self.start] + outstring
except :
other_error = sys.exc_info()[0]
print('Error %s' % other_error)
print('!!!!!!!!!!! ERROR with substitution start = %s, currentString length = %s !!!!!!!!' %
(self.start, len(current_string)))
print('!!!!!!!!!!! last part = %s, outstring = %s' % (current_string[self.start:], outstring))
print('!!!!!!!!!!! first part = %s, outstring = %s' % (current_string[0:self.start], outstring))
self.limit = len(new_string) - 1
# Figure out the new start and limit.
# New: don't advance if all the text is after the reposition mark.
if debug:
print('!!!!!!!!!!! before_reposition before_reposition before_reposition before_reposition rule = %s' %
rule.after_reposition)
if rule.before_reposition is None and rule.after_reposition is None:
    # Plain rule: advance the cursor past the replaced text so that
    # self-matching substitutions cannot loop forever.
    self.start = self.limit - c_size + 1
elif not rule.before_reposition:
    self.start = self.start  # Unchanged
elif rule.after_reposition:
# Find the location of the '|' in the result,
# Remove that, and set the new position
# Backwards from this place, to self.start
if debug:
print('!!!!!!!!!!! after_reposition = %s' % current_string)
for pos in range(self.limit - c_size, self.start, -1):
if new_string[pos] == u'|':
self.start = pos
new_string = new_string[0:pos] + new_string[pos+1:]
break
else:
if debug:
print('!!!!!!!!!!! NO reposition = %s' % current_string)
self.start = self.limit - c_size + 1
current_string = new_string
break
rule_index += 1
# Rule loop complete
if not found_rule:
# Increment position since no rule matched
self.start += 1
if debug:
print('OUTPUT Phase %s = %s' % (index, current_string))
return current_string
def transliterate(self, instring, debug=None):
# Apply each phase to the incoming string or string list.
if debug:
# print('---- TRANSLITERATE data = %s' % (instring))
logging.info('--------------- TRANSLITERATE type = %s. data = %s' % (type(instring), instring))
if type(instring) == list:
# Repeat on each list item.
print('**** calling with list item >%s<' % instring)
return [self.transliterate(item, debug) for item in instring]
if debug:
print('---------------transliterate line 422 instring = %s' % instring)
instring = ensure_unicode(instring)
for phase_index in range(len(self.phaseList)):
if debug:
print('---------------transliterate line 425 phase %d = >%s<' % (phase_index, self.phaseList))
print('---------------transliterate line 426 instring = >%s<' % (instring))
outstring = u'NOT SET'
try:
outstring = self.applyPhase(phase_index, instring, debug)
except:
e = sys.exc_info()[0]
logging.error('!! Calling applyPhase Error e = %s. phase_index =%s, instring = %s' %
(e, phase_index, instring))
instring = outstring
# ?? .decode('unicode-escape')
if debug:
print('****** outstring = %s' % outstring)
return outstring
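# Added usage sketch (made-up two-phase rules, not a real CLDR ruleset;
# assumes the plain-rule substitution fix in applyPhase above):
def _transliterate_demo():
    # Phases are separated by '::Null;'. Expected: 'bob' -> 'pop' -> 'fof'.
    rules = u'b > p;\n::Null;\np > f;\n'
    return Transliterate(rules, description='demo').transliterate(u'bob')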
# Derived class that takes an XML file from CLDR and create a transliterator from it
class TranslitXML(Transliterate):
def __init__(self, file_path):
self.path = file_path
self.tree = None
self.root = None
self.rules_text = None
self.transforms = None
self.openFile()
self.parseXmlTree()
super(TranslitXML, self).__init__(self.rules_text)
return
def openFile(self):
self.tree = ET.parse(self.path)
self.root = self.tree.getroot()
return
def parseXmlTree(self):
if self.root:
# Look for the rules and get into proper shape
self.transforms = self.root.find('transforms')
self.transform = self.transforms.find('transform')
text = self.transform.find('tRule').text
#text = text.encode('unicode-escape')
in_str = text.encode('unicode-escape') # bytes with all chars escaped (the original escapes have the backslash escaped)
in_str = in_str.replace(b'\\\\u', b'\\u') # unescape the \
text = in_str.decode('unicode-escape')
self.rules_text = text
# self.rules_text = self.transform.find('tRule').text.decode( 'unicode-escape' )
return
| 35.563506 | 132 | 0.610765 | 2,397 | 19,880 | 4.964539 | 0.171882 | 0.026218 | 0.010924 | 0.011092 | 0.184034 | 0.134034 | 0.086639 | 0.065882 | 0.047059 | 0.047059 | 0 | 0.009125 | 0.261318 | 19,880 | 558 | 133 | 35.62724 | 0.800885 | 0.213732 | 0 | 0.178667 | 0 | 0.005333 | 0.129463 | 0.008313 | 0 | 0 | 0 | 0.001792 | 0 | 1 | 0.090667 | false | 0 | 0.016 | 0.026667 | 0.192 | 0.106667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bb3d5e7f4b683812dc529089c0f433e9607e22b | 3,955 | py | Python | google/cloud/forseti/scanner/scanners/config_validator_util/data_models/data_model_builder.py | aarontp/forseti-security | 6d03c14114468ff6170846392b7d14a0619fa9f0 | [
"Apache-2.0"
] | 921 | 2017-03-09T01:01:24.000Z | 2019-04-16T11:38:25.000Z | google/cloud/forseti/scanner/scanners/config_validator_util/data_models/data_model_builder.py | aarontp/forseti-security | 6d03c14114468ff6170846392b7d14a0619fa9f0 | [
"Apache-2.0"
] | 1,996 | 2017-03-03T22:07:50.000Z | 2019-04-17T00:02:28.000Z | google/cloud/forseti/scanner/scanners/config_validator_util/data_models/data_model_builder.py | aarontp/forseti-security | 6d03c14114468ff6170846392b7d14a0619fa9f0 | [
"Apache-2.0"
] | 241 | 2017-03-09T01:00:04.000Z | 2019-04-15T18:53:35.000Z | # Copyright 2019 The Forseti Security Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Builds the data models."""
from builtins import object
import importlib
from google.cloud.forseti.common.util import logger
from google.cloud.forseti.scanner.scanners.config_validator_util.data_models \
import data_model_requirements_map
LOGGER = logger.get_logger(__name__)
class DataModelBuilder(object):
"""Data Model Builder."""
def __init__(self, global_configs, scanner_configs, service_config,
model_name):
"""Initialize the data model builder.
Args:
global_configs (dict): Global configurations.
scanner_configs (dict): Scanner configurations.
service_config (ServiceConfig): Service configuration.
model_name (str): Name of the data model.
"""
self.global_configs = global_configs
self.scanner_configs = scanner_configs
self.service_config = service_config
self.model_name = model_name
def build(self):
"""Build the data models.
Returns:
list: data model instances that will be created.
"""
data_models = []
requirements_map = data_model_requirements_map.REQUIREMENTS_MAP
for scanner in self.scanner_configs.get('scanners'):
scanner_name = scanner.get('name')
if scanner.get('enabled') and requirements_map.get(scanner_name):
data_model = self._instantiate_data_model(scanner_name)
if data_model:
data_models.append(data_model)
return data_models
def _instantiate_data_model(self, data_model_name):
"""Make individual data models based on the data model name.
Args:
data_model_name (str): the name of the data model to create in the
requirements_map.
Returns:
data_model: the individual data model instance.
"""
module_path = 'google.cloud.forseti.scanner.scanners.' \
'config_validator_util.data_models.{}'
requirements_map = data_model_requirements_map.REQUIREMENTS_MAP
LOGGER.debug('Initializing Config Validator data model: %s - %s',
data_model_name, requirements_map.get(data_model_name))
module_name = module_path.format(
requirements_map.get(
data_model_name).get('module_name'))
try:
module = importlib.import_module(module_name)
except (ImportError, TypeError, ValueError):
LOGGER.exception('Unable to import %s for building '
'Config Validator data model.\n',
module_name)
return None
class_name = requirements_map.get(
data_model_name).get('class_name')
try:
data_model_class = getattr(module, class_name)
except AttributeError:
LOGGER.exception('Unable to instantiate %s for building '
'Config Validator data model.\n',
class_name)
return None
data_model = data_model_class(self.global_configs,
self.scanner_configs,
self.service_config,
self.model_name)
return data_model
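# Added illustration: the REQUIREMENTS_MAP entries consumed above are assumed
# to look roughly like this (hypothetical scanner name); the real map lives in
# data_model_requirements_map.REQUIREMENTS_MAP.
_EXAMPLE_REQUIREMENTS_MAP = {
    'iam_policy': {
        'module_name': 'iam_policy_data_model',
        'class_name': 'IamPolicyDataModel',
    },
}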
| 36.962617 | 78 | 0.633375 | 449 | 3,955 | 5.358575 | 0.311804 | 0.108479 | 0.037822 | 0.029925 | 0.252702 | 0.17581 | 0.17581 | 0.131338 | 0.100582 | 0.100582 | 0 | 0.002877 | 0.296839 | 3,955 | 106 | 79 | 37.311321 | 0.86228 | 0.299621 | 0 | 0.150943 | 0 | 0 | 0.112687 | 0.028363 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056604 | false | 0 | 0.132075 | 0 | 0.283019 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bb7a901bd407bbb609298dedf7487156c684686 | 1,773 | py | Python | UserCode/bressler/xebcmuonanalysis.py | cericdahl/SBCcode | 90a7841a5c1208d64f71a332289d9005a011aa21 | [
"MIT"
] | 4 | 2018-08-27T18:02:34.000Z | 2020-06-09T21:19:04.000Z | UserCode/bressler/xebcmuonanalysis.py | SBC-Collaboration/SBC-Analysis | 90a7841a5c1208d64f71a332289d9005a011aa21 | [
"MIT"
] | null | null | null | UserCode/bressler/xebcmuonanalysis.py | SBC-Collaboration/SBC-Analysis | 90a7841a5c1208d64f71a332289d9005a011aa21 | [
"MIT"
] | 4 | 2019-06-20T21:36:26.000Z | 2020-11-10T17:23:14.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu Apr 9 14:31:59 2020
@author: bressler
"""
import SBCcode as sbc
import matplotlib.pyplot as plt
import runlistscatalogue as rlc
import numpy as np
from os.path import isfile,join
from os import listdir
import gc
bial = rlc.BiAlOct6to9
cf = rlc.cfJuly6to11
bg = rlc.bgOct10and11
bg2 = rlc.bgOct2and3
bg3 = rlc.bgCombinedMultiTemp
types = [cf]
runs = []
lt = 0
for typE in types:
for run in typE:
runs.append(run)
evids = []
phe = []
for run in runs:
print(run)
with open("/nashome/b/bressler/sbcoutput/%s_muonCoincidences.txt"%run,"r") as coincfile:
data = coincfile.readlines()
if len(data)>1:
for i in range(1,len(data)):
evids.append(data[i].split()[0]+"-"+data[i].split()[1])
phe.append(float(data[i].split()[2].rstrip()))
if phe[-1]<1:
print(evids[-1])
runpath = '/bluearc/storage/SBC-17-data/'+run
events = [evnt for evnt in listdir(runpath) if not isfile(join(runpath,evnt))]
for event in events:
e = sbc.DataHandling.GetSBCEvent.GetEvent(runpath,event)
lt += e["event"]["livetime"]
gc.collect()
print(evids)
print(phe)
print(len(phe))
print(lt)
print("Rate: %f Hz"%(len(phe)/lt))
bins = [(2**i)+0.5 for i in range(12)]
#bins = np.arange(1,int(1+np.ceil(max(spect))))
bins = np.insert(bins,0,0.5)
bins=np.insert(bins,0,-0.5)
bins=np.insert(bins,0,-1.5)
binc=[(bins[i+1]+bins[i])/2 for i in range(len(bins)-1)]
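# Added note: the bins above are powers of two (log-spaced, plotted on a
# symlog axis below) with extra edges at -1.5/-0.5/0.5 so that zero- and
# negative-phe events fall into their own bins; binc holds the bin centers
# used by the errorbar plot.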
plt.figure()
N,_,_=plt.hist(phe,bins,histtype='step',linewidth=4)
plt.xscale('symlog')
plt.grid()
plt.xlabel('phe')
plt.ylabel('count')
plt.show()
plt.figure()
plt.errorbar(binc,N,np.sqrt(N),ds='steps-mid')
plt.xscale('symlog')
plt.xlabel('phe')
plt.show() | 25.695652 | 92 | 0.643542 | 290 | 1,773 | 3.924138 | 0.431034 | 0.02109 | 0.015817 | 0.028998 | 0.04833 | 0.04833 | 0.04833 | 0.04833 | 0.04833 | 0.04833 | 0 | 0.038828 | 0.172025 | 1,773 | 69 | 93 | 25.695652 | 0.736376 | 0.081218 | 0 | 0.103448 | 0 | 0 | 0.088889 | 0.050617 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.12069 | 0 | 0.12069 | 0.12069 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bb7c823797b7b03557c91b27bcdd9bd2c8c8247 | 5,186 | py | Python | src/garage/envs/base.py | lywong92/garage | 96cb8887fcae90531a645d540653010e7fe10fcc | [
"MIT"
] | 1 | 2020-01-05T14:57:43.000Z | 2020-01-05T14:57:43.000Z | src/garage/envs/base.py | lywong92/garage | 96cb8887fcae90531a645d540653010e7fe10fcc | [
"MIT"
] | null | null | null | src/garage/envs/base.py | lywong92/garage | 96cb8887fcae90531a645d540653010e7fe10fcc | [
"MIT"
] | null | null | null | """Wrapper class that converts gym.Env into GarageEnv."""
import collections
from akro import Box
from akro import Dict
from akro import Discrete
from akro import Tuple
import glfw
import gym
from gym.spaces import Box as GymBox
from gym.spaces import Dict as GymDict
from gym.spaces import Discrete as GymDiscrete
from gym.spaces import Tuple as GymTuple
from garage.core import Serializable
from garage.envs.env_spec import EnvSpec
# The gym environments using one of the packages in the following list as entry
# points don't close their viewer windows.
KNOWN_GYM_NOT_CLOSE_VIEWER = [
# Please keep alphabetized
'gym.envs.mujoco',
'gym.envs.robotics'
]
class GarageEnv(gym.Wrapper, Serializable):
"""
Returns an abstract Garage wrapper class for gym.Env.
In order to provide pickling (serialization) and parameterization
for gym.Envs, they must be wrapped with a GarageEnv. This ensures
compatibility with existing samplers and checkpointing when the
envs are passed internally around garage.
Furthermore, classes inheriting from GarageEnv should silently
convert action_space and observation_space from gym.Spaces to
akro.spaces.
Args: env (gym.Env): the env that will be wrapped
"""
def __init__(self, env=None, env_name=''):
if env_name:
super().__init__(gym.make(env_name))
else:
super().__init__(env)
self.action_space = self._to_akro_space(self.env.action_space)
self.observation_space = self._to_akro_space(
self.env.observation_space)
if self.spec:
self.spec.action_space = self.action_space
self.spec.observation_space = self.observation_space
else:
self.spec = EnvSpec(
action_space=self.action_space,
observation_space=self.observation_space)
Serializable.quick_init(self, locals())
def close(self):
"""
Close the wrapped env.
Returns:
None
"""
self._close_mjviewer_window()
self.env.close()
def _close_mjviewer_window(self):
"""
Close the MjViewer window.
Unfortunately, the gym environments using MuJoCo don't close the viewer
windows properly, which leads to "out of memory" issues when several
of these environments are tested one after the other.
This method searches for the viewer object of type MjViewer, and if the
environment is wrapped in other environment classes, it performs depth
search in those as well.
This method can be removed once OpenAI solves the issue.
"""
if self.env.spec:
if any(package in self.env.spec._entry_point
for package in KNOWN_GYM_NOT_CLOSE_VIEWER):
# This import is not in the header to avoid a MuJoCo dependency
# with non-MuJoCo environments that use this base class.
from mujoco_py.mjviewer import MjViewer
if (hasattr(self.env, 'viewer')
and isinstance(self.env.viewer, MjViewer)):
glfw.destroy_window(self.env.viewer.window)
else:
env_itr = self.env
while hasattr(env_itr, 'env'):
env_itr = env_itr.env
if (hasattr(env_itr, 'viewer')
and isinstance(env_itr.viewer, MjViewer)):
glfw.destroy_window(env_itr.viewer.window)
break
def reset(self, **kwargs):
"""
This method is necessary to suppress a deprecated warning
thrown by gym.Wrapper.
Calls reset on wrapped env.
"""
return self.env.reset(**kwargs)
def step(self, action):
"""
This method is necessary to suppress a deprecated warning
thrown by gym.Wrapper.
Calls step on wrapped env.
"""
return self.env.step(action)
def _to_akro_space(self, space):
"""
Converts a gym.space into an akro.space.
Args:
space (gym.spaces)
Returns:
space (akro.spaces)
"""
if isinstance(space, GymBox):
return Box(low=space.low, high=space.high, dtype=space.dtype)
elif isinstance(space, GymDict):
return Dict(space.spaces)
elif isinstance(space, GymDiscrete):
return Discrete(space.n)
elif isinstance(space, GymTuple):
return Tuple(list(map(self._to_akro_space, space.spaces)))
else:
raise NotImplementedError
def Step(observation, reward, done, **kwargs): # noqa: N802
"""
Convenience method for creating a namedtuple from the results of
environment.step(action). Provides the option to put extra
diagnostic info in the kwargs (if it exists) without demanding
an explicit positional argument.
"""
return _Step(observation, reward, done, kwargs)
_Step = collections.namedtuple('Step',
['observation', 'reward', 'done', 'info'])
| 33.675325 | 79 | 0.628616 | 633 | 5,186 | 5.037915 | 0.301738 | 0.026341 | 0.020383 | 0.023832 | 0.169332 | 0.077767 | 0.062088 | 0.045155 | 0.045155 | 0.045155 | 0 | 0.000826 | 0.29946 | 5,186 | 153 | 80 | 33.895425 | 0.876961 | 0.354994 | 0 | 0.055556 | 0 | 0 | 0.025258 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097222 | false | 0 | 0.194444 | 0 | 0.402778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bbc89d60fef5db126ce96f82078394ce273c48e | 15,300 | py | Python | weight_statistics.py | fightingnoble/myproject | ba27cbb96bc15ee6044a3811e59061a693187c79 | [
"Apache-2.0"
] | null | null | null | weight_statistics.py | fightingnoble/myproject | ba27cbb96bc15ee6044a3811e59061a693187c79 | [
"Apache-2.0"
] | null | null | null | weight_statistics.py | fightingnoble/myproject | ba27cbb96bc15ee6044a3811e59061a693187c79 | [
"Apache-2.0"
] | null | null | null | from __future__ import print_function
import argparse
import os
import time
import model
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.optim.lr_scheduler as lr_scheduler
from datasets.mnist import getmnist, NUM_TRAIN
from torchvision import datasets, transforms
from helper import accuracy, AverageMeter, save_checkpoint
from module.layer1 import crxb_Conv2d, crxb_Linear
# import pydevd_pycharm
# pydevd_pycharm.settrace('0.0.0.0', port=12346, stdoutToServer=True, stderrToServer=True)
import visdom
vis = visdom.Visdom(env ="weight_statistic")
class Net(nn.Module):
def __init__(self, **crxb_cfg):
super(Net, self).__init__()
# self.conv1 = nn.Conv2d(1, 20, 5, 1)
# self.conv2 = nn.Conv2d(20, 50, 5, 1)
# self.fc1 = nn.Linear(4*4*50, 500)
# self.fc2 = nn.Linear(500, 10)
self.conv1 = crxb_Conv2d(1, 20, kernel_size=5, is_first_layer=True, **crxb_cfg)
self.conv2 = crxb_Conv2d(20, 50, kernel_size=5, **crxb_cfg)
self.conv2_drop = nn.Dropout2d()
self.fc1 = crxb_Linear(4 * 4 * 50, 500, **crxb_cfg)
self.fc2 = crxb_Linear(500, 10, is_last_layer=True, **crxb_cfg)
self.activation_name = ["conv1", "max_pool2d", "conv2", "conv2_drop", "max_pool2d", "fc1", "dropout", "fc2"]
def forward(self, x):
self.activation = [x]
x = F.relu(self.conv1(x)) # leaky_relu
self.activation.append(x)
x = F.max_pool2d(x, 2, 2)
self.activation.append(x)
x = F.relu(self.conv2(x))
self.activation.append(x)
x = self.conv2_drop(x)
self.activation.append(x)
x = F.max_pool2d(x, 2, 2)
self.activation.append(x)
x = x.view(-1, 4 * 4 * 50)
x = F.relu(self.fc1(x))
self.activation.append(x)
x = F.dropout(x, training=self.training)
self.activation.append(x)
x = self.fc2(x)
self.activation.append(x)
return F.log_softmax(x, dim=1)
def plot_histogram(self):
for i, layer in enumerate(self.children()):
if isinstance(layer, (crxb_Conv2d,crxb_Linear)):
data = layer.weight.data.view(-1)
vis.histogram(data, 'w%d'%i, opts=dict(title='w%d'%i))
data = layer.weight_quan.data.view(-1)*layer.delta_w
vis.histogram(data, 'wq%d'%i, opts=dict(title='wq%d'%i))
# Define loss, optimizer, scheduler
criterion = nn.CrossEntropyLoss()
def test(args, model, device, test_loader):
if test_loader.dataset.train:
print("test on validation set\r\n")
else:
print("test on test set\r\n")
# validate
model.eval()
# test_loss = 0
# correct = 0
# num_samples = 0
losses = AverageMeter()
top1 = AverageMeter()
top5 = AverageMeter()
with torch.no_grad():
for data in test_loader:
inputs, labels = data[0].to(device), data[1].to(device)
outputs = model(inputs)
test_loss = criterion(outputs, labels).item()
# pred = outputs.argmax(dim=1, keepdim=True)
# correct += pred.eq(labels.view_as(pred)).sum().item()
# # _, pred = torch.max(outputs, 1)
# # correct += (pred == labels).sum().item()
# num_samples += pred.size(0)
prec1, prec5 = accuracy(outputs, labels, topk=(1, 5))
losses.update(test_loss, labels.size(0))
top1.update(prec1[0], labels.size(0))
top5.update(prec5[0], labels.size(0))
print('\nTest set: Average loss: {:.4f}, Accuracy: Prec@1:{}/{} ({:.2f}%) Prec@5:{}/{} ({:.2f}%)\n'.format(
losses.avg, top1.sum // 100, top1.count, top1.avg, top5.sum // 100, top1.count, top5.avg))
with torch.no_grad():
# model.clip_w()
model.plot_histogram()
# model.plot_qcurve()
return top1.avg, top5.avg, losses.avg
best_prec1 = 0
def main():
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
# model cfg
parser.add_argument('--model-type', type=str, default="MNIST", help="type of the model.")
parser.add_argument('--model-structure', type=int, default=0, metavar='N',
help='model structure to be trained (default: 0)')
parser.add_argument('--resume', default='', type=str, metavar='PATH',
help='path to latest checkpoint, (default: None)')
parser.add_argument('--e', '--evaluate', dest='evaluate', action='store_true',
help='evaluate model on validation set')
# dataset
parser.add_argument('--dataset-root', type=str, default="../datasets", help="load dataset path.")
parser.add_argument('--workers', default=0, type=int, metavar='N',
help='number of data loading workers (default: 0)')
parser.add_argument('--train-batch-size', type=int, default=64, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
# train cfg
parser.add_argument('--epochs', type=int, default=20, metavar='N',
help='number of epochs to train (default: 10)')
parser.add_argument('--start-epoch', default=0, type=int, metavar='N',
help='manual epoch number (useful to restarts)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
help='learning rate (default: 0.01)')
# device init cfg
parser.add_argument('--no-cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
# optimizer
parser.add_argument('--optim', type=str, default="", help="optim type Adam/SGD")
parser.add_argument('--resume-optim', action='store_true', default=False,
help='resume optim')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
help='SGD momentum (default: 0.5)')
parser.add_argument('--wd', default=5e-4, type=float,
metavar='W', help='weight decay (default: 5e-4)')
# scheduler
parser.add_argument('--scheduler', type=str, default="None", help="scheduler MultiStepLR/None/ReduceLROnPlateau")
parser.add_argument('--gamma', type=float, default=0.1, help='LR is multiplied by gamma on schedule.')
parser.add_argument('--decreasing-lr', default='10', help='decreasing strategy')
# result output cfg
parser.add_argument('--detail', action='store_true', default=False,
help='show log in detial')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
help='how many batches to wait before logging training status')
parser.add_argument('--save-model', action='store_true', default=True,
help='For Saving the current Model')
parser.add_argument('--checkpoint-path', type=str, default="", help="save model path.")
# crossbar cfg
parser.add_argument('--Quantized', action='store_true', default=False,
help='use quantized model')
parser.add_argument('--qbit', default='8,8,8', help='comma-separated input/weight/activation qbit')
parser.add_argument('--crxb-size', type=int, default=64, help='corssbar size')
parser.add_argument('--vdd', type=float, default=3.3, help='supply voltage')
parser.add_argument('--gwire', type=float, default=0.0357,
help='wire conductacne')
parser.add_argument('--gload', type=float, default=0.25,
help='load conductance')
parser.add_argument('--gmax', type=float, default=0.000333,
help='maximum cell conductance')
parser.add_argument('--gmin', type=float, default=0.000000333,
help='minimum cell conductance')
parser.add_argument('--ir-drop', action='store_true', default=False,
help='switch to turn on ir drop analysis')
parser.add_argument('--scaler-dw', type=float, default=1,
help='scaler to compress the conductance')
parser.add_argument('--test', action='store_true', default=False,
help='switch to turn inference mode')
parser.add_argument('--enable_noise', action='store_true', default=False,
help='switch to turn on noise analysis')
parser.add_argument('--enable_SAF', action='store_true', default=False,
help='switch to turn on SAF analysis')
parser.add_argument('--enable_ec-SAF', action='store_true', default=False,
help='switch to turn on SAF error correction')
parser.add_argument('--freq', type=float, default=10e6,
help='scaler to compress the conductance')
parser.add_argument('--temp', type=float, default=300,
help='scaler to compress the conductance')
args = parser.parse_args()
print("+++", args)
# Train the network on the training data
# Test the network on the test data
use_cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
qbit_list = list(map(int, args.qbit.split(',')))
crxb_cfg = {'ir_drop': args.ir_drop, 'device': device,
'gmax': args.gmax, 'gmin': args.gmin, 'gwire': args.gwire, 'gload': args.gload,
'input_qbit': qbit_list[0], 'weight_qbit': qbit_list[1], 'activation_qbit': qbit_list[2],
'vdd': args.vdd, 'enable_noise': args.enable_noise,
'freq': args.freq, 'temp': args.temp, 'crxb_size': args.crxb_size,
'enable_SAF': args.enable_SAF, 'enable_ec_SAF': args.enable_ec_SAF}
net = Net(**crxb_cfg).to(device)
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
net = nn.DataParallel(net)
net.to(device)
# for param in net.parameters():
# param = nn.init.normal_(param)
# config
milestones = list(map(int, args.decreasing_lr.split(',')))
print(milestones)
# optimizer = optim.SGD(net.parameters(), lr=lr, momentum=MOMENTUM, weight_decay=WEIGHT_DECAY) # not good enough 68%
# optimizer = optim.Adam(net.parameters(), lr=args.lr, weight_decay=args.wd)
if args.optim == "Adam":
optimizer = optim.Adam(net.parameters(), lr=args.lr, weight_decay=args.wd)
elif args.optim == "SGD":
optimizer = optim.SGD(net.parameters(), lr=args.lr, weight_decay=args.wd, momentum=args.momentum)
else:
optimizer = optim.SGD(net.parameters(), lr=args.lr, momentum=args.momentum)
# optionlly resume from a checkpoint
if args.resume:
print("=> using pre-trained model '{}'".format(args.model_type))
else:
print("=> creating model '{}'".format(args.model_type))
global best_prec1
if args.resume:
if os.path.isfile(args.resume):
print("=> loading checkpoint '{}'".format(args.resume))
checkpoint = torch.load(args.resume)
args.start_epoch = checkpoint['epoch']
best_prec1 = checkpoint['best_prec1']
model_dict = net.state_dict()
pretrained_dict = torch.load(args.resume)['state_dict']
# pretrained_dict = torch.load(args.resume,map_location=torch.device('cpu'))['state_dict']
print(model_dict.keys(),'\r\n')
print(pretrained_dict.keys(),'\r\n')
new_dict = {}
for k,v in model_dict.items():
if 'module.'+k in pretrained_dict:
new_dict[k] = pretrained_dict['module.'+k]
print(k,' ')
else:
new_dict[k] = pretrained_dict[k]
print(k,'!!')
model_dict.update(new_dict)
net.load_state_dict(model_dict)
# net.load_state_dict(checkpoint['state_dict'])
if args.resume_optim:
try:
optimizer.load_state_dict(checkpoint['optimizer'])
except KeyError:
print("saved optim not compatible")
print("=> loaded checkpoint '{}' (epoch {})".format(args.resume, checkpoint['epoch']))
else:
print("=> no checkpoint found at '{}'".format(args.resume))
# Data loading
kwargs = {'num_workers': args.workers, 'pin_memory': True} if use_cuda else {}
trainloader = torch.utils.data.DataLoader(
datasets.MNIST(args.dataset_root, train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args.train_batch_size, shuffle=True, **kwargs)
testloader = torch.utils.data.DataLoader(
datasets.MNIST(args.dataset_root, train=False, download=True, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args.test_batch_size, shuffle=True, **kwargs)
print(len(trainloader), len(testloader))
t_s = time.monotonic()
print("!!test!!")
test(args, net, device, testloader)
t_e = time.monotonic()
m, s = divmod(t_e - t_s, 60)
h, m = divmod(m, 60)
print("%d:%02d:%02d" % (h, m, s))
# print([i for i in net.named_modules()],len([i for i in net.named_modules()]),"\n")
# print([i for i in net.named_children()],len([i for i in net.named_children()]),"\n")
# plot_histogram(net.activation, net.activation_name)
# for epoch in range(1, args.epochs + 1):
# test(args, net, device, test_loader)
# plot_histogram(net.activation)
parm_name = []
parm_value = []
for name, parameters in net.named_parameters():
print(name, ':', parameters.size())
parm_name.append(name)
parm_value.append(parameters.detach())
# plot_histogram(parm_value, parm_name)
from scipy import stats
for weight in parm_value:
N = weight.cpu().numpy().ravel().shape[0]
p = 0.03
acc_p = 0
print('length:%d\n'%N)
k = stats.binom.ppf(p,N,p)
# stats.truncnorm
print(N-k,stats.binom.cdf(k,N,p))
# while acc_p<0.03:
# acc_p =+ comb(N, k)*(p**k)*((1-p)**(N-k))
# if np.isnan(acc_p):
# break
# k += 1
# if np.isnan(acc_p):
# k = stats.poisson.ppf(p, N*p)
# print(k, stats.poisson.cdf(k, N*p))
# break
# else:
# print(k, acc_p,'\n')
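# Added note: the tail mass the commented loop above tries to accumulate is
# available directly from SciPy's survival function (same N, p, k as computed
# above): stats.binom.sf(k - 1, N, p) gives P(X >= k).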
if __name__ == '__main__':
main()
# 2020/03/25: divide the quatization bits into three class, IA, W, MA.
# format the physical parameter dict: crxb_cfg
# add layer number flag: is_first_layer, is_last_layer
# 2020/03/25: add layer number flag: is_first_layer, is_last_layer
| 41.689373 | 120 | 0.599216 | 1,970 | 15,300 | 4.528934 | 0.198477 | 0.04035 | 0.076216 | 0.024658 | 0.252634 | 0.181462 | 0.154113 | 0.126877 | 0.111298 | 0.090787 | 0 | 0.02495 | 0.253399 | 15,300 | 366 | 121 | 41.803279 | 0.756106 | 0.135882 | 0 | 0.113821 | 0 | 0.004065 | 0.182972 | 0.002585 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020325 | false | 0 | 0.065041 | 0 | 0.097561 | 0.097561 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bbce131fd00cd33b602f41faebc657778eab17b | 1,400 | py | Python | scripts/replace.py | lukin0110/docker-django-boilerplate | 597f6501c6b6fddb5371b1083eeaf68fbf3a4de0 | [
"Apache-2.0"
] | 47 | 2016-10-12T13:04:36.000Z | 2021-11-21T05:16:40.000Z | scripts/replace.py | lukin0110/docker-django-boilerplate | 597f6501c6b6fddb5371b1083eeaf68fbf3a4de0 | [
"Apache-2.0"
] | 14 | 2016-10-11T20:27:14.000Z | 2022-02-10T11:34:03.000Z | scripts/replace.py | lukin0110/docker-django-boilerplate | 597f6501c6b6fddb5371b1083eeaf68fbf3a4de0 | [
"Apache-2.0"
] | 22 | 2017-03-03T19:59:27.000Z | 2021-02-23T17:29:04.000Z | #!/usr/bin/python
"""
Renames the default 'hello' project to your project name by replacing the
necessary settings in a few files.
"""
import os
def replace(file, old, new):
with open(file) as f:
new_text = f.read().replace(old, new)
with open(file, "w") as f:
f.write(new_text)
def handle(project_name):
replace("docker-compose.yml",
"POSTGRES_DB_NAME=hello",
"POSTGRES_DB_NAME={0}".format(project_name))
replace("app/manage.py",
'os.environ.setdefault("DJANGO_SETTINGS_MODULE", "hello.settings")',
'os.environ.setdefault("DJANGO_SETTINGS_MODULE", "{0}.settings")'.format(project_name))
replace("app/hello/settings.py",
"ROOT_URLCONF = 'hello.urls'",
"ROOT_URLCONF = '{0}.urls'".format(project_name))
replace("app/hello/settings.py",
"WSGI_APPLICATION = 'hello.wsgi.application'",
"WSGI_APPLICATION = '{0}.wsgi.application'".format(project_name))
replace("app/hello/wsgi.py",
'os.environ.setdefault("DJANGO_SETTINGS_MODULE", "hello.settings")',
'os.environ.setdefault("DJANGO_SETTINGS_MODULE", "{0}.settings")'.format(project_name))
# Rename the 'hello' dir to 'your_project'
os.rename("app/hello", "app/{0}".format(project_name))
if __name__ == "__main__":
name = raw_input("Project name: ")
handle(name)
| 31.111111 | 99 | 0.637857 | 176 | 1,400 | 4.875 | 0.335227 | 0.115385 | 0.118881 | 0.111888 | 0.4662 | 0.392774 | 0.355478 | 0.355478 | 0.277389 | 0.277389 | 0 | 0.005362 | 0.200714 | 1,400 | 44 | 100 | 31.818182 | 0.761394 | 0.119286 | 0 | 0.230769 | 0 | 0 | 0.459967 | 0.243464 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.038462 | 0 | 0.115385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bbd0ba88dc65d12926f3fc3f5160c37ab6639cf | 1,600 | py | Python | immortals/core/client.py | Den4200/immortals | 2c3e3316f498ade2f301f43748fc95f5fbe9daf2 | [
"MIT"
] | null | null | null | immortals/core/client.py | Den4200/immortals | 2c3e3316f498ade2f301f43748fc95f5fbe9daf2 | [
"MIT"
] | 2 | 2021-06-08T20:59:31.000Z | 2021-09-08T01:49:50.000Z | immortals/core/client.py | Den4200/immortals | 2c3e3316f498ade2f301f43748fc95f5fbe9daf2 | [
"MIT"
] | null | null | null | import arcade
from pymunk import Vec2d
from .events.events import PlayerEvent
from .events.states import GameState, PlayerState
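# A simple square player avatar, drawn filled or as an outline depending on the flag.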
class Player:
def __init__(self, x, y, color, filled=True):
self.pos = Vec2d(x, y)
self.color = color
self.filled = filled
self.size = 50
def draw(self):
if self.filled:
arcade.draw_rectangle_filled(
self.pos.x, self.pos.y,
self.size, self.size,
                self.color
)
else:
arcade.draw_rectangle_outline(
self.pos.x, self.pos.y,
self.size, self.size,
self.color,
border_width=4
)
class Immortals(arcade.Window):
def __init__(
self,
width: int,
height: int,
title: str = "Immortal"
) -> None:
super().__init__(width, height, title=title)
arcade.set_background_color(arcade.color.WHITE)
self.game_state = GameState(player_states=[PlayerState()])
self.player = Player(0, 0, arcade.color.GREEN_YELLOW, filled=False)
self.player_input = PlayerEvent()
self.keys_pressed = dict()
def on_draw(self) -> None:
arcade.start_render()
self.player.draw()
def on_key_press(self, key, modifiers) -> None:
self.keys_pressed[key] = True
self.player_input.keys = self.keys_pressed
def on_key_release(self, key, modifiers) -> None:
self.keys_pressed[key] = False
self.player_input.keys = self.keys_pressed
| 26.666667 | 75 | 0.579375 | 188 | 1,600 | 4.739362 | 0.324468 | 0.039282 | 0.084175 | 0.026936 | 0.242424 | 0.242424 | 0.242424 | 0.166106 | 0.080808 | 0.080808 | 0 | 0.006434 | 0.32 | 1,600 | 59 | 76 | 27.118644 | 0.8125 | 0 | 0 | 0.130435 | 0 | 0 | 0.005 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.086957 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bbf109435c6dc48312104c0a78ab9500a46a4ce | 19,405 | py | Python | course/views.py | ArnedyNavi/studymate | 55e6a2c6717dd478a311ea8bf839a26ca3ef2b40 | [
"MIT"
] | 4 | 2021-12-31T17:25:00.000Z | 2022-02-08T17:05:46.000Z | course/views.py | ArnedyNavi/studymate | 55e6a2c6717dd478a311ea8bf839a26ca3ef2b40 | [
"MIT"
] | null | null | null | course/views.py | ArnedyNavi/studymate | 55e6a2c6717dd478a311ea8bf839a26ca3ef2b40 | [
"MIT"
] | null | null | null | from django.http.response import JsonResponse
from django.http import Http404
from django.core.exceptions import PermissionDenied, BadRequest
from django.shortcuts import render
from django.views.decorators.csrf import csrf_exempt
from django.contrib.auth.decorators import login_required
from django.http import HttpResponseRedirect, HttpResponse, HttpResponseNotFound
from django.forms.models import model_to_dict
from django.core import serializers
from django.db.models import Q, Value
from django.template import loader
import json
from django.urls import reverse
from .models import *
import markdown as md
from PIL import Image
import os
import uuid
from studymate.settings import MEDIA_ROOT as media
import datetime
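# Renders the public preview page for a course: instructors, categories, the
# ordered content groups, and whether the current user is already enrolled.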
def preview(request, id):
course = Course.objects.filter(id=id).first()
if course == None:
raise Http404("Course Not Found")
else:
instructors = course.instructors.all()
categories = course.categories.all()
contents = []
content_groups = []
content_groupsDB = CourseContentGroup.objects.filter(course=course).order_by("order")
for content_group in content_groupsDB:
content = CourseContent.objects.filter(content_group = content_group).order_by("order")
content_group = {"title": content_group.title, "contents": content}
content_groups.append(content_group)
usercourseDB = UserCourse.objects.filter(user=request.user, course__id=id).first()
if usercourseDB == None:
enrolled = False
else:
enrolled = True
context = {
"course": course,
"instructors": instructors,
"categories": categories,
"content_groups": content_groups,
"enrolled": enrolled
}
return render(request, "course/preview.html", context)
def enroll(request, course_id):
if request.method == "POST":
userCourseDB = UserCourse.objects.filter(user=request.user, course__id=course_id).first()
if userCourseDB == None:
course = Course.objects.filter(id=course_id).first()
usercourseDB = UserCourse(user=request.user, course=course)
usercourseDB.save()
firstContentGroup = CourseContentGroup.objects.filter(course=course).order_by("order").first()
firstContent = CourseContent.objects.filter(content_group=firstContentGroup).order_by("order").first()
            userProgress = CourseUserProgress(info=usercourseDB, last_content=firstContent.id)
            userProgress.save()  # persist the progress record (it was created but never saved)
contents = CourseContent.objects.filter(content_group__course=course)
for content in contents:
userContentProgress = ContentUserProgress(content=content, user=request.user)
userContentProgress.save()
contentgroups = CourseContentGroup.objects.filter(course=course)
for group in contentgroups:
userContentGroupProgress = ContentGroupUserProgress(content_group=group, user=request.user)
userContentGroupProgress.save()
output = {
"status": "success"
}
return JsonResponse(output)
else:
raise PermissionDenied()
def unenroll(request):
if request.method == "POST":
data = request.POST
course_id = data.get("id", -1)
userCourseDB = UserCourse.objects.filter(user=request.user, course__id=course_id).first()
if userCourseDB != None:
course = Course.objects.filter(id=course_id).first()
userCourseProgress = CourseUserProgress.objects.filter(info=userCourseDB)
userCourseProgress.delete()
userCourseDB.delete()
userContentsProgress = ContentUserProgress.objects.filter(content__content_group__course=course, user=request.user)
for userContent in userContentsProgress:
userContent.delete()
userContentGroupProgress = ContentGroupUserProgress.objects.filter(content_group__course=course, user=request.user)
for group in userContentGroupProgress:
group.delete()
output = {
"status": "success"
}
return JsonResponse(output)
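# Resets the user's progress pointer to the first content of the course and returns it.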
def resetLastContent(userprogress, course_id):
firstContentGroup = CourseContentGroup.objects.filter(course__id=course_id).order_by("order").first()
firstContent = CourseContent.objects.filter(content_group=firstContentGroup).order_by("order").first()
userprogress.last_content = firstContent.id
userprogress.save()
return firstContent
def learn(request, course_id):
usercourseDB = UserCourse.objects.filter(course__id=course_id, user=request.user).first()
if usercourseDB == None:
return HttpResponseRedirect(reverse("course_preview", args=[course_id]))
else:
userprogress = CourseUserProgress.objects.filter(info=usercourseDB).first()
if userprogress == None:
userprogress = CourseUserProgress(info=usercourseDB)
userprogress.save()
course = Course.objects.filter(id=course_id).first()
last_group = CourseContent.objects.filter(id=userprogress.last_content).first()
if last_group == None:
last_group = resetLastContent(userprogress, course_id)
last_group = last_group.content_group.id
context = {
"user_data": usercourseDB,
"progress": userprogress,
"course": course,
"last_content_group": last_group
}
return render(request, "course/learn.html", context)
def search(request):
return render(request, "course/search.html")
def search_info(request):
query = request.GET["query"]
courses = Course.objects.filter(Q(name__icontains = query) | Q(description__icontains = query)).order_by('-overall_ratings').distinct()
html_response = loader.render_to_string("course/search_card_temp.html", {"courses": courses})
html_response = html_response.strip()
output = {
"status": "success",
"html": html_response
}
return JsonResponse(output)
def my_course(request):
course_inprogress = UserCourse.objects.filter(user=request.user, completed=False).order_by("-start_date")
course_completed = UserCourse.objects.filter(user=request.user, completed=True).order_by("-complete_date")
course_byuser = Course.objects.filter(maker=request.user)
context = {
"course_inprogress": course_inprogress,
"course_completed": course_completed,
"course_byuser": course_byuser
}
return render(request, "course/mycourse.html", context)
@csrf_exempt
def make_course(request):
if request.method == "POST":
data = request.POST
files = request.FILES
name = data["name"]
desc = data["desc"]
categories = json.loads(data["categories"])
contents = json.loads(data["content"])
instructors = json.loads(data["instructors"])
thumbnail = ""
profile_instructors = {}
for file in files:
if file == "thumbnail":
thumbnail = files[file]
else:
owner = int(file.split("-")[-1])
profile_instructors[owner] = files[file]
i = 0
instructorsDB = []
for instructor in instructors:
image = profile_instructors.get(i, 0)
if image == 0:
instructorModel = CourseInstructor(name=instructor)
else:
instructorModel = CourseInstructor(name=instructor, profile_image=image)
instructorModel.save()
instructorsDB.append(instructorModel)
i += 1
        categoriesDB = []
for category in categories:
categoryDB = CourseCategory.objects.filter(name=category).first()
if categoryDB == None:
categoryDB = CourseCategory(name=category)
categoryDB.save()
categoriesDB.append(categoryDB)
if thumbnail != "":
course = Course(name=name, description=desc, banner_image=thumbnail, maker=request.user)
else:
course = Course(name=name, description=desc, banner_image=thumbnail, maker=request.user)
course.save()
for category in categoriesDB:
course.categories.add(category)
for instructor in instructorsDB:
course.instructors.add(instructor)
course.save()
for i in range(len(contents)):
content_group = CourseContentGroup(course=course, title=contents[i]["subTopicTitle"], order=i+1)
content_group.save()
for j in range(len(contents[i]["contents"])):
content_now = contents[i]["contents"][j]
title = content_now["title"]
type = content_now["type"]
if type == "text":
isVideo = False
else:
isVideo = True
video_link = content_now["video_link"]
text_content = content_now["text_content"]
content = CourseContent(content_group=content_group, title=title, is_video=isVideo, video_link=video_link, content=text_content, order=j+1)
content.save()
output = {
"status": "success",
"url": reverse("course_preview", args=[course.id])
}
return JsonResponse(output)
return render(request, "course/add.html")
@csrf_exempt
def upload_image(request):
if request.method == "POST":
file = request.FILES
data = request.POST
if len(file) != 0:
img = Image.open(file["image"])
filename_before = file["image"].name
filename = "/course/content/uploads/" + str(uuid.uuid4()) + ".jpg"
img.save(media + filename, "JPEG")
output = {
"status": "success",
"url": "/media" + filename,
"filename": filename_before
}
else:
output = {
"status": "failed"
}
return JsonResponse(output)
@csrf_exempt
def markdown(request):
if request.method == "POST":
data = request.POST
text = data["text"]
md_ext = md.Markdown(extensions=["markdown_markup_emoji.markup_emoji", 'mdx_math', 'tables', 'footnotes', 'def_list', 'abbr', 'attr_list', 'fenced_code'])
html = md_ext.convert(text)
output = {
"status": "success",
"html": html
}
return JsonResponse(output)
def markdown_func(text):
md_ext = md.Markdown(extensions=["markdown_markup_emoji.markup_emoji", 'mdx_math', 'tables', 'footnotes', 'def_list', 'abbr', 'attr_list', 'fenced_code'])
html = md_ext.convert(text)
return html
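# Marks a single content item finished for the current user and, if the whole
# group is now done, cascades the completion to the content group.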
@login_required
def finishContent(request):
if request.method == "POST":
content_id = request.POST.get("id", -1)
if content_id != -1:
userContentProgress = ContentUserProgress.objects.filter(content__id=content_id, user=request.user).first()
if userContentProgress == None:
output = {
"status": "failed"
}
else:
if userContentProgress.completed == False:
userContentProgress.completed = True
userContentProgress.save()
courseContentDB = CourseContent.objects.filter(id=content_id).first()
checkFinishGroup(request, courseContentDB.content_group)
output = {
"status": "success"
}
else:
output = {
"status": "failed"
}
return JsonResponse(output)
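# Marks a content group complete once every content item in it has been
# completed by the requesting user.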
def checkFinishGroup(request, content_group):
contentsDB = ContentUserProgress.objects.filter(content__content_group=content_group, user=request.user)
finish = True
for content in contentsDB:
if content.completed == False:
finish = False
if finish == True:
userContentGroupProgress = ContentGroupUserProgress.objects.filter(content_group=content_group, user=request.user).first()
userContentGroupProgress.completed = True
userContentGroupProgress.save()
@login_required
def completeContent(request):
status = "failed"
if request.method == "POST":
data = request.POST
content_id = data["id"]
userContentProgress = ContentUserProgress.objects.filter(content__id=content_id, user=request.user).first()
if userContentProgress != None:
userContentProgress.completed = True
userContentProgress.save()
checkFinishGroup(request, userContentProgress.content.content_group)
status = "success"
output = {
"status": status
}
return JsonResponse(output)
@login_required
def setLastViewed(request):
status = "failed"
if request.method == "POST":
data = request.POST
content_id = data["id"]
content = CourseContent.objects.filter(id=content_id).first()
if content != None:
course = content.content_group.course
            userprogress = CourseUserProgress.objects.filter(info__course=course, info__user=request.user).first()
if userprogress != None:
userprogress.last_content = content_id
userprogress.save()
status = "success"
output = {
'status': status
}
return JsonResponse(output)
@login_required
def validateCompletion(request):
status = "failed"
completed = False
if request.method == "POST":
data = request.POST
course_id = data.get("id", -1)
if course_id != -1:
userGroupProgress = ContentGroupUserProgress.objects.filter(content_group__course__id=course_id, user=request.user)
completed = True
for progress in userGroupProgress:
if progress.completed == False:
completed = False
if completed:
userCourseInfo = UserCourse.objects.filter(user=request.user, course__id=course_id).first()
if userCourseInfo != None:
if userCourseInfo.completed == False:
userCourseInfo.completed = True
userCourseInfo.complete_date = datetime.datetime.now()
userCourseInfo.save()
status = "success"
output = {
"status": status,
"completed": completed
}
return JsonResponse(output)
@login_required
def getCourseInfo(request):
data = request.GET
by = data["by"]
course = data.get("course", -1)
group = data.get("group", -1)
content = data.get("content", -1)
data = {}
status = "failed"
if by != -1:
if by == "course":
if course != -1:
courseDB = Course.objects.filter(id=course)
if len(courseDB) != 0:
userCourse = UserCourse.objects.filter(user=request.user, course=courseDB.first()).first()
if userCourse != None:
status = "success"
courseDB = Course.objects.filter(id=course)
course_info = list(courseDB.values())[0]
groupDB = CourseContentGroup.objects.filter(course=courseDB.first()).order_by('order')
group_info = list(groupDB.values())
content_info = None
data["course"] = course_info
data["groups"] = group_info
elif by == "group":
if group != -1:
groupDB = CourseContentGroup.objects.filter(id=group).first()
if groupDB != None:
courseDB = groupDB.course
userCourse = UserCourse.objects.filter(user=request.user, course=courseDB).first()
if userCourse != None:
status = "success"
contentsDB = CourseContent.objects.filter(content_group=groupDB).order_by('order')
info = json.loads(serializers.serialize('json', [courseDB, groupDB]))
data["course"] = info[0]["fields"]
data["groups"] = info[1]["fields"]
data["contents"] = list(contentsDB.values('id', 'title', 'is_video'))
elif by == "content":
if content != -1:
contentDB = CourseContent.objects.filter(id=content).first()
if contentDB != None:
groupDB = contentDB.content_group
courseDB = groupDB.course
userCourse = UserCourse.objects.filter(user=request.user, course=courseDB).first()
if userCourse != None:
status = "success"
info = json.loads(serializers.serialize('json', [courseDB, groupDB, contentDB]))
data["course"] = info[0]["fields"]
data["groups"] = info[1]["fields"]
data["contents"] = info[2]["fields"]
data["contents"]["content"] = markdown_func(data["contents"]["content"])
output = {
"status": status,
"data": data
}
return JsonResponse(output)
@login_required
def getUserCourseInfo(request):
data = request.GET
by = data["by"]
course = data.get("course", -1)
group = data.get("group", -1)
data = {}
status = "failed"
if by != -1:
if by == "course":
if course != -1:
courseDB = Course.objects.filter(id=course)
if len(courseDB) != 0:
userCourse = UserCourse.objects.filter(user=request.user, course=courseDB.first()).first()
if userCourse != None:
status = "success"
contentGroupProgress = ContentGroupUserProgress.objects.filter(content_group__course__id=course, user=request.user).order_by("content_group__order")
if len(contentGroupProgress) != 0:
groupProgressInfo = list(contentGroupProgress.values())
data["group_progress"] = groupProgressInfo
elif by == "group":
if group != -1:
groupDB = CourseContentGroup.objects.filter(id=group)
if len(groupDB) != 0:
userCourse = UserCourse.objects.filter(user=request.user, course=groupDB.first().course).first()
if userCourse != None:
status = "success"
contentProgress = ContentUserProgress.objects.filter(content__content_group__id = group, user=request.user).order_by("content__order")
if len(contentProgress) != 0:
contentProgressInfo = list(contentProgress.values())
data["content_progress"] = contentProgressInfo
output = {
"status": status,
"data": data
}
return JsonResponse(output)
| 39.441057 | 172 | 0.59629 | 1,826 | 19,405 | 6.212486 | 0.12322 | 0.057299 | 0.031735 | 0.026181 | 0.453103 | 0.39122 | 0.325458 | 0.29055 | 0.234044 | 0.224083 | 0 | 0.003163 | 0.29951 | 19,405 | 491 | 173 | 39.521385 | 0.831384 | 0 | 0 | 0.398601 | 0 | 0 | 0.068649 | 0.006185 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044289 | false | 0 | 0.04662 | 0.002331 | 0.137529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bbf2f26eba0cc05c039e5a5abf9a4d4e5d27561 | 19,151 | py | Python | trackerTest/signDetection.py | gone-still/TE3002B | ccd2d532a947b624ff7890b633f66b6e60158948 | [
"MIT"
] | null | null | null | trackerTest/signDetection.py | gone-still/TE3002B | ccd2d532a947b624ff7890b633f66b6e60158948 | [
"MIT"
] | null | null | null | trackerTest/signDetection.py | gone-still/TE3002B | ccd2d532a947b624ff7890b633f66b6e60158948 | [
"MIT"
] | null | null | null | # File : signDetection.py (Traffic signal detection, classification and tracking example)
# Version : 0.10.3
# Description : Script that tests classification + tracking of
# : traffic signal
# Date : June 05, 2022
# Author : Ricardo Acevedo-Avila (racevedoaa@gmail.com)
# License : MIT
import cv2
import numpy as np
from fastKLT import FastKLT
from tensorflow.keras.models import load_model
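# Pipeline overview: a Haar cascade proposes sign candidates, grab-cut refines
# the bounding box, a CNN classifies the sign, and a FastKLT tracker follows
# it across subsequent frames until tracking fails.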
# Shows an image
def showImage(imageName, inputImage, delay=0):
cv2.namedWindow(imageName, cv2.WINDOW_NORMAL)
cv2.imshow(imageName, inputImage)
cv2.waitKey(delay)
# Writes a png image to disk:
def writeImage(imagePath, inputImage):
imagePath = imagePath + ".png"
cv2.imwrite(imagePath, inputImage, [cv2.IMWRITE_PNG_COMPRESSION, 0])
print("Wrote Image: " + imagePath)
# Clamps an integer to a valid range:
def clamp(val, minval, maxval):
if val < minval: return minval
if val > maxval: return maxval
return val
# Obtains a blob bounding rect via grab-cut
def getGrabCutMask(inputRect, inputImage):
# Unpack the rect tuple:
(sx, sy, sw, sh) = inputRect
# Default out values:
goodBlob = False
blobRect = ()
# goodMask = np.zeros((sh, sw, 1), dtype="uint8")
# Define object area for grab-cut (the "window"),
# Window centroid:
cxWindow = int(0.5 * sw)
cyWindow = int(0.5 * sh)
# Loop thru all window scales:
for s in range(1, 4):
# Get current scale:
s = 2 * s
print("scale: " + str(s))
# Define window top left corner:
currentWindowScale = 1 / s
xWindow = int(currentWindowScale * cxWindow)
yWindow = int(currentWindowScale * cyWindow)
# Define window width and height:
wWindow = int(2 * (cxWindow - xWindow))
hWindow = int(2 * (cyWindow - yWindow))
# Define the tuple:
grabCutRect = (xWindow, yWindow, wWindow, hWindow)
print(grabCutRect)
# Show the window:
grabCutArea = inputImage.copy()
# Show the grab cut area:
color = (0, 0, 255)
cv2.rectangle(grabCutArea, (xWindow, yWindow), (xWindow + wWindow, yWindow + hWindow),
color, 2)
# Show centroid:
color = (255, 0, 0)
cv2.line(grabCutArea, (cxWindow, cyWindow), (cxWindow, cyWindow), color, 2)
showImage("grabCutArea: " + str(s), grabCutArea)
# writeImage(outPath + "grabCutArea", grabCutArea)
# Tune the detection using grab n cut:
# The mask is a uint8 type, same dimensions as
# original input:
mask = np.zeros(inputImage.shape[:2], np.uint8)
# Grab n Cut needs two empty matrices of
# Float type (64 bits) and size 1 (rows) x 65 (columns):
bgModel = np.zeros((1, 65), np.float64)
fgModel = np.zeros((1, 65), np.float64)
# Run Grab n Cut on INIT_WITH_RECT mode:
grabCutIterations = 2
mask, bgModel, fgModel = cv2.grabCut(inputImage, mask, grabCutRect, bgModel, fgModel,
grabCutIterations, mode=cv2.GC_INIT_WITH_RECT)
# Set all definite background (0) and probable background pixels (2)
# to 0 while definite foreground and probable foreground pixels are
# set to 1
outputMask = np.where((mask == cv2.GC_BGD) | (mask == cv2.GC_PR_BGD), 0, 1)
# Scale the mask from the range [0, 1] to [0, 255]
outputMask = (outputMask * 255).astype("uint8")
showImage("GrabCut Mask", outputMask)
# writeImage(outPath + "grabCutMask", outputMask)
# Get blob area:
currentBlobArea = cv2.countNonZero(outputMask)
print("currentBlobArea: " + str(currentBlobArea))
# Check if we have a good blob,
# Check area:
if currentBlobArea > minBlobArea:
# Get blob rect:
(bx, by, bw, bh) = cv2.boundingRect(outputMask)
blobAspectRatio = bh / bw
print("blobAspectRatio: " + str(blobAspectRatio))
# Get aspect ratio difference:
aspectRatioDifference = abs(1.0 - blobAspectRatio)
epsilon = 0.3
# Check aspect ratio:
if aspectRatioDifference <= epsilon:
print("Got good blob. Scale: " + str(s))
goodBlob = True
# # Give some slack:
# bx = clamp(bx - trackerBorders[0], 0, bw)
# by = clamp(by - trackerBorders[1], 0, bh)
# bw = clamp(bw + 2 * trackerBorders[2], 0, bw)
# bh = clamp(bh + 2 * trackerBorders[3], 0, bh)
blobRect = (bx, by, bw, bh)
# goodMask = outputMask
# Set out values:
outTuple = (goodBlob, blobRect)
return outTuple
# Script variables:
# Set the file paths and names:
filePath = "D://trackerTest//"
outPath = filePath + "out//"
modelsPath = filePath + "models//"
# CNN size:
imageSize = (64, 64)
# Class dictionary:
classDictionary = {0: "Stop", 1: "Ahead Only", 2: "Roundabout", 3: "Turn Right", 4: "End Speed", 5: "No Entry"}
# Speed of the video:
videoSpeed = 1
frameWidth = 1280
frameHeight = 720
# Frame aspect ratio:
aspectRatio = frameWidth / frameHeight
# Frame Counter:
frameCounter = 0
# Min CNN probability:
minClassProbability = 0.6
# Cascade resize parameters:
cascadeScale = 50
# Resize the frame for cascade detection:
resizedWidth = int(frameWidth * cascadeScale / 100)
resizedHeight = int(frameHeight * cascadeScale / 100)
# Cascade ROI
# Crop the roi for cascade detection, top left, width, height:
roiScale = cascadeScale / 100
roiX = int(0)
roiY = int(60)
roiHeight = int(255)
roiWidth = int(frameWidth * roiScale)
cascadeRoi = (int(roiX), int(roiY * roiScale), int(roiWidth), int(roiHeight))
# Left/Right starting horizontal coordinates:
sideWidthFactor = 0.4
leftSide = int(sideWidthFactor * roiWidth)
rightSide = int(roiWidth - sideWidthFactor * roiWidth)
# Detection mask:
detectionMask = np.zeros((resizedHeight, resizedWidth, 1), np.uint8)
detectionMask = 255 - detectionMask
# Set the detection mask coordinates:
maskX = leftSide
maskY = roiY
maskWidth = rightSide - leftSide
maskHeight = roiHeight
# draw detection mask rect:
cv2.rectangle(detectionMask, (maskX, maskY), (maskX + maskWidth, maskY + maskHeight), 0, -1)
showImage("detectionMask", detectionMask)
# Crop Mask to detection dimensions:
detectionMask = detectionMask[cascadeRoi[1]:cascadeRoi[1] + cascadeRoi[3],
cascadeRoi[0]:cascadeRoi[0] + cascadeRoi[2]]
showImage("detectionMask [Cropped]", detectionMask)
# Tracker Parameters:
maxFeatures = 100
fastThreshold = 5
nRows = 3
nCols = 3
kltWindowSize = 10
shrinkRatio = 0.05
ransacThreshold = 0.9
trackerId = 1
# Tracking margin in pixels (x,y)
# This controls how much of the signal's surrounding area
# the tracker "sees". Useful for increasing the number of tracking
# keypoints (more stable tracking)
trackerBorders = (3, 3)
# Running cascade at first frame:
runCascade = True
# Set tracker parameters:
parametersTuple = [maxFeatures, (nRows, nCols), fastThreshold, shrinkRatio, (kltWindowSize, kltWindowSize),
ransacThreshold, trackerId]
# Create the tracker with parameters:
tracker = FastKLT(parametersTuple)
# Enable debug information:
tracker.setVerbose(False)
# Show tracker's grid keypoints:
tracker.showGrid(False)
# Load the CNN model:
model = load_model(modelsPath + "signnet.model")
classString = ""
# Set the video device:
videoDevice = cv2.VideoCapture(filePath + "trafficSign05.mp4")
trackerCounter = 0
# Load cascade:
signCascade = cv2.CascadeClassifier(modelsPath + "cascades//" "signalCascade-05.xml")
# Threshold parameters:
minCascadeArea = 900
minCascadeAspectRatio = 0.9
minBlobArea = 10
# Check if device is opened:
while videoDevice.isOpened():
# Get video device frame:
success, frame = videoDevice.read()
# We have a nice frame to process:
if success:
# Extract frame size:
(frameHeight, frameWidth) = frame.shape[:2]
# Resize image
detectionRoi = cv2.resize(frame, (resizedWidth, resizedHeight), interpolation=cv2.INTER_LINEAR)
# writeImage(filePath+"inputFrame", detectionRoi)
# Resized deep copy:
roiCopy = detectionRoi.copy()
# Draw ROI area:
# Roi rect:
cv2.rectangle(roiCopy, (roiX, roiY), (roiX + roiWidth, roiY + roiHeight), (0, 0, 255), 1)
# Left and right:
cv2.line(roiCopy, (leftSide, 0), (leftSide, roiY + resizedHeight), (255, 0, 0), 1)
cv2.line(roiCopy, (rightSide, 0), (rightSide, roiY + resizedHeight), (255, 0, 0), 1)
showImage("roiCopy", roiCopy)
# Crop to detection dimensions:
detectionRoi = detectionRoi[cascadeRoi[1]:cascadeRoi[1] + cascadeRoi[3],
cascadeRoi[0]:cascadeRoi[0] + cascadeRoi[2]]
# Grayscale Conversion:
detectionRoiColor = detectionRoi.copy()
detectionRoi = cv2.cvtColor(detectionRoi, cv2.COLOR_BGR2GRAY)
showImage("detectionRoi", detectionRoi)
# Let's see if we must run cascade detection:
if runCascade:
# Run Haar Cascade
            # Tune these parameters to adjust the detection's quality:
boundingBoxes = signCascade.detectMultiScale(detectionRoi, scaleFactor=1.015, minNeighbors=4,
minSize=(3, 3))
totalBoxes = len(boundingBoxes)
print("Objects detected via Cascade: " + str(totalBoxes))
# We need at least one detection:
if totalBoxes > 0:
# Got detection,
# Convert gray ROI to BGR:
detectionRoi = cv2.cvtColor(detectionRoi, cv2.COLOR_GRAY2BGR)
                # Loop through all the bounding boxes:
for (x, y, w, h) in boundingBoxes:
# Compute box area:
cascadeArea = w * h
print("Cascade Area: " + str(cascadeArea))
# Check minimum area:
if cascadeArea >= minCascadeArea:
print("Got box with good area.")
# Compute box centroid:
cx = int(x + 0.5 * w)
cy = int(y + 0.5 * h)
# Default color:
color = (0, 0, 0)
# Get detection mask "zone valid" pixel:
validPixel = int(detectionMask[cy, cx])
# Check if we have a valid pixel inside the
# "processing" zone:
if validPixel == 255:
# green is right:
color = (0, 255, 0)
print("Got valid Haar Cascade Box")
else:
# red is not:
color = (0, 0, 255)
print("Got invalid Haar Cascade Box")
# Draw the bounding box:
cv2.rectangle(detectionRoi, (x, y), (x + w, y + h), color, 2)
showImage("Haar Boxes", detectionRoi)
# So far, so good. Continue processing:
if validPixel == 255:
# Crop via cascade:
targetCrop = detectionRoiColor[y:y + h, x:x + w]
showImage("targetCrop [Cascade]", targetCrop)
# Define the "search window" for
# grab-cut:
maskRect = (x, y, w, h)
# Get refined rectangle via grab-cut:
(goodBlob, boundRect) = getGrabCutMask(maskRect, targetCrop)
# Check out if grab-cut got a valid blob:
if goodBlob:
print("Grab-cut found valid blob.")
# Got good blob, compute its bounding rectangle:
# boundRect = cv2.boundingRect(outputMask)
# Set new rect dimensions:
xGrabCut = boundRect[0]
yGrabCut = boundRect[1]
wGrabCut = boundRect[2]
hGrabCut = boundRect[3]
# Refine crop area:
targetCrop = targetCrop[yGrabCut:yGrabCut + hGrabCut, xGrabCut:xGrabCut + wGrabCut]
showImage("targetCrop [Refined]", targetCrop)
# writeImage(outPath + "targetCropRefined", targetCrop)
print("Sending crop to CNN...")
showImage("targetCrop [Pre-process]", targetCrop)
# Resize to CNN dimensions:
targetCrop = cv2.cvtColor(targetCrop, cv2.COLOR_BGR2RGB)
targetCrop = cv2.resize(targetCrop, imageSize)
showImage("targetCrop [Post-process]", targetCrop)
# Scale between 0.0 and 1.0
targetCrop = targetCrop.astype("float") / 255.0
# Add the "batch" dimension:
targetCrop = np.expand_dims(targetCrop, axis=0)
print("[signnet - Test] Classifying image...")
                                # Get the predictions:
predictions = model.predict(targetCrop)
print(predictions)
# Get max probability and its class:
classIndex = predictions.argmax(axis=1)[0]
classLabel = classDictionary[classIndex]
classProbability = predictions[0][classIndex]
print("ClassIndex:", classIndex, " classProbability:", classProbability, " classLabel:",
classLabel)
                                # Discard low-confidence classifications and process
                                # only if we have a good prediction:
if classProbability >= minClassProbability:
classString = str(classIndex) + " " + classLabel + " (" + str(
int(100 * classProbability)) + "%)"
print("Sending frame to tracker...")
# Goes to the tracker:
# Add the initial cropped amount and add some margins:
print((x, y, w, h))
(cropHeight, cropWidth) = detectionRoi.shape[:2]
xTrack = clamp(x + (xGrabCut - trackerBorders[0]), 0, cropWidth)
yTrack = clamp(y + (yGrabCut - trackerBorders[1]), 0, cropHeight)
wTrack = clamp(wGrabCut + 2 * trackerBorders[0], 0, cropWidth)
hTrack = clamp(hGrabCut + 2 * trackerBorders[1], 0, cropHeight)
                                    # Log the tracking bounding rectangle:
print((xTrack, yTrack, wTrack, hTrack))
                                    # Draw the tracking area:
trackerRectInput = detectionRoiColor.copy()
# Cascade estimation:
cv2.rectangle(trackerRectInput, (int(x), int(y)),
(int(x + w), int(y + h)), (255, 255, 0), 1)
# Margin added
cv2.rectangle(trackerRectInput, (int(xTrack), int(yTrack)),
(int(xTrack + wTrack), int(yTrack + hTrack)), (255, 0, 255), 1)
showImage("trackerRectInput [Tracker Rect]", trackerRectInput)
# showImage("detectionRoi [Tracker Input]", detectionRoi)
# writeImage(outPath + "trackerInput", detectionRoi)
                                    # Initialize the tracker:
tracker.initTracker(detectionRoi, (xTrack, yTrack, wTrack, hTrack))
# Cascade is no longer needed:
runCascade = False
else:
                                    # The CNN returned a low-confidence prediction:
print("Min Class Probability not met. Running CNN on next frame...")
else:
print("Updating Tracker...")
# Update the tracker:
detectionRoi = cv2.cvtColor(detectionRoi, cv2.COLOR_GRAY2BGR)
status, trackedObj = tracker.updateTracker(detectionRoi)
print(status)
# If the tracker is good, let's continue
# processing:
if status:
# Draw rectangle:
(startX, startY, endX, endY) = trackedObj
color = (0, 255, 0)
cv2.rectangle(detectionRoi, (int(startX), int(startY)),
(int(startX + endX), int(startY + endY)), color, 2)
# Class text:
org = (int(startX), int(startY + endY))
font = cv2.FONT_HERSHEY_SIMPLEX
color = (255, 0, 0)
cv2.putText(detectionRoi, classString, org, font, 0.4, color, 1, cv2.LINE_AA)
else:
# Tracker failed (probably lost or not enough keypoints),
# Run manual detection + classification on next frames:
runCascade = True
# Show the final output:
showImage("resizedImage [Objects]", detectionRoi, 0)
# writeImage(outPath + "detectionRoi-" + str(trackerCounter), detectionRoi)
trackerCounter += 1
# Increase frame counter:
frameCounter += 1
# Show the raw, input frame:
textX = 10
textY = 30
org = (textX, textY)
font = cv2.FONT_HERSHEY_SIMPLEX
color = (0, 255, 0)
frameString = "Frame: " + str(frameCounter)
cv2.putText(frame, frameString, org, font, 1, color, 1, cv2.LINE_AA)
showImage("Input Frame", frame, videoSpeed)
# writeImage(outPath + "inputFrame", frame)
# Break on "q"
if cv2.waitKey(1) & 0xFF == ord("q"):
break
else:
print("Could not extract frame.")
break
# Release the capture device:
videoDevice.release()
cv2.destroyAllWindows()
print("Video Device closed")
| 37.186408 | 120 | 0.537048 | 1,833 | 19,151 | 5.597381 | 0.279324 | 0.002534 | 0.001949 | 0.002632 | 0.05039 | 0.041033 | 0.022807 | 0.012866 | 0.012866 | 0.012866 | 0 | 0.029133 | 0.370894 | 19,151 | 514 | 121 | 37.258755 | 0.82246 | 0.242964 | 0 | 0.099174 | 0 | 0 | 0.061489 | 0 | 0 | 0 | 0.000279 | 0 | 0 | 1 | 0.016529 | false | 0 | 0.016529 | 0 | 0.041322 | 0.099174 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bbfff9d5cb10f9609c6a00f9d361868093f1e49 | 616 | py | Python | solutions/codeforces/155A.py | forxhunter/ComputingIntro | 50fa2ac030748626c694ec5c884c5ac32f0b42a8 | [
"Apache-2.0"
] | 1 | 2021-01-02T04:31:34.000Z | 2021-01-02T04:31:34.000Z | solutions/codeforces/155A.py | forxhunter/ComputingIntro | 50fa2ac030748626c694ec5c884c5ac32f0b42a8 | [
"Apache-2.0"
] | null | null | null | solutions/codeforces/155A.py | forxhunter/ComputingIntro | 50fa2ac030748626c694ec5c884c5ac32f0b42a8 | [
"Apache-2.0"
] | null | null | null | '''
I_love_%username%
requirements:
First, it is amazing if during the contest the coder earns strictly more points that he earned on each past contest.
Second, it is amazing if during the contest the coder earns strictly less points that he earned on each past contest.
Third, A coder's first contest isn't considered amazing.
Output:
Amazing number
'''
n = int(input())
scores = list(map(int,input().split(' ')))
amazings = 0
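# A contest is amazing when its score strictly beats the max, or strictly
# undercuts the min, of all previous scores.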
for i in range(1, n):
max_score = max(scores[:i])
min_score = min(scores[:i])
if scores[i] > max_score or scores[i] < min_score:
amazings += 1
print(amazings)
| 29.333333 | 118 | 0.702922 | 100 | 616 | 4.27 | 0.51 | 0.065574 | 0.051522 | 0.06089 | 0.398126 | 0.398126 | 0.398126 | 0.398126 | 0.234192 | 0.234192 | 0 | 0.006036 | 0.193182 | 616 | 20 | 119 | 30.8 | 0.853119 | 0.561688 | 0 | 0 | 0 | 0 | 0.003922 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bc27355e00764d980c05ece718138e0306ee60d | 3,173 | py | Python | streetmapper/tests/test_data_ops.py | ResidentMario/streetmapper | 6c7410eb2fee292e0b28971b053b0bdffa3bd2cd | [
"MIT"
] | 8 | 2018-11-19T18:03:30.000Z | 2020-05-28T02:50:46.000Z | streetmapper/tests/test_data_ops.py | ResidentMario/streetmapper | 6c7410eb2fee292e0b28971b053b0bdffa3bd2cd | [
"MIT"
] | null | null | null | streetmapper/tests/test_data_ops.py | ResidentMario/streetmapper | 6c7410eb2fee292e0b28971b053b0bdffa3bd2cd | [
"MIT"
] | null | null | null | import unittest
import pandas as pd
import geopandas as gpd
from shapely.geometry import Polygon, Point
import streetmapper
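# The two blocks below tile a 2x2 square centered at the origin; the inline
# side labels appear to follow a (y, x) reading of the coordinates.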
class TestJoinBldgsBlocks(unittest.TestCase):
def setUp(self):
self.blocks = gpd.GeoDataFrame(
{'block_uid': [1, 2]},
geometry=[
Polygon(((0, -1), (0, 1), (1, 1), (1, -1))), # top side
Polygon(((0, -1), (0, 1), (-1, 1), (-1, -1))) # bottom side
]
)
def testNoBldgs(self):
bldgs = gpd.GeoDataFrame()
blocks = self.blocks
matches, multimatches, nonmatches =\
streetmapper.pipeline.join_bldgs_blocks(bldgs, blocks, 'bldg_uid', 'block_uid')
self.assertEqual(len(matches), 0)
self.assertEqual(len(multimatches), 0)
self.assertEqual(len(nonmatches), 0)
def testFullyUnivariateMatch(self):
bldgs = gpd.GeoDataFrame(
{'bldg_uid': [1, 2, 3, 4]},
geometry=[
Polygon(((0, 0), (0, 1), (1, 1), (1, 0))).buffer(-0.01), # top right
Polygon(((0, 0), (0, -1), (1, -1), (1, 0))).buffer(-0.01), # top left
Polygon(((0, 0), (0, 1), (-1, 1), (-1, 0))).buffer(-0.01), # bottom right
Polygon(((0, 0), (0, -1), (-1, -1), (-1, 0))).buffer(-0.01) # bottom left
]
)
blocks = self.blocks
matches, multimatches, nonmatches =\
streetmapper.pipeline.join_bldgs_blocks(bldgs, blocks, 'bldg_uid', 'block_uid')
self.assertEqual(len(matches), 4)
self.assertEqual(len(multimatches), 0)
self.assertEqual(len(nonmatches), 0)
def testAllKindsOfMatches(self):
bldgs = gpd.GeoDataFrame(
{'bldg_uid': [1, 2, 3]},
geometry=[
Polygon(((0, 0), (0, 1), (1, 1), (1, 0))).buffer(-0.01), # top right, interior
Polygon(((-1, 0), (1, 0), (1, -1), (-1, -1))).buffer(-0.01), # bottom, spanning
Polygon(((10, 10), (10, 11), (11, 11), (11, 10))) # exterior
]
)
blocks = self.blocks
matches, multimatches, nonmatches =\
streetmapper.pipeline.join_bldgs_blocks(bldgs, blocks, 'bldg_uid', 'block_uid')
self.assertEqual(len(matches), 1)
self.assertEqual(len(multimatches), 2)
self.assertEqual(len(nonmatches), 1)
class TestBldgsOnBlock(unittest.TestCase):
def setUp(self):
self.block = Polygon(((0, 0), (0, 2), (2, 2), (2, 0)))
def testSimple(self):
bldgs = gpd.GeoDataFrame(geometry=[
Polygon(((0, 0), (0, 1), (1, 1), (1, 0))), # in
Polygon(((10, 10), (10, 11), (11, 11), (11, 10))) # out
])
result = streetmapper.pipeline.bldgs_on_block(bldgs, self.block)
assert len(result) == 1
def testMulitmatchOff(self):
bldgs = gpd.GeoDataFrame(geometry=[
Polygon(((0, 0), (0, 1), (1, 1), (1, 0))), # in
Polygon(((1, 1), (5, 1), (5, 5), (1, 5))) # through
])
result = streetmapper.pipeline.bldgs_on_block(bldgs, self.block, include_multimatches=False)
assert len(result) == 1
| 36.471264 | 100 | 0.526316 | 379 | 3,173 | 4.353562 | 0.168865 | 0.04 | 0.04 | 0.029091 | 0.675758 | 0.675758 | 0.63697 | 0.632727 | 0.632727 | 0.486667 | 0 | 0.07414 | 0.294359 | 3,173 | 86 | 101 | 36.895349 | 0.662796 | 0.04034 | 0 | 0.457143 | 0 | 0 | 0.025074 | 0 | 0 | 0 | 0 | 0 | 0.157143 | 1 | 0.1 | false | 0 | 0.071429 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bc46be25992dc03d42b2173d89358b266382279 | 11,532 | py | Python | Extraction/extract_vector.py | Wenhao-Yang/DeepSpeaker-pytorch | 99eb8de3357c85e2b7576da2a742be2ffd773ead | [
"MIT"
] | 8 | 2020-08-26T13:32:56.000Z | 2022-01-18T21:05:46.000Z | Extraction/extract_vector.py | Wenhao-Yang/DeepSpeaker-pytorch | 99eb8de3357c85e2b7576da2a742be2ffd773ead | [
"MIT"
] | 1 | 2020-07-24T17:06:16.000Z | 2020-07-24T17:06:16.000Z | Extraction/extract_vector.py | Wenhao-Yang/DeepSpeaker-pytorch | 99eb8de3357c85e2b7576da2a742be2ffd773ead | [
"MIT"
] | 5 | 2020-12-11T03:31:15.000Z | 2021-11-23T15:57:55.000Z | #!/usr/bin/env python
# encoding: utf-8
"""
@Author: yangwenhao
@Contact: 874681044@qq.com
@Software: PyCharm
@File: extract_vector.py
@Time: 19-6-25 下午3:47
@Overview:Given audio samples, extract embeddings from checkpoint file in this script.
Extractor vectors for enrollment and test sets.
For enrollment set: Output (features, spkid)
For test set: Output (features, uttid)
"""
import argparse
import torch
import torch.optim as optim
import torchvision.transforms as transforms
from torch.autograd import Variable
import torch.backends.cudnn as cudnn
import os
import pdb
import numpy as np
from tqdm import tqdm
from Define_Model import ResSpeakerModel
from logger import Logger
from Process_Data.DeepSpeakerDataset_dynamic import DeepSpeakerEnrollDataset
from Process_Data.voxceleb_wav_reader import if_load_npy
from Define_Model import PairwiseDistance
from Process_Data.audio_processing import toMFB, totensor, truncatedinput, truncatedinputfromMFB,read_MFB,read_audio,mk_MFB
from Process_Data.audio_processing import to4tensor, concateinputfromMFB
import torch._utils
try:
torch._utils._rebuild_tensor_v2
except AttributeError:
def _rebuild_tensor_v2(storage, storage_offset, size, stride, requires_grad, backward_hooks):
tensor = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
tensor.requires_grad = requires_grad
tensor._backward_hooks = backward_hooks
return tensor
torch._utils._rebuild_tensor_v2 = _rebuild_tensor_v2
# Training settings
parser = argparse.ArgumentParser(description='PyTorch Speaker Recognition Feature Extraction')
# Dataset and model file path
parser.add_argument('--dataroot', type=str, default='Data/dataset/enroll',
help='path to extracting dataset')
parser.add_argument('--enroll', action='store_true', default=True,
help='enroll step or test step')
parser.add_argument('--extract-path', type=str, default='Data/xvector/enroll',
help='path to pairs file')
parser.add_argument('--log-dir', default='Data/extract_feature_logs',
help='folder to output model checkpoints')
parser.add_argument('--model-path', default='Data/checkpoint/checkpoint_35.pth', type=str, metavar='PATH',
help='path to latest checkpoint (default: none)')
# Model options
parser.add_argument('--embedding-size', type=int, default=512, metavar='ES',
help='Dimensionality of the embedding')
parser.add_argument('--test-batch-size', type=int, default=64, metavar='BST',
help='input batch size for testing (default: 64)')
parser.add_argument('--test-input-per-file', type=int, default=1, metavar='IPFT',
help='input sample per file for testing (default: 8)')
parser.add_argument('--n-triplets', type=int, default=100000, metavar='N',
help='how many triplets will generate from the dataset')
parser.add_argument('--margin', type=float, default=0.1, metavar='MARGIN',
help='the margin value for the triplet loss function (default: 1.0')
parser.add_argument('--min-softmax-epoch', type=int, default=2, metavar='MINEPOCH',
help='minimum epoch for initial parameter using softmax (default: 2')
parser.add_argument('--loss-ratio', type=float, default=2.0, metavar='LOSSRATIO',
help='the ratio softmax loss - triplet loss (default: 2.0')
parser.add_argument('--lr', type=float, default=0.1, metavar='LR',
help='learning rate (default: 0.125)')
parser.add_argument('--lr-decay', default=1e-4, type=float, metavar='LRD',
help='learning rate decay ratio (default: 1e-4')
parser.add_argument('--wd', default=0.0, type=float,
metavar='W', help='weight decay (default: 0.0)')
parser.add_argument('--optimizer', default='adagrad', type=str,
metavar='OPT', help='The optimizer to use (default: Adagrad)')
# Device options
parser.add_argument('--no-cuda', action='store_true', default=False,
help='enables CUDA training')
parser.add_argument('--gpu-id', default='3', type=str,
help='id(s) for CUDA_VISIBLE_DEVICES')
parser.add_argument('--seed', type=int, default=0, metavar='S',
help='random seed (default: 0)')
parser.add_argument('--log-interval', type=int, default=1, metavar='LI',
help='how many batches to wait before logging training status')
# Spectrum feature options
parser.add_argument('--mfb', action='store_true', default=True,
help='start from MFB file')
parser.add_argument('--makemfb', action='store_true', default=False,
help='need to make mfb file')
args = parser.parse_args()
# set the device to use by setting CUDA_VISIBLE_DEVICES env variable in
# order to prevent any memory allocation on unused GPUs
os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu_id
args.cuda = not args.no_cuda and torch.cuda.is_available()
np.random.seed(args.seed)
if not os.path.exists(args.log_dir):
os.makedirs(args.log_dir)
if args.cuda:
cudnn.benchmark = True
CKP_DIR = args.model_path
EXT_DIR = args.extract_path
LOG_DIR = args.log_dir + '/extract_{}-n{}-lr{}-wd{}-m{}-embed{}-alpha10'\
.format(args.optimizer, args.n_triplets, args.lr, args.wd,
args.margin, args.embedding_size)
data_set_list = "Data/enroll_set.npy"
classes_to_label_list = "Data/enroll_classes.npy"
dataroot = args.dataroot
if not args.enroll:
dataroot = args.dataroot.replace("enroll", "test")
EXT_DIR = args.extract_path.replace("enroll", "test")
data_set_list = data_set_list.replace("enroll", "test")
classes_to_label_list = classes_to_label_list.replace("enroll", "test")
if not os.path.exists(EXT_DIR):
os.makedirs(EXT_DIR)
# create logger
logger = Logger(LOG_DIR)
kwargs = {'num_workers': 0, 'pin_memory': True} if args.cuda else {}
l2_dist = PairwiseDistance(2)
audio_set = []
audio_set = if_load_npy(dataroot, data_set_list)
if args.makemfb:
#pbar = tqdm(voxceleb)
for datum in audio_set:
# print(datum['filename'])
mk_MFB((datum['filename']+'.wav'))
print("Complete convert")
if args.mfb:
transform = transforms.Compose([
concateinputfromMFB(),
to4tensor()
# truncatedinputfromMFB(),
# totensor()
])
transform_T = transforms.Compose([
truncatedinputfromMFB(input_per_file=args.test_input_per_file),
totensor()
])
file_loader = read_MFB
else:
transform = transforms.Compose([
truncatedinput(),
toMFB(),
totensor(),
#tonormal()
])
file_loader = read_audio
enroll_dir = DeepSpeakerEnrollDataset(audio_set=audio_set, dir=args.dataroot, loader=file_loader, transform=transform, enroll=args.enroll)
classes_to_label = enroll_dir.class_to_idx
if not os.path.isfile(classes_to_label_list):
if not args.enroll:
classes_to_label = enroll_dir.uttid
np.save(classes_to_label_list, classes_to_label)
print("update the classes to labels list files.")
else:
# TODO: add new classes to the file
print("Classes to labels list files already existed!")
try:
qwer = enroll_dir.__getitem__(3)
except IndexError:
print("wav in enroll set is less than 3?")
del audio_set
# pdb.set_trace()
def main():
test_display_triplet_distance = False
# print the experiment configuration
print('\nparsed options:\n{}\n'.format(vars(args)))
print('\nNumber of Wav file:\n{}\n'.format(len(enroll_dir.indices)))
# instantiate model and initialize weights
model = ResSpeakerModel(embedding_size=args.embedding_size, resnet_size=10, num_classes=1211)
if args.cuda:
model.cuda()
optimizer = create_optimizer(model, args.lr)
# optionally resume from a checkpoint
if args.model_path:
if os.path.isfile(args.model_path):
print('=> loading checkpoint {}'.format(args.model_path))
checkpoint = torch.load(args.model_path, map_location='cpu')
args.start_epoch = checkpoint['epoch']
filtered = {k: v for k, v in checkpoint['state_dict'].items() if 'num_batches_tracked' not in k}
model.load_state_dict(filtered)
optimizer.load_state_dict(checkpoint['optimizer'])
else:
raise Exception('=> no checkpoint found at {}'.format(args.model_path))
# train_loader = torch.utils.data.DataLoader(train_dir, batch_size=args.batch_size, shuffle=False, **kwargs)
epoch = args.start_epoch
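    # Collate variable-length utterances into a plain Python list (plus a
    # label tensor) instead of stacking them, since their lengths may differ.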
def my_collate(batch):
data = [item[0] for item in batch]
target = [item[1] for item in batch]
target = torch.LongTensor(target)
return [data, target]
enroll_loader = torch.utils.data.DataLoader(enroll_dir, batch_size=args.test_batch_size, collate_fn=my_collate, shuffle=False, **kwargs)
#for epoch in range(start, end):
enroll(enroll_loader, model, epoch)
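# Runs the model over the whole set and saves (label, embedding) pairs to a .npy file.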
def enroll(enroll_loader, model, epoch):
# switch to evaluate mode
# pdb.set_trace()
model.eval()
labels, features = [], []
pbar = tqdm(enumerate(enroll_loader))
for batch_idx, (data_a, label) in pbar:
        # pdb.set_trace()  # debug breakpoint disabled so extraction can run unattended
current_sample = data_a.size(0)
data_a = data_a.resize_(args.test_input_per_file * current_sample, 1, data_a.size(2), data_a.size(3))
        if args.cuda:
            data_a = data_a.cuda()  # the trailing comma in the original wrapped the tensor in a tuple
data_a, label = Variable(data_a, volatile=True), Variable(label)
# compute output
out_a = model(data_a)
features.append(out_a)
if not args.enroll:
labels.append(label)
else:
labels.append(label.data.cpu().numpy())
if batch_idx % args.log_interval == 0:
pbar.set_description('{}: {} [{}/{} ({:.0f}%)]'.format(
"enroll" if args.enroll else "test",
epoch,
batch_idx * len(data_a),
len(enroll_loader.dataset),
100. * batch_idx / len(enroll_loader)))
print('Xvector extraction completed!')
feature_np = []
for tensors in features:
for tensor in tensors:
feature_np.append(tensor)
label_np = []
for label in labels:
for lab in label:
label_np.append(lab)
wav_dict = []
for index, label in enumerate(label_np):
wav_dict.append((label, feature_np[index]))
# wav_dict = dict(zip(label_np, feature_np))
np.save(EXT_DIR+'/extract_{}-lr{}-wd{}-embed{}-alpha10.npy'.format(args.optimizer, args.lr, args.wd, args.embedding_size), wav_dict)
logger.log_value('Extracted Num', len(features))
def create_optimizer(model, new_lr):
# setup optimizer
if args.optimizer == 'sgd':
optimizer = optim.SGD(model.parameters(), lr=new_lr,
momentum=0.9, dampening=0.9,
weight_decay=args.wd)
elif args.optimizer == 'adam':
optimizer = optim.Adam(model.parameters(), lr=new_lr,
weight_decay=args.wd)
elif args.optimizer == 'adagrad':
optimizer = optim.Adagrad(model.parameters(),
lr=new_lr,
lr_decay=args.lr_decay,
weight_decay=args.wd)
return optimizer
if __name__ == '__main__':
main()
| 36.609524 | 140 | 0.662071 | 1,497 | 11,532 | 4.921844 | 0.244489 | 0.026873 | 0.05076 | 0.012215 | 0.142372 | 0.059989 | 0.026873 | 0 | 0 | 0 | 0 | 0.010993 | 0.219043 | 11,532 | 314 | 141 | 36.726115 | 0.807129 | 0.096427 | 0 | 0.093023 | 0 | 0 | 0.179751 | 0.01811 | 0 | 0 | 0 | 0.003185 | 0 | 1 | 0.023256 | false | 0 | 0.083721 | 0 | 0.12093 | 0.037209 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bc786088bb1e893e458aa9949935032754f8858 | 1,760 | py | Python | src/lib/kombi/Template/procedures/mathProcedures.py | paulondc/chilopoda | 046dbb0c1b4ff20ea5f2e1679f8d89f3089b6aa4 | [
"MIT"
] | 2 | 2019-09-24T18:56:27.000Z | 2021-02-07T04:58:49.000Z | src/lib/kombi/Template/procedures/mathProcedures.py | paulondc/kombi | 046dbb0c1b4ff20ea5f2e1679f8d89f3089b6aa4 | [
"MIT"
] | 20 | 2019-02-16T04:21:13.000Z | 2019-03-09T21:21:21.000Z | src/lib/kombi/Template/procedures/mathProcedures.py | paulondc/kombi | 046dbb0c1b4ff20ea5f2e1679f8d89f3089b6aa4 | [
"MIT"
] | 3 | 2019-11-15T05:16:32.000Z | 2021-09-28T21:28:29.000Z | """
Basic math functions.
The arithmetic operations can be done directly through
the operator support. For instance:
(4 + 4) same as (sum 4 4)
"""
import operator
from ..Template import Template
def sumInt(*args):
"""
Sum (cast to integer).
"""
intArgs = __castToInt(*args)
return int(operator.add(
intArgs[0],
intArgs[1]
))
def subtractInt(*args):
"""
Subtraction (cast to integer).
"""
intArgs = __castToInt(*args)
return int(operator.sub(
intArgs[0],
intArgs[1]
))
def multiplyInt(*args):
"""
Multiply (cast to integer).
"""
intArgs = __castToInt(*args)
return int(operator.mul(
intArgs[0],
intArgs[1]
))
def divideInt(*args):
"""
Divide (cast to integer).
"""
intArgs = __castToInt(*args)
return int(operator.truediv(
intArgs[0],
intArgs[1]
))
def minimumInt(*args):
"""
Minimum (cast to integer).
"""
intArgs = __castToInt(*args)
return int(min(
intArgs[0],
intArgs[1]
))
def maximumInt(*args):
"""
Maximum (cast to integer).
"""
intArgs = __castToInt(*args)
return int(max(
intArgs[0],
intArgs[1]
))
def __castToInt(*args):
"""
Cast the input args to int.
"""
return list(map(int, args))
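# Register each function under the token used inside template expressions,
# e.g. (sum 4 4) or the equivalent operator form (4 + 4).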
# sum
Template.registerProcedure(
'sum',
sumInt
)
# subtraction
Template.registerProcedure(
'sub',
subtractInt
)
# multiply
Template.registerProcedure(
'mult',
multiplyInt
)
# divide
Template.registerProcedure(
'div',
divideInt
)
# minimum
Template.registerProcedure(
'min',
minimumInt
)
# maximum
Template.registerProcedure(
'max',
maximumInt
)
| 15.438596 | 54 | 0.584659 | 179 | 1,760 | 5.670391 | 0.312849 | 0.089655 | 0.076847 | 0.118227 | 0.392118 | 0.279803 | 0.279803 | 0.279803 | 0.197044 | 0 | 0 | 0.012668 | 0.282386 | 1,760 | 113 | 55 | 15.575221 | 0.790974 | 0.214773 | 0 | 0.46875 | 0 | 0 | 0.015032 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.109375 | false | 0 | 0.03125 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bc857abeeec95d09a9ac687261d090c0e95f042 | 878 | py | Python | tests/template_tests/test_base.py | shinshin86/django | 5cc81cd9eb69f5f7a711412c02039b435c393135 | [
"PSF-2.0",
"BSD-3-Clause"
] | 2 | 2020-11-04T06:26:42.000Z | 2021-01-17T19:29:52.000Z | tests/template_tests/test_base.py | Blaahborgh/django | c591bc3ccece1514d6b419826c7fa36ada9d9213 | [
"PSF-2.0",
"BSD-3-Clause"
] | 11 | 2020-03-24T15:46:05.000Z | 2022-03-11T23:20:58.000Z | tests/template_tests/test_base.py | Blaahborgh/django | c591bc3ccece1514d6b419826c7fa36ada9d9213 | [
"PSF-2.0",
"BSD-3-Clause"
] | 2 | 2018-01-08T08:14:29.000Z | 2020-11-04T08:46:29.000Z | from django.template.base import Variable, VariableDoesNotExist
from django.test import SimpleTestCase
class VariableDoesNotExistTests(SimpleTestCase):
def test_str(self):
exc = VariableDoesNotExist(msg='Failed lookup in %r', params=({'foo': 'bar'},))
self.assertEqual(str(exc), "Failed lookup in {'foo': 'bar'}")
class VariableTests(SimpleTestCase):
def test_integer_literals(self):
self.assertEqual(Variable('999999999999999999999999999').literal, 999999999999999999999999999)
def test_nonliterals(self):
"""Variable names that aren't resolved as literals."""
var_names = []
for var in ('inf', 'infinity', 'iNFiniTy', 'nan'):
var_names.extend((var, '-' + var, '+' + var))
for var in var_names:
with self.subTest(var=var):
self.assertIsNone(Variable(var).literal)
| 38.173913 | 102 | 0.666287 | 94 | 878 | 6.148936 | 0.478723 | 0.036332 | 0.072664 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.077253 | 0.203872 | 878 | 22 | 103 | 39.909091 | 0.749642 | 0.05467 | 0 | 0 | 0 | 0 | 0.129854 | 0.032767 | 0 | 0 | 0 | 0 | 0.1875 | 1 | 0.1875 | false | 0 | 0.125 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bcc11ffd7252b74dbd127d865f82485a1c9230c | 1,100 | py | Python | setup.py | gabrielferreira/djangoreport | 6cad4d6c8a2b7965c140300172686e227ef6037d | [
"Apache-2.0"
] | 1 | 2016-04-14T15:40:40.000Z | 2016-04-14T15:40:40.000Z | setup.py | gabrielferreira/djangoreport | 6cad4d6c8a2b7965c140300172686e227ef6037d | [
"Apache-2.0"
] | null | null | null | setup.py | gabrielferreira/djangoreport | 6cad4d6c8a2b7965c140300172686e227ef6037d | [
"Apache-2.0"
] | null | null | null | from setuptools import setup, find_packages
import os
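# Read the version from the package itself so it is defined in a single place.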
version = __import__('djangoreport').__version__
def read(fname):
# read the contents of a text file
return open(os.path.join(os.path.dirname(__file__), fname)).read()
setup(
name = "djangoreport",
version = version,
url = 'http://github.com/',
license = '',
platforms=['OS Independent'],
description = "",
long_description = read('README.md'),
author = '',
author_email = '',
packages=find_packages(),
install_requires = (
'Django==1.5.2',
'South==0.8.2',
'wsgiref==0.1.2',
),
include_package_data=True,
zip_safe=False,
classifiers = [
        'Development Status :: 3 - Alpha',
'Framework :: Django',
'Intended Audience :: Developers',
# 'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Topic :: Internet :: WWW/HTTP',
],
test_suite='setuptest.setuptest.SetupTestSuite',
tests_require=(
'django-setuptest',
),
) | 26.829268 | 70 | 0.598182 | 114 | 1,100 | 5.578947 | 0.692982 | 0.037736 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012165 | 0.252727 | 1,100 | 41 | 71 | 26.829268 | 0.761557 | 0.067273 | 0 | 0.055556 | 0 | 0 | 0.320313 | 0.033203 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.083333 | 0.027778 | 0.138889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bcc3c43a71ad6b6a3d4a7e769437d945611b3d2 | 3,452 | py | Python | property_prediction/nn_regression.py | acceleratedmaterials/AMDworkshop_demo | e7c2b931e023fc00ff7494b8acb2181f5c75bc4e | [
"MIT"
] | 5 | 2019-04-02T03:20:43.000Z | 2021-07-13T18:23:26.000Z | property_prediction/nn_regression.py | NUS-SSE/AMDworkshop_demo | edbd6c60957dd0d83c3ef43c7e9e28ef1fef3bd9 | [
"MIT"
] | null | null | null | property_prediction/nn_regression.py | NUS-SSE/AMDworkshop_demo | edbd6c60957dd0d83c3ef43c7e9e28ef1fef3bd9 | [
"MIT"
] | 5 | 2019-05-12T17:41:58.000Z | 2021-06-08T04:38:35.000Z | '''
File: nn_regression.py
Project: ML_workshop
File Created: Monday, 13th August 2018 12:48:42 am
Author: Qianxiao Li (liqix@ihpc.a-star.edu.sg)
-----
Copyright - 2018 Qianxiao Li, IHPC, A*STAR
License: MIT License
'''
import numpy as np
import pandas as pd
import logging
import utils
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPRegressor
if __name__ == "__main__":
# Logging
logging.basicConfig()
logging.getLogger().setLevel(logging.INFO)
# Set random seed
np.random.seed(0)
# Load data and do train-test Split
df = pd.read_excel('./data/Concrete_Data.xls', sheet_name='Sheet1')
X, y = df[df.columns[:-1]], df[df.columns[-1]]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2
)
# Scale inputs
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
# MLP regressor fit
regressor = MLPRegressor(
hidden_layer_sizes=[256, 128, 64], max_iter=1000)
regressor.fit(X_train_scaled, y_train)
y_hat_train = regressor.predict(X_train_scaled)
y_hat_test = regressor.predict(X_test_scaled)
# Plot predictions
utils.plot_predictions(
y=[y_train, y_test],
y_hat=[y_hat_train, y_hat_test],
labels=['Train', 'Test'],
save_path='./nn_fit.pdf')
# ############################ #
# Illustration of over-fitting #
# ############################ #
# Data split with validation
X_train, X_valid, y_train, y_valid = train_test_split(
X_train, y_train, test_size=0.1
)
scaler = MinMaxScaler()
scaler.fit(X_train)
# MLP regressor fit
# For demo purposes, we are going to fit the NN on a smaller dataset
# instead of going for a much bigger network (and training for a long time)
X_train, y_train = X_train[:50], y_train[:50]
X_train_scaled, X_valid_scaled, X_test_scaled = \
map(scaler.transform, [X_train, X_valid, X_test])
regressor = MLPRegressor(
hidden_layer_sizes=[256, 128, 64], max_iter=100, warm_start=True,
solver='sgd', alpha=0, momentum=0,
learning_rate='adaptive', learning_rate_init=1e-3)
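    # warm_start=True keeps the learned weights between fit() calls, so each
    # call to fit() in the loop below continues training from where the
    # previous call stopped (at most another max_iter=100 epochs per call)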
# Train and log the losses
n_iter = 50
train_losses, valid_losses, test_losses = [], [], []
for n in range(n_iter):
regressor.fit(X_train_scaled, y_train)
train_losses.append(
utils.rmse(regressor.predict(X_train_scaled), y_train))
valid_losses.append(
utils.rmse(regressor.predict(X_valid_scaled), y_valid))
test_losses.append(
utils.rmse(regressor.predict(X_test_scaled), y_test))
logging.info(
'Iteration %d | Train loss %.3f | Valid loss %.3f | Test loss %.3f'
% (n, train_losses[-1], valid_losses[-1], test_losses[-1]))
y_hat_train = regressor.predict(X_train_scaled)
y_hat_test = regressor.predict(X_test_scaled)
# Plot predictions
utils.plot_predictions(
y=[y_train, y_test],
y_hat=[y_hat_train, y_hat_test],
labels=['Train', 'Test'],
save_path='./nn_overfit.pdf')
# Plot training curves
utils.plot_training_curves(
losses=[train_losses, valid_losses, test_losses],
labels=['Train', 'Validation', 'Test'],
save_path='./nn_overfit_training_curves.pdf')
| 32.566038 | 79 | 0.65701 | 485 | 3,452 | 4.397938 | 0.317526 | 0.042194 | 0.039381 | 0.030474 | 0.340835 | 0.320675 | 0.285045 | 0.203469 | 0.203469 | 0.203469 | 0 | 0.023264 | 0.215527 | 3,452 | 105 | 80 | 32.87619 | 0.764402 | 0.174102 | 0 | 0.276923 | 0 | 0 | 0.076201 | 0.020224 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.107692 | 0 | 0.107692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bcfdebb75bf5824a15dbcba5b0c86ab98d8b157 | 6,326 | py | Python | neurodocker/reprozip/merge.py | AndysWorth/neurodocker | b4e5f470e5b883bd4aa8a3f20a0dde197b79b8dc | [
"Apache-2.0"
] | null | null | null | neurodocker/reprozip/merge.py | AndysWorth/neurodocker | b4e5f470e5b883bd4aa8a3f20a0dde197b79b8dc | [
"Apache-2.0"
] | null | null | null | neurodocker/reprozip/merge.py | AndysWorth/neurodocker | b4e5f470e5b883bd4aa8a3f20a0dde197b79b8dc | [
"Apache-2.0"
] | null | null | null | """Merge multiple ReproZip version 2 pack files.
This implementation makes several assumptions about the ReproZip traces. Please
do not use this as a general pack file merger.
Important note
--------------
The output config.yml file is not created by merging the original config.yml
files. This file (created with `reprozip combine`) will inherit traits of the
machine on which it is running. Specifically, the architecture, distribution,
hostname, system, group ID, and user ID will all come from the local machine
and will not be taken from the original config.yml files. This script modifies
the combined config.yml file to use the distribution from the first run of the
first config.yml file.
Assumptions
-----------
- Pack files are version 2.
- All traces were run on the same distribution (e.g., debian stretch).
- If the same files exist in different traces, the contents of those files are
identical.
"""
from glob import glob
import logging
import os
import tarfile
import tempfile
logger = logging.getLogger(__name__)
def _check_deps():
"""Raise RuntimeError if a dependency is not found. These dependencies are
not included in `requirements.txt`.
"""
import shutil
msg = "Dependency '{}' not found."
if shutil.which("rsync") is None:
raise RuntimeError(msg.format("rsync"))
try:
import reprozip
except Exception:
raise RuntimeError(msg.format("reprozip"))
def _extract_rpz(rpz_path, out_dir):
"""Unpack .rpz file (tar archive) and the DATA.tar.gz file inside it."""
basename = os.path.basename(rpz_path)
prefix = "{}-".format(basename)
path = tempfile.mkdtemp(prefix=prefix, dir=out_dir)
with tarfile.open(rpz_path, "r:*") as tar:
tar.extractall(path)
data_path = os.path.join(path, "DATA.tar.gz")
with tarfile.open(data_path, "r:*") as tar:
tar.extractall(path)
def _merge_data_dirs(data_dirs, merged_dest):
"""Merge data directories using `rsync`, and tar.gz the output."""
import subprocess
tmp_dest = tempfile.mkdtemp(prefix="reprozip-data")
data_dirs = " ".join(data_dirs)
merge_cmd = "rsync -rqabuP {srcs} {dest}" "".format(srcs=data_dirs, dest=tmp_dest)
subprocess.run(merge_cmd, shell=True, check=True)
data_tar = os.path.join(merged_dest, "DATA.tar.gz")
with tarfile.open(data_tar, "w:gz") as tar:
tar.add(tmp_dest, arcname="")
def _get_distribution(filepath):
"""Return Linux distribution from the first run of a config.yml file."""
import yaml
with open(filepath, "r") as fp:
        config = yaml.safe_load(fp)  # safe_load avoids constructing arbitrary Python objects
return config["runs"][0]["distribution"]
def _fix_config_yml(filepath, distribution):
"""Comment out 'additional_patterns', and replace the distribution of the
local machine with `distribution`.
"""
with open(filepath) as fp:
config = fp.readlines()
for ii, line in enumerate(config):
if line.startswith("additional_patterns"):
config[ii] = "# " + line
if "distribution:" in line:
pre = line.split(":")[0]
config[ii] = "{}: {}\n".format(pre, distribution)
with open(filepath, "w") as fp:
for line in config:
fp.write(line)
class _Namespace:
# Replicates argparse namespace.
# https://stackoverflow.com/a/28345836/5666087
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
def _combine_traces(traces, out_dir):
"""Run `reprozip combine` to combine trace databases and create new
config.yml file.
Important note: the config.yml file lists the local machine's architecture,
distribution, hostname and system, and the current group id and user id.
For best results, this should be run on the same machine as the traces.
"""
from reprozip.main import combine
args = _Namespace(
traces=traces, dir=out_dir, identify_packages=False, find_inputs_outputs=False
)
combine(args)
original_config = os.path.join(os.path.dirname(traces[0]), "config.yml")
distribution = _get_distribution(original_config)
config_filepath = os.path.join(out_dir, "config.yml")
_fix_config_yml(config_filepath, distribution)
def _write_version2_file(merged_dest):
path = os.path.join(merged_dest, "METADATA", "version")
with open(path, "w") as fp:
fp.write("REPROZIP VERSION 2\n")
def _create_rpz(path, outfile):
"""Create a .rpz file from a `path` that contains METADATA and DATA.tar.gz.
"""
data = os.path.join(path, "DATA.tar.gz")
metadata = os.path.join(path, "METADATA")
with tarfile.open(outfile, "w:") as tar:
tar.add(data, arcname="DATA.tar.gz")
tar.add(metadata, arcname="METADATA")
def merge_pack_files(outfile, packfiles):
"""Merge reprozip version 2 pack files.
This implementation has limitations. It uses rsync to merge the directories
in different reprozip pack files, and does not take into account that files
might have the same name but different contents.
"""
if len(packfiles) < 2:
raise ValueError("At least two packfiles are required.")
_check_deps()
if not outfile.endswith(".rpz"):
logger.info("Adding '.rpz' extension to output file.")
outfile += ".rpz"
for pf in packfiles:
if not os.path.isfile(pf):
raise ValueError("File not found: {}".format(pf))
tmp_dest = tempfile.mkdtemp(prefix="neurodocker-reprozip-merge-")
merged_dest = os.path.join(tmp_dest, "merged")
merged_dest_metadata = os.path.join(merged_dest, "METADATA")
os.makedirs(merged_dest_metadata)
for this_rpz in packfiles:
logger.info("Extracting {}".format(this_rpz))
_extract_rpz(this_rpz, tmp_dest)
logger.info("Merging DATA directories")
data_dirs_pattern = os.path.abspath(os.path.join(tmp_dest, "**", "DATA"))
data_dirs = glob(data_dirs_pattern)
_merge_data_dirs(data_dirs, merged_dest)
logger.info("Merging traces and creating new config.yml")
traces_pattern = os.path.join(tmp_dest, "**", "METADATA", "trace.sqlite3")
traces = glob(traces_pattern)
_combine_traces(traces=traces, out_dir=merged_dest_metadata)
_write_version2_file(merged_dest)
logger.info("Creating merged pack file")
_create_rpz(merged_dest, outfile)
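# Minimal usage sketch (hypothetical file names, not from this repository):
#     merge_pack_files("combined.rpz", ["trace_a.rpz", "trace_b.rpz"])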
| 32.777202 | 86 | 0.688903 | 876 | 6,326 | 4.842466 | 0.268265 | 0.021216 | 0.025931 | 0.009901 | 0.160302 | 0.11009 | 0.069543 | 0 | 0 | 0 | 0 | 0.005125 | 0.198071 | 6,326 | 192 | 87 | 32.947917 | 0.831066 | 0.319001 | 0 | 0.019802 | 0 | 0 | 0.131304 | 0.006422 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09901 | false | 0 | 0.09901 | 0 | 0.217822 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bd00d87b72a513c2821158a775d5f60ae37e0ed | 667 | py | Python | Conditions/exercicio 7.py | SkaarlK/Learning-Python | bbf011182fb5bf876aa9a274400c41a266a0e8c7 | [
"MIT"
] | 2 | 2022-01-01T19:31:56.000Z | 2022-01-01T19:32:54.000Z | Conditions/exercicio 7.py | SkaarlK/Learning-Python | bbf011182fb5bf876aa9a274400c41a266a0e8c7 | [
"MIT"
] | null | null | null | Conditions/exercicio 7.py | SkaarlK/Learning-Python | bbf011182fb5bf876aa9a274400c41a266a0e8c7 | [
"MIT"
] | null | null | null | categoria = int(input("Digite a categoria do produto: "))
if categoria == 1:
preco = 10
elif categoria == 2:
preco = 18
elif categoria == 3:
preco = 23
elif categoria == 4:
preco = 26
elif categoria == 5:
    preco = 31
else:
print("Categoria inválida, digite um valor entre 1 e 5!")
preco = 0
print("O preço do produto é: R$%.2f" % preco)
print("Ta batendo com a tabela sim, mas o programa que veio na pasta estava dando erro de indentação, então, resolvi refatorar para ficar mais legível.")
print("Também é possível utilizar Case Switch... é até mais aconselhável por serem valores constantes, mas quis manter a essência da atividade.")
| 33.35 | 153 | 0.695652 | 103 | 667 | 4.504854 | 0.699029 | 0.112069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036053 | 0.209895 | 667 | 19 | 154 | 35.105263 | 0.844402 | 0 | 0 | 0 | 0 | 0.117647 | 0.58021 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.235294 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bd2489d8e29e3c3c4f90a233246e490177932c0 | 1,916 | py | Python | dsbasic/frame/preprocessing/impute.py | liordanon/dsbasic | 9d1706e9032b7e1cfe299f5430e864247ca4903a | [
"MIT"
] | null | null | null | dsbasic/frame/preprocessing/impute.py | liordanon/dsbasic | 9d1706e9032b7e1cfe299f5430e864247ca4903a | [
"MIT"
] | null | null | null | dsbasic/frame/preprocessing/impute.py | liordanon/dsbasic | 9d1706e9032b7e1cfe299f5430e864247ca4903a | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
from sklearn.base import TransformerMixin, BaseEstimator
from ...utils import check_type, check_has_columns
class fImputer(BaseEstimator, TransformerMixin):
"""
sklearn style imputer for DataFrames and Series objects.
"""
def __init__(self, strategy='mean', copy=True, na_sentinel=-1, columns=None):
self.strategy = strategy
self.copy = copy
self.na_sentinel = na_sentinel
self.columns = columns
def fit(self, X, y=None):
# ensuring validity of strategy
strategy = self.strategy.lower()
options = ['mean', 'median', 'most_frequent', 'na_sentinel']
if strategy not in options:
raise ValueError('strategy must be one of : ' + str(options))
        # check that X is either a DataFrame or a Series
check_type(X, types=[pd.DataFrame, pd.Series])
# choose columns to impute
if self.columns is None:
self.columns_ = X.columns.tolist()
else:
self.columns_ = self.columns
        # choose the value(s) to fill NaN entries with
if strategy == 'mean':
self.fill_ = X.mean()
elif strategy == 'median':
self.fill_ = X.median()
elif strategy == 'most_frequent':
self.fill_ = X.mode().iloc[0]
else:
            self.fill_ = {column: self.na_sentinel for column in self.columns_}
return self
def transform(self, X):
# check X has all columns to be imputed
check_has_columns(X, self.columns_)
        # check that X is either a DataFrame or a Series
        check_type(X, types=[pd.DataFrame, pd.Series])
        # get the imputed DataFrame/Series
        result = X.fillna(self.fill_, inplace=not self.copy)
if result is None:
result = X
return result
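# Minimal usage sketch (assumes a pandas DataFrame `df` with numeric columns):
#     imputer = fImputer(strategy='median')
#     df_imputed = imputer.fit_transform(df)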
| 29.476923 | 82 | 0.590814 | 230 | 1,916 | 4.804348 | 0.382609 | 0.069683 | 0.024434 | 0.030769 | 0.124887 | 0.124887 | 0.124887 | 0.124887 | 0.124887 | 0.124887 | 0 | 0.001535 | 0.319937 | 1,916 | 64 | 83 | 29.9375 | 0.846508 | 0.15762 | 0 | 0.111111 | 0 | 0 | 0.057049 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.111111 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bd6ee35b1988c852d7525501a8a59c39d07ba1a | 45,133 | py | Python | src/bobcatlib/preprocessor.py | bronger/bobcat | 93e1cc88069001268824bc832490fd8db178848c | [
"MIT"
] | null | null | null | src/bobcatlib/preprocessor.py | bronger/bobcat | 93e1cc88069001268824bc832490fd8db178848c | [
"MIT"
] | null | null | null | src/bobcatlib/preprocessor.py | bronger/bobcat | 93e1cc88069001268824bc832490fd8db178848c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright © 2007, 2008 Torsten Bronger <bronger@physik.rwth-aachen.de>
#
# This file is part of the Bobcat program.
#
# Bobcat is free software; you can use it, redistribute it and/or modify it
# under the terms of the MIT license.
#
# You should have received a copy of the MIT license with Bobcat. If not,
# see <http://bobcat.origo.ethz.ch/wiki/Licence>.
#
"""The preprocessor of Bobcat source files.
Its main purpose is twofold: first, it converts character sequences to single
Unicode characters; second, it keeps track of the origins of the
preprocessed text, so that in case of parsing errors the user can be told where
exactly the error occurred in the source document.
It achieves this by one fat unicode-like data type called `Excerpt`.
"""
import re, os.path, codecs, string, warnings
from . import common
from .common import FileError, EncodingError, PositionMarker
class Excerpt(unicode):
"""Class for preprocessed Bobcat source text. It behaves like a unicode string
with extra methods and attributes.
The typical lifecycle of such an object is as follows:
    1. The Bobcat source text is read from the file (or wherever) and stored as
one big unicode string.
2. This unicode string is used to create an Excerpt instance from it. In
order to do this, the pre input method rules are applied.
3. This excerpt is send to the parser that divides it into smaller and
smaller excerpts, parsing it recursively while building the parse tree.
4. When parsing is finished, the post input method is applied (which is
usually *much* smaller than the pre method).
5. Now, the excerpts are given to the routines of the backend, which
process them further, convert them to unicodes, and write them to the
output.
:cvar entity_pattern: Regexp pattern for numerical entities like
``\\0x0207;`` or ``\\#8022;``.
:type entity_pattern: re.pattern
:ivar escaped_positions: the indices of all characters in the Excerpt which
were escaped in the original input. Note that this is a set which is not
ordered.
:ivar code_snippets_intervals: all start--end tuples of index ranges in the
Excerpt which contain code snippets, so that they have to be treated as
escaped. Note that they must be in ascending order of the start
indices. Actually, this could also be called ``escaped_intervals``
because it could be substituted with many equivalent entries in
`escaped_positions`. However, for performance reasons, code snippets are
stored in this start--end form. Otherwise, `escaped_positions` would be
cluttered up with too many subsequent entries.
:ivar original_positions: maps indices in the Excerpt to position markers
that point to the actual origin of this index in the Excerpt.
:ivar original_text: the original unicode string this Excerpt stems from
:ivar __post_substitutions: the substitutions for the post input method.
They are stored here for eventual use in `apply_post_input_method`.
:ivar __escaped_text: the unicode equivalent of the Excerpt, with all
escaped characters and characters of code snippets replaced with NULL
characters. It is a cache used in `escaped_text`.
:type escaped_positions: set of int
:type code_snippets_intervals: list of (int, int)
:type original_positions: list of `common.PositionMarker`
:type original_text: unicode
:type __post_substitutions: list of (re.pattern, unicode)
:type __escaped_text: unicode
"""
# FixMe: The following pylint directive is necessary because astng doesn't
# parse attribute settings in the __new__ classmethod. If this changes or
# if a workaround is found, this directive should be removed in order to
# find real errors.
#
# pylint: disable-msg=E1101
entity_pattern = re.compile(r"((0x(?P<hex>[0-9a-fA-F]+))|(#(?P<dec>[0-9]+)));")
whitespace_pattern = re.compile(r"(\A\s+)|(\s+\Z)|(\s{2,})|([\t\n\r\f\v])")
@classmethod
def get_next_match(cls, original_text, substitutions, offset=0):
"""Return the next input method match in `original_text`. The search
starts at `offset`.
:Parameters:
- `original_text`: the original line in the Bobcat input file
- `substitutions`: the substitution dictionary to be used
- `offset`: starting position for the search in original_text
:type original_text: unicode
:type substitutions: list with the (match, replacement) tuples
:type offset: int
:Return:
the position of the found match, the length of the match, and the
replacement for this match (a single character). If no match was
found, it's len(original_text), 0, None instead
:rtype: int, int, unicode
"""
earliest_match_position = len(original_text)
longest_match_length = 0
best_match = None
for substitution in substitutions:
match = substitution[0].search(original_text, offset)
if match and match.group().count("\r") + match.group().count("\n") == 0:
start, end = match.span()
if start == earliest_match_position:
if end - start > longest_match_length:
longest_match_length = end - start
best_match = match
replacement = substitution[1]
elif start < earliest_match_position:
earliest_match_position = start
longest_match_length = end - start
best_match = match
replacement = substitution[1]
if not best_match or not best_match.group():
return len(original_text), 0, None
return best_match.start(), best_match.end() - best_match.start(), replacement
def is_escaped(self, position):
"""Return True, if the character at position is escaped.
:Parameters:
- `position`: the position or interval in the Excerpt
:type position: int or (int, int)
:Return:
whether or not the character at `position` is escaped. If an
interval was given, whether or not at least one character in the
interval is escaped.
:rtype: boolean
"""
if isinstance(position, (list, tuple)):
part = self.escaped_text()[position[0]:position[1]]
else:
part = self.escaped_text()[position]
return u"\u0000" in part
def escaped_text(self):
"""Returns the unicode representation of the Excerpt with all escaped
characters replaced with Null characters.
:Return:
the unicode representation of the Excerpt with all escaped characters
replaced with Null characters
:rtype: unicode
"""
# pylint: disable-msg=E0203, W0201
if self.__escaped_text is None:
text = list(unicode(self))
for pos in self.escaped_positions:
text[pos] = u"\u0000"
for start, end in self.code_snippets_intervals:
text[start:end] = (end-start) * u"\u0000"
self.__escaped_text = u"".join(text)
return self.__escaped_text
def original_position(self, position=0):
"""Maps a position within the excerpt to the position in the original
file.
:Parameters:
- `position`: the position in the excerpt to which this method
belongs. Note that len(self) is an allowed value for `position`, in
order to get the original span of the whole string.
:type position: int
:Return:
the Position the given character originates from. This includes url
(filename), linenumber, and column. If the Excerpt was empty, the
position of the following character in the original file is
returned.
:rtype: PositionMarker
:Exceptions:
- `IndexError`: if a position was requested which lies outside the
line.
"""
length = len(self)
if not 0 <= position <= length:
raise IndexError("invalid value %d for "
"position in original_position near line %d of file %s" %
(position, self.original_positions[0].linenumber,
self.original_positions[0].url))
closest_position = max([pos for pos in self.original_positions if pos <= position])
offset = position - closest_position
closest_marker = self.original_positions[closest_position].transpose(offset)
closest_marker.column += offset
return closest_marker
def split(self, split_characters=None):
"""Splits the Excerpt like Python's split() string method does. If no
argument is given, it splits at whitespace (just as the string method).
Important note: Escaped characters are not regarded as split
characters.
:Parameters:
- `split_characters`: a string containing all characters that divide
the parts that should be created
:type split_characters: unicode
:Return:
a list with all parts in which the Excerpt was split up
:rtype: list of `Excerpt`
"""
parts = []
characters = split_characters if split_characters is not None else u" \t\v\n\r"
for match in re.finditer(u"[^" + re.escape(characters) + "]*",
self.escaped_text(), re.UNICODE):
start, end = match.span()
if start == end:
# Match was empty; then it is ignored, unless at the beginning
# and the end.
if split_characters is None or \
(start != 0 and start != len(self.escaped_text())):
continue
parts.append(self[start:end])
return parts
def normalize_whitespace(self):
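        """Collapse each internal run of whitespace to a single space and
        strip leading and trailing whitespace, returning a new Excerpt with
        the position tracking preserved."""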
def add_part(result, new_part):
return result + new_part if result else new_part
result = None
unicode_representation = unicode(self)
whitespace_match = self.whitespace_pattern.match(unicode_representation)
if whitespace_match:
current_position = start = whitespace_match.end()
else:
current_position = start = 0
while current_position < len(unicode_representation):
whitespace_match = self.whitespace_pattern.search(unicode_representation, current_position)
if whitespace_match:
current_position = whitespace_match.end()
if whitespace_match.end() == len(unicode_representation):
result = add_part(result, self[start:whitespace_match.start()])
break
elif whitespace_match.group().startswith(u" "):
result = add_part(result, self[start:whitespace_match.start()+1])
start = current_position
elif whitespace_match.group().endswith(u" "):
result = add_part(result, self[start:whitespace_match.start()])
start = current_position - 1
else:
result = add_part(result, self[start:whitespace_match.start()])
result = add_part(result, u" ")
start = current_position
else:
result = add_part(result, self[start:len(unicode_representation)])
break
return result or self[:0]
class Status(object):
"""A mere container for some immutable data structures used in the pre-
and postprocessing.
The only reason for its existence is that the "nonlocal" statement is
not yet implemented in Python. Therefore, I need a mutable data type
in order to use side effects in the local functions in
apply_pre_input_method() and apply_post_input_method(). It's not nice,
but the alternatives are even uglier. BTDT.
To sum it up, Status holds (part of) the current status of the
pre/postprocessor.
:ivar linenumber: current linenumber in the source file
        :ivar last_linestart: position of the last character that started a
line
:ivar position: current position in the source file
:ivar processed_text: the so far already preprocessed text
        :ivar in_sourcecode: are we in source code enclosed by triple
backquotes?
:type linenumber: int
:type last_linestart: int
:type position: int
:type processed_text: unicode
:type in_sourcecode: bool
"""
def __init__(self):
self.linenumber = 0
self.last_linestart = 0
self.position = 0
self.processed_text = u""
self.in_sourcecode = False
@classmethod
def apply_pre_input_method(cls, original_text, url, pre_substitutions):
"""This class method transforms the pristine line `original_text` into
        a processed line, especially by applying the substitutions of the pre(!)
input method.
:Parameters:
- `original_text`: the original text from a Bobcat source file
- `url`: URL of the original ressource file
- `pre_substitutions`: substitution list of the pre input method
:type original_text: unicode
:type url: str
:type pre_substitutions: list with the (match, replacement) tuples
:Return:
- the processed line
- original positions as a dict of position -- (filename, linenumber,
column). The linenumber starts at 1, the column at 0.
- positions of escaped characters as a list of positions
- intervals of source code snippets
:rtype: unicode, dict, list, list
"""
comment_line_pattern = re.compile(r"^\.\.( .*)?$", re.MULTILINE)
# The following functions seem to violate an important programming
# rule: They modify variables of the outer scope, i.e. the enclosing
# function (side effects). However, they are simple to explain and
# they do simple things, so this approach leads to easy-to-comprehend
# code. Therefore, I make an exception to that rule.
def drop_characters(number_of_characters):
"""This routine must be called immediately after the character is
being added to `preprocessed_text` and the current `position`
points to the next character in `original_text` to be processed.
Thus, this is a mere synchronisation between `original_text` and
`preprocessed_text`.
If you pass 0 for `number_of_characters`, you can use this function
for just adding a `PositionMarker` without dropping anything. This
is used in `resync_at_linestart`.
"""
s.position += number_of_characters
# Now re-sync
original_positions[len(s.processed_text)] = \
PositionMarker(url, s.linenumber, s.position - s.last_linestart, s.position)
def escape_next_character():
"""Mark the next character that is to be added to the processed
text as being escaped."""
escaped_positions.add(len(s.processed_text))
def copy_character(char=None):
"""Beware: It is only allowed that `char` is one single
character. In particular, s.processed_text must consist only of
single characters, otherwise len(s.processed_text) would yield a
wrong string length!
"""
            if char is None:
char = current_char
assert len(current_char) == 1
s.processed_text.append(char)
s.position += 1
def resync_at_linestart():
"""For the sake of performance, it is tried to keep the distance
between two `PositionMarkers` small. In particular, a re-sync is
done at each start of a new line. This is done here."""
s.linenumber += 1
s.last_linestart = s.position
comment_match = comment_line_pattern.match(original_text, s.position)
if comment_match:
# Drop comment lines
drop_characters(comment_match.end() - comment_match.start())
else:
drop_characters(0)
s = Excerpt.Status()
# For performance reasons, I use a list of unicode strings rather than
# a unicode string for the result string. Before returning it, I will
        # concatenate all list elements into one string. This is really much
# faster for strings longer than a couple of 10k.
s.processed_text = []
original_positions = {}
escaped_positions = set()
code_snippets_intervals = []
deferred_escape = False
resync_at_linestart()
        # For the sake of performance, I don't test every character's position
# for input method matches, but look for the next upcoming match and
# store it.
next_match_position, next_match_length, replacement = \
cls.get_next_match(original_text, pre_substitutions)
# Next comes the Big While which crawls through the whole source code
# and preprocesses it.
while s.position < len(original_text):
current_char = original_text[s.position]
if current_char in string.whitespace:
if deferred_escape and current_char in " \t":
# drop the tab or space
drop_characters(1)
elif current_char in "\n\r":
if original_text[s.last_linestart:s.position].strip() == "":
deferred_escape = False
# Here, I normalize all line endings to "\n". In order to
# avoid generating a new position marker, I convert \r if
# followed by a \n to a space (so this may generate
# trailing spaces).
if current_char == "\r" and original_text[s.position+1:s.position+2] == "\n":
copy_character(" ")
copy_character("\n")
# For performance, make a sync at every linestart
resync_at_linestart()
else:
copy_character()
continue
if s.in_sourcecode:
if original_text[s.position:s.position+3] == "```":
code_snippets_intervals[-1] = \
(code_snippets_intervals[-1], len(s.processed_text))
copy_character()
copy_character()
copy_character()
s.in_sourcecode = False
elif original_text[s.position:s.position+2] == r"\`":
drop_characters(2)
next_character = original_text[s.position+2:s.position+3]
if next_character:
escape_next_character()
copy_character(next_character)
else:
copy_character()
continue
if current_char == "\\":
if original_text[s.position+1:s.position+2] == "\\":
if deferred_escape:
escape_next_character()
deferred_escape = False
copy_character()
drop_characters(1)
deferred_escape = False
continue
entity_match = cls.entity_pattern.match(original_text, s.position+1)
if entity_match:
if entity_match.group("hex"):
char = unichr(int(entity_match.group("hex"), 16))
elif entity_match.group("dec"):
char = unichr(int(entity_match.group("dec")))
if deferred_escape:
escape_next_character()
deferred_escape = False
copy_character(char)
drop_characters(entity_match.end() - entity_match.start())
continue
if (current_char == "[" and original_text[s.position+1:s.position+2] == "[") or \
(current_char == "]" and original_text[s.position+1:s.position+2] == "]"):
escape_next_character()
copy_character()
drop_characters(1)
deferred_escape = False
continue
if s.position > next_match_position:
# I must update the next match
next_match_position, next_match_length, replacement = \
cls.get_next_match(original_text, pre_substitutions, s.position)
if s.position == next_match_position:
if deferred_escape:
escape_next_character()
copy_character(replacement)
drop_characters(next_match_length - 1)
deferred_escape = False
continue
if current_char == "\\":
deferred_escape = False
next_character = original_text[s.position+1:s.position+2]
if s.position + 1 == next_match_position:
drop_characters(1)
copy_character(next_character)
elif next_character and (next_character not in string.whitespace):
escape_next_character()
drop_characters(1)
copy_character(next_character)
else:
drop_characters(1)
deferred_escape = True
continue
if current_char == "`" and original_text[s.position+1:s.position+3] == "``" \
and not deferred_escape:
s.in_sourcecode = True
copy_character()
copy_character()
copy_character()
code_snippets_intervals.append(len(s.processed_text))
continue
# Now for the usual case of an ordinary character
if deferred_escape:
escape_next_character()
deferred_escape = False
copy_character()
if s.in_sourcecode:
code_snippets_intervals[-1] = (code_snippets_intervals[-1], len(s.processed_text))
return u"".join(s.processed_text), original_positions, escaped_positions, \
code_snippets_intervals
def __add__(self, other):
concatenation = unicode(self) + unicode(other)
concatenation = Excerpt(concatenation, mode="NONE")
concatenation.__post_substitutions = self.__post_substitutions
if isinstance(other, Excerpt):
assert self.__post_substitutions == other.__post_substitutions
concatenation.original_text = self.original_text + other.original_text
concatenation.original_positions = self.original_positions.copy()
length_first_part = len(self)
length_first_part_original = len(self.original_text)
concatenation.original_positions.update\
([(pos + length_first_part,
other.original_positions[pos].transpose(length_first_part_original))
for pos in other.original_positions if pos > 0])
assert 0 in other.original_positions
concatenation.escaped_positions = self.escaped_positions | \
set([pos + length_first_part for pos in other.escaped_positions])
# FixMe: When the last interval from "self" and the first of "other"
# touch each other, they should be merged.
concatenation.code_snippets_intervals = self.code_snippets_intervals + \
[(start + length_first_part, end + length_first_part)
for start, end in other.code_snippets_intervals]
else:
# Note that adding an ordinary Unicode to an Excerpt should only be
# done for simple cases. At the moment, this functionality is only
# used for single spaces. The reason is that otherwise, the post
# input method is also applied to the Unicode string, which may be
            # wrong. (Not necessarily wrong, though.)
assert isinstance(other, basestring)
assert "\n" not in other
concatenation.original_text = self.original_text + other
concatenation.original_positions = self.original_positions.copy()
concatenation.escaped_positions = self.escaped_positions
concatenation.code_snippets_intervals = self.code_snippets_intervals
return concatenation
def __getitem__(self, key):
if key < 0:
key += len(self)
character = super(Excerpt, self).__getitem__(key)
character = Excerpt(character, mode="NONE")
character.__post_substitutions = self.__post_substitutions
marker = self.original_positions.get(key, self.original_position(key))
character.original_text = \
self.original_text[marker.index:self.original_position(key+1).index]
marker.index = 0
character.original_positions = {0: marker}
if key in self.escaped_positions:
character.escaped_positions = set([0])
else:
character.escaped_positions = set()
character.code_snippets_intervals = []
for start, end in self.code_snippets_intervals:
if start <= key < end:
character.code_snippets_intervals = [(0, 1)]
break
if key < start:
break
return character
def __getslice__(self, i, j):
length = len(self)
i = max(min(i, length), 0)
j = max(min(j, length), i)
text = super(Excerpt, self).__getslice__(i, j)
slice_ = Excerpt(text, mode="NONE")
slice_.__post_substitutions = self.__post_substitutions
start_marker = self.original_position(i)
offset = start_marker.index
slice_.original_text = \
self.original_text[start_marker.index:self.original_position(j).index]
slice_.original_positions = \
dict([(pos - i, self.original_positions[pos].transpose(-offset))
for pos in self.original_positions if i <= pos < j])
if 0 not in slice_.original_positions:
slice_.original_positions[0] = start_marker.transpose(-offset)
slice_.escaped_positions = set([pos - i for pos in self.escaped_positions if i <= pos < j])
slice_.code_snippets_intervals = \
[(start - i, end - i) for start, end in self.code_snippets_intervals
if start < j and end > i]
if slice_.code_snippets_intervals:
slice_.code_snippets_intervals[0] = (max(slice_.code_snippets_intervals[0][0], 0),
slice_.code_snippets_intervals[0][1])
slice_.code_snippets_intervals[-1] = (slice_.code_snippets_intervals[-1][0],
min(slice_.code_snippets_intervals[-1][1], j))
return slice_
@classmethod
def apply_post_input_method(cls, excerpt):
"""This class method transforms an excerpt into a terminally processed text by
applying substitutions by the post input method. This means that this
text has already been preprocessed and parsed. It is in a terminal
text node, and post-processing is the final step before the backend
sees it.
:Parameters:
- `excerpt`: the original excerpt
:type excerpt: Excerpt
:Return:
- the processed text
- original positions as a dict of position -- (filename, linenumber,
column). The linenumber starts at 1, the column at 0.
- positions of escaped characters as a set of positions
- intervals of source code snippets
:rtype: unicode, dict, set, list
"""
# The following functions seem to violate an important programming
# rule: They modify variables of the outer scope, i.e. the enclosing
# function (side effects). However, they are simple to explain and
# they do simple things, so this approach leads to easy-to-comprehend
# code. Therefore, I make an exception to that rule.
#
# Their semantics are taken from those in apply_pre_input_method(),
# however, their implementation differs a little bit.
def drop_characters(number_of_characters):
"""This routine must be called immediately after the character is
being added to `preprocessed_text` and the current `position`
points to the next character in `original_text` to be processed.
"""
s.position += number_of_characters
if s.position not in excerpt.original_positions:
original_positions[len(s.processed_text)] = excerpt.original_position(s.position)
def copy_character(char=None):
            if char is None:
char = current_char
s.processed_text += char
s.position += 1
s = Excerpt.Status()
original_positions = {}
escaped_positions = set()
code_snippets_intervals = []
original_code_snippets_intervals = excerpt.code_snippets_intervals[:]
        # For the sake of performance, I don't test every character's position
# for input method matches, but look for the next upcoming match and
# store it.
text = unicode(excerpt)
next_match_position, next_match_length, replacement = \
cls.get_next_match(text, excerpt.__post_substitutions)
# Next comes the Big While which crawls through the whole source code
# and postprocesses it.
while s.position < len(text):
if s.position in excerpt.escaped_positions:
escaped_positions.add(len(s.processed_text))
if s.in_sourcecode:
if s.position >= original_code_snippets_intervals[0][1]:
del original_code_snippets_intervals[0]
s.in_sourcecode = False
code_snippets_intervals[-1] = \
(code_snippets_intervals[-1], len(s.processed_text))
if original_code_snippets_intervals:
if s.position >= original_code_snippets_intervals[0][0]:
code_snippets_intervals.append(len(s.processed_text))
s.in_sourcecode = True
if s.position in excerpt.original_positions:
original_positions[len(s.processed_text)] = excerpt.original_positions[s.position]
if s.in_sourcecode:
copy_character()
continue
current_char = text[s.position]
if current_char in string.whitespace:
copy_character()
continue
if s.position > next_match_position:
# I must update the next match
next_match_position, next_match_length, replacement = \
cls.get_next_match(text, excerpt.__post_substitutions, s.position)
if s.position == next_match_position:
any_escaped = False
for i in range(s.position, s.position + next_match_length):
if i in excerpt.escaped_positions:
any_escaped = True
break
if not any_escaped:
copy_character(replacement)
drop_characters(next_match_length - 1)
continue
# Now for the usual case of an ordinary character
copy_character()
if s.in_sourcecode:
assert len(original_code_snippets_intervals) == 1
            assert original_code_snippets_intervals[0][1] == len(text)
code_snippets_intervals[-1] = (code_snippets_intervals[-1], len(s.processed_text))
else:
assert not original_code_snippets_intervals
return s.processed_text, original_positions, escaped_positions, code_snippets_intervals
def __new__(cls, excerpt, mode, url=None,
pre_substitutions=None, post_substitutions=None):
"""Here I create the instance. I create a unicode object and add some
attributes to it. Note that this class doesn't have an __init__
method. There are three "modes", reflecting the three stages in the
lifecycle of an Excerpt. Note that the mode "NONE" is used for slicing
and indexing.
:Parameters:
- `excerpt`: the original text that will be used for initialising the
instance. If mode is "PRE" or "NONE", this should be a unicode
string, else it must be an Excerpt itself.
- `mode`: Either "PRE", "POST", or "NONE". This tells the method
which input must be applied (if at all), and of what type is
`excerpt`. (See above.)
- `url`: URL of the original ressource file. Must be given only for
the "PRE" mode.
- `pre_substitutions`: substitution list of the pre input method.
Must be given only for the "PRE" mode.
- `post_substitutions`: substitution list of the post input method.
Must be given only for the "PRE" mode.
:type excerpt: unicode or Excerpt
:type mode: str
:type url: str
:type pre_substitutions: list with the (match, replacement) tuples
:type post_substitutions: list with the (match, replacement) tuples
:Return:
the newly created instance of Excerpt.
:rtype: Excerpt
"""
if mode == "NONE":
self = unicode.__new__(cls, excerpt)
elif mode == "PRE":
preprocessed_text, original_positions, escaped_positions, code_snippets_intervals = \
cls.apply_pre_input_method(excerpt, url, pre_substitutions)
self = unicode.__new__(cls, preprocessed_text)
self.original_text = unicode(excerpt)
self.original_positions = original_positions
self.escaped_positions = escaped_positions
self.code_snippets_intervals = code_snippets_intervals
self.__post_substitutions = post_substitutions
elif mode == "POST":
postprocessed_text, original_positions, escaped_positions, code_snippets_intervals = \
cls.apply_post_input_method(excerpt)
self = unicode.__new__(cls, postprocessed_text)
self.original_positions = original_positions
self.escaped_positions = escaped_positions
self.code_snippets_intervals = code_snippets_intervals
self.original_text = excerpt.original_text
self.__post_substitutions = None
self.__escaped_text = None
return self
def apply_postprocessing(self):
"""Applies the rules for post processing this the excerpt and returns
the processed excerpt. Note that this method can be called only once
per excerpt, i.e., for the returned excerpt, this method cannot be
called once again.
:Return:
the newly created instance of Excerpt, with applied post input
method.
:rtype: Excerpt
"""
assert self.__post_substitutions is not None, "post input method can be applied only once"
return Excerpt(self, mode="POST")
# FixMe: The following path variable will eventually be set by some sort of
# configuration.
input_methods_path = os.path.join(common.modulepath, "data")
def read_input_method(input_method_name):
"""Return the substitution dictionary for one input method.
:Parameters:
- `input_method_name`: name of the input method, e.g. "minimal"
:type input_method_name: string
:Return:
A list with the (match, replacement) tuples. Both are strings, the first
being a regular expression, and the second one single character. Their
order is the same as in the file, and duplicates are not deleted.
:rtype: list
:Exceptions:
- `LocalVariablesError`: if the first line is not a local variables line
- `FileError`: if there is an invalid line in the file
"""
if input_method_name == "none":
return [], []
pre_substitutions = []
post_substitutions = []
filename = os.path.join(input_methods_path, input_method_name+".bim")
local_variables = common.parse_local_variables(open(filename).readline(), force=True)
if local_variables.get("input-method-name") != input_method_name:
raise FileError("input method name in first line doesn't match file name", filename)
input_method_file = codecs.open(filename, encoding=local_variables.get("coding", "utf8"))
input_method_file.readline()
if not re.match(r"\.\. Bobcat input method\Z", input_method_file.readline().rstrip()):
raise FileError("second line is invalid", filename)
if "parental-input-method" in local_variables:
for input_method in local_variables["parental-input-method"].split(","):
parent_pre, parent_post = read_input_method(input_method)
pre_substitutions.extend(parent_pre)
post_substitutions.extend(parent_post)
line_pattern = re.compile(r"(?P<match>.+?)\t+"
r"((?P<replacement>.)|(#(?P<dec>\d+))|(0x(?P<hex>[0-9a-fA-F]+)))"
r"(\s+.*\s*)?\Z")
for i, line in enumerate(input_method_file):
linenumber = i + 3
if line.strip() == "" or line.rstrip() == ".." or line.startswith(".. "):
continue
line_match = line_pattern.match(line)
if not line_match:
raise FileError("line %d is invalid" % linenumber, filename)
match = line_match.group("match")
post = match.startswith("POST::")
if post:
match = match[6:]
if match.startswith("REGEX::"):
match = match[7:]
if re.match(u"(?:"+match+u")?", "").groups():
raise FileError("the match in line %d contains a group" % linenumber, filename)
else:
match = re.escape(match)
if line_match.group("replacement"):
replacement = line_match.group("replacement")
elif line_match.group("dec"):
replacement = unichr(int(line_match.group("dec")))
elif line_match.group("hex"):
replacement = unichr(int(line_match.group("hex"), 16))
if post:
post_substitutions.append((match, replacement))
else:
pre_substitutions.append((match, replacement))
return pre_substitutions, post_substitutions
def process_text(text, filepath, input_method):
"""Take the raw contents of the Bobcat file and turn it into "digested"
contents with applied input method and marking of escaped characters.
:Parameters:
- `text`: raw contents of the input file. Only the encoding was
applied.
- `filepath`: path to the Bobcat input file. This is only used for the
error messages.
- `input_method`: name of the input method to be applied. If more than
one, a list of names of input methods.
:type text: unicode
:type filepath: string
:type input_method: string or list of strings
:Return:
the preprocessed contents
:rtype: Excerpt
"""
def sort_and_filter_substitutions(substitutions):
"""Sort and filter the list of substitutions: Reverse order, and remove
        duplicates. Additionally, compile the regular expressions to pattern
        objects."""
hitherto_matches = set()
sorted_substitutions = []
for i in range(len(substitutions)):
match, replacement = substitutions[-i-1]
if match not in hitherto_matches:
hitherto_matches.add(match)
sorted_substitutions.append((re.compile(match, re.MULTILINE), replacement))
return sorted_substitutions
# First, read the input method(s)
if isinstance(input_method, list):
input_methods = input_method
else:
input_methods = [input_method]
pre_substitutions = []
post_substitutions = []
for input_method in input_methods:
pre, post = read_input_method(input_method)
pre_substitutions.extend(pre)
post_substitutions.extend(post)
pre_substitutions = sort_and_filter_substitutions(pre_substitutions)
post_substitutions = sort_and_filter_substitutions(post_substitutions)
# Now, apply it to the contents
return Excerpt(text, "PRE", filepath, pre_substitutions, post_substitutions)
def detect_header_data(bobcat_file):
"""Detect the local variables of the given text file and the Bobcat format
version according to its first two lines. This is very similar to the
method used for Python source files. There is no default encoding, the
default input method is "minimal".
:Parameters:
- `bobcat_file`: source file, with the file pointer set to the start
:type bobcat_file: string
:Return:
- encoding of the file. If none was found, it returns None.
- input method of the file. It defaults to "minimal". If more than one
input method was given, a list of strings is returned.
- Bobcat version; defaults to "1.0"
:rtype: string, string, string
"""
first_line = bobcat_file.readline()
local_variables = common.parse_local_variables(first_line)
    if local_variables is not None:
coding = local_variables.get("coding")
input_method = local_variables.get("input-method", "minimal")
second_line = bobcat_file.readline()
else:
coding, input_method = None, "minimal"
second_line = first_line
if re.match(r"\.\. \s*Bobcat", second_line):
bobcat_version_match = re.match(r"\.\. \s*Bobcat\s+([0-9]+\.[0-9]+)\s*\Z", second_line)
if bobcat_version_match:
bobcat_version = bobcat_version_match.group(1)
else:
raise FileError("Bobcat version line was invalid", bobcat_file.name)
else:
warnings.warn("No Bobcat version was specified. I assume 1.0.")
bobcat_version = "1.0"
return coding, input_method, bobcat_version
def load_file(filename):
"""Load the Bobcat file "filename" and return an `Excerpt` instance containing
that file.
:Parameters:
- `filename`: Bobcat filename
:type filename: string
:Return:
- `Excerpt` with the contents of the file
- auto-detected encoding of the file. None if the encoding was given
explicitly in the file.
- Bobcat version of the file as a string
:rtype: Excerpt, string, string
"""
encoding, input_method, bobcat_version = detect_header_data(open(filename))
# First, auto-detect encoding
if encoding:
try:
lines = codecs.open(filename, encoding=encoding).readlines()
encoding = None
except UnicodeDecodeError:
raise EncodingError("The encoding given in the file (%s) was wrong." % encoding,
filename)
else:
warnings.warn("I have to auto-detect file encoding. This may fail. "
"Please specify file encoding explicitly.")
# Test for UTF-8
try:
lines = codecs.open(filename, encoding="utf-8").readlines()
encoding = "utf-8"
except UnicodeDecodeError:
lines = []
# Test for Latin-1
for line in open(filename):
for char in line:
# Cheap heuristics: the characters 0x80...0x9f almost never
# occur in Latin-1.
if 0x80 <= ord(char) <= 0x9f:
break
else:
lines.append(line.decode("latin-1"))
continue
break
else:
encoding = "latin-1"
if not encoding:
# Test for cp1252
try:
                lines = codecs.open(filename, encoding="cp1252").readlines()
                encoding = "cp1252"
except UnicodeDecodeError:
raise EncodingError("Couldn't auto-detect file encoding. "
"Please specify explicitly.", filename)
text = process_text(u"".join(lines), filename, input_method)
return text, encoding, bobcat_version
| 46.433128 | 103 | 0.615846 | 5,341 | 45,133 | 5.043438 | 0.125445 | 0.026135 | 0.0382 | 0.007573 | 0.343988 | 0.268441 | 0.228867 | 0.204403 | 0.182908 | 0.152652 | 0 | 0.00653 | 0.307824 | 45,133 | 971 | 104 | 46.480947 | 0.855698 | 0.352469 | 0 | 0.343985 | 0 | 0.005639 | 0.041294 | 0.008178 | 0 | 0 | 0.000293 | 0.00309 | 0.016917 | 1 | 0.048872 | false | 0 | 0.005639 | 0.00188 | 0.103383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bd768723aa937604b201f5855fe1b09d5c5592c | 2,338 | py | Python | test/lloyd_test.py | gdmcbain/voropy | a02e1c8d434e14edf21ba615556f0512e4e3bbe0 | [
"MIT"
] | null | null | null | test/lloyd_test.py | gdmcbain/voropy | a02e1c8d434e14edf21ba615556f0512e4e3bbe0 | [
"MIT"
] | null | null | null | test/lloyd_test.py | gdmcbain/voropy | a02e1c8d434e14edf21ba615556f0512e4e3bbe0 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
import numpy
import meshio
import voropy
from helpers import download_mesh
def test_simple_lloyd(max_steps=5, output_filetype=None):
X = numpy.array([
[0.0, 0.0, 0.0],
[1.0, 0.0, 0.0],
[1.0, 1.0, 0.0],
[0.0, 1.0, 0.0],
[0.4, 0.5, 0.0],
])
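    # Unit square corners plus one deliberately off-center interior point;
    # the four cells below fan around that interior vertex (index 4).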
cells = numpy.array([
[0, 1, 4],
[1, 2, 4],
[2, 3, 4],
[3, 0, 4],
])
submesh_bools = {0: numpy.ones(len(cells), dtype=bool)}
X, cells = voropy.smoothing.lloyd_submesh(
X, cells, submesh_bools,
1.0e-2,
skip_inhomogenous_submeshes=True,
max_steps=max_steps,
fcc_type='boundary',
verbose=True,
output_filetype=output_filetype
)
# Test if we're dealing with the mesh we expect.
nc = X.flatten()
norm1 = numpy.linalg.norm(nc, ord=1)
norm2 = numpy.linalg.norm(nc, ord=2)
normi = numpy.linalg.norm(nc, ord=numpy.inf)
tol = 1.0e-12
assert abs(norm1 - 4.9853556578540266) < tol
assert abs(norm2 - 2.1179164560036154) < tol
assert abs(normi - 1.0) < tol
return
def test_pacman_lloyd(max_steps=1000, output_filetype=None):
filename = download_mesh(
'pacman.msh',
'2da8ff96537f844a95a83abb48471b6a'
)
X, cells, _, _, _ = meshio.read(filename)
submesh_bools = {0: numpy.ones(len(cells['triangle']), dtype=bool)}
X, cells = voropy.smoothing.lloyd_submesh(
X, cells['triangle'], submesh_bools,
1.0e-2,
skip_inhomogenous_submeshes=False,
max_steps=max_steps,
fcc_type='boundary',
flip_frequency=1,
verbose=False,
output_filetype=output_filetype
)
# Test if we're dealing with the mesh we expect.
nc = X.flatten()
norm1 = numpy.linalg.norm(nc, ord=1)
norm2 = numpy.linalg.norm(nc, ord=2)
normi = numpy.linalg.norm(nc, ord=numpy.inf)
tol = 1.0e-12
# assert abs(norm1 - 1944.49523751269) < tol
# assert abs(norm2 - 76.097893244864181) < tol
assert abs(norm1 - 1939.1198108068188) < tol
assert abs(norm2 - 75.949652079323229) < tol
assert abs(normi - 5.0) < tol
return
if __name__ == '__main__':
# test_pacman_lloyd(
test_simple_lloyd(
max_steps=100,
output_filetype='png'
)
| 24.87234 | 71 | 0.591959 | 319 | 2,338 | 4.188088 | 0.282132 | 0.025449 | 0.026946 | 0.023952 | 0.548653 | 0.513473 | 0.513473 | 0.422156 | 0.360778 | 0.357784 | 0 | 0.121678 | 0.275877 | 2,338 | 93 | 72 | 25.139785 | 0.667454 | 0.094953 | 0 | 0.352941 | 0 | 0 | 0.040323 | 0.01518 | 0 | 0 | 0 | 0 | 0.088235 | 1 | 0.029412 | false | 0 | 0.058824 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bde4903d2551e5f28a57f6e66dc745022d95680 | 2,680 | py | Python | attic/tests/post-deployment/resources/test_support/archiver.py | ska-telescope/skampi | cd2f95bd56594888c8d0c3476824b438dfcfcf71 | [
"BSD-3-Clause"
] | null | null | null | attic/tests/post-deployment/resources/test_support/archiver.py | ska-telescope/skampi | cd2f95bd56594888c8d0c3476824b438dfcfcf71 | [
"BSD-3-Clause"
] | 3 | 2019-10-25T13:38:56.000Z | 2022-03-30T09:13:33.000Z | attic/tests/post-deployment/resources/test_support/archiver.py | ska-telescope/skampi | cd2f95bd56594888c8d0c3476824b438dfcfcf71 | [
"BSD-3-Clause"
] | 2 | 2019-11-04T09:59:06.000Z | 2020-05-07T11:05:42.000Z | from tango import DeviceProxy,AttributeProxy
from time import sleep
import logging
class ArchiverHelper:
def __init__(self,conf_manager='archiving/hdbpp/confmanager01', eventsubscriber='archiving/hdbpp/eventsubscriber01'):
self.conf_manager = conf_manager
self.eventsubscriber = eventsubscriber
self.conf_manager_proxy = DeviceProxy(self.conf_manager)
self.evt_subscriber_proxy = DeviceProxy(self.eventsubscriber)
def attribute_add(self, fqdn, polling_period=1000, period_event=3000):
if not self.is_already_archived(fqdn):
AttributeProxy(fqdn).read()
self.conf_manager_proxy.write_attribute("SetAttributeName", fqdn)
self.conf_manager_proxy.write_attribute("SetArchiver", self.eventsubscriber)
self.conf_manager_proxy.write_attribute("SetStrategy", "ALWAYS")
self.conf_manager_proxy.write_attribute("SetPollingPeriod", int(polling_period))
self.conf_manager_proxy.write_attribute("SetPeriodEvent", int(period_event))
self.conf_manager_proxy.AttributeAdd()
return True
return False
def attribute_list(self):
return self.evt_subscriber_proxy.read_attribute("AttributeList").value
def is_already_archived(self, fqdn):
attr_list = self.attribute_list()
if attr_list is not None:
for already_archived in attr_list:
if fqdn in str(already_archived).lower():
return True
return False
def start_archiving(self, fqdn=None, polling_period=1000, period_event=3000):
if(fqdn is not None):
self.attribute_add(fqdn,polling_period,period_event)
return self.evt_subscriber_proxy.Start()
def stop_archiving(self, fqdn):
self.evt_subscriber_proxy.AttributeStop(fqdn)
return self.conf_manager_proxy.AttributeRemove(fqdn)
def evt_subscriber_attribute_status(self, fqdn):
return self.evt_subscriber_proxy.AttributeStatus(fqdn)
def conf_manager_attribute_status(self, fqdn):
return self.conf_manager_proxy.AttributeStatus(fqdn)
def is_started(self, fqdn):
return "Archiving : Started" in self.evt_subscriber_attribute_status(fqdn)
def wait_for_start(self,fqdn,sleep_time=0.1,max_retries=30):
total_sleep_time = 0
for x in range(0, max_retries):
try:
if("Archiving : Started" in self.conf_manager_attribute_status(fqdn)):
break
except:
pass
sleep(sleep_time)
total_sleep_time += 1
return total_sleep_time* sleep_time
| 41.230769 | 121 | 0.688433 | 313 | 2,680 | 5.603834 | 0.249201 | 0.094071 | 0.111174 | 0.102623 | 0.297605 | 0.199544 | 0.038769 | 0 | 0 | 0 | 0 | 0.013145 | 0.233582 | 2,680 | 64 | 122 | 41.875 | 0.840798 | 0 | 0 | 0.075472 | 0 | 0 | 0.076493 | 0.023134 | 0 | 0 | 0 | 0 | 0 | 1 | 0.188679 | false | 0.018868 | 0.056604 | 0.075472 | 0.471698 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bde8114fe89bdf8eaec92007c022e2ab3234ec2 | 3,527 | py | Python | light9/editchoicegtk.py | drewp/light9 | ab173a40d095051546e532962f7a33ac502943a6 | [
"MIT"
] | 2 | 2018-10-05T13:32:46.000Z | 2022-01-01T22:51:20.000Z | light9/editchoicegtk.py | drewp/light9 | ab173a40d095051546e532962f7a33ac502943a6 | [
"MIT"
] | 4 | 2021-06-08T19:33:40.000Z | 2022-03-11T23:18:06.000Z | light9/editchoicegtk.py | drewp/light9 | ab173a40d095051546e532962f7a33ac502943a6 | [
"MIT"
] | null | null | null | import logging
from gi.repository import Gtk
from gi.repository import Gdk
from rdflib import URIRef
log = logging.getLogger('editchoicegtk')
class Local(object):
"""placeholder for the local uri that EditChoice does not
manage. Set resourceObservable to Local to indicate that you're
unlinked"""
class EditChoice(Gtk.HBox):
"""
this is a gtk port of editchoice.EditChoice
"""
def __init__(self, graph, resourceObservable, label="Editing:"):
"""
        resourceObservable is called to get the URI of the resource currently being edited
"""
self.graph = graph
# the outer box should have a distinctive border so it's more
# obviously a special drop target
Gtk.HBox.__init__(self)
self.pack_start(Gtk.Label(label), False, True, 0) #expand, fill, pad
# this is just a label, but it should look like a physical
# 'thing' (and gtk labels don't work as drag sources)
self.currentLink = Gtk.Button("http://bar")
self.pack_start(self.currentLink, True, True, 0) #expand, fill, pad
self.unlinkButton = Gtk.Button(label="Unlink")
        self.pack_start(self.unlinkButton, False, True, 0) #expand, fill, pad
self.unlinkButton.connect("clicked", self.onUnlink)
self.show_all()
self.resourceObservable = resourceObservable
resourceObservable.subscribe(self.uriChanged)
self.makeDragSource()
self.makeDropTarget()
def makeDropTarget(self):
def ddr(widget, drag_context, x, y, selection_data, info, timestamp):
dtype = selection_data.get_data_type()
if dtype.name() not in ['text/uri-list', 'TEXT']:
raise ValueError("unknown DnD selection type %r" % dtype)
data = selection_data.get_data().strip()
log.debug('drag_data_received data=%r', data)
self.resourceObservable(URIRef(data))
self.currentLink.drag_dest_set(
flags=Gtk.DestDefaults.ALL,
targets=[
Gtk.TargetEntry.new('text/uri-list', 0, 0),
Gtk.TargetEntry.new('TEXT', 0,
0), # getting this from chrome :(
],
actions=Gdk.DragAction.LINK | Gdk.DragAction.COPY)
self.currentLink.connect("drag_data_received", ddr)
def makeDragSource(self):
self.currentLink.drag_source_set(
start_button_mask=Gdk.ModifierType.BUTTON1_MASK,
targets=[
Gtk.TargetEntry.new(target='text/uri-list', flags=0, info=0)
],
actions=Gdk.DragAction.LINK | Gdk.DragAction.COPY)
def source_drag_data_get(btn, context, selection_data, info, time):
selection_data.set_uris([self.resourceObservable()])
self.currentLink.connect("drag_data_get", source_drag_data_get)
def uriChanged(self, newUri):
# if this resource had a type icon or a thumbnail, those would be
# cool to show in here too
if newUri is Local:
self.currentLink.set_label("(local)")
self.currentLink.drag_source_unset()
else:
self.graph.addHandler(self.updateLabel)
self.makeDragSource()
self.unlinkButton.set_sensitive(newUri is not Local)
def updateLabel(self):
uri = self.resourceObservable()
label = self.graph.label(uri)
self.currentLink.set_label(label or uri or "")
def onUnlink(self, *args):
self.resourceObservable(Local)
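# Illustrative usage (hypothetical graph and observable objects; a sketch only):
# chooser = EditChoice(graph, resourceObservable, label="Editing:")
# parent_box.pack_start(chooser, False, False, 0)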
| 35.27 | 77 | 0.631415 | 421 | 3,527 | 5.180523 | 0.377672 | 0.061898 | 0.017882 | 0.020633 | 0.109124 | 0.081614 | 0.068776 | 0 | 0 | 0 | 0 | 0.003876 | 0.2685 | 3,527 | 99 | 78 | 35.626263 | 0.841473 | 0.168415 | 0 | 0.129032 | 0 | 0 | 0.064067 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.064516 | 0 | 0.225806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3be00db559348b9f52a6bebfd662d19b6f287e80 | 1,137 | py | Python | cogs/misc.py | ddugovic/WingBot | 81c325427dac2fcad6d27f0cf87c49709c51ad7f | [
"MIT"
] | 5 | 2020-03-29T08:23:54.000Z | 2022-03-28T17:13:41.000Z | cogs/misc.py | ddugovic/WingBot | 81c325427dac2fcad6d27f0cf87c49709c51ad7f | [
"MIT"
] | 8 | 2020-07-31T13:49:36.000Z | 2022-03-28T17:12:05.000Z | cogs/misc.py | ddugovic/WingBot | 81c325427dac2fcad6d27f0cf87c49709c51ad7f | [
"MIT"
] | 2 | 2020-07-31T13:19:12.000Z | 2022-03-26T15:50:37.000Z | import discord
from discord.ext import commands
from utils.configManager import BotConfig, RedditConfig
class Misc(commands.Cog):
"""Miscellaneous commands."""
def __init__(self, bot):
self.bot = bot
self.bot_config = BotConfig()
@commands.command(
usage='"<question>" "<option1>" "<option2>" (the quotation marks are important)'
)
async def poll(self, ctx, question: str, option1: str, option2: str):
"""Make polls."""
commandmsg = await ctx.channel.fetch_message(ctx.channel.last_message_id)
await commandmsg.delete()
embed = discord.Embed(title=question, color=discord.Color.from_rgb(230, 0, 0))
embed.add_field(name="A", value=option1, inline=False)
embed.add_field(name="B", value=option2, inline=False)
pollmsg = await ctx.send(embed=embed)
await pollmsg.add_reaction("🇦")
await pollmsg.add_reaction("🇧")
def setup(bot):
"""
Called automatically by discord while loading extension. Adds the Miscellaneous cog on to the bot.
"""
bot.add_cog(Misc(bot))
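# Illustrative usage (hypothetical bot instance; not part of the original cog):
# bot = commands.Bot(command_prefix="!")
# bot.load_extension("cogs.misc")  # discord.py calls setup(bot) above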
| 29.921053 | 103 | 0.639402 | 138 | 1,137 | 5.181159 | 0.514493 | 0.029371 | 0.027972 | 0.047552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012717 | 0.239226 | 1,137 | 37 | 104 | 30.72973 | 0.811561 | 0.1073 | 0 | 0 | 0 | 0 | 0.080851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.190476 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3be1c690806d982104f8c79a75d03e1252134df3 | 6,061 | py | Python | pynairus/validators/operator_validator.py | venairus/pynairus | 76227072aa0f0f98a36a3a04eb6a436473cfd9a6 | [
"MIT"
] | 2 | 2018-02-15T12:16:10.000Z | 2018-09-11T12:05:12.000Z | pynairus/validators/operator_validator.py | venairus/pynairus | 76227072aa0f0f98a36a3a04eb6a436473cfd9a6 | [
"MIT"
] | null | null | null | pynairus/validators/operator_validator.py | venairus/pynairus | 76227072aa0f0f98a36a3a04eb6a436473cfd9a6 | [
"MIT"
] | 1 | 2019-10-30T09:40:28.000Z | 2019-10-30T09:40:28.000Z | # coding: utf-8
"""Module for operation validation."""
from ..errors.app_error import BadArgumentError
from ..errors.app_error import ValidateError
from ..helpers.string_helper import parse_time_string, convert_seconds_to_time
class BaseValidator():
"""Abstract class for validators."""
def validate(self, answer, first, second):
"""Validate the answer.
:param answer: the answer to validate
:param first: the first number of the operation
:param second: the second number of the operation
:type answer: int|str
:type first: int|str
:type second: int|str
:return: bool
"""
return self.convert_answer(answer) == self.get_result(first, second)
def get_result(self, first, second):
"""Return the good result for the operation."""
message = f"method not implemented for class {self.__class__.__name__}"
raise ValidateError(message)
def convert_answer(self, answer):
"""Convert the type of the answer.
By default no conversion is made.
Override this method to do one.
"""
return answer
class AdditionValidator(BaseValidator):
"""Validator for addition."""
def get_result(self, first, second):
"""Return the result of the addition.
:param first: first number
:param second: second number
:type first: int
:type second: int
:return: int
"""
return first + second
def convert_answer(self, answer):
"""Convert the type of the answer."""
return int(answer)
class SubstractionValidator(BaseValidator):
"""Validator for substraction."""
def get_result(self, first, second):
"""Return the result of the substraction.
:param first: first number
:param second: second number
:type first: int
:type second: int
:return: int
"""
return first - second
def convert_answer(self, answer):
"""Convert the type of the answer."""
return int(answer)
class MultiplicationValidator(BaseValidator):
"""Validator for multiplication."""
def get_result(self, first, second):
"""Return the result for the multiplication.
:param first: first number
:param second: second number
:type first: int
:type second: int
:return: int
"""
return first * second
def convert_answer(self, answer):
"""Convert the type of the answer."""
return int(answer)
class TimeAdditionValidator(BaseValidator):
"""Validator for time addition."""
def get_result(self, first, second):
"""Return the result for the time addition.
:param first: first time
        :param second: second time
:type first: str
:type second: str
:return: str
        :raise ValidateError: if an error occurred while parsing the args
"""
try:
            # parse the first time string
first_tuple = parse_time_string(first)
# parse the second time string
second_tuple = parse_time_string(second)
# add the hours, minutes and seconds
hours = first_tuple[0] + second_tuple[0]
mins = first_tuple[1] + second_tuple[1]
secs = first_tuple[2] + second_tuple[2]
# convert in seconds
timestamp = (hours * 60 * 60) + (mins * 60) + secs
# return the result
return convert_seconds_to_time(timestamp)
except BadArgumentError as error:
raise ValidateError(
f"An error occured while validating: {first} + {second}",
error)
class TimeSubstractionValidator(BaseValidator):
"""Validator for substraction."""
def get_result(self, first, second):
"""Return the result of the substraction.
:param first: first number
:param second: second number
:type first: int
:type second: int
:return: int
:raise ValidateError: if an error occured
"""
try:
            # parse the first time string and convert to seconds
first_tuple = parse_time_string(first)
first_seconds = (first_tuple[0] * 60 * 60) + \
(first_tuple[1] * 60) + first_tuple[2]
            # parse the second time string and convert to seconds
second_tuple = parse_time_string(second)
second_seconds = (second_tuple[0] * 60 * 60) + \
(second_tuple[1] * 60) + second_tuple[2]
# verify the numbers:
# second number must not be greater than the first one
if second_seconds > first_seconds:
raise ValidateError(
f"The first time ({first}) isn't greater than {second}")
# return the final result
return convert_seconds_to_time(first_seconds - second_seconds)
except BadArgumentError as error:
raise ValidateError(
f"An error occured while validating: {first} - {second}",
error)
class DivisionValidator(BaseValidator):
"""Validator for multiplication."""
def get_result(self, first, second):
"""Return the result for the division.
:param first: first number
:param second: second number
:type first: int
:type second: int
:return: str
:raise ValidateError: if first arg isn't greater than second args
"""
if second > first:
message = f"The first number ({first}) isn't greater than {second}"
raise ValidateError(message)
quotient = first // second
tmp_rest = first % second
rest = f"r{tmp_rest}" if tmp_rest > 0 else ""
return f"{quotient}{rest}"
| 29.280193 | 79 | 0.581752 | 666 | 6,061 | 5.192192 | 0.168168 | 0.050896 | 0.034702 | 0.032389 | 0.565067 | 0.52487 | 0.408039 | 0.408039 | 0.397629 | 0.397629 | 0 | 0.007935 | 0.334598 | 6,061 | 206 | 80 | 29.42233 | 0.849492 | 0.347962 | 0 | 0.439394 | 0 | 0 | 0.090965 | 0.007657 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.045455 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3be24b94a64a93174866cc783d80c4c47480814c | 7,283 | py | Python | comic_dl/sites/japscan.py | StefanUlbrich/comic-dl | 84a38c9f5706fa40be8ff149ea31865c61de3a38 | [
"MIT"
] | 478 | 2016-11-13T15:11:10.000Z | 2022-03-30T22:22:22.000Z | comic_dl/sites/japscan.py | darodi/comic-dl | 1e752321b79ee1fd599e22b31328248e6ee9c41c | [
"MIT"
] | 270 | 2017-02-01T03:21:45.000Z | 2022-03-28T04:16:27.000Z | comic_dl/sites/japscan.py | darodi/comic-dl | 1e752321b79ee1fd599e22b31328248e6ee9c41c | [
"MIT"
] | 101 | 2016-11-14T20:31:55.000Z | 2022-03-11T06:33:15.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import re
import cloudscraper
from comic_dl import globalFunctions
from PIL import Image
from bs4 import BeautifulSoup
from tqdm import tqdm
JAPSCAN_URL = 'https://www.japscan.to'
class Japscan():
def __init__(self, manga_url, download_directory, chapter_range, **kwargs):
self.scraper = cloudscraper.create_scraper()
conversion = kwargs.get("conversion")
keep_files = kwargs.get("keep_files")
self.logging = kwargs.get("log_flag")
self.sorting = kwargs.get("sorting_order")
self.manga_url = manga_url + '/'
self.print_index = kwargs.get("print_index")
if 'manga' in manga_url:
self.comic_id = str(str(manga_url).split("/")[-1])
self.full_series(comic_id=self.comic_id, sorting=self.sorting, download_directory=download_directory,
chapter_range=chapter_range, conversion=conversion, keep_files=keep_files)
if 'lecture-en-ligne' in manga_url:
self.comic_id = str(str(manga_url).split("/")[-2])
chapter_path = re.sub(re.compile(r'.*japscan.to'), '', str(self.manga_url))
            # note: dropped the undefined `scraper` kwarg -- single_chapter uses self.scraper internally
            self.single_chapter(chapter_path, comic_id=self.comic_id,
                                download_directory=download_directory)
def full_series(self, comic_id, sorting, download_directory, chapter_range, conversion, keep_files):
scraper = self.scraper
content = scraper.get(self.manga_url).content
chapter_divs = BeautifulSoup(content, features='lxml').findAll('div', {
'class': 'chapters_list'})
starting, ending = self.compute_start_end(chapter_divs, chapter_range)
if self.print_index:
idx = 0
for chap_link in chapter_divs[::-1]:
idx = idx + 1
print(str(idx) + ": " + re.sub('[\t\r\n]', '', chap_link.find('a').getText()))
return 0
for chapter_div in chapter_divs[::-1][starting-1:ending]:
chapter_path = chapter_div.find(href=True)['href']
try:
self.single_chapter(chapter_path, comic_id, download_directory)
            except Exception:
                break  # stop this manga and continue processing other mangas
# @Chr1st-oo - modified condition due to some changes on automatic download and config.
if chapter_range != "All" and (chapter_range.split("-")[1] == "__EnD__" or len(chapter_range.split("-")) == 3):
globalFunctions.GlobalFunctions().addOne(self.manga_url)
return 0
@staticmethod
def compute_start_end(chapter_divs, chapter_range):
if chapter_range != "All":
starting = int(str(chapter_range).split("-")[0])
total_chapters = len(chapter_divs)
if str(chapter_range).split("-")[1].isdigit():
ending = int(str(chapter_range).split("-")[1])
else:
ending = total_chapters
if ending > total_chapters:
ending = total_chapters
else:
starting = 1
ending = len(chapter_divs)
return starting, ending
def single_chapter(self, chapter_path, comic_id, download_directory):
scraper = self.scraper
chapter_url = JAPSCAN_URL + chapter_path
chapter_name = chapter_path.split('/')[-2]
pages = BeautifulSoup(scraper.get(chapter_url).content, features='lxml').find('select', {'id': 'pages'})
page_options = pages.findAll('option', value=True)
file_directory = globalFunctions.GlobalFunctions().create_file_directory(chapter_name, comic_id)
directory_path = os.path.realpath(str(download_directory) + "/" + str(file_directory))
if not os.path.exists(directory_path):
os.makedirs(directory_path)
links = []
file_names = []
pbar = tqdm(page_options, leave=True, unit='image(s)', position=0)
pbar.set_description('[Comic-dl] Downloading : %s [%s] ' % (comic_id, chapter_name))
for page_tag in page_options:
page_url = JAPSCAN_URL + page_tag['value']
page = BeautifulSoup(scraper.get(page_url).content, features='lxml')
image_url = page.find('div', {'id': 'image'})['data-src']
links.append(image_url)
file_name = image_url.split("/")[-1]
file_names.append(file_name)
# pbar = tqdm([image_url], leave=True, unit='image(s)', position=0)
self.download_image(referer=image_url, directory_path=directory_path, pbar=pbar, image_url=image_url,
file_name=file_name)
def download_image(self, image_url, file_name, referer, directory_path, pbar):
unscramble = False
if 'clel' in image_url:
unscramble = True
file_check_path = str(directory_path) + os.sep + str(file_name)
if os.path.isfile(file_check_path):
            pbar.write('[Comic-dl] File Exists! Skipping : %s\n' % file_name)
pass
if not os.path.isfile(file_check_path):
headers = {
'User-Agent':
'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36',
'Accept-Encoding': 'gzip, deflate',
'Referer': referer
}
image_content = self.scraper.get(image_url, headers=headers).content
if unscramble is True:
scrambled_image = file_check_path + '_scrambled'
else:
scrambled_image = file_check_path
file = open(scrambled_image, 'wb')
file.write(image_content)
file.close()
if unscramble is True:
self.unscramble_image(scrambled_image, file_check_path)
os.remove(scrambled_image)
pbar.update()
def unscramble_image(self, scrambled_image, image_full_path):
input_image = Image.open(scrambled_image)
temp = Image.new("RGB", input_image.size)
output_image = Image.new("RGB", input_image.size)
for x in range(0, input_image.width, 200):
col1 = input_image.crop((x, 0, x + 100, input_image.height))
if (x + 200) <= input_image.width:
col2 = input_image.crop((x + 100, 0, x + 200, input_image.height))
temp.paste(col1, (x + 100, 0))
temp.paste(col2, (x, 0))
else:
col2 = input_image.crop((x + 100, 0, input_image.width, input_image.height))
temp.paste(col1, (x, 0))
temp.paste(col2, (x + 100, 0))
for y in range(0, temp.height, 200):
row1 = temp.crop((0, y, temp.width, y + 100))
if (y + 200) <= temp.height:
row2 = temp.crop((0, y + 100, temp.width, y + 200))
output_image.paste(row1, (0, y + 100))
output_image.paste(row2, (0, y))
else:
row2 = temp.crop((0, y + 100, temp.width, temp.height))
output_image.paste(row1, (0, y))
output_image.paste(row2, (0, y + 100))
output_image.save(image_full_path)
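# Illustrative usage (hypothetical manga URL and options; a sketch only):
# Japscan("https://www.japscan.to/manga/example/", ".", "All",
#         conversion=None, keep_files=True, log_flag=False,
#         sorting_order=None, print_index=False)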
| 43.094675 | 131 | 0.593711 | 888 | 7,283 | 4.656532 | 0.23536 | 0.037727 | 0.018863 | 0.02104 | 0.219831 | 0.165417 | 0.107376 | 0.05127 | 0.019347 | 0.019347 | 0 | 0.024006 | 0.285047 | 7,283 | 168 | 132 | 43.35119 | 0.770117 | 0.032404 | 0 | 0.094891 | 0 | 0.007299 | 0.067452 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043796 | false | 0.007299 | 0.051095 | 0 | 0.124088 | 0.021898 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3be4283d633749a8a265e631d354bd0c41456180 | 7,596 | py | Python | nfvbench/compute.py | michaelspedersen/nfvbench | b6a8022c6d41688b21615683149a48dfcb98b705 | [
"Apache-2.0"
] | null | null | null | nfvbench/compute.py | michaelspedersen/nfvbench | b6a8022c6d41688b21615683149a48dfcb98b705 | [
"Apache-2.0"
] | null | null | null | nfvbench/compute.py | michaelspedersen/nfvbench | b6a8022c6d41688b21615683149a48dfcb98b705 | [
"Apache-2.0"
] | null | null | null | # Copyright 2016 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Module to interface with nova and glance."""
import time
import traceback
from glanceclient import exc as glance_exception
try:
from glanceclient.openstack.common.apiclient.exceptions import NotFound as GlanceImageNotFound
except ImportError:
from glanceclient.v1.apiclient.exceptions import NotFound as GlanceImageNotFound
import keystoneauth1
import novaclient
from .log import LOG
class Compute(object):
"""Class to interface with nova and glance."""
def __init__(self, nova_client, glance_client, config):
"""Create a new compute instance to interact with nova and glance."""
self.novaclient = nova_client
self.glance_client = glance_client
self.config = config
def find_image(self, image_name):
"""Find an image by name."""
try:
return next(self.glance_client.images.list(filters={'name': image_name}), None)
except (novaclient.exceptions.NotFound, keystoneauth1.exceptions.http.NotFound,
GlanceImageNotFound):
pass
return None
def upload_image_via_url(self, final_image_name, image_file, retry_count=60):
"""Directly upload image to Nova via URL if image is not present."""
retry = 0
try:
# check image is file/url based.
with open(image_file) as f_image:
img = self.glance_client.images.create(name=str(final_image_name),
disk_format="qcow2",
container_format="bare",
visibility="public")
self.glance_client.images.upload(img.id, image_data=f_image)
# Check for the image in glance
while img.status in ['queued', 'saving'] and retry < retry_count:
img = self.glance_client.images.get(img.id)
retry += 1
LOG.debug("Image not yet active, retrying %s of %s...", retry, retry_count)
time.sleep(self.config.generic_poll_sec)
if img.status != 'active':
LOG.error("Image uploaded but too long to get to active state")
raise Exception("Image update active state timeout")
except glance_exception.HTTPForbidden:
LOG.error("Cannot upload image without admin access. Please make "
"sure the image is uploaded and is either public or owned by you.")
return False
except IOError:
# catch the exception for file based errors.
LOG.error("Failed while uploading the image. Please make sure the "
"image at the specified location %s is correct.", image_file)
return False
except keystoneauth1.exceptions.http.NotFound as exc:
LOG.error("Authentication error while uploading the image: %s", str(exc))
return False
except Exception:
LOG.error(traceback.format_exc())
LOG.error("Failed to upload image %s.", image_file)
return False
return True
def delete_image(self, img_name):
"""Delete an image by name."""
try:
LOG.log("Deleting image %s...", img_name)
img = self.find_image(image_name=img_name)
self.glance_client.images.delete(img.id)
except Exception:
LOG.error("Failed to delete the image %s.", img_name)
return False
return True
def image_multiqueue_enabled(self, img):
"""Check if multiqueue property is enabled on given image."""
try:
return img['hw_vif_multiqueue_enabled'] == 'true'
except KeyError:
return False
def image_set_multiqueue(self, img, enabled):
"""Set multiqueue property as enabled or disabled on given image."""
cur_mqe = self.image_multiqueue_enabled(img)
LOG.info('Image %s hw_vif_multiqueue_enabled property is "%s"',
img.name, str(cur_mqe).lower())
if cur_mqe != enabled:
mqe = str(enabled).lower()
self.glance_client.images.update(img.id, hw_vif_multiqueue_enabled=mqe)
img['hw_vif_multiqueue_enabled'] = mqe
LOG.info('Image %s hw_vif_multiqueue_enabled property changed to "%s"', img.name, mqe)
# Create a server instance with name vmname
# and check that it gets into the ACTIVE state
def create_server(self, vmname, image, flavor, key_name,
nic, sec_group, avail_zone=None, user_data=None,
config_drive=None, files=None):
"""Create a new server."""
if sec_group:
security_groups = [sec_group['id']]
else:
security_groups = None
# Also attach the created security group for the test
LOG.info('Creating instance %s with AZ: "%s"', vmname, avail_zone)
instance = self.novaclient.servers.create(name=vmname,
image=image,
flavor=flavor,
key_name=key_name,
nics=nic,
availability_zone=avail_zone,
userdata=user_data,
config_drive=config_drive,
files=files,
security_groups=security_groups)
return instance
def poll_server(self, instance):
"""Poll a server from its reference."""
return self.novaclient.servers.get(instance.id)
def get_server_list(self):
"""Get the list of all servers."""
servers_list = self.novaclient.servers.list()
return servers_list
def delete_server(self, server):
"""Delete a server from its object reference."""
self.novaclient.servers.delete(server)
def find_flavor(self, flavor_type):
"""Find a flavor by name."""
try:
flavor = self.novaclient.flavors.find(name=flavor_type)
return flavor
except Exception:
return None
def create_flavor(self, name, ram, vcpus, disk, ephemeral=0):
"""Create a flavor."""
return self.novaclient.flavors.create(name=name, ram=ram, vcpus=vcpus, disk=disk,
ephemeral=ephemeral)
def get_hypervisor(self, hyper_name):
"""Get the hypervisor from its name.
Can raise novaclient.exceptions.NotFound
"""
# first get the id from name
hyper = self.novaclient.hypervisors.search(hyper_name)[0]
# get full hypervisor object
return self.novaclient.hypervisors.get(hyper.id)
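# Illustrative usage (assumes already-authenticated nova/glance clients and a config
# object exposing generic_poll_sec; a sketch only):
# compute = Compute(nova_client, glance_client, config)
# if compute.find_image("nfvbench-vm") is None:
#     compute.upload_image_via_url("nfvbench-vm", "/path/to/image.qcow2")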
| 43.159091 | 98 | 0.590837 | 882 | 7,596 | 4.968254 | 0.269841 | 0.024646 | 0.025559 | 0.030123 | 0.11456 | 0.057052 | 0.019626 | 0.019626 | 0.019626 | 0 | 0 | 0.003725 | 0.328594 | 7,596 | 175 | 99 | 43.405714 | 0.85549 | 0.200632 | 0 | 0.163793 | 0 | 0 | 0.118644 | 0.016781 | 0 | 0 | 0 | 0 | 0 | 1 | 0.112069 | false | 0.008621 | 0.077586 | 0 | 0.353448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3be87595c16df464ab239892d3d8c6c850157e41 | 7,295 | py | Python | icebrk/histos.py | mieskolainen/icenet | 030e2ab658ebc1d83f20cb24dca2bb46b8ac44ca | [
"MIT"
] | null | null | null | icebrk/histos.py | mieskolainen/icenet | 030e2ab658ebc1d83f20cb24dca2bb46b8ac44ca | [
"MIT"
] | 2 | 2020-03-01T09:10:05.000Z | 2021-05-25T20:48:12.000Z | icebrk/histos.py | mieskolainen/icenet | 030e2ab658ebc1d83f20cb24dca2bb46b8ac44ca | [
"MIT"
] | 1 | 2020-02-28T13:37:41.000Z | 2020-02-28T13:37:41.000Z | # B/RK analyzer observables and histograms
#
#
# Mikael Mieskolainen, 2020
# m.mieskolainen@imperial.ac.uk
import bz2
import numpy as np
import iceplot
import icebrk.tools as tools
obs_M = {
# Axis limits
'xlim' : (4.5, 6.5),
'ylim' : None,
'xlabel' : r'System $M$',
'ylabel' : r'Candidates',
'units' : r'GeV',
'label' : r'3-body system invariant mass',
'figsize' : (4,4),
# Histogramming
'bins' : np.arange(0.0, 10.0 + 0.1, 0.1),
'density' : False,
# Function to calculate
'func' : None,
# Disk save
'pickle' : True
}
obs_Pt = {
# Axis limits
'xlim' : (0, 25.0),
'ylim' : None,
'xlabel' : r'System $P_t$',
'ylabel' : r'Candidates',
'units' : r'GeV',
    'label'     : r'System transverse momentum',
'figsize' : (4,4),
# Histogramming
'bins' : np.arange(0.0, 25.0 + 1.0, 1.0),
'density' : False,
# Function to calculate
'func' : None,
# Disk save
'pickle' : True
}
obs_q2 = {
# Axis limits
'xlim' : (0, 12.0),
'ylim' : None,
'xlabel' : r'Electron pair $q^2$',
'ylabel' : r'Candidates',
'units' : r'GeV$^2$',
    'label'     : r'Electron pair invariant mass squared',
'figsize' : (4,4),
# Histogramming
'bins' : np.arange(0, 12 + 0.4, 0.4),
'density' : False,
# Function to calculate
'func' : None,
# Disk save
'pickle' : True
}
obs_pt_l1 = {
# Axis limits
'xlim' : (0, 12.0),
'ylim' : None,
'xlabel' : r'Leading electron $p_t$',
'ylabel' : r'Candidates',
'units' : r'GeV',
'label' : r'Leading $e$ transverse momentum',
'figsize' : (4,4),
# Histogramming
'bins' : np.arange(0, 12 + 0.5, 0.5),
'density' : False,
# Function to calculate
'func' : None,
# Disk save
'pickle' : False
}
obs_pt_l2 = {
# Axis limits
'xlim' : (0, 12.0),
'ylim' : None,
'xlabel' : r'Sub-leading electron $p_t$',
'ylabel' : r'Candidates',
'units' : r'GeV',
'label' : r'Sub-leading transverse momentum',
'figsize' : (4,4),
# Histogramming
'bins' : np.arange(0, 12 + 0.5, 0.5),
'density' : False,
# Function to calculate
'func' : None,
# Disk save
'pickle' : False
}
obs_pt_k = {
# Axis limits
'xlim' : (0, 12.0),
'ylim' : None,
'xlabel' : r'Kaon $p_t$',
'ylabel' : r'Candidates',
'units' : r'GeV',
'label' : r'Kaon transverse momentum',
'figsize' : (4,4),
# Histogramming
'bins' : np.arange(0, 12 + 0.5, 0.5),
'density' : False,
# Function to calculate
'func' : None,
# Disk save
'pickle' : False
}
# ** MC ONLY **
obs_first_t3i = {
# Axis limits
'xlim' : (-1, 20),
'ylim' : None,
'xlabel' : r'First signal triplet',
'ylabel' : r'Events',
'units' : r'MC index',
'label' : r'First signal triplet',
'figsize' : (4,4),
# Histogramming
'bins' : np.arange(-1, 20 + 1, 1),
'density' : False,
# Function to calculate
'func' : None,
# Disk save
'pickle' : False
}
obs_last_t3i = {
# Axis limits
'xlim' : (-1, 20),
'ylim' : None,
'xlabel' : r'Last signal triplet',
'ylabel' : r'Events',
'units' : r'MC index',
'label' : r'Last signal triplet',
'figsize' : (4,4),
# Histogramming
'bins' : np.arange(-1, 20 + 0.5, 0.5),
'density' : False,
# Function to calculate
'func' : None,
# Disk save
'pickle' : False
}
obs_N_signal_t3 = {
# Axis limits
'xlim' : (-1, 20),
'ylim' : None,
'xlabel' : r'Number of signal triplets',
'ylabel' : r'Events',
'units' : r'',
'label' : r'Number of signal triplets',
'figsize' : (4,4),
# Histogramming
'bins' : np.arange(-1, 20 + 0.5, 0.5),
'density' : False,
# Function to calculate
'func' : None,
# Disk save
'pickle' : False
}
# Dictionary of all batch observables
obs_all = {
# JAGGED
'M' : obs_M,
'Pt' : obs_Pt,
'q2' : obs_q2,
'pt_l1' : obs_pt_l1,
'pt_l2' : obs_pt_l2,
'pt_k' : obs_pt_k,
# NORMAL
'first_t3i' : obs_first_t3i,
'last_t3i' : obs_last_t3i,
'N_signal_t3' : obs_N_signal_t3,
'N_signal_pfpf_t3' : obs_N_signal_t3,
'N_signal_lowlow_t3' : obs_N_signal_t3
}
def calc_batch_observables(l1_p4, l2_p4, k_p4):
"""JAGGED + VECTORIZED (operates on event batch) observables.
Args:
        l1_p4: leading electron four-momentum
        l2_p4: sub-leading electron four-momentum
        k_p4: kaon four-momentum
Returns:
x: Observables
"""
x = {
'M' : None,
'Pt' : None,
'q2' : None,
'pt_l1' : None,
'pt_l2' : None,
'pt_k' : None
}
x['M'] = (l1_p4['e'] + l2_p4['e'] + k_p4['k']).mass
x['Pt'] = (l1_p4['e'] + l2_p4['e'] + k_p4['k']).pt
x['q2'] = (l1_p4['e'] + l2_p4['e']).mass2
x['pt_l1'] = l1_p4['e'].pt
x['pt_l2'] = l2_p4['e'].pt
x['pt_k'] = k_p4['k'].pt
return x
def calc_batch_MC_observables(d, l1_p4, l2_p4, k_p4):
""" MC ONLY batch observables.
Args:
d:
l1_p4:
l2_p4:
k_p4:
Returns:
x
"""
x = {
}
return x
def calc_observables(evt_index, d, l1_p4, l2_p4, k_p4, sets, MAXT3):
"""NON-JAGGED (NORMAL) observables.
Args:
l1_p4:
l2_p4:
k_p4:
Returns:
x: Observables
"""
x = {
}
return x
vals = {
'init' : False,
}
vals_pfpf = {
'init' : False,
}
vals_lowlow = {
'init' : False,
}
def calc_MC_observables(evt_index, d, l1_p4, l2_p4, k_p4, sets, MAXT3):
"""MC ONLY observables.
Args:
evt_index:
d:
l1_p4:
l2_p4:
k_p4:
sets:
MAXT3:
Returns:
x: Observables
"""
if vals['init'] == False:
vals['init'] = True
for i in range(100):
vals[str(i)] = 0
if vals_pfpf['init'] == False:
vals_pfpf['init'] = True
for i in range(100):
vals_pfpf[str(i)] = 0
if vals_lowlow['init'] == False:
vals_lowlow['init'] = True
for i in range(100):
vals_lowlow[str(i)] = 0
x = {
'first_t3i' : None,
'last_t3i' : None,
'N_signal_t3' : None,
'N_signal_pfpf_t3' : None,
'N_signal_lowlow_t3' : None
}
# Number of signal triplets
x['N_signal_t3'] = np.sum(d['_BToKEE_is_signal'][evt_index])
x['N_signal_lowlow_t3'] = np.sum(d['_BToKEE_is_signal'][evt_index] & d['Electron_isLowPt'][d['BToKEE_l1Idx']][evt_index] & d['Electron_isLowPt'][d['BToKEE_l2Idx']][evt_index])
x['N_signal_pfpf_t3'] = np.sum(d['_BToKEE_is_signal'][evt_index] & d['Electron_isPF'][d['BToKEE_l1Idx']][evt_index] & d['Electron_isPF'][d['BToKEE_l2Idx']][evt_index])
vals[str(x['N_signal_t3'])] += 1
vals_lowlow[str(x['N_signal_lowlow_t3'])] += 1
vals_pfpf[str(x['N_signal_pfpf_t3'])] += 1
# The first signal index
x['first_t3i'] = tools.index_of_first_signal(evt_index, d, sets, MAXT3)
# The last signal index
x['last_t3i'] = tools.index_of_last_signal(evt_index, d, sets, MAXT3)
#print(list(vals.values())[0:22])
#print(list(vals_lowlow.values())[0:22])
#print(list(vals_pfpf.values())[0:22])
return x
def pickle_files(iodir, N_algo, label, mode='rb'):
"""Open pickle files.
Args:
iodir:
N_algo:
label:
mode: mode = 'rb' (read binary), 'ab' (append binary), 'wb' (write binary)
Returns:
x: Observables
"""
wfile = {'S': dict(), 'B': dict()}
for ID in wfile.keys():
for i in range(N_algo):
wfile[ID][str(i)] = bz2.BZ2File(iodir + f'/{label}_{ID}_weights_{i}.bz2', mode)
obsfile = {'S': dict(), 'B': dict()}
for ID in obsfile.keys():
for obs in obs_all.keys():
if obs_all[obs]['pickle']:
obsfile[ID][obs] = bz2.BZ2File(iodir + f'/{label}_{ID}_obs_{obs}.bz2', mode)
return obsfile, wfile | 18.421717 | 176 | 0.581768 | 1,095 | 7,295 | 3.700457 | 0.147032 | 0.027641 | 0.031096 | 0.033317 | 0.659674 | 0.571076 | 0.52542 | 0.480257 | 0.448421 | 0.406466 | 0 | 0.046942 | 0.229061 | 7,295 | 396 | 177 | 18.421717 | 0.673542 | 0.205894 | 0 | 0.42029 | 0 | 0 | 0.282355 | 0.009901 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024155 | false | 0 | 0.019324 | 0 | 0.067633 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3be9d1e049c1fb214a64d9d738d40794ebcac547 | 1,351 | py | Python | eventarc/generic/main.py | glasnt/python-docs-samples | 9869d47937b45840072485f86248438efa1c955d | [
"Apache-2.0"
] | 5,938 | 2015-05-18T05:04:37.000Z | 2022-03-31T20:16:39.000Z | eventarc/generic/main.py | glasnt/python-docs-samples | 9869d47937b45840072485f86248438efa1c955d | [
"Apache-2.0"
] | 4,730 | 2015-05-07T19:00:38.000Z | 2022-03-31T21:59:41.000Z | eventarc/generic/main.py | FFHixio/python-docs-samples | b39441b3ca0a7b27e9c141e9b43e78e729105573 | [
"Apache-2.0"
] | 6,734 | 2015-05-05T17:06:20.000Z | 2022-03-31T12:02:51.000Z | # Copyright 2020 Google, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# [START eventarc_generic_server]
import os
from flask import Flask, request
app = Flask(__name__)
# [END eventarc_generic_server]
# [START eventarc_generic_handler]
@app.route('/', methods=['POST'])
def index():
print('Event received!')
print('HEADERS:')
headers = dict(request.headers)
    headers.pop('Authorization', None)  # do not log the Authorization header if present
print(headers)
print('BODY:')
body = dict(request.json)
print(body)
resp = {
"headers": headers,
"body": body
}
return (resp, 200)
# [END eventarc_generic_handler]
# [START eventarc_generic_server]
if __name__ == "__main__":
app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
# [END eventarc_generic_server]
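# Illustrative local test (assumes the server is running on the default port; the handler echoes headers and body):
#   curl -X POST localhost:8080 -H "Content-Type: application/json" -d '{"hello": "world"}'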
| 26.490196 | 83 | 0.705403 | 187 | 1,351 | 4.967914 | 0.582888 | 0.064586 | 0.09042 | 0.034446 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01721 | 0.182828 | 1,351 | 50 | 84 | 27.02 | 0.824275 | 0.57587 | 0 | 0 | 0 | 0 | 0.137681 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.1 | 0 | 0.2 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3beabe3d594047ea6d67c0f23051c0b95cff8a3b | 2,725 | py | Python | isaactest/tests/symbolic_q_text_entry_correct.py | jsharkey13/isaac-selenium-testing | fc57ec57179cf7d9f0bb5ef46d759792b2af3bc8 | [
"MIT"
] | null | null | null | isaactest/tests/symbolic_q_text_entry_correct.py | jsharkey13/isaac-selenium-testing | fc57ec57179cf7d9f0bb5ef46d759792b2af3bc8 | [
"MIT"
] | 1 | 2016-01-15T11:28:06.000Z | 2016-01-25T17:09:18.000Z | isaactest/tests/symbolic_q_text_entry_correct.py | jsharkey13/isaac-selenium-testing | fc57ec57179cf7d9f0bb5ef46d759792b2af3bc8 | [
"MIT"
] | 1 | 2019-05-14T16:53:49.000Z | 2019-05-14T16:53:49.000Z | import time
from ..utils.log import log, INFO, ERROR, PASS
from ..utils.isaac import answer_symbolic_q_text_entry, open_accordion_section, submit_login_form, assert_logged_in
from ..utils.i_selenium import assert_tab, image_div
from ..utils.i_selenium import wait_for_xpath_element
from ..tests import TestWithDependency
from selenium.common.exceptions import TimeoutException, NoSuchElementException
__all__ = ["symbolic_q_text_entry_correct"]
#####
# Test : Symbolic Questions Text Entry Correct Answers
#####
@TestWithDependency("SYMBOLIC_Q_TEXT_ENTRY_CORRECT")
def symbolic_q_text_entry_correct(driver, Users, ISAAC_WEB, WAIT_DUR, **kwargs):
"""Test if symbolic questions can be answered correctly with text entry.
- 'driver' should be a Selenium WebDriver.
- 'ISAAC_WEB' is the string URL of the Isaac website to be tested.
- 'WAIT_DUR' is the time in seconds to wait for JavaScript to run/load.
"""
assert_tab(driver, ISAAC_WEB)
driver.get(ISAAC_WEB + "/questions/_regression_test_")
time.sleep(WAIT_DUR)
assert_tab(driver, ISAAC_WEB + "/questions/_regression_test_")
time.sleep(WAIT_DUR)
try:
open_accordion_section(driver, 4)
sym_question = driver.find_element_by_xpath("//div[@ng-switch-when='isaacSymbolicQuestion']")
except NoSuchElementException:
log(ERROR, "Can't find the symbolic question; can't continue!")
return False
log(INFO, "Attempt to enter correct answer.")
if not answer_symbolic_q_text_entry(sym_question, "(((x)))", wait_dur=WAIT_DUR):
log(ERROR, "Couldn't answer symbolic Question; can't continue!")
return False
time.sleep(WAIT_DUR)
try:
wait_for_xpath_element(driver, "//div[@ng-switch-when='isaacSymbolicQuestion']//h1[text()='Correct!']")
log(INFO, "A 'Correct!' message was displayed as expected.")
wait_for_xpath_element(driver, "(//div[@ng-switch-when='isaacSymbolicQuestion']//p[text()='This is a correct choice. It requires an exact match!'])[2]")
log(INFO, "The editor entered explanation text was correctly shown.")
wait_for_xpath_element(driver, "//div[@ng-switch-when='isaacSymbolicQuestion']//strong[text()='Well done!']")
log(INFO, "The 'Well done!' message was correctly shown.")
log(INFO, "Avoid rate limiting: wait 1 minute.")
time.sleep(WAIT_DUR)
log(PASS, "Symbolic Question 'correct value, correct unit' behavior as expected.")
return True
except TimeoutException:
image_div(driver, "ERROR_symbolic_q_correct")
log(ERROR, "The messages shown for a correct answer were not all displayed; see 'ERROR_symbolic_q_correct.png'!")
return False
| 48.660714 | 160 | 0.716697 | 365 | 2,725 | 5.128767 | 0.350685 | 0.029915 | 0.034722 | 0.048077 | 0.332265 | 0.189637 | 0.189637 | 0.14797 | 0.14797 | 0.097756 | 0 | 0.001774 | 0.172477 | 2,725 | 55 | 161 | 49.545455 | 0.828381 | 0.112294 | 0 | 0.219512 | 0 | 0.02439 | 0.39385 | 0.174389 | 0 | 0 | 0 | 0 | 0.097561 | 1 | 0.02439 | false | 0.04878 | 0.170732 | 0 | 0.292683 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bebc91db5a12916cb4f790c22d615d4c1e07f21 | 2,086 | py | Python | benchmarks/tests/test_partitioned_dataset_filter_benchmark.py | JayjeetAtGithub/benchmarks | 0a7540fc0869b01d6c05850839211177e8260d17 | [
"MIT"
] | null | null | null | benchmarks/tests/test_partitioned_dataset_filter_benchmark.py | JayjeetAtGithub/benchmarks | 0a7540fc0869b01d6c05850839211177e8260d17 | [
"MIT"
] | 3 | 2021-07-31T18:12:23.000Z | 2021-08-05T19:09:55.000Z | benchmarks/tests/test_partitioned_dataset_filter_benchmark.py | JayjeetAtGithub/benchmarks | 0a7540fc0869b01d6c05850839211177e8260d17 | [
"MIT"
] | 2 | 2021-09-02T10:34:06.000Z | 2021-09-02T17:16:07.000Z | import copy
import pytest
from .. import partitioned_dataset_filter_benchmark
from ..tests._asserts import assert_context, assert_cli, R_CLI
HELP = """
Usage: conbench partitioned-dataset-filter [OPTIONS]
Run partitioned-dataset-filter benchmark(s).
For each benchmark option, the first option value is the default.
Valid benchmark combinations:
--query=vignette
--query=payment_type_3
--query=small_no_files
--query=count_rows
To run all combinations:
$ conbench partitioned-dataset-filter --all=true
Options:
--query [count_rows|payment_type_3|small_no_files|vignette]
--all BOOLEAN [default: False]
--iterations INTEGER [default: 1]
--drop-caches BOOLEAN [default: False]
--cpu-count INTEGER
--show-result BOOLEAN [default: True]
--show-output BOOLEAN [default: False]
--run-id TEXT Group executions together with a run id.
--run-name TEXT Name of run (commit, pull request, etc).
--help Show this message and exit.
"""
def assert_benchmark(result, source, name, case, language="Python"):
munged = copy.deepcopy(result)
expected = {
"name": name,
"dataset": source,
"cpu_count": None,
}
if language == "R":
expected["query"] = case[0]
expected["language"] = "R"
assert munged["tags"] == expected
assert_context(munged, language=language)
benchmark = partitioned_dataset_filter_benchmark.PartitionedDatasetFilterBenchmark()
cases, case_ids = benchmark.cases, benchmark.case_ids
@pytest.mark.parametrize("case", cases, ids=case_ids)
def test_partitioned_dataset_filter(case):
pytest.skip("needs a test partitioned dataset")
[(result, output)] = benchmark.run(case, iterations=1)
assert_benchmark(result, "dataset-taxi-parquet", benchmark.name, case, language="R")
assert R_CLI in str(output)
def test_partitioned_dataset_filter_cli():
command = ["conbench", "partitioned-dataset-filter", "--help"]
assert_cli(command, HELP)
| 30.676471 | 88 | 0.678811 | 248 | 2,086 | 5.564516 | 0.387097 | 0.117391 | 0.13913 | 0.071739 | 0.044928 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003032 | 0.209492 | 2,086 | 67 | 89 | 31.134328 | 0.833839 | 0 | 0 | 0 | 0 | 0 | 0.513423 | 0.095398 | 0 | 0 | 0 | 0 | 0.14 | 1 | 0.06 | false | 0 | 0.08 | 0 | 0.14 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3becd87456f2ea3698a22391c714b9a077b4d0dc | 1,328 | py | Python | plot.py | green-cabbage/delphes | 40e178512a3f3e9761e056e392510c173e5875bb | [
"MIT"
] | null | null | null | plot.py | green-cabbage/delphes | 40e178512a3f3e9761e056e392510c173e5875bb | [
"MIT"
] | null | null | null | plot.py | green-cabbage/delphes | 40e178512a3f3e9761e056e392510c173e5875bb | [
"MIT"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
branch_names = [ "ElectronCHS", "MuonTightCHS"] #["Jet", "ElectronCHS", "MuonTightCHS"]
for branch_name in branch_names:
data = np.load("cut_data_pair_" + branch_name + ".npy")
print(data.shape)
names = ["Pt", "Eta", "Phi"] # list of values to plot
names_ranges = [ [0, 500], [-3,3], [-3.2, 3.2]] # list of ranges for the respective values
print(data.shape)
for name_idx in range(len(names)):
bins = np.linspace(names_ranges[name_idx][0], names_ranges[name_idx][1], 15)
plt.hist(data[name_idx,:], bins, label =branch_name+" "+names[name_idx])
# plt.title(branch_name+" "+names[name_idx])
# plt.savefig(branch_name+" "+names[name_idx]+".png")
plt.title(branch_name+" pairing "+names[name_idx])
plt.savefig(branch_name+" pairing "+names[name_idx]+".png")
plt.clf()
delta_data = np.load("cut_data_deltas.npy")
print(delta_data.shape)
delta_names = ["delta eta", "delta phi"]
delta_names_ranges = [[0,4], [0,3] ]
for idx in range(len(delta_data)):
# print(np.max(delta_data[idx]))
bins = np.linspace(delta_names_ranges[idx][0], delta_names_ranges[idx][1], 15)
plt.hist(delta_data[idx], bins)
plt.title(delta_names[idx])
plt.savefig(delta_names[idx]+".png")
plt.clf() | 41.5 | 95 | 0.654367 | 201 | 1,328 | 4.124378 | 0.273632 | 0.075995 | 0.072376 | 0.068758 | 0.264174 | 0.162847 | 0.077201 | 0 | 0 | 0 | 0 | 0.02 | 0.171687 | 1,328 | 32 | 96 | 41.5 | 0.733636 | 0.171687 | 0 | 0.16 | 0 | 0 | 0.103196 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.08 | 0 | 0.08 | 0.12 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bef99c1c951097e71ec5419ad1ef35132aab840 | 834 | py | Python | tests/fractaltree/test_ft_merge.py | alexgorji/musurgia | 81d37afbf1ac70348002a93299db228b5ed4a591 | [
"MIT"
] | null | null | null | tests/fractaltree/test_ft_merge.py | alexgorji/musurgia | 81d37afbf1ac70348002a93299db228b5ed4a591 | [
"MIT"
] | 45 | 2020-02-24T19:37:00.000Z | 2021-04-06T16:13:56.000Z | tests/fractaltree/test_ft_merge.py | alexgorji/musurgia | 81d37afbf1ac70348002a93299db228b5ed4a591 | [
"MIT"
] | null | null | null | import os
from unittest import TestCase
from musurgia.fractaltree.fractaltree import FractalTree
path = os.path.abspath(__file__).split('.')[0]
class Test(TestCase):
def test_1(self):
ft = FractalTree(proportions=(1, 2, 3, 4, 5), tree_permutation_order=(3, 5, 1, 2, 4), value=10)
ft.add_layer()
# ft.add_layer()
# print(ft.get_leaves(key=lambda leaf: leaf.index))
# print(ft.get_leaves(key=lambda leaf: leaf.fractal_order))
# print(ft.get_leaves(key=lambda leaf: round(float(leaf.value), 2)))
ft.merge_children(1, 2, 2)
# print(ft.get_leaves(key=lambda leaf: leaf.index))
self.assertEqual(ft.get_leaves(key=lambda leaf: leaf.fractal_order), [3, 5, 2])
self.assertEqual(ft.get_leaves(key=lambda leaf: round(float(leaf.value), 2)), [2.0, 4.0, 4.0]) | 41.7 | 103 | 0.660671 | 129 | 834 | 4.131783 | 0.333333 | 0.056285 | 0.123827 | 0.157599 | 0.532833 | 0.532833 | 0.532833 | 0.523452 | 0.457786 | 0.165103 | 0 | 0.041116 | 0.183453 | 834 | 20 | 104 | 41.7 | 0.741557 | 0.286571 | 0 | 0 | 0 | 0 | 0.001695 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0.090909 | false | 0 | 0.272727 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bf2ecc83d0cb4cc86ae04e283e5526d9a396d97 | 2,527 | py | Python | src/headers_keys.py | dadosjusbr/parser-mpsc | 9949b90efd95af8cdc0cf83b4fdb95d009a89789 | [
"MIT"
] | null | null | null | src/headers_keys.py | dadosjusbr/parser-mpsc | 9949b90efd95af8cdc0cf83b4fdb95d009a89789 | [
"MIT"
] | null | null | null | src/headers_keys.py | dadosjusbr/parser-mpsc | 9949b90efd95af8cdc0cf83b4fdb95d009a89789 | [
"MIT"
] | null | null | null | CONTRACHEQUE = "contracheque"
INDENIZACOES = "indenizacoes"
INDENIZACOES_2021 = "indenizacoes_2021"
HEADERS = {
CONTRACHEQUE: {
"Remuneração do Cargo Efetivo": 4,
"Outras Verbas Remuneratórias Legais/Judiciais": 5,
"Função de Confiança ou Cargo em Comissão": 6,
"Adicional de Férias": 7,
"Abono de Permanência": 8,
"Retenção do Teto": 12,
"Imposto de Renda": 13,
"Contribuição Previdenciária": 14,
},
INDENIZACOES: {
"Ajuda de Custo": 7,
"Auxílio-Alimentação": 8,
"Auxílio-Creche": 9,
"Auxílio-Educação": 10,
"Auxílio-Moradia": 11,
"Auxílio-Saúde": 12,
"Auxílio-Transporte Estagiários": 13,
"Conversão de Licença-Prêmio": 14,
"Indenização de Férias": 15,
"Indenização de Transporte": 16,
"Ressarcimento de Despesas": 17,
"Ressarcimento por uso de veículo próprio": 18,
"Ajuda de Custo - remu": 20,
"Auxílio-Educação - remu": 21,
"Auxílio-Saúde - remu": 22,
"Diferença de Entrância - remu": 23,
"Diferenças Salariais – Ajustes - remu": 24,
"Estorno de tributos e contribuições - remu": 25,
"Gratificação Turma de Recursos - remu": 26,
"Gratificação Coordenador Administrativo - remu": 27,
"Gratificação por Cumulação de Função - remu": 28,
"Horas-Extras - remu": 29,
"Substituição de cargo comissionado - remu": 30,
"Substituição de Função Gratificada - remu": 31
},
INDENIZACOES_2021: {
"Ajuda de Custo": 7,
"Auxílio-Alimentação": 8,
"Auxílio-Creche": 9,
"Auxílio-Educação": 10,
"Auxílio-Moradia": 11,
"Auxílio-Saúde": 12,
"Indenização da Licença Compensatória": 13,
"Indenização de Férias": 14,
"Indenização de Transporte": 15,
"Ressarcimento de Despesas": 16,
"Ressarcimento por uso de veículo próprio": 17,
"Conversão de Licença-Prêmio": 19,
"Diferença de Entrância": 20,
"Diferenças Salariais - Ajustes": 21,
"Estorno de tributos e contribuições": 22,
"Gratificação Programa ATUA": 23,
"Gratificação Coordenador Administrativo": 24,
"Gratificação Especial - Concurso": 25,
"Gratificação por Cumulação de Função": 26,
"Horas-Extras": 27,
"Indenização de Férias": 28,
"Substituição de cargo comissionado": 29,
"Substituição de Função Gratificada": 30
},
}
| 35.097222 | 61 | 0.600712 | 260 | 2,527 | 5.830769 | 0.403846 | 0.042876 | 0.023747 | 0.01715 | 0.251979 | 0.168865 | 0.122691 | 0.122691 | 0.122691 | 0.122691 | 0 | 0.061907 | 0.290463 | 2,527 | 71 | 62 | 35.591549 | 0.783045 | 0 | 0 | 0.181818 | 0 | 0 | 0.592006 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bf3f17b808d268cafd246137302bc3f68bfcd1d | 7,604 | py | Python | rapidTest/tests.py | aryaputra28/covidify-PerancanganWeb | 34d6d0017f44248c172fc58e6e1b138e23e68a95 | [
"Unlicense"
] | null | null | null | rapidTest/tests.py | aryaputra28/covidify-PerancanganWeb | 34d6d0017f44248c172fc58e6e1b138e23e68a95 | [
"Unlicense"
] | null | null | null | rapidTest/tests.py | aryaputra28/covidify-PerancanganWeb | 34d6d0017f44248c172fc58e6e1b138e23e68a95 | [
"Unlicense"
] | null | null | null | from django.test import TestCase, Client
from django.urls import resolve
from .views import rapidTest, form_Rapid, api_rapid
from .models import Rapid
class RapidAppTest(TestCase):
maxDiff = None
# Rapid Test
def test_apakah_url_rapidTest_ada(self):
response = Client().get('/rapidTest/')
self.assertEqual(response.status_code,200)
def test_apakah_di_halaman_Rapid_Test_ada_templatenya(self):
response = Client().get('/rapidTest/')
self.assertTemplateUsed(response, 'rapidTest/rapidTest.html')
def test_apakah_menggunakan_fungsi_rapidTest(self):
found = resolve('/rapidTest/')
self.assertEqual(found.func, rapidTest)
def test_apakah_di_halaman_Rapid_Test_ada_text_dan_tombol(self):
response = Client().get('/rapidTest/')
html_kembalian = response.content.decode('utf8')
self.assertIn("DAFTAR TEMPAT RAPID TEST", html_kembalian)
self.assertIn("Nama Tempat", html_kembalian)
self.assertIn("Tanggal Pelaksanaan", html_kembalian)
self.assertIn("Biaya", html_kembalian)
self.assertIn("Alamat", html_kembalian)
self.assertIn("Add", html_kembalian)
# Form Rapid
def test_apakah_url_formRapid_ada(self):
response = Client().get('/formRapid/')
self.assertEqual(response.status_code,200)
def test_apakah_di_halaman_Form_Rapid_ada_templatenya(self):
response = Client().get('/formRapid/')
self.assertTemplateUsed(response, 'rapidTest/formRapid.html')
def test_apakah_menggunakan_fungsi_form_Rapid(self):
found = resolve('/formRapid/')
self.assertEqual(found.func, form_Rapid)
def test_apakah_di_halaman_Form_Rapid_ada_text_dan_tombol(self):
response = Client().get('/formRapid/')
html_kembalian = response.content.decode('utf8')
self.assertIn("TAMBAH TEMPAT RAPID TEST", html_kembalian)
self.assertIn("Nama Tempat", html_kembalian)
self.assertIn("Tanggal Pelaksanaan", html_kembalian)
self.assertIn("Biaya", html_kembalian)
self.assertIn("Alamat", html_kembalian)
self.assertIn("Add", html_kembalian)
self.assertIn("Back", html_kembalian)
# views api_rapid
def test_apakah_url_dataRapid_ada(self):
response = Client().get('/dataRapid/')
self.assertEqual(response.status_code,200)
def test_apakah_menggunakan_fungsi_api_rapid(self):
found = resolve('/dataRapid/')
self.assertEqual(found.func, api_rapid)
def test_content_type_api_rapid(self):
response = Client().get('/dataRapid/')
self.assertEqual(response['content-type'], 'text/json-comment-filtered')
# Test Model
def test_apakah_sudah_ada_model_Rapid(self):
Rapid.objects.create(nama_tempat= "Labklin Kimia Farma Pontianak", tanggal_pelaksanaan_mulai= '2020-1-1',
tanggal_pelaksanaan_akhir= '2020-12-31', biaya= 150.000,
alamat= "Jl. Prof. M.Yamin No.A7, Sungai Bangkong, Kec. Pontianak Sel., Kota Pontianak, Kalimantan Barat")
hitung_object = Rapid.objects.all().count()
self.assertEqual(hitung_object,1)
# Test Form
def test_apakah_bisa_menyimpan_sebuah_POST_request(self):
response = self.client.post('/formRapid/', data={'nama_tempat':'Labklin Kimia Farma Pontianak',
'tanggal_pelaksanaan_mulai': '2020-1-1',
'tanggal_pelaksanaan_akhir': '2020-12-31', 'biaya':150.000,
'alamat':'Jl. Prof. M.Yamin No.A7, Sungai Bangkong, Kec. Pontianak Sel., Kota Pontianak, Kalimantan Barat'})
hitung_object = Rapid.objects.all().count()
self.assertEqual(hitung_object,1)
self.assertEqual(response.status_code, 302)
self.assertEqual(response['location'], '/rapidTest')
new_response = self.client.get('/rapidTest/')
html_response = new_response.content.decode('utf8')
self.assertIn('DAFTAR TEMPAT RAPID TEST', html_response)
# def test_apakah_data_json_sesuai(self):
# Rapid.objects.create(nama_tempat= "Labklin Kimia Farma Pontianak", tanggal_pelaksanaan_mulai= '2020-1-1',
# tanggal_pelaksanaan_akhir= '2020-12-31', biaya= 150.000,
# alamat= "Jl. Prof. M.Yamin No.A7, Sungai Bangkong, Kec. Pontianak Sel., Kota Pontianak, Kalimantan Barat")
# obj = Rapid.objects.all()
# # self.assertEqual(obj, [{"model": "rapidTest.rapid", "pk": 1, "fields": {"nama_tempat": "Labklin Kimia Farma Pontianak",
# # "tanggal_pelaksanaan_mulai": "2020-1-1", "tanggal_pelaksanaan_akhir": "2020-12-31",
# # "biaya": "150.000", "alamat": "Jl. Prof. M.Yamin No.A7, Sungai Bangkong, Kec. Pontianak Sel., Kota Pontianak, Kalimantan Barat"}}])
# self.assertEqual(obj, <QuerySet [<Rapid: Rapid obj (1)>]>)
# def test_apakah_pemanggilan_data_json_sesuai(self):
# response = Client().get('/dataRapid/')
# self.assertJSONEqual(response.content.decode("utf-8"), {"model": "rapidTest.rapid", "pk": 1, "fields": {"nama_tempat": "RS Restu Kasih", "tanggal_pelaksanaan_mulai": "2020-09-17",
# "tanggal_pelaksanaan_akhir": "2020-12-31", "biaya": "150.000",
# "alamat": "Jalan Raya Bogor KM.19 No.3A, Kramat Jati, RT.3/RW.1, Kramat Jati, Kec. Kramat jati, Kota Jakarta Timur, Daerah Khusus Ibukota Jakarta 13510, Kota Jakarta Timur 13510"}},
# {"model": "rapidTest.rapid", "pk": 2, "fields": {"nama_tempat": "RS Sumber Waras", "tanggal_pelaksanaan_mulai": "2020-09-17",
# "tanggal_pelaksanaan_akhir": "2020-12-31", "biaya": "135.000", "alamat": "Jl. Kyai Tapa No 1 RT 10 RW 10, Tomang, Kecamatan Grogol petamburan, Kota Jakarta Barat, Daerah Khusus Ibukota Jakarta 11440, Kota Jakarta Barat 11440"}},
# {"model": "rapidTest.rapid", "pk": 3, "fields": {"nama_tempat": "Labklin Kimia Farma Semarang Sutomo", "tanggal_pelaksanaan_mulai": "2020-09-17",
# "tanggal_pelaksanaan_akhir": "2020-12-31", "biaya": "150.000", "alamat": "Jl. Pemuda No.135, Sekayu, Kec. Semarang Tengah, Kota Semarang, Jawa Tengah 50132, Kota Semarang"}})
# def test_apakah_pemanggilan_data_json_tidak_sesuai(self):
# mod = Rapid.objects.all()
# m = mod.model
# self.assertEqual(m, Rapid)
# response = Client().get('/dataRapid/')
# self.assertJSONNotEqual(response.content.decode("utf-8"),{"model":"abc"})
# def test_POSTing_a_new_item(self):
# listt = Rapid.objects.create()
# response = Client().get('/dataRapid/')
# response2 = self.client.post(response,
# {"nama_tempat": "RS", "tanggal_pelaksanaan_mulai": "2020-09-17",
# "tanggal_pelaksanaan_akhir": "2020-12-31", "biaya": 150.000,
# "alamat": "Jalan i"})
# self.assertEqual(response2.status_code,200)
# new_item = listt.item_set.get()
# self.assertEqual(new_item.model, rapidTest.rapid) | 53.174825 | 295 | 0.613624 | 842 | 7,604 | 5.342043 | 0.195962 | 0.072032 | 0.043353 | 0.061138 | 0.62739 | 0.591374 | 0.501556 | 0.501556 | 0.418853 | 0.407959 | 0 | 0.042572 | 0.261704 | 7,604 | 143 | 296 | 53.174825 | 0.758639 | 0.425039 | 0 | 0.385714 | 0 | 0.028571 | 0.179908 | 0.028637 | 0 | 0 | 0 | 0 | 0.385714 | 1 | 0.185714 | false | 0 | 0.057143 | 0 | 0.271429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bf5c5154e6d0256a0f281bab0ab6ae78a23f1cf | 556 | py | Python | model/contact.py | umagda/python_training | 77fd6c6cecc1dac0f792408d4771067efa1e1a50 | [
"Apache-2.0"
] | null | null | null | model/contact.py | umagda/python_training | 77fd6c6cecc1dac0f792408d4771067efa1e1a50 | [
"Apache-2.0"
] | null | null | null | model/contact.py | umagda/python_training | 77fd6c6cecc1dac0f792408d4771067efa1e1a50 | [
"Apache-2.0"
] | null | null | null | class Contact:
def __init__(self, firstname, middlename, lastname, nickname, title, company, address, home, mobile, email,
bday, bmonth, byear):
self.firstname = firstname
self.middlename = middlename
self.lastname = lastname
self.nickname = nickname
self.title = title
self.company = company
self.address = address
self.home = home
self.mobile = mobile
self.email = email
self.bday = bday
self.bmonth = bmonth
self.byear = byear
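# Illustrative usage (arbitrary sample values; a sketch only):
# c = Contact("John", "Q", "Public", "jq", "Mr", "Acme Inc", "1 Main St",
#             "555-0100", "555-0101", "jq@example.com", "1", "January", "1990")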
| 30.888889 | 111 | 0.595324 | 57 | 556 | 5.736842 | 0.315789 | 0.079511 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.327338 | 556 | 17 | 112 | 32.705882 | 0.874332 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bf7808e577bd3264504ed20432d6ab987b8971b | 518 | py | Python | library/urls.py | GitHub-Harrison/gamers-library | a82f0fba7b75d6ec152f548b1bd401a4ae6bdb86 | [
"FSFAP"
] | 1 | 2022-02-23T10:47:43.000Z | 2022-02-23T10:47:43.000Z | library/urls.py | GitHub-Harrison/gamers-library | a82f0fba7b75d6ec152f548b1bd401a4ae6bdb86 | [
"FSFAP"
] | 8 | 2022-02-24T17:23:00.000Z | 2022-03-31T17:42:25.000Z | library/urls.py | GitHub-Harrison/gamers-library | a82f0fba7b75d6ec152f548b1bd401a4ae6bdb86 | [
"FSFAP"
] | null | null | null | from django.urls import path
from . import views
urlpatterns = [
path('library/', views.library, name='library'),
path('<slug:slug>', views.post_detail, name='post_detail'),
path('update_comment/<int:id>', views.update_comment, name='update_comment'),
    path('edit_comment/<int:id>', views.edit_comment, name='edit_comment'),  # POST endpoint used to edit a comment
    path('delete_comment/<int:id>', views.delete_comment, name='delete_comment'),  # POST endpoint used to delete a comment
]
| 43.166667 | 119 | 0.712355 | 73 | 518 | 4.90411 | 0.315068 | 0.122905 | 0.100559 | 0.142458 | 0.162011 | 0.162011 | 0.162011 | 0 | 0 | 0 | 0 | 0 | 0.137066 | 518 | 11 | 120 | 47.090909 | 0.800895 | 0.160232 | 0 | 0 | 0 | 0 | 0.333333 | 0.155093 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bfa121bd561f598df905f706b097e0a08c98d0c | 1,610 | py | Python | 2017/python/14.py | gcp825/advent_of_code | b4ea17572847e1a9044487041b3e12a0da58c94b | [
"MIT"
] | 1 | 2021-12-29T09:32:08.000Z | 2021-12-29T09:32:08.000Z | 2017/python/14.py | gcp825/advent_of_code | b4ea17572847e1a9044487041b3e12a0da58c94b | [
"MIT"
] | null | null | null | 2017/python/14.py | gcp825/advent_of_code | b4ea17572847e1a9044487041b3e12a0da58c94b | [
"MIT"
] | null | null | null | def knot_hash(data):
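    # AoC 2017 day 14 reuses the day 10 knot hash: the ASCII codes of the input
    # plus the standard suffix [17, 31, 73, 47, 23] give the twist lengths,
    # 64 rounds of reversals are applied, and each block of 16 numbers is
    # XOR-reduced into one byte of the 32-character hex "dense hash".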
lengths = list(map(ord,list(data))) + [17, 31, 73, 47, 23]; numbers = list(range(256)); i = 0; skip = 0
for _ in range(64):
for length in lengths:
numbers = numbers[i:] + numbers[:i]
numbers = numbers[:length][::-1] + numbers[length:]
numbers = numbers[i*-1:] + numbers[:i*-1]
i = (i + length + skip) % len(numbers)
skip += 1
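    # eval of 'a^b^...^p' XORs each block of 16 numbers; the ('0'+hex)[-2:]
    # slicing zero-pads every resulting byte to two hex digits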
return ''.join([('0'+hex(eval('^'.join(list(map(str,numbers[i:i+16])))))[2:])[-2:] for i in range(0,256,16)])
def setup_grid(salt,size):
grid = []
for n in range(size):
grid += [list(map(int,list(bin(int(knot_hash(salt+'-'+str(n)),16))[2:].zfill(size))))]
return grid
def determine_regions(grid,size):
x= 0; y = 0; regions = 0
while y < size:
while x < size:
if grid[y][x] == 1:
grid = explore_region(grid,size,y,x)
regions += 1
x += 1
x = 0; y += 1
return regions
def explore_region(grid,size,y,x):
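    # breadth-first flood fill from (y, x): every used cell is zeroed once it
    # is visited, so determine_regions will not count its region twice, and its
    # unvisited in-bounds 4-neighbours are enqueued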
members = set(); queue = [(y,x)]
while len(queue) > 0:
y,x = queue.pop(0)
if grid[y][x] == 1:
members.add((y,x))
queue += [(b,a) for b,a in [(y-1,x),(y+1,x),(y,x-1),(y,x+1)] if (b,a) not in members and 0 <= a < size and 0 <= b < size]
grid[y][x] = 0
return grid
def main(salt,size):
grid = setup_grid(salt,size)
used = sum(sum(row) for row in grid)
regions = determine_regions(grid,size)
return used, regions
print(main('jxqlasbh',128))
| 28.245614 | 133 | 0.499379 | 245 | 1,610 | 3.244898 | 0.261224 | 0.025157 | 0.015094 | 0.042767 | 0.080503 | 0.057862 | 0 | 0 | 0 | 0 | 0 | 0.050817 | 0.315528 | 1,610 | 56 | 134 | 28.75 | 0.670599 | 0 | 0 | 0.1 | 0 | 0 | 0.006832 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.25 | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bfb11496ec6cd2f9afd830e2139091e8846c971 | 589 | py | Python | gce-eventdata/mongo_load_cmds.py | TwoRavens/test-deploy | 1d6e7827190ae21b2daa59256c99573cb36d80dc | [
"Apache-2.0"
] | null | null | null | gce-eventdata/mongo_load_cmds.py | TwoRavens/test-deploy | 1d6e7827190ae21b2daa59256c99573cb36d80dc | [
"Apache-2.0"
] | null | null | null | gce-eventdata/mongo_load_cmds.py | TwoRavens/test-deploy | 1d6e7827190ae21b2daa59256c99573cb36d80dc | [
"Apache-2.0"
] | null | null | null | """Mongo commands"""
collections = ['acled_africa', 'acled_asia', 'acled_middle_east',
'cline_phoenix_fbis', 'cline_phoenix_nyt', 'cline_phoenix_swb',
'cline_speed', 'icews']
def show_cmds():
    """Print the drop + mongorestore command pair for each collection"""
    cnt = 0
    for cname in collections:
        cnt += 1
        cmd1 = f'db.{cname}.drop()'
        cmd2 = f'mongorestore -u AdminEvent --port 17231 --authenticationDatabase admin -d event_data -c {cname} /home/eventuser/dbs/{cname}.bson'
        print('-' * 40)
        print(f'({cnt}) {cname}')
        print('-' * 40)
        print(cmd1)
        print('')
        print(cmd2)

if __name__ == '__main__':
    show_cmds()
| 26.772727 | 143 | 0.611205 | 72 | 589 | 4.708333 | 0.666667 | 0.106195 | 0.070796 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032538 | 0.217317 | 589 | 21 | 144 | 28.047619 | 0.70282 | 0.023769 | 0 | 0.125 | 0 | 0.0625 | 0.488576 | 0.098418 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bfc586013864f3cd33bbfa8087bf2f878451472 | 1,597 | py | Python | app/core/management/utils/xsr_client.py | OpenLXP/openlxp-xia-jko | 971bc637aeaad1f3d998af983b52cb62f3141ea1 | [
"Apache-2.0"
] | null | null | null | app/core/management/utils/xsr_client.py | OpenLXP/openlxp-xia-jko | 971bc637aeaad1f3d998af983b52cb62f3141ea1 | [
"Apache-2.0"
] | 1 | 2021-07-16T22:53:05.000Z | 2021-07-16T22:53:05.000Z | app/core/management/utils/xsr_client.py | OpenLXP/openlxp-xia-jko | 971bc637aeaad1f3d998af983b52cb62f3141ea1 | [
"Apache-2.0"
] | null | null | null | import hashlib
import logging
import pandas as pd
from openlxp_xia.management.utils.xia_internal import get_key_dict
from core.models import XSRConfiguration
logger = logging.getLogger('dict_config_logger')
def read_source_file():
"""setting file path from s3 bucket"""
xsr_data = XSRConfiguration.objects.first()
file_name = xsr_data.source_file
extracted_data = pd.read_excel(file_name, engine='openpyxl')
std_source_df = extracted_data.where(pd.notnull(extracted_data),
None)
# Creating list of dataframes of sources
source_list = [std_source_df]
logger.debug("Sending source data in dataframe format for EVTVL")
# file_name.delete()
return source_list
def get_source_metadata_key_value(data_dict):
"""Function to create key value for source metadata """
# field names depend on source data and SOURCESYSTEM is system generated
field = ['LearningResourceIdentifier', 'SOURCESYSTEM']
field_values = []
for item in field:
if not data_dict.get(item):
logger.info('Field name ' + item + ' is missing for '
'key creation')
return None
field_values.append(data_dict.get(item))
# Key value creation for source metadata
key_value = '_'.join(field_values)
# Key value hash creation for source metadata
key_value_hash = hashlib.md5(key_value.encode('utf-8')).hexdigest()
# Key dictionary creation for source metadata
key = get_key_dict(key_value, key_value_hash)
return key
| 31.94 | 76 | 0.684408 | 205 | 1,597 | 5.117073 | 0.429268 | 0.068637 | 0.064824 | 0.062917 | 0.089609 | 0.062917 | 0 | 0 | 0 | 0 | 0 | 0.002465 | 0.237946 | 1,597 | 49 | 77 | 32.591837 | 0.859491 | 0.212899 | 0 | 0 | 0 | 0 | 0.127317 | 0.020951 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.178571 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3bfef50e91a2bd23f4fab75fac06ba1bf5b50898 | 6,487 | py | Python | detectron2/modeling/meta_arch/Image_classification/CLFE_Multi_head.py | dongdongdong1217/Detectron2-FC | 92356ebbf52b4e39c94537af26abcf46419c8c2f | [
"Apache-2.0"
] | 4 | 2022-01-02T07:06:58.000Z | 2022-01-08T05:04:43.000Z | detectron2/modeling/meta_arch/Image_classification/CLFE_Multi_head.py | dongdongdong1217/Detectron2-FC | 92356ebbf52b4e39c94537af26abcf46419c8c2f | [
"Apache-2.0"
] | null | null | null | detectron2/modeling/meta_arch/Image_classification/CLFE_Multi_head.py | dongdongdong1217/Detectron2-FC | 92356ebbf52b4e39c94537af26abcf46419c8c2f | [
"Apache-2.0"
] | 1 | 2022-01-02T11:46:23.000Z | 2022-01-02T11:46:23.000Z | from unittest import result
import torch.nn as nn
import torch
import numpy as np
from ..build import META_ARCH_REGISTRY
class CLFE_block(nn.Module):
def __init__(self,image_size=(224,224)):
super().__init__()
self.lin1 = torch.nn.Linear(image_size[0]*image_size[1],1)
self.fun = nn.ReLU(inplace=True)
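        # caution: calling .cuda() after requires_grad=True yields a non-leaf
        # copy that no optimizer tracks; wrap W in nn.Parameter if it is meant
        # to be trained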
self.W = torch.randn((512,224,224), requires_grad=True).cuda()
self.bn = nn.BatchNorm2d(256)
def forward(self,x):
        #--------------Feature attention module (CSFE)-------------#
batchsize = x.shape[0]
original_x = x
        # channel attention
x = torch.reshape(x,(batchsize,x.shape[1],x.shape[2]*x.shape[3]))
x = self.lin1(x)
x = self.fun(x)
x = torch.reshape(x,(x.shape[0],x.shape[1],1,1))
x = x * original_x
x = self.fun(x)
        # local attention
x = x * self.W
x = torch.sum(x,dim=1)
x = self.fun(x)
x = torch.reshape(x,(x.shape[0],1,x.shape[1],x.shape[2]))
x = original_x + x
return self.fun(self.bn(x))
class ScaledDotProductAttention(nn.Module):
def __init__(self):
super(ScaledDotProductAttention, self).__init__()
def forward(self, Q, K, V,d_k):
'''
Q: [batch_size, n_heads, len_q, d_k]
K: [batch_size, n_heads, len_k, d_k]
V: [batch_size, n_heads, len_v(=len_k), d_v]
attn_mask: [batch_size, n_heads, seq_len, seq_len]
'''
scores = torch.matmul(Q, K.transpose(-1, -2)) / np.sqrt(d_k) # scores : [batch_size, n_heads, len_q, len_k]
attn = nn.Softmax(dim=-1)(scores)
context = torch.matmul(attn, V) # [batch_size, n_heads, len_q, d_v]
return context
class MultiHeadAttention(nn.Module):
def __init__(self,d_model=512,d_k=64,d_v=64,n_heads=8):
super(MultiHeadAttention, self).__init__()
self.d_k = d_k
self.d_v = d_v
self.n_heads = n_heads
self.W_Q = nn.Linear(d_model, d_k * n_heads, bias=False)
self.W_K = nn.Linear(d_model, d_k * n_heads, bias=False)
self.W_V = nn.Linear(d_model, d_v * n_heads, bias=False)
self.fc = nn.Linear(n_heads * d_v, d_model, bias=False)
self.norm = nn.LayerNorm(d_model)
def forward(self, input_Q, input_K, input_V):
'''
input_Q: [batch_size, len_q, d_model]
input_K: [batch_size, len_k, d_model]
input_V: [batch_size, len_v(=len_k), d_model]
attn_mask: [batch_size, seq_len, seq_len]
'''
residual, batch_size = input_Q, input_Q.size(0)
# (B, S, D) -proj-> (B, S, D_new) -split-> (B, S, H, W) -trans-> (B, H, S, W)
Q = self.W_Q(input_Q).view(batch_size, -1, self.n_heads, self.d_k).transpose(1,2) # Q: [batch_size, n_heads, len_q, d_k]
K = self.W_K(input_K).view(batch_size, -1, self.n_heads, self.d_k).transpose(1,2) # K: [batch_size, n_heads, len_k, d_k]
V = self.W_V(input_V).view(batch_size, -1, self.n_heads, self.d_v).transpose(1,2) # V: [batch_size, n_heads, len_v(=len_k), d_v]
# context: [batch_size, n_heads, len_q, d_v], attn: [batch_size, n_heads, len_q, len_k]
context = ScaledDotProductAttention()(Q, K, V, self.d_k)
context = context.transpose(1, 2).reshape(batch_size, -1, self.n_heads * self.d_v) # context: [batch_size, len_q, n_heads * d_v]
output = self.fc(context) # [batch_size, len_q, d_model]
return self.norm(output + residual)
class PoswiseFeedForwardNet(nn.Module):
def __init__(self,d_model=512,d_ff=2048):
super(PoswiseFeedForwardNet, self).__init__()
self.fc = nn.Sequential(
nn.Linear(d_model, d_ff, bias=False),
nn.ReLU(),
nn.Linear(d_ff, d_model, bias=False)
)
self.norm = nn.LayerNorm(d_model)
def forward(self, inputs):
'''
inputs: [batch_size, seq_len, d_model]
'''
residual = inputs
output = self.fc(inputs)
return self.norm(output + residual)
@META_ARCH_REGISTRY.register()
class CLFE_Multi_head(nn.Module):
def __init__(self,cfg,image_size=(224,224)):
super().__init__()
self.conv1_1 = nn.Conv2d(3,256,3,1,1)
self.conv1_2 = nn.Conv2d(3,256,11,1,5)
self.bn1 = nn.BatchNorm2d(256)
self.fun1 = nn.ReLU(inplace=True)
self.CLFE = CLFE_block(image_size)
self.conv2 = nn.Conv2d(512,512,3,2)
self.conv3 = nn.Conv2d(512,512,3,2)
self.conv4 = nn.Conv2d(512,512,3,2)
self.conv5 = nn.Conv2d(512,512,3,2)
self.multi_head = MultiHeadAttention()
self.Feed_forward = PoswiseFeedForwardNet()
self.projection = nn.Sequential(
nn.LayerNorm(512),
nn.Linear(512, cfg.Arguments1)
)
def forward(self,data):
        #------------------Preprocessing (each entry of data carries image, label, width and height information)-----------------#
batchsize = len(data)
batch_images = []
batch_label = []
for i in range(0,batchsize,1):
batch_images.append(data[i]["image"])
batch_label.append(int(float(data[i]["y"])))
batch_images=[image.tolist() for image in batch_images]
batch_images_tensor = torch.tensor(batch_images,dtype=torch.float).cuda()
batchsize = batch_images_tensor.shape[0]
        #----------------Feature introduction module----------------#
x1 = self.fun1(self.bn1(self.conv1_1(batch_images_tensor)))
x2 = self.fun1(self.bn1(self.conv1_2(batch_images_tensor)))
x = torch.cat([x1, x2], dim=1)
        #--------------Feature attention module (CSFE)-------------#
x = self.CLFE(x)
        #-------------Resize the feature maps----------------#
x = self.fun1(self.bn1(self.conv2(x)))
x = self.fun1(self.bn1(self.conv3(x)))
x = self.fun1(self.bn1(self.conv4(x)))
x = self.fun1(self.bn1(self.conv5(x)))
x = x.reshape(batchsize,169,512)
        #-------------Strengthen features with multi-head attention-------------#
x = self.multi_head(x,x,x)
x = self.Feed_forward(x)
x = x.mean(dim = 1)
x = self.projection(x)
if self.training:
            # compute the loss value
batch_label = torch.tensor(batch_label,dtype=float).cuda()
loss_fun = nn.CrossEntropyLoss()
loss = loss_fun(x,batch_label.long())
return loss
else:
            # return the raw inference result directly
return x
# model = CLFE_Multi_head().cuda()
# image = torch.ones((4,3,224,224)).cuda()
# result = model(image)
# print(result.shape)
| 37.715116 | 137 | 0.575767 | 965 | 6,487 | 3.640415 | 0.158549 | 0.058924 | 0.031312 | 0.046968 | 0.358668 | 0.285226 | 0.254768 | 0.189582 | 0.164532 | 0.132081 | 0 | 0.039455 | 0.253738 | 6,487 | 171 | 138 | 37.935673 | 0.686222 | 0.18776 | 0 | 0.076923 | 0 | 0 | 0.001171 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08547 | false | 0 | 0.042735 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce003cf61f0f9d2dd0598c3c5475f5a02dcce4fe | 16,301 | py | Python | vaxrank/cli.py | shah-newaz/vaxrank | 65832878f28ce44ccaaf47be3e0c6d38a1743988 | [
"Apache-2.0"
] | null | null | null | vaxrank/cli.py | shah-newaz/vaxrank | 65832878f28ce44ccaaf47be3e0c6d38a1743988 | [
"Apache-2.0"
] | null | null | null | vaxrank/cli.py | shah-newaz/vaxrank | 65832878f28ce44ccaaf47be3e0c6d38a1743988 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2016-2018. Mount Sinai School of Medicine
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import, print_function, division
import json
import sys
import logging
import logging.config
import pkg_resources
from argparse import ArgumentParser
from isovar.cli.rna_args import allele_reads_generator_from_args
from isovar.cli.translation_args import add_translation_args
from isovar.cli.variant_sequences_args import make_variant_sequences_arg_parser
from mhctools.cli import (
add_mhc_args,
mhc_alleles_from_args,
mhc_binding_predictor_from_args,
)
import pandas as pd
import serializable
from varcode.cli import variant_collection_from_args
from . import __version__
from .core_logic import VaxrankCoreLogic
from .gene_pathway_check import GenePathwayCheck
from .report import (
make_ascii_report,
make_html_report,
make_pdf_report,
make_csv_report,
make_minimal_neoepitope_report,
TemplateDataCreator,
PatientInfo,
)
import os, subprocess
logger = logging.getLogger(__name__)
def new_run_arg_parser():
# inherit commandline options from Isovar
arg_parser = make_variant_sequences_arg_parser(
prog="vaxrank",
description=(
"Select personalized vaccine peptides from cancer variants, "
"expression data, and patient HLA type."),
)
add_version_args(arg_parser)
add_translation_args(arg_parser)
add_mhc_args(arg_parser)
add_vaccine_peptide_args(arg_parser)
add_output_args(arg_parser)
add_optional_output_args(arg_parser)
add_supplemental_report_args(arg_parser)
return arg_parser
def cached_run_arg_parser():
arg_parser = ArgumentParser(
prog="vaxrank",
description=(
"Select personalized vaccine peptides from cancer variants, "
"expression data, and patient HLA type."),
)
add_version_args(arg_parser)
arg_parser.add_argument(
"--input-json-file",
default="",
help="Path to JSON file containing results of vaccine peptide report")
add_output_args(arg_parser)
add_optional_output_args(arg_parser)
add_supplemental_report_args(arg_parser)
return arg_parser
def add_version_args(parser):
parser.add_argument(
"--version",
help="Print Vaxrank version and immediately exit",
default=False,
action="store_true")
# Lets the user specify whether they want to see particular sections in the report.
def add_optional_output_args(arg_parser):
manufacturability_args = arg_parser.add_mutually_exclusive_group(required=False)
manufacturability_args.add_argument(
"--include-manufacturability-in-report",
dest="manufacturability",
action="store_true")
manufacturability_args.add_argument(
"--no-manufacturability-in-report",
dest="manufacturability",
action="store_false")
arg_parser.set_defaults(manufacturability=True)
wt_epitope_args = arg_parser.add_mutually_exclusive_group(required=False)
wt_epitope_args.add_argument(
"--include-non-overlapping-epitopes-in-report",
dest="wt_epitopes",
action="store_true",
help="Set to true to include a report section for each vaccine peptide containing "
"strong binders that do not overlap the mutation")
wt_epitope_args.add_argument(
"--no-non-overlapping-epitopes-in-report",
dest="wt_epitopes",
action="store_false",
help="Set to false to exclude report information for each vaccine peptide about "
"strong binders that do not overlap the mutation")
arg_parser.set_defaults(wt_epitopes=True)
def add_output_args(arg_parser):
output_args_group = arg_parser.add_argument_group("Output options")
output_args_group.add_argument(
"--output-patient-id",
default="",
help="Patient ID to use in report")
output_args_group.add_argument(
"--output-csv",
default="",
help="Name of CSV file which contains predicted sequences")
output_args_group.add_argument(
"--output-ascii-report",
default="",
help="Path to ASCII vaccine peptide report")
output_args_group.add_argument(
"--output-html-report",
default="",
help="Path to HTML vaccine peptide report")
output_args_group.add_argument(
"--output-pdf-report",
default="",
help="Path to PDF vaccine peptide report")
output_args_group.add_argument(
"--output-json-file",
default="",
help="Path to JSON file containing results of vaccine peptide report")
output_args_group.add_argument(
"--output-xlsx-report",
default="",
help="Path to XLSX vaccine peptide report worksheet, one sheet per variant. This is meant "
"for use by the vaccine manufacturer.")
output_args_group.add_argument(
"--output-merged-report-file",
default="",
help="Path to XLSX merged report.")
output_args_group.add_argument(
"--output-neoepitope-report",
default="",
help="Path to XLSX neoepitope report, containing information focusing on short peptide "
"sequences.")
output_args_group.add_argument(
"--num-epitopes-per-peptide",
type=int,
help="Number of top-ranking epitopes for each vaccine peptide to include in the "
"neoepitope report.")
output_args_group.add_argument(
"--output-reviewed-by",
default="",
help="Comma-separated list of reviewer names")
output_args_group.add_argument(
"--output-final-review",
default="",
help="Name of final reviewer of report")
output_args_group.add_argument(
"--log-path",
default="python.log",
help="File path to write the vaxrank Python log to")
output_args_group.add_argument(
"--max-mutations-in-report",
type=int,
help="Number of mutations to report")
output_args_group.add_argument(
"--output-passing-variants-csv",
default="",
help="Path to CSV file containing some metadata about every variant that has passed all "
"variant caller filters")
def add_vaccine_peptide_args(arg_parser):
vaccine_peptide_group = arg_parser.add_argument_group("Vaccine peptide options")
vaccine_peptide_group.add_argument(
"--vaccine-peptide-length",
default=25,
type=int,
help="Number of amino acids in the vaccine peptides (default %(default)s)")
vaccine_peptide_group.add_argument(
"--padding-around-mutation",
default=0,
type=int,
help=(
"Number of off-center windows around the mutation to consider "
"as vaccine peptides (default %(default)s)"
))
vaccine_peptide_group.add_argument(
"--max-vaccine-peptides-per-mutation",
default=1,
type=int,
help="Number of vaccine peptides to generate for each mutation")
vaccine_peptide_group.add_argument(
"--min-epitope-score",
default=1e-10,
type=float,
help=(
"Ignore predicted MHC ligands whose normalized binding score "
"falls below this threshold"))
def add_supplemental_report_args(arg_parser):
report_args_group = arg_parser.add_argument_group("Supplemental report options")
report_args_group.add_argument(
"--cosmic_vcf_filename",
default="",
help="Local path to COSMIC vcf")
def check_args(args):
if not (args.output_csv or
args.output_ascii_report or
args.output_html_report or
args.output_pdf_report or
args.output_json_file or
args.output_xlsx_report or
args.output_neoepitope_report or
args.output_passing_variants_csv):
raise ValueError(
"Must specify at least one of: --output-csv, "
"--output-xlsx-report, "
"--output-ascii-report, "
"--output-html-report, "
"--output-pdf-report, "
"--output-neoepitope-report, "
"--output-json-file, "
"--output-passing-variants-csv")
def ranked_variant_list_with_metadata(args):
"""
Computes all the data needed for report generation.
Parameters
----------
args : Namespace
Parsed user args from this run
Returns a dictionary containing 3 items:
- ranked variant/vaccine peptide list
- a dictionary of command-line arguments used to generate it
- patient info object
"""
if hasattr(args, 'input_json_file'):
with open(args.input_json_file) as f:
data = serializable.from_json(f.read())
# the JSON data from the previous run will have the older args saved, which may need to
        # be overridden with args from this run (which will all be output-related)
data['args'].update(vars(args))
# if we need to truncate the variant list based on max_mutations_in_report, do that here
if len(data['variants']) > args.max_mutations_in_report:
data['variants'] = data['variants'][:args.max_mutations_in_report]
return data
# get various things from user args
mhc_alleles = mhc_alleles_from_args(args)
logger.info("MHC alleles: %s", mhc_alleles)
variants = variant_collection_from_args(args)
logger.info("Variants: %s", variants)
# generator that for each variant gathers all RNA reads, both those
# supporting the variant and reference alleles
reads_generator = allele_reads_generator_from_args(args)
mhc_predictor = mhc_binding_predictor_from_args(args)
core_logic = VaxrankCoreLogic(
variants=variants,
reads_generator=reads_generator,
mhc_predictor=mhc_predictor,
vaccine_peptide_length=args.vaccine_peptide_length,
padding_around_mutation=args.padding_around_mutation,
max_vaccine_peptides_per_variant=args.max_vaccine_peptides_per_mutation,
min_alt_rna_reads=args.min_alt_rna_reads,
min_variant_sequence_coverage=args.min_variant_sequence_coverage,
min_epitope_score=args.min_epitope_score,
num_mutant_epitopes_to_keep=args.num_epitopes_per_peptide,
variant_sequence_assembly=args.variant_sequence_assembly,
gene_pathway_check=GenePathwayCheck()
)
variants_count_dict = core_logic.variant_counts()
assert len(variants) == variants_count_dict['num_total_variants'], \
"Len(variants) is %d but variants_count_dict came back with %d" % (
len(variants), variants_count_dict['num_total_variants'])
if args.output_passing_variants_csv:
variant_metadata_dicts = core_logic.variant_properties()
df = pd.DataFrame(variant_metadata_dicts)
df.to_csv(args.output_passing_variants_csv, index=False)
ranked_list = core_logic.ranked_vaccine_peptides()
ranked_list_for_report = ranked_list[:args.max_mutations_in_report]
patient_info = PatientInfo(
patient_id=args.output_patient_id,
vcf_paths=variants.sources,
bam_path=args.bam,
mhc_alleles=mhc_alleles,
num_somatic_variants=variants_count_dict['num_total_variants'],
num_coding_effect_variants=variants_count_dict['num_coding_effect_variants'],
num_variants_with_rna_support=variants_count_dict['num_variants_with_rna_support'],
num_variants_with_vaccine_peptides=variants_count_dict['num_variants_with_vaccine_peptides']
)
# return variants, patient info, and command-line args
data = {
'variants': ranked_list_for_report,
'patient_info': patient_info,
'args': vars(args),
}
logger.info('About to save args: %s', data['args'])
# save JSON data if necessary. as of time of writing, vaxrank takes ~25 min to run,
# most of which is core logic. the formatting is super fast, and it can
# be useful to save the data to be able to iterate just on the formatting
if args.output_json_file:
with open(args.output_json_file, 'w') as f:
f.write(serializable.to_json(data))
logger.info('Wrote JSON report data to %s', args.output_json_file)
return data
def main(args_list=None):
"""
Script to generate vaccine peptide predictions from somatic cancer variants,
patient HLA type, and tumor RNA-seq data.
Example usage:
vaxrank
--vcf somatic.vcf \
--bam rnaseq.bam \
--vaccine-peptide-length 25 \
--output-csv vaccine-peptides.csv
"""
if args_list is None:
args_list = sys.argv[1:]
if "--version" in args_list:
print("Vaxrank version: %s" % __version__)
return
if "--input-json-file" in args_list:
arg_parser = cached_run_arg_parser()
else:
arg_parser = new_run_arg_parser()
args = arg_parser.parse_args(args_list)
logging.config.fileConfig(
pkg_resources.resource_filename(
__name__,
'logging.conf'),
defaults={'logfilename': args.log_path})
logger.info(args)
check_args(args)
data = ranked_variant_list_with_metadata(args)
ranked_variant_list = data['variants']
patient_info = data['patient_info']
args_for_report = data['args']
###################
# CSV-based reports
###################
if args.output_csv or args.output_xlsx_report:
make_csv_report(
ranked_variant_list,
excel_report_path=args.output_xlsx_report,
csv_report_path=args.output_csv)
if args.output_neoepitope_report:
make_minimal_neoepitope_report(
ranked_variant_list,
num_epitopes_per_peptide=args.num_epitopes_per_peptide,
excel_report_path=args.output_neoepitope_report)
########################
# Template-based reports
########################
if not (args.output_ascii_report or args.output_html_report or args.output_pdf_report):
return
input_json_file = args.input_json_file if hasattr(args, 'input_json_file') else None
template_data_creator = TemplateDataCreator(
ranked_variants_with_vaccine_peptides=ranked_variant_list,
patient_info=patient_info,
final_review=args.output_final_review,
reviewers=args.output_reviewed_by,
args_for_report=args_for_report,
input_json_file=input_json_file,
cosmic_vcf_filename=args.cosmic_vcf_filename)
template_data = template_data_creator.compute_template_data()
if args.output_json_file:
output_file_name = args.output_json_file
output_file_name = output_file_name.rsplit('.', 1)[0]
with open(output_file_name + '-merged-report.json', 'w') as f:
f.write(serializable.to_json(template_data))
logger.info('Wrote Full JSON report data to %s', output_file_name)
            # Run Sanoskas' report parser. Popen does not expand $VARS itself
            # and each argument must be its own list element, so resolve the
            # env vars and pass the jar path and its two file arguments
            # separately.
            d = dict(os.environ)  # Make a copy of the current environment
            subprocess.Popen(
                [os.environ['JAVA11'], '-jar', os.environ['PEPTIDE_SEL'],
                 output_file_name + '-merged-report.json',
                 output_file_name + '-merged-report.xlsx'],
                env=d)
if args.output_ascii_report:
make_ascii_report(
template_data=template_data,
ascii_report_path=args.output_ascii_report)
if args.output_html_report:
make_html_report(
template_data=template_data,
html_report_path=args.output_html_report)
if args.output_pdf_report:
make_pdf_report(
template_data=template_data,
pdf_report_path=args.output_pdf_report)
| 35.669584 | 156 | 0.684314 | 2,053 | 16,301 | 5.143205 | 0.192401 | 0.030685 | 0.030306 | 0.030306 | 0.370111 | 0.254096 | 0.18837 | 0.139975 | 0.118761 | 0.091297 | 0 | 0.002304 | 0.227961 | 16,301 | 456 | 157 | 35.747807 | 0.83671 | 0.125698 | 0 | 0.248503 | 0 | 0 | 0.234541 | 0.045613 | 0 | 0 | 0 | 0 | 0.002994 | 1 | 0.02994 | false | 0.017964 | 0.056886 | 0 | 0.10479 | 0.005988 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce009968e8640e6d1c77e4d902dc078249782d25 | 2,330 | py | Python | src/onegov/form/extensions.py | politbuero-kampagnen/onegov-cloud | 20148bf321b71f617b64376fe7249b2b9b9c4aa9 | [
"MIT"
] | null | null | null | src/onegov/form/extensions.py | politbuero-kampagnen/onegov-cloud | 20148bf321b71f617b64376fe7249b2b9b9c4aa9 | [
"MIT"
] | null | null | null | src/onegov/form/extensions.py | politbuero-kampagnen/onegov-cloud | 20148bf321b71f617b64376fe7249b2b9b9c4aa9 | [
"MIT"
] | null | null | null | form_extensions = {}
class FormExtension(object):
""" Enables the extension of form definitions/submissions.
When either of those models create a form class they will take the
'extensions' key in the meta dictionary to extend those formcode
based forms.
This allows for specialised behaviour of formcode forms with the drawback
that those definitions/submissions are more tightly bound to the code. That
is to say code in module A could not use submissions defined by module B
unless module B is also present in the path.
    To create and register a form extension, subclass as follows::
class MyExtension(FormExtension, name='my-extension'):
def create(self):
return self.form_class
Note that you *should not* change the form_class provided to you. Instead
you should subclass it. If you need to change the form class, you need
to clone it::
class MyExtension(FormExtension, name='my-extension'):
def create(self):
return self.form_class.clone()
class MyExtension(FormExtension, name='my-extension'):
def create(self):
class ExtendedForm(self.form_class):
pass
return ExtendedForm
Also, names must be unique and can only be registered once.
"""
def __init__(self, form_class):
self.form_class = form_class
def __init_subclass__(cls, name, **kwargs):
super().__init_subclass__(**kwargs)
assert name not in form_extensions, (
f"A form extension named {name} already exists"
)
form_extensions[name] = cls
def create(self):
raise NotImplementedError
class Extendable(object):
""" Models extending their form classes use this mixin to create the
extended forms. It also serves as a marker to possibly keep track of all
classes that use extended forms.
"""
def extend_form_class(self, form_class, extensions):
if not extensions:
return form_class
for extension in extensions:
if extension not in form_extensions:
raise KeyError(f"Unknown form extension: {extension}")
form_class = form_extensions[extension](form_class).create()
return form_class
| 31.066667 | 79 | 0.663519 | 297 | 2,330 | 5.094276 | 0.373737 | 0.089227 | 0.051553 | 0.065433 | 0.167217 | 0.138136 | 0.138136 | 0.138136 | 0.138136 | 0.100463 | 0 | 0 | 0.276395 | 2,330 | 74 | 80 | 31.486486 | 0.89739 | 0.584979 | 0 | 0.095238 | 0 | 0 | 0.093824 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 1 | 0.190476 | false | 0 | 0 | 0 | 0.380952 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce02d362faf3e0854e0a7e7d8cf85733ccda435d | 571 | py | Python | linkysets/entries/urls.py | hqrrylyu/linkysets | 1b8c319820bdf116a5cad7efff69178e739cf26b | [
"MIT"
] | null | null | null | linkysets/entries/urls.py | hqrrylyu/linkysets | 1b8c319820bdf116a5cad7efff69178e739cf26b | [
"MIT"
] | 5 | 2021-04-08T19:20:07.000Z | 2021-09-22T19:03:30.000Z | linkysets/entries/urls.py | hqrrylyu/polemicflow | 1b8c319820bdf116a5cad7efff69178e739cf26b | [
"MIT"
] | null | null | null | from django.urls import path
from . import views
app_name = "entries"
urlpatterns = [
path("", views.HomeView.as_view(), name="home"),
path("search/", views.SearchView.as_view(), name="search"),
path("detail/<str:pk>/", views.EntrySetDetailView.as_view(), name="detail"),
path("create/", views.create_entryset_view, name="create"),
path("edit/<str:pk>/", views.edit_entryset_view, name="edit"),
path("delete/<str:pk>/", views.EntrySetDeleteView.as_view(), name="delete"),
path("repost/<str:pk>/", views.repost_entry_view, name="repost"),
]
| 35.6875 | 80 | 0.674256 | 75 | 571 | 4.986667 | 0.36 | 0.149733 | 0.106952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117338 | 571 | 15 | 81 | 38.066667 | 0.742063 | 0 | 0 | 0 | 0 | 0 | 0.211909 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce02f406398562fa98bd890e5f14b1f5b9c07c09 | 7,409 | py | Python | src/spaceone/statistics/manager/resource_manager.py | whdalsrnt/statistics | 7401ad6753e3b6ee3942b5036d1e5214d8930cb3 | [
"Apache-2.0"
] | 7 | 2020-06-04T23:01:03.000Z | 2021-06-22T07:06:28.000Z | src/spaceone/statistics/manager/resource_manager.py | whdalsrnt/statistics | 7401ad6753e3b6ee3942b5036d1e5214d8930cb3 | [
"Apache-2.0"
] | 3 | 2020-08-20T01:49:08.000Z | 2022-03-23T09:02:18.000Z | src/spaceone/statistics/manager/resource_manager.py | whdalsrnt/statistics | 7401ad6753e3b6ee3942b5036d1e5214d8930cb3 | [
"Apache-2.0"
] | 6 | 2020-06-10T02:00:24.000Z | 2021-12-03T06:02:36.000Z | import logging
import pandas as pd
import numpy as np
from spaceone.core.manager import BaseManager
from spaceone.statistics.error import *
from spaceone.statistics.connector.service_connector import ServiceConnector
_LOGGER = logging.getLogger(__name__)
_JOIN_TYPE_MAP = {
'LEFT': 'left',
'RIGHT': 'right',
'OUTER': 'outer',
'INNER': 'inner'
}
_SUPPORTED_AGGREGATE_OPERATIONS = [
'query',
'join',
'concat',
'sort',
'formula',
'fill_na'
]
class ResourceManager(BaseManager):
def stat(self, aggregate, page, domain_id):
results = self._execute_aggregate_operations(aggregate, domain_id)
return self._page(page, results)
def _execute_aggregate_operations(self, aggregate, domain_id):
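        # stages run strictly in order: each one consumes the DataFrame
        # produced by the stage before it, so the pipeline must open with a
        # 'query' stage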
df = None
if 'query' not in aggregate[0]:
raise ERROR_REQUIRED_QUERY_OPERATION()
for stage in aggregate:
if 'query' in stage:
df = self._query(stage['query'], domain_id)
elif 'join' in stage:
df = self._join(stage['join'], domain_id, df)
elif 'concat' in stage:
df = self._concat(stage['concat'], domain_id, df)
elif 'sort' in stage:
df = self._sort(stage['sort'], df)
elif 'formula' in stage:
df = self._execute_formula(stage['formula'], df)
elif 'fill_na' in stage:
df = self._fill_na(stage['fill_na'], df)
else:
raise ERROR_REQUIRED_PARAMETER(key='aggregate.query | aggregate.join | aggregate.concat | '
'aggregate.sort | aggregate.formula | aggregate.fill_na')
df = df.replace({np.nan: None})
results = df.to_dict('records')
return results
@staticmethod
def _fill_na(options, base_df):
data = options.get('data', {})
if len(data.keys()) > 0:
base_df = base_df.fillna(data)
return base_df
def _execute_formula(self, options, base_df):
if len(base_df) > 0:
if 'eval' in options:
base_df = self._execute_formula_eval(options['eval'], base_df)
elif 'query' in options:
base_df = self._execute_formula_query(options['query'], base_df)
else:
raise ERROR_REQUIRED_PARAMETER(key='aggregate.formula.eval | aggregate.formula.query')
return base_df
@staticmethod
def _execute_formula_query(formula, base_df):
try:
base_df = base_df.query(formula)
except Exception as e:
raise ERROR_STATISTICS_FORMULA(formula=formula)
return base_df
@staticmethod
def _execute_formula_eval(formula, base_df):
try:
base_df = base_df.eval(formula)
except Exception as e:
raise ERROR_STATISTICS_FORMULA(formula=formula)
return base_df
@staticmethod
def _sort(options, base_df):
if 'key' in options and len(base_df) > 0:
ascending = not options.get('desc', False)
try:
return base_df.sort_values(by=options['key'], ascending=ascending)
except Exception as e:
raise ERROR_STATISTICS_QUERY(reason=f'Sorting failed. (sort = {options})')
else:
return base_df
def _concat(self, options, domain_id, base_df):
concat_df = self._query(options, domain_id, operator='join')
try:
base_df = pd.concat([base_df, concat_df], ignore_index=True)
except Exception as e:
raise ERROR_STATISTICS_CONCAT(reason=str(e))
return base_df
@staticmethod
def _generate_empty_data(query):
empty_data = {}
aggregate = query.get('aggregate', [])
aggregate.reverse()
for stage in aggregate:
if 'group' in stage:
group = stage['group']
for key in group.get('keys', []):
if 'name' in key:
empty_data[key['name']] = []
for field in group.get('fields', []):
if 'name' in field:
empty_data[field['name']] = []
break
return pd.DataFrame(empty_data)
def _join(self, options, domain_id, base_df):
if 'type' in options and options['type'] not in _JOIN_TYPE_MAP:
raise ERROR_INVALID_PARAMETER_TYPE(key='aggregate.join.type', type=list(_JOIN_TYPE_MAP.keys()))
join_keys = options.get('keys')
join_type = options.get('type', 'LEFT')
join_df = self._query(options, domain_id, operator='join')
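        # with explicit join keys, merge on those columns; otherwise fall back
        # to an index-based merge of the two frames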
try:
if join_keys:
base_df = pd.merge(base_df, join_df, on=join_keys, how=_JOIN_TYPE_MAP[join_type])
else:
base_df = pd.merge(base_df, join_df, left_index=True, right_index=True, how=_JOIN_TYPE_MAP[join_type])
except Exception as e:
if join_keys is None:
raise ERROR_STATISTICS_INDEX_JOIN(reason=str(e))
else:
raise ERROR_STATISTICS_JOIN(resource_type=options['resource_type'], join_keys=join_keys)
return base_df
def _query(self, options, domain_id, operator='query'):
resource_type = options.get('resource_type')
query = options.get('query')
extend_data = options.get('extend_data', {})
if resource_type is None:
raise ERROR_REQUIRED_PARAMETER(key=f'aggregate.{operator}.resource_type')
if query is None:
raise ERROR_REQUIRED_PARAMETER(key=f'aggregate.{operator}.query')
self.service_connector: ServiceConnector = self.locator.get_connector('ServiceConnector')
service, resource = self._parse_resource_type(resource_type)
try:
response = self.service_connector.stat_resource(service, resource, query, domain_id)
results = response.get('results', [])
if len(results) > 0 and not isinstance(results[0], dict):
df = pd.DataFrame(results, columns=['value'])
else:
df = pd.DataFrame(results)
if len(df) == 0:
df = self._generate_empty_data(options['query'])
return self._extend_data(df, extend_data)
except ERROR_BASE as e:
raise ERROR_STATISTICS_QUERY(reason=e.message)
except Exception as e:
raise ERROR_STATISTICS_QUERY(reason=e)
@staticmethod
def _parse_resource_type(resource_type):
try:
service, resource = resource_type.split('.')
except Exception as e:
raise ERROR_INVALID_PARAMETER(key='resource_type', reason=f'resource_type is invalid. ({resource_type})')
return service, resource
@staticmethod
def _extend_data(df, data):
for key, value in data.items():
df[key] = value
return df
@staticmethod
def _page(page, results):
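        # 'start' is 1-based and clamped below at 1; 'limit' rows are returned
        # alongside the total row count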
response = {
'total_count': len(results)
}
if 'limit' in page and page['limit'] > 0:
start = page.get('start', 1)
if start < 1:
start = 1
response['results'] = results[start - 1:start + page['limit'] - 1]
else:
response['results'] = results
return response
| 32.073593 | 118 | 0.588878 | 863 | 7,409 | 4.819235 | 0.144844 | 0.047608 | 0.023082 | 0.030296 | 0.269296 | 0.252705 | 0.207983 | 0.169752 | 0.11397 | 0.07069 | 0 | 0.002537 | 0.308409 | 7,409 | 230 | 119 | 32.213043 | 0.809133 | 0 | 0 | 0.227273 | 0 | 0 | 0.095155 | 0.014172 | 0 | 0 | 0 | 0 | 0 | 1 | 0.079545 | false | 0 | 0.034091 | 0 | 0.204545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce06b3d180466303f98a643e48e24807259ebe49 | 16,245 | py | Python | Blender 2.91/2.91/scripts/addons/object_collection_manager/operator_utils.py | calculusrobotics/RNNs-for-Bayesian-State-Estimation | 2aacf86d2e447e10c840b4926d4de7bc5e46d9bc | [
"MIT"
] | 1 | 2021-06-30T00:39:40.000Z | 2021-06-30T00:39:40.000Z | release/scripts/addons/object_collection_manager/operator_utils.py | kubaroth/blender-2.9.1-arm64 | 63a9045eba7746d28828323f95526234951a5df9 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | release/scripts/addons/object_collection_manager/operator_utils.py | kubaroth/blender-2.9.1-arm64 | 63a9045eba7746d28828323f95526234951a5df9 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | # ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# Copyright 2011, Ryan Inch
import bpy
from .internals import (
layer_collections,
qcd_slots,
expanded,
expand_history,
rto_history,
copy_buffer,
swap_buffer,
update_property_group,
get_move_selection,
)
mode_converter = {
'EDIT_MESH': 'EDIT',
'EDIT_CURVE': 'EDIT',
'EDIT_SURFACE': 'EDIT',
'EDIT_TEXT': 'EDIT',
'EDIT_ARMATURE': 'EDIT',
'EDIT_METABALL': 'EDIT',
'EDIT_LATTICE': 'EDIT',
'POSE': 'POSE',
'SCULPT': 'SCULPT',
'PAINT_WEIGHT': 'WEIGHT_PAINT',
'PAINT_VERTEX': 'VERTEX_PAINT',
'PAINT_TEXTURE': 'TEXTURE_PAINT',
'PARTICLE': 'PARTICLE_EDIT',
'OBJECT': 'OBJECT',
'PAINT_GPENCIL': 'PAINT_GPENCIL',
'EDIT_GPENCIL': 'EDIT_GPENCIL',
'SCULPT_GPENCIL': 'SCULPT_GPENCIL',
'WEIGHT_GPENCIL': 'WEIGHT_GPENCIL',
'VERTEX_GPENCIL': 'VERTEX_GPENCIL',
}
rto_path = {
"exclude": "exclude",
"select": "collection.hide_select",
"hide": "hide_viewport",
"disable": "collection.hide_viewport",
"render": "collection.hide_render",
"holdout": "holdout",
"indirect": "indirect_only",
}
set_off_on = {
"exclude": {
"off": True,
"on": False
},
"select": {
"off": True,
"on": False
},
"hide": {
"off": True,
"on": False
},
"disable": {
"off": True,
"on": False
},
"render": {
"off": True,
"on": False
},
"holdout": {
"off": False,
"on": True
},
"indirect": {
"off": False,
"on": True
}
}
get_off_on = {
False: {
"exclude": "on",
"select": "on",
"hide": "on",
"disable": "on",
"render": "on",
"holdout": "off",
"indirect": "off",
},
True: {
"exclude": "off",
"select": "off",
"hide": "off",
"disable": "off",
"render": "off",
"holdout": "on",
"indirect": "on",
}
}
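# exclude/hide/holdout/indirect live directly on the LayerCollection; the
# dotted rto_path entries are attributes of the wrapped Collection instead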
def get_rto(layer_collection, rto):
if rto in ["exclude", "hide", "holdout", "indirect"]:
return getattr(layer_collection, rto_path[rto])
else:
collection = getattr(layer_collection, "collection")
return getattr(collection, rto_path[rto].split(".")[1])
def set_rto(layer_collection, rto, value):
if rto in ["exclude", "hide", "holdout", "indirect"]:
setattr(layer_collection, rto_path[rto], value)
else:
collection = getattr(layer_collection, "collection")
setattr(collection, rto_path[rto].split(".")[1], value)
def apply_to_children(parent, apply_function):
# works for both Collections & LayerCollections
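    # iterative breadth-first walk, level by level, so deep hierarchies cannot
    # hit Python's recursion limit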
child_lists = [parent.children]
while child_lists:
new_child_lists = []
for child_list in child_lists:
for child in child_list:
apply_function(child)
if child.children:
new_child_lists.append(child.children)
child_lists = new_child_lists
def isolate_rto(cls, self, view_layer, rto, *, children=False):
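    # three cases: restore the saved state when re-run on the isolated target,
    # re-enable everything when this is the only active collection, and
    # otherwise isolate this collection (optionally preserving child states)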
off = set_off_on[rto]["off"]
on = set_off_on[rto]["on"]
laycol_ptr = layer_collections[self.name]["ptr"]
target = rto_history[rto][view_layer]["target"]
history = rto_history[rto][view_layer]["history"]
# get active collections
active_layer_collections = [x["ptr"] for x in layer_collections.values()
if get_rto(x["ptr"], rto) == on]
# check if previous state should be restored
if cls.isolated and self.name == target:
# restore previous state
for x, item in enumerate(layer_collections.values()):
set_rto(item["ptr"], rto, history[x])
# reset target and history
del rto_history[rto][view_layer]
cls.isolated = False
# check if all RTOs should be activated
elif (len(active_layer_collections) == 1 and
active_layer_collections[0].name == self.name):
# activate all collections
for item in layer_collections.values():
set_rto(item["ptr"], rto, on)
# reset target and history
del rto_history[rto][view_layer]
cls.isolated = False
else:
# isolate collection
rto_history[rto][view_layer]["target"] = self.name
# reset history
history.clear()
# save state
for item in layer_collections.values():
history.append(get_rto(item["ptr"], rto))
child_states = {}
if children:
# get child states
def get_child_states(layer_collection):
child_states[layer_collection.name] = get_rto(layer_collection, rto)
apply_to_children(laycol_ptr, get_child_states)
# isolate collection
for item in layer_collections.values():
if item["name"] != laycol_ptr.name:
set_rto(item["ptr"], rto, off)
set_rto(laycol_ptr, rto, on)
if rto not in ["exclude", "holdout", "indirect"]:
# activate all parents
laycol = layer_collections[self.name]
while laycol["id"] != 0:
set_rto(laycol["ptr"], rto, on)
laycol = laycol["parent"]
if children:
# restore child states
def restore_child_states(layer_collection):
set_rto(layer_collection, rto, child_states[layer_collection.name])
apply_to_children(laycol_ptr, restore_child_states)
else:
if children:
# restore child states
def restore_child_states(layer_collection):
set_rto(layer_collection, rto, child_states[layer_collection.name])
apply_to_children(laycol_ptr, restore_child_states)
elif rto == "exclude":
# deactivate all children
def deactivate_all_children(layer_collection):
set_rto(layer_collection, rto, True)
apply_to_children(laycol_ptr, deactivate_all_children)
cls.isolated = True
def toggle_children(self, view_layer, rto):
laycol_ptr = layer_collections[self.name]["ptr"]
# clear rto history
del rto_history[rto][view_layer]
rto_history[rto+"_all"].pop(view_layer, None)
# toggle rto state
state = not get_rto(laycol_ptr, rto)
set_rto(laycol_ptr, rto, state)
def set_state(layer_collection):
set_rto(layer_collection, rto, state)
apply_to_children(laycol_ptr, set_state)
def activate_all_rtos(view_layer, rto):
off = set_off_on[rto]["off"]
on = set_off_on[rto]["on"]
history = rto_history[rto+"_all"][view_layer]
# if not activated, activate all
if len(history) == 0:
keep_history = False
for item in reversed(list(layer_collections.values())):
if get_rto(item["ptr"], rto) == off:
keep_history = True
history.append(get_rto(item["ptr"], rto))
set_rto(item["ptr"], rto, on)
if not keep_history:
history.clear()
history.reverse()
else:
for x, item in enumerate(layer_collections.values()):
set_rto(item["ptr"], rto, history[x])
# clear rto history
del rto_history[rto+"_all"][view_layer]
def invert_rtos(view_layer, rto):
if rto == "exclude":
orig_values = []
for item in layer_collections.values():
orig_values.append(get_rto(item["ptr"], rto))
for x, item in enumerate(layer_collections.values()):
set_rto(item["ptr"], rto, not orig_values[x])
else:
for item in layer_collections.values():
set_rto(item["ptr"], rto, not get_rto(item["ptr"], rto))
# clear rto history
rto_history[rto].pop(view_layer, None)
def copy_rtos(view_layer, rto):
if not copy_buffer["RTO"]:
# copy
copy_buffer["RTO"] = rto
for laycol in layer_collections.values():
copy_buffer["values"].append(get_off_on[
get_rto(laycol["ptr"], rto)
][
rto
]
)
else:
# paste
for x, laycol in enumerate(layer_collections.values()):
set_rto(laycol["ptr"],
rto,
set_off_on[rto][
copy_buffer["values"][x]
]
)
# clear rto history
rto_history[rto].pop(view_layer, None)
del rto_history[rto+"_all"][view_layer]
# clear copy buffer
copy_buffer["RTO"] = ""
copy_buffer["values"].clear()
def swap_rtos(view_layer, rto):
if not swap_buffer["A"]["values"]:
# get A
swap_buffer["A"]["RTO"] = rto
for laycol in layer_collections.values():
swap_buffer["A"]["values"].append(get_off_on[
get_rto(laycol["ptr"], rto)
][
rto
]
)
else:
# get B
swap_buffer["B"]["RTO"] = rto
for laycol in layer_collections.values():
swap_buffer["B"]["values"].append(get_off_on[
get_rto(laycol["ptr"], rto)
][
rto
]
)
# swap A with B
for x, laycol in enumerate(layer_collections.values()):
set_rto(laycol["ptr"], swap_buffer["A"]["RTO"],
set_off_on[
swap_buffer["A"]["RTO"]
][
swap_buffer["B"]["values"][x]
]
)
set_rto(laycol["ptr"], swap_buffer["B"]["RTO"],
set_off_on[
swap_buffer["B"]["RTO"]
][
swap_buffer["A"]["values"][x]
]
)
# clear rto history
swap_a = swap_buffer["A"]["RTO"]
swap_b = swap_buffer["B"]["RTO"]
rto_history[swap_a].pop(view_layer, None)
rto_history[swap_a+"_all"].pop(view_layer, None)
rto_history[swap_b].pop(view_layer, None)
rto_history[swap_b+"_all"].pop(view_layer, None)
# clear swap buffer
swap_buffer["A"]["RTO"] = ""
swap_buffer["A"]["values"].clear()
swap_buffer["B"]["RTO"] = ""
swap_buffer["B"]["values"].clear()
def clear_copy(rto):
if copy_buffer["RTO"] == rto:
copy_buffer["RTO"] = ""
copy_buffer["values"].clear()
def clear_swap(rto):
if swap_buffer["A"]["RTO"] == rto:
swap_buffer["A"]["RTO"] = ""
swap_buffer["A"]["values"].clear()
swap_buffer["B"]["RTO"] = ""
swap_buffer["B"]["values"].clear()
def link_child_collections_to_parent(laycol, collection, parent_collection):
# store view layer RTOs for all children of the to be deleted collection
child_states = {}
def get_child_states(layer_collection):
child_states[layer_collection.name] = (layer_collection.exclude,
layer_collection.hide_viewport,
layer_collection.holdout,
layer_collection.indirect_only)
apply_to_children(laycol["ptr"], get_child_states)
# link any subcollections of the to be deleted collection to it's parent
for subcollection in collection.children:
if not subcollection.name in parent_collection.children:
parent_collection.children.link(subcollection)
# apply the stored view layer RTOs to the newly linked collections and their
# children
def restore_child_states(layer_collection):
state = child_states.get(layer_collection.name)
if state:
layer_collection.exclude = state[0]
layer_collection.hide_viewport = state[1]
layer_collection.holdout = state[2]
layer_collection.indirect_only = state[3]
apply_to_children(laycol["parent"]["ptr"], restore_child_states)
def remove_collection(laycol, collection, context):
# get selected row
cm = context.scene.collection_manager
selected_row_name = cm.cm_list_collection[cm.cm_list_index].name
# delete collection
bpy.data.collections.remove(collection)
# update references
expanded.discard(laycol["name"])
if expand_history["target"] == laycol["name"]:
expand_history["target"] = ""
if laycol["name"] in expand_history["history"]:
expand_history["history"].remove(laycol["name"])
if qcd_slots.contains(name=laycol["name"]):
qcd_slots.del_slot(name=laycol["name"])
if laycol["name"] in qcd_slots.overrides:
qcd_slots.overrides.remove(laycol["name"])
# reset history
for rto in rto_history.values():
rto.clear()
# update tree view
update_property_group(context)
# update selected row
laycol = layer_collections.get(selected_row_name, None)
if laycol:
cm.cm_list_index = laycol["row_index"]
elif len(cm.cm_list_collection) <= cm.cm_list_index:
cm.cm_list_index = len(cm.cm_list_collection) - 1
if cm.cm_list_index > -1:
name = cm.cm_list_collection[cm.cm_list_index].name
laycol = layer_collections[name]
while not laycol["visible"]:
laycol = laycol["parent"]
cm.cm_list_index = laycol["row_index"]
def select_collection_objects(is_master_collection, collection_name, replace, nested, selection_state=None):
if bpy.context.mode != 'OBJECT':
return
if is_master_collection:
target_collection = bpy.context.view_layer.layer_collection.collection
else:
laycol = layer_collections[collection_name]
target_collection = laycol["ptr"].collection
if replace:
bpy.ops.object.select_all(action='DESELECT')
    if selection_state is None:
selection_state = get_move_selection().isdisjoint(target_collection.objects)
def select_objects(collection):
for obj in collection.objects:
try:
obj.select_set(selection_state)
except RuntimeError:
pass
select_objects(target_collection)
if nested:
apply_to_children(target_collection, select_objects)
def set_exclude_state(target_layer_collection, state):
# get current child exclusion state
child_exclusion = []
def get_child_exclusion(layer_collection):
child_exclusion.append([layer_collection, layer_collection.exclude])
apply_to_children(target_layer_collection, get_child_exclusion)
# set exclusion
target_layer_collection.exclude = state
# set correct state for all children
for laycol in child_exclusion:
laycol[0].exclude = laycol[1]
| 30.027726 | 108 | 0.570452 | 1,832 | 16,245 | 4.824782 | 0.134279 | 0.064487 | 0.037335 | 0.017649 | 0.414074 | 0.354565 | 0.266207 | 0.213146 | 0.181242 | 0.180563 | 0 | 0.002702 | 0.31659 | 16,245 | 540 | 109 | 30.083333 | 0.793461 | 0.110988 | 0 | 0.298592 | 0 | 0 | 0.086433 | 0.004736 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067606 | false | 0.002817 | 0.005634 | 0 | 0.08169 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce06d0e8faf0accdcd4165eff3e6877077e3490a | 809 | py | Python | tests/unmarshalling/test_models.py | steinitzu/spoffy | 40cce0f00accbe006084a610d0d50396c21ec96c | [
"Apache-2.0"
] | 1 | 2019-04-24T19:50:03.000Z | 2019-04-24T19:50:03.000Z | tests/unmarshalling/test_models.py | steinitzu/spoffy | 40cce0f00accbe006084a610d0d50396c21ec96c | [
"Apache-2.0"
] | 3 | 2019-10-11T20:31:57.000Z | 2020-04-13T16:06:43.000Z | tests/unmarshalling/test_models.py | steinitzu/spoffy | 40cce0f00accbe006084a610d0d50396c21ec96c | [
"Apache-2.0"
] | null | null | null | from spoffy.models import Artist, AlbumSimplePaging, Playlist, CurrentPlayback
from tests.mock.responses.get_artist import artist, artist_with_null_followers
from tests.mock.responses.get_artist_albums import artist_albums_relinked
from tests.mock.responses.get_playlist import (
playlist_w_markets,
playlist_relinked,
)
from tests.mock.responses.player import current_playback_w_track
from tests.unmarshalling.util import dict_obj_diff
def test_all():
pairs = [
(artist_with_null_followers, Artist),
(artist, Artist),
(artist_albums_relinked, AlbumSimplePaging),
(playlist_w_markets, Playlist),
(playlist_relinked, Playlist),
(current_playback_w_track, CurrentPlayback),
]
for obj, cls in pairs:
dict_obj_diff(obj, cls(**obj))
| 31.115385 | 78 | 0.754017 | 99 | 809 | 5.848485 | 0.353535 | 0.07772 | 0.08981 | 0.151986 | 0.215889 | 0.107081 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169345 | 809 | 25 | 79 | 32.36 | 0.861607 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.3 | 0 | 0.35 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce0807bc48e5329ca59d444954295443539d76bd | 3,864 | py | Python | nicos_demo/verwin/setups/charmbig.py | ebadkamil/nicos | 0355a970d627aae170c93292f08f95759c97f3b5 | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 12 | 2019-11-06T15:40:36.000Z | 2022-01-01T16:23:00.000Z | nicos_demo/verwin/setups/charmbig.py | ebadkamil/nicos | 0355a970d627aae170c93292f08f95759c97f3b5 | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 91 | 2020-08-18T09:20:26.000Z | 2022-02-01T11:07:14.000Z | nicos_demo/verwin/setups/charmbig.py | ISISComputingGroup/nicos | 94cb4d172815919481f8c6ee686f21ebb76f2068 | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 6 | 2020-01-11T10:52:30.000Z | 2022-02-25T12:35:23.000Z | description = 'Big ErWIN detector devices'
group = 'optional'
devices = dict(
b_cathode1 = device('nicos.devices.generic.VirtualMotor',
lowlevel = True,
abslimits = [0, 2225],
unit = 'V',
speed = 50,
fmtstr = '%.1f',
),
b_cathode2 = device('nicos.devices.generic.VirtualMotor',
lowlevel = True,
abslimits = [0, 2225],
unit = 'V',
speed = 50,
fmtstr = '%.1f',
),
b_window = device('nicos.devices.generic.VirtualMotor',
lowlevel = True,
abslimits = [-2225, 0],
unit = 'V',
speed = 50,
fmtstr = '%.1f',
),
b_tripped = device('nicos.devices.generic.ManualSwitch',
description = 'Trip indicator',
states = ['', 'High current seen', 'High current', 'Trip'],
pollinterval = 1,
),
b_hv = device('nicos_mlz.erwin.devices.charmhv.HVSwitch',
        description = 'HV supply big detector',
anodes = ['b_anode%d' % i for i in range(1, 10)],
banodes = ['b_banode%d' % i for i in range(1, 9)],
cathodes = ['b_cathode1', 'b_cathode2'],
window = 'b_window',
trip = 'b_tripped',
mapping = {
'on': {
'b_anode1': 2190,
'b_anode2': 2192,
'b_anode3': 2194,
'b_anode4': 2197,
'b_anode5': 2200,
'b_anode6': 2203,
'b_anode7': 2206,
'b_anode8': 2208,
'b_anode9': 2210,
'b_banode1': 2192,
'b_banode2': 2194,
'b_banode3': 2196,
'b_banode4': 2199,
'b_banode5': 2199,
'b_banode6': 2198,
'b_banode7': 2197,
'b_banode8': 2196,
'b_cathode1': 200,
'b_cathode2': 200,
'b_window': -1500,
'ramp': 5,
},
'off': {
'b_anode1': 0,
'b_anode2': 0,
'b_anode3': 0,
'b_anode4': 0,
'b_anode5': 0,
'b_anode6': 0,
'b_anode7': 0,
'b_anode8': 0,
'b_anode9': 0,
'b_banode1': 0,
'b_banode2': 0,
'b_banode3': 0,
'b_banode4': 0,
'b_banode5': 0,
'b_banode6': 0,
'b_banode7': 0,
'b_banode8': 0,
'b_cathode1': 0,
'b_cathode2': 0,
'b_window': 0,
'ramp': 10,
},
'safe': {
'b_anode1': 200,
'b_anode2': 200,
'b_anode3': 200,
'b_anode4': 200,
'b_anode5': 200,
'b_anode6': 200,
'b_anode7': 200,
'b_anode8': 200,
'b_anode9': 200,
'b_banode1': 200,
'b_banode2': 200,
'b_banode3': 200,
'b_banode4': 200,
'b_banode5': 200,
'b_banode6': 200,
'b_banode7': 200,
'b_banode8': 200,
'b_cathode1': 200,
'b_cathode2': 200,
'b_window': 200,
'ramp': 10,
},
},
),
)
for i in range(1, 10):
devices['b_anode%d' % i] = device('nicos.devices.generic.VirtualMotor',
lowlevel = True,
abslimits = [0, 2225],
unit = 'V',
speed = 5,
fmtstr = '%.1f',
)
for i in range(1, 9):
devices['b_banode%d' % i] = device('nicos.devices.generic.VirtualMotor',
lowlevel = True,
abslimits = [0, 2225],
unit = 'V',
speed = 5,
fmtstr = '%.1f',
)
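# Hedged illustration (not part of the setup itself): the hand-written 'on'
# voltages for the nine anodes above follow a nearly linear ramp, so they
# could equally be generated programmatically, e.g.:
#   anode_on = {'b_anode%d' % i: v for i, v in zip(
#       range(1, 10), (2190, 2192, 2194, 2197, 2200, 2203, 2206, 2208, 2210))}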
| 29.723077 | 76 | 0.408126 | 373 | 3,864 | 4.029491 | 0.246649 | 0.055888 | 0.071856 | 0.0998 | 0.355955 | 0.355955 | 0.335995 | 0.303393 | 0.223553 | 0.223553 | 0 | 0.129384 | 0.453934 | 3,864 | 129 | 77 | 29.953488 | 0.582938 | 0 | 0 | 0.28 | 0 | 0 | 0.255176 | 0.063147 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce09a7107879a0b31157c59940123b288d2b6617 | 4,663 | py | Python | Detect_and_save_segmentation.py | Kohulan/DECIMER-Image-Segmentation | 68ee9a9693e5bad5c41826d28e2d6558a20fe21f | [
"MIT"
] | 29 | 2021-01-08T13:48:18.000Z | 2022-01-17T08:29:00.000Z | Detect_and_save_segmentation.py | Kohulan/DECIMER-Image-Segmentation | 68ee9a9693e5bad5c41826d28e2d6558a20fe21f | [
"MIT"
] | 23 | 2021-01-07T21:43:21.000Z | 2022-03-14T21:52:17.000Z | Detect_and_save_segmentation.py | Kohulan/DECIMER-Image-Segmentation | 68ee9a9693e5bad5c41826d28e2d6558a20fe21f | [
"MIT"
] | 8 | 2021-01-08T05:39:21.000Z | 2022-02-14T10:06:38.000Z | '''
* This Software is under the MIT License
* Refer to LICENSE or https://opensource.org/licenses/MIT for more information
* Written by ©Kohulan Rajan 2020
'''
import os
import numpy as np
import skimage.io
import cv2
from PIL import Image
import argparse
import tensorflow as tf
import warnings
warnings.filterwarnings("ignore")
from mrcnn import utils
from mrcnn import model as modellib
from mrcnn import visualize
from mrcnn import moldetect
from Scripts import complete_structure
# Root directory of the project
ROOT_DIR = os.path.dirname(os.path.dirname(os.getcwd()))
def main():
# Handle input arguments
parser = argparse.ArgumentParser(description="Select the chemical structures from scanned literature and save them")
parser.add_argument(
'--input',
help='Enter the input filename',
required=True
)
args = parser.parse_args()
# Define image path and output path
IMAGE_PATH = os.path.normpath(args.input)
output_directory = str(IMAGE_PATH) + '_output'
if not os.path.exists(output_directory):
os.makedirs(output_directory)
# Segment chemical structure depictions
zipper = get_segments(output_directory, IMAGE_PATH)
print("Segmented Images can be found in: ", str(os.path.normpath(zipper)))
def load_model(path = "model_trained/mask_rcnn_molecule.h5"):
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
TRAINED_MODEL_PATH = os.path.join(path)
# Download COCO trained weights from Releases if needed
if not os.path.exists(TRAINED_MODEL_PATH):
utils.download_trained_weights(TRAINED_MODEL_PATH)
config = moldetect.MolDetectConfig()
# Override the training configurations with a few
# changes for inferencing.
class InferenceConfig(config.__class__):
# Run detection on one image at a time
GPU_COUNT = 1
IMAGES_PER_GPU = 1
config = InferenceConfig()
#config.display()
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(TRAINED_MODEL_PATH, by_name=True)
#class_names=['BG', 'Molecule']
return model
def get_segments(output_directory, IMAGE_PATH):
# Structure detection
model = load_model()
r = get_masks(IMAGE_PATH,model)
# Mask expansion
image = skimage.io.imread(IMAGE_PATH)
expanded_masks = complete_structure.complete_structure_mask(image_array = image, mask_array = r['masks'], debug = False)
# Save segments
zipper = (expanded_masks,IMAGE_PATH,output_directory)
segmented_img = save_segments(zipper)
return segmented_img
def get_masks(IMAGE_PATH,model):
# Read image
image = skimage.io.imread(IMAGE_PATH)
# Run detection
results = model.detect([image], verbose=1)
r = results[0]
return r
def save_segments(zipper):
expanded_masks,IMAGE_PATH,output_directory = zipper
mask = expanded_masks
for i in range(mask.shape[2]):
image = cv2.imread(os.path.join(IMAGE_PATH), -1)
for j in range(image.shape[2]):
image[:,:,j] = image[:,:,j] * mask[:,:,i]
#Remove unwanted background
grayscale = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_,thresholded = cv2.threshold(grayscale,0,255,cv2.THRESH_OTSU)
bbox = cv2.boundingRect(thresholded)
x, y, w, h = bbox
foreground = image[y:y+h, x:x+w]
masked_image = np.zeros(image.shape).astype(np.uint8)
masked_image = visualize.apply_mask(masked_image, mask[:, :, i],[1,1,1])
masked_image = Image.fromarray(masked_image)
masked_image = masked_image.convert('RGB')
im_gray = cv2.cvtColor(np.asarray(masked_image), cv2.COLOR_RGB2GRAY)
(thresh, im_bw) = cv2.threshold(im_gray, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
#Removal of transparent layer - black background
_,alpha = cv2.threshold(im_bw,0,255,cv2.THRESH_BINARY)
b, g, r = cv2.split(image)
rgba = [b,g,r, alpha]
dst = cv2.merge(rgba,4)
background = dst[y:y+h, x:x+w]
trans_mask = background[:,:,3] == 0
background[trans_mask] = [255, 255, 255, 255]
new_img = cv2.cvtColor(background, cv2.COLOR_BGRA2BGR)
#Save segments
#Making directory for saving the segments
if not os.path.exists(output_directory+"/segments"):
os.makedirs(os.path.normpath(output_directory+"/segments"))
#Define the correct path to save the segments
segment_dirname = os.path.normpath(output_directory+"/segments/")
filename = str(IMAGE_PATH).replace("\\", "/").split("/")[-1][:-4]+"_%d.png"%i
file_path = os.path.normpath(segment_dirname + "/" +filename)
print(file_path)
cv2.imwrite(file_path, new_img)
return output_directory+"/segments/"
if __name__ == '__main__':
main()
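# Example invocation (the input file name is hypothetical):
#   python Detect_and_save_segmentation.py --input scanned_article.png
# The segments are then written to scanned_article.png_output/segments/.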
| 28.783951 | 121 | 0.742226 | 677 | 4,663 | 4.945347 | 0.338257 | 0.023297 | 0.020908 | 0.008961 | 0.139785 | 0.114098 | 0.032855 | 0.032855 | 0.032855 | 0 | 0 | 0.01722 | 0.140682 | 4,663 | 161 | 122 | 28.962733 | 0.818068 | 0.188505 | 0 | 0.063158 | 0 | 0 | 0.073067 | 0.009333 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0.021053 | 0.136842 | 0 | 0.263158 | 0.021053 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce0be90aafe6e7d99e9dde8deae77d7664f2f463 | 2,410 | py | Python | assets_new_new/data/2021-03-05/json_for_classification/data_prepare_for_self-supervised_learning.py | ggzhang0071/PaDiM-Anomaly-Detection-Localization-master | 823404d45be078348328c7c6cd16e8ac11a51587 | [
"Apache-2.0"
] | null | null | null | assets_new_new/data/2021-03-05/json_for_classification/data_prepare_for_self-supervised_learning.py | ggzhang0071/PaDiM-Anomaly-Detection-Localization-master | 823404d45be078348328c7c6cd16e8ac11a51587 | [
"Apache-2.0"
] | null | null | null | assets_new_new/data/2021-03-05/json_for_classification/data_prepare_for_self-supervised_learning.py | ggzhang0071/PaDiM-Anomaly-Detection-Localization-master | 823404d45be078348328c7c6cd16e8ac11a51587 | [
"Apache-2.0"
] | null | null | null | # split data in train, val and test dataset.
import os,cv2, glob
#image path
image_data_root="/git/PaDiM-master/kangqiang_result/segment_image_result_wide_resnet50_2/image/**/*.jpg"
save_image_path="/git/PaDiM-master/kangqiang_result/segment_image_result_wide_resnet50_2/all_croped_images"
# json file path
json_path="/git/PaDiM-master/assets_new_new/data/2021-03-05/json_for_classification"
image_label_dict={}
image_label_file_name_list=["train.txt","val.txt","test.txt"]
for image_label_file_name in image_label_file_name_list:
with open(os.path.join(json_path,image_label_file_name),"r") as fid:
image_label_list=fid.readlines()
for image_label in image_label_list:
label= image_label.split(" ")[-1].strip()
image_name_path= image_label.split(" ")[0]
image_path,image_name=os.path.split(image_name_path)
part_image_path=image_path.split("/")[5:]
image_name_without_ext,ext=os.path.splitext(image_name)
image_name_part=image_name_without_ext.split("_")
recover_image_name_path=os.path.join("/".join(part_image_path),"_".join(image_name_part[:-1])+ext)
image_label_dict[recover_image_name_path]=label
print(len(image_label_dict.keys()))
image_name_list=[]
for image_name_path in glob.glob(image_data_root,recursive=True):
img = cv2.imread(image_name_path)
if img is None:
print("file name isn't exists {}".format(image_name_path))
os.remove(image_name_path)
else:
image_path,image_name=os.path.split(image_name_path)
part_image_path=image_path.split("/")[5:]
image_name_without_ext,ext=os.path.splitext(image_name)
image_name_part=image_name_without_ext.split("_")
recover_image_name_path=os.path.join("/".join(part_image_path),"_".join(image_name_part[:-1])+ext)
if recover_image_name_path in image_label_dict:
save_subfolder_path=os.path.join(save_image_path,image_label_dict[recover_image_name_path])
if not os.path.exists(save_subfolder_path):
os.makedirs(save_subfolder_path)
save_new_image_name=os.path.join(save_subfolder_path,image_name)
else:
save_new_image_name=os.path.join(save_image_path,image_name)
if not os.path.exists(save_new_image_name):
cv2.imwrite(save_new_image_name,img)
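# Worked example of the key recovery above (paths are hypothetical):
#   image_name_path = "/git/PaDiM-master/.../image/good/sample_001_3.jpg"
#   image_name_part = ["sample", "001", "3"]   # stem split on "_"
#   recover_image_name_path drops the trailing crop index "_3", yielding
#   ".../image/good/sample_001.jpg", which is then looked up in
#   image_label_dict to decide the target class subfolder.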
| 45.471698 | 110 | 0.719087 | 367 | 2,410 | 4.302452 | 0.215259 | 0.176694 | 0.098797 | 0.063331 | 0.513616 | 0.48575 | 0.459151 | 0.392654 | 0.354655 | 0.354655 | 0 | 0.01146 | 0.16722 | 2,410 | 52 | 111 | 46.346154 | 0.775287 | 0.027801 | 0 | 0.3 | 0 | 0 | 0.131253 | 0.105601 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025 | 0 | 0.025 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce1097e6fe873d6ea4f080407bd28bfacfa8d6ef | 946 | py | Python | taroapp/cmd/stop.py | PetrSixta/taro | afe0caf0e0feb6948c4cc80217b5c5d11418859b | [
"MIT"
] | null | null | null | taroapp/cmd/stop.py | PetrSixta/taro | afe0caf0e0feb6948c4cc80217b5c5d11418859b | [
"MIT"
] | null | null | null | taroapp/cmd/stop.py | PetrSixta/taro | afe0caf0e0feb6948c4cc80217b5c5d11418859b | [
"MIT"
] | null | null | null | import os
from taro.client import JobsClient
from taroapp import ps
from taroapp.view.instance import JOB_ID, INSTANCE_ID, CREATED, STATE
def run(args):
with JobsClient() as client:
all_jobs = client.read_jobs_info()
jobs = [job for job in all_jobs if job.matches(args.instance)]
if not jobs:
print('No such instance to stop: ' + args.instance)
exit(1)
if len(jobs) > 1 and not args.all:
print('No action performed because the criteria match more than one instance. '
'Use the --all flag if you wish to stop them all:' + os.linesep)
ps.print_table(jobs, [JOB_ID, INSTANCE_ID, CREATED, STATE], show_header=True, pager=False)
return # Exit code non-zero?
inst_results = client.stop_jobs([job.instance_id for job in jobs], args.interrupt)
for i_res in inst_results:
print(f"{i_res[0]} -> {i_res[1]}")
| 36.384615 | 102 | 0.635307 | 140 | 946 | 4.171429 | 0.485714 | 0.05137 | 0.044521 | 0.05137 | 0.092466 | 0.092466 | 0 | 0 | 0 | 0 | 0 | 0.00578 | 0.268499 | 946 | 25 | 103 | 37.84 | 0.83815 | 0.020085 | 0 | 0 | 0 | 0 | 0.181622 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.210526 | 0 | 0.315789 | 0.210526 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce117a0fd2cded907acee859c1a30d7f91d9ed6b | 55,603 | py | Python | atlas/gpdeletion/views.py | PanDAWMS/panda-bigmon-atlas | a3688b9ed722a15c0469c8bee84cc9a417670608 | [
"Apache-2.0"
] | null | null | null | atlas/gpdeletion/views.py | PanDAWMS/panda-bigmon-atlas | a3688b9ed722a15c0469c8bee84cc9a417670608 | [
"Apache-2.0"
] | 15 | 2015-01-06T13:41:52.000Z | 2022-03-30T10:37:25.000Z | atlas/gpdeletion/views.py | PanDAWMS/panda-bigmon-atlas | a3688b9ed722a15c0469c8bee84cc9a417670608 | [
"Apache-2.0"
] | 1 | 2017-07-20T08:01:24.000Z | 2017-07-20T08:01:24.000Z | from django.contrib.auth.models import User
from django.contrib.messages.context_processors import messages
from django.http.response import HttpResponseBadRequest
from rest_framework.generics import get_object_or_404
from rest_framework.parsers import JSONParser
from atlas.ami.client import AMIClient
from atlas.prodtask.models import ActionStaging, ActionDefault, DatasetStaging, StepAction, TTask, \
GroupProductionAMITag, ProductionTask, GroupProductionDeletion, TDataFormat, GroupProductionStats, TRequest, \
ProductionDataset, GroupProductionDeletionExtension, GroupProductionDeletionProcessing, \
GroupProductionDeletionRequest
from atlas.dkb.views import es_by_fields, es_by_keys, es_by_keys_nested
from atlas.prodtask.ddm_api import DDM
from datetime import datetime, timedelta
import pytz
from rest_framework import serializers, generics
from django.forms.models import model_to_dict
from rest_framework import status
from atlas.settings import defaultDatetimeFormat
import logging
from django.utils import timezone
from rest_framework.decorators import api_view, authentication_classes, permission_classes
from rest_framework.response import Response
from rest_framework.authentication import TokenAuthentication, BasicAuthentication, SessionAuthentication
from rest_framework.permissions import IsAuthenticated
from rest_framework.decorators import parser_classes
from atlas.celerybackend.celery import app
from django.core.cache import cache
_logger = logging.getLogger('prodtaskwebui')
_jsonLogger = logging.getLogger('prodtask_ELK')
FORMAT_BASES = ['BPHY', 'EGAM', 'EXOT', 'FTAG', 'HDBS', 'HIGG', 'HION', 'JETM', 'LCALO', 'MUON', 'PHYS',
'STDM', 'SUSY', 'TAUP', 'TCAL', 'TOPQ', 'TRIG', 'TRUTH']
CP_FORMATS = ["FTAG", "EGAM", "MUON", 'PHYS', "JETM", "TAUP", "IDTR", "TCAL"]
def get_all_formats(format_base):
return list(TDataFormat.objects.filter(name__startswith='DAOD_' + format_base).values_list('name', flat=True))
LIFE_TIME_DAYS = 60
def collect_stats(format_base, is_real_data):
formats = get_all_formats(format_base)
version = 1
if format_base in CP_FORMATS:
version = 2
if is_real_data:
data_prefix = 'data'
else:
data_prefix = 'mc'
for output_format in formats:
to_cache = get_stats_per_format(output_format, version, is_real_data)
result = []
for ami_tag in to_cache.keys():
if to_cache[ami_tag]:
ami_tag_info = GroupProductionAMITag.objects.get(ami_tag=ami_tag)
skim='noskim'
if ami_tag_info.skim == 's':
skim='skim'
result.append({'ami_tag':ami_tag,'cache':','.join([ami_tag_info.cache,skim]),
'containers':to_cache[ami_tag]})
cache.delete('gp_del_%s_%s_'%(data_prefix,output_format))
if result:
cache.set('gp_del_%s_%s_'%(data_prefix,output_format),result,None)
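# A note on the cache layout used above: per-format results are stored under
# keys of the form 'gp_del_<data_prefix>_<output_format>_', e.g. real-data
# DAOD_BPHY1 containers land under the key 'gp_del_data_DAOD_BPHY1_'.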
def get_stats_per_format(output_format, version, is_real_data):
by_tag_stats = {}
to_cache = {}
if is_real_data:
data_prefix = 'data'
else:
data_prefix = 'mc'
samples = GroupProductionDeletion.objects.filter(output_format=output_format)
for sample in samples:
if sample.container.startswith(data_prefix):
if sample.ami_tag not in by_tag_stats:
by_tag_stats[sample.ami_tag] = {'containers': 0, 'bytes': 0, 'to_delete_containers': 0, 'to_delete_bytes':0}
to_cache[sample.ami_tag] = []
if sample.version >= version:
to_cache[sample.ami_tag].append(GroupProductionDeletionUserSerializer(sample).data)
by_tag_stats[sample.ami_tag]['containers'] += 1
by_tag_stats[sample.ami_tag]['bytes'] += sample.size
if sample.days_to_delete <0:
by_tag_stats[sample.ami_tag]['to_delete_containers'] += 1
by_tag_stats[sample.ami_tag]['to_delete_bytes'] += sample.size
current_stats = GroupProductionStats.objects.filter(output_format=output_format, real_data=is_real_data)
updated_tags = []
for current_stat in current_stats:
if current_stat.ami_tag in by_tag_stats.keys():
current_stat.size = by_tag_stats[current_stat.ami_tag]['bytes']
current_stat.containers = by_tag_stats[current_stat.ami_tag]['containers']
current_stat.to_delete_size = by_tag_stats[current_stat.ami_tag]['to_delete_bytes']
current_stat.to_delete_containers = by_tag_stats[current_stat.ami_tag]['to_delete_containers']
current_stat.save()
updated_tags.append(current_stat.ami_tag)
else:
current_stat.size = 0
current_stat.containers = 0
current_stat.to_delete_size = 0
current_stat.to_delete_containers = 0
current_stat.save()
for tag in by_tag_stats.keys():
if tag not in updated_tags:
current_stat, is_created = GroupProductionStats.objects.get_or_create(ami_tag=tag, output_format=output_format, real_data=is_real_data)
current_stat.size = by_tag_stats[tag]['bytes']
current_stat.containers = by_tag_stats[tag]['containers']
current_stat.to_delete_size = by_tag_stats[tag]['to_delete_bytes']
current_stat.to_delete_containers = by_tag_stats[tag]['to_delete_containers']
current_stat.save()
return to_cache
def apply_extension(container, number_of_extension, user, message):
container = container[container.find(':')+1:]
gp = GroupProductionDeletion.objects.get(container=container)
gp_extension = GroupProductionDeletionExtension()
gp_extension.container = gp
gp_extension.user = user
gp_extension.timestamp = timezone.now()
gp_extension.message = message
gp_extension.save()
if (number_of_extension > 0) and (gp.days_to_delete < 0):
number_of_extension += (gp.days_to_delete // GroupProductionDeletion.EXTENSIONS_DAYS) * -1
if gp.days_to_delete + (number_of_extension * GroupProductionDeletion.EXTENSIONS_DAYS) > 365:
number_of_extension = (gp.days_to_delete // GroupProductionDeletion.EXTENSIONS_DAYS) * -1 + 6
if gp.extensions_number:
gp.extensions_number += number_of_extension
else:
gp.extensions_number = number_of_extension
if gp.extensions_number < 0:
gp.extensions_number = 0
gp.save()
_logger.info(
'GP extension by {user} for {container} on {number_of_extension} with message {message}'.format(user=user, container=container,
number_of_extension=number_of_extension,message=message))
_jsonLogger.info('Request for derivation container extension for: {message}'.format(message=message), extra={'user':user,'container':container,'number_of_extension':number_of_extension})
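# Worked example of the clamping above (numbers are illustrative):
#   days_to_delete = -70, EXTENSIONS_DAYS = 30, requested extensions = 1
#   number_of_extension = 1 + (-70 // 30) * -1 = 1 + 3 = 4
# i.e. enough extra extensions are added first to lift the container back
# over its already-passed deletion deadline, and only then is the requested
# extension applied, capped so the total never exceeds 365 days.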
def remove_extension(container):
gp = GroupProductionDeletion.objects.get(container=container)
gp.extensions_number = 0
gp.save()
def form_gp_from_dataset(dataset):
gp = GroupProductionDeletion()
dataset = dataset[dataset.find(':')+1:]
container_name = get_container_name(dataset)
ami_tag = container_name.split('_')[-1]
if not GroupProductionAMITag.objects.filter(ami_tag=ami_tag).exists():
update_tag_from_ami(ami_tag, gp.container.startswith('data'))
gp.skim = GroupProductionAMITag.objects.get(ami_tag=ami_tag).skim
gp.container = container_name
gp.dsid = container_name.split('.')[1]
gp.output_format = container_name.split('.')[4]
if gp.container.startswith('data'):
key_postfix = container_name.split('.')[2]
else:
key_postfix = 'mc'
gp.input_key = '.'.join([str(gp.dsid), gp.output_format, '_'.join(container_name.split('.')[-1].split('_')[:-1]), gp.skim,key_postfix])
gp.ami_tag = ami_tag
gp.version = 0
return gp
def get_existing_datastes(output, ami_tag, ddm):
tasks = es_by_keys_nested({'ctag': ami_tag, 'output_formats': output})
if(len(tasks)>0):
print(ami_tag, len(tasks))
result = []
for task in tasks:
if 'valid' not in task['taskname'] and task['status'] not in ProductionTask.RED_STATUS:
if not task['output_dataset'] and task['status'] in ProductionTask.NOT_RUNNING:
datasets = ProductionDataset.objects.filter(task_id=task['taskid'])
for dataset in datasets:
if output in dataset.name:
if ddm.dataset_exists(dataset.name):
metadata = ddm.dataset_metadata(dataset.name)
events = metadata['events']
bytes = metadata['bytes']
if bytes is None:
break
result.append({'task': task['taskid'], 'dataset': dataset.name, 'size': bytes,
'task_status': task['status'], 'events': events, 'end_time': task['task_timestamp']})
break
for dataset in task['output_dataset']:
deleted = False
try:
deleted = dataset['deleted']
except:
print('no deleted', task['taskid'])
if output == dataset['data_format'] and not deleted and ddm.dataset_exists(dataset['name']):
if ('events' not in dataset) or (not dataset['events']):
print('no events', task['taskid'])
metadata = ddm.dataset_metadata(dataset['name'])
events = metadata['events']
if not events:
events = 0
if ('bytes' not in dataset) or dataset['bytes'] == 0:
dataset['bytes'] = metadata['bytes']
else:
events = dataset['events']
if task['status'] not in ProductionTask.NOT_RUNNING:
production_task = ProductionTask.objects.get(id=int(task['taskid']))
if production_task.status != task['status']:
print('wrong status', task['taskid'])
task['status'] = production_task.status
if dataset['bytes'] is None:
break
result.append({'task': task['taskid'], 'dataset': dataset['name'], 'size': dataset['bytes'],
'task_status': task['status'], 'events': events, 'end_time': task['task_timestamp']})
break
return result
def ami_tags_reduction_w_data(postfix, data=False):
if 'tid' in postfix:
postfix = postfix[:postfix.find('_tid')]
if data:
return postfix
new_postfix = []
first_letter = ''
for token in postfix.split('_')[:-1]:
if token[0] != first_letter and not (token[0] == 's' and first_letter == 'a'):
new_postfix.append(token)
first_letter = token[0]
new_postfix.append(postfix.split('_')[-1])
return '_'.join(new_postfix)
def get_container_name(dataset_name):
return '.'.join(dataset_name.split('.')[:-1] + [ami_tags_reduction_w_data(dataset_name.split('.')[-1], dataset_name.startswith('data') or ('TRUTH' in dataset_name) )])
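# Worked example of the two helpers above (the dataset name is hypothetical):
#   'mc16_13TeV.364100.x.deriv.DAOD_BPHY1.e5271_e5984_s3126_r10201_p3371_tid12345_00'
# first has its '_tid...' suffix dropped, then consecutive tokens with the
# same leading letter collapse to the first one (the final p-tag is always
# kept), giving the container name
#   'mc16_13TeV.364100.x.deriv.DAOD_BPHY1.e5271_s3126_r10201_p3371'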
def collect_datasets(format_base, data, only_new = False):
if data:
prefix = 'data'
else:
prefix = 'mc'
for output in get_all_formats(format_base):
if only_new:
if GroupProductionDeletion.objects.filter(output_format=output, container__startswith=prefix).exists():
continue
if data:
fill_db(output, True, True, False)
else:
fill_db(output, False, True, False)
fill_db(output, False, False, False)
collect_stats(format_base, data)
return True
def collect_datasets_per_output(output, data, is_skim):
if is_skim:
skim = 's'
else:
skim = 'n'
_logger.info(
'Start collecting containers for {output} ({skim})'.format(output=output, skim=skim))
ami_tags_cache = list(
GroupProductionAMITag.objects.filter(real_data=data, skim=skim).values_list('ami_tag', 'cache'))
ami_tags_cache.sort(reverse=True, key=lambda x: list(map(int, x[1].split('.'))))
ami_tags = [x[0] for x in ami_tags_cache]
result = {}
ddm = DDM()
for ami_tag in ami_tags:
for dataset in get_existing_datastes(output, ami_tag, ddm):
dataset_name = get_container_name(dataset['dataset'])
dataset_key = dataset_name[:dataset_name.rfind('_')] + '.' + skim
if dataset_key not in result:
result[dataset_key] = {'versions': -1}
if ami_tag not in result[dataset_key]:
result[dataset_key]['versions'] += 1
result[dataset_key][ami_tag] = {'datasets': [], 'size': 0, 'events': 0, 'status': 'finished',
'end_time': None, 'version': result[dataset_key]['versions']}
if dataset['end_time']:
if not result[dataset_key][ami_tag]['end_time'] or (
dataset['end_time'] > result[dataset_key][ami_tag]['end_time']):
result[dataset_key][ami_tag]['end_time'] = dataset['end_time']
result[dataset_key][ami_tag]['datasets'].append(dataset)
result[dataset_key][ami_tag]['size'] += dataset['size']
result[dataset_key][ami_tag]['events'] += dataset['events']
if dataset['task_status'] not in ProductionTask.NOT_RUNNING:
result[dataset_key][ami_tag]['status'] = 'running'
return result
def create_single_tag_container(container_name):
container_name = container_name[container_name.find(':')+1:]
gp_container = GroupProductionDeletion.objects.get(container=container_name)
ddm = DDM()
if not ddm.dataset_exists(container_name):
datasets = datassets_from_es(gp_container.ami_tag, gp_container.output_format, gp_container.dsid,container_name,ddm)
if datasets:
empty_replica = True
for es_dataset in datasets:
if len(ddm.dataset_replicas(es_dataset))>0:
empty_replica = False
break
if not empty_replica:
print(str(datasets),' will be added to ',container_name)
ddm.register_container(container_name,datasets)
def range_containers(container_key):
gp_containers = GroupProductionDeletion.objects.filter(input_key=container_key)
if gp_containers.count() > 1:
by_amitag = {}
for gp_container in gp_containers:
by_amitag[gp_container.ami_tag] = gp_container
ami_tags_cache = [(x, GroupProductionAMITag.objects.get(ami_tag=x).cache) for x in by_amitag.keys()]
ami_tags_cache.sort(reverse=True, key=lambda x: list(map(int, x[1].split('.'))))
ami_tags = [x[0] for x in ami_tags_cache]
available_tags = ','.join(ami_tags)
latest = by_amitag[ami_tags[0]]
version = 0
if latest.version !=0 or latest.available_tags != available_tags:
latest.version = 0
latest.last_extension_time = None
latest.available_tags = available_tags
latest.save()
for ami_tag in ami_tags[1:]:
if latest.status == 'finished':
version += 1
last_extension = max([latest.update_time,by_amitag[ami_tag].update_time])
if version != by_amitag[ami_tag].version or by_amitag[ami_tag].available_tags != available_tags or by_amitag[ami_tag].last_extension_time!=last_extension:
by_amitag[ami_tag].last_extension_time = last_extension
by_amitag[ami_tag].version = version
by_amitag[ami_tag].available_tags = available_tags
by_amitag[ami_tag].save()
latest = by_amitag[ami_tag]
else:
gp_container = GroupProductionDeletion.objects.get(input_key=container_key)
if gp_container.version != 0 or gp_container.available_tags:
gp_container.version = 0
gp_container.last_extension_time = None
gp_container.available_tags = None
gp_container.save()
def unify_dataset(dataset):
if(':' in dataset):
return dataset
else:
return dataset.split('.')[0]+':'+dataset
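# e.g. unify_dataset('mc16_13TeV.364100.foo') -> 'mc16_13TeV:mc16_13TeV.364100.foo',
# while names that already carry a scope prefix pass through unchanged.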
def check_container(container_name, ddm, additional_datasets = None, warning_exists = False):
container_name = container_name[container_name.find(':')+1:]
if GroupProductionDeletion.objects.filter(container=container_name).count() >1:
gp_to_delete = list(GroupProductionDeletion.objects.filter(container=container_name))
for gp in gp_to_delete:
gp.delete()
if GroupProductionDeletion.objects.filter(container=container_name).exists():
gp_container = GroupProductionDeletion.objects.get(container=container_name)
is_new = False
else:
gp_container = form_gp_from_dataset(additional_datasets[0])
is_new = True
container_key = gp_container.input_key
datasets = ddm.dataset_in_container(container_name)
if additional_datasets:
for dataset in additional_datasets:
if dataset not in datasets:
datasets.append(unify_dataset(dataset))
events = 0
bytes = 0
is_running = False
datasets += datassets_from_es(gp_container.ami_tag, gp_container.output_format, gp_container.dsid,container_name,ddm,datasets)
if datasets:
if warning_exists:
_logger.warning(
'Container {container} has datasets that were not found in ES'.format(container=container_name))
print('Container {container} has datasets that were not found in ES'.format(container=container_name))
for dataset in datasets:
metadata = ddm.dataset_metadata(dataset)
if metadata['events']:
events += metadata['events']
if metadata['bytes']:
bytes += metadata['bytes']
task_id = metadata['task_id']
task = ProductionTask.objects.get(id=task_id)
if task.status not in ProductionTask.NOT_RUNNING:
is_running = True
gp_container.events = events
gp_container.datasets_number = len(datasets)
gp_container.size = bytes
if is_running:
gp_container.status = 'running'
gp_container.update_time = timezone.now()
else:
gp_container.status = 'finished'
if is_new:
gp_container.update_time = timezone.now()
_logger.info(
'Container {container} has been added to group production lists '.format(
container=gp_container.container))
gp_container.save()
range_containers(container_key)
else:
_logger.info(
'Container {container} has been deleted from group production lists '.format(container=container_name))
rerange_after_deletion(gp_container)
def store_dataset(item):
for x in ['update_time', 'last_extension_time']:
if item.get(x):
item[x] = datetime.strptime(item[x], "%d-%m-%Y %H:%M:%S").replace(tzinfo=pytz.utc)
gp_container = GroupProductionDeletion(**item)
if GroupProductionDeletion.objects.filter(container=item['container']).exists():
gp_container.id = GroupProductionDeletion.objects.get(container=item['container']).id
gp_container.save()
return gp_container.id
def do_gp_deletion_update():
update_for_period(timezone.now()-timedelta(days=2), timezone.now()+timedelta(hours=3))
cache.set('gp_deletion_update_time',timezone.now(),None)
for f in FORMAT_BASES:
collect_stats(f,False)
collect_stats(f,True)
def update_for_period(time_since, time_till):
tasks = ProductionTask.objects.filter(timestamp__gte=time_since, timestamp__lte=time_till, provenance='GP')
containers = {}
for task in tasks:
if (task.phys_group not in ['SOFT','VALI']) and ('valid' not in task.name) and (task.status in ['finished','done']):
datasets = ProductionDataset.objects.filter(task_id=task.id)
for dataset in datasets:
if '.log.' not in dataset.name:
container_name = get_container_name(unify_dataset(dataset.name))
if container_name not in containers:
containers[container_name] = []
containers[container_name].append(unify_dataset(dataset.name))
ddm = DDM()
for container, datasets in containers.items():
try:
check_container(container,ddm,datasets)
except Exception as e:
_logger.error("problem during gp container check %s" % str(e))
return True
def redo_all(is_data, exclude_list):
for base in FORMAT_BASES:
if base not in exclude_list:
redo_whole_output(base, is_data)
def redo_whole_output(format_base, is_data):
if is_data:
prefix = 'data'
else:
prefix = 'mc'
for output in get_all_formats(format_base):
_logger.info(
'Redo {output} for {prefix} '.format(output=output,prefix=prefix))
print('Redo {output} for {prefix} '.format(output=output,prefix=prefix))
group_containers = list(GroupProductionDeletion.objects.filter(output_format=output, container__startswith=prefix))
for group_container in group_containers:
group_container.previous_container = None
group_container.save()
group_container.delete()
if is_data:
fill_db(output, True, True, False)
else:
fill_db(output, False, True, False)
fill_db(output, False, False, False)
def redo_format(output, is_data):
if is_data:
prefix = 'data'
else:
prefix = 'mc'
_logger.info(
'Redo {output} for {prefix} '.format(output=output,prefix=prefix))
print('Redo {output} for {prefix} '.format(output=output,prefix=prefix))
group_containers = list(GroupProductionDeletion.objects.filter(output_format=output, container__startswith=prefix))
for group_container in group_containers:
group_container.previous_container = None
group_container.save()
group_container.delete()
if is_data:
fill_db(output, True, True, False)
else:
fill_db(output, False, True, False)
fill_db(output, False, False, False)
def datassets_from_es(ami_tag, output_formats, run_number, container, ddm, checked_datasets = []):
tasks = es_by_keys_nested({'ctag': ami_tag, 'output_formats': output_formats,
'run_number': run_number})
es_datatses = []
for task in tasks:
if task['status'] not in ProductionTask.RED_STATUS:
for dataset in task['output_dataset']:
deleted = False
try:
deleted = dataset['deleted']
except:
_logger.warning('task {taskid} has no deleted in es'.format(taskid=task['taskid']))
if (unify_dataset(dataset['name']) not in checked_datasets) and (
output_formats in dataset['data_format'] and not deleted) and \
(get_container_name(dataset[
'name']) == container) and ddm.dataset_exists(
dataset['name']):
es_datatses.append(dataset['name'])
return es_datatses
def rerange_after_deletion(gp_delete_container):
gp_containers = GroupProductionDeletion.objects.filter(input_key=gp_delete_container.input_key)
if gp_containers.count() > 1:
by_amitag = {}
for gp_container in gp_containers:
if gp_container != gp_delete_container:
by_amitag[gp_container.ami_tag] = gp_container
if len(by_amitag.keys()) == 1:
ami_tag, gp_container = by_amitag.popitem()
gp_container.available_tags = gp_container.ami_tag
gp_container.version = 0
gp_container.save()
else:
ami_tags_cache = [(x, GroupProductionAMITag.objects.get(ami_tag=x).cache) for x in by_amitag.keys()]
ami_tags_cache.sort(reverse=True, key=lambda x: list(map(int, x[1].split('.'))))
ami_tags = [x[0] for x in ami_tags_cache]
available_tags = ','.join(ami_tags)
latest = by_amitag[ami_tags[0]]
version = 0
if latest.version !=0 or latest.available_tags != available_tags:
latest.version = 0
latest.last_extension_time = None
latest.available_tags = available_tags
latest.save()
for ami_tag in ami_tags[1:]:
if latest.status == 'finished':
version += 1
last_extension = max([latest.update_time,by_amitag[ami_tag].update_time])
if version != by_amitag[ami_tag].version or by_amitag[ami_tag].available_tags != available_tags or by_amitag[ami_tag].last_extension_time!=last_extension:
by_amitag[ami_tag].last_extension_time = last_extension
by_amitag[ami_tag].version = version
by_amitag[ami_tag].previous_container = None
by_amitag[ami_tag].available_tags = available_tags
by_amitag[ami_tag].save()
latest = by_amitag[ami_tag]
gp_extensions = GroupProductionDeletionExtension.objects.filter(container=gp_delete_container)
for gp_extension in gp_extensions:
gp_extension.delete()
gp_delete_container.delete()
def fix_update_time(container):
gp_container = GroupProductionDeletion.objects.get(container=container)
ddm = DDM()
gp_container.update_time = ddm.dataset_metadata(container)['updated_at']
gp_container.save()
def clean_superceeded(do_es_check=True, full=False, format_base = None):
# for base_format in FORMAT_BASES:
ddm = DDM()
if not format_base:
format_base = FORMAT_BASES
cache_key = 'ALL'
else:
if format_base in FORMAT_BASES:
cache_key = format_base
format_base = [format_base]
else:
return False
existed_datasets = []
for base_format in format_base:
superceed_version = 1
if base_format in CP_FORMATS:
superceed_version = 2
formats = get_all_formats(base_format)
for output_format in formats:
if full:
existed_containers = GroupProductionDeletion.objects.filter(output_format=output_format)
else:
existed_containers = GroupProductionDeletion.objects.filter(output_format=output_format, version__gte=superceed_version)
for gp_container in existed_containers:
container_name = gp_container.container
datasets = ddm.dataset_in_container(container_name)
delete_container = False
if len(datasets) == 0:
delete_container = True
if do_es_check:
es_datasets = datassets_from_es(gp_container.ami_tag, gp_container.output_format, gp_container.dsid, gp_container.container, ddm )
empty_replica = True
if('TRUTH' not in output_format):
for es_dataset in es_datasets:
if len(ddm.dataset_replicas(es_dataset))>0:
empty_replica = False
break
else:
empty_replica = False
if len(es_datasets) > 0 and not empty_replica:
delete_container = False
if gp_container.days_to_delete <0 and (gp_container.version >= version_from_format(gp_container.output_format)):
existed_datasets += es_datasets
if gp_container.version != 0:
_logger.error('{container} is empty but something is found'.format(container=container_name))
else:
if (gp_container.days_to_delete < 0) and (gp_container.version >= version_from_format(gp_container.output_format)):
existed_datasets += datasets
if delete_container:
try:
rerange_after_deletion(gp_container)
_logger.info(
'Container {container} has been deleted from group production lists '.format(
container=container_name))
except Exception as e:
_logger.error('Container {container} has problem during deletion from group production lists '.format(
container=container_name))
cache.set('dataset_to_delete_'+cache_key,existed_datasets,None)
def clean_containers(changed_containers, output, data, is_skim):
if is_skim:
skim = 's'
else:
skim = 'n'
existed_containers = list(GroupProductionDeletion.objects.filter(output_format=output, skim=skim).values_list('container',flat=True))
ddm=DDM()
for gp_container in existed_containers:
if (data and not(gp_container.startswith('data'))) or ((not data) and gp_container.startswith('data')):
continue
if gp_container not in changed_containers:
check_container(gp_container, ddm, warning_exists=True)
def fill_db(output, data, is_skim, test=True):
results = collect_datasets_per_output(output, data, is_skim)
to_db = []
for sample_key, samples_collection in results.items():
samples = []
for ami_tag, sample in samples_collection.items():
if ami_tag != 'versions':
sample.update({'ami_tag': ami_tag})
samples.append(sample)
samples.sort(key=lambda x: x['version'])
superceed_time = None
version = 0
db_sample_collection = []
existed_ami_tags = []
for index, sample in enumerate(samples):
db_sample = {}
for x in ['size', 'events', 'status', 'ami_tag']:
db_sample[x] = sample[x]
db_sample['datasets_number'] = len(sample['datasets'])
db_sample['version'] = version
if superceed_time:
db_sample['last_extension_time'] = superceed_time
if db_sample['status'] == 'finished':
superceed_time = sample['end_time']
existed_ami_tags.append(sample['ami_tag'])
version += 1
elif index > 0:
db_sample['status'] = 'alarm'
db_sample['update_time'] = sample['end_time']
db_sample['container'] = '.'.join(sample_key.split('.')[:-1]) + '_' + sample['ami_tag']
db_sample['dsid'] = sample_key.split('.')[1]
db_sample['output_format'] = sample_key.split('.')[4]
db_sample['skim'] = sample_key.split('.')[-1]
if data:
key_postfix = sample_key.split('.')[2]
else:
key_postfix = 'mc'
db_sample['input_key'] = '.'.join([str(db_sample['dsid']), db_sample['output_format'],
sample_key.split('.')[-2], db_sample['skim'],key_postfix])
db_sample_collection.append(db_sample)
available_tags = ','.join(existed_ami_tags)
for db_sample in db_sample_collection:
if db_sample['version'] >= 1:
db_sample['available_tags'] = available_tags
to_db += db_sample_collection
if test:
to_db.reverse()
return to_db
else:
to_db.reverse()
current_key = None
last_id = None
changed_containers = []
_logger.info('Store {total} to DB '.format(total=len(to_db)))
for index, item in enumerate(to_db):
try:
if (item['input_key'] == current_key) and last_id:
last_id = store_dataset(item)
else:
last_id = store_dataset(item)
current_key = item['input_key']
changed_containers.append(item['container'])
except Exception as e:
_logger.error('Error during storing container {error} to DB '.format(error=str(e)))
print(index)
return to_db
clean_containers(changed_containers, output, data, is_skim )
return to_db
def update_tag_from_ami(tag, is_data=False):
ami = AMIClient()
gp_tag = GroupProductionAMITag()
ami_tag = ami.get_ami_tag(tag)
gp_tag.cache = ami_tag['cacheName']
if 'passThrough' in ami_tag:
gp_tag.skim = 'n'
else:
gp_tag.skim = 's'
gp_tag.ami_tag = tag
gp_tag.real_data = is_data
gp_tag.save()
@api_view(['GET'])
def gpdetails(request):
try:
current_id = request.query_params.get('gp_id')
gp_container = GroupProductionDeletion.objects.get(id=current_id)
gp_containers = list(GroupProductionDeletion.objects.filter(input_key=gp_container.input_key))
containers = [x.data for x in [GroupProductionDeletionSerializer(y) for y in gp_containers]]
return Response({'id': current_id, 'containers': containers})
except Exception as e:
return HttpResponseBadRequest(e)
@api_view(['GET'])
def ami_tags_details(request):
try:
ami_tags = request.query_params.get('ami_tags').split(',')
result = {}
for ami_tag in ami_tags:
if GroupProductionAMITag.objects.filter(ami_tag=ami_tag).exists():
ami_tag_details = GroupProductionAMITag.objects.get(ami_tag=ami_tag)
result.update({ami_tag_details.ami_tag:{'cache': ami_tag_details.cache, 'skim': ami_tag_details.skim}})
return Response(result)
except Exception as e:
return HttpResponseBadRequest(e)
@api_view(['GET'])
def gp_container_details(request):
try:
result = {}
container_name = request.query_params.get('container')
if not GroupProductionDeletion.objects.filter(container=container_name).exists():
return Response(None)
ddm = DDM()
gp_main_container = GroupProductionDeletion.objects.get(container=container_name)
extensions = GroupProductionDeletionExtension.objects.filter(container=gp_main_container).order_by('id')
result['extension'] = [ GroupProductionDeletionExtensionSerializer(x).data for x in extensions]
gp_same_key_containers = GroupProductionDeletion.objects.filter(input_key=gp_main_container.input_key)
same_key_containers = []
for gp_container in gp_same_key_containers:
datasets = ddm.dataset_in_container(gp_container.container)
datasets += datassets_from_es(gp_container.ami_tag, gp_container.output_format, gp_container.dsid, gp_container.container, ddm,datasets )
datasets_info = []
for dataset in datasets:
metadata = ddm.dataset_metadata(dataset)
datasets_info.append( {'name':dataset,'events':metadata['events'],'bytes':metadata['bytes'],'task_id':metadata['task_id'] })
if gp_container == gp_main_container:
result['main_container'] = {'container': gp_container.container, 'datasets':datasets_info,
'details': GroupProductionDeletionSerializer(gp_container).data}
else:
same_key_containers.append({'container': gp_container.container, 'datasets':datasets_info,
'details': GroupProductionDeletionSerializer(gp_container).data})
result['same_input'] = same_key_containers
return Response(result)
except Exception as e:
return HttpResponseBadRequest(e)
@api_view(['POST'])
def extension(request):
try:
username = request.user.username
containers = request.data['containers']
message = request.data['message']
number_of_extensions = request.data['number_of_extensions']
for container in containers:
apply_extension(container['container'],number_of_extensions,username,message)
except Exception as e:
return HttpResponseBadRequest(e)
return Response({'message': 'OK'})
@api_view(['POST'])
@authentication_classes((TokenAuthentication, BasicAuthentication, SessionAuthentication))
@permission_classes((IsAuthenticated,))
@parser_classes((JSONParser,))
def extension_api(request):
"""
Increase by "number_of_extensions" for each container in "containers" list with "message"
Post data must contain two fields message and containers, e.g.:
{"message":"Test","containers":['container1','container2']}\n
:return is {'containers_extented': number of containers extented,'containers_with_problems': list of containers with problems}
"""
containers_extended = 0
containers_with_problems = []
try:
username = request.user.username
containers = request.data['containers']
message = request.data['message']
number_of_extensions = request.data.get('number_of_extensions',1)
for container in containers:
try:
apply_extension(container,number_of_extensions,username,message)
containers_extended += 1
except Exception as e:
containers_with_problems.append((container, str(e)))
except Exception as e:
return HttpResponseBadRequest(e)
return Response({'containers_extented': containers_extended,'containers_with_problems': containers_with_problems})
@api_view(['POST'])
@authentication_classes((TokenAuthentication, BasicAuthentication, SessionAuthentication))
@permission_classes((IsAuthenticated,))
@parser_classes((JSONParser,))
def extension_container_api(request):
"""
Increase by "number_of_extensions" for each container in "period_container" with "message"
Post data must contain two fields message and period_container, e.g.:
{"message":"Test","period_container":'container'}\n
:return is {'containers_extented': number of containers extented,'containers_with_problems': list of containers with problems}
"""
containers_extended = 0
containers_with_problems = []
try:
username = request.user.username
container = request.data['period_container']
message = request.data['message']
number_of_extensions = request.data.get('number_of_extensions',1)
ddm = DDM()
datasets = ddm.dataset_in_container(container)
containers = list(set(map(get_container_name,datasets)))
for container in containers:
try:
apply_extension(container,number_of_extensions,username,message)
containers_extended += 1
except Exception as e:
containers_with_problems.append((container, str(e)))
except Exception as e:
return HttpResponseBadRequest(e)
return Response({'containers_extented': containers_extended,'containers_with_problems': containers_with_problems})
class UnixEpochDateField(serializers.DateTimeField):
def to_representation(self, value):
""" Return epoch time for a datetime object or ``None``"""
import time
try:
return int(time.mktime(value.timetuple()))
except (AttributeError, TypeError):
return None
def to_internal_value(self, value):
import datetime
return datetime.datetime.fromtimestamp(int(value))
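# Illustrative round trip (the exact epoch value depends on the server
# timezone, since time.mktime interprets the datetime as local time):
#   to_representation(datetime(2020, 1, 1))  -> e.g. 1577836800
#   to_internal_value(1577836800)            -> datetime(2020, 1, 1, ...)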
class GroupProductionDeletionExtensionSerializer(serializers.ModelSerializer):
class Meta:
model = GroupProductionDeletionExtension
fields = '__all__'
class GroupProductionDeletionSerializer(serializers.ModelSerializer):
epoch_last_update_time = UnixEpochDateField(source='last_extension_time')
class Meta:
model = GroupProductionDeletion
fields = '__all__'
class GroupProductionDeletionUserSerializer(serializers.ModelSerializer):
epoch_last_update_time = UnixEpochDateField(source='last_extension_time')
class Meta:
model = GroupProductionDeletion
fields = ['container','events','available_tags','version','extensions_number','size','epoch_last_update_time','days_to_delete']
class GroupProductionStatsSerializer(serializers.ModelSerializer):
class Meta:
model = GroupProductionStats
fields = '__all__'
class ListGroupProductionStatsView(generics.ListAPIView):
serializer_class = GroupProductionStatsSerializer
lookup_fields = ['id', 'ami_tag', 'output_format', 'real_data']
def get_queryset(self):
"""
Optionally restricts the returned stats
by filtering against query parameters in the URL.
"""
filter = {}
for field in self.lookup_fields:
if field == 'real_data' and self.request.query_params.get(field, None):
if self.request.query_params[field] == '1':
filter['real_data'] = True
else:
filter['real_data'] = False
elif self.request.query_params.get(field, None): # Ignore empty fields.
filter[field] = self.request.query_params[field]
queryset = GroupProductionStats.objects.filter(**filter)
return queryset
def version_from_format(output_format):
for base_format in CP_FORMATS:
if base_format in output_format:
return 2
return 1
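# e.g. version_from_format('DAOD_FTAG1') -> 2 (FTAG is a CP format), while
#      version_from_format('DAOD_EXOT2') -> 1.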
@api_view(['GET'])
@authentication_classes((TokenAuthentication, BasicAuthentication, SessionAuthentication))
@permission_classes((IsAuthenticated,))
@parser_classes((JSONParser,))
def last_update_time_group_production(request):
return Response(cache.get('gp_deletion_update_time', timezone.now()).ctime())
@api_view(['GET'])
@authentication_classes((TokenAuthentication, BasicAuthentication, SessionAuthentication))
@permission_classes((IsAuthenticated,))
@parser_classes((JSONParser,))
def group_production_datasets_full(request):
"""
Return the list of containers from cache. If neither output_format nor base_format is set, all containers for \n
the data_type are returned. \n
* output_format: Output format. Example: "DAOD_BPHY1". \n
* base_format: Base format. Example: "BPHY". \n
* data_type: 'mc' or 'data', default is 'mc'. Example: "data".
"""
data_prefix = 'mc'
if request.query_params.get('data_type'):
data_prefix = request.query_params.get('data_type')
formats = []
if request.query_params.get('output_format'):
formats = [request.query_params.get('output_format')]
else:
if request.query_params.get('base_format'):
formats = get_all_formats(request.query_params.get('base_format'))
else:
for output_format in FORMAT_BASES:
formats += get_all_formats(output_format)
result = {'timestamp':str(cache.get('gp_deletion_update_time', timezone.now())),'formats':[]}
for output_format in formats:
format_data = cache.get('gp_del_%s_%s_'%(data_prefix,output_format), None)
if format_data:
result['formats'].append({'output_format':output_format,'data':format_data})
return Response(result)
@api_view(['GET'])
@authentication_classes((TokenAuthentication, BasicAuthentication, SessionAuthentication))
@permission_classes((IsAuthenticated,))
def all_datasests_to_delete(request):
"""
Return the list of all datasets that are marked for deletion. The list is taken from the cache, which is updated once a day.\n
* filter: return datasets with 'filter' value in the name. Example: "DAOD_BPHY1" \n
* data_type: 'mc' or 'data'. Example: "data".
"""
result = cache.get('dataset_to_delete_ALL')
if request.query_params.get('data_type'):
result = [x for x in result if x.startswith(request.query_params.get('data_type'))]
if request.query_params.get('filter'):
result = [x for x in result if request.query_params.get('filter') in x]
return Response(result)
class ListGroupProductionDeletionForUsersView(generics.ListAPIView):
"""
Return the list of containers for the selected output_format and data_type\n
* output_format: Output format. Example: "DAOD_BPHY1". Required\n
* data_type: 'mc' or 'data'. Example: "data". Required
"""
serializer_class = GroupProductionDeletionUserSerializer
authentication_classes = [TokenAuthentication, SessionAuthentication, BasicAuthentication]
permission_classes = [IsAuthenticated]
lookup_fields = [ 'output_format', 'skim', 'ami_tag','data_type']
fields = ['container']
def get_queryset(self):
"""
Optionally restricts the returned containers
by filtering against query parameters in the URL.
"""
filter = {}
if not self.request.query_params.get('output_format', None):
return []
for field in self.lookup_fields:
if field == 'data_type' and self.request.query_params.get(field, None):
filter['container__startswith'] = self.request.query_params[field]
elif field == 'output_format' and self.request.query_params.get(field, None) :
filter['version__gte'] = version_from_format(self.request.query_params[field])
filter[field] = self.request.query_params[field]
elif self.request.query_params.get(field, None): # Ignore empty fields.
filter[field] = self.request.query_params[field]
queryset = GroupProductionDeletion.objects.filter(**filter).order_by('-ami_tag','container')
return queryset
class ListGroupProductionDeletionView(generics.ListAPIView):
serializer_class = GroupProductionDeletionSerializer
lookup_fields = ['dsid', 'output_format', 'version', 'status', 'skim', 'ami_tag','data_type']
def get_queryset(self):
"""
Optionally restricts the returned containers
by filtering against query parameters in the URL.
"""
filter = {}
for field in self.lookup_fields:
if field == 'data_type' and self.request.query_params.get(field, None):
filter['container__startswith'] = self.request.query_params[field]
elif field == 'output_format' and self.request.query_params.get(field, None) and not (self.request.query_params.get('version', None)):
filter['version__gte'] = version_from_format(self.request.query_params[field])
filter[field] = self.request.query_params[field]
elif self.request.query_params.get(field, None): # Ignore empty fields.
filter[field] = self.request.query_params[field]
queryset = GroupProductionDeletion.objects.filter(**filter).order_by('-ami_tag','container')
return queryset
def collect_tags(start_requests):
requests = TRequest.objects.filter(request_type='GROUP', reqid__gte=start_requests)
for request in requests:
if request.phys_group not in ['VALI', 'SOFT']:
if 'valid' not in str(request.project):
if ProductionTask.objects.filter(request=request).exists():
task = ProductionTask.objects.filter(request=request).last()
if not GroupProductionAMITag.objects.filter(ami_tag=task.ami_tag).exists():
print(task.ami_tag)
update_tag_from_ami(task.ami_tag,task.name.startswith('data'))
@api_view(['POST'])
def set_datasets_to_delete(request):
try:
username = request.user.username
deadline = datetime.strptime(request.data['deadline'],"%Y-%m-%dT%H:%M:%S.%fZ")
start_deletion = datetime.strptime(request.data['start_deletion'],"%Y-%m-%dT%H:%M:%S.%fZ")
user = User.objects.get(username=username)
if not user.is_superuser:
return Response('Not enough permissions', status.HTTP_401_UNAUTHORIZED)
last_record = GroupProductionDeletionRequest.objects.last()
if deadline.replace(tzinfo=pytz.utc) < last_record.start_deletion:
return Response('Previous deletion is not yet done', status.HTTP_400_BAD_REQUEST)
new_deletion_request = GroupProductionDeletionRequest()
new_deletion_request.username = username
new_deletion_request.status = 'Waiting'
new_deletion_request.start_deletion = start_deletion
new_deletion_request.deadline = deadline
new_deletion_request.save()
if deadline.replace(tzinfo=pytz.utc) <= timezone.now():
check_deletion_request()
except Exception as e:
return Response('Problem %s'%str(e), status.HTTP_400_BAD_REQUEST)
return Response(GroupProductionDeletionRequestSerializer(new_deletion_request).data)
@app.task()
def check_deletion_request():
if not GroupProductionDeletionRequest.objects.filter(status='Waiting').exists():
return
deletion_request = GroupProductionDeletionRequest.objects.filter(status='Waiting').last()
if datetime.now().replace(tzinfo=pytz.utc) >= deletion_request.deadline:
containers, total_size = find_containers_to_delete(deletion_request.deadline)
deletion_request.size = total_size
deletion_request.containers = len(containers)
for container in containers:
gp_processing = GroupProductionDeletionProcessing()
if GroupProductionDeletionProcessing.objects.filter(container=container).exists():
gp_processing = GroupProductionDeletionProcessing.objects.filter(container=container).last()
gp_processing.container = container
gp_processing.status = 'ToDelete'
gp_processing.save()
deletion_request.status = 'Submitted'
deletion_request.save()
datasets = cache.get('dataset_to_delete_ALL')
datasets = [x[x.find(':')+1:] for x in datasets]
cache.set("datasets_to_be_deleted",datasets, None)
return
@app.task()
def run_deletion():
if not GroupProductionDeletionRequest.objects.filter(status__in=['Submitted','Executing']).exists():
return
if GroupProductionDeletionRequest.objects.filter(status='Submitted').exists():
deletion_request = GroupProductionDeletionRequest.objects.filter(status='Submitted').last()
if datetime.now().replace(tzinfo=pytz.utc) >= deletion_request.start_deletion:
runDeletion.apply_async(countdown=3600)
deletion_request.status = 'Executing'
deletion_request.save()
return
if GroupProductionDeletionRequest.objects.filter(status='Executing').exists():
deletion_request = GroupProductionDeletionRequest.objects.filter(status='Executing').last()
if not GroupProductionDeletionProcessing.objects.filter(status='ToDelete').exists():
deletion_request.status = 'Done'
deletion_request.save()
return
else:
runDeletion.apply_async(countdown=3600)
class GroupProductionDeletionRequestSerializer(serializers.ModelSerializer):
class Meta:
model = GroupProductionDeletionRequest
fields = '__all__'
class ListGroupProductionDeletionRequestsView(generics.ListAPIView):
serializer_class = GroupProductionDeletionRequestSerializer
lookup_fields = ['id']
def get_queryset(self):
"""
Optionally restricts the returned deletion requests
by filtering against query parameters in the URL.
"""
filter = {}
for field in self.lookup_fields:
if self.request.query_params.get(field, None): # Ignore empty fields.
filter[field] = self.request.query_params[field]
queryset = GroupProductionDeletionRequest.objects.filter(**filter).order_by('-timestamp')
return queryset
def find_containers_to_delete(deletion_day, total_containers=None, size=None):
days_to_delete = (deletion_day - datetime.now().replace(tzinfo=pytz.utc)).days
container_to_check = GroupProductionDeletion.objects.filter(version__gte=1)
containers_to_delete = []
total_size = 0
for gp_container in container_to_check:
if ((gp_container.days_to_delete < days_to_delete) and (gp_container.version >= version_from_format(gp_container.output_format)) and
(not GroupProductionDeletionProcessing.objects.filter(container=gp_container.container, status='ToDelete').exists())):
containers_to_delete.append(gp_container.container)
total_size += gp_container.size
if size and total_size > size:
break
if total_containers and len(containers_to_delete)>=total_containers:
break
return containers_to_delete, total_size
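# Hypothetical call: pick at most 100 containers, or stop once 50 TB of
# superseded data has been collected, whichever limit is hit first:
#   containers, size = find_containers_to_delete(
#       deadline, total_containers=100, size=50 * 10**12)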
@app.task(time_limit=10800)
def runDeletion(lifetime=3600):
containers_to_delete = GroupProductionDeletionProcessing.objects.filter(status='ToDelete')
all_datasets = cache.get("datasets_to_be_deleted")
ddm = DDM()
for container_to_delete in containers_to_delete:
datasets = ddm.dataset_in_container(container_to_delete.container)
datasets = [x[x.find(':')+1:] for x in datasets]
all_marked = True
deleted_datasets = 0
for dataset in datasets:
if dataset not in all_datasets:
_logger.error('Dataset {dataset} is not marked for deletion'.format(dataset=dataset))
_jsonLogger.error('Dataset {dataset} is not marked for deletion'.format(dataset=dataset), extra={'dataset':dataset})
all_marked = False
if all_marked:
for dataset in datasets:
try:
                    _logger.info('{dataset} is about to be deleted'.format(dataset=dataset))
                    _jsonLogger.info('{dataset} is about to be deleted'.format(dataset=dataset), extra={'dataset':dataset, 'container':container_to_delete.container})
ddm.deleteDataset(dataset, lifetime)
deleted_datasets += 1
except Exception as e:
_logger.error('Problem with {dataset} deletion error: {error}'.format(dataset=dataset,error=str(e)))
_jsonLogger.error('Problem with {dataset} deletion'.format(dataset=dataset), extra={'dataset':dataset, 'container':container_to_delete.container, 'error':str(e)})
if deleted_datasets > 0:
container_to_delete.command_timestamp = timezone.now()
container_to_delete.deleted_datasets = deleted_datasets
if len(datasets) == deleted_datasets:
container_to_delete.status = 'Deleted'
container_to_delete.save()
else:
container_to_delete.status = 'Problematic'
container_to_delete.save() | 47.201188 | 190 | 0.648976 | 6,322 | 55,603 | 5.451439 | 0.070864 | 0.019673 | 0.01828 | 0.014624 | 0.540535 | 0.469388 | 0.409674 | 0.358867 | 0.317172 | 0.292798 | 0 | 0.003618 | 0.249447 | 55,603 | 1,178 | 191 | 47.201188 | 0.822203 | 0.037732 | 0 | 0.399221 | 0 | 0 | 0.082265 | 0.006204 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048685 | false | 0.000974 | 0.025316 | 0.002921 | 0.153846 | 0.009737 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce178974ee8fea5f9247c4d0f527ac0365b295c5 | 1,396 | py | Python | Tools/Format/linux.py | Apollo-o/Whistle | f6df3b67be81fe36f0ecb8b4831bc5dc9cdc4a52 | [
"CC0-1.0"
] | null | null | null | Tools/Format/linux.py | Apollo-o/Whistle | f6df3b67be81fe36f0ecb8b4831bc5dc9cdc4a52 | [
"CC0-1.0"
] | null | null | null | Tools/Format/linux.py | Apollo-o/Whistle | f6df3b67be81fe36f0ecb8b4831bc5dc9cdc4a52 | [
"CC0-1.0"
] | null | null | null | # AUTHOR: o-o
# DATE: 2/27/2019
# DESCRIPTION: Executes Linux Commands.
from pynput.keyboard import Key, Controller
import time
# GLOBAL VARIABLES.
TIME = 3
# Creates a new tab.
# Precondition: None.
# Postcondition: Creates a new tab.
def tab():
# Keyboard (Object)
keyboard = Controller()
# SHIFT + CTRL + T
keyboard.press(Key.shift_l)
keyboard.press(Key.ctrl_l)
keyboard.press("t")
keyboard.release(Key.shift_l)
keyboard.release(Key.ctrl_l)
keyboard.release("t")
time.sleep(TIME)
# Executes a Command.
# Precondition: A String.
# Postcondition: Executes a Command.
def command(cmd):
    # Keyboard (Object)
    keyboard = Controller()
    # COMMAND
    keyboard.type(cmd)
    time.sleep(TIME)
# ENTER
keyboard.press(Key.enter)
keyboard.release(Key.enter)
time.sleep(TIME)
# Closes the Current Tab.
# Precondition: None.
# Postcondition: Closes the Current Tab.
def end():
# Keyboard (Object)
keyboard = Controller()
# CTRL + C
keyboard.press(Key.ctrl_l)
keyboard.press("c")
keyboard.release(Key.ctrl_l)
keyboard.release("c")
time.sleep(TIME)
# SHIFT + CTRL + W
keyboard.press(Key.shift_l)
keyboard.press(Key.ctrl_l)
keyboard.press("w")
keyboard.release(Key.shift_l)
keyboard.release(Key.ctrl_l)
keyboard.release("w")
time.sleep(TIME)
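# A minimal usage sketch (assumes a terminal emulator that understands the
# SHIFT+CTRL+T / SHIFT+CTRL+W shortcuts currently holds keyboard focus):
#
# tab()              # open a new terminal tab
# command("ls -la")  # type the command and press ENTER
# end()              # send CTRL+C, then close the tab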
| 17.45 | 43 | 0.659026 | 178 | 1,396 | 5.11236 | 0.258427 | 0.098901 | 0.105495 | 0.105495 | 0.338462 | 0.338462 | 0.338462 | 0.259341 | 0.259341 | 0.259341 | 0 | 0.007326 | 0.217765 | 1,396 | 79 | 44 | 17.670886 | 0.826007 | 0.307307 | 0 | 0.545455 | 0 | 0 | 0.006349 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.060606 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce1ba66191310ee7bbd6fc9be7fdc1a1bddc70f1 | 4,555 | py | Python | negspacy/test_en.py | arianpasquali/negspacy | da9c43f4fc46c2a18076846ac660642167c547c3 | [
"MIT"
] | null | null | null | negspacy/test_en.py | arianpasquali/negspacy | da9c43f4fc46c2a18076846ac660642167c547c3 | [
"MIT"
] | null | null | null | negspacy/test_en.py | arianpasquali/negspacy | da9c43f4fc46c2a18076846ac660642167c547c3 | [
"MIT"
] | null | null | null | import pytest
import spacy
from negation import Negex
from spacy.pipeline import EntityRuler
def build_docs():
docs = list()
docs.append(
(
"Patient denies Apple Computers but has Steve Jobs. He likes USA.",
[("Apple Computers", True), ("Steve Jobs", False), ("USA", False)],
)
)
docs.append(
(
"No history of USA, Germany, Italy, Canada, or Brazil",
[
("USA", True),
("Germany", True),
("Italy", True),
("Canada", True),
("Brazil", True),
],
)
)
docs.append(("That might not be Barack Obama.", [("Barack Obama", False)]))
return docs
def build_med_docs():
docs = list()
docs.append(
(
"Patient denies cardiovascular disease but has headaches. No history of smoking. Alcoholism unlikely. Smoking not ruled out.",
[
("Patient denies", False),
("cardiovascular disease", True),
("headaches", False),
("No history", True),
("smoking", True),
("Alcoholism", True),
("Smoking", False),
],
)
)
docs.append(
(
"No history of headaches, prbc, smoking, acid reflux, or GERD.",
[
("No history", True),
("headaches", True),
("prbc", True),
("smoking", True),
("acid reflux", True),
("GERD", True),
],
)
)
docs.append(
(
"Alcoholism was not the cause of liver disease.",
[("Alcoholism", True), ("cause", False), ("liver disease", False)],
)
)
docs.append(
(
"There was no headache for this patient.",
[("no headache", True), ("patient", True)],
)
)
return docs
def test():
nlp = spacy.load("en_core_web_sm")
negex = Negex(nlp)
nlp.add_pipe(negex, last=True)
docs = build_docs()
for d in docs:
doc = nlp(d[0])
for i, e in enumerate(doc.ents):
print(e.text, e._.negex)
assert (e.text, e._.negex) == d[1][i]
def test_en():
nlp = spacy.load("en_core_web_sm")
negex = Negex(nlp, language= "en")
nlp.add_pipe(negex, last=True)
docs = build_docs()
for d in docs:
doc = nlp(d[0])
for i, e in enumerate(doc.ents):
print(e.text, e._.negex)
assert (e.text, e._.negex) == d[1][i]
def test_umls():
nlp = spacy.load("en_core_sci_sm")
negex = Negex(
nlp, language="en_clinical", ent_types=["ENTITY"], chunk_prefix=["no"]
)
nlp.add_pipe(negex, last=True)
docs = build_med_docs()
for d in docs:
doc = nlp(d[0])
for i, e in enumerate(doc.ents):
print(e.text, e._.negex)
assert (e.text, e._.negex) == d[1][i]
def test_umls2():
nlp = spacy.load("en_core_sci_sm")
negex = Negex(
nlp, language="en_clinical_sensitive", ent_types=["ENTITY"], chunk_prefix=["no"]
)
nlp.add_pipe(negex, last=True)
docs = build_med_docs()
for d in docs:
doc = nlp(d[0])
for i, e in enumerate(doc.ents):
print(e.text, e._.negex)
assert (e.text, e._.negex) == d[1][i]
# blocked by spacy 2.1.8 issue. Adding back after spacy 2.2.
# def test_no_ner():
# nlp = spacy.load("en_core_web_sm", disable=["ner"])
# negex = Negex(nlp)
# nlp.add_pipe(negex, last=True)
# with pytest.raises(ValueError):
# doc = nlp("this doc has not been NERed")
def test_own_terminology():
nlp = spacy.load("en_core_web_sm")
negex = Negex(nlp, termination=["whatever"])
nlp.add_pipe(negex, last=True)
doc = nlp("He does not like Steve Jobs whatever he says about Barack Obama.")
assert doc.ents[1]._.negex == False
def test_get_patterns():
nlp = spacy.load("en_core_web_sm")
negex = Negex(nlp)
patterns = negex.get_patterns()
assert type(patterns) == dict
assert len(patterns) == 4
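# The four groups returned by get_patterns() are expected - going by the
# usual NegEx termset convention - to be pseudo negations, preceding
# negations, following negations and termination terms, which is why the
# test above asserts len(patterns) == 4.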
def test_issue7():
nlp = spacy.load("en_core_web_sm")
negex = Negex(nlp)
nlp.add_pipe(negex, last=True)
    ruler = EntityRuler(nlp)
    patterns = [{"label": "SOFTWARE", "pattern": "spacy"}]
    ruler.add_patterns(patterns)
    nlp.add_pipe(ruler, before="ner")
    doc = nlp("fgfgdghgdh")
if __name__ == "__main__":
test()
test_en()
test_umls()
test_own_terminology()
test_get_patterns()
test_issue7()
| 26.794118 | 138 | 0.529967 | 557 | 4,555 | 4.186715 | 0.238779 | 0.024014 | 0.041166 | 0.048027 | 0.453688 | 0.453688 | 0.417238 | 0.377358 | 0.377358 | 0.361921 | 0 | 0.005857 | 0.325357 | 4,555 | 169 | 139 | 26.952663 | 0.75301 | 0.060593 | 0 | 0.427536 | 0 | 0.007246 | 0.21447 | 0.004917 | 0 | 0 | 0 | 0 | 0.050725 | 1 | 0.065217 | false | 0 | 0.028986 | 0 | 0.108696 | 0.028986 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce1cf9c0e53928b53cb5e8707872a03cda7bc816 | 23,392 | py | Python | raw_packet/Scanners/scanner.py | 4ekin/raw-packet | 40322ec2f6c3ce0647ba69283df40fa8da4817e2 | [
"MIT"
] | null | null | null | raw_packet/Scanners/scanner.py | 4ekin/raw-packet | 40322ec2f6c3ce0647ba69283df40fa8da4817e2 | [
"MIT"
] | null | null | null | raw_packet/Scanners/scanner.py | 4ekin/raw-packet | 40322ec2f6c3ce0647ba69283df40fa8da4817e2 | [
"MIT"
] | null | null | null | # region Description
"""
scanner.py: Scan local network
Author: Vladimir Ivanov
License: MIT
Copyright 2020, Raw-packet Project
"""
# endregion
# region Import
# region Raw-packet modules
from raw_packet.Utils.base import Base
from raw_packet.Scanners.arp_scanner import ArpScan
from raw_packet.Scanners.icmpv6_scanner import ICMPv6Scan
# endregion
# region Import libraries
import xml.etree.ElementTree as ET
import subprocess as sub
from prettytable import PrettyTable
from os.path import dirname, abspath, isfile
from os import remove
from typing import Union, List, Dict
current_path = dirname((abspath(__file__)))
# endregion
# endregion
# region Authorship information
__author__ = 'Vladimir Ivanov'
__copyright__ = 'Copyright 2020, Raw-packet Project'
__credits__ = ['']
__license__ = 'MIT'
__version__ = '0.2.1'
__maintainer__ = 'Vladimir Ivanov'
__email__ = 'ivanov.vladimir.mail@gmail.com'
__status__ = 'Development'
# endregion
# region Main class - Scanner
class Scanner:
# region Variables
base: Base = Base()
arp_scan: ArpScan = ArpScan()
icmpv6_scan: ICMPv6Scan = ICMPv6Scan()
nmap_scan_result: str = current_path + '/nmap_scan.xml'
# endregion
# region Init
def __init__(self):
if not self.base.check_installed_software('nmap'):
self.base.print_error('Could not find program: ', 'nmap')
exit(1)
# endregion
# region Apple device selection
def apple_device_selection(self, apple_devices: Union[None, List[List[str]]],
exit_on_failure: bool = False) -> Union[None, List[str]]:
try:
assert apple_devices is not None, 'List of Apple devices is None!'
assert len(apple_devices) != 0, 'List of Apple devices is empty!'
for apple_device in apple_devices:
assert len(apple_device) == 3, \
'Bad list of Apple device, example: [["192.168.0.1", "12:34:56:78:90:ab", "Apple, Inc."]]'
assert (self.base.ip_address_validation(ip_address=apple_device[0]) or
self.base.ipv6_address_validation(ipv6_address=apple_device[0])), \
'Bad list of Apple device, example: [["192.168.0.1", "12:34:56:78:90:ab", "Apple, Inc."]]'
assert self.base.mac_address_validation(mac_address=apple_device[1]), \
'Bad list of Apple device, example: [["192.168.0.1", "12:34:56:78:90:ab", "Apple, Inc."]]'
apple_device: Union[None, List[str]] = None
if len(apple_devices) == 1:
apple_device = apple_devices[0]
self.base.print_info('Only one Apple device found:')
self.base.print_success(apple_device[0] + ' (' + apple_device[1] + ') ', apple_device[2])
if len(apple_devices) > 1:
self.base.print_info('Apple devices found:')
device_index: int = 1
apple_devices_pretty_table = PrettyTable([self.base.cINFO + 'Index' + self.base.cEND,
self.base.cINFO + 'IP address' + self.base.cEND,
self.base.cINFO + 'MAC address' + self.base.cEND,
self.base.cINFO + 'Vendor' + self.base.cEND])
for apple_device in apple_devices:
apple_devices_pretty_table.add_row([str(device_index), apple_device[0],
apple_device[1], apple_device[2]])
device_index += 1
print(apple_devices_pretty_table)
device_index -= 1
current_device_index = input(self.base.c_info + 'Set device index from range (1-' +
str(device_index) + '): ')
if not current_device_index.isdigit():
self.base.print_error('Your input data is not digit!')
return None
if any([int(current_device_index) < 1, int(current_device_index) > device_index]):
self.base.print_error('Your number is not within range (1-' + str(device_index) + ')')
return None
current_device_index = int(current_device_index) - 1
apple_device = apple_devices[current_device_index]
return apple_device
except KeyboardInterrupt:
self.base.print_info('Exit')
exit(0)
except AssertionError as Error:
self.base.print_error(Error.args[0])
if exit_on_failure:
exit(1)
return None
# endregion
# region IPv4 device selection
def ipv4_device_selection(self, ipv4_devices: Union[None, List[Dict[str, str]]],
exit_on_failure: bool = False) -> Union[None, Dict[str, str]]:
try:
assert ipv4_devices is not None, 'List of IPv4 devices is None!'
assert len(ipv4_devices) != 0, 'List of IPv4 devices is empty!'
for ipv4_device in ipv4_devices:
                assert len(ipv4_device) == 3, \
                    'Bad dict of IPv4 device, example: ' + \
                    '[{"ip-address": "192.168.0.1", "mac-address": "12:34:56:78:90:ab", "vendor": "Apple, Inc."}]'
                assert 'ip-address' in ipv4_device.keys(), \
                    'Bad dict of IPv4 device, example: ' + \
                    '[{"ip-address": "192.168.0.1", "mac-address": "12:34:56:78:90:ab", "vendor": "Apple, Inc."}]'
                assert self.base.ip_address_validation(ipv4_device['ip-address']), \
                    'Bad dict of IPv4 device, example: ' + \
                    '[{"ip-address": "192.168.0.1", "mac-address": "12:34:56:78:90:ab", "vendor": "Apple, Inc."}]'
                assert 'mac-address' in ipv4_device.keys(), \
                    'Bad dict of IPv4 device, example: ' + \
                    '[{"ip-address": "192.168.0.1", "mac-address": "12:34:56:78:90:ab", "vendor": "Apple, Inc."}]'
                assert self.base.mac_address_validation(ipv4_device['mac-address']), \
                    'Bad dict of IPv4 device, example: ' + \
                    '[{"ip-address": "192.168.0.1", "mac-address": "12:34:56:78:90:ab", "vendor": "Apple, Inc."}]'
                assert 'vendor' in ipv4_device.keys(), \
                    'Bad dict of IPv4 device, example: ' + \
                    '[{"ip-address": "192.168.0.1", "mac-address": "12:34:56:78:90:ab", "vendor": "Apple, Inc."}]'
ipv4_device: Union[None, Dict[str, str]] = None
# region IPv4 devices is found
# region Only one IPv4 device found
if len(ipv4_devices) == 1:
ipv4_device: Dict[str, str] = ipv4_devices[0]
self.base.print_info('Only one IPv4 device found:')
self.base.print_success(ipv4_device['ip-address'] + ' (' + ipv4_device['mac-address'] + ') ' +
ipv4_device['vendor'])
# endregion
# region More than one IPv4 device found
            elif len(ipv4_devices) > 1:
self.base.print_success('Found ', str(len(ipv4_devices)), ' IPv4 alive hosts!')
device_index: int = 1
pretty_table = PrettyTable([self.base.info_text('Index'),
self.base.info_text('IPv4 address'),
self.base.info_text('MAC address'),
self.base.info_text('Vendor')])
for ipv4_device in ipv4_devices:
pretty_table.add_row([str(device_index), ipv4_device['ip-address'],
ipv4_device['mac-address'], ipv4_device['vendor']])
device_index += 1
print(pretty_table)
device_index -= 1
current_device_index: Union[int, str] = \
input(self.base.c_info + 'Set device index from range (1-' + str(device_index) + '): ')
assert current_device_index.isdigit(), \
'Your input data is not digit!'
current_device_index: int = int(current_device_index)
assert not any([current_device_index < 1, current_device_index > device_index]), \
'Your number is not within range (1-' + str(device_index) + ')'
current_device_index: int = int(current_device_index) - 1
ipv4_device: Dict[str, str] = ipv4_devices[current_device_index]
# endregion
# endregion
# region IPv4 devices not found
else:
if exit_on_failure:
self.base.print_error('Could not find IPv4 devices!')
exit(1)
# endregion
return ipv4_device
except KeyboardInterrupt:
self.base.print_info('Exit')
exit(0)
except AssertionError as Error:
self.base.print_error(Error.args[0])
if exit_on_failure:
exit(1)
return None
# endregion
# region IPv6 device selection
def ipv6_device_selection(self, ipv6_devices: Union[None, List[Dict[str, str]]],
exit_on_failure: bool = False) -> Union[None, Dict[str, str]]:
try:
assert ipv6_devices is not None, 'List of IPv6 devices is None!'
assert len(ipv6_devices) != 0, 'List of IPv6 devices is empty!'
for ipv6_device in ipv6_devices:
assert len(ipv6_device) == 3, \
'Bad dict of IPv6 device, example: ' + \
'[{"ip-address": "fd00::1", "mac-address": "12:34:56:78:90:ab", "vendor": "Apple, Inc."}]'
assert 'ip-address' in ipv6_device.keys(), \
'Bad dict of IPv6 device, example: ' + \
'[{"ip-address": "fd00::1", "mac-address": "12:34:56:78:90:ab", "vendor": "Apple, Inc."}]'
assert self.base.ipv6_address_validation(ipv6_device['ip-address']), \
'Bad dict of IPv6 device, example: ' + \
'[{"ip-address": "fd00::1", "mac-address": "12:34:56:78:90:ab", "vendor": "Apple, Inc."}]'
assert 'mac-address' in ipv6_device.keys(), \
'Bad dict of IPv6 device, example: ' + \
'[{"ip-address": "fd00::1", "mac-address": "12:34:56:78:90:ab", "vendor": "Apple, Inc."}]'
assert self.base.mac_address_validation(ipv6_device['mac-address']), \
'Bad dict of IPv6 device, example: ' + \
'[{"ip-address": "fd00::1", "mac-address": "12:34:56:78:90:ab", "vendor": "Apple, Inc."}]'
assert 'vendor' in ipv6_device.keys(), \
'Bad dict of IPv6 device, example: ' + \
'[{"ip-address": "fd00::1", "mac-address": "12:34:56:78:90:ab", "vendor": "Apple, Inc."}]'
ipv6_device: Union[None, Dict[str, str]] = None
# region IPv6 devices is found
# region Only one IPv6 device found
if len(ipv6_devices) == 1:
ipv6_device: Dict[str, str] = ipv6_devices[0]
self.base.print_info('Only one IPv6 device found:')
self.base.print_success(ipv6_device['ip-address'] + ' (' + ipv6_device['mac-address'] + ') ' +
ipv6_device['vendor'])
# endregion
# region More than one IPv6 device found
            elif len(ipv6_devices) > 1:
self.base.print_success('Found ', str(len(ipv6_devices)), ' IPv6 alive hosts!')
device_index: int = 1
pretty_table = PrettyTable([self.base.info_text('Index'),
self.base.info_text('IPv6 address'),
self.base.info_text('MAC address'),
self.base.info_text('Vendor')])
for ipv6_device in ipv6_devices:
pretty_table.add_row([str(device_index), ipv6_device['ip-address'],
ipv6_device['mac-address'], ipv6_device['vendor']])
device_index += 1
print(pretty_table)
device_index -= 1
current_device_index: Union[int, str] = \
input(self.base.c_info + 'Set device index from range (1-' + str(device_index) + '): ')
assert current_device_index.isdigit(), \
'Your input data is not digit!'
current_device_index: int = int(current_device_index)
assert not any([current_device_index < 1, current_device_index > device_index]), \
'Your number is not within range (1-' + str(device_index) + ')'
current_device_index: int = int(current_device_index) - 1
ipv6_device: Dict[str, str] = ipv6_devices[current_device_index]
# endregion
# endregion
# region IPv6 devices not found
else:
if exit_on_failure:
self.base.print_error('Could not find IPv6 devices!')
exit(1)
# endregion
return ipv6_device
except KeyboardInterrupt:
self.base.print_info('Exit')
exit(0)
except AssertionError as Error:
self.base.print_error(Error.args[0])
if exit_on_failure:
exit(1)
return None
# endregion
# region Find all devices in local network
def find_ip_in_local_network(self,
network_interface: str = 'eth0',
timeout: int = 3, retry: int = 3,
show_scan_percentage: bool = True,
exit_on_failure: bool = True) -> Union[None, List[str]]:
try:
local_network_ip_addresses: List[str] = list()
arp_scan_results = self.arp_scan.scan(network_interface=network_interface, timeout=timeout,
retry=retry, exit_on_failure=False, check_vendor=True,
show_scan_percentage=show_scan_percentage)
assert len(arp_scan_results) != 0, \
'Could not find network devices on interface: ' + self.base.error_text(network_interface)
for device in arp_scan_results:
if self.base.ip_address_validation(device['ip-address']):
local_network_ip_addresses.append(device['ip-address'])
return local_network_ip_addresses
except KeyboardInterrupt:
self.base.print_info('Exit')
exit(0)
except AssertionError as Error:
self.base.print_error(Error.args[0])
if exit_on_failure:
exit(1)
return None
# endregion
# region Find Apple devices in local network with arp_scan
def find_apple_devices_by_mac(self, network_interface: str = 'eth0',
timeout: int = 3, retry: int = 3,
show_scan_percentage: bool = True,
exit_on_failure: bool = True) -> Union[None, List[List[str]]]:
try:
apple_devices: List[List[str]] = list()
arp_scan_results = self.arp_scan.scan(network_interface=network_interface, timeout=timeout,
retry=retry, exit_on_failure=False, check_vendor=True,
show_scan_percentage=show_scan_percentage)
assert len(arp_scan_results) != 0, \
'Could not find network devices on interface: ' + self.base.error_text(network_interface)
for device in arp_scan_results:
if 'Apple' in device['vendor']:
apple_devices.append([device['ip-address'], device['mac-address'], device['vendor']])
assert len(apple_devices) != 0, \
'Could not find Apple devices on interface: ' + self.base.error_text(network_interface)
return apple_devices
except KeyboardInterrupt:
self.base.print_info('Exit')
exit(0)
except AssertionError as Error:
self.base.print_error(Error.args[0])
if exit_on_failure:
exit(1)
return None
# endregion
# region Find Apple devices in local network with ICMPv6 scan
def find_apple_devices_by_mac_ipv6(self, network_interface: str = 'eth0',
timeout: int = 5, retry: int = 3,
exit_on_failure: bool = True) -> Union[None, List[List[str]]]:
try:
apple_devices: List[List[str]] = list()
icmpv6_scan_results = self.icmpv6_scan.scan(network_interface=network_interface, timeout=timeout,
retry=retry, exit_on_failure=False, check_vendor=True)
assert len(icmpv6_scan_results) != 0, \
'Could not find IPv6 network devices on interface: ' + self.base.error_text(network_interface)
for device in icmpv6_scan_results:
if 'Apple' in device['vendor']:
apple_devices.append([device['ip-address'], device['mac-address'], device['vendor']])
assert len(apple_devices) != 0, \
'Could not find Apple devices on interface: ' + self.base.error_text(network_interface)
return apple_devices
except KeyboardInterrupt:
self.base.print_info('Exit')
exit(0)
except AssertionError as Error:
self.base.print_error(Error.args[0])
if exit_on_failure:
exit(1)
return None
# endregion
# region Find IPv6 devices in local network with icmpv6_scan
def find_ipv6_devices(self, network_interface: str = 'eth0',
timeout: int = 5, retry: int = 3,
exclude_ipv6_addresses: Union[None, List[str]] = None,
exit_on_failure: bool = True) -> Union[None, List[Dict[str, str]]]:
try:
ipv6_devices: List[Dict[str, str]] = list()
ipv6_scan_results = self.icmpv6_scan.scan(network_interface=network_interface, timeout=timeout, retry=retry,
target_mac_address=None, check_vendor=True, exit_on_failure=False)
assert len(ipv6_scan_results) != 0, \
'Could not find IPv6 network devices on interface: ' + self.base.error_text(network_interface)
for device in ipv6_scan_results:
if exclude_ipv6_addresses is not None:
if device['ip-address'] not in exclude_ipv6_addresses:
ipv6_devices.append(device)
else:
ipv6_devices.append(device)
assert len(ipv6_devices) != 0, \
'Could not find IPv6 devices on interface: ' + self.base.error_text(network_interface)
return ipv6_devices
except KeyboardInterrupt:
self.base.print_info('Exit')
exit(0)
except AssertionError as Error:
self.base.print_error(Error.args[0])
if exit_on_failure:
exit(1)
return None
# endregion
# region Find Apple devices in local network with nmap
def find_apple_devices_with_nmap(self, network_interface: str = 'eth0',
exit_on_failure: bool = True) -> Union[None, List[List[str]]]:
try:
if isfile(Scanner.nmap_scan_result):
remove(Scanner.nmap_scan_result)
local_network_devices: List[List[str]] = list()
apple_devices: List[List[str]] = list()
local_network = self.base.get_first_ip_on_interface(network_interface) + '-' + \
self.base.get_last_ip_on_interface(network_interface).split('.')[3]
self.base.print_info('Start nmap scan: ', 'nmap ' + local_network + ' -n -O --osscan-guess -T5 -e ' +
network_interface + ' -oX ' + Scanner.nmap_scan_result)
nmap_process = sub.Popen(['nmap ' + local_network + ' -n -O --osscan-guess -T5 -e ' +
network_interface + ' -oX ' + Scanner.nmap_scan_result],
shell=True, stdout=sub.PIPE)
nmap_process.wait()
nmap_report = ET.parse(Scanner.nmap_scan_result)
root_tree = nmap_report.getroot()
for element in root_tree:
if element.tag == 'host':
state = element.find('status').attrib['state']
if state == 'up':
ip_address: str = ''
mac_address: str = ''
description: str = ''
for address in element.findall('address'):
if address.attrib['addrtype'] == 'ipv4':
ip_address = address.attrib['addr']
if address.attrib['addrtype'] == 'mac':
mac_address = address.attrib['addr'].lower()
try:
description = address.attrib['vendor'] + ' device'
except KeyError:
pass
for os_info in element.find('os'):
if os_info.tag == 'osmatch':
try:
description += ', ' + os_info.attrib['name']
except TypeError:
pass
break
local_network_devices.append([ip_address, mac_address, description])
assert len(local_network_devices) != 0, \
'Could not find any devices on interface: ' + self.base.error_text(network_interface)
for network_device in local_network_devices:
if 'Apple' in network_device[2] or 'Mac OS' in network_device[2] or 'iOS' in network_device[2]:
apple_devices.append(network_device)
assert len(apple_devices) != 0, \
'Could not find Apple devices on interface: ' + self.base.error_text(network_interface)
return apple_devices
except OSError:
self.base.print_error('Something went wrong while trying to run ', '`nmap`')
exit(2)
except KeyboardInterrupt:
self.base.print_info('Exit')
exit(0)
except AssertionError as Error:
self.base.print_error(Error.args[0])
if exit_on_failure:
exit(1)
return None
# endregion
# endregion
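# A minimal usage sketch (not part of the original module); the interface
# name 'eth0' is an assumption and raw-socket scanning typically requires
# root privileges:
#
# if __name__ == '__main__':
#     scanner = Scanner()
#     apple_devices = scanner.find_apple_devices_by_mac(network_interface='eth0')
#     device = scanner.apple_device_selection(apple_devices, exit_on_failure=True)
#     print(device)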
| 48.835073 | 120 | 0.535311 | 2,578 | 23,392 | 4.655935 | 0.085725 | 0.047322 | 0.034658 | 0.009998 | 0.752728 | 0.683829 | 0.640173 | 0.617762 | 0.571357 | 0.543614 | 0 | 0.02984 | 0.35961 | 23,392 | 478 | 121 | 48.937238 | 0.771429 | 0.048521 | 0 | 0.532787 | 0 | 0.040984 | 0.166021 | 0.001351 | 0 | 0 | 0 | 0 | 0.114754 | 1 | 0.02459 | false | 0.005464 | 0.02459 | 0 | 0.112022 | 0.095628 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce2237546bebadd3e7fe3e30ab5c403cbd55c422 | 2,282 | py | Python | valet/views.py | rayhu-osu/vcube | ff1af048adb8a9f1007368150a78b309b4d821af | [
"MIT"
] | 1 | 2019-02-20T18:47:04.000Z | 2019-02-20T18:47:04.000Z | valet/views.py | rayhu-osu/vcube | ff1af048adb8a9f1007368150a78b309b4d821af | [
"MIT"
] | null | null | null | valet/views.py | rayhu-osu/vcube | ff1af048adb8a9f1007368150a78b309b4d821af | [
"MIT"
] | null | null | null | from django.shortcuts import render
from vendor.models import Store
from cart.models import Order
from .models import StoreSequence, ConsumerSequence
from vip.models import Item
from sign_up.models import Consumer
# Create your views here.
def index(request, driver_id):
return render(request, 'valet/index.html', {'driver_id':driver_id})
def availability(request, driver_id):
#item_list = Item.objects.all()
context = {'driver_id': driver_id}
return render(request, 'valet/availability.html', context)
# order view shows the store sequence
def order(request, driver_id):  # order view based on driver id
order_num = Order.objects.filter(processed=True).count()
seq = StoreSequence.objects.get(driver__id=driver_id)
store_id_list = str(seq).split()
selected_store = []
for store_id in store_id_list:
selected_store.append(Store.objects.get(id=store_id))
context = {'selected_store': selected_store, 'order_num':order_num, 'driver_id': driver_id}
return render(request, 'valet/order.html', context)
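# NOTE: str(StoreSequence) and str(ConsumerSequence) are assumed to render
# the ids as a space-separated string, e.g. "3 7 12", which is why a plain
# split() in the views above and below is enough to recover them.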
# store detail shows items in each store
def store_detail(request, driver_id, store_id):
item_list = Item.objects.filter(store__id=store_id, order__processed=True).distinct()
context = {'store_id':store_id, 'driver_id': driver_id, 'item_list':item_list}
return render(request, 'valet/store_detail.html', context)
# deliver shows the consumer sequence
def deliver(request, driver_id):
order_num = Order.objects.filter(processed=True).count()
seq = ConsumerSequence.objects.get(driver__id=driver_id)
consumer_id_list = str(seq).split()
selected_consumer = []
for consumer_id in consumer_id_list:
selected_consumer.append(Consumer.objects.get(id=consumer_id))
context = {'selected_consumer': selected_consumer, 'order_num':order_num, 'driver_id': driver_id}
return render(request, 'valet/deliver.html', context)
# shows the orders of each consumer with item detail
def deliver_detail(request, driver_id, consumer_id):
order_list = Order.objects.filter(consumer__id=consumer_id, processed=True).distinct()
context = {'order_list':order_list, 'driver_id': driver_id, 'consumer_id':consumer_id}
return render(request, 'valet/deliver_detail.html', context)
| 33.558824 | 101 | 0.749343 | 318 | 2,282 | 5.138365 | 0.194969 | 0.112607 | 0.05508 | 0.078335 | 0.329253 | 0.287638 | 0.160343 | 0.160343 | 0.135863 | 0.135863 | 0 | 0 | 0.141543 | 2,282 | 67 | 102 | 34.059701 | 0.834099 | 0.106924 | 0 | 0.054054 | 0 | 0 | 0.129128 | 0.034993 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162162 | false | 0 | 0.162162 | 0.027027 | 0.486486 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce268011c87075269a8d3cb6e5ceffb23a92c9c9 | 3,103 | py | Python | examples/vector_basics.py | DrewMQ/pygame_tutorials | a9c079544b6cc168d0dce3d927189183869709f7 | [
"MIT"
] | 544 | 2017-04-04T00:51:26.000Z | 2022-03-30T18:38:47.000Z | examples/vector_basics.py | torietyler/pygame_tutorials | dd5574788cfbfc4736d7d507b5b3d15eb5f1b115 | [
"MIT"
] | 9 | 2018-12-24T19:04:09.000Z | 2020-10-02T15:25:20.000Z | examples/vector_basics.py | torietyler/pygame_tutorials | dd5574788cfbfc4736d7d507b5b3d15eb5f1b115 | [
"MIT"
] | 685 | 2017-02-05T09:21:22.000Z | 2022-03-29T12:21:16.000Z | # pg template - skeleton for a new pg project
import pygame as pg
from math import atan2, cos, sin
WIDTH = 800
HEIGHT = 600
FPS = 30
# define colors
WHITE = (255, 255, 255)
BLACK = (0, 0, 0)
RED = (255, 0, 0)
GREEN = (0, 255, 0)
YELLOW = (255, 255, 0)
# initialize pg and create window
pg.init()
pg.mixer.init()
screen = pg.display.set_mode((WIDTH, HEIGHT))
pg.display.set_caption("Vector Basics")
clock = pg.time.Clock()
vec = pg.math.Vector2
class Bullet(pg.sprite.Sprite):
def __init__(self, player):
pg.sprite.Sprite.__init__(self)
self.image = pg.Surface((5, 5))
self.image.fill(YELLOW)
self.rect = self.image.get_rect()
a = (pg.mouse.get_pos() - player.pos).angle_to(vec(1, 0))
self.pos = player.pos + vec(50, 0).rotate(-a)
self.vel = vec(400, 0).rotate(-a)
self.spawn_time = pg.time.get_ticks()
def update(self, dt):
self.pos += self.vel * dt
self.rect.center = self.pos
if pg.time.get_ticks() - self.spawn_time > 2000:
self.kill()
class Player(pg.sprite.Sprite):
def __init__(self):
pg.sprite.Sprite.__init__(self)
self.image = pg.Surface((50, 50))
self.image.fill(GREEN)
self.rect = self.image.get_rect()
self.pos = vec(WIDTH / 2, HEIGHT / 2)
self.rect.center = self.pos
def update(self, dt):
# pass
self.move_to_mouse(dt)
# self.move_to_mouse_no_vector(dt)
# self.angle_to_mouse(dt)
def angle_to_mouse(self, dt):
d = pg.mouse.get_pos() - self.pos
a = d.angle_to(vec(1, 0))
pg.display.set_caption(str(a))
def move_to_mouse(self, dt):
mpos = pg.mouse.get_pos()
# self.vel = (mpos - self.pos).normalize() * 5
self.vel = (mpos - self.pos) * 0.1 * 25
# if (mpos - self.pos).length() > 5:
self.pos += self.vel * dt
self.rect.center = self.pos
def move_to_mouse_no_vector(self, dt):
mpos = pg.mouse.get_pos()
dx = mpos[0] - self.rect.centerx
dy = mpos[1] - self.rect.centery
a = atan2(dy, dx)
vx = 500 * cos(a)
vy = 500 * sin(a)
if (dx**2 + dy**2) > 15:
self.rect.centerx += vx * dt
self.rect.centery += vy * dt
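# A note on the movement maths above (explanatory, not from the original
# file): dt is the frame time in seconds (clock.tick(FPS) / 1000 in the main
# loop), so multiplying a velocity in pixels-per-second by dt yields the
# pixel offset for this frame, keeping the apparent speed independent of the
# frame rate.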
all_sprites = pg.sprite.Group()
player = Player()
all_sprites.add(player)
# Game loop
running = True
while running:
# keep loop running at the right speed
dt = clock.tick(FPS) / 1000
# Process input (events)
for event in pg.event.get():
# check for closing window
if event.type == pg.QUIT:
running = False
if event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE:
running = False
if event.type == pg.MOUSEBUTTONDOWN:
b = Bullet(player)
all_sprites.add(b)
# Update
all_sprites.update(dt)
# Draw / render
screen.fill(BLACK)
all_sprites.draw(screen)
# pg.draw.line(screen, WHITE, player.pos, pg.mouse.get_pos(), 2)
# *after* drawing everything, flip the display
pg.display.flip()
pg.quit()
| 28.46789 | 68 | 0.585562 | 462 | 3,103 | 3.816017 | 0.287879 | 0.043676 | 0.028361 | 0.036869 | 0.277935 | 0.212706 | 0.113443 | 0.087351 | 0.087351 | 0.041974 | 0 | 0.038188 | 0.274251 | 3,103 | 108 | 69 | 28.731481 | 0.744671 | 0.146632 | 0 | 0.189873 | 0 | 0 | 0.004941 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088608 | false | 0 | 0.025316 | 0 | 0.139241 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce2702b06c4d0f0d130b72d0cc4323d59170855d | 5,835 | py | Python | examples/jms_notifications.py | zhmcclient/python-zhmcclient | 7d200afb0343a02535c52dc8b6ba0d224010075c | [
"Apache-2.0"
] | 30 | 2016-08-24T10:02:19.000Z | 2021-11-25T10:44:26.000Z | examples/jms_notifications.py | zhmcclient/python-zhmcclient | 7d200afb0343a02535c52dc8b6ba0d224010075c | [
"Apache-2.0"
] | 883 | 2016-08-23T12:32:12.000Z | 2022-03-28T13:18:24.000Z | examples/jms_notifications.py | zhmcclient/python-zhmcclient | 7d200afb0343a02535c52dc8b6ba0d224010075c | [
"Apache-2.0"
] | 25 | 2017-06-23T18:10:51.000Z | 2022-03-28T02:53:29.000Z | #!/usr/bin/env python
# Copyright 2016-2021 IBM Corp. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Example demonstrates JMS notifications for completion of async operation
"""
import sys
from time import sleep
import threading
from pprint import pprint
import yaml
import requests.packages.urllib3
import stomp
import zhmcclient
__callback = None
requests.packages.urllib3.disable_warnings()
if len(sys.argv) != 2:
print("Usage: %s hmccreds.yaml" % sys.argv[0])
sys.exit(2)
hmccreds_file = sys.argv[1]
with open(hmccreds_file, 'r') as fp:
hmccreds = yaml.safe_load(fp)
examples = hmccreds.get("examples", None)
if examples is None:
print("examples not found in credentials file %s" %
(hmccreds_file))
sys.exit(1)
jms_notifications = examples.get("jms_notifications", None)
if jms_notifications is None:
print("jms_notifications not found in credentials file %s" %
(hmccreds_file))
sys.exit(1)
hmc = jms_notifications["hmc"]
cpcname = jms_notifications["cpcname"]
partname = jms_notifications["partname"]
amqport = jms_notifications['amqport']
callback = None
topic = None
cred = hmccreds.get(hmc, None)
if cred is None:
print("Credentials for HMC %s not found in credentials file %s" %
(hmc, hmccreds_file))
sys.exit(1)
userid = cred['userid']
password = cred['password']
# Thread-safe handover of notifications between listener and main threads
NOTI_DATA = None
NOTI_LOCK = threading.Condition()
class MyListener(object):
def on_connecting(self, host_and_port):
print("Listener: Attempting to connect to message broker")
sys.stdout.flush()
def on_connected(self, headers, message):
print("Listener: Connected to broker")
sys.stdout.flush()
def on_disconnected(self):
print("Listener: Disconnected from broker")
sys.stdout.flush()
def on_error(self, headers, message):
print('Listener: Received an error: %s' % message)
sys.stdout.flush()
def on_message(self, headers, message):
global NOTI_DATA, NOTI_LOCK
print('Listener: Received a notification')
sys.stdout.flush()
with NOTI_LOCK:
# Wait until main program has processed the previous notification
while NOTI_DATA:
NOTI_LOCK.wait()
# Indicate to main program that there is a new notification
NOTI_DATA = headers
NOTI_LOCK.notifyAll()
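# How the hand-over works (explanatory note, not from the original script):
# the STOMP listener thread publishes one notification at a time by setting
# NOTI_DATA under NOTI_LOCK and blocking until the main thread has consumed
# it and reset NOTI_DATA to None - a one-slot producer/consumer exchange
# built on threading.Condition.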
print(__doc__)
print("Using HMC %s with userid %s ..." % (hmc, userid))
session = zhmcclient.Session(hmc, userid, password)
cl = zhmcclient.Client(session)
print("Retrieving notification topics ...")
topics = session.get_notification_topics()
for topic in topics:
if topic['topic-type'] == 'job-notification':
job_topic_name = topic['topic-name']
break
conn = stomp.Connection([(session.host, amqport)], use_ssl="SSL")
conn.set_listener('', MyListener())
conn.connect(userid, password, wait=True)
sub_id = 42 # subscription ID
print("Subscribing for job notifications using topic: %s" % job_topic_name)
conn.subscribe(destination="/topic/"+job_topic_name, id=sub_id, ack='auto')
print("Finding CPC by name=%s ..." % cpcname)
try:
cpc = cl.cpcs.find(name=cpcname)
except zhmcclient.NotFound:
print("Could not find CPC %s on HMC %s" % (cpcname, hmc))
sys.exit(1)
print("Finding partition by name=%s ..." % partname)
try:
partition = cpc.partitions.find(name=partname)
except zhmcclient.NotFound:
print("Could not find partition %s in CPC %s" % (partname, cpc.name))
sys.exit(1)
print("Accessing status of partition %s ..." % partition.name)
partition_status = partition.get_property('status')
print("Status of partition %s: %s" % (partition.name, partition_status))
if partition_status == 'active':
print("Stopping partition %s asynchronously ..." % partition.name)
job = partition.stop(wait_for_completion=False)
elif partition_status in ('inactive', 'stopped'):
print("Starting partition %s asynchronously ..." % partition.name)
job = partition.start(wait_for_completion=False)
else:
raise zhmcclient.Error("Cannot deal with partition status: %s" % \
partition_status)
print("Waiting for completion of job %s ..." % job.uri)
sys.stdout.flush()
# Just for demo purposes, we show how a loop for processing multiple
# notifications would look like.
while True:
with NOTI_LOCK:
# Wait until listener has a new notification
while not NOTI_DATA:
NOTI_LOCK.wait()
# Process the notification
print("Received notification:")
pprint(NOTI_DATA)
sys.stdout.flush()
# This test is just for demo purposes, it should always be our job
# given what we subscribed for.
if NOTI_DATA['job-uri'] == job.uri:
break
else:
print("Unexpected completion received for job %s" % \
NOTI_DATA['job-uri'])
sys.stdout.flush()
# Indicate to listener that we are ready for next notification
NOTI_DATA = None
NOTI_LOCK.notifyAll()
print("Job has completed: %s" % job.uri)
sys.stdout.flush()
conn.disconnect()
sleep(1) # Allow listener to print disconnect message (just for demo)
print("Done.")
| 30.549738 | 77 | 0.687404 | 774 | 5,835 | 5.095607 | 0.319121 | 0.036511 | 0.031947 | 0.017241 | 0.181542 | 0.105223 | 0.068966 | 0.023327 | 0.023327 | 0.023327 | 0 | 0.005611 | 0.205827 | 5,835 | 190 | 78 | 30.710526 | 0.84549 | 0.215253 | 0 | 0.256 | 0 | 0 | 0.229872 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0.024 | 0.064 | 0 | 0.112 | 0.224 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce275584193afe7f832f37da4351a3f6597b7f7f | 286 | py | Python | news_app/user/urls.py | nijatrajab/NewsApi | a359a3c62dc8abd84c22a995981f085f0fae6670 | [
"MIT"
] | null | null | null | news_app/user/urls.py | nijatrajab/NewsApi | a359a3c62dc8abd84c22a995981f085f0fae6670 | [
"MIT"
] | null | null | null | news_app/user/urls.py | nijatrajab/NewsApi | a359a3c62dc8abd84c22a995981f085f0fae6670 | [
"MIT"
] | null | null | null | from django.urls import path
from . import views
app_name = "user"
urlpatterns = [
path("register/", views.CreateUserView.as_view(), name="register"),
path("token/", views.CreateTokenView.as_view(), name="token"),
path("me/", views.ManageUserView.as_view(), name="me"),
]
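# Assuming this urlconf is included under a prefix such as "user/" (the
# prefix itself is not defined in this file), the resulting routes are
# user/register/, user/token/ and user/me/.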
| 26 | 71 | 0.681818 | 36 | 286 | 5.305556 | 0.5 | 0.094241 | 0.157068 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132867 | 286 | 10 | 72 | 28.6 | 0.770161 | 0 | 0 | 0 | 0 | 0 | 0.129371 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce2c338539882b73e990bc943c18760b1c64fc86 | 3,205 | py | Python | test/unit/webapps/api/test_fetch_schema.py | thepineapplepirate/galaxy | a9f0fd6ade1c0693dd5ba6a059d779613604bc7e | [
"CC-BY-3.0"
] | null | null | null | test/unit/webapps/api/test_fetch_schema.py | thepineapplepirate/galaxy | a9f0fd6ade1c0693dd5ba6a059d779613604bc7e | [
"CC-BY-3.0"
] | 6 | 2021-11-11T20:57:49.000Z | 2021-12-10T15:30:33.000Z | test/unit/webapps/api/test_fetch_schema.py | thepineapplepirate/galaxy | a9f0fd6ade1c0693dd5ba6a059d779613604bc7e | [
"CC-BY-3.0"
] | null | null | null | from copy import deepcopy
from json import dumps
from galaxy.schema.fetch_data import (
FetchDataPayload,
FileDataElement,
FtpImportElement,
NestedElement,
PastedDataElement,
UrlDataElement,
)
HISTORY_ID = "abcdef0123456789"
example_payload = {
"targets": [
{
"destination": {"type": "hdas"},
"elements": [
{
"src": "pasted",
"paste_content": "abcdef",
"name": None,
"dbkey": "?",
"ext": "auto",
"space_to_tab": False,
"to_posix_lines": True,
},
{"src": "url", "url": "https://github.com/bla.txt"},
{"src": "files", "name": "uploaded file"},
],
}
],
"auto_decompress": True,
"files": [],
"history_id": HISTORY_ID,
}
items_payload = {
"targets": [
{
"destination": {"type": "hdas"},
"items": [
{
"src": "url",
"url": "https://raw.githubusercontent.com/galaxyproject/galaxy/dev/test-data/html_file.txt",
},
],
}
],
"history_id": HISTORY_ID,
}
nested_collection_payload = {
"targets": [
{
"destination": {"type": "hdca"},
"elements": [{"name": "samp1", "elements": [{"src": "files", "dbkey": "hg19", "info": "my cool bed"}]}],
"collection_type": "list:list",
"name": "Test upload",
}
],
"history_id": HISTORY_ID,
}
ftp_hdca_target = {
"elements_from": "directory",
"src": "ftp_import",
"ftp_path": "subdir",
"collection_type": "list",
}
recursive_archive_payload = {
"history_id": "f3f73e481f432006",
"targets": [
{
"destination": {"type": "library", "name": "My Cool Library"},
"items_from": "archive",
"src": "path",
"path": "/Users/mvandenb/src/metadata_embed/test-data/testdir1.zip",
}
],
}
def test_fetch_data_schema():
payload = FetchDataPayload(**example_payload)
elements = payload.targets[0].items # type: ignore[union-attr] # alias doesn't type check properly
assert len(elements) == 3
assert isinstance(elements[0], PastedDataElement)
assert isinstance(elements[1], UrlDataElement)
assert isinstance(elements[2], FileDataElement)
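# Note (explanatory, not from the original test): the payload uses the
# "elements" key while the parsed model is read back via targets[0].items,
# so this test also exercises the schema's elements/items aliasing.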
def test_data_items():
FetchDataPayload(**items_payload)
def test_nested_collection():
payload = FetchDataPayload(**nested_collection_payload)
collection_element = payload.targets[0].items[0] # type: ignore[union-attr] # alias doesn't type check properly
assert isinstance(collection_element, NestedElement)
assert isinstance(collection_element.items[0], FileDataElement)
def test_ftp_hdca_target():
FtpImportElement(**ftp_hdca_target)
def test_recursive_archive():
FetchDataPayload(**recursive_archive_payload)
def test_recursive_archive_form_like_data():
payload = deepcopy(recursive_archive_payload)
payload["targets"] = dumps(payload["targets"])
FetchDataPayload(**payload)
| 27.62931 | 117 | 0.568799 | 293 | 3,205 | 6.013652 | 0.358362 | 0.040863 | 0.049943 | 0.049376 | 0.097616 | 0.060159 | 0.060159 | 0.060159 | 0.060159 | 0.060159 | 0 | 0.014867 | 0.286427 | 3,205 | 115 | 118 | 27.869565 | 0.755575 | 0.037754 | 0 | 0.153061 | 0 | 0.010204 | 0.232543 | 0.018513 | 0 | 0 | 0 | 0 | 0.061224 | 1 | 0.061224 | false | 0 | 0.061224 | 0 | 0.122449 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce30805a68fb91998aac8c820a705f335180f72f | 10,431 | py | Python | src/composerstoolkit/core.py | nickpeck/thecomposerstoolkit | 0e81080da8233acb8501f196a21f2bfc769e3ada | [
"MIT"
] | null | null | null | src/composerstoolkit/core.py | nickpeck/thecomposerstoolkit | 0e81080da8233acb8501f196a21f2bfc769e3ada | [
"MIT"
] | null | null | null | src/composerstoolkit/core.py | nickpeck/thecomposerstoolkit | 0e81080da8233acb8501f196a21f2bfc769e3ada | [
"MIT"
] | null | null | null | from collections import namedtuple
import itertools
from time import sleep
from midiutil.MidiFile import MIDIFile
from toolz import pipe as pipe
from infix import or_infix
class NotChainableException(Exception): pass
@or_infix
def chain(a,b):
try:
return a.chain(b)
except AttributeError:
raise NotChainableException(
"object {} is not chainable".format(str(a)))
_ctevent = namedtuple("ctevent", ["pitches", "duration"])
class CTEvent(_ctevent):
def __new__(cls, pitches=None, duration=0):
if pitches is None:
pitches = []
elif isinstance(pitches, int):
pitches = [pitches]
return _ctevent.__new__(cls, pitches, duration)
@property
def pitches(self):
return self[0]
@property
def duration(self):
return self[1]
def __str__(self):
return "<CTEvent {0}, {1}>".format(self.pitches, self.duration)
def __add__(self, other):
return CTSequence([self, other])
def __setattr__(self, *ignored):
raise NotImplementedError
def __delattr__(self, *ignored):
raise NotImplementedError
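# A tiny illustration (comment only, not executed): CTEvent is an immutable
# value object - CTEvent(60, 1) normalises the single pitch to the list [60],
# and CTEvent(60, 1) + CTEvent(64, 1) uses __add__ to build a two-event
# CTSequence.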
midievent = namedtuple("midievent", ["pitch", "type", "time"])
class CTSequence():
def __init__(self, events, memento=None):
self.events = events
self.memento = memento
def chain(self, f):
new_events = f(self)
return CTSequence(new_events, self)
def to_midi_events(self, time_offset=0):
results = []
for e in self.events:
for pitch in e.pitches:
results.append(midievent(
pitch = pitch,
type = "NOTE_ON",
time = time_offset
))
results.append(midievent(
pitch = pitch,
type = "NOTE_OFF",
time = time_offset + e.duration
))
time_offset = time_offset + e.duration
results.sort(key=lambda x: x.time, reverse=False)
return results
@property
def pitches(self):
return list(itertools.chain.from_iterable([e.pitches for e in self.events]))
@property
def durations(self):
return [e.duration for e in self.events]
def to_pitch_set(self):
return {* (itertools.chain.from_iterable([e.pitches for e in self.events]))}
def to_pitch_class_set(self):
pitch_set = self.to_pitch_set()
return {*[p % 12 for p in pitch_set]}
def lookup(self, offset=0):
if offset < 0:
return None
for e in self.events:
if e.duration >= offset:
return e
offset = offset - e.duration
return None
def __getitem__(self, slice):
start, stop, step = None, None, None
try:
start, stop, step = slice
sliced_events = self.events[start:stop:step]
except TypeError:
try:
start, stop = slice
sliced_events = self.events[start:stop]
except TypeError:
start = slice
sliced_events = self.events[start]
if not isinstance(sliced_events , list):
sliced_events = [sliced_events]
return CTSequence(sliced_events, self)
def __str__(self):
return "<CTSequence {}>".format(self.events)
def __add__(self, other):
events = self.events[:] + other.events[:]
return CTSequence(events)
def CTGenerator(functor):
def getConfig(*args, **kwargs):
return CTSequence(functor(*args, **kwargs))
return getConfig
import functools
class reprwrapper(object):
"""helper to override __repr__ for a function for debugging purposes
see https://stackoverflow.com/questions/10875442/possible-to-change-a-functions-repr-in-python
"""
def __init__(self, repr, func):
self._repr = repr
self._func = func
functools.update_wrapper(self, func)
def __call__(self, *args, **kw):
return self._func(*args, **kw)
def __repr__(self):
return self._repr(self._func)
def withrepr(reprfun):
"""decorator for reprwrapper"""
def _wrap(func):
return reprwrapper(reprfun, func)
return _wrap
class CTTransformer():
def __init__(self, functor):
self._functor = functor
def __call__(self, *args, **kwargs):
@withrepr(
lambda x: "<CTTransformer: {}{}>".format(
self._functor.__name__, args + tuple(kwargs.items())))
def transform(instance):
nonlocal args
nonlocal kwargs
_kwargs = kwargs
if "gate" in kwargs.keys():
gate = _kwargs["gate"]
del _kwargs["gate"]
_args = args[:]
return gate(self._functor, instance, *_args, **_kwargs)
_args = [instance] + list(args)
return self._functor(*_args, **_kwargs)
return transform
def __str__(self):
return "<CTTransformer : {}>".format(self._functor.__name__)
def boolean_gate(gate):
def transform(functor, instance, *args, **kwargs):
nonlocal gate
offset = 0
result = []
        buffer = []  # events accumulated since the last gate state change
        past_toggle_state = False
        cur_toggle_state = False
for i in range(len(instance.events)):
e = instance.events[i]
offset = offset + e.duration
cur_gate_event = gate.lookup(offset)
if cur_gate_event is None:
# # there is no event at this offset
# # just append 'e' to result
buffer = buffer + [e]
past_toggle_state = False
continue
cur_toggle_state = (cur_gate_event.pitches != [])
has_changed = cur_toggle_state != past_toggle_state
if not has_changed or i == 0:
# no change, just add to the buffer
buffer = buffer + [e]
elif has_changed and cur_toggle_state:
# the gate has changed to 'on'
# add the buffer to result
result = result + buffer
buffer = [e]
elif has_changed and not cur_toggle_state and i:
# the gate has changed to 'off'
# transform the contents of buffer
_args = [CTSequence(buffer)] + list(args)
buffer = functor(*_args, **kwargs)
# add to result
result = result + buffer
buffer = [e]
past_toggle_state = cur_toggle_state
# terminal condition
if len(buffer) and cur_toggle_state:
# there are items left in the buffer
# state is ON is transform and add to result
_args = [CTSequence(buffer)] + list(args)
buffer = functor(*_args, **kwargs)
# no transform, just add to result
result = result + buffer
if len(buffer) and not cur_toggle_state:
# add to result
result = result + buffer
return result
return transform
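# Semantics of boolean_gate (explanatory note, not from the original source):
# the gate sequence acts as an on/off signal over time - stretches of the
# input that coincide with sounding gate events (non-empty pitches) are run
# through the wrapped transformer, while stretches coinciding with gate rests
# are passed through unchanged.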
class Container():
def __init__(self, **kwargs):
self.options = {
"bpm": 120,
"playback_rate": 1
}
self.sequences = []
self.options.update(kwargs)
def add_sequence(self, offset, seq, channel_no=None):
if channel_no is None:
channel_no = len(self.sequences)
self.sequences.append((channel_no, offset, seq))
def get_playback_events(self):
playback_rate = self.options["playback_rate"]
all_midi_events = []
for (channel_no, offset,seq) in self.sequences:
for me in seq.to_midi_events(offset):
all_midi_events.append(midievent(me.pitch, me.type, me.time / playback_rate))
all_midi_events = sorted(all_midi_events, key=lambda x: x.time)
return all_midi_events
def playback(self, player_func, dynamic=60):
playback_events = self.get_playback_events()
        # NB: the events are chronologically ordered
count = 0
for event in playback_events:
if event.time != count:
sleep(event.time - count)
count = event.time
if event.type == "NOTE_ON":
player_func.noteon(0, event.pitch, dynamic)
elif event.type == "NOTE_OFF":
player_func.noteoff(0, event.pitch)
def save_as_midi_file(self, filename, dynamic=60):
mf = MIDIFile(len(self.sequences))
for (channel_no, offset, seq) in self.sequences:
mf.addTrackName(channel_no, offset, "Channel {}".format(channel_no))
count = offset
for event in seq.events:
for pitch in event.pitches:
mf.addNote(channel_no, 0, pitch, count, event.duration, dynamic)
count = count + event.duration
# mf.addTempo(0, 0, self.options["bpm"])
with open(filename, 'wb') as outf:
mf.writeFile(outf)
class Vertex(object):
"""
Vertex used to represent a musical event when parsed into
a directed graph structure
"""
@classmethod
def treeFromGraph(cls, graph):
results = {}
for key in graph.keys():
v = Vertex(key)
results[key] = v
for key in graph.keys():
node = results[key]
neighbours = graph[key]
for (name, pitch_delta, time_delta) in neighbours:
try:
neighbour = results[name]
except KeyError:
results[name] = Vertex(name)
node.addNeighbour((pitch_delta, time_delta), results[name])
return [v for k,v in results.items()]
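    # Illustrative adjacency format consumed by treeFromGraph (example values
    # are assumptions, not from the original source):
    #
    #   graph = {
    #       "a": [("b", 4, 1.0), ("c", 7, 0.5)],  # (neighbour name, pitch_delta, time_delta)
    #       "b": [],
    #       "c": [],
    #   }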
def __init__(self, name):
self.name = name
self.neighbours = []
def __repr__(self):
return "Vertex({})".format(self.name)
def addNeighbour(self, vector, neighbour):
self.neighbours.append((vector, neighbour)) | 32.905363 | 98 | 0.552871 | 1,134 | 10,431 | 4.887125 | 0.191358 | 0.021653 | 0.017683 | 0.009022 | 0.191628 | 0.129376 | 0.112595 | 0.055936 | 0.036449 | 0.018044 | 0 | 0.005042 | 0.353562 | 10,431 | 317 | 99 | 32.905363 | 0.816847 | 0.071422 | 0 | 0.211618 | 0 | 0 | 0.024608 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.170124 | false | 0.004149 | 0.029046 | 0.058091 | 0.365145 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce31a5388df44a388e9301ac1d93fce47b707a91 | 2,535 | py | Python | speakeasy/winenv/api/winapi.py | dtrizna/speakeasy | b73b1401f0264c5d74a8638cf060684bcf00392b | [
"MIT"
] | 816 | 2020-08-26T16:01:41.000Z | 2021-09-19T08:27:09.000Z | speakeasy/winenv/api/winapi.py | dtrizna/speakeasy | b73b1401f0264c5d74a8638cf060684bcf00392b | [
"MIT"
] | 83 | 2020-08-26T16:39:48.000Z | 2021-09-16T01:28:34.000Z | speakeasy/winenv/api/winapi.py | dtrizna/speakeasy | b73b1401f0264c5d74a8638cf060684bcf00392b | [
"MIT"
] | 130 | 2020-08-26T15:50:07.000Z | 2021-09-16T01:04:57.000Z | # Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
import sys
import inspect
import speakeasy.winenv.arch as _arch
from speakeasy.errors import ApiEmuError
from speakeasy.winenv.api import api
from speakeasy.winenv.api.kernelmode import * # noqa
from speakeasy.winenv.api.usermode import * # noqa
def autoload_api_handlers():
api_handlers = []
for modname, modobj in sys.modules.items():
if not modname.startswith(('speakeasy.winenv.api.kernelmode.',
'speakeasy.winenv.api.usermode.')):
continue
for clsname, clsobj in inspect.getmembers(modobj, inspect.isclass):
if clsobj is not api.ApiHandler and issubclass(clsobj, api.ApiHandler):
api_handlers.append((clsobj.name, clsobj))
return tuple(api_handlers)
API_HANDLERS = autoload_api_handlers()
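# Note: API_HANDLERS holds (name, class) pairs rather than instances; a
# handler class is only instantiated lazily in WindowsApi.load_api_handler,
# once per emulator, and cached in self.mods under its lower-cased name.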
class WindowsApi:
def __init__(self, emu):
self.mods = {}
self.instances = []
self.data = {}
self.emu = emu
arch = self.emu.get_arch()
if arch == _arch.ARCH_X86:
self.ptr_size = 4
elif arch == _arch.ARCH_AMD64:
self.ptr_size = 8
else:
raise ApiEmuError('Invalid architecture')
def load_api_handler(self, mod_name):
for name, hdl in API_HANDLERS:
name = name.lower()
if mod_name and name == mod_name.lower():
handler = self.mods.get(name)
if not handler:
handler = hdl(self.emu)
self.mods.update({name: handler})
return handler
return None
def get_data_export_handler(self, mod_name, exp_name):
mod = self.mods.get(mod_name)
if not mod:
mod = self.load_api_handler(mod_name)
if not mod:
return None, None
return (mod, mod.get_data_handler(exp_name))
def get_export_func_handler(self, mod_name, exp_name):
mod = self.mods.get(mod_name)
if not mod:
mod = self.load_api_handler(mod_name)
if not mod:
return None, None
return (mod, mod.get_func_handler(exp_name))
def call_api_func(self, mod, func, argv, ctx):
"""
Call the handler to implement the imported API
"""
return func(mod, self.emu, argv, ctx)
def call_data_func(self, mod, func, ptr):
"""
Call the handler to initialize and return imported data variables
"""
return func(mod, ptr)
| 30.178571 | 83 | 0.602761 | 318 | 2,535 | 4.63522 | 0.27044 | 0.042741 | 0.061058 | 0.032564 | 0.161465 | 0.161465 | 0.161465 | 0.161465 | 0.161465 | 0.161465 | 0 | 0.005685 | 0.306114 | 2,535 | 83 | 84 | 30.542169 | 0.832291 | 0.069822 | 0 | 0.169492 | 0 | 0 | 0.035513 | 0.026851 | 0 | 0 | 0 | 0 | 0 | 1 | 0.118644 | false | 0 | 0.118644 | 0 | 0.40678 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce32860eee791143019bb4ab2831202037ac88fb | 1,486 | py | Python | leetcode/0-250/241-395. Longest Substring with At Least K Repeating Characters.py | palash24/algorithms-and-data-structures | 164be7d1a501a21af808673888964bbab36243a1 | [
"MIT"
] | 23 | 2018-11-06T03:54:00.000Z | 2022-03-14T13:30:40.000Z | leetcode/0-250/241-395. Longest Substring with At Least K Repeating Characters.py | palash24/algorithms-and-data-structures | 164be7d1a501a21af808673888964bbab36243a1 | [
"MIT"
] | null | null | null | leetcode/0-250/241-395. Longest Substring with At Least K Repeating Characters.py | palash24/algorithms-and-data-structures | 164be7d1a501a21af808673888964bbab36243a1 | [
"MIT"
] | 5 | 2019-05-24T16:56:45.000Z | 2022-03-10T17:29:10.000Z | # 395. Longest Substring with At Least K Repeating Characters
import collections
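# Three approaches below: a quadratic scan over run-length-encoded blocks
# (longestSubstring2) and two divide-and-conquer variants that split the
# string on any character occurring fewer than k times, since such a
# character can never appear in a valid substring.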
class Solution:
def longestSubstring2(self, s, k):
count, n = [], len(s)
i = j = 0
while i < n:
while j < n and s[i] == s[j]: j += 1
count.append((s[i], j - i))
i = j
ans = 0
dct = collections.defaultdict(int)
tot1 = 0
for i in range(len(count)):
ch, cnt = count[i]
dct[ch] += cnt
dct_copy = {key: v for key, v in dct.items()}
tot1 = tot2 = tot1 + cnt
for j in range(i + 1):
if all(v == 0 or v >= k for key, v in dct_copy.items()):
ans = max(ans, tot2)
break
ch, cnt = count[j]
dct_copy[ch] -= cnt
tot2 -= cnt
if tot2 < ans: break
return ans
    # leetcode: divide and conquer, splitting on the least frequent character
def longestSubstring3(self, s, k):
if len(s) < k: return 0
ch = min(set(s), key=s.count)
if s.count(ch) >= k: return len(s)
return max(self.longestSubstring(sp, k) for sp in s.split(ch))
    # leetcode 2: split on any character occurring fewer than k times
def longestSubstring(self, s, k):
for ch in set(s):
if s.count(ch) < k:
return max(self.longestSubstring(sp, k) for sp in s.split(ch))
return len(s)
sol = Solution()
print(sol.longestSubstring("aaabb", 2))
print(sol.longestSubstring("bbaaacbaabaaad", 2))
| 30.958333 | 78 | 0.486541 | 205 | 1,486 | 3.512195 | 0.292683 | 0.011111 | 0.025 | 0.025 | 0.202778 | 0.169444 | 0.130556 | 0.130556 | 0.130556 | 0.130556 | 0 | 0.02439 | 0.393001 | 1,486 | 48 | 79 | 30.958333 | 0.773836 | 0.053163 | 0 | 0.051282 | 0 | 0 | 0.013533 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.025641 | 0 | 0.230769 | 0.051282 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce329b1c7ca996174bf7f5deb5e4f4128c755548 | 2,033 | py | Python | DataVis-2019nCov/TimeSeries/TimeseriesDataFether20200130.py | wangrui12138/Bilibili346623 | 445e956e97bc8765af8d8d5c15cee30f9f137d93 | [
"MIT"
] | 441 | 2018-03-13T12:55:49.000Z | 2022-03-30T08:09:12.000Z | DataVis-2019nCov/TimeSeries/TimeseriesDataFether20200130.py | wangrui12138/Bilibili346623 | 445e956e97bc8765af8d8d5c15cee30f9f137d93 | [
"MIT"
] | 3 | 2019-04-04T20:15:24.000Z | 2021-05-31T11:45:50.000Z | DataVis-2019nCov/TimeSeries/TimeseriesDataFether20200130.py | wangrui12138/Bilibili346623 | 445e956e97bc8765af8d8d5c15cee30f9f137d93 | [
"MIT"
] | 589 | 2018-01-25T16:25:16.000Z | 2022-03-31T07:27:35.000Z | import urlfetch
import json
# Fetch the data
url = 'https://view.inews.qq.com/g2/getOnsInfo?name=wuwei_ww_cn_day_counts'
res = urlfetch.fetch(url)
resstr = res.content.decode('utf-8')
# Parse the JSON payload
jsonRes = json.loads(resstr)
data = jsonRes['data']
data2 = json.loads(data)
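# Judging from the keys used below, each record has the shape:
#   {'date': ..., 'confirm': ..., 'suspect': ..., 'dead': ..., 'heal': ...}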
# Sort the records by date
data2.sort(key = lambda x:x['date'])
# Build the dataset rows (date, confirmed, suspected, dead, healed)
outall = ''
for i in range(0,len(data2)):
if i != (len(data2)-1):
outstr = '\t\t\t\t\t[\''+ str(data2[i]['date']) + '\', '+str(data2[i]['confirm'])+', '+str(data2[i]['suspect'])+', '+str(data2[i]['dead'])+', '+str(data2[i]['heal'])+'],\n'
else:
outstr = ''
        # The current day's figures are usually incomplete, so they are not plotted
#outstr = '\t\t\t\t\t[\''+ str(data2[i]['date']) + '\', '+str(data2[i]['confirm'])+', '+str(data2[i]['suspect'])+', '+str(data2[i]['dead'])+', '+str(data2[i]['heal'])+']\n'
outall = outall+outstr
# Maximum of the confirmed and suspected counts
maxOne1 = sorted(data2, key = lambda x:int(x['confirm']), reverse=True)
maxOne2 = sorted(data2, key = lambda x:int(x['suspect']), reverse=True)
maxOne = max([int(maxOne1[0]['confirm']),int(maxOne2[0]['suspect'])])
# Maximum of the dead and healed counts
maxTwo1 = sorted(data2, key = lambda x:int(x['dead']), reverse=True)
maxTwo2 = sorted(data2, key = lambda x:int(x['heal']), reverse=True)
maxTwo = max([int(maxTwo1[0]['dead']), int(maxTwo2[0]['heal'])])
# Read the template HTML
fid = open('TimeseriesData20200130Temp.html','rb')
oriStr = fid.read().decode('utf-8')
fid.close()
# Insert the dataset
modifiedStr = oriStr.replace('//dataInsert//',outall)
# Write the Y-axis maxima and tick intervals for the line and bar charts
interval1 = int(int(maxOne)/1000)+1
interval2 = int(float(maxTwo)/(interval1))+1
modifiedStr = modifiedStr.replace('//maxOne//',str(int(interval1*1000)))
modifiedStr = modifiedStr.replace('//intervalOne//', '1000')
modifiedStr = modifiedStr.replace('//maxTwo//',str(int(interval2*interval1)))
modifiedStr = modifiedStr.replace('//intervalTwo//',str(interval2))
# Write out the updated HTML
fid = open('TimeseriesData20200130Modified.html','wb')
fid.write(modifiedStr.encode('utf-8'))
fid.close() | 39.096154 | 181 | 0.636498 | 271 | 2,033 | 4.760148 | 0.376384 | 0.062016 | 0.069767 | 0.012403 | 0.206202 | 0.206202 | 0.206202 | 0.128682 | 0.128682 | 0.128682 | 0 | 0.041196 | 0.128382 | 2,033 | 52 | 182 | 39.096154 | 0.686795 | 0.152976 | 0 | 0.057143 | 0 | 0 | 0.203737 | 0.039783 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.057143 | 0 | 0.057143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce32d484c4f86cc69a8662a9acbac8684f5f116f | 4,602 | py | Python | tests/test_exceptions.py | dirn/Henson | 072cf07e3338cfb4701c299170aad1aab17b7ba0 | [
"Apache-2.0"
] | 1 | 2018-02-25T06:48:17.000Z | 2018-02-25T06:48:17.000Z | tests/test_exceptions.py | dirn/Henson | 072cf07e3338cfb4701c299170aad1aab17b7ba0 | [
"Apache-2.0"
] | null | null | null | tests/test_exceptions.py | dirn/Henson | 072cf07e3338cfb4701c299170aad1aab17b7ba0 | [
"Apache-2.0"
] | null | null | null | """Test Doozer's exceptions."""
from __future__ import annotations
from doozer import exceptions
from doozer.base import Application
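# The event_loop, cancelled_future, and queue parameters are assumed to be
# pytest fixtures provided elsewhere in the test suite (e.g. conftest.py).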
def test_abort_preprocessor(event_loop, cancelled_future, queue):
"""Test that aborted preprocessors stop execution."""
# This test sets up two preprocessors, a callback, and a
# postprocessor. The first preprocessor will raise an Abort
# exception. None of the others should be called.
preprocess1_called = False
preprocess2_called = False
callback_called = False
postprocess_called = False
queue.put_nowait({"a": 1})
async def callback(app, message):
nonlocal callback_called
callback_called = True
return message
app = Application("testing", callback=callback)
@app.message_preprocessor
async def preprocess1(app, message):
nonlocal preprocess1_called
preprocess1_called = True
raise exceptions.Abort("testing", message)
@app.message_preprocessor
async def preprocess2(app, message):
nonlocal preprocess2_called
preprocess2_called = True
return message
@app.result_postprocessor
async def postprocess(app, result):
nonlocal postprocess_called
postprocess_called = True
return result
event_loop.run_until_complete(app._process(cancelled_future, queue, event_loop))
assert preprocess1_called
assert not preprocess2_called
assert not callback_called
assert not postprocess_called
def test_abort_callback(event_loop, cancelled_future, queue):
"""Test that aborted callbacks stop execution."""
# This test sets up a callback and a postprocessor. The callback
# will raise an Abort exception. The postprocessor shouldn't be
# called.
callback_called = False
postprocess_called = False
queue.put_nowait({"a": 1})
async def callback(app, message):
nonlocal callback_called
callback_called = True
raise exceptions.Abort("testing", message)
app = Application("testing", callback=callback)
@app.result_postprocessor
async def postprocess(app, result):
nonlocal postprocess_called
postprocess_called = True
return result
event_loop.run_until_complete(app._process(cancelled_future, queue, event_loop))
assert callback_called
assert not postprocess_called
def test_abort_error(event_loop, cancelled_future, queue):
"""Test that aborted error callbacks stop execution."""
    # This test sets up a callback and two error callbacks. The callback
    # will raise an exception and the first error callback will raise an
    # Abort exception. The second error callback shouldn't be called.
callback_called = False
error1_called = False
error2_called = False
queue.put_nowait({"a": 1})
async def callback(app, message):
nonlocal callback_called
callback_called = True
raise TypeError("testing")
app = Application("testing", callback=callback)
@app.error
async def error1(app, message, exc):
nonlocal error1_called
error1_called = True
raise exceptions.Abort("testing", message)
@app.error
async def error2(app, message, exc):
nonlocal error2_called
error2_called = True
event_loop.run_until_complete(app._process(cancelled_future, queue, event_loop))
assert callback_called
assert error1_called
assert not error2_called
def test_abort_postprocess(event_loop, cancelled_future, queue):
"""Test that aborted postprocessors stop execution of the result."""
# This test sets up a callback and two postprocessors. The first
# will raise an Abort exception for one of the two results returned
# by the callback.
postprocess1_called_count = 0
postprocess2_called_count = 0
queue.put_nowait({"a": 1})
async def callback(app, message):
return [True, False]
app = Application("testing", callback=callback)
@app.result_postprocessor
async def postprocess1(app, result):
nonlocal postprocess1_called_count
postprocess1_called_count += 1
if result:
raise exceptions.Abort("testing", result)
return result
@app.result_postprocessor
async def postprocess2(app, result):
nonlocal postprocess2_called_count
postprocess2_called_count += 1
return result
event_loop.run_until_complete(app._process(cancelled_future, queue, event_loop))
assert postprocess1_called_count == 2
assert postprocess2_called_count == 1
| 30.078431 | 84 | 0.711864 | 548 | 4,602 | 5.784672 | 0.153285 | 0.034069 | 0.050473 | 0.030284 | 0.625552 | 0.556782 | 0.521136 | 0.490221 | 0.385804 | 0.359306 | 0 | 0.011179 | 0.222512 | 4,602 | 152 | 85 | 30.276316 | 0.87479 | 0.188831 | 0 | 0.5625 | 0 | 0 | 0.018128 | 0 | 0 | 0 | 0 | 0 | 0.114583 | 1 | 0.041667 | false | 0 | 0.03125 | 0 | 0.145833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce3502f5079fa6852e13265b391e01e6ac109b62 | 967 | py | Python | rivercam.py | OrrinEdenfield/RiverCam | 207f8c623bbcb9dc0cdbbefe91e1fd33bdb0b84e | [
"MIT"
] | null | null | null | rivercam.py | OrrinEdenfield/RiverCam | 207f8c623bbcb9dc0cdbbefe91e1fd33bdb0b84e | [
"MIT"
] | null | null | null | rivercam.py | OrrinEdenfield/RiverCam | 207f8c623bbcb9dc0cdbbefe91e1fd33bdb0b84e | [
"MIT"
] | null | null | null | #!/usr/bin/python
import os
import datetime
from picamera import PiCamera
from time import sleep
from azure.storage.blob import BlobClient
# Path to temporary local image file
localpic = '/home/pi/rivercam/image.jpg'
# Take a photo (the delay gives the sensor time to settle its exposure)
camera = PiCamera()
sleep(5)
camera.capture(localpic)
camera.close()
# Create the variable to use for the filename
dt = str(datetime.datetime.now())
newdt = dt.replace(":", "-")
newdt = newdt.replace(" ", "-")
newdt = newdt.replace(".", "-")
newdt = newdt[0:16]
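# keep minute resolution, e.g. "2020-01-30-12-34"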
newname = newdt+'.jpg'
# Upload to local IoT Edge Blob Service
blob = BlobClient.from_connection_string(conn_str="DefaultEndpointsProtocol=http;BlobEndpoint=http://192.168.0.201:11002/azurepistorage;AccountName=azurepistorage;AccountKey=[LOCAL-IOT-EDGE-BLOB-KEY]", container_name="pisynccontainer", blob_name=newname)
with open(localpic, "rb") as data:
blob.upload_blob(data)
# Delete the local file now that it's been uploaded
os.remove(localpic) | 30.21875 | 255 | 0.730093 | 132 | 967 | 5.30303 | 0.583333 | 0.051429 | 0.072857 | 0.068571 | 0.072857 | 0.072857 | 0 | 0 | 0 | 0 | 0 | 0.022919 | 0.142709 | 967 | 32 | 256 | 30.21875 | 0.821472 | 0.201655 | 0 | 0 | 0 | 0.052632 | 0.274457 | 0.036685 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.263158 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce37b0b4f990d5e6331ff4ffb6eb4f2a682abe77 | 12,165 | py | Python | drgadget/plugins/gadgetfinder.py | patois/DrGadget | 25e9bf6e1d58afc16e3c20ac0632b384f552fad5 | [
"MIT"
] | 34 | 2015-05-18T01:50:30.000Z | 2022-02-16T10:36:25.000Z | drgadget/plugins/gadgetfinder.py | patois/DrGadget | 25e9bf6e1d58afc16e3c20ac0632b384f552fad5 | [
"MIT"
] | 3 | 2017-01-02T11:57:34.000Z | 2017-01-11T11:53:57.000Z | drgadget/plugins/gadgetfinder.py | patois/DrGadget | 25e9bf6e1d58afc16e3c20ac0632b384f552fad5 | [
"MIT"
] | 6 | 2016-03-28T18:26:27.000Z | 2019-11-13T09:12:03.000Z | from idaapi import *
import re
from idc import *
from idautils import Assemble, Modules, DecodeInstruction
from payload import Item
class FindInstructionsForm(Form):
def __init__(self):
Form.__init__(self, r"""STARTITEM {id:iInstructions}
BUTTON YES* Ok
BUTTON CANCEL Cancel
Find instruction(s)
{FormChangeCb}
Filters:
<Exclude ASLR modules:{rASLR}>
<Exclude DEP modules:{rDEP}>
<Exclude non-executable segments:{rExec}>{cGroup1}>
Options:
<#Refreshes memory content before starting search process#Sync memory:{rSync}>
<#Find regex expression (less speed, more flexibility)#Regex:{rRegex}>{cGroup2}>
Find instruction(s):
<#mov eax, 1; pop; pop; 33 C0; ret#:{iInstructions}>
""", {
'cGroup1': Form.ChkGroupControl(("rASLR", "rDEP", "rExec")),
'cGroup2': Form.ChkGroupControl(("rSync", "rRegex")),
'iInstructions': Form.StringInput(),
'FormChangeCb': Form.FormChangeCb(self.OnFormChange)
})
def OnFormChange(self, fid):
if GetProcessState() == DSTATE_NOTASK:
self.SetControlValue(self.rASLR, False)
self.SetControlValue(self.rDEP, False)
self.SetControlValue(self.rSync, False)
self.EnableField(self.rASLR, False)
self.EnableField(self.rDEP, False)
self.EnableField(self.rSync, False)
self.SetFocusedField(self.iInstructions)
return 1
def AskInstructionsUsingForm():
result = (False, "Cancelled")
f = FindInstructionsForm()
f.Compile()
f.rASLR.checked = True
f.rDEP.checked = True
f.rExec.checked = True
f.rSync.checked = True
f.rRegex.checked = False
ok = f.Execute()
f.Free()
if ok == 1:
result = (True, (f.iInstructions.value, f.rASLR.checked, f.rDEP.checked, f.rExec.checked, f.rSync.checked, f.rRegex.checked))
return result
class SearchResultChoose(Choose2):
def __init__(self, ealist, title):
self.list = ealist
global payload
global ropviewer
self.payload = payload
self.rv = ropviewer
self.copy_item_cmd_id = self.append_item_cmd_id = None
Choose2.__init__(self, \
title, \
[["address", 10 | Choose2.CHCOL_PLAIN], \
["segment", 10 | Choose2.CHCOL_PLAIN], \
["code", 30 | Choose2.CHCOL_PLAIN]], \
popup_names = ["Insert", "Delete", "Edit", "Copy item"])
def OnCommand(self, n, cmd_id):
if cmd_id == self.copy_item_cmd_id:
            self.rv.set_clipboard((0, "c", Item(self.list[n-1].ea, Item.TYPE_CODE)))
return 0
def OnClose (self):
pass
def OnGetLine (self, n):
return self.list[n-1].columns
def OnGetSize (self):
return len (self.list)
# dbl click / enter
def OnSelectLine(self, n):
Jump (self.list[n-1].ea)
def set_copy_item_handler(self, cmd_id):
self.copy_item_cmd_id = cmd_id
class SearchResult:
def __init__(self, ea):
self.ea = ea
self.columns = []
name = SegName(ea)
disasm = GetDisasmEx(ea, GENDSM_FORCE_CODE)
self.columns.append ("%X" % ea)
self.columns.append (name)
self.columns.append (disasm)
def assemble_code(instructions):
re_opcode = re.compile('^[0-9a-f]{2} *', re.I)
    # strip whitespace so "33 C0; ret" parses the same as "33 C0;ret"
    lines = [line.strip() for line in instructions.split(";")]
bufs = []
global payload
for line in lines:
if re_opcode.match(line):
# convert from hex string to a character list then join the list to form one string
buf = ''.join([chr(int(x, 16)) for x in line.split()])
else:
# assemble the instruction
if payload.proc.supports_assemble():
ret, buf = Assemble(FirstSeg(), line)
if not ret:
return (False, "Failed to assemble instruction:"+line)
else:
return (False, "Processor module can't assemble code. Please use regex option.")
# add the assembled buffer
bufs.append(buf)
buf = ''.join(bufs)
bin_str = ' '.join(["%02X" % ord(x) for x in buf])
return (True, bin_str)
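# e.g. assemble_code("pop eax; ret") should yield (True, "58 C3") on x86
# (assuming IDA's Assemble succeeds), while raw byte lines such as "33 C0"
# bypass the assembler and are converted directly.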
def get_disasm(ea, maxinstr=5):
result = ""
delim = "\n"
i = 0
while i<maxinstr:
ins = DecodeInstruction(ea)
if not ins:
break
disasm = GetDisasmEx(ea, GENDSM_FORCE_CODE)
if not disasm:
break
result += disasm + delim
ea += ins.size
i += 1
return result
def compile_regex(s):
try:
regex = re.compile(s, re.I | re.DOTALL)
except:
return (False, "Could not compile regex.")
return (True, regex)
def match_regex(startEA, endEA, regex):
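    # Slide byte-by-byte through [startEA, endEA) and match the pattern
    # against the disassembly of up to five instructions decoded at each
    # address, so expressions like "pop.*pop.*ret" can match whole gadgets.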
result = BADADDR
ea = startEA
while ea < endEA:
disasm = get_disasm(ea)
if disasm:
if regex.match(disasm):
result = ea
break
ea += 1
return result
def FindInstructionsInSegments(segments, bin_str, exclASLR, exclDEP, exclNonExec, checkDllChars=False):
ret = []
cancelled = False
isRegex = isinstance(bin_str, type(re.compile('foo')))
curseg = 0
maxseg = len(segments)
# thedude had too much coffee
thedude = [" " + "\n" \
" (._.)" + "\n" \
" /( )\\" + "\n" \
" | |" + "\n",
" . " + "\n" \
" (._.)" + "\n" \
" /( )\\" + "\n" \
" / \\" + "\n",
" o " + "\n" \
" (._.)" + "\n" \
" /( )\\" + "\n" \
" | |" + "\n",
" O " + "\n" \
" (._.)" + "\n" \
" /( )\\" + "\n" \
" / \\" + "\n",
" * " + "\n" \
" (._.)" + "\n" \
" /( )\\" + "\n" \
" | |" + "\n"]
show_wait_box("Say hello to thedude!")
nMatches = 0
for seg in segments:
curseg += 1
if (seg.perm & SEGPERM_EXEC) == 0 and exclNonExec:
continue
ea = sea = seg.startEA
segname = SegName(ea)
eea = seg.endEA
if checkDllChars:
dllchar = get_dll_characteristics(sea, eea-sea)
if dllchar:
dynbase, nx = get_security_flags(dllchar)
if dynbase and exclASLR:
continue
if nx and exclDEP:
continue
pos = 0
if isRegex:
while True:
ea = match_regex(ea, eea, bin_str)
if ea == BADADDR:
break
ret.append(ea)
ea += 1
nMatches += 1
if wasBreak():
cancelled = True
break
replace_wait_box("Segment: %d/%d (%s)\n0x%X-0x%X\nMatches: %d\n\n%s" % (curseg, maxseg, segname, ea, eea, nMatches,thedude[pos]))
pos += 1
pos %= len(thedude)
else:
while True:
ea = find_binary(ea, eea, bin_str, 16, SEARCH_DOWN)
if ea == BADADDR:
break
ret.append(ea)
ea += 1
nMatches += 1
if wasBreak():
cancelled = True
break
replace_wait_box("Segment: %d/%d (%s)\n0x%X-0x%X\nMatches: %d\n\n%s" % (curseg, maxseg, segname, ea, eea, nMatches,thedude[pos]))
pos += 1
pos %= len(thedude)
if cancelled:
break
hide_wait_box()
if not ret:
return (False, "Could not match [%s]" % bin_str if not isRegex else "regular expression")
return (True, ret)
def FindInstructionsInModules(modules, bin_str, exclASLR, exclDEP, exclNonExec):
segments = []
for mod in modules:
if mod.dynbase and exclASLR:
continue
if mod.nx and exclDEP:
continue
segments += get_segments(mod.base, mod.base + mod.size)
return FindInstructionsInSegments(segments, bin_str, exclASLR, exclDEP, exclNonExec)
def get_security_flags(dllchars):
IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x40
IMAGE_DLLCHARACTERISTICS_NX_COMPAT = 0x100
dynbase = dllchars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE != 0
nx = dllchars & IMAGE_DLLCHARACTERISTICS_NX_COMPAT != 0
return (dynbase, nx)
class ModuleInfo(object):
def __init__(self, mod):
self.dynbase = self.nx = None
self.name = mod.name
self.base = mod.base
self.size = mod.size
self.rebase_to = mod.rebase_to
self.dll_char = get_dll_characteristics(self.base, self.size)
if self.dll_char:
self.dynbase, self.nx = get_security_flags(self.dll_char)
self.columns = []
aslr = "N/A"
dep = "N/A"
if self.dll_char:
aslr = "X" if self.dynbase else ""
dep = "X" if self.dynbase else ""
self.columns.append(self.name)
self.columns.append("%X" % self.base)
self.columns.append("%X" % self.size)
self.columns.append(aslr)
self.columns.append(dep)
# -----------------------------------------------------------------------
# TODO: add NOSEH
def get_dll_characteristics(base, size):
# minimal, bugged pe parser
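    # Offsets: 0x3C is e_lfanew (start of the PE header); DllCharacteristics
    # sits at PE signature (4) + COFF header (20) + 0x46 into the optional
    # header, i.e. offs_pe + 0x5E, the same for PE32 and PE32+.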
result = None
if size >= 0x40:
mz = DbgWord(base)
if mz == 0x5A4D or mz == 0x4D5A:
offs_pe = DbgDword(base+0x3C)
if size > offs_pe + 2:
pe = DbgWord(base + offs_pe)
if pe == 0x4550:
if size > offs_pe + 0x5E + 2:
result = DbgWord(base + offs_pe + 0x5E)
return result
def get_modules():
results = []
for mod in Modules():
results.append(ModuleInfo(mod))
return results
def get_segments(startEA=None, endEA=None):
    # default arguments are evaluated once at import time, so MinEA()/MaxEA()
    # must be resolved per call instead
    if startEA is None:
        startEA = MinEA()
    if endEA is None:
        endEA = MaxEA()
    segments = []
    seg = getseg(startEA)
    while seg and seg.endEA <= endEA:
        segments.append(seg)
        seg = get_next_seg(seg.startEA)
    return segments
payload = None
ropviewer = None
class drgadgetplugin_t:
def __init__(self, pl, rv):
global payload
global ropviewer
payload = pl
ropviewer = rv
self.menucallbacks = [("Find gadgets", self.run, "Ctrl-F3")]
# mandatory
# must return list of tuples
    # (menu label, callback, hotkey)
# or None if no callbacks should be installed
def get_callback_list(self):
        return self.menucallbacks
def run(self):
success, s = AskInstructionsUsingForm()
if success:
findstr, excl_aslr, excl_dep, excl_nonexec, sync, regex = s
if sync:
RefreshDebuggerMemory()
if regex:
success, s = compile_regex(findstr)
else:
success, s = assemble_code(findstr)
if not success:
Warning(s)
return 0
if GetProcessState() == DSTATE_NOTASK:
success, ret = FindInstructionsInSegments(get_segments(), s, excl_aslr, excl_dep, excl_nonexec)
else:
success, ret = FindInstructionsInModules(get_modules(), s, excl_aslr, excl_dep, excl_nonexec)
if success:
results = []
for ea in ret:
results.append(SearchResult(ea))
title = "Search result for: [%s]" % findstr
close_chooser(title)
c = SearchResultChoose(results, title)
c.Show()
c.set_copy_item_handler(c.AddCommand("Copy item"))
else:
Warning(ret)
else:
Warning(s)
def term(self):
pass
| 29.889435 | 145 | 0.521085 | 1,298 | 12,165 | 4.765794 | 0.24037 | 0.006143 | 0.00679 | 0.007113 | 0.16408 | 0.110572 | 0.095377 | 0.058196 | 0.056903 | 0.05464 | 0 | 0.011203 | 0.361611 | 12,165 | 406 | 146 | 29.963054 | 0.785346 | 0.032717 | 0 | 0.274143 | 0 | 0.006231 | 0.095967 | 0.011571 | 0 | 0 | 0.003658 | 0.002463 | 0 | 1 | 0.080997 | false | 0.006231 | 0.012461 | 0.006231 | 0.174455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce39e43027d69b17e4e0dfca0b10c18b5c3e97ad | 2,549 | py | Python | cascade_models/decreasing_cascade_model.py | theReuben/sonar | 8cf49f01bf9e6c38dbcea9d92b814781fd905e94 | [
"MIT"
] | 1 | 2020-11-03T01:28:04.000Z | 2020-11-03T01:28:04.000Z | cascade_models/decreasing_cascade_model.py | theReuben/sonar | 8cf49f01bf9e6c38dbcea9d92b814781fd905e94 | [
"MIT"
] | null | null | null | cascade_models/decreasing_cascade_model.py | theReuben/sonar | 8cf49f01bf9e6c38dbcea9d92b814781fd905e94 | [
"MIT"
] | null | null | null | from __future__ import division
import networkx as nx
import numpy as np
def decreasing_cascade_model_2(G, nodes, contagious_nodes, attempted_nodes, active_nodes) :
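    # Decreasing cascade: each repeated activation attempt on a node succeeds
    # with diminishing probability, here the edge weight divided by the number
    # of neighbours that have already tried (tracked per node in S).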
    class Node :
        def __init__(self, number) :
            self.number = number
            # per-instance state: class-level sets would be shared by every node
            self.S = set()    # neighbours that have already attempted activation
            self.adj = set()
        def attempted(self, node) :
            self.S.add(node)
        def S_size(self) :
            return len(self.S)
def decreasing_cascade_model_activation(con, adj) :
if len(nodes[adj].S) == 0 :
score = G[con][adj]['weight']
else :
score = G[con][adj]['weight'] / len(nodes[adj].S)
nodes[adj].S.add(con)
return score
next_contagious = set()
this_attempt = set()
next_active = set()
if len(nodes) == 0 :
for n in G.nodes() :
nodes[n] = Node(n)
return decreasing_cascade_model_2(G, nodes, contagious_nodes, attempted_nodes, active_nodes)
if len(attempted_nodes) == len(nodes) : # If all nodes have been attempted, break
if len(active_nodes) == len(G) :
print ("All nodes have been activated.")
print ("{}/{} nodes have been activated.".format(len(active_nodes), len (G)))
else :
print("All nodes have been attempted.")
print ("{}/{} nodes have been activated.".format(len(active_nodes), len (G)))
return active_nodes
elif len(contagious_nodes) == 0 : # If no nodes have been activated in the previous turn, break
print ("There are no longer any contagious nodes.")
print ("{}/{} nodes have been activated.".format(len(active_nodes), len (G)))
return active_nodes
else :
for con in contagious_nodes :
adjacent = set(G[con])
for adj in adjacent.difference(active_nodes.intersection(adjacent)) :
this_attempt.add(adj) # Node has now been attempted
activate = np.random.uniform()
if (decreasing_cascade_model_activation(con, adj) > activate) :
next_active.add(adj) # Node has been activated
next_contagious.add(adj) # Node will be contagious at time t+1
return decreasing_cascade_model_2(G, nodes, next_contagious, attempted_nodes.union(this_attempt), active_nodes.union(next_active))
def decreasing_cascade_model(G, seed_set) :
nodes = {}
return decreasing_cascade_model_2(G, nodes, seed_set, seed_set, seed_set)
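# A minimal usage sketch (hypothetical graph; edge weights act as the base
# activation probabilities):
#   G = nx.erdos_renyi_graph(50, 0.1)
#   for u, v in G.edges():
#       G[u][v]['weight'] = 0.3
#   active = decreasing_cascade_model(G, {0, 1})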
| 37.485294 | 138 | 0.598274 | 321 | 2,549 | 4.563863 | 0.239875 | 0.075085 | 0.105119 | 0.075085 | 0.414334 | 0.325597 | 0.27372 | 0.221843 | 0.221843 | 0.221843 | 0 | 0.004469 | 0.297764 | 2,549 | 67 | 139 | 38.044776 | 0.813966 | 0.073362 | 0 | 0.185185 | 0 | 0 | 0.08871 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.055556 | 0.018519 | 0.37037 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ce3b1c0e23b73a0bf90de1ce87b2c191e5a4a32d | 936 | py | Python | basic.py | boristown/WX | f633f9a346e6f23e8c463d736489bd7b1452dd16 | [
"MIT"
] | 2 | 2019-08-14T02:13:12.000Z | 2019-08-16T12:52:03.000Z | basic.py | boristown/WX | f633f9a346e6f23e8c463d736489bd7b1452dd16 | [
"MIT"
] | null | null | null | basic.py | boristown/WX | f633f9a346e6f23e8c463d736489bd7b1452dd16 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# filename: basic.py
import urllib
import time
import json
import mypsw
class Basic:
def __init__(self):
self.__accessToken = ''
self.__leftTime = 0
def __real_get_access_token(self):
appId = mypsw.wechatguest.appId
appSecret = mypsw.wechatguest.appSecret
postUrl = ("https://api.weixin.qq.com/cgi-bin/token?grant_type=client_credential&appid=%s&secret=%s" % (appId, appSecret))
urlResp = urllib.request.urlopen(postUrl)
urlResp = json.loads(urlResp.read())
print(urlResp)
self.__accessToken = urlResp['access_token']
self.__leftTime = urlResp['expires_in']
def get_access_token(self):
if self.__leftTime < 10:
self.__real_get_access_token()
return self.__accessToken
def run(self):
while(True):
if self.__leftTime > 10:
time.sleep(2)
self.__leftTime -= 2
else:
self.__real_get_access_token()
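# Intended to run on a background worker, e.g. (sketch only; the class does
# no locking of its own):
#   basic = Basic()
#   threading.Thread(target=basic.run, daemon=True).start()
#   token = basic.get_access_token()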
| 26 | 126 | 0.673077 | 117 | 936 | 5.034188 | 0.478632 | 0.101868 | 0.095076 | 0.091681 | 0.074703 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010782 | 0.207265 | 936 | 35 | 127 | 26.742857 | 0.783019 | 0.042735 | 0 | 0.071429 | 0 | 0.035714 | 0.12206 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.357143 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |